Video Game Design/Chapters/Implementation



Design implementation

Before you consider implementing your design, ponder how costly it will be, in time and money. You can monetize your game design (concept) by selling it to a game creation studio, or you can create an open source project for an open implementation.

As you consider what to do, also take into consideration the marketability of the game you created and your objectives. Is it going to be free, or are you selling it? If you are selling it, will people want to buy it? How are people going to hear about it? How much money and how many resources are you willing to spend on marketing this game? Do you have them, and are they worth it?

The design phase will not survive the implementation intact; compromises and adaptations will become part of the process, and as the game is implemented the design will need to adapt and evolve. As with any plan, the design will not survive its first encounter with reality in the field unchanged.

The implementation should also be seen as a sandbox where things will be tried out and pruned to meet the required objectives. You should only get worried in the later stages of implementation, since at some point one must accept that too many changes will probably ruin the project.

If you stall in some section of your implementation, go back to your design references and see how others before you handled the issues. You can, and should, be creative, but you should expect that someone has already found at least a workable solution to every issue; use your creative resources to build upon that instead of recreating something equal to what was already done before.


To do:
Cover the possibility of using Emulation to implement the games in newer platforms. Or developing games for older platforms.


Concept vs. ability

Before you start developing, keep in mind the difference between what you wish to do and what you truly will do. Work out the resources that you have and compare them with the resources the project would require, and adjust accordingly. If you have plenty of resources but your idea is simple, perhaps you can expand the idea. And if you have a sophisticated idea but modest resources, perhaps you need to expand your resources.

After you have settled all of these issues, outline the project: first by how the game will work and how it is organized, then chronologically by how you are going to accomplish those things. It may be useful to set a deadline for yourself.


Consider your resources, what abilities do you personally have? Can you program, draw, and render a polygon? How much money do you have and how much time are you going to spend on it? Do you have the technology to build a game or do you need to get it? Do you know what you need?

  • Programming Languages
  • Graphic Design
  • Music composition


A video game is a really big project for someone to take on, especially just one person. It may be a good idea to do the project as a team, bringing together all the resources you need to put together a good video game. It is almost impossible for only one person to create a game that people will actually enjoy. Being in game development requires you to have good social skills, because 99% of the time you will be working with a team of designers. It is important that you build a friendly relationship with your fellow game designers to get the best results out of a project.


Development phase

The testing and development phase is where the game is actually created. As you program, make graphics, compose music and combine these resources, you will have a lot of testing and debugging to do. Consider the following sections.




To do:
Mention the various models of development in relation to test process. Cover software testing best practices.

Conceptual art

A good portfolio of conceptual art in the early steps of game implementation is extremely important, not only to permit a richer visualization of the concept but also to coordinate development across a team of developers.

Conceptual art also increases the value of the game design on its own, even before implementation.

Content creation

Game content can be static or dynamic in relation to movement, and set or procedural depending on how it is created (or even a mix of both). Generation can also happen in real time or come from stored data; the choice mostly depends on the level of interaction and the hardware capability.


To do:
Link to film tools, practices and techniques.


Presentation in a game, as in most things in real life, is extremely important. The wow factor, the creative ways simple, mundane things can be changed so that the player becomes engrossed in the production, is one of the more important factors in the success of a game.

A commercial game can easily become profitable if it succeeds simply in being attractive, especially when using a creative selling scheme like pre-ordering. Of course, this will only work in the short term and will decrease the reputation of those involved, but it serves as a good example of how presentation is a deciding factor in the success of games.

Presentation encompasses how all visuals of the game are utilized, from selecting between a 2D or 3D implementation to the quality of all game artwork, from in-game artifacts to real-life marketing ads, exposure and game box designs.

There are many repositories of freely licensed content that one can use not only for prototyping games, but even to build a fully fledged game implementation.

  • ( ) - a repository of all types of media with varying copyleft status, intended for use with free software game projects.
  • ccMixter - a community music site that promotes remix culture and makes samples, remixes, and a cappella tracks licensed under Creative Commons available for download and re-use in creative works.
  • The Freesound Project - a repository of Creative Commons licensed audio samples. Sounds uploaded to the website by its users cover a wide range of subjects, from field recordings to synthesized sound effects.


Game composition has much in common with cinematography and animation. Like movies, most games tend to tell a story, even if in an interactive way. Anyone doing a 3D game today should learn, for instance, how to properly do camera cuts, wide angles and montage, and should understand the relation between the focal point and the zero plane.


Every visual aspect of a game will require some artwork in a form or another.


Animations are one example of dynamic content; they may be necessary in a game for plot advancement or to provide background information. The level of complexity of the production may also require a larger staff: writers, directors and animators are often utilized. Since cinematics rely on cinematic techniques, producing material that would make effective cinema viewing is important.

In any type of animated scene, understanding how subjects move and how each body acts and interacts with the surroundings is extremely important.


To do:

Motion capture

Motion capture of actors' performances is becoming a requirement for realistic character animation in recent games.

The capture of actors' interactions is best done live, not as an integration of individual performances; this makes the interactions more realistic, as it permits the actors to innovate in ways that a script cannot plan for. The natural interactions that are often only subconsciously perceived will help make the scene more realistic to the player.


To do:
Mine wikipedia:Motion capture

Motion capture stage
Sound capabilities

The ability to capture dialog in real time is extremely important to impart realism to scenes, since speech changes in accordance with body position and the location and movement of the actors during the performance. It will also help the actors act out the necessary performances to their fullest.

Multiple takes

It is important for the director of a motion capture performance to enable the actors to use their own initiative, permitting multiple takes and the liberty to go outside of the script. This often results in better and richer solutions. It may depend on the time available and the resources required to process the material, but in today's digital world this is normally possible without incurring a prohibitive cost increase.

3D Graphics

There are several ways to generate 3D computer graphics that can represent shapes in a 3D environment. Understanding how the object will be utilized, its technical characteristics and the required level of detail is extremely important. The artist must be aware of any limitations that may exist; for instance, there may be a need to reduce the level of detail to preserve resources, or simply because the detail is not required. Each requirement calls for a distinct artistic approach.

Before starting to model an object, you first need to observe what you are trying to create. You must carefully note the details and how they could be reproduced in your software. The recreation of each of these details will be a major task in any software; taking notes (on paper) of all the details you need now will speed up the work when you start using your software.

The surfaces and corners of things with 90-degree angles are easy to remember; however, up close there could be more detail. For more complicated things, write or draw specifics about the subject. Trying to model a bicycle wheel without looking at one would be nearly impossible if you didn't know the spokes are tangent to the axle connection and go in the opposite direction on the other side.

An organic subject's curves often have varying degrees of sharpness. Where there is a sharper curve, there will probably need to be more detail added to that area in the modeling phase. The position and direction of curves will also be of utmost importance during the modeling phase.

Proportions of the subject are important to confirm that the model looks accurate and real. They can be used during the modeling process and/or after for final corrections.

Every surface/material has several distinct intrinsic characteristics (not dependent of the environment), like:

  • color
  • texture
  • reflectiveness
  • transparency

If replicating or creating a complex scene in 3D, the proper and consistent use of lighting is important; take note of:

  • source(s)
  • placement
  • direction
  • dispersal
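The material characteristics and lighting notes above come together in shading. The following is a minimal sketch of Lambertian diffuse shading, where a surface's apparent color depends on the angle between its normal and the direction to the light; the function names and scalar conventions here are illustrative, not from any particular engine.

```python
import math

def normalize(v):
    # Scale a 3-component vector to unit length.
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def diffuse_shade(surface_color, normal, light_dir, light_intensity=1.0):
    """Simple Lambertian shading: brightness depends on the angle
    between the surface normal and the direction to the light."""
    n = normalize(normal)
    l = normalize(light_dir)
    factor = max(0.0, dot(n, l)) * light_intensity
    return tuple(c * factor for c in surface_color)

# A surface facing straight up, lit from directly above, keeps its full color;
# lit edge-on, it goes black.
print(diffuse_shade((0.8, 0.2, 0.2), (0, 1, 0), (0, 1, 0)))
```

Placement, direction and dispersal of the light sources all feed into `light_dir` and `light_intensity` in a real renderer; the intrinsic characteristics (color, reflectiveness, transparency) belong to the surface itself.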

After having a good idea of how and what to represent of the object, the next step is to generate a model of the object to be displayed. There are many different ways of going about creating your models, each with its own pros and cons. To become more efficient in this step, the artist needs to know the different methods available, with their advantages and disadvantages.


To do:

Consider your subject and which method would be the most appropriate for the situation.

Theory of Polygon and Mesh Modeling

In real life, objects are made of unimaginable numbers of atoms. Computers have difficulty in dealing with the complexity of real life, so we need to use something simpler that can be used to model it.

The simplest thing we can define on a computer is a point in space. (Similarly, if I had a piece of paper in front of me, the easiest thing I could draw on it would be a point, I’d just tap my pencil to the paper.) A point in space is called a Vertex.


Now consider this. Each point (or vertex) on the paper has a number. We will call the first point I drew Vertex 1. If I went ahead and drew more vertexes, the second vertex I drew would be called Vertex 2, the third would be called Vertex 3, and so forth.

A bunch of points really don't do us much good on their own, so we will connect them, like a connect-the-dots game. If we connect three of them and fill in the center, we'll get a triangle, the simplest surface we can create with vertexes.


If we create additional triangles, (extended from the first), we can create more complicated surfaces. Any surface can be created if we use enough triangles!

If two triangles are beside one another, and seem to form one side of an object, we’ll usually call them a polygon, and deal with them as a polygon as opposed to calling them two triangles. It will still be made of two triangles, but we’ll just call them a polygon to make it easier.

Triangles have several properties which make them easy for the computer to deal with:

  • They are made of straight lines. Triangles are made of straight sides and have no curves. Computers deal with straight lines well. They do not deal with curved lines easily. Think of it this way: if I gave you a piece of paper with two points on it and said, “draw a straight line between those two points”, you'd know exactly what I meant. Everyone I gave that paper to would draw the same line if they followed directions. Now suppose I gave you that same piece of paper and said, draw a curved, rounded line between the two points. Those are vague directions. You would be unsure of what exactly I wanted you to draw. Each person I gave that assignment to would draw slightly different curved lines. In order to make sure everyone drew identical curves between the two points, I would need to give much more complicated directions.
  • They are flat.
  • They cannot self intersect. If you had two polygons, they could intersect with (go through) each other. Computers have a hard time handling intersections. So triangles are easier to deal with because they can not go through themselves.

Polygons are the next simplest surface.

A polygon is like a triangle but has more sides. A square is a polygon. Any polygon can be easily broken down into triangles, so it is still quite simple. Polygons are usually flat, or close to being flat. If the two triangles form an extreme angle (are not flat) then we usually won't call them a polygon.

The concept of Normals:

Each triangle or polygon in animation software has a ”normal”. If a triangle were a tabletop, its normal would point straight up, away from the surface. Normals are always perpendicular to the surface. In order to reduce the amount of work the computer needs to do, 3D software can perform something called “backface culling”. Cull means “to not show”, “trim away”, “ignore”; backface means the back of faces, or the back of polygons. Backface culling means not showing the back of polygons, only showing the front, or more accurately, the side the normal points from.

Example: Normals on a regular sphere point away from the center of the sphere. If you were standing outside of a giant sphere and looked at it, you would be able to see it. If you were standing inside of it, however, you would not be able to see it. Backface culling would eliminate the inside of the sphere because its normals do not face towards you.

The normal is defined by the order you count the vertexes in when defining the polygon: whether you go around one way or the other when drawing the original triangle or polygon. You should rarely need to worry about this; just be aware that you will often need to “flip the normal”, a command found somewhere in every respectable modeling package.

Element (Continuous Mesh):

An element is a distinct surface. If two polygons are created side by side, each created out of different vertexes than the last, they are considered to be individual elements (not a continuous mesh). Suppose we have two triangles (Fig 1). They are two elements. Now suppose we move them together so that they are touching. They are still considered to be two elements, even though they look like one. What separates them is that they are defined by different vertexes; they do not share any vertexes. In order to make them one element we would need to “merge” (or weld, or collapse, as it is sometimes referred to) the vertexes: at each place where the triangles seem to touch one another, we would make sure there was only one vertex. Then the two triangles would share the vertexes, and they would be one element. Usually, modeling software keeps your objects as one element most of the time, automatically sharing vertexes when you extend the surface of your model.


Polygons that are not connected to an element are not “continuous” with it. This is difficult to understand on paper; experiment with it in the software.

Elements are useful in selecting groups of polygons at a time in objects where several distinct surfaces exist. In 3DSMAX you can choose element mode in an editable mesh, and select the element. In Maya you can select Elements by extending the selection as far as it will go.

  • Vertex - A point, in a place. A vertex is infinitely small; it has no width, length or height. It just has a position. A vertex by itself is useless; it is useful when combined with other things. If we create several vertexes we can start connecting them to make visible surfaces.
  • Edge - One side of a polygon or triangle. If you move an edge, the two vertexes that define that side of the polygon or triangle will really be moved.
  • Triangle - It is defined by 3 vertexes. I could say that vertex 1, vertex 2 and vertex 3 make a triangle. That would give me a surface. The area inside the triangle's borders is also part of the triangle. The triangle is a surface; it can be rendered, and would appear solid.
  • Polygon - Polygons are like triangles but have more sides than three. Polygons are really made up of several triangles. Usually the software lets you deal with the polygons without having to worry about the triangles. It worries about the triangles itself. You don’t have to define each triangle separately. You can just deal with polygons and usually software will figure out how to work the triangles itself. For some advanced modeling purposes, you might one day need to worry about the individual triangle, but it is uncommon.
    A three-sided polygon is a “tri”; a four-sided polygon is a “quad”. Well-constructed models should generally consist mostly of quads, with a few tris present. If the model is intended to be used for a subdivision surface (a way of rounding models), it should not have polygons with more than 4 sides.
  • Element (or a “continuous mesh”) - An element is a collection of polygons which are welded to each other. They share vertexes with each other.
  • Normal - Serves to indicate which side of the polygon is visible. When backface culling is turned on, you can only see a triangle if its normal faces you. Essentially, only one side of the triangle is visible.

These components of a polygonal (or “mesh”) model are also referred to as “sub-objects” in 3DSMAX and “components” in Maya.
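The concepts above (shared vertexes, indexed triangles, winding order, normals and backface culling) can be sketched in a few lines of code. This is a minimal illustration under our own naming, not the data layout of any particular modeling package.

```python
# A minimal indexed-triangle mesh: vertexes are shared positions,
# triangles index into them, and the winding order (the order the
# indices go around) defines the normal.

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

vertexes = [
    (0.0, 0.0, 0.0),  # vertex 0
    (1.0, 0.0, 0.0),  # vertex 1
    (1.0, 1.0, 0.0),  # vertex 2
    (0.0, 1.0, 0.0),  # vertex 3
]

# A quad stored as two triangles that share vertexes 0 and 2, so the
# four points form one element (a continuous mesh).
triangles = [(0, 1, 2), (0, 2, 3)]

def normal(tri):
    # Counter-clockwise winding gives a normal pointing toward +Z here;
    # reversing the index order would "flip the normal".
    a, b, c = (vertexes[i] for i in tri)
    return cross(sub(b, a), sub(c, a))

def is_front_facing(tri, view_dir):
    # Backface culling test: keep the triangle only if its normal
    # points back toward the viewer.
    return dot(normal(tri), view_dir) < 0

# A viewer looking down the -Z axis sees both triangles of the quad.
view = (0.0, 0.0, -1.0)
print([is_front_facing(t, view) for t in triangles])  # [True, True]
```

Flipping `view` to point the other way makes both triangles fail the test, which is exactly the sphere-interior situation described earlier.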

Subsurface modeling

Using subsurface modeling is ideal for subjects that have symmetrical levels of detail. In contrast, a human head needs much more detail on the face and ears, but very little elsewhere. This uneven detail starts to make the model's wire frame look messy, and uses triangles to make up for the seams between areas of differing density. While usable, continued deeper levels of subsurface will complicate the model far too much, and animating the face could then become a large pain because of all the unpredicted triangles.

Box modeling

This process is considered one of the most common methods of modeling new objects. Here we take a box as the base object and, using modeling tools and techniques, change its shape until the model is done.

The power of the little details

Consider the power that extra attention to detail has to amaze the player; it will increase the level of satisfaction and add to the perceived effort put into the production.

The human mind is an amazing thing; if care is taken to capture its imagination, a good and detailed environment will often obscure minor errors in the implementation.


Background music

Gameplay music


Speech in games can be important to advance the plot, introduce new elements or even serve as part of the game play. Most games use digitized speech, and if the game is simulating reality, the quality of the voice and its coordination with the action is extremely important. Some games have failed to please gamers just because the quality of the voice acting was very poor or badly presented, or because the same voice actor voiced too many characters.

Emotion effects on the human voice:

  • joy - voice quality: breathy and blaring; articulation: normal; speech rate: faster or slower; intensity: higher; pitch range: much wider; pitch average: much higher; pitch changes: smooth, with upward inflections
  • sorrow - voice quality: resonant; articulation: slurring; speech rate: slightly slower; intensity: lower; pitch range: slightly narrower; pitch average: slightly lower; pitch changes: downward inflections
  • anger - voice quality: breathy, with chest tone; articulation: tense; speech rate: slightly faster; intensity: higher; pitch range: much wider; pitch average: very much higher; pitch changes: abrupt, on stressed syllables
  • fear - voice quality: irregular and blaring; articulation: precise; speech rate: much faster; intensity: normal; pitch range: much wider; pitch average: very much higher; pitch changes: normal
  • disgust - voice quality: grumbled, with chest tone; articulation: normal; speech rate: very much slower; intensity: lower; pitch range: slightly wider; pitch average: very much lower; pitch changes: wide, with downward terminal inflections
  • surprise - voice quality: breathy and blaring; articulation: tense and precise; speech rate: much faster; intensity: higher; pitch range: much wider; pitch average: much higher; pitch changes: rising contour
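The table above could, in principle, drive a speech synthesizer's delivery. The sketch below maps emotions to prosody multipliers; the parameter names and scale factors are illustrative assumptions, not the API of any particular TTS engine.

```python
# Hypothetical emotion-to-prosody lookup, loosely following the table
# above: fear and surprise speak much faster and higher, disgust much
# slower and lower, and so on. All numbers here are invented examples.

EMOTION_PROSODY = {
    #             speech rate   intensity     pitch average  (x neutral)
    "joy":      {"rate": 1.2, "volume": 1.1, "pitch": 1.4},
    "sorrow":   {"rate": 0.9, "volume": 0.9, "pitch": 0.95},
    "anger":    {"rate": 1.1, "volume": 1.1, "pitch": 1.6},
    "fear":     {"rate": 1.5, "volume": 1.0, "pitch": 1.6},
    "disgust":  {"rate": 0.6, "volume": 0.9, "pitch": 0.6},
    "surprise": {"rate": 1.5, "volume": 1.1, "pitch": 1.4},
}

NEUTRAL = {"rate": 1.0, "volume": 1.0, "pitch": 1.0}

def prosody_for(emotion, base_rate=1.0, base_pitch=1.0):
    """Return delivery multipliers for a line of dialog; unknown
    emotions fall back to a neutral delivery."""
    p = EMOTION_PROSODY.get(emotion, NEUTRAL)
    return {"rate": base_rate * p["rate"],
            "volume": p["volume"],
            "pitch": base_pitch * p["pitch"]}

print(prosody_for("fear"))
```

A dialog system could attach an emotion tag to each line of script and feed the resulting multipliers to whatever synthesis or playback layer the engine uses.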

Since games can be directed at an international public, and as a way to reduce shortcomings like bad sound reproduction, the use of subtitles has also become important in games.

Synthetic Speech


DECTalk usage examples at YouTube (Computer Sings "Tender Lies" or Voice Synth is Awesome). DECTalk started as a hardware implementation of synthetic speech and was later turned into a software-only solution, still unparalleled in its versatility.

Synthetic speech has yet to gain momentum in games mostly due to the low quality and slow technological progress made in the field. As the technology advances it is possible that dialogs could become more dynamic and easily adaptable to international audiences. It would also be interesting to add this capacity to AI characters.

Most synthetic speech work will fall not to the game artwork department but to the programmers; however, the scripts, like any textual or vocal section of a game, will always need a creative writer.

Sound effects


To do:

Procedural Generation


To do:
Transwiki wikipedia:Procedural generation, adapt and merge with what is already covered in Video Game Design/Programming/Framework/Procedural Content Generation

As most procedurally generated content depends on an algorithmic implementation, these routines are often part of the game framework, making them easy to reuse across multiple games.
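The framework-reuse point above is easiest to see with a concrete routine. The sketch below is a deliberately tiny procedural map generator (names of our own choosing): given the same seed it always produces the same level, so a game only needs to store or transmit the seed, not the content.

```python
import random

# A minimal procedural-content routine of the kind a game framework
# might share across projects.

def generate_map(width, height, seed, wall_chance=0.3):
    """Generate a grid of walls ('#') and floor ('.') tiles."""
    rng = random.Random(seed)  # local RNG: reproducible, no global state
    return [
        ["#" if rng.random() < wall_chance else "." for _ in range(width)]
        for _ in range(height)
    ]

level = generate_map(16, 4, seed=42)
for row in level:
    print("".join(row))

# Regenerating with the same seed reproduces the identical level.
assert generate_map(16, 4, seed=42) == level
```

Real generators layer rules on top of the randomness (room placement, connectivity checks, biome noise), but the seed-in, content-out contract stays the same.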

Open ended content


To do:
Mine wikipedia:Mod (video gaming), explain the increased importance it has taken in the success of games, cover creative limitations and having the game be a platform. Touch the subject of script languages like LUA and the social aspects and challenges of creating a community driven product.


Programming is the way you put your concept into practice, how you build your game. There is a wide variety of programming languages; these will be covered in more detail later on.

Game programmers, or game developers, implement the game design. Most parts of video game programming are boring and non-creative unless the game design requires some innovation or updates. But take care: the worst situation for any developer, especially one implementing a video game, is having a poor video game design to start with. An inability to make decisions, or being non-committal about choices, will result in the developers having to implement bad concepts until the game designer accepts the results (or is forced to give the go-ahead because of time and cost constraints), resulting in a substandard product.

Not all development is attractive, and in games most of it is not; for instance, the front end of a game is mostly common among game designs, and doing one more is just jumping through hoops. Take, by contrast, the task of supporting hardware changes in video cards, or even low-level optimization tasks; those would be the cream of the crop for a creative programmer.

These notions can also serve to establish a good game development team: not all jobs are the same, or even as complex, and depending on how close to the basics you wish to go in developing your idea, you may need very few expert programmers.

Learning to Program

Because it is arguably the most difficult part of game design, we are going to spend a fair bit of time on it. If you have some idea of what language you want to learn and have read up on the various languages, you should actually start learning. If you can take classes, that is great; if not, there are many alternatives: buy some programming books, look up tutorials on the Internet or on Wikibooks, and look through the source code of open source programs. Don't think it will be easy, it is not; but try to have fun with it. If you do not have fun, that sort of defeats the whole purpose, does it not?

Some Resources: Google Code, Sourceforge

Choosing a Programming Language

Before you start programming, it is important to choose a programming language that suits your needs. Remember that no language is perfect for everyone or every situation. There is such an incredible selection of languages that it can be nearly impossible to choose one. Before you make up your mind to learn Java or Assembly, make sure you know what you are planning to make. How complex is the game? Certainly it would be counter-productive to spend a lot of time and energy learning a language that does not have the power to make what you are planning, just as it would be counter-productive to learn a language that is overly complex for your needs.

When you start reading about the various languages, you will inevitably read about "low-level" and "high-level" languages. At this stage this does not concern you so much, but later on it will be very important. Essentially, low-level languages (e.g. C++, C, Asm) are more powerful and faster, allowing you to control the inner workings of the computer; however, they are generally harder to learn. Higher-level languages (e.g. BASIC) are easier to learn and use, but lack the power and flexibility that lower-level languages have.


Sound plays an integral part in any game, as it affects the mood of the player at a conscious and subconscious level! Could you imagine playing UT or Quake without sound? It would be unbearable! Sound in games (depending on the game, of course) generally consists of background music, event sound effects (honking a car horn, gunfire, etc.) and environmental sound effects (footsteps, wind blowing, birds, beach waves, bugs, echoes, etc.).

Background music depending on the game can play all the time, but also like in film stop completely and change to fit certain moods, such as if you enter a battle the music might change to a track with a faster beat or become more erratic.

Sound effects, on the other hand, play when they are triggered by some event. If a player were to open a door, there could be a creaking noise coming from the door. Sound effects can add a lot of realism to a game, and choosing the right sounds can really make a game come alive. Please note, however, that too many of them, or ones that have unrealistic properties, can hurt the game experience or annoy the player. For example, consider a game with a jetpack in it. This jetpack has unlimited fuel, so players can float in the air for an indefinite amount of time. While the jetpack is running, it makes a noise like rushing air. This noise becomes very annoying over time, because it is heard a lot during the game. Also, if a sound has strange properties, it can detract from realism, e.g. a machine gun that goes quack, or a machine gun whose sound plays faster than its actual firing rate.

Environmental sound effects are triggered simply when the player enters the environment and play in a loop until the player leaves. Please note that these sound files are the most numerous, and multiple sounds are sometimes looped in a random order to create a sense of variety in an environment (i.e. two birds singing that sound completely different, or two walking characters whose shoes sound different from each other).


Games usually give many options to players regarding input. Common means of input include the mouse, keyboard, joysticks, and gamepads. Ideally, a game engine should abstract the input so that the user can select from any of the above. Furthermore, one important thing to remember is that all gamers have different preferences in regards to key or button placement, and often want a certain specific configuration. This means the input should also be abstracted to allow buttons or keys to perform different actions in the game.


It's important to first understand the different ways keyboard events can be interpreted by the program. The most common ways to receive keyboard events are through callbacks and polling.

  • Callbacks - Often used by games that utilize the GLUT library. Function pointers are passed to GLUT, which "registers" each function as the keyboard event callback. This means that any time a key is pressed or released, GLUT calls the respective function, passing the key data and allowing the program to respond accordingly.
  • Polling - Used more often by games using SDL, polling is helpful if callback functions break abstraction in an engine. Polling is a process by which the game checks a collection of keyboard events in its spare time. So, for each pass through the game loop, your game can poll the collection, resulting in quick response to key events, and no loss of data.
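The input abstraction described above (remappable bindings plus a once-per-loop poll of queued events) can be sketched independently of any particular library. The class and method names below are our own invention, standing in for whatever the platform layer (GLUT callbacks, SDL polling, or anything else) would feed into it.

```python
from collections import deque

class InputMapper:
    """Translates raw key events into game actions via a
    user-rebindable key map, polled once per game-loop pass."""

    def __init__(self, bindings):
        self.bindings = dict(bindings)   # key -> action, rebindable
        self.events = deque()            # pending raw key events

    def push_key(self, key):
        # Called by the platform layer: from a callback, or while
        # draining the library's own event queue.
        self.events.append(key)

    def rebind(self, key, action):
        self.bindings[key] = action

    def poll_actions(self):
        # Drain the queue once per game-loop pass, translating keys
        # to actions and ignoring unbound keys.
        actions = []
        while self.events:
            key = self.events.popleft()
            if key in self.bindings:
                actions.append(self.bindings[key])
        return actions

mapper = InputMapper({"w": "move_forward", "space": "jump"})
mapper.push_key("w")
mapper.push_key("space")
mapper.push_key("q")                  # unbound: ignored
print(mapper.poll_actions())          # ['move_forward', 'jump']

mapper.rebind("q", "quit")            # the player remaps a key
mapper.push_key("q")
print(mapper.poll_actions())          # ['quit']
```

Because the game only ever sees actions, not keys, swapping the keyboard for a gamepad (or honoring a player's custom layout) touches only the binding table, not the game logic.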


Every operating system has its own TCP/IP API, so if you are planning on developing for a specific platform, then you must look into that OS's SDK (such as WinSock for the Windows API). If you are writing games for portability across multiple platforms, one good possibility is SDL_net.

After choosing the networking API, classes should be constructed for a game engine that encapsulate sockets. One must also make the decision between networking protocols, TCP and UDP (although through abstraction, either could be used).

  • TCP - This protocol sets up a connection between two computers. Data sent between computers is resent if any errors are present. The disadvantage to this protocol is that it is not as fast overall as UDP.
  • UDP - This protocol does not set up a connection. Data packets are sent to an address, and the sender does not know if it arrived properly and error free. A protocol could be written using UDP to provide error checking and resending.

The decision is up to the programmer, and what is best for the game. If the subject is an online game of chess, where speed is not a major concern, TCP could be used to avoid some headaches. But, for a large team of people in a FPS, UDP would be a better choice, due to speed.
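The UDP style described above can be demonstrated in a few lines with Python's standard socket module: datagrams are fired at an address with no connection setup and no delivery guarantee (though on the loopback interface they will not, in practice, be dropped). This is a sketch of the mechanism, not a game-ready networking layer.

```python
import socket

# Receiver: bind an unconnected datagram socket to a free local port.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))          # port 0: let the OS pick one
receiver.settimeout(2.0)                 # avoid blocking forever
addr = receiver.getsockname()

# Sender: no connection setup, just fire a datagram at the address.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"player_position:10,20", addr)

data, _ = receiver.recvfrom(1024)
print(data.decode())                     # player_position:10,20

sender.close()
receiver.close()
```

A TCP version would instead `connect()` and `accept()` before any data moves, gaining ordering and retransmission at the cost of latency; a game protocol built on UDP has to add whatever error checking and resending it needs on top.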


Here's a list of free scripting engines used in games development:

  • KonsolScript - A free software game scripting language
  • Lua - The Lua scripting language

Game Tools

Here is a list of free software tools for use in game development.

  • Blender3D - A free and very advanced modeling program, a bit tricky to get used to but just as capable as any other commercial modeling program.
  • OGRE - A free software graphics engine. Top notch.
  • Terragen - A free for non-commercial use terrain generator
  • TrueSpace - Professional grade 3D modeling, animation and rendering package, previously costing up to $700 (USD), now available as a free download after a buy-out of Caligari by Microsoft.

Assembling and Coordinating the Team


Software Engineering Games


To do:
Interconnect Software Engineering

Selecting the hardware

To develop a game you may need not only qualified personnel for special tasks, like graphics designers, but you will also have to consider the software for editors (for creating things such as characters, items, scenery) and the physical hardware to support those tasks. For example, graphics tablets may be required for graphic design, and you will probably need scanners to input hand-drawn pictures from artists, as well as cameras to take pictures of drawn artwork. All these extra costs have to be well considered and depend on the scale of your project.

But hardware considerations are not restricted to the tools you will be using; more important still are the requirements to run your finished product. These will not only determine the target audience but also shape the development of the project. Implementing your game design may require access to large amounts of memory while only a limited amount is available to work with at any one time, and there can be other limitations such as weaker CPUs, smaller processor caches, or non-default memory alignment requirements.

The easy way out...

No frills, no whistles and no extra costs. If you select a virtual framework, say by using Flash as your development framework, you will skip the need to think about and support special hardware or setups, and you will also have a RAD (rapid application development) tool to put your concept into action.

Distribution media

Distribution media is important to determine how many resources can be packed and how they should be packed.

Selecting the Programming Language


A prototype is meant to be an exploration, a journey to see how ideas will play out once they are embodied in a full working program. It is the testing of a new concept that has not been seen before, and because of this it must be as flexible as possible in its implementation so it is easy to improve, since the creation of the prototype will inspire completely new ideas. If the exploration of new concepts is not done during the prototyping phase, then what was the point of the prototype at all?

Prototyping is meant to be done quickly, and then the prototype needs to change even faster. To make that possible it has to be maintainable and use a flexible, well-written code base. But keep in mind that a prototype is only useful if it is good enough to prove your vision correct and then be discarded for a real implementation; if you spend so much time on the prototype that you begin to question its replacement, you are on the wrong path or have spent too much effort on what should remain the first tryout of the master plan.


3D Audio


Multiplayer games

Close or open-source

General Architecture Issues

A game's framework is basically all the programming that goes into the creation of the game but does not directly implement any of the gameplay: the code that manages the display, file access, sound and other peripherals.

There is no one-size-fits-all framework for video games. Each game requires a selection of components and strategies for linking them together. Using a freely available framework, or even licensing a popular one, has the benefit that you will not need to "reinvent the wheel" and can get support and collaboration in solving issues and extending capabilities. In fact, the only advantage in creating your own framework is to have control over it, whether because you need to implement something that others oppose or simply to get monetary compensation for that specific work and license it to others.

Choosing an API (Application Programming Interface)

There are a large number of APIs suitable for game programming, ranging from the specialized (graphics only, such as OpenGL) to the very broad (windowing, graphics, networking, and more are available in ClanLib).

  • OpenGL -- Specifically, this is a graphics library. Some other APIs can integrate very nicely with OpenGL (such as SDL). It is also cross-platform.
  • DirectX -- A set of APIs by Microsoft, specifically for machines running Windows, though it is also on some other Microsoft platforms (the original Xbox used a modified version of the DirectX API). They include sound, music, graphics, input, and networking.
  • SDL (Simple DirectMedia Layer) -- A good C-based library that is very portable, and while pretty low-level, it is complete enough to control sound, graphics, and input (from joysticks, keyboard, mouse and CD-ROM). zlib/libpng license.
  • SFML -- An object-oriented C/C++/.Net API supporting audio, graphics, window handling, multi-threading, networking, and input (from mouse, keyboard and joysticks).
  • Allegro -- An easy-to-use library for C/C++ programs. Cross-platform (supports Windows, DOS, Mac OS X, UNIX, and BeOS). Provides functions for graphics, sounds, input, and timers.
  • ClanLib -- A C++ toolkit and OpenGL 2.0 wrapper.


Graphics is the common name for the visual presentation of the game environment and of each of its visual components. Creating these visual environments generally begins with the concept artist, who, in accordance with the game concept, creates visual representations not only of the game characters and objects but also of how the environments will look: the visual expansion of the game creator's imagination. That part of the work is mostly done outside of the game, with third-party software dedicated to those distinct tasks and data.

A game's graphics are not restricted to in-game art; they also include font design, logos and advertising for marketing purposes. They can also have a great impact on other types of merchandising, like t-shirts or toys, or other product spin-offs like animated series or even live-action movies based on the game concept.


Vision is the most important sense in humans. Presenting a visually stunning product will trump most other aspects in attracting players and guaranteeing initial sales; we have already covered how important an attractive presentation is.

multi-view display
Since it is no longer uncommon to have a multi-monitor setup, and as prices continue to fall and display technology improves, multi-monitor support in games will also become common. A multi-monitor setup can easily expand a normal view into a mosaic without any special consideration from the game creator, but driving distinct screens is extremely interesting for strategy games and simulations, where the game-play often permits the player to visualize a large amount of distinct data.


To do:
Extend if possible give examples of games.

"Special" effects


To do:
Mine w:Parallax, w:Parallax scrolling and w:Parallax mapping.

2D or 3D

Until recently most modern games seemed entrenched in the 3D craze, until the mobile phone revived the market for simpler visuals and good old 2D creativity and innovation. Before that, 2D had been mostly relegated to emulators and reimplementations of old game models.

Most classic table games, like chess, checkers and card games, will not benefit much from a complex 3D implementation. The same is true of implementations of classic arcade games that predate the 3D evolution: even if some could be implemented in 3D, the game-play itself would either remain the same or result in a different game altogether.

After the initial advances in 3D and the development of good 3D hardware, 3D soon became mostly a marketing gimmick, a selling point and a way to hide the frailty of a game concept behind good visuals, where each new inch gained in performance or visual realism was heralded as a revolutionary must-see discovery. Remember what we said about presentation; that is mostly how it is used. Most games do not use it in a way that complements game-play and game design.

One should also consider the impact that companies developing and commercializing 3D hardware have on game creation. We now even have 3D requirements in the user interfaces of modern operating systems.

A good 3D game uses dynamic composition of scenes and environments, much the same way movies do, with the express purpose of showing off the game design and engaging the player's imagination, but with unwavering respect for game-play.


To do:
Cover Allegro, SDL, OpenGL, DirectX - The limitation of DirectX is that the technology is only available on Windows and the Xbox.

Graphics engines

The game engine should be debugged and tested with more primitive environments and models, particularly on game consoles. What is called the graphics engine manipulates the game's animations, scripts, character positions in an environment, and memory allocation for graphics rendering. Other data, such as physics, AI and the game scripts, are handled by other engines; there is a widespread misunderstanding among the general public that a game is made with only one "engine".

2D Graphics

3D Graphics

Beyond 3D

stereoscopic view
While most game engines aim only to simulate the appearance of depth with various graphics artifacts, real 3D only recently started to become popular in games. There are several techniques to produce and display 3D moving pictures. At the core is the requirement to display offset images that are filtered separately to the left and right eye, providing independent focus and therefore depth to the scene being observed. Two strategies have been used to accomplish this: have the viewer wear eyeglasses that filter the separately offset images to each eye, or have the light source split the images directionally into the viewer's eyes (no glasses required). The difficulty so far was in getting the CPU or video card to generate the required similar-but-distinct images, and in the software that would generate those views; the rise of the GPU and the adoption of 3D TVs has finally permitted general use of this kind of 3D view, even if there is not yet a great number of implementations in games.


To do:
Extend if possible give examples of games.

beyond polygons
In science and medicine the concept of point-cloud data has gained importance due to the high detail it permits with imaging equipment (magnetic, laser). An Australian company, Euclideon, has claimed an advance in 3D scene rendering using points in place of polygons, and therefore an increase in the possible level of detail, based on sparse voxel octree graphics. Others are also working on this type of technique, like the Atomontage Engine (a hybrid approach) or the Voxlap engine by Ken Silverman (who also wrote the Build engine, used in Duke Nukem 3D).


To do:
wikipedia:Sparse voxel octree, add videos ?

User Interface

The user interface (UI) is a very important component of any game because this is usually the first thing a new player will see when starting up your game. It is also (in most games) always visible to the player, so it is wise to put some effort into making an interface that is intuitive, easy to use and that looks good! There is nothing like a badly designed interface to put someone off a potentially great game.

A user interface consists of:

  • Graphics - Buttons, info panels, maps, etc.
  • Layout - Where those things are placed on the screen
  • Interaction - How these things respond to user input; do they bring up a map? your inventory? access settings?

2D and 3D

UIs can be rendered in 2D or 3D. The interface itself is in most cases independent from the game engine; at most it serves as an I/O interface between the player and the game world. Many of the primitives a UI renders will not be present in a game engine, especially if one is relying on a packaged engine.

2D games can have 3D UIs and 3D games can have 2D UIs; the choice is purely one of design and presentation. Note that computationally 3D UIs will take more resources away from the game, and flashy UIs tend to make an impact only on first contact unless there is deeper integration with the game-play. The UI should also not outshine the game itself; at best it should aim to be useful and informative.




Packaged solutions


Artificial Intelligence (AI)

Artificial intelligence is what makes your game world come alive and gives your in-game creatures a mind of their own. AI also allows the difficulty of the game to change, either by letting the user select a difficulty level or by having the game adapt automatically to the player's skill level. There are many ways to implement AI in a game.

Finite state machines

A very simple type of AI, used in Doom. A finite state machine consists of a list of possible states (or "emotions"). Let's say there is a game with guards that patrol a room looking for intruders.
Let's also suppose these guards have five states:

  • Patrolling - The guard is walking along on his assigned route
  • Waiting, or idle - The guard is standing somewhere, possibly talking with another nearby guard, smoking, etc.
  • Suspicious - The guard has heard a noise and thinks something is nearby (ex: a guard heard a player kick a rock off a ledge)
  • Alert - The guard has visibly seen you and is attacking, or possibly shouting out for help
  • Injured/In trouble - You have hurt the guard, but he is alive, and is running for help

These states would have particular actions consistent with the "emotion" of the guard: Patrolling would activate the waypoint system, while Alert would activate the targeting system for the AI.
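As a sketch of how such a machine might look in C++ (the names GuardState, Senses and nextState are hypothetical, not from any particular engine), one transition function maps the current state and what the guard can sense to the next state:

```cpp
#include <cassert>

// One enum value per "emotion" from the list above.
enum class GuardState { Patrolling, Idle, Suspicious, Alert, Injured };

// Inputs the game world feeds into the state machine each tick.
struct Senses {
    bool heardNoise;
    bool seesPlayer;
    bool isHurt;
};

// One transition step: pick the next state from the current state
// and the guard's current senses, in priority order.
GuardState nextState(GuardState current, const Senses& s) {
    if (s.isHurt)     return GuardState::Injured;    // highest priority
    if (s.seesPlayer) return GuardState::Alert;
    if (s.heardNoise) return GuardState::Suspicious;
    // Nothing happening: suspicious or alert guards calm down.
    if (current == GuardState::Suspicious || current == GuardState::Alert)
        return GuardState::Patrolling;
    return current;
}
```

The game then runs the state-specific behaviour (waypoints, targeting, fleeing) based on whatever state the function returns each tick.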

Neural Nets

Scripted rules-based system

Procedural Content Generation

Procedural content generation is the process of procedurally generating any type of 2D/3D geometry, textures, scripts or sound, often from a randomization seed. To date this technique is primarily used to create environments, simple worlds or structures. The approach can also be applied to much other content; using strict definitions it can be used in cooperative work with AI (dynamic environments, objects and creatures), but it is still difficult to make it as reliable and realistic as traditional animation, even if it provides greater variability, and in replicating the human voice it is still in its infancy.

Unless there is a benefit for interactivity or to enable randomization, procedural content generation should be avoided, as it is expensive in CPU cycles and generally complex to implement. Exceptions are made for repetitive but simple patterns, even when interaction is not possible: for instance the simulation of water, the effect of rain, or the display of clouds, smoke, fire or explosions. A less important factor that may prompt the use of procedural content generation is space (size) restrictions in the finished product, due to the media to be used or the size of the download.
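Where it is used, determinism is the key property: seeding a pseudo-random generator means only the seed has to be stored or transmitted, not the generated content itself. A minimal, hypothetical sketch (the tile map and the function name are invented for illustration):

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <random>
#include <vector>

// Generate a small tile map from a single seed. The same seed always
// yields the same map, because std::mt19937 is fully deterministic.
std::vector<int> generateTiles(std::uint32_t seed, std::size_t count) {
    std::mt19937 rng(seed);                        // deterministic PRNG
    std::uniform_int_distribution<int> tile(0, 3); // 4 hypothetical tile types
    std::vector<int> tiles(count);
    for (auto& t : tiles) t = tile(rng);
    return tiles;
}
```

A level file, or a multiplayer session, then only needs to agree on the seed for every player to see the same generated world.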


When creating a game there are some features that will help the player interact easily with the virtual environment created. If you are simulating real environments this is extremely important, since realism is expected: from object trajectories to how objects collide, the feeling of mass, how fluids behave and other similar behaviors can all be modeled in conformance with well-understood formulas that are also valid in the real world.



  • Surface tension
  • Viscosity



Realism (Reality Simulation)


Light and shadows

Lighting and texturing

Light maps


Bump maps

Normal maps

Parallax mapping


Adding details

Having the game include small, even repetitive, details will provide a richer experience: a gust of wind, a dust mote, flies over a trashcan, or a leaf floating in the wind can be as powerful as more complex effects like dynamic lighting.


Water is probably the most difficult effect to reproduce in a game. It involves reflections, transparencies and distortions, high detail such as waves and foam, and it is a fluid, behaving like both a solid and a liquid. Depending on the level of detail one is aiming for, this becomes not only a difficult task but will also consume a lot of computational power if done in real time.





For this part of our engine we are going to be using the OpenAL API. Why? For the simple reasons outlined in Choosing an API: it is open source, cross-platform and powerful while still remaining relatively easy to use. So let's get started...

All objects that can emit sound in our game world have a position (with the exception of background music). Also associated with each sound is some sort of trigger event, so that when the player does something to activate the trigger the sound will start to play. Simple, hey! So how are we actually going to implement this?

Well, OpenAL works on the concept of having a source (the sound that plays) and a listener (the person listening). These two objects can be placed anywhere in our 3D environment and given certain properties, such as which attenuation model to use, what speed the source is traveling at, etc., but we'll get to that later. You can also have many sources (which makes sense), but only ONE listener (which also makes sense). When you add a sound for OpenAL to play you first have to do three things: you create a buffer which you use to load your audio data into, you then create a source, and finally you associate the source with the buffer so that OpenAL knows what audio to play from which source. Taking all that into account, we are going to encapsulate each source in a C++ struct. The struct so far, which we will call newSource, will hold the source's positional information as sourcePos[3], plus a sourceID and a bufferID so that we can uniquely address each source.

Something else we need to take into consideration is that, since OpenAL very kindly attenuates sound for us based on distance, we need to make the sound start playing when the player reaches the 'outer bounds' of the source (the point at which you can no longer hear the sound play). So we'll add an activateDistance value to our struct as well.

Additionally, we need to take into account that sound data cannot load instantaneously from the hard drive, since hard drives are pretty slow in comparison to RAM. So we'll also add a preloadDistance value to our struct, so that when we move within that distance the sound will load into the buffer, and when we move within the activateDistance the sound will start to play. Cool, hey!

And finally, since we are most probably going to have more than one source (it would be a pretty boring game if we did not), we are going to put our structs into a C++ vector (if you do not know what that is, it is just an array with more functionality) which we will call pipeline. We also need to add some functionality to remove 'dead' sources from the pipeline and free up memory, but we'll get to this later on.
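Putting the pieces described above into code, a sketch of the struct and the pipeline might look like this (the OpenAL handle types are stored as plain unsigned ints here so the sketch stands alone without the OpenAL headers):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Sketch of the struct described above; real code would use ALuint
// for the OpenAL source and buffer handles.
struct newSource {
    float    sourcePos[3];     // position of the emitter in world space
    unsigned sourceID;         // OpenAL source handle
    unsigned bufferID;         // OpenAL buffer holding the audio data
    float    preloadDistance;  // within this range: load audio into the buffer
    float    activateDistance; // within this range: start playing
};

// All live sources; entries are culled as the player moves out of range.
std::vector<newSource> pipeline;

// Distance from the listener to a source, compared against the two radii
// each frame to decide whether to preload, play, or cull the source.
float distanceTo(const newSource& s, const float listener[3]) {
    float dx = s.sourcePos[0] - listener[0];
    float dy = s.sourcePos[1] - listener[1];
    float dz = s.sourcePos[2] - listener[2];
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}
```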

(Diagrams omitted: one illustrating how all this fits together, and an 'in-game' view of how preloadDistance, activateDistance and sourcePos fit into the picture.)

So, to outline the process:

  • When the player moves within the outer red sphere a new newSource struct is created, the sound is loaded into the buffer and pushed onto the pipeline.
  • When the player moves within the yellow sphere the sound starts playing and as the player moves closer towards the inner white sphere the sound will get louder until it reaches maximum volume at the white sphere.
  • Going in reverse, as the player moves away from the white sphere the sound will decrease in volume until you move outside the yellow sphere at which point the sound switches off but remains in the pipeline.
  • When the player exits the red sphere the source is removed from the pipeline and destroyed (culled) so that we do not take up unnecessary memory.

Though there are many different types of video games, a few properties are constant: every game requires at least one player, every game gives the player at least one challenge, every game uses a display, and every game has at least one method of input/control.

The User Interface

As described at the beginning of this chapter, the user interface is made up of sprites, menus and so forth. It's what the user is given to control the actions within the game. These graphics may be defined as buttons which can be pushed, or a character which can be moved with the arrow keys. All of these elements are part of the user interface.


To do:
Add some graphs in regard to the UI and the game sub-systems

The Main Menu

To start off with, just about every video game boots up to a main menu. This is usually a screen with some type of background, with an arrangement of buttons for actions such as new game or start game, options, load game and quit game.

This screen acts as a control panel for the game, allowing the player to change settings, choose modes, or access the actual game.

Sometimes, a game will use the main menu as the in-game menu. The in-game menu is usually accessed with the escape key or the start button during gameplay, and allows the player to access most of the main menu actions along with additional ones such as displaying character stats, points, inventory and so forth. Not all menus have to be squares with words in them, though: the game Secret of Mana uses a creative menu where the level stays in focus while the choices form a circle around the player.

These menus are not required, but it is traditional to include them.

Starting the game

When you first start up the game, a series of splash screens is shown. A splash screen contains elements such as logos, movies and so forth. It is often used to tell the player firsthand which companies contributed to the game, and sometimes gives part or all of the introduction to the plot.

When the actual game has started there is often an introductory movie that gives the prologue to the plot. This is not a movie like you see in the theater, but usually a better-rendered use of the game's own graphics and sounds.

In most games, you will then be asked for your name, and in some games you will be allowed to customize your character, settings and so forth.

The next stage of the game is the tutorial. It is not always considered part of the game's plot, but in some games it is so integrated that, even though it is the tutorial stage, it is part of the plot anyway. We will call this tutorial integration. It is widely used in games such as The Legend of Zelda and Super Mario 64.

Playing the game

During gameplay there are some basic concepts that just about every game uses. They are listed below:

Player-character relationship

The player's relationship to the character: how does the player control the character? There are usually three types of PCR: 3rd person, 1st person and influence:

3rd person: The player is not the character but instead is controlling the character/characters impersonally.

1st Person: The player is the character/characters - and sees things from the characters point of view both personally and visually.

Influence: The player is not tied to any character/characters but merely has an influence in the game. This is seen in puzzle games such as Tetris, and also in RTS games.

Game world

What is the 'world' portrayed by the game? Within this there are two considerations.

Character role: What role does the character play in the game itself? There are three types in this sense.

Protagonist: Everything revolves around the character/characters, the classic save-the-day deal. Seen in games such as Zelda, Mario, Final Fantasy, and so forth.

Arcadic Conventional: An impersonal arcade character.

Influence: The character is a faceless influence within the realm of the game.

Law: What are the laws, concepts, rules, etc. that define the realm?

Graphics: What is seen, and the laws of its style.

Sound: What is heard, and the laws of its style.

Gameplay: What is played, and how the game is played.


Saving the game can usually be a basic menu action wherein the player types a save name and the game is saved. In some games, though, more creative approaches are taken so that the player is not pulled out of the gaming experience; Metroid does this with its save stations.

Loading, however, is usually a menu action.

The Main Loop

At the heart of our game is the main loop (or game loop). Like most interactive programs, our game runs until we tell it to stop. Each cycle through the loop is like the heart beat of the game. The main loop of a real time game is often tied to the video update (vsync). If our main loop is synchronized to a fixed time hardware event, such as a vsync, then we must keep the total processing time for each update call under that time interval or our game will "chug."

// a simple game loop in C++

int main( int argc, char* argv[] )
{
    game our_game;
    while ( our_game.is_running())
        our_game.update();
    return our_game.exit_code();
}

Each console manufacturer has their own standards for video game publication, but most require that the game should provide visual feedback within the first few seconds of starting. As a general design guideline, it is desirable to provide the player with feedback as quickly as possible.

For this reason most start up and shut down code is usually processed from within the main loop. Lengthy start up and shut down code can either run in a sub thread monitored from the main update() or sliced into small chunks and executed in order from within the update() routine itself.

State Machine

Even without considering the various modes of play within the game itself, most game code will belong to one of several states. A game might contain the following states and sub states:

  • start up
  • licenses
  • introductory movie
  • front end
    • game options
    • sound options
    • video options
  • loading screen
  • main game
    • introduction
    • game play
      • game modes
    • pause options
  • end game movie
  • credits
  • shut down

One way to model this in the code is with a state machine:

class state
{
public:
    virtual ~state() {}
    virtual void enter( void )= 0;
    virtual void update( void )= 0;
    virtual void leave( void )= 0;
};

Derived classes can then override these virtual functions to provide state specific code. The main game object can then hold a pointer to the current state and allow the game to flow from state to state.

extern state* shut_down;

class game
{
public:
    game( state* initial_state ): current_state( initial_state ) {}

    void change_state( state* new_state )
    {
        current_state->leave();
        current_state= new_state;
        current_state->enter();
    }

    void update( void )
    {
        current_state->update();
    }

    bool is_running( void ) const
    {
        return current_state != shut_down;
    }

private:
    state* current_state;
};

A game loop must consider both how much real time has passed and how much game time has passed. Separating the two makes slow-motion (i.e. bullet time) effects, pause states and debugging much easier. If you intend to make a game that can rewind time, like Blinx or Sands of Time, you will need to be able to run the game loop forward while running game time backwards.
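A minimal sketch of that separation (the GameClock name is hypothetical): game time advances as scaled real time, so pause and slow motion only ever touch the scale factor, never the clock itself.

```cpp
#include <cassert>

// Keep game time separate from real time so that pausing and slow
// motion only change the scale factor, not the measured real time.
struct GameClock {
    double game_time  = 0.0;  // seconds of game time elapsed
    double time_scale = 1.0;  // 1 = normal, 0.5 = slow motion, 0 = paused

    // Advance game time by a real-time frame interval (in seconds).
    void tick(double real_dt) { game_time += real_dt * time_scale; }
};
```

A rewind effect would similarly be a negative time_scale applied to game time while the loop itself keeps running forward.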

Another consideration surrounding time depends on whether you want to go for a fixed or variable frame rate. Fixed frame rates can simplify much of the maths and timings within the game but they can make the game much harder to port internationally (e.g. going from 60 Hz TVs in the US to 50 Hz TVs in Europe.) For this reason it is advisable to pass frame time as a variable even if the value never changes. Fixed frame rates suffer from stuttering when the work load per frame reaches the limits and this can feel worse than a lower frame rate.

Variable frame rates, on the other hand, automatically compensate for different TV refresh rates. But variable rates often feel soggy in comparison to fixed rate games. Debugging, particularly debugging timing and physics issues, is usually more difficult with variable time.

When implementing timing in your code there are often several hardware timers available on a given platform, often with different resolutions, overheads for accessing them and latencies. Pay special attention to the real time clocks available. You must use a clock with a high enough resolution, while not using excessive precision. You might need to handle the case where the clock wraps (for example, a 32-bit nanosecond timer will overflow back to zero every 2^32 nanoseconds, which is only 4.2949673 seconds).
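The wrap case can be handled with plain unsigned arithmetic; this minimal sketch (the function name is just for illustration) assumes a free-running 32-bit tick counter:

```cpp
#include <cassert>
#include <cstdint>

// Turn a wrapping 32-bit tick counter into a safe delta. Unsigned
// subtraction is defined modulo 2^32, so it yields the correct elapsed
// ticks even across a wrap, as long as fewer than 2^32 ticks pass
// between the two samples.
std::uint32_t elapsed_ticks(std::uint32_t now, std::uint32_t last) {
    return now - last;
}
```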

const float game::NTSC_interval= 1.f / 59.94f;
const float game::PAL_interval=  1.f / 50.f;

float game::frame_interval( void )
{
    if ( time_system() == FIXED_RATE )
    {
        if ( region() == NTSC )
            return NTSC_interval;
        return PAL_interval;
    }

    // variable rate: measure the time elapsed since the last frame
    float current_time= get_system_time();
    float interval= current_time - last_time;
    last_time= current_time;

    // clamp bad values (clock wrap, debugger pauses, first frame)
    if ( interval < 0.f || interval > MAX_interval )
        return MAX_interval;
    return interval;
}

void game::update( void )
{
    // here state::update is assumed to take the frame interval
    current_state->update( frame_interval());
}


Modern games are usually loaded either directly from CD or indirectly from the hard drive. Either way, your game could spend a significant amount of time in I/O access. Disc access, particularly CD and DVD access, is a lot slower than the rest of the game. Many console manufacturers make it a standard that all disc access must be indicated visually; and that is not a bad design choice anyway.

However, most disc access API functions (particularly those that map through the standard I/O of the C runtime library) stall the processor until the transfer is complete. This is called synchronous access.

Multi-threaded disc access

One way to get feedback while accessing the disc is to run disc operations in their own thread. This has the advantage of allowing other processing to continue, including drawing some visual feedback of the disc operation. But the cost is that there is a lot more code to write, and access to resources needs to be synchronized.

Asynchronous disc access

Some console operating system APIs handle some of the multi-threading code for you by allowing disc access to be scheduled with asynchronous read operations. An asynchronous read signals completion either by polling on the file handle or through a callback.
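As a sketch of the polling style using only the standard library (the names beginRead and pollRead are illustrative, not from any console API), a read can be launched in the background and then checked once per frame instead of stalling the main loop:

```cpp
#include <cassert>
#include <chrono>
#include <fstream>
#include <future>
#include <iterator>
#include <string>

// Launch a file read on another thread and return a future for it.
std::future<std::string> beginRead(const std::string& path) {
    return std::async(std::launch::async, [path] {
        std::ifstream in(path, std::ios::binary);
        return std::string(std::istreambuf_iterator<char>(in),
                           std::istreambuf_iterator<char>());
    });
}

// Called once per frame: returns true when the data is ready, without
// blocking, so the game can keep animating a loading indicator.
bool pollRead(std::future<std::string>& f) {
    return f.wait_for(std::chrono::seconds(0)) == std::future_status::ready;
}
```

The main loop keeps drawing its loading indicator until pollRead returns true, then collects the data with get().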

Renderable Objects

Whether a game uses 2D graphics, 3D graphics, or a combination of both, the engine should handle them similarly. There are three main things to consider.

  1. Certain objects may take a while to load, and can momentarily freeze the game.
  2. Some machines run slower than others, and the gameplay must continue with a low framerate.
  3. Some machines run faster, and animation could be made smoother by taking advantage of the higher framerate.

Therefore, it is a good idea to create a base class as an interface that separates these functions. This way, every drawable object can be treated the same way, all loading can be done at the same time (for load screens), and all drawing can be done independently of the time interval. OpenGL also requires object display lists to have a unique integer identifier, so we'll also need support for assigning that value.

class IDrawable
{
public:
    virtual ~IDrawable() {}
    virtual void load( void ) {}
    virtual void draw( void ) {}
    virtual void step( void ) {}

    int listID() const       { return m_list_id; }
    void setListID( int id ) { m_list_id = id; }

protected:
    int m_list_id;
};

Bounding Boxes

One common method of collision detection is using axis-aligned bounding boxes. To implement this, we will build upon our previous interface, IDrawable. It should remain separate from IDrawable because, after all, not every object drawn on the screen will require collision detection. A 3D box is defined by six values: x, y, z, width, height, and depth. The box should also be able to return the object's current minimum and maximum values in space. Here is an example 3D bounding box class:

class IBox : public IDrawable
{
public:
    IBox();
    IBox(CVector loc, CVector size);

    float X()    { return m_x; }
    float XMin() { return m_x - m_width / 2.f; }
    float XMax() { return m_x + m_width / 2.f; }
    float Y()    { return m_y; }
    float YMin() { return m_y - m_height / 2.f; }
    float YMax() { return m_y + m_height / 2.f; }
    float Z()    { return m_z; }
    float ZMin() { return m_z - m_depth / 2.f; }
    float ZMax() { return m_z + m_depth / 2.f; }

private:
    float m_x, m_y, m_z;
    float m_width, m_height, m_depth;
};

IBox::IBox()
{
    m_x = m_y = m_z = 0;
    m_width = m_height = m_depth = 0;
}

IBox::IBox(CVector loc, CVector size)
{
    m_x      = loc.X();
    m_y      = loc.Y();
    m_z      = loc.Z();
    m_width  = size.X();
    m_height = size.Y();
    m_depth  = size.Z();
}
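The class so far stores the box but does not yet test for a collision. Two axis-aligned boxes overlap exactly when their extents overlap on all three axes; separation on any single axis rules the collision out. A self-contained sketch (using a minimal stand-in struct with the same min/max accessors as the class above):

```cpp
#include <cassert>

// Minimal stand-in with the same min/max interface as IBox.
struct Box {
    float x, y, z, w, h, d;  // center and extents
    float XMin() const { return x - w / 2.f; }
    float XMax() const { return x + w / 2.f; }
    float YMin() const { return y - h / 2.f; }
    float YMax() const { return y + h / 2.f; }
    float ZMin() const { return z - d / 2.f; }
    float ZMax() const { return z + d / 2.f; }
};

// Axis-aligned boxes collide only if their intervals overlap on every
// axis; one separated axis is enough to reject the pair early.
bool boxesOverlap(const Box& a, const Box& b) {
    return a.XMin() <= b.XMax() && b.XMin() <= a.XMax()
        && a.YMin() <= b.YMax() && b.YMin() <= a.YMax()
        && a.ZMin() <= b.ZMax() && b.ZMin() <= a.ZMax();
}
```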

Make your own game engine


While it is simple enough in the majority of APIs to display an image or a textured cube, as you begin to add more complexity to your game the task naturally becomes harder. With a poorly structured engine this complexity compounds as your engine grows larger. It can become unclear what changes are needed, and you may end up with huge special-case switch blocks where some simple abstraction would have simplified the problem.


This ties in with the point above: as your game engine evolves you are going to want to add new features. With an unstructured engine these new features are hard to add, and a lot of time may be spent finding out why a feature is not working as expected; maybe it's some strange function that is interfering with it. A carefully crafted engine separates tasks out so that extending a certain area is just that, and does not require modifying prior code.

Know your code

With a well-thought-out game engine design, you will begin to know your code. You'll find yourself spending a lot less time staring (or maybe cursing) blindly at a blank screen, wondering why on Earth your code is not doing what you thought you had told it to do.

DRY Code

DRY is an acronym frequently used (especially in the extreme programming community) that means "don't repeat yourself". It sounds simple, but it can free up a lot of your time for other things. Also, code that does a specific task lives in one central location, so you can modify that small section and see your changes take effect everywhere.

In fact, it's common sense

The points above probably do not seem that incredible to you; they are really common sense. But without thought and planning when designing a game engine, you will find reaching these targets a lot harder.