Monday, March 19, 2012

2: How the Cyclone Game Engine Began

       This next blog post covers the making of my game engine from its very first builds up to where it is now, with screenshots along the way. I first started creating the Cyclone Game Engine in my parents' basement in my spare time when I was 17. At the time, the engine was programmed in C# with Microsoft Visual Studio 2008 and utilized the XNA 3.1 framework. Starting from a new project, I created classes for the first-person camera, the player, the player's movement and the weapon. The first-person-camera class was based on the chase camera sample from the App Hub Community website, which can be found here. These classes all connect to the Game class, which is what you start with when creating a game project in XNA. I also needed 3D models as placeholders to provide a visual check that the game engine was working properly.

       Models are crucial to graphics applications; without them, there is nothing to view on the screen. The 3D modeling process can be complex and time consuming, and there are several different software packages available for creating 3D models. The ones I considered were Autodesk Maya, 3D Studio Max and Blender. Blender, unlike Maya and 3D Studio Max, is free and open source and serves as a great alternative. For this project, however, I purchased Autodesk Maya 2009 at Cincinnati State, which was within my budget thanks to my student license, and it was the program I was most familiar with. Blender at the time was not as developed as it is now.

       I modeled a low-poly MP5 gun model in Autodesk Maya 2009 and applied free textures I found online. To have something to navigate around in, I used the level from the robot game sample on the App Hub website. This level served only as a placeholder before I began creating my first level on my own. Below is a screenshot of my very first successful build. It was astonishing, as I had no idea what to expect after I clicked the Debug button; I simply had an image in my mind of what I thought it would look like. There were plenty of things wrong, of course, but it was only the first build. Just getting it working and seeing my project come to life for the first time was fascinating. Words couldn't express the joy and excitement I felt. There is no better feeling in the world than the magic that comes from creating something from nothing.


       The image above is the very first successful build of the game engine, with the MP5 model and the sample world. My first thought was, "Why can't I see everything?" This was the first problem I encountered when creating my game engine. I then realized I couldn't see all of the sample world's environment because I needed to fix the field of view in my first-person-camera class and change its draw distance. Another issue was the angle of the gun model.


       This next image is the second screenshot of the game engine, with the gun slightly turned and positioned somewhat better. The field of view was fixed to show the level from the robot game sample in its entirety.



       In this next screenshot, I was really getting ahead of myself and "jumping the gun". I wanted to see if a high-poly model could be supported, so I modeled a high-poly M4A1 gun model with a scope attachment to replace the low-poly MP5. I got it to run successfully in the game, but that led to another problem: XNA only supported models up to 60,000 polygons. I eventually wanted to make everything high-poly, but I knew this would be difficult to achieve due to performance issues. My computer at the time was an IBM T43 laptop, which was nowhere near capable of rendering such complex scenes, let alone running Autodesk Maya alongside my projects.

       So why would I risk crashing my computer by running software it could barely support and building programs it could barely run? In short, it was what I had at the time, and I was excited. In this build I had strafing left and right working successfully, though the look-up and look-down controls were inverted at the time. Although the movement worked, it felt way too stiff in my opinion; I needed spring-like behavior to make the movement more fluid and truly feel like you can move in 360 degrees. I looked at Microsoft's Chase Camera sample as a reference, because I needed a spring to make the movement seem less stiff.


       The screenshot above shows the M4A1 gun model with slight modifications. I used a checkered temporary ground model while I worked on my first level in Maya. Many of my peers at the time asked me, "Why did you use a checkered ground model?" It seemed obvious to me, but I didn't have any building models or 3D assets created yet to fill the environment. If the ground were all one color with no models in the scene, I couldn't tell whether I was actually moving in 3D space; I could only tell if I was looking up or down. By making the ground checkered, I could see whether my program was responding correctly to the player input from the controller.

       This visually allowed me to tell if I was moving forward, moving backward, strafing or turning. The checkered ground also helped the ground textures lay out nicely and evenly: each square represents the texture image to be used for the ground model, which let me make sure the resolution of the ground texture was scaled correctly so it wouldn't look stretched and pixelated when applied. I also added a red crosshair in the engine. The next problem I encountered was that the gun model seemed way too dark. I needed lighting on the weapon model to show off its detail to players.


       This next screenshot shows the gun model successfully lit from behind with a light source. Some people have asked me what shader I used to light the model. To light the gun, I started with the BasicEffect that is built into the XNA framework. Ironically, it's anything but basic. So I added the following to my draw method:

                foreach (ModelMesh mesh in model.Meshes)
                {
                    foreach (BasicEffect effect in mesh.Effects)
                    {
                        // Built-in three-light rig plus per-pixel lighting
                        effect.EnableDefaultLighting();
                        effect.PreferPerPixelLighting = true;

                        // A touch of specular shine on the gun's surface
                        effect.SpecularColor = new Vector3(0.5f);
                        effect.SpecularPower = 7;
                    }
                    mesh.Draw();
                }


The source code above is somewhat old and can hurt performance when deployed to Xbox 360 if your game or game engine is not multi-threaded, since foreach can generate garbage from its enumerators. For now, to help performance somewhat, I plan to change my foreach statements to for loops and eventually use DrawPrimitives. Later I plan on implementing fuller specular lighting, which will give the gun models more of the shiny metal surface I am looking for.



       In this screenshot I added the holographic sight to the M4A1 gun model. The holographic sight had some gaps in the mesh that didn't export properly.


       This screenshot above shows the latest SCAR gun model with a holographic sight, modeled and running in the game.


 

       The image above is an in-game screenshot of my first level in progress. I have a default green ground model and a low-poly temple model in the scene. At this stage there is no collision detection yet, so you just walk through the temple itself. The character model was successfully attached, but I didn't have his animations hooked up yet. I used the SkinnedModel pipeline from the Skinning Sample on the App Hub website; the skinning sample can be found here. Below is my first video of my first level, showing off the start of my progress.





       In this screenshot build of the game engine, I was working on jumping and sprinting. Only the character model's idle animation is running, because XNA 4.0 does not support .fbx models with multiple animation takes within them. So my next challenge is to code an animation pipeline for XNA 4.0 to support them. Once I get it working, I can get more animations running.




       I eventually expanded my level and modeled an entire scene with buildings and skyscrapers. This environment was huge! So as you can imagine, it was pretty time consuming.




Do I need to know math to create 3D graphics?
       Yes, there is a ton of mathematics required when you are working on a 3D game. When I got my very first successful build working, I didn't know much of the math at the time, but that didn't stop me; I was able to get by through constant trial and error. Don't let math scare you away and prevent you from trying. I will say that a better understanding of mathematics will leave you less confused, quicker to understand new concepts and far more productive. As long as you continuously learn, you will see results, and that's true of anything. With my first successful builds of the Cyclone Game Engine, everything was trial and error, so I kept trying and iterating over and over again until I saw results or improvements.

       With 3D, you will also need to learn the math behind the physics, for instance to calculate collision detection between objects and how they interact. To clear things up, both math and physics are sciences; the difference is that math is a pure science and physics is an applied science. Nearly every physics problem involves math, and most of the theoretical work is almost entirely worked out mathematically and then physically verified. In short, physics is math applied to problems in the physical world. That's why it is so crucial in 3D games: physics helps bring 3D game worlds to life and makes them behave more realistically, adding a degree of gameplay as objects in the game world interact with one another. At this stage, I don't have a lot of physics implemented just yet.



COORDINATE SYSTEMS
       In screen coordinate space, there are two dimensions: one for each of the X and Y directions. In screen coordinates, the X direction increases in value from the left side of the screen to the right, and the Y increases in value from the top of the screen to the bottom.

There are two main types of 3D coordinate systems used in computer graphics:
Left-Handed Coordinate System
Right-Handed Coordinate System 

Both of these systems contain three directions for the X, Y and Z axes. The three axes converge at a central point called the origin, where their values equal 0. The values along each axis increase or decrease at regular intervals depending on whether you are moving in the positive or negative direction along that axis. By convention, XNA primarily uses the right-handed coordinate system.




Why is XNA right-handed?
       For the most part, XNA doesn't care what orientation you use. DirectX can use either a left-handed or a right-handed coordinate system, and it used a left-handed system by convention previously. XNA and OpenGL, however, both use a right-handed coordinate system by convention, and many mathematics textbooks work in a right-handed coordinate system as well. The App Hub community samples are right-handed too, but you can use a left-handed coordinate system if you prefer it that way around. You would have to manually fix the projection and view matrices, but I think you can still safely use the BoundingFrustum class. XNA is also right-handed to match the Windows Phone framework. There is no single universal standard for right- and left-handed coordinate systems.

XNA’s coordinate system is setup like so: 
Forward is -Z, backward is +Z. Forward points into the screen.
Right is +X, left is -X. Right points to the right-side of the screen.
Up is +Y, down is -Y. Up points to the top of the screen.
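These conventions can be sanity-checked with a quick bit of plain C# (no XNA types, so the arithmetic is visible): in a right-handed system, crossing the right vector with the up vector yields the backward vector, which is exactly why forward ends up as -Z.

```csharp
// Cross product of two 3D vectors, written out by hand.
public static class Handedness
{
    public static float[] Cross(float[] a, float[] b) => new[]
    {
        a[1] * b[2] - a[2] * b[1],
        a[2] * b[0] - a[0] * b[2],
        a[0] * b[1] - a[1] * b[0],
    };
}
```

Crossing right { 1, 0, 0 } with up { 0, 1, 0 } gives { 0, 0, 1 }: positive Z, i.e. backward, so forward has to be -Z.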



ENTITY
       The Cyclone Game Engine represents each object within the 3D space of the game world as an entity. An entity is composed of the translation, rotation and scaling of an object within 3D space, along with the matrices on which those transformation operations are carried out.


VECTORS
       Each Entity instance contains three 3D vectors. Think of a vector as a set of floating-point values used to represent a point or direction in space. A 3D vector (in its simplest form) is a set of three floating-point numbers which can represent a 3D coordinate comprising an X, Y and Z component. The Entity has a 3D vector to store its Position (the Cartesian coordinate of the entity in 3D space), its Rotation (the amount by which the entity is rotated about the X, Y and Z axes) and its Scale (the factor by which the entity is scaled on the X, Y and Z axes).
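As a minimal sketch of that idea in plain C# (the names here are my own illustration, not the engine's actual source): one vector each for position, rotation and scale.

```csharp
// A tiny hand-rolled 3D vector, standing in for XNA's Vector3.
public struct Vec3
{
    public float X, Y, Z;
    public Vec3(float x, float y, float z) { X = x; Y = y; Z = z; }
}

// An entity carries where it is, how it is turned, and how big it is.
public class Entity
{
    public Vec3 Position = new Vec3(0f, 0f, 0f); // Cartesian coordinate in world space
    public Vec3 Rotation = new Vec3(0f, 0f, 0f); // rotation about X, Y, Z (radians)
    public Vec3 Scale    = new Vec3(1f, 1f, 1f); // per-axis scale factor
}
```

A freshly created entity sits at the origin, unrotated, at unit scale.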


What Units are Vectors In?
       Vectors can be in whatever unit you want them to be in. Staying consistent was key throughout the art process of creating the buildings in the scene. If you are working as a team, it is important for the artists to decide what one unit is equal to in your game. This prevents scaling issues: if one artist creates buildings where one unit equals a yard, and another artist creates characters where one unit equals an inch, buildings might look too small or characters might look too big, and vice versa.



       XNA Game Studio supports three vector types: Vector2, Vector3 and Vector4. Vector2 has two dimensions, so it's primarily used in 2D graphics. Likewise, Vector3 has three dimensions and is used in 3D graphics. So what should Vector4 be used for? A Vector4, like the Vector3 type, contains X, Y and Z values. The fourth component is called the homogeneous component and is represented by the W property (which, unfortunately, is not used for space-time manipulation). The fourth component is required for multiplying the vector by a matrix, which has four rows of values. This is where understanding matrix and vector multiplication becomes so crucial.
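To illustrate why W matters, here is a plain-C# sketch (helper names are my own, not XNA's): a row vector multiplied by a 4-by-4 translation matrix only picks up the translation in the fourth row when W = 1, which is why points carry W = 1 while pure directions carry W = 0.

```csharp
public static class Vec4Demo
{
    // Row-vector times 4x4 matrix, XNA's convention.
    public static float[] Transform(float[] v, float[,] m)
    {
        var result = new float[4];
        for (int col = 0; col < 4; col++)
            for (int row = 0; row < 4; row++)
                result[col] += v[row] * m[row, col];
        return result;
    }

    // A translation by (5, 0, 0); the fourth row holds the offset.
    public static readonly float[,] TranslateX5 =
    {
        { 1, 0, 0, 0 },
        { 0, 1, 0, 0 },
        { 0, 0, 1, 0 },
        { 5, 0, 0, 1 },
    };
}
```

The point { 0, 0, 0, 1 } transforms to { 5, 0, 0, 1 }, while the direction { 0, 0, 1, 0 } comes back unchanged.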




MATRICES
       In mathematics, a matrix is a rectangular group of numbers called elements. The size of a matrix is expressed as the number of rows by the number of columns. The 4-by-4 matrix, which contains 16 floating-point values, is the most common in 3D graphics; it is used to calculate the position, scale and rotation of an entity in world space. XNA Game Studio's Matrix structure is also a 4-by-4 matrix. Matrices have mathematical applications in a variety of fields, especially calculus and optics. In 3D game programming, we focus on the use of matrices in linear algebra because of how useful they are in computer graphics.

       In XNA Game Studio, the Matrix structure is row major, meaning that the vectors that make up the X, Y and Z directions and the translation vector are laid out in the rows of the matrix. Each row represents a different direction in the coordinate space defined by the matrix.

       In the first row, the X vector represents the right vector of the coordinate space. In the second row, the Y vector represents the up vector. In the third row, the Z vector represents the backward vector; the forward vector is actually the negation of the Z vector because, in a right-handed coordinate space, the Z direction points backward. The fourth row contains the vector to use for the translation of the position.



Matrix Transforms
       Here is the documentation for XNA's Matrix structure: https://msdn.microsoft.com/en-us/library/microsoft.xna.framework.matrix.aspx

Matrix transformations can be used to express the location of an object relative to another object, an object's rotation, and scale of objects in 3D space. They can also be used to change the viewing positions, directions and perspectives. 


Identity Matrix
       The identity matrix, also called the unit matrix, contains elements with the value of one along the diagonal from the top left down to the bottom right; the rest of the elements are all zeros. When the identity matrix is multiplied by any other matrix, the result is always that original matrix. The identity matrix is an orthonormalized matrix that defines the unit directions of X, Y and Z for the unit coordinate space, and it serves as the starting point for many of the other types of transforms. The engine tracks where models are located and positioned in model space in an editor like Autodesk Maya or 3D Studio Max, and displays them in-game exactly how you placed them there, with the help of XNA Game Studio's identity matrix.



Translation Matrix
       A translation matrix is a matrix used to translate, or move, a vector from one location to another. For example, if a vector contains the values { 1, 2, 3 }, a translation of { 2, 1, 0 } moves it to { 3, 3, 3 }. When a vector is multiplied by a translation matrix, the result is a vector equal to the original vector plus the translation in the matrix. In a translation matrix, the last row contains the values to translate by.
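The worked example above can be checked in a few lines of plain C# (helper names are mine): multiplying the row vector { 1, 2, 3, 1 } by a translation matrix whose last row holds { 2, 1, 0 } does indeed give { 3, 3, 3 }.

```csharp
public static class TranslationDemo
{
    // Builds a 4x4 translation matrix; the last row holds the offset.
    public static float[,] CreateTranslation(float x, float y, float z) => new float[,]
    {
        { 1, 0, 0, 0 },
        { 0, 1, 0, 0 },
        { 0, 0, 1, 0 },
        { x, y, z, 1 },
    };

    // Row-vector times 4x4 matrix.
    public static float[] Transform(float[] v, float[,] m)
    {
        var result = new float[4];
        for (int col = 0; col < 4; col++)
            for (int row = 0; row < 4; row++)
                result[col] += v[row] * m[row, col];
        return result;
    }
}
```

Transform(new[] { 1f, 2f, 3f, 1f }, CreateTranslation(2, 1, 0)) returns { 3, 3, 3, 1 }.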


Scaling
          A big issue encountered when creating 3D games is scale; the gun models were initially scaled too big. A scale matrix transforms a vector by scaling the components of the vector. Like the identity matrix, the scale matrix uses only the elements on the diagonal from top left to lower right; the rest of the elements are all zeros. When a vector is multiplied by a scale matrix, each component of the vector is scaled by the corresponding element of the matrix. A scale matrix does not have to be uniform in all directions: nonuniform scale, where some axis directions are scaled more or less than others, is also possible.
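A nonuniform scale is easy to see in a small plain-C# sketch (names are mine): since only the diagonal elements of the matrix are non-zero, each vector component is simply multiplied by its matching diagonal entry.

```csharp
public static class ScaleDemo
{
    // Equivalent to multiplying a vector by the 4x4 matrix diag(sx, sy, sz, 1).
    public static float[] Scale(float[] v, float sx, float sy, float sz) =>
        new[] { v[0] * sx, v[1] * sy, v[2] * sz };
}
```

Scaling { 4, 4, 4 } by (2, 1, 0.5) gives { 8, 4, 2 }: stretched on X, untouched on Y, squashed on Z.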


Rotation
       A rotation matrix transforms a vector by rotating it. Rotation matrices come in the form of rotations around the X, Y or Z axes, along with rotation around an arbitrary axis. Each type of rotation matrix has its own element layout for defining the rotation. When a vector is multiplied by a rotation matrix, the resulting vector's value is equal to the original vector rotated around the defined axis.

       In XNA Game Studio, creating rotation matrices is as simple as calling one of the static methods provided by the Matrix structure, such as CreateRotationX. The specified angles are in radian units; 2 Pi radians is equal to 360 degrees. Pi is a mathematical constant, the ratio of a circle's circumference to its diameter, with a value of about 3.14159. To convert between radians and degrees, use the MathHelper.ToRadians and MathHelper.ToDegrees methods.
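The conversions those helpers perform are a one-line formula each; here is the same math written out in plain C# for reference:

```csharp
public static class Angles
{
    // 360 degrees = 2 * Pi radians, so one degree is Pi / 180 radians.
    public static float ToRadians(float degrees) => degrees * ((float)System.Math.PI / 180f);
    public static float ToDegrees(float radians) => radians * (180f / (float)System.Math.PI);
}
```

For example, ToRadians(360) comes out to 2 Pi, and ToDegrees(Pi) comes out to 180.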

       Rotations are sometimes referred to in terms of yaw, pitch and roll. These represent rotations around the current coordinate space's right, up and forward vectors, which are not necessarily the same as the unit X, Y and Z axes. For example, if an object is already rotated 45 degrees around the Y axis, its forward vector is no longer in the negative Z direction; it is now halfway between negative Z and negative X.

       It's important to note that rotating around the X, Y and Z axes differs slightly between right-handed and left-handed systems. Rotating the object above around the X axis is not the same as rotating it around its pitch vector, because the pitch vector is no longer equal to the unit X vector.



World Transform
       A world transform changes coordinates from model space, where vertices are defined relative to a model's local origin, to world space, where vertices are defined relative to an origin common to all the objects in a scene. Essentially, the world transform places a model into the world.
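As a hedged plain-C# sketch of what a world transform does step by step (my own helper, ignoring the full matrix machinery): scale first, then rotate, then translate, mirroring the usual scale-rotation-translation matrix order used with row vectors.

```csharp
public static class WorldDemo
{
    // Applies scale, then a right-handed rotation about Y, then translation.
    public static float[] ToWorld(float[] p, float scale, float yaw, float[] t)
    {
        float x = p[0] * scale, y = p[1] * scale, z = p[2] * scale;
        float c = (float)System.Math.Cos(yaw);
        float s = (float)System.Math.Sin(yaw);
        float rx = x * c + z * s;   // right-handed rotation about the Y axis
        float rz = -x * s + z * c;
        return new[] { rx + t[0], y + t[1], rz + t[2] };
    }
}
```

A model-space point at { 1, 0, 0 }, scaled by 2, yawed 90 degrees and translated by { 0, 0, 5 }, ends up at roughly { 0, 0, 3 } in world space.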


CAMERAS
       A Camera in Cyclone represents an Entity through which the 3D scene can be viewed. For convenience, the engine creates a default camera when the Base is initialized (see the Base section). The analogy of a camera is carried through in the ability to configure and manipulate the camera in the same way as a physical film camera: you can set the perspective and the near and far clipping distances (the range), and you can pan, dolly, roll, pitch and yaw.

       It is possible to have multiple cameras within an environment. However, it is encouraged that each camera be set as the Base camera before it is rendered, so that any other elements of the scene have access to it if they require it; for example, an Effect may need to access the projection matrix or view position of the current camera to render correctly, or a NodeTree mesh (see the NodeTree section) will need to access the current camera's view frustum (see the Frustum section). This design decision was made to keep the camera system open-ended, so the programmer has the choice of how to use the camera.


View Transform
       The view transform locates the viewer in world space, transforming vertices into camera space. In camera space, the camera, or viewer, is at the origin looking down the negative z-direction, since the game engine uses the right-handed coordinate system. The view matrix relocates the objects in the world around the camera's position, the origin of camera space, and its orientation. There are many ways to create a view matrix. In all cases, the camera has some logical position and orientation in world space that is used as a starting point to create a view matrix, which is then applied to the models in a scene. The view matrix translates and rotates objects to place them in camera space, where the camera is at the origin. One way to create a view matrix is to combine a translation matrix with rotation matrices for each axis.
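As a simplified plain-C# illustration (my own helper; a real view matrix also applies the inverse of the camera's rotation): for an unrotated camera, moving the world into camera space is just subtracting the camera position from every point.

```csharp
public static class ViewDemo
{
    // Camera space for an unrotated camera: world position minus camera position.
    public static float[] ToCameraSpace(float[] world, float[] eye) =>
        new[] { world[0] - eye[0], world[1] - eye[1], world[2] - eye[2] };
}
```

With the camera at { 0, 0, 10 } looking down -Z, the world origin lands at { 0, 0, -10 }: ten units in front of the camera.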



Projection Transform
       The projection transformation can be thought of as choosing a lens for the camera. The projection matrix typically combines a scale with a perspective projection. The projection transformation converts the viewing frustum into a cuboid shape; because the near end of the viewing frustum is smaller than the far end, this has the effect of expanding objects that are near to the camera. Creating a projection does not change the world's Z axis; all it does is create a view frustum. (It is best explained here: Viewing Frustum.)
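The heart of the perspective effect can be sketched in a few lines of plain C# (my own simplification, ignoring the near/far planes, field of view and aspect ratio that a real projection matrix handles): dividing x and y by the distance from the camera makes distant objects shrink toward the center of the screen.

```csharp
public static class ProjectDemo
{
    // The camera looks down -Z, so depth into the scene is -z.
    public static float[] Project(float[] cameraSpacePoint)
    {
        float depth = -cameraSpacePoint[2];
        return new[] { cameraSpacePoint[0] / depth, cameraSpacePoint[1] / depth };
    }
}
```

A point at { 1, 0, -2 } projects to x = 0.5, while the same offset twice as far away, { 1, 0, -4 }, projects to x = 0.25.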


Frustum
Each camera contains a viewing frustum.

"In 3D computer graphics, the viewing frustum or view frustum is the region of space in the modeled world that may appear on the screen; it is the field of view of the notional camera. The exact shape of this region varies depending on what kind of camera lens is being simulated, but typically it is a frustum of a rectangular pyramid."
-- (Wikipedia (2004), Viewing Frustum)


       So far, this has been a brief overview of how 3D computer graphics programming works in XNA Game Studio. I will explain animation in later blog posts, and physics once I get it fully implemented. I can't overstate how crucial physics is for 3D games, let alone a game engine: complex behaviors and object interactions bring 3D game worlds to life. At this stage, there is no collision detection just yet since I am still debating which physics library to use. I am deciding between JigLibX and BEPU; although BEPU is more powerful, I am leaning toward JigLibX because I am more familiar with it and I think I can fix many of the issues developers encountered when using it.


 

       Many people asked me, "Where are all the textures?! Why isn't everything textured?" The screenshot below shows different parts of the models color-coded so that I could visually tell them apart and know where to apply specific textures later. For example, the lime green color would represent a specific building texture, so wherever there is lime green, that texture gets applied. Many games reuse the same textures across their scenes to help cut down processing, which is why I have not applied any textures yet; I have to plan out which textures to use and which colors they are associated with.



       I wasn't too concerned about applying textures at such an early stage in development, especially when my computer at the time could barely run the program. Textures also weren't applied yet because I wanted to save processing power for more important things, like polygon count, the number of models the engine can support, and artificial intelligence. More importantly, I needed to program the complex physics that would truly bring the game to life and make it more playable. Many games save processing by reusing the same textures for a good number of their models. I also had to take file size into consideration, because Xbox Live Indie Games could only be submitted up to 500MB. Texturing will take place during the final stages to give everything a finished, polished look.

