Monday, March 19, 2012

2: How the Cyclone Game Engine Began

       This next blog post covers the making of my game engine from its very first builds up until now, with screenshots along the way. I first started creating the Cyclone Game Engine in my parents' basement when I was 17, in my spare time. During that time, the engine was programmed in C# with Microsoft Visual Studio 2008 and it utilized the XNA 3.1 framework. Starting out with a new project, I created classes for the first-person camera, the player, the player's movement and the weapon. The first-person camera class was based on the chase camera sample from the App Hub Community website, which can be found here. These classes all connect to the Game class, which is what you start with when creating a game project in XNA. I needed 3D models as placeholders to provide a visual and see if the game engine was working properly.


3D MODELING 

What is 3D Modeling? 
       3D modeling is the process of virtually developing the surface and structure of a 3D object. Models are crucial to graphics applications because without them, there is nothing to view on the screen. The 3D modeling process can be complex and time consuming. There are several different software packages available for creating 3D models. The ones I considered were Autodesk Maya, 3D Studio Max and Blender. Blender, unlike Maya and 3D Studio Max, is free and open source and serves as a great alternative. However, for this project I purchased Autodesk Maya 2009 at Cincinnati State, which my good grades and a student license brought within my budget. Maya was also the program I was most familiar with; Blender at the time was not as developed as it is now, which is why I went with Maya. I highly recommend Blender because it's completely free, so there shouldn't be any financial burdens or worries. If you want to learn 3D modeling in any of these programs as well as many others, check out my Game Art page.

       I modeled a low-poly MP5 gun model in Autodesk Maya 2009 and used free textures I found online for it. The texture resolution on the model was not accurate, but it gave me experience using .fbx files. I used the first level from the Robot Game sample from the App Hub website to navigate around in. This level served as a placeholder before I began creating my own first level. Below is a screenshot of my very first successful build. It was astonishing, as I had no idea what to expect after I clicked the Debug button. I simply had an image in my mind of what I thought it would look like. There were plenty of things wrong of course, but it was only the first build. Just getting it working and seeing my project come to life for the first time was fascinating. Words couldn't express the joy and excitement I had. There is no better feeling in the world than the magic that comes from creating something from nothing.


       The image above is the very first successful build of the game engine, with the MP5 gun model and the sample world. The first objective was simply to display a gun model on the screen in first-person perspective. This taught me a lot of things necessary for the final build. First, I learned how three-dimensional models work in XNA; second, I learned how to take input from both the keyboard and the Xbox gamepad controller. My first thought was, "Why can't I see everything?" This was the first problem I encountered when creating my game engine. I then realized I couldn't see all of the sample world's environment because I needed to fix the field of view in the first-person camera class and change its distance. Another issue was the angle of the gun model.


       This next image above is the second screenshot of the game engine, with the gun slightly turned and positioned somewhat better. The field of view was then fixed to show the level from the Robot Game sample in its entirety. In short, field of view is simply how narrow or wide the camera is looking. I used the Chase Camera sample as a basis for the first-person camera class because I wanted to place a "spring effect" on the gun model for when the player turns and moves. Having a spring effect on the weapon made the movement more fluid and less stiff. The second step was figuring out how to make the gun move in three dimensions. Initially it moved in two dimensions, since it was based on the chase camera sample. It was here I quickly learned that movement in three dimensions is much more difficult than in two. For the object I needed four vectors: the object's position, its direction, its up vector and its right vector. All of this is just to get the object's orientation in 3D space. This led to one of the very first questions people asked me about programming games.
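For anyone curious, here is a minimal sketch of what those four vectors can look like in code. The field names are mine for illustration, not taken from the Chase Camera sample:

                 // A simplified way to store an object's orientation in 3D space.
                 Vector3 position  = Vector3.Zero;     // where the object is in the world
                 Vector3 direction = Vector3.Forward;  // which way it faces (-Z in XNA)
                 Vector3 up        = Vector3.Up;       // which way is "up" for the object
                 Vector3 right     = Vector3.Right;    // perpendicular to direction and up

                 // After rotating, the right vector can be rebuilt from the other two,
                 // and XNA can build a world matrix straight from these values:
                 right = Vector3.Cross(direction, up);
                 Matrix world = Matrix.CreateWorld(position, direction, up);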



Do I need to know math to create 3D games?
       No and yes, depending on what tools you use; there is a tremendous amount of mathematics required when you are programming a 3D game, especially your own engine. When I got my very first successful build working, I didn't know as much math as I do now, but that didn't stop me. I was able to get by through constant trial and error. Don't let math scare you away and prevent you from trying. I will say that a better understanding of mathematics will leave you less confused, quicker to understand new concepts and far more productive. As long as you continuously learn you will see results, and that's true with anything. With my first successful builds of the Cyclone Game Engine, everything was trial and error. So I kept trying and iterating over and over again until I saw results or improvements.

       With 3D, you will also need to learn the math behind the physics, for instance to calculate collision detection between objects and how they interact. To clear things up, both math and physics are sciences. The difference is that math is a pure science and physics is an applied science. Nearly every physics problem involves math, and most of the theoretical material is almost entirely worked out mathematically and then physically verified. In short, physics is math applied to problems in the physical world. That's why it is so crucial in 3D games. Physics helps bring 3D game worlds to life and makes them behave more realistically, adding a degree of gameplay as objects in the game world interact with one another. At this stage, I don't have a lot of physics implemented just yet.



       In this next screenshot, I was really getting ahead of myself and "jumping the gun". I wanted to see if a high-poly model could be supported, so I modeled a high-poly M4A1 gun model with a scope attachment to replace the low-poly MP5 gun model. I got it to successfully run in the game, but that led to another problem: XNA only supported models up to 60,000 polygons. I wanted to make everything high-poly eventually, but I knew this would be difficult to achieve due to performance issues. My computer at the time was an IBM T43 laptop, which was nowhere near capable of rendering such complex scenes, let alone running Autodesk Maya and my projects.



       At the time, I named my prototype Code Red. It wasn't necessarily the name for a game project, but naming the test programs I created added a sense of fun. Later, I expanded the text on screen which was displayed as instructions. The way we do this is by creating a SpriteBatch. Then I create a string with the instructions. Finally, we begin the sprite batch and draw the string like so:

 spriteBatch.DrawString(spriteFont, text, new Vector2(65, 65), Color.Red);

The variable spriteFont was just a font I picked earlier in the program, text was the string variable with the instructions, and Vector2(65, 65) just says where on screen the text should be drawn. This is a common way of showing HUDs in video games.
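For context, here is a minimal sketch of how the whole sequence fits together inside a Draw method. It assumes spriteBatch was created and spriteFont was loaded through the content pipeline in LoadContent; the instruction text itself is just a placeholder:

                 // A minimal sketch; assumes spriteBatch and spriteFont were set up in LoadContent.
                 protected override void Draw(GameTime gameTime)
                 {
                     GraphicsDevice.Clear(Color.CornflowerBlue);

                     string text = "Left Thumbstick: Move\nRight Thumbstick: Look";

                     spriteBatch.Begin();
                     spriteBatch.DrawString(spriteFont, text, new Vector2(65, 65), Color.Red);
                     spriteBatch.End();

                     base.Draw(gameTime);
                 }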



       So why would I risk making my computer crash, running software it could barely support and making programs it could barely run? In short, it was out of curiosity with what I had available at the time, and I was simply excited. In this build I had strafe-left and strafe-right movement working successfully, and the look-up and look-down controls at the time were inverted. Although I had the movement working, it seemed way too stiff in my opinion, and I needed a spring-like behavior to make the movement more fluid and truly feel like you can move in 360 degrees.

LEVEL DESIGN: Phase 1

Grey-boxing & Blockout 


       This above screenshot of the game engine shows the M4A1 gun model with slight modifications. I used a checkered temporary ground model since I was still working on my first level in Maya. Many of my peers asked me, "Why did you use a checkered ground model?" or "Why is the ground not green or one color?" The answer seemed so obvious to me at the time, but I realized that most people don't see games so early in development. They are used to seeing finished products, and it's a long process a triple-A game goes through before being complete. In short, one of the most important phases of level design and environment art creation is called grey-boxing or blockout. There are other terms such as block-in and whitebox. For my project, I use the term block-mesh, but they all essentially mean the same thing. I didn't have any building models or 3D assets created yet to fill the environment.


Why is the ground checkered and not green? 
       There are several reasons for this. The first is that since I am building an engine from scratch, I needed to determine whether I was actually moving in 3D space or not. If the ground were all one color with no models in the scene, I wouldn't be able to tell; I could only determine if I was looking up or down. By making the ground checkered, I could see if my program was responding correctly to the player input from the controller. This visually allowed me to tell if I was moving forward, moving backward, strafing and turning. The second reason the ground is checkered is to determine the scale of the textures for the ground, so that the textures would lay out nicely and evenly. Each square represents the textured image to be used for the ground model. This allowed me to make sure the resolution of the ground texture was scaled correctly so that it wouldn't be too stretched and pixelated when applied. I also added a red crosshair in the engine. The gun model seemed way too dark, which was the next problem I encountered. I needed lighting on the weapon model to show off its detail to players.


What is Grey-boxing or Blockout?
    Blockout is a process where you use primitive geometric shapes such as cubes, spheres, planes or cylinders to block in your level designs, game environments and game art assets. Check out the Primitives 3D open-source sample, which provides code for drawing basic geometric primitives in XNA. There is also the Primitives sample, which shows how to easily draw points, lines, and triangles on the screen. This phase is essentially the frame or skeleton of the level, game environment or asset that you will build upon and eventually finish. The blockout, or in my case block-mesh, phase is the foundation of your level. Below is a video example of blockout used in a single-player level for one of my favorite games of all time, Gears of War. Once the grey-boxing or blockout process is complete, static meshes and other various 3D models are added to the scene to flesh out the level. Gears of War to date has some of the best level design in the industry. I could talk all day about the level design in Gears of War, but I might save that for a later blog post.



       The goal of blockout is to focus on blocking in your environment and establishing its size, scale, layout, proportions and composition for your playable level. Once that is accomplished, you must then focus on the actual playable space of your level. You will have to figure out other things such as flow, pacing, gameplay implementation and scripting so you can begin playtesting your level. The purpose of the blockout phase is not to finish your level or game environment. This phase is basically the first step, so don't worry about making your level look pretty; texturing, lighting and other details come later. Nothing is final during the blockout phase, so leave room for changes and improvements over time. As you can see in my screenshot below, my block-mesh phase is very rough and messy. I did this deliberately to give me the flexibility to change things around while working on my first level. This also helps me determine the scale and size of my map.


Adding "Basic" Lighting 


       This next screenshot shows the gun model successfully lit from behind with a light source. Some people have asked me what shader I used to light the model. To light the gun, I started with the BasicEffect, which is built into the XNA framework. Ironically, it's anything but basic. So I added the following to my Draw method:

                foreach (ModelMesh mesh in model.Meshes)   // model is the Model loaded in LoadContent
                {
                    foreach (BasicEffect effect in mesh.Effects)
                    {
                        effect.EnableDefaultLighting();       // BasicEffect's built-in three-light rig
                        effect.PreferPerPixelLighting = true; // per-pixel rather than per-vertex lighting
                        effect.SpecularColor = new Vector3(0.5f);
                        effect.SpecularPower = 7;             // tighter, shinier highlight
                    }
                    mesh.Draw();
                }


The above source code is somewhat older and can hurt performance when deployed to Xbox 360 if your game or game engine is not multi-threaded. For now, to help performance somewhat, I plan to change my foreach statements all to for loops and eventually use DrawPrimitives. Later I plan on implementing specular lighting properly; it will give the gun models more of the shiny metal surface I am looking for.
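For what it's worth, here is a sketch of what that planned change might look like. On Xbox 360, the .NET Compact Framework's garbage collector makes avoiding per-frame allocations attractive, which is the usual motivation for swapping foreach for indexed for loops; model is assumed to be the loaded Model field:

                 // The same draw loop rewritten with for statements (a sketch of the planned change).
                 for (int i = 0; i < model.Meshes.Count; i++)
                 {
                     ModelMesh mesh = model.Meshes[i];
                     for (int j = 0; j < mesh.Effects.Count; j++)
                     {
                         BasicEffect effect = (BasicEffect)mesh.Effects[j];
                         effect.EnableDefaultLighting();
                         effect.PreferPerPixelLighting = true;
                     }
                     mesh.Draw();
                 }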


What exactly does XNA's Default Lighting do?
     From what I can tell from debugging and pulling out values, effect.EnableDefaultLighting() does the following:

                    effect.LightingEnabled = true;

                    effect.AmbientLightColor = new Vector3(0.053f, 0.098f, 0.181f);
                    effect.SpecularColor = new Vector3(0, 0, 0);
                    effect.DiffuseColor = new Vector3(0.64f, 0.64f, 0.64f);

                    effect.DirectionalLight0.Enabled = true;
                    effect.DirectionalLight0.DiffuseColor = new Vector3(1f, 0.96f, 0.81f);
                    effect.DirectionalLight0.Direction = new Vector3(-0.52f, -0.57f, -0.62f);
                    effect.DirectionalLight0.SpecularColor = new Vector3(1f, 0.96f, 0.81f);

                    effect.DirectionalLight1.Enabled = true;
                    effect.DirectionalLight1.DiffuseColor = new Vector3(0.96f, 0.76f, 0.40f);
                    effect.DirectionalLight1.Direction = new Vector3(0.71f, 0.34f, 0.60f);
                    effect.DirectionalLight1.SpecularColor = new Vector3(0f, 0f, 0f);

                    effect.DirectionalLight2.Enabled = true;
                    effect.DirectionalLight2.DiffuseColor = new Vector3(0.32f, 0.36f, 0.39f);
                    effect.DirectionalLight2.Direction = new Vector3(0.45f, -0.76f, 0.45f);
                    effect.DirectionalLight2.SpecularColor = new Vector3(0.32f, 0.36f, 0.39f);


I hope this helps.



       In this screenshot I added the holographic sight to the M4A1 gun model. The holographic sight had some gaps where parts of the model didn't export properly.


       This screenshot above shows the latest SCAR gun model with a holographic sight, modeled and running in the game.



       As of June 23, 2011, I was able to get the sample marine character model attached. Next I had to rig the character, and not only for animation: bone information needed to be added so that I could rotate the character's shoulders and head whenever the player looks down or up. I also have to figure out how to keep the character's body stationary instead of rotating the entire character model with the camera. I pulled the camera back into third-person perspective to see if the character was positioned accurately. The marine character model in this screenshot was created by Carlos Augusto of Floatbox Studios. It was originally used for a demo at the SBGames 2007 Independent Games Festival under the XNA category, where it contributed to a third-place finish. It was also used for the book "Beginning XNA 2.0 Game Programming", published in 2008. This screenshot is mainly for demonstration purposes.



 

       This image above is an in-game screenshot of my first level in progress. I have a default green ground model and a low-poly temple model in the scene. At this stage there is no collision detection yet, so you just walk through the temple itself. The character model was successfully attached, but I didn't have his animations pulled up yet. I used the SkinnedModel pipeline from the Skinning sample, which can be found on the App Hub website here. Below is my first video of my first level, showing off the start of my progress.





       This video demonstration above does not have any gameplay; I am currently a ways off from that. It simply shows how the spring properties were altered for fluid movement and turning. The stiffness of the spring was greatly increased, so the gun stays mostly stationary with the player while the camera follows closely behind it. Later, I will have to change the source code some more so that instead of following the player, the camera moves in conjunction with the player. In the physics of the game, the masses of the camera and the gun were increased, as well as the spring force.


       In this above screenshot build of the game engine, I was working on jumping and sprinting. Only the character model's idle animation is running, because XNA 4.0 does not support .fbx models with multiple animation takes within them. So my next challenge is to code an animation pipeline in XNA 4.0 to support them. Once I get it working, I can get more animations running.


       The third step was adding sound.
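I will cover audio in more depth later, but for the curious, playing a sound in XNA can be as simple as the sketch below. The asset name "gunshot" is just a placeholder for a .wav file added to the content project:

                 // A minimal sketch of playing a sound effect in XNA.
                 SoundEffect gunshotSound;

                 protected override void LoadContent()
                 {
                     gunshotSound = Content.Load<SoundEffect>("gunshot");  // placeholder asset name
                 }

                 // Then, whenever the player fires:
                 gunshotSound.Play();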



 I eventually expanded my level and modeled an entire scene with buildings and skyscrapers. 




      This environment was huge! So as you can imagine, it was pretty time consuming.





COORDINATE SYSTEMS
       In screen coordinate space, there are two dimensions: one for each of the X and Y directions. In screen coordinates, the X direction increases in value from the left side of the screen to the right, and the Y increases in value from the top of the screen to the bottom.

There are two main types of 3D coordinate systems used in computer graphics:
Left-Handed Coordinate System
Right-Handed Coordinate System 

Both of these systems contain three directions for the X, Y, and Z axes. The three axes converge at a central point called the origin, where their values equal 0. The values along each axis increase or decrease at regular intervals depending on whether you are moving in the positive or negative direction along that axis. By convention, XNA primarily uses the right-handed coordinate system.




Why is XNA right-handed?
       For the most part, XNA doesn't care what orientation you use. DirectX can use either a left-handed or a right-handed coordinate system, and it previously used a left-handed system by convention. However, XNA and OpenGL both use a right-handed coordinate system by convention. Many mathematics textbooks work in a right-handed coordinate system. The App Hub community samples are right-handed as well, but you can use the left-handed coordinate system if you prefer it that way around. You would have to manually fix the projection and view matrices, but I think you can still safely use the frustum classes. XNA is also right-handed to match the Windows Phone framework. There is no single universal standard for right- and left-handed coordinate systems.

XNA’s coordinate system is set up like so:
Forward is -Z, backward is +Z. Forward points into the screen.
Right is +X, left is -X. Right points to the right-side of the screen.
Up is +Y, down is -Y. Up points to the top of the screen.
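XNA's Vector3 structure exposes these conventions directly as static properties, so you can verify them in code:

                 // XNA's built-in direction constants match the conventions listed above.
                 Vector3 forward = Vector3.Forward;  // ( 0,  0, -1) -- points into the screen
                 Vector3 right   = Vector3.Right;    // ( 1,  0,  0)
                 Vector3 up      = Vector3.Up;       // ( 0,  1,  0)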



ENTITY
       The Cyclone Game Engine represents each object within the 3D space of the game world as an entity. The entity is composed of the translation, rotation and scaling of an object within 3D space, along with the matrices on which the transformation operations are carried out.


VECTORS
       Each Entity instance contains three 3D vectors. Think of vectors as a set of floating-point values used to represent a point or direction in space. A 3D vector (in its simplest form) is a set of 3 floating-point numbers which can represent a 3D coordinate comprising an X, Y and Z component. The Entity has a 3D vector to store its Position (the Cartesian coordinate of the entity in 3D space), its Rotation (the amount by which the entity is rotated about the X, Y and Z axes) and its Scale (the factor by which the entity is scaled on the X, Y and Z axes).
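As a rough sketch (simplified from the actual engine code, and leaning on the Matrix methods explained further below), the Entity looks something like this:

                 // A simplified sketch of the engine's Entity; the real class carries more state.
                 public class Entity
                 {
                     public Vector3 Position = Vector3.Zero;  // Cartesian coordinate in world space
                     public Vector3 Rotation = Vector3.Zero;  // rotation about X, Y and Z, in radians
                     public Vector3 Scale    = Vector3.One;   // scale factor along each axis

                     // The world matrix is rebuilt from the three vectors (see MATRICES below).
                     public Matrix World
                     {
                         get
                         {
                             return Matrix.CreateScale(Scale)
                                  * Matrix.CreateFromYawPitchRoll(Rotation.Y, Rotation.X, Rotation.Z)
                                  * Matrix.CreateTranslation(Position);
                         }
                     }
                 }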


What Units are Vectors In?
       Vectors can be in whatever unit you want them to be in. Staying consistent was key throughout the art process of creating the buildings in the scene. If you are working as a team, it is important for the artists to decide what one unit is equal to in your game. This will prevent scaling issues: if one artist creates buildings where one unit equals a yard, and another artist creates characters where one unit equals one inch, buildings might look too small or characters might look too big, and vice versa.



       XNA Game Studio supports three types of vectors: Vector2, Vector3 and Vector4. Vector2 has two dimensions, so it's primarily used in 2D graphics. Likewise, Vector3 has three dimensions and is used in 3D graphics. So the question is, what should Vector4 be used for? A Vector4, like the Vector3 type, contains X, Y and Z values. The fourth component is called the homogeneous component, represented by the W property (which, unfortunately, is not used for space-time manipulation). The fourth component is required for multiplying the vector by a matrix, which has four rows of values. This is where understanding matrices and vector multiplication becomes so crucial.
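To make the homogeneous component concrete: when XNA transforms a Vector3 by a matrix, it implicitly uses W = 1, so the translation row of the matrix applies. With an explicit Vector4 you can set W = 0, which ignores translation; this is how direction vectors are usually transformed:

                 Matrix move = Matrix.CreateTranslation(5, 0, 0);

                 // A point (implicit W = 1) is moved by the translation:
                 Vector3 point = Vector3.Transform(new Vector3(1, 2, 3), move);    // (6, 2, 3)

                 // A direction (explicit W = 0) ignores the translation row:
                 Vector4 dir = Vector4.Transform(new Vector4(1, 2, 3, 0), move);   // (1, 2, 3, 0)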




MATRICES
       In mathematics, a matrix is a rectangular group of numbers called elements. The size of a matrix is expressed as the number of rows by the number of columns. The 4 by 4 matrix, which contains 16 floating-point values, is the most common in 3D graphics; it is used to calculate the position, scale and rotation of an entity in world space. XNA Game Studio's Matrix structure is also a 4 by 4 matrix, represented as an array of sixteen floating-point numbers. Matrices have mathematical applications in a variety of fields, especially calculus and optics. In 3D game programming, we focus on the use of matrices in linear algebra because of how useful they are in computer graphics.

       In XNA Game Studio, the Matrix structure is row major, meaning that the vectors that make up the X, Y, and Z directions and the translation vector are laid out in the rows of the matrix. Each row represents a different direction in the coordinate space defined by the matrix.

       In the first row, the X vector represents the right vector of the coordinate space. In the second row, the Y vector represents the up vector of the coordinate space. In the third row, the Z vector represents the backward vector of the coordinate space. The forward vector is actually the negation of the Z vector, because in a right-handed coordinate space the Z direction points backward. The fourth row contains the vector used to translate the position.



Matrix Transforms
       Here is the documentation for XNA's Matrix structure: https://msdn.microsoft.com/en-us/library/microsoft.xna.framework.matrix.aspx

Matrix transformations can be used to express the location of an object relative to another object, an object's rotation, and the scale of objects in 3D space. They can also be used to change viewing positions, directions and perspectives.


Identity Matrix
       The identity matrix, also called the unit matrix, contains elements with the value of one along the diagonal from top left to bottom right. The rest of the elements in the matrix are all zeros. When the identity matrix is multiplied by any other matrix, the result is always the original matrix. The identity matrix is an orthonormalized matrix that defines the unit directions of X, Y, and Z for the unit coordinate space, and it is the starting point for many of the other types of transforms. The engine tracks where models are located and positioned in model space in an editor like Autodesk Maya or 3D Studio Max, and displays them in-game exactly how you had them placed there; this starts from XNA Game Studio's identity matrix.
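In code, Matrix.Identity is that starting point; transforming anything by it changes nothing:

                 // Multiplying by the identity matrix leaves a vector unchanged.
                 Vector3 v = Vector3.Transform(new Vector3(4, 5, 6), Matrix.Identity);  // still (4, 5, 6)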



Translation Matrix
       A translation matrix is a matrix used to translate, or move, a vector from one location to another. For example, if a vector contains the values { 1, 2, 3 }, a translation of { 2, 1, 0 } moves the vector to { 3, 3, 3 }. When a vector is multiplied by a translation matrix, the result is a vector whose value equals the original vector's value plus the translation of the matrix. In a translation matrix, the last row contains the values to translate by.
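The example above looks like this with XNA's Matrix structure:

                 // Moving the vector { 1, 2, 3 } by a translation of { 2, 1, 0 }.
                 Matrix translation = Matrix.CreateTranslation(2, 1, 0);
                 Vector3 moved = Vector3.Transform(new Vector3(1, 2, 3), translation);  // (3, 3, 3)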


Scaling
          A big issue encountered when creating 3D games is scale. As you can see in the screenshot below, the gun models were initially scaled too big. A scale matrix transforms a vector by scaling its components. Like the identity matrix, the scale matrix uses only the elements along the diagonal from top left to lower right; the rest of the elements are all zeros. When a vector is multiplied by a scale matrix, each component of the vector is scaled by the corresponding element in the matrix. A scale matrix does not have to be uniform in all directions. Nonuniform scale is also possible, where some axis directions are scaled more or less than others.


Rotation
       A rotation matrix transforms a vector by rotating it. Rotation matrices come in the form of rotations around the X,Y, or Z axes along with rotation around an arbitrary axis. Each type of rotation matrix has its own element layout for defining the rotation. When a vector is multiplied by a rotation matrix, the resulting vector’s value is equal to the original vector value rotated around the defined axis.

       In XNA Game Studio, creating rotation matrices is as simple as calling one of the static methods provided by the Matrix structure, such as CreateRotationX. The specified angles are in radians. In radians, 2 Pi is equal to 360 degrees. Pi is a mathematical constant, the ratio of a circle's circumference to its diameter; its value is around 3.14159. To convert between radians and degrees, use the MathHelper.ToRadians and MathHelper.ToDegrees methods.

       Rotations are sometimes referred to in terms of yaw, pitch, and roll. These represent rotations around the current coordinate space's right, up, and forward vectors, which are not necessarily the same as the unit X, Y, and Z axes. For example, if an object is already rotated 45 degrees around the Y axis, the forward vector is not in the negative Z direction anymore. It is now halfway between negative Z and negative X.

       It's important to note that rotating around the X, Y and Z axes is slightly different in right-handed systems than in left-handed systems. Rotating the object above around the X axis is not the same as rotating around its pitch vector, because the pitch vector is no longer equal to the unit X vector.
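Here is how rotation looks with XNA's helpers. A quarter turn around the Y axis swings the forward vector from negative Z all the way to negative X, matching the 45-degree example above at twice the angle:

                 // Rotating the forward vector 90 degrees around the Y axis (right-handed):
                 Matrix rotY = Matrix.CreateRotationY(MathHelper.ToRadians(90));
                 Vector3 newForward = Vector3.Transform(Vector3.Forward, rotY);  // roughly (-1, 0, 0)

                 // Yaw, pitch and roll can also be combined in a single call:
                 Matrix orientation = Matrix.CreateFromYawPitchRoll(
                     MathHelper.ToRadians(45), 0, 0);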



World Transform
       A world transform changes coordinates from model space, where vertices are defined relative to a model's local origin, to world space, where vertices are defined relative to an origin common to all the objects in a scene. Essentially, the world transform places a model into the world.
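In XNA, the world transform is typically built by multiplying the scale, rotation and translation matrices together. The order matters, because matrix multiplication is not commutative: scale first, then rotate, then translate (the values below are just examples):

                 // Building a world matrix: scale, then rotation, then translation.
                 Matrix world = Matrix.CreateScale(2.0f)
                              * Matrix.CreateRotationY(MathHelper.ToRadians(45))
                              * Matrix.CreateTranslation(10, 0, -5);

                 // The draw loop then hands the world matrix to the effect:
                 // effect.World = world;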


CAMERAS
       A Camera in Cyclone represents an Entity through which the 3D scene can be viewed. For convenience, the engine creates a default camera when the Base is initialized. The analogy of a camera is carried through in the ability to configure and manipulate the camera in the same way as a physical film camera, with the ability to set the perspective, the near and far clipping distances (or range), and the ability to pan, dolly, roll, pitch and yaw.

       It is possible to have multiple cameras within an environment. However, it is encouraged that each camera be set as the Base camera before it is rendered, so that any other elements of the scene have access to it if they require it. For example, an Effect may need to access the projection matrix or view position of the current camera to render correctly, or a NodeTree mesh (see the NodeTree section) will need to access the current camera's view frustum (see the Frustum section). This design decision was made to keep the camera system open-ended, so the programmer has the choice of how they want to use the camera.


View Transform
       The view transform locates the viewer in world space, transforming vertices into camera space. In camera space, the camera, or viewer, is at the origin, looking in the negative z-direction; the game engine uses the right-handed coordinate system, so -z points into the scene. The view matrix relocates the objects in the world around the camera's position (the origin of camera space) and orientation. There are many ways to create a view matrix. In all cases, the camera has some logical position and orientation in world space that is used as a starting point to create a view matrix that will be applied to the models in a scene. The view matrix translates and rotates objects to place them in camera space, where the camera is at the origin. One way to create a view matrix is to combine a translation matrix with rotation matrices for each axis.
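XNA wraps all of this up in Matrix.CreateLookAt, which builds a view matrix from the camera's position, the point it is looking at, and its up vector; the positions here are just example values:

                 // Building a view matrix from a camera position and a look-at target.
                 Vector3 cameraPosition = new Vector3(0, 5, 20);
                 Vector3 cameraTarget   = Vector3.Zero;
                 Matrix view = Matrix.CreateLookAt(cameraPosition, cameraTarget, Vector3.Up);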



Projection Transform
       The projection transformation can be thought of as choosing a lens for the camera. The projection matrix is typically a scale and perspective projection. The projection transformation converts the viewing frustum into a cuboid shape. Because the near end of the viewing frustum is smaller than the far end, this has the effect of expanding objects that are near the camera. Creating a projection does not change the world's Z axis; all it does is create a view frustum. (It is best explained here: Viewing Frustum.)
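In XNA this is a single call; the field-of-view angle is the same value I had to tune back in the very first builds. The near and far plane distances below are example values:

                 // A typical perspective projection: 45-degree field of view,
                 // the screen's aspect ratio, and near/far clipping planes.
                 Matrix projection = Matrix.CreatePerspectiveFieldOfView(
                     MathHelper.ToRadians(45),
                     GraphicsDevice.Viewport.AspectRatio,
                     1.0f,       // near plane
                     10000.0f);  // far plane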


Frustum
Each camera contains a viewing frustum.

"In 3D computer graphics, the viewing frustum or view frustum is the region of space in the modeled world that may appear on the screen; it is the field of view of the notional camera. The exact shape of this region varies depending on what kind of camera lens is being simulated, but typically it is a frustum of a rectangular pyramid."
-- (Wikipedia (2004), Viewing Frustum)
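XNA exposes the frustum as the BoundingFrustum class, built from the combined view and projection matrices. A camera can use it to skip drawing objects that fall outside the view; the bounding sphere below is just an example stand-in for a model's bounds:

                 // The camera's frustum comes from view * projection and can cull invisible objects.
                 BoundingFrustum frustum = new BoundingFrustum(view * projection);
                 BoundingSphere bounds = new BoundingSphere(Vector3.Zero, 5f);

                 if (frustum.Intersects(bounds))
                 {
                     // The object is at least partially visible; draw it.
                 }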


       So far, this is a brief overview of how 3D computer graphics programming works in XNA Game Studio. I will explain animation in later blog posts, and physics once I get it fully implemented. I can't stress enough how crucial physics is for 3D games, let alone a game engine; complex behaviors and object interactions bring 3D game worlds to life. At this stage, there is no collision detection just yet, since I am debating what physics library to use. I am deciding between JigLibX and BEPU, and although BEPU is more powerful, I am considering JigLibX because I am more familiar with it and I think I can fix many of the issues developers encountered when using it.


 

       Many people asked me, "Where are all the textures?! Why isn't everything textured?" The screenshot below demonstrates different parts of the models, which were color-coded so that I could visually tell them apart and know where to apply specific textures later. For example, the lime green color you see would represent a specific building texture, so wherever there is lime green, that texture gets applied. Many games reuse the same textures in their scenes to help cut down processing, which is why I have not applied any textures yet. I have to plan out which textures to use and what color each is associated with.



       I wasn't concerned too much about applying textures at such an early stage in development, especially when my computer at the time could barely run the program. Textures also weren't applied yet because I wanted to save processing power for more important things later, like polygon count, the number of models the engine can support and artificial intelligence. More importantly, I needed to program the complex physics that would truly bring the game to life and make it more playable. Many games save processing by using the same textures for a good amount of their models. I also had to take the file size into consideration, because Xbox Live Indie Games could only be up to 500MB when submitted. Texturing will take place during the final stages to give everything a finished look and polish.

       To conclude, I was very happy with the progress I made toward getting a great start on my game engine. I learned a lot about Visual Studio, the C# language and, most importantly, XNA. Ultimately I was able to get an object rendered and moving in 3D; for my demonstration, the object happened to be a low-poly MP5 gun model. Gravity was not implemented at the time because there was no collision detection between objects in the world, so the player was more or less flying in three dimensions with the camera following it. It also had a "spring effect", which worked well in emphasizing the speed as well as the fluidity of the player's movement.
