Sunday, January 17, 2016

29. Cross Fade Animation Blending Fixed


     I have fixed the cross fade function in the XNAnimation Library for XNA 4.0 & MonoGame. Cross Fade interpolates between two animation clips, fading out the current animation clip and fading in a new one. This allows the character's animations to transition more smoothly when their actions change. I am currently working on another technique called Additive Blending.
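
For anyone curious how this gets used in practice, here is a minimal sketch of the kind of calling code involved. The asset name "PlayerMarine" and the clip names are placeholders, and the XNAnimation member names (SkinnedModel, AnimationController, StartClip, CrossFade) are written from memory, so double-check them against your copy of the library.

using System;
using Microsoft.Xna.Framework;
using XNAnimation;
using XNAnimation.Controllers;

public class CrossFadeExample : Game
{
    GraphicsDeviceManager graphics;
    SkinnedModel skinnedModel;
    AnimationController controller;

    public CrossFadeExample()
    {
        graphics = new GraphicsDeviceManager(this);
        Content.RootDirectory = "Content";
    }

    protected override void LoadContent()
    {
        skinnedModel = Content.Load<SkinnedModel>("PlayerMarine");
        controller = new AnimationController(skinnedModel.SkeletonBones);

        // Start on the idle clip immediately.
        controller.StartClip(skinnedModel.AnimationClips["Idle"]);
    }

    // Call this when the character's action changes, e.g. idle -> run.
    void OnStartRunning()
    {
        // Fade the current clip out and the new clip in over 0.3 seconds
        // instead of snapping between poses.
        controller.CrossFade(skinnedModel.AnimationClips["Run"], TimeSpan.FromSeconds(0.3));
    }

    protected override void Update(GameTime gameTime)
    {
        controller.Update(gameTime.ElapsedGameTime, Matrix.Identity);
        base.Update(gameTime);
    }
}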


Saturday, December 5, 2015

28. More Xbox One Dev Kit Tools & Middleware Support

       Unfortunately, with the release of the Xbox One, I understand that Microsoft has moved away from XNA. I am also fully aware that Microsoft is promoting Unity as a replacement. I speak for all indie developers when I say it should be up to the game developers and game development companies to decide what tools they wish to use to create their games. They should be able to decide on the tools that fit their needs, not be forced onto a tool simply because the platform will not support anything else. I use XNA and MonoGame simply because it's what I prefer and what I am most comfortable and familiar with. I propose that the Xbox One dev kit should support XNA 4.0, XNA 4.0 Refresh and MonoGame. By allowing the Xbox One dev kit to support more tools and middleware, Microsoft would be opening themselves up to an even larger potential market. Microsoft's competitor, Sony, announced a while back that the PS4 now supports MonoGame and other middleware for registered developers.

       MonoGame is essentially the continuation of XNA 4.0. As a result, many developers who created Xbox Live Indie Games for the Xbox 360 have moved on to develop games for the PS4 since it supports MonoGame. Microsoft would win back respect from the developers it lost, the ones who created indie games for Xbox 360, by allowing the Xbox One dev kit to support XNA and MonoGame. This is why I am trying my hardest to convince Microsoft to make their dev kit support these platforms as well as other tools and middleware. Microsoft would not necessarily have to worry about creating a new version of XNA, because the creators behind MonoGame are essentially doing that right now. With that said, that doesn't mean MonoGame's creators can't be compensated or offered help from Microsoft in some shape or form. It took a lot of work to make XNA function on other platforms. Great games can be created regardless of which engines and tools were used to create them. Visit the website link below to vote.

Xbox.uservoice.com

Thanks for reading, and I would greatly appreciate your vote!


Long Live XNA/ MonoGame!!!

If there are any XNA or MonoGame developers reading this, please comment and share your thoughts below if you would like to see the Xbox One dev kit support these tools. 


Saturday, November 14, 2015

27. Euclideon's Unlimited Detail

       The false assumption in this day and age is that you need great hardware in order to have amazing graphics. The video below challenges that assumption. Hardware simply runs the graphics; it doesn't create the content, the artists behind the game do. I say this because I hear so many statements from gamers today like, "They should re-make this game that came out two years ago with Xbox One graphics!" That doesn't fully make sense, because it's the game engine that handles that for the most part. The real question is, "Can the hardware handle it?"
       Some game companies are remaking games today to show off the so-called "power" of next-gen consoles. We have seen this with Gears of War: Ultimate Edition for Xbox One, for instance. To do this, the artists have to change the actual graphics content in the game themselves. They take the same game and might increase the polygon counts of the 3D models to show off more detail, improve the lighting, and add features like better shaders, normal mapping, bump mapping, and maybe even ray tracing if the hardware can handle it. The problem I am getting at is that more and more games are becoming highly dependent on hardware specs.





I recently saw the image below posted by IGN on Facebook. It's a screenshot comparison of Call of Duty: Black Ops 3, which recently came out for both the Xbox 360 and Xbox One platforms. With point-cloud data, they could pull off that same level of detail you would see on Xbox One, on the Xbox 360. So why am I showing you all of this?

Man, Black Ops 3 looks rough on last gen... http://go.ign.com/v9iJqMv #UpAtNoon
Posted by IGN on Monday, November 9, 2015




Why? Simply put, gamers are literally buying into the idea that games on last-gen can't look as good as games on next-gen due to a lack of hardware specs. Euclideon has shattered this belief!



       I know there are many people out there who are skeptical, like the gentleman in the video above. Light-maps were not actually used; the lighting came from the scan itself. If this technology were used in game engines, they might need a way to remove that baked-in lighting after the scan so the game engine could handle the lighting effects itself using shaders and dynamic lighting. I will admit, I too was skeptical at first until I actually saw the technology run on the Nintendo Wii. This is wild because the Nintendo Wii, for example, lacks the hardware to run a game like The Last of Us or Crysis. With that said, game developers might still need advanced hardware to create games, but not necessarily to play them. So the question in your minds is:


Can it be Proven Further?
Maybe a Live Demonstration?





       Whenever I talk about unlimited detail and point-cloud data, I get responses like, "It's not graphics that make a game successful, it's gameplay!" I am fully aware of this. I don't favor graphics over gameplay! I would take the content in the game, the story, and how well the game plays over graphics any day.


How will this affect gamers and game development companies?
  1. People will not have to buy expensive gaming PCs and hardware to pull off the level of graphics that they want or to run their games on full settings.

  2. Regardless of what console or game platform you own, it will be even harder to tell the difference visually.

  3. Game Development companies will save money creating art assets for their games because they can simply laser scan objects from the real world. So instead of creating a high-poly 3D car model, why not laser scan a car?

  4. Game Development companies will be able to expand their audience and reach more people to play their games, thus ushering in more money. 

       I would love to somehow integrate point-cloud data into my game engine someday but that is the least of my concerns right now.


If this technology were in your game engine, how would you utilize it? 

       Before I would even consider implementing this technology into my own game or game engine, I would need to have fully answered and resolved the following questions:

  1. How would unlimited detail work for animated models?
           If you are unfamiliar with the term "rigging", the best way I can describe it is giving a 3D model a skeleton and a painted weight map. The weight map determines how much, and where, individual bones deform the mesh. You need more than just a model and a rig, though; a rig is only as good as the model's topology, which is the polygonal and edge flow of the model. If you look at the wireframe of a well-made 3D model, you can see how its faces and edges follow along muscle groups. After a model is rigged, it is animated and then exported. I realize I am omitting a lot, but this is a basic overview (a minimal skinning sketch appears after this list).
           My thought starting out would be to have my game engine convert my 3D models into point-cloud data strictly in-game. The program logic would essentially trick the computer into thinking the model should be treated like a traditional 3D model when in reality it is not. That way I can always go back and make changes to my models in my 3D modeling programs without having to worry whether those programs can import, render and support such models. A point-cloud model viewer mode in my engine could show what the model currently looks like when imported, and what it looks like after it's converted.


  2. Will unlimited detail be able to handle the textures for my models?
           So will this technology be able to interpret the UV coordinates? Will it matter? How will I be able to use the texture map I made for the model? If a model conversion in my game engine is possible, could the textures somehow be merged with the model?


  3. What is the extent of lighting?
           Honestly, I think I would let my game engine handle the lighting itself. Whether the lighting is specular, diffuse, etc. will be up to the game engine programmers and developers themselves. Shaders and real-time lighting would most likely be handled by the game engine itself.


  4. How will collision detection work with point-cloud data?
           Since my thought is to trick the computer into thinking that the converted 3D model should still be treated like a traditional 3D model, collision detection would work the same for the most part. The problem area is that unlimited detail can render scenes detailed down to individual rocks. So would those individual rocks in the terrain be treated as separate models with their own weight and physics? That would be an enormous number of physics calculations to process, since the engine would essentially have to run physics for every single rock. This could be addressed by duplicating a single rock model however many times throughout the game world by instancing the model: the computer then treats all the rocks as copies of one model and only has to make physics calculations for that specific rock model. Will I have to stick with my terrain processor starting out instead? 3D models in and of themselves often aren't as much of a strain on computers as the logic is; they are just one of several reasons why your computer may slow down, depending on how detailed they are and how many polygons they have. With point-cloud data, computers read and process tiny dots rather than polygons.


  5. Will point-cloud data work best for static objects in the game?
           
    Many game engines extract the triangle and polygon information from 3D models in order to create more accurate collision detection. With a point-cloud based model, will I still need to use primitive invisible meshes for collision detection? How accurate will it be?


  6. Now that my game engine is capable of applying fur to 3D models, will it still look the same after converted?
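
To tie back to question 1 above, here is a minimal sketch of the weight-map idea, written as plain linear blend skinning rather than anything point-cloud specific. Every name in it is illustrative only; it is not code from the Cyclone Game Engine.

using Microsoft.Xna.Framework;

public static class SkinningSketch
{
    // Deform one vertex by blending the transforms of the bones that influence it,
    // according to its painted weights (linear blend skinning). The bone matrices
    // would come from the animation player each frame.
    public static Vector3 SkinVertex(
        Vector3 bindPosition,   // vertex position in the bind pose
        int[] boneIndices,      // which bones influence this vertex
        float[] boneWeights,    // how strongly each bone influences it (weights sum to 1)
        Matrix[] boneMatrices)  // current pose relative to the bind pose
    {
        Vector3 result = Vector3.Zero;
        for (int i = 0; i < boneIndices.Length; i++)
        {
            // Move the vertex by this bone, then scale the contribution by its weight.
            Vector3 moved = Vector3.Transform(bindPosition, boneMatrices[boneIndices[i]]);
            result += moved * boneWeights[i];
        }
        return result;
    }
}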



Why is this technology not utilized in videogames yet?

       I sort of answered some of this in my own questions above. As it stands right now, many game companies (especially big-name ones which I won't mention) are either unaware of this software technology or simply skeptical of it. Some game companies are rebuilding their game engines with the Xbox One's and PS4's hardware in mind. With unlimited detail, this wouldn't necessarily be as big of an issue. Videogame companies will have to approach Euclideon themselves if they want to invest in this technology and have it running in their games. Other game companies have built technology similar to unlimited detail, but plan on saving it for later use since they encountered many issues.

       John Carmack, who is widely known as "the creator of the FPS", suggested that a system like Euclideon's was possible in theory. Coming from Carmack, this greatly surprised me. Euclideon's point-cloud technology runs entirely in software, so it does not use up a tremendous amount of processing power; its demonstrations did not utilize the GPU at all. Intel had once tried building its own system to do things along the lines of unlimited graphics. Unfortunately, it ended up closing that avenue and put it aside as something for the distant future when computers have "more power".


Minecraft creator Notch made extremely negative comments towards both Euclideon's Bruce Dell and the validity of his work.

"If their demo was real and you add up all the little atoms in the island then it would use petabytes of data. But everybody knows that if you see a tree twice, it's reconnected. There isn't a game that I know of that doesn't use the same object twice and I assure you that they didn't store that object two times separately."
-- Notch

       
       This is a misinterpretation of the point-cloud technology. Notch is referring to a rendering technique called instancing, which I touched on a bit earlier. In short, instancing is when the same model is duplicated multiple times to save processing, and the engine can potentially treat each duplicated model as its own separate entity. It's a way of "tricking" the computer. Games often render many copies of the same model, for instance by covering a landscape with trees or by filling a room with crates. Instancing helps performance by reducing the cost of repeated draw calls; in games, the calls needed to render a model are relatively expensive and can quickly add up if you are drawing hundreds, if not thousands, of models. Many Xbox Live Indie Games with "Minecraft-like" worlds utilize this technique.
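
To make the instancing idea concrete, here is a minimal sketch of hardware instancing as XNA 4.0's HiDef profile exposes it: the mesh is uploaded once, a second vertex stream carries one world matrix per copy, and a single draw call renders every copy. The buffer layout and the effect are assumptions for illustration, not code from any particular game.

using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

public static class InstancingSketch
{
    // One world matrix per instance, passed to the vertex shader as four
    // Vector4 texture-coordinate channels.
    static readonly VertexDeclaration InstanceDeclaration = new VertexDeclaration(
        new VertexElement(0,  VertexElementFormat.Vector4, VertexElementUsage.TextureCoordinate, 1),
        new VertexElement(16, VertexElementFormat.Vector4, VertexElementUsage.TextureCoordinate, 2),
        new VertexElement(32, VertexElementFormat.Vector4, VertexElementUsage.TextureCoordinate, 3),
        new VertexElement(48, VertexElementFormat.Vector4, VertexElementUsage.TextureCoordinate, 4));

    public static void DrawInstances(
        GraphicsDevice device,
        VertexBuffer geometryBuffer,   // the tree/crate mesh, uploaded once
        IndexBuffer indexBuffer,
        int vertexCount,
        int primitiveCount,
        Matrix[] instanceWorlds,       // one transform per copy placed in the world
        Effect instancingEffect)       // a shader that applies the per-instance matrix
    {
        // Upload the per-instance transforms (a real engine would cache and reuse this buffer).
        DynamicVertexBuffer instanceBuffer = new DynamicVertexBuffer(
            device, InstanceDeclaration, instanceWorlds.Length, BufferUsage.WriteOnly);
        instanceBuffer.SetData(instanceWorlds);

        // Bind the shared geometry in stream 0 and the instance data in stream 1.
        device.SetVertexBuffers(
            new VertexBufferBinding(geometryBuffer, 0, 0),
            new VertexBufferBinding(instanceBuffer, 0, 1));
        device.Indices = indexBuffer;

        // One draw call renders every copy.
        foreach (EffectPass pass in instancingEffect.CurrentTechnique.Passes)
        {
            pass.Apply();
            device.DrawInstancedPrimitives(
                PrimitiveType.TriangleList, 0, 0,
                vertexCount, 0, primitiveCount, instanceWorlds.Length);
        }
    }
}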

       Cevat Yerli, the head of Crytek however, didn't think that Euclideon was creating some big hoax. He went on to say that he truly respects the work Euclideon has accomplished and that unlimited detail is absolutely believable. In his past keynotes, he always talked about how such technology could be possible in the future.

       With so many negative reactions raised at Bruce Dell's claim that Unlimited Detail will improve graphics by 100,000 times, has the industry become... jealous? Is this part of the reason why we haven't seen game companies approach Euclideon? Still, there are plenty of positive reactions as well.


What can we do as gamers?

       As gamers, we need to bring this technology up to game companies who are building games with graphics of intense realism. We need to stop questioning Euclideon about when this technology will be implemented in games, and instead ask those questions of the game companies themselves. We can encourage game companies to get involved by creating our own petitions if we have to. As game consumers, we need to realize that we have the power. The purchasing decisions we make contribute to the decisions game companies make with their games, for better and for worse. We've accepted the notion that last-gen is somehow holding games back, so we purchase monster gaming computers, new graphics cards, next-gen consoles and hardware. With unlimited detail, videogame graphics would no longer be held back by hardware specs. Whether we realize it or not, we as gamers play a crucial role in whether game development companies learn from their mistakes and take some risks, because we constantly vote with our wallets.


For more updates, you can also follow their Facebook Page:
https://www.facebook.com/Euclideon/

So what are your thoughts on Euclideon's technology breakthrough?
Please comment below  



Wednesday, November 11, 2015

26. Creating the Logos


       I created the Cyclone Game Engine logo by accident. Initially, I wasn't trying to create a logo at all. I was working on a flash animation back in college and I needed to come up with designs for simple shapes. I simply wanted to make them spin and rotate like a wheel. As I was working on creating the shapes in Adobe Illustrator, my close friend Patrik Sjoberg stopped by to see what I was working on.

"Hey, that looks cool man, "he said. That could be the logo design for your game engine man.

Patrik gave me the idea of using the design as the logo for the game engine which I was working on in what little of my spare time I had outside of my job and school. 



       In the near future, I plan to find graphic designers to help create logos, buttons and designs that will go into the Cyclone Game Engine's User-Interface. 




       The image above shows the first logo design for Steel Cyclone Studios. I am not a graphic designer by any means. I was rushing with this design at the time because it wasn't the most important thing I was working on; I needed a logo quickly when I registered my business through the Kentucky Secretary of State. This logo was just a start. There are plenty of things wrong with it, but starting out that wasn't important, and I knew I could make improvements to the design later. At the time, my game projects and making as much progress as possible were more important.



       Later, I redesigned my logo to look something like the image above. This was a major improvement, and I made tons of iterations and sketches on paper. But still, something didn't sit well with me. I wanted to use no more than three colors, and as I stared at this design, I couldn't escape the feeling that it looked more like a sports team logo in my own opinion. I needed to make the design simpler somehow.



       And then it happened... I took the best of both worlds from the original and the new design. I made the tornado out of simple shapes and made them... "steel-metal-like", if that makes any sense. It was a very hard effect for me to pull off. I sort of made this by accident as well, just like my engine logo. Accidents can turn out to be amazing things.




Tuesday, November 10, 2015

25. XNA & MonoGame Tiff Importer

       By default, the XNA Framework does not natively support TIFF-based image files. To use the native content pipeline importers, you will need to use PNG, JPG, DDS, TGA, or BMP. The following shows you how to create a TIFF importer.

Make sure you add the System.Drawing Reference to the Importer.

Here is the Source Code:

using System;
using System.Collections.Generic;
using System.Drawing;

using System.Linq;
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;
using Microsoft.Xna.Framework.Content.Pipeline;
using Microsoft.Xna.Framework.Content.Pipeline.Graphics;
using Microsoft.Xna.Framework.Content.Pipeline.Processors;

namespace TiffLib
{
    /// <summary>
    /// Content importer that loads .tif and .tiff image files through System.Drawing
    /// and hands the pixel data to the standard TextureProcessor. This should be
    /// part of a Content Pipeline Extension Library project.
    /// </summary>
    [ContentImporter(".tif", ".tiff", DisplayName = "TIFF Importer", DefaultProcessor = "TextureProcessor")]
    public class TiffImporter : ContentImporter<Texture2DContent>
    {
        public override Texture2DContent Import(string filename, ContentImporterContext context)
        {
            // Load the TIFF through System.Drawing, since the stock XNA importers
            // do not understand the format.
            Bitmap bitmap = Image.FromFile(filename) as Bitmap;
            var bitmapContent = new PixelBitmapContent<Microsoft.Xna.Framework.Color>(bitmap.Width, bitmap.Height);

            // Copy every pixel into the content pipeline's bitmap type, converting
            // from System.Drawing.Color to the XNA Color type as we go.
            for (int i = 0; i < bitmap.Width; i++)
            {
                for (int j = 0; j < bitmap.Height; j++)
                {
                    System.Drawing.Color from = bitmap.GetPixel(i, j);
                    Microsoft.Xna.Framework.Color to = new Microsoft.Xna.Framework.Color(from.R, from.G, from.B, from.A);
                    bitmapContent.SetPixel(i, j, to);
                }
            }

            bitmap.Dispose();

            return new Texture2DContent()
            {
                Mipmaps = new MipmapChain(bitmapContent)
            };
        }
    }

}

Saturday, November 7, 2015

24. Game Project Sneak Peek and Terrain Engine Update


Long-Term Project Sneak Peek
The following video provides a series of early work in progress screenshots of my 3D models and assets I have created for my long-term game project. These were taken some time ago.
Posted by Steel Cyclone Studios LLC on Monday, October 19, 2015

     
       Above is a sneak peek of some of the 3D models for the long-term game project I am working on. The footage is still from a very early stage of development. The textures for many of the models are simply a starting point; I will eventually use Substance Painter to texture the models and achieve much better results.



Cyclone Game Engine Terrain and Snow
The Cyclone Game Engine now has the ability to generate a terrain landscape by reading a bitmap and using the intensity of its pixels as height values. The game engine also supports Billboarding which is both a fast and efficient way to render lots of grass on the terrain. To save memory, the grass is actually 2d images rendered in 3D. The grass moves depending on the speed of the wind. As you can see in the video, I also got snow working which is also effected by the direction and speed of the wind. I am working further on the game engine’s weather component and possibly a day and night system.
Posted by Steel Cyclone Studios LLC on Thursday, December 5, 2013

       The Cyclone Game Engine now has the ability to generate a terrain landscape by reading a bitmap and using the intensity of its pixels as height values. I found that the heightmap processor that comes with JigLibX was similar to the XNA Generated Geometry Sample. A custom content pipeline processor converts the heightmap into 3D geometry. The game engine also supports billboarding, which is both a fast and efficient way to render lots of grass on the terrain. To save memory, the grass is actually made of 2D images rendered in 3D. The grass moves depending on the speed of the wind. As you can see in the video, I also got snow working, which is also affected by the direction and speed of the wind. The snow effect is implemented as a 3D particle system using point sprites. I am working further on the game engine's weather component and possibly a day and night system. Weather effects can help contribute to the atmosphere of the game since it is mostly set in an outdoor world.
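
As a rough illustration of the heightmap idea, here is a minimal sketch that reads the pixel intensities at runtime and builds one vertex per pixel. The engine itself does this in a custom content pipeline processor; the index buffer, normals and scale values are left out or made up here.

using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

public static class HeightmapSketch
{
    // Turn a grayscale bitmap into a grid of height values: the brighter the
    // pixel, the higher the terrain at that point.
    // (Assumes the heightmap texture is imported as SurfaceFormat.Color.)
    public static float[,] ReadHeights(Texture2D heightmap, float heightScale)
    {
        Color[] pixels = new Color[heightmap.Width * heightmap.Height];
        heightmap.GetData(pixels);

        float[,] heights = new float[heightmap.Width, heightmap.Height];
        for (int y = 0; y < heightmap.Height; y++)
        {
            for (int x = 0; x < heightmap.Width; x++)
            {
                // Use the red channel as the intensity (0..255).
                heights[x, y] = pixels[y * heightmap.Width + x].R / 255f * heightScale;
            }
        }
        return heights;
    }

    // Build one vertex per pixel; triangle indices and real normals are omitted.
    public static VertexPositionNormalTexture[] BuildVertices(float[,] heights, float terrainScale)
    {
        int width = heights.GetLength(0);
        int depth = heights.GetLength(1);
        var vertices = new VertexPositionNormalTexture[width * depth];
        for (int z = 0; z < depth; z++)
        {
            for (int x = 0; x < width; x++)
            {
                vertices[z * width + x] = new VertexPositionNormalTexture(
                    new Vector3(x * terrainScale, heights[x, z], z * terrainScale),
                    Vector3.Up,
                    new Vector2(x / (float)(width - 1), z / (float)(depth - 1)));
            }
        }
        return vertices;
    }
}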



       Lately, I have been making improvements to my terrain. I am working on shader effects for the terrain and applying better textures and lighting. The image above shows my terrain for planet Mars.



Monday, August 3, 2015

23. Character Physics Controller Part 1


       An important feature of modern physics libraries is the character controller. This feature may seem obvious to implement, but it is not so straightforward for every game, simply because a controller usually has to "break" the laws of physics in order to follow the user's commands. In this update, I am utilizing the JigLibX physics library. This video shows the very first successful build of my basic character interacting with the physics world. The character for now is essentially a capsule which gets moved by player input, and the animated character model is encapsulated inside the capsule. I am adding a crouch function for the player soon. Another feature I am currently adding is making the jumping distance proportional to the character's speed while preserving its momentum.
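
Here is a minimal sketch of the speed-proportional jump, independent of JigLibX; the constants are placeholders, and the real controller works on a physics capsule rather than a bare velocity field.

using Microsoft.Xna.Framework;

public class CharacterControllerSketch
{
    public Vector3 Velocity;
    public bool IsOnGround;

    const float BaseJumpSpeed = 5f;     // vertical speed for a standing jump
    const float JumpSpeedBonus = 0.4f;  // extra lift per unit of horizontal speed
    const float Gravity = -9.8f;

    public void Jump()
    {
        if (!IsOnGround)
            return;

        // Faster characters jump farther: scale the vertical impulse by the
        // current horizontal speed, and leave the horizontal velocity alone
        // so momentum carries through the jump.
        Vector3 horizontal = new Vector3(Velocity.X, 0f, Velocity.Z);
        Velocity.Y = BaseJumpSpeed + horizontal.Length() * JumpSpeedBonus;
        IsOnGround = false;
    }

    public void Update(float dt)
    {
        // While airborne, only gravity changes the velocity.
        if (!IsOnGround)
            Velocity.Y += Gravity * dt;
    }
}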

       I am fixing the first-person camera's clip planes so that you will not see the player's head and other triangles from the character's model mesh. In this video, I also try to show the collision skins of the models. I am working on an 'Editor' mode where you leave the first-person perspective and enter a Free-Cam mode. From there, you will be able to access the Level Editor. I am still programming the Level Editor and adding more functionality to it so you can add, remove, rotate, scale and translate models in your levels. The following video is a blank sandbox for demonstration purposes, and the building models in the scene are also for demonstration. Visualizing the collision skins decreases the frame rate as of now, and I am fixing this. I now have it working so that you are fully attached to the character at eye level, and I am able to move the bones of the character model so that the head and arms move according to the player's input. For example, when you look down, the character's head looks down as well and you can also see your feet.

       Allowing players to look down and see the feet and movements of the characters they control makes them feel more immersed, as if they are actually in the character's shoes and looking through the character's eyes. That way, players don't feel detached from the characters they control. Players can already see their character's hands, arms and the weapons they're holding, but it's good to feel as if they are that character. I am working on an animation feature called Additive Blending, which allows two animation clips to be played on the animated model simultaneously.

Saturday, June 6, 2015

22. Real-Time Reflection


       I have successfully converted the XNA 3.1 Real-Time Reflection example from the XNA Community website to XNA 4.0. This example of real-time reflection was originally created by Canton Javier Ferroro. The reflection of the ship is distorted by the shape of the wood. The technique renders the scene from a reflected camera angle using a clip plane and saves the result into a texture, which is then used when rendering the scene. The effect is similar to what you see in games such as NBA 2K8. I will be making my source code for the XNA 4.0 version available soon.
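
For anyone converting a similar effect, here is a minimal sketch of the reflection pass in XNA 4.0 terms, not Ferroro's original code: mirror the camera about the reflective plane, render into a RenderTarget2D, and clip the geometry below the plane in the shader (fixed-function clip planes no longer exist in 4.0). The drawScene delegate, target size and parameter names are placeholders.

using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

public class ReflectionSketch
{
    GraphicsDevice device;
    RenderTarget2D reflectionTarget;

    public ReflectionSketch(GraphicsDevice device)
    {
        this.device = device;
        reflectionTarget = new RenderTarget2D(device, 512, 512, false,
            SurfaceFormat.Color, DepthFormat.Depth24);
    }

    public Texture2D RenderReflection(Plane floorPlane, Matrix view, Matrix projection,
                                      System.Action<Matrix, Matrix> drawScene)
    {
        // Mirror the camera about the reflective surface.
        Matrix reflectedView = Matrix.CreateReflection(floorPlane) * view;

        // Render the mirrored scene into the off-screen texture. In XNA 4.0 the
        // clipping of geometry below the plane is done in the pixel shader (clip()).
        device.SetRenderTarget(reflectionTarget);
        device.Clear(Color.Black);
        drawScene(reflectedView, projection);
        device.SetRenderTarget(null);

        // The result is then bound to the reflective surface's effect, e.g.
        // floorEffect.Parameters["ReflectionMap"].SetValue(reflectionTarget);
        return reflectionTarget;
    }
}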


Sunday, April 5, 2015

21: Real-Time Fur Working in XNA 4.0 & MonoGame


       With the help of this awesome tutorial I found here by Catalin Zima, I was able to implement real-time fur and convert it to XNA 4.0. In this post, I will show you how. Changing the draw code to the code below made it work in XNA 4.0. The XNA 3.1 to 4.0 Conversion Cheat Sheet was also a big help.


        private void DrawModelGeometry(Model model, Matrix[] bones, Effect effect)
        {
            foreach (ModelMesh mesh in model.Meshes)
            {
                foreach (ModelMeshPart meshpart in mesh.MeshParts)
                {
                    effect.Parameters["World"].SetValue(bones[mesh.ParentBone.Index]); // set World matrix
                    effect.Parameters["Texture"].SetValue(furColorTexture);            // set texture
                    // effect.CommitChanges(); // commit changes (XNA 3.1)
                    // graphics.GraphicsDevice.VertexDeclaration = meshpart.VertexDeclaration; (XNA 3.1)
                    graphics.GraphicsDevice.SetVertexBuffer(meshpart.VertexBuffer);
                    // graphics.GraphicsDevice.Vertices[0].SetSource(mesh.VertexBuffer, meshpart.StreamOffset, meshpart.VertexStride); (XNA 3.1)
                    graphics.GraphicsDevice.Indices = meshpart.IndexBuffer;

                    // Draw the geometry.
                    graphics.GraphicsDevice.DrawIndexedPrimitives(PrimitiveType.TriangleList, 0, 0,
                        meshpart.NumVertices, meshpart.StartIndex, meshpart.PrimitiveCount);
                }
            }
        }

        private void DrawFurModel(Model model)
        {
            Matrix[] bones = new Matrix[model.Bones.Count];
            model.CopyAbsoluteBoneTransformsTo(bones);

            furEffect.Parameters["Displacement"].SetValue(displacement);
            furEffect.Parameters["MaxHairLength"].SetValue(maxHairLength);
            furEffect.Parameters["FurTexture"].SetValue(furTexture);

            furEffect.Parameters["View"].SetValue(camera.View);
            furEffect.Parameters["Projection"].SetValue(camera.Projection);
            furEffect.Parameters["MaxHairLength"].SetValue(maxHairLength);
            furEffect.Parameters["Texture"].SetValue(furColorTexture);

           // furEffect.Begin();
            for (int i = 0; i < nrOfLayers; i++)
            {
                furEffect.Parameters["CurrentLayer"].SetValue((float)i / nrOfLayers);
                //furEffect.CommitChanges();
                //furEffect.CurrentTechnique.Passes[0].Begin();
                furEffect.CurrentTechnique.Passes[0].Apply();
                //draw geometry of current layer
                DrawModelGeometry(model, bones, furEffect);
                //furEffect.CurrentTechnique.Passes[0].End();
            }
            //furEffect.End();
        }

If you are having trouble watching the video, below are some screenshots.










I am working on applying this fur shader effect to the animated spider beast model. Stay tuned!


Sunday, March 8, 2015

20: XNA 4.0 & MonoGame Robot Animation Processor


       I have successfully converted the XNA 2.0 Robot Game animation processor to XNA 4.0 and MonoGame with a few modifications. This is a demonstration video of the animation processor running in my game engine. The scene is a temporary environment I modeled over a year ago. Notice I disabled the sky-box and collision detection in this test. The processor animates the individual bones of the mech and can also play multiple animations simultaneously. For example, the run animation has more impact on the legs, while the shooting animation has more impact on the torso and arms. Once the first stages of my AI are complete, I will post a video showing the mech fighting the spider after I get skinned animation interpolation working. To learn more about how to convert your older XNA game project over to XNA 4.0, visit my page here. Thanks for watching, and I greatly appreciate your support.


Monday, February 23, 2015

19: Camera Recoil Effect Test


       This is a brief video demonstration of the camera recoil effect I recently got working. It is very recent, so it is by no means perfect and is still an early work-in-progress. The recoil effect increases immersion by providing visual feedback in the form of shaking the player's camera as they shoot, an effect used in many first-person shooters. Tactile feedback is provided as the player shoots by vibrating the player's gamepad controller. The sky-box was disabled temporarily in this video since I am working on volumetric clouds. The weapon sound effect was a bit off when capturing the footage, so my apologies in advance. That weird green stuff you see is the new blood-splatter particle effect I got working.
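
As a rough idea of how such a kick can work, here is a minimal sketch, not the engine's actual recoil code: firing adds a small pitch offset to the camera, and that offset eases back toward zero every frame. All of the constants are placeholders.

using Microsoft.Xna.Framework;

public class CameraRecoilSketch
{
    public float RecoilPitch;          // extra pitch added to the camera, in radians
    const float KickPerShot = 0.03f;   // placeholder kick strength
    const float RecoverySpeed = 8f;    // how quickly the kick decays

    public void OnShotFired()
    {
        RecoilPitch += KickPerShot;
    }

    public void Update(float dt)
    {
        // Ease the recoil back to zero so the camera settles after firing.
        RecoilPitch = MathHelper.Lerp(RecoilPitch, 0f, MathHelper.Clamp(RecoverySpeed * dt, 0f, 1f));
    }

    // Combine the kick with the player's normal look rotation when building the view matrix.
    public Matrix ApplyTo(float yaw, float pitch, Vector3 position)
    {
        Matrix rotation = Matrix.CreateFromYawPitchRoll(yaw, pitch + RecoilPitch, 0f);
        Vector3 forward = Vector3.Transform(Vector3.Forward, rotation);
        return Matrix.CreateLookAt(position, position + forward, Vector3.Up);
    }
}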



       Above is a screenshot of the new, exaggerated blood-splatter particle effect within the spider. I am placing bounding spheres within the spider to handle collision detection, so whenever a bullet fired from the player intersects one of the bounding spheres, the blood-splatter will emit.
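
Here is a minimal sketch of that hit test using XNA's built-in Ray and BoundingSphere types; the sphere list and the particle hookup are placeholders for the engine's own data.

using Microsoft.Xna.Framework;

public static class HitDetectionSketch
{
    // Test a bullet, modeled as a ray, against the bounding spheres placed
    // inside the spider. Returns true and reports where the blood-splatter
    // emitter should be spawned.
    public static bool TryGetHitPoint(Ray bulletRay, BoundingSphere[] hitSpheres, out Vector3 hitPoint)
    {
        foreach (BoundingSphere sphere in hitSpheres)
        {
            float? distance = bulletRay.Intersects(sphere);
            if (distance.HasValue)
            {
                // Emit the particle effect at the point of impact.
                hitPoint = bulletRay.Position + bulletRay.Direction * distance.Value;
                return true;
            }
        }

        hitPoint = Vector3.Zero;
        return false;
    }
}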



Saturday, January 17, 2015

18: Vehicle Physics Tests


       In my first vehicle test, I used JigLibX 4.0 for the physics. I added a Chase Camera to follow the vehicle while driving; you can download the source code of the chase camera sample here. Chase cameras are great for creating a feeling of realistic motion: as the car accelerates and brakes, its distance from the camera changes. The problem I ran into is that when the vehicle accelerates to very high speeds, the chase camera tends to fall too far behind the vehicle, and I found that simply reducing the springiness of the camera won't fix it. My solution is to add an extra Z-Stiffness property that controls how strongly the camera tries to keep up with the moving vehicle. You probably also noticed that the rotation of the wheels is off greatly.
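
Here is a minimal sketch of what I mean by the extra Z-Stiffness, separate from the actual chase camera sample code: the camera keeps its normal springy follow, but gets an additional pull whenever it falls farther behind the target than a chosen limit. The numbers are placeholders.

using Microsoft.Xna.Framework;

public class ChaseCameraSketch
{
    public Vector3 Position;
    public float Springiness = 0.15f;    // normal easing toward the desired spot
    public float ZStiffness = 4f;        // extra correction for falling behind
    public float MaxFollowDistance = 12f;

    public void Update(Vector3 desiredPosition, Vector3 chaseTargetPosition, float dt)
    {
        // Standard springy follow toward the desired camera position.
        Position = Vector3.Lerp(Position, desiredPosition, Springiness);

        // If the vehicle has pulled too far ahead, apply a stronger pull so the
        // camera keeps up at high speed.
        Vector3 toTarget = chaseTargetPosition - Position;
        float distance = toTarget.Length();
        if (distance > MaxFollowDistance)
        {
            float overshoot = distance - MaxFollowDistance;
            Position += Vector3.Normalize(toTarget) * overshoot * MathHelper.Clamp(ZStiffness * dt, 0f, 1f);
        }
    }
}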



       For my second test with vehicle physics, I used the BEPU Physics Engine. In the following video above, I was able to add one of the custom monster trucks I modeled to the engine. To do this, I declared a model variable for the vehicle's body at the top like so: 

Model car; // This will be the vehicle's body

Under the following line:

vehicle = new Vehicle(body);

add: 

vehicle.Body.Tag = car;

Then Load the car model: 

car = Content.Load<Model>("car");

Then in your Draw Method, draw the car model there. 
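
Here is a minimal sketch of that draw step. It assumes the XNA-era build of BEPU where the vehicle body's WorldTransform is an XNA Matrix (newer BEPU versions use their own math types and need a conversion); pass vehicle.Body.WorldTransform in as bodyWorld, and treat the BasicEffect setup as illustrative only.

private void DrawCar(Model car, Matrix bodyWorld, Matrix view, Matrix projection)
{
    // The physics body drives the graphics: its transform becomes the world matrix.
    foreach (ModelMesh mesh in car.Meshes)
    {
        foreach (BasicEffect effect in mesh.Effects)
        {
            effect.EnableDefaultLighting();
            effect.World = bodyWorld;
            effect.View = view;
            effect.Projection = projection;
        }
        mesh.Draw();
    }
}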



       In this next vehicle test, I continued with JigLibX 4.0 for the physics. Notice the rotation of the wheels is fixed. At first, I thought the problem was due to the coordinates at which the wheel model was placed and exported in 3D space from Autodesk Maya; the wheel model in Maya has to be centered at the origin. The program passes the wheel model through an array to create copies for the other wheels of the vehicle. The problem was actually the model itself, due to the amount of detail it has. So I placed a low-poly wheel inside of it to be used as a guide to get the wheel rotating correctly. I forgot to render it transparent so that it would not be visible to the player.

       The car object class inside the vehicle folder does not actually use spheres for the wheels. Instead, it casts a number of rays in the wheel's direction to detect the ground. I have greatly improved the controls along with the overall handling of the vehicle after a few adjustments. I found great help from a tutorial in the JigLibX forums by nikescar; check out nikescar's Vehicle Class Modifications for more information. I also added a nifty power-sliding effect, and you can take tight turns by letting off the gas some. As of now, it runs a little slow on Xbox 360 and I am optimizing it. To help with JigLibX's physics performance on Xbox 360, I am changing my foreach statements to for loops, and I will be adding separate loading threads to help improve the frame-rate issues.




       In this next vehicle physics test above, I was testing how well the model handles on a static mesh composed of many triangles. The 3D terrain model is composed of many hills; making this test pretty fun when driving over them at high speeds. The vehicle also did a complete 360 barrel roll in mid air while falling as I drove off a hill near the end. This randomly occurred, so it was a good thing I was able to capture the moment on video. If you have found this post helpful, please post a comment in the comment section.


Monster Truck 3D Model Work-In-Progress



Below are screenshots of vehicles I plan on creating physics for.


Laser Tank Car







Right now, I have the custom laser tank car I modeled working. I just have to animate the tracks which will take some time. The model didn't fully export correctly but I found out the problem. Overall I was glad to see it in-game. 



Here is a side view of the custom laser tank car. 



I am also working on driving vehicles in first-person perspective. 



I am figuring out how the camera will be positioned when driving in first-person and how to animate certain individual parts of the vehicle.


Speedboats

I am also working on water physics so that soon, the player can drive water-based vehicles like the one I modeled in this image above. 


Gyrocopters

I have flight working for the most part but I am working on how flight for a helicopter or gyro-copter will function.  




Thursday, September 4, 2014

17. Particle System



This is the beginnings of the Cyclone Game Engine's particle system. It is mostly a 3D particle system implemented by using point sprites. It animates the particles entirely on the graphics card by using a custom vertex shader, so that it can draw large numbers of particles with minimal CPU overhead.

When displaying large numbers of particles, games can easily become bottlenecked by the amount of CPU work involved in updating everything and transferring the latest particle positions to the GPU for drawing. The game engine avoids that by animating the particles entirely on the GPU, so the CPU overhead remains low regardless of how many particles are active. Moving this work to the GPU leaves the CPU free for many other things such as gameplay, physics and artificial intelligence.

When a new particle is created, the CPU fills a vertex structure with the position, velocity, and creation time. After this vertex is uploaded to the GPU, the CPU never needs to touch it again. Every frame, the current time is set as a vertex shader parameter. When the GPU draws the particles, it can work out the age of each one by comparing the creation time (which is stored as part of the vertex data) with the current time. Knowing the starting position, velocity, and age, the shader can compute the position and draw a point sprite at this location.
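
Here is a minimal sketch of the kind of vertex the CPU fills out once per particle, mirroring the description above. The exact layout in the engine may differ and likely carries extra data, such as per-particle random values.

using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

public struct ParticleVertexSketch : IVertexType
{
    public Vector3 Position;   // where the particle was spawned
    public Vector3 Velocity;   // initial velocity; the shader integrates it over time
    public float CreationTime; // compared against a CurrentTime shader parameter

    public static readonly VertexDeclaration VertexDeclaration = new VertexDeclaration(
        new VertexElement(0,  VertexElementFormat.Vector3, VertexElementUsage.Position, 0),
        new VertexElement(12, VertexElementFormat.Vector3, VertexElementUsage.Normal, 0),
        new VertexElement(24, VertexElementFormat.Single,  VertexElementUsage.TextureCoordinate, 0));

    VertexDeclaration IVertexType.VertexDeclaration
    {
        get { return VertexDeclaration; }
    }
}

// Inside the vertex shader, the age and position then fall out of the stored data:
//     float age = CurrentTime - input.CreationTime;
//     float3 position = input.Position + input.Velocity * age + 0.5 * Gravity * age * age;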

New particles are always added to the end of the vertex buffer, and old ones are removed from the start. Because all particles last the same amount of time, this means the active particles will always be grouped together in a consecutive region of the buffer, and so can all be drawn in a single call. The CPU is responsible for adding new particle vertices to the end of the buffer, and for retiring old ones from the start, but it doesn't need to do anything with the active particles in the middle of the buffer. Even games with thousands of particles typically only create and retire a few each frame, so the resulting CPU workload is very low. The Cyclone Game Engine's particle system can also be used for efficient 2D particles by setting an orthographic matrix as the camera projection.

The video above is a demonstration of the particle system. The smoke is a 3D particle system where the game engine creates a giant plume of long lasting smoke. The fire is an animated 2D billboard sprite rendered in 3D, mostly done to save processing.



Thursday, June 5, 2014

16. Support multiple animations from a single .fbx file for XNA 4.0, XNA 4.0 Refresh & MonoGame!!!



       One of the biggest issues developers had, and still have, with XNA 4.0 is that it no longer supports multiple animations in a single .fbx model file. One approach is to export each animation into a separate FBX file, and then use a custom processor to merge them. You can find a great example of this approach on Shawn Hargreaves' blog here, and an open source implementation of it here. To explain this better, let's say you have 5 characters in your game, and each of those characters has 15 animations. For all five characters, you would have to export each of their animations into separate individual files. This can be a pain and cause hassles for developers, especially if your game has many characters.

       It was because of this flaw in XNA 4.0 that many developers found themselves leaving the framework entirely. Some had no problems with their .fbx files using Blender as their modeling and animation program of choice. Many abandoned .fbx altogether and switched to DirectX (.X) files. Although .X is a decent alternative to .FBX, it's a completely different format that may or may not be supported by your 3D software, which posed some problems for developers using XNA 4.0. This flaw in XNA 4.0 hindered my progress for quite some time, but I simply didn't want to give up on creating a multi-take animation importer. It took a long time, but I was able to fix this and get multiple animations working from a single .fbx file after modifying the XNAnimation 4.0 library. I have posted the source code below for everyone. I hope others will find this useful.


How to extend the XNAnimation library for XNA 4.0 & MonoGame
       In the XNAnimation Pipeline, open up SkinnedModelImporter.cs and replace it with the following source code below.

Source Code Below:

using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.IO;
using System.Runtime.InteropServices;
using System.Text;
using System.Xml;
using System.Linq;
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Content;
using Microsoft.Xna.Framework.Content.Pipeline;
using Microsoft.Xna.Framework.Content.Pipeline.Graphics;
using Microsoft.Xna.Framework.Content.Pipeline.Processors;
using Microsoft.Xna.Framework.Design;


// The type to import.
using TImport = Microsoft.Xna.Framework.Content.Pipeline.Graphics.NodeContent;

// Change the namespace to suit your project
namespace XNAnimationPipeline.Pipeline
{
    [ContentImporter(".fbx", DisplayName = "Multi-take FBX Importer", DefaultProcessor = "")]
    public class SkinnedModelImporter : FbxImporter
    {
        private List<string> _animfiles;
        private List<string> _fbxheader;
        private TImport _master;
        private ContentImporterContext _context;
        
        public override TImport Import(string filename, ContentImporterContext context)
        {
            _context = context;

            // _animfiles will contain list of new temp anim files.
            _animfiles = new List<string>();

            // Decouple header and animation data.
            ExtractAnimations(filename);
            
            // Process master file (this will also process the first animation)
            _master = base.Import(filename, context);

            // Process the remaining animations.
            foreach (string file in _animfiles) {
                TImport anim = base.Import(file, context);
                
                // Append animation to master NodeContent.
                AppendAnimation(_master, anim);
            }
            
            // Delete the temporary animation files.
            DeleteTempFiles();
            
            return _master;
        }
        
        private void AppendAnimation(NodeContent masternode, NodeContent animnode)
        {
            foreach (KeyValuePair<string, AnimationContent> anim in animnode.Animations) {
                masternode.Animations.Add(anim.Key, anim.Value);
            }
            
            //foreach (NodeContent child in animnode.Children) {
            //    if (child != null) {
            //        AppendAnimation(child);
            //    }
            //}

            for (int i = 0; i < masternode.Children.Count; i++) {
                if (animnode.Children[i] != null) {
                    AppendAnimation(masternode.Children[i], animnode.Children[i]);
                }
            }
        }
        
        private void ExtractAnimations(string filename)
        {
            List<string> masterFile = File.ReadAllLines(filename).ToList();
            string path = Path.GetDirectoryName(filename);
            int open_idx = 0,
                length,
                num_open = -1,
                filenum = 0;
            bool foundTake = false;

            int idx = masterFile.IndexOf("Takes:  {") + 1;
            _fbxheader = masterFile.Take(idx).ToList();
            List<string> anims = masterFile.Skip(idx).ToList();
            
            // Extract each animation and create a temporary anim file.
            for (int i = 0; i < anims.Count; i++) {
                if (anims[i].Contains("Take: ")) {
                    open_idx = i;
                    num_open = 0;
                    foundTake = true;
                }
                
                if (anims[i].Contains("{") &&
                    foundTake) {
                    num_open++;
                }

                if (anims[i].Contains("}") &&
                    foundTake) {
                    num_open--;
                }
                
                if (num_open == 0 &&
                    foundTake) {
                    // Skip first animation since this is processed in the master
                    // fbx file.
                    if (filenum > 0) {
                        length = i - open_idx + 1;
                        
                        // Create temp file from header + anim data.
                        CreateTempFile(Path.Combine(path, "tmp.anim." + filenum + ".fbx"),
                                       anims.Skip(open_idx).Take(length).ToArray());
                    }
                    filenum++;
                    foundTake = false;
                }
            }
        }
        
        private void CreateTempFile(string filename, string[] data)
        {
            List<string> completefile = new List<string>();
            completefile.AddRange(_fbxheader);
            completefile.AddRange(data);

            try {
                // Write data to new temp file.
                File.WriteAllLines(filename, completefile.ToArray());

                // Store temp file name for processing.
                _animfiles.Add(filename);
            }
            catch {
                // Error while creating temp file.
                _context.Logger.LogWarning(null, null, "Error creating temp file: {0}", filename);
            }
        }
        
        private void DeleteTempFiles()
        {
            foreach (string file in _animfiles) {
                File.Delete(file);
            }
        }
    }
}




How to extend the XNAnimation library for XNA 4.0 Refresh 
       In the XNAnimation Pipeline, open up SkinnedModelImporter.cs and replace it with the following source code below.

Source Code Below:

using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.IO;
using System.Runtime.InteropServices;
using System.Text;
using System.Xml;
using System.Linq;
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Content;
using Microsoft.Xna.Framework.Content.Pipeline;
using Microsoft.Xna.Framework.Content.Pipeline.Graphics;
using Microsoft.Xna.Framework.Content.Pipeline.Processors;
using Microsoft.Xna.Framework.Design;


// The type to import.
using TImport = Microsoft.Xna.Framework.Content.Pipeline.Graphics.NodeContent;

// Change the namespace to suit your project
namespace SkinnedModelPipeline
{
    [ContentImporter(".fbx", DisplayName = "Multi-take FBX Importer", DefaultProcessor = "")]
    public class SkinnedModelImporter : FbxImporter
    {
        private List<string> _animfiles;
        private List<string> _fbxheader;
        private TImport _master;
        private ContentImporterContext _context;

        public override TImport Import(string filename, ContentImporterContext context)
        {
            _context = context;

            // _animfiles will contain list of new temp anim files.
            _animfiles = new List<string>();

            // Decouple header and animation data.
            ExtractAnimations(filename);

            // Process master file (this will also process the first animation)
            _master = base.Import(filename, context);

            // Process the remaining animations.
            foreach (string file in _animfiles)
            {
                TImport anim = base.Import(file, context);

                // Append animation to master NodeContent.
                AppendAnimation(_master, anim);
            }

            // Delete the temporary animation files.
            DeleteTempFiles();

            return _master;
        }

        private void AppendAnimation(NodeContent masternode, NodeContent animnode)
        {
            foreach (KeyValuePair<string, AnimationContent> anim in animnode.Animations)
            {
                if (!masternode.Animations.ContainsKey(anim.Key))
                {
                    masternode.Animations.Add(anim.Key, anim.Value);
                }
                else
                {
                    //overwrite the animation that was stored inside the
                    //master file because it is of the wrong length (except the first animation).
                    masternode.Animations[anim.Key] = anim.Value;
                }
            }

            //foreach (NodeContent child in animnode.Children) {
            //    if (child != null) {
            //        AppendAnimation(child);
            //    }
            //}

            for (int i = 0; i < masternode.Children.Count; i++)
            {
                if (animnode.Children[i] != null)
                {
                    AppendAnimation(masternode.Children[i], animnode.Children[i]);
                }
            }
        }

        private void ExtractAnimations(string filename)
        {
            List<string> masterFile = File.ReadAllLines(filename).ToList();
            string path = Path.GetDirectoryName(filename);
            int open_idx = 0,
                length,
                num_open = -1,
                filenum = 0;
            bool foundTake = false;

            int idx = masterFile.IndexOf("Takes:  {") + 1;
            _fbxheader = masterFile.Take(idx).ToList();
            List<string> anims = masterFile.Skip(idx).ToList();

            // Extract each animation and create a temporary anim file.
            for (int i = 0; i < anims.Count; i++)
            {
                if (anims[i].Contains("Take: "))
                {
                    open_idx = i;
                    num_open = 0;
                    foundTake = true;
                }

                if (anims[i].Contains("{") &&
                    foundTake)
                {
                    num_open++;
                }

                if (anims[i].Contains("}") &&
                    foundTake)
                {
                    num_open--;
                }

                if (num_open == 0 &&
                    foundTake)
                {
                    // Skip first animation since this is processed in the master
                    // fbx file.
                    if (filenum > 0)
                    {
                        length = i - open_idx + 1;

                        // Create temp file from header + anim data.
                        CreateTempFile(Path.Combine(path, "tmp.anim." + filenum + ".fbx"),
                                       anims.Skip(open_idx).Take(length).ToArray());
                    }
                    filenum++;
                    foundTake = false;
                }
            }
        }

        private void CreateTempFile(string filename, string[] data)
        {
            List<string> completefile = new List<string>();
            completefile.AddRange(_fbxheader);
            completefile.AddRange(data);

            try
            {
                // Write data to new temp file.
                File.WriteAllLines(filename, completefile.ToArray());

                // Store temp file name for processing.
                _animfiles.Add(filename);
            }
            catch
            {
                // Error while creating temp file.
                _context.Logger.LogWarning(null, null, "Error creating temp file: {0}", filename);
            }
        }

        private void DeleteTempFiles()
        {
            foreach (string file in _animfiles)
            {
                File.Delete(file);
            }
        }
    }
}