Saturday, May 14, 2016

38. How to Install MonoGame for Visual Studio 2015




37. Install XNA on Visual Studio 2015!



       This video will guide you through installing the XNA Framework in Visual Studio. If you have difficulty following this lesson, feel free to download the ready-to-use extension for VS 2015 here. Find the XNA4-VS2015.zip file.




36. XNA 4.0 Parallax Occlusion Mapping

     

       Recently, I came across Electronic Meteor's blog post which updated the Parallax Occlusion Mapping sample to XNA 4.0. In short, parallax mapping is a way to make textures pop out. It is a very efficient way to add depth to a texture, so there is no tessellation involved. Parallax Occlusion Mapping was used in the hit first-person shooter Crysis. This sample also demonstrates another technique called Normal Mapping, and you can switch between the two techniques to compare their effects. The sample was originally created by Alex Alvarex Urban and updated to XNA 3.1 by Canton Javier Ferrero. Thanks to Chris, the sample was updated to XNA 4.0; however, he ran into some problems getting the particle system to work. In time, I was able to fix it. I have made the source code available for download at the link below. If you have found this sample helpful or run into any problems, please comment below.

Also, on a side note, open the File Properties for the smoke.png image, click the drop-down arrow under Content Processor, and set the following values if they aren't already:

Color Key Enabled = True
Generate Mipmaps = True
Premultiply Alpha = False
Resize to Power of Two = False
Texture Format = DxtCompressed


Download



Friday, May 13, 2016

35. Creating A Universally Free Education

           

       In this blog post, rather than showing my next game engine update, which would still be too early at the moment, I am going to talk about something else: how we can make education universally free, or at least more affordable. My outlook on education might seem nontraditional and unconventional to many, but this is primarily because I am for the most part self-taught. I have always believed in a virtually free higher education. Higher education cannot be a luxury for a privileged few. It is an economic necessity that every family should be able to afford, and every person with a dream and ambition should be able to access. Why do I believe in making higher education free? Simply put, education is a human right. With that said, you are probably reading this thinking:

  • How will teachers get paid?
  • How is it going to be free?
  • Who will see the value in a free education?

       We need to see the value in higher education, and I am not talking about the price tag. The value of a great quality education is that it can change your life; it can help improve your quality of life. For now, if you are reading this, I just want you to visualize what America would look like if there were a free, high-quality higher education. What would that look like? It wouldn't matter what "Class" you are. It wouldn't matter what sort of income you make. It wouldn't even matter where you live, because it would be accessible everywhere: in every library, in every school, from every home, because it would be made accessible online. No, I am not necessarily talking about online classes. For now, I want you to imagine all of the endless possibilities if education were made free, and feel free to post comments below about what you come up with. Education would then become a way for people to become free of poverty, and the "Rich or Upper Class", the "Middle Class" and the "Poor Class" would be virtually gone. My belief, my dream, is for this country to have a free higher education.

       If education were truly free, then there would be one so-called "Class" in America. I would simply like to call it "The Human Class". The reason the classes began to divide further apart traces back years, ever since books were invented. The first book printed with movable type, created by Gutenberg, posed a new opportunity as well as a problem. People could now access books, which were full of knowledge and facts, and this was seen as a problem because those in authority saw the "Value" in books. It was a problem to them because jobs at the time were chosen for you, so making books more accessible would allow people to learn skills outside of the job they were given. Later, books were distributed at soaring prices, prices so high that only a few could afford them. This is what made the gap between classes even wider, thus adding fuel to the segregation between the "rich" and the "poor". I realize I am omitting and skipping a lot of information, but just bear with me.


       We need to move education in America into the 21st century. Below, I brainstorm solutions for how we can do that:


1.) Video Tutorials
       What if America created a fully accredited Universal Virtual School, accessible online, that provided clear, comprehensive tutorials taught by some of the best professors in the country? This is very realistic, especially with the advancements in technology we have today. You would learn from the greatest minds among the best professors in the country as well as around the world. What if America paid the professors up front to help create these high-quality video tutorials? To further help these professors, what if the videos they helped create were published on this virtual school's YouTube channel and monetized, thus further adding to their salary? The goal of the video tutorials would be to break down hard subjects and topics into easy-to-understand language. With this feature, the student can follow along as they watch the video and pause, fast-forward, and rewind at any time in case they missed something or want to hear something repeated. We have seen this work well with YouTube video tutorials as well as resources like Khan Academy, Thinkwell, and Lynda.com.

       The goal is to allow students to work at their own pace and not feel rushed. This would be great for non-traditional students coming back to school. It would also be great for students who are working a job or two and/or raising a family, by helping them work around their schedule. They can create their own schedule to match their needs so that their job doesn't conflict with their course work. With video tutorials like these, students will not have to worry about when to sign up for a class or what days and hours the classes meet, because they are all videos and can be accessed at any time once they sign up for this virtual school. This would eliminate many "excuses" and "problems" students run into in traditional schools and colleges. All schools, whether public or private high schools or colleges, would be able to utilize these video tutorials in the classroom and engage students in new ways. Today's education system disengages students.


2.) Notes
       What if this Universal Virtual School provided lecture notes in the form of PDFs and PowerPoints to review the material from the videos, complete with the key concepts and definitions that students need to remember?


3.) Interactive Games & Simulations
       What if this school provided students with interactive games and simulations to put their knowledge into practice, so they can actually apply what they have learned?




4.) Innovative Test & Quizzes
       What if this Universal Virtual School gave tests and quizzes (that are not timed!)? You can't fully see what someone knows by setting a time limit. These quizzes would let you put your knowledge directly into action. When you, the student, get your quiz results back, they wouldn't simply list which questions you got wrong. They would show you how to work each problem and explain what sort of mistakes you might be making. That is the goal: to help students learn from their mistakes. Knowledge cannot be strictly determined by how many multiple-choice questions students got right. These quizzes would score each question as a percentage by looking at the steps the student takes to arrive at their answer. The questions can change if a student decides to retake a quiz, to prevent cheating, but they would be similar. If students want to do better, they can retake the tests and quizzes as many times as they please.

       I will explain briefly why I don't think standardized testing truly determines someone's intelligence. For one, many students completely guess! I will give a prime example from a personal experience of mine. I had two friends who took the same multiple-choice math test. One got an A and the other barely got a B. The one who got an A wasn't necessarily more knowledgeable or smarter than my friend who got a B. Why, you may ask? I looked at both of their tests thoroughly. My friend who got the B clearly knew more, because I saw him work out the problems to arrive at his answers, and I was upset at how close together the answer choices were on this test. I mean very close. Although he got some answers wrong, he actually understood what he was doing; by seeing how he worked out each problem, I could see his thought process. My friend who got the A had barely anything written down, didn't work through the problems, and didn't really prepare for the test. I asked him how he got an A and he simply said, "Dude, I completely guessed on all of them haha!" This irritated me a little. The problem I see in education is that we measure someone's intelligence on standardized tests strictly by how many answers they got right or wrong, without knowing their thought process, and many students are taught to guess the "most probable" answers.


5.) eBooks
       What if this Universal Virtual School provided the world's largest library of books, accessible to all? These eBooks could be viewed freely online through the school, which would pay the authors to make their books viewable. It would also give students the option to purchase hard copies and have them sent to their home address.


       To conclude, this is not necessarily like online classes. Many online classes work great and many are not so great. Online classes can be missed, whereas these video tutorials can be accessed at any point in time. Also, my idea of this Universal Virtual School is not meant to somehow replace colleges, but to provide people an alternative option. So whatever your reason may be, whether you couldn't afford to attend a university, lack the confidence, or it's personal, this can help. Something like this will not necessarily work for every major, but it would still be highly beneficial. An example of such a major is becoming a doctor, where you must be present to understand surgeries hands-on, but a doctorate program could still utilize this Universal School for videos, tests and quizzes. This Universal Virtual School could potentially work in conjunction with colleges, allowing students to transfer credits over.

       For example, a student who wants to be a doctor could use this Universal Virtual School to save money and get their General Education course requirements out of the way, such as English Composition, Math, Science, etc., thus saving them money for when they go into the core work of their doctorate program. Students could use this Universal Virtual School to transfer credits in to earn certain degrees. What makes this unlike most traditional colleges is that you, as the student, can work and learn at your own pace and eventually graduate in your own time. It might take you longer than somebody else, and that's OK. Also, if you are interested in learning game design, art, and/or programming, check out my Resources page. I highly recommend it! Thanks for reading, and feel free to share your thoughts below.


Friday, April 15, 2016

34. Playing Sound Effects



       The video above, which I came across, is the fourth part of a MonoGame tutorial series. It covers all aspects of audio programming: playing music, sound effects, and even creating audio using XACT from the XNA tools. Below I show how I recently got sound effects working better in XNA. Sound is crucial, as it helps to further bring the game to life. In XNA, sound is pretty straightforward. Import the sound into your project, just like you did with textures. WAV is the preferred format in most environments, but MP3 should be fine too. Besides the project knowing the sound exists, the program must store it too, so you will need to create a SoundEffect object with a suitable name. I will use the name Gear1SoundEffect as an example, based off of my vehicle physics program. The command should be:

Gear1SoundEffect = Content.Load<SoundEffect>("gear1");

Now, to get this to play, you will use a function that is built into the class, conveniently,

Gear1SoundEffect.Play();

This will play the sound effect once and then stop.


Problem
        Although you can overlap sounds without any problems, if you repeatedly play the same effect at a high rate, the sound will be distorted beyond all recognition. Check out the first time I ran into this problem in an older video of mine here as an example. I needed to limit the rate at which the sound effect was being played using timers, especially since the game program now runs at high frame rates. So if you have been wondering why most of my videos thus far don't have any sound, this is the reason.


Solution
       To fix this, I created a SoundEffectInstance object, assigning it the object returned from Gear1SoundEffect.CreateInstance(). After that, all I did was call its Play() function. This class has many interesting functions that I suggest you check out on MSDN. Besides Pause(), Stop() and Resume(), an additional function is Apply3D(), which gives the sound a position in 3D space, taking as arguments one or more listeners and an emitter. This will probably not be of use to you currently, but if you move into 3D XNA programming, it will be of great value.

       Note that you may have up to 16 different SoundEffectInstance objects playing simultaneously; this should be more than enough, however. Now you can add some life to your game. Essentially anything that does not require rotation of images can now be done with what you know.
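
As an aside, here is a minimal sketch of the timer approach described above, assuming a Game subclass that already loads Gear1SoundEffect; the field names and the engineIsRevving condition are mine, purely for illustration:

SoundEffectInstance gear1Instance;
float soundTimer;                // seconds since the effect last played
const float MinInterval = 0.1f;  // minimum delay between replays

// In LoadContent(), after loading Gear1SoundEffect:
gear1Instance = Gear1SoundEffect.CreateInstance();

// In Update(GameTime gameTime):
soundTimer += (float)gameTime.ElapsedGameTime.TotalSeconds;
if (engineIsRevving && soundTimer >= MinInterval)
{
    gear1Instance.Stop();  // restart the instance instead of stacking plays
    gear1Instance.Play();
    soundTimer = 0f;
}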

For more information, check out this article here.

If you found this blog post helpful, please comment below. Thanks for reading and happy coding!



Wednesday, April 13, 2016

33. Basic Car Drifting Physics


       After some trial and error, I was able to get some drifting mechanics working for my vehicle physics. I am continuing to modify and improve the JigLibX physics library that I ported over to XNA 4.0 and MonoGame. Inside my Wheel.cs (wheel class), the key was a variable named "smallVel". This variable applies a force between the drive force and the side force. Initially it was set to three, and I decided to change it to 20 for this vehicle. At the moment, it feels like I am driving on ice, but it is controllable, which makes it a lot of fun to drift.

Make the following changes so that the car doesn't slide around too much when stopped: 

            float smallVel;
            if (angVel == 0 && !locked)  // logical AND: wheel is stopped and not locked
            {
                smallVel = 1;
            }
            else smallVel = 20;



To give it more of an authentic feel, you can decrease smallVel to twelve. I also greatly improved the handling. I will most likely create a starter kit based off of this sample program. So if you are using XNA 4.0 or MonoGame and need some on-road, off-road or arcade-style car physics, this will help.




Triangle Mesh Collision Problem
       Many developers using JigLibX in XNA had problems with the Triangle Mesh Object, especially when using it as a ground model, but not with the Height Map Object. I sometimes run into these issues. When a physics object hits a vertex or seam between triangles in the mesh, strange things can occur.




Solution 
       I separated the garage model into multiple meshes, just two for the most part. Using Autodesk Maya, I then selected them all, grouped them, and finally exported the selection. You might have to separate models that have a lot of triangles and faces into separate models. JigLibX supports models with 16-bit indices, not 32-bit. Below is a sample of my JigLibX TriangleMeshObject.cs code that lets you extract the vertices and indices from a loaded model.


using System;
using System.Collections.Generic;
using System.Text;
using Microsoft.Xna.Framework.Graphics;
using Microsoft.Xna.Framework;
using JigLibX.Geometry;
using JigLibX.Physics;
using JigLibX.Collision;


namespace JigLibXGame
{
    class TriangleMeshObject : PhysicObject
    {
        TriangleMesh triangleMesh;
        Matrix xform;
        public TriangleMeshObject(Game game, Model model, Matrix orientation, Vector3 position)
            : base(game, model)
        {
            body = new Body();
            collision = new CollisionSkin(null);
            triangleMesh = new TriangleMesh();
            List<Vector3> vertexList = new List<Vector3>();
            List<TriangleVertexIndices> indexList = new List<TriangleVertexIndices>();
            ExtractData(vertexList, indexList, model);
            triangleMesh.CreateMesh(vertexList, indexList, 4, 1.0f);
            collision.AddPrimitive(triangleMesh, new MaterialProperties(0.8f, 0.7f, 0.6f));
            PhysicsSystem.CurrentPhysicsSystem.CollisionSystem.AddCollisionSkin(collision);
            // Transform
            collision.ApplyLocalTransform(new JigLibX.Math.Transform(position, orientation));
            // We also need to move this dummy so the object is *rendered* at the correct position.
            // body.MoveTo(position, orientation);
            this.body.CollisionSkin = this.collision;
          
        }

    
        // Helper Method to get the vertex and index List from the model.
        public void ExtractData(List<Vector3> vertices, List<TriangleVertexIndices> indices, Model model)
        {
            Matrix[] bones_ = new Matrix[model.Bones.Count];
            model.CopyAbsoluteBoneTransformsTo(bones_);
            foreach (ModelMesh mm in model.Meshes)
            {
                Game.GraphicsDevice.RasterizerState = RasterizerState.CullNone;
              
                xform = bones_[mm.ParentBone.Index];
                foreach (ModelMeshPart mmp in mm.MeshParts)
                {
                    int offset = vertices.Count;
                    Vector3[] a = new Vector3[mmp.NumVertices];
                    int vertexStride = mmp.VertexBuffer.VertexDeclaration.VertexStride;
                    mmp.VertexBuffer.GetData<Vector3>(mmp.VertexOffset * vertexStride, a, 0, mmp.NumVertices, vertexStride);
                   
                    for (int i = 0; i != a.Length; ++i)
                        Vector3.Transform(ref a[i], ref xform, out a[i]);
                    vertices.AddRange(a);


                    if (mmp.IndexBuffer.IndexElementSize != IndexElementSize.SixteenBits)
                        throw new Exception("Model uses 32-bit indices, which are not supported.");
                    short[] s = new short[mmp.PrimitiveCount * 3];
                    
                    mmp.IndexBuffer.GetData<short>(mmp.StartIndex * 2, s, 0, mmp.PrimitiveCount * 3);
                    JigLibX.Geometry.TriangleVertexIndices[] tvi = new JigLibX.Geometry.TriangleVertexIndices[mmp.PrimitiveCount];
                    for (int i = 0; i != tvi.Length; ++i)
                    {
                        tvi[i].I0 = s[i * 3 + 2] + offset;
                        tvi[i].I1 = s[i * 3 + 1] + offset;
                        tvi[i].I2 = s[i * 3 + 0] + offset;
                    }
                    indices.AddRange(tvi);
                }
            }
        }
  
        public override void ApplyEffects(BasicEffect effect)
        {
            //effect.DiffuseColor = Vector3.One * 0.8f;
            
        }
    }
}


       Next, I will be adding a way for the player to switch to a cockpit view and drive in first-person perspective in-game. Thanks for reading, and I look forward to posting more updates. If you have found this blog post helpful or would like to share any improvements, please comment below.




Tuesday, April 5, 2016

32. MonoGame Video Showcases





       MonoGame is an open-source implementation of the Microsoft XNA 4.0 Framework for creating powerful cross-platform games. Learn more by visiting monogame.net, and also fork and star their repository on GitHub.


Tuesday, March 15, 2016

31. MonoGame is coming to Xbox One

     

       XNA is not dead! It lives on through MonoGame, which implements the XNA 4.0 Framework. This allows developers using the XNA dev kit to port their games to platforms beyond those the XNA 4.0 Framework supports by default. Recently at GDC (the Game Developers Conference) 2016, in addition to cross-network play support, Microsoft announced that MonoGame is coming to Xbox One. In an older blog post, I asked Microsoft to allow their Xbox One dev kit to support MonoGame as well as other tools and middleware.



       The Cyclone Game Engine is built upon the XNA and MonoGame frameworks, so this is great news for me as well as Steel Cyclone Studios. Back in March 2014, Sony announced that their PS4 dev kits would support MonoGame for registered PS4 developers. Microsoft officially ended support for XNA some time ago, but MonoGame launched as an open-source implementation of XNA in 2009. Since then, MonoGame has been used to help create a number of games for various platforms, including Windows 8, Windows Phone and others. The first Xbox One game made with MonoGame is Axiom Verge, a retro-themed side-scroller. The game was created by just one person, Tom Happ, who also worked at Petroglyph Games. Axiom Verge launched last year on PlayStation 4 and Windows PC, and will be released for Xbox One sometime later this year. Congratulations to the MonoGame team! Below is a video by BlitWorks released in 2015 showcasing MonoGame samples running on PS Vita and Xbox One.


“With nearly every game on the market today being released simultaneously on all platforms, the need for a good cross-platform development strategy is crucially important. Every hour spent reinventing the wheel is an hour wasted. Or to put it another way, every hour spent writing cross-platform code constitutes three saved hours.” 
       --(Steven Goodwin (2005), Cross-Platform Game Programming)






Friday, March 4, 2016

30. XNA Vehicle Physics Test 4


       In this update, I am continuing to modify and expand the JigLibX (Jiggle Library X) physics engine. Recently I added an exhaust particle effect to the vehicle. This particle effect streams behind the vehicle whenever the player presses the gas. I will change the color of the smoke to grey. The standard vehicle class in my game engine has a hand brake that locks the rear wheels, thereby creating some drag and slowing the car down. That part is realistic. What isn't realistic is that the rear wheels still have full traction. If you've ever pulled a hand brake in a car while moving, you've noticed that the rear end loses traction and wants to swing around, causing a spin. It can be useful for making tight 180-degree turns, orienting your car coming into a turn, and for good ol' fashioned fun. This is why I kept this feature.


Adding the smoke particle

Vector3 OffsetAmount = new Vector3(0, 0, -3);

Vector3 OffsetPosition = Car.carObject.Car.Chassis.Body.Position +
    (Car.carObject.Car.Chassis.Body.Orientation.Right * OffsetAmount.Z) +
    (Car.carObject.Car.Chassis.Body.Orientation.Up * OffsetAmount.Y) -
    (Car.carObject.Car.Chassis.Body.Orientation.Forward * OffsetAmount.X);


ParticleComponent.smokeParticles.AddParticle(OffsetPosition, Vector3.Zero);


       OffsetAmount is a Vector3 holding the relative offset you want, in this case 3 units behind the center of the car. OffsetPosition is the final position in world space, which you can then pass into your particle system. It took some trial and error to get all of this working correctly, but I've tested it in several positions and it seems to work OK. The last line tells my ParticleComponent to add a particle at the specified position and initial velocity; replace it with your own particle code to get a nice stream of particles behind your car. I plan on adding a similar particle effect to the actual engine when the vehicle takes damage. For example, if the engine is emitting grey smoke, the damage to the vehicle is rather light. But if the smoke is black, the vehicle is heavily damaged and requires repair. The next particle effect I will be working on is a dust trail streaming behind the tires.


Change the Gravity 
       One thing I forgot to mention in my previous vehicle physics post was how I changed the gravity. Inside the Physics System Class (PhysicsSystem.cs), scroll down and change the gravity variable from -10 to -30. 

        public PhysicsSystem()  
        {  
            CurrentPhysicsSystem = this;  
            Gravity = -30.0f * Vector3.Up;  
        } 

This will make the car and the physics in general less like the "moon's gravity" and more like the Earth's gravity.


Add Rotational Friction to the Wheels
       Before, if you were on level ground, you could coast forever after letting off the throttle. One simple fix is to damp the wheel's angular velocity each frame whenever no drive torque is applied:

     if (driveTorque == 0)
    {
        angVel *= .97f;
    }

Another approach applies a fixed amount of rotational friction. Put this in Update() in Wheel.cs, right above angVel += driveTorque * dt / inertia:

                 if (driveTorque == 0)
                {
                    float axleFriction = 5; // higher number = more rotational friction
                    if (Math.Abs(angVel) > axleFriction) // compare the magnitude so reverse rotation is damped too
                    {
                        if (angVel > 0) angVel -= axleFriction;
                        else angVel += axleFriction;
                    }
                    else angVel = 0;
                }

In the damping version, multiply by a smaller number for more friction, or a bigger number (still less than 1) for less friction; in the axle-friction version, raise axleFriction for more rotational friction. Also, to help counteract any bumps on the track, I decreased the wheel's damping fraction.


Improve JigLibX's Performance 
       I have heavily modified the Jiggle Physics Library to run on XNA 4.0 and MonoGame, and I am extending and improving it in many ways. Many indie game developers chose BEPU rather than JigLibX when developing their games for Xbox 360. One of the major reasons was that JigLibX's performance on Xbox 360 was extremely slow, which decreased the game's frame rate. To help JigLibX's physics performance on Xbox 360 as well as other less advanced hardware, I am changing its foreach statements to for loops, and I will be adding separate loading threads to help with the low frame-rate issues. JigLibX also makes extensive use of the float type. I mention this because the Xbox 360 has some issues processing floating-point math, so I am changing out the floats to avoid major physics performance issues.
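
To make the foreach-to-for change concrete, here is a hedged sketch; the collection and method names are placeholders rather than JigLibX's actual API, but the pattern is the same throughout the library:

// Before: the enumerator adds per-iteration overhead and, on the
// Compact Framework used by Xbox 360, extra garbage-collection pressure.
// foreach (CollisionSkin skin in skins)
//     skin.Update();

// After: an indexed for loop avoids the enumerator entirely.
for (int i = 0; i < skins.Count; i++)
{
    skins[i].Update();
}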


How to Get Rid of the Infamous Wobble at High Speed
       One of the major issues people encountered using JigLibX was that once the vehicle reached a certain speed, it started to rock side to side, with no obvious way to stop it. Nikescar came up with a solution to fix this issue. If you would like to see the problem for yourself, set wheelFriction to 5.0f and wheelDampingFrac to 0.3f; the car should start wobbling at max speed.

       Each wheel's Update( ) is called at a different time. One wheel gets slowed down while the other wheels stay at their current speed. The next frame, another wheel gets slowed down while the remaining wheels keep the same speed. Do this sixty times a second and you get quick deceleration at alternating corners of the car, causing the wobble. After a lot of trial and error, here is how you can fix this.

This problem occurs when angVel reaches maxAngVel in the Update( ) method of Wheel.cs:

angVel = MathHelper.Clamp(angVel, -maxAngVel, maxAngVel);

Either delete or comment out this line and it will take care of the problem. However, this creates another problem because now you won't be able to set a limit for the top speed.

In the Wheel.cs, add this with the rest of them:

public float AngVel
{
    get { return angVel; }
}


Then just after this under PostPhysics( ) in the Car.cs:

for (int i = 0; i < wheels.Count; i++)
    wheels[i].Update(dt);


add this:

for (int i = 0; i < wheels.Count; i++)
{
    if (wheels[i].AngVel >= topSpeed)
        destAccelerate = 0;
}


This should fix it. Now you can freely set up your car without having to worry about the limitations of the infamous wobble. Lastly, to see how I implemented the skybox, check out my blog post here. Thanks for reading, and if you have found this blog post helpful, please comment below.


Sunday, January 17, 2016

29. Cross Fade Animation Blending Fixed


     I have fixed the cross fade function in the XNAnimation Library for XNA 4.0 & MonoGame. Cross fade interpolates between two animation clips, fading out the current clip and fading in a new one. This allows a character's animations to transition more smoothly when their actions change. I am currently working on another technique called Additive Blending.
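
For readers curious about the underlying idea, here is a minimal sketch of what a cross fade boils down to; this is not XNAnimation's actual implementation, and fadeTime, fadeDuration and the pose arrays are hypothetical fields of your own:

// Blend the bone transforms sampled from the outgoing clip (poseA)
// and the incoming clip (poseB) into blendedPose.
void UpdateCrossFade(GameTime gameTime, Matrix[] poseA, Matrix[] poseB, Matrix[] blendedPose)
{
    fadeTime += (float)gameTime.ElapsedGameTime.TotalSeconds;
    float blend = MathHelper.Clamp(fadeTime / fadeDuration, 0f, 1f);

    // A production implementation would decompose each transform and
    // Slerp the rotations; a matrix Lerp is enough to show the idea.
    for (int bone = 0; bone < blendedPose.Length; bone++)
        Matrix.Lerp(ref poseA[bone], ref poseB[bone], blend, out blendedPose[bone]);

    // Once blend reaches 1, the new clip has fully taken over.
}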


Saturday, December 5, 2015

28. More Xbox One Dev Kit Tools & Middleware Support

       Unfortunately, with the release of the Xbox One, I understand that Microsoft has moved on from XNA. I am also fully aware that Microsoft is promoting Unity as a replacement. I speak for all indie developers when I say it should be up to the game developers and game development companies to decide what tools they wish to use to create their games. They should be able to decide on the tools that fit their needs, not be forced to use a tool simply because the platform will not support anything else. I use XNA and MonoGame simply because it's what I prefer and what I am more comfortable and familiar with. I propose that the Xbox One dev kit should support XNA 4.0, XNA 4.0 Refresh and MonoGame. By allowing the Xbox One dev kit to support more tools and middleware, Microsoft would be opening themselves up to an even larger potential market. Microsoft's competitor, Sony, announced a while back that the PS4 supports MonoGame and other middleware for registered developers.

       MonoGame is essentially the continuation of XNA 4.0. Many developers who created Xbox Live Indie Games for Xbox 360 have moved on to develop games for PS4 since it supports MonoGame. Microsoft would win back respect from the developers it lost, those who created indie games for Xbox 360, by allowing the Xbox One dev kit to support XNA and MonoGame. This is why I am trying my hardest to convince Microsoft to make their dev kit support these frameworks as well as other tools and middleware. Microsoft would not necessarily have to create a new version of XNA, because the creators behind MonoGame are essentially doing that right now. With that said, that doesn't mean MonoGame's creators can't be compensated or offered help from Microsoft in some shape or form. It took a lot of work to make XNA function on other platforms. Great games can be created regardless of which engines and tools were used to create them. Visit the website link below to vote.

Xbox.uservoice.com

Thanks for reading, and I would greatly appreciate your vote!


Long Live XNA/ MonoGame!!!

If there are any XNA or MonoGame developers reading this, please comment and share your thoughts below if you would like to see the Xbox One dev kit support these tools. 


Saturday, November 14, 2015

27. Euclideon's Unlimited Detail

       The false assumption in this day and age is that you need great hardware in order to have amazing graphics. The following video destroys that assumption. Hardware simply runs the graphics; it doesn't create the content, which the artists behind the game created themselves. I say this because I hear so many statements from gamers today like, "They should re-make this game that came out two years ago with Xbox One graphics!" That doesn't fully make sense, because it's the game engine that handles the graphics for the most part. The real question is, "Can the hardware handle it?"
       Some game companies are remaking games today to show off the so-called "power" of next-gen consoles. We have seen this with Gears of War: Ultimate Edition for Xbox One, for instance. To do this, the artists have to change the actual graphics content in the game themselves. Basically, they take the same game and might increase the polygons in the 3D models to show off more detail, improve the lighting, and add features like better shaders, normal mapping, bump mapping, and maybe even ray tracing if the hardware can handle it. The problem I am getting at is that more and more games are becoming highly dependent on hardware specs.





I recently saw the image below posted by IGN on Facebook. It's a screenshot comparison of Call of Duty: Black Ops 3, which recently came out for both the Xbox 360 and Xbox One platforms. With point-cloud data, the same level of detail you see on Xbox One could be pulled off on Xbox 360. So why am I showing you all of this?

Man, Black Ops 3 looks rough on last gen... http://go.ign.com/v9iJqMv #UpAtNoon
Posted by IGN on Monday, November 9, 2015




Why? Simply put, gamers are literally buying into the idea that games on last-gen can't look as good as games on next-gen due to a lack of hardware specs. Euclideon has shattered this belief!



       I know there are many people out there who are skeptical, like the gentleman in the video above. Light maps were not actually used; the lighting came from the scan itself. If this technology were used in game engines, Euclideon might need to come up with a way to remove the baked-in lighting, so that after the scan, the game engine handles the lighting effects using shaders and dynamic lighting. I will admit, I too was skeptical at first, until I actually saw the technology run on the Nintendo Wii. This is wild because the Wii, for example, lacks the hardware to run a game like The Last of Us or Crysis. With that said, game developers might still need advanced hardware to create games, but not necessarily to play them. So the question in your minds is:


Can it be Proven Further?
Maybe a Live Demonstration?





       Whenever I talk about unlimited detail and point-cloud data, I get responses like, "It's not graphics that make a game successful, it's gameplay!" I am fully aware of this. I don't favor graphics over gameplay! I would take the content in the game, the story, and how well the game plays over graphics any day.


How will this affect gamers and game development companies?
  1. People will not have to buy expensive gaming PCs and hardware to pull off the level of graphics that they want or to run their games on full settings.

  2. Regardless of what console or game platform you own, it will be even harder to tell the difference visually.

  3. Game Development companies will save money creating art assets for their games because they can simply laser scan objects from the real world. So instead of creating a high-poly 3D car model, why not laser scan a car?

  4. Game Development companies will be able to expand their audience and reach more people to play their games, thus ushering in more money. 

       I would love to somehow integrate point-cloud data into my game engine someday but that is the least of my concerns right now.


If this technology were in your game engine, how would you utilize it? 

       Before I would even consider implementing this technology in my own game or game engine, I would need to fully answer and resolve the following questions:

  1. How would unlimited detail work for animated models?
           Dell claims they got animation to work very well, but has not yet provided any information as to how. If you are unfamiliar with the term "rigging", the best way I can describe it is when a 3D model is given a skeleton and a painted weight map. The weight map determines how much and where individual bones deform the mesh. You need more than just a model and a rig; a rig is only as good as the topology. Topology is the polygonal and edge flow of the model. If you look at the wire frame of a nice 3D model, you can see how the model's faces and edges follow along muscle groups. After a model is rigged, it is animated and then exported. I realize I am omitting a lot, but this is just a basic understanding.
           My thought starting out would be to have my game engine somehow convert my 3D models into point-cloud data strictly in-game. The program logic would basically trick the computer into thinking the model should be treated like a traditional 3D model when in reality it is not. The reason is so that I can always go back and make changes to my models in my 3D modeling programs, without having to worry about whether those programs can import, render and support such models. A point-cloud model viewer mode in my engine could show what the model currently looks like in the engine when imported, and what the model looks like after it's converted.


  2. Will unlimited detail be able to handle the textures for my models?
           So will this technology be able to interpret the UV coordinates? Will it matter? How will I be able to use the texture map I made for the model? If a model conversion in my game engine is possible, could the textures somehow be merged with the model?


  3. What is the extent of lighting?
           The lighting actually came from the environment of the laser scan itself. In a game, Euclideon would have to come up with a way to remove the lighting from the environment after the laser scan, then add the effects back using shaders and dynamic lighting powered by the game engine. Whether the lighting is specular, diffuse, etc. will be up to the game engine programmers and developers themselves.


  4. How will physics and collision detection work with point-cloud data?
           As it stands right now, there are no physics engines to date that support collision detection for unlimited detail or point-cloud data. It would take a tremendous amount of calculations to handle collision detection for every single object in a game world composed of point-cloud data. This is why we have not seen it implemented in games yet; however, Bruce Dell did say they were working on two titles. Dell said the following:

    "The first is an adventure game with a sword, solid scan forests, and a lot of alien type monsters. The second is a cute, clay scanned adventure where you ride giraffes. Can't say more than that I'm afraid."
    --Bruce Dell

     Unfortunately, we haven't seen any gameplay footage yet, which still raises a lot of questions. To somehow get the physics to work, my thought is to trick the computer into thinking that the converted 3D model should still be treated like a traditional 3D model, so collision detection would work the same for the most part. One problem area is the fact that unlimited detail can render scenes extending down to individual rocks. So will those individual rocks in the terrain be treated as separate models with their own weight and physics? That would be an enormous amount of physics calculations to process.
           The engine would essentially have to make physics calculations for every single individual rock. This could be avoided if a single rock model were duplicated however many times throughout the game world by instancing the model: the computer would treat all the rocks as one model and only have to make physics calculations for that specific rock model. Will I have to stick with my terrain processor starting out instead? 3D models in and of themselves often aren't as much of a strain on computers as the logic is; they are just one of the reasons why your computer may slow down, depending on how detailed they are and the number of polygons they have. Thanks to point clouds, computers read and process tiny dots faster than polygons.


  5. Will point-cloud data work best for static objects in the game?
           
    Many game engines extract the triangle and polygon information from 3D models in order to create more accurate collision detection. With a point-cloud based model, will I still need to use primitive invisible meshes for collision detection? How accurate will it be?


  6. Now that my game engine is capable of applying fur to 3D models, will the fur still look the same after conversion?



Why is this technology not utilized in videogames yet?

       I sort of answered some of this in my own questions above. As it stands right now, many game companies (especially big-name ones which I won't mention) are either unaware of this software technology or simply skeptical of it. Some game companies are rebuilding their game engines with the Xbox One's and PS4's hardware in mind. With unlimited detail, this wouldn't necessarily be as big of an issue. Videogame companies will have to approach Euclideon themselves if they want to invest in this technology and have it running in their games. Other game companies have made technology similar to unlimited detail, but plan on saving it for later use since they encountered many issues.

       John Carmack, who is widely known as "the creator of the FPS", suggested that a system like Euclideon's was possible in theory. Coming from Carmack, this greatly surprised me. Euclideon's point-cloud technology runs entirely in software, so it does not use up a tremendous amount of processing power; none of its demonstrations utilized the GPU. Intel had once tried making its own system to do things along the lines of unlimited graphics. Unfortunately, it ended up closing this avenue and setting it aside as something for the distant future, when computers have "more power".


Minecraft creator Notch made extremely negative comments towards both Dell and the validity of his work.

"If their demo was real and you add up all the little atoms in the island then it would use petabytes of data. But everybody knows that if you see a tree twice, it's reconnected. There isn't a game that I know of that doesn't use the same object twice and I assure you that they didn't store that object two times separately."
-- Notch

       
       This is a misinterpretation of the point-cloud technology. Notch is referring to a rendering technique called instancing, which I touched upon a bit earlier. In short, instancing is when the same model is duplicated multiple times to save processing, and the engine can potentially treat each duplicated model as its own separate entity. It's a way of "tricking" the computer. Games often render many copies of the same model, for instance by covering a landscape with trees or by filling a room with crates. Instancing helps performance by reducing the cost of repeated draw calls. In games, the calls needed to render a model are relatively expensive and can quickly add up if you are drawing hundreds, if not thousands, of models. Many Xbox Live Indie Games with "Minecraft-like" worlds utilize this technique. A sketch of how this looks in XNA follows below.
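
As a concrete illustration, here is a hedged sketch of hardware instancing in XNA 4.0 (HiDef profile), along the lines of Microsoft's InstancedModel sample; the model buffers and the effect that reads the per-instance world matrix are assumed to already exist:

// A second vertex stream holds one world matrix per instance,
// packed as four Vector4 elements for the vertex shader.
static readonly VertexDeclaration InstanceDecl = new VertexDeclaration(
    new VertexElement(0,  VertexElementFormat.Vector4, VertexElementUsage.TextureCoordinate, 1),
    new VertexElement(16, VertexElementFormat.Vector4, VertexElementUsage.TextureCoordinate, 2),
    new VertexElement(32, VertexElementFormat.Vector4, VertexElementUsage.TextureCoordinate, 3),
    new VertexElement(48, VertexElementFormat.Vector4, VertexElementUsage.TextureCoordinate, 4));

void DrawInstanced(GraphicsDevice device, VertexBuffer modelVertices, IndexBuffer modelIndices,
                   int vertexCount, int primitiveCount, Matrix[] instanceTransforms)
{
    // Upload the per-instance transforms (in practice, cache this buffer
    // instead of recreating it every frame).
    DynamicVertexBuffer instanceBuffer = new DynamicVertexBuffer(
        device, InstanceDecl, instanceTransforms.Length, BufferUsage.WriteOnly);
    instanceBuffer.SetData(instanceTransforms, 0, instanceTransforms.Length, SetDataOptions.Discard);

    // Stream 0 carries the model geometry; stream 1 advances once per instance.
    device.SetVertexBuffers(
        new VertexBufferBinding(modelVertices, 0, 0),
        new VertexBufferBinding(instanceBuffer, 0, 1));
    device.Indices = modelIndices;

    // One draw call renders every copy; the vertex shader applies the
    // per-instance world matrix from stream 1.
    device.DrawInstancedPrimitives(PrimitiveType.TriangleList, 0, 0,
        vertexCount, 0, primitiveCount, instanceTransforms.Length);
}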

       Cevat Yerli, the head of Crytek, however, didn't think that Euclideon was creating some big hoax. He went on to say that he truly respects the work Euclideon has accomplished and that unlimited detail is absolutely believable. In his past keynotes, he has always talked about how such technology could be possible in the future.

       With so many negative reactions to Dell's claim that Unlimited Detail will improve graphics by 100,000 times, has this industry become... jealous? Is this part of the reason why we haven't seen game companies approach Euclideon? Still, there are plenty of positive reactions too.


What can we do as gamers?

       As gamers, we need to bring this technology up to game companies who are building games with graphics of intense realism. We need to stop asking Euclideon when this technology will be implemented in games, and instead ask the game companies themselves. We can encourage game companies to get involved by creating our own petitions if we have to. As game consumers, we need to realize that we have the power: the purchasing decisions we make contribute to the decisions game companies make with their games, for better and for worse. We've accepted the notion that last-gen is somehow holding games back, so we purchase monster gaming computers, new graphics cards, next-gen consoles and hardware. With unlimited detail, videogame graphics would no longer be held back by hardware specs. Whether we realize it or not, we as gamers play a crucial role in whether game development companies learn from their mistakes and take risks, through how we vote with our wallets.


For more updates, you can also follow their Facebook Page:
https://www.facebook.com/Euclideon/

So what are your thoughts on Euclideon's technology breakthrough?
Please comment below  



Wednesday, November 11, 2015

26. Creating the Logos


       I created the Cyclone Game Engine logo by accident. Initially, I wasn't trying to create a logo at all. I was working on a Flash animation back in college, and I needed to come up with designs for simple shapes. I simply wanted to make them spin and rotate like a wheel. As I was working on creating the shapes in Adobe Illustrator, my close friend Patrik Sjoberg stopped by to see what I was working on.

"Hey, that looks cool man, "he said. That could be the logo design for your game engine man.

Patrik gave me the idea of using the design as the logo for the game engine which I was working on in what little of my spare time I had outside of my job and school. 



       In the near future, I plan to find graphic designers to help create logos, buttons and designs that will go into the Cyclone Game Engine's User-Interface. 




       The image above shows the first logo design for Steel Cyclone Studios. I am not a graphic designer by any means, and I was rushing this design at the time because it wasn't the most important thing I was working on. I needed a logo quickly when I started my business through the Kentucky Secretary of State. This logo was just a start. There are so many things wrong with it, but starting out, that wasn't important; I knew I could improve the design later. At the time, my game projects and making as much progress as possible mattered more.



       Later, I redesigned my logo to look something like the image above. This was a major improvement, and I made tons of iterations and sketches on paper. But still, something didn't sit well with me. I wanted to use no more than three colors, and as I stared at this design, I couldn't escape the feeling that it looked more like a sports team logo. I needed to make the design simpler somehow.



       And then it happened... I took the best of both worlds from the original and new designs. I made the tornado out of simple shapes and gave them a "steel metallic" look, if that makes any sense. It was a very hard effect for me to pull off. I sort of made this by accident as well, just like my engine logo. Accidents can turn out to be amazing things.




Tuesday, November 10, 2015

25. XNA & MonoGame Tiff Importer

       By default, the XNA Framework does not natively support TIFF image files. To use the native content pipeline importers, you will need PNG, JPG, DDS, TGA, or BMP. The following link shows you how to create a TIFF importer.

Make sure you add the System.Drawing Reference to the Importer.

Here is the Source Code:

using System;
using System.Collections.Generic;
using System.Drawing;

using System.Linq;
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;
using Microsoft.Xna.Framework.Content.Pipeline;
using Microsoft.Xna.Framework.Content.Pipeline.Graphics;
using Microsoft.Xna.Framework.Content.Pipeline.Processors;


namespace TiffLib
{
    /// <summary>
    /// This class will be instantiated by the XNA Framework Content Pipeline
    /// to import .tif/.tiff image files, converting them into Texture2DContent
    /// that the standard TextureProcessor can then process.
    ///
    /// This should be part of a Content Pipeline Extension Library project.
    /// </summary>
    [ContentImporter(".tif", ".tiff", DisplayName = "TIFF Importer", DefaultProcessor = "TextureProcessor")]
    public class TiffImporter : ContentImporter<Texture2DContent>
    {
        public override Texture2DContent Import(string filename, ContentImporterContext context)
        {
            Bitmap bitmap = Image.FromFile(filename) as Bitmap;
            var bitmapContent = new PixelBitmapContent<Microsoft.Xna.Framework.Color>(bitmap.Width, bitmap.Height);

            for (int i = 0; i < bitmap.Width; i++)
            {
                for (int j = 0; j < bitmap.Height; j++)
                {
                    System.Drawing.Color from = bitmap.GetPixel(i, j);
                    Microsoft.Xna.Framework.Color to = new Microsoft.Xna.Framework.Color(from.R, from.G, from.B, from.A);
                    bitmapContent.SetPixel(i, j, to);
                }
            }

            return new Texture2DContent()
            {
                Mipmaps = new MipmapChain(bitmapContent)
            };
        }
    }

}

Saturday, November 7, 2015

24. Game Project Sneak Peek and Terrain Engine Update


Long-Term Project Sneak Peek
The following video provides a series of early work-in-progress screenshots of 3D models and assets I have created for my long-term game project. These were taken some time ago.
Posted by Steel Cyclone Studios LLC on Monday, October 19, 2015

     
       Above is a sneak peek at some 3D models for the long-term game project I am working on. The footage is still in the very early stages of development. The textures for many of the models are simply a starting point; I will eventually use Substance Painter to texture the models and achieve much better results.



Cyclone Game Engine Terrain and Snow
Posted by Steel Cyclone Studios LLC on Thursday, December 5, 2013

       The Cyclone Game Engine now has the ability to generate a terrain landscape by reading a bitmap and using the intensity of its pixels as height values. I found that the heightmap processor that comes with JigLibX is similar to the XNA Generated Geometry sample: a custom content pipeline processor converts the heightmap into 3D geometry. The game engine also supports billboarding, which is a fast and efficient way to render lots of grass on the terrain. To save memory, the grass consists of 2D images rendered in 3D, and it moves depending on the speed of the wind. As you can see in the video, I also got snow working, which is likewise affected by the direction and speed of the wind. The snow effect is implemented as a 3D particle system using point sprites. I am working further on the game engine's weather component and possibly a day-and-night system. Weather effects can help contribute to the atmosphere of the game, since it is mostly set in an outdoor world.
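
For those curious how the pixel-intensity idea works, here is a minimal runtime sketch that turns a grayscale Texture2D into a height grid; the method name and maxHeight scale are mine, and the engine itself does this inside a content pipeline processor instead:

// Convert a grayscale heightmap texture into a 2D array of heights.
float[,] LoadHeights(Texture2D heightMap, float maxHeight)
{
    Color[] pixels = new Color[heightMap.Width * heightMap.Height];
    heightMap.GetData(pixels);

    float[,] heights = new float[heightMap.Width, heightMap.Height];
    for (int x = 0; x < heightMap.Width; x++)
    {
        for (int y = 0; y < heightMap.Height; y++)
        {
            // Use the red channel as the intensity, scaled to [0, maxHeight].
            heights[x, y] = (pixels[x + y * heightMap.Width].R / 255f) * maxHeight;
        }
    }
    return heights;
}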



       Lately, I have been making improvements to my terrain. I am working on shader effects for it and applying better textures and lighting. The image above shows my terrain for the planet Mars.



Driving in first-person perspective is coming along. I am still making adjustments. 





Monday, August 3, 2015

23. Character Physics Controller Part 1


       An important feature of modern physics libraries is the character controller. This feature may seem obvious to implement, but it is not so straightforward for every game, simply because a controller usually has to "break" the laws of physics in order to follow the user's commands. In this update, I am utilizing the JigLibX physics library. The video shows the very first successful build of my basic character interacting with the physics world. The character for now is essentially a capsule which gets moved by player input; the animated character model is encapsulated inside the capsule. I am adding a crouch function for the player soon. Two other features I am currently adding are making the jumping distance proportional to the character's speed and preserving the character's momentum. A rough sketch of the capsule-movement idea follows below.
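
Here is a hedged sketch of that capsule movement; it assumes a JigLibX-style Body (capsuleBody) with velocity and impulse members, and walkSpeed, jumpImpulse and onGround are illustrative fields rather than the engine's actual API:

// Drive the capsule from the left thumbstick, keeping the vertical
// velocity so gravity and jumping behave normally.
GamePadState pad = GamePad.GetState(PlayerIndex.One);
Vector3 desired = new Vector3(pad.ThumbSticks.Left.X, 0f, -pad.ThumbSticks.Left.Y) * walkSpeed;

Vector3 current = capsuleBody.Velocity;
capsuleBody.Velocity = new Vector3(desired.X, current.Y, desired.Z);

// Jump with an impulse scaled by horizontal speed, so a faster run
// carries into a longer jump and momentum is preserved.
if (pad.Buttons.A == ButtonState.Pressed && onGround)
{
    float boost = 1f + 0.1f * new Vector2(current.X, current.Z).Length();
    capsuleBody.ApplyWorldImpulse(Vector3.Up * jumpImpulse * boost);
}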

       I am fixing the first-person camera's clip planes so that you will not see the player's head and other triangles from the character's model mesh. In this video, I also try to show the collision skins of the models. I am working on an 'Editor' mode where you leave first-person perspective and enter a free-cam mode. From there, you will be able to access the Level Editor. I am still programming the Level Editor and adding more functionality so you can add, remove, rotate, scale and translate models in your levels. The video shows a blank sandbox for demonstration purposes, and the building models in the scene are also for demonstration. Visualizing the collision skins decreases the frame rate as of now, and I am fixing this. I now have the camera fully attached to the character at eye level, and I am able to move the bones of the character model so that the head and arms move according to the player's input. For example, when you look down, the character's head looks down as well, and you can also see your feet.

       Allowing players to look down and see the feet and movements of the characters they control makes them feel more immersed, as if they are actually in the character's shoes, looking through the character's eyes. That way, players don't feel detached from the characters they control. Players can already see their character's hands, arms and the weapons they're holding, but it's even better to feel as if they are that character. I am working on an animation feature called Additive Blending, which allows two animation clips to be played on the animated model simultaneously.

Saturday, June 6, 2015

22. Real-Time Reflection


       I have successfully converted the XNA 3.1 Real-Time Reflection example from the XNA Community website to XNA 4.0. This example of real-time reflection was originally created by Canton Javier Ferrero. The reflection of the ship is distorted by the shape of the wood. The technique is based on rendering the scene from a reflected angle using a clip plane and saving the result into a texture, which is then used when rendering the final scene. This is similar to the technique used in games such as NBA 2K8. I have made my source code for the XNA 4.0 version available below.
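
To give a feel for the technique, here is a minimal sketch of the reflection pass; it assumes a reflective plane at y = 0, an existing RenderTarget2D named reflectionTarget, and a DrawScene(view, projection) helper of your own, with the actual clipping done in the shader:

// Mirror the camera about the reflective plane.
Plane floorPlane = new Plane(Vector3.Up, 0f);
Matrix reflectedView = Matrix.CreateReflection(floorPlane) * view;

// Render the mirrored scene into a texture. The shader should clip
// geometry below the plane so it doesn't bleed into the reflection.
device.SetRenderTarget(reflectionTarget);
device.Clear(Color.Black);
DrawScene(reflectedView, projection);
device.SetRenderTarget(null);

// reflectionTarget is also a Texture2D; sample it (distorted by the
// wood's surface, in this example) when drawing the floor in the main pass.
DrawScene(view, projection);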


XNA 4.0: Download
Indie DB Page: Download