Hello and welcome to my next engine update. My apologies for the long wait: I recently started a new job and am still adjusting to my new schedule while trying to make time for working on the game engine. Oh, before I forget, the dates in the right-hand corner of the images below were not updated. So... let's get started! With the help of the billboards sample, I recently got the vegetation processor working in my game engine. As stated on the billboards link, the vegetation processor renders a very large number of billboard sprites, using a vertex shader to perform the computations entirely on the GPU. It's a way of faking complex objects without rendering a full 3D model, and it helps performance: drawing the billboards adds no CPU load beyond that of normal static geometry. I tested the vegetation processor in my sample menu system's 3D scene, and initially I encountered some rendering issues with the positioning and transparency of the grass billboards on the 3D terrain model.
Fixing the Position
What is the View Matrix?
Basically, the View Matrix (or View Transform) locates the viewer in world space, transforming vertices into camera space. The View Matrix represents the position and orientation of the camera: in camera space the viewer sits at the origin, looking down the negative z-axis (XNA uses a right-handed coordinate system). It is created by passing in the camera location, the point the camera is looking at, and which axis represents "Up" in the world, typically via Matrix.CreateLookAt(). XNA uses a Y-up orientation, which is important to be aware of when creating 3D models: both Blender and 3D Studio MAX default to treating Z as the up/down axis, so models exported from them may need to be rotated.
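To make the look-at construction concrete, here is a small Python sketch of the same right-handed math that XNA performs for you (the helper names are mine, and the matrices use XNA's row-vector convention, with the translation in the bottom row):

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def create_look_at(eye, target, up):
    """Right-handed look-at matrix, row-major, row-vector convention."""
    zaxis = normalize(tuple(e - t for e, t in zip(eye, target)))  # camera faces -Z
    xaxis = normalize(cross(up, zaxis))
    yaxis = cross(zaxis, xaxis)
    return [
        [xaxis[0], yaxis[0], zaxis[0], 0.0],
        [xaxis[1], yaxis[1], zaxis[1], 0.0],
        [xaxis[2], yaxis[2], zaxis[2], 0.0],
        [-dot(xaxis, eye), -dot(yaxis, eye), -dot(zaxis, eye), 1.0],
    ]

def transform(v, m):
    """Row-vector transform: (x, y, z, 1) * m."""
    v = (v[0], v[1], v[2], 1.0)
    return tuple(sum(v[i] * m[i][j] for i in range(4)) for j in range(4))

# Camera 5 units up the +Z axis, looking at the origin, with Y up.
view = create_look_at(eye=(0, 0, 5), target=(0, 0, 0), up=(0, 1, 0))
# The world origin lands 5 units in front of the camera (negative Z in view space).
print(transform((0, 0, 0), view))  # (0.0, 0.0, -5.0, 1.0)
```

A quick sanity check: a camera at the origin looking down -Z with Y up produces the identity matrix, since view space and world space coincide in that case.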
What is the Projection Matrix?
The Projection Matrix (or Projection Transform) can be thought of as the actual camera lens, which is used to convert 3D view space to 2D. The Projection Transform is mainly a scale and perspective projection; it converts the viewing frustum into a cuboid shape. It is created by calling Matrix.CreatePerspectiveFieldOfView() or Matrix.CreateOrthographic(). As a general rule, for a 2D game you use an orthographic projection, while in 3D you use a perspective projection. When creating a perspective projection we specify the field of view (think of this as the degrees of visibility from the center of your eye's view), the aspect ratio (the proportion between the width and height of the display), and the near and far planes (the minimum and maximum depths the camera renders, basically the range of the camera). These values all go together to calculate something called the view frustum, which can be pictured as a truncated pyramid in 3D space representing what is currently visible to the camera.
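As an illustration, here is a minimal Python sketch of the right-handed perspective matrix that Matrix.CreatePerspectiveFieldOfView computes, with depth mapped into [0, 1] (helper names are my own, row-vector convention as before):

```python
import math

def create_perspective_fov(fov_y, aspect, near, far):
    """Right-handed perspective projection, depth mapped to [0, 1]."""
    y_scale = 1.0 / math.tan(fov_y / 2.0)   # cot(fov/2)
    x_scale = y_scale / aspect
    return [
        [x_scale, 0.0, 0.0, 0.0],
        [0.0, y_scale, 0.0, 0.0],
        [0.0, 0.0, far / (near - far), -1.0],
        [0.0, 0.0, near * far / (near - far), 0.0],
    ]

def project(v, m):
    """Row-vector transform followed by the perspective divide."""
    v = (v[0], v[1], v[2], 1.0)
    x, y, z, w = (sum(v[i] * m[i][j] for i in range(4)) for j in range(4))
    return (x / w, y / w, z / w)

proj = create_perspective_fov(math.pi / 4, 16 / 9, near=1.0, far=1000.0)

# The camera looks down -Z, so a point on the near plane projects to depth 0
# and a point on the far plane to (approximately) depth 1; anything outside
# that range is clipped away.
print(project((0, 0, -1.0), proj)[2])
print(project((0, 0, -1000.0), proj)[2])
```

Note how the field of view and aspect ratio only affect the x/y scales, while the near and far planes only affect how depth is remapped; that separation is why you can change the camera's range without distorting the image.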
What is the World Matrix?
The World Matrix is used to position your entity within the scene. Essentially, the World Transform places a model in the 3D world. A World Transform changes coordinates from model space, where vertices are defined relative to a model's local origin, to world space, where vertices are defined relative to an origin common to all the objects in a scene. So in addition to positional information, the World Matrix can also represent an object's orientation.
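A world matrix is usually composed from simpler transforms. This Python sketch (my own illustration of the math behind XNA's Matrix.CreateRotationY and Matrix.CreateTranslation) rotates a model 90 degrees around Y and then moves it, taking a vertex from model space into world space:

```python
import math

def create_rotation_y(angle):
    c, s = math.cos(angle), math.sin(angle)
    return [[c, 0.0, -s, 0.0],
            [0.0, 1.0, 0.0, 0.0],
            [s, 0.0, c, 0.0],
            [0.0, 0.0, 0.0, 1.0]]

def create_translation(x, y, z):
    return [[1.0, 0.0, 0.0, 0.0],
            [0.0, 1.0, 0.0, 0.0],
            [0.0, 0.0, 1.0, 0.0],
            [x, y, z, 1.0]]

def multiply(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def transform(v, m):
    v = (v[0], v[1], v[2], 1.0)
    return tuple(sum(v[i] * m[i][j] for i in range(4)) for j in range(4))[:3]

# World matrix: spin the model 90 degrees around Y, then move it to (10, 0, 0).
# In the row-vector convention, the transform applied first goes on the left.
world = multiply(create_rotation_y(math.pi / 2), create_translation(10, 0, 0))

# A vertex at (1, 0, 0) in model space ends up at (10, 0, -1) in world space.
x, y, z = transform((1, 0, 0), world)
print(round(x, 6), round(y, 6), round(z, 6))  # prints: 10.0 0.0 -1.0
```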
In short, think of it like so:
- View Matrix = Camera Location
- Projection Matrix = Camera Lens
- World Matrix = Object Position/Orientation in 3D Scene
In short, multiplying these matrices together in the first rendering pass of the billboards positioned them correctly. Further explanation is in the source code below. You can read more information about matrices and cameras in my second blog post.
Now that the position was fixed, I encountered another issue: transparency. With the help of Shawn Hargreaves's blog post on depth sorting alpha blended objects, I was able to solve it. Check out the source code of my terrain's draw method below, with the code from the billboard sample integrated. It's not perfect by any means, but it's a decent start.
public void DrawTerrain(Model model, Matrix view, Matrix projection, Matrix world)
{
    Matrix[] transforms = new Matrix[model.Bones.Count];
    model.CopyAbsoluteBoneTransformsTo(transforms);

    // First we draw the ground geometry using BasicEffect.
    foreach (ModelMesh mm2 in model.Meshes)
    {
        if (mm2.Name != "Billboards")
        {
            foreach (BasicEffect be2 in mm2.Effects)
            {
                ScreenManager.GraphicsDevice.BlendState = BlendState.Opaque;
                ScreenManager.GraphicsDevice.DepthStencilState = DepthStencilState.Default;
                ScreenManager.GraphicsDevice.RasterizerState = RasterizerState.CullCounterClockwise;

                be2.View = view;
                be2.Projection = projection;
                be2.World = transforms[mm2.ParentBone.Index] * world;

                be2.EnableDefaultLighting();
                be2.AmbientLightColor = new Vector3(0.1f);
                be2.SpecularPower = 5;
            }

            mm2.Draw();
        }
    }

    // Then we use a two-pass technique to render alpha blended billboards with
    // almost-correct depth sorting. The only way to make blending truly proper for
    // alpha objects is to draw everything in sorted order, but manually sorting all
    // our billboards would be very expensive. Instead, we draw in two passes.
    //
    // The first pass has alpha blending turned off, alpha testing set to only accept
    // ~95% or more opaque pixels, and the depth buffer turned on. Because this is only
    // rendering the solid parts of each billboard, the depth buffer works as
    // normal to give correct sorting, but obviously only part of each billboard will
    // be rendered.
    //
    // Then in the second pass we enable alpha blending, set alpha test to only accept
    // pixels with fractional alpha values, and set the depth buffer to test against
    // the existing data but not to write new depth values. This means the translucent
    // areas of each billboard will be sorted correctly against the depth buffer
    // information that was previously written while drawing the opaque parts, although
    // there can still be sorting errors between the translucent areas of different
    // billboards.
    //
    // In practice, sorting errors between translucent pixels tend not to be too
    // noticeable as long as the opaque pixels are sorted correctly, so this technique
    // often looks ok, and is much faster than trying to sort everything 100%
    // correctly. It is particularly effective for organic textures like grass and
    // trees.
    foreach (ModelMesh mm2 in model.Meshes)
    {
        if (mm2.Name == "Billboards")
        {
            // First pass renders opaque pixels.
            foreach (Effect effect in mm2.Effects)
            {
                ScreenManager.GraphicsDevice.BlendState = BlendState.Opaque;
                ScreenManager.GraphicsDevice.DepthStencilState = DepthStencilState.Default;
                ScreenManager.GraphicsDevice.RasterizerState = RasterizerState.CullNone;
                ScreenManager.GraphicsDevice.SamplerStates[0] = SamplerState.LinearClamp;

                // At first I stored only the camera's view matrix in the effect's
                // View parameter. To fix the position, I multiplied the terrain
                // model's absolute bone transform ( transforms[mm2.ParentBone.Index] ),
                // which plays the role of the world matrix here, by the camera's
                // view matrix. If we recall, the world matrix is the object's
                // position and orientation in the 3D scene, so to position the
                // billboards correctly it needed to be combined with the camera's
                // view matrix (the variable view).
                effect.Parameters["View"].SetValue(transforms[mm2.ParentBone.Index] * view);
                effect.Parameters["Projection"].SetValue(projection);
                effect.Parameters["LightDirection"].SetValue(lightDirection);
                effect.Parameters["WindTime"].SetValue(time);
                effect.Parameters["AlphaTestDirection"].SetValue(1f);
            }

            mm2.Draw();

            // Second pass renders the alpha blended fringe pixels.
            foreach (Effect effect in mm2.Effects)
            {
                ScreenManager.GraphicsDevice.BlendState = BlendState.NonPremultiplied;
                ScreenManager.GraphicsDevice.DepthStencilState = DepthStencilState.DepthRead;

                effect.Parameters["AlphaTestDirection"].SetValue(-1f);
            }

            mm2.Draw();
        }
    }
}
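The two-pass idea in the comments above can be simulated on the CPU with a single pixel. This small Python sketch (my own illustration, not engine code) shows why the depth values written in the opaque pass keep the blended fringe pass sorted: the translucent fragment sits behind a nearer solid one, so the depth test rejects it instead of letting it bleed through.

```python
# One-pixel "framebuffer": a color and a depth value.
color, depth = (0.0, 0.0, 0.0), float("inf")

# Fragments covering that pixel: (depth, rgb, alpha), drawn in arbitrary order.
fragments = [
    (5.0, (0.0, 1.0, 0.0), 1.0),   # solid grass blade, nearest
    (9.0, (0.5, 0.3, 0.1), 1.0),   # solid dirt, farthest
    (7.0, (0.0, 0.8, 0.0), 0.4),   # translucent fringe in between
]
THRESHOLD = 0.95  # roughly the ~95% alpha cutoff described above

# Pass 1: opaque pixels only, depth test AND depth write
# (like BlendState.Opaque with DepthStencilState.Default).
for frag_depth, rgb, alpha in fragments:
    if alpha >= THRESHOLD and frag_depth < depth:
        color, depth = rgb, frag_depth

# Pass 2: fringe pixels only, depth test but NO depth write (like DepthRead),
# alpha blended over whatever survived pass 1.
for frag_depth, rgb, alpha in fragments:
    if alpha < THRESHOLD and frag_depth < depth:
        color = tuple(a * alpha + c * (1 - alpha) for a, c in zip(rgb, color))

# The near solid blade wins the depth test; the fringe at depth 7 is behind it,
# so it is correctly rejected rather than drawn on top.
print(color)  # (0.0, 1.0, 0.0)
```

Swap the fringe fragment's depth to something nearer than 5.0 and it blends over the blade instead, which is exactly the behavior the second pass is meant to allow.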
Now that the positioning and transparency of the grass billboards are fixed, I plan on adding different grass images later to give the vegetation more variety. If you have found this blog post helpful for your XNA and/or MonoGame projects, please comment below. Thanks for reading!