Saturday, November 14, 2015

27. Euclideon's Unlimited Detail

       The false assumption this day and age is that you need great hardware in order to have amazing graphics. The video below destroys that assumption. Hardware simply runs the graphics; it doesn't create the content that the artists behind a game made themselves. I say this because I hear so many statements from gamers today like, "They should re-make this game that came out two years ago with Xbox One graphics!" That doesn't fully make sense, because it's the game engine that handles that for the most part. The real question is, "Can the hardware handle it?"
       Some game companies are remaking games today to show off the so-called "power" of next-gen consoles. We have seen this with Gears of War: Ultimate Edition for Xbox One, for instance. To do this, the artists have to change the actual graphics content in the game themselves. Basically, they take that same game and increase the polygons in the 3D models to show off more detail, improve the lighting, and add features like better shaders, normal mapping, bump mapping, and maybe even ray tracing if the hardware can handle it. The problem I am getting at is that more and more games are becoming highly dependent on hardware specs.

I recently saw the image below posted by IGN on Facebook. It's a screenshot comparison of Call of Duty: Black Ops 3, which recently came out for both the Xbox 360 and Xbox One platforms. With point-cloud data, they could pull off the same level of detail you see on Xbox One, on the Xbox 360. So why am I showing you all of this?

Man, Black Ops 3 looks rough on last gen... #UpAtNoon
Posted by IGN on Monday, November 9, 2015

Why? Simply put, gamers are literally buying into the idea that games on last-gen can't look as good as games on next-gen due to a lack of hardware specs. Euclideon has shattered this belief!

       I know there are many people out there who are skeptical, like the gentleman in the video above. Light maps were not actually used; the lighting came from the scan itself. If this technology were used in game engines, they might need to come up with a way to remove that baked-in lighting. After the scan, the game engine would then handle the lighting effects using shaders and dynamic lighting. I will admit, I too was skeptical at first, until I actually saw the technology run on the Nintendo Wii. This is wild because the Nintendo Wii, for example, lacks the hardware to run a game like The Last of Us or Crysis. With that said, game developers might still need advanced hardware to create games, but not necessarily to play them. So the question in your minds is:

Can it be Proven Further?
Maybe a Live Demonstration?

       Whenever I talk about unlimited detail and point-cloud data, I get responses like, "It's not graphics that make a game successful, it's gameplay!" I am fully aware of this. I don't favor graphics over gameplay! I would take the content in the game, the story, and how well the game plays over graphics any day.

How will this affect gamers and game development companies?
  1. People will not have to buy expensive gaming PCs and hardware to pull off the level of graphics that they want or to run their games on full settings.

  2. Regardless of what console or game platform you own, it will be even harder to tell the difference visually.

  3. Game Development companies will save money creating art assets for their games because they can simply laser scan objects from the real world. So instead of creating a high-poly 3D car model, why not laser scan a car?

  4. Game Development companies will be able to expand their audience and reach more people to play their games, thus ushering in more money. 

       I would love to somehow integrate point-cloud data into my game engine someday but that is the least of my concerns right now.

If this technology were in your game engine, how would you utilize it? 

       Before I would even consider implementing this technology into my own game or game engine, I would need to have fully answered and resolved the following questions:

  1. How would unlimited detail work for animated models?
           Dell claims they got animation to work very well, but has not yet provided any information as to how. If you are unfamiliar with the term "rigging," the best way I can describe it is that a 3D model is given a skeleton and a painted weight map. The weight map determines how much, and where, individual bones deform the mesh. You need more than just a model and a rig, though. A rig is only as good as the topology. Topology is the polygon and edge flow of the model. If you look at the wireframe of a well-made 3D model, you can see how the model's faces and edges follow along muscle groups. After a model is rigged, it is animated and then exported. I realize I am omitting a lot, but this is just a basic understanding.
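To make the weight-map idea concrete, here is a minimal, hypothetical sketch of linear blend skinning in Python. This is not Euclideon's method, and the bone "transforms" are simplified to plain translations; the point is only to show how per-bone weights blend to move a single vertex.

```python
# Minimal linear blend skinning sketch (hypothetical, illustrative only).
# Each vertex is moved by a weighted average of its bones' transforms.
# Real engines use full 4x4 bone matrices; translations keep the idea visible.

def skin_vertex(vertex, bones, weights):
    """vertex: (x, y, z); bones: per-bone translations (dx, dy, dz);
    weights: one weight per bone, painted in the weight map, summing to 1."""
    x = y = z = 0.0
    for (dx, dy, dz), w in zip(bones, weights):
        x += w * (vertex[0] + dx)
        y += w * (vertex[1] + dy)
        z += w * (vertex[2] + dz)
    return (x, y, z)

# A vertex influenced 70/30 by two bones: one still, one moved up by 2 units.
v = skin_vertex((1.0, 0.0, 0.0), bones=[(0, 0, 0), (0, 2, 0)], weights=[0.7, 0.3])
print(v)  # (1.0, 0.6, 0.0) -- the vertex drifts partway toward the moved bone
```

The open question for point clouds is what replaces "vertex" here: a scan produces billions of unweighted dots, so the painted weight map that drives this blend has no obvious equivalent.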
           My thought starting out would be to have my game engine somehow convert my 3D models into point-cloud data strictly in-game. The program logic would basically trick the computer into thinking the model should be treated like a traditional 3D model when in reality it is not. The reason is so that I can always go back and make changes to my models in my 3D modeling programs without having to worry about whether those programs can import, render, and support such models. A point-cloud model viewer mode in my engine could show what the model looks like when imported, and what it looks like after it's converted.
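The in-engine conversion I describe could, in principle, start as simply as sampling points across each triangle's surface. This is a hypothetical sketch of that idea, not Euclideon's conversion pipeline; the function names and the uniform barycentric sampling trick are my own illustration.

```python
import random

# Hypothetical sketch: turn a triangle mesh into a point cloud by sampling
# random surface points with barycentric coordinates. Not Euclideon's actual
# conversion -- just the basic "mesh in, dots out" idea.

def sample_triangle(a, b, c):
    """Return one point uniformly distributed over triangle abc."""
    r1, r2 = random.random(), random.random()
    if r1 + r2 > 1.0:               # fold the sample back inside the triangle
        r1, r2 = 1.0 - r1, 1.0 - r2
    return tuple(a[i] + r1 * (b[i] - a[i]) + r2 * (c[i] - a[i]) for i in range(3))

def mesh_to_point_cloud(triangles, points_per_triangle=100):
    """Each triangle is (a, b, c) with 3D tuples; returns a flat point list."""
    cloud = []
    for a, b, c in triangles:
        cloud.extend(sample_triangle(a, b, c) for _ in range(points_per_triangle))
    return cloud

# One unit triangle in the z=0 plane becomes 100 points on its surface.
tri = [((0, 0, 0), (1, 0, 0), (0, 1, 0))]
cloud = mesh_to_point_cloud(tri)
print(len(cloud))  # 100
```

A real converter would need far denser, more even sampling, plus per-point color pulled from the texture, but this shows why the authoring workflow could stay polygon-based while the engine sees only dots.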

  2. Will unlimited detail be able to handle the textures for my models?
           So will this technology be able to interpret the UV coordinates? Will it matter? How will I be able to use the texture map I made for the model? If a model conversion in my game engine is possible, could the textures somehow be merged with the model?

  3. What is the extent of lighting?
           The lighting actually came from the environment of the laser scan itself. In a game, Euclideon would have to come up with a way to remove the lighting from the environment after the laser scan, then add the effect back using shaders and dynamic lighting powered by the game engine. Whether the lighting will be specular, diffuse, etc. will be up to the game engine programmers and developers themselves.

  4. How will physics and collision detection work with point-cloud data?
           As it stands right now, there are no physics engines to date that support collision detection for unlimited detail or point-cloud data. It would take a tremendous amount of calculation to handle collision detection for every single object in a game world composed of point-cloud data. This is why we have not seen it implemented in games yet; however, Bruce Dell did say they were working on two titles. Dell said the following:

    "The first is an adventure game with a sword, solid scan forests, and a lot of alien type monsters. The second is a cute, clay scanned adventure where you ride giraffes. Can't say more than that I'm afraid."
    --Bruce Dell

     Unfortunately, we haven't seen any gameplay footage yet, which still raises a lot of questions. To somehow get the physics to work, my thought is to trick the computer into thinking that the converted 3D model should still be treated like a traditional 3D model, so collision detection would work mostly the same. The problem area is that unlimited detail can render scenes detailed all the way down to individual rocks. So will those individual rocks in the terrain be treated as separate models with their own weight and physics? That would be an enormous number of physics calculations to process.
           The engine would essentially have to make physics calculations for every single individual rock. This could be avoided if a single rock model were duplicated however many times throughout the game world by instancing the model. The computer would then treat all the rocks as copies of one model and only have to make physics calculations for that specific rock model. Will I have to stick with my terrain processor starting out instead? 3D models in and of themselves often aren't as much of a strain on computers as the logic is; they are just one of a few reasons why your computer may slow down, depending on how detailed they are and how many polygons they have. Thanks to point clouds, computers read and process tiny dots faster than polygons.
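The "one collision shape shared by every instance" idea can be sketched in a few lines. This is a hypothetical illustration in Python (the class and method names are my own, and the collision shape is simplified to a bounding sphere): the expensive work is done once per model, while each placed rock contributes only a position.

```python
# Hypothetical sketch of instanced physics: compute one collision shape
# (here just a bounding-sphere radius) for the rock model once, and let
# every placed instance reuse it with nothing but its own position.

def bounding_radius(points):
    """Radius of a sphere around the model origin enclosing all its points."""
    return max((x * x + y * y + z * z) ** 0.5 for x, y, z in points)

class InstancedModel:
    def __init__(self, points):
        self.radius = bounding_radius(points)  # computed once, shared by all
        self.positions = []                    # one entry per placed rock

    def place(self, pos):
        self.positions.append(pos)

    def collides_with_sphere(self, center, r):
        """Test a moving sphere against every instance via the shared radius."""
        cx, cy, cz = center
        for px, py, pz in self.positions:
            d2 = (cx - px) ** 2 + (cy - py) ** 2 + (cz - pz) ** 2
            if d2 <= (self.radius + r) ** 2:
                return True
        return False

rocks = InstancedModel([(0.5, 0, 0), (0, 0.5, 0), (0, 0, 0.5)])  # radius 0.5
rocks.place((0, 0, 0))
rocks.place((10, 0, 0))
print(rocks.collides_with_sphere((0.6, 0, 0), 0.2))  # True  (0.6 <= 0.5 + 0.2)
print(rocks.collides_with_sphere((5, 0, 0), 0.2))    # False (clear of both)
```

Of course, a scanned terrain where every rock is genuinely unique gets no such savings, which is exactly the concern raised above.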

  5. Will point-cloud data work best for static objects in the game?
     Many game engines extract the triangle and polygon information from 3D models in order to create more accurate collision detection. With a point-cloud based model, will I still need to use primitive invisible meshes for collision detection? How accurate will it be?

  6. Now that my game engine is capable of applying fur to 3D models, will it still look the same after conversion?

Why is this technology not utilized in videogames yet?

       I sort of answered some of this in my own questions above. As it stands right now, many game companies (especially big-name ones which I won't mention) are either unaware of this software technology or simply skeptical of it. Some game companies are rebuilding their game engines with the Xbox One's and PS4's hardware in mind. With unlimited detail, this wouldn't necessarily be as big of an issue. Videogame companies will have to approach Euclideon themselves if they want to invest in this technology and have it running in their games. Other game companies have made technology similar to unlimited detail, but plan on saving it for later use since they encountered many issues.

       John Carmack, who is widely known as "the creator of the FPS," suggested that a system like Euclideon's was possible in theory. Coming from Carmack, this greatly surprised me. Euclideon's point-cloud technology runs entirely in software, so it does not use up a tremendous amount of processing power; its demonstrations did not utilize the GPU at all. Intel had once tried making its own system to do things along the lines of unlimited graphics. Unfortunately, it ended up closing this avenue and set it aside as something for the distant future, when computers have "more power."

Minecraft creator Notch made extremely negative comments towards both Dell and the validity of his work.

"If their demo was real and you add up all the little atoms in the island then it would use petabytes of data. But everybody knows that if you see a tree twice, it's reconnected. There isn't a game that I know of that doesn't use the same object twice and I assure you that they didn't store that object two times separately."
-- Notch

       This is a misinterpretation of the point-cloud technology. Notch is referring to a rendering technique called instancing, which I touched upon a bit earlier but will save for later discussion. In short, instancing is when the same model is duplicated multiple times to save processing, and the engine can potentially treat each duplicated model as its own separate entity. It's a way of "tricking" the computer. Games often render many copies of the same model, for instance by covering a landscape with trees or by filling a room with crates. Instancing helps performance by reducing the cost of repeated draw calls. In games, the calls needed to render a model are relatively expensive and can quickly add up if you are drawing hundreds, if not thousands, of models. Many Xbox Live Indie Games with "Minecraft-like" worlds utilize this technique.
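The storage point Notch raises is exactly what instancing answers, and it can be shown in a toy sketch. This is a hypothetical, API-free illustration (the class and function are my own, not any real graphics library): one mesh is stored once, and a forest is just a list of transforms referencing it.

```python
# Illustrative sketch of instancing (not a real graphics API): the engine
# stores ONE copy of the mesh plus a list of per-instance positions, so a
# forest of 100 trees costs one mesh in memory rather than 100 copies.

class Mesh:
    def __init__(self, name, vertices):
        self.name = name
        self.vertices = vertices  # stored once, shared by every instance

def draw_instanced(mesh, transforms):
    """Pretend instanced draw call: one mesh, many placements.
    Returns how many copies were drawn from that single mesh."""
    drawn = 0
    for _ in transforms:
        drawn += 1  # a real renderer applies each transform on the GPU
    return drawn

tree = Mesh("tree", vertices=[(0, 0, 0), (0, 1, 0)])
positions = [(x, 0.0, z) for x in range(10) for z in range(10)]
print(draw_instanced(tree, positions))  # 100 trees from one shared mesh
```

In real APIs this maps to a single instanced draw call with a per-instance transform buffer, which is why a scanned island full of repeated trees does not need petabytes of unique data.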

       Cevat Yerli, the head of Crytek, however, didn't think that Euclideon was creating some big hoax. He went on to say that he truly respects the work Euclideon has accomplished and that unlimited detail is absolutely believable. In his past keynotes, he always talked about how such technology could be possible in the future.

       With so many negative reactions to Dell's claim that Unlimited Detail will improve graphics by 100,000 times, has this industry become... jealous? Is this part of the reason why we haven't seen game companies approach Euclideon? Still, there are plenty of positive reactions as well.

What can we do as gamers?

       As gamers, we need to bring this technology up to game companies who are building games with intense realism. We need to stop questioning Euclideon about when this technology will be implemented in games, and instead ask those questions of game companies themselves. We can encourage game companies to get involved by creating our own petitions if we have to. As game consumers, we need to realize that we have the power. The purchasing decisions we make contribute to the decisions game companies make with their games, for better and for worse. We've accepted the notion that last-gen is somehow holding games back, so we purchase monster gaming computers, new graphics cards, next-gen consoles and hardware. With unlimited detail, videogame graphics will no longer be held back by hardware specs. Whether we realize it or not, we as gamers play a crucial role in getting game companies to learn from their mistakes and take some risks when we choose to vote with our wallets.

For more updates, you can also follow their Facebook Page:

So what are your thoughts on Euclideon's technology breakthrough?
Please comment below  


  1. First off, where did you get to see unlimited detail running on Nintendo Wii? And is Euclideon still planning to release the 2 games they promised would come this year?

    1. Euclideon is not a game company. They are an Australian computer software company best known for their unreleased middleware 3D graphics engine called Unlimited Detail. It is based on a point-cloud search-engine system which surpasses the need for the polygon-based rendering that current 3D game models use. I think game companies should take advantage of this technology and approach Euclideon. This technology could help Fallout 4, for example, because that game has such a large scale. The demo for the Wii was simply a tech demo. There were no animations yet; the demo was simply showing laser-scanned information on screen, running directly off of the Wii's hardware. There was no gameplay, and no physics, lighting, or artificial intelligence in the demonstration yet.

    2. If I may correct you, Euclideon announced back in September of 2014 that they were launching a games division in 2015 and would release 2 games. So whatever became of that idea? And I'm also wondering, at what point in time did you see this demo?

    3. There has been much confusion lately on Twitter about the tech demos I said were running on the Wii. Euclideon released some web demos last year in 2015 of their technology. You can visit the link here: .
      All we did was stream them through the Nintendo Wii's web browser. With a fast internet connection, they ran pretty decently. Many people had issues with the web demos' visual quality because of latency and possibly low bandwidth. We basically waited until everything had fully loaded, which took a while, and after that it ran pretty smoothly. We've asked Euclideon for downloads instead, so that we could run the demos directly from the Nintendo Wii's hardware and avoid the latency and bandwidth issues that come with streaming. This wasn't too profound, since it's simply laser-scanned data being displayed. These are just tech demos, not actual games. However, I was wrong about Euclideon not being a game company; I have updated this blog post with what Bruce Dell stated. While I don't think Euclideon has given up on games, unfortunately we have not seen any gameplay footage of the two games as it stands. It still raises many questions and concerns. Hopefully, we will see more details and footage in the near future. Thanks for the comments, and my apologies for the confusion.

  2. Very mysterious..... sounds like someone wants to spread rumours.
    Then you must also have seen the new Nintendo console, or what?

    1. I simply stated it was a tech demo of the technology running on the Wii. It was shown years ago. Whether or not Nintendo is making a new console is completely irrelevant to the topic of this blog post. My goal with this specific blog post was not to spread rumors. My goal was to help encourage gamers to look into point-cloud technology, known as unlimited detail, and to encourage game companies to invest in this technology. Gamers are asking, "Why isn't this technology in games or game engines yet?" Sadly, so many game companies are either unaware or skeptical of this technology. If game hardware companies can still persuade people into spending thousands of dollars on hardware alone to achieve great graphics, why would they stop? If game companies can knowingly and deliberately sell a half-baked product at full retail price, full of micro-transactions, why would they stop, especially if gamers keep buying content that is already on the disc? In the end, what we should be looking for as gamers are the features that really set the systems apart and make them unique. It's a matter of preference, and specs don't necessarily sell systems; awesome videogames do.

  3. Really amazing blog, I’d love to discover some extra information.

    Tally Training

    1. Thank you for the comment! Currently Euclideon is working with hologram technology by introducing the world's first holographic entertainment center called Holoverse. You can find more information about Holoverse here: You can see a video of it in action here: Also, you can follow them on Facebook here: