"Perpetual Motion Engine" (Early Demo)... By Me! : D
A new generation of Software Rendering Engines.
We have been busy with side projects, here at PC Perspective, over the last year. Ryan has nearly broken his back rating the frames. Ken, along with running the video equipment and "getting an education", developed a hardware switching device for Wirecast and XSplit.
My project, "Perpetual Motion Engine", involves researching and developing a GPU-accelerated software rendering engine. Now, to be clear, it is in very early development at the moment. The point is not to draw beautiful scenes. Not yet. The point is to show what OpenGL and DirectX do, and what limits are removed when you do the math directly.
Errata: BioShock uses a modified Unreal Engine 2.5, not 3.
In the above video:
- I show the problems with graphics APIs such as DirectX and OpenGL.
- I talk about the problem those APIs attempt to solve: finding color values for your monitor.
- I discuss the advantages of boiling graphics problems down to general mathematics.
- Finally, I prove the advantages of boiling graphics problems down to general mathematics.
I would recommend watching the video first, before moving on to the rest of the editorial. A few parts need to be seen to be properly understood.
The whole purpose of graphics processing is to get a whole bunch of color values: one per pixel, per frame of animation. Originally, every game engine was software rendered, but certain tasks took up the vast majority of CPU time. Graphics processors were invented to offload those difficult tasks to hardware better suited for them. APIs, such as DirectX and OpenGL, were created to harness these computing devices.
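To make that concrete, here is a minimal sketch (hypothetical code, not taken from Perpetual Motion Engine) of what all rendering boils down to: run some function once per pixel, per frame, and collect the resulting color values into a buffer.

```javascript
// Compute one RGBA color per pixel -- the same byte layout a 2D Canvas
// ImageData uses. "shade" plays the role of a fragment shader, in plain JS.
function renderFrame(width, height, shade) {
  const pixels = new Uint8ClampedArray(width * height * 4);
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      const [r, g, b] = shade(x, y);  // per-pixel math, done directly
      const i = (y * width + x) * 4;
      pixels[i]     = r;
      pixels[i + 1] = g;
      pixels[i + 2] = b;
      pixels[i + 3] = 255;            // fully opaque
    }
  }
  return pixels;
}

// Example: a one-row horizontal gradient from blue to red.
const frame = renderFrame(256, 1, (x, y) => [x, 0, 255 - x]);
```

Everything a GPU does, no matter how elaborate, ultimately feeds a buffer shaped like this one.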
Even then, at least for a while, the hardware-accelerated engines (while running at much higher resolutions) lost some features to gain that performance. A good example is Quake's "Fullbright" textures.
These graphics processing units (GPUs), due to the complexity of modern shaders and the interest of high performance computing (HPC) customers, are now massively parallel solvers of general math problems. Tim Sweeney of Epic Games has predicted the end of DirectX and OpenGL for quite some time, although cost concerns dampened his projections. Back in 2008, Tim expected to write "100% of the rendering code" for their next generation in a GPU-accelerated "real programming language" such as CUDA. The next year, he predicted an order of magnitude increase in development costs. Unreal Engine 4, as it turned out, is based on DirectX and OpenGL.
But, as described in the video above, there are benefits to trailblazing.
Note: Due to a limitation in Nokia Research's WebCL prototype, textures were calculated in WebCL and passed through a 2D Canvas before arriving at WebGL. Nokia cannot, from an extension, access WebGL buffers directly; Mozilla would need to add that functionality whenever WebCL is integrated into Firefox itself.
Another Note: Perpetual Motion Engine is designed to not require an internet connection. You could copy the game to a USB thumb drive or a directory on your hard drive and simply point the web browser to its index.html (Firefox will not even throw a security complaint, as all files are contained in subdirectories). Of course, if it takes off, some games could be hosted over HTTP and, thus, require an internet connection... but that is their choice.
See why I chose that name?
The real benefit is developer control. Imagine the following four scenarios:
1) If you develop a primarily voxel-based game, like Minecraft, it might make sense not to convert your content to triangles and render them with a scanline algorithm; you might prefer to render voxels directly. (Update: Apparently point-in-triangle testing, just about the algorithm I used in the demo, is what GPUs have commonly used to acquire the pixels within a triangle, rather than scanline conversion, since around... the Nintendo DS.)
2) A video production house could wish to divide their compute workload across multiple Tesla, FirePro, or Xeon Phi coprocessors in a way not possible with SLI or CrossFire. For instance, they could deliberately render six or more frames in advance and sacrifice input delay (in a high-quality mode) for the ability to harness a half-dozen fully utilized GPUs.
3) A game could even combine rendering methods. Imagine a typical scene with a few reflective objects. The scene could be rendered, in the normal way, using the primary GPU; the reflective surfaces could, simultaneously, be raytraced to a separate render buffer using the (normally idle) APU or iGPU. The main image and its highlights could then be layered, like Photoshop, and composited together using any algorithm the developer wishes to implement (normal transparency, add, multiply, screen, etc.).
4) As mentioned earlier, a game could have gorgeous scenes composed of hundreds of millions of triangles (or whatever). If the developer scales back the resolution, polygon count, and features to require some factor less performance, it will run on hardware that is about that factor slower. You could see the same game running on a high-end desktop and on a tablet plugged into a TV, just much less pretty.
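The point-in-triangle test mentioned in scenario 1 is simple enough to sketch. This is hypothetical illustration code, not the demo's actual source: the "edge function" gives a signed area whose sign tells you which side of an edge a point falls on, and a pixel is inside the triangle when it is on the same side of all three edges.

```javascript
// Signed area of the parallelogram spanned by A->B and A->P.
// Positive on one side of edge AB, negative on the other, zero on the line.
function edge(ax, ay, bx, by, px, py) {
  return (bx - ax) * (py - ay) - (by - ay) * (px - ax);
}

// A point is inside the triangle when all three edge tests agree in sign.
function pointInTriangle(px, py, ax, ay, bx, by, cx, cy) {
  const w0 = edge(ax, ay, bx, by, px, py);
  const w1 = edge(bx, by, cx, cy, px, py);
  const w2 = edge(cx, cy, ax, ay, px, py);
  // Accept either winding order: all non-negative or all non-positive.
  return (w0 >= 0 && w1 >= 0 && w2 >= 0) || (w0 <= 0 && w1 <= 0 && w2 <= 0);
}
```

To rasterize, you would run this test over every pixel in the triangle's bounding box; each pixel is independent, which is exactly why the approach parallelizes so well on a GPU.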
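The compositing step from scenario 3 is also easy to express once you control the math. A hedged sketch (my own illustration, not engine code) of a few Photoshop-style blend modes, operating per channel on 0-255 values:

```javascript
// Per-channel blend modes for layering a raytraced highlights buffer over
// the main render. "base" is the main image, "top" is the overlay layer.
const blend = {
  normal:   (base, top, alpha) => base + (top - base) * alpha,
  add:      (base, top) => Math.min(255, base + top),       // clamped sum
  multiply: (base, top) => (base * top) / 255,              // darkens
  screen:   (base, top) => 255 - ((255 - base) * (255 - top)) / 255, // lightens
};
```

Since the developer implements this step directly, nothing stops them from inventing blend modes no fixed-function pipeline ever offered.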
This has been something I have thought about for quite some time. If you have any questions or comments, please, discuss below! I am still deciding where to go from here, from a project sense, but that is not getting in the way of my development. I believe this would be a good open source project, perhaps BSD-license game code and LGPL-license engine code, but it is still much too early for anything like that.
And, of course, I will keep our readers up to date as it (literally) develops.