John Carmack Keynote: Quakecon 2013
Carmack Continued: Displays, Software, and Everything Else Imaginable
Mr. Carmack is very happy about where displays are going. They have improved significantly over the years for a variety of reasons. Many of the early TN issues have been solved, and the number of quality panels a user can buy at very low prices is impressive. Latency is still an issue, though. While most panels advertise very low pixel response times, the other contributors to latency go unaddressed: once the signal leaves the graphics card, it is often delayed by the panel's internal processing before it finally appears as a pixel change.
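To make that point concrete, here is a minimal sketch of an input-to-photon latency budget. The stage names and millisecond figures are illustrative assumptions of my own, not measurements from any particular panel; the point is simply that advertised pixel response is only one small term in the total.

```python
# Hypothetical latency budget for one frame, from simulation to photons.
# All numbers are illustrative assumptions, not measured values.
stages_ms = {
    "game simulation (one 60 Hz frame)": 16.7,
    "GPU render (one frame)": 16.7,
    "panel internal processing (scaler etc.)": 20.0,
    "pixel response (advertised GtG)": 5.0,
}

total = sum(stages_ms.values())
for name, ms in stages_ms.items():
    print(f"{name:42s} {ms:5.1f} ms")
print(f"{'total input-to-photon latency':42s} {total:5.1f} ms")
```

Note that the advertised 5 ms figure accounts for less than a tenth of the hypothetical total.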
Refresh rates need to improve as well. Most panels today run at 60 Hz, and relatively few reach 120 Hz or above; users often have to resort to tricks to get their panels running at 120 Hz in games and on the desktop. Connectivity has improved with DisplayPort, but there is still a lot of work to be done to get panels where they should be. He is not entirely pleased with the push for 4K, as he feels more work needs to be done to implement 120 Hz effectively before a big push in pixel density. For most use cases, he is very pleased with 1080p. While "retina"-type displays are great to look at, he feels the advantages of a higher refresh rate outweigh those of greater pixel density for both desktop applications and games.
Head mounted displays need greater improvements as well, both in pixel density and in latency. While higher pixel density will get rid of the "latticework" effect we see in current HMDs, latency is the bigger problem. The human eye (and brain) is very sensitive to movement and visual cues: when the head moves and the image is out of phase with that movement, nausea and other negative effects soon overtake the experience. Pixel persistence and judder are also big issues in these use cases. Any pixel smearing in an HMD can likewise cause disorientation and nausea, because the visual cues again fail to match what the brain expects from the movement.
One very interesting area he discussed was stepping away from fixed refresh rates. Unlike CRTs, LCDs do not inherently need a fixed refresh interval, and he commented quickly on asynchronous schemes that could be implemented: instead of waiting for the next refresh, the GPU sends a frame to the panel as soon as it is ready. This would get rid of tearing and a lot of the other visual artifacts that simply should not have to exist on a modern LCD panel. A game rendering at 72 fps, for example, could be displayed cleanly instead of producing the tearing and runt frames that occur when it is forced onto a fixed 60 Hz panel without vsync.
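The mismatch he describes can be shown with a toy timing model. This is a sketch under simplifying assumptions (a perfectly steady 72 fps render rate, and counting any frame that completes mid-scanout as a torn frame), not a model of any real display pipeline; integer "ticks" of 1/3600 s, the least common multiple of 60 and 72, keep the arithmetic exact.

```python
# Toy model: a game rendering at a steady 72 fps, shown on a fixed 60 Hz
# panel without vsync vs. a panel that refreshes whenever a frame is ready.
TICKS_PER_SEC = 3600                 # common timebase for 60 Hz and 72 fps
frame_ticks = TICKS_PER_SEC // 72    # 50 ticks between rendered frames
refresh_ticks = TICKS_PER_SEC // 60  # 60 ticks between fixed refreshes

# Times at which the GPU finishes each of 12 consecutive frames.
finish_times = [i * frame_ticks for i in range(1, 13)]

# Fixed 60 Hz without vsync: any frame completing mid-scanout is swapped
# during the scan, producing a torn (or runt) frame on screen.
tears = [t for t in finish_times if t % refresh_ticks != 0]

# Variable refresh: each frame is scanned out exactly when it finishes,
# so none of them tear.
print(f"fixed 60 Hz, no vsync: {len(tears)} of {len(finish_times)} frames tear")
print("variable refresh: 0 frames tear")
```

In this toy model only the frames that happen to land exactly on a refresh boundary escape tearing; a variable-refresh panel sidesteps the problem entirely by moving the boundary to the frame.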
Touch has become incredibly important for input. It was pretty much maligned until we got into the smartphone era; now it is essentially ubiquitous on both cell phones and tablets. We are starting to see more use in notebooks and desktops, and it has been a disruptive technology. While the mouse and keyboard are very accurate and low latency for games and applications, touch has become much more intuitive and natural.
He is not a big fan of game controllers. While very early controllers like the NES pad did great for what they were made for (making Mario go left/right and jump), they have held up much worse as games have grown more complex. Analog sticks are very problematic for fine-grained control, leaving gamers with coarse movements and limited precision over increasingly complex games.
Audio has been the red-headed stepchild for quite some time, for a variety of reasons. The primary one is that spending transistors and software effort on CPU and GPU power had much greater returns than concentrating on sound. This is going to change, and the primary driver will be head mounted displays. Audio is going to become much more important to a truly immersive environment, so we will see the return of much more complex HRTF (head-related transfer function) algorithms. John hopes that at some point users can simply close their eyes and "echo-locate" objects in a room.
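As one concrete piece of what an HRTF encodes, here is a sketch of Woodworth's classic spherical-head approximation of interaural time difference (ITD), the tiny delay between a sound's arrival at each ear that the brain uses to locate sources. The head radius and speed of sound are generic textbook values, and the function is my own illustration, not anything from id's audio code.

```python
import math

# Woodworth's spherical-head approximation of interaural time difference.
# Typical textbook constants; real HRTFs encode much more than this.
HEAD_RADIUS_M = 0.0875       # average adult head radius
SPEED_OF_SOUND_M_S = 343.0   # speed of sound in air at ~20 C

def itd_seconds(azimuth_rad: float) -> float:
    """Delay between the ears for a distant source at a given azimuth
    (0 = straight ahead, pi/2 = directly to one side)."""
    return (HEAD_RADIUS_M / SPEED_OF_SOUND_M_S) * (azimuth_rad + math.sin(azimuth_rad))

for deg in (0, 30, 60, 90):
    us = itd_seconds(math.radians(deg)) * 1e6
    print(f"azimuth {deg:3d} deg -> ITD {us:6.1f} microseconds")
```

The delays peak at only a few hundred microseconds for a source directly to the side, which is part of why the rendering and output chain must be so precise for convincing spatial audio.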
Software was an area that John really pushed hard on, but unfortunately I am not a programmer; much of this was unfamiliar territory for me, so I will try to pass along what he said. id has been doing a lot of code optimization to clean things up. There is still code in their latest engine that dates back to the mid-90s. Some things just have not required changes, but we are getting to the point where parts need to be updated so they can be ported more effectively in the future.
John is really pushing for less complexity and more "purity" in code. Many functions in the past referenced external state that may not port easily to modern operating systems and consoles. John wants to focus on code that is first robust, then predictable, and finally performant; he believes this will make code longer lived and more easily portable. He spoke at great length about this, how he is implementing it at id, and the work he has done with other programming languages and tools (Haskell and Lisp). If a reader really wants to know more about what he said, they should certainly watch a replay of the speech; it is far too much to cover well in this article.
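A minimal illustration of what that purity buys, using a hypothetical game function of my own invention (not id code): the impure version depends on hidden global state and the wall clock, so it cannot be replayed or unit-tested reliably; the pure version takes every input explicitly and always returns the same output for the same arguments.

```python
import time

# Impure: the result depends on a hidden global and the clock, so the
# same call can return different values and drags its environment along
# when ported.
damage_multiplier = 1.0

def apply_damage_impure(health: int, hit: int) -> int:
    global damage_multiplier
    if time.time() % 2 < 1:          # hidden dependency on the wall clock
        damage_multiplier = 1.5
    return health - int(hit * damage_multiplier)

# Pure: every input is an explicit parameter, and identical inputs always
# produce identical outputs -- the property being argued for.
def apply_damage_pure(health: int, hit: int, multiplier: float) -> int:
    return health - int(hit * multiplier)

print(apply_damage_pure(100, 10, 1.5))  # always 85, every run
```

The pure version is trivially testable and portable precisely because nothing about its behavior lives outside its signature.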
OpenGL has slowly taken over the world, which was not a foregone conclusion some years ago. OpenGL was often too slow in implementing changes, but the rise of mobile computing built on OpenGL and OpenGL ES has allowed the API to regain prominence. While Microsoft has done a good job evolving its walled-garden DirectX, OpenGL has advanced considerably as well, and the mobile market has really powered that transformation. We must also consider that the PS3 and PS4 lean toward OpenGL-style APIs rather than DirectX. DirectX will obviously not go away, but it is not nearly the dominant rendering platform it once was.
Obviously a lot more was said in the 2.5+ hours John talked, as well as in the Q&A session. He quickly touched on HSA and how important it will eventually be, but there are many hurdles in the way. The primary issue is that few use cases in modern games can actually utilize heterogeneous workloads. They have experimented with some OpenCL code, but it is still not mature enough to really lean on. Eventually the learning curve and workloads will adjust to HSA-type solutions, but so far that has not been the case.
I highly suggest watching John's keynote if you have the time. It is a fascinating couple of hours, though it certainly is not for everyone. John's breadth of knowledge across so many subjects is impressive.