
GTC 2012: NVIDIA Announces GeForce GRID Cloud Gaming Platform

Author: Ryan Shrout
Manufacturer: NVIDIA

Partners and Our Thoughts

The truth of the matter is that most of this technology is available elsewhere, and we are eager to see how NVIDIA plans to differentiate its GRID service.  One way to do so is with Kepler GPUs and the power efficiency benefits they can offer to the server farms doing the streaming for Gaikai and others.


Increased density of compute power is huge for these companies, and if NVIDIA can in fact quadruple the number of game streams possible per server node, it would be a tremendous benefit.  By enabling four GPUs for game streaming in a single server, NVIDIA claims to cut the power consumption of each game stream by 50% as well, from 150 watts to 75 watts.
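For a rough sense of scale, here is a back-of-envelope sketch using only the figures NVIDIA quoted (4x stream density, 150 watts down to 75 watts per stream); the 10,000-stream farm size is a hypothetical round number for illustration, not an NVIDIA figure.

```python
# Back-of-envelope power math using NVIDIA's quoted per-stream figures.
# The 10,000-stream farm below is a hypothetical example, not an NVIDIA number.

STREAMS = 10_000

watts_before = STREAMS * 150  # one stream per GPU node, pre-GRID
watts_after = STREAMS * 75    # GRID's claimed 75 W per stream

print(f"Pre-GRID:  {watts_before / 1000:,.0f} kW for {STREAMS:,} streams")
print(f"With GRID: {watts_after / 1000:,.0f} kW for {STREAMS:,} streams")
print(f"Savings:   {1 - watts_after / watts_before:.0%}")
```

The density claim compounds the savings: four streams per server also means a quarter of the server nodes, and their fixed overhead, for the same capacity.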


NVIDIA has built a compute card expressly for this purpose, known as the NVIDIA GeForce GRID Processor: a dual-GPU card with a total of 3,072 CUDA cores and 8GB of memory.  That works out to 1,536 cores per GPU, matching the GK104 found in the GTX 680, but with a TDP of 250 watts it should run at slightly lower clock speeds than the recently released dual-GPU GTX 690.  This is also likely the same design as the newly announced Tesla K10.


The goal of not just NVIDIA's new streaming technology push, but of all streaming gaming companies, is to make the game console a virtual device.  Playing games on your TV (without the need for a physical console), on your tablet, and even on your phone, with graphics you would only expect to find on a PC, is a lofty goal that would benefit gamers all across the world by lowering the cost of entry.


NVIDIA isn't going it alone though, and as you would expect of a company with this much gaming cachet, it has lined up the best in the business.  Online game streaming services including Gaikai and OTOY have signed on, virtualization technologies from Microsoft, Xen, Citrix and VMware are supported, and system OEMs like Cisco, Dell, IBM and HP will be able to produce systems with the GeForce GRID processor.

Closing Thoughts

We are really just starting to learn about NVIDIA's new GRID technology, and because of that I will reserve our final verdict for when we get some hands-on time with an implementation of it in a real-world environment.  I still have many questions about what NVIDIA is actually doing to improve the latency problem in online streaming gaming, and we only have part of the answer today.  While the H.264 encoder in the Kepler GPU will definitely help on both ends of the network connection (server and client), the truth is that Sandy Bridge and Ivy Bridge CPUs have had that capability for some time; what does NVIDIA offer above and beyond that?
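To make the latency question concrete, here is a minimal sketch of the stages a cloud-rendered frame passes through; every millisecond value is an illustrative assumption for discussion, not an NVIDIA or measured figure.

```python
# Illustrative latency budget for one cloud-rendered frame.
# All stage timings are assumptions, not NVIDIA-provided numbers.
pipeline_ms = {
    "game simulation + render": 16,  # one frame at 60 fps on the server GPU
    "capture + H.264 encode": 5,     # the stage Kepler's on-die encoder targets
    "network transit": 30,           # depends entirely on the user's connection
    "H.264 decode (client)": 5,      # nearly any hardware decoder handles this
    "display refresh": 16,           # one refresh on a 60 Hz panel
}

for stage, ms in pipeline_ms.items():
    print(f"{stage:26} {ms:3} ms")
print(f"{'total':26} {sum(pipeline_ms.values()):3} ms")
```

Kepler's encoder can only shrink the encode term; the network and display terms are outside NVIDIA's control, which is why the remaining questions matter.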

Improving the latency of "game time" is also interesting: yes, Kepler can render a 720p or 1080p image much faster than the previous GPUs used on streaming servers today, but how GPUs can shrink the programmatic "game time" internal to the game engine still needs to be explored.


NVIDIA is definitely treading into new waters here by not only supporting streaming gaming solutions but actively pushing them.  It is possible that this could backfire on NVIDIA, with users deciding to subscribe to Gaikai rather than buy the new GeForce GTX 680.  Another theory is that NVIDIA, having missed out on all of the next-generation console licenses, will instead take the battle to the entirety of the console market by offering "console quality" gaming through GRID.

We have much to learn about the improvements brought forth by GRID and we'll be sure to share them with you soon.

May 15, 2012 | 03:03 PM - Posted by Justin (not verified)

I'm sure the reason they posted a decreased network latency figure is that they have some sort of more efficient encoding technology that allows the H.264 video to be even smaller in size, which means lower transfer times and less network latency. I doubt they meant overcoming the current bandwidth constraints.

May 15, 2012 | 03:14 PM - Posted by Ryan Shrout

Fair enough - though a ping time is a ping time.

Keep in mind they can't do much fancier on the encode, because they admitted to being able to use "just about any H.264 decoder" on the receiving end.

May 15, 2012 | 09:17 PM - Posted by Justin (not verified)

True and true. But ping on a connection (I'm sure they're taking into account the best of the best for internet connections) would probably range between 10 and 25ms. I'm thinking that the time it takes to transfer high quality H.264 video is more substantial than the ping delay, which is why better compression would make sense to me.

I was really just trying to rationalize their statistic there. Otherwise, like you guys said, it would be ridiculous for them to claim that they can somehow make the infrastructure better.

Thanks for the reply by the way. It's nice to see you guys in the comments section. Love the site and the podcast.
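To put rough numbers behind the compression argument in this thread: assuming a hypothetical 10 Mbps H.264 stream at 30 fps delivered over a link of the same speed (all three figures are illustrative assumptions, not NVIDIA's), the per-frame transfer time does dwarf a 10-25ms ping.

```python
# Rough per-frame transfer time for a streamed H.264 feed.
# Bitrate, frame rate, and link speed are illustrative assumptions.
stream_bps = 10_000_000  # 10 Mbps video stream
fps = 30
link_bps = 10_000_000    # connection exactly as fast as the stream

avg_frame_bits = stream_bps / fps            # ~333 kbit per frame on average
transfer_ms = avg_frame_bits / link_bps * 1000

print(f"Average frame size: {avg_frame_bits / 8 / 1024:.0f} KiB")
print(f"Per-frame transfer time: {transfer_ms:.0f} ms")
# ~33 ms per frame vs. a 10-25 ms ping: halving the bitrate halves the
# larger term, which is the commenter's point about better compression.
```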

May 15, 2012 | 06:48 PM - Posted by DragonTHC (not verified)

I'll have to admit, this is both innovative and terrifying news on the gaming front.

If this takes off, there will be no need for massive gaming computers anymore. It's kind of scary.

I mean, Gaikai has leagues lower latency than OnLive does. But this is going to give pause to developers who might want to treat cloud gaming as a passing fad.

I, incorrectly, made a prediction in 2008 about the future of gaming. I thought then that a linux live disc would be the future. I was terribly wrong on that front.

May 16, 2012 | 12:25 AM - Posted by Dragorth (not verified)

Personally, I am interested in a different use case. I am currently learning 3d modeling, and have been using Mudbox on my PC. I would ideally like to use a tablet optimized version, say on my Kindle Fire, to allow me to mold the model with my hands. Using this cloud could give me that, and the latency issues may not matter so much.

May 16, 2012 | 09:35 AM - Posted by Ryan Shrout

NVIDIA talked about that use case pretty specifically, with standard GPU virtualization as an option called NVIDIA VGX. Could be VERY powerful for development!

May 16, 2012 | 11:30 AM - Posted by Dude o Lagmeather (not verified)

How can a reviewer say 100ms + 66ms is reasonable for any console?

Typical gaming latencies:

LAN: 10ms
Same coast: 30ms
Half US: 60-70ms
Across Atlantic: +80-100ms
Across Pacific: +100-120ms
Anything else: FORGETABOUTIT.

May 16, 2012 | 02:01 PM - Posted by Ryan Shrout

Please keep in mind that the 100ms from "game time" is there for you even when you are playing on a local display. It is not "latency" in the typical way you are thinking.
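A quick sketch of that distinction, using the 100ms game time from the article and the 66ms streaming overhead quoted above (the display refresh figure is an assumption): the game-time term is paid whether the game runs locally or in the cloud.

```python
# Perceived input-to-display latency, local console vs. cloud stream.
# 100 ms "game time" and 66 ms streaming overhead come from the discussion
# above; the display refresh figure is an illustrative assumption.
GAME_TIME_MS = 100        # internal engine/render pipeline, paid either way
STREAM_OVERHEAD_MS = 66   # encode + network + decode, cloud path only
DISPLAY_MS = 16           # one 60 Hz refresh (assumption)

local = GAME_TIME_MS + DISPLAY_MS
cloud = GAME_TIME_MS + STREAM_OVERHEAD_MS + DISPLAY_MS
print(f"Local console: {local} ms")
print(f"Cloud stream:  {cloud} ms (+{STREAM_OVERHEAD_MS} ms over local)")
```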

May 16, 2012 | 01:23 PM - Posted by manny (not verified)

Something very promising that will also help is Gaikai's non-linear downloads:

http://gamasutra.com/view/news/168734/Gaikai_adds_rapid_downloads_to_clo...

Maybe they can even get the devices to do part of the processing to counter the latency even more.

May 22, 2012 | 06:59 AM - Posted by collie man (not verified)

This is all very cool, but I for one will need to see it work before I have an opinion. IN THEORY streaming games rendered in the cloud should be quite doable, maybe not "twitch" MMOFPS titles, but there should be a lot of games that can be handled this way, HOWEVER...
I have been "hurt" before. OnLive was sooooooo exciting as a concept, we all waited on pins and needles for its arrival, and we all know how that turned out.
Time will tell; if this does work as well as they say, it may change gaming forever. Time will tell.
