CES 2013: NVIDIA Grid to Fight Gaikai and OnLive?

Subject: Editorial, General Tech, Graphics Cards, Systems, Mobile, Shows and Expos | January 7, 2013 - 01:07 AM |
Tagged: CES, ces 2013, nvidia

The second act of the NVIDIA keynote re-announced Grid, the company's cloud-based gaming product first mentioned back in May during GTC. You have probably heard of its competitors, Gaikai and OnLive. The mission of these services is to do all of the gaming computation on a server somewhere and let the gamer simply log in and play.


Grid is NVIDIA's product from top to bottom. Even the interface was created by NVIDIA and, as the company proudly notes, is rendered server-side on Grid itself. It was demonstrated streaming directly to an LG smart TV and to Android tablets. A rack will contain 20 servers housing 240 GPUs, for a total of 200 teraflops of computational power. Each server will initially support 24 players, which is interesting given the last year of NVIDIA announcements.

Last year, during the GK110 announcement, NVIDIA said Kepler would support hundreds of clients accessing a single server for professional applications. It seems only natural that Grid would benefit from that advancement, but apparently it does not. With a limit of 24 players per box, equating to a maximum of two players per GPU, it seems odd that such a cap would be in place. The benefit of stacking many players per GPU is that you can achieve better-than-linear scaling across the long tail of games.
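To put the announced figures in perspective, here is a quick back-of-envelope calculation in Python; this is a sketch derived only from the rack numbers quoted above, not from any official per-GPU specification:

```python
# Back-of-envelope math from NVIDIA's stated rack figures (20 servers,
# 240 GPUs, 200 TFLOPS, 24 players per server). Derived numbers only.
servers_per_rack = 20
gpus_per_rack = 240
tflops_per_rack = 200
players_per_server = 24

gpus_per_server = gpus_per_rack / servers_per_rack      # 12 GPUs per box
players_per_gpu = players_per_server / gpus_per_server  # 2 players per GPU
tflops_per_gpu = tflops_per_rack / gpus_per_rack        # ~0.83 TFLOPS per GPU

print(f"{gpus_per_server:.0f} GPUs/server, {players_per_gpu:.0f} players/GPU, "
      f"{tflops_per_gpu:.2f} TFLOPS/GPU")
```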

Then again, all they need to do is solve the scaling problem before they have a problem with scaling their service.

PC Perspective's CES 2013 coverage is sponsored by AMD.

Follow all of our coverage of the show at http://pcper.com/ces!

NVIDIA GeForce GTX 660 SE Information Comes Out

Subject: Graphics Cards | January 4, 2013 - 09:10 PM |
Tagged: nvidia, kepler, gk106, gtx 660 se

Are you ready for another entry in the confusing graphics market?  NVIDIA has you covered with the upcoming GeForce GTX 660 SE, which will target the $180-200 market and hit the AMD Radeon HD 7870 1GB square in the jaw.  In the current lineup of GeForce cards, that is the one area where NVIDIA is at an obvious disadvantage: the gap between the GTX 650 Ti and the GTX 660.

As is usually the case when a new graphics card is ready to hit the market, leaks occur in all directions.  Already we are seeing screenshots of specifications and benchmarks from PCEVA.  If the rumors are right, you'll see the GTX 660 SE released in Q1 2013 with 768 CUDA cores, 24 ROPs and a 192-bit memory bus.  Interestingly, the GTX 660 SE will be based on GK106 and has the same core count as the GTX 650 Ti; the performance difference will come from the move from a 128-bit memory bus to 192-bit.


Current GPU-Z screenshots show a clock speed of 928 MHz with a Boost clock of 1006 MHz, just about the same clock rates as the GTX 650 Ti (though the 650 series does not have GPU Boost technology enabled).  It also looks like the GTX 660 SE will use 2GB of GDDR5 memory running at 5.6 GHz.
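That memory-bus difference is easy to quantify. Here is a quick sketch using the rumored 5.6 GHz effective data rate, with the GTX 650 Ti's reference 5.4 GHz for comparison; peak bandwidth is simply the effective GDDR5 data rate times the bus width in bytes:

```python
# Peak memory bandwidth = effective GDDR5 data rate (GT/s) x bus width (bytes).
def gddr5_bandwidth_gbs(effective_gtps: float, bus_bits: int) -> float:
    return effective_gtps * bus_bits / 8

print(gddr5_bandwidth_gbs(5.6, 192))  # GTX 660 SE (rumored): 134.4 GB/s
print(gddr5_bandwidth_gbs(5.4, 128))  # GTX 650 Ti (reference): 86.4 GB/s
```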


With CES just around the corner (we are leaving in the morning!), we will ask around and see if anyone has more information about a solid price point and release time frame!

Source: WCCFtech
Manufacturer: PC Perspective

A change is coming in 2013

If the new year brings us anything, it looks like it might be the end of using "FPS" as the primary measuring tool for graphics performance on PCs.  A long, long time ago we started with simple "time demos" that recorded rendered frames in a game like Quake and then played them back as quickly as possible on a test system.  The lone result was a time, in seconds, which was then converted to an average frame rate since the total number of recorded frames was known.

More recently we saw a transition to frame rates over time and the advent of frame time graphs like the ones we have been using in our graphics reviews on PC Perspective. This expanded the amount of data required to get an accurate picture of graphics and gaming performance, but it was indeed more accurate, giving us a clearer image of how GPUs (and CPUs and systems, for that matter) perform in games.

And even though the idea of frame times has been around just as long, not many people were interested in that level of detail until this past year.  A frame time is the amount of time each frame takes to render, usually listed in milliseconds, and can range from 5ms to 50ms depending on performance.  For reference, 120 FPS equates to an average of 8.3ms per frame, 60 FPS to 16.6ms and 30 FPS to 33.3ms.  But rather than averaging those out over each second of time, what if you looked at each frame individually?
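To make the relationship concrete, here is a minimal Python sketch (the frame times are made up for illustration) showing how a healthy-looking average can hide per-frame spikes:

```python
# Made-up frame times (ms): a decent average hiding periodic spikes.
frame_times_ms = [16.6, 16.4, 16.9, 48.2, 16.5, 16.7, 16.3, 47.9]

def frame_time_ms(fps: float) -> float:
    """Convert an FPS figure to its per-frame time in milliseconds."""
    return 1000.0 / fps

print(frame_time_ms(120), frame_time_ms(60), frame_time_ms(30))  # 8.3, 16.6, 33.3

avg_ms = sum(frame_times_ms) / len(frame_times_ms)
print(f"average: {avg_ms:.1f} ms ({1000.0 / avg_ms:.0f} FPS)")
print(f"worst frame: {max(frame_times_ms):.1f} ms")
# The ~41 FPS average looks playable; the 48 ms frames are the visible
# stutter that an FPS average completely hides.
```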


Scott over at Tech Report started doing exactly that this past year and found some interesting results. I encourage all of our readers to follow what he has been doing, as I think you'll find it incredibly educational and interesting.

Through emails and tweets, many PC Perspective readers have been asking for our take on it and why we weren't yet testing graphics cards in the same fashion.  I've stayed quiet about it simply because we were working on quite a few different angles on our side and I wasn't ready to share results.  I am still not ready to share the bulk of our information, but I am ready to start the discussion, and I hope our community finds it compelling and offers some feedback.


At the heart of our unique GPU testing method is this card, a high-end dual-link DVI capture card capable of handling 2560x1600 resolutions at 60 Hz.  Essentially this card acts as a monitor for our GPU test bed and allows us to capture the actual display output that reaches the gamer's eyes.  This method is the best possible way to measure frame rates, frame times, stutter, runts, smoothness, and any other graphics-related metrics.

Using that recorded footage, which sometimes reaches 400 MB/s of sustained writes at high resolutions, we can then analyze the frames one by one, with the help of some additional software.  There are a lot of details that I am glossing over, including the need for perfectly synced frame rates and absolutely zero dropped frames during recording and analysis, but trust me when I say we have been spending a lot of time on this.
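As a rough illustration of what per-frame analysis can surface, here is a hypothetical Python sketch, not our actual tooling: assume we have already extracted how many scanlines of each captured 2560x1600 output frame a given rendered frame occupied. Tiny slivers ("runts") increment the FPS counter without contributing anything the gamer can actually see:

```python
# Hypothetical post-capture analysis; numbers and threshold are made up.
# With vsync off, several rendered frames can share one captured frame,
# each occupying some number of the 1600 scanlines.
CAPTURE_HEIGHT = 1600
RUNT_THRESHOLD = int(CAPTURE_HEIGHT * 0.05)  # under 5% of the screen

scanlines_per_frame = [820, 770, 14, 806, 790, 12, 788]  # made-up capture data

runts = [h for h in scanlines_per_frame if h < RUNT_THRESHOLD]
print(f"{len(scanlines_per_frame)} frames counted by software FPS tools, "
      f"{len(runts)} of them runts the gamer never really sees")
```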

Continue reading our editorial on Frame Rating: A New Graphics Performance Metric.

Phoronix on OpenCL Driver Optimization, NVIDIA vs. AMD

Subject: Editorial, General Tech, Graphics Cards | December 28, 2012 - 02:43 AM |
Tagged: opencl, nvidia, amd

The GPU is slowly becoming the parallel processing complement to your branching-logic-adept CPU. Developers have been slow to adopt this new technology, but that does not stop the hardware manufacturers from putting the kettle on for when guests arrive.

While the transition to GPGPU is slower than I am sure many would like, developers are rarely quick on the uptake of new technologies. The Xbox 360 was one of the first platforms where unified shaders became mandatory, and early developers avoided them by offloading vertex code to the CPU. On that note: how much software still gets released without multicore support?


Phoronix, practically the arbiter of all Linux news, decided to put several GPU drivers and their manufacturers to the test. AMD was up first, and its results showed a pretty sizeable jump in performance around October of this year across most of the tests. The article on NVIDIA arrived two days later and showed performance trending basically nowhere since February's 295.20 release.

A key piece of information is that both benchmarks were performed on last-generation GPUs: the GTX 460 on the NVIDIA side, with the Radeon HD 6950 holding AMD's flag. You might note that 295.20 was the last tested driver released prior to the launch of Kepler.

These results suggest that since the launch of Kepler, NVIDIA has done practically zero optimization for its older Fermi architecture, at least as far as these Linux OpenCL benchmarks are concerned. AMD, on the other hand, seems more willing to go back and improve the performance of its prior generation as new driver versions are released.
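For readers unfamiliar with what these benchmarks actually exercise, here is a minimal OpenCL example in Python. It assumes the pyopencl package and a working OpenCL driver, and it is a generic vector add rather than anything from the Phoronix suite. Note that the vendor driver compiles the kernel source at runtime, which is exactly where driver-side optimization work shows up:

```python
import numpy as np
import pyopencl as cl

ctx = cl.create_some_context()  # pick an available OpenCL device
queue = cl.CommandQueue(ctx)

a = np.random.rand(1_000_000).astype(np.float32)
b = np.random.rand(1_000_000).astype(np.float32)

mf = cl.mem_flags
a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

# The driver compiles this source at runtime; a driver update can change
# the generated code (and performance) without touching the application.
program = cl.Program(ctx, """
__kernel void vadd(__global const float *a,
                   __global const float *b,
                   __global float *out) {
    int i = get_global_id(0);
    out[i] = a[i] + b[i];
}
""").build()

program.vadd(queue, a.shape, None, a_buf, b_buf, out_buf)
result = np.empty_like(a)
cl.enqueue_copy(queue, result, out_buf)
assert np.allclose(result, a + b)
```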

There are very few instances where AMD beats out NVIDIA in terms of driver support -- it is often a selling point for the jolly green giant -- but this appears to be a definite win for AMD.

Source: Phoronix

Podcast #231 - Intel NUC, AMD 8000M GPUs, Building a Hackintosh and more!

Subject: General Tech | December 20, 2012 - 03:16 PM |
Tagged: video, virtu, VIA, tegra 4, Samsung, radeon, podcast, nvidia, nvelo, nuc, lucid, Intel, hackintosh, gigabyte, Dataplex, arm, amd, 8000m

PC Perspective Podcast #231 - 12/20/2012

Join us this week as we talk about the Intel NUC, AMD 8000M GPUs, Building a Hackintosh and more!

You can subscribe to us through iTunes and you can still access it directly through the RSS page HERE.

The URL for the podcast is: http://pcper.com/podcast - Share with your friends!

  • iTunes - Subscribe to the podcast directly through the iTunes Store
  • RSS - Subscribe through your regular RSS reader
  • MP3 - Direct download link to the MP3 file

Hosts: Ryan Shrout, Josh Walrath, Allyn Malventano and Chris Barbere

This Podcast is brought to you by MSI!

Program length: 1:13:41

Podcast topics of discussion:

  1. 0:01:50 We are going to try Planetside 2 after the podcast!
  2. Week in Reviews:
    1. 0:02:50 Intel Next Unit of Computing NUC
    2. 0:17:55 Corsair AX860i Digital ATX Power Supply
    3. 0:19:00 HP Z1 Workstation All in One
    4. 0:25:00 Building a Hackintosh Computer - A Guide
  3. 0:32:35 This Podcast is brought to you by MSI!
  4. News items of interest:
    1. 0:33:30 Cutting the Cord Complete!
    2. 0:36:10 VIA ARM-based SoCs in upcoming ASUS tablet
    3. 0:42:00 Lucid MVP 2.0 will be sold direct
    4. 0:44:50 Samsung acquires NVELO SSD Caching Software
    5. 0:49:00 AMD announces mobility 8000M series of GPUs
    6. 0:54:15 Some NVIDIA Tegra 4 Details
    7. 0:58:55 NEC Unveils Super Thin Ultrabook
    8. 1:00:30 Win a Sapphire HD 7870 GHz Edition FleX!!
  5. Closing:
    1. 1:02:30 Hardware / Software Pick of the Week
      1. Ryan: Panasonic GH2 Micro 4/3 Camera
      2. Josh: Preparation is key!
      3. Allyn: Cheap RAM
      4. Chris: Had solar panels installed this week
  1. 1-888-38-PCPER or podcast@pcper.com
  2. http://pcper.com/podcast
  3. http://twitter.com/ryanshrout and http://twitter.com/pcper
  4. Closing/outro

Be sure to subscribe to the PC Perspective YouTube channel!!

NVIDIA Tegra 4 Details Revealed By Leaked Slide

Subject: Processors, Mobile | December 19, 2012 - 03:26 AM |
Tagged: wayne, tegra 4, SoC, nvidia, cortex a15, arm

Earlier this year, NVIDIA showed off a roadmap for its Tegra line of mobile system on a chip (SoC) processors. The next-generation Tegra 4 mobile chip, codenamed Wayne, will be the successor to Tegra 3.

Tegra 4 will use a 28nm manufacturing process and feature improvements to the CPU, GPU, and IO components. Thanks to a leaked slide that appeared on Chip Hell, we now have more details on Tegra 4.


The 28nm Tegra 4 SoC will keep the same 4+1 CPU design* as Tegra 3, but it will use ARM Cortex A15 CPU cores instead of the Cortex A9 cores found in the current generation. NVIDIA is also improving the GPU portion: Tegra 4 will reportedly feature a 72-core GPU based on a new architecture. Unfortunately, we do not have specifics on how that GPU is set up architecturally, but the leaked slide indicates it will be as much as 6x faster than NVIDIA’s own Tegra 3. It will allegedly be fast enough to power displays at resolutions from 1080p @ 120Hz up to 4K (refresh rate unknown). Don’t expect it to drive games at native 4K resolution; however, it should run a tablet OS just fine. Interestingly, NVIDIA has also included hardware acceleration for VP8 and H.264 video at resolutions up to 2560x1440.
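A quick pixel-rate comparison shows why the slide's unspecified 4K refresh rate matters. This is a back-of-envelope sketch assuming "4K" means 3840x2160:

```python
# Raw pixel rates for the display targets mentioned on the slide.
def mpixels_per_second(width: int, height: int, hz: int) -> float:
    return width * height * hz / 1e6

print(mpixels_per_second(1920, 1080, 120))  # 1080p @ 120 Hz: ~249 Mpx/s
print(mpixels_per_second(3840, 2160, 30))   # 4K @ 30 Hz:     ~249 Mpx/s
print(mpixels_per_second(3840, 2160, 60))   # 4K @ 60 Hz:     ~498 Mpx/s
```

In other words, 4K at 30 Hz is roughly the same pixel rate as the 1080p @ 120Hz figure on the slide, while 4K at 60 Hz would double it, so the unstated refresh rate determines whether the two claims are equivalent or not.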

Additionally, Tegra 4 will support dual-channel DDR3L memory, USB 3.0 and hardware-accelerated security options including HDCP, Secure Boot, and DRM, which may make Tegra 4 an attractive option for Windows RT tablets.

The leaked slide has revealed several interesting details about Tegra 4, but it has also raised some questions about the nitty-gritty. Also, there is no mention of the dual-core variant of Tegra 4 – codenamed Grey – that is said to include an integrated Icera 4G LTE cellular modem. Here’s hoping more details surface at CES next month!

* NVIDIA's name for a CPU that features four ARM CPU cores and one lower power ARM companion core.

Source: Chip Hell

How much horsepower do you need to BLOPS2 properly?

Subject: Graphics Cards | December 3, 2012 - 02:02 PM |
Tagged: nvidia, call of duty, black ops 2, amd

[H]ard|OCP set out to determine how well AMD's and NVIDIA's cards deal with the new Call of Duty game.  To do so they took a system built on a GIGABYTE Z77X-UP4-TH with a Core i7 2600K @ 4.8GHz and 8GB of Corsair RAM, then tested an HD 7970, 7950 and 7870 as well as a GTX 680, 670 and 660 Ti.  There is good news for both graphics companies and gamers: the HD 7870 was the slowest card and still managed great performance on maximum settings @ 2560x1600 with 8X MSAA and FXAA.  For the absolute best performance, NVIDIA's GTX 680 is your go-to card.  Since this is a console port, albeit one that [H] describes as well implemented, don't expect to be blown away by the quality of the graphics.


"Call of Duty: Black Ops II is the first Call of Duty game on PC to support DX11 and new graphical features. Hopefully improvements to the IW Engine will be enough to boost the CoD franchise near the top graphics-wise. We also examine NVIDIA's TXAA technology which combines shader based antialiasing and traditional multisampling AA."

Here are some more Graphics Card articles from around the web:

Graphics Cards

Source: [H]ard|OCP

Too good to be true; bad coding versus GPGPU compute power

Subject: General Tech | November 23, 2012 - 01:03 PM |
Tagged: gpgpu, amd, nvidia, Intel, phi, tesla, firepro, HPC

The skeptics were right to question the huge improvements seen when using GPGPUs for heavy parallel computing tasks.  The cards do help a lot, but the 100x improvements reported by some companies and universities had more to do with poorly optimized CPU code than with the processing power of GPGPUs.  This news comes from someone you might not expect to burst this particular bubble: Sumit Gupta, GM of NVIDIA's Tesla business, who might be trying to head off disappointment from future customers who already have optimized CPU code and won't see the huge improvements reported by academics and other current customers.  The Inquirer does point out a balancing benefit: it is obviously much easier to optimize code in CUDA, OpenCL and other GPGPU languages than it is to write optimized code for multicore CPUs.
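You can reproduce the effect without any GPU at all. The sketch below times the same reduction twice on the CPU, once as a naive interpreted loop and once as vectorized code; quoting a GPU speedup against the former baseline looks heroic, while against the latter it looks far more modest:

```python
# The "100x" effect: compare an unoptimized baseline against optimized
# CPU code doing exactly the same reduction on the same machine.
import time
import numpy as np

data = np.random.rand(5_000_000)

t0 = time.perf_counter()
naive = 0.0
for x in data:                     # the unoptimized "CPU baseline"
    naive += x * x
t1 = time.perf_counter()

t2 = time.perf_counter()
fast = float(np.dot(data, data))   # optimized CPU code
t3 = time.perf_counter()

print(f"naive loop: {t1 - t0:.2f}s, np.dot: {t3 - t2:.4f}s, "
      f"ratio: {(t1 - t0) / (t3 - t2):.0f}x")
```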


"Both AMD and Nvidia have been using real-world code examples and projects to promote the performance of their respective GPGPU accelerators for years, but now it seems some of the eye popping figures including speed ups of 100x or 200x were not down to just the computing power of GPGPUs. Sumit Gupta, GM of Nvidia's Tesla business told The INQUIRER that such figures were generally down to starting with unoptimised CPU."

Here is some more Tech News from around the web:

Tech Talk

Source: The Inquirer

GeForce GTX Call of Duty Rivalries competition ...and more

Subject: General Tech, Graphics Cards | November 15, 2012 - 04:05 PM |
Tagged: nvidia, call of duty, black ops 2


NVIDIA will be celebrating the release of Call of Duty: Black Ops II by launching the first-ever “GeForce GTX Call of Duty Rivalries” competition, which pits top colleges against each other in Call of Duty: Black Ops II four-person, last-team-standing multiplayer matches. Participants in the first round of competition include the storied rivalries of Cal vs. Stanford, USC vs. UCLA and UNC vs. NC State. Two additional wildcard teams, from any accredited college in the United States, will be chosen by the Facebook community. See GeForce.com or NVIDIA’s Facebook page for details on how you can walk away with a Maingear gaming rig.


In addition to the contest, NVIDIA also released the GeForce 310.54 beta driver with benefits for Black Ops 2 players, most notably the inclusion of TXAA support.

  • Delivers up to 26% faster performance in Call of Duty: Black Ops 2 and up to 18% faster performance in Assassin’s Creed III.
  • Provides smooth, shimmer-free graphics with NVIDIA TXAA antialiasing in Call of Duty: Black Ops 2 and Assassin’s Creed III.
  • Improves performance by up to 16% in other top games like Battlefield 3, The Elder Scrolls V: Skyrim, and StarCraft II.

As always, our new driver includes new profiles for today’s top titles, increasing multi-GPU performance.

  • Hawken – Added SLI profile
  • Hitman: Absolution – Added SLI profile
  • Natural Selection 2 – Added SLI profile
  • Primal Carnage – Added SLI profile

You can grab the driver and read about all the improvements right here.

Source: NVIDIA

Podcast #227 - Golden Z77 Motherboard from ECS, High Powered WiFi from Amped Wireless, Supercomputing GPUs and more!

Subject: General Tech | November 15, 2012 - 02:10 PM |
Tagged: titan, thor, tesla, s1000, podcast, nvidia, k20x, Intel, golden board, firepro, ECS, dust, Amped Wireless, amd

PC Perspective Podcast #227 - 11/15/2012

Join us this week as we talk about a Golden Z77 Motherboard from ECS, High Powered WiFi from Amped Wireless, Supercomputing GPUs and more!

You can subscribe to us through iTunes and you can still access it directly through the RSS page HERE.

The URL for the podcast is: http://pcper.com/podcast - Share with your friends!

  • iTunes - Subscribe to the podcast directly through the iTunes Store
  • RSS - Subscribe through your regular RSS reader
  • MP3 - Direct download link to the MP3 file

Hosts: Ryan Shrout, Jeremy Hellstrom, Josh Walrath, and Allyn Malventano

This Podcast is brought to you by MSI!

Program length: 1:07:04

Podcast topics of discussion:

  1. Join us for the Hitman: Absolution Game Stream
  2. Week in Reviews:
    1. 0:02:00 ECS Z77H2-AX Golden Board Motherboard
    2. 0:07:00 Amped Wireless R20000G Router and Adapter
    3. 0:12:20 Intel says USB 3.0 and 2.4 GHz don't get along
  3. 0:18:00 This Podcast is brought to you by MSI!
  4. News items of interest:
    1. 0:19:00 A renaissance of game types that have been sadly missing
    2. 0:24:00 You missed our live Medal of Honor Game Stream - loser!
    3. 0:26:12 NVIDIA launches Tesla K20X Card, Powers Titan Supercomputer
    4. 0:30:15 AMD Launches Dual Tahiti FirePro S10000
    5. 0:38:00 Some guy leaves Microsoft - is the Start Menu on its way back??
    6. 0:41:40 AMD is apparently not for sale
    7. 0:46:05 ECS joins the Thunderbolt family with a new Z77 motherboard
  5. Closing:
    1. 0:54:00 Hardware / Software Pick of the Week
      1. Ryan: Corsair Hydro Series H60 for $75
      2. Jeremy: Form over function or vice versa?
      3. Josh: A foundation worth donating to
      4. Allyn: ArmorSuit Military Shields
  1. 1-888-38-PCPER or podcast@pcper.com
  2. http://pcper.com/podcast
  3. http://twitter.com/ryanshrout and http://twitter.com/pcper
  4. Closing/outro

Be sure to subscribe to the PC Perspective YouTube channel!!