CES 2013: Live Hands-on with NVIDIA Shield Powered by Tegra 4

Subject: General Tech, Graphics Cards, Mobile | January 7, 2013 - 09:54 PM |
Tagged: video, tegra 4, shield, nvidia, live

Powered by the upcoming Tegra 4 SoC, Shield is an Android device built into the form of a gaming controller with a 5-inch display attached.  Not only will it play Android games in a new and interesting way, but NVIDIA has promised the ability to stream PC games from your GeForce-powered desktop directly to your wireless device!

[Image: shield1.png]

We went hands-on with prototypes of the Shield to check out the build quality, demo the Android games, and even test the PC game streaming technology.

Coverage of CES 2013 is brought to you by AMD!

Follow all of our coverage of the show at http://pcper.com/ces!

CES 2013: Tegra 4, the Vision of Windows RT?

Subject: General Tech, Graphics Cards, Mobile, Shows and Expos | January 7, 2013 - 09:42 AM |
Tagged: CES, ces 2013, nvidia, windows rt

It is the day after the NVIDIA keynote, and the Tegra 4 floodgates are open. Sure, the rumors were fairly accurate, but speculation apparently needs a solid official basis before it becomes believable.

The Tegra 4 marries the expected 72 GPU cores with four ARM Cortex-A15 CPU cores plus a fifth low-power companion core; the bonus core is still present even if the "4+1" branding does not seem to be. The push to an A15-based design provides a significant performance increase over Tegra 3. Another interesting feature is the ability to output 4K video, provided you have a suitable source or an application that can render at 4K with a suitable framerate. Add in Icera's LTE modem, which is interesting in its own right, and you have a compelling product.

[Image: tegra4-super-processors-hover.png]

Jen-Hsun spent about as much time justifying the need for speed as he did hyping its performance. Photographers, particularly those who wish to dabble in HDR, can use Tegra 4 to vastly increase the speed of image processing at the moment the shot is taken. Tone mapping for an HDR image will take just 200ms of processing, which allows HDR to be used alongside burst mode and a flash.
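
For a sense of what that tone mapping step involves, here is a minimal sketch of a global Reinhard-style operator in Python with NumPy. This is a generic textbook approach, not NVIDIA's actual Computational Photography pipeline, and the image arrays are hypothetical stand-ins.

```python
import numpy as np

def reinhard_tone_map(hdr, white=4.0):
    """Compress an HDR radiance map into [0, 1] using the extended
    Reinhard operator: L_out = L * (1 + L / white**2) / (1 + L)."""
    # Luminance from linear RGB (Rec. 709 weights).
    lum = 0.2126 * hdr[..., 0] + 0.7152 * hdr[..., 1] + 0.0722 * hdr[..., 2]
    lum = np.maximum(lum, 1e-6)  # avoid divide-by-zero in flat black regions
    lum_out = lum * (1.0 + lum / white ** 2) / (1.0 + lum)
    # Scale each pixel's RGB by the ratio of new to old luminance.
    return np.clip(hdr * (lum_out / lum)[..., None], 0.0, 1.0)

# Hypothetical usage: average two bracketed exposures, then tone map.
short_exp = np.random.rand(1080, 1920, 3) * 1.0
long_exp = np.random.rand(1080, 1920, 3) * 8.0
ldr = reinhard_tone_map((short_exp + long_exp) / 2.0)
```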

Paul Thurrott over at the Supersite for Windows ponders whether this was Microsoft's vision for Windows RT all along. He wonders whether Microsoft will try to take a mulligan on the first generation, much as Windows Phone 7 devices led the way to Windows Phone 8. At the same time, the weight the Surface was designed to bear is pretty immense if it was only ever meant to buckle to Tegra 4. I would not put it past Microsoft, although the Surface does not strike me as a product designed to have a doughy, half-baked middle -- despite what actually shipped.

PC World also notes how Qualcomm continues to improve its products, having just recently transitioned the Snapdragon S4 to a 28nm process. Qualcomm is a giant, and there is also Samsung to contend with in the ARM space -- and then x86 brings at least Intel to the game, with its massive advantage in legacy software that is usually not abstracted away by a platform-independent runtime layer.

Coverage of CES 2013 is brought to you by AMD!

Follow all of our coverage of the show at http://pcper.com/ces!

Source: NVIDIA

CES 2013 Podcast Day 1 - Everything Lenovo, NVIDIA Tegra 4 and Shield

Subject: General Tech | January 7, 2013 - 01:33 AM |
Tagged: video, Thinkpad, tegra 4, shield, podcast, nvidia, Lenovo, ideacentre, ces 2013, CES

CES 2013 Podcast Day 1 - 01/06/2013

It's time for podcast fun at CES!  Join us as we talk about the first day of the show including a lot of announcements from Lenovo, the NVIDIA Tegra 4 and Shield products and more!

You can subscribe to us through iTunes and you can still access it directly through the RSS page HERE.

The URL for the podcast is: http://pcper.com/podcast - Share with your friends!

  • iTunes - Subscribe to the podcast directly through the iTunes Store
  • RSS - Subscribe through your regular RSS reader
  • MP3 - Direct download link to the MP3 file

Hosts: Ryan Shrout, Josh Walrath, Allyn Malventano and Ken Addison

Program length: 45:50

Be sure to subscribe to the PC Perspective YouTube channel!!

Coverage of CES 2013 is brought to you by AMD!

Follow all of our coverage of the show at http://pcper.com/ces!

CES 2013: NVIDIA Officially Releases Tegra 4 SoC

Subject: General Tech | January 7, 2013 - 12:01 AM |
Tagged: tegra 4, SoC, nvidia, cortex a15, ces 2013, CES, arm

Details about NVIDIA's latest system on a chip (SoC) for mobile devices leaked last month. On Sunday, NVIDIA officially released its Tegra 4 chip and talked up more details on the new silicon.

Interestingly, the leaked information held true over the weekend when NVIDIA officially unveiled the chip during its press conference. The new chip is manufactured on a 28nm low power, high-k metal gate process.  It features four ARM Cortex-A15 CPU cores running at up to 1.9GHz, one additional low-power Cortex-A15 companion core, and an NVIDIA GeForce GPU with 72 cores (not a unified shader design, unfortunately). In addition, the Tegra 4 SoC includes the company's i500 programmable soft modem and a number of fixed-function hardware blocks used for audio and image processing.

According to AnandTech, the majority of the GPU cores in Tegra 4 are 20-bit pixel shaders, though exact specifications for the GPU are still unknown. Further, the i500 modem currently supports LTE UE Category 3 alongside WCDMA, with an upgraded LTE Category 4 modem expected in the future.

[Image: Nvidia Tegra 4 SoC at CES 2013.jpg]

Image Credit: Ars Technica, from the NVIDIA press conference.

Tegra 4 will support dual channel LP-DDR3 memory, USB 3.0, and a technology NVIDIA is calling its Computational Photography Architecture, which allegedly enables real-time HDR imagery for both still shots and video.

According to NVIDIA, Tegra 4 will be noticeably faster than its predecessors and the competing SoCs from Apple, Qualcomm, et al. Compared against the Nexus 10 (Samsung Exynos 5 SoC) running the stock Android web browser, the Tegra 4 device (running Chrome) opened a set of pages in 27 seconds versus the Nexus 10's 50 seconds -- roughly a 1.85x speedup. Thanks to the higher top-end clockspeed and beefier GPU, you can expect Tegra 4 to be faster than Tegra 3, but users will have to wait for retail devices and independent benchmarks to say just how much faster it is.

Speaking of hardware, the Tegra 4 chip will most likely be used in tablets (and not smartphones). Here’s hoping we see some prototype Tegra 4 devices or product announcements later this week at CES.

Coverage of CES 2013 is brought to you by AMD!

Follow all of our coverage of the show at http://pcper.com/ces!

Source: Ars Technica

CES 2013: NVIDIA Grid to Fight Gaikai and OnLive?

Subject: Editorial, General Tech, Graphics Cards, Systems, Mobile, Shows and Expos | January 6, 2013 - 10:07 PM |
Tagged: CES, ces 2013, nvidia

The second act of the NVIDIA keynote re-announced the company's Grid cloud-based gaming product, first mentioned back in May during GTC. You have probably heard of its competitors, Gaikai and OnLive. The mission of these services is to perform all of the gaming computation on a server somewhere and let the gamer simply log in and play.

[Image: IMG_8955.JPG]

NVIDIA Grid is their product from top to bottom. Even the interface was created by NVIDIA and, as they are quick to point out, rendered server-side on the Grid itself. It was demonstrated streaming directly to an LG smart TV and to Android tablets. A rack will contain 20 servers housing 240 GPUs, for a total of 200 teraflops of computational power. Each server will initially support up to 24 players, which is interesting given the last year of NVIDIA announcements.

Last year, during the GK110 announcement, NVIDIA said Kepler would allow hundreds of clients to access a single server for professional applications. It seems only natural that Grid would benefit from that advancement, but apparently it does not: with a limit of 24 players per box, equating to a maximum of two players per GPU, it seems odd that such a cap would be in place. The benefit of stacking multiple players per GPU is that you can achieve better-than-linear scaling across the long tail of games.
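
The per-GPU arithmetic follows directly from the rack figures quoted above; a quick sanity check in Python (assuming players are spread evenly across GPUs):

```python
servers_per_rack = 20
gpus_per_rack = 240
players_per_server = 24
rack_tflops = 200

gpus_per_server = gpus_per_rack / servers_per_rack      # 12 GPUs per server
players_per_gpu = players_per_server / gpus_per_server  # 2 players per GPU
tflops_per_gpu = rack_tflops / gpus_per_rack            # ~0.83 TFLOPS per GPU

print(gpus_per_server, players_per_gpu, round(tflops_per_gpu, 2))
```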

Then again, all they need to do is solve the scaling problem before they have a problem with scaling their service.

Coverage of CES 2013 is brought to you by AMD!

Follow all of our coverage of the show at http://pcper.com/ces!

NVIDIA GeForce GTX 660 SE Information Comes Out

Subject: Graphics Cards | January 4, 2013 - 06:10 PM |
Tagged: nvidia, kepler, gk106, gtx 660 se

Are you ready for another entry in the confusing graphics market?  NVIDIA has you covered with the upcoming GeForce GTX 660 SE, which will target the $180-200 market and hit the AMD Radeon HD 7870 1GB square in the jaw.  In the current lineup of GeForce cards, that price range is the one area where NVIDIA is at an obvious disadvantage, thanks to the gap between the GTX 650 Ti and the GTX 660.

As is usually the case when a new graphics card is ready to hit the market, leaks occur in all directions.  Already we are seeing screenshots of specifications and benchmarks from PCEVA.  If the rumors are right, you'll see the GTX 660 SE released in Q1 2013 with 768 CUDA cores, 24 ROPs, and a 192-bit memory bus.  Interestingly, the GTX 660 SE will be based on GK106 and has the same core count as the GTX 650 Ti; the performance differences will come from the move from a 128-bit memory bus to 192-bit.

[Image: 660se-2.jpg]

Current GPU-Z screenshots show a clock speed of 928 MHz with a Boost clock of 1006 MHz, just about the same clock rates as the GTX 650 Ti (though the 650 series does not have GPU Boost technology enabled).  It also looks like the GTX 660 SE will use 2GB of GDDR5 memory running at 5.6 GHz.
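
That wider memory bus matters more than the similar clocks suggest. A rough, back-of-the-envelope comparison (the GTX 650 Ti's 5.4 GHz effective memory clock is taken from its reference specification):

```python
def mem_bandwidth_gbs(effective_clock_ghz, bus_width_bits):
    """Peak memory bandwidth in GB/s: effective data rate per pin
    multiplied by the bus width in bytes."""
    return effective_clock_ghz * (bus_width_bits / 8)

print(mem_bandwidth_gbs(5.6, 192))  # rumored GTX 660 SE: 134.4 GB/s
print(mem_bandwidth_gbs(5.4, 128))  # GTX 650 Ti reference: 86.4 GB/s
```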

[Image: 660se-1.jpg]

With CES just around the corner (we are leaving in the morning!) we will ask around and see if anyone has more information about a solid price point and time frame for release!

Source: WCCFtech
Manufacturer: PC Perspective

A change is coming in 2013

If the new year brings us anything, it looks like it might be the end of using "FPS" as the primary measuring tool for graphics performance on PCs.  A long, long time ago we started with simple "time demos" that recorded rendered frames in a game like Quake and then played them back as quickly as possible on a test system.  The lone result was a time, in seconds, which was then converted to an average frame rate since the total number of frames recorded was known.

More recently we saw a transition to frame rates plotted over time and the advent of frame time graphs like the ones we have been using in our graphics reviews on PC Perspective. This expanded the amount of data required to get an accurate picture of graphics and gaming performance, but it was indeed more accurate, giving us a clearer image of how GPUs (and CPUs and systems, for that matter) perform in games.

And even though the idea of frame times has been around just as long, not many people were interested in that level of detail until this past year.  A frame time is the amount of time each frame takes to render, usually listed in milliseconds, and can range from 5ms to 50ms depending on performance.  For reference, 120 FPS equates to an average of 8.3ms per frame, 60 FPS to 16.6ms, and 30 FPS to 33.3ms.  But rather than averaging those out over each second, what if you looked at each frame individually?
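
The conversions are simple; here is a minimal sketch that reproduces the numbers above, plus the classic timedemo calculation (the frame counts are hypothetical):

```python
def frame_time_ms(fps):
    """Average frame time in milliseconds for a given frame rate."""
    return 1000.0 / fps

def avg_fps(total_frames, total_seconds):
    """Classic timedemo result: frames recorded divided by playback time."""
    return total_frames / total_seconds

print(frame_time_ms(120))   # 8.3 ms
print(frame_time_ms(60))    # 16.7 ms
print(frame_time_ms(30))    # 33.3 ms
print(avg_fps(5000, 83.3))  # ~60 FPS for a hypothetical timedemo run
```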

Scott over at Tech Report started doing exactly that this past year and found some interesting results.  I encourage all of our readers to follow what he has been doing, as I think you'll find it incredibly educational and interesting.

Through emails and tweets, many PC Perspective readers have been asking for our take on it and why we weren't yet testing graphics cards in the same fashion.  I've stayed quiet about it simply because we were working on quite a few different angles on our side and I wasn't ready to share results.  I am still not ready to share the glut of our information, but I am ready to start the discussion, and I hope our community finds it compelling and offers some feedback.

[Image: card.jpg]

At the heart of our unique GPU testing method is this card, a high-end dual-link DVI capture card capable of handling 2560x1600 resolutions at 60 Hz.  Essentially, this card acts as a monitor for our GPU test bed and allows us to capture the actual display output that reaches the gamer's eyes.  This method is the best possible way to measure frame rates, frame times, stutter, runts, smoothness, and any other graphics-related metric.

Using that recorded footage, which sometimes reaches 400 MB/s of consistent writes at high resolutions, we can then analyze the frames one by one, though with the help of some additional software.  There are a lot of details I am glossing over, including the need for perfectly synced frame rates and absolutely zero dropped frames during recording and analysis, but trust me when I say we have been spending a lot of time on this.
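
Those write rates are in line with what uncompressed capture demands. A rough estimate (the YUV 4:2:2 format, at 2 bytes per pixel, is our assumption; the capture format is not stated above):

```python
def capture_rate_mbs(width, height, hz, bytes_per_pixel):
    """Sustained write rate in MB/s for uncompressed video capture."""
    return width * height * bytes_per_pixel * hz / 1e6

print(capture_rate_mbs(2560, 1600, 60, 2))  # ~492 MB/s at 2560x1600
print(capture_rate_mbs(1920, 1080, 60, 2))  # ~249 MB/s at 1080p
```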

Continue reading our editorial on Frame Rating: A New Graphics Performance Metric.

Phoronix on OpenCL Driver Optimization, NVIDIA vs. AMD

Subject: Editorial, General Tech, Graphics Cards | December 27, 2012 - 11:43 PM |
Tagged: opencl, nvidia, amd

The GPU is slowly becoming the parallel-processing complement to your branching-logic-adept CPU. Developers have been slow to adopt this new technology, but that does not stop the hardware manufacturers from putting on a kettle of tea for when the guests arrive.

While the transition to GPGPU is slower than I am sure many would like, developers are rarely quick on the uptake of new technologies. The Xbox 360 was one of the first platforms where unified shaders became mandatory, and early developers avoided them by offloading vertex code to the CPU. On that note: how much software still gets released without multicore support?

[Image: 7-TuxGpu.png]

Phoronix, practically the arbiter of all Linux news, decided to put several GPU drivers and their manufacturers to the test. AMD was up first, and its results showed a pretty sizeable jump in performance around October of this year across most of the tests. The article on NVIDIA arrived two days later and showed performance trending basically flat since the 295.20 release back in February.

A key piece of information is that both benchmarks were performed with last-generation GPUs: the GTX 460 on the NVIDIA side, with the Radeon HD 6950 holding AMD's flag. You might note that 295.20 was the last tested driver released prior to the launch of Kepler.

These results seem to suggest that after the launch of Kepler, NVIDIA did practically zero optimization for its older Fermi architecture, at least as far as these Linux OpenCL benchmarks are concerned. AMD, on the other hand, seems more willing to go back and advance the performance of its prior generation as it releases new driver versions.
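
If you want to see which OpenCL platform and driver version your own system reports before comparing results like these, a quick query through the pyopencl bindings (assuming the package and a vendor OpenCL runtime are installed) looks something like this:

```python
import pyopencl as cl  # pip install pyopencl

# List every OpenCL platform and the driver version behind each device --
# the driver version is exactly the variable Phoronix was benchmarking.
for platform in cl.get_platforms():
    print(platform.name, platform.version)
    for device in platform.get_devices():
        print("  ", device.name, "| driver:", device.driver_version)
```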

There are very few instances where AMD beats out NVIDIA in terms of driver support -- it is often a selling point for the jolly green giant -- but this appears to be a definite win for AMD.

Source: Phoronix

Podcast #231 - Intel NUC, AMD 8000M GPUs, Building a Hackintosh and more!

Subject: General Tech | December 20, 2012 - 12:16 PM |
Tagged: video, virtu, VIA, tegra 4, Samsung, radeon, podcast, nvidia, nvelo, nuc, lucid, Intel, hackintosh, gigabyte, Dataplex, arm, amd, 8000m

PC Perspective Podcast #231 - 12/20/2012

Join us this week as we talk about the Intel NUC, AMD 8000M GPUs, Building a Hackintosh and more!

You can subscribe to us through iTunes and you can still access it directly through the RSS page HERE.

The URL for the podcast is: http://pcper.com/podcast - Share with your friends!

  • iTunes - Subscribe to the podcast directly through the iTunes Store
  • RSS - Subscribe through your regular RSS reader
  • MP3 - Direct download link to the MP3 file

Hosts: Ryan Shrout, Josh Walrath, Allyn Malventano and Chris Barbere

This Podcast is brought to you by MSI!

Program length: 1:13:41

Podcast topics of discussion:

  1. 0:01:50 We are going to try Planetside 2 after the podcast!
  2. Week in Reviews:
    1. 0:02:50 Intel Next Unit of Computing NUC
    2. 0:17:55 Corsair AX860i Digital ATX Power Supply
    3. 0:19:00 HP Z1 Workstation All in One
    4. 0:25:00 Building a Hackintosh Computer - A Guide
  3. 0:32:35 This Podcast is brought to you by MSI!
  4. News items of interest:
    1. 0:33:30 Cutting the Cord Complete!
    2. 0:36:10 VIA ARM-based SoCs in upcoming ASUS tablet
    3. 0:42:00 Lucid MVP 2.0 will be sold direct
    4. 0:44:50 Samsung acquires NVELO SSD Caching Software
    5. 0:49:00 AMD announces mobility 8000M series of GPUs
    6. 0:54:15 Some NVIDIA Tegra 4 Details
    7. 0:58:55 NEC Unveils Super Thin Ultrabook
    8. 1:00:30 Win a Sapphire HD 7870 GHz Edition FleX!!
  5. Closing:
    1. 1:02:30 Hardware / Software Pick of the Week
      1. Ryan: Panasonic GH2 Micro 4/3 Camera
      2. Josh: Preparation is key!
      3. Allyn: Cheap RAM
      4. Chris: Had solar panels installed this week
    2. 1-888-38-PCPER or podcast@pcper.com
    3. http://pcper.com/podcast
    4. http://twitter.com/ryanshrout and http://twitter.com/pcper
    5. Closing/outro

Be sure to subscribe to the PC Perspective YouTube channel!!

NVIDIA Tegra 4 Details Revealed By Leaked Slide

Subject: Processors, Mobile | December 19, 2012 - 12:26 AM |
Tagged: wayne, tegra 4, SoC, nvidia, cortex a15, arm

Earlier this year, NVIDIA showed off a roadmap for its Tegra line of mobile system on a chip (SoC) processors. Namely, the next generation Tegra 4 mobile chip is codenamed Wayne and will be the successor to the Tegra 3.

Tegra 4 will use a 28nm manufacturing process and feature improvements to the CPU, GPU, and IO components. Thanks to a leaked slide that appeared on Chip Hell, we now have more details on Tegra 4.

[Image: NVIDIA Tegra 4 Leaked Slide.jpg]

The 28nm Tegra 4 SoC will keep the same 4+1 CPU design* as the Tegra 3, but it will use ARM Cortex-A15 CPU cores instead of the Cortex-A9 cores in the current generation chips. NVIDIA is also improving the GPU portion: Tegra 4 will reportedly feature a 72-core GPU based on a new architecture. Unfortunately, we do not have specifics on how that GPU is arranged architecturally, but the leaked slide indicates it will be as much as 6x faster than NVIDIA's own Tegra 3 and allegedly fast enough to power displays at resolutions from 1080p @ 120Hz up to 4K (refresh rate unknown). Don't expect it to drive games at native 4K resolution, but it should run a tablet OS fine. Interestingly, NVIDIA has included hardware acceleration for VP8 and H.264 video at resolutions up to 2560x1440.
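
Those two display targets are closer than they look. A quick pixel-throughput comparison (assuming "4K" here means 3840x2160 at a 30Hz refresh, which the slide leaves unspecified):

```python
def pixels_per_second(width, height, hz):
    """Raw pixel throughput a display mode demands from the GPU."""
    return width * height * hz

print(pixels_per_second(1920, 1080, 120))  # 1080p @ 120Hz: ~249M pixels/s
print(pixels_per_second(3840, 2160, 30))   # 4K @ 30Hz: the same ~249M pixels/s
```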

Additionally, Tegra 4 will feature support for dual channel DDR3L memory, USB 3.0, and hardware-accelerated security options including HDCP, Secure Boot, and DRM, which may make Tegra 4 an attractive option for Windows RT tablets.

The leaked slide has revealed several interesting details about Tegra 4, but it has also raised some questions about the nitty-gritty specifics. Also, there is no mention of the dual-core variant of Tegra 4 -- codenamed Grey -- that is said to include an integrated Icera 4G LTE cellular modem. Here's hoping more details surface at CES next month!

* NVIDIA's name for a CPU that features four ARM CPU cores and one lower power ARM companion core.

Source: Chip Hell