Subject: General Tech | January 7, 2013 - 03:01 AM | Tim Verry
Tagged: tegra 4, SoC, nvidia, cortex a15, ces 2013, CES, arm
Details about NVIDIA’s latest system on a chip (SoC) for mobile devices leaked last month. On Sunday, NVIDIA officially released its Tegra 4 chip and shared further details on the new silicon.
Interestingly, the leaked information from the slide held true over the weekend when NVIDIA officially unveiled the chip during a press conference. The new chip is manufactured on a 28nm, low power, high-k metal gate process. It features four ARM Cortex-A15 CPU cores running at up to 1.9GHz, one additional low power Cortex-A15 companion core, and an NVIDIA GeForce GPU with 72 cores (not a unified shader design, unfortunately). In addition, the Tegra 4 SoC includes the company’s i500 programmable soft modem and a number of fixed-function hardware blocks used for audio and image processing.
According to AnandTech, the majority of GPU cores in the Tegra 4 are 20-bit pixel shaders, though exact specifications on the GPU are still unknown. Further, the i500 modem currently supports LTE UE Category 3 on the WCDMA band, with an LTE Category 4 capable modem expected in the future.
Image Credit: Ars Technica, who attended the NVIDIA press conference.
Tegra 4 will support dual channel LP-DDR3 memory, USB 3.0, and a technology that NVIDIA calls its Computational Photography Architecture, which allegedly enables real-time HDR for both still images and video.
According to NVIDIA, Tegra 4 will be noticeably faster than its predecessors and the competing SoCs from Apple and Qualcomm et al. When compared to the Nexus 10 (Samsung Exynos 5 SoC) and the stock Android web browser, the Tegra 4 device (Chrome browser) opened pages in 27 seconds versus the Nexus 10’s 50 second benchmark time. Users will have to wait for retail devices with Tegra 4 hardware for independent benchmarks, however. Thanks to the higher top-end clockspeed and beefier GPU, you can expect Tegra 4 to be faster than Tegra 3, but until reviewers get their hands on Tegra 4-powered devices it is difficult to say just how much faster it is.
Speaking of hardware, the Tegra 4 chip will most likely be used in tablets (and not smartphones). Here’s hoping we see some prototype Tegra 4 devices or product announcements later this week at CES.
PC Perspective's CES 2013 coverage is sponsored by AMD.
Follow all of our coverage of the show at http://pcper.com/ces!
Subject: Editorial, General Tech, Graphics Cards, Systems, Mobile, Shows and Expos | January 7, 2013 - 01:07 AM | Scott Michaud
Tagged: CES, ces 2013, nvidia
The second act of the NVIDIA keynote speech re-announced their Grid cloud-based gaming product first mentioned back in May during GTC. You have probably heard of its competitors, Gaikai and OnLive. The mission of these services is to have all of the gaming computation done in a server somewhere and allow the gamer to log in and just play.
NVIDIA Grid is their product from top to bottom. Even the interface was created by NVIDIA and, as they proudly note, is rendered server-side using the Grid. It was demonstrated streaming directly to an LG smart TV as well as to Android tablets. A rack will contain 20 servers with 240 GPUs, providing a total of 200 teraflops of computational power. Each server will initially be able to support 24 players, which is interesting given the last year of NVIDIA announcements.
Last year, during the GK110 announcement, NVIDIA said Kepler would allow hundreds of clients to access a single server for professional applications. It seems only natural that Grid would benefit from that advancement, but it apparently does not: with a limit of 24 players per box, equating to a maximum of two players per GPU, it seems odd that such a cap would be in place. The benefit of stacking multiple players per GPU is that you can achieve better-than-linear scaling in the long tail of games.
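The rack-level figures above are easy to sanity-check. A minimal sketch, assuming the 240 GPUs are spread evenly across the 20 servers (the 12-GPUs-per-server figure is inferred, not something NVIDIA stated):

```python
# Sanity-check the announced NVIDIA Grid rack figures.
# Assumption: 240 GPUs divide evenly across 20 servers.
servers_per_rack = 20
gpus_per_rack = 240
players_per_server = 24

gpus_per_server = gpus_per_rack // servers_per_rack      # 12 GPUs per box
players_per_gpu = players_per_server / gpus_per_server   # the 2-per-GPU cap
players_per_rack = players_per_server * servers_per_rack # concurrent players

print(gpus_per_server, players_per_gpu, players_per_rack)
```

Under those assumptions a full rack tops out at 480 concurrent players, which is what makes the 24-player cap per server look conservative next to the "hundreds of clients" GK110 claim.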
Then again, all they need to do is solve that scaling problem before they run into problems scaling the service itself.
Subject: Graphics Cards | January 4, 2013 - 09:10 PM | Ryan Shrout
Tagged: nvidia, kepler, gk106, gtx 660 se
Are you ready for another entry in the confusing graphics market? NVIDIA has you covered with the upcoming GeForce GTX 660 SE, which will target the $180-200 market and hit the AMD Radeon HD 7870 1GB square in the jaw. With the current GeForce lineup, that price point is the one area where NVIDIA is at an obvious disadvantage, thanks to the gap between the GTX 650 Ti and the GTX 660.
As is usually the case when a new graphics card is ready to hit the market, leaks occur in all directions. Already we are seeing screenshots of specifications and benchmarks from PCEVA. If the rumors are right you'll see the GTX 660 SE released in Q1 of 2013 with 768 CUDA cores, 24 ROPs and a 192-bit memory bus. Interestingly, the GTX 660 SE will be based on GK106 and will have the same core count as the GTX 650 Ti...the performance differences will come from moving from the 128-bit memory bus to 192-bit.
Current GPU-Z screenshots show a clock speed of 928 MHz with a Boost clock of 1006 MHz, roughly the same clock rates as the GTX 650 Ti (though the 650 series does not have GPU Boost technology enabled). It also looks like the GTX 660 SE will use 2GB of GDDR5 memory running at 5.6 GHz.
With CES just around the corner (we are leaving in the morning!) we will ask around and see if anyone has more information about a solid price point and time frame for release!
A change is coming in 2013
If the new year will bring us anything, it looks like it might be the end of using "FPS" as the primary measuring tool for graphics performance on PCs. A long, long time ago we started with simple "time demos" that recorded rendered frames in a game like Quake and then played them back as quickly as possible on a test system. The lone result was given as a time, in seconds, which was then converted to an average frame rate using the known total number of frames in the recording.
More recently we saw a transition to frame rates over time and the advent of frame time graphs like the ones we have been using in our graphics reviews on PC Perspective. This expanded the amount of data required to get an accurate picture of graphics and gaming performance, but it was indeed more accurate, giving us a clearer image of how GPUs (and CPUs and systems, for that matter) perform in games.
And even though the idea of frame times has been around just as long, not many people were interested in that level of detail until this past year. A frame time is the amount of time each frame takes to render, usually listed in milliseconds, and can range from 5ms to 50ms depending on performance. For reference, 120 FPS equates to an average of 8.3ms per frame, 60 FPS to 16.6ms, and 30 FPS to 33.3ms. But rather than averaging those out over each second of time, what if you looked at each frame individually?
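The conversion is simple arithmetic, and a quick sketch also shows why per-frame data matters: two runs with identical average frame rates can feel completely different to play. The frame time sequences below are illustrative, not measured data:

```python
def fps_to_frame_time_ms(fps):
    """Average frame time in milliseconds for a given frame rate."""
    return 1000.0 / fps

# Two runs with the same total render time (and thus the same average
# FPS) but very different frame-to-frame delivery.
smooth  = [16.7] * 6                         # consistent ~60 FPS
stutter = [8.0, 8.0, 40.0, 8.0, 28.2, 8.0]  # same total, very uneven

for name, times in (("smooth", smooth), ("stutter", stutter)):
    avg_fps = 1000.0 * len(times) / sum(times)
    print(f"{name}: avg {avg_fps:.1f} FPS, worst frame {max(times):.1f} ms")
```

Both sequences average just under 60 FPS, but the second one contains a 40ms frame, a hitch a per-second average completely hides.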
Scott over at Tech Report started doing that this past year and found some interesting results. I encourage all of our readers to follow up on what he has been doing as I think you'll find it incredibly educational and interesting.
Through emails and tweets many PC Perspective readers have been asking for our take on it, why we weren't testing graphics cards in the same fashion yet, etc. I've stayed quiet about it simply because we were working on quite a few different angles on our side and I wasn't ready to share results. I am still not ready to share the glut of our information yet, but I am ready to start the discussion, and I hope our community finds it compelling and offers some feedback.
At the heart of our unique GPU testing method is this card, a high-end dual-link DVI capture card capable of handling 2560x1600 resolutions at 60 Hz. Essentially this card will act as a monitor to our GPU test bed and allow us to capture the actual display output that reaches the gamer's eyes. This method is the best possible way to measure frame rates, frame times, stutter, runts, smoothness, and any other graphics-related metrics.
Using that recorded footage, which sometimes reaches 400 MB/s of sustained writes at high resolutions, we can then analyze the frames one by one, albeit with the help of some additional software. There are a lot of details I am glossing over, including the need for perfectly synced frame rates and absolutely zero dropped frames during recording and analysis, but trust me when I say we have been spending a lot of time on this.
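Our capture-based tooling is not something we can share yet, but the core per-frame analysis can be sketched from a list of frame display timestamps. This is a simplified illustration, not our actual methodology; in particular, the 4ms "runt" threshold below is an arbitrary assumption for the example:

```python
def analyze_frames(timestamps_ms, runt_threshold_ms=4.0):
    """Derive per-frame metrics from a list of frame display
    timestamps (in ms). Flags suspiciously short 'runt' frames;
    the threshold here is illustrative, not a real methodology."""
    frame_times = [b - a for a, b in zip(timestamps_ms, timestamps_ms[1:])]
    elapsed = timestamps_ms[-1] - timestamps_ms[0]
    return {
        "avg_fps": 1000.0 * len(frame_times) / elapsed,
        "worst_ms": max(frame_times),                     # biggest hitch
        "runts": sum(1 for t in frame_times if t < runt_threshold_ms),
    }

# A hypothetical capture: mostly ~16.7ms frames, one runt, one hitch.
stats = analyze_frames([0.0, 16.7, 33.3, 35.0, 66.7, 83.3])
print(stats)
```

Even this toy version shows why averages mislead: the capture above averages about 60 FPS while containing both a runt frame and a frame that took nearly twice as long as it should have.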
Subject: Editorial, General Tech, Graphics Cards | December 28, 2012 - 02:43 AM | Scott Michaud
Tagged: opencl, nvidia, amd
The GPU is slowly becoming the parallel processing complement to your branching logic-adept CPU. Developers have been slow to adopt this new technology but that does not hinder the hardware manufacturers from putting on a kettle of tea for when guests arrive.
While the transition to GPGPU is slower than I am sure many would like, developers are rarely quick on the uptake of new technologies. The Xbox 360 was one of the first platforms where unified shaders became mandatory and early developers avoided them by offloading vertex code to the CPU. On that note: how much software still gets released without multicore support?
Phoronix, practically the arbiter of all Linux news, decided to put several GPU drivers and their manufacturers to the test. AMD was up first, and its results showed a pretty sizeable jump in performance around October of this year across most of the tests. The article on NVIDIA arrived two days later and showed performance trending basically nowhere since February's 295.20 release.
A key piece of information is that both benchmarks were performed with last generation GPUs: the GTX 460 on the NVIDIA side, with the 6950 holding AMD’s flag. You might note that 295.20 was the last tested driver to be released prior to the launch of Kepler.
These results seem to suggest that upon the launch of Kepler, NVIDIA did practically zero optimization for its older "Fermi" architecture, at least as far as these Linux OpenCL benchmarks are concerned. AMD, on the other hand, appears more willing to go back and improve the performance of its prior generation as it releases new driver versions.
There are very few instances where AMD beats out NVIDIA in terms of driver support -- it is often a selling point for the jolly green giant -- but this appears to be a definite win for AMD.
Subject: General Tech | December 20, 2012 - 03:16 PM | Ken Addison
Tagged: video, virtu, VIA, tegra 4, Samsung, radeon, podcast, nvidia, nvelo, nuc, lucid, Intel, hackintosh, gigabyte, Dataplex, arm, amd, 8000m
PC Perspective Podcast #231 - 12/20/2012
Join us this week as we talk about the Intel NUC, AMD 8000M GPUs, Building a Hackintosh and more!
The URL for the podcast is: http://pcper.com/podcast - Share with your friends!
- iTunes - Subscribe to the podcast directly through the Store
- RSS - Subscribe through your regular RSS reader
- MP3 - Direct download link to the MP3 file
Hosts: Ryan Shrout, Josh Walrath, Allyn Malventano and Chris Barbere
Program length: 1:13:41
Podcast topics of discussion:
- 0:01:50 We are going to try Planetside 2 after the podcast!
- Week in Reviews:
- 0:32:35 This Podcast is brought to you by MSI!
News items of interest:
- 0:33:30 Cutting the Cord Complete!
- 0:36:10 VIA ARM-based SoCs in upcoming ASUS tablet
- 0:42:00 Lucid MVP 2.0 will be sold direct
- 0:44:50 Samsung acquires NVELO SSD Caching Software
- 0:49:00 AMD announces mobility 8000M series of GPUs
- 0:54:15 Some NVIDIA Tegra 4 Details
- 0:58:55 NEC Unveils Super Thin Ultrabook
- 1:00:30 Win a Sapphire HD 7870 GHz Edition FleX!!
- 1-888-38-PCPER or email@example.com
- http://twitter.com/ryanshrout and http://twitter.com/pcper
Subject: Processors, Mobile | December 19, 2012 - 03:26 AM | Tim Verry
Tagged: wayne, tegra 4, SoC, nvidia, cortex a15, arm
Earlier this year, NVIDIA showed off a roadmap for its Tegra line of mobile system on a chip (SoC) processors. On it, the next generation Tegra 4 mobile chip, codenamed Wayne, is slated to be the successor to the Tegra 3.
Tegra 4 will use a 28nm manufacturing process and feature improvements to the CPU, GPU, and IO components. Thanks to a leaked slide that appeared on Chip Hell, we now have more details on Tegra 4.
The 28nm Tegra 4 SoC will keep the same 4+1 CPU design* as the Tegra 3, but it will use ARM Cortex A15 CPU cores instead of the Cortex A9 cores used in the current generation chips. NVIDIA is also improving the GPU portion, and Tegra 4 will reportedly feature a 72 core GPU based on a new architecture. Unfortunately, we do not have specifics on how that GPU is set up architecturally, but the leaked slide indicates that the GPU will be as much as 6x faster than NVIDIA’s own Tegra 3. It will allegedly be fast enough to power displays with resolutions from 1080p @ 120Hz to 4K (refresh rate unknown). Don’t expect to drive games at native 4K resolution; however, it should run a tablet OS just fine. Interestingly, NVIDIA has included dedicated hardware to accelerate VP8 and H.264 video at up to 2560x1440 resolutions.
Additionally, Tegra 4 will feature support for dual channel DDR3L memory, USB 3.0, and hardware accelerated security options including HDCP, Secure Boot, and DRM, which may make Tegra 4 an attractive option for Windows RT tablets.
The leaked slide has revealed several interesting details on Tegra 4, but it has also raised some questions on the nitty-gritty details. Also, there is no mention of the dual core variant of Tegra 4 – codenamed Grey – that is said to include an integrated Icera 4G LTE cellular modem. Here’s hoping more details surface at CES next month!
* NVIDIA's name for a CPU that features four ARM CPU cores and one lower power ARM companion core.
Subject: Graphics Cards | December 3, 2012 - 02:02 PM | Jeremy Hellstrom
Tagged: nvidia, call of duty, black ops 2, amd
[H]ard|OCP set out to determine how well AMD and NVIDIA's cards can deal with the new Call of Duty game. To do so they took a system built on a GIGABYTE Z77X-UP4-TH, a Core i7 2600k @ 4.8GHz, and 8GB of Corsair RAM, and then tested a HD7970, 7950 and 7870 as well as a GTX680, 670 and 660Ti. There is good news for both graphics companies and gamers: the HD7870 was the slowest card and still managed great performance on maximum settings @ 2560x1600 with 8X MSAA and FXAA. For the absolute best performance, NVIDIA's GTX680 is your go-to card, though since this is a console port, albeit one that [H] describes as well implemented, don't expect to be blown away by the quality of the graphics.
"Call of Duty: Black Ops II is the first Call of Duty game on PC to support DX11 and new graphical features. Hopefully improvements to the IW Engine will be enough to boost the CoD franchise near the top graphics-wise. We also examine NVIDIA's TXAA technology which combines shader based antialiasing and traditional multisampling AA."
Here are some more Graphics Card articles from around the web:
- Far Cry 3 VGA Graphics Benchmark performance test @ Guru of 3D
- A brief history of video cards: 64 GPUs tested from the last five years @ Hardware.Info
- Gigabyte GeForce GTX 680 Super Overclock Graphics Card with WindForce 5X Cooling System @ X-bit Labs
- ASUS GeForce GTX 680 2GB DirectCU II Review @ circuitREMIX
- GIGABYTE Geforce GTX 670 (2GB) WINDFORCE 3X Video Card Review @ circuitREMIX
- Four passive graphics cards review: 100% quiet @ Hardware.Info
- The Best Graphics Cards: AMD and Nvidia GPU Comparison with Latest Drivers @ Techspot
- NVIDIA Publishes Open-Source 2D Driver Code @ Phoronix
- 8-Way NVIDIA Nouveau GPU Comparison @ Phoronix
- 12-Way Radeon Gallium3D GPU Comparison @ Phoronix
- AMD Catalyst vs. Open-Source Gallium3D Driver Performance @ Phoronix
- HIS Radeon HD 7950 IceQ X Boost 3 GB @ techPowerUp
- Club3D HD7970 RoyalAce Graphics Card @ Kitguru
Subject: General Tech | November 23, 2012 - 01:03 PM | Jeremy Hellstrom
Tagged: gpgpu, amd, nvidia, Intel, phi, tesla, firepro, HPC
The skeptics were right to question the huge improvements seen when using GPGPUs for heavily parallel computing tasks. The cards do help a lot, but the 100x improvements reported by some companies and universities had more to do with poorly optimized CPU code than with the processing power of GPGPUs. This news comes from someone you might not expect to burst this particular bubble: Sumit Gupta, GM of NVIDIA's Tesla team, who may be trying to manage expectations for future customers who already have optimized CPU code and won't see the huge improvements reported by academics and other current customers. The Inquirer does point out a balancing benefit: it is often much easier to optimize code in CUDA, OpenCL, and other GPGPU languages than it is to write well-optimized code for multicore CPUs.
"Both AMD and Nvidia have been using real-world code examples and projects to promote the performance of their respective GPGPU accelerators for years, but now it seems some of the eye popping figures including speed ups of 100x or 200x were not down to just the computing power of GPGPUs. Sumit Gupta, GM of Nvidia's Tesla business told The INQUIRER that such figures were generally down to starting with unoptimised CPU."
Here is some more Tech News from around the web:
- Intel reportedly speeds up development of low-power processors @ DigiTimes
- Firefox and Opera squish big buffer overflow bugs @ The Register
- Hexing MAC address reveals Wifi passwords @ The Register
- Cisco Linksys EA6500 Smart Wi-Fi Router Review @ Legit Reviews
- Camera shootout: Samsung Galaxy S III vs S III mini @ Hardware.info
- Black Friday Tech Deals @ TechReviewSource
- Lawrence 'Empire Strikes Back' Kasdan to pen future Star Wars script @ The Register
- Win Corsair AX860i, AX760i, AX860 & AX760 power supplies @ Kitguru
Subject: General Tech, Graphics Cards | November 15, 2012 - 04:05 PM | Jeremy Hellstrom
Tagged: nvidia, call of duty, black ops 2
NVIDIA will be celebrating the release of Call of Duty: Black Ops II by launching the first-ever “GeForce GTX Call of Duty Rivalries” competition which pits top colleges against each other in Call of Duty: Black Ops II four-person, last team standing multiplayer matches. Participants in the first round of competition include the storied rivalries of Cal vs. Stanford, USC vs. UCLA and UNC vs. NC State. Two additional wildcard colleges from any accredited college in the United States will also be chosen by the Facebook community to field teams. See details on GeForce.com or visit NVIDIA’s Facebook page on how you can walk away with a Maingear gaming rig.
In addition to the contest NVIDIA also released the GeForce 310.54 beta driver with specific benefits for players of Black Ops 2, specifically the inclusion of TXAA.
- Delivers up to 26% faster performance in Call of Duty: Black Ops 2 and up to 18% faster performance in Assassin’s Creed III.
- Provides smooth, shimmer-free graphics with NVIDIA TXAA antialiasing in Call of Duty: Black Ops 2 and Assassin’s Creed III.
- Improves performance by up to 16% in other top games like Battlefield 3, The Elder Scrolls V: Skyrim, and StarCraft II.
As always, our new driver includes new profiles for today’s top titles, increasing multi-GPU performance.
- Hawken – Added SLI profile
- Hitman: Absolution – Added SLI profile
- Natural Selection 2 – Added SLI profile
- Primal Carnage – Added SLI profile