Subject: General Tech, Graphics Cards, Mobile, Shows and Expos | March 18, 2013 - 09:10 PM | Scott Michaud
Tagged: GTC 2013, nvidia
We just received word from Tim Verry, our GTC correspondent and news troll, about his first kick at the conference. This... is his story.
Graphics card manufacturer NVIDIA is hosting its annual GPU Technology Conference (GTC 2013) in San Jose, California this week. PC Perspective will be roaming the exhibit floor and covering sessions as NVIDIA and its partners discuss upcoming graphics technologies, GPGPU, programming, and a number of other low-level computing topics.
The future... is tomorrow!
A number of tech companies will be on site and delivering presentations to show off their latest Kepler-based systems. NVIDIA will deliver its keynote presentation tomorrow for the press, financial and industry analysts, and business partners to provide a glimpse at the green team's roadmap throughout 2013 - and maybe beyond.
We cannot say for certain what NVIDIA will reveal during its keynote; but, since we have not been briefed ahead of time, we are completely free to speculate! One near-certainty, I think, is the official launch of the Kepler-based K6000 workstation card. While I do not expect to see Maxwell, we could possibly see a planned refresh of the Kepler-based components with some incremental improvements: I predict power efficiency over performance. Perhaps we will receive a cheaper Titan-like consumer card towards the end of 2013? Wishful thinking on my part? A refresh of its GK104 architecture would be nice to see as well, even if actual hardware will not show up until next year. I expect NVIDIA will weigh whatever plans AMD has and decide whether it is in its interest to match them or not.
I do expect to see more information on GRID and Project SHIELD, however. NVIDIA has reportedly broadened the scope of this year's conference to include mobile sessions: expect Tegra programming and mobile GPGPU goodness to be on tap.
It should be an interesting week of GPU news. Stay tuned to PC Perspective for more coverage as the conference gets underway.
What are you hoping to see from NVIDIA at GTC 2013?
Subject: General Tech | March 18, 2013 - 02:23 PM | Jeremy Hellstrom
Tagged: nvidia, hack, GTX 690, K5000, K10, quadro, tesla, linux
It will take a bit of work with a soldering iron, but Hack a Day has posted an article covering how to mod one of the GPUs on a GTX 690 into thinking it is either a Quadro K5000 or Tesla K10. More people will need to apply this mod and test it to confirm that the performance of the GPU actually matches, or at least compares to, the professional-level cards, but the ID string is definitely changed to match one of those two much more expensive GPUs. They also believe that a similar mod could be applied to the new TITAN graphics card, as it is electrically similar to the GTX 690. Of course, if things go bad during the modification you could kill a $1000 card, so do be careful.
"If hardware manufacturers want to keep their firmware crippling a secret, perhaps they shouldn’t mess with Linux users? We figure if you’re using Linux you’re quite a bit more likely than the average Windows user to crack something open and see what’s hidden inside. And so we get to the story of how [Gnif] figured out that the NVIDIA GTX690 can be hacked to perform like the Quadro K5000. The thing is, the latter costs nearly $800 more than the former!"
Here is some more Tech News from around the web:
- The TR Podcast 130: A series of grunts about convertible tablets
- Microsoft updates its Kinect for Windows SDK @ The Inquirer
- Asustek to launch new Intel-based smartphone in June @ DigiTimes
- The 2013 Top 7 Best Linux Distributions for You @ Linux.com
- Watch out, office bods: A backdoor daemon lurks in HP LaserJets @ The Register
Subject: General Tech | March 16, 2013 - 11:36 PM | Scott Michaud
Tagged: nvidia, tomb raider
The last month has been good to PC gamers: from StarCraft, to SimCity, to Tomb Raider, all with the promise of BioShock Infinite just around the corner. We are being dog-piled by one bulky release after another... most of which we are theoretically able to play.
Of course, this is a call to action for GPU driver engineers. The software required to make your video card run is extremely complex, with graphics instructions being compiled and interpreted at runtime for routinely shifting architectures. Performance increases are often measured in double-digit percentages, albeit for some set "X" of components in some set "Y" of games.
GeForce 314.14 beta drivers launched early in the month with decent performance increases, particularly for setups with SLI-paired 680s. Tomb Raider fans on NVIDIA and Intel hardware found themselves quite a bit left out, with the reboot of the franchise doing everything but rebooting their PCs.
Now, two weeks later, NVIDIA has released yet another beta driver, dubbed 314.21, aimed squarely at Tomb Raider. Performance increases are claimed to average 45% over previous versions, with some configurations seeing upwards of 60% gains. The delay was allegedly caused by the hardware developer not receiving the game code far enough ahead of launch to create the updates.
If you are a Tomb Raider, check out the drivers at NVIDIA's website.
Subject: Graphics Cards | March 8, 2013 - 09:17 AM | Tim Verry
Tagged: quadro, nvidia, kepler, k6000, gk110
Earlier this week, NVIDIA updated its Quadro line of workstation cards with new GPUs based on GK104 “Kepler” cores. The updated line introduced four new Kepler cards, but the Quadro 6000 successor was notably absent from the NVIDIA announcement. If rumors hold true, professionals may get access to a K6000 Quadro card after all, and one that is powered by GK110 as well.
According to rumors around the Internet, NVIDIA has reserved its top-end Quadro slot for a GK110-based graphics card. Dubbed the K6000 (in line with the existing Kepler Quadro cards), the high-end workstation card will feature 13 SMX units, 2,496 CUDA cores, 192 texture units (TMUs), 40 ROPs, and a 320-bit memory bus. The K6000 card will likely have 5GB of GDDR5 memory, like its Tesla K20 counterpart. Interestingly, this Quadro K6000 graphics card has one less SMX unit than NVIDIA’s Tesla K20X and even NVIDIA’s consumer-grade GTX Titan GPU. A comparison between the rumored K6000 card, the Quadro K5000 (GK104), and other existing GK110 cards is available in the table below. Also, note that the (rumored) K6000 specs put it more in line with the Tesla K20 than the K20X, but as it is the flagship Quadro card I felt it was still fair to compare it to the flagship Tesla and GeForce cards.
| | Quadro K6000 | Tesla K20X | GTX Titan | GK110 Full (not available yet) | Quadro K5000 |
|---|---|---|---|---|---|
| DP TFLOPS | ~1.17 | 1.31 | 1.31 | ~1.4 | 0.09 |
The Quadro cards are in an odd situation when it comes to double precision floating point performance. The Quadro K5000, which uses GK104, brings an abysmal 90 GFLOPS of double precision. The rumored GK110-powered Quadro K6000 brings double precision performance up to approximately 1.17 TFLOPS, which is quite the jump and shows that GK104 really was cut down to focus on gaming performance! Further, the card that the K6000 is replacing in name, the Quadro 6000 (no K prefix), is based on NVIDIA’s previous-generation Fermi architecture and offers 0.5152 TFLOPS (515.2 GFLOPS) of double precision performance. On the plus side, users can expect around 3.5 TFLOPS of single precision horsepower, which is a substantial upgrade over the Quadro 6000's 1.03 TFLOPS of single precision floating point. For comparison, the GK104-based Quadro K5000 offers 2.1 TFLOPS of single precision. Although it's no full GK110, it looks to be the Quadro card to beat for the intended usage.
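For the curious, those figures line up with a simple back-of-the-envelope estimate. The sketch below is my own sanity check, assuming a roughly 700 MHz core clock (not part of the rumor) and the usual Kepler ratios of 1/3 DP:SP for GK110 and 1/24 for GK104; the function name and inputs are illustrative, not NVIDIA specifications.

```python
# Back-of-the-envelope Kepler throughput estimate (my assumptions, not official specs).
def kepler_flops(cuda_cores, clock_ghz, dp_ratio):
    sp_tflops = 2 * cuda_cores * clock_ghz / 1000.0  # 2 FLOPs per core per clock (FMA)
    return sp_tflops, sp_tflops * dp_ratio           # (single precision, double precision)

# Rumored Quadro K6000: 2,496 GK110 cores -> ~3.5 TFLOPS SP, ~1.17 TFLOPS DP
print(kepler_flops(2496, 0.7, 1/3))
# Quadro K5000: 1,536 GK104 cores -> ~2.15 TFLOPS SP, ~0.09 TFLOPS (90 GFLOPS) DP
print(kepler_flops(1536, 0.7, 1/24))
```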
Of course, Quadro is more about stable drivers, beefy memory, and single precision than double precision, but it would be nice to see the expensive Quadro workstation cards have the ability to pull double duty, as it were. NVIDIA’s Tesla line is where DP floating point is key; there is just a rather wide gap between the two lineups, which the K6000 fortunately narrows somewhat. I would have really liked to see the K6000 get at least 14 SMX units, to match the consumer Titan and the Tesla K20X, but rumors are not looking positive in that regard. Professionals should expect to pay quite the premium for the K6000 versus the Titan, despite the hardware differences. It will likely be sold for around $3,000.
No word on availability, but the card will likely be released soon in order to complete the Kepler Quadro lineup update.
Subject: General Tech, Graphics Cards | March 6, 2013 - 08:02 PM | Scott Michaud
Tagged: quadro, nvidia
Be polite, be efficient, have a plan to Kepler every card that you meet.
The professional graphics market is not designed for gamers, although that should be fairly clear. These GPUs are designed to effectively handle the complex video, 3D, and high-resolution display environments found in certain specialized workspaces.
This is the class of cards which allow a 3D animator to edit their creations with stereoscopic 3D glasses, for instance.
NVIDIA's branding remains consistent with the scheme developed for the prior generation. Previously, if you were in the market for a Fermi-based Quadro solution, you had the choice among the Quadro 600, 2000, 4000, 5000, and 6000. Now that the world revolves around Kepler... heh heh heh... each entry has been prefixed with a K, with the exception of the highest-end 6000, which did not receive a successor in this announcement. These entries are therefore:
- Quadro K600, 192 CUDA Cores, 1GB, $199 MSRP
- Quadro K2000, 384 CUDA Cores, 2GB, $599 MSRP
- Quadro K4000, 768 CUDA Cores, 3GB, $1,269 MSRP
- Quadro K5000, 1536 CUDA Cores, 4GB + ECC, $2,249 MSRP
This product line is demonstrated graphically by the NVIDIA slide below.
It should be noted that each of the above products has been developed on the series of GK10X architectures and not the more computationally-intensive GK110 products. As the above slide alludes: while these Quadro cards are designed to handle graphically-intensive applications, they are meant to be paired with GK110-based Tesla K20 cards, which provide the GPGPU muscle.
Should you need the extra GPGPU performance, particularly when it comes to double precision mathematics, those cards can be found online for somewhere in the ballpark of $3,300 to $3,500.
The new Quadro products were available starting yesterday, March 5th, from “leading OEM and Channel Partners.”
Subject: Graphics Cards | March 5, 2013 - 02:28 PM | Jeremy Hellstrom
Tagged: nvidia, geforce, graphics drivers
After evaluating the evolution of AMD's drivers over 2012, [H]ard|OCP has now finalized its look at NVIDIA's offerings over the past year. They chose a half dozen drivers spanning March to December, tested on both the GTX 680 and GTX 670. As you can see throughout the review, NVIDIA's performance was mostly stable, apart from the final driver of 2012, which provided noticeably improved performance in several games. [H] compared the frame rates from both companies on the same chart, which makes the steady improvement of AMD's drivers over the year even more obvious. That also implies that AMD's early drivers needed improvement, and that the driver team at AMD has its work cut out for it in 2013 if it wants to reach a high level of performance across the board, with game-specific improvements offering the only deviation in performance.
"We have evaluated AMD and NVIDIA's 2012 video card driver performances separately. Today we will be combining these two evaluations to show each companies full body of work in 2012. We will also be looking at some unique graphs that show how each video cards driver improved or worsened performance in each game throughout the year."
Here are some more Graphics Card articles from around the web:
- AMD EyeFinity - Issues with Triple-Screen setups and 120Hz Refresh Rates @ Tweaktown
- Low-End NVIDIA/AMD GPU Comparison On Open-Source Drivers @ Phoronix
- AMD Radeon HD 7950 Boost vs. Nvidia GeForce GTX 660 Ti: frametimes @ Hardware.info
- Radeon Gallium3D Can Beat AMD's Catalyst In Select Workloads @ Phoronix
- Sapphire Radeon HD 7870 OC GHz Edition @ Funkykit
- AMD Radeon HD 7970 GHz Edition vs. Nvidia GeForce GTX 680: frametimes review @ Hardware.info
- NVIDIA Chips Comparison Table @ Hardware Secrets
- NVIDIA GeForce GTX Titan Video Card Review @ Legit Reviews
- GTX TITAN: The beast to unseat the best! @ Bjorn3D
- NVIDIA GeForce GTX TITAN: The Most Advanced Single-GPU Video Card Ever Made @ Hi Tech Legion
- GTX TITAN Single Card @ Bjorn3D
- Asus GeForce GTX 660 DirectCU II OC 2 GB @ X-bit Labs
- Gigabyte GeForce GTX Titan 6GB @ eTeknix
- Nvidia GeForce GTX Titan 3-way/4-way SLI review incl 5760x1080 and frametimes @ Hardware.info
- Sparkle GeForce GTX 650 Ti Dragon Series @ Kitguru
Subject: General Tech | February 28, 2013 - 03:45 PM | Ken Addison
Tagged: video, titan, sli, R5000, podcast, nvidia, H90, H110, gtx titan, frame rating, firepro, crossfire, amd
PC Perspective Podcast #240 - 02/28/2013
Join us this week as we discuss GTX TITAN Benchmarks, Frame Rating, Tegra 4 Details and more!
The URL for the podcast is: http://pcper.com/podcast - Share with your friends!
- iTunes - Subscribe to the podcast directly through the Store
- RSS - Subscribe through your regular RSS reader
- MP3 - Direct download link to the MP3 file
Hosts: Ryan Shrout, Jeremy Hellstrom, Josh Walrath and Allyn Malventano
Program length: 1:24:28
Podcast topics of discussion:
- 0:01:18 PCPer Podcast BINGO!
- Week in Reviews:
- 0:40:30 This Podcast is brought to you by MSI!
News items of interest:
- 0:41:45 New Offices coming for NVIDIA
- 0:45:00 Chromebook Pixel brings high-res to high-price
- 0:48:00 GPU graphics market updates from JPR
- 0:55:45 Tegra 4 graphics details from Mobile World Congress
- 1:01:00 Unreal Engine 4 on PS4 has reduced quality
- 1:04:10 Micron SAS SSDs
- 1:08:25 AMD FirePro R5000 PCoIP Card
- 1:13:35 Hardware / Software Pick of the Week
- Ryan: NOT this 3 port HDMI switch
- Jeremy: Taxidermy + PICAXE, why didn't we think of this before?
- Josh: Still among my favorite headphones
- Allyn: Cyto
- 1-888-38-PCPER or firstname.lastname@example.org
- http://twitter.com/ryanshrout and http://twitter.com/pcper
Subject: Graphics Cards | February 25, 2013 - 08:01 PM | Josh Walrath
Tagged: nvidia, tegra, tegra 4, Tegra 4i, pixel, vertex, PowerVR, mali, adreno, geforce
When Tegra 4 was introduced at CES there was precious little information about the setup of the integrated GPU. We all knew that it would be a much more powerful GPU, but we were not entirely sure how it was set up. Now NVIDIA has finally released a slew of whitepapers that deal with not only the GPU portion of Tegra 4, but also some of the low level features of the Cortex A15 processor. For this little number I am just going over the graphics portion.
This robust looking fellow is the Tegra 4. Note the four pixel "pipelines" that can output 4 pixels per clock.
The graphics units on the Tegra 4 and Tegra 4i are identical in overall architecture; the 4i simply has fewer units, arranged slightly differently. Tegra 4 comprises 72 units, 48 of which are pixel shaders. These pixel shaders are VLIW-based VEC4 units. The other 24 units are vertex shaders. The Tegra 4i comprises 60 units, 48 of which are pixel shaders and 12 of which are vertex shaders. We knew at CES that it was not a unified shader design, but we were still unsure of the overall makeup of the part. There are some very good reasons why NVIDIA went this route, as we will soon explore.
If NVIDIA were to transition to unified shaders, it would increase the overall complexity and power consumption of the part. Each shader unit would have to be able to handle both vertex and pixel workloads, which means more transistors are needed. Simpler shaders focused on either pixel or vertex operations are more efficient at what they do, both in terms of transistors used and power consumption. This is the same train of thought as using fixed-function units versus fully programmable ones. Yes, programmability gives more flexibility, but the fixed-function unit is again smaller, faster, and more efficient at its workload.
On the other hand here we have the Tegra 4i, which gives up half the pixel pipelines and vertex shaders, but keeps all 48 pixel shaders.
If there was one surprise here, it would be that the part is not completely OpenGL ES 3.0 compliant. It is lacking one major function that is required for certification: this particular part cannot render at FP32 precision. It has been quite a few years since we have heard of anything in the PC market not being able to do FP32, but it is quite common to skip it in the power- and transistor-conscious mobile market. NVIDIA decided to go with an FP20 partial-precision setup. They claim that, for all intents and purposes, it will not be noticeable to the human eye: colors will still be rendered properly and artifacts will be few and far between. Remember back in the day when NVIDIA supported FP16 and FP32 while chastising ATI for choosing FP24 with the Radeon 9700 Pro? Times have changed a bit. Going with FP20 is again a power and transistor saving decision. The part still supports DX9.3 and OpenGL ES 2.0, but it is not fully OpenGL ES 3.0 compliant. This is not to say that it does not support any 3.0 features; it does in fact support quite a bit of the functionality required by 3.0, but it is still not fully compliant.
This will be an interesting decision to watch over the next few years. The latest Mali-T600 series, PowerVR Series6, and Adreno 300 series solutions all support OpenGL ES 3.0; Tegra 4 is the odd man out. While most developers have no plans to move to 3.0 anytime in the near future, it will eventually be adopted in software. When that point comes, Tegra 4 based devices will be left a bit behind. By then NVIDIA will have a fully compliant solution, but that is little comfort for those buying phones and tablets in the near future that will be saddled with non-compliance once such applications hit.
A list of the OpenGL ES 3.0 features that are actually present in Tegra 4; the lack of FP32, however, relegates it to ES 2.0 compliant status.
The core speed is increased to 672 MHz, well up from the 520 MHz in Tegra 3 (8 pixel and 4 vertex shaders). The GPU can output four pixels per clock, double that of Tegra 3. Once we consider the extra clock speed and pixel pipelines, the Tegra 4 increases pixel fillrate by 2.6x. Pixel and vertex shading will get a huge boost in performance due to the dramatic increase of units and clockspeed. Overall this is a very significant improvement over the previous generation of parts.
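That 2.6x figure is easy to verify from the numbers above; here is the quick arithmetic, taking the two-pixels-per-clock rate for Tegra 3 from the claim that Tegra 4 doubles it:

```python
# Pixel fillrate comparison from the figures quoted above.
tegra3_fillrate = 520e6 * 2   # 520 MHz, 2 pixels per clock -> ~1.04 Gpix/s
tegra4_fillrate = 672e6 * 4   # 672 MHz, 4 pixels per clock -> ~2.69 Gpix/s
print(tegra4_fillrate / tegra3_fillrate)  # ~2.58, i.e. roughly the quoted 2.6x
```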
The Tegra 4 can output to a 4K display natively, and that is not the only new feature for this part. Here is a quick list:
- 2x/4x Multisample Antialiasing (MSAA)
- 24-bit Z (versus 20-bit Z in the Tegra 3 processor) and 8-bit stencil
- 4K x 4K texture size, including non-power-of-two textures (versus 2K x 2K in the Tegra 3 processor) – allows higher quality textures and makes it easier to port full-resolution textures from console and PC games to the Tegra 4 processor. Good for high resolution displays.
- 16:1 depth (Z) compression and 4:1 color compression (versus none in the Tegra 3 processor) – this is lossless compression and is useful for reducing bandwidth to/from the frame buffer, and especially effective in antialiasing when processing multiple samples per pixel (see the rough illustration after this list)
- Percentage Closer Filtering for shadow texture mapping and soft shadows
- Texture border color to eliminate coarse MIP-level bleeding
- sRGB for texture filtering, render surfaces, and MSAA down-filter
1 - CSAA is no longer supported in Tegra 4 processors
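As a rough illustration of why that color compression matters for MSAA, here is a back-of-the-envelope, best-case calculation; the resolution, sample count, and byte sizes are my own assumptions, not figures from NVIDIA's whitepaper:

```python
# Best-case framebuffer traffic for one full write of a 1080p, 4x MSAA color buffer.
width, height, samples, bytes_per_sample = 1920, 1080, 4, 4
uncompressed = width * height * samples * bytes_per_sample   # ~33 MB without compression
best_case_4to1 = uncompressed / 4                             # ~8 MB if every tile compresses 4:1
print(uncompressed / 1e6, best_case_4to1 / 1e6)               # MB written per pass
```

Real savings depend on how many tiles actually compress, but since MSAA samples within a pixel are often identical across a triangle's interior, multisampled buffers tend to compress well, which is why the feature pairs nicely with antialiasing.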
This is a big generational jump, and now we only have to see how it performs against the other top end parts from Qualcomm, Samsung, and others utilizing IP from Imagination and ARM.
Subject: General Tech | February 22, 2013 - 12:23 PM | Jeremy Hellstrom
Tagged: nvidia, jen-hsun huang
NVIDIA will have a new nerve center across the street from its existing headquarters; from what Jen-Hsun told The Register, they are almost at the point of needing bunk-desks in the current HQ. The triangle pattern shown in the artist's concepts not only embodies a key part of NVIDIA's technology but is also a well-recognized architectural technique for providing very sturdy construction. Hao Ko was the architect chosen for the design; his resume includes a terminal at JFK airport as well as a rather tall building in China. For NVIDIA's overlord to plan such an expensive undertaking shows great confidence in his company's success, even with the shrinking discrete GPU market.
"Move over Apple. Nvidia cofounder and CEO Jen-Hsun Huang wants to build his own futuristic space-station campus – and as you might expect, the Nvidia design is black and green and built from triangles, the basic building block of the mathematics around graphics processing. And, as it turns out, the strongest shape in architecture."
Here is some more Tech News from around the web:
- TSMC supply of 28nm chips remains tight @ DigiTimes
- AMD and The Sony PS4. Allow Me To Elaborate @ AMD
- Google reveals Glass details in patent application @ The Register
- Build your own dumb USB power strip @ Hack a Day
- ARM and Synopsys tape out a Mali-T658 GPU at 20nm @ The Inquirer
- Philips Gioco review: Ambiglow on your desk and much more @ Hardware.Info
- Win a MSI GTX 670 Twin Frozr Power Edition OC 2GB @ eTeknix
In case you missed it...
In one of the last pages of our recent NVIDIA GeForce GTX TITAN graphics card review we included an update on our Frame Rating graphics performance metric that explains the testing method in more detail and shows results for the first time. Because it was buried so far into the article, I thought it was worth posting this information here as a separate article to solicit feedback from readers and help guide the discussion forward without getting lost in the TITAN shuffle. If you already read that page of our TITAN review, nothing new is included below.
I am still planning a full article based on these results sooner rather than later; for now, please leave me your thoughts, comments, ideas and criticisms in the comments below!
Why are you not testing CrossFire??
If you haven't been following our sequence of stories that investigates a completely new testing methodology we are calling "frame rating", then you are really missing out. (Part 1 is here, part 2 is here.) The basic premise of Frame Rating is that the performance metrics that the industry is gathering using FRAPS are inaccurate in many cases and do not properly reflect the real-world gaming experience the user has.
Because of that, we are working on another method that uses high-end dual-link DVI capture equipment to directly record the raw output from the graphics card with an overlay technology that allows us to measure frame rates as they are presented on the screen, not as they are presented to the FRAPS software sub-system. With these tools we can measure average frame rates, frame times and stutter, all in a way that reflects exactly what the viewer sees from the game.
We aren't ready to show our full sets of results yet (soon!), but the problem is that AMD's CrossFire technology shows severe performance degradation when viewed under the Frame Rating microscope that does not show up nearly as dramatically under FRAPS. As such, I decided that it was simply irresponsible of me to present data to readers that I would then immediately refute on the final pages of this review (Editor: referencing the GTX TITAN article linked above.) - it would be a waste of the reader's time, and people who skip straight to the performance graphs wouldn't know our theory on why the displayed results were invalid.
Many other sites will use FRAPS, will use CrossFire, and there is nothing wrong with that at all. They are simply presenting data that they believe to be true based on the tools at their disposal. More data is always better.
Here are those results and our discussion. I decided to use the most popular game out today, Battlefield 3, and please keep in mind this is NOT the worst-case scenario for AMD CrossFire in any way. I tested the Radeon HD 7970 GHz Edition in single and CrossFire configurations as well as the GeForce GTX 680 in single-card and SLI configurations. To gather results I used two processes:
- Run FRAPS while running through a repeatable section and record frame rates and frame times for 60 seconds
- Run our Frame Rating capture system with a special overlay that allows us to measure frame rates and frame times with post processing.
Here is an example of what the overlay looks like in Battlefield 3.
Frame Rating capture on GeForce GTX 680s in SLI
The column on the left is actually an overlay that is applied to each and every frame of the game early in the rendering process. A solid color is added at the PRESENT call (more details to come later) for each individual frame. As you know, when you are playing a game, multiple frames can make it to the screen during any single 60 Hz cycle of your monitor, and because of that you get a succession of colors on the left-hand side.
By measuring the pixel height of those colored columns, and knowing beforehand the order in which they should appear, we can gather the same data that FRAPS does, but our results are seen AFTER any driver optimizations and DX changes the game might make.
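To make the idea concrete, here is a minimal sketch of how such a captured frame could be analyzed; this is my own illustration of the concept, not PC Perspective's actual tooling, and the function names, overlay position, and 60 Hz assumption are all mine.

```python
# Hypothetical sketch: measure the colored overlay bands in one captured 60 Hz frame
# and convert each band's height into on-screen time for the rendered frame it marks.
import numpy as np

REFRESH_INTERVAL_MS = 1000.0 / 60.0  # one captured scanout at 60 Hz

def band_heights(captured_frame, overlay_x=8):
    """Return (color, height_in_rows) for each contiguous color band in the overlay column."""
    column = captured_frame[:, overlay_x]  # (rows, 3) RGB values down the overlay strip
    bands, start = [], 0
    for row in range(1, len(column)):
        if not np.array_equal(column[row], column[start]):
            bands.append((tuple(column[start]), row - start))
            start = row
    bands.append((tuple(column[start]), len(column) - start))
    return bands

def on_screen_times_ms(captured_frame):
    """Each band's share of the scanout, expressed in milliseconds of display time."""
    total_rows = captured_frame.shape[0]
    return [(color, rows / total_rows * REFRESH_INTERVAL_MS)
            for color, rows in band_heights(captured_frame)]
```

Stringing those measurements together across consecutive captured frames, and matching the colors against the known sequence, is what allows this kind of approach to flag slivers of frames that FRAPS would count as full frames.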
Frame Rating capture on Radeon HD 7970 CrossFire
Here you see a very similar screenshot running on CrossFire. Notice the thin silver band between the maroon and purple? That is a complete frame according to FRAPS and most reviews. Not to us - we think that rendered frame is almost useless.