Subject: General Tech, Graphics Cards | March 19, 2013 - 06:52 PM | Tim Verry
Tagged: GTC 2013, tyan, HPC, servers, tesla, kepler, nvidia
Server platform manufacturer TYAN is showing off several of its latest servers aimed at the high performance computing (HPC) market. The new servers range in size from 2U to 4U chassis and hold up to eight Kepler-based Tesla accelerator cards. The new product lineup consists of two motherboards and three bare-bones systems: the S7055 and S7056 are the motherboards, while the FT77-B7059, TA77-B7061, and FT48-B7055 are the bare-bones systems.
The TA77-B7061 is the smallest system, with support for two Intel Xeon E5-2600 processors and four Kepler-based Tesla accelerator cards. The FT48-B7055 has similar specifications but is housed in a 4U chassis. Finally, the FT77-B7059 is a 4U system with support for two Intel Xeon E5-2600 processors and up to eight Tesla accelerator cards. The S7055 supports a maximum of four GPUs while the S7056 can support two Tesla cards, though these are bare boards, so you will have to supply your own cards, processors, and RAM (of course).
According to TYAN, the new Kepler-based HPC systems will be available in Q2 2013, though there is no word on pricing yet.
Stay tuned to PC Perspective for further GTC 2013 Coverage!
Subject: General Tech, Graphics Cards | March 19, 2013 - 02:55 PM | Tim Verry
Tagged: unified virtual memory, ray tracing, nvidia, GTC 2013, grid vca, grid, graphics cards
Today, NVIDIA's CEO Jen-Hsun Huang stepped on stage to present the GTC keynote. In the presentation (which was live streamed on the GTC website and archived there), NVIDIA discussed five major points, looking ahead over 2013 and into the future of its mobile and professional products. In addition to the product roadmap, NVIDIA discussed the state of computer graphics and GPGPU software. Remote graphics and GPU virtualization were also on tap. Finally, towards the end of the keynote, the company revealed its first appliance with the NVIDIA GRID VCA. The culmination of NVIDIA's GRID and GPU virtualization technology, the VCA is a device that hosts up to 16 virtual machines, each of which can tap into one of 16 Kepler-based graphics processors (8 cards, 2 GPUs per card) to fully hardware accelerate software running on the VCA. Three new mobile Tegra parts and two new desktop graphics processors were also hinted at, with improvements to power efficiency and performance.
On the desktop side of things, NVIDIA's roadmap included two new GPUs. Following Kepler, NVIDIA will introduce Maxwell and Volta. Maxwell will feature a new virtualized memory technology called Unified Virtual Memory. This tech will allow both the CPU and GPU to read from a single (virtual) memory store. Much as with the promise of AMD's Kaveri APU, Unified Virtual Memory should deliver speed improvements in heterogeneous applications because data will not have to be copied between the GPU and CPU before it can be processed. Server applications in particular stand to benefit from the shared memory tech. NVIDIA did not provide details, but from the sound of it, the CPU and GPU will each continue to write to their own physical memory, with a layer of virtualized memory on top that allows the two (or more) different processors to read from each other's memory stores.
Following Maxwell, Volta will be a physically smaller chip with more transistors (likely a smaller process node). In addition to power efficiency improvements over Maxwell, it steps up memory bandwidth significantly. NVIDIA will use TSV (through-silicon via) technology to physically mount the graphics DRAM chips over the GPU (electrically attached to the same silicon substrate). According to NVIDIA, this TSV-mounted memory will achieve up to 1 terabyte per second of memory bandwidth, a notable increase over existing GPUs.
NVIDIA continues to pursue the mobile market with its line of Tegra chips, which pair an ARM CPU, an NVIDIA GPU, and an SDR modem. Two new mobile chips called Logan and Parker will follow Tegra 4. Both new chips will support the full CUDA 5 stack and OpenGL 4.3 out of the box. Logan will feature a Kepler-based graphics processor on the chip that can do “everything a modern computer ought to do,” according to NVIDIA. Parker will have a yet-to-be-revealed graphics processor (a Kepler successor). This mobile chip will utilize 3D FinFET transistors. It will pack a greater number of transistors into a smaller package than previous Tegra parts (it will be about the size of a dime), and NVIDIA also plans to ramp up the frequency to wrangle more performance out of the mobile chip. NVIDIA has stated that Logan silicon should be completed towards the end of 2013, with the mobile chips entering production in 2014.
Interestingly, Logan has a sister chip that NVIDIA is calling Kayla. This mobile chip is capable of running ray tracing applications and features OpenGL geometry shaders. It can run GPGPU code and will be compatible with Linux.
NVIDIA has been pushing CUDA for several years now, and the company has seen respectable adoption: from a single Tesla supercomputer in 2008, its graphics cards are now used in 50 supercomputers, with 500 million CUDA processors on the market. There are now allegedly 640 universities working with CUDA and 37,000 academic papers on CUDA.
Finally, NVIDIA's hinted-at new product announcement was the NVIDIA GRID VCA, a GPU virtualization appliance that hooks into the network and can deliver up to 16 virtual machines running independent applications. These GPU-accelerated workspaces can be presented to thin clients over the network by installing the GRID client software on users' workstations. The specifications of the GRID VCA are rather impressive as well.
The GRID VCA features:
- 2 x Intel Xeon processors with 16 threads each (32 total threads)
- 192GB to 384GB of system memory
- 8 Kepler-based graphics cards, with two GPUs each (16 total GPUs)
- 16 x GPU-accelerated virtual machines
The GRID VCA fits into a 4U case. It can deliver remote graphics to workstations, and is allegedly fast enough that GPU-accelerated software feels equivalent to running on the local machine (at least over LAN). The GRID Visual Computing Appliance will come in two flavors at different price points. The first will have 8 Kepler GPUs with 4GB of memory each, 16 CPU threads, and 192GB of system memory for $24,900. The other version will cost $34,900 and features 16 Kepler GPUs (4GB memory each), 32 CPU threads, and 384GB of system memory. On top of the hardware cost, NVIDIA is also charging licensing fees. While both GRID VCA devices can support unlimited devices, the licenses cost $2,400 and $4,800 per year respectively.
Overall, it was an interesting keynote, and the proposed graphics cards look to be offering up some unique and necessary features that should help hasten the day of ubiquitous general purpose GPU computing. The Unified Virtual Memory was something I was not expecting, and it will be interesting to see how AMD responds. AMD is already promising shared memory in its Kaveri APU, but I am interested to see the details of how NVIDIA and AMD will accomplish shared memory with dedicated graphics cards (and whether CrossFire/SLI setups will all have a single shared memory pool).
Stay tuned to PC Perspective for more GTC 2013 Coverage!
Subject: General Tech, Graphics Cards, Mobile, Shows and Expos | March 18, 2013 - 09:10 PM | Scott Michaud
Tagged: GTC 2013, nvidia
We just received word from Tim Verry, our GTC correspondent and news troll, about his first kick at the conference. This... is his story.
Graphics card manufacturer, NVIDIA, is hosting its annual GPU Technology Conference (GTC 2013) in San Jose, California this week. PC Perspective will be roaming the exhibit floor and covering sessions as NVIDIA and its partners discuss upcoming graphics technologies, GPGPU, programming, and a number of other low level computing topics.
The future... is tomorrow!
A number of tech companies will be on site and delivering presentations to show off their latest Kepler-based systems. NVIDIA will deliver its keynote presentation tomorrow for the press, financial and industry analysts, and business partners to provide a glimpse at the green team's roadmap throughout 2013 - and maybe beyond.
We cannot say for certain what NVIDIA will reveal during its keynote; but, since we have not been briefed ahead of time, we are completely free to speculate! I think one certainty, for example, is the official launch of the Kepler-based K6000 workstation card. While I do not expect to see Maxwell, we could possibly see a planned refresh of the Kepler-based components with some incremental improvements: I predict power efficiency over performance. Perhaps we will receive a cheaper Titan-like consumer card towards the end of 2013? Wishful thinking on my part? A refresh of the GK104 architecture would be nice to see as well, even if actual hardware will not show up until next year. I expect that NVIDIA will react to whatever plans AMD has and decide whether it is in its interest to match them or not.
I do expect to see more information on GRID and Project SHIELD, however. NVIDIA has reportedly broadened the scope of this year's conference to include mobile sessions: expect Tegra programming and mobile GPGPU goodness to be on tap.
It should be an interesting week of GPU news. Stay tuned to PC Perspective for more coverage as the conference gets underway.
What are you hoping to see from NVIDIA at GTC 2013?
Subject: General Tech | March 18, 2013 - 02:23 PM | Jeremy Hellstrom
Tagged: nvidia, hack, GTX 690, K5000, K10, quadro, tesla, linux
It will take a bit of work with a soldering iron, but Hack a Day has posted an article covering how to mod one of the GPUs on a GTX 690 into thinking it is either a Quadro K5000 or a Tesla K10. More people will need to apply this mod and test it to confirm that the performance of the GPU actually matches, or at least compares to, the professional-level cards, but the ID string is definitely changed to match one of those two much more expensive GPUs. They also believe that a similar mod could be applied to the new TITAN graphics card, as it is electronically similar to the GTX 690. Of course, if things go bad during the modification you could kill a $1,000 card, so do be careful.
"If hardware manufacturers want to keep their firmware crippling a secret, perhaps they shouldn’t mess with Linux users? We figure if you’re using Linux you’re quite a bit more likely than the average Windows user to crack something open and see what’s hidden inside. And so we get to the story of how [Gnif] figured out that the NVIDIA GTX690 can be hacked to perform like the Quadro K5000. The thing is, the latter costs nearly $800 more than the former!"
Here is some more Tech News from around the web:
- The TR Podcast 130: A series of grunts about convertible tablets
- Microsoft updates its Kinect for Windows SDK @ The Inquirer
- Asustek to launch new Intel-based smartphone in June @ DigiTimes
- The 2013 Top 7 Best Linux Distributions for You @ Linux.com
- Watch out, office bods: A backdoor daemon lurks in HP LaserJets @ The Register
Subject: General Tech | March 16, 2013 - 11:36 PM | Scott Michaud
Tagged: nvidia, tomb raider
The last month has been good to PC gamers: from StarCraft, to SimCity, to Tomb Raider, all with the promise of BioShock Infinite just around the corner. We are being dog-piled by one bulky release after another... most of which we are theoretically able to play.
Of course this is a call to action for GPU driver engineers. The software required to make your video card run is extremely complex with graphics instructions being compiled and interpreted at runtime for routinely shifting architectures. Performance increases are often measured in the double digit percentages albeit for some set "X" of components in some set "Y" of games.
GeForce 314.14 beta drivers launched early in the month with decent performance increases, particularly for setups with SLI-paired 680s. Tomb Raider fans on NVIDIA and Intel hardware found themselves quite a bit left out, with the reboot of the franchise doing everything but rebooting their PCs.
Now, two weeks later, NVIDIA has released yet another beta driver, dubbed 314.21, aimed squarely at Tomb Raider. Performance increases are claimed to be an average 45% higher than previous versions with some configurations seeing upwards of 60% increases in performance. The delay was allegedly caused by the hardware developer not receiving the game code with enough time before launch to create the updates.
If you are a Tomb Raider fan, check out the drivers at NVIDIA's website.
Subject: Graphics Cards | March 8, 2013 - 09:17 AM | Tim Verry
Tagged: quadro, nvidia, kepler, k6000, gk110
Earlier this week, NVIDIA updated its Quadro line of workstation cards with new GPUs based on GK104 “Kepler” cores. The updated line introduced four new Kepler cards, but the Quadro 6000 successor was notably absent from the NVIDIA announcement. If rumors hold true, professionals may get access to a K6000 Quadro card after all, and one that is powered by GK110 as well.
According to rumors around the Internet, NVIDIA has reserved its top-end Quadro slot for a GK110-based graphics card. Dubbed the K6000 (in line with the existing Kepler Quadro cards), the high-end workstation card will feature 13 SMX units, 2,496 CUDA cores, 192 texture mapping units, 40 raster operation (ROP) units, and a 320-bit memory bus. The K6000 card will likely have 5GB of GDDR5 memory, like its Tesla K20 counterpart. Interestingly, this Quadro K6000 graphics card has one less SMX unit than NVIDIA’s Tesla K20X and even NVIDIA’s consumer-grade GTX Titan GPU. A comparison between the rumored K6000 card, the Quadro K5000 (GK104), and other existing GK110 cards is available in the table below. Also, note that the (rumored) K6000 specs put it more in line with the Tesla K20 than the K20X, but as it is the flagship Quadro card I felt it was still fair to compare it to the flagship Tesla and GeForce cards.
|           | Quadro K6000 | Tesla K20X | GTX Titan | GK110 Full (not available yet) | Quadro K5000 |
|-----------|--------------|------------|-----------|--------------------------------|--------------|
| DP TFLOPS | ~1.17        | 1.31       | 1.31      | ~1.4                           | 0.09         |
The Quadro cards are in an odd situation when it comes to double precision floating point performance. The Quadro K5000, which uses GK104, brings an abysmal 90 GFLOPS of double precision. The rumored GK110-powered Quadro K6000 brings double precision performance up to approximately 1.17 TFLOPS, which is quite the jump and shows that GK104 really was cut down to focus on gaming performance! Further, the card that the K6000 is replacing in name, the Quadro 6000 (no K prefix), is based on NVIDIA’s previous-generation Fermi architecture and offers 0.515 TFLOPS (515.2 GFLOPS) of double precision performance. On the plus side, users can expect around 3.5 TFLOPS of single precision horsepower, which is a substantial upgrade over the Quadro 6000's 1.03 TFLOPS of single precision floating point. For comparison, the GK104-based Quadro K5000 offers 2.1 TFLOPS of single precision. Although it's no full GK110, the K6000 looks to be the Quadro card to beat for its intended usage.
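As a sanity check on those numbers, peak double precision throughput can be estimated from the SMX count, the FP64 units per SMX (64 on GK110), and the clock speed. A minimal sketch; note the ~700 MHz K6000 clock below is an assumption chosen to reproduce the rumored ~1.17 TFLOPS figure, not a confirmed spec:

```python
# Back-of-the-envelope check of the double precision figures above.
# GK110 carries 64 FP64 units per SMX; each can retire one fused
# multiply-add (2 FLOPs) per clock cycle.

def dp_gflops(smx_count, clock_ghz, fp64_per_smx=64):
    """Theoretical peak FP64 throughput in GFLOPS (counting FMA as 2 FLOPs)."""
    return smx_count * fp64_per_smx * 2 * clock_ghz

print(dp_gflops(13, 0.70))   # rumored Quadro K6000 at an assumed ~700 MHz: ~1165 GFLOPS
print(dp_gflops(14, 0.732))  # Tesla K20X at its 732 MHz clock: ~1312 GFLOPS (1.31 TFLOPS)
```

With 13 SMX units the rumored ~1.17 TFLOPS figure falls out naturally, while the K20X's 14 SMX units at 732 MHz reproduce its quoted 1.31 TFLOPS.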
Of course, Quadro is more about stable drivers, beefy memory, and single precision than double precision, but it would be nice to see the expensive Quadro workstation cards pull double duty, as it were. NVIDIA’s Tesla line is where DP floating point is key; fortunately, the K6000 somewhat closes the rather wide gap between the two lineups. I would have really liked to see the K6000 have at least 14 SMX units, to match the consumer Titan and the Tesla K20X, but rumors are not looking positive in that regard. Professionals should expect to pay quite the premium for the K6000 versus the Titan, despite the hardware differences. It will likely be sold for around $3,000.
No word on availability, but the card will likely be released soon in order to complete the Kepler Quadro lineup update.
Subject: General Tech, Graphics Cards | March 6, 2013 - 08:02 PM | Scott Michaud
Tagged: quadro, nvidia
Be polite, be efficient, have a plan to Kepler every card that you meet.
The professional graphics market is not designed for gamers, although that should have been fairly clear. These GPUs are designed to effectively handle complex video, 3D, and high resolution display environments found in certain specialized workspaces.
This is the class of cards which allow a 3D animator to edit their creations with stereoscopic 3D glasses, for instance.
NVIDIA's branding will remain consistent with the scheme developed for the prior generation. Previously, if you were in the market for a Fermi-based Quadro solution, you would have the choice between: the Quadro 600, the 2000, the 4000, the 5000, and the 6000. Now that the world revolves around Kepler... heh heh heh... each entry has been prefixed with a K with the exception of the highest-end 6000 card. These entries are therefore:
- Quadro K600, 192 CUDA Cores, 1GB, $199 MSRP
- Quadro K2000, 384 CUDA Cores, 2GB, $599 MSRP
- Quadro K4000, 768 CUDA Cores, 3GB, $1,269 MSRP
- Quadro K5000, 1536 CUDA Cores, 4GB + ECC, $2,249 MSRP
This product line is demonstrated graphically by the NVIDIA slide below.
It should be noted that each of the above products has been developed on the series of GK10X architectures, not the more computationally-intensive GK110. As the above slide alludes: while these Quadro cards are designed to handle graphically-intensive applications, they are meant to be paired with GK110-based Tesla K20 cards to which the GPGPU work can be offloaded.
Should you need the extra GPGPU performance, particularly when it comes to double precision mathematics, those cards can be found online for somewhere between $3,300 and $3,500.
The new Quadro products were available starting yesterday, March 5th, from “leading OEM and Channel Partners.”
Subject: Graphics Cards | March 5, 2013 - 02:28 PM | Jeremy Hellstrom
Tagged: nvidia, geforce, graphics drivers
After evaluating the evolution of AMD's drivers over 2012, [H]ard|OCP has now finalized its look at NVIDIA's offerings over the past year. They chose a half dozen drivers spanning March to December, tested on both the GTX 680 and GTX 670. As you can see throughout the review, NVIDIA's performance was mostly stable, apart from the final driver of 2012, which provided noticeably improved performance in several games. [H] compared the frame rates from both companies on the same chart, which makes the steady improvement of AMD's drivers over the year even more obvious. That does imply that AMD's initial 2012 drivers needed improvement, and perhaps the driver team at AMD has its work cut out for it in 2013 if it wants to reach a high level of performance across the board, with game-specific improvements offering the only deviation in performance.
"We have evaluated AMD and NVIDIA's 2012 video card driver performances separately. Today we will be combining these two evaluations to show each companies full body of work in 2012. We will also be looking at some unique graphs that show how each video cards driver improved or worsened performance in each game throughout the year."
Here are some more Graphics Card articles from around the web:
- AMD EyeFinity - Issues with Triple-Screen setups and 120Hz Refresh Rates @ Tweaktown
- Low-End NVIDIA/AMD GPU Comparison On Open-Source Drivers @ Phoronix
- AMD Radeon HD 7950 Boost vs. Nvidia GeForce GTX 660 Ti: frametimes @ Hardware.info
- Radeon Gallium3D Can Beat AMD's Catalyst In Select Workloads @ Phoronix
- Sapphire Radeon HD 7870 OC GHz Edition @ Funkykit
- AMD Radeon HD 7970 GHz Edition vs. Nvidia GeForce GTX 680: frametimes review @ Hardware.info
- NVIDIA Chips Comparison Table @ Hardware Secrets
- NVIDIA GeForce GTX Titan Video Card Review @ Legit Reviews
- GTX TITAN: The beast to unseat the best! @ Bjorn3D
- NVIDIA GeForce GTX TITAN: The Most Advanced Single-GPU Video Card Ever Made @Hi Tech Legion
- GTX TITAN Single Card @ Bjorn3D
- Asus GeForce GTX 660 DirectCU II OC 2 GB @ X-bit Labs
- Gigabyte GeForce GTX Titan 6GB @ eTeknix
- Nvidia GeForce GTX Titan 3-way/4-way SLI review incl 5760x1080 and frametimes @ Hardware.info
- Sparkle GeForce GTX 650 Ti Dragon Series @ Kitguru
Subject: General Tech | February 28, 2013 - 03:45 PM | Ken Addison
Tagged: video, titan, sli, R5000, podcast, nvidia, H90, H110, gtx titan, frame rating, firepro, crossfire, amd
PC Perspective Podcast #240 - 02/28/2013
Join us this week as we discuss GTX TITAN Benchmarks, Frame Rating, Tegra 4 Details and more!
The URL for the podcast is: http://pcper.com/podcast - Share with your friends!
- iTunes - Subscribe to the podcast directly through the Store
- RSS - Subscribe through your regular RSS reader
- MP3 - Direct download link to the MP3 file
Hosts: Ryan Shrout, Jeremy Hellstrom, Josh Walrath and Allyn Malventano
Program length: 1:24:28
Podcast topics of discussion:
- 0:01:18 PCPer Podcast BINGO!
- Week in Reviews:
- 0:40:30 This Podcast is brought to you by MSI!
- News items of interest:
- 0:41:45 New Offices coming for NVIDIA
- 0:45:00 Chromebook Pixel brings high-res to high-price
- 0:48:00 GPU graphics market updates from JPR
- 0:55:45 Tegra 4 graphics details from Mobile World Congress
- 1:01:00 Unreal Engine 4 on PS4 has reduced quality
- 1:04:10 Micron SAS SSDs
- 1:08:25 AMD FirePro R5000 PCoIP Card
- 1:13:35 Hardware / Software Pick of the Week
- Ryan: NOT this 3 port HDMI switch
- Jeremy: Taxidermy + PICAXE, why didn't we think of this before?
- Josh: Still among my favorite headphones
- Allyn: Cyto
- 1-888-38-PCPER or email@example.com
- http://twitter.com/ryanshrout and http://twitter.com/pcper
Subject: Graphics Cards | February 25, 2013 - 08:01 PM | Josh Walrath
Tagged: nvidia, tegra, tegra 4, Tegra 4i, pixel, vertex, PowerVR, mali, adreno, geforce
When Tegra 4 was introduced at CES there was precious little information about the setup of the integrated GPU. We all knew that it would be a much more powerful GPU, but we were not entirely sure how it was set up. Now NVIDIA has finally released a slew of whitepapers that deal with not only the GPU portion of Tegra 4, but also some of the low level features of the Cortex A15 processor. For this little number I am just going over the graphics portion.
This robust looking fellow is the Tegra 4. Note the four pixel "pipelines" that can output 4 pixels per clock.
The graphics units on the Tegra 4 and Tegra 4i are identical in overall architecture; the 4i simply has fewer units, arranged slightly differently. Tegra 4 comprises 72 units, 48 of which are pixel shaders. These pixel shaders are VLIW-based VEC4 units. The other 24 units are vertex shaders. The Tegra 4i comprises 60 units: 48 pixel shaders and 12 vertex shaders. We knew at CES that it was not a unified shader design, but we were still unsure of the overall makeup of the part. There are some very good reasons why NVIDIA went this route, as we will soon explore.
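For quick reference, the non-unified shader split described above tallies up as follows (counts come from NVIDIA's figures; the dictionary layout is just for illustration):

```python
# Shader unit breakdown for the two Tegra 4 variants.
tegra4  = {"pixel_shaders": 48, "vertex_shaders": 24}
tegra4i = {"pixel_shaders": 48, "vertex_shaders": 12}

# Tegra 4 totals 72 units; the 4i drops half the vertex shaders
# but keeps all 48 pixel shaders, for 60 units total.
print(sum(tegra4.values()))   # 72
print(sum(tegra4i.values()))  # 60
```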
If NVIDIA were to transition to unified shaders, it would increase the overall complexity and power consumption of the part. Each shader unit would have to be able to handle both vertex and pixel workloads, which means more transistors are needed to handle it. Simpler shaders focused on either pixel or vertex operations are more efficient at what they do, both in terms of transistors used and power consumption. This is the same train of thought when using fixed function units vs. fully programmable. Yes, the programmability will give more flexibility, but the fixed function unit is again smaller, faster, and more efficient at its workload.
On the other hand, here we have the Tegra 4i, which gives up half the pixel pipelines and vertex shaders, but keeps all 48 pixel shaders.
If there was one surprise here, it would be that the part is not completely OpenGL ES 3.0 compliant. It is lacking one major function required for certification: this particular part cannot render at FP32 precision. It has been quite a few years since we have heard of anything in the PC market that cannot do FP32, but it is quite common to forgo it in the power- and transistor-conscious mobile market. NVIDIA decided to go with an FP20 partial precision setup. They claim that for all intents and purposes, it will not be noticeable to the human eye. Colors will still be rendered properly and artifacts will be few and far between. Remember back in the day when NVIDIA supported FP16 and FP32 while chastising ATI for choosing FP24 with the Radeon 9700 Pro? Times have changed a bit. Going with FP20 is again a power and transistor saving decision. The part still supports DX9.3 and OpenGL ES 2.0, but it is not fully OpenGL ES 3.0 compliant. This is not to say that it does not support any 3.0 features; it in fact supports quite a bit of the functionality required by 3.0, but it is still not fully compliant.
This will be an interesting decision to watch over the next few years. The latest Mali 600 series, PowerVR 6 series, and Adreno 300 series solutions all support OpenGL ES 3.0. Tegra 4 is the odd man out. While most developers have no plans to go to 3.0 anytime in the near future, it will eventually be implemented in software. When that point comes, then the Tegra 4 based devices will be left a bit behind. By then NVIDIA will have a fully compliant solution, but that is little comfort for those buying phones and tablets in the near future that will be saddled with non-compliance once applications hit.
Many OpenGL ES 3.0 features are actually present in Tegra 4, but the lack of FP32 relegates it to 2.0-compliant status.
The core speed is increased to 672 MHz, well up from the 520 MHz in Tegra 3 (which had 8 pixel and 4 vertex shaders). The GPU can output four pixels per clock, double that of Tegra 3. Once we consider the extra clock speed and pixel pipelines, the Tegra 4 increases pixel fillrate by 2.6x. Pixel and vertex shading will get a huge boost in performance due to the dramatic increase in units and clock speed. Overall this is a very significant improvement over the previous generation of parts.
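The 2.6x fillrate figure follows directly from the clock speeds and pixels-per-clock numbers above; a quick back-of-the-envelope check:

```python
# Sanity-checking the claimed 2.6x pixel fillrate jump from Tegra 3 to Tegra 4.
tegra3_clock_mhz, tegra3_pixels_per_clock = 520, 2
tegra4_clock_mhz, tegra4_pixels_per_clock = 672, 4

# Fillrate scales with clock speed times pixels output per clock.
speedup = (tegra4_clock_mhz * tegra4_pixels_per_clock) / \
          (tegra3_clock_mhz * tegra3_pixels_per_clock)
print(round(speedup, 2))  # 2.58, which NVIDIA rounds to 2.6x
```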
The Tegra 4 can output to a 4K display natively, and that is not the only new feature for this part. Here is a quick list:
- 2x/4x Multisample Antialiasing (MSAA)
- 24-bit Z (versus 20-bit Z in the Tegra 3 processor) and 8-bit stencil
- 4K x 4K texture size, including non-power-of-two textures (versus 2K x 2K in the Tegra 3 processor) – allows higher quality textures and makes it easier to port full resolution textures from console and PC games to the Tegra 4 processor; good for high resolution displays
- 16:1 depth (Z) compression and 4:1 color compression (versus none in the Tegra 3 processor) – this lossless compression is useful for reducing bandwidth to/from the frame buffer, and is especially effective in antialiasing, where multiple samples are processed per pixel
- Percentage Closer Filtering for shadow texture mapping and soft shadows
- Texture border color to eliminate coarse MIP-level bleeding
- sRGB for texture filtering, render surfaces, and MSAA down-filtering

Note: CSAA is no longer supported in Tegra 4 processors.
This is a big generational jump, and now we only have to see how it performs against the other top end parts from Qualcomm, Samsung, and others utilizing IP from Imagination and ARM.