GCN-Based AMD 7000 Series GPUs Will Fully Support DirectX 11.2 After Driver Update

Subject: Graphics Cards | August 26, 2013 - 01:24 AM |
Tagged: amd, Windows 8.1, microsoft, directx 11.2, graphics cards, gaming, GCN

Earlier this month, several websites reported that AMD’s latest Graphics Core Next (GCN) based graphics cards (the 7000 series and 8000 series OEM lines) would not be compatible with the Windows 8.1-only DirectX 11.2 API. This was inferred from a statement made by AMD engineer Layla Mah in an interview with c't Magazin.

AMD Radeon 7970 GHz Edition.jpg

An AMD Radeon HD 7970 GHz Edition.

Fortunately, the GCN-based cards will fully support DirectX 11.2 once an updated driver has been released. As it turns out, Microsoft’s final DirectX 11.2 specification ended up slightly different from what AMD expected, so the graphics cards do not currently support the full API. The issue is not one of hardware, however, and an updated driver will allow the GCN-based 7000 series hardware to fully support the latest DirectX 11.2 API and major new features such as tiled resources.
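
For developers curious where a given card and driver stand today, the Direct3D 11.2 runtime exposes the tiled-resources tier through a feature query. Below is a minimal sketch, assuming the Windows 8.1 SDK headers (error handling trimmed for brevity):

```cpp
// Query the tiled-resources tier reported by the installed driver.
// Requires the Windows 8.1 SDK (d3d11_2.h) and the D3D11.2 runtime.
#include <d3d11_2.h>
#include <cstdio>
#pragma comment(lib, "d3d11.lib")

int main() {
    ID3D11Device* device = nullptr;
    // Create a device on the default hardware adapter.
    if (FAILED(D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0,
                                 nullptr, 0, D3D11_SDK_VERSION,
                                 &device, nullptr, nullptr)))
        return 1;

    D3D11_FEATURE_DATA_D3D11_OPTIONS1 opts = {};
    if (SUCCEEDED(device->CheckFeatureSupport(D3D11_FEATURE_D3D11_OPTIONS1,
                                              &opts, sizeof(opts)))) {
        // 0 = not supported, 1 = Tier 1, 2 = Tier 2.
        printf("Tiled resources tier: %d\n", (int)opts.TiledResourcesTier);
    }
    device->Release();
    return 0;
}
```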

The updated driver will reportedly be released sometime in October to coincide with Microsoft’s release of Windows 8.1. Specifically, Maximum PC quoted AMD in stating the following:

"The Radeon HD 7000 series hardware architecture is fully DirectX 11.2-capable when used with a driver that enables this feature. AMD is planning to enable DirectX 11.2 with a driver update in the Windows 8.1 launch timeframe in October, when DirectX 11.2 ships. Today, AMD is the only GPU manufacturer to offer fully-compatible DirectX 11.1 support, and the only manufacturer to support Tiled Resources Tier-2 within a shipping product stack.”

So fret not, Radeon 7000-series owners: you will be able to fully utilize DX 11.2 and all of its features once games start implementing them, assuming you upgrade to Windows 8.1.

Source: Maximum PC

GTC 2013: Pedraforca Is A Power Efficient ARM + GPU Cluster For Homogeneous (GPU) Workloads

Subject: General Tech, Graphics Cards | March 20, 2013 - 01:47 PM |
Tagged: tesla, tegra 3, supercomputer, pedraforca, nvidia, GTC 2013, GTC, graphics cards, data centers

There is a lot of talk about heterogeneous computing at GTC, in the sense of adding graphics cards to servers. If you have HPC workloads that can benefit from GPU parallelism, adding GPUs gives you computing performance in less physical space, and with less power, than a CPU-only cluster of equivalent TFLOPS.

However, one session at GTC actually took things to the opposite extreme. Instead of a CPU-only cluster or a mixed cluster, Alex Ramirez (leader of the Heterogeneous Architectures Group at Barcelona Supercomputing Center) is proposing a homogeneous GPU cluster called Pedraforca.

Pedraforca V2 combines NVIDIA Tesla GPUs with low-power ARM processors. Each node consists of the following components:

  • 1 x Mini-ITX carrier board
  • 1 x Q7 module (which hosts the ARM SoC and memory)
    • Current config is one Tegra 3 @ 1.3GHz and 2GB DDR2
  • 1 x NVIDIA Tesla K20 accelerator card (1170 GFLOPS)
  • 1 x InfiniBand 40Gb/s card (Mellanox ConnectX-3)
  • 1 x 2.5" SSD (SATA 3 MLC, 250GB)

The ARM processor is used solely for booting the system and facilitating GPU communication between nodes; it is not intended for computation. According to Dr. Ramirez, in situations where running code on a CPU would be faster, it would be best to have a small number of Intel Xeon-powered nodes do the CPU-favorable computing and then offload the parallel workloads to the GPU cluster over the InfiniBand connection (though this is less than ideal; Pedraforca is most efficient with data sets that can be processed solely on the Tesla cards).

DSCF2421.JPG

While Pedraforca is not necessarily locked to NVIDIA's Tegra hardware, Tegra 3 is currently the only SoC that meets the project's needs: the design requires an ARM chip with PCI-E support. The Tegra 3 SoC has four PCI-E lanes, so the carrier board uses two PLX chips to connect both the Tesla and InfiniBand cards.

The researcher stated that he is also looking forward to using NVIDIA's upcoming Logan processor in the Pedraforca cluster. It will reportedly be possible to upgrade existing Pedraforca clusters with the new chips by replacing the existing (Tegra 3) Q7 module with one that has the Logan SoC when it is released.

Pedraforca V2 has an initial cluster size of 64 nodes. While the speaker was reluctant to provide TFLOPS performance numbers, as they would depend on the workload, 64 Tesla K20 cards should provide respectable performance. The intent of the cluster is to save power costs by using a low-power CPU. If your kernels and applications can run on the GPUs alone, there are noticeable power savings to be had by switching from a ~100W Intel Xeon chip to a lower-power (approximately 2-3W) Tegra 3 processor. If you have a kernel that needs to run on a CPU, it is recommended to run it on an Intel server and transfer just the GPU work to the Pedraforca cluster. Each Pedraforca node is reportedly under 300W, with the Tesla card accounting for the majority of that figure. Despite the limitations, and the niche nature of the workloads and software necessary to get the full power-saving benefits, Pedraforca is certainly an interesting take on a homogeneous server cluster!
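
Some quick back-of-envelope math on the figures quoted in the session (peak per-card GFLOPS and nominal wattages; sustained numbers would depend entirely on the workload):

```cpp
// Rough estimates from the session's quoted figures; treat as ballpark only.
#include <cstdio>

int main() {
    const int    nodes        = 64;
    const double k20_gflops   = 1170.0;  // quoted peak GFLOPS per Tesla K20
    const double xeon_watts   = 100.0;   // ~typical server Xeon, per the talk
    const double tegra3_watts = 3.0;     // ~2-3W Tegra 3 host

    printf("Peak cluster throughput: %.1f TFLOPS\n",
           nodes * k20_gflops / 1000.0);                    // ~74.9 TFLOPS
    printf("Host-CPU power saved:    %.1f kW\n",
           nodes * (xeon_watts - tegra3_watts) / 1000.0);   // ~6.2 kW
    return 0;
}
```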

DSCF2413.JPG

In another session, on the path to exascale computing, power use in data centers was listed as one of the biggest hurdles to reaching exaflop levels of performance. While Pedraforca is not the answer to exascale, it should at least be a useful learning experience in wringing the most parallelism out of code and pushing GPGPU to its limits, and that research will help other clusters use GPUs more efficiently as researchers explore the future of computing.

The Pedraforca project builds upon research conducted on Tibidabo, a multi-core ARM CPU cluster, and CARMA, a CUDA-on-ARM development kit that pairs a Tegra SoC with an NVIDIA Quadro card. The two slides below show CARMA benchmarks and a Tibidabo cluster (click on an image for a larger version).

Stay tuned to PC Perspective for more GTC 2013 coverage!

 

GTC 2013: Jen-Hsun Huang Takes the Stage to Discuss NVIDIA's Future, New Hardware

Subject: General Tech, Graphics Cards | March 19, 2013 - 02:55 PM |
Tagged: unified virtual memory, ray tracing, nvidia, GTC 2013, grid vca, grid, graphics cards

Today, NVIDIA's CEO Jen-Hsun Huang stepped on stage to present the GTC keynote. In the presentation (which was live streamed on the GTC website and archived here), NVIDIA discussed five major points, looking back over 2013 and into the future of its mobile and professional products. In addition to the product roadmap, NVIDIA discussed the state of computer graphics and GPGPU software. Remote graphics and GPU virtualization were also on tap. Finally, towards the end of the keynote, the company revealed its first appliance in the NVIDIA GRID VCA. The culmination of NVIDIA's GRID and GPU virtualization technology, the VCA is a device that hosts up to 16 virtual machines, each of which can tap into one of 16 Kepler-based graphics processors (eight cards with two GPUs each) to fully hardware-accelerate software running on the VCA. Three new mobile Tegra parts and two new desktop graphics processors were also hinted at, with improvements to power efficiency and performance.

DSCF2303.JPG

On the desktop side of things, NVIDIA's roadmap included two new GPUs. Following Kepler, NVIDIA will introduce Maxwell and Volta. Maxwell will feature a new virtualized memory technology called Unified Virtual Memory. This tech will allow both the CPU and GPU to read from a single (virtual) memory store. Much like the promise of AMD's Kaveri APU, Unified Virtual Memory should speed up heterogeneous applications because data will not have to be copied between the GPU and CPU in order to be processed. Server applications in particular should benefit from the shared memory tech. NVIDIA did not provide details, but from the sound of it, the CPU and GPU will each continue to write to their own physical memory, with a layer of virtualized memory on top that allows the two (or more) processors to read from each other's memory stores.
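
NVIDIA showed no code at the keynote, so purely as an illustration of the concept: the sketch below uses the managed-memory allocator from the CUDA runtime to show what "one pointer, visible to both processors" looks like from the programmer's side. Treat the API choice as our assumption, not NVIDIA's announced design.

```cuda
// Illustrative only: a single managed allocation touched by both CPU and GPU,
// with no explicit cudaMemcpy between host and device buffers.
#include <cuda_runtime.h>
#include <cstdio>

__global__ void scale(float* data, int n, float factor) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

int main() {
    const int n = 1 << 20;
    float* data = nullptr;
    cudaMallocManaged(&data, n * sizeof(float));     // one pointer for both sides

    for (int i = 0; i < n; ++i) data[i] = 1.0f;      // CPU writes directly
    scale<<<(n + 255) / 256, 256>>>(data, n, 2.0f);  // GPU reads/writes same buffer
    cudaDeviceSynchronize();                         // wait before the CPU reads

    printf("data[0] = %.1f\n", data[0]);             // prints 2.0
    cudaFree(data);
    return 0;
}
```
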
Following Maxwell, Volta will be a physically smaller chip with more transistors (likely built on a smaller process node). In addition to power efficiency improvements over Maxwell, it steps up memory bandwidth significantly. NVIDIA will use TSV (through-silicon via) technology to physically mount the graphics DRAM chips over the GPU (attached to the same silicon substrate electrically). According to NVIDIA, this TSV-mounted memory will achieve up to 1 terabyte per second of memory bandwidth, a notable increase over existing GPUs.

DSCF2354.JPG

NVIDIA continues to pursue the mobile market with its line of Tegra chips, which pair an ARM CPU, an NVIDIA GPU, and an SDR modem. Two new mobile chips called Logan and Parker will follow Tegra 4. Both new chips will support the full CUDA 5 stack and OpenGL 4.3 out of the box. Logan will feature a Kepler-based graphics processor on the chip that can do “everything a modern computer ought to do,” according to NVIDIA. Parker will have a yet-to-be-revealed graphics processor (a Kepler successor). This mobile chip will utilize 3D FinFET transistors. It will pack a greater number of transistors into a smaller package than previous Tegra parts (it will be about the size of a dime), and NVIDIA also plans to ramp up the frequency to wrangle more performance out of the mobile chip. NVIDIA has stated that Logan silicon should be completed towards the end of 2013, with the mobile chips entering production in 2014.

DSCF2371.JPG

Interestingly, Logan has a sister chip that NVIDIA is calling Kayla. This mobile chip is capable of running ray tracing applications and features OpenGL geometry shaders. It can support GPGPU code and will be compatible with Linux.

NVIDIA has been pushing CUDA for several years now, and the company has seen respectable adoption rates, growing from a single Tesla supercomputer in 2008 to having its graphics cards used in 50 supercomputers, with 500 million CUDA processors on the market. There are now allegedly 640 universities working with CUDA and 37,000 academic papers written on it.

DSCF2331.JPG

Finally, NVIDIA's hinted-at new product announcement was the NVIDIA GRID VCA, a GPU virtualization appliance that hooks into the network and can deliver up to 16 virtual machines running independent applications. These GPU-accelerated workspaces can be presented to thin clients over the network by installing the GRID client software on users' workstations. The specifications of the GRID VCA are rather impressive, as well.

The GRID VCA features:

  • 2 x Intel Xeon processors with 16 threads each (32 total threads)
  • 192GB to 384GB of system memory
  • 8 Kepler-based graphics cards, with two GPUs each (16 total GPUs)
  • 16 x GPU-accelerated virtual machines

The GRID VCA fits into a 4U case. It can deliver remote graphics to workstations, and is allegedly fast enough to deliver GPU-accelerated software that performs as if it were running on the local machine (at least over a LAN). The GRID Visual Computing Appliance will come in two flavors at different price points. The first will have 8 Kepler GPUs with 4GB of memory each, 16 CPU threads, and 192GB of system memory for $24,900. The other version will cost $34,900 and features 16 Kepler GPUs (4GB of memory each), 32 CPU threads, and 384GB of system memory. On top of the hardware cost, NVIDIA is also charging licensing fees. While both GRID VCA devices can support an unlimited number of clients, the licenses cost $2,400 and $4,800 per year respectively.
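
For anyone weighing the two configurations, the per-GPU math (first-year license included) favors the larger box; a quick sketch using only the prices quoted above:

```cpp
// Cost per GPU for each GRID VCA configuration, counting the first year
// of the per-appliance software license. Uses NVIDIA's quoted prices.
#include <cstdio>

int main() {
    struct Config { const char* name; int gpus; double hw; double license; };
    const Config configs[] = {
        { "8-GPU GRID VCA",   8, 24900.0, 2400.0 },
        { "16-GPU GRID VCA", 16, 34900.0, 4800.0 },
    };
    for (const Config& c : configs) {
        printf("%s: $%.2f per GPU (first year)\n",
               c.name, (c.hw + c.license) / c.gpus);  // $3412.50 vs $2481.25
    }
    return 0;
}
```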

DSCF2410.JPG

Overall, it was an interesting keynote, and the proposed graphics cards look to be offering up some unique and necessary features that should help hasten the day of ubiquitous general purpose GPU computing. The Unified Virtual Memory was something I was not expecting, and it will be interesting to see how AMD responds. AMD is already promising shared memory in its Kaveri APU, but I am interested to see the details of how NVIDIA and AMD will accomplish shared memory with dedicated graphics cards (and whether CrossFire/SLI setups will have a single shared memory pool).

Stay tuned to PC Perspective for more GTC 2013 coverage!

NVIDIA Launches GTX 650 for Budget Gamers

Subject: Graphics Cards | September 13, 2012 - 09:38 AM |
Tagged: nvidia, kepler, gtx 650, graphics cards, geforce

Ah, Kepler: the graphics architecture (originally intended as midrange) that took the world by storm and allowed NVIDIA to scale it from the dual-GPU GeForce GTX 690 all the way down to budget discrete HTPC cards. So far this year we have seen the company push Kepler to its limits by adding GPU Boost and placing it in the GTX 690 and GTX 680. Those cards were great, but commanded a price premium that most gamers could not afford. Enter the GTX 670 and GTX 660 Ti earlier this year, and Kepler started to become an attractive option for gamers wanting a high-end single-GPU system without breaking the bank. Those cards, at $399 and $299 respectively, were a step in the right direction to making the Kepler architecture available to everyone, but were still a bit pricey if you were on a tighter budget for your gaming rig (or needed to factor in the Significant Other Approval Process™).

Well, Kepler has now been on the market for about six months, and I’m excited to (finally) announce that NVIDIA is launching its first Kepler-based budget gaming card! The NVIDIA GeForce GTX 650 brings Kepler down to the ever-attractive $109 price point and is even capable of playing new games at 1080p above 30FPS. Not bad for such a cheap card!

GTX 650.jpg

With the GTX 650, you are making some sacrifices as far as hardware goes, but things are not all bad. The card features a mere 384 CUDA cores and 1GB of GDDR5 memory on a 128-bit bus. This is a huge decrease in hardware compared to the GTX 660 Ti’s 1344 CUDA cores and 2GB of memory on a 192-bit bus, but that card is also $200 more. And while the GTX 650 runs its memory at 5 Gbps, NVIDIA was not shy about pumping up the GPU core clock speed: no boost functionality was mentioned, but the base clock is a respectable 1058 MHz. Even better, the card requires only a single 6-pin PCI-E power connector and has a TDP of 64W (less than half that of its higher-end GeForce brethren).
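
Those specifications translate into straightforward theoretical peaks. A quick sketch of the arithmetic (standard formulas applied to the published specs; these are our derived numbers, not NVIDIA's):

```cpp
// Theoretical peaks for the GTX 650, derived from the published specs.
#include <cstdio>

int main() {
    const double mem_gbps   = 5.0;    // GDDR5 data rate per pin
    const int    bus_bits   = 128;
    const int    cuda_cores = 384;
    const double core_ghz   = 1.058;

    // Memory bandwidth: data rate x bus width in bytes
    printf("Memory bandwidth: %.0f GB/s\n", mem_gbps * bus_bits / 8);   // 80 GB/s
    // Single-precision peak: cores x 2 ops per clock (FMA) x clock
    printf("Peak compute:     %.0f GFLOPS\n", cuda_cores * 2 * core_ghz); // ~813
    return 0;
}
```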

Specs Comparison

The following chart compares the specifications of the new GeForce GTX 650 through the GTX 670 graphics cards.

GTX 650 and GTX 660 Specifications.jpg

Click on the above chart for a larger image.

Gaming Potential?

The really important question is how well it handles games, and NVIDIA showed off several slides with claimed performance numbers. Taking these numbers with a grain of salt, as they come from the same company that built the hardware, the GTX 650 looks like a capable GPU for the price. The company compared it to both its own GTS 450 (Fermi) and AMD’s Radeon HD 7750. Naturally, the GTX 650 was shown in a good light in both comparisons, but nothing looked egregious.

NVIDIA is claiming an 8X performance increase versus the old 9500 GT, and an approximately 20% speed increase versus the GTS 450. Improvements to the hardware itself have also allowed NVIDIA to improve performance while requiring less power; the company claims the GTX 650 uses up to half the power of its Fermi predecessor.

20percent better than fermi.jpg

The comparison between the GTX 650 and the AMD Radeon HD 7750 is harder to gauge, though the 7750 is priced competitively around the GTX 650’s $109 MSRP, so it will be interesting to see how that shakes out. NVIDIA is claiming anywhere from 1.08 to 1.34 times the performance of the 7750 in a number of games, shown in the chart below.

GTX 650 vs HD 7750.jpg

If you have been eyeing a 7750, the GTX 650 looks like it might be the better option, assuming reviewers are able to replicate NVIDIA’s results.

FPS GTX 650.png

Keep in mind, these are NVIDIA's numbers and not from our reviews.

Unfortunately, NVIDIA did not benchmark the GTS 450 against the GTX 650 in those games. Rather, it compared the card to the 9500 GT to show the upgrade potential for anyone still holding onto the older hardware (pushing the fact that you can run DirectX 11 at 1080p if you upgrade). Still, the results for the 650 are interesting by themselves. In MechWarrior Online, World of Warcraft, and Max Payne 3, the budget GPU managed at least 40 FPS at 1920x1080 in DirectX 11 mode. Nothing groundbreaking, for sure, but fairly respectable for the price. Assuming it can pull a minimum of 30 FPS in other recent games, this will be a good option for DIY builders that want to get started with PC gaming on a budget.

All in all, the NVIDIA GeForce GTX 650 looks to be a decent card and finally rounds out the Kepler lineup. NVIDIA can now offer every gamer a Kepler option instead of continuing to rely on older cards to answer AMD at lower price points. I’m interested to see how AMD answers this, and specifically whether gamers will see more price cuts on the AMD side.

GTX 650 Specs.jpg

If you have not already, I strongly recommend you give our previous Kepler GPU reviews a read through for a look at what NVIDIA’s latest architecture is all about.

PC Perspective Kepler-based GTX Graphics Card Reviews:

NVIDIA GTX 660 Ti May Be Pricier Than Gamers Have Been Hoping For

Subject: Graphics Cards | August 5, 2012 - 08:33 PM |
Tagged: nvidia, kepler, gtx 660ti, graphics cards, gaming

Update: US-based retailers are starting to list the GTX 660 Ti as well, and at least one card is listed for $299, so there may be some hope despite the $349.99 MSRP. See the $299 PNY GTX 660 Ti graphics card here.

The GTX 660 Ti is an NVIDIA Kepler-based graphics card that has seen several leaks and even a full review ahead of official release. In the leaked review, rumored specifications were confirmed, and the card was shown to be very close to the existing GTX 670 GPU. Sometimes it was merely a couple of frames behind the $400+ GPU.

gtx660ti,B-2-347726-3.jpg

On the podcast, Ryan, Josh, and Jeremy speculated that, should the GTX 660 Ti be priced closer to the $300 end of the rumored $300-400 range, it would be a very desirable gaming graphics card. Hardware-wise, the GTX 660 Ti is nearly identical to the GTX 670, only seeing a reduction in the memory bus from 256-bit to 192-bit. For $100 less, gamers would be getting extremely close to the performance of the much more expensive GTX 670 Kepler card.
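
To put that bus reduction in numbers: if the GTX 660 Ti keeps the GTX 670's reference 6 GT/s effective memory speed (an assumption on our part; final clocks are unconfirmed), the narrower bus gives up a quarter of the bandwidth:

```cpp
// Raw memory bandwidth at the GTX 670's reference data rate (6.008 GT/s),
// comparing the 670's 256-bit bus to the 660 Ti's rumored 192-bit bus.
#include <cstdio>

int main() {
    const double data_rate = 6.008;  // GT/s per pin, GTX 670 reference
    printf("256-bit bus: %.1f GB/s\n", data_rate * 256 / 8);  // ~192.3 GB/s
    printf("192-bit bus: %.1f GB/s\n", data_rate * 192 / 8);  // ~144.2 GB/s
    return 0;
}
```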

Unfortunately, it may not be the gaming card that people have been hoping for. According to Tom’s Hardware, a Swedish retailer has listed the GTX 660 Ti on its website for pre-orders at just under $400. At that price point, the GTX 660 Ti is much less desirable and will be hard to justify when the GTX 670 is only a bit more money.

Here’s hoping that the pre-order pricing is simply higher than the prices people will see once actual cards from NVIDIA and partners are officially released en masse. Do you think that there is still hope for the GTX 660 Ti as the gaming card of choice, or will you be looking elsewhere? 

GTX 660Ti Rumors Confirmed By Leaked Review

Subject: Graphics Cards | July 31, 2012 - 03:03 PM |
Tagged: nvidia, kepler, gtx 670, gtx 660ti, graphics cards

Last week, additional information leaked about the upcoming Kepler-based NVIDIA GTX 660 Ti graphics card. Those rumors suggested that its GPU would be very similar to the one found in the existing GTX 670 (which we recently reviewed).

We speculated that the GTX 660 Ti could be an awesome card, assuming the price was right. While we still do not have any pricing information (the best guess from rumors is the $300 to $400 range), thanks to Tweaktown publishing its review ahead of the release date, we now know that the latest rumors were true.

nvidia-graphic-drivers.jpg

The GTX 660 Ti will feature 1344 CUDA cores and 2GB of GDDR5 memory on a 192-bit memory bus. This puts the GTX 660 Ti very close to the current GTX 670 in terms of potential performance, and according to the leaked benchmarks, that seems to be the case: the GTX 660 Ti is only a couple of frames behind the GTX 670 in Just Cause 2 and Dirt 3, for example. Considering this card is likely to use a bit less power and cost less, it is shaping up to be a rather desirable card. If it ends up on the low end of the $300-400 range (rumors suggest otherwise, however), I suspect many gamers are going to opt for this new Kepler card rather than the more expensive and only very slightly faster GTX 670.

What do you think about the GTX 660 Ti? Is it the card you were hoping for?

Source: Tweaktown

NVIDIA Preparing New Mid-range Kepler Graphics Cards

Subject: General Tech | July 10, 2012 - 07:20 PM |
Tagged: nvidia, kepler, gtx 660 Ti, gtx 660, gtx 650 Ti, gtx 640, graphics cards, gpu

We have seen and reviewed NVIDIA’s high-end Kepler graphics cards, but the company’s mid-range line has been even harder to find than the GTX 680 was a couple of months ago. That may be about to change, though, as recent rumors suggest that the company is preparing at least three mid-range graphics cards for public release.

geforce-gt-640-3qtr.png

The current GT 640. Expect the refresh to look very similar.

The cheapest rumored card is a refresh of the existing GeForce GT 640. The refresh is slated for an August 2012 release; it takes the existing GK107 GPU with 384 CUDA cores and pairs it with GDDR5 memory instead of the currently used DDR3. Videocardz predicts that the move to GDDR5 will bump the price up to a bit over $100.

The next card up will reportedly cost around $150 and will also be released in August. The GeForce GT 650 Ti will allegedly be based around the GK106 GPU with 960 CUDA cores enabled. It will likely be paired with up to 2GB of GDDR5 memory on a 192-bit memory interface. This card will likely be the high-end HTPC and/or entry-level gaming card on the NVIDIA side.

However, those serious about getting into gaming should probably spend a bit more on the GPU and get at least the GTX 660. This rumored card uses a GK106 GPU with 1152 CUDA cores enabled and an alleged 1.5GB of GDDR5 memory with a 192-bit interface. As far as pricing goes, it will be positioned between the GT 650 Ti and the GeForce GTX 670, somewhere in the $200-300 range.

Interestingly, if rumors turn out to be true, there may be yet another new graphics card to fill the performance (and price) void between the GTX 660 and GTX 670: the GeForce GTX 660 Ti. Allegedly, the GTX 660 Ti would be very close to the GTX 670 as far as specifications are concerned. Both cards are based on the GK104 Kepler GPU (which we recently reviewed) and would have 1344 CUDA cores enabled. Where the two differ in the predicted specifications is memory. While the GeForce GTX 670 has either 2GB or 4GB of GDDR5 memory on a 256-bit interface, the GTX 660 Ti would have 1.5GB or 3GB of GDDR5 memory on a 192-bit interface. This card is also predicted to be released in August alongside the above-mentioned NVIDIA GPUs. You can expect it to be priced in the $300 to $400 range, with an emphasis on the former for reference designs with 1.5GB of memory.

All these rumored cards should really help NVIDIA flesh out its Kepler lineup and take on AMD on all fronts. These cards (assuming the rumors hold true, of course) should also be much easier to find and get hold of, since they are probably using binned chips that could not be sold as a GTX 670 or GTX 680, both of which were difficult to find in stock at launch.

What do you think about these rumors? Do they sound plausible? Have you been holding off on Kepler until cheaper cards are released? Let us know in the comments below. You can find more information on the rumored graphics cards here.

Source: Videocardz

Gigabyte Releases Lower Clocked Revision of R7770 OC GPU

Subject: Graphics Cards | July 3, 2012 - 08:36 PM |
Tagged: radeon, hd 7770, graphics cards, gpu, clock speed, amd

Gigabyte has had an overclocked version of the Radeon HD 7770 graphics card for a couple of months now, but the company is already readying a second revision of the card. Curiously, the new revision maintains the same hardware but runs at slightly lower clock speeds. While the current revision (1.0) runs at 1100 MHz on the GPU core and 5000 MHz on the memory, the updated graphics card will run at 1050 MHz and 4500 MHz respectively.

5596_big.jpg

Beyond the lower clock speeds, the new revision of the GV-R7770 OC card maintains the same PCB, chips, and cooler design. That hardware includes a 28nm GPU, 1GB of GDDR5 memory on a 128-bit interface, and a PCI-E 3.0 interface. Display outputs include a DVI port, a full-size HDMI port, and two Mini DisplayPorts. It also keeps the same custom Gigabyte heatsink and fan.

According to Videocardz, users will be able to identify which revision they are getting before handing over any money by looking at the box. Alternatively, users can check the sticker on the underside of the card, just above the PCI-E connector. For a new revision, especially with AMD now shipping better-binned chips, it is a bit confusing that the card is being released with lower clock speeds than its predecessor. It may be that the higher factory overclock was not stable on enough cards and Gigabyte was having to deal with too many returns, though that is only a guess.

All the same, if you are shopping for a 7770 graphics card and have been considering the Gigabyte model, be sure to double check which revision you are getting.

Source: Videocardz

NVIDIA Likely Not Recalling All 600 Series Kepler GPUs

Subject: Graphics Cards | May 21, 2012 - 06:10 PM |
Tagged: recall, nvidia, kepler, graphics cards, gpu

Editor's Note: We are getting a lot of flak for posting this story today, telling us that we are "giving the site credibility", referring to the Pnosker site that first started the recall rumor, simply by posting about it on our site. Even though our post by Tim states to "take the leak with a grain of salt" and that "these GPUs go through rigorous testing and certification", some people think we were in the wrong to post about this.

So let me be perfectly clear - the recall referenced in the story below is almost assuredly complete and utter BULLSHIT. 

According to Pnosker, NVIDIA is allegedly looking into recalling all Kepler-based 600-series graphics cards. Such a recall would affect users that have purchased GTX 670, GTX 680, and GTX 690 GPUs. The website states that its source has indicated the graphics cards may be recalled because the chips suffer from performance degradation after prolonged periods of heavy usage.

Galaxy GTX 670 Kepler GPU.JPG

While the source has reportedly been correct in the past, the author cautions readers to take the leak with a grain of salt. Other websites that have picked up on this have mentioned that these GPUs go through rigorous testing and certification processes before getting to market, so this rumor does not have much ground to stand on. Another reason to take this report with a shaker-full of salt is that if there were such a defect in the Kepler GPU, the chips would be more likely to fail completely than to keep working with degraded performance.

This rumor is likely just that: a rumor. Why it was started is unknown, but your Kepler graphics card purchases are probably safe from performance degradation, though they may not get as high a boost clock as other users’ cards.

UPDATE @ 7:30pm ET: To quote from NVIDIA PR - "There is no truth to this rumor."

Source: Pnosker

MSI Announces 7950 Twin Frozr III Graphics Cards

Subject: Graphics Cards | January 31, 2012 - 05:48 PM |
Tagged: msi, HD7950, hd 7950, graphics cards, gpu, amd

MSI today officially announced its new Radeon HD 7950 graphics cards with Twin Frozr III coolers. Specifically, the new cards are part of the "R7950 Twin Frozr 3GD5/OC" series. The new Twin Frozr III cooler features a nickel-plated block, two 8mm Superpipes (heatpipes), and dual 80mm propeller-blade fans that, according to MSI, deliver up to 10 degrees Celsius lower GPU temperatures versus the reference cooler. Further, the dark gray Twin Frozr III cooler reduces noise by 13.7dB by using two slower-spinning fans in place of the single reference fan spinning at twice the speed or more. This extra thermal headroom has allowed MSI to claim "core and memory voltage potential providing up to 37.5% overclockability." Just as with its motherboards, MSI advertises the new graphics cards as being built with Hi-c CAP, Super Ferrite Choke, and solid capacitor components that pass MIL-STD-810G testing. Based on AMD's 28nm Radeon HD 7950 reference design, the card supports the PCI Express 3.0 interface. The card features one DVI port, one HDMI port, and two Mini DisplayPort video outputs.

Twin Frozr 7950.jpg

Further specifications include 3GB of GDDR5 memory on a 384-bit bus, a core clock speed of 880 MHz, and a memory clock of 5,200 MHz (effective; 1,300 MHz base). The card itself measures 261mm x 111mm x 38mm (just under 10.3 inches long), which means it should fit comfortably inside most mid-tower (or larger) cases. While the 80 MHz increase in GPU clock speed over the reference design is not saying much, the cards themselves should have plenty of overclocking headroom beyond what MSI does at the factory. In our review of the AMD Radeon HD 7950 with the reference cooler we achieved a nice 1050 MHz clock speed, and the "Supa-pipe" (as Josh likes to say) powered Twin Frozr III 7950 cards should be able to go even further beyond that, specific GPU permitting of course.
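
The "effective versus base" memory figure is worth unpacking: GDDR5 transfers four bits per pin per base clock, which is where the 4x multiplier comes from. A quick sketch of the resulting bandwidth (our arithmetic, not an MSI figure):

```cpp
// GDDR5's quad data rate: 1,300 MHz base -> 5,200 MT/s effective,
// then bandwidth follows from the bus width.
#include <cstdio>

int main() {
    const double base_mhz  = 1300.0;
    const int    bus_bits  = 384;
    const double effective = base_mhz * 4;  // 4 transfers per base clock

    printf("Effective data rate: %.0f MT/s\n", effective);               // 5200
    printf("Peak bandwidth: %.1f GB/s\n",
           effective * bus_bits / 8 / 1000);                             // 249.6
    return 0;
}
```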

20120130_3.jpg

In addition to the new Twin Frozr III-equipped cards, MSI is releasing a version of the Radeon HD 7950 with a reference cooler, as well as a Radeon HD 7970 card with a reference cooler, to provide gamers with plenty of options. Unfortunately, there is no word (yet) on pricing or availability. The Twin Frozr III version of the 7950 sure looks a lot cooler, so it will be interesting to see if it actually keeps the GPU cooler (heh).

Source: MSI