PowerColor Teasing Dual GPU Graphics Card With Massive Air Cooler

Subject: Graphics Cards | May 6, 2014 - 03:36 AM |
Tagged: r9 295x2, powercolor, hawaii, dual gpu, devil 13

PowerColor has been teasing a new graphics card on its Facebook page. The photos show a macro shot of the Devil 13 logo along with captions hinting at the new card being a dual GPU monster, including one that refers to the upcoming Devil 13 as a "dual beast."

PowerColor's previous Devil 13 branded graphics card was the Radeon HD 7990 Devil 13, which contained two HD 7970 "Tahiti" GPUs on one PCB. Notably, AMD recently launched a new dual GPU reference design, the R9 295X2, built around two R9 290X "Hawaii" GPUs. It is still rumor and speculation at this point, but the timing and the leaked photos seem to point squarely at the upcoming Devil 13 card being the first air cooled custom R9 295X2!

PowerColor Dual Beast R9 295X2 Dual Hawaii GPU Air Cooled.jpg

Adding credence to the rumors, leaked photos have appeared online with a PCB backplate that appears to match the backplate shown in the official teaser photo. The leaked photos show an absolutely beastly triple slot graphics card that places two GPUs in CrossFire on a single custom PCB, powered by four 8-pin PCI-E power connectors and cooled by a gargantuan HSF made up of an aluminum fin stack and multiple large diameter copper heatpipes, along with three fans. The cooler and PCB are reinforced with brackets and a metal backplate to help keep the air cooler in place and the PCB from bending.

PowerColor Devil 13 Dual GPU Air Cooled Graphics Card 4 8-pin PCIe.jpg

If the rumors hold true, PowerColor will be unveiling the first air cooled dual GPU R9 295X2 graphics card which is an impressive feat of engineering! Using four 8-pin PCI-E power connectors definitely suggests that aftermarket overclocking is encouraged and supported even if PowerColor does not end up factory overclocking their dual GPU beast.

For reference, the stock AMD R9 295X2 features two full Hawaii GPUs with 5,632 stream processors clocked at up to 1018 MHz interfaced with 8GB of total GDDR5 memory over a 512-bit bus (each GPU has 4GB of memory and a 512-bit bus). AMD rates this configuration at 11.5 TFLOPS of single precision performance. The reference R9 295X2 has a 500W TDP and uses two 8-pin PCI-E power connectors.
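That 11.5 TFLOPS figure falls right out of the shader count and clock speed. Here is a quick back-of-the-envelope check in C, assuming the usual convention of two floating point operations (one fused multiply-add) per stream processor per clock:

```c
#include <stdio.h>

int main(void) {
    const double shaders         = 5632.0;  /* total stream processors across both Hawaii GPUs */
    const double clock_ghz       = 1.018;   /* "up to" boost clock in GHz */
    const double flops_per_clock = 2.0;     /* one fused multiply-add counted as two FLOPs */

    double tflops = shaders * clock_ghz * flops_per_clock / 1000.0;
    printf("Peak single-precision throughput: %.2f TFLOPS\n", tflops);  /* ~11.47, i.e. AMD's 11.5 TFLOPS */
    return 0;
}
```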

Please excuse me while I wipe the drool off of my keyboard...

Stay tuned to PC Perspective for more details on the mysterious dual GPU Devil 13 from PowerColor!

In the meantime, check out our full review of the R9 295X2 (and the Hawaii architecture) and what happens when you put two R9 295X2s in Quad CrossFire into a single system for 4K gaming goodness!

Source: PowerColor

GeForce Experience 2.0.1 Update Released

Subject: General Tech, Graphics Cards | May 5, 2014 - 05:03 PM |
Tagged: nvidia, geforce experience, shield

NVIDIA has released version 2.0.1 of GeForce Experience. This update does not bring many new features, which is why it is only a third-level increment to the version number, but it is probably worthwhile to download regardless. Its headlining feature is security enhancements to OpenSSL as used by remote GameStream on SHIELD. The update also claims to improve streaming quality and reduce audio latency.

nvidia-shield-gamestream-02.jpg

While they do not seem to elaborate, I assume this is meant to fix Heartbleed, a vulnerability that allows an attacker to retrieve small snapshots of active memory. If that is the case, it is unclear whether the SHIELD, the host PC during a game session, or both endpoints are affected.

The new GeForce Experience is available at the NVIDIA website. If it is running, it will also ask you to update it, of course.

Source: NVIDIA

Asus Launches GTX TITAN Z Dual GK110 Graphics Card

Subject: Graphics Cards | May 2, 2014 - 01:29 AM |
Tagged: titan z, nvidia, gpgpu, gk110, dual gpu, asus

NVIDIA unveiled the GeForce GTX TITAN Z at the GPU Technology Conference last month, and the cards will be for sale soon from various partners. ASUS will be one of the first AIB partners to offer a reference TITAN Z.

The ASUS GTX TITAN Z pairs two full GK110-based GPUs with 12GB of GDDR5 memory. The graphics card houses a total of 5,760 CUDA cores, 480 texture mapping units (TMUs), and 96 ROPs. Each GK110 GPU interfaces with 6GB of GDDR5 memory via a 384-bit bus. ASUS is using reference clockspeeds with this card, which means 705 MHz base and up to 876 MHz GPU Boost for the GPUs and 7.0 GHz for the memory.
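Those reference numbers make it easy to sanity check the card's theoretical throughput. The sketch below assumes the usual two FLOPs per CUDA core per clock and treats the 7.0 GHz memory figure as the effective GDDR5 data rate:

```c
#include <stdio.h>

int main(void) {
    const double cuda_cores = 5760.0;  /* total across both GK110 GPUs */
    const double base_ghz   = 0.705;   /* reference base clock */
    const double boost_ghz  = 0.876;   /* reference GPU Boost clock */
    const double bus_bits   = 384.0;   /* memory bus width per GPU */
    const double mem_gtps   = 7.0;     /* effective GDDR5 data rate in GT/s */

    printf("Peak SP at base clock:    %.1f TFLOPS\n", cuda_cores * base_ghz * 2.0 / 1000.0);  /* ~8.1  */
    printf("Peak SP at boost clock:   %.1f TFLOPS\n", cuda_cores * boost_ghz * 2.0 / 1000.0); /* ~10.1 */
    printf("Memory bandwidth per GPU: %.0f GB/s\n", bus_bits / 8.0 * mem_gtps);               /* 336   */
    return 0;
}
```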

ASUS GTX TITAN Z Dual GPU Graphics Card.jpg

For comparison, the dual-GPU TITAN Z is effectively two GTX TITAN Black cards on a single PCB. However, the TITAN Black runs at 889 MHz base and up to 980 MHz GPU Boost. A hybrid water cooling solution may have allowed NVIDIA to maintain the clockspeed advantage, but doing so would compromise the only advantage the TITAN Z has over using two (much cheaper) TITAN Blacks in a workstation or server: card density. A small hit in clockspeed will be a manageable sacrifice for the target market, I believe.

The ASUS GTX TITAN Z has a 375W TDP and is powered by two 8-pin PCI-E power connectors. The new flagship dual GPU NVIDIA card has an MSRP of $3,000 and should be available in early May.

Source: Asus

AMD Mantle Private Beta Announced

Subject: General Tech, Graphics Cards | May 1, 2014 - 08:00 AM |
Tagged: Mantle, amd

As our readers are well aware, Mantle is available for use with a few games. Its compatibility began with the beta Catalyst 14.1 driver and an update for Battlefield 4. AMD was quite upfront about the technology, even granting a brief interview with Guennadi Riguer, Chief Architect of the API, to fill in a few of the gaps left from their various keynote speeches.

AMD_Mantle_Logo.png

What is under lock and key, however, is the actual software development kit (SDK). AMD claimed that it was too immature for the public. It was developed in partnership with DICE, Oxide Games, and other established developers to fine-tune its shape, all the while making it more robust. That's fine. They have a development plan. There is nothing wrong with that. Today, while the SDK is still not public and remains under non-disclosure agreement, AMD is accepting applications from developers who are requesting to enter the program.

If you want to develop a Mantle application or game, follow the instructions at their website for AMD to consider you. They consider it stable, performant, and functional enough for "a broader audience in the developer community".

AMD cites 40 developers already registered, up from seven (DICE, Crytek, Oxide, etc.).

If you are not a developer, then this news really does not mean too much to you -- except that progress is being made.

Source: AMD

Post Tax Day Celebration! Win an EVGA Hadron Air and GeForce GTX 750!

Subject: Editorial, General Tech, Graphics Cards | April 30, 2014 - 10:05 AM |
Tagged: hadron air, hadron, gtx 750, giveaway, evga, contest

Congrats to our winner: Pierce H.! Check back soon for more contests and giveaways at PC Perspective!!

In these good old United States of America, April 15th is a trying day. Circled on most of our calendars is the final deadline for paying up your bounty to Uncle Sam so we can continue to have things like freeway systems and universal Internet access. 

But EVGA is here for us! Courtesy of our long time sponsor you can win a post-Tax Day prize pack that includes both an EVGA Hadron Air mini-ITX chassis (reviewed by us here) as well as an EVGA GeForce GTX 750 graphics card. 

evgacontestapril.jpg

Nothing makes paying taxes better than free stuff that falls under the gift limit...

With these components under your belt you are well down the road to PC gaming bliss, upgrading your existing PC or starting a new one in a form factor you might not have otherwise imagined. 

Competing for these prizes is simple and open to anyone in the world, even if you don't suffer the same April 15th fear that we do. (I'm sure you have your own worries...)

  1. Fill out the form at the bottom of this post to give us your name and email address, in addition to the reasons you love April 15th! (Seriously, we need some good ideas for next year to keep our heads up!) Note that leaving a standard comment on this post does not count as an entry, though you are welcome to do that too.
     
  2. Stop by our Facebook page and give us a LIKE (I hate saying that), head over to our Twitter page and follow @pcper, and heck, why not check out our many videos and subscribe to our YouTube channel?
     
  3. Why not do the same for EVGA's Facebook and Twitter accounts?
     
  4. Wait patiently for April 30th when we will draw and update this news post with the winner's name and tax documentation! (Okay, probably not that last part.)

A huge thanks goes out to our friends and supporters at EVGA for providing us with the hardware to hand out to you all. If it weren't for sponsors like this, PC Perspective just couldn't happen, so be sure to give them some thanks when you see them around the In-tar-webs!!

Good luck!

Source: EVGA

NVIDIA Announces Watch_Dogs Bundle with GeForce GPUs

Subject: Graphics Cards | April 29, 2014 - 10:22 AM |
Tagged: nvidia, watch_dogs, watch dogs, bundle, geforce

A bit of a surprise email found its way to my inbox today that announced NVIDIA's partnership with Ubisoft to include copies of the upcoming Watch_Dogs game with GeForce GTX graphics cards. 

watchdogs.jpg

Gamers that purchase a GeForce GTX 780 Ti, GTX 780, GTX 770 or GTX 760 from select retailers will qualify for a free copy of the game. You can find details on this bundle, and the GPUs available to take advantage of it, at Amazon.com!

The press release also confirms inclusion of NVIDIA exclusive features like TXAA and HBAO+ in the game itself, which is interesting. From what I am hearing, Watch_Dogs is going to be a beast of a game on GPU hardware and we are looking forward to using it as a test platform going forward.

Full press release is included below.

OWN THE TECH AND CONTROL THE CITY WITH NVIDIA® AND UBISOFT®

Select GeForce GTX GPUs Now Include the Hottest Game of the Year: Watch_Dogs™

Santa Clara, CA  April 29, 2014 — Destructoid calls it one of the “most wanted games of 2014.” CNET said it was “one of the most anticipated games in recent memory.” MTV said it’s one of the “Can’t-Miss Video Games of 2014.” This, all before anyone out there has even played it.

So, starting today(1), gamers who purchase select NVIDIA® GeForce® GTX® 780 Ti, 780, 770 and 760 desktop GPUs can get their chance to play Watch_Dogs™, the new PC game taking the world by storm and latest masterpiece from Ubisoft®.

Described as a “polished, refined and truly next generation experience,” in Watch_Dogs you play as Aiden Pearce, a brilliant hacker whose criminal past led to a violent family tragedy. While seeking justice, you will monitor and hack those around you, access omnipresent security cameras, download personal information to locate a target, control traffic lights and public transportation to stop the enemy and more.

Featuring NVIDIA TXAA and HBAO+ technology for an interactive, immersive experience, it’s clear that gamers can’t wait to play Watch_Dogs, especially considering the effusive praise that the official trailer received. Launched mere weeks ago, the trailer has already been viewed more than a combined 650,000 times. For gamers, Watch_Dogs seamlessly blends a mixture of single-player and multiplayer action in a way never before seen, and Ubisoft has gone one step further in creating a unique ctOS mobile companion app for users of smartphone and tablet devices allowing for even greater access to the fun. If you haven’t checked out the trailer, please check it out here: https://www.youtube.com/watch?v=3eHCJ8pWdf0.

The GeForce GTX and Watch_Dogs bundle is available starting today from leading e-tailers including Newegg, Amazon.com, TigerDirect, NCIX; add-in card vendors such as EVGA; and nationwide system builders including AVADirect, CyberPower, Digital Storm, Falcon Northwest, iBUYPOWER, Maingear, Origin PC, Puget Systems, V3 Gaming PC and Velocity Micro. For a full list of participating partners, please visit: www.GeForce.com/GetWatchDogs.

Source: NVIDIA

Another GPU Driver Showdown: AMD vs NVIDIA in Linux

Subject: General Tech, Graphics Cards | April 27, 2014 - 04:22 AM |
Tagged: nvidia, linux, amd

GPU drivers have been a hot and sensitive topic at the site, especially recently, probably spurred on by the announcements of Mantle and DirectX 12. These two announcements admit and illuminate (like a Christmas tree) the limitations that APIs place on gaming performance. Both AMD and NVIDIA have had their recent successes and failures on their respective fronts. This post will not deal with that, though. This is a straight round-up of new GPUs running the latest drivers... in Linux.

7-TuxGpu.png

Again, results are mixed and a bit up for interpretation.

In all, NVIDIA tends to have better performance with its 700-series parts than equivalently-priced R7 or R9 products from AMD, especially in less demanding Source Engine titles such as Team Fortress 2. Sure, even the R7 260X was almost at 120 FPS, but the R9 290 was neck-and-neck with the GeForce GTX 760. The GeForce GTX 770, about $50 cheaper than the R9 290, had a healthy 10% lead over it.

In Unigine Heaven, however, the AMD R9 290 passed the NVIDIA GTX 770 by a small margin, coming right in line with its aforementioned $50-higher price tag. In that situation, where performance became non-trivial, AMD caught up (but did not beat). Also, third-party driver support is more embraced by AMD than NVIDIA. On the other hand, NVIDIA's proprietary drivers are demonstrably better, even if you would argue that the specific cases are trivial because of overkill.

And then there's Unvanquished, where AMD's R9 290 did not achieve triple-digit FPS scores despite the $250 GTX 760 getting 110 FPS.

Update: As pointed out in the comments, some games perform significantly better on the $130 R7 260X than the $175 GTX 750 Ti (HL2: Lost Coast, TF2, OpenArena, Unigine Sanctuary). Some other games are the opposite, with the 750 Ti holding a sizable lead over the R7 260X (Unigine Heaven and Unvanquished). Again, Linux performance is a grab bag between vendors.

There are a lot of things to consider, especially if you are getting into Linux gaming. I expect that it will be a hot topic soon, as it picks up... Steam.

Source: Phoronix

AMD Catalyst 14.4 Release Candidate is now available

Subject: Graphics Cards | April 22, 2014 - 01:06 PM |
Tagged: catalyst 14.4, catalyst, amd

The latest available AMD Catalyst Windows and Linux drivers can be found here:
AMD Catalyst Windows: http://support.amd.com/en-us/kb-articles/Pages/latest-catalyst-windows-beta.aspx
AMD Catalyst Linux: http://support.amd.com/en-us/kb-articles/Pages/latest-linux-beta-driver.aspx

image001.jpg

Highlights of AMD Catalyst™ 14.4 Windows Driver

  • Support for the AMD Radeon R9 295X2

CrossFire fixes and enhancements:

  • Crysis 3 – frame pacing improvements
  • Far Cry 3 – 3 and 4 GPU performance improvements at high quality and high resolution settings
  • Anno 2070 – Improved CrossFire scaling up to 34%
  • Titanfall – Resolved in game flickering with CrossFire enabled
  • Metro Last Light – Improved CrossFire scaling up to 10%
  • Eyefinity 3x1 (with three 4K panels) no longer cuts off portions of the application
  • Stuttering has been improved in certain applications when selecting mid-Eyefinity resolutions with V-sync Enabled

Full support for OpenGL 4.4
Mantle beta driver improvements:

  • Battlefield 4: Performance slowdown is no longer seen when performing a task switch/Alt-tab
  • Battlefield 4: Fuzzy images when playing in rotated SLS resolution with an A10 Kaveri system

Highlights of AMD Catalyst™ 14.1 Linux Driver

  • Support for the AMD Radeon R9 295X2
  • Ubuntu 12.04.4 support
  • Full support for OpenGL 4.4

Resolved Issue highlights:

  • Corruption and system hang observed while running Sanctuary BM with Tear Free Desktop enabled
  • Memory leak about hardware context EGL create context error for glesx
  • GPU hang in CrossFire Mode [Piglit]
  • Test "spec/arb_vertex_array_object" failed [Piglit]
  • Test "glx/GLX_EXT_import_context/free context" failed [Piglit]
  • Test "spec/ARB_seamless_cube_map" failed Piglit]
  • Test "texture swizzle with border color" failed
  • Glxtest failures observed in log file
  • Blank screen observed while running Steam games with Big Picture
  • 4ms delay observed in the glxSwapBuffers when vsync is enabled
  • RBDoom3BFG the game auto quit when use the security camera terminal
  • ETQW segmentation fault

Source: AMD

Nope, Never Settling... Forever. More Bundles.

Subject: General Tech, Graphics Cards | April 21, 2014 - 01:55 PM |
Tagged: radeon, never settle forever, never settle, amd

AMD has been taking PC gaming very seriously, especially over the last couple of years. While they have a dominant presence in the console space, with only IBM in opposition, I believe that direct licensing revenue was not their main goal, but rather that they hope to see the benefits carry over to the PC and maybe mobile spaces, eventually. In the PC space, Never Settle launched as a very successful marketing campaign. While it had a stutter with the launch of the R9 (and R7) product lines, it is back and is still called "Never Settle Forever".

AMD-Never-Settle-Forever-2014-01.jpg

Keeping with Forever's alteration to the Never Settle formula, the type of card that you purchase yields a Gold, Silver, or Bronze reward. Gold (the R9 280 and R9 290 series, and the R9 295X2) gets three free games in the Gold tier, Silver (R9 270 and R7 260 series) gets two in the Silver tier, and Bronze (R7 250 and R7 240 series) gets one free game in the Bronze tier. By and large, the tiers are the same as last time plus a few old games and one upcoming Square Enix release: Murdered: Soul Suspect. They have also made deals with certain independent developers, where two indie titles bundled together count as one choice.

The complete breakdown of games is as follows:

 
Game | Gold (Choose 3) | Silver (Choose 2) | Bronze (Choose 1)
Murdered: Soul Suspect (June 3, 2014) | Yes | Yes | No
Thief | Yes | Yes | No
Tomb Raider | Yes | Yes | No
Hitman: Absolution | Yes | Yes | No
Sleeping Dogs | Yes | Yes | No
Dungeon Siege III | Yes | Yes | Yes
Dirt 3 | Yes | Yes | Yes
Alan Wake | Yes | Yes | Yes
Darksiders | Yes | Yes | Yes
Darksiders II | Yes | Yes | Yes
Company of Heroes 2 | Yes | Yes | Yes
Total War: Shogun 2 | Yes | Yes | Yes
Titan Quest (Gold Edition) | Yes | Yes | Yes
Supreme Commander (Gold Edition) | Yes | Yes | Yes
Deus Ex: Human Revolution | Yes | Yes | No
Payday 2 | Yes | Yes | No
Just Cause 2 | Yes | Yes | Yes
Banner Saga + Mutant Blobs Attack (indie combo) | Yes | Yes | Yes
Guacamelee + DYAD (indie combo) | Yes | Yes | Yes
Mutant Blobs Attack + DYAD (indie combo) | Yes | Yes | Yes
Banner Saga + DYAD (indie combo) | Yes | Yes | Yes
Mutant Blobs Attack + Guacamelee (indie combo) | Yes | Yes | Yes

Oddly enough, there does not seem to be a Banner Saga + Guacamelee combo...

... the only impossible combination.

AMD has also announced that Never Settle will continue for more "additions" in 2014. Which ones? Who knows. It is clear that they have a great working relationship with Square Enix Europe, having included basically their last six major titles in Never Settle and kept them there, but there is not really anything from them on the horizon (at least, not announced). AMD does sound confident in having other deals lined up this year, however.

amd-never-settle-forever-2014-02.jpg

Never Settle Forever graphics cards are available now "at participating retailers". Bundle codes can be redeemed any time between now and August 31st.

There is some regional variance in game availability, however. Read up before you purchase (especially if you live in Japan). You should be fine if you live in North America, Europe, Middle East, Africa, New Zealand, Australia, and Latin America, though, at least where AMD products are available. Still, it is a good idea to check.

Source: AMD

An overclocked flagship GPU duel

Subject: Graphics Cards | April 17, 2014 - 04:10 PM |
Tagged: amd, nvidia, gigabyte, asus, R9 290X, GeForce GTX 780 Ti, factory overclocked

In the green trunks is the ASUS GTX 780 Ti DirectCU II OC, which [H]ard|OCP overclocked to the point that they saw in-game clocks of 1211MHz on the GPU and 7.2GHz on the memory.  In the red trunks we find Gigabyte's R9 290X 4GB OC, weighing in at 1115MHz and 5.08GHz for the GPU and memory respectively.  Both cards have been pushed beyond the factory overclock that they came with and will fight head to head in such events as Battling the Field, Raiding the Tomb and counting to three twice, once in a Crysis and again in a Far Cry from safety.  Who will triumph?  Will the battle be one sided or will the contenders trade the top spot depending on the challenge?  Get the full coverage at [H]ard|OCP!

1397411267OPl8cM2MpM_2_8_l.jpg

"Today we look at the GIGABYTE R9 290X 4GB OC and ASUS GeForce GTX 780 Ti DirectCU II OC video cards. Each of these video cards features a custom cooling system, and a factory overclock. We will push the overclock farther and put these two video cards head-to-head for a high-end performance comparison."

Here are some more Graphics Card articles from around the web:

Graphics Cards

Source: [H]ard|OCP

Win a Galaxy GeForce GTX 750 Ti GC or GeForce GTX 750 GC!

Subject: General Tech, Graphics Cards | April 16, 2014 - 11:39 AM |
Tagged: video, giveaway, galaxy, contest

UPDATE: Our winners have been selected and notified! Thanks to everyone for participating and stay tuned to pcper.com as we'll have more contests starting VERY SOON!!!

Our sponsors are the best, they really are. Case in point: Galaxy would like us to give away a pair of graphics cards to our fans. On the block for the contest are a Galaxy GTX 750 Ti GC and a Galaxy GTX 750 GC, both based on the latest-generation Maxwell GPU architecture from NVIDIA.

I posted a GTX 750 Ti Roundup story that looked at the Galaxy GTX 750 Ti GC option and it impressed in both stock performance and in the amount of overclocking headroom provided by the custom cooler.

IMG_9862.JPG

How can you win these awesome prizes? Head over to our YouTube channel to find the video, or just watch it below! You need to be a subscriber to our YouTube channel and leave a comment on the video itself over on YouTube.

Anyone, anywhere in the world, can win. We'll pick a winner on April 16th - good luck!

Source: YouTube

NVIDIA GeForce Driver 337.50 Early Results are Impressive

Subject: Graphics Cards | April 11, 2014 - 03:30 PM |
Tagged: nvidia, geforce, dx11, driver, 337.50

UPDATE: We have put together a much more comprehensive story based on the NVIDIA 337.50 driver that includes more cards and more games while also disputing the Total War: Rome II results seen here. Be sure to read it!!

When I spoke with NVIDIA after the announcement of DirectX 12 at GDC this past March, a lot of the discussion centered around a pending driver release that promised impressive performance advances with current DX11 hardware and DX11 games. 

What NVIDIA did want to focus on with us was the significant improvements that have been made to the efficiency and performance of DirectX 11. When NVIDIA is questioned as to why they didn't create their own Mantle-like API if Microsoft was dragging its feet, they point to the vast improvements that are possible, and have been made, with existing APIs like DX11 and OpenGL. The idea is that rather than spend resources on creating a completely new API that needs to be integrated in a totally unique engine port (see Frostbite, CryEngine, etc.), NVIDIA has instead improved the performance, scaling, and predictability of DirectX 11.

09.jpg

NVIDIA claims that these fixes are not game specific and will improve performance and efficiency for a lot of GeForce users. Even if that is the case, we will only really see these improvements surface in titles that have addressable CPU limits or on very low-end hardware, similar to how Mantle works today.

Lofty goals to be sure. This driver was released last week and I immediately wanted to test and verify many of these claims. However, a certain other graphics project kept me occupied most of the week and then a short jaunt to Dallas kept me from the task until yesterday. 

To be clear, I am planning to look at several more games and card configurations next week, but I thought it was worth sharing our first set of results. The test bed in use is the same as our standard GPU reviews.

Test System Setup

  • CPU: Intel Core i7-3960X Sandy Bridge-E
  • Motherboard: ASUS P9X79 Deluxe
  • Memory: Corsair Dominator DDR3-1600 16GB
  • Hard Drive: OCZ Agility 4 256GB SSD
  • Sound Card: On-board
  • Graphics Cards: NVIDIA GeForce GTX 780 Ti 3GB, NVIDIA GeForce GTX 770 2GB
  • Graphics Drivers: NVIDIA 335.23 WHQL, 337.50 Beta
  • Power Supply: Corsair AX1200i
  • Operating System: Windows 8 Pro x64

The most interesting claims from NVIDIA were spikes as high as 70%+ in Total War: Rome II, so I decided to start there. 

First up, let's take a look at the GTX 780 Ti SLI results, the flagship gaming card from NVIDIA.

TWRome2_2560x1440_OFPS.png

TWRome2_2560x1440_PER.png

TWRome2_2560x1440_PLOT.png

With this title running at the Extreme preset, the GTX 780 Ti SLI setup jumps from an average frame rate of 59 FPS to 88 FPS, an increase of 48%! Frame rate variance does increase a bit with the faster average frame rate, but it stays within the limits of smoothness - just barely.

Next up, the GeForce GTX 770 SLI results.

TWRome2_2560x1440_OFPS.png

TWRome2_2560x1440_PER.png

TWRome2_2560x1440_PLOT.png

Results here are even more impressive as the pair of GeForce GTX 770 cards running in SLI jump from 29.5 average FPS to 51 FPS, an increase of 72%!! Even better, this occurs without any kind of frame rate variance increase, and in fact the blue line of the 337.50 driver is actually performing better in that respect.
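For anyone wondering where the percentage figures come from, they are simple relative gains between the two drivers. The quoted numbers are computed from the unrounded averages, so recomputing from the rounded frame rates above lands close to, but not exactly on, 48% and 72%:

```c
#include <stdio.h>

/* Relative frame rate gain between two drivers, in percent. */
static double gain_pct(double before_fps, double after_fps) {
    return (after_fps - before_fps) / before_fps * 100.0;
}

int main(void) {
    printf("GTX 780 Ti SLI: %.0f%% faster\n", gain_pct(59.0, 88.0));  /* ~49% from the rounded averages */
    printf("GTX 770 SLI:    %.0f%% faster\n", gain_pct(29.5, 51.0));  /* ~73% from the rounded averages */
    return 0;
}
```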

All of these tests were run with the latest patch on Total War: Rome II and I did specifically ask NVIDIA if there were any differences in the SLI profiles between these two drivers for this game. I was told absolutely not - this just happens to be the poster child example of changes NVIDIA has made with this DX11 efficiency push.

Of course, not all games are going to see performance improvements like this, or even improvements that are measurable at all. Just as we have seen with other driver enhancements over the years, different hardware configurations, image quality settings and even scenes used to test each game will shift the deltas considerably. I can tell you already that based on some results I have (but am holding for my story next week) performance improvements in other games are ranging from <5% up to 35%+. While those aren't reaching the 72% level we saw in Total War: Rome II above, these kinds of experience changes with driver updates are impressive to see.

Even though we are likely looking at the "best case" for NVIDIA's 337.50 driver changes with the Rome II results here, clearly there is merit behind what the company is pushing. We'll have more results next week!

AMD Selects Asetek to Liquid Cool The World’s Fastest Graphics Card

Subject: Graphics Cards | April 8, 2014 - 06:51 PM |
Tagged: asetek, amd, r9 295x2

If you wondered where the custom cooler for the impressively powerful AMD Radeon R9 295X2 came from, then wonder no more.  The cooler was designed specifically for this card by Asetek, a veteran in cooling computer components with water.  You should keep that in mind the next time you think about picking up a third-party watercooler!

GPU_LC.jpg

Asetek, the world’s leading supplier of computer liquid cooling solutions, today announced that its liquid cooling technology will be used to cool AMD’s latest flagship graphics card. The new AMD Radeon R9 295X2 is the world’s fastest graphics card. Boasting 8 gigabytes of memory and over 11 teraflops of computing power, the AMD Radeon R9 295X2 graphics card is the undisputed graphics performance champion.

“Today’s high-end graphic cards pack insane amounts of power into a very small area and removing that heat is no small task. Utilizing our liquid cooling for graphics cards unlocks new opportunities for performance and low noise,” said André Sloth Eriksen, Founder and CEO of Asetek. “The fact that AMD has chosen Asetek liquid cooling for their reference cooling design is a testament to the reliability and performance of our technology.”

The AMD Radeon R9 295X2 is the first graphics card reference design ever to ship with an advanced closed-loop water cooling system. The Asetek-developed liquid cooling system on the AMD Radeon R9 295X2 graphics card delivers significant benefits for the performance-hungry enthusiast, hardcore gamer or Bitcoin miner. Users will appreciate the unobtrusive noise, low GPU and component temperatures, and blistering performance - right out of the box.

“As the most powerful graphics card offered to date, we knew we needed an outstanding custom cooling solution for the AMD Radeon R9 295X2 graphics card,” said Matt Skynner, corporate vice president and general manager, Graphics Business Unit, AMD. “Asetek’s liquid cooling embodies the efficient performance, reliability and reputation we were seeking in a partner. As GPUs become more powerful, the benefits of collaborating with Asetek and integrating our world-class technologies are clear.”

The AMD Radeon R9 295X2 graphics card utilizes Asetek’s proven, maintenance free, factory sealed liquid cooling technology to cool the two powerful GPUs. This liquid cooling design ensures continuous stability even under full load. The card is easy to install and fits in most computer cases on the market today. With more than 1.5 million units in the field today, Asetek liquid cooling provides worry free operation to gamers and PC enthusiasts alike.

Source: Asetek

Some NVIDIA R337.50 Driver Controversy

Subject: General Tech, Graphics Cards | April 8, 2014 - 06:44 PM |
Tagged: nvidia, geforce, drivers

NVIDIA's GeForce 337.50 Driver was said to address performance when running DirectX 11-based software. Now that it is out, multiple sources are claiming the vendor-supplied benchmarks are exaggerated or simply untrue.

nvidia-337-sli.png

ExtremeTech compiled benchmarks from Anandtech and BlackHoleTec.

Going alphabetically, Anandtech tested the R337.50 and R331.xx drivers with a GeForce GTX 780 Ti, finding a double-digit increase with BioShock: Infinite and Metro: Last Light and basically zero improvement for GRID 2, Rome II, Crysis: Warhead, Crysis 3, and Company of Heroes 2. Adding a second GTX 780 Ti into the mix helped matters, seeing a 76% increase in Rome II and about 9% in most of the other titles.

BlackHoleTec is next. Testing the mid-range but overclocked GeForce GTX 760 between the R337.50 and R335.23 drivers, they found slight improvements (1-3 FPS), except in Battlefield 4 and Skyrim (the latter is not DX11, to be fair), which saw a slight reduction in performance (about 1 FPS).

ExtremeTech, finally, published one benchmark but it did not compare between drivers. All it really shows is CPU scaling in AMD GPUs.

Unfortunately, I do not have any benchmarks of my own to present because I am not a GPU reviewer, nor do I have a GPU testbed. Ironically, the launch of the Radeon R9 295X2 video card might have lessened the number of benchmarks available for NVIDIA's driver, who knows?

If it is true, and R337.50 does basically nothing in a setup with one GPU, I am not exactly sure what NVIDIA was hoping to accomplish. Of course someone was going to test it and publish their results. The point of the driver update was apparently to show how having a close relationship with Microsoft can lead you to better PC gaming products now and in the future. That can really only be the story if you have something to show. Now, at least I expect, we will probably see more positive commentary about Mantle - at least when people are not talking about DirectX 12.

If you own a GeForce card, I would still install the new driver though, especially if you have an SLI configuration. Scaling to a second GPU does see measurable improvements with Release 337.50. Even for a single-card configuration, it certainly should not hurt anything.

Source: ExtremeTech

NAB 2014: Intel Iris Pro Support in Adobe Creative Cloud (CC)

Subject: General Tech, Graphics Cards, Processors, Shows and Expos | April 8, 2014 - 03:43 PM |
Tagged: Intel, NAB, NAB 14, iris pro, Adobe, premiere pro, Adobe CC

When Adobe began to GPU-accelerate their applications beyond OpenGL, they started with NVIDIA and its CUDA platform. Some time later, they began to integrate OpenCL support and bring AMD into the fold. At first, it was limited to a couple of Apple laptops but has since expanded to include several GPUs on both OSX and Windows. Since then, Adobe switched to a subscription-based release system and has published updates on a more rapid schedule. The next update of Adobe Premiere Pro CC will bring OpenCL support to Intel Iris Pro iGPUs.

Intel-IrisPro-Adobe-Masking.jpg

Of course, they specifically mentioned Adobe Premiere Pro CC, which suggests that support in Photoshop CC might be coming later. The press release does suggest that the update will affect both Mac and Windows versions of Adobe Premiere Pro CC, however, so at least platforms will not be divided. Well, that is, if you can find a Windows machine with Iris Pro graphics. They do exist...

A release date has not been announced for this software upgrade.

Source: Intel

MSI's R9 290X GAMING 4G sports a variety of overclocked settings and a Twin Frozr IV

Subject: Graphics Cards | April 7, 2014 - 07:14 PM |
Tagged: msi, R9 290X GAMING 4G, amd, hawaii, R9 290X, Twin Frozr IV, factory overclocked

The familiar Twin Frozr IV cooler has been added to the R9 290X GPU on MSI's latest AMD graphics card.  The R9 290X GAMING 4G sports 4GB of GDDR5 running at an even 5GHz and a GPU that has three separate top speeds depending on the profile you choose: 1040 MHz in OC Mode, 1030 MHz in Gaming Mode and 1000 MHz in Silent Mode.  [H]ard|OCP also tried manually overclocking and ended up with a peak of 1130MHz on the GPU and 5.4GHz for the GDDR5, not a bad bump over the factory overclock.  Check out the performance of the various speeds in their full review.
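For reference, those memory speeds translate into bandwidth in the usual way. A minimal sketch, assuming Hawaii's 512-bit memory bus (not restated in this particular post):

```c
#include <stdio.h>

/* Memory bandwidth in GB/s from bus width (bits) and effective data rate (GT/s). */
static double bandwidth_gbps(double bus_bits, double data_rate_gtps) {
    return bus_bits / 8.0 * data_rate_gtps;
}

int main(void) {
    const double hawaii_bus_bits = 512.0;  /* assumed: Hawaii's reference memory bus width */
    printf("Stock memory (5.0 GHz effective):       %.1f GB/s\n", bandwidth_gbps(hawaii_bus_bits, 5.0));  /* 320.0 */
    printf("Overclocked memory (5.4 GHz effective): %.1f GB/s\n", bandwidth_gbps(hawaii_bus_bits, 5.4));  /* 345.6 */
    return 0;
}
```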

1396151094av674gYKyI_1_6_l.jpg

"On our test bench today is MSI's newest high-end GAMING series graphics cards in the form of the MSI Radeon R9 290X GAMING 4G video card. We will strap it to our test bench and compare it to the MSI GeForce GTX 780 Ti GAMING 3G card out-of-box and overclocked to determine which card provides the best gameplay experience."

Here are some more Graphics Card articles from around the web:

Graphics Cards

Source: [H]ard|OCP

NVIDIA 337.50 Driver and GeForce Experience 2.0 Released

Subject: General Tech, Graphics Cards | April 7, 2014 - 09:01 AM |
Tagged: nvidia, geforce experience, directx 11

We knew that NVIDIA had an impending driver update providing DirectX 11 performance improvements. Launched today, 337.50 still claims significant performance increases over the previous 335.23 version. What was a surprise is GeForce Experience 2.0. This version allows both ShadowPlay and GameStream to operate on notebooks. It also allows ShadowPlay to record, and apparently stream to Twitch, your Windows desktop (but not on notebooks). It also enables Battery Boost, discussed previously.

nvidia-shadowplay-desktop.png

Personally, I find desktop streaming is the headlining feature, although I rarely use laptops (and much less for gaming). This is especially useful for OpenGL, games which run in windowed mode, and if you want to occasionally screencast without paying for Camtasia or tinkering with CamStudio. If I were to make a critique, and of course I will, I would like the option to select which monitor gets recorded. Its current behavior records the primary monitor as far as I can tell.

I should also mention that, in my testing, "shadow recording" is not supported when not recording a fullscreen game. I'm guessing that NVIDIA believes their users would prefer to not record their desktops until manually started and likewise stopped. It seems like it had to have been a conscious decision. It does limit its usefulness in OpenGL or windowed games, however.

This driver also introduces GameStream for devices out of your home discussed in the SHIELD update.

nvidia-337-sli.png

This slide shows SLI improvements, driver-to-driver, for the GTX 770 and the GTX 780 Ti.

As for the performance boost, NVIDIA claims up to 64% faster performance in configurations with one active GPU and up to 71% faster in SLI. It will obviously vary on a game-by-game and GPU-by-GPU basis. I do not have any benchmarks, besides a few examples provided by NVIDIA, to share. That said, it is a free driver. If you have a GeForce GPU, download it. It does complicate matters if you are deciding between AMD and NVIDIA, however.

Source: NVIDIA

GTC 2014: NVIDIA Launches Iray VCA Networked Rendering Appliance

Subject: General Tech, Graphics Cards | April 1, 2014 - 04:42 PM |
Tagged: VCA, nvidia, GTC 2014

NVIDIA launched a new visual computing appliance called the Iray VCA at the GPU Technology Conference last week. This new piece of enterprise hardware uses full GK110-based graphics cards to accelerate the company’s Iray renderer, which is used to create photo realistic models in various design programs.

NVIDIA IRAY VCA.jpg

The Iray VCA specifically is a licensed appliance (hardware + software) that combines NVIDIA hardware and software. On the hardware side of things, the Iray VCA is powered by eight graphics cards, dual processors (unspecified but likely Intel Xeons based on usage in last year’s GRID VCA), 256GB of system RAM, and a 2TB SSD. Networking hardware includes two 10GbE NICs, two 1GbE NICs, and one Infiniband connection. In total, the Iray VCA features 20 CPU cores and 23,040 CUDA cores. The GPUs used are based on the full GK110 die and are paired with 12GB of memory each.

Even better, it is a scalable solution such that companies can add additional Iray VCAs to the network. The appliances reportedly accelerate, transparently, the Iray renders done on designers’ workstations. NVIDIA reports that an Iray VCA is approximately 60 times faster than a Quadro K5000-powered workstation. Further, according to NVIDIA, 19 Iray VCAs working together amount to 1 PetaFLOP of compute performance, which is enough to render photo realistic simulations using 1 billion rays with up to hundreds of thousands of bounces.
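A little arithmetic puts those cluster numbers in perspective. The per-GPU figure below is an implied average derived from NVIDIA's 1 PetaFLOP claim, not a published spec:

```c
#include <stdio.h>

int main(void) {
    const double total_cuda_cores = 23040.0;  /* per Iray VCA */
    const double gpus_per_vca     = 8.0;
    const double cluster_tflops   = 1000.0;   /* NVIDIA's 1 PetaFLOP claim, in TFLOPS */
    const double vcas_in_cluster  = 19.0;

    printf("CUDA cores per GPU:     %.0f\n", total_cuda_cores / gpus_per_vca);                     /* 2880, a full GK110 */
    printf("Implied TFLOPS per VCA: %.1f\n", cluster_tflops / vcas_in_cluster);                    /* ~52.6 */
    printf("Implied TFLOPS per GPU: %.1f\n", cluster_tflops / vcas_in_cluster / gpus_per_vca);     /* ~6.6  */
    return 0;
}
```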

DSC01431.JPG

The Iray VCA enables some rather impressive real time renders of 3D models with realistic physical properties and lighting. The models are light simulations that use ray tracing, global illumination and other techniques to show photo realistic models using up to billions of rays of light. NVIDIA is positioning the Iray VCA as an alternative to physical prototyping, allowing designers to put together virtual prototypes that can be iterated and changed at significantly less cost and time.

DSC01447.JPG

Iray itself is NVIDIA’s GPU-accelerated photo realistic renderer. The Iray technology is used in a number of design software packages. The Iray VCA is meant to further accelerate that Iray renderer by throwing massive amounts of parallel processing hardware at the resource intensive problem over the network (the Iray VCAs can be installed at a data center or kept on site). Initially the Iray VCA will support 3ds Max, Catia, Bunkspeed, and Maya, but NVIDIA is working on supporting all Iray accelerated software with the VCA hardware.

GTC 2014 IRAY VCA Renders Honda Car Interior In Real Time.jpg

The virtual prototypes can be sliced and examined and can even be placed in real world environments by importing HDR photos. Jen-Hsun Huang demonstrated this by placing Honda’s vehicle model on the GTC stage (virtually).

DSC01450.JPG

In fact, one of NVIDIA’s initial partners with the Iray VCA is Honda. Honda is currently beta testing a cluster of 25 Iray VCAs to refine styling designs for cars and their interiors based on initial artistic work. Honda Research and Development System Engineer Daisuke Ide was quoted by NVIDIA as stating that “Our TOPS tool, which uses NVIDIA Iray on our NVIDIA GPU cluster, enables us to evaluate our original design data as if it were real. This allows us to explore more designs so we can create better designs faster and more affordably.”

The Iray VCA (PDF) will be available this summer for $50,000. The sticker price includes the hardware, Iray license, and the first year of updates and maintenance. This is far from consumer technology, but it is interesting technology that may be used in the design process of your next car or other major purchase.

What do you think about the Iray VCA and NVIDIA's licensed hardware model?

GDC 2014: Shader-limited Optimization for AMD's GCN

Subject: Editorial, General Tech, Graphics Cards, Processors, Shows and Expos | March 30, 2014 - 01:45 AM |
Tagged: gdc 14, GDC, GCN, amd

While Mantle and DirectX 12 are designed to reduce overhead and keep GPUs loaded, the conversation shifts when you are limited by shader throughput. Modern graphics processors are dominated by sometimes thousands of compute cores. Video drivers are complex packages of software. One of their many tasks is converting your scripts, known as shaders, into machine code for the hardware. If this machine code is efficient, it could mean drastically higher frame rates, especially at extreme resolutions and intense quality settings.

amd-gcn-unit.jpg

Emil Persson of Avalanche Studios, probably known best for the Just Cause franchise, published his slides and speech on optimizing shaders. His talk focuses on AMD's GCN architecture, due to its existence in both console and PC, while bringing up older GPUs for examples. Yes, he has many snippets of GPU assembly code.

AMD's GCN architecture is actually quite interesting, especially dissected as it was in the presentation. It is simpler than its ancestors and much more CPU-like, with resources mapped to memory (and caches of said memory) rather than "slots" (although drivers and APIs often pretend those relics still exist), and with vectors mostly treated as collections of scalars, and so forth. Tricks which attempt to combine instructions together into vectors, such as using dot products, can just put irrelevant restrictions on the compiler and optimizer... as it breaks down those vector operations into the very same component-by-component ops that you thought you were avoiding.

Basically, and it makes sense coming from GDC, this talk rarely glosses over points. It goes over execution speed of one individual op compared to another, at various precisions, and which to avoid (protip: integer divide). Also, fused multiply-add is awesome.
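To make the dot-product point concrete, here is a hedged analogy written in C rather than actual GCN ISA: both functions below boil down to the same chain of per-component fused multiply-adds, so hand-packing the math into a "vector" operation saves nothing and only constrains what the compiler can do:

```c
#include <math.h>
#include <stdio.h>

/* "Clever" form: a weighted sum written as if it were a 3-component dot product. */
static float weighted_sum_dot(const float v[3], const float w[3]) {
    return v[0] * w[0] + v[1] * w[1] + v[2] * w[2];
}

/* Explicit form: the same math as a chain of fused multiply-adds, which is
 * roughly how a scalar-oriented architecture like GCN executes it anyway. */
static float weighted_sum_fma(const float v[3], const float w[3]) {
    float acc = 0.0f;
    acc = fmaf(v[0], w[0], acc);
    acc = fmaf(v[1], w[1], acc);
    acc = fmaf(v[2], w[2], acc);
    return acc;
}

int main(void) {
    const float v[3] = {1.0f, 2.0f, 3.0f};
    const float w[3] = {0.5f, 0.25f, 0.125f};
    printf("dot-product form: %f\n", weighted_sum_dot(v, w));
    printf("fma form:         %f\n", weighted_sum_fma(v, w));
    return 0;
}
```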

I know I learned.

As a final note, this returns to the discussions we had prior to the launch of the next generation consoles. Developers are learning how to make their shader code much more efficient on GCN and that could easily translate to leading PC titles. Especially with DirectX 12 and Mantle, which lightens the CPU-based bottlenecks, learning how to do more work per FLOP addresses the other side. Everyone was looking at Mantle as AMD's play for success through harnessing console mindshare (and in terms of Intel vs AMD, it might help). But honestly, I believe that it will be trends like this presentation which prove more significant... even if behind-the-scenes. Of course developers were always having these discussions, but now console developers will probably be talking about only one architecture - that is a lot of people talking about very few things.

This is not really reducing overhead; this is teaching people how to do more work with less, especially in situations (high resolutions with complex shaders) where the GPU is most relevant.

AMD FirePro W9100 Announced: Doing Work in Hawaii.

Subject: General Tech, Graphics Cards | March 26, 2014 - 05:43 PM |
Tagged: amd, firepro, W9100

The AMD FirePro W9100 has been announced, bringing the Hawaii architecture to non-gaming markets. First seen in the Radeon R9 series of graphics cards, it has the capacity for 5 TeraFLOPs of single-precision (32-bit) performance and 2 TeraFLOPs of double-precision (64-bit). The card also has 16GB of GDDR5 memory to support it. From the raw numbers, this is slightly more capacity than either the Titan Black or Quadro K6000 in all categories. It will also support six 4K monitors (or three at 60Hz) per card. AMD supports up to four W9100 cards in a single system.
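For anyone sizing a system around that four-card limit, the aggregate numbers stack up in the obvious way. A minimal sketch using only the figures quoted above:

```c
#include <stdio.h>

int main(void) {
    const double sp_tflops_per_card = 5.0;
    const double dp_tflops_per_card = 2.0;
    const double memory_gb_per_card = 16.0;
    const int    max_cards          = 4;   /* AMD's stated per-system limit */

    printf("Aggregate SP throughput: %.0f TFLOPS\n", sp_tflops_per_card * max_cards);  /* 20 */
    printf("Aggregate DP throughput: %.0f TFLOPS\n", dp_tflops_per_card * max_cards);  /* 8  */
    printf("Aggregate GPU memory:    %.0f GB\n",     memory_gb_per_card * max_cards);  /* 64 */
    return 0;
}
```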

amd-firepro-w9100.jpg

Professional users can be looking for several things in their graphics cards: compute performance (either directly or through licensed software such as Photoshop, Premiere, Blender, Maya, and so forth), several high-resolution monitors (or digital signage units), and/or a lot of graphics performance. The W9100 is basically the top of the stack which covers all three of these requirements.

amd-firepro-w9100-2.jpg

AMD also announced a system branding initiative called "AMD FirePro Ultra Workstation". They currently have five launch partners (Supermicro, Boxx, Tarox, Silverdraft, and Versatile Distribution Services) which will have workstations available under this program. The list of components for a "Recommended" certification is: two eight-core 2.6 GHz CPUs, 32GB of RAM, four PCIe 3.0 x16 slots, a 1500W Platinum PSU, and a case with nine expansion slots (to allow four W9100 GPUs along with one SSD or SDI interface card).

amd-firepro-w9100-3.jpg

Also, while the company has heavily discussed OpenCL in their slide deck, they have not mentioned specific versions. As such, I will assume that the FirePro W9100 supports OpenCL 1.2, like the R9-series, and not OpenCL 2.0 which was ratified back in November. This is still a higher conformance level than NVIDIA, which is at OpenCL 1.1.

Currently no word about pricing or availability.

Source: AMD