AMD Cancels Catalyst, Introduces Radeon Software Crimson

Subject: Graphics Cards | November 2, 2015 - 08:00 AM |
Tagged: radeon software, radeon, driver, crimson, catalyst, amd

For as long as I can remember, the AMD (previously ATI) graphics driver was known as Catalyst. The Catalyst Control Center (CCC) offered some impressive features and grew over time alongside Radeon hardware, but it had more than its share of issues. It was slow, it was ugly and using it was kind of awful. And today we mourn the passing of Catalyst but welcome the birth of "Radeon Software" and its first iteration, Crimson.


Starting with the next major driver release from AMD you'll see a major change in the speed, design and usability of the most important interface between AMD and its users. I want to be clear today: we haven't had a chance to actually use the software yet, so all of the screenshots and performance claims are from an AMD presentation to the media last week.


Let's start with the new branding: gone is the AMD Catalyst name, replaced by "Radeon Software" as the overarching title for the software and driver packages that AMD releases. The term "Crimson Edition" refers to the major revision of the software and will likely be the portion of the brand that changes with the year or with important architectural changes. Finally, the numeric part of the branding will look familiar and represents the year and month of release: "15.11" equates to a November 2015 release.


With the new brand comes an entirely new design that AMD says targets simplicity, ease of use and speed. The currently available Catalyst Control Center software is none of those things, so it is great news for consumers that AMD has decided to address it. This is one of the pet projects of AMD Radeon Technologies Group SVP Raja Koduri - and it's a great start to a leadership program that should spell positive momentum for the Radeon brand.

While the Catalyst Control Center was written on the aging and bloated .NET framework, Radeon Software is built on Qt. The first and most immediate advantage will be startup speed: AMD says that Radeon Software will open in 0.6 seconds compared to 8.0 seconds for Catalyst on a modestly configured system.


The style and interface look to be drastically improved, with well-defined sections along the top and settings organized in a way that makes them easy to find and adjust. Your video settings are all in a single spot and the display configuration has its own section as well, just as they did with Catalyst, but the look and feel is completely different. Without hands-on time it's difficult to say for sure, but it appears that AMD has made major strides.


There are some interesting new capabilities as well, starting with per-game settings available in Game Manager. This is not a duplication of what GeForce Experience does in terms of adjusting in-game settings, but it does allow you to set control panel-specific settings like anti-aliasing, texture filtering quality and vertical sync. This capability existed in previous versions of Catalyst, but it was hard to use.

Overdrive, the AMD-integrated GPU overclocking portion of Radeon Software, gets a new feature as well: per-game overclocking settings. That's right - you will now be able to set game-specific overclocking settings for your GPU, allowing you to turn up the power for GTA V while turning things down for lower power consumption and noise when catching up on new DLC for Rocket League. I can see this being an incredibly useful feature for gamers willing to take the time to customize their systems.


There are obviously more changes for Radeon Software and the first iteration of it known as Crimson, including improved Eyefinity configuration, automatic driver downloads and much more, and we look forward to playing around with the improved software package in the next few weeks. For AMD, this shows a renewed commitment to Radeon and PC gaming. With its declining market share against the powerful NVIDIA GeForce brand, AMD needs these types of changes.

Testing GPU Power Draw at Increased Refresh Rates using the ASUS PG279Q

Subject: Graphics Cards, Displays | October 24, 2015 - 04:16 PM |
Tagged: ROG Swift, refresh rate, pg279q, nvidia, GTX 980 Ti, geforce, asus, 165hz, 144hz

In the comments to our recent review of the ASUS ROG Swift PG279Q G-Sync monitor, a commenter by the name of Cyclops pointed me in the direction of an interesting quirk that I hadn’t considered before. According to reports, the higher refresh rates of some panels, including the 165Hz option available on this new monitor, can cause power draw to increase by as much as 100 watts on the system itself. While I did say in the review that the larger power brick ASUS provided with it (compared to last year’s PG278Q model) pointed toward higher power requirements for the display itself, I never thought to measure the system.

To set up a quick test I brought the ASUS ROG Swift PG279Q back to its rightful home in front of our graphics test bed, connected an EVGA GeForce GTX 980 Ti (with GPU driver 358.50) and chained both the PC and the monitor up to separate power monitoring devices. While sitting at a Windows 8.1 desktop I cycled the monitor through different refresh rate options and then recorded the power draw from both meters after giving the system 60-90 seconds to idle out.
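If you want to reproduce a rough version of this check on your own NVIDIA system, a minimal sketch along these lines works. To be clear, this is not the setup used above (that relied on wall-power meters and GPU-Z); it assumes the driver's nvidia-smi utility is on your PATH, and it reports GPU board power rather than system power at the wall.

```python
# Minimal sketch: log GPU core clock and board power while you manually step
# the monitor through refresh rates. Assumes an NVIDIA driver with nvidia-smi
# on the PATH; reports board power only, not wall power like the meters above.
import subprocess
import time

QUERY = [
    "nvidia-smi",
    "--query-gpu=clocks.gr,power.draw",
    "--format=csv,noheader,nounits",
]

def sample():
    """Return (graphics clock in MHz, board power in W) for GPU 0."""
    line = subprocess.check_output(QUERY, text=True).strip().splitlines()[0]
    clock_mhz, power_w = (float(v) for v in line.split(","))
    return clock_mhz, power_w

if __name__ == "__main__":
    # Give the system 60-90 seconds to idle out after each refresh rate change,
    # then watch where the core clock and power settle.
    while True:
        clock, power = sample()
        print(f"core clock: {clock:6.0f} MHz    board power: {power:5.1f} W")
        time.sleep(10)
```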


The results are much more interesting than I expected! At 60Hz refresh rate, the monitor was drawing just 22.1 watts while the entire testing system was idling at 73.7 watts. (Note: the display was set to its post-calibration brightness of just 31.) Moving up to 100Hz and 120Hz saw very minor increases in power consumption from both the system and monitor.

But the jump to 144Hz is much more dramatic – idle system power jumps from 76 watts to almost 134 watts – an increase of 57 watts! Monitor power only increased by 1 watt at that transition though. At 165Hz we see another small increase, bringing the system power up to 137.8 watts.

Interestingly, we did find that the system would repeatedly jump to as much as 200+ watts of idle power draw for 30 seconds at a time and then drop back down to the 135-140 watt area for a few minutes. It was repeatable and very measurable.

So, what the hell is going on? A look at GPU-Z clock speeds reveals the source of the power consumption increase.


When running the monitor at 60Hz, 100Hz and even 120Hz, the GPU clock speed sits comfortably at 135MHz. When we increase from 120Hz to 144Hz though, the GPU clock spikes to 885MHz and stays there, even at the Windows desktop. According to GPU-Z the GPU is running at approximately 30% of the maximum TDP.

Though details are sparse, it seems pretty obvious what is going on here. The pixel clock and the GPU clock are connected through the same domain and are not asynchronous. The GPU needs to maintain a certain pixel clock in order to support the required bandwidth of a particular refresh rate, and based on our testing, the idle clock speed of 135MHz doesn’t give the pixel clock enough throughput to power anything more than a 120Hz refresh rate.
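Some quick back-of-the-envelope math shows why the bandwidth requirement ramps so steeply. Assuming CVT reduced-blanking style timings of roughly 2720 x 1481 total pixels for this 2560 x 1440 panel (an assumption on our part; the monitor's actual timings may differ a bit), the required pixel clock scales linearly with refresh rate:

```python
# Rough pixel clock estimates for a 2560x1440 panel, assuming CVT reduced
# blanking totals of about 2720x1481 (active pixels plus blanking intervals).
H_TOTAL, V_TOTAL = 2720, 1481

for refresh_hz in (60, 100, 120, 144, 165):
    pixel_clock_mhz = H_TOTAL * V_TOTAL * refresh_hz / 1e6
    print(f"{refresh_hz:3d} Hz -> ~{pixel_clock_mhz:5.0f} MHz pixel clock")
```

That works out to roughly 242 MHz at 60Hz, 483 MHz at 120Hz, 580 MHz at 144Hz and 665 MHz at 165Hz. If the pixel clock really is tied to the same domain as the core clock, it is easy to see how somewhere past 120Hz the 135MHz idle state runs out of headroom and the driver falls back to a much higher power state.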


Pushing refresh rates of 144Hz and higher causes a surprising increase in power draw

The obvious question here though is why NVIDIA would need to go all the way up to 885MHz in order to support the jump from 120Hz to 144Hz refresh rates. It seems quite extreme, and the increased power draw is significant, causing the fans on the EVGA GTX 980 Ti to spin up even while sitting idle at the Windows desktop. NVIDIA is aware of the complication, though it appears that a true fix won't arrive until an architectural shift is made down the road. With the ability to redesign the clock domains, NVIDIA could make the pixel and GPU clocks completely asynchronous, increasing one without affecting the other. It's not a simple process though, especially in a processor this complex. We have seen Intel and AMD correctly and effectively separate clocks on newer CPU designs in recent years.

What happens to a modern AMD GPU like the R9 Fury with a similar test? To find out we connected our same GPU test bed to the ASUS MG279Q, a FreeSync enabled monitor capable of 144 Hz refresh rates, and swapped the GTX 980 Ti for an ASUS R9 Fury STRIX.



The AMD Fury does not demonstrate the same phenomenon that the GTX 980 Ti does when running at high refresh rates. The Fiji GPU runs at the same static 300MHz clock rate at 60Hz, 120Hz and 144Hz, and the power draw on the system only inches up by 2 watts or so. I wasn't able to test 165Hz refresh rates on the AMD setup, so it is possible that at that threshold the AMD graphics card would behave differently. It's also true that the NVIDIA Maxwell GPU is running at less than half the clock rate of AMD Fiji in this idle state, and that may account for the difference in pixel clock behavior we are seeing. Still, the NVIDIA platform draws slightly more power at idle than the AMD platform, so advantage AMD here.

For today, know that if you choose to use a 144Hz or even a 165Hz refresh rate on your NVIDIA GeForce GPU you are going to be drawing a bit more power and will be less efficient than expected even just sitting in Windows. I would bet that most gamers willing to buy high end display hardware capable of those speeds won’t be overly concerned with 50-60 watts of additional power draw, but it’s an interesting data point for us to track going forward and to compare AMD and NVIDIA hardware in the future.

Are NVIDIA and AMD ready for SteamOS?

Subject: Graphics Cards | October 23, 2015 - 03:19 PM |
Tagged: linux, amd, nvidia, steam os

Steam Machines powered by SteamOS are due to hit stores in the coming months, and in order to get the best performance you need to make sure that the GPU inside the machine plays nicely with the new OS.  To that end Phoronix has tested 22 GPUs: 15 NVIDIA cards ranging from a GTX 460 straight through to a TITAN X, and seven AMD cards from an HD 6570 through to the new R9 Fury.  Part of the reason they used fewer AMD cards in the testing stems from driver issues which prevented some models from functioning properly.  They tested BioShock Infinite, both Metro Redux games, CS:GO and one of Josh's favourites, DiRT Showdown.  The performance results may not be what you expect and are worth checking out fully.  Phoronix also included cost-per-performance findings for budget-conscious gamers.


"With Steam Machines set to begin shipping next month and SteamOS beginning to interest more gamers as an alternative to Windows for building a living room gaming PC, in this article I've carried out a twenty-two graphics card comparison with various NVIDIA GeForce and AMD Radeon GPUs while testing them on the Debian Linux-based SteamOS 2.0 "Brewmaster" operating system using a variety of Steam Linux games."


Source: Phoronix

Report: AMD Radeon 400 Series Taped Out, Coming 2016

Subject: Graphics Cards | October 23, 2015 - 01:49 AM |
Tagged: tape out, rumor, report, Radeon 400 Series, radeon, graphics card, gpu, Ellesmere, Baffin, amd

Details are almost nonexistent, but a new report claims that AMD has reached tape out for an upcoming Radeon 400 series of graphics cards, which could be the true successor to the R9 200-series after the rebranded 3xx cards.


Image credit: WCCFtech

According to the report:

"AMD has reportedly taped out two of its next-gen GPUs, with "Ellesmere" and "Baffin" both taping out - and both part of the upcoming Radeon 400 series of video cards."

I wish there was more here to report, but if this is accurate we should start to hear some details about these new cards fairly soon. The important thing is that AMD is working on new performance and mainstream cards so soon after releasing what was largely a simple rebrand across much of the 300-series GPU lineup this year.

Source: WCCFTech

ASUS Has Created a White AMD Radeon R9 Nano

Subject: Graphics Cards | October 23, 2015 - 12:29 AM |
Tagged: r9 nano, mITX, mini-itx, graphics card, gpu, asus, amd

AMD's Radeon R9 Nano is a really cool product, able to provide much of the power of the bigger R9 Fury X without the need for more than a standard air cooler, and doing so in an impossibly tiny size for a full graphics card. And while mini-ITX graphics cards serve a small segment of the market, just who might be buying a white one when this is released?


According to a report published first by Computer Base in Germany, ASUS is releasing an all-white AMD R9 Nano, and it looks really sharp. The stock R9 Nano is no slouch in the looks department as you can see here in our full review of AMD's newest GPU, but with this design ASUS provides a totally different look that could help unify the style of your build depending on your other component choices. White is just starting to show up for things like motherboard PCBs, but it's pretty rare in part due to the difficulty in manufacturing white parts that stay white when they are subjected to heat.


There was no mention of a specific release window for the ASUS R9 Nano White, so we'll have to wait for official word on that. It is possible that ASUS has also implemented their own custom PCB, though details are not known just yet. We should know more by the end of next month according to the report.

Gigabyte GTX 980 WATERFORCE Liquid-Cooled Graphics Card

Subject: Graphics Cards | October 21, 2015 - 07:18 AM |
Tagged: water cooling, nvidia, liquid cooled, GTX 980 WATERFORCE, GTX 980, GPU Water Block, gigabyte, AIO

Gigabyte has announced the GeForce GTX 980 WATERFORCE water-cooled graphics card, and this one is ready to go out of the box thanks to an integrated closed-loop liquid cooler.


In addition to full liquid cooling, the card - model GV-N980WAOC-4GD - also features "GPU Gauntlet Sorting", meaning that each card has a binned GTX 980 core for better overclocking performance.

"The GTX 980 WATERFORCE is fitted with only the top-performing GPU core through the very own GPU Gauntlet Sorting technology that guarantees superior overclocking capabilities in terms of excellent power switching and thermal efficiency. Only the strongest processors survived can be qualified for the GTX 980 WATERFORCE, which can fulfill both gaming enthusiasts’ and overclockers’ expectations with greater overclocking headroom, and higher, stable boost clocks under heavy load."


The cooling system for the GTX 980 WATERFORCE begins with a full-coverage block that cools the GPU, RAM and power delivery without the need for any additional fan for board components. The tubes carrying liquid to the radiator are 45 cm SFP tubes, which Gigabyte says "effectively prevent...leak(s) and fare a lower coolant evaporation rate", and the system is connected to a 120 mm radiator.

Gigabyte says both the fan and the pump offer low noise output, and claims that this cooling system allows the GTX 980 WATERFORCE to "perform up to 38.8% cooler than the reference cooling" for cool and quiet gaming.


The WATERFORCE card also features two DVI outputs (the reference design has a single dual-link DVI output) in addition to the standard three DisplayPort 1.2 and single HDMI 2.0 outputs of a GTX 980.

Pricing and availability have not been announced.

Source: Gigabyte

NVIDIA Releases Share Beta, Requires GFE for Future Beta Driver Downloads

Subject: Graphics Cards | October 15, 2015 - 12:01 PM |
Tagged: nvidia, geforce experience, beta drivers

NVIDIA just released a new driver, version 358.50, with an updated version of GeForce Experience that brings about some interesting changes to the program. First, let's talk about the positive changes, including beta access to the updated NVIDIA Share utility and improvements in GameStream.

As we detailed first with the release of the GeForce GTX 950, NVIDIA is making some impressive additions to the ShadowPlay portion of GeForce Experience, along with a rename to NVIDIA Share.


The idea is to add functionality to the ShadowPlay feature including an in-game overlay to control the settings and options for local recording and even an in-overlay editor and previewer for your videos. This allows the gamer to view, edit, snip and then upload those completed videos to YouTube directly, without ever having to leave the game. (Though you’ll obviously want to pause it before going through that process.) Capture and “Instant Replay” support is now capable of 4K / 60 Hz capture and upload as well – nice!

Besides added capability for the local recording portion of Share, NVIDIA is also adding some new features to the mix. NVIDIA Share will now allow for point-to-point stream sharing, giving you the ability to send a link to a friend that they can open in a web browser to watch the game that you are playing with very low latency. You could use this as a way to show a friend that new skill you learned for Rocket League, to try to convince them to pick up their own copy, or even just as a social event. It supports voice communication for the ability to talk smack if necessary.


But it goes beyond just viewing the game – this point-to-point streaming allows the remote player to take over the controls to teach the local gamer something new or to finish a difficult portion of the game you might be stuck on. And if the game supports local multiplayer, you can BOTH play, as the remote gaming session will emulate a second Xbox / SHIELD controller attached to the system! This does have a time limit of 1 hour as a means to persuade game developers and publishers not to throw a hissy-fit.

The demo I saw recently was very impressive and it all worked surprisingly well out of the box.


Fans of NVIDIA's local network GameStream might enjoy the upgrade to support streaming games at 4K 60 FPS - as long as you have an NVIDIA SHIELD Android TV device connected to a 4K capable TV in your home. Clearly this will make the visual presentation of your games on your television more impressive than ever, and NVIDIA has added support for 5.1 channel surround sound pass-through.

There is another change coming with this release of GFE that might turn some heads, this one surrounding the frequently updated "Game Ready" drivers NVIDIA puts out for specific game launches. These drivers have been a huge part of NVIDIA's success in recent years, as the day one experience for GeForce users has been better than AMD's in many instances. It is vital for drivers and performance to be optimal on the day of a game's release, as enthusiast gamers are the ones going through the preloading process and midnight release timings.


Future "Game Ready" drivers will no longer be made available through and instead will ONLY be delivered through GeForce Experience. You'll also be required to have a validated email address to get the downloads for beta drivers - though NVIDIA admitted to me you would be able to opt-out of the mailing list anytime after signing up.

NVIDIA told media that this method of driver release is laying the groundwork for future plans, and that gamers would be getting early access to new features, chances to win free hardware and the ability to take part in the driver development process like never before. Honestly though, this is a way to get users to sign up for a marketing mailing list that has some specific purpose going forward. Not all mailing lists are bad obviously (have you signed up for the PC Perspective Live! Mailing List yet?!?) but there are bound to be some raised eyebrows over this.


NVIDIA says that more than 90% of its driver downloads today already come through GeForce Experience, so changes to the user experience should be minimal. We'll wait to see how the crowd reacts but I imagine once we get past the initial shock of the change over to this system, the roll outs will be fast, clean and simple. But dammit - we fear change.

Source: NVIDIA

AMD Releases Catalyst 15.10 Beta Drivers

Subject: Graphics Cards | October 14, 2015 - 11:24 AM |
Tagged: radeon, dx12, DirectX 12, Catalyst 15.10 beta, catalyst, ashes of the singularity, amd


The AMD Catalyst 15.9 beta driver was released just two weeks ago, and already AMD is ready with a new version. 15.10 is available now and offers several bug fixes, though the point of emphasis is DX12 performance improvements to the Ashes of the Singularity benchmark.

From AMD:

Highlights of AMD Catalyst 15.10 Beta Windows Driver

Performance Optimizations:

  • Ashes of the Singularity - DirectX 12 Quality and Performance optimizations

Resolved Issues:

  • Video playback of MPEG2 video fails with a playback error/error code message
  • A TDR error or crash is experienced when running the Unreal Engine 4 DirectX benchmark
  • Star Wars: Battlefront is able to use high performance graphics when launched on mobile devices with switchable graphics
  • Intermittent playback issues with Cyberlink PowerDVD when connecting to a 3D display with an HDMI cable
  • Ashes of the Singularity - A 'Driver has stopped responding' error may be experienced in DirectX 12 mode
  • Driver installation may halt on some configurations
  • A TDR error may be experienced while toggling between minimized and maximized mode while viewing 4K YouTube content

Known Issues:

  • Ashes of the Singularity may crash on some AMD 300 series GPUs
  • Core clock fluctuations may be experienced when FreeSync and FRTC are both enabled on some AMD CrossFire systems
  • Ashes of the Singularity may fail to launch on some GPUs with 2GB Video Memory. AMD continues to work with Stardock to resolve the issue. In the meantime, deleting the game config file helps resolve the issue
  • The secondary display adapter is missing in the Device Manager and the AMD Catalyst Control Center after installing the driver on a Microsoft Windows 8.1 system
  • Elite: Dangerous - poor performance may be experienced in SuperCruise mode
  • A black screen may be encountered on bootup on Windows 10 systems. The system will ultimately continue to the Windows login screen

The driver is available now from AMD's Catalyst beta download page.

Source: AMD

NVIDIA Releases 358.50 WHQL Game Ready Drivers

Subject: Graphics Cards | October 7, 2015 - 01:45 PM |
Tagged: opengl es 3.2, nvidia, graphics drivers, geforce

The GeForce Game Ready 358.50 WHQL driver has been released so users can perform their updates before the Star Wars Battlefront beta goes live tomorrow (unless you already received a key). As with every “Game Ready” driver, NVIDIA ensures that the essential performance and stability tweaks are rolled in to this version, and tests it against the title. It is WHQL certified too, which is a recent priority for NVIDIA. Years ago, “Game Ready” drivers were often classified as Beta, but the company now intends to pass their work through Microsoft for a final sniff test.


Another interesting addition to this driver is the inclusion of the OpenGL 2015 ARB extensions and OpenGL ES 3.2. Previously, to use OpenGL ES 3.2 on the PC (if you wanted to develop software with it, for instance) you needed a separate driver that was released at SIGGRAPH. It has now been rolled into the main, public driver. The mobile devs who use their production machines to play Battlefront rejoice, I guess. It might also be useful if developers, for instance at Mozilla or Google, want to create pre-release implementations of future WebGL specs.

Source: NVIDIA

Who Decided to Call a Lightweight API "Metal"?

Subject: Graphics Cards | October 7, 2015 - 07:01 AM |
Tagged: opengl, metal, apple

Ars Technica took it upon themselves to benchmark Metal in the latest OS X El Capitan release. Even though OpenGL on Mac OS X is not considered to be on par with its Linux counterparts, which is probably due to the driver situation until recently, it still pulls ahead of Metal in many situations.


Image Credit: Ars Technica

Unlike the other new graphics APIs, Metal uses the traditional binding model. Basically, you have a GPU object that you attach your data to, then call one of a handful of “draw” functions to signal the driver. DirectX 12, Vulkan, and Mantle, on the other hand, treat work like commands on queues. The latter model works better in multi-core environments, and it aligns with GPU compute APIs, but the former is easier to port OpenGL and DirectX 11 applications to.

Ars Technica notes that faster GPUs, such as the NVIDIA GeForce GTX 680MX, show higher gains than slower ones. Their “best explanation” is that “faster GPUs can offload more work from the CPU”. That is pretty much true, yes. The new APIs are designed to keep GPUs loaded and working as much as possible, because they really do sit around doing nothing a lot. If you are already able to keep a GPU loaded, because it can't accept much work in the first place, then there is little benefit to decreasing CPU load or spreading it out across multiple cores.

Granted, there are many ways that benchmarks like these could be incorrectly used. I'll assume that Ars Technica and GFXBench are not making any simple mistakes, though, but it's good to be critical just in case.

Source: Ars Technica

NVIDIA Announces New "Bullets or Blades" GeForce Bundle

Subject: Graphics Cards | October 7, 2015 - 01:51 AM |

The latest game bundle for NVIDIA GPU customers offers the buyer a choice between Tom Clancy’s Rainbow Six Siege or Assassin’s Creed Syndicate.


To qualify for the free game you need to purchase a GTX 980 Ti, GTX 980, or GTX 970 graphics card. On the mobile side of things purchasing a laptop with GTX 970M or above graphics earns the game.

"It’s the final few months of the year, and as always that means a rush of new triple-A games that promise to excite and delight over the Holiday season. This year, Ubisoft's Assassin’s Creed Syndicate andTom Clancy's Rainbow Six Siege are vying for glory. And to ensure the definitive versions are found on PC we’ve teamed up with Ubisoft once again to add NVIDIA GameWorks effects to each, bringing richer, more detailed experiences to your desktop."

The Bullets or Blades bundle is already underway as of 10/06/15, and purchases must be made through a participating retailer to qualify for the game codes. Full details are available from NVIDIA here.

Source: NVIDIA

4K performance when you can spend at least $1.3K

Subject: Graphics Cards | October 6, 2015 - 02:40 PM |
Tagged: 4k, gtx titan x, fury x, GTX 980 Ti, crossfire, sli

[H]ard|OCP shows off just what you can achieve when you spend over $1000 on graphics cards and have a 4K monitor in their latest review.  In Project Cars you can expect never to see less than 40fps with everything cranked to maximum, and if you invested in TITAN Xs you can even enable DS2X anti-aliasing, which renders at double the resolution before downsampling.  The Witcher 3 is a bit more challenging and no card is up for HairWorks without a noticeable hit to performance.  Far Cry 4 still refuses to believe in CrossFire, and as far as NVIDIA performance goes, if you want to see soft shadows you are going to have to invest in a pair of TITAN Xs.  Check out the full review to see what the best of the current market is capable of.


"The ultimate 4K battle is about to begin, AMD Radeon R9 Fury X CrossFire, NVIDIA GeForce GTX 980 Ti SLI, and NVIDIA GeForce GTX TITAN X SLI will compete for the best gameplay experience at 4K resolution. Find out what $1300 to $2000 worth of GPU backbone will buy you. And find out if Fiji really can 4K."


Source: [H]ard|OCP

AMD Releases Catalyst 15.9.1 to Fix Several Issues

Subject: Graphics Cards | October 5, 2015 - 07:13 AM |
Tagged: graphics drivers, amd

Apparently users of AMD's Catalyst 15.9 drivers have been experiencing issues. Specifically, “major memory leaks” could be caused by adjusting windows, such as resizing them or snapping them to edges of the desktop. According to PC Gamer, AMD immediately told users to roll back when they found out about the bug.


They have since fixed it with Catalyst 15.9.1 Beta. This subversion driver also fixes crashes and potential “signal loss” problems with a BenQ FreeSync monitor. As such, if you were interested in playing around with the Catalyst 15.9 beta driver, it should be safe to do so now. I wish I could offer more input, but I just found out about it and it seems pretty cut and dried: if you had problems, they should be fixed. The update is available here.

Source: PC Gamer

Report: AMD's Dual-GPU Fiji XT Card Might Be Coming Soon

Subject: Graphics Cards | October 5, 2015 - 02:33 AM |
Tagged: rumor, report, radeon, graphics cards, Gemini, fury x, fiji xt, dual-GPU, amd

The AMD R9 Fury X, Fury, and Nano have all been released, but a dual-GPU Fiji XT card could be on the way soon according to a new report.


Back in June at AMD's E3 event we were shown Project Quantum, AMD's concept for a powerful dual-GPU system in a very small form-factor. It was speculated that the system was actually housing an unreleased dual-GPU graphics card, which would have made sense given the very small size of the system (and the mini-ITX motherboard therein). Now a report from WCCFtech is pointing to a manifest that just might be a shipment of this new dual-GPU card, and the code-name is Gemini.


"Gemini is the code-name AMD has previously used in the past for dual GPU variants and surprisingly, the manifest also contains another phrase: ‘Tobermory’. Now this could simply be a reference to the port that the card shipped from...or it could be the actual codename of the card, with Gemini just being the class itself."

The manifest also indicates a Cooler Master cooler for the card; Cooler Master is the maker of the liquid cooling solution for the Fury X. As the Fury X has had its share of criticism for pump whine issues, it would be interesting to see how a dual-GPU cooling solution would fare in that department, though we could be seeing an entirely new generation of the pump as well. Of course speculation on an unreleased product like this could be incorrect, and verifiable hard details aren't available yet. Still, if the dual-GPU card is based on a pair of full Fiji XT cores, the specs could be very impressive to say the least:

  • Core: Fiji XT x2
  • Stream Processors: 8192
  • GCN Compute Units: 128
  • ROPs: 128
  • TMUs: 512
  • Memory: 8 GB (4GB per GPU)
  • Memory Interface: 4096-bit x2
  • Memory Bandwidth: 1024 GB/s

In addition to the specifics above, the report also discussed the possibility of 17.2 TFLOPS of compute performance, based on doubling the Fury X, which would make the Gemini product one of the most powerful single-card GPU solutions in the world (see the quick math below). The card seems close enough to the final stage that we should expect to hear something official soon, but for now it's fun to speculate - unless of course the speculation concerns a high initial retail price, and unfortunately something at or above $1000 is quite likely. We shall see.
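For reference, the 17.2 TFLOPS figure is straightforward arithmetic, assuming a dual-Fiji board holds the Fury X's 1050 MHz clock (which is not confirmed, so treat it as an upper bound):

```python
# Where 17.2 TFLOPS comes from: each GCN stream processor performs one fused
# multiply-add (2 FLOPs) per clock, and Fury X has 4096 of them at up to
# 1050 MHz. Doubling that for two Fiji XT GPUs gives the headline number.
STREAM_PROCESSORS = 4096
CLOCK_GHZ = 1.05          # Fury X boost clock; dual-GPU clocks unconfirmed
FLOPS_PER_CLOCK = 2       # one FMA per stream processor per cycle

single = STREAM_PROCESSORS * FLOPS_PER_CLOCK * CLOCK_GHZ / 1000  # ~8.6 TFLOPS
print(f"Fury X: ~{single:.1f} TFLOPS, Gemini (x2): ~{2 * single:.1f} TFLOPS")
```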

Source: WCCFtech

The fast and the Fury(ous): 4K

Subject: Graphics Cards | September 28, 2015 - 04:45 PM |
Tagged: R9 Fury, asus strix r9 fury, r9 390x, GTX 980, crossfire, sli, 4k

Bring your wallets to this review from [H]ard|OCP, which pits multiple AMD and NVIDIA GPUs against each other at 4K resolution, and no matter the outcome it won't be cheap!  They used the Catalyst 15.8 Beta and GeForce 355.82 WHQL drivers, the latest available at the time of writing, as well as trying out Windows 10 Pro x64.  There were some interesting results: for instance, you want an AMD card when driving in the rain in Project Cars, as the GTX 980s immediately slowed down in inclement weather.  In The Witcher 3, AMD again provided frames faster, but unfortunately the old spectre of stuttering appeared, and those of you familiar with our Frame Rating tests will understand the source of that.  Dying Light proved to be a game that likes VRAM, with the 390X taking top spot, though sadly neither AMD card could handle CrossFire in Far Cry 4.  There is a lot of interesting information in the review and AMD's cards certainly show their mettle, but the overall winner is not perfectly clear; [H] chose the R9 Fury with a caveat about CrossFire support.


"We gear up for multi-GPU gaming with AMD Radeon R9 Fury CrossFire, NVIDIA GeForce GTX 980 SLI, and AMD Radeon R9 390X CrossFire and share our head-to-head results at 4K resolution and find out which solution offers the best gameplay experience. How well does Fiji game when utilized in a CrossFire configuration?"


Source: [H]ard|OCP

NVIDIA Publishes DirectX 12 Tips for Developers

Subject: Graphics Cards | September 26, 2015 - 09:10 PM |
Tagged: microsoft, windows 10, DirectX 12, dx12, nvidia

Programming with DirectX 12 (and Vulkan, and Mantle) is a much different process than most developers are used to. The biggest change is how work is submitted to the driver. Previously, engines would bind attributes to a graphics API and issue one of a handful of “draw” commands, which turns the current state of the API into a message. Drivers would play around with queuing and manipulating these messages to optimize how the orders are sent to the graphics device, but the game developer had no control over that.


Now, the new graphics APIs are built more like command lists. Instead of bind, call, bind, call, and so forth, applications request queues to dump work into, and assemble the messages themselves. It even allows these messages to be bundled together and sent as a whole. This allows direct control over memory and the ability to distribute a lot of the command control across multiple CPU cores. An application is only as fast as its slowest (relevant) thread, so the ability to spread work out increases actual performance.
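As a purely conceptual illustration of that shift, the sketch below uses hypothetical stand-in classes (not the actual Direct3D 12 or Vulkan API): worker threads record their own self-contained command lists and a queue executes them in batches, rather than every draw call flowing through one immediate, driver-managed state machine.

```python
# Conceptual sketch only: CommandList and CommandQueue are hypothetical
# stand-ins to show the shape of the model, not real D3D12/Vulkan objects.
from concurrent.futures import ThreadPoolExecutor

class CommandList:
    """A recorded batch of state changes and draw commands."""
    def __init__(self):
        self.commands = []
    def set_pipeline(self, pso):
        self.commands.append(("set_pipeline", pso))
    def draw(self, mesh):
        self.commands.append(("draw", mesh))

class CommandQueue:
    """Consumes whole command lists; this is where the driver sees the work."""
    def execute(self, command_lists):
        submitted = sum(len(cl.commands) for cl in command_lists)
        print(f"submitted {submitted} commands in {len(command_lists)} lists")

def record_chunk(meshes):
    # Each worker thread assembles its own command list independently,
    # which is what lets the CPU-side work spread across cores.
    cl = CommandList()
    cl.set_pipeline("opaque_pso")
    for mesh in meshes:
        cl.draw(mesh)
    return cl

scene = [f"mesh_{i}" for i in range(1000)]
chunks = [scene[i::4] for i in range(4)]          # split work across 4 threads
with ThreadPoolExecutor(max_workers=4) as pool:
    lists = list(pool.map(record_chunk, chunks))
CommandQueue().execute(lists)                     # one submission for the frame
```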

NVIDIA has created a large list of things that developers should do, and others that they should not, to increase performance. Pretty much all of them apply equally, regardless of graphics vendor, but there are a few NVIDIA-specific comments, particularly the ones about NvAPI at the end and a few labeled notes in the “Root Signatures” category.

The tips are fairly diverse, covering everything from how to efficiently use things like command lists, to how to properly handle multiple GPUs, and even how to architect your engine itself. Even if you're not a developer, it might be interesting to look over for clues about what makes the API tick.

Source: NVIDIA

Nintendo Joins the Khronos Group

Subject: Graphics Cards | September 26, 2015 - 03:46 PM |
Tagged: Nintendo, Khronos

Console developers need to use the APIs that are laid out by the system's creator. Nintendo has had its own graphics API, called GX, for the last three generations, although it is rumored to be somewhat like OpenGL. A few days ago, Nintendo's logo appeared on the Khronos Group's website as a Contributor Member. This leads sites like The Register to speculate that Nintendo “pledges allegiance to the Vulkan (API)”.

I wouldn't be so hasty.


There are many reasons why a company would want to become a member of the Khronos Group. Microsoft, for instance, decided that the small, $15,000 USD/year membership fee was worth it to influence the future of WebGL. Nintendo, at least currently, does not make their own web browser; they license NetFront from Access Co. Ltd., but that could change (just like their original choice of Opera Mini did). Even with a licensed browser, they might want to discuss and vote on the specifics. But yes, WebGL is unlikely to be on their minds, let alone a driving reason, especially since they are not involved with the W3C. Another unlikely option is OpenCL, especially if they get into cloud services, but I can't see them caring enough about the API to do anything more than blindly use it.

Vulkan is, in fact, most likely what Nintendo is interested in, but that also doesn't mean that they will support it. The membership fee is quite low for a company like Nintendo, and, even if they don't use the API, their input could benefit them, especially since they rely upon third parties for graphics processors. Pushing for additions to Vulkan could force GPU vendors to adopt it, so it will be available for their own APIs, and so forth. There might even be some learning, up to the limits of the Khronos Group's confidentiality requirements.

Or, of course, Nintendo could adopt the Vulkan API to some extent. We'll see. Either way, the gaming company is beginning to open up with industry bodies. This could be positive.

Source: NeoGAF

The Fable of the uncontroversial benchmark

Subject: Graphics Cards | September 24, 2015 - 02:53 PM |
Tagged: radeon, nvidia, lionhead, geforce, fable legends, fable, dx12, benchmark, amd

By now you should have memorized Ryan's review of Fable's DirectX 12 performance on a variety of cards and hopefully tried out our new interactive IFU charts.  You can't always cover every card, as those who were brave enough to look at the CSV file Ryan provided might have come to realize.  That's why it is worth peeking at The Tech Report's review after reading through ours.  They have included an MSI R9 285 and XFX R9 390 as well as an MSI GTX 970, which may be cards you are interested in seeing.  They also spend some time looking at CPU scaling and the effect that has on AMD and NVIDIA's performance.  Check it out here.


"Fable Legends is one of the first games to make use of DirectX 12, and it produces some truly sumptuous visuals. Here's a look at how Legends performs on the latest graphics cards."


Phoronix Looks at NVIDIA's Linux Driver Quality Settings

Subject: Graphics Cards | September 22, 2015 - 09:09 PM |
Tagged: nvidia, linux, graphics drivers

In the NVIDIA driver control panel, there is a slider that controls Performance vs Quality. On Windows, I leave it set to “Let the 3D application decide” and change my 3D settings individually, as needed. I haven't used NVIDIA's control panel on Linux too much, mostly because my laptop is what I usually install Linux on, which runs an AMD GPU, but the UI seems to put a little more weight on it.


Or is that GTux?

Phoronix decided to test how each of these settings affects a few titles, and the only benchmark they bothered reporting is Team Fortress 2. It turns out that the other titles see basically zero variance. TF2 saw a difference of 6 FPS though, from 115 FPS at High Quality to 121 FPS at Quality. Oddly enough, Performance and High Performance delivered worse performance than Quality.

To me, this sounds like NVIDIA has basically forgotten about the feature. It barely affects any title, the one game it changes anything measurable in is from 2007, and it contradicts what the company is doing on other platforms. I predict that Quality is the default, which is the same as Windows (albeit with only 3 choices: “Performance”, “Balanced”, and the default “Quality”). If it is, you probably should just leave it there 24/7 in case NVIDIA has literally not thought about tweaking the other settings. On Windows, it is kind-of redundant with GeForce Experience, anyway.

Final note: Phoronix has only tested the GTX 980. Results may vary elsewhere, but probably don't.

Source: Phoronix

Intel Will Not Bring eDRAM to Socketed Skylake

Subject: Graphics Cards, Processors | September 17, 2015 - 09:33 PM |
Tagged: Skylake, kaby lake, iris pro, Intel, edram

Update: Sept 17, 2015 @ 10:30 ET -- To clarify: I'm speaking of socketed desktop Skylake. There will definitely be Iris Pro in the BGA options.

Before I begin, the upstream story has a few disputes that I'm not entirely sure on. The Tech Report published a post in September that cited an Intel spokesperson, who said that Skylake would not be getting a socketed processor with eDRAM (unlike Broadwell, which got one just before Skylake launched). This could be a big deal, because the fast, on-package cache can be used by the CPU as well as the GPU. It is sometimes called “128MB of L4 cache”.


Later, ITWorld and others posted stories that said Intel killed off a Skylake processor with eDRAM, citing The Tech Report. Afterward, Scott Wasson claimed that a story, which may or may not be ITWorld's, had some “scrambled facts” but wouldn't elaborate. Comparing the two articles doesn't really illuminate any massive, glaring issues, but I might just be missing something.

Update: Sept 18, 2015 @ 9:45pm -- So I apparently misunderstood the ITWorld article. They were claiming that Broadwell-C was discontinued, while The Tech Report was talking about Socketed Skylake with Iris Pro. I thought they both were talking about the latter. Moreover, Anandtech received word from Intel that Broadwell-C is, in fact, not discontinued. This is odd, because ITWorld said they had confirmation from Intel. My guess is that someone gave them incorrect information. Sorry that it took so long to update.

In the same thread, Ian Cutress of Anandtech asked whether The Tech Report benchmarked the processor after Intel tweaked its FCLK capabilities, which Scott did not (but is interested in doing). Intel enabled a slight frequency boost for the link between the CPU and the PCIe lanes after Skylake shipped, which naturally benefits discrete GPUs. Since the original claim was that Broadwell-C is better than Skylake-K for gaming, giving a 25% boost to that clock (or removing a 20% loss, depending on how you look at it) could tilt Skylake back above Broadwell. We won't know until it's benchmarked, though.

Iris Pro and eDRAM, while skipping Skylake, might arrive in future architectures, such as Kaby Lake. It seems to have been demonstrated that, in some situations, and ones relevant to gamers at that, this boost from eDRAM can help computation -- without even considering the compute potential of a better secondary GPU. One argument is that cutting the extra die room gives Intel more margin, which is almost definitely true, but I wonder how much attention Kaby Lake will get. Especially with AVX-512 and other features being debatably removed, it almost feels like Intel is treating this Tock like a Tick, since they didn't really get one with Broadwell, and Kaby Lake will be the architecture that leads us to 10nm. On the other hand, each of these architectures is developed by an independent team, so I might be wrong in comparing them serially.