AMD Plans Two GPUs in 2016

Subject: Graphics Cards | November 16, 2015 - 09:34 PM |
Tagged: amd, radeon, GCN

Late last week, Forbes published an editorial by Patrick Moorhead, who spoke with Raja Koduri about AMD's future in the GPU industry. Patrick was a Corporate Vice President at AMD until late 2011. He then created Moor Insights and Strategy, which provides industry analysis. He regularly publishes editorials to Forbes and CIO. Raja Koduri is the head of the Radeon Technologies Group at AMD.


I'm going to focus on a brief mention a little more than halfway through, though. According to the editorial, Raja stated that AMD will release two new GPUs in 2016. “He promised two brand new GPUs in 2016, which are hopefully going to both be 14nm/16nm FinFET from GlobalFoundries or TSMC and will help make Advanced Micro Devices more power and die size competitive.”

We have been expecting AMD's Arctic Islands to arrive at some point in 2016 to compete with NVIDIA's Pascal architecture at the high end. AMD's product stack has been relatively stale for a while, with most of the innovation occurring at the top end and pushing the previous top end down a bit. Two brand new GPUs almost certainly means that the second will focus on the lower end of the market, using the smaller process to make more compelling products that are more power efficient, cheaper per unit, and equipped with newer features.

Add the recent report of the Antigua architecture, which I assume is in addition to the two GPUs mentioned above, and AMD's product stack could look much less familiar next year.

Source: Forbes

Putting the R9 Nano under the microscope

Subject: Graphics Cards | November 13, 2015 - 03:30 PM |
Tagged: radeon, r9 nano, amd

We are not the only ones investigating usage scenarios for AMD's tiny R9 Nano; [H]ard|OCP has also recently looked at this card to determine if or when there is a good reason to pay the extra price for this tiny GPU.  This particular review focuses on performance against a similarly sized Gigabyte GTX 970 in a Cooler Master Elite 110; there will be a follow-up in which the cards run inside a Corsair Obsidian Series 250D case.  At 1080p the cards performed at very similar levels, with the significantly more expensive Nano holding a small lead, while at 1440p the R9 Nano truly shines.  This card is certainly not for everyone, and both the Fury X and GTX 980 Ti offer much better performance at a similar price point, but neither of them will fit inside the case of someone determined to build a tiny gaming machine.


"This evaluation will compare the new retail purchased Radeon R9 Nano with a GIGABYTE GeForce GTX 970 N970-IX-OC small form factor video card in a mini-ITX Cooler Master Elite 110 Intel Skylake system build. We will find out if the higher priced Nano is worth the money for a 1440p and 1080p gameplay experience in a tiny footprint. "

Here are some more Graphics Card articles from around the web:


Source: [H]ard|OCP

Basemark GPU Vulkan Announced for Q2'16 Release

Subject: Graphics Cards | November 10, 2015 - 03:02 PM |
Tagged: vulkan

The Vulkan API, announced during the Game Developers Conference last March, is a low-level method to communicate with GPUs. It is essentially a fork of AMD's Mantle, modified to use OpenCL's SPIR bytecode for its shading and compute language rather than DirectX and Mantle's HLSL (or OpenGL's GLSL). At the time, Khronos said that Vulkan was expected to be released in 2015, and that they intended to “under promise and over deliver” on that schedule. Being November, I expect that something came up, which isn't too surprising as Microsoft seems to have had similar issues with DirectX 12.
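For context on what that SPIR-V hand-off looks like in practice, here is a minimal sketch contrasting the two models. It uses the familiar OpenGL calls alongside the Vulkan shader-module interface Khronos has described; Vulkan itself was still unreleased when this was written, so treat the Vulkan half as provisional rather than final.

    // Sketch: OpenGL consumes GLSL source text and compiles it in the driver at
    // runtime, while Vulkan consumes SPIR-V bytecode compiled offline (e.g. with
    // glslang). Assumes a valid GL context and VkDevice already exist.
    #include <vulkan/vulkan.h>
    #include <GL/gl.h>   // a loader such as GLEW is needed for core GL functions in practice
    #include <cstdint>
    #include <vector>

    // OpenGL path: hand the driver raw shader source text.
    GLuint CompileGlsl(const char* source, GLenum stage) {
        GLuint shader = glCreateShader(stage);        // e.g. GL_FRAGMENT_SHADER
        glShaderSource(shader, 1, &source, nullptr);  // GLSL text, not bytecode
        glCompileShader(shader);                      // the driver's compiler runs here
        return shader;
    }

    // Vulkan path: hand the driver precompiled SPIR-V words.
    VkShaderModule CreateSpirvModule(VkDevice device, const std::vector<uint32_t>& spirv) {
        VkShaderModuleCreateInfo info = {};
        info.sType    = VK_STRUCTURE_TYPE_SHADER_MODULE_CREATE_INFO;
        info.codeSize = spirv.size() * sizeof(uint32_t);  // size in bytes
        info.pCode    = spirv.data();                     // 32-bit SPIR-V words
        VkShaderModule module = VK_NULL_HANDLE;
        vkCreateShaderModule(device, &info, nullptr, &module);
        return module;
    }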


That said, Basemark has just announced that they will have (at least one?) Vulkan-compatible benchmark available in Q2 2016. It is unclear whether they mean calendar year or some arbitrary fiscal year. Basemark GPU Vulkan is planned to focus on “relevant Vulkan API performance tests as opposed to theoretical workloads”. This sounds like more than a high-draw, low detail technical demo, which is an interesting metric, but one that will probably be covered elsewhere (like the competing 3DMark from Futuremark).

Hopefully the roll-out, for developers at the very least, will occur this year, though.

Source: Basemark

NVIDIA Releases Driver 358.91 for Fallout 4, Star Wars Battlefront, Legacy of the Void

Subject: Graphics Cards | November 9, 2015 - 01:44 PM |
Tagged: nvidia, geforce, 358.91, fallout 4, Star Wars, battlefront, starcraft, legacy of the void

It's a huge month for PC gaming with the release of Bethesda's Fallout 4 and EA's Star Wars Battlefront likely to take up hours and hours of your (and my) time in the lead up to the holiday season. NVIDIA just passed over links to its latest "Game Ready" driver, version 358.91.


Fallout 4 is going to be impressive graphically

Here's the blurb from NVIDIA directly:

Continuing to fulfill our commitment to GeForce gamers to have them Game Ready for the top Holiday titles, today we released a new Game Ready driver.  This Game Ready driver will get GeForce Gamers set-up for tomorrow’s release of Fallout 4, as well as Star Wars Battlefront, StarCraft II: Legacy of the Void. WHQLed and ready for the Fallout wasteland, driver version 358.91 will deliver the best experience for GeForce gamers in some of the holiday’s hottest titles.

Other than learning that NVIDIA considers "WHQLed" to be a verb now, this is good news for PC gamers looking to dive into the world of Fallout or take up arms against the Empire on the day of release. I honestly believe that these kinds of software updates and frequent driver improvements timed to major game releases are one of the biggest advantages that GeForce gamers have over Radeon users, though I hold out hope that the red team will get on the same cadence with one Raja Koduri in charge.

You can also find more information from NVIDIA about configuring its own GPUs for Fallout 4 and for Star Wars Battlefront.

Source: NVIDIA

ASUS Announces ROG Maximus VIII Extreme/Assembly Motherboard and Matrix GTX 980 Ti

Subject: Graphics Cards, Motherboards | November 9, 2015 - 10:49 AM |
Tagged: ROG, Republic of Gamers, Maximus VIII Extreme/Assembly, Matrix GTX 980 Ti, Headphone Amp, E9018K2M, DAC, asus, 10GbE, 10 Gbps Ethernet

ASUS has announced two new products for their Republic of Gamers lineup today, and while we saw the Matrix GTX 980 Ti at IFA in September (and the Maximus VIII Extreme/Assembly was also on display), there are further details for both products in today's press release.


ASUS ROG Maximus VIII Extreme/Assembly motherboard with Matrix 980 Ti

The motherboard in question is the Maximus VIII Extreme/Assembly, a Z170 board with an external headphone amp and 10Gb/s Ethernet add-in card included. This board could run into some money.


The ROG 10G Express expansion card

While other Maximus VIII series motherboards have high-end audio support, the Extreme/Assembly further differentiates itself with an included 10Gb/s Ethernet card. ASUS has partnered with Tehuti Networks for the card, which in addition to 10Gbps also operates at conventional 100/1000 Ethernet speeds, as well as new 2.5/5Gbps over CAT5e.

“ROG 10G Express is the enterprise-speed Ethernet card, powered by Aquantia® and Tehuti Networks: these key partners are both members of the NBASE-T™ alliance, and are working closely to create the new 2.5Gbit/s and 5Gbit/s standards that will be compatible with the existing Category 5e (Cat 5e) cabling and ports. With PCI Express 2.0 x4 speed, it equips Maximus VIII Extreme/Assembly gamers for next-generation LAN speeds of up to 10Gbit/s — or up to ten times (10X) faster than today’s fastest onboard consumer Ethernet.”

This will certainly add to the cost of the motherboard considering a 10GbE card (without the 2.5/5Gbps feature) currently sells for $239.99 on Amazon.


The ROG SupremeFX Hi-Fi amplifier

If you’re an audio enthusiast (like me) you’ll be impressed by the attention to audio, which begins with the audiophile-grade ESS E9018K2M DAC chip found on other members of the Maximus VIII family, capable of not only native PCM 32-bit/384kHz playback but also up to dual-rate DSD (DSD128). The external headphone amplifier features the Texas Instruments TPA6120A2, and has a very high 6V output to drive the most challenging headphone loads.

What about the Matrix GTX 980 Ti? Full specifications were announced for the card, with boost GPU clock speeds of up to 1317 MHz.


  • Graphics Engine: NVIDIA GeForce GTX 980 Ti
  • Video memory: 6GB GDDR5
  • CUDA cores: 2816
  • GPU clock (boosted):
    • 1317MHz (OC mode)
    • 1291MHz (gaming mode)
  • GPU clock (base):
    • 1216MHz (OC mode)
    • 1190MHz (gaming mode)
  • Memory clock: 7200MHz
  • Memory interface: 384-bit
  • Display Output: 3x DisplayPort 1.2, 1x HDMI 2.0, 1x Dual-link DVI
  • Dimensions: 11.62 x 5.44 x 2 inches

Availability and pricing information for these new ROG products was not released.

Source: ASUS ROG

Report: AMD Radeon R9 380X Coming November 15 for $249

Subject: Graphics Cards | November 7, 2015 - 04:46 PM |
Tagged: tonga, rumor, report, Radeon R9 380X, r9 285, graphics card, gpu, GDDR5, amd

AMD will reportedly be launching their latest performance graphics card soon, and specs for this rumored R9 380X have now been reported at VR-Zone (via Hardware Battle).


(Image credit: VR-Zone)

Here are the full specifications from this report:

  • GPU Codename: Antigua
  • Process: 28 nm
  • Stream Processors: 2048
  • GPU Clock: Up to 1000 – 1100 MHz (exact number not known)
  • Memory Size: 4096 MB
  • Memory Type: GDDR5
  • Memory Interface: 256-bit
  • Memory Clock: 5500 – 6000 MHz (exact number not known)
  • Display Output: DisplayPort 1.2, HDMI 1.4, Dual-Link DVI-D

The launch date is reportedly November 15, and the card will (again, reportedly) carry a $249 MSRP at launch.


The 380X would build on the existing R9 285

Compared to the R9 280X, which also offers 2048 stream processors, a boost clock up to 1000 MHz, and 6000 MHz GDDR5, the R9 380X would lose memory bandwidth due to the move from a 384-bit memory interface to 256-bit. The actual performance won’t be exactly comparable, however, as the core (Antigua, previously Tonga) will have more in common with the R9 285 (Tonga), though the R9 285 only offered 1792 stream processors and 2 GB of GDDR5.
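The bandwidth difference is easy to put rough numbers on. As a quick sketch using the rumored figures above (the exact 380X memory clock isn't known yet), peak GDDR5 bandwidth is just the effective transfer rate times the bus width:

    // Back-of-the-envelope GDDR5 bandwidth: effective rate (MT/s) x bus width / 8.
    // The 380X numbers below are the rumored specs quoted above, not measured values.
    #include <cstdio>

    double BandwidthGBs(double effectiveMTps, int busWidthBits) {
        return effectiveMTps * 1e6 * (busWidthBits / 8.0) / 1e9;  // bytes/s -> GB/s
    }

    int main() {
        printf("R9 280X, 384-bit @ 6000 MT/s: ~%.0f GB/s\n", BandwidthGBs(6000, 384));  // ~288
        printf("R9 285,  256-bit @ 5500 MT/s: ~%.0f GB/s\n", BandwidthGBs(5500, 256));  // ~176
        printf("R9 380X, 256-bit @ 5500-6000: ~%.0f-%.0f GB/s\n",
               BandwidthGBs(5500, 256), BandwidthGBs(6000, 256));                       // 176-192
        return 0;
    }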

You can check out our review of the R9 285 here to see how it performed against the R9 280X, and it will certainly be interesting to see how this R9 380X will fare if these specifications are accurate.

Source: VR-Zone

NVIDIA Confirms Clock Speed, Power Increases at High Refresh Rates, Promises Fix

Subject: Graphics Cards | November 6, 2015 - 04:05 PM |
Tagged: ROG Swift, refresh rate, pg279q, nvidia, GTX 980 Ti, geforce, asus, 165hz, 144hz

Last month I wrote a story that detailed some odd behavior with NVIDIA's GeForce GTX graphics cards and high refresh rate monitors, in particular with the new ASUS ROG Swift PG279Q that has a rated 165Hz refresh rate. We found that when running this monitor at 144Hz or higher refresh rate, idle clock speeds and power consumption of the graphics card increased dramatically.

The results are much more interesting than I expected! At 60Hz refresh rate, the monitor was drawing just 22.1 watts while the entire testing system was idling at 73.7 watts. (Note: the display was set to its post-calibration brightness of just 31.) Moving up to 100Hz and 120Hz saw very minor increases in power consumption from both the system and monitor.


But the jump to 144Hz is much more dramatic – idle system power jumps from 76 watts to almost 134 watts – an increase of 57 watts! Monitor power only increased by 1 watt at that transition though. At 165Hz we see another small increase, bringing the system power up to 137.8 watts.

When running the monitor at 60Hz, 100Hz and even 120Hz, the GPU clock speed sits comfortably at 135MHz. When we increase from 120Hz to 144Hz though, the GPU clock spikes to 885MHz and stays there, even at the Windows desktop. According to GPU-Z the GPU is running at approximately 30% of the maximum TDP.

We put NVIDIA on notice with the story and followed up with emails including more information from other users as well as additional testing completed after the story was posted. The result: NVIDIA has confirmed it exists and has a fix incoming!

In an email we got from NVIDIA PR last night: 

We checked into the observation you highlighted with the newest 165Hz G-SYNC monitors.
Guess what? You were right! That new monitor (or you) exposed a bug in the way our GPU was managing clocks for GSYNC and very high refresh rates.
As a result of your findings, we are fixing the bug which will lower the operating point of our GPUs back to the same power level for other displays.
We’ll have this fixed in an upcoming driver.

This actually supports an oddity we found before: we noticed that the PG279Q at 144Hz refresh was pushing GPU clocks up pretty high while a monitor without G-Sync support at 144Hz did not. We'll see if this addresses the entire gamut of experiences that users have had (and have emailed me about) with high refresh rate displays and power consumption, but at the very least NVIDIA is aware of the problems and working to fix them.

I don't have confirmation of WHEN I'll be able to test out that updated driver, but hopefully it will be soon, so we can confirm the fix works with the displays we have in-house. NVIDIA also hasn't confirmed what the root cause of the problem is - was it related to the clock domains as we had theorized? Maybe not, since this was a G-Sync specific display issue (based on the quote above). I'll try to weasel out the technical reasoning for the bug if we can and update the story later!

Bethesda Blogs Fallout 4 Graphics Features

Subject: General Tech, Graphics Cards | November 4, 2015 - 09:37 PM |
Tagged: fallout 4, bethesda

Fallout 4 is just a few days from release, and the hype train is roaring into the station. Bethesda titles are particularly interesting for PC hardware websites because they tend to find a way into our benchmarking suites. They're relatively demanding, open world titles that are built with a unique engine, and they are popular. They are very, very popular. Skyrim is still in our lineup even though it launched four whole years ago (although that is mostly because it's our last DirectX 9 representative).


Being a demanding, open world title means that it has several interesting features. First, it has full time-of-day lighting and weather effects, which were updated in this release with enhanced post processing effects. A bright, daytime scene will have blue skies and a soft fog that scatters light. Materials are developed using a “Physically Based Shading” model, which is more of an artist feature, but it tends to simplify asset creation and make it much more consistent.

They also have “dynamic dismemberment using hardware tessellation”. In other words, GPUs will add detail to models as they are severed into smaller chunks. Need I say more?


A lot of these features are seen in many other engines lately, like Unreal Engine 4, so it shouldn't be too surprising. Bokeh Depth of Field is a blurring technique to emulate how camera apertures influence out-of-focus elements. This is most obvious in small highlights, which end up taking the shape of the camera's aperture; if a camera uses a six-blade aperture, then blurred point highlights will look like hexagons. This is very useful to emulate film. They also use “filmic tonemapping”, which is another post-process effect to emulate film.
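Bethesda hasn't said which tonemapping curve they use, so purely as an illustration of what "filmic tonemapping" means in code, here is John Hable's commonly referenced curve; the constants and white point come from his public talks, not from Fallout 4.

    // Illustrative filmic tonemapping (John Hable's "Uncharted 2" curve). A post
    // process like this compresses HDR values with a film-like toe and shoulder.
    // Generic example only -- not Fallout 4's actual shader.
    #include <cstdio>

    static float HableCurve(float x) {
        const float A = 0.15f, B = 0.50f, C = 0.10f, D = 0.20f, E = 0.02f, F = 0.30f;
        return ((x * (A * x + C * B) + D * E) / (x * (A * x + B) + D * F)) - E / F;
    }

    // Normalize against a chosen white point so "white" still maps to 1.0.
    float FilmicTonemap(float hdr, float whitePoint = 11.2f) {
        return HableCurve(hdr) / HableCurve(whitePoint);
    }

    int main() {
        for (float v : {0.1f, 1.0f, 4.0f, 16.0f})
            printf("HDR %5.1f -> display %.3f\n", v, FilmicTonemap(v));
        return 0;
    }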

Fallout 4 seems to be making use of high-end DirectX 11-era features. While this means that it should be about the best-looking game out there, it also holds a lot of promise for mods.

As you're well aware, Fallout 4 ships on November 10th (and screenshots have already leaked).

Source: Bethesda

NVIDIA Promoting Game Ready Drivers with Giveaway

Subject: Graphics Cards | November 4, 2015 - 09:01 AM |
Tagged: nvidia, graphics drivers, geforce, game ready

In mid-October, NVIDIA announced that Game Ready drivers would only be available through GeForce Experience with a registered email address, which we covered at the time. Users are able to opt out of NVIDIA's mailing list, though. They said that this would provide early access to new features, chances to win free hardware, and the ability to participate in the driver development process.


Today's announcement follows up on the “win free hardware” part. The company will be giving away $100,000 worth of prizes, including graphics cards up to the GeForce GTX 980 Ti, game keys, and SHIELD Android TV boxes. To be eligible, users need to register with GeForce Experience and use it to download the latest Game Ready driver.

Speaking of Game Ready drivers, the main purpose of this blog post is to share the list of November/December games that are in this program. NVIDIA pledges to have optimized drivers for these titles on or before their release date:

  • Assassin's Creed: Syndicate
  • Call of Duty Black Ops III
  • Civilization Online
  • Fallout 4
  • Just Cause 3
  • Monster Hunter Online
  • Overwatch
  • RollerCoaster Tycoon World
  • StarCraft II: Legacy of the Void
  • Star Wars Battlefront
  • Tom Clancy's Rainbow Six Siege
  • War Thunder

As has been the case recently, NVIDIA also plans to get every Game Ready driver certified through Microsoft's WHQL driver certification program.

Source: NVIDIA

Gaming at $165? The ASUS Strix GTX 950 DC2OC

Subject: Graphics Cards | November 3, 2015 - 02:57 PM |
Tagged: Strix GTX 950 DC2OC, strix, gtx 950, factory overclocked, asus

At $165 currently, the ASUS Strix GTX 950 DC2OC sports the same custom cooler that higher-end Strix cards use and is overclocked by 141MHz right out of the box.  That cooler helped Bjorn3D get the Boost Clock on the card up to 1425MHz and the memory to 6900MHz effective, not too shabby for such an inexpensive card.  The real question is whether that boost is enough to allow this card to provide decent performance while gaming at 1080p.  See if it can in the full review.


"Naturally NVIDIA wants to cover all price points so they did a snip and clip on the GM206 Maxwell core and trimmed 256 Cuda cores off the GTX 960 leaving 768 Shaders on the GTX 950. You still have the same 2GB GDDR5 running across a 128-bit bus and 32 ROPS but GTX 960 gets 85 TMUs while GTX 950 gets 64 and those are really the hardware trade offs NVIDIA had to do to field a $160 video card."

Here are some more Graphics Card articles from around the web:


Source: Bjorn3D

AMD Cancels Catalyst, Introduces Radeon Software Crimson

Subject: Graphics Cards | November 2, 2015 - 08:00 AM |
Tagged: radeon software, radeon, driver, crimson, catalyst, amd

For as long as I can remember, the AMD (previously ATI) graphics driver was known as Catalyst. The Catalyst Control Center (CCC) offered some impressive features and grew over time with the Radeon hardware, but it had more than its share of issues. It was slow, it was ugly, and using it was kind of awful. And so today we mourn the passing of Catalyst but welcome the birth of "Radeon Software" and the first iteration of it, Crimson.


Starting with the next major driver release from AMD, you'll see a major change in the speed, design, and usability of the most important interface between AMD and its users. I want to be clear today: we haven't had a chance to actually use the software yet, so all of the screenshots and performance claims are from an AMD presentation to the media last week.


Let's start with new branding: gone is the AMD Catalyst name, replaced by "Radeon Software" as the overarching title for the software and driver packages that AMD releases. The term "Crimson Edition" refers to the major revision of the software and will likely be a portion of the brand that changes with the year or with important architectural changes. Finally, the numeric part of the branding will look familiar and represents the year and month of release: "15.11" equates to 2015, November release.


With the new brand comes an entirely new design that AMD says targets simplicity, ease of use and speed. The currently available Catalyst Control Center software is none of those things, so it is great news for consumers that AMD has decided to address it. This is one of AMD Radeon Technologies Group SVP Raja Koduri's pet projects - and it's a great start to a leadership program that should spell positive momentum for the Radeon brand.

While the Catalyst Control Center was written on the aging and bloated .NET framework, Radeon Software is built on Qt. The first and most immediate advantage will be startup speed. AMD says that Radeon Software will open in 0.6 seconds compared to 8.0 seconds for Catalyst on a modestly configured system.


The style and interface look to be drastically improved, with well defined sections along the top and settings organized in a way that makes them easy for the user to find and adjust. Your video settings are all in a single spot, and the display configuration has its own section as well, just as they did with Catalyst, but the look and feel is completely different. Without hands-on time it's difficult to say for sure, but it appears that AMD has made major strides.


There are some interesting new capabilities as well, starting with per-game settings available in Game Manager. This is not a duplication of what GeForce Experience does in terms of adjusting in-game settings, but it does allow you to set control panel-specific settings like anti-aliasing, texture filtering quality, and vertical sync on a per-game basis. This capability was around in previous versions of Catalyst, but it was hard to utilize.

Overdrive, the AMD-integrated GPU overclocking portion of Radeon Software, gets a new feature as well: per-game overclocking settings. That's right - you will now be able to set game-specific overclocking settings for your GPU, letting you turn up the power for GTA V and then turn things down for lower power consumption and noise while catching up on new DLC for Rocket League. I can see this being an incredibly useful feature for gamers willing to take the time to customize their systems.


There are obviously more changes for Radeon Software and the first iteration of it known as Crimson, including improved Eyefinity configuration, automatic driver downloads and much more, and we look forward to playing around with the improved software package in the next few weeks. For AMD, this shows a renewed commitment to Radeon and PC gaming. With its declining market share against the powerful NVIDIA GeForce brand, AMD needs these types of changes.

Testing GPU Power Draw at Increased Refresh Rates using the ASUS PG279Q

Subject: Graphics Cards, Displays | October 24, 2015 - 04:16 PM |
Tagged: ROG Swift, refresh rate, pg279q, nvidia, GTX 980 Ti, geforce, asus, 165hz, 144hz

In the comments to our recent review of the ASUS ROG Swift PG279Q G-Sync monitor, a commenter by the name of Cyclops pointed me in the direction of an interesting quirk that I hadn’t considered before. According to reports, the higher refresh rates of some panels, including the 165Hz option available on this new monitor, can cause power draw to increase by as much as 100 watts on the system itself. While I did say in the review that the larger power brick ASUS provided with it (compared to last year’s PG278Q model) pointed toward higher power requirements for the display itself, I never thought to measure the system.

To set up a quick test, I brought the ASUS ROG Swift PG279Q back to its rightful home in front of our graphics test bed, connected an EVGA GeForce GTX 980 Ti (with GPU driver 358.50) and chained both the PC and the monitor up to separate power monitoring devices. While sitting at a Windows 8.1 desktop I cycled the monitor through different refresh rate options and then recorded the power draw from both meters after 60-90 seconds of time to idle out.


The results are much more interesting than I expected! At 60Hz refresh rate, the monitor was drawing just 22.1 watts while the entire testing system was idling at 73.7 watts. (Note: the display was set to its post-calibration brightness of just 31.) Moving up to 100Hz and 120Hz saw very minor increases in power consumption from both the system and monitor.

But the jump to 144Hz is much more dramatic – idle system power jumps from 76 watts to almost 134 watts – an increase of 57 watts! Monitor power only increased by 1 watt at that transition though. At 165Hz we see another small increase, bringing the system power up to 137.8 watts.

Interestingly, we did find that the system would repeatedly jump to as much as 200+ watts of idle power draw for 30 seconds at a time and then drop back down to the 135-140 watt area for a few minutes. It was repeatable and very measurable.

So, what the hell is going on? A look at GPU-Z clock speeds reveals the source of the power consumption increase.


When running the monitor at 60Hz, 100Hz and even 120Hz, the GPU clock speed sits comfortably at 135MHz. When we increase from 120Hz to 144Hz though, the GPU clock spikes to 885MHz and stays there, even at the Windows desktop. According to GPU-Z the GPU is running at approximately 30% of the maximum TDP.

Though details are sparse, it seems pretty obvious what is going on here. The pixel clock and the GPU clock are connected through the same domain and are not asynchronous. The GPU needs to maintain a certain pixel clock in order to support the required bandwidth of a particular refresh rate, and based on our testing, the idle clock speed of 135MHz doesn’t give the pixel clock enough throughput to power anything more than a 120Hz refresh rate.
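As a rough sanity check on that claim (the exact timing parameters the panel uses aren't published, so this assumes reduced-blanking timings with about 10% overhead), the pixel rate the GPU has to sustain scales directly with resolution and refresh rate:

    // Approximate pixel clock needed at 2560x1440, assuming reduced-blanking
    // timings add roughly 10% on top of the active pixels. Estimates only.
    #include <cstdio>

    double PixelClockMHz(int width, int height, double refreshHz, double blanking = 1.10) {
        return width * height * refreshHz * blanking / 1e6;
    }

    int main() {
        for (double hz : {60.0, 120.0, 144.0, 165.0})
            printf("%5.0f Hz -> ~%.0f MHz pixel clock\n", hz, PixelClockMHz(2560, 1440, hz));
        return 0;
    }
    // Roughly 243 MHz at 60 Hz, 487 MHz at 120 Hz, 584 MHz at 144 Hz and 669 MHz
    // at 165 Hz -- somewhere past 120 Hz is apparently the point the 135 MHz idle
    // state can no longer feed without raising the whole clock domain.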


Pushing refresh rates of 144Hz and higher causes a surprising increase in power draw

The obvious question here though is why NVIDIA would need to go all the way up to 885MHz in order to support the jump from 120Hz to 144Hz refresh rates. It seems quite extreme and the increased power draw is significant, causing the fans on the EVGA GTX 980 Ti to spin up even while sitting idle at the Windows desktop. NVIDIA is aware of the complication, though it appears that a fix won’t really be in order until an architectural shift is made down the road. With the ability to redesign the clock domains available to them, NVIDIA could design the pixel and GPU clock to be completely asynchronous, increasing one without affecting the other. It’s not a simple process though, especially in a processor this complex. We have seen Intel and AMD correctly and effectively separate clocks in recent years on newer CPU designs.

What happens to a modern AMD GPU like the R9 Fury with a similar test? To find out we connected our same GPU test bed to the ASUS MG279Q, a FreeSync enabled monitor capable of 144 Hz refresh rates, and swapped the GTX 980 Ti for an ASUS R9 Fury STRIX.



The AMD Fury does not demonstrate the same phenomenon that the GTX 980 Ti does when running at high refresh rates. The Fiji GPU runs at the same static 300MHz clock rate at 60Hz, 120Hz and 144Hz, and the power draw on the system only inches up by 2 watts or so. I wasn't able to test 165Hz refresh rates on the AMD setup, so it is possible that at that threshold the AMD graphics card would behave differently. It's also true that the NVIDIA Maxwell GPU is running at less than half the clock rate of AMD Fiji in this idle state, and that may account for the difference in pixel clocks we are seeing. Still, the NVIDIA platform draws slightly more power at idle than the AMD platform, so advantage AMD here.

For today, know that if you choose to use a 144Hz or even a 165Hz refresh rate on your NVIDIA GeForce GPU you are going to be drawing a bit more power and will be less efficient than expected even just sitting in Windows. I would bet that most gamers willing to buy high end display hardware capable of those speeds won’t be overly concerned with 50-60 watts of additional power draw, but it’s an interesting data point for us to track going forward and to compare AMD and NVIDIA hardware in the future.

Are NVIDIA and AMD ready for SteamOS?

Subject: Graphics Cards | October 23, 2015 - 03:19 PM |
Tagged: linux, amd, nvidia, steam os

Steam Machines powered by SteamOS are due to hit stores in the coming months, and in order to get the best performance you need to make sure that the GPU inside the machine plays nicely with the new OS.  To that end Phoronix has tested 22 GPUs: 15 NVIDIA cards ranging from a GTX 460 straight through to a TITAN X, and seven AMD cards from an HD 6570 through to the new R9 Fury.  Part of the reason they used fewer AMD cards in the testing stems from driver issues which prevented some models from functioning properly.  They tested Bioshock Infinite, both Metro games, CS:GO and one of Josh's favourites, DiRT Showdown.  The performance results may not be what you expect and are worth checking out fully.  Phoronix also included cost-per-performance findings for budget-conscious gamers.


"With Steam Machines set to begin shipping next month and SteamOS beginning to interest more gamers as an alternative to Windows for building a living room gaming PC, in this article I've carried out a twenty-two graphics card comparison with various NVIDIA GeForce and AMD Radeon GPUs while testing them on the Debian Linux-based SteamOS 2.0 "Brewmaster" operating system using a variety of Steam Linux games."

Here are some more Graphics Card articles from around the web:


Source: Phoronix

Report: AMD Radeon 400 Series Taped Out, Coming 2016

Subject: Graphics Cards | October 23, 2015 - 01:49 AM |
Tagged: tape out, rumor, report, Radeon 400 Series, radeon, graphics card, gpu, Ellesmere, Baffin, amd

Details are almost nonexistent, but a new report claims that AMD has reached tape out for an upcoming Radeon 400 series of graphics cards, which could be the true successor to the R9 200-series after the rebranded 3xx cards.


Image credit: WCCFtech

According to the report:

"AMD has reportedly taped out two of its next-gen GPUs, with "Ellesmere" and "Baffin" both taping out - and both part of the upcoming Radeon 400 series of video cards."

I wish there was more here to report, but if this is accurate we should start to hear some details about these new cards fairly soon. The important thing is that AMD is working on new performance mainstream cards so soon after releasing what was largely a simple rebrand across much of the 300-series lineup this year.

Source: WCCFTech

ASUS Has Created a White AMD Radeon R9 Nano

Subject: Graphics Cards | October 23, 2015 - 12:29 AM |
Tagged: r9 nano, mITX, mini-itx, graphics card, gpu, asus, amd

AMD's Radeon R9 Nano is a really cool product, able to provide much of the power of the bigger R9 Fury X without the need for more than a standard air cooler, and doing so at an impossibly tiny size for a full graphics card. And while mini-ITX graphics cards serve a small segment of the market, just who might be buying a white one when this is released?


According to a report published first by Computer Base in Germany, ASUS is releasing an all-white AMD R9 Nano, and it looks really sharp. The stock R9 Nano is no slouch in the looks department as you can see here in our full review of AMD's newest GPU, but with this design ASUS provides a totally different look that could help unify the style of your build depending on your other component choices. White is just starting to show up for things like motherboard PCBs, but it's pretty rare in part due to the difficulty in manufacturing white parts that stay white when they are subjected to heat.


There was no mention of a specific release window for the ASUS R9 Nano White, so we'll have to wait for official word on that. It is possible that ASUS has also implemented their own custom PCB, though details are not known just yet. We should know more by the end of next month, according to the report.

Gigabyte GTX 980 WATERFORCE Liquid-Cooled Graphics Card

Subject: Graphics Cards | October 21, 2015 - 07:18 AM |
Tagged: water cooling, nvidia, liquid cooled, GTX 980 WATERFORCE, GTX 980, GPU Water Block, gigabyte, AIO

Gigabyte has announced the GeForce GTX 980 WATERFORCE water-cooled graphics card, and this one is ready to go out of the box thanks to an integrated closed-loop liquid cooler.


In addition to full liquid cooling, the card - model GV-N980WAOC-4GD - also features "GPU Gauntlet Sorting", meaning that each card has a binned GTX 980 core for better overclocking performance.

"The GTX 980 WATERFORCE is fitted with only the top-performing GPU core through the very own GPU Gauntlet Sorting technology that guarantees superior overclocking capabilities in terms of excellent power switching and thermal efficiency. Only the strongest processors survived can be qualified for the GTX 980 WATERFORCE, which can fulfill both gaming enthusiasts’ and overclockers’ expectations with greater overclocking headroom, and higher, stable boost clocks under heavy load."


The cooling system for the GTX 980 WATERFORCE begins with a full-coverage block that cools the GPU, RAM, and power delivery without the need for any additional fan for board components. The tubes carrying liquid to the radiator are 45 cm SFP, which Gigabyte says "effectively prevent...leak(s) and fare a lower coolant evaporation rate", and the system is connected to a 120 mm radiator.

Gigabyte says both the fan and the pump offer low noise output, and claims that this cooling system allows the GTX 980 WATERFORCE to "perform up to 38.8% cooler than the reference cooling" for cool and quiet gaming.


The WATERFORCE card also features two DVI outputs (reference is one dual-link output) in addition to the standard three DisplayPort 1.2 and single HDMI 2.0 outputs of a GTX 980.

Pricing and availability have not been announced.

Source: Gigabyte

NVIDIA Releases Share Beta, Requires GFE for Future Beta Driver Downloads

Subject: Graphics Cards | October 15, 2015 - 12:01 PM |
Tagged: nvidia, geforce experience, beta drivers

NVIDIA just released a new driver, version 358.50, with an updated version of GeForce Experience that brings about some interesting changes to the program. First, let's talk about the positive changes, including beta access to the updated NVIDIA Share utility and improvements in GameStream.

As we detailed first with the release of the GeForce GTX 950, NVIDIA is making some impressive additions to the ShadowPlay portion of GeForce Experience, along with a rename to NVIDIA Share.


The idea is to add functionality to the ShadowPlay feature, including an in-game overlay to control the settings and options for local recording and even an in-overlay editor and previewer for your videos. This allows the gamer to view, edit, snip and then upload those completed videos to YouTube directly, without ever having to leave the game. (Though you’ll obviously want to pause it before going through that process.) Capture and “Instant Replay” support is now capable of 4K / 60 Hz capture and upload as well – nice!

Besides added capability for the local recording portion of Share, NVIDIA is also adding some new features to the mix. NVIDIA Share will now allow for point-to-point stream sharing, giving you the ability to send a link to a friend that they can open in a web browser and watch the game that you are playing with very low latency. You could use this as a way to show your friend that new skill you learned for Rocket League, to try and convince him to pick up his own copy, or even just as a social event. It supports voice communication for the ability to talk smack if necessary.


But it goes beyond just viewing the game – this point-to-point streaming allows the remote player to take over the controls to teach the local gamer something new or to finish a difficult portion of the game you might be stuck on. And if the game supports local multiplayer, you can BOTH play, as the remote gaming session will emulate a second attached Xbox / SHIELD controller on the system! This does have a time limit of 1 hour as a means to persuade game developers and publishers to not throw a hissy-fit.

The demo I saw recently was very impressive and it all worked surprisingly well out of the box.


Fans of NVIDIA local network GameStream might enjoy the upgrade to support streaming games at 4K 60 FPS - as long as you have an NVIDIA SHIELD Android TV device connected to a 4K capable TV in your home. Clearly this will make the visual presentation of your games on your television more impressive than ever and NVIDIA has added support for 5.1 channel surround sound pass through. 

There is another change coming with this release of GFE that might turn some heads, concerning the frequently updated "Game Ready" drivers NVIDIA puts out for specific game launches. These drivers have been a huge part of NVIDIA's success in recent years, as the day one experience for GeForce users has been improved over AMD in many instances. It is vital for drivers and performance to be optimal on the day of a game's release, as many enthusiast gamers are the ones going through the preloading process and midnight release timings.


Future "Game Ready" drivers will no longer be made available through and instead will ONLY be delivered through GeForce Experience. You'll also be required to have a validated email address to get the downloads for beta drivers - though NVIDIA admitted to me you would be able to opt-out of the mailing list anytime after signing up.

NVIDIA told media that this method of driver release is laying the groundwork for future plans, and that gamers would be getting early access to new features, chances to win free hardware and the ability to take part in the driver development process like never before. Honestly though, this is a way to get users to sign up for a marketing mailing list that has some specific purpose going forward. Not all mailing lists are bad obviously (have you signed up for the PC Perspective Live! Mailing List yet?!?) but there are bound to be some raised eyebrows over this.


NVIDIA says that more than 90% of its driver downloads today already come through GeForce Experience, so changes to the user experience should be minimal. We'll wait to see how the crowd reacts, but I imagine once we get past the initial shock of the changeover to this system, the roll-outs will be fast, clean and simple. But dammit - we fear change.

Source: NVIDIA

AMD Releases Catalyst 15.10 Beta Drivers

Subject: Graphics Cards | October 14, 2015 - 11:24 AM |
Tagged: radeon, dx12, DirectX 12, Catalyst 15.10 beta, catalyst, ashes of the singularity, amd


The AMD Catalyst 15.9 beta driver was released just two weeks ago, and already AMD is ready with a new version. 15.10 is available now and offers several bug fixes, though the point of emphasis is DX12 performance improvements to the Ashes of the Singularity benchmark.

From AMD:

Highlights of AMD Catalyst 15.10 Beta Windows Driver

Performance Optimizations:

  • Ashes of the Singularity - DirectX 12 Quality and Performance optimizations

Resolved Issues:

  • Video playback of MPEG2 video fails with a playback error/error code message
  • A TDR error or crash is experienced when running the Unreal Engine 4 DirectX benchmark
  • Star Wars: Battlefront is able to use high performance graphics when launched on mobile devices with switchable graphics
  • Intermittent playback issues with Cyberlink PowerDVD when connecting to a 3D display with an HDMI cable
  • Ashes of the Singularity - A 'Driver has stopped responding' error may be experienced in DirectX 12 mode
  • Driver installation may halt on some configurations
  • A TDR error may be experienced while toggling between minimized and maximized mode while viewing 4K YouTube content

Known Issues:

  • Ashes of the Singularity may crash on some AMD 300 series GPUs
  • Core clock fluctuations may be experienced when FreeSync and FRTC are both enabled on some AMD CrossFire systems
  • Ashes of the Singularity may fail to launch on some GPUs with 2GB Video Memory. AMD continues to work with Stardock to resolve the issue. In the meantime, deleting the game config file helps resolve the issue
  • The secondary display adapter is missing in the Device Manager and the AMD Catalyst Control Center after installing the driver on a Microsoft Windows 8.1 system
  • Elite: Dangerous - poor performance may be experienced in SuperCruise mode
  • A black screen may be encountered on bootup on Windows 10 systems. The system will ultimately continue to the Windows login screen

The driver is available now from AMD's Catalyst beta download page.

Source: AMD

NVIDIA Releases 358.50 WHQL Game Ready Drivers

Subject: Graphics Cards | October 7, 2015 - 01:45 PM |
Tagged: opengl es 3.2, nvidia, graphics drivers, geforce

The GeForce Game Ready 358.50 WHQL driver has been released so users can perform their updates before the Star Wars Battlefront beta goes live tomorrow (unless you already received a key). As with every “Game Ready” driver, NVIDIA ensures that the essential performance and stability tweaks are rolled in to this version, and tests it against the title. It is WHQL certified too, which is a recent priority for NVIDIA. Years ago, “Game Ready” drivers were often classified as Beta, but the company now intends to pass their work through Microsoft for a final sniff test.


Another interesting addition to this driver is the inclusion of the OpenGL 2015 ARB extensions and OpenGL ES 3.2. Previously, to use OpenGL ES 3.2 on the PC (if you wanted to develop software with it, for instance) you needed a separate driver release, since the spec was only released at SIGGRAPH. It has now been rolled into the main, public driver. The mobile devs who use their production machines to play Battlefront rejoice, I guess. It might also be useful if developers, for instance at Mozilla or Google, want to create pre-release implementations of future WebGL specs too.
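As an aside for anyone wanting to try it, the practical effect is that you can now request an ES context straight from the shipping driver. GLFW isn't something NVIDIA mentions; it's just used here as one convenient way to ask for such a context:

    // Requesting an OpenGL ES 3.2 context on a desktop GPU. GLFW is used purely
    // as an example windowing library; any EGL/WGL path that exposes ES works.
    #include <GLFW/glfw3.h>
    #include <cstdio>

    int main() {
        if (!glfwInit()) return 1;

        glfwWindowHint(GLFW_CLIENT_API, GLFW_OPENGL_ES_API);  // ES rather than desktop GL
        glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
        glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 2);

        GLFWwindow* window = glfwCreateWindow(640, 480, "ES 3.2 test", nullptr, nullptr);
        if (!window) {  // creation fails if the driver cannot provide ES 3.2
            printf("OpenGL ES 3.2 context not available\n");
            glfwTerminate();
            return 1;
        }
        glfwMakeContextCurrent(window);
        printf("GL_VERSION: %s\n", (const char*)glGetString(GL_VERSION));  // should report an ES 3.2 string
        glfwDestroyWindow(window);
        glfwTerminate();
        return 0;
    }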

Source: NVIDIA

Who Decided to Call a Lightweight API "Metal"?

Subject: Graphics Cards | October 7, 2015 - 07:01 AM |
Tagged: opengl, metal, apple

Ars Technica took it upon themselves to benchmark Metal in the latest OS X El Capitan release. Even though OpenGL on Mac OS X is not considered to be on par with its Linux counterpart, probably due to the driver situation until recently, it pulls ahead of Metal in many situations.


Image Credit: Ars Technica

Unlike the other graphics APIs, Metal uses the traditional binding model. Basically, you have a GPU object that you attach your data to, then call one of a handful of “draw” functions to signal the driver. DirectX 12, Vulkan, and Mantle, on the other hand, treat work like commands on queues. The latter model works better in multi-core environments, and it aligns with GPU compute APIs, but the former is easier to port OpenGL and DirectX 11 applications to.
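To make the two models concrete, here is a generic contrast: a bind-and-draw path in the style the article attributes to Metal (shown with OpenGL, since that is the familiar binding-model API), against a queue-style submission (shown with Vulkan syntax as eventually published, since the API was still provisional at this point). Neither snippet is from Metal, GFXBench, or Ars Technica's testing.

    // Two ways of handing work to a GPU. Generic illustration of the models
    // discussed above, not code from any of the APIs or benchmarks being tested.
    #include <GL/gl.h>          // binding-model example (a loader is needed for core calls)
    #include <vulkan/vulkan.h>  // queue/command-buffer model example

    // Binding model: mutate global state on "the" GPU object, then call draw.
    void DrawBindingStyle(GLuint program, GLuint vao, GLuint texture, GLsizei vertexCount) {
        glUseProgram(program);                        // bind shader
        glBindVertexArray(vao);                       // bind geometry
        glBindTexture(GL_TEXTURE_2D, texture);        // bind resources
        glDrawArrays(GL_TRIANGLES, 0, vertexCount);   // the driver sorts out the rest
    }

    // Queue model: command buffers are recorded ahead of time (potentially on
    // many threads) and submitted to a queue in one call.
    void SubmitQueueStyle(VkQueue queue, VkCommandBuffer recordedCommands) {
        VkSubmitInfo submit = {};
        submit.sType              = VK_STRUCTURE_TYPE_SUBMIT_INFO;
        submit.commandBufferCount = 1;
        submit.pCommandBuffers    = &recordedCommands;
        vkQueueSubmit(queue, 1, &submit, VK_NULL_HANDLE);
    }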

Ars Technica notes that faster GPUs, such as the NVIDIA GeForce GTX 680MX, show higher gains than slower ones. Their “best explanation” is that “faster GPUs can offload more work from the CPU”. That is pretty much true, yes. The new APIs are designed to keep GPUs loaded and working as much as possible, because they really do sit around doing nothing a lot. If a GPU is already fully loaded because it can't accept much work in the first place, then there is little benefit to decreasing CPU load or spreading submission out across multiple cores.

Granted, there are many ways that benchmarks like these could be incorrectly used. I'll assume that Ars Technica and GFXBench are not making any simple mistakes, though, but it's good to be critical just in case.

Source: Ars Technica