Giving the R9 Nano some breathing space

Subject: Graphics Cards | November 25, 2015 - 02:54 PM |
Tagged: radeon, r9 nano, mini ITX, amd, obsidian 250d, corsair

When Ryan tested how the R9 Nano performs in tiny cases, he chose the Cooler Master Elite 110, the Raijintek Metis, the Lian Li PC-Q33BL and that company's PC-Q30X.  The card did slow down somewhat because of a lack of airflow in the case, but that was quickly remedied with a drill press, and we saw vast improvements in the in-game frequencies.  [H]ard|OCP performed a similar experiment with the Cooler Master Elite 110 and came to much the same conclusions.

They are now back at it, this time testing in a Corsair Obsidian Series 250D Mini ITX case, which is large enough to accommodate a full sized GPU and provide improved airflow.  They tested the Nano against a GTX 980 Ti and an R9 Fury X, as both cost roughly the same as the tiny little Nano.  The cards were tested at both 1440p and 4K resolutions, and as you might reasonably expect the Nano fell behind, especially at 4K.  If you have a case which can fit a full sized GPU then the Nano does not make sense to purchase; however, in cases in which the larger cards will not fit, the Nano's performance is unmatched.


"Our second installment covering our AMD Radeon R9 Nano in a Small Form Factor chassis is finally done. We will upgrade the case to a Corsair Obsidian Series 250D Mini ITX PC Case and compare the R9 Nano to price competitive video cards that can be installed. We game at 1440p and 4K for the ultimate small form factor experience."



Source: [H]ard|OCP

PCPer Live! AMD Radeon Crimson Live Stream and Giveaway!

Subject: Graphics Cards | November 24, 2015 - 09:08 PM |
Tagged: video, radeon software, radeon, live, giveaway, freesync, crimson, contest, amd

UPDATE: Did you miss today's live stream? No worries! You can get the full rundown of the new Radeon Software Crimson Edition driver and get details on new features like FreeSync Low Frame Rate Compensation, DX9 frame pacing, custom resolutions, and more. Check out the video embed below.

It's nearly time for the holidays to begin but that doesn't mean the hardware and software news train comes to a halt! This week we are hosting AMD in the PC Perspective offices for a live stream to discuss the upcoming release of the new AMD Radeon Software Crimson Edition. Earlier in the month we showed you a preview of what changes were coming to the AMD GPU driver and now we are going to not only demo it for you but let the community ask AMD questions directly about it!


And what's a live stream without prizes? AMD has stepped up to the plate to offer up some awesome hardware for those of you that tune in to watch the live stream! 

  • 2 x AMD Radeon R9 Nano 4GB Fiji Graphics Cards
  • 2 x PowerColor PCS+ Radeon R9 380 Graphics Cards




AMD Radeon Software Crimson Live Stream and Giveaway

12pm PT / 3pm ET - November 24th

PC Perspective Live! Page

Need a reminder? Join our live mailing list!

The event will take place Tuesday, November 24th at 12pm PT / 3pm ET on our PC Perspective Live! page. There you’ll be able to catch the live video stream as well as use our chat room to interact with the audience. To win the prizes you will have to be watching the live stream, with exact details of the methodology for handing out the goods coming at the time of the event.

I will be joined by Adrian Costelo, Product Manager for Radeon Software, and Steven Gans, UX Designer for Radeon Software. In short, these are the two people you want to hear from and have answer your questions!

If you have questions, please leave them in the comments below and we'll look through them just before the start of the live stream. Of course you'll be able to tweet us questions @pcper and we'll be keeping an eye on the IRC chat as well for more inquiries. What do you want to know and hear from AMD?

So join us! Set your calendar for Tuesday at 12pm PT / 3pm ET and be here at PC Perspective to catch it. If you are a forgetful type of person, sign up for the PC Perspective Live mailing list that we use exclusively to notify users of upcoming live streaming events including these types of specials and our regular live podcast. I promise, no spam will be had!

Source: PCPer Live!

AMD is the new King of Crimson

Subject: Graphics Cards | November 24, 2015 - 01:36 PM |
Tagged: radeon software, radeon, low frame rate compensation, freesync, frame pacing, crimson, AMD VISION Engine

In case you thought we missed something in our discussion of the new AMD Crimson software, you can check out what some of the other websites thought of the new release.  The Tech Report is a good first stop; they used the Fable Legends DX12 benchmark to test the improvements to frame times, which will be of interest to those who do not obsess over DX9 games and their performance.  They also delve a bit more into the interface, so you can see what the new screens will look like as well as learn the path that will take you to a familiar settings screen.  Check out their impressions right here.


"AMD's Radeon Software Crimson Edition is the second in a line of major annual graphics driver updates from the company. Crimson also replaces the Catalyst Control Center software with a faster, more refined utility called Radeon Settings. We dug in to see what Crimson has to offer."


The R9 380X arrives

Subject: Graphics Cards | November 19, 2015 - 01:37 PM |
Tagged: asus, strix, Radeon R9 380X, tonga

The full serving of Tonga in the AMD Radeon R9 380X has 32 compute units, 2048 stream processors, 32 ROPs and 128 texture units, which compares favourably to the 28 CUs, 1792 stream processors, 32 ROPs and 112 texture units of the existing R9 380.  Memory bandwidth and capacity are unchanged: 182GB/sec of bandwidth at the stock effective memory speed of 5.7GHz, and the GPU clock remains around 970MHz as well.  The MSRP is set at $230 for the base model.
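That bandwidth figure is straightforward to check from the effective memory clock and Tonga's 256-bit bus; here is a quick back-of-the-envelope sketch (assuming the 256-bit bus width of the full Tonga core):

```python
def memory_bandwidth_gbs(effective_clock_mhz, bus_width_bits):
    """GDDR5 bandwidth: effective transfer rate times bus width in bytes."""
    return effective_clock_mhz * 1e6 * (bus_width_bits / 8) / 1e9

# R9 380X: 5.7GHz effective on a 256-bit bus
print(memory_bandwidth_gbs(5700, 256))  # ~182.4 GB/s, matching the quoted spec
```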

With the specifications out of the way, the next question to answer is how it fares against the direct competition, the GTX 960 and 970.  That is where this review from [H]ard|OCP comes in, with a look at the ASUS STRIX R9 380X DirectCU II OC, running at 1030MHz by default and 1050MHz at the push of a button.  Their tests at 1440p were a little disappointing; the card did not perform well until advanced graphics settings were reduced, but at 1080p they saw great performance with all the bells and whistles turned up.  Pricing will be key for this product: if sellers can keep it at or below MSRP it is a better deal than the GTX 970, but if prices creep closer then the 970 is the better value.


"AMD has let loose the new AMD Radeon R9 380X GPU, today we evaluate the ASUS STRIX R9 380X OC video card and find out how it compares to a 4GB GeForce GTX 960 and GeForce GTX 970 for a wide picture of where performance lies at 1440p or where it does not at 1440p considering your viewpoint."


Source: [H]ard|OCP

AMD R9 Fury X Voltage and HBM Unlocked with Sapphire TriXX 5.2.1

Subject: Graphics Cards | November 18, 2015 - 01:22 PM |
Tagged: Sapphire TriXX, R9 Fury X, overclocking, hbm, amd

The new version (5.2.1) of Sapphire's TriXX overclocking utility has been released, and it finally unlocks voltage and HBM overclocking for AMD's R9 Fury X.


(Image credit: Sapphire)

Previously the voltage of the R9 Fury X core was not adjustable, leaving what would seem to be quite a bit of untapped headroom for the cards which shipped with a powerful liquid-cooling solution rated for 500 watts of thermal dissipation. This should allow for much better results than what Ryan was able to achieve when he attempted overclocking for our review of the R9 Fury X in June (without the benefit of voltage adjustments):

"My net result: a clock speed of 1155 MHz rather than 1050 MHz, an increase of 10%. That's a decent overclock for a first attempt with a brand new card and new architecture, but from the way that AMD had built up the "500 watt cooler" and the "375 watts available power" from the dual 8-pin power connectors, I was honestly expecting quite a bit more. Hopefully we'll see some community adjustments, like voltage modifications, that we can mess around with later..."


(Image credit: Sapphire)

Will TriXX v5.2.1 unleash the full potential of the Fury X? We will have to wait for some overclocked benchmark numbers, but having the ability can only be a good thing for enthusiasts.
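As a rough sketch of why voltage control matters, consider the classic CMOS dynamic-power approximation P ∝ C·V²·f (an illustration on my part, not an AMD-published figure): raising clocks alone scales power roughly linearly, while the voltage bump often needed to stabilize higher clocks scales it with the square of the voltage, which is exactly the kind of headroom the Fury X's 500 watt cooler is there to absorb.

```python
# Back-of-the-envelope dynamic power scaling: P ~ C * V^2 * f.
# Illustrative only; real GPU power draw also includes leakage and
# board losses, and the +5% voltage figure below is hypothetical.
def relative_power(voltage_scale, clock_scale):
    return voltage_scale ** 2 * clock_scale

print(relative_power(1.00, 1155 / 1050))  # stock voltage, 10% OC: ~1.10x power
print(relative_power(1.05, 1155 / 1050))  # with a +5% voltage bump: ~1.21x power
```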

Source: WCCFtech

AMD Plans Two GPUs in 2016

Subject: Graphics Cards | November 16, 2015 - 09:34 PM |
Tagged: amd, radeon, GCN

Late last week, Forbes published an editorial by Patrick Moorhead, who spoke with Raja Koduri about AMD's future in the GPU industry. Patrick was a Corporate Vice President at AMD until late 2011. He then created Moor Insights and Strategy, which provides industry analysis. He regularly publishes editorials to Forbes and CIO. Raja Koduri is the head of the Radeon Technologies Group at AMD.


I'm going to be focusing on a brief mention a little more than half-way through, though. According to the editorial, Raja stated that AMD will release two new GPUs in 2016. “He promised two brand new GPUs in 2016, which are hopefully going to both be 14nm/16nm FinFET from GlobalFoundries or TSMC and will help make Advanced Micro Devices more power and die size competitive.”

We have been expecting AMD's Arctic Islands to arrive at some point in 2016, which will compete with NVIDIA's Pascal architecture at the high end. AMD's product stack has been relatively stale for a while, with most of the innovation occurring at the top end and pushing the previous top-end down a bit. Two new GPUs almost certainly means that the second will focus on the lower end of the market, using the smaller process to make products that are more power efficient, cheaper per unit, and equipped with newer features.

Add the recent report of the Antigua architecture, which I assume is in addition to the two GPUs Raja mentioned, and AMD's product stack could look much less familiar next year.

Source: Forbes

Putting the R9 Nano under the microscope

Subject: Graphics Cards | November 13, 2015 - 03:30 PM |
Tagged: radeon, r9 nano, amd

We are not the only ones investigating usage scenarios for AMD's tiny R9 Nano; [H]ard|OCP has also recently looked at this card to determine if or when there is a good reason to pay the extra price for this tiny GPU.  This particular review focuses on performance against a similarly sized Gigabyte GTX 970 in a Cooler Master Elite 110; a follow-up will run the cards inside a Corsair Obsidian Series 250D case.  At 1080p the cards performed at very similar levels, with the significantly more expensive Nano holding a small lead, while at 1440p the R9 Nano truly shines.  This card is certainly not for everyone, and both the Fury X and GTX 980 Ti offer much better performance at a similar price point, but neither of them will fit inside the case of someone determined to build a tiny gaming machine.


"This evaluation will compare the new retail purchased Radeon R9 Nano with a GIGABYTE GeForce GTX 970 N970-IX-OC small form factor video card in a mini-ITX Cooler Master Elite 110 Intel Skylake system build. We will find out if the higher priced Nano is worth the money for a 1440p and 1080p gameplay experience in a tiny footprint. "


Source: [H]ard|OCP

Basemark GPU Vulkan Announced for Q2'16 Release

Subject: Graphics Cards | November 10, 2015 - 03:02 PM |
Tagged: vulkan

The Vulkan API, announced during the Game Developers Conference last March, is a low-level method to communicate with GPUs. It is essentially a fork of AMD's Mantle, modified to include things like the SPIR-V bytecode (an evolution of OpenCL's SPIR) for its shading and compute language, rather than DirectX and Mantle's HLSL (or OpenGL's GLSL). At the time, Khronos mentioned that Vulkan is expected to be released in 2015, and that they intend to “under promise and over deliver” on that schedule. It being November, I expect that something came up, which isn't too surprising as Microsoft seems to have had similar issues with DirectX 12.


That said, Basemark has just announced that they will have (at least one?) Vulkan-compatible benchmark available in Q2 2016. It is unclear whether they mean calendar year or some arbitrary fiscal year. Basemark GPU Vulkan is planned to focus on “relevant Vulkan API performance tests as opposed to theoretical workloads”. This sounds like more than a high-draw, low detail technical demo, which is an interesting metric, but one that will probably be covered elsewhere (like the competing 3DMark from Futuremark).

Hopefully the roll-out, for developers at the very least, will occur this year, though.

Source: Basemark

NVIDIA Releases Driver 358.91 for Fallout 4, Star Wars Battlefront, Legacy of the Void

Subject: Graphics Cards | November 9, 2015 - 01:44 PM |
Tagged: nvidia, geforce, 358.91, fallout 4, Star Wars, battlefront, starcraft, legacy of the void

It's a huge month for PC gaming with the release of Bethesda's Fallout 4 and EA's Star Wars Battlefront likely to take up hours and hours of your (and my) time in the lead up to the holiday season. NVIDIA just passed over links to its latest "Game Ready" driver, version 358.91.


Fallout 4 is going to be impressive graphically

Here's the blurb from NVIDIA directly:

Continuing to fulfill our commitment to GeForce gamers to have them Game Ready for the top Holiday titles, today we released a new Game Ready driver.  This Game Ready driver will get GeForce Gamers set-up for tomorrow’s release of Fallout 4, as well as Star Wars Battlefront, StarCraft II: Legacy of the Void. WHQLed and ready for the Fallout wasteland, driver version 358.91 will deliver the best experience for GeForce gamers in some of the holiday’s hottest titles.

Other than learning that NVIDIA considers "WHQLed" to be a verb now, this is good news for PC gamers looking to dive into the world of Fallout or take up arms against the Empire on the day of release. I honestly believe that these kinds of software updates and frequent driver improvements timed to major game releases are one of the biggest advantages that GeForce gamers have over Radeon users; though I hold out hope that the red team will get on the same cadence with one Raja Koduri in charge.

You can also find more information from NVIDIA about configuring its own GPUs for Fallout 4 and for Star Wars Battlefront on its GeForce site.

Source: NVIDIA

ASUS Announces ROG Maximus VIII Extreme/Assembly Motherboard and Matrix GTX 980 Ti

Subject: Graphics Cards, Motherboards | November 9, 2015 - 10:49 AM |
Tagged: ROG, Republic of Gamers, Maximus VIII Extreme/Assembly, Matrix GTX 980 Ti, Headphone Amp, E9018K2M, DAC, asus, 10GbE, 10 Gbps Ethernet

ASUS has announced two new products for their Republic of Gamers lineup today, and while we saw the Matrix GTX 980 Ti at IFA in September (and the Maximus VIII Extreme/Assembly was also on display), there are further details for both products in today's press release.


ASUS ROG Maximus VIII Extreme/Assembly motherboard with Matrix 980 Ti

The motherboard in question is the Maximus VIII Extreme/Assembly, a Z170 board with an external headphone amp and 10Gb/s Ethernet add-in card included. This board could run into some money.


The ROG 10G Express expansion card

While other Maximus VIII series motherboards have high-end audio support, the Extreme/Assembly further differentiates itself with an included 10Gb/s Ethernet card. ASUS has partnered with Tehuti Networks for the card, which in addition to 10Gbps also operates at conventional 100/1000 Ethernet speeds, as well as new 2.5/5Gbps over CAT5e.

“ROG 10G Express is the enterprise-speed Ethernet card, powered by Aquantia® and Tehuti Networks: these key partners are both members of the NBASE-T™ alliance, and are working closely to create the new 2.5Gbit/s and 5Gbit/s standards that will be compatible with the existing Category 5e (Cat 5e) cabling and ports. With PCI Express 2.0 x4 speed, it equips Maximus VIII Extreme/Assembly gamers for next-generation LAN speeds of up to 10Gbit/s — or up to ten times (10X) faster than today’s fastest onboard consumer Ethernet.”
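The PCI Express 2.0 x4 interface mentioned in the quote checks out against some quick arithmetic: four lanes at 5 GT/s with 8b/10b encoding yield about 16 Gbit/s of usable throughput, comfortably above a single 10GbE link. A minimal sketch:

```python
# PCIe 2.0 runs at 5 GT/s per lane with 8b/10b encoding,
# so only 8 of every 10 bits on the wire carry data.
def pcie2_usable_gbits(lanes):
    return lanes * 5.0 * (8 / 10)

print(pcie2_usable_gbits(4))  # 16.0 Gbit/s, ample headroom for 10GbE
```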

This will certainly add to the cost of the motherboard considering a 10GbE card (without the 2.5/5Gbps feature) currently sells for $239.99 on Amazon.


The ROG SupremeFX Hi-Fi amplifier

If you’re an audio enthusiast (like me) you’ll be impressed by the attention to audio, which begins with the audiophile-grade ESS E9018K2M DAC chip found on other members of the Maximus VIII family, capable of not only native 32-bit/384kHz PCM playback but also dual-rate DSD (DSD128). The external headphone amplifier features the Texas Instruments TPA6120A2, and has a very high 6V output to drive the most challenging headphone loads.

What about the Matrix GTX 980 Ti? Full specifications were announced for the card, with boost GPU clock speeds of up to 1317 MHz.


  • Graphics Engine: NVIDIA GeForce GTX 980 Ti
  • Video memory: 6GB GDDR5
  • CUDA cores: 2816
  • GPU clock (boosted):
    • 1317MHz (OC mode)
    • 1291MHz (gaming mode)
  • GPU clock (base):
    • 1216MHz (OC mode)
    • 1190MHz (gaming mode)
  • Memory clock: 7200MHz
  • Memory interface: 384-bit
  • Display Output: 3x DisplayPort 1.2, 1x HDMI 2.0, 1x Dual-link DVI
  • Dimensions: 11.62 x 5.44 x 2 inches

Availability and pricing information for these new ROG products was not released.

Source: ASUS ROG

Report: AMD Radeon R9 380X Coming November 15 for $249

Subject: Graphics Cards | November 7, 2015 - 04:46 PM |
Tagged: tonga, rumor, report, Radeon R9 380X, r9 285, graphics card, gpu, GDDR5, amd

AMD will reportedly be launching their latest performance graphics card soon, and specs for this rumored R9 380X have now been reported at VR-Zone (via Hardware Battle).


(Image credit: VR-Zone)

Here are the full specifications from this report:

  • GPU Codename: Antigua
  • Process: 28 nm
  • Stream Processors: 2048
  • GPU Clock: Up to 1000 – 1100 MHz (exact number not known)
  • Memory Size: 4096 MB
  • Memory Type: GDDR5
  • Memory Interface: 256-bit
  • Memory Clock: 5500 – 6000 MHz (exact number not known)
  • Display Output: DisplayPort 1.2, HDMI 1.4, Dual-Link DVI-D

The launch date is reportedly November 15, and the card will (again, reportedly) carry a $249 MSRP at launch.


The 380X would build on the existing R9 285

Compared to the R9 280X, which also offers 2048 stream processors, a boost clock up to 1000 MHz, and 6000 MHz GDDR5, the R9 380X would lose memory bandwidth due to the move from a 384-bit memory interface to 256-bit: roughly 288 GB/s for the 280X versus 176 – 192 GB/s at the reported memory clocks. The actual performance won’t be exactly comparable, however, as the core (Antigua, previously Tonga) will share more in common with the R9 285 (Tonga), though the R9 285 only offered 1792 stream processors and 2 GB of GDDR5.

You can check out our review of the R9 285 here to see how it performed against the R9 280X, and it will certainly be interesting to see how this R9 380X will fare if these specifications are accurate.

Source: VR-Zone

NVIDIA Confirms Clock Speed, Power Increases at High Refresh Rates, Promises Fix

Subject: Graphics Cards | November 6, 2015 - 04:05 PM |
Tagged: ROG Swift, refresh rate, pg279q, nvidia, GTX 980 Ti, geforce, asus, 165hz, 144hz

Last month I wrote a story that detailed some odd behavior with NVIDIA's GeForce GTX graphics cards and high refresh rate monitors, in particular with the new ASUS ROG Swift PG279Q that has a rated 165Hz refresh rate. We found that when running this monitor at 144Hz or higher refresh rate, idle clock speeds and power consumption of the graphics card increased dramatically.

From that story: at a 60Hz refresh rate the test system idled at just 73.7 watts, and moving up to 100Hz and 120Hz barely changed things, but the jump to 144Hz pushed idle system power to almost 134 watts. GPU-Z made the cause plain: at 144Hz and above the GPU clock spikes from its 135MHz idle state to 885MHz and stays there, even at the Windows desktop.

We put NVIDIA on notice with the story and followed up with emails including more information from other users as well as additional testing completed after the story was posted. The result: NVIDIA has confirmed the issue exists and has a fix incoming!

In an email we got from NVIDIA PR last night: 

We checked into the observation you highlighted with the newest 165Hz G-SYNC monitors.
Guess what? You were right! That new monitor (or you) exposed a bug in the way our GPU was managing clocks for GSYNC and very high refresh rates.
As a result of your findings, we are fixing the bug which will lower the operating point of our GPUs back to the same power level for other displays.
We’ll have this fixed in an upcoming driver.

This actually supports an oddity we found before: we noticed that the PG279Q at 144Hz refresh was pushing GPU clocks up pretty high while a monitor without G-Sync support at 144Hz did not. We'll see if this addresses the entire gamut of experiences that users have had (and have emailed me about) with high refresh rate displays and power consumption, but at the very least NVIDIA is aware of the problems and working to fix them.

I don't have confirmation of WHEN I'll be able to test out that updated driver, but hopefully it will be soon, so we can confirm the fix works with the displays we have in-house. NVIDIA also hasn't confirmed what the root cause of the problem is - was it related to the clock domains as we had theorized? Maybe not, since this was a G-Sync specific display issue (based on the quote above). I'll try to weasel out the technical reasoning for the bug if we can and update the story later!

Bethesda Blogs Fallout 4 Graphics Features

Subject: General Tech, Graphics Cards | November 4, 2015 - 09:37 PM |
Tagged: fallout 4, bethesda

Fallout 4 is just a few days from release, and the hype train is roaring into the station. Bethesda titles are particularly interesting for PC hardware websites because they tend to find a way into our benchmarking suites. They're relatively demanding, open world titles that are built with a unique engine, and they are popular. They are very, very popular. Skyrim is still in our lineup even though it launched four whole years ago (although that is mostly because it's our last DirectX 9 representative).


Being a demanding, open world title means that it has several interesting features. First, it has full time-of-day lighting and weather effects, which were updated in this release with enhanced post processing effects. A bright, daytime scene will have blue skies and a soft fog that scatters light. Materials are developed using a “Physically Based Shading” model, which is more of an artist feature, but it tends to simplify asset creation and make it much more consistent.

They also have “dynamic dismemberment using hardware tessellation”. In other words, GPUs will add detail to models as they are severed into smaller chunks. Need I say more?


A lot of these features have been showing up in many other engines lately, like Unreal Engine 4, so it shouldn't be too surprising. Bokeh Depth of Field is a blurring technique to emulate how camera apertures influence out-of-focus elements. This is most obvious in small highlights, which end up taking the shape of the camera's aperture. If a camera uses a six-blade aperture, then blurred point blooms will look like hexagons. This is very useful to emulate film. They also use “filmic tonemapping”, which is another post process effect to emulate film.
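To make the hexagon point concrete, here is a minimal sketch (my own illustration, not Bethesda's implementation) of how a renderer can produce six-sided bokeh: blur the bright pixels with a hexagonal kernel instead of a circular or Gaussian one.

```python
import numpy as np

def hex_kernel(radius):
    """Hexagonal blur kernel: a regular hexagon is the intersection
    of three slabs whose normals are 60 degrees apart."""
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    apothem = radius * np.cos(np.pi / 6)
    inside = np.ones_like(x, dtype=bool)
    for angle in (0.0, np.pi / 3, 2 * np.pi / 3):
        inside &= np.abs(x * np.cos(angle) + y * np.sin(angle)) <= apothem
    kernel = inside.astype(float)
    return kernel / kernel.sum()
```

Convolving an image's bright highlights with hex_kernel (via scipy.ndimage.convolve, for example) turns point blooms into hexagons, which is exactly the six-blade aperture look described above.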

Fallout 4 seems to be making use of high-end DirectX 11-era features. While this means that it should be about the best-looking game out there, it also holds a lot of promise for mods.

As you're well aware, Fallout 4 ships on November 10th (and screenshots have already leaked).

Source: Bethesda

NVIDIA Promoting Game Ready Drivers with Giveaway

Subject: Graphics Cards | November 4, 2015 - 09:01 AM |
Tagged: nvidia, graphics drivers, geforce, game ready

In mid-October, NVIDIA announced that Game Ready drivers would only be available through GeForce Experience with a registered email address, which we covered. Users are able to opt-out of NVIDIA's mailing list, though. They said that this would provide early access to new features, chances to win free hardware, and the ability to participate in the driver development process.


Today's announcement follows up on the “win free hardware” part. The company will be giving away $100,000 worth of prizes, including graphics cards up to the GeForce GTX 980 Ti, game keys, and SHIELD Android TV boxes. To be eligible, users need to register with GeForce Experience and use it to download the latest Game Ready driver.

Speaking of Game Ready drivers, the main purpose of this blog post is to share the list of November/December games that are in this program. NVIDIA pledges to have optimized drivers for these titles on or before their release date:

  • Assassin's Creed: Syndicate
  • Call of Duty Black Ops III
  • Civilization Online
  • Fallout 4
  • Just Cause 3
  • Monster Hunter Online
  • Overwatch
  • RollerCoaster Tycoon World
  • StarCraft II: Legacy of the Void
  • Star Wars Battlefront
  • Tom Clancy's Rainbow Six Siege
  • War Thunder

As has been the case recently, NVIDIA also plans to get every Game Ready driver certified through Microsoft's WHQL driver certification program.

Source: NVIDIA

Gaming at $165? The ASUS Strix GTX 950 DC2OC

Subject: Graphics Cards | November 3, 2015 - 02:57 PM |
Tagged: Strix GTX 950 DC2OC, strix, gtx 950, factory overclocked, asus

Currently at $165, the ASUS Strix GTX 950 DC2OC sports the same custom cooler that higher end Strix cards use and is overclocked by 141MHz right out of the box.  That cooler helped Bjorn3D push the Boost Clock on the card up to 1425MHz and the memory to 6900MHz effective; not too shabby for such an inexpensive card.  The real question is whether that boost is enough to allow this card to provide decent performance while gaming at 1080p.  See if it can in the full review.


"Naturally NVIDIA wants to cover all price points so they did a snip and clip on the GM206 Maxwell core and trimmed 256 Cuda cores off the GTX 960 leaving 768 Shaders on the GTX 950. You still have the same 2GB GDDR5 running across a 128-bit bus and 32 ROPS but GTX 960 gets 85 TMUs while GTX 950 gets 64 and those are really the hardware trade offs NVIDIA had to do to field a $160 video card."


Source: Bjorn3D

AMD Cancels Catalyst, Introduces Radeon Software Crimson

Subject: Graphics Cards | November 2, 2015 - 08:00 AM |
Tagged: radeon software, radeon, driver, crimson, catalyst, amd

For as long as I can remember, the AMD (previously ATI) graphics driver was known as Catalyst. The Catalyst Control Center (CCC) offered some impressive features and grew over time with the Radeon hardware, but it had more than its share of issues. It was slow, it was ugly and using it was kind of awful. Today we mourn the passing of Catalyst but welcome the birth of "Radeon Software" and its first iteration, Crimson.


Starting with the next major driver release from AMD you'll see a major change in the speed, design and usability of the most important interface between AMD and its users. I want to be clear today: we haven't had a chance to actually use the software yet, so all of the screenshots and performance claims are from an AMD presentation to the media last week.


Let's start with the new branding: gone is the AMD Catalyst name, replaced by "Radeon Software" as the overarching title for the software and driver packages that AMD releases. The term "Crimson Edition" refers to the major revision of the software and will likely be a portion of the brand that changes with the year or with important architectural changes. Finally, the numeric part of the branding will look familiar and represents the year and month of release: "15.11" equates to a November 2015 release.
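The convention is simple enough to check mechanically; a trivial sketch of the year.month scheme described above:

```python
def parse_radeon_version(version):
    """Split a 'YY.MM' Radeon Software version into (year, month)."""
    year, month = version.split(".")
    return 2000 + int(year), int(month)

print(parse_radeon_version("15.11"))  # (2015, 11), i.e. November 2015
```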


With the new brand comes an entire new design that AMD says targets simplicity, ease of use and speed. The currently available Catalyst Control Center software is none of those so it is great news for consumers that AMD has decided to address it. This is one of AMD's Radeon Technology Group SVP Raja Koduri's pet projects - and it's a great start to a leadership program that should spell positive momentum for the Radeon brand.

While the Catalyst Control Center was written in the aging and bloated .NET ecosystem, Radeon Software is built on Qt. The first and most immediate advantage will be startup speed. AMD says that Radeon Software will open in 0.6 seconds compared to 8.0 seconds for Catalyst on a modestly configured system.


The style and interface look to be drastically improved, with well defined sections along the top and settings organized in a way that makes them easy to find and adjust. Your video settings are all in a single spot and the display configuration has its own section as well, just as they did with Catalyst, but the look and feel is completely different. Without hands-on time it's difficult to say for sure, but it appears that AMD has made major strides.


There are some interesting new capabilities as well, starting with per-game settings available in the Game Manager. This is not a duplication of what GeForce Experience does in terms of adjusting in-game settings, but it does allow you to set control panel-specific options like anti-aliasing, texture filtering quality and vertical sync. This capability existed in previous versions of Catalyst but was hard to utilize.

Overdrive, the AMD-integrated GPU overclocking portion of Radeon Software, gets a new feature as well: per-game overclocking settings. That's right - you will now be able to set game-specific overclocking settings for your GPU that will allow you to turn up the power for GTA V while turning things down for lower power consumption and noise while catching up on new DLC for Rocket League. I can see this being an incredibly useful feature for gamers willing to take the time to customize their systems.
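To illustrate the idea, per-game overclocking amounts to keying a set of clock and power offsets to a game and applying them when it launches. The sketch below is purely hypothetical on my part; AMD has not published how Radeon Software stores or applies these profiles.

```python
# Hypothetical per-game overclock profiles -- not AMD's actual format,
# just an illustration of the concept described above.
profiles = {
    "GTA5.exe":         {"core_mhz_offset": +75,  "power_limit_pct": +20},
    "RocketLeague.exe": {"core_mhz_offset": -100, "power_limit_pct": -10},
}

def profile_for(executable):
    """Fall back to stock settings for games without a profile."""
    return profiles.get(executable, {"core_mhz_offset": 0, "power_limit_pct": 0})

print(profile_for("GTA5.exe"))  # applied when the game launches
```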


There are obviously more changes in Radeon Software and its first iteration, Crimson, including improved Eyefinity configuration, automatic driver downloads and much more, and we look forward to playing around with the improved software package in the next few weeks. For AMD, this shows a renewed commitment to Radeon and PC gaming. With its declining market share against the powerful NVIDIA GeForce brand, AMD needs these types of changes.

Testing GPU Power Draw at Increased Refresh Rates using the ASUS PG279Q

Subject: Graphics Cards, Displays | October 24, 2015 - 04:16 PM |
Tagged: ROG Swift, refresh rate, pg279q, nvidia, GTX 980 Ti, geforce, asus, 165hz, 144hz

In the comments to our recent review of the ASUS ROG Swift PG279Q G-Sync monitor, a commenter by the name of Cyclops pointed me in the direction of an interesting quirk that I hadn’t considered before. According to reports, the higher refresh rates of some panels, including the 165Hz option available on this new monitor, can cause power draw to increase by as much as 100 watts on the system itself. While I did say in the review that the larger power brick ASUS provided with it (compared to last year’s PG278Q model) pointed toward higher power requirements for the display itself, I never thought to measure the system.

To set up a quick test I brought the ASUS ROG Swift PG279Q back to its rightful home in front of our graphics test bed, connected an EVGA GeForce GTX 980 Ti (with GPU driver 358.50) and chained both the PC and the monitor up to separate power monitoring devices. While sitting at a Windows 8.1 desktop I cycled the monitor through different refresh rate options and then recorded the power draw from both meters after 60-90 seconds of time to idle out.


The results are much more interesting than I expected! At 60Hz refresh rate, the monitor was drawing just 22.1 watts while the entire testing system was idling at 73.7 watts. (Note: the display was set to its post-calibration brightness of just 31.) Moving up to 100Hz and 120Hz saw very minor increases in power consumption from both the system and monitor.

But the jump to 144Hz is much more dramatic – idle system power jumps from 76 watts to almost 134 watts – an increase of 57 watts! Monitor power only increased by 1 watt at that transition though. At 165Hz we see another small increase, bringing the system power up to 137.8 watts.

Interestingly, we did find that the system would repeatedly jump to as much as 200+ watts of idle power draw for 30 seconds at a time and then drop back down to the 135-140 watt area for a few minutes. It was repeatable and very measurable.

So, what the hell is going on? A look at GPU-Z clock speeds reveals the source of the power consumption increase.


When running the monitor at 60Hz, 100Hz and even 120Hz, the GPU clock speed sits comfortably at 135MHz. When we increase from 120Hz to 144Hz though, the GPU clock spikes to 885MHz and stays there, even at the Windows desktop. According to GPU-Z the GPU is running at approximately 30% of the maximum TDP.

Though details are sparse, it seems pretty obvious what is going on here. The pixel clock and the GPU clock are connected through the same domain and are not asynchronous. The GPU needs to maintain a certain pixel clock in order to support the required bandwidth of a particular refresh rate, and based on our testing, the idle clock speed of 135MHz doesn’t give the pixel clock enough throughput to power anything more than a 120Hz refresh rate.
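Some rough numbers show why the bandwidth requirement climbs so quickly; here is a sketch using the panel's 2560x1440 resolution and an assumed flat ~10% blanking overhead (the exact figures depend on the monitor's timings):

```python
# Approximate pixel clock at 2560x1440, ignoring exact blanking timings
# (a flat 10% overhead is assumed here purely for illustration).
def approx_pixel_clock_mhz(width, height, refresh_hz, blanking=1.10):
    return width * height * refresh_hz * blanking / 1e6

for hz in (60, 120, 144, 165):
    print(hz, round(approx_pixel_clock_mhz(2560, 1440, hz)))
# 60 -> ~243 MHz, 120 -> ~487 MHz, 144 -> ~584 MHz, 165 -> ~669 MHz
```

Whatever the precise cutoff, somewhere between 120Hz and 144Hz the required pixel rate evidently exceeds what the card can feed from its 135MHz idle state, forcing the jump to a higher clock.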


Pushing refresh rates of 144Hz and higher causes a surprising increase in power draw

The obvious question here though is why NVIDIA would need to go all the way up to 885MHz in order to support the jump from 120Hz to 144Hz refresh rates. It seems quite extreme and the increased power draw is significant, causing the fans on the EVGA GTX 980 Ti to spin up even while sitting idle at the Windows desktop. NVIDIA is aware of the complication, though it appears that a fix won’t really be in order until an architectural shift is made down the road. With the ability to redesign the clock domains available to them, NVIDIA could design the pixel and GPU clock to be completely asynchronous, increasing one without affecting the other. It’s not a simple process though, especially in a processor this complex. We have seen Intel and AMD correctly and effectively separate clocks in recent years on newer CPU designs.

What happens to a modern AMD GPU like the R9 Fury with a similar test? To find out we connected our same GPU test bed to the ASUS MG279Q, a FreeSync enabled monitor capable of 144 Hz refresh rates, and swapped the GTX 980 Ti for an ASUS R9 Fury STRIX.



The AMD Fury does not demonstrate the same phenomenon that the GTX 980 Ti does when running at high refresh rates. The Fiji GPU runs at the same static 300MHz clock rate at 60Hz, 120Hz and 144Hz, and the power draw on the system only inches up by 2 watts or so. I wasn't able to test 165Hz refresh rates on the AMD setup, so it is possible that at that threshold the AMD graphics card would behave differently. It's also true that the NVIDIA Maxwell GPU is running at less than half the clock rate of AMD's Fiji in this idle state, and that may account for the difference in pixel clock behavior we are seeing. Still, the NVIDIA platform draws slightly more power at idle than the AMD platform, so advantage AMD here.

For today, know that if you choose to use a 144Hz or even a 165Hz refresh rate on your NVIDIA GeForce GPU you are going to be drawing a bit more power and will be less efficient than expected even just sitting in Windows. I would bet that most gamers willing to buy high end display hardware capable of those speeds won’t be overly concerned with 50-60 watts of additional power draw, but it’s an interesting data point for us to track going forward and to compare AMD and NVIDIA hardware in the future.

Are NVIDIA and AMD ready for SteamOS?

Subject: Graphics Cards | October 23, 2015 - 03:19 PM |
Tagged: linux, amd, nvidia, steam os

Steam Machines powered by SteamOS are due to hit stores in the coming months, and in order to get the best performance you need to make sure that the GPU inside the machine plays nicely with the new OS.  To that end Phoronix has tested 22 GPUs: 15 NVIDIA cards ranging from a GTX 460 straight through to a TITAN X, and seven AMD cards from an HD 6570 through to the new R9 Fury.  Part of the reason fewer AMD cards were tested stems from driver issues which prevented some models from functioning properly.  They tested Bioshock Infinite, both Metro Redux titles, CS:GO and one of Josh's favourites, DiRT Showdown.  The performance results may not be what you expect and are worth checking out in full.  Phoronix also included cost-to-performance findings for budget-conscious gamers.


"With Steam Machines set to begin shipping next month and SteamOS beginning to interest more gamers as an alternative to Windows for building a living room gaming PC, in this article I've carried out a twenty-two graphics card comparison with various NVIDIA GeForce and AMD Radeon GPUs while testing them on the Debian Linux-based SteamOS 2.0 "Brewmaster" operating system using a variety of Steam Linux games."


Source: Phoronix

Report: AMD Radeon 400 Series Taped Out, Coming 2016

Subject: Graphics Cards | October 23, 2015 - 01:49 AM |
Tagged: tape out, rumor, report, Radeon 400 Series, radeon, graphics card, gpu, Ellesmere, Baffin, amd

Details are almost nonexistent, but a new report claims that AMD has reached tape out for an upcoming Radeon 400 series of graphics cards, which could be the true successor to the R9 200-series after the rebranded 3xx cards.


Image credit: WCCFtech

According to the report:

"AMD has reportedly taped out two of its next-gen GPUs, with "Ellesmere" and "Baffin" both taping out - and both part of the upcoming Radeon 400 series of video cards."

I wish there was more here to report, but if this is accurate we should start to hear some details about these new cards fairly soon. The important thing is that AMD is working on new performance mainstream cards so soon after releasing what was largely a simple rebrand across much of the 300-series GPUs this year.

Source: WCCFTech

ASUS Has Created a White AMD Radeon R9 Nano

Subject: Graphics Cards | October 23, 2015 - 12:29 AM |
Tagged: r9 nano, mITX, mini-itx, graphics card, gpu, asus, amd

AMD's Radeon R9 Nano is a really cool product, able to provide much of the power of the bigger R9 Fury X without the need for more than a standard air cooler, and doing so at an impossibly tiny size for a full graphics card. And while mini-ITX graphics cards serve a small segment of the market, just who might be buying a white one when this is released?


According to a report published first by Computer Base in Germany, ASUS is releasing an all-white AMD R9 Nano, and it looks really sharp. The stock R9 Nano is no slouch in the looks department as you can see here in our full review of AMD's newest GPU, but with this design ASUS provides a totally different look that could help unify the style of your build depending on your other component choices. White is just starting to show up for things like motherboard PCBs, but it's pretty rare in part due to the difficulty in manufacturing white parts that stay white when they are subjected to heat.


There was no mention of a specific release window for the ASUS R9 Nano White, so we'll have to wait for official word on that. It is possible that ASUS has also implemented their own custom PCB, though details are not known just yet. We should know more by the end of next month according to the report.