Manufacturer: AMD

FreeSync and Frame Pacing Get a Boost

Make sure you catch the live stream we're hosting with AMD today to discuss the new Radeon Software Crimson driver in much more depth. We're also giving away four Radeon graphics cards! Find all the information right here.

Earlier this month AMD announced plans to retire the Catalyst Control Center application used to control Radeon GPUs, introducing a new brand simply called Radeon Software. The first iteration of this software, Crimson, is being released today and includes some impressive user experience changes that are well worth seeing and, well, experiencing.

Users will no doubt remember how dated the previous Catalyst Control Center felt; it was slow, clunky, and difficult to navigate. Radeon Software Crimson changes all of this with a new UI, a new backend that allows it to start up almost instantly, and a handful of new features that might surprise some of our readers. Here's a quick rundown of what stands out to me:

  • Opens in less than a second in my testing
  • Completely redesigned and modern user interface
  • Faster display initialization
  • New clean install utility (separate download)
  • Per-game Overdrive (overclocking) settings
  • LiquidVR integration
  • FreeSync improvements at low frame rates
  • FreeSync planned for HDMI (though not implemented yet)
  • Frame pacing support in DX9 titles
  • New custom resolution support
  • Desktop-based Virtual Super Resolution
  • Directional scaling for 2K to 4K upscaling (Fiji GPUs only)
  • Shader cache (precompiled) to reduce compiling-induced frame time variance
  • Unspecified DX12 improvements
  • Flip queue size optimizations (frame buffer length) for specific games
  • Wider target range for Frame Rate Target Control (see the sketch after this list)
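On that last point, Frame Rate Target Control is essentially a driver-level frame limiter: frames that would finish early are held back so the GPU can idle instead of rendering output the display will never show. AMD's actual implementation lives inside the driver, but here is a minimal Python sketch of the general pacing idea (render_frame is a hypothetical stand-in for a game's draw work):

```python
import time

def render_loop(render_frame, target_fps=60.0, duration=1.0):
    """Pace a render loop toward a target frame rate.

    After each frame, sleep away whatever is left of the frame
    budget so the GPU can drop to idle clocks instead of rendering
    frames the display will never show.
    """
    frame_budget = 1.0 / target_fps
    start = time.perf_counter()
    while time.perf_counter() - start < duration:
        frame_start = time.perf_counter()
        render_frame()  # stand-in for the game's actual draw work
        elapsed = time.perf_counter() - frame_start
        if elapsed < frame_budget:
            # Frame finished early: idle out the rest of the budget.
            time.sleep(frame_budget - elapsed)

# Example: pace a trivial "frame" at a 60 FPS target for one second.
render_loop(lambda: sum(range(10000)), target_fps=60.0, duration=1.0)
```

The payoff of a wider target range is simply that this cap can now be set at values the old driver disallowed.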


That's quite a list of new features, some of which will be more popular than others, but it looks like there should be something for everyone to love about the new Crimson software package from AMD.

For this story I wanted to focus on two of the above features that have long been sticking points for me, and see how well AMD has addressed them with the first release of Radeon Software.

FreeSync: Low Frame Rate Compensation

I might be slightly biased, but I don't think anyone has done a more thorough job of explaining and diving into the differences between AMD FreeSync and NVIDIA G-Sync than the team at PC Perspective. Since day one of the G-Sync variable refresh release we have been following the changes and capabilities of these competing features and writing about what really separates them from a technological point of view, not just pricing and perceived experiences. 
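Before getting into the details, it helps to recall the basic idea behind low frame rate compensation: when a game's frame rate drops below the panel's minimum refresh rate, the driver can redraw each frame two or more times so the effective refresh rate lands back inside the variable refresh window. Here is a minimal sketch of that selection logic (my own illustration with assumed panel limits, not AMD's code):

```python
def lfc_refresh(fps, panel_min_hz=35.0, panel_max_hz=144.0):
    """Pick a panel refresh rate for a given content frame rate.

    If the frame rate falls below the panel's minimum, repeat each
    frame 2x, 3x, ... until the refresh rate lands back inside the
    panel's supported variable refresh window.
    (Assumed panel limits; illustration only, not AMD's algorithm.)
    """
    if fps >= panel_min_hz:
        return min(fps, panel_max_hz), 1  # normal VRR operation
    multiplier = 2
    while fps * multiplier < panel_min_hz:
        multiplier += 1
    return min(fps * multiplier, panel_max_hz), multiplier

# 24 FPS content on a 35-144Hz panel: each frame is shown twice at 48Hz.
print(lfc_refresh(24.0))  # -> (48.0, 2)
print(lfc_refresh(10.0))  # -> (40.0, 4)
```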

Continue reading our overview of new features in AMD Radeon Software Crimson!

The R9 380X arrives

Subject: Graphics Cards | November 19, 2015 - 01:37 PM |
Tagged: asus, strix, Radeon R9 380X, tonga

The full serving of Tonga in the AMD Radeon R9 380X has 32 compute units, 2048 stream processors, 32 ROPs and 128 texture units, which compares favourably to the 28 CUs, 1792 stream processors, 32 ROPs and 112 texture units of the existing R9 380.  Memory bandwidth and capacity are unchanged: 182GB/sec at the stock effective memory speed of 5.7GHz, and the GPU clock remains around 970MHz as well.  The MSRP is to be $230 for the base model.
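That bandwidth figure follows directly from Tonga's 256-bit memory interface:

$$\frac{256\,\text{bit}}{8\,\text{bit/byte}} \times 5.7\,\text{GT/s} = 32\,\text{B} \times 5.7\,\text{GT/s} \approx 182.4\,\text{GB/s}$$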

With the specifications out of the way, the next question is how it fares against the direct competition, the GTX 960 and GTX 970.  That is where this review from [H]ard|OCP comes in, with a look at the ASUS STRIX R9 380X DirectCU II OC, which runs at 1030MHz by default and 1050MHz at the push of a button.  Their tests at 1440p were a little disappointing; the card did not perform well until advanced graphics settings were reduced, but at 1080p they saw great performance with all the bells and whistles turned up.  Pricing will be key to this product: if sellers can keep it at or below MSRP it is a better deal than the GTX 970, but if prices creep closer then the 970 is the better value.


"AMD has let loose the new AMD Radeon R9 380X GPU, today we evaluate the ASUS STRIX R9 380X OC video card and find out how it compares to a 4GB GeForce GTX 960 and GeForce GTX 970 for a wide picture of where performance lies at 1440p or where it does not at 1440p considering your viewpoint."


Source: [H]ard|OCP

AMD R9 Fury X Voltage and HBM Unlocked with Sapphire TriXX 5.2.1

Subject: Graphics Cards | November 18, 2015 - 01:22 PM |
Tagged: Sapphire TriXX, R9 Fury X, overclocking, hbm, amd

The new version (5.2.1) of Sapphire's TriXX overclocking utility has been released, and it finally unlocks voltage and HBM overclocking for AMD's R9 Fury X.


(Image credit: Sapphire)

Previously the core voltage of the R9 Fury X was not adjustable, leaving what would seem to be quite a bit of untapped headroom for a card that ships with a powerful liquid-cooling solution rated for 500 watts of thermal dissipation. This should allow for much better results than Ryan was able to achieve when he attempted overclocking for our review of the R9 Fury X in June (without the benefit of voltage adjustments):

"My net result: a clock speed of 1155 MHz rather than 1050 MHz, an increase of 10%. That's a decent overclock for a first attempt with a brand new card and new architecture, but from the way that AMD had built up the "500 watt cooler" and the "375 watts available power" from the dual 8-pin power connectors, I was honestly expecting quite a bit more. Hopefully we'll see some community adjustments, like voltage modifications, that we can mess around with later..."


Will TriXX v5.2.1 unleash the full potential of the Fury X? We will have to wait for some overclocked benchmark numbers, but having the ability can only be a good thing for enthusiasts.

Source: WCCFtech

AMD Plans Two GPUs in 2016

Subject: Graphics Cards | November 16, 2015 - 09:34 PM |
Tagged: amd, radeon, GCN

Late last week, Forbes published an editorial by Patrick Moorhead, who spoke with Raja Koduri about AMD's future in the GPU industry. Patrick was a Corporate Vice President at AMD until late 2011. He then created Moor Insights and Strategy, which provides industry analysis. He regularly publishes editorials to Forbes and CIO. Raja Koduri is the head of the Radeon Technologies Group at AMD.


I'm going to be focusing on a brief mention a little more than half-way through, though. According to the editorial, Raja stated that AMD will release two new GPUs in 2016. “He promised two brand new GPUs in 2016, which are hopefully going to both be 14nm/16nm FinFET from GlobalFoundries or TSMC and will help make Advanced Micro Devices more power and die size competitive.”

We have been expecting AMD's Arctic Islands to arrive at some point in 2016, which will compete with NVIDIA's Pascal architecture at the high end. AMD's product stack has been relatively stale for a while, with most of the innovation occurring at the top end and pushing the previous top end down a bit. Two new GPU architectures almost certainly mean that the second will focus on the lower end of the market, bringing more compelling products on smaller processes that are more power efficient, cheaper per unit, and equipped with newer features.

Add the recent report of the Antigua architecture, which I assume is in addition to AMD's two-architecture announcement, and AMD's product stack could look much less familiar next year.

Source: Forbes

Putting the R9 Nano under the microscope

Subject: Graphics Cards | November 13, 2015 - 03:30 PM |
Tagged: radeon, r9 nano, amd

We are not the only ones investigating usage scenarios for AMD's tiny R9 Nano; [H]ard|OCP has also recently looked at this card to determine if or when there is a good reason to pay the extra price for this tiny GPU.  This particular review focuses on performance against a similarly sized Gigabyte GTX 970 in a Cooler Master Elite 110; a follow-up will run the cards inside a Corsair Obsidian Series 250D case.  At 1080p the cards performed at very similar levels, with the significantly more expensive Nano holding a small lead, while at 1440p the R9 Nano truly shines.  This card is certainly not for everyone, and both the Fury X and GTX 980 Ti offer much better performance at a similar price point, but neither of them will fit inside the case of someone determined to build a tiny gaming machine.


"This evaluation will compare the new retail purchased Radeon R9 Nano with a GIGABYTE GeForce GTX 970 N970-IX-OC small form factor video card in a mini-ITX Cooler Master Elite 110 Intel Skylake system build. We will find out if the higher priced Nano is worth the money for a 1440p and 1080p gameplay experience in a tiny footprint. "


Source: [H]ard|OCP

Basemark GPU Vulkan Announced for Q2'16 Release

Subject: Graphics Cards | November 10, 2015 - 03:02 PM |
Tagged: vulkan

The Vulkan API, announced during the Game Developers Conference last March, is a low-level method of communicating with GPUs. It is essentially a fork of AMD's Mantle, modified to use things like OpenCL's SPIR bytecode for its shading and compute language rather than HLSL (as in DirectX and Mantle) or GLSL (as in OpenGL). At the time, Khronos mentioned that Vulkan was expected to be released in 2015, and that they intended to “under promise and over deliver” on that schedule. It now being November, I expect that something came up, which isn't too surprising as Microsoft seems to have had similar issues with DirectX 12.


That said, Basemark has just announced that they will have (at least one?) Vulkan-compatible benchmark available in Q2 2016. It is unclear whether they mean calendar year or some arbitrary fiscal year. Basemark GPU Vulkan is planned to focus on “relevant Vulkan API performance tests as opposed to theoretical workloads”. This sounds like more than a high-draw-call, low-detail technical demo, which is an interesting metric, but one that will probably be covered elsewhere (like the competing 3DMark from Futuremark).

Hopefully the roll-out, for developers at the very least, will occur this year, though.

Source: Basemark

NVIDIA Releases Driver 358.91 for Fallout 4, Star Wars Battlefront, Legacy of the Void

Subject: Graphics Cards | November 9, 2015 - 01:44 PM |
Tagged: nvidia, geforce, 358.91, fallout 4, Star Wars, battlefront, starcraft, legacy of the void

It's a huge month for PC gaming with the release of Bethesda's Fallout 4 and EA's Star Wars Battlefront likely to take up hours and hours of your (and my) time in the lead up to the holiday season. NVIDIA just passed over links to its latest "Game Ready" driver, version 358.91.


Fallout 4 is going to be impressive graphically

Here's the blurb from NVIDIA directly:

Continuing to fulfill our commitment to GeForce gamers to have them Game Ready for the top Holiday titles, today we released a new Game Ready driver.  This Game Ready driver will get GeForce Gamers set-up for tomorrow’s release of Fallout 4, as well as Star Wars Battlefront, StarCraft II: Legacy of the Void. WHQLed and ready for the Fallout wasteland, driver version 358.91 will deliver the best experience for GeForce gamers in some of the holiday’s hottest titles.

Other than learning that NVIDIA considers "WHQLed" to be a verb now, this is good news for PC gamers looking to dive into the world of Fallout or take up arms against the Empire on the day of release. I honestly believe that these kinds of software updates and frequent driver improvements timed to major game releases are one of the biggest advantages that GeForce gamers have over Radeon users, though I hold out hope that the red team will get on the same cadence with one Raja Koduri in charge.

You can also find more information from NVIDIA about configuration with its own GPUs for Fallout 4 and for Star Wars Battlefront on GeForce.com.

Source: NVIDIA

ASUS Announces ROG Maximus VIII Extreme/Assembly Motherboard and Matrix GTX 980 Ti

Subject: Graphics Cards, Motherboards | November 9, 2015 - 10:49 AM |
Tagged: ROG, Republic of Gamers, Maximus VIII Extreme/Assembly, Matrix GTX 980 Ti, Headphone Amp, E9018K2M, DAC, asus, 10GbE, 10 Gbps Ethernet

ASUS has announced two new products for their Republic of Gamers lineup today, and while we saw the Matrix GTX 980 Ti at IFA in September (and the Maximus VIII Extreme/Assembly was also on display), there are further details for both products in today's press release.


ASUS ROG Maximus VIII Extreme/Assembly motherboard with Matrix 980 Ti

The motherboard in question is the Maximus VIII Extreme/Assembly, a Z170 board bundled with an external headphone amplifier and a 10Gb/s Ethernet add-in card. This board could run into some money.


The ROG 10G Express expansion card

While other Maximus VIII series motherboards have high-end audio support, the Extreme/Assembly further differentiates itself with an included 10Gb/s Ethernet card. ASUS has partnered with Tehuti Networks for the card, which in addition to 10Gbps also operates at conventional 100/1000 Ethernet speeds, as well as new 2.5/5Gbps over CAT5e.

“ROG 10G Express is the enterprise-speed Ethernet card, powered by Aquantia® and Tehuti Networks: these key partners are both members of the NBASE-T™ alliance, and are working closely to create the new 2.5Gbit/s and 5Gbit/s standards that will be compatible with the existing Category 5e (Cat 5e) cabling and ports. With PCI Express 2.0 x4 speed, it equips Maximus VIII Extreme/Assembly gamers for next-generation LAN speeds of up to 10Gbit/s — or up to ten times (10X) faster than today’s fastest onboard consumer Ethernet.”
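The PCI Express 2.0 x4 link mentioned in that quote is a sensible match for a 10GbE controller; accounting for PCIe 2.0's 8b/10b encoding overhead, four lanes provide

$$4\,\text{lanes} \times 5\,\text{GT/s} \times \frac{8}{10} = 16\,\text{Gbit/s}$$

which leaves comfortable headroom above the card's 10Gbit/s line rate.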

This will certainly add to the cost of the motherboard considering a 10GbE card (without the 2.5/5Gbps feature) currently sells for $239.99 on Amazon.


The ROG SupremeFX Hi-Fi amplifier

If you’re an audio enthusiast (like me) you’ll be impressed by the attention to audio, which begins with the audiophile-grade ESS ES9018K2M DAC chip found on other members of the Maximus VIII family, capable of not only native 32-bit/384kHz PCM playback but also up to dual-rate DSD (DSD128). The external headphone amplifier features the Texas Instruments TPA6120A2 and has a very high 6V output to drive the most challenging headphone loads.
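For reference, the DSD128 name reflects the format's sample rate relative to CD audio's 44.1kHz:

$$128 \times 44.1\,\text{kHz} = 5.6448\,\text{MHz}$$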

What about the Matrix GTX 980 Ti? Full specifications were announced for the card, with boost GPU clock speeds of up to 1317 MHz.


  • Graphics Engine: NVIDIA GeForce GTX 980 Ti
  • Video memory: 6GB GDDR5
  • CUDA cores: 2816
  • GPU clock (boosted):
    • 1317MHz (OC mode)
    • 1291MHz (gaming mode)
  • GPU clock (base):
    • 1216MHz (OC mode)
    • 1190MHz (gaming mode)
  • Memory clock: 7200MHz
  • Memory interface: 384-bit
  • Display Output: 3x DisplayPort 1.2, 1x HDMI 2.0, 1x Dual-link DVI
  • Dimensions: 11.62 x 5.44 x 2 inches

Availability and pricing information for these new ROG products was not released.

Source: ASUS ROG

Report: AMD Radeon R9 380X Coming November 15 for $249

Subject: Graphics Cards | November 7, 2015 - 04:46 PM |
Tagged: tonga, rumor, report, Radeon R9 380X, r9 285, graphics card, gpu, GDDR5, amd

AMD will reportedly be launching their latest performance graphics card soon, and specs for this rumored R9 380X have now been reported at VR-Zone (via Hardware Battle).


(Image credit: VR-Zone)

Here are the full specifications from this report:

  • GPU Codename: Antigua
  • Process: 28 nm
  • Stream Processors: 2048
  • GPU Clock: Up to 1000 – 1100 MHz (exact number not known)
  • Memory Size: 4096 MB
  • Memory Type: GDDR5
  • Memory Interface: 256-bit
  • Memory Clock: 5500 – 6000 MHz (exact number not known)
  • Display Output: DisplayPort 1.2, HDMI 1.4, Dual-Link DVI-D

The launch date is reportedly November 15, and the card will (again, reportedly) carry a $249 MSRP at launch.


The 380X would build on the existing R9 285

Compared to the R9 280X, which also offers 2048 stream processors, a boost clock up to 1000 MHz, and 6000 MHz GDDR5, the R9 380X would lose memory bandwidth due to the move from a 384-bit memory interface to 256-bit. The actual performance won’t be exactly comparable, however, as the core (Antigua, previously Tonga) will have more in common with the R9 285 (Tonga), though the R9 285 only offered 1792 stream processors and 2 GB of GDDR5.
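Running the numbers at the same 6000 MHz effective memory clock shows the size of that bandwidth gap:

$$\text{R9 280X: } \frac{384\,\text{bit}}{8} \times 6.0\,\text{GT/s} = 288\,\text{GB/s} \qquad \text{R9 380X: } \frac{256\,\text{bit}}{8} \times 6.0\,\text{GT/s} = 192\,\text{GB/s}$$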

You can check out our review of the R9 285 here to see how it performed against the R9 280X, and it will certainly be interesting to see how this R9 380X will fare if these specifications are accurate.

Source: VR-Zone

NVIDIA Confirms Clock Speed, Power Increases at High Refresh Rates, Promises Fix

Subject: Graphics Cards | November 6, 2015 - 04:05 PM |
Tagged: ROG Swift, refresh rate, pg279q, nvidia, GTX 980 Ti, geforce, asus, 165hz, 144hz

Last month I wrote a story that detailed some odd behavior with NVIDIA's GeForce GTX graphics cards and high refresh rate monitors, in particular with the new ASUS ROG Swift PG279Q that has a rated 165Hz refresh rate. We found that when running this monitor at 144Hz or higher refresh rate, idle clock speeds and power consumption of the graphics card increased dramatically.

The results are much more interesting than I expected! At 60Hz refresh rate, the monitor was drawing just 22.1 watts while the entire testing system was idling at 73.7 watts. (Note: the display was set to its post-calibration brightness of just 31.) Moving up to 100Hz and 120Hz saw very minor increases in power consumption from both the system and monitor.


But the jump to 144Hz is much more dramatic – idle system power jumps from 76 watts to almost 134 watts – an increase of 57 watts! Monitor power only increased by 1 watt at that transition though. At 165Hz we see another small increase, bringing the system power up to 137.8 watts.

When running the monitor at 60Hz, 100Hz and even 120Hz, the GPU clock speed sits comfortably at 135MHz. When we increase from 120Hz to 144Hz though, the GPU clock spikes to 885MHz and stays there, even at the Windows desktop. According to GPU-Z the GPU is running at approximately 30% of the maximum TDP.
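If you want to check your own card for this behavior, polling the core clock (and, where supported, power draw) while switching refresh rates is enough to reproduce it. Here is a quick Python sketch around NVIDIA's nvidia-smi command line tool; note that power.draw is not reported on all GeForce boards:

```python
import subprocess
import time

def poll_gpu(samples=10, interval=1.0):
    """Print GPU core clock and power draw once per second.

    Run this at an idle desktop while switching the monitor between
    60/120/144/165Hz to see whether the clocks stay at their idle
    floor. power.draw reads "[N/A]" on boards without power sensors.
    """
    for _ in range(samples):
        result = subprocess.run(
            ["nvidia-smi",
             "--query-gpu=clocks.current.graphics,power.draw",
             "--format=csv,noheader"],
            capture_output=True, text=True, check=True,
        )
        print(result.stdout.strip())
        time.sleep(interval)

poll_gpu()
```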

We put NVIDIA on notice with the story and followed up with emails including more information from other users, as well as additional testing completed after the story was posted. The result: NVIDIA has confirmed the bug exists and has a fix incoming!

In an email we got from NVIDIA PR last night: 

We checked into the observation you highlighted with the newest 165Hz G-SYNC monitors.
 
Guess what? You were right! That new monitor (or you) exposed a bug in the way our GPU was managing clocks for GSYNC and very high refresh rates.
 
As a result of your findings, we are fixing the bug which will lower the operating point of our GPUs back to the same power level for other displays.
 
We’ll have this fixed in an upcoming driver.

This actually lines up with an oddity we found before: the PG279Q at 144Hz was pushing GPU clocks up pretty high while a monitor without G-Sync support at 144Hz did not. We'll see if this addresses the entire gamut of experiences that users have had (and have emailed me about) with high refresh rate displays and power consumption, but at the very least NVIDIA is aware of the problem and is working to fix it.

I don't have confirmation of WHEN I'll be able to test out that updated driver, but hopefully it will be soon, so we can confirm the fix works with the displays we have in-house. NVIDIA also hasn't confirmed the root cause of the problem - was it related to the clock domains as we had theorized? Maybe not, since this appears to be a G-Sync specific display issue (based on the quote above). I'll try to weasel out the technical reasoning for the bug if I can and update the story later!