Pushing the ASUS STRIX R9 380X DirectCU II OC to the limit

Subject: Graphics Cards | December 14, 2015 - 03:55 PM |
Tagged: amd, asus, STRIX R9 380X DirectCU II OC, overclock

Out of the box the ASUS STRIX R9 380X OC has a top GPU speed of 1030MHz and memory at 5.7GHz, enough to outperform a stock GTX 960 4GB at 1440p but not enough to provide satisfactory performance at that resolution. After spending some time with the card, [H]ard|OCP determined that the best overclock they could coax out of this particular GPU was 1175MHz on the core and 6.5GHz on the memory, so they set about testing performance at 1440p again. To make it fair they also overclocked their STRIX GTX 960 OC 4GB to 1527MHz and 8GHz. Read the full review for the detailed results; you will see that overclocking your 380X really does increase the value you get for your money.
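For a quick sense of scale, here is a minimal arithmetic sketch using only the clocks quoted above; it shows that both the core and memory overclocks land at roughly 14%.

```c
#include <stdio.h>

/* Percentage gains from the overclocks quoted above:
 * stock 1030 MHz core / 5.7 GHz memory vs. 1175 MHz / 6.5 GHz. */
int main(void)
{
    const double core_stock = 1030.0, core_oc = 1175.0; /* MHz */
    const double mem_stock  = 5.7,    mem_oc  = 6.5;    /* GHz effective */

    printf("Core overclock:   %.1f%%\n", (core_oc - core_stock) / core_stock * 100.0);
    printf("Memory overclock: %.1f%%\n", (mem_oc - mem_stock) / mem_stock * 100.0);
    return 0;
}
```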

1450065721TtW76q7G9J_1_1.gif

"We take the new ASUS STRIX R9 380X DirectCU II OC based on AMD's new Radeon R9 380X GPU and overclock this video card to its highest potential. We'll compare performance in six games, including Fallout 4, to a highly overclocked ASUS GeForce GTX 960 4GB video card and find out who dominates 1440p gaming."


Source: [H]ard|OCP
Manufacturer: AMD

What RTG has planned for 2016

Last week the Radeon Technology Group invited a handful of press and analysts to a secluded location in Sonoma, CA to discuss the future of graphics, GPUs and of course Radeon. For those of you that seem a bit confused, the RTG (Radeon Technologies Group) was spun up inside AMD to encompass all of the graphics products and IP inside the company. Though today’s story is not going to focus on the fundamental changes that RTG brings to the future of AMD, I will note, without commentary, that we saw not a single AMD logo in our presentations or in the signage present throughout the week.

Much of what I learned during the RTG Summit in Sonoma is under NDA and will likely be so for some time. We learned about the future architectures, direction and product theories that will find their way into a range of solutions available in 2016 and 2017.

What I can discuss today is a pair of features that are being updated and improved for current generation graphics cards and for Radeon GPUs coming in 2016: FreeSync and HDR displays. The former is one that readers of PC Perspective should be very familiar with while the latter will offer a new window into content coming in late 2016.

High Dynamic Range Displays: Better Pixels

In just the last couple of years we have seen a spike in resolution for mobile, desktop and notebook displays. We now regularly have 4K monitors on sale for around $500 and very good quality 4K panels going for something in the $1000 range. Couple that with the increase in market share of 21:9 panels with 3440x1440 resolutions and clearly there is a demand from consumers for a better visual experience on their PCs.

rtg1-8.jpg

But what if the answer isn’t just more pixels, but better pixels? This debate already comes up weekly when we compare render resolutions in games (4K at lower image quality settings versus 2560x1440 at maximum IQ, for example), but the truth is that panel technology has the ability to make a dramatic change to how we view all content, from games to movies to productivity, with the introduction of HDR, high dynamic range.

rtg1-10.jpg

As the slide above demonstrates, there is a wide range of luminance in the real world that our eyes can see. Sunlight crosses the 1.6 billion nits mark, while basic fluorescent lighting in our homes and offices exceeds 10,000 nits. Compare that to the most modern PC displays, which range from 0.1 nits to 250 nits, and you can already tell where the discussion is heading. Even the best LCD TVs on the market today have a range of 0.1 to 400 nits.
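One way to frame those numbers is in stops, that is, doublings of luminance, where dynamic range is log2(max/min). The sketch below runs that math on the figures quoted above; reusing the 0.1 nit display floor for the sunlight case is just an assumption to keep the comparison consistent.

```c
#include <math.h>
#include <stdio.h>

/* Dynamic range in stops (doublings of luminance) = log2(max / min),
 * using the luminance figures quoted above. */
static double stops(double min_nits, double max_nits)
{
    return log2(max_nits / min_nits);
}

int main(void)
{
    printf("Typical PC display (0.1 to 250 nits): %.1f stops\n", stops(0.1, 250.0));
    printf("Best LCD TVs (0.1 to 400 nits):       %.1f stops\n", stops(0.1, 400.0));
    printf("Up to direct sunlight (1.6e9 nits):   %.1f stops\n", stops(0.1, 1.6e9));
    return 0;
}
```

Roughly 11 stops for a typical monitor versus around 34 stops between that same floor and direct sunlight is the gap that HDR panels and content pipelines are trying to start closing.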

Continue reading our overview of new FreeSync and HDR features for Radeon in 2016!!

AMD HSA Patches Hoping for GCC 6

Subject: Graphics Cards, Processors | December 8, 2015 - 08:07 AM |
Tagged: hsa, GCC, amd

Phoronix, the Linux-focused hardware website, highlighted patches for the GNU Compiler Collection (GCC) that implement HSA. This will allow newer APUs, such as AMD's Carrizo, to accelerate chunks of code (mostly loops) that have been tagged with a compiler directive as worth running on the GPU. While I have done some GPGPU development, many of the low-level specifics of HSA aren't areas where I have much experience.

amd-2015-carrizo-8.jpg

The patches have been managed by Martin Jambor of SUSE Labs. You can see a slideshow presentation of their work on the GNU website. Even though the feature freeze was about a month ago, they are apparently hoping that this work will make it into the official GCC 6 release. If so, many developers around the world will be able to target HSA-compatible hardware in the first half of 2016. Technically, anyone can do so already, but they would need to specifically use the unofficial branch in the GCC Subversion repository. That probably means compiling the compiler themselves, and the branch might even be behind on a few features from other branches that were accepted into GCC 6.
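For a rough idea of what tagging a loop for GPU execution looks like in practice, here is a minimal sketch of an OpenMP 4-style offloaded loop, the kind of construct the GCC HSA work is built around; the exact directives and compiler flags the HSA branch accepts may differ, so treat this as illustrative rather than a description of the patches themselves.

```c
#include <stdio.h>

#define N 1000000

/* Minimal sketch of a pragma-tagged loop that an HSA-enabled compiler
 * could offload to the GPU on a supported APU. OpenMP 4-style target
 * directives are an assumption here, not a quote from the patches. */
int main(void)
{
    static float a[N], b[N], c[N];

    for (int i = 0; i < N; i++) {
        a[i] = (float)i;
        b[i] = 2.0f * (float)i;
    }

    /* The compiler can turn this marked region into a GPU kernel and
     * dispatch it through the HSA runtime; without offload support it
     * simply runs on the host CPU. */
    #pragma omp target teams distribute parallel for map(to: a, b) map(from: c)
    for (int i = 0; i < N; i++)
        c[i] = a[i] + b[i];

    printf("c[42] = %f\n", c[42]);
    return 0;
}
```

One attraction of this model is that the same source still builds and runs everywhere: if no offload target is available, the tagged loop degrades to ordinary host execution.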

Source: Phoronix

Asetek Sends Cease and Desist for Water-Cooled GPUs

Subject: Graphics Cards | December 6, 2015 - 11:29 PM |
Tagged: gigabyte, cooler master, asetek, amd

AMD and Gigabyte have each received cease and desist letters from Asetek, regarding the Radeon Fury X and GeForce GTX 980 Water Force, respectively, for using a Cooler Master-based liquid cooling solution. The Cooler Master Seidon 120M is a self-contained block and water pump that courts have ruled infringes on one of Asetek's patents. Asetek has been awarded 25.375% of Cooler Master's revenue from all affected products sold since January 1st, 2015.

amd-2015-coolermaster-furyxopen.JPG

This issue obviously affects NVIDIA less than AMD, since it applies to a single product from just one AIB partner. On AMD's side, however, it affects all Fury X products, though obviously not the air-cooled Fury and Fury Nano cards. Future SKUs could be affected as well, especially since upcoming top-end GPUs will probably sit in small packages adjacent to HBM 2.0 memory. This dense form factor lends itself well to direct cooling techniques, like closed-loop water cooling.

gigabyte-2015-waterforce-asetekcoolermaster.jpg

Even more interesting is that we believe Asetek was expecting to get the Fury X contract. We reported on an Asetek press release that claimed they received their “Largest Ever Design Win” with an undisclosed OEM. We expected it to be the follow-up to the 290X, which we assumed would be called the 390X because, I mean, AMD just chose that branding, right? Then the Fury X launched and it contained a Cooler Master pump. I was confused. No other candidate for “Largest Ever Design Win” popped up from Asetek, either. I guess we were right? Question mark? Asetek's design win press release came out in August 2014, while Asetek won the patent case in December of that year.

Regardless, this patent war has been ongoing for several months now. If it even affects any future products, I'd hope the companies involved have had enough warning by this point.

Source: GamersNexus

AMD Confirms Tonga 384-bit Memory Bus, Not Enabled For Any Products

Subject: Graphics Cards | December 3, 2015 - 10:34 PM |
Tagged: Tonga XT, tonga, Radeon R9 380X, Radeon R9 285, Radeon R9 280X, Radeon R9 280, radeon, amd, 384-bit

A year ago, articles sourcing the same PC Watch report claimed that AMD's Tonga XT GPU had a 384-bit memory bus, yet when the Radeon R9 380X was released last month we saw a Tonga XT GPU with a 256-bit memory interface.

tonga_384.png

The full Tonga core features a 384-bit GDDR5 memory bus (Credit: PC Watch)

Reports of the upcoming card had consistently referenced the wider 384-bit bus, and tonight we are able to officially confirm that Tonga (not just Tonga XT) has been 384-bit capable all along, though this was never enabled by AMD. The reason? The company never found the right price/performance combination.

AMD's Raja Koduri confirmed Tonga's 384-bit bus tonight, and our own Ryan Shrout broke the news on Twitter.

So does this mean an upcoming Tonga GPU could offer this wider memory bus? Tonga itself was a follow-up to Tahiti (R9 280/280X), which did have a 384-bit bus, but all along the choice had been made to keep the updated core at 256-bit.

Now, more than a year after the launch of Tonga, a new part featuring a fully enabled memory bus doesn't seem realistic, but it's still interesting to know that significantly more memory bandwidth is locked away from owners of these cards.
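For a rough sense of what is being left on the table, peak GDDR5 bandwidth is just the bus width in bytes multiplied by the effective data rate. The sketch below uses the R9 380X's stock 5.7 GHz effective memory clock and assumes, hypothetically, that a 384-bit part would run its memory at the same speed.

```c
#include <stdio.h>

/* Peak GDDR5 bandwidth in GB/s = (bus width in bits / 8) * effective
 * data rate in GHz. The 5.7 GHz figure is the R9 380X's stock memory
 * clock; a 384-bit Tonga at that clock is hypothetical. */
static double bandwidth_gbs(int bus_bits, double effective_ghz)
{
    return (bus_bits / 8.0) * effective_ghz;
}

int main(void)
{
    printf("256-bit @ 5.7 GHz effective: %.1f GB/s\n", bandwidth_gbs(256, 5.7));
    printf("384-bit @ 5.7 GHz effective: %.1f GB/s\n", bandwidth_gbs(384, 5.7));
    return 0;
}
```

At the same memory clock, the wider bus works out to a 50% increase in peak bandwidth.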

Podcast #377 - AMD Radeon Software Crimson, our Holiday Gift Guide, Scott Wasson moving to AMD and more!

Subject: General Tech | December 3, 2015 - 03:40 PM |
Tagged: podcast, video, amd, radeon software, crimson, holiday gift guide, ATIC, GLOBALFOUNDRIES, raspberry pi zero, scott wasson, tech report, Thinkpad, yoga p40

PC Perspective Podcast #377 - 12/03/2015

Join us this week as we discuss AMD Radeon Software Crimson, our Holiday Gift Guide, Scott Wasson moving to AMD and more!

You can subscribe to us through iTunes and you can still access it directly through the RSS page HERE.

The URL for the podcast is: http://pcper.com/podcast - Share with your friends!

  • iTunes - Subscribe to the podcast directly through the iTunes Store
  • RSS - Subscribe through your regular RSS reader
  • MP3 - Direct download link to the MP3 file

Hosts: Jeremy Hellstrom, Josh Walrath, and Sebastian Peak

Subscribe to the PC Perspective YouTube Channel for more videos, reviews and podcasts!!

The Tech Report's Scott Wasson Leaves for AMD

Subject: General Tech | December 2, 2015 - 11:39 PM |
Tagged: amd

Update 2 (11:50pm): Okay, so. Scott Wasson was asked by Raja Koduri to join the Radeon Technologies Group, with the intention of applying the work he did with (and surrounding) his frame time benchmarking to improve user experience. Scott Wasson will step down as Editor-in-Chief of The Tech Report and promote Jeff Kampman in his place.

Update: Scott Wasson has just published a blog post about it, naturally minutes after I wrote this. We'll add more details above this as we digest them. Original news below!

AMD has just announced that Scott Wasson, Editor-in-Chief at The Tech Report, will leave his site and join their ranks. Details are still scarce because of how fresh this announcement is, but he will join the company to lead User Experience. Scott Wasson is a good friend of PC Perspective and our Editor-in-Chief, Ryan Shrout. They shared notes during the development of Frame Rating.

amd-2015-scottwasson.jpg

Ryan is still visiting AMD for the scheduled briefing, and will probably be talking more about this over the next couple of days. Scott Wasson's new position at the chip designer will take effect in January. We don't yet know how this will affect The Tech Report itself, whether someone will take over or not. Ryan broke the news on our most recent podcast from the event.

Sony Unlocks One More Jaguar Core in the PS4

Subject: Systems | November 29, 2015 - 09:12 PM |
Tagged: sony, playstation 4, ps4, amd, Jaguar, APU

Of the eight Jaguar cores that Sony added to the PlayStation 4 APU, two were locked down for the console's operating system and other tasks. This left developers with six to push their workloads through. The same was true of the Xbox One until Microsoft released an update last year, which unlocked one more to give seven.

sony-2015-ps4-soc.jpg

NeoGAF users report that, allegedly, PlayStation 4 games can now utilize seven of the eight cores after a recent SDK update from Sony. They source a recent changelist for FMOD, a popular audio management library for PC, mobile, and console platforms, which references targeting “the newly unlocked 7th core.”

Since this is not an official Sony announcement, at least not publicly, we don't know some key details. For instance, is the core completely free, or will the OS still push tasks onto it during gameplay? Will any features be disabled if the seventh core is targeted? How frequently will the seventh core be blocked, if ever? What will happen if you block it, if anything? The Xbox One is said to use about 20% of its unlocked seventh core for Microsoft-related tasks, and claiming the remaining 80% is said to disable voice recognition and Kinect features.

The Xbox One and PlayStation 4 are interesting devices to think about. They go low frequency, but wide, in performance, similar to many mobile devices. They also utilize a well-known instruction set, x86, which obviously has a huge catalog of existing libraries and features. I don't plan on ever buying another console, but they move with the industry and have a fairly big effect on it (albeit much less than previous generations did).

Source: NeoGAF

Fix Coming for AMD Radeon Software Crimson Edition

Subject: Graphics Cards | November 29, 2015 - 05:25 PM |
Tagged: amd, graphics driver, radeon, crimson

Users have been reporting that the latest AMD graphics driver, Radeon Software Crimson Edition, has been incorrectly setting fan speeds. Some users report that the driver spins fans up to 100%, while others report that fans slow to 30% regardless of load.

amd-2015-crimson-logo.png

Over the weekend, AMD acknowledged the issue and claims that a fix is planned for Monday.

Some users also claim that the card will stick with that fan setting until it cooks itself. This seems odd to me, since GPUs (and CPUs, of course) are now designed to down-volt if temperatures reach unsafe levels, and even cut power entirely if heat cannot be managed. We haven't really seen reports of graphics cards cooking themselves since the Radeon HD 5000 series implemented hardware protections in response to Furmark and OCCT. That said, the driver bug might somehow override these hardware protections.

In the meantime, you'll either want to keep an eye on your fan settings and reset them as necessary, or roll back to the previous driver. AMD didn't comment on the high fan speed issue that some users were complaining about, so I'm not sure if this fix will address both issues.

AMD GPU Architectures pre-GCN Are Now Legacy

Subject: Graphics Cards | November 26, 2015 - 03:09 PM |
Tagged: amd, graphics drivers, GCN, terascale

The Graphics Core Next (GCN) architecture is now a minimum requirement for upcoming AMD graphics drivers. If your graphics card (or APU) uses the TeraScale family of microarchitectures, then your last expected WHQL driver is AMD Catalyst 15.7.1 for Windows 7, 8.x, and 10. You aren't entirely left out of Radeon Software Crimson Edition, however. The latest Crimson Edition Beta driver is compatible with TeraScale, but the upcoming certified one will not be.

AMD-Catalyst.jpg

GCN was introduced with the AMD Radeon HD 7000 series, although it was only used in the Radeon HD 7700 series GPUs and above. The language doesn't seem to rule out an emergency driver release, such as if Microsoft breaks something in a Windows 10 update that causes bluescreens and fire on older hardware, but AMD doesn't say that they will do one, either. NVIDIA made a similar decision to deprecate pre-Fermi architectures back in March of 2014, which applied to the release of the GeForce 343 drivers in September of that year. Extended support for NVIDIA's old cards ends on April 1st, 2016.

I wonder why AMD chose a beta driver to stop with, though. If AMD intended to support TeraScale with Crimson, then why wouldn't they keep it supported until at least the first WHQL-certified version? If they didn't intend to support TeraScale, then why go through the effort of supporting it in the beta driver? This implies that AMD reached a hurdle with TeraScale that they didn't want to overcome. That may not be the case, but it's the first thing that comes to my mind nonetheless. Probably the best way to tell is to see how people with Radeon HD 6000-series (or lower-end 7000/8000-series) cards fare with Radeon Software Crimson Beta.

The last drivers that users with Radeon HD 6000-series graphics are likely to need are Catalyst 15.7.1 or the Radeon Software Crimson Edition Beta. We will soon learn which of the two holds up best long-term.

Or, of course, you can buy a newer GPU / APU when you get a chance.

Source: AMD