Subject: Graphics Cards | March 10, 2016 - 01:27 PM | Sebastian Peak
Tagged: XConnect, thunderbolt 3, radeon, graphics card, gpu, gaming laptop, external gpu, amd
AMD has announced their new external GPU technology called XConnect, which leverages support from the latest Radeon driver to support AMD graphics over Thunderbolt 3.
The technology showcased by AMD is powered by Razer, who partnered with AMD to come up with an expandable solution that supports up to 375W GPUs, including R9 Fury, R9 Nano, and all R9 300 series GPUs up to the R9 390X (there is no liquid cooling support, and the R9 Fury X isn't listed as being compatible). The notebook in AMD's marketing material is the Razer Blade Stealth, which offers the Razer Core external GPU enclosure as an optional accessory. (More information about these products from Razer here.) XConnect is not tied to any vendor, however; this is "generic driver" support for GPUs over Thunderbolt 3.
AMD has posted this video with the head of Global Technical Marketing, Robert Hallock, to explain the new tech and show off the Razer hardware:
The exciting part has to be the promise of an industry standard for external graphics, something many have hoped for. Not everyone will produce a product exactly like Razer has, since there is no requirement to provide a future upgrade path in a larger enclosure like this, but the important thing is that Thunderbolt 3 support is built into the newest Radeon Crimson drivers.
Here are the system requirements for AMD XConnect from AMD:
- Radeon Software 16.2.2 driver (or later)
- 1x Thunderbolt 3 port
- 40Gbps Thunderbolt 3 cable
- Windows 10 build 10586 (or later)
- BIOS support for external graphics over Thunderbolt 3 (check with system vendor for details)
- Certified Thunderbolt 3 graphics enclosure configured with supported Radeon R9 Series GPU
- Thunderbolt firmware (NVM) v.16
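The requirements above amount to a simple checklist. As a minimal sketch, here is how a script might validate a machine profile against them; the `profile` dictionary and its field names are invented for illustration, since a real check would have to query the OS, BIOS, and Thunderbolt firmware directly:

```python
# Hypothetical sketch: validating a machine profile against AMD's published
# XConnect requirements. Field names are invented for illustration.

REQUIRED_DRIVER = (16, 2, 2)      # Radeon Software 16.2.2 or later
REQUIRED_WIN_BUILD = 10586        # Windows 10 build 10586 or later
REQUIRED_TB_NVM = 16              # Thunderbolt firmware (NVM) v.16

def meets_xconnect_requirements(profile: dict) -> list:
    """Return a list of unmet requirements (empty list means eligible)."""
    problems = []
    if tuple(profile["radeon_driver"]) < REQUIRED_DRIVER:
        problems.append("Radeon Software older than 16.2.2")
    if profile["windows_build"] < REQUIRED_WIN_BUILD:
        problems.append("Windows 10 build older than 10586")
    if profile["tb3_ports"] < 1:
        problems.append("no Thunderbolt 3 port")
    if not profile["bios_external_gpu"]:
        problems.append("BIOS lacks external-GPU support")
    if profile["tb_nvm_version"] < REQUIRED_TB_NVM:
        problems.append("Thunderbolt NVM firmware older than v.16")
    return problems

# Example: a profile resembling the Razer Blade Stealth passes cleanly.
blade = {
    "radeon_driver": (16, 2, 2),
    "windows_build": 10586,
    "tb3_ports": 1,
    "bios_external_gpu": True,
    "tb_nvm_version": 16,
}
print(meets_xconnect_requirements(blade))  # []
```

Note that the BIOS requirement is the one most likely to trip people up, since it can't be patched from software and depends entirely on the system vendor.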
The announcement introduces all sorts of possibilities. How awesome would it be to see a tiny solution with an R9 Nano powered by, say, an SFX power supply? Or what about a dual-GPU enclosure (possibly requiring 2 Thunderbolt 3 connections?), or an enclosure supporting liquid cooling (and the R9 Fury X)? The potential is certainly there, and with a standard in place we could see some really interesting products in the near future (or even DIY solutions). It's a promising time for mobile gaming!
Subject: Graphics Cards | March 9, 2016 - 07:42 PM | Scott Michaud
Tagged: amd, radeon, graphics drivers, vulkan, dx12, DirectX 12
New graphics drivers from AMD have just been published, and it's a fairly big release. First, Catalyst 16.3 adds Vulkan support to main-branch drivers, which they claim is conformant to the 1.0 specification. The Khronos Group website still doesn't list AMD as conforming, but I assume that they will be added shortly (rather than some semantic distinction between "conformant" and "fully conformant" going on). This is great for the platform, as we are still in the launch window of DirectX 12.
Performance has apparently increased significantly as well. This is especially true in the DirectX 12 title, Gears of War Ultimate Edition. AMD claims that the Fury X will see up to a 60% increase in that title, and the R9 380 will gain up to 44%. It's unclear how much that is in real-world performance, especially in terms of stutter and jank, which apparently plague that game.
The driver also has a few other interesting features. One that I don't quite understand is "Power Efficiency Toggle". This supposedly "allows the user to disable some power efficiency optimizations". I would assume that means keeping your GPU up-clocked under certain conditions, but I don't believe that was much of an issue for the last few generations. That said, the resolved issues section claims that some games were choppy because of core clock fluctuation, and lists this option as the solution, so maybe it was. It is only available on "select" Radeon 300 GPUs and the Fury X. That is, the Fury X specifically, not the regular Fury or the Nano. I expect Ryan will be playing around with it in the next little while.
Last of the main features, the driver adds support for XConnect, AMD's new external graphics standard. It requires a BIOS that supports external GPUs; AMD lists the Razer Blade Stealth as one such system. Also noteworthy, Eyefinity can now be enabled with just two displays, and Display Scaling can be set per-game. I avoid configuring drivers, even for my Wacom tablet, to target specific applications, but that's probably great for those who do.
As a final note: the Ashes of the Singularity 2.0 benchmark now supports DirectFlip.
If you have a recent AMD GPU, grab the drivers from AMD's website.
Subject: Graphics Cards | March 9, 2016 - 03:30 PM | Scott Michaud
Tagged: ubuntu, graphics drivers, graphics driver, amd
AMD has been transitioning their kernel driver from the closed-source fglrx to the open-source AMDGPU driver that was announced last year. This forms the base that both closed and open user-mode drivers will utilize. For the upcoming Ubuntu 16.04 LTS, Canonical has decided to deprecate fglrx and remove it from the system upon upgrade. Users can then choose to install an AMDGPU-based one, or reinstall the Radeon driver. That will need to be done without Canonical's support, though.
It makes sense that they would choose Ubuntu 16.04 to pull the plug. This is the version that Canonical will be maintaining for the next five years, which could be a headache given that AMD has spent the last year trying to move away from fglrx. AMDGPU is a much safer target as the years roll forward. On the other hand, GPUs prior to Fiji will not have the luxury of choosing, because AMD still hasn't announced AMDGPU support for GCN 1.0 and 1.1. (Update March 9th @ 6pm: Fixed typo)
Subject: Graphics Cards | March 3, 2016 - 03:00 PM | Ryan Shrout
Tagged: uwp, radeon, dx12, amd
AMD's Robert Hallock, frequenter of the PC Perspective live streams and a favorite of the team here, is doing an AMAA on reddit today. While you can find some excellent information and views from Robert in that Q&A session, two particular answers stood out to me.
Asked by user CataclysmZA: Can you comment on the recent developments regarding Ashes of the Singularity and DirectX 12 in PC Perspective and Extremetech's tests? Will changes in AMD's driver to include FlipEx support fix the framerate issues and allow high-refresh monitor owners to enjoy their hardware fully? http://www.pcper.com/reviews/General-Tech/PC-Gaming-Shakeup-Ashes-Singularity-DX12-and-Microsoft-Store
Answer from Robert: We will add DirectFlip support shortly.
Well, there you have it. This is the first official notice I have from AMD that it is in fact its driver that was causing the differences in behavior between Radeon and GeForce cards in Ashes of the Singularity last week. It appears that a new driver will be incoming (sometime) that will enable DirectFlip / FlipEx, allowing exclusive full screen modes in DX12 titles. Some of our fear of the unknown can likely be resolved - huzzah!
Ashes of the Singularity wouldn't enter exclusive full screen mode on AMD Radeon hardware.
Another question also piqued my interest:
Asked by user CataclysmZA: Can you comment on how FreeSync is affected by the way games sold through the Windows Store run in borderless windowed mode?
Answer from Robert: This article discusses the issue thoroughly. Quote: "games sold through Steam, Origin [and] anywhere else will have the ability to behave with DX12 as they do today with DX11."
While not exactly spelling it out, this answer seems to indicate that for the time being, AMD doesn't think FreeSync will work with Microsoft Store sold games in the forced borderless windowed mode. NVIDIA has stated that G-Sync works in some scenarios with the new Gears of War (a Universal Windows Platform app), but it seems they too have issues.
As more information continues to come in, from whatever sources we can validate, I'll keep you updated!
Clockspeed Jump and More!
On March 1st AMD announced the availability of two new processors as well as more information on the A10 7860 APU.
The two new units are the A10-7890K and the Athlon X4 880K. These are both Kaveri-based parts, but of course the Athlon has the GPU portion disabled. Product refreshes for the past several years have followed a far different schedule than the days of yore. Remember when the Phenom II series and the competing Core 2 series would get clockspeed updates yearly, if not every half year, with a slightly faster top-end performer to garner top dollar from consumers?
Things have changed, for better or worse. We have so far seen two clockspeed bumps for the Kaveri/Godavari based APUs. Kaveri was first introduced over two years ago with the A10-7850K and its lower-end derivatives. The 7850K has a clockspeed that ranges from 3.7 GHz to a max of 4 GHz with boost, while the GPU portion is clocked at 720 MHz. It is a 95 watt TDP part built on GLOBALFOUNDRIES' 28 nm HKMG process.
Today the new top-end A10-7890K is clocked at 4.1 GHz with a 4.3 GHz max boost. The GPU receives a significant boost in performance with a clockspeed of 866 MHz. The combination of CPU and GPU clockspeed increases pushes the total compute performance of the part beyond 1 TFLOPS. It features the same dual module/quad core Godavari design as well as the same 8 GCN units. The interesting part here is that the APU does not exceed the 95 watt TDP that it shares with the older and slower 7850K. It is also a boost in performance over last year's refresh, the A10-7870K, which is clocked 200 MHz slower on the CPU portion but has the same 866 MHz GPU clock. This APU is fully unlocked, so a user can easily overclock both the CPU and GPU cores.
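The ">1 TFLOPS" figure checks out on the back of an envelope. As a rough sketch (the per-clock FLOP figures are my assumptions, not AMD's published math): 8 GCN compute units at 64 stream processors each gives 512 shaders, each doing 2 FLOPs per clock via FMA, and each CPU core can retire roughly 8 FLOPs per clock at the boost frequency:

```python
# Back-of-the-envelope check of the ">1 TFLOPS" combined-compute claim.
# Assumed (not from AMD's announcement): 8 GCN CUs x 64 SPs = 512 shaders,
# 2 FLOPs/clock (FMA) on the GPU, 8 FLOPs/clock per CPU core at boost.

gpu_flops = 512 * 2 * 866e6        # shaders * FMA * 866 MHz GPU clock
cpu_flops = 4 * 8 * 4.3e9          # cores * FLOPs/clock * 4.3 GHz boost
total_tflops = (gpu_flops + cpu_flops) / 1e12
print(f"{total_tflops:.2f} TFLOPS")
```

The GPU contributes roughly 887 GFLOPS and the CPU another ~138 GFLOPS, landing just over the 1 TFLOPS mark.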
The Athlon X4 880K is still based on the Godavari family rather than the Carrizo update that the X4 845 uses, and it again retains the 95 watt TDP rating of the previous Athlon X4 CPUs. Previously the X4 860K was the highest clocked unit at 3.7 GHz base and 4.0 GHz boost; the 880K raises that to 4.0 GHz and 4.2 GHz. A 300 MHz gain in base clock is pretty significant, as is stretching the boost ceiling to 4.2 GHz. The Godavari modules retain their full amount of L2 cache, so the 880K has 4 MB available to it. These parts are very popular with budget enthusiasts and gaming builds, as they are extremely inexpensive, perform at an acceptable level, and have free overclocking thrown in.
Earlier this week Samsung formally made a couple of announcements for new monitors due out this spring. The CF591 and CF390 range in size from 23 to 27 inches, mating a 1920x1080 resolution with an 1800R curvature and an attractive design. Even better news for gamers, all of the monitors in these two series will offer AMD's variable refresh rate technology known as FreeSync over HDMI.
The specifications of the monitors are interesting in their own light. The CF390 will be available in both 23.5-in and 27-in varieties, with a 1920x1080 resolution on a VA panel, a 4ms response time rating and a maximum brightness of 250 nits. The VA technology allows for solid viewing angles and color reproduction though all of them are limited to a 60Hz maximum refresh rate. The CF591 monitor is only available in a 27-in variety, shares almost all of the same traits, but sheds the glossy black design for a silver and white color option.
The CF390 features only VGA (D-Sub) and HDMI inputs, while the CF591 offers VGA, dual HDMI, and a single DisplayPort connection as well. Only the CF591 allows for audio input through a 3.5mm connection.
The supposed value of HDMI-based FreeSync is ubiquity and lower cost. Unfortunately, we don't have any pricing information from Samsung on either the CF390 or CF591 monitors, leaving a big question mark for AMD Radeon gamers that might be looking for a new display. Also, while the CF390 directly benefits from the addition of HDMI support on FreeSync, the CF591 still has a DisplayPort connection, meaning the value of HDMI-based FreeSync is lessened.
The 60Hz maximum refresh rate is disappointing in a world where 75Hz, 90Hz, even 165Hz monitors are being released left and right. Will the AMD driver-based frame doubling technology work on these displays? I have an inquiry in to AMD to verify, but it might be difficult with the VA panels' minimum refresh rate. To be fair to AMD and Samsung though, this isn't marketed as a gaming monitor, just a monitor that happens to have a very gaming friendly option.
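To see why the 60Hz ceiling makes frame doubling dicey, consider how the technique works in principle: when the game's frame rate drops below the panel's VRR minimum, the driver repeats each frame enough times to land the effective refresh rate back inside the VRR window. The sketch below is illustrative only; AMD's actual heuristic isn't public, and the 40-60Hz window assumed here for a 60Hz FreeSync-over-HDMI panel is my guess at a typical range:

```python
# Illustrative sketch of driver-side frame doubling (in the spirit of AMD's
# Low Framerate Compensation). Assumed, not AMD's actual algorithm.

def lfc_multiplier(fps: float, vrr_min: float, vrr_max: float) -> int:
    """Smallest repeat count that lifts `fps` into [vrr_min, vrr_max]."""
    if fps >= vrr_min:
        return 1                      # already inside the window
    n = 1
    while fps * (n + 1) <= vrr_max and fps * n < vrr_min:
        n += 1                        # double, triple, ... while it still fits
    return n

# A narrow 40-60Hz window leaves little room to double:
print(lfc_multiplier(25, 40, 60))   # 2 -> effective 50Hz, inside the window
print(lfc_multiplier(35, 40, 60))   # 1 -> stuck at 35Hz; doubling would overshoot 60Hz
```

With a wide window like 48-144Hz, almost any low frame rate can be multiplied into range; with a 60Hz panel, frame rates in the upper 30s have nowhere to go, which is exactly the concern with these displays.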
Both of these monitors look pretty sexy though; we need to see and test them in person to see if the image quality and FreeSync performance meet our expectations. Hopefully we'll be able to do so soon, but until then, let's hope that Samsung is able to release these at very competitive prices to help drive down the cost of VRR.
Things are about to get...complicated
Earlier this week, the team behind Ashes of the Singularity released an updated version of its early access game, which updated its features and capabilities. With support for DirectX 11 and DirectX 12, and adding in multiple graphics card support, the game featured a benchmark mode that got quite a lot of attention. We saw stories based on that software posted by Anandtech, Guru3D and ExtremeTech, all of which had varying views on the advantages of one GPU or another.
That isn’t the focus of my editorial here today, though.
Shortly after the initial release, a discussion began around results from the Guru3D story that measured frame time consistency and smoothness with FCAT, a capture based testing methodology much like the Frame Rating process we have here at PC Perspective. In that post on ExtremeTech, Joel Hruska claims that the results and conclusion from Guru3D are wrong because the FCAT capture methods make assumptions on the output matching what the user experience feels like. Maybe everyone is wrong?
First a bit of background: I have been working with Oxide and the Ashes of the Singularity benchmark for a couple of weeks, hoping to get a story that I was happy with and felt was complete, before having to head out the door to Barcelona for the Mobile World Congress. That didn’t happen – such is life with an 8-month old. But, in my time with the benchmark, I found a couple of things that were very interesting, even concerning, that I was working through with the developers.
FCAT overlay as part of the Ashes benchmark
First, the initial implementation of the FCAT overlay, which Oxide should be PRAISED for including since we don't have, and likely won't have, a universal DX12 variant of FCAT, was implemented incorrectly, with duplication of color swatches that made the results from capture-based testing inaccurate. I don't know if Guru3D used that version for its FCAT testing, but I was able to get some updated EXEs of the game from the developer in order to get the overlay working correctly. Once that was corrected, I found yet another problem: an issue of frame presentation order on NVIDIA GPUs that likely has to do with asynchronous shaders. Whether that issue is on the NVIDIA driver side or the game engine side is still being investigated by Oxide, but it's interesting to note that this problem couldn't have been found without a proper FCAT implementation.
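For readers unfamiliar with why a duplicated color swatch matters: the FCAT overlay draws a bar whose color changes with every frame the game presents, and capture analysis works by counting how many scanlines of the recorded video each color occupies, converting those run lengths into on-screen frame times. A minimal sketch of that counting step (the palette, resolution, and numbers here are illustrative, not FCAT's actual sequence):

```python
# Rough sketch of FCAT-style capture analysis: group consecutive identical
# overlay colors in a captured frame and convert scanline counts to ms.
# Values are illustrative, not FCAT's real 16-color sequence.

def frame_times_from_scanlines(scanline_colors, lines_per_frame=1080, hz=60):
    """Group runs of identical colors and convert line counts to milliseconds."""
    ms_per_line = 1000.0 / (hz * lines_per_frame)  # scan time per line at 1080p60
    times, run = [], 1
    for prev, cur in zip(scanline_colors, scanline_colors[1:]):
        if cur == prev:
            run += 1
        else:
            times.append(run * ms_per_line)
            run = 1
    times.append(run * ms_per_line)
    return times

# Three frames shown within one 60Hz refresh: half the screen, then two quarters.
capture = [0] * 540 + [1] * 270 + [2] * 270
print(frame_times_from_scanlines(capture))
```

If a broken overlay assigns the same color to two different frames, their runs merge and the analysis reports one long frame where there were actually two short ones, which is exactly the kind of silent inaccuracy that made the initial swatch duplication a problem.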
With all of that under the bridge, I set out to benchmark this latest version of Ashes and DX12 to measure performance across a range of AMD and NVIDIA hardware. The data showed some abnormalities, though. Some results just didn’t make sense in the context of what I was seeing in the game and what the overlay results were indicating. It appeared that Vsync (vertical sync) was working differently than I had seen with any other game on the PC.
For the NVIDIA platform, tested using a GTX 980 Ti, the game seemingly randomly starts up with Vsync on or off, with no clear indicator of what was causing it, despite the in-game settings being set how I wanted them. But the Frame Rating capture data was still working as I expected - just because Vsync is enabled doesn't mean you can't look at the results in capture format. I have written stories on what Vsync-enabled captured data looks like and what it means as far back as April 2013. Obviously, to get the best and most relevant data from Frame Rating, setting vertical sync off is ideal. Running into more frustration than answers, I moved over to an AMD platform.
Subject: Graphics Cards | February 29, 2016 - 02:06 PM | Scott Michaud
Tagged: nvidia, amd, AIB, pc gaming
Jon Peddie Research, a market analysis firm that specializes in PC hardware, has compiled another report about add-in board (AIB) sales. There are a few interesting aspects to this report. First, shipments of enthusiast AIBs (ie: discrete GPUs) are up, not by a handful of percent, but two-fold. Second, AMD's GPU market share climbed once again, from 18.8% up to 21.1%.
This image seems to contradict their report, which claims the orange line rose from 44 million in 2014 to 50 million in 2015. I'm not sure where the error is, so I didn't mention it in the news post.
Image Credit: JPR
The report claims that neither AMD nor NVIDIA released a "killer new AIB in 2015." That... depends on how you look at it. They're clearly referring to the upper mainstream cards, which sit just below the flagships and contribute a large chunk of enthusiast sales. If they were including the flagships, then they ignored the Titan X, 980 Ti, and Fury line of GPUs, which would just be silly. Since they were counting shipped units, though, it makes sense to neglect those SKUs, because they are priced well above the inflection point in actual adoption.
Image Credit: JPR
But that's not the only “well... sort-of” with JPR's statement. Unlike most generations, the GTX 970 and 980 launched late in 2014, rather than their usual Spring-ish cadence. Apart from the GeForce GTX 580, this trend has been around since the GeForce 9000-series. As such, these 2014 launches could have similar influence as another year's early-2015 product line. Add a bit of VR hype, and actual common knowledge that consoles are lower powered than PCs this generation, and you can see these numbers make a little more sense.
Even still, a 100% increase in enthusiast AIB shipments is quite interesting. This doesn't only mean that game developers can target higher-end hardware. The same hardware to consume content can be used to create it, which boosts both sides of the artist / viewer conversation in art. Beyond its benefits to society, this could snowball into more GPU adoption going forward.
Subject: General Tech | February 11, 2016 - 12:27 PM | Ken Addison
Tagged: vr edition, video, UMC, ue4, podcast, phanteks, nvidia, logitech, GTX 980 Ti, g810, evga, enthoo evolv itx, asrock, arm, amd, 28HPCU
PC Perspective Podcast #386 - 02/10/2016
Join us this week as we discuss the Logitech G810, Phanteks Enthoo EVOLV ITX, GTX 980 Ti VR Edition and more!
The URL for the podcast is: http://pcper.com/podcast - Share with your friends!
- iTunes - Subscribe to the podcast directly through the Store (audio only)
- RSS - Subscribe through your regular RSS reader (audio only)
- MP3 - Direct download link to the MP3 file
Hosts: Ryan Shrout, Jeremy Hellstrom, Josh Walrath, and Allyn Malventano
Program length: 1:30:34
Week in Review:
0:26:20 EVGA 750W GQ Power Supply Review
0:36:45 This week’s podcast is brought to you by Casper. Use code PCPER at checkout for $50 towards your order!
News items of interest:
Hardware/Software Picks of the Week
Early testing for higher end GPUs
UPDATE 2/5/16: Nixxes released a new version of Rise of the Tomb Raider today with some significant changes. I have added another page at the end of this story that looks at results with the new version of the game, a new AMD driver and I've also included some SLI and CrossFire results.
I will fully admit to being jaded by the industry on many occasions. I love my PC games and I love hardware, but it takes a lot for me to get genuinely excited about anything. After hearing game reviewers talk up the newest installment of the Tomb Raider franchise, Rise of the Tomb Raider, since its release on the Xbox One last year, I've been waiting for its PC release to give it a shot with real hardware. As you'll see in the screenshots and video in this story, the game doesn't appear to disappoint.
Rise of the Tomb Raider takes the exploration and "tomb raiding" aspects that made the first games in the series successful and applies them to the visual quality and character design brought in with the reboot of the series a couple years back. The result is a PC game that looks stunning at any resolution, but even more so in 4K, that pushes your hardware to its limits. For single GPU performance, even the GTX 980 Ti and Fury X struggle to keep their heads above water.
In this short article we'll look at the performance of Rise of the Tomb Raider with a handful of GPUs, leaning towards the high end of the product stack, and offer up my view on whether each hardware vendor is living up to expectations.