AMD Releases Radeon Software Crimson Edition 16.5.1 Beta

Subject: Graphics Cards | May 9, 2016 - 02:05 PM |
Tagged: amd, graphics drivers, crimson

This is good to see. AMD has released Radeon Software Crimson Edition 16.5.1 to align with the release of Forza Motorsport 6: Apex. The drivers are classified as Beta, and so is the game, coincidentally, which means 16.5.1 is not WHQL-certified. That doesn't carry the weight it used to, though. The only listed feature is a performance improvement in that title, especially for the R9 Fury X graphics card. Game-specific optimizations around launch appear to be getting more consistent, which is an area AMD historically needed to improve upon.


There are a handful of known issues, but they don't seem particularly concerning. The AMD Gaming Evolved overlay may crash in some titles, and The Witcher 3 may flicker in CrossFire, both of which could be annoying if they affect a game you have been focusing on, but that's about it. There might be other issues (and improvements) that are not listed in the notes, but that's all I have to go on at the moment.

If you're interested in Forza 6: Apex, check out AMD's download page.

Source: AMD

NVIDIA GeForce GTX 1080 and GTX 1070 Announced

Subject: Graphics Cards | May 6, 2016 - 10:38 PM |
Tagged: pascal, nvidia, GTX 1080, gtx 1070, GP104, geforce

So NVIDIA has announced their next generation of graphics processors, based on the Pascal architecture. They introduced it as “a new king,” because they claim that it is faster than the Titan X, even at a lower power. It will be available “around the world” on May 27th for $599 USD (MSRP). The GTX 1070 was also announced, with slightly reduced specifications, and it will be available on June 10th for $379 USD (MSRP).


Pascal is fabricated on TSMC's 16nm process, which gives NVIDIA a lot of headroom. The GTX 1080 has fewer shaders than the Titan X, but runs at a significantly higher clock rate. It also uses GDDR5X, which is an incremental improvement over GDDR5. We knew it wasn't going to use HBM2, like Big Pascal does, but it's interesting that they did not stick with old, reliable GDDR5.



The full specifications of the GTX 1080 are as follows:

  • 2560 CUDA Cores
  • 1607 MHz Base Clock (8.2 TFLOPs)
  • 1733 MHz Boost Clock (8.9 TFLOPs)
  • 8GB GDDR5X Memory at 320 GB/s (256-bit)
  • 180W Listed Power (Update: uses 1x 8-pin power)

We do not currently have the specifications of the GTX 1070, apart from it being 6.5 TFLOPs.
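
As a quick sanity check on the numbers listed above, the TFLOPS figures follow from the CUDA core count and clock (2 FLOPs per core per clock, for fused multiply-add), and the 320 GB/s figure implies a 10 Gbps GDDR5X data rate on the 256-bit bus; that data rate is our inference, not something taken from NVIDIA's announcement. A minimal sketch:

```python
# Sanity-check the GTX 1080 figures listed above.
# Assumes 2 FLOPs per CUDA core per clock (FMA), which is how NVIDIA
# typically quotes peak single-precision throughput.

def peak_tflops(cuda_cores, clock_mhz, flops_per_clock=2):
    return cuda_cores * clock_mhz * 1e6 * flops_per_clock / 1e12

def bandwidth_gbs(bus_width_bits, data_rate_gbps):
    return bus_width_bits / 8 * data_rate_gbps

print(peak_tflops(2560, 1607))   # ~8.2 TFLOPS at base clock
print(peak_tflops(2560, 1733))   # ~8.9 TFLOPS at boost clock
print(bandwidth_gbs(256, 10))    # 320.0 GB/s, implying ~10 Gbps GDDR5X
```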


It also looks like it has five display outputs: 3x DisplayPort 1.2, which are “ready” for 1.3 and 1.4, 1x HDMI 2.0b, and 1x DL-DVI. They do not explicitly state that all three DisplayPorts will run on the same standard, even though that seems likely. They also do not state whether all five outputs can be used simultaneously, but I hope they can be.


They also have a new SLI bridge, called the SLI HB Bridge, which is supposed to have double the bandwidth of Maxwell's. I'm not sure what that will mean for multi-GPU systems, but it will probably be something we find out about soon.

Source: NVIDIA

NVIDIA GeForce "GTX 1080" Benchmark Leaked

Subject: Graphics Cards | May 5, 2016 - 02:38 PM |
Tagged: nvidia, pascal, geforce

We're expecting a major announcement tomorrow... at some point. NVIDIA created a teaser website, called “Order of 10,” that is counting down to 1 PM EDT. On the same day, at 9 PM EDT, they will have a live stream on their Twitch channel. This wasn't planned as far in advance as their Game24 event, which turned out to be a GTX 970 and GTX 980 launch party, but it wouldn't surprise me if it ended up being a similar format. I don't know for sure whether one or both events will be about the new mainstream Pascal, but it would be surprising if Friday ends (for North America) without a GPU launch of some sort.


VideoCardz got ahold of 3DMark Fire Strike Extreme benchmarks, though. The card is registered as an 8GB part with a GPU clock of 1860 MHz. While a synthetic benchmark, let alone a single result of any kind, isn't necessarily representative of overall performance, it scores slightly higher than a reasonably overclocked GTX 980 Ti (and way above a stock one). Specifically, this card yields a graphics score of 10102 on Fire Strike Extreme 1.1, while the 980 Ti achieved 7781 for us without an overclock.
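
For a sense of scale, the leaked score works out to roughly 30% above our stock GTX 980 Ti result; a trivial calculation using only the two numbers quoted above:

```python
# Relative Fire Strike Extreme graphics-score uplift, using the two
# scores quoted above (leaked GP104 card vs. our stock GTX 980 Ti run).
leaked_card = 10102
stock_980_ti = 7781
uplift = (leaked_card / stock_980_ti - 1) * 100
print(f"{uplift:.1f}% higher")  # ~29.8% higher
```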

We expected a substantial bump in clock rate, especially after GP100 was announced at GTC. This “full” Pascal chip was listed at a 1328 MHz base clock, with a 1480 MHz boost. Enterprise GPUs are often underclocked compared to consumer parts, stock to stock. As stated a few times, overclocking could open up a huge gap, too; the GTX 980 Ti was able to go from 1190 MHz to 1465 MHz. On the other hand, consumer Pascal's recorded 1860 MHz could itself be an overclock. We won't know until NVIDIA makes an official release. If not, maybe we could see these new parts break 2 GHz in general use?
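
As a back-of-the-envelope exercise (pure speculation, not a claim about how the two chips' clocks are actually related), applying the GTX 980 Ti's overclocking headroom to GP100's boost clock lands in the same neighborhood as the leaked figure:

```python
# Scale GP100's boost clock by the overclocking headroom we saw on the
# GTX 980 Ti (1190 MHz -> 1465 MHz). Purely illustrative speculation.
gtx_980_ti_stock_mhz = 1190
gtx_980_ti_oc_mhz = 1465
gp100_boost_mhz = 1480

headroom = gtx_980_ti_oc_mhz / gtx_980_ti_stock_mhz   # ~1.23x
print(round(gp100_boost_mhz * headroom))              # ~1822 MHz, near the leaked 1860 MHz
```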

Source: VideoCardz

This is your AMD APU. This is your AMD APU on DX12; any questions?

Subject: Graphics Cards | April 29, 2016 - 07:09 PM |
Tagged: amd, dx12, async shaders

Earlier in the month [H]ard|OCP investigated the performance scaling that Intel processors display in DX12; now they have finished their tests on AMD processors.  These tests include async compute information, so be warned before venturing forth into the comments.  [H] tested an FX 8370 at 2GHz and 4.3GHz to see what effect CPU clock speed had on the games; the 3GHz tests did not add any value and were dropped in favour of those two frequencies.  There are some rather interesting results and discussion, so drop by for the details.


"One thing that has been on our minds about the new DX12 API is its ability to distribute workloads better on the CPU side. Now that we finally have a couple of new DX12 games that have been released to test, we spend a bit of time getting to bottom of what DX12 might be able to do for you. And a couple sentences on Async Compute."


Source: [H]ard|OCP
Manufacturer: AMD

History and Specifications

The Radeon Pro Duo has had an interesting history. Originally shown as an unbranded, dual-GPU PCB at E3 2015 last June, the card was touted by AMD as the ultimate graphics card for both gamers and professionals. At that time, the company thought that an October launch was feasible, but that clearly didn’t work out. When pressed for information in the Oct/Nov timeframe, AMD said that it had delayed the product into Q2 2016 to better correlate with the launch of the VR systems from Oculus and HTC/Valve.

During a GDC press event in March, AMD finally unveiled the Radeon Pro Duo brand, but it was also walking back the idea of the dual-Fiji beast being aimed, even partially, at the gaming crowd. Instead, the company talked up the benefits for game developers and content creators, such as its 8192 stream processors for offline rendering, or as a tool to help game devs implement and improve multi-GPU support in upcoming games.


Anyone who pays attention to the graphics card market can see why AMD would make this positional shift with the Radeon Pro Duo. The Fiji architecture is on the way out, with Polaris due out in June by AMD’s own proclamation. At $1500, the Radeon Pro Duo will be a stark contrast to the prices of the Polaris GPUs this summer, and it sits well above the price of any part in NVIDIA's GeForce line. And, though CrossFire has made drastic improvements over the last several years thanks to new testing techniques, the ecosystem for multi-GPU is going through a major shift with both DX12 and VR bearing down on it.

So yes, the Radeon Pro Duo has both RADEON and PRO right there in the name. What’s a respectable PC Perspective graphics reviewer supposed to do with a card like that when it finds its way into the office? Test it, of course! I’ll take a look at a handful of recent games as well as a new feature that AMD has integrated with 3ds Max called FireRender to showcase some of the professional chops of the new card.

Continue reading our review of the AMD Radeon Pro Duo!!

AMD Radeon Crimson Edition 16.4.2 hits the streets

Subject: Graphics Cards | April 25, 2016 - 04:41 PM |
Tagged: graphics driver, crimson, amd

AMD's new Crimson driver has just been released with new features, including official support for the new Radeon Pro Duo as well as both the Oculus Rift and HTC Vive VR headsets.  It also adds enhanced support for AMD's XConnect technology for external GPUs connected via a Thunderbolt 3 interface.  CrossFire profile updates include Hitman, Elite Dangerous and Need for Speed, and AMD has also resolved the ongoing issue with the internal update procedure not seeing the newest drivers.  If you are having issues with games crashing to desktop on launch, you will still need to disable the AMD Gaming Evolved overlay, unfortunately.

Get 'em right here!


"The latest version of Radeon Software Crimson Edition is here with 16.4.2. With this version, AMD delivers many quality improvements, updated/introduced new CrossFire profiles and delivered full support for AMD’s XConnect technology (including plug’n’play simplicity for Thunderbolt 3 eGFX enclosures configured with Radeon R9 Fury, Nano or 300 Series GPUs.)  Best of all, our DirectX 12 leadership continues to be strong, as shown by the performance numbers below."


Source: AMD
Manufacturer: AMD

The Dual-Fiji Card Finally Arrives

This weekend, leaked information published on WCCFTech and other sites has revealed just about everything about the pending release of AMD’s dual-GPU giant, the Radeon Pro Duo. While no one at PC Perspective has been briefed on the product officially, all of the interesting data surrounding the card is clearly outlined in the slides on those websites, minus some independent benchmark testing that we are hoping to get to next week. Based on those reports, the Radeon Pro Duo will be released on April 26th.

AMD actually revealed the product and branding for the Radeon Pro Duo back in March, during its live streamed Capsaicin event surrounding GDC. At that point we were given the following information:

  • Dual Fiji XT GPUs
  • 8GB of total HBM memory
  • 4x DisplayPort (this has since been modified)
  • 16 TFLOPS of compute
  • $1499 price tag

The design of the card follows the same industrial design as the reference designs of the Radeon Fury X, and integrates a dual-pump cooler and external fan/radiator to keep both GPUs running cool.


Based on the slides leaked out today, AMD has revised the Radeon Pro Duo design to include a set of three DisplayPort connections and one HDMI port. This was a necessary change, as the Oculus Rift requires an HDMI port to work; only the HTC Vive has built-in support for a DisplayPort connection, and even in that case you would need a full-size to mini-DisplayPort cable.

The 8GB of HBM (high bandwidth memory) on the card is split between the two Fiji XT GPUs, just like other multi-GPU options on the market. The 350 watt power draw is exceptionally high, exceeded only by AMD’s previous dual-GPU beast, the Radeon R9 295X2, which used 500+ watts, and the NVIDIA GeForce GTX Titan Z, which draws 375 watts!


Here is the specification breakdown of the Radeon Pro Duo. The card has 8192 total stream processors and 128 Compute Units, split evenly between the two GPUs. You are getting two full Fiji XT GPUs in this card, an impressive feat made possible in part by the use of High Bandwidth Memory and its smaller physical footprint.

  Radeon Pro Duo R9 Nano R9 Fury R9 Fury X GTX 980 Ti TITAN X GTX 980 R9 290X
GPU Fiji XT x 2 Fiji XT Fiji Pro Fiji XT GM200 GM200 GM204 Hawaii XT
GPU Cores 8192 4096 3584 4096 2816 3072 2048 2816
Rated Clock up to 1000 MHz up to 1000 MHz 1000 MHz 1050 MHz 1000 MHz 1000 MHz 1126 MHz 1000 MHz
Texture Units 512 256 224 256 176 192 128 176
ROP Units 128 64 64 64 96 96 64 64
Memory 8GB (4GB x 2) 4GB 4GB 4GB 6GB 12GB 4GB 4GB
Memory Clock 500 MHz 500 MHz 500 MHz 500 MHz 7000 MHz 7000 MHz 7000 MHz 5000 MHz
Memory Interface 4096-bit (HBM) x 2 4096-bit (HBM) 4096-bit (HBM) 4096-bit (HBM) 384-bit 384-bit 256-bit 512-bit
Memory Bandwidth 1024 GB/s 512 GB/s 512 GB/s 512 GB/s 336 GB/s 336 GB/s 224 GB/s 320 GB/s
TDP 350 watts 175 watts 275 watts 275 watts 250 watts 250 watts 165 watts 290 watts
Peak Compute 16.38 TFLOPS 8.19 TFLOPS 7.20 TFLOPS 8.60 TFLOPS 5.63 TFLOPS 6.14 TFLOPS 4.61 TFLOPS 5.63 TFLOPS
Transistor Count 8.9B x 2 8.9B 8.9B 8.9B 8.0B 8.0B 5.2B 6.2B
Process Tech 28nm 28nm 28nm 28nm 28nm 28nm 28nm 28nm
MSRP (current) $1499 $499 $549 $649 $649 $999 $499 $329

The Radeon Pro Duo has a rated clock speed of up to 1000 MHz. That’s the same clock speed as the R9 Fury and the rated “up to” frequency on the R9 Nano. It’s worth noting that we did see a handful of instances where the R9 Nano’s power limiting capability resulted in some extremely variable clock speeds in practice. AMD recently added a feature to its Crimson driver to disable power metering on the Nano, at the expense of more power draw, and I would assume the same option would work for the Pro Duo.
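
For anyone checking the table's math, the headline figures are simply the per-GPU Fiji XT numbers doubled. A minimal sketch, assuming 2 FLOPs per stream processor per clock and first-generation HBM's 1 Gbps effective rate per pin (500 MHz, double data rate):

```python
# How the Radeon Pro Duo's headline numbers derive from one Fiji XT GPU.
# Assumes 2 FLOPs per stream processor per clock (FMA) and 1 Gbps
# effective per pin for first-generation HBM (500 MHz, double data rate).

def fiji_tflops(stream_processors=4096, clock_mhz=1000):
    return stream_processors * clock_mhz * 1e6 * 2 / 1e12

def hbm_bandwidth_gbs(bus_width_bits=4096, gbps_per_pin=1.0):
    return bus_width_bits / 8 * gbps_per_pin

print(2 * fiji_tflops())          # ~16.38 TFLOPS across both GPUs
print(2 * hbm_bandwidth_gbs())    # 1024.0 GB/s aggregate (512 GB/s per GPU)
```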

Continue reading our preview of the AMD Radeon Pro Duo!!

Report: NVIDIA GP104 Die Pictured; GTX 1080 Does Not Use HBM

Subject: Graphics Cards | April 22, 2016 - 10:16 AM |
Tagged: rumor, report, pascal, nvidia, leak, graphics card, gpu, gddr5x, GDDR5

According to a report from VideoCardz (via ChipHell), high quality images have leaked of the upcoming GP104 die, which is expected to power the GeForce GTX 1070 graphics card.



"This GP104-200 variant is supposedly planned for GeForce GTX 1070. Although it is a cut-down version of GP104-400, both GPUs will look exactly the same. The only difference being modified GPU configuration. The high quality picture is perfect material for comparison."

A couple of interesting things emerge from this die shot: the relatively small size of the GPU (die size estimated at 333 mm²), and the assumption that the card will be using conventional GDDR5 memory, based on a previously leaked photo of the die on a PCB.


Alleged photo of GP104 using GDDR5 memory (Image credit: VideoCardz via ChipHell)

"Leaker also says that GTX 1080 will feature GDDR5X memory, while GTX 1070 will stick to GDDR5 standard, both using 256-bit memory bus. Cards based on GP104 GPU are to be equipped with three DisplayPorts, HDMI and DVI."

While this is no doubt disappointing to those anticipating HBM with the upcoming Pascal consumer GPUs, the move isn't all that surprising considering the consistent rumors that GTX 1080 would use GDDR5X.

Is the lack of HBM (or HBM2) enough to make you skip this generation of GeForce GPU? This author points out that AMD's Fury X - the first GPU to use HBM - was still unable to beat a GTX 980 Ti in many tests, even though the 980 Ti uses conventional GDDR5. Memory is obviously important, but the core defines the performance of the GPU.

If NVIDIA has made improvements to performance and efficiency we should see impressive numbers, but this might be a more iterative update than originally expected - which only gives AMD more of a chance to win marketshare with their upcoming Radeon 400-series GPUs. It should be an interesting summer.

Source: VideoCardz

Zotac Releases PCI-E x1 Version of NVIDIA GT 710 Graphics Card

Subject: Graphics Cards | April 21, 2016 - 08:37 AM |

Zotac has released a new variant of the low-power NVIDIA GeForce GT 710, and while this wouldn't normally be news, this card has a very important distinction: its PCI-E x1 interface.


With a single-slot, low-profile-ready design (a pair of brackets is included) and that PCI-E x1 interface, this card can go places where discrete GPUs have never been able to go (AFAIK). Granted, you won't be doing much gaming on a GT 710, which features 192 CUDA cores and 1GB of DDR3 memory, but this card does provide support for up to 3 monitors via its DVI, HDMI, and VGA outputs.

A PCI-E x1 GPU would certainly provide some interesting options for ultra-compact systems such as those based on thin mini-ITX, which does not offer a full-length PCI Express slot, or for adding additional monitor support to business machines that only offer a single PCI-E x16 slot but have an x1 slot available.


Specifications from Zotac:

Zotac ZT-71304-20L

  • GPU: GeForce GT 710
  • CUDA cores: 192
  • Video Memory: 1GB DDR3
  • Memory Bus: 64-bit
  • Engine Clock: 954 MHz
  • Memory Clock: 1600 MHz
  • PCI Express: PCI-E x1
  • Display Outputs: DL-DVI, VGA, HDMI
  • HDCP Support: Yes
  • Multi Display Capability: 3
  • Recommended Power Supply: 300W
  • Power Consumption: 25W
  • Power Input: N/A
  • API Support: DirectX 12 (feature level 11_0), OpenGL 4.5
  • Cooling: Passive
  • Slot Size: Single Slot
  • SLI: N/A
  • Supported OS: Windows 10 / 8 / 7 / Vista / XP
  • Card Dimensions: 146.05mm x 111.15mm
  • Accessories: 2x Low-profile I/O brackets, Driver Disk, User Manual
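
Zotac doesn't quote memory bandwidth, but if the listed 1600 MHz is the effective DDR3 data rate (a common spec-sheet convention, and an assumption on our part), the 64-bit bus works out to a modest figure:

```python
# Theoretical memory bandwidth for the ZT-71304-20L, assuming the listed
# 1600 MHz memory clock is the effective DDR3 data rate (1600 MT/s).
bus_width_bits = 64
effective_rate_mtps = 1600
bandwidth_gbs = bus_width_bits / 8 * effective_rate_mtps / 1000
print(bandwidth_gbs, "GB/s")  # 12.8 GB/s
```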


The card, which is listed with the model ZT-71304-20L, has not yet appeared on any U.S. sites for purchase (that I can find, anyway), so we will have to wait to see where pricing will be.

Source: Zotac

Sony plans PlayStation NEO with massive APU hardware upgrade

Subject: Graphics Cards, Processors | April 19, 2016 - 11:21 AM |
Tagged: sony, ps4, Playstation, neo, giant bomb, APU, amd

Based on a new report coming from Giant Bomb, Sony is set to release a new console this year with upgraded processing power and a focus on 4K capabilities, code named NEO. We have been hearing for several weeks that both Microsoft and Sony were planning partial generation upgrades but it appears that details for Sony's update have started leaking out in greater detail, if you believe the reports.

Giant Bomb isn't known for tossing around speculation and tends to only report details it can safely confirm. Austin Walker says "multiple sources have confirmed for us details of the project, which is internally referred to as the NEO." 


The current PlayStation 4 APU

There are plenty of interesting details in the story, including Sony's determination not to split the user base between multiple consoles: developers will be required to ship both a mode for the "base" PS4 and one for NEO. But most interesting to us is the possible hardware upgrade.

"The NEO will feature a higher clock speed than the original PS4, an improved GPU, and higher bandwidth on the memory. The documents we've received note that the HDD in the NEO is the same as that in the original PlayStation 4, but it's not clear if that means in terms of capacity or connection speed."


Games running in NEO mode will be able to use the hardware upgrades (and an additional 512 MiB in the memory budget) to offer increased and more stable frame rate and higher visual fidelity, at least when those games run at 1080p on HDTVs. The NEO will also support 4K image output, but games themselves are not required to be 4K native.

Giant Bomb even has details on the architectural changes.

  Shipping PS4 PS4 "NEO"
CPU 8 Jaguar Cores @ 1.6 GHz 8 Jaguar Cores @ 2.1 GHz
GPU AMD GCN, 18 CUs @ 800 MHz AMD GCN+, 36 CUs @ 911 MHz
Stream Processors 1152 SPs ~ HD 7870 equiv. 2304 SPs ~ R9 390 equiv.
Memory 8GB GDDR5 @ 176 GB/s 8GB GDDR5 @ 218 GB/s

(We actually did a full video teardown of the PS4 on launch day!)

If the Compute Unit count is right from the GB report, then the PS4 NEO system will have 2,304 stream processors running at 911 MHz, giving it performance nearing that of a consumer Radeon R9 390 graphics card. The R9 390 has 2,560 SPs running at around 1.0 GHz, so while the NEO would be slower, it would be a substantial upgrade over the current PS4 hardware and the Xbox One. Memory bandwidth on NEO is still much lower than a desktop add-in card (218 GB/s vs 384 GB/s).
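
To put that comparison in raw numbers, here is a rough single-precision estimate, assuming 2 FLOPs per stream processor per clock and treating the R9 390's clock as roughly 1.0 GHz as noted above:

```python
# Rough peak single-precision compute, assuming 2 FLOPs per SP per clock.
# SP counts and clocks come from the figures quoted above; the R9 390
# clock is approximate.
def tflops(stream_processors, clock_mhz):
    return stream_processors * clock_mhz * 1e6 * 2 / 1e12

print(tflops(1152, 800))    # original PS4:   ~1.84 TFLOPS
print(tflops(2304, 911))    # rumored NEO:    ~4.20 TFLOPS
print(tflops(2560, 1000))   # desktop R9 390: ~5.12 TFLOPS
```

By that estimate, NEO would land at a bit more than double the original PS4's GPU throughput and roughly 80% of an R9 390.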


Could Sony's NEO platform rival the R9 390?

If the NEO hardware is based on the Grenada / Hawaii GPU design, there are some interesting questions to ask. With the push into 4K that we expect with the upgraded PlayStation, it would be painful if the GPU didn't natively support HDMI 2.0 (4K @ 60 Hz). With the modularity of current semi-custom APU designs, it is likely that AMD could swap out the display controller on NEO for one that supports HDMI 2.0, even though no shipping consumer graphics card in the 300-series does so.

It is also POSSIBLE that NEO is based on the upcoming AMD Polaris GPU architecture, which supports HDR and HDMI 2.0 natively. That would be a much more impressive feat for both Sony and AMD, as we have yet to see Polaris released in any consumer GPU. Couple that with the variables of 14/16nm FinFET process production and you have a complicated production pipe that would need significant monitoring. It would potentially lower cost on the build side and lower power consumption for the NEO device, but I would be surprised if Sony wanted to take a chance on the first generation of tech from AMD / Samsung / Global Foundries.

However, if you look at recent rumors swirling around the June announcement of the Radeon R9 480 based on the Polaris architecture, that card is said to have 2,304 stream processors, perfectly matching the NEO specs above.


New features of the AMD Polaris architecture due this summer

There is a lot Sony and game developers could do with roughly twice the GPU compute capability on a console like NEO. This could make the PlayStation VR a much more comparable platform to the Oculus Rift and HTC Vive, though the necessity to work with the original PS4 platform might hinder the upgrade path.

The other obvious use is to upgrade the image quality and/or rendering resolution of current games and games in development, or simply to improve frame rates, an area where many current-generation consoles seem to have been slipping.

"In the documents we’ve received, Sony offers suggestions for reaching 4K/UltraHD resolutions for NEO mode game builds, but they're also giving developers a degree of freedom with how to approach this. 4K TV owners should expect the NEO to upscale games to fit the format, but one place Sony is unwilling to bend is on frame rate. Throughout the documents, Sony repeatedly reminds developers that the frame rate of games in NEO Mode must meet or exceed the frame rate of the game on the original PS4 system."

There is still plenty to read in the Giant Bomb report, and I suggest you head over and do so. If you thought the summer was going to be interesting solely because of new GPU releases from AMD and NVIDIA, it appears that Sony and Microsoft have their own agenda as well.

Source: Giant Bomb