
AMD Radeon VII Review: Supercharged Vega

Manufacturer: AMD

2560x1440 Synthetic and Game Benchmarks

Synthetic Benchmarks

PC Perspective GPU Test Platform
Processor: Intel Core i7-8700K
Motherboard: ASUS ROG STRIX Z370-H Gaming
Memory: Corsair Vengeance LED 16GB (8GBx2) DDR4-3000
Storage: Samsung 850 EVO 1TB
Power Supply: CORSAIR RM1000x 1000W
Operating System: Windows 10 64-bit (Version 1803)
Drivers: AMD 18.50, NVIDIA 417.71

While not necessarily indicative of real-world gaming performance, synthetic benchmarks provide a quick reference for gaming potential, and to this end we will first look at results using 3DMark Time Spy, a DirectX 12 benchmark rendered at 2560x1440.


Next we have UNIGINE Superposition, which was run using the high preset settings and 2560x1440 resolution.


Not the most auspicious of entrances with the synthetic benchmarks, but these were not indicative of the final results with Radeon VII and are simply provided for your reference. 

2560x1440 Game Benchmarks 

We will begin the 1440 game benchmarks with a couple of titles known to be better optimized for Radeon graphics, and what better place to start than Ashes of the Singularity: Escalation. We are running the game with DX12 at the high preset (no custom settings).


Right off the bat we have the first win for Radeon VII, which surpasses NVIDIA's RTX 2080 in this test by a full 3 FPS. As you can see from the frame times below, the Radeon VII's gameplay was not quite as smooth as the RTX 2080's, but still very good:


Next up is another game which performs well with Radeon graphics: Far Cry 5. This test was run at the high preset, all default settings:


This time the RTX 2080 came out on top with a 4.7 FPS advantage, but the Radeon VII did offer a smoother experience as you can see from the minimal disparity in average and 99th-percentile frame times:
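For readers unfamiliar with these metrics, here is a minimal sketch (using hypothetical frame-time data, not our captures) of how the average and 99th-percentile frame times in these charts are computed, and why a small gap between them indicates smoother gameplay:

```python
import math

def frame_time_metrics(frame_times_ms):
    """Return (average, 99th-percentile) frame time in milliseconds."""
    ordered = sorted(frame_times_ms)
    avg = sum(ordered) / len(ordered)
    # Nearest-rank 99th percentile: 99% of frames rendered at least this fast
    idx = max(0, math.ceil(0.99 * len(ordered)) - 1)
    return avg, ordered[idx]

# Two hypothetical 100-frame runs with similar averages:
smooth = [16.7] * 97 + [17.5] * 3   # consistent pacing
spiky  = [15.0] * 97 + [120.0] * 3  # occasional large hitches

print(frame_time_metrics(smooth))  # small average-to-p99 gap
print(frame_time_metrics(spiky))   # huge p99 despite a low typical frame time
```

A card with a lower average FPS can still feel smoother in play if its 99th-percentile frame time stays close to its average, which is the point being made here.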


Next we'll look at results from F1 2018, run using the high preset settings:


The Radeon VII almost catches the RTX 2080 here, and it delivered a more consistent frame rate as well:


Middle Earth: Shadow of War is next, and, like the other tests, this was run using default "high" settings.


About 9 FPS separates the RTX 2080 from the Radeon VII, though the AMD card was less consistent with frame times in this game:


Now we come to a game that typically does not perform as well using AMD hardware, and this time was no exception with Shadow of the Tomb Raider, run using DX12 at the high preset:


I won't belabor the point here, but Shadow of the Tomb Raider is clearly optimized for NVIDIA graphics and this pushes the Radeon VII down to a virtual tie with the GTX 1080 in this test. The most obvious indication of the performance issues with this game is the frame time chart below, where all three AMD GPUs have much higher 99th percentile frame times with many more spikes during gameplay.


Finally we will look at two standalone benchmark apps, beginning with Final Fantasy XV. This game benchmark uses GameWorks features at the "high" preset, so only the "standard" setting was used to level the playing field as much as possible. It will likely still be argued that the game favors NVIDIA GPUs, but I have encountered no issues testing with AMD cards at this "standard" preset:


Once again the Radeon VII is in second place behind the RTX 2080, though it was only slightly faster than the RTX 2070 in this test.


Finally we come to World of Tanks: enCore, an update to the WoT game engine that poses less of a challenge to higher-end cards and is thus run at its "ultra" preset here:


There was one win for Radeon VII in the game benchmarks at 2560x1440 with Ashes of the Singularity, and it performed very well with Far Cry 5 - these results make sense, as both are games better optimized for AMD graphics. These are day-one results using pre-release drivers, and I expect to see performance gains as games are optimized for Vega 20's unique abilities as the year progresses. As things stand with this very small sample of games, the Radeon VII sits between the RTX 2070 and RTX 2080 at the top of the charts, but we are nowhere near the end of the story.

As we will see on the next page the card did perform better when we moved up to 3840x2160, and we need to push even farther with resolution and texture quality to find out just how far a card equipped with 16GB of HBM2 can go.


February 7, 2019 | 10:02 AM - Posted by Prodeous@Work (not verified)

So interesting, and nice to see AMD at least "keeping up" with the RTX 2080 (win some/lose some)...

Price of course will be a concern (and potential availability)

Looking at it from compute metrics, especially Blender, I have a feeling that for the price of a single Radeon VII, I could get 2 Vega 64s, downclock them, and have far more rendering performance...

Still that 16GB buffer is a nice touch.

Gaming wise, that is an "easy" pick. If you love AMD then this is a nice boost and almost sufficient to spend your money on an upgrade. If you like NVIDIA, then clearly the RTX 2080 is the best choice.

Nice that we have now a card for both markets.

Now AMD just needs to get Navi out and provide some RTX 2080 Ti competition.

February 7, 2019 | 10:06 AM - Posted by PixyMisa

Sebastian, according to AnandTech AMD did a last minute firmware / driver update to enable DP at 1/4 SP. So last minute that it happened after the review cards were sent out.

https://www.anandtech.com/show/13923/the-amd-radeon-vii-review/3

February 7, 2019 | 10:26 AM - Posted by Sebastian Peak

I saw that a few minutes ago at AnandTech. 1/4SP rate now rather than 1/16SP. Updated. Thank you!

February 7, 2019 | 10:30 AM - Posted by Zyhmet (not verified)

About the noise. Have you tried changing the fan curve so the temps are more like the 2080?

When I buy a GPU I look at price, performance and noise. Sadly AMD couldn't compete in all 3 at the same time for a while now. And this card fails on the noise part with the standard fan curve :C

February 7, 2019 | 11:42 AM - Posted by StillMoreProsumerThanGamingOnlyFocusedForTheVII (not verified)

Looking at Anandtech's review of the Radeon VII, it still has more of a prosumer feel because of the 16GB of VRAM and the 1/4 DP FP to 1 SP FP ratio. And while the Radeon VII's DP FP ratio is not as good as the MI50's 1/2 ratio, it's still better than Vega 10's 1/16 ratio.

I think a lot of the power used can be attributed to that extra DP FP capability on Radeon VII. There are also some additional AI-focused instructions added on Vega 20 beyond what Vega 10 offers, and more needs to be asked of AMD about prosumer or even graphics-related uses of Vega 20's additional AI-related ISA extensions on the Vega-2 GPU micro-arch. All of the major graphics software packages have added AI-related image filters and effects, and that was done even before Nvidia's Volta was released, with the AI workloads being accelerated on the GPU's shader cores instead of dedicated tensor cores.

The ROPs' available bandwidth has also been improved on Radeon VII, and maybe there are other tweaks that will become known as the whitepapers become available.

Also, the drivers for Radeon VII need more time to fully mature, so maybe a few more rounds of benchmarking will have to be done once the next round of driver updates is available. Radeon VII's Fine Wine(TM) may be somewhat different depending on the TSMC 7nm process and production becoming more mature over time. Vega 20 die production has been ongoing since 2018, and the better die bins are being used for Radeon Instinct and Radeon Pro WX production, so Radeon VII is not getting the top-binned Vega 20 die output.

My biggest difference with the Anandtech article is their speculation on TSMC's yields on Vega 20; Anandtech knows very well that a smaller die at 7nm equates to more dies per wafer, and the effective yields will actually go up as a result.
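The dies-per-wafer argument can be sketched with the classic gross-die approximation plus a Poisson yield model. Note that the defect density used below (0.5 defects/cm^2) is purely an assumed illustration, not a published TSMC figure, and 7nm and 14nm would not actually share one defect density:

```python
import math

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    """Classic gross-die estimate for a circular wafer (ignores scribe lines)."""
    wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2
    edge_loss = math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2)
    return int(wafer_area / die_area_mm2 - edge_loss)

def die_yield(die_area_mm2, defects_per_cm2):
    """Poisson model: fraction of dies that land with zero defects."""
    return math.exp(-defects_per_cm2 * die_area_mm2 / 100)  # mm^2 -> cm^2

# Vega 20 (331 mm^2) vs Vega 10 (495 mm^2), assumed 0.5 defects/cm^2:
for area in (331, 495):
    print(area, dies_per_wafer(area), round(die_yield(area, 0.5), 3))
```

Under these assumptions the smaller die gets both more candidate dies per wafer and a higher fraction of defect-free dies, which is the commenter's point.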

It looks like a good time for AdoredTV to do a review of the various review sites' Radeon VII coverage for some peer-reviewed fact checking.

I'm also suspicious of all this speculation on the costs of the 4GB HBM2 die stacks on Radeon VII, as the professional compute/AI markets make more use of the 8GB and higher capacity HBM2 stacks rather than the 4GB stacks. And the amended JEDEC HBM2 standard allows for even higher per-stack capacities than 8GB.

So really those 4GB HBM2 stacks may not be as costly to produce anymore, but who really knows for sure. HBM2 is used in many more products than just GPUs, so maybe the HBM2 makers have had enough HBM2 in production for a long enough time to fully amortize their initial R&D and equipment costs. Both SK Hynix and Samsung have had HBM2 production ongoing, so there should be a little more competitive price pressure on the 4GB HBM2 stacks as those costs become fully amortized and the HBM2 makers have more room to lower their respective prices in order to attempt to take market share.

What about andy Dual Radeon VII benchmarking for games that make use of DX12's and Vulkan's explicit multi-GPU adapter ability? It looks like xGMI (Infinity Fabric) links are not available for the consumer Radeon VII, and AMD is starting to segment its consumer GPU variants from its Pro GPU variants a little more than it has done in the past. I hope this means that Navi will be more gaming oriented with a higher ratio of ROPs to shader cores, and that mainstream Navi will have much higher pixel fill rates than the Polaris mainstream offerings.

February 7, 2019 | 11:47 AM - Posted by StillMoreProsumerThanGamingOnlyFocusedForTheVII (not verified)

Edit: What about andy
To: What about any

February 7, 2019 | 01:46 PM - Posted by Othertomperson (not verified)

Bad edit. I’m very interested in Andy. Tell me about Andy.

February 7, 2019 | 02:15 PM - Posted by Jeremy Hellstrom

February 7, 2019 | 03:10 PM - Posted by HeadlineDoubleEntendre (not verified)

Maybe his girlfriend's been testing the Linux 5.1 Kernel against her Qualcomm device(1) and not paying much attention to Andy!

Andy's Girlfriend: Qualcommmmmmmmmmmmmmm...!

(1)

"Qualcomm Vibrator Driver Queued For Linux 5.1"

https://www.phoronix.com/scan.php?page=news_item&px=Qualcomm-Vib-Driver-...

February 7, 2019 | 12:08 PM - Posted by ManyAnnoyingBugsInTheReviewSoup (not verified)

GamersNexus is not happy with the redone API calls on Radeon VII that are making things impossible to validate!

"Because AMD completely overhauled its API calls for this card, no current software utilities work for it. Afterburner is broken, GPU-z needs an update (and its creator is on vacation), and Wattool is also largely non-functioning. This leaves us with AMD’s WattMan, which is also presently in a largely unusable state."(1)

(1)

"AMD Radeon VII Review: Rushed to Launch (& Pad vs. Paste Test)"

https://www.gamersnexus.net/hwreviews/3437-amd-radeon-vii-review-not-rea...

February 7, 2019 | 02:17 PM - Posted by Anonymous-911z (not verified)

I typically dog on AMD, but I think they at least have a card that will be decent come launch titles in winter 2020 due to that memory buffer. The games will probably be unoptimized for PC as usual, but at least there is more hope to hold out until 2021 when the real next-gen PC cards come out.

Indie devs can probably put this card to good use.

Now let's see what Nvidia has planned for Valentine's Day.

February 7, 2019 | 06:09 PM - Posted by Mark classified (not verified)

No GTX 1080 Ti in the games tested. For shame.

February 8, 2019 | 02:06 AM - Posted by nushydude (not verified)

Isn't the ROP count of the Radeon 7 supposed to be 60 instead of 64?

February 9, 2019 | 01:03 AM - Posted by ZoyosJD

Compute units are 60 on the Radeon VII where they were 64 on Vega 64. ROPs remain the same at 64, last I checked.

February 8, 2019 | 02:18 AM - Posted by zgradt

It seems to me like AMD isn't interested in taking the performance crown from Nvidia. The VII is basically the same as a watercooled (overclocked) Vega64 with double the memory bandwidth. I wonder how close the watercooled Vega compares to the VII.

Rather than heaping on more cores/transistors when going to 7nm, they decided to up the clock speed a bit to get it competitive with the RTX 2080 and leave it as a smaller die to help their profit margins (and R&D costs). The transistor count is 5.6% higher than Vega 64's, while the 2080's is 88% higher than the 1080's. By contrast, Nvidia added transistors when going from 16nm to 12nm to accommodate the additional cores and the AI and raytracing bits. Nvidia went from Pascal to Turing, but the VII is still Vega.

Radeon VII die: 331 mm2
RX Vega 64 die: 495 mm2
RTX 2080 die: 545 mm2
GTX 1080 die: 314 mm2

According to these benchmarks, there isn't much difference between the VII and the 2080, but you're also not getting the fancy antialiasing and raytracing stuff. I also read something a while back about a VR optimization in Pascal called simultaneous multi-projection. Did that ever catch on? The extra memory is nice, I guess, but I'm not sure how useful it is.

I don't know why I'm obsessing over this stuff though, since I won't be upgrading the 1070 I bought on sale before the crypto craze took off. My old rule of thumb is to upgrade when a new card comes out that is 2x the speed at the same price. These are 2x the speed at 2x the price.
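The die-size comparison above can be given rough numbers using the commonly reported transistor counts (approximate published figures, taken here as assumptions), which make the 7nm density jump concrete:

```python
# (transistors in billions, die area in mm^2) - approximate published figures
gpus = {
    "Radeon VII (7nm)":  (13.2, 331),
    "RX Vega 64 (14nm)": (12.5, 495),
    "RTX 2080 (12nm)":   (13.6, 545),
    "GTX 1080 (16nm)":   (7.2, 314),
}

def density(transistors_b, area_mm2):
    """Millions of transistors per mm^2."""
    return transistors_b * 1000 / area_mm2

for name, (tr, area) in gpus.items():
    print(f"{name}: {density(tr, area):.1f} Mtr/mm^2")

# Vega 20 vs Vega 10 transistor count delta, matching the ~5.6% figure above:
print(round((13.2 / 12.5 - 1) * 100, 1), "% more transistors on a ~33% smaller die")
```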

February 8, 2019 | 03:56 AM - Posted by othertomperson (not verified)

They've never gone higher than 4096 cores. Seems a GCN limit more than a conscious choice.

February 8, 2019 | 04:02 PM - Posted by DesignLibrariesAndProcessNode (not verified)

Maybe Nvidia is using 9.5T libraries and more fins per cell than AMD, who uses 7.5T libraries and fewer fins per cell. The 9.5T libraries take up more room (more fins per cell), but they afford Nvidia the option of creating more 4-fin transistors that can be driven to higher clocks, versus 7.5T libraries with fewer fins per cell and more 3- and 2-fin transistors that cannot be driven to higher clock rates.

Vega 20 is initially a data center part, so 7.5T lower-power libraries and greater density is what AMD was targeting. Radeon VII's die at 331 mm^2 is sure to allow for more dies per wafer, and that's a known way to increase yields for any chip maker.

It's not the GCN or Vega micro-arch that's the problem for AMD and gaming; it's the tessellation and rasterization deficiencies that are the result of AMD's die tapeouts, and that's because AMD does not design gaming-only focused designs. Nvidia has a 5+ different tapeout lead over AMD's only 1 or 2 different desktop die tapeouts per generation.

The GP102 and TU102 die tapeouts offer 96 available ROPs that Nvidia prunes back to 88 ROPs to bin out its GTX 1080 Ti and RTX 2080 Ti series of consumer flagship gaming variants. And both the GP102 and TU102 base die tapeouts have more Quadro variants before Nvidia makes the bottom-binned die for its flagship gaming variant based off of those tapeouts.

AMD just needs a tapeout with 96+ available ROPs and better tessellation resources. Each GPU tapeout costs millions for the mask sets and the wafer start capacity at the chip fab. Nvidia spends billion-plus dollar figures on all its various base die tapeouts, where AMD can only afford at most 1 or 2 per generation of desktop GPU.

AMD's APU/laptop market share numbers are around 12% now, so the Vega-based graphics installed base is going to be rather high and going higher with each new Raven Ridge APU/Vega integrated graphics laptop sold. Desktop Raven Ridge is popular also, so that's even more integrated Vega graphics market share.

Gamers should be attacking AMD's base die tapeouts for not having more available ROPs to increase the pixel fill rates, and that's where AMD's lack of cash for RTG hurts the most. AMD really needs a gaming-only oriented desktop GPU base die tapeout, maybe even 2 gaming-focused desktop die tapeouts similar to Nvidia's GP104/TU104 and GP106/TU106 base die tapeouts for the mainstream gaming market. AMD should also be going after the pro graphics market with a base die tapeout similar to Nvidia's pro-oriented GP102/TU102 tapeouts, where the lowest-binned variant becomes the flagship gaming GPU variant.

But that all takes money that AMD currently does not have, and RTG has to pay its own way so that's even less available for consumer gaming focused GPU Base Die Tapeouts.

February 8, 2019 | 04:07 AM - Posted by Hakuren

And it's only refined Vega. Don't get me wrong - AMD learned with the VII from the unmitigated disaster that the VI was.

I like the HBM2 16GB, I really do, but it still is only a Vega. It still cannot really compete on the Windows platform with the 1080 Ti, which is 2 years old.

No Iray support means I won't even try it. But I'm sure on Linux or Mac, where support for AMD is light years ahead of Windows, the VII will be a blast - just as long as, say, the rendering engine doesn't require Nvidia Iray... ;)

February 8, 2019 | 04:37 PM - Posted by DesignLibrariesAndProcessNode (not verified)

Vega 10 is most definitely not a failure in the HPC/AI market, and gamers are where the non-performant GPU dies go that do not make the grade to become pro parts. The miners made Vega 10 a winner from a business perspective even if the gaming market may not have.

Radeon VII's gaming drivers at release are not very optimized, and that refactored API had all of the third-party GPU vitals/metrics reporting software needing to have their code bases refactored as well to work with Radeon VII.

So I trust the initial benchmarks even less, and I never pass judgment on any newly released GPU until at least 3 months afterwards, when the kinks are worked out.

Vega 20 is most definitely more prosumer than Vega 10, what with Vega 20's DP FP rate at that 1/4 DP FP to 1 SP FP ratio. The 16GB of HBM2 is better for animation graphics workloads, where animation scene assets can easily be larger than 16GB, and Vega's HBCC/HBC IP can turn the HBM2 into a last-level VRAM cache for the GPU, with any not-recently-needed assets (textures/mesh data) paged to and from system DRAM. Animation rendering on Radeon VII is going to go smoothly for sure.

The Vega-1 micro-arch has been supplanted by the Vega-2 micro-arch on Vega 20, with 50-something AI-related instructions added to the Vega-2 ISA. My biggest disappointment is the lack of xGMI links on the consumer Radeon VII compared to the Pro Vega 20 based parts.

February 9, 2019 | 11:16 AM - Posted by Bond (not verified)

"Radeon VII's gaming drivers at release are not very optimized... ... ...So I trust the initial benchmarks even less and I never pass judgment on any newely released GPU until at least 3 months afterwards when the the kinks are worked out."

If AMD cannot even provide decent drivers for a GCN based card in 2019, the next 3 months (or years) aren't going to change much.

Secondly, considering AMD's track record in support and the limited availability and production of RVIIs, you can bet your bottom dollar this product is going to receive limited support and optimization.

February 9, 2019 | 03:17 PM - Posted by StillNotReadyForPrimeTimeBenchies (not verified)

They completely overhauled the API calls for Radeon VII so GamersNexus(1) had no way of properly reviewing the SKU.

"Because AMD completely overhauled its API calls for this card, no current software utilities work for it. Afterburner is broken, GPU-z needs an update (and its creator is on vacation), and Wattool is also largely non-functioning. This leaves us with AMD’s WattMan, which is also presently in a largely unusable state." (1)

So yes, this product needs more time in the software/firmware oven before it's fully done, and any new GPU product will have issues. AMD's been better with its products, but this is definitely a regression in some respects of AMD's driver/API launch support.

It's AMD's flagship consumer/prosumer 7nm offering until Navi is ready (Oct 2019 rumored Navi release)! So I think that it will receive more attention than you think.

Both Vega 10 and Vega 20 are really not gaming-only focused GPU base die tapeouts, and unless you have half a billion-plus dollars to loan AMD so it can afford to create 5+ different desktop-oriented base die tapeouts per generation like Nvidia can, AMD/RTG will continue to lack the resources of Nvidia.

Nvidia has specific gaming-oriented base die tapeouts that are stripped of excess compute and have more gaming-focused layouts. GP104/TU104 as well as GP106/TU106 are mostly for mainstream gaming, and GP102/TU102 (more Quadro variants and only one gaming variant) are more like Vega 10/Vega 20, except that Nvidia has a much higher number of available ROPs on GP102/TU102 - 96 ROPs available on those base die tapeouts. So Nvidia still leads even Vega 20 with GP102's/TU102's 88 of 96 available ROPs enabled and a much higher pixel fill rate than Vega 20.

The Radeon VII derivative of Vega 20 is most definitely not a gaming-only focused SKU with that 1/4 DP FP to 1 SP FP ratio. That's 3,360 GFLOPS of DP FP performance on Radeon VII compared to the RTX 2080 Ti's 420.2 GFLOPS (1:32 DP:SP). And Vega 20 is not just a die shrink of Vega 10, as there are new deep learning/AI instructions added on the Vega-2 (Vega 20) GPU micro-arch that the Vega-1 (Vega 10) micro-arch does not support.
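Those GFLOPS figures fall out of a simple peak-throughput formula: shaders x 2 FMA ops per clock x boost clock, scaled by the DP:SP ratio. The ~1.75 GHz and ~1.545 GHz boost clocks below are the commonly quoted specs, taken here as assumptions:

```python
def peak_gflops(shaders, boost_clock_ghz, dp_ratio):
    """Theoretical peak (SP GFLOPS, DP GFLOPS): 2 FMA ops per shader per clock."""
    sp = shaders * 2 * boost_clock_ghz
    return sp, sp * dp_ratio

# Radeon VII: 3840 shaders, ~1.75 GHz boost, 1/4 DP:SP rate
print(peak_gflops(3840, 1.75, 1 / 4))    # -> (13440.0, 3360.0)

# RTX 2080 Ti: 4352 CUDA cores, ~1.545 GHz boost, 1/32 DP:SP rate
print(peak_gflops(4352, 1.545, 1 / 32))  # -> roughly (13448, 420.2)
```

Note how similar the two cards' single-precision peaks are; it's the DP:SP ratio, not raw shader throughput, that gives Radeon VII its prosumer compute advantage.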

With 16GB of HBM2, content creators are going to like Radeon VII even more - and 16GB of HBC, if that HBM2 is used as a VRAM cache against a larger pool of virtual VRAM paged to and from system DRAM by Vega 20's HBCC.

So for large, complex animation scenes with way more than 16GB of texture/mesh assets, Radeon VII's HBCC can use that 16GB of HBM2 as a last-level HBC, with even more virtual VRAM paged to and from system DRAM in the background; the GPU only has to work from the 16GB of HBM2 while the HBCC takes care of the background swapping.

So it's a GCN-based card in 2019 with its API so completely refactored that the current software utilities need updates to work with it. Vega 20's GPU micro-arch has been expanded with new AI-related instructions, and more tweaks are still to be revealed for Vega 20's tapeout. Vega 20 die production has been ongoing since 2018, with the Radeon Instinct MI60/MI50 SKUs released in Q4 2018.

So not much is known about Vega 20's current microcode version or what changes have been made for Radeon VII's, but there are always tweaks that come with shrinks and new tapeouts.

From Anandtech's Vega 20 review(2):

"The big improvement here is all of that extra memory bandwidth; there's now over twice as much bandwidth per ROP, texture unit, and ALU as there was on Vega 10. The bodes particularly well for the ROPs, which have traditionally always been big bandwidth consumers. Not stopping there, AMD has also made some improvements to the Core Fabric, which is what connects the memory to the ROPs (among other things). Unfortunately AMD isn't willing to divulge just what these improvements are, but they have confirmed that there aren't any cache changes among them." (2)

So until the Vega 20 ISA manuals and whitepapers are available, it's still too early to make an informed decision. And if you are only interested in gaming workloads, it's been long known that Nvidia has the funds to afford those specifically gaming-oriented base die tapeouts, whereas AMD lacks the funding for that sort of specialization.

(1)

"AMD Radeon VII Review: Rushed to Launch (& Pad vs. Paste Test)
By Steve Burke Published February 07, 2019 at 9:02 am"

https://www.gamersnexus.net/hwreviews/3437-amd-radeon-vii-review-not-rea...

(2) See page 2 of the article, "Vega 20 Under The Hood":

"The AMD Radeon VII Review: An Unexpected Shot At The High-End"

https://www.anandtech.com/show/13923/the-amd-radeon-vii-review/

February 11, 2019 | 01:19 AM - Posted by Anonymous74747r88r8jjfifj (not verified)

The card is trash. It's no different than the Vega 64 vs 980 Ti head-to-head. All credible reviewers and gamers recommended and chose the 980 Ti over it. There is NO reason for gamers to choose it over the 2080. NONE. AMD (Radeon) is not back, and now it looks like you'll be waiting longer for the RX cards and resupplies of the VII.

Another soft launch for AMD. Not impressed.

February 11, 2019 | 08:49 AM - Posted by Spunjji

Vega 64 was never in competition with the 980Ti - that would be the Fury X. Back then (and again with Vega 64 vs. GTX 1080) there were still two reasons to go with AMD even though they lost on performance and power: FreeSync support and price.

Now that Nvidia has finally enabled Adaptive Sync on their products, I'm inclined to agree that this card has no real selling point for the majority of users. It's not "trash", though, and a price cut could make it into a very solid option, especially for a user prepared to gamble on some power/noise gains via undervolting.
