The Radeon Vega Frontier Edition 16GB Air Cooled Review

Author: Ryan Shrout
Manufacturer: AMD

Clock Speeds and Power Consumption

Clock Speeds and Clock Consistency

As we start to dive into the performance of the card, I like to get a better understanding of the behavior of the product itself. For GPUs this means looking at the reported clock speeds and how stable they remain over time. I turn to a long run of the Unigine Heaven benchmark for this.


In general, the clock speeds on the Vega Frontier Edition appear to hover around the 1440 MHz mark, well under the card's rated clock speed of 1600 MHz. The clock jumps range between the 1300 MHz and 1500 MHz marks, though the granularity of the shifts is quite large, indicating that the boost/turbo capability of the Vega architecture may remain more limited than what we see on the GeForce side of things.

I also ran a test with the fan speed fixed at around 3000 RPM to see how keeping the GPU cooler would affect the card's ability to hit higher clock speeds. As you can see, while the higher fan speed produced one fewer step down into the 1300 MHz range, we did NOT see the card go any higher than the 1528 MHz peaks you see in both results.

Compared to previous generations of hardware from AMD, the Vega FE is running at impressive clock speeds. The Radeon Fury X was only able to top out at 1050 MHz (reference) with the same stream processor count. (Note, however, that the compute units are different from Fiji to Vega.) Even the Polaris GPUs used on the RX 580 were topping out at 1340 MHz for reference speeds.

Detailed Power Consumption

With a rated TDP of 300 watts, we had a lot of questions about the power consumption of the Radeon Vega Frontier Edition. Based on our direct power consumption measurement (not at the wall), we are able to get precise numbers.


In general, the power draw of the Vega FE stays under the 300-watt level, which is good. After the issues that haunted the Radeon RX 480 at its launch (drawing well over its 150-watt rated TDP), it appears AMD has learned its lesson. Power draw from the PCIe connection on the motherboard actually stayed very low (near 25 watts), with the vast majority of the power coming from the dual 8-pin power connections on the card. For reference, that power draw of 300 watts is 50 watts higher than the GeForce GTX 1080 Ti and the Titan Xp, and 120 watts higher than the GeForce GTX 1080.

We did find one odd behavior with the power draw on the Radeon Vega FE card that showed itself when the GPU got up to a peak temperature of 85C – power draw would dip to a lower level. And when I say level, I mean exactly that: the card appears to shift into a lower power draw state for varying amounts of time as it attempts to regulate the temperature of the GPU. Look at the latter part of the Rise of the Tomb Raider power results above - the blue line of the Vega FE power consumption clearly has a back-and-forth step to it.


In our power testing with Metro: Last Light at 4K you can clearly see two different steppings below the “nominal” power draw level of 280+ watts. At 240 watts and 200 watts, the card was pulling less power, but interestingly, did NOT appear to drop the clock speeds during those intervals. That seems counter-intuitive to be sure, and we are still asking questions of AMD to figure out what is going on. This behavior was repeatable through multiple runs and through different games – we saw it in both Witcher 3 and Rise of the Tomb Raider.


We ran a quick experiment to make sure it was thermals we were dealing with. By cranking the GPU fan up to 100% (which is quite loud) we could play Metro: Last Light at 4K without seeing the power draw drops. The GPU temperature never breached 56C during that testing run, so it is likely we could lower the GPU fan speed to something easier on the ears while still keeping the card out of the 85C range that appears to cause the power draw throttling. I am hoping that AMD takes heed of this and manages the fan curves differently with the RX Vega product line.
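
The back-and-forth stepping described above looks like simple hysteresis control: drop to a discrete lower power state when the GPU hits a temperature ceiling, then step back up once it cools. Here is a minimal sketch of that idea in Python; the power levels match what we measured, but the thresholds and the toy thermal model are our guesses, not AMD's actual firmware logic:

```python
# Hypothetical model of temperature-triggered power-state stepping.
# POWER_STATES are the discrete draw levels we observed; the ceiling,
# recovery point, and thermal constants are illustrative assumptions.

POWER_STATES = [300, 240, 200]   # watts
TEMP_CEILING = 85                # deg C, where throttling appeared
TEMP_RECOVER = 80                # hypothetical hysteresis point

def simulate(seconds, ambient=30):
    temp = ambient
    state = 0                    # index into POWER_STATES
    trace = []
    for _ in range(seconds):
        power = POWER_STATES[state]
        # toy thermal model: heating proportional to power,
        # cooling proportional to the delta over ambient
        temp += power * 0.01 - (temp - ambient) * 0.05
        if temp >= TEMP_CEILING and state < len(POWER_STATES) - 1:
            state += 1           # step down to the next lower power level
        elif temp <= TEMP_RECOVER and state > 0:
            state -= 1           # step back up once the GPU cools
        trace.append((round(temp, 1), power))
    return trace

trace = simulate(600)
```

In this toy model the card oscillates between the 300 W and 240 W states once it nears the ceiling, producing a stepped power trace similar in shape to what we captured in Metro and Rise of the Tomb Raider.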

June 30, 2017 | 02:11 PM - Posted by Me (not verified)

Who's the guy who said it would match the 1070 in your gaming benchmarks?! This guy ;D

July 1, 2017 | 10:23 AM - Posted by Godrilla

Could be the reason why the 1070 is selling out everywhere, but doubtful since this article just posted. I'm going to say that this is probably due to premature drivers. Best case it will probably match the 1080ti just based on paper specs.

July 1, 2017 | 12:49 PM - Posted by RchUncleSkeleton (not verified)

The 1070 is selling out everywhere because everyone and their mom are on a mining craze.

July 3, 2017 | 10:48 PM - Posted by Godrilla

Radeon 580s more so but makes sense.

July 6, 2017 | 12:27 PM - Posted by Andrei M (not verified)

Canadian retailers are having sales. You can buy a Zotac GTX 1070 at Staples in Toronto for CA$550 + 13% tax, and a Zotac GTX 1080 for CA$700 + tax. These prices seem very attractive; you could probably resell face-to-face for regular price. Radeon cards like the RX 580 -- don't even hope, they've been "out of stock" for weeks.

July 7, 2017 | 11:03 AM - Posted by BThornton (not verified)

The reason the 1070 is selling out, just like all the sold out GPUs, is the fucking coin miners that are ruining gaming for us all.
Since all 470/480s, all 570/580s and all 1060s are now as rare a find as the damned Dodo bird, all the drooling morons have started on the next cheapest GPU in line, the 1080.
Hope this crap crashes hard, and then we will have all the used GPUs flood the market, hopefully at rock bottom pricing.

July 8, 2017 | 09:01 PM - Posted by ddferrari

"Best case it will probably match the 1080ti".

You're kidding, right? Best case is that it gets a little closer to the 1080 from its current 1070-ish performance. It came in anywhere between 30% and 50% slower than the Ti, and no tweaks or drivers are going to close that gigantic gap. I'm always amused at how AMD fans think if they just wait long enough, their Honda Civics are going to morph into Ferraris.

Matching the Ti isn't a best case scenario, it's an "In your wildest dreams" scenario.

July 10, 2017 | 11:02 AM - Posted by hyp36rmax

To be fair the new Civic Type-R is a beast, even if it isn't a Ferrari...

At this point everything is speculation for RX Vega. We would all love for AMD to have a trump card when the consumer version drops. It surely has the specs to compete with a 1080 Ti, but we're comparing a prosumer version of the card that requires a switch to activate a consumer mode to test games. Driver optimization not only provides performance; it can also go the other way to limit resources and focus on what the card was intended for. (Surely gaming was not a priority.)

We'll just have to wait and see. I support both Nvidia and AMD.

July 6, 2017 | 02:22 PM - Posted by Flyingcircus (not verified)

And now you'll get a cookie for guessing correctly? And if so, am I going to get one for the coin toss I guessed correctly last week? Oo

June 30, 2017 | 02:11 PM - Posted by F2F (not verified)

Awesome job guys!

June 30, 2017 | 02:16 PM - Posted by Digidi

@Ryan Shrout
Hello, I'm the guy who pleased you yesterday to do the triangle test. Thank you for running the triangle test.

If you watch your video and the video from David Kanter, you will recognize that the tile-based rasterizer on Vega isn't switched on. You should ask AMD why they don't switch on the main feature of Vega.

I was wondering yesterday about the performance of Vega and stated that there "must be something horribly wrong with the rasterizer" ;) Maybe that's why the Frontier Edition is so slow.

June 30, 2017 | 02:25 PM - Posted by Ryan Shrout

I can inquire, but not sure they will answer quite yet.

June 30, 2017 | 02:28 PM - Posted by Digidi

Thank you, and thank you for listening to the people yesterday. The stream was awesome ;)

June 30, 2017 | 03:07 PM - Posted by tatakai

how did you please him? whats going on here.

June 30, 2017 | 03:27 PM - Posted by Digidi

There was a chat under the live stream. Also, now when you click PCPER Live there is the IRC chat.

Ryan was very often listening to the people. Thanks for that.

@Ryan Shrout
I think you should include in your test that the tile-based rasterizer is not working. Thank you.

July 1, 2017 | 12:05 AM - Posted by quest4glory

I think you meant to say you asked, or requested. Pleased means something entirely different than you think it means in many contexts. :)

June 30, 2017 | 04:43 PM - Posted by Fiji (not verified)

When I ran the triangle test on my R9 Nano (thank you Ryan) months ago, there were only 3 vertical strips of triangle fill.

From what I observed in the broadcast yesterday, Vega now produces 5 stripes of triangle fill. A 66% change.

June 30, 2017 | 04:55 PM - Posted by Digidi

Please notice it's a pixel test and also depends on the resolution of the screen.

That's from my Fury X, and this looks like Ryan's video:

Also, it's not so important how it looks; the behavior is more important. In Ryan's video you see that the triangles are drawn step by step if you move the second slider (Num Pixels).

If you have a tile-based rasterizer, small pieces (tiles) should be drawn step by step, and each piece should have parts from various triangles included:

See video for explanation

June 30, 2017 | 05:26 PM - Posted by Fiji (not verified)

So what resolution were you running? Can I assume 4K or 1440? Mine was at 1080p.

This behavior on it (Fury/Fiji) is different than the example provided on earlier AMD cards. I posed a question about this to David Kanter in the comments section below the video, months ago as well. I never received a reply. Not that I expected one; he is a busy man.

June 30, 2017 | 05:45 PM - Posted by Digidi

I used 1440p

June 30, 2017 | 05:07 PM - Posted by SimSim (not verified)

I think the tile-based rasterizer is working on the Vega FE, but AMD's implementation of this tech is different than Nvidia's implementation.

On the Nvidia side they try to upload all the triangles within a tile to the L2 cache (which explains the bigger size of the L2 cache in Maxwell and Pascal) to avoid going back and forth to main memory (saving bandwidth and power). On the AMD side they try to draw one triangle at a time in a tiled manner, which allows them to draw efficiently.

This seems to be the way forward to improve performance.
In this video Bill Dally (Nvidia) explains how data transfer consumes energy.

So I think, in the upcoming architectures, the on-chip memory (registers and caches) will get bigger and bigger (GV100: 20MB of registers).

@RyanShrout Thank you for this review, it was interesting. I hope you will do more testing, especially with the rasterizer, between Maxwell, Pascal, Polaris and Vega.

Thank you.

June 30, 2017 | 05:26 PM - Posted by Digidi

No it's not! Vega in this video is clearly an immediate renderer, because triangles are pixeled step by step! And also, if you look now at my picture, it has the same behavior as my Fury X.

Compare Ryan's video with my pic. It's the same picture. Also watch the video from David Kanter and you will get my point.

That means Vega is now rasterizing like Fiji? Seriously? Where is the advantage?

Speculation is that the tile-based rasterizer is deactivated and the fallback rasterizer (immediate rasterizer) is activated.

That's why the performance of Vega is so horrible!

June 30, 2017 | 06:41 PM - Posted by SimSim (not verified)

This is interesting, but this is not proof of a tile-based rasterizer.
We need to see the behaviour of the rasterizer on the RX 480 (or any Polaris card) to get an idea.
I made a test on my GTX 1060. I wanted to see how the rasterizer handles the triangles, and with 12 triangles it behaves like the video.

But if I change the number of triangles and the number of vertex floats I get this behavior

So the tile-based rasterizer is a technique to render all the pixels within a tile in a parallel fashion.
I think Nvidia tries to group the maximum of information about the maximum number of triangles (the constraint is the L2 cache) and render them at once, while AMD (Vega) renders all the pixels within a tile in parallel but one triangle at a time.

So this is my guess.
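
The distinction being debated in this thread can be sketched in a few lines: an immediate-mode rasterizer finishes one triangle before starting the next, while a tile-based one walks the screen in tiles and emits fragments from every triangle touching a tile before moving on. This is a toy illustration only (triangles reduced to pixel sets, arbitrary tile and screen sizes), not a model of either vendor's hardware:

```python
# Toy comparison of immediate-mode vs tile-based fragment emission order.
TILE = 4  # 4x4 pixel tiles

def immediate_order(triangles):
    """Emit (pixel, tri_id) pairs triangle after triangle."""
    return [(px, tid) for tid, tri in enumerate(triangles) for px in sorted(tri)]

def tiled_order(triangles, width, height):
    """Emit (pixel, tri_id) pairs tile by tile, mixing triangles inside each tile."""
    out = []
    for ty in range(0, height, TILE):
        for tx in range(0, width, TILE):
            tile = {(x, y) for x in range(tx, tx + TILE)
                           for y in range(ty, ty + TILE)}
            for tid, tri in enumerate(triangles):
                out.extend((px, tid) for px in sorted(tri & tile))
    return out

# two overlapping "triangles" as pixel sets on an 8x8 screen
t0 = {(x, y) for x in range(8) for y in range(4)}     # top half
t1 = {(x, y) for x in range(8) for y in range(2, 6)}  # middle band
imm = immediate_order([t0, t1])
til = tiled_order([t0, t1], 8, 8)
```

In the immediate ordering, every fragment of the first triangle comes out before any fragment of the second; in the tiled ordering, the very first tile already interleaves fragments from both. Both orderings produce the same total set of fragments, which is why the triangle test videos discussed above focus on *order* of fill, not the final image.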

June 30, 2017 | 07:27 PM - Posted by Digidi

But the second picture is also tile-based. You have multiple colours in one tile. With an immediate renderer you have colour after colour.

June 30, 2017 | 02:17 PM - Posted by cegli (not verified)

Hey Ryan,

In the memory clock part of the table, you have the Fury and Fury X's SDR clock rate listed, while you're using the quad-pumped/DDR rate for the rest of them. The Fury and Fury X's clock rate should be doubled in the table to remain consistent.

June 30, 2017 | 02:24 PM - Posted by Ryan Shrout

Got it!

June 30, 2017 | 02:34 PM - Posted by cegli (not verified)

Fury and Fury X are right now, but now Vega is 2x too high :-P. Vega was right @ 1890MHz for the DDR rate.

1890MHz * 2048bits/8 = 483GB/s
1000MHz * 4096bits/8 = 512GB/s

June 30, 2017 | 02:52 PM - Posted by Ryan Shrout

Bah, I thought not. :)

June 30, 2017 | 02:19 PM - Posted by Taylor Ratliff (not verified)

Any chance you could do some VR benchmarks?

At the very least please run SteamVR performance test and tell us the total frames.

July 8, 2017 | 09:04 PM - Posted by ddferrari

Read the 1070 VR benches and assume they're very similar.

June 30, 2017 | 02:21 PM - Posted by The Indian Enthusiast (not verified)

Well, the Frontier Edition is targeted towards workstations, but still I was expecting better performance in gaming. If AMD wants to be more relevant in the gaming market I hope they bring it, because people have been waiting, and if they can't deliver, it's gonna be a watershed moment for the entire enthusiast community. WE JUST WANT COMPETITION!!!

June 30, 2017 | 02:25 PM - Posted by Me (not verified)

Don't rely on their gaming benches too much. According to them the 1060 destroys the 580, while everywhere else, with many more games tested, they trade blows, with the 580 winning by a little.

June 30, 2017 | 02:44 PM - Posted by My_head_is (not verified)

Well, in fact in newer non-Nvidia-sponsored titles, AMD wins a bit at 1080p/1440p resolutions thanks to brute force.
If testers hadn't tested the card in AMD-sponsored titles, the win would be within reach of the 1060 far more often.

In every graph we can find, AMD leads by a small margin because of the total games benched, but not by much.

It means that with time, RX Vega would be as powerful as GP104/102/106... just because it has the same performance as a stock 1080.

June 30, 2017 | 03:32 PM - Posted by arbiter

I believe to be more real-world for a 1060/580, it needs to be tested on a more budget CPU, something along the lines of what those GPUs will be paired with from both sides, Intel/AMD, just to see what happens.

July 1, 2017 | 07:29 AM - Posted by Disqutera

Ironically, the RX 480 fell behind a year ago when this scenario was tested:

June 30, 2017 | 02:42 PM - Posted by JohnGR

Ryan, wasn't Fallout 4 giving strange (compared to other games), much higher performance numbers compared to those in this article? I didn't notice anything on the "before you ask" page, or the page with the Fallout 4 scores, explaining that.

June 30, 2017 | 02:46 PM - Posted by My_head_is (not verified)

AMD must have paid PCPer, otherwise it's IMPOSSIBLE that AMD can have good numbers in one or multiple games.

There it is, PCPer is a fraud website that has been bought by AMD.


June 30, 2017 | 02:53 PM - Posted by Ryan Shrout

Yah, those numbers looked odd to me last night, but only at 2560x1440. I re-ran them three times today, all with the lower score you see in the story today.

My guess is I had the incorrect resolution set previously.

June 30, 2017 | 02:57 PM - Posted by JohnGR

OK thanks.

You might want to add this explanation somewhere in your article, because other sites had posted screenshots from the live stream, and those Fallout 4 scores are used by many to base their arguments about bad drivers.
I am expecting this question to pop up again in the comments.

June 30, 2017 | 07:10 PM - Posted by Ryan Shrout

That's what they get for stealing my results! :)

June 30, 2017 | 02:49 PM - Posted by Hugh Mungus (not verified)

Another Ryzen-style release, or just a worse Fury X clock for clock? Maybe we just need driver updates and game optimizations, maybe 1070-1080 is all we're going to get, and maybe something between the best case scenario (1080 Ti+ after driver optimizations, or just with RX Vega) and the worst case scenario (a GTX "1070 Ti"). I still have high hopes for RX Vega, but if it fails to beat a 1080 overall, I'm probably sticking with Nvidia.

June 30, 2017 | 02:51 PM - Posted by YTech

Any explanation as to why the High Bandwidth Memory 2 bandwidth is not as high as the earlier HBM, and if and how that affects results and performance?

June 30, 2017 | 03:01 PM - Posted by cegli (not verified)

HBM2 has gotten *almost* twice as fast, so they went with half the bus width to save money. Here are the calculations:

Vega: 1890MHz * 2048bits/8 = 483GB/s
Fury: 1000MHz * 4096bits/8 = 512GB/s

Halving the bus width allows them to use half the number of stacks, or cut the height of the stacks in half. Either one is a significant savings on the BOM.
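
The arithmetic in this thread generalizes to a one-liner: peak bandwidth is the effective transfer rate times the bus width in bits, divided by 8. A quick sketch to check the figures quoted above:

```python
def bandwidth_gbs(transfers_mts, bus_bits):
    """Peak bandwidth in GB/s from effective transfer rate (MT/s) and bus width (bits)."""
    return transfers_mts * 1e6 * bus_bits / 8 / 1e9

# Vega FE: two HBM2 stacks, 1024 bits each, 1890 MT/s effective
vega_fe = bandwidth_gbs(1890, 2048)   # ~483.8 GB/s
# Fury X: four HBM1 stacks, 1024 bits each, 1000 MT/s effective
fury_x = bandwidth_gbs(1000, 4096)    # 512.0 GB/s
```

The 483 GB/s figure in the comments above is just this result truncated; the takeaway is that two stacks of faster HBM2 land within about 6% of four stacks of HBM1.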

June 30, 2017 | 04:14 PM - Posted by AnonymousFart (not verified)

Fury's memory controller couldn't utilize the full bandwidth, anyway, and it remains to be seen whether Vega's can. I think they did the smart thing here.

Looking forward to some memory compression benchmarks too.

July 1, 2017 | 02:16 PM - Posted by Clmentoz (not verified)

Simple: the JEDEC standard only deals with what is needed for the creation of one HBM/HBM2 die stack, with GPU/CPU/other processor makers able to decide to use from one to many HBM/HBM2 die stacks, depending on needs. Each stack of HBM2 (ditto for HBM) supports only a 1024-bit interface (subdivided into 8 128-bit channels) per die stack.

So each stack of HBM/HBM2 has a maximum bandwidth, with a stack of HBM2 memory supporting twice the bandwidth per pin: 1 GT/s (GigaTransfer/s) per pin for HBM, 2 GT/s per pin for HBM2.

It looks like AMD did not need any more bandwidth above 512GB/s for Vega, so AMD could go with two stacks of HBM2 and get around the same effective bandwidth with 2 stacks of HBM2 as with 4 stacks of HBM. Also, HBM2/HBM stacks do not have to be clocked at their maximum rating, so maybe AMD decided to go a little lower with its clocks on the HBM2 to save power, or have better error correction rates, or both.

But there is no sense in having more effective bandwidth from your memory if you are never going to need it. AMD is probably making more use of compression anyway, so until there is a need for the extra bandwidth in the hardware, why have any unused bandwidth?

FROM Wikipedia:

" "The HBM DRAM is tightly coupled to the host compute die with a distributed interface. The interface is divided into independent channels. Each channel is completely independent of one another. Channels are not necessarily synchronous to each other. The HBM DRAM uses a wide-interface architecture to achieve high-speed, low-power operation. The HBM DRAM uses a 500 MHz differential clock CK_t/CK_c. Commands are registered at the rising edge of CK_t, CK_c. Each channel interface maintains a 128 bit data bus operating at DDR data rates. HBM supports transfer rates of 1 GT/s per pin (transferring 1 bit), yielding an overall package bandwidth of 128 GB/s.[10]

HBM 2

The second generation of High Bandwidth Memory, HBM 2, also specifies up to eight dies per stack and doubles pin transfer rates up to 2 GT/s. Retaining 1024-bit wide access, HBM2 is able to reach 256 GB/s memory bandwidth per package. The HBM2 spec allows up to 8 GB per package. HBM2 is predicted to be especially useful for performance sensitive consumer applications such as virtual reality.[11]

On January 19, 2016, Samsung announced early mass production of HBM2, at up to 4 GB per stack.[12][13] SK Hynix also announced availability of 4 GB stacks in August 2016.[14]

HBM 3

A third generation of High Bandwidth Memory, HBM 3, was announced in 2016.[15][16] HBM3 is expected to offer increased memory capacity, greater bandwidth, lower voltage, and lower costs. The increased density is expected to come from greater density per die and more die stacks per chip. Bandwidth is expected to be up to 512 GB/s. No release date has been announced, though Samsung expects volume production by 2020. " (1)


"High Bandwidth Memory"

June 30, 2017 | 02:55 PM - Posted by biohazard918

Stupid question:
What makes the Titan series of cards an in-between for the GeForce series and the Quadro series? Is it only price? I mean, it doesn't have the double precision performance that the OG Titan and Titan Black had. It doesn't have the pro drivers the Quadro has. All I see is a slight CUDA core increase and an extra 1GB of memory over the 1080 Ti. What does a Titan Xp do that a 1080 Ti doesn't? What is it that justifies the 500 dollar increase in price and makes it more than an expensive gaming card?

June 30, 2017 | 03:25 PM - Posted by mahama (not verified)

Marketing and early access.
A couple hundred dollar tax on gamers with deep pockets, or poor researchers/hobbyists who dabble with deep learning etc.
Don't worry about it if you are neither. The only effect it would have on you is that the x80 Ti cards are delayed a bit.

June 30, 2017 | 04:34 PM - Posted by biohazard918

Except the current Titan Xp, you know, the second Pascal Titan (fuck Nvidia's naming), came out after the 1080 Ti.

June 30, 2017 | 06:02 PM - Posted by Clmentoz (not verified)

Professional certified drivers and Quadro BIOSes that are certified for error-free and accurate usage. That certification process costs millions of dollars over the costs for consumer gaming SKUs. Every professional software package has to undergo certification for the Quadro-branded SKUs, and that's a very expensive process; that, and the error correcting hardware on the Quadros costs extra $$$$$$ to certify for the error correcting hardware side of the Quadro-branded SKUs.

You do not want some bridge structural engineering workloads run on anything other than the certified hardware/driver/BIOS professional grade GPUs like Quadros. Ditto for AMD's Radeon Pro WX branded (formerly called FirePro) professional GPU SKUs! Ditto for needing the certified CPUs (Xeon, Epyc, Power) and the costly certified workstation motherboards/RDIMMs. It's that certification process that doubles/more than doubles the costs of the Quadro/Radeon Pro WX parts. And system software/driver engineers' and hardware engineers' payrolls run in the millions of dollars over the millions of extra programming/hardware testing hours needed for certification.

If it's not Quadro or Radeon Pro "WX" then it's not a real server/workstation part; it's a semi-pro part that can not be used for production workloads where lives are at stake.

These Titan XPs and Radeon Pro FEs are great for students to learn on, but without the full blown certified Radeon Pro WX or Quadro Beanded SKUs you can not rely on them for actual production workloads where error free operation is top priority.

June 30, 2017 | 06:05 PM - Posted by Clmentoz (not verified)

that should be Branded not Beanded.

July 2, 2017 | 12:14 AM - Posted by Exascale

GM200 and GP102 based Quadros and Teslas no longer support ECC in the GPU itself (L1 cache and registers), but only on the memory.

The GP100 and V100 chips support full ECC from the L1 cache and registers to all memory.

July 2, 2017 | 12:01 AM - Posted by Exascale

Nothing now. When the Titans using GK110 came out, they had the same double precision compute capability as the Quadros and Teslas using GK110.

GeForce had its DP removed from GK110. GK110 Quadros and Teslas had full ECC support from the L1 cache and registers to all memory. Titan got rid of ECC and had half as much memory as the Quadro as well.

With GM200 and GP102, the Titan is no longer in between. It's just an overpriced GeForce. The GM200 is NOT the successor to GK110, even if they branded it as such. It was the start of an entirely new line of chips with the code 102.

The Quadro and Tesla lines have also had corners cut because of Nvidia adding the 102 chip. The Quadro x6000 is no longer the highest end Quadro, and now uses the 102 chip.

GM200 and GP102 DO NOT support ECC in the GPU structures, only in their GDDR RAM, meaning the $6,000 P40 and P6000 don't fully support ECC. GM200 and GP102 also have no double precision cores, so they're useless for work that requires FP64.

The Quadro P100 is the real successor to the ancient K6000. GP100 and V100 are the successors to GK110.

Nvidia differentiated their chip lineup from two main chips (100 and 104) into three (100, 102 and 104). In doing so, the highest end GPU from Nvidia is now $13,000 and not available in any consumer variant.

June 30, 2017 | 03:00 PM - Posted by PCPerFan1 (not verified)

Oh, wow, you brought back unregistered comments. Thank you very much!

June 30, 2017 | 03:52 PM - Posted by PCPerFan2 (not verified)

thank you for allowing us to be anonymous adults. and sometimes children

June 30, 2017 | 06:07 PM - Posted by Clmentoz (not verified)

Now you have jinxed it! Thanks a lot!

June 30, 2017 | 07:10 PM - Posted by Ryan Shrout

I'm putting a lot of faith in you all...

June 30, 2017 | 03:04 PM - Posted by Thred (not verified)

The fact that the tile-based rasterizer is not working right now means enabling it should improve performance by about 10% with RX Vega, along with possibly another 10% for driver improvements, since the FE is using an old Fiji driver from January right now. Expect around a 15-20% improvement on RX Vega would be my guess, along with better cooling and higher clock speeds.

June 30, 2017 | 07:11 PM - Posted by Ryan Shrout

My conversations with people smarter than me about this indicate that the rasterizer impact will only affect high resolutions (4K+), and the difference would not be near 10%, even if it were the case. Instead, it is much more about efficiency.

June 30, 2017 | 08:47 PM - Posted by tatakai

less power to do the same work? how much less?

July 1, 2017 | 01:46 PM - Posted by Josh Walrath

It does seem to be more about power and memory efficiency than a kick up in performance.  Like Ryan said, I would expect some improvements at much higher resolutions as there won't be as much memory pressure with the frame buffer and write out.  I could be wrong here, but turning on tiled functionality won't suddenly make this a 10% faster card across the board.

June 30, 2017 | 03:06 PM - Posted by Sabresuplay (not verified)

Wanna know what's stupid? They test the Vega FE against the 1080, 1080 Ti, and 1070 in gaming... How about testing the Vega FE vs those cards in workloads? Show us the scores. If you want to test a workstation card like the Vega FE in gaming, I want to see the 1080 Ti and 1080 tested in workload situations... These reviews are asinine...

June 30, 2017 | 03:22 PM - Posted by madPav3L (not verified)

Normal cards are not good for professional workloads, but you can easily game on professional cards; it's almost indistinguishable from normal cards.

June 30, 2017 | 04:16 PM - Posted by AnonymousFart (not verified)

Makes perfect sense because it helps us get an idea of what the performance of RX Vega will be.

June 30, 2017 | 03:47 PM - Posted by Mr.Gold (not verified)

You say the clock speed didn't drop... but are you sure it didn't?
How did you measure it? Because clocks can fluctuate very fast.

The only way you can get an idea is BENCHMARKING.

Try Metro 4K, 100% default.
Try the exact same test with 100% fan.

Do you get the exact same identical score?

July 1, 2017 | 07:38 AM - Posted by BenchZowner (not verified)

Apparently you haven't been paying much attention to the live stream that you were supposedly watching.

They did run Witcher 3 with the same settings, with the fan using the default fan curve and with the fan at 100%.
Both times the clocks and performance remained the same; only the power consumption changed (the weird drops from 280W to 240W).

July 1, 2017 | 12:40 PM - Posted by tatakai

would love more investigation of this. it would suggest the card can do the same work at lower power. another thing drivers could solve

June 30, 2017 | 03:52 PM - Posted by Mr.Gold (not verified)

For overclocking: no need to boost the MHz if it already can't reach its full P-states.

When overclocking, the #1 thing to do with this card is keep the clock exactly the same, but lower the voltage.

Try lowering the voltage by 3%, +25% power limit, 90C thermal limit.

If it's like the reference RX 480, you will get a HIGHER clock rate and better performance by NOT overclocking.

June 30, 2017 | 03:52 PM - Posted by justme (not verified)

If Vega FE targets pro users then we need more tests with Quadro cards, ranging from the P4000 to the new GP100.

July 3, 2017 | 04:41 AM - Posted by Hishnash (not verified)

Would be interesting to have some tests comparing some of the machine learning frameworks. AMD released a new Radeon Open Compute framework alongside Vega FE; this is focused on the development and implementation of GPU compute tasks with better performance than OpenCL. They already support a collection of machine learning systems, so those could be benchmarked comparing the Vega FE and the Quadro and GP lines.

June 30, 2017 | 04:00 PM - Posted by tatakai

"The one caveat to this is that the Vega architecture itself is still unoptimized in the driver. But this would affect gaming and non-gaming workloads most of the time. So if the driver isn’t ready for ANYTHING, then that’s a valid concern, but it applies to ALL workloads and not just games."

this isn't true. gaming and pure compute tasks are different. all the fancy stuff they put in there to push polygons and such isn't going to be touched in compute. probably easier to support professional workloads than gaming.

July 1, 2017 | 02:33 AM - Posted by Anonymous67878645 (not verified)

Exactly. Also, in professional workloads it's all over the place: sometimes really good, but it mostly performs worse than the P4000, which it should beat. It'll be interesting to see what improved drivers can do, as they have a lot of people working on it and it's the biggest change since GCN's release.

June 30, 2017 | 04:13 PM - Posted by boot318

Where are the most important benchmarks! Where are the hash rates!!! ;)

June 30, 2017 | 04:23 PM - Posted by Hypetrain (not verified)

They had trouble during the live stream getting miners to run as they were throwing errors and wouldn't start.

IIRC other users already did get it to work, but the numbers weren't great (30-35 MH/s).

June 30, 2017 | 07:13 PM - Posted by Ryan Shrout

Yes, our results were in the ~30 MH/s range. Not good. Thinking some optimization on the miner front will help.

June 30, 2017 | 04:39 PM - Posted by Hishnash (not verified)

@Ryan Shrout

It's a shame there are no workload tests where the HBCC would come into play. In lots of pro tasks (in particular in the machine learning and rendering worlds) the input data for a given task is much, much larger than the memory on the card (in the TB range); it would be really helpful for the story on this card to see how it performs in such a case compared to the other workstation cards.

Possibly an interesting test would be to set up a scene in Blender with some really large textures, subdivide the meshes a lot and put on a noise displacement map. You can see the video memory needed to render at the top of Blender; if you can get this above 16GB then it would let us see how turning HBCC on and off performs.

June 30, 2017 | 04:43 PM - Posted by Danternas (not verified)

Why are you testing a workstation card with a focus on gaming? I know you made an attempt to answer this question with the reasoning that if Titan is a gaming/workstation hybrid, then this card has to be the same.

The Titan brand has always been a hybrid, but more importantly it has always been the top-of-the-line card for Nvidia. It has always, with some exceptions between launches, outperformed its cheaper counterparts.

The Radeon Pro brand isn't the same thing. It has never been focused on gaming and AMD has never marketed it as such. Still, you ASSUME that it is. And I put the word "assume" in capitals here because it is the important word in this sense.

You go to the website for Titan and it says clearly that it is for gaming and is the ultimate card for such. Titan is a gaming card that can do productivity on the side.

Now, you go to the website for Radeon Pro and the Vega Frontier Edition and it says nothing about gaming until you scroll all the way down to find drivers. It does, however, say just about everywhere how it can improve productivity in a workstation environment. It talks about AI, medical applications, game design, graphics design and engineering - but not gaming. Radeon Pro is a workstation/productivity card that can do gaming on the side.

The distinction is important but is ignored by less knowledgeable sites who wish this card were AMD's answer to Titan. They wish it so much that they assume it is, even though there is little reason to make such an assumption. But it isn't, not by far. Fury was AMD's answer to Titan.

June 30, 2017 | 04:53 PM - Posted by ppi (not verified)

I guess I do not have to feel any remorse about buying G-Sync monitor for yet another year.

There is still this FLOPS-to-performance ratio that works against AMD cards of recent generations.

The RX 580 needs 6 TFLOPS to match (or not even, depending on the review site) a 4 TFLOPS GTX 1060.

In the same vein, if I take Vega's 13.1 TFLOPS, adjust it to the actual frequency of 1440 MHz instead of 1600 MHz, i.e. 11.8 TFLOPS, and then apply the 2/3 multiplier...

... I get 7.9 TFLOPS ... while the GTX 1080 has 8.2 and the GTX 1070 has 5.9. And this is roughly where Vega lands.
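The commenter's back-of-the-envelope math can be written out explicitly. The 2/3 multiplier is their empirical FLOPS-to-performance ratio for recent AMD cards (e.g. RX 580 vs GTX 1060), not an official figure, and the 1440 MHz clock is the one observed in this review's Heaven run.

```python
# Sketch of the estimate above: peak FP32 TFLOPS at the rated and the
# observed clocks, then the commenter's empirical 2/3 ratio applied.

def peak_tflops(shaders, clock_mhz):
    """Peak FP32 TFLOPS: each shader does one FMA (2 ops) per clock."""
    return shaders * 2 * clock_mhz * 1e6 / 1e12

rated = peak_tflops(4096, 1600)     # Vega FE spec: 4096 SPs at 1600 MHz
observed = peak_tflops(4096, 1440)  # clock actually observed under load
effective = observed * 2 / 3        # the commenter's FLOPS-to-perf ratio

print(f"rated:     {rated:.1f} TFLOPS")      # ~13.1
print(f"observed:  {observed:.1f} TFLOPS")   # ~11.8
print(f"effective: {effective:.1f} TFLOPS")  # ~7.9, between GTX 1070 (5.9) and 1080 (8.2)
```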

With so many proclaimed architectural improvements, however, there has to be potential to improve it. For example:
- Deferred rendering
- Use of the shader array for geometry (where AMD has historically lacked) - especially under DX11, this might be implementable so that it is transparent to the game, and then it should be an absolute beast at tessellation
Does any of that work?

July 1, 2017 | 02:37 AM - Posted by Anonymous67878645 (not verified)

I think some of the gaming-specific improvements aren't fully working yet, like tile-based rasterization, and I heard somewhere that geometry culling isn't fully working yet either.

June 30, 2017 | 04:53 PM - Posted by PsychoEva01 (not verified)

My Titan X Pascal runs Cinebench R15 OpenGL at 186 FPS @ 1418 MHz with ~30-50% GPU load. How did you get a Titan Xp, which is faster, to run so slow?

June 30, 2017 | 07:15 PM - Posted by Ryan Shrout

That's interesting, not sure. We ran the test several times however.

July 1, 2017 | 06:37 AM - Posted by Jabbadap

Cinebench is more of a CPU benchmark than an OpenGL benchmark (yes, even the OpenGL part of it). Out of curiosity, have you tried using an application profile so the drivers would actually try to load the GPU on that bench?

June 30, 2017 | 04:56 PM - Posted by Edkiefer (not verified)

I want to say, great livestream, even if it was live :)

Keep up good reviews.

June 30, 2017 | 07:15 PM - Posted by Ryan Shrout

Hey, thanks!

June 30, 2017 | 04:57 PM - Posted by Patrice (not verified)

A good professional card on par with the P5000 at half the price.
And you could game on it...

June 30, 2017 | 07:16 PM - Posted by Ryan Shrout

But you can game on a P5000 or P4000 as well, and these numbers tell us that gaming on those parts would be better than on the Vega FE.

July 1, 2017 | 01:40 AM - Posted by Nature1984 (not verified)

Please don't compare this card with the P4000/P5000; Nvidia has a hell of a lot of workstation-specific features... you know it. Quadro has always shown FirePro its place in the workstation market.

Titan is targeted at ultra-high-end gaming plus individual researchers/small business houses for compute/workstation applications.
Vega FE seems to be targeting only the latter half of Titan's market.

Anyway, Pascal is sufficient for Vega.

July 7, 2017 | 01:58 AM - Posted by Neon00 (not verified)

So you're saying to handicap it by comparing it to a sector it's not supposed to be compared to? It's intended to rival the Quadro lineup, and that's quite clear based on the benchmarks and the official Frontier Edition page on AMD's site.

This is in no way a rival to a Titan or 1080 Ti; the scores show that. It cleans the Titan's clock in various workstation applications, and the Titan cleans its clock in gaming applications. It tends to go more neck and neck with the P4000 and P5000 in performance, and is arguably a much better purchase than the P5000 due to costing half the price while matching it in various benchmarks.

That's like saying we should compare a 1080 Ti to an RX 560: sure, they can both perform the same functions, but come on, they clearly have different intentions, and that's transparent. It's ignorant to say the two are in the same sector, as it is ignorant to say the FE doesn't belong with the P5000 and P4000. As all the professional-tier benchmarks have shown, it fights amazingly in those regions with its price/perf, whereas in gaming, it's an overpriced 1070.

June 30, 2017 | 04:57 PM - Posted by John Connor (not verified)

I have been watching A.I. hardware for a while and this new AMD Frontier card intrigues me.

The reason it is slow is that it is not using the tile rasterizer and it is not hitting above 1400 MHz on the GPU, while the boost clock is 1600 MHz.

If the card had a 1600 MHz clock and better drivers (including more optimizations/the tile rasterizer turned on), I would say a 25% to 30% improvement, so it would be a little faster sometimes and a little slower sometimes than a 1080 Ti.

June 30, 2017 | 05:03 PM - Posted by willmore

I like the LED color control switch. There is a red setting and a blue setting. No green setting. I bet they're RGB LEDs. Can you imagine the meeting to decide that? "Why not green?" "Screw you, that's why!"

June 30, 2017 | 05:14 PM - Posted by me again (not verified)

To be perfectly fair,
you should give us gaming results compared with the Titan Xp and P5000... (the ones this card is fighting against).

June 30, 2017 | 07:17 PM - Posted by Ryan Shrout

I don't disagree. However, it just wasn't necessary to show the performance levels of Vega FE. I did not originally plan to include the GTX 1070 in my results; only after I saw where the Vega FE gaming results were sitting did I realize I had to include it.

June 30, 2017 | 05:33 PM - Posted by Clmentoz (not verified)

Hopefully with Vega FE released to market there can be some development on the benchmarking/testing software front - like some very, very huge billion-polygon mesh model scenes with debug testing on - to measure how Vega FE manages that HBM2 cache via the HBCC and the GPU's entire cache subsystem, moving data/textures/etc. between HBM2, regular DRAM, and even SSD-backed virtual paged GPU memory.

I'm very interested in knowing just how effectively Vega FE can manage very large 3D modeled scenes that exceed the available 16GB of HBM2 cache video memory. Hopefully Vega's HBCC can manage swapping data/textures/GPU kernels into and out of that HBM2 cache and allow for 3D scenes much larger than 16GB (or whatever the HBM2 memory size is for various Vega SKUs). If Vega can manage large amounts of data/textures swapped out to, say, 64GB of regular DDR4 DRAM, and the HBCC can effectively manage the swapping between the HBM2 cache and regular DRAM/paged GPU memory in the background, then that will be great for 3D animation production workloads.

The Vega micro-arch has many new features over Polaris/older GCN micro-archs, so that implies the usual development/optimization period after any new GPU micro-arch is released. Let the tweaking and testing begin, because there is so much new IP to optimize for with Vega.

June 30, 2017 | 05:42 PM - Posted by Cerebralassassin

I watched the WHOLE live stream. I liked it a lot, Thank you for doing that. Great job as always Ryan and PCPER!

June 30, 2017 | 07:18 PM - Posted by Ryan Shrout

Awesome, thanks! We had a surprising number of people check in on it!

July 1, 2017 | 03:01 AM - Posted by Martin Trautvetter

I really enjoyed watching the replay!

June 30, 2017 | 05:49 PM - Posted by Marcelo Viana (not verified)

It's actually sad that most of the pages are about games.
- We have the HBCC to look at, but no: how does it perform in that game?
- The HBCC can address terabytes; let's figure out a way to see how it works. Not now, we need to see how that other game runs.
- For a card that is made for creation, let's see what ROCm can do with these cards. ROCm? What is it? We have to see this other game.
- Let's hook up a bunch of cameras and see if this card can really handle them. Nope, another game.
- OK, OK, just run some synthetic benchmarks that anyone can do by themselves and call it a day.

I know most of the audience is gamers, and as you well put it, there's no problem in doing that, but for god's sake, from a professional standpoint, do a review of the card for people that are really interested in the card and not in previewing how a future RX card is going to perform; it's useless, really.
There's a lot of new tech on this card, like:
HBCC - find a way to test at least 1 terabyte of virtual address space.
NCU packed math - 8-bit (512 ops per clock), 16-bit (256 ops per clock) and 32-bit (128 ops per clock); see if it scales, or if it's just bullS@it propaganda, since Nvidia felt it necessary to build a proper 8-bit processor instead of using CUDA.
Next-gen pixel engine - this is the kind of performance we want to see.
Programmable geometry pipeline - create a very complex mesh and see how the thing performs.
And test the F@cking FreeSync 2.
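For what it's worth, the per-clock op rates quoted in that list line up with simple packing arithmetic. This is only a sanity check of the commenter's numbers; the 64 ALUs per CU and counting an FMA as 2 ops come from public GCN/Vega specs, not from anything measured here.

```python
# Sanity check of the quoted per-clock op rates. Assumptions: 64 ALUs
# per CU, FMA counts as 2 ops, and packed math doubles throughput at
# 16-bit and doubles it again at 8-bit.

def ops_per_clock_per_cu(bits):
    alus = 64
    fma_ops = 2
    packing = 32 // bits  # 1x at 32-bit, 2x at 16-bit, 4x at 8-bit
    return alus * fma_ops * packing

for bits in (32, 16, 8):
    print(f"{bits:2d}-bit: {ops_per_clock_per_cu(bits)} ops/clock/CU")
# prints 128, 256 and 512 - matching the numbers in the comment above
```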

Am I asking too much? Well, maybe, but it's all about the card; it's actually what this card brings, for god's sake.

And before I forget, thank you for bringing us this game performance and some pro synthetic benchmarks; even though it's much less than I expected, I really appreciate it.
Still waiting for a real review of the Vega FE card.

June 30, 2017 | 07:19 PM - Posted by Ryan Shrout

Honestly, I'm totally with you on this. But for now we have no way to evaluate some of those use cases and no input from AMD on how to do so.

July 1, 2017 | 08:19 PM - Posted by Marcelo Viana (not verified)

Thank you Ryan, I'm glad we are on the same page here. I'm waiting for a full review and hoping AMD gets out of the cave and gives us some. It's us consumers, interested in their product, that are clamoring, for God's sake.

July 1, 2017 | 12:38 PM - Posted by Cerebralassassin

calm your 8008135 Marcelo Viana

July 1, 2017 | 08:28 PM - Posted by Marcelo Viana (not verified)

lol, ok, you're right - even if I think my brain is more capable than that. Well, maybe just a little.
I have to confess that I'm really anxious. I've been waiting for this new arch for so long, you know...

June 30, 2017 | 06:26 PM - Posted by Chris C (not verified)

Sad to see DVI being phased out on some new cards. If they really want to reduce the flexibility of the ports, I would prefer HDMI be ditched. I personally use DVI as my primary. I don't see the benefit of 3 DisplayPorts instead of, say, 2 DisplayPorts and a dual-link DVI.

Before anyone asks why: first off, we don't all want to buy a new monitor just to use a GPU - there are many capable screens out there without DisplayPort. Also, my current monitor, even though it has a DisplayPort, has a nasty bug where the port gets stuck in sleep mode.

June 30, 2017 | 11:45 PM - Posted by bria5544

DVI and HDMI are electrically compatible. Buy a passive adapter and quit whining.

July 1, 2017 | 07:35 AM - Posted by Martin Trautvetter

I think the FE even comes with a SL adapter in the box.

June 30, 2017 | 06:44 PM - Posted by Cellar Door

Hey Ryan - great job as always! Anyway, can you guys do something about the size and blurriness of the font on your graphs? It's super hard to read.

For reference, I've been viewing your site on Dell 27" UltraSharps for longer than I can remember, and that is the only thing that lets it down.

Cheers - keep on rocking!

June 30, 2017 | 07:21 PM - Posted by Ryan Shrout

Yah, that font needs to be addressed. Legacy scripting... I'll try to work on it!

June 30, 2017 | 07:39 PM - Posted by streaml1ne556 (not verified)

So if the frequency did not dip during the voltage drops in RotTR, did the FPS drop during those periods? If it can run the card at those frequencies with less voltage, might that suggest some kind of bug in their voltage curves?

June 30, 2017 | 08:56 PM - Posted by Ryan Shrout

Frequency did not appear to drop, though our monitoring tool may have been polling at too low a resolution to catch it.

June 30, 2017 | 08:22 PM - Posted by tatakai

was the average clock for the nvidia GPUs mentioned?

June 30, 2017 | 08:56 PM - Posted by Ryan Shrout

Not in this story - but the 1080 Ti review and others list those for each card.

June 30, 2017 | 09:40 PM - Posted by #636cores (not verified)

I am really interested in the HBC implementation; can you guys run some tests on this awesome new tech?

June 30, 2017 | 09:42 PM - Posted by Tony (not verified)

Does the Radeon Vega Frontier Edition have Solidworks certified drivers for realview? Does it have certified drivers for *any* professional CAD application? Thanks!

July 3, 2017 | 04:35 AM - Posted by Hishnash (not verified)

Not yet; due to the new architecture with features such as packed math, I think AMD wants the card in the hands of developers first.

But given it is being branded under the Radeon Pro line (look at the web domain), I think they will add them later; they want collaboration with the tool makers first.

June 30, 2017 | 09:47 PM - Posted by AquaVixen (not verified)

So you didn't use 100% maximum possible crazy settings in GTA V: anti-aliasing (MSAA) at half, post-processing at High instead of Ultra... and you didn't even show us what you had set in the advanced graphics settings.

Thanks for a shitty review. It's so hard to find people that actually run games 100% completely maxed out in all possible settings these days.

June 30, 2017 | 09:48 PM - Posted by AquaVixen (not verified)

Never mind, we do get to see the advanced settings - I missed that. But still... not maxed out everywhere.

July 1, 2017 | 09:12 AM - Posted by OrphanageExplosion (not verified)

Max out GTA settings - extended distance scaling, in particular - and you will end up measuring CPU performance, not GPU performance. It's really not a particularly clever idea.

And even at these settings, it's still identical workloads being compared.

July 1, 2017 | 09:14 AM - Posted by OrphanageExplosion (not verified)

Oh, I see that extended distance is maxed in this test. If so, the big difference in the test results between AMD and Nvidia is likely down to the less than ideal Radeon DX11 driver.

June 30, 2017 | 10:47 PM - Posted by lhowe005 (not verified)

Ughh... why are there no RX 480s/580s in these benchmarks?

July 1, 2017 | 02:18 AM - Posted by Martin Trautvetter

What would be the point of cluttering up the graphs with a significantly slower GPU?

July 1, 2017 | 01:54 PM - Posted by J-dawg (not verified)

It would be useful so you could check whether Vega is performing consistently with the rest of the GCN family, in terms of IPC compared to Fiji (which somehow seems to have gone down).

June 30, 2017 | 11:28 PM - Posted by Mandrake

Great review, Ryan. The results seem puzzling. The fact that the Vega product is not, at minimum, consistently matching a GTX 1080 is worrisome. I'd be curious to see what AMD has to say about this in due course, and what they say at SIGGRAPH regarding the 'RX' variant.

The livestream was interesting too; fantastic to be able to see the benchmarking in progress and hear some of your input too. Thanks!

July 1, 2017 | 12:01 AM - Posted by alucard (not verified)

Great review as always. please don't mind the idiots blabbering "bu bu buut it's not a gaming card!"

July 1, 2017 | 12:23 AM - Posted by James M (not verified)

So you "overclocked" the card by increasing the offset and power limit, but didn't undervolt it, so the clocks didn't even reach stock. Now, the 1080 and 1080 Ti tested - were they both Founders Edition or aftermarket cards? Thermals are definitely the limiting factor here.

Everyone reading this article, please wait for Gamers Nexus's video/article, as that's where a valid and proper review of the card will be conducted (and the fact that this is still not a gaming card will be acknowledged).

July 1, 2017 | 12:40 AM - Posted by Exascale

I heard that the memory is made by Micron? Is Micron making HBM2 now?

July 1, 2017 | 04:26 PM - Posted by Tim Verry

SK Hynix and Samsung are making HBM2. I haven't heard anything about Micron getting into it, but I'm not sure.

July 1, 2017 | 01:04 AM - Posted by Bruzur (not verified)

I mean, can the added benefits of (enabling) tile-based rasterization, working alongside proper driver optimization, really boost their "high-end gaming" cards beyond the 1080 Ti?

July 1, 2017 | 08:04 AM - Posted by looncraz (not verified)

There's a chance, yes.

TBR will reduce power draw, so the card will then hit higher clocks.

At the same time, draw binning can occur using the TBR algorithm's side effects, so even less work needs to be done per draw, which leads to draws completing faster, which leads to higher framerates.

How high it will go depends entirely on the GPU's ability to keep the pipelines filled after an in-flight draw is canceled.

You might see a 1% improvement or you might see a 20% improvement (even though, technically, the geometry performance may have doubled)... but it won't be consistent.
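The binning step being described can be illustrated with a toy sketch: each triangle is assigned to every screen tile its bounding box touches, so shading can later proceed one tile at a time out of on-chip cache. This is purely illustrative - the tile size is arbitrary and real hardware binning (Vega's draw-stream binning rasterizer included) is far more involved.

```python
# Toy sketch of the binning step in tile-based rasterization (TBR).
# Purely illustrative; real hardware binning is far more involved.

TILE = 32  # tile edge in pixels (an arbitrary choice for the sketch)

def bin_triangle(tri, bins):
    xs = [v[0] for v in tri]
    ys = [v[1] for v in tri]
    # Walk every tile overlapped by the triangle's bounding box
    for ty in range(int(min(ys)) // TILE, int(max(ys)) // TILE + 1):
        for tx in range(int(min(xs)) // TILE, int(max(xs)) // TILE + 1):
            bins.setdefault((tx, ty), []).append(tri)

bins = {}
bin_triangle([(5, 5), (60, 10), (20, 50)], bins)  # straddles four tiles
print(sorted(bins))  # [(0, 0), (0, 1), (1, 0), (1, 1)]
```

Once triangles are grouped per tile, hidden ones within a bin can be culled before shading, which is where the power and bandwidth savings come from.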

July 1, 2017 | 01:05 AM - Posted by AlmirPreldzic (not verified)

You guys do realize that AMD has stated there is no performance DELTA between pro and gaming mode with this card, right?
This card's "gaming mode" is NOT even running Vega optimized drivers. AT ALL.
RX VEGA is launching july 30th/august for a reason. The drivers are not even finished.
This Vega card is running on old GCN drivers not optimized for Vega at all.
At the end of the day, this is a WORKSTATION video card. If You trolls and fanboys had any brains then you would ask yourself how come nobody is doing gaming benchmarks on nvidia quadro workstation cards and drawing conclusions on how well their gaming cards should do? No? cuz You're literally RETARDED.

July 1, 2017 | 05:21 AM - Posted by Peter2k (not verified)

no gaming benchmarks on Quadro

if you say so

maybe it's because those are expensive and Nvidia doesn't provide samples

or Vega is advertised as a pro card you can game on as well
like a Titan

July 1, 2017 | 03:32 AM - Posted by Prateek (not verified)

Since this is one of the few cards with 16 GB of VRAM, could you please run some games at 4K to observe the max VRAM usage?
RotTR and CoD are games which commit high VRAM.

July 1, 2017 | 04:19 AM - Posted by Anonymouse (not verified)

Imagine waiting all this time for a 1075. Man I hope the RX is faster or at least 200 notes cheaper than I thought it would be.

July 1, 2017 | 08:06 AM - Posted by looncraz (not verified)

This level of performance would be decent for about $375.

July 1, 2017 | 05:42 AM - Posted by Anonymous57 (not verified)

For the Quadro testing, did you use the latest drivers, or are the numbers from a previous review? I remember that as of the GeForce 378.66 driver and beyond, new OpenCL updates were added to the drivers. That might be the reason the Titan Xp scored pretty well in the OpenCL testing vs the Quadro cards.

July 3, 2017 | 04:29 AM - Posted by Hishnash (not verified)

With OpenCL tests for the high-end pro cards, you need to find test cases that push the memory more than the raw compute power.

In particular for rendering: most CGI scenes in production need multiple GB of VRAM, normally exceeding that of the card itself, so the performance of the card's I/O and caching is of critical importance.

July 1, 2017 | 05:54 AM - Posted by Uranus (not verified)

np, it costs $450

July 3, 2017 | 01:18 PM - Posted by Sorta (not verified)

the gaming version

July 1, 2017 | 06:07 AM - Posted by rd (not verified)

It's highly likely that the gaming drivers are still the modified Fiji drivers that Vega was first shown gaming on.

It would certainly explain why the card behaves like an OC'd Fiji in gaming, and why its observed tile-based rendering pattern is identical to Fiji's, despite Vega totally changing it. It would also explain why the gigantic uplift in pro applications is not present in gaming.

I suspect when the RX drivers drop, this will be an extremely fast card in gaming. Perhaps faster overall than the 1080Ti or Xp.

July 1, 2017 | 06:17 AM - Posted by Damin (not verified)

Thanks for keeping the hype alive :D

July 1, 2017 | 10:27 AM - Posted by alucard (not verified)

it's extremely unlikely that RX Vega will perform much better. If it could, AMD would've launched it first, instead of letting FE out the door early. They aren't *that* dense.

July 1, 2017 | 11:01 AM - Posted by HP (not verified)

I won't comment on the DX/OGL drivers, but the Vulkan driver cannot be too old. Vega FE supports version 1.0.39, which only received driver support in April and is still the current release. This API version didn't exist yet at the time of the demos in December/January.

July 1, 2017 | 05:52 PM - Posted by rd (not verified)

It's the Fiji driver. This has now been confirmed. AMD supports Fiji, hence the latest Vulkan update.

Vega FE is not using Vega drivers for gaming.

July 3, 2017 | 04:27 AM - Posted by Hishnash (not verified)

If it is true that they are using the Fiji code base with minimal changes at this point, then I would expect significant improvements, given this is the first AMD chip the driver supports that has packed math.

That means they will need to do a lot of adjustment to the Fiji driver code to make use of it, but if they do, they will be able to pack multiple operations into fewer clock cycles. This could bring a very big boost to performance if that is the case.

July 1, 2017 | 08:52 AM - Posted by ChristopheR (not verified)

I observed that NVIDIA disables the geometry tile caching (introduced in Maxwell) on their Quadro cards. My guess is that pro users like to throw a lot of triangles at the cards, which would result in constant flushing of the tile cache once it's full.

My guess is that if AMD targets this card at pro users, the geometry tile caching introduced in Vega is also disabled on this card. That could lead to significant performance improvements on the gamer variant of this card.

To test this theory, I would say run the gaming benchmarks on a Quadro and a GeForce that use the same GPU.

That said, geometry tile caching is phenomenal (see the GeForce GTX 750 release at the time!), but I can hardly imagine it's going to be enough to do significantly better than a GeForce GTX 1080.

July 1, 2017 | 08:52 AM - Posted by Harney

Thank you Ryan for this, I do appreciate the review :)

July 1, 2017 | 10:47 AM - Posted by HP (not verified)

Any chance of seeing more purely synthetic kinds of results?

I think many of us are quite curious to see how it performs in pixel/texel fillrate tests and tessellation/geometry tests. I think they would reveal much about the architecture.

July 3, 2017 | 04:23 AM - Posted by Hishnash (not verified)

Yeah, it would be interesting to see how the HBCC compares as well; render out a scene with some super massive textures, maybe.

July 1, 2017 | 12:36 PM - Posted by Mr Anthony Davis (not verified)

I hate your graphs; why can't you just do a normal FPS chart like everyone else to go along with your current ones?

July 3, 2017 | 04:22 AM - Posted by Hishnash (not verified)

Frame times are much more informative than FPS; FPS is always an average (over at minimum 1 second).

But frame times are much better at showing whether there are visual stutters.

For example, if you have a 1-second period at 100 FPS and there are 5 frames that are much slower than normal, on the average you will not really notice, since all 100 frames rendered within the second - but some were much, much slower, so you get a long 30 ms gap with no update on screen.
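That scenario can be put in numbers. The frame times below are made up for illustration; the point is that the FPS average looks healthy while the worst frame times reveal the hitches.

```python
# The scenario above in numbers: 100 frames delivered in about one second
# reads as a healthy FPS average even when a few frames are 30 ms hitches.

frames_ms = [9.0] * 95 + [30.0] * 5      # 95 smooth frames plus 5 hitches
total_s = sum(frames_ms) / 1000.0

avg_fps = len(frames_ms) / total_s
worst_ms = max(frames_ms)
print(f"average FPS     : {avg_fps:.1f}")      # looks fine on a bar chart
print(f"worst frame time: {worst_ms:.0f} ms")  # the stutter you actually feel
```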
