Podcast #457 - Radeon Vega FE, NVIDIA Multi-Die, Ryzen Pro, and more!

Subject: General Tech | July 6, 2017 - 10:40 AM |
Tagged: video, Vega FE, starcraft, seasonic, ryzen pro, radeon, podcast, nvidia, Multi-Die, gtx 1060, galax

PC Perspective Podcast #457 - 07/6/17

Join us for Radeon Vega FE, NVIDIA Multi-Die, Ryzen Pro, and more!

You can subscribe to us through iTunes and you can still access it directly through the RSS page HERE.

The URL for the podcast is: http://pcper.com/podcast - Share with your friends!

Hosts: Ryan Shrout, Jeremy Hellstrom, Josh Walrath

Peanut Gallery: Alex Lustenberg, Ken Addison

Program length: 1:08:04
 
Podcast topics of discussion:
  1. Week in Review:
      1. RX Vega perf leak
      2. 0:33:10 Casper!
  2. News items of interest:
  3. Hardware/Software Picks of the Week
  4. Closing/outro

Subscribe to the PC Perspective YouTube Channel for more videos, reviews and podcasts!!

July 6, 2017 | 11:52 AM - Posted by Digidi

I wrote this in the Vega review. Maybe Josh Walrath is wrong with his 10%:

"
Thank you Ryan, I knew that you ran the trianglebin test. I had hoped you would dig a little deeper into the rasterizer's behavior.
I saw in another post that you asked some experts, who said that the improvement from the tile-based rasterizer is 10%.
I'm a little surprised. Nvidia had an IPC improvement of 35% between Kepler and Maxwell with the tile-based rasterizer.
If you think about it, you save performance twice with TBR. You don't burden the shaders with unimportant workloads, and because of that you also get capacity back from the shaders: the shaders that did unimportant work before are now free to do important work.
Also, remember your article about Deus Ex and the 220 million triangles, of which only about 2 million are visible?
That's the advantage of a tile-based rasterizer.
https://www.pcper.com/reviews/Graphics-Cards/AMD-Vega-GPU-Architecture-P...

But thank you for your investigation, and for listening to your community. I appreciate it!
"

July 6, 2017 | 12:04 PM - Posted by psuedonymous

"I'm a little bit surprised. Nvidia hat an ipc improvement of 35% between Kepler and Maxwell with the tiles based Rasterizer."

Tiled Rasterisation was not the only change between Kepler and Maxwell. 35% was the total improvement from all changes, not just from TR.

Plus, TR only starts coming into its own at higher resolutions: tiling the rasteriser means you have an extra overhead for geometry segmentation, and added overhead for work repeated per-tile (that for one-shot rasterisation would only be done once per scene).

"Also did you remember your article about Deus x and the 220Million Triangles where are only 2 billion are viewed."

That's to do with geometry culling, which you can do with many different types of rasterisation.
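(One concrete example of such a rasterizer-agnostic culling step is back-face culling by winding order. This minimal Python sketch assumes counter-clockwise front faces, a convention chosen purely for illustration; it runs before rasterization, so it works the same for tiled and immediate-mode rasterizers.)

    def signed_area(tri):
        """Twice the signed area of a screen-space triangle."""
        (x0, y0), (x1, y1), (x2, y2) = tri
        return (x1 - x0) * (y2 - y0) - (x2 - x0) * (y1 - y0)

    def cull_backfaces(triangles):
        # Keep only counter-clockwise (front-facing) triangles; the
        # discarded ones never reach the rasterizer at all.
        return [tri for tri in triangles if signed_area(tri) > 0]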

July 6, 2017 | 02:07 PM - Posted by Digidi

No, that's not true. A tile-based rasterizer is efficient when many polygons share one pixel. If you render the same picture at 4K and at 1080p, the polygons stay the same, but there are fewer pixels at 1080p. That means at 1080p each pixel covers more polygons, which is totally inefficient for a conventional rasterizer.
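(The arithmetic behind this claim can be sketched directly; the 2-million-triangle figure below is a hypothetical visible count chosen for illustration, not a measurement.)

    triangles = 2_000_000  # hypothetical visible triangles per frame

    for name, w, h in [("1080p", 1920, 1080), ("4K", 3840, 2160)]:
        print(f"{name}: {w * h / triangles:.1f} pixels per triangle")

    # 1080p: 1.0 pixels per triangle -> many triangles are sub-pixel
    # 4K:    4.1 pixels per triangle -> rasterizer work is better amortized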

July 6, 2017 | 03:00 PM - Posted by Clmentoz (not verified)

Polygons that are smaller than one pixel are supposed to be culled early in the process by a primitive discard step, while tile-based rasterizers are there to take advantage of cache locality: every rendering step is done on a smaller, less cache-stressing tile that can be held completely in L2 cache until the rendering work on that tile is finished to completion. Tiled and binning rasterizers help keep memory-to-cache bandwidth and latency issues to a minimum for the render processors and their limited cache resources.
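(A minimal sketch of that cache-locality point, assuming the per-tile buffers stand in for on-chip L2 and that fragments arrive pre-rasterized; the data shapes and the shade callback are illustrative, not Vega's actual pipeline.)

    def render_tile(fragments, tile_w, tile_h, shade):
        """fragments: iterable of (x, y, z, prim) covering this tile,
        with tile-local x/y; shade(prim, x, y) returns a color."""
        depth = [[float("inf")] * tile_w for _ in range(tile_h)]  # on-chip
        color = [[0] * tile_w for _ in range(tile_h)]             # on-chip
        for x, y, z, prim in fragments:
            if z < depth[y][x]:        # depth test hits the tile buffer,
                depth[y][x] = z        # never external memory
                color[y][x] = shade(prim, x, y)
        return color  # flushed to DRAM once, after the tile completes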

AMD's drivers still need more work, and will continue to need more work even after RX Vega's release. Those not happy with AMD's limited budget are free to finance more driver developers for AMD, or they can wait for the Ryzen/Threadripper/Epyc revenues to come in and give AMD the necessary finances (and total revenue growth figures) to get its driver development staff up to a level nearer to Nvidia's.

This is not going to be fixed overnight for AMD, and hopefully by Navi's RTM, AMD will have had those Zen CPU revenues to fix any GPU/CPU driver development staffing issues. Vega is a new micro-architecture with many new features, and AMD has more problems to overcome this time around, but the new Vega IP really will help AMD in the future, for professional markets as well as for gaming.

Maybe by Navi's RTM, AMD will have the resources for a gaming-focused GPU design with less compute and more ROPs, plus a specialized line of Navi compute/AI ASICs for the professional markets, but the funds are not there this go-around for Vega. No money, no funny.

July 6, 2017 | 04:40 PM - Posted by Digidi

It's not that simple: a tile-based rasterizer has a totally different procedure for culling, and that procedure is much more efficient.

Why do you think Nvidia hid this aspect of their GPUs for so long?

July 7, 2017 | 10:48 AM - Posted by Clmentoz (not verified)

Tile-based rendering has been around for a good while (1), and Nvidia was never fooling the experts; they knew for a long time what Nvidia was doing.

(1)

"Tiled rendering"

https://en.wikipedia.org/wiki/Tiled_rendering

July 6, 2017 | 06:29 PM - Posted by Anonymously Anonymous (not verified)

I agree with Josh: you get a significant node shrink, the die size of Vega is pretty close to Fiji's, clock speeds are much higher, and yet we see only a small increase in performance?

It's either hardware features baked into the GPU that gaming apps can't use, driver features that aren't fully baked and were left off until a later time, or both.

It just seems to me that Vega isn't ready yet, but AMD has to release something, so why not release the card in a way that artificially limits who can get it, so as to generate interest?

July 6, 2017 | 06:56 PM - Posted by Alamo

Is this something that can take off?
I think so, yes. Node shrinks are getting harder and much more expensive, so developers won't have a choice in the matter; they will have to follow.
AMD, as usual, is coming in first with the tech: ready or not, here I come!
They will need the next node to be able to design the smallest possible die, so that games that don't support multi-GPU (mostly old ones) will still be playable at 1080p, but this will put the devs in the spotlight, forcing them to add multi-GPU support to fully utilize the hardware, especially if both vendors do it.

July 6, 2017 | 08:06 PM - Posted by CNote

I picked up the same ASRock board Jeremy recommended... it's pretty nice with my 1600. It runs my LPX 3200 at 2966, but I OC'd the Ryzen to 3.9 pretty easily.
