Podcast #432 - Kaby Lake, Vega, CES Review

Subject: Editorial | January 12, 2017 - 04:42 PM |
Tagged: Vega, Valerie, snapdragon, podcast, nvidia, msi, Lenovo, kaby lake, hdr, hdmi, gus, FreeSync2, dell, coolermaster, CES, asus, AM4, acer, 8k

PC Perspective Podcast #432 - 01/12/17

Join us this week as we discuss the DasKeyboard, the Samsung 750 EVO, CES predictions and more!

You can subscribe to us through iTunes and you can still access it directly through the RSS page HERE.

The URL for the podcast is: http://pcper.com/podcast - Share with your friends!

Hosts: Ryan Shrout, Allyn Malventano, Josh Walrath, Jeremy Hellstrom

Program length: 1:45:28

Podcast topics of discussion:
 
  1. Week in Review:
  2. News items of interest:
  3. Hardware/Software Picks of the Week
    1. Jeremy: 1:42:11 They did it, they beat the hairbrush
  4. Closing/outro

Subscribe to the PC Perspective YouTube Channel for more videos, reviews and podcasts!!


January 12, 2017 | 11:37 PM - Posted by Anonymous (not verified)

Will Ryzen be a Fat Tuesday event for real, or will it be later? Throw me something, Lisa Su!

January 13, 2017 | 12:46 AM - Posted by notwalle (not verified)

thanks guys!!!!!!!!!!!!!!!!!

January 13, 2017 | 02:34 AM - Posted by Anonymous (not verified)

The high bandwidth cache stuff probably isn't just a name change. They added some form of virtual memory to GPUs a long time ago, but it sounded like it needed software and driver support, and it could only swap out to system memory. I don't know if it has been used that much. This sounds like they are adding hardware level virtual memory support. If that is the case, then you can allocate as much memory as you want; it will just allocate it in a virtual address space. It won't actually be mapped to real memory until it is written to. If the amount of memory exceeds the physical memory available, then it should be able to swap out to the next level, whatever that may be. With the way virtual memory works, system memory can be considered a cache for the state on secondary storage. Calling it a cache doesn't change much. The presence of a hardware virtual memory management unit is a real thing though, not just a name change.

They could use this functionality across multiple products. Their HPC parts with HBM will need to connect to much more memory somehow, though it is unclear how external memory will be mapped. In my opinion, HBM pushes off-package DRAM farther out in the hierarchy. With it farther out, it may make more sense to put stacked DRAM on an M.2-like device; that is, behind a higher-level interface rather than the very low-level memory interface currently used for DRAM. I don't know if PCIe is low enough latency. A Hybrid Memory Cube-style interconnect is a similar physical-level interface, but I don't know whether it allows off-package connections. You could actually fit a lot of DRAM on an M.2-sized device using stacked memory packages.

January 13, 2017 | 09:59 AM - Posted by Anonymous (not verified)

I see AMD doing HBC and that large virtual address space (512 TB) for several reasons, one being the server/HPC/exascale/AI markets where GPUs work with large in-memory data sets. With the large virtual address space and some HBM memory controller logic, these large data sets can be swapped out in the background while the GPU works mostly from the HBC/HBM2. So the GPU can directly manage a virtual memory pool for itself and also issue direct calls to the OS, letting the CPU/DMA controllers stage more data into slower DIMM-based system DRAM and transfer it in the background to the HBM2, while the GPU just keeps crunching its workloads uninterrupted from the faster HBM2.
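The "stage data in the background while the GPU keeps crunching" idea above is essentially double buffering, which can be sketched with a prefetch thread and a bounded queue. The tier names and chunk sizes here are illustrative assumptions, not anything from AMD's design.

```python
import queue
import threading

def prefetcher(chunks, buf):
    """Simulates background staging from a slow tier into a staging buffer."""
    for chunk in chunks:
        buf.put(chunk)      # blocks when the buffer is full, i.e. the
                            # consumer hasn't caught up yet
    buf.put(None)           # sentinel: no more data

chunks = [f"chunk-{i}" for i in range(4)]
buf = queue.Queue(maxsize=1)    # at most one chunk staged ahead
threading.Thread(target=prefetcher, args=(chunks, buf)).start()

processed = []
while (chunk := buf.get()) is not None:
    processed.append(chunk)     # the "GPU" works on the staged chunk here

print(processed)    # ['chunk-0', 'chunk-1', 'chunk-2', 'chunk-3']
```

The bounded queue is what hides the slow tier's latency: as long as staging the next chunk takes no longer than processing the current one, the consumer never stalls.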

As with any caching system, a smaller amount of faster memory (HBM2 treated like cache memory in this case) can leverage a larger amount of slower DIMM-based DRAM, and with proper caching algorithms the GPU can be made to work mostly from the HBM2 cache. Even the latency of an SSD/hard-drive-based virtual paging file can be hidden by the HBC/cache subsystem through proper pre-staging: from the SSD/hard-drive swap file into slower system DRAM, and then from DIMM-based DRAM into the HBM2 via the HBC controller as needed.
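The payoff of working "mostly from the cache" is just the standard effective-latency formula, t_avg = h·t_fast + (1 − h)·t_slow. The latency numbers below are made-up placeholders for the two tiers, not measured HBM2 or DRAM figures:

```python
def effective_latency(hit_rate, t_fast_ns, t_slow_ns):
    """Average access time for a two-level memory hierarchy."""
    return hit_rate * t_fast_ns + (1.0 - hit_rate) * t_slow_ns

# Assume a fast tier at 50 ns and a slow tier at 200 ns (illustrative values).
for h in (0.50, 0.90, 0.99):
    print(f"hit rate {h:.0%}: {effective_latency(h, 50, 200):.1f} ns")
# hit rate 50%: 125.0 ns
# hit rate 90%: 65.0 ns
# hit rate 99%: 51.5 ns
```

At a 99% hit rate the average access time is within a few percent of the fast tier alone, which is why a good caching algorithm lets a small HBM2 pool front a much larger, slower one.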

Another type of consumer system will also benefit from AMD's Vega and Ryzen IP: an APU on an interposer with even a single stack of HBM2 treated as high bandwidth cache by the HBC memory controller IP on the APU. Even if a laptop OEM provided only a single channel to a slower, larger pool of DIMM-based DRAM, the integrated GPU would not be starved of memory bandwidth, because it would work mostly from its own pool of HBM2 cache, with the slower DRAM pool's latency and bandwidth limitations hidden by the logic in the APU's memory management hardware. An interposer-based APU with 4/8 GB of HBM2 could probably manage and leverage 8-16 GB+ of extra single-channel DIMM-based DRAM in such a way that the integrated graphics rarely works directly from the slower DRAM pool and is never starved for bandwidth, as it has been in the past when dependent on a single channel of DIMM-based DRAM alone.

Zen/Vega APUs are really going to be great if they can at least be made with a single stack of HBM2 at 4/8 GB and some access to a slower, larger pool of DIMM-based DRAM, with the option to add 8/16 GB of DIMMs. There will probably be laptop SKUs for home use that only need a single stack of HBM2 at 4/8 GB, while for some higher-end graphics applications there could be Zen/Vega SKUs with HBM2 plus single/dual-channel access to a larger, slower pool of DIMM-based DRAM, up to say at least 64 GB, for large graphics/video workloads.

The HBC controller and the large virtual address space allowed by the new Vega IP are going to be great for discrete GPU systems managing their own PCIe-card-based SSD/NVM texture storage pools under Vega's high bandwidth cache controller, with the HBM2 cache and the GPU's managed virtual memory paging swapping from the on-card NVM store. I'd expect that on-GPU-card SSD/NVM storage IP to eventually work its way down into the consumer market on AMD's flagship GPUs at some point. I'm really waiting for the Ryzen/Vega APUs to be announced, especially if any APU SKUs come with HBM2.

January 13, 2017 | 08:10 AM - Posted by Anonymous (not verified)

RSS feed has the episode as 432 - 01/12/16. Should be /17, right? Just a heads up!

January 13, 2017 | 10:29 AM - Posted by Anonymous (not verified)

Oh no, thin-and-light freaks! Those U-series Lakes have sprung a leak, straight out of the USB port!

"Researchers at Positive Technologies have released information detailing how Intel's U-Series Skylake and Kaby Lake series processors are vulnerable to a USB debugging bypass which could be used to attack systems."(1)

(1) "Intel's Skylake and Kaby Lake-based Systems Vulnerable to USB Exploit"

https://www.techpowerup.com/229594/intels-skylake-and-kaby-lake-based-sy...

January 17, 2017 | 03:53 AM - Posted by notwalle (not verified)

I guess people will be getting more mini PCs if the cost of 3D TVs goes down. Also, I wonder if the next console will skip VR and do 3D TV with HDR.
