Subject: General Tech | November 8, 2018 - 01:54 PM | Ken Addison
Tagged: Zen 2, xeon, Vega, rome, radeon instinct, podcast, MI60, Intel, EPYC, cxl-ap, chiplet, cascade lake, amd, 7nm
PC Perspective Podcast #521 - 11/08/18
Join us this week for discussion on AMD's new Zen 2 architecture, 7nm Vega GPUs, SSD encryption vulnerabilities, and more!
The URL for the podcast is: http://pcper.com/podcast - Share with your friends!
- iTunes - Subscribe to the podcast directly through the iTunes Store (audio only)
- Google Play - Subscribe to our audio podcast directly through Google Play!
- RSS - Subscribe through your regular RSS reader (audio only)
- MP3 - Direct download link to the MP3 file
Hosts: Jim Tanous, Jeremy Hellstrom, Josh Walrath, Allyn Malventano, Ken Addison, and Sebastian Peak
Peanut Gallery: Alex Lustenberg
Program length: 1:42:27
Podcast topics of discussion:
Week in Review:
Thanks to Casper for supporting our podcast! Save $50 on select mattresses at http://www.casper.com/pcper code pcper
News items of interest:
Picks of the Week:
Subject: General Tech | November 6, 2018 - 03:42 PM | Jeremy Hellstrom
Tagged: AMD Radeon Instinct, MI60, MI50, 7nm, ROCm 2.0, HPC, amd
If you haven't been watching AMD's launch of the 7nm Vega-based MI60 and MI50, you can catch up right here.
You won't be gaming with these beasts, but if you work on deep learning, HPC, cloud computing, or rendering apps, you might want to take a deeper look. The new PCIe 4.0 cards use HBM2 ECC memory and Infinity Fabric interconnects, offering up to 1 TB/s of memory bandwidth.
The MI60 features 32GB of HBM2 and 64 Compute Units containing 4096 Stream Processors, which translates into 59 TOPS INT8, up to 29.5 TFLOPS FP16, 14.7 TFLOPS FP32, and 7.4 TFLOPS FP64. AMD claims it is currently the fastest double-precision PCIe card on the market, edging out the 16GB Tesla V100 and its 7 TFLOPS of FP64 performance.
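Those peak figures hang together arithmetically. A minimal sketch, assuming a boost clock of roughly 1.8 GHz (not stated in the announcement, but inferred by working backwards from the quoted 14.7 TFLOPS), shows how the FP32, FP16, and FP64 rates relate on Vega 20:

```python
# Sketch of how the MI60's quoted peak-throughput figures relate.
# Stream-processor count and precision ratios are from the article;
# the ~1.8 GHz clock is an ASSUMPTION inferred from the FP32 number.

def peak_tflops_fp32(stream_processors, clock_ghz):
    """Peak FP32 = SPs x 2 ops/clock (fused multiply-add) x clock."""
    return stream_processors * 2 * clock_ghz / 1000.0

fp32 = peak_tflops_fp32(4096, 1.8)  # ~14.7 TFLOPS
fp16 = fp32 * 2                     # packed half precision doubles the rate
fp64 = fp32 / 2                     # Vega 20 runs FP64 at half the FP32 rate

print(f"FP32 {fp32:.1f} | FP16 {fp16:.1f} | FP64 {fp64:.1f} TFLOPS")
```

The 2:1 FP32-to-FP64 ratio is what sets these cards apart from consumer Vega, which runs FP64 at 1/16 rate.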
The MI50 is a little less powerful, but with 16GB of HBM2, 53.6 TOPS INT8, up to 26.8 TFLOPS FP16, 13.4 TFLOPS FP32, and 6.7 TFLOPS FP64, it is no slouch.
With two Infinity Fabric links per GPU delivering up to 200 GB/s of peer-to-peer bandwidth, up to four GPUs can be arranged in a hive ring configuration, and eight-GPU servers can combine two such hives with the help of the new ROCm 2.0 software.
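A quick sketch of why two links per GPU yields a four-GPU ring. The ~100 GB/s per-link figure is an assumption inferred from the quoted 200 GB/s peer-to-peer total divided over two links; AMD's materials do not break it down this way:

```python
# Hedged sketch: two Infinity Fabric links per GPU form a 4-GPU ring ("hive").
# LINK_BW_GBPS is an ASSUMPTION: 200 GB/s total / 2 links per GPU.

NUM_GPUS = 4
LINK_BW_GBPS = 100

# In a ring, each GPU uses its two links to reach its two neighbors.
ring = {g: [(g - 1) % NUM_GPUS, (g + 1) % NUM_GPUS] for g in range(NUM_GPUS)}

per_gpu_p2p = len(ring[0]) * LINK_BW_GBPS  # 2 links x 100 GB/s = 200 GB/s
print(ring, per_gpu_p2p)
```

With only two links per GPU, a ring is the natural topology; an eight-GPU box then needs two separate hives bridged over PCIe, which is presumably where ROCm 2.0's multi-GPU support comes in.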
Expect to see AMD in more HPC servers early in the new year, when these cards begin shipping.