Podcast #521 - Zen 2, 7nm Vega, SSD Vulnerabilities, and more!

Subject: General Tech | November 8, 2018 - 01:54 PM |
Tagged: Zen 2, xeon, Vega, rome, radeon instinct, podcast, MI60, Intel, EPYC, clx-ap, chiplet, cascade lake, amd, 7nm

PC Perspective Podcast #521 - 11/08/18

Join us this week for discussion on AMD's new Zen 2 architecture, 7nm Vega GPUs, SSD encryption vulnerabilities, and more!

You can subscribe to us through iTunes and you can still access it directly through the RSS page HERE.

The URL for the podcast is: http://pcper.com/podcast - Share with your friends!

Hosts: Jim Tanous, Jeremy Hellstrom, Josh Walrath, Allyn Malventano, Ken Addison, and Sebastian Peak

Peanut Gallery: Alex Lustenberg

Program length: 1:42:27

Podcast topics of discussion:

  1. Week in Review:
  2. Thanks to Casper for supporting our podcast! Save $50 on select mattresses at http://www.casper.com/pcper code pcper
  3. News items of interest:
  4. Picks of the Week:
    1. Jim: N7 Day! Amazon - Origin

Meet the AMD Radeon Instinct MI60 and MI50 accelerators

Subject: General Tech | November 6, 2018 - 03:42 PM |
Tagged: AMD Radeon Instinct, MI60, MI50, 7nm, ROCm 2.0, HPC, amd

If you haven't been watching AMD's launch of the 7nm Vega-based MI60 and MI50, you can catch up right here.


You won't be gaming with these beasts, but if you work on deep learning, HPC, cloud computing, or rendering apps, you might want to take a deeper look. The new PCIe 4.0 cards use HBM2 ECC memory and Infinity Fabric interconnects, offering up to 1 TB/s of memory bandwidth.
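To put that bandwidth figure in context, here is a quick back-of-the-envelope check. This is only a sketch under assumed parameters: AMD hasn't broken the number down this way, but a four-stack, 4096-bit HBM2 interface at a 2.0 Gbps per-pin data rate lines up with the quoted 1 TB/s.

```python
# Back-of-the-envelope HBM2 bandwidth check.
# Both figures below are assumptions consistent with the quoted 1 TB/s,
# not AMD-published specifications.
bus_width_bits = 4096   # four 1024-bit HBM2 stacks (assumption)
pin_speed_gbps = 2.0    # per-pin data rate in Gbps (assumption)

bandwidth_gb_s = bus_width_bits * pin_speed_gbps / 8
print(f"Peak memory bandwidth: {bandwidth_gb_s:.0f} GB/s")  # ~1024 GB/s, i.e. 1 TB/s
```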

The MI60 features 32GB of HBM2 with 64 Compute Units containing 4096 Stream Processors, which translates into 59 TOPS INT8, up to 29.5 TFLOPS FP16, 14.7 TFLOPS FP32, and 7.4 TFLOPS FP64. AMD claims it is currently the fastest double-precision PCIe card on the market; for comparison, the 16GB Tesla V100 offers 7 TFLOPS of FP64 performance.
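Those peak numbers all fall out of the shader count and clock speed. A minimal sketch of the arithmetic, assuming a roughly 1.8 GHz boost clock (back-solved from the FP32 figure, not an AMD-stated clock) and the usual Vega rate ratios:

```python
# Deriving the MI60's quoted peak rates from its shader count.
# The boost clock is an assumption inferred from the 14.7 TFLOPS FP32 figure;
# the FP16/FP64/INT8 multipliers follow Vega's usual 2x / 0.5x / 4x ratios.
stream_processors = 4096
boost_clock_ghz = 1.8   # assumption, back-solved from the FP32 number

fp32_tflops = 2 * stream_processors * boost_clock_ghz / 1000  # 2 ops per FMA
print(f"FP32: {fp32_tflops:.1f} TFLOPS")      # ~14.7
print(f"FP16: {fp32_tflops * 2:.1f} TFLOPS")  # ~29.5
print(f"FP64: {fp32_tflops / 2:.1f} TFLOPS")  # ~7.4
print(f"INT8: {fp32_tflops * 4:.1f} TOPS")    # ~59
```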


The MI50 is a little less powerful, but with 16GB of HBM2, 53.6 TOPS INT8, up to 26.8 TFLOPS FP16, 13.4 TFLOPS FP32, and 6.7 TFLOPS FP64, it is no slouch.


With two Infinity Fabric links per GPU delivering up to 200 GB/s of peer-to-peer bandwidth, up to four GPUs can be connected in a ring "hive" configuration, with two such hives in an eight-GPU server, managed by the new ROCm 2.0 software.
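For illustration, a minimal sketch of that ring topology, assuming each of a GPU's two Infinity Fabric links carries half of the quoted 200 GB/s (AMD only states the aggregate figure):

```python
# Sketch of a four-GPU Infinity Fabric ring "hive": each GPU uses its two
# links to connect to its two ring neighbors. The per-link bandwidth is an
# assumption (half of the quoted 200 GB/s peer-to-peer aggregate).
HIVE_SIZE = 4
LINK_BW_GB_S = 100  # per Infinity Fabric link (assumption)

def ring_neighbors(gpu: int, size: int = HIVE_SIZE) -> tuple[int, int]:
    """Return the two peers a GPU connects to in a ring of `size` GPUs."""
    return ((gpu - 1) % size, (gpu + 1) % size)

for gpu in range(HIVE_SIZE):
    left, right = ring_neighbors(gpu)
    print(f"GPU {gpu}: links to GPU {left} and GPU {right}, "
          f"{2 * LINK_BW_GB_S} GB/s aggregate peer bandwidth")
```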

Expect to see AMD in more HPC servers at the beginning of the new year, when the cards start shipping.

 

Source: AMD