Meet the AMD Radeon Instinct MI60 and MI50 accelerators

Subject: General Tech | November 6, 2018 - 03:42 PM |
Tagged: AMD Radeon Instinct, MI60, MI50, 7nm, ROCm 2.0, HPC, amd

If you haven't been watching AMD's launch of the 7nm, Vega-based MI60 and MI50, you can catch up right here.


You won't be gaming with these beasts, but if you work on deep learning, HPC, cloud computing, or rendering applications, you might want to take a deeper look.  The new PCIe 4.0 cards use HBM2 ECC memory and Infinity Fabric interconnects, offering up to 1 TB/s of memory bandwidth.

The MI60 features 32GB of HBM2 and 64 Compute Units containing 4096 Stream Processors, which translates into 59 TOPS INT8, up to 29.5 TFLOPS FP16, 14.7 TFLOPS FP32, and 7.4 TFLOPS FP64.  AMD claims it is currently the fastest double-precision PCIe card on the market; for comparison, the 16GB Tesla V100 offers 7 TFLOPS of FP64 performance.
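Those FP16/FP32/FP64 numbers line up neatly with the stream-processor count if you assume a peak clock of roughly 1.8 GHz (the clock isn't stated above, so treat it as an assumption); a quick back-of-the-envelope check:

```python
# Sanity check of the MI60's quoted throughput figures.
# Assumes a peak engine clock of ~1.8 GHz, which is not stated in the article.
stream_processors = 4096
peak_clock_ghz = 1.8          # assumed peak clock

# Each stream processor can do one fused multiply-add (2 ops) per cycle at FP32.
fp32_tflops = stream_processors * 2 * peak_clock_ghz / 1000
fp16_tflops = fp32_tflops * 2   # packed math doubles the FP16 rate
fp64_tflops = fp32_tflops / 2   # Vega 20 runs FP64 at half the FP32 rate

print(f"FP32: {fp32_tflops:.1f} TFLOPS")  # ~14.7, matching AMD's figure
print(f"FP16: {fp16_tflops:.1f} TFLOPS")  # ~29.5
print(f"FP64: {fp64_tflops:.1f} TFLOPS")  # ~7.4
```

The same arithmetic at a slightly lower clock and fewer compute units reproduces the MI50's numbers below.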


The MI50 is a little less powerful, but with 16GB of HBM2, 53.6 TOPS INT8, up to 26.8 TFLOPS FP16, 13.4 TFLOPS FP32, and 6.7 TFLOPS FP64, it is no slouch.


With two Infinity Fabric links per GPU, the cards can deliver up to 200 GB/s of peer-to-peer bandwidth, and up to four GPUs can be configured in a hive ring, with two such hives in an eight-GPU server, with the help of the new ROCm 2.0 software.

Expect to see AMD in more HPC servers starting at the beginning of the new year, when the cards begin shipping.


Source: AMD