CES 2018: AMD teases 7nm Vega for machine learning in 2018

Subject: Graphics Cards | January 8, 2018 - 12:00 AM
Tagged: Vega, CES 2018, CES, amd, 7nm

Though it was just the most basic of teases, AMD confirmed at CES that it will have a 7nm-based Vega product sampling sometime in 2018. No mention of a shipping timeline, performance, or consumer variants was to be found.


This product will target the machine learning market, with hardware and platform optimizations key to that segment. AMD mentions "new DL Ops," or deep learning operations, but the company didn't expand on that. It could mean integrating Tensor Core-style compute units (as NVIDIA did with the Volta architecture), or it may be something more distinctive. AMD will also integrate new I/O, likely to compete with NVLink, and MxGPU support for dividing resources efficiently for virtualization.
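For context, NVIDIA's Volta tensor cores each perform a small matrix multiply-accumulate (D = A×B + C) on 4×4 tiles, with FP16 inputs and FP32 accumulation. Here is a minimal Python sketch of that operation; it is purely illustrative and says nothing about AMD's actual "DL Ops" design, which the company has not described:

```python
# One Volta-style tensor op on a 4x4 tile: D = A x B + C.
# On real hardware the inputs are FP16 and the accumulate runs at
# higher (FP32) precision; plain Python floats stand in for both here.
def mma_4x4(A, B, C):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) + C[i][j]
             for j in range(4)]
            for i in range(4)]

# Sanity check: identity x identity + zeros gives back the identity.
I = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]
Z = [[0.0] * 4 for _ in range(4)]
print(mma_4x4(I, I, Z))
```

A hardware unit does this entire tile operation per clock, which is where the large deep-learning throughput numbers come from.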


AMD did present a GPU "roadmap" at the tech day as well. I put that word in quotes because it is incredibly, and intentionally, vague. You might assume that Navi is being placed into the 2019 window, but it's possible it could show up in late 2018. AMD was also unable to confirm whether a 7nm Vega variant would arrive for gaming and consumer markets in 2018.

Source: PCPer

January 8, 2018 | 12:30 AM - Posted by LotsOfConnectivityFoSures (not verified)

"will integrate a new IO, likely to compete with NVLink"

That would be the Infinity Fabric, and at 7nm maybe a dual Vega GPU core on a single PCIe card to replace the current MI25 variant.

AMD does need its own tensor core units, but even some packed 16-bit math done on the 64-bit double-precision FP units, like it's already done on the 32-bit units, would help.
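The packed-math idea is simple to sketch: two FP16 values share one 32-bit word, and a single "packed" operation acts on both halves at once, doubling FP16 throughput per register. The emulation below uses Python's half-precision `struct` format and is only an illustration of the concept, not AMD hardware behavior:

```python
import struct

def pack2(h0, h1):
    """Pack two FP16 values into one 32-bit word (low half first)."""
    lo = struct.unpack('<H', struct.pack('<e', h0))[0]
    hi = struct.unpack('<H', struct.pack('<e', h1))[0]
    return lo | (hi << 16)

def unpack2(word):
    """Recover the two FP16 values from a packed 32-bit word."""
    lo = struct.unpack('<e', struct.pack('<H', word & 0xFFFF))[0]
    hi = struct.unpack('<e', struct.pack('<H', (word >> 16) & 0xFFFF))[0]
    return lo, hi

def packed_add(x, y):
    """One 'packed' op: add both FP16 halves of two 32-bit words at once."""
    x0, x1 = unpack2(x)
    y0, y1 = unpack2(y)
    return pack2(x0 + y0, x1 + y1)

w = packed_add(pack2(1.5, 2.25), pack2(0.5, 0.75))
print(unpack2(w))  # (2.0, 3.0) -- two adds from one packed operation
```

On real silicon both halves go through the ALU in the same cycle, which is why packed FP16 rates are quoted at twice the FP32 rate.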

All of AMD's Epyc and Vega products speak the Infinity Fabric, and that's already been compared to NVLink, if you watched those pre-Epyc-release videos where they were running those oil/seismic imaging workloads on some Epyc engineering samples at that trade show/conference.

The Infinity Fabric is what will allow Navi to be modular like Zen/Zeppelin, and even allows dual-Vega-die PCIe card variants for any Vega 20 dual-die SKUs that can communicate via the Infinity Fabric. So two big Vega dies using the Infinity Fabric for now, and when Navi arrives, its easier-to-fab, higher-yield smaller dies communicating via the Infinity Fabric.

January 8, 2018 | 07:50 AM - Posted by psuedonymous

Infinity Fabric won't be making its way to consumer desktops as a CPU-GPU interconnect; that would break compatibility with the PCIe that everyone else uses. Radeon Instinct on an Epyc board using IF as an interconnect is more likely (similar to the POWER boards using NVLink).

We might see IF as a secondary interconnect on PCIe cards, the way NVLink is used there.

January 8, 2018 | 12:50 PM - Posted by LipidsForGreyWithTheDafts (not verified)

You are a complete idiot, as the Infinity Fabric is already there, and that PCIe infrastructure is already used by other communication fabric protocols like CCIX and others that can make use of the existing PCIe PHY, running non-PCIe signaling over the existing PCIe wires/traces. OpenPower uses many different protocols, and IBM's systems deliver NVLink over their BlueLink PHY/protocol, so IBM uses its BlueLink, and BlueLink can speak the NVLink protocol on IBM's systems. IBM's Power9 motherboards support PCIe 4.0 as well, but BlueLink is what IBM uses for attached accelerators running whatever protocols.

"But with New CAPI, as the OpenCAPI port was called when the Power9 chip was unveiled in April at the OpenPower Summit and then talked about in greater detail at the Hot Chips conference in August, IBM is etching new “BlueLink” 25 Gb/sec ports on the chip specifically for coupling accelerators, network adapters, and various memories more tightly to the Power9 chip than was possible with the prior CAPI 1.0 and CAPI 2.0 protocols over PCI-Express 3.0 and 4.0, respectively. It is interesting to note that these same 25 Gb/sec BlueLink ports on the Power9 chip will be able to run the updated NVLink 2.0 protocol from Nvidia. NVLink provides high bandwidth, coherent memory addressing across teams of Tesla GPUs, with NVLink 1.0 supported in the current “Pascal” Tesla P100s and the faster NVLink 2.0 coming with the “Volta” Tesla V100s due next year." (1)

Infinity Fabric is in all of AMD's Zen microarchitecture based CPUs/APUs and Vega GPUs, and that's what AMD will be using going forward, on die and off die, for inter-die communication. Those PCIe traces are just wires, and that PCIe PHY can be used by other protocols too; it's all been in the PCI-SIG standards forever for that usage.

(1)

"Opening Up The Server Bus For Coherent Acceleration"

https://www.nextplatform.com/2016/10/17/opening-server-bus-coherent-acce...

January 8, 2018 | 05:15 PM - Posted by msroadkill612

Ouch, idiot is harsh, but you are quite right imo.

Fabric is a winner, so why wouldn't they stick to it? It's surely coming for teaming GPU processors and could happen any time AMD chooses to prioritise it.

A dual-GPU Fabric card would stomp on a 1080 Ti, and compete with Volta, using the current warts-and-all Vega.

If a processor product's competitiveness is perceived as challenged, simply add processors.

NVLink, afaik, is fundamentally different and suffers a major "unknown".

It seems to be a parallel/bypass-PCIe link specifically between high-end PCIe GPU cards (physically like the CrossFire harness), which links to a further PCIe card which is a CPU.

The NVLink-connected CPU is not x86 (ARM maybe?), which is an old story that sounds good but never seems to take off, even for savvy server farms. It's a new, untried direction for clients. It's a very closed/proprietary system.

Fabric sets out to act as a traffic policeman between as many diverse existing system resources as possible, and thrives on teaming multiple economical processors into champion products.

As you say, Fabric-linked resources can choose either Fabric links or PCIe links to interconnect. Fabric is compatible with PCIe, but resources which interconnect using Fabric links are at an advantage.

January 8, 2018 | 11:00 PM - Posted by ItsAllAboutThoseROPsForGamingAndNotMuchElse (not verified)

It's ROPs that fling the frames in that mad gaming FPS competition, and Nvidia has more base die designs with more available ROPs to throw out there and win against AMD's single Vega 10 base die design, which has a maximum of only 64 ROPs to fling out frames. So Vega 56/64 have 64 ROPs, as does the GP104-based GTX 1080 (64 ROPs). So JHH over at Nvidia can pull from that GP102 base die with its 96 available ROPs to bang out the 88-ROP GTX 1080 Ti.

So the GP102 base die design's 88 ROPs give a higher pixel fill rate than Vega 56/64 and the GP104-based GTX 1080, and the GTX 1080 Ti's 88 ROPs can really fling more frames out per second and give Bubba Gamer his little ePeen/frame-flinging champ that comes with bragging rights. This is because JHH over at Nvidia has the billions of dollars to fund the GP100, GP102, GP104, GP106, and GP108 base die designs! AMD has only one Vega 10 base die design that's so shader-heavy that the miners have driven the demand pricing of the Vega 64 above even that of the GTX 1080 Ti with the 88 ROPs that Bubba Gamer loves more than his own mama!
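The fill-rate comparison above checks out arithmetically: peak pixel fill rate is ROPs × core clock, since each ROP can emit one pixel per clock. Using the published boost clocks (approximate; real sustained clocks vary):

```python
# Peak pixel fill rate = ROPs x core clock (one pixel per ROP per clock).
# Boost clocks below are the published figures; treat them as approximate.
gpus = {
    "Vega 64":     (64, 1.546),  # ROPs, boost clock in GHz
    "GTX 1080":    (64, 1.733),
    "GTX 1080 Ti": (88, 1.582),
}
rates = {name: rops * ghz for name, (rops, ghz) in gpus.items()}
for name, rate in rates.items():
    print(f"{name:12s} {rate:6.1f} GPixels/s")
# The 88-ROP GTX 1080 Ti lands around 139 GPixels/s
# vs ~111 for the GTX 1080 and ~99 for Vega 64.
```

Note the GTX 1080 beats Vega 64 on fill rate despite the identical ROP count, purely on clock speed.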

JHH knows that Bubba Gamer wants bragging rights more than Bubba Gamer wants to even game on Nvidia's GPUs, and Bubba Gamer would be just as happy hanging that GTX 1080 Ti off of a gold chain and wearing it around town to show Vern down at the hog fat rendering plant that Bubba is someone special.
So JHH over at Nvidia can give Bubba those 88 ROPs to keep Bubba thinking that he is more than just another fat stinking failure. JHH does not really care what you do with his gaming GPUs after you purchase them, because that's more money in the bank; ditto for Lisa Su when a miner buys those Vega 56/64 SKUs for coin mining!

The retailers love the demand pricing that they get on those Vega 56/64 SKUs and even Polaris SKUs from the miners so JHH can really not be faulted for wanting to raise the prices on his Nvidia SKUs with all those Bubba Pleasing ROPs even higher because Bubba will pay for that bling/ego/ePeen regardless.

All that Bubba Gamer needs is ROPs!
All that Bubba Gamer needs is ROPs!
It's ROPs that are the thing that Bubba Gamer needs!
To give the FPS winning metrics for Bubba's little Bitty tiny tiny Peen.
All that Bubba Gamer needs is ROPs!
ROPs are things that can Really Frame Fling!
And ROPs make Bubba's GPU some damn nice bling!
All that Bubba Gamer needs is ROPs!
All that Bubba Gamer needs is ROPs!

January 14, 2018 | 05:13 PM - Posted by Anonymous#20173 (not verified)

bro are u autistic

