NVIDIA Teases Low Power, High Performance Xavier SoC That Will Power Future Autonomous Vehicles

Subject: Processors | October 1, 2016 - 06:11 PM |
Tagged: xavier, Volta, tegra, SoC, nvidia, machine learning, gpu, drive px 2, deep neural network, deep learning

Earlier this week at its first GTC Europe event in Amsterdam, NVIDIA CEO Jen-Hsun Huang teased a new SoC code-named Xavier that will be used in self-driving cars and feature the company's newest custom ARM CPU cores and Volta GPU. The new chip will begin sampling at the end of 2017 with product releases using the future Tegra (if they keep that name) processor as soon as 2018.


NVIDIA's Xavier is promised to be the successor to the company's Drive PX 2, a system that pairs two Tegra X2 SoCs with two discrete Pascal MXM GPUs on a single water-cooled platform. The claims are even more impressive considering that NVIDIA is not only promising to replace those four processors with a single chip, but reportedly to do so at 20W – less than a tenth of the Drive PX 2's TDP!

The company has not revealed all the nitty-gritty details, but it did tease out a few bits of information. The new processor will feature 7 billion transistors, will be built on a refined 16nm FinFET process, and will consume a mere 20W. It can process two 8K HDR video streams and can hit 20 TOPS (NVIDIA's own rating for INT8 deep learning operations).
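For readers unfamiliar with the INT8 rating: deep learning inference can trade float precision for 8-bit integer math with little accuracy loss, which is what that 20 TOPS figure counts. Below is a minimal sketch (not NVIDIA's implementation; the values are made up for illustration) of the idea: quantize float weights and activations to 8-bit integers, multiply, and accumulate in a wider integer before rescaling.

```python
# Sketch of INT8 inference math: symmetric quantization of a dot product.
def quantize(values, scale):
    """Scale, round, and clamp each value to the signed int8 range."""
    return [max(-128, min(127, round(v / scale))) for v in values]

# Hypothetical weights/activations for one tiny neuron
weights = [0.31, -0.87, 0.45, 0.12, -0.66, 0.94, -0.23, 0.58]
activations = [0.77, 0.05, -0.41, 0.89, 0.33, -0.72, 0.18, -0.95]

w_scale = max(abs(v) for v in weights) / 127
a_scale = max(abs(v) for v in activations) / 127
w_q = quantize(weights, w_scale)
a_q = quantize(activations, a_scale)

# int8 x int8 products accumulated in a wide integer, then rescaled to float
acc = sum(w * a for w, a in zip(w_q, a_q))
approx = acc * w_scale * a_scale
exact = sum(w * a for w, a in zip(weights, activations))
print(f"float dot: {exact:.4f}, int8 dot: {approx:.4f}")
```

The 8-bit version closely tracks the float result here, which is why inference-focused silicon can lean on narrow integer units to hit high TOPS numbers at low power.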

Specifically, NVIDIA claims that the Xavier SoC will use eight custom ARMv8 (64-bit) CPU cores (it is unclear whether these cores will be a refined Denver architecture or something else) and a GPU based on its upcoming Volta architecture with 512 CUDA cores. Also, in an interesting twist, NVIDIA is including a "Computer Vision Accelerator" on the SoC, though the company did not go into many details. This bit of silicon may explain how the ~300mm2 die with 7 billion transistors is able to match the 7.2 billion transistor Pascal-based Tesla P4 (2560 CUDA cores) graphics card at deep learning (tera-operations per second) tasks. Of course, incremental improvements from moving to Volta and a new ARMv8 CPU architecture on a refined 16nm FF+ process contribute as well.

             | Drive PX                           | Drive PX 2                                | NVIDIA Xavier                               | Tesla P4
CPU          | 2 x Tegra X1 (8 x A57 total)       | 2 x Tegra X2 (8 x A57 + 4 x Denver total) | 1 x Xavier SoC (8 x custom ARM + 1 x CVA)   | N/A
GPU          | 2 x Tegra X1 GPUs (Maxwell) (512 CUDA cores total) | 2 x Tegra X2 GPUs + 2 x Pascal GPUs | 1 x Xavier SoC GPU (Volta) (512 CUDA cores) | 2560 CUDA cores (Pascal)
TDP          | ~30W (2 x 15W)                     | 250W                                      | 20W                                         | up to 75W
Process Tech | 20nm                               | 16nm FinFET                               | 16nm FinFET+                                | 16nm FinFET
Transistors  | ?                                  | ?                                         | 7 billion                                   | 7.2 billion

For comparison, the currently available Tesla P4 based on its Pascal architecture has a TDP of up to 75W and is rated at 22 TOPS. This would suggest that Volta is a much more efficient architecture (at least for deep learning and half precision)! I am not sure how NVIDIA is able to match its GP104 with only 512 Volta CUDA cores, though their definition of a "core" could have changed and/or the CVA processor may be responsible for closing that gap. Unfortunately, NVIDIA did not disclose what it rates the Xavier at in TFLOPS, so it is difficult to compare, and it may not match GP104 at higher-precision workloads. It could be wholly optimized for INT8 operations rather than floating point performance. Beyond that I will let Scott dive into those particulars once we have more information!
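To put that efficiency claim in concrete terms, here is the back-of-envelope math using only the figures quoted above (Xavier's numbers are NVIDIA's claims, not measurements):

```python
# Back-of-envelope TOPS-per-watt comparison from the quoted figures.
xavier_tops, xavier_watts = 20, 20   # NVIDIA's claimed Xavier rating and power
p4_tops, p4_watts = 22, 75           # Tesla P4 INT8 rating at its maximum TDP

xavier_eff = xavier_tops / xavier_watts   # TOPS per watt
p4_eff = p4_tops / p4_watts

print(f"Xavier: {xavier_eff:.2f} TOPS/W, Tesla P4: {p4_eff:.2f} TOPS/W, "
      f"ratio: {xavier_eff / p4_eff:.1f}x")
```

If the claims hold, Xavier would deliver roughly 3.4 times the INT8 operations per watt of the P4, though the CVA and any INT8-only specialization muddy a direct architecture-to-architecture comparison.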

Xavier is more of a teaser than anything, and the chip could very well change dramatically and/or not hit the claimed performance targets. Still, it sounds promising, and it is always nice to speculate over roadmaps. It is an intriguing chip and I am ready for more details, especially on the Volta GPU and just what exactly that Computer Vision Accelerator is (and whether it will be easy to program for). I am a big fan of the "self-driving car" and I hope that it succeeds. The push certainly looks set to continue as Tesla, VW, BMW, and other automakers test the limits of what is possible and plan future cars that will include smart driving assists and even cars that can drive themselves. The more local computing power we can throw at automobiles the better: while massive datacenters can be used to train the neural networks, local hardware to run them and make decisions is necessary (you don't want internet latency contributing to the decision of whether or not to brake!).

I hope that NVIDIA's self-proclaimed "AI Supercomputer" turns out to be at least close to the performance they claim! Stay tuned for more information as it gets closer to launch (hopefully more details will emerge at GTC 2017 in the US).

What are your thoughts on Xavier and the whole self-driving car future?

Source: NVIDIA


October 2, 2016 | 06:07 AM - Posted by Drazen (not verified)

Again nVidia! Since Tegra 1, only "tuned" numbers with no real results behind them.
As explained in the text, a lot of fancy figures without any explanations or real measurements. We will see, but it will probably be the same as before.

October 2, 2016 | 07:13 AM - Posted by JohnGR

They are very serious about cars (and investors see it, with Nvidia's share price heading for $70), and we also have the Nintendo NX using a Tegra. 2017 is going to be a very different year for Nvidia's Tegra business. It will probably skyrocket.

October 2, 2016 | 08:01 AM - Posted by Anonymous Nvidia User (not verified)

Yeah. Tegra might finally be making some profit for Nvidia. Also, it's just a rumor right now, but Samsung might be interested in getting Tegra for their phones. This is somewhat ironic since it seems like Samsung was using its clout to prevent Nvidia from entering the phone and tablet market.

I can see self-driving cars maybe preventing some tragedies related to medical emergencies behind the wheel. If someone passes out or falls asleep behind the wheel, the car could take over. Also, every drunk would want one. Can they get a DUI if the car was doing the driving? Law enforcement could focus on real crime instead of traffic. They would get way less fine money as a result of self-driving cars. Law enforcement would probably hate self-driving cars for that reason.

October 2, 2016 | 10:41 AM - Posted by Anonymous (not verified)

Unless Nvidia comes forward with as much info on its custom ARM cores (Denver? or other) as the x86 CPU/APU makers provide on their CPUs/APUs, Nvidia is not going to see much consumer business with this new design. It looks like this SKU may be only for the auto/auto-AI market, with the GPU cores perhaps having only 8-bit, or mostly 8-bit (for AI tasks), functionality to save on power usage. That 20 watt figure probably means no 32-bit and maybe little 16-bit, with mostly 8-bit resources to get down to that low power metric! More information is required, or this will be treated like the other Tegra offerings, with not so much attention from the consumer side.

October 2, 2016 | 11:12 AM - Posted by Pixy Misa (not verified)

Interesting. That's 40 billion operations per second per core. There's no way it's using the same approach as Pascal; it would have to run at 5GHz.

But a 32x32 multiplier has 16 times the hardware of an 8x8 unit, not 4 times, so if you design it to wring out the maximum throughput on 8-bit ops you can get 20 TOPS from 512 cores at ~1.2GHz. That would put FP performance around 1.25 TFLOPS, which is in line with the perf/watt of the X1 and P4.
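[Ed: the commenter's arithmetic checks out. Taking the ~1.2 GHz clock as their assumption, the claimed 20 TOPS implies roughly 33 int8 operations per core per cycle, which is plausible with packed 8-bit SIMD/multiply-accumulate units:]

```python
# Quick check of the per-core throughput the 20 TOPS claim implies.
tops = 20e12      # NVIDIA's claimed INT8 rating, in ops/s
cores = 512       # Volta CUDA cores on Xavier
clock = 1.2e9     # the ~1.2 GHz clock assumed in the comment above

per_core = tops / cores        # ops per second per core (~39 billion)
per_cycle = per_core / clock   # ops per core per cycle (~33)
print(f"{per_core / 1e9:.1f} Gops/s per core, {per_cycle:.1f} ops/core/cycle")
```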

October 2, 2016 | 06:55 PM - Posted by Tim Verry

Interesting, yeah, they have to be doing something different with Volta, and on top of that this chip is probably further specialized just for the INT8 ops that they need, and not the single- or double-precision floating point that consumer and HPC cards feature.

October 2, 2016 | 03:21 PM - Posted by Anonymous (not verified)

I think the problem with Tegra is that they don't release the product on time.
The press release is impressive at the time of the announcement, but by the time they release the product, everyone else has caught up to those specs (and some may even surpass them).
