NVIDIA is using the Consumer Electronics Show to launch the Drive PX 2, its latest hardware aimed at autonomous vehicles. Several NVIDIA products combine to form the company's self-driving "end to end solution," including DIGITS, DriveWorks, and the Drive PX 2 hardware, which together train, optimize, and run the neural network software that NVIDIA hopes will be the brains of future self-driving cars.
The Drive PX 2 hardware is the successor to the Tegra-powered Drive PX released last year. The Drive PX 2 represents a major jump in computational power, with 12 CPU cores and two discrete "Pascal"-based GPUs. NVIDIA has not revealed the full specifications yet, but it has shared certain details. The board pairs two Tegra SoCs with two liquid-cooled GPUs. The liquid cooling consists of a large metal block with copper tubing winding through it; the tubing passes into what appear to be external connectors for attaching the board to a complete cooling loop (an exterior radiator, pump, and reservoir).
There are a total of 12 CPU cores, including eight ARM Cortex-A57 cores and four "Denver" cores. The discrete graphics are built on a 16nm FinFET process and use the company's upcoming Pascal architecture. The total package will draw a maximum of 250 watts and will offer up to 8 TFLOPS of computational horsepower and 24 trillion "deep learning operations per second." That last number refers to the specialized deep learning instructions the hardware can process per second, which sounds like an impressive amount of throughput for making connections and analyzing data in order to classify it. According to NVIDIA, the Drive PX 2 is 10 times faster than its predecessor at running these specialized instructions and has nearly 4 times the computational horsepower in TFLOPS.
Similar to the original Drive PX, the driving AI platform can accept and process the inputs of up to 12 video cameras, and it can also handle LiDAR, RADAR, and ultrasonic sensors. NVIDIA compared the Drive PX 2 to the TITAN X: 2,800 AlexNet images processed per second versus the consumer graphics card's 450. That may not be the best comparison, but it does make the new hardware look promising.
Neural networks and machine learning are at the core of what makes autonomous vehicles possible, along with hardware powerful enough to take in a multitude of sensor data and process it quickly. On the software side, the DriveWorks development kit includes specialized instructions and a neural network that can detect objects based on sensor input(s), identify and classify them, determine the positions of objects relative to the vehicle, and calculate the most efficient path to the destination.
Specifically, in the press release NVIDIA stated:
"This complex work is facilitated by NVIDIA DriveWorks™, a suite of software tools, libraries and modules that accelerates development and testing of autonomous vehicles. DriveWorks enables sensor calibration, acquisition of surround data, synchronization, recording and then processing streams of sensor data through a complex pipeline of algorithms running on all of the DRIVE PX 2's specialized and general-purpose processors. Software modules are included for every aspect of the autonomous driving pipeline, from object detection, classification and segmentation to map localization and path planning."
DIGITS is the platform used to train the neural network that then runs on the Drive PX 2 hardware. The software is purportedly improving in both accuracy and training time: NVIDIA achieved a 96% accuracy rating at identifying traffic signs (based on the traffic sign database from Ruhr University Bochum) after a training session lasting only four hours, as opposed to training times of days or even weeks.
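The train-then-evaluate loop described here (train a classifier on a labeled sign database, then measure its accuracy on held-out examples) can be illustrated with a toy model. This is a hedged sketch using synthetic data and a nearest-centroid classifier, not DIGITS or a real neural network; all names and numbers are invented:

```python
import random

random.seed(42)

# Synthetic "traffic signs": each class is a cluster of 2-D feature vectors.
CLASS_CENTERS = {"stop": (0.0, 0.0), "yield": (5.0, 5.0), "speed_limit": (0.0, 5.0)}

def make_samples(n_per_class):
    samples = []
    for label, (cx, cy) in CLASS_CENTERS.items():
        for _ in range(n_per_class):
            point = (cx + random.gauss(0, 0.5), cy + random.gauss(0, 0.5))
            samples.append((point, label))
    return samples

def train(samples):
    # "Training" here just computes the mean feature vector per class.
    sums, counts = {}, {}
    for (x, y), label in samples:
        sx, sy = sums.get(label, (0.0, 0.0))
        sums[label] = (sx + x, sy + y)
        counts[label] = counts.get(label, 0) + 1
    return {l: (sx / counts[l], sy / counts[l]) for l, (sx, sy) in sums.items()}

def predict(model, point):
    # Classify by nearest class centroid (squared Euclidean distance).
    return min(model, key=lambda l: (model[l][0] - point[0]) ** 2
                                    + (model[l][1] - point[1]) ** 2)

def accuracy(model, samples):
    correct = sum(1 for point, label in samples if predict(model, point) == label)
    return correct / len(samples)

model = train(make_samples(50))
test_set = make_samples(20)
print(f"accuracy: {accuracy(model, test_set):.0%}")
```

Because the synthetic clusters are well separated, accuracy lands near 100%; the real difficulty NVIDIA is addressing is getting comparable numbers on messy real-world imagery, and doing the training in hours rather than weeks.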
NVIDIA claims that the initial Drive PX has been picked up by over 50 development teams (automakers, universities, software developers, et al) interested in autonomous vehicles. Early access to development hardware is expected towards the middle of the year, with general availability of final hardware in Q4 2016.
The new Drive PX 2 is getting a serious hardware boost with the inclusion of two dedicated graphics processors (the original Drive PX was based around two Tegra X1 SoCs). That should allow automakers to really push what's possible in real time and bring the self-driving car a bit closer to reality and final (self) drive-able products. I'm excited to see that vision come to fruition, and I'm looking forward to seeing what this improved hardware will enable in the auto industry!
PC Perspective's CES 2016 coverage is sponsored by Logitech.
Follow all of our coverage of the show at https://pcper.com/ces!
Nvidia DriveWorks? WTF!!! Is this like their GameWorks?
Where if you have all the features enabled your car has a top speed of 25mph and you need to disable all the Nvidia Features to get back to 60mph.
I like funny comments!!! and you did one right there!!!
But, nvidia is the King, you can like that or not, but they are!!!
They will not remain king for long if they don't stop gimping the compute resources in their GPUs! And not having fully in-hardware GPU processor thread dispatch and management in Nvidia's GPUs will result in underutilized and idle GPU execution resources, with calls forced to wait in the queue, unable to be dispatched efficiently to the GPU's cores/execution units. Pascal had better have that fully in-hardware GPU processor thread dispatch and management, or they will be behind for VR and driverless car systems.
"The liquid cooling consists of a large metal block with copper tubing winding through it and then passing into what looks to be external connections for connecting to an exterior radiator and reservoir and completely the loop. There are a total of 12 CPU cores including eight ARM Cortex A57 cores and four "Denver" cares."
[ for connecting to an exterior radiator and reservoir and completely the loop ]
Needs reworded.
[ four "Denver" cares." ]
s/cares/cores
Heh, I blame the cold medicine ;). Thanks, I've corrected the typos.
I don’t see this going anywhere, can’t fault the ambition tho
Good for them, I don’t see any other company coming up with similar hardware to install in cars/self-driving cars etc.
When it comes to neural networks and supercomputers, IBM has the latest chip, called TrueNorth. Running deep learning algorithms on a GPU is inefficient compared to an array of chips that works like a brain. TrueNorth doesn’t require any cooling and consumes very little power, plus it can have more neurons and synapses than any GPU can ever run. NVidia has to develop its own version of TrueNorth if it wants to shine in the neural network domain.
I don’t see them trying to sell this to car manufacturers, do you?
NVidia is already partnered with most major car manufacturers:
http://www.nvidia.ca/object/automotive-partner-innovation.html
No but IBM could license the technology to others that can, and IBM has been Tops in new patents awarded for more than 2 decades. I’m sure that those IBM neural network chips will be used in its Watson line of analytical supercomputers for medical, business, science, and other analytics! IBM could very well sell its TrueNorth chips to others who could use them for automotive control devices or license the IP.
Nvidia is not the only supplier of GPU/CPU compute for the auto industry! Imagination Technologies will also be looking to bring MIPS-based CPUs paired with its PowerVR GPUs to the automotive market players, to compete with any ARM/Mali or ARM(custom)/Nvidia GPU based offerings. Do not rule out AMD from this market either, especially through AMD’s semi-custom division, as it already makes products for the console makers and the embedded display market, and AMD’s new Radeon Technologies Group is looking at GPGPU compute uses just as much as it is looking at graphics compute. There may be some APUs on an interposer in cars, with plenty of HBM and connections between CPU and GPU that are much wider and faster than even PCI/NVLink.
NVidia works with the deep learning software engineers to design their hardware specifically for this, so it’s hardly inefficient.
They are partnered with most of the major auto manufacturers.
NVidia isn’t designing a human brain; they are designing computer hardware optimized for the specific task of understanding the world as digitized by the sensors.
TrueNorth requires new ways to design software and is an educational tool not intended for the use case NVidia is involved in.
And the human brain is best at the very processes you describe, so those neural networking chips for visual recognition will be very popular. I’ll bet that Google is evaluating IBM’s TrueNorth chips for driverless car applications, just as Google is evaluating Power8 chips from OpenPOWER for its servers! The human brain has the best performance per watt of any computing device, thanks to those brain neurons!
Automotive electronics requires an operational temperature range up to a minimum of 125 deg C. How could they achieve this with 250W and liquid cooling? No problem at home, but a car sitting in the summer sun heats up to 70-80 deg C!?
Should we also remember some of nVidia’s past announcements regarding Tegra? They predicted the death of everybody else and Tegras in every device. And? We can count the devices running Tegras on one hand.
This might make Samsung richer. Didn’t Nvidia infringe on 3 patents in building their SoCs?
It’s almost like Nvidia does not think Intel will have a solution of their own for the automotive industry, and with their deep pockets they will outbid Nvidia.