Subject: General Tech | December 13, 2018 - 01:15 PM | Jeremy Hellstrom
Tagged: nvidia, machine learning, jetson, AGX Xavier
NVIDIA claims their newly announced Jetson AGX Xavier SoC can provide up to 32 trillion operations per second for specific tasks while requiring a mere 10W to do so. The chips are designed for image processing and recognition, along with all those other 'puter learnin' things you would expect, and chances are a device will have several of these chips working in tandem, which offers a lot of processing power. It is already being used for real-time monitoring of DNA sequencing and will be installed in car manufacturing lines in Japan.
The Inquirer points out that this performance comes at a cost: currently $1,100 per unit, as long as you are buying 1,000 of them or more.
"Essentially a data wrangling server plonked onto a silicon package, Jetson AGX Xavier is designed to handle all the tech and processing that autonomous things need to go about their robot lives, such as image processing and computer vision and the inference of deep learning algorithms."
Here is some more Tech News from around the web:
- RISC-V Will Stop Hackers Dead From Getting Into Your Computer @ Hackaday
- Small American town rejects Comcast – while ISP reps take issue with your El Reg vultures @ The Register
- In a Test, 3D Model of a Head Was Able To Fool Facial Recognition System of Several Popular Android Smartphones @ Slashdot
- Russia's cutting edge robot turns out to be bloke in costume @ The Inquirer
- Asustek president Jerry Shen to leave in major business revamp @ DigiTimes
- Julius Lilienfeld and the First Transistor @ Hackaday
- Ticketmaster tells customer it's not at fault for site's Magecart malware pwnage @ The Register
- The Worst CPU & GPU Purchases of 2018 @ Techspot
- Hands-on: Switch’s NES controllers offer unmatched old-school authenticity @ Ars Technica
- Standing Desk Starter Guide: Some Dos and Don'ts @ Techspot
Subject: General Tech | June 4, 2018 - 04:14 PM | Tim Verry
Tagged: nvidia, ai, robotics, machine learning, machine vision, jetson, xavier
NVIDIA launched a new platform for programming and training AI-powered robots called NVIDIA Isaac. The platform is based around the company’s Xavier SoC and supported by Isaac Robotics Software, which includes an Isaac SDK with accelerated libraries, NVIDIA-developed Isaac IMX algorithms, and a virtualized training and testing environment called Isaac SIM.
According to NVIDIA, Isaac will enable a new wave of machines and robotics powered by artificial intelligence aimed at manufacturing, logistics, agriculture, construction, and other industrial and infrastructure industries. Using the Jetson Xavier hardware platform for processing along with a suite of sensors and cameras, Isaac-powered robots will be capable of accurately analyzing their environment and their spatial position within it, allowing them to adapt to obstacles and work safely in hazardous areas and/or alongside human workers.
NVIDIA notes that its new Jetson Xavier platform is 10 times more energy efficient while offering 20 times more compute performance than the Jetson TX2. It seems that NVIDIA has been able to juice up the chip since it was last teased at GTC Europe, with it now rated at up to 30 TOPS and featuring 9 billion transistors. The 30W module (it can also operate in 10W and 15W modes) combines a 512-core Volta GPU with Tensor cores, two NVDLA deep learning accelerators, an 8-core 64-bit ARM CPU (8MB L2 + 4MB L3), and accelerators for image, vision, and video inputs. The Jetson Xavier can handle up to 16 camera inputs and supports sensor inputs through GPIO and other specialized interfaces. For I/O, it supports three 4K60 display outputs, PCI-E 4.0, 10 Gbps USB 3.1, USB 2.0, Gigabit Ethernet, UFS, UART, SD, I2S, I2C, SPI, and CAN.
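As a back-of-the-envelope consistency check on those claims, here is a hedged sketch using only the figures quoted above; the implied TX2 numbers are inferences from the "20x compute / 10x efficiency" claims, not official specifications:

```python
# Consistency check of NVIDIA's Jetson Xavier claims, using only
# figures from the article. The implied TX2 values are inferred
# from the "20x compute / 10x efficiency" claims, not NVIDIA specs.

xavier_tops = 30.0    # rated peak, TOPS
xavier_watts = 30.0   # full-power module mode, W

xavier_eff = xavier_tops / xavier_watts        # TOPS per watt
implied_tx2_tops = xavier_tops / 20.0          # "20x more compute"
implied_tx2_eff = xavier_eff / 10.0            # "10x more efficient"
implied_tx2_watts = implied_tx2_tops / implied_tx2_eff

print(f"Xavier efficiency:   {xavier_eff:.1f} TOPS/W")
print(f"Implied TX2 compute: {implied_tx2_tops:.1f} TOPS")
print(f"Implied TX2 power:   {implied_tx2_watts:.0f} W")
```

The implied ~15 W figure happens to line up with the Jetson TX2's published power envelope, so the two marketing claims are at least internally consistent.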
The virtual world simulation with Jetson Xavier in-the-loop testing sounds interesting; if it works as described, it would help accelerate development of software for the promised smarter production lines, more efficient construction of homes and other infrastructure like bridges, and easier, more cost-effective home package delivery using adaptable, smarter robots.
The Isaac development platform will be priced at $1,299 and will be available starting in August to early access partners.
What are your thoughts on NVIDIA Isaac?
- GTC 2018: Nvidia and ARM Integrating NVDLA Into Project Trillium For Inferencing at the Edge
- NVIDIA Teases Low Power, High Performance Xavier SoC That Will Power Future Autonomous Vehicles
- NVIDIA Launches Jetson TX2 With Pascal GPU For Embedded Devices
Subject: Editorial | March 16, 2017 - 12:38 PM | Alex Lustenberg
Tagged: podcast, ryzen 5, ryzen, nvidia, mobileye, jetson, gtx 1080 ti, fcat vr, delidding
PC Perspective Podcast #441 - 03/16/17
Join us for NVIDIA GTX 1080 Ti, AMD Ryzen Scheduler Discussion, AMD Ryzen 5 Announcement, Intel Kaby Lake de-lidding, and more!
The URL for the podcast is: http://pcper.com/podcast - Share with your friends!
- iTunes - Subscribe to the podcast directly through the iTunes Store (audio only)
- Google Play - Subscribe to our audio podcast directly through Google Play!
- RSS - Subscribe through your regular RSS reader (audio only)
- MP3 - Direct download link to the MP3 file
Hosts: Ryan Shrout, Josh Walrath, Jeremy Hellstrom, Morry Teitelman, Ken Addison
Program length: 1:24:48
Subject: General Tech | November 12, 2015 - 02:46 AM | Tim Verry
Tagged: Tegra X1, nvidia, maxwell, machine learning, jetson, deep neural network, CUDA, computer vision
Nearly two years ago, NVIDIA unleashed the Jetson TK1, a tiny module for embedded systems based around the company's Tegra K1 "super chip." That chip was the company's first foray into CUDA-powered embedded systems capable of machine learning tasks such as object recognition and 3D scene processing, enabling things like accident avoidance and self-parking cars.
Now, NVIDIA is releasing an even more powerful kit called the Jetson TX1. This new development platform covers two pieces of hardware: the credit-card-sized Jetson TX1 module and a larger Jetson TX1 Development Kit that the module plugs into, providing plenty of I/O options and pin-outs. The dev kit can be used by software developers or for prototyping, while the module alone can be used in finalized embedded products.
NVIDIA foresees the Jetson TX1 being used in drones, autonomous vehicles, security systems, medical devices, and IoT devices coupled with deep neural networks, machine learning, and computer vision software. Devices would be able to learn from the environment in order to navigate safely, identify and classify objects of interest, and perform 3D mapping and scene modeling. NVIDIA partnered with several companies for proof-of-concepts including Kespry and Stereolabs.
Using the TX1, Kespry's drones were able to classify and track construction equipment moving around a construction site in real time. Because sites and weather conditions vary, the drone could not be pre-programmed for that exact environment; instead, machine learning and computer vision let the drone navigate the construction site, and a deep neural network identified and classified the type of equipment it saw using its cameras. Meanwhile, Stereolabs used high-resolution cameras and depth sensors to capture photos of buildings, then used software to reconstruct the 3D scene virtually for editing and modeling. You can find other proof-of-concept videos, including upgrades that make existing drones more autonomous, posted here.
From the press release:
"Jetson TX1 will enable a new generation of incredibly capable autonomous devices," said Deepu Talla, vice president and general manager of the Tegra business at NVIDIA. "They will navigate on their own, recognize objects and faces, and become increasingly intelligent through machine learning. It will enable developers to create industry-changing products."
But what about the hardware side of things? Well, the TX1 is a respectable leap in hardware and compute performance. Sitting at 1 Teraflops of rated (FP16) compute performance, the TX1 pairs four ARM Cortex A57 and four ARM Cortex A53 64-bit CPU cores with a 256-core Maxwell-based GPU. Definitely respectable for its size and low power consumption, especially considering NVIDIA claims the SoC can best the Intel Skylake Core i7-6700K in certain workloads (thanks to the GPU portion). The module further contains 4GB of LPDDR4 memory and 16GB of eMMC flash storage.
In short, while on-module storage has not increased, RAM has doubled, and compute performance has roughly tripled for FP16 and jumped by approximately 57% (512 vs. 326 GFLOPS) for FP32 versus the Jetson TK1's 2GB of LPDDR3 and 192-core Kepler GPU. The TX1 also uses a smaller process node at 20nm (versus 28nm), and the chip is said to use "very little power." Networking support includes 802.11ac and Gigabit Ethernet. The chart below outlines the major differences between the two platforms.
| | Jetson TX1 | Jetson TK1 |
|---|---|---|
| GPU (Architecture) | 256-core (Maxwell) | 192-core (Kepler) |
| CPU | 4 x ARM Cortex A57 + 4 x A53 | "4+1" ARM Cortex A15 "r3" |
| RAM | 4 GB LPDDR4 | 2 GB LPDDR3 |
| eMMC | 16 GB | 16 GB |
| Compute Performance (FP16) | 1 TFLOP | 326 GFLOPS |
| Compute Performance (FP32, via AnandTech) | 512 GFLOPS (AT's estimation) | 326 GFLOPS (NVIDIA's number) |
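The FP32 estimate above can be reproduced from the published core counts. A minimal sketch, assuming the usual peak-FLOPS formula (cores × 2 FLOPs per cycle for a fused multiply-add × clock) and a round ~1.0 GHz Maxwell clock, which is my assumption rather than a figure from the article:

```python
# Peak-FLOPS sketch for the two Jetson GPUs. The 1.0 GHz Maxwell
# clock is an assumed round number, not an official spec; the
# formula is the standard cores * 2 FLOPs/cycle (FMA) * clock_GHz.

def peak_gflops(cores, clock_ghz, flops_per_cycle=2):
    return cores * flops_per_cycle * clock_ghz

tx1_fp32 = peak_gflops(256, 1.0)   # 256 Maxwell cores @ ~1 GHz
tx1_fp16 = tx1_fp32 * 2            # Maxwell fp16x2: double-rate FP16

print(f"TX1 FP32: ~{tx1_fp32:.0f} GFLOPS")      # matches AT's 512 GFLOPS estimate
print(f"TX1 FP16: ~{tx1_fp16 / 1000:.1f} TFLOPS")  # matches NVIDIA's 1 TFLOP rating

# Working backward, the TK1's rated 326 GFLOPS implies a Kepler clock of:
tk1_clock = 326 / (192 * 2)
print(f"Implied TK1 clock: ~{tk1_clock:.2f} GHz")
```

The doubled FP16 rate comes from Maxwell's packed fp16x2 instructions, which is why NVIDIA's 1 TFLOP headline number is exactly twice AnandTech's FP32 estimate.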
The TX1 will run the Linux For Tegra operating system and supports the usual suspects of CUDA 7.0, cuDNN, and VisionWorks development software as well as the latest OpenGL drivers (OpenGL 4.5, OpenGL ES 3.1, and Vulkan).
NVIDIA is continuing to push for CUDA Everywhere, and the Jetson TX1 looks to be a more mature product that builds on the TK1. The huge leap in compute performance should enable even more interesting projects and bring more sophisticated automation and machine learning to smaller and more intelligent devices.
For those interested, the Jetson TX1 Development Kit (the full I/O development board with bundled module) will be available for pre-order today at $599 while the TX1 module itself will be available soon for approximately $299 each in orders of 1,000 or more (like Intel's tray pricing).
With CUDA 7, it is apparently possible for the GPU to be used for general-purpose processing as well, which may open up some doors that were not possible before in such a small device. I am interested to see what happens with NVIDIA's embedded device play and what kinds of automated hardware end up powered by the tiny SoC and its beefy graphics.