NVIDIA Launches Jetson TX2 With Pascal GPU For Embedded Devices

Subject: General Tech, Processors | March 12, 2017 - 05:11 PM
Tagged: pascal, nvidia, machine learning, iot, Denver, Cortex A57, ai

NVIDIA recently unveiled the Jetson TX2, a credit card sized compute module for embedded devices that has been upgraded quite a bit from its predecessor (the aptly named TX1).

Measuring 50mm x 87mm, the Jetson TX2 packs quite a bit of processing power and I/O, including an SoC with two 64-bit Denver 2 cores (2MB L2), four ARM Cortex A57 cores (2MB L2), and a 256-core GPU based on NVIDIA’s Pascal architecture. The TX2 compute module also hosts 8 GB of LPDDR4 (58.3 GB/s) and 32 GB of eMMC storage (SDIO and SATA are also supported). As for I/O, the Jetson TX2 uses a 400-pin connector to mate the compute module to the development board or final product, so the I/O available to end users will depend on the product it is used in. The compute module itself supports up to the following:

  • 2 x DSI
  • 2 x DP 1.2 / HDMI 2.0 / eDP 1.4
  • USB 3.0
  • USB 2.0
  • 12 x CSI lanes for up to 6 cameras (2.5 Gb/second/lane)
  • PCI-E 2.0:
    • One x4 + one x1 or two x1 + one x2
  • Gigabit Ethernet
  • 802.11ac
  • Bluetooth
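
Putting the camera interface in perspective, a quick back-of-the-envelope calculation (using NVIDIA's stated 2.5 Gb/s per CSI lane; the even two-lanes-per-camera split is an assumption for illustration):

```python
# Aggregate CSI-2 camera bandwidth on the Jetson TX2,
# assuming 2.5 Gb/s per lane across all 12 lanes.
LANES = 12
GBPS_PER_LANE = 2.5
CAMERAS = 6  # maximum simultaneous cameras per the spec

total_gbps = LANES * GBPS_PER_LANE       # total camera bandwidth
per_camera_gbps = total_gbps / CAMERAS   # if lanes are split evenly (2 per camera)

print(f"Aggregate CSI bandwidth: {total_gbps} Gb/s")        # 30.0 Gb/s
print(f"Per camera with 6 cameras: {per_camera_gbps} Gb/s")  # 5.0 Gb/s
```

That is plenty of headroom for several 1080p or 4K sensors, which is why NVIDIA pitches the module at multi-camera applications like robotics and smart cameras.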


The Jetson TX2 runs the “Linux for Tegra” operating system. According to NVIDIA, the Jetson TX2 can deliver up to twice the performance of the TX1, or match the TX1's performance at 7.5 watts for up to twice the power efficiency.
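NVIDIA's "2x efficiency" claim is just performance-per-watt arithmetic; a minimal sketch of the math (the ~15 W TX1 figure is an assumption for illustration, not a number from this article):

```python
# Illustrating the "same performance at half the power = 2x efficiency" claim.
# TX1 power draw at full performance is assumed to be ~15 W for this sketch.
tx1_perf, tx1_watts = 1.0, 15.0  # normalize TX1 performance to 1.0
tx2_perf, tx2_watts = 1.0, 7.5   # TX2 matching that performance at 7.5 W

tx1_eff = tx1_perf / tx1_watts   # perf per watt
tx2_eff = tx2_perf / tx2_watts

print(f"TX2 efficiency gain over TX1: {tx2_eff / tx1_eff:.1f}x")
```

The same numbers read the other way give the "up to 2x performance" figure: at the TX1's power budget, the TX2 has roughly twice the throughput to spend.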

The extra horsepower afforded by the faster CPU, updated GPU, and increased memory and memory bandwidth will reportedly enable smart end-user devices with faster facial recognition, more accurate speech recognition, and smarter AI and machine learning tasks (e.g. personal assistants, smart street cameras, smarter home automation, et al). Bringing more processing power to these types of Internet of Things devices locally is a good thing, as less reliance on the cloud potentially means more privacy (unfortunately, there is not much incentive for companies to make this type of product for the mass market, but you could use the TX2 to build your own).

Cisco will reportedly use the Jetson TX2 to add facial and speech recognition to its Cisco Spark devices. In addition to the hardware, NVIDIA offers SDKs and tools as part of JetPack 3.0. The JetPack 3.0 toolkit includes TensorRT, cuDNN 5.1, VisionWorks 1.6, CUDA 8, and support and drivers for OpenGL 4.5, OpenGL ES 3.2, EGL 1.4, and Vulkan 1.0.

The TX2 will enable better, stronger, and faster (well, I don't know about stronger heh) industrial control systems, robotics, home automation, embedded computers and kiosks, smart signage, security systems, and other connected IoT devices (that, for the love of all processing, are hopefully hardened and secured so they aren't used as part of a botnet!).

Interested developers and makers can pre-order the Jetson TX2 Development Kit for $599 with a ship date for US and Europe of March 14 and other regions “in the coming weeks.” If you just want the compute module sans development board, it will be available later this quarter for $399 (in quantities of 1,000 or more). The previous generation Jetson TX1 Development Kit has also received a slight price cut to $499.

Source: NVIDIA

March 12, 2017 | 06:23 PM - Posted by AJ Klein (not verified)

HDMI 2.9? Impressive!

March 12, 2017 | 10:14 PM - Posted by Anonymous (not verified)

Had the same reaction. Love those so called tech "journalists" who can barely copy/paste BS from PR release. But don't worry, Nvidia will finance your next trip to CES, and you will get free Geforce, everything is fine. Bunch of clowns.

March 12, 2017 | 11:07 PM - Posted by AJ Klein (not verified)

-1 internet point to this response.

March 12, 2017 | 11:44 PM - Posted by Anonymous (not verified)

You need to relax, loser.

March 13, 2017 | 12:08 AM - Posted by Tim Verry

Well, no, if I had copied and pasted, I would not have made a typo ;-). So you're welcome. Fixed though, sorry I fat fingered the number and didn't notice hehe. Damn touchscreens!

March 12, 2017 | 06:25 PM - Posted by Anonymous (not verified)

Get One and get Allyn to do some cache/core and other ping/ding/other stress-testing types of testing on those Denver2 cores. And while he is at it maybe the Same for Apple's A8 though A10 SKU variants, and Apple's A11 when that is out.

And do not forget the AMD K12 SKUs in late 2017/early 2018(?) also! Although I'd expect that AMD will be providing a better list of CPU core specifications on the K12 if that project sees the light of day!

All those custom micro-architectures that are engineered to run the ARMv8A ISA need some very detailed probing, because under the hood they are all different, with some having more CPU core execution resources than others, and most of the custom micro-architectural designs are better than ARM Holdings' reference designs on average! That ARM A72/reference core may be an exception for low-power usage in the 64-bit ARMv8A ISA market, ditto for the Mali/Bifrost GPU micro-architecture for low power/high performance and async compute.

March 12, 2017 | 06:36 PM - Posted by PeeJay (not verified)

I wonder how performance of the TX1/TX2 compares to the Nintendo Switch. I seem to remember the latter having some ostensible similarities - four A57 cores for CPU, 256 CUDA core GPU.

March 12, 2017 | 08:57 PM - Posted by Breadfish64 (not verified)

The TX2 also has two Denver2 cores which have much higher performance. The SoC in the Switch is more closely related to the Tegra X1, which had 4 A57s and 4 A53s and was based on the Maxwell GPU architecture instead of the newer Pascal arch. The Switch is also rumored to have considerably lower clock speeds, and the A53s might have been omitted in the Switch since they wouldn't see much use in gaming anyway.

March 12, 2017 | 10:16 PM - Posted by Anonymous (not verified)

It is barely better than the original Gameboy. It is Nintendo we are talking about. What did you expect? Great tech at a decent price? SNES days are long gone.

March 13, 2017 | 12:18 AM - Posted by Tim Verry

It should be quite a bit faster than the one in the Switch ;-). It irks me that Nintendo went with such an old Tegra when this one was so close!

March 13, 2017 | 01:05 PM - Posted by Mr. Mike P (not verified)

I completely agree. It's the one thought I had about the Switch even before this article. Why not go with the newer tech?!? The benefits in performance would have made for a much more compelling product. Also, raw materials price shouldn't be a factor. When you buy in volume the price for the chip would have been the same, maybe even lower considering the die shrink. The only reason I can think Nintendo went with the Maxwell is to hit a quarterly or annual earnings bogey, or maybe a corporate imperative to get away from the Wii U as quickly as possible. Such a lost opportunity!

March 13, 2017 | 01:16 PM - Posted by Anonymous (not verified)

"Why not go with the newer tech?!?"
Because Pascal is stripped of FP16 packed math, which cuts performance in half.

March 12, 2017 | 09:26 PM - Posted by Anonymous (not verified)

Can you pop these into the TX1 dev board?

March 13, 2017 | 12:17 AM - Posted by Tim Verry

I believe so. It appears they both use the same 400-pin connector and Jetsonhacks claims the boards are the same.

It appears he took the post down (the board might have been under NDA? Not sure I'm not under one though heh), but you can see a cached copy here: http://webcache.googleusercontent.com/search?q=cache:qHm5TwFdQ8kJ:www.jetsonhacks.com/2017/03/08/nvidia-jetson-tx2-development-kit-2/+&cd=3&hl=en&ct=clnk&gl=us

March 13, 2017 | 01:30 AM - Posted by nem (not verified)


March 13, 2017 | 02:15 AM - Posted by Anonymous (not verified)

Why is any remotely interesting ARM board more expensive than a more powerful x86 machine?

March 13, 2017 | 06:13 AM - Posted by Anonymous (not verified)

Any devkit for an IC below $1k is a steal. Remember that with devkits, you're not so much paying for the raw components as for the access to the vendor. E.g. if you have a problem with a consumer board, no matter how detailed your troubleshooting and bug reporting is, your vendor response will invariably be "hello sir, please do the needful and be turning off and on again". With a devkit, you end up talking to actual engineers and get fixes implemented.

March 13, 2017 | 09:49 AM - Posted by Anonymous (not verified)

And access to the software and SDK/SDK plug-ins, etc. I can't wait for maybe AMD to do something similar with its K12 (custom ARMv8A ISA micro-arch) once that project gets to market(?). Maybe AMD's open driver stack (not fully open yet) can be utilized for a device like this SKU.

I'd reayy live a good review of the Denver2 core to see what tweaks Nvidia applied in addition to the 16nm process node shrink(?)

Here is a link to the first Denver micro-arch, and that decoder (predecode) 8 wide is very interesting:

"Darrell Boggs, CPU Architecture
Co-authors: Gary Brown, Bill Rozas,
Nathan Tuck, K S Venkatraman"

March 13, 2017 | 09:54 AM - Posted by Anonymous (not verified)

Edit: reayy live

to: Really like

As in, I really must drink some coffee before posting!

There, in the proper place this time!

March 13, 2017 | 09:52 AM - Posted by Anonymous (not verified)

Edit: reayy live

to: Really like

As in, I really must drink some coffee before posting!

March 14, 2017 | 11:24 AM - Posted by fradsham (not verified)

I have been reviewing and like the Jetson TX1, but you cannot plan to do any bare-metal GPU programming, since Nvidia will not open source the GPU libraries for hardware hobbyists. You can bare-metal program the ARM processor, but no access to the GPU defeats any gains from purchasing the more expensive board.