Today's episode features special guest Denver Jetson

Subject: Processors | March 14, 2017 - 03:17 PM |
Tagged: nvidia, JetsonTX1, Denver, Cortex A57, pascal, SoC

Amongst the furor of the Ryzen launch, NVIDIA's new Jetson TX2 SoC was quietly sent out to reviewers, and today the NDA expired so we can see how it performs.  There are more Ryzen reviews below the fold, including Phoronix's Linux testing, if you want to skip ahead.  In addition to the specifications in the quote, you will find 8GB of 128-bit LPDDR4 offering 58.4 GB/s of memory bandwidth and 32GB of eMMC for local storage.  This Jetson runs JetPack 3.0 L4T, based on the Linux 4.4.15 kernel.  Phoronix tested its performance; see for yourself.


"Last week we got to tell you all about the new NVIDIA Jetson TX2 with its custom-designed 64-bit Denver 2 CPUs, four Cortex-A57 cores, and Pascal graphics with 256 CUDA cores. Today the Jetson TX2 is shipping and the embargo has expired for sharing performance metrics on the JTX2."

Here are some more Processor articles from around the web:

Processors

Source: Phoronix

NVIDIA Launches Jetson TX2 With Pascal GPU For Embedded Devices

Subject: General Tech, Processors | March 12, 2017 - 05:11 PM |
Tagged: pascal, nvidia, machine learning, iot, Denver, Cortex A57, ai

NVIDIA recently unveiled the Jetson TX2, a credit card sized compute module for embedded devices that has been upgraded quite a bit from its predecessor (the aptly named TX1).


Measuring 50mm x 87mm, the Jetson TX2 packs quite a bit of processing power and I/O including an SoC with two 64-bit Denver 2 cores with 2MB L2, four ARM Cortex A57 cores with 2MB L2, and a 256-core GPU based on NVIDIA’s Pascal architecture. The TX2 compute module also hosts 8 GB of LPDDR4 (58.3 GB/s) and 32 GB of eMMC storage (SDIO and SATA are also supported). As far as I/O, the Jetson TX2 uses a 400-pin connector to connect the compute module to the development board or final product and the final I/O available to users will depend on the product it is used in. The compute module supports up to the following though:

  • 2 x DSI
  • 2 x DP 1.2 / HDMI 2.0 / eDP 1.4
  • USB 3.0
  • USB 2.0
  • 12 x CSI lanes for up to 6 cameras (2.5 Gbps/lane)
  • PCI-E 2.0:
    • One x4 + one x1 or two x1 + one x2
  • Gigabit Ethernet
  • 802.11ac
  • Bluetooth

 

The Jetson TX2 runs the “Linux for Tegra” operating system. According to NVIDIA, the Jetson TX2 can deliver up to twice the performance of the TX1, or up to twice the power efficiency by matching the TX1's performance while drawing as little as 7.5 watts.

The extra horsepower afforded by the faster CPU, updated GPU, and increased memory and memory bandwidth will reportedly enable smart end-user devices with faster facial recognition, more accurate speech recognition, and smarter AI and machine learning tasks (e.g. personal assistants, smart street cameras, smarter home automation, et al). Bringing more processing power locally to these types of Internet of Things devices is a good thing, as less reliance on the cloud potentially means more privacy (unfortunately there is not as much incentive for companies to make this type of product for the mass market, but you could use the TX2 to build your own).

Cisco will reportedly use the Jetson TX2 to add facial and speech recognition to its Cisco Spark devices. In addition to the hardware, NVIDIA offers SDKs and tools as part of JetPack 3.0. The JetPack 3.0 toolkit includes TensorRT, cuDNN 5.1, VisionWorks 1.6, CUDA 8, and support and drivers for OpenGL 4.5, OpenGL ES 3.2, EGL 1.4, and Vulkan 1.0.

The TX2 will enable better, stronger, and faster (well, I don't know about stronger, heh) industrial control systems, robotics, home automation, embedded computers and kiosks, smart signage, security systems, and other connected IoT devices (which, for the love of all processing, should be hardened and secured so they aren't used as part of a botnet!).

Interested developers and makers can pre-order the Jetson TX2 Development Kit for $599 with a ship date for US and Europe of March 14 and other regions “in the coming weeks.” If you just want the compute module sans development board, it will be available later this quarter for $399 (in quantities of 1,000 or more). The previous generation Jetson TX1 Development Kit has also received a slight price cut to $499.


Source: NVIDIA

PCPer Live! GeForce GTX 1080 Ti Live Stream with Tom Petersen

Subject: General Tech, Graphics Cards | March 10, 2017 - 11:15 AM |
Tagged: video, tom petersen, pascal, nvidia, live, gtx 1080 ti, gtx, gp102, geforce

Our review of the GeForce GTX 1080 Ti 11GB graphics card is live and ready for consumption! Make sure you check it out before this afternoon's live stream!

Did you miss our GTX 1080 Ti Live Stream? Catch the replay below!

Ready your mind and body, it’s time for another GeForce GTX live stream hosted by PC Perspective’s Ryan Shrout and NVIDIA’s Tom Petersen. The general details about the GeForce GTX 1080 Ti graphics card are already official and based on the hype train and the response on social media, there is more than a little excitement.


On hand to talk about the new graphics card will be Tom Petersen, well known in our community. While the GTX 1080 Ti will be the flagship part of our live stream, we will also be diving into the world of VR performance evaluation and how the new FCAT VR tool will help reviewers and enthusiasts alike see where their systems stand in producing smooth, effective virtual reality gaming. We have done quite a few awesome live streams with Tom in the past; check them out if you haven't already.


NVIDIA GeForce GTX 1080 Ti and FCAT VR Live Stream

1pm PT / 4pm ET - March 9th

PC Perspective Live! Page

Need a reminder? Join our live mailing list!

The event will take place Thursday, March 9th at 4pm ET / 1pm PT at http://www.pcper.com/live. There you’ll be able to catch the live video stream as well as use our chat room to interact with the audience, asking questions for me and Tom to answer live. 

Tom has a history of being both informative and entertaining and these live streaming events are always full of fun and technical information that you can get literally nowhere else. Previous streams have produced news as well – including statements on support for Adaptive Sync, release dates for displays and first-ever demos of triple display G-Sync functionality. You never know what’s going to happen or what will be said!

This just in, fellow gamers: Tom is going to be providing a GeForce GTX 1080 Ti graphics card to give away during the live stream! We won't be able to ship it until the end of next week, but one lucky viewer of the live stream will be able to get their paws on the fastest graphics card we have ever tested!! Make sure you are scheduled to be here on March 9th at 1pm PT / 4pm ET!!


Win this beauty.

If you have questions, please leave them in the comments below and we'll look through them just before the start of the live stream. Of course you'll be able to tweet us questions @pcper and we'll be keeping an eye on the IRC chat as well for more inquiries. What do you want to know and hear from Tom or me?

So join us! Set your calendar for this coming Thursday at 4pm ET / 1pm PT and be here at PC Perspective to catch it. If you are a forgetful type of person, sign up for the PC Perspective Live mailing list that we use exclusively to notify users of upcoming live streaming events including these types of specials and our regular live podcast. I promise, no spam will be had!

NVIDIA Releases GeForce 378.78 Drivers

Subject: Graphics Cards | March 10, 2017 - 02:49 AM |
Tagged: nvidia, graphics drivers

Alongside the launch of the GeForce GTX 1080 Ti, NVIDIA has released a new graphics driver that, one, obviously supports the new card and, two, also rolls in a bunch of optimizations for DirectX 12 titles. The graphics vendor already announced the initiative at last week’s GDC, but it is now released and available for public use. 378.78 is also “Game Ready” for Ghost Recon Wildlands, although that’s mostly for Ansel support; most of the optimizations for Wildlands were pushed into the previous driver.


The advertised gains vary from title to title, but they claim that Rise of the Tomb Raider at 4K will jump from 20 FPS to 27 FPS. That can be viewed either as a frame rate gain of about 35%, or as an average frame time savings of roughly 13 ms on each and every frame. If that's what actual end users will see -- that's a lot!
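For anyone who wants to double-check that conversion, here is a minimal sketch (using NVIDIA's advertised FPS figures) of the frame-rate-to-frame-time arithmetic:

```cpp
// Quick sanity check on the advertised Rise of the Tomb Raider numbers:
// converting frame rates to per-frame render times.
#include <cstdio>

int main() {
    const double fps_before = 20.0;  // 4K result on the previous driver (per NVIDIA)
    const double fps_after  = 27.0;  // 4K result on 378.78

    const double ms_before = 1000.0 / fps_before;                     // 50.0 ms per frame
    const double ms_after  = 1000.0 / fps_after;                      // ~37.0 ms per frame
    const double gain_pct  = (fps_after / fps_before - 1.0) * 100.0;  // ~35%

    std::printf("Frame time: %.1f ms -> %.1f ms (%.1f ms saved per frame)\n",
                ms_before, ms_after, ms_before - ms_after);
    std::printf("Frame rate gain: %.0f%%\n", gain_pct);
    return 0;
}
```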

They also note improvements in Vulkan support, but without any hard numbers.

If you have a GeForce GTX 1050 Ti notebook, this driver is also said to fix a potential bluescreen bug that you may have been facing. You can pick it up from GeForce Experience or the NVIDIA website.

Source: NVIDIA

The GTX 1080 Ti reviews are here; the card not so much

Subject: Graphics Cards | March 9, 2017 - 01:53 PM |
Tagged: 1080 ti, geforce, gp102, gtx 1080 ti, nvidia, pascal

As you have probably noticed from our front page, today is the day we can see how the GTX 1080 Ti performs in reviewers' systems.  The unfortunate news is that you can't buy one yet, nor do we know when you will be able to spend the $699 it will cost to order one.  We can share the performance with you, though: once again NVIDIA's Ti model takes the top spot, outperforming even the $1200 TITAN X.  As for overclocking the reference model, while we have not had a chance to test any cards with third-party coolers on them, [H]ard|OCP was able to increase the GPU frequency by over 200MHz to 1967-1987MHz in game and push the memory to 12GHz, somewhat better than what Ryan was able to achieve.  Check out their full review here, with many more just below.


"NVIDIA is launching the fastest video card it offers for gaming today in the new $699 GeForce GTX 1080 Ti. We will take this video card and test it against the GeForce GTX 1080 and GeForce GTX TITAN X at 1440p and 4K resolutions to find out how it compares. Is it really faster than a $1200 GeForce GTX TITAN X Pascal?"

Here are some more Graphics Card articles from around the web:

Graphics Cards

Source: [H]ard|OCP
Manufacturer: NVIDIA

Flagship Performance Gets Cheaper

UPDATE! If you missed our launch day live stream, you can find the replay below:

It’s a very interesting time in the world of PC gaming hardware. We just saw the release of AMD’s Ryzen processor platform, which shook up the processor market for the first time in a decade; AMD’s next GPU architecture has officially taken on the Vega brand name; and the anticipation for the first high-end competitive part from AMD since Hawaii grows as well. AMD was seemingly able to take advantage of Intel’s slow pace of innovation on the CPU, and it is hoping to do the same to NVIDIA on the GPU. NVIDIA’s product line has been dominant in the mid-range and high-end gaming market since the 900-series, with the 10-series products further cementing that lead.


The most recent high end graphics card release came in the form of the updated Titan X based on the Pascal architecture. That was WAY back in August of 2016 – a full seven months ago! Since then we have seen very little change at the top end of the product lines and what little change we did see came from board vendors adding in technology and variation on the GTX 10-series.

Today we see the release of the new GeForce GTX 1080 Ti, a card that offers only a handful of noteworthy technological changes but instead shakes up the market through pricing adjustments, making its own performance more appealing and lowering the price of everything else.

The GTX 1080 Ti GP102 GPU

I already wrote about the specifications of the GPU in the GTX 1080 Ti when it was announced last week, so here’s a simple recap.

|  | GTX 1080 Ti | Titan X (Pascal) | GTX 1080 | GTX 980 Ti | TITAN X | GTX 980 | R9 Fury X | R9 Fury | R9 Nano |
|---|---|---|---|---|---|---|---|---|---|
| GPU | GP102 | GP102 | GP104 | GM200 | GM200 | GM204 | Fiji XT | Fiji Pro | Fiji XT |
| GPU Cores | 3584 | 3584 | 2560 | 2816 | 3072 | 2048 | 4096 | 3584 | 4096 |
| Base Clock | 1480 MHz | 1417 MHz | 1607 MHz | 1000 MHz | 1000 MHz | 1126 MHz | 1050 MHz | 1000 MHz | up to 1000 MHz |
| Boost Clock | 1582 MHz | 1480 MHz | 1733 MHz | 1076 MHz | 1089 MHz | 1216 MHz | - | - | - |
| Texture Units | 224 | 224 | 160 | 176 | 192 | 128 | 256 | 224 | 256 |
| ROP Units | 88 | 96 | 64 | 96 | 96 | 64 | 64 | 64 | 64 |
| Memory | 11GB | 12GB | 8GB | 6GB | 12GB | 4GB | 4GB | 4GB | 4GB |
| Memory Clock | 11000 MHz | 10000 MHz | 10000 MHz | 7000 MHz | 7000 MHz | 7000 MHz | 500 MHz | 500 MHz | 500 MHz |
| Memory Interface | 352-bit | 384-bit G5X | 256-bit G5X | 384-bit | 384-bit | 256-bit | 4096-bit (HBM) | 4096-bit (HBM) | 4096-bit (HBM) |
| Memory Bandwidth | 484 GB/s | 480 GB/s | 320 GB/s | 336 GB/s | 336 GB/s | 224 GB/s | 512 GB/s | 512 GB/s | 512 GB/s |
| TDP | 250 watts | 250 watts | 180 watts | 250 watts | 250 watts | 165 watts | 275 watts | 275 watts | 175 watts |
| Peak Compute | 10.6 TFLOPS | 10.1 TFLOPS | 8.2 TFLOPS | 5.63 TFLOPS | 6.14 TFLOPS | 4.61 TFLOPS | 8.60 TFLOPS | 7.20 TFLOPS | 8.19 TFLOPS |
| Transistor Count | 12.0B | 12.0B | 7.2B | 8.0B | 8.0B | 5.2B | 8.9B | 8.9B | 8.9B |
| Process Tech | 16nm | 16nm | 16nm | 28nm | 28nm | 28nm | 28nm | 28nm | 28nm |
| MSRP (current) | $699 | $1,200 | $599 | $649 | $999 | $499 | $649 | $549 | $499 |

The GTX 1080 Ti looks a whole lot like the TITAN X launched in August of last year. Based on the 12-billion-transistor GP102 chip, the new GTX 1080 Ti has 3,584 CUDA cores with a 1.60 GHz Boost clock. That gives it the same processor count as the Titan X but at slightly higher clock speeds, which should make the new GTX 1080 Ti faster by at least a few percentage points; it holds a 4.7% edge in base-clock compute capability. It has 28 SMs, 28 geometry units, and 224 texture units.


Interestingly, the memory system on the GTX 1080 Ti gets adjusted – NVIDIA has disabled a single 32-bit memory controller, giving the card a 352-bit wide bus and an odd-sounding 11GB memory capacity. The ROP count also drops to 88 units. Speaking of 11, the memory clock on the G5X implementation on the GTX 1080 Ti will now run at 11 Gbps, a boost available to NVIDIA thanks to a chip revision from Micron and improvements to equalization and reverse signal distortion.
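As a back-of-the-envelope check on how the narrower bus and faster memory add up to the 484 GB/s listed in the table above, here is a small sketch of the arithmetic (bus width in bytes multiplied by the per-pin data rate):

```cpp
// Back-of-the-envelope check of the GTX 1080 Ti's quoted 484 GB/s:
// bus width (bits) / 8 * per-pin data rate (Gbps) = bandwidth (GB/s).
#include <cstdio>

int main() {
    const double bus_bits     = 352.0;  // 11 of 12 32-bit controllers enabled
    const double rate_gbps    = 11.0;   // G5X per-pin data rate
    const double bandwidth_gb = bus_bits / 8.0 * rate_gbps;  // 484 GB/s

    std::printf("Memory bandwidth: %.0f GB/s\n", bandwidth_gb);
    return 0;
}
```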

The move from 12GB of memory on the GP102-based Titan X to 11GB on the GTX 1080 Ti is an interesting one, and it evokes memories of the GTX 970 fiasco, where NVIDIA disabled a portion of that card's memory controller but left the memory that would have resided on it on the board. The result, memory that behaved as 3.5GB at one speed and 500MB at another, was the wrong move to make, but releasing the GTX 970 with "3.5GB" of memory would have seemed odd too. NVIDIA is not making the same mistake, instead building the GTX 1080 Ti with 11GB out of the gate.

Continue reading our review of the NVIDIA GeForce GTX 1080 Ti 11GB graphics card!

NVIDIA, Microsoft, Ingrasys (Foxconn) Announce HGX-1

Subject: Systems | March 9, 2017 - 07:01 AM |
Tagged: nvidia, microsoft, hgx-1, GP100, dgx-1

When NVIDIA announced the Pascal architecture at last year’s GTC, they started with GP100, the chip initially available in their $129,000 DGX-1. That device contains eight of those “Big Pascal” GPUs connected together by NVIDIA's NVLink interconnect.


Now, almost a full year later, Microsoft, NVIDIA, and Ingrasys have announced the HGX-1 system. It, too, will contain eight GP100 GPUs, in the form of eight Tesla P100 accelerators. On the CPU side of things, Microsoft is planning to utilize the next generation of x86 processors, Intel Skylake (which we assume means Skylake-X) and AMD Naples, in these "Project Olympus" servers. Future versions could also integrate Intel FPGAs for an extra level of acceleration. ARM64 is another goal of theirs, but one for the more distant future.

At the same time, NVIDIA has also announced, through a single-paragraph statement, that they are joining the Open Compute Project. This organization contains several massive players in the data center market, spanning from Facebook to Rackspace to Bank of America.

Whenever it arrives, the HGX-1 will be intended for cloud-based AI computation. Four of these machines are designed to be clustered together at high bandwidth, which I estimate would provide north of 160 TeraFLOPS of double-precision (FP64) or 670 TeraFLOPS of half-precision (FP16) performance from the GPUs alone, depending on final clocks.
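To show where that estimate comes from, here is a rough sketch of the math, assuming four chassis of eight Tesla P100 (NVLink) accelerators and the published per-GPU peak rates; actual clocks in the HGX-1 may differ:

```cpp
// Rough estimate behind the "north of 160 TFLOPS FP64 / 670 TFLOPS FP16" figure:
// four HGX-1 chassis, eight Tesla P100 (NVLink) accelerators each.
// Per-GPU peak rates are the published P100 numbers; final clocks may differ.
#include <cstdio>

int main() {
    const int chassis = 4;
    const int gpus_per_chassis = 8;
    const double fp64_per_gpu = 5.3;   // TFLOPS, Tesla P100 (NVLink) peak
    const double fp16_per_gpu = 21.2;  // TFLOPS

    const int gpus = chassis * gpus_per_chassis;  // 32 GPUs total
    std::printf("FP64: %.0f TFLOPS, FP16: %.0f TFLOPS across %d GPUs\n",
                gpus * fp64_per_gpu, gpus * fp16_per_gpu, gpus);
    return 0;
}
```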

Source: NVIDIA

GamersNexus Tears Down a Nintendo Switch

Subject: Systems, Mobile | March 4, 2017 - 07:01 AM |
Tagged: Tegra X1, teardown, switch, nvidia, Nintendo

Here at PC Perspective, videos of Ryan and Ken dismantling consoles on their launch date were some of our most popular... ever. While we didn’t do one for the Nintendo Switch, GamersNexus did, and I’m guessing that a segment of our audience would be interested in seeing what the device looks like when dismantled.

Credit: GamersNexus

As he encounters each chip, he mentions what, if anything, is special about it based on its part number. For instance, the NVIDIA SoC is marked A2, which is apparently different from previous Maxwell-based Tegra X1 SoCs, but it's unclear how. From my perspective, I can think of three possibilities: NVIDIA made some customizations (albeit still on the Maxwell architecture) for Nintendo, NVIDIA had two revisions for their own purposes and Nintendo bought the A2, or the A2 shipped with NVIDIA's Maxwell-based Shield and my Google-fu is terrible.

Regardless, if you’re interested, it should be an interesting twenty-or-so minutes.

Source: GamersNexus

Linked Multi-GPU Arrives... for Developers

The Khronos Group has released the Vulkan 1.0.42.0 specification, which includes experimental (more on that in a couple of paragraphs) support for VR enhancements, sharing resources between processes, and linking similar GPUs. The spec was released alongside a LunarG SDK and NVIDIA drivers that fully implement these extensions; both are intended for developers, not gamers.

I would expect that the most interesting feature is experimental support for linking similar GPUs together, similar to DirectX 12’s Explicit Linked Multiadapter, which Vulkan calls a “Device Group”. The idea is that the physical GPUs hidden behind this layer can do things like share resources, such as rendering a texture on one GPU and consuming it in another, without the host code being involved. I’m guessing that some studios, like maybe Oxide Games, will decide to not use this feature. While it’s not explicitly stated, I cannot see how this (or DirectX 12’s Explicit Linked mode) would be compatible in cross-vendor modes. Unless I’m mistaken, that would require AMD, NVIDIA, and/or Intel restructuring their drivers to inter-operate at this level. Still, the assumptions that could be made with grouped devices are apparently popular with enough developers for both the Khronos Group and Microsoft to bother.
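To make the "device group" idea a bit more concrete, here is a minimal sketch of enumerating linked GPUs. It uses the names as they were later promoted into core Vulkan 1.1 (vkEnumeratePhysicalDeviceGroups); in the 1.0.42.0 spec discussed here, the equivalent entry points carried experimental KHX suffixes instead:

```cpp
// Minimal sketch: list the "device groups" a Vulkan driver exposes.
// Each group contains physical GPUs that can be wrapped in one logical device.
#include <vulkan/vulkan.h>
#include <cstdio>
#include <vector>

int main() {
    VkApplicationInfo app{VK_STRUCTURE_TYPE_APPLICATION_INFO};
    app.apiVersion = VK_API_VERSION_1_1;

    VkInstanceCreateInfo ci{VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO};
    ci.pApplicationInfo = &app;

    VkInstance instance = VK_NULL_HANDLE;
    if (vkCreateInstance(&ci, nullptr, &instance) != VK_SUCCESS) {
        std::printf("Failed to create Vulkan instance\n");
        return 1;
    }

    uint32_t groupCount = 0;
    vkEnumeratePhysicalDeviceGroups(instance, &groupCount, nullptr);

    std::vector<VkPhysicalDeviceGroupProperties> groups(groupCount);
    for (auto& g : groups) {
        g.sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_GROUP_PROPERTIES;
        g.pNext = nullptr;
    }
    vkEnumeratePhysicalDeviceGroups(instance, &groupCount, groups.data());

    for (uint32_t i = 0; i < groupCount; ++i) {
        // A group with more than one device is a candidate for linked multi-GPU.
        std::printf("Group %u: %u physical device(s)\n",
                    i, groups[i].physicalDeviceCount);
    }

    vkDestroyInstance(instance, nullptr);
    return 0;
}
```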


A slide from Microsoft's DirectX 12 reveal, long ago.

As for the “experimental” comment that I made in the introduction... I was expecting to see this news around SIGGRAPH, which occurs in late-July / early-August, alongside a minor version bump (to Vulkan 1.1).

I might still be right, though.

The major new features of Vulkan 1.0.42.0 are implemented as a new classification of extensions: KHX. In the past, vendors, like NVIDIA and AMD, would add new features as vendor-prefixed extensions. Games could query the graphics driver for these abilities, and enable them if available. If these features became popular enough for multiple vendors to have their own implementation of it, a committee would consider an EXT extension. This would behave the same across all implementations (give or take) but not be officially adopted by the Khronos Group. If they did take it under their wing, it would be given a KHR extension (or added as a required feature).

The Khronos Group has added a new layer: KHX. This level of extension sits below KHR and is not intended for production code. You might see where this is headed. The VR multiview, multi-GPU, and cross-process extensions are not supposed to be used in released video games until they leave KHX status. Unlike a vendor extension, the Khronos Group wants old KHX standards to drop out of existence at some point after they graduate to full KHR status. It’s not something that NVIDIA owns and will keep around for 20 years past its usable lifespan just so old games behave as expected.
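As an illustration of how an application discovers these extensions in the first place, here is a short sketch that queries the driver's instance-extension list before enabling anything; the extension name checked is the KHX device-group-creation string from this era of the spec (an assumption on my part), and production code would wait for the KHR or core equivalent:

```cpp
// Sketch: ask the Vulkan driver which instance extensions it exposes and
// check for a specific (experimental) one before trying to enable it.
#include <vulkan/vulkan.h>
#include <cstdio>
#include <cstring>
#include <vector>

bool instanceExtensionAvailable(const char* name) {
    uint32_t count = 0;
    vkEnumerateInstanceExtensionProperties(nullptr, &count, nullptr);

    std::vector<VkExtensionProperties> props(count);
    vkEnumerateInstanceExtensionProperties(nullptr, &count, props.data());

    for (const auto& p : props) {
        if (std::strcmp(p.extensionName, name) == 0) return true;
    }
    return false;
}

int main() {
    // Hypothetical target: the KHX-era device group creation extension.
    const char* ext = "VK_KHX_device_group_creation";
    std::printf("%s %s\n", ext,
                instanceExtensionAvailable(ext) ? "is available"
                                                : "is not exposed by this driver");
    return 0;
}
```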


How long will that take? No idea. I’ve already mentioned my logical but uneducated guess a few paragraphs ago, but I’m not going to repeat it; I have literally zero facts to base it on, and I don’t want our readers to think that I do. I don’t. It’s just based on what the Khronos Group typically announces at certain trade shows, and the length of time since their first announcement.

The benefit that KHX does bring us is that, whenever these features make it to public release, developers will have already been using them... internally... since around now. When an extension hits KHR, it’s done, and anyone can theoretically be ready for it when that time comes.

Manufacturer: NVIDIA

VR Performance Evaluation

Even though virtual reality hasn’t taken off with the momentum that many in the industry had expected on the heels of the HTC Vive and Oculus Rift launches last year, it remains one of the fastest growing aspects of PC hardware. More importantly for many, VR is also one of the key inflection points for performance moving forward; it requires more hardware, scalability, and innovation than any other sub-category including 4K gaming.  As such, NVIDIA, AMD, and even Intel continue to push the performance benefits of their own hardware and technology.

Measuring and validating those claims has proven to be a difficult task. Tools that we used in the era of standard PC gaming just don’t apply. Fraps is a well-known and well-understood tool for measuring frame rates and frame times, utilized by countless reviewers and enthusiasts. But Fraps lacked the ability to tell the complete story of gaming performance and experience. NVIDIA introduced FCAT and we introduced Frame Rating back in 2013 to expand the capabilities that reviewers and consumers had access to. Using a more sophisticated technique that includes direct capture of the graphics card output in uncompressed form, a software-based overlay applied to each frame being rendered, and post-process analysis of that data, we were able to communicate the smoothness of a gaming experience and better articulate it to help gamers make purchasing decisions.
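This is not the actual FCAT tooling, but as a sketch of the kind of post-processing described above, the following computes an average and a worst-case (99th percentile) frame time from a hypothetical list of captured per-frame times:

```cpp
// Not the real FCAT pipeline -- just a sketch of post-processing captured
// frame times into smoothness metrics (average and 99th percentile).
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <numeric>
#include <vector>

int main() {
    // Hypothetical per-frame render times (ms) pulled from a capture.
    std::vector<double> frame_ms = {16.6, 16.8, 16.5, 33.4, 16.7, 16.9, 17.0, 16.4};

    const double avg = std::accumulate(frame_ms.begin(), frame_ms.end(), 0.0)
                       / frame_ms.size();

    std::vector<double> sorted = frame_ms;
    std::sort(sorted.begin(), sorted.end());
    const size_t idx = static_cast<size_t>(std::ceil(0.99 * (sorted.size() - 1)));
    const double p99 = sorted[idx];  // worst-case frame time in this sample

    std::printf("Average frame time: %.1f ms (%.1f FPS)\n", avg, 1000.0 / avg);
    std::printf("99th percentile frame time: %.1f ms\n", p99);
    return 0;
}
```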


VR pipeline when everything is working well.

For VR, though, those same tools just don’t cut it. Fraps is a non-starter, as it measures frame rendering from the GPU point of view and completely misses the interaction between the graphics system and the VR runtime environment (OpenVR for Steam/Vive and OVR for Oculus). Because the rendering pipeline is drastically changed in the current VR integrations, what Fraps measures is completely different from the experience the user actually gets in the headset. Previous FCAT and Frame Rating methods were still viable, but the tools and capture technology needed to be updated. The hardware capture products we have used since 2013 were limited in their maximum bandwidth, and the overlay software did not have the ability to “latch in” to VR-based games. Not only that, but measuring frame drops, time warps, space warps, and reprojections would be a significant hurdle without further development.


VR pipeline with a frame miss.
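As a simplified illustration of the frame-miss case captioned above: a 90 Hz headset gives the application roughly an 11.1 ms budget per refresh, and any frame that takes longer forces the runtime to re-show or reproject the previous one. The sketch below, with made-up render times, just counts those misses:

```cpp
// Simplified illustration of VR frame misses: a 90 Hz headset allows
// ~11.1 ms per refresh; any frame slower than that misses vsync and the
// runtime must cover for it (reprojection / re-shown frame). Numbers are made up.
#include <cstdio>

int main() {
    const double refresh_hz = 90.0;
    const double budget_ms  = 1000.0 / refresh_hz;  // ~11.1 ms per refresh

    // Hypothetical app render times for a handful of frames.
    const double render_ms[] = {9.8, 10.5, 12.3, 10.9, 14.0, 10.2};

    int missed = 0;
    for (double t : render_ms) {
        if (t > budget_ms) {
            ++missed;  // this frame missed the refresh window
        }
    }
    std::printf("Refresh budget: %.1f ms, frames missed: %d of %zu\n",
                budget_ms, missed, sizeof(render_ms) / sizeof(render_ms[0]));
    return 0;
}
```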

NVIDIA decided to undertake the task of rebuilding FCAT to work with VR. And while the company is obviously hoping that it will prove its claims of performance benefits for VR gaming, the investment of time and money in a project that is to be open sourced and made freely available to the media and the public should not be overlooked.


NVIDIA FCAT VR comprises two different applications. The FCAT VR Capture tool runs on the PC being evaluated and has a similar appearance to other performance and timing capture utilities. It uses data from Oculus Event Tracing (part of Windows ETW) and SteamVR’s performance API, along with NVIDIA driver stats when used on NVIDIA hardware, to generate performance data. It works perfectly well on any GPU vendor’s hardware, though, since it has access to each VR vendor’s own timing results.


Continue reading our preview of the new FCAT VR tool!