Subject: General Tech, Graphics Cards | January 8, 2018 - 06:22 PM | Ken Addison
Tagged: wigig, VR, vive wireless adapter, vive pro, vive, steamvr, Oculus, htc, CES 2018, CES
As it turns out, HTC's teasing of a new Vive product late last week was well warranted. Today, at their press conference before CES 2018, HTC announced two new hardware products—the Vive Pro and the Vive Wireless adapter.
Just like their teaser indicated, one of the major features of this new Vive hardware is an increased display resolution. The Vive Pro's resolution is 2880x1600 (combined), a 78% increase in pixel count over the standard 2160x1200 resolution shared by the original Vive and the Oculus Rift.
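As a quick sanity check, that 78% figure comes straight from comparing total pixel counts; a back-of-the-envelope calculation:

```python
# Compare combined (both-eye) pixel counts of the two headsets.
vive_pro_pixels = 2880 * 1600   # 4,608,000 pixels
vive_pixels = 2160 * 1200       # 2,592,000 pixels

increase = (vive_pro_pixels / vive_pixels - 1) * 100
print(f"{increase:.1f}% more pixels")  # ~77.8%, rounded to 78%
```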
Currently, there are no details about upgraded optics in the form of new lenses, but considering Valve's announcement of new lenses for future VR headsets, I would expect these to be included in the Vive Pro.
In addition to the display improvements, there are also some design changes in the Vive Pro that aim to let users quickly put on the headset and adjust it for maximum comfort. The Vive Pro now features a dial on the back of the head strap for adjusting the fit rather than velcro straps. This setup is very reminiscent of the PSVR headset, which is widely regarded as one of the most comfortable VR headsets currently on the market.
While we've already seen some of these design changes like integrated headphones in the currently shipping Deluxe Audio Strap for Vive, the Vive Pro is built from the ground up with this new strap instead of it being a replacement option.
HTC was very quiet about the change from a single front-facing camera on the standard Vive to dual front cameras on the Vive Pro. Having stereo cameras on the device has the potential to provide a lot of utility, ranging from a stereo view of your surroundings when you are nearing the chaperone boundaries to potential AR applications.
The Vive Pro will work with the current 1.0 base stations for positional tracking, as well as Valve's previously announced but unreleased 2.0 base stations. When using SteamVR 2.0 tracking, the Vive Pro supports up to 4 base stations, allowing for a significantly larger play area of up to 10m x 10m.
Initially, the Vive Pro is slated to ship this quarter as a headset-only upgrade for customers who already have the original Vive with its 1.0 base stations. The full Vive Pro kit with 2.0 tracking is said to ship in the summer time frame. Pricing for both configurations has yet to be announced.
In addition to new headset hardware, HTC also announced their first official solution for wireless VR connectivity.
Built in partnership with Intel, the Vive Wireless Adapter will use 60 GHz WiGig technology to provide a low-latency experience for wirelessly streaming video to the HMD. Both the original Vive and the Vive Pro will support the adapter, which is set to be available this summer. There is no indication of pricing for the Vive Wireless Adapter yet either.
HTC's announcements today are impressive and should help push PC VR forward. We have yet to get hands-on experience with either the Vive Pro or the Vive Wireless adapter, but we have a demo appointment tomorrow, so keep checking PC Perspective for our updated impressions of the next generation of VR!
Subject: Graphics Cards, Displays | January 8, 2018 - 12:30 AM | Ken Addison
Tagged: SHIELD TV, nvidia, hp, hdr, g-sync, DCI-P3, bgfd, asus, android tv, acer
Although their keynote presentation tonight at CES is all about automotive technology, that hasn't stopped NVIDIA from providing us with a few gaming-related announcements this week. The most interesting of these is what NVIDIA is calling "Big Format Gaming Displays" or BFGDs (get it?!).
Along with partners ASUS, Acer, and HP, NVIDIA has developed what seems to be the ultimate living room display solution for gamers.
Based on an HDR-enabled 65" 4K 120Hz panel, these displays integrate both NVIDIA G-SYNC variable refresh rate technology for smooth gameplay, as well as a built-in NVIDIA SHIELD TV set-top box.
In addition to G-SYNC technology, these displays will also feature a full-array direct backlight capable of a peak luminance of 1000 nits and conform to the DCI-P3 color gamut, both necessary features for a quality HDR experience. These specifications put the BFGDs in line with the current 4K HDR TVs on the market.
Unlike traditional televisions, these BFGDs are expected to have very low input latencies, a significant advantage for both PC and console gamers.
Integration of the SHIELD TV means that these displays will be more than just an extremely large PC monitor, but rather capable of replacing the TV in your living room. The Android TV operating system means you will get access to a lot of the most popular streaming video applications, as well as features like Google Assistant and NVIDIA GameStream.
Personally, I am excited at the idea of what is essentially a 65" TV optimized for things like low input latency. The current crop of high-end TVs on the market caters very little to gamers, with game modes that fail to disable all of the image processing effects and still exhibit significant latency.
It's also interesting to see companies like ASUS, Acer, and HP who are well known in the PC display market essentially entering the TV market with these BFGD products.
Stay tuned for our eyes-on impressions of the BFGD displays as part of our CES 2018 coverage!
Update: ASUS has officially announced their BFGD offering, the aptly named PG65 (pictured below). We have a meeting with ASUS this week, and we hope to get a look at this upcoming product!
Subject: Graphics Cards | January 8, 2018 - 12:00 AM | Ryan Shrout
Tagged: Vega, CES 2018, CES, amd, 7nm
Though it was just the most basic of teases, AMD confirmed at CES that it will have a 7nm-based Vega product sampling sometime in 2018. No mention of a shipping timeline, performance, or consumer variants was to be found.
This product will target the machine learning market, with hardware and platform optimizations key to that segment. AMD mentions “new DL Ops”, or deep learning operations, but the company didn’t expand on that. It could mean it will integrate Tensor Core-style compute units (as NVIDIA did on the Volta architecture), or it may be something more unique. AMD will also integrate new I/O, likely to compete with NVLink, as well as MxGPU support for dividing resources efficiently for virtualization.
AMD did present a GPU “roadmap” at the tech day as well. I put that word in quotes because it is incredibly, and intentionally, vague. You might assume that Navi is being placed into the 2019 window, but it's possible it could show in late 2018. AMD was also unable to confirm if a 7nm Vega variant would arrive for gaming and consumer markets in 2018.
Subject: General Tech, Graphics Cards | January 5, 2018 - 02:59 PM | Jeremy Hellstrom
Tagged: meltdown, spectre, geforce, quadro, NVS, nvidia, tesla, security
If you were wondering whether NVIDIA products are vulnerable to some of the latest security threats, the answer is yes. Your Shield device or GPU is not vulnerable to CVE-2017-5754, aka Meltdown; however, the two variants of Spectre could theoretically be used against you.
Variant 1 (CVE-2017-5753): Mitigations are provided with the security update included in this bulletin. NVIDIA expects to work together with its ecosystem partners on future updates to further strengthen mitigations.
Variant 2 (CVE-2017-5715): Mitigations are provided with the security update included in this bulletin. NVIDIA expects to work together with its ecosystem partners on future updates to further strengthen mitigations.
Variant 3 (CVE-2017-5754): At this time, NVIDIA has no reason to believe that Shield TV/tablet is vulnerable to this variant.
The Android-based Shield tablet should be updated to Shield Experience 5.4, which should arrive before the end of the month. Your Shield TV, should you still have a working one, will receive Shield Experience 6.3 in the same time frame.
The GPU is a little more complex as there are several product lines and OSes which need to be dealt with. There should be a new GeForce driver appearing early next week for gaming GPUs, with HPC cards receiving updates on the dates you can see below.
There is no reason to expect Radeon and Vega GPUs to suffer from these issues at this time. Intel could learn a bit from NVIDIA's response, which has been very quick and includes their older hardware.
Subject: Graphics Cards | January 4, 2018 - 10:15 AM | Jim Tanous
Tagged: external graphics, external gpu, CES 2018, CES, ASUS ROG, asus
ASUS today announced the XG Station Pro, a Thunderbolt 3-based external GPU enclosure tailored for both gamers and professionals. The XG Station Pro can accommodate full-size GPUs up to 2.5 slots wide, including large cards such as the ROG Strix 1080 Ti and Radeon RX Vega 64.
Featuring a "contemporary design with clean lines and subtle styling," the XG Station Pro has a footprint of 4.3-inches x 14.8-inches, thanks to ASUS's decision to use an external power supply. In order to provide enough juice for high-end graphics cards, ASUS is borrowing the power supply design from its GX800 gaming laptop, which puts out up to 330 watts.
The XG Station Pro's chassis, designed by case maker In Win, has a smooth dark gray finish with a black PCB and sleeved PCIe power cables. It features a soft white internal glow that can be controlled by ASUS's Aura software, including Aura Sync to synchronize lighting with your compatible ASUS and ROG graphics cards and laptops.
Inside the XG Station Pro, dual 120mm PWM fans provide exhaust out of the right side of the chassis. The fans automatically ramp down and even shut off below certain temperatures, but users can also manually control the fans with the ASUS GPU Tweak II application.
Around back, users will find an extra USB Type-C 3.1 Gen 2 port, which can supply up to 15 watts of power to compatible devices such as smartphones and external storage. Finally, ASUS notes that it includes the required Thunderbolt 3 cable in the box, something that many Thunderbolt-based devices seem to lack.
The ASUS XG Station Pro will launch later this month for $329 with support for both AMD and NVIDIA GPUs in Windows 10, and just AMD Vega-based GPUs in macOS Sierra and newer.
Subject: Graphics Cards | December 28, 2017 - 03:32 PM | Jeremy Hellstrom
Tagged: external gpu, gigabyte, aorus, gtx 1070, thunderbolt 3, nvidia, gaming box
Have a laptop with Thunderbolt 3 and a mobile GPU that just doesn't cut it anymore? Gigabyte now offers an incredibly easy way to upgrade your laptop, with no screwdriver required! The Aorus GTX 1070 Gaming Box contains an external desktop-class GTX 1070 and separate PSU, giving you a dock with some serious gaming prowess. The Tech Report's benchmarks compare this external GPU against the GTX 1060 installed in their Alienware gaming laptop and Alienware's own external GPU enclosure, on both the internal display and an external monitor. The results are somewhat mixed and worth reading through fully; however, if you are on an integrated GPU, this solution is an incredible upgrade.
"Gigabyte's Aorus GTX 1070 Gaming Box offers us a look into a future where a big shot of graphics performance is just a single cable away for ultraportable notebook PCs. We plugged the Gaming Box into a test notebook and gave it a spin to see just how bright that future looks."
Here are some more Graphics Card articles from around the web:
- Zotac GeForce GTX 1080 Ti ArcticStorm Mini @ Guru of 3D
- Zotac GT 1030 2 GB @ Modders-Inc
- The PowerColor Red Devil Vega 56 benchmarked vs. the GTX 1070 Ti at BabelTechReviews
- 13-Way Radeon AMDGPU-PRO 17.50 vs. NVIDIA Linux OpenCL Compute Comparison @ Phoronix
- Sapphire RX Vega 64 Nitro+ Limited Edition @ Kitguru
- PowerColor Red Devil Vega 56 8GB @ Guru of 3D
- The AMD Linux Drivers Do *Not* Yet Support Radeon "Navi" @ Phoronix
- AMD Radeon Software Adrenalin Edition Performance @ [H]ard|OCP
Specifications and Design
With all of the activity in both the GPU and CPU markets this year, it's hard to remember some of the launches in the first half of the year, including NVIDIA's GTX 1080 Ti. Little has challenged NVIDIA's GP102-based offering, which held the rank of fastest gaming GPU for the majority of the year, making it the de facto choice for high-end gamers.
Even though we've been giving a lot of attention to NVIDIA's new flagship TITAN V graphics card, its $3000 price tag puts it out of the range of almost every gamer who doesn't have a day job involving deep learning.
Today, we're taking a look back at the (slightly) more reasonable GP102 and one of the most premium offerings to feature it, the ASUS ROG Strix GTX 1080 Ti.
While the actual specifications of the GP102 GPU onboard the ASUS Strix GTX 1080 Ti haven't changed at all, let's take a moment to refresh ourselves on where it sits relative to the rest of the market.
| | RX Vega 64 Liquid | RX Vega 56 | GTX 1080 Ti | GTX 1080 | GTX 1070 Ti | GTX 1070 |
|---|---|---|---|---|---|---|
| Base Clock | 1406 MHz | 1156 MHz | 1480 MHz | 1607 MHz | 1607 MHz | 1506 MHz |
| Boost Clock | 1677 MHz | 1471 MHz | 1582 MHz | 1733 MHz | 1683 MHz | 1683 MHz |
| Memory Clock | 1890 MHz | 1600 MHz | 11000 MHz | 10000 MHz | 8000 MHz | 8000 MHz |
| Memory Interface | 2048-bit HBM2 | 2048-bit HBM2 | 352-bit G5X | 256-bit G5X | 256-bit | 256-bit |
| Memory Bandwidth | 484 GB/s | 410 GB/s | 484 GB/s | 320 GB/s | 256 GB/s | 256 GB/s |
| TDP | 345 watts | 210 watts | 250 watts | 180 watts | 180 watts | 150 watts |
| Peak Compute | 13.7 TFLOPS | 10.5 TFLOPS | 11.3 TFLOPS | 8.2 TFLOPS | 7.8 TFLOPS | 5.7 TFLOPS |
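The peak compute figures above follow directly from shader count and boost clock, with a fused multiply-add counting as two operations per clock; a quick sketch of the arithmetic (core counts are the publicly listed specs for each card):

```python
# Peak FP32 throughput = shader cores * boost clock * 2 (a fused
# multiply-add counts as two floating-point operations per clock).
def peak_tflops(cores, boost_mhz):
    return cores * boost_mhz * 2 / 1e6  # MHz -> TFLOPS

print(peak_tflops(3584, 1582))  # GTX 1080 Ti: ~11.3 TFLOPS
print(peak_tflops(4096, 1677))  # RX Vega 64 Liquid: ~13.7 TFLOPS
```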
The GTX 1000 series of products from NVIDIA has marked a consolidation in ASUS's GPU offerings. Instead of having both Strix and Matrix products available, the Strix has supplanted everything to be the most premium option from ASUS for any given GPU, and the Strix GTX 1080 Ti doesn't disappoint.
While it might not be the largest graphics card we've ever seen, the ASUS Strix GTX 1080 Ti is larger in every dimension than both the NVIDIA Founders Edition card and the EVGA ICX option we took a look at earlier this year. Compared to the Founders Edition, the Strix GTX 1080 Ti is 1.23-in longer, 0.9-in taller, and takes up an extra PCIe slot in width.
How deep is your learning?
Recently, we've had some hands-on time with NVIDIA's new TITAN V graphics card. Equipped with the GV100 GPU, the TITAN V has shown us some impressive results in both gaming and GPGPU compute workloads.
However, one of the most interesting areas that NVIDIA has been touting for GV100 has been deep learning. With a 1.33x increase in single-precision FP32 compute over the Titan Xp, and the addition of specialized Tensor Cores for deep learning, the TITAN V is well positioned for deep learning workflows.
In mathematics, a tensor is a multi-dimensional array of numerical values with respect to a given basis. While we won't go deep into the math behind it, tensors are a crucial data structure for deep learning applications.
NVIDIA's Tensor Cores aim to accelerate tensor-based math by using half-precision FP16 inputs to process an entire small matrix operation in a single step. The GV100 GPU contains 640 of these Tensor Cores to accelerate FP16 neural network training.
It's worth noting that this is not the first hardware dedicated to tensor operations; others, such as Google with its TPU, have developed hardware for these specific functions.
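To make the idea concrete, here is a minimal NumPy illustration of the kind of operation a Tensor Core accelerates: a small matrix multiply-accumulate with FP16 inputs and higher-precision accumulation. The 4x4 shape mirrors NVIDIA's description of a single Tensor Core operation; this is plain CPU code emulating the pattern, not an actual Tensor Core API:

```python
import numpy as np

# A Tensor Core computes D = A * B + C on small matrices, with A and B
# in half precision (FP16) and the accumulation done at higher precision.
A = np.random.rand(4, 4).astype(np.float16)
B = np.random.rand(4, 4).astype(np.float16)
C = np.zeros((4, 4), dtype=np.float32)

# Multiply the FP16 inputs, accumulate in FP32 (mixed precision).
D = A.astype(np.float32) @ B.astype(np.float32) + C
print(D.shape)  # (4, 4)
```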
| PC Perspective Deep Learning Testbed | |
|---|---|
| Processor | AMD Ryzen Threadripper 1920X |
| Motherboard | GIGABYTE X399 AORUS Gaming 7 |
| Memory | 64GB Corsair Vengeance RGB DDR4-3000 |
| Storage | Samsung SSD 960 Pro 2TB |
| Power Supply | Corsair AX1500i 1500 watt |
| OS | Ubuntu 16.04.3 LTS |
| Drivers | AMD: AMD GPU Pro 17.50 |
For our NVIDIA testing, we used the NVIDIA GPU Cloud 17.12 Docker containers for both TensorFlow and Caffe2 inside of our Ubuntu 16.04.3 host operating system.
For all tests, we are using the ImageNet Large Scale Visual Recognition Challenge 2012 (ILSVRC2012) data set.
Looking Towards the Professionals
This is a multi-part story for the NVIDIA Titan V:
Earlier this week we dove into the new NVIDIA Titan V graphics card and looked at its performance from a gaming perspective. Our conclusions were more or less what we expected - the card was on average ~20% faster than the Titan Xp and about ~80% faster than the GeForce GTX 1080. But with that $3000 price tag, the Titan V isn't going to win any enthusiasts over.
What the Titan V is meant for in reality is the compute space. Developers, coders, engineers, and professionals that use GPU hardware for research, for profit, or for both. In that case, $2999 for the Titan V is simply an investment that needs to show value in select workloads. And though $3000 is still a lot of money, keep in mind that the NVIDIA Quadro GP100, the most recent part with full-performance double precision compute from the Pascal chip, is still selling for well over $6000 today.
The Volta GV100 GPU offers 1:2 double precision performance, equating to 2560 FP64 cores. That is a HUGE leap over the GP102 GPU used on the Titan Xp that uses a 1:32 ratio, giving us just 120 FP64 cores equivalent.
| | Titan V | Titan Xp | GTX 1080 Ti | GTX 1080 | GTX 1070 Ti | GTX 1070 | RX Vega 64 Liquid | Vega Frontier Edition |
|---|---|---|---|---|---|---|---|---|
| Base Clock | 1200 MHz | 1480 MHz | 1480 MHz | 1607 MHz | 1607 MHz | 1506 MHz | 1406 MHz | 1382 MHz |
| Boost Clock | 1455 MHz | 1582 MHz | 1582 MHz | 1733 MHz | 1683 MHz | 1683 MHz | 1677 MHz | 1600 MHz |
| Memory Clock | 1700 MHz | 11400 MHz | 11000 MHz | 10000 MHz | 8000 MHz | 8000 MHz | 1890 MHz | 1890 MHz |
| Memory Interface | 3072-bit HBM2 | 384-bit G5X | 352-bit G5X | 256-bit G5X | 256-bit | 256-bit | 2048-bit HBM2 | 2048-bit HBM2 |
| Memory Bandwidth | 653 GB/s | 547 GB/s | 484 GB/s | 320 GB/s | 256 GB/s | 256 GB/s | 484 GB/s | 484 GB/s |
| TDP | 250 watts | 250 watts | 250 watts | 180 watts | 180 watts | 150 watts | 345 watts | 300 watts |
| Peak Compute | 12.2 (base) / 14.9 (boost) TFLOPS | 12.1 TFLOPS | 11.3 TFLOPS | 8.2 TFLOPS | 7.8 TFLOPS | 5.7 TFLOPS | 13.7 TFLOPS | 13.1 TFLOPS |
| Peak DP Compute | 6.1 (base) / 7.45 (boost) TFLOPS | 0.37 TFLOPS | 0.35 TFLOPS | 0.25 TFLOPS | 0.24 TFLOPS | 0.17 TFLOPS | 0.85 TFLOPS | 0.81 TFLOPS |
The current AMD Radeon RX Vega 64 and the Vega Frontier Edition both ship with a 1:16 FP64 ratio, giving us the equivalent of 256 DP cores per card.
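These core counts fall directly out of the FP64 ratios; a quick check of the arithmetic for the chips mentioned above (the FP32 core counts are the publicly listed specs for each GPU):

```python
# FP64 (double-precision) core equivalent = FP32 cores / ratio denominator.
def fp64_cores(fp32_cores, ratio_denominator):
    return fp32_cores // ratio_denominator

print(fp64_cores(5120, 2))   # Titan V (GV100, 1:2)   -> 2560
print(fp64_cores(3840, 32))  # Titan Xp (GP102, 1:32) -> 120
print(fp64_cores(4096, 16))  # RX Vega 64 (1:16)      -> 256
```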
Test Setup and Benchmarks
Our testing setup remains the same from our gaming tests, but obviously the software stack is quite different.
| PC Perspective GPU Testbed | |
|---|---|
| Processor | Intel Core i7-5960X Haswell-E |
| Motherboard | ASUS Rampage V Extreme X99 |
| Memory | G.Skill Ripjaws 16GB DDR4-3200 |
| Storage | OCZ Agility 4 256GB (OS); Adata SP610 500GB (games) |
| Power Supply | Corsair AX1500i 1500 watt |
| OS | Windows 10 x64 |
Applications in use include:
- Cinebench R15
- Sisoft Sandra GPU Compute
- SPECviewperf 12.1
Let's not drag this out - I know you are hungry for results! (Thanks to Ken for running most of these tests for us!!)
Subject: Graphics Cards | December 15, 2017 - 09:00 AM | Scott Michaud
Tagged: vega 64 liquid, vega 64, vega 56, Vega, sapphire, radeon, amd
SAPPHIRE has just launched a pair of custom cooled, factory overclocked, RX Vega-based graphics cards. As you might guess: the SAPPHIRE NITRO+ Radeon RX Vega 64 uses the Vega 64 chip with its 4096 stream processors, while the SAPPHIRE NITRO+ Radeon RX Vega 56 uses the Vega 56 chip and its 3584 stream processors. Both cards have 8GB of HBM2 memory (two stacks of 4GB). The cooler design uses three fans and vapor chambers, with separate heat pipes for the GPU+Memory (six pipes) and VRMs (two pipes).
It also has a back plate!
The clock rate is where it gets interesting. The NITRO+ RX Vega 64 will have a boost clock of 1611 MHz out-of-the-box. This is above the RX Vega 64 Air’s boost clock (1546 MHz) but below the RX Vega 64 Liquid’s boost clock (1677 MHz). The liquid-cooled Radeon RX Vega 64 still has the highest clocks, but this product sits almost exactly half-way between the liquid-cooled and air-cooled RX Vega 64.
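That "almost exactly half-way" claim checks out with a quick calculation:

```python
# SAPPHIRE's 1611 MHz boost sits between AMD's two reference boost clocks.
air_boost = 1546      # RX Vega 64 Air, MHz
liquid_boost = 1677   # RX Vega 64 Liquid, MHz

midpoint = (air_boost + liquid_boost) / 2
print(midpoint)  # 1611.5 MHz -- SAPPHIRE ships 1611 MHz
```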
The NITRO+ Radeon RX Vega 56, with its 1572 MHz boost clock, is well above the stock RX Vega 56’s 1471 MHz boost clock, though. It’s a clear win.
As for enthusiast features, this card has quite a few ways to keep it cool. First, it will operate fanless until 56°C. Second, the card accepts a 4-pin fan connector, which allows it to adjust the speed of two case fans based on the temperature readings from the card. I am a bit curious whether it’s better to let the GPU control the fans, or whether having them all attached to the same place allows them to work together more effectively. Either way, if you’ve run out of fan headers, this feature should be useful regardless.
The SAPPHIRE NITRO+ Radeon RX Vega 64 and 56 are available now.