Subject: General Tech | April 7, 2016 - 02:47 PM | Ken Addison
Tagged: VR, vive, video, tesla p100, steamvr, Spectre 13.3, rift, podcast, perfmon, pascal, Oculus, nvidia, htc, hp, GP100, Bristol Ridge, APU, amd
PC Perspective Podcast #394 - 04/07/2016
Join us this week as we discuss measuring VR Performance, NVIDIA's Pascal GP100, Bristol Ridge APUs and more!
The URL for the podcast is: http://pcper.com/podcast - Share with your friends!
- iTunes - Subscribe to the podcast directly through the Store (audio only)
- RSS - Subscribe through your regular RSS reader (audio only)
- MP3 - Direct download link to the MP3 file
This episode of the PC Perspective Podcast is sponsored by Lenovo!
Hosts: Ryan Shrout, Jeremy Hellstrom, Josh Walrath and Allyn Malventano
Program length: 1:32:19
Week in Review:
0:46:25 This week’s podcast is brought to you by Casper. Use code PCPER at checkout for $50 towards your order!
News items of interest:
Hardware/Software Picks of the Week
93% of a GP100 at least...
NVIDIA has announced the Tesla P100, the company's newest (and most powerful) accelerator for HPC. Based on the Pascal GP100 GPU, the Tesla P100 is built on 16nm FinFET and uses HBM2.
NVIDIA provided a comparison table, to which we have added what we know about the full GP100:
| | Tesla K40 | Tesla M40 | Tesla P100 | Full GP100 |
|---|---|---|---|---|
| GPU | GK110 (Kepler) | GM200 (Maxwell) | GP100 (Pascal) | GP100 (Pascal) |
| FP32 CUDA Cores / SM | 192 | 128 | 64 | 64 |
| FP32 CUDA Cores / GPU | 2880 | 3072 | 3584 | 3840 |
| FP64 CUDA Cores / SM | 64 | 4 | 32 | 32 |
| FP64 CUDA Cores / GPU | 960 | 96 | 1792 | 1920 |
| Base Clock | 745 MHz | 948 MHz | 1328 MHz | TBD |
| GPU Boost Clock | 810/875 MHz | 1114 MHz | 1480 MHz | TBD |
| Memory Interface | 384-bit GDDR5 | 384-bit GDDR5 | 4096-bit HBM2 | 4096-bit HBM2 |
| Memory Size | Up to 12 GB | Up to 24 GB | 16 GB | TBD |
| L2 Cache Size | 1536 KB | 3072 KB | 4096 KB | TBD |
| Register File Size / SM | 256 KB | 256 KB | 256 KB | 256 KB |
| Register File Size / GPU | 3840 KB | 6144 KB | 14336 KB | 15360 KB |
| TDP | 235 W | 250 W | 300 W | TBD |
| Transistors | 7.1 billion | 8 billion | 15.3 billion | 15.3 billion |
| GPU Die Size | 551 mm2 | 601 mm2 | 610 mm2 | 610 mm2 |
| Manufacturing Process | 28 nm | 28 nm | 16 nm | 16 nm |
This table is aimed at developers who are interested in GPU compute, so a few variables (like ROPs) are still unknown, but it still gives us huge insight into the “big Pascal” architecture. The jump to 16nm allows for about twice the number of transistors, 15.3 billion (up from 8 billion with GM200), in roughly the same die area, 610 mm2 (up from 601 mm2).
A full GP100 processor will have 60 streaming multiprocessors (SMs), compared to GM200's 24, although each Pascal SM contains half as many FP32 shaders (64 versus 128). The GP100 part listed in the table above is actually partially disabled, cutting four of the sixty SMs. This yields 3584 single-precision (32-bit) CUDA cores, up from 3072 in GM200. (The full GP100 will have 3840 of these FP32 CUDA cores -- but we don't know when or where we'll see that.) The base clock is also significantly higher than Maxwell's, 1328 MHz versus ~1000 MHz for the Titan X and 980 Ti, although Ryan has overclocked those GPUs to ~1390 MHz with relative ease. This is interesting, because even though 10.6 TeraFLOPs is amazing, it's only about 20% more than what GM200 could pull off with an overclock.
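Those throughput figures fall straight out of the table: peak FP32 rate is just CUDA cores × clock × 2, since each core can retire one fused multiply-add (two operations) per cycle. A quick back-of-the-envelope sketch in Python, using the boost clocks above and Ryan's ~1390 MHz Maxwell overclock:

```python
# Peak FP32 throughput = CUDA cores x clock x 2 FLOPs (one FMA = 2 ops/cycle).
def peak_fp32_tflops(cuda_cores, clock_mhz):
    return cuda_cores * clock_mhz * 1e6 * 2 / 1e12

print(peak_fp32_tflops(3584, 1480))  # Tesla P100 at boost:        ~10.6 TFLOPS
print(peak_fp32_tflops(3072, 1000))  # GM200 (Titan X) near stock:  ~6.1 TFLOPS
print(peak_fp32_tflops(3072, 1390))  # GM200 overclocked:           ~8.5 TFLOPS
# 10.6 vs. 8.5: only a ~20-25% lead over an overclocked GM200.
```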
Subject: Graphics Cards | April 5, 2016 - 11:57 AM | Sebastian Peak
Tagged: PCIe power, nvidia, low-power, GTX950, GTX 950 Low Power, graphics card, gpu, GeForce GTX 950, evga
EVGA has announced new low-power versions of the NVIDIA GeForce GTX 950, some of which do not require any PCIe power connection to work.
"The EVGA GeForce GTX 950 is now available in special low power models, but still retains all the performance intact. In fact, several of these models do not even have a 6-Pin power connector!"
With or without a power connector, all of these cards are full GTX 950s, with 768 CUDA cores and 2GB of GDDR5 memory. The primary difference is clock speed, and EVGA provides a chart showing which models still require PCIe power and how they compare in performance.
It looks like the links to the 75W (no PCIe power required) models aren't working just yet on EVGA's site, but the listings should be active soon with pricing and availability info.
Subject: Graphics Cards | April 4, 2016 - 09:00 AM | Sebastian Peak
Tagged: workstation, VR, virtual reality, quadro, NVIDIA Quadro M5500, nvidia, msi, mobile workstation, enterprise
NVIDIA's VR Ready program, which is designed to inform users which GeForce GTX GPUs “deliver an optimal VR experience”, has moved to enterprise with a new program aimed at NVIDIA Quadro GPUs and related systems.
“We’re working with top OEMs such as Dell, HP and Lenovo to offer NVIDIA VR Ready professional workstations. That means models like the HP Z Workstation, Dell Precision T5810, T7810, T7910, R7910, and the Lenovo P500, P710, and P910 all come with NVIDIA-recommended configurations that meet the minimum requirements for the highest performing VR experience.
Quadro professional GPUs power NVIDIA professional VR Ready systems. These systems put our VRWorks software development kit at the fingertips of VR headset and application developers. VRWorks offers exclusive tools and technologies — including Context Priority, Multi-res Shading, Warp & Blend, Synchronization, GPU Affinity and GPU Direct — so pro developers can create great VR experiences.”
Partners include Dell, HP, and Lenovo, with new workstations featuring NVIDIA professional VR Ready certification.
Desktop isn't the only space for workstations, and in this morning's announcement NVIDIA and MSI are introducing the WT72 mobile workstation, “the first NVIDIA VR Ready professional laptop”:
"The MSI WT72 VR Ready laptop is the first to use our new Maxwell architecture-based Quadro M5500 GPU. With 2,048 CUDA cores, the Quadro M5500 is the world’s fastest mobile GPU. It’s also our first mobile GPU for NVIDIA VR Ready professional mobile workstations, optimized for VR performance with ultra-low latency."
Here are the specs for the WT72 6QN:
- GPU: NVIDIA Quadro M5500 3D (8GB GDDR5)
- CPU Options:
- Xeon E3-1505M v5
- Core i7-6920HQ
- Core i7-6700HQ
- Chipset: CM236
- Memory:
- 64GB ECC DDR4 2133 MHz (Xeon)
- 32GB DDR4 2133 MHz (Core i7)
- Storage: Super RAID 4, 256GB SSD + 1TB SATA 7200 rpm
- Display:
- 17.3” UHD 4K (Xeon, i7-6920HQ)
- 17.3” FHD Anti-Glare IPS (i7-6700HQ)
- LAN: Killer Gaming Network E2400
- Optical Drive: BD Burner
- I/O: Thunderbolt, USB 3.0 x6, SDXC card reader
- Webcam: FHD type (1080p/30)
- Speakers: Dynaudio Tech Speakers 3Wx2 + Subwoofer
- Battery: 9 cell
- Dimensions: 16.85” x 11.57” x 1.89”
- Weight: 8.4 lbs
- Warranty: 3-year limited
- Pricing:
- Xeon E3-1505M v5 model: $6,899
- Core i7-6920HQ model: $6,299
- Core i7-6700HQ model: $5,499
No doubt we will see details of other Quadro VR Ready workstations as GTC unfolds this week.
A new fighter has entered the ring
When EVGA showed me that it was entering the world of gaming notebooks at CES in January, I must admit, I questioned the move. A company that at one point only built and distributed graphics cards based on NVIDIA GeForce GPUs, and had since moved into mice, power supplies, tablets (remember that?), and even cases, was now getting into the cutthroat world of notebooks. But I was promised that EVGA had an angle; it would not be cutting any corners in order to bring a truly competitive and aggressive product to the market.
Just a couple of short months later (seriously, is it the end of March already?) EVGA presented us with a shiny new SC17 Gaming Notebook to review. It’s thinner than you might expect, heavier than I would prefer, and packs impressive compute power, along with unique features and overclocking capability that should put it on your short list of portable gaming rigs for 2016.
Let’s start with a dive into the spec table and then go from there.
| EVGA SC17 Specifications | |
|---|---|
| Processor | Intel Core i7-6820HK |
| Memory | 32GB G.Skill DDR4-2666 |
| Graphics Card | GeForce GTX 980M 8GB |
| Storage | 256GB M.2 NVMe PCIe SSD + 1TB 7200 RPM SATA 6G HDD |
| Display | Sharp 17.3-inch UHD 4K with matte finish |
| Connectivity | Intel I219-V Gigabit Ethernet, Intel AC-8260 802.11ac, 2x USB 3.0 Type-A, 1x USB 3.1 Type-C |
| Audio | Realtek ALC255 |
| Video | 1x HDMI 1.4, 2x mini DisplayPort (1x G-Sync support) |
| Dimensions | 16-in x 11.6-in x 1.05-in |
| OS | Windows 10 Home |
With a price tag of $2,699, EVGA owes you a lot – and it delivers! The processor of choice is the Intel Core i7-6820HK, an unlocked, quad-core, Hyper-Threaded processor that brings desktop-class computing capability to a notebook. The base clock speed is 2.7 GHz, but the Turbo clock reaches as high as 3.6 GHz out of the box, giving games, rendering programs, and video editors plenty of horsepower for production on the go. And don’t forget that this is one of the first unlocked mobile processors from Intel – multipliers and voltages can all be tweaked in the UEFI or through the Precision X Mobile software to push it even further.
Given EVGA’s relationship with NVIDIA, it should surprise exactly zero people that a mobile GeForce GPU is found inside the SC17. The GTX 980M is based on the Maxwell 2.0 design and falls slightly under the desktop consumer-class GeForce GTX 970 in CUDA core count and clock speed. With 1536 CUDA cores and a 1038 MHz base clock (plus boost), the discrete graphics will have enough juice for most games at very high image quality settings. EVGA has configured the GPU with 8GB of GDDR5 memory, more than any desktop GTX 970… so there’s that. Obviously, it would have been great to see the full-powered GTX 980 in the SC17, but that would have required changes to the thermal design, chassis, and power delivery.
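To put a rough number on “slightly under”: compare theoretical base-clock FP32 throughput against a reference GTX 970 (1664 CUDA cores at 1050 MHz). This ignores boost behavior and memory bandwidth entirely, so treat it as ballpark math only:

```python
# Base-clock FP32 throughput: cores x clock x 2 FLOPs (FMA) per cycle.
tflops_980m = 1536 * 1038e6 * 2 / 1e12   # ~3.19 TFLOPS
tflops_970  = 1664 * 1050e6 * 2 / 1e12   # ~3.49 TFLOPS at reference clocks
print(f"GTX 980M trails a reference GTX 970 by ~{(1 - tflops_980m / tflops_970) * 100:.0f}%")
```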
Subject: General Tech, Graphics Cards | March 28, 2016 - 11:24 PM | Ryan Shrout
Tagged: pcper, hardware, technology, review, Oculus, rift, Kickstarter, nvidia, geforce, GTX 980 Ti
It's Oculus Rift launch day, and the team and I spent the afternoon setting up the Rift, running through a set of gameplay environments, and getting some good first impressions of performance, the experience, and more. Oh, and we threw a green screen into the mix today as well.
Subject: Graphics Cards | March 28, 2016 - 10:20 AM | Ryan Shrout
Tagged: vive, valve, steamvr, rift, Oculus, nvidia, htc, amd
As the first Oculus Rift retail units begin hitting hands in the US and abroad, both AMD and NVIDIA have released new drivers to help gamers ease into the world of VR gaming.
Up first is AMD, with Radeon Software Crimson Edition 16.3.2. It adds support for Oculus SDK v1.3 and the Radeon Pro Duo...for all none of you who have that product in your hands. AMD claims that this driver will offer "the most stable and compatible driver for developing VR experiences on the Rift to-date." AMD tells us that the latest implementation of LiquidVR features in the software helps the SDKs and VR games at release take better advantage of AMD Radeon GPUs. This includes capabilities like asynchronous shaders (which AMD thinks should be capitalized for some reason??) and Quick Response Queue (which I think refers to the ability to process work without context-change penalties) to help Oculus implement Asynchronous Timewarp.
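For those unfamiliar with Asynchronous Timewarp, the gist is simple: if the game misses a vsync deadline, the VR compositor re-projects the previous frame with the newest head pose (running on a high-priority GPU queue, which is where a feature like Quick Response Queue comes in) instead of repeating a stale image. Here's a minimal conceptual sketch; every name in it is hypothetical, not Oculus SDK code:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Frame:
    image: str   # stand-in for a rendered eye buffer
    pose: float  # stand-in for the head pose it was rendered at

def warp(frame: Frame, new_pose: float) -> Frame:
    # Real ATW re-projects the image for the pose delta; we just note it.
    return Frame(f"{frame.image} (re-projected {frame.pose} -> {new_pose})", new_pose)

def compositor_tick(fresh: Optional[Frame], last: Frame, latest_pose: float) -> Frame:
    if fresh is not None:
        return fresh                # app hit the vsync deadline: present as-is
    return warp(last, latest_pose)  # app missed it: warp the previous frame

last = Frame("frame 41", pose=10.0)
print(compositor_tick(None, last, latest_pose=12.5))  # missed frame -> warped
```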
NVIDIA's release is a bit more substantial, with the GeForce Game Ready 364.72 WHQL drivers adding support for the Oculus Rift and HTC Vive, plus improvements for Dark Souls III, Killer Instinct, Paragon early access, and even Quantum Break.
"For the optimum experience when using the Oculus Rift, and when playing the thirty games launching alongside the headset, upgrade to today's VR-optimized Game Ready driver. Whether you're playing Chronos, Elite Dangerous, EVE: Valkyrie, or any of the other VR titles, you'll want our latest driver to minimize latency, improve performance, and add support for our newest VRWorks features that further enhance your experience.
Today's Game Ready driver also supports the HTC Vive Virtual Reality headset, which launches next week. As with the Oculus Rift, our new driver optimizes and improves the experience, and adds support for the latest Virtual Reality-enhancing technology."
Good to see both GPU vendors giving us new drivers for the release of the Oculus Rift...let's hope it pans out well and the response from the first buyers is positive!
Subject: Graphics Cards | March 24, 2016 - 02:04 PM | Jeremy Hellstrom
Tagged: Ubuntu 16.04, linux, vulkan, amd, nvidia
Last week AMD released its new GPU-PRO beta driver stack, and this Monday NVIDIA released the 364.12 beta driver, both of which support Vulkan, meaning Phoronix had a lot of work to do. Up for testing were the GTX 950, 960, 970, 980, and 980 Ti, as well as the R9 Fury, 290, and 285. Naturally, they used The Talos Principle for testing; their results compare not only the cards but also the performance delta between OpenGL and Vulkan, finishing with several OpenGL benchmarks to see whether the new drivers brought any performance improvements. The results look good for Vulkan, which beats OpenGL across the board, as you can see in the review.
"Thanks to AMD having released their new GPU-PRO "hybrid" Linux driver a few days ago, there is now Vulkan API support for Radeon GPU owners on Linux. This new AMD Linux driver holds much potential and the closed-source bits are now limited to user-space, among other benefits covered in dozens of Phoronix articles over recent months. With having this new driver in hand plus NVIDIA promoting their Vulkan support to the 364 Linux driver series, it's a great time for some benchmarking. Here are OpenGL and Vulkan atop Ubuntu 16.04 Linux for both AMD Radeon and NVIDIA GeForce graphics cards."
Here are some more Graphics Card articles from around the web:
- XFX R9 390 Double Dissipation Black Edition @ [H]ard|OCP
- Far Cry Primal Graphics Card Performance Analysis @ eTeknix
- Inno3D GTX 980Ti iChill Black @ eTeknix
Subject: Graphics Cards | March 18, 2016 - 01:59 PM | Jeremy Hellstrom
Tagged: msi, GTX 980 Ti, MSI GTX 980 Ti GOLDEN Edition, nvidia, factory overclocked
Apart from the golden fan and HDMI port, MSI's 980 Ti GOLDEN Edition also comes with a moderate factory overclock: 1140MHz base, 1228MHz boost, and 7GHz memory, with an observed in-game frequency of 1329MHz. [H]ard|OCP managed to push those to a 1290MHz base, 1378MHz boost, and 7.8GHz memory, with the card hitting 1504MHz in game. That overclock produced noticeable gains in many games and pushed the card close to the performance of [H]'s overclocked MSI 980 Ti LIGHTNING. The LIGHTNING proved to be the better card in terms of performance, both graphically and thermally; however, it is also more expensive than the GOLDEN and does not have quite the same aesthetics, if that is important to you.
"Today we evaluate the MSI GTX 980 Ti GOLDEN Edition video card. This video card features a pure copper heatsink geared towards faster heat dissipation and better temps on air than other air cooled video cards. We will compare it to the MSI GTX 980 Ti LIGHTNING, placing the two video cards head to head in an overclocking shootout. "
Here are some more Graphics Card articles from around the web:
- Gigabyte GeForce GTX 980Ti Xtreme @ eTeknix
- ASUS GeForce GTX 980 Ti Matrix 6 GB @ techPowerUp
- 4 Weeks with NVIDIA TITAN X SLI at 4K Resolution @ [H]ard|OCP
- NVIDIA GeForce GT 710: Trying NVIDIA's Newest Sub-$50 GPU On Linux @ Phoronix
Shedding a little light on Monday's announcement
Most of our readers should have some familiarity with GameWorks, a series of libraries and utilities that help game developers (and others) create software. Many hardware and platform vendors provide samples and frameworks that take the brunt of the work required to solve complex problems; GameWorks is NVIDIA's branding for its suite of these technologies. Their hope is that it pushes the industry forward, which in turn drives GPU sales as users see the benefits of upgrading.
This release, GameWorks SDK 3.1, contains three complete features and two “beta” ones. We will start with the first three, each of which targets a portion of the lighting and shadowing problem. The last two, which we will discuss at the end, are the experimental ones and fall under the blanket of physics and visual effects.
The first technology is Volumetric Lighting, which simulates the way light scatters off dust in the atmosphere. Game developers have been approximating this effect for a long time. In fact, I remember a particular section of Resident Evil 4 where you walk down a dim hallway that has light rays spilling in from the windows. Gamecube-era graphics could only do so much, though, and certain camera positions show that the effect was just a translucent, one-sided, decorative plane. It was a cheat that was hand-placed by a clever artist.
GameWorks' Volumetric Lighting goes after the same effect, but with a much different implementation. It looks at the generated shadow maps and, using hardware tessellation, extrudes geometry from the unshadowed portions toward the light. These little bits of geometry accumulate additively, with deeper volumes contributing more, which translates into the required highlight. Also, since it's hardware tessellated, it likely has a smaller impact on performance, because the GPU only needs to store enough information to generate the geometry, not store (and update) the geometry data for every possible light shaft itself -- and it needs to store those shadow maps anyway.
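To make the accumulation concrete: what the summed, extruded geometry effectively evaluates is the amount of lit (unshadowed) air along each view ray. The toy ray-march below computes that same quantity directly; it is not how GameWorks implements the effect (GameWorks uses tessellated geometry, as described above), just a minimal illustration with invented names:

```python
import numpy as np

def scattered_light(origin, direction, length, is_lit, steps=64, density=0.02):
    """Accumulate in-scattering along a view ray in `steps` slices.

    `is_lit(p)` stands in for the shadow-map test: True where the light
    reaches point p. Each lit slice adds to the visible light shaft.
    """
    dt = length / steps
    ts = (np.arange(steps) + 0.5) * dt           # slice midpoints along the ray
    points = origin + ts[:, None] * direction    # sample positions
    lit = sum(1 for p in points if is_lit(p))    # count only unshadowed slices
    return lit * density * dt

# Toy usage: a "window" lets light into the slab 2 < x < 3 only, so only the
# portion of the ray crossing that slab contributes to the visible shaft.
shaft = scattered_light(origin=np.zeros(3),
                        direction=np.array([1.0, 0.0, 0.0]),
                        length=10.0,
                        is_lit=lambda p: 2.0 < p[0] < 3.0)
print(shaft)  # lit path length, scaled by the scattering density
```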
Even though it seemed like this effect is independent of the rendering method, since it basically just adds geometry to the scene, I asked whether it was locked to deferred rendering. NVIDIA said that it should be unrelated, as I suspected, which is good for VR: forward rendering is easier to anti-alias, which makes the uneven pixel distribution (after lens distortion) appear smoother.