Subject: Graphics Cards | February 9, 2017 - 02:46 PM | Jeremy Hellstrom
Tagged: amd, nvidia
New graphics drivers are a boon to everyone who isn't a hardware reviewer, especially one who has just wrapped up benchmarking a new card the day a new driver is released. To see what changes AMD and NVIDIA have implemented in their last few releases, [H]ard|OCP tested a slew of recent drivers from both companies. The performance of AMD's past releases, up to and including the AMD Crimson ReLive Edition 17.1.1 Beta, can be found here. For NVIDIA users, recent drivers covering up to the 378.57 Beta Hotfix are right here. The tests show both companies generally increasing performance with each driver release, though the gains are small enough that you are unlikely to notice a difference.
"We take the AMD Radeon R9 Fury X and AMD Radeon RX 480 for a ride in 11 games using drivers from the time of each video card’s launch date, to the latest AMD Radeon Software Crimson ReLive Edition 17.1.1 Beta driver. We will see how performance in old and newer games has changed over the course of 2015-2017 with new drivers. "
Here are some more Graphics Card articles from around the web:
- Intel Celeron/Pentium/Core i3/i5/i7 - NVIDIA vs. AMD Linux Gaming Performance @ Phoronix
- PowerColor Radeon RX 470 Red Devil (4GB) @ Custom PC Review
- GeForce GTX 1080 @ Hardware Secrets
Subject: Editorial | February 9, 2017 - 10:50 AM | Alex Lustenberg
Tagged: podcast, Zen, Windows 10 Game Mode, webcam, ryzen, quadro, Optane, nvidia, mini-stx, humble bundle, gddr6, evga, ECS, atom, amd, 4k
PC Perspective Podcast #436 - 02/09/17
Join us for ECS Mini-STX, NVIDIA Quadro, AMD Zen Arch, Optane, GDDR6 and more!
The URL for the podcast is: http://pcper.com/podcast - Share with your friends!
- iTunes - Subscribe to the podcast directly through the iTunes Store (audio only)
- Google Play - Subscribe to our audio podcast directly through Google Play!
- RSS - Subscribe through your regular RSS reader (audio only)
- MP3 - Direct download link to the MP3 file
Hosts: Ryan Shrout, Allyn Malventano, Ken Addison, Josh Walrath, Jeremy Hellstrom
Program length: 1:32:21
Podcast topics of discussion:
Week in Review:
News items of interest:
1:14:00 Zen Price Points Leaked
Hardware/Software Picks of the Week
Allyn: Low cost OBD II scanner
Subject: Systems, Mobile | February 6, 2017 - 03:37 PM | Sebastian Peak
Tagged: xeon, Thinkpad, quadro, P71, P51s, P51, nvidia, notebook, mobile workstation, Lenovo, kaby lake, core i7
Lenovo has announced a trio of new ThinkPad mobile workstations, featuring updated Intel 7th-generation Core (Kaby Lake) processors and NVIDIA Quadro graphics, and among these is the thinnest and lightest ThinkPad mobile workstation to date in the P51s.
"Engineered to deliver breakthrough levels of performance, reliability and long battery life, the ThinkPad P51s features a new chassis, designed to meet customer demands for a powerful but portable machine. Developed with engineers and professional designers in mind, this mobile workstation features Intel’s 7th generation Core i7 processors and the latest NVIDIA Quadro dedicated workstation graphics, as well as a 4K UHD IPS display with optional IR camera."
Lenovo says that the ThinkPad P51s is more than a half pound lighter than the previous generation (P50s), stating that "the P51s is the lightest and thinnest mobile workstation ever developed by ThinkPad" at 14.4 x 9.95 x 0.79 inches, with a weight starting at 4.3 lbs.
Specs for the P51s include:
- Up to a 7th Generation Intel Core i7 Processor
- NVIDIA Quadro M520M Graphics
- Choice of standard or touchscreen FHD (1920 x 1080) IPS, or 4K UHD (3840 x 2160) IPS display
- Up to 32 GB DDR4 2133 RAM (2x SODIMM slots)
- Storage options including up to 1 TB (5400 rpm) HDD and 1 TB NVMe PCIe SSDs
- USB-C with Intel Thunderbolt 3
- 802.11ac and LTE-A wireless connectivity
Lenovo also announced the ThinkPad P51, which is slightly larger than the P51s, but brings the option of Intel Xeon E3-v6 processors (in addition to Kaby Lake Core i7 CPUs), Quadro M2200M graphics, faster 2400 MHz memory up to 64 GB (4x SODIMM slots), and up to a 4K IPS display with X-Rite Pantone color calibration.
Finally there is the new VR-ready P71 mobile workstation, which offers up to an NVIDIA Quadro P5000M GPU along with Oculus and HTC VR certification.
"Lenovo is also bringing virtual reality to life with the new ThinkPad P71. One of the most talked about technologies today, VR has the ability to bring a new visual perspective and immersive experience to our customers’ workflow. In our new P71, the NVIDIA Pascal-based Quadro GPUs offer a stunning level of performance never before seen in a mobile workstation, and it comes equipped with full Oculus and HTC certifications, along with NVIDIA’s VR-ready certification."
Pricing and availability is as follows:
- ThinkPad P51s, starting at $1049, March
- ThinkPad P51, starting at $1399, April
- ThinkPad P71, starting at $1849, April
NVIDIA P100 comes to Quadro
At the start of the SOLIDWORKS World conference this week, NVIDIA took the cover off of a handful of new Quadro cards targeting professional graphics workloads. Though the bulk of NVIDIA’s discussion covered lower cost options like the Quadro P4000, P2000, and below, the most interesting product sits at the high end, the Quadro GP100.
As you might guess from the name alone, the Quadro GP100 is based on the GP100 GPU, the same silicon used on the Tesla P100 announced back in April of 2016. At the time, the GP100 GPU was specifically billed as an HPC accelerator for servers. It had a unique form factor with a passive cooler that required additional chassis fans. Just a couple of months later, a PCIe version was released under the same Tesla P100 brand with the same specifications.
Today that GPU hardware gets a third iteration as the Quadro GP100. Let’s take a look at the Quadro GP100 specifications and how it compares to some recent Quadro offerings.
| | Quadro GP100 | Quadro P6000 | Quadro M6000 | Full GP100 |
| --- | --- | --- | --- | --- |
| FP32 CUDA Cores / SM | 64 | 64 | 64 | 64 |
| FP32 CUDA Cores / GPU | 3584 | 3840 | 3072 | 3840 |
| FP64 CUDA Cores / SM | 32 | 2 | 2 | 32 |
| FP64 CUDA Cores / GPU | 1792 | 120 | 96 | 1920 |
| Base Clock | 1303 MHz | 1417 MHz | 1026 MHz | TBD |
| GPU Boost Clock | 1442 MHz | 1530 MHz | 1152 MHz | TBD |
| FP32 TFLOPS (SP) | 10.3 | 12.0 | 7.0 | TBD |
| FP64 TFLOPS (DP) | 5.15 | 0.375 | 0.221 | TBD |
| Memory Interface | 1.4 Gbps HBM2 | GDDR5X | GDDR5 | HBM2 |
| Memory Bandwidth | 716 GB/s | 432 GB/s | 316.8 GB/s | ? |
| Memory Size | 16 GB | 24 GB | 12 GB | 16 GB |
| TDP | 235 W | 250 W | 250 W | TBD |
| Transistors | 15.3 billion | 12 billion | 8 billion | 15.3 billion |
| GPU Die Size | 610 mm² | 471 mm² | 601 mm² | 610 mm² |
There are some interesting stats here that may not be obvious at first glance. Most notable is that, despite the pricing and segmentation, the GP100 is not automatically the fastest Quadro card from NVIDIA; it depends on your workload. With 3584 CUDA cores boosting to around 1400 MHz, the single-precision (32-bit) rating for the GP100 is 10.3 TFLOPS, less than that of the recently released P6000. Based on GP102, the P6000 has 3840 CUDA cores running at around 1500 MHz, for a total of 12 TFLOPS.
GP100 (full) Block Diagram
Clearly the placement of the Quadro GP100 is based on its 64-bit, double-precision performance and its ability to run real-time simulations on more complex workloads than other Pascal-based Quadro cards can handle. The Quadro GP100 offers a 1/2 DP compute rate, totaling 5.2 TFLOPS. The P6000, on the other hand, is only capable of 0.375 TFLOPS at the standard, consumer-level 1/32 DP rate. The inclusion of ECC memory support on the GP100 is also something no other recent Quadro card offers.
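Both the single- and double-precision figures above fall out of the core counts, boost clocks, and DP rate in the table: each CUDA core retires two FP32 operations (a fused multiply-add) per cycle, and FP64 throughput is a fixed fraction of that. A quick sanity check of the arithmetic (note that marketing numbers round slightly differently):

```python
def peak_tflops(cuda_cores: int, boost_mhz: float, rate: float = 1.0) -> float:
    """Peak throughput in TFLOPS: cores x 2 FLOPs/cycle (FMA) x clock,
    scaled by the precision rate (1.0 for FP32, 1/2 for GP100 FP64,
    1/32 for GP102 FP64)."""
    return cuda_cores * 2 * boost_mhz * 1e6 * rate / 1e12

# Quadro GP100: 3584 cores, 1442 MHz boost
print(round(peak_tflops(3584, 1442), 2))          # FP32: ~10.34
print(round(peak_tflops(3584, 1442, 1 / 2), 2))   # FP64: ~5.17
# Quadro P6000: 3840 cores, 1530 MHz boost
print(round(peak_tflops(3840, 1530), 2))          # FP32: ~11.75
print(round(peak_tflops(3840, 1530, 1 / 32), 2))  # FP64: ~0.37
```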
Raw graphics performance and throughput remain open questions until someone does some testing, but it seems likely that the Quadro P6000 will still be the best solution there, by at least a slim margin. With a higher CUDA core count, higher clock speeds, and an equivalent architecture, the P6000 should handle games, graphics rendering, and design applications very well.
There are other important differences offered by the GP100. The memory system is built around a 16GB HBM2 implementation, which means more total memory bandwidth but lower capacity than the 24GB Quadro P6000. With 66% more memory bandwidth, the GP100 has an advantage in applications that are pixel-throughput bound, as long as the compute capability keeps up on the back end.
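The bandwidth figures can be reproduced from bus width and per-pin data rate. A minimal sketch, assuming GP100's published 4096-bit HBM2 interface and the P6000's 384-bit GDDR5X interface (neither bus width appears in the table above; the 9 Gbps GDDR5X pin rate is inferred from the 432 GB/s figure):

```python
def bandwidth_gbs(bus_width_bits: int, pin_rate_gbps: float) -> float:
    """Memory bandwidth in GB/s: bus width x per-pin data rate / 8 bits per byte."""
    return bus_width_bits * pin_rate_gbps / 8

gp100 = bandwidth_gbs(4096, 1.4)  # HBM2 at 1.4 Gbps/pin -> 716.8 GB/s
p6000 = bandwidth_gbs(384, 9.0)   # GDDR5X at 9 Gbps/pin -> 432.0 GB/s
print(round(gp100 / p6000 - 1, 2))  # ~0.66, i.e. the 66% advantage cited above
```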
Subject: Graphics Cards | February 6, 2017 - 11:43 AM | Sebastian Peak
Tagged: video card, silent, Passive, palit, nvidia, KalmX, GTX 1050 Ti, graphics card, gpu, geforce
Palit is offering a passively-cooled GTX 1050 Ti option with their new KalmX card, which features a large heatsink and (of course) zero fan noise.
"With passive cooler and the advanced powerful Pascal architecture, Palit GeForce GTX 1050 Ti KalmX - pursue the silent 0dB gaming environment. Palit GeForce GTX 1050 Ti gives you the gaming horsepower to take on today’s most demanding titles in full 1080p HD @ 60 FPS."
The specs are identical to a reference GTX 1050 Ti (4GB GDDR5 @ 7 Gb/s, Base 1290/Boost 1392 MHz, etc.), so expect the full performance of this GPU - with some moderate case airflow, no doubt.
We don't have specifics on pricing or availability just yet.
Subject: Processors | February 3, 2017 - 08:22 PM | Sebastian Peak
Tagged: titan x, ryzen, report, processor, nvidia, leak, cpu, benchmark, ashes of the singularity, amd
AMD's upcoming 8-core Ryzen CPU has appeared online in an apparent leak showing performance from an Ashes of the Singularity benchmark run. The benchmark results, available here on imgur and reported by TechPowerUp (among others today) shows the result of a run featuring the unreleased CPU paired with an NVIDIA Titan X graphics card.
It is interesting to consider that this rather unusual system configuration was also used by AMD during their New Horizon fan event in December, with an NVIDIA Titan X and Ryzen 8-core processor powering the 4K game demos of Battlefield 1 that were pitted against an Intel Core i7-6900K/Titan X combo.
It is also interesting to note that the processor listed in the screenshot above is (apparently) not an engineering sample, as TechPowerUp points out in their post:
"Unlike some previous benchmark leaks of Ryzen processors, which carried the prefix ES (Engineering Sample), this one carried the ZD Prefix, and the last characters on its string name are the most interesting to us: F4 stands for the silicon revision, while the 40_36 stands for the processor's Turbo and stock speeds respectively (4.0 GHz and 3.6 GHz)."
March is fast approaching, and we won't have to wait long to see just how powerful this new processor will be for 4K gaming (and other, less important stuff). For now, I want to find results from an AotS benchmark with a Titan X and i7-6900K to see how these numbers compare!
Subject: Graphics Cards | February 3, 2017 - 05:58 PM | Scott Michaud
Tagged: nvidia, graphics drivers, vulkan
On February 1st, NVIDIA released a new developer beta driver, which fixes a couple of issues with their Vulkan API implementation. Unlike what some sites have been reporting, you should not download it to play games that use the Vulkan API, like DOOM. In short, it is designed for developers, not end-users. The goal is to provide correct results when software interacts with the driver, not the best gaming performance or anything like that.
In a little more detail, it looks like 376.80 implements the Vulkan 1.0.39.1 SDK. This update addresses two issues with accessing devices and extensions, under certain conditions, when using the 1.0.39 SDK. Vulkan 1.0.39 was released on January 23rd, and thus it will not even be a part of current video games. Even worse, like most graphics drivers for software developers, this one is based on the older GeForce 376 branch, so it won't even have NVIDIA's most recent fixes and optimizations. NVIDIA does this so they can add or change the features that Vulkan developers require without needing to roll in patches every time they make a "Game Ready" optimization or something. There is no reason to use this driver unless you are developing Vulkan applications and want to try out the new extensions. It will eventually make it to end users... when it's time.
If you are wishing to develop software using Vulkan’s bleeding-edge features, then check out NVIDIA’s developer portal to pick up the latest drivers. Basically everyone else should use 378.49 or its 378.57 hotfix.
Subject: General Tech | February 2, 2017 - 12:54 PM | Jeremy Hellstrom
Tagged: nvidia, GFE, game bundle, geforce experience 3.0
Giving away a bonus to your customers is a nice thing to do; watching them jump through hoops to get the bonus is less so. Many companies have realized that offering a mail-in rebate is a great way to look like you are doing something for your customers while ensuring a significant percentage of them never actually claim said MIR. Strangely, this practice has not impressed consumers.
NVIDIA started to embrace something similar towards the end of 2016 with GeForce Experience 3.0, requiring a mandatory login to get at beta drivers and, more pertinently, giveaways. That login requirement covers all the game bundles, such as the one announced yesterday. Ars Technica reported something interesting this morning that anyone thinking of picking up one of the game bundles should be aware of: NVIDIA now intends to tie these game codes to hardware. Currently you will need to sign into GFE, verify your code, and then allow GFE to verify that you have a GTX 1070 or 1080 installed in your system, which strongly suggests you will need to install the software. Ars speculates that this could one day be tied directly to a card via a hardware ID or serial number, making it impossible to give away.
The rationale offered references an incident with Microsoft a few months back, when "some Gears 4 Windows 10 keys were obtained illegitimately via our Nvidia promotion" thanks to a loophole created by Amazon's return policy. Of course, some of those so-called illegitimate installations came from someone giving away or even selling a game key they had obtained legally, because they had already purchased Gears 4. It is unclear whether NVIDIA only pays for codes which are redeemed or whether the money has already been spent; the former at least makes financial sense, while in the latter case Bird Culture has an appropriate phrase.
Once again, everyone must be punished thanks to the overreaction caused by a few smegheads. Keep this in mind when you are shopping in the future.
"GFE then performs "a hardware verification step to ensure the coupon code is redeemed on the system with the qualifying GPU." It's not yet clear whether the codes will be tied to a specific serial number/hardware identifier, or whether they will be tied to an overall product line like a GTX 1070 or GTX 1080."
Here is some more Tech News from around the web:
- AMD Ryzen CPUs to support Windows 7 with drivers @ Guru of 3D
- 2.5 Million Xbox and PlayStation Gamers' Details Have Been Leaked From Piracy Forums @ Slashdot
- Microsoft details new Edge features coming in Windows 10 Creators Edition @ The Inquirer
- Adobe blames Brexit as users rage over Creative Cloud price hikes @ The Inquirer
- Apple weans itself off Intel with 'more ARM chips' for future Macs @ The Register
- Supermicro sockets it to Skylake rivals @ The Register
Subject: Graphics Cards | February 2, 2017 - 07:01 AM | Scott Michaud
Tagged: nvidia, graphics drivers
If you were having issues with Minecraft on NVIDIA’s recent 378.49 drivers, then you probably want to try out their latest hotfix. This version, numbered 378.57, will not be pushed out through GeForce Experience, so you will need to grab it from NVIDIA’s customer support page.
Beyond Minecraft, this also fixes an issue with “debug mode”. For some Pascal-based graphics cards, the option in NVIDIA Control Panel > Help > Debug Mode might be on by default. This option reduces factory-overclocked GPUs down to NVIDIA’s reference speeds, which is useful for eliminating stability issues in testing, but pointlessly slow if you’re already stable. I mean, you bought the factory overclock, right? I’m guessing someone at NVIDIA used it to test 378.49 during its development, fixed an issue, and accidentally committed the config file with the rest of the fix. Either way, someone caught it, and it’s now fixed, even though you could simply untick the option if you have a factory-overclocked GPU.
Subject: General Tech | January 31, 2017 - 12:57 PM | Jeremy Hellstrom
Tagged: game, nvidia, GTX 1080, gtx 1070, For Honor, tom clancy, Ghost Recon Wildlands
Today NVIDIA is offering a new free Ubisoft game to those picking up a GTX 1070, GTX 1080, or a system containing one or more of those cards. You can choose either For Honor, an arena-style game pitting Knights, Samurai, and Vikings against each other in hand-to-hand combat, or Tom Clancy's Ghost Recon Wildlands, which lies somewhere between Arma and Just Cause. Neither game has been released yet; For Honor arrives February 14th, while Ghost Recon Wildlands doesn't launch until March 7th, but you can get an early look at the game.
NVIDIA has also made the process of collecting your game somewhat easier: as long as your GeForce and Ubisoft accounts are linked, you can simply enter the code and choose your free game. If you are one to avoid Uplay at all costs, you could always give your code away as a gift.
"We are also debuting a new easier way to redeem codes through GeForce Experience, it means customers no longer have to tolerate long sign up webpages but can simply enter their code within GeForce Experience itself and have their choice of game automatically added to their Uplay account."
Here is some more Tech News from around the web:
- Magnetic skyrmion 'brain' connections save energy @ Nanotechweb
- Flashy Intel sees the XPoint of solid state @ The Register
- Microsoft rumoured to be remixing Windows RT as Windows Cloud @ The Inquirer
- Android 7.1.2 beta release signals end of life for the Nexus 6 and Nexus 9 @ The Inquirer
- IPv6 for Server Admins and Client Developers @ Linux.com
- 'It's Tricky': Apple Misses the Deadline To Pay $13.9 Bn To Ireland in Illegal Tax Benefit @ Slashdot