Intel Releases 15.60.0.4849 Graphics Drivers

Subject: Graphics Cards | November 8, 2017 - 09:29 PM |
Tagged: Intel, graphics drivers

When we report on graphics drivers, it’s almost always for AMD or NVIDIA. This time it’s Intel’s turn, however, with their latest 15.60 release. This version supports HDR playback on Netflix and YouTube, and it adds Windows Mixed Reality support for Intel HD 620 and higher.

inteltf2.jpg

I should note that this driver only supports Skylake-, Kaby Lake-, and Coffee Lake-based parts. I’m not sure whether this means that Haswell-and-earlier parts have been deprecated, but the latest drivers that support those chips appear to be from May.

As for game-specific optimizations, Intel has some to speak of. This driver focuses on The LEGO Ninjago Movie Video Game, Middle-earth: Shadow of War, Pro Evolution Soccer 2018, Call of Duty: WWII, Destiny 2, and Divinity: Original Sin 2. All of these titles are name-dropped alongside Iris Pro, so I'm not sure how far down the product stack any given title remains playable. Thankfully, many game distribution sites allow refunds for this very reason, although you still want to do a little research ahead of time.

That's all beside the point, though: Intel's advertising game-specific optimizations.

If you have a new Intel GPU, pick up the new drivers from Intel's website.

Source: Intel
Author:
Manufacturer: Intel

The Expected Unexpected

Last night we received word that Raja had resigned from AMD (during a sabbatical he began after the company launched Vega).  The initial statement was that Raja would return to his position at AMD in a December/January timeframe.  During this time there was some doubt as to whether Raja would in fact come back to AMD, as “sabbaticals” in the tech world often lead the individual to take stock of their situation and move on to what they consider to be greener pastures.

raja_ryan.JPG

Raja has dropped by the PCPer offices in the past.

Initially it was thought that Raja would take the time off and then eventually jump to another company and tackle the issues there.  This behavior is quite common in Silicon Valley, and Raja is no stranger to it.  Raja cut his teeth on 3D graphics at S3, but in 2001 he moved to ATI.  While there he worked on a variety of programs, including the original Radeon, the industry-changing Radeon 9700 series, and finishing up with the strong HD 4000 series of parts.  During this time ATI was acquired by AMD, and he became one of the top graphics gurus at that company.  In 2009 he left AMD and moved on to Apple, where he was Director of Graphics Architecture, though little is known about what he actually did there.  During that time Apple utilized AMD GPUs and licensed Imagination Technologies graphics technology.  Apple could have been working on developing its own architecture at this point, which has recently shown up in the latest iPhone products.

In 2013 Raja rejoined AMD as a corporate VP of Visual Computing, and in 2015 he was promoted to lead the Radeon Technologies Group after Lisa Su became CEO of the company. While there, Raja worked to get AMD back on an even footing under pretty strained conditions. AMD had not had the greatest of years and had seen its primary moneymakers start taking on water.  AMD had competitive graphics for the most part, and the Radeon technology integrated into AMD’s APUs truly was class leading.  On the discrete side, AMD was able to compare favorably to NVIDIA with the HD 7000 and later R9 200 series of cards, but after NVIDIA released its Maxwell-based chips, AMD had a hard time keeping up.  The general consensus is that the RTG group saw its headcount reduced by company-wide cuts, as well as a decrease in R&D funds.

Continue reading about Raja Koduri joining Intel...

More GTX 1070 Ti overclocking

Subject: Graphics Cards | November 7, 2017 - 03:21 PM |
Tagged: pascal, nvidia, gtx 1070 ti, geforce, msi

NVIDIA chose to limit the release of their GTX 1070 Ti to reference clock speeds, with all cards sporting the same clocks regardless of the model.  That does not mean that the manufacturers skimped on the features which help you overclock successfully.  As a perfect example, the MSI GTX 1070 Ti GAMING TITANIUM was built with Hi-C caps, Super Ferrite Chokes, Japanese solid caps, and a 10-phase PWM.  This resulted in an impressive overclock of 2050 MHz on the GPU and a memory frequency of 9 GHz once [H]ard|OCP boosted the power delivered to the card.  That boost is enough to meet or even exceed the performance of a stock GTX 1080 or Vega 64 in most of the games they tested.

1509608832rtdxf9e1ls_1_16_l.jpg

"NVIDIA is launching the GeForce GTX 1070 Ti today, and we’ve got a custom retail MSI GeForce GTX 1070 Ti GAMING TITANIUM video card to test and overclock, yes overclock, to the max. We’ll make comparisons against GTX 1080/1070, AMD Radeon RX Vega 64 and 56 for a complete review."


Source: [H]ard|OCP

Light And Dark-Side Collector’s Edition NVIDIA TITAN Xp for Pre-Order November 8th

Subject: Graphics Cards | November 7, 2017 - 02:04 PM |
Tagged: Star Wars, nvidia, titan xp, disney

For $1200, you can choose to power your gaming rig with either the light side of the Force or the dark side.  NVIDIA has announced two new Titan Xp GPUs: one battle-scarred and lightsaber-green, representing the Rebel Alliance, and a pristine black card which glows a familiar red.  It would seem that NVIDIA is a bit behind the times, as neither of those organizations exists in the current Star Wars timeline, but that doesn't make the cards any less attractive to fans.

unnamed.jpg

The specifications are familiar: a Pascal-based GP102 GPU with 3840 CUDA cores at 1.6 GHz, and 12GB of GDDR5X memory running at 11.4 Gbps.  The look, however, is unique, so if you are a big fan of Star Wars then this might just be something you want to consider.  The full PR and launch video are just below.

side.PNG

Tatooine, Outer Rim Territory—NVIDIA has announced two new collector’s edition NVIDIA TITAN Xp GPUs created for the ultimate Star Wars fan. The new Jedi Order™ and Galactic Empire™ editions of the NVIDIA TITAN Xp have been crafted to reflect the look and feel of the Star Wars galaxy.

These new Star Wars collector’s edition GPUs pay homage to the light side/dark side dichotomy, and contain hints of the Star Wars galaxy, such as the hilt of Luke Skywalker's lightsaber and light panels reminiscent of the Death Star.

The Jedi Order GPU simulates the wear and tear and battle-worn finish of many items used by the Rebel Alliance, resulting from its diecast aluminum cover being subjected to an extensive, corrosive salt spray.

Conversely, the Galactic Empire GPU’s finish features simple, clean lines, emulating the high-end, orderly nature of the resource-rich Empire.

Both versions have multiple windowed areas to showcase internals and lighting, evoking each faction’s lightsabers, green and red, respectively. The finishes of both versions took over a year to perfect.

The retail box packaging also pays homage to the light and dark sides of the Force, with the Jedi Order edition bathed in white, and the Galactic Empire edition bathed in black.

Exclusive Pre-Order Access for GeForce Experience Users
GeForce Experience users get exclusive pre-order access to purchase(1) the Jedi Order and Galactic Empire TITAN Xp editions before the cards are broadly available in mid-November. Starting tomorrow, GeForce Experience users can purchase one card of each design by using their log-in credentials in the NVIDIA store.

Power! Unlimited Power!
The Jedi Order and Galactic Empire TITAN Xp GPUs use the NVIDIA Pascal-based GP102 GPU, each with 3,840 CUDA cores running at 1.6GHz and 12GB of GDDR5X memory running at 11.4Gbps.

Their staggering 12TFLOPs of processing power under the hood allows Star Wars fans to play any of today’s most cutting-edge titles at the highest resolution with the highest detail quality turned on.
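The quoted figures are easy to sanity-check: peak single-precision throughput follows from cores × clock × 2 FLOPs per cycle (one fused multiply-add), and bandwidth from bus width × per-pin data rate. A rough sketch, assuming the TITAN Xp's 384-bit memory bus (a known spec, but not stated in the press release):

```python
def peak_tflops(cuda_cores, clock_ghz):
    """Peak FP32 throughput: each CUDA core retires one FMA (2 FLOPs) per cycle."""
    return cuda_cores * clock_ghz * 2 / 1000.0

def bandwidth_gbs(bus_width_bits, data_rate_gbps):
    """Memory bandwidth: bus width in bytes times the per-pin data rate."""
    return bus_width_bits / 8 * data_rate_gbps

# TITAN Xp: 3,840 cores at ~1.6 GHz, 384-bit bus at 11.4 Gbps
print(round(peak_tflops(3840, 1.6), 1))    # ~12.3, matching the "12 TFLOPs" quoted
print(round(bandwidth_gbs(384, 11.4), 1))  # 547.2 GB/s
```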

Priced at $1,200, each edition also includes a collectible electroformed metal badge containing the insignia of their preferred alliance.

Source: NVIDIA

You were to bring balance to the price, not leave it in darkness!

Subject: Graphics Cards | November 2, 2017 - 03:03 PM |
Tagged: pascal, nvidia, gtx 1070 ti, geforce

It should come as no surprise to anyone how the GTX 1070 Ti performs: better than a GTX 1070 but not quite as fast as a GTX 1080 ... unless you overclock.  With the push of two buttons Ryan was able to hit 1987 MHz, which surpasses your average GTX 1080 by a fair margin.  Hardware Canucks saw 2088 MHz when they overclocked, along with a memory speed of 8.9 Gbps, which pushed performance past the reference GTX 1080 in many games. Their benchmark suite encompasses a few different games, so you should check to see if your favourites are there.

The real hope for this launch was that prices would change: not so much the actual prices you pay, but the MSRPs of cards from both AMD and NVIDIA.  For now that has not happened, but perhaps soon it will, though Bitcoin hitting $7000 does not help.

GTX1070TI-5.jpg

"NVIDIA’s launch of their new GTX 1070 Ti is both senseless and completely sensible depending on which way you tend to look at things. The emotional among you are going to wonder why NVIDIA is even bothering to introduce a new product into a lineup that’s more than a year old."


Author:
Manufacturer: NVIDIA

Here comes a new challenger

The release of the GeForce GTX 1070 Ti has been an odd adventure. Launched into a narrow window of the product stack between the GTX 1070 and the GTX 1080, the GTX 1070 Ti is a result of competition from the AMD RX Vega product line. Sure, NVIDIA might have specced out and prepared an in-between product for some time, but it was the release of competitive high-end graphics cards from AMD (for the first time in forever, it seems) that pushed NVIDIA to launch what you see before you today.

With MSRPs of $399 and $499 for the GTX 1070 and GTX 1080 respectively, a new product that fits between them performance-wise has very little room to stretch its legs. Because of that, there are some interesting peculiarities involved with the release cycle surrounding overclocks, partner cards, and more.

IMG_4944.JPG

But before we get into that concoction, let’s first look at the specifications of this new GPU option from NVIDIA as well as the reference Founders Edition and EVGA SC Black Edition cards that made it to our offices!

GeForce GTX 1070 Ti Specifications

We start with our classic table of details.

                  RX Vega 64 Liquid  RX Vega 64 Air  RX Vega 56     Vega Frontier Edition  GTX 1080 Ti  GTX 1080     GTX 1070 Ti  GTX 1070
GPU Cores         4096               4096            3584           4096                   3584         2560         2432         1920
Base Clock        1406 MHz           1247 MHz        1156 MHz       1382 MHz               1480 MHz     1607 MHz     1607 MHz     1506 MHz
Boost Clock       1677 MHz           1546 MHz        1471 MHz       1600 MHz               1582 MHz     1733 MHz     1683 MHz     1683 MHz
Texture Units     256                256             256            256                    224          160          152          120
ROP Units         64                 64              64             64                     88           64           64           64
Memory            8GB                8GB             8GB            16GB                   11GB         8GB          8GB          8GB
Memory Clock      1890 MHz           1890 MHz        1600 MHz       1890 MHz               11000 MHz    10000 MHz    8000 MHz     8000 MHz
Memory Interface  2048-bit HBM2      2048-bit HBM2   2048-bit HBM2  2048-bit HBM2          352-bit G5X  256-bit G5X  256-bit      256-bit
Memory Bandwidth  484 GB/s           484 GB/s        410 GB/s       484 GB/s               484 GB/s     320 GB/s     256 GB/s     256 GB/s
TDP               345 watts          295 watts       210 watts      300 watts              250 watts    180 watts    180 watts    150 watts
Peak Compute      13.7 TFLOPS        12.6 TFLOPS     10.5 TFLOPS    13.1 TFLOPS            11.3 TFLOPS  8.2 TFLOPS   7.8 TFLOPS   5.7 TFLOPS
MSRP (current)    $699               $499            $399           $999                   $699         $499         $449         $399

If you have followed the leaks and stories over the last month or so, the information here isn’t going to be a surprise. The CUDA core count of the GTX 1070 Ti is 2432, only one SM unit less than the GTX 1080. Base and boost clock speeds are the same as the GTX 1080. The memory system includes 8GB of GDDR5 running at 8 GHz, matching the performance of the GTX 1070 in this case. The TDP gets a bump up to 180 watts, in line with the GTX 1080 and slightly higher than the GTX 1070.
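The "one SM less" figure checks out with a bit of arithmetic, assuming consumer Pascal's 128 CUDA cores per SM (a GP104 detail not stated above); the table's Peak Compute entry then follows from the base clock:

```python
CORES_PER_SM = 128  # CUDA cores per SM on consumer Pascal (GP104); an assumed constant

gtx_1080_sms = 2560 // CORES_PER_SM  # 20 SMs on the full GP104
gtx_1070_ti_sms = gtx_1080_sms - 1   # one SM disabled for the 1070 Ti
cores = gtx_1070_ti_sms * CORES_PER_SM
print(cores)  # 2432, matching the GTX 1070 Ti's CUDA core count

# Peak FP32 at the 1607 MHz base clock: cores * clock * 2 FLOPs (one FMA) per cycle
print(round(cores * 1607e6 * 2 / 1e12, 1))  # ~7.8 TFLOPS
```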

Continue reading our review of the GeForce GTX 1070 Ti!

NVIDIA Partners with AWS for Volta V100 in the Cloud

Subject: Graphics Cards | October 31, 2017 - 09:58 PM |
Tagged: nvidia, amazon, google, pascal, Volta, gv100, tesla v100

Remember last month? Remember when I said that Google’s introduction of Tesla P100s would be good leverage over Amazon, as the latter was still back in the Kepler days (because Maxwell was 32-bit focused)?

Amazon has leapfrogged them by introducing Volta-based V100 GPUs.

nvidia-2017-voltatensor.jpg

To compare the two parts: the Tesla P100 has 3584 CUDA cores, yielding just under 10 TFLOPS of single-precision performance. The Tesla V100, with its ridiculous die size, pushes that up over 14 TFLOPS. As with Pascal, it also supports full 1:2:4 FP64:FP32:FP16 performance scaling. It also has access to NVIDIA’s tensor cores, which are specialized for 16-bit, 4x4 multiply-add matrix operations that are apparently common in neural networks, for both training and inferencing.
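Conceptually, one tensor-core operation is a fused D = A×B + C on 4x4 matrices, with FP16 inputs and FP32 accumulation. A NumPy sketch of the math only (the real hardware path is exposed through CUDA's WMMA intrinsics, not shown here):

```python
import numpy as np

rng = np.random.default_rng(42)
# FP16 inputs, FP32 accumulator: the mixed-precision shape of a tensor-core op
A = rng.standard_normal((4, 4)).astype(np.float16)
B = rng.standard_normal((4, 4)).astype(np.float16)
C = rng.standard_normal((4, 4)).astype(np.float32)

# Multiply in the wider precision, then accumulate: D = A @ B + C
D = A.astype(np.float32) @ B.astype(np.float32) + C
print(D.shape)  # (4, 4)
```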

Amazon allows up to eight of them at once (with their P3.16xlarge instances).

So that’s cool. While Google has again been quickly leapfrogged by Amazon, it’s good to see NVIDIA getting wins in multiple cloud providers. This keeps money rolling in that will fund new chip designs for all the other segments.

Source: Amazon

The GTX 1070 Ti: NVIDIA's Response to RX Vega

Subject: Graphics Cards | October 26, 2017 - 09:00 AM |
Tagged: nvidia, GTX 1070Ti, gtx 1070 ti, graphics card, gpu, evga

NVIDIA today announced the launch of the GTX 1070 Ti. The card, which has been the subject of leaks and rumors for several weeks, is NVIDIA’s first major response to AMD’s RX Vega line, designed to go head-to-head with the RX Vega 56, and give Vega 64 a run for its money in terms of price-to-performance in many games.

gtx-1070-ti.jpg

Compared to the GTX 1070, the 1070 Ti increases the GPU core count from 1920 to 2432 (128 shy of the GTX 1080) and raises the base clock frequency to the GTX 1080’s 1607 MHz. The 1070 Ti’s stock boost clock remains the same as the 1070’s at 1683 MHz, although NVIDIA’s Pascal-based cards have been shown to easily exceed this rated maximum clock speed. Other changes between the 1070 and 1070 Ti include an increase in texture units from 120 to 152 and a jump in TDP from 150 to 180 watts.

                  RX Vega 64 Liquid  RX Vega 64 Air  RX Vega 56     Vega Frontier Edition  GTX 1080 Ti  GTX 1080     GTX 1070 Ti  GTX 1070
GPU Cores         4096               4096            3584           4096                   3584         2560         2432         1920
Base Clock        1406 MHz           1247 MHz        1156 MHz       1382 MHz               1480 MHz     1607 MHz     1607 MHz     1506 MHz
Boost Clock       1677 MHz           1546 MHz        1471 MHz       1600 MHz               1582 MHz     1733 MHz     1683 MHz     1683 MHz
Texture Units     256                256             256            256                    224          160          152          120
ROP Units         64                 64              64             64                     88           64           64           64
Memory            8GB                8GB             8GB            16GB                   11GB         8GB          8GB          8GB
Memory Clock      1890 MHz           1890 MHz        1600 MHz       1890 MHz               11000 MHz    10000 MHz    8000 MHz     8000 MHz
Memory Interface  2048-bit HBM2      2048-bit HBM2   2048-bit HBM2  2048-bit HBM2          352-bit G5X  256-bit G5X  256-bit      256-bit
Memory Bandwidth  484 GB/s           484 GB/s        410 GB/s       484 GB/s               484 GB/s     320 GB/s     256 GB/s     256 GB/s
TDP               345 watts          295 watts       210 watts      300 watts              250 watts    180 watts    180 watts    150 watts
Peak Compute      13.7 TFLOPS        12.6 TFLOPS     10.5 TFLOPS    13.1 TFLOPS            11.3 TFLOPS  8.2 TFLOPS   7.8 TFLOPS   5.7 TFLOPS
MSRP (current)    $699               $499            $399           $999                   $699         $499         $449         $349

The GTX 1070 Ti Founders Edition is launching at $449, which puts it $100 above the current MSRP of the 1070 and $50 higher than the RX Vega 56. The GTX 1080 and 1070 first launched at $599 and $379 but saw a price drop in late February to $499 and $349, respectively.

EVGA’s GTX 1070 Ti Launch Lineup

The GTX 1070 Ti launch will of course include dozens of options from NVIDIA’s partners, but we have some specifics to share from EVGA. The board partner is launching four 1070 Ti models:

EVGA GeForce GTX 1070 Ti GAMING
EVGA GeForce GTX 1070 Ti SC GAMING Black Edition
EVGA GeForce GTX 1070 Ti FTW2
EVGA GeForce GTX 1070 Ti GAMING HYBRID

Following the pattern of EVGA’s other Pascal-based releases, the 1070 Ti GAMING features a basic blower-style cooler, the SC model features ACX 3.0 cooling, and the FTW2 version includes EVGA’s ICX cooling system. The HYBRID model utilizes a self-contained, all-in-one 120mm water cooler.

evga-gtx-1070-ti.jpg

Pricing is not yet known for every model, but we’ve learned that the base GAMING edition will start at $469 and the FTW2 will carry a $489 MSRP. For comparison, the FTW2 version of the GTX 1070 is currently priced at $480 (expect prices to change once 1070 Ti stock hits the market) while the GTX 1080 FTW2 is $600.

GTX 1070 Ti Availability

NVIDIA is doing things a bit differently for the 1070 Ti launch. Although today (October 26th) marks the official “launch date,” actual product availability and performance benchmarks won’t land until next Thursday, November 2.

Aside from the advertised specifications, we therefore have nothing more to share at this time in terms of benchmarking or performance analysis, but rest assured that we’ll have our complete coverage ready to go as soon as we get our hands on these new cards.

Source:

Zotac Shrinks GTX 1080 Ti Into Water-Cooled Small Form Factor ArcticStorm Mini

Subject: Graphics Cards | October 25, 2017 - 03:34 PM |
Tagged: zotac, gtx 1080 ti, SFF, water cooler

Zotac finally made its water-cooled GTX 1080 Ti ArcticStorm Mini official last week. First teased at Computex, the ArcticStorm Mini is a dual-slot card with a metal backplate and a full-cover water block that has been significantly shortened so that it can fit into many more cases, including Micro ATX and some Mini ITX form factors. Specifically, the ArcticStorm Mini measures 212mm (8.35”) x 164mm (6.46”) and uses a custom shortened PCB that appears to be the same platform as the dual-fan, air-cooled model.

Zotac GTX 1080 Ti ArcticStorm Mini.jpg

The star of the ArcticStorm Mini is the full-cover water block with a nickel-plated copper base and a tinted acrylic top cover. According to Zotac, the water block uses 0.3mm micro channels above the GPU to improve cooling performance by moving as much heat from the GPU into the water loop as possible. There are ports for vertical or horizontal barb orientation, though I would have loved to see a card that routed the water cooling inlet and outlet to the rear of the card rather than the side, especially since this is aimed at small form factor builds. The water block can accommodate standard G1/4” fittings, and Zotac includes two barbs that support 10mm ID (inner diameter) tubing in the box. A metal backplate helps prevent warping of the PCB from the water cooling hardware, which can be rather hefty.

While there is no RGB lighting on this card, Zotac did go with an always-on white LED which, along with the gray and silver colors of the card itself, is supposed to be color neutral and allow it to fit into more builds (as opposed to Zotac’s usual yellow and black colors). Around the front are five display outputs: DVI-D, HDMI 2.0b, and three DisplayPort 1.4 connections.

Out of the box, the GTX 1080 Ti ArcticStorm Mini comes with a modest factory overclock that pushes the GP102’s 3,584 CUDA cores to 1506 MHz base and 1620 MHz boost. The 11GB of GDDR5X remains clocked at the stock 11 Gbps, however. (For comparison, reference clocks are 1480 MHz base and 1582 MHz boost.) The graphics card is powered by two 8-pin PCI-E power connectors, and enthusiasts should be able to push it quite a bit further than the out-of-the-box clocks simply by increasing the power target, as we saw in our review of the 1080 Ti. Barring any silicon lottery duds, this card should be able to hit higher and more stable clocks than our review card thanks to the liquid cooler.
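"Modest" is the right word: against the reference clocks, the factory bump works out to under 3 percent on both base and boost.

```python
ref_base, ref_boost = 1480, 1582  # reference GTX 1080 Ti clocks (MHz)
oc_base, oc_boost = 1506, 1620    # ArcticStorm Mini factory clocks (MHz)

base_gain = 100 * (oc_base - ref_base) / ref_base
boost_gain = 100 * (oc_boost - ref_boost) / ref_boost
print(round(base_gain, 1), round(boost_gain, 1))  # 1.8 2.4 (percent)
```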

As is usual with these things, Zotac did not reveal exact pricing or availability, but with the full-sized GTX 1080 Ti ArcticStorm already selling for $809 on Amazon and $820 over at Newegg, I would expect the little SFF brother to sell at a bit of a premium beyond that, say $840 at launch, with the price coming down a bit in sales later.

It would have been nice to see this as a single-slot card (giving up DVI would be worth it), but you can’t have everything (heh). I am looking forward to seeing the systems that modders and enthusiasts are able to cram this card (or two) into!

Source: Zotac

Forza Motorsport 7 Performance

The first full Forza Motorsport title available for the PC, Forza Motorsport 7 launched on Windows 10 simultaneously with the Xbox version earlier this month. With native 4K assets, HDR support, and new visual features like fully dynamic weather, this title is an excellent showcase of what modern PC hardware can do.

forza7-screen.png

Now that both AMD and NVIDIA have released drivers optimized for Forza 7, we've taken the opportunity to measure performance across an array of different GPUs. After some significant performance mishaps with last year's Forza Horizon 3 at its PC launch, we were eager to see whether Forza Motorsport 7 brings some much-needed improvements.

For this testing, we used our standard GPU testbed, including an 8-core Haswell-E processor and plenty of memory and storage.

              PC Perspective GPU Testbed
Processor     Intel Core i7-5960X Haswell-E
Motherboard   ASUS Rampage V Extreme X99
Memory        G.Skill Ripjaws 16GB DDR4-3200
Storage       OCZ Agility 4 256GB (OS)
              Adata SP610 500GB (games)
Power Supply  Corsair AX1500i 1500 watt
OS            Windows 10 x64
Drivers       AMD: 17.10.1 (Beta)
              NVIDIA: 387.92

As with a lot of modern console-first titles, Forza 7 defaults to "Dynamic" image quality settings. This means that the game engine is supposed to find the best image settings for your hardware automatically, and dynamically adjust them so that you hit a target frame rate (adjustable between 30 and 60fps) no matter what is going on in the current scene that is being rendered.

While this is a good strategy for consoles, and even for casual PC gamers, it poses a problem when trying to measure equivalent performance across GPUs. Luckily, Forza Motorsport 7's developer, Turn 10 Studios, still lets you disable the dynamic control and configure the image quality settings as you desire.
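A dynamic-quality system like this amounts to a simple feedback loop. This toy controller (my own illustration, not Turn 10's actual logic) nudges a render scale down when a frame misses the budget and back up when there is headroom:

```python
def adjust_scale(scale, frame_ms, target_fps=60, step=0.05):
    """Shrink the render scale when a frame misses the budget,
    grow it back when there is comfortable headroom."""
    target_ms = 1000.0 / target_fps
    if frame_ms > target_ms:        # missed the frame budget
        return max(0.5, scale - step)
    if frame_ms < 0.8 * target_ms:  # plenty of headroom
        return min(1.0, scale + step)
    return scale                    # close to target: hold steady

scale = 1.0
for ms in [20.0, 18.5, 15.0, 12.0, 12.5]:  # simulated frame times
    scale = adjust_scale(scale, ms)
print(round(scale, 2))
```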

One quirk, however, is that in order for V-Sync to be disabled, the rendering resolution within the game must match the native resolution of your monitor. This means that if you want to run at 2560x1440 on your 4K monitor, you must first set the desktop resolution in Windows to 2560x1440 in order to run the game with V-Sync off.

forza7-settings.png

We did our testing with an array of three different resolutions (1080p, 1440p, and 4K) at maximum image quality settings. We tested both AMD and NVIDIA graphics cards in similar price and performance segments. The built-in benchmark mode for this game was used, which does feature some variance due to dynamic weather patterns. However, our testing within the full game matched the results of the benchmark mode closely, so we used it for our final results.

forza7-avgfps.png

Right off the bat, I was impressed by how well optimized Forza Motorsport 7 appears to be on the PC. Compared to the unoptimized disaster that was Forza Horizon 3 when it launched on PC last year, it's clear that Turn 10 Studios and Microsoft have come a long way.

Even gamers looking to play on a 4K display at 60Hz can seemingly get away with cheaper, more mainstream GPUs such as the RX 580 or the GTX 1060, with acceptable performance in most scenarios.

Gamers on high-refresh-rate displays don't have the same luxury. If you want to game at a resolution such as 2560x1440 at a full 144Hz, neither the RX Vega 64 nor the GTX 1080 will get there with maximum image quality settings, although these GPUs appear to be close enough that you could turn down a few settings to achieve your full refresh rate.

For some reason, the RX Vega cards didn't seem to show any scaling in performance when moving from 2560x1440 to 1920x1080, unlike the Polaris-based RX 580 and the NVIDIA options. We aren't quite sure of the cause of this and have reached out to AMD for clarification.

As far as frame times are concerned, we also gathered some data with our Frame Rating capture analysis system.

Forza7_2560x1440_PLOT.png

Forza7_2560x1440_STUT.png

Taking a look at the first chart, we can see that while the GTX 1080 frame times are extremely consistent, the RX Vega 64 shows some additional variance.

However, the frame time variance chart shows that over 95% of the frame times of the RX Vega 64 come in at under 2ms of variance, which will still provide a smooth gameplay experience in most scenarios. This matches with our experience while playing on both AMD and NVIDIA hardware where we saw no major issues with gameplay smoothness.
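That metric is straightforward to reproduce from raw frame times: take the successive frame-to-frame deltas and count the share at or under 2 ms. A sketch, assuming a plain list of frame times in milliseconds (the sample data below is illustrative, not our captured Vega 64 run):

```python
def pct_within_variance(frame_times_ms, threshold_ms=2.0):
    """Percent of frame-to-frame deltas at or under the threshold."""
    deltas = [abs(b - a) for a, b in zip(frame_times_ms, frame_times_ms[1:])]
    within = sum(1 for d in deltas if d <= threshold_ms)
    return 100.0 * within / len(deltas)

# A mostly smooth ~60 fps run with one hitch
times = [16.7, 16.9, 16.5, 17.0, 16.6, 21.0, 16.8]
print(round(pct_within_variance(times), 1))
```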

forza7-screen2.png

Forza Motorsport 7 seems to be a great addition to the PC gaming world (if you don't mind using the Microsoft Store exclusively) and will run well on a wide array of hardware. Whether you have an NVIDIA or AMD GPU, you should be able to enjoy this fantastic racing simulator.