Manufacturer: ASUS

Specifications and Card Breakdown

The flurry of retail-built cards based on NVIDIA's new Pascal GPUs has been hitting us hard at PC Perspective. So much so, in fact - coupled with new gaming notebooks, new monitors, new storage and a new church (you should listen to our podcast, really) - that output has slowed dramatically. How do you write reviews for all of these graphics cards when you don't even know where to start? My answer: blindly pick one and start typing away.


Just after launch day of the GeForce GTX 1060, ASUS sent over its GTX 1060 Turbo 6GB card. Despite the name, the Turbo line is the company's most basic, most stock take on the GTX 10-series. That isn't necessarily a drawback though - you get reference-level performance at the lowest available price, and you still get ASUS's promises of quality and warranty.

With a target MSRP of just $249, does the ASUS GTX 1060 Turbo make the cut for users looking for that perfect mainstream 1080p gaming graphics card? Let's find out.

Continue reading our review of the ASUS GeForce GTX 1060 Turbo 6GB!

AMD Gains Significant Market Share in Q2 2016

Subject: Graphics Cards | August 24, 2016 - 10:34 AM |
Tagged: nvidia, market share, jpr, jon peddie, amd

As reported by both Mercury Research and now by Jon Peddie Research, in a graphics add-in card market that dropped dramatically in Q2 2016 in terms of total units shipped, AMD has gained significant market share against NVIDIA.

GPU Supplier  Market share this QTR  Market share last QTR  Market share last year
AMD           29.9%                  22.8%                  18.0%
NVIDIA        70.0%                  77.2%                  81.9%
Total         100%                   100%                   100%

Source: Jon Peddie Research

Last year at this time, AMD was sitting at 18% market share in terms of units sold, an absolutely dismal result compared to NVIDIA's dominating 81.9%. Over the last couple of quarters we have seen AMD gain in this space, and keeping in mind that Q2 2016 does not include sales of AMD's new Polaris-based graphics cards like the Radeon RX 480, the jump to 29.9% is a big move for the company. As a result, NVIDIA falls back to 70% market share for the quarter, which is still a significant lead over AMD.

Numbers like that shouldn't be taken lightly - for AMD to gain 7 points of market share in a single quarter indicates a substantial shift in the market. This includes all add-in cards: budget, mainstream, enthusiast and even workstation-class products. One report I received says that NVIDIA card sales specifically dropped off in Q2, though the exact reason isn't known, and as a de facto result AMD gained sales share.


There are several other factors to watch with this data, however. First, graphics card sales dropped 20% from Q1 to Q2, well above the average seasonal Q1-to-Q2 decline, which JPR puts at 9.7%. Much of this sell-through decrease is likely due to consumers delaying purchases while they waited for NVIDIA's Pascal and AMD's Polaris releases.

The NVIDIA GeForce GTX 1080 launched on May 17th and the GTX 1070 on May 29th. The company has made very bold claims about sales of Pascal parts, so I am honestly very surprised that the overall market would drop the way it did in Q2 and that NVIDIA would lose as much ground to AMD as it has. Q3 2016 may be the defining time for both GPU vendors, however, as it will show the results of the work put into both new architectures and both new product lines. NVIDIA reported record profits recently, so it will be interesting to see how that matches up to unit sales.

EVGA's Water Cooled GTX 1080 FTW Hybrid Runs Cool and Quiet

Subject: Graphics Cards | August 23, 2016 - 04:18 PM |
Tagged: water cooling, pascal, hybrid cooler, GTX 1080, evga

EVGA recently launched a new graphics card that pairs the GTX 1080 GPU with the company's FTW PCB and a closed-loop (AIO) water cooler to deliver a heavily overclockable card that will set you back $730.

The GTX 1080 FTW Hybrid is interesting because the company has opted to use the same custom PCB design as its FTW cards rather than a reference board. This FTW board features improved power delivery with a 10+2 power phase, two 8-pin PCI-E power connectors, dual BIOS, and adjustable RGB LEDs. The shroud carries backlit EVGA logos and houses a reportedly quiet fan with a reverse-swept blade design (like their ACX air coolers), rather than a traditional blower-style fan, to air-cool the memory and VRMs. The graphics processor itself is cooled by the water loop.

EVGA GTX 1080 FTW Hybrid.jpg

The water block and pump sit on top of the GPU, with tubes running out to the 120mm radiator. Luckily, the fan on the radiator can be easily disconnected, allowing users to swap in their own fan if they wish. According to YouTuber JayzTwoCents, the Precision XOC software controls the speed of the fan on the card itself, but users cannot adjust the radiator fan speed in software. You can connect your own fan to your motherboard and control it that way, however.

Display outputs include one DVI-D, one HDMI, and three DisplayPort outputs (any four of the five can be used simultaneously).

Out of the box, this 215W TDP graphics card has a factory overclock of 1721 MHz base and 1860 MHz boost. Thanks to the water cooler, the GPU stays at a frosty 42°C under load. When switched to the slave BIOS (which has a higher power limit and a more aggressive fan curve), the card GPU Boosted to 2025 MHz and hit 51°C (he managed to keep that to 44°C by swapping his own EK-Vardar fan onto the radiator). Not bad, especially considering the Founders Edition hit 85°C on air in our testing! Unfortunately, EVGA did not touch the memory, leaving the 8GB of GDDR5X at its stock 10 GHz.

                 GTX 1080        GTX 1080 FTW Hybrid  GTX 1080 FTW Hybrid (Slave BIOS)
GPU              GP104           GP104                GP104
GPU Cores        2560            2560                 2560
Rated Clock      1607 MHz        1721 MHz             1721 MHz
Boost Clock      1733 MHz        1860 MHz             2025 MHz
Texture Units    160             160                  160
ROP Units        64              64                   64
Memory           8GB             8GB                  8GB
Memory Clock     10000 MHz       10000 MHz            10000 MHz
TDP              180 watts       215 watts            ? watts
Max Temperature  85°C            42°C                 51°C
MSRP (current)   $599 ($699 FE)  $730                 $730

The water cooler should help users hit even higher overclocks and/or maintain a consistent GPU Boost clock at much lower temperatures than on air. The GTX 1080 FTW Hybrid does come at a bit of a premium at $730 (versus $699 for the Founders Edition or ~$650+ for custom models), but if you have the room in your case for the radiator this might be a nice option! (Of course custom water cooling is more fun, but it's also more expensive, time-consuming, and addictive. hehe)

What do you think about these "hybrid" graphics cards?

Source: EVGA

Creatively testing GPUs with Google's Tilt Brush

Subject: Graphics Cards | August 23, 2016 - 01:43 PM |
Tagged: amd, nvidia, Tilt Brush, VR

[H]ard|OCP continues their foray into testing VR applications, this time moving away from games to try out Google's rather impressive Tilt Brush VR drawing application. If you have yet to see this software in action, it is rather incredible, although you do still require an artist's talent and practical skills to create true 3D masterpieces.

Artistic merit may not be [H]'s strong suit, but testing how well a GPU can power VR applications certainly lies within their bailiwick. Once again they tested five NVIDIA GPUs and a pair of AMD cards for dropped frames and reprojection caused by a drop in FPS.


"We are changing gears a bit with our VR Performance coverage and looking at an application that is not as GPU-intensive as those we have looked at in the recent past. Google's Tilt Brush is a virtual reality application that makes use of the HTC Vive head mounted display and its motion controllers to allow you to paint in 3D space."


Source: [H]ard|OCP
Manufacturer: PC Perspective

Why Two 4GB GPUs Isn't Necessarily 8GB

We're trying something new here at PC Perspective. Some topics are fairly difficult to explain cleanly without accompanying images. We also like to go fairly deep into specific topics, so we're hoping that we can provide educational cartoons that explain these issues.

This pilot episode is about load-balancing and memory management in multi-GPU configurations. There seems to be a lot of confusion around what was (and was not) possible with DirectX 11 and OpenGL, and even more confusion about what DirectX 12, Mantle, and Vulkan allow developers to do. It highlights three different load-balancing algorithms, and even briefly mentions what LucidLogix was attempting to accomplish almost ten years ago.
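
To make the distinction concrete: under an explicit API like Vulkan (DirectX 12 behaves similarly), each GPU is enumerated as a separate physical device with its own memory heaps, and nothing is pooled automatically - mirroring or splitting resources across cards is entirely the developer's job. Below is a minimal, hypothetical C++ sketch of that enumeration, assuming the standard Vulkan headers and loader are installed (error handling trimmed for brevity):

```cpp
// List every GPU and its device-local VRAM heaps. Two 4GB cards show up
// here as two separate devices with two separate 4GB heaps -- the API
// never presents them as one 8GB pool.
#include <cstdio>
#include <vector>
#include <vulkan/vulkan.h>

int main() {
    VkInstanceCreateInfo info{};
    info.sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO;
    VkInstance instance;
    if (vkCreateInstance(&info, nullptr, &instance) != VK_SUCCESS) return 1;

    uint32_t count = 0;
    vkEnumeratePhysicalDevices(instance, &count, nullptr);
    std::vector<VkPhysicalDevice> gpus(count);
    vkEnumeratePhysicalDevices(instance, &count, gpus.data());

    for (uint32_t i = 0; i < count; ++i) {
        VkPhysicalDeviceProperties props;
        vkGetPhysicalDeviceProperties(gpus[i], &props);
        VkPhysicalDeviceMemoryProperties mem;
        vkGetPhysicalDeviceMemoryProperties(gpus[i], &mem);
        for (uint32_t h = 0; h < mem.memoryHeapCount; ++h) {
            if (mem.memoryHeaps[h].flags & VK_MEMORY_HEAP_DEVICE_LOCAL_BIT)
                printf("%s: device-local heap of %llu MiB\n", props.deviceName,
                       (unsigned long long)(mem.memoryHeaps[h].size >> 20));
        }
    }
    vkDestroyInstance(instance, nullptr);
    return 0;
}
```

Under DirectX 11-style implicit multi-GPU (alternate-frame rendering), the driver instead mirrors every resource into each card's memory behind the scenes - which is exactly why two 4GB cards still behave like a single 4GB card.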

pcper-2016-animationlogo-multiGPU.png

If you like it, and want to see more, please share and support us on Patreon. We're putting this out not knowing if it's popular enough to be sustainable. The best way to see more of this is to share!

Open the expanded article to see the transcript, below.

AMD Announces TrueAudio Next

Subject: Graphics Cards | August 18, 2016 - 07:58 PM |
Tagged: amd, TrueAudio, trueaudio next

Using a GPU for audio makes a lot of sense. That said, the original TrueAudio was not really about that, and it never really took off. The API was only implemented in a handful of titles, and it required dedicated hardware that AMD has since removed from its latest architectures. It was not about using the extra horsepower of the GPU to simulate sound, although AMD did have ideas for “sound shaders” in the original TrueAudio.

amd-2016-true-audio-next.jpg

TrueAudio Next, on the other hand, is an SDK that is part of AMD's LiquidVR package. It is based around OpenCL; specifically, it uses AMD's open-source FireRays library to trace the paths that audio can take from source to receiver, including reflections. Treating sound as rays is a good approximation for high-frequency audio, and that range of frequencies is more useful for positional awareness in VR, anyway.

Basically, TrueAudio Next has very little to do with the original.

Interestingly, AMD is providing an interface for TrueAudio Next to reserve compute units, but optionally (and under NDA). This allows audio processing to be unhooked from the video frame rate, provided that the CPU can keep both fed with actual game data. Since audio typically runs on a secondary thread, it could be ready to send sound calls at any moment. Various existing portions of asynchronous compute could help with this, but allowing developers to wholly reserve a fraction of the GPU should remove the issue entirely. That said, when I was working on a similar project in WebCL, I looked to the integrated GPU, because it's there and it's idle, so why not? I would assume that, in actual usage, CU reservation would only be enabled if an AMD GPU is the only device installed.
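
As a rough illustration of that decoupling idea - not the TrueAudio Next API itself, since its CU-reservation interface is AMD-specific and, as noted, under NDA - here is a minimal OpenCL sketch in which the audio thread owns its own command queue, so its kernels are submitted independently of whatever the renderer is doing:

```cpp
// Hypothetical sketch: give audio its own OpenCL command queue so sound
// kernels aren't serialized behind frame-rate-bound rendering work.
// Without CU reservation, the hardware scheduler still arbitrates
// between the two queues.
#define CL_TARGET_OPENCL_VERSION 200
#include <CL/cl.h>

int main() {
    cl_platform_id platform;
    clGetPlatformIDs(1, &platform, nullptr);

    cl_device_id gpu;
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &gpu, nullptr);

    cl_int err;
    cl_context ctx = clCreateContext(nullptr, 1, &gpu, nullptr, nullptr, &err);

    // Two independent queues on the same device: one fed by the render
    // loop, one fed by the audio thread whenever a sound call arrives.
    cl_command_queue renderQueue =
        clCreateCommandQueueWithProperties(ctx, gpu, nullptr, &err);
    cl_command_queue audioQueue =
        clCreateCommandQueueWithProperties(ctx, gpu, nullptr, &err);

    // Audio thread:  clEnqueueNDRangeKernel(audioQueue,  convolveKernel, ...);
    // Render thread: clEnqueueNDRangeKernel(renderQueue, postFxKernel,  ...);

    clReleaseCommandQueue(audioQueue);
    clReleaseCommandQueue(renderQueue);
    clReleaseContext(ctx);
    return 0;
}
```

The kernel names in the comments are placeholders; the point is simply that a second queue lets sound dispatches land whenever the audio thread is ready, rather than once per rendered frame.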

Anywho, if you're interested, then be sure to check out AMD's other post on it, too.

Source: AMD

NVIDIA Officially Announces GeForce GTX 1060 3GB Edition

Subject: Graphics Cards | August 18, 2016 - 02:28 PM |
Tagged: nvidia, gtx 1060 3gb, gtx 1060, graphics card, gpu, geforce, 1152 CUDA Cores

NVIDIA has officially announced the 3GB version of the GTX 1060 graphics card, and it indeed contains fewer CUDA cores than the 6GB version.

GTX1060.jpg

The GTX 1060 Founders Edition

The product page on NVIDIA.com now reflects the 3GB model, and board partners have begun announcing their versions. The MSRP of this 3GB version is set at $199, and availability of partner cards is expected in the next couple of weeks. The two versions will be designated only by their memory size, and no other capacities of either card are forthcoming.

                      GeForce GTX 1060 3GB  GeForce GTX 1060 6GB
Architecture          Pascal                Pascal
CUDA Cores            1152                  1280
Base Clock            1506 MHz              1506 MHz
Boost Clock           1708 MHz              1708 MHz
Memory Speed          8 Gbps                8 Gbps
Memory Configuration  3GB                   6GB
Memory Interface      192-bit               192-bit
Power Connector       6-pin                 6-pin
TDP                   120W                  120W

As you can see from the above table, the only specification that has changed is the CUDA core count; base/boost clocks, memory speed and interface, and TDP are identical. As to performance, NVIDIA says the 6GB version holds a 5% advantage over this lower-cost card, which at $199 is 20% less expensive than the $249 GTX 1060 6GB. Put another way, roughly 95% of the performance for 80% of the price works out to about 19% more performance per dollar.

Source: NVIDIA
Manufacturer: NVIDIA

Is Enterprise Ascending Outside of Consumer Viability?

So a couple of weeks have gone by since the Quadro P6000 was announced and the new Titan X launched. With them, we received a new chip: GP102. Since Fermi, NVIDIA has labeled its GPU designs with a G, followed by a single letter for the architecture (F, K, M, or P for Fermi, Kepler, Maxwell, and Pascal, respectively), followed by a three-digit number. The last digit is the most relevant one, however, as it separates designs by their intended size.

nvidia-2016-Quadro_P6000_7440.jpg

Typically, 0 corresponds to a ~550-600mm2 design, which is about as large a design as fabs can produce without resorting to error-prone techniques like multiple exposures (that is, trying to precisely overlap multiple exposures to form a single, larger integrated circuit). 4 corresponds to ~300mm2, although GM204 was pretty large at 398mm2, likely to increase the core count while remaining on the 28nm process. Higher numbers, like 6 or 7, fill out the lower-end SKUs until NVIDIA essentially stops caring for that generation. So when we moved to Pascal, jumping two whole process nodes, NVIDIA looked at their wristwatches and said “about time to make another 300mm2 part, I guess?”

The GTX 1080 and the GTX 1070 (GP104, 314mm2) were born.

nvidia-2016-gtc-pascal-banner.png

NVIDIA already announced a 600mm2 part, though. The GP100 had 3840 CUDA cores, HBM2 memory, and an ideal 1:2:4 ratio of FP64:FP32:FP16 performance. (A 64-bit chunk of memory can store one 64-bit value, two 32-bit values, or four 16-bit values, unless the register is attached to logic circuits that, while smaller, don't know how to operate on the data.) This increased ratio, even over Kepler's 1:6 FP64:FP32, is great for GPU compute, but wasted die area for today's (and tomorrow's) games. I'm predicting that it takes the wind out of Intel's sales, as the Xeon Phi's 1:2 FP64:FP32 performance ratio is one of its major selling points, leading to its inclusion in many supercomputers.
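
To illustrate just the storage claim in that parenthetical, here is a trivial C++ sketch reinterpreting the same 64-bit chunk at three lane widths (raw uint16_t lanes stand in for FP16, since standard C++ has no built-in half type):

```cpp
#include <cstdint>
#include <cstdio>
#include <cstring>

int main() {
    double one_fp64 = 3.141592653589793;          // one 64-bit value...

    uint64_t chunk;
    std::memcpy(&chunk, &one_fp64, sizeof chunk);

    float two_fp32[2];                            // ...or two 32-bit lanes...
    std::memcpy(two_fp32, &chunk, sizeof chunk);

    uint16_t four_fp16[4];                        // ...or four 16-bit lanes.
    std::memcpy(four_fp16, &chunk, sizeof chunk);

    printf("as FP64: %.15f\n", one_fp64);
    printf("as 2 x 32-bit lanes: %g %g\n", two_fp32[0], two_fp32[1]);
    printf("as 4 x 16-bit lanes: %04x %04x %04x %04x\n",
           four_fp16[0], four_fp16[1], four_fp16[2], four_fp16[3]);
    return 0;
}
```

Whether the hardware can actually operate on the narrower lanes at a higher rate is the part that costs logic - and that is precisely the area GP102 gives up relative to GP100.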

Even though an HBM2 memory controller is supposedly smaller than a GDDR5(X) one, NVIDIA could still save die space overall while providing the same 3840 CUDA cores (a few of which are disabled on Titan X). The trade-off is that FP64 and FP16 performance had to decrease dramatically, from 1:2 and 2:1 relative to FP32 all the way down to 1:32 and 1:64. This new design comes in at 471mm2, although it's $200 more expensive than what the 600mm2 products, GK110 and GM200, launched at. Smaller dies yield more chips per wafer and, better still, since the number of defects per wafer is roughly constant, a larger share of those chips are good.
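
That last point deserves a quick back-of-the-envelope check. A common first-order approximation is the Poisson yield model, Y = exp(-A·D0), with die area A and a constant defect density D0. The numbers below are purely illustrative assumptions, not foundry data:

```cpp
#include <cmath>
#include <cstdio>

// Rough dies-per-wafer and yield comparison under a constant defect
// density, using the simple Poisson yield model Y = exp(-A * D0).
// The defect density and die sizes are illustrative assumptions.
int main() {
    const double kPi = 3.141592653589793;
    const double wafer_area_mm2 = kPi * 150.0 * 150.0; // 300mm wafer
    const double defect_density = 0.002;               // defects per mm^2 (assumed)
    const double die_areas[] = {471.0, 600.0};         // GP102-size vs 600mm2-class

    for (double area : die_areas) {
        double candidates = wafer_area_mm2 / area;     // ignores edge loss
        double yield = std::exp(-area * defect_density);
        printf("%.0f mm2 die: ~%.0f candidates/wafer, ~%.0f%% good -> ~%.0f usable\n",
               area, candidates, yield * 100.0, candidates * yield);
    }
    return 0;
}
```

With those assumed numbers, a 471mm2 die nets roughly 58 good chips per wafer versus roughly 35 for a 600mm2 die - more candidates per wafer and a higher fraction of them good.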

Anyway, that aside, it puts NVIDIA in an interesting position. Splitting the xx0-class chip into xx0 and xx2 designs allows NVIDIA to lower the cost of its high-end gaming parts, although it cuts out hobbyists who buy a Titan for double-precision compute. More interestingly, it leaves around 150mm2 of headroom for AMD to sneak in an FP32-centric design, leaving them a potential performance crown.

nvidia-2016-pascal-volta-roadmap-extremetech.png

Image Credit: ExtremeTech

On the other hand, as fabrication node changes are becoming less frequent, it's possible that NVIDIA could be leaving itself room for Volta, too. Last month, it was rumored that NVIDIA would release two architectures at 16nm, in the same way that Maxwell shared 28nm with Kepler. In this case, Volta, on top of whatever other architectural advancements NVIDIA rolls into that design, can also grow a little in size. At that time, TSMC would have better yields, making a 600mm2 design less costly in terms of waste and recovery.

If this is the case, we could see the GPGPU folks receiving a new architecture once every second gaming (and professional graphics) architecture. That is, unless you are a hobbyist. If you are? Then I would need to be wrong, or NVIDIA would need to somehow bring its enterprise SKU down to an affordable price point. The xx0 class seems to have been pushed up and out of viability for consumers.

Or, again, I could just be wrong.

Intel Larrabee Post-Mortem by Tom Forsyth

Subject: Graphics Cards, Processors | August 17, 2016 - 01:38 PM |
Tagged: Xeon Phi, larrabee, Intel

Tom Forsyth, who is currently at Oculus, was once on the core Larrabee team at Intel. Just prior to Intel's IDF conference in San Francisco, which Ryan is attending and covering as I type this, Tom wrote a blog post that outlines the project and its design goals, including why it never hit market as a graphics device. He even goes into the details of the graphics architecture, which was almost entirely in software apart from the texture units and video out. For instance, Larrabee ran FreeBSD with a program, called DirectXGfx, that gave it the DirectX 11 feature set -- and it worked on hundreds of titles, too.

Intel_Xeon_Phi_Family.jpg

Also, if you found the discussion interesting, there is plenty of content from back in the day to browse. A good example is an Intel Developer Zone post from Michael Abrash that discusses software rasterization, told through several really interesting stories.

Manufacturer: NVIDIA

Take your Pascal on the go

Easily the strongest growth segment in PC hardware today is the adoption of gaming notebooks. Ask companies like MSI and ASUS, even Gigabyte: they now make more models and sell more units of notebooks with dedicated GPUs than ever before. Both AMD and NVIDIA agree on this point, and it's something AMD was adamant about discussing during the launch of the Polaris architecture.

pascalnb-2.jpg

Both AMD and NVIDIA predict massive annual growth in this market – somewhere on the order of 25-30%. For an overall culture that continues to believe the PC is dying, seeing projected growth this strong in any segment is not only amazing, but welcome to those of us who depend on it. AMD and NVIDIA have different goals here: GeForce products already have 90-95% market share in discrete gaming notebooks. In order for NVIDIA to see growth in sales, the total market needs to grow. For AMD, simply taking back a portion of those users and design wins would help its bottom line.

pascalnb-4.jpg

But despite AMD's early talk about getting Polaris 10 and 11 into mobile platforms, it's NVIDIA striking first again. Gaming notebooks with Pascal GPUs will be available today from nearly every system vendor you would consider buying from: ASUS, MSI, Gigabyte, Alienware, Razer, etc. NVIDIA claims quicker adoption of this product family in notebooks than in any previous generation. That's great news for NVIDIA, but it might leave AMD looking in from the outside yet again.

Technologically speaking, though, this makes sense. Despite the improvements Polaris made to the GCN architecture, Pascal is still more powerful and more power efficient than anything AMD has been able to produce. Looking solely at performance per watt, which is really the defining trait of mobile designs, Pascal is as dominant over Polaris as Maxwell was over Fiji. And this time around NVIDIA isn't messing with cut-back parts under different branding – GeForce is diving directly into gaming notebooks in a way we have only seen with one prior release.

g752-open.jpg

The ASUS G752VS OC Edition with GTX 1070

Do you remember our initial look at the mobile variant of the GeForce GTX 980? Not the GTX 980M, mind you, but the full GM204 operating in notebooks. That was basically a dry run for what we see today: NVIDIA is releasing the GeForce GTX 1080, GTX 1070 and GTX 1060 to notebooks.

Continue reading our preview of the new GeForce GTX 1080, 1070 and 1060 mobile Pascal GPUs!!