Subject: Graphics Cards | August 31, 2016 - 05:38 PM | Jeremy Hellstrom
Tagged: amd, radeon, open source, linux, RADV, graphics driver
AMD has yet to deliver the open-source Radeon Vulkan driver originally slated to arrive early this year, instead relying on their current proprietary driver. That has not stopped a team of plucky programmers from creating RADV, which builds on the existing AMDGPU LLVM compiler back-end and Intel's work on Mesa's NIR intermediate representation, lowering NIR to LLVM IR. You won't get Gallium3D support; ironically, RADV is too close to the metal for that to work.
Phoronix just wrapped up testing of the new driver, looking at performance in The Talos Principle and Dota 2 and contrasting the open-source driver with the closed-source AMDGPU-PRO. RADV is not quite 4K-ready, but at lower resolutions it proves very competitive.
"With word coming out last week that the RADV open-source Vulkan driver can now render Dota 2 correctly, I've been running some tests the past few days of this RADV Vulkan driver compared to AMD's official (but currently closed-source) Vulkan driver bundled with the AMDGPU-PRO Vulkan driver."
Here are some more Graphics Card articles from around the web:
- Windows 10 vs. Linux Radeon Software Performance @ Phoronix
- PowerColor Red Devil RX 480 8GB Review @ OCC
- XFX Radeon RX 460 Double Dissipation @ [H]ard|OCP
- NVIDIA GeForce GTX 1060 Founders Edition Review @ Neoseeker
Subject: General Tech, Graphics Cards | August 30, 2016 - 12:46 PM | Jeremy Hellstrom
Tagged: nvidia, GeForce 372.70, driver
NVIDIA continues with their Game Ready driver program, releasing the GeForce 372.70 driver, hand crafted in the new world by artisanal engineers to bring enhanced support to World of Warcraft: Legion, Battlefield 1: Open Beta, Deus Ex: Mankind Divided, and Quantum Break. There is not much to see in the release notes, although you can now enjoy Deus Ex in glorious 3D Vision, assuming you have the monitor and glasses.
If you are testing the new Battlefield beta you should consider updating; one would suppose the bug reports submitted using this driver will be more beneficial to the developers than those from an older release. You know the drill: grab them from GeForce.com or NVIDIA.com.
Subject: Graphics Cards, Motherboards | August 29, 2016 - 01:20 AM | Scott Michaud
Tagged: pcie, PCI SIG
Last week, various outlets were reporting (incorrectly) that PCIe 4.0 would provide “at least 300W” through the slot. This would have been roughly equal to the total power available to a PCIe 3.0 GPU with an extra six-pin and an extra eight-pin power connector, but delivered entirely through the slot.
Later, the PCI-SIG contacted Tom's Hardware (and likely others) to say that this is not the case. The slot will still only provide 75W of power; any other power will still need to come from external connectors. The main advantage of the standard will be extra bandwidth, about double that of PCIe 3.0, not easing cable management or making it easier to design a graphics card (by making it harder to design a motherboard).
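For reference, the arithmetic behind that comparison is simple; a quick sketch using the PCIe CEM connector ratings (75 W slot, 75 W six-pin, 150 W eight-pin):

```python
# Power budget for a PCIe 3.0 graphics card, per the CEM connector ratings.
SLOT_W = 75        # PCIe x16 slot limit (unchanged for PCIe 4.0)
SIX_PIN_W = 75     # auxiliary 6-pin connector
EIGHT_PIN_W = 150  # auxiliary 8-pin connector

total_w = SLOT_W + SIX_PIN_W + EIGHT_PIN_W
print(f"Slot + 6-pin + 8-pin = {total_w} W")  # 300 W, the misreported figure
```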
Specifications and Card Breakdown
The flurry of retail cards based on NVIDIA's new Pascal GPUs has been hitting us hard at PC Perspective. So much so, in fact, that, coupled with new gaming notebooks, new monitors, new storage and a new church (you should listen to our podcast, really), output has slowed dramatically. How do you write reviews for all of these graphics cards when you don't even know where to start? My answer: blindly pick one and start typing away.
Just after launch day of the GeForce GTX 1060, ASUS sent over the GTX 1060 Turbo 6GB card. Despite the name, the ASUS Turbo line of GTX 10-series graphics cards is the company's most basic, most stock iteration of graphics cards. That isn't necessarily a drawback though - you get reference level performance at the lowest available price and you still get the promises of quality and warranty from ASUS.
With a target MSRP of just $249, does the ASUS GTX 1060 Turbo make the cut for users looking for that perfect mainstream 1080p gaming graphics card? Let's find out.
Subject: Graphics Cards | August 24, 2016 - 10:34 AM | Ryan Shrout
Tagged: nvidia, market share, jpr, jon peddie, amd
As reported by both Mercury Research and now by Jon Peddie Research, in a graphics add-in card market that dropped dramatically in Q2 2016 in terms of total units shipped, AMD has gained significant market share against NVIDIA.
| GPU Supplier | Market share this QTR | Market share last QTR | Market share last year |
|---|---|---|---|
| AMD | 29.9% | ? | 18.0% |
| NVIDIA | 70.0% | ? | 81.9% |
Source: Jon Peddie Research
Last year at this time, AMD was sitting at 18% market share in terms of units sold, an absolutely dismal result compared to NVIDIA's dominating 81.9%. Over the last couple of quarters we have seen AMD gain in this space, and keeping in mind that Q2 2016 does not include sales of AMD's new Polaris-based graphics cards like the Radeon RX 480, the jump to 29.9% is a big move for the company. As a result, NVIDIA falls back to 70% market share for the quarter, which is still a significant lead over AMD.
Numbers like that shouldn't be taken lightly - for AMD to gain 7 points of market share in a single quarter indicates a substantial shift in the market. This includes all add-in cards: budget, mainstream, enthusiast and even workstation class products. One report I received says that NVIDIA card sales specifically dropped off in Q2, though the exact reason why isn't known, and as a kind of de facto result, AMD gained sales share.
There are several other factors to watch in this data, however. First, graphics card sales dropped -20% in Q2 compared to Q1, well above the average seasonal Q1-to-Q2 drop, which JPR claims to be -9.7%. Much of this sell-through decrease is likely due to consumers delaying purchases in anticipation of NVIDIA's Pascal and AMD's Polaris releases.
The NVIDIA GeForce GTX 1080 launched on May 17th and the GTX 1070 on May 29th. The company has made very bold claims about sales of Pascal parts, so I am honestly very surprised that the overall market would drop the way it did in Q2 and that NVIDIA would lose as much ground to AMD as it has. Q3 2016 may be the defining quarter for both GPU vendors, however, as it will show the results of the work put into both new architectures and product lines. NVIDIA reported record profits recently, so it will be interesting to see how that matches up to unit sales.
Subject: Graphics Cards | August 23, 2016 - 04:18 PM | Tim Verry
Tagged: water cooling, pascal, hybrid cooler, GTX 1080, evga
EVGA recently launched a water-cooled graphics card that pairs the GTX 1080 GPU with the company's FTW PCB and a closed-loop (AIO) water cooler to deliver a heavily overclockable card that will set you back $730.
The GTX 1080 FTW Hybrid is interesting because the company has opted to use the same custom PCB design as its FTW cards rather than a reference board. This FTW board features improved power delivery with a 10+2 power phase, two 8-pin PCI-E power connectors, Dual BIOS, and adjustable RGB LEDs. The cooler is shrouded with backlit EVGA logos and has a fan to air cool the memory and VRMs that is reportedly quiet and uses a reverse swept blade design (like their ACX air coolers) rather than a traditional blower style fan. The graphics processor is cooled by a water loop.
The water block and pump sit on top of the GPU with tubes running out to the 120mm radiator. Luckily, the fan on the radiator can be easily disconnected, allowing users to substitute their own fan if they wish. According to YouTuber JayzTwoCents, the Precision XOC software controls the speed of the fan on the card itself, but users cannot adjust the radiator fan speed. You can connect your own fan to your motherboard and control it that way, however.
Display outputs include one DVI-D, one HDMI, and three DisplayPort outputs (any four of the five can be used simultaneously).
Out of the box, this 215W TDP graphics card has a factory overclock of 1721 MHz base and 1860 MHz boost. Thanks to the water cooler, the GPU stays at a frosty 42°C under load. When switched to the slave BIOS (which has a higher power limit and more aggressive fan curve), the GPU Boosted to 2025 MHz and hit 51°C (he managed to keep that to 44°C by swapping his own EK-Vardar fan onto the radiator). Not bad, especially considering the Founders Edition hit 85°C on air in our testing! Unfortunately, EVGA did not touch the memory, leaving the 8GB of GDDR5X at the stock 10 GHz.
| | GTX 1080 | GTX 1080 FTW Hybrid | GTX 1080 FTW Hybrid Slave BIOS |
|---|---|---|---|
| Rated Clock | 1607 MHz | 1721 MHz | 1721 MHz |
| Boost Clock | 1733 MHz | 1860 MHz | 2025 MHz |
| Memory Clock | 10000 MHz | 10000 MHz | 10000 MHz |
| TDP | 180 watts | 215 watts | ? watts |
| MSRP (current) | $599 ($699 FE) | $730 | $730 |
The water cooler should help users hit even higher overclocks and/or maintain a consistent GPU Boost clock at much lower temperatures than on air. The GTX 1080 FTW Hybrid graphics card does come at a bit of a premium at $730 (versus $699 for Founders or ~$650+ for custom models), but if you have the room in your case for the radiator this might be a nice option! (Of course custom water cooling is more fun, but it's also more expensive, time-consuming, and addictive. hehe)
What do you think about these "hybrid" graphics cards?
Subject: Graphics Cards | August 23, 2016 - 01:43 PM | Jeremy Hellstrom
Tagged: amd, nvidia, Tilt Brush, VR
[H]ard|OCP continues their foray into testing VR applications, this time moving away from games to try out the rather impressive Tilt Brush VR drawing application from Google. If you have yet to see this software in action it is rather incredible, although you do still require an artist's talent and practical skills to create true 3D masterpieces.
Artistic merit may not be [H]'s strong suit, but testing how well a GPU can power VR applications certainly lies within their bailiwick. Once again they tested five NVIDIA GPUs and a pair from AMD for dropped frames and reprojection caused by dips in FPS.
"We are changing gears a bit with our VR Performance coverage and looking at an application that is not as GPU-intensive as those we have looked at in the recent past. Google's Tilt Brush is a virtual reality application that makes use of the HTC Vive head mounted display and its motion controllers to allow you to paint in 3D space."
Here are some more Graphics Card articles from around the web:
- PowerColor Red Devil RX 470 Overclocking @ [H]ard|OCP
- MSI GeForce GTX 1060 OC 6 GB @ techPowerUp
- ASUS STRIX GAMING GTX 1070 OC @ eTeknix
- EVGA GeForce GTX 1070 FTW GAMING ACX 3.0 @ Bjorn3d
Why Two 4GB GPUs Isn't Necessarily 8GB
We're trying something new here at PC Perspective. Some topics are fairly difficult to explain cleanly without accompanying images. We also like to go fairly deep into specific topics, so we're hoping that we can provide educational cartoons that explain these issues.
This pilot episode is about load-balancing and memory management in multi-GPU configurations. There seems to be a lot of confusion around what was (and was not) possible with DirectX 11 and OpenGL, and even more confusion about what DirectX 12, Mantle, and Vulkan allow developers to do. It highlights three different load-balancing algorithms, and even briefly mentions what LucidLogix was attempting to accomplish almost ten years ago.
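For a concrete sense of the memory half of the story, here is a minimal sketch (the resource sizes are hypothetical) of why implicit multi-GPU mirrors memory rather than pooling it:

```python
# Sketch: under implicit multi-GPU (DX11/OpenGL-era alternate-frame rendering),
# the driver mirrors every resource on each adapter, so two 4 GB cards still
# behave like one 4 GB pool. Explicit APIs (DX12/Vulkan/Mantle) let developers
# place distinct resources on each adapter, but nothing does that automatically.
GPU_COUNT = 2
VRAM_PER_GPU_GB = 4
resources_gb = {"textures": 2.5, "geometry": 0.8, "render_targets": 0.5}

per_gpu_footprint = sum(resources_gb.values())  # full copy on every GPU
print(f"Per-GPU footprint: {per_gpu_footprint:.1f} GB of {VRAM_PER_GPU_GB} GB")
print(f"Usable pool: {VRAM_PER_GPU_GB} GB, not {GPU_COUNT * VRAM_PER_GPU_GB} GB")
```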
If you like it, and want to see more, please share and support us on Patreon. We're putting this out not knowing if it's popular enough to be sustainable. The best way to see more of this is to share!
Subject: Graphics Cards | August 18, 2016 - 07:58 PM | Scott Michaud
Tagged: amd, TrueAudio, trueaudio next
Using a GPU for audio makes a lot of sense. That said, the original TrueAudio was not really about that, and it didn't really take off. The API was only implemented in a handful of titles, and it required dedicated DSP hardware that AMD has since removed from its latest architectures. It was not about using the extra horsepower of the GPU to simulate sound, although AMD did have ideas for “sound shaders” in the original TrueAudio.
TrueAudio Next, on the other hand, is an SDK that is part of AMD's LiquidVR package. It is based around OpenCL; specifically, it uses AMD's open-source FireRays library to trace the paths that audio can take from source to receiver, including reflections. Treating sound as rays is a good approximation at high frequencies, where wavelengths are short relative to scene geometry, and that range of frequencies is more useful for positional awareness in VR, anyway.
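To give a flavor of the underlying idea (this is a toy model, not the FireRays API), tracing even the direct source-to-receiver path yields a propagation delay and a distance falloff:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20°C

def direct_path(source, receiver):
    """Toy direct-path model: distance, arrival delay, inverse-square gain."""
    dist = math.dist(source, receiver)
    delay_ms = dist / SPEED_OF_SOUND * 1000.0
    gain = 1.0 / max(dist, 1.0) ** 2  # clamp so a co-located source stays finite
    return dist, delay_ms, gain

# A receiver 5 m away hears the source ~14.6 ms later at a gain of 0.04.
dist, delay_ms, gain = direct_path((0.0, 0.0, 0.0), (3.0, 4.0, 0.0))
print(f"{dist:.1f} m, {delay_ms:.1f} ms, gain {gain:.2f}")
```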
Basically, TrueAudio Next has very little to do with the original.
Interestingly, AMD is providing an interface for TrueAudio Next to reserve compute units, but optionally (and under NDA). This allows audio processing to be unhooked from the video frame rate, provided that the CPU can keep both fed with actual game data. Since audio is typically a secondary thread, it could be ready to send sound calls at any moment. Various existing portions of asynchronous compute could help with this, but allowing developers to wholly reserve a fraction of the GPU should remove the issue entirely. That said, when I was working on a similar project in WebCL, I was looking to the integrated GPU, because it's there and it's idle, so why not? I would assume that, in actual usage, CU reservation would only be enabled if an AMD GPU is the only device installed.
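The decoupling itself is easy to picture in plain CPU-threading terms; a rough sketch (arbitrary tick rate, no real audio or rendering) of an audio loop running at its own fixed cadence while frame times vary:

```python
import threading
import time

running = True

def audio_loop(tick_hz=100):
    # The audio thread ticks at a fixed rate; with reserved compute units,
    # its GPU work would likewise never wait behind the 3D queue.
    period = 1.0 / tick_hz
    while running:
        # ... mix and process sound sources here ...
        time.sleep(period)

threading.Thread(target=audio_loop, daemon=True).start()

for frame in range(3):
    time.sleep(0.05 * (frame + 1))  # render loop with wildly varying frame times
running = False
```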
Anywho, if you're interested, then be sure to check out AMD's other post on it, too.
Subject: Graphics Cards | August 18, 2016 - 02:28 PM | Sebastian Peak
Tagged: nvidia, gtx 1060 3gb, gtx 1060, graphics card, gpu, geforce, 1152 CUDA Cores
NVIDIA has officially announced the 3GB version of the GTX 1060 graphics card, and it indeed contains fewer CUDA cores than the 6GB version.
The GTX 1060 Founders Edition
The product page on NVIDIA.com now reflects the 3GB model, and board partners have begun announcing their versions. The MSRP of this 3GB version is set at $199, and availability of partner cards is expected in the next couple of weeks. The two versions will be differentiated only by their memory size, and no other capacities of either card are forthcoming.
| | GeForce GTX 1060 3GB | GeForce GTX 1060 6GB |
|---|---|---|
| CUDA Cores | 1152 | 1280 |
| Base Clock | 1506 MHz | 1506 MHz |
| Boost Clock | 1708 MHz | 1708 MHz |
| Memory Speed | 8 Gbps | 8 Gbps |
As you can see from the above table, the only specification that has changed is the CUDA core count; base/boost clocks, memory speed and interface, and TDP are identical. As to performance, NVIDIA says the 6GB version holds a 5% performance advantage over this lower-cost version, which at $199 is 20% less expensive than the $249 GTX 1060 6GB.
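Those two percentages are easy to sanity-check; a quick sketch using the announced figures:

```python
price_3gb, price_6gb = 199, 249   # announced MSRPs in USD
perf_gap = 0.05                   # NVIDIA's claimed 6GB performance advantage

savings = 1 - price_3gb / price_6gb
print(f"Price saved: {savings:.0%}, performance given up: {perf_gap:.0%}")
# -> Price saved: 20%, performance given up: 5%
```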