The Strix strikes again, Asus' ROG Strix GeForce GTX 1080

Subject: Graphics Cards | September 7, 2016 - 03:41 PM |
Tagged: GTX 1080 STRIX GAMING, asus, GTX 1080, aerogel

ASUS has updated their GTX 1080 Strix with a few more features than the previous models in this family.  The aesthetics are a bit different, but there is more to the card than that: hidden under the front edge are two four-pin fan headers that let you connect two case fans to the card, which will then ramp according to the heat load on the GPU.  The new DirectCU cooler has five copper heatpipes, and the PCB uses an eight-plus-two power phase design.  There are two models, the ROG Strix-GTX1080-A8G-Gaming and the ROG Strix-GTX1080-O8G-Gaming, with base/boost clocks of 1670/1809 MHz and 1759/1898 MHz respectively.  The Tech Report tested the first of those two cards; see how it matches up to the competition here.


"Asus' graphics cards are favorites of ours at TR, so we were excited when the ROG Strix GeForce GTX 1080 landed in our labs. We put it to the test to see whether Asus gave Pascal a good set of wings."

Here are some more Graphics Card articles from around the web:

Graphics Cards

Rumor: NVIDIA GeForce GTX 1050 GPU-Z Screenshot

Subject: Graphics Cards | September 6, 2016 - 05:45 PM |
Tagged: nvidia, pascal, gtx 1050, geforce

I don't know why people insist on encoding screenshots of form-based windows as JPEG. There is very little color variation outside of the text, which is typically thin and high-contrast against its surroundings. JPEG's discrete cosine transform will cause rippling artifacts in the background, which should be solid color, and the file will almost certainly end up larger than a PNG would be. Please, everyone, at least check how big a PNG will be before encoding it as JPEG. (If you notice that I encoded it as JPEG too, that's because re-compressing JPEG artifacts makes a PNG's file size blow up, forcing me to actually use JPEG.)
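If you want to sanity-check that claim yourself, a minimal sketch along these lines will do it (assuming Pillow is installed; "gpuz.bmp" is a hypothetical lossless capture of the window):

```python
# Save the same screenshot as PNG and as JPEG and compare the file sizes.
import os
from PIL import Image

img = Image.open("gpuz.bmp").convert("RGB")  # JPEG has no alpha channel

img.save("gpuz.png", optimize=True)
img.save("gpuz.jpg", quality=90)

png_size = os.path.getsize("gpuz.png")
jpg_size = os.path.getsize("gpuz.jpg")
print(f"PNG: {png_size} bytes, JPEG: {jpg_size} bytes")
print("PNG wins" if png_size < jpg_size else "JPEG wins")
```

For flat-colored UI screenshots, the PNG almost always comes out both smaller and artifact-free.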


It also makes it a bit more difficult to tell whether a screenshot has been manipulated, because the compression hitches make everything look suspect. Regardless, BenchLife claims to have a leaked GPU-Z result for the GeForce GTX 1050. They claim that it will use the GP107 die at 75W, although the screenshot shows neither of those details. If true, this means that it will not be a further cut-down version of GP106, as seen in the two GTX 1060 parts, which goes some way toward explaining why they wanted both of those to remain under 1060 branding. (Although why they didn't call the 6GB version the 1060 Ti is beyond me.)

What the screenshot does suggest, though, is that it will have 4GB of GDDR5 memory on a 128-bit bus. It will have 768 shaders, the same as the GTX 950, although clocked about 15% higher (boost vs. boost) with a TDP 15W lower, bringing it back within what the PCIe slot alone can deliver (75W). That doesn't guarantee it will ship without a six-pin external power connector, but it could go without one, as the 750 Ti did.

This would give it about 2.1 TeraFLOPs of performance, which is on par with the GeForce GTX 660 from a few generations ago, as well as the RX 460, which also carries a 75W TDP.
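For those curious where the 2.1 figure comes from, here is the back-of-the-envelope math; the ~1366 MHz boost clock is my assumption, derived from "about 15% higher" than the GTX 950's 1188 MHz reference boost:

```python
# FP32 throughput estimate: shaders x 2 FLOPs per clock x clock speed.
shaders = 768
boost_mhz = 1188 * 1.15            # ~1366 MHz (assumed from the 15% figure)
flops = shaders * 2 * boost_mhz * 1e6
print(f"{flops / 1e12:.2f} TFLOPs")  # ~2.10 TFLOPs
```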

Source: Benchlife

IFA 2016: AMD Announces Radeon ProRender

Subject: Graphics Cards | September 6, 2016 - 02:53 AM |
Tagged: radeon, firepro, amd

AMD is apparently interested in supporting open-source, professional graphics. For instance, the Blender Foundation is interviewing potential hires on the strength of a prospective deal with the CPU and graphics vendor. AMD has also open-sourced a bunch of technologies through their GPUOpen initiative, such as the Radeon Rays (formerly FireRays) library.


This time, at IFA 2016, they released the Radeon ProRender, which used to be called FireRender. This is a plug-in for multiple 3D applications to render high-quality, raytraced images. The open-source, third-party renderer is currently available for 3D Studio Max, in beta for Maya, Rhinoceros, and Solidworks, and coming soon for Blender. While Cycles is pretty good, the potential for cross-pollination is interesting for the future of open 3D development.

We can't go wrong with more options.

Source: AMD

NVIDIA Announces a Fallout 4 Mod

Subject: Graphics Cards | August 31, 2016 - 07:50 PM |
Tagged: nvidia, gameworks, fallout 4, pc gaming

Vault 1080, which is a terrible pun by the way, is a free mod of Fallout 4 that is developed by NVIDIA Lightspeed Studios. It is designed to show off GameWorks technologies, such as volumetric lighting and HBAO+, more heavily than Bethesda did with the base game. They claim that the content lasts more than an hour, which is pretty decent for a free expansion.

It will launch on the first day of PAX West: September 2nd.

If you're wondering why NVIDIA has a game development studio, they are mostly responsible for bringing content from the PC to NVIDIA's Shield devices, such as Half-Life 2 and Portal. They also created NVIDIA's VR Funhouse demo, which was likewise released for free to show off GameWorks (such as NVIDIA Flow and VRWorks Audio) on the HTC Vive. Basically, they develop games (and now game content) to make NVIDIA's hardware more appealing.

Source: NVIDIA

If you don't Ottoman-tically update your drivers, Radeon 16.8.3 has just arrived

Subject: Graphics Cards | August 31, 2016 - 06:09 PM |
Tagged: radeon 16.8.3, crimson, amd

Similar to the release yesterday from NVIDIA, AMD's Crimson 16.8.3 hotfix has been timed for release with Deus Ex: Mankind Divided and the Battlefield 1 Beta.  This particular update will add Crossfire profiles for both games and also fixes an unfortunate bug from the previous release which occasionally caused a static, albeit colourful screen over top of your game.  Unfortunately, the Gaming Evolved overlay launch problem still exists, as does the workaround. 


If you do plan on submitting bug reports whilst trying out the new Battlefield, please do head on over and upgrade so the devs are not working on issues which are already resolved.

Source: AMD

Testing the community developed RADV driver against AMDGPU-PRO

Subject: Graphics Cards | August 31, 2016 - 05:38 PM |
Tagged: amd, radeon, open source, linux, RADV, graphics driver

As yet, AMD has not delivered the open-source Radeon Vulkan driver originally slated to arrive early this year, instead relying on their current proprietary driver.  That has not stopped a team of plucky programmers from creating RADV, which uses the existing AMDGPU LLVM compiler back-end along with Intel's Mesa NIR work as the intermediate representation that gets lowered to LLVM IR.  You won't get Gallium3D support; ironically, RADV is too close to the metal for that to work.

Phoronix just wrapped up testing of the new driver, looking at performance in The Talos Principle and Dota 2 and contrasting the open-source driver with the closed-source AMDGPU-PRO.  RADV is not quite 4K-ready, but at lower resolutions it proves very competitive.
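If you want to attempt the same comparison yourself, the usual trick is to point the Vulkan loader at one ICD at a time through the VK_ICD_FILENAMES environment variable. Here is a rough sketch of that approach; the two JSON paths are assumptions and vary by distribution and driver version:

```python
# Run vulkaninfo against each installed Vulkan ICD in turn by overriding
# VK_ICD_FILENAMES for the child process. Paths below are assumed examples.
import os
import subprocess

ICDS = {
    "radv":       "/usr/share/vulkan/icd.d/radeon_icd.x86_64.json",  # assumed RADV manifest
    "amdgpu-pro": "/etc/vulkan/icd.d/amd_icd64.json",                # assumed AMDGPU-PRO manifest
}

for name, icd in ICDS.items():
    env = dict(os.environ, VK_ICD_FILENAMES=icd)
    print(f"--- {name} ---")
    subprocess.run(["vulkaninfo"], env=env, check=False)
```

The same environment override works for launching a game or benchmark instead of vulkaninfo.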


"With word coming out last week that the RADV open-source Vulkan driver can now render Dota 2 correctly, I've been running some tests the past few days of this RADV Vulkan driver compared to AMD's official (but currently closed-source) Vulkan driver bundled with the AMDGPU-PRO Vulkan driver."

Here are some more Graphics Card articles from around the web:

Graphics Cards


Source: Phoronix

Battlefield 1 Beta is out, as is the GeForce 372.70 driver

Subject: General Tech, Graphics Cards | August 30, 2016 - 12:46 PM |
Tagged: nvidia, GeForce 372.70, driver

NVIDIA continues with their Game Ready driver program, releasing the GeForce 372.70 driver, hand-crafted in the new world by artisanal engineers to bring enhanced support to World of Warcraft: Legion, the Battlefield 1 Open Beta, Deus Ex: Mankind Divided, and Quantum Break.  There is not much to see in the release notes, although you can now enjoy Deus Ex in glorious 3D Vision, assuming you have the monitor and glasses.


If you are testing the new Battlefield you should consider updating; one would suppose the bug reports submitted using this driver will be more beneficial to the developers than those from an older release.  You know the drill: grab the driver through GeForce Experience or from NVIDIA's download page.

Source: NVIDIA

PCIe 4.0 Will Still Deliver 75W of Slot Power

Subject: Graphics Cards, Motherboards | August 29, 2016 - 01:20 AM |
Tagged: pcie, PCI SIG

Last week, various outlets were reporting (incorrectly) that PCIe 4.0 would provide “at least 300W” through the slot. This would have been roughly equal to the power a PCIe 3.0 GPU can draw with an extra six-pin and an extra eight-pin power connector, but delivered entirely through the slot.
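For reference, here is where that ~300W figure comes from under the existing PCIe 3.0 power limits:

```python
# Total board power available to a PCIe 3.0 card with 6-pin + 8-pin auxiliary power.
slot_w      = 75    # PCIe x16 slot
six_pin_w   = 75    # 6-pin auxiliary connector
eight_pin_w = 150   # 8-pin auxiliary connector
print(slot_w + six_pin_w + eight_pin_w)  # 300 W
```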


Later, the PCI-SIG contacted Tom's Hardware (and likely others) to say that this is not the case. The slot will still only provide 75W of power; any other power will still need to come from external connectors. The main advantage of the standard will be extra bandwidth, about double that of PCIe 3.0, not easing cable management or making it easier to design a graphics card (by making it harder to design a motherboard).

AMD Gains Significant Market Share in Q2 2016

Subject: Graphics Cards | August 24, 2016 - 10:34 AM |
Tagged: nvidia, market share, jpr, jon peddie, amd

As reported by both Mercury Research and now by Jon Peddie Research, in a graphics add-in card market that dropped dramatically in Q2 2016 in terms of total units shipped, AMD has gained significant market share against NVIDIA.

GPU Supplier   Market share this QTR   Market share last QTR   Market share last year
AMD            29.9%                   22.8%                   18.0%
NVIDIA         70.0%                   77.2%                   81.9%
Total          100%                    100%                    100%

Source: Jon Peddie Research

Last year at this time, AMD was sitting at 18% market share in terms of units sold, an absolutely dismal result compared to NVIDIA's dominating 81.9%. Over the last couple of quarters we have seen AMD gain in this space, and keeping in mind that Q2 2016 does not include sales of AMD's new Polaris-based graphics cards like the Radeon RX 480, the jump to 29.9% is a big move for the company. As a result, NVIDIA falls back to 70% market share for the quarter, which is still a significant lead over AMD.

Numbers like that shouldn't be taken lightly - for AMD to gain 7 points of market share in a single quarter indicates a substantial shift in the market. This includes all add-in cards: budget, mainstream, enthusiast, and even workstation-class products. One report I received says that NVIDIA card sales specifically dropped off in Q2, though the exact reason why isn't known, and as a de facto result AMD gained sales share.
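For the record, the 7-point figure falls straight out of the JPR numbers in the table above:

```python
# AMD add-in board share, in percentage points, from the JPR table.
amd_share = {"this_qtr": 29.9, "last_qtr": 22.8, "last_year": 18.0}
print(round(amd_share["this_qtr"] - amd_share["last_qtr"], 1))   # +7.1 points quarter-over-quarter
print(round(amd_share["this_qtr"] - amd_share["last_year"], 1))  # +11.9 points year-over-year
```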


There are several other factors to watch with this data, however. First, graphics card sales dropped -20% in Q2 compared to Q1, well beyond the average seasonal Q1-to-Q2 decline, which JPR puts at -9.7%. Much of this sell-through decrease is likely due to consumers delaying purchases in anticipation of both NVIDIA's Pascal and AMD's Polaris releases.

The NVIDIA GeForce GTX 1080 launched on May 17th and the GTX 1070 on May 29th. The company has made very bold claims about sales of its Pascal parts, so I am honestly very surprised that the overall market would drop the way it did in Q2 and that NVIDIA would give up as much share to AMD as it did. Q3 2016 may be the defining quarter for both GPU vendors, however, as it will show the results of the work put into the new architectures and product lines. NVIDIA reported record profits recently, so it will be interesting to see how that matches up to unit sales.

EVGA's Water Cooled GTX 1080 FTW Hybrid Runs Cool and Quiet

Subject: Graphics Cards | August 23, 2016 - 04:18 PM |
Tagged: water cooling, pascal, hybrid cooler, GTX 1080, evga

EVGA recently launched a water-cooled graphics card that pairs the GTX 1080 GPU with the company's FTW PCB and a closed-loop (AIO) water cooler, delivering a heavily overclockable card that will set you back $730.

The GTX 1080 FTW Hybrid is interesting because the company has opted to use the same custom PCB design as its FTW cards rather than a reference board. This FTW board features improved power delivery with a 10+2 power phase design, two 8-pin PCI-E power connectors, Dual BIOS, and adjustable RGB LEDs. The shroud carries backlit EVGA logos and houses a fan that air-cools the memory and VRMs; it is reportedly quiet and uses a reverse-swept blade design (like their ACX air coolers) rather than a traditional blower-style fan. The GPU itself is cooled by the water loop.


The water block and pump sit on top of the GPU, with tubes running out to the 120mm radiator. Luckily, the fan on the radiator can be easily disconnected, allowing users to substitute their own fan if they wish. According to YouTuber JayzTwoCents, the Precision XOC software controls the speed of the fan on the card itself, but users cannot adjust the radiator fan speed. You can, however, connect that fan to your motherboard and control it that way.

Display outputs include one DVI-D, one HDMI, and three DisplayPort outputs (any four of the five can be used simultaneously).

Out of the box, this 215W TDP graphics card has a factory overclock of 1721 MHz base and 1860 MHz boost. Thanks to the water cooler, the GPU stays at a frosty 42°C under load. When switched to the slave BIOS (which has a higher power limit and a more aggressive fan curve), the card's GPU Boost hit 2025 MHz at 51°C (he managed to keep that to 44°C by swapping his own EK-Vardar fan onto the radiator). Not bad, especially considering the Founders Edition hit 85°C on air in our testing! Unfortunately, EVGA did not touch the memory, leaving the 8GB of GDDR5X at the stock 10 GHz.

                  GTX 1080         GTX 1080 FTW Hybrid   GTX 1080 FTW Hybrid (Slave BIOS)
GPU               GP104            GP104                 GP104
GPU Cores         2560             2560                  2560
Rated Clock       1607 MHz         1721 MHz              1721 MHz
Boost Clock       1733 MHz         1860 MHz              2025 MHz
Texture Units     160              160                   160
ROP Units         64               64                    64
Memory            8GB              8GB                   8GB
Memory Clock      10000 MHz        10000 MHz             10000 MHz
TDP               180 watts        215 watts             ? watts
Max Temperature   85°C             42°C                  51°C
MSRP (current)    $599 ($699 FE)   $730                  $730
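Some rough math from the table above puts the factory overclock and the observed slave-BIOS result into perspective:

```python
# Clock uplift over the reference GTX 1080, using the values in the table.
ref_base, ref_boost = 1607, 1733
ftw_base, ftw_boost = 1721, 1860
slave_boost_observed = 2025   # boost clock seen on the slave BIOS in testing

print(f"base:  +{(ftw_base / ref_base - 1) * 100:.1f}%")    # ~+7.1%
print(f"boost: +{(ftw_boost / ref_boost - 1) * 100:.1f}%")  # ~+7.3%
print(f"slave BIOS vs reference boost: +{(slave_boost_observed / ref_boost - 1) * 100:.1f}%")  # ~+16.8%
```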

The water cooler should help users hit even higher overclocks and/or maintain a consistent GPU Boost clock at much lower temperatures than on air. The GTX 1080 FTW Hybrid graphics card does come at a bit of a premium at $730 (versus $699 for Founders or ~$650+ for custom models), but if you have the room in your case for the radiator this might be a nice option! (Of course custom water cooling is more fun, but it's also more expensive, time consuming, and addictive. hehe)

What do you think about these "hybrid" graphics cards?

Source: EVGA

Creatively testing GPUs with Google's Tilt Brush

Subject: Graphics Cards | August 23, 2016 - 01:43 PM |
Tagged: amd, nvidia, Tilt Brush, VR

[H]ard|OCP continues their foray into testing VR applications, this time moving away from games to try out the impressive Tilt Brush VR drawing application from Google.  If you have yet to see this software in action, it is rather incredible, although you still need an artist's talent and practical skills to create true 3D masterpieces. 

Artistic merit may not be [H]'s strong suit, but testing how well a GPU can power VR applications certainly lies within their bailiwick.  Once again they tested five NVIDIA GPUs and a pair of AMD cards, watching for dropped frames and reprojection caused by dips in FPS.


"We are changing gears a bit with our VR Performance coverage and looking at an application that is not as GPU-intensive as those we have looked at in the recent past. Google's Tilt Brush is a virtual reality application that makes use of the HTC Vive head mounted display and its motion controllers to allow you to paint in 3D space."

Here are some more Graphics Card articles from around the web:

Graphics Cards

Source: [H]ard|OCP

AMD Announces TrueAudio Next

Subject: Graphics Cards | August 18, 2016 - 07:58 PM |
Tagged: amd, TrueAudio, trueaudio next

Using a GPU for audio makes a lot of sense. That said, the original TrueAudio was not really about that, and it didn't really take off. The API was only implemented in a handful of titles, and it required dedicated hardware that they have since removed from their latest architectures. It was not about using the extra horsepower of the GPU to simulate sound, although they did have ideas for “sound shaders” in the original TrueAudio.


TrueAudio Next, on the other hand, is an SDK that is part of AMD's LiquidVR package. It is based around OpenCL; specifically, it uses AMD's open-source FireRays library to trace the paths that audio can take from source to receiver, including reflections. Treating sound as rays is a good approximation for high-frequency audio, and that range of frequencies is the more useful one for positional awareness in VR anyway.
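To give a feel for the idea (this is emphatically not the TrueAudio Next API, just a toy sketch with made-up scene values), you can treat the source and listener as points, mirror the source across a wall, and compute the delay and a crude distance falloff for the direct and reflected paths:

```python
# Toy ray-style audio propagation: direct path plus one mirror-image reflection.
import math

SPEED_OF_SOUND = 343.0  # m/s

def path(src, dst):
    d = math.dist(src, dst)
    return d / SPEED_OF_SOUND, 1.0 / max(d, 1.0)   # (delay in seconds, crude gain)

source, listener = (2.0, 0.0, 1.5), (6.0, 3.0, 1.5)   # made-up scene positions
mirrored_source = (-source[0], source[1], source[2])   # reflect across a wall at x = 0

direct = path(source, listener)
reflected = path(mirrored_source, listener)
print(f"direct:    {direct[0]*1000:.1f} ms, gain {direct[1]:.2f}")
print(f"reflected: {reflected[0]*1000:.1f} ms, gain {reflected[1]:.2f}")
```

The real SDK does this on the GPU via OpenCL and FireRays, with far more rays and reflection orders than a toy like this.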

Basically, TrueAudio Next has very little to do with the original.

Interestingly, AMD is providing an interface for TrueAudio Next to reserve compute units, but optionally (and under NDA). This allows audio processing to be unhooked from the video frame rate, provided that the CPU can keep both fed with actual game data. Since audio is typically a secondary thread, it could be ready to send sound calls at any moment. Various existing portions of asynchronous compute could help with this, but allowing developers to wholly reserve a fraction of the GPU should remove the issue entirely. That said, when I was working on a similar project in WebCL, I was looking to the integrated GPU, because it's there and it's idle, so why not? I would assume that, in actual usage, CU reservation would only be enabled if an AMD GPU is the only device installed.

Anywho, if you're interested, then be sure to check out AMD's other post on it, too.

Source: AMD

NVIDIA Officially Announces GeForce GTX 1060 3GB Edition

Subject: Graphics Cards | August 18, 2016 - 02:28 PM |
Tagged: nvidia, gtx 1060 3gb, gtx 1060, graphics card, gpu, geforce, 1152 CUDA Cores

NVIDIA has officially announced the 3GB version of the GTX 1060 graphics card, and it indeed contains fewer CUDA cores than the 6GB version.


The GTX 1060 Founders Edition

NVIDIA's product page now reflects the 3GB model, and board partners have begun announcing their versions. The MSRP of this 3GB version is set at $199, and availability of partner cards is expected in the next couple of weeks. The two versions will be designated only by their memory size, and no other capacities of either card are forthcoming.

                      GeForce GTX 1060 3GB   GeForce GTX 1060 6GB
Architecture          Pascal                 Pascal
CUDA Cores            1152                   1280
Base Clock            1506 MHz               1506 MHz
Boost Clock           1708 MHz               1708 MHz
Memory Speed          8 Gbps                 8 Gbps
Memory Configuration  3GB                    6GB
Memory Interface      192-bit                192-bit
Power Connector       6-pin                  6-pin
TDP                   120W                   120W
As you can see from the above table, the only specification that has changed is the CUDA core count, with base/boost clocks, memory speed and interface, and TDP identical. As to performance, NVIDIA says the 6GB version holds a 5% performance advantage over this lower-cost version, which at $199 is 20% less expensive than the previous GTX 1060 6GB.
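Some quick math, assuming the $249 launch price of the existing 6GB card and taking NVIDIA's 5% figure at face value:

```python
# Price and performance-per-dollar comparison of the two GTX 1060 models.
price_3gb, price_6gb = 199, 249   # $249 for the 6GB card is assumed from its launch MSRP
rel_perf_3gb = 1 / 1.05           # the 6GB card is ~5% faster, per NVIDIA

print(f"price cut: {(1 - price_3gb / price_6gb) * 100:.0f}%")  # ~20%
perf_per_dollar_gain = (rel_perf_3gb / price_3gb) / (1 / price_6gb) - 1
print(f"perf per dollar advantage of the 3GB card: {perf_per_dollar_gain * 100:.0f}%")  # ~19%
```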

Source: NVIDIA

Intel Larrabee Post-Mortem by Tom Forsyth

Subject: Graphics Cards, Processors | August 17, 2016 - 01:38 PM |
Tagged: Xeon Phi, larrabee, Intel

Tom Forsyth, who is currently at Oculus, was once on the core Larrabee team at Intel. Just prior to Intel's IDF conference in San Francisco, which Ryan is attending and covering as I type this, Tom wrote a blog post that outlined the project and its design goals, including why it didn't hit the market as a graphics device. He even goes into the details of the graphics architecture, which was implemented almost entirely in software apart from the texture units and video out. For instance, Larrabee ran FreeBSD with a program called DirectXGfx that gave it the DirectX 11 feature set -- and it worked with hundreds of titles, too.


Also, if you found the discussion interesting, there is plenty of content from back in the day to browse. A good example is an Intel Developer Zone post from Michael Abrash that discusses software rasterization, told through several really interesting stories.

3GB Version of NVIDIA GTX 1060 Has 128 Fewer CUDA Cores

Subject: Graphics Cards | August 12, 2016 - 06:33 PM |
Tagged: report, nvidia, gtx 1060 3gb, gtx 1060, GeForce GTX 1060, geforce, cuda cores

NVIDIA will offer a 3GB version of the GTX 1060, and there's more to the story than the obvious fact that it has half the frame buffer of the 6GB version available now. It appears that this is an entirely different product, with 128 fewer CUDA cores (1152) than the 6GB version's 1280.



Boost clocks are the same at 1.7 GHz, and the 3GB version will still operate with a 120W TDP and require a 6-pin power connector. So why not simply name this product differently? It's always possible that this will be an OEM version of the GTX 1060, but in any case expect slightly lower performance than the existing version even if you don't run at high enough resolutions to require the larger 6GB frame buffer.

Source: VideoCardz

Wherein the RX 470 teaches us a valuable lesson about deferred procedure calls

Subject: Graphics Cards | August 12, 2016 - 05:44 PM |
Tagged: rx 470, LatencyMon, dpc, amd

When The Tech Report first conducted their review of the RX 470, they saw benchmark behaviour very different from any other GPU in that family, but they could not track down the cause and resolve it before the mob arrived with pitchforks and torches, demanding they publish or die. 

As it turns out, there was indeed something rotten in the benchmarks: incredibly high DPC latency on the test machine.  Investigation determined the culprit to be the beta BIOS on their ASRock Z170 Extreme7+, specifically the BIOS that allowed you to overclock locked Intel CPUs.  They have just released their new findings along with a look at LatencyMon and DPC in general.  Take a look at the new benchmarks and information about DPC, but also absorb the consequences of demanding articles arrive picoseconds after the NDA expires; if there is a delay in publishing, there might just be a damn good reason why.


"We retested our RX 470 to account for this issue, and we also updated our review with DirectX 12 benchmarks for Rise of the Tomb Raider and Hitman, plus full OpenGL and Vulkan benchmarks for Doom."

Here are some more Graphics Card articles from around the web:

Graphics Cards

Corsair Releases Hydro GFX GTX 1080 Liquid-Cooled Graphics Card

Subject: Graphics Cards | August 12, 2016 - 10:59 AM |
Tagged: overclock, nvidia, msi, liquid cooled, hydro H55, hydro gfx, GTX 1080, graphics card, gaming, corsair

Corsair and MSI have teamed up once again to produce a liquid-cooled edition of the latest NVIDIA GPU, with the GTX 1080 receiving the same treatment these two gave to the Hydro GFX version of GTX 980 Ti last year.


“The CORSAIR Hydro GFX GTX 1080 brings all the benefits of liquid cooling to the GeForce GTX 1080, boasting an integrated CORSAIR Hydro Series H55 cooler that draws heat from the GPU via a micro-fin copper base cold plate and dissipates it efficiently using a 120mm high-surface area radiator. A pre-installed low-noise LED-lit 120mm fan ensures steady, reliable air-flow, keeping GPU temperatures down and clock speeds high.

With a low-profile PCB and pre-fitted, fully-sealed liquid cooler, the Hydro GFX GTX 1080 is simple and easy to install. Just fit the card into a PCI-E 3.0 x16 slot, mount the radiator and enjoy low maintenance liquid cooling for the lifetime of the card.”

Naturally, with an integrated closed-loop liquid cooler, this GTX 1080 isn't limited to stock speeds out of the box, though Corsair leaves the choice up to the user. The card offers three performance modes that let users trade off noise against performance: Silent Mode leaves the GTX 1080 at stock settings (1733 MHz Boost), Gaming Mode increases the Boost clock to 1822 MHz, and OC Mode pushes that slightly further to 1847 MHz (while also raising the memory speed).


This liquid-cooled version will provide higher sustained clocks

Here are the full specs from Corsair:

  • GPU: NVIDIA GeForce GTX 1080
  • CUDA Cores: 2,560
  • Interface: PCI Express 3.0 x16
  • Boost / Base Core Clock:
    • 1,847 MHz / 1,708 MHz (OC Mode)
    • 1,822 MHz / 1,683 MHz (Gaming Mode)
    • 1,733 MHz / 1,607 MHz (Silent Mode)
  • Memory Clock:
    • 10,108 MHz (OC Mode)
    • 10,010 MHz (Gaming Mode)
    • 10,010 MHz (Silent Mode)
  • Memory Size: 8192MB
  • Memory Type: 8GB GDDR5X
  • Memory Bus: 256-bit
  • Outputs:
    • 3x DisplayPort (Version 1.4)
    • 1x HDMI (Version 2.0)
    • 1x DL-DVI-D
  • Power Connector: 8-pin x 1
  • Power Consumption: 180W
  • Dimensions / Weight:
    • Card: 270 x 111 x 40 mm / 1249 g
    • Cooler: 151 x 118 x 52 mm / 1286 g
  • SKU: CB-9060010-WW


The Corsair Hydro GFX GTX 1080 is available now, exclusively on Corsair's official online store, and priced at $749.99.

Source: Corsair

ASUS Adds Radeon RX 470 and RX 460 to ROG STRIX Gaming Lineup

Subject: Graphics Cards | August 10, 2016 - 08:22 PM |
Tagged: video card, strix rx470, strix rx460, strix, rx 470, rx 460, ROG, Republic of Gamers, graphics, gpu, gaming, asus

Ryan posted details about the Radeon RX 470 and 460 graphics cards at the end of last month, and both are now available. ASUS, the largest of the board partners, has added both of these new GPUs to their Republic of Gamers STRIX series.


The STRIX Gaming RX 470 (Image: ASUS)

ASUS announced the Radeon RX 470 STRIX Gaming cards last week, and today the more affordable RX 460 GPU variant has been announced. The RX 470 is certainly a capable gaming option as it's a slightly cut-down version of the RX 480 GPU, and with the two versions of the STRIX Gaming cards offering varying levels of overclocking, they can come even closer to the performance of a stock RX 480.


The STRIX Gaming RX 460 (Image: ASUS)

The new STRIX Gaming RX 460 is significantly slower, with just 896 stream processors (compared to the RX 470's 2048) and a 128-bit memory interface (compared to 256-bit). Part of the appeal of the reference RX 460 - aside from low cost - is power draw below 75W, which allows for slot-powered board designs. This STRIX Gaming version adds a 6-pin power connector, however, which should provide additional headroom for further overclocking.


                    STRIX RX 470 OC          STRIX RX 470             STRIX RX 460
GPU                 AMD Radeon RX 470        AMD Radeon RX 470        AMD Radeon RX 460
Stream Processors   2048                     2048                     896
Memory Clock        6600 MHz                 6600 MHz                 7000 MHz
Memory Interface    256-bit                  256-bit                  128-bit
Core Clock          1270 MHz (OC Mode)       1226 MHz (OC Mode)       1256 MHz (OC Mode)
                    1250 MHz (Gaming Mode)   1206 MHz (Gaming Mode)   1236 MHz (Gaming Mode)
Video Output        DVI-D x2, HDMI 2.0       DVI-D x2, HDMI 2.0       HDMI 2.0
Power Connection    6-pin                    6-pin                    6-pin
Dimensions          9.5" x 5.1" x 1.6"       9.5" x 5.1" x 1.6"       7.6" x 4.7" x 1.4"

The STRIX Gaming RX 470 OC 4GB is priced at $199, matching the (theoretical) retail of the 4GB RX 480, and the STRIX Gaming RX 470 is just behind at $189. The considerably lower-end STRIX Gaming RX 460 is $139. A check of Amazon/Newegg shows listings for these cards, but no in-stock units as of early this afternoon.

Source: ASUS

AMD Releases Radeon Software Crimson Edition 16.8.1

Subject: Graphics Cards | August 10, 2016 - 04:59 PM |
Tagged: amd, graphics drivers

Alongside the release of the Radeon RX 460 and RX 470 graphics cards, AMD has released the Radeon Software Crimson Edition 16.8.1 drivers. Beyond adding support for these new products, it also adds a Crossfire profile for F1 2016 and fixes a few issues, like Firefox and Overwatch crashing under certain circumstances. It also allows users of the RX 480 to overclock their memory higher than they previously could.


AMD is continuing their trend of steadily releasing graphics drivers and rapidly fixing important issues as they arise. They have also been verbose in their release notes, outlining fixes and known problems as they occur. Users can often track the bugs that affect them as they are added to the Known Issues and later graduated to Fixed Issues. While this sort of effort often goes unrecognized, it's frustrating as a user to experience a bug and not know whether the company even knows about it or is simply refusing to acknowledge it.

Useful release notes, like AMD has been publishing, are very helpful in that regard.

Source: AMD

AMD and NVIDIA on the Vive; performance data on Raw Data

Subject: Graphics Cards | August 8, 2016 - 05:25 PM |
Tagged: htc vive, amd, nvidia, raw data

Raw Data is an early access game for the HTC Vive, one which requires space to move around and which lets the Vive show off its tracking ability.  [H]ard|OCP wanted to see how the GPUs found in most high-end systems would perform in this VR game, so they grabbed several AMD and NVIDIA cards to test.  Benchmarking VR games is not an easy task; instead of raw performance, you need to focus on dropped frames and unstable frame rates, which result in nausea and a less engrossing VR experience.  To that end, [H] played the game numerous times on a variety of GPUs, changing settings throughout to determine the sweet spot for the GPU you are running.  VR offers a new gaming experience, and new tests need to be developed to demonstrate performance to those interested in jumping into the new market.  Check out the full review to see what you think of their methodology as well as the raw performance of the cards.
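For a sense of why dropped frames matter more than average FPS, here is a toy sketch built around the Vive's 90 Hz refresh; the frame times are made-up sample data, not measurements from the review:

```python
# At 90 Hz the GPU has ~11.1 ms per frame; anything slower must be reprojected or dropped.
REFRESH_HZ = 90
budget_ms = 1000 / REFRESH_HZ    # ~11.1 ms

frame_times_ms = [9.8, 10.5, 12.3, 11.0, 14.7, 10.1, 11.6]  # hypothetical capture
misses = [t for t in frame_times_ms if t > budget_ms]
print(f"budget: {budget_ms:.1f} ms, missed {len(misses)} of {len(frame_times_ms)} frames")
```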


"Both AMD and NVIDIA have had a lot to say about "VR" for a while now. VR is far from mainstream, but we are now seeing some games that are tremendously compelling to play, putting you in middle of the action. Raw Data is one of those, and it is extremely GPU intensive. How do the newest GPUs stack up in Raw Data?"

Here are some more Graphics Card articles from around the web:

Graphics Cards

Source: [H]ard|OCP