The fast and the Fury(ous): 4K

Subject: Graphics Cards | September 28, 2015 - 04:45 PM |
Tagged: R9 Fury, asus strix r9 fury, r9 390x, GTX 980, crossfire, sli, 4k

Bring your wallets to this review from [H]ard|OCP, which pits multiple AMD and NVIDIA GPUs against each other at 4K resolution; no matter the outcome, it won't be cheap!  They used the Catalyst 15.8 Beta and GeForce 355.82 WHQL drivers, the latest available at the time of writing, running on Windows 10 Pro x64.  There were some interesting results: for instance, you want an AMD card when driving in the rain in Project Cars, as the GTX 980s immediately slowed down in inclement weather.  In The Witcher 3, AMD again provided frames faster, but unfortunately the old spectre of stuttering appeared; those of you familiar with our Frame Rating tests will understand its source.  Dying Light proved to be a game that likes VRAM, with the 390X taking the top spot, though sadly neither AMD card could handle CrossFire in Far Cry 4.  There is a lot of interesting information in the review, and AMD's cards certainly show their mettle, but the overall winner is not perfectly clear; [H] chose the R9 Fury, with a caveat about CrossFire support.


"We gear up for multi-GPU gaming with AMD Radeon R9 Fury CrossFire, NVIDIA GeForce GTX 980 SLI, and AMD Radeon R9 390X CrossFire and share our head-to-head results at 4K resolution and find out which solution offers the best gameplay experience. How well does Fiji game when utilized in a CrossFire configuration?"

Here are some more Graphics Card articles from around the web:


Source: [H]ard|OCP

NVIDIA Publishes DirectX 12 Tips for Developers

Subject: Graphics Cards | September 26, 2015 - 09:10 PM |
Tagged: microsoft, windows 10, DirectX 12, dx12, nvidia

Programming with DirectX 12 (and Vulkan, and Mantle) is a much different process than most developers are used to. The biggest change is how work is submitted to the driver. Previously, engines would bind attributes to a graphics API and issue one of a handful of “draw” commands, which turned the current state of the API into a message. Drivers would play around with queuing and manipulating those messages to optimize how the orders were sent to the graphics device, but the game developer had no control over that.
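To make the old model concrete, here is a rough sketch of that bind-and-draw pattern as it looks through a D3D11-style immediate context. It is illustrative only (the device context, buffers, and shaders are assumed to be created elsewhere), not code from NVIDIA's document.

```cpp
// Old model: set pieces of API state, then issue a draw. The draw call
// snapshots the current state into a message, and the driver alone decides
// how those messages are queued and sent to the GPU.
UINT stride = sizeof(Vertex), offset = 0;
context->IASetVertexBuffers(0, 1, &vertexBuffer, &stride, &offset);
context->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
context->VSSetShader(vertexShader, nullptr, 0);
context->PSSetShader(pixelShader, nullptr, 0);
context->Draw(vertexCount, 0); // "bind, call" -- submission is out of your hands
```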


Now, the new graphics APIs are built more like command lists. Instead of bind, call, bind, call, and so forth, applications request queues to dump work into and assemble the messages themselves. The APIs even allow these messages to be bundled together and sent as a whole. This gives direct control over memory and the ability to distribute much of the command generation across multiple CPU cores. An application is only as fast as its slowest (relevant) thread, so the ability to spread work out increases actual performance.
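Here is the same draw as a hedged sketch under the new model, using D3D12's command lists: commands are recorded (potentially on several CPU threads at once) and then explicitly handed to a queue. Again, the surrounding objects (allocator, pipeline state, root signature, queue) are assumed to exist; this is a sketch of the pattern, not a complete program.

```cpp
// New model: record commands into a list, then submit the whole batch
// yourself. Nothing reaches the GPU until ExecuteCommandLists() is called.
commandList->Reset(allocator, pso);                    // begin recording
commandList->SetGraphicsRootSignature(rootSignature);
commandList->IASetVertexBuffers(0, 1, &vbView);
commandList->IASetPrimitiveTopology(D3D_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
commandList->DrawInstanced(vertexCount, 1, 0, 0);      // recorded, not executed
commandList->Close();                                  // finish the batch

ID3D12CommandList* lists[] = { commandList };
queue->ExecuteCommandLists(1, lists);                  // explicit submission
```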

NVIDIA has created a large list of things that developers should do, and others that they should not, to increase performance. Pretty much all of them apply equally, regardless of graphics vendor, but there are a few NVIDIA-specific comments, particularly the ones about NvAPI at the end and a few labeled notes in the “Root Signatures” category.
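As a taste of the root signature advice, the document recommends keeping root signatures small and placing frequently changing, per-draw values directly in them as root constants rather than binding a whole constant buffer. Here is a minimal sketch of what that looks like; it is my illustration of the tip, not code from NVIDIA's page.

```cpp
// One root parameter holding four 32-bit constants (e.g., a float4 of
// per-draw data), visible to the vertex shader as register b0.
D3D12_ROOT_PARAMETER param = {};
param.ParameterType = D3D12_ROOT_PARAMETER_TYPE_32BIT_CONSTANTS;
param.Constants.ShaderRegister = 0;   // b0 in HLSL
param.Constants.RegisterSpace = 0;
param.Constants.Num32BitValues = 4;
param.ShaderVisibility = D3D12_SHADER_VISIBILITY_VERTEX;

D3D12_ROOT_SIGNATURE_DESC desc = {};
desc.NumParameters = 1;
desc.pParameters = &param;
// Serialize with D3D12SerializeRootSignature() and create as usual; at draw
// time the values are updated cheaply via SetGraphicsRoot32BitConstants()
// instead of rebinding a constant buffer.
```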

The tips are fairly diverse, covering everything from how to efficiently use things like command lists, to how to properly handle multiple GPUs, and even how to architect your engine itself. Even if you're not a developer, it might be interesting to look over for clues about what makes the API tick.

Source: NVIDIA

Nintendo Joins the Khronos Group

Subject: Graphics Cards | September 26, 2015 - 03:46 PM |
Tagged: Nintendo, Khronos

Console developers need to use the APIs that are laid out by the system's creator. Nintendo has had its own graphics API, called GX, for the last three generations, although it is rumored to be somewhat like OpenGL. A few days ago, Nintendo's logo appeared on the Khronos Group's website as a Contributor Member. This led sites like The Register to speculate that Nintendo “pledges allegiance to the Vulkan (API)”.

I wouldn't be so hasty.


There are many reasons why a company would want to become a member of the Khronos Group. Microsoft, for instance, decided that the small, $15,000 USD/year membership fee was worth it to influence the future of WebGL. Nintendo, at least currently, does not make their own web browser; they license NetFront from Access Co. Ltd., but that could change (just as their original choice of Opera did). Even with a licensed browser, they might want to discuss and vote on the specifics. But yes, WebGL is unlikely to be on their minds, let alone a driving reason, especially since they are not involved with the W3C. Another unlikely option is OpenCL, especially if they get into cloud services, but I can't see them caring enough about the API to do anything more than blindly use it.

Vulkan is, in fact, most likely what Nintendo is interested in, but even that doesn't mean they will support it. The membership fee is quite low for a company like Nintendo, and, even if they don't use the API, their input could still benefit them, especially since they rely upon third parties for graphics processors. Pushing for additions to Vulkan could push GPU vendors to adopt those features, which would make them available for Nintendo's own APIs, and so forth. There might even be some learning to be had, up to the limits of the Khronos Group's confidentiality requirements.

Or, of course, Nintendo could adopt the Vulkan API to some extent. We'll see. Either way, the gaming company is beginning to open up with industry bodies. This could be positive.

Source: NeoGAF

The Fable of the uncontroversial benchmark

Subject: Graphics Cards | September 24, 2015 - 02:53 PM |
Tagged: radeon, nvidia, lionhead, geforce, fable legends, fable, dx12, benchmark, amd

By now you should have memorized Ryan's review of Fable's DirectX 12 performance on a variety of cards and hopefully tried out our new interactive IFU charts.  You can't always cover every card, as those who were brave enough to look at the CSV file Ryan provided might have come to realize.  That's why it is worth peeking at The Tech Report's review after reading through ours.  They have included an MSI R9 285 and XFX R9 390 as well as an MSI GTX 970, which may be cards you are interested in seeing.  They also spend some time looking at CPU scaling and the effect that has on AMD and NVIDIA's performance.  Check it out here.


"Fable Legends is one of the first games to make use of DirectX 12, and it produces some truly sumptuous visuals. Here's a look at how Legends performs on the latest graphics cards."

Here are some more Graphics Card articles from around the web:


Phoronix Looks at NVIDIA's Linux Driver Quality Settings

Subject: Graphics Cards | September 22, 2015 - 09:09 PM |
Tagged: nvidia, linux, graphics drivers

In the NVIDIA driver control panel, there is a slider that controls Performance vs. Quality. On Windows, I leave it set to “Let the 3D application decide” and change my 3D settings individually, as needed. I haven't used NVIDIA's control panel on Linux much, mostly because the machine I usually install Linux on is my laptop, which runs an AMD GPU, but the Linux UI seems to put a little more weight on the slider.


Or is that GTux?

Phoronix decided to test how each of these settings affects a few titles, and the only benchmark they ended up reporting is Team Fortress 2, because the other titles saw basically zero variance. TF2 saw a difference of 6 FPS, though, from 115 FPS at High Quality to 121 FPS at Quality. Oddly enough, Performance and High Performance performed worse than Quality.

To me, this sounds like NVIDIA has basically forgotten about the feature. It barely affects any title, the one game where it changes anything measurable dates from 2007, and it contradicts what the company is doing on other platforms. I predict that Quality is the default, which matches Windows (albeit with only three choices there: “Performance”, “Balanced”, and the default “Quality”). If it is, you should probably just leave it there 24/7, in case NVIDIA has literally not thought about tweaking the other settings. On Windows, the slider is somewhat redundant with GeForce Experience anyway.

Final note: Phoronix has only tested the GTX 980. Results may vary elsewhere, but probably don't.

Source: Phoronix

Intel Will Not Bring eDRAM to Socketed Skylake

Subject: Graphics Cards, Processors | September 17, 2015 - 09:33 PM |
Tagged: Skylake, kaby lake, iris pro, Intel, edram

Update: Sept 17, 2015 @ 10:30 ET -- To clarify: I'm speaking of socketed desktop Skylake. There will definitely be Iris Pro in the BGA options.

Before I begin, the upstream story has a few disputes that I'm not entirely sure about. The Tech Report published a post in September that cited an Intel spokesperson, who said that Skylake would not be getting a socketed processor with eDRAM (unlike Broadwell, which did just before Skylake launched). This could be a big deal, because the fast, on-package memory can act as a cache for the CPU as well as the GPU. It is sometimes called “128MB of L4 cache”.


Later, ITWorld and others posted stories saying that Intel killed off a Skylake processor with eDRAM, citing The Tech Report. Afterward, Scott Wasson claimed that a story, which may or may not be ITWorld's, had some “scrambled facts”, but wouldn't elaborate. Comparing the two articles doesn't really illuminate any massive, glaring issues, but I might just be missing something.

Update: Sept 18, 2015 @ 9:45pm -- So I apparently misunderstood the ITWorld article. They were claiming that Broadwell-C was discontinued, while The Tech Report was talking about Socketed Skylake with Iris Pro. I thought they both were talking about the latter. Moreover, Anandtech received word from Intel that Broadwell-C is, in fact, not discontinued. This is odd, because ITWorld said they had confirmation from Intel. My guess is that someone gave them incorrect information. Sorry that it took so long to update.

In the same thread, Ian Cutress of Anandtech asked whether The Tech Report benchmarked the processor after Intel tweaked its FCLK capabilities, which Scott did not (but is interested in doing). After Skylake shipped, Intel enabled a slight frequency boost on the FCLK link between the CPU and the PCIe lanes, which naturally benefits discrete GPUs. Since the original claim was that Broadwell-C is better than Skylake-K for gaming, giving that link a 25% boost (or removing a 20% loss; the two are the same change, since 1/0.8 = 1.25) could tilt Skylake back above Broadwell. We won't know until it's benchmarked, though.

Iris Pro and eDRAM, while skipping socketed Skylake, might arrive in future architectures such as Kaby Lake. It has seemingly been demonstrated that, in some situations, and ones relevant to gamers at that, the eDRAM's boost can help computation -- without even considering the compute potential of a better secondary GPU. One argument is that cutting the extra die room improves Intel's margins, which is almost definitely true, but I wonder how much attention Kaby Lake will get. Especially with AVX-512 and other features debatably removed, it almost feels like Intel is treating this Tock like a Tick, since they didn't really get one with Broadwell, and Kaby Lake will be the architecture that leads us to 10nm. On the other hand, each of these architectures is developed by an independent team, so I might be wrong to compare them serially.

What to use for 1080p on Linux or your future SteamOS machine

Subject: Graphics Cards | September 17, 2015 - 03:34 PM |
Tagged: linux, amd, nvidia

If you are using a 1080p monitor, or perhaps even outputting to a large 1080p TV, there is no point in picking up a $500+ GPU, as you will not be using the majority of its capabilities.  Phoronix has just investigated which GPU offers the best value for gaming at that resolution, putting five AMD GPUs from the Radeon R9 270X to the R9 Fury and six NVIDIA cards ranging from the GTX 950 to a GTX TITAN X into their test bench.  The TITAN X is a bit of overkill, unless your display is somehow capable of 200+ FPS.  When you look at frames per second per dollar, the GTX 950 came out on top, providing playable frame rates at a very low cost.  These results may change as AMD's Linux driver improves, but for now NVIDIA is the way to go for those who game on Linux.


"Earlier this week I posted a graphics card comparison using the open-source drivers and looking at the best value and power efficiency. In today's article is a larger range of AMD Radeon and NVIDIA GeForce graphics cards being tested under a variety of modern Linux OpenGL games/demos while using the proprietary AMD/NVIDIA Linux graphics drivers to see how not only the raw performance compares but also the performance-per-Watt, overall power consumption, and performance-per-dollar metrics."

Here are some more Graphics Card articles from around the web:



Source: Phoronix

MSI and Corsair Launch Liquid Cooled GTX 980 Ti SEA HAWK

Subject: Graphics Cards | September 17, 2015 - 09:14 AM |
Tagged: nvidia, msi, liquid cooled, GTX980Ti SEA HAWK, GTX 980 Ti, graphics card, corsair

We reported last night on Corsair's new Hydro GFX, a liquid-cooled GTX 980 Ti powered by an MSI GPU, and MSI has their own new product based on this concept as well.


"The MSI GTX 980Ti SEA HAWK utilizes the popular Corsair H55 closed loop liquid-cooling solution. The micro-fin copper base takes care of an efficient heat transfer to the high-speed circulation pump. The low-profile aluminum radiator is easy to install and equipped with a super silent 120 mm fan with variable speeds based on the GPU temperature. However, to get the best performance, the memory and VRM need top-notch cooling as well. Therefore, the GTX 980Ti SEA HAWK is armed with a ball-bearing radial fan and a custom shroud design to ensure the best cooling performance for all components."

The MSI GTX 980 Ti Sea Hawk appears identical to the Corsair Hydro GFX, and a look through the specs confirms the similarities:

  • NVIDIA GeForce GTX 980 Ti GPU
  • 2816 Processor Units
  • 1291 MHz/1190 MHz Boost/Base Core Clock
  • 6 GB 384-bit GDDR5 Memory
  • 7096 MHz Memory Clock
  • Dimensions: Card - 270x111x40 mm; Cooler - 151x118x52 mm
  • Weight: 1286 g
With a 1190 MHz base and 1291 MHz boost clock, the SEA HAWK has the same factory overclock as the Corsair-branded unit, and MSI is also advertising the card's potential to go further:

"Even though the GTX 980Ti SEA HAWK boasts some serious clock speeds out-of-the-box, the MSI Afterburner overclocking utility allows users to go even further. Explore the limits with Triple Overvoltage, custom profiles and real-time hardware monitoring."

I imagine the availability of this MSI-branded product will be greater than that of the Corsair-branded equivalent, but in either case you get a GTX 980 Ti with the potential to run as fast and cool as a custom-cooled solution, without any of the extra work. Pricing wasn't immediately available this morning, but expect something close to the $739 MSRP we saw from Corsair.

Source: MSI

Corsair and MSI Introduce Hydro GFX Liquid Cooled GeForce GTX 980 Ti

Subject: Graphics Cards | September 16, 2015 - 09:00 PM |
Tagged: nvidia, msi, liquid cooler, GTX 980 Ti, geforce, corsair, AIO

A GPU with an attached closed-loop liquid cooler is a little more mainstream these days, with AMD's Fury X a high-profile example, and now a partnership between Corsair and MSI is bringing a very powerful NVIDIA option to the market.

The new product is called the Hydro GFX, with NVIDIA's GeForce GTX 980 Ti supplying the GPU horsepower. Of course, the advantage of a closed-loop cooler is higher (sustained) clocks and lower temps/noise, which in turn means much better performance. Corsair explains:

    "Hydro GFX consists of a MSI GeForce GTX 980 Ti card with an integrated aluminum bracket cooled by a Corsair Hydro Series H55 liquid cooler.

    Liquid cooling keeps the card’s hottest, most critical components - the GPU, memory, and power circuitry - 30% cooler than standard cards while running at higher clock speeds with no throttling, boosting the GPU clock 20% and graphics performance up to 15%.

    The Hydro Series H55 micro-fin copper cooling block and 120mm radiator expels the heat from the PC reducing overall system temperature and noise. The result is faster, smoother frame rates at resolutions of 4K and beyond at whisper quiet levels."

The factory overclock on this 980 Ti is pretty substantial out of the box, with a 1190 MHz base (stock 1000 MHz) and 1291 MHz boost clock (stock 1075 MHz). Memory is not overclocked (running at the default 7096 MHz), so there should still be some headroom for overclocking thanks to the air cooling for the RAM/VRM.


A look at the box - and the Corsair branding

Specs from Corsair:

• NVIDIA GeForce GTX 980 Ti GPU with Maxwell 2.0 microarchitecture
• 1190/1291 MHz base/boost clock
• Clocked 20% faster than standard GeForce GTX 980 Ti cards for up to a 15% performance boost
• Integrated liquid cooling technology keeps GPU, video RAM, and voltage regulator 30% cooler than standard cards
• Corsair Hydro Series H55 liquid cooler with micro-fin copper block, 120mm radiator/fan
• Memory: 6GB GDDR5, 7096 MHz, 384-bit interface
• Outputs: 3x DisplayPort 1.2, HDMI 2.0, and Dual Link DVI
• Power: 250 watts
• Requirements: PCI Express 3.0 16x dual-width slot, 8+6-pin power connectors, 600 watt PSU
• Dimensions: 10.5 x 4.376 inches
• Warranty: 3 years
• MSRP: $739.99

As far as pricing/availability goes, Corsair says the new card will debut in October in the U.S. with an MSRP of $739.99.

Source: Corsair

Report: TSMC To Produce NVIDIA Pascal On 16 nm FinFET

Subject: Graphics Cards | September 16, 2015 - 09:16 AM |
Tagged: TSMC, Samsung, pascal, nvidia, hbm, graphics card, gpu

According to a report by BusinessKorea, TSMC has been selected to produce the upcoming Pascal GPU after initially competing with Samsung for the contract.


Though some had considered the possibility of both Samsung and TSMC sharing production (albeit on two different process nodes, as Samsung is on 14 nm FinFET), in the end the duties fall on TSMC's 16 nm FinFET alone if this report is accurate. The move is not too surprising considering the longstanding position TSMC has maintained as a fab for GPU makers and Samsung's lack of experience in this area.

The report didn't make the release date for Pascal any clearer, naming it only "next year" for the new HBM-powered GPU, which will also reportedly feature 16 GB of HBM2 memory for the flagship version of the card. This would potentially be the first GPU released at 16 nm (unless AMD has something in the works before Pascal's release), as all current AMD and NVIDIA GPUs are manufactured at 28 nm.

The premium priced ASUS GTX 980 Ti STRIX DCIII OC certainly does perform

Subject: Graphics Cards | September 8, 2015 - 05:56 PM |
Tagged: STRIX DirectCU III OC, nvidia, factory overclocked, asus, 980 Ti

The ASUS GTX 980 Ti STRIX DCIII OC comes with the newest custom cooler from ASUS and a fairly respectable factory overclock of 1216MHz base, 1317MHz boost, and a 7.2GHz effective clock on the impressive 6GB of VRAM.  Once [H]ard|OCP had a chance to use GPUTweak II, manual tweaking pushed those values to 1291MHz base and 1392MHz boost, along with a higher clock on the VRAM; for those who prefer automated overclocking, there are three modes ranging from Silent to OC that will instantly set the card up for you.  With an MSRP of $690 and a street price usually over $700 you have to be ready to invest a lot of hard-earned cash in this card, but at 4K resolutions it does outperform the Fury X by a noticeable margin.


"Today we have the custom built ASUS GTX 980 Ti STRIX DirectCU III OC 6GB video card. It features a factory overclock, extreme cooling capabilities and state of the art voltage regulation. We compare it to the AMD Radeon R9 Fury, and overclock the ASUS GTX 980 Ti STRIX DCIII to its highest potential and look at some 4K playability."

Here are some more Graphics Card articles from around the web:


Source: [H]ard|OCP

AMD Hosting R9 Nano Live Stream Tomorrow at 3pm ET

Subject: Graphics Cards | September 2, 2015 - 05:58 PM |
Tagged: video, r9 nano, Fiji, amd

Tomorrow afternoon, at 12pm PT / 3pm ET, AMD is hosting a live stream on its Twitch channel to show off and discuss a little more about the upcoming Radeon R9 Nano product we previewed last month.

I have no idea what is going to be discussed, how long it will be, or really what to expect beyond that. Apparently AMD is going to play some games on the R9 Nano as well as talk about the mods that the small form factor enables.


Source: AMD

IFA 2015: ASUS ROG Matrix GTX 980Ti Platinum Announced

Subject: Graphics Cards | September 2, 2015 - 11:43 AM |
Tagged: ROG, Matrix GTX 980Ti Platinum, matrix, IFA 2015, GTX 980 Ti, DirectCU II, asus

The GTX 980 Ti has received the Matrix treatment from ASUS, and the ROG GTX 980Ti Platinum graphics card features a DirectCU II cooler with the new plasma copper color scheme.


In addition to the claimed 25% cooling advantage from the DirectCU II cooler, which also promises "3X less noise than reference cards", the Matrix Platinum is constructed with Super Alloy Power II components for maximum stability. An interesting addition is something called Memory Defroster, which ASUS explains:

"Memory Defroster is an ASUS-exclusive technology that takes overclocking to extremes – it defrosts the Matrix card's memory during subzero overclocking to ensure sustained stability."

The overbuilt ROG Matrix cards are meant to be overclocked of course, and the GTX 980Ti Platinum offers convenience features such as a one-click "Safe Mode" to restore the card's BIOS to default settings, and a color-coded load indicator that "lets users check GPU load levels at a glance".


The Matrix GTX 980 Ti Platinum also comes with a one-year XSplit Gamecaster premium license, which is a $99 value. So what is the total cost of this card? That hasn't been announced just yet, and availability is also TBA.

Source: ASUS

NVIDIA Releases 355.82 WHQL Drivers

Subject: Graphics Cards | August 31, 2015 - 07:19 PM |
Tagged: nvidia, graphics drivers, geforce, drivers

Unlike last week's 355.80 Hotfix, today's driver is fully certified by both NVIDIA and Microsoft (WHQL). According to users on the GeForce Forums, this driver includes the hotfix changes, although I am still seeing a few users complain about memory issues under SLI. The general consensus seems to be that a number of bugs were fixed, and that driver quality is steadily increasing. This is also a “Game Ready” driver for Mad Max and Metal Gear Solid V: The Phantom Pain.


NVIDIA's GeForce Game Ready 355.82 WHQL Mad Max and Metal Gear Solid V: The Phantom Pain drivers (inhale, exhale, inhale) are now available for download at their website. Note that Windows 10 drivers are separate from the Windows 7 and Windows 8.x ones, so be sure not to take shortcuts when filling out the “select your driver” form. That, or just use GeForce Experience.

Source: NVIDIA

AMD Releases App SDK 3.0 with OpenCL 2.0

Subject: Graphics Cards, Processors | August 30, 2015 - 09:14 PM |
Tagged: amd, carrizo, Fiji, opencl, opencl 2.0

Apart from manufacturers with a heavy first-party focus, such as Apple and Nintendo, hardware is useless without developer support. In this case, AMD has updated their App SDK to include support for OpenCL 2.0, with code samples. It also updates the SDK for Windows 10, Carrizo, and Fiji, but it is not entirely clear how.


That said, OpenCL is important to those two products. Fiji has a very high compute throughput compared to any other GPU at the moment, and its memory bandwidth is often even more important for GPGPU workloads. It is also useful for Carrizo, because parallel compute and HSA features are what make it a unique product. AMD has been creating first-party software and helping popular third-party developers such as Adobe, but a little support for the world at large could bring a killer application or two, especially from the open-source community.

The SDK has been available in pre-release form for quite some time now, but it has finally graduated out of beta. OpenCL 2.0 allows for work to be generated on the GPU itself, which is especially useful for tasks whose next steps depend on previous results, as they no longer need to go back to the CPU.
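That last point is OpenCL 2.0's device-side enqueue. As a minimal sketch in OpenCL C, with hypothetical kernel names and logic (not taken from AMD's SDK samples), a parent kernel can launch follow-up work for only the elements that need it, without a round-trip to the CPU:

```c
// Parent kernel: for each flagged element, enqueue a child work-item on the
// device's default queue. The child runs without any CPU involvement.
kernel void refine(global float* data, global int* flags)
{
    size_t i = get_global_id(0);
    if (flags[i]) {
        enqueue_kernel(get_default_queue(),
                       CLK_ENQUEUE_FLAGS_NO_WAIT,
                       ndrange_1D(1),
                       ^{ data[i] *= 0.5f; });  // block body is the child kernel
    }
}
```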

Source: AMD

NVIDIA 355.80 Hotfix for Windows 10 SLI Memory Issues

Subject: Graphics Cards | August 27, 2015 - 05:23 PM |
Tagged: windows 10, nvidia, geforce, drivers, graphics drivers

While GeForce Hotfix driver 355.80 is not certified, or even beta, I know that a lot of our readers have issues with SLI in Windows 10. Especially in games like Battlefield 4, memory usage would expand until, apparently, a crash occurred. Since I run a single GPU, I have not experienced this issue myself and cannot comment on what happens. I just know that it was very common in the GeForce forums and in our comment section, so it was probably a big problem for many users.


If you are not experiencing this problem, then you probably should not install this driver. This is a hotfix that, as stated above, was released outside of NVIDIA's typical update process. You might experience new, unknown issues. Affected users, on the other hand, have the choice to install the fix now, which could very well be stable, or wait for a certified release later.

You can pick it up from NVIDIA's support site.

Source: NVIDIA

Detailed Photos of AMD Radeon R9 Nano Surface (Confirmed)

Subject: Graphics Cards | August 25, 2015 - 02:23 PM |
Tagged: Radeon R9 Nano, radeon, r9 nano, hbm, graphics, gpu, amd

New detailed photos of the upcoming Radeon R9 Nano have surfaced, and Ryan has confirmed with AMD that these are in fact real.


We've seen the outside of the card before, but for the first time we are provided a detailed look under the hood.


The cooler is quite compact and has copper heatpipes for both core and VRM

The R9 Nano is a very small card, and it will be powered by a single 8-pin power connector directed toward the back.


Connectivity is provided via three DisplayPort outputs and a single HDMI port

Fans of backplates will need to seek third-party offerings, as it looks like this card will have a bare PCB around back.


We will keep you updated if any official specifications become available, and of course we'll have complete coverage once the R9 Nano is officially launched!

The great GTX 950 review roundup

Subject: Graphics Cards | August 24, 2015 - 03:43 PM |
Tagged: nvidia, moba, maxwell, gtx 950, GM206, geforce, DOTA 2

Testing at the high end is more fun, and the number of MOBA gamers here at PCPer could be described as very sparse, to say the least.  Perhaps you are a MOBA gamer looking to play on a 1080p screen, with less than $200 to invest in a GPU, who feels that Ryan somehow missed a benchmark that is important to you.  One of the dozens of reviews linked below is likely to have covered that game or the specific feature you are looking for.  They also cover the gamut of cards available at launch from a wide variety of vendors, both stock and overclocked models.  If you just want a quick refresher on the specifications, and on what has happened to the pricing of already-released models, The Tech Report has handy tables for you to reference here.


"For most of this summer, much of the excitement in the GPU market has been focused on pricey, high-end products like the Radeon Fury and the GeForce GTX 980 Ti. Today, Nvidia is turning the spotlight back on more affordable graphics cards with the introduction of the GeForce GTX 950, a $159.99 offering that promises to handle the latest games reasonably well at the everyman's resolution of 1080p."

Here are some more Graphics Card articles from around the web:

Report: Leaked Slide From AMD Gives Glimpse of R9 Nano Performance

Subject: Graphics Cards | August 24, 2015 - 02:37 PM |
Tagged: rumor, report, Radeon R9 Nano, R9 290X, leak, hot chips, hbm, amd

A report from German-language tech site Golem contains what appears to be a slide leaked from AMD's GPU presentation at Hot Chips in Cupertino, and the results paint a very efficient picture of the upcoming Radeon R9 Nano GPU.


The spelling of "performance" doesn't mean this is fake, does it?

While only managing 3 FPS better than the Radeon R9 290X in this particular benchmark, the result was achieved with 1.9x the performance per watt of the baseline 290X in the test. The article speculates on the possible clock speed of the R9 Nano based on the relative performance, and estimates 850 MHz (which is of course up for debate, as no official specs are known).

The most compelling part of the result has to be the ability of the Nano to match or exceed the R9 290X in performance while only requiring a single 8-pin PCIe connector and drawing an average of only 175 watts. With a mini-ITX friendly 15 cm (5.9 inch) board, this could be one of the more compelling options for a mini gaming rig going forward.

We have a lot of questions that have yet to be answered, of course, including the actual speeds of both core and HBM, and just how quiet this air-cooled card might be under load. We shouldn't have to wait much longer!


GPU Market Share: NVIDIA Gains in Shrinking Add-in Board Market

Subject: Graphics Cards | August 21, 2015 - 11:30 AM |
Tagged: PC, nvidia, Matrox, jpr, graphics cards, gpu market share, desktop market share, amd, AIB, add in board

While we reported recently on the decline of overall GPU shipments, a new report out of Jon Peddie Research covers the add-in board segment to give us a look at the desktop graphics card market. So how are the big two (sorry, Matrox) doing?

GPU Supplier   Market Share This Quarter   Market Share Last Quarter   Market Share Last Year
AMD            18.0%                       22.5%                       37.9%
Matrox         0.0%                        0.1%                        0.1%
NVIDIA         81.9%                       77.4%                       62.0%

The big news is of course the drop in market share for AMD: down 4.5 points quarter-to-quarter, and down to just 18% from 37.9% last year. There will be many opinions as to why their share has been dropping over the last year, but it certainly didn't help that the 300-series GPUs are rebrands of the 200-series, and the new Fury cards have had very limited availability so far.


The graph from Mercury Research illustrates what is almost a mirror image, with NVIDIA gaining roughly 20 points as AMD lost roughly 20, widening the gap between the two by about 40 points. Ouch. Meanwhile (not pictured), Matrox didn't have a statistically meaningful quarter but still managed to appear on the JPR report with 0.1% market share (somehow) last quarter.

The desktop market isn't actually suffering quite as much as the overall PC market, and the enthusiast segment specifically is holding up:

"The AIB market has benefited from the enthusiast segment PC growth, which has been partially fueled by recent introductions of exciting new powerful (GPUs). The demand for high-end PCs and associated hardware from the enthusiast and overclocking segments has bucked the downward trend and given AIB vendors a needed prospect to offset declining sales in the mainstream consumer space."

But not all is well: the add-in board attach rate with desktops "has declined from a high of 63% in Q1 2008 to 37% this quarter". This is indicative of the industry's overall trend toward integrated GPUs, with AMD APUs and Intel processor graphics, as illustrated by a graphic from the report.


The year-to-year numbers show an overall drop of 18.8%, and even with their dominant 81.9% market share NVIDIA has still seen shipments decrease by 12% this quarter. These trends seem to indicate a gloomy future for discrete graphics in the coming years, but for now we in the enthusiast community will continue to keep it afloat. It would certainly be nice to see some gains from AMD soon to keep things interesting, which might help bring prices down from their lofty $400 - $600 mark for flagship cards.