Report: Leaked Slide From AMD Gives Glimpse of R9 Nano Performance

Subject: Graphics Cards | August 24, 2015 - 02:37 PM |
Tagged: rumor, report, Radeon R9 Nano, R9 290X, leak, hot chips, hbm, amd

A report from German-language tech site Golem contains what appears to be a slide leaked from AMD's GPU presentation at Hot Chips in Cupertino, and the results paint a very efficient picture of the upcoming Radeon R9 Nano GPU.

nano_chart.png

The spelling of "performance" doesn't mean this is fake, does it?

While the R9 Nano only manages 3 FPS more than the Radeon R9 290X in this particular benchmark, it does so at 1.9x the performance per watt of the baseline 290X in the test. The article speculates on the possible clock speed of the R9 Nano based on the relative performance, estimating around 850 MHz (which is of course up for debate, as no official specs are known).

The most compelling part of the result has to be the Nano's ability to match or exceed the R9 290X in performance while requiring just a single 8-pin PCIe connector and drawing an average of only 175 watts. With a mini-ITX-friendly 15 cm (5.9-inch) board, this could be one of the more attractive options for a mini gaming rig going forward.
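
For the curious, here is a quick back-of-the-envelope sketch of how a performance-per-watt figure like that falls out. The 175 W average comes from the slide; the 290X power draw and both FPS values below are placeholders I've assumed purely for illustration, since the leak doesn't provide them.

```python
# Back-of-the-envelope check of the "1.9x performance per watt" claim.
# The 175 W figure comes from the leaked slide; the 290X wattage and the
# FPS values are assumptions/placeholders, not numbers from the slide.

nano_fps = 48.0          # hypothetical benchmark result for the R9 Nano
r9_290x_fps = 45.0       # hypothetical 290X result (Nano is ~3 FPS ahead)

nano_watts = 175.0       # average board power quoted in the slide
r9_290x_watts = 275.0    # assumed typical gaming power for a 290X

nano_ppw = nano_fps / nano_watts
r9_290x_ppw = r9_290x_fps / r9_290x_watts

print(f"Nano:  {nano_ppw:.3f} FPS/W")
print(f"290X:  {r9_290x_ppw:.3f} FPS/W")
# With these placeholder numbers the ratio lands around 1.7x; AMD's slide claims 1.9x.
print(f"Ratio: {nano_ppw / r9_290x_ppw:.2f}x")
```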

We have a lot of questions that have yet to be answered of course, including the actual speed of both core and HBM, and just how quiet this air-cooled card might be under load. We shouldn't have to wait much longer!

Source: Golem.de

GPU Market Share: NVIDIA Gains in Shrinking Add-in Board Market

Subject: Graphics Cards | August 21, 2015 - 11:30 AM |
Tagged: PC, nvidia, Matrox, jpr, graphics cards, gpu market share, desktop market share, amd, AIB, add in board

While we reported recently on the decline of overall GPU shipments, a new report out of Jon Peddie Research (JPR) covers the add-in board segment to give us a look at the desktop graphics card market. So how are the big two (sorry, Matrox) doing?

GPU Supplier   Market Share This Quarter   Market Share Last Quarter   Market Share Last Year
AMD            18.0%                       22.5%                       37.9%
Matrox         0.00%                       0.1%                        0.1%
NVIDIA         81.9%                       77.4%                       62.0%

The big news is of course AMD's drop of 4.5 points of market share quarter-to-quarter, falling to just 18% from 37.9% a year ago. There will be many opinions as to why its share has been dropping over the last year, but it certainly didn't help that the 300-series GPUs are rebrands of the 200-series, and the new Fury cards have had very limited availability so far.
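
For reference, here is a quick sketch of how those quarter-to-quarter and year-to-year swings fall out of the JPR table above. The share numbers are taken straight from the table; the arithmetic is trivial, but it makes the scale of the shift easy to see.

```python
# Market share figures from the JPR table above (percent of AIB shipments).
share = {
    "AMD":    {"this_qtr": 18.0, "last_qtr": 22.5, "last_year": 37.9},
    "Matrox": {"this_qtr": 0.0,  "last_qtr": 0.1,  "last_year": 0.1},
    "NVIDIA": {"this_qtr": 81.9, "last_qtr": 77.4, "last_year": 62.0},
}

for vendor, s in share.items():
    qoq = s["this_qtr"] - s["last_qtr"]
    yoy = s["this_qtr"] - s["last_year"]
    print(f"{vendor:7s} QoQ: {qoq:+5.1f} pts   YoY: {yoy:+5.1f} pts")
# AMD: -4.5 pts QoQ and -19.9 pts YoY; NVIDIA gains almost exactly what AMD loses.
```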

mercury_research.png

The graph from Mercury Research illustrates what is almost a mirror image, with NVIDIA gaining roughly 20 points of share year-over-year as AMD lost the same amount. Ouch. Meanwhile (not pictured) Matrox didn't have a statistically meaningful quarter, but still managed to appear on the JPR report with 0.1% market share (somehow) last quarter.

The desktop market isn't actually suffering quite as much as the overall PC market, and specifically the enthusiast market.

"The AIB market has benefited from the enthusiast segment PC growth, which has been partially fueled by recent introductions of exciting new powerful (GPUs). The demand for high-end PCs and associated hardware from the enthusiast and overclocking segments has bucked the downward trend and given AIB vendors a needed prospect to offset declining sales in the mainstream consumer space."

But not all is well: the overall add-in board attach rate with desktops "has declined from a high of 63% in Q1 2008 to 37% this quarter". This is indicative of the industry's overall trend toward integrated GPUs, with AMD APUs and Intel processor graphics, as illustrated by this graphic from the report.

AIB_attach.jpg

The year-to-year numbers show an overall drop of 18.8% in shipments, and even with its dominant 81.9% market share, NVIDIA still saw its shipments decrease by 12% this quarter. These trends seem to point to a gloomy future for discrete graphics in the coming years, but for now we in the enthusiast community will continue to keep it afloat. It would certainly be nice to see some gains from AMD soon to keep things interesting, which might also help bring flagship prices down from their current lofty $400 - $600 mark.

Author:
Manufacturer: NVIDIA

Another Maxwell Iteration

The mainstream end of the graphics card market is about to get a bit more complicated with today’s introduction of the GeForce GTX 950. Based on a slightly cut down GM206 chip, the same used in the GeForce GTX 960 that was released almost 8 months ago, the new GTX 950 will fill a gap in the product stack for NVIDIA, resting right at $160-170 MSRP. Until today that next-down spot from the GTX 960 was filled by the GeForce GTX 750 Ti, the very first iteration of Maxwell (we usually call it Maxwell 1) that came out in February of 2014!

Even though that is a long time to go without refreshing the GTX x50 part of the lineup, NVIDIA was likely hesitant to do so based on the overwhelming success of the GM107 for mainstream gaming. It was low cost, incredibly efficient, and didn't require any external power to run. That led us down the path of upgrading OEM PCs with the GTX 750 Ti, an article and video that still get hundreds of views and dozens of comments a week.

IMG_3123.JPG

The GTX 950 has some pretty big shoes to fill. I can tell you right now that it uses more power than the GTX 750 Ti, and it requires a 6-pin power connector, but it does so while increasing gaming performance dramatically. The primary competition from AMD is the Radeon R7 370, a Pitcairn GPU that is long in the tooth and missing many of the features that Maxwell provides.

And NVIDIA is taking a secondary angle with the GTX 950 launch, targeting MOBA players (DOTA 2 in particular) directly and aggressively. With the success of this style of game over the last several years, and the impressive $18M+ purse for the largest DOTA 2 tournament just behind us, there isn't a better area of PC gaming to be going after today. But are the tweaks and changes to the card and software really going to make a difference for MOBA gamers, or is it just marketing fluff?

Let’s dive into everything GeForce GTX 950!

Continue reading our review of the NVIDIA GeForce GTX 950 2GB Graphics Card!!

Intel (Allegedly) Plans DisplayPort Adaptive-Sync

Subject: Graphics Cards, Displays | August 19, 2015 - 08:03 PM |
Tagged: Intel, freesync, DisplayPort, adaptive sync

DisplayPort Adaptive-Sync is a VESA standard, pushed by AMD, that allows the input signal to control when a monitor refreshes. A normal monitor redraws on a fixed interval because old CRT monitors needed to scan with an electron gun, and that took time. LCDs never had that physical limitation, but they kept the fixed refresh behavior anyway. This means the monitor draws a frame whether a new one is ready or not, which leads to tearing, stutter, and other nasty effects if the GPU can't keep up. With Adaptive-Sync, GPUs don't "miss the train" -- the train leaves when they board.
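
The difference is easy to see in a toy simulation. This is not how any real driver or display scaler works; it's just a sketch with assumed numbers (a 60 Hz fixed panel, a hypothetical 30-75 Hz variable refresh window, and made-up frame times) showing how a fixed interval turns one slow frame into a visible hitch, while an adaptive refresh simply fires when the frame is ready.

```python
# Toy model of fixed vs. adaptive refresh. Frame times, the 60 Hz fixed rate,
# and the 30-75 Hz adaptive window are all assumptions for illustration only.

frame_times_ms = [14.0, 18.5, 22.0, 16.0, 25.0, 15.5]   # made-up GPU render times

FIXED_INTERVAL = 1000.0 / 60          # 16.67 ms between scans on a 60 Hz panel
ADAPTIVE_MIN   = 1000.0 / 75          # panel can't refresh faster than 75 Hz
ADAPTIVE_MAX   = 1000.0 / 30          # ...or slower than 30 Hz

def fixed_refresh(frame_times):
    """With vsync on a fixed 60 Hz panel, a late frame waits for the next scan."""
    t, shown = 0.0, []
    for ft in frame_times:
        t += ft
        # next scheduled scanout at or after the frame's completion time
        scan = ((t // FIXED_INTERVAL) + 1) * FIXED_INTERVAL
        shown.append(scan)
    return shown

def adaptive_refresh(frame_times):
    """With Adaptive-Sync, the panel refreshes when the frame is ready (within limits)."""
    t, last, shown = 0.0, 0.0, []
    for ft in frame_times:
        t += ft
        scan = max(t, last + ADAPTIVE_MIN)       # respect the panel's maximum refresh rate
        scan = min(scan, last + ADAPTIVE_MAX)    # and its minimum refresh rate
        shown.append(scan)
        last = scan
    return shown

for name, fn in (("fixed", fixed_refresh), ("adaptive", adaptive_refresh)):
    times = fn(frame_times_ms)
    gaps = [b - a for a, b in zip([0.0] + times[:-1], times)]
    # fixed shows a 33 ms hitch when a frame misses its slot; adaptive tracks render time
    print(name, [f"{g:.1f}ms" for g in gaps])
```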

Intel-logo.png

Intel has, according to The Tech Report, decided to support Adaptive-Sync -- but not necessarily in their current product line. David Blythe of Intel would not comment on specific dates or release windows, just that it is in their plans. This makes sense for Intel because it allows their customers to push settings higher while maintaining a smooth experience, which matters a lot for users of integrated graphics.

While "AMD FreeSync" is a stack of technologies, VESA DisplayPort Adaptive-Sync should be all that is required on the monitor side. This should mean that Intel has access to all of AMD's adaptive refresh monitors, although the driver and GPU circuitry would need to be their own burden. G-Sync monitors (at least those with NVIDIA-designed modules -- which is currently all of them, except for one laptop I think) would be off limits, though.

Author:
Manufacturer: Stardock

Benchmark Overview

I knew that the move to DirectX 12 was going to be a big shift for the industry. Since the introduction of the AMD Mantle API along with the Hawaii GPU architecture, we have been inundated with game developers and hardware vendors talking about the potential benefits of lower-level APIs, which give game developers and game engines more direct access to GPU hardware and more flexible CPU threading. The result, we were told, would be that your current hardware could take you further, and that future games and applications could fundamentally change how they are built to dramatically enhance gaming experiences.
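
To illustrate the threading point, here is a toy model of the submission pattern the new APIs encourage: several CPU threads record their own command lists in parallel, and the lists are then submitted to a single queue in an explicit order. To be clear, the classes below are invented for the sketch (real Direct3D 12 work goes through interfaces like ID3D12GraphicsCommandList in C++); it only shows why spreading command recording across cores relieves the single-threaded driver bottleneck of older APIs.

```python
# Toy model of DX12/Mantle-style multi-threaded command recording.
# The CommandList/CommandQueue classes are invented for illustration;
# real D3D12 work happens through ID3D12GraphicsCommandList et al. in C++.
from concurrent.futures import ThreadPoolExecutor

class CommandList:
    def __init__(self, name):
        self.name = name
        self.commands = []

    def draw(self, obj):
        # In a real engine this would encode state changes + draw calls into GPU commands.
        self.commands.append(f"draw {obj}")

class CommandQueue:
    def execute(self, command_lists):
        # Submission order is explicit and controlled by the application.
        for cl in command_lists:
            print(f"submitting {cl.name} ({len(cl.commands)} commands)")

def record(chunk_id, objects):
    """Each worker thread records its own command list independently."""
    cl = CommandList(f"cmdlist-{chunk_id}")
    for obj in objects:
        cl.draw(obj)
    return cl

scene = [f"object-{i}" for i in range(1000)]
chunks = [scene[i::4] for i in range(4)]          # split the scene across 4 threads

with ThreadPoolExecutor(max_workers=4) as pool:
    lists = list(pool.map(record, range(4), chunks))

CommandQueue().execute(lists)                      # single, ordered submission
```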

I knew that reader interest in DX12 was outstripping my expectations when I did a live blog of the official DX12 unveiling by Microsoft at GDC. In a format that consisted simply of my text commentary and photos of the slides being shown (no video at all), we had more than 25,000 live readers who stayed engaged the whole time. Comments and questions flew in during the event, more than my staff and I could possibly handle in real time. It turned out that gamers were indeed very much interested in what DirectX 12 might offer them with the release of Windows 10.

game3.jpg

Today we are taking a look at the first real-world gaming benchmark that utilizes DX12. Back in March I was able to do some early testing with an API-specific test from Futuremark's 3DMark that evaluates the overhead implications of DX12, DX11 and even AMD Mantle. That first look at DX12 was interesting and painted an amazing picture of the potential benefits of Microsoft's new API, but it wasn't built on a real game engine. In our Ashes of the Singularity benchmark testing today, we finally get an early look at what a real implementation of DX12 looks like.

And as you might expect, not only are the results interesting, but a significant amount of controversy has been created over what those results actually tell us. AMD has one story, NVIDIA another, and Stardock and the Nitrous engine developers yet another. It's all incredibly intriguing.

Continue reading our analysis of the Ashes of the Singularity DX12 benchmark!!

Author:
Manufacturer: Intel

It comes after 8, but before 10

As the week of Intel’s Developer Forum (IDF) begins, you can expect to see a lot of information about Intel’s 6th Generation Core architecture, codenamed Skylake, finally revealed. When I posted my review of the Core i7-6700K, the first product based on that architecture to be released in any capacity, I was surprised that Intel was willing to ship product without the normal amount of background information for media and developers. Rather than give us the details and then ship product, which has happened for essentially every consumer product release I have been a part of, Intel did the reverse: ship a consumer friendly CPU and then promise to tell us how it all works later in the month at IDF.

Today I came across a document posted on Intel's website that dives into very specific detail on the new Gen9 graphics and compute architecture of Skylake. Details on the Core architecture changes are not present; instead we are given details on how the traditional GPU portion of the SoC has changed. To be clear: I haven't had any formal briefing from Intel on this topic or anything surrounding the architecture of Skylake or the new Gen9 graphics system, but I wanted to share the details we found available. I am sure we'll learn more this week as IDF progresses, so I will update this story where necessary.

What Intel calls Processor Graphics is what we simply called integrated graphics for the longest time. The purpose and role of processor graphics has changed drastically over the years; it is now responsible not only for 3D graphics rendering but also for the compute, media and display capabilities of the Intel Skylake SoC (when discrete add-in graphics is not used). The architecture document used to source this story focuses on Gen9 graphics, the compute architecture utilized in the latest Skylake CPUs. The Intel HD Graphics 530 on the Core i7-6700K / Core i5-6600K is the first product released and announced using Gen9 graphics, and is also the first to adopt Intel's new 3-digit naming scheme.

skylakegen9-4.jpg

This die shot of the Core i7-6700K shows the increased size and prominence of the Gen9 graphics in the overall SoC design. With four traditional x86 CPU cores and one "slice" implementation of Gen9 graphics (with three visible sub-slices, which we'll describe below), this is not likely to be the highest-performing iteration of the latest Intel HD Graphics technology.
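
As a rough sanity check on what one slice should be capable of, here is the usual peak-FP32 arithmetic. The EU count, per-EU throughput and clock below are commonly cited GT2 / HD Graphics 530 figures rather than anything confirmed in this article, so treat the result as an estimate.

```python
# Rough peak FP32 throughput for a one-slice (GT2) Gen9 part such as HD Graphics 530.
# EU count, FLOPs/clock, and max clock are commonly cited figures, not from this article.
sub_slices       = 3
eus_per_subslice = 8
eus              = sub_slices * eus_per_subslice      # 24 EUs in one slice

flops_per_eu_per_clock = 2 * 4 * 2   # two SIMD-4 FPUs, FMA counted as 2 ops -> 16
max_clock_ghz          = 1.15        # HD 530 max dynamic frequency on the i7-6700K

peak_gflops = eus * flops_per_eu_per_clock * max_clock_ghz
print(f"{eus} EUs -> ~{peak_gflops:.0f} GFLOPS peak FP32")   # ~441 GFLOPS
```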

skylakegen9-4.2.jpg

Like the Intel processors before it, the Skylake design utilizes a ring bus architecture to connect the different components of the SoC. This bi-directional interconnect has a 32-byte wide data bus and connects to multiple "agents" on the CPU. Each individual CPU core is considered its own agent, while the Gen9 compute architecture is considered one complete agent. The system agent bundles the DRAM memory controller, the display controller, PCI Express and the other I/O interfaces that communicate with the rest of the PC. Any off-chip memory requests and transactions occur through this bus, while on-chip data transfers tend to be handled differently.
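
The 32-byte data bus width is stated in Intel's document; the ring clock is not, though it has historically tracked the core clock fairly closely. With that caveat, a quick calculation gives a feel for the bandwidth on tap.

```python
# Ring interconnect bandwidth estimate. The 32-byte width is from Intel's Gen9
# document; the ring clock is an assumption (it roughly tracks the core clock).
bus_width_bytes = 32
ring_clock_ghz  = 4.0        # assumed; e.g. a Core i7-6700K near its boost clock

gb_per_s_per_direction = bus_width_bytes * ring_clock_ghz
print(f"~{gb_per_s_per_direction:.0f} GB/s per direction "
      f"({2 * gb_per_s_per_direction:.0f} GB/s bidirectional)")
```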

Continue reading our look at the new Gen9 graphics and compute architecture on Skylake!!

Overall GPU Shipments Down from Last Year, PC Industry Drops 10%

Subject: Graphics Cards, Systems | August 17, 2015 - 11:00 AM |
Tagged: NPD, gpu, discrete gpu, graphics, marketshare, PC industry

News from NPD Research today shows a sharp decline in discrete graphics shipments from all major vendors. Not great news for the PC industry, but not all that surprising, either.

graphics_shipments.jpg

These numbers don’t indicate a lack of discrete GPU interest in the PC enthusiast community of course, but certainly show how the mainstream market has changed. OEM laptop and (more recently) desktop makers predominantly use processor graphics from Intel and AMD APUs, though the decrease of over 7% for Intel GPUs suggests a decline in PC shipments overall.

Here are the highlights, quoted directly from NPD Research:

  • AMD's overall unit shipments decreased -25.82% quarter-to-quarter, Intel's total shipments decreased -7.39% from last quarter, and Nvidia's decreased -16.19%.
  • The attach rate of GPUs (includes integrated and discrete GPUs) to PCs for the quarter was 137% which was down -10.82% from last quarter, and 26.43% of PCs had discrete GPUs, which is down -4.15%.
  • The overall PC market decreased -4.05% quarter-to-quarter, and decreased -10.40% year-to-year.
  • Desktop graphics add-in boards (AIBs) that use discrete GPUs decreased -16.81% from last quarter.

marketshare.png

An overall decrease of 10.4% year-to-year indicates what I'll call the continuing evolution of the PC (rather than a decline, per se), and shows how many have come to depend on smartphones for the basic computing tasks (email, web browsing) that once required a PC. Tablets didn’t replace the PC in the way that was predicted only 5 years ago, and it’s almost become essential to pair a PC with a smartphone for a complete personal computing experience (sorry, tablets – we just don’t NEED you as much).

I would guess anyone reading this on a PC enthusiast site is not only using a PC, but probably one with discrete graphics, too. Or maybe you exclusively view our site on a tablet or smartphone? I for one won’t stop buying PC components until they just aren’t available anymore, and that dark day is probably still many years off.

Source: NPD Research

AMD Radeon R9 Fury Unlocked as Fury X, Overclocked to 1 GHz HBM

Subject: Graphics Cards | August 12, 2015 - 05:29 PM |
Tagged: STRIX R9 Fury, Radeon R9 Fury, overclocking, oc, LN2, hbm, fury x, asus, amd

What happens when you unlock an AMD Fury to have the Compute Units of a Fury X, and then overclock the snot out of it using LN2? User Xtreme Addict in the HWBot forums has created a comprehensive guide to do just this, and the results are incredible.

fury_ln2_01.JPG

Not for the faint of heart (image credit: Xtreme Addict)

"The steps include unlocking the Compute Units to enable Fury X grade performance, enabling the hotwire soldering pads, a 0.95v Rail mod, and of course the trimpot/hotwire VGPU, VMEM, VPLL (VDDCI) mods.

The result? A GPU frequency of 1450 MHz and HBM frequency of 1000 MHz. For the HBM that's a 100% overclock."

Beginning with a stock ASUS R9 Fury STRIX card, Xtreme Addict performed some surgery to fully unlock the voltage, and unlocked the Compute Units using a tool from this Overclock.net thread.

fury_ln2_02.jpg

The results? Staggering. HBM at 1000 MHz is double the stock 500 MHz of the Fury X, and a GPU core of 1450 MHz is a 400 MHz increase over the Fury X's stock 1050 MHz. So what kind of performance did this heavily overclocked card achieve?
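
To put the HBM overclock in raw-bandwidth terms, the math is straightforward: Fiji's published 4096-bit HBM interface, double data rate, and the 500 MHz stock clock. These are theoretical peak numbers, not measured throughput.

```python
# Theoretical HBM bandwidth at stock vs. this overclock (peak figures, not measured).
bus_width_bits = 4096            # first-generation HBM interface on Fiji
stock_mhz      = 500             # published Fury / Fury X HBM clock
oc_mhz         = 1000            # Xtreme Addict's LN2 result

def hbm_bandwidth_gbps(clock_mhz):
    # bits -> bytes, MHz -> GHz, x2 for double data rate
    return (bus_width_bits / 8) * (clock_mhz / 1000) * 2

print(f"stock: {hbm_bandwidth_gbps(stock_mhz):.0f} GB/s")   # 512 GB/s
print(f"OC:    {hbm_bandwidth_gbps(oc_mhz):.0f} GB/s")      # 1024 GB/s

# GPU core: 1450 MHz vs. the Fury X's 1050 MHz stock clock
print(f"core overclock: {1450 / 1050 - 1:.0%}")             # ~38%
```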

"The performance goes up from 6237 points at default to 6756 after unlocking the CUs, then 8121 points after overclock on air cooling, to eventually end up at 9634 points when fully unleashed with liquid nitrogen."

Apparently they were able to push the card even further, ending up with a whopping 10033 score in 3DMark Fire Strike Extreme.

fury_ln2_03.JPG

While this method is far too extreme for 99% of enthusiasts, the idea of unlocking a retail Fury to the level of a Fury X through software/BIOS mods is much more accessible, as is the possibility of reaching much higher clocks through advanced cooling methods.

Unfortunately, if reading through this makes you want to run out and grab one of these STRIX cards, availability is still limited. Hopefully supply catches up with demand in the near future.

fury_strix.PNG

A quick look at stock status on Newegg for the featured R9 Fury card

Source: HWBot

3dfx Voodoo 3 2000 PCI Unboxing - What year is it??!?

Subject: Graphics Cards | August 12, 2015 - 04:43 PM |
Tagged: what year is it, voodoo 3, voodoo, video, unboxing, pci, 3dfx

What do you do when you have a new, in box 3dfx Voodoo 3 2000 graphics card that gets some water damage? You do a classic unboxing and then try to get that PCI graphics card from 1999 up and running and playing some Unreal Tournament. 
 
pic1.jpg
 
Were we successful?
 

GIGABYTE GTX 980 Ti G1 GAMING loves it when you overclock

Subject: Graphics Cards | August 12, 2015 - 02:44 PM |
Tagged: GTX 980 Ti G1 GAMING, gigabyte, GTX 980 Ti, factory overclocked

The Gigabyte GTX 980 Ti G1 GAMING card comes with a 1152MHz base clock and 1241MHz boost clock straight out of the box, and uses two 8-pin power connectors as opposed to an 8-pin and a 6-pin.  That extra power and the WINDFORCE 3X custom cooler help you when overclocking the card beyond the frequencies it ships at.  [H]ard|OCP used OC GURU II to raise the voltage provided to the card and reached an overclock that hit 1367MHz in game with a 7GHz clock for the VRAM.  Manually they managed to go even further: the VRAM could reach 8GHz and the GPU clock was measured at 1535MHz in game, a rather significant increase.  The overclock increased performance by around 10% in most of the tests, which makes this card impressive even before you consider some of the other beneficial features you can read about at [H]ard|OCP.
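
As a rough idea of what that VRAM overclock is worth, the 980 Ti's standard 384-bit memory interface makes the peak-bandwidth arithmetic simple. The 7 GHz and 8 GHz effective data rates are from the article; the bus width is the stock GM200 spec, and these are theoretical peaks rather than measured numbers.

```python
# Theoretical memory bandwidth on a GTX 980 Ti at the reported VRAM clocks.
bus_width_bits = 384               # standard GTX 980 Ti memory interface

def gddr5_bandwidth_gbps(effective_ghz):
    # GDDR5 "GHz" figures quoted here are already the effective data rate
    return (bus_width_bits / 8) * effective_ghz

for clock in (7.0, 8.0):           # GHz effective, as reported by [H]ard|OCP
    print(f"{clock:.0f} GHz -> {gddr5_bandwidth_gbps(clock):.0f} GB/s")
# 7 GHz -> 336 GB/s, 8 GHz -> 384 GB/s: roughly a 14% bump in peak bandwidth
```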

1439202407lsYxTzB0s7_1_15_l.jpg

"Today we review a custom built retail factory overclocked GIGABYTE GTX 980 Ti G1 GAMING video card. This video card is built to overclock in every way. We'll take this video card, compare it to the AMD Radeon R9 Fury X and overclock the GIGABYTE GTX 980 Ti G1 GAMING to its highest potential. The overclocking potential is amazing."

Source: [H]ard|OCP