You have a 4K monitor and $650 USD; what do you do?

Subject: Graphics Cards | July 27, 2015 - 04:33 PM |
Tagged: 4k, amd, R9 FuryX, GTX 980 Ti, gtx titan x

[H]ard|OCP have set up their testbed for a 4K showdown between the similarly priced GTX 980 Ti and Radeon R9 Fury X, with the $1000 TITAN X tossed in for those with more money than sense.  The test uses the new Catalyst 15.7 and GeForce 353.30 drivers to level the playing field while benchmarking Witcher 3, GTA V and other games.  When the dust settled, a clear pattern emerged.  The deltas were not huge, but when you are paying $650 plus tax for a GPU, even a few extra frames or one more usable graphics setting really matters.  Perhaps the most interesting result was the redemption of the TITAN X; its extra price was actually reflected in its performance.  Check the results out for yourself here.

1437535126iwTl74Zfm5_1_1.gif

"We take the new AMD Radeon R9 Fury X and evaluate the 4K gaming experience. We will also compare against the price competitive GeForce GTX 980 Ti as well as a GeForce GTX TITAN X. Which video card provides the best experience and performance when gaming at glorious 4K resolution?"


Source: [H]ard|OCP
Manufacturer: PC Perspective

... But Is the Timing Right?

Windows 10 is about to launch and, with it, DirectX 12. Apart from the massive increase in draw call throughput, Explicit Multiadapter, both Linked and Unlinked, has generated a few pockets of excitement here and there. I am a bit concerned, though. People seem to treat this as a new, novel concept that gives game developers tools they've never had before. It really isn't. Depending on what you want to do with secondary GPUs, game developers could have used them for years. Years!

Before we talk about the cross-platform examples, we should talk about Mantle. It is the closest analog to DirectX 12 and Vulkan that we have. It served as the base specification for Vulkan, which the Khronos Group modified to use SPIR-V instead of HLSL, among other changes. Some claim that it was also the foundation of DirectX 12, which would not surprise me given what I've seen online and in the SDK. Allow me to show you how the API works.

amd-2015-mantle-execution-model.png

Mantle is an interface that mixes graphics, compute, and DMA (memory access) work into queues of commands. Generating those commands is easily done in parallel, as each thread can record its own command lists, which is great for multi-core processors. Each queue, the list of commands headed to the GPU, can also be handled independently. An interesting side effect is that, since every device works on standard data formats such as IEEE 754 floating-point numbers, no one cares where these queues go as long as the work gets done quickly enough.

Since each queue is independent, an application can choose to manage many of them. None of these lists really needs to know what is happening to any other. As such, they can be pointed at multiple, even wildly different, graphics devices. GPUs of different models and capabilities can work together, as long as they support the core of Mantle.
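To make that model concrete, here is a rough C++ sketch of the execution model described above. The types in it (Command, CommandList, DeviceQueue) are hypothetical stand-ins rather than the real Mantle, Vulkan, or DirectX 12 API; the point is simply that command lists are recorded per thread without any locking, and that each finished list can be submitted to whichever device queue the application chooses.

// Illustrative sketch only -- not the real Mantle/Vulkan/DX12 API. The types below
// mimic the execution model described above: each CPU thread records its own
// command list, and each finished list goes to whichever device queue the app picks.
#include <cstdio>
#include <mutex>
#include <queue>
#include <string>
#include <thread>
#include <vector>

struct Command { std::string name; };         // one GPU command (draw, dispatch, copy...)
using CommandList = std::vector<Command>;     // recorded on a single thread, no locking needed

struct DeviceQueue {                          // one queue on one GPU (graphics, compute, or DMA)
    std::string device;
    std::mutex lock;
    std::queue<CommandList> pending;

    void submit(CommandList list) {           // submission is the only synchronized step
        std::lock_guard<std::mutex> guard(lock);
        pending.push(std::move(list));
    }
};

int main() {
    // Two "devices" that need not be the same model, or even the same vendor.
    DeviceQueue primary{"GPU 0 (graphics)"};
    DeviceQueue secondary{"GPU 1 (compute)"};

    // Each thread records its own command list in parallel, then picks a queue.
    auto record = [](DeviceQueue& target, const char* pass) {
        CommandList list;
        list.push_back({std::string("bind state for ") + pass});
        list.push_back({std::string("dispatch ") + pass});
        target.submit(std::move(list));
    };

    std::thread t1(record, std::ref(primary), "shadow pass");
    std::thread t2(record, std::ref(secondary), "particle physics");
    t1.join();
    t2.join();

    std::printf("%zu list(s) queued on %s, %zu on %s\n",
                primary.pending.size(), primary.device.c_str(),
                secondary.pending.size(), secondary.device.c_str());
    return 0;
}

In the real APIs the queues are backed by driver objects and the GPU drains them asynchronously, but the division of labor is the same: recording is embarrassingly parallel, and only submission needs any coordination.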

microsoft-dx12-build15-ue4frame.png

DirectX 12 and Vulkan adopted this model so their respective developers could use the same functionality across vendors. Mantle did not invent the concept, however. What Mantle did was expose this architecture to graphics, which can make use of all the fixed-function hardware that is unique to GPUs. Before AMD's API, this was already how GPU compute architectures were designed. Game developers could have spun up an OpenCL workload to process physics, audio, pathfinding, visibility, or even lighting and post-processing effects... on a secondary GPU, even one from a completely different vendor.

Vista's multi-GPU bug might get in the way, but it was possible in Windows 7 and, I believe, XP too.
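As a sketch of that older route, the following OpenCL host code (assuming an OpenCL SDK and its standard C API headers) enumerates every GPU across all installed platforms, treats one of them as the secondary device, and hands it a small compute job. The kernel is a trivial stand-in for the physics, audio, or post-processing work a game might offload; error handling is trimmed for brevity.

// A minimal OpenCL host-side sketch: find all GPUs across vendors, pick a
// "secondary" one, and dispatch a small compute workload to it.
#include <CL/cl.h>
#include <cstdio>
#include <vector>

static const char* kSource =
    "__kernel void square(__global float* data) {\n"
    "    size_t i = get_global_id(0);\n"
    "    data[i] = data[i] * data[i];\n"
    "}\n";

int main() {
    // Gather every GPU from every installed OpenCL platform (AMD, NVIDIA, Intel...).
    cl_uint numPlatforms = 0;
    clGetPlatformIDs(0, nullptr, &numPlatforms);
    std::vector<cl_platform_id> platforms(numPlatforms);
    clGetPlatformIDs(numPlatforms, platforms.data(), nullptr);

    std::vector<cl_device_id> gpus;
    for (cl_platform_id p : platforms) {
        cl_uint n = 0;
        if (clGetDeviceIDs(p, CL_DEVICE_TYPE_GPU, 0, nullptr, &n) != CL_SUCCESS || n == 0)
            continue;
        std::vector<cl_device_id> devs(n);
        clGetDeviceIDs(p, CL_DEVICE_TYPE_GPU, n, devs.data(), nullptr);
        gpus.insert(gpus.end(), devs.begin(), devs.end());
    }
    if (gpus.empty()) { std::fprintf(stderr, "no OpenCL GPU found\n"); return 1; }

    // Treat the last enumerated GPU as the "secondary" card; a real engine would
    // choose based on which device is driving the display.
    cl_device_id device = gpus.back();

    cl_int err = CL_SUCCESS;
    cl_context ctx = clCreateContext(nullptr, 1, &device, nullptr, nullptr, &err);
    cl_command_queue queue = clCreateCommandQueue(ctx, device, 0, &err);

    cl_program program = clCreateProgramWithSource(ctx, 1, &kSource, nullptr, &err);
    clBuildProgram(program, 1, &device, nullptr, nullptr, nullptr);
    cl_kernel kernel = clCreateKernel(program, "square", &err);

    // Upload some dummy "physics" data, run the kernel, and read the results back.
    std::vector<float> data(1024, 3.0f);
    cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                                data.size() * sizeof(float), data.data(), &err);
    clSetKernelArg(kernel, 0, sizeof(cl_mem), &buf);
    size_t globalSize = data.size();
    clEnqueueNDRangeKernel(queue, kernel, 1, nullptr, &globalSize, nullptr, 0, nullptr, nullptr);
    clEnqueueReadBuffer(queue, buf, CL_TRUE, 0, data.size() * sizeof(float),
                        data.data(), 0, nullptr, nullptr);

    std::printf("secondary GPU returned %.1f (expected 9.0)\n", data[0]);

    clReleaseMemObject(buf); clReleaseKernel(kernel); clReleaseProgram(program);
    clReleaseCommandQueue(queue); clReleaseContext(ctx);
    return 0;
}

Nothing in that snippet needs DirectX 12 or even Windows 10; it runs on any system with an OpenCL-capable GPU and working drivers, which is exactly the point.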

Read on to see a couple of reasons why we are only getting this now...

Rumor: NVIDIA Pascal up to 17 Billion Transistors, 32GB HBM2

Subject: Graphics Cards | July 24, 2015 - 12:16 PM |
Tagged: rumor, pascal, nvidia, HBM2, hbm, graphics card, gpu

An exclusive report from Fudzilla claims some outlandish numbers for the upcoming NVIDIA Pascal GPU, including 17 billion transistors and a massive amount of second-gen HBM memory.

According to the report:

"Pascal is the successor to the Maxwell Titan X GM200 and we have been tipped off by some reliable sources that it will have  more than a double the number of transistors. The huge increase comes from  Pascal's 16 nm FinFET process and its transistor size is close to two times smaller."

PascalBoard.jpg

The NVIDIA Pascal board (Image credit: Legit Reviews)

Pascal's 16nm FinFET production will be a major change from the existing 28nm process found in all current NVIDIA GPUs. And if this report is accurate, NVIDIA is taking full advantage of it, considering the rumored transistor count is more than double the 8 billion found in the TITAN X.

PlanarFinFET.jpg

(Image credit: Fudzilla)

And what about memory? We have long known that Pascal will be NVIDIA's first foray into HBM, and Fudzilla is reporting that up to 32GB of second-gen HBM (HBM2) will be present on the highest model, which is a rather outrageous number even compared to the 12GB TITAN X.

"HBM2 enables cards with 4 HBM 2.0 cards with 4GB per chip, or four HBM 2.0 cards with 8GB per chips results with 16GB and 32GB respectively. Pascal has power to do both, depending on the SKU."

Pascal is expected in 2016, so we'll have plenty of time to speculate on these and doubtless other rumors to come.

Source: Fudzilla

NVIDIA Adds Metal Gear Solid V: The Phantom Pain Bundle to GeForce Cards

Subject: Graphics Cards | July 23, 2015 - 10:52 AM |
Tagged: nvidia, geforce, gtx, bundle, metal gear solid, phantom pain

NVIDIA continues its pattern of flagship game bundles with today's announcement. Starting today, GeForce GTX 980 Ti, 980, 970 and 960 GPUs from select retailers will include a copy of Metal Gear Solid V: The Phantom Pain, due out September 15th. (The bundle is live on Amazon.com.) Notebooks that use the GTX 980M or 970M GPU also qualify.

mgsv-bundle-header-new.png

From NVIDIA's marketing on the bundle:

Only GeForce GTX gives you the power and performance to game like the Big Boss. Experience the METAL GEAR SOLID V: THE PHANTOM PAIN with incredible visuals, uncompromised gameplay, and advanced technologies. NVIDIA G-SYNC™ delivers smooth and stutter-free gaming, GeForce Experience™ provides optimal playable settings, and NVIDIA GameStream™ technology streams your game to any NVIDIA SHIELD™ device.

It appears that Amazon.com already has its landing page up and ready for the MGS V bundle program, so if you are hunting for a new graphics card, stop there and see what they have in your price range.

Let's hope that this game's release goes a bit more smoothly than Batman: Arkham Knight's...

Source: NVIDIA

Stop me if you've heard this before: change your .exe and change your performance

Subject: Graphics Cards | July 20, 2015 - 02:00 PM |
Tagged: amd, linux, CS:GO

Thankfully it has been quite a while since we saw GPU driver optimizations tied to specific .exe filenames on Windows; in the past, both major vendors have tweaked performance based on the name of the executable that launches the game.  That particular flavour of underhandedness had become passé, at least until now.  Phoronix has spotted it once again, this time on Linux, seeing a big jump in CS:GO performance after renaming the binary from csgo_linux to hl2_Linux.  The game is built on the same engine, but the optimizations for the Source Engine are not being properly applied to CS:GO.

There is nothing nefarious about this particular example; it seems more a case of AMD's driver team being lazy or, more likely, short-staffed.  If you play CS:GO on Linux, rename your binary and you will see a jump in performance with no deleterious side effects.  Phoronix is investigating more games to see if there are other inconsistently applied optimizations.

csgo.jpg

"Should you be using a Radeon graphics card with the AMD Catalyst Linux driver and are disappointed by the poor performance, there is a very easy workaround for gaining much better performance under Linux... In some cases a simple tweak will yield around 40% better performance!"


Source: Phoronix

TSMC Plans 10nm, 7nm, and "Very Steep" Ramping of 16nm.

Subject: Graphics Cards, Processors, Mobile | July 19, 2015 - 06:59 AM |
Tagged: Zen, TSMC, Skylake, pascal, nvidia, Intel, Cannonlake, amd, 7nm, 16nm, 10nm

Getting smaller features allows a chip designer to create products that are faster, cheaper, and less power hungry. Years ago, most chip designers had their own production facilities, but that is getting rare. IBM has just finished selling its manufacturing off to GlobalFoundries, which was spun out of AMD when it divested from fabrication in 2009. Texas Instruments, on the other hand, decided that it would continue manufacturing but get out of the chip design business. Intel and Samsung are arguably the last two players with a strong commitment to both sides of the “let's make a chip” coin.

tsmc.jpg

So where do these chip designers go? TSMC is the name that comes up most. Any given discrete GPU from the last several years has probably been produced there, along with several CPUs and SoCs from a variety of fabless semiconductor companies.

Several years ago, when the GeForce 600-series launched, TSMC's 28nm line was plagued by shortages, which left GPUs out of stock for quite some time. Since then, 28nm has been the stable workhorse for countless high-performance products. Recent chips have been physically huge, thanks to how mature the process has become and the lower defect rates that maturity brings. The designers are anxious to get onto smaller processes, though.

In a conference call at 2 AM (EDT) on Thursday, which is 2 PM in Taiwan, Mark Liu of TSMC announced that “the ramping of our 16 nanometer will be very steep, even steeper than our 20nm”. By that, they mean this year. Hopefully this translates into production that can be used for GPUs and CPUs early, as AMD needs it to launch their Zen CPU architecture as early in 2016 as possible. Graphics cards have also been stuck on 28nm for over three years. It's time.

Also interesting is how TSMC believes that they can hit 10nm by the end of 2016. If so, this might put them ahead of Intel. That said, Intel was also confident that they could reach 10nm by the end of 2016, right until they announced Kaby Lake a few days ago. We will need to see if it pans out. If it does, competitors could actually beat Intel to the market at that feature size -- although that could end up being mobile SoCs and other integrated circuits that are uninteresting for the PC market.

Following the announcement from IBM Research, 7nm was also mentioned in TSMC's call. Apparently they expect to start qualifying in Q1 2017. That does not provide an estimate for production but, if their 10nm schedule is both accurate and representative of 7nm, that would put production somewhere in 2018. Note that I just speculated on an if of an if of a speculation, so take that with a mine of salt. There is probably a very good reason that this date wasn't mentioned in the call.

Back to the 16nm discussion, what are you hoping for most? New GPUs from NVIDIA, new GPUs from AMD, a new generation of mobile SoCs, or the launch of AMD's new CPU architecture? This should make for a highly entertaining comments section on a Sunday morning, don't you agree?

AMD Confirms August Availability of Radeon R9 Nano

Subject: Graphics Cards | July 17, 2015 - 08:20 AM |
Tagged: radeon, r9 nano, hbm, Fiji, amd

AMD has spilled the beans on at least one aspect of the R9 Nano: the release timeframe. On their Q2 earnings call yesterday, AMD CEO Lisa Su made this telling remark:

“Fury just launched, actually this week, and we will be launching Nano in the August timeframe.”

AMD-Radeon-R9-Nano-4.jpg

Image credit: VideoCardz.com

Wccftech had the story based on the AMD earnings call, but unfortunately there is no other new information about the card just yet. We've speculated on how much lower clocks would need to be to meet the 175W target with full Fiji silicon, and the reduction is going to be significant. The air coolers we've seen on the Fury (non-X) cards to date have extended well beyond the PCB, and the Nano is a mini-ITX form factor design.

Regardless of where the final GPU and memory clock numbers land, I think it's safe to assume there won't be much (if any) overclocking headroom. Then again, if the card does offer higher performance than the 290X in a mini-ITX package at 175W, I don't think OC headroom will be a drawback. I guess we'll have to keep waiting for more information on the official specs before the end of August.

Source: Wccftech

Meet ASUS' DirectCU III on the Radeon Fury

Subject: Graphics Cards | July 13, 2015 - 03:34 PM |
Tagged: Fury, DirectCU III, asus, amd

The popular ASUS STRIX series has recently been updated with the DirectCU III custom cooler, on both the GTX 980 and the new Radeon Fury.  This version uses dual 10mm heatpipes and Triple Wing-Blade fans, which ASUS bills as providing 220% more surface area and 105% more air pressure, for a claimed 40% reduction in temperature.  We cannot compare its cooling ability directly to the retail model, but [H]ard|OCP's tests show you can indeed cool a Fury on air: 71C at full load is lower than the 81C seen on a GTX 980.  Even more impressive, the fans were only at 43% speed and operating almost silently; at the cost of increased noise you could lower those temperatures further if you desired.  Check out their full review to see how the card did, but do take note that [H] does not at this time have access to the new GPU Tweak II utility required to overclock the card.

Update: now with fewer X's

1436520543zZMsl7GpwE_1_1.jpg

"AMD's Radeon Fury X is here, the AMD Radeon R9 Fury presents itself and we evaluate a full retail custom ASUS STRIX R9 Fury using ASUS' new DirectCU III technology. We will compare this to a GeForce GTX 980 using the new drivers AMD just released and find out what kind of gameplay experience the R9 Fury has to offer."


Source: [H]ard|OCP
Author:
Manufacturer: Sapphire

Fiji brings the (non-X) Fury

Last month was a big one for AMD. At E3 the company hosted its own press conference to announce the Radeon R9 300-series of graphics cards as well as the new family of products based on the Fiji GPU. It started with the Fury X, a flagship $650 graphics card with an integrated water cooler that was well received.  The Fury X wasn't perfect by any means, but it was a necessary move for AMD to compete with NVIDIA at the high end of the discrete graphics market.

02.jpg

At the event AMD also talked about the Radeon R9 Fury (without the X) as the version of Fiji that would be taken by board partners to add custom coolers and even PCB designs. (They also talked about the R9 Nano and a dual-GPU version of Fiji, but nothing new is available on those products yet.) The Fury, priced $100 lower than the Fury X at $549, is going back to a more classic GPU design. There is no "reference" product though, so cooler and PCB designs are going to vary from card to card. We already have two different cards in our hands that differ dramatically from one another.

The Fury cuts down the Fiji GPU a bit, with fewer stream processors and texture units, but keeps most other specs the same. This includes the 4GB of HBM (high bandwidth memory), the 64 ROPs, and even the TDP / board power. Performance is great, and it makes for an interesting comparison against the GeForce GTX 980 cards on the market. Let's dive into this review!

Continue reading our review of the Sapphire Radeon R9 Fury 4GB with CrossFire Results!

Author:
Manufacturer: Various

SLI and CrossFire

Last week I sat down with a set of three AMD Radeon R9 Fury X cards, our sampled review card as well as two retail cards purchased from Newegg, to see how the reports of pump whine from the cards were shaping up. I'm not going to dive into that debate again here, as I think we covered it pretty well in that story as well as on our various podcasts, but rest assured we are continuing to look into revisions of the Fury X to see if AMD and Cooler Master were actually able to fix the issue.

group1.jpg

What we have to cover today is something very different, and likely much more interesting for a wider range of users. When you have three AMD Fury X cards in your hands, you of course have to do some multi-GPU testing with them. With our set I was able to run both 2-Way and 3-Way CrossFire with the new AMD flagship card and compare the results directly to NVIDIA's competing offering, the GeForce GTX 980 Ti.

There isn't much else I need to do to build up this story, is there? If you are curious how well the new AMD Fury X scales in CrossFire with two and even three GPUs, this is where you'll find your answers.

Continue reading our results testing the AMD Fury X and GeForce GTX 980 Ti in 3-Way GPU configurations!