You have a 4K monitor and $650 USD, what do you do?

Subject: Graphics Cards | July 27, 2015 - 04:33 PM |
Tagged: 4k, amd, R9 FuryX, GTX 980 Ti, gtx titan x

[H]ard|OCP have set up their testbed for a 4K showdown between the similarly priced GTX 980 Ti and Radeon R9 Fury X, with the $1000 TITAN X tossed in for those with more money than sense.  The test uses the new Catalyst 15.7 and GeForce 353.30 drivers to keep the playing field even while benchmarking Witcher 3, GTA V and other games.  When the dust settled, a clear pattern emerged and the performance differences could be seen.  The deltas were not huge, but when you are paying $650 plus tax for a GPU, even a few extra frames or one more usable graphical option really matters.  Perhaps the most interesting result was the redemption of the TITAN X: its extra price was reflected in its performance.  Check the results out for yourself here.


"We take the new AMD Radeon R9 Fury X and evaluate the 4K gaming experience. We will also compare against the price competitive GeForce GTX 980 Ti as well as a GeForce GTX TITAN X. Which video card provides the best experience and performance when gaming at glorious 4K resolution?"

Here are some more Graphics Card articles from around the web:

Graphics Cards

Source: [H]ard|OCP
Manufacturer: PC Perspective

... But Is the Timing Right?

Windows 10 is about to launch and, with it, DirectX 12. Apart from the massive increase in draw calls, Explicit Multiadapter, both Linked and Unlinked, has been the cause of a few pockets of excitement here and there. I am a bit concerned, though. People seem to find this a new, novel concept that gives game developers tools they have never had before. It really isn't. Depending on what they want to do with secondary GPUs, game developers could have used them for years. Years!

Before we talk about the cross-platform examples, we should talk about Mantle. It is the closest analog to DirectX 12 and Vulkan that we have. It served as the base specification for Vulkan, which the Khronos Group modified with SPIR-V in place of HLSL and so forth. Some claim that it was also the foundation of DirectX 12, which would not surprise me given what I've seen online and in the SDK. Allow me to show you how the API works.


Mantle is an interface that mixes Graphics, Compute, and DMA (memory access) commands into queues. Recording these commands is easily done in parallel, as each thread can create them on its own, which is great for multi-core processors. Each queue, a list of commands on its way to the GPU, can be handled independently, too. An interesting side effect is that, since every device uses standard data structures, such as IEEE 754 floating-point numbers, no one cares where these queues go as long as the work is done quickly enough.

Since each queue is independent, an application can choose to manage many of them. None of these lists needs to know what is happening to any other. As such, they can be pointed at multiple, even wildly different, graphics devices. GPUs of different models and capabilities can work together, as long as they support the core of Mantle.
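
To make that queue model concrete, here is a minimal sketch in plain standard-library C++. These are not real Mantle, DirectX 12, or Vulkan calls; the Command, CommandList, and Queue types below are stand-ins invented for illustration. Four threads record command lists in parallel, then submit them to two independent queues that are drained separately:

```cpp
// Minimal sketch of parallel command recording and independent queues.
// Not a real graphics API: Command, CommandList, and Queue are stand-ins.
#include <cstdio>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

using Command = std::function<void()>;     // placeholder for one GPU command
using CommandList = std::vector<Command>;  // recorded by a single CPU thread

struct Queue {                             // one independent device queue
    std::mutex m;
    std::queue<CommandList> pending;
    void submit(CommandList list) {
        std::lock_guard<std::mutex> lock(m);
        pending.push(std::move(list));
    }
};

int main() {
    Queue gpu0, gpu1;  // could be wildly different devices; the lists don't care

    std::vector<std::thread> recorders;
    for (int t = 0; t < 4; ++t) {
        recorders.emplace_back([&, t] {
            CommandList list;              // recording requires no global lock
            for (int i = 0; i < 3; ++i)
                list.push_back([t, i] { std::printf("cmd %d.%d\n", t, i); });
            (t % 2 ? gpu1 : gpu0).submit(std::move(list));  // route to either device
        });
    }
    for (auto& r : recorders) r.join();

    // Each "device" drains its own queue with no knowledge of the other.
    for (Queue* q : {&gpu0, &gpu1})
        while (!q->pending.empty()) {
            for (auto& cmd : q->pending.front()) cmd();
            q->pending.pop();
        }
}
```

The point of the sketch is the shape of the model: recording is thread-local and cheap, submission is the only synchronized step, and that is exactly what lets an engine feed several devices at once.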


DirectX 12 and Vulkan took this metaphor so their respective developers could use this functionality across vendors. Mantle did not invent the concept, however. What Mantle did was expose this architecture to graphics, which can make use of all the fixed-function hardware that is unique to GPUs. Prior to AMD's usage, this was how GPU compute architectures were designed. Game developers could have spun up an OpenCL workload to process physics, audio, pathfinding, visibility, or even lighting and post-processing effects... on a secondary GPU, even one from a completely different vendor.

Vista's multi-GPU bug might get in the way, but it was possible in Windows 7 and, I believe, XP too.
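
As a rough illustration of that long-standing possibility, here is a sketch using the standard OpenCL 1.x C API from C++. The enumeration and queue-creation calls are real OpenCL; the idea of walking every vendor's platform and the commented-out kernel launch are assumptions for illustration, and error handling is trimmed for brevity:

```cpp
// Enumerate every OpenCL platform and GPU in the system (AMD, NVIDIA, and
// Intel can all appear side by side) and open a command queue on each one.
// A physics or post-processing job could be enqueued on any of them.
#include <cstdio>
#include <CL/cl.h>

int main() {
    cl_platform_id platforms[8];
    cl_uint numPlatforms = 0;
    clGetPlatformIDs(8, platforms, &numPlatforms);

    for (cl_uint p = 0; p < numPlatforms; ++p) {
        cl_device_id devices[8];
        cl_uint numDevices = 0;
        if (clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_GPU, 8,
                           devices, &numDevices) != CL_SUCCESS)
            continue;  // this platform has no GPUs; move on

        for (cl_uint d = 0; d < numDevices; ++d) {
            char name[256] = {};
            clGetDeviceInfo(devices[d], CL_DEVICE_NAME, sizeof(name), name, nullptr);
            std::printf("platform %u, device %u: %s\n", p, d, name);

            cl_int err = CL_SUCCESS;
            cl_context ctx = clCreateContext(nullptr, 1, &devices[d],
                                             nullptr, nullptr, &err);
            cl_command_queue queue = clCreateCommandQueue(ctx, devices[d], 0, &err);
            // ... clEnqueueNDRangeKernel(queue, ...) with the offloaded work ...
            clReleaseCommandQueue(queue);
            clReleaseContext(ctx);
        }
    }
}
```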

Read on to see a couple of reasons why we are only getting this now...

Team Red gets NASty with QNAP

Subject: Storage | July 23, 2015 - 07:38 PM |
Tagged: TVS-x63, qnap, Puma, amd

AMD is exploring alternative product routes to raise income, and the latest seems to be the Puma-powered QNAP TVS-x63.  It is a four-bay NAS driven by the 2.4GHz AMD GX-424CC SoC, which happens to include a 28 stream processor GCN Radeon clocked at 497 MHz.  It has a pair of gigabit ports, with an optional add-in card offering a single 10Gb port or two additional 1Gb ports, though that will raise the price above the $630 base model. Bjorn3d found the power consumption to be higher than the competition's, but overall operation was flawless.


"The QNAP TVS-x63 marked the world’s first NAS featuring AMD processor. AMD’s new strategy is targeting the markets with high profit return and the company is returning to the server market. NAS, by extension, is like a small scale server, so it makes sense to see AMD putting their processors into these devices."

Here are some more Storage reviews from around the web:


Source: Bjorn3D

Podcast #359 - AMD R9 Nano, 4TB Samsung SSDs, Windows 10 and more!

Subject: General Tech | July 23, 2015 - 01:53 PM |
Tagged: podcast, video, amd, r9 nano, Fiji, Samsung, 4TB, windows 10, acer, aspire V, X99E-ITX/ac, TSMC, 10nm, 7nm

PC Perspective Podcast #359 - 07/23/2015

Join us this week as we discuss the AMD R9 Nano, 4TB Samsung SSDs, Windows 10 and more!

You can subscribe to us through iTunes and you can still access it directly through the RSS page HERE.

The URL for the podcast is: - Share with your friends!

  • iTunes - Subscribe to the podcast directly through the iTunes Store
  • RSS - Subscribe through your regular RSS reader
  • MP3 - Direct download link to the MP3 file

Hosts: Ryan Shrout, Jeremy Hellstrom, Josh Walrath, and Allyn Malventano

Subscribe to the PC Perspective YouTube Channel for more videos, reviews and podcasts!!

AMD A8-7670K (Godavari) Launches with Steamroller

Subject: Processors | July 22, 2015 - 09:56 PM |
Tagged: amd, APU, Godavari, a8, a8-7670k

AMD's Godavari architecture is the last one based on Bulldozer, and it will hold the company's product stack over until their Zen architecture arrives in 2016. The A10-7870K was added a month ago, with a 95W TDP at an MSRP of $137 USD. That part brought a +200 MHz bump to the base frequency and a +100 MHz higher Turbo than its predecessor under high load. More interestingly, it does this at the same TDP and with the same basic architecture.


Remember that these are AMD's benchmarks.

The refresh has been expanded to include the A8-7670K. Some sites have reported that this uses the Excavator architecture as seen in Carrizo, but that is not the case; it is based on Steamroller. The product has a base clock of 3.6 GHz with a Turbo of up to 3.9 GHz, a +300 MHz base and +100 MHz Turbo increase over the previous A8-7650K. Again, this is with the same architecture and TDP. The GPU received a bit of a bump, too. It is now clocked at 757 MHz versus the previous generation's 720 MHz with all else equal, as far as I can tell, which works out to about a 5.1% increase in GPU compute throughput (757 / 720 ≈ 1.051).

The A8-7670K just recently launched at an MSRP of $117.99. This roughly $20 saving should place it in a nice position below the A10-7870K for mainstream users.

Source: AMD

Stop me if you've heard this before; change your .exe and change your performance

Subject: Graphics Cards | July 20, 2015 - 02:00 PM |
Tagged: amd, linux, CS:GO

Thankfully, it has been quite a while since we saw GPU driver optimizations tied to specific .exe filenames on Windows; in the past, both major vendors have tweaked performance based on the name of the executable that launches the game.  That particular flavour of underhandedness had become passé, at least until now.  Phoronix has spotted it once again, this time seeing a big jump in CS:GO performance when the binary is renamed from csgo_linux to hl2_linux.  The game is built on the same engine, but the optimizations for the Source engine are not properly applied to CS:GO.

There is nothing nefarious about this particular example; it seems more a case of AMD's driver team being lazy or, more likely, short-staffed.  If you play CS:GO on Linux, rename your binary and you will see a jump in performance with no deleterious side effects.  Phoronix is investigating more games to see if there are other inconsistently applied optimizations.


"Should you be using a Radeon graphics card with the AMD Catalyst Linux driver and are disappointed by the poor performance, there is a very easy workaround for gaining much better performance under Linux... In some cases a simple tweak will yield around 40% better performance!"

Here are some more Graphics Card articles from around the web:

Graphics Cards

Source: Phoronix

AMD's hopes for the second half of 2015

Subject: General Tech | July 20, 2015 - 01:16 PM |
Tagged: amd, lisa su

It has not been a pretty year for AMD, with overall sales of $942m representing a 34.6% drop from this time last year, and even the graphics portion seeing a 54.2% drop, all of which resulted in a loss of $147 million.  In part this is because all PC component companies have been suffering recently; in part it is a lack of incentive to upgrade high-end components; and, to a larger extent, the general public is not going to pick up a new machine just before the release of a new Windows version.  Lisa Su did have some good news: sales of FX processors and A-series APUs have been increasing, and the second half of the year is historically better for sales.  It was suggested to The Register that AMD is not currently planning to reduce its workforce further at this time, but the possibility of future cuts was not completely ruled out.


"AMD has confirmed it is slipping back into cost-cutting mode after its annus horribilis, caused by tanking demand for consumer PCs in a quarter described by CEO Lisa Su as the “revenue trough” for 2015."

Here is some more Tech News from around the web:

Tech Talk

Source: The Register

TSMC Plans 10nm, 7nm, and "Very Steep" Ramping of 16nm.

Subject: Graphics Cards, Processors, Mobile | July 19, 2015 - 06:59 AM |
Tagged: Zen, TSMC, Skylake, pascal, nvidia, Intel, Cannonlake, amd, 7nm, 16nm, 10nm

Smaller features allow a chip designer to create products that are faster, cheaper, and less power-hungry. Years ago, most designers had their own production facilities, but that is getting rare. IBM has just finished selling its manufacturing off to GlobalFoundries, which was itself spun out of AMD when AMD divested from fabrication in 2009. Texas Instruments, on the other hand, decided that it would continue manufacturing but get out of the chip design business. Intel and Samsung are arguably the last two players with a strong commitment to both sides of the “let's make a chip” coin.


So where do these chip designers go? TSMC is the name that comes up most often. Any given discrete GPU from the last several years has probably been produced there, along with several CPUs and SoCs from a variety of fabless semiconductor companies.

Several years ago, when the GeForce 600 series launched, TSMC's 28nm line suffered shortages, which left GPUs out of stock for quite some time. Since then, 28nm has been the stable workhorse for countless high-performance products. Recent chips have been physically huge, thanks to the maturity of the process granting fewer defects. The designers are anxious to get onto smaller processes, though.

In a conference call at 2 AM (EDT) on Thursday, which is 2 PM in Taiwan, Mark Liu of TSMC announced that “the ramping of our 16 nanometer will be very steep, even steeper than our 20nm”. By that, they mean this year. Hopefully this translates to production capacity that GPUs and CPUs can use early, as AMD needs it to launch their Zen CPU architecture in 2016, as early in that year as possible. Graphics cards have also been stuck on 28nm for over three years. It's time.

Also interesting is how TSMC believes that they can hit 10nm by the end of 2016. If so, this might put them ahead of Intel. That said, Intel was also confident that they could reach 10nm by the end of 2016, right until they announced Kaby Lake a few days ago. We will need to see if it pans out. If it does, competitors could actually beat Intel to the market at that feature size -- although that could end up being mobile SoCs and other integrated circuits that are uninteresting for the PC market.

Following the announcement from IBM Research, 7nm was also mentioned in TSMC's call. Apparently they expect to start qualifying it in Q1 2017. That does not give an estimate for production but, if their 10nm schedule is both accurate and representative of 7nm, that would put production somewhere in 2018. Note that I just speculated on an if of an if of a speculation, so take that with a mine of salt. There is probably a very good reason that this date wasn't mentioned in the call.

Back to the 16nm discussion, what are you hoping for most? New GPUs from NVIDIA, new GPUs from AMD, a new generation of mobile SoCs, or the launch of AMD's new CPU architecture? This should make for a highly entertaining comments section on a Sunday morning, don't you agree?

AMD Confirms August Availability of Radeon R9 Nano

Subject: Graphics Cards | July 17, 2015 - 08:20 AM |
Tagged: radeon, r9 nano, hbm, Fiji, amd

AMD has spilled the beans on at least one aspect of the R9 Nano: the release timeframe. On their Q2 earnings call yesterday, AMD CEO Lisa Su made this telling remark:

“Fury just launched, actually this week, and we will be launching Nano in the August timeframe.”



Wccftech had the story based on the AMD earnings call, but unfortunately there is no other new information on the card just yet. We've speculated on how much lower clocks would need to be to meet the 175W target with full Fiji silicon, and the reduction is going to be significant. The air coolers we've seen on the Fury (non-X) cards to date have extended well beyond the PCB, and the Nano is a mini-ITX form factor design.

Regardless of where the final GPU and memory clocks land, I think it's safe to assume there won't be much (if any) overclocking headroom. Then again, if the card does deliver higher performance than the 290X in a mini-ITX package at 175W, I don't think OC headroom will be a drawback. I guess we'll have to keep waiting for the official specs before the end of August.

Source: Wccftech

Podcast #358 - AMD R9 Fury, Fury X Multi-GPU, Windows 10 and more!

Subject: General Tech | July 16, 2015 - 02:04 PM |
Tagged: podcast, video, amd, Fury, fury x, sli, crossfire, windows 10, 10240, corsair, RM850i, IBM, 7nm, kaby lake, Skylake, Intel, 14nm, 10nm

PC Perspective Podcast #358 - 07/16/2015

Join us this week as we discuss the AMD R9 Fury, Fury X Multi-GPU, Windows 10 and more!

You can subscribe to us through iTunes and you can still access it directly through the RSS page HERE.

The URL for the podcast is: - Share with your friends!

  • iTunes - Subscribe to the podcast directly through the iTunes Store
  • RSS - Subscribe through your regular RSS reader
  • MP3 - Direct download link to the MP3 file

Hosts: Jeremy Hellstrom, Josh Walrath, and Allyn Malventano

Subscribe to the PC Perspective YouTube Channel for more videos, reviews and podcasts!!