Overclock any NVIDIA GPU on Desktop and Mobile with a New Utility

Subject: Graphics Cards | August 10, 2015 - 06:14 PM |
Tagged: overclocking, overclock, open source, nvidia, MSI Afterburner, API

An author going by "2PKAQWTUQM2Q7DJG" (likely not a real name) has published a fascinating little article today on his/her WordPress blog, entitled "Overclocking Tools for NVIDIA GPUs Suck. I Made My Own". It contains a full account of the process of creating an overclocking tool free of the constraints of common utilities such as MSI Afterburner.

By probing MSI's OC utility with OllyDbg (an x86 "assembler level analysing debugger"), the author was able to track down how Afterburner works.


“nvapi.dll” definitely gets loaded here using LoadLibrary/GetModuleHandle. We’re on the right track. Now where exactly is that lib used? ... That’s simple, with the program running and the realtime graph disabled (it polls NvAPI constantly adding noise to the mass of API calls), we place a memory breakpoint on the .text memory segment of the NVapi.dll inside MSI Afterburner’s process... Then we set the sliders in the MSI tool to get some negligible GPU underclock and hit the “apply” button. It breaks inside NvAPI… magic!

After further explaining the process and walking through his/her source code for an overclocking utility, the author shows the finished product in the form of a command-line utility.
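
To make that concrete, here is a minimal C sketch of the pattern the article describes: loading nvapi.dll at runtime and resolving functions through its one named export. The "nvapi_QueryInterface" entry point and the NvAPI_Initialize ID are widely circulated in reverse-engineering write-ups, but treat both as assumptions rather than NVIDIA documentation; the clock-control IDs the author actually uncovered are not reproduced here.

    #include <windows.h>
    #include <stdio.h>

    /* NvAPI exposes a single named export; every other function is fetched
     * from it by a 32-bit ID. The names below come from widely circulated
     * reverse-engineering write-ups, not official NVIDIA documentation. */
    typedef void *(__cdecl *NvQueryInterface_t)(unsigned int id);
    typedef int   (__cdecl *NvAPI_Initialize_t)(void);

    int main(void)
    {
        /* nvapi.dll for 32-bit processes; 64-bit builds use nvapi64.dll */
        HMODULE nvapi = LoadLibraryA("nvapi.dll");
        if (!nvapi) {
            fprintf(stderr, "nvapi.dll not found - NVIDIA driver missing?\n");
            return 1;
        }

        NvQueryInterface_t query =
            (NvQueryInterface_t)GetProcAddress(nvapi, "nvapi_QueryInterface");
        if (!query) {
            fprintf(stderr, "nvapi_QueryInterface export not found\n");
            return 1;
        }

        /* 0x0150E828 is the commonly reported ID for NvAPI_Initialize;
         * 0 is the conventional success status. */
        NvAPI_Initialize_t init = (NvAPI_Initialize_t)query(0x0150E828u);
        if (init && init() == 0)
            printf("NvAPI initialized; clock-control calls would follow.\n");

        return 0;
    }

Breaking inside that dispatcher, as the author did with OllyDbg, is what reveals which IDs Afterburner actually requests when you hit "apply".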


There is a link to the finished version of this utility at the end of the article, as well as the entire process with all source code. It makes for an interesting read (even for those of us painfully inept at programming), and the provided download link for this mysterious overclocking utility (disguised as a JPG image file, no less) makes it both tempting and a little dubious. Does this really allow overclocking of any NVIDIA GPU, including mobile? What could be the harm in trying? In all seriousness, however: since some of what was seemingly uncovered in the article is no doubt proprietary, how long will this information remain available?

It would probably be wise to follow the link to the WordPress page ASAP!

Source: WordPress

Khronos Group at SIGGRAPH 2015

Subject: Graphics Cards, Processors, Mobile, Shows and Expos | August 10, 2015 - 09:01 AM |
Tagged: vulkan, spir, siggraph 2015, Siggraph, opengl sc, OpenGL ES, opengl, opencl, Khronos

When the Khronos Group announced Vulkan at GDC, they mentioned that the API was coming this year, and that this date was intended to under-promise and over-deliver. Recently, fans were hoping that it would be published at SIGGRAPH, which officially began yesterday. Unfortunately, Vulkan has not been released, though it does hold a significant chunk of the news. Also, it's not like DirectX 12 is holding a commanding lead at the moment: its headers have been public for only a few months, and the code samples are less than two weeks old.


The organization made announcements for six products today: OpenGL, OpenGL ES, OpenGL SC, OpenCL, SPIR, and, as mentioned, Vulkan. They wanted to make their commitment to all of their standards clear. Vulkan is urgent, but some developers will still want the framework of OpenGL: bind what you need to the context, then issue a draw, and, if you do it wrong, the driver will often clean up the mess for you anyway. The briefing was structured to make it evident that OpenGL is still on their mind, which is likely why three OpenGL logos greeted me in their slide deck as early as possible. They are also taking, and closely examining, feedback about who wants to use Vulkan or OpenGL, and why.
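
As a quick illustration of that bind-then-draw model, a hedged C sketch (GLEW is used here for convenience; vao, program, and tex are placeholders created elsewhere):

    #include <GL/glew.h>

    /* The classic OpenGL flow: attach state to the context, then issue the
     * draw; the driver validates (and often forgives mistakes) at the call.
     * Assumes a current GL 3.x+ context and an initialized GLEW loader. */
    void draw_frame(GLuint vao, GLuint program, GLuint tex, GLsizei count)
    {
        glBindVertexArray(vao);                /* geometry layout */
        glUseProgram(program);                 /* shader pipeline */
        glBindTexture(GL_TEXTURE_2D, tex);     /* resources */
        glDrawArrays(GL_TRIANGLES, 0, count);  /* validation happens here */
    }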

As for Vulkan, confirmed platforms have been announced. Vendors have committed to drivers on Windows 7, 8, and 10, Linux (including SteamOS), and Tizen (OS X and iOS are absent, though). Beyond all of that, Google will accept Vulkan on Android. This is a big deal, as Google, despite its open nature, has been avoiding several Khronos Group standards. For instance, Nexus phones and tablets do not have OpenCL drivers, although Google isn't stopping third parties, like Samsung and NVIDIA, from rolling it into their devices. Direct support for Vulkan should help cross-platform development and, more importantly, suit the multi-core processors with relatively slow single-threaded performance found in those devices. This could even be of significant use for web browsers, especially on sites with a lot of simple 2D effects. Google is also contributing support from its drawElements Quality Program (dEQP), a conformance test suite that it bought back in 2014. They are going to expand it to Vulkan, so that developers will have more consistency between devices -- a big win for Android.


While we're not done with Vulkan, one of the biggest announcements is OpenGL ES 3.2, and it fits here nicely. Around the time that OpenGL ES 3.1 brought compute shaders to the embedded platform, Google launched the Android Extension Pack (AEP). This absorbed OpenGL ES 3.1 and added tessellation, geometry shaders, and ASTC texture compression to it. It also created more tension between Google and cross-platform developers, who felt that Google was trying to pull developers away from the Khronos Group. Today, OpenGL ES 3.2 was announced, and it includes each of the AEP features plus a few more (like “enhanced” blending). Better yet, Google will support it directly.

Next up are the desktop standards, before we finish with a resurrected embedded standard.

OpenGL has a few new extensions added. One interesting one is the ability to assign locations to multi-samples within a pixel. There is a whole list of sub-pixel layouts, such as rotated grid and Poisson disc, and this extension apparently allows developers to choose among them, as certain algorithms work better or worse with certain geometries and structures. There were probably vendor-specific extensions for a while, but now there is a ratified one. Another extension allows “streamlined sparse textures”, which helps manage data where the number of unpopulated entries outweighs the number of populated ones.
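
As a rough sketch of how that might look, assuming the ratified extension is ARB_sample_locations (the descendant of the earlier NV/AMD vendor extensions) and that a loader such as GLEW exposes it:

    #include <GL/glew.h>

    /* Hedged sketch: choosing a rotated-grid layout for 4x MSAA, assuming
     * the ARB_sample_locations extension is present and loaded. Each sample
     * position is an (x, y) pair in [0, 1) within the pixel; a Poisson-disc
     * set would simply be a different table of positions. */
    static void set_rotated_grid_4x(GLuint fbo)
    {
        const GLfloat positions[8] = {
            0.375f, 0.125f,
            0.875f, 0.375f,
            0.125f, 0.625f,
            0.625f, 0.875f,
        };
        glEnable(GL_FRAMEBUFFER_PROGRAMMABLE_SAMPLE_LOCATIONS_ARB);
        glNamedFramebufferSampleLocationsfvARB(fbo, 0, 4, positions);
    }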

OpenCL 2.0 was given a refresh, too. It contains a few bug fixes and clarifications that should help adoption. C++ headers were also released, although I cannot comment much on them; I do not know what state OpenCL 2.0 was in before now.

And this is when we make our way back to Vulkan.


SPIR-V, the intermediate language that runs on the GPU (or other offload device, including the other cores of a CPU) in OpenCL and Vulkan, is seeing a lot of community support. Projects are under way to allow developers to write GPU code in several interesting languages: Python, .NET (C#), Rust, Haskell, and many more. The slide lists nine that the Khronos Group knows about, but those four are pretty interesting. Again, this means you can write code in the aforementioned languages and have it run directly on a GPU. Curiously missing is HLSL, and the president of the Khronos Group agreed that it would be a useful language. The ability to cross-compile HLSL into SPIR-V would mean that shader code written for DirectX 9, 10, 11, and 12 could be compiled for Vulkan. He expects that it won't take long for such a project to start, and one might already be under way somewhere beyond the reach of his Google searches. Regardless, those who are afraid to program in the C-like GLSL and HLSL shading languages might find C# and Python a bit more their speed, and those options seem to be coming through SPIR-V.
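
Part of what makes SPIR-V such an approachable compiler target is that it is a binary stream of 32-bit words rather than text to be parsed. As a small illustrative sketch (the header layout and magic number come from the public SPIR-V specification; the function itself is hypothetical):

    #include <stdint.h>
    #include <stdio.h>

    /* SPIR-V modules begin with a five-word header: magic, version,
     * generator, ID bound, and a reserved word. The magic value is
     * defined in the public SPIR-V specification. */
    #define SPIRV_MAGIC 0x07230203u

    int is_spirv_module(const uint32_t *words, size_t word_count)
    {
        if (word_count < 5 || words[0] != SPIRV_MAGIC)
            return 0;
        printf("SPIR-V module: version word 0x%08x, ID bound %u\n",
               (unsigned)words[1], (unsigned)words[3]);
        return 1;
    }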

As mentioned, we'll end on something completely different.


For several years, OpenGL SC has been on hiatus. This standard covers graphics (and soon GPU compute) in “safety critical” applications. For the longest time, this meant aircraft. The dozens of planes (which I assume means dozens of models of planes) that adopted this technology were fine with a fixed-function pipeline. It has been about ten years since OpenGL SC 1.0 launched, which was based on OpenGL ES 1.0. SC 2.0 is planned to launch in 2016 and will be based on the much more modern OpenGL ES 2 and ES 3 APIs, which allow pixel and vertex shaders. The Khronos Group is asking for participation to direct SC 2.0, as well as a future graphics and compute API that is potentially based on Vulkan.

The devices that this platform intends to target are aircraft (again), automobiles, drones, and robots. There are a lot of ways that GPUs can help these devices, but they need a good API to certify against. It needs to withstand more than an Ouya, because crashes could be much more literal.

Upcoming Oculus SDK 0.7 Integrates Direct Driver Mode from AMD and NVIDIA

Subject: Graphics Cards | August 7, 2015 - 10:46 AM |
Tagged: sdk, Oculus, nvidia, direct driver mode, amd

In an email sent out by Oculus this morning, the company revealed some interesting details about the upcoming release of the Oculus SDK 0.7 on August 20th. The most interesting change is the introduction of Direct Driver Mode, developed in tandem with both AMD and NVIDIA.


This new version of the SDK will remove the simplistic "Extended Mode" that many users and developers relied on as a quick and dirty way of getting the Rift development kits up and running. However, that implementation had the downside of additional latency, something Oculus is trying to eliminate completely.

Here is what Oculus wrote about the "Direct Driver Mode" in its email to developers:

Direct Driver Mode is the most robust and reliable solution for interfacing with the Rift to date. Rather than inserting VR functionality between the OS and the graphics driver, headset awareness is added directly to the driver. As a result, Direct Driver Mode avoids many of the latency challenges of Extended Mode and also significantly reduces the number of conflicts between the Oculus SDK and third party applications. Note that Direct Driver Mode requires new drivers from NVIDIA and AMD, particularly for Kepler (GTX 645 or better) and GCN (HD 7730 or better) architectures, respectively.

We have heard NVIDIA and AMD talk about the benefits of direct driver implementations for VR headsets for a long time. NVIDIA calls its software implementation GameWorks VR and AMD calls its version LiquidVR. Both aim to do the same thing - give developers more direct access to the headset hardware while offering new ways for faster, lower-latency rendering in games.


Both companies have unique features to offer as well, including NVIDIA and its multi-res shading technology. Check out our interview with NVIDIA on the topic below:

NVIDIA's Tom Petersen came to our offices to talk about GameWorks VR

Other notes in the email include a tentative November release for version 1.0 of the Oculus SDK. But until that version ships, Oculus is only guaranteeing that each new runtime will support the previous version of the SDK. So, when SDK 0.8 is released, support is only guaranteed for 0.8 and 0.7; when 0.9 comes out, game developers will need to make sure they are at least on SDK 0.8, otherwise they risk incompatibility. Things will be tough for developers in this short window of time, but Oculus claims it's necessary to "allow them to more rapidly evolve the software architecture and API." After SDK 1.0 hits, future SDK releases will continue to support 1.0.

Source: Oculus

Which 980 Titanium should you put in your system?

Subject: Graphics Cards | August 4, 2015 - 06:06 PM |
Tagged: 980 Ti, asus, msi, gigabyte, evga, GTX 980 Ti G1 GAMING, GTX 980 Ti STRIX OC, GTX 980 Ti gaming 6g

If you've decided that the GTX 980 Ti is the card for you, due to price, performance, or other less tangible reasons, you will find that there are quite a few to choose from.  Each has the same basic design, but the coolers and frequencies vary between manufacturers, as do the prices.  That is why it is handy that The Tech Report has put together a roundup of four models for direct comparison.  In the article you will see the EVGA GeForce GTX 980 Ti SC+, Gigabyte's GTX 980 Ti G1 Gaming, the MSI GTX 980 Ti Gaming 6G, and the ASUS Strix GTX 980 Ti OC Edition.  The cards are not only checked for stock and overclocked performance; there are also noise levels and power consumption to think about, so check out the full review.


"The GeForce GTX 980 Ti is pretty much the fastest GPU you can buy.The aftermarket cards offer higher clocks and better cooling than Nvidia's reference design. But which one is right for you?"

Here are some more Graphics Card articles from around the web:


Rumor: NVIDIA to Replace Maxwell GTX 750 Ti

Subject: Graphics Cards | August 1, 2015 - 07:31 AM |
Tagged: nvidia, maxwell, gtx 960, gtx 950 ti, gtx 950

A couple of sites are claiming that NVIDIA intends to replace the first-generation Maxwell GeForce GTX 750 Ti with more Maxwell, in the form of the GeForce GTX 950 and/or GTX 950 Ti. The general consensus is that it will run on a cut-down GM206 chip, which is currently found in the GTX 960. I will go light on the rumored specifications because this part of the rumor is single-source, from accounts of a HWBattle page that has since been deleted. But for a general ballpark of performance: the GTX 960 has a full GM206 chip (1024 CUDA cores), while the 950(/Ti) is expected to lose about a quarter of its printed shader units, which would put it somewhere around 768.


The particularly interesting part is the power, though. As we reported, Maxwell was branded as a power-efficient version of the Kepler architecture. This led to a high-end graphics card that could be powered by the PCIe bus alone. According to these rumors, the new card will require a single 8-pin power connector (rated for 150W) on top of the 75W provided by the bus, for a 225W ceiling. This has one of two interesting implications that I can think of:


  • The 750 Ti did not sell for existing systems as well as anticipated, or
  • The GM206 chip just couldn't hit that power target and they didn't want to make another die

Whichever is true, it will be interesting to see how NVIDIA brands this if/when the card launches. Creating a graphics card for systems without available power rails was a novel concept and it seemed to draw attention. That said, the rumors claim they're not doing it this time... for some reason.

Source: VR-Zone

You have a 4K monitor and $650USD, what do you do?

Subject: Graphics Cards | July 27, 2015 - 04:33 PM |
Tagged: 4k, amd, R9 FuryX, GTX 980 Ti, gtx titan x

[H]ard|OCP have set up their testbed for a 4K showdown between the similarly priced GTX 980 Ti and Radeon R9 Fury X, with the $1000 TITAN X tossed in for those with more money than sense.  The test uses the new Catalyst 15.7 and GeForce 353.30 drivers to give a more even playing field while benchmarking Witcher 3, GTA V, and other games.  When the dust settled the pattern was obvious and the performance differences could be seen.  The deltas were not huge, but when you are paying $650 plus tax for a GPU, even a few extra frames or one more usable graphical option really matters.  Perhaps the most interesting result was the redemption of the TITAN X: its extra price was reflected in its performance. Check the results out for yourself here.


"We take the new AMD Radeon R9 Fury X and evaluate the 4K gaming experience. We will also compare against the price competitive GeForce GTX 980 Ti as well as a GeForce GTX TITAN X. Which video card provides the best experience and performance when gaming at glorious 4K resolution?"

Here are some more Graphics Card articles from around the web:


Source: [H]ard|OCP

Rumor: NVIDIA Pascal up to 17 Billion Transistors, 32GB HBM2

Subject: Graphics Cards | July 24, 2015 - 12:16 PM |
Tagged: rumor, pascal, nvidia, HBM2, hbm, graphics card, gpu

An exclusive report from Fudzilla claims some outlandish numbers for the upcoming NVIDIA Pascal GPU, including 17 billion transistors and a massive amount of second-gen HBM memory.

According to the report:

"Pascal is the successor to the Maxwell Titan X GM200 and we have been tipped off by some reliable sources that it will have  more than a double the number of transistors. The huge increase comes from  Pascal's 16 nm FinFET process and its transistor size is close to two times smaller."


The NVIDIA Pascal board (Image credit: Legit Reviews)

Pascal's 16nm FinFET production will be a major change from the existing 28nm process used for all current NVIDIA GPUs. And if this report is accurate, they are taking full advantage, considering that the transistor count is more than double the 8 billion found in the TITAN X.


(Image credit: Fudzilla)

And what about memory? We have long known that Pascal will be NVIDIA's first foray into HBM, and Fudzilla is reporting that up to 32GB of second-gen HBM (HBM2) will be present on the highest model, which is a rather outrageous number even compared to the 12GB TITAN X.

"HBM2 enables cards with 4 HBM 2.0 cards with 4GB per chip, or four HBM 2.0 cards with 8GB per chips results with 16GB and 32GB respectively. Pascal has power to do both, depending on the SKU."

Decoding that a bit: four HBM2 stacks at 4GB each gives 16GB, while four stacks at 8GB each gives 32GB. Pascal is expected in 2016, so we'll have plenty of time to speculate on these and doubtless other rumors to come.

Source: Fudzilla

NVIDIA Adds Metal Gear Solid V: The Phantom Pain Bundle to GeForce Cards

Subject: Graphics Cards | July 23, 2015 - 10:52 AM |
Tagged: nvidia, geforce, gtx, bundle, metal gear solid, phantom pain

NVIDIA continues with its pattern of flagship game bundles with today's announcement. Starting today, GeForce GTX 980 Ti, 980, 970, and 960 GPUs from select retailers will include a copy of Metal Gear Solid V: The Phantom Pain, due out September 15th. (The bundle is live on Amazon.com.) Notebooks that use the GTX 980M or 970M GPU also qualify.


From NVIDIA's marketing on the bundle:

Only GeForce GTX gives you the power and performance to game like the Big Boss. Experience the METAL GEAR SOLID V: THE PHANTOM PAIN with incredible visuals, uncompromised gameplay, and advanced technologies. NVIDIA G-SYNC™ delivers smooth and stutter-free gaming, GeForce Experience™ provides optimal playable settings, and NVIDIA GameStream™ technology streams your game to any NVIDIA SHIELD™ device.

It appears that Amazon.com already has its landing page up and ready for the MGS V bundle program, so if you are hunting for a new graphics card, stop there and see what they have in your price range.

Let's hope that this game release goes a bit more smoothly than Batman: Arkham Knight...

Source: NVIDIA

Stop me if you've heard this before; change your .exe and change your performance

Subject: Graphics Cards | July 20, 2015 - 02:00 PM |
Tagged: amd, linux, CS:GO

Thankfully, it has been quite a while since we saw GPU driver optimizations tied to specific .exe filenames on Windows; in the past, both major vendors have tweaked performance based on the name of the executable that launches the game.  This particular flavour of underhandedness had become passé, at least until now.  Phoronix has spotted it once again, this time seeing a big jump in CS:GO performance when renaming the csgo_linux binary to hl2_linux.  The game is built on the same engine, but the optimizations for the Source engine are not properly applied to CS:GO.

There is nothing nefarious about this particular example; it seems more a case of AMD's driver team being lazy or, more likely, short-staffed.  If you play CS:GO on Linux, rename your binary and you will see a jump in performance with no deleterious side effects.  Phoronix is investigating more games to see if there are other inconsistently applied optimizations.


"Should you be using a Radeon graphics card with the AMD Catalyst Linux driver and are disappointed by the poor performance, there is a very easy workaround for gaining much better performance under Linux... In some cases a simple tweak will yield around 40% better performance!"

Here are some more Graphics Card articles from around the web:


Source: Phoronix

TSMC Plans 10nm, 7nm, and "Very Steep" Ramping of 16nm.

Subject: Graphics Cards, Processors, Mobile | July 19, 2015 - 06:59 AM |
Tagged: Zen, TSMC, Skylake, pascal, nvidia, Intel, Cannonlake, amd, 7nm, 16nm, 10nm

Getting smaller features allows a chip designer to create products that are faster and cheaper and that consume less power. Years ago, most chip designers had their own production facilities, but that is getting rare. IBM has just finished selling its manufacturing off to GlobalFoundries, which was spun out of AMD when it divested from fabrication in 2009. Texas Instruments, on the other hand, decided that it would continue manufacturing but get out of the chip design business. Intel and Samsung are arguably the last two players with a strong commitment to both sides of the “let's make a chip” coin.


So where do these chip designers go? TSMC is the name that comes up most often. Any given discrete GPU in the last several years has probably been produced there, along with several CPUs and SoCs from a variety of fabless semiconductor companies.

Several years ago, when the GeForce 600 series launched, TSMC's 28nm line suffered shortages, which left GPUs out of stock for quite some time. Since then, 28nm has been the stable workhorse for countless high-performance products. Recent chips have been physically huge, thanks to how mature the process has become and the correspondingly low defect rates. The designers are anxious to get onto smaller processes, though.

In a conference call at 2 AM (EDT) on Thursday, which is 2 PM in Taiwan, Mark Liu of TSMC announced that “the ramping of our 16 nanometer will be very steep, even steeper than our 20nm”. By that, they mean this year. Hopefully this translates to production that could be used for GPUs and CPUs early, as AMD needs it to launch their Zen CPU architecture in 2016, as early in that year as possible. Graphics cards have also been on 28nm for over three years. It's time.

Also interesting is how TSMC believes that they can hit 10nm by the end of 2016. If so, this might put them ahead of Intel. That said, Intel was also confident that they could reach 10nm by the end of 2016, right until they announced Kaby Lake a few days ago. We will need to see if it pans out. If it does, competitors could actually beat Intel to the market at that feature size -- although that could end up being mobile SoCs and other integrated circuits that are uninteresting for the PC market.

Following the announcement from IBM Research, 7nm was also mentioned in TSMC's call. Apparently they expect to start qualifying it in Q1 2017. That does not provide an estimate for production but, if their 10nm schedule is both accurate and representative of 7nm, that would put production somewhere in 2018. Note that I just speculated on an if of an if of a speculation, so take that with a mine of salt. There is probably a very good reason that this date wasn't mentioned in the call.

Back to the 16nm discussion, what are you hoping for most? New GPUs from NVIDIA, new GPUs from AMD, a new generation of mobile SoCs, or the launch of AMD's new CPU architecture? This should make for a highly entertaining comments section on a Sunday morning, don't you agree?

AMD Confirms August Availability of Radeon R9 Nano

Subject: Graphics Cards | July 17, 2015 - 08:20 AM |
Tagged: radeon, r9 nano, hbm, Fiji, amd

AMD has spilled the beans on at least one aspect of the R9 Nano: the release timeframe. On their Q2 earnings call yesterday, AMD CEO Lisa Su made this telling remark:

“Fury just launched, actually this week, and we will be launching Nano in the August timeframe.”


Image credit: VideoCardz.com

Wccftech had the story based on the AMD earnings call, but unfortunately there is no other new information on the card just yet. We've speculated on how much lower clocks would need to be to meet the 175W target with full Fiji silicon, and the drop is going to be significant. The air coolers we've seen on the Fury (non-X) cards to date have extended well beyond the PCB, and the Nano is a mini-ITX form factor design.

Regardless of where the final GPU and memory clocks land, I think it's safe to assume there won't be much (if any) overclocking headroom. Then again, if the card does deliver higher performance than the 290X in a mini-ITX package at 175W, I don't think OC headroom will be a drawback. I guess we'll have to keep waiting for the official specs before the end of August.

Source: Wccftech

Meet ASUS' DirectCU III on the Radeon Fury

Subject: Graphics Cards | July 13, 2015 - 03:34 PM |
Tagged: Fury, DirectCU III, asus, amd

The popular ASUS STRIX series has recently been updated with the DirectCU III custom cooler, on both the GTX 980 and the new Radeon Fury.  This version uses dual 10mm heatpipes and triple wing-blade fans, which are billed as providing 220% larger surface area and a 105% increase in air pressure, for a claimed 40% reduction in temperature.  We cannot directly compare the cooling ability to the retail model; however, [H]ard|OCP's tests show you can indeed cool a Fury on air, and 71C at full load is lower than the 81C seen on a GTX 980.  Even more impressive is that the fans were only at 43% speed and operating almost silently; at the cost of increased noise you could lower those temperatures if you desired.  Check out their full review to see how the card did, but do take note: [H] does not at this time have access to the new GPU Tweak II utility required to overclock the card.

-update - now with fewer X's


"AMD's Radeon Fury X is here, the AMD Radeon R9 Fury presents itself and we evaluate a full retail custom ASUS STRIX R9 Fury using ASUS' new DirectCU III technology. We will compare this to a GeForce GTX 980 using the new drivers AMD just released and find out what kind of gameplay experience the R9 Fury has to offer."

Here are some more Graphics Card articles from around the web:


Source: [H]ard|OCP

AMD Releases New WHQL Catalyst 15.7 Drivers

Subject: Graphics Cards | July 8, 2015 - 05:52 PM |
Tagged: Win 10, whql, drivers, catalyst, amd, 15.7, 15.20

Sadly, it is not every month that we see a new AMD WHQL driver release.  Several years back, AMD made the promise that they would release WHQL drivers on a monthly basis, and for quite a while they kept that promise.  Engineering cuts, new architectures, and more challenges than ever with new technologies have essentially ended the monthly model.  AMD does their best by putting out beta drivers when major titles are released, but it has been quite some time since we last saw a full WHQL.


Today we finally see the release of the very latest branch of the Catalyst drivers.  Last month we saw the 15.15 drivers that were released with the AMD Fury X.  We also had a fair share of beta drivers to keep users updated with the latest game profiles.  The version released today is based on the 15.20 code path and is officially known as Catalyst 15.7.

There are a lot of new features and support in this driver, which makes it a pretty big deal.  I am guessing that it seems like such a big deal because major updates have been few and far between.  This is AMD's first driver to support the Windows 10 Technical Preview.

The next set of features is very exciting for anyone who has any GCN based card, no matter the age.  Virtual Super Resolution is enabled for all GCN 1.0 cards and above.  The same goes for Frame Rate Target Control.  AMD has included new CrossFire Profile Enhancements for many of the latest games and top sellers.  The only new feature that does not support all GCN cards is that of AMD FreeSync with CrossFire support.  As readers may remember, FreeSync did not previously work in a CrossFire solution.  FreeSync itself is relegated to the newer members of the GCN family.  The only other potential disappointment (and not new news at all) is still the lack of CrossFire support (much less FreeSync with CrossFire support) in DX9 titles.

AMD promises performance improvements as compared to the previous Omega drivers released last year.  This is fairly typical, but people are already reporting better performance and improved CPU usage in Windows 10 previews based on the latest build.  It is great to see AMD releasing a new set of drivers, but just like any addict... we can't wait for our next hit and whatever new features and performance it may bring.

You can find the drivers here.

Source: AMD

Report: AMD Radeon Fury Specs and Photos Leaked

Subject: Graphics Cards | July 7, 2015 - 11:59 AM |
Tagged: Radeon Fury, radeon, HBM1, amd

As reported by VideoCardz.com, the upcoming Radeon Fury card specs have been leaked (and confirmed, according to the report), and the air-cooled card is said to have 8 fewer compute units enabled and a slightly lower core clock.


The report pictures a pair of Sapphire cards, both using the Tri-X triple-fan air cooler. The first is a reference-clocked version at 1000 MHz (50 MHz slower than the Fury X), and the second is an overclocked version at 1040 MHz. And what of the rest of the specs? VideoCardz has created this table:


The total number of compute units is 56 (8 fewer than the Fury X), which at 64 stream cores per unit results in 3584 for the non-X GPU. TMU count drops to 224, and HBM1 memory speed is unchanged at 1000 MHz effective. VideoCardz is listing the ROP count at an unchanged 64, but this (along with the rest of the report, of course) has not been officially announced.


The board will apparently be identical to the reference Fury X

Retail price on this card had been announced by AMD as $549, and with the modest reduction in specs (and hopefully some overclocking headroom) this could be an attractive option to compete with the GTX 980, though it will probably need to beat the 980's performance, or at least match its $500 price, to be relevant in the current market. With these specs it looks like it will only be slightly behind the Fury X, so pricing shouldn't be much of an issue for AMD just yet.

AMD Projects Revenue Down 8% for Q2 2015

Subject: Graphics Cards, Processors | July 7, 2015 - 08:00 AM |
Tagged: earnings, amd

The projections for AMD's second fiscal quarter had revenue somewhere between flat and down 6%. The actual estimate, as of July 6th, is below that entire range: they expect that revenue is down 8% from the previous quarter, rather than the aforementioned 0 to 6%. This is attributed to weaker APU sales in OEM devices, though they claim that channel sales are in line with projections.


This is disappointing news for fans of AMD, of course. The next two quarters will be more telling, though. Q3 will include two of the launch months for Windows 10, which should bring a batch of new and interesting devices and aligns well with the back-to-school season. We then get one more chance at a pleasant surprise in the fourth quarter with its holiday season, too. My intuition is that it won't be too much better than however Q3 ends up.

One extra note: AMD has also announced a “one-time charge” of $33 million USD related to a change in product roadmap. Rather than releasing designs at 20nm, they have scrapped those plans and will architect the parts for “the leading-edge FinFET node”. This might be a small expense compared to the benefit of the much smaller process technology. Intel is at 14nm and will likely be there for some time; now AMD won't be stuck waiting around at 20nm for that same duration.

Source: AMD

Overclocking the R9 390X

Subject: Graphics Cards | July 6, 2015 - 01:58 PM |
Tagged: amd, r9 390x, overclocking

Now that [H]ard|OCP has had more time to spend with the new R9 390X, they have found the overclocks they are most comfortable running on the card they used for testing.  They used MSI Afterburner 4.1.1 and first overclocked the card without changing voltages at all, which netted them 1150MHz on the core and 6.6GHz effective on the RAM.  From there they started to raise the Core Voltage, eventually settling on +50, as settings higher than that caused the card to hit its TDP and throttle back.  With that voltage setting they could get the card to run at 1180MHz, with the memory speed remaining at 6.6GHz as it is not affected by the core voltage setting; with the fan speed set to 80% they saw a consistent 67C GPU temperature.  How much impact did that have on performance, and could it push the card's performance beyond an overclocked GTX 980?  Read the full review to find out in detail.


"We take the new MSI Radeon R9 390X GAMING 8G video card and overclock it to it fullest and compare it with an overclocked GeForce GTX 980 at 1440p and 4K in today's latest games. Find out how much overclocking the R9 390X improves performance, and which video card is best performing. Can R9 390X overclock better than R9 290X?"

Here are some more Graphics Card articles from around the web:


Source: [H]ard|OCP

Zotac's GTX 980Ti AMP! Extreme Is A Factory Overclocked Monster

Subject: Graphics Cards | July 4, 2015 - 02:39 PM |
Tagged: zotac, maxwell, gtx 980ti, factory overclocked

Zotac recently unleashed a monstrous new GTX 980Ti AMP! Extreme graphics card featuring a giant triple-slot cooler and a very respectable factory overclock.

Specifically, the Zotac ZT-90505-10P is a custom card with a factory overclocked NVIDIA GTX 980Ti GPU and GDDR5 memory. It is a triple-slot design that uses a dual fin stack IceStorm heatsink with three 90mm temperature-controlled EKO fans. The cooler wraps the fans and heatsink in a shroud and also uses a backplate on the bottom of the card. The card is powered by two 8-pin PCI-E power connectors, and display outputs include three DisplayPort, one HDMI, and one DL-DVI.


Zotac was able to push the Maxwell GPU with its 2,816 CUDA cores to 1,253 MHz base and 1,355 MHz boost. Further, the 6GB GDDR5 memory also has a factory overclock of 7,220 MHz. These clockspeeds are a decent bump over the reference speeds of 1,000 MHz GPU base, 1,076 MHz GPU boost, and 7,012 MHz memory.

We’ll have to wait for reviews to know for sure, but on paper this looks to be a nice card that should run fast and cool thanks to that triple-fan cooler. The ZT-90505-10P will be available shortly with an MSRP of $700 and a 2-year warranty.

Definitely not a bad price compared to other GTX 980Ti cards on the market.

Source: Zotac

Report: ASUS STRIX AMD Radeon Fury (Non-X) Card Listings Found

Subject: Graphics Cards | July 3, 2015 - 08:45 PM |
Tagged: strix, rumor, report, Radeon Fury, asus, amd

A report from VideoCardz.com shows three listings for an unreleased ASUS STRIX version of the AMD Radeon Fury (non-X).


Image credit: VideoCardz

The listings are from European sites, and all three list the same model name: ASUS-STRIX R9FURY-DC3-4G-GAMING. You can find the listing from the above photo here at the German site Computer-PC-Shop.


Image credit: VideoCardz

We can probably safely assume that this upcoming air-cooled card will make use of the new DirectCU III cooler introduced with the new STRIX GTX 980 Ti and STRIX R9 390X, and this massive triple-fan cooler should provide an interesting look at what Fury can do without the AIO liquid cooler from the Fury X. Air cooling will of course negate the issue of pump whine that many have complained about with certain Fury X units.


The ASUS STRIX R9 390X Gaming card with DirectCU III cooler

We await official word on this new GPU, and what price we might expect this particular version to sell for here in the U.S.

Source: VideoCardz

New AMD Fury X Pumps May Reduce Noise Levels

Subject: Graphics Cards | July 2, 2015 - 02:03 PM |
Tagged: amd, radeon, fury x, pump whine

According to a couple of users on the Anandtech forums and elsewhere, another wave of AMD Fury X cards is making its way out into the world. Opening up the top of the Fury X card to reveal the Cooler Master-built water cooler pump shows two different configurations in circulation: one has a teal and white Cooler Master sticker, while the second has a shiny CM logo embossed on it.


This is apparently a different pump implementation than we have seen thus far.

You might have read our recent story looking at the review sample as well as two retail-purchased Fury X cards, where we discovered that the initial pump whine and noise that AMD claimed would be gone in fact remained to pester gamers. As it turns out, all three of our cards have the teal/white CM logo.


Our three Fury X cards have the same sticker on them.

Based on at least a couple of user reports, this different pump variation does not have the same level of pump whine that we have seen to date. If that's the case, it's great news - AMD has started pushing out Fury X cards to the retail market that don't whine and squeal!

If this sticker/label difference is in fact the indicator of a newer, quieter pump, it does leave us with a few questions. Do current Fury X owners with louder coolers get to exchange them through RMA? Is it possible that these new pump decals are not indicative of a total pump changeover, and this is just chance? I have asked AMD for details on this new information already, and in fact have been asking for AMD's input on the issue since the day of retail release. So far, no one has wanted to comment on it publicly or offer me any direction as to what is changing and when.

I hope for the gamers' sake that this new pump sticker turns out to be the tell-tale sign of a changed cooler implementation. Unfortunately, for now the only way to know if you are buying one of these is to install it in your system and listen, or to take the lid off the Fury X when it arrives. (It's a hex 1.5 screw, by the way.)

Though our budget is more than slightly stretched, I'm keeping an eye out for more Fury X cards to show up for sale to get some more random samples in-house!

Source: Fudzilla

ASUS Introduces STRIX GTX 980 Ti with DirectCU III Cooler and a Hefty Overclock

Subject: Graphics Cards | June 30, 2015 - 01:16 PM |
Tagged: overclock, oc, GTX 980 Ti, DirectCU III, asus

ASUS has announced a new STRIX edition of the GeForce GTX 980 Ti, and this is one massive card, not only in size (measuring 12" x 6" x 1.57") but in potential performance as well.


First off, there is the new DirectCU III cooler, which offers three fans and a much larger overall design than that of the existing GTX 980 STRIX card. And there's good reason for the added cooling capacity: this card has one hefty overclock for a GTX 980 Ti, with a 1216 MHz base and a whopping 1317 MHz boost clock in "OC mode". The card's default mode is still quite a bit over reference, with 1190 MHz base and 1291 MHz boost clocks (a reference 980 Ti has a base of 1000 MHz and a boost clock of 1075 MHz). Memory on the STRIX 980 Ti is also overclocked, at 7200 MHz GDDR5 in both modes.

Features for this new card from ASUS:

  • 1317MHz GPU boost clock in OC mode with 7200MHz factory-overclocked memory speed for outstanding gaming experience
  • DirectCU III with Patented Triple Wing-Blade 0dB Fan Design delivers maximum air flow with 30% cooler and 3X quieter performance
  • AUTO-EXTREME Technology with 12+2 phase Super Alloy Power II delivers premium aerospace-grade quality and reliability
  • Pulsating STRIX LED makes a statement while adding style to your system
  • STRIX GPU-Fortifier relieves physical stress around the GPU in order to protect it
  • GPU Tweak II with Xsplit Gamecaster provides intuitive performance tweaking and lets you stream your gameplay instantly


The new DirectCU III cooler

The 0dB fans (a zero-RPM mode under less demanding workloads) are back with a new "wing-blade" design that promises greater static pressure. Power delivery is also improved with the 14-phase (12+2) "Super Alloy Power II" components, which ASUS claims will provide 50% cooler thermals while reducing "component buzzing" by up to 2x under load.


The previous DirectCU II cooler from the STRIX GTX 980

The new ASUS STRIX GTX 980 Ti Gaming card hasn't shown up on Amazon yet, but it should be available soon for what I would expect to be around $699.

Source: ASUS