Rumor: NVIDIA GeForce GTX 980 Ti Specifications Leaked

Subject: Graphics Cards | May 26, 2015 - 05:03 PM |
Tagged: rumors, nvidia, leaks, GTX 980 Ti, gpu, gm200

Who doesn’t love rumor and speculation about unreleased products? (Other than the manufacturers of such products, of course.) Today VideoCardz is reporting, via HardwareBattle, a GPU-Z screenshot reportedly showing specs for an NVIDIA GeForce GTX 980 Ti.


Image credit: HardwareBattle via VideoCardz

First off, the HardwareBattle logo conveniently obscures the hardware ID (as well as ROP/TMU counts). What is visible is the 2816 shader count, which places it between the GTX 980 (2048) and TITAN X (3072). The 6 GB of GDDR5 memory has a 384-bit interface and 7 Gbps speed, so bandwidth should be the same 336 GB/s as the TITAN X. As far as core clocks on this GPU (which seems likely to be a cut-down GM200), they are identical to those of the TITAN X as well with 1000 MHz Base and 1076 MHz Boost clocks shown in the screenshot.
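The bandwidth claim is easy to sanity-check: peak memory bandwidth is just the bus width in bytes multiplied by the per-pin data rate. A minimal sketch, using the figures from the leaked screenshot:

```python
# Peak memory bandwidth = (bus width in bytes) x (per-pin data rate in Gbps).
# Figures below come from the leaked GPU-Z screenshot and the TITAN X specs.

def memory_bandwidth(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Return peak memory bandwidth in GB/s."""
    return (bus_width_bits / 8) * data_rate_gbps

# Rumored GTX 980 Ti: 384-bit bus at 7 Gbps effective
print(memory_bandwidth(384, 7.0))  # 336.0 GB/s, matching the TITAN X
```

The same formula gives 224 GB/s for the GTX 980's 256-bit, 7 Gbps configuration, which is why the 384-bit bus matters as much as the extra memory capacity.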


Image credit: HardwareBattle via VideoCardz

We await any official announcement, but from the frequency of the leaks it seems we won’t have to wait too long.

Podcast Pieces: A Discussion about HBM (High Bandwidth Memory) coming for AMD Fiji

Subject: Graphics Cards | May 23, 2015 - 09:46 AM |
Tagged: video, hbm, high bandwidth memory, amd, Fiji

During this week's podcast, Josh and the team went through an in-depth discussion of the new memory technology that AMD will be using on the upcoming Fiji GPU, HBM (high bandwidth memory). In case you don't regularly listen to our amazing PC Perspective Podcast, we have cut out the portion that focuses on HBM so that everyone can be educated on what this new technology will offer for coming GPUs.


Enjoy! Be sure to subscribe to the PC Perspective YouTube channel for more videos like this!

Leaked AMD Fiji Card Images Show Small Form Factor, Water Cooler Integration

Subject: Graphics Cards | May 22, 2015 - 09:39 AM |
Tagged: wce, radeon, Fiji, amd, 390x

UPDATE (5/22/15): Johan Andersson tweeted out this photo this morning, with the line: "This new island is one seriously impressive and sweet GPU. wow & thanks @AMDRadeon ! They will be put to good use :)"  Looks like we can confirm that at least one of the parts AMD is releasing does have the design of the images we showed you before, though the water cooling implementation is missing or altered.



File this under "rumor" for sure, but a cool one nonetheless...

After yesterday's official tidbit of information surrounding AMD's upcoming flagship graphics card for enthusiasts and its use of HBM (high bandwidth memory), it appears we have another leak on our hands. The guys over at Chiphell have apparently acquired some stock footage of the new Fiji flagship card (whether or not it will be called the 390X has yet to be seen) and it looks...awesome.


In that post from yesterday I noted that with an HBM design AMD could in theory build an add-in card that is of a different form factor than anything we have previously seen for a high end part. Based on the image above, if this turns out to be the high end Fiji offering, it appears the PCB will indeed be quite small as it no longer requires memory surrounding the GPU itself. You can also see that it will in fact be water cooled though it looks like it has barb inlets rather than a pre-attached cooler in this image.


The second leaked image shows display outputs consisting of three full-size DisplayPort connections and a single HDMI port.

All of this could be faked of course, but if it is, the joker did a damn good job of compiling all the information into one design. If it's real, I think AMD might finally have a match for the look and styling of the high-end GeForce offerings.

What do you think: real or fake? Cool or meh? Let us know!

Source: Chiphell

Could it be a 980 Ti, or does the bill of lading lie?

Subject: Graphics Cards | May 21, 2015 - 07:33 PM |
Tagged: rumour, nvidia, 980 Ti


The source of leaks and rumours is often unexpected, such as this import data of a shipment headed from China into India.  Could this 6GB card be the GTX 980 Ti that so many have theorized would be coming sometime around AMD's release of their new cards?  Does the fact that 60,709 Indian Rupees equal 954.447 US Dollars put a damper on your excitement or could it be that these 6 lonely cards are being sold at a higher rate overseas than they might be in the US? 
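The conversion in the listing is worth a quick sanity check; the implied exchange rate tells you whether the declared value looks like a plausible retail price (these are the figures from the import data, not anything official):

```python
# Implied exchange rate from the bill-of-lading figures quoted above.
inr_value = 60_709     # declared per-unit value in Indian Rupees
usd_value = 954.447    # quoted US Dollar equivalent

rate = inr_value / usd_value
print(round(rate, 2))  # roughly 63.61 INR per USD
```

That is close to the prevailing INR/USD rate at the time, so the ~$950 figure is a straight conversion of the declared customs value, which may or may not reflect an eventual street price.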

We don't know but we do know there is a mysterious card out there somewhere.

Source: Zauba

How about that High Bandwidth Memory

Subject: Graphics Cards | May 19, 2015 - 03:51 PM |
Tagged: memory, high bandwidth memory, hbm, Fiji, amd

Ryan and the rest of the crew here at PC Perspective are excited about AMD's new memory architecture and the fact that they will be first to market with it.  However, as any intelligent reader knows, a second opinion on the topic is worth finding.  Look no further than The Tech Report, who have also been briefed on AMD's new memory architecture.  Read on to see what they learned from Joe Macri, along with their thoughts on this successor to GDDR5 and on HBM2, which is already in the works.


"HBM is the next generation of memory for high-bandwidth applications like graphics, and AMD has helped usher it to market. Read on to find out more about HBM and what we've learned about the memory subsystem in AMD's next high-end GPU, code-named Fiji."

Here are some more Graphics Card articles from around the web:

Graphics Cards

Manufacturer: AMD

High Bandwidth Memory

UPDATE: I have embedded an excerpt from our PC Perspective Podcast that discusses the HBM technology that you might want to check out in addition to the story below.

The chances are good that if you have been reading PC Perspective or almost any other website that focuses on GPU technologies for the past year, you have read the acronym HBM. You might have even seen its full name: high bandwidth memory. HBM is a new technology that aims to turn the way a processor (GPU, CPU, APU, etc.) accesses memory upside down, almost literally. AMD has already publicly stated that its next generation flagship Radeon GPU will use HBM as part of its design, but it wasn’t until today that we could talk about what HBM actually offers to a high performance processor like Fiji. At its core, HBM drastically changes how the memory interface works, how much power it requires, and what metrics we will use to compare competing memory architectures. AMD and its partners started working on HBM with the industry more than 7 years ago, and with the first retail product nearly ready to ship, it’s time to learn about HBM.

We got some time with AMD’s Joe Macri, Corporate Vice President and Product CTO, to talk about AMD’s move to HBM and how it will shift the direction of AMD products going forward.

The first step in understanding HBM is to understand why it’s needed in the first place. Current GPUs, including the AMD Radeon R9 290X and the NVIDIA GeForce GTX 980, utilize a memory technology known as GDDR5. This architecture has scaled well over the past several GPU generations but we are starting to enter the world of diminishing returns. Balancing memory performance and power consumption is always a tough battle; just ask ARM about it. On the desktop component side we have much larger power envelopes to work inside but the power curve that GDDR5 is on will soon hit a wall, if you plot it far enough into the future. The result will be either drastically higher power consuming graphics cards or stalling performance improvements of the graphics market – something we have not really seen in its history.
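The scale of the change is easiest to see per device. Using the approximate figures AMD shared in its HBM briefing (a GDDR5 chip is a 32-bit device at up to 7 Gbps; a first-generation HBM stack is a 1024-bit device at 1 Gbps per pin), the same bandwidth formula shows why a wide-and-slow interface wins:

```python
# Per-device bandwidth: GDDR5 chip vs. first-generation HBM stack.
# Figures are the approximate ones from AMD's HBM briefing.

def bandwidth(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak bandwidth in GB/s for one memory device."""
    return (bus_width_bits / 8) * data_rate_gbps

gddr5_chip = bandwidth(32, 7.0)    # narrow and fast
hbm_stack = bandwidth(1024, 1.0)   # very wide, much slower clocks
print(gddr5_chip, hbm_stack)       # 28.0 GB/s vs 128.0 GB/s
```

Because the HBM stack hits that bandwidth at a fraction of the clock speed (and voltage), the bandwidth-per-watt picture improves dramatically, which is exactly the wall the paragraph above describes GDDR5 approaching.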


While it’s clearly possible that current, and maybe even next generation, GPU designs could still depend on GDDR5 as the memory interface, the move to a different solution is needed for the future; AMD is just making the jump earlier than the rest of the industry.

Continue reading our look at high bandwidth memory (HBM) architecture!!

NVIDIA Under Attack Again for GameWorks in The Witcher 3: Wild Hunt

Subject: Graphics Cards | May 17, 2015 - 12:04 PM |
Tagged: The Witcher 3, nvidia, hairworks, gameworks, amd

I feel like every few months I get to write more stories focusing on the exact same subject. It's almost as if nothing in the enthusiast market is happening and thus the cycle continues, taking all of us with it on a wild ride of arguments and valuable debates. Late last week I started hearing from some of my Twitter followers that there were concerns surrounding the upcoming release of The Witcher 3: Wild Hunt. Then I found a link to this news post over at that put some of the information in perspective.

Essentially, The Witcher 3 uses parts of NVIDIA's GameWorks development tools and APIs, software written by NVIDIA to help game developers take advantage of new technologies and to quickly and easily implement them into games. The problem of course is that GameWorks is written and developed by NVIDIA. That means that optimizations for AMD Radeon hardware are difficult or impossible, depending on who you want to believe. Clearly it doesn't benefit NVIDIA to optimize its software for AMD GPUs financially, though many in the community would like NVIDIA to give a better effort - for the good of said community.


Specifically in regards to The Witcher 3, the game implements NVIDIA HairWorks technology to add realism to many of the creatures of the game world. (Actually, the game includes HairWorks, HBAO+, PhysX Destruction, and PhysX Clothing, but our current discussion focuses on HairWorks.) All of the marketing and video surrounding The Witcher 3 has been awesome, and the realistic animal fur simulation has definitely been a part of it. However, it appears that AMD Radeon GPU users are concerned that performance with HairWorks enabled will suffer.

An example of The Witcher 3: Wild Hunt with HairWorks

One of the game's developers has been quoted as such:

Many of you have asked us if AMD Radeon GPUs would be able to run NVIDIA’s HairWorks technology – the answer is yes! However, unsatisfactory performance may be experienced as the code of this feature cannot be optimized for AMD products. Radeon users are encouraged to disable NVIDIA HairWorks if the performance is below expectations.

There are at least several interpretations of this statement floating around the web. The first, and most inflammatory, is that NVIDIA is not allowing CD Projekt RED to optimize HairWorks by withholding source code. Another is that CD Projekt is choosing not to optimize for AMD hardware due to time considerations. The last is that it simply isn't possible to optimize it because of hardware limitations exposed by HairWorks.

I went to NVIDIA with these complaints about HairWorks and Brian Burke gave me this response:

We are not asking game developers do anything unethical.
GameWorks improves the visual quality of games running on GeForce for our customers.  It does not impair performance on competing hardware.
Demanding source code access to all our cool technology is an attempt to deflect their performance issues. Giving away your IP, your source code, is uncommon for anyone in the industry, including middleware providers and game developers. Most of the time we optimize games based on binary builds, not source code.
GameWorks licenses follow standard industry practice.  GameWorks source code is provided to developers that request it under license, but they can’t redistribute our source code to anyone who does not have a license. 
The bottom line is AMD’s tessellation performance is not very good and there is not a lot NVIDIA can/should do about it. Using DX11 tessellation has sound technical reasoning behind it, it helps to keep the GPU memory footprint small so multiple characters can use hair and fur at the same time.
I believe it is a resource issue. NVIDIA spent a lot of artist and engineering resources to help make Witcher 3 better. I would assume that AMD could have done the same thing because our agreements with developers don’t prevent them from working with other IHVs. (See also, Project Cars)
I think gamers want better hair, better fur, better lighting, better shadows and better effects in their games. GameWorks gives them that.  

Interesting comments for sure. The essential takeaway from this is that HairWorks depends heavily on tessellation performance, and we have known since the GTX 680 was released that NVIDIA's architecture performs better than AMD's GCN for tessellation - often by a significant amount. NVIDIA developed its middleware to utilize the strength of its own GPU technology and, while it's clear that some disagree, not to negatively impact AMD. Did NVIDIA know that would be the case when it was developing the software? Of course it did. Should it have done something to help AMD GPUs more gracefully fall back? Maybe.

Next, I asked Burke directly if claims that NVIDIA was preventing AMD or the game developer from optimizing HairWorks for other GPUs and platforms were true. I was told that both AMD and CD Projekt had the ability to tune the game, but in different ways. The developer could change the tessellation density based on the specific GPU detected (lower for a Radeon GPU with less tessellation capability, for example), but that would require dedicated engineering from either CD Projekt or AMD to do. AMD, without access to the source code, should be able to make changes in the driver at the binary level, similar to how most other driver optimizations are built. Burke states that in these instances NVIDIA often sends engineers to work with game developers and that AMD "could have done the same had it chosen to."  And again, NVIDIA reiterated that in no way do its agreements with game developers prohibit optimization for AMD GPUs.

It would also be possible for AMD to have pushed for the implementation of TressFX in addition to HairWorks; a similar scenario played out in Grand Theft Auto V where several vendor-specific technologies were included from both NVIDIA and AMD, customized through in-game settings. 

NVIDIA has never been accused of being altruistic; it doesn't often create things and then share them with open arms with the rest of the hardware community. But it has to be understood that game developers know this as well - they are not oblivious. CD Projekt knew that HairWorks performance on AMD would be poor but decided to implement the technology into The Witcher 3 anyway. They were willing to accept performance penalties for some users to improve the experience of others. You can argue that is not the best choice, but at the very least The Witcher 3 will let you disable the HairWorks feature completely, removing it from the performance debate altogether.

In a perfect world for consumers, NVIDIA and AMD would walk hand-in-hand through the fields and develop hardware and software in tandem, making sure all users get the best possible experience with all games. But that style of work is only helpful (from a business perspective) for the organization attempting to gain market share, not the one with the lead. NVIDIA doesn't have to do it and chooses to not. If you don't want to support that style, vote with your wallet.

Another similar controversy surrounded the recent release of Project Cars. AMD GPU performance was significantly lower than comparable NVIDIA GPUs, even though this game does not implement any GameWorks technologies. In that case, the game's developer directly blamed AMD's drivers, saying that it was a lack of outreach from AMD that caused the issues. AMD has since recanted its stance that the performance delta was "deliberate" and says a pending driver update will address gamers' performance issues.


All arguing aside, this game looks amazing. Can we all agree on that?

The only conclusion I can come to from all of this is that if you don't like what NVIDIA is doing, that's your right - and you aren't necessarily wrong. There will be plenty of readers who see the comments made by NVIDIA above and continue to believe that they are being, at best, disingenuous and, at worst, straight up lying. As I mentioned above in my own comments, NVIDIA is still a for-profit company that is responsible to shareholders for profit and growth. And in today's world that sometimes means working against other companies rather than with them, resulting in impressive new technologies for its customers and pushback from competitors' customers. It's not fun, but that's how it works today.

Fans of AMD will point to G-Sync, GameWorks, CUDA, PhysX, FCAT and even SLI as indications of NVIDIA's negative impact on open PC gaming. I would argue that more users would look at that list and see improvements to PC gaming, progress that helps make gaming on a computer so much better than gaming on a console. The truth likely rests somewhere in the middle; there will always be those individuals that immediately side with one company or the other. But it's the much larger group in the middle, that shows no corporate allegiance and instead just wants to have as much fun as possible with gaming, that will impact NVIDIA and AMD the most.

So, since I know it will happen anyway, use the comments page below to vent your opinion. But, for the benefit of us all, try to keep it civil!

NVIDIA Releases 352.84 WHQL Drivers for Windows 10

Subject: Graphics Cards | May 15, 2015 - 11:36 PM |
Tagged: windows 10, geforce, graphics drivers, nvidia, whql

The last time NVIDIA released a graphics driver for Windows 10, they added a download category to their website for the pre-release operating system. Since about January, graphics driver updates had been pushed through Windows Update and, before that, you would need to use Windows 8.1 drivers. Receiving drivers from Windows Update also meant that add-ons, such as the PhysX runtimes and GeForce Experience, would not be bundled with them. I know that some have installed them separately, but I didn't.


The 352.84 release, which is their second Windows 10 driver to be released outside of Windows Update, is also certified by WHQL. NVIDIA has recently been touting Microsoft certification for many of their drivers. Historically, they released a large number of Beta drivers that were stable, but did not wait for Microsoft to vouch for them. For one reason or another, they have put a higher priority on that label, even for “Game Ready” drivers that launch alongside a popular title.

For some reason, the driver is currently only available via GeForce Experience, but I assume NVIDIA will publish it through its other channels soon, too.

Source: NVIDIA

Oculus Rift "Full Rift Experience" Specifications Released

Subject: Graphics Cards, Processors, Displays, Systems | May 15, 2015 - 03:02 PM |
Tagged: Oculus, oculus vr, nvidia, amd, geforce, radeon, Intel, core i5

Today, Oculus has published a list of what they believe should drive their VR headset. The Oculus Rift will obviously run on lower hardware. Their minimum specifications, published last month and focused on the Development Kit 2, did not even list a specific CPU or GPU -- just a DVI-D or HDMI output. They then went on to say that you really should use a graphics card that can handle your game at 1080p with at least 75 fps.


The current list is a little different:

  • NVIDIA GeForce GTX 970 / AMD Radeon R9 290 (or higher)
  • Intel Core i5-4590 (or higher)
  • 8GB RAM (or higher)
  • A compatible HDMI 1.3 output
  • 2x USB 3.0 ports
  • Windows 7 SP1 (or newer)

I am guessing that, unlike the previous list, Oculus has a clearer vision for a development target. They were a little unclear about whether this refers to the consumer version or the current needs of developers. In either case, it will likely serve as a guide for what they believe developers should target when the consumer version launches.

This post also coincides with the release of the Oculus PC SDK 0.6.0. This version pushes distortion rendering to the Oculus Server process, rather than the application. It also allows multiple canvases to be sent to the SDK, which means developers can render text and other noticeable content at full resolution, but scale back in places that the user is less likely to notice. They can also be updated at different frequencies, such as sleeping the HUD redraw unless a value changes.
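The multi-canvas idea can be pictured as a small compositor model: each layer carries its own resolution scale and redraw policy, so the HUD can sleep until its value changes while the 3D scene redraws every frame. The sketch below is purely illustrative; the class and field names are invented and do not mirror the Oculus PC SDK's actual types.

```python
# Illustrative model of layered rendering: per-layer resolution and
# redraw policy. Names are invented; this is not the Oculus SDK API.
from dataclasses import dataclass

@dataclass
class Layer:
    name: str
    resolution_scale: float  # 1.0 = full resolution (e.g. text/HUD)
    dirty: bool = True       # redraw only when content has changed

def layers_to_submit(layers):
    """Return the layers that need re-rendering this frame."""
    return [layer.name for layer in layers if layer.dirty]

scene = Layer("scene", resolution_scale=0.8)              # scaled back where users won't notice
hud = Layer("hud", resolution_scale=1.0, dirty=False)     # value unchanged: skip the redraw
print(layers_to_submit([scene, hud]))  # ['scene']
```

The win is exactly what the paragraph describes: noticeable content (text, HUD) stays at full resolution, while expensive content can be rendered smaller and each layer updates on its own schedule.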

The Oculus PC SDK (0.6.0) is now available at the Oculus Developer Center.

Source: Oculus

Rumor: AMD Radeon R9 300-series Release Dates

Subject: Graphics Cards | May 14, 2015 - 07:00 AM |
Tagged: tonga, radeon, R9, pitcairn, Fiji, bonaire, amd

A report, relayed via WCCFTech, claims that AMD's Radeon R9 300-series GPUs will launch in late June. Specifically, the R9 380, the R7 370, and the R7 360 will arrive on the 18th of June. These are listed as OEM parts, as we have mentioned on the podcast, which Ryan speculates could mean that the flagship Fiji XT might go by a different name. The original source seems to think that it will be called the R9 390(X), though, and that it will be released on the 24th of June.

WCCFTech is a bit more timid, calling it simply “Fiji XT”.


In relation to industry events, this has the OEM lineup launching on the last day of E3 and Fiji XT launching in the middle of the following week. This feels a little weird, especially because AMD's E3 event with PC Gamer is on the 16th. While it makes sense for AMD to announce the launch a few days before it happens, that doesn't make sense for OEM parts unless they were going to announce a line of pre-built PCs. The most likely candidate to launch gaming PCs is Valve, and they're one of the few companies that are absent from AMD's event.

And this is where I run out of ideas. Launching a line of OEM parts at E3 is weird unless it was to open the flood gates for OEMs to make their own announcements. Unless Valve is scheduled to make an announcement earlier in the day, or a surprise appearance at the event, that seems unlikely. Something seems up, though.

Source: WCCFTech