Subject: Graphics Cards | January 7, 2016 - 02:36 PM | Jeremy Hellstrom
Tagged: XFX R9 380X Double Dissipation XXX OC 4GB, xfx, amd, 380x
Take a quick break from reading about the soon-to-be-released technology at CES for a look at a GPU you can buy right now. The XFX DD XXX series has been around for a few generations and the XFX R9 380X Double Dissipation XXX OC 4GB sports the same custom DD cooler you would expect. The factory overclock is quite modest: 20MHz on the GPU, taking it to 990MHz while retaining the default 5.7GHz memory clock. Of course [H]ard|OCP were not going to leave that as is; they hit a 1040MHz core and 6.1GHz memory clock thanks to the custom cooling on the card, although with no way to adjust voltage they felt this card could be capable of more if that feature were added. Read on to see how this card compares against the ASUS STRIX GTX 960 DCU II OC in this ~$220 GPU showdown.
"On our test bench today is the XFX R9 380X Double Dissipation XXX OC 4GB video card. It features the latest Ghost Thermal 3.0 cooling technology from XFX and a factory overclock. We will compare it to the ASUS STRIX GTX 960 DCU II OC 4GB in a battle of the $229 price point video cards to determine the better overall value."
Here are some more Graphics Card articles from around the web:
- ASUS R9 390 STRIX DirectCU III @ [H]ard|OCP
- ASUS R9 390X STRIX OC Review @ Hardware Canucks
- New AMD GPU Performance To Be Boosted By Linux 4.5; How It Compares To The Binary Blob @ Phoronix
- NVIDIA Linux Driver 2015 Year-in-Review @ Phoronix
- NVIDIA Quadro M4000 @ Kitguru
- Gigabyte GeForce GTX 950 Xtreme Review @ HiTech Legion
Subject: Graphics Cards, Shows and Expos | January 7, 2016 - 02:03 PM | Scott Michaud
Tagged: square enix, nvidia, CES 2016, CES
NVIDIA has just announced a new game bundle. If you purchase an NVIDIA GeForce GTX 970, GTX 980 desktop or mobile, GTX 980 Ti, GTX 980M, or GTX 970M, then you will receive a free copy of Rise of the Tomb Raider. As always, make sure the retailer is selling the participating card. If the product has a download code, it will be specially marked. NVIDIA will not upgrade non-participating stock to the bundle.
Rise of the Tomb Raider will go live on January 29th. It was originally released in November as an Xbox One timed exclusive. It will also arrive on the PlayStation 4, but not until “holiday,” which is probably around Q4 (or maybe late Q3).
If you purchase the bundle, then your graphics card will obviously be powerful enough to run the game. At a minimum, you will require a GeForce GTX 650 (2GB) or an AMD HD 7770 (2GB). The CPU needs are light too, requiring just a Sandy Bridge Core i3 (Intel Core i3-2100) or AMD's equivalent. Probably the only concern would be the minimum of 6GB system RAM, which also requires a 64-bit operating system. Now that the Xbox 360 and PlayStation 3 have been deprecated, 32-bit gaming will be increasingly rare for “AAA” titles. That said, we've been ramping up to 64-bit for the last decade. One of the first games that supported x86-64 was Unreal Tournament 2004.
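Those minimums boil down to a simple gate on RAM and OS bitness. A minimal sketch (function and names are hypothetical, not from any real launcher):

```python
# Illustrative check against Rise of the Tomb Raider's published floor:
# 6 GB of system RAM, which in practice also implies a 64-bit OS.
def meets_minimum(ram_gb, is_64bit_os):
    """Return True if the system clears the stated RAM/OS minimum."""
    MIN_RAM_GB = 6
    return is_64bit_os and ram_gb >= MIN_RAM_GB

print(meets_minimum(8, True))   # a typical 2016 gaming rig passes
print(meets_minimum(4, False))  # a 32-bit machine with 4 GB falls short
```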
The Rise of the Tomb Raider NVIDIA bundle starts today.
Follow all of our coverage of the show at http://pcper.com/ces!
Subject: Graphics Cards, Shows and Expos | January 5, 2016 - 09:39 PM | Ryan Shrout
Tagged: vr ready, VR, virtual reality, video, Oculus, nvidia, htc, geforce, CES 2016, CES
Other than the in-depth discussion from NVIDIA on the Drive PX 2 and its push into autonomous driving, NVIDIA didn't have much other news to report. We stopped by the suite and got a few updates on SHIELD and the company's VR Ready program to certify systems that meet minimum recommended specifications for a solid VR experience.
For the SHIELD, NVIDIA is bringing Android 6.0 Marshmallow to the device, with new features like shared storage and the ability to customize the home screen of the Android TV interface. Nothing earth shattering and all of it is part of the 6.0 rollout.
The VR Ready program from NVIDIA will validate notebooks, systems and graphics cards that have enough horsepower to meet the minimum performance levels for a good VR experience. At this point, the specs essentially match up with what Oculus has put forth: a GTX 970 or better on the desktop and a GTX 980 (full, not 980M) on mobile.
Other than that, Ken and I took in some of the more recent VR demos including Epic's Bullet Train on the final Oculus Rift and Google's Tilt Brush on the latest iteration of the HTC Vive. Those were both incredibly impressive though the Everest demo that simulates a portion of the mountain climb was the one that really made me feel like I was somewhere else.
Check out the video above for more impressions!
Follow all of our coverage of the show at http://pcper.com/ces!
Subject: Graphics Cards, Mobile, Shows and Expos | January 5, 2016 - 04:47 PM | Scott Michaud
Tagged: external graphics, CES 2016, CES, asus
While external graphics has been a concept for quite some time, it was rarely something you could actually buy. Several companies, such as AMD, Lucid, and others, announced products that were never sold. ASUS had their XG Station for Windows Vista, which allowed laptops to plug into a GeForce 8600 GT, although it was only available in Australia. Only now are we beginning to see options from Alienware, MSI, and even Microsoft that are widely available.
ASUS is jumping back in, too. Not much is known about the XG Station 2, except that it is “specially designed for ASUS laptops and graphics cards.” This sounds like it is using a proprietary connector, similar to Alienware and MSI, to connect to ASUS laptops. Also saying it's specifically for ASUS graphics cards is a bit confusing, though. If it is an open PCIe slot, I'm not sure why or how it would be limited to ASUS cards. If the graphics cards are pre-installed, then we don't know the list of potential GPUs.
Either way, ASUS states that the dock can be disconnected without shutting down the PC. I'm interested to see how the GPU is supposed to be unplugged, as Alienware's option can only be done when the system is off, and Microsoft's Surface Book has a software detach with a hardware latch. The connector will also charge the laptop, which is an interesting add-in.
Pricing and availability varies, like the other ASUS announcements, by region.
Follow all of our coverage of the show at http://pcper.com/ces!
Subject: Graphics Cards | December 31, 2015 - 01:41 PM | Sebastian Peak
Tagged: rumor, report, radeon, Polaris, graphics card, gpu, GCN, amd
A report claims that Polaris will succeed GCN (Graphics Core Next) as the next AMD Radeon GPU core, which will power the 400-series graphics cards.
Image via VideoCardz.com
As these rumors go, this is about as convoluted as it gets. VideoCardz has published the story, sourced from WCCFtech, who was reporting on a post with supposedly leaked slides at HardwareBattle. The primary slide in question has since been pulled, and appears below:
Image via HWBattle.com
Of course the name does nothing to provide architectural information on this presumptive GCN replacement, and a new core for the 400-series GPUs was expected anyway after the 300-series was largely a rebranded 200-series (that's a lot of series). Let's hope actual details emerge soon, but for now we can speculate on mysterious tweets from certain interested parties:
— Raja Koduri (@GFXChipTweeter) November 26, 2015
Subject: Graphics Cards | December 29, 2015 - 07:05 AM | Scott Michaud
Tagged: opengl, mesa, linux, Intel
The open-source driver for Intel is known to be a little behind on Linux. Because Intel does not provide as much support as they should, the driver still does not support OpenGL 4.0, although that is changing. One large chunk of that API is support for tessellation, which comes from DirectX 11, and recent patches are adding it for supported hardware. Proprietary drivers exist, at least for some platforms, but they have their own issues.
According to the Phoronix article, once the driver succeeds in supporting OpenGL 4.0, it should not take too long to open the path to 4.2. Tessellation is a huge hurdle, partially because it involves adding two whole shading stages to the rendering pipeline. Broadwell GPUs were recently added, but a patch that was committed yesterday will expand that to Ivy Bridge and Haswell. On Windows, Intel is far ahead -- pushing OpenGL 4.4 for Skylake-based graphics, although that platform only has proprietary drivers. AMD and NVIDIA are up to OpenGL 4.5, which is the latest version.
While all of this is happening, Valve is working on an open-source Vulkan driver for Intel on Linux. This API will be released adjacent to OpenGL, and is built for high-performance graphics and compute. (Note that OpenCL is more sophisticated than Vulkan "1.0" will be on the compute side of things.) As nice as it would be to get high-end OpenGL support, especially for developers who want a simpler structure for communicating with GPUs, Vulkan will probably be the API that matters most for high-end video games. But again, that only applies to games that are developed for it.
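The version gaps above are easiest to see as (major, minor) pairs, the way a capability check might compare them. A quick sketch; the Mesa entry is an assumption (the article only says it is still short of 4.0):

```python
# OpenGL support levels mentioned above, as (major, minor) tuples.
# Tuple comparison in Python is lexicographic, which matches version order.
DRIVER_GL_SUPPORT = {
    "mesa-intel": (3, 3),      # assumed current level; short of 4.0 per the article
    "intel-windows": (4, 4),   # Skylake proprietary driver
    "amd": (4, 5),
    "nvidia": (4, 5),
}

def supports(driver, required=(4, 0)):
    """Return True if the driver's reported GL version meets the requirement."""
    return DRIVER_GL_SUPPORT[driver] >= required

print(supports("mesa-intel"))      # False: tessellation work is still landing
print(supports("nvidia", (4, 5)))  # True: already at the latest version
```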
Subject: Graphics Cards | December 21, 2015 - 01:04 PM | Jeremy Hellstrom
Tagged: GameWorks VR 1.1, nvidia, Oculus, opengl, vive
If you are blessed with the good fortune of already having a VR headset and happen to be running an NVIDIA GPU then there is a new driver you want to grab as soon as you can. The driver includes a new OpenGL extension that enables NVIDIA SLI support for OpenGL apps that display on an Oculus or Vive. NVIDIA's PR suggests you can expect your performance to improve 1.7 times, not quite doubling but certainly offering a noticeable performance improvement. The update is for both GeForce and Quadro cards.
They describe how the update will generate images in their blog post here. They imply that the changes to your code in order to benefit from this update will be minimal and it will also reduce the CPU overhead required to display the images for the right and left eye. Read on if you are interested in the developer side of this update, otherwise download your new driver and keep an eye out for application updates that enable support for SLI in VR.
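To put NVIDIA's ~1.7x claim in VR terms: a 90Hz headset allows roughly 11.1ms per frame, and SLI scaling effectively stretches how much GPU work fits in that window. A back-of-envelope sketch (the 1.7 figure is NVIDIA's claim, not a measurement of ours):

```python
# What a claimed ~1.7x SLI scaling factor means for the VR frame budget.
SLI_SCALING = 1.7  # NVIDIA's advertised improvement, per their PR

def effective_budget_ms(refresh_hz=90, scaling=SLI_SCALING):
    """GPU work that fits in one frame, in single-GPU milliseconds."""
    single_gpu_budget = 1000.0 / refresh_hz  # ms available per frame
    return single_gpu_budget * scaling

print(round(1000.0 / 90, 1))            # 11.1 ms per frame on one GPU
print(round(effective_budget_ms(), 1))  # 18.9 ms of equivalent work with SLI
```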
Subject: Graphics Cards | December 21, 2015 - 07:25 AM | Scott Michaud
Tagged: vulkan, Mantle, Khronos, dx12, DirectX 12
The Khronos Group announced on Friday that the Vulkan API will not ship until next year. The standards body was expecting to launch it at some point in 2015. In fact, when I was first briefed on it, they specifically said that 2015 was an “under-promise and over-deliver” estimate. Vulkan is an open graphics and compute standard that was derived from AMD's Mantle. It, like OpenCL 2.1, uses the SPIR-V language for compute and shading though, which can be compiled from subsets of a variety of languages.
I know that most people will be quick to blame The Khronos Group for this, because industry bodies moving slowly is a stereotype, but I don't think it applies. When AMD created Mantle, it bore some significant delays at all levels. Its drivers and software were held back, and the public release of its SDK was delayed out of existence. Again, it would be easy to blame AMD for this, but hold on. We now get to Microsoft. DirectX 12, which is maybe even closer to Mantle than Vulkan is due to its shading language, didn't roll out as aggressively as Microsoft expected, either. Software is still pretty much non-existent, even though they claimed, at GDC 2014, that about 50% of PC games would be DX12-compatible by Holiday 2015. We currently have... zero (excluding pre-release).
Say what you like about the three examples individually, but when all three show problems, then there might just be a few issues that took longer than expected to solve. It is, after all, an enormously difficult task to translate voltages coming through a PCI Express bus into fancy graphics and GPU compute, and to create all of the supporting ecosystems, too.
Speaking of ecosystems, The Khronos Group has also announced that Google has upgraded their membership to “Promoter” to get more involved with Vulkan development. Google has been sort-of hostile towards certain standards from The Khronos Group on Android in the past, such as disabling OpenCL on Nexus devices, and trying to steer developers into using Android Extension Pack and Renderscript. They seem to want to use Vulkan proper this time, which is always healthy for the API.
I guess look forward to Vulkan in 2016... hopefully early.
Subject: Graphics Cards | December 17, 2015 - 05:49 PM | Jeremy Hellstrom
Tagged: radeon, crimson, amd
That's right folks, the official AMD Radeon Software Crimson Edition 15.12 has just launched for you to install. This includes the fixes for fan speeds when you are using AMD Overdrive: your settings will stick, and the fans will revert to normal after you go back to the desktop from an intense gaming session. There are multiple fixes for Star Wars Battlefront and Fallout 4, and several GUI fixes within the software itself. As always there are still a few kinks being worked out, but overall it is worth popping over to AMD to grab the new driver. You should also have fewer issues upgrading from within Crimson after this update.
Subject: Graphics Cards | December 16, 2015 - 08:12 AM | Scott Michaud
Tagged: nvidia, geforce experience, geforce
A new version of GeForce Experience was published yesterday. It is classified as a beta, which I'm guessing means that they wanted to release it before the Holidays without having to fix potential post-launch issues during the break. Thus, they released it as a beta so users will just roll back if something doesn't work. On the other hand, NVIDIA is suggesting that this will be a recurring theme with their new "Early Access" program.
It has a few interesting features, though. First, it has a screenshot function that connects with Imgur. Steam's F12 function is pretty good for almost any title, but there are some times that you don't want to register the game with Steam, so a second option is welcome. They also have an overlay to control your stream, rather than just an indicator icon.
They added the ability to choose the Twitch Ingest Server, which is the server that the broadcaster connects to and delivers your stream into Twitch's back-end. I haven't used ShadowPlay for a while, but you previously needed to use whatever GeForce Experience chose. If it's not the best connection (ex: across the continent) then you basically had to deal with it. OBS and OBS Studio have these features too of course. I'll be clear: NVIDIA is playing catch-up to open source software in that area.
The last feature to be mentioned might just be the most interesting, though. A while ago, we mentioned that NVIDIA wants to allow online co-op by a GameStream-like service. They now have it, and it's called GameStream Co-op. The viewer can watch, take over your input, or register as a second gamepad. It requires a GeForce GTX 650 (or 660M) or higher.
Subject: Graphics Cards | December 14, 2015 - 03:55 PM | Jeremy Hellstrom
Tagged: amd, asus, STRIX R9 380X DirectCU II OC, overclock
Out of the box the ASUS STRIX R9 380X OC has a top GPU speed of 1030MHz and memory at 5.7GHz, enough to outperform a stock GTX 960 4GB at 1440p but not enough to provide satisfactory performance at that resolution. After spending some time with the card, [H]ard|OCP determined that the best overclock they could coax out of this particular GPU was 1175MHz and 6.5GHz, so they set about testing the performance at 1440p again. To make it fair they also overclocked their STRIX GTX 960 OC 4GB to 1527MHz and 8GHz. Read the full review for the detailed results; you will see that overclocking your 380X really does increase the value you get for your money.
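Those overclocks work out to a healthy uplift on both core and memory. A quick bit of arithmetic on [H]ard|OCP's numbers:

```python
# Percentage headroom [H]ard|OCP found on the STRIX R9 380X over its
# factory clocks (1030 MHz core / 5.7 GHz effective memory).
def oc_gain_pct(stock_mhz, oc_mhz):
    """Overclock expressed as a percentage gain over the stock clock."""
    return (oc_mhz - stock_mhz) / stock_mhz * 100

print(round(oc_gain_pct(1030, 1175), 1))  # core: ~14.1% over factory
print(round(oc_gain_pct(5700, 6500), 1))  # memory: ~14.0% over factory
```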
"We take the new ASUS STRIX R9 380X DirectCU II OC based on AMD's new Radeon R9 380X GPU and overclock this video card to its highest potential. We'll compare performance in six games, including Fallout 4, to a highly overclocked ASUS GeForce GTX 960 4GB video card and find out who dominates 1440p gaming."
Here are some more Graphics Card articles from around the web:
- Sapphire R9 390 Nitro 8GB @ Kitguru
- XFX R9 380X DD XXX OC Review @ OCC
- AMD R9 380X 4GB Graphics Card CrossFire @ eTeknix
- HIS R9 380X IceQ X2 Turbo 4GB Video Card Review @ Madshrimps
Subject: Graphics Cards | December 8, 2015 - 03:56 PM | Jeremy Hellstrom
Tagged: msi, GTX 980 Ti SEA HAWK, 980 Ti, 4k
[H]ard|OCP recently wrapped up a review of the MSI GTX 980 Ti SEA HAWK and are now revisiting the GPU, focusing on performance at 4K resolutions. The particular card they have tops out at a 1340MHz base and a 1441MHz boost clock, which results in an in-game frequency of 1567MHz. For comparison they tested their overclocked card against the SEA HAWK at the factory overclock and a reference 980 Ti, all at 4K. Read on to see if this watercooled 980 Ti is worth the premium price in the full review.
"Our second installment with the MSI GTX 980 Ti SEA HAWK focuses in on gameplay at 4K and how viable it is with the fastest overclocked GTX 980 Ti we have seen yet. We will be using a reference GeForce GTX 980 Ti to show the full spectrum of the 980 Ti’s performance capabilities and emphasize the MSI GTX 980 Ti SEA HAWK’s value."
Here are some more Graphics Card articles from around the web:
- Gigabyte GTX 980 Ti Waterforce Xtreme Gaming 6 GB @ techPowerUp
- XFX R9 Fury Pro Triple Dissipation @ Bjorn3d
- PowerColor PCS+ R9 380X Myst Edition Review @ OCC
- Sapphire Radeon R9 Nitro 380X @ Kitguru
- Radeon Settings: Crimson Edition Performance Analysis @ eTeknix
- Radeon R9 Nano Small Form Factor Overclocking @ [H]ard|OCP
Subject: Graphics Cards, Processors | December 8, 2015 - 08:07 AM | Scott Michaud
Tagged: hsa, GCC, amd
Phoronix, the Linux-focused hardware website, highlighted patches for the GNU Compiler Collection (GCC) that implement HSA. This will allow newer APUs, such as AMD's Carrizo, to accelerate chunks of code (mostly loops) that have been tagged with a precompiler flag as valuable to be done on the GPU. While I have done some GPGPU development, many of the low-level specifics of HSA aren't areas that I have too much experience with.
The patches have been managed by Martin Jambor of SUSE Labs. You can see a slideshow presentation of their work on the GNU website. Even though features froze about a month ago, they are apparently hoping that this will make it into the official GCC 6 release. If so, many developers around the world will be able to target HSA-compatible hardware in the first half of 2016. Technically, anyone can do so regardless, but they would need to specifically use the unofficial branch on the GCC Subversion repository. This probably means compiling it themselves, and it might even be behind on a few features in other branches that were accepted into GCC 6.
Subject: Graphics Cards | December 6, 2015 - 11:29 PM | Scott Michaud
Tagged: gigabyte, cooler master, asetek, amd
AMD and Gigabyte have each received cease and desist letters from Asetek, regarding the Radeon Fury X and GeForce GTX 980 WaterForce, respectively, for using a Cooler Master-based liquid cooling solution. The Cooler Master Seiden 120M is a self-contained block and water pump which courts have ruled infringes on one of Asetek's patents. Asetek has been awarded 25.375% of Cooler Master's revenue from all affected products since January 1st, 2015.
This issue obviously affects NVIDIA less than AMD, since it applies to a single product from just one AIB partner. On AMD's side, however, it affects all Fury X products, but obviously not the air-cooled Fury and Fury Nano cards. It's also possible that future SKUs could be affected as well, especially since upcoming, top-end GPUs will probably be in small packages adjacent to HBM 2.0 memory. This dense form factor lends itself well to direct cooling techniques, like closed-loop water.
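To get a feel for what a 25.375% royalty rate means, here is a trivial sketch; the revenue figure is entirely made up for illustration:

```python
# The court-awarded rate: 25.375% of Cooler Master's revenue on
# infringing products sold since January 1st, 2015.
ROYALTY_RATE = 0.25375

def asetek_award(revenue_usd):
    """Royalty owed on a given amount of affected-product revenue."""
    return revenue_usd * ROYALTY_RATE

# On a hypothetical $1M of affected revenue, just over a quarter goes to Asetek.
print(round(asetek_award(1_000_000)))  # 253750
```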
Even more interesting is that we believe Asetek was expecting to get the Fury X contract. We reported on an Asetek press release that claimed they received their “Largest Ever Design Win” with an undisclosed OEM. We expected it to be the follow-up to the 290X, which we assumed was called 390X because, I mean, AMD just chose that branding, right? Then the Fury X launched and it contained a Cooler Master pump. I was confused. No other candidate for “Largest Ever Design Win” popped up from Asetek, either. I guess we were right? Question mark? The press release of Asetek's design win came out in August 2014 while Asetek won the patent case in December of that year.
Regardless, this patent war has been ongoing for several months now. If it even affects any future products, I'd hope that they'd have enough warning at this point.
Subject: Graphics Cards, Mobile | December 6, 2015 - 08:10 AM | Scott Michaud
Tagged: nvidia, graphics drivers, 860m
Users of notebooks with the GeForce GTX 860M GPU have apparently been experiencing crashes in many new titles. To remedy these issues, NVIDIA has published GeForce Hotfix Driver 359.12. If you do not have the GeForce GTX 860M, and all of your games work correctly, then you probably shouldn't install this. It has not been tested as much as official releases, by either Microsoft or NVIDIA, so other issues could have been introduced and no-one would know.
If you do have that specific GPU though, and you are having problems running certain titles, then you can install the driver now. Otherwise, you can wait for future, WHQL-certified drivers too. Some users are apparently claiming that the issues were fixed, while others complain about crashes in games like Mad Max and Shadow of Mordor.
Subject: Graphics Cards | December 3, 2015 - 10:34 PM | Sebastian Peak
Tagged: Tonga XT, tonga, Radeon R9 380X, Radeon R9 285, Radeon R9 280X, Radeon R9 280, radeon, amd, 384-bit
While it was reported a year ago that AMD's Tonga XT GPU had a 384-bit memory bus in articles sourcing the same PC Watch report, when the Radeon R9 380X was released last month we saw a Tonga XT GPU with a 256-bit memory interface.
The full Tonga core features a 384-bit GDDR5 memory bus (Credit: PC Watch)
Reports of the upcoming card had consistently referenced the wider 384-bit bus, and tonight we are able to officially confirm that Tonga (not just Tonga XT) has been 384-bit capable all along, though this was never enabled by AMD. The reason? The company never found the right price/performance combination.
AMD confirms 384-bit bus available on Tonga, just not enabled on any product, including 380X. Didn't find a perfect perf/$ slot.
— Ryan Shrout (@ryanshrout) December 4, 2015
AMD's Raja Koduri confirmed Tonga's 384-bit bus tonight, and our own Ryan Shrout broke the news on Twitter.
So does this mean an upcoming Tonga GPU could offer this wider memory bus? Tonga itself was a follow-up to Tahiti (R9 280/280X), which did have a 384-bit bus, but all along the choice had been made to keep the updated core at 256-bit.
Now more than a year after the launch of Tonga a new part featuring a fully enabled memory bus doesn't seem realistic, but it's still interesting to know that significantly more memory bandwidth is locked away from owners of these cards.
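How much bandwidth is locked away is easy to quantify with the standard GDDR5 formula: bus width in bytes times effective data rate. A quick sketch using the 380X's 5.7GHz effective memory clock:

```python
# GDDR5 bandwidth = (bus width in bits / 8 bytes) * effective clock in GHz,
# giving GB/s. Compare the shipping 256-bit bus to the dormant 384-bit one.
def gddr5_bandwidth_gbps(bus_bits, effective_ghz):
    """Peak memory bandwidth in GB/s for a GDDR5 configuration."""
    return bus_bits / 8 * effective_ghz

print(round(gddr5_bandwidth_gbps(256, 5.7), 1))  # 182.4 GB/s as shipped on the 380X
print(round(gddr5_bandwidth_gbps(384, 5.7), 1))  # 273.6 GB/s if the full bus were enabled
```

That is a 50% bandwidth uplift sitting unused in every Tonga chip.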
Subject: Graphics Cards | November 30, 2015 - 03:48 PM | Sebastian Peak
Tagged: rumor, report, price cut, nvidia, GTX 980, GTX 970, gtx 960, geforce
A report published by TechPowerUp suggests NVIDIA will soon be cutting prices across their existing GeForce lineup, with potential price changes reaching consumers in time for the holiday shopping season.
So what does this report suggest? The GTX 980 drops to $449, the GTX 970 goes to $299, and the GTX 960 goes to $179. These are pretty consistent with some of the sale or post-rebate prices we’ve seen of late, and such a move would certainly even things up somewhat between AMD and NVIDIA with regard to cost. Of course, we could see an answer from AMD in the form of a price reduction from their R9 300-series or Fury/Nano. We can only hope!
We’ve already seen prices come down during various Black Friday sales on several GPUs, but the potential for a permanent price cut makes for interesting speculation if nothing else. Not to disparage the source, but no substantive evidence directly points to a plan by NVIDIA to lower prices on some 900-series cards, though it would make sense given the competition from AMD at various price points.
Here’s to lower prices going forward.
Subject: Graphics Cards | November 29, 2015 - 05:52 PM | Scott Michaud
Tagged: nvidia, cancer research, gpgpu
The University of Toronto has just received a $200,000 grant from the NVIDIA Foundation for research in identifying genetic links to cancer. The institution uses GPUs to learn and identify mutations that cause the disease, which is hoped to eventually help diagnose the attributes of cancer for a specific patient and provide exact treatments. Their “next step” is comparing their technology with data from patients.
I am not too informed on cancer research, so I will point to the article and its sources for specifics. The team state that the libraries they create will be freely available for other biomedical researchers. They don't mention specific licenses or anything, but the article is not really an appropriate venue for that sort of discussion.
Subject: Graphics Cards | November 29, 2015 - 05:25 PM | Scott Michaud
Tagged: amd, graphics driver, radeon, crimson
Users have been reporting that the latest AMD graphics driver, Radeon Software Crimson Edition, has been incorrectly setting fan speeds. Some users report that the driver spins up fans to 100% and others report that they slow down to 30% regardless of load.
Over the weekend, AMD acknowledged the issue and said that a fix is intended for Monday.
Some users also claim that the card will stick with that fan setting until it cooks itself. This seems odd to me, since GPUs (and CPUs of course) are now designed to down-volt if temperatures reach unsafe levels, and even cut power entirely if heat cannot be managed. We haven't really seen reports of graphics cards cooking themselves since the Radeon HD 5000 series implemented hardware protection in response to Furmark and OCCT. That said, the driver bug might somehow override these hardware protections.
In the meantime, you'll either want to keep an eye on your fan settings and reset them as necessary, or roll back to the previous driver. AMD didn't comment on the high fan speed issue that some were complaining about, so I'm not sure if this fix will address both issues.
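The hardware protection I described above is essentially a temperature-staged failsafe. A purely illustrative sketch of the logic (not AMD's actual firmware; thresholds are invented for the example):

```python
# Illustrative GPU thermal failsafe: as temperature climbs past staged
# limits, the hardware sheds heat regardless of what the driver sets
# the fans to. Thresholds here are hypothetical.
def throttle_action(temp_c, soft_limit=90, hard_limit=105):
    """Return the protective action a GPU might take at a given temperature."""
    if temp_c >= hard_limit:
        return "power off"   # last-resort cutoff if heat cannot be managed
    if temp_c >= soft_limit:
        return "down-volt"   # drop voltage/clocks to shed heat
    return "normal"

print(throttle_action(70))   # normal
print(throttle_action(95))   # down-volt
print(throttle_action(110))  # power off
```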
Subject: Graphics Cards | November 26, 2015 - 09:42 PM | Scott Michaud
Tagged: asus, 980 Ti
As most of our readers are well aware, the graphics market is dominated by two GPU vendors, both of which sell chips and reference designs to add-in board partners. ASUS is one of the oldest add-in board partners. According to their anniversary website, ASUS even produced a card that was based on the S3 ViRGE/DX graphics chipset all the way back in 1996.
To celebrate their 20th anniversary, although they don't exactly state when they start counting, ASUS has released a few high-end versions of Maxwell-based graphics cards. This one is the ASUS 20th Anniversary Gold Edition GTX 980 Ti. It comes with a base clock speed of 1266 MHz, which boosts up to 1367 MHz as needed. This is quite high, considering the reference card is clocked at 1000 MHz and boosts to 1189 MHz, although overclocking the 980 Ti is not too difficult to begin with. Ryan got up to 1465 MHz with a reference card. The Gold Edition GTX 980 Ti might go even higher with its enhanced cooling and power delivery, and it's designed for liquid nitrogen if you're that type of enthusiast.
Speaking of liquid nitrogen features, the card advertises a “Memory Defroster” feature that looks rather extreme. I can't say that I've ever seen a graphics card get covered in a visible layer of ice, but I've also never attached it to a reservoir of liquid with a temperature that's easier to write in Kelvin than Celsius.
Is this a legit problem? Or does this seem like “anti-dust shield” to everyone else too?
The ASUS 20th Anniversary Gold Edition GTX 980 Ti ships this month.