Corsair and MSI Introduce Hydro GFX Liquid Cooled GeForce GTX 980 Ti

Subject: Graphics Cards | September 16, 2015 - 09:00 PM |
Tagged: nvidia, msi, liquid cooler, GTX 980 Ti, geforce, corsair, AIO

A GPU with an attached closed-loop liquid cooler is a little more mainstream these days, with AMD's Fury X a high-profile example, and now a partnership between Corsair and MSI is bringing a very powerful NVIDIA option to market.

[Image: HydroGFX_01.png]

The new product is called the Hydro GFX, with NVIDIA's GeForce GTX 980 Ti supplying the GPU horsepower. Of course, the advantage of a closed-loop cooler is higher (sustained) clocks and lower temps/noise, which in turn means much better performance. Corsair explains:

"Hydro GFX consists of a MSI GeForce GTX 980 Ti card with an integrated aluminum bracket cooled by a Corsair Hydro Series H55 liquid cooler.

Liquid cooling keeps the card’s hottest, most critical components - the GPU, memory, and power circuitry - 30% cooler than standard cards while running at higher clock speeds with no throttling, boosting the GPU clock 20% and graphics performance up to 15%.

The Hydro Series H55 micro-fin copper cooling block and 120mm radiator expels the heat from the PC reducing overall system temperature and noise. The result is faster, smoother frame rates at resolutions of 4K and beyond at whisper quiet levels."

The factory overclock on this 980 Ti is pretty substantial out of the box, with a 1190 MHz base clock (stock 1000 MHz) and a 1291 MHz Boost clock (stock 1075 MHz). Memory is not overclocked (running at the default 7096 MHz), so there should still be some headroom for overclocking thanks to the air cooling for the RAM/VRM.

[Image: MSI-HYDRO-GFX-FRONT.png]

A look at the box - and the Corsair branding

Specs from Corsair:

  • NVIDIA GeForce GTX 980 Ti GPU with Maxwell 2.0 microarchitecture
  • 1190/1291 MHz base/boost clock
  • Clocked 20% faster than standard GeForce GTX 980 Ti cards for up to a 15% performance boost
  • Integrated liquid cooling technology keeps GPU, video RAM, and voltage regulator 30% cooler than standard cards
  • Corsair Hydro Series H55 liquid cooler with micro-fin copper block, 120mm radiator/fan
  • Memory: 6GB GDDR5, 7096 MHz, 384-bit interface
  • Outputs: 3x DisplayPort 1.2, HDMI 2.0, and Dual Link DVI
  • Power: 250 watts (600 watt PSU required)
  • Requirements: PCI Express 3.0 16x dual-width slot, 8+6-pin power connector, 600 watt PSU
  • Dimensions: 10.5 x 4.376 inches
  • Warranty: 3 years
  • MSRP: $739.99

As far as pricing and availability go, Corsair says the new card will debut in October in the U.S. with an MSRP of $739.99.

Source: Corsair

Report: TSMC To Produce NVIDIA Pascal On 16 nm FinFET

Subject: Graphics Cards | September 16, 2015 - 09:16 AM |
Tagged: TSMC, Samsung, pascal, nvidia, hbm, graphics card, gpu

According to a report by BusinessKorea, TSMC has been selected to produce the upcoming Pascal GPU after initially competing with Samsung for the contract.

[Image: PascalBoard.jpg]

Though some had considered the possibility of Samsung and TSMC sharing production (albeit on two different process nodes, as Samsung is on 14 nm FinFET), in the end the duties fall on TSMC's 16 nm FinFET alone, if this report is accurate. The move is not too surprising considering the longstanding position TSMC has maintained as a fab for GPU makers and Samsung's lack of experience in this area.

The report didn't make the release date for Pascal any clearer, naming only "next year" for the new HBM-powered GPU, which will also reportedly feature 16 GB of HBM2 memory for the flagship version of the card. This would potentially be the first GPU released at 16 nm (unless AMD has something in the works before Pascal's release), as all current AMD and NVIDIA GPUs are manufactured at 28 nm.

Manufacturer: AMD
Tagged: video, radeon, R9, Nano, hbm, Fiji, amd

Specs and Hardware

The AMD Radeon R9 Nano graphics card is unlike any product we have ever tested at PC Perspective. As I wrote and described to the best of my ability (without hardware in my hands) late last month, AMD is targeting a totally unique and different classification of hardware with this release. As a result, there is quite a bit of confusion, criticism, and concern about the Nano, and, to be upfront, not all of it is unwarranted.

[Image: IMG_3232.jpg]

After spending the past week with an R9 Nano here in the office, I am prepared to say this immediately: for users matching specific criteria, there is no other option that comes close to what AMD is putting on the table today. That specific demographic, though, is going to be pretty narrow, a fact that won't necessarily hurt AMD simply due to the obvious production limitations of the Fiji and HBM architectures.

At $650, the R9 Nano carries a flagship price, but it does so knowing full well that it will not compete in terms of raw performance against the likes of the GTX 980 Ti or AMD's own Radeon R9 Fury X. However, much like Intel has done with the Ultrabook and ULV platforms, AMD is attempting to carve out a new market, one looking for dense, modest-power GPUs in small form factors. Whether or not they have succeeded is what I am looking to determine today. Ride along with me as we journey on the roller coaster of a release that is the AMD Radeon R9 Nano.

Continue reading our review of the AMD Radeon R9 Nano!!

The premium-priced ASUS GTX 980 Ti STRIX DCIII OC certainly does perform

Subject: Graphics Cards | September 8, 2015 - 05:56 PM |
Tagged: STRIX DirectCU III OC, nvidia, factory overclocked, asus, 980 Ti

The ASUS GTX 980 Ti STRIX DCIII OC comes with the newest custom cooler from ASUS and a fairly respectable factory overclock of 1216MHz base, 1317MHz boost, and a 7.2GHz effective clock on the impressive 6GB of VRAM.  Once [H]ard|OCP had a chance to use GPU Tweak II, manual tweaking pushed those values to 1291MHz base and 1392MHz boost along with a higher VRAM clock; for those who prefer automated OCing, there are three modes, ranging from Silent to OC mode, that will instantly get the card ready to use.  With an MSRP of $690 and a street price usually over $700, you have to be ready to invest a lot of hard-earned cash in this card, but at 4K resolutions it does outperform the Fury X by a noticeable margin.

[Image: 1440939750KGqumUdZsR_1_1.jpg]

"Today we have the custom built ASUS GTX 980 Ti STRIX DirectCU III OC 6GB video card. It features a factory overclock, extreme cooling capabilities and state of the art voltage regulation. We compare it to the AMD Radeon R9 Fury, and overclock the ASUS GTX 980 Ti STRIX DCIII to its highest potential and look at some 4K playability."


Source: [H]ard|OCP

AMD Hosting R9 Nano Live Stream Tomorrow at 3pm ET

Subject: Graphics Cards | September 2, 2015 - 05:58 PM |
Tagged: video, r9 nano, Fiji, amd

Tomorrow afternoon, at 12pm PT / 3pm ET, AMD is hosting a live stream on its Twitch channel to show off the upcoming Radeon R9 Nano product we previewed last month and discuss it a little further.

I have no idea what is going to be discussed, how long it will run, or really what to expect at all beyond that. Apparently AMD is going to play some games on the R9 Nano as well as talk about mods that the small form factor enables.

Enjoy!

Source: AMD

IFA 2015: ASUS ROG Matrix GTX 980Ti Platinum Announced

Subject: Graphics Cards | September 2, 2015 - 11:43 AM |
Tagged: ROG, Matrix GTX 980Ti Platinum, matrix, IFA 2015, GTX 980 Ti, DirectCU II, asus

The GTX 980 Ti has received the Matrix treatment from ASUS, and the ROG GTX 980Ti Platinum graphics card features a DirectCU II cooler with the new plasma copper color scheme.

[Image: Matrix GTX 980 Ti Platinum.png]

In addition to the claimed 25% cooling advantage from the DirectCU II cooler, which also promises "3X less noise than reference cards", the Matrix Platinum is constructed with Super Alloy Power II components for maximum stability. An interesting addition is something called Memory Defroster, which ASUS explains:

"Memory Defroster is an ASUS-exclusive technology that takes overclocking to extremes – it defrosts the Matrix card's memory during subzero overclocking to ensure sustained stability."

The overbuilt ROG Matrix cards are meant to be overclocked, of course, and the GTX 980Ti Platinum offers convenience features such as a one-click "Safe Mode" to restore the card's BIOS to default settings, and a color-coded load indicator that "lets users check GPU load levels at a glance".

[Image: ROG Matrix GTX 980Ti Platinum.jpg]

The Matrix GTX 980 Ti Platinum also comes with a one-year XSplit Gamecaster premium license, which is a $99 value. So what is the total cost of this card? That hasn't been announced just yet, and availability is also TBA.

Source: ASUS

Manufacturer: PC Perspective

To the Max?

Much of the PC enthusiast internet, including our comments section, has been abuzz with “Asynchronous Shader” discussion. Normally, I would explain what it is and then outline the issues that surround it, but I would like to swap that order this time. Basically, the Ashes of the Singularity benchmark utilizes Asynchronous Shaders in DirectX 12, but they disable it (by Vendor ID) for NVIDIA hardware. They say that this is because, while the driver reports compatibility, “attempting to use it was an unmitigated disaster in terms of performance and conformance”.

[Image: epic-2015-ue4-dx12.jpg]

AMD's Robert Hallock claims that NVIDIA GPUs, including Maxwell, cannot support the feature in hardware at all, while all AMD GCN graphics cards do. NVIDIA has yet to respond to our requests for an official statement, although we haven't poked every one of our contacts yet. We will certainly update and/or follow up if we hear from them. For now though, we have no idea whether this is a hardware or software issue. Either way, it seems like more than just politics.

So what is it?

Simply put, Asynchronous Shaders allows a graphics driver to cram workloads into portions of the GPU that are idle, but not otherwise available. For instance, if a graphics task is hammering the ROPs, the driver would be able to toss an independent physics or post-processing task into the shader units alongside it. Kollock from Oxide Games used the analogy of HyperThreading, which allows two CPU threads to be executed on the same core at the same time, as long as it has the capacity for it.
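To make that concrete, here is a minimal C++ sketch of the mechanism as it appears to a DirectX 12 developer: work that is meant to overlap is submitted on a separate compute queue. This is purely illustrative (it is not Oxide's code), the function name is mine, and error handling is omitted.

```cpp
// Minimal sketch: exposing independent work to the GPU scheduler via two
// D3D12 command queues. Whether the dispatches actually execute
// concurrently with graphics is up to the driver and hardware.
#include <d3d12.h>

void CreateQueues(ID3D12Device* device,
                  ID3D12CommandQueue** graphicsQueue,
                  ID3D12CommandQueue** computeQueue)
{
    // The usual "direct" queue for draw calls (it accepts compute/copy too).
    D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
    gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(graphicsQueue));

    // A second, compute-only queue. Work submitted here has no implicit
    // ordering against the graphics queue, which is what gives the
    // scheduler the freedom to slot dispatches into shader units that the
    // graphics workload is leaving idle.
    D3D12_COMMAND_QUEUE_DESC cmpDesc = {};
    cmpDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    device->CreateCommandQueue(&cmpDesc, IID_PPV_ARGS(computeQueue));

    // Any cross-queue dependency (e.g. post-processing that needs the
    // G-buffer) must be expressed explicitly with ID3D12Fence
    // Signal()/Wait() pairs on the two queues.
}
```

The point of the sketch is the contract, not the code: the application declares that the work is independent, and it is then up to the driver and GPU to actually run it concurrently, which is exactly where the Maxwell-versus-GCN dispute sits.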

Kollock also notes that compute is becoming more important in the graphics pipeline, and that it is possible to bypass graphics altogether. The fixed-function bits may never go away, but it's possible that at least some engines will bypass them completely -- maybe even their engine, several years down the road.

I wonder who would pursue something so silly, whether for a product or even just research.

But, as always, you will not get an infinite amount of performance just by reducing waste. You are always bound by the theoretical limits of your components, and you cannot optimize past that (short of changing the workload itself, obviously). The interesting part is: you can measure that. You can absolutely observe how long a GPU is idle, and represent it as a percentage of a time-span (typically a frame).
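As a rough sketch of how that measurement is taken in practice, D3D12 exposes timestamp queries. The snippet below (a hypothetical helper of my own, with command-list setup, submission, and synchronization elided) brackets a frame's GPU work with two timestamps and expresses the elapsed GPU time as a fraction of the frame.

```cpp
// Hedged sketch: measuring how much of a frame the GPU spends executing
// submitted work, via D3D12 timestamp queries. Assumes the caller records
// the frame's work where indicated, then executes the list and waits on a
// fence before the readback buffer is mapped.
#include <d3d12.h>
#include <cstdint>

double QueueBusyFraction(ID3D12Device* device,
                         ID3D12CommandQueue* queue,
                         ID3D12GraphicsCommandList* cmdList,
                         ID3D12Resource* readback, // 2x uint64_t, READBACK heap
                         double frameSeconds)
{
    // A two-slot heap: one timestamp before the frame's work, one after.
    D3D12_QUERY_HEAP_DESC desc = {};
    desc.Type  = D3D12_QUERY_HEAP_TYPE_TIMESTAMP;
    desc.Count = 2;
    ID3D12QueryHeap* heap = nullptr;
    device->CreateQueryHeap(&desc, IID_PPV_ARGS(&heap));

    cmdList->EndQuery(heap, D3D12_QUERY_TYPE_TIMESTAMP, 0);
    // ... record the frame's draw/dispatch calls here ...
    cmdList->EndQuery(heap, D3D12_QUERY_TYPE_TIMESTAMP, 1);
    cmdList->ResolveQueryData(heap, D3D12_QUERY_TYPE_TIMESTAMP, 0, 2,
                              readback, 0);

    // ... close/execute the command list and fence-wait for the GPU here ...

    uint64_t* ticks = nullptr;
    D3D12_RANGE readRange = { 0, 2 * sizeof(uint64_t) };
    readback->Map(0, &readRange, reinterpret_cast<void**>(&ticks));

    uint64_t ticksPerSecond = 0;
    queue->GetTimestampFrequency(&ticksPerSecond);
    double busySeconds = double(ticks[1] - ticks[0]) / double(ticksPerSecond);

    readback->Unmap(0, nullptr);
    heap->Release();
    return busySeconds / frameSeconds; // e.g. 0.85 => ~85% of the frame
}
```

This only captures the wall-clock span of the work on one queue; per-unit occupancy (how idle the ROPs or shader units were within that span) is the domain of vendor profiling tools, but the principle is the same.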

And, of course, game developers profile GPUs from time to time...

According to Kollock, he has heard of some console developers getting up to 30% increases in performance using Asynchronous Shaders. Again, this is on console hardware, so the gain may be larger or smaller on the PC. In an informal chat with a developer at Epic Games (so a massive grain of salt is required), his late-night, ballpark, "totally speculative" guesstimate was that, on the Xbox One, the GPU could theoretically accept a maximum of ~10-25% more work in Unreal Engine 4, depending on the scene. He also said that memory bandwidth gets in the way, which Asynchronous Shaders would be fighting against. It is something that they are interested in and investigating, though.

[Image: AMD-2015-MantleAPI-slide1.png]

This is where I speculate on drivers. When Mantle was announced, I looked at its features and said "wow, this is everything that a high-end game developer wants, and a graphics developer absolutely does not". From the OpenCL-like multiple-GPU model taking much of the QA out of SLI and CrossFire, to the memory and resource binding management, this should make graphics drivers much simpler to write.

It might not be free, though. Graphics drivers might still have a bunch of games to play to make sure that work is stuffed through the GPU as tightly packed as possible. We might continue to see "Game Ready" drivers in the coming years, even though much of that burden has been shifted to the game developers. On the other hand, maybe these APIs will level the whole playing field and let all players focus on chip design and efficient ingestion of shader code. As always, painfully always, time will tell.

NVIDIA Releases 355.82 WHQL Drivers

Subject: Graphics Cards | August 31, 2015 - 07:19 PM |
Tagged: nvidia, graphics drivers, geforce, drivers

Unlike last week's 355.80 Hotfix, today's driver is fully certified by both NVIDIA and Microsoft (WHQL). According to users on GeForce Forums, this driver includes the hotfix changes, although I am still seeing a few users complain about memory issues under SLI. The general consensus seems to be that a number of bugs were fixed, and that driver quality is steadily increasing. This is also a “Game Ready” driver for Mad Max and Metal Gear Solid V: The Phantom Pain.

[Image: nvidia-2015-drivers-35582.png]

NVIDIA's GeForce Game Ready 355.82 WHQL Mad Max and Metal Gear Solid V: The Phantom Pain drivers (inhale, exhale, inhale) are now available for download at their website. Note that Windows 10 drivers are separate from the Windows 7 and Windows 8.x ones, so be sure not to take shortcuts when filling out the "select your driver" form. That, or just use GeForce Experience.

Source: NVIDIA

AMD Releases App SDK 3.0 with OpenCL 2.0

Subject: Graphics Cards, Processors | August 30, 2015 - 09:14 PM |
Tagged: amd, carrizo, Fiji, opencl, opencl 2.0

Apart from manufacturers with a heavy first-party focus, such as Apple and Nintendo, hardware is useless without developer support. In this case, AMD has updated their App SDK to include support for OpenCL 2.0, with code samples. It also updates the SDK for Windows 10, Carrizo, and Fiji, but it is not entirely clear how.

[Image: amd-new2.png]

That said, OpenCL is important to those two products. Fiji has a very high compute throughput compared to any other GPU at the moment, and its memory bandwidth is often even more important for GPGPU workloads. It is also useful for Carrizo, because parallel compute and HSA features are what make it a unique product. AMD has been creating first-party software and helping popular third-party developers such as Adobe, but a little support for the world at large could bring a killer application or two, especially from the open-source community.

The SDK has been available in pre-release form for quite some time now, but it has finally graduated out of beta. OpenCL 2.0 allows for work to be generated on the GPU itself, which is especially useful for tasks that depend on previous results, as they no longer need a round trip through the CPU.
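A minimal sketch of what that looks like in OpenCL C 2.0, with hypothetical kernel and helper names: the parent kernel enqueues a follow-up pass from the GPU itself via enqueue_kernel. The host would need to build with -cl-std=CL2.0 and create a default on-device queue (CL_QUEUE_ON_DEVICE | CL_QUEUE_ON_DEVICE_DEFAULT).

```c
/* Stand-in math for illustration only. */
float first_pass(float x)  { return x * x + 1.0f; }
float second_pass(float x) { return x * 0.5f; }

kernel void refine(global float* data, global int* flags, int n)
{
    int i = (int)get_global_id(0);
    if (i < n)
        data[i] = first_pass(data[i]);

    /* flags[0] is assumed to have been set by earlier work (e.g. a prior
       kernel). The point: both the decision and the launch of the next
       pass happen on the GPU, with no round trip to the CPU. */
    if (i == 0 && flags[0] != 0) {
        queue_t q = get_default_queue();
        enqueue_kernel(q, CLK_ENQUEUE_FLAGS_WAIT_KERNEL, /* run after parent */
                       ndrange_1D((size_t)n),
                       ^{
                           size_t j = get_global_id(0);
                           data[j] = second_pass(data[j]);
                       });
    }
}
```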

Source: AMD

NVIDIA 355.80 Hotfix for Windows 10 SLI Memory Issues

Subject: Graphics Cards | August 27, 2015 - 05:23 PM |
Tagged: windows 10, nvidia, geforce, drivers, graphics drivers

While GeForce Hotfix driver 355.80 is not certified, or even beta, I know that a lot of our readers have had issues with SLI in Windows 10. Especially in games like Battlefield 4, memory usage would expand until, apparently, a crash occurred. Since I run a single GPU, I have not experienced this issue and so I cannot comment on what happens. I just know that it was very common in the GeForce forums and in our comment section, so it was probably a big problem for many users.

[Image: nvidia-geforce.png]

If you are not experiencing this problem, then you probably should not install this driver. This is a hotfix that, as stated above, was released outside of NVIDIA's typical update process. You might experience new, unknown issues. Affected users, on the other hand, have the choice to install the fix now, which could very well be stable, or wait for a certified release later.

You can pick it up from NVIDIA's support site.

Source: NVIDIA