Video Perspective: NVIDIA GeForce GTX 1080 Preview

Subject: Graphics Cards | May 10, 2016 - 07:29 PM |
Tagged: video, pascal, nvidia, GTX 1080, gtx 1070, geforce

After the live-streamed event announcing the GeForce GTX 1080 and GTX 1070, Allyn and I spent a few minutes this afternoon going over the information as it was provided, discussing our excitement about the product, and coming to grips with what in the world a "Founder's Edition" even is.

gtx1080small.jpg

If you haven't yet done so, check out Scott's summary post on the GTX 1080 and GTX 1070 specs right here.

Galax GeForce GTX 1080 Pictured with Custom Cooler

Subject: Graphics Cards | May 10, 2016 - 04:06 PM |
Tagged: video card, reference cooler, pascal, nvidia, GTX 1080, graphics card, GeForce GTX 1080, Founder's Edition

The first non-reference GTX 1080 has been revealed courtesy of Galax, and the images (via VideoCardz.com) look quite different from the Founder's Edition.

GALAX-GeForce-GTX-1080-box.jpg

Galax GTX 1080 (Image Credit: VideoCardz)

The Galax card is the first custom implementation of the GTX 1080 we've seen, and as such the first example of a $599 variant. The Founder's Edition cards carry a $100 premium (and offer that really nice industrial design), but ultimately it's about performance, and the Galax card will presumably offer completely stock specifications.

GALAX-GeForce-GTX-1080-front.jpg

(Image Credit: VideoCardz)

Expect to see a deluge of aftermarket designs from EVGA, ASUS, MSI, and others soon enough - most of which will presumably use dual- or triple-fan coolers rather than a simple blower like this.

Source: VideoCardz

Microsoft updates Windows 10 UWP to support unlocked frame rates and G-Sync/FreeSync

Subject: Graphics Cards | May 10, 2016 - 12:11 PM |
Tagged: windows 10, windows, vrr, variable refresh rate, uwp, microsoft, g-sync, freesync

Back in March, Microsoft's Phil Spencer addressed some of the concerns over the Universal Windows Platform and PC gaming during his keynote address at the Build Conference. He noted that MS would "plan to open up VSync off, FreeSync, and G-Sync in May" and that the company would "allow modding and overlays in UWP applications" sometime further into the future. Well, it appears that Microsoft is on point with the May UWP update.

According to the MS DirectX Developer Blog, a Windows 10 update being pushed out today will enable UWP to support unlocked frame rates and variable refresh rate monitors in both G-Sync and FreeSync varieties. 

windows_8_logo-redux2.png

As a direct response to your feedback, we’re excited to announce the release today of new updates to Windows 10 that make gaming even better for game developers and gamers.

Later today, Windows 10 will be updated with two key new features:

  • Support for AMD’s FreeSync™ and NVIDIA’s G-SYNC™ in Universal Windows Platform games and apps
  • Unlocked frame rate for Universal Windows Platform (UWP) games and apps

Once applications take advantage of these new features, you will be able to play your UWP games with unlocked frame rates. We expect Gears of War: UE and Forza Motorsport 6: Apex to lead the way by adding this support in the very near future.

This OS update will be gradually rolled out to all machines, but you can download it directly here.

These updates to UWP join the already great support for unlocked frame rate and AMD and NVIDIA’s technologies in Windows 10 for classic Windows (Win32) apps.

Please keep the feedback coming!

Today's update won't automatically enable these features in UWP games like Gears of War or Quantum Break; they will still need to be updated individually by their developers. MS states that Gears of War and Forza will be the first to see these changes, but there is no mention of Quantum Break here, which is a game that could DEFINITELY benefit from the love of variable refresh rate monitors.

Microsoft describes an unlocked frame rate thus:

Vsync refers to the ability of an application to synchronize game rendering frames with the refresh rate of the monitor. When you use a game menu to “Disable vsync”, you instruct applications to render frames out of sync with the monitor refresh. Being able to render out of sync with the monitor refresh allows the game to render as fast as the graphics card is capable (unlocked frame rate), but this also means that “tearing” will occur. Tearing occurs when part of two different frames are on the screen at the same time.

I should note that these changes do not indicate that Microsoft is going to allow UWP games to go into an exclusive full screen mode - it still believes the disadvantages of that configuration outweigh the advantages. MS wants its overlays and a user's ability to easily Alt-Tab around Windows 10 to remain. Even though MS mentions screen tearing, I don't think that non-exclusive full screen applications will exhibit tearing.

gears.jpg

Gears of War on Windows 10 is a game that could definitely use an uncapped render rate and VRR support.

Instead, what is likely occurring, as we saw with the second iteration of the Ashes of the Singularity benchmark, is that the game will have an uncapped render rate internally, but frames rendered OVER 60 FPS (or the refresh rate of the display) will not be shown. This will improve perceived latency, as the game will be able to present the most up-to-date frame (with the most up-to-date input data) when the monitor is ready for a new refresh.

UPDATE 5/10/16 @ 4:31pm: Microsoft just got back to me and said that my above statement wasn't correct. Screen tearing will be able to occur in UWP games on Windows 10 after they integrate support for today's patch. Interesting!!

For G-Sync and FreeSync users, the ability to draw to the screen across a range of render rates offers an even further advantage: uncapped frame rates and no tearing, but also no "dropped" frames caused by running at odd ratios of a standard monitor's refresh rate.
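
For developers wondering what integrating this support might look like, Microsoft's post doesn't spell out the exact API surface, so treat the following as an assumption on my part: a minimal DXGI 1.5-style sketch of how an app can check for tearing support and then present out of sync with the monitor refresh. The specific flags and setup shown here are illustrative, not Microsoft's documented UWP path for this particular update.

// Hypothetical sketch (DXGI 1.5): query whether unconstrained ("tearing")
// presentation is available, then opt in when creating and presenting the
// swap chain. Factory, device, and swap-chain creation are elided for brevity.
#include <windows.h>
#include <dxgi1_5.h>

bool SupportsTearing(IDXGIFactory5* factory)
{
    BOOL allowTearing = FALSE;
    if (FAILED(factory->CheckFeatureSupport(DXGI_FEATURE_PRESENT_ALLOW_TEARING,
                                            &allowTearing, sizeof(allowTearing))))
    {
        allowTearing = FALSE; // older OS/runtime: fall back to vsync'd presents
    }
    return allowTearing == TRUE;
}

// If supported: include DXGI_SWAP_CHAIN_FLAG_ALLOW_TEARING in the swap chain's
// Flags at creation time, then present with a sync interval of 0:
//     swapChain->Present(0, DXGI_PRESENT_ALLOW_TEARING);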

I'm glad to see Microsoft taking these steps at a brisk pace after the feedback from the PC community early in the year. As for UWP's continued evolution, the blog post does tease that we should "expect to see some exciting developments on multiple GPUs in DirectX 12 in the near future."

Source: MSDN

EKWB Releases AMD Radeon Pro Duo Full-Cover Water Block

Subject: Graphics Cards, Cases and Cooling | May 10, 2016 - 08:55 AM |
Tagged: water cooling, radeon pro duo, radeon, pro duo, liquid cooling, graphics cards, gpu cooler, gpu, EKWB, amd

While AMD's latest dual-GPU powerhouse comes with a rather beefy-looking liquid cooling system out of the box, the team at EK Water Blocks have nonetheless created their own full-cover block for the Pro Duo, which is now available in a pair of versions.

EKFC-Radeon-Pro-Duo_NP_fill_1600.jpg

"Radeon™ has done it again by creating the fastest gaming card in the world. Improving over the Radeon™ R9 295 X2, the Radeon Pro Duo card is faster and uses the 3rd generation GCN architecture featuring asynchronous shaders enables the latest DirectX™ 12 and Vulkan™ titles to deliver amazing 4K and VR gaming experiences. And now EK Water Blocks made sure, the owners can get the best possible liquid cooling solution for the card as well!"

EKFC-Radeon-Pro-Duo_pair.png

Nickel version (top), Acetal+Nickel version (bottom)

The blocks include a single-slot I/O bracket, which will allow the Pro Duo to fit in many more systems (and allow even more of them to be installed per motherboard!).

EKFC-Radeon-Pro-Duo_NP_input_1600-1500x999.jpg

"EK-FC Radeon Pro Duo water block features EK unique central inlet split-flow cooling engine with a micro fin design for best possible cooling performance of both GPU cores. The block design also allows flawless operation with reversed water flow without adversely affecting the cooling performance. Moreover, such design offers great hydraulic performance, allowing this product to be used in liquid cooling systems using weaker water pumps.

The base is made of nickel-plated electrolytic copper while the top is made of quality POM Acetal or acrylic (depending on the variant). Screw-in brass standoffs are pre-installed and allow for safe installation procedure."

Suggested pricing is set at 155.95€ for the blocks (approx. $177 US), and they are "readily available for purchase through EK Webshop and Partner Reseller Network".

Source: EKWB

What Are NVIDIA GeForce GTX 1080 Founders Edition Cards?

Subject: Graphics Cards | May 9, 2016 - 05:00 PM |
Tagged: pascal, nvidia, GTX 1080, geforce

During the GeForce GTX 1080 launch event, NVIDIA announced two prices for the card. The new GPU has an MSRP of $599 USD, while a Founders Edition will be available for $699 USD. They did not really elaborate on the difference at the keynote, but they apparently clarified the product structure for the attending press.

nvidia-2016-dreamhack-1080-specs.png

According to GamersNexus, the “Founders Edition” is NVIDIA's new branding for their reference design, which has been updated with the GeForce GTX 1080. That is it. Normally, a reference design is pretty much bottom-tier in a product stack. Apart from AMD's water-cooling experiments, reference designs are relatively simple, single-fan blower coolers. NVIDIA's reference coolers, though, at least on their top-three-or-so models of any given generation, are pretty good. They are fairly quiet, effective, and aesthetically pleasing. When searching for a specific GPU online, you will often see a half-dozen entries based on this reference design from various AIB partners, and another half-dozen offerings from those same companies that look very different: MSI does their Twin Frozr thing, while ASUS has their DirectCU and Poseidon coolers.

If you want the $599 model, then, counter to what we've been conditioned to expect, you will not be buying NVIDIA's reference cooler. Those cards will come from AIB partners, which means that NVIDIA is (at least somewhat) allowing them to define the baseline product this time around. NVIDIA expects its reference cards to be intrinsically valuable, not just purchased because they show up first when you sort by lowest price.

This is interesting for a number of reasons. It wasn't too long ago that NVIDIA finally allowed AIB vendors to customize Titan-level graphics cards. Before that, NVIDIA's reference cooler was the only option. When they released control to their partners, we started to see water-cooled Titan Xs. There are two ways to look at it: either NVIDIA is relaxing their policy of controlling the user experience, or they want their own brand to be more than the cheapest offering of their part. Granted, the GTX 1080 is supposed to be their high-end, but still mainstream, offering.

It's just interesting to see this decision and rationalize it both as a release of control over user experience, and, simultaneously, as an increase of it.

Source: GamersNexus

AMD Releases Radeon Software Crimson Edition 16.5.1 Beta

Subject: Graphics Cards | May 9, 2016 - 02:05 PM |
Tagged: amd, graphics drivers, crimson

This is good to see. AMD has released Radeon Software Crimson Edition 16.5.1 to align with Forza Motorsport 6: Apex. The drivers are classified as Beta, and so is the game, coincidentally, which means 16.5.1 is not WHQL-certified. That doesn't have the weight that it used to, though. Its only listed feature is performance improvements with that title, especially for the R9 Fury X graphics card. Game-specific optimizations near launch appear to be getting consistent, and that was an area that AMD really needed to improve upon, historically.

amd-2015-crimson-logo.png

There are a handful of known issues, but they don't seem particularly concerning. The AMD Gaming Evolved overlay may crash in some titles, and The Witcher 3 may flicker in Crossfire, both of which could be annoying if they affect a game that you have been focusing on, but that's about it. There might be other issues (and improvements) that are not listed in the notes, but that's all I have to work with at the moment.

If you're interested in Forza 6: Apex, check out AMD's download page.

Source: AMD

NVIDIA GeForce GTX 1080 and GTX 1070 Announced

Subject: Graphics Cards | May 6, 2016 - 10:38 PM |
Tagged: pascal, nvidia, GTX 1080, gtx 1070, GP104, geforce

So NVIDIA has announced their next generation of graphics processors, based on the Pascal architecture. They introduced it as “a new king,” because they claim that it is faster than the Titan X, even at a lower power. It will be available “around the world” on May 27th for $599 USD (MSRP). The GTX 1070 was also announced, with slightly reduced specifications, and it will be available on June 10th for $379 USD (MSRP).

nvidia-2016-dreamhack-1080-photo.png

Pascal is manufactured on TSMC's 16nm process, which gives NVIDIA a lot of headroom. The GTX 1080 has fewer shaders than the Titan X, but runs them at a significantly higher clock rate. It also uses GDDR5X, which is an incremental improvement over GDDR5. We knew it wasn't going to use HBM2, like Big Pascal does, but it's interesting that they did not stick with old, reliable GDDR5.

nvidia-2016-dreamhack-1080-specs.png

nvidia-2016-dreamhack-1070-specs.png

The full specifications of the GTX 1080 are as follows:

  • 2560 CUDA Cores
  • 1607 MHz Base Clock (8.2 TFLOPs)
  • 1733 MHz Boost Clock (8.9 TFLOPs)
  • 8GB GDDR5X Memory at 320 GB/s (256-bit)
  • 180W Listed Power (Update: uses 1x 8-pin power)

We do not currently have the specifications of the GTX 1070, apart from it being 6.5 TFLOPs.
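
As a quick sanity check on those numbers (my arithmetic, not NVIDIA's): with the usual 2 FLOPS per CUDA core per clock, 2 × 2560 × 1.607 GHz works out to roughly 8.2 TFLOPs and 2 × 2560 × 1.733 GHz to roughly 8.9 TFLOPs, while 320 GB/s corresponds to a 256-bit bus running GDDR5X at 10 Gbps per pin (256 / 8 × 10).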

nvidia-2016-dreamhack-1080-stockphoto.png

It also looks like it has five display outputs: 3x DisplayPort 1.2, which are “ready” for 1.3 and 1.4, 1x HDMI 2.0b, and 1x DL-DVI. They do not explicitly state that all three DisplayPorts will run on the same standard, even though that seems likely. They also do not state whether all five outputs can be used simultaneously, but I hope that they can be.

nvidia-2016-dreamhack-newsli.png

They also have a new SLI bridge, called the SLI HB Bridge, that is supposed to have double the bandwidth of the Maxwell bridge. I'm not sure what that will mean for multi-GPU systems, but it will probably be something we'll find out about soon.

Source: NVIDIA

NVIDIA GeForce "GTX 1080" Benchmark Leaked

Subject: Graphics Cards | May 5, 2016 - 02:38 PM |
Tagged: nvidia, pascal, geforce

We're expecting a major announcement tomorrow... at some point. NVIDIA created a teaser website, called “Order of 10,” that is counting down to 1 PM EDT. On the same day, at 9 PM EDT, they will have a live stream on their Twitch channel. This doesn't seem to have been planned as far in advance as their Game24 event, which turned out to be a GTX 970 and GTX 980 launch party, but it wouldn't surprise me if it ended up being a similar format. I don't know for sure whether one or both events will be about the new mainstream Pascal, but it would be surprising if Friday ends (for North America) without a GPU launch of some sort.

nvidia-geforce.png

VideoCardz got a hold of a 3DMark Fire Strike Extreme benchmark, though. The card is registered as an 8GB part with a GPU clock of 1860 MHz. While a synthetic benchmark, let alone a single run of anything, isn't necessarily representative of overall performance, it scores slightly higher than a reasonably overclocked GTX 980 Ti (and way above a stock one). Specifically, this card yields a graphics score of 10102 on Fire Strike Extreme 1.1, while the 980 Ti achieved 7781 for us without an overclock.
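
For context, 10102 versus 7781 works out to an advantage of just under 30% over our stock 980 Ti result.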

We expected a substantial bump in clock rate, especially after GP100 was announced at GTC. That "full" Pascal chip was listed at a 1328 MHz base clock, with a 1480 MHz boost, and enterprise GPUs are often underclocked compared to consumer parts, stock to stock. As stated a few times, overclocking headroom could add a huge gap, too: the GTX 980 Ti was able to go from 1190 MHz to 1465 MHz. On the other hand, consumer Pascal's recorded 1860 MHz could itself be an overclock. We won't know until NVIDIA makes an official announcement. If it isn't, maybe we could see these new parts break 2 GHz in general use?

Source: VideoCardz

This is your AMD APU. This is your AMD APU on DX12; any questions?

Subject: Graphics Cards | April 29, 2016 - 07:09 PM |
Tagged: amd, dx12, async shaders

Earlier in the month [H]ard|OCP investigated the performance scaling that Intel processors display in DX12; now they have finished their tests on AMD processors.  These tests include async compute information, so be warned before venturing forth into the comments.  [H] tested an FX 8370 at 2GHz and 4.3GHz to see what effect this had on the games; the 3GHz tests did not add any value and were dropped in favour of these two frequencies.  There are some rather interesting results and discussion, so drop by for the details.

1460040024Wl2AHtuhmm_5_1.gif

"One thing that has been on our minds about the new DX12 API is its ability to distribute workloads better on the CPU side. Now that we finally have a couple of new DX12 games that have been released to test, we spend a bit of time getting to bottom of what DX12 might be able to do for you. And a couple sentences on Async Compute."

Source: [H]ard|OCP
Author:
Manufacturer: AMD

History and Specifications

The Radeon Pro Duo has had an interesting history. Originally shown as an unbranded, dual-GPU PCB during E3 2015 last June, the card was touted by AMD as the ultimate graphics card for both gamers and professionals. At that time, the company thought that an October launch was feasible, but that clearly didn’t work out. When pressed for information in the Oct/Nov timeframe, AMD said that they had delayed the product into Q2 2016 to better correlate with the launch of the VR systems from Oculus and HTC/Valve.

During a GDC press event in March, AMD finally unveiled the Radeon Pro Duo brand, but they were also walking back the idea of the dual-Fiji beast being aimed at the gaming crowd, even partially. Instead, the company talked up the benefits for game developers and content creators, such as using its 8192 stream processors for offline rendering, or aiding game devs in implementing and improving multi-GPU support for upcoming games.

05.jpg

Anyone that pays attention to the graphics card market can see why AMD would make this positional shift with the Radeon Pro Duo. The Fiji architecture is on the way out, with Polaris due in June by AMD’s own proclamation. At $1500, the Radeon Pro Duo will stand in stark contrast to the prices of the Polaris GPUs this summer, and it sits well above the price of any part in NVIDIA's GeForce line. And, though CrossFire has made drastic improvements over the last several years thanks to new testing techniques, the ecosystem for multi-GPU is going through a major shift with both DX12 and VR bearing down on it.

So yes, the Radeon Pro Duo has both RADEON and PRO right there in the name. What’s a respectable PC Perspective graphics reviewer supposed to do with a card like that if it finds its way into the office? Test it, of course! I’ll take a look at a handful of recent games as well as a new feature that AMD has integrated with 3ds Max, called FireRender, to showcase some of the professional chops of the new card.

Continue reading our review of the AMD Radeon Pro Duo!!