Teaser - GTX 1080s Tested in SLI - EVGA SC ACX 3.0

Subject: Graphics Cards | May 27, 2016 - 02:58 PM |
Tagged: sli, review, led, HB, gtx, evga, Bridge, ACX 3.0, 3dmark, 1080

...so the time when we manage to get multiple GTX 1080s in the office here would, of course, be when Ryan is on the other side of the planet. We are also missing some other semi-required items, like the new 'SLI HB' bridge, but we should be able to test on an older LED bridge at 2560x1440 (below the resolution where the newer style is absolutely necessary to avoid a sub-optimal experience). That said, surely the storage guy can squeeze out a quick run of 3DMark to check out the SLI scaling, right?

config.png

For this testing, I spent just a few minutes with EVGA's OC Scanner to take advantage of GPU Boost 3.0. I cranked the power limits and fans on both cards, ending up at a stable overclock hovering right around 2 GHz on the pair. I'm leaving out the details of the second GPU we got in for testing, as it may be under NDA and I can't confirm that with all of the people to ask in an opposite time zone (pfft - it has an aftermarket cooler). Then I simply ran Fire Strike (25x14) with SLI disabled:

3dmark-single.png

...and then with it enabled:

3dmark-sli.png

That works out to a 92% gain in 3DMark score, with the FPS figures jumping by almost exactly 2x. Now remember, this is by no means a controlled test, and the boss will be cranking out a much more detailed piece with Frame Rating results galore in the future, but for now I just wanted to get some quick figures out to the masses for consumption, and confirmation that GTX 1080 SLI is a doable thing, even on an older bridge.
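For anyone who wants to check the math, scaling is just the ratio of the two graphics scores. A throwaway sketch, with placeholder values standing in for the actual scores in the screenshots above:

```cpp
// SLI scaling = (SLI score / single-GPU score - 1) * 100.
#include <cstdio>

int main() {
    const double single_score = 10000.0;  // placeholder single-GPU score
    const double sli_score    = 19200.0;  // placeholder SLI score (a 92% gain)
    printf("Scaling: +%.0f%%\n", (sli_score / single_score - 1.0) * 100.0);
}
```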

*edit* here's another teaser:

heaven-oc-.png

Aftermarket coolers are a good thing, as evidenced by the 47°C on that second GPU, but the Founders Edition blower-style cooler is still able to get past 2 GHz just fine. Both cards had their fans at max speed in this example.

*edit again*

I was able to confirm we are not under NDA on the additional card we received. Behold:

IMG_1692.jpg

IMG_1698.jpg

This is the EVGA Superclocked edition with their ACX 3.0 cooler.

More to follow (yes, again)!

AMD Releases Radeon Software Crimson Edition 16.5.3

Subject: Graphics Cards | May 24, 2016 - 09:46 PM |
Tagged: vulkan, radeon, overwatch, graphics driver, Crimson Edition 16.5.3, crimson, amd

AMD has released new drivers for Overwatch (and more) with Radeon Software Crimson Edition 16.5.3.

amd-2015-crimson-logo.png

"Radeon Software Crimson Edition is AMD's revolutionary new graphics software that delivers redesigned functionality, supercharged graphics performance, remarkable new features, and innovation that redefines the overall user experience. Every Radeon Software release strives to deliver new features, better performance and stability improvements."

AMD lists these highlights for Radeon Software Crimson Edition 16.5.3:

Support for:

  • Total War: Warhammer
  • Overwatch
  • Dota 2 (with Vulkan API)

New AMD Crossfire profile available for:

  • Total War: Warhammer
  • Overwatch

The driver is available from AMD from the following direct links:

The full release notes, with fixed and known issues, are available at the source link here.

Source: AMD

NVIDIA Releases 368.22 Drivers for Overwatch

Subject: Graphics Cards | May 24, 2016 - 06:36 PM |
Tagged: nvidia, graphics drivers

Yesterday, NVIDIA released WHQL-certified drivers to align with the release of Overwatch. This version, 368.22, is the first public release of the 367 branch. Pascal is not listed in the documentation as a supported product, so it's unclear whether this will be the launch driver for it. The GTX 1080 comes out on Friday, but two drivers in a week would not be unprecedented for NVIDIA.

While NVIDIA has not communicated this too well, 368.22 will not install on Windows Vista. If you are still using that operating system, then you will not be able to upgrade your graphics drivers past 365.19. 367-branch (and later) drivers will require Windows 7 and up.

nvidia-geforce.png

Before I continue, I should note that I've experienced some issues getting these drivers to install through GeForce Experience. Long story short, it took two attempts (with a clean install each time) to end up with a successful boot into 368.22. I didn't try the standalone installer that you can download from NVIDIA's website; if the second attempt using GeForce Experience had failed, then I would have. That said, after I installed it, it seemed to work out well for me with my GTX 670.

While NVIDIA is a bit behind on documentation, the driver also rolls in other fixes. There were some GPU compute developers who had crashes and other failures in certain OpenCL and CUDA applications, which are now compatible with 368.22. I've also noticed that my taskbar hasn't been sliding around on its own anymore, but I've only been using the driver for a handful of hours.

You can get GeForce 368.22 drivers from GeForce Experience, but you might want to download the standalone installer (or skip a version or two if everything works fine).

Source: NVIDIA

AMD Gains Market Share in Q1'16 Discrete GPUs

Subject: Graphics Cards | May 18, 2016 - 06:11 PM |
Tagged: amd, radeon, market share

AMD sent out a note yesterday with some interesting news about how the graphics card market fared in Q1 of 2016. First, let's get to the bad news: sales of new discrete graphics solutions, in both mobile and desktop, dropped by 10.2% quarter to quarter, a decrease that was slightly higher than expected. Though details weren't given in the announcement or data I have from Mercury Research, it seems likely that expectations of upcoming new GPUs from both NVIDIA and AMD contributed to the slowdown of sales on some level.

Despite the shrinking pie, AMD grabbed more of it in Q1 2016 than it had in Q4 of 2015, gaining 3.2 points of total market share for a total of 29.4%. That's a nice gain in a short few months, but it's still much lower than Radeon has been as recently as 2013. That 3.2% gain includes both notebook and desktop discrete GPUs, but let's break it down further.

|                  | Q1'16 Desktop | Q1'16 Desktop Change | Q1'16 Mobile | Q1'16 Mobile Change |
| ---------------- | ------------- | -------------------- | ------------ | ------------------- |
| AMD              | 22.7%         | +1.8%                | 38.7%        | +7.3%               |
| NVIDIA (assumed) | ~77%          | -1.8%                | ~61%         | -7.3%               |

AMD's gain in the desktop graphics card market was 1.8%, up to 22.7% of the market, while the notebook discrete graphics share jumped an astounding 7.3% to 38.7% of the total market.

NVIDIA obviously still has a commanding lead in desktop add-in cards with more than 75% of the market, but Mercury Research believes that a renewed focus on driver development, virtual reality, and the creation of the Radeon Technologies Group contributed to the increases in share for AMD.

Q3 of 2016 is where I think the future looks most interesting. Not only will NVIDIA's newly released GeForce GTX 1080 and upcoming GTX 1070 have had time to settle in, but the upcoming Polaris architecture based cards from AMD will have a chance to stretch their legs and attempt to continue pushing the needle in the upward direction.

New AMD Polaris 10 and Polaris 11 GPU Details Emerge

Subject: Editorial, Graphics Cards | May 18, 2016 - 01:18 PM |
Tagged: rumor, Polaris, opinion, HDMI 2.0, gpu, gddr5x, GDDR5, GCN, amd, 4k

While NVIDIA's Pascal has held the spotlight in the news recently, it is not the only new GPU architecture debuting this year. AMD will soon be bringing its Polaris-based graphics cards to market for notebooks and mainstream desktop users. While several different code names have been thrown around for these new chips, they are generally referred to as Polaris 10 and Polaris 11. AMD's Raja Koduri stated in an interview with PC Perspective that the numbers used in the naming scheme hold no special significance, but eventually Polaris will be used across the entire performance lineup (low end to high end graphics).

Naturally, there are going to be many rumors and leaks as the launch gets closer. In fact, Tech Power Up recently obtained a number of interesting details about AMD's plans for Polaris-based graphics in 2016, including specifications and which areas of the market each chip is aimed at.

AMD GPU Roadmap.jpg

Citing the usual "industry sources" familiar with the matter (take that for what it's worth, but the specifications do not seem out of the realm of possibility), Tech Power Up revealed that there are two lines of Polaris-based GPUs that will be made available this year. Polaris 10 will allegedly occupy the mid-range (mainstream) graphics option in desktops as well as being the basis for high end gaming notebook graphics chips. On the other hand, Polaris 11 will reportedly be a smaller chip aimed at thin-and-light notebooks and mainstream laptops.

Now, for the juicy bits of the leak: the rumored specifications!

AMD's "Polaris 10" GPU will feature 32 compute units (CUs) which TPU estimates – based on the assumption that each CU still contains 64 shaders on Polaris – works out to 2,048 shaders. The GPU further features a 256-bit memory interface along with a memory controller supporting GDDR5 and GDDR5X (though not at the same time heh). This would leave room for cheaper Polaris 10 derived products with less than 32 CUs and/or cheaper GDDR5 memory. Graphics cards would have as much as 8GB of memory initially clocked at 7 Gbps. Reportedly, the full 32 CU GPU is rated at 5.5 TFLOPS of single precision compute power and runs at a TDP of no more than 150 watts.

Compared to the existing Hawaii-based R9 390X, the upcoming R9 400 series Polaris 10 GPU has fewer shaders and less memory bandwidth. The memory is clocked 1 GHz higher, but the GDDR5X memory bus is half the width of the 390X's 512-bit GDDR5 bus, which results in 224 GB/s of memory bandwidth for Polaris 10 versus 384 GB/s on Hawaii. The R9 390X has a slight edge in compute performance at 5.9 TFLOPS versus Polaris 10's 5.5 TFLOPS; however, the Polaris 10 GPU uses much less power and easily wins at performance per watt. It nearly matches Hawaii's single precision compute performance at almost half the power, which is impressive if it holds true!
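To put numbers on that perf-per-watt claim, here is a quick back-of-envelope check using the rumored figures (a rough sketch from the numbers above, not anything AMD has published):

```cpp
// Performance per watt from the rumored specs: peak GFLOPS divided by TDP.
#include <cstdio>

int main() {
    const double hawaii    = 5900.0 / 275.0;  // R9 390X: 5.9 TFLOPS at 275 W
    const double polaris10 = 5500.0 / 150.0;  // Polaris 10 (rumored): 5.5 TFLOPS at 150 W
    printf("R9 390X:    %.1f GFLOPS/W\n", hawaii);                // ~21.5
    printf("Polaris 10: %.1f GFLOPS/W (%.1fx Hawaii)\n",
           polaris10, polaris10 / hawaii);                        // ~36.7, ~1.7x
}
```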

|                  | R9 390X          | R9 390           | R9 380          | R9 400-Series "Polaris 10" |
| ---------------- | ---------------- | ---------------- | --------------- | -------------------------- |
| GPU Code name    | Grenada (Hawaii) | Grenada (Hawaii) | Antigua (Tonga) | Polaris 10                 |
| GPU Cores        | 2816             | 2560             | 1792            | 2048                       |
| Rated Clock      | 1050 MHz         | 1000 MHz         | 970 MHz         | ~1343 MHz                  |
| Texture Units    | 176              | 160              | 112             | ?                          |
| ROP Units        | 64               | 64               | 32              | ?                          |
| Memory           | 8GB              | 8GB              | 4GB             | 8GB                        |
| Memory Clock     | 6000 MHz         | 6000 MHz         | 5700 MHz        | 7000 MHz                   |
| Memory Interface | 512-bit          | 512-bit          | 256-bit         | 256-bit                    |
| Memory Bandwidth | 384 GB/s         | 384 GB/s         | 182.4 GB/s      | 224 GB/s                   |
| TDP              | 275 watts        | 275 watts        | 190 watts       | 150 watts (or less)        |
| Peak Compute     | 5.9 TFLOPS       | 5.1 TFLOPS       | 3.48 TFLOPS     | 5.5 TFLOPS                 |
| MSRP (current)   | ~$400            | ~$310            | ~$199           | unknown                    |

Note: Polaris GPU clocks estimated using the assumption of 5.5 TFLOPS peak compute and an accurate shader count. (Thanks Scott.)
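For the curious, here is a minimal sketch of how that estimate works, assuming (as the note above does) that GCN still does 2 FLOPs per shader per clock via fused multiply-add:

```cpp
// Estimated GPU clock = peak FLOPS / (2 FLOPs per shader per clock * shader count).
#include <cstdio>

static double clock_mhz(double tflops, double shaders) {
    return tflops * 1e12 / (2.0 * shaders) / 1e6;
}

int main() {
    printf("Polaris 10: ~%.0f MHz\n", clock_mhz(5.5, 2048));  // ~1343 MHz
    printf("Polaris 11: ~%.0f MHz\n", clock_mhz(2.5, 896));   // ~1395 MHz
}
```

The same formula produces the ~1395 MHz figure in the Polaris 11 table further down.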

Another comparison that can be made is to the Radeon R9 380 which is a Tonga-based GPU with similar TDP. In this matchup, the Polaris 10 based chip will – at a slightly lower TDP – pack in more shaders, twice the amount of faster clocked memory with 23% more bandwidth, and provide a 58% increase in single precision compute horsepower. Not too shabby!

Likely, a good portion of these increases are made possible by the move to a smaller process node and FinFET "tri-gate" like transistors on the Samsung/GlobalFoundries 14LPP FinFET manufacturing process, though AMD has also made some architecture tweaks and hardware additions to the GCN 4.0 based processors. A brief high level introduction is said to be made today in a webinar for their partners (though AMD has come out and said preemptively that no technical nitty-gritty details will be divulged yet). (Update: Tech Altar summarized the partner webinar. Unfortunately there were no major reveals, other than that AMD will not be limiting AIB partners from pushing for the highest factory overclocks they can get.)

Moving on from Polaris 10 for a bit, Polaris 11 is rumored to be a smaller GCN 4.0 chip that will top out at 14 CUs (estimated 896 shaders/stream processors) and 2.5 TFLOPS of single precision compute power. These chips aimed at mainstream and thin-and-light laptops will have 50W TDPs and will be paired with up to 4GB of GDDR5 memory. There is apparently no GDDR5X option for these, which makes sense at this price point and performance level. The 128-bit bus is a bit limiting, but this is a low end mobile chip we are talking about here...

|                  | R7 370                               | R7 400 Series "Polaris 11" |
| ---------------- | ------------------------------------ | -------------------------- |
| GPU Code name    | Trinidad (Pitcairn)                  | Polaris 11                 |
| GPU Cores        | 1024                                 | 896                        |
| Rated Clock      | 925 MHz base (975 MHz boost)         | ~1395 MHz                  |
| Texture Units    | 64                                   | ?                          |
| ROP Units        | 32                                   | ?                          |
| Memory           | 2 or 4GB                             | 4GB                        |
| Memory Clock     | 5600 MHz                             | ? MHz                      |
| Memory Interface | 256-bit                              | 128-bit                    |
| Memory Bandwidth | 179.2 GB/s                           | ? GB/s                     |
| TDP              | 110 watts                            | 50 watts                   |
| Peak Compute     | 1.89 TFLOPS                          | 2.5 TFLOPS                 |
| MSRP (current)   | ~$140 (less after rebates and sales) | ?                          |

Note: Polaris GPU clocks estimated using the assumption of 2.5 TFLOPS peak compute and an accurate shader count. (Thanks Scott.)

Fewer details were unveiled concerning Polaris 11, as you can see from the chart above. From what we know so far, it should be a promising successor to the R7 370 series: even with the narrower memory bus and lower shader count, the GPU should be clocked much higher (and M series mobile variants may have more shaders than the 370-and-lower mobile series) at a much lower TDP, for at least equivalent if not a decent increase in performance. The lower power usage in particular will be hugely welcomed in mobile devices, where it should ideally translate to longer battery life under the same workloads. I picked the R7 370 as the comparison because it has 4 gigabytes of memory, does not have that many more shaders, and, being a desktop chip, is one readers may be more widely familiar with. Polaris 11 also appears to sit between the R7 360 and R7 370 in terms of shader count and other features, yet it is allegedly going to be faster than both of them while using (at least on paper) less than half the power.

Of course these are still rumors until AMD makes Polaris officially, well, official with a product launch. The claimed specifications appear reasonable though, and based on that there are a few important takeaways and thoughts I have.

amd-2016-polaris-blocks.jpg

The first thing on my mind is that AMD is taking an interesting direction here. NVIDIA has chosen to start its new generation at the top, announcing "big Pascal" GP100 and actually launching the GP104-based GTX 1080 (one of its highest end consumer chips/cards) yesterday, then introducing lower end products over the course of the year. AMD has opted for the opposite approach: it will start closer to the lower end with a mainstream notebook chip and a high end notebook/mainstream desktop GPU (Polaris 11 and 10, respectively), then flesh out its product stack over the next year (remember, Raja Koduri stated Polaris and GCN 4 would be used across the entire product stack), building up with bigger and higher end GPUs over time and finally topping off with its highest end consumer (and professional) GPUs based on "Vega" in 2017.

This means that for some time after both architectures launch, AMD's and NVIDIA's newest GPUs will not be directly competing with each other. I'm not sure if that was planned by either company or is just how it worked out as each followed its own GPU philosophy (I suspect the latter). Eventually they should meet in the middle (maybe late this year?) with a mid-range desktop graphics card, and it will be interesting to see how they stack up at similar price points and hardware levels. Then, once "Vega" based GPUs hit (sadly, probably in time for NVIDIA's big Pascal to launch, heh; I'm not sure if Vega is only a Fury X replacement or goes beyond that to compete with a 1080 Ti or even GP100), we should see GCN 4 on the new smaller process node square up against NVIDIA and its 16nm Pascal products across the entire lineup. Which will have the better performance, and which will win out in power usage, performance/watt, and performance/$? All questions I wish I knew the answers to, but sadly do not!

Speaking of price and performance/$... Polaris is actually looking pretty good so far at hitting much lower TDPs and power usage targets while delivering at least similar performance, if not a good bit more. Both AMD and NVIDIA appear to be bringing out GPUs with bigger technological improvements in performance and power usage than I expected (these die shrinks have really helped, even though that trend isn't really going to continue from here on out...). I hope that AMD can at least match NVIDIA in these areas at the mid range, even if they do not have a high end GPU coming out soon (not until sometime after these cards launch, and not really until Vega, the high end GCN successor). On paper, at least, the leaked GPUs look good. My only worry is pricing, which I think is going to make or break these cards. AMD will need to price them competitively and aggressively to ensure their adoption and success.

I hope that doing the rollout this way (starting with lower end chips) helps AMD to iron out the new smaller process node, and that they are able to get good yields so that they can be aggressive with pricing here and eventually at the high end!

I am looking forward to more information on AMD's Polaris architecture and the graphics cards based on it!

Also read:

I will admit that I am not 100% up on all the rumors and I apologize for that. With that said, I would love to hear what your thoughts are on AMD's upcoming GPUs and what you think about these latest rumors!

NVIDIA Releases Full Specifications for GTX 1070

Subject: Graphics Cards | May 18, 2016 - 12:49 PM |
Tagged: nvidia, pascal, gtx 1070, 1070, gtx, GTX 1080, 16nm FF+, TSMC, Founder's Edition

Several weeks ago when NVIDIA announced the new GTX 1000 series of products, we were given a quick glimpse of the GTX 1070.  This upper-midrange card is to carry a $379 price tag in retail form while the "Founder's Edition" will hit the $449 mark.  Today NVIDIA released the full specifications of this card on their website.

Interest in the GTX 1070 is incredibly high because of the potential performance of this card vs. the previous generation.  Price is also a big consideration here, as it is far easier to raise $379 than it is to make the jump to the GTX 1080 and shell out $599 once non-Founder's Edition cards are released.  The GTX 1070 has all of the same features as the GTX 1080, but it takes a hit when it comes to clockspeed and shader units.

gtx_1070_launch.png

The GTX 1070 is a Pascal based part that is fabricated on TSMC's 16nm FF+ node.  It shares the same overall transistor count as the GTX 1080, but it is partially disabled.  The GTX 1070 contains 1920 CUDA cores as compared to the 2560 cores of the 1080; essentially one full GPC is disabled to reach that number.  The clockspeeds take a hit as well compared to the full GTX 1080.  The base clock for the 1070 is still an impressive 1506 MHz, and boost reaches 1683 MHz.  This combination of shader count and clockspeed likely makes it a little bit faster than the older GTX 980 Ti.  The rated TDP for the card is 150 watts with a single 8-pin PCI-E power connector.  This means that there should be some decent headroom when it comes to overclocking this card.  Due to binning and yields, we may not see 2+ GHz overclocks with these cards, especially if NVIDIA cut down the power delivery system as compared to the GTX 1080.  Time will tell on that one.

The memory technology that NVIDIA is using for this card is not the cutting edge GDDR5X or HBM, but rather the tried and true GDDR5.  8 GB of this memory sits on a 256-bit bus, but it is running at a very, very fast 8 Gbps.  This gives overall bandwidth in the 256 GB/sec region.  When we combine this figure with the memory compression techniques implemented in the Pascal architecture, we can see that the GTX 1070 will not be bandwidth starved.  We have no information on whether this generation of products will mirror what we saw with the previous generation GTX 970 in terms of disabled memory controllers and the 3.5 GB/500 MB memory split due to that unique memory subsystem.
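As a quick sanity check on that figure, GDDR5 bandwidth is just the bus width in bytes multiplied by the effective per-pin data rate:

```cpp
// Memory bandwidth = (bus width / 8) bytes per transfer * effective data rate.
#include <cstdio>

int main() {
    const double bus_bits = 256.0;  // GTX 1070 memory interface
    const double gbps     = 8.0;    // effective GDDR5 data rate per pin
    printf("Bandwidth: %.0f GB/s\n", (bus_bits / 8.0) * gbps);  // 256 GB/s
}
```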

gtx_1070_launch2.png

Beyond those things, the GTX 1070 is identical to the GTX 1080 in terms of DirectX features, display specifications, decoding support, double bandwidth SLI, etc.  There is an obvious amount of excitement for this card considering its potential performance and price point.  These supposedly will be available in the Founder's Edition release on June 10 for the $449 MSRP.  I know many people are considering using these cards in SLI to deliver performance for half the price of last year's GTX 980 Ti.  From all indications, these cards will be a significant upgrade for anyone using GTX 970s in SLI.  With greater access to monitors that hit 4K as well as Surround Gaming, this could be a solid purchase for anyone looking to step up their game in these scenarios.

Source: NVIDIA

The 1080 roundup, Pascal in all its glory

Subject: Graphics Cards | May 17, 2016 - 06:22 PM |
Tagged: nvidia, pascal, video, GTX 1080, gtx, GP104, geforce, founders edition

Yes, that's right: if you felt Ryan and Al somehow missed something in our review of the new GTX 1080, or you felt an obvious pro-Matrox bias was showing, here are the other reviews you can pick and choose from.  Start off with [H]ard|OCP, who also tested Ashes of the Singularity and Doom as well as the old favourite Battlefield 4.  Doom really showed itself off as a next generation game, its Nightmare mode scoffing at any GPU with less than 5GB of VRAM available and pushing the single 1080 hard.  Read on to see how the competition stacked up ... or wait for the 1440 to come out some time in the future.

1463427458xepmrLV68z_1_8_l.jpg

"NVIDIA's next generation video card is here, the GeForce GTX 1080 Founders Edition video card based on the new Pascal architecture will be explored. We will compare it against the GeForce GTX 980 Ti and Radeon R9 Fury X in many games to find out what it is capable of."

Here are some more Graphics Card articles from around the web:

Graphics Cards

Source: [H]ard|OCP

Pairing up the R9 380X

Subject: Graphics Cards | May 16, 2016 - 03:52 PM |
Tagged: amd, r9 380x, crossfire

A pair of R9 380Xs will cost you around $500, a bit more than $100 less than a single GTX 980 Ti and on par with or a little less expensive than a straight GTX 980.  You have likely seen these cards compared, but how often have you seen them pitted against a pair of GTX 960s, which cost a little bit less than two 380X cards?  [H]ard|OCP decided it was worth investigating, perhaps for those who currently have a single one of these cards and are considering a second if the price is right.  The results are very tight; overall the two setups performed very similarly, with some games favouring AMD and others NVIDIA. Check out the full review here.

1462161482JkHsFf1A5H_1_1.jpg

"We are evaluating two Radeon R9 380X video cards in CrossFire against two GeForce GTX 960 video cards in a SLI arrangement. We will overclock each setup to its highest, to experience the full gaming benefit each configuration has to offer. Additionally we will compare a Radeon R9 380 CrossFire setup to help determine the best value."

Here are some more Graphics Card articles from around the web:

Graphics Cards

Source: [H]ard|OCP

PCPer Live! GeForce GTX 1080 Live Stream with Tom Petersen (Now with free cards!)

Subject: General Tech, Graphics Cards | May 16, 2016 - 03:19 PM |
Tagged: video, tom petersen, pascal, nvidia, live, GTX 1080, gtx, GP104, geforce

Our review of the GeForce GTX 1080 is LIVE NOW, so be sure you check that out before today's live stream!!

Get yourself ready, it’s time for another GeForce GTX live stream hosted by PC Perspective’s Ryan Shrout and NVIDIA’s Tom Petersen. The general details about consumer Pascal and the GeForce GTX 1080 graphics card are already official and, based on the traffic to our stories and the response on Twitter and YouTube, there is more than a little pent-up excitement.

gtx10801.jpg

On hand to talk about the new graphics card and answer questions about technologies in the GeForce family (including Pascal, SLI, VR, Simultaneous Multi-Projection, and more) will be Tom Petersen, well known in our community. We have done quite a few awesome live streams with Tom in the past; check them out if you haven't already.

pcperlive.png

NVIDIA GeForce GTX 1080 Live Stream

10am PT / 1pm ET - May 17th

PC Perspective Live! Page

Need a reminder? Join our live mailing list!

The event will take place Tuesday, May 17th at 1pm ET / 10am PT at http://www.pcper.com/live. There you’ll be able to catch the live video stream as well as use our chat room to interact with the audience, asking questions for me and Tom to answer live. 

Tom has a history of being both informative and entertaining and these live streaming events are always full of fun and technical information that you can get literally nowhere else. Previous streams have produced news as well – including statements on support for Adaptive Sync, release dates for displays and first-ever demos of triple display G-Sync functionality. You never know what’s going to happen or what will be said!

UPDATE! UPDATE! UPDATE! This just in fellow gamers: Tom is going to be providing two GeForce GTX 1080 graphics cards to give away during the live stream! We won't be able to ship them until availability hits at the end of May, but two lucky viewers of the live stream will be able to get their paws on the fastest graphics card we have ever tested!! Make sure you are scheduled to be here on May 17th at 10am PT / 1pm ET!!

DSC00218.jpg

Don't you want to win me??!?

If you have questions, please leave them in the comments below and we'll look through them just before the start of the live stream. Of course you'll be able to tweet us questions @pcper, and we'll be keeping an eye on the IRC chat as well for more inquiries. What do you want to know and hear from Tom or me?

So join us! Set your calendar for this coming Tuesday at 1pm ET / 10am PT and be here at PC Perspective to catch it. If you are a forgetful type of person, sign up for the PC Perspective Live mailing list that we use exclusively to notify users of upcoming live streaming events including these types of specials and our regular live podcast. I promise, no spam will be had!

AMD Releases Radeon Software Crimson Edition 16.5.2 Beta

Subject: Graphics Cards | May 11, 2016 - 11:53 PM |
Tagged: amd, crimson, graphics drivers

For the second time this month, hence the version number, AMD has released a driver to coincide with a major game release. This one is for DOOM, which will be available on Friday. Like the previous driver, which was aligned with Forza, it has not been WHQL-certified. That's okay, though. NVIDIA's Game Ready drivers didn't strive for WHQL certification until just recently, and, even then, WHQL certification doesn't mean what it used to.

amd-2015-crimson-logo.png

But yeah, apart from game-specific optimizations for DOOM, 16.5.2 has a few extra reasons to be used. If you play Battleborn, which launched on May 3rd, then AMD has added a new CrossFire profile for that game. They have also fixed at least eleven issues (plus however many undocumented ones). It comes with ten known issues, but none of them seem particularly troubling; most are CrossFire-related.

You can pick up the driver at AMD's website.

Source: AMD

NVIDIA Limits GTX 1080 SLI to Two Cards

Subject: Graphics Cards | May 11, 2016 - 10:57 PM |
Tagged: sli, nvidia, GTX 1080, GeForce GTX 1080

Update (May 12th, 1:45am): Okay, so the post, which was originally from Chris Bencivenga, Support Manager at EVGA, has been deleted. A screenshot of it is attached below. Note that Jacob Freeman later posted that "More info about SLI support will be coming soon, please stay tuned." I guess this means take the news with a grain of salt until official word can be released.

evga-2016-gtx1080sli.png

Original Post Below

According to EVGA, NVIDIA will not support three- and four-way SLI on the GeForce GTX 1080. They state that, even if you use the old, multi-way connectors, it will still be limited to two-way. The new SLI connector (called SLI HB) will provide better performance “than 2-way SLI did in the past on previous series”. This suggests that the old SLI connectors can be used with the GTX 1080, although with less performance and only for two cards.

nvidia-2016-dreamhack-newsli.png

This is the only hard information that we have on this change, but I will elaborate a bit based on what I know about graphics APIs. Basically, SLI (and CrossFire) simplify the multi-GPU load-balancing problem so that it is easy to handle from within the driver, without the game's involvement. In DirectX 11 and earlier, the game cannot interface with the driver in that way at all. That does not apply to DirectX 12 and Vulkan, however. In those APIs, you will be able to explicitly load-balance by querying all graphics devices (including APUs) and splitting the commands yourself.
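To make that concrete, here is a minimal sketch (mine, not from NVIDIA's or EVGA's documentation) of what "querying all graphics devices" looks like in DirectX 12: enumerate every adapter via DXGI, create a device on each, and from then on the application owns the job of splitting work between them:

```cpp
// Enumerate all physical adapters (discrete GPUs and APUs alike) and create an
// explicit D3D12 device on each. Windows only; link against dxgi.lib and d3d12.lib.
#include <dxgi.h>
#include <d3d12.h>
#include <cstdio>
#include <vector>

int main() {
    IDXGIFactory1* factory = nullptr;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory)))) return 1;

    std::vector<ID3D12Device*> devices;
    IDXGIAdapter1* adapter = nullptr;
    for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i) {
        DXGI_ADAPTER_DESC1 desc;
        adapter->GetDesc1(&desc);
        // Skip software rasterizers; we only want physical hardware.
        if (desc.Flags & DXGI_ADAPTER_FLAG_SOFTWARE) { adapter->Release(); continue; }

        ID3D12Device* device = nullptr;
        if (SUCCEEDED(D3D12CreateDevice(adapter, D3D_FEATURE_LEVEL_11_0,
                                        IID_PPV_ARGS(&device)))) {
            devices.push_back(device);  // one explicit device per GPU
            printf("Found GPU: %ls\n", desc.Description);
        }
        adapter->Release();
    }
    // A game doing explicit multi-adapter would now record command lists per
    // device, balance the frame workload itself, and copy results between GPUs.
    return 0;
}
```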

Even though a few DirectX 12 games exist, it's still unclear how SLI and CrossFire will be utilized in the context of DirectX 12 and Vulkan. DirectX 12 has a tier of multi-GPU called "implicit multi-adapter," which allows the driver to load balance. How will this decision affect those APIs? Could inter-card bandwidth even be offloaded via SLI HB in DirectX 12 and Vulkan at all? Not sure yet (but you would think that they would at least add a Vulkan extension). You should be able to use three GTX 1080s in titles that manually load-balance across three or more mismatched GPUs, but only in those games.

If a game relies upon SLI, which is everything DirectX 11 and earlier, then you cannot. You definitely cannot.

Source: EVGA

Here Come the Maxwell Rebates

Subject: Graphics Cards | May 10, 2016 - 07:50 PM |
Tagged: nvidia, maxwell, GTX 980 Ti, GTX 970, GTX 1080, geforce

The GTX 1080 announcement is starting to ripple into retailers, leading to price cuts on the previous generation, Maxwell-based SKUs. If you were interested in the GTX 1080, or an AMD graphics card of course, then you probably want to keep waiting. That said, you can take advantage of the discounts if you want a cheap VR-ready GPU, or if you already have a Maxwell card that could use a cheap SLI buddy.

evga-2016-980ti-new.jpg

This tip comes from a NeoGAF thread. Microcenter has several cards on sale, but EVGA seems to have the biggest price cuts. This 980 Ti has dropped from $750 USD down to $499.99 (or $474.99 if you'll promise yourself to do that mail-in rebate). That's a whole third of its price slashed, and it puts the card about a hundred dollars under the GTX 1080. Granted, it will also be slower than the GTX 1080, with 2GB less video RAM, but saving $100 might be worth that trade for you.

They highlight two other EVGA cards as well. Both deals are slight variations on the GTX 970 line, and they are available for $250 and $255 ($225 and $230 after mail-in rebate).

Source: NeoGAF

Video Perspective: NVIDIA GeForce GTX 1080 Preview

Subject: Graphics Cards | May 10, 2016 - 07:29 PM |
Tagged: video, pascal, nvidia, GTX 1080, gtx 1070, geforce

After the live streamed event announcing the GeForce GTX 1080 and GTX 1070, Allyn and I spent a few minutes this afternoon going over the information as it was provided, discussing our excitement about the product and coming to grips with what in the world a "Founder's Edition" even is.

gtx1080small.jpg

If you haven't yet done so, check out Scott's summary post on the GTX 1080 and GTX 1070 specs right here.

Galax GeForce GTX 1080 Pictured with Custom Cooler

Subject: Graphics Cards | May 10, 2016 - 04:06 PM |
Tagged: video card, reference cooler, pascal, nvidia, GTX 1080, graphics card, GeForce GTX 1080, Founder's Edition

The first non-reference GTX 1080 has been revealed courtesy of Galax, and the images (via VideoCardz.com) look a lot different than the Founder's Edition.

GALAX-GeForce-GTX-1080-box.jpg

Galax GTX 1080 (Image Credit: VideoCardz)

The Galax card is the first custom implementation of the GTX 1080 we've seen, and as such the first example of a $599 variant. The Founder's Edition cards carry a $100 premium (and offer that really nice industrial design), but ultimately it's about performance, and the Galax card will presumably offer completely stock specifications.

GALAX-GeForce-GTX-1080-front.jpg

(Image Credit: VideoCardz)

Expect to see a deluge of aftermarket cooling from EVGA, ASUS, MSI, and others soon enough - most of which will presumably be using a dual or triple-fan cooler, and not a simple blower like this.

Source: VideoCardz

Microsoft updates Windows 10 UWP to support unlocked frame rates and G-Sync/FreeSync

Subject: Graphics Cards | May 10, 2016 - 12:11 PM |
Tagged: windows 10, windows, vrr, variable refresh rate, uwp, microsoft, g-sync, freesync

Back in March, Microsoft's Phil Spencer addressed some of the concerns over the Unified Windows Platform and PC gaming during his keynote address at the Build Conference. He noted that MS would "plan to open up VSync off, FreeSync, and G-Sync in May" and the company would "allow modding and overlays in UWP applications" sometime further into the future. Well, it appears that Microsoft is on point with the May UWP update.

According to the MS DirectX Developer Blog, a Windows 10 update being pushed out today will enable UWP to support unlocked frame rates and variable refresh rate monitors in both G-Sync and FreeSync varieties. 

windows_8_logo-redux2.png

As a direct response to your feedback, we’re excited to announce the release today of new updates to Windows 10 that make gaming even better for game developers and gamers.

Later today, Windows 10 will be updated with two key new features:

Support for AMD’s Freesync™ and NVIDIA’s G-SYNC™ in Universal Windows Platform games and apps

Unlocked frame rate for Universal Windows Platform (UWP) games and apps

Once applications take advantage of these new features, you will be able to play your UWP games with unlocked frame rates. We expect Gears of War: UE and Forza Motorsport 6: Apex to lead the way by adding this support in the very near future.

This OS update will be gradually rolled out to all machines, but you can download it directly here.

These updates to UWP join the already great support for unlocked frame rate and AMD and NVIDIA’s technologies in Windows 10 for classic Windows (Win32) apps.

Please keep the feedback coming!

Today's update won't automatically enable these features in UWP games like Gears of War or Quantum Break; they will still need to be updated individually by the developer. MS states that Gears of War and Forza will be the first to see these changes, but there is no mention of Quantum Break here, which is a game that could DEFINITELY benefit from the love of variable refresh rate monitors.

Microsoft describes an unlocked frame rate thus:

Vsync refers to the ability of an application to synchronize game rendering frames with the refresh rate of the monitor. When you use a game menu to “Disable vsync”, you instruct applications to render frames out of sync with the monitor refresh. Being able to render out of sync with the monitor refresh allows the game to render as fast as the graphics card is capable (unlocked frame rate), but this also means that “tearing” will occur. Tearing occurs when part of two different frames are on the screen at the same time.

I should note that these changes do not indicate that Microsoft is going to allow UWP games to go into an exclusive full screen mode - it still believes the disadvantages of that configuration outweigh the advantages. MS wants its overlays and a user's ability to easily Alt-Tab around Windows 10 to remain. Even though MS mentions screen tearing, I don't think that non-exclusive full screen applications will exhibit tearing.

gears.jpg

Gears of War on Windows 10 is a game that could definitely use an uncapped render rate and VRR support.

Instead, what is likely occurring, as we saw with the second iteration of the Ashes of the Singularity benchmark, is that the game will have an uncapped render rate internally, but frames rendered OVER 60 FPS (or the refresh rate of the display) will not be shown. This will improve perceived latency, as the game will be able to present the most up to date frame (with the most up to date input data) when the monitor is ready for a new refresh.

UPDATE 5/10/16 @ 4:31pm: Microsoft just got back to me and said that my above statement wasn't correct. Screen tearing will be able to occur in UWP games on Windows 10 after they integrate support for today's patch. Interesting!!

For G-Sync and FreeSync users, the ability to draw to the screen at any range of render rates will offer an even further advantage: uncapped frame rates and no tearing, but also no "dropped" frames caused by running at off-ratios of a standard monitor's refresh rate.

I'm glad to see Microsoft taking these steps at a brisk pace after the feedback from the PC community early in the year. As for UWP's continued evolution, the blog post does tease that we should "expect to see some exciting developments on multiple GPUs in DirectX 12 in the near future."

Source: MSDN

EKWB Releases AMD Radeon Pro Duo Full-Cover Water Block

Subject: Graphics Cards, Cases and Cooling | May 10, 2016 - 08:55 AM |
Tagged: water cooling, radeon pro duo, radeon, pro duo, liquid cooling, graphics cards, gpu cooler, gpu, EKWB, amd

While AMD's latest dual-GPU powerhouse comes with a rather beefy-looking liquid cooling system out of the box, the team at EK Water Blocks have nonetheless created their own full-cover block for the Pro Duo, which is now available in a pair of versions.

EKFC-Radeon-Pro-Duo_NP_fill_1600.jpg

"Radeon™ has done it again by creating the fastest gaming card in the world. Improving over the Radeon™ R9 295 X2, the Radeon Pro Duo card is faster and uses the 3rd generation GCN architecture featuring asynchronous shaders enables the latest DirectX™ 12 and Vulkan™ titles to deliver amazing 4K and VR gaming experiences. And now EK Water Blocks made sure, the owners can get the best possible liquid cooling solution for the card as well!"

EKFC-Radeon-Pro-Duo_pair.png

Nickel version (top), Acetal+Nickel version (bottom)

The blocks include a single-slot I/O bracket, which will allow the Pro Duo to fit in many more systems (and allow even more of them to be installed per motherboard!).

EKFC-Radeon-Pro-Duo_NP_input_1600-1500x999.jpg

"EK-FC Radeon Pro Duo water block features EK unique central inlet split-flow cooling engine with a micro fin design for best possible cooling performance of both GPU cores. The block design also allows flawless operation with reversed water flow without adversely affecting the cooling performance. Moreover, such design offers great hydraulic performance, allowing this product to be used in liquid cooling systems using weaker water pumps.

The base is made of nickel-plated electrolytic copper while the top is made of quality POM Acetal or acrylic (depending on the variant). Screw-in brass standoffs are pre-installed and allow for safe installation procedure."

Suggested pricing is set at 155.95€ for the blocks (approx. $177 US), and they are "readily available for purchase through EK Webshop and Partner Reseller Network".

Source: EKWB

What Are NVIDIA GeForce GTX 1080 Founders Edition Cards?

Subject: Graphics Cards | May 9, 2016 - 05:00 PM |
Tagged: pascal, nvidia, GTX 1080, geforce

During the GeForce GTX 1080 launch event, NVIDIA announced two prices for the card. The new GPU has an MSRP of $599 USD, while a Founders Edition will be available for $699 USD. They did not really elaborate on the difference at the keynote, but they apparently clarified the product structure for the attending press.

nvidia-2016-dreamhack-1080-specs.png

According to GamersNexus, the “Founders Edition” is NVIDIA's new branding for their reference design, which has been updated with the GeForce GTX 1080. That is it. Normally, a reference design is pretty much bottom-tier in a product stack. Apart from AMD's water-cooling experiments, reference designs are relatively simple, single-fan blower coolers. NVIDIA's reference coolers, though, at least on the top-three-or-so models of any given generation, are pretty good. They are fairly quiet, effective, and aesthetically pleasing. When searching for a specific GPU online, you will often see a half-dozen listings based on the reference design from various AIB partners, and another half-dozen very different offerings from those same companies. MSI does their Twin Frozr thing, while ASUS has their DirectCU and Poseidon coolers.

If you want the $599 model, then, counter to what we've been conditioned to expect, you will not be buying NVIDIA's reference cooler. Those cards will come from AIB partners, which means that NVIDIA is (at least somewhat) allowing them to define the minimum product this time around. NVIDIA expects reference cards to be intrinsically valuable, not just purchased because they rank highest on a “sort by lowest price” metric.

This is interesting for a number of reasons. It wasn't too long ago that NVIDIA finally allowed AIB vendors to customize Titan-level graphics cards. Before that, NVIDIA's reference cooler was the only option. When they released control to their partners, we started to see water cooled Titan Xs. There are two ways to look at it: either NVIDIA is relaxing their policy of controlling user experience, or they want their own brand to be more than the cheapest offering of their part. Granted, the GTX 1080 is supposed to be their high-end, but still mainstream, offering.

It's just interesting to see this decision and rationalize it both as a release of control over user experience, and, simultaneously, as an increase of it.

Source: GamersNexus

AMD Releases Radeon Software Crimson Edition 16.5.1 Beta

Subject: Graphics Cards | May 9, 2016 - 02:05 PM |
Tagged: amd, graphics drivers, crimson

This is good to see. AMD has released Radeon Software Crimson Edition 16.5.1 to align with Forza Motorsport 6: Apex. The drivers are classified as Beta, and so is the game, coincidentally, which means 16.5.1 is not WHQL-certified. That doesn't have the weight that it used to, though. Its only listed feature is performance improvements with that title, especially for the R9 Fury X graphics card. Game-specific optimizations near launch appear to be getting consistent, and that was an area that AMD really needed to improve upon, historically.

amd-2015-crimson-logo.png

There are a handful of known issues, but they don't seem particularly concerning. The AMD Gaming Evolved overlay may crash in some titles, and The Witcher 3 may flicker in Crossfire, both of which could be annoying if they affect a game that you have been focusing on, but that's about it. There might be other issues (and improvements) that are not listed in the notes, but that's all I have to work on at the moment.

If you're interested in Forza 6: Apex, check out AMD's download page.

Source: AMD

NVIDIA GeForce GTX 1080 and GTX 1070 Announced

Subject: Graphics Cards | May 6, 2016 - 10:38 PM |
Tagged: pascal, nvidia, GTX 1080, gtx 1070, GP104, geforce

So NVIDIA has announced their next generation of graphics processors, based on the Pascal architecture. They introduced it as “a new king,” because they claim that it is faster than the Titan X, even at a lower power. It will be available “around the world” on May 27th for $599 USD (MSRP). The GTX 1070 was also announced, with slightly reduced specifications, and it will be available on June 10th for $379 USD (MSRP).

nvidia-2016-dreamhack-1080-photo.png

Pascal is created on the 16nm process at TSMC, which gives NVIDIA a lot of headroom. The GTX 1080 has fewer shaders than the Titan X, but a significantly higher clock rate. It also uses GDDR5X, which is an incremental improvement over GDDR5. We knew it wasn't going to use HBM2.0, like Big Pascal does, but it's interesting that they did not stick with old, reliable GDDR5.

nvidia-2016-dreamhack-1080-specs.png

nvidia-2016-dreamhack-1070-specs.png

The full specifications of the GTX 1080 are as follows:

  • 2560 CUDA Cores
  • 1607 MHz Base Clock (8.2 TFLOPs)
  • 1733 MHz Boost Clock (8.9 TFLOPs)
  • 8GB GDDR5X Memory at 320 GB/s (256-bit)
  • 180W Listed Power (Update: uses 1x 8-pin power)

We do not currently have the specifications of the GTX 1070, apart from it being 6.5 TFLOPs.
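For reference, those TFLOPs ratings fall straight out of core count times clock, assuming the standard 2 FLOPs per CUDA core per clock (one fused multiply-add); a quick check:

```cpp
// Peak single precision = CUDA cores * 2 FLOPs/clock * clock speed.
#include <cstdio>

int main() {
    const double cores = 2560.0;  // GTX 1080
    printf("Base:  %.1f TFLOPs\n", cores * 2.0 * 1607e6 / 1e12);  // ~8.2
    printf("Boost: %.1f TFLOPs\n", cores * 2.0 * 1733e6 / 1e12);  // ~8.9
}
```

The same formula, applied in reverse to the GTX 1070's quoted 6.5 TFLOPs, would back out its core count and clock once either figure is confirmed.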

nvidia-2016-dreamhack-1080-stockphoto.png

It also looks like it has five display outputs: 3x DisplayPort 1.2, which are “ready” for 1.3 and 1.4, 1x HDMI 2.0b, and 1x DL-DVI. They do not explicitly state that all three DisplayPorts will run on the same standard, even though that seems likely. They also do not state whether all five outputs can be used simultaneously, but I hope that they can be.

nvidia-2016-dreamhack-newsli.png

They also have a new SLI bridge, called the SLI HB Bridge, that is supposed to have double the bandwidth of Maxwell's. I'm not sure what that will mean for multi-GPU systems, but it will probably be something we'll find out about soon.

Source: NVIDIA

NVIDIA GeForce "GTX 1080" Benchmark Leaked

Subject: Graphics Cards | May 5, 2016 - 02:38 PM |
Tagged: nvidia, pascal, geforce

We're expecting a major announcement tomorrow... at some point. NVIDIA created a teaser website, called “Order of 10,” that is counting down to 1PM EDT. On the same day, at 9PM EDT, they will have a live stream on their Twitch channel. This doesn't appear to have been planned as far ahead as their Game24 event, which turned out to be a GTX 970 and GTX 980 launch party, but it wouldn't surprise me if it ended up being a similar format. I don't know for sure whether one or both events will be about the new mainstream Pascal, but it would be surprising if Friday ends (for North America) without a GPU launch of some sort.

nvidia-geforce.png

VideoCardz got a hold of 3DMark Fire Strike Extreme benchmarks, though. The card is registered as an 8GB part with a GPU clock of 1860 MHz. While a synthetic benchmark, let alone a single run of anything, isn't necessarily representative of overall performance, it scores slightly higher than a reasonably overclocked GTX 980 Ti (and way above a stock one). Specifically, this card yields a graphics score of 10102 on Fire Strike Extreme 1.1, while the 980 Ti achieved 7781 for us without an overclock.

We expected a substantial bump in clock rate, especially after GP100 was announced at GTC. This “full” Pascal chip was listed at a 1328 MHz clock, with a 1480 MHz boost. Enterprise GPUs are often underclocked compared to consumer parts, stock to stock. As stated a few times, overclocking could add a huge gap, too; the GTX 980 Ti was able to go from 1190 MHz to 1465 MHz. On the other hand, consumer Pascal's recorded 1860 MHz could itself be an overclock. We won't know until NVIDIA makes an official release. If not, maybe we could see these new parts break 2 GHz in general use?
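As a quick aside, the gaps quoted in the last two paragraphs work out like this (plain arithmetic on the numbers above):

```cpp
// Relative gaps from the quoted Fire Strike Extreme scores and clock speeds.
#include <cstdio>

int main() {
    // Leaked card's graphics score vs our stock GTX 980 Ti result
    printf("Leaked card vs stock 980 Ti: +%.0f%%\n",
           (10102.0 / 7781.0 - 1.0) * 100.0);  // ~30% higher score
    // The GTX 980 Ti overclocking headroom mentioned above
    printf("980 Ti OC headroom: +%.0f%%\n",
           (1465.0 / 1190.0 - 1.0) * 100.0);   // ~23% higher clock
}
```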

Source: VideoCardz