ASUS Announces ROG Strix GTX 1080

Subject: Graphics Cards | May 28, 2016 - 05:00 PM |
Tagged: asus, ROG, strix, GTX 1080, nvidia

The Founders Edition versions of the GTX 1080 went on sale yesterday, and we're beginning to see the third-party variants being announced. In this case, the ASUS ROG Strix is a three-fan design that uses the company's DirectCU III heatsink. More interestingly, ASUS decided to increase how much power this card can draw by adding an extra, six-pin PCIe power connector (totaling 8-pin + 6-pin). A Founders Edition card only requires a single, eight-pin connection on top of the 75W provided by the PCIe slot itself. The extra connector gives the ROG Strix another 75W of headroom, raising the maximum board power from 225W to 300W.
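
If you want to sanity check that math, here is a minimal sketch using the standard PCIe connector ratings (75W from the slot, 150W per 8-pin, 75W per 6-pin); these are the usual electromechanical limits, not anything ASUS has published:

```python
# Rough sanity check of the board-power math above (standard PCIe connector
# ratings, not an ASUS spec sheet).
PCIE_SLOT_W = 75   # power available from the x16 slot
EIGHT_PIN_W = 150  # 8-pin PCIe power connector
SIX_PIN_W = 75     # 6-pin PCIe power connector

founders_edition = PCIE_SLOT_W + EIGHT_PIN_W            # 225 W
rog_strix = PCIE_SLOT_W + EIGHT_PIN_W + SIX_PIN_W       # 300 W

print(f"Founders Edition budget: {founders_edition} W")
print(f"ROG Strix budget:        {rog_strix} W (+{rog_strix - founders_edition} W headroom)")
```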


Some of this power will be used for its on-card RGB LED lighting, but I doubt that was the reason for the extra 75W of headroom. The lights follow the edges of the card, acting like hats and bow-ties for the three fans. (Yes, you will never unsee that now.) The shroud is also modular, and ASUS provides the data for enthusiasts to 3D print their own modifications (although the warranty doesn't cover damage caused by this level of customization).


As for actual performance, the card naturally comes with an overclock out of the box. The default “Gaming Mode” has a 1759 MHz base clock with an 1898 MHz boost. You can flip this into “OC Mode” for a slight, double-digit MHz increase to 1784 MHz base and 1936 MHz boost. Either mode is significantly higher than the Founders Edition, which has a base clock of 1607 MHz that boosts to 1733 MHz. The extra power will likely help manual overclocks, but it will come down to the “silicon lottery” whether your specific chip is less affected by manufacturing variation. We also don't know yet whether the Pascal architecture, and the 16nm process it relies upon, has any physical limits that will increasingly resist overclocks past a certain frequency.
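
For a rough sense of how those factory clocks compare against the Founders Edition, here is a quick sketch that derives the percentage uplifts from the advertised numbers (the percentages are computed here, not quoted from ASUS):

```python
# Advertised (base, boost) clocks in MHz; uplift is relative to the Founders Edition.
clocks = {
    "Founders Edition": (1607, 1733),
    "Gaming Mode":      (1759, 1898),
    "OC Mode":          (1784, 1936),
}

fe_base, fe_boost = clocks["Founders Edition"]
for mode, (base, boost) in clocks.items():
    print(f"{mode:17s} base {base} MHz ({(base / fe_base - 1) * 100:+.1f}%), "
          f"boost {boost} MHz ({(boost / fe_boost - 1) * 100:+.1f}%)")
```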

Pricing and availability have not yet been announced.

Source: ASUS

Teaser - GTX 1080's Tested in SLI - EVGA SC ACX 3.0

Subject: Graphics Cards | May 27, 2016 - 02:58 PM |
Tagged: sli, review, led, HB, gtx, evga, Bridge, ACX 3.0, 3dmark, 1080

The time when we manage to get multiple GTX 1080s in the office would, of course, be when Ryan is on the other side of the planet. We are also missing some other semi-required items, like the new SLI HB bridge, but we should be able to test on an older LED bridge at 2560x1440 (below the resolution where the newer style is absolutely necessary to avoid a sub-optimal experience). That said, surely the storage guy can squeeze out a quick run of 3DMark to check out the SLI scaling, right?


For this testing, I spent just a few minutes with EVGA's OC Scanner to take advantage of GPU Boost 3.0. I cranked the power limits and fans on both cards, ending up at a stable overclock hovering right around 2 GHz on the pair. I'm leaving out the details of the second GPU we got in for testing, as it may be under NDA and I can't confirm that since everyone I would ask is in an opposite time zone (pfft - it has an aftermarket cooler). Then I simply ran Fire Strike (25x14) with SLI disabled:


...and then with it enabled:


That works out to a 92% gain in 3DMark score, with the FPS figures jumping by almost exactly 2x. Now remember, this is by no means a controlled test, and the boss will be cranking out a much more detailed piece with Frame Rating results galore in the future, but for now I just wanted to get some quick figures out to the masses for consumption, and confirmation that 1080 SLI is a doable thing, even on an older bridge.
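
For anyone wanting to reproduce the scaling figure from their own runs, the math is simply the SLI score divided by the single-GPU score; the scores below are hypothetical placeholders, not our actual results (those are in the screenshots):

```python
# Placeholder scores for illustration only; substitute your own 3DMark results.
single_gpu_score = 10000   # hypothetical single-GPU graphics score
sli_score = 19200          # hypothetical SLI graphics score

scaling = sli_score / single_gpu_score - 1
print(f"SLI scaling: {scaling:.0%} gain over a single card")  # ~92% in our run
```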

*edit* here's another teaser:


Aftermarket coolers are a good thing, as evidenced by the 47°C of that second GPU, but the Founders Edition blower-style cooler is still able to get past 2 GHz just fine. Both cards had their fans at max speed in this example.

*edit again*

I was able to confirm we are not under NDA on the additional card we received. Behold:



This is the EVGA Superclocked edition with their ACX 3.0 cooler.

More to follow (yes, again)!

Manufacturer: NVIDIA

First, Some Background

NVIDIA's Rumored GP102
Based on two rumors, NVIDIA seems to be planning a new GPU, called GP102, that sits between GP100 and GP104. This would change how their product stack has flowed since Fermi and Kepler. GP102's performance, both single-precision and double-precision, will likely signal NVIDIA's product plans going forward.
  • GP100's ideal 1 : 2 : 4 FP64 : FP32 : FP16 ratio is inefficient for gaming (see the sketch after this list)
  • GP102 either extends GP104's gaming lead or bridges GP104 and GP100
  • If GP102 is a bigger GP104, the future is unclear for smaller GPGPU devs
    • That is, unless GP100 can be significantly up-clocked for gaming.
  • If GP102 matches (or outperforms) GP100 in gaming, and has better than 1 : 32 double-precision performance, then GP100 would be the first time that NVIDIA designed an enterprise-only, high-end GPU.
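
To make the ratio shorthand concrete, here is a minimal sketch of what a 1 : 2 : 4 (FP64 : FP32 : FP16) ratio versus a 1 : 32 double-precision ratio means for throughput; the FP32 figure is a made-up placeholder, not a GP100 or GP102 specification:

```python
# Throughput at each precision, given an FP32 rate and the ratios relative to it.
def rates_from_fp32(fp32_tflops, fp64_fraction, fp16_multiple):
    """Return (FP64, FP32, FP16) throughput in TFLOPS."""
    return fp32_tflops * fp64_fraction, fp32_tflops, fp32_tflops * fp16_multiple

fp32 = 10.0  # hypothetical FP32 TFLOPS, for illustration only
# 1 : 2 : 4 (FP64 : FP32 : FP16) -> FP64 at half rate, FP16 at double rate
print(rates_from_fp32(fp32, 1 / 2, 2))    # (5.0, 10.0, 20.0)
# 1 : 32 double precision, typical of gaming-focused chips
print(rates_from_fp32(fp32, 1 / 32, 2))   # (0.3125, 10.0, 20.0)
```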


When GP100 was announced, Josh and I were discussing, internally, how it would make sense in the gaming industry. Recently, an article on WCCFTech cited anonymous sources, which should always be taken with a grain of salt, that claimed NVIDIA was planning a second chip, GP102, between GP104 and GP100. As I was writing this editorial about it, relating it to our own speculation about the physics of Pascal, VideoCardz claimed to have been contacted by the developers of AIDA64, seemingly on the record, also citing a GP102 design.

I will retell chunks of the rumor, but also add my opinion to it.


In the last few generations, each architecture had a flagship chip that was released in both gaming and professional SKUs. Neither audience had access to a chip that was larger than the other's largest of that generation. Clock rates and disabled portions varied by specific product, with gaming usually getting the more aggressive performance for slightly better benchmarks. Fermi had GF100/GF110, Kepler had GK110/GK210, and Maxwell had GM200. Each of these was available in Tesla, Quadro, and GeForce cards, especially Titans.

Maxwell was interesting, though. NVIDIA was unable to leave 28nm, which Kepler launched on, so they created a second architecture at that node. To increase performance without access to more feature density, you need to make your designs bigger, more optimized, or simpler. GM200 was giant and optimized but, to get the performance levels it achieved, it also needed to be simpler. Something had to go, and double-precision (FP64) performance was the big omission. NVIDIA was upfront about it at the Titan X launch, and told their GPU compute customers to keep purchasing Kepler if they valued FP64.

Fast-forward to Pascal.

AMD Releases Radeon Software Crimson Edition 16.5.3

Subject: Graphics Cards | May 24, 2016 - 09:46 PM |
Tagged: vulkan, radeon, overwatch, graphics driver, Crimson Edition 16.5.3, crimson, amd

AMD has released new drivers for Overwatch (and more) with Radeon Software Crimson Edition 16.5.3.


"Radeon Software Crimson Edition is AMD's revolutionary new graphics software that delivers redesigned functionality, supercharged graphics performance, remarkable new features, and innovation that redefines the overall user experience. Every Radeon Software release strives to deliver new features, better performance and stability improvements."

AMD lists these highlights for Radeon Software Crimson Edition 16.5.3:

Support for:

  • Total War: Warhammer
  • Overwatch
  • Dota 2 (with Vulkan API)

New AMD Crossfire profile available for:

  • Total War: Warhammer
  • Overwatch

The driver is available from AMD from the following direct links:

The full release notes, with fixed and known issues, are available at the source link here.

Source: AMD

NVIDIA Releases 368.22 Drivers for Overwatch

Subject: Graphics Cards | May 24, 2016 - 06:36 PM |
Tagged: nvidia, graphics drivers

Yesterday, NVIDIA released WHQL-certified drivers to align with the release of Overwatch. This version, 368.22, is the first public release of the 367 branch. Pascal is not listed in the documentation as a supported product, so it's unclear whether this will be the launch driver for it. The GTX 1080 comes out on Friday, but two drivers in a week would not be unprecedented for NVIDIA.

While NVIDIA has not communicated this too well, 368.22 will not install on Windows Vista. If you are still using that operating system, then you will not be able to upgrade your graphics drivers past 365.19. 367-branch (and later) drivers will require Windows 7 and up.


Before I continue, I should note that I've experienced some issues getting these drivers to install through GeForce Experience. Long story short, it took two attempts (with a clean install each time) to end up with a successful boot into 368.22. I didn't try the standalone installer that you can download from NVIDIA's website; if the second attempt through GeForce Experience had failed, then I would have. That said, after I installed it, it seemed to work out well for me with my GTX 670.

While NVIDIA is a bit behind on documentation, the driver also rolls in other fixes. There were some GPU compute developers who had crashes and other failures in certain OpenCL and CUDA applications, which are now compatible with 368.22. I've also noticed that my taskbar hasn't been sliding around on its own anymore, but I've only been using the driver for a handful of hours.

You can get GeForce 368.22 drivers from GeForce Experience, but you might want to download the standalone installer (or skip a version or two if everything works fine).

Source: NVIDIA

AMD Gains Market Share in Q1'16 Discrete GPUs

Subject: Graphics Cards | May 18, 2016 - 06:11 PM |
Tagged: amd, radeon, market share

AMD sent out a note yesterday with some interesting news about how the graphics card market fared in Q1 of 2016. First, let's get to the bad news: sales of new discrete graphics solutions, in both mobile and desktop, dropped by 10.2% quarter to quarter, a decrease that was slightly higher than expected. Though details weren't given in the announcement or data I have from Mercury Research, it seems likely that expectations of upcoming new GPUs from both NVIDIA and AMD contributed to the slowdown of sales on some level.

Despite the shrinking pie, AMD grabbed more of it in Q1 2016 than it had in Q4 of 2015, gaining 3.2 points of total market share for a total of 29.4%. That's a nice gain in a few short months, but it's still much lower than Radeon has been as recently as 2013. That 3.2% gain includes both notebook and desktop discrete GPUs, but let's break it down further.

                   Q1'16 Desktop   Q1'16 Desktop Change   Q1'16 Mobile   Q1'16 Mobile Change
AMD                22.7%           +1.8%                  38.7%          +7.3%
NVIDIA (assumed)   ~77%            -1.8%                  ~61%           -7.3%

AMD's gain in the desktop graphics card market was 1.8%, up to 22.7% of the market, while the notebook discrete graphics share jumped an astounding 7.3% to 38.7% of the total market.
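
The "assumed" NVIDIA row in the table above is simply the remainder of the market, since discrete GPUs are effectively a two-vendor segment; a quick sketch of that arithmetic:

```python
# AMD Q1'16 discrete share (percent) and change vs. Q4'15 (points), per the table.
amd_share = {"desktop": 22.7, "mobile": 38.7}
amd_change = {"desktop": +1.8, "mobile": +7.3}

for segment in amd_share:
    nvidia = 100.0 - amd_share[segment]            # two-vendor assumption
    print(f"{segment}: NVIDIA ~{nvidia:.0f}% ({-amd_change[segment]:+.1f} points)")
```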

NVIDIA obviously still has a commanding lead in desktop add-in cards with more than 75% of the market, but Mercury Research believes that a renewed focus on driver development, virtual reality, and the creation of the Radeon Technologies Group contributed to the increases in share for AMD.

Q3 of 2016 is where I think the future looks most interesting. Not only will NVIDIA's newly released GeForce GTX 1080 and upcoming GTX 1070 have had time to settle in, but the upcoming Polaris architecture based cards from AMD will have a chance to stretch their legs and attempt to keep pushing the needle upward.

New AMD Polaris 10 and Polaris 11 GPU Details Emerge

Subject: Editorial, Graphics Cards | May 18, 2016 - 01:18 PM |
Tagged: rumor, Polaris, opinion, HDMI 2.0, gpu, gddr5x, GDDR5, GCN, amd, 4k

While NVIDIA's Pascal has held the spotlight in the news recently, it is not the only new GPU architecture debuting this year. AMD will soon be bringing its Polaris-based graphics cards to market for notebooks and mainstream desktop users. While several different code names have been thrown around for these new chips, they are consistently referred to, in general terms, as Polaris 10 and Polaris 11. AMD's Raja Koduri stated in an interview with PC Perspective that the numbers used in the naming scheme hold no special significance, but eventually Polaris will be used across the entire performance lineup (low end to high end graphics).

Naturally, there are going to be many rumors and leaks as the launch gets closer. In fact, Tech Power Up recently obtained a number of interesting details about AMD's plans for Polaris-based graphics in 2016, including specifications and which areas of the market each chip is aimed at.

(Image: AMD GPU roadmap)

Citing the usual "industry sources" familiar with the matter (take that for what it's worth, but the specifications do not seem out of the realm of possibility), Tech Power Up revealed that there are two lines of Polaris-based GPUs that will be made available this year. Polaris 10 will allegedly occupy the mid-range (mainstream) graphics option in desktops as well as being the basis for high end gaming notebook graphics chips. On the other hand, Polaris 11 will reportedly be a smaller chip aimed at thin-and-light notebooks and mainstream laptops.

Now, for the juicy bits of the leak: the rumored specifications!

AMD's "Polaris 10" GPU will feature 32 compute units (CUs) which TPU estimates – based on the assumption that each CU still contains 64 shaders on Polaris – works out to 2,048 shaders. The GPU further features a 256-bit memory interface along with a memory controller supporting GDDR5 and GDDR5X (though not at the same time heh). This would leave room for cheaper Polaris 10 derived products with less than 32 CUs and/or cheaper GDDR5 memory. Graphics cards would have as much as 8GB of memory initially clocked at 7 Gbps. Reportedly, the full 32 CU GPU is rated at 5.5 TFLOPS of single precision compute power and runs at a TDP of no more than 150 watts.

Compared to the existing Hawaii-based R9 390X, the upcoming R9 400 series Polaris 10 GPU has fewer shaders and less memory bandwidth. The memory is clocked 1 GHz higher, but the 256-bit GDDR5X bus is half the width of the 390X's 512-bit GDDR5 bus, which results in 224 GB/s of memory bandwidth for Polaris 10 versus 384 GB/s on Hawaii. The R9 390X has a slight edge in compute performance at 5.9 TFLOPS versus Polaris 10's 5.5 TFLOPS; however, the Polaris 10 GPU uses much less power and easily wins at performance per watt. It almost reaches the same level of single precision compute performance at nearly half the power, which is impressive if it holds true!
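
The performance-per-watt claim is easy to check from the rated numbers alone; a quick sketch (leaked and rated figures only, so treat it as illustrative):

```python
# Rated peak FP32 compute divided by TDP; Polaris 10 figures are from the leak.
cards = {
    "R9 390X (Hawaii)":     {"tflops": 5.9, "tdp_w": 275},
    "Polaris 10 (rumored)": {"tflops": 5.5, "tdp_w": 150},
}
for name, c in cards.items():
    print(f"{name:22s} {c['tflops'] / c['tdp_w'] * 1000:.1f} GFLOPS per watt")
# Polaris 10 comes out roughly 1.7x ahead on this metric if the leak holds.
```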

                   R9 390X            R9 390             R9 380            R9 400-Series "Polaris 10"
GPU Code name      Grenada (Hawaii)   Grenada (Hawaii)   Antigua (Tonga)   Polaris 10
GPU Cores          2816               2560               1792              2048
Rated Clock        1050 MHz           1000 MHz           970 MHz           ~1343 MHz
Texture Units      176                160                112               ?
ROP Units          64                 64                 32                ?
Memory             8GB                8GB                4GB               8GB
Memory Clock       6000 MHz           6000 MHz           5700 MHz          7000 MHz
Memory Interface   512-bit            512-bit            256-bit           256-bit
Memory Bandwidth   384 GB/s           384 GB/s           182.4 GB/s        224 GB/s
TDP                275 watts          275 watts          190 watts         150 watts (or less)
Peak Compute       5.9 TFLOPS         5.1 TFLOPS         3.48 TFLOPS       5.5 TFLOPS
MSRP (current)     ~$400              ~$310              ~$199             unknown

Note: Polaris GPU clocks estimated using the assumption of 5.5 TFLOPS being peak compute and an accurate number of shaders. (Thanks Scott.)

Another comparison that can be made is to the Radeon R9 380 which is a Tonga-based GPU with similar TDP. In this matchup, the Polaris 10 based chip will – at a slightly lower TDP – pack in more shaders, twice the amount of faster clocked memory with 23% more bandwidth, and provide a 58% increase in single precision compute horsepower. Not too shabby!

Likely, a good portion of these increases are made possible by the move to a smaller process node and FinFET "tri-gate" like transistors on the Samsung/GlobalFoundries 14LPP FinFET manufacturing process, though AMD has also made some architecture tweaks and hardware additions to the GCN 4.0 based processors. A brief, high-level introduction is said to be made today in a webinar for their partners (though AMD has said preemptively that no technical nitty-gritty details will be divulged yet). (Update: Tech Altar summarized the partner webinar. Unfortunately, there were no major reveals other than that AMD will not be stopping AIB partners from pushing for the highest factory overclocks they can get.)

Moving on from Polaris 10 for a bit, Polaris 11 is rumored to be a smaller GCN 4.0 chip that will top out at 14 CUs (an estimated 896 shaders/stream processors) and 2.5 TFLOPS of single precision compute power. These chips, aimed at mainstream and thin-and-light laptops, will have 50W TDPs and will be paired with up to 4GB of GDDR5 memory. There is apparently no GDDR5X option for these, which makes sense at this price point and performance level. The 128-bit bus is a bit limiting, but this is a low end mobile chip we are talking about here...

                   R7 370                                  R7 400 Series "Polaris 11"
GPU Code name      Trinidad (Pitcairn)                     Polaris 11
GPU Cores          1024                                    896
Rated Clock        925 MHz base (975 MHz boost)            ~1395 MHz
Texture Units      64                                      ?
ROP Units          32                                      ?
Memory             2 or 4GB                                4GB
Memory Clock       5600 MHz                                ? MHz
Memory Interface   256-bit                                 128-bit
Memory Bandwidth   179.2 GB/s                              ? GB/s
TDP                110 watts                               50 watts
Peak Compute       1.89 TFLOPS                             2.5 TFLOPS
MSRP (current)     ~$140 (less after rebates and sales)    ?

Note: Polaris GPU clocks estimated using the assumption of 2.5 TFLOPS being peak compute and an accurate number of shaders. (Thanks Scott.)

Fewer details were unveiled concerning Polaris 11, as you can see from the chart above. From what we know so far, it should be a promising successor to the R7 370 series: even with the narrower memory bus and lower shader count, the GPU should be clocked higher (and M series mobile variants may have more shaders than the 370 and lower mobile series), with a much lower TDP for at least equivalent, if not noticeably better, performance. The lower power usage in particular will be hugely welcome in mobile devices, as it should, ideally, result in longer battery life under the same workloads. I picked the R7 370 as the comparison because it has 4 gigabytes of memory, not that many more shaders, and, being a desktop chip, readers may be more widely familiar with it. Polaris 11 also appears to sit between the R7 360 and R7 370 in terms of shader count and other features, but is allegedly going to be faster than both of them while using (at least on paper) less than half the power.

Of course these are still rumors until AMD makes Polaris officially, well, official with a product launch. The claimed specifications appear reasonable though, and based on that there are a few important takeaways and thoughts I have.


The first thing on my mind is that AMD is taking an interesting direction here. NVIDIA has chosen to start its new generation at the top, announcing "big Pascal" GP100, actually launching the GP104-based GTX 1080 (one of its highest end consumer cards) yesterday, and then introducing lower end products over the course of the year. AMD has opted for the opposite approach. It will be starting closer to the lower end with a mainstream notebook chip and a high end notebook/mainstream desktop GPU (Polaris 11 and 10, respectively), then fleshing out its product stack over the next year (remember, Raja Koduri stated Polaris and GCN 4 would be used across the entire product stack), building up with bigger and higher end GPUs over time, and finally topping off with its highest end consumer (and professional) GPUs based on "Vega" in 2017.

This means that for some time after both architectures launch, AMD's and NVIDIA's newest GPUs will not be directly competing with each other. I'm not sure if that was planned by either NVIDIA or AMD, or if it is just how it happened to work out from each company following its own GPU philosophy (I suspect the latter). Eventually they should meet in the middle (maybe late this year?) with a mid-range desktop graphics card, and it will be interesting to see how they stack up at similar price points and hardware levels. Then, once "Vega" based GPUs hit (sadly, probably in time for NVIDIA's big Pascal to launch; I'm not sure if Vega is only a Fury X replacement or goes beyond that as a 1080 Ti or even GP100 competitor), we should see GCN 4 on the new, smaller process node square up against NVIDIA and its 16nm Pascal products across the board (the entire lineup). Which will have the better performance, and which will win out in power usage, performance/watt, and performance/$? All questions I wish I knew the answers to, but sadly do not!

Speaking of price and performance/$... Polaris is actually looking pretty good so far at hitting much lower TDPs and power usage targets while delivering at least similar performance, if not a good bit more. Both AMD and NVIDIA appear to be bringing out GPUs better than I expected as far as improvements in performance and power usage (these die shrinks have really helped, even though that trend isn't really going to continue from here on out...). I hope that AMD can at least match NVIDIA in these areas at the mid range, even if they do not have a high end GPU coming out soon (not until sometime after these cards launch, and not really until Vega, the high end GCN GPU successor). At least on paper, based on the leaked information, the GPUs so far look good. My only worry is pricing, which I think is going to make or break these cards. AMD will need to price them competitively and aggressively to ensure their adoption and success.

I hope that doing the rollout this way (starting with lower end chips) helps AMD to iron out the new, smaller process node, and that they are able to get good yields so that they can be aggressive with pricing here and, eventually, at the high end!

I am looking forward to more information on AMD's Polaris architecture and the graphics cards based on it!

Also read:

I will admit that I am not 100% up on all the rumors and I apologize for that. With that said, I would love to hear what your thoughts are on AMD's upcoming GPUs and what you think about these latest rumors!

NVIDIA Releases Full Specifications for GTX 1070

Subject: Graphics Cards | May 18, 2016 - 12:49 PM |
Tagged: nvidia, pascal, gtx 1070, 1070, gtx, GTX 1080, 16nm FF+, TSMC, Founder's Edition

Several weeks ago when NVIDIA announced the new GTX 1000 series of products, we were given a quick glimpse of the GTX 1070.  This upper-midrange card is to carry a $379 price tag in retail form while the "Founder's Edition" will hit the $449 mark.  Today NVIDIA released the full specifications of this card on their website.

Interest in the GTX 1070 is incredibly high because of the potential performance of this card vs. the previous generation.  Price is also a big consideration here, as it is far easier to raise $379 than it is to make the jump to the GTX 1080 and shell out $599 once non-Founder's Edition cards are released.  The GTX 1070 has all of the same features as the GTX 1080, but it takes a hit when it comes to clockspeed and shader units.


The GTX 1070 is a Pascal based part that is fabricated on TSMC's 16nm FF+ node.  It shares the same overall transistor count as the GTX 1080, but it is partially disabled.  The GTX 1070 contains 1920 CUDA cores as compared to the 2560 cores of the 1080; essentially, one full GPC is disabled to reach that number.  The clockspeeds take a hit as well compared to the full GTX 1080.  The base clock for the 1070 is still an impressive 1506 MHz, and boost reaches 1683 MHz.  This combination of shader count and clockspeed makes this card probably a little bit faster than the older GTX 980 Ti.  The rated TDP for the card is 150 watts with a single 8-pin PCI-E power connector.  This means that there should be some decent headroom when it comes to overclocking this card.  Due to binning and yields, we may not see 2+ GHz overclocks with these cards, especially if NVIDIA cut down the power delivery system as compared to the GTX 1080.  Time will tell on that one.

The memory technology that NVIDIA is using for this card is not the cutting edge GDDR5X or HBM, but rather the tried and true GDDR5.  8 GB of this memory sits on a 256-bit bus, but it is running at a very, very fast 8 Gbps.  This gives overall bandwidth in the 256 GB/sec region.  When we combine this figure with the memory compression techniques implemented in the Pascal architecture, we can see that the GTX 1070 will not be bandwidth starved.  We have no information on whether this generation of products will mirror what we saw with the previous generation GTX 970 in terms of disabled memory controllers and the 3.5 GB/500 MB memory split caused by that unique memory subsystem.
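
Putting the numbers from the last two paragraphs together, here is a rough sketch of where the 1920-core figure and the 256 GB/s bandwidth come from, plus a back-of-the-envelope FP32 estimate that is derived here rather than quoted from NVIDIA:

```python
# GP104 ships with 2560 cores across four GPCs; disabling one GPC gives the 1070's count.
gp104_cores = 2560
gpcs = 4
cores_per_gpc = gp104_cores // gpcs          # 640
gtx1070_cores = gp104_cores - cores_per_gpc  # 1920, matching the spec sheet

boost_mhz = 1683
est_fp32_tflops = 2 * gtx1070_cores * boost_mhz / 1e6   # ~6.5 TFLOPS at boost (our estimate)

bus_width_bits = 256
data_rate_gbps = 8
bandwidth_gb_s = bus_width_bits * data_rate_gbps / 8    # 256 GB/s

print(gtx1070_cores, round(est_fp32_tflops, 1), bandwidth_gb_s)
```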


Beyond those things, the GTX 1070 is identical to the GTX 1080 in terms of DirectX features, display specifications, decoding support, double bandwidth SLI, etc.  There is an obvious amount of excitement for this card considering its potential performance and price point.  These supposedly will be available in the Founder's Edition release on June 10 for the $449 MSRP.  I know many people are considering using these cards in SLI to deliver performance for half the price of last year's GTX 980 Ti.  From all indications, these cards will be a significant upgrade for anyone using GTX 970s in SLI.  With greater access to 4K monitors, as well as Surround Gaming, this could be a solid purchase for anyone looking to step up their game in these scenarios.

Source: NVIDIA

The 1080 roundup, Pascal in all its glory

Subject: Graphics Cards | May 17, 2016 - 06:22 PM |
Tagged: nvidia, pascal, video, GTX 1080, gtx, GP104, geforce, founders edition

Yes, that's right: if you felt Ryan and Al somehow missed something in our review of the new GTX 1080, or you felt the obvious pro-Matrox bias was showing, here are the other reviews you can pick and choose from.  Start off with [H]ard|OCP, who also tested Ashes of the Singularity and Doom as well as the old favourite Battlefield 4.  Doom really showed itself off as a next generation game, its Nightmare mode scoffing at any GPU with less than 5GB of VRAM available and pushing the single 1080 hard.  Read on to see how the competition stacked up ... or wait for the 1440 to come out some time in the future.


"NVIDIA's next generation video card is here, the GeForce GTX 1080 Founders Edition video card based on the new Pascal architecture will be explored. We will compare it against the GeForce GTX 980 Ti and Radeon R9 Fury X in many games to find out what it is capable of."

Here are some more Graphics Card articles from around the web:


Source: [H]ard|OCP
Manufacturer: NVIDIA

A new architecture with GP104


The summer of change for GPUs has begun with today’s review of the GeForce GTX 1080. NVIDIA has endured leaks, speculation and criticism for months now, with enthusiasts calling out NVIDIA for not including HBM technology or for not having asynchronous compute capability. Last week NVIDIA’s CEO Jen-Hsun Huang went on stage and officially announced the GTX 1080 and GTX 1070 graphics cards with a healthy amount of information about their supposed performance and price points. Issues around cost and what exactly a Founders Edition is aside, the event was well received and clearly showed a performance and efficiency improvement that we were not expecting.


The question is, does the actual product live up to the hype? Can NVIDIA overcome some users’ negative view of the Founders Edition to create a product message that gives the wide range of PC gamers looking for an upgrade path an option they’ll take?

I’ll let you know through the course of this review, but what I can tell you definitively is that the GeForce GTX 1080 clearly sits alone at the top of the GPU world.

Continue reading our review of the GeForce GTX 1080 Founders Edition!!