Report: AMD's Top Polaris GPU Will Cost Just $199

Subject: Graphics Cards | May 31, 2016 - 08:23 PM |
Tagged: video card, rumor, report, Radeon RX 480, radeon, Polaris, graphics card, gpu, amd

Update: It's official. AMD's Polaris-based Radeon RX 480 will launch at $199 with more than 5 TFLOPS of compute.

(Original post:)

The Wall Street Journal is reporting that AMD's upcoming Polaris graphics cards will be priced no higher than $199, a startling move to say the least.

AMD_Radeon_graphics_logo.jpg

The report arrives via VideoCardz.com:

"According to WSJ article, Polaris GPUs will cost no more than 199 USD. First systems equipped with Polaris GPUs will be available end of June:

'Advanced Micro Devices Inc. is angling to lower the cost of virtual reality, targeting the field with a new line of graphics hardware priced at $199—half or less the cost of comparable products.

AMD said the first chips based on its new Polaris design are expected to arrive in graphics cards for personal computers at the end of June. The company aims to help push the starting cost of PCs that can deliver VR experiences as low as $799 from above $1,000.'"

The report lists the high-end Polaris card as the "RX 480", which would be a departure from the recent nomenclature (R9 290X, R9 390X). Pricing such a card this aggressively not only creates what one would hope to be an incredible price/performance ratio, but is likely an answer to NVIDIA's GTX 1080/1070 - especially considering NVIDIA's new GTX 1070 is as fast as a GTX 980 Ti.

Is the Radeon RX 480 really the top end card, or a lower-cost variant? Will there be a 490, or 490X? This report certainly doesn't answer any questions, but the possibility of a powerful new GPU for $199 is very appealing.

Source: VideoCardz

New AMD Polaris 10 and Polaris 11 GPU Details Emerge

Subject: Editorial, Graphics Cards | May 18, 2016 - 01:18 PM |
Tagged: rumor, Polaris, opinion, HDMI 2.0, gpu, gddr5x, GDDR5, GCN, amd, 4k

While NVIDIA's Pascal has held the spotlight in the news recently, it is not the only new GPU architecture debuting this year. AMD will soon be bringing its Polaris-based graphics cards to market for notebooks and mainstream desktop users. While several different code names have been thrown around for these new chips, they are generally referred to as Polaris 10 and Polaris 11. AMD's Raja Koduri stated in an interview with PC Perspective that the numbers in the naming scheme hold no special significance, but that Polaris will eventually be used across the entire performance lineup (low end to high end graphics).

Naturally, there are going to be many rumors and leaks as the launch gets closer. In fact, TechPowerUp recently obtained a number of interesting details about AMD's plans for Polaris-based graphics in 2016, including specifications and which areas of the market each chip is aimed at.

AMD GPU Roadmap.jpg

Citing the usual "industry sources" familiar with the matter (take that for what it's worth, though the specifications do not seem out of the realm of possibility), TechPowerUp reports that two lines of Polaris-based GPUs will be made available this year. Polaris 10 will allegedly occupy the mid-range (mainstream) graphics slot in desktops as well as serve as the basis for high end gaming notebook graphics chips. On the other hand, Polaris 11 will reportedly be a smaller chip aimed at thin-and-light notebooks and mainstream laptops.

Now, for the juicy bits of the leak: the rumored specifications!

AMD's "Polaris 10" GPU will feature 32 compute units (CUs), which TPU estimates – based on the assumption that each CU still contains 64 shaders on Polaris – works out to 2,048 shaders. The GPU further features a 256-bit memory interface along with a memory controller supporting both GDDR5 and GDDR5X (though not at the same time). This leaves room for cheaper Polaris 10-derived products with fewer than 32 CUs and/or cheaper GDDR5 memory. Graphics cards would carry as much as 8GB of memory, initially clocked at 7 Gbps. Reportedly, the full 32 CU GPU is rated at 5.5 TFLOPS of single precision compute power and runs at a TDP of no more than 150 watts.

Compared to the existing Hawaii-based R9 390X, the upcoming R9 400-series Polaris 10 GPU has fewer shaders and less memory bandwidth. Its memory is clocked 1 GHz higher, but the GDDR5X bus is half the width of the 390X's 512-bit GDDR5 bus, which works out to 224 GB/s of memory bandwidth for Polaris 10 versus 384 GB/s on Hawaii. The R9 390X has a slight edge in compute performance at 5.9 TFLOPS versus Polaris 10's 5.5 TFLOPS; however, the Polaris 10 GPU uses much less power and easily wins at performance per watt. It nearly matches Hawaii's single precision compute at close to half the power, which is impressive if it holds true!
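
Those bandwidth figures fall straight out of the bus widths and effective data rates; a quick sketch of the arithmetic:

```python
def memory_bandwidth_gbps(bus_width_bits, data_rate_gbps):
    """Peak memory bandwidth in GB/s: (bus width in bits x effective data
    rate in Gbps) / 8 bits per byte."""
    return bus_width_bits * data_rate_gbps / 8

# Polaris 10: 256-bit bus, GDDR5(X) at 7 Gbps effective data rate
polaris10_bw = memory_bandwidth_gbps(256, 7.0)
# R9 390X (Hawaii): 512-bit bus, GDDR5 at 6 Gbps effective data rate
hawaii_bw = memory_bandwidth_gbps(512, 6.0)

print(polaris10_bw, hawaii_bw)  # 224.0 384.0
```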

  R9 390X R9 390 R9 380 R9 400-Series "Polaris 10"
GPU Code name Grenada (Hawaii) Grenada (Hawaii) Antigua (Tonga) Polaris 10
GPU Cores 2816 2560 1792 2048
Rated Clock 1050 MHz 1000 MHz 970 MHz ~1343 MHz
Texture Units 176 160 112 ?
ROP Units 64 64 32 ?
Memory 8GB 8GB 4GB 8GB
Memory Clock 6000 MHz 6000 MHz 5700 MHz 7000 MHz
Memory Interface 512-bit 512-bit 256-bit 256-bit
Memory Bandwidth 384 GB/s 384 GB/s 182.4 GB/s 224 GB/s
TDP 275 watts 275 watts 190 watts 150 watts (or less)
Peak Compute 5.9 TFLOPS 5.1 TFLOPS 3.48 TFLOPS 5.5 TFLOPS
MSRP (current) ~$400 ~$310 ~$199 $ unknown

Note: Polaris GPU clocks estimated using the assumption that 5.5 TFLOPS is the peak compute figure and the shader count is accurate. (Thanks Scott.)
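
The clock estimates in these tables can be reproduced by working that assumption in reverse: given the rated peak compute and the shader count, solve for the clock. A minimal sketch in Python (the 2 FLOPs per shader per clock figure is GCN's usual FMA rate, assumed here since AMD has not confirmed it for Polaris):

```python
def estimated_clock_mhz(peak_tflops, shaders, flops_per_clock=2):
    """Estimate GPU clock (MHz) from rated peak single precision compute,
    assuming each shader retires flops_per_clock FLOPs per cycle."""
    return peak_tflops * 1e12 / (shaders * flops_per_clock) / 1e6

print(round(estimated_clock_mhz(5.5, 2048)))  # Polaris 10: ~1343 MHz
print(round(estimated_clock_mhz(2.5, 896)))   # Polaris 11: ~1395 MHz
```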

Another comparison can be made to the Radeon R9 380, a Tonga-based GPU with a similar TDP. In this matchup, the Polaris 10 chip will – at a slightly lower TDP – pack in more shaders, twice the amount of faster-clocked memory with 23% more bandwidth, and a 58% increase in single precision compute horsepower. Not too shabby!
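
As a sanity check, those percentages follow from the table figures above:

```python
def pct_increase(old, new):
    """Percentage increase from old to new."""
    return (new / old - 1) * 100

# R9 380 (Tonga) vs. rumored Polaris 10
print(round(pct_increase(182.4, 224)))  # bandwidth: ~23% more
print(round(pct_increase(3.48, 5.5)))   # compute:   ~58% more
```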

Likely, a good portion of these gains comes from the move to a smaller process node with FinFET ("tri-gate"-like) transistors on the Samsung/GlobalFoundries 14LPP manufacturing process, though AMD has also made some architecture tweaks and hardware additions to the GCN 4.0-based processors. A brief high-level introduction was set to be given today in a webinar for AMD's partners (though AMD said preemptively that no nitty-gritty technical details would be divulged yet). (Update: Tech Altar summarized the partner webinar. Unfortunately there were no major reveals, other than that AMD will not prevent AIB partners from pushing for the highest factory overclocks they can achieve.)

Moving on from Polaris 10 for a bit: Polaris 11 is rumored to be a smaller GCN 4.0 chip topping out at 14 CUs (an estimated 896 shaders/stream processors) and 2.5 TFLOPS of single precision compute power. Aimed at mainstream and thin-and-light laptops, these chips will have 50W TDPs and be paired with up to 4GB of GDDR5 memory. There is apparently no GDDR5X option here, which makes sense at this price point and performance level. The 128-bit bus is a bit limiting, but this is a low-end mobile chip we are talking about...

  R7 370 R7 400 Series "Polaris 11"
GPU Code name Trinidad (Pitcairn) Polaris 11
GPU Cores 1024 896
Rated Clock 925 MHz base (975 MHz boost) ~1395 MHz
Texture Units 64 ?
ROP Units 32 ?
Memory 2 or 4GB 4GB
Memory Clock 5600 MHz ? MHz
Memory Interface 256-bit 128-bit
Memory Bandwidth 179.2 GB/s ? GB/s
TDP 110 watts 50 watts
Peak Compute 1.89 TFLOPS 2.5 TFLOPS
MSRP (current) ~$140 (less after rebates and sales) $?

Note: Polaris GPU clocks estimated using the assumption that 2.5 TFLOPS is the peak compute figure and the shader count is accurate. (Thanks Scott.)

Fewer details were unveiled concerning Polaris 11, as you can see from the chart above. From what we know so far, it should be a promising successor to the R7 370 series: even with the narrower memory bus and lower shader count, the GPU should be clocked higher (M-series mobile variants may also carry more shaders than the 370-and-lower mobile series) at a much lower TDP, for at least equivalent if not noticeably better performance. The lower power usage in particular will be hugely welcome in mobile devices, as it should ideally translate into longer battery life under the same workloads. I picked the R7 370 as the comparison because it has 4 gigabytes of memory, not that many more shaders, and, being a desktop chip, may be more widely familiar to readers. Polaris 11 also appears to sit between the R7 360 and R7 370 in shader count and other features, yet is allegedly faster than both while using (at least on paper) less than half the power.

Of course, these are still rumors until AMD makes Polaris officially, well, official with a product launch. The claimed specifications appear reasonable, though, and based on them I have a few important takeaways and thoughts.

amd-2016-polaris-blocks.jpg

The first thing on my mind is that AMD is taking an interesting direction here. NVIDIA has chosen to start its new generation at the top, announcing "big Pascal" GP100 and actually launching the GP104-based GTX 1080 (one of its highest-end consumer chips/cards) yesterday, then introducing lower-end products over the course of the year. AMD has opted for the opposite approach: it will start closer to the lower end with a mainstream notebook chip and a high-end notebook/mainstream desktop GPU (Polaris 11 and 10, respectively), then flesh out its product stack over the following year (remember, Raja Koduri stated Polaris and GCN 4 would be used across the entire product stack), building up with bigger and higher-end GPUs over time and finally topping off with its highest-end consumer (and professional) GPUs based on "Vega" in 2017.

This means that for some time after both architectures launch, AMD's and NVIDIA's newest GPUs will not be directly competing with each other. I'm not sure whether this was planned by either company or is just how it worked out as each followed its own GPU philosophy (I suspect the latter). Eventually they should meet in the middle (maybe late this year?) with mid-range desktop graphics cards, and it will be interesting to see how they stack up at similar price points and hardware levels. Then, once "Vega"-based GPUs hit (sadly, probably in time for NVIDIA's big Pascal to launch; I'm not sure whether Vega is only a Fury X replacement or reaches beyond that to compete with a 1080 Ti or even GP100), we should see GCN 4 on the new smaller process node square up against NVIDIA's 16nm Pascal products across the entire lineup. Which will have the better performance, and which will win out in power usage, performance per watt, and performance per dollar? All questions I wish I knew the answers to, but sadly do not!

Speaking of price and performance per dollar: Polaris is actually looking pretty good so far at hitting much lower TDPs and power usage targets while delivering at least similar performance, if not a good bit more. Both AMD and NVIDIA appear to be bringing out GPUs better than I expected in terms of performance and power-usage improvements (these die shrinks have really helped, even though that trend isn't really going to continue from here on out...). I hope that AMD can at least match NVIDIA in these areas at the mid-range, even though it does not have a high-end GPU coming out soon (not until sometime after these cards launch, and not really until Vega, the high-end GCN successor). At least on paper, based on the leaked information, the GPUs look good so far. My only worry is pricing, which I think will make or break these cards. AMD will need to price them competitively and aggressively to ensure their adoption and success.

I hope that doing the rollout this way (starting with lower-end chips) helps AMD iron out the new smaller process node, and that good yields let the company price aggressively here and eventually at the high end!

I am looking forward to more information on AMD's Polaris architecture and the graphics cards based on it!

I will admit that I am not 100% up on all the rumors and I apologize for that. With that said, I would love to hear what your thoughts are on AMD's upcoming GPUs and what you think about these latest rumors!

EKWB Releases AMD Radeon Pro Duo Full-Cover Water Block

Subject: Graphics Cards, Cases and Cooling | May 10, 2016 - 08:55 AM |
Tagged: water cooling, radeon pro duo, radeon, pro duo, liquid cooling, graphics cards, gpu cooler, gpu, EKWB, amd

While AMD's latest dual-GPU powerhouse comes with a rather beefy-looking liquid cooling system out of the box, the team at EK Water Blocks have nonetheless created their own full-cover block for the Pro Duo, which is now available in a pair of versions.

EKFC-Radeon-Pro-Duo_NP_fill_1600.jpg

"Radeon™ has done it again by creating the fastest gaming card in the world. Improving over the Radeon™ R9 295 X2, the Radeon Pro Duo card is faster and uses the 3rd generation GCN architecture featuring asynchronous shaders enables the latest DirectX™ 12 and Vulkan™ titles to deliver amazing 4K and VR gaming experiences. And now EK Water Blocks made sure, the owners can get the best possible liquid cooling solution for the card as well!"

EKFC-Radeon-Pro-Duo_pair.png

Nickel version (top), Acetal+Nickel version (bottom)

The blocks include a single-slot I/O bracket, which will allow the Pro Duo to fit in many more systems (and allow even more of them to be installed per motherboard!).

EKFC-Radeon-Pro-Duo_NP_input_1600-1500x999.jpg

"EK-FC Radeon Pro Duo water block features EK unique central inlet split-flow cooling engine with a micro fin design for best possible cooling performance of both GPU cores. The block design also allows flawless operation with reversed water flow without adversely affecting the cooling performance. Moreover, such design offers great hydraulic performance, allowing this product to be used in liquid cooling systems using weaker water pumps.

The base is made of nickel-plated electrolytic copper while the top is made of quality POM Acetal or acrylic (depending on the variant). Screw-in brass standoffs are pre-installed and allow for safe installation procedure."

Suggested pricing is set at 155.95€ for the blocks (approx. $177 US), and they are "readily available for purchase through EK Webshop and Partner Reseller Network".

Source: EKWB

AMD Announces Joint Venture with NFME

Subject: General Tech | April 30, 2016 - 12:33 AM |
Tagged: SoC, nfme, gpu, cpu, amd

Nantong Fujitsu Microelectronics Co., Ltd. (NFME) is a Chinese company that packages and tests integrated circuits. Recently, AMD has been working with China to reach that large market, especially given their ongoing cash concerns. This time, AMD sold 85% of its stake in two locations, AMD Penang, Malaysia and AMD Suzhou, Jiangsu, China, to NFME and formed a joint venture with them, called TF-AMD Microelectronics Sdn Bhd.

AMD_2016-Logo.png

I see two interesting aspects to this story.

First, AMD gets about $320 million USD in this transaction, after taxes and fees, and it also retains 15% of the venture. I am curious whether this will lead to a long-term source of income for AMD, even though the press release claims that the structure will be "cost neutral". Either way, clearing a third of a billion dollars should help AMD to some extent. That equates to roughly two to three quarters of net loss for the company, giving it about six to nine extra months of runway on its own. That's not too bad if the transaction doesn't have any lasting consequences.

Second, NFME now has access to some interesting packaging and testing technologies. NFME's website claims this allows it to handle dies up to 800mm², substrates with up to 18 layers, and package sizes up to 75mm. These specifications sound like they draw on AMD's GPU experience, which could bring all of that effort and knowledge to completely different fields.

The press release states that 1,700 employees will be moved from AMD to this venture. They do not state whether any jobs are affected over and above this amount, though.

Report: NVIDIA GP104 Die Pictured; GTX 1080 Does Not Use HBM

Subject: Graphics Cards | April 22, 2016 - 10:16 AM |
Tagged: rumor, report, pascal, nvidia, leak, graphics card, gpu, gddr5x, GDDR5

According to a report from VideoCardz (via Overclock.net/Chip Hell) high quality images have leaked of the upcoming GP104 die, which is expected to power the GeForce GTX 1070 graphics card.

NVIDIA-GP104-GPU.jpg

Image credit: VideoCardz.com

"This GP104-200 variant is supposedly planned for GeForce GTX 1070. Although it is a cut-down version of GP104-400, both GPUs will look exactly the same. The only difference being modified GPU configuration. The high quality picture is perfect material for comparison."

A couple of interesting things have emerged with this die shot, with the relatively small size of the GPU (die size estimated at 333 mm2), and the assumption that this will be using conventional GDDR5 memory - based on a previously leaked photo of the die on PCB.

NVIDIA-Pascal-GP104-200-on-PCB.jpg

Alleged photo of GP104 using GDDR5 memory (Image credit: VideoCardz via ChipHell)

"Leaker also says that GTX 1080 will feature GDDR5X memory, while GTX 1070 will stick to GDDR5 standard, both using 256-bit memory bus. Cards based on GP104 GPU are to be equipped with three DisplayPorts, HDMI and DVI."

While this is no doubt disappointing to those anticipating HBM with the upcoming Pascal consumer GPUs, the move isn't all that surprising considering the consistent rumors that GTX 1080 would use GDDR5X.

Is the lack of HBM (or HBM2) enough to make you skip this generation of GeForce GPU? This author points out that AMD's Fury X - the first GPU to use HBM - was still unable to beat a GTX 980 Ti in many tests, even though the 980 Ti uses conventional GDDR5. Memory is obviously important, but the core defines the performance of the GPU.

If NVIDIA has made improvements to performance and efficiency we should see impressive numbers, but this might be a more iterative update than originally expected - which only gives AMD more of a chance to win marketshare with their upcoming Radeon 400-series GPUs. It should be an interesting summer.

Source: VideoCardz

Report: NVIDIA GTX 1080 GPU Cooler Pictured

Subject: Graphics Cards | April 19, 2016 - 11:08 AM |
Tagged: rumor, report, nvidia, leak, GTX 1080, graphics card, gpu, geforce

Another reported photo of an upcoming GTX 1080 graphics card has appeared online, this time via a post on Baidu.

GTX1080.jpg

(Image credit: VR-Zone, via Baidu)

The image is typically low-resolution and features the slightly soft focus we've come to expect from alleged leaks. This doesn't mean it's not legitimate, and this isn't the first time we have seen this design. This image also appears to only be the cooler, without an actual graphics card board underneath.

We have reported on the upcoming GPU rumored to be named "GTX 1080" in the recent past, and while no official announcement has been made it seems safe to assume that a successor to the current 900-series GPUs is forthcoming.

Source: VR-Zone

EVGA Releases NVIDIA GeForce GTX 950 Low Power Cards

Subject: Graphics Cards | April 5, 2016 - 11:57 AM |
Tagged: PCIe power, nvidia, low-power, GTX950, GTX 950 Low Power, graphics card, gpu, GeForce GTX 950, evga

EVGA has announced new low-power versions of the NVIDIA GeForce GTX 950, some of which do not require any PCIe power connection to work.

02G-P4-0958-KR_no_6_pin.jpg

"The EVGA GeForce GTX 950 is now available in special low power models, but still retains all the performance intact. In fact, several of these models do not even have a 6-Pin power connector!"

With or without power, all of these cards are full-on GTX 950's, with 768 CUDA cores and 2GB of GDDR5 memory. The primary difference will be with clock speeds, and EVGA provides a chart to illustrate which models still require PCIe power, as well as how they compare in performance.

evga_chart.png

It looks like the links to the 75W (no PCIe power required) models aren't working just yet on EVGA's site. Doubtless we will soon have active listings for pricing and availability info.

Source: EVGA

AMD Announces XConnect Technology for External Graphics

Subject: Graphics Cards | March 10, 2016 - 01:27 PM |
Tagged: XConnect, thunderbolt 3, radeon, graphics card, gpu, gaming laptop, external gpu, amd

AMD has announced their new external GPU technology called XConnect, which leverages support from the latest Radeon driver to support AMD graphics over Thunderbolt 3.

AMD_SCREEN.png

The technology showcased by AMD is powered by Razer, who partnered with AMD to come up with an expandable solution that supports up to 375W GPUs, including R9 Fury, R9 Nano, and all R9 300 series GPUs up to the R9 390X (there is no liquid cooling support, and the R9 Fury X isn't listed as being compatible). The notebook in AMD's marketing material is the Razer Blade Stealth, which offers the Razer Core external GPU enclosure as an optional accessory. (More information about these products from Razer here.) XConnect is not tied to any vendor, however; this is "generic driver" support for GPUs over Thunderbolt 3.

AMD has posted this video with the head of Global Technical Marketing, Robert Hallock, to explain the new tech and show off the Razer hardware:

The exciting part has to be the promise of an industry standard for external graphics, something many have hoped for. Not everyone will produce a product exactly like Razer has, since there is no requirement to provide a future upgrade path in a larger enclosure like this, but the important thing is that Thunderbolt 3 support is built in to the newest Radeon Crimson drivers.

Here are the system requirements for AMD XConnect from AMD:

  • ​Radeon Software 16.2.2 driver (or later)
  • 1x Thunderbolt 3 port
  • 40Gbps Thunderbolt 3 cable
  • Windows 10 build 10586 (or later)
  • BIOS support for external graphics over Thunderbolt 3 (check with system vendor for details)
  • Certified Thunderbolt 3 graphics enclosure configured with supported Radeon R9 Series GPU
  • Thunderbolt firmware (NVM) v.16

AMD_SLIDE.png

The announcement introduces all sorts of possibilities. How awesome would it be to see a tiny solution with an R9 Nano powered by, say, an SFX power supply? Or what about a dual-GPU enclosure (possibly requiring 2 Thunderbolt 3 connections?), or an enclosure supporting liquid cooling (and the R9 Fury X)? The potential is certainly there, and with a standard in place we could see some really interesting products in the near future (or even DIY solutions). It's a promising time for mobile gaming!

Source: AMD

ZOTAC Introduces ZBOX MAGNUS EN980 VR Ready Mini-PC

Subject: Graphics Cards, Systems | March 10, 2016 - 11:38 AM |
Tagged: zotac, zbox, VR, SFF, nvidia, mini-pc, MAGNUS EN980, liquid cooling, GTX980, GTX 980, graphics, gpu, geforce

ZOTAC is teasing a new mini PC "ready for virtual reality" leading up to Cebit 2016, happening later this month. The ZBOX MAGNUS EN980 supplants the EN970 as the most powerful version of ZOTAC's gaming mini systems, and will come equipped with no less than an NVIDIA GeForce GTX 980.

ZOTAC.jpg

(Image via Guru3D)

Some questions remain ahead of a more formal announcement, and foremost among them is which version of the GTX 980 the system uses. Is this the full desktop variant, or the GTX 980M? It seems to be the former, judging by the "factory-installed water-cooling solution", especially if that pertains to the GPU. In any case, this will easily be the most powerful mini-PC ZOTAC has released, as even the current MAGNUS EN970 doesn't actually ship with a GTX 970 as the name would imply; rather, a GTX 960 handles discrete graphics duties according to the specs.

The MAGNUS EN980's GTX 980 GPU - mobile or not - will make this a formidable gaming system, paired as it is with a 6th-gen Intel Skylake CPU (the specific model was not mentioned in the press release; the current high-end EN970 with discrete graphics uses the Intel Core i5-5200U). Other details include support for up to four displays via HDMI and DisplayPort, USB 3.0 and 3.1 Type-C inputs, and built-in 802.11ac wireless.

We'll have to wait until Cebit (which runs from March 14 - 18) for more details. Full press release after the break.

Source: ZOTAC

New ASUS GeForce GTX 950 2G Requires No PCIe Power

Subject: Graphics Cards | March 4, 2016 - 04:48 PM |
Tagged: PCIe power, PCI Express, nvidia, GTX 950 2G, gtx 950, graphics card, gpu, geforce, asus, 75W

ASUS has released a new version of the GTX 950 called the GTX 950 2G, and the interesting part isn't what's been added, but what was taken away; namely, the PCIe power requirement.

gtx9502g_1.jpeg

When NVIDIA announced the GTX 950 (which Ryan reviewed here), it carried a TDP of 90W, which prevented it from running without a PCIe power connector. The GTX 950 was (seemingly) the replacement for the GTX 750, which didn't require anything beyond motherboard power via the PCIe slot, and the same held true for the more powerful GTX 750 Ti. With no need for PCIe power, the GTX 750 Ti became our (and many others') default recommendation for turning any PC into a gaming machine (an idea we happened to cover in depth here).

Here's a look at the specs from ASUS for the GTX 950 2G:

  • Graphics Engine: NVIDIA GeForce GTX 950
  • Interface: PCI Express 3.0
  • Video Memory: GDDR5 2GB
  • CUDA Cores: 768
  • Memory Clock: 6610 MHz
  • Memory Interface: 128-bit
  • Engine Clock
    • Gaming Mode (Default) - GPU Boost Clock: 1190 MHz, GPU Base Clock: 1026 MHz
    • OC Mode - GPU Boost Clock: 1228 MHz, GPU Base Clock: 1051 MHz
  • Interface: HDMI 2.0, DisplayPort, DVI
  • Power Consumption: Up to 75W, no additional PCIe power required
  • Dimensions: 8.3 x 4.5 x 1.6 inches

gtx9502g_2.jpeg

Whether this model has any relation to the rumored "GTX 950 SE/LP" remains to be seen (beyond power, this card appears to carry stock GTX 950 specs), but the option of adding a GPU without worrying about power requirements makes this a very attractive upgrade proposition for older builds or OEM PCs, depending on cost.

The full model number is ASUS GTX950-2G, and a listing is up on Amazon, though seemingly only a placeholder at the moment. (Link removed. The listing was apparently for an existing GTX 950 product.)

Source: ASUS