Subject: Graphics Cards | April 14, 2016 - 10:17 PM | Scott Michaud
Tagged: microsoft, windows 10, uwp, DirectX 12, dx12
At the PC Gaming Show during last year's E3 Expo, Microsoft announced that they were looking to bring more first-party titles to Windows. They used to be one of the better PC gaming publishers, back in the MechWarrior 4 and earlier Flight Simulator days, but they got distracted as the Xbox 360 rose and Windows Vista fell.
Again, part of that is because they attempted to push users to Windows Vista and Games for Windows Live, holding back troubled titles like Halo 2: Vista and technologies like DirectX 10 from Windows XP, which drove users to Valve's then-small Steam platform. Epic Games was also a canary in the coal mine at that time, warning users that Microsoft was considering certification for Games for Windows Live, which threatened mod support “because Microsoft's afraid of what you might put into it”.
It's sometimes easy to conform history to fit a specific viewpoint, but it does sound... familiar.
Anyway, we're glad that Microsoft is bringing first-party content to the PC, and they are perfectly within their rights to structure it however they please. We are also within our rights to point out its flaws and ask for them to be corrected. Turns out that Quantum Break, like Gears of War before it, has some severe performance issues. Let's be clear, these will likely be fixed, and I'm glad that Microsoft didn't artificially delay the PC version to give the console an exclusive window. Also, had they delayed the PC version until it was fixed, we wouldn't have known whether it needed the time.
Still, the game apparently has issues with a 50 FPS top-end cap, on top of pacing-based stutters. One concern that I have: because Digital Foundry is a European publication, perhaps the 50 Hz issue is caused by their copy being based on a PAL version of the game? Despite suggesting it, I would be shocked if that were the case, but I'm just trying to figure out why anyone would create a ceiling at that specific interval. They are also seeing NVIDIA's graphics drivers frequently crash, which probably means that some areas of their DirectX 12 support are not quite what the game expects. Again, that is solvable by drivers.
It's been a shaky start for both DirectX 12 and the Windows 10 UWP platform. We'll need to keep waiting and see what happens going forward. I hope this doesn't discourage Microsoft too much, but also that they robustly fix the problems we're discussing.
Subject: Graphics Cards | April 12, 2016 - 05:34 PM | Jeremy Hellstrom
Tagged: asus, 980 Ti, GTX 980 Ti MATRIX Platinum, DirectCU II
The ASUS GTX 980 Ti MATRIX Platinum comes with a mix of features, including a memory defroster; this card is designed with LN2 cooling in mind, so we may see it appear in some of this year's overclocking contests. It uses the older dual-fan DirectCU II cooler, not the newer DirectCU III, but the card still remained around 60°C under full load when [H]ard|OCP tested it. The one-press VBIOS reload is perfect if you run into issues while overclocking, and this card will overclock: [H] hit 1266 MHz base / 1367 MHz boost / 1503 MHz in-game with the VRAM at 8.2 GHz. That overclocking potential, as well as an asking price currently under MSRP, helped this card win the Gold; see how it surpasses the MSI Lightning in the full review.
"Today we review the ASUS GTX 980 Ti MATRIX Platinum, a gaming enthusiast centered video card which boasts enthusiast air cooling and an enthusiast overclock on air cooling. This high-end video card features DirectCU II cooling, making it the perfect comparison to the MSI GTX 980 TI LIGHTNING in class, price, performance and cooling."
Here are some more Graphics Card articles from around the web:
- Gigabyte GTX 980 Ti XtremeGaming 6GB @ techPowerUp
- ASUS Radeon R9 Fury STRIX Graphics Card Review @ NikKTech
- AMD VR Performance featuring Sapphire @ Kitguru
Subject: Graphics Cards | April 11, 2016 - 03:23 PM | Ryan Shrout
Tagged: rtg, radeon technologies group, radeon, driver, crimson, amd
For longer than AMD would like to admit, Radeon drivers and software were often criticized for issues plaguing performance, stability, and features. As the graphics card market evolved and software became a critical part of the equation, that deficit hurt AMD substantially.
In fact, despite the advantages that modern AMD Radeon parts typically have over GeForce options in terms of pure frame rate for your dollar, I recommended an NVIDIA GeForce GTX 970, 980 and 980 Ti for our three different VR Build Guides last month ($900, $1500, $2500) in large part due to confidence in NVIDIA’s driver team to continue delivering updated drivers to provide excellent experiences for gamers.
But back in September of 2015 we started to see changes inside AMD. There was a drastic reorganization of the company and the people in charge. AMD set up the Radeon Technologies Group, a new entity inside the organization that would have complete control over the company's graphics hardware and software direction. And it put one of the most respected people in the industry at its helm: Raja Koduri. On November 24th AMD launched Radeon Software Crimson, a totally new branding, style and implementation for controlling your Radeon GPU. I talked about it at the time, but the upgrade was noticeable: everything was faster, easier to find and… pretty.
Since then, AMD has rolled out several new drivers with key feature additions, improvements and, of course, game performance increases. Thus far in 2016 the Radeon Technologies Group has released 7 new drivers, three of which have been WHQL certified. Compare that to the same period last year: AMD released just one driver, total, in Q1 of 2015, and zero WHQL-certified drivers.
Maybe most important of all, the team at Radeon Technologies Group claims to be putting a new emphasis on “day one” support for major PC titles. If implemented correctly, this gives enthusiasts and PC gamers that want to stay on the cutting edge of releases the ability to play optimized titles on the day of release. Getting updated drivers that fix bugs and improve performance weeks or months after release is great, but for gamers that may already be done with that game, the updates are worthless. AMD was guilty of this practice for years, having driver updates that would fix performance issues on Radeon hardware for reviewer testing but that missed the majority of the play time of early adopting consumers.
Thus far, AMD has only just started down this path. Newer games like Far Cry Primal, The Division, Hitman and Ashes of the Singularity all had drivers from AMD on or before release with performance improvements, CrossFire profiles or both. A few others were CLOSE to day one ready, including Rise of the Tomb Raider, Plants vs. Zombies: Garden Warfare 2 and Gears of War Ultimate Edition.
| Game | Release Date | First Driver Mention | Driver Date | Feature / Support |
| --- | --- | --- | --- | --- |
| Rise of the Tomb Raider | 01-28-2016 | 16.1.1 | 02-05-2016 | Performance and CrossFire Profile |
| Plants vs. Zombies: Garden Warfare 2 | 02-23-2016 | 16.2.1 | 03-01-2016 | Performance |
| Gears of War Ultimate Edition | 03-01-2016 | 16.3 | 03-10-2016 | Performance |
| Far Cry Primal | 03-01-2016 | 16.2.1 | 03-01-2016 | CrossFire Profile |
| The Division | 03-08-2016 | 16.1 | 02-25-2016 | CrossFire Profile |
| Hitman | 03-11-2016 | 16.3 | 03-10-2016 | Performance, CrossFire Profile |
| Need for Speed | 03-15-2016 | 16.3.1 | 03-18-2016 | Performance, CrossFire Profile |
| Ashes of the Singularity | 03-31-2016 | 16.2 | 02-25-2016 | Performance |
AMD claims that the push for this “day one” experience will continue going forward, pointing at a 35% boost in performance in Quantum Break between Radeon Crimson 16.3.2 and 16.4.1. There will be plenty of opportunities in the coming weeks and months to test AMD (and NVIDIA) on this “day one” focus with PC titles that will have support for DX12, UWP and VR.
The software team at RTG has also added quite a few interesting features since the release of the first Radeon Crimson driver. Support for the Vulkan API and a DX12 capability called Quick Response Queue, along with new additions to the Radeon settings (Per-game display scaling, CrossFire status indicator, power efficiency toggle, etc.) are just a few.
Critically for consumers buying into VR, the Radeon Crimson drivers launched with support for the Oculus Rift and HTC Vive alongside those headsets. Both of these new virtual reality systems put significant strain on the GPU of a modern PC, and properly implementing support for techniques like timewarp is crucial to enabling a good user experience. Though Oculus and HTC / Valve were using NVIDIA-based systems more or less exclusively during our time at the Game Developers Conference last month, AMD still has approved platforms and software from both vendors. In fact, in a recent change to the HTC Vive minimum specifications, Valve retroactively added the Radeon R9 280 to the list, giving AMD a slight edge in component pricing.
AMD was also the first to enable full support for external graphics solutions like the Razer Core external enclosure in its drivers with XConnect. We wrote about that release in early March, and I’m eager to get my hands on a product combo to give it a shot. As of this writing and after talking with Razer, NVIDIA had still not fully implemented external GPU functionality for hot/live device removal.
When looking for some acceptance metric, AMD did point us to a survey they ran to measure approval of and satisfaction with Crimson. After 1,700+ submissions, the score customers gave them was a 4.4 out of 5.0 - pretty significant praise, even coming from AMD customers. We don't know exactly how the poll was run or where it was posted, but the Crimson driver release has definitely improved the perception of Radeon drivers among many enthusiasts.
I’m not going to sit here and try to convince everyone that AMD is absolved of past sins and that we should immediately be converted into believers. What I can say is that the Radeon Technologies Group is moving in the right direction, down a path that shows a change in leadership and a change in mindset. I talked in September about the respect I had for Raja Koduri and interviewed him after AMD’s Capsaicin event at GDC; you can already start to see the changes he is making inside this division. He has put a priority on software: not just making it look pretty, but promising to make good on proper multi-GPU support, improved timeliness of releases and innovative features. AMD and RTG still have a ways to go before they can unwind years of negativity, but the groundwork is there.
The company and every team member have a sizeable task ahead of them as we approach the summer. The Radeon Technologies Group will depend on the Polaris architecture and its products to swing the pendulum back against NVIDIA, gaining market share, mind share and respect. From what we have seen, Polaris looks impressive and differentiates itself from Hawaii and Fiji fairly dramatically. But this product was already well baked before Raja got total control, and we might have to see another generation pass before the portfolio of GPUs reflects the changes inside the institution. NVIDIA isn’t sitting idle, either: the Pascal architecture also promises improved performance, while leaning on the work and investment in software and drivers that have gotten the company to the dominant market-leader position it holds today.
I’m looking forward to working with AMD throughout 2016 on what promises to be an exciting and market-shifting time period.
Subject: Graphics Cards | April 11, 2016 - 01:04 AM | Scott Michaud
Tagged: nvidia, vulkan, graphics drivers
This is not a main-line, WHQL driver. This is not even a mainstream beta driver. The beta GeForce 364.91 drivers (364.16 on Linux) are only available on the NVIDIA developer website, which, yes, is publicly accessible, but should probably not be installed unless you are intending to write software and every day counts. Also, some who have installed it claim that certain Vulkan demos stop working. I'm not sure whether that means the demo is out-of-date due to a rare conformance ambiguity, the driver has bugs, or the reports themselves are simply unreliable.
That said, if you are a software developer, and you don't mind rolling back if things go awry, you can check out the new version at NVIDIA's website. It updates Vulkan to 1.0.8, which consists of documentation fixes and conformance tweaks. These things happen over time. In fact, the initial Vulkan release was actually Vulkan 1.0.3, if I remember correctly.
The driver also addresses issues with Vulkan and NVIDIA Optimus technologies, which is interesting. Optimus controls which GPU acts as primary in a laptop, switching between the discrete NVIDIA one and the Intel integrated one, depending on load and power. Vulkan and DirectX 12, however, expose all GPUs to the system. I'm curious how NVIDIA knows whether to sleep one or the other, and what that would look like to software that enumerates all compatible devices. Would it omit listing one of the GPUs? Or would it allow the software to wake the system out of Optimus should it want more performance?
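For illustration, here is roughly what that enumeration looks like from an application's side, using the standard Vulkan instance and physical-device calls. This is a generic sketch, not NVIDIA-specific code; what Optimus does to this list is exactly the open question.

```cpp
// Minimal sketch: enumerate the physical devices a Vulkan driver exposes.
// Under Optimus, the open question is whether both the Intel iGPU and the
// NVIDIA dGPU appear here, and which one a power-saving configuration hides.
#include <vulkan/vulkan.h>
#include <cstdio>
#include <vector>

int main() {
    VkApplicationInfo app = {};
    app.sType = VK_STRUCTURE_TYPE_APPLICATION_INFO;
    app.apiVersion = VK_API_VERSION_1_0;

    VkInstanceCreateInfo info = {};
    info.sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO;
    info.pApplicationInfo = &app;

    VkInstance instance;
    if (vkCreateInstance(&info, nullptr, &instance) != VK_SUCCESS)
        return 1;

    // First call gets the count, second fills the list.
    uint32_t count = 0;
    vkEnumeratePhysicalDevices(instance, &count, nullptr);
    std::vector<VkPhysicalDevice> devices(count);
    vkEnumeratePhysicalDevices(instance, &count, devices.data());

    for (VkPhysicalDevice dev : devices) {
        VkPhysicalDeviceProperties props;
        vkGetPhysicalDeviceProperties(dev, &props);
        printf("Found: %s (type %d)\n", props.deviceName, (int)props.deviceType);
    }

    vkDestroyInstance(instance, nullptr);
    return 0;
}
```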
Anywho, the driver is available now, but you should probably wait for official releases. The interesting thing is that this seems to mean NVIDIA will continue to release Vulkan drivers outside its mainstream channels. Hmm.
93% of a GP100 at least...
NVIDIA has announced the Tesla P100, the company's newest (and most powerful) accelerator for HPC. Based on the Pascal GP100 GPU, the Tesla P100 is built on 16nm FinFET and uses HBM2.
NVIDIA provided a comparison table, to which we added what we know about the full GP100:
| | Tesla K40 | Tesla M40 | Tesla P100 | Full GP100 |
| --- | --- | --- | --- | --- |
| GPU | GK110 (Kepler) | GM200 (Maxwell) | GP100 (Pascal) | GP100 (Pascal) |
| FP32 CUDA Cores / SM | 192 | 128 | 64 | 64 |
| FP32 CUDA Cores / GPU | 2880 | 3072 | 3584 | 3840 |
| FP64 CUDA Cores / SM | 64 | 4 | 32 | 32 |
| FP64 CUDA Cores / GPU | 960 | 96 | 1792 | 1920 |
| Base Clock | 745 MHz | 948 MHz | 1328 MHz | TBD |
| GPU Boost Clock | 810/875 MHz | 1114 MHz | 1480 MHz | TBD |
| Memory Interface | 384-bit GDDR5 | 384-bit GDDR5 | 4096-bit HBM2 | 4096-bit HBM2 |
| Memory Size | Up to 12 GB | Up to 24 GB | 16 GB | TBD |
| L2 Cache Size | 1536 KB | 3072 KB | 4096 KB | TBD |
| Register File Size / SM | 256 KB | 256 KB | 256 KB | 256 KB |
| Register File Size / GPU | 3840 KB | 6144 KB | 14336 KB | 15360 KB |
| TDP | 235 W | 250 W | 300 W | TBD |
| Transistors | 7.1 billion | 8 billion | 15.3 billion | 15.3 billion |
| GPU Die Size | 551 mm2 | 601 mm2 | 610 mm2 | 610 mm2 |
| Manufacturing Process | 28 nm | 28 nm | 16 nm | 16 nm |
This table is designed for developers interested in GPU compute, so a few variables (like ROPs) are still unknown, but it still gives us huge insight into the “big Pascal” architecture. The jump to 16 nm allows for about twice the number of transistors, 15.3 billion, up from 8 billion with GM200, in roughly the same die area, 610 mm2 versus 601 mm2.
A full GP100 processor will have 60 shader modules (SMs), compared to GM200's 24, although Pascal houses half as many FP32 CUDA cores per SM. The GP100 part listed in the table above is actually partially disabled, cutting off four of the sixty total. This leads to 3584 single-precision (32-bit) CUDA cores, up from 3072 in GM200. (The full GP100 will have 3840 of these FP32 CUDA cores -- but we don't know when or where we'll see that.) The base clock is also significantly higher than Maxwell's, 1328 MHz versus ~1000 MHz for the Titan X and 980 Ti, although Ryan has overclocked those GPUs to ~1390 MHz with relative ease. This is interesting, because even though 10.6 TeraFLOPs is amazing, it's only about 20% more than what GM200 could pull off with an overclock.
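To put rough numbers on that comparison: peak FP32 throughput is just cores × 2 operations (one fused multiply-add) × clock. A quick back-of-the-envelope sketch:

```cpp
// Back-of-the-envelope peak FP32 throughput: cores x 2 ops (FMA) x clock.
#include <cstdio>

double peak_tflops(int fp32_cores, double clock_ghz) {
    return fp32_cores * 2.0 * clock_ghz / 1000.0;  // result in TFLOPS
}

int main() {
    // Tesla P100 at its 1480 MHz boost clock: ~10.6 TFLOPS.
    printf("P100:         %.1f TFLOPS\n", peak_tflops(3584, 1.480));
    // GM200 (Titan X / 980 Ti class) overclocked to ~1390 MHz: ~8.5 TFLOPS.
    printf("GM200 @ 1.39: %.1f TFLOPS\n", peak_tflops(3072, 1.390));
}
```

That works out to roughly 10.6 TFLOPS against ~8.5 TFLOPS, which is where the roughly 20% figure comes from.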
Subject: Graphics Cards | April 5, 2016 - 03:57 PM | Sebastian Peak
Tagged: PCIe power, nvidia, low-power, GTX950, GTX 950 Low Power, graphics card, gpu, GeForce GTX 950, evga
EVGA has announced new low-power versions of the NVIDIA GeForce GTX 950, some of which do not require any PCIe power connection to work.
"The EVGA GeForce GTX 950 is now available in special low power models, but still retains all the performance intact. In fact, several of these models do not even have a 6-Pin power connector!"
With or without the PCIe power connector, all of these cards are full GTX 950s, with 768 CUDA cores and 2GB of GDDR5 memory. The primary difference is clock speeds, and EVGA provides a chart to illustrate which models still require PCIe power, as well as how they compare in performance.
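For context on why some models can drop the connector entirely, here is the power budget arithmetic involved. The 75 W slot and 6-pin figures are the PCI Express specification's limits, not EVGA's numbers:

```cpp
// PCIe power budget sketch: why a connector-less card must stay under 75 W.
#include <cstdio>

int main() {
    const int slot_watts = 75;  // max a PCIe x16 slot is specified to deliver
    const int six_pin    = 75;  // max from one 6-pin auxiliary connector
    printf("No connector: %d W budget\n", slot_watts);
    printf("With 6-pin:   %d W budget\n", slot_watts + six_pin);
}
```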
It looks like the links to the 75W (no PCIe power required) models aren't working just yet on EVGA's site. Doubtless we will soon have active listings for pricing and availability info.
Why things are different in VR performance testing
It has been an interesting several weeks, and I find myself in an interesting spot. Clearly, and without a shred of doubt, virtual reality, more than any other gaming platform that has come before it, needs an accurate measure of performance and experience. With traditional PC gaming, if you dropped a couple of frames, or saw a slightly out-of-sync animation, you might notice and get annoyed. But in VR, with a head-mounted display just inches from your face taking up your entire field of view, a hitched frame or a stutter in motion can completely ruin the immersive experience that the game developer is aiming to provide. Even worse, it could cause dizziness or nausea, defining your VR experience negatively and likely killing the excitement of the platform.
My conundrum, and the one that I think most of our industry rests in, is that we don’t yet have the tools and ability to properly quantify the performance of VR. In a market and a platform that so desperately need to get this RIGHT, we are at a point where we are just trying to get it AT ALL. I have read and seen some other glances at the performance of VR headsets like the Oculus Rift and the HTC Vive released today, but honestly, all are missing the mark at some level. Using tools built for traditional PC gaming environments just doesn’t work, and experiential reviews talk about what the gamer can expect to “feel” but lack the data and analysis to back it up and to help point the industry in the right direction to improve in the long run.
With final hardware from both Oculus and HTC / Valve in my hands for the last three weeks, I have, with the help of Ken and Allyn, been diving into the important question of HOW do we properly test VR? I will be upfront: we don’t have a final answer yet. But we have a direction. And we have some interesting results to show you that should prove we are on the right track. But we’ll need help from the likes of Valve, Oculus, AMD, NVIDIA, Intel and Microsoft to get it right. Based on a lot of discussion I’ve had in just the last 2-3 days, I think we are moving in the correct direction.
So why don’t our existing tools work for testing performance in VR? Things like Fraps, Frame Rating and FCAT have revolutionized performance evaluation for PCs – so why not VR? The short answer is that the gaming pipeline changes in VR with the introduction of two new SDKs: Oculus and OpenVR.
Though the two have differences, the key is that they intercept rendered frames between the GPU and the screen. When you attach an Oculus Rift or an HTC Vive to your PC, it does not show up as a display in your system; this is a change from the first developer kits from Oculus years ago. Now the headsets are driven by what’s known as “direct mode.” This mode offers improved user experiences and lets the Oculus and OpenVR runtimes handle quite a bit of functionality for game developers. It also means there are actions being taken on the rendered frames after the last point where we can monitor them. At least for today.
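To make that concrete, here is a hedged sketch of the hand-off in OpenVR terms: once an application calls Submit(), the compositor owns distortion, reprojection, and scan-out, which is precisely the region existing capture tools cannot see. This is written against the public OpenVR C++ headers (names like TextureType_OpenGL are from that API) and is illustrative, not taken from any shipping game.

```cpp
// Minimal OpenVR render-loop sketch: the compositor, not the app, owns the
// final presentation to the headset, so frame delivery can't be observed
// with display-capture tools after Submit().
#include <openvr.h>

void render_loop(vr::IVRCompositor* compositor,
                 void* left_eye_tex, void* right_eye_tex) {
    vr::TrackedDevicePose_t poses[vr::k_unMaxTrackedDeviceCount];

    while (true) {
        // Blocks until the compositor wants the next frame's poses.
        compositor->WaitGetPoses(poses, vr::k_unMaxTrackedDeviceCount,
                                 nullptr, 0);

        // ... render both eyes with your graphics API here ...

        vr::Texture_t left  = { left_eye_tex,  vr::TextureType_OpenGL,
                                vr::ColorSpace_Gamma };
        vr::Texture_t right = { right_eye_tex, vr::TextureType_OpenGL,
                                vr::ColorSpace_Gamma };

        // Hand the frames off; distortion, timewarp, and scan-out
        // happen inside the compositor from here on.
        compositor->Submit(vr::Eye_Left,  &left);
        compositor->Submit(vr::Eye_Right, &right);
    }
}
```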
Subject: Graphics Cards | April 5, 2016 - 06:13 AM | Tim Verry
Tagged: HPC, hbm, gpgpu, firepro s9300x2, firepro, dual fiji, deep learning, big data, amd
AMD recently announced a dual Fiji powerhouse for VR gamers that it is calling the Radeon Pro Duo. Now, AMD is bringing its latest GCN architecture and HBM memory to servers with the dual-GPU FirePro S9300 x2.
The new server-bound professional graphics card packs an impressive amount of compute hardware into a dual-slot card with passive cooling. The FirePro S9300 x2 combines two full Fiji GPUs clocked at 850 MHz for a total of 8,192 cores, 512 texture units, and 128 ROPs. Each GPU is paired with 4GB of non-ECC HBM memory on package, good for 512GB/s of memory bandwidth per GPU, which AMD combines to advertise this as the first professional graphics card with 1TB/s of memory bandwidth.
Due to its lower clockspeeds, the S9300 x2 has less peak single-precision compute performance than the consumer Radeon Pro Duo: 13.9 TFLOPS versus 16 TFLOPS on the desktop card. Businesses will be able to cram more cards into their rack-mounted servers, though, since they do not need to worry about mounting locations for the Radeon card's sealed-loop water cooling.
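Those headline figures follow directly from the specifications in the table below. A quick sanity-check sketch, assuming Fiji's usual 1/16-rate FP64 and two FLOPs per core per clock for FMA:

```cpp
// Sanity-check of the headline numbers from the spec table below.
#include <cstdio>

int main() {
    // HBM bandwidth per GPU: 4096-bit bus x 500 MHz x 2 (DDR) / 8 bits-per-byte.
    double gbs_per_gpu = 4096.0 * 500e6 * 2.0 / 8.0 / 1e9;   // 512 GB/s
    printf("Per-GPU bandwidth: %.0f GB/s (x2 = %.0f GB/s)\n",
           gbs_per_gpu, 2.0 * gbs_per_gpu);

    // Peak FP32: 8192 cores x 2 ops (FMA) x 850 MHz.
    double fp32_tflops = 8192.0 * 2.0 * 850e6 / 1e12;        // ~13.9 TFLOPS
    printf("Peak FP32: %.1f TFLOPS\n", fp32_tflops);

    // Fiji runs FP64 at 1/16 the FP32 rate: ~870 GFLOPS.
    printf("Peak FP64: %.0f GFLOPS\n", fp32_tflops * 1000.0 / 16.0);
}
```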
| | FirePro S9300 x2 | Radeon Pro Duo | R9 Fury X | FirePro S9170 |
| --- | --- | --- | --- | --- |
| GPU | Dual Fiji | Dual Fiji | Fiji | Hawaii |
| GPU Cores | 8192 (2 x 4096) | 8192 (2 x 4096) | 4096 | 2816 |
| Rated Clock | 850 MHz | 1050 MHz | 1050 MHz | 930 MHz |
| Texture Units | 2 x 256 | 2 x 256 | 256 | 176 |
| ROP Units | 2 x 64 | 2 x 64 | 64 | 64 |
| Memory | 8GB (2 x 4GB) | 8GB (2 x 4GB) | 4GB | 32GB ECC |
| Memory Clock | 500 MHz | 500 MHz | 500 MHz | 5000 MHz |
| Memory Interface | 4096-bit (HBM) per GPU | 4096-bit (HBM) per GPU | 4096-bit (HBM) | 512-bit |
| Memory Bandwidth | 1TB/s (2 x 512GB/s) | 1TB/s (2 x 512GB/s) | 512 GB/s | 320 GB/s |
| TDP | 300 watts | ? | 275 watts | 275 watts |
| Peak Compute | 13.9 TFLOPS | 16 TFLOPS | 8.60 TFLOPS | 5.24 TFLOPS |
AMD is aiming this card at datacenter and HPC users working on "big data" tasks that do not require the accuracy of double-precision floating point calculations. Deep learning tasks, seismic processing, and data analytics are all examples AMD says the dual-GPU card will excel at. These are all tasks that can be greatly accelerated by the massively parallel nature of a GPU but do not need to be as precise as the stricter mathematics, modeling, and simulation work that depends on FP64 performance. In that respect, the FirePro S9300 x2 offers only 870 GFLOPS of double-precision compute performance.
Further, this card supports a GPGPU-optimized Linux driver stack under AMD's GPUOpen initiative, and developers can program for it using either OpenCL (it supports OpenCL 1.2) or C++. AMD PowerTune and the return of FP16 support are also features. AMD claims that its new dual-GPU card is twice as fast as the NVIDIA Tesla M40 (1.6x the K80) and 12 times as fast as the latest Intel Xeon E5 in peak single-precision floating point performance.
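As a hedged illustration of the OpenCL side, a dual-GPU board like this should simply appear to a host program as two GPU devices; the names and counts below are whatever the driver reports, and nothing S9300 x2-specific is assumed:

```cpp
// Minimal OpenCL 1.2 sketch: a dual-GPU board like the S9300 x2 shows up
// as two separate compute devices to the host.
#include <CL/cl.h>
#include <cstdio>

int main() {
    cl_platform_id platform;
    clGetPlatformIDs(1, &platform, nullptr);

    cl_device_id devices[8];
    cl_uint num_devices = 0;
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 8, devices, &num_devices);

    for (cl_uint i = 0; i < num_devices; ++i) {
        char name[256];
        cl_uint cus;
        clGetDeviceInfo(devices[i], CL_DEVICE_NAME, sizeof(name), name, nullptr);
        clGetDeviceInfo(devices[i], CL_DEVICE_MAX_COMPUTE_UNITS,
                        sizeof(cus), &cus, nullptr);
        printf("Device %u: %s, %u compute units\n", i, name, cus);
    }
}
```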
The double slot card is powered by two PCI-E power connectors and is rated at 300 watts. This is a bit more palatable than the triple 8-pin needed for the Radeon Pro Duo!
The FirePro S9300 x2 comes with a 3-year warranty and will be available in the second half of this year for $6000 USD. You are definitely paying a premium for the professional certifications and support. Here's hoping developers come up with some cool uses for the dual 8.9-billion-transistor GPUs and their HBM memory!
Subject: Graphics Cards | April 4, 2016 - 01:00 PM | Sebastian Peak
Tagged: workstation, VR, virtual reality, quadro, NVIDIA Quadro M5500, nvidia, msi, mobile workstation, enterprise
NVIDIA's VR Ready program, which is designed to inform users which GeForce GTX GPUs “deliver an optimal VR experience”, has moved to enterprise with a new program aimed at NVIDIA Quadro GPUs and related systems.
“We’re working with top OEMs such as Dell, HP and Lenovo to offer NVIDIA VR Ready professional workstations. That means models like the HP Z Workstation, Dell Precision T5810, T7810, T7910, R7910, and the Lenovo P500, P710, and P910 all come with NVIDIA-recommended configurations that meet the minimum requirements for the highest performing VR experience.
Quadro professional GPUs power NVIDIA professional VR Ready systems. These systems put our VRWorks software development kit at the fingertips of VR headset and application developers. VRWorks offers exclusive tools and technologies — including Context Priority, Multi-res Shading, Warp & Blend, Synchronization, GPU Affinity and GPU Direct — so pro developers can create great VR experiences.”
Partners include Dell, HP, and Lenovo, with new workstations featuring NVIDIA professional VR Ready certification.
The desktop isn't the only space for workstations, and in this morning's announcement NVIDIA and MSI are introducing the WT72 mobile workstation, “the first NVIDIA VR Ready professional laptop”:
"The MSI WT72 VR Ready laptop is the first to use our new Maxwell architecture-based Quadro M5500 GPU. With 2,048 CUDA cores, the Quadro M5500 is the world’s fastest mobile GPU. It’s also our first mobile GPU for NVIDIA VR Ready professional mobile workstations, optimized for VR performance with ultra-low latency."
Here are the specs for the WT72 6QN:
- GPU: NVIDIA Quadro M5500 3D (8GB GDDR5)
- CPU Options:
- Xeon E3-1505M v5
- Core i7-6920HQ
- Core i7-6700HQ
- Chipset: CM236
- Memory Options:
- 64GB ECC DDR4 2133 MHz (Xeon)
- 32GB DDR4 2133 MHz (Core i7)
- Storage: Super RAID 4, 256GB SSD + 1TB SATA 7200 rpm
- Display Options:
- 17.3” UHD 4K (Xeon, i7-6920HQ)
- 17.3” FHD Anti-Glare IPS (i7-6700HQ)
- LAN: Killer Gaming Network E2400
- Optical Drive: BD Burner
- I/O: Thunderbolt, USB 3.0 x6, SDXC card reader
- Webcam: FHD type (1080p/30)
- Speakers: Dynaudio Tech Speakers 3Wx2 + Subwoofer
- Battery: 9 cell
- Dimensions: 16.85” x 11.57” x 1.89”
- Weight: 8.4 lbs
- Warranty: 3-year limited
Pricing:
- Xeon E3-1505M v5 model: $6899
- Core i7-6920HQ model: $6299
- Core i7-6700HQ model: $5499
No doubt we will see details of other Quadro VR Ready workstations as GTC unfolds this week.
Subject: Graphics Cards | March 30, 2016 - 06:58 AM | Tim Verry
Tagged: maxwell, gtx 950, GM206, asus
Asus is launching a new midrange gaming graphics card clad in arctic camouflage. The Echelon GTX 950 Limited Edition is a Maxwell-based card that will come factory overclocked and paired with Asus features normally reserved for their higher end cards.
This dual-slot, dual-fan graphics card features “auto-extreme technology,” which is Asus marketing speak for high-end capacitors, chokes, and other components. Further, the card uses a DirectCU II cooler that Asus claims offers 20% better cooling performance while running three times quieter than the NVIDIA reference cooler. Asus tweaked the shroud on this card to resemble a white and gray arctic camouflage design. There is also a reinforced backplate that continues the stealthy camo theme.
I/O on the Echelon GTX 950 Limited Edition includes:
- 1 x DVI-D
- 1 x DVI-I
- 1 x HDMI 2.0
- 1 x DisplayPort
The card supports NVIDIA’s G-Sync technology, and the inclusion of an HDMI 2.0 port allows it to be used in an HTPC/gaming PC build for the living room, though case selection would be limited since it’s a larger dual-slot card.
Beneath the stealthy exterior, Asus conceals a GM206-derived GTX 950 GPU with 768 CUDA cores, 48 texture units, and 32 ROPs, as well as 2GB of GDDR5 memory. Out of the box, users have two factory overclocks to choose from, which Asus calls Gaming and Overclock (OC) modes. In Gaming Mode, the Echelon GTX 950 GPU is clocked at 1,140 MHz base and 1,329 MHz boost. Turning the card to OC Mode increases clockspeeds further, to 1,165 MHz base and 1,355 MHz boost.
For reference, the, well, reference GTX 950 clockspeeds are 1,024 MHz base and 1,186 MHz boost.
Asus also ever-so-slightly overclocked the GDDR5 memory to 6,610 MHz, which is unfortunately a mere 10 MHz over reference. The memory sits on a 128-bit bus, and while a factory overclock is nice to see, the transfer speed increase will be minimal at best.
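To quantify "minimal": memory bandwidth is just the effective data rate times the 128-bit bus width, so the 10 MHz bump is worth well under 1 GB/s. A quick sketch:

```cpp
// Memory bandwidth on a 128-bit bus: effective data rate x bus width / 8.
#include <cstdio>

int main() {
    const double bus_bytes = 128.0 / 8.0;  // 16 bytes per transfer
    printf("Reference (6,600 MHz eff.): %.1f GB/s\n", 6.600 * bus_bytes);
    printf("Echelon   (6,610 MHz eff.): %.1f GB/s\n", 6.610 * bus_bytes);
}
```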
In our review of the GTX 950, which focused on the Asus Strix variant, Ryan found it to be a good option for 1080p gamers wanting a bit more graphical prowess than the 750 Ti offers.
Maximum PC reports that the camo-clad Echelon GTX 950 will be available at the end of the month. Pricing has not been released by Asus, but I would expect this card to come with an MSRP of around $180 USD.