AMD Radeon Crimson Edition 16.4.2 hits the streets

Subject: Graphics Cards | April 25, 2016 - 08:41 PM |
Tagged: graphics driver, crimson, amd

AMD's new Crimson driver has just been released with new features, including official support for the new Radeon Pro Duo as well as both the Oculus Rift and HTC Vive VR headsets.  It also adds enhanced support for AMD's XConnect technology for external GPUs connected via a Thunderbolt 3 interface.  CrossFire profile updates cover Hitman, Elite Dangerous, and Need for Speed, and AMD has also resolved the ongoing issue with the internal update procedure failing to detect the newest drivers.  If you are having issues with games crashing to the desktop on launch, you will unfortunately still need to disable the AMD Gaming Evolved overlay.

Get 'em right here!

radeon-crimson.jpg

"The latest version of Radeon Software Crimson Edition is here with 16.4.2. With this version, AMD delivers many quality improvements, updated/introduced new CrossFire profiles and delivered full support for AMD’s XConnect technology (including plug’n’play simplicity for Thunderbolt 3 eGFX enclosures configured with Radeon R9 Fury, Nano or 300 Series GPUs.)  Best of all, our DirectX 12 leadership continues to be strong, as shown by the performance numbers below."

Capture.JPG

Source: AMD

Report: NVIDIA GP104 Die Pictured; GTX 1080 Does Not Use HBM

Subject: Graphics Cards | April 22, 2016 - 02:16 PM |
Tagged: rumor, report, pascal, nvidia, leak, graphics card, gpu, gddr5x, GDDR5

According to a report from VideoCardz (via Overclock.net/ChipHell), high quality images have leaked of the upcoming GP104 die, the GPU expected to power both the GeForce GTX 1080 and GTX 1070 graphics cards.

NVIDIA-GP104-GPU.jpg

Image credit: VideoCardz.com

"This GP104-200 variant is supposedly planned for GeForce GTX 1070. Although it is a cut-down version of GP104-400, both GPUs will look exactly the same. The only difference being modified GPU configuration. The high quality picture is perfect material for comparison."

A couple of interesting things emerge from this die shot: the relatively small size of the GPU (estimated at 333 mm²), and the assumption, based on a previously leaked photo of the die on a PCB, that it will use conventional GDDR5 memory.

NVIDIA-Pascal-GP104-200-on-PCB.jpg

Alleged photo of GP104 using GDDR5 memory (Image credit: VideoCardz via ChipHell)

"Leaker also says that GTX 1080 will feature GDDR5X memory, while GTX 1070 will stick to GDDR5 standard, both using 256-bit memory bus. Cards based on GP104 GPU are to be equipped with three DisplayPorts, HDMI and DVI."

While this is no doubt disappointing to those anticipating HBM with the upcoming Pascal consumer GPUs, the move isn't all that surprising considering the consistent rumors that GTX 1080 would use GDDR5X.
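
For context, peak memory bandwidth is simply bus width times effective data rate. Here is a rough sketch using typical 2016-era figures; the data rates are illustrative assumptions, since the GP104 cards' actual memory clocks were unconfirmed at the time.

```python
# Back-of-the-envelope GDDR5 / GDDR5X / HBM comparison.
# Peak bandwidth (GB/s) = (bus width in bits / 8) * data rate in GT/s.
# Data rates are typical 2016-era figures, assumed for illustration.

def peak_bandwidth_gbs(bus_width_bits, data_rate_gtps):
    """Theoretical peak memory bandwidth in GB/s."""
    return bus_width_bits / 8 * data_rate_gtps

configs = [
    ("GDDR5, 256-bit @ 7 GT/s (GTX 980)", 256, 7.0),
    ("GDDR5, 256-bit @ 8 GT/s",           256, 8.0),
    ("GDDR5X, 256-bit @ 10 GT/s",         256, 10.0),
    ("HBM, 4096-bit @ 1 GT/s (Fury X)",   4096, 1.0),
]

for name, bus, rate in configs:
    print(f"{name}: {peak_bandwidth_gbs(bus, rate):.0f} GB/s")
```

Even at launch-era GDDR5X rates, a 256-bit bus lands well short of HBM's 512 GB/s, which is why the core, not the memory type, has to carry the performance argument.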

Is the lack of HBM (or HBM2) enough to make you skip this generation of GeForce GPU? This author points out that AMD's Fury X - the first GPU to use HBM - was still unable to beat a GTX 980 Ti in many tests, even though the 980 Ti uses conventional GDDR5. Memory is obviously important, but the core defines the performance of the GPU.

If NVIDIA has made improvements to performance and efficiency we should see impressive numbers, but this might be a more iterative update than originally expected - which only gives AMD more of a chance to win marketshare with their upcoming Radeon 400-series GPUs. It should be an interesting summer.

Source: VideoCardz

Zotac Releases PCI-E x1 Version of NVIDIA GT 710 Graphics Card

Subject: Graphics Cards | April 21, 2016 - 12:37 PM |
Tagged:

Zotac has released a new variant of the low-power NVIDIA GeForce GT 710, and while that wouldn't normally be news, this card has a very important distinction: its PCI-E x1 interface.

zt-71304-20l_image1.jpg

With a single-slot design, low-profile readiness (a pair of brackets is included), and that PCI-E x1 interface, this card can go places where GPUs have never been able to go (AFAIK). Granted, you won't be doing much gaming on a GT 710, which features 192 CUDA cores and 1GB of DDR3 memory, but this card does provide support for up to 3 monitors via its DVI, HDMI, and VGA outputs.

A PCI-E x1 GPU would certainly provide some interesting options for ultra-compact systems such as those based on thin mini-ITX, which does not offer a full-length PCI Express slot, or for adding extra monitor support to business machines that offer only a single PCI-E x16 slot but have an x1 slot available.
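
To put that x1 link in perspective (the GT 710's GK208 GPU is a PCIe 2.0 part), per-direction PCI Express bandwidth scales linearly with lane count. A quick sketch:

```python
# Approximate per-direction PCI Express bandwidth in GB/s per lane.
# Gen1/Gen2 use 8b/10b encoding; Gen3's 128b/130b gives the odd rate.
PCIE_LANE_GBS = {1: 0.25, 2: 0.5, 3: 0.985}

def link_bandwidth_gbs(gen, lanes):
    return PCIE_LANE_GBS[gen] * lanes

# A single Gen2 lane gives the GT 710 ~0.5 GB/s -- a fraction of a full
# x16 slot, but plenty for desktop duty on a card with only ~12.8 GB/s
# of local DDR3 bandwidth (64-bit bus at 1600 MT/s) to begin with.
print(f"x1  Gen2: {link_bandwidth_gbs(2, 1):.2f} GB/s")
print(f"x16 Gen2: {link_bandwidth_gbs(2, 16):.1f} GB/s")
print(f"x16 Gen3: {link_bandwidth_gbs(3, 16):.1f} GB/s")
```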

zt-71304-20l_image5.jpg

Specifications from Zotac:

Zotac ZT-71304-20L

  • GPU: GeForce GT 710
  • CUDA cores: 192
  • Video Memory: 1GB DDR3
  • Memory Bus: 64-bit
  • Engine Clock: 954 MHz
  • Memory Clock: 1600 MHz
  • PCI Express: PCI-E x1
  • Display Outputs: DL-DVI, VGA, HDMI
  • HDCP Support: Yes
  • Multi Display Capability: 3
  • Recommended Power Supply: 300W
  • Power Consumption: 25W
  • Power Input: N/A
  • API Support: DirectX 12 (feature level 11_0), OpenGL 4.5
  • Cooling: Passive
  • Slot Size: Single Slot
  • SLI: N/A
  • Supported OS: Windows 10 / 8 / 7 / Vista / XP
  • Card Dimensions: 146.05mm x 111.15mm
  • Accessories: 2x Low-profile I/O brackets, Driver Disk, User Manual

zt-71304-20l_image7.jpg

The card, which is listed with the model ZT-71304-20L, has not yet appeared on any U.S. sites for purchase (that I can find, anyway), so we will have to wait to see where pricing will be.

Source: Zotac

Sony plans PlayStation NEO with massive APU hardware upgrade

Subject: Graphics Cards, Processors | April 19, 2016 - 03:21 PM |
Tagged: sony, ps4, Playstation, neo, giant bomb, APU, amd

Based on a new report from Giant Bomb, Sony is set to release a new console this year with upgraded processing power and a focus on 4K capabilities, code-named NEO. We have been hearing for several weeks that both Microsoft and Sony were planning partial-generation upgrades, but details of Sony's update now appear to be leaking out in earnest, if you believe the reports.

Giant Bomb isn't known for tossing around speculation and tends to only report details it can safely confirm. Austin Walker says "multiple sources have confirmed for us details of the project, which is internally referred to as the NEO." 

ps4gpu.jpg

The current PlayStation 4 APU
Image source: iFixIt.com

There are plenty of interesting details in the story, including Sony's determination to not split the user base with multiple consoles by forcing developers to have a mode for the "base" PS4 and one for NEO. But most interesting to us is the possible hardware upgrade.

The NEO will feature a higher clock speed than the original PS4, an improved GPU, and higher bandwidth on the memory. The documents we've received note that the HDD in the NEO is the same as that in the original PlayStation 4, but it's not clear if that means in terms of capacity or connection speed.

...

Games running in NEO mode will be able to use the hardware upgrades (and an additional 512 MiB in the memory budget) to offer increased and more stable frame rate and higher visual fidelity, at least when those games run at 1080p on HDTVs. The NEO will also support 4K image output, but games themselves are not required to be 4K native.

Giant Bomb even has details on the architectural changes.

                     Shipping PS4                  PS4 "NEO"
CPU                  8 Jaguar cores @ 1.6 GHz      8 Jaguar cores @ 2.1 GHz
GPU                  AMD GCN, 18 CUs @ 800 MHz     AMD GCN+, 36 CUs @ 911 MHz
Stream Processors    1152 SPs (~HD 7870 equiv.)    2304 SPs (~R9 390 equiv.)
Memory               8GB GDDR5 @ 176 GB/s          8GB GDDR5 @ 218 GB/s

(We actually did a full video teardown of the PS4 on launch day!)

If the Compute Unit count is right from the GB report, then the PS4 NEO system will have 2,304 stream processors running at 911 MHz, giving it performance nearing that of a consumer Radeon R9 390 graphics card. The R9 390 has 2,560 SPs running at around 1.0 GHz, so while the NEO would be slower, it would be a substantial upgrade over the current PS4 hardware and the Xbox One. Memory bandwidth on NEO is still much lower than a desktop add-in card (218 GB/s vs 384 GB/s).
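
Those equivalencies are easy to sanity-check: GCN packs 64 stream processors per compute unit, and each SP retires one fused multiply-add (2 FLOPs) per clock. A quick sketch using the leaked figures:

```python
# Peak FP32 throughput for GCN parts: CUs * 64 SPs * 2 FLOPs * clock.
def gcn_tflops(compute_units, clock_mhz):
    return compute_units * 64 * 2 * clock_mhz * 1e6 / 1e12

print(f"PS4:     {gcn_tflops(18, 800):.2f} TFLOPS")   # ~1.84
print(f"PS4 NEO: {gcn_tflops(36, 911):.2f} TFLOPS")   # ~4.20
print(f"R9 390:  {gcn_tflops(40, 1000):.2f} TFLOPS")  # ~5.12
```

That works out to roughly 2.3x the original console, and about 82% of a desktop R9 390.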

DSC02539.jpg

Could Sony's NEO platform rival the R9 390?

If the NEO hardware is based on the Grenada / Hawaii GPU design, there are some interesting questions to ask. With the push into 4K that we expect with the upgraded PlayStation, it would be painful if the GPU didn't natively support HDMI 2.0 (4K @ 60 Hz). Given the modularity of current semi-custom APU designs, it is likely that AMD could swap the display controller on NEO for one that supports HDMI 2.0, even though no shipping consumer graphics card in the 300-series does so. 

It is also POSSIBLE that NEO is based on the upcoming AMD Polaris GPU architecture, which supports HDR and HDMI 2.0 natively. That would be a much more impressive feat for both Sony and AMD, as we have yet to see Polaris released in any consumer GPU. Couple that with the variables of 14/16nm FinFET process production and you have a complicated production pipeline that would need significant monitoring. Polaris would potentially lower cost on the build side and lower power consumption for the NEO device, but I would be surprised if Sony wanted to take a chance on the first generation of tech from AMD / Samsung / GlobalFoundries.

However, if you look at recent rumors swirling about the June announcement of the Radeon R9 480 using the Polaris architecture, it is said to have 2,304 stream processors, perfectly matching the NEO specs above.

polaris-5.jpg

New features of the AMD Polaris architecture due this summer

There is a lot Sony and game developers could do with roughly twice the GPU compute capability on a console like NEO. This could make PlayStation VR a much more comparable platform to the Oculus Rift and HTC Vive, though the necessity to keep working on the original PS4 platform might hinder the upgrade path. 

The other obvious use is to upgrade the image quality and/or rendering resolution of current games and games in development, or simply to improve the frame rate, an area where many current-generation console titles have been slipping.

In the documents we’ve received, Sony offers suggestions for reaching 4K/UltraHD resolutions for NEO mode game builds, but they're also giving developers a degree of freedom with how to approach this. 4K TV owners should expect the NEO to upscale games to fit the format, but one place Sony is unwilling to bend is on frame rate. Throughout the documents, Sony repeatedly reminds developers that the frame rate of games in NEO Mode must meet or exceed the frame rate of the game on the original PS4 system.

There is still plenty to read in the Giant Bomb report, and I suggest you head over and do so. If you thought the summer was going to be interesting solely because of new GPU releases from AMD and NVIDIA, it appears that Sony and Microsoft have their own agenda as well.

Source: Giant Bomb

Report: NVIDIA GTX 1080 GPU Cooler Pictured

Subject: Graphics Cards | April 19, 2016 - 03:08 PM |
Tagged: rumor, report, nvidia, leak, GTX 1080, graphics card, gpu, geforce

Another reported photo of an upcoming GTX 1080 graphics card has appeared online, this time via a post on Baidu.

GTX1080.jpg

(Image credit: VR-Zone, via Baidu)

The image is the typical low-resolution fare, with the slightly soft focus we've come to expect from alleged leaks. That doesn't mean it isn't legitimate, and this isn't the first time we have seen this design. The image also appears to show only the cooler, without an actual graphics card board underneath.

We have reported on the upcoming GPU rumored to be named "GTX 1080" in the recent past, and while no official announcement has been made it seems safe to assume that a successor to the current 900-series GPUs is forthcoming.

Source: VR-Zone

NVIDIA Releases 364.96 Hotfix Driver

Subject: Graphics Cards | April 14, 2016 - 10:44 PM |
Tagged: nvidia, graphics drivers

The GeForce 364.xx line of graphics drivers hasn't been smooth for NVIDIA. Granted, they tried to merge Vulkan support into their main branch at the same time as several new games, including DirectX 12 ones, launched. It was probably a very difficult period for NVIDIA, but WHQL-certified drivers should be better than this.

nvidia-2015-bandaid.png

Regardless, they're trying, and today they released GeForce Hot Fix Driver 364.96. Some of the early reactions mock NVIDIA for listing “Support for DOOM Open Beta” as the only feature of a “hotfix” driver, but I don't see the problem. It's entirely possible that the current drivers have a known issue with the DOOM Open Beta and thus require a hotfix. It's not necessarily “just a profile,” and profiles aren't the whole of what a hardware vendor does to support a new title anyway.

But anyway, Manuel Guzman, one of the faces of NVIDIA Customer Care, also says that this driver includes fixes for FPS drops in Dark Souls 3. According to some forum-goers, despite its numbering, it also does not contain the Vulkan updates from 364.91. This is probably a good thing, because it would be a bit silly to merge developer-branch features into a customer driver whose only intent is to solve problems before an official driver can be certified. I mean, that's like patching a flat tire, then drilling a hole in one of the good ones to mess around with it, too.

The GeForce 364.96 Hotfix Drivers are available at NVIDIA's website. If you're having problems, then it might be your solution. Otherwise? Wait until NVIDIA has an official release (or you start getting said problems).

DigitalFoundry Dissects Quantum Break (and I rant)

Subject: Graphics Cards | April 14, 2016 - 10:17 PM |
Tagged: microsoft, windows 10, uwp, DirectX 12, dx12

At the PC Gaming Conference from last year's E3 Expo, Microsoft announced that they were looking to bring more first-party titles to Windows. They used to be one of the better PC gaming publishers, back in the Mechwarrior 4 and earlier Flight Simulator days, but they got distracted as Xbox 360 rose and Windows Vista fell.

microsoft-2016-quantumbreak-logo.jpg

Again, part of that is because they attempted to push users to Windows Vista and Games for Windows Live, holding back troubled titles like Halo 2: Vista and technologies like DirectX 10 from Windows XP, which drove users to Valve's then-small Steam platform. Epic Games was also a canary in the coal mine at that time, warning users that Microsoft was considering certification for Games for Windows Live, which threatened mod support “because Microsoft's afraid of what you might put into it”.

It's sometimes easy to conform history to fit a specific viewpoint, but it does sound... familiar.

Anyway, we're glad that Microsoft is bringing first-party content to the PC, and they are perfectly within their rights to structure it however they please. We are also within our rights to point out its flaws and ask for them to be corrected. Turns out that Quantum Break, like Gears of War before it, has some severe performance issues. Let's be clear, these will likely be fixed, and I'm glad that Microsoft didn't artificially delay the PC version to give the console an exclusive window. Also, had they delayed the PC version until it was fixed, we wouldn't have known whether it needed the time.

Still, the game apparently has issues with a 50 FPS top-end cap, on top of pacing-based stutters. One thought I have: because DigitalFoundry is a European publication, perhaps the 50 Hz issue was caused by their copy being based on a PAL version of the game? Despite suggesting it, I would be shocked if that were the case, but I'm just trying to figure out why anyone would create a ceiling at that specific interval. They are also seeing NVIDIA's graphics drivers frequently crash, which probably means that some areas of their DirectX 12 support are not quite what the game expects. Again, that is solvable by drivers.

It's been a shaky start for both DirectX 12 and the Windows 10 UWP platform. We'll need to keep waiting and see what happens going forward. I hope this doesn't discourage Microsoft too much, but also that they robustly fix the problems we're discussing.

This ASUS GeForce GTX 980 Ti took the red pill

Subject: Graphics Cards | April 12, 2016 - 05:34 PM |
Tagged: asus, 980 Ti, GTX 980 Ti MATRIX Platinum, DirectCU II

The ASUS GTX 980 Ti MATRIX Platinum comes with an interesting mix of features, including a memory defroster; the card is designed with LN2 cooling in mind, so we may see it appear in some of this year's overclocking contests.  It uses the older dual-fan DirectCU II cooler, not the newer DirectCU III, yet the card still stayed around 60°C under full load when [H]ard|OCP tested it.  The one-press VBIOS reload is perfect if you run into trouble overclocking, and this card will overclock: [H] hit 1266MHz base / 1367MHz boost / 1503MHz in-game, with VRAM at 8.2GHz.  That overclocking potential, along with an asking price currently under MSRP, helped this card win the Gold; see it surpass the MSI LIGHTNING in the full review.

14603287514jUIHx7mZy_1_3_l.jpg

"Today we review the ASUS GTX 980 Ti MATRIX Platinum, a gaming enthusiast centered video card which boasts enthusiast air cooling and an enthusiast overclock on air cooling. This high-end video card features DirectCU II cooling, making it the perfect comparison to the MSI GTX 980 TI LIGHTNING in class, price, performance and cooling."


Source: [H]ard|OCP

AMD Radeon Crimson Edition drivers continue quality improvement

Subject: Graphics Cards | April 11, 2016 - 03:23 PM |
Tagged: rtg, radeon technologies group, radeon, driver, crimson, amd

For longer than AMD would like to admit, Radeon drivers and software were criticized for persistent issues with performance, stability, and features. As the graphics card market evolved and software became a critical part of the equation, that deficit hurt AMD substantially. 

In fact, despite the advantages that modern AMD Radeon parts typically have over GeForce options in terms of pure frame rate for your dollar, I recommended an NVIDIA GeForce GTX 970, 980 and 980 Ti for our three different VR Build Guides last month ($900, $1500, $2500) in large part due to confidence in NVIDIA’s driver team to continue delivering updated drivers to provide excellent experiences for gamers.

But back in September of 2015 we started to see changes inside AMD. There was a drastic reorganization of the company and the people in charge. AMD set up the Radeon Technologies Group, a new entity inside the organization with complete control over graphics hardware and software direction, and it put one of the most respected people in the industry at its helm: Raja Koduri. On November 24th AMD launched Radeon Software Crimson, a totally new branding, style and implementation for controlling your Radeon GPU. I talked about it at the time, but the upgrade was noticeable; everything was faster, easier to find and…pretty.

Since then, AMD has rolled out several new drivers with key feature additions, improvements and, of course, game performance increases. Thus far in 2016 the Radeon Technologies Group has released seven new drivers, three of which have been WHQL certified. Compare that to the same period last year: AMD released just one driver in all of Q1 2015, and zero WHQL-certified drivers.

crimson-3.jpg

Maybe most important of all, the team at Radeon Technologies Group claims to be putting a new emphasis on “day one” support for major PC titles. If implemented correctly, this gives enthusiasts and PC gamers who want to stay on the cutting edge of releases the ability to play optimized titles on the day of release. Getting updated drivers that fix bugs and improve performance weeks or months after release is great, but for gamers who may already be done with that game, the updates are worthless. AMD was guilty of this practice for years, shipping driver updates that fixed performance issues on Radeon hardware in time for reviewer testing but missed the majority of the play time of early-adopting consumers.

q1driver-2.jpg

Thus far, AMD has only just started down this path. Newer games like Far Cry Primal, The Division, Hitman and Ashes of the Singularity all had drivers from AMD on or before release with performance improvements, CrossFire profiles or both. A few others were CLOSE to day-one ready, including Rise of the Tomb Raider, Plants vs. Zombies Garden Warfare 2 and Gears of War Ultimate Edition.

 

Game                         Release Date   First Driver Mention   Driver Date   Feature / Support
Rise of the Tomb Raider      01-28-2016     16.1.1                 02-05-2016    Performance and CrossFire profile
Plants vs. Zombies GW2       02-23-2016     16.2.1                 03-01-2016    Performance
Gears Ultimate Edition       03-01-2016     16.3                   03-10-2016    Performance
Far Cry Primal               03-01-2016     16.2.1                 03-01-2016    CrossFire profile
The Division                 03-08-2016     16.1                   02-25-2016    CrossFire profile
Hitman                       03-11-2016     16.3                   03-10-2016    Performance, CrossFire profile
Need for Speed               03-15-2016     16.3.1                 03-18-2016    Performance, CrossFire profile
Ashes of the Singularity     03-31-2016     16.2                   02-25-2016    Performance
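
To make the table easier to read at a glance, here is a trivial sketch (dates transcribed from the table above) that tallies the gap between each game's release and the first driver to mention it; zero or negative means the driver shipped on or before day one.

```python
from datetime import date

# (release date, first supporting driver date), from the table above.
games = {
    "Rise of the Tomb Raider":  (date(2016, 1, 28), date(2016, 2, 5)),
    "Plants vs. Zombies GW2":   (date(2016, 2, 23), date(2016, 3, 1)),
    "Gears Ultimate Edition":   (date(2016, 3, 1),  date(2016, 3, 10)),
    "Far Cry Primal":           (date(2016, 3, 1),  date(2016, 3, 1)),
    "The Division":             (date(2016, 3, 8),  date(2016, 2, 25)),
    "Hitman":                   (date(2016, 3, 11), date(2016, 3, 10)),
    "Need for Speed":           (date(2016, 3, 15), date(2016, 3, 18)),
    "Ashes of the Singularity": (date(2016, 3, 31), date(2016, 2, 25)),
}

for game, (released, driver) in games.items():
    delta = (driver - released).days
    verdict = "day one or earlier" if delta <= 0 else f"{delta} days late"
    print(f"{game:26s} {verdict}")
```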

 

AMD claims that the push for this “day one” experience will continue going forward, pointing at a 35% boost in performance in Quantum Break between Radeon Crimson 16.3.2 and 16.4.1. There will be plenty of opportunities in the coming weeks and months to test AMD (and NVIDIA) on this “day one” focus with PC titles that will have support for DX12, UWP and VR.

The software team at RTG has also added quite a few interesting features since the release of the first Radeon Crimson driver. Support for the Vulkan API and a DX12 capability called Quick Response Queue, along with new additions to the Radeon settings (Per-game display scaling, CrossFire status indicator, power efficiency toggle, etc.) are just a few.

q1driver-4.jpg

Critical for consumers buying into VR, the Radeon Crimson drivers launched with support for the Oculus Rift and HTC Vive alongside those headsets. Both of these new virtual reality systems put significant strain on the GPU of a modern PC, and properly implementing support for techniques like timewarp is crucial to a good user experience. Though Oculus and HTC / Valve were using NVIDIA-based systems more or less exclusively during our time at the Game Developers Conference last month, AMD still has approved platforms and software from both vendors. In fact, in a recent change to the HTC Vive minimum specifications, Valve retroactively added the Radeon R9 280 to the list, giving AMD a slight edge in component pricing.

AMD was also the first to enable full support for external graphics solutions like the Razer Core external enclosure in its drivers with XConnect. We wrote about that release in early March, and I’m eager to get my hands on a product combo to give it a shot. As of this writing and after talking with Razer, NVIDIA had still not fully implemented external GPU functionality for hot/live device removal.

When looking for some acceptance metric, AMD pointed us to a survey it ran to measure approval of and satisfaction with Crimson. After 1,700+ submissions, customers scored it 4.4 out of 5.0 - pretty significant praise, even coming from AMD customers. We don't know exactly how the poll was run or where it was posted, but the Crimson driver release has definitely improved the perception of Radeon drivers among many enthusiasts.

I’m not going to sit here and try to convince everyone that AMD is absolved of past sins and that we should immediately be converted into believers. What I can say is that the Radeon Technologies Group is moving in the right direction, down a path that shows a change in leadership and a change in mindset. I talked in September about the respect I had for Raja Koduri and interviewed him after AMD’s Capsaicin event at GDC; you can already start to see the changes he is making inside this division. He has put a priority on software, not just on making it look pretty, but on promising to make good on proper multi-GPU support, improved timeliness of releases and innovative features. AMD and RTG still have a ways to go before they can unwind years of negativity, but the groundwork is there.

The company and every team member have a sizeable task ahead of them as we approach the summer. The Radeon Technologies Group will depend on the Polaris architecture and its products to swing the pendulum back against NVIDIA, gaining market share, mind share and respect. From what we have seen, Polaris looks impressive and differentiates itself from Hawaii and Fiji fairly dramatically. But this product was already well baked before Raja got total control, and we may have to see another generation pass before the GPU portfolio fully reflects the changes inside the institution. NVIDIA isn’t sitting idle, either; the Pascal architecture also promises improved performance, while leaning on the work and investment in software and drivers that have carried the company to the dominant market position it holds today.

I’m looking forward to working with AMD throughout 2016 on what promises to be an exciting and market-shifting time period.

NVIDIA Releases 364.91 Beta Drivers for Developers

Subject: Graphics Cards | April 11, 2016 - 01:04 AM |
Tagged: nvidia, vulkan, graphics drivers

This is not a main-line, WHQL driver. This is not even a mainstream beta driver. The beta GeForce 364.91 drivers (364.16 on Linux) are only available on the NVIDIA developer website, which, yes, is publicly accessible, but should probably not be installed unless you are intending to write software and every day counts. Also, some who have installed it claim that certain Vulkan demos stop working. I'm not sure whether that means the demo is out-of-date due to a rare conformance ambiguity, the driver has bugs, or the reports themselves are simply unreliable.

khronos-2016-vulkanlogo2.png

That said, if you are a software developer, and you don't mind rolling back if things go awry, you can check out the new version at NVIDIA's website. It updates Vulkan to 1.0.8, which is just documentation fixes and conformance tweaks; these things happen over time. In fact, the initial Vulkan release was actually Vulkan 1.0.3, if I remember correctly.

The driver also addresses issues with Vulkan and NVIDIA Optimus technologies, which is interesting. Optimus controls which GPU acts as primary in a laptop, switching between the discrete NVIDIA one and the Intel integrated one, depending on load and power. Vulkan and DirectX 12, however, expose all GPUs to the system. I'm curious how NVIDIA knows whether to sleep one or the other, and what that would look like to software that enumerates all compatible devices. Would it omit listing one of the GPUs? Or would it allow the software to wake the system out of Optimus should it want more performance?
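
For the curious, this is roughly what “exposing all GPUs” looks like from the application side. A minimal enumeration sketch, assuming the community vulkan Python bindings (pip install vulkan), which mirror the C API; on an Optimus laptop both the Intel and NVIDIA GPUs should appear in the list:

```python
import vulkan as vk

app_info = vk.VkApplicationInfo(
    sType=vk.VK_STRUCTURE_TYPE_APPLICATION_INFO,
    pApplicationName="enumerate-gpus",
    applicationVersion=vk.VK_MAKE_VERSION(1, 0, 0),
    pEngineName="none",
    engineVersion=vk.VK_MAKE_VERSION(1, 0, 0),
    apiVersion=vk.VK_API_VERSION_1_0)

instance = vk.vkCreateInstance(
    vk.VkInstanceCreateInfo(
        sType=vk.VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO,
        pApplicationInfo=app_info),
    None)

# Every Vulkan-capable physical device is listed, discrete and
# integrated alike -- there is no Optimus-style hiding at this layer.
for device in vk.vkEnumeratePhysicalDevices(instance):
    props = vk.vkGetPhysicalDeviceProperties(device)
    print(props.deviceName)
```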

Anywho, the driver is available now, but you probably should wait for official releases. The interesting thing is this seems to mean that NVIDIA will continue to release non-public Vulkan drivers. Hmm.

Source: NVIDIA

EVGA Releases NVIDIA GeForce GTX 950 Low Power Cards

Subject: Graphics Cards | April 5, 2016 - 03:57 PM |
Tagged: PCIe power, nvidia, low-power, GTX950, GTX 950 Low Power, graphics card, gpu, GeForce GTX 950, evga

EVGA has announced new low-power versions of the NVIDIA GeForce GTX 950, some of which do not require any PCIe power connection to work.

02G-P4-0958-KR_no_6_pin.jpg

"The EVGA GeForce GTX 950 is now available in special low power models, but still retains all the performance intact. In fact, several of these models do not even have a 6-Pin power connector!"

With or without the 6-pin connector, all of these cards are full-on GTX 950s, with 768 CUDA cores and 2GB of GDDR5 memory. The primary difference is clock speeds, and EVGA provides a chart to illustrate which models still require PCIe power, as well as how they compare in performance.
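
The arithmetic behind the connector question is simple: a PCIe x16 slot can deliver up to 75W on its own and a 6-pin plug adds another 75W, while the reference GTX 950 is rated at 90W. A minimal sketch, treating EVGA's no-connector boards as 75W parts per the listings:

```python
SLOT_W = 75      # max power from the PCIe x16 slot itself
SIX_PIN_W = 75   # additional power from one 6-pin connector

def needs_six_pin(board_power_w):
    """True if the card cannot run on slot power alone."""
    return board_power_w > SLOT_W

for name, watts in [("Reference GTX 950", 90),
                    ("EVGA GTX 950 Low Power", 75)]:
    plug = "needs a 6-pin" if needs_six_pin(watts) else "slot power only"
    print(f"{name} @ {watts} W: {plug}")
```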

evga_chart.png

It looks like the links to the 75W (no PCIe power required) models aren't working just yet on EVGA's site. Doubtless we will soon have active listings for pricing and availability info.

Source: EVGA

AMD Brings Dual Fiji and HBM Memory To Server Room With FirePro S9300 x2

Subject: Graphics Cards | April 5, 2016 - 06:13 AM |
Tagged: HPC, hbm, gpgpu, firepro s9300x2, firepro, dual fiji, deep learning, big data, amd

Earlier this month AMD launched a dual Fiji powerhouse for VR gamers it is calling the Radeon Pro Duo. Now, AMD is bringing its latest GCN architecture and HBM memory to servers with the dual GPU FirePro S9300 x2.

AMD Firepro S9300x2 Server HPC Card.jpg

The new server-bound professional graphics card packs an impressive amount of computing hardware into a dual-slot card with passive cooling. The FirePro S9300 x2 combines two full Fiji GPUs clocked at 850 MHz for a total of 8,192 cores, 512 TUs, and 128 ROPs. Each GPU is paired with 4GB of non-ECC HBM memory on package with 512GB/s of memory bandwidth, which AMD adds together to advertise this as the first professional graphics card with 1TB/s of memory bandwidth.

Due to lower clock speeds, the S9300 x2 has less peak single-precision compute performance than the consumer Radeon Pro Duo: 13.9 TFLOPS versus 16 TFLOPS for the desktop card. Businesses will be able to cram more cards into their rack-mounted servers, though, since they do not need to worry about mounting locations for the sealed-loop water cooling of the Radeon card.

                    FirePro S9300 x2          Radeon Pro Duo            R9 Fury X           FirePro S9170
GPU                 Dual Fiji                 Dual Fiji                 Fiji                Hawaii
GPU Cores           8192 (2 x 4096)           8192 (2 x 4096)           4096                2816
Rated Clock         850 MHz                   1050 MHz                  1050 MHz            930 MHz
Texture Units       2 x 256                   2 x 256                   256                 176
ROP Units           2 x 64                    2 x 64                    64                  64
Memory              8GB (2 x 4GB)             8GB (2 x 4GB)             4GB                 32GB ECC
Memory Clock        500 MHz                   500 MHz                   500 MHz             5000 MHz
Memory Interface    4096-bit (HBM) per GPU    4096-bit (HBM) per GPU    4096-bit (HBM)      512-bit
Memory Bandwidth    1TB/s (2 x 512GB/s)       1TB/s (2 x 512GB/s)       512 GB/s            320 GB/s
TDP                 300 watts                 ?                         275 watts           275 watts
Peak Compute        13.9 TFLOPS               16 TFLOPS                 8.60 TFLOPS         5.24 TFLOPS
Transistor Count    17.8B                     17.8B                     8.9B                8.0B
Process Tech        28nm                      28nm                      28nm                28nm
Cooling             Passive                   Liquid                    Liquid              Passive
MSRP                $6000                     $1499                     $649                $4000

AMD is aiming this card at datacenter and HPC users working on "big data" tasks that do not require the accuracy of double-precision floating point calculations. Deep learning tasks, seismic processing, and data analytics are all examples AMD says the dual GPU card will excel at. These are all tasks that can be greatly accelerated by the massively parallel nature of a GPU but do not need the precision of the stricter mathematics, modeling, and simulation work that depends on FP64 performance. In that respect, the FirePro S9300 x2 offers only 870 GFLOPS of double-precision compute performance.
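
Both of those headline numbers check out against the specifications: Fiji retires one fused multiply-add (2 FLOPs) per stream processor per clock in single precision, and runs double precision at 1/16 of that rate.

```python
# FP32: 8192 SPs (two Fiji GPUs) * 2 FLOPs * 850 MHz.
fp32_tflops = 8192 * 2 * 850e6 / 1e12
fp64_gflops = fp32_tflops / 16 * 1000   # Fiji's 1/16 DP rate

print(f"FP32: {fp32_tflops:.1f} TFLOPS")  # ~13.9, as advertised
print(f"FP64: {fp64_gflops:.0f} GFLOPS")  # ~870, as advertised
```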

Further, this card supports a GPGPU-optimized Linux driver stack called GPUOpen, and developers can program for it using either OpenCL (it supports OpenCL 1.2) or C++. AMD PowerTune and the return of FP16 support are also among the features. AMD claims that its new dual GPU card is twice as fast as the NVIDIA Tesla M40 (1.6x the K80) and 12 times as fast as the latest Intel Xeon E5 in peak single-precision floating point performance. 
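
As a sketch of what that looks like from the OpenCL side, using the pyopencl bindings on a hypothetical S9300 x2 system: each Fiji GPU enumerates as its own device, so software has to split work across the two halves of the card explicitly.

```python
import pyopencl as cl

# List every GPU device the OpenCL runtime exposes; a dual-GPU board
# like the S9300 x2 should appear as two separate Fiji devices.
for platform in cl.get_platforms():
    for dev in platform.get_devices():
        if dev.type & cl.device_type.GPU:
            print(f"{dev.name}: {dev.max_compute_units} CUs, "
                  f"{dev.global_mem_size // 2**30} GB")
```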

The double slot card is powered by two PCI-E power connectors and is rated at 300 watts. This is a bit more palatable than the triple 8-pin needed for the Radeon Pro Duo!

The FirePro S9300 x2 comes with a 3-year warranty and will be available in the second half of this year for $6000 USD. You are definitely paying a premium for the professional certifications and support. Here's hoping developers come up with some cool uses for the dual 8.9-billion-transistor GPUs and their HBM memory!

Source: AMD

NVIDIA's New Quadro VR Ready Program Targets Enterprise

Subject: Graphics Cards | April 4, 2016 - 01:00 PM |
Tagged: workstation, VR, virtual reality, quadro, NVIDIA Quadro M5500, nvidia, msi, mobile workstation, enterprise

NVIDIA's VR Ready program, which is designed to inform users which GeForce GTX GPUs “deliver an optimal VR experience”, has moved to enterprise with a new program aimed at NVIDIA Quadro GPUs and related systems.

NVIDIA_VR.png

“We’re working with top OEMs such as Dell, HP and Lenovo to offer NVIDIA VR Ready professional workstations. That means models like the HP Z Workstation, Dell Precision T5810, T7810, T7910, R7910, and the Lenovo P500, P710, and P910 all come with NVIDIA-recommended configurations that meet the minimum requirements for the highest performing VR experience.

Quadro professional GPUs power NVIDIA professional VR Ready systems. These systems put our VRWorks software development kit at the fingertips of VR headset and application developers. VRWorks offers exclusive tools and technologies — including Context Priority, Multi-res Shading, Warp & Blend, Synchronization, GPU Affinity and GPU Direct — so pro developers can create great VR experiences.”

Partners include Dell, HP, and Lenovo, with new workstations featuring NVIDIA professional VR Ready certification. 

Pro VR Ready Deck.png

Desktop isn't the only space for workstations, and in this morning's announcement NVIDIA and MSI are introducing the WT72 mobile workstation, the “first NVIDIA VR Ready professional laptop”:

"The MSI WT72 VR Ready laptop is the first to use our new Maxwell architecture-based Quadro M5500 GPU. With 2,048 CUDA cores, the Quadro M5500 is the world’s fastest mobile GPU. It’s also our first mobile GPU for NVIDIA VR Ready professional mobile workstations, optimized for VR performance with ultra-low latency."

Here are the specs for the WT72 6QN:

  • GPU: NVIDIA Quadro M5500 3D (8GB GDDR5)
  • CPU Options:
    • Xeon E3-1505M v5
    • Core i7-6920HQ
    • Core i7-6700HQ
  • Chipset: CM236
  • Memory:
    • 64GB ECC DDR4 2133 MHz (Xeon)
    • 32GB DDR4 2133 MHz (Core i7)
  • Storage: Super RAID 4, 256GB SSD + 1TB SATA 7200 rpm
  • Display:
    • 17.3” UHD 4K (Xeon, i7-6920HQ)
    • 17.3” FHD Anti-Glare IPS (i7-6700HQ)
  • LAN: Killer Gaming Network E2400
  • Optical Drive: BD Burner
  • I/O: Thunderbolt, USB 3.0 x6, SDXC card reader
  • Webcam: FHD type (1080p/30)
  • Speakers: Dynaudio Tech Speakers 3Wx2 + Subwoofer
  • Battery: 9 cell
  • Dimensions: 16.85” x 11.57” x 1.89”
  • Weight: 8.4 lbs
  • Warranty: 3-year limited
  • Pricing:  
    • Xeon E3-1505M v5 model: $6899
    • Core i7-6920HQ model: $6299
    • Core i7-6700HQ model: $5499

MSI_NB_WT72_Skylake_Photo18.jpg

No doubt we will see details of other Quadro VR Ready workstations as GTC unfolds this week.

Source: NVIDIA

Asus Echelon GTX 950 Limited Edition In Arctic Camouflage Available Soon

Subject: Graphics Cards | March 30, 2016 - 06:58 AM |
Tagged: maxwell, gtx 950, GM206, asus

Asus is launching a new midrange gaming graphics card clad in arctic camouflage. The Echelon GTX 950 Limited Edition is a Maxwell-based card that will come factory overclocked and paired with Asus features normally reserved for their higher end cards.

This dual-slot, dual-fan graphics card features “auto-extreme technology”, which is Asus marketing speak for high-end capacitors, chokes, and other components. Further, the card uses a DirectCU II cooler that Asus claims offers 20% better cooling performance while being three times quieter than the NVIDIA reference cooler. Asus tweaked the shroud on this card to resemble a white and gray arctic camouflage design, and there is also a reinforced backplate that continues the stealthy camo theme.

Asus Echelon GTX 950 Limited Edition.png

I/O on the Echelon GTX 950 Limited Edition includes:

  • 1 x DVI-D
  • 1 x DVI-I
  • 1 x HDMI 2.0
  • 1 x DisplayPort

The card supports NVIDIA’s G-Sync technology, and the inclusion of an HDMI 2.0 port allows it to be used in an HTPC/gaming PC build for the living room, though case selection would be limited since it’s a larger dual-slot card.

Beneath the stealthy exterior, Asus conceals a GM206-derived GTX 950 GPU with 768 CUDA cores, 48 texture units, and 32 ROPs, as well as 2GB of GDDR5 memory. Out of the box, users have two factory overclocks to choose from, which Asus calls Gaming and OC modes. In Gaming Mode, the Echelon GTX 950 GPU is clocked at 1,140 MHz base and 1,329 MHz boost. Turning the card to OC Mode increases clock speeds further, to 1,165 MHz base and 1,355 MHz boost.

For reference, the, well, reference GTX 950 clockspeeds are 1,024 MHz base and 1,186 MHz boost.

Asus Echelon GTX 950 Limited Edition Artic Camo Backplate.png

Asus also ever-so-slightly overclocked the GDDR5 memory to 6,610 MHz, unfortunately a mere 10MHz over reference. The memory sits on a 128-bit bus, and while a factory overclock is nice to see, the increase in transfer speed will be minimal at best.
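
To put that 10MHz in numbers: peak bandwidth is just bus width times data rate, so the factory memory overclock is worth about 0.15%.

```python
def bandwidth_gbs(bus_bits, data_rate_mtps):
    return bus_bits / 8 * data_rate_mtps / 1000

stock   = bandwidth_gbs(128, 6600)  # reference GTX 950 memory
echelon = bandwidth_gbs(128, 6610)  # Asus factory overclock

print(f"Reference: {stock:.2f} GB/s")    # 105.60
print(f"Echelon:   {echelon:.2f} GB/s")  # 105.76
print(f"Gain:      {(echelon / stock - 1) * 100:.2f}%")
```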

In our review of the GTX 950, which focused on the Asus Strix variant, Ryan found it to be a good option for 1080p gamers wanting a bit more graphical prowess than the 750 Ti offers.

Maximum PC reports that the camo-clad Echelon GTX 950 will be available at the end of the month. Pricing has not been released by Asus, but I would expect this card to come with an MSRP of around $180 USD.

Check out our review of the NVIDIA GTX 950: Maxwell for MOBAs

 

Source: Asus

Video Perspective: Retail Oculus Rift Day One - Setup, Early Testing

Subject: General Tech, Graphics Cards | March 29, 2016 - 03:24 AM |
Tagged: pcper, hardware, technology, review, Oculus, rift, Kickstarter, nvidia, geforce, GTX 980 Ti

It's Oculus Rift launch day, and the team and I spent the afternoon setting up the Rift, running through a set of gameplay environments, and getting some good first impressions on performance, experience and more. Oh, and we threw a green screen into the mix today as well.

AMD and NVIDIA release drivers for Oculus Rift launch day!

Subject: Graphics Cards | March 28, 2016 - 02:20 PM |
Tagged: vive, valve, steamvr, rift, Oculus, nvidia, htc, amd

As the first Oculus Rift retail units begin hitting hands in the US and abroad, both AMD and NVIDIA have released new drivers to help gamers ease into the world of VR gaming. 

Up first is AMD, with Radeon Software Crimson Edition 16.3.2. It adds support for Oculus SDK v1.3 and the Radeon Pro Duo...for all none of you that have that product in your hands. AMD claims that this driver will offer "the most stable and compatible driver for developing VR experiences on the Rift to-date." AMD tells us that the latest implementation of LiquidVR features in the software helps the SDKs and VR games at release take better advantage of AMD Radeon GPUs. This includes capabilities like asynchronous shaders (which AMD thinks should be capitalized for some reason??) and Quick Response Queue (which I think refers to the ability to process without context-change penalties) to help Oculus implement Asynchronous Timewarp.

ocululs.jpg

NVIDIA's release is a bit more substantial, with the GeForce Game Ready 364.72 WHQL drivers adding support for the Oculus Rift and HTC Vive, plus improvements for Dark Souls III, Killer Instinct, Paragon early access and even Quantum Break.

For the optimum experience when using the Oculus Rift, and when playing the thirty games launching alongside the headset, upgrade to today's VR-optimized Game Ready driver. Whether you're playing Chronos, Elite Dangerous, EVE: Valkyrie, or any of the other VR titles, you'll want our latest driver to minimize latency, improve performance, and add support for our newest VRWorks features that further enhance your experience.

Today's Game Ready driver also supports the HTC Vive Virtual Reality headset, which launches next week. As with the Oculus Rift, our new driver optimizes and improves the experience, and adds support for the latest Virtual Reality-enhancing technology.

Good to see both GPU vendors giving us new drivers for the release of the Oculus Rift...let's hope it pans out well and the response from the first buyers is positive!

Video Perspective: HTC Vive Pre First Impressions

Subject: General Tech, Graphics Cards | March 26, 2016 - 04:11 AM |
Tagged: VR, vive pre, vive, virtual reality, video, pre, htc

On Friday I was able to get a pre-release HTC Vive Pre into the office and spend some time with it. Not only was I interested in getting more hands-on time with the hardware without a time limit, but we were also experimenting with how to stream and record VR demos and environments. 

Enjoy and mock!

Red matter and green blood; Vulkan runs on Linux

Subject: Graphics Cards | March 24, 2016 - 06:04 PM |
Tagged: Ubuntu 16.04, linux, vulkan, amd, nvidia

Last week AMD released a new GPU-PRO Beta driver stack and this Monday NVIDIA released the 364.12 beta driver, both of which support Vulkan, and that meant Phoronix had a lot of work to do.  Up for testing were the GTX 950, 960, 970, 980, and 980 Ti as well as the R9 Fury, 290 and 285.  Logically, they used The Talos Principle as the test; their results compare not only the cards but also the performance delta between OpenGL and Vulkan, and finish up with several OpenGL benchmarks to see if there were any performance improvements from the new drivers.  The results look good for Vulkan, as it beats OpenGL across the board, as you can see in the review.

image.php_.jpg

"Thanks to AMD having released their new GPU-PRO "hybrid" Linux driver a few days ago, there is now Vulkan API support for Radeon GPU owners on Linux. This new AMD Linux driver holds much potential and the closed-source bits are now limited to user-space, among other benefits covered in dozens of Phoronix articles over recent months. With having this new driver in hand plus NVIDIA promoting their Vulkan support to the 364 Linux driver series, it's a great time for some benchmarking. Here are OpenGL and Vulkan atop Ubuntu 16.04 Linux for both AMD Radeon and NVIDIA GeForce graphics cards."


Source: Phoronix

Valve targeting lower price systems and GPUs for VR

Subject: Graphics Cards | March 19, 2016 - 07:02 PM |
Tagged: VR, vive, valve, htc, gdc 2016, GDC

A story posted over at UploadVR has some interesting information that came out of the final days of GDC last week. We know that Valve, HTC and Oculus have recommended users have a Radeon R9 290 or GTX 970 GPU or higher to run virtual reality content on both the Vive and the Rift, and that comes with a high cost for users that weren't already invested in PC gaming. Valve’s Alex Vlachos has other plans that might enable graphics cards from as far back as 2012 to work in Valve's VR ecosystem.

badresult2.jpg

Valve wants to lower the requirements for VR

Obviously there are some trade-offs to consider. The reason GPU requirements are so high for the Rift and Vive is the need to run at 90 FPS / 90 Hz without dropping frames to create smooth and effective immersion. Deviation from that means the potential for motion sickness and poor VR experiences in general. 

From UploadVR's story:

“As long as the GPU can hit 45 HZ we want for people to be able to run VR,” Vlachos told UploadVR after the talk. “We’ve said the recommended spec is a 970, same as Oculus, but we do want lesser GPUs to work. We’re trying to reduce the cost [of VR].”

It's interesting that Valve would be talking about a 45 FPS target now, implying there would be some kind of frame doubling or frame interpolation to get back to the 90 FPS mark that the company believes is required for a good VR experience. 
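
Valve hasn't detailed the mechanism, but the arithmetic behind a 45 FPS floor is straightforward: at 90 Hz the display scans out every 11.1 ms, so a renderer holding 45 FPS would have each frame displayed twice, with the repeated scan-out presumably re-warped from fresh head-tracking data. A sketch of the budgets involved:

```python
DISPLAY_HZ = 90
scanout_ms = 1000 / DISPLAY_HZ  # ~11.1 ms between display refreshes

for render_fps in (90, 45):
    frame_budget_ms = 1000 / render_fps
    repeats = DISPLAY_HZ // render_fps - 1
    print(f"{render_fps} FPS render: {frame_budget_ms:.1f} ms per frame, "
          f"{repeats} reprojected scan-out(s) per rendered frame")
```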

adaptive-quality-valve.jpg

Image source: UploadVR

Vlachos also mentioned some other avenues Valve could pursue to improve performance. One is "adaptive quality", a feature we first saw discussed with the release of the Valve SteamVR Performance Test. This would allow a game to lower image quality dynamically (texture detail, draw distance, etc.) based on hardware performance, but it might also include something called fixed foveated rendering. With FFR, only the center of the image is rendered at maximum detail while the surrounding area runs at lower quality; the theory is that you are focused on the center of the screen anyway, and human vision already blurs the periphery. This is similar to NVIDIA's multi-res shading technology that is already integrated into UE4, so I'm curious to see how this one shakes out.
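
The appeal of fixed foveated rendering is easy to quantify. Assuming, purely for illustration, that the periphery is shaded at half resolution in each axis (one quarter of the pixels):

```python
def ffr_shaded_fraction(center_fraction, periphery_scale=0.5):
    """Fraction of full-resolution pixel work left after FFR."""
    periphery = 1 - center_fraction
    return center_fraction + periphery * periphery_scale ** 2

for center in (0.25, 0.50):
    shaded = ffr_shaded_fraction(center)
    print(f"center {center:.0%} at full res: shade {shaded:.0%} "
          f"of the pixels ({1 - shaded:.0%} saved)")
```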

Another quote from UploadVR:

“I can run Aperture [a graphically rich Valve-built VR experience] on a 680 without dropping frames at a lower quality, and, for me, that’s enough of a proof of concept,” Vlachos said.

I have always said that neither Valve nor Oculus are going to lock out older hardware, but that they wouldn't directly support it. That a Valve developer can run its performance test (with adaptive quality) on a GTX 680 is a good sign.

screen2.jpg

The Valve SteamVR Performance Test

But the point Vlachos makes, that "most art we’re seeing in VR isn’t as dense" as in other PC titles, is a bit worrisome. We WANT VR games to reach the same image quality and realism levels we see in modern PC titles, not to depend solely on artistic angles to hit the necessary performance levels for high-quality virtual reality. Yes, the entry price today for PC-based VR is going to be steep, but I think "console-ifying" the platform will do it a disservice in the long run.

Source: UploadVR

Going for Gold with MSI's newest GTX 980 Ti

Subject: Graphics Cards | March 18, 2016 - 05:59 PM |
Tagged: msi, GTX 980 Ti, MSI GTX 980 Ti GOLDEN Edition, nvidia, factory overclocked

Apart from the golden fan and HDMI port, MSI's 980 Ti GOLDEN Edition also comes with a moderate factory overclock - 1140MHz base, 1228MHz boost, and 7GHz memory - with an observed in-game frequency of 1329MHz.  [H]ard|OCP managed to push those to 1290MHz base, 1378MHz boost, and 7.8GHz memory, with the card hitting 1504MHz in-game.  That overclock produced noticeable results in many games and pushed it close to the performance of [H]'s overclocked MSI 980 Ti LIGHTNING.  The LIGHTNING proved to be the better card in terms of performance, both graphically and thermally; however, it is also more expensive than the GOLDEN and does not have quite the same aesthetics, if that matters to you.

14566999420lauEJffPR_1_8_l.jpg

"Today we evaluate the MSI GTX 980 Ti GOLDEN Edition video card. This video card features a pure copper heatsink geared towards faster heat dissipation and better temps on air than other air cooled video cards. We will compare it to the MSI GTX 980 Ti LIGHTNING, placing the two video cards head to head in an overclocking shootout. "


Source: [H]ard|OCP