Manufacturer: Various

Early testing for higher end GPUs

UPDATE 2/5/16: Nixxes released a new version of Rise of the Tomb Raider today with some significant changes. I have added another page at the end of this story that looks at results with the new version of the game, a new AMD driver, and some SLI and CrossFire results.

I will fully admit to being jaded by the industry on many occasions. I love my PC games and I love hardware, but it takes a lot for me to get genuinely excited about anything. After hearing game reviewers talk up the newest installment of the Tomb Raider franchise, Rise of the Tomb Raider, since its release on the Xbox One last year, I've been waiting for its PC release to give it a shot with real hardware. As you'll see in the screenshots and video in this story, the game doesn't appear to disappoint.

rotr-screen1.jpg

Rise of the Tomb Raider takes the exploration and "tomb raiding" aspects that made the first games in the series successful and applies them to the visual quality and character design brought in with the reboot of the series a couple of years back. The result is a PC game that looks stunning at any resolution, and even more so at 4K, but one that pushes your hardware to its limits. Even single GPUs as fast as the GTX 980 Ti and Fury X struggle to keep their heads above water.

In this short article we'll look at the performance of Rise of the Tomb Raider with a handful of GPUs, leaning towards the high end of the product stack, and offer up my view on whether each hardware vendor is living up to expectations.

Continue reading our look at GPU performance in Rise of the Tomb Raider!!

Podcast #385 - Rise of the Tomb Raider Performance, 3x NVMe M.2 RAID-0, AMD Q1 Offerings

Subject: General Tech | February 4, 2016 - 11:53 AM |
Tagged: video, Trion 150, tesla, steam os, Samsung, rise of the tomb raider, podcast, ocz, NVMe, Jim Keller, amd, 950 PRO

PC Perspective Podcast #385 - 02/04/2016

Join us this week as we discuss Rise of the Tomb Raider performance, a triple RAID-0 NVMe array and more!

You can subscribe to us through iTunes and you can still access it directly through the RSS page HERE.

The URL for the podcast is: http://pcper.com/podcast - Share with your friends!

Hosts: Ryan Shrout, Jeremy Hellstrom, Josh Walrath, and Allyn Malventano

Subscribe to the PC Perspective YouTube Channel for more videos, reviews and podcasts!!

AMD FirePro S-Series Introduces Hardware-Based GPU Virtualization

Subject: Graphics Cards | February 3, 2016 - 02:37 AM |
Tagged: virtual machines, virtual graphics, mxgpu, gpu virtualization, firepro, amd

AMD made an interesting enterprise announcement today with the introduction of new FirePro S-Series graphics cards that integrate hardware-based virtualization technology. The new FirePro S7150 and S7150 x2 are aimed at virtualized workstations, render farms, and cloud gaming platforms where each virtual machine has direct access to the graphics hardware.

The new graphics cards are built around a GCN-based Tonga GPU; the single-slot FirePro S7150 pairs 2,048 stream processors with 8 GB of ECC GDDR5 memory. The dual-slot FirePro S7150 x2, as the name suggests, is a dual-GPU card with a total of 4,096 shaders (2,048 per GPU) and 16 GB of ECC GDDR5 (8 GB per GPU). The S7150 has a TDP of 150W while the dual-GPU S7150 x2 is rated at 265W, and either card can be passively cooled.

AMD FirePro S1750 x2 Hardare-based virtualized GPU MxGPU.png

Where the graphics cards get niche is the inclusion of what AMD calls MxGPU (Multi-User GPU) technology, which is derived from the SR-IOV (Single Root Input/Output Virtualization) PCI-Express standard. According to AMD, the new FirePro S-Series gives virtual machines direct access to the full range of GPU hardware (shaders, memory, etc.) along with OpenCL 2.0 support on the software side. The S7150 supports up to 16 simultaneous users and the S7150 x2 tops out at 32 users. Each virtual machine is allocated an equal slice of the GPU, so as you add virtual machines those slices get smaller; AMD's suggested remedy is to add more GPUs to spread out the users and give each VM more hardware horsepower. It is worth noting that AMD has elected not to charge any per-user licensing fees for these VMs, which should make the cards more competitive.
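For a rough sense of how that equal-slice model scales, here is a back-of-the-envelope sketch using the S7150 specs listed above. The actual MxGPU partitioning granularity lives in AMD's SR-IOV firmware and drivers and is not public, so treat this as illustrative arithmetic only:

```python
# Illustrative math for MxGPU's equal-slice allocation on a single
# FirePro S7150 (2,048 stream processors, 8 GB ECC GDDR5, 16 users max).
# This is NOT AMD's partitioning code; it only shows how an equal split
# of the published resources shrinks as VMs are added.

STREAM_PROCESSORS = 2048
MEMORY_GB = 8
MAX_USERS = 16


def per_vm_slice(num_vms: int) -> tuple[float, float]:
    """Return (stream processors, GB of memory) per VM for an equal split."""
    if not 1 <= num_vms <= MAX_USERS:
        raise ValueError(f"S7150 supports 1 to {MAX_USERS} users, got {num_vms}")
    return STREAM_PROCESSORS / num_vms, MEMORY_GB / num_vms


for vms in (2, 4, 8, 16):
    sps, mem = per_vm_slice(vms)
    print(f"{vms:2d} VMs -> {sps:6.0f} stream processors, {mem:4.2f} GB each")
```

Adding a second GPU, or stepping up to the S7150 x2, simply doubles the pool being divided, which is exactly the remedy AMD suggests when per-VM slices get too thin.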

The graphics cards use ECC memory to detect and correct bit errors during large, long-running calculations, and every VM is reportedly protected and isolated such that one VM cannot access data belonging to a different VM stored in graphics memory.

I am interested to see how these stack up against NVIDIA's specialized GRID and VGX GPU virtualization cards. The choice of software- versus hardware-based virtualization may not matter much in practice, but AMD's approach may be ever so slightly more efficient by removing a layer between the virtual machine and the hardware. We'll have to wait and see, however.

Enterprise users will be able to pick up the new cards, installed in systems from server manufacturers, sometime in the first half of 2016. Pricing for the cards themselves appears to be $2,399 for the single-GPU S7150 and $3,999 for the dual-GPU S7150 x2.

Needless to say, this is all a bit more advanced (and expensive!) than the somewhat finicky 3D acceleration option desktop users can turn on in VMware and VirtualBox! Are you experimenting with remote workstations and virtual machines for thin clients that can utilize GPU muscle? Does AMD’s MxGPU approach seem promising?

Source: AMD

The Wraith in action

Subject: Cases and Cooling | February 2, 2016 - 03:26 PM |
Tagged: Wraith, fx 8370, amd

By now you should have memorized Josh's look at AMD's new processors and FM2+ motherboards; unfortunately, the one thing we were missing was time to test the cooler itself (which totally did arrive, sorry!). Techgage, on the other hand, did receive an FX 8370 with the Wraith and had a chance to run some quick tests with the new 95W cooler. There was a slight hitch: the motherboard they used ran the fan at the full 3,000 RPM, so more audio testing needs to be done, but the thermals show great potential, as the FX 8370 never surpassed 57C. This suggests that with a properly controlled fan header you should be able to reduce the speed and noise without seeing troublesome CPU temperatures.

AMD-Wraith-CPU-Cooler-Fan.jpg

"It’s not often that we’re treated to a CPU cooler update from AMD, so it was with great interest that we checked out its Wraith in action at last month’s CES. We’ve now been able to poke and prod the cooler over the past week in our lab, and cover everything important about it here. For good measure, we also tackle platform updates."

Here are some more Cases & Cooling reviews from around the web:

CASES & COOLING

Source: Techgage
Subject: Processors
Manufacturer: AMD

AMD Keeps Q1 Interesting

CES 2016 was not a watershed moment for AMD.  They showed off their lineup of current video cards and, perhaps more importantly, working Polaris silicon, which will be their workhorse for 2016 in the graphics department.  They did not show off Zen, a next generation APU, or any AM4 motherboards, and nothing on the CPU and APU side was presented as revolutionary.  What they did show, however, hinted at things to come that will help keep AMD relevant in the desktop space.

AMD_NewQ1.jpg

It was odd to see an announcement about a stock cooler, but the more we learned about it, the more important it seemed for AMD’s reputation moving forward.  The Wraith cooler is a new unit designed to control the noise and temperatures of the latest AMD CPUs and select APUs.  It is a fairly beefy unit with a large, slow-moving fan that produces very little noise.  This is a big change from the variable speed fans on previous coolers, which could get rather noisy while still allowing temperatures to climb higher than is comfortable.  There has been some derision aimed at AMD for providing “just a cooler” for their top-end products, but it is a push that makes them more user and enthusiast friendly without breaking the bank.

Socket AM3+ is not dead yet.  Though we have been commenting on the health of the platform for some time, AMD and its partners continue to improve and iterate on these products, adding technologies such as USB 3.1 and M.2 support.  While the chipsets are limited to PCI-E 2.0 speeds, the four lanes available to most M.2 controllers allow these boards to provide enough bandwidth to make good use of the latest NVMe-based M.2 drives; the quick math below shows what that works out to.  We likely will not see a faster refresh on AM3+, but we will see new SKUs bundling the Wraith cooler as well as a price break for the processors that exist in this socket.
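As a sanity check on that claim, here is the rough arithmetic for what four PCI-E 2.0 lanes actually deliver. The 500 MB/s-per-lane figure already accounts for 8b/10b encoding, and real-world throughput will land a bit lower once protocol overhead is factored in:

```python
# Back-of-the-envelope bandwidth for an M.2 slot fed by four PCI-E 2.0 lanes.
# PCI-E 2.0 runs at 5 GT/s per lane; after 8b/10b encoding that is roughly
# 500 MB/s of usable bandwidth per lane. Protocol overhead trims it further.

MB_PER_LANE = 500   # approximate usable MB/s per PCI-E 2.0 lane
LANES = 4

total_mb_s = MB_PER_LANE * LANES
print(f"PCI-E 2.0 x{LANES} ~= {total_mb_s} MB/s ({total_mb_s / 1000:.1f} GB/s)")
# Roughly 2 GB/s: enough for the bulk of NVMe workloads, though the very
# fastest drives can push past it in peak sequential reads.
```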

Click here to continue reading about AMD's latest offerings for Q1 2016!

So That's Where Jim Keller Went To... Tesla Motors...

Subject: General Tech, Processors, Mobile | January 29, 2016 - 05:28 PM |
Tagged: tesla, tesla motors, amd, Jim Keller, apple

Jim Keller, a huge name in the semiconductor industry for his work at AMD and Apple, recently left AMD before the launch of the Zen architecture. This made us nervous, because when a big name leaves a company before a product launch, it could either be that their work is complete... or they're evacuating before a stink-bomb detonates and the whole room smells like rotten eggs.

jim_keller.jpg

It turns out a third option is possible: Elon Musk offers you a job making autonomous vehicles. Jim Keller's title at Tesla will be Vice President of Autopilot Hardware Engineering. I could see this position being enticing, to say the least, even if you are confident in your previous employer's upcoming product stack. It doesn't tell us whether AMD's Zen architecture will be good or bad, but it nullifies the predictions made when Jim Keller left AMD, at least until further notice.

We don't know who approached who, or when.

Another point of note: Tesla Motors currently uses NVIDIA Tegra SoCs in its cars, and NVIDIA is (obviously) a competitor of Jim Keller's former employer, AMD. It sounds like Jim Keller is moving into a somewhat different role than he had at AMD and Apple, but it could get interesting if Tesla starts taking chip design in-house to customize silicon for its specific needs and take responsibilities away from NVIDIA.

The first time he was at AMD, he was the lead architect of the Athlon 64 processor, and he co-authored the x86-64 specification. When he worked at Apple, he helped design the Apple A4 and A5 processors, which were the first two that Apple created in-house; the first three iPhone processors were Samsung SoCs.

Podcast #384 - Corsair Carbide 600Q, GDDR5X, a Dual Fiji Graphics card and more!

Subject: General Tech | January 28, 2016 - 01:38 PM |
Tagged: podcast, video, corsair, carbide, 600q, 600c, gddr5x, jdec, amd, Fiji, fury x, fury x2, scythe, Ninja 4, logitech, g502 spectrum, Intel, Tigerlake, nzxt, Manta

PC Perspective Podcast #384 - 01/28/2016

Join us this week as we discuss the Corsair Carbide 600Q, GDDR5X, a Dual Fiji Graphics card and more!

You can subscribe to us through iTunes and you can still access it directly through the RSS page HERE.

The URL for the podcast is: http://pcper.com/podcast - Share with your friends!

Hosts: Ryan Shrout, Jeremy Hellstrom, Josh Walrath, and Allyn Malventano

Subscribe to the PC Perspective YouTube Channel for more videos, reviews and podcasts!!

AMD Shows Dual-Fiji Graphics Card in a Falcon Northwest PC at VRLA

Subject: Graphics Cards | January 25, 2016 - 11:51 AM |
Tagged: fury x2, Fiji, dual fiji, amd

Lo and behold! The dual-Fiji card that we have previously dubbed the AMD Radeon Fury X2 still lives! Based on a tweet from AMD PR dude Antal Tungler, a PC from Falcon Northwest at the VRLA convention was using a dual-GPU Fiji graphics card to power some demos.

This prototype Falcon Northwest Tiki system was housing the GPU beast, but no images were shown of the interior of the system. Still, it's good to see AMD acknowledge that this piece of hardware still exists, since it was initially promised to the enthusiast market by "fall of 2015." Back in October we saw hints that the card might be coming soon, when some shipping manifests leaked out to the web.

dualfuryken_0.jpg

Better late than never, right? One theory floating around the offices here is that AMD will release the Fury X2 alongside the VR headsets coming out this spring, with hopes of making it THE VR graphics card of choice. Multi-GPU is an interesting fit for VR, with one GPU dedicated to each eye, though the pitfalls that could haunt both AMD and NVIDIA in this regard (latency, frame time consistency) leave the real-world value of that capability up for debate.

Source: Twitter

GDDR5X Memory Standard Gets Official with JEDEC

Subject: Graphics Cards, Memory | January 22, 2016 - 11:08 AM |
Tagged: Polaris, pascal, nvidia, jedec, gddr5x, GDDR5, amd

Though information about the technology has been making the rounds over the last several weeks, GDDR5X finally became official with an announcement from JEDEC this morning. The JEDEC Solid State Technology Association is, as Wikipedia tells us, an "independent semiconductor engineering trade organization and standardization body" responsible for creating memory standards. Getting the official nod from the org means we are likely to see implementations of GDDR5X in the near future.

The press release is short and sweet. Take a look.

ARLINGTON, Va., USA – JANUARY 21, 2016 –JEDEC Solid State Technology Association, the global leader in the development of standards for the microelectronics industry, today announced the publication of JESD232 Graphics Double Data Rate (GDDR5X) SGRAM.  Available for free download from the JEDEC website, the new memory standard is designed to satisfy the increasing need for more memory bandwidth in graphics, gaming, compute, and networking applications.

Derived from the widely adopted GDDR5 SGRAM JEDEC standard, GDDR5X specifies key elements related to the design and operability of memory chips for applications requiring very high memory bandwidth.  With the intent to address the needs of high-performance applications demanding ever higher data rates, GDDR5X  is targeting data rates of 10 to 14 Gb/s, a 2X increase over GDDR5.  In order to allow a smooth transition from GDDR5, GDDR5X utilizes the same, proven pseudo open drain (POD) signaling as GDDR5.

“GDDR5X represents a significant leap forward for high end GPU design,” said Mian Quddus, JEDEC Board of Directors Chairman.  “Its performance improvements over the prior standard will help enable the next generation of graphics and other high-performance applications.”

JEDEC claims that, by using the same signaling type as GDDR5, GDDR5X is able to double the per-pin data rate to 10-14 Gb/s. In fact, based on leaked slides about GDDR5X from October, JEDEC actually calls GDDR5X an extension to GDDR5 rather than a new standard. How does GDDR5X reach these new speeds? By doubling the prefetch from 32 bytes to 64 bytes. This will require a redesign of the memory controller for any processor that wants to integrate it.

gddr5x.jpg

Image source: VR-Zone.com
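To put those per-pin numbers in context, here is the simple peak-bandwidth math. The 256-bit bus width and the 7 Gb/s GDDR5 baseline are my own illustrative assumptions for comparison; the JEDEC announcement only specifies per-pin data rates, not any particular card configuration:

```python
# Theoretical peak memory bandwidth = per-pin data rate x bus width / 8.
# The 256-bit bus and 7 Gb/s GDDR5 baseline are assumptions used purely
# for illustration; JEDEC's announcement covers per-pin rates only.

def peak_bandwidth_gb_s(data_rate_gbps: float, bus_width_bits: int) -> float:
    """Peak bandwidth in GB/s for a given per-pin rate and memory bus width."""
    return data_rate_gbps * bus_width_bits / 8

BUS_WIDTH_BITS = 256  # assumed for illustration

for label, rate in (("GDDR5  @  7 Gb/s", 7.0),
                    ("GDDR5X @ 10 Gb/s", 10.0),
                    ("GDDR5X @ 14 Gb/s", 14.0)):
    gb_s = peak_bandwidth_gb_s(rate, BUS_WIDTH_BITS)
    print(f"{label} on a {BUS_WIDTH_BITS}-bit bus -> {gb_s:5.0f} GB/s peak")
```

Those are theoretical peaks for raw transfers; as the next paragraph notes, the unchanged bus width and the larger 64-byte prefetch mean real workloads won't necessarily see gains that scale as cleanly as the per-pin figures imply.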

As for usable bandwidth, the press release doesn't quote figures directly, but the increase in practice will likely be smaller than the per-pin numbers suggest. The memory bus width remains unchanged, and GDDR5X simply grabs twice as much data per prefetch, so we should expect a more incremental change in real workloads. Power efficiency isn't mentioned either, and that was one of the driving factors in the development of HBM.

07-bwperwatt.jpg

Performance efficiency graph from AMD's HBM presentation

I am excited about any improvement in memory technology that will increase GPU performance, but I can tell you that from my conversations with both AMD and NVIDIA, no one appears to be jumping at the chance to integrate GDDR5X into upcoming graphics cards. That doesn't mean it won't happen with some version of Polaris or Pascal, but it seems that there may be concerns other than bandwidth that keep it from taking hold. 

Source: JEDEC

Podcast #383 - Acer Predator X34, ASUS X99-M, AMD Q4 Earnings and more!

Subject: General Tech | January 21, 2016 - 02:34 PM |
Tagged: x99-m, X170, X150, video, Silent Base 800, Q4 2015, Predator X34, podcast, gigabyte, g-sync, freesync, earnings, be quiet, asus, amd, acer

PC Perspective Podcast #383 - 01/21/2016

Join us this week as we discuss the Acer Predator X34, ASUS X99-M, AMD Q4 Earnings and more!

You can subscribe to us through iTunes and you can still access it directly through the RSS page HERE.

The URL for the podcast is: http://pcper.com/podcast - Share with your friends!

Hosts: Ryan Shrout, Jeremy Hellstrom, Josh Walrath, and Allyn Malventano

Subscribe to the PC Perspective YouTube Channel for more videos, reviews and podcasts!!