Sony plans PlayStation NEO with massive APU hardware upgrade

Subject: Graphics Cards, Processors | April 19, 2016 - 11:21 AM |
Tagged: sony, ps4, Playstation, neo, giant bomb, APU, amd

Based on a new report from Giant Bomb, Sony is set to release a new console this year with upgraded processing power and a focus on 4K capabilities, code-named NEO. We have been hearing for several weeks that both Microsoft and Sony were planning partial-generation upgrades, but now specifics of Sony's update have started leaking out, if you believe the reports.

Giant Bomb isn't known for tossing around speculation and tends to only report details it can safely confirm. Austin Walker says "multiple sources have confirmed for us details of the project, which is internally referred to as the NEO." 


The current PlayStation 4 APU
Image source: iFixIt.com

There are plenty of interesting details in the story, including Sony's determination not to split the user base across multiple consoles: developers will be required to ship both a mode for the "base" PS4 and one for NEO. But most interesting to us is the possible hardware upgrade.

The NEO will feature a higher clock speed than the original PS4, an improved GPU, and higher bandwidth on the memory. The documents we've received note that the HDD in the NEO is the same as that in the original PlayStation 4, but it's not clear if that means in terms of capacity or connection speed.

...

Games running in NEO mode will be able to use the hardware upgrades (and an additional 512 MiB in the memory budget) to offer increased and more stable frame rate and higher visual fidelity, at least when those games run at 1080p on HDTVs. The NEO will also support 4K image output, but games themselves are not required to be 4K native.

Giant Bomb even has details on the architectural changes.

|  | Shipping PS4 | PS4 "NEO" |
| --- | --- | --- |
| CPU | 8 Jaguar Cores @ 1.6 GHz | 8 Jaguar Cores @ 2.1 GHz |
| GPU | AMD GCN, 18 CUs @ 800 MHz | AMD GCN+, 36 CUs @ 911 MHz |
| Stream Processors | 1152 SPs (~HD 7870 equiv.) | 2304 SPs (~R9 390 equiv.) |
| Memory | 8GB GDDR5 @ 176 GB/s | 8GB GDDR5 @ 218 GB/s |
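Those bandwidth figures are consistent with the PS4's known 256-bit GDDR5 bus; note that the effective data rates in this quick sanity check are back-solved from the bandwidth numbers, not stated anywhere in the report:

```python
# GDDR5 peak bandwidth = (bus width in bytes) x (effective data rate).
# The 256-bit bus is known from PS4 teardowns; the data rates below are
# inferred from the 176 and 218 GB/s figures, so treat them as assumptions.
def gddr5_bandwidth_gbs(bus_width_bits, data_rate_gtps):
    """Peak bandwidth in GB/s for a GDDR5 memory interface."""
    return bus_width_bits / 8 * data_rate_gtps

print(gddr5_bandwidth_gbs(256, 5.5))     # 176.0 -> shipping PS4
print(gddr5_bandwidth_gbs(256, 6.8125))  # 218.0 -> rumored NEO
```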

(We actually did a full video teardown of the PS4 on launch day!)

If the compute unit count in the GB report is right, then the PS4 NEO will have 2,304 stream processors running at 911 MHz, giving it performance nearing that of a consumer Radeon R9 390 graphics card. The R9 390 has 2,560 SPs running at around 1.0 GHz, so while the NEO would be slower, it would be a substantial upgrade over the current PS4 hardware and the Xbox One. Memory bandwidth on NEO would still be much lower than the desktop add-in card's (218 GB/s vs 384 GB/s).
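For back-of-the-envelope purposes: a GCN compute unit contains 64 stream processors, and each SP retires two FLOPs per clock via fused multiply-add, so the rumored specs work out roughly like this (the R9 390 numbers are the desktop card's reference specs, not from the report):

```python
# Peak FP32 throughput for a GCN GPU: stream processors x clock x 2 FLOPs.
def peak_tflops(stream_processors, clock_ghz):
    return stream_processors * clock_ghz * 2 / 1000

ps4   = peak_tflops(18 * 64, 0.800)  # ~1.84 TFLOPS (1152 SPs)
neo   = peak_tflops(36 * 64, 0.911)  # ~4.20 TFLOPS (2304 SPs)
r9390 = peak_tflops(2560, 1.000)     # ~5.12 TFLOPS
print(ps4, neo, r9390)
```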


Could Sony's NEO platform rival the R9 390?

If the NEO hardware is based on the Grenada / Hawaii GPU design, there are some interesting questions to ask. With the push into 4K that we expect with the upgraded PlayStation, it would be painful if the GPU didn't natively support HDMI 2.0 (4K @ 60 Hz). Given the modularity of current semi-custom APU designs, it is likely that AMD could swap the display controller on NEO for one that supports HDMI 2.0, even though no shipping consumer graphics card in the 300-series does so.
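The 4K @ 60 Hz limitation is ultimately a pixel-clock problem; the figures below come from the HDMI and CTA-861 specs rather than anything in the NEO report:

```python
# The standard 3840x2160 @ 60 Hz timing uses a 594 MHz pixel clock (including
# blanking intervals), while HDMI 1.4b TMDS links top out at 340 MHz -- enough
# only for 4K @ 30 Hz. HDMI 2.0 raises the ceiling to 600 MHz.
PIXEL_CLOCK_4K60_MHZ = 594
HDMI_1_4_MAX_MHZ = 340
HDMI_2_0_MAX_MHZ = 600

print(PIXEL_CLOCK_4K60_MHZ <= HDMI_1_4_MAX_MHZ)  # False: 4K60 won't fit
print(PIXEL_CLOCK_4K60_MHZ <= HDMI_2_0_MAX_MHZ)  # True: HDMI 2.0 required
```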

It is also possible that NEO is based on the upcoming AMD Polaris GPU architecture, which supports HDR and HDMI 2.0 natively. That would be a much more impressive feat for both Sony and AMD, as we have yet to see Polaris released in any consumer GPU. Couple that with the variables of 14/16nm FinFET process production and you have a complicated production pipeline that would need significant monitoring. It would potentially lower build costs and power consumption for the NEO device, but I would be surprised if Sony wanted to take a chance on the first generation of tech from AMD / Samsung / GLOBALFOUNDRIES.

However, recent rumors swirling around a June announcement of the Polaris-based Radeon R9 480 claim it will have 2,304 stream processors, perfectly matching the NEO specs above.


New features of the AMD Polaris architecture due this summer

There is a lot Sony and game developers could do with roughly twice the GPU compute capability on a console like NEO. It could make PlayStation VR a much more comparable platform to the Oculus Rift and HTC Vive, though the requirement to keep supporting the original PS4 platform might hinder the upgrade path.

The other obvious use is to upgrade the image quality and/or rendering resolution of current games and games in development, or simply to improve the frame rate, an area where many current-generation console titles seem to have been slipping.

In the documents we’ve received, Sony offers suggestions for reaching 4K/UltraHD resolutions for NEO mode game builds, but they're also giving developers a degree of freedom with how to approach this. 4K TV owners should expect the NEO to upscale games to fit the format, but one place Sony is unwilling to bend is on frame rate. Throughout the documents, Sony repeatedly reminds developers that the frame rate of games in NEO Mode must meet or exceed the frame rate of the game on the original PS4 system.

There is still plenty to read in the Giant Bomb report, and I suggest you head over and do so. If you thought the summer was going to be interesting solely because of new GPU releases from AMD and NVIDIA, it appears that Sony and Microsoft have their own agenda as well.

Source: Giant Bomb

Podcast #395 - AMD Driver Quality, New Intel and Micron SSDs, Corsair's SPEC-ALPHA and more!

Subject: General Tech | April 14, 2016 - 12:42 PM |
Tagged: video, TMX, Thrustmaster, podcast, omega, micron, Lian-Li, Intel, game ready, crimson, catalyst, bx300, amd

PC Perspective Podcast #395 - 04/14/2016

Join us this week as we discuss AMD Driver Quality, New Intel and Micron SSDs, Corsair's SPEC-ALPHA and more!

You can subscribe to us through iTunes and you can still access it directly through the RSS page HERE.

The URL for the podcast is: http://pcper.com/podcast - Share with your friends!

This episode of the PC Perspective Podcast is sponsored by Lenovo!

Hosts: Ryan Shrout, Jeremy Hellstrom, Josh Walrath, Allyn Malventano, and Sebastian Peak

Subscribe to the PC Perspective YouTube Channel for more videos, reviews and podcasts!!

AMD Radeon Crimson Edition drivers continue quality improvement

Subject: Graphics Cards | April 11, 2016 - 11:23 AM |
Tagged: rtg, radeon technologies group, radeon, driver, crimson, amd

For longer than AMD would like to admit, Radeon drivers and software were criticized for persistent issues with performance, stability, and features. As the graphics card market evolved and software became a critical part of the equation, that deficit hurt AMD substantially.

In fact, despite the advantages that modern AMD Radeon parts typically have over GeForce options in terms of pure frame rate for your dollar, I recommended an NVIDIA GeForce GTX 970, 980 and 980 Ti for our three different VR Build Guides last month ($900, $1500, $2500) in large part due to confidence in NVIDIA’s driver team to continue delivering updated drivers to provide excellent experiences for gamers.

But back in September of 2015 we started to see changes inside AMD. There was a drastic reorganization of the company and its leadership. AMD set up the Radeon Technologies Group, a new entity inside the organization with complete control over graphics hardware and software direction. And it put one of the most respected people in the industry at its helm: Raja Koduri. On November 24th AMD launched Radeon Software Crimson, a totally new branding, style and implementation to control your Radeon GPU. I talked about it at the time, but the upgrade was noticeable; everything was faster, easier to find and…pretty.

Since then, AMD has rolled out several new drivers with key feature additions, improvements and, of course, game performance increases. Thus far in 2016 the Radeon Technologies Group has released 7 new drivers, three of which have been WHQL certified. Compare that to the same period last year, when AMD released zero WHQL drivers and just one driver in total during Q1 2015.


Maybe most important of all, the team at Radeon Technologies Group claims to be putting a new emphasis on “day one” support for major PC titles. If implemented correctly, this gives enthusiasts and PC gamers that want to stay on the cutting edge of releases the ability to play optimized titles on the day of release. Getting updated drivers that fix bugs and improve performance weeks or months after release is great, but for gamers that may already be done with that game, the updates are worthless. AMD was guilty of this practice for years, having driver updates that would fix performance issues on Radeon hardware for reviewer testing but that missed the majority of the play time of early adopting consumers.


Thus far, AMD has only just started down this path. Newer games like Far Cry Primal, The Division, Hitman and Ashes of the Singularity all had drivers from AMD on or before release with performance improvements, CrossFire profiles or both. A few others were close to day-one ready, including Rise of the Tomb Raider, Plants vs Zombies 2 and Gears of War Ultimate Edition.

 

| Game | Release Date | First Driver Mention | Driver Date | Feature / Support |
| --- | --- | --- | --- | --- |
| Rise of the Tomb Raider | 01-28-2016 | 16.1.1 | 02-05-2016 | Performance and CrossFire Profile |
| Plants vs Zombies 2 | 02-23-2016 | 16.2.1 | 03-01-2016 | Performance |
| Gears Ultimate Edition | 03-01-2016 | 16.3 | 03-10-2016 | Performance |
| Far Cry Primal | 03-01-2016 | 16.2.1 | 03-01-2016 | CrossFire Profile |
| The Division | 03-08-2016 | 16.1 | 02-25-2016 | CrossFire Profile |
| Hitman | 03-11-2016 | 16.3 | 03-10-2016 | Performance, CrossFire Profile |
| Need for Speed | 03-15-2016 | 16.3.1 | 03-18-2016 | Performance, CrossFire Profile |
| Ashes of the Singularity | 03-31-2016 | 16.2 | 02-25-2016 | Performance |

 

AMD claims that the push for this “day one” experience will continue going forward, pointing at a 35% boost in performance in Quantum Break between Radeon Crimson 16.3.2 and 16.4.1. There will be plenty of opportunities in the coming weeks and months to test AMD (and NVIDIA) on this “day one” focus with PC titles that will have support for DX12, UWP and VR.

The software team at RTG has also added quite a few interesting features since the release of the first Radeon Crimson driver. Support for the Vulkan API and a DX12 capability called Quick Response Queue, along with new additions to the Radeon settings (Per-game display scaling, CrossFire status indicator, power efficiency toggle, etc.) are just a few.


Critical for consumers buying into VR, the Radeon Crimson drivers launched with support for the Oculus Rift and HTC Vive alongside the headsets themselves. Both of these new virtual reality systems put significant strain on the GPU of a modern PC, and properly implementing support for techniques like timewarp is crucial to a good user experience. Though Oculus and HTC / Valve were using NVIDIA-based systems more or less exclusively during our time at the Game Developers Conference last month, AMD still has approved platforms and software from both vendors. In fact, in a recent change to the HTC Vive minimum specifications, Valve retroactively added the Radeon R9 280 to the list, giving AMD a slight edge in component pricing.

AMD was also the first to enable full support for external graphics solutions like the Razer Core external enclosure in its drivers with XConnect. We wrote about that release in early March, and I’m eager to get my hands on a product combo to give it a shot. As of this writing and after talking with Razer, NVIDIA had still not fully implemented external GPU functionality for hot/live device removal.

When looking for some acceptance metric, AMD pointed us to a survey it ran to measure approval of and satisfaction with Crimson. After 1,700+ submissions, the score customers gave was 4.4 out of 5.0 - pretty significant praise, even coming from AMD customers. We don't know exactly how the poll was run or where it was posted, but the Crimson driver release has definitely improved the perception of Radeon drivers among many enthusiasts.

I’m not going to sit here and try to convince everyone that AMD is absolved of past sins and that we should immediately be converted into believers. What I can say is that the Radeon Technologies Group is moving in the right direction, down a path that shows a change in leadership and a change in mindset. I talked in September about the respect I had for Raja Koduri and interviewed him after AMD’s Capsaicin event at GDC; you can already start to see the changes he is making inside this division. He has put a priority on software, not just on making it look pretty, but promising to make good on proper multi-GPU support, improved timeliness of releases and innovative features. AMD and RTG still have a ways to go before they can unwind years of negativity, but the groundwork is there.

The company and every team member has a sizeable task ahead of them as we approach the summer. The Radeon Technologies Group will depend on the Polaris architecture and its products to swing the pendulum back against NVIDIA, gaining market share, mind share and respect. From what we have seen, Polaris looks impressive and differentiates itself from Hawaii and Fiji fairly dramatically. But this product was already well baked before Raja got total control, and we might have to see another generation pass before the GPU portfolio fully reflects the change around the institution. NVIDIA isn’t sitting idle either: the Pascal architecture also promises improved performance, while leaning on the work and investment in software and drivers that have made NVIDIA the dominant market leader it is today.

I’m looking forward to working with AMD throughout 2016 on what promises to be an exciting and market-shifting time period.

Podcast #394 - Measuring VR Performance, NVIDIA's Pascal GP100, Bristol Ridge APUs and more!

Subject: General Tech | April 7, 2016 - 02:47 PM |
Tagged: VR, vive, video, tesla p100, steamvr, Spectre 13.3, rift, podcast, perfmon, pascal, Oculus, nvidia, htc, hp, GP100, Bristol Ridge, APU, amd

PC Perspective Podcast #394 - 04/07/2016

Join us this week as we discuss measuring VR Performance, NVIDIA's Pascal GP100, Bristol Ridge APUs and more!

You can subscribe to us through iTunes and you can still access it directly through the RSS page HERE.

The URL for the podcast is: http://pcper.com/podcast - Share with your friends!

This episode of the PC Perspective Podcast is sponsored by Lenovo!

Hosts: Ryan Shrout, Jeremy Hellstrom, Josh Walrath and Allyn Malventano

Subscribe to the PC Perspective YouTube Channel for more videos, reviews and podcasts!!

AMD Pre-Announces 7th Gen A-Series SOC

Subject: Processors | April 5, 2016 - 06:30 AM |
Tagged: mobile, hp, GCN, envy, ddr4, carrizo, Bristol Ridge, APU, amd, AM4

Today AMD is “pre-announcing” their latest 7th generation APU.  Codenamed “Bristol Ridge”, this new SOC is based on the Excavator architecture featured in the previous Carrizo series of products.  AMD provided very few details as to what is new and different in Bristol Ridge compared to Carrizo, but they did offer a few nice hints.


They were able to provide a die shot of the new Bristol Ridge APU, and at first glance there appear to be some interesting differences between it and the previous Carrizo. Unfortunately, there really are no changes that we can see from this shot. Those new functional units that you are tempted to speculate about? For some reason AMD decided to widen out the shot of this die. Those extra units around the border? They are the adjacent dies on the wafer. I was bamboozled at first, but happily Marc Sauter pointed it out to me. No new functional units for you!


This is the Carrizo shot. It is functionally identical to what we see with Bristol Ridge.

AMD appears to be using the same 28 nm HKMG process from GLOBALFOUNDRIES.  This is not going to give AMD much of a jump, but from what we hear in the industry, GLOBALFOUNDRIES and others have put an impressive amount of work into several generations of 28 nm products.  TSMC is on its third iteration of the node, with improved power and clock capabilities.  GLOBALFOUNDRIES has continued to improve its particular process, and Bristol Ridge is likely the last APU built on that node.


All of the competing chips are rated at 15 watts TDP. Intel has the compute advantage, but AMD is cleaning up when it comes to graphics.

The company has also continued to improve upon its power gating and clocking technologies to keep TDPs low yet performance high.  AMD recently released the Godavari APUs, which exhibit better clocking and power characteristics than the previous Kaveri.  Little was done to the actual design; rather, improved process tech and better clock control algorithms achieved these advances.  It appears as though AMD has continued this trend with Bristol Ridge.

We likely are not seeing per-clock increases, but rather higher and longer-sustained clockspeeds providing the performance boost between Carrizo and Bristol Ridge.  In these benchmarks AMD is using 15 watt TDP products.  These are mobile chips, and any power improvements will show up as significant gains in overall performance.  Bristol Ridge is still a native quad core part with what looks to be an 8 compute unit GCN graphics portion.


Again with all three products at a 15 watt TDP we can see that AMD is squeezing every bit of performance it can with the 28 nm process and their Excavator based design.

The basic core and GPU design look relatively unchanged, but obviously there were a lot of tweaks applied to give the better performance at comparable TDPs.  

AMD is announcing this along with the first product that will feature the APU: the HP Envy x360.  This convertible tablet offers some very nice features and looks to be one of the better implementations AMD has seen for its latest APUs.  Carrizo had some wins, but taking market share back from Intel in the mobile space has been torturous at best. AMD obviously hopes that Bristol Ridge in the sub-35 watt range will continue the fight for the company in this important market.  Perhaps one of the more interesting features is the option for a PCIe SSD.  Hopefully AMD will send out a few samples so we can see what a more “premium” type of convertible can do with AMD silicon.


The HP Envy X360 convertible in all of its glory.

Bristol Ridge will be coming to the AM4 socket infrastructure in what appears to be a Computex timeframe.  These parts will of course feature higher TDPs than what we are seeing here with the 15 watt unit that was tested.  It seems at that time AMD will announce the full lineup from top to bottom and start seeding the market with AM4 boards that will eventually house the “Zen” CPUs that will show up in late 2016.

Source: AMD

AMD Brings Dual Fiji and HBM Memory To Server Room With FirePro S9300 x2

Subject: Graphics Cards | April 5, 2016 - 02:13 AM |
Tagged: HPC, hbm, gpgpu, firepro s9300x2, firepro, dual fiji, deep learning, big data, amd

Earlier this month AMD launched a dual Fiji powerhouse for VR gamers it is calling the Radeon Pro Duo. Now, AMD is bringing its latest GCN architecture and HBM memory to servers with the dual GPU FirePro S9300 x2.


The new server-bound professional graphics card packs an impressive amount of computing hardware into a dual-slot card with passive cooling. The FirePro S9300 x2 combines two full Fiji GPUs clocked at 850 MHz for a total of 8,192 cores, 512 TUs, and 128 ROPs. Each GPU is paired with 4GB of non-ECC HBM memory on package with 512GB/s of memory bandwidth which AMD combines to advertise this as the first professional graphics card with 1TB/s of memory bandwidth.

Due to lower clockspeeds, the S9300 x2 has lower peak single precision compute performance than the consumer Radeon Pro Duo: 13.9 TFLOPS against 16 TFLOPS for the desktop card. Businesses will be able to cram more cards into their rack-mounted servers, though, since they do not need to worry about mounting locations for the sealed-loop water cooling of the Radeon card.

|  | FirePro S9300 x2 | Radeon Pro Duo | R9 Fury X | FirePro S9170 |
| --- | --- | --- | --- | --- |
| GPU | Dual Fiji | Dual Fiji | Fiji | Hawaii |
| GPU Cores | 8192 (2 x 4096) | 8192 (2 x 4096) | 4096 | 2816 |
| Rated Clock | 850 MHz | 1050 MHz | 1050 MHz | 930 MHz |
| Texture Units | 2 x 256 | 2 x 256 | 256 | 176 |
| ROP Units | 2 x 64 | 2 x 64 | 64 | 64 |
| Memory | 8GB (2 x 4GB) | 8GB (2 x 4GB) | 4GB | 32GB ECC |
| Memory Clock | 500 MHz | 500 MHz | 500 MHz | 5000 MHz |
| Memory Interface | 4096-bit (HBM) per GPU | 4096-bit (HBM) per GPU | 4096-bit (HBM) | 512-bit |
| Memory Bandwidth | 1TB/s (2 x 512GB/s) | 1TB/s (2 x 512GB/s) | 512 GB/s | 320 GB/s |
| TDP | 300 watts | ? | 275 watts | 275 watts |
| Peak Compute | 13.9 TFLOPS | 16 TFLOPS | 8.60 TFLOPS | 5.24 TFLOPS |
| Transistor Count | 17.8B | 17.8B | 8.9B | 8.0B |
| Process Tech | 28nm | 28nm | 28nm | 28nm |
| Cooling | Passive | Liquid | Liquid | Passive |
| MSRP | $6000 | $1499 | $649 | $4000 |
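The headline figures above fall straight out of the clocks using standard GCN arithmetic (peak FP32 = shaders x clock x 2 FLOPs; HBM1 runs a 4096-bit bus at 500 MHz double data rate) — a quick check, not anything from AMD's release:

```python
# FirePro S9300 x2 sanity check: two Fiji GPUs, 4096 shaders each at 850 MHz.
def tflops(shaders, clock_ghz):
    return shaders * clock_ghz * 2 / 1000  # 2 FLOPs/clock via FMA

s9300x2_tflops = 2 * tflops(4096, 0.850)     # ~13.93 TFLOPS, quoted as 13.9

# HBM1: 4096-bit interface per GPU at 500 MHz DDR (1 GT/s effective).
hbm_per_gpu_gbs = 4096 / 8 * 1.0             # 512.0 GB/s per GPU
card_total_tbs = 2 * hbm_per_gpu_gbs / 1000  # ~1 TB/s, as advertised
print(s9300x2_tflops, hbm_per_gpu_gbs)
```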

AMD is aiming this card at datacenter and HPC users working on "big data" tasks that do not require the accuracy of double precision floating point calculations. Deep learning tasks, seismic processing, and data analytics are all examples AMD says the dual GPU card will excel at. These are all tasks that can be greatly accelerated by the massively parallel nature of a GPU but do not need the precision of the stricter mathematics, modeling, and simulation work that depends on FP64 performance. In that respect, the FirePro S9300 x2 has only 870 GFLOPS of double precision compute performance.
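That 870 GFLOPS figure is consistent with Fiji's 1:16 FP64-to-FP32 rate — a known trait of the consumer Fiji configuration, though the ratio is our inference rather than something AMD states here:

```python
# Fiji executes double precision at 1/16 the single precision rate, so the
# card's FP64 throughput follows directly from its 13.9 TFLOPS FP32 figure.
fp32_tflops = 2 * 4096 * 0.850 * 2 / 1000  # ~13.93 TFLOPS across both GPUs
fp64_gflops = fp32_tflops / 16 * 1000
print(round(fp64_gflops))  # 870
```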

Further, this card supports a GPGPU-optimized Linux driver stack under the GPUOpen initiative, and developers can program for it using either OpenCL (it supports OpenCL 1.2) or C++. AMD PowerTune and the return of FP16 support are also included. AMD claims that its new dual GPU card is twice as fast as the NVIDIA Tesla M40 (1.6x the K80) and 12 times as fast as the latest Intel Xeon E5 in peak single precision floating point performance.

The double slot card is powered by two PCI-E power connectors and is rated at 300 watts. This is a bit more palatable than the triple 8-pin needed for the Radeon Pro Duo!

The FirePro S9300 x2 comes with a 3 year warranty and will be available in the second half of this year for $6,000 USD. You are definitely paying a premium for the professional certifications and support. Here's hoping developers come up with some cool uses for the dual 8.9 billion transistor GPUs and their included HBM memory!

Source: AMD

AMD and NVIDIA release drivers for Oculus Rift launch day!

Subject: Graphics Cards | March 28, 2016 - 10:20 AM |
Tagged: vive, valve, steamvr, rift, Oculus, nvidia, htc, amd

As the first Oculus Rift retail units begin hitting hands in the US and abroad, both AMD and NVIDIA have released new drivers to help gamers ease into the world of VR gaming. 

Up first is AMD, with Radeon Software Crimson Edition 16.3.2. It adds support for Oculus SDK v1.3 and the Radeon Pro Duo...for all none of you that have that product in your hands. AMD claims that this driver will offer "the most stable and compatible driver for developing VR experiences on the Rift to-date." AMD tells us that the latest implementation of LiquidVR features in the software help the SDKs and VR games at release take better advantage of AMD Radeon GPUs. This includes capabilities like asynchronous shaders (which AMD thinks should be capitalized for some reason??) and Quick Response Queue (which I think refers to the ability to process without context change penalties) to help Oculus implement Asynchronous Timewarp.


NVIDIA's release is a bit more substantial, with GeForce Game Ready 364.72 WHQL drivers adding support for the Oculus Rift, HTC Vive and improvements for Dark Souls III, Killer Instinct, Paragon early access and even Quantum Break.

For the optimum experience when using the Oculus Rift, and when playing the thirty games launching alongside the headset, upgrade to today's VR-optimized Game Ready driver. Whether you're playing Chronos, Elite Dangerous, EVE: Valkyrie, or any of the other VR titles, you'll want our latest driver to minimize latency, improve performance, and add support for our newest VRWorks features that further enhance your experience.

Today's Game Ready driver also supports the HTC Vive Virtual Reality headset, which launches next week. As with the Oculus Rift, our new driver optimizes and improves the experience, and adds support for the latest Virtual Reality-enhancing technology.

Good to see both GPU vendors giving us new drivers for the release of the Oculus Rift...let's hope it pans out well and the response from the first buyers is positive!

Red matter and green blood; Vulkan runs on Linux

Subject: Graphics Cards | March 24, 2016 - 02:04 PM |
Tagged: Ubuntu 16.04, linux, vulkan, amd, nvidia

Last week AMD released a new GPU-PRO beta driver stack and this Monday NVIDIA released the 364.12 beta driver, both of which support Vulkan, which meant that Phoronix had a lot of work to do.  Up for testing were the GTX 950, 960, 970, 980, and 980 Ti as well as the R9 Fury, 290 and 285.  Naturally, they used The Talos Principle as the test; their results compare not only the cards but also the performance delta between OpenGL and Vulkan, and they finished up with several OpenGL benchmarks to see if there were any performance improvements from the new drivers.  The results look good for Vulkan, as it beats OpenGL across the board, as you can see in the review.


"Thanks to AMD having released their new GPU-PRO "hybrid" Linux driver a few days ago, there is now Vulkan API support for Radeon GPU owners on Linux. This new AMD Linux driver holds much potential and the closed-source bits are now limited to user-space, among other benefits covered in dozens of Phoronix articles over recent months. With having this new driver in hand plus NVIDIA promoting their Vulkan support to the 364 Linux driver series, it's a great time for some benchmarking. Here are OpenGL and Vulkan atop Ubuntu 16.04 Linux for both AMD Radeon and NVIDIA GeForce graphics cards."

Here are some more Graphics Card articles from around the web:

Graphics Cards

Source: Phoronix

Podcast #391 - AMD's news from GDC, the MSI Vortex, and Q&A!

Subject: General Tech | March 17, 2016 - 11:07 PM |
Tagged: podcast, video, amd, XConnect, gdc 2016, Vega, Polaris, navi, razer blade, Sulon Q, Oculus, vive, raja koduri, GTX 1080, msi, vortex, Intel, skulltrail, nuc

PC Perspective Podcast #391 - 03/17/2016

Join us this week as we discuss the AMD's news from GDC, the MSI Vortex, and Q&A!

You can subscribe to us through iTunes and you can still access it directly through the RSS page HERE.

The URL for the podcast is: http://pcper.com/podcast - Share with your friends!

Hosts: Ryan Shrout, Jeremy Hellstrom, Josh Walrath, and Allyn Malventano

Subscribe to the PC Perspective YouTube Channel for more videos, reviews and podcasts!!

Author:
Manufacturer: AMD

Some Hints as to What Comes Next

On March 14 at the Capsaicin event at GDC AMD disclosed their roadmap for GPU architectures through 2018.  There were two new names in attendance as well as some hints at what technology will be implemented in these products.  It was only one slide, but some interesting information can be inferred from what we have seen and what was said in the event and afterwards during interviews.

Polaris is the next generation of GCN products from AMD and has been shown off for the past few months.  Previously, in December and at CES, we saw the Polaris 11 GPU on display.  Very little is known about this product except that it is small and extremely power efficient.  Last night we saw Polaris 10 being run, and we only know that it is competitive with current mainstream performance and is larger than Polaris 11.  These products are purportedly based on Samsung/GLOBALFOUNDRIES 14nm LPP.


The source of near endless speculation online.

The slide AMD showed listed Polaris as having 2.5x the performance per watt of the previous 28 nm products in AMD’s lineup.  This is impressive, but not terribly surprising.  AMD and NVIDIA both skipped the 20 nm planar node because it just did not offer the type of performance and scaling to make sense economically.  Simply put, the expense was not worth the results in terms of die size improvements and, more importantly, power scaling.  20 nm planar just could not offer the overall performance that GPU manufacturers could achieve with 2nd and 3rd generation 28 nm processes.

What was missing from the slide was any mention of Polaris integrating either HBM1 or HBM2.  Vega, the architecture after Polaris, does in fact list HBM2 as the memory technology it will be packaged with.  It promises another tick up in terms of performance per watt, but that is going to come more from aggressive design optimizations and likely improvements to FinFET process technologies.  Vega will be a 2017 product.

Beyond that we see Navi.  It again boasts an improvement in performance per watt as well as the inclusion of a new memory technology beyond HBM.  Current conjecture is that this could be HMC (Hybrid Memory Cube).  I am not entirely certain of that particular guess, as HMC does not necessarily improve upon the advantages of current generation HBM and upcoming HBM2 implementations.  Navi will not show up until 2018 at the earliest.  This *could* be a 10 nm part, but considering the struggle the industry has had getting to 14/16nm FinFET, I am not holding my breath.

AMD provided few details about these products other than what we see here.  From here on out is conjecture based upon industry trends, analysis of known roadmaps, and the limitations of the process and memory technologies that are already well known.

Click here to read the rest about AMD's upcoming roadmap!