The Expected Unexpected
Last night we received word that Raja Koduri has resigned from AMD. Raja had been on sabbatical since shortly after the launch of Vega, with the initial statement being that he would return to his position at AMD in a December/January timeframe. During this time there was some doubt as to whether Raja would in fact come back to AMD, as “sabbaticals” in the tech world often lead the individual to take stock of their situation and move on to what they consider to be greener pastures.
Raja has dropped by the PCPer offices in the past.
Initially it was thought that Raja would take the time off and then eventually jump to another company and tackle the issues there. This behavior is quite common in Silicon Valley and Raja is no stranger to this. Raja cut his teeth on 3D graphics at S3, but in 2001 he moved to ATI. While there he worked on a variety of programs including the original Radeon, the industry-changing Radeon 9700 series, and finishing up with the strong HD 4000 series of parts. During this time ATI was acquired by AMD and he became one of the top graphics gurus at that company. In 2009 he quit AMD and moved on to Apple. He was Director of Graphics Architecture at Apple, but little is known about what he actually did. During that time Apple utilized AMD GPUs and licensed Imagination Technologies graphics technology. Apple could have been working on developing their own architecture at this point, which has recently shown up in the latest iPhone products.
In 2013 Raja rejoined AMD and became a corporate VP of Visual Computing, and in 2015 he was promoted to lead the Radeon Technologies Group after Lisa Su became CEO of the company. While there Raja worked to get AMD back on an even footing under pretty strained conditions. AMD had not had the greatest of years and had seen their primary moneymakers start taking on water. AMD had competitive graphics for the most part, and the Radeon technology integrated into AMD’s APUs truly was class leading. On the discrete side AMD was able to compare favorably to NVIDIA with the HD 7000 and later R9 200 series of cards. After NVIDIA released their Maxwell based chips, AMD had a hard time keeping up. The general consensus here is that RTG saw its headcount reduced by the company-wide cuts as well as a decrease in R&D funds.
Subject: Processors | November 6, 2017 - 02:00 PM | Josh Walrath
Tagged: radeon, Polaris, mobile, kaby lake, interposer, Intel, HBM2, gaming, EMIB, apple, amd, 8th generation core
In what is probably considered one of the worst kept secrets in the industry, Intel has announced a new CPU line for the mobile market that integrates AMD’s Radeon graphics. For the past year or so rumors of such a partnership have been freely flowing, but now we finally get confirmation as to how this will be implemented and marketed.
Intel’s record on designing GPUs has been rather pedestrian. While they have kept up with the competition, a slew of small issues and incompatibilities have plagued each generation. Performance is also an issue when trying to compete with AMD’s APUs as well as discrete mobile graphics offerings from both AMD and NVIDIA. Software and driver support is another area where Intel has been unable to compete, due largely to economics and the competition’s decades of experience in this area.
Many of these significant issues have now been solved in one fell swoop. Intel has partnered with AMD’s Semi-Custom Group to develop a modern and competent GPU that can be closely connected to the Intel CPU, all the while utilizing HBM2 memory to improve overall performance. The packaging of this product utilizes Intel’s EMIB (Embedded Multi-die Interconnect Bridge) tech.
EMIB is an interposer-like technology that integrates silicon bridges into the PCB instead of relying upon a large interposer. This allows a bit more flexibility in the layout of the chips as well as lowers the z-height of the package, as there is not a large interposer sitting between the chips and the PCB. Just as interposer technology allows the use of chips from different process technologies to work seamlessly together, EMIB provides that same flexibility.
The GPU looks to be based on the Polaris architecture, which is a slight step back from AMD’s cutting-edge Vega architecture. Polaris does not implement the Infinity Fabric component that Vega does; it is more conventional in terms of data communication. It is a step beyond what AMD has provided for Sony and Microsoft, who each utilize a semi-custom design for the latest console chips. AMD is able to integrate the HBM2 controller that is featured in Vega. Using HBM2 provides a tremendous amount of bandwidth along with power savings as compared to traditional GDDR5 memory modules. It also saves dramatically on PCB space, allowing for smaller form factors.
EMIB provides nearly all of the advantages of the interposer while keeping the optimal z-height of the standard PCB substrate.
Intel did have to do quite a bit of extra work on the power side of the equation. AMD utilizes their latest Infinity Fabric for fine-grained power control in their upcoming Raven Ridge based Ryzen APUs. Intel had to modify their current hardware to be able to do much the same work with 3rd party silicon. This is no easy task, as the CPU needs to monitor and continually adjust for GPU usage in a variety of scenarios. This type of work takes time and a lot of testing to fine-tune, as well as the inevitable hardware revisions to get things working correctly. This then needs to be balanced against the GPU driver stack, which also tends to take control of power usage in mobile scenarios.
This combination of EMIB, an Intel Kaby Lake CPU, HBM2, and a current AMD GPU makes for a very interesting product for the mobile and small form factor markets. The EMIB packaging provides very fast interconnect speeds and a smaller footprint due to the integration of HBM2 memory. The mature AMD Radeon software stack for both Windows and macOS environments provides Intel with another feature with which to sell their parts in areas where previously they were not considered. The 8th Gen Kaby Lake CPU provides the very latest CPU design on an enhanced 14nm process for greater performance and better power efficiency.
This is one of those rare instances where such cooperation between intense rivals actually improves the situation for both. AMD gets a financial shot in the arm by signing a large and important customer for their Semi-Custom division. The royalty income from this partnership should also be more consistent than that from the console manufacturers, given the seasonality of console products. This will have a very material effect on AMD’s bottom line for years to come. Intel gets a solid silicon solution with higher performance than they can offer on their own, as well as the aforementioned mature software stack for multiple OSes. Finally, throw in the HBM2 memory support for better power efficiency and a smaller form factor, and it is a clear win for all parties involved.
The PCB savings plus faster interconnects will allow these chips to power smaller form factors with better performance and battery life.
One of the unknowns here is what process node the GPU portion will be manufactured on. We do not know which foundry Intel will use, or if they will keep production in-house. Currently TSMC manufactures the latest console SoCs while GLOBALFOUNDRIES handles the latest GPUs from AMD. Initially one would expect Intel to build the GPU in house, but the current rumor is that AMD will work to produce the chips with one of their traditional foundry partners. Once the chip is manufactured, it is sent to Intel to be integrated into their product.
Apple is one of the obvious candidates for this particular form factor and combination of parts. Apple has a long history with Intel on the CPU side and AMD on the GPU side. This product provides all of the solutions Apple needs to manufacture high performance products in smaller form factors. Gaming laptops also get a boost from such a combination that will offer relatively high performance with minimal power increases as well as the smaller form factor.
The potential (leaked) performance of the 8th Gen Intel CPU with Radeon Graphics.
The data above could very well be wrong about the potential performance of this combination. What we see is pretty compelling, though. The Intel/AMD product performs like a higher-end CPU with discrete GPU combo. It is faster than an NVIDIA GTX 1050 Ti and trails the GTX 1060. It is also significantly faster than a desktop AMD RX 560 part. We can also see that it is going to be much faster than the flagship 15 watt TDP AMD Ryzen 7 2700U. We do not yet know how it compares to the rumored 65 watt TDP Raven Ridge based APUs from AMD that will likely be released next year. What will be fascinating here is how much power the new Intel combination will draw as compared to the discrete solutions utilizing NVIDIA graphics.
To reiterate, this is Intel as a customer for AMD’s Semi-Custom group rather than a licensing agreement between the two companies. They are working hand in hand in developing this solution and then both profiting from it. AMD getting royalties from every Intel package sold that features this technology will have a very positive effect on earnings. Intel gets a cutting edge and competent graphics solution along with the improved software and driver support such a package includes.
Update: We have been informed that AMD is producing the chips and selling them directly to Intel for integration into these new SKUs. There are no royalties or licensing, but the Semi-Custom division should still receive the revenue for these specialized products made only for Intel.
There has been a lot of news lately about the release of cryptocurrency-specific graphics cards from both NVIDIA and AMD add-in board partners. While we covered the current cryptomining phenomenon in an earlier article, today we are taking a look at one of these cards geared towards miners.
It's worth noting that I purchased this card myself from Newegg, and neither AMD nor Sapphire is involved in this article. I saw this card pop up on Newegg a few days ago, and my curiosity got the best of me.
There has been a lot of speculation, and little official information from vendors about what these mining cards will actually entail.
From the outward appearance, it is virtually impossible to distinguish this "new" RX 470 from the previous Sapphire Nitro+ RX 470, besides the lack of additional display outputs beyond the DVI connection. Even the branding and labels on the card identify it as a Nitro+ RX 470.
In order to test the hashing rates of this GPU, we are using Claymore's Dual Miner Version 9.6 (mining Ethereum only) against a reference design RX 470, also from Sapphire.
On the reference RX 470 out of the box, we hit rates of about 21.8 MH/s while mining Ethereum.
Once we moved to the Sapphire mining card, we moved up to at least 24 MH/s from the start.
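For context, a quick back-of-the-envelope sketch of the delta between the two measured rates (exact figures will vary run to run):

```python
# Reported Ethereum hash rates (MH/s) from our Claymore 9.6 runs
reference_rx470 = 21.8   # stock reference-design RX 470
mining_rx470 = 24.0      # Sapphire mining-edition RX 470, out of the box

# Relative improvement of the mining card over the reference design
gain = (mining_rx470 - reference_rx470) / reference_rx470
print(f"Mining card advantage: {gain:.1%}")  # roughly a 10% uplift
```

That is a meaningful head start before any manual tuning, and the mining card's undervolted BIOS should stretch the gap further in efficiency terms.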
Is it time to buy that new GPU?
Testing commissioned by AMD. This means that AMD paid us for our time, but had no say in the results or presentation of them.
Earlier this week Bethesda and Arkane Studios released Prey, a first-person shooter that is a re-imagining of the 2006 game of the same name. Fans of System Shock will find a lot to love about this new title and I have found myself enamored with the game…in the name of science of course.
While doing my due diligence and performing some preliminary testing to see if we would utilize Prey for graphics testing going forward, AMD approached me to discuss this exact title. With the release of the Radeon RX 580 in April, one of the key storylines is that the card offers a reasonably priced upgrade path for users of 2+ year old hardware. With that upgrade you should see some substantial performance improvements and as I will show you here, the new Prey is a perfect example of that.
Targeting the Radeon R9 380, a graphics card that was originally released back in May of 2015, the RX 580 offers substantially better performance at a very similar launch price. The same is true for the GeForce GTX 960: launched in January of 2015, it is slightly longer in the tooth. AMD’s data shows that 80% of the users on Steam are running R9 380X or slower graphics cards and that only 10% of them upgraded in 2016. Considering the great GPUs that were available then (including the RX 480 and the GTX 10-series), it seems more and more likely that we’re going to hit an upgrade inflection point in the market.
A simple experiment was set up: does the new Radeon RX 580 offer a worthwhile upgrade path for the many users of R9 380 or GTX 960 class graphics cards (or older)?
| | Radeon RX 580 | Radeon R9 380 | GeForce GTX 960 |
|---|---|---|---|
| GPU | Polaris 20 | Tonga Pro | GM206 |
| Rated Clock | 1340 MHz | 918 MHz | 1127 MHz |
| TDP | 185 watts | 190 watts | 120 watts |
| MSRP (at launch) | $199 (4GB) | | |
Subject: Graphics Cards | April 25, 2017 - 03:11 AM | Tim Verry
Tagged: sapphire, RX 580, RX 550, pulse, Polaris, nitro+, GCN
Earlier this month Sapphire announced a new budget-oriented series of graphics cards it calls PULSE. The new series slides in below the premium Nitro+ series to offer cheaper graphics cards that retain many of the high-quality hardware components but lack the flashy extras on the coolers, come in at lower factory overclocks, and have fewer PCI-E power inputs, which, in theory, means lower overclocking headroom. The new series is currently made up of the Polaris-based Sapphire Pulse RX 580, RX 570, RX 570 ITX, and RX 550.
According to Sapphire, Pulse graphics cards use many of the same high-end components as the Nitro+ cards, including Black Diamond Chokes 4, long-lasting capacitors, fuse protection, and intelligent fan control. The new graphics cards have aluminum backplates and removable Quick Connect fans with semi-passive cooling technology that allows the fans to turn off when the card is under light load. The RX 580 and RX 570 use Dual-X coolers while the RX 570 ITX and RX 550 use single-fan shrouded coolers.
Compared to Nitro+, the coolers are a bit less flashy and there are no Nitro+ Glow LEDs. If you are not a fan of bling or do not have a windowed case, the Pulse cards might save you a bit of money while getting you most of the performance if Sapphire’s claims are accurate.
Speaking of performance, the Pulse branded graphics cards are factory overclocked, just not as much. The Sapphire Pulse RX 580 with its 2,304 cores comes with a boost clock of 1366 MHz, the RX 570 and RX 570 ITX come with GPU boost clocks of 1,284 MHz and 1,244 MHz respectively, and the RX 550 has a boost clock of 1,206 MHz. Memory clocks sit at 8,000 MHz for the RX 580 and 7,000 MHz for the remaining Pulse cards (RX 570, RX 570 ITX, and RX 550).
Along with the introduction of its new Pulse series of graphics cards, Sapphire has entered a “strategic partnership” with motherboard manufacturer ASRock. The new graphics cards are shipping now and will be available at retailers shortly. Pricing for the RX 550 isn’t available, but prices for the other cards have appeared online as follows: Pulse RX 580 8GB for $229.99, Pulse RX 580 4GB for $199.99, Pulse RX 570 for $179.99, Pulse RX 570 ITX for $169.99.
In all, the Pulse cards appear to be about $20 cheaper than the Nitro+ variant. We will have to wait and see if those prices hold up once retailers get stock in.
Subject: Graphics Cards | April 24, 2017 - 06:08 PM | Jeremy Hellstrom
Tagged: linux, RX 580, amd, overclocking, Polaris
Phoronix have had a chance to test out the refreshed Polaris RX 580 on the Linux 4.11 kernel and Mesa 17.1-devel; thanks to interesting timing, the AMDGPU-PRO 17.10 driver update was not initially included. The performance deltas are as you would expect: a slight increase in performance relative to the increased clock speeds, just as when run on Windows. They also had a chance to try overclocking the new card; AMD added support for overclocking GCN 1.2 and newer cards on their Linux driver in 2016. They managed to increase the core clock by 6% without running into stability issues; however, when they overclocked the memory, they saw serious performance decreases. Check out the steps they tried along with the results from the overclocked GPU here.
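As a rough sketch of what that stable 6% core overclock amounts to, assuming the RX 580's 1340 MHz rated boost clock as the baseline (the exact clocks reached are in Phoronix's write-up):

```python
# AMD's Linux driver exposes the core overclock as a simple percentage;
# applying the stable +6% Phoronix reached to the RX 580's rated boost clock:
rated_clock_mhz = 1340   # RX 580 rated boost clock (assumed baseline)
overclock_percent = 6    # the stable core overclock from the article

oc_clock_mhz = rated_clock_mhz * (1 + overclock_percent / 100)
print(f"Overclocked core: {oc_clock_mhz:.0f} MHz")  # ~1420 MHz
```

A roughly 80 MHz bump, which lines up with the modest performance gains they observed on the core side.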
"Yesterday I posted the initial Radeon RX 580 Linux benchmarks while now with having more time with this "Polaris Evolved" card I've been able to try out a bit more, like the AMDGPU Linux overclocking support. Here are the ups and downs of overclocking the Radeon graphics card under Linux."
Here are some more Graphics Card articles from around the web:
- Sapphire Nitro+ Radeon RX 570 4GB @ eTeknix
- PowerColor Red Devil Radeon RX 570 4GB @ eTeknix
- XFX RX 460 4GB Slim Single Review @ OCC
- Palit GeForce GTX 1080 Ti GameRock Premium 11 GB @ techPowerUp
- MSI GeForce GTX 1080 GAMING X PLUS 8G @ Guru of 3D
- MSI GTX 1080 Gaming X Plus 11 Gbps 8 GB @ techPowerUp
- Gigabyte Aorus GTX 1080 Ti Xtreme Edition 11GB @ Kitguru
- Phanteks Glacier 1080 GPU Waterblock @ techPowerUp
- PNY GTX 1070 XLR8 OC Gaming 8GB @ eTeknix
- GIGABYTE GeForce GTX 1060 Mini ITX OC 6GB @ eTeknix
Subject: Graphics Cards | April 18, 2017 - 04:04 PM | Jeremy Hellstrom
Tagged: RX 580, radeon, Polaris, amd, powercolor, red devil
Ryan covered the improvements the RX 580 offers over the previous Polaris-based cards: a higher rated clock and a standardized memory frequency of 8 GHz across all RX 580 models. That led to the expected increase in performance compared to the RX 480, in a marketplace somewhat different from the one the first Polaris chips arrived in. Consumers now know what NVIDIA's current-generation cards provide in performance, and prices have settled as much as can be expected in the volatile GPU market. Those using cards several generations old may be more receptive to an upgrade than they were with the previous generation, especially as the next large launches are some time off; we shall see if this is true in the coming months.
One particular reason to consider upgrading is VR support, something [H]ard|OCP covers in their review. The improved speeds do not provide miracles in their VR Leaderboard; however, they do show improvements in some games such as Serious Sam, with reprojection rates dropping markedly.
"AMD is launching the AMD Radeon RX 500 series today, and we lead with a custom retail Radeon RX 580 GPU based video card from PowerColor. We’ll take the Red Devil RX 580 Golden Sample video card through the paces and see how it compares to the competition at the same price point."
Here are some more Graphics Card articles from around the web:
- AMD's Radeon RX 580 and Radeon RX 570 @ The Tech Report
- ASUS Radeon RX 580 STRIX @ Guru of 3D
- Sapphire Radeon RX 570 Pulse 4 GB @ techPowerUp
- Sapphire Nitro+ RX 580 8GB Review @ Neoseeker
- PowerColor Red Devil Radeon RX 580 8GB @ eTeknix
- ASUS RX 570 STRIX Gaming OC 4GB @ Kitguru
- Sapphire RX 580 Nitro+ Limited Edition 8GB @ Kitguru
- PowerColor Radeon Red Devil RX 580 8GB Golden Sample Review @ OCC
- Unigine Superposition Is A Beautiful Way To Stress Your GPU In 2017, 17-Way Graphics Card Comparison @ Phoronix
- EVGA GeForce GTX 1070 SC2 Gaming iCX Review @ Bjorn3d
- Gigabyte Aorus GTX 1080 Ti Xtreme Gaming 11 GB @ techPowerUp
What is old is new again
Trust me on this one – AMD is aware that launching the RX 500-series of graphics cards, including the RX 580 we are reviewing today, is an uphill battle. Besides battling the sounds on the hills that whisper “reeebbrraannndd”, AMD needs to work with its own board partners to offer up total solutions that compete well against NVIDIA’s stronghold on the majority of the market. Just putting out the Radeon RX 580 and RX 570 cards with the same coolers and specs as the RX 400-series would be a recipe for ridicule. AMD is aware of this and is being surprisingly proactive in its storytelling to consumers and the media.
- If you already own a Radeon RX 400-series card, the RX 500-series is not expected to be an upgrade path for you.
- The Radeon RX 500-series is NOT based on Vega. It is Polaris here, everyone.
- Target users are those with Radeon R9 380 class cards and older – Polaris is still meant as an upgrade for that very large user base.
The story that is being told is compelling; more so than you might expect. With more than 500 million gamers using graphics cards two years old or older, based on Steam survey data, there is a HUGE audience that would benefit from an RX 580 graphics card upgrade. Older cards may lack support for FreeSync, HDR, higher refresh rate HDMI output and hardware encode/decode support for 4K resolution content. And while the GeForce GTX 1060 family would also meet those criteria, AMD wants to make the case that the Radeon family is the way to go.
The Radeon RX 500-series is based on the same Polaris architecture as the RX 400-series, though AMD would tell us that the technology has been refined since initial launch. More time with the 14nm FinFET process technology has given the fab facility, and AMD, some opportunities to refine. This gives the new GPUs the ability to scale to higher clocks than they could before (though not without the cost of additional power draw). AMD has tweaked multi-monitor efficiency modes, allowing idle power consumption to drop a handful of watts thanks to a tweaked pixel clock.
Maybe the most substantial change with this RX 580 release is the removal of power consumption constraints for the board partners. The Radeon RX 480 launch was marred with issues surrounding the amount of power AMD claimed the boards would use compared to how much they DID use. This time around, all RX 580 graphics cards will ship with AT LEAST an 8-pin power connector, opening overclocked models to use as much as 225 watts. Some cards will have an 8+6-pin configuration to go even higher. Considering the RX 480 launched with a supposed 150 watt TDP (that it never lived up to), that’s quite an increase.
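Those ceilings fall straight out of the PCI Express power-delivery limits: 75 W from the x16 slot, 75 W per 6-pin connector, and 150 W per 8-pin connector. A quick sketch of the arithmetic:

```python
# PCI Express power-delivery limits (watts)
SLOT = 75    # x16 slot
PIN6 = 75    # 6-pin auxiliary connector
PIN8 = 150   # 8-pin auxiliary connector

def board_power_budget(pin8: int = 0, pin6: int = 0) -> int:
    """Maximum in-spec board power for a card with the given aux connectors."""
    return SLOT + pin8 * PIN8 + pin6 * PIN6

print(board_power_budget(pin8=1))          # 8-pin only: 225 W
print(board_power_budget(pin8=1, pin6=1))  # 8+6-pin:    300 W
```

Mandating at least an 8-pin connector means partners no longer have to squeeze overclocked boards into the 150 W envelope that caused the RX 480's slot-power controversy.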
AMD is hoping to convince gamers that Radeon Chill is a good solution to help some specific instances of excessive power draw. Recent drivers have added support for games like League of Legends and DOTA 2, adding to The Witcher 3, Deus Ex: Mankind Divided and more. I will freely admit that while the technology behind Chill sounds impressive, I don’t yet have enough experience with it to confirm or refute its supposed advantage of cutting power draw without sacrificing user experience.
Subject: Graphics Cards | January 18, 2017 - 08:43 PM | Sebastian Peak
Tagged: video, unlock, shaders, shader cores, sapphire, radeon, Polaris, graphics, gpu, gaming, card, bios, amd, 1024
As reported by WCCFtech, AMD partner Sapphire has a new 1024 stream processor version of the RX460 listed on their site (Chinese language). This product reveal of course comes after it became known that RX460 graphics cards had the potential to have their stream processor count unlocked from 896 to 1024 via a BIOS update.
Sapphire RX460 1024SP 4G D5 Ultra Platinum OC (image credit: Sapphire)
The Sapphire RX460 1024SP edition offers a full Polaris 11 core operating at 1250 MHz, and it otherwise matches the specifications of a stock RX460 graphics card. Whether this product will be available outside of China is unknown, as is the potential pricing model should it be available in the USA. A 4GB Radeon RX460 retails for $99, while the current step-up option is the RX470, which doubles up on this 1024SP RX460's shader count with 2048, with a price increase of about 70% ($169).
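A quick sketch of the price-per-shader math behind that comparison, using the list prices above (actual street prices will vary):

```python
# List prices (USD) and shader counts from the article
cards = {
    "RX 460 1024SP": {"price": 99,  "shaders": 1024},
    "RX 470":        {"price": 169, "shaders": 2048},
}

for name, c in cards.items():
    per_1k = c["price"] / c["shaders"] * 1000
    print(f"{name}: ${per_1k:.2f} per 1,000 shaders")

# The RX 470's ~70% price premium buys a 100% increase in shader count
premium = cards["RX 470"]["price"] / cards["RX 460 1024SP"]["price"] - 1
print(f"Price premium: {premium:.0%}")
```

On a per-shader basis the step-up card is actually the better value, though the unlocked RX460 remains the cheaper entry point.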
AMD Polaris GCN 4.0 GPU lineup (Credit WCCFtech)
As you may note from the chart above, there is also an RX470D option between these cards that features 1792 shaders, though this option is also China-only.
Performance and Impressions
This content was sponsored by AMD.
Last week in part 1 of our look at the Radeon RX 460 as a budget gaming GPU, I detailed our progress through component selection. Centered around an XFX 2GB version of the Radeon RX 460, we built a machine using an Intel Core i3-6100, ASUS H110M motherboard, 8GB of DDR4 memory, both an SSD and a HDD, as well as an EVGA power supply and Corsair chassis. Part 1 discussed the reasons for our hardware selections as well as an unboxing and preview of the giveaway to come.
In today's short write up and video, I will discuss my impressions of the system overall as well as touch on the performance in a handful of games. Despite the low price, and despite the budget moniker attributed to this build, a budding PC gamer or converted console gamer will find plenty of capability in this system.
Let's quickly recap the components making up our RX 460 budget build.
Our Radeon RX 460 Build
| Budget Radeon RX 460 Build | |
|---|---|
| Processor | Intel Core i3-6100 - $109 |
| Cooler | CRYORIG M9i - $19 |
| Motherboard | ASUS H110M-A/M.2 - $54 |
| Memory | 2 x 4GB Crucial Ballistix DDR4-2400 - $51 |
| Graphics Card | XFX Radeon RX 460 2GB - $98 |
| Storage | 240GB SanDisk SSD Plus - $68, 1TB Western Digital Blue - $49 |
| Case | Corsair Carbide Series 88R - $49 |
| Power Supply | EVGA 500 Watt - $42 |
| Monitor | Nixeus VUE24A 1080p 144Hz FreeSync - $251 |
| Total Price | $549 on Amazon; $799 with monitor on Amazon |
For just $549 I was able to create a shopping list of hardware that provides very impressive performance for the investment.
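As a sanity check, summing the list prices in the table comes out just under the quoted Amazon totals, which fluctuate day to day; a quick sketch:

```python
# Component list prices from the build table (USD)
parts = {
    "Core i3-6100": 109, "CRYORIG M9i": 19, "ASUS H110M-A/M.2": 54,
    "8GB DDR4-2400": 51, "XFX RX 460 2GB": 98, "240GB SSD": 68,
    "1TB HDD": 49, "Carbide 88R": 49, "EVGA 500W": 42,
}
monitor = 251  # Nixeus VUE24A

total = sum(parts.values())
print(f"Parts total:  ${total}")            # $539 at list prices
print(f"With monitor: ${total + monitor}")  # $790 at list prices
```

Either way, the graphics card and monitor are the only line items over $110, which is what keeps this build firmly in budget territory.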
The completed system is damn nice looking, if I do say so myself. The Corsair Carbide 88R case sports a matte black finish with a large window to peer in at the hardware contained within. Coupled with the Nixeus FreeSync display and some Logitech G mouse and keyboard hardware we love, this is a configuration that any PC gamer would be proud to display.