Subject: Processors | July 9, 2014 - 05:42 PM | Josh Walrath
Tagged: nvidia, msi, Luxmark, Lightning, hsa, GTX 580, GCN, APU, amd, A88X, A10-7850K
When I first read many of the initial AMD A10-7850K reviews, my primary question was how the APU would behave with a different GPU installed in the system, one that did not utilize the CrossFire X functionality that AMD talked about. Typically when a user installs a standalone graphics card on the AMD FM2/FM2+ platform, they disable the graphics portion of the APU. They also have to uninstall the AMD Catalyst driver suite. This leaves the APU as a CPU only, with all of that graphics silicon left silent and dark.
Who in their right mind would pair a high end graphics card with the A10-7850K? This guy!
Does this need to be the case? Absolutely not! The GCN-based graphics unit on the latest Kaveri APUs is quite powerful in GPGPU/OpenCL applications. The 2 modules/4 cores and 8 GCN compute units can push out around 856 GFLOPS when fully utilized. We must also consider that the APU is the first fully compliant HSA (Heterogeneous System Architecture) chip, and it handles memory accesses much more efficiently than standalone GPUs. The shared memory space with the CPU eliminates many of the workarounds typically needed for GPGPU-type applications. It makes sense that users would want to leverage the performance potential of a fully functioning APU while upgrading their overall graphics performance with a higher end standalone GPU.
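That 856 GFLOPS figure can be reconstructed from the commonly cited specs. A quick back-of-the-envelope sketch, assuming the usual numbers for the A10-7850K (512 GPU shaders at 720 MHz, 4 CPU cores at a 3.7 GHz base clock, one FMA per shader per clock, and 8 FLOPs per CPU core per clock); none of these constants come from AMD's official math, so treat them as illustrative:

```python
# Rough peak-FLOPS estimate for the A10-7850K (Kaveri).
# All figures are commonly cited specs, assumed for illustration.

GPU_SHADERS = 512        # 8 GCN compute units x 64 shaders each
GPU_CLOCK_GHZ = 0.720    # GPU clock
FLOPS_PER_SHADER = 2     # one fused multiply-add (2 FLOPs) per clock

CPU_CORES = 4
CPU_CLOCK_GHZ = 3.7      # base clock
FLOPS_PER_CORE = 8       # assumed FLOPs per core per cycle via AVX

gpu_gflops = GPU_SHADERS * FLOPS_PER_SHADER * GPU_CLOCK_GHZ  # ~737 GFLOPS
cpu_gflops = CPU_CORES * FLOPS_PER_CORE * CPU_CLOCK_GHZ      # ~118 GFLOPS

print(round(gpu_gflops + cpu_gflops))  # 856
```

Notice how lopsided the split is: roughly 86% of the APU's theoretical throughput lives in the GCN portion, which is exactly why leaving it dark after installing a standalone card seems wasteful.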
Getting this to work is simple. Assuming the user has been running the APU as their primary graphics controller, they should first update to the latest Catalyst drivers (if the new card is an AMD part, it is better to fully uninstall Catalyst and re-install it only after the new card is in place). Next, restart the machine, go into the UEFI, and change the primary video boot device from the integrated unit to PEG (PCI-Express Graphics). Save the setting and shut down the machine. Insert the new video card and attach the monitor cable(s) to it. Finally, boot the machine and either re-install the Catalyst suite for an AMD card, or install the latest NVIDIA drivers if that is the graphics choice.
Windows 7 and Windows 8 allow users to install multiple graphics drivers from different vendors. In my case I utilized a last generation GTX 580 (the MSI N580GTX Lightning) along with the AMD A10 7850K. These products coexist happily together on the MSI A88X-G45 Gaming motherboard. The monitor is attached to the NVIDIA card and all games are routed through that since it is the primary graphics adapter. Performance seems unaffected with both drivers active.
I find it interesting that the GPU portion of the APU is named "Spectre". Who owns those 3dfx trademarks anymore?
When I load up Luxmark I see three entries: the APU (CPU and GPU portions), the GPU portion of the APU, and then the GTX 580. Luxmark defaults to the GPUs. We see these GPUs listed as “Spectre”, which is the GCN portion of the APU, and the NVIDIA GTX 580. Spectre supports OpenCL 1.2 while the GTX 580 is an OpenCL 1.1 compliant part.
With both GPUs active I can successfully run the Luxmark “Sala” test. The two units perform better together than when they are run separately. Adding in the CPU does increase the score, but not by very much (my guess here is that the APU is going to be very memory bandwidth bound in such a situation). Below we can see the results of the different units separate and together.
These results make me hopeful about the potential of AMD’s latest APU. It can run side by side with a standalone card, and applications can leverage the performance of both units. Now all we need is more HSA-aware software. Setups such as this need more time and more testing, and we need to see whether HSA-enabled software really does get a boost from using the GPU portion of the APU as compared to pure CPU code or code that runs on the standalone GPU.
Personally I find the idea of a heterogeneous solution such as this appealing. The standalone graphics card handles the actual graphics workload, the CPU handles serial code, and HSA software can fully utilize the graphics portion of the APU in a very efficient manner. Unfortunately, we do not have hard numbers on the handful of HSA-aware applications out there, especially when used in conjunction with standalone graphics. We know in theory that this can work (and should work), but until developers really optimize their code for such a solution, we simply do not know whether having an APU will net the user big gains as compared to something like the i7-4770 or 4790 running pure x86 code.
In the meantime, at least we know that these products work together without issue. The mixed mode OpenCL results make a nice case for improving overall performance in such a system. I would imagine with more time and more effort from developers, we could see some really interesting implementations that will fully utilize a system such as this one. Until then, happy experimenting!
FM2+ Has a High End?
AMD faces a bit of a quandary when it comes to their products. Their APUs are great at graphics, but not so great at general CPU performance. Their CPU/APU products all come in under $200, but these APUs are not popular with the enthusiast and gaming crowd. Yes, they can make excellent budget gaming systems for those who do not demand ultra-high resolutions and quality settings, but it is still a tough sell for a lot of the mainstream market; the primary way AMD pushes these products is price.
Perhaps the irony here is that AMD is extremely competitive with Intel when it comes to chipset features. The latest A88X Fusion Control Hub is exceptionally well rounded with four native USB 3.0 ports, ten USB 2.0 ports, and eight SATA-6G ports. Performance of this chipset is not all that far off from what Intel offers with the Z87 chipset (USB and SATA-6G are slower, but not dramatically so). The chip also offers RAID 0, 1, 5, and 10 support as well as a 10/100/1000 Ethernet MAC (but a physical layer chip is still required).
Now we get back to price. AMD is not charging a whole lot for these FCH units, even the top end A88X. I do not have the exact number, but it is cheap as compared to the competing Intel option. Intel’s chipset business has made money for the company for years, but AMD does not have that luxury. AMD needs to bundle effectively to be competitive, so it is highly doubtful that the chipset division makes a net profit at the end of the day. Their job is to help push AMD’s CPU and APU offerings as much as possible.
These low cost FCH chips allow motherboard manufacturers to place a lot of customization on their board, but they are still limited in what they can do. A $200+ motherboard simply will not fly with consumers for the level of overall performance that even the latest AMD A10 7850K APU provides in CPU bound workloads. Unfortunately, HSA has not yet taken off to leverage the full potential of the Kaveri APU. We have had big developments, just not big enough that the majority of daily users out there will require an AMD APU. Until that happens, AMD will not be viewed favorably when it comes to its APU offerings in gaming or high performance systems.
The quandary, obviously, is how AMD and its motherboard partners can create inexpensive motherboards that are feature packed, yet will not break the bank or become burdensome to APU sales. The FX series of processors from AMD has a bit more leeway, as the performance of the high end FX-8350 is not considered bad and it is a decent overclocker. That platform can sustain higher motherboard costs due to this performance. The APU side, not so much. The answer to this quandary is tradeoffs.
Subject: General Tech | June 7, 2014 - 04:32 AM | Scott Michaud
Tagged: microsoft, xbox one, xbone, gpgpu, GCN
Shortly after the Kinect deprecation, Microsoft announced that a 10% boost in GPU performance will be coming to Xbox One. This, of course, is the platform update that allows developers to avoid the typical overhead that Kinect requires for its various tasks. Updated software will allow game developers to reclaim some or all of that compute time.
Still looks like Wall-E grew a Freddie Mercury 'stache.
While it "might" (who am I kidding?) be used to berate Microsoft for ever forcing the Kinect upon users in the first place, this functionality was planned from before launch. Pre-launch interviews stated that Microsoft was looking into scheduling their compute tasks while the game was busy, for example, hammering the ROPs and leaving the shader cores idle. This could be that, and only that, or it could be a bit more if developers are allowed to opt out of most or all Kinect computations altogether.
The theoretical maximum GPU compute and shader performance of the Xbox One GPU is still about 29% less than its competitor, the PS4. Still, 29% less is better than about 36% less. Not only that, but the final result will always come down to the amount of care and attention spent on any given title by its developers. This will give them more breathing room, though.
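Those percentages fall out of the widely reported peak-compute figures. A rough sketch, assuming the commonly quoted 1.84 TFLOPS for the PS4, 1.31 TFLOPS for the Xbox One, and a ~10% GPU reservation for Kinect (these numbers are my assumptions, not from Microsoft or Sony directly):

```python
# Widely reported peak shader throughput, assumed for illustration
PS4_TFLOPS = 1.84
XBONE_TFLOPS = 1.31
KINECT_RESERVE = 0.10  # share of GPU time previously held back for Kinect

usable_with_reserve = XBONE_TFLOPS * (1 - KINECT_RESERVE)  # ~1.18 TFLOPS

deficit_before = 1 - usable_with_reserve / PS4_TFLOPS  # vs PS4, Kinect reserved
deficit_after = 1 - XBONE_TFLOPS / PS4_TFLOPS          # vs PS4, reserve freed

print(f"{deficit_before:.0%} -> {deficit_after:.0%}")  # prints "36% -> 29%"
```

So the update does not change the silicon at all; it simply hands the reserved slice of an already-fixed budget back to game code.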
Then, of course, the PC has about 3x the shader performance of either of those systems in a few single-GPU products. Everything should be seen in perspective.
AMD Brings Kabini to the Desktop
Perhaps we are performing a study of opposites? Yesterday Ryan posted his R9 295X2 review, which covers the 500 watt, dual GPU monster that will be retailing for $1499. That card is meant only for the extreme enthusiast who has plenty of room in their case, plenty of knowledge about their power supply, and plenty of electricity and air conditioning to keep the monster at bay. The product that I am reviewing could not be any more different: inexpensive, cool running, power efficient, and able to fit pretty much anywhere. These products can almost be viewed as polar opposites.
The interesting thing, of course, is that it shows how flexible AMD’s GCN architecture is. GCN can efficiently and effectively power the highest performing product in AMD’s graphics portfolio as well as their lowest power offerings in the APU market. Performance scales nearly linearly as more GCN compute cores are added.
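That linear scaling is easy to illustrate: peak throughput is just shader count times clock times FLOPs per clock, from Kabini at the bottom to Hawaii at the top. A sketch using commonly cited (assumed, not AMD-confirmed) shader counts and clocks:

```python
def peak_gflops(shaders, clock_ghz, flops_per_clock=2):
    """Peak GCN throughput scales linearly with shader count x clock.
    flops_per_clock=2 assumes one fused multiply-add per shader per cycle."""
    return shaders * flops_per_clock * clock_ghz

# Assumed configurations spanning AMD's GCN lineup
parts = {
    "Kabini (Athlon APU)":   (128, 0.600),
    "Kaveri (A10-7850K GPU)": (512, 0.720),
    "Hawaii (R9 290X)":      (2816, 1.000),
}
for name, (shaders, clock) in parts.items():
    print(f"{name}: {peak_gflops(shaders, clock):.0f} GFLOPS")
```

The same formula covers a ~37x spread in throughput with no architectural change, which is the flexibility being described here.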
The products that I am of course referring to are the latest Athlon and Sempron APUs, based on the Kabini architecture, which fuses Jaguar x86 cores with GCN compute cores. These APUs were announced last month, but we did not have the chance at the time to test them. Since then these products have popped up in a couple of places around the world, but this is the first time that reviewers have officially received product from AMD and its partners.
Subject: Editorial, General Tech, Graphics Cards, Processors, Shows and Expos | March 30, 2014 - 01:45 AM | Scott Michaud
Tagged: gdc 14, GDC, GCN, amd
While Mantle and DirectX 12 are designed to reduce overhead and keep GPUs loaded, the conversation shifts when you are limited by shader throughput. Modern graphics processors are dominated by sometimes thousands of compute cores. Video drivers are complex packages of software. One of their many tasks is converting your scripts, known as shaders, into machine code for its hardware. If this machine code is efficient, it could mean drastically higher frame rates, especially at extreme resolutions and intense quality settings.
Emil Persson of Avalanche Studios, probably known best for the Just Cause franchise, published his slides and speech on optimizing shaders. His talk focuses on AMD's GCN architecture, due to its existence in both console and PC, while bringing up older GPUs for examples. Yes, he has many snippets of GPU assembly code.
AMD's GCN architecture is actually quite interesting, especially dissected as it was in the presentation. It is simpler than its ancestors and much more CPU-like: resources are mapped to memory (and caches of said memory) rather than "slots" (although drivers and APIs often pretend those relics still exist), and vectors are mostly treated as collections of scalars. Tricks that attempt to combine instructions into vectors, such as using dot products, can simply put irrelevant restrictions on the compiler and optimizer... as it breaks those vector operations back down into the very same component-by-component ops you thought you were avoiding.
Basically, and it makes sense coming from GDC, this talk rarely glosses over points. It goes over execution speed of one individual op compared to another, at various precisions, and which to avoid (protip: integer divide). Also, fused multiply-add is awesome.
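Persson's vectors-are-really-scalars point can be shown in miniature. A four-wide dot product on GCN becomes one scalar multiply followed by three multiply-accumulates, one per component (the `v_mul_f32`/`v_mac_f32` names in the comments are GCN mnemonics; the Python itself is just an illustrative sketch of the lowering, not real shader code):

```python
def dot4(a, b):
    """A float4 dot product as GCN roughly executes it:
    one scalar multiply, then three multiply-accumulates."""
    acc = a[0] * b[0]        # v_mul_f32
    acc = a[1] * b[1] + acc  # v_mac_f32 (fused multiply-add on hardware)
    acc = a[2] * b[2] + acc  # v_mac_f32
    acc = a[3] * b[3] + acc  # v_mac_f32
    return acc

print(dot4((1, 2, 3, 4), (5, 6, 7, 8)))  # 70
```

Writing the shader source as an explicit `dot()` buys nothing here; the compiler emits these same four scalar ops either way, which is why forcing work into vector form can only constrain the optimizer.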
I know I learned.
As a final note, this returns to the discussions we had prior to the launch of the next generation consoles. Developers are learning how to make their shader code much more efficient on GCN, and that could easily translate to leading PC titles. Especially with DirectX 12 and Mantle, which lighten the CPU-based bottlenecks, learning how to do more work per FLOP addresses the other side. Everyone was looking at Mantle as AMD's play for success through harnessing console mindshare (and in terms of Intel vs AMD, it might help). But honestly, I believe that trends like this presentation will prove more significant... even if behind the scenes. Of course developers were always having these discussions, but now console developers will probably be talking about only one architecture - that is a lot of people talking about very few things.
This is not really reducing overhead; this is teaching people how to do more work with less, especially in situations (high resolutions with complex shaders) where the GPU is most relevant.
Subject: Motherboards | March 6, 2014 - 02:44 AM | Tim Verry
Tagged: mini ITX, micro ATX, Kabini, GCN, FS1B, biostar, AM1
Biostar has officially launched three new AM1 platform motherboards that support AMD's latest Kabini-based desktop SoCs. The new Biostar hardware falls under the new AM1M series and includes the micro ATX AM1M-HP board and two mini ITX boards: the AM1MH and AM1ML.
All three boards feature an FS1B SoC socket, two DDR3 DIMM slots, two SATA III 6Gbps ports, one PCI-E 2.0 x16 slot (running at x4 speeds), one PCI-E 2.0 x1 slot, Gigabit Ethernet, and 5.1 channel audio. The micro ATX AM1M-HP adds a legacy PCI slot to the mix. In an interesting twist, Biostar has oriented the memory horizontally above the FS1B socket rather than vertically and to the right of the socket.
Rear I/O on the AM1M-HP and AM1MH boards includes:
- 2 x PS/2
- 1 x HDMI
- 1 x VGA
- 2 x USB 3.0
- 2 x USB 2.0
- 1 x RJ45 (GbE)
- 3 x analog audio
The other mini ITX board (the AM1ML) has the same rear I/O configuration minus the HDMI video output.
Biostar has not released pricing or availability information, but the boards should ship sometime in mid-April.
Low Power and Low Price
Back at CES earlier this year, we came across a couple of interesting motherboards that were neither AM3+ nor FM2+. These small, sparse, and inexpensive boards were actually based on the unannounced AM1 platform. This socket is actually the FS1b socket that is typically reserved for mobile applications which require the use of swappable APUs. The goal here is to provide a low cost, upgradeable platform for emerging markets where price is absolutely key.
AMD has not exactly been living on easy street for the past several years. Their CPU technologies, the company's bread and butter, have not been entirely competitive with Intel's. Helping to prop the company up, though, is a very robust and competitive graphics unit. The standalone and integrated graphics technologies they offer are not only competitive, but also class leading in some cases. The integration of AMD’s GCN architecture into APUs has been their crowning achievement as of late.
This is not to say that AMD is totally deficient in their CPU designs. Their low power/low cost designs that started with the Bobcat architecture all those years back have always been very competitive in terms of performance, price, and power consumption. The latest iteration is the Kabini APU based on the Jaguar core architecture paired with GCN graphics. Kabini will be the part going into the FS1b socket that powers the AM1 platform.
Kabini is a four core processor (Jaguar) with a 128 shader GCN graphics part (2 GCN compute units). These APUs will be rated at 25 watts up and down the stack; even the parts with half the cores will still be 25 watt parts. AMD says that 25 watts is the sweet spot in terms of performance, cooling, and power consumption. Go lower than that and too much performance is sacrificed; any higher and it would make more sense to go with a Trinity/Richland/Kaveri solution. That 25 watt figure also encompasses the primary I/O functionality that typically resides on a standalone motherboard chipset. Kabini features 2 SATA 6G ports, 2 USB 3.0 ports, and 8 USB 2.0 ports. It also features multiple PCI-E lanes, including a 4x PCI-E connection for external graphics, and supports DisplayPort, HDMI, and VGA outputs. This is a true SoC from AMD that does a whole lot of work for not a whole lot of power.
Subject: General Tech | January 15, 2014 - 11:56 PM | Tim Verry
Tagged: r9 m290x, r7 m265, r5 m230, mobile gpu, GCN, amd
AMD recently took the wraps off of its latest mobile GPU series in the form of the R5 M200, R7 M200, and R9 M200 series. Currently, there is one GPU in each respective Rx M200 series: the AMD Radeon R5 M230, R7 M265, and R9 M290X. Do not get too excited, however. All of the new mobile GPUs are based on existing desktop silicon rather than AMD's new Hawaii GPU. As such, the Rx M200 series are essentially rebrands of the Radeon HD 8000M series (which was in turn an OEM rebrand of the HD 7000M series) based around AMD's Graphics Core Next 1.0 architecture, and specifically the Pitcairn GPU implementation.
All of the Rx M200 series support DirectX 11.2 Tier 1, up to 4GB of GDDR5 memory, and at least 320 GCN shader cores. Information on the mid-range R7 M265 is scarce, but AMD has released information on the low-end and high-end chips. Further, Computer Base has managed to put together specifications for the R5 M230 and R9 M290X. In short, the R5 M230 is a rebranded HD 8570 with higher clockspeeds and support for more memory, while the R9 M290X is a rebranded HD 8970M with official support for DirectX 11.2 Tier 1 (the HD 8970M technically supports it as well). A more detailed breakdown is as follows.
The R9 M290X features 1280 shaders clocked at 850MHz/900MHz (base/boost), 80 texture units, and 32 ROPs. OEMs can pair the GPU with up to 4GB of GDDR5 memory clocked at 1,200 MHz on a 256-bit bus.
The R5 M230 has 320 shaders clocked at 855MHz, 20 texture units, and 4 ROPs. This GPU can support up to 4GB of GDDR5 memory at 1,000MHz over a 64-bit bus.
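From those specifications, peak throughput and memory bandwidth follow directly. A quick sketch, assuming GDDR5 transfers four bits per clock per pin and one fused multiply-add (2 FLOPs) per shader per cycle, and using the boost clock for the M290X:

```python
def gpu_metrics(shaders, core_ghz, mem_ghz, bus_bits):
    """Return (peak GFLOPS, memory bandwidth in GB/s)."""
    gflops = shaders * 2 * core_ghz      # 2 FLOPs per shader per clock (FMA)
    gbps = mem_ghz * 4 * bus_bits / 8    # GDDR5 is quad-pumped; /8 for bits->bytes
    return gflops, gbps

# R9 M290X: 1280 shaders @ 0.9 GHz boost, 1.2 GHz GDDR5 on a 256-bit bus
print(gpu_metrics(1280, 0.9, 1.2, 256))   # ~ (2304.0 GFLOPS, 153.6 GB/s)

# R5 M230: 320 shaders @ 0.855 GHz, 1.0 GHz memory on a 64-bit bus
print(gpu_metrics(320, 0.855, 1.0, 64))   # ~ (547.2 GFLOPS, 32.0 GB/s)
```

The ~4x shader gap understates the real distance between the two parts: the M230's 64-bit bus leaves it with under a quarter of the M290X's bandwidth, and bandwidth is usually what starves low-end mobile GPUs first.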
Users will be able to get the new Rx M200 series graphics cards in mobile systems from Alienware, Clevo, Lenovo, and MSI. Other manufacturers should pick up the new GPUs soon as well. The new series is not terribly exciting, being nearly identical to the existing HD 8000M counterparts, but it does update the lineup to AMD's new naming and branding scheme. Notably, should AMD release a Hawaii-based mobile GPU, it has not left itself much room as far as naming goes (R9 M295X?).
The 7 Year Console Refresh
The consoles are coming! The consoles are coming! Ok, that is not necessarily true. One is already here and the second essentially is too. This of course brings up the great debate between PCs and consoles. The past has been interesting when it comes to console gaming, as often the consoles would be around a year ahead of PCs in terms of gaming power and prowess. This is no longer the case with this generation of consoles. Cutting edge is now considered mainstream when it comes to processing and graphics. The real incentive to buy this generation of consoles is a lot harder to pin down as compared to years past.
The PS4 retails for $399 US and the upcoming Xbox One is $499. The PS4’s price includes a single controller, while the Xbox’s package includes not just a controller, but also the next generation Kinect device. These prices would be comparable to some low end PCs which include keyboard, mouse, and a monitor that could be purchased from large brick and mortar stores like Walmart and Best Buy. Happily for most of us, we can build our machines to our own specifications and budgets.
As a directive from on high (the boss), we were given the task of building our own low-end gaming and productivity machines at a price as close to that of the consoles and explaining which solution would be superior at the price points given. The goal was to get as close to $500 as possible and still have a machine that would be able to play most recent games at reasonable resolutions and quality levels.
Subject: Processors | November 13, 2013 - 05:35 PM | Josh Walrath
Tagged: Puma, Mullins, mobile, Jaguar, GCN, beema, apu13, APU, amd, 2014
AMD’s APU13 is all about APUs and their programming, but the hardware we have seen so far has been dominated by the upcoming Kaveri products for FM2+. It seems that AMD has more up their sleeves for release this next year, and it has somewhat caught me off guard. The Beema and Mullins based products are being announced today, but we do not have exact details on these products. The codenames have been around for some time now, but interest has been minimal since they are evolutionary products based on Kabini and Temash APUs that have been available this year. Little did I know that things would be far more interesting than that.
The basis for Beema and Mullins is the Puma core. This is a highly optimized revision of Jaguar, and in some ways can be considered a new design. All of the basics in terms of execution units, caches, and memory controllers are the same. What AMD has done is go through the design with a fine-toothed comb and make it far more efficient per clock than what we have seen previously. This is still a 28 nm part, but the extra attention and love lavished upon it by AMD has resulted in a much more efficient system architecture for the CPU and GPU portions.
The parts will be offered in two and four core configurations. Beema will span from 10W to 25W configurations, while Mullins will go all the way down to a “2W SDP”. SDP essentially means that while the chip can theoretically be rated higher, it will rarely exceed that 2W envelope in the vast majority of situations. These chips are expected to be around 2X more efficient per clock than the previous Jaguar-based products. This means that at similar clock speeds, Beema and Mullins will pull far less power than the previous generation. It should also allow somewhat higher clockspeeds at the top end 25W area.
These will be some of the first fanless quad cores that AMD will introduce for the tablet market. Previously we have seen tablets utilize the cut down versions of Temash to hit power targets, but with this redesign it is entirely possible to utilize the fully enabled quad core Mullins. AMD has not given us specific speeds for these products, but we can guess that they will be around what we see currently, but the chip will just have a lower TDP rating.
AMD is introducing their new security platform based on ARM TrustZone. Essentially, a small ARM Cortex-A5 is integrated into the design and handles the security aspects of this feature. We were not briefed on how this achieves security, but the slide below gives some of the bullet points of the technology.
Since the pure-play foundries will not have a workable 20 nm process for AMD to jump to in a timely manner, AMD had no other choice but to really optimize the Jaguar core to make it more competitive with products from Intel and the ARM partners. At 28 nm the ARM ecosystem has a power advantage over AMD, while at 22 nm Intel offers similar performance to AMD but with greater power efficiency.
This is a necessary update for AMD as the competition has certainly not slowed down. AMD is more constrained obviously by the lack of a next-generation process node available for 1H 2014, so a redesign of this magnitude was needed. The performance per watt metric is very important here, as it promises longer battery life without giving up the performance people received from the previous Kabini/Temash family of APUs. This design work could be carried over to the next generation of APUs using 20 nm and below, which hopefully will keep AMD competitive with the rest of the market. Beema and Mullins are interesting looking products that will be shown off at CES 2014.