Filling the Product Gaps
In my first several years at PCPer, I typically handled most of the AMD CPU refreshes. These were rather standard affairs involving small jumps in clock speed and performance, arriving every 6 to 8 months, with the bigger architectural shifts happening some years apart. We are finally seeing a refresh of the AMD APU lineup after the initial release of Kaveri at the beginning of this year, and this update is different. Unlike previous years, there are no parts faster than the already available A10-7850K.
This refresh fleshes out the rest of the Kaveri lineup with products that address different TDPs, markets, and prices. The A10-7850K is still the king of performance on the FM2+ socket (as long as users do not pay attention to the faster CPU performance of the A10-6800K). The initial launch in January also detailed another part that is only now becoming available: the A8-7600 was supposed to arrive some months ago, but is only making it to market today. The 7600 was unique in that it had a configurable TDP that went from 65 watts down to 45 watts; the 7850K, by comparison, was configurable from 95 watts down to 65 watts.
So what are we seeing today? AMD is releasing three parts that address the lower power markets it hopes to expand into. The A8-7600 was detailed back in January but never released until recently. The other two parts are brand new. The A10-7800 is a 65 watt TDP part with a cTDP that goes down to 45 watts. The other new chip is the A6-7400K, which is unlocked, has a configurable TDP, and looks to compete directly with Intel’s recently released 20th Anniversary Pentium G3258.
Subject: General Tech | July 17, 2014 - 11:37 PM | Tim Verry
Tagged: quarterly earnings, GCN, financial results, APU, amd
Today, AMD posted financial results for its second quarter of 2014. The company posted quarterly revenue of $1.44 billion, operating income of $63 million, and ultimately a net loss of $36 million (a $0.05 loss per share). The results are an improvement over the previous quarter and a marked improvement over the same quarter last year.
The chart below compares the second quarter results to the previous quarter (Q1'14) and the same quarter last year (Q2'13). AMD saw increased revenue and operating income, but a higher net loss than last quarter. Unfortunately, AMD is still saddled with a great deal of debt, which actually increased from $2.14 billion in Q1 2014 to $2.21 billion at the end of the second quarter.
| |Q2 2014|Q1 2014|Q2 2013|
|Revenue|$1.44 Billion|$1.40 Billion|$1.16 Billion|
|Operating Income|$63 Million|$49 Million|($29 Million)|
|Net Profit/(Loss)|($36 Million)|($20 Million)|($74 Million)|
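The quarter-over-quarter and year-over-year changes can be cross-checked from the table with a quick calculation (figures rounded to the nearest $10 million, as reported above):

```python
# AMD quarterly figures from the results table (millions of USD).
q2_2014 = {"revenue": 1440, "operating_income": 63, "net_loss": -36}
q1_2014 = {"revenue": 1400, "operating_income": 49, "net_loss": -20}
q2_2013 = {"revenue": 1160, "operating_income": -29, "net_loss": -74}

def pct_change(new, old):
    """Percent change from old to new."""
    return (new - old) / abs(old) * 100

rev_qoq = pct_change(q2_2014["revenue"], q1_2014["revenue"])  # ~+2.9%
rev_yoy = pct_change(q2_2014["revenue"], q2_2013["revenue"])  # ~+24.1%
print(f"Revenue: {rev_qoq:+.1f}% QoQ, {rev_yoy:+.1f}% YoY")
```

The revenue numbers bear out the "marked improvement" claim: roughly 24% growth year over year, even as the net loss grew sequentially.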
The Computing Solutions division saw revenue increase 1% over last quarter, but revenue fell 20% year over year due to fewer chips being sold.
On the bright side, the Graphics and Visual Solutions group saw quarterly revenue increase by 5% over last quarter and 141% YoY. The massive YoY increase is due, in part, to AMD's Semi-Custom Business unit and the SoCs that have come out of there (including the chips used in the latest gaming consoles).
Further, the company is currently sourcing 50% of its wafers from GlobalFoundries.
“Our transformation strategy is on track and we expect to deliver full year non-GAAP profitability and year-over-year revenue growth. We continue to strengthen our business model and shape AMD into a more agile company offering differentiated solutions for a diverse set of markets.”
-AMD CEO Rory Read
AMD expects to see third quarter revenue increase by 2% (plus or minus 3%). Following next quarter, AMD will begin production of its Seattle ARM processors. Perhaps even more interesting will be 2016 when AMD is slated to introduce new x86 and GCN processors on a 20nm process.
The company is working towards being more efficient and profitable, and the end-of-year results will be interesting to see.
Also read: AMD Restructures. Lisa Su Is Now COO @ PC Perspective
Subject: Processors | July 9, 2014 - 05:42 PM | Josh Walrath
Tagged: nvidia, msi, Luxmark, Lightning, hsa, GTX 580, GCN, APU, amd, A88X, A10-7850K
When I first read many of the initial AMD A10-7850K reviews, my primary question was how the APU would behave with a different GPU installed in the system, one that did not utilize the CrossFire X functionality that AMD talked about. Typically, when a user installs a standalone graphics card on the AMD FM2/FM2+ platform, they disable the graphics portion of the APU. They also have to uninstall the AMD Catalyst driver suite. This leaves the APU as a CPU only, with all of that graphics silicon left silent and dark.
Who in their right mind would pair a high end graphics card with the A10-7850K? This guy!
Does this need to be the case? Absolutely not! The GCN based graphics unit on the latest Kaveri APUs is pretty powerful when used in GPGPU/OpenCL applications. The 4 CPU cores (2 modules) and 8 GCN compute cores can push out around 856 GFLOPS when fully utilized. We also must consider that this APU is the first fully compliant HSA (Heterogeneous System Architecture) chip, and it handles memory accesses much more efficiently than standalone GPUs. The shared memory space with the CPU gets rid of a lot of the workarounds typically needed for GPGPU applications. It makes sense that users would want to leverage the performance potential of a fully functioning APU while upgrading their overall graphics performance with a higher end standalone GPU.
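As a sanity check on that ~856 GFLOPS figure, here is a hedged back-of-envelope sketch. The 720 MHz GPU clock, 3.7 GHz CPU clock, and per-clock FLOP counts are my assumptions about how AMD arrives at the number, not figures stated above:

```python
# Rough peak single-precision throughput estimate for the A10-7850K.
gpu_shaders = 512       # 8 GCN compute cores x 64 shaders each
gpu_clock_ghz = 0.720   # assumed GPU clock
cpu_cores = 4
cpu_clock_ghz = 3.7     # assumed base clock
gpu_gflops = gpu_shaders * gpu_clock_ghz * 2   # 2 FLOPs/clock via FMA
cpu_gflops = cpu_cores * cpu_clock_ghz * 8     # assumed 8 FLOPs/clock/core
total = gpu_gflops + cpu_gflops
print(f"{total:.0f} GFLOPS")  # lands right around 856
```

Under these assumptions the GPU contributes about 737 GFLOPS and the CPU about 118, which together match AMD's quoted figure almost exactly.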
Getting this to work is very simple. Assuming the user has been running the APU as their primary graphics controller, they should update to the latest Catalyst drivers. If the new card is an AMD card, it would behoove them to totally uninstall the Catalyst driver and re-install it only after the new card is in place. After this is completed, restart the machine, go into the UEFI, and change the primary video boot device from the integrated unit to PEG (PCI-Express Graphics). Save the setting and shut down the machine. Insert the new video card and attach the monitor cable(s) to it. Boot the machine and either re-install the Catalyst suite if an AMD card is used, or install the latest NVIDIA drivers if that is the graphics choice.
Windows 7 and Windows 8 allow users to install graphics drivers from multiple vendors. In my case, I utilized a last-generation GTX 580 (the MSI N580GTX Lightning) along with the AMD A10-7850K. These products coexist happily on the MSI A88X-G45 Gaming motherboard. The monitor is attached to the NVIDIA card and all games are routed through it, since it is the primary graphics adapter. Performance seems unaffected with both drivers active.
I find it interesting that the GPU portion of the APU is named "Spectre". Who owns those 3dfx trademarks anymore?
When I load up Luxmark I see three entries: the APU (CPU and GPU portions), the GPU portion of the APU, and then the GTX 580. Luxmark defaults to the GPUs. We see these GPUs listed as “Spectre”, which is the GCN portion of the APU, and the NVIDIA GTX 580. Spectre supports OpenCL 1.2 while the GTX 580 is an OpenCL 1.1 compliant part.
With both GPUs active I can successfully run the Luxmark “Sala” test. The two units perform better together than when they are run separately. Adding in the CPU does increase the score, but not by very much (my guess here is that the APU is going to be very memory bandwidth bound in such a situation). Below we can see the results of the different units separate and together.
These results make me hopeful about the potential of AMD’s latest APU. It can run side by side with a standalone card, and applications can leverage its performance. Now all we need is more HSA-aware software. More time and testing is needed for setups such as this, and we need to see whether HSA-enabled software really does see a boost from using the GPU portion of the APU as compared to pure CPU code or code that runs on the standalone GPU.
Personally I find the idea of a heterogeneous solution such as this appealing. The standalone graphics card handles the actual graphics portions, the CPU handles that code, and the HSA software can then fully utilize the graphics portion of the APU in a very efficient manner. Unfortunately, we do not have hard numbers on the handful of HSA aware applications out there, especially when used in conjunction with standalone graphics. We know in theory that this can work (and should work), but until developers get out there and really optimize their code for such a solution, we simply do not know if having an APU will really net the user big gains as compared to something like the i7 4770 or 4790 running pure x86 code.
In the meantime, at least we know that these products work together without issue. The mixed mode OpenCL results make a nice case for improving overall performance in such a system. I would imagine with more time and more effort from developers, we could see some really interesting implementations that will fully utilize a system such as this one. Until then, happy experimenting!
FM2+ Has a High End?
AMD faces a bit of a quandary when it comes to their products. Their APUs are great at graphics, but not so great at general CPU performance. Their CPU/APU products are all under $200, but these APUs are not popular with the enthusiast and gaming crowd. Yes, they can make excellent budget gaming systems for those who do not demand ultra-high resolutions and quality settings, but it is still a tough sell for a lot of the mainstream market; the primary way AMD pushes these products is price.
Perhaps the irony here is that AMD is extremely competitive with Intel when it comes to chipset features. The latest A88X Fusion Control Hub is exceptionally well rounded with four native USB 3.0 ports, ten USB 2.0 ports, and eight SATA-6G ports. Performance of this chipset is not all that far off from what Intel offers with the Z87 chipset (USB and SATA-6G are slower, but not dramatically so). The chip also offers RAID 0, 1, 5, and 10 support as well as a 10/100/1000 Ethernet MAC (but a physical layer chip is still required).
Now we get back to price. AMD is not charging a whole lot for these FCH units, even the top end A88X. I do not have the exact number, but it is cheap compared to the competing Intel options. Intel’s chipset business has made money for the company for years, but AMD does not have that luxury. AMD needs to bundle effectively to be competitive, so it is highly doubtful that the chipset division makes a net profit at the end of the day. Its job is to help push AMD’s CPU and APU offerings as much as possible.
These low cost FCH chips allow motherboard manufacturers to pack a lot of customization onto their boards, but they are still limited in what they can do. A $200+ motherboard simply will not fly with consumers given the level of overall performance that even the latest AMD A10-7850K APU provides in CPU bound workloads. Unfortunately, HSA has not yet taken off to leverage the full potential of the Kaveri APU. We have had big developments, just not big enough that the majority of daily users out there will require an AMD APU. Until that happens, AMD will not be viewed favorably when it comes to its APU offerings in gaming or high performance systems.
The quandary, obviously, is how AMD and its motherboard partners can create inexpensive motherboards that are feature packed, yet will not break the bank or become burdensome to APU sales. The FX series of processors from AMD has a bit more leeway, as the performance of the high end FX-8350 is not considered bad and it is a decent overclocker. That platform can sustain higher motherboard costs due to this performance. The APU side, not so much. The answer to this quandary is tradeoffs.
Subject: General Tech | June 7, 2014 - 04:32 AM | Scott Michaud
Tagged: microsoft, xbox one, xbone, gpgpu, GCN
Shortly after deprecating the Kinect, Microsoft announced that a 10% boost in GPU performance will be coming to the Xbox One. This boost comes from allowing developers to avoid the overhead that the Kinect requires for its various tasks. Updated software will allow game developers to reclaim some or all of that compute time.
Still looks like Wall-E grew a Freddie Mercury 'stache.
While it "might" (who am I kidding?) be used to berate Microsoft for ever forcing the Kinect upon users in the first place, this functionality was planned from before launch. Pre-launch interviews stated that Microsoft was looking into scheduling its compute tasks for moments when the game is busy elsewhere, for example hammering the ROPs and leaving the shader cores idle. This could be that, and only that, or it could be a bit more if developers are allowed to opt out of most or all Kinect computations altogether.
The theoretical maximum compute and shader performance of the Xbox One GPU is still about 29% less than that of its competitor, the PS4. Still, 29% less is better than about 36% less. Not only that, but the final result will always come down to the amount of care and attention spent on any given title by its developers. This will give them more breathing room, though.
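Those percentages fall out of the commonly cited peak figures, which I am assuming here (they are not stated above): roughly 1.31 TFLOPS for the Xbox One GPU and 1.84 TFLOPS for the PS4.

```python
# The deficit math behind the 10% GPU reservation.
xbox_one = 1.31   # assumed peak TFLOPS, Xbox One GPU
ps4 = 1.84        # assumed peak TFLOPS, PS4 GPU
with_kinect_reserve = xbox_one * 0.90   # 10% of GPU time held for Kinect

deficit_before = 1 - with_kinect_reserve / ps4   # ~36% behind the PS4
deficit_after = 1 - xbox_one / ps4               # ~29% behind the PS4
print(f"{deficit_before:.0%} -> {deficit_after:.0%}")
```

In other words, reclaiming the Kinect reservation does not change the hardware gap, it just lets games use all of what was shipped.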
Then, of course, the PC has about 3x the shader performance of either of those systems in a few single-GPU products. Everything should be seen in perspective.
AMD Brings Kabini to the Desktop
Perhaps we are performing a study of opposites? Yesterday Ryan posted his R9 295X2 review, covering the 500 watt, dual GPU monster that will retail for $1499. That card is meant only for the extreme enthusiast who has plenty of room in their case, plenty of knowledge about their power supply, and plenty of electricity and air conditioning to keep the monster at bay. The product that I am reviewing could not be more different: inexpensive, cool running, power efficient, and able to fit pretty much anywhere. These products can almost be viewed as polar opposites.
The interesting thing, of course, is that this shows how flexible AMD’s GCN architecture is. GCN can efficiently and effectively power the highest performing product in AMD’s graphics portfolio as well as their lowest power offerings in the APU market. Performance scales nearly linearly as more GCN compute cores are added.
The products that I am of course referring to are the latest Athlon and Sempron APUs, which are based on the Kabini architecture fusing Jaguar x86 cores with GCN compute cores. These APUs were announced last month, but we did not have the chance to test them at the time. Since then, these products have popped up in a couple of places around the world, but this is the first time that reviewers have officially received product from AMD and its partners.
Subject: Editorial, General Tech, Graphics Cards, Processors, Shows and Expos | March 30, 2014 - 01:45 AM | Scott Michaud
Tagged: gdc 14, GDC, GCN, amd
While Mantle and DirectX 12 are designed to reduce overhead and keep GPUs loaded, the conversation shifts when you are limited by shader throughput. Modern graphics processors contain up to thousands of compute cores. Video drivers are complex packages of software, and one of their many tasks is converting your scripts, known as shaders, into machine code for the hardware. If this machine code is efficient, it could mean drastically higher frame rates, especially at extreme resolutions and intense quality settings.
Emil Persson of Avalanche Studios, probably known best for the Just Cause franchise, published his slides and speech on optimizing shaders. His talk focuses on AMD's GCN architecture, due to its existence in both console and PC, while bringing up older GPUs for examples. Yes, he has many snippets of GPU assembly code.
AMD's GCN architecture is actually quite interesting, especially dissected as it was in the presentation. It is simpler than its ancestors and much more CPU-like: resources are mapped to memory (and caches of said memory) rather than "slots" (although drivers and APIs often pretend those relics still exist), and vectors are mostly treated as collections of scalars. Tricks that attempt to combine instructions into vectors, such as using dot products, can simply put irrelevant restrictions on the compiler and optimizer, because the hardware breaks those vector operations down into the very same component-by-component ops you thought you were avoiding.
Basically, and it makes sense coming from GDC, this talk rarely glosses over points. It goes over execution speed of one individual op compared to another, at various precisions, and which to avoid (protip: integer divide). Also, fused multiply-add is awesome.
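The dot product point can be sketched without diving into assembly. Here is a minimal Python illustration; the GCN instruction names in the comments are my paraphrase of how such code typically compiles, not actual compiler output:

```python
def dot3(a, b):
    """A 3-component dot product as a scalar GPU actually executes it:
    one multiply followed by two multiply-adds."""
    r = a[0] * b[0]   # roughly v_mul_f32
    r += a[1] * b[1]  # roughly v_mac_f32 (multiply-accumulate)
    r += a[2] * b[2]  # roughly v_mac_f32
    return r

# Packing unrelated scalars into a vector just to "use" a dot product
# buys nothing: the compiler emits the same three scalar ops, now with
# added ordering constraints that can block other optimizations.
print(dot3((1.0, 2.0, 3.0), (4.0, 5.0, 6.0)))  # 32.0
```

This is why hand-vectorizing for GCN can backfire: the hardware is scalar per lane, so the "clever" vector form only constrains the optimizer.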
I know I learned.
As a final note, this returns to the discussions we had prior to the launch of the next generation consoles. Developers are learning how to make their shader code much more efficient on GCN, and that could easily translate to leading PC titles. Especially with DirectX 12 and Mantle, which lighten CPU-based bottlenecks, learning how to do more work per FLOP addresses the other side. Everyone was looking at Mantle as AMD's play for success through harnessing console mindshare (and in terms of Intel vs AMD, it might help). But honestly, I believe that it will be trends like this presentation which prove more significant... even if behind-the-scenes. Of course developers were always having these discussions, but now console developers will probably be talking about only one architecture - that is a lot of people talking about very few things.
This is not really reducing overhead; this is teaching people how to do more work with less, especially in situations (high resolutions with complex shaders) where the GPU is most relevant.
Subject: Motherboards | March 6, 2014 - 02:44 AM | Tim Verry
Tagged: mini ITX, micro ATX, Kabini, GCN, FS1B, biostar, AM1
Biostar has officially launched three new AM1 platform motherboards that support AMD's latest Kabini-based desktop SoCs. The new Biostar hardware falls under the new AM1M series and includes the micro ATX AM1M-HP board and two mini ITX boards: the AM1MH and AM1ML.
All three boards feature an FS1B SoC socket, two DDR3 DIMM slots, two SATA III 6Gbps ports, one PCI-E 2.0 x16 slot (running at x4 speeds), one PCI-E 2.0 x1 slot, Gigabit Ethernet, and 5.1 channel audio. The micro ATX AM1M-HP adds a legacy PCI slot to the mix. In an interesting twist, Biostar has oriented the memory horizontally above the FS1B socket rather than vertically and to the right of the socket.
Rear I/O on the AM1M-HP and AM1MH boards includes:
- 2 x PS/2
- 1 x HDMI
- 1 x VGA
- 2 x USB 3.0
- 2 x USB 2.0
- 1 x RJ45 (GbE)
- 3 x analog audio
The other mini ITX board (the AM1ML) has the same rear I/O configuration minus the HDMI video output.
Biostar has not released pricing or availability information, but the boards should ship sometime in mid-April.
Low Power and Low Price
Back at CES earlier this year, we came across a couple of interesting motherboards that were neither AM3+ nor FM2+. These small, sparse, and inexpensive boards were actually based on the unannounced AM1 platform. This socket is actually the FS1b socket that is typically reserved for mobile applications which require the use of swappable APUs. The goal here is to provide a low cost, upgradeable platform for emerging markets where price is absolutely key.
AMD has not exactly been living on easy street for the past several years. Their CPU technologies, their bread and butter, have not been entirely competitive with Intel's. Helping to prop the company up, though, is a very robust and competitive graphics unit. The standalone and integrated graphics technologies they offer are not only competitive, but also class leading in some cases. The integration of AMD’s GCN architecture into APUs has been their crowning achievement as of late.
This is not to say that AMD is totally deficient in their CPU designs. Their low power/low cost designs that started with the Bobcat architecture all those years back have always been very competitive in terms of performance, price, and power consumption. The latest iteration is the Kabini APU based on the Jaguar core architecture paired with GCN graphics. Kabini will be the part going into the FS1b socket that powers the AM1 platform.
Kabini is a four core processor (Jaguar) with a 128 unit GCN graphics part (2 GCN compute cores). These APUs will be rated at 25 watts up and down the stack; even the parts with half the cores will still be 25 watt parts. AMD says that 25 watts is the sweet spot in terms of performance, cooling, and power consumption. Go lower than that and too much performance is sacrificed; go any higher and it would make more sense to use a Trinity/Richland/Kaveri solution. That 25 watt figure also encompasses the primary I/O functionality that typically resides on a standalone motherboard chipset. Kabini features 2 SATA 6G ports, 2 USB 3.0 ports, and 8 USB 2.0 ports. It also features multiple PCI-E lanes, including an x4 PCI-E connection for external graphics, and supports DisplayPort, HDMI, and VGA outputs. This is a true SoC from AMD that does a whole lot of work for not a whole lot of power.
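For a sense of scale against the bigger Kaveri chips, here is a hedged sketch of Kabini's GPU block. The ~600 MHz GPU clock is my assumption for illustration (actual clocks vary by SKU), while the 64-lane compute unit organization is standard GCN:

```python
# Kabini's GPU block: 128 shader units organized as GCN compute units.
shaders = 128
compute_units = shaders // 64   # 64 lanes per GCN compute unit -> 2 CUs

# Peak GFLOPS at an assumed ~600 MHz clock, counting 2 FLOPs per
# shader per clock (fused multiply-add).
gpu_clock_ghz = 0.600
gflops = shaders * gpu_clock_ghz * 2
print(compute_units, f"{gflops:.1f} GFLOPS")  # 2 153.6 GFLOPS
```

Even under generous assumptions that is roughly a fifth of the A10-7850K's GPU throughput, which is about what you would expect from a quarter of the shaders at a lower clock.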
Subject: General Tech | January 15, 2014 - 11:56 PM | Tim Verry
Tagged: r9 m290x, r7 m265, r5 m230, mobile gpu, GCN, amd
AMD recently took the wraps off of its latest mobile GPU series in the form of the R5 M200, R7 M200, and R9 M200 lines. Currently, there is one GPU in each respective Rx M200 series: the AMD Radeon R5 M230, R7 M265, and R9 M290X. Do not get too excited, however; none of the new mobile GPUs are based on AMD's new Hawaii silicon. Instead, the Rx M200 series parts are essentially rebrands of the Radeon HD 8000M series (which was in turn an OEM rebrand of the HD 7000M series) based around AMD's Graphics Core Next 1.0 architecture, with the top part using the Pitcairn GPU.
All of the Rx M200 series support DirectX 11.2 Tier 1, up to 4GB of GDDR5 memory, and at least 320 GCN shader cores. Information on the mid-range R7 M265 is scarce, but AMD has released details on the low end and high end chips, and ComputerBase has managed to put together specifications for the R5 M230 and R9 M290X. In short, the R5 M230 is a rebranded HD 8570 with higher clockspeeds and support for more memory, while the R9 M290X is a rebranded HD 8970M with official support for DirectX 11.2 Tier 1 (the HD 8970M technically supports it as well). A more detailed breakdown follows.
The R9 M290X features 1280 shaders clocked at 850MHz/900MHz (base/boost), 80 texture units, and 32 ROPs. OEMs can pair the GPU with up to 4GB of GDDR5 memory clocked at 1,200 MHz on a 256-bit bus.
The R5 M230 has 320 shaders clocked at 855MHz, 20 texture units, and 4 ROPs. This GPU can support up to 4GB of GDDR5 memory at 1,000MHz over a 64-bit bus.
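The memory specs translate into quite different peak bandwidths for the two detailed parts. This sketch assumes the quoted clocks are base GDDR5 clocks, with GDDR5's usual 4x effective data rate per pin:

```python
# Peak memory bandwidth from the quoted specs (assumed base GDDR5 clocks).
def gddr5_bandwidth_gbs(base_clock_mhz, bus_width_bits):
    effective_gbps = base_clock_mhz * 4 / 1000   # per-pin data rate in Gbps
    return effective_gbps * bus_width_bits / 8   # GB/s across the full bus

m290x = gddr5_bandwidth_gbs(1200, 256)   # R9 M290X: 153.6 GB/s
m230 = gddr5_bandwidth_gbs(1000, 64)     # R5 M230:   32.0 GB/s
print(f"R9 M290X: {m290x:.1f} GB/s, R5 M230: {m230:.1f} GB/s")
```

Under those assumptions the R9 M290X has nearly five times the memory bandwidth of the R5 M230, which lines up with their positions at opposite ends of the stack.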
Users will be able to get the new Rx M200 series graphics chips in mobile systems from Alienware, Clevo, Lenovo, and MSI. Other manufacturers should pick up the new GPUs soon as well. The new series is not terribly exciting, being nearly identical to its existing HD 8000M counterparts, but it does update the lineup to AMD's new naming and branding scheme. Notably, should AMD release a Hawaii-based mobile GPU, it has not left itself much room as far as naming goes (R9 M295X?).