The Architectural Deep Dive
AMD officially unveiled its brand new Bobcat architecture to the world at CES 2011. This was a very important release for AMD in the low power market. Even though netbooks were a dying breed at that time, AMD saw a solid uptick in sales thanks to the combination of price, performance, and power consumption offered by the new Brazos platform. AMD was of the opinion that a single CPU design could not span the entire power consumption spectrum of the time, so Bobcat was designed to fill the space from 1 watt to 25 watts. Bobcat never actually reached that 1 watt point, but the Z-60 was a 4.5 watt part with two cores and the full 80 Radeon cores.
The Bobcat architecture was produced on TSMC’s 40 nm process; AMD eschewed the 32 nm HKMG/SOI process being readied for the upcoming Llano and Bulldozer parts. In hindsight, this was a good idea. Yields took a while to improve on GLOBALFOUNDRIES’ new process, while the mature 40 nm line at TSMC was running at full speed. AMD was able to supply the market with good quantities of Bobcat based APUs in fairly short order. The product more than paid for itself, and while not exactly a runaway success that took many points of marketshare from Intel, it helped provide AMD with some stability in the market. Furthermore, it gave AMD a very good foundation for low power parts that are feature rich and offer competitive performance.
The originally planned Brazos successor never materialized; instead AMD introduced Brazos 2.0, a process-improvement oriented product that offered slightly higher speeds but remained in the same TDP range. Uptake of this product was limited, and it was obviously a minor refresh meant to buoy purchases of an aging product. Competition was coming from low power Ivy Bridge based chips, as well as AMD’s new Trinity products, which could reach TDPs as low as 17 watts. Brazos and Brazos 2.0 did find a home in low powered but full sized notebooks that were very inexpensive. Even heavily Intel-leaning manufacturers like Toshiba released Brazos based products in the sub-$500 market. The combination of good CPU performance and above average GPU performance made this a strong product in that market. It was so power efficient that only small batteries were typically needed, further lowering the cost.
All things must pass, and Brazos is no exception. Intel has a slew of 22 nm parts that are encroaching on the sub-15 watt territory, ARM partners have quite a few products that are getting pretty decent in terms of overall performance, and the graphics on all of these parts are seeing some significant upgrades. The 40 nm based Bobcat products are no longer competitive with what the market has to offer. So at this time we are finally seeing the first Jaguar based products. Jaguar is not a revolutionary product, but it improves on nearly every aspect of performance and power usage as compared to Bobcat.
A Reference Platform - But Not a Great One
Believe it or not, AMD claims that the Brazos platform, along with the "Brazos 2.0" update the following year, was the company's most successful mobile platform in terms of sales and design wins. When it first hit the scene in late 2010, it went head to head against the likes of Intel's Atom processor and the Atom + NVIDIA ION combination, and it was winning. It was sold in mini-ITX motherboard form factors as well as small clamshell notebooks (gasp, dare we say... NETBOOKS?) and though it might not have gotten the universal attention it deserved, it was a great part.
With Kabini (and Temash as well), AMD is making another attempt to pull in some marketshare in the low power, low cost mobile markets. I have already gone over the details of the mobile platforms that AMD is calling Elite Mobility (Temash) and Mainstream (Kabini) in a previous article that launched today.
This article will quickly focus on the real-world performance of the Kabini platform as demonstrated by a reference laptop I received while visiting AMD in Toronto a few weeks ago. While this design isn't going to be available in retail (and I am somewhat thankful based on the build quality) the key is to look at the performance and power efficiency of the platform itself, not the specific implementation.
Kabini Architecture Overview
The building blocks of Kabini are four Jaguar x86 cores and 128 Radeon cores collected in a pair of Compute Units - similar in many ways to the CUs found in the Radeon HD 7000 series discrete GPUs. Josh has written a very good article that focuses on the completely new architecture that is Jaguar and compares it to other processors, including AMD's previous low power core used in Brazos, the Bobcat core.
2013 Elite Mobility APU - Temash
AMD has a lot to say today. At an event up in Toronto this month we got to sit down with AMD’s marketing leadership and key engineers to learn about the company’s plans for 2013 mobility processors. This includes a refreshed high performance APU known as Richland that will replace Trinity as well as two brand new APUs based on Jaguar CPU cores and the GCN architecture for low power platforms.
Josh has put together an article that details the Jaguar + GCN design of Temash and Kabini and I have also posted some initial performance results of the Kabini reference system AMD handed me in May. This article will detail the plans that AMD has for each of these three mobile segments, starting with the newest entry, AMD’s Elite Mobility APU platform – Temash.
The goal of the APU, the combination of traditional x86 processing cores and discrete-style graphics, was to offer unparalleled performance in smaller and more efficient form factors. AMD believes that its leadership on the graphics front will offer a good-sized advantage in areas including performance tablets, hybrids, and small screen clamshells that may or may not be touch enabled. The company does acknowledge, though, that getting into the smallest tablets (like the Nexus 7) is not on the table quite yet and that content creation desktop replacements are probably outside the scope of Richland.
2013 Elite Mobility APU – Temash
With Temash, AMD will have the first quad-core x86 SoC design, and the company thinks it will make a big splash in a relatively new market known as the “high performance” tablet.
Temash, built around Jaguar CPU cores and GCN graphics technology, will offer fully accelerated video playback with transcode support, along with features like image stabilization and Perfect Picture. Temash will also be the only SoC to offer DX11 graphics support, and even though some games might not be able to show off the added effects, DX11 holds quite a few performance advantages over DX10/9. With a claimed GPU performance improvement of more than 100%, you’ll be able to drive displays at 2560x1600 for productivity use and even take advantage of wireless display options.
Subject: General Tech | May 22, 2013 - 10:33 PM | Tim Verry
Tagged: xbox one, semi-custom business unit, ps4, microsoft, amd
Microsoft took the wraps off of its upcoming Xbox One console earlier this week, and it is now possible to compare Microsoft and Sony's next-generation hardware.
Prior to the Xbox One launch, Forbes contributor Paul Tassi postulated that Microsoft would go a different route than Sony with its next Xbox - specifically, that Microsoft would focus more on media playback and applications rather than purely gaming (while Sony would do the opposite). At the time, I found myself agreeing with his sentiment, and now that the console has launched I believe Mr. Tassi was absolutely correct. Microsoft wants the Xbox One to be the center of your living room and the device you use for all of your media (and gaming) needs. The new console integrates the Windows kernel and can multitask applications and media in a Metro-UI like fashion (2/3, 1/3 split screen).
On the other hand, Sony is positioning its console as the best gaming device for the living room, and is focusing on integrating all things gaming with media as more of an afterthought. Like previous PlayStation consoles, it will likely play back media files and Blu-ray movies just fine, but it is a gaming box at its core.
Interestingly, the hardware that both companies have chosen seems to line up nicely with those goals. Both the Xbox One and PS4 are based around a semi-custom AMD APU with eight Jaguar CPU cores, but they have gone in different directions from there.
PlayStation 4 hardware:
As a refresher, Sony's PS4 has the following hardware specifications.
- CPU: Eight core AMD “Jaguar” CPU
- GPU: AMD GCN GPU with 1152 shader units (in 18 CUs)
- Memory: 8GB of GDDR5 clocked at 5500MHz
- HDD: A mechanical (spindle) hard drive, capacity unannounced
- Bandwidth: 176 GB/s
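The quoted bandwidth figure is easy to sanity check: peak memory bandwidth is just the effective data rate multiplied by the bus width. A quick sketch, assuming the PS4's widely reported 256-bit GDDR5 interface (the bus width is an assumption, not something Sony quoted directly):

```python
# Peak memory bandwidth = effective data rate (transfers/s) x bus width (bytes)
def peak_bandwidth_gbs(transfers_per_sec, bus_width_bits):
    """Theoretical peak bandwidth in GB/s."""
    return transfers_per_sec * (bus_width_bits / 8) / 1e9

# PS4: GDDR5 at 5500 MT/s effective, on an assumed 256-bit bus
print(f"PS4 GDDR5: {peak_bandwidth_gbs(5500e6, 256):.0f} GB/s")  # -> 176 GB/s
```

The math lines up exactly with Sony's 176 GB/s figure, which suggests the "5500MHz" spec refers to the effective transfer rate rather than the actual clock.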
Sony has changed direction from the PS3 by going with a simpler design that provides more graphical horsepower and higher system memory bandwidth than the Xbox One. The PS4 uses a semi-custom AMD chip that saved Sony a great deal of R&D money while also being easier for developers, as it is that much closer to a traditional PC with its x86-64 APU (GDDR5 system memory is unusual, though). The PS4 is aimed at gamers, and Sony's choice of hardware and memory reflects that.
Xbox One hardware:
Microsoft was not as forthcoming as Sony in touting specific hardware specifications, but based on the announcement and additional information acquired by AnandTech, the Xbox One features the following hardware:
- CPU: Eight core AMD “Jaguar” CPU
- GPU: AMD GCN GPU with 768 shader cores (within 12 Compute Units)
- Memory: 8GB of DDR3 system memory at 2133MHz as well as 32MB of on-chip eSRAM
- HDD: 500GB
- DDR3 Memory Bandwidth: 68.3 GB/s
- eSRAM Memory Bandwidth: 102GB/s
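The DDR3 figure also checks out with the same rate-times-width arithmetic, assuming a 256-bit (quad-channel 64-bit) interface; the interface width is an assumption, and the eSRAM number is a quoted figure rather than something derivable from public specs:

```python
def peak_bandwidth_gbs(transfers_per_sec, bus_width_bits):
    """Theoretical peak bandwidth in GB/s."""
    return transfers_per_sec * (bus_width_bits / 8) / 1e9

# Xbox One: DDR3-2133 on an assumed 256-bit (4 x 64-bit channel) interface
print(f"Xbox One DDR3: {peak_bandwidth_gbs(2133e6, 256):.1f} GB/s")  # -> 68.3 GB/s
```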
Microsoft took a different approach with the Xbox One. Instead of going with GDDR5 like Sony did, Microsoft opted for a hybrid approach: a small but high bandwidth, low latency embedded SRAM on the same chip as the CPU and GPU, paired with a larger 8GB pool of traditional PC DDR3 system memory. This approach is interesting because it gives Microsoft a system with access to low latency memory at the expense of the higher overall bandwidth the PS4 enjoys with its single pool of GDDR5. Developers will need to become familiar with the embedded RAM to take full advantage of the latency benefits, however.
These hardware choices work out such that the PS4 has a distinct advantage when it comes to gaming performance. It has more GPU horsepower and higher-bandwidth memory for feeding the GPU high resolution textures. Microsoft's console still has a respectable GPU (for a console), but it seems optimized for lower latency memory access and just enough graphics oomph to let the company build a multimedia and home entertainment machine that can run multiple applications simultaneously while also satisfying gamers with a decent graphical upgrade over the Xbox 360.
This next generation of consoles should be interesting, as will the ensuing "flame wars" between fans. Both Microsoft and Sony have learned from the current generation of consoles and are focusing on what they are good at to differentiate themselves. Microsoft is tapping into its Windows ecosystem of PCs and mobile devices to provide an app machine that the company hopes will be the hub of your living room entertainment. Sony, which does not have that expertise or existing infrastructure, is also focusing on what it excels at: gaming.
I'm looking forward to seeing how the consoles co-exist and how the market shakes out over 2014 and into the future as the hardware stays the same but software changes. Sony definitely has the hardware advantage to stay in the game longer when it comes to games and graphics, but Microsoft has a box that can do more than games and can find purchase in your media rack even after it is surpassed in gaming graphics quality by PCs and the competition.
What do you think about the split between the Xbox One and PS4's hardware?
Subject: Systems | May 21, 2013 - 08:21 PM | Tim Verry
Tagged: Richland, msi, gx70, gx60, gaming notebook, gaming, APU, amd
MSI announced two new gaming notebooks powered by AMD's latest Richland APUs today called the GX70 and GX60. Both gaming notebooks use AMD A10-5750M processors, a discrete AMD graphics card, 8GB of RAM, and a 750GB (7200 RPM) hard drive. Other shared specifications include a Killer E2200 NIC, Blu-ray drive, THX certified speakers, a headphone amp, and a large 9-cell battery.
The GX70 is the larger of the two gaming notebooks, weighing 8.6 pounds and packing a 17.3” 1920 x 1080 display with an anti-reflective coating. It pairs the A10-5750M APU with a Radeon HD 8970M discrete mobile GPU to deliver gaming performance at 1080p. The system is also capable of outputting to multiple displays over HDMI and supports AMD's Eyefinity technology, and on the outside the GX70 features a SteelSeries gaming keyboard.
Meanwhile, the MSI GX60 is a 15-inch notebook that weighs 7.7 pounds. This gaming notebook uses an AMD A10-5750M APU and a Radeon 7970M mobile discrete GPU. Further, the GX60 has a 15.6” 1080p anti-reflective display and SteelSeries gaming keyboard.
MSI claims that the new AMD Richland APUs will give its gaming notebooks much better battery life. The new GX70 and GX60 will have up to 40% better graphical performance compared to previous generations thanks to the new APUs and discrete cards. According to MSI VP of Sales Andy Tung, “the GX70 and GX60 deliver the ultimate sensory experience for both professional and amateur gamers.” More information on the new gaming notebooks can be found on this MSI press release.
Subject: Graphics Cards | May 20, 2013 - 12:54 PM | Tim Verry
Tagged: VIA, Q1 2013, nvidia, jpr, Intel, gpu market share, amd
Market analytics firm Jon Peddie Research recently released estimated market share and GPU shipment numbers for Q1 2013. The report includes information on AMD, NVIDIA, Intel, and VIA and covers IGPs, processor graphics, and discrete GPUs in x86-based desktop and mobile systems. The report includes x86 tablets but does not factor in GPUs used in ARM devices like NVIDIA's Tegra chips. Year over year, the PC market is down 12.6% and the GPU market declined by 12.9%. It is not all bad news for the PC market and discrete GPU makers, however: GPU shipments are expected to exhibit a compound annual growth rate (CAGR) of 2.6% through 2016, with as many as 394 million discrete GPUs shipped in 2016 alone.
In Q1 2013, the PC market was down 13.7% versus the previous quarter (Q4 2012), but the GPU market only declined 3.2%. This discrepancy is explained as the result of people adding multiple GPUs to a single PC, including adding a discrete card to a system that already has processor graphics or an APU. By the end of Q1 2013, Intel held 61.8% market share, followed by AMD in second place with 20.2% and NVIDIA with 18%. Notably, VIA is out of the game with 0.0% market share.
In terms of GPU shipments, NVIDIA had a relatively good first quarter with a 7.6% increase in notebook GPU shipments and flat desktop GPU shipments, for an overall increase in PC graphics shipments of 3.6%. On the other hand, x86 CPU giant Intel saw desktop and notebook GPU shipments slip by 3% and 6.3% respectively, which amounts to an overall decline of 5.3%. In between NVIDIA and Intel, AMD moved 30% more desktop chips (including APUs) versus Q4 2012, while its notebook chips (including APUs) fell by 7.3%. AMD's overall PC graphics shipments fell by 0.3%.
In all, this is decent news for the PC market as it shows that there is still interest in desktop GPUs. The PC market itself is declining and taking the GPU market with it, but it is far from the death of the desktop PC. It is interesting that NVIDIA (which announced Q1'13 revenue of $954.7 million) managed to push more chips while AMD and Intel were on the decline, since NVIDIA doesn't have an x86 CPU with integrated graphics. I'm looking forward to seeing where NVIDIA stands in the mobile GPU market, which does include ARM-powered products.
Subject: General Tech | May 16, 2013 - 03:11 PM | Ken Addison
Tagged: podcast, video, ibuypower, revolt, Seagate, sshd, nvidia, project shield, shield, haswell, corsair, seasonic, amd, ASUS P5A
PC Perspective Podcast #251 - 05/16/2013
Join us this week as we discuss the iBuyPower Revolt, Seagate SSHD, NVIDIA Shield Pricing, and more!
The URL for the podcast is: http://pcper.com/podcast - Share with your friends!
- iTunes - Subscribe to the podcast directly through the Store
- RSS - Subscribe through your regular RSS reader
- MP3 - Direct download link to the MP3 file
Hosts: Ryan Shrout, Josh Walrath, Allyn Malventano, and Morry Teitelman
Program length: 1:12:25
Week in Review:
0:10:30 Seagate Thin SSHD 500GB Review
News items of interest:
1:01:00 Hardware/Software Picks of the Week:
1-888-38-PCPER or email@example.com
Subject: Motherboards | May 15, 2013 - 09:37 PM | Josh Walrath
Tagged: asus, P5A, ALi, Aladdin V, 100 MHz, Super 7, amd, K6, K6-2, SDRAM
I first got into computers in the 8088 days, but I started to do it professionally when Socket 5 was transitioning to Socket 7. The Pentium 133 based Quantex system I bought after the Atlanta Olympics catapulted me into the modern computer age (I was previously using an Intel 386SX-16 MHz system from DAK… don’t get me started on that company). It was also when AOL was the only internet service in Laramie, WY. I started browsing hardware retailers and then moved onto independent review sites that were only then just popping up. Tom’s and Anandtech were very new and did not feature many pictures because digital cameras were still quite rare.
Remember when the 1/5/2 setup was considered optimal? It allowed for the good modem and good soundcard to be installed!
One of the big shifts of the time came when Intel abandoned Socket 7 and forged ahead with Slot 1. AMD had fit the K6 into the Socket 7 infrastructure, though it was initially designed for a proprietary socket. Intel had the Pentium II line, and things were moving fast in those days. AMD provided competition for Intel with excellent integer performance and adequate floating point performance, as well as a socketed product that was cheaper to produce for both AMD and its motherboard partners. Socket 7 was then morphed into Super 7 with support for 100 MHz FSB speeds, a big jump that AMD spearheaded. Cyrix, IBM, and Winchip all went along for the ride, but they often supported oddball bus speeds that did not always translate well into bus dividers for AGP and PCI.
The first wave of AGP enabled chipsets that also supported bus speeds above 66 MHz finally hit the market, and one of the first was the SiS 5591. One of the first boards to use this chipset was the MTech R581A. The board had jumper settings for 100 MHz, but it was far from stable at that speed. It did fully support 83.3 MHz, which gave many Socket 7 users a nice boost when overclocking. The first true 100 MHz chipsets were the VIA MVP-3 and the ALi Aladdin V, which natively supported the 100 MHz bus and ran it perfectly fine. These chipsets allowed the later K6-2 and K6-3 chips to exist and compete successfully with the 100 MHz based Pentium IIs.
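The divider problem behind those oddball bus speeds is easy to illustrate: the PCI and AGP clocks were derived from the FSB through a small set of fixed ratios, so only certain bus speeds kept the peripheral buses near their rated 33 and 66 MHz. A rough sketch, with divider sets that are illustrative rather than an exact match for any one chipset:

```python
# PCI and AGP clocks were derived from the FSB via fixed dividers; chipsets
# only offered a handful of ratios, so off-spec bus speeds dragged the
# peripheral buses off-spec with them.
PCI_DIVIDERS = (2, 2.5, 3)    # illustrative options; actual chipsets varied
AGP_DIVIDERS = (1, 1.5, 2)

def best_clock(fsb_mhz, dividers, target_mhz):
    """Pick the divider whose derived clock lands closest to spec."""
    return min((fsb_mhz / d for d in dividers),
               key=lambda clk: abs(clk - target_mhz))

for fsb in (66.6, 75.0, 83.3, 100.0):
    pci = best_clock(fsb, PCI_DIVIDERS, 33.3)
    agp = best_clock(fsb, AGP_DIVIDERS, 66.6)
    print(f"FSB {fsb:5.1f} MHz -> PCI {pci:.1f} MHz, AGP {agp:.1f} MHz")
```

At 66.6, 83.3, and 100 MHz a divider exists that lands PCI right at 33.3 MHz, which is part of why those speeds became standard; in-between speeds left the PCI or AGP bus noticeably over or under clocked.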
This particular model included the onboard ESS sound chip. Pretty posh for the time. Oh yes, there was a time before USB 2.0...
I had a heck of a time getting a hold of a VIA MVP-3 based motherboard at first, and I never actually laid hands upon any Aladdin V based unit during that time. There was no Newegg or Tiger Direct back then, and most major distributors like Tech Data did not always stock a wide selection of products. I was also not making a whole lot of money. I was particularly jealous of all these other sites getting access to review hardware, but then again at this time I had only a handful of articles out and I had not even started Penstarsys.com yet. So when guys like Tom and Anand got their hands on the Asus P5A, it was most definitely must-read material.
This was one of the first 100 MHz Super 7 based boards out there, as VIA was having some real issues with their MVP-3 chipset. Eventually VIA fixed those issues, but not before ALi had a good couple of months’ lead on their primary competitor. Of great interest for this board was the ability to run at 120 MHz FSB. Very few boards could handle that speed well, but the 115 MHz setting seemed very stable. I/O performance was also a step above the VIA chipsets, but VIA was fairly well known for having strange I/O issues at that time (not to mention AGP compatibility issues). The Asus P5A was a great board for the time, and it did not suffer much from the AGP issues that plagued VIA. Oddly enough, though ALi had the better overall chipset, they did not sell as well as the VIA products. Asus still shipped a lot of them, so I guess that made up for the more limited selection.
That is a single phase power... array? Look at all that open space throughout the board!
Super 7 was a dying breed by 1999 with the introduction of the K7 Athlon, but the P5A sold very well throughout its entire lifespan. The board I acquired had a K6-2 500 in the socket, and a BIOS update would provide support for the later K6-2+ and K6-3+ processors. What perhaps strikes me most is the overall simplicity of the board compared to modern products. The P5A looks like it has a single power phase going to the CPU, does not feature integrated Ethernet or other amenities, and only has two ATA-33 ports. Interestingly enough, it does feature an ESS based audio codec - rare for those days! Compared to monster products like the Crosshair V Formula Z or the G1.Sniper 3, I guess simplicity is overlooked these days.
Subject: Graphics Cards | May 15, 2013 - 12:00 PM | Ryan Shrout
Tagged: tomb raider, never settle reloaded, never settle, level up, Crysis 3, bundle, amd
AMD dropped us a quick note to let us in on another limited time offer for buyers of AMD Radeon graphics cards. Starting today, the Never Settle Reloaded bundle that we first told you about in February is getting an upgrade for select tiers. For new buyers of the Radeon HD 7970, 7950 and 7790 AMD will be adding Tomb Raider into the mix. Also, the Radeon HD 7870 will be getting Crysis 3.
Here is the updated, currently running AMD Radeon Level Up bundle matrix.
Now if you buy a new AMD Radeon HD 7970, HD 7950 or HD 7870 today you will get four top-level PC games including Crysis 3, Bioshock Infinite, Far Cry 3: Blood Dragon and Tomb Raider.
This is a limited time offer though that will end when supplies run out and we don't really have any idea when that will be. Check out AMD's Level Up site for more details and to find retailers offering the updated bundles.
I am curious to find out how successful these bundles have been for AMD, and whether NVIDIA has had any feedback on the free-to-play bundle it offered or the new Metro: Last Light option. Do gamers put much emphasis on the game bundles that come with each graphics card, or do performance and technology make the difference?
UPDATE: I have seen a couple of questions on whether this Level Up promotion would be retroactive. According to the details I have from AMD, this promotion is NOT retroactive. If you have already purchased any of the affected cards you will not be getting the additional games.
Subject: Motherboards | May 15, 2013 - 03:56 AM | Tim Verry
Tagged: server, open source hardware, open source, open compute project, open 3.0, amd
Throughout last year, AMD worked with the Open Compute Foundation to develop open source hardware for servers. The goal of the project was to bring lower-cost, efficient motherboards (compatible with AMD processors) to the server market. Even better, the AMD-compatible hardware is open source, which gives companies and OEMs/system integrators free rein to modify and build the hardware themselves. The latest iteration of the project is called Open 3.0, and motherboards based on the design(s) are available now from a number of AMD partners.
An AMD Open 3.0 motherboard.
According to a recent AMD press release, Open 3.0 motherboards will be available from Avnet, Hyve, Penguin Computing, and ZT Systems beginning this week. The new motherboards strip out unnecessary and "over-provisioned" hardware to cut down on upfront hardware costs and electrical usage. Open 3.0 uses a base open source motherboard design that can then be further customized to work with a variety of workloads and in various rack/server configurations. Servers based on Open 3.0 will range from 1U to 3U in size and can slot into standard 19" racks or Open Rack environments. The boards, with their dual Opteron 6300-series processors, will reportedly be suitable for High Performance Computing (HPC), Virtual Desktop Infrastructure (VDI), cloud applications, and storage servers.

AMD claims that its Open 3.0 motherboards can reduce the Total Cost of Ownership (TCO) of servers by up to 57% in data centers: a server based on Open 3.0 has a claimed TCO of $4,589, versus $10,669 for one based on a traditional OEM motherboard. The AMD-provided example sounds nice, and while it is likely a best-case scenario, the idea behind the Open Compute Project and the AMD-specific Open 3.0 hardware does make sense. Customers should see more competition with motherboards that are cheaper to produce and run thanks to their open source nature. Further details on the status of Open 3.0 and the available hardware are being discussed at an invitation-only industry round-table this week between partners, interested enterprise customers, and a number of companies (including AMD, Broadcom, and Quanta).
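The 57% figure only makes sense in one direction, and AMD's quoted dollar amounts let us check it:

```python
# AMD's quoted TCO figures for the comparison
open30_tco = 4589   # Open 3.0 based server
oem_tco = 10669     # traditional OEM based server

reduction = 1 - open30_tco / oem_tco
print(f"TCO reduction vs. OEM baseline: {reduction:.0%}")  # -> 57%
```

Note the direction of the comparison: the OEM server costs roughly 132% more than the Open 3.0 box, so the 57% only holds when expressed as a reduction from the OEM baseline, not as a markup over Open 3.0.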
For the uninitiated, the Open 3.0 hardware features a motherboard that measures 16" x 16.7" and is intended for 1U, 1.5U, 2U, and 3U servers. Each Open 3.0 board includes two AMD Opteron 6300 series processors, 24 DDR3 DIMM slots (12 per CPU: four channels with three DIMMs each), six SATA ports, a managed dual-channel Gigabit Ethernet NIC, up to four PCI-E slots, and a single mezzanine connector for custom modules (e.g. the Mellanox IO or Broadcom management card). Board I/O includes a single serial port and two USB ports.
I'm glad to see AMD's side of the Open Compute Project come to fruition with the company's Open 3.0 hardware. Anything to reduce power usage and hardware cost is welcome in the data center world, and it will be interesting to see what kind of impact the open source hardware will have, especially when it comes to custom designs from system integrators. Intel is also working towards open source server hardware along with Facebook and the Open Compute Project. It is refreshing to see open source gaining traction in this market segment, to say the least.