Haswell - A New Architecture
Thanks for stopping by our coverage of Intel's Haswell 4th Generation Core processor and Z87 chipset release! We have a lot of different stories for you to check out, and I wanted to be sure you knew about them all.
- PCPer Live! ASUS Z87 Motherboard and Intel Haswell Live Event! - Tuesday, June 4th we will be hosting a live streaming event with JJ from ASUS. Stop by to learn about Z87 and overclocking Haswell and to win some motherboards and graphics cards!
- ASUS ROG Maximus VI Extreme Motherboard Review
- MSI Z87-GD65 Gaming Motherboard Review
- ASUS Gryphon Z87 Micro-ATX Motherboard Review
This spring has been unusually busy for us here at PC Perspective - with everything from new APU releases from AMD, to new graphics cards from NVIDIA, and now new desktop and mobile processors from Intel. There has never been a better time to be a technology enthusiast, though some would argue that the days of the enthusiast PC builder are on the decline. Looking at the revived GPU wars and the launch of Intel's Haswell-based 4th Generation Core processors, we couldn't disagree more.
Built on the same 22nm process technology that Ivy Bridge brought to the world, Haswell is a new architecture from Intel that shifts the company's focus toward a single homogeneous design able to span wide-ranging markets. From tablets to performance workstations, Haswell will soon find its way into just about every corner of your technology life.
Today we focus on the desktop though - the release of the new Intel Core i7-4770K, fully unlocked, LGA1150 processor built for the Z87 chipset and DIY builders everywhere. In this review we'll discuss the architectural changes Haswell brings, the overclocking capabilities and limitations of the new design, application performance, graphics performance and quite a bit more.
Haswell remains a quad-core processor, built from 1.4 billion transistors on a die measuring 177 mm², with integrated processor graphics, a shared L3 cache, and a dual-channel DDR3 memory controller. But much has changed - let's dive in.
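Those headline numbers work out to a density of roughly 7.9 million transistors per square millimeter - a quick back-of-the-envelope check (my own arithmetic, not an Intel figure):

```python
# Transistor density implied by the quoted Haswell die figures:
# 1.4 billion transistors on a 177 mm^2 die.
transistors = 1.4e9
die_area_mm2 = 177

density_per_mm2 = transistors / die_area_mm2
print(f"{density_per_mm2 / 1e6:.1f} million transistors per mm^2")  # → 7.9
```

Nothing fancy, but it puts the 22 nm process in perspective next to the 40 nm parts discussed below.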
Kabini is a pretty nifty little chip. So nifty, in fact, that AMD is producing server-grade units for the growing micro-server market. As readers may or may not remember, AMD bought up SeaMicro last year to get a better grip on that expanding market. While there are no official announcements from SeaMicro about offerings utilizing the server-grade Kabini parts, we can expect them sooner rather than later.
The Kabini parts (Jaguar + GCN) will be branded Opteron X-series. So far there are two announced products; one utilizes the onboard graphics portion while the other has the GCN based unit disabled. The products have a selectable TDP that ranges from 9 watts to 22 watts. This should allow the vendors to further tailor the chips to their individual solutions.
The X1150 is the GPU-less product with adjustable TDPs ranging from 9 to 17 watts. It is a native quad-core product with 2 MB of L2 cache. It can be clocked up to 2 GHz, which we assume applies at the 17 watt end of that range. The X2150 has an adjustable TDP range of 11 to 22 watts. The four cores can reach a max speed of 1.9 GHz, while the GPU can scale from 266 MHz up to a max of 600 MHz.
The Architectural Deep Dive
AMD officially unveiled their brand new Bobcat architecture to the world at CES 2011. This was a very important release for AMD in the low power market. Even though netbooks were a dying breed at that time, AMD experienced a good uptick in sales due to the combination of price, performance, and power consumption of the new Brazos platform. AMD was of the opinion that a single CPU design could not span the power consumption spectrum of CPUs at the time, so Bobcat was designed to fill the space from 1 watt to 25 watts. Bobcat was never able to get down to that 1 watt point, but the Z-60 was a 4.5 watt part with two cores and the full 80 Radeon cores.
The Bobcat architecture was produced on TSMC’s 40 nm process. AMD eschewed the 32 nm HKMG/SOI process that was being utilized for the upcoming Llano and Bulldozer parts. In hindsight, this was a good idea. Yields took a while to improve on GLOBALFOUNDRIES' new process, while the existing 40 nm line at TSMC was running at full speed. AMD was able to supply the market in fairly short order with good quantities of Bobcat based APUs. The product more than paid for itself, and while not exactly a runaway success that garnered many points of marketshare from Intel, it helped to provide AMD with some stability in the market. Furthermore, it provided a very good foundation for AMD when it comes to low power parts that are feature rich and offer competitive performance.
The original Brazos update did not happen; instead AMD introduced Brazos 2.0, a more process-improvement-oriented product which featured slightly higher speeds but remained in the same TDP range. The uptake of this product was limited, and it was obviously a minor refresh to buoy purchases of the aging product. Competition was coming from low power Ivy Bridge based chips, as well as AMD’s new Trinity products which could reach TDPs of 17 watts. Brazos and Brazos 2.0 did find a home in low powered, but full sized, notebooks that were very inexpensive. Even heavily Intel-leaning manufacturers like Toshiba released Brazos based products in the sub-$500 market. The combination of good CPU performance and above average GPU performance made this a strong product in this particular market. It was so power efficient that only small batteries were typically needed, further lowering the cost.
All things must pass, and Brazos is no exception. Intel has a slew of 22 nm parts that are encroaching on the sub-15 watt territory, ARM partners have quite a few products that are getting pretty decent in terms of overall performance, and the graphics on all of these parts are seeing some significant upgrades. The 40 nm based Bobcat products are no longer competitive with what the market has to offer. So at this time we are finally seeing the first Jaguar based products. Jaguar is not a revolutionary product, but it improves on nearly every aspect of performance and power usage as compared to Bobcat.
A Reference Platform - But not a great one
Believe it or not, AMD claims that the Brazos platform, along with the "Brazos 2.0" update the following year, were the company's most successful mobile platforms in terms of sales and design wins. When it first took the scene in late 2010, it was going head to head against the likes of Intel's Atom processor and the combination of Atom + NVIDIA ION and winning. It was sold in mini-ITX motherboard form factors as well as small clamshell notebooks (gasp, dare we say...NETBOOKS?) and though it might not have gotten the universal attention it deserved, it was a great part.
With Kabini (and Temash as well), AMD is making another attempt to pull in some marketshare in the low power, low cost mobile markets. I have already gone over the details of the mobile platforms that AMD is calling Elite Mobility (Temash) and Mainstream (Kabini) in a previous article that launched today.
This article will quickly focus on the real-world performance of the Kabini platform as demonstrated by a reference laptop I received while visiting AMD in Toronto a few weeks ago. While this design isn't going to be available in retail (and I am somewhat thankful based on the build quality) the key is to look at the performance and power efficiency of the platform itself, not the specific implementation.
Kabini Architecture Overview
The building blocks of Kabini are four Jaguar x86 cores and 128 Radeon cores collected in a pair of Compute Units - similar in many ways to the CUs found in the Radeon HD 7000 series discrete GPUs. Josh has written a very good article that focuses on the completely new Jaguar architecture and compares it to other processors, including AMD's previous low power core used in Brazos, the Bobcat core.
2013 Elite Mobility APU - Temash
AMD has a lot to say today. At an event up in Toronto this month we got to sit down with AMD’s marketing leadership and key engineers to learn about the company’s plans for 2013 mobility processors. This includes a refreshed high performance APU known as Richland that will replace Trinity as well as two brand new APUs based on Jaguar CPU cores and the GCN architecture for low power platforms.
Josh has put together an article that details the Jaguar + GCN design of Temash and Kabini and I have also posted some initial performance results of the Kabini reference system AMD handed me in May. This article will detail the plans that AMD has for each of these three mobile segments, starting with the newest entry, AMD’s Elite Mobility APU platform – Temash.
The goal of the APU, the combination of traditional x86 processing cores and a discrete-style graphics system, was to offer unparalleled performance in smaller and more efficient form factors. AMD believes that their leadership on the graphics front will offer them a good sized advantage in areas including performance tablets, hybrids and small screen clamshells that may or may not be touch enabled. They are acknowledging, though, that getting into the smallest tablets (like the Nexus 7) is not on the table quite yet and that content creation desktop replacements are probably outside the scope of Richland.
2013 Elite Mobility APU – Temash
AMD will have the first x86 quad-core SoC design with Temash and AMD thinks it will make a big splash in a relatively new market known as the “high performance” tablet.
Temash, built around Jaguar CPU cores and the graphics technology of GCN, will be able to offer fully accelerated video playback with transcode support, as well as features like image stabilization and Perfect Picture. Temash will also be the only SoC in its class to offer DX11 graphics support, and even though some games might not have the ability to show off added effects, there are quite a few performance advantages of DX11 over DX10/9. With a claimed GPU performance uplift of more than 100%, you’ll be able to drive displays at 2560x1600 for productivity use and even take advantage of wireless display options.
Subject: Editorial, General Tech, Graphics Cards, Processors, Systems | May 21, 2013 - 05:26 PM | Scott Michaud
Tagged: xbox one, xbox
Almost exactly three months have passed since Sony announced the Playstation 4 and just three weeks remain until E3. Ahead of the event, Microsoft unveiled their new Xbox console: The Xbox One. Being so close to E3, they are saving the majority of games until that time. For now, it is the box itself as well as its non-gaming functionality.
First and foremost, the raw specifications:
- AMD APU (5 billion transistors, 8 core, on-die eSRAM)
- 8GB RAM
- 500GB Storage, Bluray reader
- USB 3.0, 802.11n, HDMI out, HDMI in
The hardware is a definite win for AMD. The Xbox One is based upon an APU which is quite comparable to what the PS4 will offer. Unlike previous generations, there will not be too much differentiation based on available performance; I would not expect to see much of a fork in terms of splitscreen and other performance-sensitive features.
A new version of the Kinect sensor will also be present with all units, which developers can depend upon. Technically speaking, the camera is higher resolution and more wide-angle; up to six skeletons can be tracked, with joints able to rotate rather than just hinge. Microsoft is finally also permitting developers to use the Kinect along with a standard controller to, as they imagine, allow a user to raise their controller to block with a shield. That is the hope, but near the launch of the original Kinect, Microsoft filed a patent for sign language recognition, which has not materialized yet. Who knows whether the device will be successfully integrated into gaming applications.
Of course Microsoft is known most for system software, and the Xbox One runs three lightweight operating environments. In Windows 8, you have the Modern interface, which runs WinRT applications, and the desktop, which runs x86-compatible applications.
The Xbox One borrows more than a little from this model.
The home screen, which I am tempted to call the Start Screen, for the console has a very familiar tiled interface. The tiles are not identical to Windows but they are definitely consistent. This interface allows for access to Internet Explorer and an assortment of apps. These apps can be pinned to the side of the screen, much like snapping a Modern app in Windows 8. I am expecting there to be "a lot of crossover" (to say the least) between this and the Windows Store; I would not be surprised if it is basically the same API. This works both when viewing entertainment content as well as within a game.
These three operating systems run at the same time. The main operating system is basically a Hyper-V environment which runs the two other operating systems simultaneously in sort-of virtual machines. These operating systems can be layered with low latency, since all you are doing is compositing them in a different order.
Lastly, they made reference to Xbox Live, go figure. Microsoft is seriously increasing their server capacity and expects developers to utilize Azure infrastructure to offload "latency-insensitive" computation for games. While Microsoft promises that you can play games offline, this obviously does not apply to features (or whole games) which rely upon the back-end infrastructure.
And yes, I know you will all beat up on me if I do not mention the SimCity debacle. Maxis claimed that much of the game requires an online connection due to the complicated server requirements; after a crack allowed offline functionality, it was clear that the game mostly operates fine on a local client. How much will the Xbox Live cloud service offload? Who knows, but that is at least their official word.
Now to tie up some loose ends. The Xbox One will not be backwards compatible with Xbox 360 games, though that is no surprise. Microsoft also says they are allowing users to resell and lend games. That said, games will be installed and will not require the disc, from what I have heard. Apart from concerns about how much you can fit on a single 500GB drive, rumor has it that once a game is installed, loading it elsewhere (the rumor is even more unclear about whether "elsewhere" means accounts or machines) will require paying a fee to Microsoft. In other words? Basically not a used game.
Well, that about covers it. You can be sure we will add more as information comes forth. Comment away!
Subject: General Tech, Graphics Cards, Processors, Mobile | May 15, 2013 - 09:02 PM | Scott Michaud
Tagged: tegra 4, hp, tablets
Sentences containing the words "Hewlett-Packard" and "tablet" can end in a question mark, an exclamation mark, or, on occasion, a period. The gigantic multinational technology company tried to own a whole mobile operating system with its purchase of Palm, then abandoned those plans just as abruptly; the $99 liquidation of its $500 tablets was, go figure, so successful that they to some extent did it twice. The operating system was open sourced, and at some point LG swooped in and bought it, minus patents, for use in Smart TVs.
So how about that Android?
The floodgates are open on Tegra 4, with HP announcing their SlateBook x2 hybrid tablet just a single day after NVIDIA's SHIELD moved out of the projects (dropping the "Project" from its name). The SlateBook x2 uses the Tegra 4 processor to power Android 4.2.2 Jelly Bean along with the full Google experience, including the Google Play store. Along with Google Play, the SlateBook and its Tegra 4 processor are also admitted into TegraZone and NVIDIA's mobile gaming ecosystem.
As for the device itself, it is a 10.1" Android tablet which can dock into a keyboard for extended battery life, I/O ports, and, well, a hardware keyboard. You are able to attach this tablet to a TV via HDMI, alongside the typical USB 2.0 port, combo audio jack, and a full-sized SD card slot; which half of the device any given port lives on is anyone's guess, however. Wirelessly, you have WiFi a/b/g/n and some unspecified version of Bluetooth.
The raw specifications list follows:
NVIDIA Tegra 4 SoC
- ARM Cortex A15 quad core @ 1.8 GHz
- 72 "Core" GeForce GPU @ ~672MHz, 96 GFLOPS
- 2GB DDR3L RAM ("Starts at", maybe more upon customization?)
- 64GB eMMC SSD
- 1920x1200 10.1" touch-enabled IPS display
- HDMI output
- 1080p rear camera, 720p front camera with integrated microphone
- 802.11a/b/g/n + Bluetooth (4.0??)
- Combo audio jack, USB 2.0, SD Card reader
- Android 4.2.2 w/ Full Google and TegraZone experiences.
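The ~96 GFLOPS figure in the list above can be sanity-checked from the core count and clock, assuming each "core" retires one multiply-add (counted as two FLOPs) per cycle - a common counting convention, not anything HP or NVIDIA has spelled out for this device:

```python
# Peak-throughput estimate for Tegra 4's GPU from the listed specs:
# 72 shader cores at ~672 MHz, assuming one MADD (2 FLOPs) per core per clock.
cores = 72
clock_hz = 672e6
flops_per_core_per_clock = 2  # multiply-add counted as two operations

gflops = cores * clock_hz * flops_per_core_per_clock / 1e9
print(f"{gflops:.1f} GFLOPS")  # → 96.8 GFLOPS
```

Close enough to the quoted 96 GFLOPS that the spec sheet numbers hang together.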
If this excites you, then you only have to wait until some point in August; you will also, of course, need to wait until you save up about $479.99 plus tax and shipping.
Subject: General Tech, Graphics Cards, Processors, Systems, Mobile | May 14, 2013 - 03:54 PM | Scott Michaud
Tagged: haswell, nec
While we are not sure when it will be released or whether it will be available in North America, we have found a Haswell laptop. Actually, NEC will release two products in this lineup: a high end 1080p unit and a lower end 1366x768 model. Unfortunately, the article is in Japanese.
IPS displays have really wide viewing angles, even top and bottom.
NEC is known for their higher-end monitors; most people equate Dell's UltraSharp panels with professional photo and video production, but Dell's top-end offerings are often a tier below the best from companies like NEC and Eizo. The laptops we are discussing today both contain touch-enabled IPS panels with apparently double the contrast ratio of what NEC considers standard. While these may or may not be the tip-top NEC offerings, they should at least be decent screens.
Obviously the headliner for us is the introduction of Haswell. While we do not know exactly which product NEC decided to embed, we do know that they are relying upon it for their graphics performance. With the aforementioned higher-end displays, it seems likely that NEC is intending this device for the professional market. A price-tag of 190000 yen (just under $1900 USD) for the lower end and 200000 yen (just under $2000 USD) for the higher end further suggests this is their target demographic.
Clearly a Japanese model.
The professional market does not exactly have huge requirements for graphics performance, but to explicitly see NEC trust Intel for their GPU performance is an interesting twist. Intel HD 4000 has been nibbling away, to say the least, at discrete GPU marketshare in laptops. I would expect this laptop to contain one of the BGA-based parts, which are soldered onto the motherboard, for the added graphics performance.
As a final note, the higher-end model will also contain a draft 802.11ac antenna. It is expected that network performance could be up to 867 megabits as a result.
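That 867 megabit figure is not arbitrary; it is the standard PHY rate for a two-stream, 80 MHz 802.11ac link. NEC's spec sheet does not spell out the radio configuration, so the parameters below are the usual draft-11ac values rather than anything confirmed for this laptop:

```python
# Standard derivation of the 802.11ac "867 Mbps" headline rate:
# 80 MHz channel, 256-QAM, rate-5/6 coding, short guard interval, 2 streams.
data_subcarriers = 234      # data subcarriers in an 80 MHz channel
bits_per_subcarrier = 8     # 256-QAM
coding_rate = 5 / 6
symbol_time_s = 3.6e-6      # OFDM symbol with short guard interval
spatial_streams = 2

mbps = (data_subcarriers * bits_per_subcarrier * coding_rate
        / symbol_time_s) * spatial_streams / 1e6
print(f"{mbps:.1f} Mbps")  # → 866.7 Mbps, marketed as 867
```

As always, real-world throughput will land well below the PHY rate.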
Of course I could not get away without publishing the raw specifications:
LL850/MS (Price: 200000 yen):
- Fourth-generation Intel Core processor with onboard video
- 8GB DDR3 RAM
- 1TB HDD w/ 32GB SSD caching
- BDXL (100-128GB BluRay disc) drive
- IEEE 802.11ac WiFi adapter, Bluetooth 4.0
- SDXC, Gigabit Ethernet, HDMI, USB3.0, 2x2W stereo Yamaha speakers
- 1080p IPS display with touch support
- Office Home and Business 2013 preinstalled?
LL750/MS (Price: 190000 yen):
- Fourth-generation Intel Core processor with onboard video
- 8GB DDR3 RAM
- 1TB HDD (no SSD cache)
- (Optical disc support not mentioned)
- IEEE 802.11a/b/g/n WiFi adapter, Bluetooth 4.0
- SDXC, Gigabit Ethernet, HDMI, USB3.0, 2x2W stereo Yamaha speakers
- 1366x768 (IPS?) touch-enabled display
Subject: Editorial, General Tech, Cases and Cooling, Processors | May 10, 2013 - 04:23 PM | Scott Michaud
Tagged: c6, c7, haswell, PSU, corsair
I cannot do it, Captain! I don't have the power!
We have been discussing the ultra-low power states of Haswell processors for a little over a week, and how they could be detrimental to certain power supplies. Power supply manufacturers never quite expected a draw as low as 0.05 A (0.6 W) on the 12V rail while the system is still on. Since then, companies such as Enermax have started to list power supplies which have been tested and are compliant with the new power requirements.
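The math behind that figure is trivial but worth spelling out, since it is the whole crux of the compatibility question:

```python
# Haswell's C6/C7 states can pull the CPU's 12 V rail down to a
# 0.05 A minimum load. Power is simply P = V * I.
voltage = 12.0   # volts
current = 0.05   # amps

watts = voltage * current
print(f"{watts:.2f} W minimum load")  # → 0.60 W minimum load
```

Many older supplies specify a minimum 12 V load well above that, which is exactly why they can shut down or misbehave when a C7-capable system idles.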
| Series | Model | Haswell Ready? | Status |
|--------|--------|------|----------------------------------|
| AXi | AX1200i | Yes | 100% Compatible with Haswell CPUs |
| AXi | AX860i | Yes | 100% Compatible with Haswell CPUs |
| AXi | AX760i | Yes | 100% Compatible with Haswell CPUs |
| AX | AX1200 | Yes | 100% Compatible with Haswell CPUs |
| AX | AX860 | Yes | 100% Compatible with Haswell CPUs |
| AX | AX850 | Yes | 100% Compatible with Haswell CPUs |
| AX | AX760 | Yes | 100% Compatible with Haswell CPUs |
| AX | AX750 | Yes | 100% Compatible with Haswell CPUs |
| AX | AX650 | Yes | 100% Compatible with Haswell CPUs |
| HX | HX1050 | Yes | 100% Compatible with Haswell CPUs |
| HX | HX850 | Yes | 100% Compatible with Haswell CPUs |
| HX | HX750 | Yes | 100% Compatible with Haswell CPUs |
| HX | HX650 | Yes | 100% Compatible with Haswell CPUs |
| TX-M | TX850M | Yes | 100% Compatible with Haswell CPUs |
| TX-M | TX750M | Yes | 100% Compatible with Haswell CPUs |
| TX-M | TX650M | Yes | 100% Compatible with Haswell CPUs |
| TX | TX850 | Yes | 100% Compatible with Haswell CPUs |
| TX | TX750 | Yes | 100% Compatible with Haswell CPUs |
| TX | TX650 | Yes | 100% Compatible with Haswell CPUs |
| GS | GS800 | Yes | 100% Compatible with Haswell CPUs |
| GS | GS700 | Yes | 100% Compatible with Haswell CPUs |
| GS | GS600 | Yes | 100% Compatible with Haswell CPUs |
| CX-M | CX750M | Yes | 100% Compatible with Haswell CPUs |
| CX-M | CX600M | TBD | Likely compatible — currently validating |
| CX-M | CX500M | TBD | Likely compatible — currently validating |
| CX-M | CX430M | TBD | Likely compatible — currently validating |
| CX | CX750 | Yes | 100% Compatible with Haswell CPUs |
| CX | CX600 | TBD | Likely compatible — currently validating |
| CX | CX500 | TBD | Likely compatible — currently validating |
| CX | CX430 | TBD | Likely compatible — currently validating |
| VS | VS650 | TBD | Likely compatible — currently validating |
| VS | VS550 | TBD | Likely compatible — currently validating |
| VS | VS450 | TBD | Likely compatible — currently validating |
| VS | VS350 | TBD | Likely compatible — currently validating |
Above is Corsair's slightly incomplete chart as of the time it was copied from their website (3:30 PM on May 10th, 2013); so far everything is coming up compatible. Their blog should be updated as new products get validated for the new C6 and C7 CPU sleep states.
The best part of this story is just how odd it is given the race toward arc-welder-class supplies (it's not a podcast, so you can't Bingo! hahaha!) we have been experiencing over the last several years. Simply put, some companies never thought that component manufacturers such as Intel would race to the bottom of power draws.
Subject: Editorial, General Tech, Graphics Cards, Processors | May 8, 2013 - 09:32 PM | Scott Michaud
Tagged: Volcanic Islands, radeon, ps4, amd
So the Southern Islands might not be entirely stable throughout 2013 as we originally reported; seismic activity being analyzed suggests the eruption of a new GPU micro-architecture as early as Q4. These Volcanic Islands, as they have been codenamed, should explode onto the scene opposing NVIDIA's GeForce GTX 700-series products.
It is times like these where GPGPU-based seismic computation becomes useful.
The rumor is based upon a source which leaked a fragment of a slide outlining the processor in block diagram form and specifications of its alleged flagship chip, "Hawaii". Of primary note, Volcanic Islands is rumored to be organized with both Serial Processing Modules (SPMs) and a Parallel Compute Module (PCM).
So apparently a discrete GPU can have serial processing units embedded on it now.
Heterogeneous Systems Architecture (HSA) is a set of initiatives to bridge the gap between massively parallel workloads and branching logic tasks. We usually make reference to this in terms of APUs and bringing parallel-optimized hardware to the CPU. In this case, we are discussing it in terms of bringing serial processing to the discrete GPU. According to the diagram, the chip would contain 8 processor modules, each with two processing cores and an FPU, for a total of 16 cores. There does not seem to be any definite indication of whether these cores would be based upon AMD's license to produce x86 processors or its other license to produce ARM processors. Unlike an APU, this design is heavily skewed towards parallel computation rather than a relatively even balance between CPU, GPU, and chipset features.
Now of course, why would they do that? Graphics processors can do branching logic but it tends to sharply cut performance. With an architecture such as this, a programmer might be able to more efficiently switch between parallel and branching logic tasks without doing an expensive switch across the motherboard and PCIe bus between devices. Josh Walrath suggested a server containing these as essentially add-in card computers. For gamers, this might help out with workloads such as AI which is awkwardly split between branching logic and massively parallel visibility and path-finding tasks. Josh seems skeptical about this until HSA becomes further adopted, however.
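To make the branching-cost point concrete, here is a deliberately simplified toy model in Python (my own illustration, not anything from the leak): on a SIMD machine, a wavefront pays for every distinct branch path any of its lanes takes, so divergent code serializes.

```python
# Toy model of SIMD branch divergence: a wavefront must execute one pass
# per distinct branch path taken by any of its lanes.
def wavefront_cost(lane_branches, cost_per_path=1):
    """Passes needed to execute one wavefront: one per distinct path."""
    return len(set(lane_branches)) * cost_per_path

# 64 lanes all taking the same path: a single pass.
uniform = wavefront_cost(["a"] * 64)
# 64 lanes split evenly across 4 paths: four serialized passes.
divergent = wavefront_cost(["a", "b", "c", "d"] * 16)
print(uniform, divergent)  # → 1 4
```

A serial core pays no such penalty, which is the whole argument for parking a few of them next to the shader array instead of round-tripping over PCIe.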
Still, there is a reason why they are implementing this now. I wonder, if the SPMs are based upon simple x86 cores, how the PS4 will influence PC gaming. Technically, a Volcanic Islands GPU would be an oversized PS4 on an add-in card. This could give AMD an edge, particularly in games ported to the PC from the PlayStation.
This chip, Hawaii, is rumored to have the following specifications:
- 4096 stream processors
- 16 serial processor cores on 8 modules
- 4 geometry engines
- 256 TMUs
- 64 ROPs
- 512-bit GDDR5 memory interface, much like the PS4.
- 20 nm Gate-Last silicon fab process
- Unclear if TSMC or "Common Platform" (IBM/Samsung/GLOBALFOUNDRIES)
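Since the leak omits clock speeds, here is what the rumored shader count and bus width would imply at some purely hypothetical clocks (the 1 GHz core clock and 5 Gbps GDDR5 data rate below are placeholders I chose to show the arithmetic, not leaked values):

```python
# Theoretical throughput from the rumored Hawaii specs. Only the shader
# count and bus width come from the leak; the clocks are hypothetical.
stream_processors = 4096      # from the rumor
core_clock_ghz = 1.0          # hypothetical
bus_width_bits = 512          # from the rumor
gddr5_effective_gbps = 5.0    # hypothetical per-pin data rate

tflops = stream_processors * 2 * core_clock_ghz / 1000  # FMA = 2 ops/clock
bandwidth_gbs = bus_width_bits / 8 * gddr5_effective_gbps
print(f"{tflops:.1f} TFLOPS, {bandwidth_gbs:.0f} GB/s")  # → 8.2 TFLOPS, 320 GB/s
```

Even at conservative clocks, those numbers would be a substantial jump over the Southern Islands flagship.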
Softpedia is also reporting on this leak. Their addition claims that the GPU will be designed on a 20nm gate-last fabrication process. While gate-last is generally considered not worth the extra effort in production, Fully Depleted Silicon On Insulator (FD-SOI) is apparently "amazing" on gate-last at 28nm and smaller nodes. This could mean that AMD is eyeing that technology and making this design with the intent of switching to an FD-SOI process later, without the large redesign that starting on an initially easier gate-first process would require.
Well that is a lot to process... so I will leave you with an open question for our viewers: what do you think AMD has planned with this architecture, and what do you like and/or dislike about what your speculation would mean?