Subject: Processors | March 19, 2014 - 08:00 PM | Ryan Shrout
Tagged: tim, Intel, haswell, gdc 14, GDC, 9-series
An update to the existing Haswell 4th Generation Core processors will be hitting retail sometime in mid-2014 according to what Intel has just told us. This new version of the existing processors will include new CPU packaging and the oft-requested improved thermal interface material (TIM). Overclockers have frequently claimed that the changes Intel made to the TIM were limiting performance; it seems Intel has listened to the community and will be updating some parts accordingly.
Recent leaks have indicated we'll see modest frequency increases in some of the K-series parts, in the 100 MHz range. All Intel is saying today, though, is what you see on that slide. Overclocks should improve with the new thermal interface material, but by how much isn't yet known.
These new processors, under the platform code name of Devil's Canyon, will target the upcoming 9-series chipsets. When I asked about support for 8-series chipset users, Intel would only say that those motherboards "are not targeted" for the refreshed Haswell CPUs. I would not be surprised though to see some motherboard manufacturers attempt to find ways to integrate board support through BIOS/UEFI changes.
Though only slight refreshes, when we combine the Haswell Devil's Canyon release with the news about the X99 + Haswell-E, it appears that 2014 is shaping up to be pretty interesting for the enthusiast community!
Subject: General Tech, Processors, Chipsets | March 13, 2014 - 03:35 AM | Scott Michaud
Tagged: Intel, Haswell-E, X99
Though Ivy Bridge-E is not too distant of a memory, Haswell-E is on the horizon. The enthusiast version of Intel's architecture will come with a new motherboard chipset, the X99. (As an aside: what do you think its eventual successor will be called?) WCCFTech got their hands on details outlining the platform, some of which have been kicking around for a few months.
Image Credit: WCCFTech
First and foremost, Haswell-E (and X99) will support DDR4 memory. Its main benefits are increased bandwidth and a lower operating voltage (1.2 V, versus the 1.5 V DDR3 standard), which translates to lower power draw. The chipset will support four memory channels.
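As a rough sketch of why a lower voltage means lower wattage at a given current: power is voltage times current, so power scales linearly with the supply voltage. The 1.2 V and 1.5 V figures are the JEDEC standard voltages for DDR4 and DDR3; the current value below is an arbitrary placeholder for illustration, not a real DIMM measurement.

```python
# Back-of-the-envelope illustration of the DDR4 power claim:
# at a fixed current draw, P = V * I, so power scales with voltage.

DDR3_VOLTAGE = 1.5  # volts (JEDEC standard)
DDR4_VOLTAGE = 1.2  # volts (JEDEC standard)

def power_watts(voltage: float, current: float) -> float:
    """P = V * I for a fixed current draw."""
    return voltage * current

current = 2.0  # amps, placeholder value for illustration only
p_ddr3 = power_watts(DDR3_VOLTAGE, current)
p_ddr4 = power_watts(DDR4_VOLTAGE, current)
savings = (p_ddr3 - p_ddr4) / p_ddr3
print(f"DDR3: {p_ddr3:.1f} W, DDR4: {p_ddr4:.1f} W, savings: {savings:.0%}")
# The voltage drop alone works out to a 20% power reduction at equal current.
```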
Haswell-E will continue to have 40 PCIe lanes (the user's choice between five x8 slots or two x16 slots plus an x8 slot). This is the same number of total lanes as seen on Sandy Bridge-E and Ivy Bridge-E. While the new LGA 2011-3 socket is not compatible with LGA 2011, it does share that lane count.
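Both slot layouts described above exhaust the same 40-lane budget, which is easy to sanity-check:

```python
# Sanity check of the Haswell-E PCIe lane budget described above:
# both slot layouts must sum to the CPU's 40 available lanes.
TOTAL_LANES = 40
layout_a = [8] * 5        # five x8 slots
layout_b = [16, 16, 8]    # two x16 slots plus an x8 slot

assert sum(layout_a) == TOTAL_LANES
assert sum(layout_b) == TOTAL_LANES
print("both layouts fit the 40-lane budget")
```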
X99 does significantly increase the number of SATA ports, to ten SATA 6Gbps (up from two SATA 6Gbps and four SATA 3Gbps). Intel Rapid Storage Technology (RST), Smart Response Technology, and Rapid Recover Technology are also present and accounted for. The chipset also supports six native USB 3.0 ports and an additional eight USB 2.0 ones.
Intel Haswell-E and X99 are expected to launch sometime in Q3 2014.
Low Power and Low Price
Back at CES earlier this year, we came across a couple of interesting motherboards that were neither AM3+ nor FM2+. These small, sparse, and inexpensive boards were based on the then-unannounced AM1 platform. The socket is actually FS1b, which is typically reserved for mobile applications that require swappable APUs. The goal here is to provide a low cost, upgradeable platform for emerging markets where price is absolutely key.
AMD has not exactly been living on easy street for the past several years. Their CPU technologies have not been entirely competitive with Intel's, and CPUs are their bread and butter. Helping to prop the company up, though, is a very robust and competitive graphics unit. The standalone and integrated graphics technologies they offer are not only competitive, but also class leading in some cases. The integration of AMD's GCN architecture into APUs has been their crowning achievement as of late.
This is not to say that AMD is totally deficient in their CPU designs. Their low power/low cost designs that started with the Bobcat architecture all those years back have always been very competitive in terms of performance, price, and power consumption. The latest iteration is the Kabini APU based on the Jaguar core architecture paired with GCN graphics. Kabini will be the part going into the FS1b socket that powers the AM1 platform.
Kabini is a four core processor (Jaguar) with a 128 unit GCN graphics part (two GCN compute units). These APUs will be rated at 25 watts up and down the stack; even parts with half the cores will still be rated at 25 watts. AMD says that 25 watts is the sweet spot in terms of performance, cooling, and power consumption. Go lower than that and too much performance is sacrificed; go any higher and it would make more sense to go with a Trinity/Richland/Kaveri solution. That 25 watt figure also encompasses the primary I/O functionality that typically resides on a standalone motherboard chipset. Kabini features 2 SATA 6G ports, 2 USB 3.0 ports, and 8 USB 2.0 ports. It also features multiple PCI-E lanes as well as an x4 PCI-E connection for external graphics. The chip also supports DisplayPort, HDMI, and VGA outputs. This is a true SOC from AMD that does a whole lot of work for not a whole lot of power.
Subject: Processors | February 26, 2014 - 11:46 PM | Tim Verry
Tagged: SoC, Samsung, exynos 5, big.little, arm, 28nm
Samsung recently announced two new 32-bit Exynos 5 processors with the eight core Exynos 5 Octa 5422 and six core Exynos 5 Hexa 5260. Both SoCs utilize a combination of ARM Cortex-A7 and Cortex-A15 CPU cores along with ARM's Mali graphics. Unlike the previous Exynos 5 chips, the upcoming processors utilize a big.LITTLE configuration variant called big.LITTLE MP that allows all CPU cores to be used simultaneously. Samsung continues to use a 28nm process node, and the SoCs should be available for use in smartphones and tablets immediately.
The Samsung Exynos 5 Octa 5422 offers up eight CPU cores and an ARM Mali T628 MP6 GPU. The CPU configuration consists of four Cortex-A15 cores clocked at 2.1GHz and four Cortex-A7 cores clocked at 1.5GHz. Devices using this chip will be able to tap up to all eight cores at the same time for demanding workloads, allowing the device to complete the computations and return to a lower-power or sleep state sooner. Devices using previous generation Exynos chips were faced with an either-or scenario when it came to using the A15 or A7 groups of cores, but the big.LITTLE MP configuration opens up new possibilities.
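The distinction between the older cluster-switching model and big.LITTLE MP can be sketched as a toy scheduler. Everything here (core names, the demand thresholds) is invented for illustration; real heterogeneous schedulers, such as the Linux HMP patches of this era, are far more sophisticated.

```python
# Toy illustration of classic big.LITTLE cluster migration versus
# big.LITTLE MP (global task scheduling). Core names and thresholds
# are made up for illustration purposes only.

BIG_CORES = ["A15-0", "A15-1", "A15-2", "A15-3"]    # fast, power-hungry
LITTLE_CORES = ["A7-0", "A7-1", "A7-2", "A7-3"]     # slow, efficient

def cluster_migration(demand: float) -> list:
    """Old model: the whole workload runs on ONE cluster at a time."""
    return BIG_CORES if demand > 0.5 else LITTLE_CORES

def big_little_mp(demand: float) -> list:
    """big.LITTLE MP: heavy load can engage BOTH clusters simultaneously."""
    if demand > 0.9:
        return BIG_CORES + LITTLE_CORES   # all eight cores at once
    return BIG_CORES if demand > 0.5 else LITTLE_CORES

print(len(cluster_migration(0.95)))  # 4 -> big cluster only, A7s sit idle
print(len(big_little_mp(0.95)))      # 8 -> both clusters working together
```

The payoff in the MP model is exactly the scenario the article describes: a burst of demanding work can be spread across all eight cores, finished sooner, and the chip dropped back to the efficient cores or a sleep state.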
While the Octa 5422 occupies the new high end for the lineup, the Exynos 5 Hexa 5260 is a new midrange chip that is the first six core Exynos product. This chip uses an as-yet-unnamed ARM Mali GPU along with six ARM cores. The configuration on this SoC is four low power Cortex-A7 cores clocked at 1.3GHz paired with two Cortex-A15 cores clocked at 1.7GHz. Devices can use all six cores at once or schedule them more selectively. The Hexa 5260 offers up two higher powered cores for single threaded performance along with four power sipping cores for running background tasks and parallel workloads.
The new chips offer up access to more cores for more performance at the cost of higher power draw. While the additional cores may seem like overkill for checking email and surfing the web, the additional power can enable things like onboard voice recognition, machine vision, faster photo filtering and editing, and other parallel-friendly tasks. Notably, the GPU should be able to assist with some of this parallel processing, but GPGPU is still relatively new whereas developers have had much more time to familiarize themselves with and optimize applications for multiple CPU threads. Yes, the increasing number of cores lends itself well to marketing, but that does not preclude them from having real world performance benefits and application possibilities. As such, I'm interested to see what these chips can do and what developers are able to wring out of them.
Subject: Graphics Cards, Processors | February 26, 2014 - 07:18 PM | Ryan Shrout
Overclocking the memory and GPU clock speeds on an AMD APU can greatly improve gaming performance - it is known. With the new AMD A10-7850K in hand I decided to do a quick test and see how much we could improve average frame rates for mainstream gamers with only some minor tweaking of the motherboard BIOS.
Using some high-end G.Skill RipJaws DDR3-2400 memory, we were able to push memory speeds on the Kaveri APU up to 2400 MHz, a 50% increase over the stock 1600 MHz rate. We also increased the clock speed on the GPU portion of the A10-7850K from 720 MHz to 1028 MHz, a 42% boost. Interestingly, as you'll see in the video below, the memory speed had a MUCH more dramatic impact on our average frame rates in-game.
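The percentage figures above are straightforward to verify from the clock speeds quoted in the article; the helper below is just illustrative arithmetic:

```python
def percent_gain(stock: float, overclocked: float) -> float:
    """Percentage increase from a stock clock to an overclocked one."""
    return (overclocked - stock) / stock * 100

memory_gain = percent_gain(1600, 2400)   # DDR3 memory clock, MHz
gpu_gain = percent_gain(720, 1028)       # A10-7850K iGPU clock, MHz

print(f"Memory: +{memory_gain:.0f}%")    # +50%
print(f"GPU:    +{gpu_gain:.1f}%")       # +42.8%, which the article rounds to 42%
```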
In the three games we tested for this video (GRID 2, BioShock Infinite and Battlefield 4), the total performance gain ranged from 26% to 38%. Clearly that can make the AMD Kaveri APU an even more potent gaming platform if you are willing to shell out for the high speed memory.
| Settings | Stock | GPU OC | Memory OC | Total OC | Avg FPS Change |
| --- | --- | --- | --- | --- | --- |
| | 22.4 FPS | 23.7 FPS | 28.2 FPS | 29.1 FPS | +29% |
| High + 2xAA | 33.5 FPS | 36.3 FPS | 41.1 FPS | 42.3 FPS | +26% |
| | 30.1 FPS | 30.9 FPS | 40.2 FPS | 41.8 FPS | +38% |
Subject: General Tech, Graphics Cards, Processors | February 25, 2014 - 01:33 PM | Scott Michaud
Tagged: Ivy Bridge, Intel, iGPU, haswell
Recently, Intel released new drivers for their Ivy Bridge and Haswell integrated graphics. The download was apparently published on January 29th while its patch notes are dated February 22nd. It features expanded support for Intel Quick Sync Video Technology, allowing certain Pentium and Celeron-class processors to access the feature, as well as an alleged performance increase in OpenGL-based games. Probably the most famous OpenGL title of our time is Minecraft, although I do not know if that specific game will see improvements (and if so, by how much).
The new driver enables Quick Sync Video for the following processors:
- Pentium 3558U
- Pentium 3561Y
- Pentium G3220(Unsuffixed/T/TE)
- Pentium G3420(Unsuffixed/T)
- Pentium G3430
- Celeron 2957U
- Celeron 2961Y
- Celeron 2981U
- Celeron G1820(Unsuffixed/T/TE)
- Celeron G1830
Besides the additions for these processors and the OpenGL performance improvements, the driver also fixes several bugs in each of its supported OSes. You can download the appropriate drivers from the Intel Download Center.
Subject: Processors, Mobile | February 24, 2014 - 05:30 PM | Ryan Shrout
Tagged: snapdragon 615, snapdragon 610, snapdragon 410, snapdragon, qualcomm, MWC 14, MWC, adreno 405, adreno
Intel, Mediatek and Allwinner have all come out with new SoC announcements at Mobile World Congress and Qualcomm is no different. By far the most interesting release is what it calls the "first commercial" 64-bit Octa-Core chipset with integrated global LTE support. The list of features and technologies included on the chipset is impressive.
The Snapdragon 615 integrates 8 x ARM Cortex-A53 cores that operate on the newer 64-bit ARMv8 architecture while supporting 32-bit for backwards compatibility. Qualcomm is not using a custom designed CPU core for this chipset, but the company has stated it will have its own custom 64-bit core sometime in 2015. This 8-core model is divided into a pair of quad-core clusters that will be tuned to different clock speeds and power levels, offering the ability to run slightly more efficiently than would be possible with all cores tuned to the highest performance.
Snapdragon 610 is essentially the same design but is limited to a quad-core, single cluster setup.
Both of these parts will integrate the Qualcomm custom built Adreno 405 GPU that brings a DX11 class feature set, along with OpenGL ES 3.0 and OpenCL 1.2. The Adreno 405's performance is still unknown, but it should be able to compete with the likes of PowerVR's Series6 used in the Apple A7 and Intel Merrifield parts. Display resolutions up to 2560x1600 are supported, and Miracast integration enables wireless display. H.265 hardware decode acceleration also found its way into the 615/610.
Connectivity features of the Snapdragon 615/610 include 802.11ac wireless as well as the company's 3rd generation LTE modem. Category 4 and carrier aggregation are optional.
Qualcomm has publicly stated that moving to 8-core processors while software lacks the capability to manage them properly was a poor decision. But it would appear that the "core race" has infected just about everyone.
Subject: Processors, Mobile | February 24, 2014 - 04:00 AM | Ryan Shrout
Tagged: z3480, PowerVR, MWC 14, MWC, moorefield, merrifield, Intel, atom
Intel also announced an LTE-Advanced modem, the XMM 7260 at Mobile World Congress today.
Last May Intel shared with us details of its new Silvermont architecture, a complete revamp of the Atom brand with an out-of-order design and vastly improved performance per watt. In September we had our first real hands-on with a processor built around Silvermont, code named Bay Trail. The Atom Z37xx and Z36xx products were released and quickly found their way into products like the ASUS T100 convertible notebook. In fact, both the Bay Trail processor and the ASUS T100 took home honors in our end-of-year hardware recognitions.
Today at Mobile World Congress 2014, Intel is officially announcing the Atom Z35xx and Z34xx processors based on the same Silvermont architecture, code named Moorefield and Merrifield respectively. These new processors share the same power efficiency of Bay Trail and excellent performance but have a few changes to showcase.
Though there are many SKUs yet to be revealed for Merrifield and Moorefield, this comparison table gives you a quick idea of how the new Atom Z3480 compares to the previous generation, Atom Z2580 and Clover Trail+.
The Atom Z3480 is a dual core (single module) processor with a clock speed as high as 2.13 GHz. And even though it doesn't have HyperThreading support, the new architecture is definitely faster than the previous product. The cellular radio listed on this table is a separate chip, not integrated into the SoC - at least not yet. The PowerVR G6400 quad core/cluster graphics should present performance somewhere near that of the iPhone 5s, with support for OpenCL and RenderScript acceleration. Intel claims that this PowerVR architecture will give Merrifield a 2x performance advantage over the graphics system in Clover Trail+. A new image processor allows for 1080p60 video capture (vs 30 FPS before), and support for Android 4.4.2 is ready.
Most interestingly, the Merrifield and Moorefield SoCs do not use Intel's HD graphics technology and instead return to the world of Imagination Technology and the PowerVR IP. Specifically, the Merrifield chip, the smaller of the two new offerings from Intel, is using the PowerVR G6400 GPU; the same base technology that powers the A7 SoC from Apple in the iPhone 5s.
A comparison between the Merrifield and Moorefield SoCs reveals the differences between what will likely be targeted smartphone and tablet processors. The Moorefield part uses a pair of modules with a total of four cores, double that of Merrifield, and also includes a slightly higher performance PowerVR GPU option, the G6430.
Intel has provided some performance results of the new Atom Z3480 using a reference phone, though of course, with all vendor provided benchmarks, take them as an estimate until some third parties get a hold of this hardware for independent testing.
Looking at GFXBench 2.7, Intel estimates that Merrifield will run faster than the Apple A7 in the iPhone 5s and just slightly behind the Qualcomm Snapdragon 800 found in the Samsung Galaxy S4. Moorefield, the SoC that adds slightly to GPU performance and doubles the CPU core count, would improve performance to best the Qualcomm result.
WebXPRT is a web application benchmark and with it Intel's Atom Z3480 has the edge over both the Apple A7 and the Qualcomm S800. Intel also states that they can meet these performance claims while also offering better battery life than the Snapdragon S800 as well - interestingly the Apple A7 was left out of those metrics.
Finally, Intel did dive into the potential performance improvements that support for 64-bit technology will offer when Android finally implements it. While KitKat can run a 64-bit kernel, the user space is not yet supported, so benchmarking is a very complicated and limited process. Intel was able to find instances of 16-34% performance improvements from the move to 64-bit on Merrifield. We are still some time from 64-bit Android OS versions, but Intel claims they will have full support ready when Google makes the transition.
Both of these SoCs should be showing up in handsets and tablets by Q2. Intel did have design wins for Clover Trail+ in a couple of larger smartphones but the company has a lot more to prove to really make Silvermont a force in the mobile market.
Subject: Processors, Mobile | February 24, 2014 - 03:00 AM | Ryan Shrout
Tagged: wiko, Tegra 4i, tegra, nvidia, MWC 14, MWC
NVIDIA has been teasing the Tegra 4i for quite some time - the integration of a Tegra 4 SoC with the acquired NVIDIA i500 LTE modem technology. In truth, the Tegra 4i is a totally different processor than Tegra 4. While the big-boy Tegra 4 is a 4+1 Cortex-A15 chip with 72 GPU cores, the Tegra 4i is a 4+1 Cortex-A9 design with 60 GPU cores.
NVIDIA and up-and-coming European phone provider Wiko are announcing at Mobile World Congress the first Tegra 4i smartphone: Wax. That's right, the Wiko Wax.
Here is the full information from NVIDIA:
NVIDIA LTE Modem Makes Landfall in Europe, with Launch of Wiko Tegra 4i LTE Smartphone
Wiko Mobile, France’s fastest growing local phonemaker, has just launched Europe’s first Tegra 4i smartphone.
Tegra 4i – our first integrated LTE mobile processor – combines a 60-core GPU and our own LTE modem to bring up to 2X higher performance than competing phone chips.
It helps the Wiko WAX deliver fast web browsing, best-in-class gaming performance, smooth video playback and great battery life.
Launched at Mobile World Congress, in Barcelona, the Wiko WAX phone features a 4.7-inch 720p display, 8MP rear camera and LTE / HSPA+ support.
The phone will be available throughout Europe – including France, Spain, Portugal, Germany, Italy, UK and Belgium – starting in April.
Within two short years, Wiko has become a major player by providing unlocked phones with sophisticated design, outstanding performance and the newest technologies. It has more than two million users in France and is expanding overseas fast.
Wiko WAX comes pre-installed with TegraZone – NVIDIA’s free app that showcases the best games optimized for the Tegra processor.
As a refresher, Tegra 4i includes a quad-core CPU and fifth battery saver core, and a version of the NVIDIA i500 LTE modem optimized for integration.
The result is an extremely power efficient, compact, high-performance mobile processor that unleashes performance and capability usually only available in costly super phones.
Subject: Processors, Mobile | February 21, 2014 - 10:47 AM | Ryan Shrout
Tagged: wearables, wearable computing, quark, Intel, arm
In a post on the official ARM blog, the guns are blazing in the battle for wearable market mind share. Pretty much all the currently available wearable computing devices are using ARM-based processors, but that hasn't prevented Intel from touting its Quark platform as the best platform for wearables. There are still lots of questions about Quark when it comes to performance and power consumption, but ARM has decided to put its focus on heat.
From a blog post on ARM's website:
Intel’s Quark is an example that has a relatively low level of integration, but has still been positioned as a solution for wearables. Fine you may think, there are plenty of ARM powered communication chipsets it could be paired with, but a quick examination of the development board brings the applicability further into question. Quark runs at a rather surprising, and sizzling to the touch, 57°C. The one attribute it does offer is a cognitive awareness, not through any hardware integration suitable for the wearable market, but from the inbuilt thermal management hardware (complete with example code), which in the attached video you can see is being used to toggle a light switch once touched by a finger which, acting as a heat sync, drops the temperature below 50°C.
Along with this post is a YouTube video that shows this temperature testing taking place.
Of course, when looking at competitive analysis between companies, you should always take the results as tentative at best. There is likely to be some change between the Quark Arduino board (Galileo) integration of the X1000 and what would make it into a final production wearable device. Obviously this is something Intel is aware of as well, including what temperature means for devices that users will have such direct contact with.
The proof will be easy to see, either way, as we progress through 2014. Will device manufacturers integrate Quark into any final design wins, and what will the user experience of those units be like?
Still, it's always interesting to see marketing battles heat up between these types of computing giants.