Subject: Processors, Mobile | February 27, 2012 - 12:30 AM | Ryan Shrout
Tagged: tegra 3, quad-core, k3v2, k3, Huawei
Never heard of Huawei? Well, you will going forward. The Chinese telecommunications company, which claims 110,000 employees, 46% of whom work in R&D, is entering the market to compete against Apple, Samsung, Qualcomm, Texas Instruments, NVIDIA and others by building an ARM-based SoC for its own mobile devices.
Details are limited for now, though we expect to hear more as Mobile World Congress progresses, but here is what we know. The Huawei K3V2 will be a quad-core Cortex-A9 part with "16 GPUs" - though we have no reference for what Huawei means by "a GPU". The A9 cores will run at either 1.2 GHz or 1.5 GHz, and Huawei does mention that the chip will have a 64-bit memory controller, as compared to the 32-bit controller on Tegra 3.
The company did have some performance claims that put the K3V2 ahead of the Galaxy Nexus (OMAP 4460) and the ASUS Transformer Prime (Tegra 3). If you believe the marketing slides, the new Huawei chip will be about twice as fast in GPU performance and 49% faster in purely CPU-based tests while using 30% less power. Man, if we had a dollar for every time someone claimed these kinds of gains...
Hopefully we'll see some tests on this new SoC soon in the form of the Huawei Ascend D quad phone available this year.
Subject: Processors | February 26, 2012 - 10:38 PM | Ryan Shrout
Tagged: Intel, Ivy Bridge, delay
If you hadn't heard yet, last week we talked about a potential delay to the release of Intel's upcoming Ivy Bridge processor. Well pretty much everything we feared was "kind of" confirmed by Intel's Sean Maloney when he said:
“I think maybe it’s June now."
Huh. It gets worse, though, as Maloney apparently was "blaming the push back on the complexity of the new manufacturing process." That process in particular is the 22nm tri-gate technology that Intel has been touting as one of its biggest developments in recent years.
Is this completely altered now??
The EETimes story gets more specific with date quotes from Jim McGregor of In-Stat.
Jim McGregor of In-Stat told EE Times that according to his industry sources in Taiwan, Intel's Ivy Bridge server parts were only delayed from April 8 until April 29, though the dual core i5 and i7 parts for notebooks had been pushed out from a planned May 13th launch to June 3.
Last week we were hearing that Intel would still launch Ivy Bridge parts in April but wouldn't send out the mass shipments until June, and while that is still possible, that seems much less likely after hearing Maloney's words today.
And if you haven't had enough bad news for today, there is this comment that pretty much backs up the thoughts I laid out in our 190th episode of the PC Perspective Podcast last week:
“It doesn’t really matter because there’s not really any compelling competition right now,” said one industry analyst on condition of anonymity, referring to AMD’s recent lag in the market.
AMD, we need you in our lives so badly. Please don't leave us here...alone...
Subject: Processors, Mobile | February 26, 2012 - 01:56 PM | Ryan Shrout
Tagged: tegra, Samsung, quad-core, MWC 12, MWC, exynos
While details are still sparse as we await the official start of Mobile World Congress in Barcelona tonight/tomorrow, it appears that Samsung plans to announce a new quad-core processor as part of its Exynos line. It will be the first Samsung SoC based on 32nm technology rather than the 45nm currently in production, and it will be available in both quad- and dual-core variants.
According to the story over at Unwiredview, it will be available in frequencies ranging from 200 MHz all the way up to 1.5 GHz while offering lower power consumption than current options. I am curious how this actually stacks up, though, as we have seen that Tegra 3 doesn't REALLY offer lower power consumption and longer battery life, even though that was a promise from NVIDIA. It can definitely offer less power consumption per unit of performance, but in the end battery life is king for these mobile devices.
What about graphics performance? The story had this to say:
The new Exynos comes paired with the latest version of Samsung’s own graphics chip, which has 4 pixel processors and 1 geometry engine with 128 KB L2 cache. The graphics support OpenGL ES 2.0 and can generate up to 57 MPolygons/s.
Samsung claims that the new processor will offer 26% more performance compared to Exynos parts based on the 45nm process and I assume they are referring to dual-core vs dual-core results. Other claims include battery life improvements of "up to 50%" - we'd love to see it but we'll wait for actual devices to ship and showcase it before really getting excited.
The good news is that quad-core performance will be coming to more devices, and NVIDIA won't be the only SoC designer on the block offering it. The use cases for quad-core performance on a mobile device, phone or tablet, may still be in question, though we never doubt the software side's ability to utilize as much horsepower as it is provided.
Subject: General Tech, Processors, Mobile, Shows and Expos | February 25, 2012 - 07:06 PM | Scott Michaud
Tagged: texas instruments, MWC 12, arm, A9, A15
Texas Instruments could not wait until Mobile World Congress to start throwing punches. Despite recent financial problems that resulted in the closure of two fabrication plants, TI believes that its product should speak for itself. Texas Instruments recently released a video showing its dual-core OMAP5 processor, based on the ARM Cortex-A15, besting a quad-core ARM Cortex-A9 in rendering websites.
Chuck Norris joke.
Even at a two-core disadvantage, the 800 MHz OMAP5 processor was clocked almost 40 percent slower than the 1.3 GHz Cortex-A9. The OMAP5 is said to be able to reach 2.5 GHz, if necessary, when released commercially.
Certain portions of the video did look a bit fishy, however. Firstly, CNet actually loaded more quickly on the A9 processor, but it idled a bit before advancing to the second page. The A9 could have been stuck loading an object that the OMAP5 did not have an issue with, but it does seem a bit odd.
The fishiest part of the video is that the quad-core A9, which we assume to be a Tegra 3, is running Honeycomb while the OMAP5 is running Ice Cream Sandwich, and Ice Cream Sandwich brought substantial performance improvements over Honeycomb.
We have no doubt that the ARM Cortex-A15 will be much improved over the current A9. The issue here is that TI cannot successfully prove that with this demonstration.
Subject: General Tech, Processors | February 22, 2012 - 06:01 PM | Scott Michaud
Tagged: amd, Cyclos, piledriver
AMD has its own announcements about power consumption for the International Solid-State Circuits Conference this week. A few days ago we reported on Intel’s success integrating Wi-Fi transceivers into the CPU to reduce power consumption. Cyclos Semiconductor discussed their resonant clock mesh (RCM) technology which reduces waste energy dissipated when keeping the chip synchronized. AMD announced that this technology would be introduced in their upcoming Piledriver APUs and Opteron processors.
Excuse me, good sir. Do you have the time?
Tom’s Hardware put up an article to discuss the announcement with a small explanation of what is going on.
Inductive-capacitive oscillators are leveraged in mesh-based high-performance clock distribution networks to deliver "high-precision timing while dissipating almost no power." In effect, RCM promises to recycle clock power to enable lower power consumption or higher clock speeds.
For a more specific explanation, I turned to Josh Walrath. Chips are timed by a clock signal -- any overclocker will attest to that. Over time, chips became larger and more complex, which of course requires a larger and more complex network to propagate the clock signal. Slowly but surely those circuits became large enough that the energy they dissipate simply by being powered became less and less negligible.
What Cyclos contributes is the clever use of inductor-capacitor circuits to keep energy stored in the clock mesh. With more of the energy stored in the mesh, only a small energetic shove is required to trigger each signal after the initial charge. Less energy lost also means less heat dissipated, which helps your battery as well as your heatsink.
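As a back-of-the-envelope illustration of the idea (every component value below is invented for the example -- these are not Cyclos or AMD figures), a conventional clock driver burns roughly C·V²·f charging and discharging the mesh capacitance every second, while an LC tank tuned to the clock frequency, f = 1/(2π√(LC)), lets much of that energy slosh back and forth instead of being thrown away:

```python
import math

def resonant_frequency(l_henries, c_farads):
    """Natural frequency of an ideal LC tank: f = 1 / (2 * pi * sqrt(L * C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(l_henries * c_farads))

def conventional_clock_power(c_farads, v_volts, f_hertz):
    """Power spent charging/discharging the mesh capacitance: P = C * V^2 * f."""
    return c_farads * v_volts ** 2 * f_hertz

def resonant_clock_power(c_farads, v_volts, f_hertz, recovery=0.75):
    """Same mesh, but the LC tank recycles a fraction of the energy each cycle.
    The recovery fraction here is purely illustrative."""
    return conventional_clock_power(c_farads, v_volts, f_hertz) * (1.0 - recovery)

# Illustrative numbers only: ~1 nF of total clock-mesh capacitance, a 1.0 V
# swing, and an inductance sized to resonate near a 4 GHz clock.
C = 1e-9
V = 1.0
L = 1.0 / ((2.0 * math.pi * 4e9) ** 2 * C)   # solve f = 1/(2*pi*sqrt(L*C)) for L

f = resonant_frequency(L, C)
print(f"resonant frequency: {f / 1e9:.2f} GHz")
print(f"conventional: {conventional_clock_power(C, V, f):.2f} W, "
      f"resonant: {resonant_clock_power(C, V, f):.2f} W")
```

The catch, and one reason this is a design trade-off, is that an LC tank is only efficient near its resonant frequency, so aggressive frequency scaling or overclocking complicates the picture.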
Cyclos Semiconductor states that power savings are between 5 and 30 percent depending on the chip design. In AMD's case, the company expects approximately 5 to 10 percent power savings in its Piledriver implementation. While AMD's is the first implementation of Cyclos' technology, it is not known what Intel has done, or will potentially do, to address the same problem.
Subject: General Tech, Processors, Systems, Mobile, Shows and Expos | February 20, 2012 - 01:50 AM | Scott Michaud
Tagged: Rosepoint, ISSCC 2012, ISSCC, Intel
If there is one thing that Intel is good at, it is writing a really big check to go in a new direction right when absolutely needed. Intel has released press information on what should be expected from their presence at the International Solid-State Circuits Conference which is currently in progress until the 23rd. The headliner for Intel at this event is their Rosepoint System on a Chip (SoC) which looks to lower power consumption by rethinking the RF transceiver and including it on the die itself. While the research has been underway for over a decade at this point, pressure from ARM has pushed Intel to, once again, throw money at R&D until their problems go away.
Intel could have easily trolled us all and have named this SoC "Centrino".
Almost ten years ago, AMD had Intel in a very difficult position. Intel fought to keep clock-rates high until AMD changed their numbering scheme to give proper credit to their higher performance-per-clock components. Intel dominated, legally or otherwise, the lower end market with their Celeron line of processors.
AMD responded with a series of well-timed attacks against Intel: it jabbed Intel in the face and punched them in the gut with the release of the Sempron processor line while also filing an anti-trust suit against Intel to allow AMD to more easily sell its processors in mainstream PCs.
At around this time, Intel decided to entirely pivot their product direction and made plans to take their Netburst architecture behind the shed. AMD has yet to recover from the tidal wave which the Core architectures crashed upon them.
Intel wishes to stop assaulting your battery indicator.
With the surge of ARM processors that have been fundamentally designed for lower power consumption than Intel's x86-based competition, things look bleak for Intel in the expanding mobile market. Leave it to Intel to, once again, simply cut a gigantic check.
Intel is in the process of cutting power wherever possible in their mobile offerings. To remain competitive with ARM, Intel is not above outside-the-box solutions including the integration of more power-hungry components directly into the main processor. Similar to NVIDIA’s recent integration of touchscreen hardware into their Tegra 3 SoC, Intel will push the traditionally very power-hungry Wi-Fi transceivers into the SoC and supposedly eliminate all analog portions of the component in the process.
I am not too knowledgeable about Wi-Fi transceivers, so I am not entirely sure how big of a jump Intel has made in their development, but it appears to be very significant. Intel is slated to discuss this technology in more detail during its talk on Tuesday morning titled "A 20dBm 2.4GHz Digital Outphasing Transmitter for WLAN Application in 32nm CMOS."
This paper is about a WiFi-compliant (802.11g/n) transmitter using Intel’s 32nm process and techniques leveraging Intel transistors to achieve record performance (power consumption per transmitted data better than state-of-the art). These techniques are expected to yield even better results when moved to Intel’s 22nm process and beyond.
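For the curious, "outphasing" (also known as LINC) is a decades-old radio trick: a variable-envelope signal, which would normally need an inefficient linear amplifier, is split into two constant-envelope signals that efficient switching amplifiers can handle, and the two branches are summed back together at the output. A minimal sketch of the underlying math (this illustrates the general technique only and reflects nothing about Intel's actual implementation):

```python
import cmath
import math

def outphase(symbol, branch_amp):
    """LINC / outphasing decomposition: split one variable-envelope phasor into
    two constant-envelope phasors of magnitude branch_amp whose sum is the
    original symbol."""
    amplitude = abs(symbol)
    if amplitude > 2 * branch_amp:
        raise ValueError("symbol amplitude exceeds what two branches can produce")
    phi = cmath.phase(symbol)
    theta = math.acos(amplitude / (2 * branch_amp))  # outphasing angle
    s1 = branch_amp * cmath.exp(1j * (phi + theta))
    s2 = branch_amp * cmath.exp(1j * (phi - theta))
    return s1, s2

# Any modulated symbol works; this is an arbitrary constellation point.
s = 0.7 + 0.4j
s1, s2 = outphase(s, branch_amp=1.0)
print(abs(s1), abs(s2))   # both branches have constant (unit) envelope
print(s1 + s2)            # the branch sum reconstructs the original symbol
```

The efficiency win comes from the fact that each branch amplifier only ever sees a constant-amplitude input; the hard part, per the paper's title, is doing the decomposition digitally at 2.4 GHz in 32nm CMOS.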
What we do know is that the Rosepoint SoC will be manufactured at 32nm and is allegedly quite easy to scale down to smaller processes when necessary. Intel has also stated that while only Wi-Fi is currently supported, other frequencies including cellular bands could be developed in the future.
We will need to wait until later to see how this will affect the real world products, but either way -- this certainly is a testament to how much change a dollar can be broken into.
Subject: General Tech, Processors, Mobile | February 18, 2012 - 09:06 PM | Scott Michaud
Tagged: Intel, mobile, developer
Clay Breshears over at Intel posted about lazy software optimization on the Intel Software Blog. His post is a spiritual resurrection of the more than seven-year-old article by Herb Sutter, "The Free Lunch is Over: A Fundamental Turn Toward Concurrency in Software." The content is very similar, but the problem is quite different.
The original 2004 article urged developers to hop aboard the multi-core choo-choo express and not hang around on the single-core platform (train or computing) waiting for performance to get better. The current article takes that same mentality and applies it to power efficiency: rather than waiting for hardware that has appropriate power efficiency for your application, learn techniques to bring your application within your desired power requirements.
"I believe your program is a little... processor heavy."
The meat of the article focuses on the development of mobile applications and the concerns that developers should have with battery conservation. Of course there is something to be said about Intel promoting mobile power efficiency. While developers could definitely increase the efficiency of their code, there is still a whole buffet of potential on the hardware side.
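A classic example of what such optimization looks like in practice -- a generic illustration, not something taken from Breshears' post -- is replacing fine-grained polling with batched (coalesced) wakeups, since every unnecessary CPU wakeup keeps the processor out of its deep sleep states:

```python
def wakeups_polling(duration_s, poll_interval_s):
    """Naive polling: the CPU wakes every interval whether or not there is work."""
    return round(duration_s / poll_interval_s)

def wakeups_batched(event_times_s, batch_window_s):
    """Timer coalescing: all events inside one window share a single wakeup."""
    wakeups = 0
    window_end = -1.0
    for t in sorted(event_times_s):
        if t > window_end:           # event falls outside the current batch window
            wakeups += 1             # one wakeup services the whole window
            window_end = t + batch_window_s
    return wakeups

# 60 seconds of runtime with an event arriving every 2 seconds on average.
events = [i * 2.0 for i in range(30)]
print(wakeups_polling(60, 0.1))      # 600 wakeups when polling every 100 ms
print(wakeups_batched(events, 10.0)) # far fewer wakeups with a 10 s window
```

Mobile platforms expose this idea directly through inexact or batched timers, so that deferrable work from many apps can share a single wakeup instead of each app keeping the radio and CPU awake on its own schedule.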
If you are a developer, particularly of mobile or laptop applications, Intel has an education portal for best power efficiency practices on their website. Be sure to check it out and pick up the tab once in a while, okay?
Subject: Processors | February 17, 2012 - 01:52 PM | Jeremy Hellstrom
Tagged: fx-8150, FX, cpu, bulldozer, amd, 990fx
AMD's $270 flagship processor, the 3.6GHz FX-8150, had a mixed reception, as the hype that led up to the release built our expectations to a point the processor could not live up to. Part of the disappointment has been blamed on the Windows 7 thread scheduler, which AMD described as not being optimized for its architecture and which led to the release of hotfixes KB2645594 and KB2646060. TechPowerUp revisited their benchmarks to see if these patches effectively increase the performance of multi-threaded tasks; single-threaded tasks are dependent on processor speed, so they should be unaffected by the patches.
"After settling on the market, with all the quirks and bugs supposedly fixed, all the hype and disappointment blown away, we put AMD's FX-8150 under the scope. Benchmarks are done with and without the Windows 7 hotfix and in depth overclocking should resolve any doubts you have about AMD's flagship processor."
Here are some more Processor articles from around the web:
- AMD A8-3870K and Sapphire HD6450 FleX @ Kitguru
- The Opteron 6276: a closer look @ AnandTech
- AMD A8-3870K Unlocked Llano APU Review @ Hardware Canucks
- The Workstation & Server CPU Comparison Guide @ TechARP
- Intel Core i7-3820 vs. Core i7-2700K and Core i7-3930K @ X-bit Labs
- Intel Core i7-3820 3.6GHz Processor Review @ Legit Reviews
- Intel Core i7-3820 @ Techspot
- Intel Core i7-3930K @ OC3D
Subject: Processors | February 16, 2012 - 03:47 PM | Ryan Shrout
Tagged: Intel, Ivy Bridge, delay
Some unfortunate news is making the rounds today surrounding a potential delay of the upcoming Intel Ivy Bridge processor. A story over at Digitimes is reporting that due to an abundance of inventory on current generation Sandy Bridge parts, Intel will start to trickle out Ivy Bridge in early April but will hold off on the full shipments until after June.
If Intel is indeed delaying Ivy Bridge, it likely isn't due to pressure from AMD; with the recent announcements from AMD's top brass, it seems likely Intel will retain the performance lead on the CPU side of things from here on out. With the release of Windows 8 coming in the fall of 2012, Intel's partners (and Intel internally) are likely going to use that as the primary jumping-off point for the architecture transition.
If ever there was a reason to support AMD and competition in general, this is exactly that. Without pressure from a strong offering from the opposition Intel is free to adjust their product schedule based on internal financial reasons rather than external consumer forces. While we will still see some Ivy Bridge availability in April (according to Digitimes at least) in order to avoid a marketing disaster, it seems that the wide scale availability of the Intel design with processor graphics performance expected to be double that of Sandy Bridge won't be until the summer.
Subject: Processors | February 12, 2012 - 06:57 PM | Tim Verry
Tagged: shark bay, Intel, haswell, cpu
Intel's Ivy Bridge processor, the upcoming "tick" in Intel's clock-esque world domination strategy, has yet to be released, and we are already getting rumors and leaked information about the "tock" that will succeed it: the 22nm Haswell processors (part of the Shark Bay platform). Ivy Bridge will bring incremental performance improvements and lower power usage on the same LGA 1155 socket that Sandy Bridge employs.
Haswell, however, will move to (yet another) socket, LGA 1150, on the desktop, and will bring incremental improvements over Ivy Bridge, including much faster integrated processor graphics and the AVX2 instruction set. Unfortunately, Intel will be returning to a higher TDP (thermal design power) with Haswell after the drop in TDP from Sandy Bridge to Ivy Bridge.
According to Domain Haber, who claims to have gotten their hands on a leaked road map, Intel will be launching Ivy Bridge through the end of this year, and then will debut their Haswell processors in the first half of 2013. The alleged road map can be seen below.
What I found interesting about the road map is that there is no mention of an Ivy Bridge-E or Haswell-E processor. Instead, the current Sandy Bridge-E chips are shown occupying the high-end and enthusiast segment through at least the first half of 2013 and the launch of Haswell. Whether enthusiasts will continue to choose the Sandy Bridge-E processors for that long remains to be seen, however. Also strange is that, according to VR-Zone, Intel will have three tiers of integrated graphics performance with GT1, GT2, and GT3, and will place the fastest graphics core in the mobile chips while leaving the slower graphics cores in the desktop chips. Discrete cards are not dead yet, it seems (unless you're rocking an AMD APU, of course).
Have you invested in a Sandy Bridge-E setup, or are you still holding onto an older chip to wait for the best performance upgrade for your money? If you have bought into SB-E, do you think it'll last you into 2013?