Subject: Processors | March 19, 2012 - 06:05 PM | Jeremy Hellstrom
Tagged: Ivy Bridge, sandy bridge, sandy bridge-e, i7-3770K (ES), i7-2600K, i7-3960x
VR-Zone took a processor from each of Intel's last three architectures, clocked them all to 4.7GHz and started benchmarking. By clocking them all the same you get to see a better comparison of the performance of the various architectures, although the motherboard chipset does introduce a variable into the performance results. As well, the Ivy Bridge Core i7-3770K is an engineering sample and so may not perfectly reflect the performance of the final retail product. Drop by to see how these chips compare in synthetic benchmarks.
"Intel's Core i7-3770K (ES) vs i7-2600K vs i7-3960X, nuff said! We have also included a brief USB 3.0 controller shootout inside, involving the new Z77 (Panther Point) Native USB implementation and other popular solutions."
Here are some more Processor articles from around the web:
- Intel Second Generation Core i7 3820 Review @ OCC
- Intel Xeon E5-2600 Sandy Bridge-EP Server Processors @ Legit Reviews
- Intel Core i7 2700K Review @ HCW
- Core i7 3820 @ Guru of 3D
- Intel Ivy Bridge: everything you need to know @ Techspot
- The Ivy Bridge Preview: Core i7 3770K Tested @ AnandTech
- Desktop CPU Comparison Guide @ TechARP
- AMD FX-8120 Bulldozer @ Rbmods
Subject: Processors | March 14, 2012 - 06:21 AM | Tim Verry
Tagged: RISC, embedded systems, cortex-m0+, cortex-m, arm, 32-bit
ARM has recently announced a new 32-bit processor for embedded systems that is one of the lowest-power designs yet. The new entrant in the Cortex-M lineup has been labeled the ARM Cortex-M0+. The chip features a full 32-bit RISC instruction set and is manufactured on the older, low-cost 90nm process.
The magic happens when you look at the power draw: according to ARM, the chip sips a mere 9µA (9 microamps) per megahertz (MHz). It can also run any code written for existing Cortex-M series processors, including the Cortex-M3 and Cortex-M4. The new Cortex-M0+ is intended for embedded systems and microcontroller applications controlling larger machinery.
There is no word yet on pricing or availability; however, support has been promised by the Keil Microcontroller Development Kit and third-party software from Code Red, Micrium, and SEGGER. Freescale and NXP Semiconductors have been named licensees of the technology thus far. NXP plans to replace existing 8-bit microcontrollers with the ARM Cortex-M0+ in devices such as UPS units, active cabling, and touchscreens. Freescale, on the other hand, plans to develop its own version of the Cortex-M0+ in the form of the Kinetis L series, and will use the low-power chip to run appliances, portable medical systems, and lighting, among other applications.
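To put ARM's 9µA/MHz figure in perspective, here is a back-of-the-envelope battery life estimate. The clock speed, duty cycle, and battery capacity below are hypothetical illustration values, not ARM specifications, and the sketch ignores sleep current and peripheral draw.

```python
# Rough battery life estimate from ARM's quoted 9 uA/MHz active current
# for the Cortex-M0+. All other figures are illustrative assumptions.

CURRENT_PER_MHZ_UA = 9        # ARM's quoted active current, microamps per MHz
clock_mhz = 48                # hypothetical operating frequency
duty_cycle = 0.01             # assume the core is awake 1% of the time
battery_mah = 225             # approximate capacity of a CR2032 coin cell

active_current_ma = CURRENT_PER_MHZ_UA * clock_mhz / 1000.0   # mA while awake
average_current_ma = active_current_ma * duty_cycle           # core current only
hours = battery_mah / average_current_ma

print(f"Active core draw: {active_current_ma:.3f} mA")
print(f"Estimated life: {hours / 24:.0f} days (core current only)")
```

Even with generous rounding, the arithmetic shows why a part like this targets coin-cell applications: the core itself accounts for years of runtime, leaving the budget to sleep modes and peripherals.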
Subject: General Tech, Graphics Cards, Processors, Mobile | March 8, 2012 - 04:02 AM | Scott Michaud
Tagged: ray tracing, tablet, tablets, knight's ferry, Intel
Intel looks to bring ray-tracing from their Many Integrated Core (Intel MIC) architecture to your tablet… by remotely streaming from a server loaded with one or more Knight’s Ferry cards.
Anticipation of ray-tracing has hung over almost the entire history of 3D gaming. Practical real-time ray-tracing is very seductive for games because it enables easier access to effects such as global illumination and reflections. Ray-tracing is well deserving of its buzzword status.
What Knight's Ferry delivered: near-linear scaling and a ray-traced Wolfenstein.
Screenshot from Intel Blogs.
Obviously Intel would love to make headway into the graphics market. In the past Intel has struggled to put forth an acceptable offering for graphics. It is my personal belief that Intel did not take graphics seriously when they were content selling cheap GPUs to be packed in with PCs. While the short term easy money flowed in, the industry slipped far enough ahead of them that they could not just easily pounce back into contention with a single huge R&D check.
Intel obviously cares about graphics now, and has been relentless at their research into the field. Their CPUs are far ahead of any competition in terms of serial performance -- and power consumption is getting plenty of attention itself.
Intel long ago acknowledged the importance of massively parallel computing, but was never quite able to bring products like Larrabee to market against what the companies it once ignored could field. This brings us back to ray-tracing: what is its ultimate advantage?
Ray-tracing is a dead simple algorithm.
A ray-trace renderer can be programmed simply and elegantly. Effects are often added directly, without much approximation. There is no hacking around the numerous caveats of graphics APIs just to get a functional render on screen. If you can keep throwing enough coal on the fire, it will burn without much effort, so to speak. Intel just needs to put a fast enough processor behind it, and away they go.
Throughout the article, Daniel Pohl discusses numerous enhancements made to their ray-tracing engine to improve performance. One of the most interesting is their approach to antialiasing: if the rays from two neighboring pixels strike different meshes, or strike the same mesh where the color changes sharply between pixels, those pixels are flagged for supersampling. Intel will also explore combining that shortcut with MLAA at some point.
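The heuristic described above can be sketched in a few lines. To be clear, the data layout and threshold here are our own illustration, not Intel's implementation; the idea is just that extra rays are only spent where neighboring primary rays disagree.

```python
# Sketch of the adaptive-antialiasing heuristic described above: a pixel
# pair is flagged for supersampling when the primary rays hit different
# meshes, or when the shaded color changes sharply between neighbors.
# The threshold and data layout are illustrative, not Intel's code.

COLOR_THRESHOLD = 0.25  # hypothetical per-channel edge threshold

def needs_supersampling(hit_a, hit_b):
    """hit_a / hit_b: (mesh_id, (r, g, b)) results for neighboring pixels."""
    mesh_a, color_a = hit_a
    mesh_b, color_b = hit_b
    if mesh_a != mesh_b:                      # silhouette edge between meshes
        return True
    # Sharp color change on the same mesh (e.g. a hard shadow or crease).
    return any(abs(ca - cb) > COLOR_THRESHOLD
               for ca, cb in zip(color_a, color_b))

# Same mesh, smooth shading: no extra rays needed.
print(needs_supersampling((1, (0.50, 0.5, 0.5)), (1, (0.52, 0.5, 0.5))))
# Different meshes: flag the pair for supersampling.
print(needs_supersampling((1, (0.50, 0.5, 0.5)), (2, (0.50, 0.5, 0.5))))
```

The appeal is that most of a frame is smooth, so the expensive supersampling rays are concentrated on the small fraction of pixels that actually sit on edges.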
A little behind-the-scenes trickery...
Screenshot from Intel Blogs.
Intel claims that they were able to achieve 20-30 FPS at 1024x600 resolution, streaming from a server with a single Knight's Ferry card installed to an Intel Atom-based tablet. With 8 Knight's Ferry cards installed, they were able to scale to within a couple percent of the theoretical 8x performance.
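As a quick illustration of what "within a couple percent of theoretical 8x" means: the single-card figure below is taken from the middle of the quoted 20-30 FPS range, while the 8-card figure is a hypothetical number chosen only to show the arithmetic.

```python
# Scaling-efficiency arithmetic for the 8-card claim. The measured
# 8-card FPS is a hypothetical illustration, not an Intel figure.

single_card_fps = 25.0   # midpoint of the quoted 20-30 FPS range
cards = 8
measured_fps = 196.0     # hypothetical multi-card measurement

ideal_fps = single_card_fps * cards          # perfect linear scaling
efficiency = measured_fps / ideal_fps        # fraction of ideal achieved
print(f"Ideal: {ideal_fps:.0f} FPS, efficiency: {efficiency:.0%}")
```

Ray-tracing tends to scale this well because rays are independent of one another, which is exactly the property Intel is leaning on.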
I very much dislike trusting my content to online streaming services as I am an art nut. I value the preservation of content which just is not possible if you are only able to access it through some remote third party -- can you guess my stance on DRM? That aside, I understand that Intel and others will regularly find ways to push content to where there just should not be enough computational horsepower to accept it.
Ray-tracing might be Intel’s attempt to circumvent all of the years of research that they ignored with conventional real-time rendering technologies. Either way, gaming engines are going the way of simpler rendering algorithms as GPUs become more generalized and less reliant on fixed-function hardware assigned to some arbitrary DirectX or OpenGL specification.
Intel just hopes that they can have a compelling product at that destination whenever the rest of the industry arrives.
Subject: Processors | March 6, 2012 - 03:54 PM | Jeremy Hellstrom
Tagged: xeon E5-2600, Sandy Bridge-EP, Romley, Grizzly Pass, Bighorn Peak
Somehow SemiAccurate got their hands on an Intel R2000 Romley system, featuring a pair of Xeon E5-2600s on an S2600GZ 2S board in a 2U rackmount case. The performance impressed them; they had to create artificial loads to even try to stress the machine. That is not all that is impressive about the new platform: since it is designed for servers, energy savings during low-usage periods are a key factor for administrators. Romley goes far beyond reducing frequency and power consumption when idle, offering 16 power-saving profiles that provide control far beyond what was previously possible. There are also large benefits to moving the PCIe controller onto the die, which you can read all about at SemiAccurate.
"To give you an idea on how good it is, SemiAccurate spent the last few weeks testing the Intel R2000 (Bighorn Peak) 2U platform based on the S2600GZ (Grizzly Pass) 2S Romley board, and it quickly became obvious we could not stress it with any real workload, only artificial workloads would make this beast sweat. I could not find a way to stress both the memory subsystem and the CPUs at once. To make matters worse, none of this touched the most important modern bottleneck, the network and I/O. Tests didn’t stress the platform evenly, what used to be system tests became subsystem tests, and were obviously the compute equivalent of makework."
Here are some more Processor articles from around the web:
- Intel Xeon E5-2670 vs Core i7-3960X @ The Inquirer
- The Xeon E5-2600: dual Sandybridge for Servers @ AnandTech
- Intel Core i7-3820 Extreme Edition CPU @ Benchmark Reviews
- Intel Sandy Bridge-E i7-3820 CPU Review @ Madshrimps
- Intel Core i7-3820 Sandy Bridge-E Review @ Hardware Canucks
- Intel Core i7-3820 Quad-Core @ SSD Review
- Intel Core i7-3820 Sandy Bridge-E Processor Review @ Hi Tech Legion
- CPU Performance Comparison Guide @ TechARP
Subject: Processors | March 6, 2012 - 02:26 PM | Jeremy Hellstrom
Tagged: xeon E5-2600, xeon e5, xeon, Sandy Bridge E, lga2011, Intel
SANTA CLARA, Calif., March 6, 2012 – Addressing the incredible growth of data traffic in the cloud, Intel Corporation announced the record-breaking Intel Xeon processor E5-2600 product family. These new processors deliver leadership performance, best data center performance per watt, breakthrough I/O innovation and trusted hardware security features to enable IT to scale. These processors are not only at the heart of servers and workstations, but will also power the next generation of storage and communication systems from leading vendors around the world.
Forecasts call for 15 billion connected devices and over 3 billion connected users by 2015. The amount of global data center IP traffic is forecast to grow by 33 percent annually through 2015, surpassing 4.8 zettabytes per year, more than 3 times the amount in 2011. At these levels, each connected user will generate more than 4GB of data traffic every day – the equivalent of a 4-hour HD movie. This will increase the amount of data that needs to be stored by almost 50 percent per year. In order to scale to meet this growth, the worldwide number of cloud servers is expected to more than triple by 2015.
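The press release's headline numbers roughly hang together, which is worth a quick sanity check. Assuming decimal units (1 ZB = 10^12 GB) and a 365-day year, 3 billion users at 4GB a day lands close to the quoted annual total:

```python
# Sanity check on the press release arithmetic: 3 billion users at
# roughly 4 GB/day each versus the quoted 4.8 zettabytes per year.
# Assumes decimal units (1 ZB = 1e12 GB) and a 365-day year.

users = 3e9
gb_per_user_per_day = 4.0
days_per_year = 365

total_gb_per_year = users * gb_per_user_per_day * days_per_year
zettabytes = total_gb_per_year / 1e12
print(f"{zettabytes:.2f} ZB/year")   # ~4.4 ZB, in the ballpark of 4.8 ZB
```

The small gap is presumably traffic not attributable to end users (replication, backups, machine-to-machine), so the figures are at least self-consistent.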
“The growth in cloud computing and connected devices is transforming the way businesses benefit from IT products and services,” said Diane Bryant, Intel vice president and general manager of the Datacenter and Connected Systems Group. “For businesses to capitalize on these innovations, the industry must address unprecedented demand for efficient, secure and high-performing datacenter infrastructure. The Intel Xeon processor E5-2600 product family is designed to address these challenges by offering unparalleled, balanced performance across compute, storage and network, while reducing operating costs.” The key requirements to enable IT to scale are performance, energy efficiency, I/O bandwidth and security. With the best combination of performance, built-in capabilities and cost-effectiveness, the new Intel Xeon processor E5-2600 product families are designed to address these requirements, and become the heart of the next-generation data center powering servers, storage and communication systems.
Leadership Performance with Best Data Center Performance per Watt
Supporting up to eight cores per processor and up to 768GB of system memory, the Intel Xeon processor E5-2600 product family increases performance by up to 80 percent compared to the previous-generation Intel Xeon processor 5600 series. The family also supports Intel Advanced Vector Extensions (Intel AVX), which increase performance on compute-intensive applications such as financial analysis, media content creation and high performance computing by up to 2 times.
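The "up to 2 times" AVX figure follows directly from register width: AVX widens the SIMD registers from SSE's 128 bits to 256 bits, doubling the number of floating-point values processed per instruction. The lane arithmetic is simple enough to show:

```python
# Lane-count arithmetic behind the "up to 2x" AVX claim: AVX registers
# are 256 bits wide versus 128 bits for SSE, so each instruction can
# operate on twice as many floats (actual speedup depends on the code).

SSE_BITS, AVX_BITS = 128, 256

for name, bits in (("float32", 32), ("float64", 64)):
    sse_lanes = SSE_BITS // bits
    avx_lanes = AVX_BITS // bits
    print(f"{name}: SSE {sse_lanes} lanes -> AVX {avx_lanes} lanes "
          f"({avx_lanes // sse_lanes}x)")
```

In practice only well-vectorized workloads approach the 2x ceiling, which is why the press release hedges with "up to."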
Additional built-in technologies such as Intel Turbo Boost Technology 2.0, Intel Hyper-Threading Technology and Intel Virtualization Technology provide IT with flexible capabilities to increase the performance of their infrastructure dynamically. These performance advances have led the Intel Xeon processor E5-2600 product family to capture 15 new dual-socket x86 world records.
Modern data centers must improve the raw performance they deliver, but also do so efficiently by reducing power consumption and operating costs. The Intel Xeon processor E5-2600 product family continues Intel's focus on reducing total cost of ownership by improving energy-efficient performance by more than 50 percent, as measured by SPECpower_ssj2008, compared to the previous-generation Intel Xeon processor 5600 series. These processors offer support for tools to monitor and control power usage such as Intel Node Manager and Intel Data Center Manager, which provide accurate, real-time power and thermal data to system management consoles. In addition, Intel's leadership performance allows IT managers to meet their growing demands while optimizing software license and capital costs.
I/O Innovation and Network Capabilities
With the unprecedented growth in data traffic it is essential that systems not only improve computational abilities, but also enable data to flow faster to support data-hungry applications and increase the bandwidth within the data center. The Intel Xeon processor E5-2600 product family meets these needs with Intel Integrated I/O (Intel IIO) and Intel Data Direct I/O (Intel DDIO). Intel DDIO allows Intel Ethernet controllers and adapters to route I/O traffic directly to processor cache, reducing trips to system memory and thereby lowering power consumption and I/O latency. The Intel Xeon processor E5-2600 product family is also the first server processor family to integrate an I/O controller supporting PCI Express 3.0 directly into the microprocessor. This integration reduces latency by up to 30 percent compared to prior generations, and with PCI Express 3.0 it can up to triple the movement of data into and out of the processor.
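A quick sketch of the per-lane arithmetic behind the PCIe claim may help. The signaling rates and encodings below are the published PCIe 2.0 and 3.0 figures, not Intel benchmark numbers: PCIe 3.0 raises the rate from 5 GT/s to 8 GT/s and swaps 8b/10b encoding (20% overhead) for 128b/130b (~1.5% overhead), nearly doubling usable throughput per lane.

```python
# Usable per-lane throughput for PCIe 2.0 vs 3.0, from the published
# transfer rates and line encodings of each generation.

def lane_throughput_gbs(transfer_rate_gt, payload_bits, line_bits):
    """Usable throughput per lane in GB/s after encoding overhead."""
    return transfer_rate_gt * (payload_bits / line_bits) / 8

gen2 = lane_throughput_gbs(5, 8, 10)      # 8b/10b encoding: 0.5 GB/s/lane
gen3 = lane_throughput_gbs(8, 128, 130)   # 128b/130b: ~0.985 GB/s/lane

print(f"PCIe 2.0: {gen2:.3f} GB/s/lane, PCIe 3.0: {gen3:.3f} GB/s/lane")
print(f"x16 link: {gen2 * 16:.1f} GB/s -> {gen3 * 16:.1f} GB/s")
```

The near-2x per-lane gain, combined with more lanes hanging directly off the processor instead of a chipset, is presumably how Intel arrives at the "up to triple" aggregate figure.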
The high-performance processing power, along with Intel Integrated I/O and advanced storage features such as PCIe non-transparent bridging and asynchronous DRAM refresh, also makes the Intel Xeon processor E5-2600 product family an ideal choice for storage and communications solutions.
Increasing bandwidth demands driven by server virtualization and data and storage network consolidation have led to strong growth in 10 Gigabit Ethernet deployments, with adapter port shipments exceeding 1 million units in each quarter of 2011. Today’s announcement of the Intel Ethernet Controller X540 demonstrates Intel’s commitment to driving 10 Gigabit Ethernet to the mainstream by reducing implementation costs. This industry-first single-chip 10GBASE-T solution is designed for low-cost, low-power LAN on motherboard (LOM) and includes flexible I/O Virtualization and Unified networking support at no additional cost.
The Intel Xeon processor E5-2600 product family reaffirms Intel's commitment to providing a more secure hardware foundation for today's data centers. Intel Advanced Encryption Standard New Instructions (Intel AES-NI) help systems quickly encrypt and decrypt data across a range of applications and transactions. Intel Trusted Execution Technology (Intel TXT) creates a trusted foundation that reduces the infrastructure's exposure to malicious attacks. These features, in partnership with leading software applications, will help IT protect their data centers against attack and scale to meet the demands of their customers.
Extensive Industry Support
Starting today, system manufacturers from around the world are expected to announce hundreds of Intel Xeon processor E5 family-based platforms. These manufacturers include Acer, Appro, Asus, Bull, Cisco, Dell, Fujitsu, HP, Hitachi, Huawei, IBM, Inspur, Lenovo, NEC, Oracle, Quanta, SGI, Sugon, Supermicro and Unisys.
Subject: Processors | March 1, 2012 - 03:42 PM | Jeremy Hellstrom
Tagged: Ivy Bridge, Intel
A sharp pair of eyes at Guru of 3D spotted a PDF on an Intel site that has since been taken down. While it is too bad we cannot give you the original PDF, Guru3D did post the pertinent information for those waiting patiently for Ivy Bridge to finally arrive.
As you can see, the TDPs are impressively low: desktop models range from 77W at the top end down to a 35W rating on the only dual-core desktop model, while mobile TDPs range from 17W to 35W, with more than half of those models being dual core. Also worth noting is the new graphics core, the HD 4000, which is available on only two of the Core i5 models; if you want the new core on a desktop, a Core i7 is the way to go. On the mobile side, all models are listed with the HD 4000, which might help Intel compete against AMD's Llano, since consumers will not have to investigate the Intel chip in their laptop to determine which level of graphics processor it has. Making the purchasing decision easier will go a long way toward giving peace of mind to consumers who only want to spend their money, not their time, before they buy.
Still no solid release date though.
Subject: Motherboards, Processors, Chipsets | February 29, 2012 - 12:03 PM | Ryan Shrout
Tagged: just delivered, z77a-gd65, Z77, sandy bridge, msi, Ivy Bridge, Intel
In preparation for Intel's 3rd-generation Core microprocessor architecture (can you see we are dancing around things already?), MSI has started showing a new line of motherboards. At CES in January we saw the Z77A-GD65, which will be available soon and offers some interesting new specs and features.
The Z77A-GD65 sports Military Class III components as well as support for a host of new items including PCI Express 3.0 and USB 3.0. While we can't share much more than that in terms of details I thought it might be worth showing off a few shots of the upcoming motherboard from our friends at MSI.
Subject: Processors | February 28, 2012 - 12:51 PM | Josh Walrath
Tagged: trinity, FX-8120, FX-6200, FX-4170, FX, FM3, bulldozer, amd, am3+
Since AMD held their Analysts’ Day, we have not heard a whole bunch from their CPU division. The graphics side has been in full gear launching the HD 7000 series of products, and soon we will see the final pieces of that particular puzzle fall into place. What about the CPU group? We have heard about Trinity for ages now, but that particular launch is still months away. The last CPU update detailed the “K” series of unlocked Llano chips. What about Bulldozer? Is there a new stepping? How is GLOBALFOUNDRIES’ 32 nm SOI/HKMG progressing?
I don’t have all those answers, unfortunately. Since AMD proceeded to sack most of the PR team, our contacts have all but disappeared. Questions emailed to AMD often go unanswered. Requests for CPU information (or samples) are ignored. Are these people simply overworked, or is AMD clamping down on information? Hard to say. My guess is that they are taking the philosophy of, “No news is good news.” If a company does not send out review samples, it does not have to deal with products receiving bad reviews. I am not saying that the FX processors are necessarily bad, but they do not match up well against Intel’s latest Sandy Bridge parts. At least AMD parts are priced appropriately for their level of performance. Looking at overall results, the FX-8150 matches up fairly well against the i5-2500K, and the two sit at very similar price points.
What we do know is that AMD has released two new processors into the market: the FX-4170 and the FX-6200. The FX-4170 is a new dual-module (four core) 125 watt TDP part clocked at an amazing 4.2 GHz stock, with a turbo that goes to 4.3 GHz. That makes it the fastest consumer-grade processor in terms of clock speed, though obviously not the fastest processor on Earth. The original FX-4100 is a 95 watt TDP part at 3.6 GHz stock/3.8 GHz turbo, with 4 MB of L2 cache and 8 MB of L3. The FX-6200 is perhaps the more interesting of the two. It has a base clock of 3.8 GHz and a max turbo speed of 4.1 GHz, a pretty hefty increase from the FX-6100 with its base 3.3 GHz and 3.9 GHz turbo. The 6100 is a 95 watt TDP part while the new 6200 is 125 watts. The 6200 is a three-module (six core) part with 6 MB of L2 cache and 8 MB of L3.
The last bit of news is that the FX-8120 is getting a price cut to put it more in line against the competition. The email that we received about this and the previous announcements was amazingly generic and fairly uninformative. We do not know the prices, we do not know the rollout schedule, and we have no idea how much the FX-8120 is going to be chopped. We have seen the retail market already cut the prices down on the FX-8xxx series. The high end FX-8150 was introduced around $289 but now it can be readily available for $259. Now that demand has dropped in the PC sector and AMD’s supply has caught up, it is no wonder we are seeing new SKUs and the lowering of prices.
My goal is to try to get a hold of some of these parts, as they do look interesting from a value standpoint. The FX-6200 is of great interest for many users due to the nice provisioning of cores, L3 caches, and speeds. Throw in a decent price for this particular product, and it could be a favorite for budget enthusiasts who want to stick with AMD products. The area where it does fall down is that of TDP when compared to Intel’s Sandy Bridge parts at that price point. The jump to 3.8 GHz base speed and 4.1 GHz turbo should make it very comparable in stock clocked performance to anything Intel has in that price range.
Overclocking could be interesting here, but since it is already a 125 watt TDP part I do not know how much headroom these products have. 4.8 GHz is very likely, but on air cooling I would not expect overclocked speeds to reach much above that. Still, these are interesting parts that offer plenty of bang for their price. Add in pretty mature support for AM3+ motherboards, and AMD still has a chance with enthusiasts. The only real issue looming is PCI-E 3.0 support for the AM3+ ecosystem. We have not heard anything about the upcoming (or is it cancelled?) 1090FX chipset, other than that it is based on the 890FX/990FX and should not support PCI-E 3.0. With AMD’s push for APUs, I would expect the upcoming Trinity parts to introduce PCI-E 3.0. AMD also looks ready to start funneling enthusiasts toward FM2 platforms and Trinity-based parts. While AMD plans to support AM3+ with Piledriver-based cores, my best guess is that AM3+ will be phased out sooner rather than later.
The next 6 months will be critical for AMD and their path moving forwards. At the very least we will have a better idea of where the company is going under the new management. I am still expecting some big changes from AMD, and if Trinity can give Intel a run for its money in terms of per clock CPU performance, then they could have a winner on their hands and adjust their roadmap to further exploit that particular product release.
Subject: Processors, Mobile | February 27, 2012 - 12:30 AM | Ryan Shrout
Tagged: tegra 3, quad-core, k3v2, k3, Huawei
Never heard of Huawei? Well, you will going forward. The Chinese telecommunications company, which claims 110,000 employees, 46% of them in R&D, is entering the market to compete against Apple, Samsung, Qualcomm, Texas Instruments, NVIDIA and others by building an ARM-based SoC for its own mobile devices.
Details are limited for now, though we expect to hear more as Mobile World Congress progresses, but here is what we know. The Huawei K3V2 CPU will be a quad-core Cortex-A9 part with "16 GPUs", though we don't have any reference for what is meant by "a GPU". The A9s will run at either 1.2 GHz or 1.5 GHz, and Huawei does mention that the chip will have a 64-bit memory controller, compared to the 32-bit controller on Tegra 3.
The company did have some performance claims that put the K3V2 ahead of the Galaxy Nexus (Exynos 3110) and ASUS Transformer Prime (Tegra 3). If you believe in marketing slides the new Huawei CPU will be about twice as fast in GPU performance and 49% faster in purely CPU-based tests while using 30% less power. Man, if we had a dollar for every time someone claimed these kinds of gains...
Hopefully we'll see some tests on this new SoC soon in the form of the Huawei Ascend D quad phone available this year.
Subject: Processors | February 26, 2012 - 10:38 PM | Ryan Shrout
Tagged: Intel, Ivy Bridge, delay
If you hadn't heard yet, last week we talked about a potential delay to the release of Intel's upcoming Ivy Bridge processor. Well pretty much everything we feared was "kind of" confirmed by Intel's Sean Maloney when he said:
“I think maybe it’s June now."
Huh. It gets worse, though, as Maloney apparently was "blaming the push back on the complexity of the new manufacturing process." That process in particular is the 22nm tri-gate technology that Intel has been touting as one of its biggest developments in recent years.
Is this completely altered now?
The EETimes story gets more specific with date quotes from Jim McGregor of In-Stat.
Jim McGregor of In-Stat told EE Times that according to his industry sources in Taiwan, Intel's Ivy Bridge server parts were only delayed from April 8 until April 29, though the dual core i5 and i7 parts for notebooks had been pushed out from a planned May 13th launch to June 3.
Last week we were hearing that Intel would still launch Ivy Bridge parts in April but wouldn't send out the mass shipments until June, and while that is still possible, that seems much less likely after hearing Maloney's words today.
And if you haven't had enough bad news for today, there is this comment that pretty much backs up my thoughts that I laid out in our 190th episode of the PC Perspective Podcast last week:
“It doesn’t really matter because there’s not really any compelling competition right now,” said one industry analyst on condition of anonymity, referring to AMD’s recent lag in the market.
AMD, we need you in our lives so badly. Please don't leave us here...alone...
Subject: Processors, Mobile | February 26, 2012 - 01:56 PM | Ryan Shrout
Tagged: tegra, Samsung, quad-core, MWC 12, MWC, exynos
While details are still sparse as we await the official start of Mobile World Congress in Barcelona tonight/tomorrow, it appears that Samsung plans to announce a new quad-core processor as part of its Exynos line. It will be the first Samsung SoC based on 32nm technology rather than the 45nm currently in production, and will be available in both quad- and dual-core variants.
According to the story over at Unwiredview, it will be available in frequencies ranging from 200 MHz all the way up to 1.5 GHz while offering lower power consumption than current options. I am curious how this actually stacks up, though, as we have seen that Tegra 3 doesn't REALLY offer lower power consumption and longer battery life even though that was a promise from NVIDIA. It can definitely offer lower power consumption per unit of performance, but in the end battery life is king for these mobile devices.
What about graphics performance? The story had this to say:
The new Exynos comes paired with the latest version of Samsung’s own graphics chip, which has 4 pixel processors and 1 geometry engine with 128 KB L2 cache. The graphics support OpenGL ES 2.0 and can generate up to 57 MPolygons/s.
Samsung claims that the new processor will offer 26% more performance compared to Exynos parts based on the 45nm process and I assume they are referring to dual-core vs dual-core results. Other claims include battery life improvements of "up to 50%" - we'd love to see it but we'll wait for actual devices to ship and showcase it before really getting excited.
The good news is that quad-core performance will be coming to more devices, and NVIDIA won't be the only SoC designer on the block offering it. The use cases for quad-core performance on a mobile device, phone or tablet, may still be in question, though we never doubt the software side of the equation will find ways to use as much horsepower as it is given.
Subject: General Tech, Processors, Mobile, Shows and Expos | February 25, 2012 - 07:06 PM | Scott Michaud
Tagged: texas instruments, MWC 12, arm, A9, A15
Texas Instruments could not wait until Mobile World Congress to start throwing punches. Despite recent financial problems that resulted in the closure of two fabrication plants, TI believes its product should speak for itself. The company recently released a video showing its dual-core OMAP5 processor, based on the ARM Cortex-A15, besting a quad-core ARM Cortex-A9 at rendering websites.
Chuck Norris joke.
On top of its two-core disadvantage, the 800 MHz OMAP5 processor was clocked roughly 40 percent slower than the 1.3 GHz Cortex-A9. The OMAP5 is said to be able to reach 2.5 GHz if necessary when released commercially.
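Normalizing for clocks and cores shows how large the per-clock gap would have to be for TI's result to hold. The arithmetic below uses only the quoted clock speeds and core counts, plus the illustrative (and generous) assumption that the workload scales perfectly across the A9's four cores:

```python
# Clock/core normalization of TI's demo. Assumes, purely for
# illustration, that the quad A9 gets perfect scaling from all 4 cores.

a15_mhz, a15_cores = 800, 2     # dual-core OMAP5 (Cortex-A15)
a9_mhz, a9_cores = 1300, 4      # quad-core Cortex-A9

clock_deficit = 1 - a15_mhz / a9_mhz
print(f"OMAP5 clock deficit: {clock_deficit:.0%}")   # ~38%, TI rounds to 40

# If the dual A15 merely ties the quad A9, its implied per-core,
# per-clock throughput advantage on this workload is at least:
advantage = (a9_mhz * a9_cores) / (a15_mhz * a15_cores)
print(f"Implied per-core, per-clock advantage: {advantage:.2f}x")
```

A 3.25x per-clock advantage is implausibly large for A15 over A9, which supports the suspicion below that something other than raw architecture (cores going unused, or the OS difference) is doing some of the work in this demo.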
Certain portions of the video did look a bit fishy, however. Firstly, CNet actually loaded quicker on the A9 processor, but it idled a bit before advancing to the second page. The A9 could have been stuck loading an object that the OMAP5 did not have an issue with, but it does seem a bit weird.
The fishiest part of the video is that the quad-core A9, which we assume to be a Tegra 3, is running Honeycomb while the OMAP5 is running Ice Cream Sandwich, which carries significant performance enhancements over Honeycomb.
We have no doubt that the ARM Cortex-A15 will be much improved over the current A9. The issue here is that TI cannot successfully prove that with this demonstration.
Subject: General Tech, Processors | February 22, 2012 - 06:01 PM | Scott Michaud
Tagged: amd, Cyclos, piledriver
AMD has its own announcements about power consumption for the International Solid-State Circuits Conference this week. A few days ago we reported on Intel’s success integrating Wi-Fi transceivers into the CPU to reduce power consumption. Cyclos Semiconductor discussed their resonant clock mesh (RCM) technology which reduces waste energy dissipated when keeping the chip synchronized. AMD announced that this technology would be introduced in their upcoming Piledriver APUs and Opteron processors.
Excuse me, good sir. Do you have the time?
Tom’s Hardware put up an article to discuss the announcement with a small explanation of what is going on.
Inductive-capacitive oscillators are leveraged in mesh-based high-performance clock distribution networks to deliver "high-precision timing while dissipating almost no power." In effect, RCM promises to recycle clock power to enable lower power consumption or higher clock speeds.
For a more specific explanation, I turned to Josh Walrath. Chips are timed by a clock signal -- any overclocker will attest to that. Over time chips became larger and more complex, which of course requires a larger and more complex network to propagate the clock signal. Slowly but surely those circuits became large enough that the energy they dissipate simply by being powered is no longer negligible.
What Cyclos contributes is the clever use of inductor-capacitor circuits to keep energy stored in the clock mesh itself. With more of the energy stored in the mesh, only a small energetic shove is needed to trigger each signal after the initial charge. Less energy lost also means less heat dissipated, which helps your battery as well as your heatsink.
Cyclos Semiconductor states that power savings are between 5 and 30 percent depending on the chip design. In AMD's case, they expect approximately 5 to 10 percent power savings in their Piledriver implementation. While AMD is the first to implement Cyclos' technology, it is not known what Intel has done, or will do, to address the same problem.
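The physics behind a resonant clock mesh is the familiar LC tank: an inductor-capacitor pair naturally "rings" at f = 1 / (2π√(LC)), so energy sloshes between the mesh capacitance and on-die inductors instead of being dumped each cycle. The component values below are hypothetical illustration values, not figures from Cyclos or AMD:

```python
# LC tank resonance, the principle behind a resonant clock mesh.
# The inductance and capacitance are hypothetical illustration values.

import math

def resonant_frequency_hz(inductance_h, capacitance_f):
    """Natural frequency of an ideal LC tank: 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(inductance_h * capacitance_f))

L = 0.5e-9    # 0.5 nH on-die spiral inductor (hypothetical)
C = 5.6e-12   # 5.6 pF effective mesh capacitance (hypothetical)

print(f"Tank resonates near {resonant_frequency_hz(L, C) / 1e9:.2f} GHz")
```

The design challenge, and presumably much of Cyclos' contribution, is tuning that resonance to sit at the chip's target clock frequency, since an off-resonance mesh stops saving power.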
Subject: General Tech, Processors, Systems, Mobile, Shows and Expos | February 20, 2012 - 01:50 AM | Scott Michaud
Tagged: Rosepoint, ISSCC 2012, ISSCC, Intel
If there is one thing that Intel is good at, it is writing a really big check to go in a new direction right when absolutely needed. Intel has released press information on what to expect from their presence at the International Solid-State Circuits Conference, which runs through the 23rd. The headliner for Intel at this event is their Rosepoint System on a Chip (SoC), which looks to lower power consumption by rethinking the RF transceiver and including it on the die itself. While the research has been underway for over a decade at this point, pressure from ARM has pushed Intel to, once again, throw money at R&D until their problems go away.
Intel could have easily trolled us all and have named this SoC "Centrino".
Almost ten years ago, AMD had Intel in a very difficult position. Intel fought to keep clock rates high until AMD changed their model-numbering scheme to give proper credit to their higher performance-per-clock parts, while Intel dominated, legally or otherwise, the lower end of the market with their Celeron line of processors.
AMD responded with a series of well-timed attacks against Intel: a jab to the face and a punch to the gut with the release of the Sempron processor line, alongside an antitrust filing against Intel intended to make it easier to sell AMD processors in mainstream PCs.
At around this time, Intel decided to pivot their product direction entirely and made plans to take the NetBurst architecture behind the shed. AMD has yet to recover from the tidal wave that the Core architectures crashed upon them.
Intel wishes to stop assaulting your battery indicator.
With the surge of ARM processors fundamentally designed for lower power consumption than Intel’s x86-based competition, things look bleak for Intel in the expanding mobile market. Leave it to Intel to, once again, simply cut a gigantic check.
Intel is in the process of cutting power wherever possible in their mobile offerings. To remain competitive with ARM, Intel is not above outside-the-box solutions including the integration of more power-hungry components directly into the main processor. Similar to NVIDIA’s recent integration of touchscreen hardware into their Tegra 3 SoC, Intel will push the traditionally very power-hungry Wi-Fi transceivers into the SoC and supposedly eliminate all analog portions of the component in the process.
I am not too knowledgeable about Wi-Fi transceivers, so I am not entirely sure how big a jump Intel has made, but it appears to be very significant. Intel is set to discuss this technology more closely during their Tuesday morning talk, titled “A 20dBm 2.4GHz Digital Outphasing Transmitter for WLAN Application in 32nm CMOS.”
This paper is about a WiFi-compliant (802.11g/n) transmitter using Intel’s 32nm process and techniques leveraging Intel transistors to achieve record performance (power consumption per transmitted data better than state-of-the art). These techniques are expected to yield even better results when moved to Intel’s 22nm process and beyond.
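The "outphasing" in the talk title refers to a classic trick (often called LINC) for building an amplitude-modulated signal out of two constant-envelope signals, which efficient switching amplifiers handle well. The sketch below is the textbook identity only, not Intel's implementation: summing two equal-amplitude carriers offset by ±θ yields an output whose envelope is controlled purely by θ.

```python
import math

def outphase(amplitude, wt, a_max=2.0):
    """Recreate amplitude 'a' (0..a_max) at carrier phase wt by summing
    two constant-envelope branches offset by +/- theta (textbook LINC):
    cos(wt + theta) + cos(wt - theta) == 2 * cos(theta) * cos(wt)."""
    theta = math.acos(amplitude / a_max)
    s1 = (a_max / 2) * math.cos(wt + theta)  # branch 1: constant envelope
    s2 = (a_max / 2) * math.cos(wt - theta)  # branch 2: constant envelope
    return s1 + s2

# The sum equals amplitude * cos(wt): the envelope is set purely by the
# phase difference, so each branch amplifier can run at constant power.
for a in (0.0, 0.7, 1.5, 2.0):
    wt = 0.3
    assert abs(outphase(a, wt) - a * math.cos(wt)) < 1e-12
print("two constant-envelope branches reproduce the modulated signal")
```

Because the envelope information lives entirely in the phase split, the whole transmit chain can be driven digitally, which is presumably the connection to the "digital outphasing" and all-digital-CMOS angle of the paper.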
What we do know is that the Rosepoint SoC will be manufactured at 32nm and is allegedly quite easy to scale down to smaller processes when necessary. Intel has also stated that while only Wi-Fi is currently supported, other frequencies including cellular bands could be developed in the future.
We will need to wait until later to see how this will affect the real world products, but either way -- this certainly is a testament to how much change a dollar can be broken into.
Subject: General Tech, Processors, Mobile | February 18, 2012 - 09:06 PM | Scott Michaud
Tagged: Intel, mobile, developer
Clay Breshears over at Intel posted about lazy software optimization on the Intel Software Blog. His post is a spiritual resurrection of Herb Sutter’s more than seven-year-old article, “The Free Lunch is Over: A Fundamental Turn Toward Concurrency in Software.” The argument is very similar, but the problem is quite different.
The original 2004 article urged developers to hop aboard the multi-core choo-choo express rather than hang around on the single-core platform (train or computing) waiting for performance to get better. The current post takes that same mentality and applies it to power efficiency: rather than waiting for hardware with appropriate power efficiency for your application, learn techniques to bring your application within your desired power budget.
"I believe your program is a little... processor heavy."
The meat of the article focuses on the development of mobile applications and the concerns that developers should have with battery conservation. Of course there is something to be said about Intel promoting mobile power efficiency. While developers could definitely increase the efficiency of their code, there is still a whole buffet of potential on the hardware side.
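One common example of such a technique, purely my own generic illustration and not taken from Breshears' post or Intel's portal, is replacing a polling loop with a blocking wait. A loop that spins checking for work keeps the CPU awake and burns battery even when idle; blocking on a queue lets the core drop into a low-power sleep state until there is actually something to do.

```python
import queue
import threading

# Anti-pattern (pseudocode): a busy poll wakes the CPU constantly.
#
#   while True:
#       item = try_get_work()   # spins, prevents deep sleep states
#       if item: handle(item)
#
# Better: block until work arrives, so the core can sleep in between.

work = queue.Queue()
results = []

def worker():
    while True:
        item = work.get()       # sleeps until an item is available
        if item is None:
            break               # sentinel value: shut down cleanly
        results.append(item * 2)

t = threading.Thread(target=worker)
t.start()
for n in (1, 2, 3):
    work.put(n)
work.put(None)                  # tell the worker to exit
t.join()
print(results)                  # [2, 4, 6]
```

The same principle generalizes to coalescing timers, batching network requests, and deferring background work, all of which keep the radio and CPU asleep longer between bursts of activity.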
If you are a developer, particularly of mobile or laptop applications, Intel has an education portal for power-efficiency best practices on their website. Be sure to check it out and pick up the tab once in a while, okay?
Subject: Processors | February 17, 2012 - 01:52 PM | Jeremy Hellstrom
Tagged: fx-8150, FX, cpu, bulldozer, amd, 990fx
AMD's $270 flagship processor, the 3.6GHz FX-8150, had a mixed reception, as the hype leading up to the release built our expectations to a point the processor could not live up to. Part of the disappointment has been blamed on the Windows 7 thread scheduler, which AMD described as not being optimized for their architecture and which led to the release of hotfixes KB2645594 and KB2646060. TechPowerUp revisited their benchmarks to see if these patches effectively increase performance in multi-threaded tasks; single-threaded tasks depend on processor speed alone, so they should be unaffected by the patches.
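The scheduling issue is easy to picture. Bulldozer pairs two integer cores per module, sharing a front end and FPU; a scheduler that treats all eight cores as interchangeable can pack two busy threads into one module while another module sits idle. The sketch below is my own toy model of module-aware placement, not Microsoft's actual hotfix logic: fill one core per module before doubling up.

```python
# Eight logical cores grouped into four Bulldozer-style modules; each
# pair shares a front end and FPU. Illustrative model, not the hotfix.
MODULES = [(0, 1), (2, 3), (4, 5), (6, 7)]

def place_threads(n_threads):
    """Assign threads to cores, using one core per module before
    doubling up, so lightly threaded workloads avoid contending for
    the shared resources inside a module."""
    first_cores = [m[0] for m in MODULES]
    second_cores = [m[1] for m in MODULES]
    return (first_cores + second_cores)[:n_threads]

# Four threads land on four different modules...
print(place_threads(4))   # [0, 2, 4, 6]
# ...instead of a naive 0,1,2,3 packing that would load only two
# modules and leave the other two (and their FPUs) idle.
```

A module-oblivious scheduler effectively halves the per-thread front-end and FPU resources for lightly threaded workloads, which is consistent with the hotfixes mattering most in the two-to-four-thread range.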
"After settling on the market, with all the quirks and bugs supposedly fixed, all the hype and disappointment blown away, we put AMD's FX-8150 under the scope. Benchmarks are done with and without the Windows 7 hotfix and in depth overclocking should resolve any doubts you have about AMD's flagship processor."
Here are some more Processor articles from around the web:
- AMD A8-3870K and Sapphire HD6450 FleX @ Kitguru
- The Opteron 6276: a closer look @ AnandTech
- AMD A8-3870K Unlocked Llano APU Review @ Hardware Canucks
- The Workstation & Server CPU Comparison Guide @ TechARP
- Intel Core i7-3820 vs. Core i7-2700K and Core i7-3930K @ X-bit Labs
- Intel Core i7-3820 3.6GHz Processor Review @ Legit Reviews
- Intel Core i7-3820 @ Techspot
- Intel Core i7-3930K @ OC3D
Subject: Processors | February 16, 2012 - 03:47 PM | Ryan Shrout
Tagged: Intel, Ivy Bridge, delay
Some unfortunate news is making the rounds today surrounding a potential delay of the upcoming Intel Ivy Bridge processor. A story over at Digitimes is reporting that due to an abundance of inventory on current generation Sandy Bridge parts, Intel will start to trickle out Ivy Bridge in early April but will hold off on the full shipments until after June.
If Intel is indeed delaying Ivy Bridge, it likely isn't due to pressure from AMD; given recent announcements from AMD's top brass, it seems likely Intel will retain the performance lead on the CPU side of things from here on out. With the release of Windows 8 coming in the fall of 2012, Intel's partners (and Intel internally) are likely to use that as the primary jumping-off point for the architecture transition.
If ever there was a reason to support AMD, and competition in general, this is it. Without pressure from a strong competing product, Intel is free to adjust its product schedule based on internal financial reasons rather than external consumer forces. While we will still see some Ivy Bridge availability in April (according to Digitimes, at least) in order to avoid a marketing disaster, it seems that wide-scale availability of the Intel design, with processor graphics performance expected to be double that of Sandy Bridge, won't come until the summer.
Subject: Processors | February 12, 2012 - 06:57 PM | Tim Verry
Tagged: shark bay, Intel, haswell, cpu
Intel's Ivy Bridge processor, the upcoming "tick" in Intel's clock-esque world domination strategy, has yet to be released and we are already getting rumors and leaked information coming in about the "tock" that will be Ivy Bridge's successor in the 22nm Haswell processors (as part of the Shark Bay platform). Ivy Bridge processors will bring incremental performance improvements and lower power usage on the same 1155 socket that Sandy Bridge employs.
Haswell, however, will move to (yet another) socket, LGA 1150, on the desktop, and will bring incremental improvements over Ivy Bridge, including much faster integrated processor graphics and the AVX2 instruction set. Unfortunately, Intel will be returning to a higher TDP (thermal design power) with Haswell after the drop in TDP from Sandy Bridge to Ivy Bridge.
According to DonanimHaber, which claims to have gotten its hands on a leaked road map, Intel will keep launching Ivy Bridge parts through the end of this year and will then debut its Haswell processors in the first half of 2013. The alleged road map can be seen below.
What I found interesting about the road map is that there is no mention of an Ivy Bridge-E or Haswell-E processor. Instead, the current Sandy Bridge-E chips are shown occupying the high-end and enthusiast segment through at least the first half of 2013 and the launch of Haswell. Whether enthusiasts will continue to choose the Sandy Bridge-E processors for that long remains to be seen, however. Also strange is that, according to VR-Zone, Intel will have three tiers of integrated graphics performance, GT1, GT2, and GT3, placing the fastest graphics core in the mobile chips and leaving the slower graphics cores to the desktop chips. Discrete cards are not dead yet, it seems (unless you're rocking an AMD APU, of course).
Have you invested in a Sandy Bridge-E setup, or are you still holding onto an older chip to wait for the best performance upgrade for your money? If you have bought into SB-E, do you think it'll last you into 2013?
Subject: Graphics Cards, Processors, Mobile | February 2, 2012 - 02:02 PM | Ryan Shrout
Tagged: amd, trinity, hsa, ultrabook, ultrathin
Today at the AMD Financial Analyst Day in Sunnyvale, Lisa Su, Senior Vice President and General Manager, Global Business Units, showed off a Compal reference design for an 18mm-thick ultrathin notebook that AMD is obviously hoping can compete with Intel's Ultrabook push.
The notebook is based on AMD's upcoming Trinity APU, which improves on the CPU and GPU performance of the currently available Llano APU. There weren't many details, though Su did state they were hoping for prices in the $600-800 range, which could put a lot of pressure on Intel.
Subject: Editorial, Graphics Cards, Processors | February 2, 2012 - 12:31 PM | Ryan Shrout
Tagged: reports, gpu, fad, cpu, APU, analyst, amd
Consider this fair warning: tomorrow here at PC Perspective you will learn the future of AMD. Sound overdramatic? We don't think so. After a pretty interesting 2011 for the company, AMD has said on several occasions that this year's Financial Analyst Day is going to reveal a lot about what the future holds on the GPU, CPU and APU fronts.
Hopefully we will learn what AMD plans to do after the cancellation of its second generation of ultra-low-power APUs, how important discrete graphics will be going forward, and what life there is for the processor architecture after Bulldozer.
We will be in Sunnyvale on the AMD campus covering the event, and we will be holding a live blog at the same time... right here. The event starts at 9am PST on February 2nd, so be sure to set your calendars and bookmark this page for all the news!