Subject: Processors, Mobile, Shows and Expos | May 31, 2011 - 02:01 AM | Ryan Shrout
Tagged: ultrabook, Medfield, Ivy Bridge, Intel, haswell, computex, atom
With the release of the Intel Z68 chipset behind us by several weeks, Intel spent the opening keynote at Computex 2011 creating quite a buzz in the mobility section of the computing world. Intel’s Executive Vice President Sean Maloney took the stage on Tuesday and announced what Intel is calling a completely new category of mobile computer, the “Ultrabook”. A term coined by Intel directly, the Ultrabook will “marry the performance and capabilities of today’s laptops with tablet-like features and deliver a highly responsive and secure experience, in a thin, light and elegant design.”
If this photo looks familiar...see the similarity?
Intel is so confident in this new segment of the market, which will fall between the tablet and the notebook, that it is predicting the Ultrabook will represent 40% of Intel’s processor shipments by the end of 2012. That is an incredibly bold claim considering how massive and how dominant Intel is in the processor field. Intel plans to reach this 40% goal by addressing the Ultrabook market in three phases, the first of which begins with ultra-low-power versions of today’s Sandy Bridge processors. Using this technology, Maloney says, we will see notebooks less than 0.8 inches thick for under $1,000.
Make sure you "Read More" for the full story!!
NCSU Researchers Tweak Core Prefetching And Bandwidth Allocation to Boost Multi-Core Performance By 40%
Subject: Processors | May 27, 2011 - 11:26 AM | Tim Verry
Tagged: processor, multi-core, efficiency, bandwidth, algorithm
With the clock speed arms race now behind us, the world has turned to increases in the number of processor cores to boost performance. As more applications are becoming multi-threaded, CPU core increases have become even more important. In the consumer space, quad and hexa-core chips are rather popular in the enthusiast segment. On the server side, eight core chips provide extreme levels of performance.
Most multi-core processors give each CPU core access to its own cache (Intel’s current-generation chips actually have three levels of cache, with the third level shared between all cores; that specific caching system, however, is beyond the scope of this article). This cache is extremely fast and keeps the processing cores fed with data, which the processor then pushes through its assembly-line-esque instruction pipelines. The cache is populated with data through a method called “prefetching,” which pulls data for running applications from RAM using mathematical algorithms to predict what the processor is likely to need next.

Unfortunately, while these predictive algorithms are usually correct, they sometimes make mistakes, and the processor is left without the data it needs in cache and must look for it elsewhere. These instances, called stalls, can severely degrade core performance because the processor must reach out past the cache into system memory (RAM) or, worse, the even slower hard drive to find the data it needs. When the processor must reach beyond its on-die cache, it has to use the system bus to query the RAM for data. This processor-to-RAM bus, while faster than reading from a disk drive, is much slower than the cache. Further, processors have a limited amount of bandwidth between the CPU and the RAM, and as the number of cores increases, the amount of shared bandwidth each core has access to is greatly reduced.
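To get a feel for why stalls are so costly, here is a minimal back-of-envelope model of average memory access time; the latency figures are assumptions chosen for illustration, not measurements of any particular CPU:

```python
# Illustrative average-access-time model. The cycle counts below are
# assumed round numbers, not specs for any real processor.
CACHE_LATENCY_CYCLES = 4      # data already staged in on-die cache
RAM_LATENCY_CYCLES = 200      # stall: must cross the bus to system RAM

def avg_access_cycles(prefetch_hit_rate: float) -> float:
    """Average cycles per memory access, given the fraction of accesses
    the prefetcher successfully staged into the cache ahead of time."""
    miss_rate = 1.0 - prefetch_hit_rate
    return (prefetch_hit_rate * CACHE_LATENCY_CYCLES
            + miss_rate * RAM_LATENCY_CYCLES)

# Even a small drop in prefetch accuracy raises the average cost sharply.
print(avg_access_cycles(0.95))  # -> ~13.8 cycles
print(avg_access_cycles(0.90))  # -> ~23.6 cycles
```

The point of the sketch is the asymmetry: because a miss is roughly 50x more expensive than a hit in this model, a five-point drop in prefetch accuracy nearly doubles the average access cost.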
The layout of a current Sandy Bridge Intel processor. Note the Cache and Memory I/O.
A team of researchers at North Carolina State University has been studying these issues, which are inherent in multi-core processors. The research team is part of North Carolina State University’s Department of Electrical and Computer Engineering and includes Fang Liu and Yan Solihin, who were funded in part by the National Science Foundation. In a paper concluding their research, to be presented June 9th, 2011 at the International Conference on Measurement and Modeling of Computer Systems, they detail two methods for improving upon current bandwidth allocation and cache prefetching implementations.
Dr. Yan Solihin, associate professor and co-author of the paper, stated that certain processor cores require more bandwidth than others; therefore, by dynamically monitoring the type and amount of data being requested by each core, the available bandwidth can be prioritized on a per-core basis. Solihin further stated that “by better distributing the bandwidth to the appropriate cores, the criteria are able to maximize system performance.”
Further, they analyzed data from the processors’ hardware counters and constructed a set of criteria that seek to improve efficiency by dynamically turning prefetching on and off on a per-core basis, which frees up additional bandwidth for the cores that need it. By implementing both methods, the research team was able to improve multi-core performance by as much as 40 percent versus chips that do not prefetch data, and by 10 percent versus multi-core processors whose cores do prefetch data.
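The paper's exact criteria will not be public until June 9th, but the general idea of gating prefetching per core can be sketched as follows; the counter names and the accuracy threshold here are hypothetical, invented for illustration:

```python
# Hypothetical sketch of per-core prefetch gating: disable a core's
# prefetcher when most of its prefetched lines go unused, freeing shared
# memory bandwidth for cores whose prefetches are productive.
# The 0.4 threshold and counter names are invented for illustration;
# the NCSU paper's actual criteria differ.
def tune_prefetchers(cores, accuracy_floor=0.4):
    """cores: list of dicts holding hardware-counter-style statistics."""
    for core in cores:
        issued = core["prefetches_issued"]
        useful = core["prefetches_used"]
        accuracy = useful / issued if issued else 0.0
        core["prefetch_enabled"] = accuracy >= accuracy_floor

cores = [
    {"prefetches_issued": 1000, "prefetches_used": 850},  # accurate: keep on
    {"prefetches_issued": 1000, "prefetches_used": 120},  # wasteful: turn off
]
tune_prefetchers(cores)
print([c["prefetch_enabled"] for c in cores])  # -> [True, False]
```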
The researchers plan to detail their findings in a paper titled “Studying the Impact of Hardware Prefetching and Bandwidth Partitioning In Chip-Multiprocessors,” which will be publicly available on June 9th. The exact algorithms and criteria that they have determined will decrease the number of processor stalls and increase bandwidth efficiency will be extremely interesting to analyze. Further, it will be interesting to see if any of these improvements will be implemented by Intel or AMD in their future chips.
Subject: Processors | May 25, 2011 - 03:28 PM | Jeremy Hellstrom
Tagged: xeon, server, xeon x7, x7 4870, Intel
AnandTech got their hands on four of the brand new 32nm Intel Xeon E7-4870s, each with 10 cores clocked at 2.4GHz; perhaps a delayed 'Tick' but a tick nonetheless. Not only did they test the new chips, they also had a chance to pair them with Load Reduced DIMMs (LR-DIMMs) as opposed to the old Fully Buffered style (FB-DIMMs) we were used to in days gone by. That spells higher capacity, which is good considering the testbed they used can support up to 2TB of RAM to keep the four CPUs fed. This is a high-end server part, not really competing against AMD; a similar Opteron system would cost about half as much, with performance reduced by about the same margin. Check out this beast, but keep in mind a single CPU will set you back more than you paid for your whole system.
"Only one year later, Intel is upgrading the top Xeon by introducing Westmere-EX. Shrinking Intel's largest Xeon to 32nm allows it to be clocked slightly higher, get two extra cores, and add 6MB L3 cache. At the same time the chip is quite a bit smaller, which makes it cheaper to produce. Unfortunately, the customer does not really benefit from that fact, as the top Xeon became more expensive. Anyway, the Nehalem-EX was a popular chip, so it is no surprise that the improved version has persuaded 19 vendors to produce 60 different designs, ranging from two up to 256 sockets."
Here are some more Processor articles from around the web:
- Intel Core i5-2390T @ iXBT Labs
- Inexpensive AMD Processor Roundup @ iXBT Labs
- AMD Phenom II X4 980 BE 3.70 GHz @ techPowerUp
- AMD Phenom II X2 560 Black Edition AM3 Processor Review @ eTeknix
- Workstation & Server CPU Comparison Guide @ TechARP
- CPU Performance Comparison Guide @ TechARP
- Intel's Silvermont: A New Atom Architecture @ AnandTech
Recently, AMD launched two new AMD Embedded G-Series APUs (Accelerated Processing Units). The two new chips have TDP ratings of 5.5 and 6.4 watts, which represents a 39% improvement in power consumption over the previous iterations. The 361mm² chip package can be used in embedded systems without the need for a fan to cool it. The embedded chips include one or two low-power x86 Bobcat cores and a discrete-class DirectX 11 GPU on a single die.
AMD currently has three systems utilizing the new APUs, including a Pico-ITX form factor computer, a Qseven form factor computer, and a digital sign system. Buddy Broeker, the Director of Embedded Solutions for AMD stated that "today we take the ground-breaking AMD Fusion APU well below 7W TDP and shatter the accepted traditional threshold for across-the-board fanless enablement."
The two new chips are named the T40R and the T40E. Both run at 1.00GHz; however, the 6.4 watt TDP T40E is a dual-core chip while the 5.5 watt TDP T40R is a single-core variant. Both chips include an AMD Radeon 6250 GPU, a 64KB L1 cache, and a 512KB L2 cache per CPU core. Further, the chips feature an integrated DDR3 memory controller that can support up to 667MHz solder-down SODIMMs or two DIMM slots. More details on the series as a whole can be found here.
Mobile and embedded processors continue to get smaller and faster. Have you seen any AMD powered embedded technology in your town?
Subject: General Tech, Graphics Cards, Processors | May 22, 2011 - 11:04 PM | Scott Michaud
Tagged: fusion, amd, AFDS
In a little over three weeks’ time AMD will host the AMD Fusion Developer Summit 2011 (AFDS): a three-day conference aimed at promoting heterogeneous computing amongst developers. Over the years we have increasingly seen potential applications that use parts of your computer beyond the standard x86 core, though much of that push has come through NVIDIA’s brand. Building up to the summit, AMD’s DeveloperCentral talked with Lee Howes, parallel computing expert and Member of Technical Staff for Programming Models at AMD, about his upcoming session at AFDS.
I can't get over how much AFDS looks like a diagnosis.
In the short five-question interview, Dr. Howes outlined that the goal of his session is to show developers what to expect, good and bad, from developing for a heterogeneous architecture such as that of an APU. The rest of the interview was spent discussing how heterogeneous computing looks today and how it will eventually look. Topics spanned from the slow perceived uptake of parallel computing in the home to the technological limitations of traditional CPUs that APUs and other heterogeneous computing systems look to bypass.
While AFDS is (as its name suggests) a developer’s conference, it is very much worth the end user's attention. Supporting developers on newer computing architectures helps fuel the cycle of adoption between software and hardware, which ends up delivering a better experience for us. What tasks would you like to see accelerated by heterogeneous computing? Let us know in the comments below.
Subject: Processors, Chipsets, Systems | May 19, 2011 - 06:10 PM | Jeremy Hellstrom
Tagged: sapphire, ion 2, htpc
At an estimated $450, the Sapphire Edge HD mini PC, powered by a dual-core 1.66 GHz Atom D510 with ION 2 graphics, is a pretty good deal for those looking for a nettop. With only 250GB of storage you will probably want it connected to a large storage device over either Ethernet or USB, though with services like Google Music Beta, Wolfgang's Vault and YouTube that might not be a problem. From InsideHW's testing you certainly won't have to worry about videos skipping just because your email is open.
"A dual-core CPU that won’t be stricken down by several programs running at the same time, GeForce that chews on any video that you put in front of it, sufficient RAM to make Windows 7 jump around, complete support for all types of video/audio formats and subtitles, and all this for a price of a good Blu-ray player - what else could you wish for?"
Here are some more Systems articles from around the web:
- ECS HDC-I Mini-ITX Fusion Board Review @ Madshrimps
- Gigabyte GA-E350N-USB3 @ iXBT Labs
- ASUS E35M1-I DELUXE @ Tweaktown
- Sony VAIO VPC-L218FX Review @ TechReviewSource
Subject: Editorial, General Tech, Graphics Cards, Processors, Mobile | May 13, 2011 - 06:49 PM | Scott Michaud
Tagged: nvidia, conference call
NVIDIA held their quarterly conference call on May 12th, which consisted of financial results up to May 1st and questions from financial analysts and investors. NVIDIA chief executive officer Jen-Hsun Huang projected that future revenue from the GPU market would be “flattish”, revenue from the professional market would be “flattish”, and revenue from the consumer market would be “uppish”. Huang did mention that he believes the GPU market will grow in the future as GPUs become ever more prevalent.
How's the green giant doing this quarter? Read on for details.
For the professional market, NVIDIA discussed their intention to continue providing proof-of-concept applications that show the benefit of GPU acceleration, which they hope will spur development of GPU-accelerated code. Huang repeatedly mentioned that the professional market desires abilities like simultaneous simulation and visualization, and that a 10% code rewrite can increase performance by 500-1000%, but current uptake is not as fast as they would like. NVIDIA also hinted that GPUs will be pushed in the server space in the near future but did not clarify what that could mean. NVIDIA could simply be stating that Tesla will continue to be a focus for them; they could also be hinting at applications similar to what we have seen in recent open sourced projects.
For consumers, Huang made note of their presence in the Android market with support for Honeycomb 3.1 and the upcoming Ice Cream Sandwich. Questions were posed about the lackluster sales of Tegra tablets, but Huang responded that the first generation of tablets was undesirable largely due to the cost of 3G service. He went on to say that the second wave of tablets will be cheaper and more widely available in retail stores, with Wi-Fi only models more accessible to consumers.
nVihhhhhhhhhdia. (Image by Google)
The bulk of the conference call centered on NVIDIA’s purchase of Icera, though not a lot of details were released as the purchase has yet to be finalized. The main point of note is that, while NVIDIA could integrate Icera’s modems onto their Tegra mobile processors, they have no intention of doing so as of yet. They also stated they currently have no intention of jumping into other mobile chip markets, such as GPS and near-field communications, due to the lesser significance and greater number of competitors.
I think the new owners like the color on the logo.
The last point of note from the conference call was that they expect Project Denver, NVIDIA’s ARM-based processor, to be about two generations away from availability. They noted that they cannot comment for Microsoft, but they did reiterate their support for Windows 8 and its introduction of the ARM architecture. The general theme throughout the call was that NVIDIA is confident in its position as a player in the industry. If each of their projects works out as planned, it could be a well justified attitude.
Subject: Processors, Shows and Expos | May 12, 2011 - 12:22 PM | Jeremy Hellstrom
Tagged: VIA, Nano, quadcore, quad, centaur
VIA has been sitting pretty in a very specific piece of the computer market for quite a while, but is now being crowded by AMD and Intel, who are working their way down to the low-power market from their usual energy-gobbling silicon, and by ARM, which is sneaking its performance up from its traditional extremely-low-power market. That competition has spurred VIA to develop first the dual-core Nano X2 and now the QuadCore, which is a pair of X2's on a single package using what was described to The Tech Report as a "side channel" of wiring between them. Still on 40nm, it doesn't represent a completely new design for VIA, more a refinement of what they already produce. Check out their coverage as well as the write-up Josh has finished here.
"Early this year, Via introduced its Nano X2 processor, a dual-core implementation of its Isaiah architecture built on TSMC's 40-nm chip fabrication process. Today, Via is announcing a new product, the QuadCore processor, that combines a pair of Nano X2 chips on a single package to deliver a low-cost, low-power CPU whose position in the market is fairly distinctive.
We visited Via-Centaur's Austin, Texas offices yesterday, where we chatted with Centaur Chief Architect Glenn Henry and Via marketing head Richard Brown. We came away with some fresh details on the QuadCore processor and a better sense of Via's future plans as an intriguing third-place supplier of x86-compatible PC processors."
Here are some more Processor articles from around the web:
- VIA QuadCore Preview & Centaur Tour @ [H]ard|OCP
- VIA's QuadCore: Nano Gets Bigger @ AnandTech
- The Best Budget & Mainstream Processors @ Techspot
- Intel Core i7 990X Extreme Edition Processor Review @ Legit Reviews
- Desktop CPU Comparison Guide Rev. 10.3 @ Tech ARP
- AMD Phenom II X2 555 Black Edition AM3 Processor Review @ eTeknix
- AMD Phenom II X4 980 Black Edition Processor Review @Hi Tech Legion
Subject: Processors | May 12, 2011 - 08:21 AM | John Davis
Tagged: software, sdk, linux, Intel, developer
Intel has just released an update to their OpenCL (Open Computing Language) SDK (Software Development Kit). With this update Intel has released a 64-bit .rpm package for Linux; previously the SDK supported only Windows. OpenCL is a huge step toward the future of heterogeneous computing, and the future of computers. Intel joins a host of manufacturers that now support OpenCL, including AMD/ATI and NVIDIA.
OpenCL has competitors in the heterogeneous computing realm, including NVIDIA's CUDA and Microsoft's DirectCompute. All of this is one giant step forward for GPGPU. For the majority of computers that have dedicated GPUs, or that have an Intel processor with on-CPU graphics sitting unused, this is great news! Hopefully, future Linux distributions will integrate OpenCL the way OS X did with Snow Leopard.
Subject: General Tech, Processors | May 12, 2011 - 12:34 AM | Scott Michaud
Tagged: sandy bridge, celeron
Intel has made a splash with their Sandy Bridge parts; for being middle-range, they keep up with the higher end of the prior generation in many applications. We have heard rumors of new Atom-level parts from Intel deviating from the on-chip GPU structure that Sandy Bridge promotes. What about the next level? What about Celeron?
I'm guessing less than an i7.
Details were posted to CPU-World about Intel’s upcoming Sandy Bridge-based Celeron processors. There are three variants listed, each supporting Intel’s on-chip GPU. The G440 is a single-core part clocked at 1.6 GHz with a 650 MHz GPU, while the G530 and G540 are both dual-core parts clocked at 2.4 GHz and 2.5 GHz respectively, each with an 850 MHz GPU. The dual-core parts have a 2MB L3 cache; the article is inconsistent on whether the single-core part has 1 or 2 MB of L3 cache, but we will assume 1 MB based on its wording. While GPU performance differs between the single-core and dual-core parts, both GPUs will Turbo Boost to a maximum of 1GHz as need arises.
Functionally the chips will contain only the bare minimum of Sandy Bridge core features, like 64-bit and virtualization support. There are currently no further details on launch date and pricing. But if you are waiting to upgrade your lower-end devices, rest assured that Sandy B is there for you; at some point, at least.
Subject: Processors, Chipsets, Mobile | May 9, 2011 - 09:07 PM | Tim Verry
Tagged: PowerVR, Intel, gpu, atom
In a surprising move, Intel plans to move away from using its own graphics processors with the next "full fat" Atom processors. Intel has traditionally favored its own graphics chipsets; however, VR-Zone reports that Intel has extended its licensing agreements with PowerVR to include certain GPU architectures.
These GPU licenses will allow Intel to implement a PowerVR SGX545-equivalent graphics core in its Cedarview Atom chips. While the PowerVR graphics core is no match for dedicated GPUs, or likely even for that found in Intel's own Sandy Bridge "HD 3000" series, the hardware will allow Atom-powered systems to play video with ease thanks to hardware-accelerated decoding of "MPEG-2, MPEG-4 part 2, VC1, WMV9 and the all-important H.264 codec." VR-Zone describes the SGX545 GPU as capable of "40 million triangles/s and 1Gpixels/s using a 64-bit bus" at the chip's original 200MHz.
Intel plans to clock the mobile chips at 400MHz and the desktop graphics cores at 640MHz. The graphics cores will be capable of resolutions up to 1440x900 and will support VGA, HDMI 1.3a and DisplayPort 1.1 connections for video output. DirectX 10.1 support is also stated by VR-Zone to be included in the SGX545, which means that the net-top versions of Atom may be capable of running the Aero desktop smoothly.
This integration by Intel of a GPU capable of hardware video acceleration will certainly make NVIDIA's ION chipsets harder to justify for HTPC usage. ION chipsets will likely relinquish market share to cheaper stock Intel Atom platforms for basic home theater computers, but will remain viable in the more specific market that uses ION + Atom chips as light gaming platforms in the living room.
Subject: Processors | May 8, 2011 - 07:56 PM | Tim Verry
Tagged: sandy bridge, Eight Core, 2011
Although the item is no longer for sale--likely due to threatened legal action from Intel--an eight-core (socket 2011) Sandy Bridge chip briefly went on sale for $1359.99 on auction site eBay.
The chip reads "Intel Confidential Q19D ES 1.60GHz" and is alleged to be a preview chip for a socket 2011 Sandy Bridge processor capable of 8 cores/16 threads at 1.6GHz. AMD's Bulldozer chips rocking eight cores have been announced for some time and are planned to release in the first half of this year. Intel has been content fighting AMD's six-core chips with its hyper-threaded quad cores; however, as more prevalent and better optimized multi-threaded applications take advantage of AMD's higher number of physical cores, Intel may be planning to match AMD physical core for physical core in a bid to keep AMD at bay.
The new Intel chips should help to slow AMD's rise in server market share, while also trickling down the idea of (further optimized) multi-threaded applications to consumer markets. With that said, even though consumer level applications cannot currently fully utilize sixteen threads, the Tim Taylor-esque part of many hardware enthusiasts (especially folders) would love to have one anyway!
Subject: Processors, Mobile | May 6, 2011 - 07:11 PM | Ryan Shrout
Tagged: project denver, nvidia, macbook, Intel, arm, apple
A very interesting story over at AppleInsider has put the rumor out there that Apple may choose to ditch the Intel/x86 architecture altogether in some future notebooks. Instead, Apple may go the route of the ARM-based processor, likely similar to the A4 that Apple built for the iPhone and iPad.
What is holding back the move right now? Well for one, the 64-bit versions of these processors aren't available yet and Apple's software infrastructure is definitely dependent on that. By the end of 2012 or early in 2013 those ARM-based designs should be ready for the market and very little would stop Apple from making the move. Again, this is if the rumors are correct.
Another obstacle is performance - even the best ARM CPUs on the market fall woefully behind the performance of Intel's current crop of Sandy Bridge processors or even their Core 2 Duo options.
In addition to laptops, the report said that Apple would "presumably" be looking to move its desktop Macs to ARM architecture as well. It characterized the transition to Apple-made chips for its line of computers as a "done deal."
"Now you realize why Apple is desperately searching for fab capacity from Samsung, Global Foundries, and TSMC," the report said. "Intel doesn't know about this particular change of heart yet, which is why they are dropping all the hints about wanting Apple as a foundry customer. Once they realize Apple will be fabbing ARM chips at the expense of x86 parts, they may not be so eager to provide them wafers on advanced processes."
Even though Apple is already designing its own processors like the A4, there is the possibility that it could go with another ARM partner for higher performance designs. NVIDIA's push into the ARM market with Project Denver could be a potential option, as they are working very closely with ARM on design and performance improvements. Apple might just "borrow" those changes at NVIDIA's expense, however, and build its own option that would satisfy its needs exactly, without the dependence on third parties.
Migrating the notebook (and maybe desktop) market to ARM processors would allow the company to unify its operating system across classic "computer" designs and newer devices like iPads and iPhones. The idea of all of our computers turning into oversized iPhones doesn't sound appealing to me (nor, I imagine, to many of you), but with some changes to the interface it could become a workable option for many consumers.
With even Microsoft planning for an ARM-based version of Windows, it seems that x86 dominance in the processor market is being threatened without a doubt.
Subject: Graphics Cards, Processors | May 6, 2011 - 01:09 PM | Jeremy Hellstrom
Tagged: tri-fire, crossfire, sli, triple, sandybridge
Not too long ago [H]ard|OCP examined the price-to-performance ratio between a triple SLI GTX 580 system and a Tri-Fire HD 6990 plus HD 6970, and discovered that as far as value goes, NVIDIA could not touch AMD. A reader of theirs asked if it was the aging Core i7-920 that was holding the cards back, even with its 3.6GHz overclock. A Sandy Bridge system with a Core i7-2600K and an ASUS board with the NF200 bridge chip was used to revisit the performance of the two vendors' GPUs. The result? We can hardly wait for the Z68 boards to come out!
"We have re-tested performance between GTX 580 3-Way SLI and Radeon HD 6990+6970 Tri-Fire with a brand new Sandy Bridge 4.8GHz system. Our readers wanted to know if the CPU speed would improve performance and open up the potential of this triple-GPU performance beasts. To put it succinctly, they were right. The results completely turn the tables upside down and then some."
Here are some more Graphics Card articles from around the web:
- Triple Monitor Gaming: GeForce GTX 590 vs. Radeon HD 6990 @ TechSpot
- Sapphire Radeon HD 5850 and HD 5830 1GB Xtreme @ Tweaktown
- XFX HD Radeon 6790 Review @ OCC
- PowerColor HD 6950 Vortex II 2 GB @ techPowerUp
- HIS Radeon 6870 IceQX @ XSReviews
- HIS Radeon HD 6790 1GB IceQ X Turbo @ Tweaktown
- AMD Radeon HD 6670 1GB and HD 6570 512MB GDDR5 @ Hi Tech Legion
- AMD Radeon 6990 4GB Graphics Card Review @ eTeknix
- MSI R6950 Twin Frozr II/OC, MSI R6870 Hawk, MSI N560GTX-Ti Twin Frozr II @ iXBT Labs
- May 2011: Gallium3D vs. Classic Mesa vs. Catalyst @ Phoronix
- How to overclock a graphics card @ eTeknix
- i3DSpeed, April 2011 @ iXBT Labs
- MSI GTX560-Ti OC SLI @ OC3D
- Zotac GeForce GTX 560 Ti AMP! Edition 1GB Video Card Review @ ThinkComputers
- GIGABYTE GTX 580 Super Overclock @ OCAU
Subject: General Tech, Processors | May 4, 2011 - 01:55 PM | Tim Verry
Tagged: transistor, Intel
"After a decade of research, Intel has unveiled the world's first three-dimensional transistor," states Mark Bohr, a Senior Fellow at Intel. Silicon-based transistors in computers, mobile devices, vehicles, and embedded equipment have only existed in a planar, or two-dimensional, form until today.
The new three-dimensional transistor, dubbed "Tri-Gate," is now ready for high-volume production and will be included in Intel's new 22nm Ivy Bridge processors. This new Tri-Gate transistor is a huge deal for Intel, as it will enable them to maintain the pace of chip evolution outlined by Moore's Law. If you are not familiar with Moore's Law, it states that approximately every 18 months transistor density will double, bringing with it increases in performance and yield while decreasing the cost of production. Intel states that "It has become the basic business model for the semiconductor industry for more than 40 years."
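As a quick sanity check on that arithmetic, the 18-month formulation compounds like this (the starting density of 1.0 is an arbitrary normalization, not a real transistor count):

```python
# Transistor-density growth under the "roughly every 18 months" framing of
# Moore's Law. start=1.0 is a normalized baseline, not a real figure.
def density_after(months: int, start: float = 1.0,
                  doubling_months: int = 18) -> float:
    """Relative transistor density after the given number of months."""
    return start * 2 ** (months / doubling_months)

print(density_after(18))  # -> 2.0 (one doubling)
print(density_after(36))  # -> 4.0 (two doublings in three years)
```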
As processors become smaller and smaller, the electric current becomes more and more difficult to contain. There are hundreds of millions of minute connections and switches inside today's processors, and as manufacturing processes shrink, the amount of current leakage increases. Beginning with its 45nm Core 2 processors, Intel introduced a new "high-k" (high dielectric constant, a property describing how much charge a material can hold) metal gate transistor built with hafnium. The new material replaced the silicon dioxide gate dielectric of the transistor to combat the current leakage problem. This allowed the chip process to shrink while producing less current leakage and heat. To be more specific, Intel states that "because high-k gate dielectrics can be several times thicker, they reduce gate leakage by over 100 times. As a result, these devices run cooler."
Unfortunately, at the much smaller 22nm process, Intel was not achieving results congruent with Moore's Law even with its high-k gate transistors. In order to maintain the scaling Moore's Law predicts, Intel had to once again re-invent the transistor. To create a smaller manufacturing process while overcoming current leakage, Intel had to develop a way to use more of what little space was available, and it is here that they entered the third dimension. By designing a transistor that controls the electrical current on three sides instead of on a single plane, they are able to shrink the transistor while ending up with more surface area to "control the stream," as Mark Bohr puts it.
The proposed benefits of Tri-Gate lie in its ability to operate at lower voltages with higher energy efficiency, all while running cooler and faster than before: specifically, up to a 37 percent increase in performance at low voltages versus Intel's current line of 32nm processors. Intel further states that "the new transistors consume less than half the power when at the same performance as 2-D planar transistors on 32nm chips." This means that, at the performance level of the current crop of Intel CPUs, Ivy Bridge will be able to do the same calculations either using half the power of Sandy Bridge or nearly twice as fast at the same level of power consumption (it is unlikely to scale perfectly, as there is overhead and other elements of the chip will not be as radically revamped). If this sort of scaling holds for the majority of Ivy Bridge chips, the overclocking headroom and resulting performance should be unprecedented.
The use of Tri-Gate transistors is also mentioned as being beneficial for mobile and handheld devices as the power efficiency should allow increases in battery life. This is due to the chip running at decreased voltages while maintaining (at least) the same level of performance as current mobile chips. While Intel did not demo any mobile CPUs, they did state that Tri-Gate transistors may be integrated into future Atom chips.
Subject: Graphics Cards, Processors | May 3, 2011 - 10:37 PM | Ryan Shrout
Tagged: jpr, nvidia, gpus, amd, Intel
In a mixed report from Jon Peddie Research, information about the current state of the GPU world is coming into focus. Despite only 83 million PCs shipping in Q1 2011 (a 5.4% drop compared to Q4 2010), GPU shipments rose by 10.3%. While this no doubt means that, just as many in the industry have been predicting, the GPU is becoming more important to the processing and computing worlds, there are several factors to consider before taking this news as a win for the market as a whole.
First, these results include the GPUs found in Intel and AMD's combined CPU/GPU processors, such as the Sandy Bridge platforms, AMD's Fusion APUs and the more recent Intel Atom cores. If a notebook or desktop system then ships with a discrete solution from AMD or NVIDIA in addition to one of those processors, the report counts two GPUs shipped. Since ALL Sandy Bridge processors include an on-die GPU, we can assume that much of this rise is due to that double counting.
Subject: Processors | May 3, 2011 - 01:04 PM | Jeremy Hellstrom
Tagged: phenom ii, x4, amd, 980, 45nm
Remember the Phenom II, the CPU that was once incredible but is now the processor that those waiting for Llano are getting a little bored with? It has a new flagship model in the X4 series, the X4 980, which runs at 3.7GHz and sports the same 45nm process we have come to know so well. Read on to see if Josh could find anything about this last of the 45nm parts that will knock the dust off his case, or if we are looking at more of the same, if slightly faster, silicon.
"In the end, this is a simple 100 MHz increase in clockspeed for AMD for their high end quad core processor. It is not all that much faster than the previous X4 975, but at least it does not consume all that much more power than the previous model. It is a good all-around performer, and would make a solid foundation for a productivity and gaming machine for most users. Invariably though, most eyes are drawn to the horizon and the promise of Llano and Bulldozer. Hopefully for AMD these next generation processors will allow them to more adequately compete with Intel when it comes to raw performance."
Here are some more Processor articles from around the web:
- AMD's Phenom II X4 980 Black Edition processor @ The Tech Report
- AMD Phenom II X4 980 Black Edition @ AnandTech
- AMD Phenom II X4 980 Black Edition @ Techware Labs
- AMD Phenom II X4 980 Black Edition @ Bjorn3D
- AMD Phenom II X4 980 Black Edition Review @ Neoseeker
- AMD Phenom-II X4-980 BE Processor @ Benchmark Reviews
- AMD Phenom II X4 980 Processor Review @ OCC
- AMD Phenom II x4 980 Black Edition @ Overclockers.com
- AMD Phenom II X4 980 Black Edition Processor Review @ HardwareHeaven
- AMD Phenom II X4 980 Black Edition Processor Review @ Hardware Canucks
- AMD Phenom II X4 980 Black Edition Quad-Core CPU Review @ Legit Reviews
- AMD Phenom II X4 980 Black Edition @ Legion Hardware
- Sandy Bridge and Lynnfield Quad-Core Processors Compared @ iXBT Labs
Subject: Graphics Cards, Processors | May 2, 2011 - 07:59 PM | Scott Michaud
Tagged: llano, fusion, amd
On Valentine’s Day, AMD reached out to us after our relationship with Intel’s Sandy B. broke down. A mug, some chocolate, and the promise of a wonderful date with their good friend Llano were AMD’s hope to help us move on to a more stable relationship. Months have gone by and we have made up with Sandy, with many a great SATAday spent together. While Llano has yet to appear, AMD did urge us to keep waiting by revealing some of her measurements and an option for another playful partner.
Image from Donanim Haber
Llano’s GPU, as reported by Donanim Haber (translated to English), will feature 400 stream processors clocked at 594 MHz. TechPowerUp also reports that it will be DirectX 11 compatible, as expected, and can pair with one of AMD’s “Turks” based discrete GPUs: the HD 6570 and HD 6670. The combined GPUs will register with the system as a Radeon HD 6690 using Hybrid CrossFireX.
Just under two weeks ago we reviewed the aforementioned "Turks" based HD 6670 and 6570 with games like Left 4 Dead 2. On their own, those cards were able to play many games with antialiasing at a 1680x1050 monitor resolution. Llano will not perform as well as those cards but should be able to handle those same games, and others, with just a few settings reduced. That said, Llano is not a discrete card, so it is not necessarily fair to compare it with one. Lastly, Llano can also be paired with those cards for further performance benefits, making them all the more enticing for gamers not wishing to purchase higher-end discrete graphics cards.
Subject: Processors | April 28, 2011 - 11:41 AM | Joe Kelly
Tagged: arm, amd
"AMD (NYSE: AMD) today announced a distinguished line-up of keynote speakers as well as technical session topics for the inaugural AMD Fusion Developer Summit (AFDS), which will be held June 13-16, 2011 at the Meydenbauer Center in Bellevue, Washington.
Industry keynote presentations will be delivered by esteemed industry experts from AMD, ARM and Microsoft. In his keynote “Heterogeneous Parallelism at Microsoft” Herb Sutter, Microsoft principal architect of Native Languages, will showcase upcoming innovations to bring access to increasingly heterogeneous compute resources directly into the world’s most popular native languages.
Jem Davies, ARM fellow and vice president of Technology, Media Processing Division, will deliver a keynote about ARM’s long history of heterogeneous computing, its future strategy, and ARM’s support of standards, including OpenCL™."
Could AMD be developing ARM based products for tablets, netbooks, servers and smartphones? There will be a version of Windows 8 that is ARM compatible. AMD may be working on a version of Fusion that uses ARM instead of x86.
It would make sense for AMD to license the ARM architecture if it could compete with NVIDIA’s Tegra and Qualcomm’s Snapdragon processors. Both AMD and NVIDIA have proven track records in graphics, so would that give AMD an advantage from the start? Can AMD produce a better product, or one suited to applications that Tegra and future Tegra products were not designed for?
For several years now AMD has been playing catch-up with Intel’s x86 processors, and there is no end in sight. Will Bulldozer give AMD the performance boost it needs to be faster than Intel? I doubt it, or at least not once Sandy Bridge-EX comes out in Q4 of 2011.
We will have more information after the AMD Fusion Developer Summit.
Subject: Processors | April 21, 2011 - 04:57 PM | Jeremy Hellstrom
Tagged: atom, brazos, sandy bridge, energy efficient
In one corner sits AMD’s Brazos-based E-350 platform; in the other, Intel’s Core i3-2100T. A third option is to compare both against the aging Atom platform, which is missing some of the enhancements the more modern platforms have but is certainly energy efficient. Check out who can score the lowest at X-Bit Labs.
"The variety of components for small energy-efficient systems keeps growing day by day. In this review we are going to talk about energy-efficiency processors: AMD E-350 (Zacate) and Intel Core i3-2100T (Sandy Bridge). We will also discuss new Mini-ITX mainboards: Gigabyte E350N-USB3 (AMD Brazos platform) and Zotac H67-ITX WiFi (for LGA1155 processors)."
Here are some more Processor articles from around the web:
- Core i3-2120 and Core i3-2100 Processors @ X-bit Labs
- Intel Sandy Bridge Overclocking Guide @ Benchmark Reviews
- Intel Core i3 2100 @ Bjorn3D
- CPU Performance Comparison Guide @ TechARP
- Desktop CPU Comparison Guide @ TechARP
- Workstation & Server CPU Comparison Guide @ TechARP