Subject: Processors, Systems | June 12, 2011 - 08:57 PM | Tim Verry
Tagged: SFF, Intel, htpc, hd, DIY, atom
Habley has recently shown off a new small, embedded computer dubbed the SOM-6670E6XX. The new computer is the size of a Post-it note, yet it sports an Atom E600 processor running at 1.0 GHz as well as an integrated GMA600 graphics core. To be specific, the motherboard in question measures just 70mm x 70mm.
The CPU and GPU combination is able to support two displays and pipe HD video streams to each. Using Media Player Classic Home Cinema 1.5, the computer is able to play both a 1080p MPEG4 trailer of the X-Men: First Class film and an HD FLV version of Spiderwick simultaneously. While playing both films, the CPU sat at around 93% usage and the Windows Embedded 2009 operating system reported 210 MB of RAM in use. Further, while playing an HD FLV film trailer alongside an HD YouTube clip, the processor was again pegged at 93% usage; in this test, however, RAM usage was much higher at 422 MB. In addition to the SOM-6670 itself, the test system consisted of a SOMB-073 carrier board (which provides the various I/O, including video and audio output, mouse and keyboard input, and SATA ports), 1GB of on-board RAM, and a 5400RPM laptop form factor (2.5”) 120GB hard drive.
Including the two monitors, at 1280x768 (over HDMI) and 1920x1080 (SDVO) respectively, the system drew 18 watts during usage. You can see the test system of the small HD-capable computer in action in the video below. What uses do you have in mind for a micro-sized computer such as this?
Subject: General Tech, Processors | June 7, 2011 - 05:25 PM | Scott Michaud
Intel has been pushing for higher clock rates for ages now. While 4 and even 5 GHz are not entirely uncommon for those willing to step outside Intel’s specifications and push the frequency as high as it can go, Intel has yet to offer parts at those frequencies in any supported fashion. That has recently changed with Intel’s Xeon line.
Tom’s Hardware noted from Intel’s spec sheet that the Xeon E3-1290 is clocked at 3.6 GHz, with its Turbo Boost rating on single-threaded applications spiking to 4 GHz. Intel’s original intention with the NetBurst architecture, back in 2004, was to push to ridiculously high frequencies, but they quickly found that its scalability ended below the 4 GHz line, killing their plans for a 4 GHz SKU. With these Xeons architecturally quite close to the higher-end Sandy Bridge parts, it is possible that we might see 4 GHz on the desktop soon.
Advanced Micro Devices (AMD) announced today that they plan to bring their “FX” branding back to the latest high-end motherboards and CPUs. The first round of products to carry the brand include the “Scorpius” platform (AMD 990FX motherboards and AMD Radeon 6000 series graphics cards), and the upcoming “Zambezi” native octo-core unlocked processor. “FX customers will enjoy an unrivalled (sic) feature set and amazing control over their PC’s performance,” stated AMD.
TechPowerUp shows off the FX branded Zambezi's packaging, for example.
The “FX” moniker is AMD’s equivalent to Intel’s “Extreme Edition” products, which are overclocker and enthusiast-friendly products aimed at those wanting the fastest stock performance and the ability to push hardware to the limit through overclocking via unlocked multipliers.
In bringing back the “FX” brand in full force with Bulldozer, AMD seems confident in their processors’ performance versus the competition. It will certainly be interesting to see if their upcoming hardware can back up the enthusiast marketing and stack up against Intel’s offerings.
You can read more about AMD’s E3 announcement over at HardOCP.
Subject: Motherboards, Processors | June 1, 2011 - 09:40 PM | Ryan Shrout
Tagged: socket 2011, lga1366, danshui bay, asus
At their Republic of Gamers press conference ASUS showed off a prototype concept motherboard that really got some attention. The board combines an LGA1366 processor socket for current generation Nehalem processors AND a socket for the upcoming Sandy Bridge-E processors called Socket 2011. What does a beast like this look like?
Okay, so this board probably won't fit in your case and maybe won't even see the light of day outside a few reviews and interesting designs. But the concept is cool to see: use your LGA1366 processor today and still be able to upgrade to the Socket 2011 platform when those CPUs are released. You can see that each processor has its own separate memory slots, though they share most of the other components.
The price of this board will also likely make it less than appealing to consumers, even those conscious of upgrade paths.
Subject: Processors, Shows and Expos | June 1, 2011 - 09:28 AM | Ryan Shrout
Tagged: trinity, llano, fusion, computex, bulldozer, APU, amd
While talking up the new 900-series chipsets and the branding for the upcoming AMD Llano APU launch, AMD did surprise us by showing off a bit more of the future than is typical. Rick Bergman, general manager of the AMD Product Group, pulled a Trinity-based APU out of his pocket to demonstrate the company's conviction to stay on a "one-APU-per-year" cycle in the years to come.
While it looks just like any other AMD processor from a distance, this Trinity APU is based on the Bulldozer x86 architecture (which will see its first release as a CPU-only part later this year) and combines it with some number of SIMD units (aka Radeon cores) for a CPU/GPU combo. This will be the part that succeeds Llano, which is itself due out in a few short days.
This roadmap shows that a once-a-year cadence will be the norm for AMD going forward and that AMD plans to introduce an APU for the tablet market sometime in 2012. It will be interesting to see how late to the game AMD is in this arena and whether they can compete with what ARM is doing, or even what Intel will be doing with Medfield.
Subject: Processors, Mobile, Shows and Expos | May 31, 2011 - 02:01 AM | Ryan Shrout
Tagged: ultrabook, Medfield, Ivy Bridge, Intel, haswell, computex, atom
With the release of the Intel Z68 chipset behind us by several weeks, Intel spent the opening keynote at Computex 2011 creating quite a buzz in the mobility section of the computing world. Intel’s Executive Vice President Sean Maloney took the stage on Tuesday and announced what Intel is calling a completely new category of mobile computer, the “Ultrabook”. A term coined by Intel directly, the Ultrabook will “marry the performance and capabilities of today’s laptops with tablet-like features and deliver a highly responsive and secure experience, in a thin, light and elegant design.”
If this photo looks familiar...see the similarity?
Intel is so confident in this new segment of the market, which will fall between the tablet and the notebook, that they are predicting it will represent 40% of Intel’s processor shipments by the end of 2012. That is an incredibly bold claim considering how massive and how dominant Intel is in the processor field. Intel plans to reach this 40% goal by addressing the Ultrabook market in three phases, the first of which will begin with ultra-low-power versions of today’s Sandy Bridge processors. Using this technology, Maloney says we will see notebooks less than 0.8 inches thick for under $1,000.
Make sure you "Read More" for the full story!!
NCSU Researchers Tweak Core Prefetching and Bandwidth Allocation to Boost Multi-Core Performance by 40%
Subject: Processors | May 27, 2011 - 11:26 AM | Tim Verry
Tagged: processor, multi-core, efficiency, bandwidth, algorithm
With the clock speed arms race now behind us, the world has turned to increases in the number of processor cores to boost performance. As more applications are becoming multi-threaded, CPU core increases have become even more important. In the consumer space, quad and hexa-core chips are rather popular in the enthusiast segment. On the server side, eight core chips provide extreme levels of performance.
The way that most multi-core processors operate involves the various CPU cores having access to their own cache. (Intel’s current-generation chips actually have three levels of cache, with the third level shared between all cores; that specific caching system, however, is beyond the scope of this article.) This cache is extremely fast and keeps the processing core(s) fed with data, which the processor then feeds through its assembly line-esque instruction pipeline(s). The cache is populated with data through a method called “prefetching.” Prefetching pulls data for running applications from RAM, using mathematical algorithms to predict what the processor is likely to need next. While these predictive algorithms are usually correct, they sometimes make mistakes; the processor is then not fed from the cache and must look for the data elsewhere. These instances, called stalls, can severely degrade core performance, as the processor must reach out past the cache into system memory (RAM) or, worse, the even slower hard drive to find the data it needs. When the processor must reach beyond its on-die cache, it has to use the system bus to query the RAM for data. This processor-to-RAM bus, while faster than reading from a disk drive, is much slower than the cache. Further, processors have only a limited amount of bandwidth between the CPU and the RAM, and as the number of cores increases, the share of that bandwidth each core has access to is greatly reduced.
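The relationship between prefetching and stalls can be sketched with a toy cache model. This is purely illustrative, not how real hardware works: every constant and the simple next-line predictor here are invented for the example.

```python
# Toy model of a direct-mapped cache with an optional next-line prefetcher.
# Illustrative only -- real hardware prefetchers are far more sophisticated --
# but it shows how a correct prediction hides a stall.

CACHE_LINES = 64          # number of cache slots
LINE_SIZE = 8             # addresses per cache line

def run_accesses(addresses, prefetch=False):
    """Return the number of misses (stalls) for a stream of addresses."""
    cache = [None] * CACHE_LINES
    misses = 0
    for addr in addresses:
        line = addr // LINE_SIZE
        slot = line % CACHE_LINES
        if cache[slot] != line:
            misses += 1          # stall: data must come from RAM
            cache[slot] = line
            if prefetch:
                # Sequential (next-line) prediction: pull in the following
                # line before the core actually asks for it.
                nxt = line + 1
                cache[nxt % CACHE_LINES] = nxt
    return misses

# A sequential scan is the best case for a next-line prefetcher.
stream = list(range(4096))
print(run_accesses(stream, prefetch=False))  # one miss per cache line
print(run_accesses(stream, prefetch=True))   # roughly half as many misses
```

When the access pattern is random rather than sequential, the same prefetcher wastes bandwidth fetching lines that are never used, which is exactly the trade-off the NCSU work targets.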
The layout of a current Sandy Bridge Intel processor. Note the Cache and Memory I/O.
A team of researchers at North Carolina State University has been studying these issues, which are inherent in multi-core processors. Specifically, the team is part of North Carolina State University’s Department of Electrical and Computer Engineering and includes Fang Liu and Yan Solihin, who were funded in part by the National Science Foundation. In a paper concluding their research, to be presented June 9th, 2011 at the International Conference on Measurement and Modeling of Computer Systems, they detail two methods for improving upon current bandwidth allocation and cache prefetching implementations.
Dr. Yan Solihin, associate professor and co-author of the paper in question, stated that certain processor cores require more bandwidth than others; therefore, by dynamically monitoring the type and amount of data being requested by each core, the available bandwidth can be prioritized on a per-core basis. Solihin further stated that “by better distributing the bandwidth to the appropriate cores, the criteria are able to maximize system performance.”
Further, they analyzed data from the processors' hardware counters and constructed a set of criteria that seek to improve efficiency by dynamically turning prefetching on and off on a per-core basis, which frees up additional bandwidth for the cores that need it. By implementing both methods, the research team was able to improve multi-core performance by as much as 40 percent versus chips that do not prefetch data, and by 10 percent versus multi-core processors whose cores do prefetch data.
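The paper's exact criteria are not public until it is presented, but the two ideas combined above can be sketched in hypothetical Python. The counter names and the 50% accuracy threshold below are invented for illustration only.

```python
# Hypothetical sketch of the two techniques described above: per-core
# bandwidth partitioning plus dynamic prefetch toggling. The real NCSU
# criteria are not published yet; all fields and thresholds are invented.

from dataclasses import dataclass

@dataclass
class CoreStats:
    core_id: int
    prefetch_hits: int    # prefetched lines that were actually used
    prefetch_total: int   # total lines prefetched this interval
    mem_requests: int     # demand requests to RAM this interval

ACCURACY_THRESHOLD = 0.5  # invented: disable prefetch below 50% accuracy

def tune(cores, total_bandwidth):
    """Return {core_id: (prefetch_enabled, bandwidth_share)}."""
    demand = sum(c.mem_requests for c in cores)
    plan = {}
    for c in cores:
        accuracy = c.prefetch_hits / c.prefetch_total if c.prefetch_total else 0.0
        enabled = accuracy >= ACCURACY_THRESHOLD
        # Partition bandwidth in proportion to each core's demand.
        share = total_bandwidth * c.mem_requests / demand if demand else 0.0
        plan[c.core_id] = (enabled, share)
    return plan

cores = [
    CoreStats(0, prefetch_hits=90, prefetch_total=100, mem_requests=300),
    CoreStats(1, prefetch_hits=10, prefetch_total=100, mem_requests=100),
]
plan = tune(cores, total_bandwidth=400)
print(plan)  # core 0 keeps prefetching and gets 3/4 of the bandwidth
```

A real implementation would run in hardware every few thousand cycles, but the shape of the decision is the same: measure, compare against criteria, reallocate.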
The researchers plan to detail their findings in a paper titled “Studying the Impact of Hardware Prefetching and Bandwidth Partitioning In Chip-Multiprocessors,” which will be publicly available on June 9th. The exact algorithms and criteria that they have determined will decrease the number of processor stalls and increase bandwidth efficiency will be extremely interesting to analyze. Further, it will be interesting to see if any of these improvements will be implemented by Intel or AMD in their future chips.
Subject: Processors | May 25, 2011 - 03:28 PM | Jeremy Hellstrom
Tagged: xeon, server, xeon x7, x7 4870, Intel
AnandTech got their hands on four of the brand new 32nm Intel Xeon E7-4870 chips, each with 10 cores clocked at 2.4GHz; perhaps a delayed 'Tick', but a tick nonetheless. Not only did they test the new chips, they also had a chance to test them with Load Reduced DIMMs (LR-DIMMs) as opposed to the old Fully Buffered style (FB-DIMMs) we were used to in days gone by. That spells higher capacity, which is good considering the testbed they used can support up to 2TB of RAM to keep the four CPUs fed. This is a high-end server part, not really competing against AMD, as a similar Opteron system would cost about half as much with performance reduced by roughly the same proportion. Check out this beast, but keep in mind a single CPU will set you back more than you paid for your whole system.
"Only one year later, Intel is upgrading the top Xeon by introducing Westmere-EX. Shrinking Intel's largest Xeon to 32nm allows it to be clocked slightly higher, get two extra cores, and add 6MB L3 cache. At the same time the chip is quite a bit smaller, which makes it cheaper to produce. Unfortunately, the customer does not really benefit from that fact, as the top Xeon became more expensive. Anyway, the Nehalem-EX was a popular chip, so it is no surprise that the improved version has persuaded 19 vendors to produce 60 different designs, ranging from two up to 256 sockets."
Here are some more Processor articles from around the web:
- Intel Core i5-2390T @ iXBT Labs
- Inexpensive AMD Processor Roundup @ iXBT Labs
- AMD Phenom II X4 980 BE 3.70 GHz @ techPowerUp
- AMD Phenom II X2 560 Black Edition AM3 Processor Review @ eTeknix
- Workstation & Server CPU Comparison Guide @ TechARP
- CPU Performance Comparison Guide @ TechARP
- Intel's Silvermont: A New Atom Architecture @ AnandTech
Recently, AMD launched two new AMD Embedded G-Series APUs (Accelerated Processing Units). The two new chips have a TDP rating of 5.5 and 6.4 watts, which represent a 39% improvement in power savings over the previous iterations. The 361mm² chip package is capable of being used in embedded systems without the need for a fan to cool it. The embedded chips include one or two low power x86 Bobcat processors and a discrete class DirectX 11 GPU on a single die.
AMD currently has three systems utilizing the new APUs, including a Pico-ITX form factor computer, a Qseven form factor computer, and a digital sign system. Buddy Broeker, the Director of Embedded Solutions for AMD stated that "today we take the ground-breaking AMD Fusion APU well below 7W TDP and shatter the accepted traditional threshold for across-the-board fanless enablement."
The two new chips are named the T40R and the T40E. Both chips run at 1.00GHz; however, the 6.4 watt TDP T40E is a dual-core chip while the 5.5 watt TDP T40R is a single-core variant. Both chips include an AMD Radeon 6250 GPU, a 64KB L1 cache, and a 512KB L2 cache per CPU core. Further, the chips feature an integrated DDR3 memory controller that can support up to 667MHz solder-down SODIMMs or two DIMM slots. More details on the series as a whole can be found here.
Mobile and embedded processors continue to get smaller and faster. Have you seen any AMD powered embedded technology in your town?
Subject: General Tech, Graphics Cards, Processors | May 22, 2011 - 11:04 PM | Scott Michaud
Tagged: fusion, amd, AFDS
In a little over three weeks’ time AMD will host their AMD Fusion Developer Summit 2011 (AFDS): a three-day conference with the hope of promoting heterogeneous computing amongst developers. Over the years we have increasingly seen potential applications for the parts of your computer beyond the standard x86 cores, though much of that has come under NVIDIA’s brand. Building up to the summit, AMD’s DeveloperCentral talked with Lee Howes, parallel computing expert and Member of Technical Staff for Programming Models at AMD, about his upcoming session at AFDS.
I can't get over how much AFDS looks like a diagnosis.
In the short five-question interview, Dr. Howes outlined that the goal of his session is to show developers what to expect, good and bad, from developing for a heterogeneous architecture such as that of an APU. The rest of the interview was spent discussing how heterogeneous computing looks today and how it will eventually look. Topics spanned from the slow perceived uptake of parallel computing in the home to the technological limitations of traditional CPUs that APUs and other heterogeneous computing systems look to bypass.
While AFDS is (as its name suggests) a developer’s conference, it is very much worth a look for the end user. Support for developers on newer computing architectures helps fuel the cycle of adoption between software and hardware, which ultimately results in a better experience for us. What tasks would you like to see accelerated by heterogeneous computing? Let us know in the comments below.
Subject: Processors, Chipsets, Systems | May 19, 2011 - 06:10 PM | Jeremy Hellstrom
Tagged: sapphire, ion 2, htpc
At an estimated $450, the Sapphire Edge HD mini PC, powered by a dual core Atom D510 1.66 GHz with ION 2 graphics is a pretty good deal for those looking for a nettop. With only 250GB of storage you will probably want this connected to a large storage device either over ethernet or USB, though with services like Google Music Beta, Wolfgang's Vault and YouTube that might not be a problem. From InsideHW's testing you certainly won't have to worry about videos skipping just because your email is open.
"A dual-core CPU that won’t be stricken down by several programs running at the same time, GeForce that chews on any video that you put in front of it, sufficient RAM to make Windows 7 jump around, complete support for all types of video/audio formats and subtitles, and all this for a price of a good Blu-ray player - what else could you wish for?"
Here are some more Systems articles from around the web:
- ECS HDC-I Mini-ITX Fusion Board Review @ Madshrimps
- Gigabyte GA-E350N-USB3 @ iXBT Labs
- ASUS E35M1-I DELUXE @ Tweaktown
- Sony VAIO VPC-L218FX Review @ TechReviewSource
Subject: Editorial, General Tech, Graphics Cards, Processors, Mobile | May 13, 2011 - 06:49 PM | Scott Michaud
Tagged: nvidia, conference call
NVIDIA made their quarterly conference call on May 12th which consisted of financial results up to May 1st and questions from financial analysts and investors. NVIDIA chief executive officer Jen-Hsun Huang projected that future revenue from the GPU market would be “flattish”, revenue from the professional market would be “flattish”, and revenue from the consumer market would be “uppish”. Huang did mention that he believes that the GPU market will grow in the future as GPUs become ever more prevalent.
How's the green giant doing this quarter? Read on for details.
For the professional market, NVIDIA discussed their intention to continue providing proof-of-concept applications to show the benefit of GPU acceleration, which they hope will spur development of GPU-accelerated code. Huang repeatedly mentioned that the professional market desires abilities like simultaneous simulation and visualization, and that a 10% code rewrite can increase performance 500-1000%, but current uptake is not as fast as they would like. NVIDIA also hinted that GPUs will be pushed in the server space in the near future but did not clarify what that could mean. NVIDIA could simply be stating that Tesla will continue to be a focus for them; they could also be hinting at applications similar to what we have seen in recent open-sourced projects.
For consumers, Huang made note of their presence in the Android market with their support of Honeycomb 3.1 and the upcoming Ice Cream Sandwich. Questions were posed about the lackluster sales of Tegra tablets, but Huang responded that the first generation of tablets was made unattractive by the cost of 3G service. He went on to say that the second wave of tablets will be cheaper and more available in retail stores, with Wi-Fi-only models more accessible to consumers.
nVihhhhhhhhhdia. (Image by Google)
The bulk of the conference call was centered on NVIDIA’s purchase of Icera, though not a lot of details were released since the purchase has yet to be finalized. The main point of note is that, while NVIDIA could integrate Icera’s modems onto their Tegra mobile processors, they currently have no intention of doing so. They also stated they have no plans to jump into other mobile chip markets such as GPS and near-field communications, citing their lesser significance and greater number of competitors.
I think the new owners like the color on the logo.
The last point of note from the conference call was that they expect Project Denver, NVIDIA’s ARM-based processor, to be about two generations away from availability. They noted that they cannot comment for Microsoft, but they did reiterate their support for Windows 8 and its introduction of the ARM architecture. The general theme throughout the call was that NVIDIA is confident in their position as a player in the industry. If each of their projects works out as planned, it could be a very well-justified attitude.
Subject: Processors, Shows and Expos | May 12, 2011 - 12:22 PM | Jeremy Hellstrom
Tagged: VIA, Nano, quadcore, quad, centaur
VIA has been sitting pretty in a very specific piece of the computer market for quite a while now, but is now being crowded on one side by AMD and Intel working their way down to the low-power market from their usual energy-gobbling silicon, and on the other by ARM sneaking its performance up from its traditional extremely-low-power market. That competition has spurred VIA to develop first the dual-core Nano X2 and now the QuadCore, which is a pair of X2s on a single package using what was described to The Tech Report as a "side channel" of wiring between them. Still on 40nm, it doesn't represent a completely new design for VIA, more a refinement of what they already produce. Check out their coverage as well as the write-up Josh has finished here.
"Early this year, Via introduced its Nano X2 processor, a dual-core implementation of its Isaiah architecture built on TSMC's 40-nm chip fabrication process. Today, Via is announcing a new product, the QuadCore processor, that combines a pair of Nano X2 chips on a single package to deliver a low-cost, low-power CPU whose position in the market is fairly distinctive.
We visited Via-Centaur's Austin, Texas offices yesterday, where we chatted with Centaur Chief Architect Glenn Henry and Via marketing head Richard Brown. We came away with some fresh details on the QuadCore processor and a better sense of Via's future plans as an intriguing third-place supplier of x86-compatible PC processors."
Here are some more Processor articles from around the web:
- VIA QuadCore Preview & Centaur Tour @ [H]ard|OCP
- VIA's QuadCore: Nano Gets Bigger @ AnandTech
- The Best Budget & Mainstream Processors @ Techspot
- Intel Core i7 990X Extreme Edition Processor Review @ Legit Reviews
- Desktop CPU Comparison Guide Rev. 10.3 @ Tech ARP
- AMD Phenom II X2 555 Black Edition AM3 Processor Review @ eTeknix
- AMD Phenom II X4 980 Black Edition Processor Review @ Hi Tech Legion
Subject: Processors | May 12, 2011 - 08:21 AM | John Davis
Tagged: software, sdk, linux, Intel, developer
Intel has just released an update to their OpenCL (Open Computing Language) SDK (Software Development Kit). With this update Intel has released a 64-bit .rpm package for Linux; previously, only Windows was supported. OpenCL is a huge step toward the future of heterogeneous computing, and the future of computers. Intel joins a host of vendors that now support OpenCL, including AMD/ATI and NVIDIA.
OpenCL has several competitors in the heterogeneous computing realm, including NVIDIA's CUDA and Microsoft's DirectCompute. All of this is one giant step forward for GPGPU. For the majority of computers that have a dedicated GPU, or an Intel processor with on-chip graphics sitting unused, this is great news! Hopefully, future Linux distributions will implement OpenCL the way OS X did with Snow Leopard.
Subject: General Tech, Processors | May 12, 2011 - 12:34 AM | Scott Michaud
Tagged: sandy bridge, celeron
Intel has made a splash with their Sandy Bridge parts; for being mid-range, they keep up with the higher end of the prior generation in many applications. We have heard rumors of new Atom-level parts from Intel deviating from the on-chip GPU structure that Sandy Bridge promotes. What about the next level? What about Celeron?
I'm guessing less than an i7.
Details were posted to CPU-World about Intel’s upcoming Sandy Bridge-based Celeron processors. There are three variants listed, each supporting Intel’s on-chip GPU. The G440 is a single-core part clocked at 1.6 GHz with a 650 MHz GPU, while the G530 and G540 are both dual-core parts clocked at 2.4 GHz and 2.5 GHz respectively, each with an 850 MHz GPU. The dual-core parts have a 2MB L3 cache; the article is inconsistent on whether the single-core part has 1 or 2 MB of L3 cache, but we will assume 1 MB based on its wording. While GPU performance differs between the single-core and dual-core parts, both GPUs will Turbo Boost to a maximum of 1GHz as need arises.
Functionally the chips will only contain the bare minimum of Sandy Bridge core features like 64-bit and virtualization support. There are still currently no further details on launch date and pricing. But if you are waiting to upgrade your lower end devices rest assured that Sandy B is there for you; at some point, at least.
Subject: Processors, Chipsets, Mobile | May 9, 2011 - 09:07 PM | Tim Verry
Tagged: PowerVR, Intel, gpu, atom
In a surprising move, Intel plans to move away from using its own graphics processors with the next "full fat" Atom processors. Intel has traditionally favored its own graphics chipsets; however, VR-Zone reports that Intel has extended its licensing agreements with PowerVR to include certain GPU architectures.
These GPU licenses will allow Intel to implement a PowerVR SGX545-equivalent graphics core in its Cedarview Atom chips. While the PowerVR graphics core is no match for dedicated GPUs, or likely even the "HD 3000" found in Intel's own Sandy Bridge parts, the hardware will allow Atom-powered systems to play video with ease thanks to hardware-accelerated decoding of "MPEG-2, MPEG-4 part 2, VC1, WMV9 and the all-important H.264 codec." VR-Zone describes the SGX545 GPU as capable of "40 million triangles/s and 1Gpixels/s using a 64-bit bus" at the chip's original 200MHz.
Intel plans to clock the mobile chips at 400MHz and the desktop graphics cores at 640MHz. The graphics cores will be capable of resolutions up to 1440x900 and support VGA, HDMI 1.3a, and DisplayPort 1.1 connections for video output. DirectX 10.1 support is also stated by VR-Zone for the SGX545, which means that the nettop versions of Atom may be capable of running the Aero desktop smoothly.
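For a rough sense of what those clock bumps could mean, here is a back-of-the-envelope scaling of VR-Zone's 200MHz figures. The linear-with-clock assumption is ours and is optimistic, since memory bandwidth rarely scales along with the core.

```python
# Back-of-the-envelope scaling of VR-Zone's SGX545 throughput figures,
# assuming (optimistically) that throughput scales linearly with clock.
BASE_CLOCK_MHZ = 200
BASE_TRIANGLES = 40e6   # triangles/s at 200MHz, per VR-Zone
BASE_PIXELS = 1e9       # pixels/s at 200MHz, per VR-Zone

def scaled(clock_mhz):
    """Return (triangles/s, pixels/s) at the given clock, linear scaling."""
    factor = clock_mhz / BASE_CLOCK_MHZ
    return BASE_TRIANGLES * factor, BASE_PIXELS * factor

for name, clock in [("mobile", 400), ("desktop", 640)]:
    tris, pix = scaled(clock)
    print(f"{name} ({clock}MHz): {tris / 1e6:.0f} Mtri/s, {pix / 1e9:.1f} Gpix/s")
```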
This integration by Intel of a GPU capable of hardware video acceleration will certainly make NVIDIA's ION chipsets harder to justify for HTPC usage. ION chipsets will likely relinquish market share to cheaper stock Intel Atom platforms for basic home theater computers, but will remain viable in the more specific niche of ION + Atom systems used as light gaming platforms in the living room.
Subject: Processors | May 8, 2011 - 07:56 PM | Tim Verry
Tagged: sandy bridge, Eight Core, 2011
Although the item is no longer for sale--likely due to threatened legal action from Intel--an eight-core (Socket 2011) Sandy Bridge chip briefly went on sale for $1359.99 on auction site eBay.
The chip reads "Intel Confidential Q19D ES 1.60GHz" and is alleged to be a preview chip for a Socket 2011 Sandy Bridge processor capable of 8 cores/16 threads at 1.6GHz. AMD's eight-core Bulldozer chips have been announced for some time now and are planned to release in the first half of this year. Intel has been content fighting AMD's six-core chips with its hyperthreaded quad cores; however, as more prevalent and better-optimized multi-threaded applications take advantage of AMD's higher number of physical cores, Intel may be planning to match AMD physical core for physical core to keep AMD at bay.
The new Intel chips should help to slow AMD's rise in server market share, while also trickling down the idea of (further optimized) multi-threaded applications to consumer markets. With that said, even though consumer level applications cannot currently fully utilize sixteen threads, the Tim Taylor-esque part of many hardware enthusiasts (especially folders) would love to have one anyway!
Subject: Processors, Mobile | May 6, 2011 - 07:11 PM | Ryan Shrout
Tagged: project denver, nvidia, macbook, Intel, arm, apple
A very interesting story over at AppleInsider has put the rumor out there that Apple may choose to ditch the Intel/x86 architecture altogether with some future notebooks. Instead, Apple may choose to go the route of the ARM-based processor, likely similar to the A4 that Apple built for the iPhone and iPad.
What is holding back the move right now? Well for one, the 64-bit versions of these processors aren't available yet and Apple's software infrastructure is definitely dependent on that. By the end of 2012 or early in 2013 those ARM-based designs should be ready for the market and very little would stop Apple from making the move. Again, this is if the rumors are correct.
Another obstacle is performance - even the best ARM CPUs on the market fall woefully behind the performance of Intel's current crop of Sandy Bridge processors or even their Core 2 Duo options.
In addition to laptops, the report said that Apple would "presumably" be looking to move its desktop Macs to ARM architecture as well. It characterized the transition to Apple-made chips for its line of computers as a "done deal."
"Now you realize why Apple is desperately searching for fab capacity from Samsung, Global Foundries, and TSMC," the report said. "Intel doesn't know about this particular change of heart yet, which is why they are dropping all the hints about wanting Apple as a foundry customer. Once they realize Apple will be fabbing ARM chips at the expense of x86 parts, they may not be so eager to provide them wafers on advanced processes."
Even though Apple is already spec'ing its own processors like the A4, there is the possibility that they could go with another ARM partner for higher performance designs. NVIDIA's push into the ARM market with Project Denver could be a potential option, as they are working very closely with ARM on those design and performance improvements. Apple might just "borrow" those changes at NVIDIA's expense, however, and build its own option that would satisfy its needs exactly without the dependence on third parties.
Migrating the notebook (and maybe desktop) markets to ARM processors would allow the company to unify their operating system across the classic "computer" designs and newer devices like iPads and iPhones. The idea of all of our computers turning into oversized iPhones doesn't sound appealing to me (nor, I imagine, to many of you), but with some changes in the interface it could become a workable option for many consumers.
With even Microsoft planning for an ARM-based version of Windows, it seems that x86 dominance in the processor market is being threatened without a doubt.
Subject: Graphics Cards, Processors | May 6, 2011 - 01:09 PM | Jeremy Hellstrom
Tagged: tri-fire, crossfire, sli, triple, sandybridge
Not too long ago [H]ard|OCP examined the price-to-performance ratio between a triple-SLI GTX 580 system and a Tri-Fire HD 6990 and HD 6970, and discovered that as far as value goes, NVIDIA could not touch AMD. A reader of theirs inquired if it was the aging Core i7-920 that was holding the cards back, even with its overclock to 3.6GHz. A Sandy Bridge system with a Core i7-2600K and an ASUS board with the NF200 bridge chip was used to revisit the performance of the two vendors' GPUs. The result: we can hardly wait for the Z68 boards to come out!
"We have re-tested performance between GTX 580 3-Way SLI and Radeon HD 6990+6970 Tri-Fire with a brand new Sandy Bridge 4.8GHz system. Our readers wanted to know if the CPU speed would improve performance and open up the potential of this triple-GPU performance beasts. To put it succinctly, they were right. The results completely turn the tables upside down and then some."
Here are some more Graphics Card articles from around the web:
- Triple Monitor Gaming: GeForce GTX 590 vs. Radeon HD 6990 @ TechSpot
- Sapphire Radeon HD 5850 and HD 5830 1GB Xtreme @ Tweaktown
- XFX HD Radeon 6790 Review @ OCC
- PowerColor HD 6950 Vortex II 2 GB @ techPowerUp
- HIS Radeon 6870 IceQX @ XSReviews
- HIS Radeon HD 6790 1GB IceQ X Turbo @ Tweaktown
- AMD Radeon HD 6670 1GB and HD 6570 512MB GDDR5 @ Hi Tech Legion
- AMD Radeon 6990 4GB Graphics Card Review @ eTeknix
- MSI R6950 Twin Frozr II/OC, MSI R6870 Hawk, MSI N560GTX-Ti Twin Frozr II @ iXBT Labs
- May 2011: Gallium3D vs. Classic Mesa vs. Catalyst @ Phoronix
- How to overclock a graphics card @ eTeknix
- i3DSpeed, April 2011 @ iXBT Labs
- MSI GTX560-Ti OC SLI @ OC3D
- Zotac GeForce GTX 560 Ti AMP! Edition 1GB Video Card Review @ ThinkComputers
- GIGABYTE GTX 580 Super Overclock @ OCAU
Subject: General Tech, Processors | May 4, 2011 - 01:55 PM | Tim Verry
Tagged: transistor, Intel
"After a decade of research, Intel has unveiled the world's first three dimensional transistor," states Mark Bohr, a Senior Fellow at Intel. Silicon-based transistors in computers, mobile devices, vehicles, and embedded equipment have existed only in a planar, or two-dimensional, form until today.
The new three-dimensional transistor, dubbed "Tri-Gate," is now ready for high-volume production and will be included in Intel's new Ivy Bridge 22nm processors. This new Tri-Gate transistor is a huge deal for Intel, as it will enable the company to maintain the pace of chip evolution outlined by Moore's Law. If you are not familiar with Moore's Law, it states that approximately every 18 months transistor density will double, bringing with it increases in performance and yield while decreasing production costs. Intel states that "It has become the basic business model for the semiconductor industry for more than 40 years."
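Using the 18-month doubling figure mentioned above, the compounding is easy to sketch. The function and starting density below are purely illustrative, not Intel's numbers:

```python
# Illustrative sketch of Moore's Law scaling, assuming the 18-month
# doubling period cited in the article. Numbers are hypothetical.

def projected_density(initial_density: float, months: int,
                      doubling_months: int = 18) -> float:
    """Transistor density after `months`, doubling every `doubling_months`."""
    return initial_density * 2 ** (months / doubling_months)

# A hypothetical chip with 100 million transistors today:
print(projected_density(100e6, 18))  # one doubling: 200 million
print(projected_density(100e6, 36))  # two doublings: 400 million
```

Three years, two doublings: a 4x density increase, which is why even small slips in the doubling period compound quickly over a decade.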
As processors become smaller and smaller, the electric current becomes more and more difficult to contain. There are hundreds of thousands of minute connections and switches inside today's processors, and as manufacturing processes shrink, the amount of current leakage increases. With its Core 2 Duo processors, Intel created a new "high-k" (high dielectric constant, a property of matter describing how much charge it can hold) metal-gate transistor using a material called hafnium. The new material replaced the transistor's silicon dioxide gate dielectric to combat the current leakage problem at 45nm. This allowed the chip process to shrink while producing less current leakage and heat. To be more specific, Intel states that "because high-k gate dielectrics can be several times thicker, they reduce gate leakage by over 100 times. As a result, these devices run cooler."
Unfortunately, at the much smaller 22nm process, Intel could not achieve results congruent with Moore's Law even with its high-k gate transistors. To maintain the scaling the law predicts while overcoming current leakage, Intel had to once again reinvent the transistor and find a way to use more of what little space was available. It is here that the company entered the third dimension: by designing a transistor that controls the electrical current on three sides instead of a single plane, Intel is able to shrink the transistor while ending up with more surface area to "control the stream," as Mark Bohr puts it.
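The geometric payoff of wrapping the gate around three sides can be sketched with some simple arithmetic. The fin dimensions below are hypothetical, chosen only to illustrate the idea, not Intel's actual 22nm geometry:

```python
# Illustrative comparison of gate control area: a planar transistor's gate
# touches the channel on one face; a Tri-Gate fin is wrapped on three
# (top plus both sides). All dimensions are hypothetical.

def planar_gate_area(width_nm: float, length_nm: float) -> float:
    # Planar: the gate controls the channel through the single top surface.
    return width_nm * length_nm

def trigate_gate_area(fin_width_nm: float, fin_height_nm: float,
                      length_nm: float) -> float:
    # Tri-Gate: effective width is the fin top plus both sidewalls.
    return (fin_width_nm + 2 * fin_height_nm) * length_nm

# e.g. a 10nm-wide channel with a 22nm gate length, fin 30nm tall:
print(planar_gate_area(10, 22))        # 220 nm^2 of gate control area
print(trigate_gate_area(10, 30, 22))   # 1540 nm^2 -- same footprint
```

The fin occupies the same silicon footprint as the planar channel, yet the gate's control area is several times larger, which is the "control the stream" advantage Bohr describes.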
The proposed benefits of Tri-Gate lie in its ability to operate at lower voltages with higher energy efficiency, all while running cooler and faster than ever before: more specifically, up to a 37 percent performance increase at low voltage versus Intel's current line of 32nm processors. Intel further states that "the new transistors consume less than half the power when at the same performance as 2-D planar transistors on 32nm chips." This means that, at the performance level of the current crop of Intel CPUs, Ivy Bridge should be able to do the same calculations either while using half the power of Sandy Bridge, or nearly twice as fast at the same power consumption (it is unlikely to scale perfectly, as there is overhead, and other elements of the chip will not be as radically revamped). If this sort of scaling holds for the majority of Ivy Bridge chips, the overclocking headroom and resulting performance should be unprecedented.
The use of Tri-Gate transistors is also mentioned as being beneficial for mobile and handheld devices, as the power efficiency should allow for longer battery life: the chip can run at decreased voltages while maintaining (at least) the same level of performance as current mobile chips. While Intel did not demo any mobile CPUs, it did state that Tri-Gate transistors may be integrated into future Atom chips.