Subject: Graphics Cards, Processors, Shows and Expos | June 14, 2011 - 08:06 PM | Ryan Shrout
Tagged: vliw, trinity, llano, fusion, evergreen, cayman, amd, AFDS
Well that was an interesting twist... During a talk on the next generation of GPU technology at the AMD Fusion Developer Summit, one of the engineers was asked about Trinity, the next APU due in 2012 (and shown running today for the very first time). The answer: Trinity in fact uses a VLIW4 architecture rather than the VLIW5 design found in the just-released Llano A-series APU.
A shader unit from the VLIW4-based Cayman architecture
That means that Trinity APUs will ship with Cayman-based GPU technology (6900 series) rather than Evergreen-based technology (5000 series). While that doesn't tell us much about performance, simply because there are so many variables including shader counts and clocks, it does put to rest the rumor that Trinity would keep basically the same class of GPU technology that Llano had.
Trinity notebook shown for the first time today at AFDS. Inside is an APU with Cayman-class graphics.
AMD is definitely pushing the capabilities of APUs forward and if they can stay on schedule with Trinity, Intel might find the GPU portion of its Ivy Bridge architecture well behind again.
Subject: Editorial, Processors, Shows and Expos | June 14, 2011 - 05:09 PM | Ryan Shrout
Tagged: nvidia, Intel, heterogeneous, fusion, arm, AFDS
Before the AMD Fusion Developer Summit started this week in Bellevue, WA, the most controversial speaker on the agenda was Jem Davies, the VP of Technology at ARM. Why would AMD and ARM get together on a stage with dozens of media and hundreds of developers in attendance? There is no partnership between them in terms of hardware or software, but would there be some kind of major announcement about the two companies' future together?
In that regard the keynote was a bit of a letdown; if you thought there was going to be a merger between the two, or a new AMD APU announced with an ARM processor in it, you left a bit disappointed. Instead we got a bit of background on ARM, a look at how the race of processing architectures has slowly dwindled to just x86 and ARM, and a few jibes at the competitors NOT named AMD.
As is usually the case, Davies described the state of processor technology with an emphasis on power efficiency and the importance of designing with that future in mind. One of the more interesting points concerned the "bitter reality" of per-core performance and the projected DECREASE we will see from 2012 onward due to leakage concerns as we progress to 10nm and even 7nm process technologies.
The idea of dark silicon "refers to the huge swaths of silicon transistors on future chips that will be underused because there is not enough power to utilize all the transistors at the same time," according to this article over at physorg.com. As process technology gets smaller, the areas of dark silicon increase, until the portion of the die that can be utilized at any one time might hit as low as 10% in 2020. Because of this, the need to design chips with many task-specific heterogeneous portions is crucial, and both AMD and ARM are on that track.
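To get a feel for how dark silicon accumulates, here is a toy back-of-the-envelope model in Python. The scale factors are illustrative assumptions for a post-Dennard-scaling world, not figures from Davies' talk:

```python
def usable_fraction(generations, transistor_scale=2.0, power_scale=1.4):
    """Toy dark-silicon estimate: each process generation doubles the
    transistor count, but leakage means power per transistor only falls
    by ~1.4x, while the chip's total power budget stays fixed. The
    fraction of the die you can light up at once shrinks accordingly."""
    frac = 1.0
    for _ in range(generations):
        frac *= power_scale / transistor_scale
    return frac

for gen in range(8):
    print(gen, "shrinks ->", round(100 * usable_fraction(gen), 1), "% of die usable")
```

Under these assumed ratios the usable area drops below 10% after about seven shrinks, which is the same ballpark as the 2020 projection above.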
Those companies not on that path today, NVIDIA specifically and Intel as well, were addressed on the below slide when discussing GPU computing. Davies pointed out that if a company has a financial interest in the immediate success of only the CPU or only the GPU, then benchmarks will be built and shown in a way that makes THAT portion appear the most important. We have seen this from both NVIDIA and Intel in the past couple of years, while AMD has consistently stated they are going to use the best processor for the job.
Amdahl's Law is used in parallel computing to predict the theoretical maximum speedup from using multiple processors. Davies reiterated what we have been told for some time: if only 50% of your application can actually BE parallelized, then no matter how many processing cores you throw at it, it will never run more than twice as fast. The heterogeneous computing products of today and the future can address both the parallel and serial portions of a workload, with improvements in performance and efficiency that should result in better computing in the long run.
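Davies' point follows directly from Amdahl's formula, speedup = 1 / ((1 − p) + p/N), for parallel fraction p on N cores. A quick Python check:

```python
def amdahl_speedup(parallel_fraction, cores):
    """Theoretical maximum speedup for a workload in which only
    `parallel_fraction` of the work can be spread across `cores`."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

# With half the work stuck in serial code, even an enormous core count
# can never push the overall speedup past 2x.
for n in (2, 4, 16, 1024):
    print(n, "cores ->", round(amdahl_speedup(0.5, n), 3), "x")
```

Even at 1024 cores the speedup only creeps toward, and never reaches, 2x, which is why pairing fast serial cores with parallel hardware matters.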
So while we didn't get the major announcement from ARM and AMD that we might have been expecting, the fact that ARM would come up and share a stage with AMD reiterates the message of the Fusion Developer Summit quite clearly: a combined and balanced approach to processing might not be the sexiest but it is very much the correct one for consumers.
Subject: Processors, Mobile, Shows and Expos | June 14, 2011 - 12:08 PM | Ryan Shrout
Tagged: trinity, fusion, APU, AFDS
On stage during the opening keynote at the AMD Fusion Developer Summit 2011, Rick Bergman showed off a notebook that was being powered not by the recently released AMD Llano A-series APUs, but rather the Trinity core due in 2012.
Trinity is the desktop APU for next year that will combine Bulldozer-based x86 CPU cores with an updated DX11 GPU architecture built on the current 32nm process. Not much else is known about the chip yet but hopefully we'll get some more details this week at the show.
Subject: General Tech, Processors | June 14, 2011 - 02:47 AM | Scott Michaud
Tagged: Intel, haswell
Intel’s new processor lines come in two flavors: process shrinks and new architectures. Each revision arrives approximately a year after the prior one, alternating between new architectures (tock) and process shrinks (tick). Sandy Bridge was the most recent new architecture; it will be followed by Ivy Bridge, a process shrink of Sandy Bridge, which in turn will be succeeded by Intel’s newest architecture: Haswell.
I can Haswell?
The instructions added for Intel’s upcoming Haswell architecture are useful for a whole range of applications: image and video processing, face detection, database manipulation, hash generation, and arithmetic in general. As you can see, this revision's additions are quite wide in scope. Keep in mind that the introduction of a new instruction set does not mean programs will be optimized to take advantage of it for some time. When programs do start optimizing for the newer architecture, however, it looks as though Haswell’s new offerings will collapse otherwise complicated tasks into a single instruction.
What task would you like to see a speedup on? Comment below.
Subject: Processors, Systems | June 12, 2011 - 08:57 PM | Tim Verry
Tagged: SFF, Intel, htpc, hd, DIY, atom
Advantech has recently shown off a new small, embedded computer dubbed the SOM-6670 E6XX. The new computer is the size of a Post-it note; to be more specific, the motherboard in question measures 70mm x 70mm. It sports an Atom E600 processor running at 1.0GHz as well as an integrated GMA600 graphics core.
The CPU and GPU blend is able to support two displays and decode two HD video streams at once. Using Media Player Classic Home Cinema 1.5, the computer was able to play both a 1080p MPEG4 trailer for the X-Men: First Class film and an HD FLV trailer for The Spiderwick Chronicles simultaneously. While playing both films, CPU usage sat around 93% with 210 MB of RAM in use under the Windows Embedded 2009 operating system. While playing an HD FLV film trailer alongside an HD YouTube clip, the processor was again pegged at 93% usage; in this test, however, RAM usage was much higher at 422 MB. In addition to the SOM-6670, the test system consisted of a SOMB-073 carrier board (which provides the various IO, including video and audio output, mouse and keyboard input, and SATA ports), 1GB of on-board RAM, and a 5400RPM laptop form factor (2.5”) 120GB hard drive.
Including the two monitors, at 1280x768 (over HDMI) and 1920x1080 (SDVO) respectively, the system drew 18 watts during usage. You can see the test system of the small HD-capable computer in action in the video below. What uses do you have in mind for a micro-sized computer such as this?
Subject: General Tech, Processors | June 7, 2011 - 05:25 PM | Scott Michaud
Intel has been pushing for higher clock rates for ages now. While 4 and even 5 GHz is not entirely uncommon for those willing to step outside Intel’s specifications and push the frequency as high as it can go, Intel had yet to ship a part at that frequency in any supported fashion. That has recently changed with Intel’s Xeon line.
Tom’s Hardware noted from Intel’s spec sheet that the Xeon E3-1290 is clocked at 3.6 GHz, with its Turbo Boost rating on single-threaded applications spiking to 4 GHz. Intel's original intention with the NetBurst architecture was to scale to ridiculously high frequencies, but by 2004 they found that its scalability ended below the 4 GHz line, killing plans for a 4 GHz SKU. With the Xeon E3 line built on the same Sandy Bridge architecture as the higher-end desktop parts, it is possible we might see 4 GHz on the desktop soon.
Advanced Micro Devices (AMD) announced today that they plan to bring their “FX” branding back to the latest high-end motherboards and CPUs. The first round of products to carry the brand include the “Scorpius” platform (AMD 990FX motherboards and AMD Radeon 6000 series graphics cards), and the upcoming “Zambezi” native octo-core unlocked processor. “FX customers will enjoy an unrivalled (sic) feature set and amazing control over their PC’s performance,” stated AMD.
TechPowerUp shows off the FX branded Zambezi's packaging, for example.
The “FX” moniker is AMD’s equivalent to Intel’s “Extreme Edition” products, which are overclocker and enthusiast-friendly products aimed at those wanting the fastest stock performance and the ability to push hardware to the limit through overclocking via unlocked multipliers.
In bringing back the “FX” brand in full force with Bulldozer, AMD seems confident in their processors’ performance versus the competition. It will certainly be interesting to see if their upcoming hardware can back up the enthusiast marketing and stack up against Intel’s offerings.
You can read more about AMD’s E3 announcement over at HardOCP.
Subject: Motherboards, Processors | June 1, 2011 - 09:40 PM | Ryan Shrout
Tagged: socket 2011, lga1366, danshui bay, asus
At their Republic of Gamers press conference ASUS showed off a prototype concept motherboard that really got some attention. The board combines an LGA1366 socket for current-generation Nehalem-based processors AND a socket for the upcoming Sandy Bridge-E processors, called Socket 2011. What does a beast like this look like?
Okay, so this board probably won't fit in your case and maybe won't even see the light of day outside a few reviews and interesting designs. But the concept is cool to see: use your LGA1366 processor today and still be able to upgrade to the Socket 2011 platform when those CPUs are released. You can see each processor has its own separate memory slots, though they share most of the other components.
The price of this board will also likely make it less than appealing to consumers, even those conscious of upgrade paths.
Subject: Processors, Shows and Expos | June 1, 2011 - 09:28 AM | Ryan Shrout
Tagged: trinity, llano, fusion, computex, bulldozer, APU, amd
While talking up the new 900-series chipsets and the branding for the upcoming AMD Llano APU launch, AMD surprised us by showing off a bit more of the future than is typical. Rick Bergman, general manager of the AMD Product Group, pulled a Trinity-based APU out of his pocket to demonstrate AMD's conviction to stay on a "one APU per year" cycle in the years to come.
While it looks just like any other AMD processor from a distance, this Trinity APU is based on the Bulldozer x86 architecture (which will see its first release in a CPU-only product later this year) combined with an array of SIMD units (aka Radeon cores) for a CPU/GPU combo. This is the part that will succeed Llano, itself due out in a few short days.
This roadmap shows that a once-a-year cadence will be the norm for AMD going forward, and that AMD plans to introduce an APU for the tablet market sometime in 2012. It will be interesting to see how late to the game AMD is in this arena and whether they can compete with what ARM is doing, or even what Intel will be doing with Medfield.
Subject: Processors, Mobile, Shows and Expos | May 31, 2011 - 02:01 AM | Ryan Shrout
Tagged: ultrabook, Medfield, Ivy Bridge, Intel, haswell, computex, atom
With the release of the Intel Z68 chipset behind us by several weeks, Intel spent the opening keynote at Computex 2011 creating quite a buzz in the mobility section of the computing world. Intel’s Executive Vice President Sean Maloney took the stage on Tuesday and announced what Intel is calling a completely new category of mobile computer, the “Ultrabook”. A term coined by Intel directly, the Ultrabook will “marry the performance and capabilities of today’s laptops with tablet-like features and deliver a highly responsive and secure experience, in a thin, light and elegant design.”
If this photo looks familiar...see the similarity?
Intel is so confident in this new segment of the market, which will fall between the tablet and the notebook, that they are predicting it will represent 40% of Intel’s processor shipments by the end of 2012. That is an incredibly bold claim considering how massive and how dominant Intel is in the processor field. Intel plans to reach this 40% goal by addressing the Ultrabook market in three phases, the first of which begins with ultra-low-power versions of today’s Sandy Bridge processors. Using this technology, Maloney says, we will see notebooks less than 0.8 inches thick for under $1,000.
NCSU Researchers Tweak Core Prefetching and Bandwidth Allocation to Boost Multi-Core Performance by 40%
Subject: Processors | May 27, 2011 - 11:26 AM | Tim Verry
Tagged: processor, multi-core, efficiency, bandwidth, algorithm
With the clock speed arms race now behind us, the world has turned to increasing the number of processor cores to boost performance. As more applications become multi-threaded, CPU core counts have become even more important. In the consumer space, quad- and hexa-core chips are rather popular in the enthusiast segment; on the server side, eight-core chips provide extreme levels of performance.
The way most multi-core processors operate, each CPU core has access to its own cache. (Intel’s current-generation chips actually have three levels of cache, with the third level shared between all cores; that specific arrangement, however, is beyond the scope of this article.) This cache is extremely fast and keeps the processing cores fed with data, which each core then pushes through its assembly-line-esque instruction pipeline(s). The cache is populated with data through a method called “prefetching,” which pulls data for running applications from RAM using predictive algorithms to determine what the processor is likely to need next.

Unfortunately, while these predictive algorithms are usually correct, they sometimes make mistakes; the processor is then not fed from the cache and must look for its data elsewhere. These instances, called stalls, can severely degrade core performance, as the processor must reach past the cache into system memory (RAM) or, worse, the even slower hard drive to find the data it needs. When the processor must reach beyond its on-die cache, it uses the system bus to query the RAM. This processor-to-RAM bus, while faster than reading from a disk drive, is much slower than the cache. Further, processors have a limited amount of bandwidth between the CPU and the RAM, and as the number of cores increases, the share of that bandwidth each core gets is greatly reduced.
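The penalty from stalls can be made concrete with the textbook average-memory-access-time formula, AMAT = hit time + miss rate × miss penalty. The cycle counts below are illustrative, not measurements from any particular CPU:

```python
def amat(hit_time, miss_rate, miss_penalty):
    """Average memory access time in cycles: the cache hit cost plus the
    expected cost of a stall that goes all the way out to RAM."""
    return hit_time + miss_rate * miss_penalty

# Illustrative numbers: a 4-cycle cache hit and a 200-cycle trip to RAM.
print(amat(4, 0.02, 200))  # accurate prefetcher, 2% misses -> 8.0 cycles
print(amat(4, 0.10, 200))  # poor prefetching, 10% misses -> 24.0 cycles
```

A few percentage points of extra misses triples the average access cost in this sketch, which is why prefetch accuracy matters so much to the NCSU work below.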
The layout of a current Sandy Bridge Intel processor. Note the Cache and Memory I/O.
A team of researchers at North Carolina State University has been studying these issues, which are inherent to multi-core processors. The team is part of NC State's Department of Electrical and Computer Engineering and includes Fang Liu and Yan Solihin, who were funded in part by the National Science Foundation. In a paper concluding their research, to be presented June 9th, 2011 at the International Conference on Measurement and Modeling of Computer Systems, they detail two methods for improving on current bandwidth allocation and cache prefetching implementations.
Dr. Yan Solihin, associate professor and co-author of the paper in question, stated that certain processor cores require more bandwidth than others; therefore, by dynamically monitoring the type and amount of data being requested by each core, the available bandwidth can be prioritized on a per-core basis. Solihin further stated that “by better distributing the bandwidth to the appropriate cores, the criteria are able to maximize system performance.”
Further, they analyzed data from the processor's hardware counters and constructed a set of criteria that improve efficiency by dynamically turning prefetching on and off on a per-core basis, which frees up bandwidth for the cores that need it. By implementing both methods, the research team was able to improve multi-core performance by as much as 40 percent versus chips that do not prefetch data, and by 10 percent versus multi-core processors whose cores do prefetch data.
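The team's actual decision criteria won't be public until the paper is released, but a per-core policy driven by hardware counters might look something like this sketch; the counter names and thresholds here are hypothetical:

```python
def should_prefetch(useful_prefetches, total_prefetches, bus_utilization,
                    accuracy_floor=0.5, bus_ceiling=0.9):
    """Hypothetical per-core policy: keep the prefetcher enabled only
    while it is mostly fetching useful data AND the shared memory bus
    still has headroom for its extra traffic."""
    if total_prefetches == 0:
        return True  # no history yet, leave prefetching on
    accuracy = useful_prefetches / total_prefetches
    return accuracy >= accuracy_floor and bus_utilization < bus_ceiling

print(should_prefetch(90, 100, 0.60))  # accurate, bus has headroom -> True
print(should_prefetch(30, 100, 0.60))  # wasting bandwidth          -> False
print(should_prefetch(90, 100, 0.95))  # shared bus saturated       -> False
```

Re-evaluating a rule like this every few million cycles per core would automatically shut off a prefetcher that is polluting the shared bus, which is the intuition behind the reported gains.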
The researchers plan to detail their findings in a paper titled “Studying the Impact of Hardware Prefetching and Bandwidth Partitioning in Chip-Multiprocessors,” which will be publicly available on June 9th. The exact algorithms and criteria they determined will decrease processor stalls and increase bandwidth efficiency should be extremely interesting to analyze. Further, it will be interesting to see whether any of these improvements are implemented by Intel or AMD in future chips.
Subject: Processors | May 25, 2011 - 03:28 PM | Jeremy Hellstrom
Tagged: xeon, server, xeon x7, x7 4870, Intel
AnandTech got their hands on four of the brand new 32nm Intel Xeon E7-4870s, each with 10 cores clocked at 2.4GHz; perhaps a delayed 'tick' but a tick nonetheless. Not only did they test the new chips, they also had a chance to pair them with Load Reduced DIMMs (LR-DIMMs) as opposed to the old Fully Buffered style (FB-DIMMs) we were used to in days gone by. That spells higher capacity, which is good considering the testbed they used can support up to 2TB of RAM to keep the 4 CPUs fed. This is a high-end server part, not really competing against AMD, as a similar Opteron system would cost about half as much with performance reduced by roughly the same proportion. Check out this beast, but keep in mind a single CPU will set you back more than you paid for your whole system.
"Only one year later, Intel is upgrading the top Xeon by introducing Westmere-EX. Shrinking Intel's largest Xeon to 32nm allows it to be clocked slightly higher, get two extra cores, and add 6MB L3 cache. At the same time the chip is quite a bit smaller, which makes it cheaper to produce. Unfortunately, the customer does not really benefit from that fact, as the top Xeon became more expensive. Anyway, the Nehalem-EX was a popular chip, so it is no surprise that the improved version has persuaded 19 vendors to produce 60 different designs, ranging from two up to 256 sockets."
Here are some more Processor articles from around the web:
- Intel Core i5-2390T @ iXBT Labs
- Inexpensive AMD Processor Roundup @ iXBT Labs
- AMD Phenom II X4 980 BE 3.70 GHz @ techPowerUp
- AMD Phenom II X2 560 Black Edition AM3 Processor Review @ eTeknix
- Workstation & Server CPU Comparison Guide @ TechARP
- CPU Performance Comparison Guide @ TechARP
- Intel's Silvermont: A New Atom Architecture @ AnandTech
Recently, AMD launched two new AMD Embedded G-Series APUs (Accelerated Processing Units). The two new chips have TDP ratings of 5.5 and 6.4 watts, which represent up to a 39% improvement in power consumption over the previous iterations. The 361mm² chip package can be used in embedded systems without the need for a fan to cool it. The embedded chips include one or two low-power x86 Bobcat cores and a discrete-class DirectX 11 GPU on a single die.
AMD currently has three systems utilizing the new APUs, including a Pico-ITX form factor computer, a Qseven form factor computer, and a digital sign system. Buddy Broeker, the Director of Embedded Solutions for AMD stated that "today we take the ground-breaking AMD Fusion APU well below 7W TDP and shatter the accepted traditional threshold for across-the-board fanless enablement."
The two new chips are named the T40R and the T40E. Both run at 1.00GHz; the 6.4 watt TDP T40E is a dual-core chip, while the 5.5 watt TDP T40R is a single-core variant. Both chips include an AMD Radeon 6250 GPU, 64KB of L1 cache, and 512KB of L2 cache per CPU core. Further, the chips feature an integrated DDR3 memory controller that can support up to 667MHz solder-down SODIMMs or two DIMM slots. More details on the series as a whole can be found here.
Mobile and embedded processors continue to get smaller and faster. Have you seen any AMD powered embedded technology in your town?
Subject: General Tech, Graphics Cards, Processors | May 22, 2011 - 11:04 PM | Scott Michaud
Tagged: fusion, amd, AFDS
In a little over three weeks’ time AMD will host the AMD Fusion Developer Summit 2011 (AFDS): a three-day conference with the hope of promoting heterogeneous computing amongst developers. Over the years we have increasingly seen potential applications for the parts of your computer outside the standard x86 cores, though much of that exposure has come through NVIDIA’s branding. Building up to the summit, AMD’s DeveloperCentral talked with Lee Howes, parallel computing expert and Member of Technical Staff for Programming Models at AMD, about his upcoming session at AFDS.
I can't get over how much AFDS looks like a diagnosis.
In the short five-question interview, Dr. Howes outlined that the goal of his session is to show developers what to expect, good and bad, from developing for a heterogeneous architecture such as an APU. The rest of the interview discussed what heterogeneous computing looks like today and where it is headed. Topics spanned from the slow perceived uptake of parallel computing in the home to the technological limitations of traditional CPUs that APUs and other heterogeneous computing systems look to bypass.
While AFDS is (as its name says) a developer’s conference, it is very much relevant to end users. Support for developers on newer computing architectures helps fuel the cycle of adoption between software and hardware, which ultimately means a better experience for the rest of us. What tasks would you like to see accelerated by heterogeneous computing? Let us know in the comments below.
Subject: Processors, Chipsets, Systems | May 19, 2011 - 06:10 PM | Jeremy Hellstrom
Tagged: sapphire, ion 2, htpc
At an estimated $450, the Sapphire Edge HD mini PC, powered by a dual-core Atom D510 at 1.66 GHz with ION 2 graphics, is a pretty good deal for those looking for a nettop. With only 250GB of storage you will probably want it connected to a large storage device over Ethernet or USB, though with services like Google Music Beta, Wolfgang's Vault, and YouTube that might not be a problem. From InsideHW's testing you certainly won't have to worry about videos skipping just because your email is open.
"A dual-core CPU that won’t be stricken down by several programs running at the same time, GeForce that chews on any video that you put in front of it, sufficient RAM to make Windows 7 jump around, complete support for all types of video/audio formats and subtitles, and all this for a price of a good Blu-ray player - what else could you wish for?"
Here are some more Systems articles from around the web:
- ECS HDC-I Mini-ITX Fusion Board Review @ Madshrimps
- Gigabyte GA-E350N-USB3 @ iXBT Labs
- ASUS E35M1-I DELUXE @ Tweaktown
- Sony VAIO VPC-L218FX Review @ TechReviewSource
Subject: Editorial, General Tech, Graphics Cards, Processors, Mobile | May 13, 2011 - 06:49 PM | Scott Michaud
Tagged: nvidia, conference call
NVIDIA held their quarterly conference call on May 12th, which consisted of financial results through May 1st and questions from financial analysts and investors. NVIDIA chief executive officer Jen-Hsun Huang projected that future revenue from the GPU market would be “flattish”, revenue from the professional market would be “flattish”, and revenue from the consumer market would be “uppish”. Huang did mention that he believes the GPU market will grow in the future as GPUs become ever more prevalent.
How's the green giant doing this quarter? Read on for details.
For the professional market, NVIDIA discussed their intention to continue providing proof-of-concept applications that show the benefit of GPU acceleration, which they hope will spur development of GPU-accelerated code. Huang repeatedly mentioned that the professional market desires abilities like simultaneous simulation and visualization, and that a 10% code rewrite can increase performance 500-1000%, but current uptake is not as fast as they would like. NVIDIA also hinted that GPUs will be pushed into the server space in the near future but did not clarify what that could mean. NVIDIA could simply be stating that Tesla will continue to be a focus for them; they could also be hinting at applications similar to what we have seen in recent open-sourced projects.
For consumers, Huang made note of their presence in the Android market with their support of Honeycomb 3.1 and the upcoming Ice Cream Sandwich. Questions were posed about the lackluster sales of Tegra tablets, but Huang responded that the first generation of tablets was unattractive largely due to the cost of 3G service. He went on to say that the second wave of tablets will be cheaper and more available in retail stores, with Wi-Fi-only models more accessible to consumers.
nVihhhhhhhhhdia. (Image by Google)
The bulk of the conference call centered on NVIDIA’s purchase of Icera, though not a lot of details were released since the purchase is yet to be finalized. The main point of note: while NVIDIA could integrate Icera’s modems onto their Tegra mobile processors, they have no intention of doing so at this time. They also stated they currently have no intention of jumping into other mobile chip markets such as GPS and near-field communications, citing the lesser significance and greater number of competitors there.
I think the new owners like the color on the logo.
The last point of note from the conference call was that NVIDIA expects Project Denver, their ARM-based processor, to be about two generations away from availability. They noted that they cannot comment for Microsoft, but they reiterated their support for Windows 8 and its introduction of ARM architecture support. The general theme throughout the call was that NVIDIA is confident in their position as a player in the industry; if each of their projects works out as planned, it could be a very well justified attitude.
Subject: Processors, Shows and Expos | May 12, 2011 - 12:22 PM | Jeremy Hellstrom
Tagged: VIA, Nano, quadcore, quad, centaur
VIA has been sitting pretty in a very specific piece of the computer market for quite a while now, but is now being crowded from above by AMD and Intel working their way down to the low-power market from their usual energy-gobbling silicon, and from below by ARM sneaking its performance up from its traditional extremely-low-power market. That competition has spurred VIA to develop first the dual-core Nano X2 and now the QuadCore, a pair of X2s on a single package connected by what was described to The Tech Report as a "side channel" of wiring between them. Still on 40nm, it doesn't represent a completely new design for VIA, more a refinement of what they already produce. Check out their coverage as well as the write-up Josh has finished here.
"Early this year, Via introduced its Nano X2 processor, a dual-core implementation of its Isaiah architecture built on TSMC's 40-nm chip fabrication process. Today, Via is announcing a new product, the QuadCore processor, that combines a pair of Nano X2 chips on a single package to deliver a low-cost, low-power CPU whose position in the market is fairly distinctive.
We visited Via-Centaur's Austin, Texas offices yesterday, where we chatted with Centaur Chief Architect Glenn Henry and Via marketing head Richard Brown. We came away with some fresh details on the QuadCore processor and a better sense of Via's future plans as an intriguing third-place supplier of x86-compatible PC processors."
Here are some more Processor articles from around the web:
- VIA QuadCore Preview & Centaur Tour @ [H]ard|OCP
- VIA's QuadCore: Nano Gets Bigger @ AnandTech
- The Best Budget & Mainstream Processors @ Techspot
- Intel Core i7 990X Extreme Edition Processor Review @ Legit Reviews
- Desktop CPU Comparison Guide Rev. 10.3 @ Tech ARP
- AMD Phenom II X2 555 Black Edition AM3 Processor Review @ eTeknix
- AMD Phenom II X4 980 Black Edition Processor Review @ Hi Tech Legion
Subject: Processors | May 12, 2011 - 08:21 AM | John Davis
Tagged: software, sdk, linux, Intel, developer
Intel has just released an update to their OpenCL (Open Computing Language) SDK (Software Development Kit). With this update Intel has released a 64-bit .rpm package for Linux; previously the SDK only supported Windows. OpenCL is a huge step forward for heterogeneous computing and the future of computers in general. Intel joins a host of vendors that now support OpenCL, including AMD/ATI and NVIDIA.
OpenCL has competitors in the heterogeneous computing realm, including NVIDIA's CUDA and Microsoft's DirectCompute. All of this is one giant step forward for GPGPU. For the majority of computers that have dedicated GPUs, or an Intel processor with on-CPU graphics sitting unused, this is great news! Hopefully future Linux distributions implement OpenCL the way OS X did with Snow Leopard.
Subject: General Tech, Processors | May 12, 2011 - 12:34 AM | Scott Michaud
Tagged: sandy bridge, celeron
Intel has made a splash with their Sandy Bridge parts; despite being mid-range, they keep up with the higher end of the prior generation in many applications. We have heard rumors of new Atom-level parts from Intel deviating from the on-chip GPU structure that Sandy Bridge promotes. What about the next level up? What about Celeron?
I'm guessing less than an i7.
Details were posted to CPU-World about Intel’s upcoming Sandy Bridge-based Celeron processors. There are three variants listed, each supporting Intel’s on-chip GPU. The G440 is a single-core part clocked at 1.6 GHz with a 650 MHz GPU, while the G530 and G540 are both dual-core parts clocked at 2.4 GHz and 2.5 GHz respectively, each with an 850 MHz GPU. The dual-core parts have a 2MB L3 cache; the article is inconsistent on whether the single-core part has 1 or 2 MB of L3 cache, though we will assume 1 MB based on its wording. While GPU performance differs between the single-core and dual-core parts, both GPUs will Turbo Boost to a maximum of 1 GHz as need arises.
Functionally the chips will only contain the bare minimum of Sandy Bridge core features, like 64-bit and virtualization support. There are currently no further details on launch date or pricing. But if you are waiting to upgrade your lower-end devices, rest assured that Sandy B is there for you; at some point, at least.
Subject: Processors, Chipsets, Mobile | May 9, 2011 - 09:07 PM | Tim Verry
Tagged: PowerVR, Intel, gpu, atom
In a surprising move, Intel plans to move away from using its own graphics processors with the next "full fat" Atom processors. Intel has traditionally favored its own graphics chipsets; however, VR-Zone reports that Intel has extended its licensing agreements with PowerVR to include certain GPU architectures.
These GPU licenses will allow Intel to implement a PowerVR SGX545-equivalent graphics core in its Cedarview Atom chips. While the PowerVR graphics core is no match for dedicated GPUs, or likely even the "HD 3000" series found in Intel's own Sandy Bridge chips, the hardware will allow Atom-powered systems to play video with ease thanks to hardware-accelerated decoding of "MPEG-2, MPEG-4 part 2, VC1, WMV9 and the all-important H.264 codec." VR-Zone describes the SGX545 GPU as capable of "40 million triangles/s and 1Gpixels/s using a 64-bit bus" at the chip's original 200MHz.
Intel plans to clock the mobile graphics cores at 400MHz and the desktop graphics cores at 640MHz. The graphics cores will be capable of resolutions up to 1440x900 and support VGA, HDMI 1.3a, and DisplayPort 1.1 connections for video output. VR-Zone also states that the SGX545 supports DirectX 10.1, which means the net-top versions of Atom may be capable of running the Aero desktop smoothly.
This integration by Intel of a GPU capable of hardware video acceleration will certainly make NVIDIA's ION chipsets harder to justify for HTPC usage. ION will likely relinquish market share to cheaper stock Intel Atom platforms for basic home theater computers, but will remain viable in the more specific market using ION + Atom as a light gaming platform in the living room.