Subject: Processors | September 13, 2011 - 09:03 AM | Ryan Shrout
Tagged: video, overclocking, FX, bulldozer, amd
There is a sub-culture in the computing world that is more or less analogous to the world of NHRA drag racing: liquid nitrogen overclocking. And if you are really serious, liquid helium. During a press event in Austin, TX in August to discuss the upcoming Bulldozer processor, a team of overclockers pushed the new architecture to frequencies well beyond safety and well beyond where they should be. Without giving away the whole story yet, AMD was able to set a new frequency world record...
Sami Mäkinen and his team hit 8.429GHz on liquid nitrogen and liquid helium with a near-production FX processor sample. This bests the reigning record of 8.309 GHz that was set on a Celeron processor under LN2.
On August 31, an AMD FX processor achieved a frequency of 8.429GHz, a stunning result for a modern, multi-core processor. The record was achieved after several days of preparation and an inspired run in front of world-renowned technology press in Austin, Texas. This frequency bests the prior record of 8.309GHz and completely blows away any modern desktop processor. Based on our overclocking tests, the AMD FX CPU is a clock-eating monster, temporarily able to withstand extreme conditions to achieve amazing speed. Even with more conservative methods, the AMD FX processors, with the multiplier unlocked throughout the range, appear to scale with cold. We achieved clock frequencies well above 5GHz using only air or sub-$100 water cooling solutions.
I was in attendance for the event and have to say the group put on a spectacular show; anytime you can play with liquid helium at near-absolute-zero temperatures, it's worth paying attention! In fact, I put together a video of the event that you can see below, and if you haven't participated in or seen something of this nature, it is worth checking out!
Now I need to temper some dreams right away - the chances of you or me reaching these kinds of clock speeds on the Bulldozer CPUs at release are pretty close to nil. What was more interesting was the casual overclocking we saw, pushing upwards of 4.8 GHz without breaking a sweat, and that is what we will be investigating in our review of the processor later this year.
Update: Here is the screenshot from the official HWBot frequency rankings as well as a different video created by AMD themselves summarizing the event.
Subject: Editorial, General Tech, Motherboards, Processors, Chipsets | September 12, 2011 - 10:22 AM | Ryan Shrout
Tagged: Intel, idf 2011, idf
It is once again time for our annual pilgrimage to the land of the Golden Gate to spend a few days with our friends at Intel and the Intel Developer Forum. IDF is one of the most informative events that I attend and I am always impressed by the openness and detail with which Intel showcases its upcoming products and future roadmap. This year looks to be no different.
What do we have on the agenda? First and foremost, we expect to hear all about Ivy Bridge and the architectural changes it brings over the Sandy Bridge CPUs currently on the market. Will we see increased x86 performance, or integrated graphics improved enough that we might actually start recommending it? More information is set to be revealed on the 22nm tri-gate transistor as well as the X79 chipset and the Sandy Bridge-E enthusiast platform. SSDs and Ultrabooks are also on the docket. It's going to be busy.
But what would a week in downtown San Francisco be without visits from other companies as well? We are set to meet with Lucid, MSI, ASUS, Gigabyte, Corsair, HP and of course, AMD. I expect we will have just as much to say about what each of these companies has on display as we do Intel's event.
I am planning on live blogging many of the sessions I will be attending so stay tuned to PC Perspective all week for the latest!!
Podcast #169 - SSD Decoder Update, Antec SOLO II, ASUS Eee Pad Transformer, Ultrabook news and a Drobo contest!!
Subject: Editorial, General Tech, Graphics Cards, Processors, Storage, Mobile | September 8, 2011 - 03:23 PM | Ryan Shrout
Tagged: ultrabook, ssd, podcast, eee pad transformer, drobo, decoder, asus, antec
PC Perspective Podcast #169 - 9/08/2011
Join us this week as we discuss the MARS II combo on Newegg, an update to the SSD Decoder, the new Antec SOLO II chassis, our review of the ASUS Eee Pad Transformer tablet, news on Ultrabook development and even announce a new contest partnership with Drobo!
The URL for the podcast is: http://pcper.com/podcast - Share with your friends!
- iTunes - Subscribe to the podcast directly through the iTunes Store
- RSS - Subscribe through your regular RSS reader
- MP3 - Direct download link to the MP3 file
Hosts: Ryan Shrout, Josh Walrath, Jeremy Hellstrom, Allyn Malventano
This Podcast is brought to you by
- 1-888-38-PCPER or firstname.lastname@example.org
- http://twitter.com/ryanshrout and http://twitter.com/pcper
- MARS II Combo for $4000!
- SSD Decoder Update
- Kingwin Stryker 500W Fanless Power Supply Review
- Video Perspective: Antec SOLO II Chassis Review
- ASUS Eee Pad Transformer TF101 Review: Assemble!
- This Podcast is brought to you by MSI Computer, and their all new Sandy Bridge Motherboards!
- Zotac Releases New ZBOX Nano AD10 Series Mini PCs
- Toshiba Unveils Portege Z830 Ultrabook Series
- Bulldozer Infused Trinity APU Specifications Confirmed
- Intel Unveils 16 New 32nm Processors
- AMD Ships Bulldozer for Revenue (Interlagos, that is) - will write up after the podcast and post on the front page
- Magma Unveils the First Three-Slot Thunderbolt Expansion Chassis
- Drobo contest
- Email from Wes about GPU selection
- Email from Chris about GPU whine
- Email from Lee about SSD security
- Email from a mystery writer about GPU stuttering
- Finally, a VIDEO QUESTION from David!
- Hardware / Software Pick of the Week
Subject: Processors | September 7, 2011 - 02:14 PM | Jeremy Hellstrom
Tagged: amd, A4-3400, A4-3300
AMD today announced availability of the AMD A-Series Accelerated Processing Unit (APU) A4-3300 and A4-3400 desktop processors, bringing the entry-level desktop APU price down to just $70 (U.S. suggested retail price) for consumers who want PCs with brilliant HD graphics, advanced performance, and fast application and connectivity speeds.
The AMD A4-3300 and A4-3400 desktop APUs each combine two x86 CPU cores with 160 Radeon cores, enabling powerful DirectX 11-capable discrete-level graphics and dedicated HD video processing on a single chip. These dual-core APUs enable responsive and energy-efficient performance for everyday PC productivity and multitasking, as well as an amazing gaming experience.
In addition to leading-edge graphics and competitive compute power, the AMD A4-3300 and A4-3400 APUs support:
- AMD Steady Video for instant removal of shakes and jitters when watching video, so content looks steady and smooth.
- AMD Dual Graphics for a visual performance boost when paired with select AMD Radeon HD 6000 Series graphics cards.
- Integrated USB 3.0 controller for rapid transfer and storage of digital content.
- AMD VISION Engine Software to provide users with regular updates to help improve system performance and stability, and to introduce new software enhancements.
With a suggested retail price of $70.00 (U.S.), the AMD A4-3300 APU operates at 2.5GHz (CPU) and 444MHz (GPU) with 160 Radeon cores, 1MB of L2 cache and a TDP of 65W.
With a suggested retail price of $75.00 (U.S.), the AMD A4-3400 APU operates at 2.7GHz (CPU) and 600MHz (GPU) with 160 Radeon cores, 1MB of L2 cache and a TDP of 65W.
(No sign of them on NewEgg as of yet)
All AMD A-Series processors are designed for use with FM1 motherboards. AMD A4 APUs require the AMD Vision Engine Control Center 11.8 driver release or later releases.
Subject: Processors | September 5, 2011 - 09:52 PM | Tim Verry
Tagged: sandy bridge, pentium, Intel, cpu, Core, celeron, 32nm
Intel today released a price list that includes 16 new 32nm processors. The new additions fill in gaps in the Celeron, Pentium, and Core product lines, and break down further into desktop and mobile camps. On the desktop front, there are four Celeron models ranging from $37 to $52, three Pentium models ranging from $70 to $86, and four new Core i series processors ranging from $127 to $177. Within that range, there are three hyper-threaded dual-core Core i3 parts and one quad-core Core i5 processor.
The mobile additions include one low end and four high end models. On the low end is the dual-core Celeron B840 at 1.9GHz with 2 MB L3 cache and a 35W TDP. On the high end are four Core i7 chips. The Core i7-2640M is a $346 part: a hyper-threaded dual core at 2.8 GHz with 4 MB L3 cache and a 35W TDP. The Core i7-2760QM is a hyper-threaded quad core at 2.4 GHz with 6 MB L3 cache and a 45W TDP. Also a 45W TDP part, the Core i7-2860QM is a hyper-threaded quad core at 2.5 GHz with 8 MB L3 cache. The highest end mobile addition is the Core i7-2960XM: a hyper-threaded quad core at 2.7 GHz with 8 MB L3 cache and a 55W TDP.
As you can see, there are quite a few new additions filling out the product lineup at various price points and performance segments. See the chart below for the full list and specs.
Desktop:

| Model | Frequency | Cores/Threads | Cache | TDP | Price |
|-------|-----------|---------------|-------|-----|-------|
| Core i5-2320 | 3.0 GHz | 4/4 | 6MB | 95W | $177 |
| Core i3-2130 | 3.4 GHz | 2/4 | 3MB | 65W | $138 |
| Core i3-2125 | 3.3 GHz | 2/4 | 3MB | 65W | $134 |
| Core i3-2120T | 2.6 GHz | 2/4 | 3MB | 35W | $127 |
| Pentium G860 | 3.0 GHz | 2/2 | 3MB | 65W | $86 |
| Pentium G630 | 2.7 GHz | 2/2 | 3MB | 65W | $75 |
| Pentium G630T | 2.3 GHz | 2/2 | 3MB | 35W | $70 |
| Celeron G540 | 2.5 GHz | 2/2 | 2MB | 65W | $52 |
| Celeron G530T | 2.0 GHz | 2/2 | 2MB | 35W | $47 |
| Celeron G530 | 2.4 GHz | 2/2 | 2MB | 65W | $42 |
| Celeron G440 | 1.6 GHz | 1/1 | 1MB | 35W | $37 |

Mobile:

| Model | Frequency | Cores/Threads | Cache | TDP | Price |
|-------|-----------|---------------|-------|-----|-------|
| Core i7-2960XM | 2.7 GHz | 4/8 | 8MB | 55W | $1,096 |
| Core i7-2860QM | 2.5 GHz | 4/8 | 8MB | 45W | $568 |
| Core i7-2760QM | 2.4 GHz | 4/8 | 6MB | 45W | $378 |
| Core i7-2640M | 2.8 GHz | 2/4 | 4MB | 35W | $346 |
| Celeron B840 | 1.9 GHz | 2/2 | 2MB | 35W | $86 |
Subject: Processors | September 3, 2011 - 12:07 AM | Tim Verry
Tagged: trinity, llano, bulldozer, APU, amd
AMD has not only started announcing quite a few future processors, but has also gone a bit crazy with all of the code names for said products. Admittedly, when the news broke that Trinity APU specifications were revealed, I had to do a bit of digging to figure out just what the Trinity APU was (exactly). In the end, the APU (accelerated processing unit) is similar in composition to Llano, except with a Bulldozer-based CPU core and an upgraded GPU. The Bulldozer aspect is what threw me for a bit of a loop, in that I had a difficult time figuring out how the CPU core could be based on Bulldozer when Bulldozer hasn't even been released ;). Hopefully that long introduction helps somewhat in clearing up what Trinity is.
Specifically, the new Trinity APU will debut with AMD's new "Piledriver" (more code names!) architecture, and include a Radeon HD 7000 series GPU and a Bulldozer-based CPU core. Further, the Trinity APU will come in both notebook and desktop flavors, titled "Comal" and "Virgo" respectively. AMD notes that the improvements in the CPU and GPU cores will result in up to a 50% performance increase over the current Llano A-Series APUs. While that 50% number measures pure gigaflop performance, even if the real-world speed increase is less noticeable in everyday usage, it is still a nice bump in performance.
On the availability front, AMD has slated the processor for release in 2012; however, SemiAccurate believes that the APU may well debut much sooner than expected. The site further quoted sources who stated that "CES is a distinct possibility for a soft launch, and maybe more." More tidbits of information can be had here.
Subject: Processors | August 30, 2011 - 12:50 PM | Jeremy Hellstrom
Tagged: a8-3850, amd, llano, overclocking, APU
Legit Reviews decided they really wanted to show the overclocking results you can expect from the AMD A8-3850, so they gathered seven of the chips to test each one's overclocking ability. There have been examples in the past of chips with a wide range of overclocking limits, often, though not always, tied to the chip revision. The test results show that all but two of the chips hit stability issues when pushed beyond 3679.5MHz, so you can take that as the most likely ceiling your own chip will hit. The two outliers are exceptional, one of them in a bad way, as you can see in the full review.
"When AMD released the 'Lynx' desktop platform back in June 2011, our motherboard reviewer ran into some bad luck when overclocking the processor. When you get a new platform setup for the very first time you really don't know what to expect and it does take some time to learn all the quirks and nuances of a new processor and motherboard. We recently ordered in six more processors and then overclocked all seven of them to see what the best one would be for our test system!"
Here are some more Processor articles from around the web:
- AMD A6-3650, A8-3850 APUs @ iXBT Labs
- Desktop CPU Comparison Guide @ TechARP
- AMD A8 3850 A-series ALU @ Metku.net
- Energy-Efficient Processors from Intel Reviewed: Core i5-2500T, Core i5-2390T, Core i3-2100T and Pentium G620T @ X-bit Labs
- All Core i7 Models @ Hardware Secrets
- The Sandy Bridge Pentium Review: G850, G840, G620 & G620T Tested @ AnandTech
- All Core i5 Models @ Hardware Secrets
Subject: Processors | August 22, 2011 - 12:06 PM | Jeremy Hellstrom
Tagged: amd, linux, llano, a8-3850
Phoronix is still satisfying their curiosity about the performance of Llano under Linux. To that end they assembled an A8-3850 with Gigabyte's GA-A75M-UD2H motherboard, 2GB of DDR3 memory, and a 60GB OCZ Vertex 2 SSD, then installed Ubuntu 11.04 64-bit, GNOME 2.32.1, X.Org Server 1.10.1, and an EXT4 file-system. For compilers they had a few choices, but unfortunately the one they were most interested in, AMD's Open64 4.2.4, failed to compile. That left two versions of GCC plus Clang to test across a variety of benchmarks. There is still some work to do to bring all of the power of Llano to Linux, but for now this will give you a good idea of which compiler to use.
"Last week were a set of AMD Fusion A8-3850 Linux benchmarks on Phoronix, but for you this week is a look at the AMD Fusion "Llano" APU performance when trying out a few different compilers. In particular, the latest GCC release and then using the highly promising Clang compiler on LLVM, the Low-Level Virtual Machine."
Here are some more Processor articles from around the web:
- Quick Sandy Bridge vs. AMD Fusion APU Integrated Graphics Comparison @ PCSTATS
- AMD A6-3650 Llano 2.6GHz Quad Core APU Review @Hi Tech Legion
- CPU Performance Comparison Guide @ TechARP
- Desktop CPU Comparison Guide @ TechARP
Subject: Processors | August 22, 2011 - 10:53 AM | Tim Verry
Tagged: mobile, fusion, E-Series, APU, amd
AMD today announced three new Accelerated Processing Units (APUs) to bolster the mobile lineup. Specifically, two new E-Series parts and one new C-Series APU are inserting themselves into the lineup. The new chips bring enhanced graphics capabilities, HDMI 1.4a, and DDR3 1333 support. "Today's PC users want stunning HD graphics and accelerated performance with all-day battery life and that's what AMD Fusion APUs deliver," said Chris Cloran, vice president and general manager, Client Division, AMD.
According to MaximumPC, the new E-450 APU takes the top slot, bringing two CPU cores clocked at 1.65GHz, a Radeon HD 6320 GPU clocked at a base of 508MHz and maximum of 600MHz, and a power sipping TDP of 18 watts. The second new E-Series APU carries the same 18 watt TDP and dual CPU cores as the E-450; however, it is clocked at a lower 1.3GHz. Further, the chip’s Radeon HD 6310 GPU is clocked at 488MHz. The new E-Series APUs feature battery life increases to the tune of up to 10.5 hours of Windows idle time.
The new C-Series APU is the C-60, and is a 1GHz dual core chip with a Radeon HD 6290 GPU. The APU is able to turbo its CPU cores to a maximum of 1.33GHz, while the GPU has a base clock of 276MHz and a maximum clock speed of 400MHz. Further, the chip has a 9 watt TDP, and boasts 12.25 hours of “resting battery life,” which AMD benchmarked using Windows Idle on a C-60 based netbook.
AMD has now shipped more than 12 million APUs in total, including more than five million C-Series and E-Series processors in Q2 2011. More information on the specific benchmarking metrics AMD used can be found here.
Subject: General Tech, Processors | August 20, 2011 - 02:34 PM | Scott Michaud
Tagged: upgrade, Intel
It has been almost a full year since Intel decided to sell $50 upgrade cards for its processors. Ryan noted that the cost difference between the two processors was just $15 (at the time), which made the $35 premium over simply buying the higher-end CPU outright seem quite ludicrous. Whether or not you agree with Intel's methodology is somewhat irrelevant to Intel, however, as the company has relaunched and expanded its initiative to include three SKUs.
DLCpu: Cash for cache!
Ryan deliberately posed the issue as a question because artificially locking down a higher SKU to hit a lower price point really is business as usual for hardware companies. The one thing he did not mention was that this upgrade seems to be designed primarily for processors included in the purchase of a retail PC, where the user might not have had a choice of which processor to include.
As for this upgrade cycle, three processors qualify for the upgrade: the Pentium G622 can be upgraded to the Pentium G693, receiving a clock-rate boost; the Core i3-2102 can be upgraded to the Core i3-2153, receiving a clock-rate boost; and the Core i3-2312M can be upgraded to the Core i3-2393M, receiving both a clock-rate boost and extra unlocked cache. There is no word on whether each SKU will have its own upgrade card, or even on the cost of upgrading beyond the nebulous "affordable". Performance is expected to increase approximately 10-25% depending on which part you upgrade and what task is being pushed upon it, with the Pentium seeing the largest boost from its unlock.
Do you agree with this initiative?
Subject: Processors | August 17, 2011 - 02:13 PM | Tim Verry
Tagged: Intel, sandy bridge, cpu
Intel plans to refresh its entry level and mid-range Sandy Bridge desktop processor lineup with seven new models and accompanying price drops. The new models include the Pentium G630, G630T, and G860 on the low end, and the Core i5 2320 on the high end. Making up the middle ground are the Core i3 2120T, i3 2125, and i3 2130 processors.
CPU-World reports that September and October will both see price reductions in certain Sandy Bridge processor SKUs. September will see price reductions in all mid and low power Core i5 and i7 processors. Specifically, the Core i5 processors will be reduced by as much as $11, while the Core i7-2600S will see a price cut of $12. October will bring price cuts for the low end Pentium and Core i3 processors. The Pentium CPUs will see a price cut of $11 and the Core i3 2120 will be cut by $21.
CPU World has a detailed chart of the individual chip prices which you can check out here. Will these price reductions be enough to entice you to buy into Sandy Bridge, or are you holding off upgrading until Ivy Bridge?
Subject: Processors | August 17, 2011 - 12:03 PM | Tim Verry
Tagged: APU, amd radeon, amd, A6-3500
AMD announced today a new desktop APU (Accelerated Processing Unit). The A6-3500 processor combines three x86 CPU cores with 320 Radeon GPU cores. The new A6-3500 APU comes with a full suite of AMD technology, including Turbo Core, Steady Video image stabilization technology, DDR3 1333 support, HDCP compatibility, and AMD VISION Engine software. Following its predecessors, the new three-core APU is able to pair with select AMD Radeon HD 6000 series discrete graphics cards.
This FM1 socket awaits an A Series APU like the new A6-3500
The three-core APU operates at 2.1GHz (2.4GHz with Turbo Core active) on the CPU side and 444MHz on the GPU side of things. Further, the APU features 3MB of L2 cache, a TDP of 65 watts, and is designed for use with FM1 motherboards.
The APU is now available for purchase at various online retailers and system builders with an MSRP of $95 USD. AMD states that the processor “delivers a compelling, affordable desktop experience for consumers and gamers.”
At under $100, the new APU is an attractive option for HTPC usage and starter gaming systems on a tight budget. For more information on AMD's APU architecture, you can check out PC Perspective's AMD A8-3850 APU review here.
Subject: Processors | August 15, 2011 - 10:45 AM | Tim Verry
Tagged: sandy bridge-e, Intel, hsf, cooling
We reported a few days ago that AMD is considering bundling a sealed loop water cooling solution with its high end FX processors. In an interesting development, VR-Zone today stated that Intel will not be including any cooler at all with its Sandy Bridge-E parts.
Specifically, Intel will not be bundling a processor cooler with its Core i7 Sandy Bridge-E 3820, 3930, or 3960X CPUs. These processors are rated at a 130 watt TDP; however, VR-Zone reports that they may in fact draw as much as 180 watts at stock speeds. This massive jump in power compared to previous models, if true, would make Intel's decision to skip the cooler a good thing, as enthusiasts will almost certainly want a quality third-party air cooler at the least, and a proper water loop if any overclocking is involved. Enthusiasts have generally opted for aftermarket coolers over the included Intel units anyway, as the stock coolers have been notoriously noisy and mediocre performers: decent at stock speeds, but never up to what overclockers demand.
The situation is made all the more interesting when set against AMD's announcement: Intel has opted not to include any heatsink at all, while AMD has opted to ratchet up cooling performance with a sealed water loop. Personally, I find the two companies' reactions, being almost direct opposites, very interesting and telling about each company's mindset. Which solution do you like more? Would you like the chip makers to ratchet up their stock cooling performance, or do you prefer the hands-off approach, where they let you grab the cooler of your choice by not bundling anything in the processor box? Let us know in the comments!
Image Credit: Tim Verry. Used With Permission.
Subject: Cases and Cooling, Processors | August 13, 2011 - 02:53 AM | Tim Verry
Tagged: amd, FX, octocore, water cooling, sealed loop, LCS, hsf
According to Xbit Labs, AMD is considering switching out the usual air cooler (HSF) for a sealed loop liquid cooling solution (LCS) for its high end FX Processors. Specifically, AMD wants to pair their highest end eight core processor (and possibly the next highest end eight core chip) with the sealed loop liquid cooling solution. This information, they believe, comes from a “source with knowledge of the company’s plans.”
If you are not familiar with sealed loop water coolers, the Corsair H70 processor cooler that PC Perspective reviewed last year is a good example. Sealed loop water coolers are similar to large DIY water cooling loops, which comprise a radiator, copper CPU block, pump, and reservoir all connected by tubing; however, they usually have smaller radiators and pumps, along with coolant that cannot be refilled (and should not have to be). The coolant carries heat away from the processor to be dissipated through the radiator. Corsair in particular has invested heavily in this once very niche product with its H series of coolers.
Traditionally, both Intel and AMD have been content to pair their chips with mid-range but cheap air coolers that do a decent job of keeping the processors within their thermal limits at stock speeds. Enthusiasts, and especially those interested in overclocking, have generally ditched the included cooler in favor of a more powerful and/or quieter aftermarket unit. Needless to say, bundling a cooler that never gets used, especially with high end chips that will likely go to enthusiasts, only adds unnecessary cost for both consumers and the manufacturer. Thus, this move to bundle a more powerful sealed loop water cooler with its high end chips may be an attempt by AMD to further appeal to enthusiasts and keep up its traditional image of being friendly to overclockers and hardware enthusiasts. Having a water cooler that is supported by the chip maker certainly doesn't hurt, especially if it ever comes down to warranty and RMA situations. On the other hand, enthusiasts can be very picky about which cooler goes in their systems; therefore, bundling a cooler that is sure to add extra cost to the package may not be the right move for AMD. Consumers are likely to see an extra $50 or so added to the sure-to-be-pricey highest end eight core chips.
The plan, if the report is accurate, surely has merit, but is it wise? Let us know your thoughts in the comments below!
Subject: Processors | August 12, 2011 - 01:35 PM | Jeremy Hellstrom
Tagged: sandybridge, pentium, G850, Intel
Intel has updated the Pentium processor for the Sandy Bridge era with the 32nm G620, G840 and G850, all of which cost under $100. All are rated at a 65W TDP with 3MB of Level 3 cache, an integrated DDR3 memory controller, a PCI Express 2.0 interface, Direct Media Interface 2.0, and Intel HD Graphics 2000. Legit Reviews tested the 2.9GHz G850 model and found no surprises, neither good nor bad. The Pentium line remains the workhorse model, perfect for office usage, web browsing and even watching movies. Those who make movies or want to do more than basic gaming are better off looking at an older LGA1156 processor or a slightly more expensive Intel or AMD chip. If you have a relative who only needs a PC for light duty tasks, consider a system built around one of these new Sandy Bridge Pentiums.
"After trying out both the Intel Pentium G620 and Pentium G850 we must admit that we are still impressed by what these cost effective mainstream processors can do. Thanks to the powerful Intel 'Sandy Bridge' microarchitecture these dual-core processors don't run too far behind the more expensive offerings from Intel and AMD. You can find some pretty good deals on LGA775 and LGA1156 platforms right now, but the Intel Pentium series for LGA1155 has more features and as you could see in the performance tests they weren't that far behind in the benchmarks..."
Here are some more Processor articles from around the web:
- All Core i3 Models @ Hardware Secrets
- Intel Sandybridge 2500k @ XSreviews
- Intel Pentium G620 Sandy Bridge 2.6GHz CPU Review @ Legit Reviews
- Desktop CPU Comparison Guide @ TechARP
- AMD A6-3650 Llano APU Review @ Hardware Canucks
- AMD A6-3650 APU/Processor Review @ TechwareLabs
- AMD Fusion A8-3850 APU "Llano" On Linux @ Phoronix
Subject: Graphics Cards, Processors | August 8, 2011 - 08:28 PM | Tim Verry
Tagged: amd, APU, sdk, opencl
AMD released its new APUs (Accelerated Processing Units) to the masses, and now it is revving the processors up with a new software development kit that increases the performance and efficiency of OpenCL based applications. The new version 2.5 APP SDK is tailored to the APU architecture, where the CPU and GPU are on the same die. Building on the OpenCL standard, APP SDK 2.5 promises to reduce the bandwidth limitation of the CPU-to-GPU connection, allowing effective data transfer rates as high as 15GB per second in AMD's A-Series APUs. Further performance enhancements include reduced kernel launch times and PCIe overhead.
AMD states that the new APP SDK will improve multi-GPU support for AMD APU graphics paired with a discrete card, and will "enable advanced capabilities" to improve the user experience, including gesture based interfaces, image stabilization, and 3D applications.
The new development kit is already being used by developers worldwide in the AMD OpenCL coding competition, where up to $50,000 in prizes will be given away to winning software submissions. You can get started with the SDK here.
Subject: Editorial, Graphics Cards, Processors | August 4, 2011 - 11:15 AM | Ryan Shrout
Tagged: nvidia, john carmack, interview, carmack, amd
A couple of years back we talked on the phone with John Carmack during the period of excitement about ray tracing and game engines. That interview is still one of our most read articles on PC Perspective as he always has interesting topics and information to share. While we are hosting the PC Perspective Hardware Workshop on Saturday at Quakecon 2011, we also scheduled some time to sit with John again to pick his brain on hardware and technology.
If you had a chance to ask John Carmack questions about hardware and technology, either the current sets of each or what he sees coming in the future, what would you ask? Let us know in our comments section below!! (No registration required to comment.)
Subject: Editorial, General Tech, Processors | August 3, 2011 - 02:11 AM | Scott Michaud
Tagged: Netburst, architecture
It is common knowledge that computing power consistently improves over time as dies shrink to smaller processes, clock rates increase, and processors do more and more in parallel. One thing people might not consider: how fast is the actual architecture itself? Think of computing in terms of a factory: you can increase the speed of the conveyor belt and you can add more assembly lines, but just how fast are the workers? There are many ways to increase the efficiency of a CPU: from tweaking the most common instructions or adding new instruction sets that simplify the task itself, to tuning the pipeline length for the proper balance between constantly feeding the CPU upcoming instructions and having to dump and reload the pipe when you go the wrong way down an IF/ELSE statement. Tom's Hardware wondered about this and tested a variety of processors released since 2005, with settings modified so each could use only one core clocked at 3 GHz. Can you guess which architecture failed the most miserably?
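The clock-locked, single-core comparison above boils down to simple arithmetic: at the same frequency and core count, the chip that finishes the same workload in less time is doing more work per cycle. A minimal sketch of that normalization, using made-up completion times (the real figures live in Tom's Hardware's charts):

```python
# Per-clock "architecture efficiency" comparison: lock every CPU to one
# core at the same clock, run the same workload, compare completion times.
# All times below are hypothetical, for illustration only.
CLOCK_HZ = 3.0e9  # every chip locked to 3 GHz

times = {  # hypothetical single-core completion times in seconds
    "Pentium 4 (Netburst)": 412.0,
    "Core 2": 258.0,
    "Sandy Bridge": 197.0,
}

baseline = times["Pentium 4 (Netburst)"]
for cpu, t in times.items():
    cycles = t * CLOCK_HZ       # total cycles the workload consumed
    speedup = baseline / t      # work per clock relative to Netburst
    print(f"{cpu}: {cycles:.3e} cycles, {speedup:.2f}x Netburst efficiency")
```

Since clock and core count are fixed, the speedup ratio is a pure measure of the architecture itself, which is exactly why the test says nothing about real-world performance or power.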
Pfft, who says you ONLY need a calculator?
(Image from Intel)
The Netburst architecture was designed to reach very high clock rates at the expense of heat -- and performance. At the time, the race between Intel and its competitors was clock rate: the higher the clock, the better for marketers, despite a 1.3 GHz Athlon wrecking a 3.2 GHz Celeron in actual performance. If you are in the mood for a little chuckle, this marketing strategy was all destroyed when AMD decided to name its processors "Athlon XP 3200+" and so forth rather than by their actual clock rate. One of the major reasons Netburst was so terrible was branch prediction. Branch prediction is a strategy for speeding up a processor: when you reach a conditional jump from one chunk of code to another, such as "if this is true do that, otherwise do this", you do not know for sure what will come next. Pipelining is a method of loading multiple instructions into a processor to keep it constantly working, and branch prediction says "I think I'll go down this branch" and loads the pipeline assuming that is true; if you are wrong, you need to dump the pipeline and correct your mistake. One way Netburst kept its clock rates high was a ridiculously long pipeline, 2-4x longer than that of the first-generation Core 2 parts which replaced it; unfortunately, the Pentium 4's branch prediction was poor enough to leave the processor perpetually dumping its pipeline.
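The pipeline-flush cost described above can be sketched with a toy model. This is not Netburst's actual predictor, just a classic 2-bit saturating counter guessing a single branch, with the hypothetical assumption that every mispredict costs one full pipeline's worth of stall cycles; the point is that a longer pipe pays proportionally more for the same number of mistakes:

```python
import random

def predict_run(outcomes, flush_penalty):
    """2-bit saturating counter: states 0-1 predict 'not taken', states
    2-3 predict 'taken'. Each mispredict stalls for flush_penalty cycles."""
    state, mispredicts = 2, 0
    for taken in outcomes:
        if (state >= 2) != taken:          # prediction disagreed with reality
            mispredicts += 1
        # nudge the counter toward the actual outcome, saturating at 0 and 3
        state = min(3, state + 1) if taken else max(0, state - 1)
    return mispredicts * flush_penalty      # total stall cycles

random.seed(42)
# a branch that is taken 70% of the time, executed 100,000 times
stream = [random.random() < 0.7 for _ in range(100_000)]

# hypothetical stage counts: the same mispredicts cost a long Netburst-style
# pipe over twice as many stall cycles as a shorter Core-style pipe
for name, depth in [("31-stage pipeline", 31), ("14-stage pipeline", 14)]:
    print(name, predict_run(stream, depth), "stall cycles")
```

The mispredict count is identical in both runs; only the per-flush price changes, which is why shortening the pipeline (and improving prediction) bought Core 2 so much per-clock efficiency.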
The sum of all tests... at least time-based ones.
(Image from Tom's Hardware)
Now that we have excavated Intel’s skeletons to air them out, it is time to bury them again and look at the more recent results. On the AMD side of things, it looks as though there has not been much innovation in efficiency; AMD is only now getting within range of the architectural efficiency Intel had back in 2007 with the first introduction of Core 2. Obviously, efficiency per core per clock means little in the real world, as it tells you neither about the raw performance of a part nor how power efficient it is. Still, it is interesting to see how big a leap Intel made when it walked away from its turkey of an architecture known as Netburst and modeled the future around the Pentium 3 and Pentium M architectures. Lastly, despite the lead, it is interesting to note exactly how much work went into the Sandy Bridge architecture. Intel, despite an already large lead and a focus outside of the x86 mindset, still tightened up its x86 architecture by a very visible margin. It might not be as dramatic as the abandonment of the Pentium 4, but it is still laudable in its own right.
Subject: General Tech, Processors | July 28, 2011 - 06:50 PM | Scott Michaud
Tagged: Sandy Bridge-EP, Intel
Since we got back together with Sandy B we have played a few games, made a couple of home movies together, and travelled around. Now that our extended vacation is over, Sandy has decided it is time to get a job. Sandy B was working part-time as a server and apparently liked the job, because Intel brought her to a job opening in Jaketown. Intel has released details on its server product, Sandy Bridge-EP “Jaketown,” which will debut in Q4 to replace the current server line of up-clocked desktop parts with disabled GPUs.
According to Real World Tech, Intel’s server component will contain up to 8 cores and sport PCI-Express 3.0 and Quick Path Interconnect 1.1. Rumors state that the highest-clocked component will run at up to 3GHz, with the lowest estimated at 2.66GHz. The main components of the CPU will be tied together with a ring bus, although unlike the original Sandy Bridge architecture, the Sandy Bridge-EP ring will be bi-directional. Clock rates of the internal ring are not known, but the bidirectional design should roughly halve the average travelling distance of data. The L3 cache size is not known either, but it is designed to be fast and low latency.
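That "roughly half" claim is easy to sanity-check with a back-of-the-envelope calculation (purely illustrative; the stop count here is arbitrary, not Sandy Bridge-EP's actual ring layout): average the hop count between every pair of stops on a ring, once travelling one way only and once taking the shorter of the two directions.

```python
# Toy calculation: average hop count between two distinct stops on a ring bus,
# unidirectional vs bidirectional (stop count is arbitrary for illustration).

def avg_hops(stops, bidirectional):
    total = pairs = 0
    for src in range(stops):
        for dst in range(stops):
            if src == dst:
                continue
            forward = (dst - src) % stops          # hops going one way around
            hops = min(forward, stops - forward) if bidirectional else forward
            total += hops
            pairs += 1
    return total / pairs

print(avg_hops(8, bidirectional=False))  # one-way ring: longer average trip
print(avg_hops(8, bidirectional=True))   # two-way ring: roughly half the hops
```

For an 8-stop ring the one-way average is 4.0 hops and the two-way average is about 2.3, so "roughly half" holds up.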
Intel looks to be really focusing this SKU on being very efficient at the kinds of workloads servers require. There is no mention of Sandy Bridge-EP containing a GPU, for instance, which should leave more room for highly effective x86 performance; at some point the GPU will become more relevant in the server market, but Intel does not seem to think that day is today. Check out the analysis at Real World Tech for more in-depth information.
Subject: Editorial, General Tech, Graphics Cards, Processors | July 22, 2011 - 08:20 PM | Scott Michaud
Tagged: MLAA, Matrox, Intel
Antialiasing is a difficult task for a computer to accomplish in terms of performance, and many efforts have been made over the years to minimize the impact while keeping as much of the visual appeal as possible. The problem with aliasing is that while the pixel is the smallest unit of display on a computer monitor, it is still large enough for our eyes to see as a distinct unit. But what if two objects of two different colors partially occupy the same pixel -- who wins? In real life, our eye would see the light from both objects hit the same retinal nerve (that is not really how it works biologically, but close enough) and would perceive some blend between the two colors. Intel has released a whitepaper on its attempt at this problem, and it resembles a method that Matrox used almost a decade ago.
Matrox's antialiasing method.
(Image from Tom's Hardware)
Looking at the problem of antialiasing, you want multiple pieces of information to dictate the color of a pixel whenever two objects of different colors partially occupy it. The simplest method is dividing the pixel up into smaller pixels and then crushing them together into an average, which is called supersampling; this means rendering an image at 2x, 4x, or even 16x the resolution you are running at. Later methods, such as multisample antialiasing (MSAA), flag just the edges for antialiasing, since that is where aliasing occurs. In the early 2000s, Matrox looked at the problem from an entirely different angle: since the edge is what really matters, you can find the shape of the various edges and calculate how much of a pixel’s area is divided between each object, giving an effect they claimed was equivalent to 16x MSAA for very little cost. The problem with Matrox’s method: it failed in many cases involving shadowing and pixel shaders… and it came out in the DirectX 9 era. Suffice it to say, it did not save Matrox as an elite gaming GPU company.
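The supersampling idea is simple enough to sketch in a few lines: render at double resolution, then average each 2x2 block down to one pixel. This toy uses grayscale values and is purely illustrative, not any vendor's implementation:

```python
# Toy 2x supersampling: average each 2x2 block of a double-resolution grayscale
# image (values 0-255) down to a single output pixel.

def downsample_2x(image):
    h, w = len(image), len(image[0])
    out = []
    for y in range(0, h, 2):
        row = []
        for x in range(0, w, 2):
            # Crush the four sub-pixels together into one averaged pixel.
            total = image[y][x] + image[y][x + 1] + image[y + 1][x] + image[y + 1][x + 1]
            row.append(total // 4)
        out.append(row)
    return out

# A hard black/white diagonal edge rendered at 2x resolution...
hi_res = [
    [0,   0,   0,   255],
    [0,   0,   255, 255],
    [0,   255, 255, 255],
    [255, 255, 255, 255],
]
# ...downsamples to softened gray steps along the edge, hiding the jaggies.
print(downsample_2x(hi_res))
```

The cost is obvious from the sketch: every output pixel required four rendered sub-pixels, which is exactly why edge-only schemes like MSAA and Matrox's area method were attractive.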
(Both images from Intel Blog)
Intel’s method of antialiasing again looks at the geometry of the image, but instead breaks the edges into L shapes to determine the area they enclose. To keep performance up, Intel pipelines the work between the CPU and GPU, keeping both constantly busy with the target or neighboring frames; in other words, while the CPU performs MLAA on one frame, the GPU is busy preparing and drawing the next. Of course, when I see technology like this I think two things: will it work on architectures with discrete GPUs, and will it introduce extra latency between the rendering code and the gameplay code? I would expect that it must, as one frame is not even finished, let alone drawn to the monitor, before the next set of states is fetched for rendering. The question remains whether that effect will be drowned out by the rest of the latencies involved in synchronizing.
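The payoff of knowing the edge geometry is the final blend step: once you know how much of a pixel each side of an edge covers, the output color is just an area-weighted mix. This is a hypothetical sketch of that coverage-blend idea, not Intel's actual MLAA algorithm:

```python
# Illustrative area-coverage blend, the core idea behind MLAA-style filters
# (a sketch of the concept, not Intel's actual implementation).

def coverage_blend(color_a, color_b, coverage_a):
    """Blend two RGB colors; coverage_a is the fraction of the pixel that
    color_a's object covers, with color_b filling the remainder."""
    return tuple(
        round(a * coverage_a + b * (1.0 - coverage_a))
        for a, b in zip(color_a, color_b)
    )

# An edge whose enclosed area covers a quarter of the pixel: a white object
# over a black background blends to dark gray for that pixel.
print(coverage_blend((255, 255, 255), (0, 0, 0), 0.25))
```

One coverage calculation and one blend per edge pixel is why this class of filter costs so much less than rendering the whole scene at 4x or 16x resolution.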
AMD and NVIDIA both have their own variants of MLAA, NVIDIA’s being called FXAA by its marketing team. Unlike AMD’s method, NVIDIA’s must be programmed into the game engine by the development team, requiring a little extra work on the developer’s part. That said, FXAA has found its way into Duke Nukem Forever as well as the upcoming Battlefield 3, among other games, so support is there, and older games should be easy enough to handle properly.
The flat line is how much time is spent on MLAA itself: just a few milliseconds, and constant.
(Image from Intel Blog)
Performance-wise, the Intel solution is dramatically faster than MSAA, is pretty much scene-independent, and should produce results near the 16x mark thanks to the precision possible when calculating areas. Speculation about latency between the render and game loops aside, the implementation looks quite sound. It lets users with on-processor graphics avoid wasting precious cycles on antialiasing (especially on the class of GPU you would find on-processor) and instead spend them on raising other settings, including the resolution itself, while still avoiding jaggies. Conversely, both AMD’s and NVIDIA’s methods run on the GPU, which makes a little more sense for them, as a discrete GPU should not require as much help as a GPU packed into a CPU.
Could Matrox’s last gasp from the gaming market be Intel’s battle cry?