Checking A8-3850 overclocking capability 7 times

Subject: Processors | August 30, 2011 - 12:50 PM |
Tagged: a8-3850, amd, llano, overclocking, APU

Legit Reviews decided that they really wanted to show the overclocking results you can expect from the AMD A8-3850, so they picked up a total of seven chips and tested each one's overclocking ability.  There have been examples in the past of chips with a wide variety of overclocking limits, often but not always determined by the chip revision.  The test results show that all but two of the chips hit a stability issue when pushed beyond 3679.5MHz, so you can take that as the most likely result your own chip will provide.  The two outliers are exceptional, in one case in a bad way, as you can see in the full review.

BR_a8-3850-apu-tray-1.jpg

"When AMD released the 'Lynx' desktop platform back in June 2011, our motherboard reviewer ran into some bad luck when overclocking the processor. When you get a new platform setup for the very first time you really don't know what to expect and it does take some time to learn all the quirks and nuances of a new processor and motherboard. We recently ordered in six more processors and then overclocked all seven of them to see what the best one would be for our test system!"


Llano on Linux, good but not good enough

Subject: Processors | August 22, 2011 - 12:06 PM |
Tagged: amd, linux, llano, a8-3850

Phoronix is still satisfying their curiosity about the performance of Llano under Linux.  To that end they assembled an A8-3850 on Gigabyte's GA-A75M-UD2H motherboard with 2GB of DDR3 memory and a 60GB OCZ Vertex 2 SSD, then installed Ubuntu 11.04 64-bit with GNOME 2.32.1, X.Org Server 1.10.1, and an EXT4 file-system.  For compilers they had a few choices, but unfortunately the one they were most interested in, AMD's Open64 4.2.4, failed to compile.  That left them with two versions of GCC plus Clang to test in a variety of benchmarks.  There is still some work to do to bring all of the power of Llano to Linux, but for now this will give you a good idea of which compiler to use.

phoronix_llano.png

"Last week were a set of AMD Fusion A8-3850 Linux benchmarks on Phoronix, but for you this week is a look at the AMD Fusion "Llano" APU performance when trying out a few different compilers. In particular, the latest GCC release and then using the highly promising Clang compiler on LLVM, the Low-Level Virtual Machine."


Source: Phoronix

AMD Adds Three New Fusion APUs to Mobile Lineup

Subject: Processors | August 22, 2011 - 10:53 AM |
Tagged: mobile, fusion, E-Series, APU, amd

AMD today announced three new Accelerated Processing Units (APUs) to bolster its mobile lineup; specifically, two new E-Series parts and one new C-Series APU join the range. The new chips bring enhanced graphics capabilities, HDMI 1.4a, and DDR3 1333 support. "Today's PC users want stunning HD graphics and accelerated performance with all-day battery life and that's what AMD Fusion APUs deliver," said Chris Cloran, vice president and general manager, Client Division, AMD.

According to MaximumPC, the new E-450 APU takes the top slot, bringing two CPU cores clocked at 1.65GHz, a Radeon HD 6320 GPU clocked at a base of 508MHz and a maximum of 600MHz, and a power-sipping TDP of 18 watts. The second new E-Series APU carries the same 18 watt TDP and dual CPU cores as the E-450; however, it is clocked at a lower 1.3GHz, and its Radeon HD 6310 GPU runs at 488MHz.  The new E-Series APUs push battery life up to 10.5 hours of Windows idle time.

FusionAPU.jpg

The new C-Series APU is the C-60, and is a 1GHz dual core chip with a Radeon HD 6290 GPU. The APU is able to turbo its CPU cores to a maximum of 1.33GHz, while the GPU has a base clock of 276MHz and a maximum clock speed of 400MHz. Further, the chip has a 9 watt TDP, and boasts 12.25 hours of “resting battery life,” which AMD benchmarked using Windows Idle on a C-60 based netbook.

Currently, AMD has shipped more than 12 million APUs, with more than five million of the C-Series and E-Series processors shipping in Q2 2011 alone. More information on the specific benchmarking metrics AMD used can be found here.

Source: AMD

Intel returns to upgrade cards for more of their crippled parts

Subject: General Tech, Processors | August 20, 2011 - 02:34 PM |
Tagged: upgrade, Intel

It has been almost a full year since Intel decided to sell $50 upgrade cards for their processors. Ryan noted that the cost difference between the two processors was just $15 (at the time), which made the $35 premium over simply purchasing the higher-end CPU outright seem quite ludicrous. Whether or not you agree with Intel's methodology is somewhat irrelevant to Intel, however, as they have relaunched and expanded the initiative to include three SKUs.

intelupgrade.jpg

DLCpu: Cash for cache!

Ryan deliberately posed the issue as a question because it really is business as usual for hardware companies to artificially lock down higher SKUs and sell them at a lower price point. The one thing he did not mention is that this upgrade seems to be designed primarily for processors included in the purchase of a retail PC, where the user might not have had a choice of which processor to include.

As for this upgrade cycle, three processors qualify: the Pentium G622 can be upgraded to the Pentium G693, receiving a clock-rate boost; the Core i3-2102 can be upgraded to the Core i3-2153, also receiving a clock-rate boost; and the Core i3-2312M can be upgraded to the Core i3-2393M, receiving both a clock-rate boost and extra unlocked cache. There is no word on whether each SKU will have its own upgrade card, or even on the cost of upgrading beyond the nebulous "affordable". Performance is expected to increase approximately 10-25% depending on which part you upgrade and what task is being pushed upon it, with the Pentium seeing the largest boost from its unlock.

Do you agree with this initiative?

Source: Intel

Intel Will Drop Prices On Sandy Bridge CPUs in Q3 2011

Subject: Processors | August 17, 2011 - 02:13 PM |
Tagged: Intel, sandy bridge, cpu

Intel plans to refresh its entry level and mid-range Sandy Bridge desktop processor lineup with seven new models and accompanying price drops. The new models include the Pentium G630, G630T, and G860 on the low end, and the Core i5 2320 on the high end. Making up the middle ground are the Core i3 2120T, i3 2125, and i3 2130 processors.

cpu1.jpg

CPU-World reports that both September and October will bring price reductions on certain Sandy Bridge processor SKUs. September covers the mid-range and low-power Core i5 and i7 parts: the Core i5 processors will be reduced by as much as $11, while the Core i7-2600S will see a price cut of $12. October brings cuts for the low-end Pentium and Core i3 processors: the Pentium CPUs will see a price cut of $11 and the Core i3 2120 will be cut by $21.

CPU World has a detailed chart of the individual chip prices which you can check out here.  Will these price reductions be enough to entice you to buy into Sandy Bridge, or are you holding off upgrading until Ivy Bridge?

Source: CPU World

AMD Announces New Sub-$100 Triple Core A6-3500 APU

Subject: Processors | August 17, 2011 - 12:03 PM |
Tagged: APU, amd radeon, amd, A6-3500

AMD today announced a new desktop APU (Accelerated Processing Unit). The A6-3500 processor combines three x86 CPU cores with 320 Radeon GPU cores. The new A6-3500 comes with AMD's full suite of technologies, including Turbo Core, Steady Video image stabilization, DDR3 1333 support, HDCP compatibility, and the AMD VISION Engine software. Like its predecessors, the new triple core APU can pair with select AMD Radeon HD 6000 series discrete graphics cards.

plat00.jpg

This FM1 socket awaits an A Series APU like the new A6-3500

The triple core APU operates at 2.1GHz (2.4GHz with Turbo Core active) on the CPU side and 444MHz on the GPU side of things. Further, the APU features 3MB of L2 cache, a TDP of 65 watts, and is designed for use with FM1 motherboards.

The APU is now available for purchase at various online retailers and system builders with an MSRP of $95 USD. AMD states that the processor “delivers a compelling, affordable desktop experience for consumers and gamers.”

At under $100, the new APU is an attractive option for HTPC usage and starter gaming systems on a tight budget. For more information on AMD's APU architecture, you can check out PC Perspective's AMD A8-3850 APU review here.

Source: AMD

Sandy Bridge-E Processors: Cooler Sold Separately

Subject: Processors | August 15, 2011 - 10:45 AM |
Tagged: sandy bridge-e, Intel, hsf, cooling

We reported a few days ago that AMD is considering bundling a sealed loop water cooling solution with its high end FX processors.  In an interesting development, VR-Zone today stated that Intel will not be including any cooler at all with its Sandy Bridge-E parts.

IntelHSF.jpg

Specifically, Intel will not be bundling any processor cooler with its Core i7 Sandy Bridge-E 3820, 3930, or 3960X CPUs.  These processors are rated at a 130 watt TDP; however, VR-Zone reports that they may in fact be drawing as much as 180 watts at stock speeds.  This massive jump in power compared to previous models, if true, would make Intel's decision to omit a cooler a good thing, as enthusiasts will almost certainly want a quality third-party air cooler at the very least, and a proper water loop if any overclocking is involved.  Enthusiasts have always tended to opt for an aftermarket cooler instead of the included Intel one, which has been notoriously noisy and mediocre in the performance department.  While the stock coolers are decent at stock speeds, overclockers have always demanded more than they could provide.

The situation is made all the more interesting when set against AMD's announcement; Intel has opted to include no heatsink at all, while AMD has opted to ratchet up the cooling performance with a sealed water loop.  Personally, I find the two companies' reactions (almost exact opposites) very interesting and telling about each company's mindset.  Which solution do you like more: would you like the chip makers to ratchet up their stock cooling performance, or do you prefer the hands-off approach where they allow you to grab the cooler of your choice by not bundling anything in the processor box?  Let us know in the comments!

Image Credit: Tim Verry.  Used With Permission.

Source: VR-Zone

AMD Considers Bundling FX Processors With Sealed Loop Water Coolers (LCS)

Subject: Cases and Cooling, Processors | August 13, 2011 - 02:53 AM |
Tagged: amd, FX, octocore, water cooling, sealed loop, LCS, hsf

According to Xbit Labs, AMD is considering swapping the usual air cooler (HSF) for a sealed loop liquid cooling solution (LCS) on its high end FX processors. Specifically, AMD wants to pair its highest end eight core processor (and possibly the next highest end eight core chip) with the sealed loop cooler. Xbit Labs attributes the information to a "source with knowledge of the company's plans."

AMDplusLCS.png

If you are not familiar with sealed loop water coolers, the Corsair H70 processor cooler that PC Perspective reviewed last year is a good example. Sealed loop water coolers are similar to large DIY water cooling loops, which comprise a radiator, copper CPU block, pump, and reservoir all connected in a loop by tubing; however, they usually have smaller radiators and pumps, and use coolant that cannot be refilled (and should not have to be).  This coolant carries heat away from the processor to be dissipated through the radiator.  Corsair in particular has invested heavily in this once very niche product with its H series of coolers.

Traditionally, both Intel and AMD have been content to pair their chips with mid-range but cheap air coolers that do a decent job of keeping the processors within their thermal limits at stock speeds. Enthusiasts, and especially those interested in overclocking, have generally ditched the included cooler in favor of a more powerful and/or quieter aftermarket cooler. Needless to say, including a cooler that is never even used, especially with high end chips that will likely go to enthusiasts, only adds unnecessary cost for both consumers and the manufacturer. Thus, bundling a more powerful sealed loop water cooler with its high end chips may be an attempt by AMD to further appeal to enthusiasts and keep up its traditional image of being friendly to overclockers and hardware enthusiasts. Having a water cooler that is supported by the chip maker certainly doesn't hurt, especially if it ever comes down to warranty and RMA situations. On the other hand, enthusiasts can be very picky about which cooler to use in their systems; bundling one that is sure to add even more cost to the package may therefore not be the right move for AMD. At the very least, consumers are likely to see an extra $50 or so added to the already sure-to-be-pricey highest end eight core chips.

The idea, if the report is accurate, surely has merit, but is it wise? Let us know your thoughts in the comments below!

Source: Xbit Labs

No Intel architecture refresh can be complete without a Pentium model

Subject: Processors | August 12, 2011 - 01:35 PM |
Tagged: sandybridge, pentium, G850, Intel

Intel has updated the Pentium processor for the Sandy Bridge era with the 32nm G620, G840 and G850, all of which cost under $100.   All are rated at a 65W TDP with 3MB of level 3 cache, an integrated DDR3 memory controller, PCI Express 2.0, Direct Media Interface 2.0, and Intel HD Graphics 2000.  Legit Reviews tested the 2.9GHz G850 model and found no surprises, neither good nor bad.  The Pentium line remains the workhorse model, perfect for office use, web browsing and even watching movies.  Those who make movies or want to do more than basic gaming are better off looking at an older LGA1156 processor or a slightly more expensive current Intel or AMD chip.  If you have a relative who only needs a PC for light-duty tasks, consider a system built around one of these new Sandy Bridge Pentiums.

intel-pentium-offerings.jpg

"After trying out both the Intel Pentium G620 and Pentium G850 we must admit that we are still impressed by what these cost effective mainstream processors can do. Thanks to the powerful Intel 'Sandy Bridge' microarchitecture these dual-core processors don't run too far behind the more expensive offerings from Intel and AMD. You can find some pretty good deals on LGA775 and LGA1156 platforms right now, but the Intel Pentium series for LGA1155 has more features and as you could see in the performance tests they weren't that far behind in the benchmarks..."


AMD Accelerates APU OpenCL Performance With New SDK

Subject: Graphics Cards, Processors | August 8, 2011 - 08:28 PM |
Tagged: amd, APU, sdk, opencl

AMD released its new APUs (Accelerated Processing Units) to the masses, and now it is revving the processors up with a new software development kit that increases the performance and efficiency of OpenCL based applications. The new version 2.5 APP SDK is tailored to the APU architecture, where the CPU and GPU sit on the same die. Building on the OpenCL standard, APP SDK 2.5 promises to reduce the bandwidth limitation of the CPU to GPU connection, allowing effective data transfer rates as high as 15GB per second on AMD's A-Series APUs. Further performance enhancements include reduced kernel launch times and lower PCIe overhead.
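
Much of that bandwidth gain comes from not copying data at all when the CPU and GPU share physical memory. As a rough sketch only (our own illustration, not code from the APP SDK, and with the kernel launch omitted), a host program can request a host-visible, zero-copy buffer with CL_MEM_ALLOC_HOST_PTR and map it so both sides touch the same pages:

```c
/* Minimal zero-copy OpenCL host sketch: on an APU, CL_MEM_ALLOC_HOST_PTR lets
 * the runtime place the buffer in memory both the CPU and GPU can reach,
 * avoiding an explicit CPU-to-GPU copy. Error checks trimmed for brevity. */
#include <CL/cl.h>
#include <stdio.h>

int main(void) {
    cl_platform_id platform;
    cl_device_id device;
    cl_int err;

    clGetPlatformIDs(1, &platform, NULL);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL);

    cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, &err);
    cl_command_queue queue = clCreateCommandQueue(ctx, device, 0, &err);

    const size_t n = 1 << 20;
    /* Ask for host-accessible memory instead of a device-only allocation. */
    cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_ALLOC_HOST_PTR,
                                n * sizeof(float), NULL, &err);

    /* Map the buffer and fill it in place on the CPU... */
    float *host = (float *)clEnqueueMapBuffer(queue, buf, CL_TRUE, CL_MAP_WRITE,
                                              0, n * sizeof(float),
                                              0, NULL, NULL, &err);
    for (size_t i = 0; i < n; i++)
        host[i] = (float)i;

    /* ...then unmap; a kernel enqueued here would read the same pages with no
     * separate transfer on an APU. */
    clEnqueueUnmapMemObject(queue, buf, host, 0, NULL, NULL);
    clFinish(queue);

    printf("prepared %zu floats in a zero-copy buffer\n", n);

    clReleaseMemObject(buf);
    clReleaseCommandQueue(queue);
    clReleaseContext(ctx);
    return 0;
}
```

Whether the copy truly disappears depends on the driver and runtime, which is exactly the sort of path the new SDK is meant to streamline.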

APP_V3_longbnr.jpg

AMD states that the new APP SDK will improve multi-GPU support for AMD APU graphics paired with a discrete card, and will "enable advanced capabilities" to improve the user experience, including gesture based interfaces, image stabilization, and 3D applications.

The new development kit is already being used by developers worldwide in the AMD OpenCL coding competition, where up to $50,000 in prizes will be given away to the winning software submissions. You can get started with the SDK here.

Source: AMD

John Carmack Interview: Question and Topic suggestions?

Subject: Editorial, Graphics Cards, Processors | August 4, 2011 - 11:15 AM |
Tagged: nvidia, john carmack, interview, carmack, amd

A couple of years back we talked on the phone with John Carmack during the period of excitement about ray tracing and game engines.  That interview is still one of our most read articles on PC Perspective as he always has interesting topics and information to share.  While we are hosting the PC Perspective Hardware Workshop on Saturday at Quakecon 2011, we also scheduled some time to sit with John again to pick his brain on hardware and technology.

carmack1.jpg

If you had a chance to ask John Carmack questions about hardware and technology, either the current sets of each or what he sees coming in the future, what would you ask?  Let us know in our comments section below!! (No registration required to comment.)

Source: PCPer

Yes, Netburst really was that bad: CPU architectures tested

Subject: Editorial, General Tech, Processors | August 3, 2011 - 02:11 AM |
Tagged: Netburst, architecture

It is common knowledge that computing power consistently improves over time as dies shrink to smaller processes, clock rates increase, and processors do more and more things in parallel. One thing people might not consider: how fast is the actual architecture itself? Think of computing in terms of a factory: you can speed up the conveyor belt and you can add more assembly lines, but just how fast are the workers? There are many ways to increase the efficiency of a CPU, from tweaking the most common instructions or adding new instruction sets so a task can be simplified, to tuning the pipeline length to balance keeping the CPU constantly fed with upcoming instructions against having to dump and reload the pipe when you go the wrong way down an IF/ELSE statement. Tom’s Hardware wondered the same thing and tested a variety of processors released since 2005, each configured to use only one core and clocked at a fixed 3 GHz. Can you guess which architecture failed the most miserably?

intel4004-2.jpg

Pfft, who says you ONLY need a calculator?

(Image from Intel)

The Netburst architecture was designed to reach very high clock rates at the expense of heat -- and performance. At the time, the race between Intel and its competitors was about clock rate: the higher the clock the better it was for marketers, despite a 1.3 GHz Athlon wrecking a 3.2 GHz Celeron in actual performance. If you are in the mood for a little chuckle, this marketing strategy fell apart when AMD decided to name their processors “Athlon XP 3200+” and so forth rather than by their actual clock rate. One of the major reasons that Netburst was so terrible was branch prediction. Pipelining is a method of loading multiple instructions into a processor to keep it constantly working, and branch prediction is a strategy used to keep that pipeline full: when you reach a conditional jump from one chunk of code to another, such as “if this is true do that, otherwise do this”, you do not know for sure what will come next. Branch prediction says “I think I’ll go down this branch” and loads the pipeline assuming that is true; if the guess is wrong, you need to dump the pipeline and correct the mistake. One way that Netburst kept its clock rates high was by having a ridiculously long pipeline, 2-4x longer than that of the first generation Core 2 parts which replaced it; unfortunately, the Pentium 4's branch prediction was terrible, leaving the processor perpetually stuck dumping its pipeline.
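
To get a feel for how costly mispredictions are on any CPU, here is a small toy benchmark of our own (not part of Tom's Hardware's test suite, and best compiled without aggressive optimizations that turn the branch into a conditional move): the loop is identical in both runs, but once the data is sorted the branch becomes predictable, the pipeline rarely needs to be flushed, and the second run finishes much faster.

```c
/* Toy demonstration of branch prediction: summing values above a threshold is
 * much faster on sorted data because the branch outcome becomes predictable. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

static long sum_over_threshold(const int *data, size_t n) {
    long sum = 0;
    for (size_t i = 0; i < n; i++) {
        if (data[i] >= 128)           /* the branch the predictor must guess */
            sum += data[i];
    }
    return sum;
}

static int cmp_int(const void *a, const void *b) {
    return *(const int *)a - *(const int *)b;
}

int main(void) {
    const size_t n = 1 << 22;
    int *data = malloc(n * sizeof(int));
    if (!data)
        return 1;
    for (size_t i = 0; i < n; i++)
        data[i] = rand() % 256;       /* random values: the branch is a coin flip */

    clock_t t0 = clock();
    long a = sum_over_threshold(data, n);        /* unpredictable branch */
    clock_t t1 = clock();

    qsort(data, n, sizeof(int), cmp_int);        /* sorted: the branch flips once */
    clock_t t2 = clock();
    long b = sum_over_threshold(data, n);        /* predictable branch */
    clock_t t3 = clock();

    printf("unsorted: sum=%ld in %.3fs\n", a, (double)(t1 - t0) / CLOCKS_PER_SEC);
    printf("sorted:   sum=%ld in %.3fs\n", b, (double)(t3 - t2) / CLOCKS_PER_SEC);
    free(data);
    return 0;
}
```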

Toms-conclusion.png

The sum of all tests... at least time-based ones.

(Image from Tom's Hardware)

Now that we have excavated Intel’s skeletons and aired them out, it is time to bury them again and look at the more recent results. On the AMD side of things, it looks as though there has not been much progress in per-clock efficiency; AMD is only now getting within range of the architectural efficiency Intel had back in 2007 with the first introduction of Core 2. Obviously efficiency per core per clock means little in the real world, as it tells you neither the raw performance of a part nor how power efficient it is. Still, it is interesting to see how big a leap Intel made when it walked away from its turkey of an architecture known as Netburst and modeled the future around the Pentium 3 and Pentium M architectures. Lastly, despite the lead, it is interesting to note exactly how much work went into the Sandy Bridge architecture. Intel, despite an already large lead and a focus outside of the x86 mindset, still tightened up its x86 architecture by a very visible margin. It might not be as dramatic as the abandonment of the Pentium 4, but it is still laudable in its own right.

Ya Rly. Intel releasing new CPUs for servers

Subject: General Tech, Processors | July 28, 2011 - 06:50 PM |
Tagged: Sandy Bridge-EP, Intel

Since we got back together with Sandy B we have played a few games, made a couple of home movies together, and done some travelling. Now that our extended vacation is over, Sandy has decided it is time to get a job. Sandy B was working part-time as a server and apparently liked the job, because Intel brought her to a job opening in Jaketown. Intel has released details on its server product, Sandy Bridge-EP “Jaketown”, which will debut in Q4 to replace the current server line of up-clocked desktop parts with disabled GPUs.

11-intel.png

NO WAI!!!!!

According to Real World Tech, Intel’s server component will contain up to 8 cores and sport PCI-Express 3.0 and Quick Path Interconnect 1.1. Rumors state that the highest-clocked component will run at up to 3GHz, with the lowest estimated at 2.66GHz. The main components of the CPU will be tied together with a ring bus, although unlike the original Sandy Bridge architecture the Sandy Bridge-EP ring will be bi-directional. Clock rates of the internal ring are not known, but the bidirectional design should cut the average distance data has to travel roughly in half. The L3 cache size is also unknown, but it is designed to be fast and low latency.
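
A quick back-of-the-envelope check on that claim (our own math, not Real World Tech's): on a ring with N equally spaced stops and a uniformly random destination, going one way around costs about N/2 hops on average, while being free to pick the shorter of the two directions cuts that to roughly N/4.

```latex
% Average hop count on a ring of N stops, destination chosen uniformly:
\bar{h}_{\text{one-way}} = \frac{1}{N-1}\sum_{k=1}^{N-1} k = \frac{N}{2},
\qquad
\bar{h}_{\text{bidirectional}} = \frac{1}{N-1}\sum_{k=1}^{N-1} \min(k,\,N-k) \approx \frac{N}{4}
```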

Intel looks to be really focusing this SKU down to be very efficient for the kinds of processes that servers require. There is no mention of the Sandy Bridge-EP containing a GPU, for instance, which should leave more options for highly effective x86 performance; at some point the GPU will become more relevant in the server market but Intel does not seem to think that today is that day. Check out the analysis at Real World Tech for more in-depth information.

Intel MLAA: Matrox had the right idea, wrong everything else

Subject: Editorial, General Tech, Graphics Cards, Processors | July 22, 2011 - 08:20 PM |
Tagged: MLAA, Matrox, Intel

Antialiasing is a difficult task for a computer to accomplish in terms of performance, and many efforts have been made over the years to minimize the impact while keeping as much of the visual appeal as possible. The problem with aliasing is that while a pixel is the smallest unit of display on a computer monitor, it is still large enough for our eye to see it as a distinct unit. Two objects of different colors may, however, each partially occupy the same pixel; which one wins? In real life, our eye would see the light from both objects hit the same retinal nerve (that is not really how it works biologically, but close enough) and perceive some blend between the two colors. Intel has released a whitepaper on its attempt at this problem, and it resembles a method that Matrox used almost a decade ago.
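
Written as a formula (a generic coverage blend, not something lifted from Intel's whitepaper), if one object covers a fraction α of the pixel and the other covers the remainder, the ideal pixel color is just the area-weighted average of the two:

```latex
% Area-weighted blend of two colours sharing one pixel, with coverage alpha:
C_{\text{pixel}} = \alpha\, C_A + (1 - \alpha)\, C_B, \qquad 0 \le \alpha \le 1
```

In one way or another, the antialiasing schemes discussed below are all attempts to estimate that coverage value cheaply.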

MatroxAA.jpg

Matrox's antialiasing method.

(Image from Tom's Hardware)

Looking at the problem of antialiasing, you want multiple pieces of information to dictate the color of a pixel whenever two objects of different colors both partially occupy it. The simplest method is dividing the pixel up into smaller sub-pixels and then crushing them together into an average, which is called super sampling; it means you are rendering an image at 2x, 4x, or even 16x the resolution you are running at. Other methods followed, including flagging just the edges for antialiasing, since that is where aliasing occurs. In the early 2000s, Matrox looked at the problem from an entirely different angle: since the edge is what really matters, you can find the shape of the various edges and see how much of a pixel's area is divided between each object, giving an effect they claimed was equivalent to 16x MSAA for very little cost. The problem with Matrox’s method: it failed in many cases involving shadowing and pixel shaders… and it came out in the DirectX 9 era. Suffice it to say, it did not save Matrox as an elite gaming GPU company.
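
The "crushing together" step of super sampling is nothing more exotic than a box filter over each block of sub-pixels. Here is a minimal grayscale sketch of our own in C (illustrative only, not anyone's shipping implementation):

```c
/* Box-filter downsample for super sampling: the high-resolution image 'hi' is
 * (w*factor) x (h*factor) pixels, and each output pixel in 'lo' is the average
 * of a factor x factor block of sub-pixels. Grayscale floats to keep it short. */
#include <stddef.h>

void downsample_box(const float *hi, float *lo, size_t w, size_t h, size_t factor) {
    const size_t hi_w = w * factor;              /* width of the oversized render */
    for (size_t y = 0; y < h; y++) {
        for (size_t x = 0; x < w; x++) {
            float sum = 0.0f;
            for (size_t sy = 0; sy < factor; sy++)
                for (size_t sx = 0; sx < factor; sx++)
                    sum += hi[(y * factor + sy) * hi_w + (x * factor + sx)];
            /* The average of the sub-samples becomes the antialiased pixel. */
            lo[y * w + x] = sum / (float)(factor * factor);
        }
    }
}
```

With factor = 2 this corresponds to the "4x" case above and factor = 4 to "16x", which is exactly why super sampling gets expensive so quickly.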

37399.png

37400.png

Look familiar?

(Both images from Intel Blog)

Intel’s method of antialiasing again looks at the geometry of the image, but instead breaks the edges into L shapes to determine the area they enclose. To keep performance up, the work is pipelined between the CPU and GPU, keeping both constantly busy with the target or neighboring frames. In other words, while the GPU hands a frame to the CPU to perform MLAA, the GPU is busy preparing and drawing the next frame. Of course, when I see technology like this I think of two things: will it work on architectures with discrete GPUs, and will it introduce extra latency between the rendering code and the gameplay code? I would expect that it must, as the frame is not even finished, let alone drawn to the monitor, before the next set of states is fetched to be rendered. The question remains whether that effect will be drowned out by the rest of the latencies involved in synchronizing.
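
A rough way to picture that scheduling is the stub loop below (entirely our own sketch; render_frame_on_gpu, mlaa_on_cpu and present are made-up names, not Intel's API). The GPU is always one frame ahead of the CPU-side filter, which is precisely where the extra frame of latency speculated about above would come from:

```c
/* Conceptual frame pipeline: while the CPU filters frame N, the GPU is already
 * rendering frame N+1. All three helpers are stand-in stubs, not a real API. */
#include <stdio.h>

typedef struct { int index; } Frame;

static Frame frames[2];                          /* two frames in flight */

static Frame *render_frame_on_gpu(int i) {       /* stub: GPU draws frame i */
    frames[i % 2].index = i;
    printf("GPU renders frame %d\n", i);
    return &frames[i % 2];
}

static void mlaa_on_cpu(Frame *f) {              /* stub: CPU filters a frame */
    printf("CPU runs MLAA on frame %d\n", f->index);
}

static void present(Frame *f) {                  /* stub: frame reaches the screen */
    printf("    frame %d is presented\n", f->index);
}

int main(void) {
    const int num_frames = 4;
    Frame *in_flight = render_frame_on_gpu(0);   /* GPU starts on frame 0 */
    for (int i = 1; i <= num_frames; i++) {
        /* Kick off the next frame on the GPU (if there is one)... */
        Frame *next = (i < num_frames) ? render_frame_on_gpu(i) : NULL;
        /* ...while the CPU filters and presents the previous frame. */
        mlaa_on_cpu(in_flight);
        present(in_flight);
        in_flight = next;
    }
    return 0;
}
```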

AMD and NVIDIA both have their own variants of MLAA, NVIDIA's having been dubbed FXAA by its marketing team. Unlike AMD's method, NVIDIA's must be programmed into the game engine by the development team, requiring a little extra work on the developer's part. That said, FXAA has found its way into Duke Nukem Forever as well as the upcoming Battlefield 3, among other games, so support is there, and older games should be easy enough to handle properly.

37407.png

The flat line shows how much time is spent on MLAA itself: just a few milliseconds, and constant.

(Image from Intel Blog)

Performance-wise, the Intel solution is ridiculously faster than MSAA, is pretty much scene-independent, and should produce results near the 16x mark thanks to the precision possible when calculating areas. Speculation about latency between the render and game loops aside, the implementation looks quite sound and lets users with on-processor graphics avoid wasting precious cycles (especially on the sort of GPU you find on-processor) on antialiasing, spending them instead on raising other settings, including resolution itself, while still avoiding jaggies. Conversely, both AMD's and NVIDIA's methods run on the GPU, which makes a little more sense for them, as a discrete GPU should not require as much help as a GPU packed into a CPU.

Could Matrox’s last gasp from the gaming market be Intel’s battle cry?

(Registration not required for commenting)

Source: Intel Blog

Overclockers Achieve Impressive Llano Overclocking Results, Come Close to 5GHz

Subject: Processors | July 18, 2011 - 11:15 PM |
Tagged: superpi, overclocking, LN2, llano, APU, amd, a8-3850

In a feat of overclocking prowess, the crew over at Akiba have managed to push the AMD Llano A8-3850 to its limits, achieving a Super PI 32M time of 14 minutes and 17.5 seconds at an impressive 4.75GHz. Using a retail A8-3850 APU, a Gigabyte GA-A75-UD4H motherboard, and a spine-chilling amount of liquid nitrogen, the Japanese overclocking team came very close to breaking the 5GHz barrier.

ssoc1.jpg

Just how close did they come? 4906.1MHz on a base clock of 169.2MHz, to be exact, which is mighty impressive. Unfortunately, the APU had to undergo some severe electroshock therapy at 1.792 volts! Further, the 4.9GHz clock speed was not stable enough for a valid Super PI 32M result, hence the need to run the benchmark at 4.75GHz.

The extreme cooling ended up causing issues with the motherboard once the team tried to swap the A8-3850 out for the A6-3650, so they switched to an Asus F1A75-V PRO motherboard. With the A6-3650, they achieved an overclock of 4.186GHz at a base clock of 161MHz and a voltage of 1.428V. The overclockers said they regretted having to retire the Gigabyte board, as they believed it would have let them push the A6-3650 APU higher thanks to its ability to adjust voltage further.

vsoc2.jpg

Although they did not break the 5GHz barrier, they still achieved an impressive 69% overclock on the A8-3850 and a 61% overclock on the A6-3650. For comparison, here are PC Perspective's not-APU-frying overclocking results. At default clock speeds of 2.9GHz and 2.6GHz respectively, the A8-3850 and A6-3650 seem to have a good deal of headroom when it comes to bumping up CPU performance. If you have a good aftermarket cooler, Llano starts to make a bit more sense, as 3.2GHz on air and 3.6GHz on water are within reach.  How do you feel about Llano?
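
For the record, those percentages fall straight out of the clock speeds quoted above:

```latex
% Overclock headroom relative to the stock clocks:
\frac{4906.1\ \text{MHz}}{2900\ \text{MHz}} \approx 1.69 \;\Rightarrow\; \text{a 69\% overclock},
\qquad
\frac{4186\ \text{MHz}}{2600\ \text{MHz}} \approx 1.61 \;\Rightarrow\; \text{a 61\% overclock}
```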

Source: Akiba

Intel Sandy Bridge-E Processors Just In Time For Christmas But With Some Features Removed

Subject: Processors | July 16, 2011 - 12:54 AM |
Tagged: sandy bridge-e, processor, Intel, cpu

It seems as though Intel is running into a slew of snags as it attempts to push out its Sandy Bridge-E processors and the accompanying X79 chipset motherboards. While it was previously thought that the Sandy Bridge-E processors would not be available until at least January 2012, VR-Zone is reporting that the CPUs may actually be out in time for Christmas this year; however, they will ship with a reduced feature set. The X79 chipset that underpins the Sandy Bridge-E platform will also be released with features removed. While Intel may reintroduce the removed features in later iterations of the silicon, the first-run components will have PCI-Express 3.0 and four SATA/SAS 6Gbps ports removed.  Further, Intel is waiting an extra CPU revision before it begins shipping processors out to board partners for their testing: the C-1 stepping instead of the C-0.

SBE_Ebay_Photo.jpg

In the case of PCI-E 3.0 support, Intel has had trouble testing its engineering silicon with PCI-E 3.0 cards and is not confident enough to integrate the feature into production chips at this time. Due to the lack of widely available PCI-E 3.0 add-in cards, losing support for the standard is not that large of a loss in the short term, but it will certainly affect the platform's future-proofing value. The removal of the SATA ports is due to storage issues that have yet to be detailed.

While new technology is always welcome, one can't help but feel that delaying the new processors and motherboards until the silicon is ready (and contains the planned features) might be better for consumers. The board and investors likely do not agree, however. In any case, Sandy Bridge-E and X79 are coming; it is just a question of how they arrive.

Source: VR-Zone

SandyBridge graphics performance showdown; Linux versus Win7

Subject: Processors | July 13, 2011 - 05:27 PM |
Tagged: linux, windows, ostc, SBNA, sandybridge

In the first showdown that Phoronix ran, the Linux driver for Intel's HD3000 iGPU beat out the Win7 driver handily.  That win was due to the OSTC Linux engineers at Intel doing a bang-up job on the Linux drivers while the Windows team lagged behind a bit.  A few months have passed and the laggards on the Windows team have since released a major update to their drivers, prompting Phoronix to repeat the test.  Unfortunately for them, the Linux team has also released improvements, specifically "Sandy Bridge New Acceleration".  Can the Windows team take back the lead, or should you switch to OpenGL games on Linux? Read on to see.

phrx_pengwin.jpg

"The new benchmarks going out today on Phoronix are looking at the performance of Intel's Sandy Bridge graphics with the latest Microsoft Windows 7 and Ubuntu Linux drivers. Not only are we using the very latest drivers, but there is also a separate Linux test run with SNA, the "Sandy Bridge New Acceleration" architecture enabled."


Source: Phoronix

Video Perspective: AMD A-series APU Dual Graphics Technology Performance

Subject: Graphics Cards, Processors | July 13, 2011 - 02:13 PM |
Tagged: llano, dual graphics, crossfire, APU, amd, a8-3850, 3850

Last week we posted a short video about the gaming performance of AMD's Llano core A-series of APUs, and the response was so positive that we have decided to continue with some other short looks at the processor's features and technologies.  For this video we decided to investigate the advantages and performance of the Dual Graphics technology - the AMD APU's ability to combine the performance of a discrete GPU with the Radeon HD 6550D graphics integrated on the A8-3850 APU.

For this test we set our A8-3850 budget gaming rig to the default clock speeds and settings and used an AMD Radeon HD 6570 1GB as our discrete card of choice.  With a price hovering around $70, the HD 6570 would be a modest purchase for a user that wants to add some graphical performance to their low-cost system but doesn't stretch into the market of the enthusiast.

The test parameters were simple: we knew the GPU on the Radeon HD 6570 was a bit better than that of the A8-3850 APU so we compared performance of the discrete graphics card ALONE to the performance of the system when enabling CrossFire, aka Dual Graphics technology.  The results are pretty impressive:

You may notice that these scaling percentages are higher than those we found in our first article about Llano on launch day.  The reason is that we used the Radeon HD 6670 there and found that, while compatible according to AMD's guidelines, the HD 6670 overpowers the HD 6550D GPU on the APU, so the performance delta Dual Graphics provides is smaller by comparison.

So, just as we said with our APU overclocking video, while adding in a discrete card like the HD 6570 won't turn your PC into a gaming machine built around a $300 graphics card, it will definitely improve performance by worthwhile amounts without anyone feeling like they are wasting the silicon on the A8-3850.

Source: AMD

A PC Macbook Air: Can Intel has?

Subject: General Tech, Processors, Systems | July 10, 2011 - 02:45 AM |
Tagged: Intel, ultrabook

Intel has been trying to push for a new classification of high-end, thin, and portable notebooks to offset the netbook flare-up of recent memory. Intel hopes that by the end of 2012 these "Ultrabooks" will make up 40% of consumer notebook sales. What is the issue? They are expected to retail in the $1,000 range, which is enough for consumers to buy both a dual-core laptop with 4 GB of RAM and a tablet. Intel is not fazed by this and has even gone to the effort of offering money to companies willing to develop these Ultrabooks; the OEMs are fazed, however, and even with Intel's pressing there is only one, the ASUS UX21, slated for release in September.

Asus sticking its neck out. (Video by Engadget)

For the launch, Intel created three processors based on the Sandy Bridge architecture: the i5-2557M, the i7-2637M, and the i7-2677M. At just 17 watts apiece, these processors should do a lot on Intel's end to support the Ultrabook promise of long battery life and an ultra-thin case, given the reduced need for heat dissipation. Intel also has two upcoming Celeron processors, likely the same ones we reported on two months ago. Intel has a lot to worry about when it comes to competition for its Ultrabook platform, though; AMD will have products that appeal to a similar demographic at half the price, and tablets might just eat up much of the rest of the market.

Do you have a need for a thousand dollar ultraportable laptop? Will a tablet not satisfy that need?

(Registration not required for commenting)

Source: ZDNet

Video Perspective: AMD A-series APU Overclocking and Gaming Performance

Subject: Graphics Cards, Motherboards, Processors | July 6, 2011 - 08:15 PM |
Tagged: amd, llano, APU, a-series, a8, a8-3850, overclocking

We have spent quite a bit of time with AMD's latest processor, the A-series of APUs previously known as Llano, but something we didn't cover in the initial review was how overclocking the A8-3850 APU affected gaming performance for the budget-minded gamer.  Wonder no more!

In this short video we took the A8-3850 and pushed the base clock frequency from 100 MHz to 133 MHz, overclocking the CPU from 2.9 GHz to 3.6 GHz while also pushing the GPU frequency from 600 MHz up to 798 MHz.  All of the clock rates (including CPU, GPU, memory and north bridge) are derived from that base frequency, so overclocking on the AMD A-series can be pretty simple, provided the motherboard vendors supply the multiplier options to go with it.  We tested systems based on a Gigabyte and an ASRock motherboard, both with very good results to say the least.
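
The arithmetic behind those clocks is simple: every domain runs at the base clock times its own multiplier, so raising the base from 100 MHz to 133 MHz lifts everything at once. The GPU line below works out exactly from the stock 6x ratio; the CPU multiplier shown is our inference (roughly 27x), since keeping the stock 29x at a 133 MHz base would have landed near 3.86 GHz rather than the 3.6 GHz quoted above.

```latex
% Base clock x multiplier (CPU multiplier inferred, not quoted above):
f_{\text{GPU}} = 133\ \text{MHz} \times 6 = 798\ \text{MHz},
\qquad
f_{\text{CPU}} \approx 133\ \text{MHz} \times 27 \approx 3.6\ \text{GHz}
```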

We tested 3DMark11, Bad Company 2, Lost Planet 2, Left 4 Dead 2 and Dirt 3 to give us a quick overall view of performance increases.  We ran the games at 1680x1050 resolutions and "Medium"-ish quality settings to find a base frame rate on the APU of about 30 FPS.  Then we applied our overclocked settings to see what gains we got.  Honestly, I was surprised by the results.

While overclocking a Llano-based gaming rig won't make it compete with $200 graphics cards, getting a nice 30% boost in performance is basically a no-brainer for a budget-minded gamer who is any kind of self-respecting PC enthusiast.

Source: AMD