Subject: Graphics Cards, Processors | August 8, 2011 - 08:28 PM | Tim Verry
Tagged: amd, APU, sdk, opencl
AMD released its new APUs (Accelerated Processing Units) to the masses, and now it is revving the processors up with a new software development kit that increases the performance and efficiency of OpenCL-based applications. The new version 2.5 APP SDK is tailored to the APU architecture, where the CPU and GPU sit on the same die. Building on the OpenCL standard, APP SDK 2.5 promises to reduce the bandwidth limitation of the CPU-to-GPU connection, allowing effective data transfer rates as high as 15 GB per second on AMD's A-Series APUs. Further performance enhancements include reduced kernel launch times and PCIe overhead.
AMD states that the new APP SDK will improve multi-GPU support for AMD APU graphics paired with a discrete card, and will “enable advanced capabilities” to improve the user experience, including gesture-based interfaces, image stabilization, and 3D applications.
The new development kit is currently being used by developers worldwide in the AMD OpenCL coding competition, where up to $50,000 in prizes will be given away to winning software submissions. You can get started with the SDK here.
Subject: Editorial, Graphics Cards, Processors | August 4, 2011 - 11:15 AM | Ryan Shrout
Tagged: nvidia, john carmack, interview, carmack, amd
A couple of years back we talked on the phone with John Carmack during the period of excitement about ray tracing and game engines. That interview is still one of our most-read articles on PC Perspective, as he always has interesting topics and information to share. While we are hosting the PC Perspective Hardware Workshop on Saturday at QuakeCon 2011, we have also scheduled some time to sit down with John again and pick his brain on hardware and technology.
If you had a chance to ask John Carmack questions about hardware and technology, either the current state of each or what he sees coming in the future, what would you ask? Let us know in our comments section below! (No registration required to comment.)
Subject: Editorial, General Tech, Processors | August 3, 2011 - 02:11 AM | Scott Michaud
Tagged: Netburst, architecture
It is common knowledge that computing power consistently improves over time as dies shrink to smaller processes, clock rates increase, and processors do more and more in parallel. One thing people might not consider: how fast is the actual architecture itself? Think of the problem of computing in terms of a factory. You can increase the speed of the conveyor belt and you can add more assembly lines, but just how fast are the workers? There are many ways to increase the efficiency of a CPU: from tweaking the most common instructions, or adding new instruction sets that let the task itself be simplified; to tuning the pipeline length for the proper balance between constantly loading the CPU with upcoming instructions and needing to dump and reload the pipe when you go the wrong way down an IF/ELSE statement. Tom’s Hardware wondered about this and tested a variety of processors released since 2005, with their settings modified so each could only use one core clocked at 3 GHz. Can you guess which architecture failed the most miserably?
Pfft, who says you ONLY need a calculator?
(Image from Intel)
The Netburst architecture was designed to reach very high clock rates at the expense of heat -- and performance. At the time, the race between Intel and its competitors was about clock rate: the higher the clock, the better it was for marketers, despite a 1.3 GHz Athlon wrecking a 3.2 GHz Celeron in actual performance. If you are in the mood for a little chuckle, this marketing strategy fell apart when AMD decided to name its processors by performance rating, “Athlon XP 3200+” and so forth, rather than by actual clock rate. One of the major reasons Netburst was so terrible was branch prediction. Branch prediction is a strategy for speeding up a processor: when it reaches a conditional jump from one chunk of code to another, such as “if this is true do that, otherwise do this,” it does not know for sure what will come next. Pipelining is a method of loading multiple commands into a processor to keep it constantly working. Branch prediction says, “I think I’ll go down this branch,” and loads the pipeline assuming that is true; if the guess is wrong, the processor needs to dump the pipeline and correct the mistake. One way Netburst kept clock rates high was a ridiculously long pipeline, 2-4x longer than the first generation of Core 2 parts that replaced it; unfortunately, the Pentium 4's branch prediction was terrible, leaving the processor perpetually stuck dumping its pipeline.
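The cost of bad prediction on a long pipeline can be illustrated with a toy simulator. This is a generic 2-bit saturating-counter predictor, a textbook design rather than a model of the Pentium 4's actual hardware, and the stage counts in the comments are the commonly cited approximate figures:

```python
# Toy 2-bit saturating-counter branch predictor (a textbook design, not
# Intel's actual hardware). States 0-1 predict "not taken", 2-3 "taken".
# Every miss forces a pipeline flush, so the same miss rate costs more
# on a longer pipe -- Netburst's ~31 stages vs. Core 2's ~14.

def mispredictions(outcomes):
    state = 2  # start weakly "taken"
    misses = 0
    for taken in outcomes:
        if (state >= 2) != taken:
            misses += 1
        # saturating update toward the observed outcome
        state = min(3, state + 1) if taken else max(0, state - 1)
    return misses

# A loop branch (taken 99 times, then falls through) predicts almost perfectly:
loop_branch = [True] * 99 + [False]
# A data-dependent branch that strictly alternates defeats the predictor:
alternating = [i % 2 == 0 for i in range(100)]

print(mispredictions(loop_branch))   # 1 miss out of 100
print(mispredictions(alternating))   # 50 misses out of 100
```

Multiply each miss by the pipeline depth and the gap between a 14-stage and a 31-stage design on branchy code becomes obvious.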
The sum of all tests... at least time-based ones.
(Image from Tom's Hardware)
Now that we have excavated Intel’s skeletons and aired them out, it is time to bury them again and look at the more recent results. On the AMD side of things, it looks as though there has not been much innovation on the efficiency front; AMD is only now getting within range of the architectural efficiency Intel had back in 2007 with its first introduction of Core 2. Obviously, efficiency per core per clock means little in the real world, as it tells you neither about the raw performance of a part nor how power efficient it is. Still, it is interesting to see how big a leap Intel made away from its turkey of an architecture known as Netburst, modeling the future around the Pentium III and Pentium M architectures instead. Lastly, despite the lead, it is interesting to note exactly how much work went into the Sandy Bridge architecture. Intel, despite an already large lead and a focus outside the x86 mindset, still tightened up its x86 architecture by a very visible margin. It might not be as dramatic as the abandonment of the Pentium 4, but it is still laudable in its own right.
Subject: General Tech, Processors | July 28, 2011 - 06:50 PM | Scott Michaud
Tagged: Sandy Bridge-EP, Intel
Since we got back together with Sandy B we have played a few games, made a couple of home movies together, and gone travelling. Now that our extended vacation is over, Sandy has decided it is time to get a job. Sandy B was working part-time as a server and apparently liked the job, because Intel brought her to a job opening in Jaketown. Intel has released details on its server product, Sandy Bridge-EP “Jaketown,” which will debut in Q4 to replace the current server line of up-clocked desktop parts with disabled GPUs.
According to Real World Tech, Intel’s server component will contain up to 8 cores and sport PCI Express 3.0 and QuickPath Interconnect 1.1. Rumors state that the highest-clocked part will run at up to 3 GHz, with the lowest estimated at 2.66 GHz. The main components of the CPU will be tied together with a ring bus, although unlike the original Sandy Bridge architecture, the Sandy Bridge-EP ring will be bi-directional. Clock rates of the internal ring are not known, but the bidirectional design should cut the travelling distance of data roughly in half on average. The L3 cache size is not known either, but it is designed to be fast and low latency.
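A quick back-of-the-envelope model shows why a bidirectional ring roughly halves the average distance; the stop counts here are arbitrary examples, not Intel's actual ring configuration:

```python
# Average number of ring stops a message travels on an N-stop ring.
# One direction: every hop count from 1 to N-1 is equally likely.
# Two directions: the message takes the shorter way around.

def avg_hops_one_way(n):
    return sum(range(1, n)) / (n - 1)

def avg_hops_two_way(n):
    return sum(min(d, n - d) for d in range(1, n)) / (n - 1)

for stops in (8, 16):
    one, two = avg_hops_one_way(stops), avg_hops_two_way(stops)
    print(f"{stops} stops: {one:.2f} hops one-way, {two:.2f} bidirectional")
```

For 8 stops the averages work out to 4.00 versus about 2.29 hops, so "half on average" is a fair approximation.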
Intel looks to be really focusing this SKU on being very efficient at the kinds of processes that servers require. There is no mention of Sandy Bridge-EP containing a GPU, for instance, which should leave more room for highly effective x86 performance; at some point the GPU will become more relevant in the server market, but Intel does not seem to think that today is that day. Check out the analysis at Real World Tech for more in-depth information.
Subject: Editorial, General Tech, Graphics Cards, Processors | July 22, 2011 - 08:20 PM | Scott Michaud
Tagged: MLAA, Matrox, Intel
Antialiasing is a difficult task for a computer to accomplish in terms of performance, and many efforts have been made over the years to minimize the impact while keeping as much of the visual appeal as possible. The problem with aliasing is that while a pixel is the smallest unit of display on a computer monitor, it is large enough for our eye to see as a distinct unit. You may, however, have two objects of two different colors partially occupying the same pixel: who wins? In real life, our eye would see the light from both objects hit the same spot on the retina (that is not really how it works biologically, but close enough) and would perceive some blend between the two colors. Intel has released a whitepaper on its attempt at this problem, and it resembles a method that Matrox used almost a decade ago.
Matrox's antialiasing method.
(Image from Tom's Hardware)
Looking at the problem of antialiasing, you want multiple bits of information to dictate the color of a pixel whenever two objects of different colors both partially occupy it. The simplest method is dividing the pixel up into smaller pixels and then crushing them together into an average, which is called supersampling. This means you are rendering an image at 2x, 4x, or even 16x the resolution you are running at. More methods followed, including flagging just the edges for antialiasing, since that is where aliasing occurs. In the early 2000s, Matrox looked at the problem from an entirely different angle: since the edge is what really matters, you can find the shape of the various edges and work out how the area of a pixel is divided up between the objects, giving an effect they claimed was equivalent to 16x MSAA for very little cost. The problem with Matrox’s method: it failed in many cases involving shadowing and pixel shaders… and came out in the DirectX 9 era. Suffice it to say, it did not save Matrox as an elite gaming GPU company.
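Supersampling as described above is simple enough to sketch in a few lines; greyscale values stand in for full RGB pixels here:

```python
# Minimal 4x supersampling sketch: render at twice the width and height,
# then average every 2x2 block down to one display pixel.

def downsample_2x(hires):
    out = []
    for y in range(0, len(hires), 2):
        row = []
        for x in range(0, len(hires[0]), 2):
            avg = (hires[y][x] + hires[y][x + 1] +
                   hires[y + 1][x] + hires[y + 1][x + 1]) / 4
            row.append(avg)
        out.append(row)
    return out

# A hard black/white edge running through a pixel comes out as a blend:
print(downsample_2x([[0, 255],
                     [0, 255]]))  # [[127.5]]
```

The cost is the obvious one: four (or sixteen) rendered samples for every pixel you actually display.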
(Both images from Intel Blog)
Intel’s method of antialiasing again looks at the geometry of the image, but instead breaks the edges into L shapes to determine the area they enclose. To keep performance up, the work is pipelined between the CPU and GPU, keeping both constantly filled with the target or neighboring frames. In other words, while the CPU performs MLAA on one frame, the GPU is busy preparing and drawing the next. Of course, when I see technology like this I think two things: will it work on architectures with discrete GPUs, and will it introduce extra latency between the rendering code and the gameplay code? I would expect that it must, as one frame is not even finished, let alone drawn to the monitor, before the next set of states is fetched for rendering. The question remains whether that effect will be drowned out by the rest of the latencies involved in synchronizing.
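Reduced to a single pixel, the area-coverage idea shared by Matrox's scheme and Intel's MLAA looks something like this (a conceptual sketch, not code from Intel's whitepaper):

```python
# Area-weighted blending: if an edge leaves a fraction `coverage` of the
# pixel on the foreground object's side, the final color is the two
# colors mixed by area -- no extra subsamples rendered at all.

def blend(foreground, background, coverage):
    return tuple(f * coverage + b * (1 - coverage)
                 for f, b in zip(foreground, background))

# White object covering a quarter of a black pixel:
print(blend((255, 255, 255), (0, 0, 0), 0.25))  # (63.75, 63.75, 63.75)
```

Because coverage is a continuous fraction rather than a count of discrete subsamples, the result can be finer-grained than even 16x supersampling, which is where the "equivalent to 16x MSAA" claim comes from.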
AMD and NVIDIA both have their own variants of MLAA, the latter's dubbed FXAA by NVIDIA's marketing team. Unlike AMD's method, NVIDIA's must be programmed into the game engine by the development team, requiring a little extra work on the developer's part. That said, FXAA has found its way into Duke Nukem Forever as well as the upcoming Battlefield 3, among other games, so support is there, and older games should be easy enough to handle properly.
The flat line shows how much time is spent on MLAA itself: just a few milliseconds, and constant.
(Image from Intel Blog)
Performance-wise, the Intel solution is dramatically faster than MSAA, is pretty much scene-independent, and should produce results near the 16x mark thanks to the precision possible when calculating areas. Speculation about latency between the render and game loops aside, the implementation looks quite sound and lets users with on-processor graphics avoid wasting precious cycles (especially on the class of GPU you find on-processor) on antialiasing, spending them instead on other settings, including resolution itself, while still avoiding jaggies. Conversely, both AMD's and NVIDIA's methods run on the GPU, which makes a little more sense for them, as a discrete GPU should not require as much help as a GPU packed into a CPU.
Could Matrox’s last gasp from the gaming market be Intel’s battle cry?
(Registration not required for commenting)
Subject: Processors | July 18, 2011 - 11:15 PM | Tim Verry
Tagged: superpi, overclocking, LN2, llano, APU, amd, a8-3850
In a feat of overclocking prowess, the crew over at Akiba managed to push the AMD Llano A8-3850 to its limits, achieving a Super PI 32M score of 14 minutes and 17.5 seconds at an impressive 4.75 GHz. Using a retail A8-3850 APU, a Gigabyte GA-A75-UD4H motherboard, and a spine-chilling amount of liquid nitrogen, the Japanese overclocking team came very close to breaking the 5 GHz barrier.
Just how close did they come? 4,906.1 MHz with a base clock of 169.2 MHz, to be exact, which is mighty impressive. Unfortunately, the APU had to undergo some severe electroshock therapy at 1.792 volts! Further, the 4.9 GHz clock speed was not stable enough for a valid Super PI 32M result, hence the need to run the benchmark at 4.75 GHz.
The extreme cooling ended up causing issues with the motherboard once the team tried to switch out the A8-3850 for the A6-3650, so they swapped in an Asus F1A75-V PRO motherboard. With the A6-3650, they achieved an overclock of 4.186 GHz with a base clock of 161 MHz and a voltage of 1.428 V. The overclockers stated that they regretted having to retire the Gigabyte board, as they believed it would have allowed them to push the A6-3650 APU higher thanks to its greater voltage-adjustment range.
Although they did not break the 5 GHz barrier, they were still able to achieve an impressive 69% overclock on the A8-3850 and a 61% overclock on the A6-3650 APU. For comparison, here are PC Perspective’s not-APU-frying overclocking results. At default clock speeds of 2.9 GHz and 2.6 GHz respectively, the A8-3850 and A6-3650 seem to have a good deal of headroom when it comes to bumping up CPU performance. If you have a good aftermarket cooler, Llano starts to make a bit more sense, as 3.2 GHz on air and 3.6 GHz on water are within reach. How do you feel about Llano?
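Those percentages check out against the clocks quoted above; a quick sanity check:

```python
# Overclock gain as a percentage of the stock clock.
def overclock_pct(stock_ghz, oc_ghz):
    return round((oc_ghz / stock_ghz - 1) * 100)

print(overclock_pct(2.9, 4.906))  # A8-3850: 69
print(overclock_pct(2.6, 4.186))  # A6-3650: 61
```

Note the 69% figure uses the 4.9 GHz peak rather than the 4.75 GHz clock the stable benchmark run required.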
Subject: Processors | July 16, 2011 - 12:54 AM | Tim Verry
Tagged: sandy bridge-e, processor, Intel, cpu
It seems as though Intel is running into a slew of snags as it attempts to push out its Sandy Bridge-E processors and the accompanying X79 chipset motherboards. While it was previously thought that the Sandy Bridge-E processors would not be available until at least January 2012, VR-Zone is reporting that the CPUs may actually be out in time for Christmas this year; however, they will ship with a reduced feature set. The X79 chipset that powers the Sandy Bridge-E processors will also be released with a reduced feature set. While Intel may reintroduce the removed features in later iterations of the silicon, the first-run components will have PCI Express 3.0 and four SATA/SAS 6 Gbps ports removed. Further, Intel is waiting an extra CPU revision, the C-1 stepping instead of the C-0, before it begins shipping the processors out to board partners for their testing.
In the case of PCI-E 3.0 support, Intel has had trouble testing its engineering silicon with PCI-E 3.0 cards and is not confident enough to integrate the feature into its production chips at this time. Due to the lack of widely available PCI-E 3.0 add-in cards, support for the standard is not that large of a loss in the short term, but it will certainly affect the components' future-proofing value. The removal of the SATA ports is due to storage issues that have yet to be detailed.
While new technology is always welcome, one can't help but feel that delaying the new processors and motherboards until the silicon is ready (and contains the planned features) might be better for consumers. The board and investors likely do not agree, however. In any case, Sandy Bridge-E and X79 are coming; it is just a question of how they arrive.
Subject: Processors | July 13, 2011 - 05:27 PM | Jeremy Hellstrom
Tagged: linux, windows, ostc, SBNA, sandybridge
In the first showdown that Phoronix tried, the Linux driver for Intel's HD 3000 iGPU beat out the Windows 7 driver handily. That win was due to the OSTC Linux engineers at Intel doing a bang-up job on the Linux drivers while the Windows team lagged behind a bit. A few months have passed, and the laggards on the Windows team have since released a major update to their drivers, prompting Phoronix to repeat the test. Unfortunately for them, the Linux team has also released improvements, specifically "Sandy Bridge New Acceleration". Can the Windows team retake the lead, or should you switch to OpenGL games on Linux? Read on to see.
"The new benchmarks going out today on Phoronix are looking at the performance of Intel's Sandy Bridge graphics with the latest Microsoft Windows 7 and Ubuntu Linux drivers. Not only are we using the very latest drivers, but there is also a separate Linux test run with SNA, the "Sandy Bridge New Acceleration" architecture enabled."
Here are some more Processor articles from around the web:
- CPU Performance Comparison Guide Rev. 5.7 @ TechARP
- Intel Core i3-2120 3.3GHz Sandy Bridge Processor Review @ Legit Reviews
- AMD A8-3850 Llano @ LostCircuits
- Overclocking AMD’s A8-3850 Llano APU @ Overclockers.com
Subject: Graphics Cards, Processors | July 13, 2011 - 02:13 PM | Ryan Shrout
Tagged: llano, dual graphics, crossfire, APU, amd, a8-3850, 3850
Last week we posted a short video about the performance of AMD's Llano core A-series of APUs for gaming and the response was so positive that we have decided to continue on with some other short looks at features and technologies with the processor. For this video we decided to investigate the advantages and performance of the Dual Graphics technology - the AMD APU's ability to combine the performance of a discrete GPU with the Radeon HD 6550D graphics integrated on the A8-3850 APU.
For this test we set our A8-3850 budget gaming rig to the default clock speeds and settings and used an AMD Radeon HD 6570 1GB as our discrete card of choice. With a price hovering around $70, the HD 6570 would be a modest purchase for a user that wants to add some graphical performance to their low-cost system but doesn't stretch into the market of the enthusiast.
The test parameters were simple: we knew the GPU on the Radeon HD 6570 was a bit better than that of the A8-3850 APU so we compared performance of the discrete graphics card ALONE to the performance of the system when enabling CrossFire, aka Dual Graphics technology. The results are pretty impressive:
You may notice that these scaling percentages are higher than those we found in our first article about Llano on launch day. The reason is that we used the Radeon HD 6670 there and found that, while compatible per AMD's directives, the HD 6670 overpowers the HD 6550D GPU on the APU, so the performance delta it provides is smaller by comparison.
So, just as we said with our APU overclocking video: while adding a discrete card like the HD 6570 won't turn your PC into a $300-graphics-card-centered gaming machine, it will definitely help performance by worthwhile amounts, without anyone feeling like they are wasting the silicon on the A8-3850.
Subject: General Tech, Processors, Systems | July 10, 2011 - 02:45 AM | Scott Michaud
Tagged: Intel, ultrabook
Intel has been trying to push for a new classification of high-end, thin, and portable notebooks to offset the netbook flare-up of recent memory. Intel hopes that by the end of 2012, these “Ultrabooks” will comprise 40% of consumer notebook sales. What is the issue? They are expected to retail in the $1,000 range, which is enough for consumers to buy both a dual-core laptop with 4 GB of RAM and a tablet. Intel is not fazed by this and has even gone to the effort of offering money to companies willing to develop these Ultrabooks; the OEMs are fazed, however, and even with Intel’s pressing there is only one, the ASUS UX21, slated for release in September.
Asus sticking its neck out. (Video by Engadget)
For the launch, Intel created three processors based on the Sandy Bridge architecture: the i5-2557M, the i7-2637M, and the i7-2677M. At just 17 watts, these processors should do a lot on Intel’s end to support the Ultrabook branding of long battery life and an ultra-thin case, given the lessened need for heat dissipation. Intel also has two upcoming Celeron processors, likely the same ones we reported on two months ago. Intel has a lot to worry about when it comes to competition for its Ultrabook platform, though; AMD will have products that appeal to a similar demographic for half the price, and tablets might just eat up much of the rest of the market.
Do you have a need for a thousand dollar ultraportable laptop? Will a tablet not satisfy that need?
(Registration not required for commenting)
Subject: Graphics Cards, Motherboards, Processors | July 6, 2011 - 08:15 PM | Ryan Shrout
Tagged: amd, llano, APU, a-series, a8, a8-3850, overclocking
We have spent quite a bit of time with AMD's latest processor, the A-series of APUs previously known as Llano, but something we didn't cover in the initial review was how overclocking the A8-3850 APU affected gaming performance for the budget-minded gamer. Wonder no more!
In this short video we took the A8-3850, pushed the base clock frequency from 100 MHz to 133 MHz, and overclocked the CPU from 2.9 GHz to 3.6 GHz while also pushing the GPU frequency from 600 MHz up to 798 MHz. All of the clock rates (including CPU, GPU, memory, and north bridge) are derived from that base frequency, so overclocking the AMD A-series can be pretty simple, provided the motherboard vendors supply the multiplier options to go with it. We tested systems based on a Gigabyte and an ASRock motherboard, both with very good results, to say the least.
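Since every domain clock is the base clock times a multiplier, a bumped base clock lifts them all at once. The multipliers below are inferred from the clocks quoted above, not published AMD figures:

```python
# derived clock (MHz) = base clock (MHz) * multiplier
def derived_mhz(base, multiplier):
    return base * multiplier

print(derived_mhz(100, 6))   # stock GPU clock: 600 MHz
print(derived_mhz(133, 6))   # same multiplier at 133 MHz base: 798 MHz
print(derived_mhz(100, 29))  # stock CPU clock: 2900 MHz (2.9 GHz)
print(derived_mhz(133, 27))  # lowered CPU multiplier: 3591 MHz (~3.6 GHz)
```

Notice the GPU multiplier can stay put while the CPU multiplier comes down a couple of notches to keep the final clock in range, which is exactly why the multiplier options matter.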
We tested 3DMark11, Bad Company 2, Lost Planet 2, Left 4 Dead 2, and DiRT 3 to get a quick overall view of the performance increases. We ran the games at 1680x1050 and "Medium"-ish quality settings to establish a base frame rate on the APU of about 30 FPS, then applied our overclocked settings to see what gains we got. Honestly, I was surprised by the results.
While overclocking a Llano-based gaming rig won't make it compete against $200 graphics cards, getting a nice 30% boost in performance for a budget-minded gamer is basically a no-brainer if you are any kind of self-respecting PC enthusiast.
Subject: Processors | July 6, 2011 - 04:41 PM | Tim Verry
Tagged: llano, APU, amd
Newegg recently opened up its Llano inventory to consumers, with both the A6-3650 and A8-3850 now in stock. The new AMD APUs combine AMD graphics and a CPU on one chip, and are an interesting option for low-cost systems, from budget gaming machines using integrated graphics to small form factor HTPC builds.
Are you ready for Llano? Why not join the discussion over in the forums and weigh in on whether Llano is deserving of Hardware Leaderboard status?
Subject: Processors, Chipsets, Mobile | July 6, 2011 - 04:09 PM | Tim Verry
Tagged: WTI, VIA, S3 Graphics, htc
Low-power x86 processor maker VIA Technologies today announced that it is selling off the entirety of its stake in S3 Graphics to popular phone manufacturer HTC. Having acquired S3 Graphics in 2001, VIA planned to integrate graphics capabilities into its processors and chipsets. In 2005, S3 Graphics became undercapitalized, and VIA brought in WTI, a private investment company, to fund operations and R&D initiatives. Cher Wang, the chairman of VIA, is a “significant shareholder.”
Under the agreement, all of VIA’s shares in S3 Graphics are worth $300 million. VIA will receive $147 million while WTI will receive $153 million. Of the $147 million, VIA will recognize a capital gain of $37 million and a paid-in-capital of $115 million.
The Senior Vice President and Board Director of VIA, Tzu-mu Lin, stated that “The Transaction would allow VIA to monetize a portion of its rich IP portfolio, yet retain its graphics capabilities to support the development and sale of its processors and chipsets.” The transaction is subject to approvals from the board directors of VIA, WTI, and HTC and is expected to close before the end of the year.
HTC seems to be interested in acquiring graphics IP, which raises the question of whether the phone manufacturer is planning to design its own S3-based graphics for the ARM chips in its future phones. What do you think of the deal?
Subject: Processors | July 6, 2011 - 12:53 PM | Jeremy Hellstrom
Tagged: llano, APU, amd
Llano is still very active in the news as reviewers try to pin down exactly what the capabilities of a true APU are: what it does well and what it does not. Most reviewers have discovered that AMD's offering is relatively weak at current-generation general computation and absolutely amazing as an integrated GPU. Part of the weakness in computational tasks seems to stem from the scarcity of programs that can take advantage of multi-core processors and the almost complete lack of GPU-accelerated programs... that are not graphical in nature. X-bit Labs takes a very in-depth look at the modified Stars core called Husky and the Sumo graphics portion, which resembles Redwood.
"Desktop Lynx platform that includes hybrid Llano processors has finally found its way to the consumers. Let’s take a closer look at it and find out how successful the combination of old Stars processor cores and a high-performance Radeon GPU actually is."
Here are some more Processor articles from around the web:
- AMD A8-3850 2.9GHz Llano APU Review @ Legit Reviews
- A8-3850 vs. Core i3-2100 CPU Review @ Hardware Secrets
- AMD A8-3850 (Llano) APU Video Performance Examined @ Tweaktown
- AMD A8-3850 Lynx APU Processor @ Benchmark Reviews
- Mobile CPU Comparison Guide @ TechARP
Subject: Processors | June 30, 2011 - 12:20 PM | Jeremy Hellstrom
Tagged: lynx, llano, igp, amd, a8-3850, 6550d, 3850
Long story short, the new AMD A8-3850 simply can't compete with Intel's Sandy Bridge processors as an x86 CPU, but as an integrated GPU it is better than anything we or The Tech Report have seen before.
The actual story is far more complicated for the Llano true quad-core processor. On the CPU side of the APU equation, it can handle the Core i3-2100, its closest competition, in the majority of multithreaded tasks, though it falls behind in single-threaded applications. The price war is also on AMD's side, as you would need to pair a discrete GPU with the i3-2100 in order to match the graphics performance. The other very important area where AMD falls behind is power consumption; sure, at idle it uses very little power, but when operating at full speed it consumes almost as much as an i7-2600.
On the GPU side we see better gaming performance than anything else out there, assuming you stick to DX10 and DX11 games, as DX9 games can have some issues with Llano. That holds especially true of Hybrid CrossFire: when Ryan paired the A8-3850 with discrete Radeon cards, he ran into difficulties in some games. You can read about that in his full review.
"AMD's "Llano" APU makes a compelling proposition as a laptop chip, but its position on the desktop is more precarious. Read on to find out why—and whether it can overcome that hurdle."
Here are some more Processor articles from around the web:
- The AMD A8-3850 Review: Llano on the Desktop @ AnandTech
- AMD Llano A8-3850 APU @ TechwareLabs
- AMD A8-3850 Llano APU Review @ OCC
- AMD A8-A3850 APU and Lynx @ Bjorn3d
- AMD A8-3850 Llano APU & Gigabyte A75M-UD2H Review @ Neoseeker
- AMD A8-3850 APU Review: The Arrival of Llano @ Hi Tech Legion
- AMD A8-3850 (Llano) APU and A55/A75 Chipset @ Tweaktown
- AMD A8-3850 APU @ Overclockers.com
- AMD Llano A8-3850 APU and Gigabyte A75-UD4H Launch Review @ HardwareHeaven
- AMD A8-3850 APU Review: Llano Hits the Desktop @ Hardware Canucks
- AMD A8-3850 Llano APU @ Techspot
- AMD Llano APU: The Future is Fusion @ InsideHW
- Intel Pentium G850, Pentium G840 and Pentium G620 @ X-bit Labs
Subject: Processors | June 30, 2011 - 10:51 AM | Jeremy Hellstrom
Tagged: lynx, llano, igp, amd, a8-3850, 6550d, 3850
AMD (NYSE:AMD) today announced availability of the AMD Fusion A-Series Accelerated Processing Unit (APU) A8-3850 and A6-3650 desktop processors. The AMD A8-3850 and A6-3650 desktop processors will enable a high-performance experience for desktop users, including brilliant HD graphics, supercomputer-like performance, and incredibly fast application speeds.
Both the AMD A8-3850 and A6-3650 desktop processors combine four x86 CPU cores with powerful DirectX®11-capable discrete-level graphics, and up to 400 Radeon™ cores along with dedicated HD video processing on a single chip. Only AMD Fusion APUs offer true AMD Dual Graphics, with up to 120 percent visual performance boost*, when paired with select AMD Radeon™ HD 6000 Series graphics cards. Consumers can achieve supercomputer-like performance of more than 500 gigaflops compute capacity and enjoy rapid content transfers via USB 3.0.
All A-Series processors are powered by AMD VISION Engine Software, which is composed of AMD Catalyst™ graphics driver, AMD OpenCL driver and the AMD VISION Engine Control Center. With this suite of software, users get regular updates designed to improve system performance and stability, and can add new software enhancements.
With a suggested retail price of $135, the AMD A8-3850 desktop processor operates at 2.9GHz (CPU) and 600MHz (GPU) with 400 Radeon™ Cores, 4MB of L2 cache and a TDP of 100W.
The AMD A6-3650 desktop processor has clock speeds of 2.6GHz (CPU) and 443MHz (GPU) with 320 Radeon™ Cores, 4MB of L2 cache and a TDP of 100W. The suggested retail price of the AMD A6-3650 desktop processor is $115.
In an increasingly digital and visually oriented world, consumers demand more responsive multitasking, vivid graphics, lifelike games, lag-free videos, and ultimate multimedia performance. AMD A8-3850 and A6-3650 desktop processors enable these visually stunning end-user experiences.
FM1 motherboards for the A-Series APUs are available now from leading original design manufacturers (ODMs), including ASUS, ASRock, Biostar, ECS, Foxconn (Hon Hai Precision), Gigabyte, Jetway, MSI and Sapphire.
AMD A8-3850 and A6-3650 desktop processors are scheduled to be available for purchase through system builders and at major online retailers, including Amazon, CyberPower Inc., iBuyPower, Newegg and TigerDirect beginning July 3, 2011. Additional processors are scheduled to be available later this year.
AMD A8-3850 and A6-3650 desktop processors, and the corresponding FM1 motherboards, were created with desktop consumers and gamers in mind.
Subject: General Tech, Graphics Cards, Processors | June 24, 2011 - 01:13 PM | Scott Michaud
Tagged: linux, Ivy Bridge, Intel
Back when Sandy Bridge launched, Intel had some difficulty with Linux compatibility: its support software was not available far enough ahead of launch for distribution developers to roll it into their releases. As a result, users purchasing Sandy Bridge hardware were in for a frolic in the third-party repositories unless they wished to wait four or five months for their distribution's next major version. This time Intel is pushing code out much earlier, though questions remain as to whether it will fully make Ubuntu's 11.10 release.
You mean there's Intel... inside me?
Intel came down hard on itself for its Sandy Bridge support. Jesse Barnes, an open-source Linux developer at Intel, posted his thoughts on the Sandy Bridge Linux issue on the Phoronix Forums:
"No, this is our job, and we blew it for Sandy Bridge. We're supposed to do development well ahead of product release, and make sure distros include the necessary code to get things working … Fortunately we've learned from this and are giving ourselves more time and planning better for Sandy Bridge's successor, Ivy Bridge."
Now, six months later, as support for Ivy Bridge is being released and rolled into the necessary places, Intel appears to be more successful than last time. Much of the code Intel needs to release for Ivy Bridge is already available and merged into the Linux 3.0 kernel; a few features missed that deadline and must wait for the Linux 3.1 kernel. While Phoronix believes Fedora 16 will still be able to include support in time, Ubuntu 11.10 may not unless they back-port the changes to their distribution. That is obviously not something Intel would like to see happen given all of its recent extra effort.
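For readers wondering where their own distribution stands, one rough sanity check is to compare the running kernel version against the 3.1 cutoff mentioned above. The snippet below is a hypothetical sketch under that assumption; the function name and the version parsing are ours, not anything Intel ships:

```shell
# Hypothetical sketch (not an Intel tool): decide whether a kernel version
# string meets the Linux 3.1 threshold where the last of the Ivy Bridge
# graphics bits described above are expected to land.
meets_ivb_threshold() {
    major=$(echo "$1" | cut -d. -f1)   # "3.1.0-1-amd64" -> "3"
    minor=$(echo "$1" | cut -d. -f2)   # "3.1.0-1-amd64" -> "1"
    [ "$major" -gt 3 ] || { [ "$major" -eq 3 ] && [ "$minor" -ge 1 ]; }
}

# Check the currently running kernel
if meets_ivb_threshold "$(uname -r)"; then
    echo "kernel $(uname -r): full Ivy Bridge graphics support expected"
else
    echo "kernel $(uname -r): expect missing features or back-ported drivers"
fi
```

Of course, a distribution can also back-port individual drivers into an older kernel, so the version number is only a first approximation.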
Subject: Processors | June 23, 2011 - 12:32 PM | Jeremy Hellstrom
Tagged: llano, amd, AFDS
Since Llano is the best news we have seen from AMD in quite a while, here is more coverage of the APU and the AMD Fusion Developer Summit, this time from The Tech Report. They take a laptop powered by an A8-3500M APU with Radeon HD 6620G graphics and pit it against an HP ProBook 6460b with a Core i5-2410M and HD 3000 graphics. Both processors carry a 35W TDP and will likely be priced similarly once Llano-powered laptops hit the market. As in Ryan's review, the AMD processor lags far behind in CPU-bound tests, but once GPU power comes into play the positions are completely reversed. It will be interesting to see how AMD positions Llano in the marketplace.
"Can AMD's 'Llano' APU really take on Intel's excellent Sandy Bridge processors and hold its own? We've taken a deep look at its architecture and performance in order to find out."
Here are some more Processor articles from around the web:
- AMD Llano A8-3500M APU Review @ t-break
- More Linux Benchmarks Of The AMD A8-3500M Fusion APU @ Phoronix
- AMD's Fusion Developer Summit was a success @ The Inquirer
- CPU Performance Comparison Guide Rev. 5.6 @ TechARP
Subject: Processors | June 21, 2011 - 12:51 AM | Tim Verry
Tagged: ulv, sandy bridge, Intel, cpu, celeron
According to Maximum PC, Intel recently revamped its official price list with four new ULV processors (ultra-low-voltage chips generally found in ultraportable notebooks). The additions comprise three Sandy Bridge based chips and one Celeron. The three new Sandy Bridge ULV CPUs are the dual-core, Hyper-Threaded Core i5 2557M with 3 MB of cache running at 1.7 GHz, the Core i7 2637M with 4 MB of cache at 1.7 GHz, and the Core i7 2677M at 1.8 GHz with 4 MB of cache. Using Turbo Boost, the chips can reach 2.7 GHz, 2.8 GHz, and 2.9 GHz respectively. Finally, the new ULV Celeron is the dual-core Celeron 847 with 2 MB of cache running at 1.1 GHz.
The Core i5 2557M carries a price tag of $250, the Core i7 2637M goes for $289, and the Core i7 2677M has an MSRP of $317. You can see the entire price list here. The new Sandy Bridge based ULV processors can Turbo Boost by between 1.0 and 1.1 GHz above their base clocks depending on model, which should provide plenty of power for mobile devices while sipping battery power with a TDP (thermal design power) of only 17 watts.
Subject: General Tech, Processors | June 20, 2011 - 04:46 PM | Scott Michaud
Tagged: VIA, sysmark, nvidia, bapco, amd
People like benchmarks. Benchmarks tell you which component to purchase while your mouse flutters between browser tabs of various Newegg or Amazon pages. Benchmarks let you see how awesome your PC is when, often, videogames will not for a couple of years. One benchmark you probably have not seen here in a very long time is SYSmark from the Business Applications Performance Corporation, known as BAPCo to its friends and well-wishers. There has long been dispute over the politics of how BAPCo designs its benchmarks, and it finally boiled over with AMD, NVIDIA, and VIA rolling off the sides of the pot.
Fixed that for you
The disputes centered mostly on the release of SYSmark 2012. For years, various members have complained about aspects of the product that, they allege, Intel strikes down or ignores while designing each version. One major complaint is the lack of reporting on the computer’s GPU performance, which is quickly becoming central to an actual system’s overall performance. With NVIDIA, AMD, and VIA gone from the consortium, Intel is pretty much left alone in its own company: now officially.