Author:
Subject: Processors
Manufacturer: Intel

Sandy Bridge-E is just what you expect

Introduction

It has been more than three years since Intel released the first Core i7 processor built around the Nehalem CPU architecture along with the X58 chipset.  It quickly became the platform of choice for the enthusiast market (gamers and overclockers), and remained in that role even as the world of processors evolved around it with the release of Westmere and Sandy Bridge.  Yes, we have been big supporters of the Sandy Bridge Core i7 parts for some time as the "new" platform of choice for gamers, but part of us always fondly remembered the days of Nehalem and X58. 

Well, Intel shared the sentiment, and this holiday they are officially unveiling the Sandy Bridge-E platform and the X79 chipset.  The "E" stands for enthusiast in this case, and you'll find that many of the same decisions and patterns from the Nehalem release apply to this one.  Nehalem and X58 were really meant as a workstation design, but the performance and features were so good that Intel wanted to offer them to the high-end consumer as well.  Sandy Bridge-E is the same thing - this design is clearly built for the high-profit areas of computing, including workstations and servers, but those that want the best available technology will find it pretty damn attractive as well. 

cpu1.jpg

But what actually makes a Sandy Bridge-E processor (now going with the Core i7-3xxx model naming scheme) different from the Sandy Bridge CPUs we have come to love since their release in January of this year?

The Sandy Bridge-E Architecture

The answer might surprise you, but truthfully not a whole lot has changed.  In fact, from a purely architectural standpoint (when looking at the x86 processor cores), Sandy Bridge-E looks essentially identical to the cores found in currently available Sandy Bridge CPUs.  You will see the same benefits of the AVX instruction set in applications that take advantage of it, the same shared L3 cache that exists between all of the cores for data coherency, and the ring bus introduced with Sandy Bridge is still there to move data between the cores, cache and uncore sections of the die.

Click here to continue reading our review of the new Sandy Bridge-E processor, the Core i7-3960X!!

Author:
Subject: Processors
Manufacturer: AMD

Bulldozer Architecture

Introduction

Bulldozer.  Since its initial unveiling and placement on the roadmap many have called the Bulldozer architecture the savior of AMD, the processor that would finally turn the tide back against Intel and its dominance in the performance desktop market.  After quite literally YEARS of waiting we have finally gotten our hands on the Bulldozer processors, now called the AMD FX series of CPUs, and can report on our performance and benchmarking of the platform.

With all of the leaks surrounding the FX processor launch you might be surprised by quite a bit of our findings - both on the positive and the negative side of things.  With all of the news in the past weeks about Bulldozer, now we can finally give you the REAL information.

Before we dive right into the performance part of our story, I think it is important to revisit the Bulldozer architecture and describe what makes it different from the Phenom II architecture as well as Intel's Sandy Bridge design.  Josh wrote up a great look at the architecture earlier in the year with information that is still 100% pertinent, and we recount much of that writing here.  If you are comfortable with the architecture design points, then feel free to skip ahead to the sections you are more interested in - but I highly recommend you give the data below a look first. 

The below text was taken from Bulldozer at ISSCC 2011 - The Future of AMD Processors.

Bulldozer Architecture Revisited

Bulldozer brings very little from the previous generation of CPUs, except perhaps the experience of the engineers working on these designs.  Since the original Athlon, the basic floor plan of the CPU architecture AMD has used has remained relatively unchanged.  Certainly there were significant changes throughout the years to keep up in performance, but the 10,000 foot view of the actual decode, integer, and floating point units was very similar throughout.  TLBs increased in size, more instructions were kept in flight, and so on.  Aspects such as larger L2 caches, integrated memory controllers, and the addition of a shared L3 cache have all brought improvements to the architecture.  But the overall data flow is very similar to that of the original Athlon introduced more than a decade ago.

As covered in our previous article about Bulldozer, it is a modular design which will come in several flavors depending on the market it is addressing.  The basic building block of the Bulldozer design is a 213 million transistor module.  This block contains the fetch and decode unit, two integer execution units, a shared 2 x 128 bit floating point/SIMD unit, and the L1 data and instruction caches.  All of this is manufactured on GLOBALFOUNDRIES' 32nm, 11 metal layer SOI process.  The entire unit, plus 2 MB of shared L2 cache, is contained in approximately 30.9 mm² of die space.

arch00.jpg

Continue reading our review of the AMD FX Processor (codenamed Bulldozer)!!

Author:
Manufacturer: id Software

RAGE Performs...well

RAGE is not as dependent on your graphics hardware as it is on your CPU and storage system (which may be an industry first); we will discover the reason when talking about the texture pop-up issue on the next page.

Introduction

The first id Software designed game since the release of Doom 3 in August of 2004, RAGE has a lot riding on it.  Not only is this the introduction of the idTech 5 game engine, but it also culminates more than 4 years of development and marks the first new IP from the developer since the creation of Quake.  And since the first discussions and demonstrations of Carmack's new MegaTexture technology, gamers have been expecting a lot as well. 

02-screen.jpg

Would this game be impressive enough on the visuals to warrant all the delays we have seen?  Would it push today's GPUs in a way that few games are capable of?  It looks like we have answers to both of those questions and you might be a bit disappointed.  

Performance Characteristics

First, let's get to the heart of the performance question: will your hardware play RAGE?  Chances are, very much so.  I ran through some tests of RAGE on a variety of hardware including the GeForce GTX 580, 560 Ti, 460 1GB and the Radeon HD 6970, HD 6950, HD 6870 and HD 5850.  The test bed included an Intel Core i7-965 Nehalem CPU and 6GB of DDR3-1333 memory, running off of a 600GB VelociRaptor hard drive.  Here are the results from our performance tests running at 1920x1080 resolution with 4x AA enabled in the game options:

Continue reading our initial look at RAGE performance and image quality!!

Author:
Manufacturer: PC Perspective

The Basics

Introduction

If you have been visiting PC Perspective at all over the last week there is no doubt you have seen a lot of discussion about the currently running Battlefield 3 beta.  We posted an article looking at performance of several different GPUs in the game and then followed it up with a look at older cards like the GeForce 9800 GT.  We did a live stream of some PC Perspective staff playing BF3 with readers and fans, showed off and tested the locked Caspian Border map and even looked at multi-GPU scaling performance.  It was a lot of testing and a lot of time, but now that we have completed it, we are ready to summarize our findings in a piece that many have been clamoring for - a Battlefield 3 system build guide.

05.jpg

The purpose of this article is simple: gather our many hours of testing and research and present the results in a way that simply says "here is the hardware we recommend."  It is the exact same philosophy that makes our PC Perspective Hardware Leaderboard so successful, as it gives the reader all the information they need, all in one place.

Continue reading our guide for building a system for Battlefield 3!!

Author:
Manufacturer: NVIDIA
Tagged: vsmp, tegra, nvidia, kal-el

Kal-El Tegra SoC to use 5 cores

Recent news from NVIDIA has unveiled some interesting new technical details about the upcoming Kal-El ARM-based Tegra SoC.  While we have known for some time that this chip would include a quad-core processor and would likely be the first ARM-based quad-core part on the market, NVIDIA's Matt Wuebbling spilled the beans on a new technology called "Variable SMP" (vSMP) and a fifth core on the die.

kalel1.png

An updated diagram shows the fifth "companion" core - Courtesy NVIDIA

This patented technology allows the upcoming Tegra processor to address a couple of key issues that affect smartphones and tablets: standby power consumption and manufacturing process variation.  Even though all five of the cores on Kal-El are going to be based on the ARM Cortex A9 design, they will have very different power characteristics due to variations in the TSMC 40nm process technology that builds them.  Typical of most foundries and process technologies, TSMC has both a "high performance" and a "low power" derivative of its 40nm technology, usually aimed at different projects.  The higher performing variation will run at faster clock speeds but will also have more transistor leakage, thus increasing overall power consumption.  The low power option does just the opposite: it lowers the frequency ceiling while using less power at idle and under load.  

kalel2.png

CPU power and performance curves - Courtesy NVIDIA

NVIDIA's answer to this dilemma is to have both - a single A9 core built on the low power transistors and quad A9s built on the higher performing transistors.  The result is the diagram you saw at the top of this story: a quad-core SoC with a single ARM-based "companion."  NVIDIA is calling this strategy Variable Symmetric Multiprocessing, and using some integrated hardware tricks it is able to switch between operating on the lower power core OR on one to four of the higher power cores.  The low power process will support operating frequencies of only up to 500 MHz, while the high speed process transistors will be able to hit well above 1-1.2 GHz. 
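NVIDIA has not published its governor logic, but the switching behavior described above can be sketched roughly as follows. All names and thresholds here are illustrative assumptions on our part, not NVIDIA's actual implementation; only the 500 MHz companion-core ceiling comes from NVIDIA's description.

```python
# Illustrative sketch of vSMP-style cluster selection (NOT NVIDIA's code).
COMPANION_MAX_MHZ = 500   # frequency ceiling of the low-power companion core
PERF_CORES = 4            # number of high-performance Cortex A9 cores

def select_cluster(required_mhz, runnable_threads):
    """Return (cluster_name, active_core_count) for the current workload.

    The OS runs on either the companion core OR the performance cluster,
    never both at once, mirroring the exclusive switch NVIDIA describes.
    """
    # Light, mostly-idle work (standby, background sync) fits the slow core.
    if runnable_threads <= 1 and required_mhz <= COMPANION_MAX_MHZ:
        return ("companion", 1)
    # Anything heavier wakes one to four of the fast cores.
    return ("performance", max(1, min(PERF_CORES, runnable_threads)))

print(select_cluster(300, 1))    # ('companion', 1)
print(select_cluster(1200, 3))   # ('performance', 3)
```

The key design point this models is that standby workloads never touch the leaky high-performance transistors at all, which is where the battery-life win comes from.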

Keep reading for more on the 5-core quad-core Kal-El SoC from NVIDIA!!

Author:
Subject: Processors
Manufacturer: AMD

Bulldozer Ships for Revenue

Some months back we covered the news that AMD had released its first revenue shipments of Llano.  This was a big deal back then, as it was the first 32 nm based product from AMD, and one which could help AMD achieve power and performance parity with Intel in a number of platforms.  Llano has gone on to be a decent seller for AMD, and it has had a positive effect on AMD's market share in laptops.  Where once AMD was a distant second in overall power and performance in the mobile environment, Llano now allows them to get close to the CPU performance of Intel processors, achieve much greater performance in graphics workloads, and match Intel in overall power consumption.

bd_handoff.jpg

KY Wong and Marshall Kwait hand off the first box of Bulldozer based Interlagos processors to Cray's Joe Fitzgerald.  Photo courtesy of AMD.

Some five months later, we are now making the same type of announcement for AMD and its first revenue shipment of the Bulldozer core.  The first chips off the line are actually "Interlagos" chips: server processors that feature upwards of 16 cores (8 modules, each module containing two integer units and a shared 256 bit FPU/SSE SIMD unit).  The first customer is Cray, purveyor of fine supercomputers everywhere.  Cray will be integrating these new chips into its XE6 supercomputers, which have been purchased by a handful of government and education entities around the world.

Continue reading for our analysis on AMD's processor future...

Carmack Speaks

Last week we were in Dallas, Texas covering Quakecon 2011 as well as hosting our very own PC Perspective Hardware Workshop.  While we had over 1100 attendees at the event and had a blast judging the case mod contest, one of the highlights of the event is always getting to sit down with John Carmack and pick his brain about topics of interest.  We got about 30 minutes of John's time over the weekend and pestered him with questions about the GPU hardware race, how Intel's integrated graphics (and AMD Fusion) fit into the future of PCs, the continuing debate about ray tracing, rasterization, voxels and infinite detail engines, key technologies for PC gamers like multi-display engines and a lot more!

One of our most read articles of all time was our previous interview with Carmack that focused a lot more on the ray tracing and rasterization debate.  If you never read that, much of it is still very relevant today and is worth reading over. 

carmack201c.png

This year though John has come full circle on several things including ray tracing, GPGPU workloads and even the advantages that console hardware has over PC gaming hardware.

Continue reading to see the full video interview and our highlights from it!!

Author:
Manufacturer: General

How much will these Bitcoin mining configurations cost you in power?

Earlier this week we looked at Bitcoin mining performance across a large range of GPUs, but we had many requests for estimates on the cost of the power to drive them.  At the time we were much more interested in the performance of these configurations, but now that we have that information and have started to look at the potential profitability of doing something like this, looking at the actual real-world cost of running a mining machine 24 hours a day, 7 days a week became much more important. 

This led us to today's update where we will talk about the average cost of power, and thus the average cost of running our 16 different configurations, in 50 different locations across the United States.  We got our data from the U.S. Energy Information Administration website where they provide average retail prices on electricity divided up by state and by region.  For use today, we downloaded the latest XLS file (which has slightly more updated information than the website as of this writing) and started going to work with some simple math. 

Here is how your state matches up:

kwh-1.jpg

kwh-2.jpg

The first graph shows the rates in alphabetical order by state, the second graph in order from the most expensive to the least.  First thing we noticed: if you live in Hawaii, I hope you REALLY love the weather.  And maybe it's time to look into that whole solar panel thing, huh?  Because Hawaii was SO FAR out beyond our other data points, we are going to be leaving it out of our calculations and instead are going to ask residents and those curious to just basically double one of our groupings.
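The "simple math" behind these numbers boils down to converting a rig's draw in watts into kilowatt-hours and multiplying by the state's retail rate. Here is a sketch of that calculation; the 500 W system draw and 10-cent rate below are placeholder values for illustration, not figures from our test configurations:

```python
def mining_power_cost(system_watts, cents_per_kwh, hours=24.0, days=30.0):
    """Dollar cost to run a mining rig continuously for a period.

    kWh consumed = (watts / 1000) * total hours of operation
    cost in dollars = kWh * (rate in cents per kWh) / 100
    """
    kwh = system_watts / 1000.0 * hours * days
    return kwh * cents_per_kwh / 100.0

# Hypothetical 500 W rig at 10 cents/kWh, running 24/7 for a 30-day month:
# 0.5 kW * 720 h = 360 kWh; 360 * $0.10 = $36.00
print(round(mining_power_cost(500, 10.0), 2))  # 36.0
```

Swap in your own state's rate from the EIA data and your card's measured wall draw to see whether mining output still covers the power bill.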

Keep reading to get the full rundown on how power costs will affect your mining operations, and why it may not make sense to mine AT ALL with NVIDIA graphics cards! 

Author:
Manufacturer: General

What is a Bitcoin?

This article looking at Bitcoins and the performance of various GPUs in mining them was really a big team effort at PC Perspective.  Props go out to Tim Verry for doing the research on the process of mining and helping to explain what Bitcoins are all about.  Ken Addison did a great job going through an allotment of graphics cards running GUIMiner and gathering the data you will see presented later.  Scott Michaud helped with some graphics and imagery, and I'm the monkey that just puts it all together at the end.

** Update 7/13/11 **  We recently wrote another piece on the cost of the power to run the Bitcoin mining operations used in this performance article.  Based on the individual prices of electricity in all 50 US states, we found that the cost of the power to run some cards exceeded the value of the Bitcoin currency based on today's exchange rates.  I would highly recommend you check out that story as well after giving this performance-based article a thorough reading.  ** End Update **

A new virtual currency called Bitcoin has been receiving a great deal of news fanfare, criticism and user adoption. The so-called cryptographic currency uses strong encryption methods to eliminate the need for trust when buying and selling goods over the Internet, in addition to a peer-to-peer distributed timestamp server that maintains a public record of every transaction to prevent double spending of the electronic currency. The aspects of Bitcoin that have caused the most criticism and the recent large rise in growth are, respectively, its inherent ability to anonymize the real life identities of users (though the transactions themselves are public) and the ability to make money by supporting the Bitcoin network in verifying pending transactions through a process called “mining.” Privacy, security, cutting out the middle man and making it easy for users to do small casual transactions without fees, as well as the ability to be rewarded for helping to secure the network by mining, are all selling points (pun intended) of the currency.

When dealing with a more traditional and physical local currency, there is a need for both parties to trust the currency, but not much need to trust each other, as handing over cash is fairly straightforward. One does not need to trust the other person as much as if it were a check, which could bounce. Once it has changed hands, the buyer cannot go and spend that money elsewhere as it is physically gone. Transactions over the Internet, however, greatly reduce the convenience of that local currency, and due to the series of tubes’ inability to carry cash through the pipes, services like Paypal as well as credit cards and checks are likely to be used in its place. While these replacements are convenient, they are also much riskier than cash, as fraudulent charge-backs and disputes are likely to occur, leaving the seller in a bad position. Due to this risk, sellers have to factor a certain percentage of expected fraud into their prices, in addition to collecting as much personally identifiable information as possible. Bitcoin seeks to remedy these risks by bringing the convenience of a local currency to the virtual plane with irreversible transactions, a public record of all transactions, and the ability to trust strong cryptography instead of needing to trust people.

paymentpal.jpg

There are a number of security measures inherent in the Bitcoin protocol that assist with these security goals. Foremost, Bitcoin uses strong public and private key cryptography to secure coins to a user. Money is handled by a bitcoin wallet, which is a program such as the official bitcoin client that creates public/private key pairs that allow you to send and receive money. You are further able to generate new receiving addresses whenever you want within the client. The wallet.dat file is the record of all your key pairs and thus your bitcoins, and contains 100 address/key pairs (though you are able to generate new ones beyond that). Then, to send money, one only needs to sign the bitcoin with their private key and send it to the recipient’s public key. This creates a chain of transactions that are secured by these public and private key pairs from person to person. Unfortunately, this cryptography alone is not able to prevent double spending, meaning that Person A could sign the bitcoin over to Person B with his private key, but could also do the same to Person C and so on. This issue is where the peer-to-peer and distributed computing aspects of the bitcoin protocol come into play. By using a peer-to-peer distributed timestamp server, the bitcoin protocol creates a public record of every transaction that prevents double spending of bitcoins. Once the bitcoin has been signed to a public key (receiving address) with the user’s private key and the network confirms this transaction, the bitcoins can no longer be spent by Person A, as the network has confirmed that the coin belongs to Person B now, and only Person B can spend it using their private key.
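The double-spend check described above can be made concrete with a deliberately simplified sketch. Real Bitcoin uses ECDSA signatures, proof-of-work and a distributed block chain rather than a trusted in-memory table, and every name below is our own invention, but the core rule - the network only honors a spend from the coin's current owner - is the same:

```python
import hashlib

class ToyLedger:
    """Toy public record of coin ownership (a vastly simplified Bitcoin model)."""

    def __init__(self):
        self.owner = {}     # coin_id -> current owner's address
        self.history = []   # append-only list of transaction IDs

    def mint(self, coin_id, address):
        # Stand-in for mining: a new coin appears, owned by `address`.
        self.owner[coin_id] = address

    def transfer(self, coin_id, sender, recipient):
        # The "network" rejects the spend unless the coin really belongs
        # to the sender - this is the double-spend check.
        if self.owner.get(coin_id) != sender:
            return False
        self.owner[coin_id] = recipient
        txid = hashlib.sha256(f"{coin_id}:{sender}->{recipient}".encode()).hexdigest()
        self.history.append(txid)
        return True

ledger = ToyLedger()
ledger.mint("coin1", "personA")
print(ledger.transfer("coin1", "personA", "personB"))  # True
print(ledger.transfer("coin1", "personA", "personC"))  # False: already spent
```

In the real protocol there is no central `ToyLedger` object; every node keeps its own copy of the transaction record and the timestamp server's ordering decides which of two conflicting spends came first.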

Keep reading our article that details the theories behind Bitcoins as well as the performance of modern GPUs in mining them!  

Author:
Manufacturer: AMD

Architecture Details

Introduction

Just a couple of weeks ago we took the cover off of AMD's Llano processor for the first time in the form of the Sabine platform: Llano's mobile derivative.  In that article we wrote in great detail about the architecture and how it performed in the notebook market - it looked very good when compared to the Intel Sandy Bridge machines we had on hand.  Battery life is one of the most important aspects of evaluating a mobile configuration, with performance and features taking a back seat the majority of the time.  In the world of the desktop, though, that isn't necessarily the case. 

Desktop computers, even those meant for a low-cost and mainstream market, don't treat power consumption as crucial and instead focus almost exclusively on the features and performance of the platform.  There are areas where power and heat are more scrutinized, such as the home theater PC market and small form-factor machines, but in general you need to hit a home run with performance per dollar in this field.  Coming into this article we had some serious concerns about Llano and its ability to properly address this market. 

How did our weeks with the latest AMD Fusion APU turn out?  There is a ton of information that needed to be addressed including a look at the graphics performance in comparison to Sandy Bridge, how the quad-core "Stars" x86 CPU portion stands up to modern options, how the new memory controller affects graphics performance, Dual Graphics, power consumption and even a whole new overclocking methodology.  Keep reading and you'll get all the answers you are looking for.

Llano Architecture

We spent a LOT of time in our previous Llano piece discussing the technical details of the new Llano Fusion CPU/GPU architecture and the fundamentals are essentially identical from the mobile part to the new desktop releases.  Because of that, much of the information here is going to be a repeat with some minor changes in the forms of power envelopes, etc.  

slide01.jpg

The platform diagram above gives us an overview of what components will make up a system built on the Llano Fusion APU design.  The APU itself is made up of 2 or 4 x86 CPU cores that come from the Stars family released with the Phenom / Phenom II processors.  AMD does introduce a new Turbo Core feature that we will discuss later, which is somewhat analogous to what Intel has done with Turbo Boost on its processors.

Continue reading our AMD A-series Llano desktop review for all the benchmarks and information!

Author:
Subject: Processors, Mobile
Manufacturer: AMD

AMD lines up Llano

Introduction

2006.  That was the year when the product we are reviewing today was first conceived and the year that AMD and ATI merged in a $5.4 billion deal that many read about while scratching their heads.  At the time, the pairing of the 2nd place microprocessor company with the 2nd place graphics technology vendor might have seemed like an odd arrangement, even with the immediate benefit of a unified platform of chipset, integrated graphics and processor to offer to mobile and desktop OEMs.  In truth though, that was a temporary solution to a more long term problem that we now know as heterogeneous computing: the merging not just of these companies but of all the computing workloads of CPUs and GPUs.

Five years later, and by most accounts more than a couple of years late, the new AMD - now sans manufacturing facilities - is ready to release the first mainstream APU, or Accelerated Processing Unit.  While the APU name is something that the competition hasn't adopted, the premise of a combined CPU/GPU processing unit is not just the future, it is the present as well.  Intel has been shipping Sandy Bridge, the first mainstream silicon with a CPU and GPU truly integrated together on a single die, since January 2011, and AMD no longer has the timing advantage that we thought it would when the merger was announced.

For sanity's sake, I should mention the Zacate platform, which combines an ATI-based GPU with a custom low power x86 core called Bobcat for the netbook and nettop market and was released in November of 2010.  As much as we like that technology, it doesn't have the performance characteristics to address the mainstream market, and that is exactly where Llano comes in.

AMD Llano Architecture

Llano's architecture has been no secret over the last two years, as AMD has let details and specifications leak at a slow pace in order to build interest and excitement over the pending transition.  That information release has actually slowed this year, though, likely to reduce expectations for the first generation APU, with the release of the Sandy Bridge processor proving to be more potent than perhaps AMD expected.  And in truth, while the Llano design as a whole is brand new, all of the components that make it up have been seen before - both the x86 Stars core and the Radeon 5000 series-class graphics have been tested and digested on PC Perspective for many years.

For today's launch we were given a notebook reference platform for the Llano architecture called "Sabine".  While the specifications we are looking at here are specific to this mainstream notebook platform nearly all will apply to the desktop release later in the year (perhaps later in the month actually).

slide01.jpg

The platform diagram above gives us an overview of what components will make up a system built on the Llano Fusion APU design.  The APU itself is made up of 2 or 4 x86 CPU cores that come from the Stars family released with the Phenom / Phenom II processors.  AMD does introduce a new Turbo Core feature that we will discuss later, which is somewhat analogous to what Intel has done with Turbo Boost on its processors. 

There is a TON of more information, so be sure you hit that Read More link right now!!

Author:
Manufacturer: Lucid

AMD and Virtual Vsync for Lucid Virtu

Lucid has grown from a small startup that we thought might have a chance to survive in the world of AMD and NVIDIA to a major player in the computing space.  Its latest and most successful software architecture was released into the wild with the Z68 chipset as Lucid Virtu - software that enabled users to take advantage of both the performance of a discrete graphics card and the intriguing features of the integrated graphics of Intel's Sandy Bridge CPU. 

While at Computex 2011 in Taiwan we met with the President of Lucid, Offir Remez, who was excited to discuss a few key new additions to the Virtu suite in the new version, titled "Virtu Universal".  The new addition is support for AMD platforms, including current 890-based integrated graphics options as well as the upcoming AMD Llano (and more) APU CPU/GPU combinations.  It is hard to see a reason for Virtu on current AMD platforms like the 890 series, as there are no compelling features in the integrated graphics on that front, but with the pending release of Llano you can be sure that AMD is going to integrate some of its own interesting GP-GPU features that will compete with the QuickSync technology of Sandy Bridge, among other things.  To see Lucid offer support for AMD this early is a good sign for day-of availability on the platform later this year.

lucid01.jpg

The second pillar of Lucid's announcement of Virtu Universal was the addition of support for the mobile space, directly competing with NVIDIA's and AMD's own hardware-specific switchable graphics solutions.  By far the most successful thus far has been NVIDIA's Optimus, which has filtered its way into basically all major OEMs and most of the major notebook releases that include both integrated and discrete graphics solutions.  The benefit that Lucid offers is that it will work with BOTH Intel and AMD platforms, simplifying the product stack quite a bit.  

Read on for more information and some videos of Virtual Vsync in action!

Author:
Manufacturer: Intel

The fast get faster

Introduction

With all the news and excitement about the Sandy Bridge architecture, platform and processors from Intel since their launch in January, it is easy to overlook the Nehalem architecture that continues to sell and be integrated into the fastest consumer PCs available. Remember Nehalem and its three digit model numbers? You really have to stretch that memory as it was before the CPU/GPU combo of Sandy Bridge and even before the Clarkdale / Lynnfield processors that began the move towards lower cost dual-channel memory based processors.

03.jpg

It seems odd to think that today we are taking a step BACK in time to review the new Core i7-990X processor and a very nicely upgraded X58 motherboard from Intel in the form of the DX58SO2. The Core i7-990X is a Gulftown (6-core) processor that in many cases becomes the fastest consumer processor on the market and flagship CPU for Nehalem and the “Extreme Edition” suffix. Replacing the i7-980X, the 990X will fill that $999 processor segment for extreme enthusiasts and high end system builders.

Author:
Subject: Processors
Manufacturer: VIA

Past Nano History

One could argue that VIA jumped on the low power bandwagon before it was really cool.  Way back in the late 90s VIA snatched up processor firms Cyrix and Centaur and started to merge those design teams to create low power x86 CPUs.  Over the next several years VIA was still flying high on the chipset side, but due to circumstances it started to retreat from that business.  On the Intel side it was primarily due to the legal issues that stemmed from the front side bus license that VIA had, and how it apparently did not apply to the Pentium 4.  On the AMD side it was more about the increased competition from NVIDIA and ATI/AMD, plus the lack of revenue from that smaller CPU market.  Other areas have kept VIA afloat through the years, such as audio codecs, very popular FireWire controllers, and the latest USB 3.0 components that are starting to show up.

vq_04.jpg

Considering all of the above, VIA thought its best way to survive was to get into the CPU business and explore a niche in the x86 market that had been widely ignored except for a handful of products from guys like Nat Semi (who had originally bought up Cyrix).  In the late 90s and early 00s there just was not much of a call for low power x86 products, and furthermore the industry was still at a point where even mundane productivity software would max out the top end x86 processors at the time.  This was a time where 1GHz was still not common, and all processors were single core.  Fast forward to 2011 and we have four and six core processors running in excess of 3 GHz.  We have also seen a dramatic shift in the x86 realm to specialized, lower power processors.

Read on for more details!

Author:
Subject: Processors
Manufacturer: AMD

Phenom II End of Line

It was January, 2009 when AMD released their first 45 nm product to the desktop market.  While the server market actually received the first 45 nm parts some months earlier, they were pretty rare until AMD finished ramping production and was able to release the next generation of Phenom parts into the wild.  The Phenom II proved an able competitor to Intel’s seemingly unstoppable Core 2 architecture.  While the Phenom II typically had to be clocked slightly higher than the competing products, they held up well in terms of price and performance.

a980_01.jpg

AMD was finally able to overcome the stigma of the original Phenom launch, which was late, slow, and featured that wonderful revision B2 bug.  The Phenom II showed none of those problems: per clock performance was enhanced, and the chips were able to run at speeds of 3.0 GHz.  These chips were able to hit 4+ GHz on water cooling and 5+ GHz using LN2.  AMD finally seemed back in the game.  The Phenom II looked to propel AMD back into competitiveness with Intel, and the leaks pertaining to the 6 core versions of the architecture only made consumers all the more excited for what was to come.

Author:
Subject: Processors
Manufacturer: AMD
Tagged:

Just Like Zacate Before It...

AMD let leak today that late last month its Singapore packaging facility shipped the first Llano products for revenue. This is a big event for AMD, as their Future Really is Fusion. I take a look at what all this means for the company, as well as cover how their motherboard partners are going to handle part of the transition to AM3+.

Way back in early November 2010 it was reported that AMD began their first revenue shipments of the Zacate/Ontario processors to their partners.  This was big news for AMD, as they expected these processors to make quite an impact in the market.  It took a while for the first processors to actually hit the market in final form, and it wasn't until after CES in early 2011 that we actually saw products that were available to consumers.  Things really did not take off until as recently as February, and demand for these chips remains very high.

So today's announcement on the AMD Blog site that the packaging arm of AMD in Singapore has shipped the first revenue Llano based parts is very important for the company.  While Llano is not based on the next generation Bulldozer CPU architecture, it does feature what could be argued is the fastest and most advanced integrated graphics processor in the world.  AMD is still aiming for solid CPU performance by including a fast four core Phenom II based processor, with some IPC improvements that should help it stay competitive in most workloads.  But the real star will be the graphics.  While we have seen some big leaps forward over the past few years in integrated performance, this should be a significant landmark in graphics technology.  AMD was one of the first to provide users with acceptable 3D performance in the original AMD 690G chipset, and improved upon it with the 780, 785, and 800 series of integrated graphics.  The integrated part inside Llano should be a quantum leap ahead in features and performance as compared to anything else out there.

The happy group of workers with the plain, brown colored box holding the first revenue shipment of Llano based processors.

It took approximately three months before we saw initial penetration of Zacate based parts in the retail market, and we can expect about the same amount of time for Llano.  This will be the first 32 nm part that AMD has released, arriving around 18 months after Intel launched their first 32 nm parts.  AMD and GLOBALFOUNDRIES have wanted to close that gap, but nobody else in the world has the fabrication resources that Intel does.  I would imagine that many of these first Llano parts are actually aimed at the notebook market, where they could make the biggest financial impact as well as provide some TDP/clockspeed leeway, giving GLOBALFOUNDRIES time to improve yields and bins for higher clocked versions.

We will see desktop parts shortly after, and can expect a fair supply by early July.  This should be hitting OEMs as they are preparing for the "Back to School" launches in early August.  AMD is truly offering what looks to be standalone graphics card performance on a CPU, all at around 95 watts TDP for the average desktop part.  Combined with a true quad core processor and an ample amount of L2 cache per core, we should see performance that rivals that of a high end Athlon II X4 combined with an HD 6550 graphics card.  All combined in one nice little package, and for a slightly lower cost (and a lower cost of goods for the manufacturer).

Earlier last week we were informed that quite a few motherboards from Asus, Gigabyte, and MSI will be compatible with the upcoming AM3+ based Bulldozer products.  Because none of these current boards are built to utilize CPU graphics, they likely will not be compatible with Llano.  This really wouldn’t make any sense anyway, since current AM3 processors would work just as well as the first generation of Llano processors.  But the current AM3 motherboards should not have a problem with the first generation of Bulldozer processors.

Gigabyte is shipping boards with the block socket denoting AM3+ compatibility.

Asus and MSI are providing BIOS updates for their 800 series of motherboards.  Asus looks to provide BIOS updates for 890FX, 890GX, and some 880G based boards.  MSI is concentrating on the 890FX and 890GX series, but leaving out the 880 and 870 parts.  Gigabyte is actually shipping "Black Socket" AM3+ compatible boards, and these comprise nearly their entire current lineup of 800 series boards.  Users have spotted the black socket 870 boards for sale around North America, but so far only a handful have appeared.  From all indications these are not actually AM3+ sockets, but are in fact AM3 sockets in black plastic to convey Bulldozer/AM3+ compatibility.  The AM3 socket actually has 941 pin holes, but AM3 CPUs feature 938 pins.  AM3+ will feature 942 pin holes, but the assumption is that the actual chips themselves will have a few pins fewer than 942.  Hence the ability to fit into current AM3 boards even though AM3+ processors will have more pins.  There will likely be certain tradeoffs with using an AM3 board with AM3+ processors, and these look to be related to power efficiency and maximum turbo clockspeed targets.  Current AM3 boards do not look to have the fast VCore switching that AM3+ boards will have, and we could see some limitations given the HT 3.0 bus of current AM3 boards as compared to the updated HT 3.1 revision in AM3+.

Rumor has it that the Bulldozer release will be around June 20, while Llano will be officially unveiled in the first week of July.  That may or may not hold true, depending on how well AMD is able to ship these products.  Computex will likely see a lot more information slip out, as well as the ubiquitous motherboard support and floor samples being freely viewed by the public.  The official release of AM3+ motherboards should occur by the end of this month, or perhaps in early May.  AMD has a great need to seed the market with these motherboards, which are also backwards compatible with AM3 processors.  We also may see the first integrated support for USB 3.0 in these motherboards, as AMD has received USB 3.0 certification for the Hudson FCH core logic chips.  There is still confusion as to how these are going to be positioned in the market, or whether they are even compatible with the AMD 900 series of northbridge chips.

These are certainly exciting times for AMD.  The HD 6000 series of graphics chips are doing well in the market, and AMD is now truly competitive at the high end again.  Their CPUs have been holding their own, especially thanks to the help that Intel gave them with the Sandy Bridge chipset issue.  They are about to release what looks to be a very powerful CPU architecture in Bulldozer, and they will be rewriting the books when it comes to integrated graphics performance and capabilities.  Certainly AMD will not overtake Intel, but we as consumers do need a strong and competitive AMD for the sake of our own wallets.

Author:
Subject: Processors
Manufacturer: General
Tagged:

One of our Sandy Bridge complaints

Lucid first showed off its Virtu software virtualization for GPUs at CES in January, but they are now finally ready to give us some hands-on testing time. Virtu promises to marry the integrated graphics features of the Sandy Bridge Intel processor graphics to the performance of discrete solutions from NVIDIA and AMD.

Author:
Subject: Processors
Manufacturer: AMD
Tagged:

AMD at the ISSCC 2011

AMD has provided further details on the Bulldozer architecture at the ISSCC 2011 conference. These include an overall view of each module and some of the physical characteristics, a redesign of the schedulers and integer execution units, and a comprehensive look at power saving features that allow the Bulldozer core to exist in moderate power and TDP ranges. We cover the highlights of the submitted papers and make some guesses at what the final product will look like.

Author:
Subject: Processors
Manufacturer: AMD
Tagged:

Intel Missteps

AMD is unleashing a... marketing campaign. Ok, we were hoping that with the Sandy Bridge bug and its delay, AMD would step up and release at least some new CPUs and perhaps the new AM3+ platform. But alas, we take a look at the only official response to Intel's stumble. While this is a new trick for AMD, will it be enough to gobble up some of the business that Intel left on the table by pulling Sandy Bridge parts from the market?

Author:
Subject: Processors
Manufacturer: NVIDIA
Tagged:

Boldly Going...

For years NVIDIA has been hiring engineers with CPU backgrounds, and with the Tegra series of products we finally see what they have been working on. NVIDIA has foregone trying to get an x86 license, and instead is jumping with both feet into the world of ARM processors. Considering the market NVIDIA is aiming for, they have the chance to be a prominent figure in processor technology for years to come.