Gigabyte Unveils GA-H77N-WIFI Mini-ITX Motherboard

Subject: General Tech, Graphics Cards, Motherboards, Cases and Cooling, Processors, Chipsets, Memory, Displays | August 7, 2012 - 10:07 AM |
Tagged: Z77, motherboard, mini-itx, Intel, gigabyte, ga-h77n-wifi

During a European roadshow, Gigabyte showed off a new Mini-ITX form factor motherboard for the first time. Called the GA-H77N-WIFI, the motherboard is well suited to home theater and home server duty. Based on the H77 chipset, it is compatible with the latest Intel Core i3 (coming soon), i5, and i7 "Ivy Bridge" processors. The board goes for an all-black PCB with minimal heatsinks on the VRMs, and it is the same size as the motherboard that Ryan recently used in his Mini-ITX HTPC build.

ga_h77n_wifi.jpg

The GA-H77N-WIFI features an LGA 1155 processor socket, two DDR3 DIMM slots, a PCI Express slot, two SATA 3Gbps ports, two SATA 6Gbps ports, and an internal USB 3.0 header. There are also two Realtek Ethernet controller chips and a Realtek audio chip.

Rear IO on the Mini-ITX motherboard includes:
  • 1 PS/2 port
  • 2 USB 3.0 ports
  • 2 HDMI ports
  • 1 DVI port
  • 2 Antenna connectors (WIFI)
  • 4 USB 2.0 ports
  • 2 Gigabit Ethernet ports
  • 1 Optical S/PDIF port
  • 5 Analog audio jacks

The dual Gigabit Ethernet ports are interesting. The board could easily be loaded with open source routing software and turned into a router/firewall/Wi-Fi access point (a rough sketch of the idea follows below). To really take advantage of the Ivy Bridge support, you could put together a nice media server and HTPC recording/streaming box, using something like SiliconDust's HDHomeRun networked tuners or Ceton's USB tuner, since this board offers very little in the way of PCI-E slots. What would you do with this Mini-ITX Gigabyte board?
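
For the curious, here is a minimal sketch of what the router idea boils down to on a stock Linux install. Nothing here is specific to this board: the interface names are placeholders for whatever your distribution assigns to the two Realtek ports, and a real build would more likely run a turnkey package like pfSense or OpenWrt.

```python
import subprocess

# Minimal NAT-router sketch for a dual-NIC Linux box (interface names are
# assumed; substitute the ones your system actually assigns).
WAN, LAN = "wan0", "lan0"

commands = [
    # Let the kernel forward packets between the two NICs
    ["sysctl", "-w", "net.ipv4.ip_forward=1"],
    # Masquerade LAN traffic heading out of the WAN port
    ["iptables", "-t", "nat", "-A", "POSTROUTING", "-o", WAN, "-j", "MASQUERADE"],
    # Allow established/related replies back in from the WAN side...
    ["iptables", "-A", "FORWARD", "-i", WAN, "-o", LAN, "-m", "state",
     "--state", "ESTABLISHED,RELATED", "-j", "ACCEPT"],
    # ...and forward anything the LAN sends toward the WAN
    ["iptables", "-A", "FORWARD", "-i", LAN, "-o", WAN, "-j", "ACCEPT"],
]

for cmd in commands:
    subprocess.check_call(cmd)  # raises CalledProcessError if a step fails
```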

Unfortunately, there is no word yet on pricing or availability, but the motherboard is likely coming soon. You can find more information over at tonymacx86, who managed to snag some photos of the board.

Source: Tony Mac X86

Ivy Bridge-E after Haswell: I think I've gone cross-eyed

Subject: General Tech, Processors | August 6, 2012 - 02:12 AM |
Tagged: Ivy Bridge-E, Intel

According to VR-Zone, an Intel roadmap has surfaced that outlines the upper end of the company's CPU product line through the end of Q3 2013. The most interesting, albeit most confusing, entry is the launch of Ivy Bridge-E processors in the quarter after the mainstream Haswell parts.

So apparently the lack of high-performance CPU competition unhooked Intel’s tick-tock-clock.

The latest Intel CPU product roadmap outlines the company's expected product schedule through the end of Q3 2013. Last quarter's roadmap revealed that Intel's next architecture, Haswell, would be released in the second quarter of 2013, with only Sandy Bridge-E SKUs to satisfy the enthusiasts who want the fastest processors and the most available RAM slots. It was unclear what would eventually replace SB-E as the enthusiast part and what Intel intends for its future release cycles.

2a.png

I can Haswell-E’zburger?

(Photo Credit: VR-Zone)

Latest rumors continue to assert that Sandy Bridge-E X79 chipset-based motherboards will be able to support Ivy Bridge-E with a BIOS update.

The downside: personally, I am not a big fan of upgrading CPUs frequently.

I have never kept a motherboard and replaced just the CPU. While I have gone through the annoyance of applying thermal paste (and guessing where the Arctic Cooling stains will appear over the next two weeks), I tend to just use the default thermal tape that comes with the stock coolers. I am not just cheap or lazy, either; I simply do not feel a jump in performance unless I let three to five years pass between CPU product cycles.

But that obviously does not reflect all enthusiasts.

But how far behind on the enthusiast architectures will Intel allow itself to get? Surely someone with my taste in CPU upgrades should not have to wait 8-10 years between processors if this doubling of time between releases continues.

What do you think is the future of Intel's release cycle? Is this a one-time blip while Intel tries to make Ivy Bridge scale up, or do you expect the company to release progressively less frequently at the upper end?

Source: VR-Zone

AMD Hires Veteran CPU Architect in Jim Keller

Subject: Processors | August 1, 2012 - 08:38 AM |
Tagged: x86-64, x86, MIPS, Jim Keller, arm, amd, Alpha

There has been quite a bit of news lately from AMD, and very little of it good.  What has perhaps dominated the headlines throughout this past year is the number of veteran AMD employees who have decided (or were pushed) to seek employment elsewhere.  Not much has been said by these departing employees, but Rory Read certainly started things off with a bang by laying off some 10% of the company just months into his tenure.

Now we finally have some good news in terms of employment.  AMD has hired a pretty big name in the industry; not just a big name, but a person who was one of the primary leads on two of AMD's most successful architectures to date.  Jim Keller is coming back to AMD, and at a time when the company needs veteran leadership that is deeply in touch with not just the industry but CPU architecture design.

Jim was a veteran of DEC and worked on some of the fastest Alpha processors of the time.  Much could be written about DEC and how it let what could have been one of the most important and profitable architectures in computing history sit on the back burner while it focused on seemingly dinosaur-age computing.  After the Alpha technology was sold off and DEC itself was sold, Jim found his way to AMD and played a very important role there.

amd_logo.gif

His first project was helping to launch the K7, working primarily on system engineering.  The vast majority of design work for the K7 was finished by the time he signed on, but he apparently worked quite a bit on integrating it into the new socket architecture, whose bus was derived from the DEC Alpha.  Where Jim really earned his keep was in co-authoring the x86-64 specification and serving as lead architect on the AMD K8 series of processors.  While he left in 1999, the mark he made on AMD is essentially indelible.

After AMD he joined SiByte (later acquired by Broadcom) and was lead architect on a series of MIPS processors used in networking devices.  This lasted until 2003, when he again left a company seemingly more prosperous than when he arrived.

P.A. Semi was the next stop, where he worked again primarily on networking-specific SoCs utilizing the PowerPC architecture.  So far, by counting fingers, Jim has worked on five major ISAs (Alpha, x86, x86-64, MIPS, and PowerPC).  These chips were able to power networking devices with 10 Gb throughput.  P.A. Semi was then purchased by Apple in 2008.

At Apple, Jim became Director of Platform Architecture and worked with yet another major ISA: ARM.  He helped develop several major and successful products in the A4 and A5 processors that have powered the latest iPhone and iPad products from the Cupertino giant.  To say that this individual has had his fingers in some very important pies is an understatement.

Jim now rejoins AMD as CVP and Chief Architect of CPU Cores, reporting directly to Mark Papermaster.  His primary job is to improve execution efficiency and consistency, as well as implement next-generation features into future CPU cores that will keep AMD competitive with not only Intel but also the rising competitors in the low-power space.  This is finally some good news for AMD, as the company is actually adding talent rather than losing it.  While Jim may not be able to turn the company around overnight, he does look to be an important piece of the puzzle, with a huge amount of experience and know-how across multiple CPU ISAs.  If there is anyone who can tackle the challenges in front of AMD in the face of a changing world, this might be the guy.  So far he has had a positive impact at every stop he has made, and perhaps this could prove to be the pinnacle of his career.  Or it could be where his career goes to die.  It is hard to say, but I do think AMD made a good hire in Jim.

Source: AMD

Dango Durango -- Next-Gen Xbox developer kit in the wild.

Subject: General Tech, Graphics Cards, Processors, Systems | July 31, 2012 - 08:35 PM |
Tagged:

Eurogamer and Digital Foundry believe that a next-generation Xbox developer kit somehow got into the hands of an internet user looking to fence it for $10,000. If the rumors are true, a few interesting features are included in the kit: an Intel CPU and an NVIDIA graphics processor.

A little PC perspective on console gaming news…

If the source and the people corroborating it are telling the truth, Microsoft somehow lost control of a single developer kit for its upcoming Xbox platform. Much like their Cupertino frenemies, who lost an iPhone 4 prototype in a bar, where it was picked up and sold for $5,000 to a tech blog, the current owner of the Durango devkit is looking for a buyer at a mere $10,000. It is unlikely he found it on a bar stool.

Durango0.jpg.jpg

One further level of irony: the Xbox 360 alpha devkits were repurposed Apple Power Mac G5s.

Image source: DaE as per its own in-image caption.

Alpha developer kits usually change substantially on the outside before launch, but they often give clues about what to expect on the inside.

The first Xbox 360 software demonstrations were performed on slightly altered Apple Power Mac G5s. At that time, Apple was built on a foundation of IBM's PowerPC while the original Xbox ran Intel hardware. As it turned out, the Xbox 360 was indeed based on the PowerPC architecture.

Durango2.jpg.jpg

Huh, looks like a PC.

The leaked developer kit for the next Xbox is said to be running x86 hardware and an NVIDIA graphics processor. 8GB of RAM is said to be present on the leaked kit, although since devkits typically carry extra memory, that only suggests the retail Xbox will have less than 8GB of RAM. With RAM as cheap as it is these days, a great concern for PC gamers would be Microsoft loading the console to the brim with memory and removing the main technical advantage of our platform. Our PCs will still have that advantage once our gamers stop being scared of 64-bit compatibility issues. As a side note, those specifications are strikingly similar to the equally nebulous specs rumored for Valve's Steam Box demo kit.

The big story is the return to x86 and NVIDIA.

AMD is not fully ruled out of the equation if it manages to provide Microsoft with a bid it cannot refuse. Of course, practically speaking, AMD has only an iceball's chance in Hell of having a CPU presence in the upcoming Xbox (upgraded from a snowball's). More likely than not, Intel will pick up the torch that IBM kept warm for it, backed by its superior manufacturing.

PC gamers might want to pay close attention from this point on…

Contrast the switch for Xbox from PowerPC to x86 with the recent commentary from Gabe Newell and Rob Pardo of Blizzard. As Mike Capps alluded to prior to the launch of Unreal Tournament 3, Epic is concerned about the console mindset coming to the PC. It is entirely possible that Microsoft could be positioning the Xbox platform closer to the PC. Perhaps there are plans for cross-compatibility in exchange for closing the platform around certification and licensing fees?

Moving the Xbox platform closer to the PC in hardware specifications could renew Microsoft's attempts to close the PC platform, attempts that failed with the Games for Windows Live initiative. What makes the PC platform great is the lack of oversight over what can be created for it and the ridiculously long compatibility window for what has already been produced for it.

It might be no coincidence that the two companies complaining about Windows 8 are the same two companies that design their games to be sold and supported for decades after launch.

And if the worst does happen: PC gaming has been a stable platform despite repeated claims of its death, but would the user base be stable enough to handle a shift to Linux? I doubt that most users understand the implications of proprietary platforms on art well enough to even consider it. And what about Adobe and the other software and hardware tool companies that have yet to consider Linux a viable platform?

The dark tunnel might have just gotten longer.

Source: Eurogamer

ARM, TSMC to Produce 64-bit Processors With 3D Transistors

Subject: Processors | July 24, 2012 - 12:07 PM |
Tagged: TSMC, ARMv8, arm, 64-bit, 3d transistors, 20nm

 

Yesterday ARM announced a multi-year partnership with fab TSMC to produce sub-20nm processors that utilize 3D FinFET transistors. The collaboration and data sharing between the two companies will give the fabless ARM the ability to have physical processors produced from its designs, and will give TSMC a platform on which to further its process nodes and FinFET transistor technology. The first TSMC-produced processors will be based on the ARMv8 architecture and will be 64-bit compatible.

ARMv8.jpg

The addition of 3D transistors will allow the ARM processors to be even more power efficient and thus even better suited to mobile devices. Alternatively, it could allow for higher clockspeeds at the same TDP ratings as current chips. The other big news is the move to a 64-bit compatible design, which is huge considering ARM processors have traditionally been 32-bit. By moving to 64-bit, ARM is positioning itself for server and workstation adoption, especially with the ARM-compatible Windows 8 build due to be released soon. Granted, ARM SoCs have a long way to go before taking market share from Intel and AMD in the desktop and server markets in a big way, but they are slowly but surely becoming more competitive with the x86-64 giants.

TSMC's R&D Vice President Cliff Hou stated that the collaboration between ARM and TSMC will allow TSMC to optimize its FinFET process to target "high speed, low voltage and low leakage." ARM added that the partnership gives it early access to the 3D transistor FinFET process, which could help create advanced SoC designs and ramp up volume production.

I think this is a very positive move for ARM, and it should allow them to make much larger inroads into the higher-end computing markets and see higher adoption beyond mobile devices. On the other hand, it is going to depend on TSMC to keep up and get the process down. Considering the issues with creating enough 28nm silicon to meet demand for AMD and NVIDIA’s latest graphics cards, a sub-20nm process may be asking a lot. Here’s hoping that it’s a successful venture for both companies, however.

You can find more information in the full press release.

Source: Maximum PC

AMD Blames Lackluster Earnings on Weak Economy

Subject: Processors | July 20, 2012 - 11:21 AM |
Tagged: quarterly earnings, loss, APU, amd

AMD recently released its Q2 2012 earnings (as did Intel), and things are continuing to look bleak for the number two x86-64 processor company. The company stated that the lower-than-expected numbers were the result of a weak economy and a time of year when people are not buying computers. There may be some truth to that, as the second quarter falls in the post-holiday lull and before the big back-to-school retail push. On the economy front it is harder for me to say, but without going political or armchair economist on you, the market seems better than it has been while still recovering, at least from a consumer perspective.

AMD reported revenue of $1.41 billion for the second quarter of 2012. That does not seem terrible on its own, but compared to Intel's $13.5 billion Q2 revenue, and considering that AMD's number is 11 percent lower than last quarter and 10 percent lower than Q2 2011, it is easy to see that things are not looking good for the company.

According to Paul Lilly over at MaximumPC, breaking AMD's numbers down by business segment makes things look even worse. The Computing Solutions business fell 13 percent versus both the previous quarter and Q2 2011. The graphics card division offered ever-so-slightly better news, staying flat versus last year and dropping 5 percent versus last quarter. The company attributed the respective revenue drops to lower desktop sales in China and Europe and a "seasonally down quarter."

PC Perspective's Josh Walrath recently wrote an editorial (note: pre-earnings call) on AMD's new plan to focus on APUs, take on less risk, and push out new products faster. As a forward-looking article, it discusses the impact of the company's upcoming Vishera and Kaveri processors as well as AMD's increased focus on heterogeneous system architectures. It remains to be seen whether that new path will help the company make money or hurt it. AMD cautions that Q3 2012 may not see increased revenue, but here's hoping the company can pull together a strong Q4 and sell chips during the big holiday shopping season.

I, for one, am excited about the prospects of Kaveri and believe that HSA could work; it is the one advantage AMD has over both NVIDIA and Intel, since NVIDIA does not have an x86-64 license and Intel's processor graphics leave room for improvement, to put it mildly. AMD may not have the best CPU cores, but the design is not inherently bad, and its move toward full convergence of the CPU and GPU puts it much farther ahead than the other big players.

Read more about AMD's Q2 2012 earnings (transcript).

Source: Maximum PC

AMD Social Media Reviewers Wanted - 2000 AMD APUs Available Free!

Subject: General Tech, Processors | July 12, 2012 - 03:51 PM |
Tagged: amd, llano, APU, comiccon

If you are in the San Diego area today or tomorrow, you should make it a point to stop by Belo San Diego (http://www.belosandiego.com/ 438 E Street), a night club near the convention area, to visit with AMD and the Geek and Sundry group.

day.jpg

Felicia Day, best known for her role in the web series The Guild, will be part of the ongoing AMD-sponsored event between 10am and 2am both today (the 12th) and tomorrow.  She is excited to be there - just look!

If you stop by the Belo nightclub during those hours you can take home a FREE AMD A8-3870K APU (with accompanying motherboard) if you agree to use your social media outlets (Twitter and Facebook) to tell your friends about the experience.  You will in fact become an AMD Social Media Reviewer!

Sorry, if you aren't in the San Diego area, you are out of luck on this promotion.  This is just another reason why attending ComicCon is so enticing!

Source: AMD

Can a 12-Core ARM Cluster hit critical mass?

Subject: Processors | June 26, 2012 - 05:08 PM |
Tagged: arm, cortex-a9, e-350, i7-3770k, z530, Ivy Bridge, atom, Zacate

Taking a half dozen PandaBoard ESes, each built around a Texas Instruments chip with a 1.2GHz dual-core ARM Cortex-A9 processor onboard, Phoronix built a 12-core ARM machine to test against AMD's E-350 APU as well as Intel's Atom Z530 and Core i7-3770K.  Before you assume that the ARMs will be totally outclassed by any of these processors, note that Phoronix is testing performance per watt; the ARM system uses a total of 31W when fully stressed and idles below 20W, which gives ARM a big lead on power consumption.

Phoronix tested these four systems and the results were rather surprising: it seems Intel's Ivy Bridge is a serious threat to ARM.  Not only did it provide more total processing power, its performance per watt tended to beat ARM's, and, more importantly to many, it is cheaper to build an i7-3770K system than it is to set up a 12-core ARM server.  The next generation of ARM chips has some serious competition.
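
To make the metric concrete, performance per watt is just a benchmark score divided by measured power draw. Here is a tiny sketch of the comparison; the 31W load figure comes from the article, while the benchmark scores and the i7's wattage are made-up placeholders, not Phoronix's results.

```python
# Performance-per-watt: benchmark score divided by measured power draw.
# The ARM cluster's 31 W load figure comes from the article; the scores
# and the i7 wattage below are illustrative placeholders only.
systems = {
    "12-core ARM cluster": {"score": 100.0, "watts": 31.0},
    "Core i7-3770K":       {"score": 450.0, "watts": 95.0},
}

for name, s in systems.items():
    print(f"{name}: {s['score'] / s['watts']:.2f} points per watt")
```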

Phoronix12ARM.jpg

"Last week I shared my plans to build a low-cost, 12-core, 30-watt ARMv7 cluster running Ubuntu Linux. The ARM cluster that is built around the PandaBoard ES development boards is now online and producing results... Quite surprising results actually for a low-power Cortex-A9 compute cluster. Results include performance-per-Watt comparisons to Intel Atom and Ivy Bridge processors along with AMD's Fusion APU."


Source: Phoronix

Intel Introduces Xeon Phi: Larrabee Unleashed

Subject: Processors | June 19, 2012 - 11:46 AM |
Tagged: Xeon Phi, xeon e5, nvidia, larrabee, knights corner, Intel, HPC, gpgpu, amd

Intel does not respond well when asked about Larrabee.  Though Intel received a lot of bad press from the gaming community over what it was trying to do, that does not necessarily mean Intel was wrong about how it set up the architecture.  The problem with Larrabee was that it was being positioned as a consumer-level product with an eye toward breaking into the HPC/GPGPU market.  At the consumer level, Larrabee would have been a disaster; Intel simply would not have been able to compete with AMD and NVIDIA for gamers' hearts.
 
The problem with Larrabee in the consumer space was a matter of focus, process decisions, and die size.  Larrabee is unique in that it is almost fully programmable and features only one fixed function unit, and that unit is all about texturing.  Everything else relies on the large array of x86 processors and their attached vector units.  This turns out to be very inefficient for rendering games, which is the majority of the work a consumer graphics card does.  While no outlet was able to get hold of a Larrabee sample and run benchmarks on it, the general feeling was that Intel would easily be a generation behind in performance.  Considering how large the die would have to be to even get to that point, it was simply not economical for Intel to produce these cards.
 
phi_01.jpg
 
Xeon Phi is essentially an advanced part based on the original Larrabee architecture.
 
This is not to say that Larrabee does not have a place in the industry.  The design lends itself very nicely to HPC applications.  With each chip hosting many x86 processors with powerful vector units attached, these products can provide tremendous performance in HPC applications that can leverage those units.  Because Intel used x86 cores rather than the more homogeneous designs that AMD and NVIDIA use (lots of stream units handling vector and scalar work, but no x86 units and no traditional networking fabric to connect them), Intel has a leg up on the competition when it comes to programming.  While GPGPU applications rely on the likes of OpenCL, C++ AMP, and NVIDIA's CUDA, Intel can lean on the many existing programming languages and toolchains that already target x86.  With wide vector units added to each x86 core, it is relatively simple to adjust existing code to use the new hardware, compared to porting it over to OpenCL.
 
So this leads us to the Intel Xeon Phi, the first commercially available product based on an updated version of the Larrabee technology, code-named Knights Corner.  This is a new MIC (Many Integrated Core) product built on Intel's latest 22 nm Tri-Gate process technology.  Details are scarce on how many cores the product actually contains, but it looks to be more than 50 very basic "Pentium"-style cores: small, in-order, and all connected by a robust networking fabric that allows fast data transfer between the memory interface, the PCI-E interface, and the cores.
 
intelphi.jpg
 
Each Xeon Phi promises more than 1 TFLOPS of performance (as measured by Linpack).  Combined with the new Xeon E5 series of processors, these products can provide a huge amount of computing power.  Furthermore, with the addition of the Cray interconnect technology that Intel acquired this year, clusters of these systems could power some of the fastest supercomputers on the market.  While it will take until at least the end of this year to integrate these products into a massive cluster, it will happen, and Intel expects these products to be at the forefront of driving performance from the petascale to the exascale.
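
A quick back-of-envelope check on that figure (the core count and clock here are assumptions on my part, since Intel has not published them): roughly 60 in-order cores, each with a 512-bit vector unit capable of fused multiply-adds (16 double-precision FLOPs per cycle), running at about 1 GHz works out to 60 × 1.0 GHz × 16 ≈ 0.96 TFLOPS, right in the neighborhood of the Linpack claim.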
 
phi_02.jpg
 
These are the building blocks Intel hopes to use to corner the HPC market.  With powerful CPUs and dozens, if not hundreds, of MIC units per cluster, the potential computing power should bring us to the exascale that much sooner.
 
Time will of course tell if Intel will be successful with Xeon Phi and Knights Corner.  The idea behind the product seems sound, and attaching powerful vector units to simple x86 cores should make the software migration to massively parallel computing a wee bit easier than what we are seeing now with the GPU-based products from AMD and NVIDIA.  Where those manufacturers hold advantages over Intel is in their many years of work with educational institutions (research), software developers (gaming, GPGPU, and HPC), and industry standards groups (Khronos).  Xeon Phi has a ways to go before being fully embraced by these organizations, and its future is certainly not set in stone.  We have yet to see third-party groups get hold of these products and put them to the test.  While Intel CPUs are certainly class leading, we still do not know the full potential of these MIC products compared to what is currently available in the market.
 

The one positive thing for Intel's competitors is that their enthusiasm for massively parallel computing now looks justified.  Intel just entered the ring with a unique architecture that will certainly help push high performance computing toward true heterogeneous computing.

Source: Intel

Live Blog: AMD Fusion Developer Summit 2012 (AFDS)

Subject: Graphics Cards, Processors, Shows and Expos | June 14, 2012 - 11:46 AM |
Tagged: live blog, arm, APU, amd, AFDS

afdslogo.png

Day 3 - Thursday, June 14th

We are here at AFDS 2012 for the day 3 keynotes - join us as we find out what else AMD has in store.

If you are looking for Tuesday or Wednesday keynotes and information on the announcement of the HSA Foundation, you can find it below, after the break!

AMD Licenses ARM Technology: AMD Leans on ARM for Security

Subject: Processors | June 13, 2012 - 10:00 AM |
Tagged: TrustZone, hsa, Cortex-A5, cortex, arm, APU, amd, AFDS

Last year, after that particular AFDS, there was much speculation that AMD and ARM would get a whole lot closer.  Today we have that confirmed in two ways.  The first is that AMD and ARM are founding members of the HSA Foundation, a rather ambitious project that looks to make it much easier for programmers to access the full compute power of a CPU/GPU combo, or as AMD likes to call it, the APU.  The second confirmation is one that has been theorized for quite some time, though few people guessed the actual implementation: AMD is licensing ARM cores and integrating them into its x86-based APUs.

HSAFoundation-FINAL-Desktop.png
 
AMD and ARM are serious about working with each other.  This is understandable as both of them are competing tooth and nail with Intel.
 
ARM has a security technology that it has been working on for several years now called ARM TrustZone, a set of hardware and software products that provide a greater amount of security for data transfers and transactions.  The hardware basis is built into ARM's licensed designs and is implemented in literally billions of devices (not all of them enabled).  The biggest needs this technology addresses are secure transactions and password-protected logins.  Money is obviously quite important, but with identity theft and fraud on the rise, secure logins to personal information or even social sites are reaching the same level of importance as large monetary transactions.
 
AMD will implement a Cortex-A5 processor in its APUs to handle the security aspects of ARM TrustZone.  The A5 is the smallest Cortex processor available, which makes sense for a full APU since it will not take up an extreme amount of die space.  Made on what I would assume to be a 28 nm process, a single A5 core would likely take up as little as 10 to 15 mm squared of the die.
 
This is not exactly the licensing agreement that many analysts had expected from AMD, but it is a start.  I would expect AMD to be more aggressive in the future with offerings based on ARM technologies.  Remembering that Rory Read of AMD pronounced the company's GPU technology "the crown jewel" of its IP lineup some time ago, it makes little sense for AMD to limit that technology to standalone GPUs and x86-based APUs.  If AMD is serious about heterogeneous computing, I would expect it to eventually move, perhaps not into the handheld ARM market at first, but certainly into more server-level products based on 64-bit ARM technology.
 
cor_a5.jpg
 
Cortex-A5: coming to an AMD APU near you in 2013/2014.  Though probably not in quad core fashion as shown above.
 
AMD made a mistake once by selling off its ultra-mobile graphics group, Imageon, to Qualcomm, which is now a major player in the ARM ecosystem with its Snapdragon products based on Adreno graphics ("Adreno" is an anagram of "Radeon").  With the release of low-power processors in both the Brazos and Trinity lines, AMD is again poised to deliver next-generation graphics to the low-power market.  Now the question is: what will that graphics unit be attached to?
 
 
Source: AMD

AFDS 2012: HSA Foundation Joins AMD, ARM, TI, Imagination and MediaTek with Open Architecture

Subject: Graphics Cards, Processors | June 12, 2012 - 01:31 PM |
Tagged: texas instruments, mediatek, imagination, hsa foundation, hsa, arm, amd, AFDS

Today is a big day for AMD as it, along with four other major players in the world of processors and SoCs, announced the formation of the HSA Foundation, a non-profit consortium created to define and promote an open approach to heterogeneous computing.  The primary goal is to make it easier for software developers to write programs that tap the parallel power of GPUs, both integrated and discrete; the HSA (Heterogeneous Systems Architecture) Foundation wants to enable users to take full advantage of all the processing resources available to them.

On stage at the AMD Fusion Developer Summit in Bellevue, WA, AMD announced the formation of the consortium in partnership with ARM, Imagination Technologies, MediaTek, and Texas Instruments, some of the biggest names in computing.

The companies will work together to drive a single architecture specification and simplify the programming model to help software developers take greater advantage of the capabilities found in modern central processing units (CPUs) and graphics processing units (GPUs), and unlock the performance and power efficiency of the parallel computing engines found in heterogeneous processors.

HSA Foundation_Logo.png

There are a lot of implications in this simple statement, and many open-ended questions that we hope to get answered this week at AFDS.  The idea of a "single architecture specification" sets a lot of things in motion and makes us question the direction both AMD and the traditionally ARM-based companies of the HSA Foundation will be moving in.  AMD has had the APU, and the eventual complete fusion of the CPU and GPU, on its roadmap for quite a few years and has publicly stated that in 2014 it will have its first fully HSA-capable part.  We are still assuming that this is an x86 + Radeon based part, but that may or may not be the long-term goal; ideas of ARM-based AMD processors with Radeon graphics technology AND of Radeon-based ARM processors built by other companies still swirl around the show.  There are even rumors of Frankenstein-like combinations of x86 and ARM based products for niche applications.

hsa01.jpg

Looks like there is room for a few more founding partners...

Obviously ARM and others have their own graphics IP (ARM has Mali, Imagination Technologies has PowerVR), and those GPUs can be used for parallel processing in much the same way we think of GPU computing on discrete GPUs and APUs today.  ARM processor designers are well aware of the power and efficiency benefits of utilizing all of the available transistors and processing power correctly, and the emphasis on an HSA-style system design makes a lot of sense moving forward.

My main question for the HSA Foundation concerns its goals: obviously it wants to promote a simpler approach for programmers, but what does that actually translate to on the hardware side?  It is possible that both x86 and ARM-based ISAs could continue to exist, with libraries and compilers built to correctly handle applications for each architecture, but that would seem to run against the goals of such a partnership of technology leaders.

amdarmcombo.jpg

In a meeting with AMD personnel, the most powerful and inspiring idea from the HSA Foundation is summed up with this:

"This is bigger than AMD.  This is bigger than the PC ecosystem."

The end game is to make sure that all software developers can EASILY take advantage of both traditional and parallel processing cores without ever having to know what is going on under the hood.  AMD and the other HSA Foundation members continue to tell us that this optimization can be completely ISA-agnostic, though the technical obstacles to making that happen are severe.

AMD will benefit from the success of the HSA Foundation by finally getting more partners involved in promoting the idea of heterogeneous computing, and powerful ones at that.  ARM is the biggest player in the low-power processor market, responsible for the Cortex and Mali architectures found in the vast majority of mobile processors.  As those partners trumpet the same cause as AMD, more software will be developed to take advantage of parallel computing, and AMD believes its GPU architecture gives it a definite performance advantage once that takes hold.

What I find most interesting is the unknown: how will this affect the roadmaps of all the hardware companies involved?  Are we going to see the AMD APU roadmap shift to ARM IP?  Will we see companies like Texas Instruments fully integrate the OMAP and PowerVR cores into a single memory space (or ARM with Cortex and Mali)?  Will we eventually see NVIDIA jump on board and lend its weight toward true heterogeneous computing?

hsa02.jpg

We have much more to learn about the HSA Foundation and its direction for the industry, but we can easily say that this is probably the most important processor-company collaboration announced in many years, and it happened without the 800-pound gorilla that is Intel in attendance.  By going after the ARM-based markets where Intel is already struggling to compete, AMD can hope to create a foothold with technological and partnership advantages and return to a seat of prominence.  This harkens back to the late 1990s, when AMD famously put together the "virtual gorilla" alliance with many partners to take on Intel.

Check out the full press release after the break!

AFDS 2012: AMD "Kaveri" APU to offer 1 TFLOPS Compute Performance

Subject: Graphics Cards, Processors | June 12, 2012 - 12:18 PM |
Tagged: Kaveri, APU, amd, AFDS

During the opening keynote at the AMD Fusion Developer Summit 2012, AMD's Dr. Lisa Su revealed a slide showing the performance of the upcoming 3rd generation Kaveri APU.

04.jpg

While Trinity is currently rated at 726 GFLOPS, the Kaveri APU, due late in 2012 or early 2013, will have at least 1 TFLOPS of total compute performance.  That is a 37% boost over the previous generation.
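
The arithmetic on that claim is straightforward: (1,000 GFLOPS - 726 GFLOPS) / 726 GFLOPS ≈ 0.377, so the quoted 37% is a slightly conservative rounding, and since the 1 TFLOPS figure is given as a floor, the improvement is a minimum as well.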

If you want more information, check out our keynote live blog!!

AFDS 2012: AMD Wireless Display to compete against Intel WiDi with open standards

Subject: General Tech, Processors, Displays | June 10, 2012 - 06:45 PM |
Tagged: widi, Intel, awd, amd wireless display, amd, AFDS

While perusing the listings and descriptions of sessions and presentations for the upcoming AMD Fusion Developer Summit, I came across an interesting one that surprised me.  Tomorrow, June 11th, at 5:15pm PST, you can stop by the Grand Hyatt in Bellevue to learn about the upcoming AMD Wireless Display technology.

AWD (AMD Wireless Display) is a multi-platform application family meant to enable wireless display technology in much the same way Intel has been pushing WiDi.  While Intel's take requires very specific Intel wireless controllers and is only now, with the release of Ivy Bridge, getting a full-steam push from Intel, AMD's take is quite different.

widi02.jpg

Intel introduced WiDi in 2010

According to the brief for this AFDS session, AMD wants to create an API and SDKs for application developers to integrate AWD into software, and to leverage the Wi-Fi Alliance for an open-standards-compliant front end.  Using AMD APUs, the goal is to provide lower latency for encoded video and audio while still using the required MPEG2-TS wrapper.  We are also likely to learn that AMD hopes to make AWD open to a wider array of wireless devices.

AMD often takes this "open" approach to new technologies, with mixed results: CUDA has been in place for many years while adoption of OpenCL is only starting to take hold, and 3D Vision is still the standard for 3D gaming on the PC.

After having quite a few chances to use Intel's Wireless Display (WiDi) technology myself, I can definitely say that the wireless approach is the one I am most excited about and the one with the most potential to revolutionize the way we work with displays and computing devices.  I am eager to see which partners AMD has been working with and what demonstrations they will have for AWD next week.

Comprehensive Ivy Bridge testing on Ubuntu 12.04 LTS

Subject: Processors | June 8, 2012 - 03:51 PM |
Tagged: ubuntu, linux, Intel, Ivy Bridge, compiler, virtualization

Phoronix have been very busy lately getting their heads around how Ivy Bridge functions on Linux, and since these processors are much better supported than their predecessors, that has resulted in a lot of testing.  The majority of the testing focused on the performance of GCC, LLVM/Clang, DragonEgg, PathScale EKOPath, and Open64 on an i7-3770K using a wide variety of programs and benchmarks.  Their initial findings favoured GCC over the other compilers, as it generally took the top spot, with LLVM having issues in some of the tests.  They then played with the instruction sets the processor was allowed to use; by disabling some of the new features they could emulate how an Ivy Bridge processor would perform if it were from a previous generation of chips, which is a good way to judge the improvement in raw processing power (a rough sketch of that approach appears below).  They finished up by testing virtualization performance, comparing bare metal against KVM (Kernel-based Virtual Machine) and Oracle VM VirtualBox.  You can see how they compared right here.
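
Here is a rough sketch of that instruction-set experiment. The GCC flags are real options, but the benchmark source (bench.c) and the specific flag combinations are illustrative guesses, not Phoronix's actual methodology.

```python
import subprocess
import time

# Build and time one benchmark under progressively restricted instruction
# sets to approximate older generations of hardware on the same CPU.
FLAG_SETS = {
    "ivy-bridge": ["-O3", "-march=native"],
    "no-avx":     ["-O3", "-march=native", "-mno-avx"],
    "pre-sse4":   ["-O3", "-march=native", "-mno-avx", "-mno-sse4.2", "-mno-sse4.1"],
}

for name, flags in FLAG_SETS.items():
    binary = f"./bench-{name}"
    # bench.c is a placeholder for whatever workload you want to measure
    subprocess.check_call(["gcc", *flags, "bench.c", "-o", binary])
    start = time.perf_counter()
    subprocess.check_call([binary])
    print(f"{name}: {time.perf_counter() - start:.2f} s")
```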

phoronix_ivy.png

"From an Intel Core i7 3770K "Ivy Bridge" system here is an 11-way compiler comparison to look at the performance of these popular code compilers on the latest-generation Intel hardware. Among the compilers being compared on Intel's Ivy Bridge platform are multiple releases of GCC, LLVM/Clang, DragonEgg, PathScale EKOPath, and Open64."


Source: Phoronix

AMD Fusion Developer Summit 2012 - What to expect

Subject: General Tech, Processors, Shows and Expos | June 7, 2012 - 06:49 PM |
Tagged: hsa, fusion, amd, AFDS

One of the best show experiences I had last year was a surprise to me: AMD's first annual Fusion Developer Summit (AFDS), hosted in the Seattle / Bellevue area.  I say that it was a surprise only because the inaugural years of vendor-specific shows like this tend to be pretty bland and lacking in interesting information, but that wasn't the case in 2011.  We saw ARM get on stage with AMD to talk about the idea of "dark silicon" and how to prevent it, we saw the first AMD Trinity notebook, and we even got details of the Tahiti GPU architecture well ahead of release.

We expect even better things in 2012.

afds.jpg

While I don't know exactly what surprises will be on display this year, I am looking forward to seeing the improvement from software developers after another 12 months of work on APU-accelerated applications.  HSA (heterogeneous system architecture) has been getting a lot of buzz from AMD and the industry as we push toward a combined memory address space and the ultimate goal of accelerating programs across both serial and parallel processors on the same die.

If you are in the Seattle / Bellevue area and you have the ability to attend AFDS, I would highly encourage you to do so.  You'll have access to:

  • Keynotes
  • Never before seen demos
  • Technical tracks and sessions to learn about HSA and programming for it

If you can't make it though, you should definitely follow the whole event right here at PC Perspective - the easiest way is to keep track of our AFDS tag to make sure you don't miss any of the potentially industry-shifting news!

You can also expect us to have a live blog from the event as well!

AMD Releases Brazos 2.0, dual-core Bobcat for low power platforms

Subject: Processors | June 6, 2012 - 05:08 PM |
Tagged: Zacate, Hudson-M3L, FCH, E2-1800, E2-1200, computex, brazos 2.0, brazos, Bobcat, amd

Today AMD is officially releasing its Brazos 2.0 parts. This is a case of good news/bad news for the company. The good news is that the updated product is based on the very successful Brazos 1.0 platform, which has sold over 30 million units and is included in some 160 designs. The bad news is that AMD did not improve the product dramatically over what we previously had.

b2_comp_chart.jpg

While Brazos 2.0 will not beat these Intel offerings in pure performance, it does match up nicely in terms of price and battery life.

It is well known that AMD cancelled its original Bobcat 2.0 28 nm parts last fall (Krishna and Wichita) and instead worked on improving the fabrication of the current Brazos APUs. Little is known as to why those 28 nm parts were cancelled, but perhaps the overriding reason is that there simply would not be enough 28 nm production through the first three quarters of 2012 for AMD to adequately meet demand for these parts (all the while sacrificing higher-margin GPU wafer orders on the 28 nm node). We must also consider that AMD could have been counting on GLOBALFOUNDRIES to have its flavor of the 28 nm HKMG process up and running, which at this time it does not.

These new Brazos 2.0 chips are still manufactured on TSMC's 40 nm process, but that process is very mature at this point. That maturity has allowed AMD and TSMC to squeeze every last drop of performance and efficiency out of the aging 40 nm node, and in doing so has given AMD a bit more headroom with the Zacate APUs that Brazos 2.0 is based on. The two new processors are the E2-1800 and the E2-1200.

The E2-1800 is a dual-core Bobcat APU whose GPU has 80 stream units based on the older HD 5000 series of parts. AMD has renamed the GPU the HD 7340, though it has little in common with the GCN (Graphics Core Next) based HD 7000 graphics units. AMD increased the CPU clock over the E-450 by 50 MHz and the GPU clock by 80 MHz, giving the E2-1800 a core clockspeed of 1.7 GHz with graphics running at a brisk 680 MHz. It remains an 18 watt TDP part, and the die size is the same 75 mm squared.

Click here to read more.

Source: AMD

Video Perspective: AMD A10-4600M vs Core i7-3720QM on Diablo III

Subject: Graphics Cards, Processors, Mobile | June 1, 2012 - 10:52 AM |
Tagged: video, trinity, Ivy Bridge, Intel, i7-3720QM, diablo iii, APU, amd, a10-4600m

So, apparently PC gamers are big fans of Diablo III, to the tune of 3.5 million copies sold in the first 24 hours.  That means there are a lot of people out there looking for information about the performance they can expect from various hardware configurations in Diablo III.  Since we happened to have the two newest mobile processors and platforms on hand, and because many people seemed to assume that "just about anything" would be able to play D3, we decided to put it to the test.

d3-1.png

In our previous reviews of the AMD Trinity and Intel Ivy Bridge reference systems, the general consensus was that the CPU portion of the chip was better on Intel's side while the GPU portion was still weighted toward the AMD Trinity APU.  These two CPUs, the A10-4600M and the Core i7-3720QM, are the highest-end mobile solutions from AMD and Intel respectively.

d3-2.png

The specifications weren't identical, but again, for a mobile platform this was the best we could do.  The one lone "stand out" spec is memory, with the AMD system having only 4GB compared to the Ivy Bridge system's 8GB.  Intel's HD 4000 graphics offer a noticeable upgrade from the HD 3000 on the Sandy Bridge platform, but AMD's new HD 7660G (based on Cayman) also sees a performance increase.

d3-3.png

We ran our tests at 1366x768 with "high" image quality settings and ran through a section of the early part of the game a few times with Fraps to get our performance results.  We also ran some tests on an external monitor at 1920x1080 with the "low" preset and AA disabled; both are reported in the video below.  Enjoy!

Know CPUs were made of sand? Yes, but I like your video.

Subject: Editorial, General Tech, Processors | May 30, 2012 - 06:42 PM |
Tagged: Intel, fab

Intel has released an animated video and a supplementary PDF document to explain how Intel CPUs are manufactured. The video is more "cute" than anything else, although the document is surprisingly well explained for the average interested person. If you have ever wanted to know how a processor is physically produced, I highly recommend taking about half an hour to watch the video and read the text.

If you have ever wondered how CPUs came to be from raw sand -- prepare to get learned.

Intel has published a video and an accompanying information document that explains the process almost step by step. The video itself will not teach you too much, as it was designed to illustrate the information in the online pamphlet.

Not shown are the poor Sandy Bridges that got smelted for your enjoyment.

Rest in got

My background in education is a large part of the reason why I am excited by this video. The accompanying document is really well explained, goes into just the right amount of detail, and does so very honestly. The authors did not shy away from declaring that Intel does not produce its own wafers, nor did they sugarcoat the fact that each die, even on the same wafer, could perform differently or possibly not at all.

You should do yourself a favor and check it out.

Source: Intel (pdf)

Dell uses ARM-based "Copper" servers to accelerate ecosystem

Subject: Processors, Systems | May 29, 2012 - 05:15 PM |
Tagged: server, dell, copper, arm

Dell announced today that it is going to help enable the ARM-based server ecosystem by giving key hyperscale customers access to Dell's own "Copper" ARM servers for development.

Dell today announced it is responding to the demands of our customers for continued innovation in support of hyperscale environments, and enabling the ecosystem for ARM-based servers. The ARM-based server market is approaching an inflection point, marked by increasing customer interest in testing and developing applications, and Dell believes now is the right time to help foster development and testing of operating systems and applications for ARM servers.

Dell is recognized as an industry leader in both the x86 architecture and the hyperscale server market segments. Dell began testing ARM server technology internally in 2010 in response to increasing customer demands for density and power efficiency, and worked closely with select Dell Data Center Solutions (DCS) hyperscale customers to understand their interest level and expectations for ARM-based servers. Today's announcement is a natural extension of Dell's server leadership and the company's continued focus on delivering next generation technology innovation.

While these servers are still not publicly available, Dell is fostering the development of software and verification processes by seeding these unique servers to a select few groups.  PC Perspective is NOT one of them.

armserver-hotswap.jpg

Each of these 3U rack-mount machines includes 48 independent servers, each based around a 1.6 GHz quad-core Marvell Armada XP SoC.  Each of the sleds (pictured below) holds four discrete server nodes, and each node supports up to 8GB of memory on a single DDR3 UDIMM along with one 2.5-in HDD bay and one Gigabit Ethernet connection.
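
Some quick math from those specs: 48 nodes at four per sled works out to 12 sleds per chassis, and a fully populated 3U box tops out at 48 × 4 = 192 ARM cores and 48 × 8GB = 384GB of memory.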

dell_copper_sled.jpg


Even though we are still very early in the life cycle of ARM architectures in the server room, Dell claims that these systems are built perfectly for web front-ends and Hadoop environments:

Customers have expressed great interest in understanding ARM-based server advantages and how they may apply to their hyperscale environments. Dell believes ARM infrastructures demonstrate promise for web front-end and Hadoop environments, where advantages in performance per dollar and performance per watt are critical. The ARM server ecosystem is still developing, and largely available in open-source, non-production versions, and the current focus is on supporting development of that ecosystem. Dell has designed its programs to support today's market realities by providing lightweight, high-performance seed units and easy remote access to development clusters.

There is little doubt that Intel will feel and address this competition in the coming years.

Source: Marketwatch