Subject: General Tech, Processors, Displays | June 10, 2012 - 06:45 PM | Ryan Shrout
Tagged: widi, Intel, awd, amd wireless display, amd, AFDS
While perusing the listings and descriptions of sessions and presentations for the upcoming AMD Fusion Developer Summit, I came across an interesting one that surprised me. Tomorrow, June 11th, at 5:15pm PST, you can stop by the Grand Hyatt in Bellevue to learn about the upcoming AMD Wireless Display technology.
AWD (AMD Wireless Display) is a multi-platform application family that enables wireless display technology in much the same way Intel has been pushing with WiDi. But while Intel's implementation requires very specific Intel wireless controllers and has only recently, with the release of Ivy Bridge, received a full-steam push, AMD's take on it is quite different.
Intel introduced WiDi in 2010
According to the brief on this AFDS session, AMD wants to create an API and SDKs for application developers to integrate AWD into software, and to leverage the Wi-Fi Alliance for an open-standards compliant front end. Using AMD APUs, the goal is to provide lower latency for encoded video and audio while still using the required MPEG2-TS wrapper. We are also likely to learn that AMD hopes to make AWD open to a wider array of wireless devices.
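As a point of reference on that wrapper: MPEG2-TS carries audio and video in fixed 188-byte packets with a 4-byte header. Here is a minimal, simplified sketch of that packetization (real streams pad short packets with adaptation-field stuffing rather than trailing fill bytes, and the PID value below is arbitrary):

```python
def ts_packet(pid, payload, continuity_counter, pusi=False):
    """Build one simplified 188-byte MPEG2-TS packet."""
    header = bytes([
        0x47,                                             # sync byte
        (0x40 if pusi else 0x00) | ((pid >> 8) & 0x1F),   # PUSI flag + PID high bits
        pid & 0xFF,                                       # PID low bits
        0x10 | (continuity_counter & 0x0F),               # payload-only + continuity counter
    ])
    body = payload[:184]
    # Simplification: pad short payloads with 0xFF to reach 188 bytes total
    return header + body + bytes([0xFF]) * (184 - len(body))

pkt = ts_packet(0x100, b"\x00\x01\x02", continuity_counter=0, pusi=True)
```

The fixed packet size and per-packet header overhead are part of why a hardware-accelerated encode path on the APU matters for keeping latency down.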
AMD often takes this "open" approach to new technologies, with mixed results - NVIDIA's proprietary CUDA has been entrenched for many years while adoption of OpenCL is only starting to take hold, and 3D Vision is still the de facto standard for 3D gaming on the PC.
After having quite a few chances to use Intel's Wireless Display (WiDi) technology myself, I can definitely say that the wireless approach is the one I am most excited about and the one with the most potential to revolutionize the way we work with displays and computing devices. I am eager to see what partners AMD has been working with and what demonstrations they will have for AWD next week.
Subject: Processors | June 8, 2012 - 03:51 PM | Jeremy Hellstrom
Tagged: ubuntu, linux, Intel, Ivy Bridge, compiler, virtualization
Phoronix have been very busy lately, getting their heads around the functionality of Ivy Bridge on Linux, and as these processors are much more compatible than their predecessors it has resulted in a lot of testing. The majority of the testing focused on the performance of GCC, LLVM/Clang, DragonEgg, PathScale EKOPath, and Open64 on an i7-3770K using a wide variety of programs and benchmarks. Their initial findings favoured GCC over all other compilers, as in general it took the top spot, with LLVM having issues in some of their tests. They then started to play around with the instruction sets the processor was allowed to use; by disabling some of the new features they could emulate how the Ivy Bridge processor would perform if it were from a previous generation of chips, which is useful for judging the improvement in raw processing power. They finished up by testing its virtualization performance with BareMetal, the Kernel-based Virtual Machine (KVM), and Oracle VM VirtualBox. You can see how they compared right here.
"From an Intel Core i7 3770K "Ivy Bridge" system here is an 11-way compiler comparison to look at the performance of these popular code compilers on the latest-generation Intel hardware. Among the compilers being compared on Intel's Ivy Bridge platform are multiple releases of GCC, LLVM/Clang, DragonEgg, PathScale EKOPath, and Open64."
Here are some more Processor articles from around the web:
- Intel's ultrabook-bound Core i5-3427U processor @ The Tech Report
- Intel Core i5 3470 Review: HD 2500 Graphics Tested @ AnandTech
- Comparing Ivy Bridge vs. Sandy Bridge @ TechReviewSource
- EE Bookshelf: ARM Cortex M Architecture Overview @ Adafruit
- The Workstation & Server CPU Comparison Guide @ TechARP
- The Bulldozer Aftermath: Delving Even Deeper @ AnandTech
- AMD E-Series APU "Brazos 2.0" @ Bjorn3D
- AMD A8-3870K Black Edition & Hybrid Crossfire @ OC3D
- AMD A4 3400 APU @ Kitguru
Subject: General Tech, Processors, Shows and Expos | June 7, 2012 - 06:49 PM | Ryan Shrout
Tagged: hsa, fusion, amd, AFDS
One of the best show experiences I had last year was a surprise to me - AMD's first annual Fusion Developer Summit (AFDS) was hosted in the Seattle / Bellevue area. I say that it was a surprise only because the inaugural year of a vendor-specific show like this tends to be pretty bland and lack interesting information, but that wasn't the case in 2011. We saw ARM get on stage with AMD to talk about the idea of "dark silicon" and how to prevent it, we saw the first AMD Trinity notebook, and we even got details of the Tahiti GPU architecture well ahead of release.
We expect even better things in 2012.
While I don't know exactly what surprises will be on display this year, I am looking forward to seeing the improvement from software developers after having another 12 months to work on APU-accelerated applications. HSA (Heterogeneous System Architecture) has been getting a lot of buzz from AMD and the industry as we push towards a combined memory address space and the ultimate acceleration of programs across both serial and parallel processors on the same die.
If you are in the Seattle / Bellevue area and you have the ability to attend AFDS, I would highly encourage you to do so. You'll have access to:
- Never before seen demos
- Technical tracks and sessions to learn about HSA and programming for it
If you can't make it though, you should definitely follow the whole event right here at PC Perspective - the easiest way is to keep track of our AFDS tag to make sure you don't miss any of the potentially industry-shifting news!
You can also expect us to have a live blog from the event as well!
Subject: Processors | June 6, 2012 - 05:08 PM | Josh Walrath
Tagged: Zacate, Hudson-M3L, FCH, E2-1800, E2-1200, computex, brazos 2.0, brazos, Bobcat, amd
Today AMD is officially releasing their Brazos 2.0 parts. This is a case of good news/bad news for the company. The good news is that they have an updated product based on their very successful Brazos 1.0 platform, which has sold over 30 million units and is included in some 160 designs. The bad news is that AMD did not improve the product dramatically over what we previously had.
While Brazos will not beat these Intel offerings in pure performance, they do match up nicely in terms of price and battery life.
It is well known that AMD cancelled their original Bobcat 2.0 28 nm parts last fall (Krishna and Wichita), and instead worked on improving the fabrication of the current Brazos APUs. Little is known as to why those original 28 nm parts were cancelled, but perhaps the overriding reason is that there simply would not be enough 28 nm production through the first three quarters of 2012 to enable AMD to adequately meet demand on these parts (all the while sacrificing higher margin GPU wafer orders on the 28 nm node). We also must consider that AMD could have been counting on GLOBALFOUNDRIES to have their flavor of 28 nm HKMG process up and running, which of course at this time it is not.
These new Brazos 2.0 chips are still manufactured on TSMC’s 40 nm process, but that particular process is very mature at this time. This has allowed AMD and TSMC to squeeze every last drop of performance and efficiency out of the aging 40 nm node, and in so doing has allowed AMD a bit more headroom when it comes to the Zacate APUs that Brazos 2.0 is based off of. The two new processors are the E2-1800 and the E2-1200.
The E2-1800 is an APU featuring a dual-core Bobcat CPU and a GPU with 80 stream processors derived from the older HD 5000 series of parts. AMD has renamed the GPU the HD 7340, though it has little in common with the GCN (Graphics Core Next) based HD 7000 series graphics units. Compared to the E-450, AMD increased the CPU clock by 50 MHz and the GPU clock by 80 MHz, giving the E2-1800 a core clockspeed of 1.7 GHz and a graphics clock of a brisk 680 MHz. This continues to be an 18 watt TDP part, and the die size is the same 75 mm².
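Taking the E-450's 1.65 GHz CPU clock and the 600 MHz GPU clock implied by the 80 MHz bump as baselines, the deltas work out to modest percentage gains:

```python
# Baseline E-450 clocks (MHz), with the deltas quoted above applied
e450_cpu_mhz, e450_gpu_mhz = 1650, 600
e2_1800_cpu_mhz = e450_cpu_mhz + 50   # 1700 MHz = 1.7 GHz
e2_1800_gpu_mhz = e450_gpu_mhz + 80   # 680 MHz
cpu_gain = e2_1800_cpu_mhz / e450_cpu_mhz - 1   # roughly a 3% CPU clock bump
gpu_gain = e2_1800_gpu_mhz / e450_gpu_mhz - 1   # roughly a 13% GPU clock bump
```

Small numbers like these are why "Brazos 2.0" reads more like a speed bump than a new generation.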
Subject: Graphics Cards, Processors, Mobile | June 1, 2012 - 10:52 AM | Ryan Shrout
Tagged: video, trinity, Ivy Bridge, Intel, i7-3720QM, diablo iii, APU, amd, a10-4600m
So, apparently PC gamers are big fans of Diablo III, to the tune of 3.5 million copies sold in the first 24 hours. That means there are a lot of people out there looking for information about the performance they can expect on various hardware configurations with Diablo III. Since we happened to have the two newest mobile processors and platforms on hand, and because many people seemed to assume that "just about anything" would be able to play D3, we decided to put it to the test.
In our previous reviews of the AMD Trinity and Intel Ivy Bridge reference systems, the general consensus was that the CPU portion of the chip was better on Intel's side while the GPU portion was still weighted towards the AMD Trinity APU. These two CPUs, the A10-4600M and the Core i7-3720QM, are the highest-end mobile solutions from AMD and Intel, respectively.
The specifications weren't identical, but again, for a mobile platform, this was the best we could do. The one stand-out difference is memory: the AMD system had only 4GB compared to the Ivy Bridge system's 8GB. The Intel HD 4000 graphics offer a noticeable upgrade from the HD 3000 on the Sandy Bridge platform, but AMD's new HD 7660G (based on Cayman) also sees a performance increase.
We ran our tests at 1366x768 with "high" image quality settings and ran through a section of the early part of the game a few times with FRAPS to get our performance results. We also ran some tests on an external monitor at 1920x1080 with "low" presets and AA disabled - both are reported in the video below. Enjoy!
Subject: Editorial, General Tech, Processors | May 30, 2012 - 06:42 PM | Scott Michaud
Tagged: Intel, fab
Intel has released an animated video and supplementary PDF document to explain how Intel CPUs are manufactured. The video is more "cute" than anything else, although the document is surprisingly well explained for the average interested person. If you have ever wanted to know how a processor is physically produced, I highly recommend taking about half an hour to watch the video and read the text.
If you have ever wondered how CPUs came to be from raw sand -- prepare to get learned.
Intel has published a video and accompanying information document which explains their process almost step by step. The video itself will not teach you too much, as it was designed to illustrate the information in the online pamphlet.
Not shown are the poor Sandy Bridges that got smelted for your enjoyment.
Rest in peace.
My background in education is a large part of the reason why I am excited by this video. The accompanying document is really well explained, goes into just the right amount of detail, and does so very honestly. The authors did not shy away from declaring that Intel does not produce its own wafers, nor did they sugarcoat the fact that dies, even on the same wafer, can perform differently or possibly not at all.
You should do yourself a favor and check it out.
Subject: Processors, Systems | May 29, 2012 - 05:15 PM | Ryan Shrout
Tagged: server, dell, copper, arm
Dell announced today that it is going to help build out the ARM-based server ecosystem by enabling key hyperscale customers to access and develop on Dell's own "Copper" ARM servers.
Dell today announced it is responding to the demands of our customers for continued innovation in support of hyperscale environments, and enabling the ecosystem for ARM-based servers. The ARM-based server market is approaching an inflection point, marked by increasing customer interest in testing and developing applications, and Dell believes now is the right time to help foster development and testing of operating systems and applications for ARM servers.
Dell is recognized as an industry leader in both the x86 architecture and the hyperscale server market segments. Dell began testing ARM server technology internally in 2010 in response to increasing customer demands for density and power efficiency, and worked closely with select Dell Data Center Solutions (DCS) hyperscale customers to understand their interest level and expectations for ARM-based servers. Today's announcement is a natural extension of Dell's server leadership and the company's continued focus on delivering next generation technology innovation.
While these servers are still not publicly available, Dell is fostering the development of software and verification processes by seeding these unique servers to a select few groups. PC Perspective is NOT one of them.
Each of these 3U rack mount machines includes 48 independent servers, each based around a 1.6 GHz quad-core Marvell Armada XP SoC. Each of the sleds (pictured below) holds four discrete server nodes, each capable of as much as 8GB of memory on a single DDR3 UDIMM. Each node can access one 2.5-in HDD bay and one Gigabit Ethernet connection.
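The density math on these chassis, from the figures above, is straightforward:

```python
# Per-chassis figures for Dell's 3U "Copper" ARM server
servers_per_chassis = 48
nodes_per_sled = 4
sleds_per_chassis = servers_per_chassis // nodes_per_sled   # 12 sleds in the 3U enclosure
max_mem_gb = servers_per_chassis * 8                        # 384 GB max memory per chassis
total_cores = servers_per_chassis * 4                       # 192 quad-core ARM cores in 3U
```

That is a lot of independent nodes in three rack units - exactly the density pitch that makes ARM interesting for hyperscale.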
Even though we are still very early into the life cycle of ARM architectures in the server room, Dell claims that these systems are built perfectly for web front-ends and Hadoop environments:
Customers have expressed great interest in understanding ARM-based server advantages and how they may apply to their hyperscale environments. Dell believes ARM infrastructures demonstrate promise for web front-end and Hadoop environments, where advantages in performance per dollar and performance per watt are critical. The ARM server ecosystem is still developing, and largely available in open-source, non-production versions, and the current focus is on supporting development of that ecosystem. Dell has designed its programs to support today's market realities by providing lightweight, high-performance seed units and easy remote access to development clusters.
There is little doubt that Intel will feel and address this competition in the coming years.
Subject: Processors, Mobile | May 29, 2012 - 11:33 AM | Ryan Shrout
Tagged: z2670, windows 8, dell, clover trail, atom
In a leaked slide posted by Neowin.net, details of Dell's upcoming Latitude 10 tablet are coming to light, including hardware specifications like the Intel Atom Z2670 "Clover Trail" SoC.
This 10.1-in Windows 8 based tablet will include a 1366x768 display with a capacitive multi-touch screen and an optional stylus accessory. Weighing in at just over 1.5 pounds, the Latitude 10 is just slightly heavier than the latest generation of iPad (1.46 pounds).
Intel's upcoming Atom processor, the Z2670, will be at the core of the design and will be based on "Clover Trail", a slightly faster and updated version of the "Medfield" design we have seen implemented in mobile phones in early 2012. With a dual-core design capable of Hyper-Threading, and the ability to enter a "Burst Mode" that offers "quick bursts of extra performance when called upon", the Atom Z2670 should be capable of delivering a reasonable Windows 8 experience.
Other specifications include 2 GB of low-power DDR2-800 memory, up to a 128 GB SSD, 2- and 4-cell swappable batteries, and front- and rear-facing cameras.
With Computex 2012 right around the corner in Taipei, Taiwan, we expect to see quite a few more tablets and hybrid machines based on Windows 8 including Intel Atom-powered devices as well as ARM-based devices running Windows 8 RT.
Subject: General Tech, Processors, Mobile | May 24, 2012 - 06:01 PM | Scott Michaud
Intel has released a report about their environmental efforts in terms of manufacturing efficiency, waste, and the efficiency of their products themselves. Their 2020 mobile and data center product line is expected to use 25-fold less power than their 2010 product line. Intel is also hoping to use less water, consume 1.4 TWh less energy between 2012 and 2015 in their manufacturing, and send no chemical waste to landfill by 2020.
It is not easy being green.
… But, especially now, Intel can afford to try.
The chip manufacturer has set some goals for themselves to decrease their impact on the environment. These plans were published in their 2011 Corporate Responsibility Report (pdf), released last week. The plan highlights goals extending out as far as 2020.
It would seem that for Intel foresight is also 2020.
Yes, those puns were terrible, I admit it.
One of the forefront issues raised is alterations to their supply chain. Their raw materials have been addressed -- not just for eco-friendliness -- but also for human rights violations. By the end of 2012 Intel intends to validate that all of its tantalum is "conflict-free", with the other three conflict minerals (tin, tungsten, and gold) verified by the end of 2013.
On the topic of environmental impact, Intel also intends to reduce electrical and water usage at their manufacturing plants, cutting a total of 1.4 TWh of energy use from 2012 through 2015. Intel is also lauding their solar initiatives, although they fell short of committing to any specific future clean-energy endeavors in this report.
Lastly, Intel claims that their 2020 mobile and data center products will consume 25-fold less power than their 2010 counterparts. Obviously such a statement falls more under gloating than a vow to promote sustainability, but it is respectable nonetheless.
Subject: Editorial, General Tech, Graphics Cards, Processors | May 19, 2012 - 04:52 PM | Scott Michaud
Tagged: ultrabook, trinity, cloud computing, cloud, amd
Bloomberg Businessweek reports that AMD CEO Rory Read claims his company will produce chips suited to consumer needs rather than to crunching larger and larger bundles of information. They also like eating Intel's bacon -- the question: is it from a pig or a turkey?
Read believes there is “enough processing power on every laptop on the planet today”.
The argument revolves around the shift to the cloud, as usual. It is very alluring to shift focus from the instrument to the data itself. More enticing: discussing how the instruments change to suit that need; this is especially true if you develop instruments and yearn to shift anyway.
Don’t question the bacon…
AMD has been trusting that their processors will be good enough and their products will differentiate in other ways such as with graphics capabilities which they claim will be more important for cloud services. AMD hopes that their newer laptops will steal some bacon from Intel and their ultrabook initiative.
The main problem with the cloud is that it is mostly something people feel they want rather than something they actually want. They believe they want a company to control their content for them, until it becomes inaccessible temporarily or permanently. They believe they want their information accessible in online services, but then freak out about the privacy implications.
The public appeal of the cloud is that it lets you feel as though you can focus on the content rather than the medium. The problem is that you do not have fewer distractions from your content -- just different ones -- and they rear their heads in isolation from each other: you experience a privacy concern here and an incompatibility or licensing issue there. For some problems and for some people it makes more sense to control your own data, and it will continue to be important to serve that market.
And if crunching ends up being necessary for the future it looks like Intel will be a little lonely at the top.