Subject: General Tech, Graphics Cards, Processors, Systems | August 1, 2012 - 12:35 AM | Scott Michaud
Eurogamer and Digital Foundry believe that a next-generation Xbox developer kit somehow got into the hands of an internet user looking to fence it for $10,000. If the rumors are true, a few interesting features are included in the kit: an Intel CPU and an NVIDIA graphics processor.
A little PC perspective on console gaming news…
If the source and the people corroborating it are telling the truth, Microsoft has somehow lost control of a single developer kit for its upcoming Xbox platform. Much like their Cupertino frenemies, who lost an iPhone 4 prototype in a bar only to see it sold to a tech blog for $5,000, Microsoft now watches the current owner of the Durango devkit shop it around for a mere $10,000. It is unlikely he found it on a bar stool.
In one further level of irony, the Xbox 360 alpha devkits were repurposed Apple Power Mac G5s.
Image source: DaE as per its own in-image caption.
Alpha developer kits change substantially on the outside before release, but they often give clues about what to expect internally.
The first Xbox 360 software demonstrations were performed on slightly altered Apple Power Mac G5s. At the time, Apple's lineup was built on a foundation of IBM's PowerPC architecture while the original Xbox ran Intel hardware. As it turned out, the Xbox 360 was indeed based on the PowerPC architecture.
Huh, looks like a PC.
The leaked developer kit for the next Xbox is said to be running x86 hardware and an NVIDIA graphics processor. The kit reportedly contains 8GB of RAM, although devkits typically carry more memory than retail hardware, so that only suggests the next Xbox will ship with less than 8GB. Given how cheap RAM is these days, a real concern for PC gamers is that Microsoft could load the console to the brim with memory and remove the main technical advantage of our platform. Our PCs will retain that advantage only once gamers stop being scared of 64-bit compatibility issues. As a side note, those specifications are nearly identical to the equally nebulous specs rumored for Valve's Steam Box demo kit.
The big story is the return to x86 and NVIDIA.
AMD is not fully ruled out of the equation if they manage to provide Microsoft with a bid it cannot refuse. Practically speaking, though, AMD has an iceball's chance in Hell of having a CPU presence in the upcoming Xbox – upgraded from snowball. More likely than not, Intel and its superior manufacturing will pick up the torch that IBM kept warm for them.
PC gamers might want to pay close attention from this point on…
Contrast the Xbox's switch from PowerPC to x86 with the recent commentary from Gabe Newell and Rob Pardo of Blizzard. As Mike Capps alluded to prior to the launch of Unreal Tournament 3, Epic is concerned about the console mindset coming to the PC. It is entirely possible that Microsoft is positioning the Xbox platform closer to the PC. Perhaps there are plans for cross-compatibility in exchange for closing the platform around certification and licensing fees?
Moving the Xbox platform closer to the PC in hardware specifications could renew Microsoft's attempts to close the platform, attempts that failed with the Games for Windows Live initiative. What makes the PC platform great is the lack of oversight over what can be created for it and the ridiculously long window of compatibility for what has already been produced.
It might be no coincidence that the two companies who are complaining about Windows 8 are the two companies who design their games to be sold and supported for decades after launch.
And if the worst does happen? PC gaming has been a stable platform despite repeated claims of its death – but would the user base be stable enough to handle a shift to Linux? I doubt that most users even understand the implications of proprietary platforms on art well enough to consider it. And what about Adobe and the other software and hardware tool companies who have yet to even consider Linux a viable platform?
The dark tunnel might have just gotten longer.
Subject: Processors | July 24, 2012 - 04:07 PM | Tim Verry
Tagged: TSMC, ARMv8, arm, 64-bit, 3d transistors, 20nm
Yesterday ARM announced a multi-year partnership with fab TSMC to produce sub-20nm processors that utilize 3D FinFET transistors. The collaboration and data sharing between the two companies will give the fabless ARM the means to produce physical processors based on its designs, and will give TSMC a platform to advance its process nodes and FinFET transistor technology. The first TSMC-produced processors will be based on the ARMv8 architecture and will be 64-bit compatible.
The addition of 3D transistors will allow the ARM processors to be even more power efficient and better suited to mobile devices. Alternatively, it could allow for higher clockspeeds at the same TDP ratings as current chips. The other big news is that the chips will move to a 64-bit compatible design, which is huge considering ARM processors have traditionally been 32-bit. By moving to 64-bit, ARM is positioning itself for server and workstation adoption, especially with the ARM-compatible Windows 8 build due to be released soon. Granted, ARM SoCs have a long way to go before taking meaningful market share from Intel and AMD in the desktop and server markets, but they are slowly but surely becoming more competitive with the x86-64 giants.
TSMC’s R&D Vice President Cliff Hou stated that the collaboration between ARM and TSMC will allow TSMC to optimize its FinFET process to target “high speed, low voltage and low leakage.” ARM further qualified that the partnership would give ARM early access to the 3D transistor FinFET process that could help create advanced SoC designs and ramp up volume production.
I think this is a very positive move for ARM, and it should allow them to make much larger inroads into the higher-end computing markets and see higher adoption beyond mobile devices. On the other hand, it is going to depend on TSMC to keep up and get the process down. Considering the issues with creating enough 28nm silicon to meet demand for AMD and NVIDIA’s latest graphics cards, a sub-20nm process may be asking a lot. Here’s hoping that it’s a successful venture for both companies, however.
You can find more information in the full press release.
Subject: Processors | July 20, 2012 - 03:21 PM | Tim Verry
Tagged: quarterly earnings, loss, APU, amd
AMD recently released its Q2 2012 earnings (as did Intel), and things continue to look bleak for the number two x86-64 processor company. The company stated that the lower than expected numbers were the result of a weak economy and a time of year when people are not buying computers. There may be some truth to that, as the second quarter sits in the post-holiday lull and before the big back-to-school retail push. On the economy front it's harder for me to say, but without going political or armchair economist on you, the market seems better than it has been while still recovering – at least from a consumer perspective.
AMD reported revenue of $1.41 billion in the second quarter of 2012, which does not seem terrible, but when compared to Intel’s $13.5 billion Q2 revenue, and the fact that AMD’s numbers represent an 11-percent lower value than last quarter and 10-percent decrease versus Q2 2011, it’s easy to say that things are not looking good for the company.
According to Paul Lilly over at MaximumPC, when breaking AMD’s numbers down by business segment it gets even worse. Its Computing Solutions business fell 13-percent versus the previous quarter and Q2 2011. On the other hand, the company has the ever-so-slightly better news that the graphics card division stayed the same versus last year and was down 5-percent versus last quarter. The company was quoted as stating that the respective revenue drops were due to lower desktop sales in China and Europe and a “seasonally down quarter.”
PC Perspective's Josh Walrath recently wrote up an editorial (note: pre-earnings call) that talks about AMD's new plan to focus on APUs, take on less risk, and push out new products faster. As a future-looking article, it discusses the impact of the company's upcoming Vishera and Kaveri processors as well as AMD's increased focus on heterogeneous system architectures. It remains to be seen whether that new path will help the company make money or hurt it. AMD cautions that Q3 2012 may not see increased revenue, but here's hoping that they can pull together a strong Q4 and sell chips during the big holiday shopping season.
I for one am excited about the prospects of Kaveri and believe that HSA could work and is what AMD needs to focus on as it is one advantage that they have over NVIDIA and Intel – NVIDIA does not have an x86-64 license and Intel’s processor graphics leave room for improvement, to put it mildly. AMD may not have the best CPU cores, but it’s not an inherently bad design and where they are moving with the full convergence of the CPU and GPU is much farther ahead of the other big players.
Read more about AMD's Q2 2012 earnings (transcript).
Subject: General Tech, Processors | July 12, 2012 - 07:51 PM | Ryan Shrout
Tagged: amd, llano, APU, comiccon
If you are in the San Diego area today or tomorrow, you should make it a point to stop by Belo San Diego (http://www.belosandiego.com/ 438 E Street), a night club near the convention area, to visit with AMD and the Geek and Sundry group.
Felicia Day, best known for her role in the web series The Guild, will be part of the ongoing AMD-sponsored event between 10am and 2am both today (the 12th) and tomorrow. She is excited to be there - just look!
If you stop by the Belo nightclub during those hours you can take home a FREE AMD A8-3870K APU (with accompanying motherboard) if you agree to use your social media outlets (Twitter and Facebook) to tell your friends about the experience. You will in fact become an AMD Social Media Reviewer!
Sorry, if you aren't in the San Diego area, you are out of luck on this promotion. This is just another reason why attending ComicCon is so enticing!
Subject: Processors | June 26, 2012 - 09:08 PM | Jeremy Hellstrom
Tagged: arm, cortex-a9, e-350, i7-3770k, z530, Ivy Bridge, atom, Zacate
Taking a half dozen PandaBoard ES boards from Texas Instruments, each with a 1.2GHz dual-core ARM Cortex-A9 processor onboard, Phoronix built a 12-core ARM machine to test against AMD's E-350 APU as well as Intel's Atom Z530 and a Core i7-3770K. Before you assume that the ARM chips will be totally outclassed by any of these processors, note that Phoronix is testing performance per watt: the ARM system draws a total of 31W when fully stressed and idles below 20W, which gives ARM a big lead on power consumption.
Phoronix tested out these four systems and the results were rather surprising, as it seems Intel's Ivy Bridge is a serious threat to ARM. Not only did it provide more total processing power, its performance per watt tended to beat ARM, and, more importantly to many, it is cheaper to build an i7-3770K system than it is to set up a 12-core ARM server. The next generation of ARM chips has some serious competition.
"Last week I shared my plans to build a low-cost, 12-core, 30-watt ARMv7 cluster running Ubuntu Linux. The ARM cluster that is built around the PandaBoard ES development boards is now online and producing results... Quite surprising results actually for a low-power Cortex-A9 compute cluster. Results include performance-per-Watt comparisons to Intel Atom and Ivy Bridge processors along with AMD's Fusion APU."
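The performance-per-watt metric Phoronix uses is simply benchmark throughput divided by measured wall power. A quick sketch of the math, where the 31W figure comes from the article but the benchmark scores are placeholders for illustration, not Phoronix's actual results:

```python
# Performance per watt = benchmark score / measured wall power.
# The ARM cluster's 31 W load figure comes from the article; the
# scores below are made-up placeholders, NOT Phoronix's numbers.
systems = {
    "12-core ARM cluster": {"score": 100.0, "watts": 31.0},
    "Core i7-3770K":       {"score": 600.0, "watts": 120.0},
}

for name, s in systems.items():
    ppw = s["score"] / s["watts"]
    print(f"{name}: {ppw:.2f} points/W")
```

The interesting takeaway from the Phoronix results is that a high raw score can outweigh a low power draw in this metric, which is exactly how Ivy Bridge managed to beat the cluster.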
Here are some more Processor articles from around the web:
- AMD FX-8120 Black Edition CPU Review (with Asus M5A99X EVO) @ Kitguru
- Intel Core i7-3720QM: Mobile Ivy Bridge @ Techspot
- Sandy Bridge for servers: Intel Xeon E5-2600 review @ Hardware.Info
- Desktop CPU Comparison Guide @ TechARP
- Workstation & Server CPU Comparison Guide @ TechARP
- Mobile CPU Comparison Guide @ TechARP
Subject: Processors | June 19, 2012 - 03:46 PM | Josh Walrath
Tagged: Xeon Phi, xeon e5, nvidia, larrabee, knights corner, Intel, HPC, gpgpu, amd
The one positive thing for Intel’s competitors is that it seems their enthusiasm for massively parallel computing is justified. Intel just entered that ring with a unique architecture that will certainly help push high performance computing more towards true heterogeneous computing.
Subject: Graphics Cards, Processors, Shows and Expos | June 14, 2012 - 03:46 PM | Ryan Shrout
Tagged: live blog, arm, APU, amd, AFDS
Day 3 - Thursday, June 14th
We are here at AFDS 2012 for the day 3 keynotes - join us as we find out what else AMD has in store.
If you are looking for Tuesday or Wednesday keynotes and information on the announcement of the HSA Foundation, you can find it below, after the break!
Subject: Processors | June 13, 2012 - 02:00 PM | Josh Walrath
Tagged: TrustZone, hsa, Cortex-A5, cortex, arm, APU, amd, AFDS
Last year after that particular AFDS, there was much speculation that AMD and ARM would get a whole lot closer. Today we have confirmed that in two ways. The first is that AMD and ARM are founding members of the HSA Foundation. This endeavor is a rather ambitious project that looks to make it much easier for programmers to access the full compute power of a CPU/GPU combo, or, as AMD likes to call the combination, the APU. The second confirmation is one that has been theorized for quite some time, though few people hit upon the actual implementation: AMD is licensing ARM cores and actually integrating them into its x86-based APUs.
Subject: Graphics Cards, Processors | June 12, 2012 - 05:31 PM | Ryan Shrout
Tagged: texas instruments, mediatek, imagination, hsa foundation, hsa, arm, amd, AFDS
Today is a big day for AMD as they, along with four other major players in the world of processors and SoCs, announced the formation of the HSA Foundation. The HSA Foundation is a non-profit consortium created to define and promote an open approach to heterogeneous computing. The primary goal is to make it easier for software developers to write programs that harness the parallel power of GPUs, both integrated and discrete; the HSA (Heterogeneous Systems Architecture) Foundation wants to enable users to take full advantage of all the processing resources available to them.
On stage at the AMD Fusion Developer Summit in Bellevue, WA, AMD announced the formation of the consortium in partnership with ARM, Imagination Technologies, MediaTek, and Texas Instruments; some of the biggest names in computing.
The companies will work together to drive a single architecture specification and simplify the programming model to help software developers take greater advantage of the capabilities found in modern central processing units (CPUs) and graphics processing units (GPUs), and unlock the performance and power efficiency of the parallel computing engines found in heterogeneous processors.
There are a lot of implications in this simple statement, and many open-ended questions that we hope to get answered this week at AFDS. The idea of a "single architecture specification" sets a lot of things in motion and makes us question the direction that both AMD and the traditionally ARM-based companies of the HSA Foundation will be moving in. AMD has had the APU, and the eventual complete fusion of the CPU and GPU, on its roadmap for quite a few years and has publicly stated that in 2014 it will have its first fully HSA-capable part. We are still assuming that this is an x86 + Radeon based part, but that may or may not be the long term goal; ideas of ARM-based AMD processors with Radeon graphics technology AND of Radeon-based ARM processors built by other companies still swirl around the show. There are even rumors of Frankenstein-like combinations of x86 and ARM based products for niche applications.
Looks like there is room for a few more founding partners...
Obviously ARM and others have their own graphics IP (ARM has Mali, Imagination Technologies has PowerVR), and those GPUs can be used for parallel processing in much the same way that we think of GPU computing on discrete GPUs and APUs today. ARM processor designers are well aware of the power and efficiency benefits of utilizing all of the available transistors and processing power correctly, and the emphasis on an HSA-style system design makes a lot of sense going forward.
My main question for the HSA Foundation concerns its goals: obviously it wants to promote a simplified approach for programmers, but what does that actually translate to on the hardware side? It is possible that both x86 and ARM-based ISAs could continue to exist with libraries and compilers built to correctly handle applications for each architecture, but that would seem to run against the goals of such a partnership of technology leaders.
In a meeting with AMD personnel, the most powerful and inspiring idea from the HSA Foundation is summed up with this:
"This is bigger than AMD. This is bigger than the PC ecosystem."
The end game is to make sure that all software developers can EASILY take advantage of both traditional and parallel processing cores without ever having to know what is going on under the hood. AMD and the other HSA Foundation members continue to tell us that this optimization can be completely ISA-agnostic – though the technical blockages for that to take place are severe.
AMD will benefit from the success of the HSA Foundation by finally getting more partners involved in promoting the idea of heterogeneous computing, and powerful ones at that. ARM is the biggest player in the low power processor market responsible for the Cortex and Mali architectures found in the vast majority of mobile processors. As those partners trumpet the same cause as AMD, more software will be developed to take advantage of parallel computing and AMD believes their GPU architecture gives them a definite performance advantage once that takes hold.
What I find most interesting is the unknown – how will this affect the roadmaps for all the hardware companies involved? Are we going to see the AMD APU roadmap shift to an ARM-IP system? Will we see companies like Texas Instruments fully integrate the OMAP and PowerVR cores into a single memory space (or ARM with Cortex and Mali)? Will we eventually see NVIDIA jump onboard and lend their weight to true heterogeneous computing?
We have much more to learn about the HSA Foundation and its direction for the industry, but we can easily say that this is probably the most important processor company collaboration announced in many years – and it happened without the 800-pound gorilla that is Intel in attendance. By going after the ARM-based markets where Intel is already struggling to compete, AMD can hope to create a foothold with technological and partnership advantages and return to a seat of prominence. This harkens back to the late 1990s when AMD famously put together the "virtual gorilla" alliance with many partners to take on Intel.
Subject: Graphics Cards, Processors | June 12, 2012 - 04:18 PM | Ryan Shrout
Tagged: Kaveri, APU, amd, AFDS
During the opening keynote at the AMD Fusion Developer Summit 2012, AMD's Dr. Lisa Su revealed a slide with performance figures for the upcoming 3rd generation Kaveri APU.
While Trinity is currently rated at 726 GFLOPS, the Kaveri APU, due late in 2012 or early 2013, will have at least 1 TFLOPS of total compute performance. That is at least a 37% boost over the previous generation.
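The generational boost follows directly from the two compute figures on AMD's slide; a quick check of the arithmetic:

```python
trinity_gflops = 726.0   # Trinity's rated compute, per AMD's slide
kaveri_gflops = 1000.0   # Kaveri's quoted minimum (1 TFLOPS)

# Percentage improvement over the previous generation
boost = (kaveri_gflops / trinity_gflops - 1) * 100
print(f"Generational boost: {boost:.1f}%")  # just under 38%
```

Since 1 TFLOPS is quoted as a floor, the real improvement could end up higher.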
If you want more information, check out our keynote live blog!!
Subject: General Tech, Processors, Displays | June 10, 2012 - 10:45 PM | Ryan Shrout
Tagged: widi, Intel, awd, amd wireless display, amd, AFDS
While perusing the listings and descriptions of sessions and presentations for the upcoming AMD Fusion Developer Summit, I came across an interesting one that surprised me. Tomorrow, June 11th, at 5:15pm PST, you can stop by the Grand Hyatt in Bellevue to learn about the upcoming AMD Wireless Display technology.
AWD (AMD Wireless Display) is a multi-platform application family meant to enable wireless display technology in much the same way Intel has been pushing WiDi. Intel's take requires very specific Intel wireless controllers and only recently, with the release of Ivy Bridge, got the full-steam push from Intel; AMD's take on it is quite different.
Intel introduced WiDi in 2010
According to the brief for this AFDS session, AMD wants to create an API and SDKs for application developers to integrate AWD into software, and to leverage the Wi-Fi Alliance for an open-standards-compliant front end. Using AMD APUs, the goal is to provide lower latency for encoded video and audio while still using the required MPEG2-TS wrapper. We are also likely to learn that AMD hopes to make AWD open to a wider array of wireless devices.
AMD often takes this "open" approach to new technologies, with mixed results: NVIDIA's CUDA has been entrenched for many years while adoption of OpenCL is only starting to take hold, and 3D Vision is still the standard for 3D gaming on the PC.
After having quite a few chances to use Intel's Wireless Display (WiDi) technology myself, I can definitely say that the wireless approach is the one I am most excited about and the one with the most potential to revolutionize the way we work with displays and computing devices. I am eager to see what partners AMD has been working with and what demonstrations they will have for AWD next week.
Subject: Processors | June 8, 2012 - 07:51 PM | Jeremy Hellstrom
Tagged: ubuntu, linux, Intel, Ivy Bridge, compiler, virtualization
Phoronix have been very busy lately getting their heads around the functionality of Ivy Bridge on Linux, and since these processors are much better supported than their predecessors, that has resulted in a lot of testing. The majority of the testing focused on the performance of GCC, LLVM/Clang, DragonEgg, PathScale EKOPath, and Open64 on an i7-3770K using a wide variety of programs and benchmarks. Their initial findings favoured GCC over all other compilers, as in general it took the top spot, with LLVM having issues in some of their tests. They then played with the instruction sets the processor was allowed to use; by disabling some of the new features they could emulate how the Ivy Bridge processor would perform as if it were from a previous generation of chips, useful for judging the improvement in raw processing power. They finished up by testing its virtualization performance with bare metal, Kernel-based Virtual Machine (KVM) virtualization, and Oracle VM VirtualBox. You can see how they compared right here.
"From an Intel Core i7 3770K "Ivy Bridge" system here is an 11-way compiler comparison to look at the performance of these popular code compilers on the latest-generation Intel hardware. Among the compilers being compared on Intel's Ivy Bridge platform are multiple releases of GCC, LLVM/Clang, DragonEgg, PathScale EKOPath, and Open64."
Here are some more Processor articles from around the web:
- Intel's ultrabook-bound Core i5-3427U processor @ The Tech Report
- Intel Core i5 3470 Review: HD 2500 Graphics Tested @ AnandTech
- Comparing Ivy Bridge vs. Sandy Bridge @ TechReviewSource
- EE Bookshelf: ARM Cortex M Architecture Overview @ Adafruit
- The Workstation & Server CPU Comparison Guide @ TechARP
- The Bulldozer Aftermath: Delving Even Deeper @ AnandTech
- AMD E-Series APU "Brazos 2.0" @ Bjorn3D
- AMD A8-3870K Black Edition & Hybrid Crossfire @ OC3D
- AMD A4 3400 APU @ Kitguru
Subject: General Tech, Processors, Shows and Expos | June 7, 2012 - 10:49 PM | Ryan Shrout
Tagged: hsa, fusion, amd, AFDS
One of the best show experiences I had last year was a surprise to me - AMD's first annual Fusion Developer Summit (AFDS) was hosted in the Seattle / Bellevue area. I say that it was a surprise only because the inaugural year for vendor-specific shows like this tends to be pretty bland and lack interesting information, but that wasn't the case in 2011. We saw ARM get on stage with AMD to talk about the idea of "dark silicon" and how to prevent it, we saw the first AMD Trinity notebook, and we even got details of the Tahiti GPU architecture well ahead of release.
We expect even better things in 2012.
While I don't know exactly what surprises will be on display this year I am looking forward to seeing the improvement from software developers after having another 12 months to work on APU-accelerated applications. HSA (heterogeneous system architecture) has been getting a lot of buzz from AMD and the industry as we push towards a combined memory address space and the ultimate acceleration of programs across both serialized and parallel processors on the same die.
If you are in the Seattle / Bellevue area and you have the ability to attend AFDS, I would highly encourage you to do so. You'll have access to:
- Never before seen demos
- Technical tracks and sessions to learn about HSA and programming for it
If you can't make it, though, you should definitely follow the whole event right here at PC Perspective - the easiest way is to keep track of our AFDS tag so you don't miss any of the potentially industry-shifting news!
You can also expect us to have a live blog from the event as well!
Subject: Processors | June 6, 2012 - 09:08 PM | Josh Walrath
Tagged: Zacate, Hudson-M3L, FCH, E2-1800, E2-1200, computex, brazos 2.0, brazos, Bobcat, amd
Today AMD is officially releasing their Brazos 2.0 parts. This is a case of good news/bad news for the company. The good news is that they have an updated product that is based on their very successful Brazos 1.0 platform and that particular part has sold over 30 million units and is included in some 160 designs. The bad news is that AMD did not improve the product dramatically over what we previously had.
While Brazos will not beat these Intel offerings in pure performance, they do match up nicely in terms of price and battery life.
It is well known that AMD cancelled their original Bobcat 2.0 28 nm parts last fall (Krishna and Wichita), and instead worked on improving the fabrication of the current Brazos APUs. Little is known as to why those original 28 nm parts were cancelled, but perhaps the overriding reason is that there simply would not be enough 28 nm production through the first three quarters of 2012 to enable AMD to adequately meet demand on these parts (all the while sacrificing higher margin GPU wafer orders on the 28 nm node). We also must consider that AMD could have been counting on GLOBALFOUNDRIES to have their flavor of 28 nm HKMG process up and running, which of course at this time it is not.
These new Brazos 2.0 chips are still manufactured on TSMC’s 40 nm process, but that particular process is very mature at this time. This has allowed AMD and TSMC to squeeze every last drop of performance and efficiency out of the aging 40 nm node, and in so doing has allowed AMD a bit more headroom when it comes to the Zacate APUs that Brazos 2.0 is based off of. The two new processors are the E2-1800 and the E2-1200.
The E2-1800 is a dual-core Bobcat-based APU whose GPU features 80 stream processors derived from the older HD 5000 series of parts. AMD has renamed the GPU the HD 7340, though it has little in common with the GCN (Graphics Core Next) based HD 7000 graphics units. AMD increased the CPU core speed by 50 MHz over the E-450 and the GPU portion by 80 MHz, giving the E2-1800 a core clockspeed of 1.7 GHz and graphics that run at a brisk 680 MHz. This continues to be an 18 watt TDP part and the die size is the same 75 mm².
Subject: Graphics Cards, Processors, Mobile | June 1, 2012 - 02:52 PM | Ryan Shrout
Tagged: video, trinity, Ivy Bridge, Intel, i7-3720QM, diablo iii, APU, amd, a10-4600m
So, apparently PC gamers are big fans of Diablo III, to the tune of 3.5 million copies sold in the first 24 hours. That means there are a lot of people out there looking for information about the performance they can expect with Diablo III on various hardware configurations. Since we happened to have the two newest mobile processors and platforms on hand, and because many people seemed to assume that "just about anything" would be able to play D3, we decided to put it to the test.
In our previous reviews of the AMD Trinity and Intel Ivy Bridge reference systems, the general consensus was that the CPU portion of the chip was better on Intel's side while the GPU portion was still weighted towards the AMD Trinity APU. Both of these CPUs, the A10-4600M and the Core i7-3720QM, are the highest end mobile solutions from both AMD and Intel.
The specifications weren't identical, but for a mobile platform this was the best we could do; the AMD system's 4GB of memory, versus 8GB in the Ivy Bridge system, is the one stand-out difference. The Intel HD 4000 graphics offer a noticeable upgrade from the HD 3000 on the Sandy Bridge platform, but AMD's new HD 7660G (based on Cayman) also sees a performance increase.
We ran our tests at 1366x768 with "high" image quality settings and ran through a section of the early part of the game a few times with FRAPS to get our performance results. We also ran some tests on an external monitor at 1920x1080 with "low" presets and AA disabled - both are reported in the video below. Enjoy!
Subject: Editorial, General Tech, Processors | May 30, 2012 - 10:42 PM | Scott Michaud
Tagged: Intel, fab
Intel has released an animated video and a supplementary PDF document to explain how Intel CPUs are manufactured. The video is more "cute" than anything else, although the document is surprisingly well explained for the average interested person. If you have ever wanted to know how a processor is physically produced, then I highly recommend taking about half an hour to watch the video and read the text.
If you have ever wondered how CPUs came to be from raw sand -- prepare to get learned.
Intel has published a video and an accompanying information document which explains their process almost step by step. The video itself will not teach you too much, as it was designed to illustrate the information in the online pamphlet.
Not shown are the poor sandy bridges that were smelted for your enjoyment.
Rest in ingot.
My background in education is a large part of the reason I am excited by this video. The accompanying document is really well explained, goes into just the right amount of detail, and does so very honestly. The authors did not shy away from declaring that Intel does not produce its own raw wafers, nor did they sugarcoat that each die, even on the same wafer, can perform differently or possibly not at all.
You should do yourself a favor and check it out.
Subject: Processors, Systems | May 29, 2012 - 09:15 PM | Ryan Shrout
Tagged: server, dell, copper, arm
Dell announced today that it is going to help enable the ARM-based server ecosystem by giving key hyperscale customers access to its own "Copper" ARM servers for development.
Dell today announced it is responding to the demands of our customers for continued innovation in support of hyperscale environments, and enabling the ecosystem for ARM-based servers. The ARM-based server market is approaching an inflection point, marked by increasing customer interest in testing and developing applications, and Dell believes now is the right time to help foster development and testing of operating systems and applications for ARM servers.
Dell is recognized as an industry leader in both the x86 architecture and the hyperscale server market segments. Dell began testing ARM server technology internally in 2010 in response to increasing customer demands for density and power efficiency, and worked closely with select Dell Data Center Solutions (DCS) hyperscale customers to understand their interest level and expectations for ARM-based servers. Today's announcement is a natural extension of Dell's server leadership and the company's continued focus on delivering next generation technology innovation.
While these servers are still not publicly available, Dell is fostering the development of software and verification processes by seeding these unique servers to a select few groups. PC Perspective is NOT one of them.
Each of these 3U rack mount machines includes 48 independent servers, each based around a 1.6 GHz quad-core Marvell Armada XP SoC. Each of the sleds (pictured below) holds four discrete server nodes, each capable of as much as 8GB of memory on a single DDR3 UDIMM. Each node can access one 2.5-in HDD bay and one Gigabit Ethernet connection.
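Those figures add up quickly. As a back-of-the-envelope sketch using the specs quoted above (48 nodes per 3U chassis, quad-core SoCs, up to 8GB per node) and assuming a standard 42U rack, the density looks like this:

```python
# Back-of-the-envelope density figures for Dell's "Copper" ARM server.
# Node counts, core counts, and memory ceiling come from the specs above;
# the 42U rack height is an assumption, not part of Dell's announcement.

NODES_PER_CHASSIS = 48      # 48 independent servers per 3U chassis
CHASSIS_HEIGHT_U = 3
CORES_PER_NODE = 4          # quad-core Marvell Armada XP SoC
MAX_RAM_PER_NODE_GB = 8     # one DDR3 UDIMM per node
RACK_HEIGHT_U = 42          # assumed standard rack

chassis_per_rack = RACK_HEIGHT_U // CHASSIS_HEIGHT_U
nodes_per_rack = chassis_per_rack * NODES_PER_CHASSIS
cores_per_rack = nodes_per_rack * CORES_PER_NODE
ram_per_rack_gb = nodes_per_rack * MAX_RAM_PER_NODE_GB

print(f"{chassis_per_rack} chassis per rack")    # 14
print(f"{nodes_per_rack} server nodes per rack") # 672
print(f"{cores_per_rack} cores per rack")        # 2688
print(f"{ram_per_rack_gb} GB RAM per rack (max)")# 5376
```

Hundreds of discrete, low-power nodes per rack is exactly the kind of density pitch that makes the web front-end and Hadoop use cases below plausible.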
Even though we are still very early into the life cycle of ARM architectures in the server room, Dell claims that these systems are built perfectly for web front-ends and Hadoop environments:
Customers have expressed great interest in understanding ARM-based server advantages and how they may apply to their hyperscale environments. Dell believes ARM infrastructures demonstrate promise for web front-end and Hadoop environments, where advantages in performance per dollar and performance per watt are critical. The ARM server ecosystem is still developing, and largely available in open-source, non-production versions, and the current focus is on supporting development of that ecosystem. Dell has designed its programs to support today's market realities by providing lightweight, high-performance seed units and easy remote access to development clusters.
There is little doubt that Intel will feel and address this competition in the coming years.
Subject: Processors, Mobile | May 29, 2012 - 03:33 PM | Ryan Shrout
Tagged: z2670, windows 8, dell, clover trail, atom
In a leaked slide posted by Neowin.net, details of Dell's upcoming Latitude 10 tablet are coming to light, including hardware specifications like the Intel Atom Z2670 "Clover Trail" SoC.
This 10.1-in Windows 8-based tablet will include a 1366x768 display with a capacitive multi-touch screen and an optional stylus accessory. Weighing in at just over 1.5 pounds, the Latitude 10 is just slightly heavier than the latest generation of iPad (1.46 pounds).
Intel's upcoming Atom processor, the Z2670, will be at the core of the design. It is based on "Clover Trail", a slightly faster and updated version of the "Medfield" design we have seen implemented in mobile phones in early 2012. With two cores capable of Hyper-Threading, and the ability to enter a "Burst Mode" which offers "quick bursts of extra performance when called upon", the Atom Z2670 should be capable of delivering a reasonable Windows 8 experience.
Other specifications include 2 GB of low-power DDR2-800 memory, up to a 128 GB SSD, swappable 2- and 4-cell batteries, and front- plus rear-facing cameras.
With Computex 2012 right around the corner in Taipei, Taiwan, we expect to see quite a few more tablets and hybrid machines based on Windows 8, including Intel Atom-powered devices as well as ARM-based devices running Windows RT.
Subject: General Tech, Processors, Mobile | May 24, 2012 - 10:01 PM | Scott Michaud
Intel has released a report about their environmental efforts in terms of manufacturing efficiency, waste, and the efficiency of their products themselves. Their 2020 mobile and data center product lines are expected to use 25 times less power than their 2010 lineup. Intel is also hoping to use less water, consume 1.4 TWh less energy in manufacturing between 2012 and 2015, and send no chemical waste to landfill by 2020.
It is not easy being green.
… But, especially now, Intel can afford to try.
The chip manufacturer has set some goals for themselves to decrease their impact on the environment. These plans were published in their 2011 Corporate Responsibility Report (pdf), released last week. The plan highlights goals extending out as far as 2020.
It would seem that for Intel, foresight is also 20/20.
Yes, those puns were terrible, I admit it.
One of the forefront issues raised is alterations to their supply chain. Their raw materials have been addressed -- not just for eco-friendliness -- but also for human rights violations. By the end of 2012, Intel intends to validate that all of its tantalum is "conflict-free", with the other three conflict minerals (tin, tungsten, and gold) verified by the end of 2013.
On the topic of environmental impact, Intel also intends to reduce electricity and water usage at their manufacturing plants, expecting to save a total of 1.4 TWh of energy from 2012 through 2015. Intel is also lauding their solar initiatives, although the report stops short of committing to any specific future clean-energy endeavors.
Lastly, Intel claims that their 2020 mobile and data center products will consume 25 times less power than their 2010 counterparts. Obviously such a statement falls more under gloating than a vow to promote sustainability, but it is respectable nonetheless.
Subject: Editorial, General Tech, Graphics Cards, Processors | May 19, 2012 - 08:52 PM | Scott Michaud
Tagged: ultrabook, trinity, cloud computing, cloud, amd
Bloomberg Businessweek reports AMD CEO Rory Read claims that his company will produce chips which are suited for consumer needs and not to crunch larger and larger bundles of information. They also like eating Intel’s bacon -- the question: is it from a pig or a turkey?
Read believes there is “enough processing power on every laptop on the planet today”.
The argument revolves around the shift to the cloud, as usual. It is very alluring to shift focus from the instrument to the data itself. More enticing: discussing how the instruments change to suit that need; this is especially true if you develop instruments and yearn to shift anyway.
Don’t question the bacon…
AMD has been betting that their processors will be good enough and that their products will differentiate themselves in other ways, such as graphics capabilities, which they claim will be more important for cloud services. AMD hopes that their newer laptops will steal some bacon from Intel and its Ultrabook initiative.
The main problem with the cloud is that it is mostly something people feel they want rather than something they actually want. They believe they want their content controlled by a company for them -- until it becomes inaccessible, temporarily or permanently. They believe they want their information accessible in online services -- but then they freak out about the privacy implications.
The public appeal of the cloud is that it lets you feel as though you can focus on the content rather than the medium. The problem is that you do not have fewer distractions from your content -- just different ones -- and they rear their heads one or two at a time, in isolation from each other. You experience a privacy concern here and an incompatibility or licensing issue there. For some problems and for some people, it makes more sense to control your own data, and it will continue to be important to serve that market.
And if crunching ends up being necessary for the future it looks like Intel will be a little lonely at the top.