Subject: General Tech, Mobile, Shows and Expos | June 21, 2011 - 10:58 PM | Scott Michaud
Tagged: Huawei, CommunicAsia, Android 3.2
There always seems to be a trade show going on at some corner of the ellipsoid world, particularly at this time of year. Down in Singapore, the CommunicAsia 2011 exhibition runs until the 24th, and news is starting to trickle out about advancements in communication technology. If you were holding your breath until Android reached version 3.2 on devices, you can almost finally exhale (assuming you are still conscious, since even the best breath-holders manage only about 8 minutes and Android products are not that quick to ship. Yet.)
Seventh floor… going up… WHAMMY BAR!!!
Huawei announced on the 21st that they are releasing a 7-inch tablet based on Android’s 3.2 release. The tablet will feature a dual-core 1.2 GHz processor from Qualcomm, but there is no word on how much system RAM it will contain, as that allegedly still depends on partners. The capacitive touchscreen will be IPS-based with a 217 PPI pixel density. After a little Pythagorean math: a 7-inch screen with a density of 217 pixels per inch will have a resolution somewhere between 1280x720 and 1366x768. The unit itself is capable of outputting 1080p to an external display over HDMI. There are currently no details on price, but Huawei stated that there are no plans for a Wi-Fi-only version. The unit is expected to ship in the third quarter of this year.
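For the curious, that back-of-the-envelope calculation is easy to reproduce. The sketch below (a hypothetical helper, not anything Huawei published) assumes a 16:9 aspect ratio and splits the diagonal pixel count along the aspect ratio with the Pythagorean theorem:

```python
import math

def panel_pixels(diagonal_in, ppi, aspect_w, aspect_h):
    """Return (width, height) in pixels for a screen with the given
    diagonal (inches), pixel density (PPI), and aspect ratio."""
    # Total pixels along the diagonal, then split that length into
    # width and height components using the aspect ratio.
    diag_px = diagonal_in * ppi
    aspect_diag = math.hypot(aspect_w, aspect_h)
    width = diag_px * aspect_w / aspect_diag
    height = diag_px * aspect_h / aspect_diag
    return round(width), round(height)

# A 7-inch, 217 PPI panel at 16:9:
print(panel_pixels(7, 217, 16, 9))  # → (1324, 745)
```

A 16:10 panel at the same density works out a bit differently, which is why the article can only bracket the resolution between the two common candidates.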
Introducing the AMD FSA
At AMD’s Fusion 11 conference, we were treated to a nice overview of AMD’s next-generation graphics architecture. Given the recent change in their lineup from the previous VLIW-5 setup (which powered their graphics chips from the Radeon HD 2900 through the latest “Barts” chip running the HD 6800 series) to the new VLIW-4 (HD 6900), many were not expecting much from AMD in terms of new and unique designs. The upcoming “Southern Islands” parts were thought to be based on the current VLIW-4 architecture, featuring more performance and a few new features thanks to the die shrink to 28 nm. It turns out that speculation was wrong.
In late Q4 of this year we should see the first iteration of this new architecture, which was detailed today by Eric Demers. The overview covered some features that will not make it into this upcoming product but will eventually be added over the next three years or so. Historically speaking, AMD has placed graphics first, with GPGPU/compute as the secondary functionality of their GPUs. While we have had compute abilities since the X1800/X1900 series of products, AMD has not been as aggressive with compute as its primary competition. From the G80 GPUs onward, NVIDIA has pushed compute harder and farther than AMD has. With its mature CUDA development tools and the compute-heavy Fermi architecture, NVIDIA has been a driving force in this particular market. Now that AMD has released two APU-based products (Llano and Brazos), they are starting to really push OpenCL, DirectCompute, and the recently announced C++ AMP.
Continue reading for all the details on AMD's Graphics Core Next!
Subject: Editorial, Graphics Cards, Processors, Mobile, Shows and Expos | June 16, 2011 - 06:41 PM | Ryan Shrout
Tagged: llano, liveblog, fusion, APU, amd, AFDS
The AMD Fusion Developer Summit 2011 is set to begin at 11:30am ET / 8:30am PT and promises to bring some interesting and forward-looking news about the future of AMD's APU technology. We are going to cover the keynotes LIVE right here throughout the week, so if you want to know what is happening AS IT HAPPENS, stick around!!
Subject: General Tech, Shows and Expos | June 16, 2011 - 01:14 AM | Scott Michaud
Tagged: opencl, amd, AFDS
If you are a developer of applications which require more performance than a CPU alone can provide, then you are probably having a gleeful week. Today Microsoft announced their competitor to OpenCL, and we have a large write-up about that aspect of their keynote address. If you are currently an OpenCL developer you are not left out, however, as AMD has announced new tools designed to make your life easier too.
General Purpose GPU utilities: Because BINK won't satisfy this crowd.
(Logo trademark Apple Inc.)
AMD’s spectrum of enhanced tools includes:
- gDEBugger: An OpenCL and OpenGL debugger, profiler, and memory analyzer, released as a plugin for Visual Studio.
- Parallel Path Analyzer (PPA): A tool designed to profile data transfers and kernel execution across your system.
- Global Memory for Accelerators (GMAC) API: Lets developers use multiple devices without needing to manage separate data buffers on both the CPU and the GPU.
- Task Manager API: A framework to manage scheduling kernels across devices.
These tools and utilities should make software development easier and allow more developers to take a risk on the new technology. The GPU has already proven itself worthy of more and more important tasks, and it is only a matter of time before it is ubiquitous enough to be a default component as important as the CPU itself. As an ironic aside, that ubiquity should also spur the adoption of PC gaming, given how many people would then have sufficient hardware.
Subject: Editorial, General Tech, Shows and Expos | June 15, 2011 - 09:58 PM | Ryan Shrout
Tagged: programming, microsoft, fusion, c++, amp, AFDS
During this morning's keynote at the AMD Fusion Developer Summit, Microsoft's Herb Sutter went on stage to discuss the problems and solutions involved around programming and developing for multi-processing systems and heterogeneous computing systems in particular. While the problems are definitely something we have discussed before at PC Perspective, the new solution that was showcased was significant.
C++ AMP (Accelerated Massive Parallelism) was announced as a new extension to Visual Studio and the C++ programming language to help developers take advantage of the highly parallel and heterogeneous computing environments of today and the future. The new programming model uses C++ syntax and will be available in the next version of Visual Studio, with "bits of it coming later this year." Sorry, no hard release date was given when we probed.
Perhaps just as significant is the fact that Microsoft announced the C++ AMP standard would be an open specification and that they are going to allow other compilers to integrate support for it. Unlike C#, then, C++ AMP has a chance to become a dominant standard in the programming world as the need for parallel computing expands. While OpenCL was previously the only option that promised developers easy utilization of ALL the computing power in a device, C++ AMP gives them another option with the full weight of Microsoft behind it.
To demonstrate the capability of C++ AMP, Microsoft showed a rigid body simulation program that ran on multiple computers and devices from a single executable file and was able to scale in performance from 3 GFLOPS on the x86 cores of Llano, to 650 GFLOPS on the combined APU power, to 830 GFLOPS with a pair of discrete Radeon HD 5800 GPUs. The same executable ran on an AMD E-series APU powered tablet at 16 GFLOPS with 16,000 particles. This is the promise of heterogeneous programming languages and the gateway necessary for consumers and businesses to truly take advantage of the processors that AMD (and other companies) are building today.
If you want programs other than video transcoding apps to really push the promise of heterogeneous computing, then the announcement of C++ AMP is very, very big news.
Subject: Graphics Cards, Processors, Shows and Expos | June 15, 2011 - 12:06 AM | Ryan Shrout
Tagged: vliw, trinity, llano, fusion, evergreen, cayman, amd, AFDS
Well, that was an interesting twist... During a talk on the next generation of GPU technology at the AMD Fusion Developer Summit, one of the engineers was asked about Trinity, the next APU to be released in 2012 (and shown running today for the very first time). It was revealed that Trinity in fact uses a VLIW4 architecture rather than the VLIW5 design found in the just-released Llano A-series APU.
A shader unit from the VLIW4-based Cayman architecture
That means Trinity APUs will ship with Cayman-based GPU technology (6900 series) rather than Evergreen (5000 series). While that doesn't tell us much in terms of performance, simply because there are so many variables including shader counts and clocks, it does put to rest the rumor that Trinity was going to keep basically the same class of GPU technology that Llano had.
Trinity notebook shown for the first time today at AFDS. Inside is an APU with Cayman-class graphics.
AMD is definitely pushing the capabilities of APUs forward and if they can stay on schedule with Trinity, Intel might find the GPU portion of its Ivy Bridge architecture well behind again.
Subject: Editorial, Processors, Shows and Expos | June 14, 2011 - 09:09 PM | Ryan Shrout
Tagged: nvidia, Intel, heterogeneous, fusion, arm, AFDS
Before the AMD Fusion Developer Summit started this week in Bellevue, WA, the most controversial speaker on the agenda was Jem Davies, the VP of Technology at ARM. Why would AMD and ARM get together on a stage with dozens of media and hundreds of developers in attendance? There is no partnership between them in terms of hardware or software, so would there be some kind of major announcement about the two companies' future together?
In that regard, the keynote was a bit of a letdown; if you thought there was going to be a merger between them, or a new AMD APU announced with an ARM processor in it, you left disappointed. Instead we got some background on ARM and how the race of processing architectures has slowly dwindled to just x86 and ARM, as well as a few jibes at the competition NOT named AMD.
As is usually the case, Davies described the state of processor technology with an emphasis on power efficiency and the importance of designing with that future in mind. One of the interesting points concerned the "bitter reality" of per-core performance and the projected DECREASE we will see from 2012 onward due to leakage concerns as we progress to 10nm and even 7nm technologies.
The idea of dark silicon "refers to the huge swaths of silicon transistors on future chips that will be underused because there is not enough power to utilize all the transistors at the same time" according to this article over at physorg.com. As process technology gets smaller, the areas of dark silicon increase, until the portion of the die that can be utilized at any one time might hit as low as 10% in 2020. Because of this, the need to design chips with many task-specific heterogeneous portions is crucial, and both AMD and ARM are on that track.
Those companies not on that path today, NVIDIA specifically and Intel as well, were addressed on the below slide when discussing GPU computing. Davies pointed out that if a company has a financial interest in the immediate success of only CPU or GPU then benchmarks will be built and shown in a way to make it appear that THAT portion is the most important. We have seen this from both NVIDIA and Intel in the past couple of years while AMD has consistently stated they are going to be using the best processor for the job.
Amdahl's Law is used in parallel computing to predict the theoretical maximum speedup from using multiple processors. Davies reiterated what we have been told for some time: if only 50% of your application can actually BE parallelized, then no matter how many processing cores you throw at it, you can never cut the runtime by more than half, a hard ceiling of a 2x speedup. The heterogeneous computing products of today and the future can address both the parallel and serial computing tasks with improvements in performance and efficiency and should result in better computing in the long run.
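Amdahl's Law is simple enough to sketch in a few lines. The illustrative snippet below (our own example, not from Davies' slides) shows how the serial fraction puts a hard floor under runtime no matter how many cores are available:

```python
def amdahl_speedup(parallel_fraction, cores):
    """Theoretical speedup per Amdahl's Law: the serial fraction
    (1 - p) of the work cannot be accelerated, so it bounds the
    total speedup regardless of core count."""
    p = parallel_fraction
    return 1.0 / ((1.0 - p) + p / cores)

# With only 50% of the work parallelizable, the ceiling is 2x:
for n in (2, 4, 16, 1_000_000):
    print(n, round(amdahl_speedup(0.5, n), 3))
# 2 cores -> 1.333x, 4 -> 1.6x, 16 -> 1.882x, a million -> ~2.0x
```

Even a million cores barely beats sixteen here, which is exactly the argument for pairing a few fast serial cores with wide parallel hardware rather than betting everything on one or the other.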
So while we didn't get the major announcement from ARM and AMD that we might have been expecting, the fact that ARM would come up and share a stage with AMD reiterates the message of the Fusion Developer Summit quite clearly: a combined and balanced approach to processing might not be the sexiest but it is very much the correct one for consumers.
Subject: Processors, Mobile, Shows and Expos | June 14, 2011 - 04:08 PM | Ryan Shrout
Tagged: trinity, fusion, APU, AFDS
On stage during the opening keynote at the AMD Fusion Developer Summit 2011, Rick Bergman showed off a notebook that was being powered not by the recently released AMD Llano A-series APUs, but rather the Trinity core due in 2012.
Trinity is next year's APU, combining Bulldozer-based x86 CPU cores with an updated DX11 GPU architecture built on the current 32nm process. Not much else is known about the chip yet, but hopefully we'll get some more details this week at the show.
Subject: General Tech, Shows and Expos | June 11, 2011 - 06:28 PM | Scott Michaud
Tagged: john carmack, id, E3
John Carmack was, and is, one of the biggest faces in videogame engine development, going back to Wolfenstein 3D, Doom, and Quake. He was at E3 to promote his company id Software's nearest upcoming release, RAGE. While he was there, PC Gamer managed to corner him for a 22-minute interview ranging from RAGE, to the current and future state of PC gaming, to the perceptual effect of input latency and how framerate affects it.
Look at how stable the framerate is!
- Texture resolution and memory limitations on consoles
- Higher end PCs being approximately 10-fold higher performance than the consoles
- Sandy Bridge integrated graphics are finally, just barely, good enough to be viable GPUs for games
- The DirectX and OpenGL APIs hold the PC back; he is looking forward to new efforts to access the GPU more directly
- His interest focuses on the toolset to let the artists do more with less effort
- PC Gaming is still viable but a minority
- Input latency is longer than people expect, sometimes up to 100ms and beyond
- The exciting yet not necessarily crucial nature of newer rendering technologies
John Carmack always gives interesting interviews thanks to his very down-to-earth and blunt tone. If you have a free half hour and want to hear one of the best game programmers in the world talk about his trade, this is definitely an interview for you.
Subject: General Tech, Shows and Expos | June 8, 2011 - 11:48 PM | Scott Michaud
Tagged: razer, E3
You may have noticed a slew of gaming-related news flooding from various cracks in the internet this week. E3, the Electronic Entertainment Expo, is currently in progress in Los Angeles, and much news has spawned from it. PC gamers are not left out of the expo, however, as companies like Razer announce their latest wares and technology. While a standard mouse is sufficient for most users, some desire extra sensitivity and extra buttons, and those are precisely the customers for companies like Razer. Today, Razer announced that two of their upcoming mice will have two independent sensors, one optical and one laser, for enhanced tracking.
If they announce a five sensor Razer, The Onion won. (Image by Razer)
Razer listed a series of benefits to adding a second sensor to their next generation Mamba and Imperator mice:
- One sensor can calibrate the other to the surface you are using.
- The user will be able to set the lift-off distance: how far from the surface the mouse stops tracking.
- Lower latency when tracking the surface you are operating on.
- Higher tracking precision.
While you may well appreciate those extra features on your mouse, the largest factor in your gameplay will not be your hardware. The largest benefit I received switching from a three-button Microsoft mouse to a gaming mouse was the extra thumb buttons, which I bound to an AutoHotkey script for single-button scrolling up and down large documents. (Available here if that's something you desire.) If these features speak to you, however, check out Razer’s website.