PC Perspective Podcast #159 - AMD Llano Notebook Platform, AMD Fusion platform architecture, X79 Rumors, the deal about BAPCo and more!
Subject: General Tech | June 23, 2011 - 11:39 AM | Ken Addison
Tagged: x79, podcast, nvidia, llano, Intel, fusion, APU, amd
PC Perspective Podcast #159 - 6/23/2011
This week we talk about the AMD Llano Notebook Platform, AMD Fusion platform architecture, X79 Rumors, the deal about BAPCo and more!
The URL for the podcast is: http://pcper.com/podcast - Share with your friends!
- iTunes - Subscribe to the podcast directly through the iTunes Store
- RSS - Subscribe through your regular RSS reader
- MP3 - Direct download link to the MP3 file
Hosts: Ryan Shrout, Jeremy Hellstrom, Josh Walrath and Allyn Malventano
This Podcast is brought to you by
- 0:00:30 Introduction
- 1-888-38-PCPER or email@example.com
- http://twitter.com/ryanshrout and http://twitter.com/pcper
- 0:01:50 AMD A-Series Llano APU Sabine Notebook Platform Review
- 0:05:00 AMD Fusion System Architecture Overview - Southern Isle GPUs and Beyond
- 0:33:24 This Podcast is brought to you by our sponsor, and their all new Sandy Bridge Motherboards!
- 0:34:00 AFDS11: AMD Demonstrates Trinity Powered Notebook
- 0:35:45 AFDS11: ARM Talks Dark Silicon and Computing Bias at Fusion Summit
- 0:41:30 AFDS11: Microsoft Announces C++ AMP, Competitor to OpenCL
- 0:45:45 New Rumor Indicates X79 Chipset Will Support Both 1366 and 2011 Sockets
- 0:49:49 Microsoft is probably laughing as AMD speculates on the unlikelihood of Intel buying NVIDIA
- 0:54:45 Larrabee rides again, almost ... meet Knights Corner, the new Many Integrated Core design
- 0:58:35 What's the big deal with BAPCo? Why Benchmarking Matters
- 1:05:20 Crysis 2: Cry Harder (with DX11 and High Res textures)
- 1:06:00 *Allyn Show and Tell*
- 1:12:45 Quakecon Reminder - http://www.quakecon.org/
- 1:13:17 Hardware / Software Pick of the Week
- http://twitter.com/ryanshrout and http://twitter.com/pcper
- 1:25:45 Closing
Subject: Editorial, General Tech | June 21, 2011 - 11:36 AM | Ryan Shrout
Tagged: VIA, sysmark, nvidia, Intel, benchmark, bapco, amd
It seems that all the tech community is talking about today is BAPCo and its benchmarking suite called Sysmark. A new version, 2012, was released just recently and yesterday we found out that AMD, NVIDIA and VIA have all dropped their support of the "Business Applications Performance Corporation". Obviously those companies have a beef with the benchmark as it is, yet somehow one company stands behind the test: Intel.
Everyone you know is posting about it. My Twitter feed "asplode" with comments like these:
AMD quits BAPCo, says SYSmark is nutso. Nvidia and VIA, they say, also. http://bit.ly/kHvKux
AMD: Voting For Openness: In order to get a better understanding of AMD's press release earlier concerning BAPCO... http://bit.ly/kNtKkj
Ooh, BapCo drama.
Even PC Perspective posted on this drama yesterday afternoon, saying: "The disputes centered mostly on the release of SYSmark 2012. For years, various members have complained about aspects of the product which, they allege, Intel strikes down or ignores while designing each version. One major complaint is the lack of reporting on the computer’s GPU performance, which is quickly becoming ever more relevant to a system’s overall performance. With NVIDIA, AMD, and VIA gone from the consortium, Intel is pretty much left alone in the company: now officially."
Naturally, this was the topic swirling through my head while cutting the grass this morning; so thanks for that, everyone. My question is this: does it really matter, and how is this any different from the way it has been for YEARS? The cynical side of me says that AMD, NVIDIA and VIA all dropped out because each company's products aren't stacking up as well as Intel's when it comes to the total resulting score. Intel makes the world's fastest CPUs, and I don't think anyone with a brain will dispute that; on benchmarks that test the CPU, Intel is going to have the edge.
We recently reviewed the AMD Llano-based Sabine platform, and in CPU-centric tests like SiSoft Sandra, TrueCrypt and 7-Zip the AMD APU is noticeably slower. But AMD isn't sending out press releases and posting blogs about how these benchmarks don't show the true performance a system's end user will see. And Intel isn't pondering why we used games like Far Cry 2 and Just Cause 2 that show the AMD APU dominating. Why? Because these tests are part of a suite of benchmarks we use to show the overall performance of a system. They are tools which competent reviewers wield to explain to readers why certain hardware acts a certain way in certain circumstances.
Continue reading for more on this topic...
Subject: General Tech, Processors | June 20, 2011 - 01:46 PM | Scott Michaud
Tagged: VIA, sysmark, nvidia, bapco, amd
People like benchmarks. Benchmarks tell you which component to purchase while your mouse flutters between browser tabs of various Newegg or Amazon pages. Benchmarks let you see how awesome your PC is, because videogames often will not for a couple of years. One benchmark you probably have not seen here in a very long time is SYSmark from the Business Applications Performance Corporation, known as BAPCo to its friends and well-wishers. There has long been dispute over the politics of BAPCo's benchmark design, and it eventually boiled over, with AMD, NVIDIA, and VIA rolling off the sides of the pot.
Fixed that for you
The disputes centered mostly on the release of SYSmark 2012. For years, various members have complained about aspects of the product which, they allege, Intel strikes down or ignores while designing each version. One major complaint is the lack of reporting on the computer’s GPU performance, which is quickly becoming ever more relevant to a system’s overall performance. With NVIDIA, AMD, and VIA gone from the consortium, Intel is pretty much left alone in the company: now officially.
Subject: General Tech | June 16, 2011 - 09:57 AM | Jeremy Hellstrom
Tagged: amd, Intel, nvidia
In some sort of bizarre voyeuristic hardware love/hate triangle, AMD, Intel and NVIDIA are all semi-intertwined and being observed by Microsoft. Speaking with The Inquirer, Leslie Sobon, VP of product and platform marketing at AMD, stated that there was no chance Intel would attempt to purchase NVIDIA as AMD did with ATI. AMD's purchase was less about the rights to the Radeon series than about taking possession of the intellectual property ATI had built up over a decade of creating GPUs; that IP led directly to the APUs AMD has recently released, which will likely become its main product. Intel already has a working architecture that combines GPU and CPU and doesn't need to purchase another company's IP to develop that type of product.
There is another reason for purchasing NVIDIA though, one which has very little to do with its discrete graphics card IP and everything to do with Tegra and Fermi, two specialized products for which Intel so far has no answer. A vastly improved and shrunken Atom might be able to push Tegra off of mobile platforms, and perhaps specialized Sandy Bridge CPUs could accelerate computation the way the Fermi products do, but so far there are no solid leads, only speculation.
If you learn more from your failures than your successes, then Intel knows a lot about graphics.
"CHIP DESIGNER AMD believes that it is on a divergent path from Intel thanks to its accelerated processor unit (APU) and that Intel buying Nvidia "would never happen"."
Here is some more Tech News from around the web:
- Find Out if Your Passwords Were Leaked by LulzSec Right Here @ Gizmodo
- Adobe patches critical bugs in Flash and Reader @ The Register
- Umi, we hardly knew ye: contemplating the fate of the videophone in 2011 @ Ars Technica
- 'A SHARK attacked my ROBOT', gasps ex-Sun exec @ The Register
- We’ve got a real bone to pick with this mouse @ Hack a Day
- Fun Quotes from the AFDS Media Roundtable @ SemiAccurate
Subject: Editorial, Processors, Shows and Expos | June 14, 2011 - 02:09 PM | Ryan Shrout
Tagged: nvidia, Intel, heterogeneous, fusion, arm, AFDS
Before the AMD Fusion Developer Summit started this week in Bellevue, WA, the most controversial speaker on the agenda was Jem Davies, the VP of Technology at ARM. Why would AMD and ARM get together on a stage with dozens of media and hundreds of developers in attendance? There is no partnership between them in terms of hardware or software, so would there be some kind of major announcement about the two companies' future together?
In that regard, the keynote was a bit of a letdown; if you thought there was going to be a merger between them, or a new AMD APU announced with an ARM processor in it, you left a bit disappointed. Instead we got a bit of background on ARM and how the race of processing architectures has slowly dwindled to just x86 and ARM, as well as a few jibes at the competition NOT named AMD.
As is usually the case, Davies described the state of processor technology with an emphasis on power efficiency and the importance of designing with that future in mind. One of the more interesting points concerned the "bitter reality" of core-type performance and the projected DECREASE we will see from 2012 onward due to leakage concerns as we progress to 10nm and even 7nm technologies.
The idea of dark silicon "refers to the huge swaths of silicon transistors on future chips that will be underused because there is not enough power to utilize all the transistors at the same time," according to this article over at physorg.com. As process technology gets smaller, the areas of dark silicon increase, until the portion of the die that can be utilized at any one time might be as low as 10% in 2020. Because of this, designing chips with many task-specific heterogeneous portions becomes crucial, and both AMD and ARM are on that track.
Those companies not on that path today, NVIDIA specifically and Intel as well, were addressed in a slide on GPU computing. Davies pointed out that if a company has a financial interest in the immediate success of only the CPU or only the GPU, then benchmarks will be built and shown in a way that makes THAT portion appear the most important. We have seen this from both NVIDIA and Intel in the past couple of years, while AMD has consistently stated it will use the best processor for the job.
Amdahl's Law is used in parallel computing to predict the theoretical maximum speedup from using multiple processors. Davies reiterated what we have been told for some time: if only 50% of your application can actually BE parallelized, then no matter how many processing cores you throw at it, it will never run more than twice as fast. The heterogeneous computing products of today and the future can address both parallel and serial computing tasks with improvements in performance and efficiency, and should result in better computing in the long run.
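To make the math concrete, here is a minimal sketch of Amdahl's Law in Python; the function name and the sample core counts are our own illustration, not anything shown at the summit:

```python
def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    """Theoretical best-case speedup per Amdahl's Law: 1 / ((1 - p) + p / n)."""
    p = parallel_fraction
    return 1.0 / ((1.0 - p) + p / cores)

# With only 50% of the work parallelizable, the speedup caps at 2x
# no matter how many cores you throw at the problem:
for n in (2, 4, 16, 1_000_000):
    print(f"{n} cores -> {amdahl_speedup(0.5, n):.3f}x")
# 2 -> 1.333x, 4 -> 1.600x, 16 -> 1.882x, 1,000,000 -> 2.000x
```

Even a million cores never push the example past 2x, which is exactly why a heterogeneous design that also speeds up the serial half of the workload is so attractive.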
So while we didn't get the major announcement from ARM and AMD that we might have been expecting, the fact that ARM would come up and share a stage with AMD reiterates the message of the Fusion Developer Summit quite clearly: a combined and balanced approach to processing might not be the sexiest but it is very much the correct one for consumers.
Subject: Mobile | June 9, 2011 - 03:43 PM | Tim Verry
Tagged: Tegra 2, super phone, Sprint, Photon, nvidia, arm, 4g
If desktop processors are advancing at the speed of sound, then mobile processors are advancing at somewhere near the speed of light. Just a year ago, a 600MHz TI processor was very fast; in the age of dual-core 1GHz+ processors, however, that seems rather slow by comparison. Speaking of the speed of photons, Sprint has recently unveiled a new Motorola smartphone called the Photon 4G that is packed with hardware and powered by Android 2.3.
What makes the Photon 4G special, however, is that it is the first NVIDIA Tegra powered "super phone" on Sprint's 4G cellular network. The 2.6 x 5 inch device is 0.5 inches thick and weighs in at 5.6 ounces. This rather hefty chassis holds a large 4.3" "qHD" display with a resolution of 540x960. Further, the phone has two cameras: the rear camera can capture 720p HD video, while the front-facing camera sports VGA (640x480) resolution. An HDMI output port, a microSD card slot supporting up to 32GB cards, and a metal kickstand also have a place on the device.
Internally, the phone features a 1GHz dual core Tegra 2 processor, 16GB of on-board storage, and 1GB of RAM. A 3G/4G radio supporting international GSM frequencies, along with Bluetooth and 802.11 b/g/n Wi-Fi radios, is also present. This hardware is backed by a 1700 mAh lithium-ion battery.
According to the NVIDIA blog, the device is made further desirable by its ability to play "multi-platform, console-class Android OS games with the kind of experience you expect from a game console." The Photon 4G also supports Bluetooth controller input, enabling it to act as a sort of portable gaming console when hooked up to a large display via HDMI. NVIDIA demonstrated playing Riptide GP on the phone using a Wii controller, and it will likely support the DualShock controller down the road as well.
NVIDIA shows off the Wii controlled super phone's gaming abilities
Sprint claims nine to ten hours of talk time, depending on whether the phone is using 3G or 4G; it will be interesting to see whether the phone has the battery life in real-world tests to be a good portable gaming machine. It may even steal some market share from the PlayStation Vita if Android can keep new games flowing. What do you think about the Photon 4G?
Subject: Graphics Cards | June 8, 2011 - 05:21 AM | Tim Verry
Tagged: water cooling, pny, nvidia, GTX 580, asetek
At E3 2011, PNY and Asetek showed off a new NVIDIA GTX 580 graphics card that is cooled by an Asetek water cooler. Another variant that includes a CPU water block in the sealed water loop will also be available. The new system promises up to 30% lower temperatures compared to the NVIDIA reference cooler. Further, Asetek claims the new cooler will provide increased headroom for overclocking and lower noise output, thanks to a larger 120mm fan that can spin much slower (and quieter) than the traditional graphics card fan at the same level of cooling performance.
Nicholas Mauro, the Senior Marketing Manager for PC Components at PNY stated that “with a design that outperforms current equivalent air cooled models, this simple all-in-one solution will resonate deeply with gamers looking for a powerful yet affordable option.”
PNY is currently running a pre-order promotional bundle on the PNY website, which includes "$100 worth of bonus PNY gear: a 16ft HDMI Mini to HDMI cable, a custom-built PNY 8GB 'Liquid Cooled' USB Flash Drive, and a 'Liquid Cooled' logo T-shirt." The XLR8 Liquid Cooled GTX 580 has an MSRP of $579.99, while the GPU+CPU water loop, the "XLR8 Liquid Cooled GTX 580 with CPU Cooling," carries an MSRP of $649.99. The new coolers come with a standard 3-year warranty, which is extended to 5 years if registered on PNY's website. They will be available for purchase at the end of June at various brick-and-mortar and online retailers.
The street price of these coolers will likely determine how much adoption they will receive, as they are in a narrow market between high end air cooling and a DIY water loop.
Subject: Graphics Cards, Cases and Cooling | June 8, 2011 - 04:08 AM | Tim Verry
Tagged: water cooling, nvidia, gpu, CoolIT
CoolIT Systems recently launched the OMNI N590 A.L.C. water cooler for NVIDIA’s GeForce GTX 590 graphics cards. The sealed-loop water cooler promises to solve both thermal and acoustic issues, and enable high performance NVIDIA Quad SLI setups for enthusiasts.
CoolIT claims that the OMNI A.L.C. is the world's first fully contained water cooling loop for graphics cards. Following on the success of its OMNI N480 and N580 coolers, the new A.L.C. model promises to "deliver up to 30°C lower GPU operating temperatures" in addition to lowering the noise output of the PC.
The cooler itself is very reminiscent of Corsair's H70 CPU cooler; however, on the OMNI A.L.C., the pump is located on the radiator instead of the water block, which may limit the amount of airflow compared to the CPU variants from other manufacturers. By moving the pump to the radiator, CoolIT has been able to make the GPU-attached water block very thin, which should make SLI setups physically easier.
Further, the cooler is immediately available in complete systems from MAINGEAR, Falcon Northwest, and Puget Systems.
What are your thoughts on sealed-loop graphics card coolers?
Subject: General Tech | June 6, 2011 - 08:52 AM | Jeremy Hellstrom
Tagged: nvidia, microsoft, lawyers
It turns out that while NVIDIA did not quite sell its soul to get its GPU into the first Xbox, it did give up its right to go out unchaperoned. As part of the deal, Microsoft can block any large purchase of NVIDIA shares by another company: if a company tries to purchase 30% or more of NVIDIA's shares, Microsoft had, and still has, the right to put the kybosh on the deal. A decade ago, when the deal was first inked, the agreement would have made a lot of sense for Microsoft; it was going to depend on NVIDIA's GPU and did not want another company buying a majority share in NVIDIA to get a grip on Microsoft's new gaming console. The deal makes NVIDIA rather unattractive to many companies, as the time and money necessary to set up a large deal could be utterly wasted if Microsoft decides it doesn't like the look of NVIDIA's new bedmate. The Inquirer has more here, and is currently awaiting a response to the article from Microsoft.
"AN UNLIKELY BETROTHAL between Microsoft and Nvidia has been uncovered that gives Microsoft the right of first and last refusal to buy Nvidia.
Microsoft entered into an agreement with Nvidia back in 2000 when the chip design outfit was brought in to work on the GPU of what would then become Microsoft's Xbox. That in itself isn't particularly surprising, but Information Week dug up a 10-K filing with the Securities and Exchange Commission (SEC) in which Nvidia reported that Microsoft had first and last rights of refusal should a third party make an offer to buy 30 per cent or more of Nvidia's shares."
Here is some more Tech News from around the web:
- In-flight Internet: the view from 35,000 feet and three years @ Ars Technica
- Skype reverse-engineered and open sourced @ The Register
- Notorious rootkit gets self-propagation powers @ The Register
- UC San Diego Builds Phase-Change Solid-State Drive That's 2 to 7 Times Faster Than NAND @ ExtremeTech
- HTC Dev launched to support OpenSense development @ t-break
- Kodak PlaySport Zx3 Review @ TechReviewSource
- Installing a Server OS in Intel Media Series Motherboards Guide @ MissingRemote
- Post Computex 2011 - Part 1 @ Bjorn3D
- Mach Xtreme Displays New JMicron SATA 3 SSD @ The SSD Review
- Fan speed control? BitFenix has an app for that @ The Tech Report
- Sapphire shows A75 mobo, passive Radeons @ The Tech Report
- Computex 2011: The ROG Releases @ AnandTech
- Liveblog: Microsoft's E3 press conference on June 6 @ Ars Technica
- Liveblog: WWDC 2011 keynote on June 6 @ Ars Technica
NVIDIA recently unveiled a new four-core CPU for mobile devices at Mobile World Congress which promises to power 2560x1600, 300 DPI displays as well as enable realistic dynamic lighting and physics in mobile games, features that until recently were only possible in the realm of gaming laptops and desktops.
The quad-core ARM CPU has been paired with a new 12-core GeForce graphics processing unit. The CPU alone outperforms the older Tegra 2 chip by close to 2x; with the additional GPU cores, however, NVIDIA has even more performance, and the ability to implement great-looking games for mobile tablets and so-called "super phones."
At a resolution of 1280x800 (according to Engadget), the new Kal-El graphics demo shows off a new game featuring a glowing ball that acts as a truly dynamic light source, in addition to realistic cloth physics. Using all four processing cores of the CPU allowed NVIDIA to implement cloth that reacts to the changing gravity of the game in a dynamic, and very realistic looking, manner. The mobile chip saw approximately 80% usage across all cores during the game demo. When NVIDIA disabled two of the CPU cores, the game became nearly unplayable: with the two remaining cores maxed out, the demo's frame rate dropped below 15 frames per second.
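As a rough illustration of why losing half the cores hurt so badly, here is a hypothetical sketch of the kind of data-parallel physics step a cloth simulation relies on; the particle layout, constants, and function names are our own invention, not NVIDIA's actual engine:

```python
from multiprocessing import Pool

GRAVITY = -9.8   # m/s^2; the demo varied gravity dynamically
DT = 1.0 / 30.0  # one frame at a 30 fps target

def step_particle(p):
    """Integrate one cloth particle (x, y, vy) forward by a single frame."""
    x, y, vy = p
    vy += GRAVITY * DT
    return (x, y + vy * DT, vy)

if __name__ == "__main__":
    # A grid of independent particles splits trivially across workers,
    # so per-frame work scales (almost) inversely with core count.
    particles = [(i * 0.1, 2.0, 0.0) for i in range(10_000)]
    with Pool(processes=4) as pool:  # four cores, as in the Kal-El demo
        particles = pool.map(step_particle, particles)
    print(particles[0])
```

Halve the worker count and each remaining core does twice the per-frame work, which is consistent with the frame rate collapse NVIDIA showed when it disabled two cores.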
The new “Tegra Super Chip” will certainly allow mobile game developers to design immersive and realistic looking worlds, as well as enhance consumers’ ability to watch 1080p HD video with ease. The only drawback seems to be that battery technology advances much more slowly than transistor technology; it will be interesting to see how the new NVIDIA chip performs in that regard.