Video Perspective: AMD Steady Video Technology on AMD A-Series APUs

Subject: Editorial, Graphics Cards | July 25, 2011 - 02:23 PM |
Tagged: amd, APU, llano, steady video, a8-3850, video

In our continuing coverage of the AMD Llano-based A-Series APUs, we have another short video that discusses and evaluates the performance of AMD's Steady Video technology, publicly released with the 11.6 driver revision this month. Steady Video, as we described it in our initial AMD Llano A8-3850 review, is:

Using a heterogeneous computing model, AMD's driver will have the ability to stabilize "bouncy" video that is usually associated with consumer cameras and unsteady hands.

Basically, AMD is on the warpath to show you that your GPU can be used for more than just gaming and video transcoding. If the APU and heterogeneous computing are to thrive, unique and useful applications of the GPU cores found in Llano, Trinity and beyond must be realized. Real-time video filtering and stabilization with Steady Video is one such example, and it is exclusive to AMD GPUs and APUs.

As you can see, there are no benchmarks in that video and no numbers we can really quote or reference to tell you "how much" better the corrected videos are compared to the originals. The examples we gave you there were NOT filtered or selected because they show off the technology better or worse than any others; instead, we used the feature for what AMD said it should be used for: amateur video taken without tripods and the like.

And since this feature works not only on AMD A-Series APUs but also on recent Radeon GPUs, I encourage you all to give it a shot and let us know what you think in the comments below. Do you find the feature useful and effective? Would you leave the option enabled full time, or just turn it on when you encounter a particularly bouncy video?

If you haven't seen our previous Video Perspectives focusing on the AMD A-Series of APUs, you can catch them here:

Intel MLAA: Matrox had the right idea, wrong everything else

Subject: Editorial, General Tech, Graphics Cards, Processors | July 22, 2011 - 08:20 PM |
Tagged: MLAA, Matrox, Intel

Antialiasing is a difficult task for a computer to accomplish in terms of performance, and many efforts have been made over the years to minimize its impact while keeping as much of the visual appeal as possible. The problem with aliasing is that while a pixel is the smallest unit of display on a computer monitor, it is still large enough for our eye to see as a distinct unit. What happens, however, when two objects of different colors each partially occupy the same pixel? Who wins? In real life, our eye would see the light from both objects hit the same spot on the retina (that is not really how it works biologically, but close enough) and perceive some blend between the two colors. Intel has released a whitepaper on their attempt at this problem, and it resembles a method that Matrox used almost a decade ago.
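
In code terms, the ideal result is just a coverage-weighted average: each object contributes its color in proportion to the fraction of the pixel it covers. A minimal sketch in Python (the helper name and sample values are ours, purely illustrative):

```python
# Coverage-weighted blending: each object tints the pixel in proportion
# to the fraction of the pixel's area it covers.
def blend(colors_and_coverage):
    """colors_and_coverage: ((r, g, b), area_fraction) pairs whose
    area fractions sum to 1.0 for a fully covered pixel."""
    return tuple(round(sum(color[i] * area for color, area in colors_and_coverage))
                 for i in range(3))

# A white object covering 70% of a pixel against a black background:
print(blend([((255, 255, 255), 0.7), ((0, 0, 0), 0.3)]))  # (178, 178, 178)
```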

MatroxAA.jpg

Matrox's antialiasing method.

(Image from Tom's Hardware)

Looking at the problem of antialiasing, you want multiple bits of information to dictate the color of a pixel whenever two objects of different colors both partially occupy it. The simplest method of doing that is dividing the pixel up into smaller pixels and then crushing them back together into an average, which is called supersampling. This means you are rendering an image at 2x, 4x, or even 16x the resolution you are running at. More methods followed, including flagging just the edges for antialiasing, since that is where aliasing occurs. In the early 2000s, Matrox looked at the problem from an entirely different angle: since the edge is what really matters, we can find the shape of the various edges and calculate how much of a pixel's area gets divided up between each object, giving an effect they claimed was equivalent to 16x MSAA for very little cost. The problem with Matrox's method: it failed in many cases involving shadowing and pixel shaders… and came out in the DirectX 9 era. Suffice it to say, it did not save Matrox as an elite gaming GPU company.
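
As a rough illustration of the brute-force end of that spectrum, here is a toy 4x supersample in Python: render at twice the width and height, then box-filter each 2x2 block of samples down to one display pixel (the function and data are ours, not from the whitepaper):

```python
# Toy 4x supersampling: render at 2x the width and 2x the height, then
# box-filter each 2x2 block of samples down to one display pixel.
def downsample_2x2(hi_res, width, height):
    """hi_res: row-major list of (r, g, b) samples, 2*width by 2*height."""
    out = []
    for y in range(height):
        for x in range(width):
            block = [hi_res[(2 * y + dy) * (2 * width) + (2 * x + dx)]
                     for dy in (0, 1) for dx in (0, 1)]
            out.append(tuple(sum(ch) // 4 for ch in zip(*block)))
    return out

# A 2x1 display image from a 4x2 render: the mixed pixel gets averaged.
samples = [(255, 255, 255), (255, 255, 255), (0, 0, 0), (0, 0, 0),
           (255, 255, 255), (0, 0, 0),       (0, 0, 0), (0, 0, 0)]
print(downsample_2x2(samples, 2, 1))  # [(191, 191, 191), (0, 0, 0)]
```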

37399.png

37400.png

Look familiar?

(Both images from Intel Blog)

Intel's method of antialiasing again looks at the geometry of the image, but instead breaks the edges into L shapes to determine the area they enclose. To keep performance up, they pipeline work between the CPU and GPU, which keeps both constantly filled with the target frame or its neighbors. In other words, while the CPU performs MLAA on one frame, the GPU is busy preparing and drawing the next. Of course, when I see technology like this I think two things: will this work on architectures with discrete GPUs, and will this introduce extra latency between the rendering code and the gameplay code? I would expect that it must, as the frame is not even finished, let alone drawn to the monitor, before you fetch the next set of states to be rendered. The question remains whether that effect will be drowned out by the other latencies involved in synchronization.
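
The pipelining idea itself is easy to sketch: one stage hands finished frames to the next through a small queue so both stay busy at once. A toy two-stage pipeline in Python, standing in for the GPU render stage and the CPU MLAA stage (all names here are illustrative, not Intel's code):

```python
import queue
import threading

# Two-stage pipeline: a "render" stage (the GPU's role here) feeds frames
# to a "filter" stage (the CPU running MLAA), so frame N is being filtered
# while frame N+1 is still being drawn.
frames = queue.Queue(maxsize=2)  # a small buffer keeps added latency bounded

def render_stage(frame_count):
    for n in range(frame_count):
        frames.put(f"frame-{n}")   # stand-in for drawing frame n
    frames.put(None)               # sentinel: no more frames coming

def filter_stage():
    while (frame := frames.get()) is not None:
        print(f"MLAA applied to {frame}")  # stand-in for the AA pass

threading.Thread(target=render_stage, args=(3,)).start()
filter_stage()
```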

AMD and NVIDIA both have their variants of MLAA, NVIDIA's going by the name FXAA courtesy of its marketing team. Unlike AMD's method, NVIDIA's must be programmed into the game engine by the development team, requiring a little extra work on the developer's part. That said, FXAA found its way into Duke Nukem Forever as well as the upcoming Battlefield 3, among other games, so support is there, and older games should be easy enough to just render properly with traditional methods.

37407.png

The flat line is how much time is spent on MLAA itself: just a few milliseconds, and constant.

(Image from Intel Blog)

Performance-wise, the Intel solution is ridiculously faster than MSAA, is pretty much scene-independent, and should produce results near the 16x mark due to the precision possible when calculating areas. Speculation about latency between the render and game loops aside, the implementation looks quite sound and lets users with on-processor graphics avoid wasting precious cycles (especially precious on the GPUs you see on-processor) on antialiasing, spending them instead on raising other settings, including resolution itself, while still avoiding jaggies. Conversely, both AMD's and NVIDIA's methods run on the GPU, which makes a little more sense for them, as a discrete GPU should not require as much help as a GPU packed into a CPU.

Could Matrox’s last gasp from the gaming market be Intel’s battle cry?

(Registration not required for commenting)

Source: Intel Blog

Intel and AMD Provide Positive Earnings

Subject: Editorial | July 22, 2011 - 01:59 PM |
Tagged: Sandy Bridge E, Q2 2011, llano, Intel, bulldozer, APU, amd


The first half of this year has been surprisingly strong for the chip industry, and Intel and AMD are helping to lead the way and satiate demand for new processors at all market levels.

Intel was first off the bat to release earnings for their second quarter, and they again broke revenue and profit records for a Q2. Gross revenue was a very healthy $13 billion and the company's net profit was an impressive $3 billion. Margins are down from last year's high of 67%, but the current 61% far outshines that of their competition. Q2 2010 results were $10.8 billion in gross revenue and $2.9 billion in net profit. While profit was "only" $100 million more than Q2 2010, the extra $2.2 billion in revenue is something to sit up and notice.

intel_sb_die.jpg

Sandy Bridge-based parts have continued to propel Intel's domination of the CPU market.

Probably Intel's two greatest strengths are extracting the most performance per square millimeter of die and overall process technology leadership. Intel has been shipping 32 nm parts for some 18 months now, and their redesigned Sandy Bridge architecture has left their primary competition in the dust when it comes to overall multi-core CPU performance. Intel has improved their integrated graphics capabilities, but this is one area where they simply cannot compete with the more GPU-focused AMD. Intel is also facing much increased competition in the mobile market from the Llano-based chips and their accompanying chipset, a market which has been a virtual fortress for Intel until recently. While Intel still rules in CPU performance, the combination of rich graphics, chipset features, and competitive power consumption has made Llano a true threat in the mobile sector.

Click here to read more.


With Intel's recent purchasing habits, could crossdressing be in their future?

Subject: Editorial | July 20, 2011 - 06:10 PM |
Tagged: vpro, TPM, speculation, security, mcafee, intel txt, Intel, infineon, amt

Not too long ago the tech world was buzzing with the news that Intel had acquired McAfee for $7.68 billion. This gave them the knowledge base to start thinking about putting antivirus technology directly onto their chips, which seemed far more likely than an Intel-branded software antivirus product. When Intel CTO Justin Rattner started talking about technology that resembled the failed attempts at digital rights management, such as Microsoft's Palladium, or the Trusted Platform Module, aka TPM, a different idea was promoted with its own acronyms: Intel Active Management Technology (AMT) and Intel Trusted Execution Technology (Intel TXT). This theory was lent credence by the mention of Intel's vPro and a desire by Intel to move security to the top of their list of priorities. By integrating security software directly into the vPro architecture, it might not even be necessary to place antivirus code directly on the hardware. Add optimizations to a product architecture that Intel trusts absolutely, since they made it themselves, and the overall level of security on an Intel-based virtual machine would be greatly increased.

vpro.jpg

Then Intel went and muddied the water with the $1.9 billion purchase of Infineon Technologies AG's wireless business, which doesn't own manufacturing facilities but does own the intellectual property and patents for chips providing wireless communication. Suddenly some discarded theories about the purchase of McAfee seemed valid again. One possibility that was bandied about was the idea of Intel moving into ARM territory in the cell phone business. With Intel's new focus on low-power chips, with Atom being the starting point, the idea of Intel providing secure CPUs appropriate for cell phones and tablets became much more believable. With the current rise of viruses targeting those mobile platforms and the vulnerabilities present in Android and Windows-based phones, hardware-based antivirus, or at least optimized hardware, makes a lot of sense.

It also differentiates them from ARM, who has more market experience making ultra-low-power chips but certainly does not own an antivirus vendor. The security concerns with cell phones and tablets will continue to increase at the same pace as the capabilities of the devices. Where once bluejacking was the biggest concern of a cell phone user, a smartphone user can browse the world wild web and expose themselves to all sorts of nastiness, including more than just the nastiness they intended to browse for. A hardware solution would leave more processing power for the user; running Norton 360 on a cell phone or tablet would chew up a lot of cycles.

cell_killer.jpg

Today those muddied waters were stirred up even more as Intel announced it is planning to buy Fulcrum Microsystems, maker of high-end 10Gbps and 40Gbps Ethernet switches. This purchase supports the theory floated before the purchase of Infineon's wireless group: that Intel is taking a serious look at a total TPM ecosystem. In order to truly trust your platform you need to do more than secure your endpoints. If your server is running AMT or Intel TXT, then you can be assured that any virtual machine running on it can be trusted. As well, if both the server and client are running processors capable of Intel's TPM (sounds so much better than DRM, eh?), both machines can again be considered trusted platforms.

That does not help with trusting data which has been transferred over a WAN, or in some cases even a LAN. Data transfer allows an attacker a means of entry, or at least a way of denying data transfer. With a trusted platform, any data which does not match what is expected by the receiving machine will be prevented from running, so a successful man-in-the-middle attack might not allow remote code execution or privilege escalation but would certainly act as a DoS attack, as the TPM client refuses to accept the incoming data. Once the routers and switches involved in the data transfer are secured with the exact same TPM specifications, the entire route is protected and can all be considered part of the same Trusted Platform. The network devices would reject any code injection attempted on the data during transfer, allowing data to flow freely inside a LAN as well as customized WANs.

intel_AES.jpg

Returning to the secure cell phone theory, we can now consider the possibility of a TPM-compliant cell phone thanks to the theoretical integration of Intel processors into your phone and tablet, letting you include your mobile communications in your TPM ecosystem. Implement that security properly and not only would you challenge ARM's market share by out-securing them, you could topple RIM's share of the business market, as a BlackBerry may be handy for the sales team but is a nightmare for the IT/IS security team. Nothing is perfect, but that would be a huge step towards defeating the current attack vectors that affect business systems. So far Intel is not saying much, so all we can do is speculate ... which is fun.


Author:
Subject: Editorial, Mobile
Manufacturer: Qualcomm

Meet Vellamo

With Google reporting Android device activations upward of 550,000 a day, the rapid growth and ubiquity of the platform cannot be denied. As the platform has grown, we here at PC Perspective have constantly kept our eye out for ways to assess and compare the performance of different devices running the same mobile operating systems. In the past we have done performance testing with applications such as Quadrant and Linpack, and GPU testing with NenaMark and Qualcomm's NeoCore product.

Today we are taking a look at a new mobile benchmark from Qualcomm named Vellamo. Qualcomm saw the need for an agnostic browser benchmark on Android, and Vellamo is the result. A video introduction from Qualcomm's Director of Product Management, Sy Choudhury, is below.

 

With the default configuration, Vellamo performs a battery of 14 tests. These tests are categorized into Rendering, JavaScript, User Experience, Networking, and Advanced.

For more on this benchmark and our results from 10 different Android-powered devices, keep reading!

Bumpday 7/20/2011: 3D Glasses not available in all areas

Subject: Editorial, General Tech | July 20, 2011 - 02:00 PM |
Tagged: bumpday, 3d

This week LG unveiled their glasses-free 3D LCD display, with only a minimal number of LG employees trying to pet a poorly Photoshopped Formula One race car. 3D is quite heavily promoted lately, with the hype machine apparently being fueled by anthropomorphic blue cats and box office records. 3D on the PC has been around for much longer, however: NVIDIA and ELSA had support for 3D glasses over a decade ago for 3D effects in the games of the time. There really has not been much said about 3D between then and the current rush of publicity, so I guess it is time to bump it up in our memory.

Bumpday2.png

This week’s intermission… in the third dimension

In August 2002 the epitome of threads on ATI's lack of 3D stereoscopic support was born with a simple message: give your greens to the green. Of course, whenever you mention one brand over another it immediately becomes a three-way comparison between the market leaders: ATI, nVidia, and Matrox (wha-what!?! Actually, another article will be posted soon; an old Matrox technology has a spiritual successor… because the body's long since dead.) Even back then, however, we had people who bashed 3D technology long before it was cool to dislike 3D technology. Some people like it a lot, though; enough to drop $1,600 on a pair of 3D VR glasses, but no money on an ATI card.

BUMP!

Source: PCPer Forums

Chris Blizzard, Mozilla Blogs: The process of multi-process

Subject: Editorial, General Tech | July 19, 2011 - 11:59 PM |
Tagged: mozilla, firefox

One side effect of splitting a program up into multiple processes is that instructions do not inherently have a specific order. One of the most evident places for that to show up is in a videogame. I am sure most gamers have played a game where the controls just felt sluggish and muddy for some inexplicable reason. While there could be a few causes, one likely culprit is that your input is not evaluated for a perceivably long amount of time. Chris Blizzard of Mozilla took on this and other issues with multithreaded applications and wrapped it around the concept of Firefox past, present, and future.

22-mozilla.jpg

Firefox is getting Beta all the time.

One common misconception is that your input is recognized between each frame, which is untrue: many frames could go by before input affects the events on screen. John Carmack, in a recent E3 interview, discussed id Software measuring up to 100ms worth of frames passing before a frame occurred which recognized the user's command. This is often more permissible for games with slower-paced design where agility is less relevant; if your character would lose to a yak in a foot race, turns about as quickly as one, and takes a hundred bullets to die, you will not notice that you started to dodge a few milliseconds earlier, as you would expect to die in either case. In a web browser it is much less dramatic, though the same principle is true: the browser is busy doing its many tasks and cannot waste too much time checking whether the user has requested something yet. This aspect of performance, along with random hanging, is considered "responsiveness". Mozilla targets 50 milliseconds (one-twentieth of a second) as the maximum time before Firefox rechecks its state for changes.
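
The discipline behind that number is simple: never let queued work run past the responsiveness budget before returning to check for input. A minimal sketch of a 50ms-budgeted loop (task and function names are ours, not Mozilla's implementation):

```python
import time
from collections import deque

BUDGET_S = 0.050  # the 50ms responsiveness target

def run_event_loop(tasks, poll_input):
    """tasks: deque of small callables; poll_input: checks for user events."""
    while tasks:
        deadline = time.monotonic() + BUDGET_S
        # Work through queued tasks, but only until the budget is spent...
        while tasks and time.monotonic() < deadline:
            tasks.popleft()()
        # ...so pending input never waits much more than ~50ms to be noticed.
        poll_input()

run_event_loop(deque([lambda: time.sleep(0.01)] * 10),
               lambda: print("checked for input"))
```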

Chris Blizzard goes on to discuss how hardware is mostly advancing through increases in parallelism rather than clock speed and other per-thread improvements. GPGPU was not a topic in the blog post, leaving the question of what a multithreaded DOM would look like for the distant future – valuing classical multicore over the still-budding many-core architectures. Memory usage and crashing were also addressed, though this was likely more to dispel the memory-hog stereotype Firefox picked up starting late in the Firefox 2 era.

GPGPU-Trail.png

The GPGPU trail is not Mozilla's roadmap.

The last topic discussed was sandboxing for security. One advantage of branching your multiple threads off into multiple discrete processes is that you can request that the operating system assign limited rights to individual processes. The concept of limited rights is to prevent one application from exploiting too many permissions in order to force your computer to do something undesirable. If you are accepting external data, such as a random website on the internet, you need to make sure that if it can exploit a vulnerability in your web browser, it gains as little permission as possible. While there is no guarantee that external data will never be executed with dangerous permission levels, the harder you can make it, the better.
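
As a concrete flavor of that idea, on a Unix system a parent process can permanently drop a child's privileges before it touches untrusted data. A minimal sketch, assuming Linux and a parent started with enough privilege to drop (the "nobody" ID is the conventional value, used here purely for illustration):

```python
import multiprocessing
import os

def untrusted_work():
    # Anything parsed here (say, a web page) runs with reduced rights.
    print(f"worker running as uid={os.getuid()}")

def drop_privileges_and_run():
    # Give up elevated rights before touching untrusted data, so an
    # exploited vulnerability gains as little permission as possible.
    nobody = 65534      # conventional uid/gid of the unprivileged "nobody" user
    os.setgid(nobody)   # drop group first, while we still have the right to
    os.setuid(nobody)   # irreversible: the process cannot regain root
    untrusted_work()

if __name__ == "__main__":
    worker = multiprocessing.Process(target=drop_privileges_and_run)
    worker.start()
    worker.join()
```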

What do our readers think? (Registration not required to comment.)

Source: Mozilla Blog

NVDA Cum Laude-ing Stanford a CUDA Center of Excellence

Subject: Editorial, General Tech, Graphics Cards | July 17, 2011 - 01:07 PM |
Tagged: stanford, nvidia, CUDA

NVIDIA has been pushing their CUDA platform for years now as a method to access your GPU for purposes far beyond the scope of flags and frags. We have seen what a good amount of heterogeneous hardware will do for a process with a hefty portion of parallelizable code, from encryption to generating bitcoins, and from media processing to blurring the line between real-time and non-real-time 3D rendering. NVIDIA also recognizes the role that academia plays in training future programmers and thus strongly supports institutions that teach how to use GPU hardware effectively, especially when they teach how to use NVIDIA GPU hardware effectively. Recently, NVIDIA knighted Stanford as the latest member of its CUDA Center of Excellence round table.

GPUniversity.jpg

It will be $150 if you want it framed.

The list of CUDA Centers of Excellence currently includes: Georgia Institute of Technology, Harvard School of Engineering, the Institute of Process Engineering at the Chinese Academy of Sciences, National Taiwan University, Stanford Engineering, TokyoTech, Tsinghua University, University of Cambridge, University of Illinois at Urbana-Champaign, University of Maryland, University of Tennessee, and the University of Utah. If you are interested in learning about programming for GPUs, NVIDIA has just blessed one further choice. Whether that will sway many prospective students and faculty remains to be seen, but it makes for many amusing puns nonetheless.

Source: NVIDIA

PDXLAN Gears up with sponsors and gamers

Subject: Editorial, Shows and Expos | July 16, 2011 - 12:51 PM |
Tagged: sli, sapphire, pdxlan, pdx, nvidia, msi, amd

If you are a PC gamer and live in or around the Portland, OR area you are familiar with the concept of PDXLAN - one of the most popular (but still cool and underground) LAN events in the country. The primary event is going on this weekend and I am here to both game and take a look at what the sponsors are showing off.

pdx01.jpg

MSI has a lot of stuff going on, including a look at the latest version of the Afterburner overclocking tool, the 3GB version of the GTX 580 Lightning (that Josh is currently working on a review of) and even an NVIDIA Surround-based Dirt 3 sim seat.

pdx02.jpg

Gaming laptops are still taking off here in the US and MSI has a couple on display, including a HUGE 18-in model (on the right) with a configurable keyboard that lights up with various colors of LEDs.

pdx03.jpg

The Sapphire guys are here as well and are showing off much of what AMD offers for gamers, including Eyefinity configurations like the very popular 5x1 portrait mode. This is something that only AMD offers currently, and in this demo we were looking at Dragon Age II. It was definitely grabbing some attention!

pdx04.jpg

Showing that AMD's HD3D technology does indeed have legs, Sapphire was demonstrating the new Samsung SyncMaster SA950, which has a nice external design. I am going to spend some more time with it today to see how it performs, so check back for more!

pdx05.jpg

If you are here, you can also find me getting my butt kicked at various games. This is the machine I'll be on: a Maingear-built GTX 580 SLI rig with an overclocked Intel Core i7 Sandy Bridge processor and a 30-in display. I know, it sucks to be me, but someone has to sacrifice and play on it, right?

More from PDXLAN later today!

Source: PCPer

One Billion work units down and the FLOPs are still rising

Subject: Editorial | July 15, 2011 - 06:18 PM |
Tagged: killer frogs, friday, folding@home, folding frogs, boinc, 1 billion

When visiting our Forums you will be greeted with an ever-changing mix of the old and new. Some members will be tracking down manuals for old kit that they inherited from friends, such as an Epox EP-8KDA3J, while others might have the manual but are having a tough time finding drivers for their DFI LanParty NForce2 Ultra B board. Those threads are nestled next to another member's in-depth look at the brand new Intel Core i7-2600K with a GIGABYTE Z68X-UD7, with plenty of pictures and tips on getting the best overclock.

Our podcast, PC Perspective Podcast #162 this week, has inspired one member to stop lurking and ask about a case for a NAS they want to build, while other long-term members discuss the merits of SLI. One member is being very mean to their speakers in the Audio Corner, while in the Storage Forum you can read about the resurrection of Intel SSDs that suddenly become 8MB in size.

Also this week there has been a lot of talk about Bitcoins, both in terms of the best GPUs to use for mining as well as a rough guide on how much your electricity bill will go up thanks to your new habit (see the quick sketch below), but there are other things you can do with your spare cycles. The PC Perspective Folding Frogs are going strong and helping with protein folding, but it is the PC Perspective Killer Frogs BOINC team which has something to cheer about right now. Our combined BOINC processed work unit total has passed 1 billion! Congrats to everyone, old members and new, who contributed to helping us pass this milestone.
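
For the curious, the electricity math is one line: wall wattage times hours times your utility rate. A quick sketch with illustrative numbers (not measurements):

```python
# Rough monthly electricity cost of one GPU mining around the clock.
watts = 250           # illustrative wall draw for a single mining GPU
rate_per_kwh = 0.12   # illustrative utility rate, in $/kWh
hours = 24 * 30       # one month of 24/7 mining

kwh = watts / 1000 * hours                      # 180 kWh
print(f"~${kwh * rate_per_kwh:.2f} per month")  # ~$21.60 per month
```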

hellsya.jpg