Subject: General Tech | November 23, 2012 - 10:03 AM | Jeremy Hellstrom
Tagged: gpgpu, amd, nvidia, Intel, phi, tesla, firepro, HPC
The skeptics were right to question the huge improvements reported when GPGPUs are used for heavily parallel computing tasks. The cards do help a lot, but the 100x improvements reported by some companies and universities had more to do with poorly optimized CPU code than with the processing power of the GPGPUs themselves. The news comes from someone you might not expect to burst this particular bubble: Sumit Gupta, GM of NVIDIA's Tesla team, who may be trying to head off disappointment among future customers whose CPU code is already optimized and who won't see the huge gains reported by academics and other current customers. The Inquirer does point out a balancing benefit: it is much easier to optimize code in CUDA, OpenCL and other GPGPU languages than it is to hand-optimize code for multicore CPUs.
"Both AMD and Nvidia have been using real-world code examples and projects to promote the performance of their respective GPGPU accelerators for years, but now it seems some of the eye popping figures including speed ups of 100x or 200x were not down to just the computing power of GPGPUs. Sumit Gupta, GM of Nvidia's Tesla business told The INQUIRER that such figures were generally down to starting with unoptimised CPU [code]."
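The arithmetic behind the story is simple: a "100x" headline depends entirely on which CPU baseline you divide by. The sketch below uses purely hypothetical timings (none of these numbers come from NVIDIA or The Inquirer) to show how the same GPU run yields wildly different speedup figures against a naive versus an optimized CPU baseline:

```python
# Hypothetical timings in seconds for the same workload -- illustrative only.
t_gpu = 0.5             # GPGPU run
t_cpu_naive = 50.0      # single-threaded, unoptimized CPU loop
t_cpu_optimized = 4.0   # vectorized, multithreaded CPU code

naive_speedup = t_cpu_naive / t_gpu          # the headline-grabbing number
honest_speedup = t_cpu_optimized / t_gpu     # the apples-to-apples figure

print(f"vs naive CPU code:     {naive_speedup:.0f}x")   # 100x
print(f"vs optimized CPU code: {honest_speedup:.0f}x")  # 8x
```

Both figures are "true"; the marketing materials simply picked the flattering denominator.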
Here is some more Tech News from around the web:
- Intel reportedly speeds up development of low-power processors @ DigiTimes
- Firefox and Opera squish big buffer overflow bugs @ The Register
- Hexing MAC address reveals Wifi passwords @ The Register
- Cisco Linksys EA6500 Smart Wi-Fi Router Review @ Legit Reviews
- Camera shootout: Samsung Galaxy S III vs S III mini @ Hardware.info
- Black Friday Tech Deals @ TechReviewSource
- Lawrence 'Empire Strikes Back' Kasdan to pen future Star Wars script @ The Register
- Win Corsair AX860i, AX760i, AX860 & AX760 power supplies @ Kitguru