Subject: General Tech, Graphics Cards | May 19, 2012 - 03:27 AM | Scott Michaud
Tagged: Adobe, CS6, gpgpu
Last month, SemiAccurate reported that Adobe Creative Suite 6 would be programmed around OpenCL, which would allow any GPU to accelerate your work. Adobe now claims that, at least for the time being and at least for Premiere Pro, OpenCL will only accelerate the HD 6750M and the HD 6770M with 1GB of vRAM, running OSX Lion on a MacBook Pro.
Does it aggravate you when something takes a while or stutters when you know a part of your PC is just idle?
Adobe has been increasingly moving to take advantage of the graphics processor in your computer to benefit the professional behind the keyboard, mouse, or tablet. CS 5.5 pushed several of their applications onto the CUDA platform. Some end-users claimed that Adobe sold them out to NVIDIA, but that seems both unlikely and unlike either company. My prediction was, and remains, that NVIDIA parachuted some engineers into Adobe and that their help was limited to CUDA.
Creative Suite 6 further suggests that I was correct, as Adobe has gone back and re-authored many of those features in OpenCL.
Isn't it somewhat ironic that insanity is a symptom of mercury poisoning?
AMD as a hatter!
Despite the wider availability of OpenCL relative to NVIDIA's proprietary CUDA, CS6 still will not execute on just any old GPU. While the CUDA whitelist currently extends to 22 Windows NVIDIA GPUs and 3 Mac OSX NVIDIA GPUs, current OpenCL support is limited to a pair of AMD-based OSX Lion mobile GPUs: the 6750M and the 6770M.
It would not surprise me if other GPUs could accelerate CS6 when manually added to a whitelist. Adobe is probably very conservative about which components they add to the whitelist in an effort to reduce support costs. That does not mean you will see benefits even if you trick Adobe into accepting hardware acceleration, though.
It appears as if Adobe is working towards using the most open and broad standards -- they are just doing it at their own pace this time. This release was obviously paced for Apple support.
Subject: Editorial, General Tech, Graphics Cards, Processors | April 4, 2012 - 04:13 AM | Scott Michaud
Tagged: nvidia, Intel, Knight's Corner, gpgpu
NVIDIA steals Intel’s lunch… analogy. In the process, they claim that optimizing your application for Intel’s upcoming many-core hardware is not free of effort, and that the effort required is similar to what it takes to develop on the hardware NVIDIA already has available.
A few months ago, Intel published an article on their software blog to urge developers to look to the future without relying on the future when they design their applications. The crux of Intel’s argument states that regardless of how efficient Intel makes their processors, there is still responsibility on your part to create efficient code.
There’s always that one, in the back of the class…
NVIDIA, never a company afraid to make a statement, borrowed Intel’s analogy to urge developers to optimize for many-core architectures.
The hope that unmodified HPC applications will work well on MIC with just a recompile is not really credible, nor is talking about ease of programming without consideration of performance.
There is no free lunch. Programmers will need to put in some effort to structure their applications for hybrid architectures. But that work will pay off handsomely for today’s, and especially tomorrow’s, HPC systems.
It remains to be seen how Intel MIC will perform when it eventually arrives. But why wait? Better to get ahead of the game by starting down the hybrid multicore path now.
NVIDIA thinks that Intel was correct: there is no free lunch for developers. So why not purchase a plate at NVIDIA’s table? Who knows, after the appetizer you might want to stay around.
You cannot simply get your program to execute on Many Integrated Core (MIC) hardware and expect it to perform well. The goal is not to simply run on new hardware -- it is to perform efficiently while utilizing the advantages of everything that is available. It will always be up to the developer to structure their application in the appropriate way.
Your advantage will be to understand the pros and cons of massive parallelism. NVIDIA, AMD, and now Intel have labored to create a variety of architectures to suit this aspiration; software developers must labor in a similar way on their end.
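To make that "no free lunch" point concrete, here is a minimal sketch of the kind of restructuring both vendors are talking about: a serial loop rewritten as a CUDA kernel so that many threads each handle one element. This is a generic illustration under my own assumptions, not code from either company, and all names are made up.

```cpp
#include <cuda_runtime.h>

// Serial version, for reference: one CPU core walks the whole array.
//   for (int i = 0; i < n; ++i) y[i] = a * x[i] + y[i];

// Restructured version: the same arithmetic, but each GPU thread
// computes exactly one element in parallel.
__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)                         // guard: the grid may overshoot n
        y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    float *x, *y;
    cudaMalloc(&x, n * sizeof(float)); // data must live in GPU memory
    cudaMalloc(&y, n * sizeof(float));
    // ... fill x and y via cudaMemcpy from host buffers ...

    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    saxpy<<<blocks, threads>>>(n, 2.0f, x, y);
    cudaDeviceSynchronize();

    cudaFree(x);
    cudaFree(y);
    return 0;
}
```

Note that the arithmetic did not change; the structure did. Memory placement, thread mapping, and launch configuration are exactly the work neither vendor will do for you.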
Subject: General Tech | February 8, 2012 - 12:13 PM | Jeremy Hellstrom
Tagged: gpgpu, l3 cache, APU
Over at North Carolina State University, students Yi Yang and Ping Xiang, Dr. Huiyang Zhou, and Mike Mantor of Advanced Micro Devices have been working on a way to improve how efficiently the GPU and CPU work together. Our current generations of APU/GPGPUs, Llano and Sandy Bridge, have united the two processing units on a single substrate, but as of yet they cannot efficiently pass operations back and forth. This project works to leverage the L3 cache of the CPU as a high-speed bridge between the two processors, allowing the CPU to pass highly parallel tasks to the GPU for more efficient processing while the CPU deals with the complex operations it was designed for.
Along with that bridge comes a change in the way L2 prefetching is utilized: a specially designed pre-execution unit, triggered by the GPU but running on the CPU, issues synchronized memory fetch instructions, increasing memory access at that level and freeing the L3 to pass data between CPU and GPU. The results have been impressive: in their tests they saw an average performance improvement of 21.4%.
"Researchers from North Carolina State University have developed a new technique that allows graphics processing units (GPUs) and central processing units (CPUs) on a single chip to collaborate – boosting processor performance by an average of more than 20 percent.
"Chip manufacturers are now creating processors that have a 'fused architecture,' meaning that they include CPUs and GPUs on a single chip,” says Dr. Huiyang Zhou, an associate professor of electrical and computer engineering who co-authored a paper on the research. "This approach decreases manufacturing costs and makes computers more energy efficient. However, the CPU cores and GPU cores still work almost exclusively on separate functions. They rarely collaborate to execute any given program, so they aren’t as efficient as they could be. That's the issue we’re trying to resolve."
Here is some more Tech News from around the web:
- Laser boffins blast bits onto hard drive at 200Gb/sec @ The Register
- Intel admits Haswell uses transactional memory @ SemiAccurate
- Nvidia and Rambus bury the hatchet on patents @ The Inquirer
- Don't panic? Windows 8 and the "ribbonification" of Explorer @ Ars Technica
- DIY Solid State Tesla Coil @ Hack a Day
- Brice from Arctic reveals all in exclusive @ Kitguru
- Instructables Giving Away $50,000 3D Printer @ MAKE:Blog
Subject: General Tech, Graphics Cards | January 29, 2012 - 02:53 AM | Scott Michaud
Tagged: nvidia, gpgpu, CUDA
NVIDIA has traditionally been very interested in carving out room in the market for high-performance computing for scientific research. For a lot of workloads, having a fast and highly parallel processor saves time and money compared to letting a traditional computer crunch away or booking time on one of the world’s relatively few supercomputers. Despite the raw performance of a GPU, adequate development tools are required to turn a simulation or calculation into a functional program that executes on said GPU. NVIDIA is said to have had a strong lead with their CUDA platform for quite some time; that lead will likely continue with releases the size of this one.
What does a tuned up GPU purr like? Cuda cuda cuda cuda cuda.
The most recent release, CUDA 4.1, has three main features:
- A visual profiler to point out common mistakes and missed optimizations, providing instructions that detail how to alter your code to increase performance
- A new compiler which is based on the LLVM infrastructure, making good on their promise to open the CUDA platform to other architectures -- both software and hardware
- New image and signal processing functions for their NVIDIA Performance Primitives (NPP) library, relieving developers of the need to create their own versions or license a proprietary library
The three features, as NVIDIA describes them in their press release, are listed below.
New Visual Profiler - Easiest path to performance optimization
The new Visual Profiler makes it easy for developers at all experience levels to optimize their code for maximum performance. Featuring automated performance analysis and an expert guidance system that delivers step-by-step optimization suggestions, the Visual Profiler identifies application performance bottlenecks and recommends actions, with links to the optimization guides. Using the new Visual Profiler, performance bottlenecks are easily identified and actionable.
LLVM Compiler - Instant 10 percent increase in application performance
LLVM is a widely-used open-source compiler infrastructure featuring a modular design that makes it easy to add support for new programming languages and processor architectures. Using the new LLVM-based CUDA compiler, developers can achieve up to 10 percent additional performance gains on existing GPU-accelerated applications with a simple recompile. In addition, LLVM's modular design allows third-party software tool developers to provide a custom LLVM solution for non-NVIDIA processor architectures, enabling CUDA applications to run across NVIDIA GPUs, as well as those from other vendors.
New Image, Signal Processing Library Functions - "Drop-in" Acceleration with NPP Library
NVIDIA has doubled the size of its NPP library, with the addition of hundreds of new image and signal processing functions. This enables virtually any developer using image or signal processing algorithms to easily gain the benefit of GPU acceleration, with the simple addition of library calls into their application. The updated NPP library can be used for a wide variety of image and signal processing algorithms, ranging from basic filtering to advanced workflows.
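To give a feel for what "drop-in" means here, the sketch below box-filters an 8-bit grayscale image on the GPU with a single NPP call. nppiFilterBox_8u_C1R is a real NPP entry point, but the buffer names and dimensions are made up for the example and edge handling is glossed over; consult the NPP documentation for your CUDA version before leaning on the details.

```cpp
#include <npp.h>
#include <cuda_runtime.h>

int main() {
    const int width = 1920, height = 1080;
    const int step = width * sizeof(Npp8u);  // bytes per row, no padding

    // NPP operates on device memory; allocate source and destination there.
    Npp8u *d_src, *d_dst;
    cudaMalloc(&d_src, step * height);
    cudaMalloc(&d_dst, step * height);
    // ... cudaMemcpy the host image into d_src here ...

    NppiSize  roi    = { width, height };
    NppiSize  mask   = { 5, 5 };              // 5x5 box filter
    NppiPoint anchor = { 2, 2 };              // centered kernel

    // One library call in place of a hand-written CUDA filter kernel.
    // (A real call would inset the ROI so the mask never reads out of bounds.)
    nppiFilterBox_8u_C1R(d_src, step, d_dst, step, roi, mask, anchor);

    // ... cudaMemcpy d_dst back to the host ...
    cudaFree(d_src);
    cudaFree(d_dst);
    return 0;
}
```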
New Trojan.Badminer Malware Steals Your Spare Processing Cycles To Make Criminals Money At Your Expense
Subject: General Tech | August 17, 2011 - 11:02 PM | Tim Verry
Tagged: trojan, opencl, mining, Malware, gpgpu, bitcoin
Anti-virus provider Symantec recently uncovered a new piece of malware that seeks to profit from your spare computing cycles. Dubbed Trojan.Badminer, this insidious piece of code is a trojan that (so far) is capable of affecting Windows operating systems from Windows 98 to Windows 7. Once the trojan has been downloaded and executed (usually through an online attack vector via an unpatched bug in Flash or Java), it proceeds to create a number of files and registry entries.
It's a trojan-infected bitcoin; oh, the audacity of malware authors!
After it has propagated throughout the system, it is then able to run one of two mining programs. It will first search for a compatible graphics card, and run Phoenix Miner. However, if a graphics card is not found, it will fall back to RPC miner and instead steal your CPU cycles. The miners then start hashing in search of bitcoin blocks, and if found, will then send the reward money to the attacker’s account.
It should be noted that bitcoin mining itself is not inherently bad, and many people run it legitimately. In fact, if you are interested in learning more about bitcoins, we ran an article on them recently. This trojan on the other hand is malicious because it is infecting the user’s computer with unwanted code that steals processing cycles from the GPU and CPU to make the attacker money. All these GPU and CPU cycles come at the cost of reduced system responsiveness and electricity, which can add up to a rather large bill, depending on where you live and what hardware the trojan is able to get its hands on.
Right now, Symantec is offering up general tips on keeping users’ computers free from the infection, including enabling a software firewall (or at least being behind a router with its own firewall that blocks unsolicited incoming connections), running the computer as the lowest level user possible with UAC turned on, and not clicking on unsolicited email attachments or links.
If you are also a bitcoin miner, you may want to further protect yourself by securing your bitcoin wallet in the event that you also accidentally become infected by a trojan that seeks to steal the wallet.dat file (the file that essentially holds all your bitcoin currency).
Stay vigilant folks, and keep an eye on your system's GPU and CPU utilization in addition to using safer computing habits to keep nasty malware like this off of your system. On a more opinionated note, is it just me or have malware authors really hit a new low with this one?
Subject: Editorial, General Tech, Graphics Cards | July 26, 2011 - 08:39 PM | Scott Michaud
Tagged: gpgpu, Developer Watch, CUVI
Code that can be easily parallelized into many threads has been streaming over to the GPU, with many applications and helper libraries taking advantage of CUDA and OpenCL primarily. Thus, for developers who wish to utilize the GPU more but are unsure where to start, there are more and more libraries of functions to call to at least partially embrace their video cards. OpenCV is a library of functions for image manipulation and, while GPU support is ongoing through CUDA, primarily runs on the CPU. CUVIlib, which has just launched its 0.5 release, is a competitor to OpenCV with a strong focus on GPU utilization, performance, and ease of implementation. While OpenCV is licensed under BSD, which is about as permissive a license as can be offered, CUVI is proprietary and distributed under its own EULA.
The little plus signs are the computer tracking motion. CUVI (top; 33fps), OpenCV (bottom; 2.5fps)
(Video from CUVIlib)
Despite CUVI's proprietary, non-free-for-commercial-use nature, they advertise large speedups for certain algorithms. For their Kanade-Lucas-Tomasi (KLT) Feature Tracker, compared with OpenCV’s implementation, they report a three-fold increase in performance with just a GeForce 9800 GT installed, and 8-13x faster results when using a high-end compute card such as the Tesla C2050. Their feature page includes footage of two 720p high-definition videos undergoing the KLT algorithm, with the OpenCV CPU method chugging along at 2.5 fps contrasted with CUVI’s GPU-accelerated 33 fps. Whether you would prefer to wait on OpenCV’s GPU advancements or pay CUVIlib to accelerate the workloads where OpenCV falls short of your needs is up to you, but either future will likely involve the GPU.
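For reference, this is roughly the OpenCV code path on the CPU side of that comparison: pyramidal Lucas-Kanade point tracking between two frames. A minimal sketch with made-up file names; the calls are standard OpenCV 2.x API, but treat it as illustrative rather than the exact benchmark code.

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

int main() {
    // Two consecutive frames, loaded as grayscale (file names made up).
    cv::Mat prev = cv::imread("frame0.png", 0);
    cv::Mat next = cv::imread("frame1.png", 0);

    // Pick corners worth tracking in the first frame.
    std::vector<cv::Point2f> prevPts, nextPts;
    cv::goodFeaturesToTrack(prev, prevPts, 500, 0.01, 10.0);

    // Pyramidal Lucas-Kanade: the CPU-side KLT tracker in question.
    std::vector<unsigned char> status;
    std::vector<float> err;
    cv::calcOpticalFlowPyrLK(prev, next, prevPts, nextPts, status, err);

    // status[i] != 0 marks points successfully tracked into 'next'.
    return 0;
}
```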
Subject: General Tech | July 14, 2011 - 04:38 PM | Ken Addison
Tagged: podcast, bitcoin, mining, gpu, gpgpu, amd, nvidia, eyefinity, APU
PC Perspective Podcast #162 - 7/14/2011
This week we talk about our adventures in Bitcoin Mining, the Eyefinity experience, Ultrabooks and more!
The URL for the podcast is: http://pcper.com/podcast - Share with your friends!
- iTunes - Subscribe to the podcast directly through the iTunes Store
- RSS - Subscribe through your regular RSS reader
- MP3 - Direct download link to the MP3 file
Hosts: Ryan Shrout, Jeremy Hellstrom, Josh Walrath and Allyn Malventano
This Podcast is brought to you by
- 0:00:40 Introduction
- 1-888-38-PCPER or email@example.com
- http://twitter.com/ryanshrout and http://twitter.com/pcper
- 0:02:10 Bitcoin Currency and GPU Mining Performance Comparison
- 0:22:48 Bitcoin Mining Update: Power Usage Costs Across the United States
- 0:34:15 This Podcast is brought to you by , and their all new Sandy Bridge Motherboards!
- 0:34:50 Eyefinity and Me
- 0:45:00 Video Perspective: AMD A-series APU Dual Graphics Technology Performance
- 0:47:02 As expected NVIDIA's next generation GPU release schedule was a bit optimistic
- 0:49:40 A PC Macbook Air: Can Intel has?
- 0:53:00 PC: for all your Xbox gaming needs
- 0:56:06 Email from Howard
- 1:00:28 Email from Ian
- 1:03:00 Email from Jan
- In case you're interested, here are almost 150mpix of HDR: http://rattkin.info/archives/430
- 1:08:55 Quakecon Reminder - http://www.quakecon.org/
- 1:09:45 Hardware / Software Pick of the Week
- 1-888-38-PCPER or firstname.lastname@example.org
- http://twitter.com/ryanshrout and http://twitter.com/pcper
- 1:15:15 Closing
Subject: General Tech, Graphics Cards | June 29, 2011 - 08:58 PM | Scott Michaud
Tagged: gpgpu, CUDA
If you have seen our various news articles about how a GPU can be useful in many ways, and you are a developer yourself, you may be wondering how to get in on that action. Recently, Microsoft showed off their competitor to OpenCL, known as C++ AMP, and AMD showed off some new tools designed to help OpenCL developers. Everything was dead silent on the CUDA front at the AMD Fusion Developer Summit, as expected, but that does not mean no-one is helping people who do not mind being tied to NVIDIA. An open-source project has been created to generate template files for programmers who wish to do some of their computation in CUDA and would like a helping hand setting up the framework.
You may think the videocard is backwards, but clearly its DVI heads are in front.
The project was started by Pavel Kartashev and is a Java application that accepts form input and generates CUDA code to be imported into your project. The application generates the tedious skeleton code for defining variables and efficiently using the GPU architecture, leaving you to program the actual process to be accomplished. The author apparently plans to create a web-based version, which should be quite easy given the Java-based nature of the application. Personally, I would be more interested in the local application or a widget, leaving my web browser windows free for reference material. That said, I am sure someone would like this tool in their web browser, possibly more people than are like-minded with me.
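For the curious, the boilerplate such a generator spits out looks something like the sketch below: allocate, copy, launch, copy back, free. This is a generic hand-written skeleton for illustration, not actual output from Kartashev's tool.

```cpp
#include <cuda_runtime.h>

// The kernel body is the only part you would actually need to write.
__global__ void process(const float *in, float *out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = in[i];           // TODO: the actual computation
}

void run(const float *host_in, float *host_out, int n) {
    float *d_in, *d_out;
    size_t bytes = n * sizeof(float);

    // The tedium a generator saves you: allocation and transfers.
    cudaMalloc(&d_in, bytes);
    cudaMalloc(&d_out, bytes);
    cudaMemcpy(d_in, host_in, bytes, cudaMemcpyHostToDevice);

    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    process<<<blocks, threads>>>(d_in, d_out, n);

    cudaMemcpy(host_out, d_out, bytes, cudaMemcpyDeviceToHost);
    cudaFree(d_in);
    cudaFree(d_out);
}
```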
Subject: General Tech, Graphics Cards | May 6, 2011 - 05:25 PM | Scott Michaud
Tagged: linux, kgpu, gpgpu
PC Per has discussed using the GPU as a massively-parallel augment to the CPU for a very long time, allowing the latter to focus on the branching logic (“if/then/else”) and other processes it is good at that GPUs are not. AMD and Intel both have their attempts to bundle the benefits of a GPU onto their CPU parts with their respective technologies. Currently, most of the applications outside of the scientific community are gaming and multimedia; however, as stronger GPUs become more common, we are seeing more and more functions relegated to the GPU.
So happy together!
KGPU is an attempt to bring the horsepower of the GPU to the fingertips of the Linux kernel. While the kernel itself will remain a CPU function, the project allows the kernel to offload highly parallel work to the GPU for large speed-ups, keeping the CPU free for everything else. Their current version shows multiple-fold speedups in the maximum read and write bandwidth of eCryptfs, an encrypted filesystem, by letting the GPU handle the AES cipher.
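Why does a cipher map so well to the GPU? In modes where each 16-byte block is independent, every block can get its own thread. The sketch below shows only that shape: aes128_encrypt_block is a hypothetical placeholder (a simple XOR so the example compiles, not real AES), and KGPU's actual implementation differs.

```cpp
#include <cuda_runtime.h>
#include <stdint.h>

// Hypothetical stand-in for the real AES rounds: XORs with key material
// so the sketch compiles. NOT real AES.
__device__ void aes128_encrypt_block(uint8_t *block, const uint8_t *key) {
    for (int b = 0; b < 16; ++b)
        block[b] ^= key[b];
}

// One thread per independent 16-byte block: the parallelism KGPU
// exploits when the kernel hands it a large batch of filesystem data.
__global__ void encrypt_batch(uint8_t *data, int num_blocks,
                              const uint8_t *key) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < num_blocks)
        aes128_encrypt_block(data + i * 16, key);
}

int main() {
    const int num_blocks = 1 << 16;     // 1 MiB of data, 16 B per block
    uint8_t *d_data, *d_key;
    cudaMalloc(&d_data, num_blocks * 16);
    cudaMalloc(&d_key, 16);
    // ... copy plaintext and key material up with cudaMemcpy ...

    int threads = 256;
    int blocks = (num_blocks + threads - 1) / threads;
    encrypt_batch<<<blocks, threads>>>(d_data, num_blocks, d_key);
    cudaDeviceSynchronize();

    cudaFree(d_data);
    cudaFree(d_key);
    return 0;
}
```

A single CPU core would walk those blocks one at a time; the GPU dispatches tens of thousands of them at once, which is where the bandwidth gains come from.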
We should continue to see speedups as tasks that are perfect for the GPU are finally allowed to be with their true love. Furthermore, as the number of tasks relegated to the GPU increases, we should see more and stronger GPUs embedded in PCs, which should ease the fears of PC game developers worried about the number of PCs capable of running their applications. I am sure that is great news to many of our frequent readers.