NVIDIA Tesla High Performance Computing - GPUs Take a New Life
GPUs for more than just graphics
NVIDIA has seemingly been trailing AMD (formerly ATI) in the business of promoting its graphics processing units as general-purpose processors for data sets other than graphics. AMD was the first to work with Stanford's Folding@Home project, creating a GPU-based client for its distributed protein folding computations last September. During its recent R600 GPU launch, AMD spent a large portion of our editor's day looking at applications for the GPU beyond graphics, and it has specifically added features to the GPU itself to aid future GPGPU projects.
NVIDIA, on the other hand, has been somewhat more reserved, with the exception of the launch of its CUDA (Compute Unified Device Architecture) initiative, which attempts to bring standard coding practices to the much more complex world of massively-threaded processors. Today, though, NVIDIA is making a major announcement about its push into the world of HPC (high-performance computing) with GPU-based products under a new brand called Tesla.
The Need for GPU HPC
NVIDIA's move into the HPC market comes at a time when processing power seems to have hit a stalling point. As Intel and AMD add more cores to their CPUs, performance hasn't kept up with the huge gains we saw in the mid-90s. Part of the problem is software-based: coding for multi-threaded processors is much more difficult than coding for serial ones. But in some cases, where the multi-threaded nature of the computational data isn't a problem, the hardware is actually the bottleneck. NVIDIA's Tesla GPU-based processing technology is aimed at exactly those situations.
This diagram from NVIDIA shows the progression from a single processor to multi-processors to massively parallel processors to clusters.
According to NVIDIA at least, without the power of the GPU, CPUs were going to fall well short of Moore's Law in computing power in the world of high-performance computing. Keep in mind that the y-axis on this graph is not overall processing performance but rather "relative floating point performance," an area where the GPU has a big leg up on current CPUs.
NVIDIA told us that though Intel might see the HPC world as a "war" with other hardware vendors, NVIDIA sees GPUs and CPUs working together in the future for improved HPC performance.
High performance computing applications are typically those that require many times the processing power of a typical server and must access immense amounts of data simultaneously. While there are both serial and parallel examples of HPC applications, the most common and most constrained are the extremely parallel ones that often perform the same operations on millions (or more) of data points at the same time. As you might recall, graphics processing is exactly that kind of problem, and GPUs from the likes of NVIDIA and AMD attack it quite well.
It doesn't take much additional thought to see then that GPUs in their current, or even slightly modified, form could be used for applications such as:
Simulation of neural circuits
Seismic and reservoir simulation
Cell phone design
There are many, many other applications for this type of mass-data, super-parallel computing, including topics in the fields of electromagnetics, energy, biomedicine, pharmaceuticals, industry, finance and military operations. We'll leave the more complex details of HPC software for another article; for now, let's dive into the hardware NVIDIA is announcing today.