Subject: General Tech, Graphics Cards, Processors | December 15, 2011 - 04:03 AM | Scott Michaud
NVIDIA sits as the current front-runner for the “Last Year’s Best Decision, This Year” award. You may remember our coverage of last June’s AMD Fusion Developer Summit, where industry members such as ARM, Microsoft, and of course AMD discussed the potential of specialized processors and of developing on open platforms such as OpenCL and Microsoft’s newly announced C++ AMP. Do you know what announcement at AFDS would have stomped OpenCL and C++ AMP? That NVIDIA would open up CUDA. Know what announcement missed that bus by a whole half a year? NVIDIA will open up CUDA.
Your platform pooh-pooh? Bear a CUDA.
While I just harassed NVIDIA over their timing, it might not be too late. CUDA is still a powerhouse of a GPGPU platform, with substantial software support ranging from mammoth packages such as Adobe Creative Suite to smaller projects like KGPU. By open sourcing the CUDA compiler, NVIDIA is also permitting manufacturers like AMD and even Intel to support CUDA on their GPUs, x86 CPUs, and other processing units. While I am excited by this outcome, I am still somewhat confused about NVIDIA’s timing: they are just a little too late to open up and crush the market, yet quite abrupt if they originally intended CUDA to survive as a forever-proprietary computing platform.
Subject: Editorial, General Tech, Graphics Cards | July 17, 2011 - 01:07 PM | Scott Michaud
Tagged: stanford, nvidia, CUDA
NVIDIA has been pushing their CUDA platform for years now as a method of accessing your GPU for purposes far beyond the scope of flags and frags. We have seen what a good amount of heterogeneous hardware can do for a process with a hefty portion of parallelizable code: from encryption to generating bitcoins, from media processing to blurring the line between real-time and non-real-time 3D rendering. NVIDIA also recognizes the role that academia plays in training future programmers and thus strongly supports institutions that teach how to use GPU hardware effectively, especially when they teach how to use NVIDIA GPU hardware effectively. Recently, NVIDIA knighted Stanford as the latest member of its CUDA Center of Excellence round table.
It will be $150 if you want it framed.
The list of CUDA Centers of Excellence currently includes: Georgia Institute of Technology, Harvard School of Engineering, the Institute of Process Engineering at the Chinese Academy of Sciences, National Taiwan University, Stanford Engineering, TokyoTech, Tsinghua University, University of Cambridge, University of Illinois at Urbana-Champaign, University of Maryland, University of Tennessee, and the University of Utah. If you are interested in learning GPU programming, NVIDIA has just blessed one further choice. Whether that will sway many prospective students and faculty remains to be seen, but it makes for many amusing puns nonetheless.
Subject: General Tech, Graphics Cards | June 29, 2011 - 08:58 PM | Scott Michaud
Tagged: gpgpu, CUDA
If you have seen our various news articles about the many ways a GPU can be useful, and you are a developer yourself, you may be wondering how to get in on that action. Recently Microsoft showed off C++ AMP, their competitor to OpenCL, and AMD showed off some new tools designed to help OpenCL developers. Everything was dead silent on the CUDA front at the AMD Fusion Developer Summit, as expected, but that does not mean no one is helping those who do not mind being tied to NVIDIA. An open source project has been created to generate template files for programmers who wish to do some of their computation in CUDA and would like a helping hand setting up the framework.
You may think the videocard is backwards, but clearly its DVI heads are in front.
The project was started by Pavel Kartashev and is a Java application that accepts form input and generates CUDA code to be imported into your project. The application generates the tedious skeleton code for defining variables and efficiently using the GPU architecture, leaving you to program only the actual process to be accomplished. The author apparently plans to create a web-based version, which should be quite easy given the Java-based nature of his application. Personally, I would be more interested in the local application or a widget, leaving my web browser windows to reference material. That said, I am sure someone would like this tool in their web browser, possibly more people than are like-minded with me.
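To get a rough sense of what such a generator saves you, here is a hypothetical Python sketch (not Kartashev's actual tool, and the function names are made up) that takes a kernel name and parameter list, the sort of thing a user would enter into a form, and emits the CUDA boilerplate that looks nearly identical in every project:

```python
def cuda_skeleton(kernel: str, params: list) -> str:
    """Emit a boilerplate CUDA kernel/launcher template as text.
    `kernel` and `params` stand in for the form fields a user fills out."""
    arglist = ", ".join(params)
    return f"""__global__ void {kernel}({arglist}) {{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    // TODO: the actual per-element work goes here
}}

void launch_{kernel}(int n) {{
    dim3 block(256);
    dim3 grid((n + block.x - 1) / block.x);  // round up to cover all n elements
    {kernel}<<<grid, block>>>(/* device pointers */);
    cudaDeviceSynchronize();
}}
"""

print(cuda_skeleton("scale", ["float *out", "const float *in", "float k"]))
```

The per-element thread-index arithmetic and the grid/block sizing are the parts newcomers most often get wrong, so having a tool stamp them out correctly is a genuine time-saver.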
Subject: General Tech, Graphics Cards, Storage | May 11, 2011 - 07:58 PM | Scott Michaud
Tagged: SQL, developer, CUDA
Programmers are beginning to understand, and become ever more comfortable with, the uses of GPUs in their applications. Late last week we explored the KGPU project, which is designed to let the Linux kernel offload massively parallel processes to the GPU, relieving the CPU as well as directly increasing performance. KGPU showed that, for an encrypted file system, you can see multi-fold increases in read and write bandwidth on an SSD. Perhaps this little GPU thing can be useful for more? The Alenka project thinks so: they are currently working on a CUDA-based, SQL-like language for data processing.
CUDA woulda shoulda... and did.
SQL databases are among the most common ways to store and manipulate larger sets of data. If you have a blog, it is almost certainly storing its information in a SQL database. If you play an MMO, your data is almost certainly stored and accessed on a SQL server. As your data size expands and your number of concurrent accesses increases, you can see how using a GPU could keep your application running much more smoothly.
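The reason a WHERE clause maps so well onto a GPU is that the predicate is evaluated independently for every row. A toy Python sketch of that pattern (the table and column names are invented for the example; Alenka's actual query language differs):

```python
# Columnar storage, as GPU databases typically use: one array per column.
table = {
    "player": ["ann", "bob", "cat", "dan"],
    "level":  [12, 47, 33, 47],
    "gold":   [100, 2500, 900, 40],
}

def where(table, column, predicate):
    """SELECT * WHERE predicate(column): each row is tested independently,
    so on a GPU every row could be checked by its own thread."""
    mask = [predicate(v) for v in table[column]]          # parallelizable step
    return {col: [v for v, keep in zip(vals, mask) if keep]
            for col, vals in table.items()}

rich = where(table, "gold", lambda g: g >= 900)
print(rich["player"])  # → ['bob', 'cat']
```

Building the boolean mask is an embarrassingly parallel pass over the column, which is exactly the step a CUDA backend would hand to the GPU.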
Alenka, in its current release, supports data sets exceeding both GPU and system RAM by streaming data in chunks: load a chunk, process it, and move on. Its supported primitive types are doubles, longs, and varchars, and it is open source under the Apache License 2.0. Developers interested in using or assisting with the project can check out its SourceForge page. We should continue to see more and more GPU-based applications appear in the near future as problems such as these are finally lifted from the CPU and handed to a processor better suited to bear them.
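That chunked-streaming approach can be sketched in a few lines: rather than loading the whole data set at once, process one fixed-size slice at a time and fold its result into a running aggregate. A minimal Python sketch of the pattern (the chunk size and the sum aggregate are arbitrary choices for the example):

```python
def chunked(seq, size):
    """Yield fixed-size slices so only one chunk is resident at a time."""
    for start in range(0, len(seq), size):
        yield seq[start:start + size]

def streamed_sum(values, chunk_size=4):
    # In a GPU database, each chunk would be copied to the device, reduced
    # there, and freed -- letting the data set exceed both GPU and system RAM.
    total = 0
    for chunk in chunked(values, chunk_size):
        total += sum(chunk)   # stand-in for a GPU reduction over the chunk
    return total

print(streamed_sum(range(10)))  # 45
```

Only one chunk's worth of memory is ever needed at a time, which is what decouples the queryable data size from the hardware's RAM.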