An interesting bit of editorial came across my desk today from none other than the big green company known as NVIDIA.  As a whole, NVIDIA has not been shy in their attacks on Intel, what with the “cans of whoop ass” and the like; but this time NVIDIA has taken it upon themselves to send the media a document detailing “questions” about Intel’s Larrabee on the very day the architecture gets detailed at SIGGRAPH. 

The document itself does raise some interesting points – most of which we have already brought forth in our own Larrabee editorial last week.  Some key quotes:

Intel claims the X86 instruction set makes parallel computing easier to accomplish but as any HPC developer will tell you, this hasn’t proven true with multi-core CPUs as applications struggle to scale from 2 to 4 cores. Now with even more cores, this same technology is claimed to solve parallel computing - we’d like to know what changed. After all, if it’ll be easy to program 32 cores with 16-wide SIMD, why aren’t more developers using quad cores with 4-wide SIMD? And if Ct is the answer, then why not use it on their CPUs?

Point taken – that is indeed a large part of the equation we don’t know yet: how the compiler and runtime technology will work to keep all of these x86-based cores properly fed. 

The NVIDIA computing architecture was specifically designed to support the C language - like any other processor architecture. Comments that the GPU is only partially programmable are incorrect - all the processors in the NVIDIA GPU are programmable in the C language. Given this, why is Intel calling the CUDA C-compiler a “new language”?

In truth I have never heard anyone outside of NVIDIA say that CUDA is truly the C language.  It is VERY VERY similar, but there are noticeable differences in syntax and structure.  I am not as code-savvy as some of our other readers, so I’ll have to gather more information on this to post later – but a rough sketch of the kinds of differences in question follows below.
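For illustration only, here is a minimal sketch of what “C with extensions” looks like in practice – a plain C loop next to a CUDA kernel doing the same work. The function names are made up for this example, but the __global__ qualifier, the built-in thread index variables and the <<<blocks, threads>>> launch syntax are the sorts of additions that go beyond standard C.

```cuda
#include <cuda_runtime.h>

// Plain C: scale an array on the CPU, one element at a time.
void scale_cpu(float *data, float factor, int n)
{
    for (int i = 0; i < n; ++i)
        data[i] *= factor;
}

// CUDA: the __global__ qualifier and the built-in blockIdx/blockDim/threadIdx
// variables are extensions layered on top of C.
__global__ void scale_gpu(float *data, float factor, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] *= factor;          // each thread handles one element
}

int main(void)
{
    const int n = 1 << 20;
    float *d_data;
    cudaMalloc(&d_data, n * sizeof(float));

    // The triple-angle-bracket launch configuration below is pure CUDA
    // syntax; a standard C compiler will not accept it.
    scale_gpu<<<(n + 255) / 256, 256>>>(d_data, 2.0f, n);
    cudaDeviceSynchronize();

    cudaFree(d_data);
    return 0;
}
```

Whether you call that a dialect of C or a new language is largely a matter of marketing – which is exactly the argument Intel and NVIDIA are having.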

Intel claims that the X86 base of Larrabee makes it seamless for developers. But with conflicting statements coming from Intel themselves on whether or not there will be a new programming model, there are several important questions.
- Will apps written for today’s Intel CPUs run unmodified on Larrabee?
- Will apps written for Larrabee run unmodified on today’s Intel multi-core CPUs?
- The SIMD part of Larrabee is different from Intel’s CPUs - so won’t that create compatibility problems?

This sums up many of our questions as well – and in truth we simply don’t have the answers to them yet.  There will obviously be SOME differences between code written for today’s mainstream CPUs and code written for the upcoming Larrabee technology, but I have to assume those changes will be much smaller than the difference between CUDA and standard x86 code.  The SSE sketch below shows one concrete place where compatibility could break down: hand-tuned vector code is tied to a specific SIMD width. 
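As a hedged illustration (this is generic host-side C, not anything from Intel’s or NVIDIA’s materials), consider a routine written with SSE intrinsics. The 4-float vector width is baked directly into the code, so moving it to the 16-wide vector unit NVIDIA’s questions describe for Larrabee would mean rewriting it against a different instruction set, or relying on a compiler or runtime to do the widening.

```cuda
#include <xmmintrin.h>   // SSE intrinsics: 128-bit registers, four floats at a time

// Hand-written 4-wide SSE loop: the vector width is baked into the code.
void scale_sse(float *data, float factor, int n)
{
    __m128 f = _mm_set1_ps(factor);             // broadcast factor into all 4 lanes
    int i = 0;
    for (; i + 4 <= n; i += 4) {
        __m128 v = _mm_loadu_ps(&data[i]);      // load 4 floats
        _mm_storeu_ps(&data[i], _mm_mul_ps(v, f));
    }
    for (; i < n; ++i)                          // scalar tail for the leftovers
        data[i] *= factor;
}
```

If Larrabee’s compilers can hide this kind of width difference behind ordinary scalar C code, the “seamless” claim gets a lot more credible; if developers have to target the new vector unit by hand, it is a new programming model in all but name.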

In fact, NVIDIA demonstrated CUDA for both GPU and CPU at our annual financial analyst day and ran an astrophysics simulation on an 8-core GPU inside a chipset, a G80-class GPU and a quad core CPU. Exactly the same binary program was used for the range of GPUs. And exactly the same source code for the CPU and GPU.

This is a great first step for NVIDIA and the adoption of the CUDA programming model. 

To date, Intel has not described Larrabee’s development environment. While focusing on one aspect of the architecture - the X86 instruction set - any differences or new challenges on top of the existing problems with multi-threading have yet to be revealed. With a new SSE architecture, new software layers to manage threads, perhaps another new language with Ct - developers are not simply using the X86 instruction set - they need to learn their way around a different computing architecture.

While true, the same can be said of NVIDIA’s CUDA technology.  It would seem that this editorial from NVIDIA is less about why Larrabee is no good and more about why Larrabee will face the same struggles that NVIDIA’s own GPU architectures have faced, and will continue to face, as they push into standard computing models. 

Larrabee is positioning itself as a great GPU. Yet, users and developers alike have expressed frustration and disappointment with their IGP technology for many years. Why hasn’t Intel used some of their investment and expertise to fix some of these problems for their 200M+ IGP customers? Also, will they be able to achieve the fine balance between both power and cost in graphics?

(Image: Larrabee architecture as we know it now)

Ray Tracing (RT). NVIDIA’s CUDA-enabled GPUs can do raytracing, and more. Even if Intel can do raytracing on Larrabee, how would developers achieve that on the hundreds of millions of Intel IGPs shipped each year? Is Intel abandoning its IGP customers?

This is probably the most frequently heard argument about Larrabee: “if their IGP sucks, why trust their new GPU?”  The truth is that VERY DIFFERENT teams work on each project, and Larrabee was a new development from the ground up.  If anything, NVIDIA should fear the future of Intel’s IGPs if Intel can build a successful GPU design in Larrabee; if Intel’s integrated chipsets become anywhere near as fast as NVIDIA’s, there will be a big problem for the green team.

One final comment from NVIDIA:

In summary, Intel knows that moving to powerful, floating point rich parallel architectures is the future – in so doing they will inevitably encourage more developers to develop on GPUs as they too will see this move from Intel as a major industry shift and will want to target the hardware where their software has the greatest chance of success. NVIDIA will have shipped over 150 million CUDA capable parallel processors by the time Larrabee ships and Intel knows they will hurt their CPU business by making this transition, but this is truly the era of visual computing and this shift is a necessary move.

This is probably the best thought-out statement in the paper – the fundamental way processing happens is changing dramatically; NVIDIA, AMD and Intel all know it, and Intel is developing Larrabee for just that reason.  NVIDIA might have a numbers advantage with their installed base of GeForce graphics cards, but they know that alone will not save them – which is why we have seen such ferocious marketing from NVIDIA in recent months. 

Overall, the editorial piece is a good marketing bump for NVIDIA and does bring up interesting questions – though most of them are issues we have already discussed here before. 

For those of you interested, you can download the whole PDF file here.