
The State of NVIDIA: For better or for worse

Author: Ryan Shrout
Manufacturer: NVIDIA

Here comes Fermi to save the day

Believe it or not, NVIDIA knows all of this already and has a plan
to correct it.  The first step in that process was the introduction of
the Fermi GPU architecture at the GPU Technology Conference.  I am not
going to get into the technical details of the architecture here, but
you should definitely check out my preview of the GPU posted earlier
in the week.

The key takeaway from the Fermi architecture unveiling is the
focus NVIDIA is placing on GPU computing, CUDA and open
standards.  For most of CUDA's existence, AMD and much of the media
have said that while it was an interesting idea, it wouldn't
mean much once open standards like OpenCL and DirectCompute started to
take hold.  While there is little doubt that OpenCL and DirectCompute
will be important going forward, NVIDIA's investment in CUDA and its
surrounding technologies is finally bearing fruit in the form of
developers and applications - lots and lots of each.



Fermi architecture

At the GPU Technology Conference in San Jose this week I saw
literally dozens of developers and their applications at work on the
GPU.  Demonstrations of everything from complex physics modeling to
holographic user interfaces and even network management were on
display, using the GPU for most or all of their parallel computing.
The majority of these programs are not really intended for consumer
markets yet - hence the show's focus on academia and professional
fields.

CUDA has definitely evolved in the years since the G80
architecture's introduction: from a compute language based on C, it
has grown into the umbrella term NVIDIA uses to cover the entire
hardware and software infrastructure that facilitates GPU computing.
On the hardware side, the CUDA elements point to the specific changes
NVIDIA made to improve GPU compute performance, while the software
side includes the company's implementations of OpenCL and
DirectCompute as well as compilers for languages like C and FORTRAN,
and even the upcoming C++ support found in Fermi.

That last factor could be one of the most important for NVIDIA and
CUDA - C++ is easily the most popular programming language on the PC
platform, and the ability to easily parallelize code to run on the
GPU, directly inside MS Visual Studio via the new NVIDIA Nexus tool, is
something that could revolutionize GPU computing.  And this feature,
along with the other language compilers, is something AMD simply
doesn't offer.
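
To give a sense of what that parallelization looks like in practice, here is a minimal CUDA C sketch - a hypothetical example of mine, not code from NVIDIA's materials - in which each GPU thread scales one element of an array:

    #include <cuda_runtime.h>

    // Kernel: each GPU thread handles exactly one array element.
    __global__ void scale(float *data, float factor, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
        if (i < n)
            data[i] *= factor;
    }

    int main()
    {
        const int n = 1 << 20;
        float *d_data;
        cudaMalloc((void **)&d_data, n * sizeof(float));
        // (a real program would copy input over with cudaMemcpy here)

        // Launch enough 256-thread blocks to cover all n elements.
        scale<<<(n + 255) / 256, 256>>>(d_data, 2.0f, n);
        cudaThreadSynchronize();  // wait for the kernel to finish (the CUDA 2.x-era call)

        cudaFree(d_data);
        return 0;
    }

Nothing here is exotic: the __global__ qualifier and the <<<blocks, threads>>> launch syntax are the main additions to ordinary C, which is exactly why bringing this workflow into Visual Studio via Nexus could matter so much.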

AMD's stance has been to wait and let the standards evolve around
them; NVIDIA's has been to create its own methods of enabling GPU
computing and then adopt the open standards as they become available.
Most people in the industry, myself included, originally thought NVIDIA
was wasting a lot of development effort and money on the initiative,
but given how long OpenCL has taken to develop, it appears NVIDIA has
been making the correct moves.  NVIDIA likes to say that it didn't
want to wait for GPU computing to evolve but rather to move the
industry ahead by force if necessary.  Although much of NVIDIA's
advantage COULD be lost if and when OpenCL and DirectCompute become
the dominant standards, I think the groundwork NVIDIA has laid over
these years will continue to give the company an edge.  The
demonstrations at the GPU Technology Conference have proven it.

Has NVIDIA forgotten about the gamers?



After finishing the week at GTC, one of the main questions we have
been getting from readers is whether NVIDIA has shifted its focus away
from the gamer and squarely onto the world of GPU computing.
While it is easy to get that impression from the information provided (or
not provided) during the show, NVIDIA assures us that is not the case.

NVIDIA's Drew Henry stated quite plainly to me that "Fermi was
designed first for the GeForce brand" but that the company spent
additional money and resources on all of the GPU computing features and
enhancements.  The obvious problem gamers saw in the news from
GTC was that there was no mention of new gaming or graphics features,
performance boosts or anything else that really targeted the enthusiast
market.  And, to be fair, this is the first time NVIDIA has gone this
route with a new architecture; traditionally the details of a GPU
design are revealed at the same time the GeForce product previews hit
the web.  This time we know a lot of details about the GPU (512
shaders, a 384-bit GDDR5 memory bus, etc.) but we will be waiting
months before products are available on the market.

Does that make the information NVIDIA provided last week a "paper
launch" - one of those dirty terms the hardware community has been
mostly without for the past several years?  AMD has already told us
that it was, but I would more readily equate this to a traditional
Intel CPU schedule, where we are given architectural details well in
advance of specific product news.  Had NVIDIA given us benchmarks
comparing Fermi to the HD 5870, or product details and pricing, then
yes, I would have chastised NVIDIA for attempting to disrupt sales of
the HD 5870 by introducing FUD.



This is basically all we know about how Fermi will perform for now

Even though details on the gaming benefits of the new Fermi
architecture were kept to a minimum, I was able to get some interesting
comments from NVIDIA executives about what we MIGHT see.  First, Fermi
is obviously going to be a DirectX 11 compliant GPU, and even though
NVIDIA is publicly pushing the idea that DX11 isn't a necessary feature
today, NVIDIA's Tony Tamasi definitely sees the advantages a DX11
configuration offers developers.  The ability to write a single
rendering engine for DX11 and have it run on DX10 hardware, using what
are essentially "capability bits" to disable certain features, will
likely drive DX11 adoption at a much quicker pace than we saw with
DX10.
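
Those "capability bits" surface in the Direct3D 11 API as feature levels: the application requests a device with a list of levels it can live with, the runtime returns the highest one the hardware supports, and the engine switches off its DX11-only paths on older cards.  A minimal sketch of that setup - my own illustration, not developer code shown at GTC - might look like this:

    #include <d3d11.h>

    // Create a device on the best available feature level, DX11 down to DX10.
    HRESULT CreateDeviceOnBestLevel(ID3D11Device **device,
                                    ID3D11DeviceContext **context,
                                    D3D_FEATURE_LEVEL *chosen)
    {
        const D3D_FEATURE_LEVEL levels[] = {
            D3D_FEATURE_LEVEL_11_0,  // full DX11: tessellation, compute shaders
            D3D_FEATURE_LEVEL_10_1,
            D3D_FEATURE_LEVEL_10_0,  // DX10-class hardware
        };
        return D3D11CreateDevice(
            NULL,                     // default adapter
            D3D_DRIVER_TYPE_HARDWARE,
            NULL, 0,                  // no software rasterizer, no flags
            levels, sizeof(levels) / sizeof(levels[0]),
            D3D11_SDK_VERSION,
            device, chosen, context);
    }

    // The engine then checks the level it actually received, for example:
    //   if (level < D3D_FEATURE_LEVEL_11_0) { /* skip the tessellation path */ }

One rendering engine, one device-creation path, and the feature set simply scales down on older hardware - which is why Tamasi expects DX11 adoption to outpace DX10's.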

Another feature we quizzed NVIDIA about was an answer to AMD's Eyefinity technology - a new feature I was quite impressed by in our recent Radeon HD 5870
review.  While NVIDIA wouldn't officially answer yes or no, they did
say they "have had that feature in our Quadro line for some time,
so it wouldn't take a lot of work to move that into our GeForce
lineup."  Sounds pretty promising, doesn't it?

So yes, if you are a gamer looking for information on how Fermi
will run your favorite titles or what it will offer over and above
GT200 or AMD's Evergreen parts, you are going to be waiting a bit
longer...

Chipsets, the other NVIDIA market

While Fermi was definitely the focus of the GPU Technology
Conference, we took the chance to pose some questions about ION and
other chipsets to the NVIDIA personnel on site.  ION has seen moderate
success in the world of low-power, low-cost computers, though I would
say the number of implementations using the ION chipset has been lower
than expected.  NVIDIA still points to pressure from Intel as the key
factor limiting ION adoption, which, even if true, won't change a
thing for NVIDIA in terms of market share or profits.



The original ION reference board

The future of ION is in doubt as well; remember that Intel will be pushing for further integration with the upcoming Pine Trail release, which moves a basic graphics core onto the CPU die.  When that occurs,
system integrators like ASUS and HP might find it even more of a
stretch to spend the extra money on the ION chipset.  NVIDIA's plan
for countering that trend is to offer a discrete GPU under the ION
brand, removing the MCP features and thereby lowering costs and
probably power requirements.  Whether this discrete part is what will
materialize as "ION 2" is yet unknown.

Finally, we have an update pending on the killer feature that
would make ION-based netbooks and nettops truly irresistible:
GPU-accelerated Flash.  We saw this feature first previewed at Computex last June, and even though announcements on this new
player revision are coming VERY SOON, it will likely be sometime in Q2
2010 before Adobe Flash Player includes GPU acceleration out of the
box.  I know many people who would love to put an EeeBox in their home
theater to run Hulu reliably - I am one of them and eagerly await the
day we can do so!



The last of the nForce chipsets?

The other interesting news from GTC in the world of chipsets was
the revelation that NVIDIA has canceled its plans to make any more
Intel chipsets.  Last year, when NVIDIA told us it would not be
making a chipset for the Nehalem/Bloomfield processors (and that it
would be licensing SLI on X58 motherboards), the company reiterated its
commitment to using its DMI bus license to make an nForce chipset for
the Lynnfield processors the following year.  Well, Lynnfield is out,
and with no news from NVIDIA on the subject, we were beginning to
suspect something was afoot.  NVIDIA says that pressure from Intel's
legal department, which disputes the validity of NVIDIA's DMI license,
is really what kept the company out of the space.  And considering our
slight disappointment with the feature set of the P55 chipset for
Lynnfield CPUs (no SATA 6G or PCIe 2.0 lanes, etc.), we are going to
miss having an NVIDIA option in this field.

Looking for information on where NVIDIA chipsets might be headed
on the AMD platform?  Put simply, NVIDIA states that "there is no demand
for our chipsets in the AMD platform."  With these revelations, it now
seems all but assured that NVIDIA's chipset division is essentially
closed.

UPDATE: An NVIDIA spokesperson contacted me in regards to the AMD chipset issue - they tell me that while demand for enthusiast-level chipsets is basically gone, NVIDIA is doing a high volume of MCP 61-based sales.  So, to be fair, NVIDIA's chipset division might not be "essentially closed," but chipset development at NVIDIA essentially is.

UPDATE 2 (10/8/09): We received official word from NVIDIA on this issue after our news was reported on by numerous outlets.  I wrote up a short analysis of the company's thoughts, with some additional editorial on the chipset situation, that I think is worth visiting.  You can read that news post here.

Feel free to follow me on Twitter (http://twitter.com/ryanshrout) to get the latest info on this story and any updates we make.

Closing Thoughts

There are still a ton of questions about NVIDIA following this week,
as the company is striking contradictory tones in many areas.
NVIDIA says that Fermi was designed first for the GeForce brand, yet
its representatives were frequently caught saying "NVIDIA is on a
fundamentally different path than AMD" in terms of GPUs.  That
obviously refers to NVIDIA's commitment to the GPU computing fields,
developer tools and features like C++ support, compared to AMD's desire
to wait for the open standards to become mainstream.  NVIDIA says that
it has not forgotten about the PC gamer, yet its stance of not allowing
PhysX acceleration on systems with both an NVIDIA and an AMD GPU points
in the other direction.

It is still very possible that the Fermi architecture will be a
gaming powerhouse, able to best AMD's Evergreen line of GPUs in the
gaming tests crucial to success in the enthusiast market.  Even
if that is the case, it is possible that Fermi marks the beginning of
NVIDIA looking at the bigger picture; as the discrete PC gaming market
shrinks, the need for profits is a driving force behind getting the
Quadro and Tesla brands into the HPC markets sooner rather than
later.  NVIDIA still knows that it depends on the gaming market for
the funds to develop these other products, but that may soon change if
the work the company has put into GPU computing pays off.

There are some issues I think NVIDIA needs to address immediately
for current gaming users, but I still feel it is too early to be overly
excited, or disappointed, about what Fermi will bring to the gaming
community.  We will know more soon, but whether it will be soon enough
to keep AMD from gaining a lead in the enthusiast market remains to be
seen.

If you would like to leave a comment or question about this article, please head into the PC Perspective Forums to do so!
