
NVIDIA Mobility GeForce 200M Updates - GDDR5, 40nm and DX10.1

Subject: Mobile
Manufacturer: NVIDIA

Mobility Gaming update

Sure, there is a lot to be cynical about when discussing the latest mobility GPUs announced by NVIDIA today, but there is also a bit to be interested in.  First, let's cut to the chase and show you the table of specifications, then analyze a bit:


These five new parts join the previously released GeForce GTX 280M and GTX 260M to complete the 200M series from NVIDIA.  What is most interesting is that these five new GPUs are based on the GT200 architecture used in the desktop GTX 280/GTX 260 graphics cards rather than the aging G92 architecture that was the source of most of the goodness inside the GTX 280M and 260M mobility parts released last March.  NVIDIA is quick to mention that while these new GPUs are BASED on the GT200 architecture, they are NOT explicitly GT200 parts.  The company is apparently quite proud of the enhancements and changes made to the design, as well as new features like GDDR5 and DX10.1 support, detailed below.  However, it makes for a confusing naming and product structure when the current "high end" mobility GPUs are actually based on the previous-generation graphics architecture while the mid-range parts are designed around the most recent one.  In the end, it is performance that consumers really care about, so if the new GF 200M parts fall in the right price/performance zone then NVIDIA should do well with them.

You can clearly see that NVIDIA has created a scaled lineup of GPUs for the notebook segment, ranging from the 96 stream processor GTS 260M down to the 16 SP G210M that NVIDIA claims has "up to 10x better performance than integrated graphics" - we assume they mean Intel's integrated graphics in this case.  Taking that lowest-end part out of the picture, the remaining GT and GTS parts are pretty similar: they all run at either a 500 or 550 MHz core clock speed with scaled shader clock speeds (NVIDIA labels it the processor clock) of up to 1375 MHz. 


For your reference, the GeForce GTX 280M and GTX 260M specs are as follows:

  • GTX 280M: 55nm, 562 GFLOPS, 128 SPs, 585 MHz core, 1463 MHz shader, 950 MHz GDDR3, 256-bit bus, G92-based architecture

  • GTX 260M: 55nm, 462 GFLOPS, 112 SPs, 550 MHz core, 1375 MHz shader, 950 MHz GDDR3, 256-bit bus, G92-based architecture
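As a quick sanity check, both quoted GFLOPS figures are consistent with NVIDIA's usual counting for this era of parts: 3 floating-point operations per stream processor per shader clock (a dual-issue MADD plus MUL).  A minimal sketch:

```python
# Sanity-check NVIDIA's quoted GFLOPS ratings.  NVIDIA's marketing FLOPS
# for G92/GT200-era parts count 3 floating-point ops per stream processor
# per shader clock (dual-issue MADD + MUL).
def gflops(stream_processors, shader_clock_mhz, flops_per_sp=3):
    return stream_processors * shader_clock_mhz * flops_per_sp / 1000.0

print(round(gflops(128, 1463)))  # GTX 280M -> 562
print(round(gflops(112, 1375)))  # GTX 260M -> 462
```

Both results match the numbers in NVIDIA's slides above, so the new GF 200M ratings are presumably computed the same way.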

(Side note: while looking for those GTX part specs in the original GF 200M launch slides, I found a several-slide-long section going over why moving to GDDR5 memory was a bad idea for ATI's Mobility Radeon HD 4870 GPU.  NVIDIA now says that it is adding GDDR5 to an architecture because it "makes sense to do so" based on pricing and performance considerations.)



GeForce GTX 280M die shot

While both the GT and GTS lines support 1GB of frame buffer, the GTS line integrates a new GDDR5 memory controller - the first we have seen from NVIDIA on any product thus far.  Even the company's highest-end GT200-based desktop graphics cards are using GDDR3-based memory controllers.  With that move comes a dramatic increase in memory speed - from 800 MHz on the GT line to as high as 1800 MHz on the GTS 260M.  Having never tested an NVIDIA GPU with a GDDR5 memory controller before, it is hard for me to predict how the move from GDDR3 to the newer memory standard will affect performance - it greatly depends on the processing power and efficiency of the GPU's shader processors themselves.  Hopefully we'll get to test a few GF 200M options in the near future to get to the bottom of it. 
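For context on what those memory clocks mean for actual throughput, peak bandwidth is just the effective transfer rate times the bus width.  A back-of-the-envelope sketch - the 950 MHz / 256-bit GDDR3 figures come from the GTX 280M specs above, while the 128-bit bus assumed for the GTS 260M is NOT stated in the text here and vendors are not always consistent about which GDDR5 clock they quote:

```python
# Peak memory bandwidth = effective transfer rate x bus width in bytes.
def bandwidth_gb_s(mem_clock_mhz, bus_width_bits, transfers_per_clock=2):
    return mem_clock_mhz * transfers_per_clock * bus_width_bits / 8 / 1000.0

# GTX 280M from the specs above: 950 MHz GDDR3 on a 256-bit bus.
print(bandwidth_gb_s(950, 256))   # -> 60.8 GB/s

# GTS 260M: 1800 MHz GDDR5.  Assuming a 128-bit bus (an assumption, not
# stated in the table excerpted here) and the usual double-rate quoting:
print(bandwidth_gb_s(1800, 128))  # -> 57.6 GB/s
```

If those assumptions hold, the GDDR5 part reaches nearly the bandwidth of the old 256-bit GDDR3 flagship on half the bus width - which is presumably the whole point of the move.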

There are some more surprises to go along with these new GF 200M parts besides the first GDDR5 memory controller on an NVIDIA GPU: they are also the first GPUs built on TSMC's 40nm technology and the first from NVIDIA to offer DirectX 10.1 support.  I can only guess that TSMC has finally gotten a handle on the 40nm process, as up until recently everyone at NVIDIA wanted to tell you just how BAD the 40nm process actually was and what a mistake it was for AMD to bring out the Radeon HD 4770 based on it.  Obviously for mobility parts, the smaller process technology is supposed to allow for smaller parts with lower power consumption - an area of extreme importance in this market.  As for NVIDIA's first DX10.1 implementation - we don't yet know much about it.  But again, based on how recently and how often NVIDIA staff have told me how useless DX10.1 is for gaming, I can't imagine them eating enough crow now to really start pushing the feature.  Everyone has to have some moral high ground, right?


Comparison between the die sizes on the new 40nm GF 200M GPUs - estimates only

One interesting note about these GPUs that you can see simply by looking at the table NVIDIA provided above: power leakage might still be an issue on the 40nm process.  If you look at the GT 230M and GT 240M, both of those parts share a TDP of 23W yet the GT 240M has a 10% higher performance rating in GFLOPS.  That simply means that NVIDIA is binning parts based solely on those specifications and that system builders get no platform benefits (being able to make a smaller, lighter design) from going with the GT 230M rather than the 240M.  Heat will apparently be the same, so why not pick the higher performing part?  Obviously that depends on NVIDIA's menu pricing...  Also interesting is the difference between the GTS 250M and 260M parts - while the performance difference of 10% is the same as between the GT parts, the TDP is actually 35% HIGHER for that extra performance.  That is quite a tradeoff for an ODM and I'll be curious how many are willing to take that risk.
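Put in efficiency terms, those two ratios alone imply the GTS 260M gives up a meaningful chunk of performance per watt versus the GTS 250M.  A back-of-the-envelope sketch using only the 10% and 35% figures quoted above:

```python
# Relative performance-per-watt of the GTS 260M vs. the GTS 250M,
# using only the ratios quoted above: +10% performance, +35% TDP.
perf_ratio = 1.10
tdp_ratio = 1.35

perf_per_watt_ratio = perf_ratio / tdp_ratio  # ~0.815
print(f"{(1 - perf_per_watt_ratio) * 100:.1f}% lower perf/watt")  # -> 18.5% lower
```

Roughly an 18.5% efficiency penalty for the top part - exactly the kind of tradeoff a notebook ODM has to weigh against battery life and cooling budgets.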

NVIDIA claims to have "over 100 design wins" for the new GeForce 200M GPUs - good news for a company that has had some issues with mobility GPUs in the past.  If companies like Dell and HP are going to continue to trust NVIDIA with their flagship lines of notebooks, NVIDIA might actually have a product worth looking at.

UPDATE: The more I learn about this mobility GF 200M launch, the more intrigued I become.  While at first you might assume (as I did) that NVIDIA has simply taken a GT200 architecture GPU and slapped on DX10.1 and GDDR5 support, it looks like NVIDIA has done a LOT more work internally on the design than that.  NVIDIA reps have mentioned phrases like "highly optimized shaders" when asked why they would bother moving to GDDR5 now when the desktop GT200-based parts are doing just fine with their GDDR3 memory architecture.  Usually NVIDIA's PR and technical teams spend a lot less time informing the press about architectures on the mobility side of the GPU world, so we are hoping to see a desktop variant of this updated and tweaked architecture soon so we can grab some more details on the changes. 
