Was leading with a low end Maxwell smart?

Subject: Graphics Cards | February 19, 2014 - 01:43 PM |
Tagged: geforce, gm107, gpu, graphics, gtx 750 ti, maxwell, nvidia, video

We finally saw Maxwell yesterday, with a new design for the SMs called SMM, each of which consists of four blocks of 32 dedicated, non-shared CUDA cores.  In theory that should allow NVIDIA to pack more SMMs onto the chip than they could with the previous SMX units.  This new design was released on a $150 card, which means we don't really get to see what it is fully capable of yet.  At that price it competes with AMD's R7 260X and R7 265, at least if you can find them at their MSRPs and not at inflated cryptocurrency-mining prices.  Legit Reviews compared the performance of two overclocked GTX 750 Ti cards to those two AMD cards, as well as to the previous generation GTX 650 Ti Boost, across a wide selection of games to see how it stacks up performance-wise, which you can read here.
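As a rough sketch of what that reorganization means for core counts, here is a quick back-of-envelope comparison in Python; the per-SMM figures come straight from the paragraph above, and the 192-core figure is the commonly cited size of a Kepler SMX.

```python
# Back-of-envelope look at the SMM layout described above.
# Each Maxwell SMM holds four blocks of 32 dedicated CUDA cores;
# a Kepler SMX held 192 shared CUDA cores.
BLOCKS_PER_SMM = 4
CORES_PER_BLOCK = 32
CORES_PER_KEPLER_SMX = 192

cores_per_smm = BLOCKS_PER_SMM * CORES_PER_BLOCK
print(f"CUDA cores per SMM: {cores_per_smm}")  # 128
# For the same core budget, NVIDIA gets 1.5x as many scheduling partitions.
print(f"SMMs per SMX's worth of cores: {CORES_PER_KEPLER_SMX / cores_per_smm}")  # 1.5
```

In other words, the same silicon budget buys half again as many independently scheduled units, which is where much of the claimed efficiency gain comes from.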

That is of course after you read Ryan's full review.

nvidia-geforce-gtx750ti-645x399.jpg

"NVIDIA today announced the new GeForce GTX 750 Ti and GTX 750 video cards, which are very interesting to use as they are the first cards based on NVIDIA's new Maxwell graphics architecture. NVIDIA has been developing Maxwell for a number of years and have decided to launch entry-level discrete graphics cards with the new technology first in the $119 to $149 price range. NVIDIA heavily focused on performance per watt with Maxwell and it clearly shows as the GeForce GTX 750 Ti 2GB video card measures just 5.7-inches in length with a tiny heatsink and doesn't require any internal power connectors!"

Here are some more Graphics Card articles from around the web:

Graphics Cards

Author:
Manufacturer: NVIDIA

What we know about Maxwell

I'm going to go out on a limb and guess that many of you reading this review would not have normally been as interested in the launch of the GeForce GTX 750 Ti if a specific word hadn't been mentioned in the title: Maxwell.  It's true, the launch of GTX 750 Ti, a mainstream graphics card that will sit in the $149 price point, marks the first public release of the new NVIDIA GPU architecture code named Maxwell.  It is a unique move for the company to start at this particular point with a new design, but as you'll see in the changes to the architecture as well as the limitations, it all makes a certain bit of sense.

For those of you that don't really care about the underlying magic that makes the GTX 750 Ti possible, you can skip this page and jump right to the details of the new card itself.  There I will detail the product specifications, performance comparison and expectations, etc.

If you are interested in learning what makes Maxwell tick, keep reading below.

The NVIDIA Maxwell Architecture

When NVIDIA first approached us about the GTX 750 Ti, they were very light on details about the GPU powering it.  Even though the card was confirmed to be built on Maxwell, the company hadn't yet decided whether it would do a full architecture deep dive with the press.  In the end they went somewhere in between the full detail we are used to getting with a new GPU design and that original, passive stance.  It looks like we'll have to wait for the enthusiast-class GPU release to really get the full story, but I think the details we have now paint the picture quite clearly.  

During the course of designing the Kepler architecture, and then implementing it in the Tegra line in the form of the Tegra K1, NVIDIA's engineering team developed a better sense of how to improve the performance and efficiency of the basic compute design.  Kepler was a huge leap forward compared to the likes of Fermi, and Maxwell is promising to be equally revolutionary.  NVIDIA wanted to address GPU power consumption as well as find ways to extract more performance from the architecture at the same power levels.  

The logic of the GPU design remains similar to Kepler's.  A Graphics Processing Cluster (GPC) houses Streaming Multiprocessors (SMs) built from a large number of CUDA cores (stream processors).  

block.jpg

GM107 Block Diagram

Readers familiar with the look of Kepler GPUs will instantly see changes in the organization of the various blocks of Maxwell.  There are more divisions, more groupings, and fewer CUDA cores "per block" than before.  As it turns out, this reorganization is a key part of how NVIDIA was able to improve performance and power efficiency with the new GPU.  

Continue reading our review of the NVIDIA GeForce GTX 750 Ti and Maxwell Architecture!!

Author:
Subject: Editorial
Manufacturer: NVIDIA

It wouldn’t be February if we didn’t hear the Q4 FY14 earnings from NVIDIA!  NVIDIA does have a slightly odd way of expressing their quarters, but in the end it is all semantics.  They are not in fact living in the future, but I bet their product managers wish they could peer into the actual Q4 2014.  No, the whole FY14 thing relates back to when they made their IPO and how they started reporting.  To us mere mortals, Q4 FY14 actually represents Q4 2013.  Clear as mud?  Lord love the Securities and Exchange Commission and their rules.

633879_NVLogo_3D.jpg

The past quarter was a pretty good one for NVIDIA.  They came away with $1.144 billion in gross revenue and had a GAAP net income of $147 million, beating the Street’s estimate by a pretty large margin.  In response, NVIDIA’s stock has gone up in after-hours trading.  This has certainly been a trying year for NVIDIA and the PC market in general, but they seem to have come out on top.

NVIDIA beat estimates primarily on the strength of the PC graphics division.  Many were focusing on the apparent decline of the PC market and assumed that NVIDIA would be dragged down by lower shipments.  On the contrary, it seems as though the gaming market and add-in sales on the PC helped to solidify NVIDIA’s quarter.  We can look at a number of factors that likely contributed to this uptick for NVIDIA.

Click here to read the rest of NVIDIA's Q4 FY2014 results!

A graphical description of market woes from Jon Peddie

Subject: General Tech, Graphics Cards | February 25, 2013 - 10:32 AM |
Tagged: jon peddie, graphics, market share

If last week's report from Jon Peddie Research on sales of all add-in and integrated graphics had you worried, this week's news is not going to boost your confidence.  This week the report focuses solely on add-in boards, and the drop is dramatic: Q4 2012 sales plummeted just short of 20% compared to Q3 2012.  Looking at the entire year, sales dropped 10% overall as AMD's APUs, and Intel's, make serious inroads into the mobile market, with many notebooks being sold without a discrete GPU.  The losses are coming from the mainstream market; enthusiast-level GPUs actually saw a slight increase in sales, but that small volume is utterly drowned out by the mainstream segment.  You can check out the full press release here.

PR_108.jpg

"JPR found that AIB shipments during Q4 2012 behaved according to past years with regard to seasonality, but the drop was considerably more dramatic. AIB shipments decreased 17.3% from the last quarter (the 10 year average is just -0.68%). On a year-to-year comparison, shipments were down 10%."

Here is some more Tech News from around the web:

Tech Talk

Author:
Manufacturer: PC Perspective

In case you missed it...

UPDATE: We have now published full details on our Frame Rating capture and analysis system as well as an entire host of benchmark results.  Please check it out!!

In one of the last pages of our recent NVIDIA GeForce GTX TITAN graphics card review, we included an update on our Frame Rating graphics performance metric that details the testing method more thoroughly and showed results for the first time.  Because it was buried so far into that article, I thought it was worth posting the information here as a separate piece to solicit feedback from readers and help guide the discussion forward without it getting lost in the TITAN shuffle.  If you already read that page of our TITAN review, nothing new is included below. 

I am still planning a full article based on these results sooner rather than later; for now, please leave me your thoughts, comments, ideas and criticisms in the comments below!


Why are you not testing CrossFire??

If you haven't been following our sequence of stories that investigates a completely new testing methodology we are calling "frame rating", then you are really missing out.  (Part 1 is here, part 2 is here.)  The basic premise of Frame Rating is that the performance metrics that the industry is gathering using FRAPS are inaccurate in many cases and do not properly reflect the real-world gaming experience the user has.

Because of that, we are working on another method that uses high-end dual-link DVI capture equipment to directly record the raw output from the graphics card with an overlay technology that allows us to measure frame rates as they are presented on the screen, not as they are presented to the FRAPS software sub-system.  With these tools we can measure average frame rates, frame times and stutter, all in a way that reflects exactly what the viewer sees from the game.

We aren't ready to show our full sets of results yet (soon!), but the problem is that AMD's CrossFire technology shows severe performance degradation when viewed under the Frame Rating microscope that does not show up nearly as dramatically under FRAPS.  As such, I decided that it was simply irresponsible of me to present data to readers that I would then immediately refute on the final pages of this review (Editor: referencing the GTX TITAN article linked above.) - it would be a waste of time for the reader, and people who skip straight to the performance graphs wouldn't know our theory on why the results displayed were invalid.

Many other sites will use FRAPS, will use CrossFire, and there is nothing wrong with that at all.  They are simply presenting data that they believe to be true based on the tools at their disposal.  More data is always better. 

Here are those results and our discussion.  I decided to use the most popular game out today, Battlefield 3; please keep in mind this is NOT the worst-case scenario for AMD CrossFire by any means.  I tested the Radeon HD 7970 GHz Edition in single-card and CrossFire configurations, as well as the GeForce GTX 680 in single-card and SLI configurations.  To gather results I used two processes:

  1. Run FRAPS while running through a repeatable section, recording frame rates and frame times for 60 seconds.
  2. Run our Frame Rating capture system with a special overlay that allows us to measure frame rates and frame times in post-processing.

Here is an example of what the overlay looks like in Battlefield 3.

fr_sli_1.jpg

Frame Rating capture on GeForce GTX 680s in SLI - Click to Enlarge

The column on the left is actually the visuals of an overlay that is applied to each and every frame of the game early in the rendering process.  A solid color is added at the PRESENT call (more details to come later) for each individual frame.  As you know, when you are playing a game, multiple frames can make it onto any single 60 Hz refresh cycle of your monitor, and because of that you get a succession of colors down the left-hand side.

By measuring the pixel height of those colored columns, and knowing the order in which they should appear beforehand, we can gather the same data that FRAPS does but our results are seen AFTER any driver optimizations and DX changes the game might make.
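To make the idea concrete, here is a hypothetical sketch of that measurement in Python.  The function name, color values, and capture dimensions are all illustrative rather than our actual tooling; the only assumptions are a 60 Hz capture and one overlay color value read per scanline.

```python
from itertools import groupby

REFRESH_MS = 1000.0 / 60.0  # one 60 Hz display refresh lasts ~16.67 ms

def band_times_ms(column_colors, screen_height):
    """Estimate on-screen time of each game frame from the overlay column.

    column_colors: one overlay color value per captured scanline, top to
    bottom.  Consecutive scanlines of the same color form one band, i.e.
    one game frame's share of this refresh cycle.
    """
    times = []
    for color, run in groupby(column_colors):
        band_height = len(list(run))
        times.append((color, band_height / screen_height * REFRESH_MS))
    return times

# Illustrative capture: three frames shared one refresh of a 1080-line screen.
column = ["red"] * 540 + ["lime"] * 500 + ["blue"] * 40
for color, ms in band_times_ms(column, 1080):
    print(f"{color}: {ms:.2f} ms on screen")
```

The band times of one refresh always sum to the refresh interval, which is what lets this method reconstruct per-frame timing purely from the captured video.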

fr_cf_1.jpg

Frame Rating capture on Radeon HD 7970 CrossFire - Click to Enlarge

Here you see a very similar screenshot running on CrossFire.  Notice the thin silver band between the maroon and purple?  That is a complete frame according to FRAPS and most reviews.  Not to us; we think that rendered frame is almost useless. 

Continue reading our 3rd part in a series of Frame Rating and to see our first performance results!!

Author:
Manufacturer: PC Perspective

Another update

In our previous article and video, I introduced you to our upcoming testing methodology for evaluating graphics cards based not only on frame rates but also on frame smoothness and the efficiency of those frame rates.  I showed off some of the new hardware we are using for this process and detailed how direct capture of graphics card output allows us to find interesting frame and animation anomalies using some Photoshop still frames.

d31.jpg

Today we are taking that a step further and looking at a couple of captured videos that demonstrate a "stutter" and walking you through, frame by frame, how we can detect, visualize and even start to measure them.

dis1.jpg

This video takes a couple of examples of stutter in games, DiRT 3 and Dishonored to be exact, and shows what they look like in real time, at 25% speed and then finally in a much more detailed frame-by-frame analysis.

 

Video Loading...

 

Obviously these are just a couple of instances of what stutter looks like, and there are oftentimes less apparent in-game stutters that are even harder to see in video playback.  Not to worry: this capture method is capable of catching those issues as well, and we plan on diving into that "micro" level shortly.

We aren't going to start talking about whose card and what driver is being used yet and I know that there are still a lot of questions to be answered on this topic.  You will be hearing more quite soon from us and I thank you all for your comments, critiques and support.

Let me know below what you thought of this video and any questions that you might have. 

Sapphire CES 2013: Mini-PCs are the Future

Subject: Shows and Expos | January 9, 2013 - 06:11 PM |
Tagged: Vapor X, sapphire, PCs, graphics, APU, amd

 

Sapphire was a quick trip with a few interesting things to show off.  At the moment we are in a quiet period with AMD and NVIDIA graphics releases.  While AMD has released a few of their mobile based 8000 series parts, we are still not expecting a major desktop refresh anytime soon.  This is somewhat bittersweet for the graphics partners.  On one hand they have more time to differentiate their products and create more value for their consumers.  On the other hand there is no major push with new technology that will help the bottom line.

sapp_01.JPG

The company is not only involved with graphics, but also has a long history of producing motherboards.  They offer products for both AMD and Intel platforms, but their primary focus is the APU market.  Sapphire's FM2 lineup is well fleshed out, with A85X, A75, and A55 products.  Sapphire finds it slightly easier to compete in the AMD market than to go up against the big players in the larger and potentially more lucrative Intel market.

The area where they are hoping to experience the most growth is the micro PC market.  These are very small “desktop” style products based on mobile parts.  They are robust little units, though they do not ship with an OS or room for an optical drive.  Because Sapphire is such a strong AMD partner, they are primarily focusing on APUs in this market as well.

sapp_02.JPG

The Edge VS8 is the top product for Sapphire in this market.  It is based on a mobile Trinity APU that is quad core enabled running at 1.6 GHz.  The graphics portion is the 7600G, which looks to feature the entire complement of GCN units but obviously clocked down to save on power.  The VS4 features Trinity but with a dual core processor running at 1.9 GHz.

The lower end Edge HD series is a slightly older unit, and the HD3 runs the last generation Llano processor.  They also feature an Intel based HD4 that runs the Celeron 897 processor.

These PCs are shipped without operating systems and can also be bought in a barebones state.  For example the VS8 comes standard with 4GB of memory and a 320 GB HD (spindle based).  By buying a barebones version a user can easily stack as much memory as possible in the machine as well as use a SSD to give that much more performance.

sapp_03.JPG

Sapphire continues to offer their entire line of AMD based graphics cards and is really pushing their Vapor-X technology, which leads us to the next product: Sapphire will start bringing CPU cooler designs to market using that same Vapor-X technology.  Vapor chamber cooling will be coming to the CPU market very soon, and at competitive prices.

Coverage of CES 2013 is brought to you by AMD!

PC Perspective's CES 2013 coverage is sponsored by AMD.

Follow all of our coverage of the show at http://pcper.com/ces!

Author:
Manufacturer: PC Perspective

A change is coming in 2013

If the new year will bring us anything, it looks like it might be the end of using "FPS" as the primary measuring tool for graphics performance on PCs.  A long, long time ago we started with simple "time demos" that recorded rendered frames in a game like Quake and then played them back as quickly as possible on a test system.  The lone result was given as time, in seconds, and was then converted to an average frame rate having known the total number of frames recorded to start with.

More recently we saw a transition to frame rates over time and the advent of frame time graphs like the ones we have been using in our graphics reviews on PC Perspective. This expanded the amount of data required to get an accurate picture of graphics and gaming performance, but it was indeed more accurate, giving us a clearer image of how GPUs (and CPUs and systems, for that matter) performed in games.

And even though the idea of frame times has been around just as long, not many people were interested in that level of detail until this past year.  A frame time is the amount of time each frame takes to render, usually listed in milliseconds, and can range from 5 ms to 50 ms depending on performance.  For reference, 120 FPS equates to an average of 8.3 ms per frame, 60 FPS to 16.7 ms, and 30 FPS to 33.3 ms.  But rather than averaging those out over each second, what if you looked at each frame individually?
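Those conversions all fall out of a single formula; here is a minimal sketch in Python (the function name is ours, for illustration only):

```python
# The FPS/frame-time conversions quoted above, computed directly:
# a steady rate of N frames per second means each frame takes 1000/N ms.
def frame_time_ms(fps):
    return 1000.0 / fps

for fps in (120, 60, 30):
    print(f"{fps} FPS -> {frame_time_ms(fps):.1f} ms per frame")
```

The point of frame-time analysis is that a single 50 ms frame hiding inside an otherwise-smooth second still averages out to a healthy-looking FPS number, even though the hitch is plainly visible to the player.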

Video Loading...

Scott over at Tech Report started doing that this past year and found some interesting results.  I encourage all of our readers to follow up on what he has been doing as I think you'll find it incredibly educational and interesting. 

Through emails and tweets, many PC Perspective readers have been asking for our take on it and why we weren't yet testing graphics cards in the same fashion.  I've stayed quiet about it simply because we were working on quite a few different angles on our side and I wasn't ready to share results.  I am still not ready to share the bulk of our information, but I am ready to start the discussion, and I hope our community finds it compelling and offers some feedback.

card.jpg

At the heart of our unique GPU testing method is this card, a high-end dual-link DVI capture card capable of handling 2560x1600 resolutions at 60 Hz.  Essentially this card will act as a monitor to our GPU test bed and allow us to capture the actual display output that reaches the gamer's eyes.  This method is the best possible way to measure frame rates, frame times, stutter, runts, smoothness, and any other graphics-related metrics.

Using that recorded footage, sometimes reaching 400 MB/s of consistent writes at high resolutions, we can then analyze the frames one by one, with the help of some additional software.  There are a lot of details that I am glossing over, including the need for perfectly synced frame rates and absolutely zero dropped frames during recording and analysis, but trust me when I say we have been spending a lot of time on this. 
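As a rough sanity check on those write rates, here is a back-of-envelope Python sketch of uncompressed capture bandwidth, assuming 24-bit color at 60 Hz; actual rates will depend on the capture card's pixel format and any compression in the recording pipeline.

```python
# Back-of-envelope capture bandwidth: width x height pixels, 3 bytes of
# color per pixel at 24-bit, 60 frames captured per second.
def capture_rate_mb_s(width, height, bytes_per_pixel=3, fps=60):
    return width * height * bytes_per_pixel * fps / 1e6

for w, h in ((1920, 1080), (2560, 1600)):
    print(f"{w}x{h}: ~{capture_rate_mb_s(w, h):.0f} MB/s uncompressed")
```

At 1920x1080 this works out to roughly 373 MB/s of raw pixel data, which is in the same neighborhood as the sustained write rates quoted above; at 2560x1600 the raw figure roughly doubles.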

Continue reading our editorial on Frame Rating: A New Graphics Performance Metric.

Jon Peddie has good news for NVIDIA in Q3 2012

Subject: Chipsets | November 26, 2012 - 10:06 AM |
Tagged: jon peddie, Q3 2012, graphics, market share

Jon Peddie Research has released its findings for the graphics market in Q3 2012, with bad news for the market overall, though not so bad for NVIDIA.  The downward trend in PC sales has had an effect on the broader graphics market, with the number of units sold dropping 5.2% from this time last year and only NVIDIA seeing a rise in units sold.  AMD saw a 10.7% drop in the number of units shipped, specifically a 30% drop from last quarter in desktop APUs and just under 5% in mobile processors.  Intel's overall sales dropped 8%, with both segments falling roughly equally, but NVIDIA's strictly discrete GPU business saw a 28.3% gain in desktop market share and 12% in notebooks compared to last quarter.

Worth noting is what JPR includes in this research beyond what we used to think of as the graphics market.  Any x86-based processor with a GPU is included, from tablets to desktops, as are IGPs and discrete cards; ARM-based devices, cell phones, and all server chips are excluded.

JPR_Q32012.png

"The news was terrific for Nvidia and disappointing for the other major players. From Q2 to Q3 Intel slipped in both desktop (7%) and notebook (8.6%). AMD dropped (2%) in the desktop, and (17%) in notebooks. Nvidia gained 28.3% in desktop from quarter to quarter and jumped almost 12% in the notebook segment.

This was not a very good quarter; shipments were down -1.45% on a Qtr-Qtr basis, and -10.8% on a Yr-Yr basis. We found that graphics shipments during Q3'12 slipped from last quarter -1.5% as compared to PCs, which grew slightly by 0.9% overall (however more GPUs shipped than PCs due to double attach). GPUs are traditionally a leading indicator of the market, since a GPU goes into every system before it is shipped, and most of the PC vendors are guiding down for Q4."

Here is some more Tech News from around the web:

Tech Talk

Computex: AMD Launching Tahiti 2 Graphics Cards Next Week

Subject: Graphics Cards | June 8, 2012 - 10:23 AM |
Tagged: tahiti, graphics, gpu, computex, binning, amd, 7970 ghz edition

AMD is having a string of successes with its 28nm 7000 series graphics cards. While it was dethroned by NVIDIA’s GTX 680, the AMD Radeon HD 7970 is easier to get a hold of. It certainly seems like the company is having a much easier time manufacturing its GPUs than NVIDIA is with its Kepler cards. AMD has been cranking out HD 7970s for a few months now and has gotten the binning process down to the point that it is getting a good number of chips with a healthy bit of headroom over the 7970’s stock speeds.

And so enters Tahiti 2. Tahiti 2 represents GPU silicon that bins not only at HD 7970 speeds but can push the default clock speed higher while running at lower voltage. As a result, the GPUs are able to stay within the same TDP as current 7970 cards but run faster.

But how much faster? Well, SemiAccurate is reporting that AMD is seeing as much as a 20% clock speed improvement over current Radeon HD 7970 graphics cards. This means that cards are able to run at clock speeds up to approximately 1075MHz – quite a bit above the current reference clock speed of 925MHz!

AMD 7970.jpg

The AMD 7970 3GB card. Expect Tahiti 2 to look exactly the same but run at higher clock speeds.

They are further reporting that, because the TDP has not changed, no cooler, PCB, or memory changes will be needed. This will make it that much easier for add-in board partners to get the updated reference-based cards out as quickly as possible and with minimal cost increases (we hope). You can likely count on board partners capitalizing on the 1,000 MHz+ speeds by branding the new cards “GHz Edition,” much as the Radeon 7770 has enjoyed.

With 7970 chips showing headroom and binning higher than needed, an updated, lower-power refresh may also be in order for AMD’s 7950 “Tahiti Pro” graphics cards. Heck, maybe they can refresh the entire lineup with better-binned silicon but keep the same clock speeds in order to reduce power consumption on all their cards.