Manufacturer: PC Perspective

A change is coming in 2013

If the new year brings us anything, it looks like it might be the end of using "FPS" as the primary measuring tool for graphics performance on PCs.  A long, long time ago we started with simple "time demos" that recorded rendered frames in a game like Quake and then played them back as quickly as possible on a test system.  The lone result was a time, in seconds, which was then converted to an average frame rate since the total number of recorded frames was known from the start.

More recently we saw a transition to frame rates over time and the advent of frame time graphs like the ones we have been using in our graphics reviews on PC Perspective.  This expanded the amount of data required to get an accurate picture of graphics and gaming performance, but it was indeed more accurate, giving us a clearer view of how GPUs (and CPUs and systems, for that matter) performed in games.

And even though the idea of frame times has been around just as long, not many people were interested in getting into that level of detail until this past year.  A frame time is the amount of time each frame takes to render, usually listed in milliseconds, and can range from 5ms to 50ms depending on performance.  For reference, 120 FPS equates to an average of 8.3ms per frame, 60 FPS to 16.6ms and 30 FPS to 33.3ms.  But rather than averaging those out over each second of time, what if you looked at each frame individually?
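
To make that conversion concrete, here is a minimal sketch of the arithmetic (our own illustration, not part of any benchmarking tool):

```python
# Frame time (in milliseconds) and frame rate (in FPS) are reciprocals of one another.
def fps_to_frame_time_ms(fps: float) -> float:
    return 1000.0 / fps

def frame_time_ms_to_fps(frame_time_ms: float) -> float:
    return 1000.0 / frame_time_ms

for fps in (120, 60, 30):
    # Prints roughly 8.3 ms, 16.7 ms and 33.3 ms respectively.
    print(f"{fps} FPS -> {fps_to_frame_time_ms(fps):.1f} ms per frame")
```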


Scott over at Tech Report started doing that this past year and found some interesting results.  I encourage all of our readers to follow up on what he has been doing as I think you'll find it incredibly educational and interesting. 

Through emails and tweets, many PC Perspective readers have been asking for our take on it and why we weren't yet testing graphics cards in the same fashion.  I've stayed quiet about it simply because we were working on quite a few different angles on our side and I wasn't ready to share results.  I am still not ready to share the bulk of our data, but I am ready to start the discussion and I hope our community finds it compelling and offers some feedback.

card.jpg

At the heart of our unique GPU testing method is this card, a high-end dual-link DVI capture card capable of handling 2560x1600 resolutions at 60 Hz.  Essentially, this card acts as a monitor for our GPU test bed and allows us to capture the actual display output that reaches the gamer's eyes.  This method is the best possible way to measure frame rates, frame times, stutter, runts, smoothness, and any other graphics-related metrics.

Using that recorded footage, which sometimes reaches 400 MB/s of sustained writes at high resolutions, we can then analyze the frames one by one with the help of some additional software.  There are a lot of details that I am glossing over, including the need for perfectly synced frame rates and absolutely zero dropped frames during recording and analysis, but trust me when I say we have been spending a lot of time on this.
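
To give a sense of the kind of per-frame analysis this enables, here is a heavily simplified sketch of the general idea.  This is illustrative only, not our actual tooling, and it treats a "runt" loosely as any frame that is on screen for only a tiny sliver of time (the real check works from how much of the display a frame actually occupies):

```python
# Simplified per-frame analysis from hypothetical display timestamps (in ms):
# frame times, average FPS, the worst single hitch, and crude "runt" detection.
def analyze_frames(display_times_ms, runt_threshold_ms=3.0):
    frame_times = [b - a for a, b in zip(display_times_ms, display_times_ms[1:])]
    duration_ms = display_times_ms[-1] - display_times_ms[0]
    avg_fps = 1000.0 * len(frame_times) / duration_ms
    worst_ms = max(frame_times)                                # biggest single stutter
    runts = [t for t in frame_times if t < runt_threshold_ms]  # barely-visible frames
    return avg_fps, worst_ms, runts

# Made-up capture data: five frame intervals, one of them a near-runt, one a hitch.
print(analyze_frames([0.0, 16.7, 33.4, 34.9, 66.8, 83.5]))
```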

Continue reading our editorial on Frame Rating: A New Graphics Performance Metric.

Jon Peddie has good news for NVIDIA in Q3 2012

Subject: Chipsets | November 26, 2012 - 01:06 PM |
Tagged: jon peddie, Q3 2012, graphics, market share

Jon Peddie Research has released its findings for the graphics market in Q3 of 2012, with bad news for the market overall, though not so bad for NVIDIA.  The downward trend in PC sales has had an effect on the graphics market, with the number of units sold dropping 5.2% from this time last year and only NVIDIA seeing a rise in units sold.  AMD saw a drop of 10.7% in the number of units it shipped, specifically a 30% drop from last quarter in desktop APUs and just under 5% in mobile processors.  Intel's overall sales dropped 8%, with both segments falling roughly equally, while NVIDIA's strictly discrete GPU business saw a 28.3% quarter-over-quarter gain in desktop shipments and almost 12% in notebooks.

Worth noting is what JPR includes in this research beyond what we used to think of as the graphics market.  Any x86-based processor with a GPU is included, from tablets to desktops, as are IGPs and discrete cards; ARM-based devices, cell phones, and all server chips are excluded.

JPR_Q32012.png

"The news was terrific for Nvidia and disappointing for everyone the other major players. From Q2 to Q3 Intel slipped in both desktop (7%) and notebook (8.6%). AMD dropped (2%) in the desktop, and (17%) in notebooks. Nvidia gained 28.3% in desktop from quarter to quarter and jumped almost 12% in the notebook segment.

This was not a very good quarter: shipments were down -1.45% on a Qtr-Qtr basis, and -10.8% on a Yr-Yr basis. We found that graphics shipments during Q3'12 slipped from last quarter -1.5% as compared to PCs, which grew slightly by 0.9% overall (however more GPUs shipped than PCs due to double attach). GPUs are traditionally a leading indicator of the market, since a GPU goes into every system before it is shipped, and most of the PC vendors are guiding down for Q4."

Here is some more Tech News from around the web:

Tech Talk

Computex: AMD Launching Tahiti 2 Graphics Cards Next Week

Subject: Graphics Cards | June 8, 2012 - 01:23 PM |
Tagged: tahiti, graphics, gpu, computex, binning, amd, 7970 ghz edition

AMD is having a string of successes with its 28nm 7000 series graphics cards.  While it was dethroned by NVIDIA’s GTX 680, the AMD Radeon HD 7970 is easier to get a hold of.  It certainly seems like the company is having a much easier time manufacturing its GPUs compared to NVIDIA’s Kepler cards.  AMD has been cranking out HD 7970s for a few months now and has gotten its binning process down to the point where it is seeing a good number of chips with healthy clock headroom above the 7970’s stock speeds.

And so enters Tahiti 2.  Tahiti 2 represents GPU silicon that not only bins at HD 7970 speeds but can also run a higher default clock speed at lower voltage.  As a result, the GPUs stay within the same TDP as current 7970 cards but run faster.

But how much faster? Well, SemiAccurate is reporting that AMD is seeing as much as a 20% clock speed improvement over current Radeon HD 7970 graphics cards. This means that cards are able to run at clock speeds up to approximately 1075MHz – quite a bit above the current reference clock speed of 925MHz!
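
As a quick sanity check on those figures (our own back-of-the-envelope math, not SemiAccurate's), the two numbers don't quite meet in the middle, so the 20% presumably describes the headroom of the best bins rather than the typical shipping clock:

```python
reference_mhz = 925                      # current Radeon HD 7970 reference clock
print(reference_mhz * 1.20)              # a full 20% bump would be 1110 MHz
print((1075 / reference_mhz - 1) * 100)  # ~1075 MHz is roughly a 16% increase
```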

AMD 7970.jpg

The AMD 7970 3GB card. Expect Tahiti 2 to look exactly the same but run at higher clock speeds.

They are further reporting that, because the TDP has not changed, no cooler, PCB, or memory changes will be needed.  This will make it that much easier for add-in board partners to get the updated reference-based cards out as quickly as possible and with minimal cost increases (we hope).  You can likely count on board partners capitalizing on the 1,000MHz+ speeds by branding the new cards “GHz Edition,” much like the Radeon HD 7770 GHz Edition.

With 7970 chips showing headroom and binning higher than needed, an updated, lower-power refresh may also be in order for AMD’s 7950 “Tahiti Pro” graphics cards.  Heck, maybe AMD could refresh the entire lineup with better-binned silicon but keep the same clock speeds in order to reduce power consumption across the board.

NVIDIA Kepler Graphics Cards Lineup Leak To Web

Subject: Graphics Cards | February 6, 2012 - 06:23 PM |
Tagged: nvidia, kepler, graphics, gpu

Although there were quite a few rumors leading up to AMD's Radeon 7000 series launch, the Internet has been very quiet on the greener side of the graphics market. Finally, however, we have some rumors to share with you on the NVIDIA front. As always, take these numbers with more than your average grain of salt.

Specifically, EXP Review managed to uncover two charts that supposedly detail specifics about a range of GeForce 600 series Kepler cards from the number of stream processors to the release date. Needless to say, it's a lot of rumored information to take in all at once.

Anyway, without further ado, let's dive into the two leaked charts.

Model | Code Name | Die Size | Core Clock (TBD) | Shader Clock (TBD) | Stream Processors | SM Count | ROPs | Memory Clock (effective, GDDR5) | Memory Bus Width | Memory Bandwidth
GTX690 | GK110 x2 | 550mm2 | ~750 MHz | ~1.5 GHz | 2x1024 | 2x32 | 2x56 | 4.5 GHz | 2x448-bit | 2x252 GB/s
GTX680 | GK110 | 550mm2 | ~850 MHz | ~1.7 GHz | 1024 | 32 | 64 | 5.5 GHz | 512-bit | 352 GB/s
GTX670 | GK110 | 550mm2 | ~850 MHz | ~1.7 GHz | 896 | 28 | 56 | 5 GHz | 448-bit | 280 GB/s
GTX660Ti | GK110 | 550mm2 | ~850 MHz | ~1.7 GHz | 768 | 24 | 48 | 5 GHz | 384-bit | 240 GB/s
GTX660 | GK104 | 290mm2 | ~900 MHz | ~1.8 GHz | 512 | 16 | 32 | 5.8 GHz | 256-bit | 186 GB/s
GTX650Ti | GK104 | 290mm2 | ~850 MHz | ~1.7 GHz | 448 | 14 | 28 | 5.5 GHz | 224-bit | 154 GB/s
GTX650 | GK106 | 155mm2 | ~900 MHz | ~1.8 GHz | 256 | 8 | 24 | 5.5 GHz | 192-bit | 132 GB/s
GTX640 | GK106 | 155mm2 | ~850 MHz | ~1.7 GHz | 192 | 6 | 16 | 5.5 GHz | 128-bit | 88 GB/s
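
The memory bandwidth column follows directly from the effective memory clock and the bus width; here is a quick check of that arithmetic using the rumored figures above (our own math, not part of the leak):

```python
# Bandwidth (GB/s) = effective transfer rate (GT/s) x bytes moved per transfer.
def bandwidth_gb_s(effective_gt_s: float, bus_width_bits: int) -> float:
    return effective_gt_s * bus_width_bits / 8

print(bandwidth_gb_s(5.5, 512))  # rumored GTX680: 5.5 GHz effective, 512-bit -> 352.0 GB/s
print(bandwidth_gb_s(5.0, 448))  # rumored GTX670: 5.0 GHz effective, 448-bit -> 280.0 GB/s
print(bandwidth_gb_s(5.8, 256))  # rumored GTX660: 5.8 GHz effective, 256-bit -> 185.6 GB/s (the chart rounds to 186)
```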

 

From the chart above, we can see the entire lineup of Kepler cards from the NVIDIA GTX 640 to the dual-GPU GTX 690.  The die size of the higher-end GeForce cards is approximately 50% larger than that of the AMD Radeon HD 7970, but not much bigger than that of the GTX 580.  If only we knew the TDP of these cards!  In the next chart, we see an alleged performance comparison against the AMD competition.

Model | Bus Interface | Frame Buffer | Transistors (Billion) | Price Point | Release Date | Performance Scale
GTX690 | PCI-E 3.0 x16 | 2x1.75 GB | 2x6.4 | $999 | Q3 2012 |
GTX680 | PCI-E 3.0 x16 | 2 GB | 6.4 | $649 | April 2012 | ~45% > HD7970
GTX670 | PCI-E 3.0 x16 | 1.75 GB | 6.4 | $499 | April 2012 | ~20% > HD7970
GTX660Ti | PCI-E 3.0 x16 | 1.5 GB | 6.4 | $399 | Q2/Q3 2012 | ~10% > HD7950
GTX660 | PCI-E 3.0 x16 | 2 GB | 3.4 | $319 | April 2012 | ~GTX580
GTX650Ti | PCI-E 3.0 x16 | 1.75 GB | 3.4 | $249 | Q2/Q3 2012 | ~GTX570
GTX650 | PCI-E 3.0 x16 | 1.5 GB | 1.8 | $179 | May 2012 | ~GTX560
GTX640 | PCI-E 3.0 x16 | 2 GB | 1.8 | $139 | May 2012 | ~GTX550Ti

 

If these numbers hold true, NVIDIA will handily beat the current AMD offerings; however, I would wait for reviews to come out before making any purchasing decisions.  One interesting aspect is the amount of GDDR5 memory.  It seems that NVIDIA is sticking with 2GB frame buffers (or less) per GPU while AMD has really started upping the RAM.  It will be interesting to see how this affects gaming in NVIDIA Surround and/or at high resolutions.

What do you guys think about these numbers?  Do you think Kepler will live up to the alleged performance scale figures?

Source: EXPreview

Galaxy Now Extending Warranty On Graphics Cards Purchased After August 1st

Subject: Graphics Cards | August 2, 2011 - 10:46 AM |
Tagged: graphics, gpu, galaxy

Galaxy, a popular maker of NVIDIA graphics cards, recently announced that it is extending the warranty on its graphics cards to three years.  "Galaxy has listened to the enthusiast market and we are glad to move from a 2 year warranty to a 3 year warranty by registration."  The new extended warranty will apply to all graphics cards purchased after August 1st, 2011 that are then registered with Galaxy.  Qualifying products will also bear the seal shown below so customers know the card is covered.

galaxywarranty.png

Seeing warranties being extended is always a good thing, especially in a world where the once popular lifetime warranty is rare.  What do you think of the extended warranty?  Will this be enough to push you towards a Galaxy branded card on your next purchase?

Source: Galaxy

MSI Announces New PCI-Express 3 Motherboard

Subject: Motherboards | July 6, 2011 - 04:36 PM |
Tagged: PCI-E 3.0, msi, graphics

MSI recently unveiled a new motherboard supporting the PCI-Express 3.0 standard. The Intel LGA 1155 CPU socket and Z68 chipset are also features of the upcoming motherboard, dubbed the Z68A-GD80 (G3).

image001.png

The new MSI board joins ASRock's announcement as one of the first PCI-Express 3.0 motherboards and is loaded with features. The Z68 chipset naturally supports Intel Sandy Bridge processors, and the board adds PCI-E 3.0 support, a UEFI BIOS, OC Genie II, and MSI's signature MIL-STD-810 military class components. The PCI-E 3.0 slots keep AMD CrossFireX and NVIDIA SLI multi-GPU solutions fed with plenty of bandwidth. Rear I/O includes a PS/2 port, USB 3.0, USB 2.0, HDMI, DVI, 7.1 audio, dual Gigabit Ethernet, eSATA, and FireWire.  On-board I/O includes three PCI-E 3.0 slots, two PCI slots, two PCI-E x1 slots, the LGA 1155 CPU socket, and four DDR3 DIMM slots.

What do you think of the new board; are you ready for PCI-E 3.0?

Source: MSI

NVIDIA Ups Mobile Graphics Ante, Releases GTX 560M

Subject: Graphics Cards | May 29, 2011 - 11:35 PM |
Tagged: mobile gaming, gtx 560m, graphics, computex

At Computex 2011, NVIDIA plans to showcase the latest addition to its mobile graphics lineup, the GTX 560M GPU.  Powered by a mobile version of the 500 series desktop GPU, the graphics card will bring support for NVIDIA's Optimus, 3D Vision, and PhysX technologies. At launch, there will be two notebooks from Asus and Toshiba, the G74sx and a Qosmio gaming laptop respectively, with many more to follow.

Nvidia_GTX_560M_Andrew_Coonrad.PNG

The Asus G74sx and Toshiba Gaming Notebook

From a performance standpoint, the GTX 560M purports to deliver twice the performance of the current GTX 540M mobile chip. According to GeForce.com, in Crysis: Warhead the GTX 560M pulls a respectable 30-40 FPS at 1080p with “Gamer” detail settings. This is in contrast to the older GTX 540M, which can only maintain 30-40 frames per second at 1080p at the lowest detail settings. In 3DMark Vantage, the GTX 560M scored 10,000 points whereas the older 540M only managed approximately 4,200 points. Andrew Coonrad, of NVIDIA’s technical marketing department, further stated that the graphics card would play both The Witcher 2 and Duke Nukem Forever at approximately 50 frames per second.

GeForce.com states that if you are a mobile gamer looking for an easy-to-carry gaming notebook that can offer Optimus’ battery-saving technology and 3D Vision’s gaming features, laptops with the GTX 560M are the way to go, as the older GTX 480M is not nearly as power efficient (and thus less portable). Laptops with the new graphics card are in stock now at several online retailers.

Computex 2011 Coverage brought to you by MSI Computer and Antec

Source: NVIDIA

KFA2 Launches Two NVIDIA GeForce GTX 560 Graphics Cards

Subject: Graphics Cards | May 21, 2011 - 03:04 AM |
Tagged: nvidia, kfa2, GTX 560, graphics

large560EXOC_image_3.jpg

Not to be left out of the slew of NVIDIA GeForce GTX 560 releases, KFA2 has announced two new additions to its graphics card lineup.  Both are based on the GeForce GTX 560 GPU; however, one card is overclocked and fitted with an aftermarket heatsink and fan combo (the other uses a standard single, centered, shrouded fan design).  Labeled the KFA2 GeForce GTX 560 1GB 256bit and the KFA2 GeForce GTX 560 EX OC 1GB 256bit, the DirectX 11 cards offer the following specifications:

Specification | GeForce GTX 560 1GB 256bit | GeForce GTX 560 EX OC 1GB 256bit
CUDA Cores | 336 | 336
GPU Clock | 810 MHz | 905 MHz
Shader Clock | 1620 MHz | 1810 MHz
Memory Clock | 2004 MHz | 2004 MHz
Memory | 1 GB GDDR5 on 256-bit bus | 1 GB GDDR5 on 256-bit bus
Memory Bandwidth | 128.3 GB/s | 128.3 GB/s
Texture Fill Rate | 45.3 Billion/s | 50.6 Billion/s
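
Both the bandwidth and fill rate figures can be reproduced from the clocks; here is a rough check of that math (our own, and it assumes the GTX 560's 56 texture units, a figure KFA2 does not list):

```python
# GDDR5 transfers data at twice the listed 2004 MHz figure (4008 MT/s effective),
# moving 32 bytes (256 bits) per transfer.
memory_bandwidth_gb_s = 2 * 2004e6 * (256 / 8) / 1e9  # ~128.3 GB/s for both cards

# Texture fill rate = core clock x texture units (assuming 56 TMUs).
fill_rate_stock_gt_s = 810e6 * 56 / 1e9                # ~45.4 billion texels/s
fill_rate_oc_gt_s = 905e6 * 56 / 1e9                   # ~50.7 billion texels/s

print(memory_bandwidth_gb_s, fill_rate_stock_gt_s, fill_rate_oc_gt_s)
```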

 

The two new cards seem to be positioned, specification-wise, between purely reference cards and competitors' highest-clocked GTX 560 cards.  Street price will ultimately determine whether they are worth picking up versus other brands with higher clocks, or with reference clocks but aftermarket cooling.  KFA2 states that the cards will be available online and in retail stores throughout Europe, and are backed by a two-year warranty.

Source: KFA2

Gigabyte Unveils NVIDIA GeForce GTX 560 Overclock Edition Graphics Card

Subject: Graphics Cards | May 19, 2011 - 04:33 AM |
Tagged: nvidia, GTX 560, graphics

Coinciding with the NDA lift on the NVIDIA GeForce GTX 560, Gigabyte announced its enthusiast-class Overclock Edition graphics card based on the new GTX 560 GPU.

GTX560GigabyteOC.jpg

The new Overclock Edition replaces the reference design's cooler with Gigabyte's own WindForce 2X variant, which the company claims reduces the noise of the card under full load to 31 dB. Further, the heatsink uses direct heat pipe technology, meaning the heat pipes that carry heat away from the GPU and into the fins physically contact the GPU itself. Both fans produce 30.5 CFM of airflow to quickly dissipate the heat of the overclocked GTX 560 GPU, and Gigabyte was able to ship the card with an 830 MHz GPU clock and a 4008 MHz memory clock from the factory. Gigabyte also claims overclocking capability improves by 10% to 30% thanks to its "Ultra Durable" copper PCB technology and power switching enhancements.

The full specifications of the GeForce GTX 560 Overclock Edition are as follows:

 

 

Model Number | GV-N56GOC-1GI
Core Clock | 830 MHz
Shader Clock | 1660 MHz
Memory Amount | 1 GB
Memory Type | GDDR5
Memory Bus | 256-bit
Card Bus | PCI-E 2.0
Process Technology | 40 nm
Card Dimensions | 43mm (H) x 238mm (L) x 130mm (W)
Power Requirements | Minimum 500 Watt PSU
DirectX Support | 11
Outputs | 1x mini HDMI, 2x DVI, 1x HDMI and DisplayPort via adapter(s), 1x VGA (via adapter)

 

Gigabyte is a popular motherboard manufacturer for enthusiasts and it seems that they are striving to gain that same level of consumer brand loyalty with their graphics cards.  Do you have a Gigabyte graphics card in your rig?

Source: Gigabyte

NVIDIA To Release Updated GTX 590 Cards In June

Subject: Graphics Cards | May 17, 2011 - 01:43 PM |
Tagged: nvidia, hardware, graphics

gtx590.jpg

The current GTX 590

VR-Zone reports that NVIDIA is gearing up to deliver a revised GTX 590 in June to combat the overheating problems that some overclockers fell victim to when using certain drivers.  PC Perspective did not run into the issue when overclocking our card; however, VR-Zone stated in an earlier article that:

"NVIDIA has sent out a cautionary to their partners regarding possible component damage due to high temperature when running Furmark 1.9 as it bypasses the capping detection. . . .  This is something not able to fix through drivers nor it is just applicable to GeForce GTX 590."

Fortunately for overclockers, NVIDIA is planning to re-engineer aspects of the design, including new inductors, which should help with the over-current protection issues.  The new design will also affect the size and dimensions of the current GTX 590 PCB, which means that third party heat sinks and water blocks made for the current GTX 590 will not fit.

It is nice to see NVIDIA sticking by its technology and updating its hardware to fix issues.  Overclockers, especially, will benefit from this updated model.

 

Source: VR-Zone