Manufacturer: MSI

A New TriFrozr Cooler

Graphics cards are by far the most interesting topic we cover at PC Perspective.  Between the battles of NVIDIA and AMD as well as the competition between board partners like EVGA, ASUS, MSI and Galaxy, there is very rarely a moment in time when we don't have a different GPU product of some kind on an active test bed.  Both NVIDIA and AMD release reference cards (for the most part) with each and every new product launch and it then takes some time for board partners to really put their own stamp on the designs.  Other than the figurative stamp that is the sticker on the fan.

IMG_9886.JPG

One of the companies that has recently become well known for very custom, non-reference graphics card designs is MSI, and the pinnacle of the company's engineering falls into the Lightning brand.  As far back as the MSI GTX 260 Lightning and as recently as the MSI HD 7970 Lightning, these cards have combined unique cooling, custom power design and a good amount of over-engineering to produce cards with few rivals.

Today we are looking at the brand new MSI GeForce GTX 780 Lightning, a complete revamp of the GTX 780 that was released in May.  Based on the same GK110 GPU as the GTX Titan, with two fewer SMX units, the GTX 780 is easily the second fastest single GPU card on the market.  MSI is hoping to make enthusiasts even more excited about the card with the Lightning design, which brings a brand new TriFrozr cooler, an impressive power design and overclocking capabilities that both basic users and LN2 junkies can take advantage of.  Just what DO you get for $750 these days?

Continue reading our review of the MSI GeForce GTX 780 Lightning graphics card!!

Manufacturer: Asus

Plus one GTX 670...

Brand new GPU architectures are typically packaged in reference designs when it comes to power, PCB layout, and cooling.  Once manufacturers get a chance to put out their own designs, interesting things happen.  The top end products are usually the ones that get the specialized treatment first, because they typically have the larger margins to work with.  Design choices here eventually trickle down to lower end cards, typically at a price point $20 to $30 above a reference design.  Companies such as MSI have made this their bread and butter, with the Lightning series on top, the Hawk line handling the midrange, and then the hopped-up reference designs with better cooling under the Twin Frozr moniker.

asus_jd_01.jpg

ASUS has been working on its own custom designs for years and years, but honestly it was not until the DirectCU series debuted that the company had a well defined product lineup pushing high end functionality across its entire range of products from top to bottom.  Certainly ASUS had custom and unique designs before, but things really seemed to crystallize with DirectCU.  I suppose that is the power of a good marketing tool as well.  DirectCU is a well known brand owned by Asus, and users typically know what to expect when looking at a DirectCU product.

Click to read the entire review here!

Manufacturer: XSPC

Introduction

02-680v2-1-1.jpg

Courtesy of XSPC

The Razor GTX680 water block was among the first in the XSPC full cover line of blocks. The previous generation of XSPC water blocks offered cooling for the GPU as well as the memory and on-board VRMs, but did not provide the protection that a full card-sized block gives to the sensitive components integrated into the card's PCB. At an MSRP of $99.99, the Razor GTX680 water block is a sound investment.

03-680v2-2.jpg

Courtesy of XSPC

The Razor GTX680 block comes with a total of seven G1/4" ports - four on the inlet side (left) and three on the outlet side (right). XSPC included the following components with the block: XSPC thermal compound, dual blue LEDs, five steel port caps, paper washers and mounting screws, and TIM (thermal interface material) for use with the on-board memory and VRM chips.

Continue reading our review of the XSPC Razor GTX680 water block!

Manufacturer: AMD

Frame Pacing for CrossFire

When the Radeon HD 7990 launched in April of this year, we had some not-so-great things to say about it.  Because the HD 7990 depends on CrossFire technology to function, and because we had found quite a few problems with AMD's CrossFire over the last several months of testing with our Frame Rating methodology, the HD 7990 "had a hard time justifying its $1000 price tag."  Right at launch, AMD gave us a taste of a new driver it hoped would fix the frame pacing and frame time variance issues seen in CrossFire, and it looked positive.  The problem was that the driver wouldn't be available until summer.

As I said then: "But until that driver is perfected, is bug free and is presented to buyers as a made-for-primetime solution, I just cannot recommend an investment this large on the Radeon HD 7990."

Today could be a very big day for AMD - the release of the promised driver update that enables frame pacing on AMD 7000-series CrossFire configurations including the Radeon HD 7990 graphics cards with a pair of Tahiti GPUs. 

It's not perfect yet and there are some things to keep an eye on.  For example, this fix will not address Eyefinity configurations, which include multi-panel solutions and the new 4K 60 Hz displays that require a tiled display configuration.  Also, we found some issues with CrossFire configurations using more than two GPUs that we'll address on a later page.

 

New Driver Details

Starting with 13.8 and moving forward, AMD plans to have the frame pacing fix integrated into all future drivers.  The software team has implemented a software-based frame pacing algorithm that monitors how long each GPU takes to render a frame and how long each frame is displayed on the screen, and inserts delays into the present calls when necessary to prevent very tightly timed frame renders.  This balances or "paces" the frame output to the screen without lowering the overall frame rate.  The driver monitors this constantly in real-time and minor changes are made on a regular basis to keep the GPUs in check.
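To make the idea a bit more concrete, here is a minimal sketch of that kind of software pacing, written in Python purely for illustration.  The class name, smoothing factor and sleep-based delay are my own assumptions for the sketch, not AMD's actual driver code, which does this inside the Direct3D present path.

```python
import time

class FramePacer:
    """Illustrative software frame pacer: it tracks how long recent frames have
    been taking and delays any present call that would otherwise land too soon
    after the previous one, evening out frame delivery without lowering the
    average frame rate."""

    def __init__(self, smoothing=0.9):
        self.avg_frame_time = None    # running estimate of the frame interval (s)
        self.last_present = None
        self.smoothing = smoothing

    def before_present(self):
        now = time.perf_counter()
        if self.last_present is not None:
            elapsed = now - self.last_present
            # Update the running average of frame-to-frame time.
            if self.avg_frame_time is None:
                self.avg_frame_time = elapsed
            else:
                self.avg_frame_time = (self.smoothing * self.avg_frame_time +
                                       (1.0 - self.smoothing) * elapsed)
            # If this frame arrived well ahead of the current pace, insert a
            # small delay so frames reach the screen at an even cadence.
            if elapsed < self.avg_frame_time:
                time.sleep(self.avg_frame_time - elapsed)
        self.last_present = time.perf_counter()

# Usage: call pacer.before_present() immediately before each Present() call.
pacer = FramePacer()
```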

7990card.JPG

As you would expect, this algorithm is completely game engine independent and the games should be completely oblivious to all that is going on (other than the feedback from present calls, etc). 

This fix is generic, meaning it is not tied to any specific game and doesn't require per-game profiles the way CrossFire sometimes does.  The current implementation works with DX10 and DX11 based titles only, with DX9 support being added in a later release.  AMD claims this was simply a development time issue; since most modern GPU-bound titles are DX10/11 based, the team focused on that area first.  In phase 2 of the frame pacing implementation AMD will add DX9 and OpenGL support.  AMD wouldn't give me a timeline for that, though, so we'll have to see how much internal pressure the company keeps up to get the job done.

Continue reading our story of the new AMD Catalyst 13.8 beta driver with frame pacing support!!

Manufacturer: Galaxy

Overclocked GTX 770 from Galaxy

When NVIDIA launched the GeForce GTX 770 at the very end of May, we started to get in some retail samples from companies like Galaxy.  While our initial review looked at the reference models, other add-in card vendors are putting their own unique touch on the latest GK104 offering and Galaxy was kind enough to send us their GeForce GTX 770 2GB GC model that uses a unique, more efficient cooler design and also runs at overclocked frequencies. 

If you haven't yet read up on the GTX 770 GPU, you should probably stop by my first review of the GTX 770 to see what information you are missing out on.  Essentially, the GTX 770 is a full-spec GK104 Kepler GPU running at higher clocks (both core and memory speeds) compared to the original GTX 680.  The new reference clocks for the GTX 770 were 1046 MHz base clock, 1085 MHz Boost clock and a nice increase to 7.0 GHz memory speeds.

gpuz.png

Galaxy GeForce GTX 770 2GB GC Specs

The Galaxy GC model is overclocked with a new base clock setting of 1111 MHz and a higher Boost clock of 1163 MHz; both are roughly 6-7% higher than the reference clocks.  Galaxy has left the memory speeds alone, though, keeping them running at an effective 7.0 GHz.
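For reference, the quick arithmetic behind those percentages, using the reference clocks quoted above (illustrative only):

```python
# Factory overclock vs. NVIDIA's reference GTX 770 clocks (values from above).
reference = {"base": 1046, "boost": 1085}   # MHz
galaxy_gc = {"base": 1111, "boost": 1163}   # MHz

for clock in ("base", "boost"):
    gain = (galaxy_gc[clock] / reference[clock] - 1) * 100
    print(f"{clock}: +{gain:.1f}%")   # base: +6.2%, boost: +7.2%
```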

IMG_9941.JPG

Continue reading our review of the Galaxy GeForce GTX 770 2GB GC graphics card!!

Manufacturer: NVIDIA

Another Wrench – GeForce GTX 760M Results

Just recently, I evaluated some of the current processor-integrated graphics options using our new Frame Rating performance metric. The results were very interesting, showing that Intel has done some great work with its new HD 5000 graphics option for Ultrabooks. You might have noticed that the MSI GE40 didn’t come with just the integrated HD 4600 graphics; it also included a discrete NVIDIA GeForce GTX 760M on board.  While that previous article was meant to focus on the integrated graphics of Haswell, Trinity, and Richland, I did find some noteworthy results with the GTX 760M that I wanted to investigate and present.

IMG_0141.JPG

The MSI GE40 is a new Haswell-based notebook that includes the Core i7-4702MQ quad-core processor and Intel HD 4600 graphics.  Along with it MSI has included the Kepler architecture GeForce GTX 760M discrete GPU.

760mspecs.png

This GPU offers 768 CUDA cores running at a 657 MHz base clock but can stretch higher with GPU Boost technology.  It is configured with 2GB of GDDR5 memory running at 2.0 GHz.

If you didn’t read the previous integrated graphics article linked above, some of the data presented there will be spoiled here, so you might want to get a baseline of information by reading through that first.  Also, remember that we are using our Frame Rating performance evaluation system for this testing – a key differentiator from most other mobile GPU testing.  In fact, it is that difference that allowed us to spot an interesting issue with the configuration we are showing you today.

If you are not familiar with the Frame Rating methodology, and how we had to change some things for mobile GPU testing, I would really encourage you to read this page of the previous mobility Frame Rating article for the scoop.  The data presented below depends on that background knowledge!

Okay, you’ve been warned – on to the results.

Continue reading our story about GeForce GTX 760M Frame Rating results and Haswell Optimus issues!!

Manufacturer: Various

Battle of the IGPs

Our long journey with Frame Rating, a new capture-based analysis tool to measure graphics performance of PCs and GPUs, began almost two years ago as a way to properly evaluate the real-world experiences for gamers.  What started as a project attempting to learn about multi-GPU complications has really become a new standard in graphics evaluation and I truly believe it will play a crucial role going forward in GPU and game testing. 

Today we use these Frame Rating methods and tools, which are elaborately detailed in our Frame Rating Dissected article, and apply them to a completely new market: notebooks.  Even though Frame Rating was meant for high performance discrete desktop GPUs, the theory and science behind the entire process are completely applicable to notebook graphics and even to the integrated graphics solutions on Haswell processors and Richland APUs.  It is also able to measure performance of discrete/integrated graphics combos from NVIDIA and AMD in a unique way that has already turned up some interesting results.

 

Battle of the IGPs

Even though neither side wants us to call it this, we are testing integrated graphics today.  With the release of Intel’s Haswell processor (the Core i7/i5/i3 4000 series) the company has upgraded the graphics noticeably on several of its mobile and desktop products.  In my first review of the Core i7-4770K, a desktop LGA1150 part, the integrated graphics now known as the HD 4600 were only slightly faster than the graphics of the previous generation Ivy Bridge and Sandy Bridge parts.  Even though we had all the technical details of the HD 5000 and Iris / Iris Pro graphics options, no desktop parts actually utilize them, so we had to wait for some more hardware to show up. 

 

mbair.JPG

When Apple held a press conference and announced new MacBook Air machines that used Intel’s Haswell architecture, I knew I could count on Ken to go and pick one up for himself.  Of course, before I let him start using it for his own purposes, I made him sit through a few agonizing days of benchmarking and testing in both Windows and Mac OS X environments.  Ken has already posted a review of the MacBook Air 11-in model ‘from a Windows perspective’ and in that we teased that we had done quite a bit more evaluation of the graphics performance to be shown later.  Now is later.

So the first combatant in our integrated graphics showdown with Frame Rating is the 11-in MacBook Air: a small but powerful Ultrabook that sports more than 11 hours of battery life (in OS X at least) and includes the new HD 5000 integrated graphics.  The HD 5000 is the GT3 variation of the new Intel processor graphics, which doubles the number of execution units compared to GT2.  GT2 is the architecture behind the HD 4600 graphics that ships with nearly all of the desktop processors, and many of the notebook versions, so I am very curious to see how this comparison shakes out. 

Continue reading our story on Frame Rating with Haswell, Trinity and Richland!!

Manufacturer: PC Perspective

The GPU Midrange Gets a Kick

I like budget video cards.  They hold a soft spot in my heart.  I think the primary reason for this is that I too was once a poor college student and could not afford the really expensive cards.  Ok, so this was maybe a few more years ago than I like to admit.  Back when the Matrox Millennium was very expensive, I ended up getting the STB Lightspeed 128 instead.  Instead of the 12 MB Voodoo 2 I went for the 8 MB version.  I was never terribly fond of paying top dollar for a little extra performance.  I am still not fond of it either.

The sub-$200 range is a bit of a sweet spot that is very tightly packed with products.  These products typically perform in the range of a high end card from 3 years ago, yet still encompass the latest features of the top end products from their respective companies.  These products can be overclocked by end users to attain performance approaching cards in the $200 to $250 range.  Mind, there are some specific limitations to the amount of performance one can actually achieve with these cards.  Still, what a user actually gets is very fair when considering the price.

budg_01.jpg

Today I cover several flavors of cards from three different manufacturers that are based on the AMD HD 7790 and the NVIDIA GTX 650 Ti BOOST chips.  These range in price from $129 to $179.  The features on these cards are amazingly varied, and there are no “sticker edition” parts to be seen here.  Each card is unique in its design and the cooling strategies are also quite distinct.  Users should not expect to drive monitors above 1920x1200, much less triple monitors in Surround and Eyefinity.

Now let us quickly go over the respective chips that these cards are based on.

Click here to read the entire article!

Manufacturer: NVIDIA

Getting even more life from GK104

Have you guys heard about this new GPU from NVIDIA?  It’s called GK104 and it turns out that the damn thing has found its way into yet another graphics card this year – the new GeForce GTX 760.  Yup, you read that right: what NVIDIA is saying is the last update to the GeForce lineup through Fall 2013 is going to be based on the same GK104 design that we have previously discussed in reviews of the GTX 680, GTX 670, GTX 660 Ti, GTX 690 and more recently, the GTX 770. This isn’t a bad thing though!  GK104 has done a fantastic job in every field and market segment that NVIDIA has tossed it into, with solid performance and even better performance per watt than the competition.  It does mean, however, that talking up the architecture is kind of mind-numbing at this point…

block.jpg

If you are curious about the Kepler graphics architecture and the GK104 in particular, I’m not going to stop you from going back and reading over my initial review of the GTX 680 from March of 2012.  The new GTX 760 takes the same GPU, adds a new and improved version of GPU Boost (the same we saw in the GTX 770) and lowers the specifications a bit to enable NVIDIA to hit a new price point.  The GTX 760 will be replacing the GTX 660 Ti – that card will be falling into the ether but the GTX 660 will remain, as will everything below it including the GTX 650 Ti BOOST, 650 Ti and plain old 650.  The GTX 670 went the way of the dodo with the release of the GTX 770.

01.jpg

Even though the GTX 690 isn't on this list, NVIDIA says it isn't EOL

As for the GeForce GTX 760 it will ship with 1152 CUDA cores running at a base clock of 980 MHz and a typical boost clock of 1033 MHz.  The memory speed remains at 6.0 GHz on a 256-bit memory bus and you can expect to find both 2GB and 4GB frame buffer options from retail partners upon launch.  The 1152 CUDA cores are broken up over 6 SMX units and that means you’ll see some parts with 3 GPCs and others with 4 – NVIDIA claims any performance delta between them will be negligible. 

Continue reading our review of the NVIDIA GeForce GTX 760 2GB Graphics Card!!

Manufacturer: Adobe

OpenCL Support in a Meaningful Way

Adobe has had OpenCL support since last year. You would never benefit from its inclusion unless you ran one of two AMD mobility chips under Mac OS X Lion, but it was there. Creative Cloud, predictably, furthers this trend with additional GPGPU support for applications like Photoshop and Premiere Pro.

This leads to some interesting points:

  • How OpenCL is changing the landscape between Intel and AMD
  • What GPU support is curiously absent from Adobe CC for one reason or another
  • Which GPUs are supported despite not... existing, officially.

adobe-cs-products.jpg

This should be very big news for our readers who do production work whether professional or for a hobby. If not, how about a little information about certain GPUs that are designed to compete with the GeForce 700-series?

Read on for our thoughts, after the break.

Manufacturer: NVIDIA

Kepler-based Mobile GPUs

Late last month, just before the tech world blew up from the mess that is Computex, NVIDIA announced a new line of mobility discrete graphics parts under the GTX 700M series label.  At the time we simply posted some news and specifications about the new products but left performance evaluation for a later time.  Today we have that for the highest end offering, the GeForce GTX 780M. 

As with most mobility GPU releases it seems, the GTX 700M series is not really a new GPU and only offers cursory feature improvements.  Based completely on the Kepler line of parts, the GTX 700M will range from 1536 CUDA cores on the GTX 780M to 768 cores on the GTX 760M. 

slide2.jpg

The flagship GTX 780M is essentially a desktop GTX 680 in a mobile form factor with lower clock speeds.  With 1536 CUDA cores running at 823 MHz and boosting to higher speeds depending on the notebook configuration, plus a 256-bit memory controller running at 5 GHz, the GTX 780M will likely be the fastest mobile GPU you can buy.  (And we’ll be testing that in the coming pages.) 

The GTX 760M, 765M and 770M offer a range of performance that scales down to 768 cores at 657 MHz.  NVIDIA claims we’ll see the GTX 760M in systems as small as 14-in and below, weighing around 2 kg, from vendors like MSI and Acer.  For Ultrabooks and thinner machines you’ll have to step down to smaller, less power hungry GPUs like the GT 750M and 740M, but even then we expect NVIDIA to have much faster gaming performance than the Haswell-based processor graphics.

Continue reading our performance review of the new NVIDIA GeForce GTX 780M mobility GPU!!

Manufacturer: NVIDIA

A necessary gesture

NVIDIA views the gaming landscape as a constantly shifting medium that starts with the PC.  But the company also sees mobile gaming, cloud gaming and even console gaming as part of the overall ecosystem.  But that is all tied together by an investment in content – the game developers and game publishers that make the games that we play on PCs, tablets, phones and consoles.

nv14.jpg

The slide above shows NVIDIA's targeting for each segment – except for consoles, obviously.  NVIDIA GRID will address the cloud gaming infrastructure, GeForce and the GeForce Experience will continue with the PC systems and NVIDIA SHIELD and the Tegra SoC will get the focus for the mobile and tablet spaces.  I find it interesting that NVIDIA has specifically called out Steam under the PC – maybe a hint of the future for the upcoming Steam Box?

The primary point of focus for today’s press meeting was to talk about the commitment that NVIDIA has to the gaming world and to developers.  AMD has been talking up their 4-point attack on gaming that starts really with the dominance in the console markets.  But NVIDIA has been the leader in the PC world for many years and doesn’t see that changing.

nv02.jpg

With several global testing facilities, the most impressive of which exists in Russia, NVIDIA tests more games, more hardware and more settings combinations than you can possibly imagine.  They tune drivers and find optimal playing settings for more than 100 games that are now wrapped up into the GeForce Experience software.  They write tools for developers to find software bottlenecks and test for game streaming latency (with the upcoming SHIELD). They invest more in those areas than any other hardware vendor.

nv03.jpg

This is a list of technologies that NVIDIA claims they invented or developed – an impressive list that includes things like programmable shaders, GPU compute, Boost technology and more. 

nv04.jpg

Many of these turned out to be very important in the development and advancement of gaming – not just for PCs but for ALL gaming. 

Continue reading our editorial on NVIDIA's stance on its future in PC gaming!!

Manufacturer: NVIDIA

GK104 gets cheaper and faster

A week ago today we posted our review of the GeForce GTX 780, NVIDIA's attempt to split the difference between the GTX 680 and the GTX Titan graphics cards in terms of performance and pricing.  Today NVIDIA launches the GeForce GTX 770 that, even though it has a fancy new name, is a card and a GPU that you are very familiar with.

arch01.png

The NVIDIA GK104 GPU Diagram

Based on GK104, the same GPU that powers the GTX 680 (released in March 2012), GTX 670 and the GTX 690 (though in a pair), the new GeForce GTX 770 has very few changes from the previous models that are really worth noting.  NVIDIA has updated the GPU Boost technology to 2.0 (more granular, better controls in software) but the real changes come in the clock speeds.

specs2.png

The GTX 770 is still built around 4 GPCs and 8 SMXs for a grand total of 1536 CUDA cores, 128 texture units and 32 ROPs.  The clock speeds have increased from 1006 MHz base clock and 1058 MHz Boost up to 1046 MHz base and 1085 MHz Boost.  That is a pretty minor speed bump in reality, an increase of just 4% or so over the previous clock speeds. 

NVIDIA did bump up the GDDR5 memory speed considerably though, going from 6.0 Gbps to 7.0 Gbps, or 1750 MHz.  The memory bus remains 256 bits wide but the total memory bandwidth has jumped up to 224.3 GB/s.
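That bandwidth figure falls straight out of the bus width and data rate; a quick illustrative check:

```python
# Peak memory bandwidth = (bus width in bytes) x (effective data rate per pin)
bus_width_bits = 256
data_rate_gbps = 7.0                     # GDDR5 at 7.0 Gbps effective

bandwidth_gb_s = (bus_width_bits / 8) * data_rate_gbps
print(f"{bandwidth_gb_s:.0f} GB/s")      # 224 GB/s, in line with NVIDIA's 224.3 GB/s figure
```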

Maybe the best change for PC gamers is the new starting MSRP for the GeForce GTX 770 at $399 - a full $50-60 less than the GTX 680 was selling for as of yesterday.  If you happened to pick up a GTX 680 recently, you are going to want to look into your return options, as this will surely annoy the crap out of you.

If you want more information on the architecture design of the GK104 GPU, check out our initial article on the chip's release from last year.  Otherwise, with those few specification changes out of the way, let's move on to some interesting information.

The NVIDIA GeForce GTX 770 2GB Reference Card

Tired of this design yet?  If so, you'll want to look into some of the non-reference options I'll show you on the next page from other vendors, but I for one am still taken with the design of these cards.  You will find a handful of vendors offering up re-branded GTX 770 options at the outset of release but most will have their own SKUs to showcase.

IMG_9918.JPG

Continue reading our review of the NVIDIA GeForce GTX 770 graphics card!!

Manufacturer: NVIDIA

GK110 Gets a Lower Price Point

If you want to ask us some questions about the GTX 780 or our review, join us for a LIVE STREAM at 2pm EDT / 11am PDT on our LIVE page.

When NVIDIA released the GeForce GTX Titan in February there was a kind of collective gasp across the enthusiast base.  Half of that intake of air was from people amazed at the performance they were seeing from a single GPU graphics card powered by the GK110 chip.  The other half was from people aghast at the $1000 price point that NVIDIA launched it at.  The GTX Titan was the fastest single GPU card in the world, without any debate, but with it came a cost we hadn't seen in some time.  Even with the debate between it, the GTX 690 and the HD 7990, the Titan was likely my favorite GPU, cost concerns aside.

IMG_9863.JPG

Today we see an extension of GK110: the chip is cut back some and released as a new card.  The GeForce GTX 780 3GB is based on the same chip as the GTX Titan but with additional SMX units disabled, a lower CUDA core count and less memory.  But as you'll soon see, the performance delta between it and the GTX 680 and Radeon HD 7970 GHz is pretty impressive.  The $650 price tag though - maybe not.

We held a live stream the day this review launched at http://pcper.com/live.  You can see the replay that goes over our benchmark results and thoughts on the GTX 780 below.

 

The GeForce GTX 780 - A Cut Down GK110

As I mentioned above, the GTX 780 is a pared-down GK110 GPU and for more information on that particular architecture change, you should really take a look at my original GTX Titan launch article from February.  There is a lot more that is different on this part compared to GK104 than simple shader counts, but for gamers most of the focus will rest there. 

The chip itself is a 7.1 billion transistor beast, though a card with the GTX 780 label actually utilizes noticeably less of it than a Titan does.  Below you will find a couple of block diagrams that represent the reduced functionality of the GTX 780 versus the GTX Titan:

block1.png

Continue reading our review of the NVIDIA GeForce GTX 780 3GB GK110 Graphics Card!!

Manufacturer: Intel

The Intel HD Graphics are joined by Iris

Intel gets a bad rap on the graphics front.  Much of it is warranted but a lot of it is really just poor marketing of the technologies and features they implement and improve on.  When AMD or NVIDIA update a driver, fix a bug or bring a new gaming feature to the table, they make sure that every single PC hardware website knows about it and thus that as many PC gamers as possible know about it.  The same cannot be said about Intel - they are much more understated when it comes to tooting their own horn.  Maybe that's because they are afraid of being called out on some aspects or that they have a little bit of performance envy compared to the discrete options on the market. 

Today might be the start of something new from the company though - a bigger focus on the graphics technology in Intel processors.  More than a month before the official unveiling of the Haswell processors publicly, Intel is opening up about SOME of the changes coming to the Haswell-based graphics products. 

We first learned about the changes to Intel's Haswell graphics architecture way back in September of 2012 at the Intel Developer Forum.  It was revealed then that the GT3 design would essentially double theoretical output over the currently existing GT2 design found in Ivy Bridge.  GT2 will continue to exist (though slightly updated) on Haswell and only some versions of Haswell will actually see updates to the higher-performing GT3 options.  

01.jpg

In 2009 Intel announced a drive to increase graphics performance generation to generation at an exceptional level.  Not long after they released the Sandy Bridge CPU and the most significant performance increase in processor graphics ever.  Ivy Bridge followed after with a nice increase in graphics capability but not nearly as dramatic as the SNB jump.  Now, according to this graphic, the graphics capability of Haswell will be as much as 75x better than the chipset-based graphics from 2006.  The real question is what variants of Haswell will have that performance level...

02.jpg

I should note right away that even though we are showing you general performance data on graphics, we still don't have all the details on what SKUs will have what features on the mobile and desktop lineups.  Intel appears to be trying to give us as much information as possible without really giving us any information. 

Read more on Haswell's new graphics core here.

Manufacturer: Various

Our 4K Testing Methods

You may have recently seen a story and video on PC Perspective about a new TV that made its way into the office.  Of particular interest is the fact that the SEIKI SE50UY04 50-in TV is a 4K television; it has a native resolution of 3840x2160.  For those that are unfamiliar with the new upcoming TV and display standards, 3840x2160 is exactly four times the resolution of current 1080p TVs and displays.  Oh, and this TV only cost us $1300.

seiki5.jpg

In that short preview we validated that both NVIDIA and AMD current generation graphics cards support output to this TV at 3840x2160 using an HDMI cable.  You might be surprised to find that HDMI 1.4 can support 4K resolutions, but it can do so only at 30 Hz (60 Hz 4K TVs won't be available until 2014 most likely), half the refresh rate of most TVs and monitors.  That doesn't mean we are limited to 30 FPS of performance though, far from it.  As you'll see in our testing on the coming pages we were able to push out much higher frame rates using some very high end graphics solutions.

I should point out that I am not a TV reviewer and I don't claim to be one, so I'll leave the technical merits of the display itself to others.  Instead I will only report on my experiences with it while using Windows and playing games - it's pretty freaking awesome.  The only downside I have found in my time with the TV as a gaming monitor thus far is the combination of the 30 Hz refresh rate and Vsync disabled.  Because you are seeing fewer screen refreshes over the same amount of time than you would with a 60 Hz panel, all else being equal, you are getting twice as many "frames" of the game pushed to the monitor each refresh cycle.  This means that the horizontal tearing associated with disabling Vsync will likely be more apparent than it would be otherwise. 
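A quick illustrative way to see why: with Vsync off, roughly (render rate divided by refresh rate) frame boundaries land inside every refresh, and each boundary is a potential tear line.  The numbers below are only an example, not measured data.

```python
# With Vsync disabled, about (render FPS / refresh rate) different game frames
# end up inside each screen refresh, and every transition can appear as a tear.
render_fps = 60
for refresh_hz in (60, 30):
    frames_per_refresh = render_fps / refresh_hz
    print(f"{refresh_hz} Hz panel: ~{frames_per_refresh:.0f} frame(s) per refresh")
# 60 Hz panel: ~1 frame(s) per refresh
# 30 Hz panel: ~2 frame(s) per refresh  -> tearing twice as likely to be visible
```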

4ksizes.png

Image from Digital Trends

I would likely recommend enabling Vsync for a tear-free experience on this TV once you are happy with performance levels, but obviously for our testing we wanted to keep it off to gauge performance of these graphics cards.

Continue reading our results from testing 4K 3840x2160 gaming on high end graphics cards!!

Manufacturer: AMD

The card we have been expecting

Despite all the issues that were brought up by our new graphics performance testing methodology we are calling Frame Rating, there is little debate in the industry that AMD is making noise once again in the graphics field, from the elaborate marketing and game bundles attached to Radeon HD 7000 series cards over the last year to the hiring of Roy Taylor, a VP of sales who is also the company's most vocal supporter. 

slide1_0.jpg

Along with the marketing, though, goes plenty of technology and important design wins.  With AMD silicon dominating the console side (Wii U, PlayStation 4 and the next Xbox), AMD is making sure that developers' familiarity with its GPU architecture there pays dividends on the PC side as well.  Developers will be focusing on AMD's graphics hardware for 5-10 years over this console generation, and that could result in improved performance and feature support for Radeon graphics for PC gamers. 

Today's release of the Radeon HD 7990 6GB Malta dual-GPU graphics card shows a renewed focus on high-end graphics markets since the release of the Radeon HD 7970 in January of 2012.  And while you may have seen something for sale previously with the HD 7990 name attached, those were custom designs built by partners, not by AMD. 

slide2_0.jpg

Both ASUS and PowerColor currently have high-end dual-Tahiti cards for sale.  The PowerColor HD 7990 Devil 13 used the brand directly but ASUS' ARES II kept away from the name and focused on its own high-end card brands instead. 

The "real" Radeon HD 7990 card was first teased at GDC in March and takes a much less dramatic approach to its design without being less impressive technically.  The card includes a pair of Tahiti, HD 7970-class GPUs on a single PCB with 6GB of total memory.  The raw specifications are listed here:

slide6_0.jpg

Considering there are two HD 7970 GPUs on the HD 7990, the doubling of the major specs shouldn't be surprising, though it is a little deceiving.  There are 8.6 billion transistors, yes, but still 4.3 billion on each GPU.  There are 4096 stream processors, but only 2048 on each GPU, and software GPU scaling is required to turn the second set into additional performance.  The same goes for texture fill rate, compute performance, memory bandwidth, etc.  The same could be said for all dual-GPU graphics cards though.

Continue reading our review of the AMD Radeon HD 7990 6GB Graphics Card!!

Manufacturer: Various

A very early look at the future of Catalyst

Today is a very interesting day for AMD.  It marks both the release of the reference design of the Radeon HD 7990 graphics card, a dual-GPU Tahiti behemoth, and the first sample of a change to the CrossFire technology that will improve animation performance across the board.  Both stories are incredibly interesting and as it turns out both feed off of each other in a very important way: the HD 7990 depends on CrossFire and CrossFire depends on this driver. 

If you already read our review (or any review that is using the FCAT / frame capture system) of the Radeon HD 7990, you likely came away somewhat unimpressed.  The combination of two AMD Tahiti GPUs on a single PCB with 6GB of frame buffer SHOULD have been an incredibly exciting release for us and would likely have produced the single fastest graphics card on the planet.  That didn't happen though and our results clearly state why that is the case: AMD CrossFire technology has some serious issues with animation smoothness, runt frames and giving users what they are promised. 

Our first results using our Frame Rating performance analysis method were shown during the release of the NVIDIA GeForce GTX Titan card in February.  Since then we have been in constant talks with the folks at AMD to figure out what was wrong, how they could fix it, and what it would mean to gamers to implement frame metering technology.  We followed that story up with several more that showed the current state of performance on the GPU market using Frame Rating, and those painted CrossFire in a very negative light.  Even though some outlets accused us of being biased, or insisted that AMD wasn't doing anything incorrectly, we stuck by our results and as it turns out, so does AMD. 

Today's preview of a very early prototype driver shows that the company is serious about fixing the problems we discovered. 

If you are just catching up on the story, you really need some background information.  The best place to start is our article published in late March that goes into detail about how game engines work, how our completely new testing methods work and the problems with AMD CrossFire technology very specifically.  From that piece:

It will become painfully apparent as we dive through the benchmark results on the following pages, but I feel that addressing the issues that CrossFire and Eyefinity are creating up front will make the results easier to understand.  As we showed you for the first time in Frame Rating Part 3, AMD CrossFire configurations have a tendency to produce a lot of runt frames, and in many cases in a nearly perfect alternating pattern.  Not only does this mean that frame time variance will be high, but it also tells me that the performance gained by adding a second GPU is completely useless in this case.  Obviously the question would then become, “In Battlefield 3, does it even make sense to use a CrossFire configuration?”  My answer based on the below graph would be no.

runt.jpg

An example of a runt frame in a CrossFire configuration

NVIDIA's solution for getting around this potential problem with SLI was to integrate frame metering, a technology that balances frame presentation to the user and to the game engine in a way that enabled smoother, more consistent frame times and thus smoother animations on the screen.  For GeForce cards, frame metering began as a software solution but was actually integrated as a hardware function on the Fermi design, taking some load off of the driver.

Continue reading our article on the new prototype driver from AMD to address frame pacing issues in CrossFire!!

Manufacturer: PC Perspective

Not a simple answer

After publishing the Frame Rating Part 3 story, I started to see quite a bit of feedback from readers and other enthusiasts, with many requests for information about Vsync and how it might affect the results we are seeing here.  Vertical Sync is the fix for screen tearing, a common artifact seen in gaming (and other media) when the frame rendering rate doesn’t match the display’s refresh rate.  Enabling Vsync forces the rendering engine to only display and switch frames in the buffer to match the vertical refresh rate of the monitor, or a divisor of it.  So a 60 Hz monitor can only display frames at 16.7 ms (60 FPS), 33.3 ms (30 FPS), 50 ms (20 FPS), and so on.
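Those steps come straight from the refresh period; a quick illustrative calculation:

```python
# With Vsync on, a frame can only be swapped at a refresh boundary, so frame
# times are whole multiples of the refresh period.
refresh_hz = 60
refresh_period_ms = 1000.0 / refresh_hz          # ~16.7 ms on a 60 Hz display

for cycles in (1, 2, 3):
    frame_time_ms = cycles * refresh_period_ms
    print(f"{cycles} refresh cycle(s): {frame_time_ms:.1f} ms "
          f"-> {1000.0 / frame_time_ms:.0f} FPS")
# 1 refresh cycle(s): 16.7 ms -> 60 FPS
# 2 refresh cycle(s): 33.3 ms -> 30 FPS
# 3 refresh cycle(s): 50.0 ms -> 20 FPS
```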

Many early readers hypothesized that simply enabling Vsync would fix the stutter and runt issues that Frame Rating was bringing to light.  In fact, AMD was a proponent of this fix, as many conversations we have had with the GPU giant trailed in the direction of Vsync as the answer to their multi-GPU issues. 

In our continuing research on graphics performance, part of our Frame Rating story line, I recently spent many hours playing games on different hardware configurations and different levels of Vertical Sync.  After this time testing, I am comfortable in saying that I do not think that simply enabling Vsync on platforms that exhibit a large number of runt frames fixes the issue.  It may prevent runts, but it does not actually produce a completely smooth animation. 

To be 100% clear - the issues with Vsync and animation smoothness are not limited to AMD graphics cards or even multi-GPU configurations.  The situations we are demonstrating here present themselves equally on AMD and NVIDIA platforms and with single or dual card configurations, as long as all other parameters are met.  Our goal today is only to compare a typical Vsync situation from either vendor to a reference result at 60 FPS and at 30 FPS; not to compare AMD against NVIDIA!!

Crysis3_1920x1080_PLOT_1.png

In our initial research with Frame Rating, I presented this graph on the page discussing Vsync.  At the time, I left this note with the image:

The single card and SLI configurations with Vsync disabled look just like they did on previous pages, but the graph for GTX 680 SLI with Vsync on is very different.  Frame times are only switching back and forth between 16 ms and 33 ms, 60 and 30 instantaneous FPS, due to the restrictions of Vsync.  What might not be obvious at first is that the constant shifting back and forth between these two rates (two refresh cycles with one frame, one refresh cycle with one frame) can actually cause more stuttering and animation inconsistencies than would otherwise appear.

Even though I had tested this out and could literally SEE that animation inconsistency, I didn't yet have a way to demonstrate it to our readers.  Today I think we do.

The plan for today's article is going to be simple.  I am going to present a set of three videos to you that show side by side runs from different configuration options and tell you what I think we are seeing in each result.  Then on another page, I'm going to show you three more videos and see if you can pinpoint the problems on your own.

Continue reading our article on the effects of Vsync on gaming animation smoothness!!

Manufacturer: PC Perspective

What to look for and our Test Setup

Because of the complexity and sheer amount of data we have gathered using our Frame Rating performance methodology, we are breaking it up into several articles that each feature different GPU comparisons.  Here is the schedule:

 

Today marks the conclusion of our first complete round up of Frame Rating results, the culmination of testing that was started 18 months ago.  Hopefully you have caught our other articles on the subject at hand, and you really will need to read up on the Frame Rating Dissected story above to truly understand the testing methods and results shown in this article.  Use the links above to find the previous articles!

To round out our Frame Rating testing in this iteration, we are looking at more cards further down the product stack in two different sets.  The first comparison will look at the AMD Radeon HD 7870 GHz Edition and the NVIDIA GeForce GTX 660 graphics cards in both single and dual-card configurations.  Just like we saw with our HD 7970 vs GTX 680 and our HD 7950 vs GTX 660 Ti testing, evaluating how the GPUs compare in our new and improved testing methodology in single GPU configurations is just as important as testing in SLI and CrossFire.  The GTX 660 ($199 at Newegg.com) and the HD 7870 ($229 at Newegg.com) are the closest matches in terms of pricing, though both cards have some interesting game bundle options as well.

7870.jpg

AMD's Radeon HD 7870 GHz Edition

Our second set of results will only look at single GPU performance numbers for lower cost graphics cards like the AMD Radeon HD 7850 and Radeon HD 7790, and from NVIDIA the GeForce GTX 650 Ti and GTX 650 Ti BOOST.  We didn't include multi-GPU results on these cards simply due to internal time constraints and because we are eager to move on to further Frame Rating testing and input testing. 

gtx660.jpg

NVIDIA's GeForce GTX 660


If you are just joining this article series today, you have missed a lot!  If nothing else you should read our initial full release article that details everything about the Frame Rating methodology and why we are making this change to begin with.  In short, we are moving away from using FRAPS for average frame rates. We are using a secondary hardware capture system to record each frame of game play as the monitor would receive them. That recorded video is then analyzed to measure real world performance.

Because FRAPS measures frame times at a different point in the game pipeline (closer to the game engine), its results can vary dramatically from what is presented to the end user on their display.  Frame Rating solves that problem by recording video through a dual-link DVI capture card that emulates a monitor to the test system.  By applying a unique overlay color to each frame the game produces, we can gather a new kind of information that tells a very unique story.
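To give a rough sense of the analysis step (this is only a sketch in Python; the names, the band threshold and the single-column simplification are my own assumptions, not PC Perspective's actual extraction tool), the captured video can be scanned for those overlay colors, and any frame whose colored band covers only a sliver of the screen gets flagged as a runt.

```python
RUNT_THRESHOLD = 20   # scanlines; an assumed cutoff, not PC Perspective's value

def analyze_captured_refresh(overlay_column):
    """overlay_column: per-scanline overlay colors, top to bottom, sampled from
    the colored bar in one captured refresh of the display."""
    bands = []
    for color in overlay_column:
        if bands and bands[-1][0] == color:
            bands[-1][1] += 1                     # still inside the same frame
        else:
            bands.append([color, 1])              # a new frame starts here
    # Each band of one color is one game frame; tiny bands are runt frames.
    return [(color, height, height < RUNT_THRESHOLD) for color, height in bands]

# Example: a 1080-line refresh where the middle frame is a runt.
column = ["lime"] * 530 + ["aqua"] * 8 + ["red"] * 542
for color, height, is_runt in analyze_captured_refresh(column):
    print(color, height, "RUNT" if is_runt else "")
```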

card1.jpg

The capture card that makes all of this work possible.

I don't want to spend too much time on this part of the story here as I already wrote a solid 16,000 words on the topic in our first article and I think you'll really find the results fascinating.  So, please check out my first article on the topic if you have any questions before diving into these results today!

Test System Setup
CPU: Intel Core i7-3960X Sandy Bridge-E
Motherboard: ASUS P9X79 Deluxe
Memory: Corsair Dominator DDR3-1600 16GB
Hard Drive: OCZ Agility 4 256GB SSD
Sound Card: On-board
Graphics Cards: NVIDIA GeForce GTX 660 2GB, AMD Radeon HD 7870 2GB, NVIDIA GeForce GTX 650 Ti 1GB, NVIDIA GeForce GTX 650 Ti BOOST 2GB, AMD Radeon HD 7850 2GB, AMD Radeon HD 7790 1GB
Graphics Drivers: AMD 13.2 beta 7, NVIDIA 314.07 beta
Power Supply: Corsair AX1200i
Operating System: Windows 8 Pro x64

On to the results! 

Continue reading our review of the GTX 660 and HD 7870 using Frame Rating!!