Author:
Manufacturer: NVIDIA

The GM204 Architecture

James Clerk Maxwell's equations are the foundation of our society's knowledge about optics and electrical circuits. It is a fitting tribute for NVIDIA to use Maxwell as the code name for a GPU architecture, and NVIDIA hopes the features, performance, and efficiency it has built into the GM204 GPU would impress Maxwell himself. Without giving away the surprise conclusion here in the lead, I can tell you that I have never seen a GPU perform as well as this one has in our testing this week, all while changing the power efficiency discussion in dramatic fashion.


To be fair though, this isn't our first experience with the Maxwell architecture. With the release of the GeForce GTX 750 Ti and its GM107 GPU, NVIDIA put the industry on watch and let us all ponder whether they could possibly bring such a design to a high end, enthusiast class market. The GTX 750 Ti brought a significantly lower power design to a market that desperately needed it, and we were even able to showcase that capability with some off-the-shelf PC upgrades, no external power connection required.

That was GM107 though; today's release is the GM204, indicating that not only are we seeing the larger cousin of the GTX 750 Ti but also at least some moderate GPU architecture and feature changes from the first run of Maxwell. The GeForce GTX 980 and GTX 970 are going to be taking on the best of the best from the GeForce lineup as well as the AMD Radeon family of cards, with aggressive pricing and performance levels to match. And those that understand the technology at a fundamental level will likely be surprised by how little power it takes to achieve these goals. Toss in support for things like a new AA method, Dynamic Super Resolution, and even improved SLI performance, and you can see why doing it all on the same process technology is impressive.

The NVIDIA Maxwell GM204 Architecture

The NVIDIA Maxwell GM204 graphics processor was built from the ground up with an emphasis on power efficiency. As was stated many times during the technical sessions we attended last week, the architecture team learned quite a bit while developing the Kepler-based Tegra K1 SoC, and much of that filtered its way into the larger, much more powerful product you see today. This product is fast and efficient, and it was all done on the same TSMC 28nm process technology used for the Kepler-based GTX 680 and even AMD's Radeon R9 series of products.

The GeForce GTX 980 (GM204) block diagram

The fundamental structure of GM204 is set up like the GM107 product shipped as the GTX 750 Ti. There is an array of GPCs (Graphics Processing Clusters), each comprised of multiple SMs (Streaming Multiprocessors, also called SMMs for this Maxwell derivative), paired with external memory controllers. The GM204 chip (the full implementation of which is found on the GTX 980) consists of four GPCs, 16 SMMs, and four 64-bit memory controllers.
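
To put those building blocks in perspective, here is a minimal sketch of the arithmetic. The 128 CUDA cores per SMM is Maxwell's published per-SMM figure and is an assumption on my part, since it isn't stated in the paragraph above:

```python
# Rough GM204 resource tally, assuming 128 CUDA cores per SMM (Maxwell's
# published figure) and the GPC/SMM/memory-controller counts quoted above.
GPCS = 4
SMMS_PER_GPC = 4             # 16 SMMs total across 4 GPCs
CORES_PER_SMM = 128          # assumption: Maxwell SMM width
MC_WIDTH_BITS = 64
MC_COUNT = 4

smm_total = GPCS * SMMS_PER_GPC
cuda_cores = smm_total * CORES_PER_SMM
memory_bus = MC_COUNT * MC_WIDTH_BITS

print(f"SMMs: {smm_total}")             # 16
print(f"CUDA cores: {cuda_cores}")      # 2048 on the full GM204 (GTX 980)
print(f"Memory bus: {memory_bus}-bit")  # 256-bit
```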

Continue reading our review of the GeForce GTX 980 and GTX 970 GM204 Graphics Cards!!

Author:
Manufacturer: NVIDIA

A few days with some magic monitors

Last month, friend of the site and technology enthusiast Tom Petersen, who apparently does SOMETHING at NVIDIA, stopped by our offices to talk about G-Sync technology. A variable refresh rate feature enabled by custom NVIDIA hardware inside new monitors, G-Sync has been discussed frequently here on PC Perspective.

The first monitor to ship with G-Sync is the ASUS ROG Swift PG278Q - a fantastic 2560x1440 27-in monitor with a 144 Hz maximum refresh rate. I recently wrote a glowing review of the display, with the only real negative being its high price tag: $799. But when Tom stopped by to talk about the G-Sync retail release, he happened to leave a set of three of these new displays for us to mess with in a G-Sync Surround configuration. Yummy.

So what exactly is the current experience of using a triple G-Sync monitor setup if you were lucky enough to pick up a set? The truth is that the G-Sync portion of the equation works great but that game support for Surround (or Eyefinity for that matter) is still somewhat cumbersome. 


In this quick impressions article I'll walk through the setup and configuration of the system and tell you about my time playing seven different PC titles in G-Sync Surround.

Continue reading our editorial on using triple ASUS ROG Swift monitors in G-Sync Surround!!

Author:
Manufacturer: AMD

Tonga GPU Features

On December 22, 2011, AMD launched the first 28nm GPU based on an architecture called GCN, built on silicon code-named Tahiti. That was the release of the Radeon HD 7970 and it was the beginning of an incredibly long adventure for PC enthusiasts and gamers. We eventually saw the HD 7970 GHz Edition and the R9 280/280X releases, all based on essentially identical silicon, keeping a spot in the market for nearly 3 years. Today AMD is launching the Tonga GPU and the Radeon R9 285, a new piece of silicon that shares many traits with Tahiti but adds support for some additional features.

Replacing the Radeon R9 280 in the current product stack, the R9 285 will step in at $249, essentially the same price. Buyers will be treated to an updated feature set, though, including options that were previously available only on the R9 290 and R9 290X (and R7 260X). These include TrueAudio, FreeSync, XDMA CrossFire and PowerTune.


Many people have been calling this architecture GCN 1.1, though AMD internally doesn't have a moniker for it. The move from Tahiti to Hawaii, and now to Tonga, reveals a new design philosophy from AMD, one of smaller and more gradual steps forward as opposed to sudden, massive improvements in specifications. Whether this change was self-imposed or a result of the slowing of process technology advancement is really a matter of opinion.

Continue reading our review of the AMD Radeon R9 285 Tonga GPU!!

Author:
Manufacturer: ASUS

The Waiting Game

NVIDIA G-Sync was announced at a media event held in Montreal way back in October, and promised to revolutionize the way the display and graphics card worked together to present images on the screen. It was designed to remove hitching, stutter, and tearing -- almost completely. Since that fateful day in October of 2013, we have been waiting. Patiently waiting. We were waiting for NVIDIA and its partners to actually release a monitor that utilizes the technology and that can, you know, be purchased.

In December of 2013 we took a look at the ASUS VG248QE monitor, the display for which NVIDIA released a mod kit to allow users that already owned this monitor to upgrade it to G-Sync compatibility. It worked, and I even came away impressed. I noted in my conclusion that, “there isn't a single doubt that I want a G-Sync monitor on my desk” and, “my short time with the NVIDIA G-Sync prototype display has been truly impressive…”. That was nearly 7 months ago, and I don’t think anyone at that time really believed it would be THIS LONG before the real monitors began to show up in the hands of gamers around the world.


Since NVIDIA’s October announcement, AMD has been on a marketing path with a technology they call “FreeSync” that claims to be a cheaper, standards-based alternative to NVIDIA G-Sync. They first previewed the idea of FreeSync on a notebook device during CES in January and then showed off a prototype monitor in June during Computex. Even more recently, AMD has posted a public FAQ that gives more details on the FreeSync technology and how it differs from NVIDIA’s creation; it has raised something of a stir with its claims on performance and cost advantages.

That doesn’t change the product that we are reviewing today of course. The ASUS ROG Swift PG278Q 27-in WQHD display with a 144 Hz refresh rate is truly an awesome monitor. What did change is the landscape, from NVIDIA's original announcement until now.

Continue reading our review of the ASUS ROG Swift PG278Q 2560x1440 G-Sync Monitor!!

Author:
Manufacturer: ASUS

Experience with Silent Design

In the time between major GPU releases, companies like ASUS have the ability to really dig down and engineer truly unique products. With the expanded gaps between launches from either NVIDIA or AMD, these products have continued evolving to offer better features and experiences than any graphics card before them. The ASUS Strix GTX 780 is exactly one of those solutions – taking a GTX 780 GPU that was originally released in May of last year and twisting it into a new design that offers better cooling, better power delivery and lower noise levels.

ASUS intended, with the Strix GTX 780, to create a card that is perfect for high end PC gamers without crossing into the realm of bank-breaking prices. They chose to go with the GeForce GTX 780 GPU from NVIDIA at a significant price drop from the GTX 780 Ti, with only a modest performance drop. They doubled the reference memory capacity from 3GB to 6GB of GDDR5 to assuage any buyer’s concern that 3GB isn’t enough for multi-screen Surround gaming or 4K gaming. And they changed the cooling solution to offer a near silent operation mode when used in “low impact” gaming titles.

The ASUS Strix GTX 780 Graphics Card

The ASUS Strix GTX 780 card is a pretty large beast, both in physical size and in performance. The cooler is a slightly modified version of the very popular DirectCU II thermal design used on many of ASUS' custom built graphics cards. It has a heat dissipation area more than twice that of the reference NVIDIA cooler and uses larger fans that can spin slower (and quieter) while delivering the improved cooling capacity.


Out of the box, the ASUS Strix GTX 780 will run at an 889 MHz base clock and 941 MHz Boost clock, a fairly modest increase over the 863/900 MHz rates of the reference card. Obviously with much better cooling and a lot of work done on the PCB of this custom design, users will have a lot of headroom to overclock on their own, but I continue to implore companies like ASUS and MSI to up the ante out of the box! One area where ASUS does impress is the memory – the Strix card features a full 6GB of GDDR5 running at 6.0 GHz, twice the capacity of the reference GTX 780 (and even GTX 780 Ti) cards. If you had any concerns about Surround or 4K gaming, know that memory capacity will not be a problem. (Though raw compute power may still be.)
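
For reference, the bandwidth that 6.0 GHz effective memory clock works out to is a simple calculation. The 384-bit bus width below is the GTX 780's published spec rather than something stated above, so treat it as an assumption in this sketch:

```python
# Memory bandwidth from effective data rate and bus width.
# Assumption: 384-bit bus (GTX 780's published spec, not stated above).
effective_rate_gbps = 6.0     # GHz effective (GDDR5 data rate per pin)
bus_width_bits = 384

bandwidth_gb_s = effective_rate_gbps * bus_width_bits / 8
print(f"{bandwidth_gb_s:.0f} GB/s")   # 288 GB/s
```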

Continue reading our review of the ASUS Strix GTX 780 6GB Graphics Card!!

Manufacturer: Intel

When Magma Freezes Over...

Intel has confirmed that it approached AMD about access to the Mantle API. The discussion, despite being clearly labeled as "an experiment" by an Intel spokesperson, was initiated by Intel -- not AMD. According to AMD's Gaming Scientist, Richard Huddy, via PCWorld, AMD's response was, "Give us a month or two" and "we'll go into the 1.0 phase sometime this year", which only has about five months left in it. When the API reaches 1.0, anyone who wants to participate (including hardware vendors) will be granted access.


AMD inside Intel Inside???

I do wonder why Intel would care, though. Intel has the fastest per-thread processors, and its GPUs are not known to be workhorses held back by API call bottlenecks, either. Of course, that is not to say that I cannot see any reason at all, however...

Read on to see why I think Intel might be interested and what this means for the industry.

Author:
Manufacturer: MSI

The Radeon R9 280

Though not really new, the AMD Radeon R9 280 GPU is a part that we really haven't spent time with at PC Perspective. Based on the same Tahiti GPU found in the R9 280X, the HD 7970, the HD 7950 and others, the R9 280 fits at a price point and performance level that I think many gamers will see as enticing. MSI sent along a model that includes some overclocked settings and an updated cooler, allowing the GPU to run at its top speed without much noise.

With a starting price of just $229 or so, the MSI Radeon R9 280 Gaming graphics card has some interesting competition as well. From the AMD side it butts heads with the R9 280X and the R9 270X. The R9 280X costs $60-70 more though, and as you'll see in our benchmarks, the R9 280 will likely cannibalize some of those sales. From NVIDIA, the GeForce GTX 760 is priced right at $229 as well, but does it really have the horsepower to keep up with Tahiti?


Continue reading our review of the MSI Radeon R9 280 3GB Gaming Graphics Card!!

Subject: Systems
Manufacturer: Various

The Road to 1080p


The stars of the show: a group of affordable GPU options

When preparing to build or upgrade a PC on any kind of a budget, how can you make sure you're extracting the highest performance per dollar from the parts you choose? Even if you do your homework, comparing every combination of components is impossible. As system builders we always end up having to look at various benchmarks here and there and then ultimately make assumptions. It's the nature of choosing products within an industry that's completely congested at every price point.

Another problem is that lower-priced graphics cards are usually benchmarked on high-end test platforms with Core i7 processors - which is necessary to eliminate CPU bottlenecks from the mix when testing GPUs. So it seems like it might be valuable (and might help narrow buying choices down) if we could take a closer look at gaming performance from complete systems built with only budget parts, and see what these different combinations are capable of.

With this in mind I set out to see just how much it might take to reach acceptable gaming performance at 1080p (acceptable being 30 FPS+). I wanted to see where the real-world gaming bottlenecks might occur, and get a feel for the relationship between CPU and GPU performance. After all, if there was no difference in gaming performance between, say, a $40 and an $80 processor, why spend twice as much money? The same goes for graphics. We’re looking for “good enough” here, not “future-proof”.


The components in all their shiny boxy-ness (not everything made the final cut)

If money were no object we’d all have the most amazing high-end parts, and play every game at ultra settings with hundreds of frames per second (well, except at 4K). Of course most of us have limits, but the time and skill required to assemble a system with as little cash as possible can result in something that's actually a lot more rewarding (and impressive) than just throwing a bunch of money at top-shelf components.

The theme of this article is "good enough" - as in, don't spend more than you have to. I don't want that to sound like a bad thing. And if along the way you discover a bargain, or a part that overperforms for its price, even better!

Yet Another AM1 Story?

We’ve been talking about the AMD AM1 platform since its introduction, and it makes a compelling case for a low cost gaming PC. With the “high-end” CPU in the lineup (the Athlon 5350) at just $60 and motherboards in the $35 range, it makes sense to start here. (I actually began this project with the Sempron 3820 as well, but it just wasn’t enough for 1080p gaming by a long shot, so those test results were quickly discarded.) But while the 5350 is an APU, I didn't end up testing it without a dedicated GPU. (OK, I eventually did, but its integrated graphics just can't handle 1080p.)

But this isn’t just a story about AM1 after all. Jumping right in, let's look at the result of my research (and mounting credit card debt). All prices were accurate as I wrote this, but are naturally prone to fluctuation:

Tested Hardware
Graphics Cards

MSI AMD Radeon R7 250 2GB OC - $79.99

XFX AMD Radeon R7 260X - $109.99

EVGA NVIDIA GeForce GTX 750 - $109.99

EVGA NVIDIA GeForce GTX 750 Ti SC - $153.99

Processors

AMD Athlon 5350 2.05 GHz Quad-Core APU - $59.99

AMD Athlon X2 340X 3.2 GHz Dual-Core CPU - $44.99

AMD Athlon X4 760K 3.8 GHz Quad-Core CPU - $84.99

Intel Pentium G3220 3.0 GHz Dual-Core CPU - $56.99

Motherboards

ASRock AM1B-ITX Mini-ITX AMD AM1 - $39.99

MSI A88XM-E45 Micro-ATX AMD A88X - $72.99

ECS H81H3-M4 Micro-ATX Intel H81 - $47.99

Memory

4GB Samsung OEM PC3-12800 DDR3-1600 (~$40 value)

Storage

Western Digital Blue 1TB Hard Drive - $59.99

Power Supply

EVGA 430 Watt 80 PLUS PSU - $39.99

OS

Windows 8.1 64-bit - $99

So there it is. I'm sure it won't please everyone, but there is enough variety in this list to support no fewer than 16 different combinations, and you'd better believe I ran each test on every one of those 16 system builds!
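
For anyone curious how those 16 builds pencil out before the benchmarks, here is a quick sketch that pairs each CPU with its matching motherboard from the list above and totals the cost with each GPU plus the shared parts. The CPU-to-motherboard pairings are my assumption based on socket compatibility, not something spelled out in the parts list:

```python
# Hypothetical cost tally for the 16 CPU+GPU combinations, using the prices
# listed above. CPU-to-motherboard pairings are assumed from socket type.
from itertools import product

gpus = {
    "R7 250 2GB OC": 79.99,
    "R7 260X": 109.99,
    "GTX 750": 109.99,
    "GTX 750 Ti SC": 153.99,
}

# (CPU price, matching motherboard price) -- pairing is an assumption
cpus = {
    "Athlon 5350":    (59.99, 39.99),   # AM1 -> ASRock AM1B-ITX
    "Athlon X2 340X": (44.99, 72.99),   # FM2 -> MSI A88XM-E45
    "Athlon X4 760K": (84.99, 72.99),   # FM2 -> MSI A88XM-E45
    "Pentium G3220":  (56.99, 47.99),   # LGA1150 -> ECS H81H3-M4
}

shared = 40.00 + 59.99 + 39.99 + 99.00  # RAM + HDD + PSU + Windows 8.1

for (cpu, (cpu_cost, board_cost)), (gpu, gpu_cost) in product(cpus.items(), gpus.items()):
    total = cpu_cost + board_cost + gpu_cost + shared
    print(f"{cpu:15s} + {gpu:14s} = ${total:,.2f}")
```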

Keep reading our look at budget gaming builds for 1080p!!

Author:
Manufacturer: NVIDIA

A powerful architecture

In March of this year, NVIDIA announced the GeForce GTX Titan Z at its GPU Technology Conference. It was touted as the world's fastest graphics card with its pair of full GK110 GPUs but it came with an equally stunning price of $2999. NVIDIA claimed it would be available by the end of April for gamers and CUDA developers to purchase but it was pushed back slightly and released at the very end of May, going on sale for the promised price of $2999.

The specifications of the GTX Titan Z are damned impressive - 5,760 CUDA cores, 12GB of total graphics memory, and 8.1 TFLOPS of peak compute performance. But something happened between the announcement and the product release that perhaps NVIDIA hadn't accounted for. AMD's Radeon R9 295X2, a dual-GPU card with full-speed Hawaii chips on board, was released at $1499. I think it's fair to say that AMD took some chances that NVIDIA was surprised to see them take, including going the route of a self-contained water cooler and blowing past the PCI Express recommended power limits to offer a ~500 watt graphics card. The R9 295X2 was damned fast and I think it caught NVIDIA a bit off-guard.
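
As a quick sanity check on that 8.1 TFLOPS figure, peak single-precision throughput is just CUDA cores x 2 FLOPs (a fused multiply-add) x clock speed. The ~705 MHz base clock below comes from NVIDIA's published Titan Z specifications rather than the text above, so treat it as an assumption:

```python
# Peak FP32 throughput estimate for the GTX Titan Z.
# Assumption: ~705 MHz base clock (NVIDIA's published spec, not stated above).
cuda_cores = 5760               # 2 x 2880 (two full GK110 GPUs)
flops_per_core_per_clock = 2    # one fused multiply-add counts as 2 FLOPs
base_clock_hz = 705e6

peak_tflops = cuda_cores * flops_per_core_per_clock * base_clock_hz / 1e12
print(f"Peak FP32: {peak_tflops:.2f} TFLOPS")  # ~8.1 TFLOPS, matching the quoted figure
```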

As a result, the GeForce GTX Titan Z release was a bit quieter than most of us expected. Yes, the Titan Black card was released without sampling the gaming media, but that card was nearly a mirror of the GeForce GTX 780 Ti, just with a larger frame buffer, and the performance of that GPU was well known. For NVIDIA to release a flagship dual-GPU graphics card, admittedly the most expensive one I have ever seen with the GeForce brand on it, and NOT send out samples, was telling.

NVIDIA is adamant though that the primary target of the Titan Z is not just gamers but the CUDA developer that needs the most performance possible in as small a space as possible. For that specific user, one that doesn't quite have the income to invest in a lot of Tesla hardware but wants to be able to develop and use CUDA applications with a significant amount of horsepower, the Titan Z fits the bill perfectly.

Still, the company was touting the Titan Z as "offering supercomputer class performance to enthusiast gamers" and telling gamers in launch videos that the Titan Z is the "fastest graphics card ever built" and that it was "built for gamers." So, interest piqued, we decided to review the GeForce GTX Titan Z.

The GeForce GTX TITAN Z Graphics Card

Cost and performance notwithstanding, the GeForce GTX Titan Z is an absolutely stunning looking graphics card. The industrial design that started with the GeForce GTX 690 (the last dual-GPU card NVIDIA released) and continued with the GTX 780 and Titan family lives on with the Titan Z.


The all metal finish looks good and stands up to abuse, keeping that PCB straight even with the heft of the heatsink. There is only a single fan on the Titan Z, center mounted, with a large heatsink covering both GPUs on opposite sides. The GeForce logo up top illuminates, as we have seen on all similar designs, which adds a nice touch.

Continue reading our review of the NVIDIA GeForce GTX Titan Z 12GB Graphics Card!!

Author:
Manufacturer: MSI

Lightning Returns

With the GPU landscape mostly settled for 2014, we have the ability to really dig in and evaluate the retail models that continue to pop up from NVIDIA and AMD board partners. One of our favorite series of graphics cards over the years comes from MSI in the form of the Lightning brand. These cards tend to take the engineering to a level other designers simply won't attempt - and we love it! Obviously the target of this capability is additional overclocking headroom and stability, but what if the target GPU already has issues scaling?

That is more or less the premise of the Radeon R9 290X Lightning from MSI. AMD's Radeon R9 290X Hawaii GPU is definitely a hot and power-hungry part, and that caused quite a few issues at its initial release. Since then though, both AMD and its add-in card partners have worked to improve the coolers installed on these cards to improve performance consistency and decrease the LOUD NOISES produced by the stock reference cooler.

Let's dive into the latest to hit our test bench, the MSI Radeon R9 290X Lightning.

The MSI Radeon R9 290X Lightning


MSI continues to utilize the yellow and black color scheme that many of the company's high end parts share, and I love the combination. I know that both NVIDIA and AMD disapprove of the distinct lack of "green" and "red" in the cooler and box designs, but good on MSI for sticking to its own thing.


The box for the Lightning card matches the prominence of the card itself, and you even get a nifty drawer for all of the included accessories.


We originally spotted the MSI R9 290X Lightning at CES in January and the design remains the same. The cooler is quite large (and damn heavy) and uses a set of three fans. The yellow fan in the center is smaller and spins a bit faster, creating more noise than I would prefer. All fan speeds can be adjusted with MSI's included fan control software.

Continue reading our review of the MSI Radeon R9 290X Lightning Graphics Card!!

Author:
Manufacturer: Various

The AMD Argument

Earlier this week, a story was posted on a Forbes.com blog that dove into the idea of NVIDIA GameWorks and how it was doing a disservice not just to the latest Ubisoft title, Watch_Dogs, but to PC gamers in general. Using quotes from AMD directly, the author claims that NVIDIA is actively engaging in methods to prevent game developers from optimizing games for AMD graphics hardware. This is an incredibly bold statement and one that I hope AMD is not making lightly. Here is a quote from the story:

Gameworks represents a clear and present threat to gamers by deliberately crippling performance on AMD products (40% of the market) to widen the margin in favor of NVIDIA products. . . . Participation in the Gameworks program often precludes the developer from accepting AMD suggestions that would improve performance directly in the game code—the most desirable form of optimization.

The example cited in the Forbes story is the recently released Watch_Dogs title, which appears to show favoritism towards NVIDIA GPUs, with performance of the GTX 770 ($369) coming close to the performance of a Radeon R9 290X ($549).

It's evident that Watch Dogs is optimized for Nvidia hardware but it's staggering just how un-optimized it is on AMD hardware.


Watch_Dogs is the latest GameWorks title released this week.

I decided to get in touch with AMD directly to see exactly what stance the company was attempting to take with these kinds of claims. No surprise, AMD was just as forward with me as they appeared to be in the Forbes story originally.

The AMD Stance

Central to AMD’s latest annoyance with the competition is the NVIDIA GameWorks program. First unveiled last October during a press event in Montreal, GameWorks combines several NVIDIA-built engine functions into libraries that can be utilized and accessed by game developers to build advanced features into games. NVIDIA’s website claims that GameWorks is “easy to integrate into games” while also including tutorials and tools to help quickly generate content with the software set. Included in the GameWorks suite are tools like VisualFX, which offers rendering solutions such as HBAO+, TXAA, Depth of Field, FaceWorks, HairWorks and more. Physics tools include the obvious, like PhysX, while also adding clothing, destruction, particles and more.

Continue reading our editorial on the verbal battle between AMD and NVIDIA about the GameWorks program!!

Author:
Manufacturer: AMD

You need a bit of power for this

PC gamers. We do some dumb shit sometimes. Those on the outside looking in, forced to play on static hardware with fixed image quality and low expandability, turn up their noses and question why we do the things we do. It’s not an unfair reaction; they just don’t know what they are missing out on.

For example, what if you decided to upgrade your graphics hardware to improve performance and allow you to crank up the image quality in your games to unheard of levels? Rather than using a graphics configuration with the performance found in a modern APU, you could decide to run not one but FOUR discrete GPUs in a single machine. You could water cool them for optimal temperature and sound levels. This allows you to power not 1920x1080 (or 900p), not 2560x1440, but 4K gaming – 3840x2160.


All for the low, low price of $3000. Well, crap, I guess those console gamers have a right to question the sanity of SOME enthusiasts.

After the release of AMD’s latest flagship graphics card, the Radeon R9 295X2 8GB dual-GPU beast, our minds immediately started to wander to what magic could happen (and what might go wrong) if you combined a pair of them in a single system. Sure, two Hawaii GPUs running in tandem produced the “fastest gaming graphics card you can buy”, but surely four GPUs would be even better.

The truth is though, that isn’t always the case. Multi-GPU is hard, just ask AMD or NVIDIA. The software and hardware demands placed on the driver team to coordinate data sharing, timing control, etc. are extremely high even when you are working with just two GPUs in series. Moving to three or four GPUs complicates the story even further and as a result it has been typical for us to note low performance scaling, increased frame time jitter and stutter and sometimes even complete incompatibility.


During our initial briefing covering the Radeon R9 295X2 with AMD there was a system photo that showed a pair of the cards inside a MAINGEAR box. As one of AMD’s biggest system builder partners, MAINGEAR and AMD were clearly insinuating that these configurations would be made available for those with the financial resources to pay for it. Even though we are talking about a very small subset of the PC gaming enthusiast base, these kinds of halo products are what bring PC gamers together to look and drool.

As it happens I was able to get a second R9 295X2 sample in our offices for a couple of quick days of testing.

Working with Kyle and Brent over at HardOCP, we decided to do some hardware sharing in order to give both outlets the ability to judge and measure Quad CrossFire independently. The results are impressive and awe inspiring.

Continue reading our review of the AMD Radeon R9 295X2 CrossFire at 4K!!

Author:
Manufacturer: Various

Competition is a Great Thing

While doing some testing with the AMD Athlon 5350 Kabini APU to determine its flexibility as a low cost gaming platform, we decided to run a handful of tests to measure something else that is getting a lot of attention right now: AMD Mantle and NVIDIA's 337.50 driver.

Earlier this week I posted a story that looked at performance scaling of NVIDIA's new 337.50 beta driver compared to the previous 335.23 WHQL. The goal was to assess the DX11 efficiency improvements that the company stated it had been working on and implemented into this latest beta driver offering. In the end, we found some instances where games scaled by as much as 35% and 26% but other cases where there was little to no gain with the new driver. We looked at both single GPU and multi-GPU scenarios on mostly high end CPU hardware though.

Earlier in April I posted an article looking at Mantle, AMD's own low-level API unique to its ecosystem, and how it scaled on various pieces of hardware in Battlefield 4. This was the first major game to implement Mantle and it remains the biggest name in the field. While we definitely saw some improvements in gaming experiences with Mantle, there was work to be done when it comes to multi-GPU scaling and frame pacing.

Both parties in this debate were showing promise but obviously both were far from perfect.


While we were benchmarking the new AMD Athlon 5350 Kabini-based APU, an incredibly low cost processor that Josh reviewed in April, it made sense to test out both Mantle and NVIDIA's 337.50 driver in an interesting side-by-side comparison.

Continue reading our story on the scaling performance of AMD Mantle and NVIDIA's 337.50 driver with Star Swarm!!

Author:
Manufacturer: NVIDIA

SLI Testing

Let's see if I can start this story without sounding too much like a broken record compared to the news post I wrote late last week on the subject of NVIDIA's new 337.50 driver. In March, while attending the Game Developers Conference to learn about the upcoming DirectX 12 API, I sat down with NVIDIA to talk about changes coming to its graphics driver that would affect current users with shipping DX9, DX10 and DX11 games.

As I wrote then:

What NVIDIA did want to focus on with us was the significant improvements that have been made on the efficiency and performance of DirectX 11.  When NVIDIA is questioned as to why they didn’t create their Mantle-like API if Microsoft was dragging its feet, they point to the vast improvements possible and made with existing APIs like DX11 and OpenGL. The idea is that rather than spend resources on creating a completely new API that needs to be integrated in a totally unique engine port (see Frostbite, CryEngine, etc.) NVIDIA has instead improved the performance, scaling, and predictability of DirectX 11.

NVIDIA claims that these fixes are not game specific and will improve performance and efficiency for a lot of GeForce users. Even if that is the case, we will only really see these improvements surface in titles that have addressable CPU limits or very low end hardware, similar to how Mantle works today.

In truth, this is something that both NVIDIA and AMD have likely been doing all along but NVIDIA has renewed purpose with the pressure that AMD's Mantle has placed on them, at least from a marketing and PR point of view. It turns out that the driver that starts to implement all of these efficiency changes is the recent 337.50 release and on Friday I wrote up a short story that tested a particularly good example of the performance changes, Total War: Rome II, with a promise to follow up this week with additional hardware and games. (As it turns out, results from Rome II are...an interesting story. More on that on the next page.)


Today I will be looking at a seemingly random collection of gaming titles, running on some reconfigured test beds we had in the office, in an attempt to get some idea of the overall robustness of the 337.50 driver and its advantages over the 335.23 release that came before it. Does NVIDIA have solid ground to stand on when it comes to the capabilities of current APIs over what AMD is offering today?

Continue reading our analysis of the new NVIDIA 337.50 Driver!!

Author:
Manufacturer: AMD

A Powerful Architecture

AMD likes to toot its own horn. Just take a look at the not-so-subtle marketing buildup to the Radeon R9 295X2 dual-Hawaii graphics card, released today. I had photos of me shipped to…me…overnight. My hotel room at GDC was also given a package which included a pair of small Pringles cans (chips) and a bottle of volcanic water. You may have also seen some photos posted of a mysterious briefcase with its side stickered with the silhouette of a Radeon add-in board.

This tooting is not without some validity though. The Radeon R9 295X2 is easily the fastest graphics card we have ever tested and that says a lot based on the last 24 months of hardware releases. It’s big, it comes with an integrated water cooler, and it requires some pretty damn specific power supply specifications. But AMD did not compromise on the R9 295X2 and, for that, I am sure that many enthusiasts will be elated. Get your wallets ready, though, this puppy will run you $1499.


Both AMD and NVIDIA have a history of producing high quality dual-GPU graphics cards late in the product life cycle. The most recent entry from AMD was the Radeon HD 7990, a pair of Tahiti GPUs on a single PCB with a triple fan cooler. While a solid performing card, the product was released at a time when AMD CrossFire technology was well behind the curve and, as a result, real-world performance suffered considerably. By the time the drivers and ecosystem were fixed, the HD 7990 was more or less on the way out. It was also notorious for some intermittent, but severe, overheating issues, documented by Tom’s Hardware in one of the most harshly titled articles I’ve ever read. (Hey, Game of Thrones started again this week!)

The Hawaii GPU, first revealed back in September and selling today under the guise of the R9 290X and R9 290 products, is even more power hungry than Tahiti. Many in the industry doubted that AMD would ever release a dual-GPU product based on Hawaii, as the power and thermal requirements would be just too high. AMD has worked around many of these issues with a custom water cooler and by placing specific power supply requirements on buyers, all without compromising on performance. This is the real McCoy.

Continue reading our review of the AMD Radeon R9 295X2 8GB Dual Hawaii Graphics Card!!

Author:
Manufacturer: AMD

BF4 Integrates FCAT Overlay Support

Back in September AMD publicly announced Mantle, a new lower level API meant to offer more performance for gamers and more control for developers fed up with the restrictions of DirectX. Without diving too much into the politics of the release, the fact that Battlefield 4 developer DICE was integrating Mantle into the Frostbite engine for Battlefield was a huge proof point for the technology. Even though the release was a bit later than AMD had promised us, coming at the end of January 2014, one of the biggest PC games on the market today had integrated a proprietary AMD API.

When I did my first performance preview of BF4 with Mantle on February 1st, the results were mixed, but we had other issues to deal with. First and foremost, our primary graphics testing methodology, called Frame Rating, couldn't be integrated due to the change of API. Instead we were forced to use an in-game frame rate counter built by DICE, which worked fine but didn't give us the fine-grained data we really wanted to put the platform to the test. It worked, but we wanted more. Today we are happy to announce we have full support for our Frame Rating and FCAT testing with BF4 running under Mantle.

A History of Frame Rating

In late 2012 and throughout 2013, testing graphics cards became a much more complicated beast. Terms like frame pacing, stutter, jitter and runts were not in the vocabulary of most enthusiasts but became an important part of the story just about one year ago. Though complicated to fully explain, the basics are pretty simple.

Rather than using software on the machine being tested to measure performance, our Frame Rating system uses a combination of local software and external capture hardware. On the local system with the hardware being evaluated, we run a small piece of software called an overlay that draws small colored bars on the left hand side of the game screen, changing color successively with each frame rendered by the game. Using a secondary system, we capture the output from the graphics card directly, intercepting it from the display output in real-time in an uncompressed form. With that video file captured, we then analyze it frame by frame, measuring the length of each of those colored bars, how long each is on screen, and how consistently they are displayed. This allows us to find the average frame rate, but also how smoothly the frames are presented, whether there are dropped frames, and whether there are jitter or stutter issues.
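
To make that process a bit more concrete, here is a highly simplified sketch of the kind of analysis the capture step enables. This is not PC Perspective's actual FCAT tooling; it only shows the core idea of reading the overlay color on each scanline of a captured frame and turning color-band lengths into per-frame statistics. The runt threshold and the synthetic color data are assumptions for illustration:

```python
# Toy version of overlay-bar analysis: each captured 60 Hz frame is reduced to
# a list of overlay color IDs, one per scanline. Contiguous bands of a single
# color correspond to individual rendered frames; very short bands are "runts".
RUNT_THRESHOLD = 20  # assumption: minimum scanlines for a frame to "count"

def analyze_capture(captured_frames):
    """captured_frames: list of lists of per-scanline overlay color IDs."""
    bands = []  # [color, scanline_count] across the whole capture
    for scanlines in captured_frames:
        for color in scanlines:
            if bands and bands[-1][0] == color:
                bands[-1][1] += 1          # same rendered frame continues
            else:
                bands.append([color, 1])   # a new rendered frame appears

    runts = [b for b in bands if b[1] < RUNT_THRESHOLD]
    # Each captured frame spans 1/60 s; a rendered frame's share of scanlines
    # approximates how long it was actually visible on screen.
    lines_per_capture = len(captured_frames[0])
    frame_times_ms = [b[1] / lines_per_capture * (1000 / 60) for b in bands]
    return bands, runts, frame_times_ms

# Synthetic example: two captured frames, 1080 scanlines each, with one runt.
frame_a = [0] * 600 + [1] * 10 + [2] * 470   # color 1 only covers 10 lines
frame_b = [2] * 500 + [3] * 580
bands, runts, times = analyze_capture([frame_a, frame_b])
print(len(bands), "rendered frames,", len(runts), "runt(s)")
print(["%.2f ms" % t for t in times])
```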


Continue reading our first look at Frame Rating / FCAT Testing with Mantle in Battlefield 4!!

Manufacturer: ASUS

Introduction and Technical Specifications

Introduction


Courtesy of ASUS

The ASUS ROG Poseidon GTX 780 video card is the latest incarnation of the Republic of Gamers (ROG) Poseidon series. Like the previous Poseidon series products, the Poseidon GTX 780 features a hybrid cooler capable of air and liquid-based cooling for the GPU and on-board components. The ASUS ROG Poseidon GTX 780 graphics card comes with an MSRP of $599, a premium price for a premium card.


Courtesy of ASUS

In designing the Poseidon GTX 780 graphics card, ASUS packed in many of the premium components you would normally find as add-ons. Additionally, the card features motherboard-quality power components, including a 10-phase digital power regulation system using ASUS DIGI+ VRM technology coupled with Japanese black metallic capacitors. The Poseidon GTX 780 has the following features integrated into its design: a DisplayPort output, an HDMI output, dual DVI ports (one DVI-D and one DVI-I), an aluminum backplate, integrated G 1/4" threaded liquid ports, dual 90mm cooling fans, 6-pin and 8-pin PCIe-style power connectors, and integrated power connector LEDs and an ROG logo LED.

Continue reading our review of the ASUS ROG Poseidon GTX 780 graphics card!

Author:
Manufacturer: NVIDIA

DX11 could rival Mantle

The big story at GDC last week was Microsoft’s reveal of DirectX 12 and the future of the dominant API for PC gaming. There was plenty of build-up to the announcement, with Microsoft’s DirectX team posting teasers and starting up a Twitter account for the occasion. I hosted a live blog from the event which included pictures of the slides. It was our most successful event of this type, with literally thousands of people joining in the conversation. Along with the debates over the similarities of AMD’s Mantle API and the timeline for the DX12 release, there are plenty of stories to be told.

After the initial session, I wanted to set up meetings with both AMD and NVIDIA to discuss what had been shown and get some feedback on the planned direction for the GPU giants’ implementations. NVIDIA presented us with a very interesting set of data that focused both on the future with DX12 and on the present of DirectX 11.


The reason for the topic is easy to decipher – AMD has built up the image of Mantle as the future of PC gaming and, with a full 18 months before Microsoft’s DirectX 12 is released, how developers and gamers respond will have an important impact on the market. NVIDIA doesn’t like to talk about Mantle directly, but it’s obvious that it feels the need to address the questions in a roundabout fashion. During our time with NVIDIA’s Tony Tamasi at GDC, the discussion centered as much on OpenGL and DirectX 11 as anything else.

What are APIs and why do you care?

For those that might not really understand what DirectX and OpenGL are, a bit of background first. APIs (application programming interfaces) are responsible for providing an abstraction layer between hardware and software applications. An API can deliver consistent programming models (though the language can vary) and do so across various hardware vendors’ products and even between hardware generations. They can provide access to hardware feature sets with a wide range in complexity, while allowing developers access to the hardware without necessarily knowing it in great detail.

Over the years, APIs have developed and evolved but still retain backwards compatibility.  Companies like NVIDIA and AMD can improve DirectX implementations to increase performance or efficiency without adversely (usually at least) affecting other games or applications.  And because the games use that same API for programming, changes to how NVIDIA/AMD handle the API integration don’t require game developer intervention.

With the release of AMD Mantle, the idea of a “low level” API has been placed in the minds of gamers and developers. The term “low level” can mean many things, but in general it is associated with an API that is more direct, has a thinner set of abstraction layers, and requires less translation from code to hardware. The goal is to reduce the amount of overhead (performance hit) that APIs naturally impose with these translations. With that overhead reduced, the CPU cycles can be used by the program (game) or be left idle to improve battery life. In certain cases, GPU throughput can increase where API overhead is impeding the video card's progress.
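
As a loose analogy (not real graphics API code), the toy sketch below shows how per-call bookkeeping in a thick abstraction layer adds up across tens of thousands of draw calls per frame, and why recording commands with minimal per-call work and validating once per batch frees CPU time. All of the class and function names here are made up for illustration:

```python
# Toy illustration of API call overhead -- not a real graphics API.
# A "thick" layer re-validates state on every draw call; a "thin" layer just
# records commands and validates once per batch submission.
import time

class ThickAPI:
    def draw(self, state):
        # Per-call validation/translation work stands in for driver overhead.
        _ = sum(hash((k, v)) & 0xFF for k, v in state.items())

class ThinAPI:
    def __init__(self):
        self.commands = []
    def draw(self, state):
        self.commands.append(state)      # minimal per-call work
    def submit(self):
        validated = len(self.commands)   # one pass over the whole batch
        self.commands.clear()
        return validated

state = {"shader": 3, "texture": 12, "blend": 1, "depth": 0}
N = 50_000  # draw calls per "frame", exaggerated for effect

start = time.perf_counter()
thick = ThickAPI()
for _ in range(N):
    thick.draw(state)
thick_ms = (time.perf_counter() - start) * 1000

start = time.perf_counter()
thin = ThinAPI()
for _ in range(N):
    thin.draw(state)
thin.submit()
thin_ms = (time.perf_counter() - start) * 1000

print(f"thick layer: {thick_ms:.1f} ms, thin layer: {thin_ms:.1f} ms per 'frame'")
```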

Passing additional control to the game developers, away from the API or GPU driver developers, gives those coders additional power and improves the ability for some vendors to differentiate. Interestingly, not all developers want this kind of control as it requires more time, more development work, and small teams that depend on that abstraction to make coding easier will only see limited performance advantages.

This transition to lower level APIs is being driven by the widening performance gap between CPUs and GPUs. NVIDIA provided the images below.

Charts provided by NVIDIA: GPU vs. CPU performance scaling in GFLOPS (left) and memory bandwidth (right)

On the left we see performance scaling in terms of GFLOPS, and on the right the metric is memory bandwidth. Clearly the performance of NVIDIA's graphics chips (as with AMD's) has far outpaced what the best Intel desktop processors have been able to deliver, and that gap means the industry needs to innovate to find ways to close it.

Continue reading NVIDIA Talks DX12, DX11 Efficiency Improvements!!!

Author:
Manufacturer: Various

EVGA GTX 750 Ti ACX FTW

The NVIDIA GeForce GTX 750 Ti has been getting a lot of attention around the hardware circuit recently, and for good reason. It remains interesting from a technology standpoint as it is the first, and still the only, Maxwell-based GPU available for desktop users. It's a completely new architecture, built with power efficiency (and Tegra) in mind. With it, the GTX 750 Ti was able to push a lot of performance into a very small power envelope while still maintaining some very high clock speeds.


NVIDIA’s flagship mainstream part is also still the leader when it comes to performance per dollar in this segment (at least for as long as it takes AMD’s Radeon R7 265 to become widely available). There have been a few cases where we have noticed the long-standing shortages and price hikes from coin mining dwindling, which is great news for gamers but may also be bad news for NVIDIA’s GPUs in some areas. Though, even if the R7 265 becomes available, the GTX 750 Ti remains the best card you can buy that doesn’t require a power connection, which puts it in a unique position for power-limited upgrades.

After our initial review of the reference card, and then an interesting look at how the card can be used to upgrade an older or underpowered PC, it is time to take a quick look at a set of three different retail cards that have made their way into the PC Perspective offices.

On the chopping block today we’ll look at the EVGA GeForce GTX 750 Ti ACX FTW, the Galaxy GTX 750 Ti GC and the PNY GTX 750 Ti XLR8 OC.  All of them are non-reference, all of them are overclocked, but you’ll likely be surprised how they stack up.

Continue reading our round up of EVGA, Galaxy and PNY GTX 750 Ti Graphics Cards!!

Author:
Manufacturer: NZXT

Installation

When the Radeon R9 290 and R9 290X first launched last year, they were plagued by issues of overheating and variable clock speeds. We looked at the situation several times over the course of a couple of months, and AMD tried to address the problem with newer drivers. These drivers did help stabilize clock speeds (and thus performance) of the reference-built R9 290 and R9 290X cards, but caused noise levels to increase as well.

The real solution was the release of custom cooled versions of the R9 290 and R9 290X from AMD partners like ASUS, MSI and others. The ASUS R9 290X DirectCU II model, for example, ran cooler, quieter, and more consistently than any of the numerous reference models we had our hands on.

But what about all those buyers that are still purchasing, or have already purchased, reference-style R9 290 and 290X cards? Replacing the cooler on the card is the best choice, and thanks to our friends at NZXT we have a unique solution that combines a standard self-contained water cooler meant for CPUs with a custom-built GPU bracket.


Our quick test will utilize one of the reference R9 290 cards AMD sent along at launch and two specific NZXT products. The Kraken X40 is a standard self-contained CPU water cooler that sells for $100 on Amazon.com. For our purposes though, we are going to team it up with the Kraken G10, a $30 GPU-specific bracket that allows you to use the X40 (and other water coolers) on the Radeon R9 290.


Inside the box of the G10 you'll find an 80mm fan, a back plate, the bracket to attach the cooler to the GPU, and all necessary installation hardware. The G10 supports a wide range of GPUs, though it is targeted at the reference designs of each:

NVIDIA: GTX 780 Ti, 780, 770, 760, Titan, 680, 670, 660 Ti, 660, 580, 570, 560 Ti, 560, 560 SE
AMD: R9 290X, 290, 280X*, 280*, 270X, 270, HD 7970*, 7950*, 7870, 7850, 6970, 6950, 6870, 6850, 6790, 6770, 5870, 5850, 5830

That is pretty impressive, but NZXT cautions that custom-designed boards may interfere with the bracket.


The installation process begins with removing the original cooler, which in this case just means a lot of small screws. Be careful when removing the screws on the actual heatsink retention bracket, and alternate between screws to take it off evenly.

Continue reading about how the NZXT Kraken G10 can improve the cooling of the Radeon R9 290 and R9 290X!!