Author:
Manufacturer: NVIDIA

SLI Testing

Let's see if I can start this story without sounding too much like a broken record when compared to the news post I wrote late last week on the subject of NVIDIA's new 337.50 driver. In March, while attending the Game Developers Conference to learn about the upcoming DirectX 12 API, I sat down with NVIDIA to talk about changes coming to its graphics driver that would affect current users of shipping DX9, DX10 and DX11 games. 

As I wrote then:

What NVIDIA did want to focus on with us was the significant improvements that have been made on the efficiency and performance of DirectX 11.  When NVIDIA is questioned as to why they didn’t create their own Mantle-like API if Microsoft was dragging its feet, they point to the vast improvements that are possible, and have already been made, with existing APIs like DX11 and OpenGL. The idea is that rather than spend resources on creating a completely new API that needs to be integrated into a totally unique engine port (see Frostbite, CryEngine, etc.), NVIDIA has instead improved the performance, scaling, and predictability of DirectX 11.

NVIDIA claims that these fixes are not game specific and will improve performance and efficiency for a lot of GeForce users. Even if that is the case, we will only really see these improvements surface in titles that have addressable CPU limits or on very low-end hardware, similar to how Mantle works today.

In truth, this is something that both NVIDIA and AMD have likely been doing all along, but NVIDIA has renewed purpose with the pressure that AMD's Mantle has placed on them, at least from a marketing and PR point of view. It turns out that the driver that starts to implement all of these efficiency changes is the recent 337.50 release, and on Friday I wrote up a short story that tested a particularly good example of the performance changes, Total War: Rome II, with a promise to follow up this week with additional hardware and games. (As it turns out, results from Rome II are...an interesting story. More on that on the next page.)

slide1.jpg

Today I will be looking at a seemingly random collection of gaming titles, running on a reconfigured test bed we had in the office, in an attempt to get some idea of the overall robustness of the 337.50 driver and its advantages over the 335.23 release that came before it. Does NVIDIA have solid ground to stand on when it comes to the capabilities of current APIs over what AMD is offering today?

Continue reading our analysis of the new NVIDIA 337.50 Driver!!

Author:
Manufacturer: AMD

A Powerful Architecture

AMD likes to toot its own horn. Just take a look at the not-so-subtle marketing buildup to the Radeon R9 295X2 dual-Hawaii graphics card, released today. I had photos of me shipped to…me…overnight. My hotel room at GDC was also given a package which included a pair of small Pringles cans (chips) and a bottle of volcanic water. You may have also seen some photos posted of a mysterious briefcase with its side stickered with the silhouette of a Radeon add-in board.

This tooting is not without some validity though. The Radeon R9 295X2 is easily the fastest graphics card we have ever tested and that says a lot based on the last 24 months of hardware releases. It’s big, it comes with an integrated water cooler, and it requires some pretty damn specific power supply specifications. But AMD did not compromise on the R9 295X2 and, for that, I am sure that many enthusiasts will be elated. Get your wallets ready, though, this puppy will run you $1499.

01.jpg

Both AMD and NVIDIA have a history of producing high quality dual-GPU graphics cards late in the product life cycle. The most recent entry from AMD was the Radeon HD 7990, a pair of Tahiti GPUs on a single PCB with a triple fan cooler. While a solid performing card, the product was released in a time when AMD CrossFire technology was well behind the curve and, as a result, real-world performance suffered considerably. By the time the drivers and ecosystem were fixed, the HD 7990 was more or less on the way out. It was also notorious for some intermittent, but severe, overheating issues, documented by Tom’s Hardware in one of the most harshly titled articles I’ve ever read. (Hey, Game of Thrones started again this week!)

The Hawaii GPU, first revealed back in September and selling today under the guise of the R9 290X and R9 290 products, is even more power hungry than Tahiti. Many in the industry doubted that AMD would ever release a dual-GPU product based on Hawaii as the power and thermal requirements would be just too high. AMD has worked around many of these issues with a custom water cooler and by placing specific power supply requirements on buyers, all without compromising on performance. This is the real McCoy.

Continue reading our review of the AMD Radeon R9 295X2 8GB Dual Hawaii Graphics Card!!

Author:
Manufacturer: AMD

BF4 Integrates FCAT Overlay Support

Back in September AMD publicly announced Mantle, a new lower level API meant to offer more performance for gamers and more control for developers fed up with the restrictions of DirectX. Without diving too much into the politics of the release, the fact that Battlefield 4 developer DICE was integrating Mantle into the Frostbite engine for Battlefield 4 was a huge proof point for the technology. Even though the release was a bit later than AMD had promised us, coming at the end of January 2014, one of the biggest PC games on the market today had integrated a proprietary AMD API.

When I did my first performance preview of BF4 with Mantle on February 1st, the results were mixed but we had other issues to deal with. First and foremost, our primary graphics testing methodology, called Frame Rating, couldn't be used due to the change of API. Instead we were forced to use an in-game frame rate counter built by DICE, which worked fine but didn't give us the fine-grained data we really wanted to put the platform to the test. It worked, but we wanted more. Today we are happy to announce we have full support for our Frame Rating and FCAT testing with BF4 running under Mantle.

A History of Frame Rating

In late 2012 and throughout 2013, testing graphics cards became a much more complicated beast. Terms like frame pacing, stutter, jitter and runts were not in the vocabulary of most enthusiasts but became an important part of the story just about one year ago. Though complicated to fully explain, the basics are pretty simple.

Rather than using software on the machine being tested to measure performance, our Frame Rating system uses a combination of local software and external capture hardware. On the local system with the hardware being evaluated, we run a small piece of software called an overlay that draws small colored bars on the left hand side of the game screen, which change successively with each frame rendered by the game. Using a secondary system, we capture the output from the graphics card directly, intercepting it from the display output in real-time in an uncompressed form. With that video file captured, we then analyze it frame by frame, measuring the length of each of those colored bars, how long each is on the screen, and how consistently they are displayed. This allows us to find the average frame rate but also how smoothly the frames are presented, whether there are dropped frames, and whether there are jitter or stutter issues. 
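For readers who want to see how that analysis works in practice, here is a minimal Python sketch of the idea. It is not our actual FCAT/Frame Rating toolchain; the color palette, 60 Hz capture rate, runt threshold, and the assumption that the captured video has already been decoded into raw arrays are all simplifications for illustration.

```python
import numpy as np

# Illustrative overlay palette and capture settings -- not the real FCAT values.
OVERLAY_COLORS = [(255, 255, 255), (0, 255, 0), (0, 0, 255), (255, 0, 0)]
CAPTURE_HZ = 60.0        # refresh rate of the capture card on the second system
RUNT_THRESHOLD = 21      # scanlines; anything smaller is counted as a runt frame

def classify_column(column, colors):
    """Map each scanline of the overlay column to the nearest palette color."""
    diffs = np.abs(column[:, None, :].astype(int) - np.array(colors)[None, :, :])
    return diffs.sum(axis=2).argmin(axis=1)

def frame_times_from_capture(captured_frames):
    """Count how many scanlines each rendered frame occupies across the captured
    video, then convert scanline counts into on-screen display times."""
    scanlines = []                       # one entry per rendered frame
    last_color = None
    for frame in captured_frames:        # each frame is an HxWx3 uint8 array
        ids = classify_column(frame[:, 0, :], OVERLAY_COLORS)
        for color in ids:                # a color change marks a new rendered frame
            if color != last_color:
                scanlines.append(0)
                last_color = color
            scanlines[-1] += 1
    height = captured_frames[0].shape[0]
    sec_per_scanline = 1.0 / (CAPTURE_HZ * height)
    frame_times = [n * sec_per_scanline for n in scanlines]
    runts = sum(1 for n in scanlines if n < RUNT_THRESHOLD)
    return frame_times, runts
```

From the per-frame display times you can derive the average frame rate, the frame-to-frame variance (stutter), and the number of dropped or runt frames, which is exactly the data a software-only counter cannot give you.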

screen1.jpg

Continue reading our first look at Frame Rating / FCAT Testing with Mantle in Battlefield 4!!

Manufacturer: ASUS

Introduction and Technical Specifications

Introduction

02-card-profile.jpg

Courtesy of ASUS

The ASUS ROG Poseidon GTX 780 video card is the latest incarnation of the Republic of Gamers (ROG) Poseidon series. Like the previous Poseidon series products, the Poseidon GTX 780 features a hybrid cooler, capable of air and liquid-based cooling for the GPU and onboard components. The ASUS ROG Poseidon GTX 780 graphics card comes with an MSRP of $599, a premium price for a premium card.

03-fly-apart-image.jpg

Courtesy of ASUS

In designing the Poseidon GTX 780 graphics card, ASUS packed in many of the premium components you would normally find as add-ons. The card also features motherboard-quality power components, including a 10-phase digital power regulation system using ASUS DIGI+ VRM technology coupled with Japanese black metallic capacitors. The Poseidon GTX 780 has the following features integrated into its design: DisplayPort output port, HDMI output port, dual DVI ports (DVI-D and DVI-I type ports), aluminum backplate, integrated G 1/4" threaded liquid ports, dual 90mm cooling fans, 6-pin and 8-pin PCIe-style power connectors, and integrated power connector LEDs and ROG logo LED.

Continue reading our review of the ASUS ROG Poseidon GTX 780 graphics card!

Author:
Manufacturer: NVIDIA

DX11 could rival Mantle

The big story at GDC last week was Microsoft’s reveal of DirectX 12 and the future of the dominant API for PC gaming.  There was plenty of build up to the announcement, with Microsoft’s DirectX team posting teasers and starting up a Twitter account for the occasion. I hosted a live blog from the event which included pictures of the slides. It was our most successful event of this type, with literally thousands of people joining in the conversation. Along with the debates over the similarities of AMD’s Mantle API and the timeline for the DX12 release, there are plenty of stories to be told.

After the initial session, I wanted to set up meetings with both AMD and NVIDIA to discuss what had been shown and get some feedback on the planned direction for the GPU giants’ implementations.  NVIDIA presented us with a very interesting set of data that focused both on the future with DX12 and on the here and now of DirectX 11.

15.jpg

The reason for the topic is easy to decipher – AMD has built up the image of Mantle as the future of PC gaming and, with a full 18 months before Microsoft’s DirectX 12 is released, how developers and gamers respond will have an important impact on the market. NVIDIA doesn’t like to talk about Mantle directly, but it’s obvious that it feels the need to address the questions in a roundabout fashion. During our time with NVIDIA’s Tony Tamasi at GDC, the discussion centered as much on OpenGL and DirectX 11 as anything else.

What are APIs and why do you care?

For those that might not really understand what DirectX and OpenGL are, a bit of background first. APIs (application programming interfaces) are responsible for providing an abstraction layer between hardware and software applications.  An API can deliver consistent programming models (though the language can vary) and do so across various hardware vendors' products and even between hardware generations.  They can provide access to hardware feature sets that range widely in complexity, while allowing developers to use the hardware without necessarily knowing great detail about it.

Over the years, APIs have developed and evolved but still retain backwards compatibility.  Companies like NVIDIA and AMD can improve DirectX implementations to increase performance or efficiency without adversely (usually at least) affecting other games or applications.  And because the games use that same API for programming, changes to how NVIDIA/AMD handle the API integration don’t require game developer intervention.

With the release of AMD Mantle, the idea of a “low level” API has been placed in the minds of gamers and developers.  The term “low level” can mean many things, but in general it is associated with an API that is more direct, has a thinner set of abstraction layers, and uses less translation from code to hardware.  The goal is to reduce the amount of overhead (performance hit) that APIs naturally incur for these translations.  With additional performance available, the CPU cycles can be used by the program (game) or be left idle to improve battery life. In certain cases, GPU throughput can increase where the API overhead is impeding the video card's progress.
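To put some rough numbers behind that overhead argument, here is a back-of-envelope Python sketch. The per-draw-call CPU costs are purely hypothetical figures chosen for illustration, not measurements from NVIDIA or AMD; the point is simply how quickly per-call driver overhead eats into a frame budget on a single submission thread.

```python
# Hypothetical per-draw-call CPU cost on a single thread (microseconds).
# These numbers are assumptions for illustration only.
COST_THICK_API_US = 25.0   # validation, state tracking, translation in a thick API
COST_THIN_API_US = 5.0     # thinner abstraction, more work done up front

def max_draw_calls(frame_budget_ms, per_call_us):
    """How many draw calls fit in one frame if the CPU did nothing else?"""
    return int(frame_budget_ms * 1000.0 / per_call_us)

for budget_ms in (16.7, 33.3):   # 60 FPS and 30 FPS frame budgets
    thick = max_draw_calls(budget_ms, COST_THICK_API_US)
    thin = max_draw_calls(budget_ms, COST_THIN_API_US)
    print(f"{budget_ms} ms budget: {thick} calls (thick API) vs {thin} calls (thin API)")
```

Under those assumed costs, trimming the per-call overhead is the difference between a few hundred and a few thousand draw calls per frame before the CPU, not the GPU, becomes the limit.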

Passing additional control to the game developers, away from the API or GPU driver developers, gives those coders additional power and improves the ability for some vendors to differentiate. Interestingly, not all developers want this kind of control, as it requires more time and more development work, and small teams that depend on that abstraction to make coding easier will only see limited performance advantages.

This transition to lower level APIs is being driven by the widening performance gap between CPUs and GPUs.  NVIDIA provided the images below.

04.jpg

On the left we see performance scaling in terms of GFLOPS and on the right the metric is memory bandwidth. Clearly the performance of NVIDIA's graphics chips (as well as AMD's) has far outpaced what the best Intel desktop processors have been able to deliver, and that gap means the industry needs to innovate to find ways to close it.

Continue reading NVIDIA Talks DX12, DX11 Efficiency Improvements!!!

Author:
Manufacturer: Various

EVGA GTX 750 Ti ACX FTW

The NVIDIA GeForce GTX 750 Ti has been getting a lot of attention around the hardware circuits recently, and for good reason.  It remains interesting from a technology standpoint as it is the first, and still the only, Maxwell based GPU available for desktop users.  It's a completely new architecture which is built with power efficiency (and Tegra) in mind. With it, the GTX 750 Ti was able to push a lot of performance into a very small power envelope while still maintaining some very high clock speeds.

IMG_9872.JPG

NVIDIA’s flagship mainstream part is also still the leader when it comes to performance per dollar in this segment (for at least as long as it takes for AMD’s Radeon R7 265 to become widely available).  There have been a few cases that we have noticed where the long-standing shortages and price hikes from coin mining have dwindled, which is great news for gamers but may also be bad news for NVIDIA’s GPUs in some areas.  Even if the R7 265 becomes available, though, the GTX 750 Ti remains the best card you can buy that doesn’t require a power connection. This puts it in a unique position for power limited upgrades. 

After our initial review of the reference card, and then an interesting look at how the card can be used to upgrade an older or underpowered PC, it is time to take a quick look at a set of three different retail cards that have made their way into the PC Perspective offices.

On the chopping block today we’ll look at the EVGA GeForce GTX 750 Ti ACX FTW, the Galaxy GTX 750 Ti GC and the PNY GTX 750 Ti XLR8 OC.  All of them are non-reference, all of them are overclocked, but you’ll likely be surprised how they stack up.

Continue reading our round up of EVGA, Galaxy and PNY GTX 750 Ti Graphics Cards!!

Author:
Manufacturer: NZXT

Installation

When the Radeon R9 290 and R9 290X first launched last year, they were plagued by issues of overheating and variable clock speeds.  We looked at the situation several times over the course of a couple of months and AMD tried to address the problem with newer drivers.  These drivers did help stabilize clock speeds (and thus performance) of the reference-built R9 290 and R9 290X cards but caused noise levels to increase as well.  

The real solution was the release of custom cooled versions of the R9 290 and R9 290X from AMD partners like ASUS, MSI and others.  The ASUS R9 290X DirectCU II model, for example, ran cooler, quieter, and more consistently than any of the numerous reference models we had our hands on.  

But what about all those buyers that are still purchasing, or have already purchased, reference style R9 290 and 290X cards?  Replacing the cooler on the card is the best choice and, thanks to our friends at NZXT, we have a unique solution that combines standard self-contained water coolers meant for CPUs with a custom built GPU bracket.  

IMG_9179_0.JPG

Our quick test will utilize one of the reference R9 290 cards AMD sent along at launch and two specific NZXT products.  The Kraken X40 is a standard self-contained CPU water cooler that sells for $100 on Amazon.com.  For our purposes, though, we are going to team it up with the Kraken G10, a $30 GPU-specific bracket that allows you to use the X40 (and other water coolers) on the Radeon R9 290.

IMG_9181_0.JPG

Inside the box of the G10 you'll find an 80mm fan, a back plate, the bracket to attach the cooler to the GPU and all necessary installation hardware.  The G10 supports a wide range of GPUs, though it is targeted towards the reference designs of each:

NVIDIA : GTX 780 Ti, 780, 770, 760, Titan, 680, 670, 660Ti, 660, 580, 570, 560Ti, 560, 560SE 
AMD : R9 290X, 290, 280X*, 280*, 270X, 270, HD 7970*, 7950*, 7870, 7850, 6970, 6950, 6870, 6850, 6790, 6770, 5870, 5850, 5830
 

That is pretty impressive, but NZXT cautions that custom-designed boards may interfere with the bracket.

IMG_9184_0.JPG

The installation process begins with removing the original cooler, which in this case just means a lot of small screws.  Be careful when removing the screws on the actual heatsink retention bracket and alternate between screws to take it off evenly.

Continue reading about how the NZXT Kraken G10 can improve the cooling of the Radeon R9 290 and R9 290X!!

Author:
Manufacturer: NVIDIA

Maxwell and Kepler and...Fermi?

Covering the landscape of mobile GPUs can be a harrowing experience.  Brands, specifications, performance, features and architectures can all vary from product to product, even inside the same family.  Rebranding is rampant from both AMD and NVIDIA and, in general, we are met with one of the most confusing segments of the PC hardware market.  

Today, with the release of the GeForce GTX 800M series from NVIDIA, we are getting all of the above in one form or another. We will also see performance improvements and the introduction of the new Maxwell architecture (in a few parts at least).  Along with the GeForce GTX 800M parts, you will also find the GeForce 840M, 830M and 820M offerings at lower performance, wattage and price levels.

slides01.jpg
 

With some new hardware comes a collection of new software for mobile users, including the innovative Battery Boost that can increase unplugged gaming time by using frame rate limiting and other "magic" bits that NVIDIA isn't talking about yet.  ShadowPlay and GameStream also find their way to mobile GeForce users.

Let's take a quick look at the new hardware specifications.

                  GTX 880M         GTX 780M         GTX 870M         GTX 770M
GPU Code name     Kepler           Kepler           Kepler           Kepler
GPU Cores         1536             1536             1344             960
Rated Clock       954 MHz          823 MHz          941 MHz          811 MHz
Memory            Up to 4GB        Up to 4GB        Up to 3GB        Up to 3GB
Memory Clock      5000 MHz         5000 MHz         5000 MHz         4000 MHz
Memory Interface  256-bit          256-bit          192-bit          192-bit
Features          Battery Boost    GameStream       Battery Boost    GameStream
                  GameStream       ShadowPlay       GameStream       ShadowPlay
                  ShadowPlay       GFE              ShadowPlay       GFE
                  GFE                               GFE

Both the GTX 880M and the GTX 870M are based on Kepler, keeping the same basic feature set and hardware specifications as their brethren in the GTX 700M line.  However, while the GTX 880M has the same CUDA core count as the 780M, the same cannot be said of the GTX 870M.  Moving from the GTX 770M to the 870M sees a significant 40% increase in core count as well as a jump in clock speed from 811 MHz (plus Boost) to 941 MHz.  
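A quick sanity check on those numbers, using only the figures from the table above, shows how the core count and clock changes compound. This is a rough sketch that ignores Boost clocks and memory bandwidth:

```python
cores_770m, clock_770m = 960, 811     # base clock in MHz, from the table above
cores_870m, clock_870m = 1344, 941

core_gain = (cores_870m - cores_770m) / cores_770m
throughput_gain = (cores_870m * clock_870m) / (cores_770m * clock_770m) - 1

print(f"CUDA core increase:          {core_gain:.0%}")        # ~40%
print(f"Raw shader throughput gain:  {throughput_gain:.0%}")  # ~62%
```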

Continue reading about the NVIDIA GeForce GTX 800M Launch and Battery Boost!!

Author:
Manufacturer: Various

1920x1080, 2560x1440, 3840x2160

Join us on March 11th at 9pm ET / 6pm PT for a LIVE Titanfall Game Stream!  You can find us at http://www.pcper.com/live.  You can subscribe to our mailing list to be alerted whenever we have a live event!!

We canceled the event due to the instability of Titanfall servers.  We'll reschedule soon!!

With the release of Respawn's Titanfall upon us, many potential PC gamers are going to be looking for suggestions on compiling a list of parts targeted at a perfect Titanfall experience.  The good news is, even with a fairly low investment in PC hardware, gamers will find that the PC version of this title is definitely the premier way to play, as the compute power of the Xbox One just can't compete.

titanfallsystem.jpg
 

In this story we'll present three different build suggestions, each addressing a different target resolution but also better image quality settings than the Xbox One can offer.  We have configurations for 1080p (the best the Xbox One can offer), 2560x1440, and even 3840x2160, better known as 4K.  In truth, the graphics horsepower required by Titanfall isn't overly extreme, and thus an entire PC build coming in under $800, including a full copy of Windows 8.1, is easy to accomplish.

Target 1: 1920x1080

First up is old reliable, the 1920x1080 resolution that most gamers still have on their primary gaming display.  That could be a home theater style PC hooked up to a TV or monitors in sizes up to 27-in.  Here is our build suggestion, followed by our explanations.

                Titanfall 1080p Build
Processor       Intel Core i3-4330 - $137
Motherboard     MSI H87-G43 - $96
Memory          Corsair Vengeance LP 8GB 1600 MHz (2 x 4GB) - $89
Graphics Card   EVGA GeForce GTX 750 Ti - $179
Storage         Western Digital Blue 1TB - $59
Case            Corsair 200R ATX Mid Tower Case - $72
Power Supply    Corsair CX 500 watt - $49
OS              Windows 8.1 OEM - $96
Total Price     $781 - Amazon Full Cart

Our first build comes in at $781 and includes some incredibly competent gaming hardware for that price.  The Intel Core i3-4330 is a dual-core, HyperThreaded processor that provides more than enough capability to push Titanfall and all other major PC games on the market.  The MSI H87 motherboard lacks some of the advanced features of the Z87 platform but does the job at a lower cost.  8GB of Corsair memory, though not running at a high clock speed, provides more than enough capacity for all the programs and applications you could want to run.

Continue reading our article on building a gaming PC for Titanfall!!

Author:
Manufacturer: EVGA

It's been a while...

EVGA has been around for quite some time now.  They have turned into NVIDIA’s closest North American partner after the collapse of the original VisionTek.  At nearly every trade show or gaming event, EVGA is closely associated with whatever NVIDIA presence is there.  In the past EVGA focused primarily on using NVIDIA reference designs for PCB and cooling, and would branch out now and then with custom or semi-custom watercooling solutions.

evga_780_01.jpg

A very svelte and minimalist design for the shroud.  I like it.

The last time I actually reviewed an EVGA product was way back in May of 2006.  I took a look at the 7600 GS, which was a passively cooled card.  Oddly enough, that card is sitting right in front of me as I write this.  Unfortunately, that particular card has a set of blown caps on it and no longer works.  Considering that the card has been in constant use since 2006, I would say that it held up very well for those eight years!

EVGA has been expanding their product lineup to be able to handle the highs and lows of the PC market.  They have started manufacturing motherboards, cases, and power supplies to help differentiate their product lineup and hopefully broaden their product portfolio.  We know from past experiences that companies that rely on one type of product from a single manufacturer (GPUs in this particular case) can experience some real issues if demand drops dramatically due to competitive disadvantages.  EVGA also has taken a much more aggressive approach to differentiating their products while keeping them within a certain budget.

The latest generation of GTX 700 based cards has seen the introduction of the EVGA ACX cooling solutions.  These dual fan coolers are a big step up from the reference design and put EVGA on par with competitive products from Asus and MSI.  EVGA does make some tradeoffs in comparison, but these are fairly minimal when considering the entire package.

Click here to read the entire review!

Author:
Manufacturer: ORIGIN PC

Mobile Gaming Powerhouse

Every once in a while, a vendor sends us a preconfigured gaming PC or notebook.  We don't usually focus too much on these systems because so many of our readers are quite clearly DIY builders.  Gaming notebooks are another beast, though. Without going through a horrible amount of headaches, building a custom gaming notebook is a pretty tough task.  So, for users who are looking for a ton of gaming performance in a package that is mobile, going with a machine like the ORIGIN PC EON17-SLX is the best option.

IMG_9494.JPG

As the name implies, the EON17-SLX is a 17-in notebook that packs some really impressive specifications, including a Haswell processor and SLI GeForce GTX 780M GPUs.

                  ORIGIN PC EON17-SLX
Processor         Core i7-4930MX (Haswell)
Cores / Threads   4 / 8
Graphics          2 x NVIDIA GeForce GTX 780M 4GB
System Memory     16GB Corsair Vengeance DDR3-1600
Storage           2 x 120GB mSATA SSD (RAID-0)
                  1 x Western Digital Black 750GB HDD
Wireless          Intel 7260 802.11ac
Screen            17-in 1920x1080 LED Matte
Optical           6x Blu-ray reader / DVD writer
Extras            Thunderbolt
Operating System  Windows 8.1
Price             ~$4500

Intel's Core i7-4930MX processor is actually a quad-core Haswell based CPU, not an Ivy Bridge-E part like you might guess based on the part number.  The GeForce GTX 780M GPUs each include 4GB of frame buffer (!!) and have very similar specifications to the desktop GTX 770 parts.  Even though they run at lower clock speeds, a pair of these GPUs will provide a ludicrous amount of gaming performance.

As you would expect for a notebook with this much compute performance, it isn't thin and light. My scale reads 9.5 pounds with the laptop alone and over 12 pounds with the power adapter included.  The profile images below illustrate not only many of the included features but also the size and form factor.

Continue reading our review of the ORIGIN PC EON17-SLX Gaming Notebook!!

Author:
Manufacturer: Various

An Upgrade Project

When NVIDIA started talking to us about the new GeForce GTX 750 Ti graphics card, one of the key points they emphasized was the potential for this first-generation Maxwell GPU to be used in the upgrade process of smaller form factor or OEM PCs. Without the need for an external power connector, the GTX 750 Ti provided a clear performance delta from integrated graphics with minimal cost and minimal power consumption, so the story went.

Eager to put this theory to the test, we decided to put together a project looking at the upgrade potential of off the shelf OEM computers purchased locally.  A quick trip down the road to Best Buy revealed a PC sales section that was dominated by laptops and all-in-ones, but with quite a few "tower" style desktop computers available as well.  We purchased three different machines, each at a different price point, and with different primary processor configurations.

The lucky winners included a Gateway DX4885, an ASUS M11BB, and a Lenovo H520.

IMG_9724.JPG

Continue reading An Upgrade Story: Can the GTX 750 Ti Convert OEM PCs to Gaming PCs?

Author:
Manufacturer: NVIDIA

What we know about Maxwell

I'm going to go out on a limb and guess that many of you reading this review would not have normally been as interested in the launch of the GeForce GTX 750 Ti if a specific word hadn't been mentioned in the title: Maxwell.  It's true, the launch of GTX 750 Ti, a mainstream graphics card that will sit in the $149 price point, marks the first public release of the new NVIDIA GPU architecture code named Maxwell.  It is a unique move for the company to start at this particular point with a new design, but as you'll see in the changes to the architecture as well as the limitations, it all makes a certain bit of sense.

For those of you that don't really care about the underlying magic that makes the GTX 750 Ti possible, you can skip this page and jump right to the details of the new card itself.  There I will detail the product specifications, performance comparison and expectations, etc.

If you are interested in learning what makes Maxwell tick, keep reading below.

The NVIDIA Maxwell Architecture

When NVIDIA first approached us about the GTX 750 Ti they were very light on details about the GPU that was powering it.  Even though the fact that it was built on Maxwell was confirmed, the company hadn't yet determined if it was going to do a full architecture deep dive with the press.  In the end they went somewhere in between the full detail we are used to getting with a new GPU design and the original, passive stance.  It looks like we'll have to wait for the enthusiast-class GPU release to really get the full story, but I think the details we have now paint the picture quite clearly.  

During the course of designing the Kepler architecture, and then implementing it in the Tegra line in the form of the Tegra K1, NVIDIA's engineering team developed a better sense of how to improve the performance and efficiency of the basic compute design.  Kepler was a huge leap forward compared to the likes of Fermi, and Maxwell promises to be equally revolutionary.  NVIDIA wanted to address GPU power consumption as well as find ways to extract more performance from the architecture at the same power levels.  

The logic of the GPU design remains similar to Kepler.  There is a Graphics Processing Cluster (GPC) that houses Streaming Multiprocessors (SMs) built from a large number of CUDA cores (stream processors).  

block.jpg

GM107 Block Diagram

Readers familiar with the look of Kepler GPUs will instantly see changes in the organization of the various blocks of Maxwell.  There are more divisions, more groupings, and fewer CUDA cores "per block" than before.  As it turns out, this reorganization is part of how NVIDIA was able to improve performance and power efficiency with the new GPU.  

Continue reading our review of the NVIDIA GeForce GTX 750 Ti and Maxwell Architecture!!

Author:
Manufacturer: AMD

Straddling the R7 and R9 designation

It is often said that the sub-$200 graphics card market is crowded.  It will get even more so over the next 7 days.  Today AMD is announcing a new entry into this field, the Radeon R7 265, which seems to straddle the line between their R7 and R9 brands.  The product is much closer in its specifications to the R9 270 than it is to the R7 260X. As you'll see below, it is built on a very familiar GPU architecture.

slides01.jpg

AMD claims that the new R7 265 brings a 25% increase in performance to the R7 line of graphics cards.  In my testing, this does turn out to be true and also puts it dangerously close to the R9 270 card released late last year. Much like we saw with the R9 290 compared to the R9 290X, the less expensive but similarly performing card might make the higher end model a less attractive option.

Let's take a quick look at the specifications of the new R7 265.

slides02.jpg

Based on the Pitcairn GPU, a part that made its debut with the Radeon HD 7870 and HD 7850 in early 2012, this card has 1024 stream processors running at 925 MHz, equating to 1.89 TFLOPS of total peak compute power.  Unlike the other R7 cards, the R7 265 has a 256-bit memory bus and will come with 2GB of GDDR5 memory running at 5.6 GHz.  The card requires a single 6-pin power connection but has a peak TDP of 150 watts - pretty much the maximum of the PCI Express bus and one power connector.  And yes, the R7 265 supports DX 11.2, OpenGL 4.3, and Mantle, just like the rest of the AMD R7/R9 lineup.  It does NOT support TrueAudio or the new CrossFire DMA (XDMA) engine.
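That 1.89 TFLOPS figure falls straight out of the standard peak-compute math for GCN, two floating point operations per stream processor per clock. A quick check using only the numbers above:

```python
stream_processors = 1024
core_clock_ghz = 0.925        # 925 MHz
flops_per_sp_per_clock = 2    # a fused multiply-add counts as two operations

peak_tflops = stream_processors * core_clock_ghz * flops_per_sp_per_clock / 1000.0
print(f"R7 265 peak compute: {peak_tflops:.2f} TFLOPS")   # ~1.89 TFLOPS
```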

                  Radeon R9 270X  Radeon R9 270   Radeon R7 265   Radeon R7 260X  Radeon R7 260
GPU Code name     Pitcairn        Pitcairn        Pitcairn        Bonaire         Bonaire
GPU Cores         1280            1280            1024            896             768
Rated Clock       1050 MHz        925 MHz         925 MHz         1100 MHz        1000 MHz
Texture Units     80              80              64              56              48
ROP Units         32              32              32              16              16
Memory            2GB             2GB             2GB             2GB             2GB
Memory Clock      5600 MHz        5600 MHz        5600 MHz        6500 MHz        6000 MHz
Memory Interface  256-bit         256-bit         256-bit         128-bit         128-bit
Memory Bandwidth  179 GB/s        179 GB/s        179 GB/s        104 GB/s        96 GB/s
TDP               180 watts       150 watts       150 watts       115 watts       95 watts
Peak Compute      2.69 TFLOPS     2.37 TFLOPS     1.89 TFLOPS     1.97 TFLOPS     1.53 TFLOPS
MSRP              $199            $179            $149            $119            $109

The table above compares the current AMD product lineup, ranging from the R9 270X to the R7 260, with the R7 265 directly in the middle.  There are some interesting specifications to point out that make the 265 a much closer relation to the R9 270/270X cards than anything below it.  The R7 265 has four fewer compute units (256 stream processors) than the R9 270, but the biggest performance gap is going to be found in the 256-bit memory bus that persists; the available memory bandwidth of 179 GB/s is 72% higher than the 104 GB/s of the R7 260X!  That will definitely improve performance drastically compared to the rest of the R7 products.  Pay no mind to the peak compute of the 260X being higher than that of the R7 265; in real-world testing that advantage never materialized.
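The bandwidth gap is just as easy to reproduce from the table: bus width in bytes multiplied by the effective memory data rate. A small sketch using the table's figures:

```python
def peak_bandwidth_gbs(bus_width_bits, effective_mhz):
    """Peak memory bandwidth in GB/s: bytes per transfer x transfers per second."""
    return (bus_width_bits / 8) * effective_mhz * 1e6 / 1e9

r7_265  = peak_bandwidth_gbs(256, 5600)   # ~179 GB/s
r7_260x = peak_bandwidth_gbs(128, 6500)   # ~104 GB/s
print(f"R7 265:  {r7_265:.0f} GB/s")
print(f"R7 260X: {r7_260x:.0f} GB/s")
print(f"Advantage: {r7_265 / r7_260x - 1:.0%}")            # ~72%
```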

Continue reading our review of the new AMD Radeon R7 265 2GB Graphics Card!!

Manufacturer: PC Perspective
Tagged: Mantle, interview, amd

What Mantle signifies about GPU architectures

Mantle is a very interesting concept. From the various keynote speeches, it sounds like the API is being designed to address the current state (and trajectory) of graphics processors. GPUs are generalized and highly parallel computation devices which are assisted by a little bit of specialized silicon, when appropriate. The vendors have even settled on standards, such as IEEE 754 floating point numbers, which means that the driver has much less reason to shield developers from the underlying architectures.

Still, Mantle is currently a private technology for an unknown number of developers. Without a public SDK, or anything beyond the half-dozen keynotes, we can only speculate on its specific attributes. I, for one, have technical questions and hunches which linger unanswered or unconfirmed, probably until the API is suitable for public development.

Or, until we just... ask AMD.

amd-mantle-interview-01.jpg

Our response came from Guennadi Riguer, the chief architect for Mantle. In it, he discusses the API's usage as a computation language, the future of the rendering pipeline, and whether there will be a day when CrossFire-like benefits can be had by leaving an older Mantle-capable GPU in your system when purchasing a new, also Mantle-supporting one.

Q: Mantle's shading language is said to be compatible with HLSL. How will optimizations made for DirectX, such as tweaks during shader compilation, carry over to Mantle? How much tuning will (and will not) be shared between the two APIs?

[Guennadi] The current Mantle solution relies on the same shader generation path that games use for DirectX and includes an open-source component for translating DirectX shaders to a Mantle-accepted intermediate language (IL). This enables developers to quickly develop a Mantle code path without any changes to the shaders. This was one of the strongest requests we got from our ISV partners when we were developing Mantle.

AMD-mantle-dx-hlsl-GSA_screen_shot.jpg

Follow-Up: What does this mean, specifically, in terms of driver optimizations? Would AMD, or anyone else who supports Mantle, be able to re-use the effort they spent on tuning their shader compilers (and so forth) for DirectX?

[Guennadi] With the current shader compilation strategy in Mantle, the developers can directly leverage DirectX shader optimization efforts in Mantle. They would use the same front-end HLSL compiler for DX and Mantle, and inside of the DX and Mantle drivers we share the shader compiler that generates the shader code our hardware understands.

Read on to see the rest of the interview!

Author:
Manufacturer: AMD

A quick look at performance results

Late last week, EA and DICE released the long awaited patch for Battlefield 4 that enables support for the Mantle renderer.  This new API technology was introduced by AMD back in September. Unfortunately, AMD wasn't quite ready for its release with their Catalyst 14.1 beta driver.  I wrote a short article that previewed the new driver's features, its expected performance with the Mantle version of BF4, and commentary about the current state of Mantle.  You should definitely read that as a primer before continuing if you haven't yet.  

Today, after really just a few short hours with a usable driver, I have only limited results.  Still, I know that you, our readers, clamor for ANY information on the topic, so I thought I would share what we have thus far.

Initial Considerations

As I mentioned in the previous story, the Mantle version of Battlefield 4 has the biggest potential to show advantages in situations where the game is more CPU limited.  AMD calls this the "low hanging fruit" for this early release of Mantle and claims that further optimizations will come, especially for GPU-bound scenarios.  That dependency on CPU limitations puts some non-standard requirements on our ability to showcase Mantle's performance capabilities.

bf42.jpg

For example, the level of the game, and even the section of that level in the BF4 single player campaign, can show drastic swings in Mantle's capabilities.  Multiplayer matches will also show more consistent CPU utilization (and thus could be improved by Mantle), though testing those levels in a repeatable, semi-scientific method is much more difficult.  And, as you'll see in our early results, I even found a couple of instances in which the Mantle API version of BF4 ran a smidge slower than the DX11 version.  

For our testing, we compiled two systems that differed in CPU performance in order to simulate the range of processors installed within consumers' PCs.  Our standard GPU test bed includes a Core i7-3960X Sandy Bridge-E processor specifically to remove the CPU as a bottleneck, and it has been included here today.  We added a system based on the AMD A10-7850K Kaveri APU, which presents a more processor-limited (especially per-thread) system overall and should help showcase Mantle's benefits more easily.

Continue reading our early look at the performance advantages of AMD Mantle on Battlefield 4!!

Author:
Manufacturer: AMD

A troubled launch to be sure

AMD has released some important new drivers with drastic feature additions over the past year.  Remember back in August of 2013 when Frame Pacing was first revealed?  Today’s Catalyst 14.1 beta release will actually complete the goals that AMD set for itself in early 2013 in regard to introducing (nearly) complete Frame Pacing technology integration for non-XDMA GPUs, while also adding support for Mantle and HSA capability.

Frame Pacing Phase 2 and HSA Support

When AMD released the first frame pacing capable beta driver in August of 2013, it added support to existing GCN designs (HD 7000-series and a few older generations) at resolutions of 2560x1600 and below.  While that definitely addressed a lot of the market, the fact was that CrossFire users were also amongst the most likely to have Eyefinity (3+ monitors spanned for gaming) or even 4K displays (quickly dropping in price).  Neither of those advanced display options was supported with any Catalyst frame pacing technology.

That changes today as Phase 2 of the AMD Frame Pacing feature has finally been implemented for products that do not feature the XDMA technology (found in Hawaii GPUs for example).  That includes HD 7000-series GPUs, the R9 280X and 270X cards, as well as older generation products and Dual Graphics hardware combinations such as the new Kaveri APU and R7 250.  I have already tested Kaveri and the R7 250 in fact, and you can read about its scaling and experience improvements right here.  That means that users of the HD 7970, R9 280X, etc., as well as those of you with HD 7990 dual-GPU cards, will finally be able to utilize the power of both GPUs in your system with 4K displays and Eyefinity configurations!

BF3_5760x1080_PLOT.png

This is finally fixed!!

As of this writing I haven’t had time to do more testing (other than the Dual Graphics article linked above) to demonstrate the potential benefits of this Phase 2 update, but we’ll be targeting it later in the week.  For now, it appears that you’ll be able to get essentially the same performance and pacing capabilities on the Tahiti-based GPUs as you can with Hawaii (R9 290X and R9 290). 

Catalyst 14.1 beta is also the first public driver to add support for HSA technology, allowing owners of the new Kaveri APU to take advantage of the appropriately enabled applications like LibreOffice and the handful of Adobe apps.  AMD has since let us know that this feature DID NOT make it into the public release of Catalyst 14.1.

The First Mantle Ready Driver (sort of) 

According to AMD, Mantle has been in development for more than two years, and the newly released Catalyst 14.1 beta driver is the first to enable support for the revolutionary new API for PC gaming.  Essentially, Mantle is AMD’s attempt at creating a custom API that will replace DirectX and OpenGL in order to more directly target the GPU hardware in your PC, specifically the AMD-based designs of GCN (Graphics Core Next). 

slide2_0.jpg

Mantle runs at a lower level than DX or OGL, able to more directly access the hardware resources of the graphics chip, and with that ability it can better utilize the hardware in your system, both CPU and GPU.  In fact, the primary benefit of Mantle is going to be seen in the form of less API overhead and fewer bottlenecks such as real-time shader compilation and code translation. 

If you are interested in the meat of what makes Mantle tick and why it was so interesting to us when it was first announced in September of 2013, you should check out our first deep-dive article written by Josh.  In it you’ll get our opinion on why Mantle matters and why it has the potential for drastically changing the way the PC is thought of in the gaming ecosystem.

Continue reading our coverage of the launch of AMD's Catalyst 14.1 driver and Battlefield 4 Mantle patch!!

Author:
Manufacturer: AMD

Hybrid CrossFire that actually works

The road to redemption for AMD and its driver team has been a tough one.  Since we first started to reveal the significant issues with AMD's CrossFire technology back in January of 2013, the Catalyst driver team has been hard at work on a fix, though I will freely admit it took longer to convince them that the issue was real than I would have liked.  We saw the first steps of the fix released in August of 2013 with the release of the Catalyst 13.8 beta driver.  It supported DX11 and DX10 games and resolutions of 2560x1600 and under (no Eyefinity support) but was obviously still less than perfect.  

In October, with the release of AMD's latest Hawaii GPU, the company took another step by reorganizing the internal architecture of CrossFire at the chip level with XDMA.  The result was frame pacing that worked on the R9 290X and R9 290 at all resolutions, including Eyefinity, though it still left out older DX9 titles.  

One thing that had not been addressed, at least not until today, was the set of issues surrounding AMD's Hybrid CrossFire technology, now known as Dual Graphics.  This is the ability for an AMD APU with integrated Radeon graphics to pair with a low cost discrete GPU to improve graphics performance and gaming experiences.  Recently, Tom's Hardware discovered that Dual Graphics suffered from the exact same scaling issues as standard CrossFire; frame rates in FRAPS looked good, but the actual perceived frame rate was much lower.
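The gap between what FRAPS reports and what you actually see comes down to runt frames: frames that occupy only a sliver of the screen before the next one replaces them. Here is a minimal sketch of how our capture analysis discounts them; the scanline counts and runt cutoff below are made-up illustrative values, not our actual FCAT parameters.

```python
# Scanline heights of a hypothetical sequence of rendered frames pulled from a
# captured video (1080 scanlines = one full screen). Alternating full frames
# and tiny "runt" frames is the classic pattern of unpaced CrossFire.
frame_heights = [1050, 12, 1055, 8, 1040, 15, 1060, 10]
RUNT_THRESHOLD = 21   # illustrative cutoff: anything this small barely registers

software_counter_frames = len(frame_heights)                 # every frame counts
observed_frames = sum(1 for h in frame_heights if h >= RUNT_THRESHOLD)

print(f"Frames counted by a software counter: {software_counter_frames}")
print(f"Frames you actually perceive:         {observed_frames}")   # roughly half
```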

drivers01.jpg

A little while ago a new driver made its way into my hands under the name of Catalyst 13.35 Beta X, a driver that promised to enable Dual Graphics frame pacing with Kaveri and R7 graphics cards.  As you'll see in the coming pages, the fix definitely is working.  And, as I learned after doing some more probing, the 13.35 driver is actually a much more important release than it at first seemed.  Not only is Kaveri-based Dual Graphics frame pacing enabled, but Richland and Trinity are included as well.  And even better, this driver will apparently fix resolutions higher than 2560x1600 in desktop graphics as well - something you can be sure we are checking on this week!

drivers02.jpg

Just as we saw with the first implementation of Frame Pacing in the Catalyst Control Center, with the 13.35 Beta we are using today you'll find a new set of options in the Gaming section to enable or disable Frame Pacing.  The default setting is On, which makes me smile inside every time I see it.

drivers03.jpg

The hardware we are using is the same basic setup from my initial review of the AMD Kaveri A8-7600 APU.  That includes the A8-7600 APU, an ASRock A88X mini-ITX motherboard, 16GB of DDR3 2133 MHz memory and a Samsung 840 Pro SSD.  Of course, for our testing this time we needed a discrete card to enable Dual Graphics, and we chose the MSI R7 250 OC Edition with 2GB of DDR3 memory.  This card will run you an additional $89 or so on Amazon.com.  You could use either the DDR3 or GDDR5 versions of the R7 250 as well as the R7 240, but in our talks with AMD they seemed to think the R7 250 DDR3 was the sweet spot for this CrossFire implementation.

IMG_9457.JPG

Both the R7 250 and the A8-7600 actually share the same number of SIMD units at 384, otherwise known as 384 shader processors or 6 Compute Units based on the new nomenclature that AMD is creating.  However, the MSI card is clocked at 1100 MHz while the GPU portions of the A8-7600 APU are running at only 720 MHz. 
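Using just those two figures and the usual GCN peak-compute math (two floating point operations per stream processor per clock), the clock gap translates directly into theoretical throughput. Treat this as a rough comparison only, since the memory subsystems differ as well:

```python
shaders = 384   # both the R7 250 and the A8-7600's GPU carry 384 stream processors

def peak_gflops(clock_mhz, shader_count=shaders):
    return shader_count * (clock_mhz / 1000.0) * 2   # 2 FLOPs per shader per clock

msi_r7_250 = peak_gflops(1100)   # ~845 GFLOPS
a8_7600    = peak_gflops(720)    # ~553 GFLOPS
print(f"MSI R7 250 @ 1100 MHz: {msi_r7_250:.0f} GFLOPS")
print(f"A8-7600 GPU @ 720 MHz: {a8_7600:.0f} GFLOPS")
print(f"Ideal Dual Graphics total: {msi_r7_250 + a8_7600:.0f} GFLOPS")
```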

So the question is, has AMD truly fixed the issues with frame pacing with Dual Graphics configurations, once again making the budget gamer feature something worth recommending?  Let's find out!

Continue reading our look at Dual Graphics Frame Pacing with the Catalyst 13.35 Beta Driver!!

Author:
Manufacturer: Asus

A Refreshing Change

Refreshes are bad, right?  I guess that depends on who you talk to.  In the case of AMD, it is not a bad thing.  For people who live for cutting edge technology in the 3D graphics world, it is not pretty.  Unfortunately for those people, reality has reared its ugly head.  Process technology is slowing down, but product cycles keep moving along at a healthy pace.  This essentially necessitates minor refreshes for both AMD and NVIDIA when it comes to their product stack.  NVIDIA has taken the Kepler architecture to the latest GTX 700 series of cards.  AMD has done the same thing with the GCN architecture, but has radically changed the nomenclature of the products.

Gone are the days of the Radeon HD 7000 series.  Instead, AMD has renamed their GCN based product stack as the Rx 2xx series.  The products we are reviewing here are the R9 280X and the R9 270X, formerly known as the HD 7970 and HD 7870 respectively.  They differ slightly in clock speeds from the previous versions, but the differences are fairly minimal.  What is different is the pricing: the R9 280X retails at $299 while the R9 270X comes in at $199.

asus_r9_01.png

Asus has taken these cards and applied their latest DirectCU II technology to them.  These improvements relate to design, component choices, and cooling, and they are all significant upgrades from the reference designs, especially when it comes to the cooling aspects.  It is good to see such a progression in design, but it is not entirely surprising given that the first HD 7000 series cards debuted in January 2012.

Click here to read the rest of the review!

Author:
Manufacturer: AMD

DisplayPort to Save the Day?

During an impromptu meeting with AMD this week, the company's Corporate Vice President for Visual Computing, Raja Koduri, presented me with an interesting demonstration of a technology that allowed the refresh rate of a display on a Toshiba notebook to perfectly match the render rate of the game demo being shown.  The result was an image that was smooth and free of tearing.  If that sounds familiar, it should.  NVIDIA's G-Sync was announced in November of last year and does just that for desktop systems and PC gamers.

Since that November unveiling, I knew that AMD would need to respond in some way.  The company had basically been silent since learning of NVIDIA's release but that changed for me today and the information discussed is quite extraordinary.  AMD is jokingly calling the technology demonstration "FreeSync".

slides04.jpg

Variable refresh rates as discussed by NVIDIA.

During the demonstration, AMD's Koduri had two identical systems side by side, both based on a Kabini APU and running a basic graphics demo of a rotating windmill.  One was a standard software configuration while the other had a modified driver that communicated with the panel to enable variable refresh rates.  As you likely know from our various discussions about variable refresh rates and NVIDIA's G-Sync technology, this setup results in a much better gaming experience as it produces smoother animation on the screen without the horizontal tearing associated with v-sync disabled.  

Obviously AMD wasn't using the same controller module that NVIDIA is using on its current G-Sync displays, several of which were announced this week at CES.  Instead, the internal connection on the Toshiba notebook was the key factor: Embedded DisplayPort (eDP) apparently has a feature to support variable refresh rates on LCD panels.  This feature was included for power savings on mobile and integrated devices, as refreshing the screen without new content can be a waste of valuable battery resources.  But, for performance and gaming considerations, this feature can be used to enable a variable refresh rate meant to smooth out gameplay, as AMD's Koduri explained.
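A tiny simulation helps show why matching the refresh to the render rate matters. The toy model below presents a hypothetical steady 50 FPS render stream either on a fixed 60 Hz display with double-buffered v-sync (each frame waits for the next refresh boundary, so presentation collapses to 30 FPS) or on a variable refresh panel that simply refreshes when the frame is ready. The numbers are illustrative, not taken from AMD's demo.

```python
import math

# A hypothetical game rendering steadily at 50 FPS (20 ms per frame).
render_times_ms = [20.0] * 6
REFRESH_MS = 1000.0 / 60.0      # fixed 60 Hz refresh period

def vsync_fixed(frame_times):
    """Double-buffered v-sync on a fixed display: each frame flips at the next
    refresh boundary, and the GPU waits for that flip before starting the next frame."""
    t, shown = 0.0, []
    for ft in frame_times:
        finish = t + ft
        flip = math.ceil(finish / REFRESH_MS) * REFRESH_MS
        shown.append(flip)
        t = flip
    return shown

def variable_refresh(frame_times):
    """Variable refresh: the panel refreshes the moment a frame is finished."""
    t, shown = 0.0, []
    for ft in frame_times:
        t += ft
        shown.append(t)
    return shown

def gaps(shown):
    return [round(b - a, 1) for a, b in zip(shown, shown[1:])]

print("Fixed 60 Hz + v-sync gaps (ms):", gaps(vsync_fixed(render_times_ms)))      # 33.3 each
print("Variable refresh gaps (ms):    ", gaps(variable_refresh(render_times_ms))) # 20.0 each
```

In this simplified model, the fixed-refresh display shows every frame for 33.3 ms even though the game renders a new one every 20 ms, while the variable refresh panel tracks the render cadence exactly, which is the smoothness benefit AMD was demonstrating.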

Continue reading our thoughts on AMD's initial "FreeSync" variable refresh rate demonstration!!