EVGA GTX 750 Ti ACX FTW
The NVIDIA GeForce GTX 750 Ti has been getting a lot of attention in hardware circles recently, and for good reason. It remains interesting from a technology standpoint as the first, and still the only, Maxwell-based GPU available to desktop users. Maxwell is a completely new architecture built with power efficiency (and Tegra) in mind, and with it the GTX 750 Ti pushes a lot of performance into a very small power envelope while still maintaining some very high clock speeds.
NVIDIA’s flagship mainstream part is also still the leader in performance per dollar in this segment (at least until AMD’s Radeon R7 265 becomes widely available). In a few cases we have noticed that the long-standing shortages and price hikes caused by coin mining have dwindled, which is great news for gamers but may also be bad news for NVIDIA’s GPUs in some areas. Even if the R7 265 does become widely available, though, the GTX 750 Ti remains the best card you can buy that doesn’t require an external power connector, which puts it in a unique position for power-limited upgrades.
After our initial review of the reference card, and then an interesting look at how the card can be used to upgrade an older or under powered PC, it is time to take a quick look at a set of three different retail cards that have made their way into the PC Perspective offices.
On the chopping block today are the EVGA GeForce GTX 750 Ti ACX FTW, the Galaxy GTX 750 Ti GC, and the PNY GTX 750 Ti XLR8 OC. All of them are non-reference, all of them are overclocked, but you’ll likely be surprised at how they stack up.
When the Radeon R9 290 and R9 290X first launched last year, they were plagued by issues of overheating and variable clock speeds. We looked at the situation several times over the course of a couple months and AMD tried to address the problem with newer drivers. These drivers did help stabilize clock speeds (and thus performance) of the reference built R9 290 and R9 290X cards but caused noise levels to increase as well.
The real solution was the release of custom cooled versions of the R9 290 and R9 290X from AMD partners like ASUS, MSI and others. The ASUS R9 290X DirectCU II model for example, ran cooler, quieter and more consistently than any of the numerous reference models we had our hands on.
But what about all those buyers that are still purchasing, or have already purchased, reference-style R9 290 and 290X cards? Replacing the cooler on the card is the best choice, and thanks to our friends at NZXT we have a unique solution that pairs a standard self-contained water cooler meant for CPUs with a custom-built GPU bracket.
Our quick test will utilize one of the reference R9 290 cards AMD sent along at launch and two specific NZXT products. The Kraken X40 is a standard CPU self contained water cooler that sells for $100 on Amazon.com. For our purposes though we are going to team it up with the Kraken G10, a $30 GPU-specific bracket that allows you to use the X40 (and other water coolers) on the Radeon R9 290.
Inside the box of the G10 you'll find an 80mm fan, a back plate, the bracket to attach the cooler to the GPU and all necessary installation hardware. The G10 will support a wide range of GPUs, though they are targeted towards the reference designs of each:
NVIDIA : GTX 780 Ti, 780, 770, 760, Titan, 680, 670, 660Ti, 660, 580, 570, 560Ti, 560, 560SE
AMD : R9 290X, 290, 280X*, 280*, 270X, 270, HD 7970*, 7950*, 7870, 7850, 6970, 6950, 6870, 6850, 6790, 6770, 5870, 5850, 5830
That is a pretty impressive list, but NZXT cautions that custom-designed boards may interfere with the bracket.
The installation process begins by removing the original cooler which in this case just means a lot of small screws. Be careful when removing the screws on the actual heatsink retention bracket and alternate between screws to take it off evenly.
Maxwell and Kepler and...Fermi?
Covering the landscape of mobile GPUs can be a harrowing experience. Brands, specifications, performance, features and architectures can all vary from product to product, even inside the same family. Rebranding is rampant from both AMD and NVIDIA and, in general, we are met with one of the most confusing segments of the PC hardware market.
Today, with the release of the GeForce GTX 800M series from NVIDIA, we are getting all of the above in one form or another. We will also see performance improvements and the introduction of the new Maxwell architecture (in a few parts at least). Along with the GeForce GTX 800M parts, you will also find the GeForce 840M, 830M and 820M offerings at lower performance, wattage and price levels.
With some new hardware comes a collection of new software for mobile users, including the innovative Battery Boost that can increase unplugged gaming time by using frame rate limiting and other "magic" bits that NVIDIA isn't talking about yet. ShadowPlay and GameStream also find their way to mobile GeForce users as well.
Let's take a quick look at the new hardware specifications.
| | GTX 880M | GTX 780M | GTX 870M | GTX 770M |
| --- | --- | --- | --- | --- |
| GPU Code name | Kepler | Kepler | Kepler | Kepler |
| Rated Clock | 954 MHz | 823 MHz | 941 MHz | 811 MHz |
| Memory | Up to 4GB | Up to 4GB | Up to 3GB | Up to 3GB |
| Memory Clock | 5000 MHz | 5000 MHz | 5000 MHz | 4000 MHz |
Both the GTX 880M and the GTX 870M are based on Kepler, keeping the same basic feature set and hardware specifications of their brethren in the GTX 700M line. However, while the GTX 880M has the same CUDA core count as the 780M, the same cannot be said of the GTX 870M. Moving from the GTX 770M to the 870M sees a significant 40% increase in core count as well as a jump in clock speed from 811 MHz (plus Boost) to 941 MHz.
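That 40% figure is easy to verify. A quick sanity check, assuming NVIDIA's published Kepler CUDA core counts of 960 (GTX 770M) and 1344 (GTX 870M), which are not listed in the table above:

```python
# Sanity check on the 770M -> 870M core-count jump.
# Core counts are NVIDIA's published Kepler figures, not from the table above.
gtx_770m_cores = 960
gtx_870m_cores = 1344

increase_pct = (gtx_870m_cores / gtx_770m_cores - 1) * 100
print(f"Core count increase: {increase_pct:.0f}%")  # 40%
```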
1920x1080, 2560x1440, 3840x2160
Join us on March 11th at 9pm ET / 6pm PT for a LIVE Titanfall game stream! You can find us at http://www.pcper.com/live. You can subscribe to our mailing list to be alerted whenever we have a live event!

Update: We canceled the event due to the instability of the Titanfall servers. We'll reschedule soon!
With the release of Respawn's Titanfall upon us, many potential PC gamers are going to be looking for suggestions on a list of parts targeted at the perfect Titanfall experience. The good news is that, even with a fairly low investment in PC hardware, gamers will find the PC version of this title is definitely the premier way to play, as the compute power of the Xbox One just can't compete.
In this story we'll present three different build suggestions, each addressing a different target resolution and offering better image quality settings than the Xbox One can manage. We have options for 1080p (the best the Xbox One can offer), 2560x1440, and even 3840x2160, better known as 4K. In truth, the graphics horsepower required by Titanfall isn't overly extreme, so an entire PC build coming in under $800, including a full copy of Windows 8.1, is easy to accomplish.
Target 1: 1920x1080
First up is old reliable, the 1920x1080 resolution that most gamers still have on their primary gaming display. That could be a home theater style PC hooked up to a TV or monitors in sizes up to 27-in. Here is our build suggestion, followed by our explanations.
| Titanfall 1080p Build | |
| --- | --- |
| Processor | Intel Core i3-4330 - $137 |
| Motherboard | MSI H87-G43 - $96 |
| Memory | Corsair Vengeance LP 8GB 1600 MHz (2 x 4GB) - $89 |
| Graphics Card | EVGA GeForce GTX 750 Ti - $179 |
| Storage | Western Digital Blue 1TB - $59 |
| Case | Corsair 200R ATX Mid Tower Case - $72 |
| Power Supply | Corsair CX 500 watt - $49 |
| OS | Windows 8.1 OEM - $96 |
| Total Price | $781 - Amazon Full Cart |
Our first build comes in at $781 and includes some incredibly competent gaming hardware for the price. The Intel Core i3-4330 is a dual-core, HyperThreaded processor with more than enough capability to push Titanfall and all other major PC games on the market. The MSI H87 motherboard lacks some of the advanced features of the Z87 platform but does the job at a lower cost. 8GB of Corsair memory, though not running at a high clock speed, provides more than enough capacity for all the programs and applications you could want to run.
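For a quick sanity check on the budget, summing the listed component prices is trivial. Note that live Amazon cart totals drift by a few dollars as prices fluctuate, which is why the quoted full-cart figure can differ slightly from the sum of the line items:

```python
# Component prices as listed in the table above (USD).
parts = {
    "Intel Core i3-4330": 137,
    "MSI H87-G43": 96,
    "Corsair Vengeance LP 8GB": 89,
    "EVGA GeForce GTX 750 Ti": 179,
    "WD Blue 1TB": 59,
    "Corsair 200R": 72,
    "Corsair CX 500W": 49,
    "Windows 8.1 OEM": 96,
}

total = sum(parts.values())
print(f"Line-item total: ${total}")  # $777; the live full cart came to $781
```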
It's been a while...
EVGA has been around for quite some time now, and has turned into NVIDIA’s closest North American partner after the collapse of the original VisionTek. At nearly every trade show or gaming event, EVGA is closely associated with whatever NVIDIA presence is there. In the past, EVGA focused primarily on NVIDIA reference designs for PCB and cooling, branching out now and then with custom or semi-custom water-cooling solutions.
A very svelte and minimalist design for the shroud. I like it.
The last time I actually reviewed an EVGA product was way back in May of 2006, when I took a look at the passively cooled 7600 GS. Oddly enough, that card is sitting right in front of me as I write this. Unfortunately, it has a set of blown caps and no longer works, but considering that it was in constant use since 2006, I would say it held up very well for those eight years!
EVGA has been expanding their product lineup to handle the highs and lows of the PC market. They have started manufacturing motherboards, cases, and power supplies to differentiate themselves and broaden their product portfolio. We know from past experience that companies relying on one type of product from a single manufacturer (GPUs in this particular case) can run into real trouble if demand drops dramatically due to competitive disadvantages. EVGA has also taken a much more aggressive approach to differentiating their products while keeping them within a certain budget.
The latest generation of GTX 700 based cards saw the introduction of EVGA's ACX cooling solutions. These dual-fan coolers are a big step up from the reference design and put EVGA on par with competitive products from ASUS and MSI. EVGA does make some tradeoffs in comparison, but they are fairly minimal when considering the entire package.
Mobile Gaming Powerhouse
Every once in a while, a vendor sends us a preconfigured gaming PC or notebook. We don't usually focus too much on these systems because so many of our readers are quite clearly DIY builders. Gaming notebooks are another beast, though: building a custom gaming notebook without a horrendous number of headaches is a pretty tough task. So, for users looking for a ton of gaming performance in a mobile package, a machine like the ORIGIN PC EON17-SLX is the best option.
As the name implies, the EON17-SLX is a 17-in notebook that includes some really impressive specifications including a Haswell processor and SLI GeForce GTX 780M GPUs.
| ORIGIN PC EON17-SLX | |
| --- | --- |
| Processor | Core i7-4930MX (Haswell) |
| Cores / Threads | 4 / 8 |
| Graphics | 2 x NVIDIA GeForce GTX 780M 4GB |
| System Memory | 16GB Corsair Vengeance DDR3-1600 |
| Storage | 2 x 120GB mSATA SSD (RAID-0), 1 x Western Digital Black 750GB HDD |
| Wireless | Intel 7260 802.11ac |
| Screen | 17-in 1920x1080 LED Matte |
| Optical | 6x Blu-ray reader / DVD writer |
| Operating System | Windows 8.1 |
Intel's Core i7-4930MX processor is actually a quad-core Haswell based CPU, not an Ivy Bridge-E part like you might guess based on the part number. The GeForce GTX 780M GPUs each include 4GB of frame buffer (!!) and have very similar specifications to the desktop GTX 770 parts. Even though they run at lower clock speeds, a pair of these GPUs will provide a ludicrous amount of gaming performance.
As you would expect for a notebook with this much compute performance, it isn't a thin-and-light. My scale reads 9.5 pounds for the laptop alone and over 12 pounds with the power adapter included. The profile images below indicate not only many of the included features but also the size and form factor.
An Upgrade Project
When NVIDIA started talking to us about the new GeForce GTX 750 Ti graphics card, one of the key points they emphasized was the potential use for this first-generation Maxwell GPU to be used in the upgrade process of smaller form factor or OEM PCs. Without the need for an external power connector, the GTX 750 Ti provided a clear performance delta from integrated graphics with minimal cost and minimal power consumption, so the story went.
Eager to put this theory to the test, we decided to put together a project looking at the upgrade potential of off the shelf OEM computers purchased locally. A quick trip down the road to Best Buy revealed a PC sales section that was dominated by laptops and all-in-ones, but with quite a few "tower" style desktop computers available as well. We purchased three different machines, each at a different price point, and with different primary processor configurations.
The lucky winners included a Gateway DX4885, an ASUS M11BB, and a Lenovo H520.
What we know about Maxwell
I'm going to go out on a limb and guess that many of you reading this review would not have normally been as interested in the launch of the GeForce GTX 750 Ti if a specific word hadn't been mentioned in the title: Maxwell. It's true, the launch of the GTX 750 Ti, a mainstream graphics card that will sit at the $149 price point, marks the first public release of NVIDIA's new GPU architecture code named Maxwell. It is a unique move for the company to start a new design at this particular point in the stack, but as you'll see in the changes to the architecture as well as its limitations, it all makes a certain bit of sense.
For those of you that don't really care about the underlying magic that makes the GTX 750 Ti possible, you can skip this page and jump right to the details of the new card itself. There I will detail the product specifications, performance comparison and expectations, etc.
If you are interested in learning what makes Maxwell tick, keep reading below.
The NVIDIA Maxwell Architecture
When NVIDIA first approached us about the GTX 750 Ti, they were very light on details about the GPU powering it. Even though the card was confirmed to be built on Maxwell, the company hadn't yet decided whether to do a full architecture deep dive with the press. In the end they landed somewhere between the full detail we are used to getting with a new GPU design and their original, passive stance. It looks like we'll have to wait for the enthusiast-class GPU release to get the full story, but I think the details we have now paint the picture quite clearly.
During the course of designing the Kepler architecture, and then implementing it in the Tegra line in the form of the Tegra K1, NVIDIA's engineering team developed a better sense of how to improve the performance and efficiency of the basic compute design. Kepler was a huge leap forward compared to Fermi, and Maxwell promises to be equally revolutionary. NVIDIA wanted to address GPU power consumption while also finding ways to extract more performance from the architecture at the same power levels.
The logic of the GPU design remains similar to Kepler: a Graphics Processing Cluster (GPC) houses Streaming Multiprocessors (SMs) built from a large number of CUDA cores (stream processors).
GM107 Block Diagram
Readers familiar with the look of Kepler GPUs will instantly see changes in the organization of the various blocks of Maxwell. There are more divisions, more groupings and fewer CUDA cores "per block" than before. As it turns out, this reorganization was part of the ability for NVIDIA to improve performance and power efficiency with the new GPU.
Straddling the R7 and R9 designation
It is often said that the sub-$200 graphics card market is crowded, and it will get even more so over the next 7 days. Today AMD is announcing a new entry into this field, the Radeon R7 265, which seems to straddle the line between their R7 and R9 brands. The product is much closer in its specifications to the R9 270 than to the R7 260X, and as you'll see below, it is built on a very familiar GPU architecture.
AMD claims that the new R7 265 brings a 25% increase in performance to the R7 line of graphics cards. In my testing, this does turn out to be true and also puts it dangerously close to the R9 270 card released late last year. Much like we saw with the R9 290 compared to the R9 290X, the less expensive but similarly performing card might make the higher end model a less attractive option.
Let's take a quick look at the specifications of the new R7 265.
Based on the Pitcairn GPU, a part that made its debut with the Radeon HD 7870 and HD 7850 in early 2012, this card has 1024 stream processors running at 925 MHz, equating to 1.89 TFLOPS of total peak compute power. Unlike the other R7 cards, the R7 265 has a 256-bit memory bus and will come with 2GB of GDDR5 memory running at 5.6 GHz. The card requires a single 6-pin power connection and has a peak TDP of 150 watts - pretty much the maximum of the PCI Express bus plus one power connector. And yes, the R7 265 supports DX 11.2, OpenGL 4.3, and Mantle, just like the rest of the AMD R7/R9 lineup. It does NOT, however, support TrueAudio or the new XDMA CrossFire engine.
| | Radeon R9 270X | Radeon R9 270 | Radeon R7 265 | Radeon R7 260X | Radeon R7 260 |
| --- | --- | --- | --- | --- | --- |
| GPU Code name | Pitcairn | Pitcairn | Pitcairn | Bonaire | Bonaire |
| Rated Clock | 1050 MHz | 925 MHz | 925 MHz | 1100 MHz | 1000 MHz |
| Memory Clock | 5600 MHz | 5600 MHz | 5600 MHz | 6500 MHz | 6000 MHz |
| Memory Bandwidth | 179 GB/s | 179 GB/s | 179 GB/s | 104 GB/s | 96 GB/s |
| TDP | 180 watts | 150 watts | 150 watts | 115 watts | 95 watts |
| Peak Compute | 2.69 TFLOPS | 2.37 TFLOPS | 1.89 TFLOPS | 1.97 TFLOPS | 1.53 TFLOPS |
The table above compares the current AMD product lineup, ranging from the R9 270X down to the R7 260, with the R7 265 directly in the middle. Some specifications make the 265 a much closer relative of the R9 270/270X cards than anything below it, even though the R7 265 has four fewer compute units (256 fewer stream processors) than the R9 270. The biggest differentiator is the 256-bit memory bus that persists: the 179 GB/s of available memory bandwidth is 72% higher than the 104 GB/s of the R7 260X! That will drastically improve performance compared to the rest of the R7 products. Pay no mind to the peak compute of the 260X being higher than that of the R7 265; in real-world testing that advantage never materialized.
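The headline numbers are easy to reproduce from the specifications. A minimal sketch, assuming the usual 2 FLOPs per shader per clock (one fused multiply-add) and the R7 260X's 128-bit memory bus (implied by its 104 GB/s at 6500 MHz, though not listed in the table):

```python
# Recompute the R7 265's peak compute and memory bandwidth from its specs.

def peak_tflops(shaders: int, clock_mhz: float) -> float:
    # 2 FLOPs per shader per clock (fused multiply-add)
    return shaders * 2 * clock_mhz * 1e6 / 1e12

def bandwidth_gbs(bus_bits: int, data_rate_mhz: float) -> float:
    # bus width in bytes x effective data rate
    return bus_bits / 8 * data_rate_mhz * 1e6 / 1e9

r7_265_tflops = peak_tflops(1024, 925)    # ~1.89 TFLOPS
r7_265_bw = bandwidth_gbs(256, 5600)      # ~179 GB/s
r7_260x_bw = bandwidth_gbs(128, 6500)     # ~104 GB/s

advantage = (r7_265_bw / r7_260x_bw - 1) * 100
print(f"{r7_265_tflops:.2f} TFLOPS, {r7_265_bw:.0f} GB/s, +{advantage:.0f}% bandwidth")
```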
What Mantle signifies about GPU architectures
Mantle is a very interesting concept. From the various keynote speeches, it sounds like the API is being designed to address the current state (and trajectory) of graphics processors. GPUs are generalized and highly parallel computation devices, assisted by a little bit of specialized silicon where appropriate. The vendors have even settled on standards, such as IEEE 754 floating point arithmetic, which means the driver has much less reason to shield developers from the underlying architectures.
Still, Mantle is currently a private technology for an unknown number of developers. Without a public SDK, or anything beyond the half-dozen keynotes, we can only speculate on its specific attributes. I, for one, have technical questions and hunches which linger unanswered or unconfirmed, probably until the API is suitable for public development.
Or, until we just... ask AMD.
Our response came from Guennadi Riguer, the chief architect for Mantle. In it, he discusses the API's usage as a computation language, the future of the rendering pipeline, and whether there will be a day where Crossfire-like benefits can occur by leaving an older Mantle-capable GPU in your system when purchasing a new, also Mantle-supporting one.
Q: Mantle's shading language is said to be compatible with HLSL. How will optimizations made for DirectX, such as tweaks during shader compilation, carry over to Mantle? How much tuning will (and will not) be shared between the two APIs?
[Guennadi] The current Mantle solution relies on the same shader generation path that games use with DirectX and includes an open-source component for translating DirectX shaders to Mantle's accepted intermediate language (IL). This enables developers to quickly develop a Mantle code path without any changes to their shaders. This was one of the strongest requests we got from our ISV partners when we were developing Mantle.
Follow-Up: What does this mean, specifically, in terms of driver optimizations? Would AMD, or anyone else who supports Mantle, be able to re-use the effort they spent on tuning their shader compilers (and so forth) for DirectX?
[Guennadi] With the current shader compilation strategy in Mantle, the developers can directly leverage DirectX shader optimization efforts in Mantle. They would use the same front-end HLSL compiler for DX and Mantle, and inside of the DX and Mantle drivers we share the shader compiler that generates the shader code our hardware understands.
A quick look at performance results
Late last week, EA and DICE released the long awaited patch for Battlefield 4 that enables support for the Mantle renderer, the new API technology AMD introduced back in September. Unfortunately, AMD wasn't quite ready for its release with their Catalyst 14.1 beta driver. I wrote a short article previewing the new driver's features, its expected performance with the Mantle version of BF4, and the current state of Mantle. You should definitely read that as a primer before continuing if you haven't yet.
Today, after really just a few short hours with a useable driver, I have only limited results. Still, I know that you, our readers, clamor for ANY information on the topic. I thought I would share what we have thus far.
As I mentioned in the previous story, the Mantle version of Battlefield 4 has the biggest potential to show advantages where the game is CPU limited. AMD calls this the "low hanging fruit" for this early release of Mantle and claims that further optimizations will come, especially for GPU-bound scenarios. That dependency on CPU limitations puts some non-standard requirements on our ability to showcase Mantle's performance capabilities.
For example, the level of the game, and even the section of that level in the BF4 single player campaign, can show drastic swings in Mantle's capabilities. Multiplayer matches show more consistent CPU utilization (and thus could be improved by Mantle), though testing those levels in a repeatable, semi-scientific manner is much more difficult. And, as you'll see in our early results, I even found a couple of instances in which the Mantle version of BF4 ran a smidge slower than the DX11 version.
For our testing, we assembled two systems that differ in CPU performance to simulate the range of processors installed in consumers' PCs. Our standard GPU test bed, included here, is built around a Core i7-3960X Sandy Bridge-E processor specifically to remove the CPU as a bottleneck. We added a system based on the AMD A10-7850K Kaveri APU, which presents a more processor-limited (especially per-thread) configuration overall and should showcase Mantle's benefits more easily.
A troubled launch to be sure
AMD has released some important new drivers with drastic feature additions over the past year. Remember back in August of 2013 when Frame Pacing was first revealed? Today’s Catalyst 14.1 beta release will actually complete the goals AMD set for itself in early 2013: introducing (nearly) complete Frame Pacing integration for non-XDMA GPUs while also adding support for Mantle and HSA capability.
Frame Pacing Phase 2 and HSA Support
When AMD released the first frame pacing capable beta driver in August of 2013, it added support to existing GCN designs (HD 7000-series and a few older generations) at resolutions of 2560x1600 and below. While that definitely addressed a lot of the market, the fact was that CrossFire users were also amongst the most likely to have Eyefinity (3+ monitors spanned for gaming) or even 4K displays (quickly dropping in price). Neither of those advanced display options were supported with any Catalyst frame pacing technology.
That changes today as Phase 2 of the AMD Frame Pacing feature has finally been implemented for products that do not feature the XDMA technology (found in Hawaii GPUs for example). That includes HD 7000-series GPUs, the R9 280X and 270X cards, as well as older generation products and Dual Graphics hardware combinations such as the new Kaveri APU and R7 250. I have already tested Kaveri and the R7 250 in fact, and you can read about its scaling and experience improvements right here. That means that users of the HD 7970, R9 280X, etc., as well as those of you with HD 7990 dual-GPU cards, will finally be able to utilize the power of both GPUs in your system with 4K displays and Eyefinity configurations!
This is finally fixed!!
As of this writing I haven’t had time to do more testing (other than the Dual Graphics article linked above) to demonstrate the potential benefits of this Phase 2 update, but we’ll be targeting it later in the week. For now, it appears that you’ll be able to get essentially the same performance and pacing capabilities on the Tahiti-based GPUs as you can with Hawaii (R9 290X and R9 290).
Catalyst 14.1 beta was also slated to be the first public driver to add support for HSA technology, allowing owners of the new Kaveri APU to take advantage of appropriately enabled applications like LibreOffice and a handful of Adobe apps. However, AMD has since let us know that this feature DID NOT make it into the public release of Catalyst 14.1.
The First Mantle Ready Driver (sort of)
According to AMD, Mantle has been in development for more than two years, and the newly released Catalyst 14.1 beta driver is the first to enable support for the revolutionary new API for PC gaming. Essentially, Mantle is AMD’s attempt at creating a custom API to replace DirectX and OpenGL in order to more directly target the GPU hardware in your PC, specifically AMD’s GCN (Graphics Core Next) designs.
Mantle runs at a lower level than DX or OGL, able to access the hardware resources of the graphics chip more directly, and with that ability it can better utilize the hardware in your system, both CPU and GPU. In fact, the primary benefit of Mantle will be seen in the form of less API overhead and fewer bottlenecks such as real-time shader compilation and code translation.
If you are interested in the meat of what makes Mantle tick and why it was so interesting to us when it was first announced in September of 2013, you should check out our first deep-dive article written by Josh. In it you’ll get our opinion on why Mantle matters and why it has the potential for drastically changing the way the PC is thought of in the gaming ecosystem.
Hybrid CrossFire that actually works
The road to redemption for AMD and its driver team has been a tough one. Since we first started to reveal the significant issues with AMD's CrossFire technology back in January of 2013, the Catalyst driver team has been hard at work on a fix, though I will freely admit it took longer to convince them the issue was real than I would have liked. We saw the first steps of the fix in August of 2013 with the Catalyst 13.8 beta driver. It supported DX11 and DX10 games at resolutions of 2560x1600 and under (no Eyefinity support) but was obviously still less than perfect.
In October, with the release of AMD's latest Hawaii GPU, the company took another step by reorganizing the internal architecture of CrossFire at the chip level with XDMA. The result was frame pacing that worked on the R9 290X and R9 290 at all resolutions, including Eyefinity, though it still left out older DX9 titles.
One thing that had not been addressed, at least not until today, was the set of issues surrounding AMD's Hybrid CrossFire technology, now known as Dual Graphics. This is the ability for an AMD APU with integrated Radeon graphics to pair with a low cost discrete GPU to improve graphics performance and gaming experiences. Recently, Tom's Hardware discovered that Dual Graphics suffered from the exact same scaling issues as standard CrossFire: frame rates in FRAPS looked good, but the actual perceived frame rate was much lower.
A little while ago a new driver made its way into my hands under the name Catalyst 13.35 Beta X, promising to enable Dual Graphics frame pacing with Kaveri and R7 graphics cards. As you'll see in the coming pages, the fix definitely works. And, as I learned after some more probing, the 13.35 driver is a much more important release than it first seemed. Not only is Kaveri-based Dual Graphics frame pacing enabled, but Richland and Trinity are included as well. Even better, this driver will apparently fix resolutions higher than 2560x1600 on desktop graphics too - something you can be sure we are checking on this week!
Just as we saw with the first implementation of Frame Pacing in the Catalyst Control Center, with the 13.35 Beta we are using today you'll find a new set of options in the Gaming section to enable or disable Frame Pacing. The default setting is On, which makes me smile inside every time I see it.
The hardware we are using is the same basic setup we used in my initial review of the AMD Kaveri A8-7600 APU review. That includes the A8-7600 APU, an Asrock A88X mini-ITX motherboard, 16GB of DDR3 2133 MHz memory and a Samsung 840 Pro SSD. Of course for our testing this time we needed a discrete card to enable Dual Graphics and we chose the MSI R7 250 OC Edition with 2GB of DDR3 memory. This card will run you an additional $89 or so on Amazon.com. You could use either the DDR3 or GDDR5 versions of the R7 250 as well as the R7 240, but in our talks with AMD they seemed to think the R7 250 DDR3 was the sweet spot for the CrossFire implementation.
Both the R7 250 and the A8-7600 actually share the same shader count: 384 stream processors, or 6 Compute Units in the new nomenclature AMD is creating. However, the MSI card is clocked at 1100 MHz while the GPU portion of the A8-7600 APU runs at only 720 MHz.
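Those clocks translate into a meaningful throughput gap between the two Dual Graphics partners. A rough sketch, using the common shaders x 2 FLOPs x clock rule of thumb (an approximation, not an AMD-published comparison):

```python
# Rough peak-throughput comparison of the two Dual Graphics partners.

def peak_gflops(shaders: int, clock_mhz: float) -> float:
    return shaders * 2 * clock_mhz / 1000  # 2 FLOPs per shader per clock

msi_r7_250 = peak_gflops(384, 1100)   # ~845 GFLOPS
a8_7600_gpu = peak_gflops(384, 720)   # ~553 GFLOPS

ratio = msi_r7_250 / a8_7600_gpu
print(f"R7 250: {msi_r7_250:.0f} GFLOPS, A8-7600: {a8_7600_gpu:.0f} GFLOPS ({ratio:.2f}x)")
```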
So the question is, has AMD truly fixed the issues with frame pacing with Dual Graphics configurations, once again making the budget gamer feature something worth recommending? Let's find out!
A Refreshing Change
Refreshes are bad, right? I guess that depends on who you talk to; in the case of AMD, a refresh is not a bad thing. For people who live for cutting-edge technology in the 3D graphics world it is not pretty, but reality has reared its ugly head. Process technology is slowing down while product cycles keep moving along at a healthy pace, which essentially necessitates minor refreshes from both AMD and NVIDIA across their product stacks. NVIDIA has taken the Kepler architecture into the latest GTX 700 series of cards. AMD has done the same thing with the GCN architecture, but has radically changed the nomenclature of its products.
Gone are the days of the Radeon HD 7000 series. Instead, AMD has renamed its GCN-based product stack as the Rx 2xx series. The products we are reviewing here are the R9 280X and the R9 270X, formerly known as the HD 7970 and HD 7870 respectively. They differ slightly in clock speeds from the previous versions, but the differences are fairly minimal. What is different is pricing: the R9 280X retails at $299 while the R9 270X comes in at $199.
Asus has taken these cards and applied its latest DirectCU II technology to them. The improvements cover board design, component choices, and cooling, and they are all significant upgrades over the reference designs, especially the cooling. It is good to see such a progression in design, but it is not entirely surprising given that the first HD 7000 series cards debuted in January 2012.
DisplayPort to Save the Day?
During an impromptu meeting with AMD this week, the company's Corporate Vice President for Visual Computing, Raja Koduri, presented me with an interesting demonstration of a technology that allowed the refresh rate of a display on a Toshiba notebook to perfectly match the render rate of the game demo being shown. The result was an image that was smooth and free of tearing. If that sounds familiar, it should. NVIDIA's G-Sync was announced in November of last year and does just that for desktop systems and PC gamers.
Since that November unveiling, I knew AMD would need to respond in some way. The company had basically been silent since NVIDIA's announcement, but that changed for me today, and the information discussed is quite extraordinary. AMD is jokingly calling the technology demonstration "FreeSync".
Variable refresh rates as discussed by NVIDIA.
During the demonstration, AMD's Koduri had two identical systems side by side, each based on a Kabini APU. Both were running a basic graphics demo of a rotating windmill. One was a standard software configuration, while the other had a modified driver that communicated with the panel to enable variable refresh rates. As you likely know from our various discussions of variable refresh rates and NVIDIA's G-Sync technology, this setup results in a much better gaming experience, producing smoother animation on screen without the horizontal tearing associated with disabling V-Sync.
Obviously AMD wasn't using the same controller module that NVIDIA uses in its current G-Sync displays, several of which were announced this week at CES. Instead, the internal connection on the Toshiba notebook was the key factor: Embedded DisplayPort (eDP) apparently includes a feature to support variable refresh rates on LCD panels. The feature was originally intended for power savings on mobile and integrated devices, since refreshing the screen without new content can waste valuable battery resources. But, as AMD's Koduri explained, the same capability can be used to drive a variable refresh rate that smooths out gameplay.
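The benefit of letting the panel refresh when a frame is ready, rather than on a fixed schedule, can be illustrated with a toy timing model. With a fixed 60 Hz refresh and V-Sync on, a finished frame must wait for the next scheduled refresh; with a variable refresh scheme like the one demonstrated, the panel can refresh the moment the frame is done. The frame times below are invented purely for illustration.

```python
# Toy model: fixed 60 Hz presentation vs. variable refresh.
# A frame finished between two fixed refresh ticks must wait for the next
# tick (uneven, judder-prone pacing); variable refresh displays it at once.
import math

frame_times_ms = [14.0, 22.0, 17.0, 25.0, 15.0]  # hypothetical render times
REFRESH_MS = 1000 / 60  # ~16.67 ms between fixed 60 Hz refreshes

t = 0.0
for ft in frame_times_ms:
    t += ft  # moment the GPU finishes this frame
    fixed = math.ceil(t / REFRESH_MS) * REFRESH_MS  # next 60 Hz tick
    print(f"frame done {t:6.1f} ms | fixed refresh shows it at {fixed:6.1f} ms "
          f"(waits {fixed - t:4.1f} ms) | variable shows it at {t:6.1f} ms")
```

The variable-refresh column tracks the render rate exactly, which is the smoothness AMD's windmill demo was showing off; the fixed-refresh column accumulates irregular waits whenever a frame misses a tick.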
Introduction and Unboxing
We've been covering NVIDIA's new G-Sync tech for quite some time now, and displays so equipped are finally shipping. With all of the excitement going on, I became increasingly interested in the technology, especially since I'm one of those guys who is extremely sensitive to input lag and the inevitable image tearing that results from vsync-off gaming. Increased discussion on our weekly podcast, coupled with the inherent difficulty of demonstrating the effects without seeing G-Sync in action in person, led me to pick up my own ASUS VG248QE panel for this evaluation and review. We've generated plenty of other content on the G-Sync tech itself, so let's get straight to what we're after today: evaluating the out-of-box installation process of the G-Sync upgrade kit.
All items are well packed and protected.
Included are installation instructions, a hard plastic spudger for opening the panel, a couple of stickers, and all necessary hardware bits to make the conversion.
Sapphire Triple Fan Hawaii
It was mid-December when the very first custom-cooled AMD Radeon R9 290X card hit our offices in the form of the ASUS R9 290X DirectCU II. It was cooler, quieter, and faster than the reference model, a combination that is hard to pass up (if you could actually buy one). More and more of these custom models, in both R9 290 and R9 290X flavors, are filtering their way into PC Perspective. Next on the chopping block is the Sapphire Tri-X model of the R9 290X.
Sapphire's triple fan cooler already made quite an impression on me when we tested a version of it in our R9 280X retail round-up from October. It kept the GPU cool, but it was also the loudest of the retail cards tested at the time. For the R9 290X model, Sapphire has tweaked the fan speeds and the design of the cooler, making it a better overall solution, as you will soon see.
The key tenet for any custom-cooled AMD R9 290/290X card is to beat AMD's reference cooler on performance, noise, and clock-rate consistency. Does Sapphire meet these goals?
The Sapphire R9 290X Tri-X 4GB
While the ASUS DirectCU II card was taller and more menacing than the reference design, the Sapphire Tri-X cooler is longer and sleeker than the competition thus far. The bright yellow and black color scheme is both attractive and unique, though it lacks the LED light that the 280X version showcased.
Sapphire has overclocked this model slightly, to 1040 MHz on the GPU clock, which puts it in good company.
| | AMD Radeon R9 290X | ASUS R9 290X DirectCU II | Sapphire R9 290X Tri-X |
|---|---|---|---|
| Rated Clock | 1000 MHz | 1050 MHz | 1040 MHz |
| Memory Clock | 5000 MHz | 5400 MHz | 5200 MHz |
| TDP | ~300 watts | ~300 watts | ~300 watts |
| Peak Compute | 5.6 TFLOPS | 5.6+ TFLOPS | 5.6+ TFLOPS |
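The peak-compute row follows directly from the clocks. Hawaii XT carries 2816 stream processors (a spec not listed in the table, but well documented), each capable of 2 FP32 operations per clock via fused multiply-add, so a quick check looks like this:

```python
# Sanity check on the peak-compute row above.
# Hawaii XT: 2816 stream processors, 2 FP32 ops/clock (FMA).

def tflops(shaders: int, clock_mhz: int) -> float:
    """Peak TFLOPS = shaders * 2 ops/clock * clock in MHz / 1e6."""
    return shaders * 2 * clock_mhz / 1e6

print(f"Reference @ 1000 MHz: {tflops(2816, 1000):.2f} TFLOPS")  # 5.63
print(f"ASUS      @ 1050 MHz: {tflops(2816, 1050):.2f} TFLOPS")  # 5.91
print(f"Sapphire  @ 1040 MHz: {tflops(2816, 1040):.2f} TFLOPS")  # 5.86
```

All three land at 5.6 TFLOPS or a bit above, matching the table, with the overclocked cards gaining proportionally to their clock bumps.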
There are three fans on the Tri-X design, as the name implies, and all three are the same size, unlike the R9 280X cooler with its smaller central fan.
The First Custom R9 290X
It has been a crazy launch for the AMD Radeon R9 series of graphics cards. When we first reviewed the R9 290X and the R9 290, we came away very impressed with the GPU and the performance it provided, and our reviews of both products resulted in Gold awards. The 290X represented a new class of single-GPU performance while the R9 290 nearly matched it at a crazy $399 price tag.
But there were issues. Big, glaring issues. Clock speeds varied hugely depending on the game: a GPU rated at "up to 1000 MHz" ran at 899 MHz in Skyrim and 821 MHz in Bioshock Infinite. Those are significant clock deltas, and they translate almost directly into performance deltas. The speeds also changed based on the "hot" or "cold" status of the graphics card - had it been warmed up and active for 10 minutes prior to testing? If so, performance was measurably lower than with a "cold" GPU that had just started.
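Expressed as percentages, those observed clocks fall well short of the rated maximum, and the shortfalls line up roughly with the performance deficits we measured:

```python
# Clock deficits quoted above, as percentages below the rated "up to
# 1000 MHz" specification. The game clocks come from our testing.
RATED_MHZ = 1000
observed = {"Skyrim": 899, "Bioshock Infinite": 821}

for game, mhz in observed.items():
    deficit_pct = (RATED_MHZ - mhz) / RATED_MHZ * 100
    print(f"{game}: {mhz} MHz -> {deficit_pct:.1f}% below rated clock")
# Skyrim comes out roughly 10% down, Bioshock Infinite nearly 18% down.
```

Since GPU-bound frame rates scale almost linearly with core clock, a 10-18% clock deficit is effectively a 10-18% performance deficit.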
That issue was not necessarily a deal killer; it just made us rethink how we test GPUs. The much bigger deal was that many people were seeing lower performance on retail-purchased cards than on the reference cards sent to the press for review. In our testing in November, the retail card we purchased, which used the exact same cooler as the reference model, ran 6.5% slower than we expected.
The obvious hope was that retail cards with custom PCBs and coolers from AMD's partners would somehow fix this whole dilemma. Today we see if that hope was well founded.
A slightly smaller MARS
The NVIDIA GeForce GTX 760 was released in June of 2013. Based on the same GK104 GPU as the GTX 680, GTX 670 and GTX 770, the GTX 760 disabled a couple more clusters of processor cores to offer impressive performance at a lower cost than we had seen previously. My review of the GTX 760 was very positive, as NVIDIA had priced it aggressively against competing products from AMD.
As for ASUS, they have a storied history with the MARS brand. Typically an over-built custom PCB with two of the highest end NVIDIA GPUs stapled together, the ASUS MARS cards have been limited edition products with a lot of cachet around them. The first MARS card was a dual GTX 285 product that was the first card to offer 4GB of memory (though 2GB per GPU of course). The MARS II took a pair of GTX 580 GPUs and pasted them on a HUGE card and sold just 1000 of them worldwide. It was heavy, expensive and fast; blazing fast. But at a price of $1200+ it wasn't on the radar of most PC gamers.
Interestingly, a MARS iteration for the GTX 680 never materialized, and why that is the case is still a matter of debate. Some point the finger at poor sales; others think NVIDIA restricted ASUS' engineers from being as creative as they needed to be.
Today's release of the ASUS ROG MARS 760 is a bit different - this is still a high end graphics card but it doesn't utilize the fastest single-GPU option on the market. Instead ASUS has gone with a more reasonable design that combines a pair of GTX 760 GK104 GPUs on a single PCB with a PCI Express bridge chip between them. The MARS 760 is significantly smaller and less power hungry than previous MARS cards but it is still able to pack a punch in the performance department as you'll soon see.
Quality time with G-Sync
Readers of PC Perspective will already know quite a lot about NVIDIA's G-Sync technology. When it was first unveiled in October, we were at the event and were able to listen to NVIDIA executives, product designers and engineers discuss and elaborate on what it is, how it works and why it benefits gamers. This revolutionary new take on how displays and graphics cards talk to each other enables a new class of variable refresh rate monitors, offering the responsiveness of running with V-Sync off alongside the tear-free images normally reserved for gamers who enable V-Sync.
NVIDIA's Prototype G-Sync Monitor
We were lucky enough to be at NVIDIA's Montreal tech day while John Carmack, Tim Sweeney and Johan Andersson were on stage discussing NVIDIA G-Sync among other topics. All three developers were incredibly excited about G-Sync and what it meant for gaming going forward.
Also on that day, I published a somewhat detailed editorial that dug into the background of V-Sync technology, why the 60 Hz refresh rate exists and why the system in place today is flawed. It led up to an explanation of how G-Sync works, including how the technology extends the VBLANK interval, and detailed how NVIDIA is enabling the graphics card to retake control over the entire display pipeline.
In reality, if you want the best explanation of G-Sync, how it works and why it is a stand-out technology for PC gaming, you should take the time to watch our interview with NVIDIA's Tom Petersen, one of the primary inventors of G-Sync. In the video we go through quite a bit of technical explanation of how displays work today and how the G-Sync technology changes gaming for the better. It runs over an hour, but I selfishly believe it is the most concise and well-put-together collection of information about G-Sync available to our readers.
The story today is more about extensive hands-on testing with the G-Sync prototype monitors. The displays we received this week were modified versions of the 144 Hz ASUS VG248QE gaming panel, the same model that end users should, in theory, be able to upgrade themselves sometime in the future. These are 1920x1080 TN panels and, though they have incredibly high refresh rates, they aren't usually regarded as the highest-image-quality displays on the market. The story of what you get with G-Sync, however, is really about stutter (or lack thereof), tearing (or lack thereof), and a better overall gaming experience for the user.