Author:
Manufacturer: Rebellion

Quick Performance Comparison

Earlier this week, we posted a brief story that looked at the performance of Middle-earth: Shadow of Mordor on the latest GPUs from both NVIDIA and AMD. Last week also marked the release of the v1.11 patch for Sniper Elite 3 that introduced an integrated benchmark mode as well as support for AMD Mantle.

I decided that this was worth a quick look with the same lineup of graphics cards that we used to test Shadow of Mordor. Let's see how the NVIDIA vs. AMD battle stacks up here.

For those unfamiliar with the Sniper Elite series, it focuses on the impact of an individual sniper on a particular conflict, and Sniper Elite 3 doesn't change that formula much. If you have ever seen video of a bullet slowly traveling through a body, letting you see the bones and muscle of the enemy being killed...you've probably been watching the Sniper Elite games.

screen3.jpg

Gore and such aside, the game is fun and combines sniper action with stealth and puzzles. It's worth a shot if you are the kind of gamer that likes to use the sniper rifles in other FPS titles.

But let's jump straight to performance. You'll notice that in this story we are not using our Frame Rating capture performance metrics. That is a direct result of wanting to compare the Mantle and DX11 rendering paths - since we have no way to create an overlay for Mantle, we have resorted to using FRAPS and the integrated benchmark mode in Sniper Elite 3.

Our standard GPU test bed was used with a Core i7-3960X processor, an X79 motherboard, 16GB of DDR3 memory, and the latest drivers for both parties involved. That means we installed Catalyst 14.9 for AMD and 344.16 for NVIDIA. We'll be comparing the GeForce GTX 980 to the Radeon R9 290X, and the GTX 970 to the R9 290. We will also look at SLI/CrossFire scaling at the high end.

Continue reading our performance results in Sniper Elite 3!!

Manufacturer: NVIDIA

If there is one message that I get from NVIDIA's GeForce GTX 900M-series announcement, it is that laptop gaming is a first-class citizen in their product stack. Before even mentioning the products, the company provided relative performance differences between high-end desktops and laptops. Most of the rest of the slide deck is showing feature-parity with the desktop GTX 900-series, and a discussion about battery life.

nvidia-maxwell-mobile-logo.jpg

First, the parts. Two products have been announced: the GeForce GTX 980M and the GeForce GTX 970M. Both are based on the 28nm Maxwell architecture. In terms of shading performance, the GTX 980M has a theoretical maximum of 3.189 TFLOPs, and the GTX 970M is calculated at 2.365 TFLOPs (at base clock). On the desktop, this is very close to the GeForce GTX 770 and the GeForce GTX 760 Ti, respectively. This metric is most useful when a game is bound by shader throughput, as at high resolutions with complex shaders.

The full specifications are:

                    GTX 980M    GTX 970M    GTX 980     GTX 970     GTX 880M
                                            (Desktop)   (Desktop)   (Laptop)
  CUDA Cores        1536        1280        2048        1664        1536
  Core (MHz)        1038        924         1126        1050        954
  Perf. (TFLOP)     3.189       2.365       4.612       3.494       2.930
  Memory            Up to 4GB   Up to 3GB   4GB         4GB         4GB/8GB
  Memory Rate       2500 MHz    2500 MHz    7.0 GT/s    7.0 GT/s    2500 MHz
  Memory Width      256-bit     192-bit     256-bit     256-bit     256-bit
  Architecture      Maxwell     Maxwell     Maxwell     Maxwell     Kepler
  Process Node      28nm        28nm        28nm        28nm        28nm
  DirectX Version   12.0        12.0        12.0        12.0        11.0
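The TFLOP figures in the table follow directly from the core count and base clock: each CUDA core can retire one fused multiply-add (two FLOPs) per clock cycle. A quick sketch to reproduce them:

```python
# Theoretical single-precision throughput for an NVIDIA GPU:
# each CUDA core retires one FMA (multiply + add = 2 FLOPs) per clock.
def peak_tflops(cuda_cores: int, base_mhz: int) -> float:
    flops_per_clock = 2  # one fused multiply-add
    return cuda_cores * base_mhz * 1e6 * flops_per_clock / 1e12

# Values from the specification table above (base clocks)
for name, cores, mhz in [
    ("GTX 980M", 1536, 1038),
    ("GTX 970M", 1280, 924),
    ("GTX 980 (Desktop)", 2048, 1126),
]:
    print(f"{name}: {peak_tflops(cores, mhz):.3f} TFLOPs")
```

Running this reproduces the 3.189, 2.365, and 4.612 TFLOP entries in the table, confirming the quoted numbers are computed at base clock rather than boost.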

As for the features, they should be familiar to anyone who paid attention to both the desktop 900-series and laptop 800M-series product launches. From desktop Maxwell, the 900M-series is getting VXGI, Dynamic Super Resolution, and Multi-Frame Sampled AA (MFAA). From the latest generation of Kepler laptops, the new GPUs are getting an updated BatteryBoost technology. From the rest of the GeForce ecosystem, they will also get GeForce Experience, ShadowPlay, and so forth.

For VXGI, DSR, and MFAA, please see Ryan's discussion for the desktop Maxwell launch. Information about these features is basically identical to what was given in September.

nvidia-maxwell-battery.jpg

BatteryBoost, on the other hand, is a bit different. NVIDIA claims that the biggest change is just raw performance and efficiency, giving you more headroom to throttle. Perhaps more interesting though, is that GeForce Experience will allow separate one-click optimizations for both plugged-in and battery use cases.

The power efficiency demonstrated by the Maxwell GPU in Ryan's original GeForce GTX 980 and GTX 970 review is even more beneficial in the notebook market, where thermal designs are physically constrained. Thinner and lighter gaming notebooks with longer battery life see a tremendous advantage from a GPU that can run at near peak performance on the maximum power output of an integrated battery. In NVIDIA's presentation, they mention that while notebooks on AC power can draw as much as 230 watts, batteries tend to peak around 100 watts. A full speed, desktop-class GTX 980 has a TDP of just 165 watts, compared to the 250 watts of a Radeon R9 290X, which translates into notebook GPU performance that more closely mirrors its desktop brethren.

nvidia-maxwell-mobile-designs.jpg

Of course, you probably will not buy your own laptop GPU; rather, you will be buying devices which integrate these. There are currently five designs across four manufacturers that are revealed (see image above). Three contain the GeForce GTX 980M, one has a GTX 970M, and the other has a pair of GTX 970Ms. Prices and availability are not yet announced.

AMD Dropping R9 290X to $399, R9 290 to $299

Subject: Graphics Cards | October 6, 2014 - 03:21 PM |
Tagged: radeon, R9 290X, r9 290, hawaii, GTX 980, GTX 970, geforce, amd

On Saturday, while finishing up the writing of our Shadow of Mordor performance story, I noticed something quite interesting: the prices of AMD's flagship Radeon products had all come down quite a bit. In an obvious response to the release of NVIDIA's new GeForce GTX 980 and GTX 970, prices on the Radeon R9 290X and the Radeon R9 290 have been cut in very aggressive fashion.

UPDATE: A couple of individual cards appear to be showing up as $360 and $369 on Newegg!

pricedrop1.jpg

Amazon.com is showing some R9 290X cards at $399

For now, Amazon.com is only listing the triple-fan Gigabyte R9 290X Windforce card at $399, though Newegg.com has a couple as well.

pricedrop2.jpg

Amazon.com also has several R9 290 cards for $299

And again, Newegg.com has some other options for R9 290 cards at these lower prices.

Let's assume that these price drops are permanent, which seems likely based on AMD's history of market adjustments. That shifts the high end GPU market considerably.

     
GeForce GTX 980 4GB   $549   vs.   Radeon R9 290X 4GB   $399
GeForce GTX 970 4GB   $329   vs.   Radeon R9 290 4GB    $299

The battle for that lower end spot between the GTX 970 and R9 290 is now quite a bit tighter, though NVIDIA's Maxwell architecture still has a positive outlook against the slightly older Hawaii GPU. Our review of the GTX 970 shows that it is indeed faster than the R9 290, though it no longer has the significant cost advantage it had at release. The GTX 980, however, is a much tougher sell over the Radeon R9 290X for PC gamers concerned with performance per dollar above all else. I would still consider the GTX 980 faster than the R9 290X...but is it $150 faster? That's a nearly 38% price premium NVIDIA now has to contend with.
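As a sanity check on that gap, here is a quick sketch computing the premium NVIDIA asks at each tier, relative to AMD's newly lowered prices:

```python
# Percentage premium of the NVIDIA card over its AMD competitor,
# computed against the AMD price as the baseline.
def premium_pct(nvidia_price: float, amd_price: float) -> float:
    return (nvidia_price - amd_price) / amd_price * 100

print(f"GTX 980 ($549) vs R9 290X ($399): {premium_pct(549, 399):.1f}%")
print(f"GTX 970 ($329) vs R9 290 ($299):  {premium_pct(329, 299):.1f}%")
```

The top-tier premium works out to roughly 38%, while the GTX 970 carries only about a 10% premium over the R9 290, which is why the lower tier remains the closer fight.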

NVIDIA has proven that it is comfortable staying in this position against AMD, having maintained it through essentially the entire life of the GTX 680 and GTX 780 product lines. AMD is more willing to make price cuts to pull the Radeon lineup back into the spotlight. Though market share between the competitors didn't change much over the previous six months, I'll be very curious to see how these two strategies continue to play out.

Author:
Manufacturer: WB Games

Testing Notes

In what can most definitely be called the best surprise of the fall game release schedule, the open-world action game set in the Lord of the Rings world, Middle-earth: Shadow of Mordor has been receiving impressive reviews from gamers and the media. (GiantBomb.com has a great look at it if you are new to the title.) What also might be a surprise to some is that the PC version of the game can be quite demanding on even the latest PC hardware, pulling in frame rates only in the low-60s at 2560x1440 with its top quality presets.

ShadowOfMordor-2014-10-03-14-12-17-99_0.jpg

Late last week I spent a couple of days playing around with Shadow of Mordor as well as the integrated benchmark found inside the Options menu. I wanted to get an idea of the performance characteristics of the game to determine if we might include this in our full-time game testing suite update we are planning later in the fall. To get some sample information I decided to run through a couple of quality presets with the top two cards from NVIDIA and AMD and compare them.


Without a doubt, the visual style of Shadow of Mordor is stunning – with the game settings cranked up, the world, characters, and fighting scenes look and feel amazing. To be clear, in the build-up to this release we had heard almost nothing from the developer or from NVIDIA (there is an NVIDIA splash screen at the beginning) about the title, which is out of the ordinary. If you are looking for a game that is both fun to play (I am 4+ hours in myself) and can provide a "wow" factor to show off your PC rig, then this is definitely worth picking up.

Continue reading our performance overview of Middle-earth: Shadow of Mordor!!

DirectX 12 Shipping with Windows 10

Subject: Graphics Cards | October 3, 2014 - 03:18 AM |
Tagged: microsoft, DirectX, DirectX 12, windows 10, threshold, windows

A Microsoft blog posting confirms: "The final version of Windows 10 will ship with DirectX 12". To me, this seems like a fairly obvious statement. The loose dates provided for both the OS and the availability of retail games suggest that the two would launch at roughly the same time. The article also claims that DirectX 12 "Early Access" members will be able to develop with the Windows 10 Technical Preview. In addition to Unreal Engine 4 (for Epic Games subscribers), accepted early access developers will also receive source access to Intel's Asteroids demo, shown at SIGGRAPH 2014.

windows-directx12-landscapes.jpg

Our readers might find this information slightly disappointing, as it could be interpreted to mean that DirectX 12 will not be coming to Windows 7 (or even 8.x). While it does not look as hopeful as before, Microsoft never, at any point, explicitly says that it will not come to older operating systems. It still might.

Source: Microsoft

The MSI GeForce GTX 970 GAMING 4G is sitting right in the sweet spot

Subject: Graphics Cards | October 2, 2014 - 04:01 PM |
Tagged: msi, GTX 970 GAMING 4G, factory overclocked

It is sadly out of stock on both Newegg and Amazon right now, but MSI's $350 GTX 970 GAMING 4G is an incredible buy and worth waiting for. The factory overclock already applied to this card is quite nice, a core rated at 1140/1279MHz, which [H]ard|OCP actually observed boosting as high as 1366MHz before they overclocked it themselves and reached 1542MHz, where the 110% GPU power limitation ended their fun. It would seem the card is capable of more, if only you were not prevented from feeding it more than that extra 10%. The card was already beating the 780 Ti and R9 290 before the overclock, but you should read the full review to see what happened once they tested it at full speed.

1411976595nitFZ11Eg1_1_9_l.jpg

"The MSI GeForce GTX 970 GAMING 4G video card is making the GeForce GTX 780 and AMD Radeon R9 290 obsolete. This $349 video card puts up a fight and punches in a win at this price. The overclock alone is somewhat staggering. If you are about to spend money on a GPU, don't miss this one."

Here are some more Graphics Card articles from around the web:


Source: [H]ard|OCP
Author:
Manufacturer: EVGA

Installation and Overview

While once a very popular way to cool your PC, the art of custom water loops tapered off in the early 2000s as the benefits of better cooling, and of overclocking in general, met with diminishing returns. In its place grew a host of companies offering closed loop systems: individually sealed coolers for processors and even graphics cards that offered some of the benefits of traditional water cooling (noise, performance) without the hassle of setting up a loop manually.

A bit of a resurgence has occurred in the last year or two, though, as the art and styling of custom water loop cooling reasserts itself in the PC enthusiast mindset. Some companies never left (EVGA being one of them), but it appears that many users are returning to it. Consider me part of that crowd.

IMG_9943.jpg

During a live stream we held with EVGA's Jacob Freeman, the very first prototype of the EVGA Hydro Copper was shown and discussed. Lucky for us, I was able to coerce Jacob into leaving the water block with me for a few days to do some of our testing and see just how much capability we could pull out of the GM204 GPU and a GeForce GTX 980.

Our performance preview today will look at the water block itself, installation, performance and temperature control. Keep in mind that this is a very early prototype, the first one to make its way to US shores. There will definitely be some changes and updates (in both the hardware and the software support for overclocking) before final release in mid to late October. Should you consider this ~$150 Hydro Copper water block for your GTX 980?

Continue reading our preview of the EVGA GTX 980 Hydro Copper Water Block!!

Broadwell-U (BGA) Lineup Leaked

Subject: Graphics Cards, Processors | September 30, 2014 - 03:33 AM |
Tagged: iris, Intel, core m, broadwell-y, broadwell-u, Broadwell

Intel's upcoming 14nm product line, Broadwell, is expected to have six categories of increasing performance. Broadwell-Y, later branded Core M, is part of the soldered BGA family at expected TDPs of 3.5 to 4.5W. Above this is Broadwell-U, which are also BGA packages, and thus require soldering by the system builder. VR-Zone China has a list of seemingly every 15W SKU in that category. 28W TDP "U" products are expected to be available in the following quarter, but are not listed.

intel-broadwell-u_1.png

Image Credit: VR-Zone

As for those 15W parts, there are seventeen (17!) of them, ranging from Celeron to Core i7. While each product is dual-core, the Core i3 and up have Hyper-Threading, increasing parallelism to four simultaneous tasks. In terms of cache, Celerons and Pentiums will have 2MB, Core i7s will have 4MB, and everything in between will have 3MB. Otherwise, the products vary in the clock frequency at which they were binned and in the integrated graphics they contain.

intel-broadwell-u_2.png

Image Credit: VR-Zone

These iGPUs range from "Intel HD Graphics" on the Celerons and Pentiums to "Intel Iris Graphics 6100" on one Core i7, two Core i5s, and one Core i3. The rest pretty much alternate between Intel HD Graphics 5500 and Intel HD Graphics 6000. The maximum frequency of any given iGPU can vary within the same product line, but only by about 100 MHz at most. The exact spread is below.

  • Intel HD Graphics: 300 MHz base clock, 800 MHz at load.
  • Intel HD Graphics 5500: 300 MHz base clock, 850-950 MHz at load (depending on SKU).
  • Intel HD Graphics 6000: 300 MHz base clock, 1000 MHz at load.
  • Intel Iris Graphics 6100: 300 MHz base clock, 1000-1100 MHz at load (depending on SKU).

Unfortunately, without the number of shader units to go along with the core clock, we cannot derive a FLOP value yet. That is a very important metric as resolution and shader complexity increase, and it would provide a relatively fair way to compare the new parts against previous offerings at higher resolutions and quality settings, especially under DirectX 12 I would assume.
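Should the shader unit counts surface later, the derivation is mechanical. As a sketch, assuming Intel's Gen8 execution units keep two 4-wide FMA-capable SIMD FPUs (16 FP32 FLOPs per EU per clock); the EU counts below are hypothetical placeholders for illustration, not figures from the leak:

```python
# Peak FP32 throughput for a (hypothetical) Intel Gen8 iGPU configuration:
# each EU has two 4-wide SIMD FPUs, each capable of FMA (2 FLOPs),
# giving 2 * 4 * 2 = 16 FLOPs per EU per clock.
def igpu_gflops(eu_count: int, max_mhz: int) -> float:
    flops_per_eu_per_clock = 2 * 4 * 2
    return eu_count * flops_per_eu_per_clock * max_mhz / 1000

# Illustrative EU counts only -- the leak does not include them
print(f"24 EUs @ 950 MHz:  {igpu_gflops(24, 950):.1f} GFLOPs")
print(f"48 EUs @ 1100 MHz: {igpu_gflops(48, 1100):.1f} GFLOPs")
```

Once real EU counts appear, plugging them in alongside the leaked load clocks above would give the comparison metric the paragraph describes.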

intel-broadwell-iris-graphics-6100.png

Image Credit: VR-Zone

Probably the most interesting part to me is that "Intel HD Graphics" without a number meant GT1 in Haswell; starting with Broadwell, it has apparently been upgraded to GT2. As we can see from even the 4.5W Core M processors, Intel is taking graphics seriously. It is unclear whether their intention is to respect gaming's influence on device purchases, or whether they believe generalized GPU compute will be "a thing" very soon.

Source: VR-Zone

AMD Catalyst 14.9 for Windows

Subject: Graphics Cards | September 29, 2014 - 05:33 PM |
Tagged: whql, radeon, Catalyst 14.9, amd

AMD.jpg

The full release notes are available here or take a look at the highlights below.

The latest version of the AMD Catalyst Software Suite, AMD Catalyst 14.9 is designed to support the following Microsoft Windows platforms:

Highlights of AMD Catalyst 14.9 Windows Driver

  • Support for the AMD Radeon R9 280
  • Performance improvements (comparing AMD Catalyst 14.9 vs. AMD Catalyst 14.4)
    • 3DMark Sky Diver improvements
      • AMD A4 6300 – improves up to 4%
      • Enables AMD Dual Graphics / AMD CrossFire support
    • 3DMark Fire Strike
      • AMD Radeon R9 290 Series - improves up to 5% in Performance Preset
    • 3DMark11
      • AMD Radeon R9 290 Series / R9 270 Series - improves up to 4% in Entry and Performance Preset
    • BioShock Infinite
      • AMD Radeon R9 290 Series – 1920x1080 - improves up to 5%
    • Company of Heroes 2
      • AMD Radeon R9 290 Series - improves up to 8%
    • Crysis 3
      • AMD Radeon R9 290 Series / R9 270 Series – improves up to 10%
    • Grid Auto Sport
      • AMD CrossFire profile
    • Murdered Soul Suspect
      • AMD Radeon R9 290X (2560x1440, 4x MSAA, 16x AF) – improves up to 50%
      • AMD Radeon R9 290 Series / R9 270 Series – improves up to 6%
      • CrossFire configurations improve scaling up to 75%
    • Plants vs. Zombies (Direct3D performance improvements)
      • AMD Radeon R9 290X - 1920x1080 Ultra – improves up to 11%
      • AMD Radeon R9 290X - 2560x1600 Ultra – improves up to 15%
      • AMD Radeon R9 290X CrossFire configuration (3840x2160 Ultra) - 92% scaling
    • Batman Arkham Origins:
      • AMD Radeon R9 290X (4x MSAA) – improves up to 20%
      • CrossFire configurations see up to a 70% gain in scaling
    • Wildstar
      • PowerXpress profile: performance improvements to improve application smoothness
      • Performance improves up to 30% on the AMD Radeon R9 and R7 Series of products for both single GPU and Multi-GPU configurations
    • Tomb Raider
      • AMD Radeon R9 290 Series – improves up to 5%
    • Watch Dogs
      • AMD Radeon R9 290 Series / R9 270 Series – improves up to 9%
      • AMD CrossFire – Frame pacing improvement
      • Improved CrossFire performance – up to 20%
    • Assassin's Creed IV
      • Improves CrossFire scaling (3840x2160 High Settings) up to 93% (CrossFire scaling improvement of 25% compared to AMD Catalyst 14.4)
    • Lichdom
      • Improves performance for single GPU and Multi-GPU configurations
    • StarCraft II
      • AMD Radeon R9 290X (2560x1440, AA, 16x AF) – improves up to 20%

AMD Eyefinity enhancements

  • Mixed Resolution Support
    • A new architecture providing brand new capabilities
    • Display groups can be created with monitors of different resolutions (including different sizes and shapes)
    • Users have a choice of how the surface is created over the display group
      • Fill – legacy mode, best for identical monitors
      • Fit – create the Eyefinity surface using best available rectangular area with attached displays
      • Expand – create a virtual Eyefinity surface using desktops as viewports onto the surface
    • Eyefinity Display Alignment
      • Enables control over alignment between adjacent monitors
      • One-Click Setup – the driver detects the layout of the extended desktop
      • Can create an Eyefinity display group using this layout in one click!
      • New user controls for video color and display settings
      • Greater control over Video Color Management:
        • Controls have been expanded from a single slider for controlling Boost and Hue to per color axis
        • Color depth control for Digital Flat Panels (available on supported HDMI and DP displays)
        • Allows users to select different color depths per resolution and display

AMD Mantle enhancements

  • Mantle now supports AMD Mobile products with Enduro technology
    • Battlefield 4: AMD Radeon HD 8970M (1366x768; high settings) – 21% gain
    • Thief: AMD Radeon HD 8970M (1920x1080; high settings) – 14% gain
    • Star Swarm: AMD Radeon HD 8970M (1920x1080; medium settings) – 274% gain
  • Enables support for Multi-GPU configurations with Thief (requires the latest Thief update)
  • AMD AM1 JPEG decoding acceleration
    • JPEG decoding acceleration was first enabled on the A10 APU Series in AMD Catalyst 14.1 beta, and has now been extended to the AMD AM1 Platform
    • Provides fast, power-efficient JPEG decompression

Resolved Issues

  • 60Hz SST flickering has been identified as an issue with non-standard display timings exhibited by the AOC U2868PQU panel on certain AMD Radeon graphics cards. A software workaround has been implemented in the AMD Catalyst 14.9 driver to resolve the display timing issues with this display
  • Users seeing flickering issues in 60Hz SST mode are further encouraged to obtain newer display firmware from their monitor vendor that will resolve flickering at its origin.
  • Users are additionally advised to utilize DisplayPort-certified cables to ensure the integrity of the DisplayPort data connection.
  • 4K panel flickering issues found on the AMD Radeon R9 290 Series and AMD Radeon HD 7800 Series
  • Screen tearing observed on AMD CrossFire systems with Eyefinity portrait display configurations
  • Instability issues for Grid Autosport when running in 2x1 or 1x2 Eyefinity configurations
  • Geometry corruption in State of Decay
Source: AMD

Apple A8 Die Shot Released (and Debated)

Subject: Graphics Cards, Processors, Mobile | September 29, 2014 - 01:53 AM |
Tagged: apple, a8, a7, Imagination Technologies, PowerVR

First, Chipworks released a die shot of the new Apple A8 SoC (mirrored at archive.org). It is based on TSMC's 20nm fabrication process, for which Apple allegedly bought the entire capacity. From there, a bit of a debate arose over what each group of transistors represents. All sources agree that it is based around a dual-core CPU, but the GPU is a bit polarizing.

apple-a8-dieshot-chipworks.png

Image Credit: Chipworks via Ars Technica

Most sources, including Chipworks, Ars Technica, and AnandTech, believe that it is a quad-core graphics processor from Imagination Technologies. Specifically, they expect it is the GX6450 from the PowerVR Series 6XT. This is a narrow upgrade over the G6430 found in the Apple A7 processor, which is in line with the initial benchmarks that we saw (and not in line with the 50% GPU performance increase that Apple claims). For programmability, the GX6450 is equivalent to a DirectX 10-level feature set, unless it was extended by Apple, which I doubt.

apple-a8-dieshot-dailytech.png

Image Source: DailyTech

DailyTech has their own theory, suggesting that it is a horizontally-aligned GX6650. From my observation, their "Cluster 2" and "Cluster 5" do not look at all identical to the other four, so I doubt their claims. I expect that they heard Apple's 50% claims, expected six GPU cores as the rumors originally indicated, and saw cores that were not there.

Which brings us back to the question of, "So what is the 50% increase in performance that Apple claims?" Unless they had a significant increase in clock rate, I still wonder if Apple is claiming that their increase in graphics performance will come from the Metal API even though it is not exclusive to new hardware.

But from everything we have seen so far, it is just a handful of percent better.