8 GB Variants of the R9 290X Coming This Month

Subject: General Tech, Graphics Cards | November 5, 2014 - 12:56 PM |
Tagged: radeon, R9 290X, R9, amd, 8gb

With the current range of AMD’s R9 290X cards sitting at 4 GB of memory, listings for an 8 GB version have appeared at an online retailer. As far back as March, Sapphire was rumored to be building an 8 GB variant. Those rumors were supposedly quashed last month by AMD and Sapphire. However, AMD has since confirmed the existence of the new additions to the series. Pre-orders have appeared online and are said to be shipping out this month.

amd-r9-290x-8gb-GX-353-SP_88860_600.jpg

Image Credit: Overclockers UK

With 8 GB of GDDR5 memory and price tags between $480 and $520, these new additions, as expected, do not come cheap. Compared to the 4 GB versions of the R9 290X line, which run about $160 less according to the same retailer, is it worth upgrading at this stage? For people using a single 1080p monitor, the answer is likely no. For those with multi-screen setups, or those with deep enough pockets to own a 4K display, however, the benefits may begin to justify the premium. Even at 4K, though, a single 8 GB R9 290X may not provide the best experience; a CrossFire setup, which is less dependent on the speed of any one GPU, would benefit more from the 8 GB bump.
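
For a rough sense of how resolution alone scales memory demands, here is a back-of-the-envelope sketch; the render-target count and byte sizes are illustrative assumptions rather than figures from any particular game, and in practice high-resolution textures are the bigger VRAM consumer.

```python
# Illustrative only: how full-screen render-target memory scales with resolution.
# The target count and bytes-per-pixel are assumed values, not from a real engine.
def render_target_mib(width, height, bytes_per_pixel=4, target_count=5):
    """Approximate memory for a set of full-screen render targets, in MiB."""
    return width * height * bytes_per_pixel * target_count / (1024 ** 2)

print(render_target_mib(1920, 1080))  # ~39.6 MiB at 1080p
print(render_target_mib(3840, 2160))  # ~158.2 MiB at 4K, four times as much
```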

AMD’s 8 GB R9 290Xs are currently available for preorder: a reference version for £299.99 + VAT (~$480) and a Vapor-X version for £324.99 + VAT (~$520). They are slated to ship later this month.

What, me jealous? Four weeks with SLI'd GTX 980s

Subject: Graphics Cards | October 31, 2014 - 03:45 PM |
Tagged: sli, nvidia, GTX 980

Just in case you need a reason to be insanely jealous of someone, [H]ard|OCP has just published an article covering what it is like to live with two GTX 980s in SLI.  The cards are driving three Dell U2410 24" 1920x1200 displays in portrait, for a relatively odd combined resolution of 3600x1920, but apart from an issue with the GeForce Experience software suite the cards have no trouble driving all three monitors.  In their testing of Borderlands games they definitely noticed when PhysX was turned on, though like others [H] wishes that PhysX would abandon its proprietary roots.  Compared to a Radeon R9 290X CrossFire system the performance is very similar, but when you look at the heat, power, and noise produced, the 980s are the clear winner.  Keep in mind a good 290X is just over $300 while the least expensive GTX 980 will run you over $550.

1414677298HAmmSaoZGr_1_1.jpg

"What do you get when you take two NVIDIA GeForce GTX 980 video cards, configure those for SLI, and set those at your feet for four weeks? We give our thoughts and opinions about actually using these GPUs in our own system for four weeks with focus on performance, sound profile, and heat generated by these cards."

Here are some more Graphics Card articles from around the web:

Graphics Cards

Source: [H]ard|OCP

Assassin's Creed Unity Has NVIDIA-exclusive Effects via GameWorks

Subject: General Tech, Graphics Cards | October 29, 2014 - 06:12 PM |
Tagged: ubisoft, assassin's creed

Ubisoft has integrated GameWorks into Assassin's Creed Unity, or at least parts of it. The main feature to be included is NVIDIA's Horizon Based Ambient Occlusion Plus (HBAO+), which is their implementation of ambient occlusion. This effect darkens areas that would otherwise be incorrectly lit given the current limitations of global illumination. Basically, it analyzes the scene's geometry to subtract some of the influence of "ambient light" in places where that approximation is unrealistic (particularly in small crevices). This is especially useful for overcast scenes, where direct sunlight does not overwhelm the contribution of scatters and bounces.
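
For the curious, the general idea behind screen-space ambient occlusion can be sketched in a few lines. This is a toy illustration of the concept only, not NVIDIA's HBAO+ algorithm; the names and values here are made up for the example.

```python
# Toy sketch of the ambient occlusion concept (not HBAO+): estimate how
# "blocked in" a pixel is by nearby geometry, then scale down only the
# ambient lighting term. Direct light is left untouched.
def occlusion_factor(depth, neighbour_depths, bias=0.01):
    """Fraction of nearby depth samples sitting in front of this surface."""
    occluders = sum(1 for d in neighbour_depths if d < depth - bias)
    return occluders / len(neighbour_depths)

def shade(ambient, direct, occlusion):
    return ambient * (1.0 - occlusion) + direct

# A pixel at depth 5.0 inside a crevice: most neighbouring samples are closer.
ao = occlusion_factor(5.0, [4.2, 4.5, 4.8, 4.9, 5.3, 5.6])
print(shade(ambient=0.3, direct=0.1, occlusion=ao))  # ambient term cut to a third
```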

The other features to be included are Temporal Anti-Aliasing (TXAA), Percentage-Closer Soft Shadows (PCSS), and GeometryWorks Advanced Tessellation. TXAA and PCSS were both included in Assassin's Creed IV: Black Flag, alongside the previously mentioned HBAO+, so it makes sense that Ubisoft continues to use what worked for them. GeometryWorks is a different story. NVIDIA seems to claim that it is like DirectX 11 tessellation, but better suited for use alongside HBAO+ and PCSS.

unity2.jpg

Assassin's Creed Unity will be available on November 11th.

Source: NVIDIA

GeForce GTX 970 Coil Whine Concerns

Subject: Graphics Cards | October 28, 2014 - 12:09 PM |
Tagged: maxwell, GTX 970, geforce, coil whine

Coil whine is the undesirable effect of electrical components creating audible noise when operating. Let's look to our friends at Wikipedia for a concise and accurate description of the phenomenon:

Coil noise is, as its name suggests, caused by electromagnetic coils. These coils, which may act as inductors or transformers, have a certain resonant frequency when coupled with the rest of the electric circuit, as well as a resonance at which it will tend to physically vibrate.

As the wire that makes up the coil passes a variable current, a small amount of electrical oscillation occurs, creating a small magnetic field. Normally this magnetic field simply works to establish the inductance of the coil. However, this magnetic field can also cause the coil itself to physically vibrate. As the coil vibrates physically, it moves through a variable magnetic field, and feeds its resonance back into the system. This can produce signal interference in the circuit and an audible hum as the coil vibrates.

Coil noise can happen, for example, when the coil is poorly secured to the circuit board, is poorly damped, or if the resonant frequency of the coil is close to the resonant frequency of the electric circuit. The effect becomes more pronounced as the signal passing through the coil increases in strength, and as it nears the resonant frequency of the coil, or as it nears the resonant frequency of the circuit. Coil noise is also noticed most often when it falls within the humanly audible frequency range.

Coil noise is also affected by the irregularities of the magnetic material within the coil. The flux density of the inductor is affected by these irregularities, causing small currents in the coil, contaminating the original signal. This particular subset of coil noise is sometimes referred to as magnetic fluctuation noise or the Barkhausen effect. Coil noise can also occur in conjunction with the noise produced by magnetostriction.
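
To put those resonant frequencies in perspective, here is a back-of-the-envelope calculation with purely illustrative component values (not measurements from any GTX 970); it shows how easily an LC filter stage on a graphics card can resonate inside the audible band.

```python
# Back-of-the-envelope only: illustrative component values, not taken from a
# real card. The resonant frequency of an LC circuit is f = 1 / (2*pi*sqrt(L*C)).
import math

def lc_resonance_hz(inductance_h, capacitance_f):
    return 1.0 / (2.0 * math.pi * math.sqrt(inductance_h * capacitance_f))

# A 1 uH power-stage inductor with 100 uF of filter capacitance:
print(lc_resonance_hz(1e-6, 100e-6))  # ~15.9 kHz, well inside the 20 Hz - 20 kHz audible range
```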

Gamers who frequently upgrade their graphics cards may have witnessed this problem with a particular install, or they may have been among the lucky ones who never deal with the issue. If your computer sits under your desk, in a loud room, or if you only game with headphones, it's also possible that you just never noticed.

inductor.jpg

Possibly offending inductors?

The reason this comes up today is that reports are surfacing of GeForce GTX 970 cards from various graphics card vendors exhibiting excessive coil whine or coil noise. These reports are coming in from multiple forum threads around the internet, a collection of YouTube videos of users attempting to capture the issue, and even official statements from some of NVIDIA's partners. Now, just because the internet is talking about it doesn't necessarily mean it's a "big deal" relative to the number of products being sold. However, after several Twitter comments and emails requesting we look into the issue, I thought it was pertinent to start asking questions.

As far as I can tell today, GTX 970 cards from multiple vendors including EVGA, MSI and Gigabyte all have users reporting issues and claims of excessive coil noise. For my part here, I have two EVGA GTX 970 cards and an MSI GTX 970, none of which are producing sound at what I would call "excessive" levels. Everyone's opinion of excessive noise is going to vary, but as someone who sits next to a desk-high test bed and hears hundreds of cards a year, I am confident I have a good idea of what to listen for.

We are still gathering data on this potential issue, but a few of the companies mentioned above have issued official or semi-official statements on the problem.

From MSI:  

The coil whine issue is not specific to the 900 series, but can happen with any high-end GPU, and MSI is looking into ways to minimize the issue. If you still have concerns regarding this issue, then please contact our RMA department.

From EVGA:

We have been watching the early feedback on GTX 970 and inductor noise very closely, and have actively taken steps to improve this. We urge anyone who has this type of concern to contact our support so we can address it directly.

From NVIDIA: 

We’re aware of a small percentage of users reporting excessive “coil whine” noises and are actively looking into the issue.

We are waiting for feedback from other partners to see how they plan to respond.

Since all of the GTX 970 cards currently shipping are non-reference, custom-built PCB designs, NVIDIA's input on the problem is mostly one of recommendations. NVIDIA knows that it is their name and brand being associated with any noisy GeForce cards, so I would expect a lot of discussions and calls behind closed doors to make sure partners are addressing user concerns.

IMG_9794.JPG

Interestingly, the GeForce GTX 970 was the one card of this Maxwell release where all of NVIDIA's partners chose to go the route of custom designs rather than adopting the NVIDIA reference design. On the GTX 980, however, you'll find a mix of both, and I would wager that NVIDIA's reference boards do not exhibit any above-average noise levels from coils. (I have actually tested four reference GTX 980s without coil whine coming into play.) Sometimes offering all of these companies the option to be creative and to differentiate can backfire if the utmost care isn't taken in component selection.

Ironically, the fix is simple: a little glue on those vibrating inductor coils and the problem goes away. But most of the components are sealed, making the simple fix a non-starter for the end user (and I wouldn't recommend attempting it anyway). It does point to a lack of leadership from board manufacturers willing to skimp on hardware in such a way as to make this a big enough issue that I am sitting here writing about it today.

As an aside, if you hear coil whine when running a game at 500-5000 FPS, I don't think that counts as being a major problem for your gaming. I have seen a video or two running a DX9 render test at over 4500 FPS - pretty much any card built today will make noises you don't expect when hitting that kind of performance level.

As for my non-official discussions on the topic with various parties, everyone continues to reiterate that the problem is not as widespread as some of the forum threads would have you believe. It's definitely higher than normal, and getting public acknowledgements from EVGA and MSI basically confirms that, but one person told me the complaint and RMA levels are where they were expected to be considering the "massively fast sell out rates" the GTX 970 is experiencing.

Of course, AMD isn't immune to coil whine issues either. If you remember back to the initial launch of the Radeon R9 290X and R9 290, we had similar coil whine issues and experienced those first hand on reference card designs. (You can see a video I recorded of an XFX unit back in November of 2013 here.) You can still find threads on popular forums from that time period discussing the issue and YouTube never seems to forget anything, so there's that. Of course, the fact that previous card launches might have seen issues along the same line doesn't forgive the issue in current or later card releases, but it does put things into context.

So, let's get some user feedback; I want to hear from GTX 970 owners about their experiences to help guide our direction of research going forward.

Click here to take our short poll for GTX 970 owners!

Source: Various

Sony PS4 and Microsoft Xbox One Already Hitting a Performance Wall

Subject: General Tech, Graphics Cards | October 27, 2014 - 04:50 PM |
Tagged: xbox one, sony, ps4, playstation 4, microsoft, amd

A couple of weeks back a developer on Ubisoft's Assassin's Creed Unity was quoted as saying that the team had decided to run both the Xbox One and PlayStation 4 versions of the game at 1600x900 resolution "to avoid all the debates and stuff." Of course, the Internet exploded in a collection of theories about why that would be the case: were they paid off by Microsoft?

For those of us that focus more on the world of PC gaming, however, an email sent the following week to the Giantbomb.com weekly podcast from an anonymous (but seemingly reliable) developer on the Unity team raised even more interesting material. In this email, while addressing other issues such as the value of pixel counts and the stunning visuals of the game, the developer asserted that we may have already peaked on the graphical compute capability of these two new gaming consoles. Here is a portion of the information:

The PS4 couldn’t do 1080p 30fps for our game, whatever people, or Sony and Microsoft say. ...With all the concessions from Microsoft, backing out of CPU reservations not once, but twice, you’re looking at about a 1-2 FPS difference between the two consoles.

What's hard is not getting the game to render but getting everything else in the game at the same level of performance we designed from the start for the graphics. By the amount of content and NPCs in the game, from someone who witnessed a lot of optimizations for games from Ubisoft in the past, this is crazily optimized for such a young generation of consoles. This is really about to define the next-generation unlike any other game beforehand.

We are bound from the CPU because of AI. Around 50% of the CPU is used for the pre-packaged rendering parts.

unity1.jpg

So, if we take this anonymous developer's information as true, and this whole story is based on that assumption, then we have learned some interesting things.

  1. The PS4, the more graphically powerful of the two very similarly designed consoles, was not able to maintain a 30 FPS target when rendering at 1920x1080 resolution with Assassin's Creed Unity.
     
  2. The Xbox One (after giving developers access to more compute cycles previously reserved to Kinect) is within a 1-2 FPS mark of the PS4.
     
  3. The Ubisoft team sees Unity as being "crazily optimized" for the architecture and consoles even as we just now approach the one-year anniversary of their release.
     
  4. Half of the CPU compute time is being used to help the rendering engine by unpacking pre-baked lighting models for the global illumination implementation, and thus the game is limited by the remaining 50% that has to power the AI and everything else.

It would appear that, just as many in the media declared when the specifications for the new consoles were announced, the hardware inside the PlayStation 4 and Xbox One undershoots the needs of game developers who truly want to build "next-generation" games. If, as this developer states, we are less than a year into the life cycle of hardware that was planned for an 8-10 year window and we have already reached its performance limits, that's a bad sign for game developers that really want to create exciting gaming worlds. Keep in mind that this time around the hardware isn't custom-built cores or a Cell architecture - we are talking about very basic x86 cores and traditional GPU hardware that ALL software developers are intimately familiar with. It does not surprise me one bit that we have seen the more advanced development teams hit peak performance.

unity2.jpg

If the PS4, the slightly more powerful console of the pair, is unable to render reliably at 1080p with a 30 FPS target, then unless the Ubisoft team is completely off its rocker in terms of development capability, the advancement of gaming on consoles would appear to be somewhat limited. Remember the specifications for these two consoles:

                PlayStation 4           Xbox One
Processor       8-core Jaguar APU       8-core Jaguar APU
Motherboard     Custom                  Custom
Memory          8GB GDDR5               8GB DDR3
Graphics Card   1152 Stream Unit APU    768 Stream Unit APU
Peak Compute    1,840 GFLOPS            1,310 GFLOPS
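
Those peak compute figures fall straight out of the shader counts: each stream processor can execute one fused multiply-add (two FLOPs) per clock. Assuming the widely reported GPU clocks of roughly 800 MHz for the PS4 and 853 MHz for the Xbox One, the math works out like this:

```python
# Peak single-precision compute = stream processors * 2 FLOPs (FMA) * clock.
# GPU clocks are the commonly cited figures, not official Sony/Microsoft specs.
def peak_gflops(stream_processors, clock_mhz):
    return stream_processors * 2 * clock_mhz / 1000.0

print(peak_gflops(1152, 800))  # ~1843 GFLOPS, the PS4's "1,840 GFLOPS"
print(peak_gflops(768, 853))   # ~1310 GFLOPS, the Xbox One's "1,310 GFLOPS"
```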

The custom built parts from AMD both feature an 8-core Jaguar x86 architecture and either 768 or 1152 stream processors. The Jaguar CPU cores aren't high performance parts: single-threaded performance of Jaguar is less than the Intel Silvermont/Bay Trail designs by as much as 25%. Bay Trail is powering lots of super low cost tablets today and even the $179 ECS LIVA palm-sized mini-PC we reviewed this week. And the 1152/768 stream processors in the GPU portion of the AMD APU provide some punch, but a Radeon HD 7790 (now called the R7 260X), released in March of 2013, provides more performance than the PS4 and the Radeon R7 250X is faster than what resides in the Xbox One.

xboxonegpu.jpg

If you were to ask me today what kind of performance would be required from AMD's current GPU lineup for a steady 1080p gaming experience on the PC, I would probably tell you the R9 280, a card you can buy today for around $180. From NVIDIA, I would likely pick a GTX 760 (around $200).

Also note that if the developer is using 50% of the CPU resources for rendering computation and the remaining 50% isn't able to hold up its duties on AI, etc., we likely have hit performance walls on the x86 cores as well.

Even if this developer quote is 100% correct, that doesn't mean that the current generation of consoles is completely doomed. Microsoft has already stated that DirectX 12, focused on the performance efficiency of current generation hardware, will be coming to the Xbox One, and that could mean additional performance gains for developers. The PS4 will likely have access to OpenGL Next when it arrives in the future. And of course, it's also possible that this developer is simply wrong and there is plenty of headroom left in the hardware for games to take advantage of.

unity3.jpg

But honestly, based on my experience with these GPU and CPU cores, I don't think that's the case. If you look at screenshots of Assassin's Creed Unity and then look at the minimum and recommended specifications for the game on the PC, there is an enormous discrepancy. Are the developers just writing lazy code and not truly optimizing for the hardware? It seems unlikely that a company the size of Ubisoft would choose this route on purpose, creating a console game that runs in a less-than-ideal state while also struggling on the PC version. Remember, there is almost no "porting" going on here: the Xbox One and PlayStation 4 now share the same architecture as the PC.

Of course, we might just be treading through known waters. I know we are a bit biased, and so is our reader base, but I am curious: do you think MS and Sony have put themselves in a hole with their shortsighted hardware selections?

UPDATE: It would appear that a lot of readers and commenters take our editorial on the state of the PS4 and XB1 as a direct attack on AMD and its APU design. That isn't really the case - regardless of which vendor's hardware is inside the consoles, had Microsoft and Sony still targeted the same performance levels, we would be in the exact same situation. An Intel + NVIDIA hardware combination could just as easily have been built to the same peak theoretical compute levels and would have hit the same performance wall just as quickly. MS and Sony could have prevented this by using higher performance hardware, selling the consoles at a loss out of the gate and preparing each platform for the next 7-10 years properly. And again, the console manufacturers could have done that with higher-end AMD hardware, Intel hardware, or NVIDIA hardware. The state of the console performance war is truly hardware agnostic.

AMD Catalyst 14.9.2 Beta for Civilization: Beyond Earth

Subject: Graphics Cards | October 26, 2014 - 02:44 AM |
Tagged: amd, driver, catalyst

Ryan has been playing many games lately as part of a comparison between the latest GPUs from AMD and NVIDIA. Civilization: Beyond Earth is not the most demanding game in existence on GPUs, but it is not trivial either, and while it is not the most complex from a video card's perspective, it is a contender for the most demanding game on your main processor (CPU). It also has some of the most thought-out Mantle support of any title using the API, when paired with the AMD Catalyst 14.9.2 Beta driver.

firaxis-civilization-beyond-earth.jpg

And now you can!

The Catalyst 14.9.2 Beta drivers support just about anything using the GCN architecture, from APUs (starting with Kaveri) to discrete GPUs (starting with the HD 7000 and HD 7000M series). Beyond enabling Mantle support in Civilization, it also fixes some issues with Metro, Shadow of Mordor, Total War: Rome 2, Watch_Dogs, and other games.

Also, both AMD and Firaxis are aware of a bug in Civilization: Beyond Earth where the mouse cursor does not click exactly where it is supposed to, if the user enables font scaling in Windows. They are working on it, but suggest setting it to the default (100%) if users experience this issue. This could be problematic for customers with high-DPI screens, but could keep you playing until an official patch is released.

You can get 14.9.2 Beta for Windows 7 and Windows 8.1 at AMD's website.

Source: AMD

AMD Radeon R9 290X Now Selling at $299

Subject: Graphics Cards | October 24, 2014 - 03:44 PM |
Tagged: radeon, R9 290X, leaderboard, hwlb, hawaii, amd, 290x

When NVIDIA launched the GTX 980 and GTX 970 last month, it shocked the discrete graphics world. The GTX 970 in particular was an amazing performer and undercut the price of the Radeon R9 290 at the time. That is something that NVIDIA rarely does and we were excited to see some competition in the market.

AMD responded with some price cuts on both the R9 290X and the R9 290 shortly thereafter (though the company refuses to call them price cuts), and it seems that AMD and its partners are at it again.

r9290x1.jpg

Looking on Amazon.com today we found several R9 290X and R9 290 cards at extremely low prices. For example:

The R9 290X's primary competition in terms of raw performance is the GeForce GTX 980, currently selling for $549 and up. If you can find them in stock, that means NVIDIA has a hill of $250 to climb when going against the lowest priced R9 290X.

r92901.jpg

The R9 290 looks interesting as well:

Several other R9 290 cards are selling for upwards of $300-320, making them bone-headed purchases if you can get an R9 290X for the same or lower price. But consider that the GeForce GTX 970 is selling for at least $329 today (if you can find it) and you can see why consumers are paying close attention.

Will NVIDIA make any adjustments of its own? It's hard to say right now since stock of both the GTX 980 and GTX 970 is so hard to come by, and it's hard to imagine NVIDIA lowering prices as long as parts continue to sell out. NVIDIA believes that its branding and technologies like G-Sync make GeForce cards more valuable, and until it begins to see a shift in the market, I imagine it will stay the course.

For those of you that utilize our Hardware Leaderboard, you'll find that Jeremy has taken these prices into account and updated a couple of the system build configurations.

Source: Amazon.com

Who rules the ~$250 market? XFX R9 285 Black Edition versus the GTX 760

Subject: Graphics Cards | October 23, 2014 - 04:06 PM |
Tagged: xfx, R9 285 Black Edition, factory overclocked, amd

Currently sitting at $260, the XFX R9 285 Black Edition is a little less expensive than the ASUS ROG STRIKER GTX 760 and significantly more expensive than the ASUS GTX 760 DirectCU2 card.  Those prices led [H]ard|OCP to set up a showdown to see which card provided the best bang for the buck, especially once they overclocked the AMD card to 1125MHz core and 6GHz RAM.  In the end it was a very close race between the cards; the performance crown did go to the R9 285 BE, but that performance comes at a premium, as you can get performance almost as good for $50 less.  Of course, both the XFX card and the STRIKER sell at a premium compared to cards with fewer features and a stock setup; you should expect the lower priced R9 285s to be closer in performance to the DirectCU2 card.

1413885880S78ZQ7Hqqp_1_13_l.jpg

"Today we are reviewing the new XFX Radeon R9 285 Black Edition video card. We will compare it to a pair of GeForce GTX 760 based GPUs to determine the best at the sub-$250 price point. XFX states that it is faster than the GTX 760, but that is based on a single synthetic benchmark, let's see how it holds up in real world gaming."

Here are some more Graphics Card articles from around the web:

Graphics Cards

Source: [H]ard|OCP

GeForce Game Ready Driver 344.48 WHQL

Subject: Graphics Cards | October 22, 2014 - 12:52 PM |
Tagged: whql, nvidia, GeForce 344.48

"Game Ready" for Lords of the Fallen, Civilization: Beyond Earth, and Elite: Dangerous. 

Grab it straight from NVIDIA or GeForce.com.

dsr-auto-enabled-in-geforce-experience-640px.jpg

What’s New in Version 344.48

Game Ready

Best gaming experience for Lords of the Fallen, Civilization: Beyond Earth, and Elite: Dangerous.

Gaming Technology

  • Supports Dynamic Super Resolution (DSR) on Kepler and Fermi-based desktop GPUs.

Software Modules

  • NVIDIA PhysX System Software - version 9.14.0702
  • NVIDIA GPU PhysX acceleration is available only on systems with GeForce 8-series and later GPUs with a minimum of 256 MB dedicated graphics memory.
  • NVIDIA GPU PhysX acceleration is not available if there is a non-NVIDIA graphics processor in the system, even if it is not used for rendering.
  • HD Audio Driver - version 1.3.32.1
  • CUDA - version 6.5
  • GeForce Experience - 16.13.56.0

Application Profiles

Added or updated the following profiles:

  • Assassin's Creed Unity – control panel FXAA disabled
  • Dead Rising 3 – SLI-Single profile added
  • Elite Dangerous – SLI profile added, control panel FXAA disabled
  • Escape Dead Island – SLI profile added
  • FIFA 15 – SLI-Single profile added
  • Lichdom: Battlemage – SLI profile added
  • Lords of the Fallen – SLI profile added
  • MechWarrior Online – DX11 SLI profile added
  • Monster Hunter Online Benchmark – SLI profile added
  • Ryse: Son of Rome – SLI profile added, stereo blocked
  • Sid Meier's Civilization: Beyond Earth – ambient occlusion (AO) profile added
  • Sleeping Dogs Definitive Edition – SLI profile added
  • The Crew – control panel FXAA disabled
  • The Vanishing of Ethan Carter – SLI profile added

3D Vision Profiles

Added or updated the following profiles:

  • Dead Rising 3 – Not Recommended
  • Strife – rated as Fair

3D Compatibility Mode Support

Support for 3D Compatibility Mode has been added for the following games:

  • Dead Rising 3 – rated as Excellent
  • Strife – rated as Excellent

Windows Vista/Windows 7/Windows 8/Windows 8.1 Fixed Issues

  • Make control panel option for MFAA visible in NVIDIA Control Panel only for non-SLI configurations.
  • Implement MFAA along with porting TSF filter to driver side shim.
  • Add SLI profile for Sleeping Dogs: Definitive Edition.
  • GeForce GTX 980, Windows 8.1: Occasionally, the first line in a displayed frame mistakenly has content from a prior rendered frame.
  • Need SLI profile for FIFA 15.
  • Having G-SYNC enabled with Oculus Rift drivers installed causes applications to crash while launching and sometimes causes the system to reboot.
  • Green screen when certain videos played back in Media Player Classic Home Cinema.
  • Backport to r304_00 all missing changes to the FreeBSD installer.
  • Device does not start (error code 49) in certain OEM motherboards.
  • Assassin's Creed Unity, Windows 8: TDR crash after loading a level and playing a little on NVIDIA 7-series GPUs.
  • Windows 8.1: Significant drop off in performance with 3D Vision enabled in SLI in Tomb Raider, no repro with Windows 7.
Source: NVIDIA

PCPer Live! Borderlands: The Pre-Sequel Game Stream Powered by NVIDIA Part 2!

Subject: Editorial, Graphics Cards | October 21, 2014 - 07:45 PM |
Tagged: video, pcper, nvidia, live, GTX 980, geforce, game stream, borderlands: the pre-sequel, borderlands

UPDATE: It's time for ROUND 2!

UPDATE 2: You missed the fun for the second time? That's unfortunate, but you can relive the fun with the replay right here!

I'm sure like the staff at PC Perspective, many of our readers have been obsessively playing the Borderlands games since the first release in 2009. Borderlands 2 arrived in 2012 and once again took hold of the PC gaming mindset. This week marks the release of Borderlands: The Pre-Sequel, which as the name suggests, takes place before the events of Borderlands 2. The Pre-Sequel has playable characters that were previously only known to the gamer as NPCs and that, coupled with the new low-gravity game play style, should entice nearly everyone that loves the first-person, loot-driven series to come back.

To celebrate the release, PC Perspective has partnered with NVIDIA to host a couple of live game streams that will feature some multi-player gaming fun as well some prizes to giveaway to the community. I will be joined once again by NVIDIA's Andrew Coonrad and Kris Rey to tackle the campaign in a cooperative style while taking a couple of stops to give away some hardware.

livelogo.jpg

Borderlands: The Pre-Sequel Game Stream Powered by NVIDIA Part 2

5pm PT / 8pm ET - October 21st

PC Perspective Live! Page

Need a reminder? Join our live mailing list!

Here are some of the prizes we have lined up for those of you that join us for the live stream:

Holy crap, that's a hell of a list!! How do you win? It's really simple: just tune in and watch the Borderlands: The Pre-Sequel Game Stream Powered by NVIDIA! We'll explain the methods to enter live on the air and anyone can enter from anywhere in the world - no issues at all!

So stop by Tuesday night for some fun, some gaming and the chance to win some hardware!

global-landing-page-borderlands-presequal.jpg

2K_Borderlands_Pre-Sequel_AthenaToss_1stPerson.jpg

2K_Borderlands_Pre-Sequel_moonBandits.jpg

Gigabyte Packs Factory Overclocked GTX 970 GPU Into Mini ITX Card

Subject: Graphics Cards | October 21, 2014 - 06:42 PM |
Tagged: maxwell, nvidia, gaming, mini ITX, small form factor, GTX 970, GM204, gigabyte

Gigabyte has announced a new miniature graphics card based around NVIDIA's GeForce GTX 970 GPU. The upcoming card is a dual slot, single fan design that is even shorter than the existing GTX 970 graphics cards (which are fairly short themselves). Officially known as the GV-N970IXOC-4GD, the miniaturized GTX 970 will be available for your small form factor (Mini ITX) systems in November for around $330.

The new Mini ITX compatible graphics card packs a factory overclocked GeForce GTX 970 processor, 4GB of video memory, a custom PCB, and a custom WindForce-inspired cooler into a graphics card that is smaller than any of the existing GTX 970 cards. Gigabyte is using a custom design with a single 8-pin PCI-E power connector instead of the two 6-pin connectors of the reference design or the 6-pin plus 8-pin from manufacturers like EVGA. The single power connector means less cabling to route (and successfully attempt to hide, heh) and better small form factor PSU compatibility. The cooler is an aluminum fin array with three copper heatpipes paired with a single shrouded fan.

Gigabyte GTX 970 Factory Overclocked Mini ITX Graphics Card.png

The tiny card comes factory overclocked at 1076 MHz base and 1216 MHz boost, which is a respectable bump over the reference specifications. For reference, the GeForce GTX 970 processor is a 28nm chip using NVIDIA's GM204 "Maxwell" architecture with 1664 CUDA cores clocked at 1051 MHz base and 1178 MHz boost. It appears that Gigabyte has left the 4GB of GDDR5 untouched at 7.0 GT/s.

                  Gigabyte GTX 970 Mini ITX   Reference GTX 970
CUDA Cores        1664                        1664
Core (MHz)        1076                        1051
Core (MHz) Boost  1216                        1178
Memory            4GB                         4GB
Memory Rate       7.0 GT/s                    7.0 GT/s
Memory Width      256-bit                     256-bit
Architecture      Maxwell                     Maxwell
Process Node      28nm                        28nm
PCI-E Power       1x 8-pin                    2x 6-pin
DirectX Version   12.0                        12.0
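
Since the memory speed and bus width match the reference card, peak memory bandwidth is unchanged; a quick sanity check:

```python
# Peak memory bandwidth = data rate (GT/s) * bus width (bits) / 8 bits per byte.
def bandwidth_gb_s(data_rate_gts, bus_width_bits):
    return data_rate_gts * bus_width_bits / 8.0

print(bandwidth_gb_s(7.0, 256))  # 224.0 GB/s for both the Gigabyte card and the reference GTX 970
```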

The display output on the miniature Gigabyte card differs slightly from the reference design with the addition of a DVI-D connection.

  • 3 x DisplayPort
  • 1 x HDMI
  • 1 x DVI-I
  • 1 x DVI-D

According to Gigabyte, its custom cooler resulted in lower temperatures versus the reference design. The company claims that when running Metro: Last Light, the Mini ITX Gigabyte GTX 970 GPU ran at 62°C versus a reference design hitting 76°C running the same game. If true, the Gigabyte cooler is capable of keeping the card significantly cooler while taking up less space (though fan speeds and sound levels were not mentioned, nor compared to other custom coolers).

The small form factor friendly GTX 970 is coming next month with an MSRP of $329.99. Are you excited?

Source: Videocardz

The GTX 980 can reach very impressive frequencies

Subject: Graphics Cards | October 14, 2014 - 06:49 PM |
Tagged: GTX 980, nvidia, overclocking

[H]ard|OCP has had more time to spend with their reference GTX 980 and has reached the best stable overclock they could on this board without moving to third party coolers or serious voltage mods: 1516MHz core and 8GHz VRAM.  Retail models will of course offer different results, but regardless it is not too shabby a result.  This overclock was not easy to reach, and how they managed it and the lessons they learned along the way make for interesting reading.  The performance increases were noticeable; in most cases the overclocked card was beating the stock card by 25%, and as this was a reference card, retail cards with enhanced coolers and the possibility of a custom BIOS that disables NVIDIA's TDP/Power Limit settings could go even faster.  You can bet [H] and PCPer will both be revisiting the overclocking potential of GTX 980s.
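
For rough context, comparing that 1516MHz result against NVIDIA's published reference boost clock of 1216MHz lines up nicely with the ~25% gains [H] observed, under the simplifying assumption that performance in GPU-limited titles scales roughly with core clock:

```python
# Rough context only: the achieved overclock versus NVIDIA's published 1216 MHz
# reference boost clock for the GTX 980. GPU-limited games tend to scale close
# to core clock, which matches the ~25% gains reported.
reference_boost_mhz = 1216
overclocked_mhz = 1516
print((overclocked_mhz / reference_boost_mhz - 1) * 100)  # ~24.7% higher core clock
```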

1412746309oHQVINIuLi_1_1.gif

"The new NVIDIA GeForce GTX 980 makes overclocking GPUs a ton of fun again. Its extremely high clock rates achieved when you turn the right dials and sliders result in real world gaming advantages. We will compare it to a GeForce GTX 780 Ti and Radeon R9 290X; all overclocked head-to-head."

Here are some more Graphics Card articles from around the web:

Graphics Cards

Source: [H]ard|OCP

PCPer Live! Borderlands: The Pre-Sequel Game Stream Powered by NVIDIA

Subject: Editorial, Graphics Cards | October 13, 2014 - 10:28 PM |
Tagged: video, pcper, nvidia, live, GTX 980, geforce, game stream, borderlands: the pre-sequel, borderlands

UPDATE: You missed this week's live stream, but you can watch the game play via this YouTube embed!!

I'm sure like the staff at PC Perspective, many of our readers have been obsessively playing the Borderlands games since the first release in 2009. Borderlands 2 arrived in 2012 and once again took hold of the PC gaming mindset. This week marks the release of Borderlands: The Pre-Sequel, which as the name suggests, takes place before the events of Borderlands 2. The Pre-Sequel has playable characters that were previously only known to the gamer as NPCs and that, coupled with the new low-gravity game play style, should entice nearly everyone that loves the first-person, loot-driven series to come back.

To celebrate the release, PC Perspective has partnered with NVIDIA to host a couple of live game streams that will feature some multi-player gaming fun as well some prizes to giveaway to the community. I will be joined by NVIDIA's Andrew Coonrad and Kris Rey to tackle the campaign in a cooperative style while taking a couple of stops to give away some hardware.

livelogo.jpg

Borderlands: The Pre-Sequel Game Stream Powered by NVIDIA

5pm PT / 8pm ET - October 14th

PC Perspective Live! Page

Need a reminder? Join our live mailing list!

Here are some of the prizes we have lined up for those of you that join us for the live stream:

Holy crap, that's a hell of a list!! How do you win? It's really simple: just tune in and watch the Borderlands: The Pre-Sequel Game Stream Powered by NVIDIA! We'll explain the methods to enter live on the air and anyone can enter from anywhere in the world - no issues at all!

So stop by Tuesday night for some fun, some gaming and the chance to win some hardware!

global-landing-page-borderlands-presequal.jpg

2K_Borderlands_Pre-Sequel_AthenaToss_1stPerson.jpg

2K_Borderlands_Pre-Sequel_moonBandits.jpg

Dr. Lisa Su Is AMD's New President and CEO

Subject: Graphics Cards, Processors | October 8, 2014 - 05:54 PM |
Tagged: amd

In an abrupt announcement, Rory Read has stepped down from his positions at AMD, leaving them to Dr. Lisa Su. Until today, Mr. Read served as president and Chief Executive Officer (CEO) of the x86 chip designer and Dr. Su as Chief Operating Officer (COO). Today however, Dr. Su has become president and CEO, and Mr. Read will stay on for a couple of months as an adviser during the transition.

amd-lisa-su.jpg

Josh Walrath, editor here at PC Perspective, tweeted that he was "Curious as to why Rory didn't stay on longer? He did some good things there [at AMD], but [it's] very much an unfinished job." I would have to agree. It feels like an odd time, hence the earlier use of the word "abrupt," to have a change in management. AMD restructured just four months ago, which was the occasion for Dr. Su to be promoted to COO. In fact, at least as far as I know, no one is planned to fill her former position as COO.

These points suggest that her move to take over the company had been planned for at least several months.

 

Josh's Thoughts

I have been told that timing is everything.  I guess this rings true, but only if you truly know the circumstances around any action.  Today’s announcement by AMD was odd in its timing, but it was not exactly unexpected.  As Scott mentioned above, I was confused by this happening now.  I had expected Rory to be in charge for at least another year, if not two.  Rory had hinted that he was not planning on being at AMD forever, but was aiming at creating a solid foundation for the company, shoring up its finances, and instilling a new culture.  While the culture is turning due to pressure from the top as well as some pretty significant personnel cuts, AMD is not yet as nimble as it wants to be.

Rory’s term has seen the return of seasoned veterans like Jim Keller and Raja Koduri.  These guys are helping to turn the ship around after some fairly mediocre architectures on the CPU and GPU sides.  While Raja had little to do with GCN, we are seeing some aggressive moves there in terms of features that are making their products much more competitive with NVIDIA.  Keller has made some very significant changes to the overall roadmap on the CPU side, and I think we will see some very solid improvements in design and execution over the next two years.

Lisa Su was brought in by Rory shortly after he was named CEO.  Lisa has a pretty significant background in semiconductors and has made a name for herself in her work with IBM and Freescale.  Lisa attained all three of her degrees from MIT.  This is not unheard of, but it is uncommon to stay in one academic setting when gaining advanced degrees.  Having said that, MIT certainly is the top engineering and science school in the nation (if not the world).  I’m sure people from RPI, GT, and CalTech might argue that, but it certainly is an impressive school to have on your resume.

Dr. Su has seemingly been groomed for this transition for quite some time now.  She went from a VP to COO rather quickly, and is now shouldering the burden of being CEO.  Lisa has been on quite a few of the quarterly conference calls and taking questions.  She also serves on the Board of Directors at Analog Devices.

I think that Lisa will continue along the same path that Rory set out, but she will likely bring a few new wrinkles due to her experience with semiconductor design and R&D at IBM.  We can only hope that this won’t become a Dirk Meyer 2.0 type situation where a successful engineer and CPU architect could not change the course of the company after the disastrous reign of Hector Ruiz.  I do not think that this will be the case, as Rory did not leave the mess that Hector did.  I also believe that Lisa has more business sense and acumen than Dirk did.

This change, at this time, has provided some instability in the markets regarding AMD.  Some weeks ago AMD was at a near high for the year at around $4.66 per share.  Right now it is hovering at $3.28.  I was questioning why the stock price was going down, and it seems that my question has been answered.  One way or the other, rumors of Rory taking off reached investors’ ears and we saw a rapid decline in share price.  We have yet to see what Q3 earnings look like now that Rory has rather abruptly left his position, but people are pessimistic as to what will be announced given such a sudden departure.

AMD Dropping R9 290X to $399, R9 290 to $299

Subject: Graphics Cards | October 6, 2014 - 03:21 PM |
Tagged: radeon, R9 290X, r9 290, hawaii, GTX 980, GTX 970, geforce, amd

On Saturday, while finishing up the writing on our Shadow of Mordor performance story, I noticed something quite interesting. The prices of AMD's flagship Radeon products had all come down quite a bit. In an obvious response to the release of NVIDIA's new GeForce GTX 980 and GTX 970, the Radeon R9 290X and the Radeon R9 290 have seen their prices lowered in a very aggressive fashion.

UPDATE: A couple of individual cards appear to be showing up as $360 and $369 on Newegg!

pricedrop1.jpg

Amazon.com is showing some R9 290X cards at $399

For now, Amazon.com is only listing the triple-fan Gigabyte R9 290X Windforce card at $399, though Newegg.com has a couple as well.

pricedrop2.jpg

Amazon.com also has several R9 290 cards for $299

And again, Newegg.com has some other options for R9 290 cards at these lower prices.

Let's assume that these price drops are going to be permanent, which seems likely based on AMD's history of market adjustments. That shifts the high end GPU market considerably.

     
NVIDIA                Price          Price   AMD
GeForce GTX 980 4GB   $549           $399    Radeon R9 290X 4GB
GeForce GTX 970 4GB   $329           $299    Radeon R9 290 4GB

The battle for that lower end spot between the GTX 970 and R9 290 is now quite a bit tighter, though NVIDIA's Maxwell architecture still has a positive outlook against the slightly older Hawaii GPU. Our review of the GTX 970 shows that it is indeed faster than the R9 290, though it no longer has the significant cost advantage it did upon release. The GTX 980, however, is a much tougher sell over the Radeon R9 290X for PC gamers that are concerned with performance per dollar over all else. I would still consider the GTX 980 faster than the R9 290X...but is it $150 faster? That's a price premium of nearly 38% that NVIDIA now has to contend with.

NVIDIA has proven that it is comfortable staying in this position against AMD, as it maintained it during essentially the entire life of the GTX 680 and GTX 780 product lines. AMD is more willing to make price cuts to pull the Radeon lineup back into the spotlight. Though the market share between the competitors didn't change much over the previous 6 months, I'll be very curious to see how these two strategies continue to play out.

DirectX 12 Shipping with Windows 10

Subject: Graphics Cards | October 3, 2014 - 03:18 AM |
Tagged: microsoft, DirectX, DirectX 12, windows 10, threshold, windows

A Microsoft blog posting confirms: "The final version of Windows 10 will ship with DirectX 12". To me, this seems like a fairly obvious statement. The loose dates provided for both the OS and the availability of retail games suggest that the two would be launching at roughly the same time. The article also claims that DirectX 12 "Early Access" members will be able to develop with the Windows 10 Technical Preview. Apart from Unreal Engine 4 (for Epic Games subscribers), Intel will also provide source access to their Asteroids demo, shown at Siggraph 2014, to all accepted early access developers.

windows-directx12-landscapes.jpg

Our readers might find this information slightly disappointing, as it could be interpreted to mean that DirectX 12 will not be coming to Windows 7 (or even 8.x). While it does not look as hopeful as before, Microsoft never, at any point, explicitly says that it will not come to older operating systems. It still might.

Source: Microsoft

The MSI GeForce GTX 970 GAMING 4G is sitting right in the sweet spot

Subject: Graphics Cards | October 2, 2014 - 04:01 PM |
Tagged: msi, GTX 970 GAMING 4G, factory overclocked

It is sadly out of stock on both Newegg and Amazon right now, but MSI's $350 GTX 970 GAMING 4G is an incredible buy and worth waiting for.  The factory overclock already set up on this card is quite nice, with the core rated at 1140/1279MHz, which [H]ard|OCP actually observed hitting as high as 1366MHz; once they overclocked it themselves they reached 1542MHz before the 110% GPU power limit ended their fun.  It would seem that the card is capable of more, if only you were not prevented from feeding it more than that extra 10%.  The card was already beating the 780 Ti and R9 290 before the overclock, but you should read the full review to see what happened once they tested it at full speed.
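
As a rough sketch of what that 110% limit means in watts, assuming the GTX 970's published 145W reference TDP (board partners can set different limits):

```python
# Rough sketch only: what a +10% power limit buys, assuming the GTX 970's
# published 145 W reference TDP. Board partner limits may differ.
reference_tdp_w = 145
power_limit = 1.10
print(reference_tdp_w * power_limit)        # 159.5 W ceiling
print(reference_tdp_w * (power_limit - 1))  # only ~14.5 W of extra headroom
```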

1411976595nitFZ11Eg1_1_9_l.jpg

"The MSI GeForce GTX 970 GAMING 4G video card is making the GeForce GTX 780 and AMD Radeon R9 290 obsolete. This $349 video card puts up a fight and punches in a win at this price. The overclock alone is somewhat staggering. If you are about to spend money on a GPU, don't miss this one."

Here are some more Graphics Card articles from around the web:

Graphics Cards

Source: [H]ard|OCP

Broadwell-U (BGA) Lineup Leaked

Subject: Graphics Cards, Processors | September 30, 2014 - 03:33 AM |
Tagged: iris, Intel, core m, broadwell-y, broadwell-u, Broadwell

Intel's upcoming 14nm product line, Broadwell, is expected to have six categories of increasing performance. Broadwell-Y, later branded Core M, is part of the soldered BGA family at expected TDPs of 3.5 to 4.5W. Above this is Broadwell-U, which also comes in BGA packages and thus requires soldering by the system builder. VR-Zone China has a list of seemingly every 15W SKU in that category. 28W TDP "U" products are expected to be available in the following quarter, but are not listed.

intel-broadwell-u_1.png

Image Credit: VR-Zone

As for those 15W parts though, there are seventeen (17!) of them, ranging from Celeron to Core i7. While each product is dual-core, the ones that are Core i3 and up have Hyper-Threading, increasing the parallelism to four tasks simultaneously. In terms of cache, Celerons and Pentiums will have 2MB, Core i7s will have 4MB, and everything in between will have 3MB. Otherwise, the products vary on the clock frequency they were binned (bin-sorted) at, and the integrated graphics that they contain.

intel-broadwell-u_2.png

Image Credit: VR-Zone

These iGPUs range from "Intel HD Graphics" on the Celerons and Pentiums to "Intel Iris Graphics 6100" on one Core i7, two Core i5s, and one Core i3. The rest pretty much alternate between Intel HD Graphics 5500 and Intel HD Graphics 6000. The maximum frequency of a given iGPU can vary between SKUs carrying the same graphics name, but only by about 100 MHz at most. The exact spread is below.

  • Intel HD Graphics: 300 MHz base clock, 800 MHz at load.
  • Intel HD Graphics 5500: 300 MHz base clock, 850-950 MHz at load (depending on SKU).
  • Intel HD Graphics 6000: 300 MHz base clock, 1000 MHz at load.
  • Intel Iris Graphics 6100: 300 MHz base clock, 1000-1100 MHz at load (depending on SKU).

Unfortunately, without the number of shader units to go along with the core clock, we cannot derive a FLOP value yet. This is a very important metric for increasing resolution and shader complexity, and it would provide a relatively fair way to compare the new parts against previous offerings at higher resolutions and quality settings, especially in DirectX 12 I would assume.
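
As a hypothetical illustration of how that FLOP value would be derived once the execution unit (EU) counts are known; the EU count and clock pairing below are assumptions for the sake of the example, not confirmed Broadwell-U specifications:

```python
# Hypothetical illustration only: deriving peak FLOPS once EU counts are known.
# Each Intel Gen EU can retire 16 FP32 operations per clock (two 4-wide FMA
# units, with an FMA counting as 2 ops). The 48-EU figure is an assumption.
def peak_gflops(eu_count, max_clock_mhz, flops_per_eu_per_clock=16):
    return eu_count * flops_per_eu_per_clock * max_clock_mhz / 1000.0

print(peak_gflops(48, 1100))  # ~845 GFLOPS for a hypothetical 48-EU part at 1100 MHz
```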

intel-broadwell-iris-graphics-6100.png

Image Credit: VR-Zone

Probably the most interesting part to me is that "Intel HD Graphics" without a number meant GT1 with Haswell. Starting with Broadwell, it has apparently been upgraded to GT2. As we can see from even the 4.5W Core M processors, Intel is taking graphics seriously. It is unclear whether their intention is to respect gaming's influence on device purchases, or whether they believe that generalized GPU compute will be "a thing" very soon.

Source: VR-Zone

AMD Catalyst 14.9 for Windows

Subject: Graphics Cards | September 29, 2014 - 05:33 PM |
Tagged: whql, radeon, Catalyst 14.9, amd

AMD.jpg

The full release notes are available here or take a look at the highlights below.

The latest version of the AMD Catalyst Software Suite, AMD Catalyst 14.9 is designed to support the following Microsoft Windows platforms:

Highlights of AMD Catalyst 14.9 Windows Driver

  • Support for the AMD Radeon R9 280
  • Performance improvements (comparing AMD Catalyst 14.9 vs. AMD Catalyst 14.4)
    • 3DMark Sky Diver improvements
      • AMD A4 6300 – improves up to 4%
      • Enables AMD Dual Graphics / AMD CrossFire support
    • 3DMark Fire Strike
      • AMD Radeon R9 290 Series - improves up to 5% in Performance Preset
    • 3DMark11
      • AMD Radeon R9 290 Series / R9 270 Series - improves up to 4% in Entry and Performance Preset
    • BioShock Infinite
      • AMD Radeon R9 290 Series – 1920x1080 - improves up to 5%
    • Company of Heroes 2
      • AMD Radeon R9 290 Series - improves up to 8%
    • Crysis 3
      • AMD Radeon R9 290 Series / R9 270 Series – improves up to 10%
    • Grid Auto Sport
      • AMD CrossFire profile
    • Murdered Soul Suspect
      • AMD Radeon R9 290X (2560x1440, 4x MSAA, 16x AF) – improves up to 50%
      • AMD Radeon R9 290 Series / R9 270 Series – improves up to 6%
      • CrossFire configurations improve scaling up to 75%
    • Plants vs. Zombies (Direct3D performance improvements)
      • AMD Radeon R9 290X - 1920x1080 Ultra – improves up to 11%
      • AMD Radeon R9 290X - 2560x1600 Ultra – improves up to 15%
      • AMD Radeon R9 290X CrossFire configuration (3840x2160 Ultra) - 92% scaling
    • Batman Arkham Origins:
      • AMD Radeon R9 290X (4x MSAA) – improves up to 20%
      • CrossFire configurations see up to a 70% gain in scaling
    • Wildstar
      • Power Xpress profile
      • Performance improvements to improve smoothness of application
      • Performance improves up to 30% on the AMD Radeon R9 and R7 Series of products for both single GPU and Multi-GPU configurations
    • Tomb Raider
      • AMD Radeon R9 290 Series – improves up to 5%
    • Watch Dogs
      • AMD Radeon R9 290 Series / R9 270 Series – improves up to 9%
      • AMD CrossFire – Frame pacing improvement
      • Improved CrossFire performance – up to 20%
    • Assassin's Creed IV
      • Improves CrossFire scaling (3840x2160 High Settings) up to 93% (CrossFire scaling improvement of 25% compared to AMD Catalyst 14.4)
    • Lichdom
      • Improves performance for single GPU and Multi-GPU configurations
    • Star Craft II
      • AMD Radeon R9 290X (2560x1440, AA, 16x AF) – improves up to 20%

AMD Eyefinity enhancements

  • Mixed Resolution Support
    • A new architecture providing brand new capabilities
    • Display groups can be created with monitors of different resolutions (including different sizes and shapes)
    • Users have a choice of how surface is created over the display group
      • Fill – legacy mode, best for identical monitors
      • Fit – create the Eyefinity surface using best available rectangular area with attached displays
      • Expand – create a virtual Eyefinity surface using desktops as viewports onto the surface
    • Eyefinity Display Alignment
      • Enables control over alignment between adjacent monitors
    • One-Click Setup
      • Driver detects layout of extended desktop
      • Can create Eyefinity display group using this layout in one click!
    • New user controls for video color and display settings
      • Greater control over Video Color Management:
        • Controls have been expanded from a single slider for controlling Boost and Hue to per color axis
        • Color depth control for Digital Flat Panels (available on supported HDMI and DP displays)
        • Allows users to select different color depths per resolution and display

AMD Mantle enhancements

  • Mantle now supports AMD Mobile products with Enduro technology
    • Battlefield 4: AMD Radeon HD 8970M (1366x768; high settings) – 21% gain
    • Thief: AMD Radeon HD 8970M (1920x1080; high settings) – 14% gain
    • Star Swarm: AMD Radeon HD 8970M (1920x1080; medium settings) – 274% gain
  • Enables support for Multi-GPU configurations with Thief (requires the latest Thief update)
  • AMD AM1 JPEG decoding acceleration
    • JPEG decoding acceleration was first enabled on the A10 APU Series in AMD Catalyst 14.1 beta, and has now been extended to the AMD AM1 Platform
    • Provides fast JPEG decompression
    • Provides Power Efficiency for JPEG decompression

Resolved Issues

  • 60Hz SST flickering has been identified as an issue with non-standard display timings exhibited by the AOC U2868PQU panel on certain AMD Radeon graphics cards. A software workaround has been implemented in the AMD Catalyst 14.9 driver to resolve the display timing issues with this display
  • Users seeing flickering issues in 60Hz SST mode are further encouraged to obtain newer display firmware from their monitor vendor that will resolve flickering at its origin.
  • Users are additionally advised to utilize DisplayPort-certified cables to ensure the integrity of the DisplayPort data connection.
  • 4K panel flickering issues found on the AMD Radeon R9 290 Series and AMD Radeon HD 7800 Series
  • Screen tearing observed on AMD CrossFire systems with Eyefinity portrait display configurations
  • Instability issues for Grid Autosport when running in 2x1 or 1x2 Eyefinity configurations
  • Geometry corruption in State of Decay
Source: AMD

Apple A8 Die Shot Released (and Debated)

Subject: Graphics Cards, Processors, Mobile | September 29, 2014 - 01:53 AM |
Tagged: apple, a8, a7, Imagination Technologies, PowerVR

First, Chipworks released a die shot of the new Apple A8 SoC (stored at archive.org). It is based on TSMC's 20nm fabrication process, for which Apple allegedly bought the entire capacity. From there, a bit of a debate arose regarding what each group of transistors represents. All sources claim that the chip is based around a dual-core CPU, but the GPU is a bit polarizing.

apple-a8-dieshot-chipworks.png

Image Credit: Chipworks via Ars Technica

Most sources, including Chipworks, Ars Technica, Anandtech, and so forth believe that it is a quad-core graphics processor from Imagination Technologies. Specifically, they expect that it is the GX6450 from the PowerVR Series 6XT. This is a narrow upgrade over the G6430 found in the Apple A7 processor, which is in line with the initial benchmarks that we saw (and not in line with the 50% GPU performance increase that Apple claims). For programmability, the GX6450 is equivalent to a DirectX 10-level feature set, unless it was extended by Apple, which I doubt.

apple-a8-dieshot-dailytech.png

Image Source: DailyTech

DailyTech has their own theory, suggesting that it is a GX6650 that is horizontally-aligned. From my observation, their "Cluster 2" and "Cluster 5" do not look identical at all to the other four, so I doubt their claims. I expect that they heard Apple's 50% claims, expected six GPU cores as the rumors originally indicated, and saw cores that were not there.

Which brings us back to the question: what is the 50% increase in performance that Apple claims? Unless there was a significant increase in clock rate, I still wonder if Apple is claiming that the increase in graphics performance will come from the Metal API, even though it is not exclusive to new hardware.
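
A quick sanity check of the competing theories, under the simplifying assumption that GPU throughput scales linearly with cluster count and clock (ignoring bandwidth and other real-world limits):

```python
# Simplified sanity check: assume throughput scales linearly with cluster count
# and clock. This ignores memory bandwidth and other real-world limits.
def relative_throughput(cluster_ratio, clock_ratio):
    return cluster_ratio * clock_ratio

# Six clusters (DailyTech's GX6650 theory) at an unchanged clock would explain a 50% claim:
print(relative_throughput(6 / 4, 1.0))  # 1.5x
# Four clusters (the GX6450 most sources see) would need a ~50% higher clock instead:
print(relative_throughput(4 / 4, 1.5))  # 1.5x
```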

But from everything we have seen so far, it is just a handful of percent better.