Subject: General Tech, Graphics Cards | November 5, 2014 - 12:56 PM | Andre DeCoste
Tagged: radeon, R9 290X, R9, amd, 8gb
With the current range of AMD's R9 290X cards sitting at 4 GB of memory, listings for an 8 GB version have appeared at an online retailer. As far back as March, Sapphire was rumored to be building an 8 GB variety. Those rumors were supposedly quashed last month by AMD and Sapphire. However, AMD has since confirmed the existence of the new additions to the series. Pre-orders have appeared online and are said to be shipping out this month.
Image Credit: Overclockers UK
With 8 GB of GDDR5 memory and price tags between $480 and $520, these new additions, as expected, do not come cheap. Compared to the 4 GB versions of the R9 290X line, which run about $160 less according to the online retailer, is it worth upgrading at this stage? For people using a single 1080p monitor, the answer is likely no. For those with multi-screen setups, or those with deep enough pockets to own a 4K display, however, the benefits may begin to justify the premium. Even at 4K, though, a single 8 GB R9 290X may not provide the best experience; a CrossFire setup, with GPU horsepower to spare, would benefit more from the 8 GB bump, since it is less reliant on the speed of any one GPU.
Subject: Graphics Cards | October 31, 2014 - 03:45 PM | Jeremy Hellstrom
Tagged: sli, nvidia, GTX 980
Just in case you need a reason to be insanely jealous of someone, [H]ard|OCP has just published an article covering what it is like to live with two GTX 980s in SLI. The cards are driving three Dell U2410 24" 1920x1200 displays for a relatively odd resolution of 3600x1920, but apart from an issue with the GeForce Experience software suite the cards have no trouble driving all three monitors. In their testing of Borderlands games they definitely noticed when PhysX was turned on, though like others [H] wishes that PhysX would abandon its proprietary roots. Compared to the Radeon R9 290X CrossFire system the performance is very similar, but when you look at the heat, power, and noise produced, the 980s are the clear winner. Keep in mind a good 290X is just over $300 while the least expensive GTX 980 will run you over $550.
"What do you get when you take two NVIDIA GeForce GTX 980 video cards, configure those for SLI, and set those at your feet for four weeks? We give our thoughts and opinions about actually using these GPUs in our own system for four weeks with focus on performance, sound profile, and heat generated by these cards."
Here are some more Graphics Card articles from around the web:
- NVIDIA GeForce GTX 980 SLI 4K @ [H]ard|OCP
- GeForce GTX 980 PCI-Express Scaling @ techPowerUp
- Inno3D GTX 980 'iChill Herculez X4 Air Boss Ultra' @ Kitguru
- NVIDIA's Linux Driver Can Deliver Better OpenGL Performance Than Windows 8.1 @ Phoronix
- 6-Way Ubuntu 14.10 Radeon Gallium3D vs. Catalyst Driver Comparison @ Phoronix
- Diamond Boost Radeon R9 270X Review @ OCC
- Sapphire R9 285 Dual-X OC Video Card Review @ TechwareLabs
- HIS Radeon R9 290X Hybrid IceQ 4GB - Liquid Cooled @ Legion Hardware
Mini-ITX Sized Package with a Full Sized GPU
PC components seem to be getting smaller. Micro-ATX used to be unpopular with mainstream enthusiasts, but that has changed of late. Mini-ITX is now the hot form factor, with plenty of integrated features on motherboards and interesting case designs to house them in. Enthusiast graphics cards tend to be big, and that is a problem for some of these small cases. Manufacturers are responding by squeezing every ounce of cooling performance into smaller cards that fit these small chassis more comfortably.
MSI is currently offering their midrange cards in these mini-ITX liveries. The card we have today is the GTX 760 Mini-ITX Gaming. The GTX 760 is a fairly popular card, being fairly quick but not too expensive. It is still based on the GK104, though heavily cut down from a fully functional die. The GTX 760 features 1152 CUDA cores divided into 6 SMXs; a fully functional GK104 has 1536 CUDA cores and 8 SMXs. The stock clock on the GTX 760 is 980 MHz with a boost up to 1033 MHz.
The pricing for the GTX 760 cards is actually fairly high compared to similarly performing products from AMD. NVIDIA feels that they offer a very solid product at that price and do not need to compete directly with AMD on a performance-per-dollar basis. Considering that NVIDIA has stayed very steady in terms of market share, they probably have a valid point. Overall the GTX 760 performs in the same general area as the R9 270X and R9 280, but again the AMD parts have a significant advantage in terms of price.
The challenges for making a high performing, small form factor card are focused on power delivery and thermal dissipation. Can the smaller PCB still have enough space for all of the VRMs required with such a design? Can the manufacturer develop a cooling solution that will keep the GPU in the designed thermal envelope? MSI has taken a shot at these issues with their GTX 760 Mini-ITX OC edition card.
Subject: General Tech, Graphics Cards | October 29, 2014 - 06:12 PM | Scott Michaud
Tagged: ubisoft, assassin's creed
Ubisoft has integrated GameWorks into Assassin's Creed Unity, or at least parts of it. The main feature to be included is NVIDIA's Horizon Based Ambient Occlusion Plus (HBAO+), which is their implementation of ambient occlusion. This effect darkens areas that would otherwise be incorrectly lit, given the current limitations of real-time global illumination. Basically, it analyzes the scene's geometry to subtract some of the influence of "ambient light" in places where it is an unrealistic approximation (particularly in small crevices). This is especially useful for overcast scenes, where direct sunlight does not overwhelm the contribution of scatters and bounces.
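In rough terms, and as a simplified model rather than NVIDIA's actual HBAO+ math, ambient occlusion scales the ambient term of the usual lighting sum by a per-pixel occlusion factor:

final color = direct lighting + occlusion × ambient lighting

where occlusion runs from 1 in open areas down toward 0 in tight crevices; HBAO+ estimates that factor by sampling the depth-buffer geometry around each pixel.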
The other features to be included are Temporal Anti-Aliasing (TXAA), Percentage-Closer Soft Shadows (PCSS), and GeometryWorks Advanced Tessellation. TXAA and PCSS were both included in Assassin's Creed IV: Black Flag, alongside the previously mentioned HBAO+, so it makes sense that Ubisoft continues to use what worked for them. GeometryWorks is a different story. NVIDIA seems to claim that it is like DirectX 11 tessellation, but is better suited for use alongside HBAO+ and PCSS.
Assassin's Creed Unity will be available on November 11th.
Subject: Graphics Cards | October 28, 2014 - 12:09 PM | Ryan Shrout
Tagged: maxwell, GTX 970, geforce, coil whine
Coil whine is the undesirable effect of electrical components creating audible noise when operating. Let's look to our friends at Wikipedia for a concise and accurate description of the phenomenon:
Coil noise is, as its name suggests, caused by electromagnetic coils. These coils, which may act as inductors or transformers, have a certain resonant frequency when coupled with the rest of the electric circuit, as well as a resonance at which they will tend to physically vibrate.
As the wire that makes up the coil passes a variable current, a small amount of electrical oscillation occurs, creating a small magnetic field. Normally this magnetic field simply works to establish the inductance of the coil. However, this magnetic field can also cause the coil itself to physically vibrate. As the coil vibrates physically, it moves through a variable magnetic field, and feeds its resonance back into the system. This can produce signal interference in the circuit and an audible hum as the coil vibrates.
Coil noise can happen, for example, when the coil is poorly secured to the circuit board, is poorly damped, or if the resonant frequency of the coil is close to the resonant frequency of the electric circuit. The effect becomes more pronounced as the signal passing through the coil increases in strength, and as it nears the resonant frequency of the coil, or as it nears the resonant frequency of the circuit. Coil noise is also noticed most often when it falls within the humanly audible frequency range.
Coil noise is also affected by the irregularities of the magnetic material within the coil. The flux density of the inductor is affected by these irregularities, causing small currents in the coil, contaminating the original signal. This particular subset of coil noise is sometimes referred to as magnetic fluctuation noise or the Barkhausen effect. Coil noise can also occur in conjunction with the noise produced by magnetostriction.
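To put a back-of-the-envelope number on that resonance: an inductor L working against a capacitance C resonates electrically at f = 1 / (2π√(LC)). Plug in illustrative values of L = 0.5 µH and C = 470 µF (roughly the order of magnitude of a GPU VRM output filter) and f lands near 10 kHz, squarely within the human audible range.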
Gamers who frequently upgrade their graphics cards may have witnessed this problem with a particular install, or you might have been one of the lucky ones who never deal with the issue. If your computer sits under your desk, in a loud room, or you only game with headphones, it's also possible that you just never noticed.
Possibly offending inductors?
The reason this comes up today is that reports are surfacing of GeForce GTX 970 cards from various graphics card vendors exhibiting excessive coil whine or coil noise. These reports are coming in from multiple forum threads around the internet, a collection of YouTube videos of users attempting to capture the issue, and even official statements from some of NVIDIA's partners. Now, just because the internet is talking about it doesn't necessarily mean it's a "big deal" relative to the number of products being sold. However, after several Twitter comments and emails requesting we look into the issue, I thought it was pertinent to start asking questions.
As far as I can tell today, GTX 970 cards from multiple vendors including EVGA, MSI and Gigabyte all have users reporting issues and claims of excessive coil noise. For my part here, I have two EVGA GTX 970 cards and an MSI GTX 970, none of which are producing sound at what I would call "excessive" levels. Everyone's opinion of excessive noise is going to vary, but as someone who sits next to a desk-high test bed and hears hundreds of cards a year, I am confident I have a good idea of what to listen for.
We are still gathering data on this potential issue, but a few of the companies mentioned above have issued official or semi-official statements on the problem.
The coil whine issue is not specific to the 900 series; it can happen with any high-end GPU, and MSI is looking into ways to minimize the issue. If you still have concerns regarding this issue, then please contact our RMA department.
We have been watching the early feedback on GTX 970 and inductor noise very closely, and have actively taken steps to improve this. We urge anyone who has this type of concern to contact our support so we can address it directly.
We’re aware of a small percentage of users reporting excessive “coil whine” noises and are actively looking into the issue.
We are waiting for feedback from other partners to see how they plan to respond.
Since all of the GTX 970 cards currently shipping are non-reference, custom-built PCB designs, NVIDIA's input on the problem is mostly one of recommendations. NVIDIA knows that it is their name and brand being associated with any noisy GeForce cards, so I would expect a lot of discussions and calls behind closed doors to make sure partners are addressing user concerns.
Interestingly, the GeForce GTX 970 was the one card of this Maxwell release where all of NVIDIA's partners chose to go the route of custom designs rather than adopting the NVIDIA reference design. On the GTX 980, however, you'll find a mix of both, and I would wager that NVIDIA's reference boards do not exhibit any above-average noise levels from coils. (I have actually tested four reference GTX 980s without coil whine coming into play.) Sometimes offering all of these companies the option to be creative and to differentiate can backfire if the utmost care isn't taken in component selection.
Ironically, the fix is simple: a little glue on those vibrating inductor coils and the problem goes away. But most of the components are sealed, making the simple fix a non-starter for the end user (and I wouldn't recommend attempting it anyway). It does point to a lack of leadership from board manufacturers willing to skimp on hardware in such a way as to make this a big enough issue that I am sitting here writing about it today.
As an aside, if you hear coil whine when running a game at 500-5000 FPS, I don't think that counts as being a major problem for your gaming. I have seen a video or two running a DX9 render test at over 4500 FPS - pretty much any card built today will make noises you don't expect when hitting that kind of performance level.
As for my non-official discussions on the topic with various parties, everyone continues to reiterate that the problem is not as widespread as some of the forum threads would have you believe. It's definitely higher than normal, and the public acknowledgements from EVGA and MSI basically confirm that, but one person told me the complaint and RMA levels are where they were expected to be considering the "massively fast sell out rates" the GTX 970 is experiencing.
Of course, AMD isn't immune to coil whine issues either. If you remember back to the initial launch of the Radeon R9 290X and R9 290, we had similar coil whine issues and experienced those first hand on reference card designs. (You can see a video I recorded of an XFX unit back in November of 2013 here.) You can still find threads on popular forums from that time period discussing the issue and YouTube never seems to forget anything, so there's that. Of course, the fact that previous card launches might have seen issues along the same line doesn't forgive the issue in current or later card releases, but it does put things into context.
So, let's get some user feedback; I want to hear from GTX 970 owners about their experiences to help guide our direction of research going forward.
Subject: General Tech, Graphics Cards | October 27, 2014 - 04:50 PM | Ryan Shrout
Tagged: xbox one, sony, ps4, playstation 4, microsoft, amd
A couple of weeks back, a developer on Ubisoft's Assassin's Creed Unity was quoted as saying the team had decided to run both the Xbox One and PlayStation 4 versions of the game at 1600x900 resolution "to avoid all the debates and stuff." Of course, the Internet exploded in a collection of theories about why that would be the case: were they paid off by Microsoft?
For those of us who focus more on the world of PC gaming, however, an email sent the following week to the Giantbomb.com weekly podcast from an anonymous (but seemingly reliable) developer on the Unity team raised even more interesting material. In the email, alongside other points on the value of pixel counts and the stunning visuals of the game, the developer asserted that we may have already peaked on the graphical compute capability of these two new gaming consoles. Here is a portion of the information:
The PS4 couldn’t do 1080p 30fps for our game, whatever people, or Sony and Microsoft say. ...With all the concessions from Microsoft, backing out of CPU reservations not once, but twice, you’re looking at about a 1-2 FPS difference between the two consoles.
What's hard is not getting the game to render but getting everything else in the game at the same level of performance we designed from the start for the graphics. By the amount of content and NPCs in the game, from someone who witnessed a lot of optimizations for games from Ubisoft in the past, this is crazily optimized for such a young generation of consoles. This is really about to define the next-generation unlike any other game beforehand.
We are bound from the CPU because of AI. Around 50% of the CPU is used for the pre-packaged rendering parts.
So, if we take this anonymous developer's information as true, and this whole story is based on that assumption, then we have learned some interesting things.
- The PS4, the more graphically powerful of the two very similarly designed consoles, was not able to maintain a 30 FPS target when rendering at 1920x1080 resolution with Assassin's Creed Unity.
- The Xbox One (after giving developers access to more compute cycles previously reserved to Kinect) is within a 1-2 FPS mark of the PS4.
- The Ubisoft team sees Unity as "crazily optimized" for the architecture and the consoles, even as we only now approach the one-year anniversary of their release.
- Half of the CPU compute time is being used to help the rendering engine by unpacking pre-baked lighting models for the global illumination implementation, and thus the AI and everything else in the game is limited to the remaining 50% of CPU performance.
It would appear that, just as many in the media declared when the specifications for the new consoles were announced, the hardware inside the PlayStation 4 and Xbox One undershoots the needs of game developers who truly want to build "next-generation" games. If, as this developer states, we are less than a year into the life cycle of hardware that was planned for an 8-10 year window and we have reached performance limits, that's a bad sign for game developers that really want to create exciting gaming worlds. Keep in mind that this time around the hardware isn't custom built cores or using a Cell architecture - we are talking about very basic x86 cores and traditional GPU hardware that ALL software developers are intimately familiar with. It does not surprise me one bit that we have seen more advanced development teams hit peak performance.
If the PS4, the slightly more powerful console of the pair, is unable to render reliably at 1080p with a 30 FPS target, then, unless the Ubisoft team are completely off their rocker in terms of development capability, the advancement of gaming on consoles would appear to be somewhat limited. Remember the specifications for these two consoles:
|  | PlayStation 4 | Xbox One |
| --- | --- | --- |
| Processor | 8-core Jaguar APU | 8-core Jaguar APU |
| Memory | 8GB GDDR5 | 8GB DDR3 |
| Graphics Card | 1152 Stream Unit APU | 768 Stream Unit APU |
| Peak Compute | 1,840 GFLOPS | 1,310 GFLOPS |
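For context, those peak compute figures are simple arithmetic: stream processors × 2 FLOPS per clock (one fused multiply-add) × GPU clock, using the widely reported clocks of 800 MHz for the PS4 and 853 MHz for the Xbox One:

PS4: 1152 × 2 × 800 MHz ≈ 1,843 GFLOPS
Xbox One: 768 × 2 × 853 MHz ≈ 1,310 GFLOPS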
The custom-built parts from AMD both feature an 8-core Jaguar x86 architecture and either 768 or 1152 stream processors. The Jaguar CPU cores aren't high-performance parts: single-threaded performance of Jaguar is less than the Intel Silvermont/Bay Trail designs by as much as 25%. Bay Trail is powering lots of super low cost tablets today and even the $179 ECS LIVA palm-sized mini-PC we reviewed this week. And the 1152/768 stream processors in the GPU portion of the AMD APU provide some punch, but a Radeon HD 7790 (now called the R7 260X), released in March of 2013, provides more performance than the PS4, and the Radeon R7 250X is faster than what resides in the Xbox One.
If you were to ask me today what kind of performance would be required from AMD's current GPU lineup for a steady 1080p gaming experience on the PC, I would probably tell you the R9 280, a card you can buy today for around $180. From NVIDIA, I would likely pick a GTX 760 (around $200).
Also note that if the developer is using 50% of the CPU resources for rendering computation and the remaining 50% isn't able to hold up its duties on AI, etc., we likely have hit performance walls on the x86 cores as well.
Even if this developer quote is 100% correct, that doesn't mean that the current generation of consoles is completely doomed. Microsoft has already stated that DirectX 12, focused on performance efficiency for current-generation hardware, will be coming to the Xbox One, and that could mean additional performance gains for developers. The PS4 will likely have access to OpenGL Next, which is due in the future. And of course, it's also possible that this developer is just wrong and there is plenty of headroom left in the hardware for games to take advantage of.
But honestly, based on my experience with these GPU and CPU cores, I don't think that's the case. If you look at screenshots of Assassin's Creed Unity and then look at the minimum and recommended specifications for the game on the PC, there is an enormous discrepancy. Are the developers just writing lazy code and not truly optimizing for the hardware? It seems unlikely that a company the size of Ubisoft would choose this route on purpose, creating a console game that runs in a less-than-ideal state while also struggling on the PC version. Remember, there is almost no "porting" going on here: the Xbox One and Playstation 4 share the same architecture as the PC now.
Of course, we might just be treading through known waters. I know we are a bit biased, and so is our reader base, but I am curious: do you think MS and Sony have put themselves in a hole with their shortsighted hardware selections?
UPDATE: It would appear that a lot of readers and commenters take our editorial on the state of the PS4 and XB1 as a direct attack on AMD and its APU design. That isn't really the case - regardless of whose hardware is inside the consoles, had Microsoft and Sony still targeted the same performance levels, we would be in the exact same situation. An Intel + NVIDIA hardware combination could just as easily have been built to the same peak theoretical compute levels and would have hit the same performance wall just as quickly. MS and Sony could have prevented this by using higher performance hardware, selling the consoles at a loss out of the gate and properly preparing each platform for the next 7-10 years. And again, the console manufacturers could have done that with higher end AMD hardware, Intel hardware, or NVIDIA hardware. The state of the console performance war is truly hardware agnostic.
Subject: Graphics Cards | October 26, 2014 - 02:44 AM | Scott Michaud
Tagged: amd, driver, catalyst
So Ryan has been playing many games lately as a comparison between the latest GPUs from AMD and NVIDIA. While Civilization: Beyond Earth is not the most demanding game on a GPU, it is not trivial either, and it is a contender for the most demanding game on your main processor (CPU). It also has some of the most thought-out Mantle support of any title using the API, provided you can get the AMD Catalyst 14.9.2 Beta driver.
And now you can!
The Catalyst 14.9.2 Beta drivers support just about anything using the GCN architecture, from APUs (starting with Kaveri) to discrete GPUs (starting with the HD 7000 and HD 7000M series). Beyond enabling Mantle support in Civilization, it also fixes some issues with Metro, Shadow of Mordor, Total War: Rome 2, Watch_Dogs, and other games.
Also, both AMD and Firaxis are aware of a bug in Civilization: Beyond Earth where the mouse cursor does not click exactly where it is supposed to, if the user enables font scaling in Windows. They are working on it, but suggest setting it to the default (100%) if users experience this issue. This could be problematic for customers with high-DPI screens, but could keep you playing until an official patch is released.
You can get 14.9.2 Beta for Windows 7 and Windows 8.1 at AMD's website.
Subject: Graphics Cards | October 24, 2014 - 03:44 PM | Ryan Shrout
Tagged: radeon, R9 290X, leaderboard, hwlb, hawaii, amd, 290x
When NVIDIA launched the GTX 980 and GTX 970 last month, it shocked the discrete graphics world. The GTX 970 in particular was an amazing performer and undercut the price of the Radeon R9 290 at the time. That is something that NVIDIA rarely does and we were excited to see some competition in the market.
AMD responded with some price cuts on both the R9 290X and the R9 290 shortly thereafter (though they refuse to call them that) and it seems that AMD and its partners are at it again.
Looking on Amazon.com today we found several R9 290X and R9 290 cards at extremely low prices. For example:
- XFX Radeon R9 290X Double D - $299 (after MIR)
- Gigabyte R9 290X WindForce - $360
- MSI R9 290X Gaming - $366
The R9 290X's primary competition in terms of raw performance is the GeForce GTX 980, currently selling for $549 and up, if you can find one in stock. That means NVIDIA has a $250 hill to climb when going up against the lowest priced R9 290X.
The R9 290 looks interesting as well. Several other R9 290 cards are selling for upwards of $300-320, making them bone-headed buys if you can get an R9 290X for the same or lower price, but with the GeForce GTX 970 selling for at least $329 today (if you can find it), you can see why consumers are paying close attention.
Will NVIDIA make any adjustments of its own? It's hard to say right now since stock of both the GTX 980 and GTX 970 is so hard to come by, but it's hard to imagine NVIDIA lowering prices as long as parts continue to sell out. NVIDIA believes that its branding and technologies like G-Sync make GeForce cards more valuable, and until they begin to see a shift in the market, I imagine they will stay the course.
For those of you that utilize our Hardware Leaderboard, you'll find that Jeremy has taken these prices into account and updated a couple of the system build configurations.
A Civ for a New Generation
Turn-based strategy games have long been defined by the Civilization series. Civ 5 ate up hours and hours of the PC Perspective team's non-working time (and likely the working hours too), and it looks like the new Civilization: Beyond Earth has the chance to do the same. Early reviews of the game from GameSpot, IGN, and Polygon are quite positive, and that's great news for a PC-only release; they can sometimes get overlooked in the gaming media.
For us, the game offers an interesting opportunity to discuss performance. Beyond Earth is definitely going to be more CPU-bound than the other games we tend to use in our benchmark suite, but the fact that this game is new, shiny, and even has a Mantle implementation (AMD's custom API) makes it worth at least a look at the current state of performance. Both NVIDIA and AMD have released drivers with specific optimizations for Beyond Earth as well. This game is likely to be popular and it deserves the attention it gets.
Civilization: Beyond Earth, a turn-based strategy game that can take a very long time to complete, ships with an integrated benchmark mode to help users and the industry test performance under different settings and hardware configurations. To enable it, you simply add "-benchmark results.csv" to the Steam game launch options and then start the game normally. Rather than taking you to the main menu, you'll be transported into a view of a map that represents a somewhat typical gaming state for a long-term session. The benchmark measures your system's performance using the last settings you ran the game at without the modified launch options, so be sure to configure those before you prepare to benchmark.
The output is the "results.csv" file, saved to your Steam game install root folder. In there, you'll find a list of numbers, separated by commas, representing the frame times (in milliseconds) for each frame rendered during the run. You don't get averages, a minimum, or a maximum without doing a little work. Fire up Excel or Google Docs and remember the formula:
Avg FPS = 1000 / Average(all frame times in ms)
It's a crude measurement that doesn't take into account any errors, spikes, or other interesting statistical data, but at least you'll have something to compare with your friends.
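If a spreadsheet feels like overkill, a few lines of Python will do the same job. This is a minimal sketch, assuming the output file sits in the current directory under the name passed in the launch option and holds frame times in milliseconds:

```python
# Summarize a Civilization: Beyond Earth benchmark run.
# Assumes results.csv contains frame times in milliseconds,
# separated by commas (one or more rows).
import csv

frame_times = []
with open("results.csv", newline="") as f:
    for row in csv.reader(f):
        frame_times.extend(float(value) for value in row if value.strip())

avg_ms = sum(frame_times) / len(frame_times)
print(f"Frames recorded: {len(frame_times)}")
print(f"Average FPS:     {1000 / avg_ms:.1f}")
print(f"Slowest frame:   {max(frame_times):.1f} ms")
print(f"Fastest frame:   {min(frame_times):.1f} ms")
```

The slowest-frame number is worth a look alongside the average, since a handful of long frames can make a run feel choppy even when the average FPS looks healthy.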
Our testing settings
Just as I have done in recent weeks with Shadow of Mordor and Sniper Elite 3, I ran some graphics cards through the testing process with Civilization: Beyond Earth. These include the GeForce GTX 980 and Radeon R9 290X only, along with SLI and CrossFire configurations. The R9 290X was run in both DX11 and Mantle.
- Core i7-3960X
- ASUS Rampage IV Extreme X79
- 16GB DDR3-1600
- GeForce GTX 980 Reference (344.48)
- ASUS R9 290X DirectCU II (14.9.2 Beta)
Mantle Additions and Improvements
AMD is proud of this release as it introduces a few interesting things alongside the inclusion of the Mantle API.
- Enhanced-quality Anti-Aliasing (EQAA): Improves anti-aliasing quality by doubling the coverage samples (vs. MSAA) at each AA level. This is automatically enabled for AMD users when AA is enabled in the game.
- Multi-threaded command buffering: Utilizing Mantle allows a game developer to queue a much wider flow of information between the graphics card and the CPU. This communication channel is especially good for multi-core CPUs, which have historically gone underutilized with higher-level APIs. You'll see in your testing that Mantle makes a notable difference in smoothness and performance in high-draw-call, late-game testing.
- Split-frame rendering: Mantle empowers a game developer with total control of multi-GPU systems. That “total control” allows them to design an mGPU renderer that best matches the design of their game. In the case of Civilization: Beyond Earth, Firaxis has selected a split-frame rendering (SFR) subsystem. SFR eliminates the latency penalties typically encountered by AFR configurations.
EQAA is an interesting feature as it improves on the quality of MSAA (somewhat) by doubling the coverage sample count while maintaining the same color sample count as MSAA. So 4xEQAA will have 4 color samples and 8 coverage samples while 4xMSAA would have 4 of each. Interestingly, Firaxis has decided that EQAA will be enabled in Beyond Earth anytime a Radeon card is detected (running in Mantle or DX11) and AA is enabled at all. So even though the menus might show 4xMSAA enabled, you are actually running 4xEQAA. For NVIDIA users, 4xMSAA means 4xMSAA. Performance differences should be negligible though, according to AMD (who would actually be "hurt" by this decision if it brought down FPS).
Subject: Graphics Cards | October 23, 2014 - 04:06 PM | Jeremy Hellstrom
Tagged: xfx, R9 285 Black Edition, factory overclocked, amd
Currently sitting at $260, the XFX R9 285 Black Edition is a little less expensive than the ASUS ROG STRIKER GTX 760 and significantly more expensive than the ASUS GTX760 DirectCU2 card. Those prices led [H]ard|OCP to set up a showdown to see which card provided the best bang for the buck, especially once they overclocked the AMD card to 1125MHz core and 6GHz RAM. In the end it was a very close race; the performance crown did go to the R9 285 BE, but that performance comes at a premium, as you can get performance almost as good for $50 less. Of course, both the XFX card and the STRIKER sell at a premium compared to cards with fewer features and a stock setup; you should expect the lower priced R9 285s to be closer in performance to the DirectCU2 card.
"Today we are reviewing the new XFX Radeon R9 285 Black Edition video card. We will compare it to a pair of GeForce GTX 760 based GPUs to determine the best at the sub-$250 price point. XFX states that it is faster than the GTX 760, but that is based on a single synthetic benchmark, let's see how it holds up in real world gaming."
Here are some more Graphics Card articles from around the web:
- ZOTAC GTX 780 AMP! Edition Graphics Card Review @ TechwareLabs
- ASUS STRIX GeForce GTX 970 4GB @ eTeknix
- GeForce GTX 970 cards from MSI and Asus @ The Tech Report
- ASUS STRIX GTX 980 OC 4 GB @ techPowerUp
- MSI GTX980 Gaming 4G @ Kitguru
- MSI GTX 970 Gaming 4G – New Maxwell Price/Performance Beast @ Bjorn3D
- NVIDIA GeForce GTX 970 Offers Great Linux Performance @ Phoronix