Manufacturer: AMD

A slightly new architecture

Note: We also tested the new AMD Radeon R9 290X in CrossFire and at 4K resolutions; check out that full Frame Rating story right here!!

Last month AMD brought media, analysts, and customers out to Hawaii to talk about a new graphics chip coming out this year.  As you might have guessed from the location, the code name for this GPU was, in fact, Hawaii.  It was targeted at the high end of the discrete graphics market to take on the likes of the GTX 780 and GTX TITAN from NVIDIA.

Earlier this month we reviewed the AMD Radeon R9 280X, R9 270X, and the R7 260X. None of these were based on that new GPU.  Instead, these cards were all rebrands and repositionings of existing hardware in the market (albeit at reduced prices).  Those lower prices made the R9 280X one of our favorite GPUs of the moment, as it offers a price-to-performance ratio currently unmatched by NVIDIA.

But today is a little different: today we are talking about a much more expensive product that has to live up to some pretty lofty goals and ambitions set forth by the AMD PR and marketing machine.  At $549 MSRP, the new AMD Radeon R9 290X will become the flagship of the Radeon brand.  The question is: to where does that ship sail?

 

The AMD Hawaii Architecture

To be quite upfront about it, the Hawaii design is very similar to that of the Tahiti GPU from the Radeon HD 7970 and R9 280X cards.  Based on the same GCN (Graphics Core Next) architecture that AMD assured us would be its long-term vision, Hawaii ups the ante in a few key areas while maintaining the same core.

01.jpg

Hawaii is built around Shader Engines, of which the R9 290X has four.  Each of these includes 11 CUs (compute units), and each CU holds four 16-wide SIMD arrays, or 64 stream processors.  Doing the quick math brings us to a total stream processor count of 2,816 on the R9 290X.
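For the curious, here is that quick math spelled out as a minimal sketch in Python; the 64-stream-processors-per-CU figure follows from GCN's four 16-lane SIMD arrays per compute unit:

    # Back-of-the-envelope Hawaii shader math, as described above.
    shader_engines = 4      # Shader Engines on the R9 290X
    cus_per_engine = 11     # compute units per Shader Engine
    simds_per_cu = 4        # SIMD arrays per CU (GCN)
    lanes_per_simd = 16     # stream processors per SIMD array

    stream_processors = shader_engines * cus_per_engine * simds_per_cu * lanes_per_simd
    print(stream_processors)  # 2816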

Continue reading our review of the AMD Radeon R9 290X 4GB Graphics Card!!

AMD Aggressively Targets Professional GPU Market

Subject: General Tech, Graphics Cards | October 23, 2013 - 04:30 PM |
Tagged: amd, firepro

Currently AMD holds 18% market share with their FirePro line of professional GPUs. This compares to NVIDIA, which owns 81% with Quadro. I assume the "other" category is the sum of S3 and Matrox who, together, command 1% of the professional market (just the professional market).

According to Jon Peddie of JPR, as reported by X-Bit Labs, AMD intends to wrest back revenue it had left unguarded for NVIDIA. "After years of neglect, AMD’s workstation group, under the tutorage of Matt Skyner, has the backing and commitment of top management and AMD intends to push into the market aggressively." They have already gained share this year.

W600-card.jpg

During AMD's 3rd Quarter (2013) earnings call, CEO Rory Read outlined the importance of the professional graphics market.

We also continue to make steady progress in another of [our] growth businesses in the third quarter as we delivered our fifth consecutive quarter of revenue and share growth in the professional graphics area. We believe that we can continue to gain share in this lucrative part of the GPU market based on our product portfolio, design wins in flight, and enhanced channel programs.

On the same conference call (actually before and after the professional graphics sound bite), Rory noted their renewed push into the server and embedded SoC markets with 64-bit x86 and 64-bit ARM processors. They will be the only company manufacturing both x86 and ARM solutions, which should be an interesting proposition for an enterprise in need of both. Why deal with two vendors?

Either way, AMD will probably be refocusing on the professional and enterprise markets for the near future. For the rest of us, this hopefully means that AMD has a stable (and confident) roadmap in the processor and gaming markets. If that is the case, a profitable Q3 is definitely a good start.

Taking the ASUS R9 280X DirectCU II TOP as far as it can go

Subject: Graphics Cards | October 23, 2013 - 03:20 PM |
Tagged: amd, overclocking, asus, ASUS R9 280X DirectCU II TOP, r9 280x

Having already seen what the ASUS R9 280X DirectCU II TOP can do at default speeds, the obvious next step, once they had time to fully explore the options, was for [H]ard|OCP to see just how far this GPU can overclock.  To make a long story short, they went from a default clock of 1070 MHz up to 1230 MHz and pushed the RAM from 6.4 GHz to 6.6 GHz, though the voltage needed to be bumped from 1.2 V to 1.3 V.  The actual frequencies are nowhere near as important as the effect on gameplay, though; to see those results you will have to click through to the full article.
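For a rough sense of scale, here is a minimal sketch (in Python) of the relative gains those clocks represent; as noted above, the gameplay impact is what really matters:

    # Percentage gains from the overclock described above.
    core_default, core_oc = 1070, 1230   # MHz
    mem_default, mem_oc = 6.4, 6.6       # GHz effective

    core_gain = (core_oc / core_default - 1) * 100
    mem_gain = (mem_oc / mem_default - 1) * 100
    print(f"Core: +{core_gain:.1f}%  Memory: +{mem_gain:.1f}%")  # Core: +15.0%  Memory: +3.1%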

1381749747uEpep2oCxY_1_2.gif

"We take the new ASUS R9 280X DirectCU II TOP video card and find out how high it will overclock with GPU Tweak and voltage modification. We will compare performance to an overclocked GeForce GTX 770 and find out which card comes out on top when pushed to its overclocking limits."

Here are some more Graphics Card articles from around the web:

Graphics Cards

Source: [H]ard|OCP

hUMA's new trick: standardized task dispatch packets

Subject: General Tech | October 23, 2013 - 10:53 AM |
Tagged: amd, hUMA

hUMA's currently well-known trick, a shared memory space which both the CPU and GPU can access without penalty, is only the first of its revealed optimizations.  The Register talks today about another way in which this new architecture gives the CPU and GPU equal treatment: standardized task queues and dispatch packets that avoid dealing with a kernel-level driver to assign tasks.  With hUMA the GPU is able to schedule tasks for the CPU directly.  That would allow any application designed to hUMA standards to have its various tasks assigned to the proper processor without needing extra coding.  This not only makes it cheaper and quicker to design apps but would allow all hUMA apps to take advantage of the specialized abilities of both the CPU and GPU at no cost.
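To illustrate the idea only (this is not AMD's actual packet format, and the field names below are hypothetical), a standardized dispatch packet boils down to a plain data structure that either processor can drop into a user-mode queue owned by the other, with no kernel driver round trip. A minimal sketch in Python:

    from collections import deque
    from dataclasses import dataclass

    @dataclass
    class DispatchPacket:
        """Illustrative stand-in for a standardized task dispatch packet."""
        kernel: str                 # which routine to run
        work_items: int             # how much data-parallel work to launch
        args_addr: int              # arguments live in the shared hUMA memory space
        completion_signal: int = 0  # how the submitter learns the task finished

    # With hQ-style user-mode queues, each processor exposes a queue that the
    # *other* processor can push work into directly, no driver call needed.
    cpu_queue = deque()  # tasks destined for the CPU
    gpu_queue = deque()  # tasks destined for the GPU

    # The GPU scheduling work for the CPU, which hUMA/hQ makes possible:
    cpu_queue.append(DispatchPacket("post_process", work_items=1, args_addr=0x1000))
    # The CPU dispatching a data-parallel task to the GPU:
    gpu_queue.append(DispatchPacket("filter_image", work_items=1920 * 1080, args_addr=0x2000))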

reg_hq_1.jpg

"The upcoming chips will utilise a technique AMD calls Heterogeneous Queuing (hQ). This new approach puts the GPU on an equal footing with the CPU: no longer will the graphics engine have to wait for the central processor to tell it what to do."

Here is some more Tech News from around the web:

Tech Talk

Source: The Register
Subject: Editorial

The Really Good Times are Over

We really do not realize how good we had it.  Sure, we could apply that to budget surpluses and the time before the rise of global terrorism, but in this case I am talking about the predictable advancement of graphics due to both design expertise and improvements in process technology.  Moore’s law has been exceptionally kind to graphics.  When we look back and plot the course of these graphics companies, we find they have actually outstripped Moore in terms of transistor density from generation to generation.  Most of this is due to better tools and the expertise gained in what is still a fairly new endeavor as compared to CPUs (the first true 3D accelerators were released in the 1993/94 timeframe).

The complexity of a modern 3D chip is truly mind-boggling.  To get a good idea of where we came from, we must look back at the first generations of products that we could actually purchase.  The original 3Dfx Voodoo Graphics consisted of a raster chip and a texture chip, each containing approximately 1 million transistors (give or take) and made on a then-available .5 micron process (we shall call it 500 nm from here on out to give a sense of perspective with modern process technology).  The chips were clocked between 47 and 50 MHz (though often could be clocked up to 57 MHz by going into the init file and putting in “SET SST_GRXCLK=57”… by the way, SST stood for Sellers/Smith/Tarolli, the founders of 3Dfx).  This revolutionary graphics card could push out 47 to 50 megapixels per second, carried 4 MB of VRAM, and was released at the beginning of 1996.
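Those fill rate figures follow directly from the clock, since Voodoo Graphics produced one pixel per clock. A trivial sketch of the arithmetic:

    # Voodoo Graphics pixel fill rate: one pixel per clock cycle.
    for clock_mhz in (47, 50, 57):  # stock range, plus the SST_GRXCLK=57 tweak
        print(f"{clock_mhz} MHz -> ~{clock_mhz} megapixels/s")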

righteous3d_01.JPG

My first 3D graphics card was the Orchid Righteous 3D.  Voodoo Graphics was really the first successful consumer 3D graphics card.  Yes, there were others before it, but Voodoo Graphics had the largest impact of them all.

In 1998 3Dfx released the Voodoo 2, and it was a significant jump in complexity from the original.  These chips were fabricated on a 350 nm process.  There were three chips on each card: one was the raster chip and the other two were texture chips.  At the top end of the product stack were the 12 MB cards.  The raster chip had 4 MB of VRAM available to it while each texture chip had 4 MB of VRAM for texture storage.  Not only did this product double the performance of the Voodoo Graphics, it was able to run in single card configurations at 800x600 (as compared to the max 640x480 of the Voodoo Graphics).  This is the same time as when NVIDIA started to become a very aggressive competitor with the Riva TNT and ATI was about to ship the Rage 128.
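For perspective, here is the pixel math behind that resolution jump (a quick sketch):

    # Pixels per frame at the maximum single-card resolutions mentioned above.
    voodoo_graphics = 640 * 480   # 307,200 pixels
    voodoo_2 = 800 * 600          # 480,000 pixels
    print(voodoo_2 / voodoo_graphics)  # ~1.56x the pixels, on top of roughly double the raw speed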

Read the entire editorial here!

John Carmack, Tim Sweeney and Johan Andersson Talk NVIDIA G-Sync, AMD Mantle and Graphics Trends

Subject: Graphics Cards | October 18, 2013 - 04:55 PM |
Tagged: video, tim sweeney, nvidia, Mantle, john carmack, johan andersson, g-sync, amd

If you weren't on our live stream from the NVIDIA "The Way It's Meant to be Played" tech day this afternoon, you missed a hell of an event.  After the announcement of NVIDIA G-Sync variable refresh rate monitor technology, NVIDIA's Tony Tamasi brought one of the most intriguing panels of developers on stage to talk.

IMG_0009.JPG

John Carmack, Tim Sweeney and Johan Andersson talk for over an hour, taking questions from the audience and even getting into debates amongst themselves in some instances.  Topics included NVIDIA G-Sync of course, AMD's Mantle low-level API, the hurdles facing PC gaming, and the direction each luminary is currently heading for future development.

If you are a PC enthusiast or gamer you are definitely going to want to listen and watch the video below!

AMD Allows Two Early R9 290X Benchmarks

Subject: General Tech, Graphics Cards | October 17, 2013 - 01:37 PM |
Tagged: radeon, R9 290X, amd

The NDA on AMD R9 290X benchmarks has not yet lifted, but AMD was in Montreal to provide two previews: BioShock Infinite and Tomb Raider, both at 4K (3840 x 2160). Keep in mind, these scores are provided by AMD and definitely do not represent results from our monitor-capture solution. Expect more detailed results from us, later, as we do some Frame Rating.

amd-gpu14-06.png

The test machine used in both setups contains:

  • Intel Core i7-3960X at 3.3 GHz
  • MSI X79A-GD65
  • 16GB of DDR3-1600
  • Windows 7 SP1 64-bit
  • NVIDIA GeForce GTX 780 (331.40 drivers) / AMD Radeon R9 290X (13.11 beta drivers)

The R9 290X is configured in its "Quiet Mode" during both benchmarks. This is particularly interesting, to me, as I was unaware of such a feature (it has been a while since I last used a desktop AMD/ATi card). I would assume this is a fan and power profile that keeps noise levels as low as possible for some period of time. A quick Google search suggests this feature is new with the Radeon Rx200-series cards.

bioshock_screen2.jpg

BioShock Infinite is quite demanding at 4K with ultra quality settings. Both cards maintain an average framerate above 30FPS.

AMD R9 290X "Quiet Mode": 44.25 FPS

NVIDIA GeForce GTX 780: 37.67 FPS

tomb_raider1.jpg

(Update 1: 4:44pm EST) AMD confirmed TressFX is disabled in these benchmark scores. It is, however, enabled if you are present in Montreal to see the booth. (end of update 1)

Tomb Raider is also a little harsh at those resolutions. Unfortunately, the results are ambiguous as to whether or not TressFX was enabled throughout the benchmarks. The summary explicitly claims TressFX is enabled, while the string of settings contains "Tressfx=off". Clearly, one of the two entries is a typo. We are currently trying to get clarification. In the meantime:

AMD R9 290X "Quiet Mode": 40.2 FPS

NVIDIA GeForce GTX 780: 34.5 FPS
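Taking AMD's provided numbers at face value, the relative gaps work out roughly like this (a quick sketch):

    # Relative advantage of the R9 290X over the GTX 780, per AMD's own numbers.
    results = {
        "BioShock Infinite (4K)": (44.25, 37.67),
        "Tomb Raider (4K)": (40.2, 34.5),
    }
    for game, (r9_290x, gtx_780) in results.items():
        print(f"{game}: R9 290X ahead by {(r9_290x / gtx_780 - 1) * 100:.1f}%")
    # BioShock Infinite (4K): R9 290X ahead by 17.5%
    # Tomb Raider (4K): R9 290X ahead by 16.5%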

Notice that neither of these results is compared to a GeForce GTX TITAN. Recent leaks suggest a retail price for AMD's flagship card in the low-$700 range. The GeForce GTX 780, on the other hand, resides in the $650-700 USD price range.

It seems pretty clear, to me, that cost drove this comparison rather than performance.

Source: AMD

Still Not Settling? Is AMD Going to Continue Never Settle?

Subject: General Tech, Graphics Cards | October 17, 2013 - 12:46 PM |
Tagged: amd, radeon

In summary, "We don't know yet".

amd7990.jpg

We do know of a story posted by Fudzilla which cited Roy Taylor, VP of Global Channel Sales, as a source confirming the reintroduction of Never Settle for the new "Rx200" Radeon cards. Adding credibility, Roy Taylor retweeted the story via his official account. This tweet is still there as I write this post.

The Tech Report, after publishing the story, was contacted by Robert Hallock of AMD Gaming and Graphics. The official word, now, is that AMD does not have any announcements regarding bundles for new products. He is also quoted as saying, "We continue to consider Never Settle bundles as a core component of AMD Gaming Evolved program and intend to do them again in the future".

So, I (personally) see promise that we will see a new Never Settle bundle. For the moment, AMD is officially silent on the matter. Also, we do not know (and, it is possible, neither does AMD at this time) which games will be included, how many titles users can claim, or if there will even be a choice at all.

Source: Tech Report

AMD cards get an unexpected boost from the Linux 3.12 kernel

Subject: General Tech | October 15, 2013 - 10:30 AM |
Tagged: amd, linux, catalyst, 3.12 kernel

Phoronix had a very pleasant surprise over the weekend, one which was apparently also a complete surprise to the AMD Linux driver team: vastly improved GPU performance on the 3.12 kernel.  In tests on a dozen cards, ranging from the elderly HD 3850 to the HD 6950, all cards showed at least some improvement, and in some cases increases of 50%.  Keep your eyes tuned for updates as Phoronix works with AMD's Linux team to track down where these performance increases originated, and as they get their hands on new hardware for more testing.  Check out the complete review here.

AMD-Catalyst-Linux.jpg

"Over the weekend I released benchmarks showing the Linux 3.12 kernel bringing big AMD Radeon performance improvements. Those benchmarks of a Radeon HD 4000 series GPU showed the Linux 3.12 kernel bringing major performance improvements over Linux 3.11 and prior. Some games improved with just low double-digit gains while other Linux games were nearly 90% faster! Interestingly, the AMD Radeon Linux developers were even surprised by these findings. After carrying out additional tests throughout the weekend, I can confirm these truly incredible performance improvements on other hardware. In this article are results from ten different AMD Radeon graphics cards."

Here is some more Tech News from around the web:

Tech Talk

Source: Phoronix

Microsoft Confirms AMD Mantle Not Compatible with Xbox One

Subject: Graphics Cards | October 14, 2013 - 05:52 PM |
Tagged: xbox one, microsoft, Mantle, dx11, amd

Microsoft posted a new blog on its Windows site that discusses some of the new features of the latest DirectX on Windows 8.1 and the upcoming Xbox One.  Of particular interest was a line that confirms what I have said all along about the much-hyped AMD Mantle low-level API: it is not compatible with Xbox One.

We are very excited that with the launch of Xbox One, we can now bring the latest generation of Direct3D 11 to console. The Xbox One graphics API is “Direct3D 11.x” and the Xbox One hardware provides a superset of Direct3D 11.2 functionality. Other graphics APIs such as OpenGL and AMD’s Mantle are not available on Xbox One.

mantle.jpg

What does this mean for AMD?  Nothing really changes except some of the common online discussion about how easy it would now be for developers to convert games built for the console to the AMD-specific Mantle API.  AMD claims that Mantle offers a significant performance advantage over DirectX and OpenGL by giving developers that choose to implement support for it closer access to the hardware without much of the software overhead found in other APIs.

Josh summed it up in a recent editorial.

This is what Mantle does.  It bypasses DirectX (and possibly the hardware abstraction layer) and developers can program very close to the metal with very little overhead from software.  This lowers memory and CPU usage, it decreases latency, and because there are fewer “moving parts” AMD claims that they can do 9x the draw calls with Mantle as compared to DirectX.  This is a significant boost in overall efficiency.  Before everyone gets too excited, we will not see a 9x improvement in overall performance with every application.  A single HD 7790 running in Mantle is not going to power 3 x 1080P monitors in Eyefinity faster than a HD 7970 or GTX 780 (in Surround) running in DirectX.  Mantle shifts the bottleneck elsewhere.
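The 9x draw call claim is really a statement about per-call CPU overhead: if the API layer burns roughly one ninth of the CPU time on each call, the same per-frame CPU budget fits roughly nine times as many calls. A toy sketch (the per-call cost below is a made-up number; only the ratio matters):

    # Toy model: how many draw calls fit in a fixed per-frame CPU budget.
    frame_budget_ms = 10.0           # CPU time available for submitting draw calls
    dx_cost_per_call_ms = 0.009      # hypothetical DirectX per-call overhead
    mantle_cost_per_call_ms = dx_cost_per_call_ms / 9  # AMD's claimed ~9x reduction

    print(int(frame_budget_ms / dx_cost_per_call_ms))      # ~1111 calls per frame
    print(int(frame_budget_ms / mantle_cost_per_call_ms))  # ~10000 calls per frame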

I still believe that AMD Mantle could bring interesting benefits to AMD Radeon graphics cards on the PC, but I think this official statement from Microsoft will dampen some of the over-excitement.

Also worth noting is this comment about the DX11 implementation on the Xbox One:

With Xbox One we have also made significant enhancements to the implementation of Direct3D 11, especially in the area of runtime overhead. The result is a very streamlined, “close to metal” level of runtime performance. In conjunction with the third generation PIX performance tool for Xbox One, developers can use Direct3D 11 to unlock the full performance potential of the console.

So while Windows and the upcoming Xbox One will share an API, there will still be performance advantages for games on the console thanks to the nature of a static hardware configuration.

Source: Microsoft