Subject: General Tech, Graphics Cards | October 28, 2013 - 10:21 PM | Scott Michaud
Tagged: R9 290X, amd
Hawaii launches and AMD sells its inventory (all of it, in many cases). The Radeon R9 290X brings Titan-approaching performance to the $550-600 USD price point. Near and dear to our website, AMD also took the opportunity to address many of the CrossFire and Eyefinity frame pacing issues.
Nitroware also took a look at the card... from a distance, because they did not receive a review unit. Their analysis was based on concepts, such as revisions to AMD's design over the life of the Graphics Core Next architecture. The discussion goes back to the ATI Rage series of fixed-function hardware and ends with a comparison between the Radeon HD 7900 "Tahiti" and the R9 290X "Hawaii".
Our international viewers (or even curious North Americans) might also like to check out the work Dominic undertook compiling regional pricing and comparing those values to currency conversion data. There is more to an overview (or review) than benchmarks.
Subject: Graphics Cards | October 24, 2013 - 02:38 PM | Jeremy Hellstrom
Tagged: radeon, R9 290X, kepler, hawaii, amd
If you didn't stay up to watch our live release of the R9 290X after the podcast last night, you missed a chance to have your questions answered, but you will be able to watch the recording later on. The R9 290X arrived today, bringing 4K and CrossFire reviews as well as single-GPU testing on many a site, including PCPer of course. You don't have to take just our word for it; [H]ard|OCP was also putting together a review of AMD's Titan killer. Their benchmarks included some games we haven't adopted yet, such as ARMA III. Check out their results and compare them to ours; AMD really has a winner here.
"AMD is launching the Radeon R9 290X today. The R9 290X represents AMD's fastest single-GPU video card ever produced. It is priced to be less expensive than the GeForce GTX 780, but packs a punch on the level of GTX TITAN. We look at performance, the two BIOS mode options, and even some 4K gaming."
Here are some more Graphics Card articles from around the web:
- AMD's Radeon R9 290X graphics card @ The Tech Report
- AMD Radeon R9 290X 4GB Video Card Review @ Legit Reviews
- AMD Radeon R9 290X @ Hardware.info
- 4K Gaming Showdown - AMD R9 290X & R9 280X Vs Nvidia GTX Titan & GTX 780 @ eTeknix
- AMD Radeon R9 290X 4GB @ eTeknix
- AMD Radeon R9 290X @ Legit Reviews
- AMD Radeon R9 290X 4GB Review @ Hardware Canucks
- AMD Radeon R9 290X CrossFire @ techPowerUp
- AMD Radeon R9 290X @ Techspot
- AMD R9 290X @ Kitguru
- AMD Radeon R9 290X 4 GB @ techPowerUp
A bit of a surprise
Okay, let's cut to the chase here: it's late, we are rushing to get our articles out, and I think you all would rather see our testing results NOW rather than LATER. The first thing you should do is read my review of the AMD Radeon R9 290X 4GB Hawaii graphics card which goes over the new architecture, new feature set, and performance in single card configurations.
Then, you should continue reading below to find out how the new XDMA, bridge-less CrossFire implementation actually works in both single panel and 4K (tiled) configurations.
A New CrossFire For a New Generation
CrossFire has caused a lot of problems for AMD in recent months (and a lot of problems for me as well). But AMD continues to make strides in correcting the frame pacing issues associated with CrossFire configurations, and the new R9 290X raises the bar.
Without the CrossFire bridge connector on the 290X, all of the CrossFire communication and data transfer occurs over the PCI Express bus that connects the cards to the entire system. AMD claims that this new XDMA interface was designed for Eyefinity and UltraHD resolutions (which were the subject of our most recent article on the subject). By accessing the memory of the GPU through PCIe AMD claims that it can alleviate the bandwidth and sync issues that were causing problems with Eyefinity and tiled 4K displays.
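To put AMD's bandwidth argument in perspective, here is a rough back-of-the-envelope sketch. The PCIe 3.0 x16 figures follow from the published link rate and encoding; the idea is simply that the bus has ample headroom to pass whole frames between GPUs, which the old bridge connector did not.

```python
# Rough bandwidth sanity check: what PCIe 3.0 x16 offers the XDMA engine.
# PCIe 3.0 runs at 8 GT/s per lane with 128b/130b encoding.
GT_PER_S = 8.0
ENCODING = 128.0 / 130.0
LANES = 16

pcie3_x16_gbps = GT_PER_S * ENCODING * LANES   # payload gigabits/s
pcie3_x16_gBps = pcie3_x16_gbps / 8            # payload gigabytes/s

# A 4K (3840 x 2160) frame at 32 bits per pixel:
frame_bytes = 3840 * 2160 * 4
frames_per_s = (pcie3_x16_gBps * 1e9) / frame_bytes

print(f"PCIe 3.0 x16 payload bandwidth: {pcie3_x16_gBps:.2f} GB/s")
print(f"4K frames transferable per second: {frames_per_s:.0f}")
```

That works out to roughly 15.75 GB/s, or several hundred full 4K frames per second, which is why routing CrossFire traffic over the bus is plausible for Eyefinity and tiled 4K where the dedicated bridge ran out of headroom.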
Even better, this updated version of CrossFire is said to be compatible with the frame pacing updates to the Catalyst driver, improving the multi-GPU experience for end users.
When an extra R9 290X accidentally fell into my lap, I decided to take it for a spin. And if you have followed my graphics testing methodology in the past year, then you'll understand the importance of these tests.
A slightly new architecture
Last month AMD brought media, analysts, and customers out to Hawaii to talk about a new graphics chip coming out this year. As you might have guessed based on the location: the code name for this GPU was in fact, Hawaii. It was targeted at the high end of the discrete graphics market to take on the likes of the GTX 780 and GTX TITAN from NVIDIA.
Earlier this month we reviewed the AMD Radeon R9 280X, R9 270X, and the R7 260X. None of these were based on that new GPU. Instead, these cards were all rebrands and repositionings of existing hardware in the market (albeit at reduced prices). Those lower prices made the R9 280X one of our favorite GPUs of the moment, as it offers a price-to-performance ratio currently unmatched by NVIDIA.
But today is a little different, today we are talking about a much more expensive product that has to live up to some pretty lofty goals and ambitions set forward by the AMD PR and marketing machine. At $549 MSRP, the new AMD Radeon R9 290X will become the flagship of the Radeon brand. The question is: to where does that ship sail?
The AMD Hawaii Architecture
To be quite upfront about it, the Hawaii design is very similar to that of the Tahiti GPU from the Radeon HD 7970 and R9 280X cards. Based on the same GCN (Graphics Core Next) architecture AMD assured us would be its long term vision, Hawaii ups the ante in a few key areas while maintaining the same core.
Hawaii is built around Shader Engines, of which the R9 290X has four. Each of these includes 11 CUs (compute units), which in turn hold 4 SIMD arrays each. Doing the quick math brings us to a total stream processor count of 2,816 on the R9 290X.
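The quick math above can be made explicit. One detail is assumed from the standard GCN design rather than stated in the text: each SIMD array is 16 lanes wide, so a CU contributes 64 stream processors.

```python
# R9 290X "Hawaii" shader count, following the GCN hierarchy described above.
shader_engines = 4
cus_per_engine = 11
simds_per_cu = 4
lanes_per_simd = 16   # standard GCN SIMD width (assumed, not stated above)

stream_processors = shader_engines * cus_per_engine * simds_per_cu * lanes_per_simd
print(stream_processors)  # 2816
```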
Subject: General Tech, Graphics Cards | October 23, 2013 - 07:30 PM | Scott Michaud
Tagged: amd, firepro
Currently AMD holds 18% market share with their FirePro line of professional GPUs. This compares to NVIDIA, which owns 81% with Quadro. I assume the "other" category is the sum of S3 and Matrox who, together, command the remaining 1% of the professional market (just the professional market).
According to Jon Peddie of JPR, as reported by X-Bit Labs, AMD intends to wrestle back revenue it had left unguarded for NVIDIA. "After years of neglect, AMD’s workstation group, under the tutorage of Matt Skyner, has the backing and commitment of top management and AMD intends to push into the market aggressively." They have already gained share this year.
During AMD's 3rd Quarter (2013) earnings call, CEO Rory Read outlined the importance of the professional graphics market.
We also continue to make steady progress in another of our growth businesses in the third quarter as we delivered our fifth consecutive quarter of revenue and share growth in the professional graphics area. We believe that we can continue to gain share in this lucrative part of the GPU market based on our product portfolio, design wins in flight, and enhanced channel programs.
On the same conference call (actually before and after the professional graphics sound bite), Rory noted their renewed push into the server and embedded SoC markets with 64-bit x86 and 64-bit ARM processors. They will be the only company manufacturing both x86 and ARM solutions which should be an interesting proposition for an enterprise in need of both. Why deal with two vendors?
Either way, AMD will probably be refocusing on the professional and enterprise markets for the near future. For the rest of us, this hopefully means that AMD has a stable (and confident) roadmap in the processor and gaming markets. If that is the case, a profitable Q3 is definitely a good start.
Subject: Graphics Cards | October 23, 2013 - 06:20 PM | Jeremy Hellstrom
Tagged: amd, overclocking, asus, ASUS R9 280X DirectCU II TOP, r9 280x
Having already seen what the ASUS R9 280X DirectCU II TOP can do at default speeds, the obvious next step, once they had time to fully explore the options, was for [H]ard|OCP to see just how far this GPU can overclock. To make a long story short, they went from a default clock of 1070MHz up to 1230MHz and pushed the RAM from 6.4GHz to 6.6GHz, though the voltage needed to be bumped from 1.2v to 1.3v. The actual frequencies are nowhere near as important as the effect on gameplay, though; to see those results you will have to click through to the full article.
"We take the new ASUS R9 280X DirectCU II TOP video card and find out how high it will overclock with GPU Tweak and voltage modification. We will compare performance to an overclocked GeForce GTX 770 and find out which card comes out on top when pushed to its overclocking limits."
Here are some more Graphics Card articles from around the web:
- Gigabyte R9 280X OC 3 GB @ techPowerUp
- HIS R9 280X iPower IceQ X2 Turbo Boost Clock 3GB Video Card Review @ Madshrimps
- AMD Radeon R9 290X Versus NVIDIA GeForce GTX 780 Benchmarks @ Legit Reviews
- XFX R9 280X Black OC Edition @ Kitguru
- AMD Radeon R9 280X Video Card Review w/ ASUS, XFX and MSI @ Legit Reviews
- HIS R9 280X iPower IceQ X² Turbo and R9 270X IceQ X² Turbo @ Legion Hardware
- Sapphire Radeon R9 270X Vapor-X @ Benchmark Reviews
- MSI R9 270X Hawk Review @ OCC
- Asus Matrix R9 280X Platinum @ LanOC Reviews
- ASUS R9 280X DirectCU II TOP 3 GB @ techPowerUp
- HIS Radeon R9 280X IceQ X2 @ Benchmark Reviews
- Sapphire Toxic Edition R9 270X Video Card Review @HiTech Legion
- MSI Radeon R9 270X Gaming Video Card Review @ Ninjalane
- Sapphire Toxic R9 270X @ LanOC Review
- AMD Radeon 7000 and Radeon R200 Series Mixed CrossFire Testing @ Legit Reviews
- AMD Radeon R9 270X Graphics Card Review @ Techgage
- MSI R9 270X HAWK 2 GB @ techPowerUp
- HIS Radeon R9 270X IceQ X2 Turbo Boost @ Benchmark Reviews
- Asus R9 270X DirectCU II Top @ LanOC Reviews
- Diamond Multimedia Radeon 7870 7870PE52GV Review @ HCW
- AMD Radeon R9 270X On Linux @ Phoronix
- ASUS GTX760 DirectCU Mini OC @ Hardware.info
- Gigabyte GeForce GTX 770 OC / GTX 780 OC @ Hardware.info
- MSI N660 Gaming Review: affordable and silent GeForce GTX 660 @ Hardware.info
- Asus GTX 670 Direct CU Mini @ LanOC Reviews
- NVIDIA GeForce GTX 650 On Linux @ Phoronix
Subject: General Tech | October 23, 2013 - 01:53 PM | Jeremy Hellstrom
Tagged: amd, hUMA
hUMA's currently best-known trick, a shared memory space which both the CPU and GPU can access without penalty, is only the first of its revealed optimizations. The Register talks today about another way in which this new architecture gives the CPU and GPU equal treatment: standardized task queues and dispatch packets, which avoid dealing with a kernel-level driver to assign tasks. With hUMA, the GPU is able to schedule tasks for the CPU directly. That would allow any application designed to hUMA standards to have its various tasks assigned to the proper processor without needing extra coding. This not only makes it cheaper and quicker to design apps but would allow all hUMA apps to take advantage of the specialized abilities of both the CPU and GPU at no extra cost.
"The upcoming chips will utilise a technique AMD calls Heterogeneous Queuing (hQ). This new approach puts the GPU on an equal footing with the CPU: no longer will the graphics engine have to wait for the central processor to tell it what to do."
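To illustrate the idea in the abstract, here is a minimal, hypothetical sketch of user-mode queuing: both processors create and consume standardized dispatch packets from shared queues, with no kernel-mode driver round trip to assign work. Every name below is invented for illustration; this is not AMD's actual hQ or HSA API.

```python
from collections import deque
from dataclasses import dataclass

@dataclass
class DispatchPacket:
    """A standardized unit of work; either processor can create or consume one."""
    kernel: str   # name of the routine to run
    args: tuple   # its arguments
    target: str   # "cpu" or "gpu": which processor should run it

class UserModeQueue:
    """Shared, user-mode work queue: no kernel driver call needed to enqueue."""
    def __init__(self):
        self.packets = deque()

    def enqueue(self, packet: DispatchPacket):
        self.packets.append(packet)

    def dequeue(self):
        return self.packets.popleft() if self.packets else None

# In the hQ model, the GPU can enqueue work for the CPU just as easily
# as the CPU enqueues work for the GPU:
cpu_queue = UserModeQueue()
gpu_queue = UserModeQueue()

# The CPU hands the GPU a parallel task...
gpu_queue.enqueue(DispatchPacket("vector_add", (1024,), "gpu"))
# ...and the GPU hands the CPU a serial follow-up, with no driver in the loop.
cpu_queue.enqueue(DispatchPacket("reduce_results", (), "cpu"))

print(gpu_queue.dequeue().kernel)  # vector_add
print(cpu_queue.dequeue().kernel)  # reduce_results
```

The point of the sketch is the symmetry: neither queue is privileged, so a task graph can bounce between processors without the latency of kernel-mode arbitration.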
Here is some more Tech News from around the web:
- iPad Air vs iPad 4 specs comparison @ The Inquirer
- Firefox's Blocked-By-Default Java Isn't Going Down Well @ Slashdot
- Microsoft's Surface Pro 2 is harder to repair than an iPad @ The Inquirer
- ASUS talks Rampage IV Black Edition, next-gen video cards and cooling technology @ Hardware.info
- D-Link hole-prober finds 'backdoor' in Chinese wireless routers @ The Register
- be quiet! WorldWide Joint Giveaway - Win one Power Zone 1000W PSU, one Shadow Rock 2 CPU Cooler and two Silent Wings 140mm Fans
The Really Good Times are Over
We really do not realize how good we had it. Sure, we could apply that to budget surpluses and the time before the rise of global terrorism, but in this case I am talking about the predictable advancement of graphics due to both design expertise and improvements in process technology. Moore’s law has been exceptionally kind to graphics. When we look back and plot the course of these graphics companies, they have actually outstripped Moore in terms of transistor density from generation to generation. Most of this is due to better tools and the expertise gained in what is still a fairly new endeavor as compared to CPUs (the first true 3D accelerators were released in the 1993/94 timeframe).
The complexity of a modern 3D chip is truly mind-boggling. To get a good idea of where we came from, we must look back at the first generations of products that we could actually purchase. The original 3Dfx Voodoo Graphics comprised a raster chip and a texture chip, each containing approximately 1 million transistors (give or take) and made on a then-available .5 micron process (we shall call it 500 nm from here on out to give a sense of perspective with modern process technology). The chips were clocked between 47 and 50 MHz (though often could be clocked up to 57 MHz by going into the init file and putting in “SET SST_GRXCLK=57”… btw, SST stood for Sellers/Smith/Tarolli, the founders of 3Dfx). This revolutionary graphics card could push out 47 to 50 megapixels per second, had 4 MB of VRAM, and was released in the beginning of 1996.
My first 3D graphics card was the Orchid Righteous 3D. Voodoo Graphics was really the first successful consumer 3D graphics card. Yes, there were others before it, but Voodoo Graphics had the largest impact of them all.
In 1998 3Dfx released the Voodoo 2, and it was a significant jump in complexity from the original. These chips were fabricated on a 350 nm process. There were three chips to each card: one raster chip and two texture chips. At the top end of the product stack were the 12 MB cards. The raster chip had 4 MB of VRAM available to it, while each texture chip had 4 MB of VRAM for texture storage. Not only did this product double the performance of the Voodoo Graphics, it was able to run in single card configurations at 800x600 (as compared to the max 640x480 of the Voodoo Graphics). This was around the same time that NVIDIA started to become a very aggressive competitor with the Riva TNT and ATI was about to ship the Rage 128.
Subject: Graphics Cards | October 18, 2013 - 07:55 PM | Ryan Shrout
Tagged: video, tim sweeney, nvidia, Mantle, john carmack, johan andersson, g-sync, amd
If you weren't on our live stream from the NVIDIA "The Way It's Meant to be Played" tech day this afternoon, you missed a hell of an event. After the announcement of NVIDIA G-Sync variable refresh rate monitor technology, NVIDIA's Tony Tamasi brought one of the most intriguing panels of developers on stage to talk.
John Carmack, Tim Sweeney and Johan Andersson talked for over an hour, taking questions from the audience and even getting into debates amongst themselves in some instances. Topics included NVIDIA G-Sync of course, AMD's Mantle low-level API, the hurdles facing PC gaming, and the direction each luminary is currently taking for future development.
If you are a PC enthusiast or gamer you are definitely going to want to listen and watch the video below!
Subject: General Tech, Graphics Cards | October 17, 2013 - 04:37 PM | Scott Michaud
Tagged: radeon, R9 290X, amd
The NDA on AMD R9 290X benchmarks has not yet lifted, but AMD was in Montreal to provide two previews: BioShock Infinite and Tomb Raider, both at 4K (3840 x 2160). Keep in mind, these scores are provided by AMD and definitely do not represent results from our monitor-capture solution. Expect more detailed results from us, later, as we do some Frame Rating.
The test machine used in both setups contains:
- Intel Core i7-3960X at 3.3 GHz
- MSI X79A-GD65
- 16GB of DDR3-1600
- Windows 7 SP1 64-bit
- NVIDIA GeForce GTX 780 (331.40 drivers) / AMD Radeon R9 290X (13.11 beta drivers)
The R9 290X is configured in its "Quiet Mode" during both benchmarks. This is particularly interesting, to me, as I was unaware of such a feature (it has been a while since I last used a desktop AMD/ATi card). I would assume this is a fan and power profile to keep noise levels as silent as possible for some period of time. A quick Google search suggests this feature is new with the Radeon Rx200-series cards.
BioShock Infinite is quite demanding at 4K with ultra quality settings. Both cards maintain an average framerate above 30FPS.
AMD R9 290X "Quiet Mode": 44.25 FPS
NVIDIA GeForce GTX 780: 37.67 FPS
(Update 1: 4:44pm EST) AMD confirmed TressFX is disabled in these benchmark scores. It is, however, enabled if you are present in Montreal to see the booth. (end of update 1)
Tomb Raider is also a little harsh at those resolutions. Unfortunately, the results are ambiguous as to whether TressFX was enabled throughout the benchmarks. The summary explicitly claims TressFX is enabled, while the string of settings contains "Tressfx=off". Clearly, one of the two entries is a typo. We are currently trying to get clarification. In the meantime:
AMD R9 290X "Quiet Mode": 40.2 FPS
NVIDIA GeForce GTX 780: 34.5 FPS
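For perspective, a quick calculation of the 290X's lead using the AMD-supplied averages above (vendor-provided figures, so treat them accordingly):

```python
# Relative lead of the R9 290X ("Quiet Mode") over the GTX 780,
# using the AMD-supplied 4K averages quoted above.
results = {
    "BioShock Infinite": (44.25, 37.67),
    "Tomb Raider": (40.2, 34.5),
}

for game, (r290x, gtx780) in results.items():
    lead = (r290x / gtx780 - 1) * 100
    print(f"{game}: R9 290X leads by {lead:.1f}%")
```

That works out to a lead of roughly 16-18% in both titles.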
Notice that neither of these results is compared against a GeForce GTX TITAN. Recent leaks suggest a retail price for AMD's flagship card in the low-$700 range. The GeForce GTX 780, on the other hand, resides at the $650-700 USD price point.
It seems pretty clear, to me, that cost drove this comparison rather than performance.