Hybrid CrossFire that actually works
The road to redemption for AMD and its driver team has been a tough one. Since we first started to reveal the significant issues with AMD's CrossFire technology back in January of 2013, the Catalyst driver team has been hard at work on a fix, though I will freely admit it took longer to convince them the issue was real than I would have liked. We saw the first steps of that fix in August of 2013 with the release of the Catalyst 13.8 beta driver. It supported DX11 and DX10 games at resolutions of 2560x1600 and under (no Eyefinity support) but was obviously still less than perfect.
In October, with the release of AMD's latest Hawaii GPU, the company took another step by reorganizing the internal architecture of CrossFire at the chip level with XDMA. The result was frame pacing that worked on the R9 290X and R9 290 at all resolutions, including Eyefinity, though it still left out older DX9 titles.
One thing that had not been addressed, at least not until today, was the set of issues surrounding AMD's Hybrid CrossFire technology, now known as Dual Graphics. This is the ability of an AMD APU with integrated Radeon graphics to pair with a low-cost discrete GPU to improve graphics performance and the gaming experience. Recently the team over at Tom's Hardware discovered that Dual Graphics suffered from the exact same scaling issues as standard CrossFire: frame rates in FRAPS looked good, but the perceived frame rate was actually much lower.
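To illustrate how a FRAPS average can look healthy while the experience is not, here is a small sketch. The frame times below are made up for illustration (not measured from any of these cards): a fast "runt" frame alternating with a slow frame inflates the average FPS, while the effective delivery rate is set by the slow frames.

```python
# Illustrative only: hypothetical frame times (ms) from a poorly paced
# dual-GPU setup -- a fast "runt" frame alternating with a slow frame.
frame_times_ms = [5.0, 28.3] * 30  # 60 frames rendered

total_s = sum(frame_times_ms) / 1000.0       # ~1 second of wall time
fraps_fps = len(frame_times_ms) / total_s    # counts every frame: ~60 FPS

# If each runt frame is on screen too briefly to matter, the perceived
# rate is closer to the count of "real" (slow) frames per second.
perceived_fps = (len(frame_times_ms) / 2) / total_s  # ~30 FPS

print(f"FRAPS-style average: {fraps_fps:.1f} FPS")
print(f"Perceived (runts discounted): {perceived_fps:.1f} FPS")
```

The tool reads 60 FPS; the player effectively sees half that, which is exactly the gap our frame-rating capture method was built to expose.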
A little while ago a new driver made its way into my hands under the name of Catalyst 13.35 Beta X, a driver that promised to enable Dual Graphics frame pacing with Kaveri and R7 graphics cards. As you'll see in the coming pages, the fix definitely is working. And, as I learned after doing some more probing, the 13.35 driver is actually a much more important release than it at first seemed. Not only is Kaveri-based Dual Graphics frame pacing enabled, but Richland and Trinity are included as well. And even better, this driver will apparently fix resolutions higher than 2560x1600 in desktop graphics as well - something you can be sure we are checking on this week!
Just as we saw with the first implementation of Frame Pacing in the Catalyst Control Center, with the 13.35 Beta we are using today you'll find a new set of options in the Gaming section to enable or disable Frame Pacing. The default setting is On, which makes me smile inside every time I see it.
The hardware we are using is the same basic setup from my initial review of the AMD Kaveri A8-7600 APU. That includes the A8-7600 APU, an Asrock A88X mini-ITX motherboard, 16GB of DDR3 2133 MHz memory and a Samsung 840 Pro SSD. Of course, for our testing this time we needed a discrete card to enable Dual Graphics, and we chose the MSI R7 250 OC Edition with 2GB of DDR3 memory. This card will run you an additional $89 or so on Amazon.com. You could use either the DDR3 or GDDR5 versions of the R7 250, as well as the R7 240, but in our talks with AMD they seemed to think the R7 250 DDR3 was the sweet spot for the CrossFire implementation.
Both the R7 250 and the A8-7600 actually share the same shader count: 384 stream processors, or 6 Compute Units under the new nomenclature AMD is creating. However, the MSI card is clocked at 1100 MHz while the GPU portion of the A8-7600 APU runs at only 720 MHz.
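As a back-of-the-envelope sanity check (my arithmetic, not AMD's published figures), the standard GCN math of 64 stream processors per Compute Unit and two FLOPs (a fused multiply-add) per shader per clock shows how much the clock gap separates the two otherwise identical GPUs:

```python
# Rough GCN throughput math: 64 shaders per CU, 2 FLOPs per shader per clock.
SHADERS_PER_CU = 64
FLOPS_PER_SHADER_PER_CLOCK = 2

def gflops(compute_units: int, clock_ghz: float) -> float:
    """Theoretical peak single-precision throughput in GFLOPS."""
    shaders = compute_units * SHADERS_PER_CU
    return shaders * FLOPS_PER_SHADER_PER_CLOCK * clock_ghz

print(f"MSI R7 250 @ 1100 MHz: {gflops(6, 1.10):.1f} GFLOPS")  # ~845
print(f"A8-7600 GPU @ 720 MHz: {gflops(6, 0.72):.1f} GFLOPS")  # ~553
```

Same silicon layout, but the discrete card brings roughly 50% more theoretical throughput on clock speed alone, which is part of why AMD steers Dual Graphics pairings toward closely matched parts.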
So the question is, has AMD truly fixed the issues with frame pacing with Dual Graphics configurations, once again making the budget gamer feature something worth recommending? Let's find out!
Subject: General Tech, Graphics Cards | January 28, 2014 - 07:00 PM | Scott Michaud
Tagged: Mantle, BF4, amd
A number of sites have reported on Toshiba's leak of the Catalyst 13.35 BETA driver, whose rumored changelog highlights Mantle and TrueAudio support. Apparently Ryan picked it up, checked it out, and found that it does not include the necessary DLLs. I do not think he has actual Mantle software to test against, and I am not sure how he knew which libraries Mantle requires, but this package apparently does not include them. Perhaps it was an incomplete build?
Sorry folks, unlike the above image, these are not the drivers you are looking for.
The real package should be coming soon, however. Recent stories which reference EA tech support (at this point we should all know better) claim that the Mantle update for Battlefield 4 will be delayed until February. Fans reached out to AMD's Robert Hallock, who responded that the claim was "categorically not true." It sounds like AMD is planning to release at least its end of the patch before Friday ends.
This is looking promising, at least. Something is being done behind the scenes.
Subject: Graphics Cards | January 23, 2014 - 06:01 PM | Jeremy Hellstrom
Tagged: amd, asus, R9 290X DC2 OC, overclocking
[H]ard|OCP has had a chance to take the time to really see how well the R9 290X can overclock; since frequencies drop as heat increases, a quick gaming session is not enough to truly represent the performance of this new GPU. The ASUS R9 290X DirectCU II OC offers a custom cooler which demonstrated the overclocking potential of this GPU on air cooling, or at least of this specific sample, as we have seen solid evidence of performance variability with 28nm Hawaii GPUs. You should read the full review to truly understand what they saw, but the good news is that once they found a sweet spot for fan speed and voltage, the GPU remained at the frequency they chose. Unfortunately, the 1115MHz overclock they managed was only 75MHz higher than the card's default speed, and while that could beat a stock GTX 780 Ti, the NVIDIA product overclocked higher and proved the superior card.
"We will take the ASUS R9 290X DC2 OC custom AMD R9 290X based video card and for the first time see how well the 290X can overclock. We will also for the first time compare it to an overclocked GeForce GTX 780 Ti video card head-to-head and see who wins when overclocking is accounted for."
Here are some more Graphics Card articles from around the web:
- Sapphire R9 290 4GB TRI-X OC Review @ Hardware Canucks
- HIS R9 270X IceQ X² Turbo Boost 2GB @ eTeknix
- Sapphire R9 290 Tri-X 4GB @ eTeknix
- Powercolor R9 280X TurboDuo 3GB @ eTeknix
- ASUS R9 290X DirectCU II OC 4 GB @ techPowerUp
- Gigabyte AMD Radeon R9 290X WF OC Video Card Review @ Madshrimps
- Sapphire Radeon R7 260X OC Review @ TechwareLabs
- EVGA GTX 780 Ti Classified 3072 MB @ techPowerUp
- Gigabyte GTX 780 Ti GHZ Edition Review! @ Bjorn3D
Subject: General Tech, Graphics Cards | January 23, 2014 - 03:29 AM | Scott Michaud
Tagged: ShadowPlay, nvidia, geforce experience
NVIDIA has been upgrading their GeForce Experience just about once per month, on average. Most of their attention has been focused on ShadowPlay which is their video capture and streaming service for games based on DirectX. GeForce Experience 1.8.1 brought streaming to Twitch and the ability to overlay the user's webcam.
Until this version, users could choose between "Low", "Medium", and "High" quality settings. GeForce Experience 1.8.2 adds "Custom", which allows manual control over resolution, frame rate, and bit rate. NVIDIA wants to make it clear: frame rate controls the number of images per second and bit rate controls the file size per second. Reducing the frame rate without adjusting the bit rate will result in a file of the same size (just with better quality per frame).
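The relationship NVIDIA is describing boils down to this: file size depends on bit rate and duration only, while frame rate just divides that bit budget across more or fewer images. A quick sketch (the numbers are arbitrary examples, not NVIDIA's defaults):

```python
def file_size_mb(bitrate_mbps: float, duration_s: float) -> float:
    """File size in megabytes: bit rate * time; frame rate plays no part."""
    return bitrate_mbps * duration_s / 8.0

def bits_per_frame(bitrate_mbps: float, fps: float) -> float:
    """Lowering fps at a fixed bit rate leaves more bits for each frame."""
    return bitrate_mbps * 1_000_000 / fps

print(file_size_mb(10, 60))    # 10 Mbps for 60 s -> 75.0 MB at any frame rate
print(bits_per_frame(10, 60))  # ~167k bits per frame at 60 fps
print(bits_per_frame(10, 30))  # ~333k bits per frame at 30 fps
```

Halving the frame rate at the same bit rate doubles the bits available per frame, which is exactly why NVIDIA says the lower-fps file is the same size but looks better frame for frame.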
Also with this update, NVIDIA allows users to set a push-to-talk key. I expect this will be mostly useful for Twitch streaming in a crowded dorm or household. Only transmitting your voice when you have something to say prevents someone else from accidentally transmitting theirs globally and instantaneously.
GeForce Experience 1.8.2 is available for download at the GeForce website. Users with a Fermi-based GPU will no longer be pushed GeForce Experience (because it really does not do anything for those graphics cards). The latest version can always be manually downloaded, however.
Subject: General Tech, Graphics Cards, Processors | January 22, 2014 - 09:41 PM | Scott Michaud
AMD had a decent quarter and came close to a profitable year as a whole. For the quarter ending on December 28th, the company managed $89 million in net income, after interest payments on loans and everything else. The full year worked out to $103 million in operating income, although the bottom line was still a $74 million net loss all things considered. That said: a quarterly net gain of $89 million versus an annual net loss of $74 million. One more quarter like this would have forgiven the whole year.
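For those keeping score, the implied arithmetic (my breakdown from the figures above, assuming the quarterly and annual numbers are on the same net-income basis) is that the first three quarters combined for roughly a $163 million net loss, which the Q4 gain pulled back to the $74 million annual figure:

```python
# Figures in millions of USD, from AMD's reported results above.
q4_net = 89       # Q4 net income
year_net = -74    # full-year net result

# The first three quarters must account for the rest of the annual total.
first_three_quarters = year_net - q4_net
print(first_three_quarters)  # -163: combined net loss of Q1-Q3

assert first_three_quarters + q4_net == year_net
```

That is what "one more quarter would forgive the whole year" means: a second $89 million quarter would have outweighed the $74 million annual deficit.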
This is a hefty turn-around from their billion dollar operating loss of last year.
This gain was led by Graphics and Visual Solutions. While Computing Solutions revenue has declined, the graphics team has steadily increased both revenue and profits. Graphics and Visual Solutions covers graphics processors as well as revenue from the game console manufacturers. Even then, the processor division is floating just below profitability.
Probably the best news for AMD is that they expect each of the next four quarters to be profitable. Hopefully this means there are no foreseen hurdles in the middle of their marathon.
Subject: Editorial, General Tech, Graphics Cards | January 22, 2014 - 02:12 AM | Scott Michaud
Tagged: linux, intel hd graphics, haswell
Looking through this post by Phoronix, it would seem that Intel had a significant performance regression on Ubuntu 14.04 with the Linux 3.13 kernel. In some tests, the HD 4600 only achieves about half of the performance recorded on the HD 4000. I have not been following Linux iGPU drivers and it is probably a bit late to do any form of in-depth analysis... but yolo. I think the article actually made a pretty big mistake and came to the exact wrong conclusion.
Let's do this!
According to the article, in Xonotic v0.7, Ivy Bridge's Intel HD 4000 scores 176.23 FPS at 1080p on low quality settings. Compared to Haswell's HD 4600 and its 124.45 FPS result, this seems bad. However, even though they claim this is a performance regression, they never actually post the earlier (and supposedly faster) benchmarks.
So I dug one up.
Back in October, the same test was performed with the same hardware. The Intel HD 4600 was not significantly faster back then; it was actually a bit slower, with a score of 123.84 FPS. The Intel HD 4000 managed 102.68 FPS. Haswell did not regress between that time and Ubuntu 14.04 on Linux 3.13; rather, Ivy Bridge received a 71.63% increase over the same span.
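That percentage is easy to verify from the two published HD 4000 scores (a quick check using only the numbers quoted in this post):

```python
old_fps = 102.68   # Intel HD 4000, October snapshot
new_fps = 176.23   # Intel HD 4000, Ubuntu 14.04 / Linux 3.13

# Percent change relative to the older score.
increase_pct = (new_fps - old_fps) / old_fps * 100
print(f"{increase_pct:.2f}% increase")   # 71.63% increase
```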
Of course, there could have been a performance increase between October and now and that recently regressed for Haswell... but I could not find those benchmarks. All I can see is that Haswell has been quite steady since October. Either way, that is a significant performance increase on Ivy Bridge since that snapshot in time, even if Haswell had a rise-and-fall that I was unaware of.
Subject: Graphics Cards | January 21, 2014 - 03:49 PM | Ryan Shrout
Tagged: rumor, nvidia, kepler, gtx titan black, gtx titan, gtx 790
How about some fresh graphics card rumors for your Tuesday afternoon? The folks at VideoCardz.com have collected some information about two potential NVIDIA GeForce cards that are going to hit your pocketbook hard. If you thought the mid-range GPU market was crowded, wait until you see the changes NVIDIA might have in store for the high-end.
First up is the NVIDIA GeForce GTX TITAN Black Edition, a card that will actually have the same specifications as the GTX 780 Ti but with full performance double precision floating point and a move from 3GB to 6GB of memory. The all-black version of the GeForce GTX 700-series cooler is particularly awesome looking.
Image from VideoCardz.com
The new TITAN would sport the same GPU as GTX 780 Ti, only TITAN BLACK would have higher double precision computing performance, thus more FP64 CUDA cores. The GTX TITAN Black Edition is also said to feature 6GB memory buffer. This is twice as much as GTX 780 Ti, and it pretty much confirms we won’t be seeing any 6GB Ti’s.
The rest is pretty much well known, TITAN BLACK has 2880 CUDA cores, 240 TMUs and 48 ROPs.
VideoCardz.com says this will come in at $999. If true, this is a pure HPC play as the GTX 780 Ti would still offer the same gaming performance for enthusiasts.
Second, there looks to be an upcoming dual-GPU graphics card using a pair of GK110 GPUs that will be called the GeForce GTX 790. The specifications that VideoCardz.com says they have indicate that each GPU will have 2496 enabled CUDA cores and a smaller 320-bit memory interface with 5GB designated for each GPU. Cutting back on the memory interface, shader counts, and even clock speeds would allow NVIDIA to manage power consumption at the targeted 300 watt level.
Image from VideoCardz.com
Head over to VideoCardz.com for more information about these rumors but if all goes as they expect, you'll hear about these products quite a bit more in February and March.
What do you think? Are these new $1000 graphics cards something you are looking forward to?
Subject: General Tech, Graphics Cards | January 20, 2014 - 04:19 AM | Scott Michaud
Tagged: maxwell, nvidia
Well, this is somewhat unexpected (and possibly wrong). Maxwell, NVIDIA's new architecture to replace Kepler, is said to appear in February in the form of a GeForce GTX 750 Ti. The rumors, which sound iffy to me, claim that this core will be produced at TSMC on a 28nm fabrication process and later transition to the foundry's 20nm lines.
As if the 700-series family tree was not diverse enough.
2013 may have been much closer than expected.
Swedish site Sweclockers has been contacted by "sources" claiming that NVIDIA has already alerted partners to prepare a graphics card launch. Very little information is given beyond that. They do not even have access to a suggested GM1## architecture code. They just claim that partners should expect a new video card on the 18th of February (what type of launch that will be is also unclear).
This also raises questions about why the mid-range card will come before the high-end. If the 28nm rumor is true, it could just be that NVIDIA did not want to wait around until TSMC could fabricate their high-end part if they already had an architecture version that could be produced now. It could be as simple as that.
The GeForce GTX 750 Ti is rumored to arrive in February to replace the GTX 650 Ti Boost.
Subject: Editorial, General Tech, Graphics Cards, Processors, Memory, Systems | January 20, 2014 - 02:40 AM | Scott Michaud
Tagged: corsair, overclocking
I rarely overclock anything, and this is for three main reasons. The first is that I have had an unreasonably bad time with computer parts failing on their own; I did not want to tempt fate. The second is that I focused on optimizing the operating system and its running services, which was mostly important during the Windows 98, Windows XP, and Windows Vista eras. The third is that I did not find overclocking valuable enough for the performance you gained.
A game that is too hefty to run is probably not an overclock away from working.
Thankfully this never took off...
Today, overclocking is easier and safer than ever with parts that basically do it automatically and back off, on their own, if thermals are too aggressive. Several components are also much less locked down than they have been. (Has anyone, to this day, hacked the locked Barton cores?) It should not be too hard to find a SKU which encourages the enthusiast to tweak some knobs.
But how much of an increase will you see? Corsair has been blogging about using their components (along with an Intel processor, Gigabyte motherboard, and EVGA graphics card, because Corsair obviously does not make those) to overclock. The cool part is they break down performance gains in terms of raising the frequencies for just the CPU, just the GPU, just the RAM, or all of the above together. This breakdown shows how each of the three categories contributes to the whole. While none of the overclocks are dramatic, Corsair is probably proud of the 5% jump in Cinebench OpenGL performance from overclocking the RAM alone from 1600 MHz to 1866 MHz, without touching the CPU or GPU.
It is definitely worth a look.
A Refreshing Change
Refreshes are bad, right? I guess that depends on who you talk to. In the case of AMD, it is not a bad thing; for people who live for cutting-edge technology in the 3D graphics world, it is not pretty. Unfortunately for those people, reality has reared its ugly head. Process technology is slowing down, but product cycles keep moving along at a healthy pace. This essentially necessitates minor refreshes for both AMD and NVIDIA when it comes to their product stacks. NVIDIA has taken the Kepler architecture to the latest GTX 700 series of cards. AMD has done the same thing with the GCN architecture, but has radically changed the nomenclature of its products.
Gone are the days of the Radeon HD 7000 series. Instead, AMD has renamed its GCN-based product stack as the Rx 2xx series. The products we are reviewing here are the R9 280X and the R9 270X, formerly known as the HD 7970 and HD 7870, respectively. They differ slightly in clock speeds from the previous versions, but the differences are fairly minimal. What has changed is pricing: the R9 280X retails at $299 while the R9 270X comes in at $199.
Asus has taken these cards and applied its latest DirectCU II technology to them. These improvements relate to design, component choices, and cooling. They are all significant upgrades from the reference designs, especially when it comes to cooling. It is good to see such a progression in design, but it is not entirely surprising given that the first HD 7000 series debuted in January 2012.