Subject: Graphics Cards | June 12, 2014 - 06:17 PM | Ryan Shrout
Tagged: overclocking, nvidia, gtx titan z, geforce
Earlier this week I posted a review of the NVIDIA GeForce GTX Titan Z graphics card, a dual-GPU Kepler GK110 part that currently sells for $3000. If you missed that article you should read it first and catch up, but the basic summary was that, for PC gamers, it's slower than, and twice the price of, AMD's Radeon R9 295X2.
During that article though I mentioned that the Titan Z had more variable clock speeds than any other GeForce card I had tested. At the time I didn't go any further than that since the performance of the card already pointed out the deficit it had going up against the R9 295X2. However, several readers asked me to dive into overclocking with the Titan Z and with that came the need to show clock speed changes.
My overclocking was done through EVGA's PrecisionX software and we measured clock speeds with GPU-Z. The first step in overclocking an NVIDIA GPU is to simply move up the Power Target sliders and see what happens. This tells the card that it is allowed to consume more power than it would normally be allowed to, and then thanks to GPU Boost technology, the clock speed should scale up naturally.
And that is exactly what happened. I ran through 30 minutes of looped testing with Metro: Last Light at stock settings, with the Power Target at 112%, with the Power Target at 120% (the maximum setting) and then again with the Power Target at 120% and the GPU clock offset set to +75 MHz.
That 75 MHz offset was the highest setting we could get to run stable on the Titan Z, which brings the base clock up to 781 MHz and the boost clock to 951 MHz. Though, as you'll see in our frequency graphs below, the card was still reaching well above that.
This graph shows clock rates of the GK110 GPUs on the Titan Z over the course of 25 minutes of looped Metro: Last Light gaming. The green line is the stock performance of the card without any changes to the power settings or clock speeds. While it starts out well enough, hitting clock rates of around 1000 MHz, it quickly dives, and by 300 seconds of gaming we are often at or below the 800 MHz mark. That pattern holds throughout the tested period, resulting in an average clock speed of 894 MHz.
Next up is the blue line, generated by simply moving the power target from 100% to 112%, giving the GPUs a little more thermal headroom to play with. The results are impressive, with a much more consistent clock speed. The yellow line, for the power target at 120%, is even better with a tighter band of clock rates and with a higher average clock.
Finally, the red line represents the 120% power target with a +75 MHz offset in PrecisionX. There we see a clock speed consistency matching the yellow line but offset up a bit, as we have been taught to expect with NVIDIA's recent GPUs.
The result of all this data comes together in the bar graph here that lists the average clock rates over the entire 25 minute test runs. At stock settings, the Titan Z was able to hit 894 MHz, just over the "typical" boost clock advertised by NVIDIA of 876 MHz. That's good news for NVIDIA! Even though there is a lot more clock speed variance than I would like to see with the Titan Z, the clock speeds are within the expectations set by NVIDIA out the gate.
Bumping up that power target though will help out gamers that do invest in the Titan Z quite a bit. Just going to 112% results in an average clock speed of 993 MHz, a 100 MHz jump worth about 11% overall. When we push that power target up even further, and overclock the frequency offset a bit, we actually get an average clock rate of 1074 MHz, 20% faster than the stock settings. This does mean that our Titan Z is pulling more power and generating more noise (quite a bit more actually) with fan speeds going from around 2000 to 2700 RPM.
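For readers who want to sanity-check the math, here's a minimal sketch of how those uplift percentages work out from the measured average clocks (the averages are from our graphs above; the helper function is just for illustration):

```python
# Average clock speeds (MHz) measured over the 25-minute Metro: Last Light runs
stock_avg = 894   # power target 100%, no offset
pt112_avg = 993   # power target 112%
oc_avg = 1074     # power target 120%, +75 MHz offset

def uplift(base, new):
    """Percentage gain of `new` over `base`."""
    return (new - base) / base * 100

print(f"112% power target: +{uplift(stock_avg, pt112_avg):.0f}%")  # ~11%
print(f"120% PT + 75 MHz:  +{uplift(stock_avg, oc_avg):.0f}%")     # ~20%
```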
At both 2560x1440 and 3840x2160, in the Metro: Last Light benchmark we ran, the added performance of the Titan Z does put it at the same level as the Radeon R9 295X2. Of course, it goes without saying that we could also overclock the 295X2 a bit further to improve ITS performance, but this is an exercise in education.
Does it change my stance or recommendation for the Titan Z? Not really; I still think it is overpriced compared to the performance you get from AMD's offerings and from NVIDIA's own lower priced GTX cards. However, it does lead me to believe that the Titan Z could have been fixed and could have offered at least performance on par with the R9 295X2 had NVIDIA been willing to break PCIe power specs and increase noise.
UPDATE (6/13/14): Some of our readers seem to be pretty confused about things so I felt the need to post an update to the main story here. One commenter below mentioned that I was one of "many reviewers that pounded the R290X for the 'throttling issue' on reference coolers" and thinks I am going easy on NVIDIA with this story. However, there is one major difference that he seems to overlook: the NVIDIA results here are well within the rated specs.
When I published one of our stories looking at clock speed variance of the Hawaii GPU in the form of the R9 290X and R9 290, our results showed that the clock speeds of these cards were dropping well below the rated clock speed of 1000 MHz. Instead I saw clock speeds that reached as low as 747 MHz and stayed near the 800 MHz mark. The problem with that was in how AMD advertised and sold the cards, using only the phrase "up to 1.0 GHz" in its marketing. I recommended that AMD begin selling the cards with a rated base clock and a typical boost clock instead of labeling them only with the, at the time, totally incomplete "up to" rating. In fact, here is the exact quote from that story: "AMD needs to define a "base" clock and a "typical" clock that users can expect." Ta da.
The GeForce GTX Titan Z though, as we look at the results above, is rated and advertised with a base clock of 705 MHz and a boost clock of 876 MHz. The clock speed comparison graph at the top of the story shows the green line (the card at stock) never dropping to that 705 MHz base clock while averaging 894 MHz. That average is ABOVE the rated boost clock of the card. So even though the GPU is changing between frequencies more often than I would like, the clock speeds are within the bounds set by NVIDIA. That was clearly NOT THE CASE when AMD launched the R9 290X and R9 290. If NVIDIA had sold the Titan Z with only a specification of "up to 1006 MHz" or something like that, then the same complaint would be made. But it is not.
The card isn't "throttling" at all, despite what a commenter claims below. That term implies the card is dropping below a rated performance level. Instead, it is acting in accordance with the GPU Boost technology that NVIDIA designed.
Some users seem concerned about temperature: the Titan Z will hit 80-83C in my testing, both stock and overclocked, and simply scales the fan speed to compensate accordingly. Yes, overclocked, the Titan Z gets quite a bit louder but I don't have sound level tests to show that. It's louder than the R9 295X2 for sure but definitely not as loud as the R9 290 in its original, reference state.
Finally, some of you seem concerned that I was restricted by NVIDIA on what we could test and talk about on the Titan Z. Surprise, surprise, NVIDIA didn't send us this card to test at all! In fact, they were kind of miffed when I did the whole review and didn't get into showing CUDA benchmarks. So, there's that.
A powerful architecture
In March of this year, NVIDIA announced the GeForce GTX Titan Z at its GPU Technology Conference. It was touted as the world's fastest graphics card with its pair of full GK110 GPUs but it came with an equally stunning price of $2999. NVIDIA claimed it would be available by the end of April for gamers and CUDA developers to purchase but it was pushed back slightly and released at the very end of May, going on sale for the promised price of $2999.
The specifications of GTX Titan Z are damned impressive - 5,760 CUDA cores, 12GB of total graphics memory, 8.1 TFLOPs of peak compute performance. But something happened between the announcement and product release that perhaps NVIDIA hadn't accounted for. AMD's Radeon R9 295X2, a dual-GPU card with full-speed Hawaii chips on-board, was released at $1499. I think it's fair to say that AMD took some chances that NVIDIA was surprised to see them take, including going the route of a self-contained water cooler and blowing past the PCI Express recommended power limits to offer a ~500 watt graphics card. The R9 295X2 was damned fast and I think it caught NVIDIA a bit off-guard.
As a result, the GeForce GTX Titan Z release was a bit quieter than most of us expected. Yes, the Titan Black card was released without sampling the gaming media, but that card was nearly a mirror of the GeForce GTX 780 Ti, just with a larger frame buffer, and the performance of that GPU was well known. For NVIDIA to release a flagship dual-GPU graphics card, admittedly the most expensive one I have ever seen with the GeForce brand on it, and NOT send out samples, was telling.
NVIDIA is adamant though that the primary target of the Titan Z is not just gamers but the CUDA developer that needs the most performance possible in as small of a space as possible. For that specific user, one that doesn't quite have the income to invest in a lot of Tesla hardware but wants to be able to develop and use CUDA applications with a significant amount of horsepower, the Titan Z fits the bill perfectly.
Still, the company was touting the Titan Z as "offering supercomputer class performance to enthusiast gamers" and telling gamers in launch videos that the Titan Z is the "fastest graphics card ever built" and that it was "built for gamers." So, interest piqued, we decided to review the GeForce GTX Titan Z.
The GeForce GTX TITAN Z Graphics Card
Cost and performance notwithstanding, the GeForce GTX Titan Z is an absolutely stunning looking graphics card. The industrial design that started with the GeForce GTX 690 (the last dual-GPU card NVIDIA released) and continued with the GTX 780 and Titan family lives on with the Titan Z.
The all metal finish looks good and stands up to abuse, keeping that PCB straight even with the heft of the heatsink. There is only a single fan on the Titan Z, center mounted, with a large heatsink covering both GPUs on opposite sides. The GeForce logo up top illuminates, as we have seen on all similar designs, which adds a nice touch.
Subject: Graphics Cards | June 9, 2014 - 02:53 PM | Jeremy Hellstrom
Tagged: amd, r9 280, msi, R9 280 GAMING OC, factory overclocked
[H]ard|OCP has just posted a review of MSI's factory overclocked R9 280 GAMING OC card (essentially a rebadged HD 7950), with a 67 MHz overclock on the GPU out of the box bringing it up to the 280X's default speed of 1 GHz. With a bit of work that can be increased; [H]'s testing was also done at 1095 MHz with the RAM raised to 5.4 GHz, which was enough to take its performance just beyond the stock GTX 760 it was pitted against. Considering the equality of the performance, as well as the price of these cards, the decision as to which way to go can be based on bundled games or personal preference.
"Priced at roughly $260 we have the MSI R9 280 GAMING OC video card, which features pre-overclocked performance, MSI's Twin Frozr IV cooling system, and highest end components. We'll focus on performance when gaming at 1080p between this boss and the GeForce GTX 760 video card!"
Here are some more Graphics Card articles from around the web:
- Gigabyte R7 250X OC 1GB GDDR5 @ Madshrimps
- HIS R7 250X iCooler 1GB GDDR5 @ Madshrimps
- PowerColor Radeon R9 295X2 Review @ OCC
- MSI Radeon R9 280 OC Review @ TechwareLabs
- Sapphire R9 290 Vapor-X OC Review @ Hardware Canucks
- AMD Kaveri Mobile APU Preview - FX-7600P with Radeon R7 Graphics @ Legit Reviews
- The Performance-Per-Watt, Efficiency Of GPUs On Open-Source Drivers @ Phoronix
- Testing 60+ Intel/AMD/NVIDIA GPUs On Linux With Open-Source Drivers @ Phoronix
- NVIDIA GeForce GT 740: I'd Rather Have Maxwell @ Phoronix
- NVIDIA’s GTX TITAN Z; GK110 Squared @ Hardware Canucks
With the GPU landscape mostly settled for 2014, we have the ability to really dig in and evaluate the retail models that continue to pop up from NVIDIA and AMD board partners. One of our favorite series of graphics cards over the years comes from MSI in the form of the Lightning brand. These cards tend to take the engineering levels to a point other designers simply won't do - and we love it! Obviously the target of this capability is additional overclocking headroom and stability, but what if the GPU target has issues scaling already?
That is more or less the premise of the Radeon R9 290X Lightning from MSI. AMD's Radeon R9 290X Hawaii GPU is definitely a hot and power hungry part, and that caused quite a few issues at the initial release. Since then though, both AMD and its add-in card partners have worked to improve the coolers installed on these cards, increasing performance consistency and decreasing the LOUD NOISES produced by the stock, reference cooler.
Let's dive into the latest to hit our test bench, the MSI Radeon R9 290X Lightning.
The MSI Radeon R9 290X Lightning
MSI continues to utilize the yellow and black color scheme that many of the company's high end parts integrate and I love the combination. I know that both NVIDIA and AMD disapprove of the distinct lack of "green" and "red" in the cooler and box designs, but good on MSI for sticking to its own thing.
The box for the Lightning card is equal to the prominence of the card itself and you even get a nifty drawer for all of the included accessories.
We originally spotted the MSI R9 290X Lightning at CES in January and the design remains the same. The cooler is quite large (and damn heavy) and is cooled by a set of three fans. The yellow fan in the center is smaller and spins a bit faster, creating more noise than I would prefer. All fan speeds can be adjusted with MSI's included fan control software.
Subject: Graphics Cards, Displays | June 4, 2014 - 12:40 AM | Ryan Shrout
Tagged: gsync, g-sync, freesync, DisplayPort, computex 2014, computex, adaptive sync
AMD FreeSync is likely a technology or brand or term that is going to be used a lot between now and the end of 2014. When NVIDIA introduced variable refresh rate monitor technology to the world in October of last year, one of the immediate topics of conversation was the response that AMD was going to have. NVIDIA's G-Sync technology is limited to NVIDIA graphics cards and only a few (actually just one still as I write this) monitors actually have the specialized hardware to support it. In practice though, variable refresh rate monitors fundamentally change the gaming experience for the better.
At CES, AMD went on the offensive and started showing press a hacked up demo of what they called "FreeSync", a similar version of the variable refresh technology working on a laptop. At the time, the notebook was a requirement of the demo because of the way AMD's implementation worked. Mobile displays have previously included variable refresh technologies in order to save power and battery life. AMD found that it could repurpose that technology to emulate the effects that NVIDIA G-Sync creates - a significantly smoother gaming experience without the side effects of Vsync.
Our video preview of NVIDIA G-Sync Technology
Since that January preview, things have progressed for the "FreeSync" technology. After AMD took the idea to the VESA board responsible for the DisplayPort standard, in April we found out that VESA had officially adopted the technology and called it Adaptive Sync.
So now what? AMD is at Computex and of course is taking the opportunity to demonstrate a "FreeSync" monitor with the DisplayPort 1.2a Adaptive Sync feature at work. Though they aren't talking about what monitor it is or who the manufacturer is, the demo is up and running and functions with frame rates wavering between 40 FPS and 60 FPS - the most crucial range of frame rates that can adversely affect gaming experiences. AMD has a windmill demo running on the system, perfectly suited to showing Vsync enabled (stuttering) and Vsync disabled (tearing) issues with a constantly rotating object. It is very similar to the NVIDIA clock demo used to show off G-Sync.
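The stutter and tearing trade-off the windmill demo illustrates can be sketched in a few lines. This is a toy model of my own (not AMD's or NVIDIA's demo code): with Vsync on a fixed 60 Hz display, a frame that misses the 16.7 ms refresh deadline is held until the next refresh, doubling its on-screen time; with a variable refresh display, the panel refreshes when the frame is ready.

```python
REFRESH = 1000 / 60  # fixed 60 Hz refresh interval, in milliseconds

def vsync_display_time(render_ms):
    """Vsync on: the frame is held until the next fixed refresh boundary."""
    intervals = -(-render_ms // REFRESH)  # ceiling division
    return intervals * REFRESH

def adaptive_display_time(render_ms):
    """Variable refresh: the display refreshes as soon as the frame is ready,
    but no faster than the panel's maximum refresh rate."""
    return max(render_ms, REFRESH)

# A 20 ms frame (50 FPS) just misses the 60 Hz deadline:
print(vsync_display_time(20))     # ~33.3 ms on screen -- perceived 30 FPS judder
print(adaptive_display_time(20))  # 20 ms on screen -- smooth 50 FPS
```

This is exactly why the 40-60 FPS range matters so much: every frame in that range misses the 60 Hz deadline by a little and gets penalized a full refresh interval under Vsync.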
The demo system is powered by an AMD FX-8350 processor and Radeon R9 290X graphics card. The monitor is running at 2560x1440 and is the very first working prototype of the new standard. Even more interesting, this is a pre-existing display that has had its firmware updated to support Adaptive Sync. That's potentially exciting news! Monitors COULD BE UPGRADED to support this feature, but AMD warns us: "...this does not guarantee that firmware alone can enable the feature, it does reveal that some scalar/LCD combinations are already sufficiently advanced that they can support some degree of DRR (dynamic refresh rate) and the full DPAS (DisplayPort Adaptive Sync) specification through software changes."
The time frame for retail available monitors using DP 1.2a is up in the air but AMD has told us that the end of 2014 is entirely reasonable. Based on the painfully slow release of G-Sync monitors into the market, AMD has less of a time hole to dig out of than we originally thought, which is good. What is not good news though is that this feature isn't going to be supported on the full range of AMD Radeon graphics cards. Only the Radeon R9 290/290X and R7 260/260X (and the R9 295X2 of course) will actually be able to support the "FreeSync" technology. Compare that to NVIDIA's G-Sync: it is supported by NVIDIA's entire GTX 700 and GTX 600 series of cards.
All that aside, seeing the first official prototype of "FreeSync" is awesome and is getting me pretty damn excited about variable refresh rate technologies once again! Hopefully we'll get some more hands on time (eyes on, whatever) with a panel in the near future to really see how it compares to the experience that NVIDIA G-Sync provides. There is still the chance that the technologies are not directly comparable, and some in-depth testing will be required to validate that.
Subject: Graphics Cards, Processors | June 3, 2014 - 02:10 PM | Ryan Shrout
Tagged: Intel, amd, richard huddy
Interesting news is crossing the ocean today as we learn that Richard Huddy, who has previously had stints at NVIDIA, ATI, AMD and most recently, Intel, is teaming up with AMD once again. Richard brings with him years of experience and innovation in the world of developer relations and graphics technology. Often called "the Godfather" of DirectX, AMD wants to prove to the community it is taking PC gaming seriously.
The official statement from AMD follows:
AMD is proud to announce the return of the well-respected authority in gaming, Richard Huddy. After three years away from AMD, Richard returns as AMD's Gaming Scientist in the Office of the CTO - he'll be serving as a senior advisor to key technology executives, like Mark Papermaster, Raja Koduri and Joe Macri. AMD is extremely excited to have such an industry visionary back. Having spent his professional career with companies like NVIDIA, Intel and ATI, and having led the worldwide ISV engineering team for over six years at AMD, Mr. Huddy has a truly unique perspective on the PC and Gaming industries.
Mr. Huddy rejoins AMD after a brief stint at Intel, where he had a major impact on their graphics roadmap. During his career Richard has made enormous contributions to the industry, including the development of DirectX and a wide range of visual effects technologies. Mr. Huddy’s contributions in gaming have been so significant that he was immortalized as ‘The Scientist’ in Max Payne (if you’re a gamer, you’ll see the resemblance immediately).
Kitguru has a video from Richard Huddy explaining his reasoning for the move back to AMD.
This move points AMD in a very interesting direction going forward. The creation of the Mantle API and the debate around AMD's developer relations programs are going to be hot topics as we move into the summer and I am curious how quickly Huddy thinks he can have an impact.
I have it on good authority we will find out very soon.
Subject: Graphics Cards | June 2, 2014 - 11:41 PM | Sebastian Peak
Tagged: computex, radeon, r9 295x2, Hawaii XT, dual gpu, computex 2014, ASUS ROG, asus, Ares, amd
The latest installment in the ASUS ARES series of ultra-powerful, limited-edition graphics cards has been announced, and the Ares III is set to be the “world’s fastest” video card.
The dual-GPU powerhouse is driven by two “hand-selected” Radeon Hawaii XT GPUs (R9 290X cores) with 8GB of GDDR5 memory. The card is overclockable according to ASUS, and will likely arrive factory overclocked as they claim it will be faster out of the box than the reference R9-295x2. The ARES III features a custom-designed EK water block, so unlike the R9 295x2 the end user will need to supply the liquid cooling loop.
ASUS claims that the ARES III will “deliver 25% cooler performance than reference R9 295X designs“, but to achieve this ASUS “highly” recommends a high flow rate loop with at least a 120x3 radiator “to extract maximum performance from the card,” and they “will provide a recommended list of water cooling systems at launch”.
Only 500 of the ARES III will be made, and are individually numbered. No pricing has been announced, but ASUS says to expect it to be more than a 295x2 ($1499) - but less than a TITAN Z ($2999). The ASUS ROG ARES III will be available in Q3 2014.
For more Computex 2014 coverage, please check out our feed!
Subject: General Tech, Graphics Cards | June 2, 2014 - 05:52 PM | Scott Michaud
Tagged: nvidia, geforce, geforce experience, ShadowPlay
NVIDIA has just launched another version of their GeForce Experience, incrementing the version to 2.1. This release allows video capture of up to "2500x1600", which I assume means 2560x1600, as well as better audio-video synchronization in Adobe Premiere. Also, because why stop going after FRAPS once you start, this release adds an in-game framerate indicator, along with push-to-talk for recording the microphone.
Another note: when GeForce Experience 2.0 launched, it introduced streaming of the user's desktop. This allowed recording of OpenGL and windowed-mode games by simply capturing an entire monitor. This mode was not capable of "Shadow Mode", which I believed was because they thought users didn't want a constant rolling video to be taken of their desktop in the event that they wanted to save a few minutes of it at some point. Turns out that I was wrong; the feature was coming and it arrived with GeForce Experience 2.1.
GeForce Experience 2.1 is now available at NVIDIA's website, unless it already popped up a notification for you.
Subject: Graphics Cards | May 28, 2014 - 07:17 PM | Jeremy Hellstrom
Tagged: driver, Catalyst 14.4 beta, amd
Get the latest Catalyst for your Radeon!
- Starting with AMD Catalyst 14.6 Beta, AMD will no longer support Windows 8.0 (and the WDDM 1.2 driver). Windows 8.0 users should upgrade (for free) to Windows 8.1 to take advantage of the new features found in the AMD Catalyst 14.6 Beta.
- AMD Catalyst 14.4 will remain available for users who wish to remain on Windows 8.0. A future AMD Catalyst release will allow the WDDM 1.1 (Windows 7) driver to be installed under Windows 8.0 for those users unable to upgrade to Windows 8.1.
- The AMD Catalyst 14.6 Beta Driver can be downloaded from the following links: AMD Catalyst 14.6 Beta Driver for Windows
- NOTE! This Catalyst Driver is provided "AS IS", and under the terms and conditions of the End User License Agreement provided with it.
- Performance improvements
- Watch Dogs performance improvements
- AMD Radeon R9 290X – 1920x1080 4x MSAA – improves up to 25%
- AMD Radeon R9 290X – 2560x1600 4x MSAA – improves up to 28%
- AMD Radeon R9 290X CrossFire configuration (3840x2160, Ultra settings, 4x MSAA) – 92% scaling
- Murdered: Soul Suspect performance improvements
- AMD Radeon R9 290X – 2560x1600 4x MSAA – improves up to 16%
- AMD Radeon R9 290X CrossFire configuration (3840x2160, Ultra settings, 4x MSAA) – 93% scaling
- AMD Eyefinity enhancements: Mixed Resolution Support
- A new architecture providing brand new capabilities
- Display groups can be created with monitors of different resolution (including difference sizes and shapes)
- Users have a choice of how surface is created over the display group
- Fill – legacy mode, best for identical monitors
- Fit – create the Eyefinity surface using best available rectangular area with attached displays.
- Expand – create a virtual Eyefinity surface using desktops as viewports onto the surface.
- Eyefinity Display Alignment
- Enables control over alignment between adjacent monitors
- One-Click Setup – driver detects layout of extended desktops
- Can create Eyefinity display group using this layout in one click!
- New user controls for video color and display settings
- Greater control over Video Color Management:
- Controls have been expanded from a single slider for controlling Boost and Hue to per color axis
- Color depth control for Digital Flat Panels (available on supported HDMI and DP displays)
- Allows users to select different color depths per resolution and display
- AMD Mantle enhancements
- Mantle now supports AMD Mobile products with Enduro technology
- Battlefield 4: AMD Radeon HD 8970M (1366x768; high settings) – 21% gain
- Thief: AMD Radeon HD 8970M (1920x1080; high settings) – 14% gain
- Star Swarm: AMD Radeon HD 8970M (1920x1080; medium settings) – 274% gain
- Enables support for Multi-GPU configurations with Thief (requires the latest Thief update)
... and much more, grab it here.
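For context, the CrossFire "scaling" percentages quoted in notes like these are typically derived from single-GPU versus dual-GPU frame rates. A quick sketch of that calculation (the frame rates below are made-up illustration values, not AMD's measurements):

```python
def crossfire_scaling(single_gpu_fps, dual_gpu_fps):
    """How much of a second GPU's theoretical contribution is realized.
    100% scaling would mean the dual-GPU result is exactly 2x one card."""
    return (dual_gpu_fps - single_gpu_fps) / single_gpu_fps * 100

# Hypothetical example: 30 FPS on one R9 290X, 57.6 FPS in CrossFire
print(f"{crossfire_scaling(30, 57.6):.0f}% scaling")  # 92% scaling
```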
The AMD Argument
Earlier this week, a story was posted on a Forbes.com blog that dove into the idea of NVIDIA GameWorks and how it was doing a disservice not just to the latest Ubisoft title Watch_Dogs but to PC gamers in general. Using quotes from AMD directly, the author claims that NVIDIA is actively engaging in methods to prevent game developers from optimizing games for AMD graphics hardware. This is an incredibly bold statement and one that I hope AMD is not making lightly. Here is a quote from the story:
Gameworks represents a clear and present threat to gamers by deliberately crippling performance on AMD products (40% of the market) to widen the margin in favor of NVIDIA products. . . . Participation in the Gameworks program often precludes the developer from accepting AMD suggestions that would improve performance directly in the game code—the most desirable form of optimization.
The example cited in the Forbes story is the recently released Watch_Dogs title, which appears to show favoritism towards NVIDIA GPUs, with the performance of the GTX 770 ($369) coming close to the performance of a Radeon R9 290X ($549).
It's evident that Watch Dogs is optimized for Nvidia hardware but it's staggering just how un-optimized it is on AMD hardware.
Watch_Dogs is the latest GameWorks title released this week.
I decided to get in touch with AMD directly to see exactly what stance the company was attempting to take with these kinds of claims. No surprise, AMD was just as forward with me as they appeared to be in the Forbes story originally.
The AMD Stance
Central to AMD’s latest annoyance with the competition is the NVIDIA GameWorks program. First unveiled last October during a press event in Montreal, GameWorks combines several NVIDIA built engine functions into libraries that can be utilized and accessed by game developers to build advanced features into games. NVIDIA’s website claims that GameWorks is “easy to integrate into games” while also including tutorials and tools to help quickly generate content with the software set. Included in the GameWorks suite are tools like VisualFX which offers rendering solutions like HBAO+, TXAA, Depth of Field, FaceWorks, HairWorks and more. Physics tools include the obvious like PhysX while also adding clothing, destruction, particles and more.