A baker's dozen of GTX 960
Back on the launch day of the GeForce GTX 960, we hosted NVIDIA's Tom Petersen for a live stream. During the event, NVIDIA and its partners provided ten GTX 960 cards for our live viewers to win, which we handed out over the course of about an hour and a half. An interesting idea was proposed during the event: what would happen if we tried to overclock all of the cards NVIDIA had brought along to see what the distribution of results looked like? After notifying all the winners of their prizes and asking for permission from each, we started the arduous process of testing and overclocking a total of 13 different GTX 960 cards (10 prizes plus our 3 retail units already in the office).
Hopefully we will be able to provide a solid base of knowledge for buyers of the GTX 960 that we don't normally have the opportunity to offer: what range of overclocks can you expect, and what is the average or median result? I think you will find the data interesting.
The 13 Contenders
Our collection of thirteen GTX 960 cards includes a handful each from ASUS, EVGA, and MSI. The ASUS cards are all STRIX models, the EVGA cards are of the SSC variety, and the MSI cards include a single Gaming model and three 100ME cards. (The only difference between the Gaming and 100ME MSI cards is the color of the cooler.)
To be fair to the prize winners, I actually assigned each of them a specific graphics card before opening them up and testing them. I didn't want to be accused of favoritism by giving the best overclockers to the best readers!
Subject: General Tech, Motherboards | November 4, 2014 - 01:12 AM | Scott Michaud
Tagged: X99, overclocking, msi, mpower, motherboards, motherboard
The X99S XPOWER is MSI's top-of-the-line overclocking motherboard. The company has just introduced the X99S MPOWER to complement it in their product stack. It is a similar motherboard with a smaller price tag, achieved by removing a few optional features (I will outline the major differences below). These cuts are basically unrelated to performance and overclocking, apart from the buttons to set the base clock on the motherboard itself and a couple of accessories (the XPOWER comes with a free Delid Die Guard and a temporary fan stand). The rest are more along the lines of the number of I/O ports.
The main differences with the MPOWER are:
- It does not have the fifth, eight-lane PCIe slot, just the four provided by Haswell-E.
- It has one Intel Gigabit Ethernet adapter, instead of two.
- It does not have built-in 802.11ac WiFi or Bluetooth.
- It has two fewer USB 3.0 ports (external).
- It has one fewer USB 2.0 port (internal, seemingly the "Direct USB" port for BIOS updates).
- It does not come with a Delid Die Guard or fan stand.
There are a few other differences, such as the XPOWER having an I/O port cover and a few extra on-board overclocking switches and buttons, but I cannot see anything that stands out. The current price difference is about $115 at Newegg, which is a healthy savings if nothing on the list is a deal-killer.
Subject: Graphics Cards | October 14, 2014 - 06:49 PM | Jeremy Hellstrom
Tagged: GTX 980, nvidia, overclocking
[H]ard|OCP has had more time to spend with their reference GTX 980 and has reached the best stable overclock it could on this board without moving to third-party coolers or serious voltage mods: 1516MHz core and 8GHz VRAM. Retail models will of course offer different results; regardless, it is not too shabby a result for a reference card. This overclock was not easy to reach, and how they managed it and the lessons they learned along the way make for interesting reading. The performance increases were noticeable; in most cases the overclocked card beat the stock card by 25%. Since this was a reference card, retail cards with enhanced coolers, and possibly custom BIOSes that disable NVIDIA's TDP/power limit settings, could go even faster. You can bet [H] and PCPer will both be revisiting the overclocking potential of the GTX 980.
"The new NVIDIA GeForce GTX 980 makes overclocking GPUs a ton of fun again. Its extremely high clock rates achieved when you turn the right dials and sliders result in real world gaming advantages. We will compare it to a GeForce GTX 780 Ti and Radeon R9 290X; all overclocked head-to-head."
Here are some more Graphics Card articles from around the web:
- GeForce GTX 980 cards from Gigabyte and Zotac @ The Tech Report
- Palit GTX980 Super Jetstream OC @ Kitguru
- The NVIDIA GTX 980 SLI Review @ Hardware Canucks
- Gainward Phantom GeForce GTX 970 4GB @ eTeknix
- MSI GeForce GTX 980 Gaming 4 GB @ techPowerUp
- NVIDIA GeForce GTX 980M & GTX 970M Preview @ Hardware Canucks
- NVIDIA GTX 970 SLI Performance Review @ Hardware Canucks
- NVIDIA GeForce GTX 980 Dominates With OpenCL On Linux @ Phoronix
- Sapphire R9 270X Toxic Vs NZXT Kraken Cooling @ eTeknix
- Raijintek Morpheus GPU Cooler @ eTeknix
- Arctic Accelero Hybrid II-120 Liquid GPU Cooler @ Kitguru
- AMD Radeon R9 285 Tonga Performance On Linux @ Phoronix
- Gigabyte AMD Radeon R9 285 WindForce OC Video Card Review @ Madshrimps
- HIS R9 290X iPower IceQ X2 Turbo 4GB GDDR5 Video Card Review @ Madshrimps
- Sapphire Radeon R9 285 ITX Compact OC Review @ HiTech Legion
- XFX R9 280 Double Dissipation 3GB @ [H]ard|OCP
Subject: General Tech, Graphics Cards | September 26, 2014 - 02:03 AM | Scott Michaud
Tagged: steam, precisionx 16, precisionx, overclocking, nvidia, evga
If you were looking to download EVGA Precision X recently, you were likely disappointed. For a few months now, the software has been unavailable because of a disagreement between the add-in board (AIB) partner and Guru3D (and the RivaTuner community). EVGA maintains that it was a completely original work, and that references to RivaTuner were a documentation error. As a result, they pulled the tool just a few days after launching X 15.
This new version, besides presumably cleaning up all of the existing issues mentioned above, adds support for the new GeForce GTX 900-series cards, a new interface, an "OSD" for inside applications, and Steam Achievements (??). You can get a permanent badge on your Steam account for breaking 1200 MHz on your GPU, taking a screenshot, or restoring settings to default. I expect that latter badge is one of shame, like the Purple Heart from Battlefield, except that it is not actually a bad thing, and pressing it says nothing about your overclocking skills. Seriously, save yourself some headache and just press Default if things do not seem right.
PrecisionX 16 is free, available now, and doesn't require an EVGA card (just a site sign-up).
Subject: General Tech | August 18, 2014 - 01:33 PM | Jeremy Hellstrom
Tagged: gigabyte, Extreme Overclocking Competition, overclocking
The weapons this year at Gigabyte's EOC were a Core i5-4690K and Core i7-4790K, Gigabyte Z97X-SOC FORCE LN2, Gigabyte HD7790, G.Skill TridentX F3-2933C12D-8GTXDG
and a Seasonic SSX-1200 Platinum PSU. Team Awardfabrik hit 6578MHz on the i7-4790K with a mix of luck and skill, while Team Switzerland took the top spot for memory at 2106.3MHz. Raw speed of one component is not enough to win this competition, and when the nitrogen fog lifted it was Team HardwareLuxx with the overall win. Check out the benchmarks that were run, plus pictures and video from the event, at Madshrimps.
"Each year Gigabyte Germany organizes the Extreme Overclocking Competition. At the EOC the best overclocking teams of Germany have a chance to prove who is still king. The main organizer behind each event is Germany’s finest Roman Hartung also known as der8auer at HWBot.org. This year besides Gigabyte also G.Skill, Intel, Seasonic and Gelid solutions provided the required hardware and funds to allow this clash of the titans to take place at the Know Cube at the Heilbronn Tech University."
Here is some more Tech News from around the web:
- The TR Podcast 159: Kaveri returns, Shield delivers, and Brix gets game @ The Tech Report
- How to Encrypt Email in Linux @ Linux.com
- Microsoft cries UNINSTALL in the wake of Blue Screens of Death @ The Register
- HTC One W8 leak reveals Windows Phone 8.1, Blinkfeed app @ The Inquirer
- Boffins find hundreds of thousands of woefully insecure IoT devices @ The Register
- LUXA2 PL2 6000mAh Leather Power Bank Review @ NikKTech
- NikKTech & Thermalright Worldwide Summer Giveaway
- Gamescom 2014 Gallery @ Kitguru
Redefining Price/Performance with AMD Motherboards
Motherboards are fascinating to me. They always have been. I remember voraciously reading motherboard reviews in the mid-90s. I simply could not get enough of them. Some new chipset from SiS, VIA, or ALi? I scoured the internet for information on them and what new features they would bring to the table. Back then motherboards did not have the retail presence they do now. The manufacturers were starting to learn to differentiate their products and cater to the enthusiasts who would not only buy and support these products, but also recommend them to friends/family/the world.
Today motherboards are really the foundation of any PC build. Choosing a motherboard is no longer just picking up some whitebox board with a 440BX chipset. Now users are much more active in debating what features they need, what kind of feedback a manufacturer has received from consumers, and what ratings a board has on Amazon or Newegg. Features like build quality and overclocking performance sway users from company to company and product to product.
In the past 15 years or so we have seen some pretty rigid guidelines for pricing of motherboards. The super cheap “PC Chips” style motherboards existed below the $90 range. The decent, but unexciting motherboards with the bare minimum of features would go from $90 to $150. The $150 and beyond products were typically considered enthusiast class motherboards with expanded features, better build quality, and more robust power delivery options. Thankfully for consumers, this model is being shaken up by the latest generation of products from AMD.
MSI ensures that everything is nicely packed and protected in their black and red box.
I mentioned in the previous Gigabyte G1.Sniper A88X review that AMD and its partners do not have the luxury of offering a $150-and-above FM2+ motherboard due to the nature (and pricing) of the latest FM2+ APUs. I am fairly sure the number of people willing to spend $200 on a motherboard to house a $179 APU that seemingly overclocks as well on a cheap board as it does on a more expensive one (meaning, not very well at all) is pretty low. If there is one bright side to the latest Kaveri APUs, it is that the graphics portion is extremely robust in both gaming and OpenCL applications. The hope for AMD and users alike is that HSA will in fact take off and provide a significant performance boost in a wide variety of applications that typically require quite a bit of horsepower.
Subject: Processors | July 8, 2014 - 07:23 PM | Jeremy Hellstrom
Tagged: intel atom, Pentium G3258, overclocking
Technically it is an Anniversary Edition Pentium processor, but it reminds those of us who have been in the game a long time of the old Celeron Ds, which cost very little and overclocked like mad! The Pentium G3258 is well under $100, but the stock speed of 3.2GHz is only a recommendation, as this processor is just begging to be overclocked. The Tech Report coaxed it up to 4.8GHz on air cooling, 100MHz higher than the i7-4790K they tested. That a processor costing about 20% of the price of the 4790K can almost match its performance in Crysis 3, without resorting to even high-end watercooling, should make any gamer on a budget sit up and take notice. Sure, you lose the extra cores and other features of the flagship processor, but if you are primarily a gamer those are not your focus; you simply want the fastest processor you can get for a reasonable amount of money. Stay tuned for more information about the Anniversary Edition Pentium, as there are more benchmarks to be run!
"This new Pentium is an unlocked dual-core CPU based on the latest 22-nm Haswell silicon. I ran out and picked one up as soon as they went on sale last week. The list price is only 72 bucks, but Micro Center had them on sale for $60. In other words, you can get a processor that will quite possibly run at clock speeds north of 4GHz—with all the per-clock throughput of Intel's very latest CPU core—for the price of a new Call of Shooty game.
Also, ours overclocks like a Swiss watchmaker on meth."
Here are some more Processor articles from around the web:
- Intel Pentium G3258 Dual Core Processor Gaming Performance @ Legit Reviews
- Intel Pentium G3258 Processor Review @ Legit Reviews
- Intel Core i7 4790K @ eTeknix
- Devil's Canyon Intel Core i7-4790K @ Legion Hardware
- Overclocking the Core i7-4790K @ The Tech Report
Subject: General Tech, Motherboards, Memory | July 6, 2014 - 03:53 AM | Scott Michaud
Tagged: overclocking, memory, gigabyte
About a week ago, HWBOT posted a video of a new DDR3 memory clock record, which was apparently beaten the very next day after the video was published. Tom's Hardware reported on the first of the two, allegedly performed by Gigabyte on their Z97X-SOC Force LN2 motherboard. The Tom's Hardware article also, erroneously, lists the 2nd place overclock (then 1st place) at 4.56 GHz when it was really half that (2.28 GHz), because DDR memory transfers data on both clock edges and is marketed at double its actual clock. This team posted their video with a recording of the overclock being measured by an oscilloscope, which supports the claim that they did not game HWBOT's measurement.
The now-first-place team, which managed 2.31 GHz on the same motherboard, did not go to the same level of proof, as far as I can tell.
This is the 2nd fastest overclock...
... but the fastest to be recorded with an oscilloscope, as far as I can tell
Before the machine crashes to a blue screen, the oscilloscope actually reports 2.29 GHz. I am not sure why they took 10 MHz off, but I expect it is because the system crashed before HWBOT was able to record the higher frequency. Either way, 2.28 GHz was a new world record, verified by a video, whether or not it was immediately beaten.
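That factor-of-two confusion is just the DDR naming convention at work; a trivial sanity check of the numbers above (the helper function is hypothetical, purely for illustration):

```python
def ddr_effective_rate(clock_ghz):
    """DDR (double data rate) memory transfers data on both the rising and
    falling clock edges, so the marketed transfer rate is twice the clock."""
    return clock_ghz * 2

# The record run: a 2.28 GHz actual clock corresponds to "DDR3-4560"
print(ddr_effective_rate(2.28))  # what Tom's Hardware mistook for the clock
# The oscilloscope's reading just before the crash
print(ddr_effective_rate(2.29))
```

This is also why the Tom's Hardware figure of 4.56 GHz was exactly double the real 2.28 GHz clock: the article quoted the transfer rate as though it were the clock.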
Tom's Hardware also claims that liquid nitrogen was used to cool the system, which makes sense of why they would use an LN2 board. It could have been chosen just for its overclocking features, but that would have been a weird tradeoff: the LN2 board does not have mounting points for a CPU air or water cooler, so without liquid nitrogen the extra features would have been offset by the need to rig a custom CPU cooler. It is also unclear how the memory was cooled, whether it was, somehow, liquid nitrogen-cooled too, or simply exposed to the air.
FM2+ Has a High End?
AMD faces a bit of a quandary when it comes to their products. Their APUs are great at graphics, but not so great at general CPU performance. Their products are all under $200 for the CPU/APU but these APUs are not popular with the enthusiast and gaming crowd. Yes, they can make excellent budget gaming systems for those who do not demand ultra-high resolutions and quality settings, but it is still a tough sell for a lot of the mainstream market; the primary way AMD pushes these products is price.
Perhaps the irony here is that AMD is extremely competitive with Intel when it comes to chipset features. The latest A88X Fusion Control Hub is exceptionally well rounded with four native USB 3.0 ports, ten USB 2.0 ports, and eight SATA-6G ports. Performance of this chipset is not all that far off from what Intel offers with the Z87 chipset (USB and SATA-6G are slower, but not dramatically so). The chip also offers RAID 0, 1, 5, and 10 support as well as a 10/100/1000 Ethernet MAC (but a physical layer chip is still required).
Now we get back to price. AMD is not charging a whole lot for these FCH units, even the top end A88X. I do not have the exact number, but it is cheap as compared to the competing Intel option. Intel’s chipset business has made money for the company for years, but AMD does not have that luxury. AMD needs to bundle effectively to be competitive, so it is highly doubtful that the chipset division makes a net profit at the end of the day. Their job is to help push AMD’s CPU and APU offerings as much as possible.
These low cost FCH chips allow motherboard manufacturers to place a lot of customization on their board, but they are still limited in what they can do. A $200+ motherboard simply will not fly with consumers for the level of overall performance that even the latest AMD A10 7850K APU provides in CPU bound workloads. Unfortunately, HSA has not yet taken off to leverage the full potential of the Kaveri APU. We have had big developments, just not big enough that the majority of daily users out there will require an AMD APU. Until that happens, AMD will not be viewed favorably when it comes to its APU offerings in gaming or high performance systems.
The quandary, obviously, is how AMD and its motherboard partners can create inexpensive motherboards that are feature packed, yet will not break the bank or become burdensome to APU sales. The FX series of processors from AMD has a bit more leeway, as the performance of the high-end FX-8350 is not considered bad and it is a decent overclocker. That platform can sustain higher motherboard costs due to this performance. The APU side, not so much. The answer to this quandary is tradeoffs.
Subject: Graphics Cards | June 12, 2014 - 06:17 PM | Ryan Shrout
Tagged: overclocking, nvidia, gtx titan z, geforce
Earlier this week I posted a review of the NVIDIA GeForce GTX Titan Z graphics card, a dual-GPU Kepler GK110 part that currently sells for $3000. If you missed that article you should read it first and catch up, but the basic summary was that, for PC gamers, it's slower than, and twice the price of, AMD's Radeon R9 295X2.
In that article, though, I mentioned that the Titan Z had more variable clock speeds than any other GeForce card I had tested. At the time I didn't go any further than that, since the performance of the card already pointed out the deficit it had going up against the R9 295X2. However, several readers asked me to dive into overclocking with the Titan Z, and with that came the need to show clock speed changes.
My overclocking was done through EVGA's PrecisionX software, and we measured clock speeds with GPU-Z. The first step in overclocking an NVIDIA GPU is to simply move up the Power Target slider and see what happens. This tells the card that it is allowed to consume more power than it normally would, and then, thanks to GPU Boost technology, the clock speed should scale up naturally.
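GPU Boost's exact behavior is NVIDIA's secret sauce, but the relationship between the power target and the sustained clock can be sketched with a toy model (entirely hypothetical numbers and logic, not NVIDIA's actual algorithm):

```python
def boosted_clock(base_mhz, max_boost_mhz, demand_w, power_limit_w):
    """Toy model (NOT NVIDIA's real GPU Boost): run at the top boost bin if
    the workload fits under the power limit; otherwise scale the clock back
    proportionally, but never below the rated base clock."""
    if demand_w <= power_limit_w:
        return max_boost_mhz
    return max(base_mhz, max_boost_mhz * power_limit_w / demand_w)

# Hypothetical numbers: a workload that would draw 430 W at full boost,
# against a notional 375 W limit at the 100% power target.
print(boosted_clock(705, 1000, 430, 375))        # power-limited, below full boost
print(boosted_clock(705, 1000, 430, 375 * 1.2))  # 120% target: full boost bin
```

The point of the sketch is simply that raising the power target raises the clock the card can *sustain*, which is exactly the pattern the frequency graphs below show.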
Click to Enlarge
And that is exactly what happened. I ran through 30 minutes of looped testing with Metro: Last Light at stock settings, with the Power Target at 112%, with the Power Target at 120% (the maximum setting) and then again with the Power Target at 120% and the GPU clock offset set to +75 MHz.
That 75 MHz offset was the highest setting we could get to run stable on the Titan Z, which brings the Base clock up to 781 MHz and the Boost clock to 951 MHz. Though, as you'll see in our frequency graphs below, the card was still reaching well above that.
Click to Enlarge
This graph shows the clock rates of the GK110 GPUs on the Titan Z over the course of 25 minutes of looped Metro: Last Light gaming. The green line is the stock performance of the card without any changes to the power settings or clock speeds. While it starts out well enough, hitting clock rates of around 1000 MHz, it quickly dives, and by 300 seconds of gaming it is often running at or under the 800 MHz mark. That pattern is consistent throughout the entire tested time, with an average clock speed of 894 MHz.
Next up is the blue line, generated by simply moving the power target from 100% to 112%, giving the GPUs a little more thermal headroom to play with. The results are impressive, with a much more consistent clock speed. The yellow line, for the power target at 120%, is even better with a tighter band of clock rates and with a higher average clock.
Finally, the red line represents the 120% power target with a +75 MHz offset in PrecisionX. There we see a clock speed consistency matching the yellow line but offset up a bit, as we have been taught to expect with NVIDIA's recent GPUs.
Click to Enlarge
The result of all this data comes together in the bar graph here that lists the average clock rates over the entire 25 minute test runs. At stock settings, the Titan Z was able to hit 894 MHz, just over the "typical" boost clock advertised by NVIDIA of 876 MHz. That's good news for NVIDIA! Even though there is a lot more clock speed variance than I would like to see with the Titan Z, the clock speeds are within the expectations set by NVIDIA out the gate.
Bumping up that power target, though, will help out gamers that do invest in the Titan Z quite a bit. Just going to 112% results in an average clock speed of 993 MHz, a roughly 100 MHz jump worth about 11% overall. When we push the power target up even further, and add the frequency offset, we actually get an average clock rate of 1074 MHz, 20% faster than stock. This does mean that our Titan Z is pulling more power and generating more noise (quite a bit more, actually), with fan speeds going from around 2000 to 2700 RPM.
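Those uplift figures fall straight out of the average clocks; a quick check (the helper is hypothetical, just re-deriving the percentages above):

```python
def uplift_pct(stock_mhz, oc_mhz):
    """Percent increase of an overclocked average clock over the stock average."""
    return (oc_mhz - stock_mhz) / stock_mhz * 100

# Average clocks from the Titan Z runs above: 894 MHz stock,
# 993 MHz at the 112% power target, 1074 MHz at 120% + 75 MHz offset.
print(round(uplift_pct(894, 993)))   # ~11%
print(round(uplift_pct(894, 1074)))  # ~20%
```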
At both 2560x1440 and 3840x2160, in the Metro: Last Light benchmark we ran, the added performance does put the Titan Z at the same level as the Radeon R9 295X2. Of course, it goes without saying that we could also overclock the 295X2 a bit further to improve ITS performance, but this is an exercise in education.
Does it change my stance or recommendation for the Titan Z? Not really; I still think it is overpriced compared to the performance you get from AMD's offerings and from NVIDIA's own lower priced GTX cards. However, it does lead me to believe that the Titan Z could have been fixed and could have offered at least performance on par with the R9 295X2 had NVIDIA been willing to break PCIe power specs and increase noise.
UPDATE (6/13/14): Some of our readers seem to be pretty confused about things so I felt the need to post an update to the main story here. One commenter below mentioned that I was one of "many reviewers that pounded the R290X for the 'throttling issue' on reference coolers" and thinks I am going easy on NVIDIA with this story. However, there is one major difference that he seems to overlook: the NVIDIA results here are well within the rated specs.
When I published one of our stories looking at clock speed variance of the Hawaii GPU, in the form of the R9 290X and R9 290, our results showed that the clock speeds of these cards were dropping well below the rated clock speed of 1000 MHz. Instead I saw clock speeds that reached as low as 747 MHz and stayed near the 800 MHz mark. The problem was in how AMD advertised and sold the cards, using only the phrase "up to 1.0 GHz" in its marketing. I recommended that AMD begin selling the cards with a rated base clock and a typical boost clock, rather than labeling them only with the, at the time, totally incomplete "up to" rating. In fact, here is the exact quote from that story: "AMD needs to define a "base" clock and a "typical" clock that users can expect." Ta da.
The GeForce GTX Titan Z, though, as we look at the results above, is rated and advertised with a base clock of 705 MHz and a boost clock of 876 MHz. The clock speed comparison graph at the top of the story shows the green line (the card at stock) never dipping below that 705 MHz base clock while averaging 894 MHz. That average is ABOVE the rated boost clock of the card. So even though the GPU is changing between frequencies more often than I would like, the clock speeds are within the bounds set by NVIDIA. That was clearly NOT THE CASE when AMD launched the R9 290X and R9 290. If NVIDIA had sold the Titan Z with only a specification of "up to 1006 MHz" or something like it, then the same complaint would be made. But it is not.
The card isn't "throttling" at all, in fact, as someone points out below; that term implies that the card is dropping below a rated performance level. It is acting in accordance with the GPU Boost technology that NVIDIA designed.
Some users seem concerned about temperature: the Titan Z hit 80-83C in my testing, both stock and overclocked, and simply scales its fan speed to compensate. Yes, overclocked, the Titan Z gets quite a bit louder, but I don't have sound level tests to show that. It's louder than the R9 295X2 for sure, but definitely not as loud as the R9 290 in its original reference state.
Finally, some of you seem concerned that I was restricted by NVIDIA in what we could test and talk about on the Titan Z. Surprise, surprise: NVIDIA didn't send us this card to test at all! In fact, they were kind of miffed when I did the whole review and didn't get into showing CUDA benchmarks. So, there's that.