Subject: Graphics Cards | October 21, 2014 - 06:42 PM | Tim Verry
Tagged: maxwell, nvidia, gaming, mini ITX, small form factor, GTX 970, GM204, gigabyte
Gigabyte has announced a new miniature graphics card based around NVIDIA's GeForce GTX 970 GPU. The upcoming card is a dual slot, single fan design that is even shorter than the existing GTX 970 graphics cards (which are fairly short themselves). Officially known as the GV-N970IXOC-4GD, the miniaturized GTX 970 will be available for your small form factor (Mini ITX) systems in November for around $330.
The new Mini ITX compatible graphics card packs in a factory overclocked GeForce GTX 970 processor, 4GB of video memory, a custom PCB, and a custom WindForce-inspired cooler into a graphics card that is smaller than any of the existing GTX 970 cards. Gigabyte is using a custom design with a single 8-pin PCI-E power connector instead of two 6-pin connectors from the reference design or the 6-pin plus 8-pin from manufacturers like EVGA. The single power connector means less cabling to route (and
successfully attempt to hide heh) and better small form factor PSU compatibility. The cooler is an aluminum fin array with three copper heatpipes paired with a single shrouded fan.
The tiny card comes factory overclocked at 1076 MHz base and 1216 MHz boost, which is a respectable boost over the reference specifications. For reference, the GeForce GTX 970 processor is a 28nm chip using NVIDIA's GM204 "Maxwell" architecture with 1664 CUDA cores clocked at 1051 MHz base and 1178 MHz boost. It appears that Gigabyte has left the 4GB of GDDR5 untouched at 7.0 GT/s.
| | Gigabyte GTX 970 Mini ITX | Reference GTX 970 |
|---|---|---|
| Core Base (MHz) | 1076 | 1051 |
| Core Boost (MHz) | 1216 | 1178 |
| Memory Rate | 7.0 GT/s | 7.0 GT/s |
| PCI-E Power | 1x 8-pin | 2x 6-pin |
The display output on the miniature Gigabyte card differs slightly from the reference design with the addition of a DVI-D connection.
- 3 x DisplayPort
- 1 x HDMI
- 1 x DVI-I
- 1 x DVI-D
According to Gigabyte, its custom cooler resulted in lower temperatures versus the reference design. The company claims that when running Metro: Last Light, the Mini ITX Gigabyte GTX 970 GPU ran at 62°C versus a reference design hitting 76°C running the same game. If true, the Gigabyte cooler is capable of keeping the card significantly cooler while taking up less space (though fan speeds and sound levels were not mentioned, nor compared to other custom coolers).
The small form factor friendly GTX 970 is coming next month with an MSRP of $329.99. Are you excited?
GeForce GTX 980M Performance Testing
When NVIDIA launched the GeForce GTX 980 and GTX 970 graphics cards last month, part of the discussion at our meetings also centered around the mobile variants of Maxwell. That NDA lifted a bit later, though, and Scott wrote up a short story announcing the release of the GTX 980M and the GTX 970M mobility GPUs. Both of these GPUs are based on the same GM204 design as the desktop cards, though, as you should have come to expect by now, with lower specifications than the similarly-named desktop options. Take a look:
| | GTX 980M | GTX 970M |
|---|---|---|
| Memory | Up to 4GB | Up to 3GB |
| Memory Rate | 2500 MHz (5.0 GT/s) | 2500 MHz (5.0 GT/s) |
Just like the desktop models, GTX 980M and GTX 970M are built on the 28nm process technology and are tweaked and built for power efficiency - one of the reasons the mobile release of this product is so interesting.
With a CUDA core count of 1536, the GTX 980M has 25% fewer shader cores than the desktop GTX 980 (2048), along with a slightly lower base clock speed. The result is a peak theoretical performance of 3.189 TFLOPs, compared to 4.6 TFLOPs on the GTX 980 desktop. In fact, that is only slightly higher than the Kepler-based GTX 880M, which clocks in with the same CUDA core count (1536) but a TFLOP capability of 2.9. Bear in mind that the GTX 880M uses a different architecture than the GTX 980M; Maxwell's design advantages go beyond just CUDA core count and clock speed.
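The peak figures above follow from a simple formula: CUDA cores × clock × 2 FLOPs per core per clock (one fused multiply-add counts as two operations). A quick sketch; the core counts come from this article, while the clock values are back-solved from the quoted TFLOP numbers, so treat them as illustrative:

```python
def theoretical_tflops(cuda_cores, clock_mhz, flops_per_core_per_clock=2):
    """Peak single-precision throughput: cores x clock x 2 (one FMA = 2 FLOPs)."""
    return cuda_cores * clock_mhz * 1e6 * flops_per_core_per_clock / 1e12

# Clocks back-solved from the quoted TFLOP figures (illustrative only).
print(round(theoretical_tflops(1536, 1038), 3))  # GTX 980M -> 3.189
print(round(theoretical_tflops(1280, 924), 3))   # GTX 970M -> 2.365
print(round(theoretical_tflops(1536, 954), 3))   # GTX 880M -> ~2.9
```

The same arithmetic on the desktop GTX 980 (2048 cores at a 1126 MHz base clock) lands at roughly the 4.6 TFLOPs quoted above.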
The GTX 970M is even smaller, with a CUDA core count of 1280 and peak performance rated at 2.365 TFLOPs. Also notice that the memory bus width has shrunk from 256-bit to 192-bit for this part.
As is typically the case with mobile GPUs, the memory speed of the GTX 980M and GTX 970M is significantly lower than the desktop parts. While the GeForce GTX 980 and 970 that install in your desktop PC will have memory running at 7.0 GHz, the mobile versions will run at 5.0 GHz in order to conserve power.
From a feature set standpoint, though, the GTX 980M/970M are very much the same as the desktop parts that I looked at in September. You will have support for VXGI (NVIDIA's new custom global illumination technology), Multi-Frame AA and, maybe most interestingly, Dynamic Super Resolution (DSR). DSR allows you to render a game at a higher resolution and then use a custom filter to downsample it back to your panel's native resolution. For mobile gamers using 1080p screens (as our test sample shipped with), this is a good way to utilize the power of your GPU for less demanding games while getting a surprisingly good image at the same time.
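Conceptually, DSR's downsampling step is just a filtered reduction from the render resolution back to the native one. Here is a toy sketch using a plain box filter on a grayscale grid; NVIDIA's actual implementation uses a 13-tap Gaussian-style filter, so this only illustrates the idea:

```python
def downsample_box(pixels, factor):
    """Average each factor x factor block of a 2D grid into one output pixel.
    A crude stand-in for DSR's higher-quality Gaussian downsample filter."""
    height, width = len(pixels), len(pixels[0])
    out = []
    for y in range(0, height, factor):
        row = []
        for x in range(0, width, factor):
            block = [pixels[y + dy][x + dx]
                     for dy in range(factor) for dx in range(factor)]
            row.append(sum(block) / len(block))
        out.append(row)
    return out

# A 4x4 "render" reduced to the panel's 2x2 "native" grid.
frame = [[0, 0, 4, 4],
         [0, 0, 4, 4],
         [8, 8, 2, 2],
         [8, 8, 2, 2]]
print(downsample_box(frame, 2))  # [[0.0, 4.0], [8.0, 2.0]]
```

Rendering at 4x the pixel count and averaging down like this is what buys the extra edge and shading quality on a 1080p panel.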
Subject: Graphics Cards | October 14, 2014 - 06:49 PM | Jeremy Hellstrom
Tagged: GTX 980, nvidia, overclocking
[H]ard|OCP has had more time to spend with their reference GTX 980 and have reached the best stable overclock they could on this board without moving to third-party coolers or serious voltage mods: 1516MHz core and 8GHz VRAM. Retail models will of course offer different results; regardless, it is not too shabby a result. This overclock was not easy to reach, and how they managed it and the lessons they learned along the way make for interesting reading. The performance increases were noticeable; in most cases the overclocked card beat the stock card by 25%. And as this was a reference card, retail cards with enhanced coolers (and the possibility of custom BIOSes that disable NVIDIA's TDP/Power Limit settings) could go even faster. You can bet [H] and PCPer will both be revisiting the overclocking potential of GTX 980s.
"The new NVIDIA GeForce GTX 980 makes overclocking GPUs a ton of fun again. Its extremely high clock rates achieved when you turn the right dials and sliders result in real world gaming advantages. We will compare it to a GeForce GTX 780 Ti and Radeon R9 290X; all overclocked head-to-head."
Here are some more Graphics Card articles from around the web:
- GeForce GTX 980 cards from Gigabyte and Zotac @ The Tech Report
- Palit GTX980 Super Jetstream OC @ Kitguru
- The NVIDIA GTX 980 SLI Review @ Hardware Canucks
- Gainward Phantom GeForce GTX 970 4GB @ eTeknix
- MSI GeForce GTX 980 Gaming 4 GB @ techPowerUp
- NVIDIA GeForce GTX 980M & GTX 970M Preview @ Hardware Canucks
- NVIDIA GTX 970 SLI Performance Review @ Hardware Canucks
- NVIDIA GeForce GTX 980 Dominates With OpenCL On Linux @ Phoronix
- Sapphire R9 270X Toxic Vs NZXT Kraken Cooling @ eTeknix
- Raijintek Morpheus GPU Cooler @ eTeknix
- Arctic Accelero Hybrid II-120 Liquid GPU Cooler @ Kitguru
- AMD Radeon R9 285 Tonga Performance On Linux @ Phoronix
- Gigabyte AMD Radeon R9 285 WindForce OC Video Card Review @ Madshrimps
- HIS R9 290X iPower IceQ X2 Turbo 4GB GDDR5 Video Card Review @ Madshrimps
- Sapphire Radeon R9 285 ITX Compact OC Review @ HiTech Legion
- XFX R9 280 Double Dissipation 3GB @ [H]ard|OCP
Subject: Editorial, Graphics Cards | October 13, 2014 - 10:28 PM | Ryan Shrout
Tagged: video, pcper, nvidia, live, GTX 980, geforce, game stream, borderlands: the pre-sequel, borderlands
UPDATE: If you missed this week's live stream, you can watch the gameplay via this YouTube embed!
I'm sure like the staff at PC Perspective, many of our readers have been obsessively playing the Borderlands games since the first release in 2009. Borderlands 2 arrived in 2012 and once again took hold of the PC gaming mindset. This week marks the release of Borderlands: The Pre-Sequel, which as the name suggests, takes place before the events of Borderlands 2. The Pre-Sequel has playable characters that were previously only known to the gamer as NPCs and that, coupled with the new low-gravity game play style, should entice nearly everyone that loves the first-person, loot-driven series to come back.
To celebrate the release, PC Perspective has partnered with NVIDIA to host a couple of live game streams that will feature some multi-player gaming fun as well some prizes to giveaway to the community. I will be joined by NVIDIA's Andrew Coonrad and Kris Rey to tackle the campaign in a cooperative style while taking a couple of stops to give away some hardware.
Borderlands: The Pre-Sequel Game Stream Powered by NVIDIA
5pm PT / 8pm ET - October 14th
Need a reminder? Join our live mailing list!
Here are some of the prizes we have lined up for those of you that join us for the live stream:
- 1 x NVIDIA SHIELD Tablet (Wi-Fi) - PC Perspective Review
- 1 x NVIDIA GeForce GTX 980 4GB - PC Perspective Review
- 1 x ASUS ROG Swift PG278Q G-Sync Monitor - PC Perspective Review
- 3 x Borderlands: The Pre-Sequel Steam Keys
Holy crap, that's a hell of a list!! How do you win? It's really simple: just tune in and watch the Borderlands: The Pre-Sequel Game Stream Powered by NVIDIA! We'll explain the methods to enter live on the air and anyone can enter from anywhere in the world - no issues at all!
So stop by Tuesday night for some fun, some gaming and the chance to win some hardware!
SLI Setup and Testing Configuration
The idea of multi-GPU gaming is pretty simple on the surface. By adding another GPU to your gaming PC, the game and the driver can divide the workload of the game engine, send half of the work to one GPU and half to the other, and then combine the results on your screen in the form of successive frames. This should make the average frame rate much higher, improve smoothness, and just generally make the gaming experience better. However, implementations of multi-GPU technologies like NVIDIA SLI and AMD CrossFire are much more difficult than the simple explanation above. We have traveled many steps in this journey, and while things have improved in several key areas, there is still plenty of work to be done in others.
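In practice, SLI and CrossFire most commonly divide the work via Alternate Frame Rendering (AFR), where whole frames are handed to GPUs in round-robin fashion. A minimal sketch of that scheduling idea:

```python
def assign_frames_afr(num_frames, num_gpus):
    """Alternate Frame Rendering: frame i is handed to GPU i % num_gpus."""
    return {frame: frame % num_gpus for frame in range(num_frames)}

# With 2-Way SLI, even frames land on GPU 0 and odd frames on GPU 1;
# both GPUs render in parallel, which is where the frame-rate gain comes from.
print(assign_frames_afr(6, 2))  # {0: 0, 1: 1, 2: 0, 3: 1, 4: 0, 5: 1}
```

The catch, as the rest of this article explores, is that delivering those interleaved frames smoothly and on time gets harder as the GPU count grows.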
As it turns out, support for GPUs beyond two seems to be one of those areas ready for improvement.
When the new NVIDIA GeForce GTX 980 launched last month, my initial review of the product included performance results for GTX 980 cards running in a 2-Way SLI configuration, by far the most common setup. As it happens though, another set of reference GeForce GTX 980 cards found their way to our office, and of course we needed to explore the world of 3-Way and 4-Way SLI support and performance on the new Maxwell GPU.
The dirty secret of 3-Way and 4-Way SLI (and CrossFire, for that matter) is that it just doesn't work as well or as smoothly as 2-Way configurations. Much more work is put into standard two-card SLI setups, as those are by far the most common, and it doesn't help that optimizing for 3-4 GPUs is more complex. Some games will scale well, others will scale poorly; hell, some even scale in the other direction.
Let's see what the current state of high-GPU-count SLI is with the GeForce GTX 980, and whether or not you should consider purchasing more than one of these new flagship parts.
Subject: Graphics Cards, Processors | October 8, 2014 - 05:54 PM | Scott Michaud
In an abrupt announcement, Rory Read has stepped down from his positions at AMD, leaving them to Dr. Lisa Su. Until today, Mr. Read served as president and Chief Executive Officer (CEO) of the x86 chip designer and Dr. Su as Chief Operating Officer (COO). Today however, Dr. Su has become president and CEO, and Mr. Read will stay on for a couple of months as an adviser during the transition.
Josh Walrath, editor here at PC Perspective, tweeted that he was "Curious as to why Rory didn't stay on longer? He did some good things there [at AMD], but [it's] very much an unfinished job." I would have to agree. It feels like an odd time, hence the earlier use of the word "abrupt," to have a change in management. AMD restructured just four months ago, which was the occasion for Dr. Su's promotion to COO. In fact, at least as far as I know, no one is slated to fill her former position as COO.
These points suggest that her takeover of the company had been planned for at least several months.
I have been told that timing is everything. I guess this rings true, but only if you truly know the circumstances around any action. Today’s announcement by AMD was odd in its timing, but it was not exactly unexpected. As Scott mentioned above, I was confused by this happening now; I had expected Rory to be in charge for at least another year, if not two. Rory had hinted that he was not planning on being at AMD forever, but was aiming at creating a solid foundation for the company, helping to shore up its finances, and instilling a new culture. While the culture is turning due to pressure from the top as well as some pretty significant personnel cuts, AMD is not yet as nimble as it wants to be.
Rory’s term has seen the return of seasoned veterans like Jim Keller and Raja Koduri. These guys are helping to turn the ship around after some fairly mediocre architectures on the CPU and GPU sides. While Raja had little to do with GCN, we are seeing some aggressive moves there in terms of features that are making their products much more competitive with NVIDIA. Keller has made some very significant changes to the overall roadmap on the CPU side and I think we will see some very solid improvements in design and execution over the next two years.
Lisa Su was brought in by Rory shortly after he was named CEO. Lisa has a pretty significant background in semiconductors and has made a name for herself in her work with IBM and Freescale. Lisa attained all three of her degrees from MIT. This is not unheard of, but it is uncommon to stay in one academic setting when gaining advanced degrees. Having said that, MIT certainly is the top engineering and science school in the nation (if not the world). I’m sure people from RPI, GT, and CalTech might argue that, but it certainly is an impressive school to have on your resume.
Dr. Su has seemingly been groomed for this transition for quite some time now. She went from a VP to COO rather quickly, and is now shouldering the burden of being CEO. Lisa has been on quite a few of the quarterly conference calls and taking questions. She also serves on the Board of Directors at Analog Devices.
I think that Lisa will continue along the same path that Rory set out, but she will likely bring a few new wrinkles due to her experience with semiconductor design and R&D at IBM. We can only hope that this won’t become a Dirk Meyer 2.0 type situation where a successful engineer and CPU architect could not change the course of the company after the disastrous reign of Hector Ruiz. I do not think that this will be the case, as Rory did not leave the mess that Hector did. I also believe that Lisa has more business sense and acumen than Dirk did.
This change, at this time, has provided some instability in the markets when regarding AMD. Some weeks ago AMD was at a near high for the year at around $4.66 per share. Right now it is hovering at $3.28. I was questioning why the stock price was going down, and it seems that my question was answered. One way or the other, rumors of Rory taking off reached investors’ ears and we saw a rapid decline in share price. We have yet to see what Q3 earnings look like now that Rory has rather abruptly left his position, but people are pessimistic as to what will be announced with such a sudden departure.
Quick Performance Comparison
Earlier this week, we posted a brief story that looked at the performance of Middle-earth: Shadow of Mordor on the latest GPUs from both NVIDIA and AMD. Last week also marked the release of the v1.11 patch for Sniper Elite 3 that introduced an integrated benchmark mode as well as support for AMD Mantle.
I decided that this was worth a quick look with the same line up of graphics cards that we used to test Shadow of Mordor. Let's see how the NVIDIA and AMD battle stacks up here.
For those unfamiliar with the Sniper Elite series, it focuses on the impact of an individual sniper on a particular conflict, and Sniper Elite 3 doesn't change that formula much. If you have ever seen video of a bullet slowly going through a body, allowing you to see the bones and muscle of the particular enemy being killed... you've probably been watching the Sniper Elite games.
Gore and such aside, the game is fun and combines sniper action with stealth and puzzles. It's worth a shot if you are the kind of gamer that likes to use the sniper rifles in other FPS titles.
But let's jump straight to performance. You'll notice that in this story we are not using our Frame Rating capture performance metrics. That is a direct result of wanting to compare Mantle to DX11 rendering paths - since we have no way to create an overlay for Mantle, we have resorted to using FRAPS and the integrated benchmark mode in Sniper Elite 3.
Our standard GPU test bed was used with a Core i7-3960X processor, an X79 motherboard, 16GB of DDR3 memory, and the latest drivers for both parties involved. That means we installed Catalyst 14.9 for AMD and 344.16 for NVIDIA. We'll be comparing the GeForce GTX 980 to the Radeon R9 290X, and the GTX 970 to the R9 290. We will also look at SLI/CrossFire scaling at the high end.
If there is one message that I get from NVIDIA's GeForce GTX 900M-series announcement, it is that laptop gaming is a first-class citizen in their product stack. Before even mentioning the products, the company provided relative performance differences between high-end desktops and laptops. Most of the rest of the slide deck is showing feature-parity with the desktop GTX 900-series, and a discussion about battery life.
First, the parts. Two products have been announced: The GeForce GTX 980M and the GeForce GTX 970M. Both are based on the 28nm Maxwell architecture. In terms of shading performance, the GTX 980M has a theoretical maximum of 3.189 TFLOPs, and the GTX 970M is calculated at 2.365 TFLOPs (at base clock). On the desktop, this is very close to the GeForce GTX 770 and the GeForce GTX 760 Ti, respectively. This metric is most useful when you're compute bandwidth-bound, at high resolution with complex shaders.
The full specifications are:
| | GTX 980M | GTX 970M |
|---|---|---|
| CUDA Cores | 1536 | 1280 |
| Peak Compute | 3.189 TFLOPs | 2.365 TFLOPs |
| Memory | Up to 4GB | Up to 3GB |
| Memory Rate | 2500 MHz (5.0 GT/s) | 2500 MHz (5.0 GT/s) |
As for the features, it should be familiar for those paying attention to both desktop 900-series and the laptop 800M-series product launches. From desktop Maxwell, the 900M-series is getting VXGI, Dynamic Super Resolution, and Multi-Frame Sampled AA (MFAA). From the latest generation of Kepler laptops, the new GPUs are getting an updated BatteryBoost technology. From the rest of the GeForce ecosystem, they will also get GeForce Experience, ShadowPlay, and so forth.
For VXGI, DSR, and MFAA, please see Ryan's discussion for the desktop Maxwell launch. Information about these features is basically identical to what was given in September.
BatteryBoost, on the other hand, is a bit different. NVIDIA claims that the biggest change is just raw performance and efficiency, giving you more headroom to throttle. Perhaps more interesting though, is that GeForce Experience will allow separate one-click optimizations for both plugged-in and battery use cases.
The power efficiency demonstrated with the Maxwell GPU in Ryan's original GeForce GTX 980 and GTX 970 review is even more beneficial for the notebook market, where thermal designs are physically constrained. Longer battery life, as well as thinner and lighter gaming notebooks, will see tremendous advantages using a GPU that can run at near peak performance on the maximum power output of an integrated battery. In NVIDIA's presentation, they mention that while notebooks on AC power can draw as much as 230 watts, batteries tend to peak around 100 watts. Given that a full-speed, desktop-class GTX 980 has a TDP of just 165 watts, compared to the 250 watts of a Radeon R9 290X, Maxwell's efficiency translates into notebook GPU performance that will more closely mirror its desktop brethren.
Of course, you probably will not buy your own laptop GPU; rather, you will be buying devices which integrate these. There are currently five designs across four manufacturers that are revealed (see image above). Three contain the GeForce GTX 980M, one has a GTX 970M, and the other has a pair of GTX 970Ms. Prices and availability are not yet announced.
Subject: Graphics Cards | October 6, 2014 - 03:21 PM | Ryan Shrout
Tagged: radeon, R9 290X, r9 290, hawaii, GTX 980, GTX 970, geforce, amd
On Saturday while finishing up the writing on our Shadow of Mordor performance story, I noticed something quite interesting. The prices of AMD's flagship Radeon products had all come down quite a bit. In an obvious response to the release of NVIDIA's new GeForce GTX 980 and GTX 970, the Radeon R9 290X and the Radeon R9 290 have lowered prices in a very aggressive fashion.
UPDATE: A couple of individual cards appear to be showing up as $360 and $369 on Newegg!
Amazon.com is showing some R9 290X cards at $399
For now, Amazon.com is only listing the triple-fan Gigabyte R9 290X Windforce card at $399, though Newegg.com has a couple as well.
Amazon.com also has several R9 290 cards for $299
- Gigabyte R9 290 Windforce - $299
- Powercolor AX R9 290 - $299
- ASUS R9 290 DirectCU II - $329
- More R9 290 Cards on Amazon.com
And again, Newegg.com has some other options for R9 290 cards at these lower prices.
Let's assume that these price drops are going to be permanent which seems likely based on the history of AMD and market adjustments. That shifts the high end GPU market considerably.
| NVIDIA | Price | Price | AMD |
|---|---|---|---|
| GeForce GTX 980 4GB | $549 | $399 | Radeon R9 290X 4GB |
| GeForce GTX 970 4GB | $329 | $299 | Radeon R9 290 4GB |
The battle for that lower-end spot between the GTX 970 and R9 290 is now quite a bit tighter, though NVIDIA's Maxwell architecture still has a positive outlook against the slightly older Hawaii GPU. Our review of the GTX 970 shows that it is indeed faster than the R9 290, though it no longer has the significant cost advantage it did upon release. The GTX 980, however, is a much tougher sell over the Radeon R9 290X for PC gamers concerned with performance per dollar above all else. I would still consider the GTX 980 faster than the R9 290X... but is it $150 faster? That's a nearly 38% price premium NVIDIA now has to contend with.
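Those premium figures are just simple arithmetic on the new street prices from the table above; a quick sketch:

```python
# New street prices after AMD's cuts (from the article's table, in USD).
prices = {"GTX 980": 549, "R9 290X": 399, "GTX 970": 329, "R9 290": 299}

def premium_pct(card_a, card_b):
    """How much more card_a costs relative to card_b, in percent."""
    return (prices[card_a] - prices[card_b]) / prices[card_b] * 100

print(f"{premium_pct('GTX 980', 'R9 290X'):.0f}%")  # 38%
print(f"{premium_pct('GTX 970', 'R9 290'):.0f}%")   # 10%
```

The GTX 970's 10% premium over the R9 290 is far easier to justify on performance grounds than the GTX 980's 38% premium over the R9 290X.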
NVIDIA has proven that it is comfortable staying in this position against AMD, as it maintained it during essentially the entire life of the GTX 680 and GTX 780 product lines. AMD is more willing to make price cuts to pull the Radeon lineup back into the spotlight. Though market share between the competitors didn't change much over the previous 6 months, I'll be very curious to see how these two strategies continue to play out.
In what can most definitely be called the best surprise of the fall game release schedule, the open-world action game set in the Lord of the Rings world, Middle-earth: Shadow of Mordor has been receiving impressive reviews from gamers and the media. (GiantBomb.com has a great look at it if you are new to the title.) What also might be a surprise to some is that the PC version of the game can be quite demanding on even the latest PC hardware, pulling in frame rates only in the low-60s at 2560x1440 with its top quality presets.
Late last week I spent a couple of days playing around with Shadow of Mordor as well as the integrated benchmark found inside the Options menu. I wanted to get an idea of the performance characteristics of the game to determine if we might include this in our full-time game testing suite update we are planning later in the fall. To get some sample information I decided to run through a couple of quality presets with the top two cards from NVIDIA and AMD and compare them.
Without a doubt, the visual style of Shadow of Mordor is stunning – with the game settings cranked up high the world, characters and fighting scenes look and feel amazing. To be clear, in the build up to this release we had really not heard anything from the developer or NVIDIA (there is an NVIDIA splash screen at the beginning) about the title which is out of the ordinary. If you are looking for a game that is both fun to play (I am 4+ hours in myself) and can provide a “wow” factor to show off your PC rig then this is definitely worth picking up.