In our previous article and video, I introduced you to our upcoming testing methodology for evaluating graphics cards based not only on frame rates but also on frame smoothness and the efficiency of those frame rates. I showed off some of the new hardware we are using for this process and detailed how direct capture of graphics card output allows us to find interesting frame and animation anomalies using some Photoshop still frames.
Today we are taking that a step further, looking at a couple of captured videos that demonstrate "stutters" and walking you through, frame by frame, how we can detect, visualize, and even start to measure them.
This video takes a couple of examples of stutter in games, DiRT 3 and Dishonored to be exact, and shows what they look like in real time, at 25% speed and then finally in a much more detailed frame-by-frame analysis.
Obviously these are just a couple of instances of what a stutter can look like, and there are oftentimes less apparent in-game stutters that are even harder to see in video playback. Not to worry - this capture method is capable of catching those issues as well, and we plan on diving into that "micro" level shortly.
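If you want to experiment with the underlying idea yourself, here is a minimal sketch of one way to flag stutters programmatically, assuming you already have a list of per-frame display times in milliseconds (from a FRAPS frametimes log, for example). To be clear, this is an illustration of the concept only, not our capture pipeline, and the 2x threshold is just an arbitrary starting point:

```python
# Minimal stutter-detection sketch (illustration only, not our capture pipeline).
# Assumes you already have per-frame times in milliseconds. A frame is flagged
# when it takes much longer than the recent rolling average.

def find_stutters(frame_times_ms, window=20, threshold=2.0):
    """Return indices of frames that took `threshold`x the rolling average."""
    stutters = []
    for i, ft in enumerate(frame_times_ms):
        recent = frame_times_ms[max(0, i - window):i]
        if recent:
            avg = sum(recent) / len(recent)
            if ft > threshold * avg:
                stutters.append(i)
    return stutters

# Example: a steady ~60 FPS stream with one 50 ms hitch at index 5.
times = [16.7] * 5 + [50.0] + [16.7] * 5
print(find_stutters(times))  # -> [5]
```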
We aren't going to start talking about whose card and which driver is being used yet, and I know that there are still a lot of questions to be answered on this topic. You will be hearing more from us quite soon, and I thank you all for your comments, critiques and support.
Let me know below what you thought of this video and any questions that you might have.
Subject: Graphics Cards | January 12, 2013 - 12:02 PM | Ryan Shrout
Tagged: nvidia, Intel, hd graphics, haswell, geforce, dirt 3, ces 2013, CES, 650m
While wandering around the Intel booth we were offered a demo of the graphics performance of the upcoming Haswell processor, due out in the middle of 2013. One of the big changes on this architecture will be another jump up in graphics performance, even more than we saw going from Sandy Bridge to Ivy Bridge.
On the left is the Intel Haswell system and on the right is a mobile system powered by the NVIDIA GeForce GT 650M. For reference, that discrete GPU has 384 cores and a 128-bit memory bus so we aren't talking about flagship performance here. Haswell GT3 graphics is rumored to have double the performance of the GT2 found in Ivy Bridge based on talks at IDF this past September.
While I am not able to report the benchmark results, I can tell you what I "saw" in my viewing. First, the Haswell graphics loaded the game up more slowly than the NVIDIA card. That isn't a big deal really and could change with driver updates closer to launch, but it has been a lingering problem we have seen with Intel HD graphics over the years.
During the actual benchmark run, both looked great while running at 1080p and High quality presets. I did notice that during part of the loading of the level, the Haswell system seemed to "stutter" a bit and was a little less fluid in the animation. I did NOT notice that during the actual benchmark gameplay though.
I also inquired with Intel's graphics team about how dedicated they were to providing updated graphics drivers for HD graphics users. They were defensive about their current output, saying they have released quarterly drivers since the Sandy Bridge release but that perhaps they should be more vocal about it (I agree). While I tried to get some kind of formal commitment from them going forward to monthly releases with game support added within X number of days, they weren't willing to do that quite yet.
If AMD's and NVIDIA's discrete notebook (and low cost desktop) graphics divisions are to keep an edge, game support and frequent driver updates are going to be the best place to start. Still, seeing Intel continue to push forward on the path of improved processor graphics is great if they can follow through for gamers!
Follow all of our coverage of the show at http://pcper.com/ces!
Subject: General Tech | January 9, 2013 - 12:46 PM | Jeremy Hellstrom
Tagged: nvidia, geforce, graphics drivers, fud
Say what you will about AMD's driver team, but they don't tend to release drivers that allow some users to elevate their privileges on their PCs. That was unfortunately the Christmas present NVIDIA offered Windows users who installed 310.70, similar to the gift they offered Linux users last summer. According to The Register, the new driver no longer contains that security hole, which makes upgrading to the newest driver more important than usual. That is not the only reason to grab the new driver; NVIDIA reports that 310.90 provides 26% faster performance in Call of Duty: Black Ops 2 and up to 18% faster performance in Assassin’s Creed III, as well as improvements for 400, 500 and 600 series cards in most other games.
"The vulnerability allows a remote attacker with a valid domain account to gain super-user access to any desktop or laptop running the vulnerable service," HD Moore, the developer of Metasploit and chief security officer at Rapid7, told SecurityWeek.
"This flaw also allows an attacker (or rogue user) with a low-privileged account to gain super-access to their own system, but the real risk to enterprises is the remote vector," he added."
Here is some more Tech News from around the web:
- The Complete BlackBerry 10 Video Walkthrough: Surprise, It’s Neat @ Gizmodo
- Microsoft Axing Messenger On March 15th @ Slashdot
- Microsoft details first critical patches of 2013 @ The Register
- Boffins hide messages in Skype ‘silence packets’ @ The Register
- 2013 in storage: Flash, file systems and... Is CDMI actually HAPPENING? @ The Register
- CES Dialog 2013 - Day 2 The First Official Day @ Ninjalane
- CES Dialog Day 1 - The Calm Before The Storm @ Ninjalane
- TR's CES digest, part 2: Samsung, Asus, Corsair, Thermaltake, MSI, and Gigabyte @ The Tech Report
- AMD, Intel and Nvidia start 2013 with bold chip statements at CES @ The Inquirer
A change is coming in 2013
If the new year will bring us anything, it looks like it might be the end of using "FPS" as the primary measuring tool for graphics performance on PCs. A long, long time ago we started with simple "time demos" that recorded rendered frames in a game like Quake and then played them back as quickly as possible on a test system. The lone result was given as a time, in seconds, which was then converted to an average frame rate using the known total number of frames recorded.
More recently we saw a transition to frame rates over time and the advent of frame time graphs like the ones we have been using in our graphics reviews on PC Perspective. This expanded the amount of data required to get an accurate picture of graphics and gaming performance, but it was indeed more accurate, giving us a clearer image of how GPUs (and CPUs and systems for that matter) performed in games.
And even though the idea of frame times has been around just as long, not many people were interested in getting into that level of detail until this past year. A frame time is the amount of time each frame takes to render, usually listed in milliseconds, and can range from 5ms to 50ms depending on performance. For reference, 120 FPS equates to an average of 8.3ms, 60 FPS is 16.7ms and 30 FPS is 33.3ms. But rather than averaging those out over each second of time, what if you looked at each frame individually?
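The conversion between the two is simple arithmetic; here is a quick sketch with the figures above as a sanity check (the helper names are just for illustration):

```python
# Frame rate <-> frame time conversions used throughout this discussion.

def frame_time_ms(fps):
    """Average milliseconds per frame at a given frame rate."""
    return 1000.0 / fps

def timedemo_avg_fps(total_frames, total_seconds):
    """How a classic timedemo result becomes an average frame rate."""
    return total_frames / total_seconds

for fps in (120, 60, 30):
    print(f"{fps} FPS -> {frame_time_ms(fps):.1f} ms per frame")
# 120 FPS -> 8.3 ms, 60 FPS -> 16.7 ms, 30 FPS -> 33.3 ms
```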
Scott over at Tech Report started looking at individual frame times this past year and found some interesting results. I encourage all of our readers to follow up on what he has been doing as I think you'll find it incredibly educational and interesting.
Through emails and tweets, many PC Perspective readers have been asking for our take on it, why we weren't testing graphics cards in the same fashion yet, etc. I've stayed quiet about it simply because we were working on quite a few different angles on our side and I wasn't ready to share results. I am still not ready to share the bulk of our information yet, but I am ready to start the discussion, and I hope our community finds it compelling and offers some feedback.
At the heart of our unique GPU testing method is this card, a high-end dual-link DVI capture card capable of handling 2560x1600 resolutions at 60 Hz. Essentially this card will act as a monitor to our GPU test bed and allow us to capture the actual display output that reaches the gamer's eyes. This method is the best possible way to measure frame rates, frame times, stutter, runts, smoothness, and any other graphics-related metrics.
Using that recorded footage, which sometimes reaches 400 MB/s of consistent writes at high resolutions, we can then analyze the frames one by one, albeit with the help of some additional software. There are a lot of details that I am glossing over, including the need for perfectly synced frame rates and absolutely zero dropped frames in the recording and analysis, but trust me when I say we have been spending a lot of time on this.
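To give a sense of where a figure like 400 MB/s comes from, here is the back-of-the-envelope math for uncompressed capture. The 24-bit color assumption is mine; actual rates depend on the pixel format and any compression the capture software applies:

```python
# Back-of-the-envelope capture bandwidth, assuming uncompressed 24-bit color.

def capture_mb_per_sec(width, height, fps, bytes_per_pixel=3):
    """Raw bytes per second the capture system must write, in MB/s."""
    return width * height * bytes_per_pixel * fps / 1e6

print(capture_mb_per_sec(1920, 1080, 60))   # ~373 MB/s at 1080p60
print(capture_mb_per_sec(2560, 1600, 60))   # ~737 MB/s at 2560x1600 @ 60 Hz
```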
Subject: General Tech | October 4, 2012 - 10:08 PM | Tim Verry
Tagged: nvidia, kepler, gtx 650ti, gpu, geforce
Earlier this year, specifications for an as-yet-unreleased GTX 650 Ti graphics card from NVIDIA leaked. At the time, the rumors indicated that the GTX 650 Ti would have hardware closer to the GTX 650 than the GTX 660 but still be based on the GK106 Kepler chip. It would have a 128-bit memory interface, 48 texture units, and 576 CUDA cores in 1.5 GPCs (3 SMX units). And to top it off, it had a rumored price of around $170! Not exactly a bargain.
Well, as the launch gets closer, more details are being leaked, and this time around the rumored information indicates that the GTX 650 Ti will be closer in performance to the GTX 660 and cost around $140-$150. That certainly sounds better!
The new rumors indicate that the reference GTX 650 Ti will have 768 CUDA cores and 64 texture units across four SMX units, which means it has two full GPCs (so it is only missing the half GPC that you get with the GTX 660). As a point of reference, the GTX 660 – which NVIDIA swears is the full GK106 chip – has five SMX units in two and a half GPCs.
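Those core and texture unit counts line up neatly with the SMX count, since each Kepler SMX carries 192 CUDA cores and 16 texture units. A quick sanity check:

```python
# Kepler per-SMX resources: 192 CUDA cores and 16 texture units per SMX.
CORES_PER_SMX = 192
TMUS_PER_SMX = 16

for name, smx_count in [("GTX 650 Ti (rumored)", 4), ("GTX 660", 5)]:
    cores = smx_count * CORES_PER_SMX
    tmus = smx_count * TMUS_PER_SMX
    print(f"{name}: {smx_count} SMX -> {cores} cores, {tmus} TMUs")
# GTX 650 Ti (rumored): 4 SMX -> 768 cores, 64 TMUs (matching the leak)
# GTX 660: 5 SMX -> 960 cores, 80 TMUs
```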
The following image shows the layout of the GTX 660. The GTX 650 Ti will have the GPC on the far right disabled. Previous rumors suggested that the entire middle GPC would be turned off, so the new rumors are definitely looking more promising in terms of potential performance.
Specifically marked GK106-220 on the die, the GTX 650 Ti is based on the same GK106 Kepler chip as the GTX 660, but with some features disabled. The GPU is reportedly clocked at 925MHz, and it does not support NVIDIA's GPU Boost technology.
Memory performance will take a large hit compared to the full GK106 chip. The GTX 650 Ti will feature 1GB of GDDR5 memory clocked at 1350MHz on a 128-bit memory interface. That amounts to approximately 86.4 GB/s bandwidth, which is slightly over half of the GTX 660's 144.2 GB/s bandwidth. Also, it's just barely over the 80 GB/s bandwidth of the GTX 650 (which makes sense, considering they are both using 128-bit interfaces).
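For anyone curious how those bandwidth figures are derived: GDDR5 moves data at four times its command clock, so peak bandwidth is that effective rate multiplied by the bus width in bytes. A quick sketch, using the clock quoted in this post (the GTX 660 and GTX 650 entries assume their reference memory clocks):

```python
# GDDR5 bandwidth: effective data rate (4x the command clock) times bus width.

def gddr5_bandwidth_gb_s(clock_mhz, bus_width_bits):
    """Peak memory bandwidth in GB/s for a GDDR5 configuration."""
    effective_gt_s = clock_mhz * 4 / 1000   # giga-transfers per second
    return effective_gt_s * bus_width_bits / 8

print(gddr5_bandwidth_gb_s(1350, 128))  # 86.4   -- GTX 650 Ti (rumored)
print(gddr5_bandwidth_gb_s(1502, 192))  # ~144.2 -- GTX 660
print(gddr5_bandwidth_gb_s(1250, 128))  # 80.0   -- GTX 650
```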
The latest rumors indicate the GTX 650 Ti will be priced at around $140, with custom cards such as the recently leaked Galaxy GTX 650 Ti GC on Newegg costing more ($149). These new leaked specifications have more weight than the previous rumors since they have come from multiple leaks from multiple places, so I am hoping that these new rumors are the real deal. If so, the GTX 650 Ti becomes a much better value than it was rumored to be before!
You can find more photos of a leaked GTX 650 Ti over at Chiphell.
PhysX Settings Comparison
Borderlands 2 is a hell of a game; we actually ran a 4+ hour live event on launch day to celebrate its release and played it after our podcast that week as well. When big PC releases occur we usually like to take a look at performance of the game on a few graphics cards to see how NVIDIA and AMD cards stack up. Interestingly, for this title, PhysX technology was brought up again, and NVIDIA was widely pushing the game as a great example of its GPU-accelerated physics engine in action.
What you may find unique in Borderlands 2 is that the game actually allows you to enable PhysX features at Low, Medium and High settings with either NVIDIA or AMD Radeon graphics cards installed in your system. In past titles, like Batman: Arkham City and Mafia II, PhysX could only be enabled (or at least only at higher settings) if you had an NVIDIA card. Many gamers that used AMD cards saw this as a slight, and we tended to agree. But since we could enable it with a Radeon card installed, we were curious to see what the results would be.
Of course, don't expect the PhysX effects to be able to utilize the Radeon GPU for acceleration...
Borderlands 2 PhysX Settings Comparison
The first thing we wanted to learn was just how much difference you would see by moving from Low (the lowest setting, there is no "off") to Medium and then to High. The effects were identical on both AMD and NVIDIA cards and we made a short video here to demonstrate the changes in settings.
Subject: Graphics Cards | September 13, 2012 - 05:09 PM | Jeremy Hellstrom
Tagged: nvidia, msi, kepler, gtx 660, gk106, geforce, evga, factory overclocked
As those of you who have already read the post below this one know, ASUS decided to create a DirectCU II model for their GTX 660, with the famous heatpipe-bearing heatsink. They have already overclocked the GPU, and the card comes with tools to allow you to push it even further if you take the time to get to know your card and what it can manage. Check the full press release below.
Fremont, CA (September 13, 2012) - ASUS is excited to release the ASUS GeForce GTX 660 DirectCU II series featuring the Standard, OC and TOP editions. Utilizing the latest 28nm NVIDIA Kepler graphics architecture, the OC and TOP cards deliver a factory-overclock while all three cards feature ASUS exclusive DirectCU thermal design and GPU Tweak tuning software to deliver a quieter, cooler, faster, and more immersive gameplay experience. The ASUS GeForce GTX 660 DirectCU II series set a new benchmark for exceptional performance and power efficiency in a highly affordable graphics card. The ASUS GeForce GTX 660 DirectCU II is perfect for gamers looking to upgrade from last-generation graphics technology while retaining ASUS’ class-leading cooling and acoustic performance.
Superior Design and Software for the Best Gaming Experience

ASUS equips the GeForce GTX 660 DirectCU II series with 2GB of GDDR5 memory clocked up to 6108MHz. The TOP edition features a blistering GPU core boost clock of 1137MHz, 104MHz faster than reference designs, while the OC edition arrives with a factory-set GPU core boost speed of 1085MHz. Exclusive ASUS DIGI+ VRM digital power delivery and user-friendly GPU Tweak tuning software allow all cards to easily overclock beyond factory-set speeds, offering enhanced performance in your favorite game or compute-intensive application.
The ASUS GeForce GTX 660 DirectCU II series features exclusive DirectCU technology. The custom designed cooler uses direct contact copper heatpipes for faster heat transduction and up to 20% lower normal operating temperatures than reference designs. The optimized fans are able to operate at lower speeds, providing a much quieter gaming or computing environment. For enhanced stability, energy efficiency, and overclocking margins, the cards feature DIGI+ VRM digital power delivery plus a class-leading six-phase Super Alloy Power design for the capacitors, chokes, and MOSFETs, meant to extend product lifespan and durability while operating noise-free even under heavy workloads.
ASUS once again includes the award winning GPU Tweak tuning suite in the box. Overclocking-inclined enthusiasts or gamers can boost clock speeds, set power targets, and configure fan operating parameters and policies; all this and more is accessible in the user-friendly interface. GPU Tweak offers built-in safe guards to ensure all modifications are safe, maintaining optimal stability and card reliability.
Subject: Graphics Cards | September 13, 2012 - 04:49 PM | Jeremy Hellstrom
Tagged: nvidia, msi, kepler, gtx 660, gk106, geforce, evga
The non-Ti version of the GTX 660 has arrived on test benches and at retailers, with even the heavily overclocked cards available at $230, like EVGA's Superclocked model or MSI's OC'd card once you count the MIR. That price places it right in between the HD 7850 and 7870, and ~$70 less than the GTX 660 Ti, while the performance is mostly comparable to a stock HD 7870, though the OC versions can pull ahead.
[H]ard|OCP received ASUS' version of the card, a DirectCU II based version with the distinctive heatpipes. ASUS overclocked the card to a 1072MHz base clock and 1137MHz GPU Boost, and [H] plans to see just how much further the frequencies can be pushed at a later date. Their final word on this card for those looking to upgrade: for those of you with "a GTX 560 Ti, and even the GTX 570, the GTX 660 is an upgrade".
"NVIDIA is launching the new GeForce GTX 660 GPU, codenamed GK106. We have a retail ASUS GeForce GTX 660 DirectCU II custom video card fully evaluated against a plethora of competition at this price point. This brand new GPU aims for a price point just under the GTX 660 Ti but still promises to deliver exceptional 1080p gaming with AA."
Here are some more Graphics Card articles from around the web:
- Nvidia's GeForce GTX 660 @ The Tech Report
- ASUS GTX 660 Direct CU II TOP Review @ OCC
- NVIDIA GeForce GTX 660 Launch Review @ Neoseeker
- EVGA GeForce GTX 660 SC (SuperClocked) 2GB @ Bjorn3D
- Nvidia GeForce GTX 660 @ Hardware.info
- NVIDIA GeForce GTX 660 Review @ Hi Tech Legion
- The NVIDIA GeForce GTX 660 Review: GK106 Fills Out The Kepler Family @ AnandTech
- MSI GeForce GTX 660 Twin Frozr 2GB OC @ Tweaktown
- Gigabyte GeForce GTX 660 @ Legion Hardware
- Gigabyte GTX 660 Overclock 2GB Graphics Card Review @ eTeknix
- EVGA GeForce GTX 660 2GB SuperClocked @ Benchmark Reviews
- MSI GTX 660 OC Edition Twin Frozr @ Kitguru
- Nvidia GeForce GTX 660 @ Techspot
- Gigabyte GTX 660 OC Video Card Review @ Ninjalane
- MSI GTX 660 Twin Frozr 2GB OC @ LanOC Reviews
- NVIDIA GeForce GTX 660 Overclocked Graphics Card Review (EVGA/ZOTAC) @ HardwareHeaven
- EVGA GTX 660 Superclocked 2GB @ LanOC Reviews
- NVIDIA GeForce GTX 660 Review @ Hardware Canucks
- ASUS, KFA2 and MSI GeForce GTX 660 reviews with 2-way SLI @ Guru of 3D
- MSI GeForce GTX 660 Twin Frozr 2 GB @ techPowerUp
- ZOTAC GeForce GTX 660 2 GB @ techPowerUp
- Gigabyte GTX 660 Windforce OC 2 GB @ techPowerUp
- ASUS GeForce GTX 660 Direct Cu II 2 GB @ techPowerUp
- NVIDIA GeForce GTX 660 Video Card Review w/ MSI and EVGA @ Legit Reviews
- Six GeForce GTX 660 Ti graphics cards: ASUS, EVGA, Gigabyte, MSI and Zotac @ Hardware.info
- Gigabyte GTX 660 Ti OC Windforce @ Kitguru
- AFOX Radeon HD 7850 (Single Slot), MSI R7870 Hawk Graphics Cards @ iXBT Labs
- Inno3D GTX 680 iChill Black Series Accelero Hybrid 4GB Overclocked @ Tweaktown
- MSI Geforce GTX 670 Power Edition @ Rbmods
- i3DSpeed, August 2012 @ iXBT Labs
- Arctic Accelero Xtreme 7970 VGA Cooler Review @ eTeknix
- Sapphire Radeon HD 7970 Vapor-X OC 6GB Graphics Card Review @ eTeknix
- Sapphire FleX HD 7770 GHz Edition @ LanOC Reviews
GK106 Completes the Circle
The release of the various Kepler-based graphics cards has been interesting to watch from the outside. Though NVIDIA certainly spiced things up with the release of the GeForce GTX 680 2GB card back in March, and then with the dual-GPU GTX 690 4GB graphics card, for quite some time NVIDIA was content to leave the sub-$400 markets to AMD's Radeon HD 7000 cards and, of course, NVIDIA's own GTX 500-series.
But gamers and enthusiasts are fickle beings - knowing that the GTX 660 was always JUST around the corner, many of you were simply not willing to buy into the GTX 560s floating around Newegg and other online retailers. AMD benefited greatly from this lack of competition and only recently has NVIDIA started to bring their latest generation of cards to the price points MOST gamers are truly interested in.
Today we are going to take a look at the brand new GeForce GTX 660, a graphics card with a 2GB frame buffer that will have a starting MSRP of $229. Coming in $70 under the GTX 660 Ti card released just last month, does the more vanilla GTX 660 have what it takes to replicate the success of the GTX 460?
The GK106 GPU and GeForce GTX 660 2GB
NVIDIA's GK104 GPU is used in the GeForce GTX 690, GTX 680, GTX 670 and even the GTX 660 Ti. We saw the much smaller GK107 GPU with the GT 640 card, a release I was not impressed with at all. With the GTX 660 Ti starting at $299 and the GT 640 at $120, there was a WIDE gap in NVIDIA's 600-series lineup that the GTX 660 addresses with an entirely new GPU, the GK106.
First, let's take a quick look at the reference card from NVIDIA for the GeForce GTX 660 2GB - it doesn't differ much from the reference cards for the GTX 660 Ti and even the GTX 670.
The GeForce GTX 660 uses the same half-length PCB that we saw for the first time with the GTX 670 and this will allow retail partners a lot of flexibility with their card designs.
Subject: Graphics Cards | September 13, 2012 - 09:38 AM | Tim Verry
Tagged: nvidia, kepler, gtx 650, graphics cards, geforce
Ah, Kepler: the (originally intended as) midrange graphics card architecture that took the world by storm and allowed NVIDIA to take it from the dual-GPU GeForce GTX 690 all the way down to budget discrete HTPC cards. So far this year we have seen the company push Kepler to its limits by adding GPU Boost and placing it in the GTX 690 and GTX 680. Those cards were great, but commanded a price premium that most gamers could not afford. Enter the GTX 670 and GTX 660 Ti earlier this year, and Kepler started to become an attractive option for gamers wanting a high-end single GPU system without breaking the bank. Those cards, at $399 and $299 respectively, were a step in the right direction toward making the Kepler architecture available to everyone, but they were still a bit pricey if you were on a tighter budget for your gaming rig (or needed to factor in the Significant Other Approval Process™).
Well, Kepler has now been on the market for about six months, and I’m excited to (finally) announce that NVIDIA is launching its first Kepler-based budget gaming card! The NVIDIA GeForce GTX 650 brings Kepler down to the ever-attractive $109 price point and is even capable of playing new games at 1080p above 30FPS. Not bad for such a cheap card!
With the GTX 650, you are making some sacrifices as far as hardware goes, but things are not all bad. The card features a mere 384 CUDA cores and 1GB of GDDR5 memory on a 128-bit bus. This is a huge decrease in hardware compared to the GTX 660 Ti’s 1344 CUDA cores and 2GB memory on a 192-bit bus – but that card is also $200 more. And while the GTX 650 runs the memory at 5Gbps, NVIDIA was not shy about pumping up the GPU core clockspeed. No boost functionality was mentioned, but the base clockspeed is a respectable 1058 MHz. Even better, the card only requires a single 6-pin PCI-E power connector and has a TDP of 64W (less than half of its higher-end GeForce brethren).
The following chart compares the specifications of the new GeForce GTX 650 through the GTX 670 graphics cards.
Click on the above chart for a larger image.
The really important question is how well it handles games, and NVIDIA showed off several slides with claimed performance numbers. Taking these numbers with a grain of salt, as they come from the same company that built the hardware, the GTX 650 looks like a capable GPU for the price. The company compared it to both its own GTS 450 (Fermi) and AMD’s 7750 graphics card. Naturally, it was shown in a good light in both comparisons, but nothing egregious.
NVIDIA is claiming an 8X performance increase versus the old 9500 GT, and an approximate 20% speed increase versus the GTS 450. And improvements to the hardware itself have allowed NVIDIA to improve performance while requiring less power; the company claims the GTX 650 uses up to half the power of its Fermi predecessor.
The comparison between the GTX 650 and AMD Radeon HD 7750 is harder to gauge, though the 7750 is priced competitively around the GTX 650’s $109 MSRP so it will be interesting to see how that shakes out. NVIDIA is claiming anywhere from 1.08 to 1.34 times the performance of the 7750 in a number of games, shown in the chart below.
If you have been eyeing a 7750, the GTX 650 looks like it might be the better option, assuming reviewers are able to replicate NVIDIA’s results.
Keep in mind, these are NVIDIA's numbers and not from our reviews.
Unfortunately, NVIDIA did not benchmark the GTS 450 against the GTX 650 in the games. Rather, they compared it to the 9500 GT to show the upgrade potential for anyone still holding onto the older hardware (pushing the fact that you can run DirectX 11 at 1080p if you upgrade). Still, the results for the 650 are interesting by themselves. In MechWarrior Online, World of Warcraft, and Max Payne 3, the budget GPU managed at least 40 FPS at 1920x1080 resolution in DirectX 11 mode. Nothing groundbreaking, for sure, but fairly respectable for the price. Assuming it can pull a minimum of 30 FPS in other recent games, this will be a good option for DIY builders who want to get started with PC gaming on a budget.
All in all, the NVIDIA GeForce GTX 650 looks to be a decent card and finally rounds out the Kepler architecture. At this price point, NVIDIA can finally give every gamer a Kepler option instead of continuing to rely on older cards to answer AMD at the lower price points. I’m interested to see how AMD answers this, and specifically if gamers will see more price cuts on the AMD side.
If you have not already, I strongly recommend you give our previous Kepler GPU reviews a read through for a look at what NVIDIA’s latest architecture is all about.
PC Perspective Kepler-based GTX Graphics Card Reviews:
- GeForce GTX 690: Dual GK104 Kepler Greatness
- GeForce GTX 680: Kepler is ready for retail
- GeForce GTX 670: Kepler for $399
- GeForce GTX 660 Ti: Another GK104 Option for $299
- GeForce GTX 660: GK106 Completes the Circle