GeForce 314.07 WHQL Drivers: Optimized For Crysis 3, Assassin's Creed 3 & Far Cry 3

Subject: Graphics Cards | February 19, 2013 - 01:50 PM |
Tagged: nvidia, graphics drivers, geforce, 314.07

Just in time for the arrival of the Titan previews comes the new WHQL 314.07 GeForce driver from NVIDIA.  Instead of offering a list of blanket improvements and average frame rate increases, NVIDIA has assembled a set of charts showing the performance differences between this driver and the previous one for their four top GPUs in both SLI and single card setups.  They also attempt to answer the question "Will it play Crysis 3?" with the chart below, showing the performance you can expect at Very High settings, 1080p resolution and 4x AA.  They also provide a link to their GeForce Experience tool, which will optimize your Crysis 3 settings for whatever NVIDIA card(s) you happen to be using.  Upgrade now, as the new driver seems to offer improvements across the board.

nvidia-geforce-314-07-whql-drivers-crysis-3-performance-chart-650.png

 

The new GeForce 314.07 WHQL driver is now available to download. An essential update for gamers jumping into Crysis 3 this week, 314.07 WHQL improves single-GPU and multi-GPU performance in Crytek’s sci-fi shooter by up to 65%.

Other highlights include sizeable SLI and single-GPU performance gains of up to 27% in Assassin’s Creed III, 19% in Civilization V, 14% in Call of Duty: Black Ops 2, 14% in DiRT 3, 11% in Just Cause 2, 10% in Deus Ex: Human Revolution, 10% in F1 2012, and 10% in Far Cry 3.

Rounding out the release is an ‘Excellent’ 3D Vision profile for Crysis 3, an SLI profile for Ninja Theory’s DmC: Devil May Cry, and an updated SLI profile for the free-to-play, third-person co-op shooter, Warframe.

You can download the GeForce 314.07 WHQL drivers with one click from the GeForce.com homepage; Windows XP, Windows 7 and Windows 8 packages are available for desktop systems, and for notebooks there are Windows 7 and Windows 8 downloads that cover all non-legacy products.

Source: NVIDIA
Author:
Manufacturer: NVIDIA

GK110 Makes Its Way to Gamers


Back in May of 2012 NVIDIA released information on GK110, a new GPU the company was targeting at HPC (high performance computing) and GPGPU markets eager for more processing power.  Almost immediately the questions began about when we might see the GK110 part make its way to consumers and gamers, in addition to finding a home in supercomputers like Cray's Titan system, capable of 17.59 petaflops.

 

Video Loading...

Watch this same video on our YouTube channel

02.jpg

Nine months later we finally have an answer - the GeForce GTX TITAN is a consumer graphics card built around the GK110 GPU.  With 2,688 CUDA cores, 7.1 billion transistors and a die size of 551 mm^2, the GTX TITAN is a big step forward (both in performance and physical size).

specs3.jpg

From a pure specifications standpoint the GeForce GTX TITAN based on GK110 is a powerhouse.  While the full GPU sports a total of 15 SMX units, TITAN has 14 of them enabled for a total of 2,688 shaders and 224 texture units.  Clock speeds on TITAN are a bit lower than on GK104, with a base clock of 836 MHz and a Boost Clock of 876 MHz.  As we will show you later in this article, though, the GPU Boost technology has been updated and changed quite a bit from what we first saw with the GTX 680.

The bump in memory bus width is also key: feeding that many CUDA cores definitely required a boost from 256-bit to 384-bit, a 50% increase.  Even better, the memory is still running at 6.0 GHz, resulting in total memory bandwidth of 288.4 GB/s.
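For anyone curious where that 288.4 GB/s figure comes from, here is a quick back-of-the-envelope sketch; the 6.008 Gbps effective data rate used for the exact figure is the rate NVIDIA actually quotes, a hair above the nominal 6.0 GHz.

```python
# Peak GDDR5 bandwidth = bus width (in bytes) x effective data rate per pin (Gbps).
def memory_bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    return bus_width_bits / 8 * data_rate_gbps

print(memory_bandwidth_gbs(256, 6.008))  # GK104 / GTX 680:  192.256 -> the familiar ~192 GB/s
print(memory_bandwidth_gbs(384, 6.008))  # GK110 / GTX TITAN: 288.384 -> the 288.4 GB/s above
```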

Continue reading our preview of the brand new NVIDIA GeForce GTX TITAN graphics card!!

NVIDIA Joins the Bundle Game: Up to $150 in credit on Free-to-Play games for all GTX buyers

Subject: Graphics Cards | February 11, 2013 - 12:33 PM |
Tagged: world of tanks, planetside 2, nvidia, Hawken, gtx, geforce, bundle

AMD has definitely been winning the "game" of game bundles and bonus content with graphics card purchases, as is evident from the recent Never Settle Reloaded campaign that includes titles like Crysis 3, Bioshock Infinite and Tomb Raider.  I have commented that NVIDIA was falling behind and might even appear to be moving away from a focus on PC gamers, since they hadn't offered any reply over the last year...

After losing a bidding war with AMD over Crysis 3, today NVIDIA is unveiling a bundle campaign that attacks from a different angle; rather than bundling retail games, NVIDIA is working with free-to-play titles.  How do you give gamers bonuses with games that are already free to play?  Credits!  Cold hard cash!

bundle1.png

Starting today, if you pick up any GeForce GTX graphics card you'll be eligible for free in-game credit to use in each of the three free-to-play titles partnering with NVIDIA.  A GTX 650 or GTX 650 Ti will net you $25 in each game for a total bonus of $75, while buying a GTX 660 or higher, all the way up to the GTX 690, results in $50 per game for a total of $150.

Also, after asking NVIDIA about it, we can confirm this is a PER CARD bundle, so if you get an SLI pair of anything you'll get double the credit.  A pair of GeForce GTX 660s for an SLI rig results in $100 per game, $300 total!
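For the arithmetic-minded, the credit works out as follows; a trivial sketch, assuming the per-card terms NVIDIA described to us.

```python
# How the free-to-play credit adds up under the terms described above (per card, three games).
CREDIT_PER_GAME = {"GTX 650 / GTX 650 Ti": 25, "GTX 660 and up": 50}
NUM_GAMES = 3  # Planetside 2, Hawken, World of Tanks

def total_credit(tier: str, num_cards: int = 1) -> int:
    return CREDIT_PER_GAME[tier] * NUM_GAMES * num_cards

print(total_credit("GTX 650 / GTX 650 Ti"))   # $75
print(total_credit("GTX 660 and up"))         # $150
print(total_credit("GTX 660 and up", 2))      # $300 for an SLI pair of GTX 660s
```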

bundle3.png

This is a very interesting approach for NVIDIA to take and I am eager to get feedback from our readers on the differences between AMD's and NVIDIA's bundles.  I have played quite a bit of Planetside 2 and definitely enjoyed it; it is a graphics showcase as well, with huge, expansive levels and hundreds of players per server.  I am less familiar with World of Tanks and Hawken, but they are also extremely popular.

bundle2.png

Leave us your comments below!  Do you think NVIDIA's new GeForce GTX bundle of free-to-play game credits can be successful?

If you are looking for a new GeForce GTX card today and this bundle convinced you to buy, feel free to use the links below. 

Rumor: NVIDIA GK110 based GeForce GPU 'Titan' to be released late February

Subject: Graphics Cards | January 22, 2013 - 02:44 PM |
Tagged: nvidia, geforce, gk110, titan, rumor

A combination of rumors and news pieces found online, along with some recent conversations with partners, indicates that February will see the release of a new super-high-end graphics card from NVIDIA based on the GK110 GPU.  Apparently carrying the name "Titan", according to a report from Sweclockers.com, this new single-GPU card will feature 2688 CUDA cores, compared to the 1536 in the GeForce GTX 680.

gk110.jpg

If true, the name Titan likely refers to the Cray supercomputer of the same name, built using GK110-based Kepler Tesla cards.  Sweclockers.com's sources also quote clocks for this new super-GPU: a 732 MHz core clock and a 5.2 GHz GDDR5 memory clock.  While those numbers are low compared to the 1000+ MHz speeds of the GK104 parts out today, this GPU would have 75% more compute units and presumably additional memory capacity.  The 384-bit memory bus is a 50% increase as well, which would indicate another big jump in performance over current cards.  The CUDA core count of 2688 is indicative of a GK110 GPU with a single SMX disabled.
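As a quick sanity check on those claims (a rough sketch, treating the rumored figures as given), the numbers do line up:

```python
# Checking the percentages above against the rumored specs (none of this is confirmed by NVIDIA).
gtx680_cores, rumored_titan_cores = 1536, 2688
gtx680_bus_bits, rumored_titan_bus_bits = 256, 384
rumored_mem_rate_gbps = 5.2

print(rumored_titan_cores / gtx680_cores - 1)              # 0.75 -> 75% more CUDA cores
print(rumored_titan_bus_bits / gtx680_bus_bits - 1)        # 0.50 -> 50% wider memory bus
print(rumored_titan_bus_bits / 8 * rumored_mem_rate_gbps)  # 249.6 GB/s peak bandwidth, if the rumor holds
```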

teslagpu2.jpg

The NVIDIA Titan card will apparently be the replacement for the GeForce GTX 690, a dual-GK104 card launched in May of last year.  The performance estimate for the Titan is approximately 85% of that of the GTX 690, and if the rumors are right it will carry an $899 price tag.

Based on other conversations I have had recently, you should only expect the same partners that were able to sell the GTX 690 to stock this new GK110-based part.  There won't be any modifications and you will see very little differentiation between vendors' branding on it.  If dates are to be believed, a February 25th launch (or at least that week) is the current target.

Author:
Manufacturer: PC Perspective

Another update

In our previous article and video, I introduced you to our upcoming testing methodology for evaluating graphics cards based not only on frame rates but on frame smoothness and the efficiency of those frame rates.  I showed off some of the new hardware we are using for this process and detailed how direct capture of the graphics card's output allows us to find interesting frame and animation anomalies using some Photoshop still frames.

d31.jpg

Today we are taking that a step further and looking at a couple of captured videos that demonstrate a "stutter" and walking you through, frame by frame, how we can detect, visualize and even start to measure them.

dis1.jpg

This video takes a couple of examples of stutter in games, DiRT 3 and Dishonored to be exact, and shows what they look like in real time, at 25% speed and then finally in a much more detailed frame-by-frame analysis.

 

Video Loading...

 

Obviously these are just a couple of instances of what a stutter is, and there are often less apparent in-game stutters that are even harder to see in video playback.  Not to worry: this capture method is capable of seeing those issues too, and we plan on diving into the "micro" level shortly.
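To give a sense of how a stutter jumps out once you have per-frame numbers, here is a minimal sketch of the idea (our own simplified illustration, not the actual analysis tooling we are building): any frame that takes far longer than its neighbors to be displayed gets flagged.

```python
# Simplified stutter detection from per-frame display times (in milliseconds).
def find_stutters(frame_times_ms, threshold=2.0):
    """Flag frames whose display time exceeds `threshold` x the median frame time."""
    ordered = sorted(frame_times_ms)
    median = ordered[len(ordered) // 2]
    return [(i, t) for i, t in enumerate(frame_times_ms) if t > threshold * median]

# A mostly smooth ~60 FPS sequence (~16.7 ms per frame) with one obvious hitch at frame 4.
sample = [16.6, 16.8, 16.5, 16.7, 48.3, 16.9, 16.6, 16.7]
print(find_stutters(sample))  # [(4, 48.3)]
```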

We aren't going to start talking about whose card and what driver is being used yet and I know that there are still a lot of questions to be answered on this topic.  You will be hearing more quite soon from us and I thank you all for your comments, critiques and support.

Let me know below what you thought of this video and any questions that you might have. 

CES 2013: Intel Haswell HD Graphics Compared to GeForce GT 650M

Subject: Graphics Cards | January 12, 2013 - 12:02 PM |
Tagged: nvidia, Intel, hd graphics, haswell, geforce, dirt 3, ces 2013, CES, 650m

While wandering around the Intel booth we were offered a demo of the graphics performance of the upcoming Haswell processor, due out in the middle of 2013.  One of the big changes on this architecture will be another jump up in graphics performance, even more than we saw going from Sandy Bridge to Ivy Bridge. 

haswell1.jpg

On the left is the Intel Haswell system and on the right is a mobile system powered by the NVIDIA GeForce GT 650M.  For reference, that discrete GPU has 384 cores and a 128-bit memory bus so we aren't talking about flagship performance here.  Haswell GT3 graphics is rumored to have double the performance of the GT2 found in Ivy Bridge based on talks at IDF this past September. 

While I am not able to report the benchmark results, I can tell you what I "saw" in my viewing.  First, the Haswell graphics loaded the game more slowly than the NVIDIA card.  That isn't a big deal really and could change with driver updates closer to launch, but it has been a lingering problem we have seen with Intel HD graphics over the years. 

haswell2.jpg

During the actual benchmark run, both looked great running at 1080p with the High quality preset.  I did notice that during part of the level load the Haswell system seemed to "stutter" a bit and was a little less fluid in its animation.  I did NOT notice that during the actual benchmark gameplay though. 

I also asked Intel's graphics team how dedicated they are to providing updated graphics drivers for HD Graphics users.  They were defensive about their current output, saying they have released quarterly drivers since the Sandy Bridge launch but that perhaps they should be more vocal about it (I agree).  While I tried to get some kind of formal commitment going forward to monthly releases, with game support added within X number of days, they weren't willing to do that quite yet. 

If the discrete notebook (and low cost desktop) graphics divisions at AMD and NVIDIA want to maintain an edge, game support and frequent driver updates are going to be the best place to start.  Still, seeing Intel continue to push forward on the path of improved processor graphics is great, if they can follow through for gamers!


NVIDIA's 310.90 Driver - more performance, fewer vulnerabilities

Subject: General Tech | January 9, 2013 - 12:46 PM |
Tagged: nvidia, geforce, graphics drivers, fud

Say what you will about AMD's driver team, but they don't tend to release drivers that allow users to elevate their privileges on their PCs.  That was unfortunately the Christmas present NVIDIA offered Windows users who installed 310.70, similar to the gift they offered Linux users last summer.  According to The Register, the new driver no longer contains that security hole, which makes upgrading more important than usual.  That is not the only reason to grab the new driver: NVIDIA reports that 310.90 provides 26% faster performance in Call of Duty: Black Ops 2 and up to 18% faster performance in Assassin’s Creed III, as well as improvements for 400, 500 and 600 series cards in most other games. 

logo_geforce.png

"The vulnerability allows a remote attacker with a valid domain account to gain super-user access to any desktop or laptop running the vulnerable service," HD Moore, the developer of Metasploit and chief security officer at Rapid7, told SecurityWeek.

"This flaw also allows an attacker (or rogue user) with a low-privileged account to gain super-access to their own system, but the real risk to enterprises is the remote vector," he added."


Source: The Register
Author:
Manufacturer: PC Perspective

A change is coming in 2013

If the new year brings us anything, it looks like it might be the end of using "FPS" as the primary measuring tool for graphics performance on PCs.  A long, long time ago we started with simple "time demos" that recorded rendered frames in a game like Quake and then played them back as quickly as possible on a test system.  The lone result was a time, in seconds, which was then converted to an average frame rate since the total number of frames recorded was known from the start.

More recently we saw a transition to frame rates over time and the advent of frame time graphs like the ones we have been using in our graphics reviews on PC Perspective. This expanded the amount of data required to get an accurate picture of graphics and gaming performance, but it was indeed more accurate, giving us a clearer image of how GPUs (and CPUs and systems, for that matter) perform in games.

And even though the idea of frame times has been around just as long, not many people were interested in getting into that level of detail until this past year.  A frame time is the amount of time each frame takes to render, usually listed in milliseconds, and can range from 5ms to 50ms depending on performance.  For reference, 120 FPS equates to an average of 8.3ms per frame, 60 FPS to 16.7ms and 30 FPS to 33.3ms.  But rather than averaging those out over each second of time, what if you looked at each frame individually?
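The conversion is simple enough to keep in your head, but here is a quick sketch of it for reference:

```python
# Frame time is just the reciprocal of the frame rate, scaled to milliseconds.
def frame_time_ms(fps: float) -> float:
    return 1000.0 / fps

for fps in (120, 60, 30):
    print(f"{fps} FPS -> {frame_time_ms(fps):.1f} ms per frame")
# 120 FPS -> 8.3 ms, 60 FPS -> 16.7 ms, 30 FPS -> 33.3 ms
```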

Video Loading...

Scott over at Tech Report started doing that this past year and found some interesting results.  I encourage all of our readers to follow up on what he has been doing as I think you'll find it incredibly educational and interesting. 

Through emails and tweets many PC Perspective readers have been asking for our take on it, why we weren't testing graphics cards in the same fashion yet, etc.  I've stayed quiet about it simply because we were working on quite a few different angles on our side and I wasn't ready to share results.  I am still not ready to share the bulk of our information, but I am ready to start the discussion and I hope our community finds it compelling and offers some feedback.

card.jpg

At the heart of our unique GPU testing method is this card, a high-end dual-link DVI capture card capable of handling 2560x1600 resolutions at 60 Hz.  Essentially this card will act as a monitor to our GPU test bed and allow us to capture the actual display output that reaches the gamer's eyes.  This method is the best possible way to measure frame rates, frame times, stutter, runts, smoothness, and any other graphics-related metrics.

Using that recorded footage, which sometimes reaches 400 MB/s of consistent writes at high resolutions, we can then analyze the frames one by one with the help of some additional software.  There are a lot of details that I am glossing over, including the need for perfectly synced frame rates, having absolutely zero dropped frames in the recording and analysis, etc., but trust me when I say we have been spending a lot of time on this. 
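As one small example of the kind of per-frame check this enables, here is a hedged sketch (ours, not the actual tooling): with a fixed 60 Hz capture, consecutive captured frames that are pixel-identical indicate the game repeated a frame rather than delivering a new one in time.

```python
# Finding repeated frames in a fixed-rate capture: identical consecutive frames mean
# the game did not deliver a new frame within that refresh interval.
import numpy as np

def repeated_frame_indices(frames):
    """frames: a list of equally sized numpy arrays, one per captured frame."""
    return [i for i in range(1, len(frames)) if np.array_equal(frames[i], frames[i - 1])]

# Tiny toy example: four 2x2 "frames"; the third is a repeat of the second.
frames = [np.full((2, 2), v) for v in (0, 1, 1, 2)]
print(repeated_frame_indices(frames))  # [2]
```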

Continue reading our editorial on Frame Rating: A New Graphics Performance Metric.

New Specifications Leak For GTX 650 Ti, Launch Likely Imminent

Subject: General Tech | October 4, 2012 - 10:08 PM |
Tagged: nvidia, kepler, gtx 650ti, gpu, geforce

Earlier this year, specifications for an as-yet-unreleased GTX 650 Ti graphics card from NVIDIA leaked. At the time, the rumors indicated that the GTX 650 Ti would have hardware closer to the GTX 650 than the GTX 660 but still be based on the GK106 Kepler chip. It would have a 128-bit memory interface, 48 texture units, and 576 CUDA cores in 1.5 GPCs (3 SMX units). And to top it off, it had a rumored price of around $170! Not exactly a bargain.

Well, as the launch gets closer more details are being leaked, and this time around the rumored information indicates that the GTX 650 Ti will be closer in performance to the GTX 660 and cost around $140-$150. That certainly sounds better!

inno3d GTX 650Ti.jpg

The new rumors indicate that the reference GTX 650 Ti will have 768 CUDA cores and 64 texture units in four SMX units, which means it has two full GPCs (it is only missing the extra half GPC that you get with the GTX 660). As a point of reference, the GTX 660 (which NVIDIA swears is the full GK106 chip) has five SMX units in two and a half GPCs.

The following image shows the layout of the GTX 660. The GTX 650 Ti will have the GPC on the far right disabled. Previous rumors suggested that the entire middle GPC would be turned off, so the new rumors are definitely looking more promising in terms of potential performance.

GeForce_GTX_660_Block_Diagram_FINAL.png

Specifically marked GK106-220 on the die, the GTX 650 Ti is based on the same GK106 Kepler chip as the GTX 660, but with some features disabled. The GPU is reportedly clocked at 925MHz, and it does not support NVIDIA's GPU Boost technology.

GTX 650Ti.jpg

Memory performance will take a large hit compared to the full GK106 chip. The GTX 650 Ti will feature 1GB of GDDR5 memory clocked at 1350MHz on a 128-bit memory interface. That amounts to approximately 86.4 GB/s bandwidth, which is slightly over half of the GTX 660's 144.2 GB/s bandwidth. Also, it's just barely over the 80 GB/s bandwidth of the GTX 650 (which makes sense, considering they are both using 128-bit interfaces).
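For anyone wanting to check the math, here is a quick sketch of where those figures come from; the GTX 650 and GTX 660 memory clocks are filled in from their published specs.

```python
# GDDR5 transfers data four times per memory clock, so bandwidth =
# (memory clock x 4, as an effective Gbps rate) x bus width in bytes.
def gddr5_bandwidth_gbs(mem_clock_mhz: float, bus_width_bits: int) -> float:
    effective_rate_gbps = mem_clock_mhz * 4 / 1000
    return effective_rate_gbps * bus_width_bits / 8

print(gddr5_bandwidth_gbs(1350, 128))  # GTX 650 Ti (rumored): 86.4 GB/s
print(gddr5_bandwidth_gbs(1250, 128))  # GTX 650:              80.0 GB/s
print(gddr5_bandwidth_gbs(1502, 192))  # GTX 660:             ~144.2 GB/s
```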

new GeForce GTX 650 Ti Specifications Leak.png

The latest rumors indicate the GTX 650 Ti will be priced at around $140, with custom cards such as the recently leaked Galaxy GTX 650 Ti GC on Newegg costing more ($149). These new leaked specifications carry more weight than the previous rumors since they have come from multiple leaks in multiple places, so I am hoping they are the real deal. If so, the GTX 650 Ti becomes a much better value than it was rumored to be before!

Galaxy GTX 650Ti.jpg

You can find more photos of a leaked GTX 650 Ti over at Chiphell.

Source: Chip Hell
Author:
Manufacturer: Various

PhysX Settings Comparison

Borderlands 2 is a hell of a game; we actually ran a 4+ hour live event on launch day to celebrate its release and played it after our podcast that week as well.  When big PC releases occur we usually like to take a look at the game's performance on a few graphics cards to see how NVIDIA and AMD stack up.  Interestingly, for this title PhysX technology was brought up again, and NVIDIA was widely pushing the game as a great example of its GPU-accelerated physics engine in action.

What you may find unique in Borderlands 2 is that the game actually allows you to enable PhysX features at Low, Medium and High settings, with either NVIDIA or AMD Radeon graphics cards installed in your system.  In past titles, like Batman: Arkham City and Mafia II, PhysX could only be enabled (or at least only at the higher settings) if you had an NVIDIA card.  Many gamers using AMD cards saw this as a slight, and we tended to agree.  But since we could enable it with a Radeon card installed, we were curious to see what the results would be.

screenshot-16.jpg

Of course, don't expect the PhysX effects to be able to utilize the Radeon GPU for acceleration...

Borderlands 2 PhysX Settings Comparison

The first thing we wanted to learn was just how much difference you would see by moving from Low (the lowest setting, there is no "off") to Medium and then to High.  The effects were identical on both AMD and NVIDIA cards and we made a short video here to demonstrate the changes in settings.

Continue reading our article that compares PhysX settings on AMD and NVIDIA GPUs!!