NVIDIA Joins the Bundle Game: Up to $150 in credit on Free-to-Play games for all GTX buyers

Subject: Graphics Cards | February 11, 2013 - 12:33 PM |
Tagged: world of tanks, planetside 2, nvidia, Hawken, gtx, geforce, bundle

AMD has definitely been winning the "game" of game bundles and bonus content with graphics card purchases, as is evident from the recent Never Settle Reloaded campaign that includes titles like Crysis 3, Bioshock Infinite and Tomb Raider. I made comments that NVIDIA was falling behind and might even start to look like it had moved away from a focus on PC gamers, since the company hadn't made any reply over the last year...

After losing a bidding war with AMD over Crysis 3, today NVIDIA is unveiling a bundle campaign that attacks from a different angle; rather than bundling retail games, NVIDIA is working with free-to-play titles. How do you give gamers bonuses by including free-to-play games? Credits! Cold hard cash!

bundle1.png

Starting today, if you pick up any GeForce GTX graphics card you'll be eligible for free in-game credit to use in each of the three free-to-play titles partnering with NVIDIA. A GTX 650 or GTX 650 Ti will net you $25 in each game for a total bonus of $75, while buying a GTX 660 or higher, all the way up to the GTX 690, results in $50 per game for a total of $150.

Also, after asking NVIDIA about it, we can confirm this is a PER CARD bundle, so if you get an SLI pair of anything, you'll get double the credit. A pair of GeForce GTX 660s for an SLI rig results in $100 per game, $300 total!
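If you want to tally up the math yourself, here is a minimal sketch of how the credit stacks up; the tier amounts and per-card stacking follow the details above, while the table and function are purely illustrative.

```python
# Rough sketch of the GeForce GTX free-to-play credit math described above.
# Tier amounts are per game, across the three partner titles.
CREDIT_PER_GAME = {
    "GTX 650": 25, "GTX 650 Ti": 25,                    # $25 per game
    "GTX 660": 50, "GTX 660 Ti": 50,                    # $50 per game for GTX 660 and up
    "GTX 670": 50, "GTX 680": 50, "GTX 690": 50,
}
NUM_GAMES = 3  # World of Tanks, Planetside 2, Hawken

def total_credit(card: str, num_cards: int = 1) -> int:
    """Total in-game credit in dollars; the bundle is per card, so SLI doubles it."""
    return CREDIT_PER_GAME[card] * NUM_GAMES * num_cards

print(total_credit("GTX 650 Ti"))   # 75
print(total_credit("GTX 660", 2))   # 300 for an SLI pair
```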

bundle3.png

This is a very interesting approach for NVIDIA to take, and I am eager to get feedback from our readers on the differences between AMD's and NVIDIA's bundles. I have played quite a bit of Planetside 2 and definitely enjoyed it; it is a graphics showcase as well, with huge, expansive levels and hundreds of players per server. I am less familiar with World of Tanks and Hawken, but they are also extremely popular.

bundle2.png

Leave us your comments below! Do you think NVIDIA's new GeForce GTX gaming bundle for free-to-play game credits can be successful?

If you are looking for a new GeForce GTX card today and this bundle convinced you to buy, feel free to use the links below. 

Rumor: NVIDIA GK110 based GeForce GPU 'Titan' to be released late February

Subject: Graphics Cards | January 22, 2013 - 02:44 PM |
Tagged: nvidia, geforce, gk110, titan, rumor

A combination of rumors and news pieces found online, along with some recent conversations with partners, indicates that February will see the release of a new super-high-end graphics card from NVIDIA based on the GK110 GPU. Apparently carrying the name "Titan", based on a report from Sweclockers.com, this new single-GPU card will feature 2688 CUDA cores, compared to the 1536 in the GeForce GTX 680.

gk110.jpg

If true, the name Titan likely refers to the Cray supercomputer of the same name, built using GK110-based Kepler Tesla cards. Sweclockers.com's sources also quote the clocks of this new super-GPU: a 732 MHz core clock and a 5.2 GHz GDDR5 memory clock. While those numbers are low compared to the 1000+ MHz speeds of the GK104 parts out today, this GPU would have 75% more compute units and presumably additional memory capacity. The 384-bit memory bus is 50% wider as well, which would indicate another big jump in performance over current cards. The CUDA core count of 2688 is actually indicative of a GK110 GPU with a single SMX disabled.
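As a quick sanity check on that core count, here is a small sketch assuming Kepler's building block of 192 CUDA cores per SMX and the commonly cited 15 SMX units on a fully enabled GK110 die; the numbers line up with exactly one SMX fused off.

```python
# GK110 CUDA core count sanity check, assuming Kepler's 192 cores per SMX
# and the commonly cited 15 SMX units on a fully enabled GK110 die.
CORES_PER_SMX = 192
FULL_GK110_SMX = 15

full_cores = FULL_GK110_SMX * CORES_PER_SMX          # 2880 on a full die
titan_cores = (FULL_GK110_SMX - 1) * CORES_PER_SMX   # 2688 with one SMX disabled

print(full_cores, titan_cores)   # 2880 2688
print(titan_cores / 1536 - 1)    # ~0.75, i.e. 75% more than the GTX 680
```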

teslagpu2.jpg

The NVIDIA Titan card will apparently be the replacement for the GeForce GTX 690, a dual-GK104 card launched in May of last year. The performance estimate for the Titan is approximately 85% of that of the GTX 690, and if the rumors are right it would carry an $899 price tag.

Based on other conversations I have had recently, you should only expect the same partners that were able to sell the GTX 690 to stock this new GK110-based part. There won't be any modifications, and you will see very little differentiation between vendors' branding on it. If dates are to be believed, we are hearing that a February 25th launch (or at least that week) is the current target.

Author:
Manufacturer: PC Perspective

Another update

In our previous article and video, I introduced you to our upcoming testing methodology for evaluating graphics cards based not only on frame rates but also on frame smoothness and the efficiency of those frame rates. I showed off some of the new hardware we are using for this process and detailed how direct capture of graphics card output allows us to find interesting frame and animation anomalies using some Photoshop still frames.

d31.jpg

Today we are taking that a step further and looking at a couple of captured videos that demonstrate a "stutter" and walking you through, frame by frame, how we can detect, visualize and even start to measure them.

dis1.jpg

This video takes a couple of examples of stutter in games, DiRT 3 and Dishonored to be exact, and shows what they look like in real time, at 25% speed and then finally in a much more detailed frame-by-frame analysis.

 

Video Loading...

 

Obviously these are just a couple of instances of what a stutter is, and there are oftentimes less apparent in-game stutters that are even harder to see in video playback. Not to worry - this capture method is capable of catching those issues as well, and we plan on diving into the "micro" level shortly.
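To give a sense of what "starting to measure" a stutter might look like once per-frame times have been extracted from the captured video, here is a minimal, hypothetical sketch; the window size, threshold and median comparison are our illustration only, not the exact analysis pipeline described above.

```python
# Hypothetical stutter flagging from a list of per-frame display times (milliseconds).
# A frame is flagged when its time is well above the median of its neighbors,
# which is one simple way to surface the hitches visible in the video.
from statistics import median

def find_stutters(frame_times_ms, window=15, factor=1.5):
    """Return indices of frames whose time exceeds factor x the local median."""
    stutters = []
    for i, ft in enumerate(frame_times_ms):
        lo, hi = max(0, i - window), min(len(frame_times_ms), i + window + 1)
        local = median(frame_times_ms[lo:hi])
        if ft > factor * local:
            stutters.append(i)
    return stutters

# Example: a steady ~16.7 ms stream with one long 45 ms frame (a visible hitch).
times = [16.7] * 30 + [45.0] + [16.7] * 30
print(find_stutters(times))  # [30]
```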

We aren't going to start talking about whose card and what driver is being used yet and I know that there are still a lot of questions to be answered on this topic.  You will be hearing more quite soon from us and I thank you all for your comments, critiques and support.

Let me know below what you thought of this video and any questions that you might have. 

CES 2013: Intel Haswell HD Graphics Compared to GeForce GT 650M

Subject: Graphics Cards | January 12, 2013 - 12:02 PM |
Tagged: nvidia, Intel, hd graphics, haswell, geforce, dirt 3, ces 2013, CES, 650m

While wandering around the Intel booth we were offered a demo of the graphics performance of the upcoming Haswell processor, due out in the middle of 2013. One of the big changes in this architecture will be another jump up in graphics performance, even larger than the one we saw going from Sandy Bridge to Ivy Bridge.

haswell1.jpg

On the left is the Intel Haswell system and on the right is a mobile system powered by the NVIDIA GeForce GT 650M.  For reference, that discrete GPU has 384 cores and a 128-bit memory bus so we aren't talking about flagship performance here.  Haswell GT3 graphics is rumored to have double the performance of the GT2 found in Ivy Bridge based on talks at IDF this past September. 

While I am not able to report the benchmark results, I can tell you what I "saw" in my viewing. First, the Haswell graphics loaded the game up more slowly than the NVIDIA card. That isn't a big deal really and could change with driver updates closer to launch, but it has been a lingering problem we have seen with Intel HD Graphics over the years.

haswell2.jpg

During the actual benchmark run, both looked great while running at 1080p and the High quality preset. I did notice that during part of the loading of the level, the Haswell system seemed to "stutter" a bit and was a little less fluid in its animation. I did NOT notice that during the actual benchmark gameplay, though.

I also inquired with Intel's graphics team about how dedicated they were to providing updated graphics drivers for HD graphics users.  They were defensive about their current output saying they have released quarterly drivers since the Sandy Bridge release but that perhaps they should be more vocal about it (I agree).  While I tried to get some kind of formal commitment from them going forward to monthly releases with game support added within X number of days, they weren't willing to do that quite yet. 

If the discrete notebook (and low-cost desktop) graphics divisions at AMD and NVIDIA are to keep their edge, game support and frequent driver updates are going to be the best place to start. Still, seeing Intel continue to push forward on the path of improved processor graphics is great, if they can follow through for gamers!


NVIDIA's 310.90 Driver - more performance, fewer vulnerabilities

Subject: General Tech | January 9, 2013 - 12:46 PM |
Tagged: nvidia, geforce, graphics drivers, fud

Say what you will about AMD's driver team, but they don't tend to release drivers that allow users to elevate their privileges on their PCs. That was unfortunately the Christmas present NVIDIA offered Windows users who installed 310.70, similar to the gift they offered Linux users last summer. According to The Register, the new driver no longer contains that security hole, which makes upgrading to the newest driver more important than usual. That is not the only reason to grab the new driver: NVIDIA reports that 310.90 provides 26% faster performance in Call of Duty: Black Ops 2 and up to 18% faster performance in Assassin’s Creed III, as well as improvements to 400, 500 and 600 series cards in most other games.

logo_geforce.png

"The vulnerability allows a remote attacker with a valid domain account to gain super-user access to any desktop or laptop running the vulnerable service," HD Moore, the developer of Metasploit and chief security officer at Rapid7, told SecurityWeek.

"This flaw also allows an attacker (or rogue user) with a low-privileged account to gain super-access to their own system, but the real risk to enterprises is the remote vector," he added."


Source: The Register
Author:
Manufacturer: PC Perspective

A change is coming in 2013

If the new year brings us anything, it looks like it might be the end of using "FPS" as the primary measuring tool for graphics performance on PCs. A long, long time ago we started with simple "time demos" that recorded rendered frames in a game like Quake and then played them back as quickly as possible on a test system. The lone result was given as a time, in seconds, which was then converted to an average frame rate since the total number of frames recorded was known to start with.

More recently we saw a transition to frame rates over time and the advent of frame time graphs like the ones we have been using in our graphics reviews on PC Perspective. This expanded the amount of data required to get an accurate picture of graphics and gaming performance, but it was indeed more accurate, giving us a clearer image of how GPUs (and CPUs and systems, for that matter) performed in games.

And even though the idea of frame times has been around just as long, not many people were interested in getting to that level of detail until this past year. A frame time is the amount of time each frame takes to render, usually listed in milliseconds, and can range from 5ms to 50ms depending on performance. For reference, 120 FPS equates to an average of 8.3ms, 60 FPS is 16.6ms and 30 FPS is 33.3ms. But rather than averaging those out over each second of time, what if you looked at each frame individually?
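The conversion behind those reference numbers is just 1000 ms divided by the frame rate; here is a tiny, purely illustrative sketch.

```python
# Average frame time is simply 1000 ms divided by the frame rate, and vice versa.
def fps_to_frame_time_ms(fps: float) -> float:
    return 1000.0 / fps

def frame_time_ms_to_fps(ms: float) -> float:
    return 1000.0 / ms

for fps in (120, 60, 30):
    print(f"{fps} FPS -> {fps_to_frame_time_ms(fps):.2f} ms per frame")
# 120 FPS -> 8.33 ms per frame
# 60 FPS -> 16.67 ms per frame
# 30 FPS -> 33.33 ms per frame
```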

Video Loading...

Scott over at Tech Report started doing that this past year and found some interesting results.  I encourage all of our readers to follow up on what he has been doing as I think you'll find it incredibly educational and interesting. 

Through emails and tweets, many PC Perspective readers have been asking for our take on it, why we weren't testing graphics cards in the same fashion yet, etc. I've stayed quiet about it simply because we were working on quite a few different angles on our side and I wasn't ready to share results. I am still not ready to share the bulk of our information yet, but I am ready to start the discussion, and I hope our community finds it compelling and offers some feedback.

card.jpg

At the heart of our unique GPU testing method is this card, a high-end dual-link DVI capture card capable of handling 2560x1600 resolutions at 60 Hz.  Essentially this card will act as a monitor to our GPU test bed and allow us to capture the actual display output that reaches the gamer's eyes.  This method is the best possible way to measure frame rates, frame times, stutter, runts, smoothness, and any other graphics-related metrics.

Using that recorded footage, sometimes reaching 400 MB/s of consistent writes at high resolutions, we can then analyze the frames one by one, albeit with the help of some additional software. There are a lot of details that I am glossing over, including the need for perfectly synced frame rates and having absolutely zero dropped frames in the recording and analysis, but trust me when I say we have been spending a lot of time on this.
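To put that write rate in perspective, here is a rough back-of-the-envelope sketch of uncompressed capture bandwidth, assuming 24-bit RGB frames; the actual capture pipeline and any compression it applies are details we have not gone into here.

```python
# Rough uncompressed capture bandwidth: width x height x bytes per pixel x refresh rate.
def capture_mb_per_s(width: int, height: int, hz: int, bytes_per_pixel: int = 3) -> float:
    return width * height * bytes_per_pixel * hz / 1e6  # megabytes per second

print(capture_mb_per_s(1920, 1080, 60))   # ~373 MB/s at 1080p60
print(capture_mb_per_s(2560, 1600, 60))   # ~737 MB/s at 2560x1600 @ 60 Hz, before any compression
```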

Continue reading our editorial on Frame Rating: A New Graphics Performance Metric.

New Specifications Leak For GTX 650 Ti, Launch Likely Imminent

Subject: General Tech | October 4, 2012 - 10:08 PM |
Tagged: nvidia, kepler, gtx 650ti, gpu, geforce

Earlier this year, specifications for an as-yet-unreleased GTX 650 Ti graphics card from NVIDIA leaked. At the time, the rumors indicated that the GTX 650 Ti would have hardware closer to the GTX 650 than the GTX 660 but still be based on the GK106 Kepler chip. It would have a 128-bit memory interface, 48 texture units, and 576 CUDA cores in 1.5 GPCs (3 SMX units). And to top it off, it had a rumored price of around $170! Not exactly a bargain.

Well, as the launch gets closer, more details are being leaked, and this time around the rumored information indicates that the GTX 650 Ti will be closer in performance to the GTX 660 and cost around $140-$150. That certainly sounds better!

inno3d GTX 650Ti.jpg

The new rumors indicate that the reference GTX 650 Ti will have 768 CUDA cores, 64 texture units, and four SMX units, which means it has two full GPCs (so it is only missing the half GPC that you get with the GTX 660). As a point of reference, the GTX 660 – which NVIDIA swears is the full GK106 chip – has five SMX units in two and a half GPCs.
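Those rumored numbers are at least internally consistent with Kepler's building blocks; here is a small sketch, assuming the commonly cited 192 CUDA cores and 16 texture units per SMX.

```python
# Deriving GK106 core and texture unit counts from the SMX count,
# assuming Kepler's 192 CUDA cores and 16 texture units per SMX.
CORES_PER_SMX = 192
TMUS_PER_SMX = 16

def kepler_counts(smx_units: int):
    return smx_units * CORES_PER_SMX, smx_units * TMUS_PER_SMX

print(kepler_counts(4))  # (768, 64) -> the rumored GTX 650 Ti
print(kepler_counts(5))  # (960, 80) -> the full GK106 as found in the GTX 660
```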

The following image shows the layout of the GTX 660. The GTX 650 Ti will have the GPC on the far right disabled. Previous rumors suggested that the entire middle GPC would be turned off, so the new rumors are definitely looking more promising in terms of potential performance.

GeForce_GTX_660_Block_Diagram_FINAL.png

Specifically marked GK106-220 on the die, the GTX 650 Ti is based on the same GK106 Kepler chip as the GTX 660, but with some features disabled. The GPU is reportedly clocked at 925MHz, and it does not support NVIDIA's GPU Boost technology.

GTX 650Ti.jpg

Memory performance will take a large hit compared to the full GK106 chip. The GTX 650 Ti will feature 1GB of GDDR5 memory clocked at 1350MHz on a 128-bit memory interface. That amounts to approximately 86.4 GB/s of bandwidth, which is slightly over half of the GTX 660's 144.2 GB/s. It is also just barely over the 80 GB/s bandwidth of the GTX 650 (which makes sense, considering they both use 128-bit interfaces).
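For those curious where the 86.4 GB/s figure comes from, this is the usual GDDR5 math (base clock times four transfers per clock times bus width); the sketch below assumes the 1350MHz figure above is the base memory clock and uses the commonly listed 1502MHz, 192-bit configuration for the GTX 660.

```python
# GDDR5 bandwidth: base clock (MHz) x 4 transfers per clock x bus width (bits) / 8 bits per byte.
def gddr5_bandwidth_gbs(base_clock_mhz: float, bus_width_bits: int) -> float:
    return base_clock_mhz * 1e6 * 4 * bus_width_bits / 8 / 1e9

print(gddr5_bandwidth_gbs(1350, 128))  # 86.4 GB/s -> rumored GTX 650 Ti
print(gddr5_bandwidth_gbs(1502, 192))  # ~144.2 GB/s -> GTX 660
```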

new GeForce GTX 650 Ti Specifications Leak.png

The latest rumors indicate the GTX 650 Ti will be priced at around $140, with custom cards such as the recently leaked Galaxy GTX 650 Ti GC on Newegg costing more ($149). These new leaked specifications carry more weight than the previous rumors since they have come from multiple leaks in multiple places, so I am hoping that they are the real deal. If so, the GTX 650 Ti becomes a much better value than it was rumored to be before!

Galaxy GTX 650Ti.jpg

You can find more photos of a leaked GTX 650 Ti over at Chiphell.

Source: Chip Hell
Author:
Manufacturer: Various

PhysX Settings Comparison

Borderlands 2 is a hell of a game; we actually ran a 4+ hour live event on launch day to celebrate its release and played it after our podcast that week as well. When big PC releases occur, we usually like to take a look at the game's performance on a few graphics cards to see how NVIDIA and AMD cards stack up. Interestingly, for this title PhysX technology was brought up again, with NVIDIA widely pushing it as a great example of an implementation of its GPU-accelerated physics engine.

What you may find unique in Borderlands 2 is that the game actually allows you to enable PhysX features at Low, Medium and High settings with either NVIDIA or AMD Radeon graphics cards installed in your system. In past titles, like Batman: Arkham City and Mafia II, PhysX could only be enabled (or at least set to higher settings) if you had an NVIDIA card. Many gamers that used AMD cards saw this as a slight, and we tended to agree. But since we could enable it with a Radeon card installed, we were curious to see what the results would be.

screenshot-16.jpg

Of course, don't expect the PhysX effects to be able to utilize the Radeon GPU for acceleration...

Borderlands 2 PhysX Settings Comparison

The first thing we wanted to learn was just how much difference you would see by moving from Low (the lowest setting, there is no "off") to Medium and then to High.  The effects were identical on both AMD and NVIDIA cards and we made a short video here to demonstrate the changes in settings.

Continue reading our article that compares PhysX settings on AMD and NVIDIA GPUs!!

ASUS Launches the GeForce GTX 660 DirectCU II Lineup

Subject: Graphics Cards | September 13, 2012 - 05:09 PM |
Tagged: nvidia, msi, kepler, gtx 660, gk106, geforce, evga, factory overclocked

As those of you who have already read the post below this one know, ASUS decided to create a DirectCU II model of their GTX 660, with the famous heatpipe-bearing heatsink. They have overclocked the GPU already, and the card comes with tools to let you push it even further if you take the time to get to know your card and what it can manage. Check the full press release below.

Fremont, CA (September 13, 2012) - ASUS is excited to release the ASUS GeForce GTX 660 DirectCU II series featuring the Standard, OC and TOP editions. Utilizing the latest 28nm NVIDIA Kepler graphics architecture, the OC and TOP cards deliver a factory-overclock while all three cards feature ASUS exclusive DirectCU thermal design and GPU Tweak tuning software to deliver a quieter, cooler, faster, and more immersive gameplay experience. The ASUS GeForce GTX 660 DirectCU II series set a new benchmark for exceptional performance and power efficiency in a highly affordable graphics card. The ASUS GeForce GTX 660 DirectCU II is perfect for gamers looking to upgrade from last-generation graphics technology while retaining ASUS’ class-leading cooling and acoustic performance.

image01.jpg

Superior Design and Software for the Best Gaming Experience

ASUS equips the GeForce GTX 660 DirectCU II series with 2GB of GDDR5 memory clocked up to 6108MHz. The TOP edition features a blistering GPU core boost clock of 1137MHz, 104MHz faster than reference designs, while the OC edition arrives with a factory-set GPU core boost speed of 1085MHz. Exclusive ASUS DIGI+ VRM digital power delivery and user-friendly GPU Tweak tuning software allow all cards to easily overclock beyond factory-set speeds, offering enhanced performance in your favorite game or compute-intensive application.

The ASUS GeForce GTX 660 DirectCU II series features exclusive DirectCU technology. The custom designed cooler uses direct contact copper heatpipes for faster heat transduction and up to 20% lower normal operating temperatures than reference designs. The optimized fans are able to operate at lower speeds, providing a much quieter gaming or computing environment. For enhanced stability, energy efficiency, and overclocking margins, the cards feature DIGI+ VRM digital power delivery plus a class-leading six-phase Super Alloy Power design for the capacitors, chokes, and MOSFETs, meant to extend product lifespan and durability while operating noise-free even under heavy workloads.

ASUS once again includes the award winning GPU Tweak tuning suite in the box. Overclocking-inclined enthusiasts or gamers can boost clock speeds, set power targets, and configure fan operating parameters and policies; all this and more is accessible in the user-friendly interface. GPU Tweak offers built-in safe guards to ensure all modifications are safe, maintaining optimal stability and card reliability.

Source: ASUS

New Kepler on the Block, meet the vanilla GTX 660

Subject: Graphics Cards | September 13, 2012 - 04:49 PM |
Tagged: nvidia, msi, kepler, gtx 660, gk106, geforce, evga

The non-Ti version of the GTX 660 has arrived on test benches and at retailers, with even the heavily overclocked cards available at $230, like EVGA's Superclocked model or MSI's OC'd card once you count the MIR. That price places it right between the HD 7850 and HD 7870, and ~$70 less than the GTX 660 Ti, while performance is mostly comparable to a stock HD 7870, though the OC versions can top the GTX 660.

[H]ard|OCP received ASUS' version of the card, a DirectCU II based model with the distinctive heatpipes. ASUS overclocked the card to a 1072MHz base clock and a 1137MHz GPU Boost clock, and [H] plans to see just how much further the frequencies can be pushed at a later date. Their final word on this card for those looking to upgrade: for those of you with "a GTX 560 Ti, and even the GTX 570, the GTX 660 is an upgrade".

H_660gtx.gif

"NVIDIA is launching the new GeForce GTX 660 GPU, codenamed GK106. We have a retail ASUS GeForce GTX 660 DirectCU II custom video card fully evaluated against a plethora of competition at this price point. This brand new GPU aims for a price point just under the GTX 660 Ti but still promises to deliver exceptional 1080p gaming with AA."


Source: [H]ard|OCP