CES 2013: Intel Haswell HD Graphics Compared to GeForce GT 650M

Subject: Graphics Cards | January 12, 2013 - 12:02 PM |
Tagged: nvidia, Intel, hd graphics, haswell, geforce, dirt 3, ces 2013, CES, 650m

While wandering around the Intel booth we were offered a demo of the graphics performance of the upcoming Haswell processor, due out in the middle of 2013.  One of the big changes on this architecture will be another jump up in graphics performance, even more than we saw going from Sandy Bridge to Ivy Bridge. 

haswell1.jpg

On the left is the Intel Haswell system and on the right is a mobile system powered by the NVIDIA GeForce GT 650M.  For reference, that discrete GPU has 384 cores and a 128-bit memory bus so we aren't talking about flagship performance here.  Haswell GT3 graphics is rumored to have double the performance of the GT2 found in Ivy Bridge based on talks at IDF this past September. 

While I am not able to report the benchmark results, I can tell you what I "saw" in my viewing.  First, the Haswell graphics loaded the game more slowly than the NVIDIA card.  That isn't a big deal really and could change with driver updates closer to launch, but it is a lingering problem we have seen with Intel HD graphics over the years. 

haswell2.jpg

During the actual benchmark run, both looked great while running at 1080p and High quality presets.  I did notice that during part of the level loading, the Haswell system seemed to "stutter" a bit and was a little less fluid in its animation.  I did NOT notice that during the actual benchmark gameplay, though. 

I also inquired with Intel's graphics team about how dedicated they are to providing updated graphics drivers for HD graphics users.  They were defensive about their current output, saying they have released quarterly drivers since the Sandy Bridge release but that perhaps they should be more vocal about it (I agree).  While I tried to get some kind of formal commitment from them going forward, such as monthly releases with game support added within X number of days, they weren't willing to do that quite yet. 

If AMD's and NVIDIA's discrete notebook (and low-cost desktop) graphics divisions are to keep an edge, game support and frequent driver updates are the best place to start.  Still, seeing Intel continue to push forward on the path of improved processor graphics is great, if they can follow through for gamers!

Coverage of CES 2013 is brought to you by AMD!

Follow all of our coverage of the show at http://pcper.com/ces!

NVIDIA's 310.90 Driver - more performance, fewer vulnerabilities

Subject: General Tech | January 9, 2013 - 12:46 PM |
Tagged: nvidia, geforce, graphics drivers, fud

Say what you will about AMD's driver team, but they don't tend to release drivers that allow users to elevate their privileges on their PCs.  That was unfortunately the Christmas present NVIDIA offered Windows users who installed 310.70, similar to the gift they offered Linux users last summer.  According to The Register, the new driver no longer contains that security hole, which makes upgrading to the newest driver more important than usual.  That is not the only reason to grab the new driver: NVIDIA reports that 310.90 provides 26% faster performance in Call of Duty: Black Ops 2 and up to 18% faster performance in Assassin’s Creed III, as well as improvements for 400, 500, and 600 series cards in most other games. 

logo_geforce.png

"The vulnerability allows a remote attacker with a valid domain account to gain super-user access to any desktop or laptop running the vulnerable service," HD Moore, the developer of Metasploit and chief security officer at Rapid7, told SecurityWeek.

"This flaw also allows an attacker (or rogue user) with a low-privileged account to gain super-user access to their own system, but the real risk to enterprises is the remote vector," he added.

Here is some more Tech News from around the web:

Tech Talk

Source: The Register
Manufacturer: PC Perspective

A change is coming in 2013

If the new year will bring us anything, it looks like it might be the end of using "FPS" as the primary measuring tool for graphics performance on PCs.  A long, long time ago we started with simple "time demos" that recorded rendered frames in a game like Quake and then played them back as quickly as possible on a test system.  The lone result was given as time, in seconds, and was then converted to an average frame rate having known the total number of frames recorded to start with.

More recently we saw a transition to frame rates over time and the advent of frame time graphs like the ones we have been using in our graphics reviews on PC Perspective. This expanded the amount of data required to get an accurate picture of graphics and gaming performance, but it was indeed more accurate, giving us a clearer image of how GPUs (and CPUs and whole systems, for that matter) performed in games.

And even though the idea of frame times has been around just as long, not many people were interested in that level of detail until this past year.  A frame time is the amount of time each frame takes to render, usually listed in milliseconds, and can range from 5ms to 50ms depending on performance.  For reference, 120 FPS equates to an average of 8.3ms per frame, 60 FPS to 16.6ms, and 30 FPS to 33.3ms.  But rather than averaging those out over each second of time, what if you looked at each frame individually?
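Those FPS-to-frame-time reference points are simple reciprocals. A quick sketch shows the conversion (the helper function is ours for illustration, not part of any benchmarking tool):

```python
def fps_to_frame_time_ms(fps):
    """Average time per frame in milliseconds for a given frame rate."""
    return 1000.0 / fps

# The reference points quoted above:
for fps in (120, 60, 30):
    print(f"{fps} FPS -> {fps_to_frame_time_ms(fps):.1f} ms per frame")
```

The catch, of course, is that an average hides the outliers: a second containing fifty-nine 10ms frames and one 400ms hitch still reports a "smooth" average, which is exactly why looking at individual frame times matters.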


Scott over at Tech Report started doing that this past year and found some interesting results.  I encourage all of our readers to follow up on what he has been doing as I think you'll find it incredibly educational and interesting. 

Through emails and tweets many PC Perspective readers have been asking for our take on it, and why we weren't testing graphics cards in the same fashion yet.  I've stayed quiet about it simply because we were working on quite a few different angles on our side and I wasn't ready to share results.  I am still not ready to share the bulk of our information yet, but I am ready to start the discussion, and I hope our community finds it compelling and offers some feedback.

card.jpg

At the heart of our unique GPU testing method is this card, a high-end dual-link DVI capture card capable of handling 2560x1600 resolutions at 60 Hz.  Essentially this card will act as a monitor to our GPU test bed and allow us to capture the actual display output that reaches the gamer's eyes.  This method is the best possible way to measure frame rates, frame times, stutter, runts, smoothness, and any other graphics-related metrics.

Using that recorded footage, which sometimes reaches 400 MB/s of sustained writes at high resolutions, we can then analyze the frames one by one with the help of some additional software.  There are a lot of details that I am glossing over, including the need for perfectly synced frame rates and absolutely zero dropped frames in the recording and analysis, but trust me when I say we have been spending a lot of time on this. 
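To put that write rate in perspective, here is a back-of-the-envelope estimate of uncompressed capture bandwidth. This assumes 24-bit RGB frames (3 bytes per pixel); the actual pixel format used by the capture card may differ:

```python
def capture_rate_mb_s(width, height, fps, bytes_per_pixel=3):
    """Sustained write rate (MB/s) for an uncompressed video capture."""
    return width * height * bytes_per_pixel * fps / 1e6

# 1080p at 60 Hz lands in the neighborhood of the ~400 MB/s figure above...
print(capture_rate_mb_s(1920, 1080, 60))   # ~373 MB/s
# ...while the card's 2560x1600 @ 60 Hz ceiling is roughly double that.
print(capture_rate_mb_s(2560, 1600, 60))   # ~737 MB/s
```

Either way, that is far beyond what a single hard drive can sustain, which hints at why the storage side of this test rig took so much work.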

Continue reading our editorial on Frame Rating: A New Graphics Performance Metric.

New Specifications Leak For GTX 650 Ti, Launch Likely Imminent

Subject: General Tech | October 4, 2012 - 10:08 PM |
Tagged: nvidia, kepler, gtx 650ti, gpu, geforce

Earlier this year, specifications for an as-yet-unreleased GTX 650 Ti graphics card from NVIDIA leaked. At the time, the rumors indicated that the GTX 650 Ti would have hardware closer to the GTX 650 than the GTX 660 but still be based on the GK106 Kepler chip. It would have a 128-bit memory interface, 48 texture units, and 576 CUDA cores in 1.5 GPCs (3 SMX units). And to top it off, it had a rumored price of around $170! Not exactly a bargain.

Well, as the launch gets closer, more details are being leaked, and this time around the rumored information indicates that the GTX 650 Ti will be closer in performance to the GTX 660 and cost around $140-$150. That certainly sounds better!

inno3d GTX 650Ti.jpg

The new rumors indicate that the reference GTX 650 Ti will have 768 CUDA cores and 64 texture units in four SMX units, which means it has two full GPCs (so it is only missing the half GPC that you get with the GTX 660). As a point of reference, the GTX 660 – which NVIDIA swears is the full GK106 chip – has five SMX units in two and a half GPCs.

The following image shows the layout of the GTX 660. The GTX 650 Ti will have the GPC on the far right disabled. Previous rumors suggested that the entire middle GPC would be turned off, so the new rumors are definitely looking more promising in terms of potential performance.

GeForce_GTX_660_Block_Diagram_FINAL.png

Specifically marked GK106-220 on the die, the GTX 650 Ti is based on the same GK106 Kepler chip as the GTX 660, but with some features disabled. The GPU is reportedly clocked at 925MHz, and it does not support NVIDIA's GPU Boost technology.

GTX 650Ti.jpg

Memory performance will take a large hit compared to the full GK106 chip. The GTX 650 Ti will feature 1GB of GDDR5 memory clocked at 1350MHz on a 128-bit memory interface. That amounts to approximately 86.4 GB/s of bandwidth, slightly over half of the GTX 660's 144.2 GB/s. It is also just barely over the 80 GB/s bandwidth of the GTX 650 (which makes sense, considering they both use 128-bit interfaces).
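That 86.4 GB/s figure falls straight out of the leaked clocks: GDDR5 transfers data on both edges of two clocks, so the effective data rate is 4x the listed memory clock. A quick sketch of the arithmetic (the GTX 660's ~1502MHz memory clock is filled in as an assumption to reproduce its quoted bandwidth):

```python
def gddr5_bandwidth_gb_s(mem_clock_mhz, bus_width_bits):
    """Peak memory bandwidth: bus width in bytes times effective data rate.

    GDDR5's effective rate is 4x the base memory clock (quad-pumped),
    so a 1350MHz clock works out to 5.4 Gbps per pin.
    """
    effective_gbps = mem_clock_mhz * 4 / 1000
    return bus_width_bits / 8 * effective_gbps

print(gddr5_bandwidth_gb_s(1350, 128))  # GTX 650 Ti: 86.4 GB/s
print(gddr5_bandwidth_gb_s(1502, 192))  # GTX 660: ~144.2 GB/s (assumed clock)
```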

new GeForce GTX 650 Ti Specifications Leak.png

The latest rumors indicate the GTX 650 Ti will be priced at around $140, with custom cards such as the recently leaked Galaxy GTX 650 Ti GC on Newegg costing more ($149). These new leaked specifications carry more weight than the previous ones since they have come from multiple sources, so I am hoping these new rumors are the real deal. If so, the GTX 650 Ti becomes a much better value than it was rumored to be before!

Galaxy GTX 650Ti.jpg

You can find more photos of a leaked GTX 650 Ti over at Chiphell.

Source: Chiphell
Manufacturer: Various

PhysX Settings Comparison

Borderlands 2 is a hell of a game; we actually ran a 4+ hour live event on launch day to celebrate its release and played it after our podcast that week as well.  When big PC releases occur we usually like to take a look at performance of the game on a few graphics cards as well to see how NVIDIA and AMD cards stack up.  Interestingly, for this title, PhysX technology was brought up again and NVIDIA was widely pushing it as a great example of implementation of the GPU-accelerated physics engine.

What you may find unique in Borderlands 2 is that the game actually allows you to enable PhysX features at Low, Medium, and High settings with either NVIDIA or AMD Radeon graphics cards installed in your system.  In past titles, like Batman: Arkham City and Mafia II, PhysX could only be enabled (or at least only at higher settings) if you had an NVIDIA card.  Many gamers with AMD cards saw this as a slight, and we tended to agree.  But since we could enable it with a Radeon card installed, we were curious to see what the results would be.

screenshot-16.jpg

Of course, don't expect the PhysX effects to be able to utilize the Radeon GPU for acceleration...

Borderlands 2 PhysX Settings Comparison

The first thing we wanted to learn was just how much difference you would see by moving from Low (the lowest setting; there is no "off") to Medium and then to High.  The effects were identical on both AMD and NVIDIA cards, and we made a short video to demonstrate the changes in settings.

Continue reading our article that compares PhysX settings on AMD and NVIDIA GPUs!!

ASUS Launches the GeForce GTX 660 DirectCU II Lineup

Subject: Graphics Cards | September 13, 2012 - 05:09 PM |
Tagged: nvidia, msi, kepler, gtx 660, gk106, geforce, evga, factory overclocked

As those of you who have already read the post below this one know, ASUS decided to create a DirectCU II model for their GTX 660, with the famous heatpipe bearing heatsink.  They have overclocked the GPU already and the card comes with tools to allow you to push it even further if you take the time to get to know your card and what it can manage.  Check the full press release below.

Fremont, CA (September 13, 2012) - ASUS is excited to release the ASUS GeForce GTX 660 DirectCU II series featuring the Standard, OC and TOP editions. Utilizing the latest 28nm NVIDIA Kepler graphics architecture, the OC and TOP cards deliver a factory overclock while all three cards feature ASUS exclusive DirectCU thermal design and GPU Tweak tuning software to deliver a quieter, cooler, faster, and more immersive gameplay experience. The ASUS GeForce GTX 660 DirectCU II series sets a new benchmark for exceptional performance and power efficiency in a highly affordable graphics card. The ASUS GeForce GTX 660 DirectCU II is perfect for gamers looking to upgrade from last-generation graphics technology while retaining ASUS’ class-leading cooling and acoustic performance.

image01.jpg

Superior Design and Software for the Best Gaming Experience

ASUS equips the GeForce GTX 660 DirectCU II series with 2GB of GDDR5 memory clocked up to 6108MHz. The TOP edition features a blistering GPU core boost clock of 1137MHz, 104MHz faster than reference designs, while the OC edition arrives with a factory-set GPU core boost speed of 1085MHz. Exclusive ASUS DIGI+ VRM digital power delivery and user-friendly GPU Tweak tuning software allow all cards to easily overclock beyond factory-set speeds, offering enhanced performance in your favorite game or compute-intensive application.

The ASUS GeForce GTX 660 DirectCU II series features exclusive DirectCU technology. The custom-designed cooler uses direct contact copper heatpipes for faster heat transfer and up to 20% lower normal operating temperatures than reference designs. The optimized fans are able to operate at lower speeds, providing a much quieter gaming or computing environment. For enhanced stability, energy efficiency, and overclocking margins, the cards feature DIGI+ VRM digital power delivery plus a class-leading six-phase Super Alloy Power design for the capacitors, chokes, and MOSFETs, meant to extend product lifespan and durability while operating noise-free even under heavy workloads.

ASUS once again includes the award-winning GPU Tweak tuning suite in the box. Overclocking-inclined enthusiasts and gamers can boost clock speeds, set power targets, and configure fan operating parameters and policies; all this and more is accessible in the user-friendly interface. GPU Tweak offers built-in safeguards to ensure all modifications are safe, maintaining optimal stability and card reliability.

Source: ASUS

New Kepler on the Block, meet the vanilla GTX 660

Subject: Graphics Cards | September 13, 2012 - 04:49 PM |
Tagged: nvidia, msi, kepler, gtx 660, gk106, geforce, evga

The non-Ti version of the GTX 660 has arrived on test benches and at retailers, with even heavily overclocked cards available at $230, like EVGA's Superclocked model or MSI's OC'd card once you count the MIR.  That price places it right between the HD 7850 and HD 7870, and ~$70 less than the GTX 660 Ti, while performance is mostly comparable to a stock HD 7870, though the OC versions can top the stock GTX 660.

[H]ard|OCP received ASUS' version of the card, a DirectCU II model with the distinctive heatpipes.  ASUS overclocked the card to a 1072MHz base clock and 1137MHz GPU Boost, and [H] plans to see just how much further the frequencies can be pushed at a later date.  Their final word on this card: for those of you with "a GTX 560 Ti, and even the GTX 570, the GTX 660 is an upgrade".

H_660gtx.gif

"NVIDIA is launching the new GeForce GTX 660 GPU, codenamed GK106. We have a retail ASUS GeForce GTX 660 DirectCU II custom video card fully evaluated against a plethora of competition at this price point. This brand new GPU aims for a price point just under the GTX 660 Ti but still promises to deliver exceptional 1080p gaming with AA."

Here are some more Graphics Card articles from around the web:

Graphics Cards

Source: [H]ard|OCP
Manufacturer: NVIDIA

GK106 Completes the Circle

The release of the various Kepler-based graphics cards has been interesting to watch from the outside.  Though NVIDIA certainly spiced things up with the release of the GeForce GTX 680 2GB card back in March, and then with the dual-GPU GTX 690 4GB graphics card, for quite some time NVIDIA was content to leave the sub-$400 markets to AMD's Radeon HD 7000 cards and, of course, NVIDIA's own GTX 500-series.

But gamers and enthusiasts are fickle beings - knowing that the GTX 660 was always JUST around the corner, many of you were simply not willing to buy into the GTX 560s floating around Newegg and other online retailers.  AMD benefited greatly from this lack of competition and only recently has NVIDIA started to bring their latest generation of cards to the price points MOST gamers are truly interested in. 

Today we are going to take a look at the brand new GeForce GTX 660, a graphics card with a 2GB frame buffer that will have a starting MSRP of $229.  Coming in $70 under the GTX 660 Ti released just last month, does the more vanilla GTX 660 have what it takes to repeat the success of the GTX 460?

The GK106 GPU and GeForce GTX 660 2GB

NVIDIA's GK104 GPU is used in the GeForce GTX 690, GTX 680, GTX 670 and even the GTX 660 Ti.  We saw the much smaller GK107 GPU with the GT 640 card, a release I was not impressed with at all.  With the GTX 660 Ti starting at $299 and the GT 640 at $120, there was a WIDE gap in NVIDIA's 600-series lineup that the GTX 660 addresses with an entirely new GPU, the GK106.

First, let's take a quick look at the reference card from NVIDIA for the GeForce GTX 660 2GB - it doesn't differ much from the reference cards for the GTX 660 Ti and even the GTX 670.

01.jpg

The GeForce GTX 660 uses the same half-length PCB that we saw for the first time with the GTX 670 and this will allow retail partners a lot of flexibility with their card designs. 

Continue reading our review of the GeForce GTX 660 graphics card!

NVIDIA Launches GTX 650 for Budget Gamers

Subject: Graphics Cards | September 13, 2012 - 09:38 AM |
Tagged: nvidia, kepler, gtx 650, graphics cards, geforce

Ah, Kepler: the (originally intended as) midrange graphics card architecture that took the world by storm and allowed NVIDIA to take it from the dual-GPU GeForce GTX 690 all the way down to budget discrete HTPC cards. So far this year we have seen the company push Kepler to its limits by adding GPU Boost and placing it in the GTX 690 and GTX 680. Those cards were great, but commanded a price premium that most gamers could not afford. Enter the GTX 670 and GTX 660 Ti earlier this year, and Kepler started to become an attractive option for gamers wanting a high-end single-GPU system without breaking the bank. Those cards, at $399 and $299 respectively, were a step in the right direction toward making the Kepler architecture available to everyone, but were still a bit pricey if you were on a tighter budget for your gaming rig (or needed to factor in the Significant Other Approval Process™).

Well, Kepler has now been on the market for about six months, and I’m excited to (finally) announce that NVIDIA is launching its first Kepler-based budget gaming card! The NVIDIA GeForce GTX 650 brings Kepler down to the ever-attractive $109 price point and is even capable of playing new games at 1080p above 30FPS. Not bad for such a cheap card!

GTX 650.jpg

With the GTX 650, you are making some sacrifices as far as hardware goes, but things are not all bad. The card features a mere 384 CUDA cores and 1GB of GDDR5 memory on a 128-bit bus. This is a huge decrease in hardware compared to the GTX 660 Ti’s 1344 CUDA cores and 2GB of memory on a 192-bit bus – but that card is also $200 more. And while the GTX 650 runs the memory at 5Gbps, NVIDIA was not shy about pumping up the GPU core clockspeed. No boost functionality was mentioned, but the base clockspeed is a respectable 1058 MHz. Even better, the card only requires a single 6-pin PCI-E power connector and has a TDP of 64W (less than half that of its higher-end GeForce brethren).

Specs Comparison

The following chart compares the specifications of the cards from the new GeForce GTX 650 through the GTX 670. 

GTX 650 and GTX 660 Specifications.jpg

Click on the above chart for a larger image.

Gaming Potential?

The really important question is how well it handles games, and NVIDIA showed off several slides with claimed performance numbers. Take these numbers with a grain of salt, as they come from the same company that built the hardware, but the GTX 650 looks like a capable GPU for the price. The company compared it to both its own GTS 450 (Fermi) and AMD’s 7750 graphics card. Naturally, it was shown in a good light in both comparisons, but nothing egregious.

NVIDIA is claiming an 8X performance increase versus the old 9500 GT, and an approximate 20% speed increase versus the GTS 450. Improvements to the hardware itself have allowed NVIDIA to improve performance while requiring less power; the company claims the GTX 650 uses up to half the power of its Fermi predecessor.

20percent better than fermi.jpg

The comparison between the GTX 650 and AMD Radeon HD 7750 is harder to gauge, though the 7750 is priced competitively around the GTX 650’s $109 MSRP so it will be interesting to see how that shakes out. NVIDIA is claiming anywhere from 1.08 to 1.34 times the performance of the 7750 in a number of games, shown in the chart below.

GTX 650 vs HD 7750.jpg

If you have been eyeing a 7750, the GTX 650 looks like it might be the better option, assuming reviewers are able to replicate NVIDIA’s results.

FPS GTX 650.png

Keep in mind, these are NVIDIA's numbers and not from our reviews.

Unfortunately, NVIDIA did not benchmark the GTS 450 against the GTX 650 in those games. Rather, they compared it to the 9500 GT to show the upgrade potential for anyone still holding onto the older hardware (pushing the fact that you can run DirectX 11 at 1080p if you upgrade). Still, the results for the 650 are interesting by themselves. In MechWarrior Online, World of Warcraft, and Max Payne 3, the budget GPU managed at least 40 FPS at 1920x1080 in DirectX 11 mode. Nothing groundbreaking, for sure, but fairly respectable for the price. Assuming it can pull at least 30 FPS in other recent games, this will be a good option for DIY builders who want to get started with PC gaming on a budget.

All in all, the NVIDIA GeForce GTX 650 looks to be a decent card and finally rounds out the Kepler architecture. At this price point, NVIDIA can finally give every gamer a Kepler option instead of continuing to rely on older cards to answer AMD at the lower price points. I’m interested to see how AMD answers this, and specifically if gamers will see more price cuts on the AMD side.

GTX 650 Specs.jpg

If you have not already, I strongly recommend you give our previous Kepler GPU reviews a read through for a look at what NVIDIA’s latest architecture is all about.

PC Perspective Kepler-based GTX Graphics Card Reviews:

PNY GTX 660 Ti Spotted for $299, Some Hope for Gamers After All?

Subject: Graphics Cards | August 6, 2012 - 12:56 AM |
Tagged: pny, nvidia, kepler, gtx 660 Ti, geforce

I reported earlier today that a Swedish retailer had listed a GTX 660 Ti for pre-order at 2,604 SEK (~$387). Assuming that figure was legitimate, it puts a serious hurt on the dreams of a $300 gaming card that performs very closely to the more expensive GTX 670 Kepler-based NVIDIA GPU. Bringing some of that hope back is graphics card news and reviews website Videocardz, which claims to have found US retailer pricing for the upcoming NVIDIA graphics card. In two screenshots, the site captured a page from what appears to be Cost Central that lists the MSRP of the GTX 660 Ti at $349.99. Even better is the second screenshot: it shows a (likely reference design) PNY Technologies GTX 660 Ti for $299.99 USD. The model number listed on both sites is VCGGTX660TXPB, which seems to indicate that it is being sold for less than MSRP over at MacMall.com.

PNY GTX 660 Ti MacMall.jpg

The MSRP does further suggest that most graphics cards should be closer to $400 than $300, however. Especially for a new product, the MSRP is usually a good indication of where prices will center. With an MSRP of $349.99 for what is likely a reference card, custom designs should be more expensive and may even push that $400 mark.

On the other hand, it may yet be possible to snag a small number of cards closer to $300 from some retailers with some shopping around and instant rebates, but it is difficult to say with certainty either way until the cards are official and actually purchasable on major retailers’ websites.

In this case, I’m hoping to be proven wrong, as I do want to see a $300 GPU with hardware specifications that are very close to the GTX 670! Now that we have US pricing, it appears that the launch is imminent; therefore, it should be possible to get your hands on one (and see the final prices) very soon.

Source: Videocardz