Author:
Manufacturer: Various

Competition is a Great Thing

While doing some testing with the AMD Athlon 5350 Kabini APU to determine its flexibility as a low cost gaming platform, we decided to run a handful of tests to measure something else that is getting a lot of attention right now: AMD Mantle and NVIDIA's 337.50 driver.

Earlier this week I posted a story that looked at performance scaling of NVIDIA's new 337.50 beta driver compared to the previous 335.23 WHQL release. The goal was to assess the DX11 efficiency improvements that the company said it had been working on and had implemented into this latest beta driver. In the end, we found some instances where games scaled by as much as 26% or even 35%, but other cases where there was little to no gain with the new driver. We looked at both single GPU and multi-GPU scenarios, though mostly on high end CPU hardware.

Earlier in April I posted an article looking at Mantle, AMD's low level API that is unique to its ecosystem, and how it scaled on various pieces of hardware in Battlefield 4. This was the first major game to implement Mantle and it remains the biggest name in the field. While we definitely saw some improvements in gaming experiences with Mantle, there was still work to be done when it came to multi-GPU scaling and frame pacing.

Both parties in this debate were showing promise but obviously both were far from perfect.

am1setup.jpg

While we were benchmarking the new AMD Athlon 5350 Kabini based APU, an incredibly low cost processor that Josh reviewed in April, it made sense to test out both Mantle and NVIDIA's 337.50 driver in an interesting side-by-side comparison.

Continue reading our story on the scaling performance of AMD Mantle and NVIDIA's 337.50 driver with Star Swarm!!

Author:
Manufacturer: NVIDIA

SLI Testing

Let's see if I can start this story without sounding too much like a broken record when compared to the news post I wrote late last week on the subject of NVIDIA's new 337.50 driver. In March, while attending the Game Developers Conference to learn about the upcoming DirectX 12 API, I sat down with NVIDIA to talk about changes coming to its graphics driver that would affect current users of shipping DX9, DX10 and DX11 games.

As I wrote then:

What NVIDIA did want to focus on with us was the significant improvements that have been made to the efficiency and performance of DirectX 11. When NVIDIA is questioned as to why they didn't create their own Mantle-like API if Microsoft was dragging its feet, they point to the vast improvements that are possible, and have already been made, with existing APIs like DX11 and OpenGL. The idea is that rather than spend resources on creating a completely new API that needs to be integrated in a totally unique engine port (see Frostbite, CryEngine, etc.), NVIDIA has instead improved the performance, scaling, and predictability of DirectX 11.

NVIDIA claims that these fixes are not game specific and will improve performance and efficiency for a lot of GeForce users. Even if that is the case, we will only really see these improvements surface in titles that have addressable CPU limits or on very low end hardware, similar to how Mantle works today.

In truth, this is something that both NVIDIA and AMD have likely been doing all along, but NVIDIA has renewed purpose thanks to the pressure that AMD's Mantle has placed on it, at least from a marketing and PR point of view. It turns out that the driver that starts to implement all of these efficiency changes is the recent 337.50 release. On Friday I wrote up a short story that tested a particularly good example of the performance changes, Total War: Rome II, with a promise to follow up this week with additional hardware and games. (As it turns out, the results from Rome II are...an interesting story. More on that on the next page.)

slide1.jpg

Today I will be looking at a seemingly random collection of gaming titles, running on a reconfigured test bed we had in the office, in an attempt to get some idea of the overall robustness of the 337.50 driver and its advantages over the 335.23 release that came before it. Does NVIDIA have solid ground to stand on when it comes to the capabilities of current APIs versus what AMD is offering today?

Continue reading our analysis of the new NVIDIA 337.50 Driver!!

NVIDIA GeForce Driver 337.50 Early Results are Impressive

Subject: Graphics Cards | April 11, 2014 - 03:30 PM |
Tagged: nvidia, geforce, dx11, driver, 337.50

UPDATE: We have put together a much more comprehensive story based on the NVIDIA 337.50 driver that includes more cards and more games while also disputing the Total War: Rome II results seen here. Be sure to read it!!

When I spoke with NVIDIA after the announcement of DirectX 12 at GDC this past March, a lot of the discussion centered around a pending driver release that promised impressive performance advances with current DX11 hardware and DX11 games. 

What NVIDIA did want to focus on with us was the significant improvements that have been made to the efficiency and performance of DirectX 11. When NVIDIA is questioned as to why they didn't create their own Mantle-like API if Microsoft was dragging its feet, they point to the vast improvements that are possible, and have already been made, with existing APIs like DX11 and OpenGL. The idea is that rather than spend resources on creating a completely new API that needs to be integrated in a totally unique engine port (see Frostbite, CryEngine, etc.), NVIDIA has instead improved the performance, scaling, and predictability of DirectX 11.

09.jpg

NVIDIA claims that these fixes are not game specific and will improve performance and efficiency for a lot of GeForce users. Even if that is the case, we will only really see these improvements surface in titles that have addressable CPU limits or on very low end hardware, similar to how Mantle works today.

Lofty goals to be sure. This driver was released last week and I immediately wanted to test and verify many of these claims. However, a certain other graphics project kept me occupied most of the week and then a short jaunt to Dallas kept me from the task until yesterday. 

To be clear, I am planning to look at several more games and card configurations next week, but I thought it was worth sharing our first set of results. The test bed in use is the same as our standard GPU reviews.

Test System Setup
CPU: Intel Core i7-3960X Sandy Bridge-E
Motherboard: ASUS P9X79 Deluxe
Memory: Corsair Dominator DDR3-1600 16GB
Hard Drive: OCZ Agility 4 256GB SSD
Sound Card: On-board
Graphics Cards: NVIDIA GeForce GTX 780 Ti 3GB, NVIDIA GeForce GTX 770 2GB
Graphics Drivers: NVIDIA 335.23 WHQL, 337.50 Beta
Power Supply: Corsair AX1200i
Operating System: Windows 8 Pro x64

The most interesting claims from NVIDIA were spikes as high as 70%+ in Total War: Rome II, so I decided to start there. 

First up, let's take a look at the GTX 780 Ti SLI results, using a pair of NVIDIA's flagship gaming cards.

TWRome2_2560x1440_OFPS.png

TWRome2_2560x1440_PER.png

TWRome2_2560x1440_PLOT.png

With this title running at the Extreme preset, the GTX 780 Ti SLI setup jumps from an average frame rate of 59 FPS to 88 FPS, an increase of 48%! Frame rate variance does increase a bit with the faster average frame rate, but it stays within the limits of smoothness, if only barely.
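
To put numbers like these in context, here is a minimal sketch of how average frame rate and frame time variance can be pulled out of a frame time log of the kind behind the plots above. The sample frame times are made up for illustration and are not taken from our capture data.

```python
# Minimal sketch: pull average FPS and frame time variance out of a frame time log.
# The sample values below are illustrative only, not taken from our capture data.

frame_times_ms = [16.9, 17.2, 16.5, 18.1, 16.8, 17.0, 19.4, 16.6]  # one entry per frame

avg_frame_time = sum(frame_times_ms) / len(frame_times_ms)
avg_fps = 1000.0 / avg_frame_time  # 1000 ms in a second

# A simple variance measure: how far each frame strays from the average frame time.
worst_delta = max(abs(t - avg_frame_time) for t in frame_times_ms)

print(f"Average FPS: {avg_fps:.1f}")
print(f"Worst single-frame deviation: {worst_delta:.1f} ms")

# The driver-to-driver gain is just the ratio of average frame rates; 59 to 88 FPS
# works out to roughly the 48% improvement quoted above (within chart rounding).
old_fps, new_fps = 59.0, 88.0
print(f"Gain: {(new_fps - old_fps) / old_fps * 100:.0f}%")
```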

Next up, the GeForce GTX 770 SLI results.

TWRome2_2560x1440_OFPS.png

TWRome2_2560x1440_PER.png

TWRome2_2560x1440_PLOT.png

Results here are even more impressive as the pair of GeForce GTX 770 cards running in SLI jumps from 29.5 average FPS to 51 FPS, an increase of 72%!! Even better, this occurs without any kind of increase in frame rate variance; in fact, the blue line of the 337.50 driver actually performs better in that respect.

All of these tests were run with the latest patch on Total War: Rome II and I did specifically ask NVIDIA if there were any differences in the SLI profiles between these two drivers for this game. I was told absolutely not - this just happens to be the poster child example of changes NVIDIA has made with this DX11 efficiency push.

Of course, not all games are going to see performance improvements like this, or even improvements that are measurable at all. Just as we have seen with other driver enhancements over the years, different hardware configurations, image quality settings and even the scenes used to test each game will shift the deltas considerably. I can tell you already that, based on some results I have (but am holding for my story next week), performance improvements in other games range from under 5% up to 35% or more. While those aren't reaching the 72% level we saw in Total War: Rome II above, these kinds of experience changes with driver updates are impressive to see.

Even though we are likely looking at the "best case" for NVIDIA's 337.50 driver changes with the Rome II results here, clearly there is merit behind what the company is pushing. We'll have more results next week!

Some NVIDIA R337.50 Driver Controversy

Subject: General Tech, Graphics Cards | April 8, 2014 - 06:44 PM |
Tagged: nvidia, geforce, drivers

NVIDIA's GeForce 337.50 Driver was said to address performance when running DirectX 11-based software. Now that it is out, multiple sources are claiming the vendor-supplied benchmarks are exaggerated or simply untrue.

nvidia-337-sli.png

ExtremeTech compiled benchmarks from Anandtech and BlackHoleTec.

Going alphabetically, Anandtech tested the R337.50 and R331.xx drivers with a GeForce GTX 780 Ti, finding a double-digit increase with BioShock: Infinite and Metro: Last Light and basically zero improvement for GRID 2, Rome II, Crysis: Warhead, Crysis 3, and Company of Heroes 2. Adding a second GTX 780 Ti into the mix helped matters, seeing a 76% increase in Rome II and about 9% in most of the other titles.

BlackHoleTec is next. Testing a mid-range but overclocked GeForce GTX 760 on the R337.50 and R335.23 drivers, they found slight improvements (1-3 FPS), except in Battlefield 4 and Skyrim (the latter is not DX11, to be fair), which saw a slight reduction in performance (about 1 FPS).

ExtremeTech, finally, published one benchmark, but it did not compare between drivers. All it really shows is CPU scaling with AMD GPUs.

Unfortunately, I do not have any benchmarks of my own to present because I am not a GPU reviewer, nor do I have a GPU testbed. Ironically, the launch of the Radeon R9 295X2 video card might have lessened the number of benchmarks available for NVIDIA's driver - who knows?

If it is true, and R337.50 does basically nothing in a setup with one GPU, I am not exactly sure what NVIDIA was hoping to accomplish. Of course someone was going to test it and publish their results. The point of the driver update was apparently to show how having a close relationship with Microsoft can lead to better PC gaming products now and in the future. That can really only be the story if you have something to show. Now, I expect, we will probably see more positive commentary about Mantle - at least when people are not talking about DirectX 12.

If you own a GeForce card, I would still install the new driver, especially if you have an SLI configuration. Scaling to a second GPU does see measurable improvements with Release 337.50. Even for a single-card configuration, it certainly should not hurt anything.

Source: ExtremeTech
Author:
Manufacturer: Various

EVGA GTX 750 Ti ACX FTW

The NVIDIA GeForce GTX 750 Ti has been getting a lot of attention around hardware circles recently, and for good reason. It remains interesting from a technology standpoint as it is the first, and still the only, Maxwell based GPU available for desktop users. It is a completely new architecture, built with power efficiency (and Tegra) in mind. With it, the GTX 750 Ti is able to push a lot of performance into a very small power envelope while still maintaining some very high clock speeds.

IMG_9872.JPG

NVIDIA’s flagship mainstream part is also still the leader when it comes to performance per dollar in this segment (at least for as long as it takes for AMD’s Radeon R7 265 to become widely available). We have noticed a few cases where the long standing shortages and price hikes from coin mining have dwindled, which is great news for gamers but may also be bad news for NVIDIA’s GPUs in some areas. Still, even if the R7 265 becomes available, the GTX 750 Ti remains the best card you can buy that doesn’t require a power connection. This puts it in a unique position for power limited upgrades.

After our initial review of the reference card, and then an interesting look at how the card can be used to upgrade an older or under powered PC, it is time to take a quick look at a set of three different retail cards that have made their way into the PC Perspective offices.

On the chopping block today we’ll look at the EVGA GeForce GTX 750 Ti ACX FTW, the Galaxy GTX 750 Ti GC and the PNY GTX 750 Ti XLR8 OC.  All of them are non-reference, all of them are overclocked, but you’ll likely be surprised how they stack up.

Continue reading our round up of EVGA, Galaxy and PNY GTX 750 Ti Graphics Cards!!

Win a GeForce GTX 750 Ti by Showing Off Your Upgrade-Worthy Rig!

Subject: General Tech, Graphics Cards | March 11, 2014 - 09:06 PM |
Tagged: nvidia, gtx 750 ti, giveaway, geforce, contest

UPDATE:  We have our winners! Congrats to the following users that submitted upgrade worthy PCs that will be shipped a free GeForce GTX 750 Ti courtesy of NVIDIA! 

  • D. Todorov
  • C. Fogg
  • K. Rowe
  • K. Froehlich
  • D. Aarssen

When NVIDIA launched the GeForce GTX 750 Ti last month, it convinced us to give this highly efficient graphics card a chance to upgrade some off-the-shelf, underpowered PCs. In a story that we published just a week ago, we were able to convert three pretty basic and pretty boring computers into impressive gaming PCs by adding in the $150 Maxwell-based graphics card.

gateway-bioshock.png

If you missed the video we did on the upgrade process and results, check it out here.

Now we are going to give our readers the chance to do the same thing to their PCs.  Do you have a computer in your home that is just not up to the task of playing the latest PC games?  Then this contest is right up your alley.

IMG_9552.JPG

Prizes: 1 of 5 GeForce GTX 750 Ti Graphics Cards

Your Task: You are going to have to do a couple of things to win one of these cards in our "Upgrade Story Giveaway."  We want to make sure these cards are going to those of you that can really use it so here is what we are asking for (you can find the form to fill out right here):

  1. Show us your PC that is in need of an upgrade!  Take a picture of your machine with this contest page on the screen or something similar and share it with us.  You can use Imgur.com to upload your photo if you need some place to put it.  An inside shot would be good as well.  Place the URL for your image in the appropriate field in the form below.
  2. Show us your processor and integrated graphics that need some help!  That means you can use a program like CPU-Z to view the processor in your system and then GPU-Z to show us the graphics setup.  Take a screenshot of both of these programs so we can see what hardware you have that needs more power for PC gaming!  Place the URL for that image in the correct field below.
  3. Give us your name and email address so we can contact you for more information if you win!
  4. Leave us a comment below to let me know why you think you should win!!
  5. Subscribing to our PCPer Live! mailing list or even our PCPer YouTube channel wouldn't hurt either...

That's pretty much it!  We'll run this promotion for 2 weeks with a conclusion date of March 13th. That should give you plenty of time to get your entry in.

Good luck!!

NVIDIA Coin Mining Performance Increases with Maxwell and GTX 750 Ti

Subject: General Tech, Graphics Cards | February 20, 2014 - 05:45 PM |
Tagged: nvidia, mining, maxwell, litecoin, gtx 750 ti, geforce, dogecoin, coin, bitcoin, altcoin

As we have talked about on several different occasions, altcoin mining (anything that is NOT Bitcoin specifically) is a force on the current GPU market whether we like it or not. Traditionally, miners have only bought AMD-based GPUs due to the performance advantage over their NVIDIA competition. However, with continued development of the cudaMiner application over the past few months, NVIDIA cards have been gaining performance in Scrypt mining.

The biggest performance change we've seen yet comes with a new version of cudaMiner released yesterday. This new version (2014-02-18) brings initial support for the Maxwell architecture, which just debuted in the GTX 750 and 750 Ti. With support for Maxwell, mining starts to become a more compelling option on this new NVIDIA GPU.

With the new version of cudaMiner on the reference version of the GTX 750 Ti, we were able to achieve a hash rate of 263 KH/s, impressive when you compare it to the previous generation, Kepler-based GTX 650 Ti, which tops out at about 150 KH/s.

IMG_9552.JPG

As you may know from our full GTX 750 Ti review, the GM107 overclocks very well. We were able to push our sample to the highest configurable offset of +135 MHz, with an additional 500 MHz added to the memory frequency and a 31 mV bump to the voltage offset. All of this combined for a ~1200 MHz clock speed while mining and an additional 40 KH/s or so of performance, bringing us to just under 300 KH/s with the 750 Ti.

perf.png

As we compare the performance of the 750 Ti to AMD GPUs and previous generation NVIDIA GPUs, we start to see how impressively this card stacks up considering its $150 MSRP. For less than half the price of the GTX 770, and roughly the same price as an R7 260X, you can achieve the same hash rate.

power.png

When we look at power consumption based on the TDP of each card, this comparison only becomes more impressive. At 60W, no card comes close to the performance of the 750 Ti when mining. This means you will spend less to run a 750 Ti than an R7 260X or GTX 770 for roughly the same hash rate.

perfdollar.png

Taking a look at the performance per dollar ratings of these graphics cards, we see the two top performers are the AMD R7 260X and our overclocked GTX 750 Ti.

perfpower.png

However, when looking at the performance per watt differences across the field, the GTX 750 Ti looks even more impressive. While most miners may think they don't care about power draw, it can help your bottom line. By being able to buy a smaller, less expensive power supply, the payoff date for the hardware is moved up. This also bodes well for future Maxwell based graphics cards that we will likely see released later in 2014.
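
To illustrate why power draw matters for the bottom line, here is a rough sketch of hash rate per watt and a simple payoff estimate. The 750 Ti hash rate, TDP and price come from our numbers above; the R7 260X and GTX 770 power and price figures are approximations for comparison, and the per-KH/s income and electricity rate are placeholders you would replace with current market values.

```python
# Rough payoff sketch for Scrypt mining cards. The 750 Ti figures come from our
# testing above; the other cards' power and price numbers are approximations, and
# the income and electricity rates are placeholders that depend on the market.

cards = {
    # name: (hash rate in KH/s, board power in W, purchase price in USD)
    "GTX 750 Ti": (263, 60, 150),    # stock settings, from our testing above
    "R7 260X":    (265, 115, 150),   # approximate figures for comparison
    "GTX 770":    (265, 230, 330),   # approximate figures for comparison
}

electricity_usd_per_kwh = 0.12   # placeholder; use your local rate
income_per_khs_per_day = 0.01    # placeholder; depends on coin price and difficulty

for name, (khs, watts, price) in cards.items():
    perf_per_watt = khs / watts
    power_cost_per_day = watts / 1000 * 24 * electricity_usd_per_kwh
    net_per_day = khs * income_per_khs_per_day - power_cost_per_day
    payoff_days = price / net_per_day if net_per_day > 0 else float("inf")
    print(f"{name}: {perf_per_watt:.1f} KH/s per watt, payoff in ~{payoff_days:.0f} days")
```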

Continue reading our look at Coin Mining performance with the GTX 750 Ti and Maxwell!!

Was leading with a low end Maxwell smart?

Subject: Graphics Cards | February 19, 2014 - 04:43 PM |
Tagged: geforce, gm107, gpu, graphics, gtx 750 ti, maxwell, nvidia, video

We finally saw Maxwell yesterday, with a new design for the SMs called SMM, each of which consists of four blocks of 32 dedicated, non-shared CUDA cores. In theory that should allow NVIDIA to pack more SMMs onto the chip than they could with the previous SMX units. The new design was released on a $150 card, which means we don't really get to see what it is capable of yet. At that price it competes with AMD's R7 260X and R7 265, at least if you can find them at their MSRP and not at prices inflated by cryptocurrency mining. Legit Reviews compared the performance of two overclocked GTX 750 Ti cards against those two cards, as well as the previous generation GTX 650 Ti Boost, across a wide selection of games to see how the new card stacks up performance-wise, which you can read here.

That is of course after you read Ryan's full review.
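
For a sense of scale on that SMM layout, here is a quick bit of arithmetic comparing cores per multiprocessor between Maxwell's SMM and Kepler's SMX, along with the total for GM107, assuming the five-SMM configuration NVIDIA described for this chip.

```python
# Quick arithmetic on the SMM layout described above. The 5-SMM configuration for
# GM107 is our assumption based on NVIDIA's disclosures for this chip.

blocks_per_smm = 4        # four dedicated, non-shared blocks per SMM
cores_per_block = 32
cores_per_smm = blocks_per_smm * cores_per_block   # 128 CUDA cores per SMM
cores_per_kepler_smx = 192                         # Kepler's larger, shared design

smm_count_gm107 = 5
total_cores = smm_count_gm107 * cores_per_smm      # 640 CUDA cores on the GTX 750 Ti

print(f"Cores per SMM: {cores_per_smm} (vs {cores_per_kepler_smx} per Kepler SMX)")
print(f"GM107 total CUDA cores: {total_cores}")
```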

nvidia-geforce-gtx750ti-645x399.jpg

"NVIDIA today announced the new GeForce GTX 750 Ti and GTX 750 video cards, which are very interesting to use as they are the first cards based on NVIDIA's new Maxwell graphics architecture. NVIDIA has been developing Maxwell for a number of years and have decided to launch entry-level discrete graphics cards with the new technology first in the $119 to $149 price range. NVIDIA heavily focused on performance per watt with Maxwell and it clearly shows as the GeForce GTX 750 Ti 2GB video card measures just 5.7-inches in length with a tiny heatsink and doesn't require any internal power connectors!"


Author:
Manufacturer: Various

An Upgrade Project

When NVIDIA started talking to us about the new GeForce GTX 750 Ti graphics card, one of the key points the company emphasized was the potential for this first-generation Maxwell GPU to be used to upgrade smaller form factor or OEM PCs. Without the need for an external power connector, the GTX 750 Ti would provide a clear performance delta over integrated graphics with minimal cost and minimal power consumption, or so the story went.

Eager to put this theory to the test, we decided to put together a project looking at the upgrade potential of off the shelf OEM computers purchased locally.  A quick trip down the road to Best Buy revealed a PC sales section that was dominated by laptops and all-in-ones, but with quite a few "tower" style desktop computers available as well.  We purchased three different machines, each at a different price point, and with different primary processor configurations.

The lucky winners included a Gateway DX4885, an ASUS M11BB, and a Lenovo H520.

IMG_9724.JPG

Continue reading An Upgrade Story: Can the GTX 750 Ti Convert OEM PCs to Gaming PCs?

NVIDIA Releases GeForce TITAN Black

Subject: General Tech, Graphics Cards | February 18, 2014 - 09:03 AM |
Tagged: nvidia, gtx titan black, geforce titan, geforce

NVIDIA has just announced the GeForce GTX Titan Black. Based on the full high-performance Kepler (GK110) chip, it is mostly expected to be a lower cost development platform for GPU compute applications. All 2,880 single precision (FP32) CUDA cores and 960 double precision (FP64) CUDA cores are unlocked, yielding 5.1 TeraFLOPS of single precision and 1.3 TeraFLOPS of double precision performance. The chip contains 1536 KB of L2 cache and is paired with 6GB of video memory on the board.
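
Those throughput numbers follow from the usual peak-FLOPS arithmetic: cores times two operations per clock (for a fused multiply-add) times clock speed. Below is a minimal sketch assuming the 889 MHz base clock NVIDIA lists for the card (our assumption; boost clocks would raise the result).

```python
# Peak throughput sketch: FLOPS = cores x 2 ops per clock (fused multiply-add) x clock.
# The 889 MHz base clock is an assumption taken from NVIDIA's published spec;
# boost clocks would push the result higher.

fp32_cores = 2880
base_clock_ghz = 0.889   # assumed base clock

fp32_tflops = fp32_cores * 2 * base_clock_ghz / 1000
print(f"Peak FP32: {fp32_tflops:.2f} TFLOPS")  # ~5.1 TFLOPS, matching the figure above

# GK110 carries one FP64 unit for every three FP32 cores (960 vs 2,880), so peak
# double precision throughput scales down by roughly that same ratio.
```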

nvidia-titan-black-2.jpg

The original GeForce GTX Titan launched last year, almost to the day. Also based on the GK110 design, it featured uncapped double precision performance, though with one SMX disabled. Of course, no component at the time contained a fully-enabled GK110 processor. The first product with all 15 SMX units active was not realized until the Quadro K6000, announced in July but only available in the fall. It was followed by the GeForce GTX 780 Ti (with a fraction of its FP64 performance) in November, and the fully powered Tesla K40 less than two weeks after that.

nvidia-titan-black-3.jpg

For gaming applications, this card is expected to have comparable performance to the GTX 780 Ti... unless you can find a use for the extra 3GB of memory. Games do not see much benefit from the extra 64-bit floating point (double precision) performance because the majority of their calculations are done at 32-bit precision.

The NVIDIA GeForce GTX Titan Black is available today at a price of $999.

Source: NVIDIA