Podcast #298 - Next Generation Intel Motherboards, Crossfire R9 295x2s, Corsair AX1500i Power Supply, and more!

Subject: General Tech | May 1, 2014 - 09:35 AM |
Tagged: video, r9 295x2, podcast, nvidia, Next Generation, Intel, corsair, AX1500i, amd, 295x2

PC Perspective Podcast #298 - 05/01/2014

Join us this week as we discuss Next Generation Intel Motherboards, Crossfire R9 295x2s, Corsair AX1500i Power Supply, and more!

You can subscribe to us through iTunes and you can still access it directly through the RSS page HERE.

The URL for the podcast is: http://pcper.com/podcast - Share with your friends!

  • iTunes - Subscribe to the podcast directly through the iTunes Store
  • RSS - Subscribe through your regular RSS reader
  • MP3 - Direct download link to the MP3 file

Hosts: Josh Walrath, Jeremy Hellstrom, Allyn Malventano, and Morry Teitelman

Program length: 1:22:18
(There is a video version of this episode, and it will be streamed.)
  1. Week in Review:
  2. News items of interest:
  3. Hardware/Software Picks of the Week:
  4. Closing/outro

Be sure to subscribe to the PC Perspective YouTube channel!!

 

NVIDIA Announces Watch_Dogs Bundle with GeForce GPUs

Subject: Graphics Cards | April 29, 2014 - 07:22 AM |
Tagged: nvidia, watch_dogs, watch dogs, bundle, geforce

A bit of a surprise email found its way to my inbox today, announcing NVIDIA's partnership with Ubisoft to include copies of the upcoming Watch_Dogs game with GeForce GTX graphics cards.

watchdogs.jpg

Gamers who purchase a GeForce GTX 780 Ti, GTX 780, GTX 770 or GTX 760 from select retailers will qualify for a free copy of the game. You can find details on this bundle, and the GPUs that qualify for it, at Amazon.com!

The press release also confirms the inclusion of NVIDIA-exclusive features like TXAA and HBAO+ in the game itself, which is interesting. From what I am hearing, Watch_Dogs is going to be a beast of a game on GPU hardware, and we are looking forward to using it as a test platform going forward.

Full press release is included below.

OWN THE TECH AND CONTROL THE CITY WITH NVIDIA® AND UBISOFT®

Select GeForce GTX GPUs Now Include the Hottest Game of the Year: Watch_Dogs™

Santa Clara, CA  April 29, 2014 — Destructoid calls it one of the “most wanted games of 2014.” CNET said it was “one of the most anticipated games in recent memory.” MTV said it’s one of the “Can’t-Miss Video Games of 2014.” This, all before anyone out there has even played it.

So, starting today(1), gamers who purchase select NVIDIA® GeForce® GTX® 780 Ti, 780, 770 and 760 desktop GPUs can get their chance to play Watch_Dogs™, the new PC game taking the world by storm and latest masterpiece from Ubisoft®.

Described as a “polished, refined and truly next generation experience,” in Watch_Dogs you play as Aiden Pearce, a brilliant hacker whose criminal past led to a violent family tragedy. While seeking justice, you will monitor and hack those around you, access omnipresent security cameras, download personal information to locate a target, control traffic lights and public transportation to stop the enemy and more.

Featuring NVIDIA TXAA and HBAO+ technology for an interactive, immersive experience, it’s clear that gamers can’t wait to play Watch_Dogs, especially considering the effusive praise that the official trailer received. Launched mere weeks ago, the trailer has already been viewed more than a combined 650,000 times. For gamers, Watch_Dogs seamlessly blends a mixture of single-player and multiplayer action in a way never before seen, and Ubisoft has gone one step further in creating a unique ctOS mobile companion app for users of smartphone and tablet devices allowing for even greater access to the fun. If you haven’t checked out the trailer, please check it out here: https://www.youtube.com/watch?v=3eHCJ8pWdf0.

The GeForce GTX and Watch_Dogs bundle is available starting today from leading e-tailers including Newegg, Amazon.com, TigerDirect, NCIX; add-in card vendors such as EVGA; and nationwide system builders including AVADirect, CyberPower, Digital Storm, Falcon Northwest, iBUYPOWER, Maingear, Origin PC, Puget Systems, V3 Gaming PC and Velocity Micro. For a full list of participating partners, please visit: www.GeForce.com/GetWatchDogs.

Source: NVIDIA

Another GPU Driver Showdown: AMD vs NVIDIA in Linux

Subject: General Tech, Graphics Cards | April 27, 2014 - 01:22 AM |
Tagged: nvidia, linux, amd

GPU drivers have been a hot and sensitive topic at the site, especially recently, probably spurred on by the announcements of Mantle and DirectX 12. These two announcements admit and illuminate (like a Christmas tree) the limitations that APIs place on gaming performance. Both AMD and NVIDIA have had their recent successes and failures on their respective fronts. This post will not deal with that, though. It is a straight round-up of new GPUs running the latest drivers... in Linux.

7-TuxGpu.png

Again, results are mixed and a bit up for interpretation.

In all, NVIDIA tends to have better performance with its 700-series parts than equivalently priced R7 or R9 products from AMD, especially in less demanding Source Engine titles such as Team Fortress 2. Sure, even the R7 260X was almost at 120 FPS, but the R9 290 was neck-and-neck with the GeForce GTX 760. The GeForce GTX 770, about $50 cheaper than the R9 290, had a healthy 10% lead over it.

In Unigine Heaven, however, the AMD R9 290 passed the NVIDIA GTX 770 by a small margin, coming right in line with its aforementioned $50-higher price tag. In that situation, where performance became non-trivial, AMD caught up (but did not pull ahead). Also, AMD embraces third-party driver support more than NVIDIA does. On the other hand, NVIDIA's proprietary drivers are demonstrably better, even if you could argue that the specific cases are trivial because the performance is overkill either way.

And then there's Unvanquished, where AMD's R9 290 did not achieve triple-digit FPS scores despite the $250 GTX 760 getting 110 FPS.

Update: As pointed out in the comments, some games perform significantly better on the $130 R7 260X than the $175 GTX 750 Ti (HL2: Lost Coast, TF2, OpenArena, Unigine Sanctuary). Some other games are the opposite, with the 750 Ti holding a sizable lead over the R7 260X (Unigine Heaven and Unvanquished). Again, Linux performance is a grab bag between vendors.

There are a lot of things to consider, especially if you are getting into Linux gaming. I expect that it will be a hot topic soon, as it picks up... Steam.

Source: Phoronix

"NVIDIA test model(SHIELD)" with Tegra K1 on AnTuTu

Subject: General Tech, Systems, Mobile | April 27, 2014 - 12:30 AM |
Tagged: nvidia, shield, shield 2, AnTuTu

VR-Zone is claiming that this is the successor to NVIDIA's SHIELD portable gaming system. An AnTuTu benchmark entry was found for a device called "NVIDIA test model(SHIELD)" with an "NVIDIA Gefroce(Kepler Graphics)" GPU (typos left as-is). My gut says it is valid, but I hesitate to vouch for the rumor. Even if the listing came from NVIDIA, which the improper spelling and capitalization of "GeForce" calls into question, it could easily be an internal prototype, and it may even have been given the "SHIELD" label (which is properly spelled and capitalized) incorrectly.

nvidia-shield-antutu.jpg

Image Credit: AnTuTu.com

As for the camera listing, it would make sense for a new SHIELD to get one at standard definition (0.3MP -- probably 640x480). The fact that the original SHIELD shipped without any camera at all still confuses me. A low-resolution sensor does not make much sense either, seeming like an almost pointless upgrade, but NVIDIA could be using it for a specific application or built-in purpose.

Or, it could be an irrelevant benchmark listing.

Either way, there are rumors floating around about a SHIELD 2 being announced at E3 in June, and it is unlikely that NVIDIA will give up on the handheld any time soon. Whether that means new hardware or just more software updates is anyone's guess. The Tegra K1 would be a good SoC for a new SHIELD to launch with, however, given its full OpenGL 4.4 and compute support (the hardware supports up to OpenCL 1.2, although driver support will apparently be "based on customer needs"; PDF - page 8).

Waiting. Seeing. You know the drill.

Source: VR-Zone

Enter Tegra K1 CUDA Vision Challenge, Win Jetson TK1

Subject: General Tech | April 25, 2014 - 10:43 AM |
Tagged: nvidia, contest, jetson tk1, kepler

Attention enthusiasts, developers and creators. Are you working on a new embedded computing application?

Meet the Jetson TK1 Developer Kit. It’s the world’s first mobile supercomputer for embedded systems, putting unprecedented computing performance in a low-power, portable and fully programmable package.

image002.jpg

Power, ports, and portability: the Jetson TK1 development kit.

It’s the ultimate platform for developing next-generation computer vision solutions for robotics, medical devices, and automotive applications.

And we’re giving away 50 of them as part of our Tegra K1 CUDA Vision Challenge.

In addition to the Tegra K1 processor, the Jetson TK1 DevKit is equipped with 2 GB of RAM, 16 GB of storage and a host of ports and connectivity options.

And, because it offers full support for CUDA, the most pervasive, easy-to-use parallel computing platform and programming model, it’s much easier to program than the FPGA, custom ASIC and DSP processors that are typically used in today’s embedded systems.

Jetson TK1 is based on the Kepler computing architecture, the same technology powering today's supercomputers, professional workstations and high-end gaming rigs. It has 192 CUDA cores, delivering over 300 GFLOPS of performance, and it provides full support for OpenGL 4.4 and CUDA 6.0, as well as GPU-accelerated OpenCV.
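To give a concrete (and purely hypothetical) sense of what CUDA development on a board like this looks like, here is a minimal sketch of the kind of embarrassingly parallel, vision-style kernel the press release is alluding to. The frame size, threshold value and test data below are assumptions made up for illustration; this is not code from NVIDIA's contest materials.

```
// Minimal, illustrative CUDA C++ sketch: threshold a grayscale frame on the GPU,
// the sort of embarrassingly parallel step many vision pipelines start with.
// Frame size, threshold value, and test data are invented for this example.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void threshold(const unsigned char* in, unsigned char* out,
                          int n, unsigned char cutoff)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;   // one thread per pixel
    if (i < n)
        out[i] = (in[i] > cutoff) ? 255 : 0;
}

int main()
{
    const int n = 640 * 480;                         // one VGA-sized frame
    unsigned char *h_in  = new unsigned char[n];
    unsigned char *h_out = new unsigned char[n];
    for (int i = 0; i < n; ++i) h_in[i] = (unsigned char)(i % 256);  // fake input

    unsigned char *d_in, *d_out;
    cudaMalloc((void**)&d_in,  n);                   // device-side buffers
    cudaMalloc((void**)&d_out, n);
    cudaMemcpy(d_in, h_in, n, cudaMemcpyHostToDevice);

    int threads = 256;
    int blocks  = (n + threads - 1) / threads;       // enough blocks to cover n pixels
    threshold<<<blocks, threads>>>(d_in, d_out, n, 128);
    cudaMemcpy(h_out, d_out, n, cudaMemcpyDeviceToHost);

    printf("pixel 0 -> %d, pixel 200 -> %d\n", h_out[0], h_out[200]);
    cudaFree(d_in); cudaFree(d_out);
    delete[] h_in;  delete[] h_out;
    return 0;
}
```

On a kit like this, something along these lines would be built with nvcc from the CUDA toolkit; the point of the pitch above is that this is far less effort than targeting an FPGA or custom ASIC for the same job.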

Our Tegra K1 system-on-a-chip offers unprecedented power and portability.

Entering the Tegra K1 CUDA Vision Challenge is easy. Just tell us about your embedded application idea. All proposals must be submitted by April 30, 2014. Entries will be judged for innovation, impact on research or industry, public availability, and quality of work.

image003.jpg

By the end of May, the top 50 submissions will be awarded one of the first Jetson TK1 DevKits to roll off the production line, as well as access to technical support documents and assets.

The teams behind the five most noteworthy Jetson TK1 breakthroughs may get a chance to share their work at the NVIDIA GPU Technology Conference in 2015.

Source: NVIDIA
Author:
Subject: General Tech
Manufacturer: PC Perspective

AM1 Walks New Ground

After Josh's initial review of the AMD AM1 Platform and the Athlon 5350, we received a few requests to look at gaming performance with a discrete GPU installed. Even though this platform isn't being aimed at gamers looking to play demanding titles, we started to investigate this setup anyway.

While Josh liked the ASUS AM1I-A Mini ITX motherboard he used in his review, with only an x1 PCI-E slot it would be less than ideal for this situation.

71F0Lmi7WgL._SL1500_.jpg

Luckily we had the Gigabyte AM1M-S2H Micro ATX motherboard, which features a full-length PCI-E x16 slot as well as two x1 slots.

Don't be fooled by the shape of the slot, though; the AM1 platform still only offers 4 lanes of PCI-Express 2.0 (roughly 2 GB/s of bandwidth in each direction, versus 8 GB/s for a full x16 link). This, of course, means that the graphics card will not be running at full bandwidth. However, having the physical x16 slot makes it a lot easier to connect a discrete GPU without resorting to the ribbon-style riser cables that miners use.

Continue reading AMD AM1 Platform and Athlon 5350 with GTX 750 Ti - 1080p at under $450!!

Author:
Manufacturer: Various

Competition is a Great Thing

While doing some testing with the AMD Athlon 5350 Kabini APU to determine its flexibility as a low-cost gaming platform, we decided to run a handful of tests to measure something else that is getting a lot of attention right now: AMD Mantle and NVIDIA's 337.50 driver.

Earlier this week I posted a story that looked at performance scaling of NVIDIA's new 337.50 beta driver compared to the previous 335.23 WHQL release. The goal was to assess the DX11 efficiency improvements that the company stated it had been working on and had implemented into this latest beta driver. In the end, we found some instances where games scaled by as much as 35% and 26%, but other cases where there was little to no gain with the new driver. We looked at both single-GPU and multi-GPU scenarios, though mostly on high-end CPU hardware.

Earlier in April I posted an article looking at Mantle, AMD's own lower-level API unique to its ecosystem, and how it scaled on various pieces of hardware in Battlefield 4. This was the first major game to implement Mantle and it remains the biggest name in the field. While we definitely saw some improvements in gaming experiences with Mantle, there was work to be done when it comes to multi-GPU scaling and frame pacing.

Both parties in this debate were showing promise but obviously both were far from perfect.

am1setup.jpg

While we were benchmarking the new AMD Athlon 5350 Kabini-based APU, an incredibly low-cost processor that Josh reviewed in April, it made sense to test out both Mantle and NVIDIA's 337.50 driver in an interesting side-by-side comparison.

Continue reading our story on the scaling performance of AMD Mantle and NVIDIA's 337.50 driver with Star Swarm!!

Podcast #296 - NVIDIA's 337.50 Driver Improvements, Corsair H105, Intel Haswell Refresh details and more!

Subject: General Tech | April 17, 2014 - 11:58 AM |
Tagged: podcast, video, nvidia, 337.50, corsair, H105, amd, Intel, haswell, devil's canyon

PC Perspective Podcast #296 - 04/17/2014

Join us this week as we discuss NVIDIA's 337.50 Driver Improvements, Corsair H105, Intel Haswell Refresh details and more!

You can subscribe to us through iTunes and you can still access it directly through the RSS page HERE.

The URL for the podcast is: http://pcper.com/podcast - Share with your friends!

  • iTunes - Subscribe to the podcast directly through the iTunes Store
  • RSS - Subscribe through your regular RSS reader
  • MP3 - Direct download link to the MP3 file

Hosts: Ryan Shrout, Jeremy Hellstrom, Josh Walrath and Allyn Malventano

Program length: 1:25:06
 

Be sure to subscribe to the PC Perspective YouTube channel!!

 

 

Author:
Manufacturer: NVIDIA

SLI Testing

Let's see if I can start this story without sounding too much like a broken record compared to the news post I wrote late last week on the subject of NVIDIA's new 337.50 driver. In March, while attending the Game Developers Conference to learn about the upcoming DirectX 12 API, I sat down with NVIDIA to talk about changes coming to its graphics driver that would affect current users with shipping DX9, DX10 and DX11 games.

As I wrote then:

What NVIDIA did want to focus on with us was the significant improvements that have been made to the efficiency and performance of DirectX 11. When NVIDIA is questioned as to why they didn't create their own Mantle-like API while Microsoft was dragging its feet, they point to the vast improvements that are possible, and have been made, with existing APIs like DX11 and OpenGL. The idea is that rather than spend resources on creating a completely new API that needs to be integrated into a totally unique engine port (see Frostbite, CryEngine, etc.), NVIDIA has instead improved the performance, scaling, and predictability of DirectX 11.

NVIDIA claims that these fixes are not game-specific and will improve performance and efficiency for a lot of GeForce users. Even if that is the case, we will only really see these improvements surface in titles that have addressable CPU limits or on very low-end hardware, similar to how Mantle works today.

In truth, this is something that both NVIDIA and AMD have likely been doing all along, but NVIDIA has renewed purpose with the pressure that AMD's Mantle has placed on them, at least from a marketing and PR point of view. It turns out that the driver that starts to implement all of these efficiency changes is the recent 337.50 release, and on Friday I wrote up a short story that tested a particularly good example of the performance changes, Total War: Rome II, with a promise to follow up this week with additional hardware and games. (As it turns out, the results from Rome II are... an interesting story. More on that on the next page.)

slide1.jpg

Today I will be looking at a seemingly random collection of gaming titles, running on a reconfigured test bed we had in the office, in an attempt to get some idea of the overall robustness of the 337.50 driver and its advantages over the 335.23 release that came before it. Does NVIDIA have solid ground to stand on when it comes to the capabilities of current APIs over what AMD is offering today?

Continue reading our analysis of the new NVIDIA 337.50 Driver!!

NVIDIA GeForce Driver 337.50 Early Results are Impressive

Subject: Graphics Cards | April 11, 2014 - 12:30 PM |
Tagged: nvidia, geforce, dx11, driver, 337.50

UPDATE: We have put together a much more comprehensive story based on the NVIDIA 337.50 driver that includes more cards and more games while also disputing the Total War: Rome II results seen here. Be sure to read it!!

When I spoke with NVIDIA after the announcement of DirectX 12 at GDC this past March, a lot of the discussion centered around a pending driver release that promised impressive performance advances with current DX11 hardware and DX11 games. 

What NVIDIA did want to focus on with us was the significant improvements that have been made to the efficiency and performance of DirectX 11. When NVIDIA is questioned as to why they didn't create their own Mantle-like API while Microsoft was dragging its feet, they point to the vast improvements that are possible, and have been made, with existing APIs like DX11 and OpenGL. The idea is that rather than spend resources on creating a completely new API that needs to be integrated into a totally unique engine port (see Frostbite, CryEngine, etc.), NVIDIA has instead improved the performance, scaling, and predictability of DirectX 11.

09.jpg

NVIDIA claims that these fixes are not game-specific and will improve performance and efficiency for a lot of GeForce users. Even if that is the case, we will only really see these improvements surface in titles that have addressable CPU limits or on very low-end hardware, similar to how Mantle works today.

Lofty goals to be sure. This driver was released last week and I immediately wanted to test and verify many of these claims. However, a certain other graphics project kept me occupied most of the week and then a short jaunt to Dallas kept me from the task until yesterday. 

To be clear, I am planning to look at several more games and card configurations next week, but I thought it was worth sharing our first set of results. The test bed in use is the same one used in our standard GPU reviews.

Test System Setup
  CPU: Intel Core i7-3960X Sandy Bridge-E
  Motherboard: ASUS P9X79 Deluxe
  Memory: Corsair Dominator DDR3-1600 16GB
  Hard Drive: OCZ Agility 4 256GB SSD
  Sound Card: On-board
  Graphics Cards: NVIDIA GeForce GTX 780 Ti 3GB, NVIDIA GeForce GTX 770 2GB
  Graphics Drivers: NVIDIA 335.23 WHQL, 337.50 Beta
  Power Supply: Corsair AX1200i
  Operating System: Windows 8 Pro x64

The most interesting claims from NVIDIA were performance gains as high as 70%+ in Total War: Rome II, so I decided to start there.

First up, let's take a look at SLI results for the GTX 780 Ti, NVIDIA's flagship gaming card.

TWRome2_2560x1440_OFPS.png

TWRome2_2560x1440_PER.png

TWRome2_2560x1440_PLOT.png

With this title running at the Extreme preset, the GTX 780 Ti SLI configuration jumps from an average frame rate of 59 FPS to 88 FPS, an increase of 48%! Frame rate variance does increase a bit with the faster average frame rate, but it stays within the limits of smoothness, if only barely.

Next up, the GeForce GTX 770 SLI results.

TWRome2_2560x1440_OFPS.png

TWRome2_2560x1440_PER.png

TWRome2_2560x1440_PLOT.png

Results here are even more impressive, as the pair of GeForce GTX 770 cards running in SLI jumps from 29.5 average FPS to 51 FPS, an increase of 72%!! Even better, this occurs without any kind of frame rate variance increase; in fact, the blue line of the 337.50 driver actually performs better in that respect.
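For readers wondering how these percentages and the "frame rate variance" language relate to the underlying frame time captures, here is a rough, hypothetical sketch of the arithmetic: average FPS comes from total frame time, the percent gain is the ratio of the two averages, and a simple standard deviation of frame times is one way to express consistency. The frame time values below are invented to loosely mirror the 59 FPS to 88 FPS example above; they are not our actual Frame Rating data or tooling.

```
// Hypothetical sketch of the arithmetic behind the results above, not our
// actual capture tooling. Frame times (in milliseconds) are invented.
#include <cstdio>
#include <cmath>
#include <vector>

static double average_fps(const std::vector<double>& frame_ms)
{
    double total_ms = 0.0;
    for (size_t i = 0; i < frame_ms.size(); ++i) total_ms += frame_ms[i];
    return 1000.0 * frame_ms.size() / total_ms;      // frames per second over the run
}

static double frametime_stddev(const std::vector<double>& frame_ms)
{
    double mean = 0.0, var = 0.0;
    for (size_t i = 0; i < frame_ms.size(); ++i) mean += frame_ms[i];
    mean /= frame_ms.size();
    for (size_t i = 0; i < frame_ms.size(); ++i)
        var += (frame_ms[i] - mean) * (frame_ms[i] - mean);
    return std::sqrt(var / frame_ms.size());         // higher = less consistent pacing
}

int main()
{
    // Invented captures; a real run would contain thousands of frame times.
    double old_ms[] = { 16.9, 17.0, 16.95, 17.1, 16.8 };   // averages out near 59 FPS
    double new_ms[] = { 11.4, 11.2, 11.50, 11.3, 11.4 };   // averages out near 88 FPS
    std::vector<double> old_driver(old_ms, old_ms + 5);
    std::vector<double> new_driver(new_ms, new_ms + 5);

    double before = average_fps(old_driver);
    double after  = average_fps(new_driver);
    printf("%.0f FPS -> %.0f FPS (+%.0f%%)\n",
           before, after, 100.0 * (after / before - 1.0));
    printf("frame time stddev: %.2f ms -> %.2f ms\n",
           frametime_stddev(old_driver), frametime_stddev(new_driver));
    return 0;
}
```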

All of these tests were run with the latest patch on Total War: Rome II and I did specifically ask NVIDIA if there were any differences in the SLI profiles between these two drivers for this game. I was told absolutely not - this just happens to be the poster child example of changes NVIDIA has made with this DX11 efficiency push.

Of course, not all games are going to see performance improvements like this, or even improvements that are measurable at all. Just as we have seen with other driver enhancements over the years, different hardware configurations, image quality settings, and even the scenes used to test each game will shift the deltas considerably. I can tell you already that, based on some results I have (but am holding for my story next week), performance improvements in other games range from <5% up to 35%+. While those aren't reaching the 72% level we saw in Total War: Rome II above, these kinds of experience changes with driver updates are impressive to see.

Even though we are likely looking at the "best case" for NVIDIA's 337.50 driver changes with the Rome II results here, clearly there is merit behind what the company is pushing. We'll have more results next week!