Podcast #265 - XSPC GTX 680 Waterblock, ASUS's DirectCU II Refresh, V-NAND SSDs and more!

Subject: General Tech | August 22, 2013 - 02:54 PM |
Tagged: XSPC, video, V-NAND, ssd, Samsung, podcast, MXC, Intel, gtx 780, gtx 680, DirectCU II, asus, 670 mini

PC Perspective Podcast #265 - 08/22/2013

Join us this week as we discuss the XSPC GTX 680 Waterblock, ASUS's DirectCU II Refresh, V-NAND SSDs and more!

You can subscribe to us through iTunes and you can still access it directly through the RSS page HERE.

The URL for the podcast is: http://pcper.com/podcast - Share with your friends!

  • iTunes - Subscribe to the podcast directly through the iTunes Store
  • RSS - Subscribe through your regular RSS reader
  • MP3 - Direct download link to the MP3 file

Hosts: Ryan Shrout, Jeremy Hellstrom, Josh Walrath, and Allyn Malventano

Program length: 1:17:41

  1. Week in Review:
  2. News items of interest:
  3. Hardware/Software Picks of the Week:
    1. Jeremy: Logitech Z506 5.1 Surround Speaker System 75W on sale for Canucks or not quite as good for 'muricans
  4. 1-888-38-PCPER or podcast@pcper.com
  5. Closing/outro

*Due to upload issues on YouTube's side today, the video may take substantially longer than usual to be available

Manufacturer: XSPC

Introduction

02-680v2-1-1.jpg

Courtesy of XSPC

The Razor GTX680 water block was among the first in XSPC's full cover line of blocks. The previous generation of XSPC water blocks cooled the GPU as well as the memory and on-board VRMs, but did not provide the protection that a full card-sized block gives the sensitive components integrated into the card's PCB. At an MSRP of $99.99, the Razor GTX680 water block is a sound investment.

03-680v2-2.jpg

Courtesy of XSPC

The Razor GTX680 block comes with a total of seven G1/4" ports - four on the inlet side (left) and three on the outlet side (right). XSPC included the following components with the block: thermal compound, dual blue LEDs, five steel port caps, paper washers and mounting screws, and TIM (thermal interface material) for use with the on-board memory and VRM chips.

Continue reading our review of the XSPC Razor GTX680 water block!

Never mind the 780; here comes the GTX 770

Subject: Graphics Cards | May 30, 2013 - 02:55 PM |
Tagged: nvidia, kepler, gtx 770, gtx 680, GK104, geforce, MSI GTX660 HAWK

$400 is a tempting number, much less expensive than the $650 price tag on the GTX 780 and right in line with the existing GTX 670 as well as AMD's HD 7970.  You will probably not see many at that price, though; $450 is more likely, as there will be very few reference cards released and all manufacturers will be putting their own spins on the design of these cards, which brings the price in line with the GTX 680.  Performance-wise these cards outpace the two current single-GPU flagship cards, not by enough to make it worth upgrading from a 7970 or 680 but certainly enough to attract owners of previous generation cards.  [H]ard|OCP reviewed MSI's Lightning model, with dual fans, an overclock of 104 MHz on the base clock and 117 MHz on the boost clock, plus a completely unlocked BIOS for even more tweaking choices.

If you want to see how well it fares on our new Frame Rating metric you will have to read Ryan's full review here.

H770.jpg

"NVIDIA debuts the "new" GeForce GTX 770 today. The GeForce GTX 770 is poised to provide refreshed performance, for a surprising price. We evaluate a retail MSI GeForce GTX 770 Lightning flagship video card from MSI with specifications that will make any enthusiast smile. The $399 price point just got a kick in the pants."

Here are some more Graphics Card articles from around the web:

Graphics Cards

Source: [H]ard|OCP
Manufacturer: NVIDIA

GK104 gets cheaper and faster

A week ago today we posted our review of the GeForce GTX 780, NVIDIA's attempt to split the difference between the GTX 680 and the GTX Titan graphics cards in terms of performance and pricing.  Today NVIDIA launches the GeForce GTX 770 that, even though it has a fancy new name, is a card and a GPU that you are very familiar with.

arch01.png

The NVIDIA GK104 GPU Diagram

Based on GK104, the same GPU that powers the GTX 680 (released in March 2012), the GTX 670, and the GTX 690 (though in a pair), the new GeForce GTX 770 has very few changes from the previous models that are really worth noting.  NVIDIA has updated the GPU Boost technology to 2.0 (more granular, with better controls in software), but the real changes come in the clock speeds.

specs2.png

The GTX 770 is still built around 4 GPCs and 8 SMXs for a grand total of 1536 CUDA cores, 128 texture units and 32 ROPs.  The clock speeds have increased from 1006 MHz base clock and 1058 MHz Boost up to 1046 MHz base and 1085 MHz Boost.  That is a pretty minor speed bump in reality, an increase of just 4% or so over the previous clock speeds. 

NVIDIA did bump up the GDDR5 memory speed considerably though, going from 6.0 Gbps to 7.0 Gbps, or 1750 MHz.  The memory bus width remains 256-bits wide but the total memory bandwidth has jumped up to 224.3 GB/s.
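
The quoted bandwidth falls straight out of the arithmetic. As a quick sanity check, here is a minimal Python sketch using only the numbers above (the small gap to the quoted 224.3 GB/s presumably comes from an effective rate fractionally above 7.0 Gbps):

    # GDDR5 transfers four bits per pin per command-clock cycle,
    # so a 1750 MHz clock gives a 7.0 Gbps effective data rate.
    effective_rate_gbps = 7.0
    bus_width_bits = 256
    print(effective_rate_gbps * bus_width_bits / 8)  # 224.0 GB/s
    
    # And the base clock bump really is about 4%:
    print((1046 - 1006) / 1006 * 100)                # ~3.98%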

Maybe the best change for PC gamers is the new starting MSRP for the GeForce GTX 770 at $399 - a full $50-60 less than the GTX 680 was selling for as of yesterday.  If you happened to pick up a GTX 680 recently, you are going to want to look into your return options, as this will surely annoy the crap out of you.

If you want more information on the architecture design of the GK104 GPU, check out our initial article from the chip's release last year.  Otherwise, with those few specification changes out of the way, let's move on to some interesting information.

The NVIDIA GeForce GTX 770 2GB Reference Card

Tired of this design yet?  If so, you'll want to look into some of the non-reference options from other vendors that I'll show you on the next page, but I for one am still taken with the design of these cards.  You will find a handful of vendors offering up re-branded GTX 770 options at the outset of release, but most will have their own SKUs to showcase.

IMG_9918.JPG

Continue reading our review of the NVIDIA GeForce GTX 770 graphics card!!

Manufacturer: Various

Our 4K Testing Methods

You may have recently seen a story and video on PC Perspective about a new TV that made its way into the office.  Of particular interest is the fact that the SEIKI SE50UY04 50-in TV is a 4K television; it has a native resolution of 3840x2160.  For those unfamiliar with the upcoming TV and display standards, 3840x2160 is exactly four times the resolution of current 1080p TVs and displays.  Oh, and this TV only cost us $1300.
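
A quick aside on that "four times" figure: the resolution doubles on each axis, so the total pixel count quadruples. A few lines of Python make the arithmetic explicit:

    print(3840 * 2160)                     # 8,294,400 pixels
    print(1920 * 1080)                     # 2,073,600 pixels
    print(3840 * 2160 == 4 * 1920 * 1080)  # True: 2x width and 2x height is 4x pixels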

seiki5.jpg

In that short preview we validated that both NVIDIA and AMD current generation graphics cards support output to this TV at 3840x2160 using an HDMI cable.  You might be surprised to find that HDMI 1.4 can support 4K resolutions, but it can only do so at 30 Hz, half the refresh rate of typical 60 Hz TVs and monitors (60 Hz 4K TVs most likely won't be available until 2014).  That doesn't mean we are limited to 30 FPS of performance, though; far from it.  As you'll see in our testing on the coming pages, we were able to push out much higher frame rates using some very high end graphics solutions.

I should point out that I am not a TV reviewer and I don't claim to be one, so I'll leave the technical merits of the monitor itself to others.  Instead I will only report on my experiences with it while using Windows and playing games - it's pretty freaking awesome.  The only downside I have found in my time with the TV as a gaming monitor thus far is the combination of the 30 Hz refresh rate and disabled Vsync.  Because you see fewer screen refreshes over the same amount of time than you would with a 60 Hz panel, all else being equal, twice as many "frames" of the game are pushed to the monitor each refresh cycle.  This means the horizontal tearing you get with Vsync disabled will likely be more apparent than it would be otherwise.
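
To put rough numbers on that (a hypothetical sketch in Python; the 120 FPS figure is an assumption for illustration, not a measurement):

    fps = 120        # hypothetical render rate from a high end GPU
    print(fps / 30)  # 4.0 new frames arriving during each 30 Hz refresh
    print(fps / 60)  # 2.0 new frames per refresh on a 60 Hz panel
    # More frames landing within a single refresh means more tear lines
    # visible on screen at once when Vsync is disabled.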

4ksizes.png

Image from Digital Trends

I would likely recommend enabling Vsync for a tear-free experience on this TV once you are happy with performance levels, but obviously for our testing we wanted to keep it off to gauge performance of these graphics cards.

Continue reading our results from testing 4K 3840x2160 gaming on high end graphics cards!!

A New Gigabyte WindForce 450W GPU Cooler May Be Coming to a GTX 680 and Titan Near You

Subject: Graphics Cards | April 14, 2013 - 07:59 PM |
Tagged: windforce, nvidia, gtx titan, gtx 680, gpu cooler, gigabyte

Earlier this week, PC component manufacturer Gigabyte showed off its new graphics card cooler at its New Idea Tech Tour event in Berlin, Germany. The new triple slot cooler is built for this generation's highest-end graphics cards. It is capable of cooling cards with up to 450W TDPs while keeping them cooler and quieter than reference heatsinks.

The Gigabyte WindForce 450W cooler is a triple slot design that combines a large heatsink with three 80mm fans. The heatsink features two aluminum fin arrays connected to the GPU block by three 10mm copper heatpipes. Gigabyte stated during the cooler's reveal that it keeps an NVIDIA GTX 680 graphics card 2°C cooler and 23.3 dB quieter during a Furmark benchmark run. Further, the cooler will allow high end cards, like the GTX Titan, to achieve higher (stable) boost clocks.

Gigabyte WindForce 450W GPU Cooler for NVIDIA GTX Titan and GTX 680 Graphics Cards.jpg

ComputerBase.de was on hand at Gigabyte's event in Berlin to snap shots of the upcoming GPU cooler.

The company has not announced which graphics cards will use the new cooler or when it will be available, but a Gigabyte GTX 680 and a custom-cooled Titan seem to be likely candidates, considering both cards were mentioned in the examples given in the presentation. Note that NVIDIA has prohibited AIB partners from putting custom coolers on the Titan thus far, but other rumored custom-cooled Titan graphics cards suggest the company will allow them to be sold at retail at some point. In addition to the top-end NVIDIA cards, I think a GTX 670 or GTX 660 Ti GPU using this cooler would also be great, as it would likely be one of the quieter running options available (you could spin the three 80mm fans much slower than the single reference fan and still get the same temperatures).

What do you think about Gigabyte's new 450W GPU cooler? You can find more photos over at Computer Base (computerbase.de).

How Games Work


Because of the complexity and sheer amount of data we have gathered using our Frame Rating performance methodology, we are breaking it up into several articles that each feature different GPU comparisons.  Here is the schedule:


Introduction

The process of testing games and graphics has been evolving even longer than I have been a part of the industry: 14+ years at this point. That transformation in benchmarking has been accelerating over the last 12 months. Typical benchmarks test some hardware against some software and look at the average frame rate that can be achieved. While access to frame time data has been around for nearly the full life of FRAPS, it took an article from Scott Wasson at the Tech Report to really get the ball moving and investigate how each individual frame contributes to the actual user experience. I immediately began researching how to test the performance the user actually perceives, including the "microstutter" reported by many in PC gaming, and pondered how we might be able to measure it even more accurately.

The result of that research is being fully unveiled today in what we are calling Frame Rating – a completely new way of measuring and validating gaming performance.
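
As a toy illustration of the core idea (emphatically not our capture pipeline, just a Python sketch of why averages mislead): two runs can report the same average frame rate while one of them alternates fast and slow frames, which is exactly the kind of microstutter an average hides.

    # Two runs with identical average FPS but very different delivery.
    smooth = [16.7] * 60           # steady 16.7 ms frames
    stutter = [10.0, 23.4] * 30    # alternating fast/slow frames
    for times in (smooth, stutter):
        avg_fps = 1000 * len(times) / sum(times)
        print(f"avg {avg_fps:.1f} FPS, worst frame {max(times):.1f} ms")
    # Both runs print ~59.9 FPS on average, but the second delivers a
    # 23 ms hitch every other frame - the variance a player actually feels.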

The release of this story for me is like the final stop on a journey that has lasted nearly a complete calendar year.  I began to release bits and pieces of this methodology starting on January 3rd with a video and short article that described our capture hardware and the benefits that directly capturing the output from a graphics card would bring to GPU evaluation.  After returning from CES later in January, I posted another short video and article that showcased some of the captured video and stepped through a recorded file frame by frame to show readers how capture could help us detect and measure stutter and frame time variance. 

card4.jpg

Finally, during the launch of the NVIDIA GeForce GTX Titan graphics card, I released the first results from our Frame Rating system and discussed how certain card combinations, in this case CrossFire against SLI, could drastically differ in perceived frame rates and performance while giving very similar average frame rates.  This article got a lot more attention than the previous entries and that was expected – this method doesn’t attempt to dismiss other testing options but it is going to be pretty disruptive.  I think the remainder of this article will prove that. 

Today we are finally giving you all the details on Frame Rating; how we do it, what we learned and how you should interpret the results that we are providing.  I warn you up front though that this is not an easy discussion and while I am doing my best to explain things completely, there are going to be more questions going forward and I want to see them all!  There is still much to do regarding graphics performance testing, even after Frame Rating becomes more common. We feel that the continued dialogue with readers, game developers and hardware designers is necessary to get it right.

Below is our full video that features the Frame Rating process, some example results and some discussion on what it all means going forward.  I encourage everyone to watch it, but you will definitely need the written portion here to fully understand this transition in testing methods.  Subscribe to our YouTube channel if you haven't already!

Continue reading our analysis of the new Frame Rating performance testing methodology!!

Subject: Systems
Manufacturer: AVADirect

A look outside and in

We handle a fair number of system reviews here at PC Perspective and use them mainly as a way to feature unique and interesting designs and configurations.  For the most part we know how the hardware will perform, since we do extensive CPU and GPU testing on nearly a daily basis.  Sometimes we'll get systems in that are extremely budget friendly; other times vendors pass us machines with MSRPs similar to a Kia automobile.  Then there are times, like today, when we get a unique design that is a great mix of both.

AVADirect has had a Mini Gaming PC design for a while now but recently has gone through a refresh that adds in support for the latest Ivy Bridge processors, NVIDIA Kepler GPUs all using a new case from BitFenix that combines it in a smaller, mini-ITX form factor.

The quick specifications look like this:

  • BitFenix Prodigy chassis
  • Intel Core i7-3770K CPU, overclocked to 4.4 GHz
  • ASUS P8Z77-I Deluxe Z77 Motherboard
  • EVGA GeForce GTX 680 2GB GPU
  • OCZ 240GB Vertex 3 SSD
  • Seagate 2TB SATA 6G HDD
  • 8GB Crucial DDR3-1866 Memory
  • Cooler Master 850 watt Silent Pro PSU

You'll also see a large, efficient Prolimatech cooler inside along with a Blu-ray burner and Windows 7 for a surprisingly reasonable $2100 price tag.

01.jpg

The BitFenix Prodigy chassis is a unique design that starts with sets of FiberFlex legs and handles surrounding the mini-ITX case.  The minor flexibility of the legs absorbs sound and impact on the table, while the handles work great for picking up the system for LAN events and the like.  While at first I was worried about using them to support the weight of the rig, I had no problems and was assured by both BitFenix and AVADirect that it would withstand the torture.

Check out our video review before continuing on to the full article with benchmarks and pricing!

Continue reading our review of the AVADirect Mini Gaming PC!!

Gigabyte Launches Three New Factory Overclocked Kepler GPUs

Subject: Graphics Cards | October 4, 2012 - 04:42 PM |
Tagged: nvidia, kepler, gtx 680, gtx 670, gtx 660 Ti, gigabyte, factory overclocked

Gigabyte is launching three new factory overclocked graphics cards featuring a Kepler GPU, custom PCB, and custom cooler. The factory overclocks are notable, but will cost you. Specifically, the company is producing versions of the GTX 660 Ti, GTX 670, and GTX 680.

GTX 680.jpg

The Gigabyte GV-N680OC-4GD takes the GTX 680 GPU, places it on a custom PCB, and pairs it with 4GB of GDDR5 memory. It features two 6-pin PCI-E power connectors and Gigabyte’s Windforce X3 450W custom cooler, which uses a triangular fin design that allegedly increases cooling potential. While the GDDR5 memory clockspeed has not been increased over reference, the GPU core and boost clockspeeds have been pushed to 1071 MHz and 1137 MHz respectively. The following chart shows the differences in clockspeed and memory over the reference design.

                Reference GTX 680    Gigabyte N680OC-4GD
  GPU Core      1006 MHz             1071 MHz
  GPU Boost     1058 MHz             1137 MHz
  GDDR5 Amount  2 GB                 4 GB
  GDDR5 Speed   6 Gbps               6 Gbps
  Price         $500                 $800 (rumored)


The GTX 680 is not the only card to get a custom makeover by Gigabyte, however. The GV-N670OC-4GD is a custom GTX 670. With this card, Gigabyte has set the base clockspeed at 980 MHz – the boost clockspeed of reference cards – and the boost clockspeed at 1058 MHz. Gigabyte has also doubled down on the GDDR5 memory by packing 4GB onto the custom PCB. The memory clockspeed remains the same 6 Gbps as reference cards, however.

GTX 670.jpg

This card uses the same Windforce X3 cooler as the custom GTX 680 and, as a result, has a triple slot design that looks identical to the N680OC-4GD. If you look just above the PCI-E connector, though, you can tell the two apart by the product name.

                Reference GTX 670    Gigabyte N670OC-4GD
  GPU Core      915 MHz              980 MHz
  GPU Boost     980 MHz              1058 MHz
  GDDR5 Amount  2 GB                 4 GB
  GDDR5 Speed   6 Gbps               6 Gbps
  Price         $400                 $550 (rumored)


Finally, we have the GV-N66TOC-3GD, which overclocks the GTX 660 Ti GPU to the max. Factory clockspeeds are set at 1032 MHz base and 1111 MHz boost. Memory capacity also sees a small bump, from the 2GB of reference cards to 3GB. On the other hand, the memory is not overclocked and remains at the reference 6 Gbps clockspeed. This card also has a triple fan Windforce cooler; however, this version is not the triple slot design found on the GTX 670 and GTX 680 SKUs – only dual slot.

                Reference GTX 660 Ti   Gigabyte N66TOC-3GD
  GPU Core      915 MHz                1032 MHz
  GPU Boost     980 MHz                1111 MHz
  GDDR5 Amount  2 GB                   3 GB
  GDDR5 Speed   6 Gbps                 6 Gbps
  Price         $300                   $415 (rumored)


All three of the Gigabyte GPUs feature two DVI, one full-size HDMI, and one full-size DisplayPort connector for video outputs. 

All three factory overclocked graphics cards feature respectable GPU overclocks, and it appears that Gigabyte has provided ample cooling for each GPU. The triple slot, triple fan coolers on the N670OC-4GD and N680OC-4GD in particular seem to offer headroom beyond even Gigabyte’s out-of-the-box clocks. Curiously, though, Gigabyte is continuing the trend of not touching the memory clockspeed on Kepler cards. It may be that the RAM chips are already at their maximum on the reference design, or there could be some behind-the-scenes talk of NVIDIA not wanting add-in board partners to touch the memory. Unfortunately, all I have at this point is speculation, but it is a rather curious omission on such high end cards. That point becomes clearer when price is taken into consideration. Videocardz claims to have the pricing information for the three video cards, and the custom cards are going to cost you a large premium over reference. The rumored prices can be found in the charts above, compared against reference pricing, but the basic rundown is that the GV-N66TOC-3GD will cost $415, the GV-N670OC-4GD will cost $550, and the GV-N680OC-4GD will cost (an astounding) $800.

I’m hoping that the rumored prices are in error and will be adjusted once the cards are available. These are neat cards that look to have plenty of cooling, but I’m still trying to figure out just what they have to offer to justify the huge jump over reference pricing. And, no, the superfluous gold plated HDMI connectors do not count. [For example, the 4GB Galaxy GTX 670 we recently reviewed was only $70 over reference, while the Gigabyte card is rumored to be $150 over!]
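
For a sense of scale, here is the premium each rumored price represents, sketched in Python using only the numbers from the charts above:

    # Rumored price vs. reference price, from the charts above.
    cards = {
        "GV-N66TOC-3GD": (300, 415),
        "GV-N670OC-4GD": (400, 550),
        "GV-N680OC-4GD": (500, 800),
    }
    for name, (ref, rumored) in cards.items():
        print(f"{name}: ${rumored - ref} over reference ({(rumored - ref) / ref:.0%})")
    # Roughly 38%, 38% and 60% premiums respectively.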

GTX 660Ti.jpg

The Gigabyte N66TOC-3GD factory overclocked GPU.

You can find links to the Gigabyte product pages in the charts above. If you have not already, please check out our GTX 660 Ti, GTX 670, and GTX 680 graphics card reviews for the full scoop on the various Kepler iterations. And if you are considering the Gigabyte N680OC-4GD, you should probably check out the dual GPU GTX 690 review as well (heh).

Source: Videocardz
Manufacturer: Various

PhysX Settings Comparison

Borderlands 2 is a hell of a game; we actually ran a 4+ hour live event on launch day to celebrate its release and played it after our podcast that week as well.  When big PC releases occur we usually like to take a look at the game's performance on a few graphics cards to see how NVIDIA and AMD stack up.  Interestingly, for this title PhysX technology was brought up again, with NVIDIA widely pushing the game as a great example of its GPU-accelerated physics engine in action.

What you may find unique in Borderlands 2 is that the game actually allows you to enable PhysX features at Low, Medium and High settings with either NVIDIA or AMD Radeon graphics cards installed in your system.  In past titles, like Batman: Arkham City and Mafia II, PhysX could only be enabled (or at least only at higher settings) if you had an NVIDIA card.  Many gamers that used AMD cards saw this as a slight, and we tended to agree.  But since we could enable it with a Radeon card installed, we were curious to see what the results would be.

screenshot-16.jpg

Of course, don't expect the PhysX effects to be able to utilize the Radeon GPU for acceleration...

Borderlands 2 PhysX Settings Comparison

The first thing we wanted to learn was just how much difference you would see by moving from Low (the lowest setting, there is no "off") to Medium and then to High.  The effects were identical on both AMD and NVIDIA cards and we made a short video here to demonstrate the changes in settings.

Continue reading our article that compares PhysX settings on AMD and NVIDIA GPUs!!