Author:
Manufacturer: Various

Our 4K Testing Methods

You may have recently seen a story and video on PC Perspective about a new TV that made its way into the office.  Of particular interest is the fact that the SEIKI SE50UY04 50-in TV is a 4K television; it has a native resolution of 3840x2160.  For those who are unfamiliar with the upcoming TV and display standards, 3840x2160 offers exactly four times the pixels of current 1920x1080 TVs and displays.  Oh, and this TV only cost us $1300.

seiki5.jpg

In that short preview we validated that both NVIDIA and AMD current generation graphics cards support output to this TV at 3840x2160 over an HDMI cable.  You might be surprised to find that HDMI 1.4 can support 4K resolutions, but it can do so only at 30 Hz, half the refresh rate of most TVs and monitors (60 Hz 4K TVs most likely won't be available until 2014).  That doesn't mean we are limited to 30 FPS of performance though; far from it.  As you'll see in our testing on the coming pages, we were able to push out much higher frame rates using some very high end graphics solutions.

I should point out that I am not a TV reviewer and I don't claim to be one, so I'll leave the technical merits of the monitor itself to others.  Instead I will only report on my experiences with it while using Windows and playing games - it's pretty freaking awesome.  The only downside I have found in my time with the TV as a gaming monitor thus far is the combination of the 30 Hz refresh rate and Vsync being disabled.  Because you are seeing fewer screen refreshes over the same amount of time than you would with a 60 Hz panel, all else being equal, twice as many "frames" of the game are being pushed to the monitor each refresh cycle.  This means that the horizontal tearing you see with Vsync disabled will likely be more apparent than it would otherwise be.
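To put a rough number on that, here is a quick back-of-the-envelope sketch in Python.  The 90 FPS figure is an illustrative assumption, not a measured result: every buffer swap that lands mid-scanout leaves one tear line on screen, and halving the refresh rate roughly doubles how many swaps land inside each refresh.

```python
# Hypothetical example: a game rendering at a steady 90 FPS with Vsync off,
# shown on a 30 Hz panel versus a 60 Hz panel.
def tears_per_refresh(fps, refresh_hz):
    """Average number of buffer swaps (tear lines) landing inside one scanout."""
    frames_per_refresh = fps / refresh_hz
    # Every swap after the first within a scanout shows up as a tear line.
    return max(frames_per_refresh - 1, 0)

for hz in (60, 30):
    print(f"{hz} Hz panel @ 90 FPS: ~{tears_per_refresh(90, hz):.1f} tear lines per refresh")
# 60 Hz panel @ 90 FPS: ~0.5 tear lines per refresh
# 30 Hz panel @ 90 FPS: ~2.0 tear lines per refresh
```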

4ksizes.png

Image from Digital Trends

I would likely recommend enabling Vsync for a tear-free experience on this TV once you are happy with performance levels, but obviously for our testing we wanted to keep it off to gauge performance of these graphics cards.

Continue reading our results from testing 4K 3840x2160 gaming on high end graphics cards!!

Author:
Subject: Processors
Manufacturer: AMD

heterogeneous Uniform Memory Access (hUMA)

 

Several years back we first heard of AMD’s plans to create a uniform memory architecture that would allow the CPU to share address space with the GPU.  The promise here is a very efficient architecture that provides excellent performance in a mixed environment of serial and parallel programming loads.  When GPU computing came on the scene it was full of great promise.  The idea of a heavily parallel processing unit that could accelerate both integer and floating point workloads looked like a potential gold mine for a wide variety of applications.  Alas, the results we have seen so far have not lived up to that promise.  There are many problems with combining serial and parallel workloads between CPUs and GPUs, and a lot of this has to do with very basic programming issues and the communication of data between two separate memory pools.

huma_01.jpg

CPUs and GPUs do not share a common memory pool.  Instead of using pointers to tell each individual unit where data is stored in memory, the current implementation of GPU computing requires the CPU to copy the contents at that address over to the GPU's standalone memory pool.  This is time consuming and wastes cycles.  It also increases programming complexity, since code has to be written around these transfers.  Typically only very advanced programmers with a lot of expertise in this subject could write effective operations that take these limitations into consideration.  The lack of unified memory between CPU and GPU has hindered the adoption of the technology for a lot of applications that could otherwise use the massively parallel processing capabilities of a GPU.
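As a conceptual illustration only (a toy model in Python, not a real GPU API), the difference between the two programming models looks something like this: the discrete-GPU path needs an explicit upload step into a second memory pool, while the hUMA-style path simply hands over the same buffer.

```python
# Toy model of the two memory models; the classes and "copy cost" are illustrative.
import time

class DiscreteGPU:
    def __init__(self):
        self.vram = {}                    # a separate memory pool

    def upload(self, key, data):
        time.sleep(len(data) * 1e-8)      # stand-in for the PCIe copy cost
        self.vram[key] = list(data)       # CPU data is duplicated into VRAM

    def compute(self, key):
        return sum(self.vram[key])

class HumaAPU:
    def compute(self, data):
        return sum(data)                  # works on the CPU's buffer in place

data = list(range(1_000_000))

gpu = DiscreteGPU()
gpu.upload("buf", data)                   # explicit copy: extra code and cycles
print(gpu.compute("buf"))

apu = HumaAPU()
print(apu.compute(data))                  # same address space: no copy step
```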

The idea for GPU compute has been around for a long time (comparatively).  I still remember getting very excited about the idea of using a high end video card for rendering alongside a card like the old GeForce 6600 GT acting as a coprocessor to handle heavy math operations and PhysX.  That particular plan never quite came to fruition, but the idea was planted years before the actual introduction of modern DX9/10/11 hardware.  It seems as if this step with hUMA could finally provide the impetus for a wide range of applications that actively utilize the GPU portion of an APU.

Click here to continue reading about AMD's hUMA architecture.

Author:
Manufacturer: Corsair

Obsidian Series for under $100

If you need a case for your next PC build, the chances are good that Corsair has a model that you'll like.  Ranging from the obscenely large Obsidian 900D to the $69 Carbide 200R and just about everything in between, Corsair has a ton of options.  Today we are reviewing the brand new entrant to the Obsidian series, the 350D, which brings Corsair to the Micro-ATX form factor. 

The Obsidian series is the flagship chassis line from Corsair and typically means you are getting the best of the best from the expanding components company.  With an MSRP of just $99 you are definitely making some sacrifices on features and on size, limiting you to Micro-ATX or Mini-ITX motherboards and systems. 

IMG_973601.jpg

The front panel has an attractive brushed finish and is removable (along with the fan filter behind it).

IMG_974203.jpg

Connections up top include headphone and microphone jacks as well as a pair of USB 3.0 ports.  The power button is right in the center with dual LEDs on each side.  The reset button is just to the right of the mic port and is recessed enough to prevent accidental presses.

Continue reading our review of the Corsair Obsidian 350D chassis!!

Author:
Subject: Processors
Manufacturer: AMD

Jaguar Hits the Embedded Space

 

It has long been known that AMD has simply not had a lot of luck going head to head against Intel in the processor market.  Some years back they started working to differentiate themselves, and in so doing have been able to stay afloat through hard times.  The acquisitions that AMD has made in the past decade are starting to make a difference for the company, especially now that the PC market it has relied upon for revenue and growth opportunities is suddenly contracting.  This of course puts a cramp in AMD’s style, but with better than expected results in the previous quarter, things are not nearly as dim as some would expect.

Q1 was still pretty harsh for AMD, but they maintained their market share in both processors and graphics chips.  One area that looks to get a boost is embedded processors.  AMD has offered embedded processors for some time, but with the way the market is heading they look to really ramp up their offerings to fit a variety of applications and SKUs.  The last generation of G-Series processors was based upon the Bobcat/Brazos platform.  This two chip design (APU and media hub) came in a variety of wattages with good performance from both the CPU and GPU portions.  While the setup looked pretty good on paper, it was not widely implemented because of the added complexity of a two chip design plus its thermals relative to performance.

soc_arch.jpg

AMD looks to address these problems with one of their first true SoC designs.  The latest G-Series SoCs are based upon the brand new Jaguar core from AMD.  Jaguar is the successor to the successful Bobcat, the low power core that anchored dual core processors with integrated DX11/VLIW5 based graphics.  Jaguar improves CPU performance over Bobcat by 6% to 13% when clocked identically, and because it is manufactured on a smaller process node it does so while using less power.  Jaguar comes in both dual core and quad core packages.  The graphics portion is based on the latest GCN architecture.

Read the rest of the AMD G-Series release by clicking here!

Author:
Manufacturer: AMD

The card we have been expecting

Despite all the issues brought to light by our new graphics performance testing methodology we are calling Frame Rating, there is little debate in the industry that AMD is making noise once again in the graphics field.  From the elaborate marketing and game bundles attached to Radeon HD 7000 series cards over the last year, to the hiring of Roy Taylor, a VP of sales who has become the company's most vocal supporter, AMD has been pushing hard to stay in the conversation. 

slide1_0.jpg

Along with the marketing comes plenty of technology and some important design wins.  With the dominance of its silicon on the console side (Wii U, PlayStation 4 and the next Xbox), AMD is making sure that developers' familiarity with its GPU architecture there pays dividends on the PC side as well.  Developers will be focusing on AMD's graphics hardware for the 5-10 years of this console generation, and that could result in improved performance and feature support for Radeon graphics for PC gamers. 

Today's release of the Radeon HD 7990 6GB Malta dual-GPU graphics card shows a renewed focus on the high-end graphics market, the first since the release of the Radeon HD 7970 in January of 2012.  And while you may have seen something for sale previously with the HD 7990 name attached, those were custom designs built by partners, not by AMD. 

slide2_0.jpg

Both ASUS and PowerColor currently have high-end dual-Tahiti cards for sale.  The PowerColor HD 7990 Devil 13 used the HD 7990 brand directly, while the ASUS ARES II kept away from the name and leaned on ASUS' own high-end card branding instead. 

The "real" Radeon HD 7990 card was first teased at GDC in March and takes a much less dramatic approach to its design without being less impressive technically.  The card includes a pair of Tahiti, HD 7970-class GPUs on a single PCB with 6GB of total memory.  The raw specifications are listed here:

slide6_0.jpg

Considering there are two HD 7970-class GPUs on the HD 7990, the doubling of the major specs shouldn't be surprising, though it is a little misleading.  There are 8.6 billion transistors, yes, but that is still 4.3 billion on each GPU.  Yes, there are 4096 stream processors, but only 2048 on each GPU, and turning the second set into added performance requires software multi-GPU scaling.  The same goes for texture fill rate, compute performance, memory bandwidth, etc.  The same could be said for all dual-GPU graphics cards though.

Continue reading our review of the AMD Radeon HD 7990 6GB Graphics Card!!

Author:
Manufacturer: Various

A very early look at the future of Catalyst

Today is a very interesting day for AMD.  It marks both the release of the reference design of the Radeon HD 7990 graphics card, a dual-GPU Tahiti behemoth, and the first sample of a change to CrossFire technology that should improve animation performance across the board.  Both stories are incredibly interesting, and as it turns out they feed off of each other in a very important way: the HD 7990 depends on CrossFire, and CrossFire depends on this driver. 

If you already read our review (or any review using the FCAT / frame capture system) of the Radeon HD 7990, you likely came away somewhat unimpressed.  The combination of two AMD Tahiti GPUs on a single PCB with 6GB of frame buffer SHOULD have made for an incredibly exciting release and would likely have produced the single fastest graphics card on the planet.  That didn't happen though, and our results clearly show why: AMD CrossFire technology has some serious issues with animation smoothness, runt frames and giving users what they are promised. 

Our first results using our Frame Rating performance analysis method were shown during the release of the NVIDIA GeForce GTX Titan card in February.  Since then we have been in constant talks with the folks at AMD to figure out what was wrong, how they could fix it, and what implementing frame metering technology would mean to gamers.  We followed that story up with several more that showed the current state of performance on the GPU market using Frame Rating, and they painted CrossFire in a very negative light.  Even though some outlets accused us of being biased, or insisted that AMD wasn't doing anything incorrectly, we stuck by our results, and as it turns out, so does AMD. 

Today's preview of a very early prototype driver shows that the company is serious about fixing the problems we discovered. 

If you are just catching up on the story, you really need some background information.  The best place to start is our article published in late March that goes into detail about how game engines work, how our completely new testing methods work and the problems with AMD CrossFire technology very specifically.  From that piece:

It will become painfully apparent as we dive through the benchmark results on the following pages, but I feel that addressing the issues that CrossFire and Eyefinity are creating up front will make the results easier to understand.  As we showed you for the first time in Frame Rating Part 3, AMD CrossFire configurations have a tendency to produce a lot of runt frames, and in many cases in a nearly perfectly alternating pattern.  Not only does this mean that frame time variance will be high, but it also tells me that the performance gained by adding a second GPU is completely useless in this case.  Obviously the story would then become, “In Battlefield 3, does it even make sense to use a CrossFire configuration?”  My answer based on the below graph would be no.

runt.jpg

An example of a runt frame in a CrossFire configuration

NVIDIA's solution for getting around this potential problem with SLI was to integrate frame metering, a technology that balances frame presentation to the user and to the game engine in a way that enables smoother, more consistent frame times and thus smoother animation on the screen.  For GeForce cards, frame metering began as a software solution but was later integrated as a hardware function in the Fermi design, taking some load off of the driver.
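The core idea is simple enough to sketch in a few lines of Python.  This is a minimal illustration of metering in general, not NVIDIA's actual algorithm, and the frame completion times below are invented numbers chosen to mimic the bursty output of alternate frame rendering:

```python
# Minimal frame-metering sketch: hold a finished frame until a steady cadence
# has elapsed since the previous present, instead of flipping it immediately.
def meter(completion_times_ms, target_interval_ms):
    presented = []
    last = None
    for t in completion_times_ms:
        if last is not None:
            t = max(t, last + target_interval_ms)  # delay frames that finish early
        presented.append(t)
        last = t
    return presented

# AFR-style completions: the two GPUs finish frames in closely spaced pairs.
raw = [0.0, 2.0, 33.0, 35.0, 66.0, 68.0]
print([round(b - a, 1) for a, b in zip(raw, raw[1:])])
# [2.0, 31.0, 2.0, 31.0, 2.0]  <- short/long pairs, i.e. runt territory

metered = meter(raw, target_interval_ms=16.5)
print([round(b - a, 1) for a, b in zip(metered, metered[1:])])
# [16.5, 16.5, 16.5, 16.5, 16.5]  <- evenly paced presentation
```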

Continue reading our article on the new prototype driver from AMD to address frame pacing issues in CrossFire!!

Subject: Motherboards
Manufacturer: GIGABYTE

Introduction and Technical Specifications

Introduction

02-6641_big.jpg

Courtesy of GIGABYTE

The Z77N-WiFi is GIGABYTE's latest addition to its Mini-ITX lineup. Although the board is not as packed with features as some of the other enthusiast-minded Mini-ITX boards, GIGABYTE did some interesting things with the board layout to space components out more evenly. The Z77N-WiFi even comes standard with dual Realtek GbE NICs and an Intel 802.11n-based WiFi mPCIe card. We put the board through our normal gamut of tests to see how well this mighty mite sized up against its full-sized brethren. The Z77N-WiFi also carries a reasonable retail price of a mere $129.99.

Continue reading our review of the GIGABYTE Z77N-WiFi motherboard!

Subject: Storage

Introduction, Specifications and Packaging

Introduction

A while back, we saw OCZ undergo a major restructuring. 150+ product SKUs were removed from their lineup, leaving a solid core group of products for the company to focus on. The Vertex and Agility lines were spared, and the Vector was introduced and well received by the community. With all of that product trimming, we were bound to see another release at some point:

ext-front.JPG

Today we see a branch from one of those tree limbs in the form of the Vertex 3.20. This is basically a Vertex 3, but with the 25nm IMFT Sync flash replaced by newer 20nm IMFT Sync flash. The drop to 20nm comes with a slight penalty in write endurance (3000 cycles, down from the 5000 rating of 25nm) for the gain of cheaper production cost (more dies per 300mm wafer).
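The endurance math is worth a quick sanity check. Here is a rough Python sketch using the cycle ratings above; the 20 GB/day figure is an arbitrary assumption for illustration, and real-world endurance also depends on write amplification and over-provisioning, which this ignores:

```python
# Raw NAND endurance estimate for the 240GB model (ignores write amplification).
capacity_gb = 240

for node_nm, cycles in ((25, 5000), (20, 3000)):
    total_writes_tb = capacity_gb * cycles / 1000
    years_at_20gb_day = total_writes_tb * 1000 / 20 / 365
    print(f"{node_nm}nm flash: ~{total_writes_tb:.0f} TB of writes "
          f"(~{years_at_20gb_day:.0f} years at 20 GB/day)")
# 25nm flash: ~1200 TB of writes (~164 years at 20 GB/day)
# 20nm flash: ~720 TB of writes (~99 years at 20 GB/day)
```

Even with the reduced cycle rating, the drive has far more endurance headroom than a typical consumer workload will ever touch.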

imft 20 nm.jpg

IMFT has been cooking up 20nm flash for a while now, and it is becoming mature enough to enter the mainstream. The first entrant was Intel's own 335 Series, which debuted late last year. 20nm flash has no real groundbreaking improvements other than the reduced size, so the hope is that this shrink will translate to lower cost/GB to the end user. Let's see how the new Vertex shakes out.

Specifications:

  • Capacity: 120GB, 240GB
  • Sequential read: 550 MB/s
  • Sequential write: 520 MB/s
  • Random read (up to): 35,000 IOPS
  • Random write (up to): 65,000 IOPS

Packaging:

packaging.JPG

This simple plastic packaging does away with the 3.5" bracket previously included with all OCZ models.

Continue reading our review of the OCZ Vertex 3.20 240GB SSD!!

Author:
Subject: Editorial
Manufacturer: GLOBALFOUNDRIES

Taking a Fresh Look at GLOBALFOUNDRIES

It has been a while since we last talked about GLOBALFOUNDRIES, and it is high time to do so.  So why the long wait between updates?  Well, I think the long and short of it is a lack of execution against their stated roadmaps from around 2009 on.  When GF first came on the scene they had a very aggressive roadmap for where their process technology would be and how it would be implemented.  I believe that GF first mentioned a working 28 nm process in an early 2011 timeframe.  There was a lot of excitement in some corners, as people expected next generation GPUs to be available around then using that process node.

fab1_r.jpg

Fab 1 is the facility where all 32 nm SOI and most 28 nm HKMG products are produced.

Obviously GF did not get that particular process up and running as expected.  In fact, they had some real issues getting 32 nm SOI running in a timely manner.  Llano was the first product GF produced on that particular node, along with plenty of test wafers of Bulldozer parts.  Both were delayed from when they were initially expected to hit, and both had fabrication issues.  Time and money can fix most things when it comes to process technology, and eventually GF was able to solve the issues on their end.  32 nm SOI/HKMG is now producing like gangbusters, and AMD has improved their designs on their end to make things a bit easier for GF as well.

While shoring up the 32 nm process was of extreme importance to GF, it seemingly took resources away from further developing the 28 nm and smaller processes.  While work was still being done on those nodes, the roadmap was far too aggressive for what they were able to accomplish.  The hits just kept coming though.  AMD cut back on 32 nm orders, which had a financial impact on both companies.  It was cheaper for AMD to renegotiate the contract and take a penalty than to order chips it simply could not sell.  GF then had lots of line space open on 32 nm SOI (Dresden) that could not be filled.  AMD later amended another contract, accepting an even larger penalty, for the option to utilize a second source for 28 nm HKMG production of their CPUs and APUs.  AMD obviously was very uncomfortable with where GF was on their 28 nm process.

During all of this time GF was working to get their Luther Forest FAB 8 up and running.  Building a new FAB is no small task.  It is a multi-billion dollar endeavor, and any new FAB design will hit complications.  Happily for GF, the development of this FAB has gone along seemingly according to plan, achieving every major milestone in construction and deployment.  Still, the risks involved with a FAB that could cost around $8 billion or more are immense.

2012 was not exactly the year that GF expected, or hoped for.  It was tough on them and their partners.  They also took on more expenses, such as acquiring Chartered back in 2009 and then buying out the rather significant stake that AMD still held in the company.  During this time ATIC has been pumping money into GF to keep it afloat and to sustain its aspirations of being a major player in the fabrication industry.

Continue reading our editorial on the status of GLOBALFOUNDRIES going into 2013 and beyond!!

Author:
Manufacturer: PC Perspective

Not a simple answer

After publishing the Frame Rating Part 3 story, I started to see quite a bit of feedback from readers and other enthusiasts, with many requests for information about Vsync and how it might affect the results we are seeing here.  Vertical Sync is the fix for screen tearing, a common artifact seen in gaming (and other media) when the frame rendering rate doesn’t match the display’s refresh rate.  Enabling Vsync forces the rendering engine to display and switch frames in the buffer only in step with the vertical refresh rate of the monitor, or a divisor of it.  So a 60 Hz monitor can only display frames at intervals of 16 ms (60 FPS), 33 ms (30 FPS), 50 ms (20 FPS), and so on.
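That quantization is easy to demonstrate with a few lines of Python; the render times below are hypothetical examples, not measurements:

```python
# With Vsync on, a finished frame waits for the next refresh boundary, so the
# displayed interval always rounds up to a whole multiple of the refresh period.
import math

def vsync_display_interval(render_ms, refresh_hz=60):
    period = 1000 / refresh_hz                     # ~16.7 ms at 60 Hz
    return math.ceil(render_ms / period) * period

for render in (10, 17, 25, 40):
    shown = vsync_display_interval(render)
    print(f"render {render:>2} ms -> displayed every {shown:.1f} ms "
          f"({1000 / shown:.0f} FPS instantaneous)")
# render 10 ms -> displayed every 16.7 ms (60 FPS instantaneous)
# render 17 ms -> displayed every 33.3 ms (30 FPS instantaneous)
# render 25 ms -> displayed every 33.3 ms (30 FPS instantaneous)
# render 40 ms -> displayed every 50.0 ms (20 FPS instantaneous)
```

Note how a frame that takes even one millisecond longer than the refresh period immediately falls to the next step down; that cliff is what drives the 16 ms / 33 ms oscillation discussed below.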

Many early readers hypothesized that simply enabling Vsync would fix the stutter and runt issues that Frame Rating was bringing to light.  In fact, AMD was a proponent of this fix, as many of the conversations we have had with the GPU giant trailed off in the direction of Vsync as the answer to their multi-GPU issues. 

In our continuing research on graphics performance, part of our Frame Rating story line, I recently spent many hours playing games on different hardware configurations and at different Vertical Sync settings.  After this time testing, I am comfortable saying that simply enabling Vsync on platforms that exhibit a large number of runt frames does not fix the issue.  It may prevent runts, but it does not actually produce a completely smooth animation. 

To be 100% clear: the issues with Vsync and animation smoothness are not limited to AMD graphics cards or even to multi-GPU configurations.  The situations we are demonstrating here present themselves equally on AMD and NVIDIA platforms and with single or dual card configurations, as long as all other parameters are equal.  Our goal today is only to compare a typical Vsync situation from either vendor to a reference result at 60 FPS and at 30 FPS; not to compare AMD against NVIDIA!!

Crysis3_1920x1080_PLOT_1.png

In our initial research with Frame Rating, I presented this graph on the page discussing Vsync.  At the time, I left this note with the image:

The single card and SLI configurations with Vsync disabled look just like they did on previous pages, but the graph for GTX 680 SLI with Vsync on is very different.  Frame times are only switching back and forth between 16 ms and 33 ms, 60 and 30 instantaneous FPS, due to the restrictions of Vsync.  What might not be obvious at first is that the constant shifting back and forth between these two rates (two refresh cycles with one frame, then one refresh cycle with one frame) can actually cause more stuttering and animation inconsistency than would otherwise appear.

Even though I had tested this out and could literally SEE that animation inconsistency, I didn't yet have a way to demonstrate it to our readers.  Today I think we do.

The plan for today's article is going to be simple.  I am going to present a set of three videos to you that show side by side runs from different configuration options and tell you what I think we are seeing in each result.  Then on another page, I'm going to show you three more videos and see if you can pinpoint the problems on your own.

Continue reading our article on the effects of Vsync on gaming animation smoothness!!

Manufacturer: Enermax USA

Introduction and Features

2-Ostrog-Banner.jpg

 
In this review we are going to take a detailed look at one of the latest case offerings from Enermax, the giant Ostrog GT, a fortress for your hardware.  The new Ostrog GT is a mid-tower case that incorporates advanced cooling features along with support for multiple, extended length VGA cards.  The Ostrog GT enclosure features a clear acrylic side window, comes with a classic black finish inside and out, and is available with either red or blue accent colors.  The Ostrog GT comes with two 140mm LED intake fans in the front and one 120mm exhaust fan on the back, with optional locations for up to twelve fans and support for a 240/280mm liquid cooling radiator.

Note: Ostrog is a Russian term for a small fortress – now you know! 

3-VGA-Clearnce.jpg

(Courtesy of Enermax)

Ostrog GT Mid-Tower Gaming Case Key Features (Courtesy of Enermax)

4a-Features-table.jpg

4b-Features-graphics.jpg

Continue reading our review of the Enermax Ostrog GT Case!!

Author:
Subject: General Tech
Manufacturer: Nerdytec

Gaming on your Couch

Sometimes truly unique products come across our doorstep, and we just love to tell our readers about things that might normally fall outside the PC hardware field.  The COUCHMASTER, essentially a piece of furniture made for gaming, is one of those items.

couch1.png

The COUCHMASTER, produced by a German company called Nerdytec, is a device built to help gamers use a mouse and keyboard while sitting on a couch and gaming in large screen environments.  It has a pair of foam-stuffed side blocks that hold up a wood-constructed center panel, putting your mouse and keyboard at a comfortable angle. 

couch2.png

Cable routing is made simple with removable Velcro panels under the keyboard and mouse, and some versions of the COUCHMASTER include a 4-port USB hub for connecting input devices, audio headsets, etc.  The only things that didn't work in our testing were external hard drives - there just isn't enough power coming from the USB 3.0 connection through the included extension cable.

couch3.png

I played the entirety of BioShock Infinite with the COUCHMASTER and, other than getting some odd looks from my wife, couldn't think of a more impressive and comfortable way to play PC games from a distance and without a standard desk setup.

couch4.png

I would love to see some changes, like the addition of recessed drink holders on the sides, but otherwise the only drawback to Nerdytec's COUCHMASTER is the price; it starts at around $170 USD.

Check out the full video review posted below!!

UPDATE: The COUCHMASTER is now for sale in the US!

Author:
Manufacturer: PC Perspective

What to look for and our Test Setup

Because of the complexity and sheer amount of data we have gathered using our Frame Rating performance methodology, we are breaking it up into several articles that each feature different GPU comparisons.  Here is the schedule:

 

Today marks the conclusion of our first complete round up of Frame Rating results, the culmination of testing that started 18 months ago.  Hopefully you have caught our other articles on the subject, but you really will need to read the Frame Rating Dissected story above to truly understand the testing methods and the results shown in this article.  Use the links above to find the previous articles!

To round out our Frame Rating testing in this iteration, we are looking further down the product stack at two different sets of cards.  The first comparison will look at the AMD Radeon HD 7870 GHz Edition and the NVIDIA GeForce GTX 660 graphics cards in both single and dual-card configurations.  Just like we saw with our HD 7970 vs GTX 680 and our HD 7950 vs GTX 660 Ti testing, evaluating how the GPUs compare under our new and improved testing methodology in single GPU configurations is just as important as testing in SLI and CrossFire.  The GTX 660 ($199 at Newegg.com) and the HD 7870 ($229 at Newegg.com) are the closest matches in terms of pricing, though both cards have some interesting game bundle options as well.

7870.jpg

AMD's Radeon HD 7870 GHz Edition

Our second set of results looks only at single GPU performance for lower cost graphics cards: the AMD Radeon HD 7850 and Radeon HD 7790, and from NVIDIA the GeForce GTX 650 Ti and GTX 650 Ti BOOST.  We didn't include multi-GPU results for these cards simply due to time constraints internally and because we are eager to move on to further Frame Rating testing and input testing. 

gtx660.jpg

NVIDIA's GeForce GTX 660


If you are just joining this article series today, you have missed a lot!  If nothing else you should read our initial full release article that details everything about the Frame Rating methodology and why we are making this change to begin with.  In short, we are moving away from using FRAPS for average frame rates.  Instead, we use a secondary hardware capture system to record each frame of game play as the monitor would receive it.  That recorded video is then analyzed to measure real world performance.

Because FRAPS measures frame times at a different point in the game pipeline (closer to the game engine), its results can vary dramatically from what is presented to the end user on their display.  Frame Rating solves that problem by recording video through a dual-link DVI capture card that emulates a monitor to the testing system; by applying a unique overlay color to each frame the game produces, we can gather a new kind of information that tells a very unique story.
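To give a feel for the analysis step, here is a simplified Python sketch.  It assumes we have already extracted the overlay color of every scanline in one captured refresh; the colors, the 1080-line capture and the runt threshold are all illustrative stand-ins for what the real capture tooling does on recorded video:

```python
# Collapse one captured scanout into (overlay color, scanline height) runs,
# then flag any frame that occupies too few scanlines as a "runt".
def frame_heights(scanline_colors):
    runs = []
    for color in scanline_colors:
        if runs and runs[-1][0] == color:
            runs[-1][1] += 1
        else:
            runs.append([color, 1])
    return runs

RUNT_LINES = 21    # example threshold; a frame this short adds nothing visible

capture = ["lime"] * 520 + ["red"] * 8 + ["blue"] * 552   # one 1080-line refresh
for color, height in frame_heights(capture):
    tag = "RUNT" if height < RUNT_LINES else "ok"
    print(f"{color:>5}: {height:4d} scanlines  [{tag}]")
#  lime:  520 scanlines  [ok]
#   red:    8 scanlines  [RUNT]
#  blue:  552 scanlines  [ok]
```

Counting how many scanlines each frame actually occupies on screen is what lets this kind of analysis discard runts when computing an observed frame rate.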

card1.jpg

The capture card that makes all of this work possible.

I don't want to spend too much time on this part of the story here as I already wrote a solid 16,000 words on the topic in our first article and I think you'll really find the results fascinating.  So, please check out my first article on the topic if you have any questions before diving into these results today!

Test System Setup
CPU: Intel Core i7-3960X Sandy Bridge-E
Motherboard: ASUS P9X79 Deluxe
Memory: Corsair Dominator DDR3-1600 16GB
Hard Drive: OCZ Agility 4 256GB SSD
Sound Card: On-board
Graphics Cards: NVIDIA GeForce GTX 660 2GB
                AMD Radeon HD 7870 2GB
                NVIDIA GeForce GTX 650 Ti 1GB
                NVIDIA GeForce GTX 650 Ti BOOST 2GB
                AMD Radeon HD 7850 2GB
                AMD Radeon HD 7790 1GB
Graphics Drivers: AMD: 13.2 beta 7
                  NVIDIA: 314.07 beta
Power Supply: Corsair AX1200i
Operating System: Windows 8 Pro x64

On to the results! 

Continue reading our review of the GTX 660 and HD 7870 using Frame Rating!!

Subject: Motherboards
Manufacturer: ASUS

Introduction and Technical Specifications

Introduction

board.jpg

Courtesy of ASUS

The P8Z77-I Deluxe is ASUS' high-powered answer to the small form factor crowd. Through some unique design decisions and an upright daughter-board, ASUS was able to cram a full 10-phase digital power delivery system onto this board without sacrificing any other integrated components. It's nice to see a manufacturer step up and design a Mini-ITX board in the same vein as its full-sized counterparts. We put the board through our normal gamut of tests to see how well this mighty Mini-ITX board sized up against its full-sized brethren. At a retail list price of $219, the P8Z77-I Deluxe needs to prove its worth against the full sized boards.

profile.jpg

Courtesy of ASUS

ASUS designed a full 10 phases of digital power into the board's upright daughter card, which sits parallel to the CPU cooler. The P8Z77-I Deluxe with its high-end power plant is packed full of features, including SATA 2, SATA 3, eSATA, USB 2.0, and USB 3.0 ports for storage devices. Networking capabilities include an Intel GigE NIC, a Broadcom dual-port 802.11n adapter, and a Broadcom Bluetooth adapter. The board also features a single PCI-Express x16 slot for graphics cards and other expansion cards.

rear-panel.jpg

Courtesy of ASUS

Continue reading our review of the ASUS P8Z77-I Deluxe motherboard!

What to Look For, Test Setup

Because of the complexity and sheer amount of data we have gathered using our Frame Rating performance methodology, we are breaking it up into several articles that each feature different GPU comparisons.  Here is the schedule:

We are back again with another edition of our continued reveal of data from the capture-based Frame Rating GPU performance methods.  In this third segment we are moving on down the product stack to the NVIDIA GeForce GTX 660 Ti and the AMD Radeon HD 7950, two cards that fall into a similar price range.

gtx660ti.JPG

I have gotten many questions about why we are using the cards in each comparison, and the answer is pretty straightforward: pricing.  In our first article we looked at the Radeon HD 7970 GHz Edition and the GeForce GTX 680, while in the second we compared the Radeon HD 7990 (HD 7970s in CrossFire), the GeForce GTX 690 and the GeForce GTX Titan.  This time around we have the GeForce GTX 660 Ti ($289 on Newegg.com) and the Radeon HD 7950 ($299 on Newegg.com), but we did not include the GeForce GTX 670 because it sits much higher at $359 or so.  I know some of you are going to be disappointed that it isn't in here, but I promise we'll see it again in a future piece!


If you are just joining this article series today, you have missed a lot!  If nothing else you should read our initial full release article that details everything about the Frame Rating methodology and why we are making this change to begin with.  In short, we are moving away from using FRAPS for average frame rates or even frame times.  Instead we use a secondary hardware capture system to record all the frames of our game play as they would be displayed to the gamer, then do post-process analysis on that recorded file to measure real world performance.

Because FRAPS measures frame times at a different point in the game pipeline (closer to the game engine), its results can vary dramatically from what is presented to the end user on their display.  Frame Rating solves that problem by recording video through a dual-link DVI capture card that emulates a monitor to the testing system; by applying a unique overlay color to each frame the game produces, we can gather a new kind of information that tells a very unique story.

card1.jpg

The capture card that makes all of this work possible.

I don't want to spend too much time on this part of the story here as I already wrote a solid 16,000 words on the topic in our first article and I think you'll really find the results fascinating.  So, please check out my first article on the topic if you have any questions before diving into these results today!

Test System Setup
CPU: Intel Core i7-3960X Sandy Bridge-E
Motherboard: ASUS P9X79 Deluxe
Memory: Corsair Dominator DDR3-1600 16GB
Hard Drive: OCZ Agility 4 256GB SSD
Sound Card: On-board
Graphics Cards: NVIDIA GeForce GTX 660 Ti 2GB
                AMD Radeon HD 7950 3GB
Graphics Drivers: AMD: 13.2 beta 7
                  NVIDIA: 314.07 beta
Power Supply: Corsair AX1200i
Operating System: Windows 8 Pro x64

 

On to the results! 

Continue reading our review of the GTX 660 Ti and HD 7950 using Frame Rating!!

Manufacturer: Oyen Digital

Introduction and Technical Specifications

Introduction

full-package.jpg

Courtesy of Oyen Digital

Oyen Digital, a popular manufacturer of portable storage enclosures and devices, provided us with its MiniPro™ eSATA / USB 3.0 Portable Hard Drive enclosure for testing USB 3.0 enhanced mode on the ASUS P8Z77-I Deluxe motherboard. This enclosure offers support for USB 2.0, USB 3.0, and eSATA ports in conjunction with a 2.5" hard drive. We put this enclosure on the test bench with the ASUS P8Z77-I Deluxe board to test the performance limits of the device. The MiniPro™ enclosure can be found at your favorite e-tailer for $39.95.

front.jpg

back.jpg

Oyen Digital

The MiniPro™ SATA / USB 3.0 Portable Hard Drive enclosure is a simple aluminum enclosure supporting any 2.5" form factor hard drive at up to SATA III speeds. The enclosure itself supports USB 2.0, USB 3.0, and eSATA connections. Because it uses the ASMedia 1053e chipset for USB 3.0 support, the enclosure supports both normal USB 3.0 transfer speeds and UASP (USB Attached SCSI Protocol) transfer speeds. UASP is a bulk transfer method for USB 3.0 connections that increases transfer speeds through the use of parallel, simultaneous packet transfers. Per our sources at ASUS, UASP can be explained as follows:

The adoption of the SCSI protocol in USB 3.0 provides its users with the advantage of better data throughput than the traditional BOT (Bulk-Only Transfer) protocol, thanks to its streaming architecture as well as the improved queuing (NCQ support) and task management, which eliminates much of the round trip time between USB commands, so more commands can be sent simultaneously. Moreover, thanks to the multi-tasking aware architecture, performance is further enhanced when multiple transfers occur.
The downside of UASP is that the receiving device (flash drive, external hard drive, etc.) must also be UASP enabled for the protocol to work. This requires checking your peripherals before purchase. However, since UASP is an industry standard, device support for ASUS' UASP implementation is not restricted to a particular controller manufacturer or device type, so the overall number of compatible peripherals should undoubtedly grow.
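The round-trip point is the crux of it. Here is a toy latency model in Python that illustrates the difference the quote describes; the per-command service and round-trip times are made-up numbers purely for illustration:

```python
# Toy model: BOT waits out a full status round trip after every command,
# while UASP keeps commands queued so only the first round trip is exposed.
def bot_total_ms(n_commands, service_ms, round_trip_ms):
    return n_commands * (service_ms + round_trip_ms)

def uasp_total_ms(n_commands, service_ms, round_trip_ms):
    # With a queue of outstanding commands, the device never sits idle
    # waiting on the host between commands.
    return round_trip_ms + n_commands * service_ms

N = 1000
print(f"BOT : {bot_total_ms(N, 0.10, 0.05):.0f} ms")   # 150 ms
print(f"UASP: {uasp_total_ms(N, 0.10, 0.05):.0f} ms")  # 100 ms
```

The more commands there are and the shorter each transfer is, the more the per-command round trip dominates, which is why UASP shows its largest gains on small, queued random I/O.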

Technical Specifications (taken from the Oyen Digital website)

Ports

eSATA 6G (up to 6.0 Gbps)
USB 3.0 (up to 5.0 Gbps)

Interface

SATA III (up to 15mm SATA 2.5" HDD/SSD)

Chipset

USB 3.0
ASMedia 1053e

eSATA
ASMedia 1456pe

Weight

10 oz.

Certifications

CE, FCC

Requirements

Windows XP/Vista/7/8 & above; Mac OS 10.2 & above; Linux 2.4.22 & above

Continue reading our review of the Oyen Digital MiniPro™ enclosure!

Manufacturer: NVIDIA

NVIDIA releases the GeForce GT 700M family

NVIDIA revolutionized gaming on the desktop with the release of its 600-series Kepler-based graphics cards in March 2012. With the release of the GeForce GT 700M series, Kepler extends its reach in the mobile arena to power laptops, ultrabooks, and all-in-one systems.

Today, NVIDIA introduces four new members to its mobile line: the GeForce GT 750M, the GeForce GT 740M, the GeForce GT 735M, and the GeForce GT 720M. These four new mobile graphics processors join the previously-released members of the GeForce GT 700M series: the GeForce GT 730M and the GeForce GT 710M. With the exception of the Fermi-based GeForce GT 720M, all of the newly-released mobile cores are based on NVIDIA's 28nm Kepler architecture.

Notebooks based on the GeForce GT 700M series will offer built-in support for the following new technologies:

Automatic Battery Savings through NVIDIA Optimus Technology

02-optimus-tech-slide.PNG

Automatic Game Configuration through the GeForce Experience

03-gf-exp.PNG

Automatic Performance Optimization through NVIDIA GPU Boost 2.0

03-gpu-boost-20.PNG

Continue reading our release coverage of the NVIDIA GeForce GT 700M series!

Summary Thus Far

Because of the complexity and sheer amount of data we have gathered using our Frame Rating performance methodology, we are breaking it up into several articles that each feature different GPU comparisons.  Here is the schedule:

Welcome to the second in our initial series of articles focusing on Frame Rating, our new graphics and GPU performance methodology that drastically changes how the community looks at single and multi-GPU performance.  In this article we are going to focus on a different set of graphics cards: the highest performing single card options on the market, including the GeForce GTX 690 4GB dual-GK104 card and the GeForce GTX Titan 6GB GK110-based monster, as well as the Radeon HD 7990, though in an emulated form.  The HD 7990 was only recently officially announced by AMD at this year's Game Developers Conference, but the specifications of that hardware closely match what we have on the testbed today - a pair of retail Radeon HD 7970s in CrossFire. 

titancard.JPG

Will the GTX Titan look as good in Frame Rating as it did upon its release?

If you are just joining this article series today, you have missed a lot!  If nothing else you should read our initial full release article that details everything about the Frame Rating methodology and why we are making this change to begin with.  In short, we are moving away from using FRAPS for average frame rates or even frame times.  Instead we use a secondary hardware capture system to record all the frames of our game play as they would be displayed to the gamer, then do post-process analysis on that recorded file to measure real world performance.

Because FRAPS measures frame times at a different point in the game pipeline (closer to the game engine), its results can vary dramatically from what is presented to the end user on their display.  Frame Rating solves that problem by recording video through a dual-link DVI capture card that emulates a monitor to the testing system; by applying a unique overlay color to each frame the game produces, we can gather a new kind of information that tells a very unique story.

card1.jpg

The capture card that makes all of this work possible.

I don't want to spend too much time on this part of the story here as I already wrote a solid 16,000 words on the topic in our first article and I think you'll really find the results fascinating.  So, please check out my first article on the topic if you have any questions before diving into these results today!

 

Test System Setup
CPU: Intel Core i7-3960X Sandy Bridge-E
Motherboard: ASUS P9X79 Deluxe
Memory: Corsair Dominator DDR3-1600 16GB
Hard Drive: OCZ Agility 4 256GB SSD
Sound Card: On-board
Graphics Cards: NVIDIA GeForce GTX TITAN 6GB
                NVIDIA GeForce GTX 690 4GB
                AMD Radeon HD 7970 CrossFire 3GB
Graphics Drivers: AMD: 13.2 beta 7
                  NVIDIA: 314.07 beta (GTX 690)
                  NVIDIA: 314.09 beta (GTX TITAN)
Power Supply: Corsair AX1200i
Operating System: Windows 8 Pro x64

 

On to the results! 

Continue reading our review of the GTX Titan, GTX 690 and HD 7990 using Frame Rating!!

How Games Work

 

Because of the complexity and sheer amount of data we have gathered using our Frame Rating performance methodology, we are breaking it up into several articles that each feature different GPU comparisons.  Here is the schedule:

 

Introduction

The process of testing games and graphics has been evolving even longer than I have been a part of the industry: 14+ years at this point. That transformation in benchmarking has been accelerating for the last 12 months. A typical benchmark tests some hardware against some software and looks at the average frame rate that can be achieved. While access to frame times has been around for nearly the full life of FRAPS, it took an article from Scott Wasson at The Tech Report to really get the ball moving and investigate how each frame contributes to the actual user experience. I immediately began researching how to test the actual performance perceived by the user, including the "microstutter" reported by many in PC gaming, and pondering how we might be able to test for these criteria even more accurately.

The result of that research is being fully unveiled today in what we are calling Frame Rating – a completely new way of measuring and validating gaming performance.

The release of this story is, for me, the final stop on a journey that has lasted nearly a complete calendar year.  I began to release bits and pieces of this methodology starting on January 3rd with a video and short article that described our capture hardware and the benefits that directly capturing the output from a graphics card would bring to GPU evaluation.  After returning from CES later in January, I posted another short video and article that showcased some of the captured video, stepping through a recorded file frame by frame to show readers how capture could help us detect and measure stutter and frame time variance. 

card4.jpg

Finally, during the launch of the NVIDIA GeForce GTX Titan graphics card, I released the first results from our Frame Rating system and discussed how certain card combinations, in this case CrossFire against SLI, could drastically differ in perceived frame rates and performance while producing very similar average frame rates.  That article got a lot more attention than the previous entries, and that was expected - this method doesn't attempt to dismiss other testing options, but it is going to be pretty disruptive.  I think the remainder of this article will prove that. 

Today we are finally giving you all the details on Frame Rating: how we do it, what we learned and how you should interpret the results that we are providing.  I warn you up front that this is not an easy discussion, and while I am doing my best to explain things completely, there are going to be more questions going forward and I want to see them all!  There is still much to do regarding graphics performance testing, even after Frame Rating becomes more common.  We feel that continued dialogue with readers, game developers and hardware designers is necessary to get it right.

Below is our full video that walks through the Frame Rating process, some example results and some discussion on what it all means going forward.  I encourage everyone to watch it, but you will definitely need the written portion here to fully understand this transition in testing methods.  Subscribe to our YouTube channel if you haven't already!

Continue reading our analysis of the new Frame Rating performance testing methodology!!

Author:
Manufacturer: NVIDIA

The GTX 650 Ti Gets Boost and More Memory

In mid-October NVIDIA released the GeForce GTX 650 Ti based on GK106, the same GPU that powers the GTX 660 though with fewer enabled CUDA cores and GPC units.  At the time we were pretty impressed with the 650 Ti:

The GTX 650 Ti has more in common with the GTX 660 than it does the GTX 650, both being based on the GK106 GPU, but is missing some of the unique features that NVIDIA has touted of the 600-series cards like GPU Boost and SLI.

Today's release of the GeForce GTX 650 Ti BOOST actually addresses both of those missing features by moving even closer to the specification sheet found on the GTX 660 cards. 

Our video review of the GTX 650 Ti BOOST and Radeon HD 7790.

block1.jpg

Option 1: Two GPCs with Four SMXs

Just like we saw with the original GTX 650 Ti, there are two different configurations of the GTX 650 Ti BOOST; both have the same primary specifications but differ in which SMX is disabled on the full GK106 ASIC.  The new card still has 768 CUDA cores, but clock speeds increase from 925 MHz to a 980 MHz base clock and a 1033 MHz typical Boost clock.  The texture unit count remains the same at 64.

Continue reading our review of the NVIDIA GeForce GTX 650 Ti BOOST graphics card!!