Manufacturer: NVIDIA

NVIDIA releases the GeForce GT 700M family

NVIDIA revolutionized gaming on the desktop with the release of its 600-series Kepler-based graphics cards in March 2012. With the release of the GeForce GT 700M series, Kepler enters the mobile arena to power laptops, ultrabooks, and all-in-one systems.

Today, NVIDIA introduces four new members of its mobile line: the GeForce GT 750M, the GeForce GT 740M, the GeForce GT 735M, and the GeForce GT 720M. These four new mobile graphics processors join the previously released members of the GeForce GT 700M series: the GeForce GT 730M and the GeForce GT 710M. With the exception of the Fermi-based GeForce GT 720M, all of the newly released mobile cores are based on NVIDIA's 28nm Kepler architecture.

Notebooks based on the GeForce GT 700M series will offer in-built support for the following new technologies:

Automatic Battery Savings through NVIDIA Optimus Technology

02-optimus-tech-slide.PNG

Automatic Game Configuration through the GeForce Experience

03-gf-exp.PNG

Automatic Performance Optimization through NVIDIA GPU Boost 2.0

03-gpu-boost-20.PNG

Continue reading our release coverage of the NVIDIA GeForce GT 700M series!

Summary Thus Far

Because of the complexity and sheer amount of data we have gathered using our Frame Rating performance methodology, we are breaking it up into several articles that each feature different GPU comparisons.  Here is the schedule:

Welcome to the second in our initial series of articles focusing on Frame Rating, our new graphics and GPU performance methodology that drastically changes how the community looks at single and multi-GPU performance.  In this article we are focusing on a different set of graphics cards: the highest-performing single-card options on the market, including the GeForce GTX 690 4GB dual-GK104 card, the GeForce GTX Titan 6GB GK110-based monster, and the Radeon HD 7990, though in an emulated form.  The HD 7990 was only recently officially announced by AMD at this year's Game Developers Conference, but the specifications of that hardware closely match what we have on the testbed today: a pair of retail Radeon HD 7970s in CrossFire. 

titancard.JPG

Will the GTX Titan look as good in Frame Rating as it did upon its release?

If you are just joining this article series today, you have missed a lot!  If nothing else, you should read our initial full release article that details everything about the Frame Rating methodology and why we are making this change to begin with.  In short, we are moving away from using FRAPS for average frame rates or even frame times; instead, we use a secondary hardware capture system to record all the frames of our gameplay as they would be displayed to the gamer, then perform post-process analysis on that recorded file to measure real-world performance.

Because FRAPS measures frame times at a different point in the game pipeline (closer to the game engine), its results can vary dramatically from what is presented to the end user on their display.  Frame Rating solves that problem by recording video through a dual-link DVI capture card that emulates a monitor to the test system.  By applying a unique overlay color to each frame the game produces, we can gather a new kind of information that tells a very different story.

card1.jpg

The capture card that makes all of this work possible.

I don't want to spend too much time on this part of the story here as I already wrote a solid 16,000 words on the topic in our first article and I think you'll really find the results fascinating.  So, please check out my first article on the topic if you have any questions before diving into these results today!

 

Test System Setup
CPU: Intel Core i7-3960X Sandy Bridge-E
Motherboard: ASUS P9X79 Deluxe
Memory: Corsair Dominator DDR3-1600 16GB
Hard Drive: OCZ Agility 4 256GB SSD
Sound Card: On-board
Graphics Cards: NVIDIA GeForce GTX TITAN 6GB; NVIDIA GeForce GTX 690 4GB; AMD Radeon HD 7970 CrossFire 3GB
Graphics Drivers: AMD 13.2 beta 7; NVIDIA 314.07 beta (GTX 690); NVIDIA 314.09 beta (GTX TITAN)
Power Supply: Corsair AX1200i
Operating System: Windows 8 Pro x64

 

On to the results! 

Continue reading our review of the GTX Titan, GTX 690 and HD 7990 using Frame Rating!!

How Games Work

 


 

Introduction

The process of testing games and graphics has been evolving for even longer than I have been a part of the industry: 14+ years at this point. That transformation in benchmarking has accelerated over the last 12 months. A typical benchmark runs some software on some hardware and looks at the average frame rate that can be achieved. While access to frame times has been available for nearly the full life of FRAPS, it took an article from Scott Wasson at The Tech Report to really get the ball rolling and investigate how each frame contributes to the actual user experience. I immediately began researching how to test the actual performance perceived by the user, including the "microstutter" reported by many in PC gaming, and pondered how we might test for these criteria even more accurately.

The result of that research is being fully unveiled today in what we are calling Frame Rating – a completely new way of measuring and validating gaming performance.

The release of this story is, for me, the final stop on a journey that has lasted nearly a complete calendar year.  I began to release bits and pieces of this methodology on January 3rd with a video and short article that described our capture hardware and the benefits that directly capturing the output from a graphics card would bring to GPU evaluation.  After returning from CES later in January, I posted another short video and article that showcased some of the captured video, stepping through a recorded file frame by frame to show readers how capture could help us detect and measure stutter and frame time variance. 

card4.jpg

Finally, during the launch of the NVIDIA GeForce GTX Titan graphics card, I released the first results from our Frame Rating system and discussed how certain card combinations, in this case CrossFire against SLI, could drastically differ in perceived frame rates and performance while giving very similar average frame rates.  This article got a lot more attention than the previous entries and that was expected – this method doesn’t attempt to dismiss other testing options but it is going to be pretty disruptive.  I think the remainder of this article will prove that. 

Today we are finally giving you all the details on Frame Rating; how we do it, what we learned and how you should interpret the results that we are providing.  I warn you up front though that this is not an easy discussion and while I am doing my best to explain things completely, there are going to be more questions going forward and I want to see them all!  There is still much to do regarding graphics performance testing, even after Frame Rating becomes more common. We feel that the continued dialogue with readers, game developers and hardware designers is necessary to get it right.

Below is our full video that features the Frame Rating process, some example results, and some discussion of what it all means going forward.  I encourage everyone to watch it, but you will definitely need the written portion here to fully understand this transition in testing methods.  Subscribe to our YouTube channel if you haven't already!

Continue reading our analysis of the new Frame Rating performance testing methodology!!

Manufacturer: NVIDIA

The GTX 650 Ti Gets Boost and More Memory

In mid-October NVIDIA released the GeForce GTX 650 Ti based on GK106, the same GPU that powers the GTX 660 though with fewer enabled CUDA cores and GPC units.  At the time we were pretty impressed with the 650 Ti:

The GTX 650 Ti has more in common with the GTX 660 than it does the GTX 650, both being based on the GK106 GPU, but is missing some of the unique features that NVIDIA has touted of the 600-series cards like GPU Boost and SLI.

Today's release of the GeForce GTX 650 Ti BOOST actually addresses both of those missing features by moving even closer to the specification sheet found on the GTX 660 cards. 

Our video review of the GTX 650 Ti BOOST and Radeon HD 7790.

block1.jpg

Option 1: Two GPCs with Four SMXs

Just like we saw with the original GTX 650 Ti, there are two different configurations of the GTX 650 Ti BOOST; both have the same primary specifications but will differ in which SMX is disabled from the full GK106 ASIC.  The newer version will still have 768 CUDA cores but clock speeds will increase from 925 MHz to 980 MHz base and 1033 MHz typical boost clock.  Texture unit count remains the same at 64.

Continue reading our review of the NVIDIA GeForce GTX 650 Ti BOOST graphics card!!

Subject: Mobile
Manufacturer: ARM

 

ARM is a company that no longer needs much of an introduction, though that was not always the case.  ARM has certainly made a name for itself among PC, tablet, and handheld consumers.  Its primary source of income is licensing CPU designs as well as its ISA.  While names like the Cortex A9 and Cortex A15 are fairly well known, not as many people know about the graphics IP that ARM also licenses.  Mali is the product name of that graphics IP, and it encompasses an entire range of features and performance points that can be licensed by third parties.

I was able to get a block of time with Nizar Romdhane, Head of the Mali Ecosystem at ARM, and ask a few questions about Mali, ARM's plans to address the increasingly important mobile graphics market, and how it will compete with Imagination Technologies, Intel, AMD, NVIDIA, and Qualcomm.

 

We would like to thank Nizar for his time, as well as Phil Hughes for facilitating this interview.  Stay tuned, as we are expecting to continue this series of interviews with other ARM employees in the near future.

Manufacturer: AMD

A New GPU with the Same DNA

When we talked with AMD recently about its leaked roadmap, which insinuated that we would not see any new GPUs in 2013, the company was adamant that other options would be made available to gamers but was coy about saying when and to what degree.  As it turns out, today marks the release of the Radeon HD 7790, a completely new piece of silicon under the Sea Islands designation.  It uses the same GCN (Graphics Core Next) architecture as the HD 7000-series / Southern Islands GPUs, with a handful of tweaks and advantages ranging from improved clock boosting with PowerTune to faster default memory clocks.

slide02.png

To be clear, the Radeon HD 7790 is a completely new ASIC, not a rebranding of a currently available part, though the differences between the options are mostly in power routing and a reorganization of the GCN design found in Cape Verde and Pitcairn designs.  The code name for this particular GPU is Bonaire and it is one of several upcoming updates to the HD 7000 cards. 

Bonaire is built on the same 28nm TSMC process technology as all Southern Islands parts and consists of 2.08 billion transistors in a 160 mm2 die.  Compared to the HD 7800 (Pitcairn) GPU at 212 mm2 and the HD 7700 (Cape Verde) at 120 mm2, the HD 7790's chip falls right in between.  And while the die images above are likely not completely accurate, it definitely appears that AMD's engineers have reorganized the internals.

slide03.png

Bonaire is built with 14 CUs (compute units) for a total stream processor count of 896, which places it closer to the performance level of the HD 7850 (1024 SPs) than the HD 7770 (640 SPs).  The new Sea Islands GPU includes the same dual tessellation engines as the higher-end HD 7000s, along with a solid 128-bit memory bus that runs at 6.0 Gbps out of the gate on the 1GB frame buffer.  The memory controller is completely reworked in Bonaire, allowing for total memory bandwidth of 96 GB/s versus the 72 GB/s of the HD 7770, with peak theoretical compute performance of 1.79 TFLOPS.

The GPU clock rate is set at 1.0 GHz, but there is more on that later.
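Those headline numbers follow directly from the specifications above; here is a quick sanity check of the arithmetic (a Python sketch using only the figures quoted in this article):

```python
# Peak memory bandwidth = (bus width in bits / 8 bytes) * effective data rate per pin.
bus_width_bits = 128
data_rate_gbps = 6.0                      # 6.0 Gbps effective GDDR5 data rate
bandwidth_gbs = bus_width_bits / 8 * data_rate_gbps
print(bandwidth_gbs)                      # 96.0 GB/s, matching the quoted figure

# Peak single-precision compute = SPs * 2 FLOPs per clock (fused multiply-add) * clock.
stream_processors = 896                   # 14 CUs x 64 stream processors each
core_clock_ghz = 1.0
tflops = stream_processors * 2 * core_clock_ghz / 1000
print(tflops)                             # 1.792 TFLOPS, quoted as 1.79
```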

Continue reading our review of the Sapphire AMD Radeon HD 7790 1GB Bonaire GPU!!

Subject: Motherboards
Manufacturer: MSI

Introduction and Technical Specifications

Introduction

02-board.jpg

Courtesy of MSI

With the Z77A-G45 Thunderbolt, MSI took an award winning design and tweaked it to bring an affordable Thunderbolt-based solution to the masses without sacrificing on quality or performance. We put this board through our grueling battery of tests to validate the board's performance promises. The MSI Z77A-G45 Thunderbolt can be found at your favorite retailer for the reasonable price of $169.99.

03-diag-profile-front.jpg

Courtesy of MSI

The Z77A-G45 Thunderbolt sports a simple design and layout with some of the bells and whistles found on the higher priced boards omitted to keep the feature set intact and the price to a minimum. MSI includes the following features in the Z77A-G45 Thunderbolt's design: SATA 2 and SATA 3 ports; a Realtek GigE NIC; three PCI-Express x16 slots for up to tri-card support; USB 2.0 and 3.0 ports; and a single Thunderbolt port in the rear panel. For an in-depth overview on Thunderbolt technology and its advantages over other interconnect technologies, please see our review here.

04-rear-panel.jpg

Courtesy of MSI

Continue reading our review of the MSI Z77A-G45 Thunderbolt motherboard!!

Subject: Motherboards
Manufacturer: ASUS

AM3+ Last Gasp?

 

Over the past several years I have reviewed quite a few Asus products.  The ones that typically grab my attention are the ROG based units.  These are usually the most interesting, over the top, and expensive products in their respective fields.  Ryan has reviewed the ROG graphics cards, and they have rarely disappointed.  I have typically taken a look at the Crosshair series of boards that support AMD CPUs.

chvfz_01.jpg

Crosshair usually entails the “best of the best” when it comes to features and power delivery.  My first brush with these boards was the Crosshair IV.  That particular model was only recently taken out of my primary work machine; it proved itself an able performer and lasted for years (even overclocked).  The Crosshair IV Extreme featured the Lucid Hydra chip to allow multi-GPU performance without going to pure SLI or Crossfire.  The Crosshair V got rid of Lucid, added official SLI support, and incorporated the Supreme FX II X-Fi audio.  All of these boards have some things in common: they are fast, they overclock well, and they are among the most expensive motherboards ever for the AMD platform.

So what is there left to add?  The Crosshair V is a very able platform for Bulldozer- and Piledriver-based parts.  AMD is not updating the AM3+ chipsets, so we are left with the same 990FX northbridge and SB950 southbridge (both of which are essentially the same as the 890FX/SB850).  It should be a simple refresh, right?  Piledriver was released a few months ago; implement some power and BIOS tweaks and ship a rebranded board.  Sounds logical, right?  Well, thankfully for us, Asus did not follow that path.

The Asus Crosshair V Formula Z is a fairly radical redesign of the previous generation of products.  The amount of extra features, design changes, and power characteristics make it a far different creature than the original Crosshair V.  While both share many of the same style features, under the skin this is a very different motherboard.  I am rather curious why Asus did not brand this as the “Crosshair VI”.  Let’s explore, shall we?

Click here to read the entire review on the ASUS Crosshair V Formula-Z

Subject: Processors
Manufacturer: AMD

AMD Exposes Richland

When we first heard about “Richland” last year, there was a little bit of excitement.  Few were sure what to expect other than a faster “Trinity”-based CPU with a couple of extra goodies.  Today we finally get to see what Richland is.  While interesting, it is not necessarily exciting; while an improvement, it will not take AMD over the top in the mobile market.  What it actually brings to the table is better competition and a software suite that could help convince buyers to choose AMD over a competing Intel part.

From a design standpoint, it is nearly identical to the previous Trinity.  That being said, a modern processor is not exactly simple.  A lot of software optimizations can be applied to these products to increase performance and efficiency.  It seems that AMD has done exactly that.  We had heard rumors that the graphics portion was in fact changed, but it looks like it has stayed the same.  Process improvements have been made, but that is about the extent of actual hardware changes to the design.

rich_01.jpg

The new Richland APUs are branded as the A-5000 series of products.  The top end is the A10-5750M with HD-8650 integrated graphics.  This is still the VLIW-4 based graphics unit seen in the previous Trinity products, but enough changes have been made in software to enable Dual Graphics with the new Solar System based GPUs (GCN).  The speeds of these products have received a nice boost: compared to the previous top-end A10-4600M, the A10-5750M takes the base clock from 2.3 GHz to 2.5 GHz, and boost goes from 3.2 GHz up to 3.5 GHz.  The graphics portion takes the base clock from 496 MHz up to 533 MHz, while turbo mode improves from 685 MHz to 720 MHz.  These are not staggering figures, but it all still fits within the 35 watt TDP of the previous product.

rich_02.jpg

One other important improvement is support for DDR3-1866 memory.  Throughout the past year we have seen memory densities increase fairly dramatically without impacting power consumption, and the same goes for speed.  While we would expect lower-power DIMMs to be used in the thin and light categories, expect to see faster DDR3-1866 in the larger notebooks that will soon be heading our way.

Click here to read more about AMD's Richland APUs!

Manufacturer: SilverStone

Introduction and Features

2-Strider-Banner.jpg

SilverStone recently added the 600W ST60F-PS power supply to their Strider Plus series. The Strider Plus ST60F-PS is fully modular and housed in a compact enclosure that is only 140mm (5.5") deep for easy integration. The ST50F-P and ST60F-P are certified 80 Plus Bronze, while the ST60F-PS and ST75F-P are certified 80 Plus Silver.  We will be taking a detailed look at the ST60F-PS 600W power supply in this review.

3-Strider-Series.jpg

Here is what SilverStone has to say about their new Strider Plus ST60F-PS PSU: “The ST60F-PS is the world smallest modular ATX PSU with a 140mm depth which has also eased installation. In addition to 80Plus Silver level efficiency (80Plus Gold at 230VAC), all models are built to meet very high standards in electrical performance so they have ±3% voltage regulation and ripple & noise, and high amperage single +12V rail. Other notable features included are 24/7 40°C continuous output capacity, low-noise 120mm fan, and multiple sets of PCI-E cables. For enthusiasts that are looking for the power supply with all the right features to make a high performing and aesthetically pleasing system, the Strider Plus series reaches the goal with maximum satisfaction.”

SilverStone ST60F-PS  Power Supply Key Features:

• Compact design with a depth of 140mm for easy integration
• 600 watt continuous power output 24/7 at 40°C
• 80Plus Silver certified with 80 Plus Gold level efficiency at 230VAC
• Strict ±3% voltage regulation with low AC ripple and noise
• Class leading +12V rail with up to 49A (588W)
• Active PFC (up to 0.99)
• Universal AC input (90-264 VAC) full range
• 100% modular cables
• Silent running 120mm fan
• Protections: OCP, OVP, OPP, OTP, UVP, and SCP
• MSRP $105 USD

Please continue reading our SilverStone ST60F-PS power supply review!

Subject: Storage

Introduction, Specifications, and Packaging

Last year we did a combined review of the ioSafe SoloPRO and the Synology DiskStation 212+ NAS solution. The ioSafe handled the function of disaster-hardened storage, capable of withstanding a typical building fire along with the resulting drenching from your friendly neighborhood fire department, while the DiskStation made the USB-connected ioSafe available to the network and added numerous additional features as part of its excellent DSM software suite, offering many features to both home and business users:

5-lit.jpg

The whole time I was working on that article, I kept wondering how cool it would be if both items were combined into one unit. You could then get the additional benefit of multi-drive redundancy *and* the disaster-proof protection offered by the ioSafe. Well, that just happened:

slideshow_n2_1.png

Behold the ioSafe N2!

Read on for the full review!

Subject: Systems
Manufacturer: Sony

An AIO and Tablet Design

When new and interesting architectures and technologies are developed, they enable system designers to build creative designs and systems for consumers.  With its renewed focus on power efficiency as well as performance, Intel has helped move the industry toward new form factors like the Next Unit of Computing and the evolution of the All-in-One design.

Today we are taking a look at the new Sony VAIO Tap 20 system, an AIO that not only integrates a 10-point touch screen on a 20-in 1600x900 resolution display and an Ivy Bridge architecture ultra low voltage processor, but also a battery to make the design semi-mobile and ripe for inclusion in high-tech homes. 

Check out our quick video overview below and then follow that up with a full pictorial outline and some more details!

Video Loading...

This isn't Sony's first foray into all-in-one PCs, of course, but it is among the first to combine this particular set of features.  In what is essentially an Ultrabook design with a large screen, the Tap 20 combines an Ivy Bridge Core i5 processor, 4GB of DDR3 memory, and a 750GB hard drive.  Here is the breakdown:

 

Sony VAIO Tap 20 System Setup
CPU: Intel Core i5-3317U
Memory: 4GB DDR3-1600 (1 x SODIMM)
Hard Drive: 750GB XXRPM HDD (2.5-in)
Sound Card: On-board
Graphics Card: Intel HD 4000 Processor Graphics
Display: 1600x900 20-in touch screen (10 point)
Power Supply: External
Networking: Gigabit Ethernet; 802.11n WiFi
I/O: 2 x USB 3.0; SD / Memory Stick card reader; Headphone / Mic connection
Operating System: Windows 8 x64

 

IMG_9546.JPG

The display is pretty nice with its 1600x900 resolution, though I do wish we had seen a full 1080p screen for HD video playback.  As with most touch screens, the display quality is below that of a non-touch IPS monitor, but even up close (as you tend to use touch devices) you'll be hard pressed to find any imperfections.  Viewing angles are great as well, which allows for better multi-person usage. 

IMG_9547.JPG

On the left we find the power connection and a hard-wired Ethernet port that complements the integrated 802.11n WiFi.

Continue reading our review of the Sony VAIO Tap 20 AIO!!!

Introduction and Features

2-P-Series-Banner.jpg

Seasonic was one of the first manufacturers to offer a fanless PC power supply, and they just keep getting better! In their relentless pursuit of product improvement, Seasonic has recently upgraded the Fanless 400/460/520 units from 80Plus Gold to 80Plus Platinum status, delivering even better performance backed by a 7-year warranty. We will be taking a detailed look at the all-modular Platinum-460 Fanless power supply in this review.

3-Front.jpg

Seasonic Platinum Series Fanless 460W Power Supply

80Plus Platinum: The Platinum Series Fanless 400W, 460W, and 520W power supplies are certified in accordance with the 80 PLUS organization's highest standard, offering the newest technology and innovation for performance and energy savings with up to 92% efficiency and a true power factor of greater than 0.9 PF.

Seasonic Patented DC Connector Panel with Integrated Voltage Regulation Module: The Seasonic Platinum Fanless power supplies feature an integrated DC connector panel with onboard VRM (Voltage Regulator Module) that enables not only near-perfect DC-to-DC conversion, with reduced current loss/impedance and increased efficiency, but also fully modular DC cabling for maximum flexibility of integration and forward compatibility.

Fully Modular Design (DC to DC): Seasonic’s patented fully modular design offers ease of installation while minimizing voltage drops and impedance, greatly maximizing efficiency and cooling to enhance overall performance and reliability.

Seasonic Platinum Series Fanless PSU Key Features:

• 80Plus Platinum certified super high efficiency (up to 92%)
• Fanless operation (0dBA)
• 7-Year manufacturer's warranty
• Fully Modular Cable design with flat ribbon-style cables
• Seasonic DC Connector Panel with integrated VRMs
• DC-to-DC Converter design
• Tight voltage regulation (±2%)
• Multi-GPU technology support
• Conductive polymer aluminum solid capacitors
• High reliability 105°C Japanese made electrolytic capacitors
• Ultra Ventilation honeycomb structure
• Dual sided PCB layout
• High capacity +12V output
• High current Gold plated terminals with Easy Swap connectors
• Active PFC (0.99 PF typical) with Universal AC input

Please continue reading our Platinum-460 Fanless power supply review!

Manufacturer: Corsair

Introduction and Technical Specifications

Introduction

01-h110-full.PNG

Hydro Series™ H110 Extreme Performance Liquid CPU Cooler
Courtesy of Corsair

03-h90-full.PNG

Hydro Series™ H90 High Performance Liquid CPU Cooler
Courtesy of Corsair

Corsair has upped their presence in the cooling field with the new 140mm fan-based additions to the Hydro Series™ CPU water cooler lineup. Corsair was kind enough to provide us with samples of their H90 and H110 series cooling units, both using 140mm fans. We put these coolers up against their H80i 120mm fan-based unit as well as our custom-built Swiftech Apogee HD cooling system to see how well these new Corsair units performed. Starting at a base price of $99.99 for the Corsair H90 cooler, you can't go wrong with either unit.

04-h110-bare.PNG

Hydro Series™ H110 Extreme Performance CPU Cooler without fans
Courtesy of Corsair

05-h90-bare.PNG

Hydro Series™ H90 High Performance Liquid CPU Cooler without fans
Courtesy of Corsair

Corsair worked with Asetek to design their new 140mm-based line of coolers, with the H90 and H110 introduced to enhance their current lineup. Both coolers are built with aluminum radiators sized for 140mm fans and copper cold plates. The rubber-coated tubing is low-permeability 1/4-inch tubing with multiple layers to prevent liquid evaporation and provide maximum flexibility. Unlike their Corsair Link™ based coolers, the Corsair H90 and H110 units have neither integrated LEDs nor the Corsair Link™ monitoring system.

Manufacturer: PC Perspective

In case you missed it...

UPDATE: We have now published full details on our Frame Rating capture and analysis system as well as an entire host of benchmark results.  Please check it out!!

On one of the last pages of our recent NVIDIA GeForce GTX TITAN graphics card review, we included an update on our Frame Rating graphics performance metric that described the testing method more thoroughly and showed results for the first time.  Because it was buried so far into the article, I thought it was worth posting this information here as a separate article to solicit feedback from readers and help guide the discussion forward without it getting lost in the TITAN shuffle.  If you already read that page of our TITAN review, nothing new is included below. 

I am still planning a full article based on these results sooner rather than later; for now, please leave me your thoughts, comments, ideas and criticisms in the comments below!


Why are you not testing CrossFire??

If you haven't been following our sequence of stories that investigates a completely new testing methodology we are calling "frame rating", then you are really missing out.  (Part 1 is here, part 2 is here.)  The basic premise of Frame Rating is that the performance metrics that the industry is gathering using FRAPS are inaccurate in many cases and do not properly reflect the real-world gaming experience the user has.

Because of that, we are working on another method that uses high-end dual-link DVI capture equipment to directly record the raw output from the graphics card with an overlay technology that allows us to measure frame rates as they are presented on the screen, not as they are presented to the FRAPS software sub-system.  With these tools we can measure average frame rates, frame times and stutter, all in a way that reflects exactly what the viewer sees from the game.

We aren't ready to show our full sets of results yet (soon!), but the problem is that AMD's CrossFire technology shows severe performance degradation when viewed under the Frame Rating microscope that does not show up nearly as dramatically under FRAPS.  As such, I decided that it was simply irresponsible of me to present data to readers that I would then immediately refute on the final pages of this review (Editor: referencing the GTX TITAN article linked above).  It would be a waste of time for the reader, and people who skip straight to the performance graphs wouldn't know our theory on why the results displayed were invalid.

Many other sites will use FRAPS, will use CrossFire, and there is nothing wrong with that at all.  They are simply presenting data that they believe to be true based on the tools at their disposal.  More data is always better. 

Here are those results and our discussion.  I decided to use the most popular game out today, Battlefield 3, and please keep in mind this is NOT the worst-case scenario for AMD CrossFire in any way.  I tested the Radeon HD 7970 GHz Edition in single-card and CrossFire configurations, as well as the GeForce GTX 680 in single-card and SLI configurations.  To gather results I used two processes:

  1. Run FRAPS while running through a repeatable section and record frame rates and frame times for 60 seconds
  2. Run our Frame Rating capture system with a special overlay that allows us to measure frame rates and frame times with post processing.

Here is an example of what the overlay looks like in Battlefield 3.

fr_sli_1.jpg

Frame Rating capture on GeForce GTX 680s in SLI - Click to Enlarge

The column on the left shows an overlay that is applied to each and every frame of the game early in the rendering process.  A solid color is added at the PRESENT call (more details to come later) for each individual frame.  When you are playing a game, multiple frames can make it to the screen during any single 60 Hz refresh cycle of your monitor, and because of that you get a succession of colors down the left-hand side.

By measuring the pixel height of those colored columns, and knowing beforehand the order in which they should appear, we can gather the same data that FRAPS does, but our results reflect the output AFTER any driver optimizations and DirectX changes the game might make.
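That post-processing step boils down to measuring runs of overlay colors down the captured column. Here is a minimal sketch of the idea in Python (hypothetical and heavily simplified; the real analysis tool also validates the expected color sequence and handles dropped and runt frames across thousands of captured refreshes):

```python
REFRESH_INTERVAL_MS = 1000 / 60   # one captured frame covers one 60 Hz scanout

def frame_times_ms(scanline_colors):
    """Given the overlay color sampled on every scanline of one captured
    60 Hz frame (top to bottom), return (color, on-screen time in ms)
    for each game frame visible in that scanout."""
    total = len(scanline_colors)
    runs = []                      # [color, height-in-scanlines] pairs
    for color in scanline_colors:
        if runs and runs[-1][0] == color:
            runs[-1][1] += 1       # extend the current color band
        else:
            runs.append([color, 1])
    # A frame's share of the scanout interval is its visible display time.
    return [(color, height / total * REFRESH_INTERVAL_MS)
            for color, height in runs]

# Example: a "runt" frame only 4 scanlines tall out of 1080 contributes
# almost nothing to what the gamer actually sees on screen.
colors = ["maroon"] * 500 + ["silver"] * 4 + ["purple"] * 576
for color, t in frame_times_ms(colors):
    print(f"{color}: {t:.2f} ms")
```

FRAPS would count all three of those frames equally; weighting each by its on-screen height is what lets Frame Rating distinguish a full frame from a nearly invisible sliver.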

fr_cf_1.jpg

Frame Rating capture on Radeon HD 7970 CrossFire - Click to Enlarge

Here you see a very similar screenshot running on CrossFire.  Notice the thin silver band between the maroon and purple?  That is a complete frame according to FRAPS and most reviews.  Not to us; we think that rendered frame is almost useless. 

Continue reading our 3rd part in a series of Frame Rating and to see our first performance results!!

Manufacturer: NVIDIA

TITAN is back for more!

Our NVIDIA GeForce GTX TITAN Coverage Schedule:

If you are reading this today, chances are you were here on Tuesday when we first launched our NVIDIA GeForce GTX TITAN features and preview story (accessible from the link above) and were hoping to find benchmarks then.  You didn't, but you will now.  I am here to show you that the TITAN is indeed the single fastest GPU on the market, and it MAY be the best graphics card (single or dual GPU) on the market depending on your usage model.  Some will argue, some will disagree, but we have an interesting argument to make about this $999 gaming beast.

A brief history of time...er, TITAN

In our previous article we talked all about TITAN's GK110-based GPU, the form factor, card design, GPU Boost 2.0 features and much more, and I would strongly encourage you to read it before going forward.  If you just want the highlights, I am going to copy and paste some of the most important details below.

IMG_9502.JPG

From a pure specifications standpoint, the GeForce GTX TITAN based on GK110 is a powerhouse.  While the full GPU sports a total of 15 SMX units, TITAN has 14 of them enabled for a total of 2688 shaders and 224 texture units.  Clock speeds on TITAN are a bit lower than on GK104, with a base clock rate of 836 MHz and a Boost Clock of 876 MHz.  As we will show you later in this article, though, the GPU Boost technology has been updated and changed quite a bit from what we first saw with the GTX 680.

The bump in the memory bus width is also key; feeding that many CUDA cores definitely required a boost from 256-bit to 384-bit, a 50% increase.  Even better, the memory bus is still running at 6.0 GHz, resulting in a total memory bandwidth of 288.4 GB/s.

blockdiagram2.jpg

Speaking of memory - this card will ship with 6GB on-board.  Yes, 6 GeeBees!!  That is twice as much as AMD's Radeon HD 7970 and three times as much as NVIDIA's own GeForce GTX 680 card.  This is without a doubt a nod to the super-computing capabilities of the GPU and the GPGPU functionality that NVIDIA is enabling with the double precision aspects of GK110.

Continue reading our full review of the NVIDIA GeForce GTX TITAN graphics card with benchmarks and an update on our Frame Rating process!!

Subject: Editorial, Storage
Manufacturer: Various
Tagged: tlc, ssd, slc, mlc, endurance

Taking an Accurate Look at SSD Write Endurance

Last year, I posted a rebuttal to a paper describing the future of flash memory as ‘bleak’. The paper went through great (and convoluted) lengths to paint a tragic picture of flash memory endurance moving forward. Yesterday a newer paper hit Slashdot, this one doing just the opposite and going as far as to assume production flash memory handles up to 1 million erase cycles. You’d think that since I’m constantly pushing flash memory as a viable, reliable, and super-fast successor to Hard Disks (aka 'Spinning Rust'), that I’d just sit back on this one and let it fly. After all, it helps make my argument! Well, I can’t, because if there are errors published on a topic so important to me, it’s in the interest of journalistic integrity that I must now post an equal and opposite rebuttal to this one – even if it works against my case.

First I’m going to invite you to read through the paper in question. After doing so, I’m now going to pick it apart. Unfortunately I’m crunched for time today, so I’m going to reduce my dissertation into the form of some simple bulleted points:

  • Max data write speed did not take into account 8b/10b encoding, meaning 6Gb/sec = 600MB/sec, not 750MB/sec.
  • The flash *page* size (8KB) and block sizes (2MB) chosen more closely resemble that of MLC parts (not SLC – see below for why this is important).
  • The paper makes no reference to Write Amplification.

Perhaps the most glaring and significant is that all of the formulas, while correct, fail to consider the most important factor when dealing with flash memory writes – Write Amplification.

Before getting into it, I'll reference the excellent graphic that Anand put in his SSD Relapse piece:

writeamplification2.png

SSD controllers combine smaller writes into larger ones in an attempt to speed up the effective write speed. This falls flat once all flash blocks have been written to at least once. From that point forward, the SSD must play musical chairs with the data on each and every small write. In a bad case, a single 4KB write turns into a 2MB write. For that example, Write Amplification would be a factor of 500, meaning the flash memory is cycled at 500x the rate calculated in the paper. Sure that’s an extreme example, but the point is that without referencing amplification at all, it is assumed to be a factor of 1, which would only be the case if you were only writing 2MB blocks of data to the SSD. This is almost never the case, regardless of Operating System.
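To see how directly write amplification erodes endurance, here is a back-of-envelope calculation. The drive capacity, P/E rating, and daily write volume below are illustrative assumptions, not figures from the paper:

```python
# Back-of-envelope effect of write amplification (WA) on flash endurance.
# NAND cycles are consumed in proportion to WA, so rated lifetime divides by it.

def drive_lifetime_days(capacity_gb, pe_cycles, host_writes_gb_per_day, wa):
    """Days until the rated P/E budget is exhausted, given write amplification."""
    total_endurance_gb = capacity_gb * pe_cycles       # raw NAND write budget
    nand_writes_per_day = host_writes_gb_per_day * wa  # host writes, amplified
    return total_endurance_gb / nand_writes_per_day

# Hypothetical 256 GB MLC drive, 3000 P/E cycles, 20 GB of host writes per day.
# WA = 1 is the paper's implicit assumption; WA = 500 is the extreme case above.
for wa in (1, 5, 500):
    print(wa, round(drive_lifetime_days(256, 3000, 20, wa)))
```

Even a modest real-world WA of 5 cuts the calculated lifetime by a factor of five, which is exactly the kind of correction the paper's formulas are missing.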

After posters on Slashdot called out the author on his assumed P/E cycle ratings, he went back and added two links to justify his figures. The problem is that the first link points to a 2005 data sheet for 90nm SLC flash. Samsung’s 90nm flash was 1Gb per die (128MB). The packages were available with up to 4 dies each, so scaling up to a typical 16-chip SSD only gives you an 8GB SSD. Not very practical. That’s not to say 100k is an inaccurate figure for SLC endurance - it’s just a really bad reference to use. Here's a better one from the Flash Memory Summit a couple of years back:

flash-1.png

The second link was a 2008 PR blast from Micron, based on their proposed pushing of the 34nm process to its limits. “One Million Write Cycles” was nothing more than a tag line for an achievement accomplished in a lab under ideal conditions. That figure was never reached in anything you could actually buy in a SATA SSD. A better reference would be from that same presentation at the Summit:

flash-2.png

This shows larger process nodes hitting even beyond 1 million cycles (given sufficient additional error bits used for error correction), but remember it has to be something that is available and in a usable capacity to be practical for real world use, and that’s just not the case for the flash in the above chart.

At the end of the day, manufacturers must balance cost, capacity, and longevity. This forces a push towards smaller processes (for more capacity per cost), with the limit being how much endurance they are willing to give up in the process. In the end they choose based on what the customer needs. Enterprise use leans towards SLC or eMLC, as they are willing to spend more for the gain in endurance. Typical PC users get standard MLC and now even TLC, which are *good enough* for that application. It's worth noting that most SSD failures are not due to burning out all of the available flash P/E cycles. The vast majority are due to infant mortality failures of the controller or even due to buggy firmware. I've never written enough to any single consumer SSD (in normal operation) to wear out all of the flash. The closest I've come to a flash-related failure was when I had an ioDrive fail during testing by excessive heat causing a solder pad to lift on one of the flash chips.

All of this said, I’d love to see a revisit to the author’s well-structured paper – only based on the corrected assumptions I’ve outlined above. *That* is the type of paper I would reference when attempting to make *accurate* arguments for SSD endurance.

Manufacturer: NVIDIA

GK110 Makes Its Way to Gamers

Our NVIDIA GeForce GTX TITAN Coverage Schedule:

Back in May of 2012, NVIDIA released information on GK110, a new GPU the company was targeting at the HPC (high performance computing) and GPGPU markets that are eager for more processing power.  Almost immediately, the questions began about when we might see the GK110 part make its way to consumers and gamers, in addition to finding a home in supercomputers like Cray's Titan system, capable of 17.59 petaflops.


Watch this same video on our YouTube channel

02.jpg

Nine months later we finally have an answer - the GeForce GTX TITAN is a consumer graphics card built around the GK110 GPU.  With 2,688 CUDA cores, 7.1 billion transistors, and a die size of 551 mm^2, the GTX TITAN is a big step forward (in both performance and physical size).

specs3.jpg

From a pure specifications standpoint, the GeForce GTX TITAN based on GK110 is a powerhouse.  While the full GPU sports a total of 15 SMX units, TITAN will have 14 of them enabled for a total of 2688 shaders and 224 texture units.  Clock speeds on TITAN are a bit lower than on GK104, with a base clock rate of 836 MHz and a Boost Clock of 876 MHz.  As we will show you later in this article, though, the GPU Boost technology has been updated and changed quite a bit from what we first saw with the GTX 680.

The bump in the memory bus width is also key; feeding that many CUDA cores definitely required a boost from 256-bit to 384-bit, a 50% increase.  Even better, the memory bus is still running at 6.0 GHz, resulting in a total memory bandwidth of 288.4 GB/s.
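The bandwidth figure follows directly from the bus width and the effective data rate (the quoted 288.4 GB/s implies an effective rate of 6.008 GHz rather than a flat 6.0). A quick check in Python:

```python
# GDDR5 bandwidth (GB/s) = effective transfer rate (GT/s) * bus width in bytes.
def mem_bandwidth_gb_s(effective_gt_s, bus_width_bits):
    return effective_gt_s * bus_width_bits / 8

print(mem_bandwidth_gb_s(6.008, 384))  # TITAN's 384-bit bus: ~288.4 GB/s
print(mem_bandwidth_gb_s(6.008, 256))  # GK104's 256-bit bus at the same clock: ~192.3 GB/s
```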

Continue reading our preview of the brand new NVIDIA GeForce GTX TITAN graphics card!!

Subject: Motherboards
Manufacturer: Gigabyte

Define an Enthusiast CPU...

 

FM2 poses an interesting quandary for motherboard manufacturers.  AMD provides a very robust and full-featured chip for use with their processors (A85X) that would lend itself well to midrange and enthusiast-class motherboards.  Unfortunately, AMD does not provide a similarly high-end CPU, compared to the competition, at prices that would make sense for an FM2 motherboard costing between $140 and $250.

So these manufacturers are constrained on price when offering fully featured motherboards that take advantage of all aspects of the A85X FCH (Fusion Controller Hub).  Until AMD can deliver a more competitive CPU on the FM2 platform, motherboard manufacturers will be forced to design offerings that can really go no higher than $129 (the current price of the fastest A10 processor from AMD).  This is not necessarily a bad thing, though, as it has forced these manufacturers to really rethink their designs and to focus their energies on getting the greatest bang for the buck.  AMD is selling a decent number of these processors, but the market is constrained compared to Intel's offerings utilizing the LGA 1155 infrastructure.

gb_01.jpg

Gigabyte has taken this particular bull by the horns and applied a (so far) unique technology to the board.  This is on top of all the other marketing and engineering terms we are quite familiar with.  The company is one of the top three motherboard manufacturers in the world; it typically trails Asus in terms of shipments but remains ahead of MSI.  As with any motherboard manufacturer, the quality of Gigabyte products has seen peaks and valleys through the years.  From what I have seen over the past few years, though, Gigabyte is doing very well in terms of overall quality and value.

Click to continue reading about the Gigabyte GA-F2A85X-UP4!!

Manufacturer: Rosewill

Introduction and Features

2-Tachyon-Banner.jpg

Rosewill continues to expand their power supply lineup with the introduction of four new units in the Tachyon Series. All Tachyon Series power supplies are certified 80Plus Platinum to deliver maximum efficiency. We will be taking a detailed look at the flagship Tachyon-1000 in this review.

3-T-1000.jpg

Tachyon Series 1000W PSU Key Features:

• 80Plus Platinum certified
• Continuous 1000W output @50°C
• Single powerful +12V rail – ideal for Gaming Systems
• SLI & CrossFire Ready – six 6+2 pin PCI-E connectors
• Modular cable design
• Mesh sleeving on all cables for easier cable routing
• Silent 140mm fan with Auto Fan Speed Control
• Fanless operation at low power
• Active PFC with Universal AC input (100-240V)
• Protection Circuits: OC, OV, OP, UV, and SC Protection
• Safety and EMI Approvals: cTUVus, FCC, CE, ROHS
• 5-Year Warranty

Please continue reading our Rosewill Tachyon power supply review!