Author:
Manufacturer: Sony

Load time improvements

This is PART 1 of our testing of PlayStation 4 storage options: the stock hard drive, an SSHD hybrid, and an SSD.  In PART 2 we take a look at the differences between PSN-downloaded games and Blu-ray-installed games, as well as boot time differences.  Be sure you read PART 2, PlayStation 4 (PS4) Blu-ray and Download Storage Performance, Boot Times.

On Friday, Sony released the PlayStation 4 to the world.  As the first new console launch in 7 years, the PS4 has a lot to live up to, but our story today isn't going to attempt to weigh the value of the hardware or software ecosystem.  Instead, after our PS4 teardown video from last week, we got quite a few requests for information on storage performance with the PS4 and what replacement hardware might offer gamers.

Hard Drive Replacement Process

Changing the hard drive in your PlayStation 4 is quite simple, a continuation of Sony's policy with the PS3.

01_0.jpg

Installation starts with the semi-transparent panel on top of the unit, to the left of the light bar.  Obviously, make sure your PS4 is completely powered off and unplugged first.

02_0.jpg

Simply slide it to the outside of the chassis and wiggle it up to release.  There are no screws or anything to deal with yet.

03_0.jpg

Once inside, you'll find a screw stamped with the PS4 shape logos; that is the screw you need to remove to pull out the hard drive cage.

Continue reading our analysis of PS4 HDD, SSHD and SSD Performance!!

Subject: Storage
Manufacturer: Western Digital

Introduction and Features

Introduction:

Today Western Digital launched an important addition to their Personal Cloud Storage NAS family - the My Cloud EX4:

131112-081131-4.84.jpg

The My Cloud EX4 is Western Digital's answer to the increased demand for larger personal storage devices. When folks look for places to consolidate all of their bulk files, media, system backups, etc., they tend to outgrow what is possible with a single hard drive. Here is Western Digital's projection on where personal storage is headed:

WD EX4-market shift.png

Where the My Cloud was a single drive solution, the My Cloud EX4 extends that capability to span up to four 3.5" drives. When it comes to devices that span across several drives, the number 4 is a bit of a sweet spot, as it enables several RAID configurations:

WD EX4-RAID options.png

Everything but online capacity expansion (where the user swaps drives one at a time for larger-capacity models) is supported. While WD has stated that the feature will be available in a future update, I find it a bit risky to intentionally and repeatedly fail an array by pulling drives and forcing rebuilds. It just makes more sense to back up the data and re-create a fresh array with the new, larger drives installed.
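
To put those RAID options in perspective, here is a minimal sketch (our own illustration, not WD's firmware logic) of the usable capacity each mode yields from a hypothetical four-bay array of equal-size drives:

```python
# Rough usable-capacity estimates for a 4-bay array of equal-size drives.
# Illustrative only; real NAS firmware reserves some space for metadata.

def usable_tb(raid_level: str, drive_tb: float, drives: int = 4) -> float:
    """Return approximate usable capacity in TB for common RAID levels."""
    if raid_level == "RAID 0":         # striping, no redundancy
        return drives * drive_tb
    if raid_level == "RAID 1":         # mirrored
        return drives * drive_tb / 2
    if raid_level == "RAID 5":         # single parity drive's worth of overhead
        return (drives - 1) * drive_tb
    if raid_level == "RAID 10":        # striped mirrors
        return drives * drive_tb / 2
    if raid_level == "JBOD/Spanning":  # concatenated, no redundancy
        return drives * drive_tb
    raise ValueError(f"unknown RAID level: {raid_level}")

if __name__ == "__main__":
    for level in ("RAID 0", "RAID 1", "RAID 5", "RAID 10", "JBOD/Spanning"):
        print(f"{level:>14}: {usable_tb(level, drive_tb=4.0):.0f} TB usable from 4 x 4 TB")
```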

Ok, so we've got the groundwork down with a 4-bay NAS device. What remains to be seen is how Western Digital has implemented the feature set. There is a lot to get through here, so let's get to it.

Read on for more on Western Digital's new My Cloud EX4!

Author:
Subject: Mobile
Manufacturer: EVGA

NVIDIA Tegra Note Program

Clearly, NVIDIA’s Tegra line has not been as successful as the company had hoped and expected.  The discrete GPU giant’s move into the highly competitive world of tablet and phone SoCs has been slower than anticipated, and littered with roadblocks that were either unexpected or that NVIDIA thought would be much easier to overcome. 

IMG_1879.JPG

The truth is that this was always a long play for the company; success was never going to come overnight, and anyone who thought otherwise was deluded.  Part of it has to do with the development cycle of the ARM ecosystem.  NVIDIA is used to a rather quick development, production, marketing, and sales pattern thanks to its time in high-performance GPUs, but the SoC world is quite different.  By the time a device based on a Tegra chip reaches the retail channel, it has gone through an OEM development cycle, an NVIDIA SoC development cycle, and even an ARM Cortex CPU development cycle.  The result is an extended time frame from initial product announcement to retail availability.

Partly due to this, and partly due to limited design wins in the mobile markets, NVIDIA has started to develop internally designed end-user devices that utilize its Tegra SoC processors.  This has the benefit of being much faster to market – while most SoC vendors develop reference platforms during the normal course of business, NVIDIA is essentially going to perfect and productize them.

Continue reading our review of the NVIDIA Tegra Note 7 $199 Tablet!!

Author:
Subject: Processors
Manufacturer: AMD

More Details from Lisa Su

The executives at AMD like to break their own NDAs.  Then again, they are typically the ones setting these NDA dates, so it isn’t a big deal.  It is no secret that Kaveri has been in the pipeline for some time.  We knew a lot of the basic details of the product, but there were certainly things that were missing.  Lisa Su went onstage and shared a few new details with us.

kaveri.jpg

Kaveri will be made up of 4 “Steamroller” cores, which are enhanced versions of the cores in the previous Bulldozer/Trinity/Vishera families of products.  Much of the front end is doubled: it now has dual decode, more cache, larger TLBs, and a host of other smaller features that all add up to greater single-threaded performance and better multi-threaded handling.  Integer performance will be improved, and the FPU/MMX/SSE unit now features 2 x 128-bit FMAC units which can “fuse” to support AVX-256.

However, there was no mention of the fabled 6 core Kaveri.  At this time, it is unlikely that particular product will be launched anytime soon. 

Click to read the entire article here!

Subject: Mobile
Manufacturer: MSI

Introduction and Design

P9173249.jpg

With few exceptions, it’s generally been taken for granted that gaming notebooks are going to be hefty devices. Portability is rarely the focus, with weight and battery life alike usually sacrificed in the interest of sheer power. But the MSI GE40 2OC—the lightest 14-inch gaming notebook currently available—seeks a middle ground while retaining its gaming prowess. Trending instead toward the form factor of a large Ultrabook, the GE40 is both stylish and manageable (and perhaps affordable at around $1,300)—but can its muscle withstand the reduction in casing real estate?

While it can’t hang with the best of the 15-inch and 17-inch crowd, in context with its 14-inch peers, the GE40’s spec sheet hardly reads like it’s been the subject of any sort of game-changing handicap:

specs.png

One of the most popular CPUs for Haswell gaming notebooks has been the 2.4 GHz (3.4 GHz Turbo) i7-4700MQ. But the i7-4702MQ in the GE40 2OC is nearly as powerful (managing 2.2 GHz and 3.2 GHz in those same areas, respectively), and it features a TDP that’s 10 W lower at just 37 W. That’s ideal for notebooks such as the GE40, which seek to provide a thinner case in conjunction with uncompromising performance. Meanwhile, the NVIDIA GTX 760M is no slouch, even if it isn’t on the same level as the 770s and 780s that we’ve been seeing in some 15.6-inch and 17.3-inch gaming beasts.

Elsewhere, it’s business as usual, with 8 GB of RAM and a 120 GB SSD rounding out the major bullet points. Nearly everything here is on par with the best of rival 14-inch gaming models with the exception of the 900p screen resolution (which is bested by some notebooks, such as Dell’s Alienware 14 and its 1080p panel).

Continue reading our review of the MSI GE40 2OC!!!

Author:
Manufacturer: EVGA

EVGA Brings Custom GTX 780 Ti Early

Reference cards for new graphics card releases are very important for a number of reasons.  Most importantly, these are the cards presented to the media and reviewers that judge the value and performance of these cards out of the gate.  These various articles are generally used by readers and enthusiasts to make purchasing decisions, and if first impressions are not good, it can spell trouble.  Also, reference cards tend to be the first cards sold in the market (see the recent Radeon R9 290/290X launch) and early adopters get the same technology in their hands; again the impressions reference cards leave will live in forums for eternity.

All that being said, retail cards are where partners can differentiate and keep the various GPUs relevant for some time to come.  EVGA is probably the most well-known NVIDIA partner and is clearly their biggest sales outlet.  The ACX cooler was popularized with the first GTX 700-series cards, and the company has quickly adapted it to the GTX 780 Ti, released by NVIDIA just last week.

evga780tiacx.jpg

I would normally have a full review for you as soon as possible, but thanks to a couple of upcoming trips that will keep me away from the GPU test bed, that will take a little while longer.  However, I thought a quick preview was in order to show off the specifications and performance of the EVGA GTX 780 Ti ACX.

gpuz.png

As expected, the EVGA ACX design of the GTX 780 Ti is overclocked.  While the reference card runs at a base clock of 875 MHz and a typical boost clock of 928 MHz, this retail model has a base clock of 1006 MHz and a boost clock of 1072 MHz.  This means that all 2,880 CUDA cores are going to run somewhere around 15% faster on the EVGA ACX model than the reference GTX 780 Ti SKUs. 
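
For those curious, here is the back-of-the-envelope math behind that roughly 15% figure (our own quick check, not an EVGA-provided number):

```python
# Back-of-the-envelope clock uplift of the EVGA GTX 780 Ti ACX over reference.
ref_base, ref_boost = 875, 928      # MHz, reference GTX 780 Ti
acx_base, acx_boost = 1006, 1072    # MHz, EVGA GTX 780 Ti ACX

print(f"Base clock uplift:  {(acx_base / ref_base - 1) * 100:.1f}%")    # ~15.0%
print(f"Boost clock uplift: {(acx_boost / ref_boost - 1) * 100:.1f}%")  # ~15.5%
```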

We should note that though the cooler is custom built by EVGA, the PCB design of this GTX 780 Ti card remains the same as the reference models. 

Continue reading our preview of the EVGA GeForce GTX 780 Ti ACX custom-cooled graphics card!!

Author:
Manufacturer: AMD

An issue of variance

AMD just sent along an email to the press with a new driver to use for Radeon R9 290X and Radeon R9 290 testing going forward.  Here is the note:

We’ve identified that there’s variability in fan speeds across AMD R9 290 series boards. This variability in fan speed translates into variability of the cooling capacity of the fan-sink.

The flexibility of AMD PowerTune technology enables us to correct this variability in a driver update. This update will normalize the fan RPMs to the correct values.

The correct target RPM values are 2200RPM for the AMD Radeon R9 290X ‘Quiet mode’, and 2650RPM for the R9 290. You can verify these in GPU-Z.

If you’re working on stories relating to R9 290 series products, please use this driver as it will reduce any variability in fan speeds. This driver will be posted publicly tonight.

Great!  This is good news!  Except it also creates some questions. 

When we first tested the R9 290X and the R9 290, we discussed the latest iteration of AMD's PowerTune technology. That feature attempts to keep clocks as high as possible under the constraints of temperature and power.  I took issue with the high variability of clock speeds on our R9 290X sample, citing this graph:

clock-avg.png

I then did some digging into the variance and the claims that AMD was building a "configurable" GPU.  In that article we found that there were significant performance deltas between "hot" and "cold" GPUs; we noticed that doing simple, quick benchmarks would produce certain results that were definitely not real-world in nature.  At the default 40% fan speed, Crysis 3 showed 10% variance with the 290X at 2560x1440:

Crysis3_2560x1440_OFPS.png

Continue reading our coverage of the most recent driver changes and how they affect the R9 290X and R9 290!!

Author:
Manufacturer: NVIDIA

GK110 in all its glory

I bet you didn't realize that October and November were going to bring the onslaught of graphics cards they have.  I know I did not, and I tend to have a better background on these things than most of our readers.  Starting with the release of the AMD Radeon R9 280X, 270X, and R7 260X in the first week of October, it has pretty much been a non-stop battle between NVIDIA and AMD for the hearts, minds, and wallets of PC gamers. 

Shortly after the Tahiti refresh came NVIDIA's move into display technology with G-Sync, a variable refresh rate feature that will work with upcoming monitors from ASUS and others as long as you have a GeForce Kepler GPU.  The technology was damned impressive, but I am still waiting for NVIDIA to send over some panels for extended testing. 

Later in October we were hit with the R9 290X, the Hawaii GPU that brought AMD back into the world of ultra-class single-GPU performance.  It produced stellar benchmarks and undercut the prices (then, at least) of the GTX 780 and GTX TITAN.  We tested it in both single and multi-GPU configurations and found that AMD had made some impressive progress in fixing its frame pacing issues, even with Eyefinity and 4K tiled displays. 

NVIDIA dropped a driver release with ShadowPlay, which allows gamers to record gameplay locally without a hit on performance.  I posted a roundup of R9 280X cards which showed alternative coolers and performance ranges.  We investigated the R9 290X Hawaii GPU and the claims that performance is variable and configurable based on fan speeds.  Finally, the R9 290 (non-X model) was released this week to more fanfare than the 290X thanks to its nearly identical performance and $399 price tag. 

IMG_1862.JPG

And today, yet another release.  NVIDIA's GeForce GTX 780 Ti takes the performance of the GK110 and fully unlocks it.  The GTX TITAN uses one fewer SMX and the GTX 780 has three fewer SMX units so you can expect the GTX 780 Ti to, at the very least, become the fastest NVIDIA GPU available.  But can it hold its lead over the R9 290X and validate its $699 price tag?

Continue reading our review of the NVIDIA GeForce GTX 780 Ti 3GB GK110 Graphics Card!!

Subject: Storage
Manufacturer: OCZ Technology

Introduction, Specifications and Packaging

Introduction:

It has been a while since OCZ introduced their Vector SSD, and it was in need of a refresh to bring its pricing more in line with the competition, which had been equipping their products with physically smaller flash dies (thereby reducing cost). Today, OCZ launched a refresh to their Vector - now dubbed the Vector 150:

131107-072702-7.11.jpg

The OCZ strategy changed up a while back. They removed a lot of redundancy and confusing product lines, consolidating everything into a few simple solutions. Here's a snapshot of that strategy, showing the prior and newer iterations of three simple solutions:

vector 150 - lineup.png

The Vector 150 we look at today falls right into the middle here. I just love the 'ENTHUSIST' icon they went with:

ocz enthusiast.png

Read on for our full review of the new OCZ Vector 150!

Author:
Subject: Editorial
Manufacturer: Wyoming Whiskey

Bourbon? Really?

Why is there a bourbon review on a PC-centric website? 

We can’t live, eat, and breathe PC technology all the time.  All of us have outside interests that may not intersect with the PC and mobile market.  I think we would be pretty boring people if that were the case.  Yes, our professional careers are centered in this area, but our personal lives do diverge from the PC world.  You certainly can’t drink a GPU, though I’m sure somebody out there has tried.

bottle31.jpg

The bottle is unique to Wyoming Whiskey.  The bourbon has a warm, amber glow about it as well.  Picture courtesy of Wyoming Whiskey

Many years ago I became a beer enthusiast.  I loved to sample different concoctions, I would brew my own, and I settled on some personal favorites throughout the years.  Living in Wyoming is not necessarily conducive to sampling many different styles and types of beers, and so I was in a bit of a rut.  A few years back a friend of mine bought me a bottle of Tomatin 12 year single malt scotch, and I figured this would be an interesting avenue to move down since I had tapped out my selection of new and interesting beers (Wyoming has terrible beer distribution).

Click to read the entire review here!

Author:
Manufacturer: AMD

More of the same for a lot less cash

The week before Halloween, AMD unleashed a trick on the GPU world under the guise of the Radeon R9 290X, and it was the fastest single-GPU graphics card we had tested to date.  With a surprising price point of $549, it was able to outperform the GeForce GTX 780 (and the GTX TITAN in most cases) while undercutting the competition's price by $100.  Not too bad! 

amd1.jpg

Today's release might be more surprising (and somewhat confusing).  The AMD Radeon R9 290 4GB card is based on the same Hawaii GPU with a few fewer compute units (CUs) enabled and an even more aggressive price and performance placement.  Seriously, has AMD lost its mind?

Can a card with a $399 price tag cut into the same performance levels as the JUST DROPPED price of $499 for the GeForce GTX 780??  And, if so, what sacrifices are being made by users that adopt it?  Why do so many of our introduction sentences end in question marks?

The R9 290 GPU - Hawaii loses a small island

If you are new to the Hawaii GPU and you missed our first review of the Radeon R9 290X from last month, you should probably start back there.  The architecture is very similar to that of the HD 7000-series Tahiti GPUs, with some modest changes to improve efficiency; the biggest is a jump in raw primitive rate from 2 per clock to 4 per clock.

diagram1.jpg

The R9 290 is based on Hawaii though it has four fewer compute units (CUs) than the R9 290X.  When I asked AMD if that meant there was one fewer CU per Shader Engine or if they were all removed from a single Engine, they refused to really answer.  Instead, several "I'm not allowed to comment on the specific configuration" lines were given.  This seems pretty odd as NVIDIA has been upfront about the dual options for its derivative GPU models.  Oh well.

Continue reading our review of the AMD Radeon R9 290 4GB Graphics Card Review!!!

Author:
Manufacturer: AMD

Clock Variations

When AMD released the Radeon R9 290X last month, I came away from the review very impressed with the performance and price point the new flagship graphics card was presented with.  My review showed that the 290X was clearly faster than the NVIDIA GeForce GTX 780 and (at that time) was considerably less expensive as well - a win-win for AMD without a doubt. 

But there were concerns over a couple of aspects of the card's design.  First was the temperature and, specifically, how AMD was okay with this rather large piece of silicon hitting a sustained 95C.  Another concern was the switch AMD included at the top of the R9 290X to change fan profiles.  This switch essentially creates two reference defaults and makes it impossible for us to set a single performance baseline.  The different modes only change the maximum fan speed the card is allowed to reach, yet performance changes with the setting thanks to the newly revised (and updated) AMD PowerTune technology.

We also saw, in our initial review, a large variation in clock speeds both from one game to another as well as over time (after giving the card a chance to heat up).  This led me to create the following graph showing average clock speeds 5-7 minutes into a gaming session with the card set to the default, "quiet" state.  Each test is over a 60 second span.

clock-avg.png
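
For reference, averages like those in the graph above come from reducing logged clock samples over a window.  A minimal sketch of that kind of reduction, assuming a hypothetical CSV log of timestamped clock readings (e.g. exported from GPU-Z), might look like this:

```python
# Average GPU clock over a 60-second window from a logged CSV of samples.
# The log format (timestamp_s, core_clock_mhz) is a hypothetical example.
import csv

def average_clock(log_path: str, start_s: float, window_s: float = 60.0) -> float:
    samples = []
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            t = float(row["timestamp_s"])
            if start_s <= t < start_s + window_s:
                samples.append(float(row["core_clock_mhz"]))
    if not samples:
        raise ValueError("no samples in the requested window")
    return sum(samples) / len(samples)

# e.g. average clock over 60 seconds starting 5 minutes (300 s) into a run:
# print(average_clock("gpu_clock_log.csv", start_s=300.0))
```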

Clearly there is variance here, which led us to more questions about AMD's stance.  Remember when the Kepler GPUs launched?  AMD was very clear that variance from card to card, silicon to silicon, was bad for the consumer because it created random performance deltas between cards with otherwise identical specifications. 

When it comes to the R9 290X, though, AMD claims both the GPU (and card itself) are a customizable graphics solution.  The customization is based around the maximum fan speed which is a setting the user can adjust inside the Catalyst Control Center.  This setting will allow you to lower the fan speed if you are a gamer desiring a quieter gaming configuration while still having great gaming performance.  If you are comfortable with a louder fan, because headphones are magic, then you have the option to simply turn up the maximum fan speed and gain additional performance (a higher average clock rate) without any actual overclocking.

Continue reading our article on the AMD Radeon R9 290X - The Configurable GPU!!!

Subject: Motherboards
Manufacturer: GIGABYTE

Introduction and Technical Specifications

Introduction

02-8094_big.jpg

Courtesy of GIGABYTE

The GIGABYTE Z87X-UD5H is the upper tier of their Z87 desktop board line. The board supports the latest generation of Intel LGA1150-based processors and is crammed full of all the bells and whistles you would expect on more costly boards. With an MSRP of $229.99, the board is aggressively priced to compete with the other upper tier Z87 desktop boards like the MSI Z87 MPower.

03-8154.jpg

Courtesy of GIGABYTE

Besides an impressive 16-phase digital power delivery system dedicated to the processor, the Z87X-UD5H features GIGABYTE's Ultra Durable 5 Plus technology. Ultra Durable 5 Plus brings several high-end power components into the board's design: International Rectifier (IR) manufactured PowIRstage™ ICs and PWM controllers, Nippon Chemi-con manufactured Black Solid capacitors with a 10k-hour operational rating at 105C, 15 micron gold plating on the CPU socket pins, and two 0.070mm copper layers embedded into the PCB for optimal heat dissipation. In addition to the Ultra Durable 5 Plus power features, GIGABYTE designed the board with the following: 10 SATA 6Gb/s ports; two Intel GigE NICs; three PCI-Express x16 slots for up to dual-card NVIDIA SLI or AMD CrossFire support; two PCI-Express x1 slots; a PCI slot; onboard power, reset, and BIOS reset buttons; switch-BIOS and Dual-BIOS switches; a 2-digit diagnostic LED display; integrated voltage measurement points; and USB 2.0 and 3.0 port support.

04-8096_big.jpg

Courtesy of GIGABYTE

Continue reading our review of the GIGABYTE Z87X-UD5H motherboard!

Author:
Manufacturer: Various

ASUS R9 280X DirectCU II TOP

Earlier this month AMD took the wraps off of a revamped and restyled family of GPUs under the Radeon R9 and R7 brands.  When I reviewed the R9 280X, essentially a lower cost version of the Radeon HD 7970 GHz Edition, I came away impressed with the package AMD was able to put together.  Though there was no new hardware to really discuss with the R9 280X, the price drop placed the cards in a very aggressive position adjacent to the NVIDIA GeForce line-up (including the GeForce GTX 770 and the GTX 760). 

As a result, I fully expect the R9 280X to be a great selling GPU for those gamers with a mid-range budget of $300. 

But another benefit of using an existing GPU architecture is the ability for board partners to very quickly release custom-built versions of the R9 280X. Companies like ASUS, MSI, and Sapphire are able to offer overclocked and custom-cooled alternatives to the 3GB, $300 card almost immediately, simply by adapting the HD 7970 PCB.

all01.jpg

Today we are going to be reviewing a set of three different R9 280X cards: the ASUS DirectCU II, MSI Twin Frozr Gaming, and the Sapphire TOXIC. 

Continue reading our roundup of the R9 280X cards from ASUS, MSI and Sapphire!!

Author:
Manufacturer: ARM

ARM is Serious About Graphics

Ask most computer users from 10 years ago who ARM is, and very few could give the correct answer.  Some well-informed people might mention “Intel” and “StrongARM” or “XScale”, but ARM remained a shadowy presence until the rise of the smartphone.  Since then, ARM has built up its brand, much to the chagrin of companies like Intel and AMD.  Partners such as Samsung, Apple, Qualcomm, MediaTek, Rockchip, and NVIDIA have all worked with ARM to produce chips based on the ARMv7 architecture, with Apple being the first to release an ARMv8 (64-bit) SoC.  Chips based on ARM's many architectures are likely the most widely shipped in the world, ranging from very basic processors to the very latest Apple A7 SoC.

t700_01.jpg

The ARMv7 and ARMv8 architectures are very power efficient, yet provide enough performance to handle the vast majority of tasks on smartphones and tablets (as well as a handful of laptops).  With the growth of visual computing, ARM has also dedicated itself to designing competent graphics for its chips.  The Mali architecture aims to be an affordable option for those without their own graphics design groups (as NVIDIA and Qualcomm have), yet competitive with others willing to license out their IP (Imagination Technologies).

ARM was in fact one of the first to license out the very latest graphics technology to partners in the form of the Mali-T600 series of products.  These modules were among the first to support OpenGL ES 3.0 (compatible with 2.0 and 1.1) and DirectX 11.  The T600 architecture is very comparable to Imagination Technologies’ Series 6 and the Qualcomm Adreno 300 series of products.  Currently NVIDIA does not have a unified mobile architecture in production that supports OpenGL ES 3.0/DX11, but they are adapting the Kepler architecture to mobile and will be licensing it to interested parties.  Qualcomm does not license out Adreno after buying that group from AMD (Adreno is an anagram of Radeon).

Click to read the entire article here!

Manufacturer: NVIDIA

It impresses.

ShadowPlay is NVIDIA's latest addition to their GeForce Experience platform. This feature allows their GPUs, starting with Kepler, to either record game footage locally or stream it online through Twitch.tv (in a later update). It requires a Kepler GPU because it is accelerated by that hardware. The goal is to constantly record game footage without any noticeable impact on performance; that way, the player can keep it running forever and have the opportunity to save moments after they happen.

Also, it is free.
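
Conceptually, the "save moments after they happen" behavior works like a rolling buffer that keeps only the most recent footage and discards the oldest. The sketch below is purely our own illustration of that idea (a fixed-length buffer of hypothetical encoded frames), not NVIDIA's implementation:

```python
# Conceptual rolling buffer: keep only the most recent N seconds of footage,
# so a "save" request can dump what already happened. Illustration only.
from collections import deque

class RollingRecorder:
    def __init__(self, keep_seconds: int, fps: int = 60):
        # Oldest frames are evicted automatically once the buffer is full.
        self.frames = deque(maxlen=keep_seconds * fps)

    def capture(self, encoded_frame: bytes) -> None:
        # Called once per encoded frame while the game runs.
        self.frames.append(encoded_frame)

    def save_clip(self):
        # Snapshot of the recent past, ready to be written to disk.
        return list(self.frames)

recorder = RollingRecorder(keep_seconds=300)  # keep the last 5 minutes
```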

shadowplay-vs.jpg

I know that I have several gaming memories which arrive unannounced and leave undocumented. A solution like this is very exciting to me. Of course, a feature on paper is not the same as functional software in the real world. Thankfully, at least in my limited usage, ShadowPlay mostly lives up to its claims. I do not feel its impact on gaming performance, and I am comfortable leaving it on at all times. There are issues, however, that I will get to soon.

This first impression is based on my main system running the 331.65 (Beta) GeForce drivers recommended for ShadowPlay.

  • Intel Core i7-3770, 3.4 GHz
  • NVIDIA GeForce GTX 670
  • 16 GB DDR3 RAM
  • Windows 7 Professional
  • 1920 x 1080 @ 120Hz
  • 3 TB USB 3.0 HDD (~50 MB/s file clone)

The two games tested are Starcraft II: Heart of the Swarm and Battlefield 3.

Read on to see my thoughts on ShadowPlay, the new Experience on the block.

Manufacturer: Thermalright

Introduction and Technical Specifications

Introduction

02-Silver-Arrow-EX_0.jpg

Courtesy of Thermalright

Thermalright is an established brand in the CPU cooling arena, with a track record of innovative creations designed to best remove heat from your prized CPU. The latest incarnation of their cooler line for Intel and AMD-based CPUs takes the form of the Silver Arrow SB-E Extreme, a massive nickel-plated copper cooler sporting two 140mm fans to aid in heat dispersal. We tested this cooler alongside other all-in-one and air coolers to see how well the Thermalright cooler stacks up. With a retail price of $99.99, the cooler carries a premium price for the premium performance it offers.

03-Silver-Arrow-EX-Top1.jpg

Courtesy of Thermalright

04-Silver-Arrow-side.jpg

Courtesy of Thermalright

05-Silver-Arrow-EX-Bottom_0.jpg

Courtesy of Thermalright

Thermalright took their cooler design to a whole new level with the Silver Arrow SB-E Extreme. The cooler features a nickel-plated copper base and heat pipes with two massive aluminum thin-finned tower radiators to help with heat dissipation. The Silver Arrow SB-E contains eight total 6mm diameter heat pipes that run through the copper base plate, terminating in the two aluminum tower radiators. The base plate itself is polished to a mirror-like finish, ensuring optimal mating between the base plate and CPU surfaces.

Continue reading our review of the Thermalright Silver Arrow SB-E CPU air cooler!

Author:
Manufacturer: AMD

A bit of a surprise

Okay, let's cut to the chase here: it's late, we are rushing to get our articles out, and I think you all would rather see our testing results NOW rather than LATER.  The first thing you should do is read my review of the AMD Radeon R9 290X 4GB Hawaii graphics card which goes over the new architecture, new feature set, and performance in single card configurations. 

Then, you should continue reading below to find out how the new XDMA, bridge-less CrossFire implementation actually works in both single panel and 4K (tiled) configurations.

IMG_1802.JPG

 

A New CrossFire For a New Generation

CrossFire has caused a lot of problems for AMD in recent months (and a lot of problems for me as well).  But, AMD continues to make strides in correcting the frame pacing issues associated with CrossFire configurations and the new R9 290X moves the bar forward.

Without the CrossFire bridge connector on the 290X, all of the CrossFire communication and data transfer occurs over the PCI Express bus that connects the cards to the rest of the system.  AMD claims that this new XDMA interface was designed with Eyefinity and UltraHD resolutions in mind (which were the focus of our most recent article on the subject).  By accessing the GPU's memory over PCIe, AMD claims it can alleviate the bandwidth and sync issues that were causing problems with Eyefinity and tiled 4K displays.
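
As a rough sanity check on the bandwidth involved (our own back-of-the-envelope estimate, not an AMD figure), finished 4K frames are small relative to what a PCI Express 3.0 x16 link can move:

```python
# Back-of-the-envelope: bandwidth needed to pass finished 4K frames over PCIe.
# Our own estimate; AMD has not published XDMA traffic numbers.
width, height, bytes_per_pixel = 3840, 2160, 4   # UltraHD, 32-bit color
fps = 60

frame_mb = width * height * bytes_per_pixel / 1e6
needed_gbs = frame_mb * fps / 1e3
pcie3_x16_gbs = 15.75                            # approx. theoretical PCIe 3.0 x16

print(f"One frame:    {frame_mb:.1f} MB")        # ~33 MB
print(f"At {fps} fps:    {needed_gbs:.1f} GB/s")  # ~2 GB/s
print(f"PCIe 3.0 x16: {pcie3_x16_gbs} GB/s")
```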

Even better, this updated version of CrossFire is said to be compatible with the frame pacing updates to the Catalyst driver that improve the multi-GPU experience for end users.

IMG_1800.JPG

When an extra R9 290X accidentally fell into my lap, I decided to take it for a spin.  And if you have followed my graphics testing methodology over the past year, then you'll understand the importance of these tests.

Continue reading our article Frame Rating: AMD Radeon R9 290X CrossFire and 4K Preview Testing!!

Author:
Manufacturer: AMD

A slightly new architecture

Note: We also tested the new AMD Radeon R9 290X in CrossFire and at 4K resolutions; check out that full Frame Rating story right here!!

Last month AMD brought media, analysts, and customers out to Hawaii to talk about a new graphics chip coming out this year.  As you might have guessed from the location, the code name for this GPU was, in fact, Hawaii.  It was targeted at the high end of the discrete graphics market to take on the likes of the GTX 780 and GTX TITAN from NVIDIA. 

Earlier this month we reviewed the AMD Radeon R9 280X, R9 270X, and the R7 260X. None of these were based on that new GPU.  Instead, these cards were all rebrands and repositionings of existing hardware in the market (albeit at reduced prices).  Those lower prices made the R9 280X one of our favorite GPUs of the moment, as it offers a price-to-performance ratio currently unmatched by NVIDIA.

But today is a little different: today we are talking about a much more expensive product that has to live up to some pretty lofty goals and ambitions set forth by the AMD PR and marketing machine.  At $549 MSRP, the new AMD Radeon R9 290X will become the flagship of the Radeon brand.  The question is: to where does that ship sail?

 

The AMD Hawaii Architecture

To be quite upfront about it, the Hawaii design is very similar to that of the Tahiti GPU from the Radeon HD 7970 and R9 280X cards.  Based on the same GCN (Graphics Core Next) architecture AMD assured us would be its long term vision, Hawaii ups the ante in a few key areas while maintaining the same core.

01.jpg

Hawaii is built around Shader Engines, of which the R9 290X has four.  Each of these includes 11 CUs (compute units), and each CU holds four 16-lane SIMD arrays.  Doing the quick math brings us to a total stream processor count of 2,816 on the R9 290X. 
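
Spelling that quick math out explicitly (four 16-wide SIMDs per GCN compute unit works out to 64 stream processors per CU):

```python
# Stream processor count for the R9 290X's Hawaii configuration.
shader_engines = 4
cus_per_engine = 11
simds_per_cu = 4
lanes_per_simd = 16          # GCN SIMD width

stream_processors = shader_engines * cus_per_engine * simds_per_cu * lanes_per_simd
print(stream_processors)     # 2816
```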

Continue reading our review of the AMD Radeon R9 290X 4GB Graphics Card!!

Author:
Subject: Editorial
Manufacturer:

The Really Good Times are Over

We really do not realize how good we had it.  Sure, we could apply that to budget surpluses and the time before the rise of global terrorism, but in this case I am talking about the predictable advancement of graphics due to both design expertise and improvements in process technology.  Moore’s law has been exceptionally kind to graphics.  Looking back and plotting the course of these graphics companies, they have actually outstripped Moore in terms of transistor density from generation to generation.  Most of this is due to better tools and the expertise gained in what is still a fairly new endeavor compared to CPUs (the first true 3D accelerators were released in the 1993/94 timeframe).

The complexity of a modern 3D chip is truly mind-boggling.  To get a good idea of where we came from, we must look back at the first generations of products that we could actually purchase.  The original 3Dfx Voodoo Graphics was composed of a raster chip and a texture chip; each contained approximately 1 million transistors (give or take) and was made on a then-available .5 micron process (we shall call it 500 nm from here on out to give a sense of perspective against modern process technology).  The chips were clocked between 47 and 50 MHz (though they could often be clocked up to 57 MHz by going into the init file and putting in “SET SST_GRXCLK=57”… by the way, SST stood for Sellers/Smith/Tarolli, the founders of 3Dfx).  This revolutionary graphics card could push out 47 to 50 megapixels per second, had 4 MB of VRAM, and was released at the beginning of 1996.

righteous3d_01.JPG

My first 3D graphics card was the Orchid Righteous 3D.  Voodoo Graphics was really the first successful consumer 3D graphics card.  Yes, there were others before it, but Voodoo Graphics had the largest impact of them all.

In 1998 3Dfx released the Voodoo 2, and it was a significant jump in complexity from the original.  These chips were fabricated on a 350 nm process.  There were three chips on each card: one was the raster chip and the other two were texture chips.  At the top end of the product stack were the 12 MB cards.  The raster chip had 4 MB of VRAM available to it, while each texture chip had 4 MB of VRAM for texture storage.  Not only did this product double the performance of the Voodoo Graphics, it was able to run single-card configurations at 800x600 (compared to the 640x480 maximum of the Voodoo Graphics).  This was around the same time that NVIDIA started to become a very aggressive competitor with the Riva TNT and ATI was about to ship the Rage 128.
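
As a rough illustration of what that node change alone buys (our own first-order estimate, ignoring the many practical factors that keep real designs from scaling ideally), ideal area scaling from 500 nm to 350 nm roughly doubles the transistors that fit in the same die area:

```python
# First-order density gain from moving 500 nm -> 350 nm.
# Ideal-scaling estimate only; real chips never hit this exactly.
old_node, new_node = 500, 350            # nm
density_gain = (old_node / new_node) ** 2
print(f"~{density_gain:.1f}x more transistors in the same area")  # ~2.0x
```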

Read the entire editorial here!