Manufacturer: EVGA

EVGA Brings Custom GTX 780 Ti Early

Reference cards for new graphics card releases are important for a number of reasons.  Most importantly, they are the cards presented to the media and reviewers who judge a product's value and performance out of the gate.  Those articles are what readers and enthusiasts generally use to make purchasing decisions, and if first impressions are not good, it can spell trouble.  Reference cards also tend to be the first cards sold at retail (see the recent Radeon R9 290/290X launch), so early adopters get the same technology in their hands; the impressions reference cards leave will live on in forums for eternity.

All that being said, retail cards are where partners can differentiate and keep the various GPUs relevant for some time to come.  EVGA is probably the best known NVIDIA partner and is clearly their biggest outlet for sales.  The ACX cooler is one we saw popularized with the first GTX 700-series cards, and the company has quickly adapted it to the GTX 780 Ti, released by NVIDIA just last week.

evga780tiacx.jpg

I would normally have a full review for you as soon as possible, but a couple of upcoming trips will keep me away from the GPU test bed, so the full article will take a little while longer.  In the meantime, I thought a quick preview was in order to show off the specifications and performance of the EVGA GTX 780 Ti ACX.

gpuz.png

As expected, the EVGA ACX design of the GTX 780 Ti is overclocked.  While the reference card runs at a base clock of 875 MHz and a typical boost clock of 928 MHz, this retail model has a base clock of 1006 MHz and a boost clock of 1072 MHz.  This means that all 2,880 CUDA cores are going to run somewhere around 15% faster on the EVGA ACX model than the reference GTX 780 Ti SKUs. 
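
A quick sanity check on that 15% figure, using the clocks quoted above (a trivial back-of-the-envelope snippet, not part of our review data):

```python
# Clock advantage of the EVGA ACX card over the reference GTX 780 Ti
ref_base, ref_boost = 875, 928      # MHz, reference card
acx_base, acx_boost = 1006, 1072    # MHz, EVGA ACX

print(f"base:  +{(acx_base / ref_base - 1) * 100:.1f}%")    # +15.0%
print(f"boost: +{(acx_boost / ref_boost - 1) * 100:.1f}%")  # +15.5%
```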

We should note that though the cooler is custom built by EVGA, the PCB design of this GTX 780 Ti card remains the same as the reference models. 

Continue reading our preview of the EVGA GeForce GTX 780 Ti ACX custom-cooled graphics card!!

NVIDIA strikes back!

Subject: Graphics Cards | November 8, 2013 - 04:41 PM |
Tagged: nvidia, kepler, gtx 780 ti, gk110, geforce

Here is a roundup of reviews of what is now the fastest single-GPU card on the planet, the GTX 780 Ti, which uses a fully enabled GK110 chip.  Its 7 GHz GDDR5 is clocked faster than AMD's memory but sits on a 384-bit memory bus, narrower than the R9 290X's 512-bit bus, which leads to some interesting questions about this card's performance at high resolutions.  Are you willing to pay quite a bit more for better performance and a quieter card? Check out the performance deltas at [H]ard|OCP and see if that changes your mind at all.
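
Back-of-the-envelope math makes that trade-off concrete (a sketch assuming the launch specs: 7 Gbps GDDR5 on a 384-bit bus for the 780 Ti versus 5 Gbps on a 512-bit bus for the R9 290X):

```python
# Peak memory bandwidth = per-pin data rate (Gbps) * bus width (bits) / 8
def bandwidth_gbs(gbps_per_pin: float, bus_bits: int) -> float:
    return gbps_per_pin * bus_bits / 8

print(f"GTX 780 Ti: {bandwidth_gbs(7.0, 384):.0f} GB/s")  # 336 GB/s
print(f"R9 290X:    {bandwidth_gbs(5.0, 512):.0f} GB/s")  # 320 GB/s
```

The faster memory more than offsets the narrower bus in peak terms; the open question is how each card behaves at 4K, where bandwidth pressure is highest.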

You can see how it measures up in ISUs in Ryan's review as well.


"NVIDIA's fastest single-GPU video card is being launched today. With the full potential of the Kepler architecture and GK110 GPU fully unlocked, how will it perform compared to the new R9 290X with new drivers? Will the price versus performance make sense? Will it out perform a TITAN? We find out all this and more."

Source: [H]ard|OCP
Manufacturer: AMD

An issue of variance

AMD just sent along an email to the press with a new driver to use for Radeon R9 290X and Radeon R9 290 testing going forward.  Here is the note:

We’ve identified that there’s variability in fan speeds across AMD R9 290 series boards. This variability in fan speed translates into variability of the cooling capacity of the fan-sink.

The flexibility of AMD PowerTune technology enables us to correct this variability in a driver update. This update will normalize the fan RPMs to the correct values.

The correct target RPM values are 2200RPM for the AMD Radeon R9 290X ‘Quiet mode’, and 2650RPM for the R9 290. You can verify these in GPU-Z.

If you’re working on stories relating to R9 290 series products, please use this driver as it will reduce any variability in fan speeds. This driver will be posted publicly tonight.

Great!  This is good news!  Except it also creates some questions. 

When we first tested the R9 290X and the R9 290, we discussed the latest iteration of AMD's PowerTune technology. That feature attempts to keep clocks as high as possible under the constraints of temperature and power.  I took issue with the high variability of clock speeds on our R9 290X sample, citing this graph:

clock-avg.png

I then did some digging into the variance and the claims that AMD was building a "configurable" GPU.  In that article we found significant performance deltas between "hot" and "cold" GPUs; simple, quick benchmarks would produce results that were definitely not real-world in nature.  At the default 40% fan speed, Crysis 3 showed 10% variance with the 290X at 2560x1440:

Crysis3_2560x1440_OFPS.png

Continue reading our coverage of the most recent driver changes and how they affect the R9 290X and R9 290!!

AMD Releases Catalyst 13.11 Beta 9.2 Driver To Correct Performance Variance Issue of R9 290 Series Graphics Cards

Subject: Graphics Cards, Cases and Cooling | November 8, 2013 - 02:41 AM |
Tagged: R9 290X, powertune, hawaii, graphics drivers, gpu, GCN, catalyst 13.11 beta, amd, 290x

AMD recently launched its 290X graphics card, the new high-end single-GPU solution based on the GCN Hawaii architecture. The new GPU is rather large and incorporates an updated version of AMD's PowerTune technology, which automatically adjusts clock speeds based on temperature with a default maximum fan speed of 40%. Unfortunately, it seems that some 290X cards available at retail exhibited performance characteristics that varied from review units.

Retail versus Review Sample Performance Variance Testing.jpg

AMD has looked into the issue and released the following statement in response to the performance variances (which PC Perspective is looking into as well).

Hello, We've identified that there's variability in fan speeds across AMD R9 290 series boards. This variability in fan speed translates into variability of the cooling capacity of the fan-sink. The flexibility of AMD PowerTune technology enables us to correct this variability in a driver update. This update will normalize the fan RPMs to the correct values.

The correct target RPM values are 2200RPM for the AMD Radeon R9 290X "Quiet mode", and 2650RPM for the R9 290. You can verify these in GPU-Z. If you're working on stories relating to R9 290 series products, please use this driver as it will reduce any variability in fan speeds. This driver will be posted publicly tonight.

From the AMD statement, it seems to be an issue with fan speeds varying from card to card and causing the performance differences. With a GPU that is rated to run at up to 95C, a fan limited to 40% maximum, and dynamic clock speeds, it is only natural that cards could perform differently, especially if case airflow is not up to par. The specific issue pointed out by other technology review sites (my understanding is that Tom's Hardware first reported on the retail versus review sample variance), however, is that the 40% maximum on certain cards does not correspond to the RPM target that AMD intended.

AMD intended the Radeon R9 290X's fan to run at 2200 RPM at its 40% Quiet Mode setting, and the fan on the R9 290 (which has a maximum fan speed of 47%) to spin at 2650 RPM. However, some cards' 40% settings are not actually hitting those intended RPMs, which causes performance differences as PowerTune adjusts clock speeds to the resulting cooling capacity.
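
To illustrate the distinction (a minimal Python sketch with made-up fan-response numbers; AMD's real control loop lives in the driver and firmware, which we cannot see): an open-loop percentage just sets a duty cycle and hopes, while a closed-loop target adjusts the duty cycle until the measured RPM matches.

```python
# Open loop: the RPM you get depends on each board's fan characteristics.
def rpm_open_loop(duty_pct: float, rpm_per_pct: float) -> float:
    return duty_pct * rpm_per_pct

# Closed loop: nudge the duty cycle until the measured RPM hits the target.
def duty_for_target(target_rpm, rpm_at_duty, duty=40.0, steps=50):
    for _ in range(steps):
        duty += 0.005 * (target_rpm - rpm_at_duty(duty))  # proportional step
    return duty

# Two hypothetical boards whose fans respond differently to the same duty:
board_a = lambda d: rpm_open_loop(d, 55.0)  # 40% -> 2200 RPM (as intended)
board_b = lambda d: rpm_open_loop(d, 50.0)  # 40% -> 2000 RPM (runs slow)

print(board_a(40.0), board_b(40.0))              # 2200.0 2000.0 -> variance
print(round(duty_for_target(2200, board_b), 1))  # ~44.0 (% duty for 2200 RPM)
```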

Luckily, the issue is being worked on by AMD, and it is reportedly rectified by a driver update. The driver ensures that the fans are actually spinning at the intended speed when set to the 40% (R9 290X) or 47% (R9 290) values in Catalyst Control Center. The new driver, which includes the fix, is Catalyst 13.11 Beta 9.2 and is available for download now.

If you are running an R9 290 or R9 290X in your system, you should consider updating to the latest driver to ensure you are getting the cooling (and, as a result, gaming) performance you are supposed to be getting.

Catalyst 13.11 Beta 9.2 is available from the AMD website.

Stay tuned to PC Perspective for more information on the Radeon R9 290 series GPU performance variance issue as it develops.

Image credit: Ryan Shrout (PC Perspective).

Source: AMD
Manufacturer: NVIDIA

GK110 in all its glory

I bet you didn't realize that October and November would turn into the onslaught of graphics card releases they have been.  I know I did not, and I tend to have a better background on these things than most of our readers.  Starting with the release of the AMD Radeon R9 280X, R9 270X, and R7 260X in the first week of October, it has pretty much been a non-stop battle between NVIDIA and AMD for the hearts, minds, and wallets of PC gamers. 

Shortly after the Tahiti refresh came NVIDIA's move into display technology with G-Sync, a variable refresh rate feature that will work with upcoming monitors from ASUS and others as long as you have a GeForce Kepler GPU.  The technology was damned impressive, but I am still waiting for NVIDIA to send over some panels for extended testing. 

Later in October we were hit with the R9 290X, the Hawaii GPU that brought AMD back into the world of ultra-class single-GPU performance.  It produced stellar benchmarks and undercut the prices (at the time, at least) of the GTX 780 and GTX TITAN.  We tested it in both single and multi-GPU configurations and found that AMD had made some impressive progress in fixing its frame pacing issues, even with Eyefinity and 4K tiled displays. 

NVIDIA then dropped a driver release with ShadowPlay, which allows gamers to record gameplay locally without a hit on performance.  I posted a roundup of R9 280X cards that showed off alternative coolers and performance ranges.  We investigated the R9 290X Hawaii GPU and the claims that its performance is variable and configurable based on fan speeds.  Finally, the R9 290 (non-X model) was released this week to more fanfare than the 290X thanks to its nearly identical performance and $399 price tag. 

IMG_1862.JPG

And today, yet another release.  NVIDIA's GeForce GTX 780 Ti takes the GK110 and fully unlocks it.  The GTX TITAN uses one fewer SMX and the GTX 780 has three fewer SMX units, so you can expect the GTX 780 Ti to, at the very least, become the fastest NVIDIA GPU available.  But can it hold its lead over the R9 290X and validate its $699 price tag?

Continue reading our review of the NVIDIA GeForce GTX 780 Ti 3GB GK110 Graphics Card!!

NVIDIA Grid GPUs Available for Amazon EC2

Subject: General Tech, Graphics Cards, Systems | November 5, 2013 - 09:33 PM |
Tagged: nvidia, grid, AWS, amazon

Amazon Web Services allows customers (individuals, organizations, or companies) to rent servers of various capabilities to match their needs. Many websites are hosted at its data centers, mostly because you can purchase different (or multiple) servers if you have big variations in traffic.

I, personally, sometimes use it as a game server for scheduled multiplayer events. The traditional method is spending $50-80 USD per month on a... decent... server running all day, every day, and using it a couple of hours per week. With Amazon EC2, we hosted a 200-player event (100 vs 100) by purchasing a dual-Xeon instance (ironically the fastest single-threaded option) connected to Amazon's internet backbone by 10 Gigabit Ethernet. This server cost just under $5 per hour, all expenses considered. It was not much of a discount, but it ran like butter.
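
For the curious, the back-of-envelope math behind "not much of a discount" (the $5/hr and $50-80/mo figures come from the paragraph above; the event schedule is an assumption):

```python
dedicated_per_month = 65.0   # midpoint of the $50-80/mo dedicated range
ec2_per_hour = 5.0           # dual-Xeon EC2 instance, all expenses considered
hours_per_event, events_per_month = 3, 4   # assumed: one 3-hour event a week

ec2_per_month = ec2_per_hour * hours_per_event * events_per_month
print(f"EC2: ${ec2_per_month:.2f}/mo vs dedicated: ${dedicated_per_month:.2f}/mo")
# EC2: $60.00/mo vs dedicated: $65.00/mo
```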

nvidia-grid-bracket.png

This leads me to today's story: NVIDIA GRID GPUs are now available at Amazon Web Services. Both companies hope their customers will use (or create services based on) these instances. Applications they expect to see are streamed games, CAD and media creation, and other server-side graphics processing. These Kepler-based instances, named "g2.2xlarge", will be available alongside the older Fermi-based Cluster Compute Instances ("cg1.4xlarge").

It is also noteworthy that the older Fermi-based Tesla servers are about 4x as expensive. GRID GPUs are based on GK104 (or GK107, but those are not available on Amazon EC2) and not the more compute-intensive GK110. The new instances would probably be a step backwards for customers intending to perform GPGPU workloads for computational science or "big data" analysis. The newer GRID systems do not have 10 Gigabit Ethernet, either.

So what does it have? Well, I created an AWS instance to find out.

aws-grid-cpu.png

Its CPU is advertised as an Intel E5-2670 with 8 threads and 26 Compute Units (CUs). This is particularly odd, as that particular CPU is an eight-core part with 16 threads; Amazon also usually rates it at 22 CUs per 8 threads. This made me wonder whether the CPU is split between two clients or whether Amazon disabled Hyper-Threading to push the clock rates higher (and it ultimately led me to just log in to an instance and see). As it turns out, HT is still enabled and the processor registers as having 4 physical cores.
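
If you want to replicate that check on your own instance, something like this works (psutil is a third-party package, pip install psutil; the expected output reflects what I saw above):

```python
import psutil

logical = psutil.cpu_count(logical=True)    # hardware threads the VM sees
physical = psutil.cpu_count(logical=False)  # physical cores the VM sees

# On g2.2xlarge this reports 8 logical on 4 physical: HT is enabled, and the
# guest sees half of the E5-2670's 8 cores / 16 threads.
print(f"{logical} logical threads on {physical} physical cores")
```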

The GPU was slightly more... complicated.

aws-grid-gpu.png

The NVIDIA Control Panel apparently does not work over Remote Desktop, and the GPU registers as a "Standard VGA Graphics Adapter". Actually, two are available in Device Manager, although one has the yellow exclamation mark of driver woe (random integrated graphics that wasn't disabled in the BIOS?). GPU-Z was not able to pick much up from it, but it was of some help.

Keep in mind: I did this without contacting either Amazon or NVIDIA. It is entirely possible that the OS I used (Windows Server 2008 R2) was a poor choice. OTOY, as part of this announcement, offers Amazon Machine Images (AMIs) for Linux and Windows installations integrated with their ORBX middleware.

I spot three key pieces of information: the base clock is 797 MHz, the memory size is 2990 MB, and the default drivers are Forceware 276.52 (??). The core (GK104) and default clock rate (797 MHz) are characteristic of the GRID K520, which pairs two GK104 GPUs clocked at 800 MHz. However, since the K520 gives each GPU 4 GB and this instance only has 3 GB of vRAM, I can tell that the product is slightly different.

I was unable to query the device's shader count. The K520 (similar to a GeForce GTX 680) has 1,536 shaders per GPU, which sounds about right (but, again, this is pure speculation).
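
If the speculation holds, the theoretical throughput is easy to estimate (Kepler does 2 FLOPs per shader per clock via fused multiply-add):

```python
shaders, clock_mhz = 1536, 797           # speculated count, observed base clock
gflops = shaders * 2 * clock_mhz / 1000  # FMA = 2 FLOPs/shader/clock
print(f"~{gflops:.0f} GFLOPS single precision")  # ~2448 GFLOPS
```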

I also tested the server with TCPing to measure its network latency versus the cluster compute instances. I did not do anything like Speedtest or Netalyzr. With a normal cluster instance I see pings of about 20-25 ms; with this instance I was more in the 45-50 ms range. Of course, your mileage may vary, and this should not be used as any official benchmark. If you are considering the instance for your product, launch one and run your own tests; it is not expensive. Still, it seems to be less responsive than Cluster Compute instances, which is odd considering its intended gaming usage.
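
If you want to run a similar check yourself, a TCP connect timer in the spirit of TCPing is only a few lines (the hostname here is a placeholder; port 3389 is RDP on a Windows instance):

```python
import socket, time

def tcp_ping(host: str, port: int, timeout: float = 2.0) -> float:
    """Return the TCP connect time in milliseconds."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass
    return (time.perf_counter() - start) * 1000.0

samples = [tcp_ping("ec2-xx-xx-xx-xx.compute-1.amazonaws.com", 3389)
           for _ in range(10)]
print(f"min/avg: {min(samples):.1f}/{sum(samples)/len(samples):.1f} ms")
```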

Regardless, now that Amazon has picked up GRID, we might see more services (be they consumer or enterprise) that utilize this technology. The new GPU instances start at $0.65/hr for Linux and $0.767/hr for Windows (excluding extra charges like network bandwidth) on demand. As always with EC2, if you will use these instances a lot, you can get reduced rates by paying a fee upfront.
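
As a rough guide to when reserved pricing starts to matter, here is the on-demand math at the Linux rate quoted above (bandwidth and storage charges excluded; reserved rates vary and are not quoted here):

```python
hourly = 0.65  # g2.2xlarge Linux on-demand, USD/hr
for hours_per_day in (1, 8, 24):
    print(f"{hours_per_day:>2} h/day -> ${hourly * hours_per_day * 30:7.2f}/mo")
#  1 h/day -> $  19.50/mo
#  8 h/day -> $ 156.00/mo
# 24 h/day -> $ 468.00/mo
```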

Official press blast after the break.

Source: NVIDIA
Manufacturer: AMD

More of the same for a lot less cash

The week before Halloween, AMD unleashed a trick on the GPU world under the guise of the Radeon R9 290X, the fastest single-GPU graphics card we had tested to date.  With a surprising price point of $549, it was able to outperform the GeForce GTX 780 (and the GTX TITAN in most cases) while undercutting the competition's price by $100.  Not too bad! 

amd1.jpg

Today's release might be more surprising (and somewhat confusing).  The AMD Radeon R9 290 4GB card is based on the same Hawaii GPU with a few fewer compute units (CUs) enabled and an even more aggressive price and performance placement.  Seriously, has AMD lost its mind?

Can a card with a $399 price tag cut into the same performance levels as the JUST DROPPED price of $499 for the GeForce GTX 780??  And, if so, what sacrifices are being made by users that adopt it?  Why do so many of our introduction sentences end in question marks?

The R9 290 GPU - Hawaii loses a small island

If you are new to the Hawaii GPU and you missed our first review of the Radeon R9 290X from last month, you should probably start back there.  The architecture is very similar to that of the HD 7000-series Tahiti GPUs, with some modest changes to improve efficiency; the biggest jump is in raw primitive throughput, from 2 per clock to 4 per clock.
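
In raw numbers (an illustrative calculation using each card's rated boost clock; real geometry throughput depends on the workload):

```python
# Peak geometry rate = primitives per clock * core clock
for name, prims_per_clk, clock_mhz in [("Tahiti, 2/clock", 2, 1000),
                                       ("Hawaii, 4/clock", 4, 947)]:
    print(f"{name}: ~{prims_per_clk * clock_mhz / 1000:.1f}B prims/sec")
# Tahiti, 2/clock: ~2.0B prims/sec
# Hawaii, 4/clock: ~3.8B prims/sec
```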

diagram1.jpg

The R9 290 is based on Hawaii, though it has four fewer compute units (CUs) than the R9 290X.  When I asked AMD whether that meant one fewer CU per Shader Engine or all four removed from a single engine, they refused to really answer; instead, I got several "I'm not allowed to comment on the specific configuration" lines.  This seems pretty odd, as NVIDIA has been upfront about the dual options for its derivative GPU models.  Oh well.

Continue reading our review of the AMD Radeon R9 290 4GB graphics card!!!

Manufacturer: AMD

Clock Variations

When AMD released the Radeon R9 290X last month, I came away from the review very impressed with the performance and price point of the new flagship graphics card.  My review showed that the 290X was clearly faster than the NVIDIA GeForce GTX 780 and (at that time) considerably less expensive as well - a win-win for AMD without a doubt. 

But there were concerns over a couple of aspects of the card's design.  First was the temperature and, specifically, how AMD was okay with this rather large piece of silicon hitting 95C sustained.  Another concern was the switch at the top of the R9 290X for changing fan profiles.  This switch essentially creates two reference defaults and makes it impossible for us to set a single baseline of performance.  The different modes only change the maximum fan speed that the card is allowed to reach; still, performance changes because of this setting thanks to the newly revised AMD PowerTune technology.

We also saw, in our initial review, a large variation in clock speeds both from one game to another and over time (after giving the card a chance to heat up).  This led me to create the following graph showing average clock speeds 5-7 minutes into a gaming session with the card set to the default "quiet" state.  Each test covers a 60-second span.

clock-avg.png
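
For context, here is the shape of the measurement (a minimal sketch; the hypothetical read_gpu_clock() stands in for whatever utility reports the live core clock, such as GPU-Z logging):

```python
import time

def average_clock(read_gpu_clock, warmup_s=300, window_s=60, interval_s=1.0):
    """Heat-soak the card, then average the core clock over one window."""
    time.sleep(warmup_s)                  # ~5 minutes into the session
    samples = []
    end = time.time() + window_s
    while time.time() < end:
        samples.append(read_gpu_clock())  # current core clock in MHz
        time.sleep(interval_s)
    return sum(samples) / len(samples)
```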

Clearly there is variance here, which led us to more questions about AMD's stance.  Remember when the Kepler GPUs launched?  AMD was very clear that variance from card to card, silicon to silicon, was bad for the consumer, as it created random performance deltas between cards with otherwise identical specifications. 

When it comes to the R9 290X, though, AMD claims both the GPU and the card itself are a customizable graphics solution.  The customization is based around the maximum fan speed, a setting the user can adjust inside the Catalyst Control Center.  It allows you to lower the fan speed if you desire a quieter gaming configuration while still getting great performance.  If you are comfortable with a louder fan, because headphones are magic, you can simply turn up the maximum fan speed and gain additional performance (a higher average clock rate) without any actual overclocking.

Continue reading our article on the AMD Radeon R9 290X - The Configurable GPU!!!

Manufacturer: Various

ASUS R9 280X DirectCU II TOP

Earlier this month AMD took the wraps off of a revamped and restyled family of GPUs under the Radeon R9 and R7 brands.  When I reviewed the R9 280X, essentially a lower cost version of the Radeon HD 7970 GHz Edition, I came away impressed with the package AMD was able to put together.  Though there was no new hardware to really discuss with the R9 280X, the price drop placed the cards in a very aggressive position adjacent to the NVIDIA GeForce line-up (including the GeForce GTX 770 and the GTX 760). 

As a result, I fully expect the R9 280X to be a great selling GPU for those gamers with a mid-range budget of $300. 

Another benefit of using an existing GPU architecture is that board partners can very quickly release custom-built versions of the R9 280X. Companies like ASUS, MSI, and Sapphire are able to offer overclocked and custom-cooled alternatives to the $300 3GB card almost immediately, simply by adapting the HD 7970 PCB.

all01.jpg

Today we are going to be reviewing a set of three different R9 280X cards: the ASUS DirectCU II, MSI Twin Frozr Gaming, and the Sapphire TOXIC. 

Continue reading our roundup of the R9 280X cards from ASUS, MSI and Sapphire!!

Manufacturer: ARM

ARM is Serious About Graphics

Ask most computer users from 10 years ago who ARM is, and very few would give the correct answer.  Some well-informed people might mention "Intel" and "StrongARM" or "XScale", but ARM remained a shadowy presence until the rise of the smartphone.  Since then, ARM has built up its brand, much to the chagrin of companies like Intel and AMD.  Partners such as Samsung, Apple, Qualcomm, MediaTek, Rockchip, and NVIDIA have all worked with ARM to produce chips based on the ARMv7 architecture, with Apple being the first to release ARMv8 (64-bit) SoCs.  ARM's many architectures are likely the most shipped chips in the world, spanning everything from very basic processors to the very latest Apple A7 SoC.

t700_01.jpg

The ARMv7 and ARMv8 architectures are very power efficient, yet provide enough performance to handle the vast majority of tasks on smartphones and tablets (as well as a handful of laptops).  With the growth of visual computing, ARM has also dedicated itself to designing competent graphics for its chips.  The Mali architecture aims to be an affordable option for licensees without their own graphics design groups (unlike NVIDIA and Qualcomm), while remaining competitive with others willing to license out their IP (such as Imagination Technologies).

ARM was in fact one of the first to license out its very latest graphics technology to partners, in the form of the Mali-T600 series.  These modules were among the first to support OpenGL ES 3.0 (compatible with 2.0 and 1.1) and DirectX 11.  The T600 architecture is very comparable to Imagination Technologies' Series 6 and Qualcomm's Adreno 300 series.  Currently NVIDIA does not have a unified mobile architecture in production that supports OpenGL ES 3.0/DX11, but it is adapting the Kepler architecture to mobile and will license it to interested parties.  Qualcomm does not license out Adreno, having bought that group from AMD (Adreno is an anagram of Radeon).

Click to read the entire article here!