Author:
Manufacturer: AMD

Clock Variations

When AMD released the Radeon R9 290X last month, I came away from the review very impressed with the performance and price point of AMD's new flagship graphics card.  My review showed that the 290X was clearly faster than the NVIDIA GeForce GTX 780 and (at that time) was considerably less expensive as well - a win-win for AMD without a doubt. 

But there were concerns over a couple of aspects of the card's design.  First was the temperature and, specifically, how AMD was okay with this rather large piece of silicon hitting 95C sustained.  Another concern was the switch AMD included at the top of the R9 290X to toggle between fan profiles.  This switch essentially creates two reference defaults and makes it impossible for us to set a baseline of performance.  These different modes only changed the maximum fan speed that the card was allowed to reach.  Still, performance changed because of this setting thanks to the newly revised (and updated) AMD PowerTune technology.

We also saw, in our initial review, a large variation in clock speeds both from one game to another and over time (after giving the card a chance to heat up).  This led me to create the following graph showing average clock speeds 5-7 minutes into a gaming session with the card set to the default, "quiet" state.  Each result is averaged over a 60 second span.

clock-avg.png
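
For those curious how these averages are produced, the idea is simply to log the reported core clock at a steady interval and average the samples that fall inside the chosen window.  A minimal sketch in Python (the sample values and one-second polling interval here are hypothetical, not our actual logs or scripts):

    # Hypothetical sketch: average the reported GPU core clock over a
    # 60 second window starting 5 minutes into a logged gaming session.
    # The (time_s, clock_mhz) samples stand in for a sensor log from a
    # polling tool; the values below are made up for illustration.
    samples = [
        (298.0, 1000), (299.0, 978), (300.0, 945), (330.0, 902),
        (359.0, 889), (360.0, 893), (361.0, 910),
    ]

    window_start, window_end = 300.0, 360.0   # 5:00 to 6:00 into the run
    in_window = [mhz for t, mhz in samples if window_start <= t <= window_end]
    print(f"Average clock over the window: {sum(in_window) / len(in_window):.0f} MHz")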

Clearly there is variance here, which led us to more questions about AMD's stance.  Remember when the Kepler GPUs launched?  AMD was very clear that variance from card to card, silicon to silicon, was bad for the consumer as it created random performance deltas between cards with otherwise identical specifications. 

When it comes to the R9 290X, though, AMD claims that both the GPU and the card itself are a customizable graphics solution.  The customization is based around the maximum fan speed, a setting the user can adjust inside the Catalyst Control Center.  This setting will allow you to lower the fan speed if you desire a quieter configuration while still getting great gaming performance.  If you are comfortable with a louder fan, because headphones are magic, then you have the option to simply turn up the maximum fan speed and gain additional performance (a higher average clock rate) without any actual overclocking.

Continue reading our article on the AMD Radeon R9 290X - The Configurable GPU!!!

Author:
Manufacturer: Various

ASUS R9 280X DirectCU II TOP

Earlier this month AMD took the wraps off of a revamped and restyled family of GPUs under the Radeon R9 and R7 brands.  When I reviewed the R9 280X, essentially a lower cost version of the Radeon HD 7970 GHz Edition, I came away impressed with the package AMD was able to put together.  Though there was no new hardware to really discuss with the R9 280X, the price drop placed the cards in a very aggressive position alongside the NVIDIA GeForce line-up (including the GeForce GTX 770 and the GTX 760). 

As a result, I fully expect the R9 280X to be a great selling GPU for those gamers with a mid-range budget of $300. 

Another benefit of using an existing GPU architecture is that board partners can very quickly release custom-built versions of the R9 280X. Companies like ASUS, MSI, and Sapphire are able to offer overclocked and custom-cooled alternatives to the 3GB $300 card almost immediately by simply adapting their HD 7970 PCBs.

all01.jpg

Today we are going to be reviewing a set of three different R9 280X cards: the ASUS DirectCU II, MSI Twin Frozr Gaming, and the Sapphire TOXIC. 

Continue reading our roundup of the R9 280X cards from ASUS, MSI and Sapphire!!

Author:
Manufacturer: ARM

ARM is Serious About Graphics

Ask most computer users from 10 years ago who ARM is, and very few would give the correct answer.  Some well informed people might mention “Intel” and “StrongARM” or “XScale”, but ARM remained a shadowy presence until we saw the rise of the smartphone.  Since then, ARM has built up their brand, much to the chagrin of companies like Intel and AMD.  Partners such as Samsung, Apple, Qualcomm, MediaTek, Rockchip, and NVIDIA have all worked with ARM to produce chips based on the ARMv7 architecture, with Apple being the first to release ARMv8 (64-bit) SOCs.  ARM-based designs are likely the most shipped chips in the world, ranging from very basic processors to the very latest Apple A7 SOC.

t700_01.jpg

The ARMv7 and ARMv8 architectures are very power efficient, yet provide enough performance to handle the vast majority of tasks performed on smartphones and tablets (as well as a handful of laptops).  With the growth of visual computing, ARM has also dedicated itself to designing a competent graphics portion for its chips.  The Mali architecture is aimed at being an affordable option for those without their own graphics design groups (unlike NVIDIA and Qualcomm), while remaining competitive with others that are willing to license their IP out (such as Imagination Technologies).

ARM was in fact one of the first to license out the very latest graphics technology to partners, in the form of the Mali-T600 series of products.  These modules were among the first to support OpenGL ES 3.0 (compatible with 2.0 and 1.1) and DirectX 11.  The T600 architecture is very comparable to Imagination Technologies’ Series 6 and the Qualcomm Adreno 300 series of products.  Currently NVIDIA does not have a unified mobile architecture in production that supports OpenGL ES 3.0/DX11, but they are adapting the Kepler architecture to mobile and will be licensing it to interested parties.  Qualcomm does not license out Adreno, having bought that group from AMD (Adreno is an anagram of Radeon).

Click to read the entire article here!

Manufacturer: NVIDIA

It impresses.

ShadowPlay is NVIDIA's latest addition to their GeForce Experience platform. This feature allows their GPUs, starting with Kepler, to either record game footage locally or stream it online through Twitch.tv (in a later update). It requires Kepler GPUs because it is accelerated by that hardware. The goal is to constantly record game footage without any noticeable impact to performance; that way, the player can keep it running forever and have the opportunity to save moments after they happen.

Also, it is free.
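
Conceptually, the always-on recording behaves like a rolling buffer: the hardware encoder writes continuously, old data falls off the back, and only when you press the hotkey does the buffered footage get written to disk.  Here is a rough sketch of that idea in Python - our illustration of the concept, not NVIDIA's implementation, with the buffer length and chunk rate picked arbitrarily:

    from collections import deque

    SECONDS_TO_KEEP = 10 * 60          # keep roughly the last ten minutes
    CHUNKS_PER_SECOND = 1              # assume one encoded chunk per second

    buffer = deque(maxlen=SECONDS_TO_KEEP * CHUNKS_PER_SECOND)

    def on_encoded_chunk(chunk: bytes) -> None:
        # Called as the encoder produces data; the oldest chunks fall off.
        buffer.append(chunk)

    def save_shadow_clip(path: str) -> None:
        # When the player hits the hotkey, flush whatever is buffered to disk.
        with open(path, "wb") as f:
            for chunk in buffer:
                f.write(chunk)

    for i in range(5):
        on_encoded_chunk(bytes([i]) * 1024)   # pretend encoder output
    save_shadow_clip("shadow_clip.bin")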

shadowplay-vs.jpg

I know that I have several gaming memories which come unannounced and leave undocumented. A solution like this is very exciting to me. Of course, a feature on paper is not the same as functional software in the real world. Thankfully, at least in my limited usage, ShadowPlay mostly lives up to its claims. I do not feel its impact on gaming performance. I am comfortable leaving it on at all times. There are issues, however, that I will get to soon.

This first impression is based on my main system running the 331.65 (Beta) GeForce drivers recommended for ShadowPlay.

  • Intel Core i7-3770, 3.4 GHz
  • NVIDIA GeForce GTX 670
  • 16 GB DDR3 RAM
  • Windows 7 Professional
  • 1920 x 1080 @ 120Hz
  • 3 TB USB3.0 HDD (~50MB/s file clone)

The two games tested are Starcraft II: Heart of the Swarm and Battlefield 3.

Read on to see my thoughts on ShadowPlay, the new Experience on the block.

Author:
Manufacturer: AMD

A bit of a surprise

Okay, let's cut to the chase here: it's late, we are rushing to get our articles out, and I think you all would rather see our testing results NOW rather than LATER.  The first thing you should do is read my review of the AMD Radeon R9 290X 4GB Hawaii graphics card which goes over the new architecture, new feature set, and performance in single card configurations. 

Then, you should continue reading below to find out how the new XDMA, bridge-less CrossFire implementation actually works in both single panel and 4K (tiled) configurations.

IMG_1802.JPG

 

A New CrossFire For a New Generation

CrossFire has caused a lot of problems for AMD in recent months (and a lot of problems for me as well).  But, AMD continues to make strides in correcting the frame pacing issues associated with CrossFire configurations and the new R9 290X moves the bar forward.

Without the CrossFire bridge connector on the 290X, all of the CrossFire communication and data transfer occurs over the PCI Express bus that connects the cards to the rest of the system.  AMD claims that this new XDMA interface was designed for Eyefinity and UltraHD resolutions (which were the focus of our most recent article on the subject).  By accessing the memory of the GPU through PCIe, AMD claims that it can alleviate the bandwidth and sync issues that were causing problems with Eyefinity and tiled 4K displays.

Even better, this updated version of CrossFire is said to be compatible with the frame pacing updates to the Catalyst driver that improve multi-GPU performance experiences for end users.

IMG_1800.JPG

When an extra R9 290X accidentally fell into my lap, I decided to take it for a spin.  And if you have followed my graphics testing methodology over the past year, then you'll understand the importance of these tests.

Continue reading our article Frame Rating: AMD Radeon R9 290X CrossFire and 4K Preview Testing!!

Author:
Manufacturer: AMD

A slightly new architecture

Note: We also tested the new AMD Radeon R9 290X in CrossFire and at 4K resolutions; check out that full Frame Rating story right here!!

Last month AMD brought media, analysts, and customers out to Hawaii to talk about a new graphics chip coming out this year.  As you might have guessed based on the location, the code name for this GPU was, in fact, Hawaii. It was targeted at the high end of the discrete graphics market to take on the likes of the GTX 780 and GTX TITAN from NVIDIA. 

Earlier this month we reviewed the AMD Radeon R9 280X, R9 270X, and the R7 260X. None of these were based on that new GPU.  Instead, these cards were all rebrands and repositionings of existing hardware in the market (albeit at reduced prices).  Those lower prices made the R9 280X one of our favorite GPUs of the moment, as it offers performance per dollar currently unmatched by NVIDIA.

But today is a little different: today we are talking about a much more expensive product that has to live up to some pretty lofty goals and ambitions set forth by the AMD PR and marketing machine.  At $549 MSRP, the new AMD Radeon R9 290X will become the flagship of the Radeon brand.  The question is: to where does that ship sail?

 

The AMD Hawaii Architecture

To be quite upfront about it, the Hawaii design is very similar to that of the Tahiti GPU from the Radeon HD 7970 and R9 280X cards.  Based on the same GCN (Graphics Core Next) architecture AMD assured us would be its long term vision, Hawaii ups the ante in a few key areas while maintaining the same core.

01.jpg

Hawaii is built around Shader Engines, of which the R9 290X has four.  Each of these includes 11 CUs (compute units), which hold 4 SIMD arrays each.  Doing the quick math brings us to a total stream processor count of 2,816 on the R9 290X. 
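
For reference, here is that quick math spelled out; the 16-wide SIMD figure is the standard GCN arrangement, which works out to 64 stream processors per CU:

    # Back-of-the-envelope stream processor count for Hawaii / R9 290X.
    shader_engines = 4
    cus_per_engine = 11
    simds_per_cu = 4
    lanes_per_simd = 16   # standard GCN SIMD width -> 64 SPs per CU

    print(shader_engines * cus_per_engine * simds_per_cu * lanes_per_simd)  # 2816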

Continue reading our review of the AMD Radeon R9 290X 4GB Graphics Card!!

Author:
Manufacturer: NVIDIA

Our Legacy's Influence

We are often creatures of habit.  Change is hard. And oftentimes legacy systems that have been in place for a very long time can shift and determine the angle at which we attack new problems.  This happens in the world of computer technology, but also outside the walls of silicon, and the results can be dangerous inefficiencies that threaten to limit our advancement in those areas.  Often our need to adapt new technologies to existing infrastructure can be blamed for stagnant development. 

Take the development of the phone as an example.  The pulse-based phone system and the rotary dial slowed the adoption of touch-tone phones and forced manufacturers to include switches to select between pulse and tone dialing on phones for decades. 

Perhaps a more substantial example is that of the railroad system, which has based its track gauge (the width between the rails) on transportation methods that existed before the birth of Christ.  Horse-drawn carriages pulled by two horses had an axle gap of 4 feet 8 inches in the 1800s, and thus the first railroads in the US were built with a track gauge of 4 feet 8 inches.  Today, the standard rail track gauge remains 4 feet 8 inches despite the fact that a wider gauge would allow for more stability with larger cargo loads and allow for higher speed vehicles.  But the cost of updating the existing infrastructure around the world would be so prohibitive that it is likely we will remain with that outdated standard.

railroad.jpg

What does this have to do with PC hardware, and why am I giving you an abbreviated history lesson?  There are clearly some examples of legacy infrastructure limiting our advancement in hardware development.  Solid state drives are held back by the current SATA-based storage interface, though we are seeing movement to faster interconnects like PCI Express to alleviate this.  Some compute tasks are limited by the “infrastructure” of standard x86 processor cores, and the move to GPU compute has changed the direction of these workloads dramatically.

There is another area of technology that could be improved if we could just move past an existing way of doing things.  Displays.

Continue reading our story on NVIDIA G-Sync Variable Refresh Rate Technology!!

Author:
Manufacturer: AMD

The AMD Radeon R9 280X

Today marks the first step in an introduction of an entire AMD Radeon discrete graphics product stack revamp. Between now and the end of 2013, AMD will completely cycle out Radeon HD 7000 cards and replace them with a new branding scheme. The "HD" branding is on its way out and it makes sense. Consumers have moved on to UHD and WQXGA display standards; HD is no longer extraordinary.

But I want to be very clear and upfront with you: today is not the day that you’ll learn about the new Hawaii GPU that AMD promised would dominate the performance per dollar metrics for enthusiasts.  The Radeon R9 290X will be a little bit down the road.  Instead, today’s review will look at three other Radeon products: the R9 280X, the R9 270X and the R7 260X.  None of these products are really “new”, though, and instead must be considered rebrands or repositionings. 

There are some changes to discuss with each of these products, including clock speeds and, more importantly, pricing.  Some are specific to a certain model, others are more universal (such as updated Eyefinity display support). 

Let’s start with the R9 280X.

 

AMD Radeon R9 280X – Tahiti aging gracefully

The AMD Radeon R9 280X is built from the exact same ASIC (chip) that powers the previous Radeon HD 7970 GHz Edition, with a few modest changes.  The core clock speed of the R9 280X is actually a little bit lower at reference rates than the Radeon HD 7970 GHz Edition, by about 50 MHz.  The R9 280X GPU will hit a 1.0 GHz rate while the previous model was reaching 1.05 GHz; not much of a change, but an interesting decision to be sure.

Because of that speed difference the R9 280X has a lower peak compute capability of 4.1 TFLOPS compared to the 4.3 TFLOPS of the 7970 GHz.  The memory clock speed is the same (6.0 Gbps) and the board power is the same, with a typical peak of 250 watts.

280x-1.jpg

Everything else remains the same as you know it on the HD 7970 cards.  There are 2048 stream processors in the Tahiti version of AMD’s GCN (Graphics Core Next), 128 texture units and 32 ROPs all being pushed by a 384-bit GDDR5 memory bus running at 6.0 GHz.  Yep, still with a 3GB frame buffer.
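
If you want to see where those peak numbers come from, the arithmetic is simple: two floating point operations (a fused multiply-add) per stream processor per clock for compute, and bits-per-pin times bus width for memory bandwidth.  A quick sketch:

    # Peak single-precision compute and memory bandwidth for the R9 280X,
    # using the reference specifications quoted above.
    stream_processors = 2048
    core_clock_ghz = 1.0               # 1.05 GHz on the HD 7970 GHz Edition
    flops_per_sp_per_clock = 2         # one fused multiply-add per clock

    peak_tflops = stream_processors * flops_per_sp_per_clock * core_clock_ghz / 1000
    print(f"{peak_tflops:.1f} TFLOPS")   # ~4.1 (vs ~4.3 at 1.05 GHz)

    memory_gbps_per_pin = 6.0
    bus_width_bits = 384
    print(f"{memory_gbps_per_pin * bus_width_bits / 8:.0f} GB/s")   # 288 GB/s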

Continue reading our review of the AMD Radeon R9 280X, R9 270X and R7 260X!!!

Manufacturer: Scott Michaud

A new generation of Software Rendering Engines.

We have been busy with side projects, here at PC Perspective, over the last year. Ryan has nearly broken his back rating the frames. Ken, along with running the video equipment and "getting an education", developed a hardware switching device for Wirecast and XSplit.

My project, "Perpetual Motion Engine", has been researching and developing a GPU-accelerated software rendering engine. Now, to be clear, this is just in very early development for the moment. The point is not to draw beautiful scenes. Not yet. The point is to show what OpenGL and DirectX does and what limits are removed when you do the math directly.

Errata: BioShock uses a modified Unreal Engine 2.5, not 3.

In the above video:

  • I show the problems with graphics APIs such as DirectX and OpenGL.
  • I talk about the problem those APIs attempt to solve: finding color values for your monitor.
  • I discuss the advantages of boiling graphics problems down to general mathematics.
  • Finally, I prove the advantages of boiling graphics problems down to general mathematics.

I would recommend watching the video, first, before moving forward with the rest of the editorial. A few parts need to be seen for better understanding.
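
As a trivial, self-contained illustration of what "doing the math directly" means, here is a tiny software "renderer" that computes every pixel's color with plain arithmetic and writes the result to an image file - no OpenGL or DirectX anywhere.  This is our own throwaway example (a simple gradient), far simpler than anything shown in the video:

    # Minimal software rendering: compute each pixel's color directly and
    # write a plain-text PPM image. No graphics API is involved.
    WIDTH, HEIGHT = 256, 128

    with open("gradient.ppm", "w") as f:
        f.write(f"P3 {WIDTH} {HEIGHT} 255\n")
        for y in range(HEIGHT):
            for x in range(WIDTH):
                r = x * 255 // (WIDTH - 1)     # red ramps left to right
                g = y * 255 // (HEIGHT - 1)    # green ramps top to bottom
                b = 64                         # constant blue component
                f.write(f"{r} {g} {b} ")
            f.write("\n")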

Click here, after you watch the video, to read more about GPU-accelerated Software Rendering.

Author:
Manufacturer: AMD

Earth Shattering?

AMD is up to some interesting things.  Today at AMD’s tech day, we discovered a veritable cornucopia of information.  Some of it was pretty interesting (audio), some was discussed ad nauseam (audio, audio, and more audio), and one thing in particular was quite shocking.  Mantle was the final, big subject that AMD was willing to discuss.  Many assumed that the R9 290X would be the primary focus of this talk, but in fact it was very much an aside that was not discussed at any length.  AMD basically said, “Yes, the card exists, and it has some new features that we are not going to really go over at this time.”  Mantle, as a technology, is at the same time a logical step as well as an unforeseen one.  So what all does Mantle mean for users?

mantle_diag_01.jpg

Looking back through the mists of time, when dinosaurs roamed the earth, the individual 3D chip makers all implemented low level APIs that allowed programmers to get closer to the silicon than other APIs such as Direct3D and OpenGL would allow.  This was a very efficient way of doing things in terms of graphics performance.  It was an inefficient way to do things for a developer writing code for multiple APIs.  Microsoft and the Khronos Group had solutions with Direct3D and OpenGL that allowed these programmers to develop for these high level APIs very simply (comparatively so).  The developers could write code that would run on D3D/OpenGL, and the graphics chip manufacturers would write drivers that would interface with Direct3D/OpenGL, which then go through a hardware abstraction layer to communicate with the hardware.  The onus was then on the graphics people to create solid, high performance drivers that would work well with DirectX or OpenGL, so the game developer would not have to code directly for a multitude of current and older graphics cards.
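
To make the trade-off concrete, here is a deliberately simplified cost model - not any real API, and the per-call overhead figures are assumptions purely for illustration.  A thick abstraction layer pays validation and translation work on every draw call, while a thinner, lower level path records commands cheaply and pays its cost once per submission:

    # Toy model of CPU-side submission cost per frame (no real API calls).
    DRAW_CALLS = 5000
    PER_CALL_OVERHEAD_US = 20      # assumed driver/validation cost per draw call
    PER_RECORD_OVERHEAD_US = 1     # assumed cost to record one command directly
    PER_SUBMIT_OVERHEAD_US = 200   # assumed cost to submit a whole command buffer

    high_level_us = DRAW_CALLS * PER_CALL_OVERHEAD_US
    low_level_us = DRAW_CALLS * PER_RECORD_OVERHEAD_US + PER_SUBMIT_OVERHEAD_US

    print(f"high level path: {high_level_us / 1000:.1f} ms of CPU per frame")  # 100.0 ms
    print(f"low level path:  {low_level_us / 1000:.1f} ms of CPU per frame")   # 5.2 ms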

Read the entire article here.

Author:
Manufacturer: Various

Summary of Events

In January of 2013 I revealed a new testing methodology for graphics cards that I dubbed Frame Rating.  At the time I was only able to talk about the process, using capture hardware to record the output directly from the DVI connections on graphics cards, but over the course of a few months I started to release data and information using this technology.  I followed up the story in January with a collection of videos that showed some of the captured video and the kinds of performance issues and anomalies we were able to easily find. 

My first full test results were published in February to quite a bit of stir, and then finally in late March I released Frame Rating Dissected: Full Details on Capture-based Graphics Performance Testing, which dramatically and permanently changed the way graphics cards and gaming performance are discussed and evaluated. 

Our testing proved that AMD CrossFire was not improving gaming experiences in the same way that NVIDIA SLI was.  Also, we showed that other testing tools like FRAPS were inadequate in showcasing this problem.  If you are at all unfamiliar with this testing process or the results it showed, please check out the Frame Rating Dissected story above.

At the time, we tested 5760x1080 resolution using AMD Eyefinity and NVIDIA Surround but found there were too many issues and problems with our scripts and the results they were presenting to give reasonably assured performance metrics.  Running AMD + Eyefinity was obviously causing some problems, but I wasn’t quite able to pinpoint what they were or how severe they might have been.  Instead I posted graphs like this:

01.png

We were able to show NVIDIA GTX 680 performance and scaling in SLI at 5760x1080, but we were only giving results for the Radeon HD 7970 GHz Edition in a single GPU configuration.

 

Since those stories were released, AMD has been very active.  At first they were hesitant to believe our results and called into question our processes and the ability for gamers to really see the frame rate issues we were describing.  However, after months of work and pressure from quite a few press outlets, AMD released a 13.8 beta driver that offered a Frame Pacing option in the 3D controls, evenly spacing out frames in multi-GPU configurations to produce a smoother gaming experience.

02.png

The results were great!  The new AMD driver produced very consistent frame times and put CrossFire on a similar playing field to NVIDIA’s SLI technology.  There were limitations, though: the driver only fixed DX10/11 games and only addressed resolutions of 2560x1440 and below.

But the story won’t end there.  CrossFire and Eyefinity are still very important in a lot of gamers minds and with the constant price drops in 1920x1080 panels, more and more gamers are taking (or thinking of taking) the plunge to the world of Eyefinity and Surround.  As it turns out though, there are some more problems and complications with Eyefinity and high-resolution gaming (multi-head 4K) that are cropping up and deserve discussion.

Continue reading our investigation into AMD Eyefinity and NVIDIA Surround with multi-GPU solutions!!

Author:
Manufacturer: MSI

A New TriFrozr Cooler

Graphics cards are by far the most interesting topic we cover at PC Perspective.  Between the battles of NVIDIA and AMD, as well as the competition between board partners like EVGA, ASUS, MSI and Galaxy, there is very rarely a moment when we don't have a different GPU product of some kind on an active test bed.  Both NVIDIA and AMD release reference cards (for the most part) with each and every new product launch, and it then takes some time for board partners to really put their own stamp on the designs.  Other than the figurative stamp that is the sticker on the fan.

IMG_9886.JPG

One of the companies that has recently become well known for very custom, non-reference graphics card designs is MSI, and the pinnacle of the company's engineering falls under the Lightning brand.  As far back as the MSI GTX 260 Lightning and as recently as the MSI HD 7970 Lightning, these cards have combined unique cooling, custom power design, and a good amount of over-engineering to produce a card that has few rivals.

Today we are looking at the brand new MSI GeForce GTX 780 Lightning, a complete revamp of the GTX 780 that was released in May.  Based on the same GK110 GPU as the GTX Titan card, with two fewer SMX units, the GTX 780 is easily the second fastest single GPU card on the market.  MSI is hoping to make enthusiasts even more excited about the card with the Lightning design, which brings a brand new TriFrozr cooler, an impressive power design, and overclocking capabilities that both basic users and LN2 junkies can take advantage of.  Just what DO you get for $750 these days?

Continue reading our review of the MSI GeForce GTX 780 Lightning graphics card!!

Author:
Manufacturer: Asus

Plus one GTX 670...

Brand new GPU architectures are typically packaged in reference designs when it comes to power, PCB layout, and cooling.  Once manufacturers get a chance to put out their own designs, interesting things happen.  The top end products are usually the ones that get the specialized treatment first, because they typically have the larger margins to work with.  Design choices here will eventually trickle down to lower end cards, typically at a price point $20 to $30 more than a reference design.  Companies such as MSI have made this their bread and butter with the Lightning series on top, the Hawk line handling the midrange, and then the hopped up reference designs with better cooling under the Twin Frozr moniker.

asus_jd_01.jpg

ASUS has been working on their own custom designs for years and years, but honestly it was not until the DirectCU series debuted that we had a well defined product lineup pushing high end functionality across the entire range of products from top to bottom.  Certainly they had custom and unique designs before, but things really seemed to crystallize with DirectCU.  I guess that is the power of a good marketing tool as well.  DirectCU is a well known brand owned by Asus, and users typically know what to expect when looking at a DirectCU product.

Click to read the entire review here!

Manufacturer: XSPC

Introduction

02-680v2-1-1.jpg

Courtesy of XSPC

The Razor GTX680 water block was among the first in the XSPC full cover line of blocks. The previous generation of XSPC water blocks offered cooling for the GPU as well as the memory and on-board VRMs, but did not offer the protection that a full card-sized block offers to the sensitive components integrated into the card's PCB. At an MSRP of $99.99, the Razor GTX680 water block is a sound investment.

03-680v2-2.jpg

Courtesy of XSPC

The Razor GTX680 block comes with a total of seven G1/4" ports - four on the inlet side (left) and three on the outlet side (right). XSPC included the following components with the block: XSPC thermal compound, dual blue LEDs, five steel port caps, paper washers and mounting screws, and TIM (thermal interface material) for use with the on-board memory and VRM chips.

Continue reading our review of the XSPC Razor GTX680 water block!

Author:
Manufacturer: AMD

Frame Pacing for CrossFire

When the Radeon HD 7990 launched in April of this year, we had some not-so-great things to say about it.  The HD 7990 depends on CrossFire technology to function, and we had found quite a few problems with AMD's CrossFire technology over the last several months of testing with Frame Rating; as a result, the HD 7990 "had a hard time justifying its $1000 price tag."  Right at launch, AMD gave us a taste of a new driver that they were hoping would fix the frame pacing and frame time variance issues seen in CrossFire, and it looked positive.  The problem was that the driver wouldn't be available until summer.

As I said then: "But until that driver is perfected, is bug free and is presented to buyers as a made-for-primetime solution, I just cannot recommend an investment this large on the Radeon HD 7990."

Today could be a very big day for AMD - the release of the promised driver update that enables frame pacing on AMD 7000-series CrossFire configurations, including the Radeon HD 7990 graphics card with its pair of Tahiti GPUs. 

It's not perfect yet and there are some things to keep an eye on.  For example, this fix will not address Eyefinity configurations, which include multi-panel solutions and the new 4K 60 Hz displays that require a tiled display configuration.  Also, we found some issues with CrossFire configurations of more than two GPUs that we'll address on a later page.

 

New Driver Details

Starting with 13.8 and moving forward, AMD plans to have the frame pacing fix integrated into all future drivers.  The software team has implemented a software-based frame pacing algorithm that monitors the time it takes for each GPU to render a frame and how long a frame is displayed on the screen, and inserts delays into the present calls when necessary to prevent very tightly timed frame renders.  This balances, or "paces", the frame output to the screen without lowering the overall frame rate.  The driver monitors this constantly in real-time, and minor changes are made on a regular basis to keep the GPUs in check. 
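
A heavily simplified sketch of that idea - our illustration, not AMD's driver code - looks something like this: track a running average of the time between presents and, when a frame shows up much earlier than that cadence, hold its present call briefly so output reaches the screen at an even rhythm.

    import time

    # Simplified software frame pacing for alternate-frame rendering.
    # Illustrative only; the smoothing factor is an arbitrary assumption.
    class FramePacer:
        def __init__(self, smoothing: float = 0.1):
            self.avg_frame_time = None
            self.last_present = None
            self.smoothing = smoothing

        def pace_and_present(self, present_fn):
            now = time.perf_counter()
            if self.last_present is not None:
                elapsed = now - self.last_present
                if self.avg_frame_time is None:
                    self.avg_frame_time = elapsed
                else:
                    # keep a running estimate of the recent frame-to-frame time
                    self.avg_frame_time += self.smoothing * (elapsed - self.avg_frame_time)
                if elapsed < self.avg_frame_time:
                    # frame arrived "too early"; delay the present to even out pacing
                    time.sleep(self.avg_frame_time - elapsed)
            present_fn()
            self.last_present = time.perf_counter()

    pacer = FramePacer()
    for _ in range(3):
        pacer.pace_and_present(lambda: None)   # stand-in for the real present call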

7990card.JPG

As you would expect, this algorithm is completely game engine independent and the games should be completely oblivious to all that is going on (other than the feedback from present calls, etc). 

This fix is generic, meaning it is not tied to any specific game and doesn't require profiles the way CrossFire sometimes does.  The current implementation will work with DX10 and DX11 based titles only, with DX9 support being added later in another release.  AMD claims this was simply a development time issue, and since most modern GPU-bound titles are DX10/11 based they focused on that area first.  In phase 2 of the frame pacing implementation AMD will add DX9 and OpenGL support.  AMD wouldn't give me a timeline for that, though, so we'll have to see how much internal pressure AMD keeps up to get the job done.

Continue reading our story of the new AMD Catalyst 13.8 beta driver with frame pacing support!!

Author:
Manufacturer: Galaxy

Overclocked GTX 770 from Galaxy

When NVIDIA launched the GeForce GTX 770 at the very end of May, we started to get in some retail samples from companies like Galaxy.  While our initial review looked at the reference models, other add-in card vendors are putting their own unique touch on the latest GK104 offering and Galaxy was kind enough to send us their GeForce GTX 770 2GB GC model that uses a unique, more efficient cooler design and also runs at overclocked frequencies. 

If you haven't yet read up on the GTX 770 GPU, you should probably stop by my first review of the GTX 770 to see what information you are missing out on.  Essentially, the GTX 770 is a full-spec GK104 Kepler GPU running at higher clocks (both core and memory speeds) compared to the original GTX 680.  The new reference clocks for the GTX 770 were 1046 MHz base clock, 1085 MHz Boost clock and a nice increase to 7.0 GHz memory speeds.

gpuz.png

Galaxy GeForce GTX 770 2GB GC Specs

The Galaxy GC model is overclocked with a new base clock setting of 1111 MHz and a higher Boost clock of 1163 MHz; both are roughly 6-7% higher than the reference clocks.  Galaxy has left the memory speeds alone, though, keeping them running at an effective 7.0 GHz.
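
Those percentages are just the ratio of the new clocks to NVIDIA's reference clocks:

    # Overclock deltas for the Galaxy GTX 770 GC versus reference clocks.
    reference = {"base": 1046, "boost": 1085}   # MHz
    galaxy_gc = {"base": 1111, "boost": 1163}   # MHz

    for clock in ("base", "boost"):
        gain = (galaxy_gc[clock] / reference[clock] - 1) * 100
        print(f"{clock} clock: +{gain:.1f}%")   # roughly +6.2% and +7.2%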

IMG_9941.JPG

Continue reading our review of the Galaxy GeForce GTX 770 2GB GC graphics card!!

Author:
Manufacturer: NVIDIA

Another Wrench – GeForce GTX 760M Results

Just recently, I evaluated some of the current processor-integrated graphics options using our new Frame Rating performance metric. The results were very interesting, proving Intel has done some great work with its new HD 5000 graphics option for Ultrabooks. You might have noticed that the MSI GE40 didn’t just come with the integrated HD 4600 graphics but also included a discrete NVIDIA GeForce GTX 760M on board.  While that previous article focused on the integrated graphics of Haswell, Trinity, and Richland, I did find some noteworthy results with the GTX 760M that I wanted to investigate and present.

IMG_0141.JPG

The MSI GE40 is a new Haswell-based notebook that includes the Core i7-4702MQ quad-core processor and Intel HD 4600 graphics.  Alongside it, MSI has included the Kepler-architecture GeForce GTX 760M discrete GPU.

760mspecs.png

This GPU offers 768 CUDA cores running at a 657 MHz base clock but can stretch higher with GPU Boost technology.  It is configured with 2GB of GDDR5 memory running at 2.0 GHz.
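
For a rough sense of scale, peak single-precision throughput at that base clock works out as follows (GPU Boost would push the real figure higher):

    # Ballpark peak compute for the GTX 760M at its base clock.
    cuda_cores = 768
    base_clock_ghz = 0.657
    flops_per_core_per_clock = 2   # one fused multiply-add per clock
    print(f"{cuda_cores * flops_per_core_per_clock * base_clock_ghz:.0f} GFLOPS")  # ~1009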

If you didn’t read the previous integrated graphics article, linked above, some of the data presented there will be spoiled here, so you might want to get a baseline of information by reading through that first.  Also, remember that we are using our Frame Rating performance evaluation system for this testing – a key differentiator from most other mobile GPU testing.  And in fact it is that difference that allowed us to spot an interesting issue with the configuration we are showing you today. 

If you are not familiar with the Frame Rating methodology, and how we had to change some things for mobile GPU testing, I would really encourage you to read this page of the previous mobility Frame Rating article for the scoop.  The data presented below depends on that background knowledge!

Okay, you’ve been warned – on to the results.

Continue reading our story about GeForce GTX 760M Frame Rating results and Haswell Optimus issues!!

Author:
Manufacturer: Various

Battle of the IGPs

Our long journey with Frame Rating, a new capture-based analysis tool to measure graphics performance of PCs and GPUs, began almost two years ago as a way to properly evaluate the real-world experiences for gamers.  What started as a project attempting to learn about multi-GPU complications has really become a new standard in graphics evaluation and I truly believe it will play a crucial role going forward in GPU and game testing. 

Today we use these Frame Rating methods and tools, which are elaborately detailed in our Frame Rating Dissected article, and apply them to a completely new market: notebooks.  Even though Frame Rating was built for high performance discrete desktop GPUs, the theory and science behind the entire process is completely applicable to notebook graphics and even to the integrated graphics solutions on Haswell processors and Richland APUs.  It is also able to measure performance of discrete/integrated graphics combos from NVIDIA and AMD in a unique way that has already turned up some interesting results.
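
For those new to the method, the core idea is simple: an overlay tags every rendered frame with a distinct color along one edge, the capture card records the final output, and the analysis counts how many scanlines of the captured output belong to each color.  Frames that only occupy a sliver of the screen are "runts" that add little to perceived smoothness.  A stripped-down sketch of that counting step (illustrative only - the sample data and the runt threshold here are assumptions, and the real scripts do far more):

    from itertools import groupby

    # Each entry is the frame ID encoded by the overlay color on one scanline
    # of a captured 1080-line output frame (made-up data for illustration).
    captured_scanlines = (
        ["frame_41"] * 540 +      # frame 41 filled half the output
        ["frame_42"] * 12 +       # frame 42 only got 12 lines -> runt
        ["frame_43"] * 528        # frame 43 filled the rest
    )

    RUNT_THRESHOLD_LINES = 21     # assumed cutoff for calling a frame a runt

    for frame_id, lines in groupby(captured_scanlines):
        height = len(list(lines))
        label = "RUNT" if height < RUNT_THRESHOLD_LINES else "ok"
        print(f"{frame_id}: {height} scanlines ({label})")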

 

Battle of the IGPs

Even though neither side wants us to call it this, we are testing integrated graphics today.  With the release of Intel’s Haswell processor (the Core i7/i5/i3 4000 series), the company has upgraded the graphics noticeably on several of their mobile and desktop products.  In my first review of the Core i7-4770K, a desktop LGA1150 part, the integrated graphics now known as HD 4600 were only slightly faster than the graphics of the previous generation Ivy Bridge and Sandy Bridge parts.  Even though we had all the technical details of the HD 5000 and Iris / Iris Pro graphics options, no desktop parts actually utilize them, so we had to wait for some more hardware to show up. 

 

mbair.JPG

When Apple held a press conference and announced new MacBook Air machines that used Intel’s Haswell architecture, I knew I could count on Ken to go and pick one up for himself.  Of course, before I let him start using it for his own purposes, I made him sit through a few agonizing days of benchmarking and testing in both Windows and Mac OS X environments.  Ken has already posted a review of the MacBook Air 11-in model ‘from a Windows perspective’ and in that we teased that we had done quite a bit more evaluation of the graphics performance to be shown later.  Now is later.

So the first combatant in our integrated graphics showdown with Frame Rating is the 11-in MacBook Air: a small but powerful Ultrabook that sports more than 11 hours of battery life (in OS X at least) while also including the new HD 5000 integrated graphics option.  Along with that battery life comes the GT3 variation of the new Intel processor graphics, which doubles the number of execution units compared to the GT2.  The GT2 is the architecture behind the HD 4600 graphics that ships with nearly all of the desktop processors, and many of the notebook versions, so I am very curious how this comparison is going to stand. 

Continue reading our story on Frame Rating with Haswell, Trinity and Richland!!

Author:
Manufacturer: PC Perspective

The GPU Midrange Gets a Kick

I like budget video cards.  They hold a soft spot in my heart.  I think the primary reason for this is that I too was once a poor college student and could not afford the really expensive cards.  Ok, so this was maybe a few more years ago than I like to admit.  Back when the Matrox Millennium was very expensive, I ended up getting the STB Lightspeed 128 instead.  Instead of the 12 MB Voodoo 2 I went for the 8 MB version.  I was never terribly fond of paying top dollar for a little extra performance.  I am still not fond of it either.

The sub-$200 range is a bit of a sweet spot that is very tightly packed with products.  These products typically perform in the range of a high end card from 3 years ago, yet still encompass the latest features of the top end products from their respective companies.  These products can be overclocked by end users to attain performance approaching cards in the $200 to $250 range.  Mind, there are some specific limitations to the amount of performance one can actually achieve with these cards.  Still, what a user actually gets is very fair when considering the price.

budg_01.jpg

Today I cover several flavors of cards from three different manufacturers that are based on the AMD HD 7790 and the NVIDIA GTX 650 Ti BOOST chips.  These range in price from $129 to $179.  The features on these cards are amazingly varied, and there are no “sticker edition” parts to be seen here.  Each card is unique in its design and the cooling strategies are also quite distinct.  Users should not expect to drive monitors above 1920x1200, much less triple monitors in Surround and Eyefinity.

Now let us quickly go over the respective chips that these cards are based on.

Click here to read the entire article!

Author:
Manufacturer: NVIDIA

Getting even more life from GK104

Have you guys heard about this new GPU from NVIDIA?  It’s called GK104, and it turns out that the damn thing is found in yet another graphics card this year – the new GeForce GTX 760.  Yup, you read that right: what NVIDIA is saying is the last update to the GeForce lineup through Fall 2013 is going to be based on the same GK104 design that we have previously discussed in reviews of the GTX 680, GTX 670, GTX 660 Ti, GTX 690 and, more recently, the GTX 770. This isn’t a bad thing though!  GK104 has done a fantastic job in every field and market segment that NVIDIA has tossed it into, with solid performance and even better performance per watt than the competition.  It does mean however that talking up the architecture is kind of mind numbing at this point…

block.jpg

If you are curious about the Kepler graphics architecture and GK104 in particular, I’m not going to stop you from going back and reading over my initial review of the GTX 680 from March of 2012.  The new GTX 760 takes the same GPU, adds a new and improved version of GPU Boost (the same we saw in the GTX 770), and lowers the specifications a bit to enable NVIDIA to hit a new price point.  The GTX 760 will be replacing the GTX 660 Ti – that card will be falling into the ether, but the GTX 660 will remain, as will everything below it, including the GTX 650 Ti Boost, 650 Ti and plain old 650.  The GTX 670 went the way of the dodo with the release of the GTX 770.

01.jpg

Even though the GTX 690 isn't on this list, NVIDIA says it isn't EOL

As for the GeForce GTX 760 it will ship with 1152 CUDA cores running at a base clock of 980 MHz and a typical boost clock of 1033 MHz.  The memory speed remains at 6.0 GHz on a 256-bit memory bus and you can expect to find both 2GB and 4GB frame buffer options from retail partners upon launch.  The 1152 CUDA cores are broken up over 6 SMX units and that means you’ll see some parts with 3 GPCs and others with 4 – NVIDIA claims any performance delta between them will be negligible. 
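
Two of the derived figures fall straight out of those specifications - the per-SMX core count and the memory bandwidth:

    # Derived figures for the GeForce GTX 760 from the specs above.
    cuda_cores = 1152
    smx_units = 6
    print(cuda_cores // smx_units)   # 192 CUDA cores per SMX, the standard Kepler width

    memory_gbps_per_pin = 6.0
    bus_width_bits = 256
    print(f"{memory_gbps_per_pin * bus_width_bits / 8:.0f} GB/s")   # 192 GB/s of bandwidth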

Continue reading our review of the NVIDIA GeForce GTX 760 2GB Graphics Card!!