Subject: Graphics Cards | February 19, 2014 - 04:43 PM | Jeremy Hellstrom
Tagged: geforce, gm107, gpu, graphics, gtx 750 ti, maxwell, nvidia, video
We finally saw Maxwell yesterday, with a new design for the SMs called SMM, each of which consists of four blocks of 32 dedicated, non-shared CUDA cores. In theory that should allow NVIDIA to pack more SMMs onto the card than they could with the previous SMX units. This new design was released on a $150 card, which means we don't really get to see what it is capable of yet. At that price it competes with AMD's R7 260X and R7 265, at least if you can find them at their MSRPs and not at inflated cryptocurrency-mining levels. Legit Reviews compared the performance of two overclocked GTX 750 Ti cards to those two AMD cards, as well as to the previous-generation GTX 650 Ti Boost, across a wide selection of games to see how the new card stacks up; you can read the results here.
That is of course after you read Ryan's full review.
"NVIDIA today announced the new GeForce GTX 750 Ti and GTX 750 video cards, which are very interesting to use as they are the first cards based on NVIDIA's new Maxwell graphics architecture. NVIDIA has been developing Maxwell for a number of years and have decided to launch entry-level discrete graphics cards with the new technology first in the $119 to $149 price range. NVIDIA heavily focused on performance per watt with Maxwell and it clearly shows as the GeForce GTX 750 Ti 2GB video card measures just 5.7-inches in length with a tiny heatsink and doesn't require any internal power connectors!"
Here are some more Graphics Card articles from around the web:
- MSI GTX 750 Ti Gaming Video Card Review @HiTech Legion
- NVIDIA GeForce GTX 750 Ti @ Benchmark Reviews
- ASUS GTX 750 OC 1 GB @ techPowerUp
- MSI GTX 750 Ti Gaming 2 GB @ techPowerUp
- NVIDIA GeForce GTX 750Ti the Arrival of Maxwell @HiTech Legion
- Palit GTX 750 Ti StormX Dual 2 GB @ techPowerUp
- The GTX 750 Ti Review; Maxwell Arrives @ Hardware Canucks
- Nvidia GeForce GTX 750 Ti vs. AMD Radeon R7 265 @ Legion Hardware
- MSI GTX750Ti OC Twin Frozr @ Kitguru
- NVIDIA GeForce GTX 750 Ti 2 GB @ techPowerUp
- NVIDIA GeForce GTX 750 Ti "Maxwell" On Linux @ Phoronix
- A quick look at Mantle on AMD's Kaveri APU @ The Tech Report
- Sapphire Radeon R9 Tri-X OC video card @ Hardwareoverclock
- AMD Radeon R9 290: Still Not Good For Linux Users @ Phoronix
- AMD Radeon R7 265 2GB Video Card Review @ Legit Reviews
- Sapphire Radeon R7 260X OC 2GB Graphics Card Review @ Techgage
- XFX Double Dissipation R9 280X @ [H]ard|OCP
What we know about Maxwell
I'm going to go out on a limb and guess that many of you reading this review would not normally have been as interested in the launch of the GeForce GTX 750 Ti if a specific word hadn't been mentioned in the title: Maxwell. It's true, the launch of the GTX 750 Ti, a mainstream graphics card that will sit at the $149 price point, marks the first public release of the new NVIDIA GPU architecture code-named Maxwell. It is a unique move for the company to start at this particular point with a new design, but as you'll see in the changes to the architecture, as well as the limitations, it all makes a certain bit of sense.
For those of you that don't really care about the underlying magic that makes the GTX 750 Ti possible, you can skip this page and jump right to the details of the new card itself. There I will detail the product specifications, performance comparison and expectations, etc.
If you are interested in learning what makes Maxwell tick, keep reading below.
The NVIDIA Maxwell Architecture
When NVIDIA first approached us about the GTX 750 Ti, they were very light on details about the GPU powering it. Even though it was confirmed to be built on Maxwell, the company hadn't yet decided whether to do a full architecture deep dive with the press. In the end they landed somewhere between the full detail we are used to getting with a new GPU design and their original, passive stance. It looks like we'll have to wait for the enthusiast-class GPU release to really get the full story, but I think the details we have now paint the picture quite clearly.
During the course of designing the Kepler architecture, and then implementing it in the Tegra line in the form of the Tegra K1, NVIDIA's engineering team developed a better sense of how to improve the performance and efficiency of the basic compute design. Kepler was a huge leap forward compared to the likes of Fermi, and Maxwell promises to be equally revolutionary. NVIDIA wanted to address GPU power consumption as well as find ways to extract more performance from the architecture at the same power levels.
The logic of the GPU design remains similar to Kepler: a Graphics Processing Cluster (GPC) houses Streaming Multiprocessors (SMs) built from a large number of CUDA cores (stream processors).
GM107 Block Diagram
Readers familiar with the look of Kepler GPUs will instantly see changes in the organization of the various blocks of Maxwell. There are more divisions, more groupings, and fewer CUDA cores "per block" than before. As it turns out, this reorganization is part of how NVIDIA improved performance and power efficiency with the new GPU.
Subject: Graphics Cards | February 13, 2014 - 02:31 PM | Jeremy Hellstrom
Tagged: radeon, r7 265, pitcairn, Mantle, gpu, amd
Sometime in late February or March you will be able to purchase the R7 265 for around $150, a decent price for an entry-level GPU that will benefit those currently dependent on the GPU portion of an APU. That raises the question of its performance, and whether this Pitcairn refresh will really benefit a gamer on a tight budget. Hardware Canucks tested it against the two NVIDIA cards closest in price: the GTX 650 Ti Boost, which is almost impossible to find, and the GTX 660 2GB, which costs $40 more than the R7 265's MSRP. The GTX 660 is faster overall, but when you look at the price-to-performance ratio the R7 265 is the more attractive offering. Of course, with NVIDIA's Maxwell release just around the corner, this could change drastically.
If you already caught Ryan's review, you might have missed the short video he just added on the last page.
"AMD's R7 265 is meant to reside in the space between the R7 260X and R9 270, though performance is closer to its R9 sibling. Could this make it a perfect budget friendly graphics card?"
Here are some more Graphics Card articles from around the web:
- AMD updates Radeon R7 series with R7 265 GPU, promising 25 percent more power @ The Inquirer
- Sapphire Radeon R7 265 2 GB @ techPowerUp
- Sapphire R7 265 Dual X @ Kitguru
- Gigabyte Windforce Radeon R9 280X OC Video Card Review @HiTech Legion
- XFX Radeon R9 290 Double Dissipation @ Benchmark Reviews
- Sapphire R7 260X 2GB OC 2x DVI Video Card Review @ Legit Reviews
- Sapphire R9-290X Tri-X “Sapphire Takes a Shot at Cooling the Monster” Review! @ Bjorn3D
- Asus R9 290 Direct CU II OC @ Kitguru
- Sapphire R9 290X Tri-X OC 4 GB @ techPowerUp
- GIGABYTE R9 290X WindForce OC Review @ Hardware Canucks
- AMD Mantle BF4 and StarSwarm Testing Part 2 @ Legit Reviews
- Gigabyte GTX 780 Ti GHz Edition 3GB @ eTeknix
- MSI GeForce GTX 780 Ti GAMING 3G @ [H]ard|OCP
Straddling the R7 and R9 designation
It is often said that the sub-$200 graphics card market is crowded. It will get even more so over the next 7 days. Today AMD is announcing a new entry into this field, the Radeon R7 265, which seems to straddle the line between their R7 and R9 brands. The product is much closer in its specifications to the R9 270 than it is to the R7 260X. As you'll see below, it is built on a very familiar GPU architecture.
AMD claims that the new R7 265 brings a 25% increase in performance to the R7 line of graphics cards. In my testing, this does turn out to be true and also puts it dangerously close to the R9 270 card released late last year. Much like we saw with the R9 290 compared to the R9 290X, the less expensive but similarly performing card might make the higher end model a less attractive option.
Let's take a quick look at the specifications of the new R7 265.
Based on the Pitcairn GPU, a part that made its debut with the Radeon HD 7870 and HD 7850 in early 2012, this card has 1024 stream processors running at 925 MHz, equating to 1.89 TFLOPS of peak compute power. Unlike the other R7 cards, the R7 265 has a 256-bit memory bus and will come with 2GB of GDDR5 memory running at 5.6 GHz effective. The card requires a single 6-pin power connection but has a peak TDP of 150 watts, pretty much the maximum deliverable by the PCI Express slot plus one such connector. And yes, the R7 265 supports DX 11.2, OpenGL 4.3, and Mantle, just like the rest of the AMD R7/R9 lineup. It does NOT support TrueAudio or the new CrossFire DMA units.
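The headline numbers above are easy to verify with a bit of arithmetic. A quick sketch (assuming, as is standard for GCN, two FLOPs per stream processor per clock from the fused multiply-add, and an 8-bits-per-byte conversion on the memory bus):

```python
# Back-of-the-envelope check of the R7 265's headline specs.
stream_processors = 1024
core_clock_ghz = 0.925  # 925 MHz

# GCN stream processors do one fused multiply-add (2 FLOPs) per clock.
peak_tflops = stream_processors * core_clock_ghz * 2 / 1000
print(f"Peak compute: {peak_tflops:.2f} TFLOPS")

# Memory bandwidth: bus width in bits x effective data rate, divided by 8 bits/byte.
bus_width_bits = 256
effective_rate_gbps = 5.6  # the "5.6 GHz" effective GDDR5 rate
bandwidth_gbs = bus_width_bits * effective_rate_gbps / 8
print(f"Memory bandwidth: {bandwidth_gbs:.1f} GB/s")
```

Both results line up with the table below: roughly 1.89 TFLOPS and 179 GB/s.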
|Radeon R9 270X||Radeon R9 270||Radeon R7 265||Radeon R7 260X||Radeon R7 260|
|GPU Code name||Pitcairn||Pitcairn||Pitcairn||Bonaire||Bonaire|
|Rated Clock||1050 MHz||925 MHz||925 MHz||1100 MHz||1000 MHz|
|Memory Clock||5600 MHz||5600 MHz||5600 MHz||6500 MHz||6000 MHz|
|Memory Bandwidth||179 GB/s||179 GB/s||179 GB/s||104 GB/s||96 GB/s|
|TDP||180 watts||150 watts||150 watts||115 watts||95 watts|
|Peak Compute||2.69 TFLOPS||2.37 TFLOPS||1.89 TFLOPS||1.97 TFLOPS||1.53 TFLOPS|
The table above compares the current AMD product lineup, ranging from the R9 270X to the R7 260, with the R7 265 sitting directly in the middle. Several specifications make the 265 a much closer relation to the R9 270/270X cards than to anything below it, even though the R7 265 has four fewer compute units (256 fewer stream processors) than the R9 270. The biggest differentiator is the 256-bit memory bus it retains: its 179 GB/s of memory bandwidth is 72% higher than the 104 GB/s of the R7 260X! That will drastically improve performance compared to the rest of the R7 products. Pay no mind to the 260X's peak compute rating being higher than the R7 265's; in real-world testing that advantage never materialized.
AMD Releases Catalyst 13.11 Beta 9.2 Driver To Correct Performance Variance Issue of R9 290 Series Graphics Cards
Subject: Graphics Cards, Cases and Cooling | November 8, 2013 - 02:41 AM | Tim Verry
Tagged: R9 290X, powertune, hawaii, graphics drivers, gpu, GCN, catalyst 13.11 beta, amd, 290x
AMD recently launched its 290X graphics card, the new high-end single-GPU solution built on the GCN-based Hawaii architecture. The new GPU is rather large and incorporates an updated version of AMD's PowerTune technology that automatically adjusts clockspeeds based on temperature, with a default maximum fan speed of 40%. Unfortunately, it seems that some 290X cards available at retail exhibited performance characteristics that varied from review units.
AMD has looked into the issue and released the following statement in response to the performance variances (which PC Perspective is looking into as well).
Hello, We've identified that there's variability in fan speeds across AMD R9 290 series boards. This variability in fan speed translates into variability of the cooling capacity of the fan-sink. The flexibility of AMD PowerTune technology enables us to correct this variability in a driver update. This update will normalize the fan RPMs to the correct values.
The correct target RPM values are 2200RPM for the AMD Radeon R9 290X "Quiet mode", and 2650RPM for the R9 290. You can verify these in GPU-Z. If you're working on stories relating to R9 290 series products, please use this driver as it will reduce any variability in fan speeds. This driver will be posted publicly tonight.
From the AMD statement, it seems to be an issue with fan speeds from card to card causing the performance variances. With a GPU that is rated to run at up to 95C, a fan limited to 40% maximum, and dynamic clockspeeds, it is only natural that cards could perform differently, especially if case airflow is not up to par. On the other hand, the specific issue pointed out by other technology review sites (per my understanding, it was initially Tom's Hardware that reported on the retail vs review sample variance) is an issue where the 40% maximum on certain cards is not actually the RPM target that AMD intended.
AMD intended for the Radeon R9 290X's fan to run at 2200 RPM (40%) in Quiet Mode and the fan on the R9 290 (which has a maximum fan speed percentage of 47%) to spin at 2650 RPM. However, some cards' 40% values are not actually hitting those intended RPMs, which causes performance differences as PowerTune adjusts clockspeeds to match the reduced cooling.
Luckily, the issue is being worked on by AMD, and it is reportedly rectified by a driver update. The driver update ensures that the fans are actually spinning at the intended speed when set to the 40% (R9 290X) or 47% (R9 290) values in Catalyst Control Center. The new driver, which includes the fix, is version Catalyst 13.11 Beta 9.2 and is available for download now.
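To make the problem concrete, here is a hypothetical sanity check a reviewer (or owner with GPU-Z) could run: compare a card's measured quiet-mode fan speed against AMD's stated targets. The 1980 RPM figure below is a made-up illustration, not a measurement from any specific card.

```python
# AMD's stated quiet-mode fan targets from the statement above, in RPM.
TARGETS = {"R9 290X": 2200, "R9 290": 2650}

def rpm_deviation(card, measured_rpm):
    """Percentage by which a card's measured fan speed misses AMD's target."""
    target = TARGETS[card]
    return (measured_rpm - target) / target * 100

# e.g. a hypothetical retail card spinning at only 1980 RPM at the 40% setting:
print(f"{rpm_deviation('R9 290X', 1980):.0f}% off target")
```

A card running 10% under the target RPM sheds that much cooling capacity, which PowerTune then converts into lower sustained clocks, exactly the retail-versus-review-sample variance reported.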
If you are running an R9 290 or R9 290X in your system, you should consider updating to the latest driver to ensure you are getting the cooling (and, as a result, gaming) performance you are supposed to be getting.
Catalyst 13.11 Beta 9.2 is available from the AMD website.
- AMD Radeon R9 290X Hawaii - The Configurable GPU?
- AMD Radeon R9 290 4GB Review - Trip to Hawaii for $399
Stay tuned to PC Perspective for more information on the Radeon R9 290 series GPU performance variance issue as it develops.
Image credit: Ryan Shrout (PC Perspective).
Subject: Graphics Cards | June 1, 2013 - 01:38 PM | Tim Verry
Tagged: watercooling, nvidia, hydro copper, gtx 780, gpu, gk110, evga
EVGA GTX 780 Hydro Copper GPUs
While NVIDIA restricted partners from going with aftermarket coolers on the company's GTX TITAN graphics card, the recently released NVIDIA GTX 780 does not appear to have the same limits placed upon it. As such, many manufacturers will be releasing GTX 780 graphics cards with custom coolers. One such design that caught my attention was the Hydro Copper full cover waterblock from EVGA.
This new cooler will be used on at least two upcoming EVGA graphics cards, the GTX 780 and GTX 780 Classified. EVGA has not yet announced clockspeeds or pricing for the Classified edition, but the GTX 780 Hydro Copper will be a GTX 780 GPU clocked at 980 MHz base and 1033 MHz boost. The 3GB of GDDR5 memory is stock clocked at 6008 MHz, however. It uses a single 8-pin and a single 6-pin PCI-E power connector. This card is selling for around $799 at retailers such as Newegg.
The GTX 780 Classified Hydro Copper will have a factory overclocked GTX 780 GPU and 3GB of GDDR5 memory at 6008 MHz, but beyond that details are scarce. The 8+8-pin PCI-E power connectors do suggest a healthy overclock (or at least that users will be able to push the cards after they get them).
Both the GTX 780 and GTX 780 Classified Hydro Copper graphics cards feature two DL-DVI, one HDMI, and one DisplayPort video outputs.
The Hydro Copper cooler itself is the really interesting bit about these cards, though. It is a single-slot, full-cover waterblock that cools the entire graphics card (GPU, VRM, memory, etc.). It has two inlet/outlet ports that can be swapped around to accommodate SLI setups or other custom water tube routing. A configurable LED-backlit EVGA logo adorns the side of the card and can be controlled in software. A 0.25 x 0.35 pin matrix is used in the portion of the block above the GPU to increase surface area and aid cooling. Unfortunately, while the card and cooler are single slot, you will actually need two case PCI expansion slots due to the two DL-DVI connectors.
It looks like a neat card, and it should perform well. I'm looking forward to seeing reviews of the card and how the cooler holds up to overclocking. Buying an overclocked card with a pre-installed waterblock is not for everyone, but for some buyers a water-cooled GPU with an intact warranty will be worth more than pairing a stock card with a custom block.
Subject: Graphics Cards | May 28, 2013 - 11:32 PM | Tim Verry
Tagged: gpu, drivers, catalyst 13.6 beta, beta, amd
AMD has released its Catalyst 13.6 beta graphics driver, and it fixes a number of issues under both Windows 8 and Linux. The new beta driver is also compatible with the existing Catalyst 13.5 CAP1 (Catalyst Application Profile) which improves performance of several PC games.
As far as the Windows version of the graphics driver, Catalyst 13.6 adds OpenCL GPU acceleration support to Adobe's Premiere Pro CC software and enables AMD Wireless Display technology on systems with the company's A-Series APUs and either Broadcom or Atheros Wi-Fi chipsets. AMD has also made a couple of tweaks to its Enduro technology, including correctly identifying when a Metro app idles and offloading the corresponding GPU tasks to integrated graphics instead of a discrete card. The new beta driver also resolves an issue with audio dropout over HDMI.
On the Linux side of things, Catalyst 13.6 beta adds support for the following when using AMD's A10, A8, A6, and A4 APUs:
- Ubuntu 13.04
- Xserver 1.14
- GLX_EXT_buffer_age
The driver fixes several bugs as well, including black screen and corruption issues in TF2, an issue with OpenGL applications and VSYNC, and UVD playback issues where the taskbar would disappear and/or the system would experience a noticeable performance drop during UVD-accelerated video playback in XBMC.
You can grab the new beta driver from the AMD website.
Subject: General Tech | April 3, 2013 - 01:21 PM | Jeremy Hellstrom
Tagged: gpu, DRAM, ddr3, price increase
It has taken a while, but the climbing price of memory is about to have an effect on the price you pay for your next GPU. DigiTimes specifically mentions DDR3, but as both GDDR4 and GDDR5 are based on DDR3 technology, they will suffer the same price increases. You can expect the new prices to last, as part of the reason for the increase in the price of RAM is the decrease in sales volume. AMD may be hit harder overall than NVIDIA, as they tend to put more memory on their cards, and buyers of value cards might see the biggest percentage increase since those cards still sport 1GB or more of memory.
"Since DDR3 memory prices have recently risen by more than 10%, the sources believe the graphics cards are unlikely to see their prices return to previous levels within the next six months unless GPU makers decide to offer promotions for specific models or launch next-generation products."
Here is some more Tech News from around the web:
- Memory vendors pile on '3D' stacking standard @ The Register
- History of the GPU, Part 2: 3Dfx Voodoo, the game-changer @ Techspot
- Intel releases OpenCL SDK for upcoming Haswell chips @ The Inquirer
- Linux Foundation Training Prepares the International Space Station for Linux Migration @ Linux.com
- Microsoft releases Exchange 2013 update @ The Register
- Canon PowerShot A2600 Review @ TechReviewSource
- AMD Releases Open-Source UVD Video Support @ Phoronix
- Win An Amazing PC Specialist Gaming System @ eTeknix
Subject: General Tech | March 27, 2013 - 01:21 PM | Jeremy Hellstrom
Tagged: gpu, history, get off my lawn
TechSpot has just published an article looking at the history of the GPU over the past decades, from the first NTSC-capable cards, through the golden 3Dfx years, straight through to the modern GPGPU. There have been a lot of standards over the years, such as MDA, CGA and EGA, as well as different interfaces, from ISA and the graphics-card-specific AGP to our current PCIe standard. The first article in this four-part series takes us from 1976 through to 1995 and the birth of the Voodoo series of accelerators. Read on to bring back memories, or perhaps to encounter some of this history for the first time.
"The evolution of the modern graphics processor begins with the introduction of the first 3D add-in cards in 1995, followed by the widespread adoption of the 32-bit operating systems and the affordable personal computer. While 3D graphics turned a fairly dull PC industry into a light and magic show, they owe their existence to generations of innovative endeavour. Over the next few weeks we'll be taking an extensive look at the history of the GPU, going from the early days of 3D consumer graphics, to the 3Dfx Voodoo game-changer, the industry's consolidation at the turn of the century, and today's modern GPGPU."
Here is some more Tech News from around the web:
- The Do-It-Yourself All-In-One Computer Standard from Intel @ Hardware Secrets
- What is Windows Blue? @ TechReviewSource
- Mozilla Firefox OS on Dreamfone video demo @ The Inquirer
- Google Keep hands-on @ The Inquirer
- Whoops! Tiny bug in NetBSD 6.0 code ruins SSH crypto keys @ The Register
In case you missed it...
In one of the last pages of our recent NVIDIA GeForce GTX TITAN graphics card review, we included an update on our Frame Rating graphics performance metric that details the testing method more thoroughly and shows results for the first time. Because it was buried so far into the article, I thought it was worth posting this information here as a separate article to solicit feedback from readers and help guide the discussion forward without it getting lost in the TITAN shuffle. If you already read that page of our TITAN review, nothing new is included below.
I am still planning a full article based on these results sooner rather than later; for now, please leave me your thoughts, comments, ideas and criticisms in the comments below!
Why are you not testing CrossFire??
If you haven't been following our sequence of stories that investigates a completely new testing methodology we are calling "frame rating", then you are really missing out. (Part 1 is here, part 2 is here.) The basic premise of Frame Rating is that the performance metrics that the industry is gathering using FRAPS are inaccurate in many cases and do not properly reflect the real-world gaming experience the user has.
Because of that, we are working on another method that uses high-end dual-link DVI capture equipment to directly record the raw output from the graphics card with an overlay technology that allows us to measure frame rates as they are presented on the screen, not as they are presented to the FRAPS software sub-system. With these tools we can measure average frame rates, frame times and stutter, all in a way that reflects exactly what the viewer sees from the game.
We aren't ready to show our full sets of results yet (soon!), but the problem is that AMD's CrossFire technology shows severe performance degradation when viewed under the Frame Rating microscope that does not show up nearly as dramatically under FRAPS. As such, I decided that it was simply irresponsible of me to present data to readers that I would then immediately refute on the final pages of the review (Editor: referencing the GTX TITAN article linked above). It would be a waste of the reader's time, and people who skip straight to the performance graphs wouldn't know our theory on why the displayed results were invalid.
Many other sites will use FRAPS, will use CrossFire, and there is nothing wrong with that at all. They are simply presenting data that they believe to be true based on the tools at their disposal. More data is always better.
Here are those results and our discussion. I decided to use the most popular game out today, Battlefield 3; please keep in mind this is NOT the worst-case scenario for AMD CrossFire in any way. I tested the Radeon HD 7970 GHz Edition in single-card and CrossFire configurations, as well as the GeForce GTX 680 in single-card and SLI configurations. To gather results I used two processes:
- Run FRAPS while running through a repeatable section and record frame rates and frame times for 60 seconds
- Run our Frame Rating capture system with a special overlay that allows us to measure frame rates and frame times in post-processing.
Here is an example of what the overlay looks like in Battlefield 3.
Frame Rating capture on GeForce GTX 680s in SLI - Click to Enlarge
The column on the left is the visual result of an overlay applied to each and every frame of the game early in the rendering process: a solid color is added to the PRESENT call (more details to come later) for each individual frame. When you are playing a game, multiple rendered frames can land within any single refresh cycle of your 60 Hz monitor, and because of that you see a succession of colors down the left-hand side.
By measuring the pixel height of those colored columns, and knowing the order in which they should appear beforehand, we can gather the same data that FRAPS does but our results are seen AFTER any driver optimizations and DX changes the game might make.
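The math behind that measurement can be sketched simply. This is a simplified illustration, not our actual capture pipeline, and the band heights below are made-up example values; the capture height and refresh rate are assumptions for the sketch.

```python
REFRESH_HZ = 60        # monitor refresh rate assumed in this sketch
CAPTURE_HEIGHT = 1080  # scanlines in one captured refresh cycle

def band_to_frame_time_ms(band_height_px):
    """Convert a colored overlay band's pixel height into the on-screen
    time that rendered frame occupied, in milliseconds."""
    fraction_of_refresh = band_height_px / CAPTURE_HEIGHT
    return fraction_of_refresh * (1000 / REFRESH_HZ)

# Three hypothetical bands within one captured refresh:
# two healthy frames and one thin sliver of a frame.
for h in (540, 530, 10):
    print(f"{h:4d} px -> {band_to_frame_time_ms(h):5.2f} ms on screen")
```

A band filling half the screen corresponds to roughly 8.3 ms of display time, while a 10-pixel sliver is visible for a small fraction of a millisecond's worth of refresh: a frame FRAPS counts in full but that contributes almost nothing to what the player actually sees.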
Frame Rating capture on Radeon HD 7970 CrossFire - Click to Enlarge
Here you see a very similar screenshot running on CrossFire. Notice the thin silver band between the maroon and purple? That is a complete frame according to FRAPS and most reviews. Not to us; we think that rendered frame is almost useless.