Manufacturer: ASUS

Introduction and Technical Specifications

Introduction

02-card-profile.jpg

Courtesy of ASUS

The ASUS ROG Poseidon GTX 780 video card is the latest incarnation of the Republic of Gamers (ROG) Poseidon series. Like previous Poseidon products, the Poseidon GTX 780 features a hybrid cooler capable of both air- and liquid-based cooling for the GPU and onboard components. The ASUS ROG Poseidon GTX 780 carries an MSRP of $599 - a premium price for a premium card.

03-fly-apart-image.jpg

Courtesy of ASUS

In designing the Poseidon GTX 780, ASUS packed in many of the premium components you would normally find as add-ons. The card also uses motherboard-quality power components: a 10-phase digital power regulation system built on ASUS DIGI+ VRM technology, coupled with Japanese black metallic capacitors. The Poseidon GTX 780 has the following features integrated into its design: a DisplayPort output, an HDMI output, dual DVI ports (one DVI-D and one DVI-I), an aluminum backplate, integrated G 1/4" threaded liquid ports, dual 90mm cooling fans, 6-pin and 8-pin PCIe-style power connectors, and integrated power connector LEDs and an ROG logo LED.

Continue reading our review of the ASUS ROG Poseidon GTX 780 graphics card!

Was leading with a low-end Maxwell smart?

Subject: Graphics Cards | February 19, 2014 - 04:43 PM |
Tagged: geforce, gm107, gpu, graphics, gtx 750 ti, maxwell, nvidia, video

We finally saw Maxwell yesterday, with a new design for the SMs, called the SMM, each of which consists of four blocks of 32 dedicated, non-shared CUDA cores.  In theory that should allow NVIDIA to pack more SMMs onto a chip than it could with the previous SMX units.  This new design debuted on a $150 card, which means we don't really get to see what the architecture is fully capable of yet.  At that price it competes with AMD's R7 260X and R7 265, at least if you can find them at their MSRPs and not at inflated cryptocurrency-mining prices.  Legit Reviews compared the performance of two overclocked GTX 750 Ti cards against those two AMD cards, as well as the previous-generation GTX 650 Ti Boost, across a wide selection of games to see how the new card stacks up, which you can read here.

That is of course after you read Ryan's full review.

nvidia-geforce-gtx750ti-645x399.jpg

"NVIDIA today announced the new GeForce GTX 750 Ti and GTX 750 video cards, which are very interesting to use as they are the first cards based on NVIDIA's new Maxwell graphics architecture. NVIDIA has been developing Maxwell for a number of years and have decided to launch entry-level discrete graphics cards with the new technology first in the $119 to $149 price range. NVIDIA heavily focused on performance per watt with Maxwell and it clearly shows as the GeForce GTX 750 Ti 2GB video card measures just 5.7-inches in length with a tiny heatsink and doesn't require any internal power connectors!"

Here are some more Graphics Card articles from around the web:

Graphics Cards

Author:
Manufacturer: NVIDIA

What we know about Maxwell

I'm going to go out on a limb and guess that many of you reading this review would not have normally been as interested in the launch of the GeForce GTX 750 Ti if a specific word hadn't been mentioned in the title: Maxwell.  It's true, the launch of the GTX 750 Ti, a mainstream graphics card that will sit at the $149 price point, marks the first public release of the new NVIDIA GPU architecture, code-named Maxwell.  It is a unique move for the company to start at this particular point with a new design, but as you'll see in the changes to the architecture, as well as its limitations, it all makes a certain amount of sense.

For those of you that don't really care about the underlying magic that makes the GTX 750 Ti possible, you can skip this page and jump right to the details of the new card itself.  There I will detail the product specifications, performance comparison and expectations, etc.

If you are interested in learning what makes Maxwell tick, keep reading below.

The NVIDIA Maxwell Architecture

When NVIDIA first approached us about the GTX 750 Ti, they were very light on details about the GPU powering it.  Even though it was confirmed to be built on Maxwell, the company hadn't yet decided whether it would do a full architecture deep dive with the press.  In the end they went somewhere in between the full detail we are used to getting with a new GPU design and that original, passive stance.  It looks like we'll have to wait for the enthusiast-class GPU release to really get the full story, but I think the details we have now paint the picture quite clearly.

During the course of designing the Kepler architecture, and then implementing it in the Tegra line in the form of the Tegra K1, NVIDIA's engineering team developed a better sense of how to improve the performance and efficiency of the basic compute design.  Kepler was a huge leap forward compared to the likes of Fermi, and Maxwell promises to be equally revolutionary.  NVIDIA wanted to address GPU power consumption as well as find ways to extract more performance from the architecture at the same power levels.

The logic of the GPU design remains similar to Kepler.  There is a Graphics Processing Cluster (GPC) that houses Streaming Multiprocessors (SMs) built from a large number of CUDA cores (stream processors).

block.jpg

GM107 Block Diagram

Readers familiar with the look of Kepler GPUs will instantly see changes in the organization of the various blocks of Maxwell.  There are more divisions, more groupings, and fewer CUDA cores "per block" than before.  As it turns out, this reorganization is a key part of how NVIDIA was able to improve performance and power efficiency with the new GPU.
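To put numbers on that reorganization, here is a quick back-of-the-envelope comparison. The 4 x 32 SMM partitioning comes from NVIDIA's disclosure; the 192-core Kepler SMX figure and GM107's five-SMM configuration are widely published specs, used here purely as an illustrative sanity check:

```python
# Kepler's SMX groups 192 CUDA cores into one large shared block;
# Maxwell's SMM splits its cores into four dedicated, non-shared blocks.
KEPLER_CORES_PER_SMX = 192
MAXWELL_BLOCKS_PER_SMM = 4
MAXWELL_CORES_PER_BLOCK = 32

cores_per_smm = MAXWELL_BLOCKS_PER_SMM * MAXWELL_CORES_PER_BLOCK
print(cores_per_smm)        # 128 cores per SMM, a third fewer than an SMX

# GM107 ships with 5 SMMs, giving the GTX 750 Ti its CUDA core count:
print(5 * cores_per_smm)    # 640 CUDA cores total
```

Smaller, more independent scheduling partitions are exactly what lets NVIDIA pack more SMMs per die at a given power budget.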

Continue reading our review of the NVIDIA GeForce GTX 750 Ti and Maxwell Architecture!!

Pitcairn rides again in the R7 265

Subject: Graphics Cards | February 13, 2014 - 02:31 PM |
Tagged: radeon, r7 265, pitcairn, Mantle, gpu, amd

Sometime in late February or March you will be able to purchase the R7 265 for around $150, a decent price for an entry-level GPU that will benefit those currently dependent on the graphics portion of an APU.  This leads to the question of its performance, and whether this Pitcairn refresh will really benefit a gamer on a tight budget.  Hardware Canucks tested it against the two NVIDIA cards closest in price: the GTX 650 Ti Boost, which is almost impossible to find, and the GTX 660 2GB, which costs $40 more than the R7 265's MSRP.  The GTX 660 is faster overall, but when you look at the price-to-performance ratio the R7 265 is the more attractive offering.  Of course, with NVIDIA's Maxwell release just around the corner, this could change drastically.

If you already caught Ryan's review, you might have missed the short video he just added on the last page.

slides04.jpg

Crowded house

"AMD's R7 265 is meant to reside in the space between the R7 260X and R9 270, though performance is closer to its R9 sibling. Could this make it a perfect budget friendly graphics card?"

Here are some more Graphics Card articles from around the web:

Graphics Cards

Author:
Manufacturer: AMD

Straddling the R7 and R9 designation

It is often said that the sub-$200 graphics card market is crowded, and it will get even more so over the next 7 days.  Today AMD is announcing a new entry into this field, the Radeon R7 265, which seems to straddle the line between the R7 and R9 brands.  The product is much closer in its specifications to the R9 270 than it is to the R7 260X, and as you'll see below, it is built on a very familiar GPU architecture.

slides01.jpg

AMD claims that the new R7 265 brings a 25% increase in performance to the R7 line of graphics cards.  In my testing, this does turn out to be true and also puts it dangerously close to the R9 270 card released late last year. Much like we saw with the R9 290 compared to the R9 290X, the less expensive but similarly performing card might make the higher end model a less attractive option.

Let's take a quick look at the specifications of the new R7 265.

slides02.jpg

Based on the Pitcairn GPU, a part that made its debut with the Radeon HD 7870 and HD 7850 in early 2012, this card has 1024 stream processors running at 925 MHz, equating to 1.89 TFLOPS of peak compute power.  Unlike the other R7 cards, the R7 265 has a 256-bit memory bus and will come with 2GB of GDDR5 memory running at 5.6 GHz effective.  The card requires a single 6-pin power connection but has a peak TDP of 150 watts - pretty much the maximum available from the PCI Express bus plus one power connector.  And yes, the R7 265 supports DX 11.2, OpenGL 4.3, and Mantle, just like the rest of the AMD R7/R9 lineup.  It does NOT, however, support TrueAudio or the new XDMA CrossFire engine.

                  Radeon R9 270X  Radeon R9 270  Radeon R7 265  Radeon R7 260X  Radeon R7 260
GPU Code Name     Pitcairn        Pitcairn       Pitcairn       Bonaire         Bonaire
GPU Cores         1280            1280           1024           896             768
Rated Clock       1050 MHz        925 MHz        925 MHz        1100 MHz        1000 MHz
Texture Units     80              80             64             56              48
ROP Units         32              32             32             16              16
Memory            2GB             2GB            2GB            2GB             2GB
Memory Clock      5600 MHz        5600 MHz       5600 MHz       6500 MHz        6000 MHz
Memory Interface  256-bit         256-bit        256-bit        128-bit         128-bit
Memory Bandwidth  179 GB/s        179 GB/s       179 GB/s       104 GB/s        96 GB/s
TDP               180 watts       150 watts      150 watts      115 watts       95 watts
Peak Compute      2.69 TFLOPS     2.37 TFLOPS    1.89 TFLOPS    1.97 TFLOPS     1.53 TFLOPS
MSRP              $199            $179           $149           $119            $109

The table above compares the current AMD product lineup, ranging from the R9 270X to the R7 260, with the R7 265 sitting directly in the middle.  Several specifications make the 265 a much closer relation to the R9 270/270X than to anything below it, even though the R7 265 has four fewer compute units (256 fewer stream processors) than the R9 270.  The biggest differentiator is the 256-bit memory bus that persists from the R9 parts; its 179 GB/s of memory bandwidth is 72% higher than the 104 GB/s of the R7 260X!  That will drastically improve performance relative to the rest of the R7 products.  Pay no mind to the 260X's peak compute rating being higher than the R7 265's; in real-world testing that advantage never materialized.
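The peak compute and bandwidth figures in the table above can be reproduced directly from the core specs. A quick sketch, assuming the standard convention of 2 FLOPS per core per clock (one fused multiply-add) and the effective GDDR5 data rates listed:

```python
def peak_tflops(cores, clock_mhz, flops_per_clock=2):
    """Peak single-precision throughput; FMA counts as 2 ops/core/clock."""
    return cores * clock_mhz * 1e6 * flops_per_clock / 1e12

def bandwidth_gbs(bus_bits, effective_clock_mhz):
    """Peak memory bandwidth: bus width in bytes times effective data rate."""
    return bus_bits / 8 * effective_clock_mhz * 1e6 / 1e9

print(round(peak_tflops(1024, 925), 2))   # 1.89 TFLOPS for the R7 265
print(round(bandwidth_gbs(256, 5600)))    # 179 GB/s for the R7 265
print(round(bandwidth_gbs(128, 6500)))    # 104 GB/s for the R7 260X
print(round((179.2 / 104 - 1) * 100))     # ~72% bandwidth advantage
```

The same two formulas regenerate every Peak Compute and Memory Bandwidth cell in the table.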

Continue reading our review of the new AMD Radeon R7 265 2GB Graphics Card!!

AMD Releases Catalyst 13.11 Beta 9.2 Driver To Correct Performance Variance Issue of R9 290 Series Graphics Cards

Subject: Graphics Cards, Cases and Cooling | November 8, 2013 - 02:41 AM |
Tagged: R9 290X, powertune, hawaii, graphics drivers, gpu, GCN, catalyst 13.11 beta, amd, 290x

AMD recently launched its 290X graphics card, which is the new high-end single GPU solution using a GCN-based Hawaii architecture. The new GPU is rather large and incorporates an updated version of AMD's PowerTune technology to automatically adjust clockspeeds based on temperature and a maximum fan speed of 40%. Unfortunately, it seems that some 290X cards available at retail exhibited performance characteristics that varied from review units.

Retail versus Review Sample Performance Variance Testing.jpg

AMD has looked into the issue and released the following statement in response to the performance variances (which PC Perspective is looking into as well).

Hello, We've identified that there's variability in fan speeds across AMD R9 290 series boards. This variability in fan speed translates into variability of the cooling capacity of the fan-sink. The flexibility of AMD PowerTune technology enables us to correct this variability in a driver update. This update will normalize the fan RPMs to the correct values.

The correct target RPM values are 2200RPM for the AMD Radeon R9 290X "Quiet mode", and 2650RPM for the R9 290. You can verify these in GPU-Z. If you're working on stories relating to R9 290 series products, please use this driver as it will reduce any variability in fan speeds. This driver will be posted publicly tonight.

From the AMD statement, it seems to be an issue with fan speeds varying from card to card, causing the performance variances. With a GPU rated to run at up to 95C, a fan limited to 40% maximum, and dynamic clockspeeds, it is only natural that cards could perform differently, especially if case airflow is not up to par. On the other hand, the specific issue pointed out by other technology review sites (my understanding is that Tom's Hardware first reported on the retail vs. review sample variance) is that on certain cards the 40% setting does not actually produce the RPM target that AMD intended.

AMD intended for the Radeon R9 290X's fan to run at 2200 RPM (40%) in Quiet Mode and the fan on the R9 290 (which has a maximum fan speed percentage of 47%) to spin at 2650 RPM. However, some cards' 40% settings are not actually hitting those intended RPMs, which causes performance differences as PowerTune adjusts clockspeeds in response to the reduced cooling.
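For a sense of scale, those two data points let you estimate each card's full-speed RPM. This assumes the fan controller maps duty-cycle percentage linearly onto a per-card maximum RPM, which is an illustrative simplification, not AMD's documented controller behavior:

```python
# Hedged sketch: if fan percentage scales linearly with RPM, the Quiet
# Mode targets AMD quotes imply these 100% duty-cycle maxima.
def implied_max_rpm(target_rpm, pct):
    return target_rpm / (pct / 100)

print(round(implied_max_rpm(2200, 40)))  # R9 290X: ~5500 RPM at 100%
print(round(implied_max_rpm(2650, 47)))  # R9 290:  ~5638 RPM at 100%

# A card whose "40%" actually lands at, say, 2000 RPM cools less
# effectively, so PowerTune pulls clocks down and benchmarks drop.
```

This is why a percentage-based cap was fragile: the same 40% setting on two fans with different RPM curves yields different cooling, which is exactly the variance the driver update normalizes.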

Luckily, AMD has addressed the issue with a driver update. The update ensures that the fans actually spin at the intended speeds when set to the 40% (R9 290X) or 47% (R9 290) values in Catalyst Control Center. The new driver, Catalyst 13.11 Beta 9.2, is available for download now.

If you are running an R9 290 or R9 290X in your system, you should consider updating to the latest driver to ensure you are getting the cooling (and, as a result, gaming) performance you are supposed to be getting.

Catalyst 13.11 Beta 9.2 is available from the AMD website.

Stay tuned to PC Perspective for more information on the Radeon R9 290 series GPU performance variance issue as it develops.

Image credit: Ryan Shrout (PC Perspective).

Source: AMD

EVGA Outfits GTX 780 With Hydro Copper Water Block

Subject: Graphics Cards | June 1, 2013 - 01:38 PM |
Tagged: watercooling, nvidia, hydro copper, gtx 780, gpu, gk110, evga

EVGA GTX 780 Hydro Copper GPUs

While NVIDIA restricted partners from going with aftermarket coolers on the company's GTX TITAN graphics card, the recently released NVIDIA GTX 780 does not appear to have the same limits placed upon it. As such, many manufacturers will be releasing GTX 780 graphics cards with custom coolers. One such design that caught my attention was the Hydro Copper full cover waterblock from EVGA.

EVGA GTX 780 with Hydro Copper Water Block (2).jpg

This new cooler will be used on at least two upcoming EVGA graphics cards, the GTX 780 and GTX 780 Classified. EVGA has not yet announced clockspeeds or pricing for the Classified edition, but the GTX 780 Hydro Copper will be a GTX 780 clocked at 980 MHz base and 1033 MHz boost. The 3GB of GDDR5 memory, however, remains at the stock 6008 MHz. The card uses a single 8-pin and a single 6-pin PCI-E power connector and is selling for around $799 at retailers such as Newegg.

The GTX 780 Classified Hydro Copper will have a factory overclocked GTX 780 GPU and 3GB of GDDR5 memory at 6008 MHz, but beyond that details are scarce. The 8+8-pin PCI-E power connectors do suggest a healthy overclock (or at least that users will be able to push the cards after they get them).

Both the GTX 780 and GTX 780 Classified Hydro Copper graphics cards feature two DL-DVI, one HDMI, and one DisplayPort video outputs.

EVGA GTX 780 Classified with Hydro Copper Water Block (1).jpg

The Hydro Copper cooler itself is the really interesting bit about these cards, though. It is a single-slot, full-cover waterblock that cools the entire graphics card (GPU, VRM, memory, etc.). It has two inlet/outlet ports that can be swapped around to accommodate SLI setups or other custom water tube routing. A configurable LED-backlit EVGA logo adorns the side of the card and can be controlled in software. A 0.25 x 0.35 pin matrix is used in the portion of the block above the GPU to increase surface area and aid cooling. Unfortunately, while the card and cooler are single slot, you will actually need two case expansion slots due to the two DL-DVI connectors.

It looks like a neat card, and it should perform well. I'm looking forward to seeing reviews of the card and how the cooler holds up to overclocking. Buying an overclocked card with a pre-installed waterblock is not for everyone, but having a water-cooled GPU that retains its warranty may well be worth more than pairing a stock card with a custom block.

Source: EVGA

AMD Catalyst 13.6 Beta Drivers For Windows and Linux Now Available

Subject: Graphics Cards | May 28, 2013 - 11:32 PM |
Tagged: gpu, drivers, catalyst 13.6 beta, beta, amd

AMD has released its Catalyst 13.6 beta graphics driver, and it fixes a number of issues under both Windows 8 and Linux. The new beta driver is also compatible with the existing Catalyst 13.5 CAP1 (Catalyst Application Profile) which improves performance of several PC games.

As for the Windows version of the driver, Catalyst 13.6 adds OpenCL GPU acceleration support for Adobe's Premiere Pro CC and enables AMD Wireless Display technology on systems with the company's A-Series APUs and either Broadcom or Atheros Wi-Fi chipsets. AMD has also made a couple of tweaks to its Enduro technology, including correctly identifying when a Metro app is idle and offloading the corresponding GPU tasks to integrated graphics instead of the discrete card. The new beta driver also resolves an issue with audio dropouts over HDMI.

AMD Catalyst Drivers.jpg

On the Linux side of things, Catalyst 13.6 beta adds support for the following when using AMD's A10, A8, A6, and A4 APUs:

  • Ubuntu 13.04
  • Xserver 1.14
  • GLX_EXT_buffer_age

The driver fixes several bugs as well, including black screen and corruption issues in TF2, an issue with OpenGL applications and VSYNC, and UVD playback issues in XBMC where the taskbar would disappear and/or the system would experience a noticeable performance drop during video playback.

You can grab the new beta driver from the AMD website.

Source: AMD

Bad news GPU fans, prices may be climbing

Subject: General Tech | April 3, 2013 - 01:21 PM |
Tagged: gpu, DRAM, ddr3, price increase

It has taken a while, but the climbing price of memory is about to have an effect on the price you pay for your next GPU.  DigiTimes specifically mentions DDR3, but as both GDDR4 and GDDR5 are based on DDR3 technology, they will suffer the same price increases.  Expect the new prices to stick around, as part of the reason for the increase is a decrease in sales volume.  AMD may be hit harder overall than NVIDIA, as it tends to put more memory on its cards, and buyers of value cards might see the biggest percentage increase since those cards still sport 1GB or more of memory.

Money.jpg

"Since DDR3 memory prices have recently risen by more than 10%, the sources believe the graphics cards are unlikely to see their prices return to previous levels within the next six months unless GPU makers decide to offer promotions for specific models or launch next-generation products."

Here is some more Tech News from around the web:

Tech Talk

Source: DigiTimes

Grab a sprite and take a graphical trip down memory lane

Subject: General Tech | March 27, 2013 - 01:21 PM |
Tagged: gpu, history, get off my lawn

TechSpot has just published an article looking at the history of the GPU over the past decades, from the first NTSC-capable cards, through the golden 3dfx years, straight through to the modern GPGPU.  There have been a lot of standards over the years, such as MDA, CGA, and EGA, as well as different interfaces like ISA, the graphics-card-specific AGP, and our current PCIe standard.  The first article in this four-part series takes us from 1976 through to 1995 and the birth of the Voodoo series of accelerators.  Read on to bring back memories, or perhaps to encounter some of this history for the first time.

TS_glquake.jpg

"The evolution of the modern graphics processor begins with the introduction of the first 3D add-in cards in 1995, followed by the widespread adoption of the 32-bit operating systems and the affordable personal computer. While 3D graphics turned a fairly dull PC industry into a light and magic show, they owe their existence to generations of innovative endeavour. Over the next few weeks we'll be taking an extensive look at the history of the GPU, going from the early days of 3D consumer graphics, to the 3Dfx Voodoo game-changer, the industry's consolidation at the turn of the century, and today's modern GPGPU."

Here is some more Tech News from around the web:

Tech Talk

Source: TechSpot