Subject: Graphics Cards, Cases and Cooling | January 6, 2014 - 04:00 PM | Morry Teitelman
Tagged: water cooling, ROG, nvidia, gtx 780, gtx 770, CES 2014, CES, asus
In keeping with their tradition of pushing innovation and performance boundaries through their ROG product line, ASUS today released NVIDIA-based 700-series video cards as part of their Poseidon series of liquid-cooled products. ASUS released both a GTX 780 and a GTX 770-based product with the hybrid Poseidon cooling solution.
Courtesy of ASUS
All Poseidon series graphics cards come with a hybrid cooling solution, using a combination of fan-based and water-based cooling to propel these cards to new performance heights. The card's GPU is cooled by the DirectCU H2O cooler, with water routed through the integrated barbs to the copper-based block. The water inlets are threaded, accepting G1/4"-sized male fittings. The memory and VRM components are cooled by a massive heat-pipe-based dual-fan cooler, exhausting air through the rear panel port. Both cards feature the red and black ROG coloration, with the Poseidon series name displayed prominently along the right front edge of the card.
Courtesy of ASUS
Both Poseidon series graphics cards, the ROG Poseidon GTX 770 and ROG Poseidon GTX 780, include ASUS' DIGI+ VRM and Super Alloy Power circuitry to ensure stability and component life under the most grueling conditions. When paired with the Poseidon cooling, ASUS claims the GPU runs 20% cooler and three times quieter than a comparable reference card, with operating temperatures 16 C lower than the same reference solution.
Follow all of our coverage of the show at http://pcper.com/ces!
Introduction and Unboxing
We've been covering NVIDIA's new G-Sync tech for quite some time now, and displays so equipped are finally shipping. With all of the excitement going on, I became increasingly interested in the technology, especially since I'm one of those guys who is extremely sensitive to input lag and the inevitable image tearing that results from vsync-off gaming. Increased discussion on our weekly podcast, coupled with the inherent difficulty of demonstrating the effects without seeing G-Sync in action in person, led me to pick up my own ASUS VG248QE panel for the purpose of this evaluation and review. We've generated plenty of other content revolving around the G-Sync tech itself, so let's get straight into what we're after today: evaluating the out-of-box installation process of the G-Sync kit.
All items are well packed and protected.
Included are installation instructions, a hard plastic spudger for opening the panel, a couple of stickers, and all necessary hardware bits to make the conversion.
Subject: General Tech, Graphics Cards | December 31, 2013 - 05:46 PM | Scott Michaud
Tagged: linux, iris pro, iris, Intel, haswell
'Tis the season to be sharing and giving (unless you are Disney).
According to Phoronix, Intel has shipped (heh heh heh, "boatload") over 5,000 pages of documentation and technical specs for their Haswell iGPUs, including the HD, Iris, and Iris Pro product lines. The intricacies of the 3D engine, GPGPU computation, and video acceleration are laid bare for the open source community. Video acceleration is something that is often omitted from the manuals of other companies.
Phoronix believes that Intel is, and has been, the best GPU vendor for open-source drivers. AMD obviously supports their open-source community but Intel edges them out on speed. The Radeon HD 7000 series is just beginning to mature, according to their metric, and Hawaii is far behind that. For NVIDIA, the Nouveau driver is still developed primarily by reverse-engineering. That said, documentation was released a few months ago.
Of course all of these comparisons are only considering the open-source drivers.
NVIDIA prides itself on its proprietary driver offering, and AMD pretty much offers both and lets the user choose between them. Phoronix claims that Intel employs over two dozen open-source Linux graphics developers but, of course, that is their only graphics driver for Linux. That is not a bad thing: an open-source driver at launch is essentially identical to what they would ship as a proprietary driver, just without slapping the wrists of anyone who tries to tweak it. It also makes sense for Intel, because community support will certainly do nothing but help their adoption.
If you would like to check out the documentation, it is available at Intel's 01.org.
Sapphire Triple Fan Hawaii
It was mid-December when the very first custom cooled AMD Radeon R9 290X card hit our offices in the form of the ASUS R9 290X DirectCU II. It was cooler, quieter, and faster than the reference model; this is a combination that is hard to pass up (if you could buy it yet). More and more of these custom models, both in the R9 290 and R9 290X flavor, are filtering their way into PC Perspective. Next on the chopping block is the Sapphire Tri-X model of the R9 290X.
Sapphire's triple fan cooler already made quite an impression on me when we tested a version of it on the R9 280X retail round up from October. It kept the GPU cool but it was also the loudest of the retail cards tested at the time. For the R9 290X model, Sapphire has made some tweaks to the fan speeds and the design of the cooler which makes it a better overall solution as you will soon see.
The key goal for any custom-cooled AMD R9 290/290X card is to beat AMD's reference cooler on performance, noise, and clock-rate consistency. Does Sapphire meet these goals?
The Sapphire R9 290X Tri-X 4GB
While the ASUS DirectCU II card was taller and more menacing than the reference design, the Sapphire Tri-X cooler is longer and appears to be more sleek than the competition thus far. The bright yellow and black color scheme is both attractive and unique though it does lack the LED light that the 280X showcased.
Sapphire has overclocked this model slightly, to 1040 MHz on the GPU clock, which puts it in good company.
|  | AMD Radeon R9 290X | ASUS R9 290X DirectCU II | Sapphire R9 290X Tri-X |
| --- | --- | --- | --- |
| Rated Clock | 1000 MHz | 1050 MHz | 1040 MHz |
| Memory Clock | 5000 MHz | 5400 MHz | 5200 MHz |
| TDP | ~300 watts | ~300 watts | ~300 watts |
| Peak Compute | 5.6 TFLOPS | 5.6+ TFLOPS | 5.6+ TFLOPS |
There are three fans on the Tri-X design, as the name would imply, but all three are the same size, unlike the R9 280X design with its smaller central fan.
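The peak compute numbers in the table above are easy to sanity-check from the clocks. A minimal sketch, assuming Hawaii XT's published 2816 stream processors and the standard 2 FLOPs per shader per clock for GCN's fused multiply-add units:

```python
# Peak single-precision compute for a GCN GPU:
# 2 FLOPs (one FMA) per stream processor per clock cycle.
def peak_tflops(shaders: int, clock_mhz: float) -> float:
    return 2 * shaders * clock_mhz * 1e6 / 1e12

# Hawaii XT (R9 290X) has 2816 stream processors.
print(peak_tflops(2816, 1000))  # reference card: ~5.63 TFLOPS
print(peak_tflops(2816, 1040))  # Sapphire Tri-X: ~5.86 TFLOPS
```

This is why the vendor-listed "5.6+" figures for the overclocked cards are just the reference number nudged up in proportion to the core clock.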
Subject: Graphics Cards | December 31, 2013 - 03:13 PM | Jeremy Hellstrom
Tagged: amd, asus, ASUS ROG, MATRIX PLATINUM R9 280X
There is a lot to love about the ASUS ROG Matrix Platinum R9 280X, from its looks to its faster and quieter performance compared to the reference model. Still, [H]ard|OCP doesn't recommend running the fan at 100% except for brief moments when you need to dump heat quickly, as it is quite loud; 50-60% will keep you below 90C and is not very noticeable. It overclocks very well: they hit 1270MHz core and 6.6GHz VRAM easily and saw performance improvements when they did, occasionally pushing past the GTX 770. There is one major disappointment with this card: the price is currently over 50% higher than the base 280X MSRP, so you might want to hold off for a while before thinking of purchasing it.
"Today we take ASUS' elite R9 280X product, the ASUS ROG MATRIX PLATINUM R9 280X video card, and put its advanced power phasing and board components to the test as we harshly overclock it. We compare it to an ASUS GeForce GTX 770 DirectCU II. Then we will put it head to head against the overclocked SAPPHIRE TOXIC R9 280X."
Here are some more Graphics Card articles from around the web:
- Powercolor Radeon R9-290X Review @ Bjorn3D
- Gigabyte R9 270X Windforce OC @ Kitguru
- Sapphire R9 290X Tri-X OC @ Kitguru
- The $109 Console-killer GPU: AMD's Radeon R7 260 Graphics Card Reviewed @ Techgage
- AMD Catalyst 2013 Linux Graphics Driver Year-In-Review @ Phoronix
- Asus GTX 780 Ti DirectCU II OC @ Kitguru
- Zotac GeForce GTX 780 Ti AMP! Review @ Bjorn3D
Subject: Graphics Cards | December 30, 2013 - 02:32 PM | Ryan Shrout
Tagged: amd, Mantle, hawaii, BF4, battlefield 4
If you have been following the mess that has been Battlefield 4 since its release, with crashes on both PCs and consoles, you know that EA and DICE have decided that fixing the broken game is the number one priority. Gee, thanks.
While they work on that though, there is another casualty of development beyond the pending DLC packs: AMD's Mantle version of the game. If you remember way back in September of 2013, along with the announcement of AMD's Hawaii GPUs, AMD and DICE promised a version of BF4 running on Mantle as a free update in December. If you are counting, that is just one more day away from being late.
Today we got this official statement from AMD:
After much consideration, the decision was made to delay the Mantle patch for Battlefield 4. AMD continues to support DICE on the public introduction of Mantle, and we are tremendously excited about the coming release for Battlefield 4! We are now targeting a January release and will have more information to share in the New Year.
Well, it's not a surprise but it sure is a bummer. One of the killer new features for AMD's GPUs was supposed to be the ability to use this new low-level API to enhance performance for PC games. As Josh stated in our initial article on the subject, "It bypasses DirectX (and possibly the hardware abstraction layer) and developers can program very close to the metal with very little overhead from software. This lowers memory and CPU usage, it decreases latency, and because there are fewer “moving parts” AMD claims that they can do 9x the draw calls with Mantle as compared to DirectX. This is a significant boost in overall efficiency."
It seems that buyers of AMD's R9 series of graphics cards will need to wait at least another month to see what the promise of Mantle is really all about. Will the wait be worth it?
Subject: General Tech, Graphics Cards | December 24, 2013 - 04:15 AM | Scott Michaud
Tagged: R9 290X, r9 290, msi
MSI just announced their two customized Hawaii GPUs. One of the two new boards will be based on the R9 290 and the other based on the R9 290X. The design is based around the Twin Frozr IV Advanced two-fan model found on previous cards.
The specifications of the 290X version include three different modes: OC, Gaming, and Silent. Silent mode runs at 1000 MHz, the same clock speed as the reference models. In Gaming mode the card runs at 1030 MHz, and in OC mode it clocks at 1040 MHz. There will be some slight noise-level variance between them, but I am pretty sure the difference between Gaming and OC mode will be negligible.
The R9 290 version has the same three settings, but the clock speeds are 947 MHz, 977 MHz and 1007 MHz respectively.
As a final note: MSI's press release claims "Available Now". It does not appear to be available on either Amazon or Newegg, but NCIX claims it is estimated to arrive February 26th, 2014 for $700. I seriously hope there are a few typos in there... maybe they meant December 26? And maybe they didn't mean a more-than-$100 premium?
It is, unfortunately, still a wait and see game with these custom AIBs.
Subject: General Tech, Graphics Cards | December 19, 2013 - 07:23 PM | Scott Michaud
Tagged: amd, firepro, SPECviewperf
SPECviewperf 12 is a benchmark for workstation components that attempts to measure performance expected for professional applications. It is basically synthetic but is designed to quantify how your system can handle Maya, for instance. AMD provided us with a press deck of some benchmarks they ran leading to many strong FirePro results in the entry to mid-range levels.
They did not include high-end results, which they justify with the quote, "[The] Vast majority of CAD and CAE users purchase entry and mid-range Professional graphics boards". That slide, itself, was titled "Focusing Where It Matters Most". I will accept that, but I assume they ran the high-end benchmarks anyway and wonder whether it would have been better simply to include them.
The cards AMD compared are:
- Quadro 410 ($105) vs FirePro V3900 ($105)
- Quadro K600 ($160) vs FirePro V4900 ($150)
- Quadro K2000 ($425) vs FirePro W5000 ($425)
- Quadro K4000 ($763) vs FirePro W7000 ($750)
In each of the pairings, about as equally priced as possible, AMD held a decent lead throughout the eight tests included in SPECviewperf 12. You could see the performance gap leveling off as prices began to rise, however.
Obviously a single benchmark suite should be just one data-point when comparing two products. Still, these are pretty healthy performance numbers.
Subject: General Tech, Graphics Cards, Processors | December 19, 2013 - 04:05 AM | Scott Michaud
Tagged: Intel, haswell
In another review from around the net, Carl Nelson over at Hardcoreware tested the dual-core (4 threads) Intel Core i3-4340 based on the Haswell architecture. This processor slides into the $157 retail price point with a maximum frequency of 3.6GHz and an Intel HD 4600 iGPU clocked at 1150MHz. Obviously this is not intended as top-end performance but, of course, not everyone wants that.
Image Credit: Hardcoreware
One page which I found particularly interesting was the one which benchmarked Battlefield 4 rendering on the iGPU. The AMD A10-6790K (~$130) had a slightly lower 99th-percentile frame time (characteristic of higher performance) but a slightly lower average frames per second (characteristic of lower performance). The graph of frame times shows that AMD is much more consistent than Intel. Perhaps big blue needs a little Frame Rating? I would be curious to see what is causing the quite noticeable (in the graph, at least) stutter. AMD's frame pacing seems to be very consistent, albeit this is obviously not a Crossfire scenario.
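The apparent contradiction in those two metrics is worth spelling out. A minimal sketch with made-up frame-time numbers (not the review's actual data) showing how one card can win on average FPS while losing on 99th-percentile frame time:

```python
# Hypothetical frame-time samples (milliseconds) for two GPUs.
# Lower average frame time -> higher average FPS;
# lower 99th-percentile frame time -> fewer severe stutters.
def avg_fps(frame_times_ms):
    return 1000.0 / (sum(frame_times_ms) / len(frame_times_ms))

def percentile_99(frame_times_ms):
    s = sorted(frame_times_ms)
    return s[int(0.99 * len(s)) - 1]  # simple nearest-rank estimate

gpu_consistent = [30.0] * 98 + [31.0, 32.0]  # steady pacing
gpu_stuttery   = [27.0] * 98 + [60.0, 65.0]  # faster on average, big spikes

# The stuttery card wins on average FPS (~36 vs ~33) but loses badly
# on 99th percentile (60 ms vs 31 ms) -- the same split as in the review.
```

This is exactly why frame-time graphs and percentiles tell you more about perceived smoothness than an FPS average ever can.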
If you are shopping in the low-to-mid $100 price range, be sure to check out his review. Also, of course, Kaveri should be coming next month, so that is something to look out for.
The First Custom R9 290X
It has been a crazy launch for the AMD Radeon R9 series of graphics cards. When we first reviewed both the R9 290X and the R9 290, we came away very impressed with the GPU and the performance it provided; both reviews resulted in Gold awards. The 290X set a new bar for single-GPU performance, while the R9 290 nearly matched it at a crazy $399 price tag.
But there were issues. Big, glaring issues. Clock speeds had a huge amount of variance depending on the game, and we saw a GPU rated as "up to 1000 MHz" running at 899 MHz in Skyrim and 821 MHz in Bioshock Infinite. Those are significant clock-rate deltas, and they map almost perfectly onto deltas in performance. These speeds also changed based on the "hot" or "cold" status of the graphics card: had it warmed up and been active for 10 minutes prior to testing? If so, the performance was measurably lower than with a "cold" GPU that was just started.
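The scale of those deficits is easy to put in percentage terms, which is roughly the performance hit you would expect since Hawaii's throughput scales almost linearly with core clock:

```python
# Observed sustained clocks vs the "up to 1000 MHz" rated clock.
RATED_MHZ = 1000
observed = {"Skyrim": 899, "Bioshock Infinite": 821}

for game, clock in observed.items():
    deficit_pct = (RATED_MHZ - clock) / RATED_MHZ * 100
    print(f"{game}: {deficit_pct:.1f}% below rated clock")
# Skyrim: 10.1% below rated clock
# Bioshock Infinite: 17.9% below rated clock
```

A card that spends most of a benchmark 10-18% below its advertised clock will post 10-18% lower frame rates than the rating implies, which is exactly the mismatch we measured.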
That issue was not necessarily a deal killer; rather, it just made us rethink how we test GPUs. The much bigger deal was that many people were seeing lower performance on retail-purchased cards than on the reference cards sent to press for reviews. In our November testing, the retail card we purchased, using the exact same cooler as the reference model, ran 6.5% slower than we expected.
The obvious hope was that retail cards with custom PCBs and coolers would be released by AMD partners and somehow fix this whole dilemma. Today we see if that was correct.
Get notified when we go live!