Subject: Graphics Cards | January 23, 2014 - 06:01 PM | Jeremy Hellstrom
Tagged: amd, asus, R9 290X DC2 OC, overclocking
[H]ard|OCP has had a chance to take the time to really see how well the R9 290X can overclock; since frequencies drop as heat builds, a quick gaming session is not enough to truly represent the performance of this new GPU. The ASUS R9 290X DirectCU II OC offers a custom cooler which demonstrated the overclocking potential of this GPU on air cooling, or at least of this specific sample, as we have seen solid evidence of performance variability with 28nm Hawaii GPUs. You should read the full review to truly understand what they saw when overclocking, but the good news is that once they found a sweet spot for fan speed and voltage the GPU remained at the frequency they chose. Unfortunately, at 1115MHz the overclock they managed was only 75MHz higher than the card's default speed, and while that could beat a stock GTX 780 Ti, the NVIDIA product overclocked higher and proved the superior card.
"We will take the ASUS R9 290X DC2 OC custom AMD R9 290X based video card and for the first time see how well the 290X can overclock. We will also for the first time compare it to an overclocked GeForce GTX 780 Ti video card head-to-head and see who wins when overclocking is accounted for."
Here are some more Graphics Card articles from around the web:
- Sapphire R9 290 4GB TRI-X OC Review @ Hardware Canucks
- HIS R9 270X IceQ X² Turbo Boost 2GB @ eTeknix
- Sapphire R9 290 Tri-X 4GB @ eTeknix
- Powercolor R9 280X TurboDuo 3GB @ eTeknix
- ASUS R9 290X DirectCU II OC 4 GB @ techPowerUp
- Gigabyte AMD Radeon R9 290X WF OC Video Card Review @ Madshrimps
- Sapphire Radeon R7 260X OC Review @ TechwareLabs
- EVGA GTX 780 Ti Classified 3072 MB @ techPowerUp
- Gigabyte GTX 780 Ti GHZ Edition Review! @ Bjorn3D
Subject: General Tech, Graphics Cards | January 23, 2014 - 03:29 AM | Scott Michaud
Tagged: ShadowPlay, nvidia, geforce experience
NVIDIA has been upgrading their GeForce Experience just about once per month, on average. Most of their attention has been focused on ShadowPlay, their video capture and streaming service for DirectX-based games. GeForce Experience 1.8.1 brought streaming to Twitch and the ability to overlay the user's webcam.
Until this version, users could choose between "Low", "Medium", and "High" quality stages. GeForce Experience 1.8.2 adds "Custom" which allows manual control over resolution, frame rate, and bit rate. NVIDIA wants to make it clear: frame rate controls the number of images per second and bit rate controls the file size per second. Reducing the frame rate without adjusting the bit rate will result in a file of the same size (just with better quality per frame).
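A quick back-of-envelope sketch of the relationship NVIDIA is describing (the numbers here are illustrative, not NVIDIA's defaults): recording size is driven by bit rate and duration only, so frame rate changes only redistribute bits across frames.

```python
# Illustrative: file size depends on bit rate and duration, not frame rate.
def file_size_mb(bitrate_mbps: float, duration_s: float) -> float:
    """Approximate recording size in megabytes (8 megabits = 1 megabyte)."""
    return bitrate_mbps * duration_s / 8

# A 60-second clip at 10 Mbps is ~75 MB whether captured at 30 fps or 60 fps;
# at 30 fps each frame simply gets roughly twice the bits, so per-frame
# quality goes up while the file size stays put.
print(file_size_mb(10, 60))  # 75.0
```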
Also with this update, NVIDIA allows users to set a push-to-talk key. I expect this will be mostly useful for Twitch streaming in a crowded dorm or household. Only transmitting your voice when you have something to say prevents someone else from accidentally transmitting theirs globally and instantaneously.
GeForce Experience 1.8.2 is available for download at the GeForce website. Users with a Fermi-based GPU will no longer be pushed GeForce Experience (because it really does not do anything for those graphics cards). The latest version can always be manually downloaded, however.
Subject: General Tech, Graphics Cards, Processors | January 22, 2014 - 09:41 PM | Scott Michaud
AMD had a decent quarter and came close to a profitable year as a whole. For the quarter ending on December 28th, the company managed $89 million in profit, which accounts for interest payments on loans and everything else. The whole year produced a $103 million gain in operating income, although that still works out to a loss of $74 million (for the year) all things considered. Put another way: with a quarterly gain of $89 million against an annual loss of $74 million, one more quarter like this would forgive the whole year.
This is a hefty turn-around from their billion dollar operating loss of last year.
This gain was led by Graphics and Visual Solutions. While Computing Solutions revenue has declined, the graphics team has steadily increased in both revenue and profits. Graphics and Visual Solutions is in charge of graphics processors as well as revenue from the game console manufacturers. Even so, the processor division is floating just below profitability.
Probably the best news for AMD is that they expect each of the next four quarters to be profitable. Hopefully this means that there are no foreseen hurdles in the middle of their marathon.
Subject: Editorial, General Tech, Graphics Cards | January 22, 2014 - 02:12 AM | Scott Michaud
Tagged: linux, intel hd graphics, haswell
Looking through this post by Phoronix, it would seem that Intel had a significant regression in performance on Ubuntu 14.04 with the Linux 3.13 kernel. In some tests, HD 4600 only achieves about half of the performance recorded on the HD 4000. I have not been following Linux iGPU drivers and it is probably a bit late to do any form of in-depth analysis... but yolo. I think the article actually made a pretty big mistake and came to the exact wrong conclusion.
Let's do this!
According to the article, in Xonotic v0.7, Ivy Bridge's Intel HD 4000 scores 176.23 FPS at 1080p on low quality settings. When you compare this to Haswell's HD 4600 and its 124.45 FPS result, this seems bad. However, even though they claim this as a performance regression, they never actually post earlier (and supposedly faster) benchmarks.
So I dug one up.
Back in October, the same test was performed with the same hardware. The Intel HD 4600 was not significantly faster back then; it was actually a bit slower, with a score of 123.84 FPS. The Intel HD 4000 managed 102.68 FPS. Haswell did not regress between that time and Ubuntu 14.04 on Linux 3.13; rather, Ivy Bridge received a 71.63% increase over the same period.
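The percentages are easy to verify from the quoted scores; a quick check with the numbers above:

```python
# Checking the percentages against the Phoronix numbers quoted above.
hd4000_old, hd4000_new = 102.68, 176.23   # Ivy Bridge HD 4000, Xonotic 0.7
hd4600_old, hd4600_new = 123.84, 124.45   # Haswell HD 4600

ivy_gain = (hd4000_new / hd4000_old - 1) * 100
haswell_gain = (hd4600_new / hd4600_old - 1) * 100

print(f"HD 4000: {ivy_gain:+.2f}%")      # +71.63% -- a jump, not a regression
print(f"HD 4600: {haswell_gain:+.2f}%")  # +0.49%  -- essentially flat
```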
Of course, there could have been a performance increase between October and now and that recently regressed for Haswell... but I could not find those benchmarks. All I can see is that Haswell has been quite steady since October. Either way, that is a significant performance increase on Ivy Bridge since that snapshot in time, even if Haswell had a rise-and-fall that I was unaware of.
Subject: Graphics Cards | January 21, 2014 - 03:49 PM | Ryan Shrout
Tagged: rumor, nvidia, kepler, gtx titan black, gtx titan, gtx 790
How about some fresh graphics card rumors for your Tuesday afternoon? The folks at VideoCardz.com have collected some information about two potential NVIDIA GeForce cards that are going to hit your pocketbook hard. If the mid-range GPU market was crowded, wait until you see the changes NVIDIA might have for you soon on the high-end.
First up is the NVIDIA GeForce GTX TITAN Black Edition, a card that will actually have the same specifications as the GTX 780 Ti but with full-performance double precision floating point and a move from 3GB to 6GB of memory. The all-black version of the GeForce GTX 700-series cooler is particularly awesome looking.
Image from VideoCardz.com
The new TITAN would sport the same GPU as GTX 780 Ti, only TITAN BLACK would have higher double precision computing performance, thus more FP64 CUDA cores. The GTX TITAN Black Edition is also said to feature 6GB memory buffer. This is twice as much as GTX 780 Ti, and it pretty much confirms we won’t be seeing any 6GB Ti’s.
The rest is pretty much well known, TITAN BLACK has 2880 CUDA cores, 240 TMUs and 48 ROPs.
VideoCardz.com says this will come in at $999. If true, this is a pure HPC play as the GTX 780 Ti would still offer the same gaming performance for enthusiasts.
Second, there looks to be an upcoming dual-GPU graphics card using a pair of GK110 GPUs that will be called the GeForce GTX 790. The specifications that VideoCardz.com says they have indicate that each GPU will have 2496 enabled CUDA cores and a smaller 320-bit memory interface with 5GB designated for each GPU. Cutting back on the memory interface, shader counts, and even clock speeds would allow NVIDIA to manage power consumption at the targeted 300 watt level.
Image from VideoCardz.com
Head over to VideoCardz.com for more information about these rumors but if all goes as they expect, you'll hear about these products quite a bit more in February and March.
What do you think? Are these new $1000 graphics cards something you are looking forward to?
Subject: General Tech, Graphics Cards | January 20, 2014 - 04:19 AM | Scott Michaud
Tagged: maxwell, nvidia
Well this is somewhat unexpected (and possibly wrong). Maxwell, NVIDIA's new architecture to replace Kepler, is said to appear in February in the form of a GeForce GTX 750 Ti. The rumors, which sound iffy to me, claim that this core will be produced at TSMC on a 28nm fabrication process and later transition to their 20nm lines.
As if the 700-series family tree was not diverse enough.
2013 may have been much closer than expected.
Swedish site Sweclockers has been contacted by "sources" who claim that NVIDIA has already alerted partners to prepare for a graphics card launch. Very little information is given beyond that. They do not even have access to a suggested GM1## architecture code. They just claim that partners should expect a new video card on the 18th of February (exactly what type of launch is also unclear).
This also raises questions about why the mid-range card will come before the high-end. If the 28nm rumor is true, it could just be that NVIDIA did not want to wait around until TSMC could fabricate their high-end part if they already had an architecture version that could be produced now. It could be as simple as that.
The GeForce GTX 750 Ti is rumored to arrive in February to replace the GTX 650 Ti Boost.
Subject: Editorial, General Tech, Graphics Cards, Processors, Memory, Systems | January 20, 2014 - 02:40 AM | Scott Michaud
Tagged: corsair, overclocking
I rarely overclock anything, and for three main reasons. The first is that I have had an unreasonably bad time with computer parts failing on their own, and I did not want to tempt fate. The second is that I focused on optimizing the operating system and its running services, which was mostly important during the Windows 98, Windows XP, and Windows Vista eras. The third is that I did not find overclocking valuable enough for the performance you gained.
A game that is too hefty to run is probably not an overclock away from working.
Thankfully this never took off...
Today, overclocking is easier and safer than ever with parts that basically do it automatically and back off, on their own, if thermals are too aggressive. Several components are also much less locked down than they have been. (Has anyone, to this day, hacked the locked Barton cores?) It should not be too hard to find a SKU which encourages the enthusiast to tweak some knobs.
But how much of an increase will you see? Corsair has been blogging about using their components (along with an Intel processor, Gigabyte motherboard, and eVGA graphics card because they obviously do not make those) to overclock. The cool part is they break down performance gains in terms of raising the frequencies for just the CPU, just the GPU, just the RAM, or all of the above together. This breakdown shows how each of the three categories contribute to the whole. While none of the overclocks are dramatic, Corsair is probably proud of the 5% jump in Cinebench OpenGL performance just by overclocking the RAM from 1600 MHz to 1866 MHz without touching the CPU or GPU.
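For scale, the RAM bump Corsair highlights works out to roughly a 17% memory clock increase buying about a 5% Cinebench OpenGL gain, which is easy to confirm:

```python
# The memory overclock Corsair blogged about: DDR3-1600 -> DDR3-1866.
ram_gain = (1866 / 1600 - 1) * 100
print(f"RAM clock: +{ram_gain:.1f}%")  # +16.6%, for a ~5% Cinebench OpenGL gain
```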
It is definitely worth a look.
A Refreshing Change
Refreshes are bad, right? I guess that depends on who you talk to. In the case of AMD, it is not a bad thing. For people who live for cutting edge technology in the 3D graphics world, it is not pretty. Unfortunately for those people, reality has reared its ugly head. Process technology is slowing down, but product cycles keep moving along at a healthy pace. This essentially necessitates minor refreshes for both AMD and NVIDIA when it comes to their product stack. NVIDIA has taken the Kepler architecture to the latest GTX 700 series of cards. AMD has done the same thing with the GCN architecture, but has radically changed the nomenclature of the products.
Gone are the days of the Radeon HD 7000 series. Instead, AMD has renamed its GCN-based product stack as the Rx 2xx series. The cards we are reviewing here are the R9 280X and the R9 270X, formerly known as the HD 7970 and HD 7870 respectively. They differ slightly in clock speeds from the previous versions, but the differences are fairly minimal. What has changed are the prices: the R9 280X retails at $299 while the R9 270X comes in at $199.
Asus has taken these cards and applied its latest DirectCU II technology to them. These improvements relate to design, component choices, and cooling, and are all significant upgrades over the reference designs, especially when it comes to cooling. It is good to see such a progression in design, but it is not entirely surprising given that the first HD 7000 series cards debuted in January 2012.
Subject: Graphics Cards | January 10, 2014 - 02:19 AM | Tim Verry
Tagged: water cooling, VisionTek, r9 290, liquid cooling, CES 2014, CES, amd
VisionTek unveiled a new custom liquid cooled graphics card based on AMD's R9 290 GPU. The CryoVenom R9 290 900675 card uses a custom engineered full cover EK water block that allows VisionTek to wring the full potential out of AMD's Hawaii GPU by overclocking it 24% over stock clockspeeds while running much cooler than the fan cooled reference cards.
As a refresher, the AMD R9 290 GPU at the heart of the new graphics card is based on AMD's latest Hawaii architecture and features 2,560 shaders, 160 texture units, and 64 ROPs. The GPU interfaces with 4GB of GDDR5 memory on a 512-bit bus. The reference R9 290 GPUs have a GPU clockspeed of 947 MHz and memory clockspeed of 1250 MHz (note the clockspeed problems of reference cards due to the coolers used).
The VisionTek card ditches the fan-based HSF in favor of a full cover waterblock that cools the GPU, memory, and VRMs. It has a nickel-plated copper base with an acrylic top. Water is channeled through a micro-fin array designed to cool the card without putting strain on low pressure pumps. A black anodized aluminum backplate adds support and additional passive VRM cooling to the graphics card. The CryoVenom maintains the two DL-DVI, one HDMI, and one DisplayPort video output connections of the reference cards, however.
Going with a liquid cooler has allowed VisionTek to ratchet up the clockspeeds to an impressive 1,175 MHz for the GPU and 1,450 MHz for the memory. That is a respectable 24% and 16% increase over stock, respectively, and is estimated to offer up to 38% better overall performance at those overclocked speeds. Perhaps even more impressive than the overclocks themselves is that VisionTek claims to be able to keep the card just under 52°C under load, which is a significant improvement over stock!
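Those percentages line up with the stock and overclocked clockspeeds quoted earlier; a quick sanity check:

```python
# Sanity-checking VisionTek's quoted overclock percentages for the CryoVenom R9 290.
gpu_stock, gpu_oc = 947, 1175     # MHz, reference vs CryoVenom GPU clock
mem_stock, mem_oc = 1250, 1450    # MHz, reference vs CryoVenom memory clock

gpu_gain = (gpu_oc / gpu_stock - 1) * 100
mem_gain = (mem_oc / mem_stock - 1) * 100
print(f"GPU: +{gpu_gain:.0f}%, memory: +{mem_gain:.0f}%")  # GPU: +24%, memory: +16%
```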
According to VisionTek, each CryoVenom R9 290 graphics card is custom built and put through a variety of burn-in tests to ensure that it can operate at the rated overclocks and is free of water leaks when attached to a loop.
The liquid cooled cards have an MSRP of $550 and will be available shortly (the cards are currently out of stock on the VisionTek site). Here's hoping that VisionTek is able to keep the cards at MSRP, because even at a $150 premium over the MSRP of reference cards it would still be a good deal at a time when reference cards are selling well over MSRP.
Follow all of our coverage of the show at http://pcper.com/ces!
Subject: Graphics Cards, Shows and Expos | January 9, 2014 - 06:05 PM | Ryan Shrout
Tagged: CES, CES 2014, gigabyte, R9 290X, gtx 780 ti, windforce
While the world still waits for stock of the custom cooled R9 290X and R9 290 cards from AMD's partners to show up in stores, Gigabyte was showcasing its WindForce models on the floor at CES 2014.
The Gigabyte GV-R929XOC-4GD is an R9 290X graphics card that includes the company's custom-designed WindForce triple-fan cooler. The cooler is rated for 450 watts of dissipation, but hopefully you'll never actually be drawing that from this single-GPU graphics card. The core clock on this model is slightly overclocked, going from the stock 1000 MHz to 1040 MHz. Hopefully we'll have a review sample soon so we can verify that it maintains that overclocked speed throughout gaming workloads.
Using the very same cooler is the GV-N78TGHZ-3GD, based on the GeForce GTX 780 Ti GPU. In fact, without my telling you which card was which, you'd likely have no way to tell them apart without looking at the PCB more closely. Gigabyte will be setting the base clock on this model at 1085 MHz and the Boost clock at 1150 MHz.
The cards will also include a back plate on the rear of the PCB to help protect the components and ICs while also strengthening the board during shipping and general use. Gigabyte says these cards will carry an MSRP only $20-50 more than the reference cards, so look for them this month!