Subject: Graphics Cards | October 22, 2012 - 12:00 AM | Ryan Shrout
Tagged: never settle, HD 7970, hd 7950, hd 7870, hd 7850, hd 7770, bundle, amd
AMD has a couple of surprises for gamers today, both under the "Never Settle" branding. Later this morning you will see an article that looks at a new driver revision - the 12.11 beta - that will be published this week (it may be online already). Promising performance increases of 20% and more, it should make for an interesting discussion.
Another big push for AMD going into the holiday season will be the Never Settle game bundle: a collection of games included with a graphics card purchase unlike anything you have ever seen before. And we aren't talking about scrub games here; with the HD 7900 series of cards you'll see Far Cry 3, Hitman: Absolution, Sleeping Dogs and Medal of Honor: Warfighter.
Starting today, if you buy an AMD Radeon HD 7900 series graphics card, including the HD 7990, HD 7970 and HD 7950, you will get three full games absolutely free: Far Cry 3, Hitman: Absolution and Sleeping Dogs, to be exact. You will also get a 20% discount on a copy of the new Medal of Honor: Warfighter. And because I already asked, AMD assures us that this is the ONLY discount on Medal of Honor that will be available this year.
Buyers on tighter budgets aren't going to be left out either: if you pick up an HD 7800 series card or an HD 7770 GHz Edition (not a 7750), you'll get a free copy of Far Cry 3 as well as the 20% off offer on Medal of Honor.
And just to mix things up, if you buy a PAIR of Radeon HD 7800s or a pair of HD 7770 GHz Editions AMD will add in a free copy of Hitman.
I realize that not every gamer is interested in every game that is released, but the value of the Never Settle bundle is really unmatched by anything I have seen before. The package is valued at $170, so purchasing a Radeon HD 7950 3GB for $299 in theory brings the total out-of-pocket price of the GPU down to $129!
As I have mentioned previously, bundles are not a cure-all for performance issues, but they can definitely swing a buyer's decision when the other factors are close. I think AMD will have a HUGE advantage going into the holiday buying season even though NVIDIA has the Assassin's Creed 3 bundle (with the GTX 650 Ti) and the Borderlands 2 bundle (with the GTX 660 Ti and above).
See what happens when you have healthy competition in the market? Gamers always come out ahead!
Subject: Editorial, Graphics Cards | October 20, 2012 - 12:33 PM | Ryan Shrout
Tagged: hitman, amd, extravalanza, hitman: absolution, video
We are at the first AMD ExtravaLANza today getting some hands-on time with cool new hardware as well as new games like Far Cry 3, Tomb Raider and Hitman: Absolution. We attended a session with Hitman developer IO Interactive where some interesting information about the game's DX11 features was shared. I recorded a video of the presentation for those interested in seeing it in its entirety.
The brand new Glacier2 engine has some impressive new features including:
- DX11 hardware tessellation on character models
- A global illumination engine using light propagation volumes
- AA options including FXAA and MSAA 2x-8x
- Eyefinity and HD3D
There is more, but you can hear it all in the video above. IO Interactive wanted to assure PC gamers that they are developing the game to be a first class PC title with higher quality imaging, controls and texture detail; this doesn't look to be a standard console port.
Subject: Graphics Cards | October 10, 2012 - 09:33 PM | Jeremy Hellstrom
Tagged: nvidia, driver, win8
It is not just that the latest GeForce drivers will work on Windows 8; this is the third WHQL certified release for it, so you can be pretty much guaranteed the same compatibility and control over your GPU after making the switch as you have with Win7 and previous versions. The GeForce 306.97 drivers support both Win7 and Win8 and offer the list of fixes and improvements you can see below. Owners of Doom 3: BFG Edition who want to play in NVIDIA 3D should definitely upgrade, as NVIDIA specifically mentions the quality improvements you will enjoy upon upgrading.
Adds support for the new GeForce GTX 650 Ti GPU.
Updates SLI profile for Tom Clancy's Ghost Recon Future Soldier.
Updates 3D Vision profiles for the following PC games:
- Check vs. Mate - Rated Excellent
- Counter-Strike: Global Offensive - Rated Good
- Doom 3: BFG Edition - Rated Excellent
- English Country Tune - Rated Good
- F1 2012 - Rated Good
- Iron Brigade - Rated Fair
- Jagged Alliance: Crossfire - Rated Good
- Orcs Must Die! 2 - Rated Good
- Planetside 2 - Rated Not Recommended
- Prototype 2 - Rated Poor
- Sleeping Dogs - Rated Good
- Spec Ops: The Line - Rated Good
- Tiny Troopers - Rated Fair
- Torchlight 2 - Rated Good
- Transformers: Fall of Cybertron - Rated Fair
Subject: Graphics Cards | October 7, 2012 - 10:37 PM | Tim Verry
Tagged: nvidia, kepler, gtx 650ti, gpu, gk106-220
The NVIDIA GeForce GTX 650 Ti is rumored to launch soon, and so far specifications have leaked on the reference design as well as two custom cards from ASUS and Galaxy. Zotac is the latest manufacturer to have its GTX 650 Ti lineup leaked, and the company is bringing as many as three graphics cards to the GK106-220 Kepler family. In all, Zotac is rumored to be launching one 1GB GTX 650 Ti and two 2GB cards, all with varied levels of factory overclocks. Video outputs on all three cards include two DVI and two HDMI connectors.
The Zotac GTX 650 Ti 1GB stays close to the reference design, but bumps up the GPU core clockspeed to 941 MHz. It also includes 1 GB of GDDR5 memory on a 128-bit interface clocked at 1350 MHz (5400 MHz effective), which matches the reference design. This card is said to be priced at $160 and features a custom cooler from Zotac that is similar to (but smaller than) the cooler used on the company's GTX 660 Ti, which we recently reviewed.
The Zotac GTX 650 Ti 2GB is, as the name suggests, a GTX 650 Ti graphics card with 2GB of GDDR5 memory. It features Zotac's custom cooler, and a single PCI-E 6-pin power connector. The GPU clockspeed is 941 MHz and the memory clockspeed is 1350 MHz. The extra 1GB of graphics memory is nice, but it is still on a 128-bit interface so don't expect too much of a performance boost. MSRP of this card is rumored to be $180.
Finally, the GTX 650 Ti 2GB AMP! Edition is Zotac's highest-end GTX 650 Ti graphics card. It comes with the GK106-220 Kepler GPU and 2GB of GDDR5 memory on a 128-bit bus. Powered by a single 6-pin PEG connector, the factory overclocked graphics card is clocked at 1033 MHz for the GPU and 1550 MHz (6200 MHz effective) for the memory. The Zotac GTX 650 Ti AMP! Edition comes with the company's custom cooler and is the first card to feature factory overclocked memory. The rumored price of this card is $190. Unfortunately, that puts it fairly close to the price of a reference GTX 660, which may make this card a hard sell. The factory overclocks are impressive, but saving up the extra $40 needed to get a GTX 660 is likely a better idea, because it will still offer better performance thanks to its additional CUDA cores and wider memory bus.
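For readers wondering where the "effective" memory figures above come from, here is a short sketch (the helper names are mine, for illustration): GDDR5 transfers four bits per pin per memory clock, so the effective rate is four times the listed clock, and peak bandwidth follows from multiplying by the bus width in bytes.

```python
# Sketch of how the "effective" GDDR5 speeds and peak bandwidth figures
# quoted in this article are derived. GDDR5 is quad-pumped: it moves four
# bits per pin per memory clock cycle.

def effective_mts(mem_clock_mhz):
    """Effective transfer rate in MT/s = 4 x memory clock in MHz."""
    return 4 * mem_clock_mhz

def bandwidth_gbs(mem_clock_mhz, bus_width_bits):
    """Peak bandwidth in GB/s = effective rate x bus width in bytes."""
    return effective_mts(mem_clock_mhz) * (bus_width_bits // 8) / 1000

# Zotac GTX 650 Ti 1GB / 2GB: 1350 MHz on a 128-bit bus
print(effective_mts(1350))          # 5400, the "5400 MHz effective" figure
print(bandwidth_gbs(1350, 128))     # 86.4 GB/s peak

# Zotac GTX 650 Ti 2GB AMP! Edition: 1550 MHz -> "6200 MHz effective"
print(effective_mts(1550))          # 6200
print(bandwidth_gbs(1550, 128))     # 99.2 GB/s peak
```

The 128-bit bus is why the extra memory capacity on the 2GB cards does little for raw throughput; only the AMP! Edition's higher memory clock actually raises the bandwidth ceiling.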
The following chart compares the three Zotac cards to the leaked reference specifications.
| ||Reference Specifications||Zotac GTX 650 Ti 1GB||Zotac GTX 650 Ti 2GB||Zotac GTX 650 Ti 2GB AMP! Edition|
|GPU Clockspeed||925 MHz||941 MHz||941 MHz||1033 MHz|
|Memory Clockspeed||1350 MHz||1350 MHz||1350 MHz||1550 MHz|
|GDDR5 Amount||1 GB||1 GB||2 GB||2 GB|
Comparison of several GTX 650 Ti graphics cards versus the rumored reference specifications.
Further, this chart compares the leaked specifications of the top end cards from each manufacturer (at least, the ones we know of so far) to the highest-end Zotac GPU: the 2GB AMP! Edition.
| ||Reference Specifications||ASUS GTX 650 Ti TOP||Galaxy GTX 650 Ti GC 1GB||Gigabyte GTX 650 Ti OC||Zotac GTX 650 Ti 2GB AMP! Edition||POV GTX 650 Ti 1GB Ultra Charged|
|GPU Clockspeed||925 MHz||1033 MHz||966 MHz||1032 MHz||1033 MHz||1058 MHz|
|Memory Clockspeed||1350 MHz||1350 MHz||1350 MHz||1350 MHz||1550 MHz||1350 MHz|
|GDDR5 Amount||1 GB||1 GB||1 GB||2 GB||2 GB||1 GB|
|Video Outputs||2 x DVI, 1 x HDMI||2 x DVI, 1 x HDMI, 1 x VGA||2 x DVI, 1 x HDMI||2 x DVI, 1 x HDMI, 1 x VGA||2 x DVI, 2 x HDMI||1 x DVI, 1 x HDMI, 1 x VGA|
Inno3D is also rumored to have a GTX 650 Ti graphics card coming out, but we don't know its clockspeeds or price; only that it has two DVI and one HDMI connector, a single PEG power connector, and a custom cooler.
Overall, the Zotac card measures up well, with pricing being the only major disadvantage. The 2GB of memory, factory overclocks, and two HDMI ports are welcome additions, however. Interestingly, the Zotac card is not the highest clocked graphics card overall, but it is the only one that features overclocked memory. It is unclear to me why manufacturers of NVIDIA cards are so hesitant to push the memory clockspeeds (or if they are even allowed to), but Zotac seems to prove that it is possible to do so.
Also worth pointing out is the rumored pricing, as some of these custom graphics cards are pushing $200 (especially the ASUS card when converted to USD... I'm sure that has to be in error...), while reference GTX 660 cards with the full GK106 Kepler core are only $230. It will be interesting to see if these rumored prices turn out to be true, and how well Zotac's factory overclocked 650 Ti models sell.
Subject: Graphics Cards | October 7, 2012 - 04:17 PM | Jeremy Hellstrom
Tagged: gigabyte, GTX 660 Windforce OC, factory overclocked, gtx 660
Gigabyte's Windforce cooler has become popular thanks to its efficient performance and low noise, which makes it perfect for a card like the GTX 660 that you would expect to find in a small enclosure. Gigabyte gave a little more power to this non-Ti GTX 660, however, with a base clock of 1033MHz, a boost clock of 1098MHz, and GDDR5 at 6GHz, speeds which Guru of 3D managed to increase further when they overclocked the card, ending up with many benchmarks equalling or surpassing a GTX 660 Ti. At $230 the Gigabyte GeForce GTX 660 Windforce OC is not a bad choice for a system that needs to be quiet and won't be used to play the newest games at high settings.
"We review one more Gigabyte GeForce GTX 660 it is the Windforce OC model The Gigabyte GeForce GTX 660 Windforce OC comes with a dual-slot Windforce cooler that is incredibly silent yet manages to keep the card at very cool temps, and it's even factory overclocked for you. Have a peek as this card should be somewhere at the top of you list. Combined with Ulra Durable component selection you may expect something long-lasting and well performing."
Here are some more Graphics Card articles from around the web:
- ASUS GTX 560 Ti 2GB DirectCUII @ Bjorn3D
- Zotac GeForce GT 640 Zone Edition Video Card Review @ Legit Reviews
- ARCTIC Accelero Twin Turbo 690 Cooler @ Kitguru
- i3DSpeed, September 2012 @ iXBTlabs
- Workstation Graphics Card Comparison Guide @ TechARP
- Prolimatech MK-26 Video Card Cooler @ TweakTown
- Prolimatech MK-26 VGA Cooler Review @ Hardware Secrets
- Devilishly Effective: Deepcool Dracula Graphics Card Cooler @ X-bit Labs
- Arctic Accelero Hybrid 7970 Liquid Cooling System @ Guru of 3D
- Sapphire Radeon HD 7950 3GB Vapor-X Video Card Review @ Legit Reviews
- HIS 7750 iCooler 1GB GDDR5 PCI-E Video Card Review @ Madshrimps
- XFX Radeon HD 7850 1GB Core Edition Video Card Review @ Legit Reviews
- Sapphire Radeon HD 7770 fleX GHz Edition 1GB Graphics Card Review @ eTeknix
- Sapphire Flex HD 7770 GHz Edition @ Bjorn3D
- Sapphire Radeon HD 7770 Flex Edition 1GB @ Tweaktown
- Sapphire Radeon HD 7770 GHz Edition FleX Review @ Neoseeker
- PowerColor HD 7870 PCS+ 2 GB @ techPowerUp
- Sapphire HD 7970 6GB Vapor-X GHZ Edition Review @ OCC
- Arctic Accelero Hybrid 7970 Cooler Review @ OCC
- SAPPHIRE Vapor-X Radeon HD 7970 GHz Edition @ [H]ard|OCP
Subject: Graphics Cards | October 5, 2012 - 06:40 PM | Tim Verry
Tagged: powercolor, gpu, dual gpu, amd, 7990
Towards the end of August, a new dual GPU graphics card from PowerColor was fully detailed. The dual GPU Devil 13 graphics card combined two AMD Radeon HD 7970 GPUs onto a single PCB with factory overclocks and a custom cooler. The 6GB (3GB per GPU) HD 7990 6GB Devil 13 is an awesome card, but comes with a hefty $999 price tag.
This month, PowerColor has taken the wraps off of a (slightly) cheaper 7990 graphics card that is not clocked as high but uses a similar custom cooler as the Devil 13. It will allegedly be priced at around $900 USD.
The new PowerColor HD7990 (sans Devil 13 branding) features two HD7970 Graphics Core Next (GCN) based GPUs clocked at 900 MHz by default or 925 MHz when using the factory overclocked BIOS. (You can switch between the two modes by using the Dual BIOS switch.) As a point of comparison, standard Radeon 7970s have a reference clockspeed of 925 MHz, and PowerColor’s own HD 7990 Devil 13 is clocked at either 925 MHz or 1 GHz depending on BIOS switch position. PowerColor is likely binning 7970 GPUs that don’t quite make the cut as Devil 13 models for this new dual GPU 7990 graphics card with lower clockspeeds.
Fortunately, the memory clockspeed has not been downclocked on the new HD 7990. Each GPU has 3GB of GDDR5 memory on a 384-bit bus, and the memory is clocked at 1375 MHz.
Also good news is that the standard PowerColor 7990 appears to use the same custom cooler as the Devil 13 – but with an all-black design rather than the red and black color scheme. That includes a triple slot design, numerous heatpipes and fins, and two 92mm fans on either side of an 80mm fan.
The graphics card measures 315mm x 140mm x 60mm and features two DVI, one HDMI, and two mini-DisplayPort video outputs. It has the same 850W minimum system power requirement as the Devil 13, and is powered by three 8-pin PCI-E power connectors in addition to power from the PCI-E 3.0 x16 slot.
Although this is an interesting card that is sure to attract enthusiasts, it lends credence to the idea that AMD is not going to release its own reference HD 7990 after all. At this point, so long as your case and motherboard permit, it would likely be best to go for two individual ~$400 Radeon 7970 GHz Edition cards in a CrossFire configuration. PowerColor does seem to have you covered if that’s not an option for you, though there is no word on exactly when this graphics card will be available or what the final pricing will be.
Read more about AMD’s Graphics Core Next architecture at PC Perspective.
Subject: Graphics Cards | October 4, 2012 - 04:42 PM | Tim Verry
Tagged: nvidia, kepler, gtx 680, gtx 670, gtx 660 Ti, gigabyte, factory overclocked
Gigabyte is launching three new factory overclocked graphics cards featuring a Kepler GPU, custom PCB, and custom cooler. The factory overclocks are notable, but will cost you. Specifically, the company is producing versions of the GTX 660 Ti, GTX 670, and GTX 680.
The Gigabyte GV-N680OC-4GD takes the GTX 680 GPU, places it on a custom PCB, and pairs it with 4GB of GDDR5 memory. It features two 6-pin PCI-E power connectors, and Gigabyte’s Windforce X3 450W custom cooler using a triangular fin design that allegedly increases cooling potential. While the GDDR5 memory clockspeeds have not been increased over the reference clocks, the GPU core and boost clockspeeds have been pushed to 1071 MHz and 1137 MHz respectively. The following chart shows the differences in clockspeed and memory over the reference design.
| ||Reference GTX 680||Gigabyte N680OC-4GD|
|GPU Core||1006 MHz||1071 MHz|
|GPU Boost||1058 MHz||1137 MHz|
|GDDR5 Amount||2 GB||4 GB|
|GDDR5 Speed||6 Gbps||6 Gbps|
The GTX 680 is not the only card to get a custom makeover by Gigabyte, however. The GV-N670OC-4GD is a custom GTX 670. With this card, Gigabyte has set the base clockspeed at 980 MHz – the boost clockspeed of reference cards – and the boost clockspeed at 1058 MHz. Gigabyte has also doubled down on the GDDR5 memory by packing 4GB onto the custom PCB. The memory clockspeed remains the same 6 Gbps as reference cards, however.
This card uses the same Windforce X3 cooler as the custom GTX 680, and as a result has a triple slot design that looks identical to the N680OC-4GD. If you look just above the PCI-E connector, though, you can tell them apart by the product name.
| ||Reference GTX 670||Gigabyte N670OC-4GD|
|GPU Core||915 MHz||980 MHz|
|GPU Boost||980 MHz||1058 MHz|
|GDDR5 Amount||2 GB||4 GB|
|GDDR5 Speed||6 Gbps||6 Gbps|
Finally, we have the GV-N66TOC-3GD, which overclocks the GTX 660 Ti GPU to the max. Factory clockspeeds are set at 1032 MHz base and 1111 MHz boost. Memory also sees a small bump from the 2GB reference to 3GB. On the other hand, the memory is not overclocked and remains at the reference 6 Gbps clockspeed. This card also has a triple fan Windforce cooler; however, this version is not the triple slot design found on the GTX 670 and GTX 680 SKUs, only dual slot.
| ||Reference GTX 660 Ti||Gigabyte N66TOC-3GD|
|GPU Core||915 MHz||1032 MHz|
|GPU Boost||980 MHz||1111 MHz|
|GDDR5 Amount||2 GB||3 GB|
|GDDR5 Speed||6 Gbps||6 Gbps|
All three of the Gigabyte GPUs feature two DVI, one full-size HDMI, and one full-size DisplayPort connector for video outputs.
All three factory overclocked graphics cards feature respectable GPU overclocks, and it appears that Gigabyte has provided ample cooling for each GPU. The triple slot, triple fan coolers on the N670OC-4GD and N680OC-4GD in particular seem to offer headroom above even what Gigabyte has clocked these cards at out of the box. Curiously though, Gigabyte is continuing the trend of not touching the memory clockspeed of Kepler cards. It may be that the RAM chips are already at their max on the reference design, or there could be some behind-the-scenes talk with NVIDIA not wanting Add-In Board partners to touch the memory. Unfortunately, all I have at this point is speculation, but it is a rather curious omission on such high end cards. That point becomes clearer when price is taken into consideration. Videocardz claims to have the pricing information for the three video cards, and the custom cards are going to cost you a large premium over reference cards. The rumored prices can be found in the charts above compared against the reference pricing, but the basic rundown is that the GV-N66TOC-3GD will cost $415, the GV-N670OC-4GD will cost $550, and the GV-N680OC-4GD will cost (an astounding) $800.
I’m hoping that the rumored prices are in error and will be adjusted once the cards are available. These are neat cards that look to have plenty of cooling, but I’m still trying to figure out just what these cards have to offer to justify the huge jump over reference pricing. And, no, the superfluous gold plated HDMI connectors do not count. [For example, the 4GB Galaxy GTX 670 we recently reviewed was only $70 over reference while the Gigabyte card is rumored to be $150!]
The Gigabyte N66TOC-3GD factory overclocked GPU.
You can find links to the Gigabyte product pages in the charts above. If you have not already, please check out our GTX 660 Ti, GTX 670, and GTX 680 graphics card reviews for the full scoop on the various Kepler iterations. And if you are considering the Gigabyte N680OC-4GD, you should probably check out the dual GPU GTX 690 review as well (heh).
Subject: Graphics Cards | September 26, 2012 - 01:57 PM | Jeremy Hellstrom
Tagged: amd, catalyst, catalyst 12.9, beta, driver
You can grab the latest Catalyst beta driver, the 12.9 beta, along with Catalyst Application Profiles for CrossFire today. For the beta driver you must pick your card and OS version as normal; from there, scroll below the current WHQL 12.8 driver and you will see the beta. Why would you want to grab the driver? It could make your panda bears perform better, and for those on laptops, the introduction of AMD Enduro Technology will allow you to set separate power profiles for every installed application, determining whether it uses hybrid graphics or only the APU.
Feature Highlights of AMD Catalyst 12.9 Beta: AMD Catalyst Mobility support for AMD Enduro Technology
AMD Catalyst Mobility now includes support for AMD Enduro Technology.
AMD Enduro Technology for Notebooks delivers:
- Unbeatable battery life
- GPU accelerated performance for gaming, video, and compute apps
- A Seamless and automatic experience
New Enduro Technology features found in Catalyst 12.9 Beta:
- Re-designed Catalyst Control Center user interface
- View all profiled applications
- View recently run applications
- Profile applications based on power source
- Expert mode control and customization
- Performance centric AC
- Battery centric DC
Performance highlights of AMD Catalyst 12.9 Beta (versus AMD Catalyst 12.8)
- Up to 10% in Lost Planet 2 in single GPU configurations
AMD’s latest Catalyst Application Profile: AMD Catalyst 12.9 CAP1 (to be used with AMD Catalyst 12.9 Beta)
Find the latest available AMD Catalyst CAP here : http://sites.amd.com/us/game/downloads/Pages/crossfirex-app-profiles.aspx
- World of Warcraft - Mists of Pandaria (DX11, DX9): Fixes texture flickering observed when enabling high graphics settings with CrossFire enabled
- World of Warcraft - Mists of Pandaria (DX9): Resolves corruption when enabling Anti-Aliasing through the Catalyst Control Center
- World of Warcraft - (DX9): Resolves performance issues observed with the 64-bit variant of the client
- Tribes Ascend: Improves CrossFire performance
- F1 2012: Improves CrossFire performance, resolves texture flickering in reflections
Resolved issue highlights of the AMD Catalyst 12.9 Beta driver
- Tri and Quad CrossFire + Eyefinity configurations – Users will no longer see lower than expected performance in certain DirectX 10 and DirectX 11 applications
- FireFox – corruption is no longer observed on CrossFire configurations
- Enabling Overdrive settings no longer increases clocks in all power states
- AMD Video Converter support is available in AMD Catalyst 12.9 Beta Windows 7 and Windows Vista packages
Feature Highlights of AMD Catalyst 12.9 Beta Linux Driver: New OS Support
This release of AMD Catalyst Linux introduces support for the following new operating systems
- Ubuntu 12.10 early look support
- RHEL 6.3 production support
Subject: Graphics Cards | September 25, 2012 - 02:59 PM | Ryan Shrout
Tagged: radeon, amd, video, pitcairn, hd 7870 ghz edition, hd 7870
There have been quite a few new graphics card releases this year and with the now crowded GPU market, we have gotten many requests to revisit some of the earlier launches to see how they stack up in the latest GPU landscape. One such card is AMD’s Radeon HD 7870 GHz Edition, which has seen some dramatic improvements since its initial release in March.
AMD’s entire lineup of graphics cards based on the Southern Islands architecture were released between the months of January and March of this year, with only a few updates during the summer to combat new releases from NVIDIA. Though they don’t get as much review time anymore, the Tahiti, Pitcairn and Cape Verde GPUs still have a lot to offer gamers and the Radeon HD 7870 GHz Edition is a perfect example of that.
Thanks to recent price cuts, the Sapphire Radeon HD 7870 GHz Edition and all other Pitcairn GPUs can be found for much less than at launch. Starting at $350 in March, base HD 7870s can now be found online for $250, and sometimes less with rebates, making them a great deal for gamers on a budget.
You can check out all of our graphics card reviews right here to see how the market currently stands but there are really very few bad choices anymore.
Subject: Graphics Cards | September 22, 2012 - 04:01 AM | Tim Verry
Tagged: nvidia, MSI GTX660 HAWK, msi, gtx 660
This week has certainly had its share of leaked graphics card news, and the latest information indicates that MSI is working on an enthusiast-level HAWK version of the GTX 660. That card will take the GK106 Kepler chip to the max with the fastest factory overclocks yet.
Last week Nvidia debuted its GTX 660 graphics card, which is currently the lowest-end GPU to use the Kepler GK106 chip. Once the NDA broke, the review of the card went live, and the performance of the reference designs was analyzed.
GK106 features 5 SMX units in 2.5 Graphics Processing Clusters (GPC), which Nvidia has said is the most that the chip will ever have. The GTX 660 version has 960 CUDA cores, 80 texture units, 24 ROPs, and a 192-bit memory bus.
While GK106 will likely not see a version with three complete GPCs, the mid-range Kepler chip still has a bit of performance headroom that can be unleashed with overclocking, and several OEMs are preparing factory overclocked GTX 660 graphics cards with custom coolers.
The latest custom GTX 660 to be leaked is the MSI GTX 660 HAWK edition with out-of-the-box overclocked settings, beefed up power management hardware, and a TwinFrozr IV cooler.
MSI has gone with a custom PCB and cooler to keep the GK106 fed with power and running cool. The PCB has been fitted with a 10-phase VRM, SSC chokes, and IR DirectFETs to provide the power needed to run at overclocked settings. Of course, MSI has included its GPU Reactor hardware – a feature exclusive to its HAWK branded cards that differentiates them from the lower tier Lightning and Power Edition cards. The GPU Reactor is a set of tantalum capacitors that are said to deliver more stable voltage to the Kepler chip.
The graphics card continues to be powered by two 6-pin PCI-E power connectors. MSI has also added a dual BIOS feature to the HAWK card that will run the GPU at GTX 660 reference speeds (980/1033MHz) or at the overclocked profile, depending on physical BIOS switch position.
Clockspeeds are where the MSI GTX 660 HAWK really gets interesting, however. The base clockspeed of 1100MHz is higher than the boost speeds most GTX 660 cards run at, and the 1176MHz boost speed is the fastest we’ve seen yet. In an interesting twist, MSI has not touched the clockspeed of the 2GB of GDDR5 memory, instead leaving it at the reference 6008MHz. It may be that the memory chips simply cannot overclock much beyond the reference clockspeeds, as there are no other factory overclocked GTX 660s that I know of that push the memory clocks beyond reference.
Of course, the other big selling point of this MSI card is the custom cooler – one that Josh seems to like thanks to the addition of “supa pipes!” The Twin Frozr IV is a dual fan cooled aluminum fin array that is connected to the block over the GPU by five heat-pipes. There does not appear to be much information on the HSF beyond that, unfortunately. Judging by past iterations, it should be more than capable of running at the factory overclocked speeds, however.
Display outputs will include two DVI, one DisplayPort, and one HDMI. Pricing and availability are still unknown, but expect it to command a small premium over the standard GTX 660’s $229 price tag.
EXPreview was the source of the photos; however, the webpage seems to be down at the moment. Fortunately, WCCF Tech managed to grab them before the original page was lost, and you can see more photos of the MSI GTX 660 HAWK (SKU: N660GTX HAWK) on that page.
A comparison chart of the various GTX 600 series cards.
Note: GTX 650 is GK107, GTX 660 is GK106, GTX 660Ti and above is GK104.
Read more about Nvidia's Kepler graphics card architecture at PC Perspective!
Subject: Graphics Cards | September 21, 2012 - 02:55 PM | Tim Verry
Tagged: tenerife, Sea Islands, radeon, GCN, amd, 8970
(Updated to add additional information on the 8900 series rumors – mainly on Radeon 8950.)
Earlier this week, we reported on rumors of two upcoming mid-range AMD 8800 series graphics cards based on the Sea Islands architecture. As mentioned previously, Sea Islands is the successor to the Southern Islands architecture used in the 7000 series. It features an improved Graphics Core Next GPU architecture built on TSMC's 28nm process, and thanks to several efficiency tweaks the chip will draw less power and be faster at GPGPU workloads. Graphics cards based on Sea Islands will support DirectX 11, and will be available early next year.
While the 8850 and 8870 are based on the Oland GPU, this newly leaked Radeon HD 8970 will use the "Sea Islands" Tenerife GPU. New information seems to suggest that AMD will actually brand it the Venus XTX for 8970 cards and Venus XT/Pro for 8950 cards, though Oland would remain the chip name for 8800 series cards.
Tenerife offers up some impressive (but realistic) specifications, including 2,560 shaders, 160 texture units, 48 ROPs, and a relatively massive 384-bit memory bus. Also impressive is an alleged transistor count of 5.1 billion, which puts it a great deal above the Radeon 7970's 4.31 billion transistors. This rumored Tenerife/Venus XTX GPU (whichever AMD ends up calling it) will have a 250W TDP and will be used in the 8970 flagship graphics card. Venus XT/Pro will scale back the chip a bit by featuring 2,304 shaders, 144 texture units, and 32 ROPs. No word yet on what the TDP will be.
Both the HD 8970 and HD 8950 are said to support 3GB of GDDR5 memory running at 6GHz on a 384-bit bus, which works out such that the cards have approximately 322 GB/s of bandwidth! Further, the 16 additional ROP units in the Radeon HD 8970 will give it a nice performance boost over the 8950 and 8800 series, especially when running multiple monitors in Eyefinity configurations.
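A quick arithmetic check on that bandwidth figure (the calculation is mine, not part of the rumor): peak memory bandwidth is the effective data rate times the bus width in bytes, and 6 GHz effective on a 384-bit bus actually works out to 288 GB/s, the same as a stock 7970. Reaching the quoted ~322 GB/s would require roughly a 6.7 GHz effective rate, so one of the two rumored numbers appears to be off.

```python
# Sanity check of the rumored HD 8970 memory bandwidth.
# Peak GB/s = effective data rate (GT/s) x bus width in bytes.

def bandwidth_gbs(effective_gtps, bus_width_bits):
    """Peak memory bandwidth in GB/s."""
    return effective_gtps * (bus_width_bits // 8)

# 6 GHz effective on a 384-bit bus:
print(bandwidth_gbs(6.0, 384))   # 288.0 GB/s, not 322 GB/s

# Effective rate needed to actually hit the quoted 322 GB/s:
print(322 / (384 // 8))          # ~6.71 GT/s (i.e. roughly 6.7 GHz effective)
```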
As far as specifications go, we do not yet know the die size of the GPU or what the GPU base (and boost) clockspeeds are, beyond a source indicating that the boost frequency of the 8970 will be above 1050 MHz. According to PC Perspective's GPU packrat reviewer Josh Walrath, the Tenerife GPU will have a much larger die than that of Oland. Because it will feature a sizeable increase in the number of transistors but still be based on a 28nm process, the die size should land somewhere between 380mm^2 and 420mm^2.
To put that in perspective, the 8850/8870 has a die size of 270mm^2, and the current generation predecessor (7950/7970) has a die size of only 365mm^2.
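That range can be roughly sanity-checked with simple transistor-density scaling (a back-of-the-envelope sketch of my own, not an official figure): at Tahiti's density, 5.1 billion transistors would need roughly 430mm^2, so the 380-420mm^2 estimate implies Tenerife packs its transistors slightly more densely on the same 28nm process.

```python
# Back-of-the-envelope die size estimate for Tenerife, assuming the same
# transistor density as Tahiti (Radeon HD 7970). Rough scaling only; the
# actual die size is unknown.

tahiti_transistors = 4.31e9    # Radeon HD 7970
tahiti_die_mm2 = 365
tenerife_transistors = 5.1e9   # rumored Tenerife count

density = tahiti_transistors / tahiti_die_mm2   # transistors per mm^2
estimate = tenerife_transistors / density       # die size at identical density
print(round(estimate))  # ~432 mm^2, just above the 380-420mm^2 estimate
```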
The following chart compares the various rumored Radeon 8000-series graphics cards to their previous generation counterparts.
| ||Radeon HD 7850||Radeon HD 8850||Radeon HD 7870||Radeon HD 8870||Radeon HD 7950||Radeon HD 8950||Radeon HD 7970||Radeon HD 8970|
|Die Size||212mm^2||270mm^2||212mm^2||270mm^2||365mm^2||~400mm^2||365mm^2||~400mm^2|
|Bandwidth||153.6 GB/s||192 GB/s||153.6 GB/s||192 GB/s||240 GB/s||322 GB/s||288 GB/s||322 GB/s|
*Tenerife die size is estimate only, actual die size is still unknown.
The AMD Radeon HD 8970 will be AMD's next generation single-GPU flagship graphics card, and it looks to offer up some respectable hardware. The Radeon HD 8950 should be a decent step up in performance versus the 7950, though it would have been nice to see the 8970's additional ROP units stick around in the 8950. Unfortunately, we do not know what this Tenerife (aka Venus) GPU-based graphics card will be priced at. For now, we will just have to be cautiously optimistic and wait a few months to see how much this card will cost. The wait should not be very long either, if rumors are true: they seem to indicate that the 8970 will enter manufacturing in late 2012 and launch in early (January/February) 2013.
Are you excited for AMD's next-generation flagship?
Subject: Graphics Cards | September 20, 2012 - 04:35 PM | Jeremy Hellstrom
Tagged: overclock, gtx 660, DirectCU II, asus
As promised, [H]ard|OCP has spent some time overclocking the ASUS GTX 660 DirectCU II card and come back with their results. The highest GPU clock they managed was a reported 1170MHz Boost clock in GPU Tweak, which translated to 1215MHz in actual in-game operation. While that was the high speed record, it did not provide the best performance, as the frequency often dipped much lower because of the heat produced; [H]'s sweet spot was actually a 1100MHz Boost clock, a much steadier 1152MHz in-game, though it did still dip occasionally. They also upped the memory, but again, because of the heat produced by the overclock, they could not raise the voltage without negative consequences. Check out the whole review here.
"We put our new ASUS GeForce GTX 660 through the ringer of overclocking and make real world gaming comparisons. If you are thinking the new GTX 660 (GK106) GPU will be a good overclocker like its bigger brother GK104, you may be in for a surprise that puts the new GTX 660 in a new light."
Here are some more Graphics Card articles from around the web:
- ASUS GeForce GTX 660 Ti DirectCU II TOP @ [H]ard|OCP
- GeForce 9800 GT vs GeForce 660 GTX @ Guru of 3D
- Zotac GTX680 AMP Edition @ Bjorn3D
- EVGA GeForce GTX 660 SuperClocked Video Card Review @ Hardware Secrets
- Zotac GeForce GTX 660 with GK106 GPU @ X-bit Labs
- NVIDIA GeForce GTX 660 2GB Review @ Techgage
- Sparkle GTX650 OC Dragon Series @ Kitguru
- GeForce GTX 650 MSI Power edition @ Guru3D
- KFA GeForce GTX 650 EX OC 1 GB @ techPowerUp
- EVGA GeForce GTX 660 SC @ Guru of 3D
- MSI GeForce GTX 650 Power Edition OC 1 GB @ techPowerUp
- NVIDIA Chips Comparison Table @ Hardware Secrets
- NVIDIA FXAA Anti-Aliasing Performance @ Phoronix
- Seven Nvidia GeForce GTX 680 round-up: Super cards @ Hardware.info
- Desktop Graphics Card Comparison Guide @ TechARP
- Arctic Accelero Twin Turbo 6990 VGA Cooler Review @ eTeknix
- Sapphire Radeon HD 7750 1GB Low Profile Review @ Neoseeker
- ARCTIC Accelero Hybrid 7970 @ Hardwareoverclock
- PowerColor Devil 13 HD 7990 Review @ OCC
- Sapphire Radeon HD 7770 Flex Edition Review @ Hi Tech Legion
- XFX Radeon HD 7770 Black Edition Overclocked 1GB Graphics Card Review @ eTeknix
- Sapphire HD7770 GHZ FleX Edition @ Kitguru
- Sapphire Radeon Flex HD 7770 GHz Edition Video Card @ Pro-Clockers
- Sapphire Radeon HD 7950 3GB Vapor-X Review @ OCC
- HD 7990 Review; PowerColor’s Devil 13 @ Hardware Canucks
- MSI HD7850 Power Edition Video Card @ Bjorn3D
Subject: Graphics Cards | September 18, 2012 - 06:34 PM | Tim Verry
Tagged: Sea Islands, oland, hd8870, hd8850, gpu, amd radeon, amd
AMD beat NVIDIA to the punch with its 7000-series “Southern Islands” graphics cards, and if the rumors hold true the company may well accomplish the same feat with its next-generation architecture. Codenamed Sea Islands, the architecture behind AMD’s 8800-series is (allegedly) set to debut around the January 2013 time frame. Featuring DirectX 11, GPGPU and power efficiency improvements, 3.4 billion transistors on a 28nm process, and a rumored sub-$300 price, will the 8850 and 8870 win over enthusiasts?
AMD launched its Southern Islands graphics cards with the Graphics Core Next (GCN) architecture and Pitcairn GPU in March of this year. Since then NVIDIA has moved into the market with the 660 and 660 Ti, and budget gamers have lots of options. However, yet another budget gaming GPU from AMD will be coming in just a few months if certain sources' leaks prove correct. The 8850 and 8870 graphics cards are rumored to launch in January 2013 for under $300 and to offer up some significant performance and efficiency improvements. Both the 8850 and 8870 GPUs are based on the Oland variant of AMD’s Sea Islands architecture. As a point of reference, AMD’s 7850 and 7870 use the Pitcairn version of AMD’s Southern Islands architecture – thus Sea Islands is the overarching architecture and Oland is an actual chip based on it.
Sea Islands is essentially an improved and tweaked Graphics Core Next design. It will continue to utilize TSMC's 28 nm process, but will require less power than the 7000-series while being much faster. While the specifications for the top-end 8900-series are still up in the air, Videocardz is claiming sources in the know have supplied the following numbers for the mid-range 8850 and 8870 Oland cards.
Videocardz put together a table comparing AMD's current and future GPU series.
The GPU die size has reportedly increased to 270mm^2 versus the 7850/7870’s 212mm^2 die. This increase is the result of AMD packing in an additional 600 million transistors for a total of 3.4 billion. 3D Center further breaks the GPU down, stating that the 8870 will feature 1792 shader units, 112 texture mapping units (TMUs), 32 ROPs, and a 256-bit memory interface. The 8850 graphics card will scale the Oland GPU down a bit further, featuring only 1536 shader units and 96 TMUs but keeping the 32 ROPs and 256-bit interface.
For comparison, here’s a handy table comparing the 8850/8870 to the current-generation 7850/7870 (which we recently reviewed).
| | Radeon HD 7850 | Radeon HD 8850 | Radeon HD 7870 | Radeon HD 8870 |
| --- | --- | --- | --- | --- |
| Bandwidth | 153.6 GB/s | 192 GB/s | 153.6 GB/s | 192 GB/s |
So while the memory bus and number of ROP units are staying the same, you are getting more shaders and texture units along with a boost to overall memory bandwidth from the larger die – sounds like an okay compromise to me!
AMD has managed to increase clock speeds and GPGPU performance with Oland/Sea Islands as well. On the clockspeed front, the 8850 has base and boost GPU clockspeeds of 925 MHz and 975 MHz respectively, while the 8870 has base/boost clocks of 1050 MHz/1100 MHz. That is a nice improvement over the 7850’s 860 MHz clockspeed and the 7870’s 1000 MHz clockspeed. AMD is also adding its PowerTune with Boost functionality to the Oland-based graphics cards, which is a welcome addition. The theoretical computational power of the graphics chips has been increased as well, by as much as 75% for single precision and 60% for double precision (7870 to 8870). Single precision performance has been increased to 2.99 TFLOPS on the 8850 (versus 1.76 TFLOPS on the 7850) and 3.94 TFLOPS on the 8870 (versus 2.25 TFLOPS on the 7870). The single precision numbers are relevant to gaming and the GPU-accelerated applications that consumers typically run. They are not really representative of high performance computing (HPC) workloads where precision is important (think simulations and high-end mathematics), and that is where the double precision numbers come in. The 8800 series gets a nice boost in potential double precision performance as well, topping out at 187.2 GFLOPS for the 8850 and 246 GFLOPS for the 8870, compared to the 7850’s 110 GFLOPS and 7870’s 160 GFLOPS.
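Those single precision figures line up with the rumored shader counts and boost clocks: each GCN shader can execute one fused multiply-add (two floating-point operations) per clock. A quick sketch of the arithmetic, using the rumored numbers above:

```python
# Theoretical single-precision throughput for a GCN-style GPU:
# each shader can issue one fused multiply-add (2 FLOPs) per clock.
def sp_tflops(shaders: int, clock_mhz: float) -> float:
    """Return peak single-precision TFLOPS: shaders x 2 ops x clock."""
    return shaders * 2 * clock_mhz * 1e6 / 1e12

# Rumored 8850: 1536 shaders at a 975 MHz boost clock
print(round(sp_tflops(1536, 975), 3))   # 2.995 - matching the ~2.99 TFLOPS rumor
# Rumored 8870: 1792 shaders at a 1100 MHz boost clock
print(round(sp_tflops(1792, 1100), 3))  # 3.942 - matching the ~3.94 TFLOPS rumor
```

The same formula with the 7850's 1024 shaders at 860 MHz gives the current card's 1.76 TFLOPS, so the rumored numbers are at least internally consistent.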
The sources also disclosed that while the 8850 would have the same TDP (thermal design power) rating as the 7850, the higher-end 8870 would actually see a decreased 160W TDP versus the previous generation’s 175W. Unfortunately, there were not any specific power draw numbers talked about, just that the cards were more power efficient, so it remains to be seen just how much (if at all) less power the GPUs will need. The sources put the 8870 at the same performance level as the NVIDIA GeForce GTX 680, which would mean that this will be an amazing mid-range card if true. Especially considering that the cards have a rumored price of $279 for the 8870 and $199 for the 8850. Granted, those prices are likely much lower than what we will actually see if AMD does indeed launch the cards in January as the company will not have competition from NVIDIA’s 700 series right away.
In some respects, the rumored specifications seem almost too good to be true, but I’m going to remain hopeful and am looking forward to not only seeing the mid-range Oland GPU coming out, but the unveiling of AMD’s top-end 8900 series (which should be amazing, based on the 8800-series rumors).
What do you think of the rumored 8850 and 8870 graphics cards from AMD? Will they be enough to tempt even NVIDIA fans?
Subject: Editorial, Graphics Cards | September 17, 2012 - 02:23 PM | Ryan Shrout
Tagged: pcper, nvidia, live, giveaway, contest, borderlands 2
I hope your day is going to be free tomorrow - we have some big stuff planned! In cooperation with NVIDIA, Gearbox and PC Perspective, we'll be hosting a multi-hour live streaming launch party for Borderlands 2! We'll be going over some of the unique PC-exclusive features, showing off gameplay in the crazy co-op mode and we'll have some giveaways for viewers as well including a pair of Zotac GeForce GTX 660 Ti cards!
Tomorrow, Sept 18th, from 4pm ET until at least 8pm ET, staff from PC Perspective and NVIDIA will be using our PC Perspective Live! channel to discuss and show off the new "shoot and loot" title from Gearbox.
Come join us to see Borderlands 2 in action, hang out with PC Perspective and NVIDIA reps and enter for a chance to win one of two Zotac GeForce GTX 660 Ti cards!
Be sure to set your calendars and join us for the Borderlands 2 launch live streaming celebration!!
- PC Perspective Live! channel - pcper.com/live
- Start time: 4pm ET / 1pm PT
Subject: Graphics Cards, Mobile | September 13, 2012 - 06:42 PM | Ryan Shrout
Tagged: lucid, dynamix, ultrabook
Lucid has a history of fast product development as a software company. It wasn't too long ago that Lucidlogix was a fabless semiconductor company that made chips for motherboards to enable multi-GPU solutions across card models and GPU vendors. Since then we have seen them move to GPU virtualization tasks, like enabling discrete and integrated GPUs to work seamlessly on the same notebook without user interaction.
The Lucid MVP software is the most recent product on that track and it has been very well received, finding its way onto most motherboard brands and, recently, the Origin gaming notebook line.
While huddling in San Francisco during IDF, we stopped by Lucid's suite to see what new stuff they were cooking up. One of the products is called Dynamix, and its goal is to adjust the image quality of games in real time to help users hit a minimum acceptable gaming experience. Lucid isn't adjusting the settings in your games; rather, it is intercepting calls from the game to the graphics solution (integrated or discrete) and altering them slightly to adjust performance.
Above you'll see the beta user interface for Dynamix, which allows the user to configure it and assign which titles it should operate on. Two sliders, one for frame rate and one for a somewhat subjective "quality" level, can be moved to alter the algorithms Lucid has put in place.
When you set the minimum frame rate, that is the "threshold" at which you would like all of your games to run. The default was 30 FPS when I played with it, and I left the quality slider where it started as well. If you start a game that does NOT run at 30 FPS with the settings you have (or maybe won't with any settings), Lucid's software will attempt to change some quality and rendering settings, completely transparently, to bring the frame rate up.
In our demo we saw Crysis 2 running on a Dell Ultrabook at 1366x768 and a reported frame rate of 9 from FRAPS. Obviously a game at that frame rate is pretty much unplayable, so when you enable the Dynamix software via a hotkey it attempts to bring up the frame rate; not by adjusting settings in the game engine but rather by changing DX calls to the GPU itself.
Examples given were that Dynamix might change the color depth requested by the game, or it might lower the texture resolutions and anti-aliasing passes. It gradually degrades image quality until it is close to reaching your desired minimum frame rate. When I enabled it on Crysis 2, my frame rate went from 9 to 28 or so - a sizeable difference that made the game mostly playable.
It's not magic though - there are degradations in quality that are visible.
Here you can see a close up of the game running without Dynamix at work. The quality is good but the frame rate was again at 9 FPS or so.
This image shows the game after enabling Dynamix, with a frame rate of 28 or so. You can definitely see blurrier textures and less sharpness around the gun, and the foliage quality has gone down some as well.
So why is this even interesting? There are several reasons. First there are some games that may not have quality settings low enough to run on an Ultrabook with HD 2500 graphics; kind of like Crysis 2. Lucid is able to change things that the developer might not have thought of (or might not have wanted) with its access to the graphics pipeline.
Secondly, as the name implies, the software is dynamic. If you are already running a game OVER your minimum threshold, the software will not change anything. But if you are running at 40 FPS in an indoor area and then drop to 20 FPS when you go outdoors, the software will kick in and attempt to adjust quality to get you back up to the 30 FPS mark.
Finally, the UI remains untouched – the informational elements that are part of the game's interface are left alone, so you don't have to worry about blurry text or anything like that. Lucid's knowledge of the back end of 3D engines allows them to tweak things like this pretty easily.
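Lucid has not published the details of its algorithm, so purely as an illustration, the behavior described above (degrade quality when under the threshold, restore it when there is headroom) might be sketched as a simple feedback loop – every name and quality step here is hypothetical:

```python
# Hypothetical sketch of a Dynamix-style feedback loop.
# The quality steps are illustrative only; Lucid's actual DX-call
# substitutions are not public.
QUALITY_STEPS = ["full", "reduced AA", "reduced textures", "reduced color depth"]

def adjust_quality(measured_fps: float, target_fps: float, level: int) -> int:
    """Step quality down when below target, back up when comfortably above."""
    if measured_fps < target_fps and level < len(QUALITY_STEPS) - 1:
        return level + 1          # under threshold: degrade another notch
    if measured_fps > target_fps * 1.25 and level > 0:
        return level - 1          # plenty of headroom: restore some quality
    return level                  # within the band: leave settings alone

# Frame rates sampled once per second, e.g. moving from a heavy outdoor
# scene back indoors:
level = 0
for fps in [9, 14, 22, 31, 45]:
    level = adjust_quality(fps, 30.0, level)
print(QUALITY_STEPS[level])       # settles on "reduced textures"
```

The real software operates per intercepted DirectX call rather than per setting, but the control idea – a target band with gradual degradation and recovery – is the same one the demo exhibited.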
Lucid says the goal is to make games that would otherwise be unplayable on a system, playable for consumers. Without a doubt the target is Sandy Bridge and Ivy Bridge notebooks and the somewhat limited performance of the HD 2500 graphics system. While this could also be applied to discrete graphics system from AMD and NVIDIA, I don't see that being necessary.
Currently the software works with DX9 and DX10 games, though they are still working to cover DX11 completely. And while the software worked fine in our demo, we only tried one game on one notebook – there is still a lot of proving that Lucid needs to do for us to buy in completely. If Lucid's bragging is anything to judge by, though, you should see Dynamix in quite a few major notebook brands later this year.
What do YOU think? Is this a technology you are interested in and do you see a place for it?
AMD's Radeon HD 7000 Series Graphics Cards Reportedly Receiving Price Cuts Soon (Update: AMD denies further price cuts)
Subject: Graphics Cards | September 13, 2012 - 05:25 PM | Tim Verry
Tagged: Radeon HD 7000, price cuts, pitcairn, HD7000, gpu, amd
Update: AMD has stated that there will not be any price cuts.
NVIDIA launched two budget Kepler-based graphics cards today, and the sub-$250 GPUs are competitively priced. The GTX 650 is a card with an MSRP of $109 and is matched against the Radeon 7750 (which retails for around $110 depending on manufacturer). Further, the $229 GTX 660 is pitted against the Radeon 7850 – an approximately $220 card (some manufacturers beat that price, others are priced higher).
The AMD Radeon HD 7850 Graphics Card from our review.
And while you can find these AMD graphics cards for slightly less than the NVIDIA competition, the green team GPU is the faster card in most games (especially at 1080p). In an attempt to sway gamers towards the AMD choice, the company is preparing to cut prices on the entire 7000-series line – including the 7750 and 7850. These are cuts on the, erm, already-cut prices announced last month.
The price cuts are as follows:

| AMD Radeon HD GPU | New Slashed Price |
| --- | --- |
| 7970 GHz Edition | $430 |
| 7950 Boost Edition | $300 |
These prices are almost certainly for reference designs, and you can naturally expect to pay more for any factory-overclocked model. What these price cuts mean, though, is that the base versions are now cheaper to get ahold of, which is a good thing (for gamers – not so much for AMD, heh).
When specifically talking about the price cuts as a response to the budget Kepler cards, both the 7750 and 7850 can generally be had for anywhere between $5 and $20 less. That's ~$20 extra that you could devote to more RAM, or that could put you over the edge into a better quality PSU. It definitely makes the decision to go AMD or NVIDIA a bit more difficult (but in an exciting/good way).
This is not the first time that AMD has slashed prices on its 7000 series graphics cards and now that it has competition on all fronts, it will be interesting to see how all the prices finally shake out to be. Interestingly, Softpedia seems to have posted the price cut information on Tuesday (two days before Kepler) but states that the cuts will not go into effect until next week – though Newegg seems to have taken some initiative of its own by pricing certain cards at the new prices already. This may have technically been more of a pre-emptive move than a reactionary one, but either way the budget gaming section of the market just got exciting again!
Do the impending price cuts have you reconsidering your budget GPU choice, or are you set on the new Kepler hardware?
Subject: Graphics Cards | September 13, 2012 - 05:09 PM | Jeremy Hellstrom
Tagged: nvidia, msi, kepler, gtx 660, gk106, geforce, evga, factory overclocked
As those of you who have already read the post below this one know, ASUS decided to create a DirectCU II model for their GTX 660, with the famous heatpipe bearing heatsink. They have overclocked the GPU already and the card comes with tools to allow you to push it even further if you take the time to get to know your card and what it can manage. Check the full press release below.
Fremont, CA (September 13, 2012) - ASUS is excited to release the ASUS GeForce GTX 660 DirectCU II series featuring the Standard, OC and TOP editions. Utilizing the latest 28nm NVIDIA Kepler graphics architecture, the OC and TOP cards deliver a factory-overclock while all three cards feature ASUS exclusive DirectCU thermal design and GPU Tweak tuning software to deliver a quieter, cooler, faster, and more immersive gameplay experience. The ASUS GeForce GTX 660 DirectCU II series set a new benchmark for exceptional performance and power efficiency in a highly affordable graphics card. The ASUS GeForce GTX 660 DirectCU II is perfect for gamers looking to upgrade from last-generation graphics technology while retaining ASUS’ class-leading cooling and acoustic performance.
Superior Design and Software for the Best Gaming Experience ASUS equips the GeForce GTX 660 DirectCU II series with 2GB of GDDR5 memory clocked up to 6108MHz. The TOP edition features a blistering GPU core boost clock of 1137MHz, 104MHz faster than reference designs while the OC edition arrives with a factory-set GPU core boost speed of 1085MHz. Exclusive ASUS DIGI+ VRM digital power delivery and user-friendly GPU Tweak tuning software allows all cards to easily overclock beyond factory-set speeds offering enhanced performance in your favorite game or compute intensive application.
The ASUS GeForce GTX 660 DirectCU II series features exclusive DirectCU technology. The custom designed cooler uses direct contact copper heatpipes for faster heat transfer and up to 20% lower normal operating temperatures than reference designs. The optimized fans are able to operate at lower speeds, providing a much quieter gaming or computing environment. For enhanced stability, energy efficiency, and overclocking margins the cards feature DIGI+ VRM digital power delivery plus a class-leading six-phase Super Alloy Power design for the capacitors, chokes, and MOSFETs, meant to extend product lifespan and durability while operating noise-free even under heavy workloads.
ASUS once again includes the award winning GPU Tweak tuning suite in the box. Overclocking-inclined enthusiasts or gamers can boost clock speeds, set power targets, and configure fan operating parameters and policies; all this and more is accessible in the user-friendly interface. GPU Tweak offers built-in safe guards to ensure all modifications are safe, maintaining optimal stability and card reliability.
Subject: Graphics Cards | September 13, 2012 - 04:49 PM | Jeremy Hellstrom
Tagged: nvidia, msi, kepler, gtx 660, gk106, geforce, evga
The non-Ti version of the GTX 660 has arrived on test benches and at retailers, with even the heavily overclocked cards available at $230, like EVGA's Superclocked model or MSI's OC'd card once you count the MIR. That price places it right in between the HD 7850 and 7870, and ~$70 less than the GTX 660 Ti, while the performance is mostly comparable to a stock HD 7870, though the OC versions can top that.
[H]ard|OCP received ASUS' version of the card, a DirectCU II based model with the distinctive heatpipes. ASUS overclocked the card to a 1072MHz base clock and 1137MHz GPU Boost, and [H] plans to see just how much further the frequencies can be pushed at a later date. Their final word for those looking to upgrade: for those of you with "a GTX 560 Ti, and even the GTX 570, the GTX 660 is an upgrade".
"NVIDIA is launching the new GeForce GTX 660 GPU, codenamed GK106. We have a retail ASUS GeForce GTX 660 DirectCU II custom video card fully evaluated against a plethora of competition at this price point. This brand new GPU aims for a price point just under the GTX 660 Ti but still promises to deliver exceptional 1080p gaming with AA."
Here are some more Graphics Card articles from around the web:
- Nvidia's GeForce GTX 660 @ The Tech Report
- ASUS GTX 660 Direct CU II TOP Review @ OCC
- NVIDIA GeForce GTX 660 Launch Review @ Neoseeker
- EVGA GeForce GTX 660 SC (SuperClocked) 2GB @ Bjorn3D
- Nvidia GeForce GTX 660 @ Hardware.info
- NVIDIA Geforce GTX 660 Reviews @Hi Tech Legion
- The NVIDIA GeForce GTX 660 Review: GK106 Fills Out The Kepler Family @ AnandTech
- SI GEFORCE GTX 660 Twin Frozr 2GB OC @ Tweaktown
- Gigabyte GeForce GTX 660 @ Legion Hardware
- Gigabyte GTX 660 Overclock 2GB Graphics Card Review @ eTeknix
- EVGA GeForce GTX 660 2GB SuperClocked @ Benchmark Reviews
- MSI GTX 660 OC Edition Twin Frozr @ Kitguru
- Nvidia GeForce GTX 660 @ Techspot
- Gigabyte GTX 660 OC Video Card Review @ Ninjalane
- MSI GTX 660 Twin Frozr 2GB OC @ LanOC Reviews
- NVIDIA GeForce GTX 660 Overclocked Graphics Card Review (EVGA/ZOTAC) @ HardwareHeaven
- EVGA GTX 660 Superclocked 2Gb @ LanOC Reviews
- NVIDIA GeForce GTX 660 Review @ Hardware Canucks
- ASUS, KFA2 and MSI GeForce GTX 660 reviews with 2-way SLI @ Guru of 3D
- MSI GeForce GTX 660 Twin Frozr 2 GB @ techPowerUp
- ZOTAC GeForce GTX 660 2 GB @ techPowerUp
- Gigabyte GTX 660 Windforce OC 2 GB @ techPowerUp
- ASUS GeForce GTX 660 Direct Cu II 2 GB @ techPowerUp
- NVIDIA GeForce GTX 660 Video Card Review w/ MSI and EVGA @ Legit Reviews
- Six GeForce GTX 660 Ti graphics cards: ASUS, EVGA, Gigabyte, MSI and Zotac @ Hardware.info
- Gigabyte GTX 660 Ti OC Windforce @ Kitguru
- AFOX Radeon HD 7850 (Single Slot), MSI R7870 Hawk Graphics Cards @ iXBT Labs
- Inno3D GTX 680 iChill Black Series Accelero Hybrid 4GB Overclocked @ Tweaktown
- MSI Geforce GTX 670 Power Edition @ Rbmods
- i3DSpeed, August 2012 @ iXBT Labs
- Arctic Accelero Xtreme 7970 VGA Cooler Review @ eTeknix
- Sapphire Radeon HD 7970 Vapor-X OC 6GB Graphics Card Review @ eTeknix
- Sapphire FleX HD 7770 GHz Edition @ LanOC Reviews
Subject: Graphics Cards | September 13, 2012 - 09:38 AM | Tim Verry
Tagged: nvidia, kepler, gtx 650, graphics cards, geforce
Ah, Kepler: the (originally intended as) midrange graphics card architecture that took the world by storm and allowed NVIDIA to take it from the dual-GPU GeForce GTX 690 all the way down to budget discrete HTPC cards. So far this year we have seen the company push Kepler to its limits by adding GPU boost and placing it in the GTX 690 and GTX 680. Those cards were great, but commanded a price premium that most gamers could not afford. Enter the GTX 670 and GTX 660 Ti earlier this year, and Kepler started to become an attractive option for gamers wanting a high-end single GPU system without breaking the bank. Those cards, at $399 and $299 respectively, were a step in the right direction towards making the Kepler architecture available to everyone, but were still a bit pricey if you were on a tighter budget for your gaming rig (or needed to factor in the Significant Other Approval Process™).
Well, Kepler has now been on the market for about six months, and I’m excited to (finally) announce that NVIDIA is launching its first Kepler-based budget gaming card! The NVIDIA GeForce GTX 650 brings Kepler down to the ever-attractive $109 price point and is even capable of playing new games at 1080p above 30FPS. Not bad for such a cheap card!
With the GTX 650 you are making some hardware sacrifices, but things are not all bad. The card features a mere 384 CUDA cores and 1GB of GDDR5 memory on a 128-bit bus. This is a huge decrease in hardware compared to the GTX 660 Ti’s 1344 CUDA cores and 2GB of memory on a 192-bit bus – but that card is also $200 more. And while the GTX 650 runs the memory at 5Gbps, NVIDIA was not shy about pumping up the GPU core clockspeed. No boost functionality was mentioned, but the base clockspeed is a respectable 1058 MHz. Even better, the card only requires a single 6-pin PCI-E power connector and has a TDP of 64W (less than half that of its higher-end GeForce brethren).
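As a quick sanity check on those memory figures, peak bandwidth is just the effective per-pin data rate multiplied by the bus width in bytes. A small sketch (the 6 Gbps / 192-bit line is there for comparison against the GTX 660 Ti class of card mentioned above):

```python
# Peak memory bandwidth: effective data rate per pin (Gbps) times
# the bus width converted from bits to bytes.
def bandwidth_gbs(effective_gbps: float, bus_width_bits: int) -> float:
    return effective_gbps * bus_width_bits / 8

print(bandwidth_gbs(5.0, 128))   # GTX 650: 5 Gbps on a 128-bit bus = 80.0 GB/s
print(bandwidth_gbs(6.0, 192))   # a 6 Gbps / 192-bit card = 144.0 GB/s
```

So the narrow 128-bit bus is where much of the GTX 650's cost saving comes from, trading away a large chunk of the memory bandwidth of its bigger siblings.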
The following chart compares the specifications from the new GeForce GTX 650 through the GTX 670 graphics card.
Click on the above chart for a larger image.
The really important question is how well it handles games, and NVIDIA showed off several slides with claimed performance numbers. Taking these numbers with a grain of salt – they come from the same company that built the hardware – the GTX 650 looks like a capable GPU for the price. The company compared it to both its GTS 450 (Fermi) and AMD’s 7750 graphics card. Naturally, it was shown in a good light in both comparisons, but nothing egregious.
NVIDIA is claiming an 8X performance increase versus the old 9500 GT, and an approximate 20% speed increase versus the GTS 450. Improvements to the hardware itself have also allowed NVIDIA to improve performance while requiring less power; the company claims the GTX 650 uses up to half the power of its Fermi predecessor.
The comparison between the GTX 650 and AMD Radeon HD 7750 is harder to gauge, though the 7750 is priced competitively around the GTX 650’s $109 MSRP so it will be interesting to see how that shakes out. NVIDIA is claiming anywhere from 1.08 to 1.34 times the performance of the 7750 in a number of games, shown in the chart below.
If you have been eyeing a 7750, the GTX 650 looks like it might be the better option, assuming reviewers are able to replicate NVIDIA’s results.
Keep in mind, these are NVIDIA's numbers and not from our reviews.
Unfortunately, NVIDIA did not benchmark the GTS 450 against the GTX 650 in the games. Rather, they compared it to the 9500 GT to show the upgrade potential for anyone still holding onto the older hardware (pushing the fact that you can run DirectX 11 at 1080p if you upgrade). Still, the results for the 650 are interesting on their own. In MechWarrior Online, World of Warcraft, and Max Payne 3, the budget GPU managed at least 40 FPS at 1920x1080 in DirectX 11 mode. Nothing groundbreaking, for sure, but fairly respectable for the price. Assuming it can pull a minimum of 30 FPS in other recent games, this will be a good option for DIY builders that want to get started with PC gaming on a budget.
All in all, the NVIDIA GeForce GTX 650 looks to be a decent card and finally rounds out the Kepler architecture. At this price point, NVIDIA can finally give every gamer a Kepler option instead of continuing to rely on older cards to answer AMD at the lower price points. I’m interested to see how AMD answers this, and specifically if gamers will see more price cuts on the AMD side.
If you have not already, I strongly recommend you give our previous Kepler GPU reviews a read through for a look at what NVIDIA’s latest architecture is all about.
PC Perspective Kepler-based GTX Graphics Card Reviews:
Subject: General Tech, Graphics Cards, Mobile | September 12, 2012 - 07:20 PM | Scott Michaud
Tagged: lucid, external graphics
Lucid looks to utilize Thunderbolt and its PCIe-format interface for external video cards. In their ideal future, customers would purchase an Ultrabook or other laptop to carry around town. Upon reaching home, the user could set the laptop on their desk, plug in a high-end video card for performance, and surround their Ultrabook with extra monitors.
While there are situations where acceleration hardware needs to be inside the device, that is not always necessary.
There have been numerous attempts in the past to provide a dockable graphics accelerator. ASUS, AMD, Vidock, as well as many others have attempted this feat but all had drawbacks and/or difficulty getting to market. Just prior to Intel Developer Forum, Laptop Magazine was given a demonstration from Lucid with their own attempt.
How about some Thunderbolt?
Mobile GPUs are really the only thing keeping a good laptop from being a gaming machine.
There’s good need for desktop CPUs with lots of RAM – but these days, not to game.
I have been excited each time a product manufacturer claims to have a non-proprietary method to accelerate laptop graphics. Laptops are appealing for so many purposes and it is frustrating to have devices come so close but fall so short of being a reasonable gaming machine.
The demo that Lucid showed off ran 3DMark 06 on an Intel HD 4000 with an external AMD Radeon HD 6700. On integrated graphics the gaming performance hovered just south of 30 FPS. With the Radeon HD 6700 – as expected – performance greatly increased to almost 90 FPS.
It should be much more compelling for a PC store to say “For somewhere near the price of a console, you could dock your laptop which you already own into this box when you want to game and instantly have all PC gaming and Home Theatre PC benefits.”
And it should have happened a long time ago.