Author:
Manufacturer: PC Perspective

In case you missed it...

UPDATE: We have now published full details on our Frame Rating capture and analysis system as well as an entire host of benchmark results.  Please check it out!!

In one of the last pages of our recent NVIDIA GeForce GTX TITAN graphics card review we included an update to our Frame Rating graphics performance metric that explained the testing method in more detail and showed results for the first time.  Because it was buried so far into the article, I thought it was worth posting this information here as a separate article to solicit feedback from readers and help guide the discussion forward without getting lost in the TITAN shuffle.  If you already read that page of our TITAN review, nothing new is included below.

I am still planning a full article based on these results sooner rather than later; for now, please leave me your thoughts, comments, ideas and criticisms in the comments below!


Why are you not testing CrossFire??

If you haven't been following our sequence of stories that investigates a completely new testing methodology we are calling "frame rating", then you are really missing out.  (Part 1 is here, part 2 is here.)  The basic premise of Frame Rating is that the performance metrics that the industry is gathering using FRAPS are inaccurate in many cases and do not properly reflect the real-world gaming experience the user has.

Because of that, we are working on another method that uses high-end dual-link DVI capture equipment to directly record the raw output from the graphics card with an overlay technology that allows us to measure frame rates as they are presented on the screen, not as they are presented to the FRAPS software sub-system.  With these tools we can measure average frame rates, frame times and stutter, all in a way that reflects exactly what the viewer sees from the game.

We aren't ready to show our full sets of results yet (soon!), but the problem is that AMD's CrossFire technology shows severe performance degradation when viewed under the Frame Rating microscope that does not show up nearly as dramatically under FRAPS.  As such, I decided that it was simply irresponsible of me to present data to readers that I would then immediately refute on the final pages of this review (Editor: referencing the GTX TITAN article linked above) - it would be a waste of time for the reader, and people who skip straight to the performance graphs wouldn't know our theory on why the results displayed were invalid.

Many other sites will use FRAPS, will use CrossFire, and there is nothing wrong with that at all.  They are simply presenting data that they believe to be true based on the tools at their disposal.  More data is always better. 

Here are those results and our discussion.  I decided to use the most popular game out today, Battlefield 3, and please keep in mind this is NOT the worst-case scenario for AMD CrossFire in any way.  I tested the Radeon HD 7970 GHz Edition in single and CrossFire configurations as well as the GeForce GTX 680 in single and SLI configurations.  To gather results I used two processes:

  1. Run FRAPS while running through a repeatable section and record frame rates and frame times for 60 seconds
  2. Run our Frame Rating capture system with a special overlay that allows us to measure frame rates and frame times with post processing.

Here is an example of what the overlay looks like in Battlefield 3.

fr_sli_1.jpg

Frame Rating capture on GeForce GTX 680s in SLI

The column on the left is the visual result of an overlay that is applied to each and every frame of the game early in the rendering process.  A solid color is added to the PRESENT call (more details to come later) for each individual frame.  As you know, when you are playing a game, multiple frames can make it onto the screen during any single 60 Hz refresh cycle of your monitor, and because of that you get a succession of colors on the left-hand side.

By measuring the pixel height of those colored columns, and knowing the order in which they should appear beforehand, we can gather the same data that FRAPS does but our results are seen AFTER any driver optimizations and DX changes the game might make.
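To make that post-processing step a little more concrete, here is a minimal sketch of the idea in Python.  This is emphatically not our actual analysis software; the function names, the 8-pixel overlay width and the use of NumPy are all assumptions made purely for illustration.  It scans the overlay column of one captured frame, finds each run of identical color, and converts the run's height into an approximate slice of the 16.7ms refresh interval.

```python
# Illustrative sketch only - not PC Perspective's capture-analysis tool.
import numpy as np

REFRESH_INTERVAL_MS = 1000.0 / 60.0  # one captured frame spans ~16.7 ms of scanout

def band_heights(captured_frame, overlay_width=8):
    """Return (rgb_color, height_in_scanlines) for each solid band found in the
    left-hand overlay column of a single captured frame (an H x W x 3 array)."""
    column = captured_frame[:, :overlay_width, :].mean(axis=1).astype(np.uint8)
    bands, start = [], 0
    for row in range(1, column.shape[0] + 1):
        # close the current band at the end of the frame or when the color changes
        if row == column.shape[0] or not np.array_equal(column[row], column[start]):
            bands.append((tuple(int(c) for c in column[start]), row - start))
            start = row
    return bands

def scanout_time_ms(height, total_rows):
    """Approximate how long a band was actually on screen within this refresh."""
    return REFRESH_INTERVAL_MS * height / total_rows
```

Because the overlay colors repeat in a known order, stitching these per-refresh bands together across an entire capture yields frame times as the display actually showed them, rather than as FRAPS logged them.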

fr_cf_1.jpg

Frame Rating capture on Radeon HD 7970 CrossFire

Here you see a very similar screenshot running on CrossFire.  Notice the thin silver band between the maroon and purple?  That is a complete frame according to FRAPS and most reviews.  Not to us - we think that rendered frame is almost useless. 
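Building on the sketch above, that silver sliver is exactly the kind of frame we want to flag automatically.  A simple (and purely illustrative) way to do it is to threshold the band height; the 20-scanline cutoff below is an assumption for the example, not a number we have settled on.

```python
# Illustrative only: any overlay band shorter than the cutoff is treated as a
# "runt" - a frame that technically rendered but contributed almost nothing
# visible. The 20-scanline threshold is an assumption, not our final metric.
RUNT_THRESHOLD_ROWS = 20

def split_runts(bands):
    """bands: list of (color, height) tuples as returned by band_heights()."""
    full  = [b for b in bands if b[1] >= RUNT_THRESHOLD_ROWS]
    runts = [b for b in bands if b[1] < RUNT_THRESHOLD_ROWS]
    return full, runts
```

FRAPS counts the runt and the full frame equally, which is exactly why the average frame rates it reports for CrossFire can look so much better than what the gamer actually sees.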

Continue reading the third part in our Frame Rating series to see our first performance results!!

AMD Releases Catalyst 13.2 Beta GPU Driver With Optimizations For Crysis 3

Subject: Graphics Cards | January 31, 2013 - 08:04 AM |
Tagged: PC, gaming, amd, graphics drivers, gpu, Crysis 3, catalyst

The Crysis 3 beta was launched January 29th, and AMD came prepared with its new Catalyst 13.2 beta driver. In addition to the improvements rolled into the Catalyst 13.1 WHQL graphics driver, Catalyst 13.2 beta features performance improvements in a number of games.

Foremost, AMD focused on optimizing the drivers for the Crysis 3 beta. With the new 13.2 beta drivers, gamers will see a 15% performance improvement in Crysis 3 when using high MSAA settings. AMD has also committed itself to future tweaks to improve Crysis 3 performance when using both single and CrossFire graphics configurations. The driver also allows for a 10% improvement in CrossFire performance in Crytek’s Crysis 2 and a 50% performance boost in DMC: Devil May Cry when running a single AMD GPU. Reportedly, the new beta driver also reduces latency issues in Skyrim, Borderlands 2, and Guild Wars 2. Finally, the 13.2 beta driver resolves a texture filtering issue when running DirectX 9.0c games.

For more details on the driver, AMD has posted the change log on its blog as well as suggested image quality settings for AMD cards running the Crysis 3 beta.

Now watch: PC Perspective live streams the Crysis 3 Multiplayer Beta.

 

Source: AMD

AMD Releases Catalyst 13.1 GPU Drivers With Various Tweaks for Radeon HD 7000 Series

Subject: Graphics Cards | January 18, 2013 - 11:33 AM |
Tagged: Radeon HD 7000, gpu, drivers, catalyst 13.1, amd

AMD recently released a new set of Catalyst graphics card drivers with Catalyst 13.1. The new drivers are WHQL (Microsoft certified) and incorporate all of the fixes contained in the 12.11 beta 11 drivers. The Radeon HD 7000 series will see the majority of the performance and stability tweaks with 13.1. Additionally, the Catalyst 13.1 suite includes a new 3D settings interface in Catalyst Control Center that allows per-application profile management. The Linux version of the Catalyst 13.1 drivers now officially supports Ubuntu 12.10 as well.

amd catalyst.jpg

Some of the notable performance tweaks for the HD 7000 series include:

  • CrossFire scaling performance improvements in Call of Duty: Black Ops II.
  • Up to a 25% increase in Far Cry 3 when using 8X MSAA.
  • An 8% performance increase in Sleeping Dogs and StarCraft II.
  • A 5% improvement in Max Payne 3.

 

New 3D Settings UI in Catalyst.jpg

Beyond the performance increases, AMD has fixed several bugs with the latest drivers. Some of the noteworthy fixes include:

  • Fixed a system hang on X58 and X79 chipset-based systems using HD 7000-series GPUs.
  • Fixed an intermittent hang with HD 7000-series GPUs in CrossFireX and Eyefinity configurations.
  • Resolved a system hang in Dishonored on 5000 and 6000 series graphics cards.
  • Resolved a video issue with WMP Classic Home Cinema.
  • Added Super Sample Anti-Aliasing support in the OpenGL driver.

AMD has also released a new standalone un-installation utility that will reportedly clean your system of AMD graphics card drivers to make way for newer versions. That utility can be downloaded here.

If you have a Radeon HD 7000-series card, it would be worth it to update your drivers ASAP. You can download the Catalyst 13.1 drivers on the AMD website.

You can find a full list of the performance tweaks and bug fixes in the Catalyst 13.1 release notes.

Source: AMD
Author:
Manufacturer: PC Perspective

Another update

In our previous article and video, I introduced you to our upcoming testing methodology for evaluating graphics cards based not only on frame rates but also on frame smoothness and the efficiency of those frame rates.  I showed off some of the new hardware we are using for this process and detailed how direct capture of graphics card output allows us to find interesting frame and animation anomalies using some Photoshop still frames.

d31.jpg

Today we are taking that a step further and looking at a couple of captured videos that demonstrate a "stutter" and walking you through, frame by frame, how we can detect, visualize and even start to measure them.

dis1.jpg

This video takes a couple of examples of stutter in games, DiRT 3 and Dishonored to be exact, and shows what they look like in real time, at 25% speed and then finally in a much more detailed frame-by-frame analysis.

 

Video Loading...

 

Obviously these are just a couple of instances of what a stutter is, and there are often less apparent in-game stutters that are even harder to see in video playback.  Not to worry - this capture method is capable of seeing those issues as well and we plan on diving into the "micro" level shortly.

We aren't going to start talking about whose card and what driver is being used yet and I know that there are still a lot of questions to be answered on this topic.  You will be hearing more quite soon from us and I thank you all for your comments, critiques and support.

Let me know below what you thought of this video and any questions that you might have. 

Author:
Manufacturer: PC Perspective

A change is coming in 2013

If the new year will bring us anything, it looks like it might be the end of using "FPS" as the primary measuring tool for graphics performance on PCs.  A long, long time ago we started with simple "time demos" that recorded rendered frames in a game like Quake and then played them back as quickly as possible on a test system.  The lone result was given as a time, in seconds, which was then converted to an average frame rate using the known total number of frames in the recording.

More recently we saw a transition to frame rates over time and the advent of frame time graphs like the ones we have been using in our graphics reviews on PC Perspective. This expanded the amount of data required to get an accurate picture of graphics and gaming performance, but it was indeed more accurate, giving us a clearer image of how GPUs (and CPUs and systems for that matter) performed in games.

And even though the idea of frame times has been around just as long, not many people were interested in getting into that level of detail until this past year.  A frame time is the amount of time each frame takes to render, usually listed in milliseconds, and it can range from 5ms to 50ms depending on performance.  For reference, 120 FPS equates to an average of 8.3ms per frame, 60 FPS is 16.6ms and 30 FPS is 33.3ms.  But rather than averaging those out over each second of time, what if you looked at each frame individually?
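For those who like to see the arithmetic spelled out, the conversion is trivial: frame time in milliseconds is simply 1000 divided by the frame rate, and an average frame rate over any window is the frame count divided by the sum of the individual frame times.

```python
# The FPS <-> frame time arithmetic quoted above, written out.
def fps_to_frame_time_ms(fps):
    return 1000.0 / fps

def average_fps(frame_times_ms):
    return 1000.0 * len(frame_times_ms) / sum(frame_times_ms)

print(fps_to_frame_time_ms(120))  # 8.33 ms
print(fps_to_frame_time_ms(60))   # 16.67 ms
print(fps_to_frame_time_ms(30))   # 33.33 ms
```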

Video Loading...

Scott over at Tech Report started doing that this past year and found some interesting results.  I encourage all of our readers to follow up on what he has been doing as I think you'll find it incredibly educational and interesting. 

Through emails and tweets many PC Perspective readers have been asking for our take on it, why we weren't testing graphics cards in the same fashion yet, etc.  I've stayed quiet about it simply because we were working on quite a few different angles on our side and I wasn't ready to share results.  I am still not ready to share the glut of our information yet, but I am ready to start the discussion and I hope our community finds it compelling and offers some feedback.

card.jpg

At the heart of our unique GPU testing method is this card, a high-end dual-link DVI capture card capable of handling 2560x1600 resolutions at 60 Hz.  Essentially this card will act as a monitor to our GPU test bed and allow us to capture the actual display output that reaches the gamer's eyes.  This method is the best possible way to measure frame rates, frame times, stutter, runts, smoothness, and any other graphics-related metrics.

Using that recorded footage, sometimes reaching 400 MB/s of consistent writes at high resolutions, we can then analyze the frames one by one, albeit with the help of some additional software.  There are a lot of details that I am glossing over, including the need for perfectly synced frame rates, having absolutely zero dropped frames in the recording and analysis, etc., but trust me when I say we have been spending a lot of time on this. 
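To give a flavor of what that per-frame analysis can produce, here is a small illustrative sketch (again, not our actual software; the function name and the particular metrics are assumptions for the example) that turns a list of per-frame times into an average frame rate, a 99th-percentile frame time and a crude frame-to-frame consistency figure - the kinds of numbers that start to describe smoothness rather than just speed.

```python
# Illustrative sketch of per-frame metrics - not PC Perspective's actual tool.
import statistics

def frame_time_metrics(frame_times_ms):
    """Summarize a list of per-frame render times (milliseconds)."""
    ordered = sorted(frame_times_ms)
    p99 = ordered[min(len(ordered) - 1, int(0.99 * len(ordered)))]
    # how much each frame differs from the one before it - a crude stutter indicator
    deltas = [abs(b - a) for a, b in zip(frame_times_ms, frame_times_ms[1:])]
    return {
        "avg_fps": 1000.0 * len(frame_times_ms) / sum(frame_times_ms),
        "p99_frame_time_ms": p99,
        "mean_frame_to_frame_delta_ms": statistics.mean(deltas),
    }

print(frame_time_metrics([16.7, 16.6, 16.8, 40.1, 8.2, 16.7]))
```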

Continue reading our editorial on Frame Rating: A New Graphics Performance Metric.

Podcast #232 - Our picks for Best Products of 2012!

Subject: General Tech | December 27, 2012 - 03:37 PM |
Tagged: video, ssd, podcast, picks of the year, memory, gpu, editors choice, cpu, case, best of the year

PC Perspective Podcast #232 - 12/27/2012

Join us this week as we discuss our picks for Best Products of 2012!

You can subscribe to us through iTunes and you can still access it directly through the RSS page HERE.

The URL for the podcast is: http://pcper.com/podcast - Share with your friends!

  • iTunes - Subscribe to the podcast directly through the iTunes Store
  • RSS - Subscribe through your regular RSS reader
  • MP3 - Direct download link to the MP3 file

Hosts: Ryan Shrout, Josh Walrath, Allyn Malventano and Scott Michaud

This Podcast is brought to you by MSI!

Program length: 1:40:13

Podcast topics of discussion:

  1. Welcome to our Best Of 2012 Episode!
  2. Rules explanation: there are no rules!!
  3. 0:07:30 Best CPU
    1. AMD Trinity A10 APU
    2. Intel Core i7-3770K
    3. Intel Ultrabook mobile CPU
    4. Qualcomm Krait Mobile CPU
  4. 0:20:00 Best Motherboard
    1. MSI Z77 MPower
    2. ASUS Rampage IV Extreme
    3. ASUS Z77 Mini ITX
    4. EVGA dual Xeon board
    5. Asus Crosshair Formula Z
  5. 0:31:20 Best GPU
    1. GeForce GTX 680 2GB
    2. GeForce GTX 660 Ti 2GB
    3. Radeon HD 7970 3GB
    4. Radeon HD 7870 2GB
  6. 0:44:00 This Podcast is brought to you by MSI!
  7.  0:45:00 Best Storage
    1. Samsung 840 Pro SSD
    2. OCZ Vector SSD
    3. WD Red HDD
    4. WD 4TB Enterprise
  8. 1:05:00 Best Case
    1. Corsair Carbide 300R
    2. Corsair Obsidian 550D
    3. Cooler Master Cosmos II
    4. Mineral Oil Case
  9. 1:12:00 Best Price Drop
    1. SSDs
    2. AMD Radeon 7000 GPUs
    3. Good IPS displays ($199 - $400)
    4. 2560x1440 27-in panels
    5. System Memory
  10. 1:22:00 Best Newly Popular Technology
    1. Thunderbolt
    2. High-res monitors (Korean or otherwise)
    3. Cherry style keyboards
    4. Mini ITX Motherboards
    5. Touch screen?
  11. 1:35:00 Best Lenovo Laptop on my Desk
    1. Thinkpad Twist
    2. Thinkpad X1 Carbon
    3. Thinkpad X230
    4. Yoga 13
  12. 1-888-38-PCPER or podcast@pcper.com
  13. http://pcper.com/podcast
  14. http://twitter.com/ryanshrout and http://twitter.com/pcper
  15. Closing/outro

Be sure to subscribe to the PC Perspective YouTube channel!!

 

 

Bitcoin Block Reward Halved to 25BTC

Subject: General Tech | December 3, 2012 - 01:35 AM |
Tagged: mining, gpu, btc, block reward halving, block reward, bitcoin

Earlier this week, the 50BTC reward given to miners that successfully find blocks of Bitcoin transactions was halved to 25BTC. This means that the gross income of miners is now half of what it has been since the cryptocurrency’s inception. As a result, many miners – especially those using graphics cards – will have to re-evaluate their net earnings to determine if they are still making any profits from mining coins after hardware and electricity costs are taken into consideration.

Bitcoin Clock.jpg

As an example, when mining bitcoins using a single shader-unlocked AMD Radeon HD 6950 graphics card, I was able to obtain approximately 0.12 BTC per day. Now, because the reward is cut in half, I am able to make about 0.06 BTC per day. Unfortunately, that is approximately the same amount of BTC (when converted to USD) that it costs in electricity to run, all but negating profits. Technically, according to Allchains.info, I am (just barely) still profitable with a net profit of $0.076 USD per day after electricity costs. As the exchange rate has gone up slightly, it is a bit more than that in actuality, but it is still a good estimation of profitability. There are also non-monetary costs associated with mining bitcoins in the form of the extra heat and noise generated by the graphics card being run under load 24/7. And at $0.076 a day, it really does not seem worth it anymore. Then again, it does act as a room heater in the winter and it at least subsidizes part of the cost of heating the room (heh) even if it does not pay out much more than it costs to run.
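For readers who want to follow the arithmetic, the back-of-the-envelope calculation looks like the sketch below.  The 0.06 BTC/day figure comes from the paragraph above, but the exchange rate, card wattage and electricity price are assumed values chosen only to illustrate the math - they are not my actual numbers, so the output will not match the Allchains.info figure exactly.

```python
# Back-of-the-envelope mining profitability. BTC_PER_DAY is from the article;
# the exchange rate, wattage and electricity price are assumptions made
# purely for illustration.
BTC_PER_DAY = 0.06
USD_PER_BTC = 13.00   # assumed late-2012 exchange rate
CARD_WATTS  = 200     # assumed draw for an HD 6950 under mining load
USD_PER_KWH = 0.15    # assumed electricity price

gross_usd = BTC_PER_DAY * USD_PER_BTC
power_usd = (CARD_WATTS / 1000.0) * 24 * USD_PER_KWH
print(f"gross ${gross_usd:.2f}/day, power ${power_usd:.2f}/day, "
      f"net ${gross_usd - power_usd:.3f}/day")
```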

It will be interesting to see how miners react, especially once the colder months are behind us. Some Bitcoin miners have moved on to alternative cryptocurrencies such as Litecoin and Terracoin; however, that has caused the difficulty of mining LTC to double without an equal increase in exchange rate, which has actually made mining LTC less profitable than mining BTC (d'oh!).

Screenshot (398).png

The reward halving is not an unexpected event, and, in fact, it was intentionally designed in by the bitcoin developers to prevent inflation. Just as the mining difficulty adjusts every 2016 blocks, every four years the reward is cut in half until mining blocks no longer provides rewards (with the intention that transaction fees will provide all of the incentive to mine from then on). As such, many miners knew about it beforehand and planned accordingly. Some miners sold off their rigs, others (mostly those with free electricity) are continuing to mine, and yet other miners moved to alternative cryptocurrencies. (That's not to mention those that mine for the purpose of securing the network, which is not always a profitable (albeit necessary) pursuit.)
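The subsidy schedule itself is simple to express: the block reward started at 50 BTC and is cut in half every 210,000 blocks, which works out to roughly every four years and is why the total supply converges on about 21 million BTC. The snippet below is a simplified sketch of that schedule (real clients count in integer satoshis rather than floats).

```python
# Simplified Bitcoin block subsidy schedule: halved every 210,000 blocks.
def block_reward(height):
    halvings = height // 210_000
    return 50.0 / (2 ** halvings) if halvings < 64 else 0.0

print(block_reward(0))        # 50.0
print(block_reward(210_000))  # 25.0  <- the halving discussed in this post
print(block_reward(420_000))  # 12.5
```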

Interestingly, the difficulty of Bitcoin is estimated to increase rather than decrease with the upcoming adjustment despite the reward halving and a certain number of users moving away from mining. This may be the result of miners speculating that the price (exchange rate of USD/BTC) will increase and/or that other miners will drop off and the difficulty will begin to decrease at some point. Miners with large farms of graphics cards may also be able to hold out despite the block reward halving as they have enough hashing power to keep profits at a worthwhile level. While my single card is only making about $18 a month now (versus ~$35+ before), users with more cards can still be seeing sizable returns. So long as profits are there, mining will continue, though newcomers looking to invest in mining rigs are less likely to join in the current climate (seemingly vaporware ASICs notwithstanding). There may also be users that are mining solely for BTC that they will hoard or spend with merchants (like WordPress) that accept bitcoins, without concern for exchanging to USD or other national currencies. 

In all, there are an astounding number of factors surrounding the block reward halving, along with many theories about what will happen as a result of it. At this point, it is still too early to tell, but it will be interesting to see which theories hold true.

Read more about the bitcoin cryptocurrency and how mining works at PC Perspective.

What do you think about the bitcoin reward being cut in half? How will it affect you, and will you continue to mine at the current exchange rate and difficulty? Let us know in the comments below (no registration required).

Source: Bitcoinclock

Podcast #223 - AVADirect Mini ITX Gaming Machine, Patriot Gauntlet 320GB Wireless Drive, Windows 8 Pricing and more!

Subject: General Tech | October 18, 2012 - 02:38 PM |
Tagged: video, windows 8, podcast, patriot, nvidia, mini ITX, Intel, gpu, gauntlet, gauntlet node, cpu, AVADirect, amd

PC Perspective Podcast #223 - 10/18/2012

Join us this week as we talk about the AVADirect Mini ITX Gaming Machine, Patriot Gauntlet 320GB Wireless Drive, Windows 8 Pricing and more!

You can subscribe to us through iTunes and you can still access it directly through the RSS page HERE.

The URL for the podcast is: http://pcper.com/podcast - Share with your friends!

  • iTunes - Subscribe to the podcast directly through the iTunes Store
  • RSS - Subscribe through your regular RSS reader
  • MP3 - Direct download link to the MP3 file

Hosts: Ryan Shrout, Jeremy Hellstrom, and Scott Michaud

This Podcast is brought to you by MSI!

Program length: 56:12

Podcast topics of discussion:

  1. Week in Reviews:
    1. 0:01:45 Why stereoscopic 3D is Awesome
    2. 0:09:15 Patriot Gauntlet 320 Wireless HDD
    3. 0:14:56 AVADirect Mini Gaming PC
  2. 0:24:30 This podcast is brought to you by MSI
  3. News items of interest:
    1. 0:25:30 AMD finances are bad, mkay?
      1. Slashing 30% of work force?
    2. 0:31:10 Windows 8 Retail pricing
      1. $69 with $30 Amazon Credit
    3. 0:35:55 ASUS announces PadFone 2
  4. 0:39:30 Alxtech.net/pcper Ad spot!! http://alxtech.net/pcper/
    1. Now at $0.50/slot for pcper viewers and listeners!!
  5. 0:42:00 Intel may have 10-core Ivy Bridge-E ready
  6. 0:43:45 Corsair raising money for charity with gaming marathon!
  7. 0:45:50 Win a FREE AMD APU on our YouTube channel!
  8. Closing:
    1. 0:48:00 Hardware / Software Pick of the Week
      1. Ryan: Gauntlet Node - just the enclosure
      2. Jeremy: Ridiculous and added infections too
      3. Josh: NOPE
      4. Allyn: NOPE
      5. Scott: 12 GB of RAM :D
  9. 1-888-38-PCPER or podcast@pcper.com
  10. http://pcper.com/podcast
  11. http://twitter.com/ryanshrout and http://twitter.com/pcper
  12. Closing/outro

 

 

 

Zotac Rumored To Be Preparing Three GTX 650 Ti Graphics Cards

Subject: Graphics Cards | October 7, 2012 - 10:37 PM |
Tagged: nvidia, kepler, gtx 650ti, gpu, gk106-220

The NVIDIA GeForce GTX 650 Ti is rumored to launch soon, and so far specifications have leaked on the reference design as well as two custom cards from ASUS and Galaxy. Zotac is the latest manufacturer to have its GTX 650 Ti lineup leaked, and the company is bringing as many as three graphics cards to the GK106-220 Kepler family. In all, Zotac is rumored to be launching one 1GB GTX 650 Ti and two 2GB cards – all with varied levels of factory overclocks. Video outputs on all three cards include two DVI and two HDMI connectors.

Zotac GTX 650 Ti 2GB AMP Editon.jpg

The Zotac GTX 650 Ti 1GB stays close to the reference design, but bumps up the GPU core clockspeed to 941 MHz. It also includes 1 GB of GDDR5 memory on a 128-bit interface clocked at 1350 MHz (5400 MHz effective), which matches the reference design. The price of this card is said to be $160, and it features a custom cooler from Zotac that is similar to (but smaller than) the cooler used on the company's GTX 660 Ti, which we recently reviewed.

The Zotac GTX 650 Ti 2GB is, as the name suggests, a GTX 650 Ti graphics card with 2GB of GDDR5 memory. It features Zotac's custom cooler, and a single PCI-E 6-pin power connector. The GPU clockspeed is 941 MHz and the memory clockspeed is 1350 MHz. The extra 1GB of graphics memory is nice, but it is still on a 128-bit interface so don't expect too much of a performance boost. MSRP of this card is rumored to be $180.

Finally, the GTX 650 Ti 2GB AMP! Edition is Zotac's highest-end GTX 650 Ti graphics card. It comes with the GK106-220 Kepler GPU and 2GB of GDDR5 memory on a 128-bit bus. Powered by a single 6-pin PEG connector, the factory overclocked graphics card is clocked at 1033 MHz for the GPU and 1550 MHz (6200 MHz effective) for the memory. The Zotac GTX 650 Ti AMP! Edition comes with the company's custom cooler and is the first card to feature factory overclocked memory. The rumored price of this card is $190. Unfortunately, that puts it fairly close to the price of a reference GTX 660, which may make this card a hard sell. The factory overclocks are impressive, but saving up the extra $30 needed to get a GTX 660 is likely a better idea because it will still offer better performance thanks to the additional CUDA cores and wider memory bus.
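As a quick aside on those "effective" memory clock figures: GDDR5 moves four data transfers per memory clock, which is why 1350 MHz is quoted as 5400 MHz effective and 1550 MHz as 6200 MHz effective, and memory bandwidth then follows from the effective rate and the 128-bit bus. A small sketch of that arithmetic:

```python
# GDDR5 effective data rate (4 transfers per clock) and resulting bandwidth.
def gddr5_effective_mhz(memory_clock_mhz):
    return memory_clock_mhz * 4

def bandwidth_gb_s(memory_clock_mhz, bus_width_bits):
    return gddr5_effective_mhz(memory_clock_mhz) * 1e6 * (bus_width_bits / 8) / 1e9

print(gddr5_effective_mhz(1350), bandwidth_gb_s(1350, 128))  # 5400 MHz, 86.4 GB/s
print(gddr5_effective_mhz(1550), bandwidth_gb_s(1550, 128))  # 6200 MHz, 99.2 GB/s
```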

The following chart compares the three Zotac cards to the leaked reference specifications.

Card                                 GPU Clockspeed   Memory Clockspeed   GDDR5 Amount   Price
Reference Specifications             925 MHz          1350 MHz            1 GB           ~$140
Zotac GTX 650 Ti 1GB                 941 MHz          1350 MHz            1 GB           $160
Zotac GTX 650 Ti 2GB                 941 MHz          1350 MHz            2 GB           $180
Zotac GTX 650 Ti 2GB AMP! Edition    1033 MHz         1550 MHz            2 GB           $190

Comparison of several GTX 650 Ti graphics cards versus the rumored reference specifications.

Further, this chart compares the leaked specifications of the top end cards from each manufacturer (at least, the ones we know of so far) to the highest-end Zotac GPU: the 2GB AMP! Edition.

Card                                 GPU Clockspeed   Memory Clockspeed   GDDR5 Amount   Video Outputs                  Price
Reference Specifications             925 MHz          1350 MHz            1 GB           2 x DVI, 1 x HDMI              ~$140
ASUS GTX 650 Ti TOP                  1033 MHz         1350 MHz            1 GB           2 x DVI, 1 x HDMI, 1 x VGA     €206 (~$267?)
Galaxy GTX 650 Ti GC 1GB             966 MHz          1350 MHz            1 GB           2 x DVI, 1 x HDMI              $150
Gigabyte GTX 650 Ti OC               1032 MHz         1350 MHz            2 GB           2 x DVI, 1 x HDMI, 1 x VGA     €169
Zotac GTX 650 Ti 2GB AMP! Edition    1033 MHz         1550 MHz            2 GB           2 x DVI, 2 x HDMI              $190
POV GTX 650 Ti 1GB Ultra Charged     1058 MHz         1350 MHz            1 GB           1 x DVI, 1 x HDMI, 1 x VGA     unknown

Inno3D is also rumored to have a GTX 650 Ti graphics card coming out, but we don't know its clockspeeds or price - only that it has two DVI and one HDMI connector, a single PEG power connector, and a custom cooler.

Overall, the Zotac card measures up well, with pricing being the only major disadvantage. The 2GB of memory, factory overclocks, and two HDMI ports are welcome additions, however. Interestingly, the Zotac card is not the highest clocked graphics card overall, but it is the only one that features overclocked memory. It is unclear to me why manufacturers of NVIDIA cards are so hesitant to push the memory clockspeeds (or if they are even allowed to), but Zotac seems to prove that it is possible to do so. 

Also worth pointing out is the rumored pricing, as some of these custom graphics cards are pushing $200 (especially the ASUS card when converted to USD... I'm sure that has to be in error...), and reference GTX 660 cards with the full GK106 Kepler core are only $230. It will be interesting to see if these rumored prices turn out to be true, and how well Zotac's factory overclocked 650 Ti models sell.

You can find more photos of the Zotac GTX 650 Ti graphics cards on the Videocardz website, and brush up on the GK106 Kepler GPU architecture in our review of the GTX 660 graphics card.

Source: Videocardz

PowerColor shows off new (cheaper) HD 7990 graphics card with lower clocks than Devil 13

Subject: Graphics Cards | October 5, 2012 - 06:40 PM |
Tagged: powercolor, gpu, dual gpu, amd, 7990

Towards the end of August, a new dual GPU graphics card from PowerColor was fully detailed. The dual GPU Devil 13 graphics card combined two AMD Radeon HD 7970 GPUs onto a single PCB with factory overclocks and a custom cooler. The HD 7990 6GB Devil 13 (3GB per GPU) is an awesome card, but it comes with a hefty $999 price tag.

This month, PowerColor has taken the wraps off of a (slightly) cheaper 7990 graphics card that is not clocked as high but uses a similar custom cooler as the Devil 13. It will allegedly be priced at around $900 USD.

PowerColor Radeon HD 7990.jpeg

The new PowerColor HD7990 (sans Devil 13 branding) features two HD7970 Graphics Core Next (GCN) based GPUs clocked at 900 MHz by default or 925 MHz when using the factory overclocked BIOS. (You can switch between the two modes by using the Dual BIOS switch.) As a point of comparison, standard Radeon 7970s have a reference clockspeed of 925 MHz, and PowerColor’s own HD 7990 Devil 13 is clocked at either 925 MHz or 1 GHz depending on BIOS switch position. PowerColor is likely binning 7970 GPUs that don’t quite make the cut as Devil 13 models for this new dual GPU 7990 graphics card with lower clockspeeds.

Fortunately, the memory clockspeed has not been downclocked on the new HD 7990. Each GPU has 3GB of GDDR5 memory on a 384-bit bus, and the memory is clocked at 1375 MHz.

Also good news is that the standard PowerColor 7990 appears to use the same custom cooler as the Devil 13 – but with an all-black design rather than the red and black color scheme. That includes a triple slot design, numerous heatpipes and fins, and two 92mm fans on either side of an 80mm fan.

PowerColor Radeon HD 7990 Dual GPU graphics card.jpeg

The graphics card measures 315mm x 140mm x 60mm and features two DVI, one HDMI, and two mini-DisplayPort video outputs. It has the same 850W minimum system power requirement as the Devil 13, and is powered by three 8-pin PCI-E power connectors in addition to power from the PCI-E 3.0 x16 slot.

Although this is an interesting card that is sure to attract enthusiasts, it lends credence to the idea that AMD is not going to release its own reference HD 7990 after all. At this point, so long as your case and motherboard permit, it would likely be best to go for two individual ~$400 Radeon 7970 GHz Edition cards in a CrossFire configuration. PowerColor does seem to have you covered if that’s not an option for you, though there is no word on exactly when this graphics card will be available – or what the final pricing will be.

Read more about AMD’s Graphics Core Next architecture at PC Perspective.

Source: PowerColor