Report: AMD's Dual-GPU Fiji XT Card Might Be Coming Soon

Subject: Graphics Cards | October 5, 2015 - 02:33 AM |
Tagged: rumor, report, radeon, graphics cards, Gemini, fury x, fiji xt, dual-GPU, amd

The AMD R9 Fury X, Fury, and Nano have all been released, but a dual-GPU Fiji XT card could be on the way soon according to a new report.


Back in June at AMD's E3 event we were shown Project Quantum, AMD's concept for a powerful dual-GPU system in a very small form factor. It was speculated that the system was actually housing an unreleased dual-GPU graphics card, which would have made sense given the very small size of the system (and the mini-ITX motherboard therein). Now a report from WCCFtech points to a shipping manifest that just might cover this new dual-GPU card, and the code-name is Gemini.


"Gemini is the code-name AMD has previously used in the past for dual GPU variants and surprisingly, the manifest also contains another phrase: ‘Tobermory’. Now this could simply be a reference to the port that the card shipped from...or it could be the actual codename of the card, with Gemini just being the class itself."

The manifest also indicates a Cooler Master cooler for the card; Cooler Master is the maker of the liquid cooling solution for the Fury X. As the Fury X has had its share of criticism over pump whine, it would be interesting to see how a dual-GPU cooling solution fares in that department, though we could also be seeing an entirely new generation of the pump. Of course, speculation on an unreleased product like this could be incorrect, and verifiable hard details aren't available yet. Still, if the dual-GPU card is based on a pair of full Fiji XT cores, the specs could be very impressive to say the least:

  • Core: Fiji XT x2
  • Stream Processors: 8192
  • GCN Compute Units: 128
  • ROPs: 128
  • TMUs: 512
  • Memory: 8 GB (4GB per GPU)
  • Memory Interface: 4096-bit x2
  • Memory Bandwidth: 1024 GB/s

In addition to the specifications above, the report also discusses the possibility of 17.2 TFLOPS of compute performance, based on twice the performance of the Fury X, which would make Gemini one of the most powerful single-card GPU solutions in the world. The card seems close enough to its final stage that we should expect to hear something official soon, but for now it's fun to speculate. Unless, of course, the speculation concerns the initial retail price: something at or above $1000 is, unfortunately, quite likely. We shall see.
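For what it's worth, those headline numbers follow directly from doubling the published Fury X figures. The sketch below is a back-of-the-envelope check, not an official AMD spec; the ~1.05 GHz clock is the Fury X reference clock and is purely an assumption for Gemini.

    # Back-of-the-envelope peak figures for a dual Fiji XT ("Gemini") card,
    # assuming two full Fiji XT GPUs at the Fury X reference clock (~1.05 GHz).
    STREAM_PROCESSORS_PER_GPU = 4096   # full Fiji XT
    CORE_CLOCK_GHZ = 1.05              # Fury X reference clock (assumed here)
    FLOPS_PER_SP_PER_CLOCK = 2         # one fused multiply-add per cycle
    HBM_BUS_WIDTH_BITS = 4096          # per GPU
    HBM_DATA_RATE_GBPS = 1.0           # first-generation HBM, 1 Gb/s per pin

    gpus = 2
    peak_tflops = gpus * STREAM_PROCESSORS_PER_GPU * FLOPS_PER_SP_PER_CLOCK * CORE_CLOCK_GHZ / 1000.0
    peak_bandwidth_gbs = gpus * HBM_BUS_WIDTH_BITS * HBM_DATA_RATE_GBPS / 8

    print(f"Peak compute:   {peak_tflops:.1f} TFLOPS")       # ~17.2 TFLOPS
    print(f"Peak bandwidth: {peak_bandwidth_gbs:.0f} GB/s")  # 1024 GB/s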

Source: WCCFtech

GPU Market Share: NVIDIA Gains in Shrinking Add-in Board Market

Subject: Graphics Cards | August 21, 2015 - 11:30 AM |
Tagged: PC, nvidia, Matrox, jpr, graphics cards, gpu market share, desktop market share, amd, AIB, add in board

While we recently reported on the decline of overall GPU shipments, a new report out of Jon Peddie Research covers the add-in board segment, giving us a look at the desktop graphics card market. So how are the big two (sorry, Matrox) doing?

GPU Supplier    Market Share This Quarter    Market Share Last Quarter    Market Share Last Year
AMD             18.0%                        22.5%                        37.9%
Matrox          0.00%                        0.1%                         0.1%
NVIDIA          81.9%                        77.4%                        62.0%

The big news is of course AMD's drop in market share: down 4.5 percentage points quarter-to-quarter, to just 18% from 37.9% a year ago. There will be many opinions as to why its share has been dropping over the last year, but it certainly didn't help that the 300-series GPUs are rebrands of the 200-series, and that the new Fury cards have had very limited availability so far.


The graph from Mercury Research illustrates what is almost a mirror image, with NVIDIA gaining roughly 20 percentage points of share as AMD lost roughly 20, a complete reversal of last year's standings. Ouch. Meanwhile (not pictured) Matrox didn't have a statistically meaningful quarter, but still managed to appear on the JPR report with 0.1% market share (somehow) last quarter.
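For reference, the year-over-year swing works out as follows; this is just a trivial sketch using the JPR percentages from the table above.

    # Year-over-year change in desktop AIB market share (JPR figures above).
    last_year = {"AMD": 37.9, "NVIDIA": 62.0, "Matrox": 0.1}
    this_qtr  = {"AMD": 18.0, "NVIDIA": 81.9, "Matrox": 0.0}

    for vendor in this_qtr:
        delta = this_qtr[vendor] - last_year[vendor]
        print(f"{vendor}: {delta:+.1f} percentage points year-over-year")
    # AMD: -19.9, NVIDIA: +19.9 -- essentially a straight transfer of share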

The desktop graphics card market isn't actually suffering quite as much as the overall PC market, thanks in particular to the enthusiast segment.

"The AIB market has benefited from the enthusiast segment PC growth, which has been partially fueled by recent introductions of exciting new powerful (GPUs). The demand for high-end PCs and associated hardware from the enthusiast and overclocking segments has bucked the downward trend and given AIB vendors a needed prospect to offset declining sales in the mainstream consumer space."

But not all is well: the overall add-in board attach rate with desktops "has declined from a high of 63% in Q1 2008 to 37% this quarter". This is indicative of the industry-wide trend toward integrated GPUs, with AMD APUs and Intel processor graphics, as illustrated by this graphic from the report.


The year-to-year numbers show an overall drop of 18.8%, and even with its dominant 81.9% market share NVIDIA has still seen shipments decrease by 12% this quarter. These trends seem to indicate a gloomy future for discrete graphics in the coming years, but for now we in the enthusiast community will continue to keep it afloat. It would certainly be nice to see some gains from AMD soon to keep things interesting, which might help bring prices down from the lofty $400 - $600 mark flagship cards command at the moment.

The Latest NVIDIA GeForce Drivers Are Here: Version 347.25 adds GTX 960 Support, MFAA to Most Games

Subject: Graphics Cards | January 23, 2015 - 11:09 PM |
Tagged: nvidia, gtx 960, graphics drivers, graphics cards, GeForce 347.25, geforce, game ready, dying light

With the release of the GTX 960 yesterday, NVIDIA also introduced a new version of the GeForce graphics driver: 347.25 WHQL.


NVIDIA states that the new driver adds "performance optimizations, SLI profiles, expanded Multi-Frame Sampled Anti-Aliasing support, and support for the new GeForce GTX 960".

While support for the newly released GPU goes without saying, the expanded MFAA support will bring better anti-aliasing performance to many existing games, as “MFAA support is extended to nearly every DX10 and DX11 title”. The release notes list three games that do not benefit: “Dead Rising 3, Dragon Age 2, and Max Payne 3 are incompatible with MFAA”.

347.25 also brings additional SLI profiles to add support for five new games, and a DirectX 11 SLI profile for one more:

SLI profiles added

  • Black Desert
  • Lara Croft and the Temple of Osiris
  • Nosgoth
  • Zhu Xian Shi Jie
  • The Talos Principle

DirectX 11 SLI profile added

  • Final Fantasy XIV: A Realm Reborn

The update is also the Game Ready Driver for Dying Light, a zombie action/survival game set to debut on January 27.


Much more information is available in the release notes on the driver download page, and be sure to check out Ryan's chat with NVIDIA's Tom Petersen from the live stream for a lot more information about this driver and the new GTX 960 graphics card.

Source: NVIDIA
Subject: Systems
Manufacturer: Various

The Road to 1080p


The stars of the show: a group of affordable GPU options

When preparing to build or upgrade a PC on any kind of budget, how can you make sure you're extracting the highest performance per dollar from the parts you choose? Even if you do your homework, comparing every combination of components is impossible. As system builders we always end up looking at various benchmarks here and there and then ultimately making assumptions. It's the nature of choosing products within an industry that's completely congested at every price point.

Another problem is that lower-priced graphics cards are usually benchmarked on high-end test platforms with Core i7 processors - which is actually necessary when you need to eliminate CPU bottlenecks while testing GPUs. So it seemed like it might be valuable (and might help narrow buying choices) to take a closer look at gaming performance from complete systems built with only budget parts, and see what these different combinations are capable of.

With this in mind I set out to see just how much it might take to reach acceptable gaming performance at 1080p (acceptable being 30 FPS+). I wanted to see where the real-world gaming bottlenecks might occur, and get a feel for the relationship between CPU and GPU performance. After all, if there was no difference in gaming performance between, say, a $40 and an $80 processor, why spend twice as much money? The same goes for graphics. We’re looking for “good enough” here, not “future-proof”.


The components in all their shiny boxy-ness (not everything made the final cut)

If money was no object we’d all have the most amazing high-end parts, and play every game at ultra settings with hundreds of frames per second (well, except at 4K). Of course most of us have limits, but the time and skill required to assemble a system with as little cash as possible can result in something that's actually a lot more rewarding (and impressive) than just throwing a bunch of money at top-shelf components.

The theme of this article is good enough, as in, don't spend more than you have to. I don't want this to sound like a bad thing. And if along the way you discover a bargain, or a part that overperforms for the price, even better!

Yet Another AM1 Story?

We’ve been talking about the AMD AM1 platform since its introduction, and it makes a compelling case for a low-cost gaming PC. With the “high-end” CPU in the lineup (the Athlon 5350) at just $60 and motherboards in the $35 range, it makes sense to start here. (I actually began this project with the Sempron 3850 as well, but it just wasn’t enough for 1080p gaming by a long shot, so those test results were quickly discarded.) But while the 5350 is an APU, I didn't end up testing it without a dedicated GPU. (OK, I eventually did, but its integrated graphics just can't handle 1080p.)

But this isn’t just a story about AM1 after all. Jumping right in here, let's look at the result of my research (and mounting credit card debt). All prices were accurate as I wrote this, but are naturally prone to fluctuate:

Tested Hardware

Graphics Cards
  • MSI AMD Radeon R7 250 2GB OC - $79.99
  • XFX AMD Radeon R7 260X - $109.99
  • EVGA NVIDIA GeForce GTX 750 - $109.99
  • EVGA NVIDIA GeForce GTX 750 Ti SC - $153.99

Processors
  • AMD Athlon 5350 2.05 GHz Quad-Core APU - $59.99
  • AMD Athlon X2 340X 3.2 GHz Dual-Core CPU - $44.99
  • AMD Athlon X4 760K 3.8 GHz Quad-Core CPU - $84.99
  • Intel Pentium G3220 3.0 GHz Dual-Core CPU - $56.99

Motherboards
  • ASRock AM1B-ITX Mini-ITX AMD AM1 - $39.99
  • MSI A88XM-E45 Micro-ATX AMD A88X - $72.99
  • ECS H81H3-M4 Micro-ATX Intel H81 - $47.99

Memory: 4GB Samsung OEM PC3-12800 DDR3-1600 (~$40 value)
Storage: Western Digital Blue 1TB Hard Drive - $59.99
Power Supply: EVGA 430 Watt 80 PLUS PSU - $39.99
OS: Windows 8.1 64-bit - $99

So there it is. I'm sure it won't please everyone, but there is enough variety in this list to support no fewer than 16 different combinations, and you'd better believe I ran each test on every one of those 16 system builds!
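Before getting to the benchmarks, here is a rough sketch of what those 16 combinations cost in parts, using the prices listed above. The motherboard pairings simply follow from each CPU's socket; performance, of course, is what the testing below is for.

    # Total component cost for each of the 16 CPU + GPU test combinations,
    # using the prices listed above. Each CPU implies a specific motherboard.
    gpus = {
        "R7 250 2GB OC":  79.99,
        "R7 260X":       109.99,
        "GTX 750":       109.99,
        "GTX 750 Ti SC": 153.99,
    }
    cpus = {  # name: (CPU price, matching motherboard price)
        "Athlon 5350":    (59.99, 39.99),  # ASRock AM1B-ITX
        "Athlon X2 340X": (44.99, 72.99),  # MSI A88XM-E45
        "Athlon X4 760K": (84.99, 72.99),  # MSI A88XM-E45
        "Pentium G3220":  (56.99, 47.99),  # ECS H81H3-M4
    }
    common = 40.00 + 59.99 + 39.99 + 99.00  # RAM (~$40) + HDD + PSU + Windows 8.1

    for cpu, (cpu_price, board_price) in cpus.items():
        for gpu, gpu_price in gpus.items():
            total = cpu_price + board_price + gpu_price + common
            print(f"{cpu:>15} + {gpu:<14} ${total:8.2f}")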

Keep reading our look at budget gaming builds for 1080p!!

GCN-Based AMD 7000 Series GPUs Will Fully Support DirectX 11.2 After Driver Update

Subject: Graphics Cards | August 26, 2013 - 01:24 AM |
Tagged: amd, Windows 8.1, microsoft, directx 11.2, graphics cards, gaming, GCN

Earlier this month, several websites reported that AMD’s latest Graphics Core Next (GCN) based graphics cards (the 7000 series and 8000 series OEM lines) would not be compatible with the Windows 8.1-only DirectX 11.2 API. This was inferred from a statement made by AMD engineer Layla Mah in an interview with c't Magazin.

AMD Radeon 7970 GHz Edition.jpg

An AMD Radeon HD 7970 GHz Edition.

Fortunately, the GCN-based cards will fully support DirectX 11.2 once an updated driver has been released. As it turns out, Microsoft’s final DirectX 11.2 specification ended up being slightly different than what AMD expected. As a result, the graphics cards do not currently fully support the API. The issue is not one of hardware, however, and an updated driver can allow the GCN-based 7000 series hardware to fully support the latest DirectX 11.2 API and major new features such as tiled resources.

The updated driver will reportedly be released sometime in October to coincide with Microsoft’s release of Windows 8.1. Specifically, Maximum PC quoted AMD in stating the following:

"The Radeon HD 7000 series hardware architecture is fully DirectX 11.2-capable when used with a driver that enables this feature. AMD is planning to enable DirectX 11.2 with a driver update in the Windows 8.1 launch timeframe in October, when DirectX 11.2 ships. Today, AMD is the only GPU manufacturer to offer fully-compatible DirectX 11.1 support, and the only manufacturer to support Tiled Resources Tier-2 within a shipping product stack.”

So fret not, Radeon 7000-series owners, you will be able to fully utilize DX 11.2 and all its features once games start implementing them, and assuming you upgrade to Windows 8.1.

Source: Maximum PC

GTC 2013: Pedraforca Is A Power Efficient ARM + GPU Cluster For Homogeneous (GPU) Workloads

Subject: General Tech, Graphics Cards | March 20, 2013 - 01:47 PM |
Tagged: tesla, tegra 3, supercomputer, pedraforca, nvidia, GTC 2013, GTC, graphics cards, data centers

There is a lot of talk about heterogeneous computing at GTC, in the sense of adding graphics cards to servers. If you have HPC workloads that can benefit from GPU parallelism, adding GPUs gives you more computing performance in less physical space, and at lower power, than a CPU-only cluster (for equivalent TFLOPS).

However, one session at GTC took things to the opposite extreme. Instead of a CPU-only cluster or a mixed cluster, Alex Ramirez (leader of the Heterogeneous Architectures Group at the Barcelona Supercomputing Center) is proposing a homogeneous GPU cluster called Pedraforca.

Pedraforca V2 combines NVIDIA Tesla GPUs with low-power ARM processors. Each node consists of the following components:

  • 1 x Mini-ITX carrier board
  • 1 x Q7 module (which hosts the ARM SoC and memory)
    • Current config is one Tegra 3 @ 1.3GHz and 2GB DDR2
  • 1 x NVIDIA Tesla K20 accelerator card (1170 GFLOPS)
  • 1 x InfiniBand 40Gb/s card (via Mellanox ConnectX-3 slot)
  • 1 x 2.5" SSD (SATA 3 MLC, 250GB)

The ARM processor is used solely for booting the system and facilitating GPU communication between nodes; it is not intended to be used for computing. According to Dr. Ramirez, in situations where running code on a CPU would be faster, it would be best to have a small number of Intel Xeon-powered nodes handle the CPU-favorable computing and then offload the parallel workloads to the GPU cluster over the InfiniBand connection (though this is less than ideal; Pedraforca is most efficient with data sets that can be processed solely on the Tesla cards).


While Pedraforca is not necessarily locked to NVIDIA's Tegra hardware, Tegra 3 is currently the only SoC that meets the project's needs, since the system requires an ARM chip with PCI Express support. The Tegra 3 SoC has four PCI-E lanes, so the carrier board uses two PLX chips to allow both the Tesla and InfiniBand cards to be connected.

The researcher stated that he is also looking forward to using NVIDIA's upcoming Logan processor in the Pedraforca cluster. It will reportedly be possible to upgrade existing Pedraforca clusters with the new chips by replacing the existing (Tegra 3) Q7 module with one that has the Logan SoC when it is released.

Pedraforca V2 has an initial cluster size of 64 nodes. While the speaker was reluctant to provide TFLOPS numbers, as they would depend on the workload, with 64 Tesla K20 cards it should offer respectable performance. The intent of the cluster is to save power by using a low-power host CPU. If your server kernel and applications can run on GPUs alone, there are noticeable power savings to be had by switching from a ~100W Intel Xeon chip to a low-power (approximately 2-3W) Tegra 3 processor. If you have a kernel that needs to run on a CPU, the recommendation is to run it on an Intel server and transfer just the GPU work to the Pedraforca cluster. Each Pedraforca node is reportedly under 300W, with the Tesla card accounting for the majority of that figure. Despite the limitations, and the niche nature of the workloads and software needed to get the full power-saving benefit, Pedraforca is certainly an interesting take on a homogeneous server cluster!
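To put the session's figures in perspective, here is a rough sketch of what the initial 64-node cluster adds up to. These are peak numbers only; as noted, sustained throughput depends entirely on the workload, and the host-power values are the approximate figures quoted in the talk.

    # Rough aggregate numbers for the initial 64-node Pedraforca V2 cluster,
    # based on the per-node figures quoted in the session.
    NODES = 64
    K20_PEAK_GFLOPS = 1170   # peak per Tesla K20 accelerator
    NODE_POWER_W = 300       # "under 300W" per node, mostly the Tesla card
    TEGRA3_HOST_W = 3        # ~2-3W ARM host processor
    XEON_HOST_W = 100        # ~100W for the Xeon host it replaces

    peak_tflops = NODES * K20_PEAK_GFLOPS / 1000.0
    cluster_power_kw = NODES * NODE_POWER_W / 1000.0
    host_savings_kw = NODES * (XEON_HOST_W - TEGRA3_HOST_W) / 1000.0

    print(f"Peak throughput (all GPUs busy): ~{peak_tflops:.0f} TFLOPS")     # ~75 TFLOPS
    print(f"Cluster power budget:            <= {cluster_power_kw:.1f} kW")  # 19.2 kW
    print(f"Host-CPU power saved vs Xeons:   ~{host_savings_kw:.1f} kW")     # ~6.2 kW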


In another session, on the path to exascale computing, power use in data centers was listed as one of the biggest hurdles to reaching exaflop levels of performance. Pedraforca is not the answer to exascale, but it should at least be a useful learning experience in wringing the most parallelism out of code and pushing GPGPU to its limits, and that research will help other clusters use GPUs more efficiently as researchers explore the future of computing.

The Pedraforca project built upon research conducted on Tibidabo, a multi-core ARM CPU cluster, and CARMA (CUDA on ARM development kit) which is a Tegra SoC paired with an NVIDIA Quadro card. The two slides below show CARMA benchmarks and a Tibidabo cluster (click on image for larger version).

Stay tuned to PC Perspective for more GTC 2013 coverage!


GTC 2013: Jen-Hsun Huang Takes the Stage to Discuss NVIDIA's Future, New Hardware

Subject: General Tech, Graphics Cards | March 19, 2013 - 02:55 PM |
Tagged: unified virtual memory, ray tracing, nvidia, GTC 2013, grid vca, grid, graphics cards

Today NVIDIA CEO Jen-Hsun Huang stepped on stage to present the GTC keynote. In the presentation (which was live streamed and archived on the GTC website), NVIDIA covered five major points, looking back over the past year and ahead to the future of its mobile and professional products. In addition to the product roadmap, NVIDIA discussed the state of computer graphics and GPGPU software, and remote graphics and GPU virtualization were also on tap. Finally, towards the end of the keynote, the company revealed its first appliance, the NVIDIA GRID VCA. The culmination of NVIDIA's GRID and GPU virtualization technology, the VCA is a device that hosts up to 16 virtual machines, each of which can tap into one of 16 Kepler-based graphics processors (8 cards, 2 GPUs per card) to fully hardware-accelerate software running on the VCA. Three new mobile Tegra parts and two new desktop graphics processors were also hinted at, with improvements to power efficiency and performance.


On the desktop side of things, NVIDIA's roadmap included two new GPUs. Following Kepler, NVIDIA will introduce Maxwell and then Volta. Maxwell will feature a new virtualized memory technology called Unified Virtual Memory, which will allow both the CPU and GPU to read from a single (virtual) memory store. Much like the promise of AMD's Kaveri APU, Unified Virtual Memory should bring speed improvements to heterogeneous applications, because data will not have to be copied between the GPU and CPU before it can be processed. Server applications in particular should benefit from the shared memory technology. NVIDIA did not provide details, but from the sound of it the CPU and GPU each continue to write to their own physical memory, with a layer of virtualized memory on top that allows the two (or more) processors to read from each other's memory stores.
Following Maxwell, Volta will be a physically smaller chip with more transistors (likely built on a smaller process node). In addition to power efficiency improvements over Maxwell, it steps up memory bandwidth significantly. NVIDIA will use TSV (through-silicon via) technology to physically mount the graphics DRAM chips over the GPU (attached to the same silicon substrate electrically). According to NVIDIA, this TSV-mounted memory will achieve up to 1 terabyte per second of memory bandwidth, a notable increase over existing GPUs.


NVIDIA continues to pursue the mobile market with its line of Tegra chips, which pair an ARM CPU, an NVIDIA GPU, and an SDR modem. Two new mobile chips, Logan and Parker, will follow Tegra 4. Both will support the full CUDA 5 stack and OpenGL 4.3 out of the box. Logan will feature an on-chip Kepler-based graphics processor that can do “everything a modern computer ought to do,” according to NVIDIA. Parker will have a yet-to-be-revealed graphics processor (a Kepler successor). This mobile chip will use 3D FinFET transistors, packing a greater number of transistors into a smaller package than previous Tegra parts (it will be about the size of a dime), and NVIDIA also plans to ramp up the frequency to wrangle more performance out of the mobile chip. NVIDIA has stated that Logan silicon should be completed towards the end of 2013, with the mobile chips entering production in 2014.


Interestingly, Logan has a sister chip that NVIDIA is calling Kayla. This mobile chip is capable of running ray tracing applications and features OpenGL geometry shaders. It can support GPGPU code and will be compatible with Linux.

NVIDIA has been pushing CUDA for several years now, and the company has seen respectable adoption, growing from one Tesla-powered supercomputer in 2008 to having its graphics cards used in 50 supercomputers today, with 500 million CUDA processors on the market. There are now reportedly 640 universities working with CUDA and 37,000 academic papers on CUDA.


Finally, NVIDIA's hinted-at new product announcement was the NVIDIA GRID VCA, a GPU virtualization appliance that hooks into the network and can deliver up to 16 virtual machines running independent applications. These GPU-accelerated workspaces can be presented to thin clients over the network by installing the GRID client software on users' workstations. The specifications of the GRID VCA are rather impressive as well.

The GRID VCA features:

  • 2 x Intel Xeon processors with 16 threads each (32 total threads)
  • 192GB to 384GB of system memory
  • 8 Kepler-based graphics cards, with two GPUs each (16 total GPUs)
  • 16 x GPU-accelerated virtual machines

The GRID VCA fits into a 4U case. It can deliver remote graphics to workstations, and is allegedly fast enough that GPU-accelerated software feels equivalent to running it on the local machine (at least over a LAN). The GRID Visual Computing Appliance will come in two flavors at different price points. The first has 8 Kepler GPUs with 4GB of memory each, 16 CPU threads, and 192GB of system memory for $24,900. The other version costs $34,900 and features 16 Kepler GPUs (4GB of memory each), 32 CPU threads, and 384GB of system memory. On top of the hardware cost, NVIDIA is also charging licensing fees: while both GRID VCA devices can support unlimited devices, the licenses cost $2,400 and $4,800 per year respectively.
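For a quick sense of how the two configurations compare on price, here is a small sketch of the first-year cost per GPU using the list prices and license fees above; this is just arithmetic on the announced numbers, not a total-cost-of-ownership estimate.

    # First-year cost per Kepler GPU for the two GRID VCA configurations
    # (hardware list price plus one year of licensing, as announced).
    configs = {
        "8-GPU VCA":  {"hardware": 24900, "license_per_year": 2400, "gpus": 8},
        "16-GPU VCA": {"hardware": 34900, "license_per_year": 4800, "gpus": 16},
    }

    for name, c in configs.items():
        first_year = c["hardware"] + c["license_per_year"]
        per_gpu = first_year / c["gpus"]
        print(f"{name}: ${first_year:,} first year, ~${per_gpu:,.0f} per GPU")
    # 8-GPU VCA:  $27,300 first year, ~$3,412 per GPU
    # 16-GPU VCA: $39,700 first year, ~$2,481 per GPU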


Overall, it was an interesting keynote, and the proposed graphics cards look to offer some unique and necessary features that should help hasten the day of ubiquitous general-purpose GPU computing. The Unified Virtual Memory was something I was not expecting, and it will be interesting to see how AMD responds. AMD is already promising shared memory in its Kaveri APU, but I am interested to see the details of how NVIDIA and AMD will accomplish shared memory with dedicated graphics cards (and whether CrossFire/SLI setups will all share a single memory pool).

Stay tuned to PC Perspective for more GTC 2013 Coverage!

NVIDIA Launches GTX 650 for Budget Gamers

Subject: Graphics Cards | September 13, 2012 - 09:38 AM |
Tagged: nvidia, kepler, gtx 650, graphics cards, geforce

Ah, Kepler: the architecture originally intended as midrange that took the world by storm and let NVIDIA scale it from the dual-GPU GeForce GTX 690 all the way down to budget discrete HTPC cards. So far this year we have seen the company push Kepler to its limits by adding GPU Boost and placing it in the GTX 690 and GTX 680. Those cards were great, but commanded a price premium that most gamers could not afford. Enter the GTX 670 and GTX 660 Ti earlier this year, and Kepler started to become an attractive option for gamers wanting a high-end single-GPU system without breaking the bank. Those cards, at $399 and $299 respectively, were a step in the right direction toward making the Kepler architecture available to everyone, but were still a bit pricey if you were on a tighter budget for your gaming rig (or needed to factor in the Significant Other Approval Process™).

Well, Kepler has now been on the market for about six months, and I’m excited to (finally) announce that NVIDIA is launching its first Kepler-based budget gaming card. The NVIDIA GeForce GTX 650 brings Kepler down to the ever-attractive $109 price point and is even capable of playing new games at 1080p above 30 FPS. Not bad for such a cheap card!

GTX 650.jpg

With the GTX 650 you are making some sacrifices on the hardware side, but things are not all bad. The card features a mere 384 CUDA cores and 1GB of GDDR5 memory on a 128-bit bus. That is a huge step down compared to the GTX 660 Ti’s 1344 CUDA cores and 2GB of memory on a 192-bit bus - but that card is also $200 more. And while the GTX 650 runs its memory at 5 Gbps, NVIDIA was not shy about pumping up the GPU core clock speed: no boost functionality was mentioned, but the base clock is a respectable 1058 MHz. Even better, the card only requires a single 6-pin PCI-E power connector and has a TDP of 64W (less than half that of its higher-end GeForce brethren).
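From those specifications, the card's theoretical peaks are easy to work out. The sketch below is a quick back-of-the-envelope calculation assuming the usual two FLOPs per CUDA core per clock; these are not NVIDIA-provided figures.

    # Theoretical peaks for the GTX 650, from the specifications above.
    CUDA_CORES = 384
    CORE_CLOCK_GHZ = 1.058        # base clock; no GPU Boost on this part
    MEM_BUS_BITS = 128
    MEM_DATA_RATE_GBPS = 5.0      # effective GDDR5 data rate per pin

    peak_gflops = CUDA_CORES * 2 * CORE_CLOCK_GHZ            # 2 FLOPs/core/clock (FMA)
    peak_bandwidth_gbs = MEM_BUS_BITS / 8 * MEM_DATA_RATE_GBPS

    print(f"Peak FP32 compute:     ~{peak_gflops:.0f} GFLOPS")      # ~813 GFLOPS
    print(f"Peak memory bandwidth: {peak_bandwidth_gbs:.0f} GB/s")  # 80 GB/s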

Specs Comparison

The following chart compares the specifications of the cards from the new GeForce GTX 650 up through the GTX 670.

GTX 650 and GTX 660 Specifications.jpg

Click on the above chart for a larger image.

Gaming Potential?

The really important question is how well it handles games, and NVIDIA showed off several slides with claimed performance numbers. Taken with a grain of salt, since they come from the same company that built the hardware, the numbers suggest the GTX 650 is a capable GPU for the price. The company compared it to both its own GTS 450 (Fermi) and AMD’s Radeon HD 7750. Naturally, the GTX 650 was shown in a good light in both comparisons, but nothing looked egregious.

NVIDIA is claiming an 8x performance increase versus the old 9500 GT, and an approximately 20% speed increase versus the GTS 450. Improvements to the hardware itself have also allowed NVIDIA to improve performance while requiring less power; the company claims the GTX 650 uses up to half the power of its Fermi predecessor.

20percent better than fermi.jpg

The comparison between the GTX 650 and AMD Radeon HD 7750 is harder to gauge, though the 7750 is priced competitively around the GTX 650’s $109 MSRP so it will be interesting to see how that shakes out. NVIDIA is claiming anywhere from 1.08 to 1.34 times the performance of the 7750 in a number of games, shown in the chart below.

GTX 650 vs HD 7750.jpg

If you have been eyeing a 7750, the GTX 650 looks like it might be the better option, assuming reviewers are able to replicate NVIDIA’s results.

FPS GTX 650.png

Keep in mind, these are NVIDIA's numbers and not from our reviews.

Unfortunately, NVIDIA did not benchmark the GTS 450 against the GTX 650 in these games. Rather, they compared the new card to the 9500 GT to show the upgrade potential for anyone still holding onto the older hardware (pushing the fact that you can run DirectX 11 at 1080p if you upgrade). Still, the results for the 650 are interesting on their own. In MechWarrior Online, World of Warcraft, and Max Payne 3 the budget GPU managed at least 40 FPS at 1920x1080 in DirectX 11 mode. Nothing groundbreaking, for sure, but fairly respectable for the price. Assuming it can hold at least 30 FPS in other recent games, this will be a good option for DIY builders who want to get started with PC gaming on a budget.

All in all, the NVIDIA GeForce GTX 650 looks to be a decent card and finally rounds out the Kepler lineup. At this price point NVIDIA can finally give every gamer a Kepler option instead of continuing to rely on older cards to answer AMD at the lower price points. I’m interested to see how AMD answers this, and specifically whether gamers will see more price cuts on the AMD side.

GTX 650 Specs.jpg

If you have not already, I strongly recommend you give our previous Kepler GPU reviews a read through for a look at what NVIDIA’s latest architecture is all about.

PC Perspective Kepler-based GTX Graphics Card Reviews:

NVIDIA GTX 660 Ti May Be Pricier Than Gamers Have Been Hoping For

Subject: Graphics Cards | August 5, 2012 - 08:33 PM |
Tagged: nvidia, kepler, gtx 660ti, graphics cards, gaming

Update: US-based retailers are starting to list the GTX 660 Ti as well, and at least one card is listed for $299, so there may be some hope despite the $349.99 MSRP. See the $299 PNY GTX 660 Ti graphics card here.

The GTX 660 Ti is an NVIDIA Kepler-based graphics card that has seen several leaks and even a full review ahead of official release. In the leaked review, rumored specifications were confirmed, and the card was shown to be very close to the existing GTX 670 GPU. Sometimes it was merely a couple of frames behind the $400+ GPU.


On the podcast, Ryan, Josh, and Jeremy speculated that, should the GTX 660 Ti land closer to the $300 mark of the rumored $300-$400 range, it would be a very desirable gaming graphics card. Hardware-wise, the GTX 660 Ti is nearly identical to the GTX 670, with only a reduction in the memory bus from 256-bit to 192-bit. For $100 less, gamers would be getting extremely close to the performance of the much more expensive GTX 670 Kepler card.

Unfortunately, it may not be the gaming card that people have been hoping for. According to Tom’s Hardware, a Swedish retailer has listed the GTX 660 Ti on its website for pre-orders at just under $400. At that price point, the GTX 660 Ti is much less desirable, and will be hard to justify versus springing for the GTX 670 for a bit more money.

Here’s hoping that the pre-order pricing is simply higher than the prices people will see once actual cards from NVIDIA and partners are officially released en masse. Do you think that there is still hope for the GTX 660 Ti as the gaming card of choice, or will you be looking elsewhere? 

GTX 660 Ti Rumors Confirmed By Leaked Review

Subject: Graphics Cards | July 31, 2012 - 03:03 PM |
Tagged: nvidia, kepler, gtx 670, gtx 660ti, graphics cards

Last week, additional information leaked about the upcoming Kepler-based NVIDIA GTX 660 Ti graphics card. Those rumors suggested that its GPU would be very similar to the one found in the existing GTX 670 (which we recently reviewed).

We speculated that the GTX 660 Ti could be an awesome card, assuming the price was right. We still do not have any pricing information (the best guess from the rumors is somewhere in the $300 to $400 range), but thanks to Tweaktown breaking the release date, we now know that the latest rumors were true.


The GTX 660 Ti will feature 1344 CUDA cores and 2GB of GDDR5 memory on a 192-bit memory bus. This puts the GTX 660 Ti very close to the current GTX 670 in terms of potential performance, and according to the leaked benchmarks that seems to be the case: the GTX 660 Ti is only a couple of frames behind the GTX 670 in Just Cause 2 and Dirt 3, for example. Considering this card is likely to use a bit less power and cost less, it is shaping up to be a rather desirable card. If it ends up on the low end of the $300-$400 range (rumors suggest otherwise, however), I suspect many gamers are going to opt for this new Kepler card rather than the more expensive and only very slightly faster GTX 670.

What do you think about the GTX 660 Ti? Is it the card you were hoping for?

Source: Tweaktown