A TITANic roundup of GPUs

Subject: Graphics Cards | March 19, 2015 - 03:20 PM |
Tagged: titan x, nvidia, gtx titan x, gm200, geforce, 4k

You have read Ryan's review of the $999 behemoth from NVIDIA and now you can take the opportunity to see what other reviewers think of the card.  [H]ard|OCP tested it against the GTX 980, which shares the same cooler and is every bit as long as the TITAN X.  Along the way they found a use for the 12GB of VRAM, as both Watch_Dogs and Far Cry 4 used over 7GB of memory when tested at 4K resolution, though the frame rates were not really playable; you will need at least two TITAN Xs to pull that off.  They will be revisiting this card in the future, providing more tests for a card with incredible performance and an even more incredible price.

14265930473m7LV4iNyQ_1_6_l.jpg

"The TITAN X video card has 12GB of VRAM, not 11.5GB, 50% more streaming units, 50% more texture units, and 50% more CUDA cores than the current GTX 980 flagship NVIDIA GPU. While this is not our full TITAN X review, this preview focuses on what the TITAN X delivers when directly compared to the GTX 980."

Source: [H]ard|OCP
Manufacturer: NVIDIA

GM200 Specifications

With the release of the GeForce GTX 980 back in September of 2014, NVIDIA took the lead in performance among single-GPU graphics cards. The GTX 980 and GTX 970 were both impressive options. The GTX 970 offered better performance than the R9 290, as did the GTX 980 compared to the R9 290X; on top of that, both did so while running at lower power consumption and while including new features like DX12 feature level support, HDMI 2.0 and MFAA (multi-frame anti-aliasing). Because of those factors, the GTX 980 and GTX 970 were fantastic sellers, helping to push NVIDIA’s market share over 75% as of the 4th quarter of 2014.

IMG_1954.JPG

But in the back of our minds, and in the minds of many NVIDIA fans, we knew that the company had another GPU it was holding on to: the bigger, badder version of Maxwell. The only question was WHEN the company would release it and sell us a new flagship GeForce card. In most instances, this decision is based on the competitive landscape, such as when AMD might finally update its Radeon R9 290X Hawaii family of products with the rumored R9 390X. Perhaps NVIDIA is tired of waiting, or maybe the strategy is to launch before the Fiji GPUs make their debut. Either way, NVIDIA officially took the wraps off of the new GeForce GTX TITAN X at the Game Developers Conference two weeks ago.

At the session hosted by Epic Games’ Tim Sweeney, NVIDIA CEO Jen-Hsun Huang arrived on stage just as Sweeney was lamenting the need for more GPU horsepower for Epic's UE4 content. In his hands was the first TITAN X, and he revealed only a couple of specifications: the card would have 12GB of memory and would be based on a GPU with 8 billion transistors.

Since that day, you have likely seen picture after picture, rumor after rumor, about specifications, pricing and performance. Wait no longer: the GeForce GTX TITAN X is here. With a $999 price tag and a GPU with 3072 CUDA cores, we clearly have a new king of the court.

Continue reading our review of the NVIDIA GeForce GTX Titan X 12GB Graphics Card!!

GTC 2015: NVIDIA Roadmap Shows Pascal with 3D Memory, NVLink and Mixed Precision Compute

Subject: Graphics Cards | March 17, 2015 - 01:47 PM |
Tagged: pascal, nvidia, gtc 2015, GTC, geforce

At the keynote of the GPU Technology Conference (GTC) today, NVIDIA CEO Jen-Hsun Huang disclosed some more updates on the roadmap for future GPU technologies.

GTC-36.jpg

Most of the detail was around Pascal, due in 2016, which will introduce three new features: mixed-precision compute, 3D (stacked) memory, and NVLink. Mixed precision is a method of computing in FP16, allowing calculations to run much faster, at lower accuracy, when full single or double precision is not necessary. Keeping in mind that Maxwell doesn't have an implementation with full-speed double-precision compute (today), it would seem that NVIDIA is targeting different compute tasks moving forward. Though details are short, mixed precision would likely indicate processing cores that can handle both data types.
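To make the trade-off concrete, here is a minimal sketch (my own illustration in Python/NumPy, not anything NVIDIA showed) of what dropping from FP32 to FP16 buys and costs: half the storage and bandwidth per value, in exchange for roughly three decimal digits of precision.

import numpy as np

x32 = np.array([0.1, 1.0 / 3.0, 1234.5678], dtype=np.float32)  # single precision
x16 = x32.astype(np.float16)                                    # same values stored as FP16

print("bytes per value:", x32.itemsize, "(FP32) vs", x16.itemsize, "(FP16)")  # 4 vs 2
print("FP32:", x32)
print("FP16:", x16)
print("rounding error:", np.abs(x32 - x16.astype(np.float32)))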

3D memory stacks DRAM on the same package as the GPU to improve overall memory bandwidth. The visual diagram that NVIDIA showed on stage indicated that Pascal would have 750 GB/s of bandwidth, compared to 300-350 GB/s on Maxwell today.
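For context, the Maxwell figure lines up with simple arithmetic on today's GDDR5 setups; this is a quick sketch with my own numbers, assuming the TITAN X's 384-bit bus at an effective 7 Gbps per pin, not anything from the slide.

def memory_bandwidth_gbs(bus_width_bits, data_rate_gbps):
    # theoretical peak: bytes per transfer * effective transfer rate
    return bus_width_bits / 8 * data_rate_gbps

maxwell = memory_bandwidth_gbs(384, 7.0)   # ~336 GB/s, inside the quoted 300-350 range
print(maxwell)
print(750 / maxwell)                       # Pascal's claimed 750 GB/s is roughly 2.2x that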

NVLink is a new way of connecting GPUs, improving bandwidth by more than 5x over current implementations of PCI Express. NVIDIA claims this will allow as many as 8 GPUs to be connected for deep learning performance improvements (up to 10x). What that means for gaming has yet to be discussed.
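As rough context (my arithmetic, not a figure from the keynote), a PCI Express 3.0 x16 link tops out around 16 GB/s in each direction, so "more than 5x" implies something in the neighborhood of 80 GB/s per link.

pcie3_x16 = 8e9 * 16 * (128 / 130) / 8 / 1e9      # 8 GT/s per lane, 16 lanes, 128b/130b encoding
print(round(pcie3_x16, 2), "GB/s per direction")  # ~15.75
print(round(5 * pcie3_x16, 1), "GB/s at 5x")      # ~78.8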

GTC-38.jpg

NVIDIA made some other interesting claims as well. Pascal will deliver more than 2x the performance per watt of Maxwell, even without the three new features listed above. It will also ship (in a compute-targeted product) with a 32GB memory system, compared to the 12GB of memory announced on the Titan X today. Pascal will also offer 4x the performance in mixed-precision compute.

PCPer Live! GeForce GTX TITAN X Live Stream and Giveaway!

Subject: Graphics Cards | March 16, 2015 - 07:13 PM |
Tagged: video, tom petersen, titan x, nvidia, maxwell, live, gtx titan x, gtx, gm200, geforce

UPDATE 2: If you missed the live stream, we now have the replay available below!

UPDATE: The winner has been announced: congrats to Ethan M. for being selected as the random winner of the GeForce GTX TITAN X graphics card!!

Get yourself ready, it’s time for another GeForce GTX live stream hosted by PC Perspective’s Ryan Shrout! This time the focus is NVIDIA's brand-new GeForce GTX TITAN X graphics card, first teased a couple of weeks back at GDC. NVIDIA's Tom Petersen will be joining us live from the GPU Technology Conference show floor to discuss the GM200 GPU and its performance, and to show off some demos of the hardware in action.

GeForce_GTX_TITANX_3Qtr.jpg

And what's a live stream without a prize? One lucky live viewer will win a GeForce GTX TITAN X 12GB graphics card of their very own! That's right - all you have to do is tune in for the live stream tomorrow afternoon and you could win a Titan X!!

pcperlive.png

NVIDIA GeForce GTX TITAN X Live Stream and Giveaway

1pm PT / 4pm ET - March 17th

PC Perspective Live! Page

Need a reminder? Join our live mailing list!

The event will take place Tuesday, March 17th at 1pm PT / 4pm ET at http://www.pcper.com/live. There you’ll be able to catch the live video stream as well as use our chat room to interact with the audience. To win the prize you will have to be watching the live stream, with exact details of the methodology for handing out the goods coming at the time of the event.

Tom has a history of being both informative and entertaining, and these live streaming events are always full of fun and technical information that you can get literally nowhere else.

If you have questions, please leave them in the comments below and we'll look through them just before the start of the live stream. Of course, you'll be able to tweet us questions @pcper and we'll be keeping an eye on the IRC chat as well for more inquiries. What do you want to know and hear from Tom or me?

So join us! Set your calendar for this coming Tuesday at 1pm PT / 4pm ET and be here at PC Perspective to catch it. If you are a forgetful type of person, sign up for the PC Perspective Live mailing list that we use exclusively to notify users of upcoming live streaming events including these types of specials and our regular live podcast. I promise, no spam will be had!

Huge thanks to ASUS for supplying a new G751JY notebook, featuring an Intel Core i7-4710HQ and a GeForce GTX 980M 4GB GPU to power our live stream from GTC!!

NVIDIA Releases GeForce GTX 960M and GTX 950M Mobile Graphics

Subject: Graphics Cards | March 12, 2015 - 11:13 PM |
Tagged: nvidia, maxwell, GTX 960M, GTX 950M, gtx 860m, gtx 850m, gm107, geforce

NVIDIA has announced new GPUs to round out their 900-series mobile lineup, and the new GTX 960M and GTX 950M are based on the same GM107 core as the previous 860M/850M parts.

geforce-gtx-960m-3qtr.png

Both GPUs feature 640 CUDA cores and are separated by Base clock speed, with the GTX 960M operating at 1096 MHz and the GTX 950M at 914 MHz. Both have unlisted maximum Boost frequencies that will likely vary based on thermal constraints. The memory interface is the other differentiator between the GPUs: the GTX 960M sports dedicated GDDR5 memory, while the GTX 950M can be implemented with either DDR3 or GDDR5. Both the GTX 960M and 950M use the same 128-bit memory interface and support up to 4GB of memory.
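Using the standard peak-throughput formula (CUDA cores x 2 FLOPs per clock for a fused multiply-add x clock speed), here is a quick sketch of what those listed Base clocks imply; these are my back-of-the-envelope numbers, not NVIDIA figures.

def peak_fp32_gflops(cuda_cores, clock_mhz):
    # each core can retire one fused multiply-add (2 FLOPs) per clock
    return cuda_cores * 2 * clock_mhz / 1000

print("GTX 960M:", peak_fp32_gflops(640, 1096), "GFLOPS")  # ~1403 at Base clock
print("GTX 950M:", peak_fp32_gflops(640, 914), "GFLOPS")   # ~1170 at Base clock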

As reported by multiple sources, the core powering the 960M/950M is a GM107 Maxwell GPU, which means that we are essentially talking about rebadged 860M/850M products, though the unlisted Boost frequencies could potentially be higher thanks to improved silicon on a mature 28nm process. In contrast, the previously announced GTX 965M is based on a cut-down Maxwell GM204 GPU, with its 1024 CUDA cores representing half of the GPU introduced with the GTX 980.

New notebooks featuring the GTX 960M have already been announced by NVIDIA's partners, so we will soon see if there is any performance improvement to these refreshed GM107 parts.

Source: NVIDIA

MSI Announces 4GB Version of NVIDIA GeForce GTX 960 Gaming Graphics Card

Subject: Graphics Cards | March 10, 2015 - 07:35 PM |
Tagged: nvidia, msi, gtx 960, geforce, 960 Gaming, 4GB GTX 960

Announcements of 4GB versions of the GeForce GTX 960 have become a regular occurrence of late, and today MSI introduced its own 4GB GTX 960, adding a model with this higher memory capacity to their popular MSI Gaming graphics card lineup.

MSI_9604GB.jpg

The GTX 960 Gaming 4GB features an overclocked core in addition to the doubled frame buffer, with a 1241MHz Base and 1304MHz Boost clock (compared to the stock GTX 960's 1127MHz Base and 1178MHz Boost). The card also carries MSI's proprietary Twin Frozr V (5 for non-Romans) cooler, which they claim surpasses previous generations of Twin Frozr coolers "by a large margin", with a new design featuring their SuperSU heat pipes and a pair of 100mm Torx fans with alternating standard/dispersion fan blades.
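Worked out from MSI's listed clocks (my arithmetic, not an MSI claim), that factory overclock amounts to roughly 10% on both Base and Boost:

base_oc = (1241 - 1127) / 1127 * 100    # ~10.1% over the reference Base clock
boost_oc = (1304 - 1178) / 1178 * 100   # ~10.7% over the reference Boost clock
print(f"Base: +{base_oc:.1f}%  Boost: +{boost_oc:.1f}%")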

MSI_GTX9604GB.png

The card is set to be shown at the Intel Extreme Masters gaming event in Poland later this week, and pricing/availability have not been announced.

Source: MSI

GDC 15: NVIDIA Shows TITAN X at Epic Games Keynote

Subject: Graphics Cards | March 4, 2015 - 01:10 PM |
Tagged: titan x, nvidia, maxwell, gtx, geforce, gdc 15, GDC

For those of you worried that GDC would sneak by without any new information for NVIDIA's GeForce fans, Jen-Hsun Huang surprised everyone by showing up at the Epic Games keynote with Tim Sweeney to hijack it.

The result: the first public showing of the GeForce TITAN X, based on the upcoming Maxwell GM200 GPU.

titanx3.jpg

JHH stated that it would have a 12GB frame buffer and was built using 8 billion transistors! There wasn't much more information than that, but I was promised that the details would be revealed sooner rather than later.

titanx2.jpg

Any guesses on performance or price?

titanx1.jpg

titanx4.jpg

Jen-Hsun signs the world's first TITAN X for Tim Sweeney.

Kite Demo running on TITAN X

UPDATE: I ran into the TITAN X again at the NVIDIA booth and was able to confirm a couple more things. First, the card will require only 6-pin and 8-pin power connectors, indicating that NVIDIA is still pushing power efficiency with GM200.

titanx5.jpg

Also, as you would expect, the TITAN X will support 3-Way and 4-Way SLI, or at the very least it has the SLI bridge connectors to support it.

titanx6.jpg

EVGA and Inno3D Announce the First 4GB NVIDIA GeForce GTX 960 Cards

Subject: Graphics Cards | March 3, 2015 - 02:44 PM |
Tagged: video cards, nvidia, gtx 960, geforce, 4GB

They said it couldn't be done, but where there are higher density chips there's always a way. Today EVGA and Inno3D have both announced new versions of GTX 960 graphics cards with 4GB of GDDR5 memory, placing the cards in a more favorable mid-range position depending on the launch pricing.

960_evga.PNG

EVGA's new 4GB NVIDIA GTX 960 SuperSC

Along with the expanded memory capacity, EVGA's card features their ACX 2.0+ cooler, which promises low noise and better cooling. The SuperSC is joined by a standard ACX version and the higher-clocked FTW variant, which pushes Base/Boost clocks to 1304/1367MHz out of the box.

960_evga_2.PNG

Inno3D's press release provides fewer details, and the company appears to be launching a single new model featuring 4GB of memory, which looks like a variant of their existing GTX 960 OC card.

inno3d_960.jpg

The existing Inno3D GTX 960 OC card

The current 2GB version of the GTX 960 can be found starting at $199, so expect these expanded versions to carry a price bump. The GTX 960, with only 1024 CUDA cores (half the count of a GTX 980) and a 128-bit memory interface, has nonetheless been a very good performer, with much better numbers than the previous generation's GTX 760, and is very competitive with AMD's R9 280/285. (It's a great overclocker, too.) The AMD/NVIDIA debate rages on, and NVIDIA's partners adding another 4GB offering to the mix will certainly add to the conversation, particularly as a 4GB version of the GTX 960 was originally said to be unlikely.

Source: EVGA

NVIDIA Faces Class Action Lawsuit for the GeForce GTX 970

Subject: Graphics Cards | February 23, 2015 - 04:12 PM |
Tagged: nvidia, geforce, GTX 970

So apparently NVIDIA and a single AIB partner, Gigabyte, are facing a class action lawsuit because of the GeForce GTX 970 4GB controversy. I am not sure why they singled out Gigabyte, but I guess that is the way things go in the legal world. Unlucky for them, and seemingly lucky for the rest.

nvidia-970-architecture.jpg

For those who are unaware, the controversy stems from NVIDIA claiming that the GeForce GTX 970 has 4GB of RAM, 64 ROPs, and 2048 KB of L2 cache. In actuality, it has 56 ROPs and 1792 KB of L2 cache. The main talking point is that the RAM is segmented into two partitions, one that is 3.5GB and another that is 0.5GB. All 4GB are present on the card, though, and accessible (unlike the disabled L2 cache and ROPs). Then again, I cannot see anything in that class action lawsuit's exhibits claiming an incorrect number of ROPs or amount of L2 cache.
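A quick bit of arithmetic on the figures above (mine, not from the court filing) shows how the disputed numbers all line up at 7/8 of the full GM204 configuration, which is what you would expect with one of eight L2/ROP segments disabled.

from fractions import Fraction

print(Fraction(56, 64))       # ROPs enabled: 7/8
print(Fraction(1792, 2048))   # L2 cache enabled: 7/8
print(Fraction(3500, 4000))   # full-speed memory segment: 3.5GB of 4GB, also 7/8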

Again, the benchmarks that you saw when the GeForce GTX 970 launched are still valid. Since the issue came up, Ryan has also tried various configurations of games in single- and multi-GPU systems to find conditions that would make the issue appear.

Source: Court Filing

Windows Update Installs GeForce 349.65 with WDDM 2.0

Subject: General Tech, Graphics Cards | February 21, 2015 - 04:23 PM |
Tagged: wddm 2.0, nvidia, geforce 349.65, geforce, dx12

Update 2: Outside sources have confirmed to PC Perspective that this driver contains DirectX 12 support as well as WDDM 2.0. They also claim that Intel and AMD have DirectX 12 drivers available through Windows Update. After enabling iGPU graphics on my i7-4790K, the Intel HD 4600 received a driver update, which also reports as WDDM 2.0 in DXDIAG. I do not have a compatible AMD GPU to test against (just a couple of old Windows 7 laptops), but the source is probably right and some AMD GPUs will be updated to DX12 too.

So it turns out that if your motherboard dies during a Windows Update reboot, you are going to spend several hours reinstalling software and patches, but that is not important. What is interesting is the installed version number for NVIDIA's GeForce drivers once Windows Update finished its patching: 349.65. This driver is not available on NVIDIA's website, and the Driver Model reports as WDDM 2.0.

nvidia-34965-driver.png

It looks like Microsoft pushed out NVIDIA's DirectX 12 drivers through Windows Update. Update 1 Pt. 1: The "Runtime" reporting 11.0 is confusing, though; perhaps this is just DX11 with WDDM 2.0?

nvidia-34965-dxdiag.png
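If you want to check what your own system reports, one approach (a sketch assuming a standard Windows install and dxdiag's /t text-dump switch; adjust the file encoding if your report uses a different one) is to dump the DXDIAG report and look for the same "Driver Model" lines shown above.

import os
import subprocess
import tempfile

report = os.path.join(tempfile.gettempdir(), "dxdiag_report.txt")
subprocess.run(["dxdiag", "/t", report])   # dxdiag writes a plain-text report, then exits
with open(report, errors="ignore") as f:   # report is ANSI text on most systems
    for line in f:
        if "Driver Model" in line:
            print(line.strip())            # e.g. "Driver Model: WDDM 2.0"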

I am hearing online that these drivers support GeForce 600-series and later GPUs, and that there are later, non-public drivers available (such as 349.72, whose release notes were leaked online). NVIDIA has already announced that DirectX 12 will be supported on GeForce 400-series and later graphics cards, so Fermi drivers will be coming at some point. For now, it's apparently Kepler and later, though.

So with OS support and, now, released graphics drivers, all that we are waiting on is software and an SDK (plus any NDAs that may still be in effect). With Game Developers Conference (GDC 2015) coming up in a little over a week, I expect that we will get each of these very soon.

Update 1 Pt. 2: I should note that the release notes for 349.72 specifically mention DirectX 12. As mentioned above, it is possible that 349.65 contains just WDDM 2.0 and not DX12, but it contains at least WDDM 2.0.