Subject: General Tech, Graphics Cards | July 29, 2014 - 08:27 PM | Scott Michaud
Tagged: nvidia, geforce, graphics drivers, shield tablet, shield
Alongside the NVIDIA SHIELD Tablet launch, the company has released their GeForce 340.52 drivers. This version allows compatible devices to use GameStream and is also optimized for Metro: Redux and Final Fantasy XIV (China).
The driver supports GeForce 8-series graphics cards and later. As a reminder, for GPUs that are not based on the Fermi architecture (or later), 340.xx will be your last driver version. NVIDIA does intend to provide extended support for 340.xx (and earlier) drivers until April 1st, 2016. But when Fermi, Kepler, and Maxwell move on to 343.xx, Tesla and earlier will not follow. That said, most of the content of this driver is aimed at Kepler and later. Either way, the driver itself is available for those pre-Fermi cards.
I should also mention that a user of Anandtech's forums noted the removal of Miracast from NVIDIA's documentation. NVIDIA has yet to comment, although very little time has passed since the change was noticed.
Subject: General Tech, Graphics Cards, Mobile | July 19, 2014 - 03:29 AM | Scott Michaud
Tagged: nvidia, geforce, maxwell, mobile gpu, mobile graphics
Apparently, some hardware sites got their hands on an NVIDIA driver listing with several new product codes. They claim thirteen N16(P/E) chips are listed (although I count twelve). While I do not have much knowledge of NVIDIA's internal product structure, the GeForce GTX 880M, based on Kepler, is apparently listed as N15E.
Things have changed a lot since this presentation.
These new parts will allegedly be based on the second-generation Maxwell architecture. Also, the source believes that these new GPUs will be in the GeForce GTX 800-series, possibly with the MX suffix that was last seen in October 2012 with the GeForce GTX 680MX. Of course, being a long-time PC gamer, the MX suffix does not exactly ring positive with my memory. It used to be the Ti-line that you wanted, and the MX-line that you could afford. But who am I kidding? None of that is relevant these days. Get off my lawn.
Subject: Graphics Cards | June 12, 2014 - 06:17 PM | Ryan Shrout
Tagged: overclocking, nvidia, gtx titan z, geforce
Earlier this week I posted a review of the NVIDIA GeForce GTX Titan Z graphics card, a dual-GPU Kepler GK110 part that currently sells for $3000. If you missed that article you should read it first to catch up, but the basic summary was that, for PC gamers, it is both slower than and twice the price of AMD's Radeon R9 295X2.
During that article though I mentioned that the Titan Z had more variable clock speeds than any other GeForce card I had tested. At the time I didn't go any further than that since the performance of the card already pointed out the deficit it had going up against the R9 295X2. However, several readers asked me to dive into overclocking with the Titan Z and with that came the need to show clock speed changes.
My overclocking was done through EVGA's PrecisionX software and we measured clock speeds with GPU-Z. The first step in overclocking an NVIDIA GPU is to simply move up the Power Target sliders and see what happens. This tells the card that it is allowed to consume more power than it would normally be allowed to, and then thanks to GPU Boost technology, the clock speed should scale up naturally.
And that is exactly what happened. I ran through 30 minutes of looped testing with Metro: Last Light at stock settings, with the Power Target at 112%, with the Power Target at 120% (the maximum setting) and then again with the Power Target at 120% and the GPU clock offset set to +75 MHz.
That 75 MHz offset was the highest setting we could get to run stable on the Titan Z, which brings the Base clock up to 781 MHz and the Boost clock to 951 MHz. Though, as you'll see in our frequency graphs below, the card was still reaching well above that.
This graph shows clock rates of the GK110 GPUs on the Titan Z over the course of 25 minutes of looped Metro: Last Light gaming. The green line is the stock performance of the card without any changes to the power settings or clock speeds. While it starts out well enough, hitting clock rates of around 1000 MHz, it quickly dives and by 300 seconds of gaming we are often going at or under the 800 MHz mark. That pattern is consistent throughout the entire tested time and we have an average clock speed of 894 MHz.
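For reference, the average figures quoted here are simple means over the sampled clock rates. A minimal sketch of that calculation — the sample values below are made up for illustration, not taken from our actual GPU-Z log:

```python
# Average clock from periodic samples, as a GPU-Z-style log would provide.
# The samples below are illustrative placeholders: an early ~1000 MHz burst
# that settles near the 800 MHz mark, similar in shape to the stock run above.

def average_clock(samples_mhz):
    """Return the mean of a list of clock samples in MHz."""
    return sum(samples_mhz) / len(samples_mhz)

samples = [1000, 1000, 950, 900, 850, 820, 800, 790, 800, 810]
print(round(average_clock(samples)))  # mean of these made-up samples
```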
Next up is the blue line, generated by simply moving the power target from 100% to 112%, giving the GPUs a little more thermal headroom to play with. The results are impressive, with a much more consistent clock speed. The yellow line, for the power target at 120%, is even better with a tighter band of clock rates and with a higher average clock.
Finally, the red line represents the 120% power target with a +75 MHz offset in PrecisionX. There we see a clock speed consistency matching the yellow line but offset up a bit, as we have been taught to expect with NVIDIA's recent GPUs.
The result of all this data comes together in the bar graph here, which lists the average clock rates over the entire 25 minute test runs. At stock settings, the Titan Z was able to hit 894 MHz, just over the "typical" boost clock of 876 MHz advertised by NVIDIA. That's good news for NVIDIA! Even though there is a lot more clock speed variance than I would like to see with the Titan Z, the clock speeds are within the expectations set by NVIDIA out of the gate.
Bumping up that power target though will help out gamers that do invest in the Titan Z quite a bit. Just going to 112% results in an average clock speed of 993 MHz, a 100 MHz jump worth about 11% overall. When we push that power target up even further, and overclock the frequency offset a bit, we actually get an average clock rate of 1074 MHz, 20% faster than the stock settings. This does mean that our Titan Z is pulling more power and generating more noise (quite a bit more actually) with fan speeds going from around 2000 to 2700 RPM.
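The percentage gains above fall straight out of the average clock rates; a quick sketch to double-check them:

```python
# Percent increase of each overclocked average clock over the stock average.
# Clock figures are the averages reported above (in MHz).

def pct_gain(new_mhz, stock_mhz):
    """Rounded percent increase of new_mhz over stock_mhz."""
    return round((new_mhz - stock_mhz) / stock_mhz * 100)

stock = 894           # stock settings
pt_112 = 993          # power target at 112%
pt_120_plus75 = 1074  # power target at 120% with +75 MHz offset

print(pct_gain(pt_112, stock))         # about 11
print(pct_gain(pt_120_plus75, stock))  # about 20
```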
At both 2560x1440 and 3840x2160, in the Metro: Last Light benchmark we ran, the added performance of the Titan Z does put it at the same level as the Radeon R9 295X2. Of course, it goes without saying that we could also overclock the 295X2 a bit further to improve ITS performance, but this is an exercise in education.
Does it change my stance or recommendation for the Titan Z? Not really; I still think it is overpriced compared to the performance you get from AMD's offerings and from NVIDIA's own lower priced GTX cards. However, it does lead me to believe that the Titan Z could have been fixed and could have offered at least performance on par with the R9 295X2 had NVIDIA been willing to break PCIe power specs and increase noise.
UPDATE (6/13/14): Some of our readers seem to be pretty confused about things so I felt the need to post an update to the main story here. One commenter below mentioned that I was one of "many reviewers that pounded the R290X for the 'throttling issue' on reference coolers" and thinks I am going easy on NVIDIA with this story. However, there is one major difference that he seems to overlook: the NVIDIA results here are well within the rated specs.
When I published one of our stories looking at clock speed variance of the Hawaii GPU, in the form of the R9 290X and R9 290, our results showed that the clock speeds of these cards were dropping well below the rated clock speed of 1000 MHz. Instead I saw clock speeds that reached as low as 747 MHz and stayed near the 800 MHz mark. The problem with that was in how AMD advertised and sold the cards, using only the phrase "up to 1.0 GHz" in its marketing. I recommended that AMD begin selling the cards with a rated base clock and a typical boost clock instead of only labeling them with the, at the time, totally incomplete "up to" rating. In fact, here is the exact quote from this story: "AMD needs to define a "base" clock and a "typical" clock that users can expect." Ta da.
The GeForce GTX Titan Z though, as we look at the results above, is rated and advertised with a base clock of 705 MHz and a boost clock of 876 MHz. The clock speed comparison graph at the top of the story shows the green line (the card at stock) never dropping to that 705 MHz base clock while averaging 894 MHz. That average is ABOVE the rated boost clock of the card. So even though the GPU is changing between frequencies more often than I would like, the clock speeds are within the bounds set by NVIDIA. That was clearly NOT THE CASE when AMD launched the R9 290X and R9 290. If NVIDIA had sold the Titan Z with only a specification of "up to 1006 MHz" or something similar, then the same complaint would apply. But it did not.
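The distinction being argued here reduces to a simple check: a card is out of spec only when it falls below its rated base clock; anything between base and boost is GPU Boost working as designed. A sketch of that logic — the rated clocks are from this story, but the minimum observed values are illustrative placeholders, not measured figures:

```python
# Classify observed clock behavior against a rated base/boost pair.
# Rated clocks below come from the article; min_mhz values are illustrative.

def clock_verdict(avg_mhz, min_mhz, base_mhz, boost_mhz):
    """Return a verdict string for observed clocks vs. rated specs."""
    if min_mhz < base_mhz:
        return "out of spec: dropped below rated base clock"
    if avg_mhz >= boost_mhz:
        return "within spec: average at or above typical boost"
    return "within spec: between base and boost"

# Titan Z at stock: base 705 MHz, typical boost 876 MHz, average 894 MHz.
print(clock_verdict(avg_mhz=894, min_mhz=780, base_mhz=705, boost_mhz=876))
# R9 290X as originally sold: only an "up to 1000 MHz" rating, with
# observed dips to 747 MHz (treating the "up to" figure as the rating).
print(clock_verdict(avg_mhz=850, min_mhz=747, base_mhz=1000, boost_mhz=1000))
```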
The card isn't "throttling" at all, in fact, as a commenter below points out. That term implies the card is dropping below a rated specification. It is acting in accordance with the GPU Boost technology that NVIDIA designed.
Some users seem concerned about temperature: the Titan Z will hit 80-83C in my testing, both stock and overclocked, and simply scales the fan speed to compensate accordingly. Yes, overclocked, the Titan Z gets quite a bit louder but I don't have sound level tests to show that. It's louder than the R9 295X2 for sure but definitely not as loud as the R9 290 in its original, reference state.
Finally, some of you seem concerned that I was restricted by NVIDIA in what we could test and talk about on the Titan Z. Surprise, surprise: NVIDIA didn't send us this card to test at all! In fact, they were kind of miffed when I did the whole review and didn't get into showing CUDA benchmarks. So, there's that.
Subject: Editorial | June 12, 2014 - 02:28 PM | Ken Addison
Tagged: Z97X-SOC Force, video, titan z, radeon, project tango, podcast, plextor, nvidia, Lightning, gtx titan z, gigabyte, geforce, E3 14, amd, 4790k, 290x
PC Perspective Podcast #304 - 06/12/2014
We have lots of reviews to talk about this week including the GeForce GTX TITAN Z, Core i7-4790K, Gigabyte Z97X-SOC Force, E3 News and more!
The URL for the podcast is: http://pcper.com/podcast - Share with your friends!
- iTunes - Subscribe to the podcast directly through the Store
- RSS - Subscribe through your regular RSS reader
- MP3 - Direct download link to the MP3 file
Hosts: Ryan Shrout, Josh Walrath, Jeremy Hellstrom and Allyn Maleventano
Pimp Next Week Events
0:03:45 Podcast #305 with David Hewlett!
Week in Review:
News items of interest:
Hardware/Software Picks of the Week:
0:57:40 Ryan: Ken's Switching Hardware
1:02:15 Josh: This saved me last weekend.
1:04:00 Allyn: Sony DSC-RX10
A powerful architecture
In March of this year, NVIDIA announced the GeForce GTX Titan Z at its GPU Technology Conference. It was touted as the world's fastest graphics card with its pair of full GK110 GPUs but it came with an equally stunning price of $2999. NVIDIA claimed it would be available by the end of April for gamers and CUDA developers to purchase but it was pushed back slightly and released at the very end of May, going on sale for the promised price of $2999.
The specifications of GTX Titan Z are damned impressive - 5,760 CUDA cores, 12GB of total graphics memory, 8.1 TFLOPs of peak compute performance. But something happened between the announcement and product release that perhaps NVIDIA hadn't accounted for. AMD's Radeon R9 295X2, a dual-GPU card with full-speed Hawaii chips on-board, was released at $1499. I think it's fair to say that AMD took some chances that NVIDIA was surprised to see them take, including going the route of a self-contained water cooler and blowing past the PCI Express recommended power limits to offer a ~500 watt graphics card. The R9 295X2 was damned fast and I think it caught NVIDIA a bit off-guard.
As a result, the GeForce GTX Titan Z release was a bit quieter than most of us expected. Yes, the Titan Black card was released without sampling the gaming media but that was nearly a mirror of the GeForce GTX 780 Ti, just with a larger frame buffer and the performance of that GPU was well known. For NVIDIA to release a flagship dual-GPU graphics cards, admittedly the most expensive one I have ever seen with the GeForce brand on it, and NOT send out samples, was telling.
NVIDIA is adamant though that the primary target of the Titan Z is not just gamers but the CUDA developer that needs the most performance possible in as small of a space as possible. For that specific user, one that doesn't quite have the income to invest in a lot of Tesla hardware but wants to be able to develop and use CUDA applications with a significant amount of horsepower, the Titan Z fits the bill perfectly.
Still, the company was touting the Titan Z as "offering supercomputer class performance to enthusiast gamers" and telling gamers in launch videos that the Titan Z is the "fastest graphics card ever built" and that it was "built for gamers." So, interest piqued, we decided to review the GeForce GTX Titan Z.
The GeForce GTX TITAN Z Graphics Card
Cost and performance notwithstanding, the GeForce GTX Titan Z is an absolutely stunning looking graphics card. The industrial design that started with the GeForce GTX 690 (the last dual-GPU card NVIDIA released) and continued with the GTX 780 and Titan family lives on with the Titan Z.
The all metal finish looks good and stands up to abuse, keeping that PCB straight even with the heft of the heatsink. There is only a single fan on the Titan Z, center mounted, with a large heatsink covering both GPUs on opposite sides. The GeForce logo up top illuminates, as we have seen on all similar designs, which adds a nice touch.
Subject: General Tech, Graphics Cards | June 2, 2014 - 05:52 PM | Scott Michaud
Tagged: nvidia, geforce, geforce experience, ShadowPlay
NVIDIA has just launched another version of their GeForce Experience, incrementing the version to 2.1. This release allows video capture at up to "2500x1600", which I assume means 2560x1600, as well as better audio-video synchronization in Adobe Premiere. Also, because why stop going after FRAPS once you start, this release adds an in-game framerate indicator, along with push-to-talk for recording the microphone.
Another note: when GeForce Experience 2.0 launched, it introduced streaming of the user's desktop. This allowed recording of OpenGL and windowed-mode games by simply capturing an entire monitor. This mode was not capable of "Shadow Mode," which I believed was because NVIDIA thought users didn't want a constant rolling video taken of their desktop just in case they wanted to save a few minutes of it at some point. Turns out I was wrong; the feature was coming, and it arrived with GeForce Experience 2.1.
GeForce Experience 2.1 is now available at NVIDIA's website, unless it already popped up a notification for you.
Subject: Graphics Cards | May 28, 2014 - 11:19 AM | Ryan Shrout
Tagged: titan z, nvidia, gtx, geforce
Though delayed by a month, today marks the official release of NVIDIA's Titan Z graphics card, the dual GK110 beast with the $3000 price tag. The massive card was shown for the first time in March at NVIDIA's GPU Technology Conference and our own Tim Verry was on the grounds to get the information.
The details remain the same:
Specifically, the GTX TITAN Z is a triple slot graphics card that marries two full GK110 (big Kepler) GPUs for a total of 5,760 CUDA cores, 448 TMUs, and 96 ROPs with 12GB of GDDR5 memory on a 384-bit bus (6GB on a 384-bit bus per GPU). For the truly adventurous, it appears possible to SLI two GTX Titan Z cards using the single SLI connector. Display outputs include two DVI, one HDMI, and one DisplayPort connector.
The difference now of course is that all the clock speeds and pricing are official.
A base clock speed of 705 MHz with a Boost rate of 876 MHz places it well behind the individual GPU performance of a GeForce GTX 780 Ti or GTX Titan Black (rated at 889/980 MHz). The memory clock speed remains the same at 7.0 Gbps and you are still getting a massive 6GB of memory per GPU.
Maybe most interesting with the release of the GeForce GTX Titan Z is that NVIDIA seems to have completely fixated on non-DIY consumers with the card. We did not receive a sample of the Titan Z (nor did we get one of the Titan Black) and when I inquired as to why, NVIDIA PR stated that they were "only going to CUDA developers and system builders."
I think it is more than likely that after the release of AMD's Radeon R9 295X2 dual GPU graphics card on April 8th, with a price tag of $1500 (half of the Titan Z), the target audience was redirected. NVIDIA already had its eye on the professional markets that weren't willing to dive into the Quadro/Tesla lines (CUDA developers will likely drop $3k at the drop of a hat to get this kind of performance density). But a side benefit of creating the best flagship gaming graphics card on the planet was probably part of the story - and promptly taken away by AMD.
I still believe the Titan Z will be an impressive graphics card to behold both in terms of look and style and in terms of performance. But it would take the BIGGEST NVIDIA fans to be able to pass up buying a pair of Radeon R9 295X2 cards for a single GeForce GTX Titan Z. At least that is our assumption until we can test one for ourselves.
I'm still working to get my hands on one of these for some testing as I think the ultra high end graphics card coverage we offer is incomplete without it.
Several of NVIDIA's partners are going to be offering the Titan Z, including EVGA, ASUS, MSI and Zotac. Maybe the most interesting, though, is EVGA's water-cooled option!
So, what do you think? Anyone lining up for a Titan Z when they show up for sale?
Subject: General Tech, Graphics Cards | May 12, 2014 - 08:00 PM | Scott Michaud
Tagged: titan z, nvidia, gtx titan z, geforce
To a crowd of press and developers at their GTC summit, NVIDIA announced the GeForce GTX Titan Z add-in board (AIB). Each of the two fully unlocked GK110 GPUs would have access to 6GB of GDDR5 memory (12GB total). The card was expected to be available on May 8th but has yet to surface. As NVIDIA has yet to comment on the situation, many question whether it ever will.
And then we get what we think are leaked benchmarks (note: two pictures).
One concern about the Titan Z was its rated 8 TeraFLOPs of compute performance. This is a fairly sizable reduction from the theoretical maximum of 10.24 TeraFLOPs of two Titan Black processors, and even less than two first-generation Titans (9 TeraFLOPs combined). We expected that this was due to reduced clock rates. What we did not expect was for benchmarks to show the GPUs boosting way above those advertised levels, and even beyond the advertised boost clocks of the Titan Black and the 780 Ti. The card was seen pushing 1058 MHz in some sections, which leads to a theoretical compute performance of 12.2 TeraFLOPs (6.1 TeraFLOPs per GPU) in single precision. That is a lot.
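The TeraFLOPs figures in this paragraph follow from the standard peak formula for these GPUs: CUDA cores × 2 floating-point ops per clock (a fused multiply-add counts as two) × clock speed. A quick sketch to verify the numbers:

```python
# Theoretical single-precision peak: cores x 2 ops/clock (FMA) x clock speed.

def sp_tflops(cuda_cores, clock_mhz):
    """Peak single-precision TFLOPs at a given clock speed."""
    return cuda_cores * 2 * clock_mhz / 1e6

print(round(sp_tflops(5760, 705), 1))   # Titan Z at its 705 MHz base clock
print(round(sp_tflops(5760, 1058), 1))  # at the 1058 MHz seen in benchmarks
```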
These benchmarks also show that NVIDIA has a slight lead over AMD's R9 295X2 in many games, except Battlefield 4 and Sleeping Dogs (plus 3DMark and Unigine). Of course, these benchmarks measure the software-reported frame rate and frame times, and those may or may not be indicative of actual performance. While I would say that the Titan Z appears to have a slight performance lead over the R9 295X2 (although a solid argument for an AMD performance win exists), it does so at double the cost (at its expected $3000 USD price point). That is not up for debate.
So, until NVIDIA says anything, the Titan Z is in limbo. I am sure there exist CUDA developers who await its arrival. Personally, I would just get three Titan Blacks, since you are going to need to manually schedule your workloads across multiple processors anyway (or 780 Tis, if 32-bit arithmetic is enough precision). That is, of course, unless you cannot physically fit enough GeForce Titan Blacks in your motherboard and, as such, require two GK110 chips per AIB (but not enough to bother writing a cluster scheduling application).
Subject: Graphics Cards | April 29, 2014 - 10:22 AM | Ryan Shrout
Tagged: nvidia, watch_dogs, watch dogs, bundle, geforce
A bit of a surprise email found its way to my inbox today that announced NVIDIA's partnership with Ubisoft to include copies of the upcoming Watch_Dogs game with GeForce GTX graphics cards.
Gamers who purchase a GeForce GTX 780 Ti, GTX 780, GTX 770 or GTX 760 from select retailers will qualify for a free copy of the game. You can find details on this bundle and the available GPUs to take advantage of it at Amazon.com!
The press release also confirms inclusion of NVIDIA exclusive features like TXAA and HBAO+ in the game itself, which is interesting. From what I am hearing, Watch_Dogs is going to be a beast of a game on GPU hardware and we are looking forward to using it as a test platform going forward.
Full press release is included below.
OWN THE TECH AND CONTROL THE CITY WITH NVIDIA® AND UBISOFT®
Select GeForce GTX GPUs Now Include the Hottest Game of the Year: Watch_Dogs™
Santa Clara, CA — April 29, 2014 — Destructoid calls it one of the “most wanted games of 2014.” CNET said it was “one of the most anticipated games in recent memory.” MTV said it’s one of the “Can’t-Miss Video Games of 2014.” This, all before anyone out there has even played it.
So, starting today(1), gamers who purchase select NVIDIA® GeForce® GTX® 780 Ti, 780, 770 and 760 desktop GPUs can get their chance to play Watch_Dogs™, the new PC game taking the world by storm and latest masterpiece from Ubisoft®.
Described as a “polished, refined and truly next generation experience,” in Watch_Dogs you play as Aiden Pearce, a brilliant hacker whose criminal past led to a violent family tragedy. While seeking justice, you will monitor and hack those around you, access omnipresent security cameras, download personal information to locate a target, control traffic lights and public transportation to stop the enemy and more.
Featuring NVIDIA TXAA and HBAO+ technology for an interactive, immersive experience, it’s clear that gamers can’t wait to play Watch_Dogs, especially considering the effusive praise that the official trailer received. Launched mere weeks ago, the trailer has already been viewed more than 650,000 times combined. For gamers, Watch_Dogs seamlessly blends a mixture of single-player and multiplayer action in a way never before seen, and Ubisoft has gone one step further in creating a unique ctOS mobile companion app for users of smartphone and tablet devices allowing for even greater access to the fun. If you haven’t checked out the trailer, please check it out here: https://www.youtube.com/watch?
The GeForce GTX and Watch_Dogs bundle is available starting today from leading e-tailers including Newegg, Amazon.com, TigerDirect, NCIX; add-in card vendors such as EVGA; and nationwide system builders including AVADirect, CyberPower, Digital Storm, Falcon Northwest, iBUYPOWER, Maingear, Origin PC, Puget Systems, V3 Gaming PC and Velocity Micro. For a full list of participating partners, please visit: www.GeForce.com/GetWatchDogs.
AM1 Walks New Ground
After Josh's initial review of the AMD AM1 Platform and the Athlon 5350, we received a few requests to look at gaming performance with a discrete GPU installed. Even though this platform isn't being aimed at gamers looking to play demanding titles, we started to investigate this setup anyway.
While Josh liked the ASUS AM1I-A Mini ITX motherboard he used in his review, with only an x1 PCI-E slot it would be less than ideal for this situation.
Luckily we had the Gigabyte AM1M-S2H Micro ATX motherboard, which features a full length PCI-E x16 slot, as well as 2 x1 slots.
Don't be misled by the shape of the slot, though; the AM1 platform still only offers 4 lanes of PCI Express 2.0. This, of course, means that the graphics card will not be running at full bandwidth. However, having the physical x16 slot makes it a lot easier to physically connect a discrete GPU, without having to worry about those ribbon cables that miners use.
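For context on how much those four lanes give up relative to a full slot: PCI Express 2.0 delivers roughly 500 MB/s per lane per direction (5 GT/s line rate minus 8b/10b encoding overhead). A rough sketch of the comparison:

```python
# Approximate one-direction PCIe 2.0 bandwidth: ~500 MB/s per lane
# (5 GT/s signaling with 8b/10b encoding overhead).

PCIE2_MBPS_PER_LANE = 500

def pcie2_gbps(lanes):
    """Approximate PCIe 2.0 bandwidth in GB/s for a given lane count."""
    return lanes * PCIE2_MBPS_PER_LANE / 1000

print(pcie2_gbps(4))   # AM1's four electrical lanes: 2.0 GB/s
print(pcie2_gbps(16))  # a fully wired x16 slot: 8.0 GB/s
```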