Subject: Graphics Cards | July 6, 2016 - 05:10 PM | Scott Michaud
Tagged: VR, Oculus, nvidia, graphics drivers, DiRT Rally
A Game Ready Driver has just launched for DiRT Rally VR. GeForce Drivers 368.69 WHQL builds upon the last release, adding optimizations for DiRT Rally VR as expected, but it also includes a few new SLI profiles (Armored Warfare, Dangerous Golf, iRacing: Motorsport Simulator, Lost Ark, and Tiger Knight) and, presumably, other bug fixes.
The game itself doesn't yet have a release date, but it should arrive soon. According to NVIDIA's blog post, it sounds like it will come to the Oculus Store first and arrive on Steam later this month. I haven't been following the game too closely, but I can't find any announcement of official HTC Vive support.
You can pick up the drivers at NVIDIA's website or through GeForce Experience. Thankfully, the GeForce Experience 3 Beta seems to pick up new drivers much more quickly than the previous version.
Subject: Graphics Cards | July 6, 2016 - 07:15 AM | Scott Michaud
Tagged: pascal, nvidia, htc vive, GTX 1080, gtx 1070, GP104
NVIDIA is working on a fix to allow the HTC Vive to be connected to the GeForce GTX 1070 and GTX 1080 over DisplayPort. The HTC Vive offers a choice between HDMI and Mini DisplayPort, but the headset is not detected when connected to these cards over DisplayPort. Currently, the two workarounds are to connect the HTC Vive over HDMI, or to use a DisplayPort to HDMI adapter if your card's HDMI output is already occupied.
It has apparently been an open issue for over a month now. That said, NVIDIA's Manuel Guzman has acknowledged the issue. Other threads claim that some other displays exhibit a similar problem, and, within the last 24 hours, some users have had luck modifying their motherboard's settings. I'd expect that it's something they can fix in an upcoming driver, though. For now, plan your monitor outputs accordingly if you were planning on getting the HTC Vive.
Subject: Graphics Cards | July 6, 2016 - 07:01 AM | Scott Michaud
Tagged: rx 480, Polaris, amd
Apparently, some people think that AMD will be releasing an RX 490 based on Polaris 10 with an extra four compute units, bringing the total number of stream processors to 2560. I'm guessing that people expected it to be a nice, round number or something, but that's not the case. According to Evan Groenke, Senior Product Manager at AMD, the die has 36 compute units, and there is “nothing else hidden on the product that end users might be looking forward to unlocking”.
Really, this kind of makes sense. AMD seems to have designed this chip around the performance target of VR, which the RX 480 hits. I don't think it would make sense to push roughly 11% more stream processors into the design, decreasing yield per wafer for such a relatively small gain.
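The roughly-11% figure follows directly from the compute unit counts: each GCN compute unit contains 64 stream processors, so a quick sanity check of the rumored configuration looks like this.

```python
# Each GCN compute unit (CU) contains 64 stream processors (SPs).
SPS_PER_CU = 64

rx480_cus = 36                  # the full Polaris 10 die, per AMD
rumored_cus = rx480_cus + 4     # the rumored "unlockable" configuration

rx480_sps = rx480_cus * SPS_PER_CU      # 2304
rumored_sps = rumored_cus * SPS_PER_CU  # 2560

gain = rumored_sps / rx480_sps - 1      # about 0.111
print(rx480_sps, rumored_sps, f"{gain:.1%}")  # 2304 2560 11.1%
```

Four extra CUs on a 36-CU die is an 11.1% increase in shader resources, which is exactly the gap the rumor mill was hoping to unlock.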
We are expecting an RX 490 card to land at some point though, thanks to a mistake in publishing on AMD's part. It won't be Polaris 10 or 11.
Subject: Graphics Cards | July 5, 2016 - 12:38 PM | Tim Verry
Tagged: rx 480, Radeon RX 480, polaris 10, Polaris, msi, gcn4
It appears that MSI will be one of the first AIB partners to get a reference version of the AMD RX 480 graphics card out. Available as soon as next week, the MSI Radeon RX 480 8G pairs AMD’s Polaris-based GPU with 8GB of GDDR5 memory on a reference platform and cooler.
The MSI card uses the AMD reference cooler with a blower-style fan and measures 9.45” in length. It is a dual-slot design with a red and black aesthetic. Rear I/O includes three DisplayPort outputs and one HDMI port. It is powered by a single 6-pin PCI-E power connector.
There is not much to say with regard to clocks on this GCN4-based card, as there are no factory overclocks to speak of. The base clock sits at 1120 MHz (which is an average expected clock, not necessarily the minimum) and the GPU can boost up to a maximum of 1266 MHz out of the box. MSI is clocking the memory at the full 8 GHz though, which is good (AMD stated that partners could clock the memory anywhere from 7 to 8 GHz).
Looking around various retailers, it appears that you will be able to get your hands on it as soon as July 9th from Newegg for $240.
Watch out for pricing before clicking that buy button though, because some sites that allow third-party sellers have jacked up prices quite a bit! If you are looking for a reference design, this card should be as good as the rest. Personally, I am looking forward to MSI's and other AIB partners' custom RX 480 cards, which should have much higher overclocking potential and a better power phase setup that alleviates any power consumption concerns around the reference design's VRM. That is not to say that the reference MSI card is going to blow up your PC, but as a buyer I would rather wait for the custom boards with better coolers that I can push further and faster for only a fairly slight premium. If you need a blower-style cooler, though, this card should work.
- The AMD Radeon RX 480 Review - The Polaris Promise
- PCPer Live! Radeon RX 480 Live Stream with Raja Koduri!
- AMD's Raja Koduri talks moving past CrossFire, smaller GPU dies, HBM2 and more.
Subject: Graphics Cards | July 5, 2016 - 07:01 AM | Scott Michaud
Tagged: msi, GTX 1080, gtx 1070, GP104, duke
Getting a custom-cooled GTX 1080 for around its MSRP basically involves monitoring Newegg several times per day for a good business week or two, pouncing on whatever isn't marked up. Whether the cause is low supply or high demand, add-in board vendors haven't stopped announcing new models.
Image Credit: EXPReview
The MSI GTX 1080 8G DUKE is a three-fan (“TriFrozr”) design with an 8-pin and a 6-pin PCIe power connector, which provides 75W more headroom than the Founders Edition. EXPReview claims that it slides between the AERO and GAMING lines. They don't say how it compares to ARMOR, which also sits between AERO and GAMING, but with its RGB LEDs it looks to land slightly above. The GTX 1080 version is factory overclocked to 1708 MHz with a 1847 MHz boost, and the GTX 1070 version is overclocked to 1607 MHz with a 1797 MHz boost.
Launch regions are not listed for the cards, but the launch price is supposedly 5,399 Chinese Yuan (which converts to about $810 USD) for the GTX 1080 and 3,499 Chinese Yuan ($524.70 USD) for the GTX 1070. This is quite a bit higher than we would expect, but I'm not sure how regional pricing on electronics compares between the USA and China.
Subject: Graphics Cards | July 5, 2016 - 01:49 AM | Tim Verry
Tagged: gigabyte, gtx 1070, pascal, mini ITX, factory overclocked
Custom graphics cards based on NVIDIA’s GTX 1070 GPU have been rolling out from all the usual suspects, and today small form factor enthusiasts have a new option with Gigabyte’s Mini ITX friendly GTX 1070 Mini ITX OC. As the name implies, this is a factory overclocked card that can hit a 1746 MHz boost with the right checkboxes ticked in the company’s overclocking utility.
The new SFF graphics card measures a mere 6.7 inches long and is a dual-slot design with a custom single 90mm fan HSF. It is a custom design that uses a 5+1 power phase setup, which Gigabyte claims is engineered to provide lower temperatures and more stable voltage compared to NVIDIA’s reference design, a 4+1 setup. The cooler uses an aluminum fin array fed by three direct touch heatpipes. The 90mm fan is able to spin down to 0 rpm when the card is not under load, which would make it a good candidate for a gaming-capable living room PC that also doubles as your media center. Gigabyte further claims that their "3D stripe" ridged fan blade design helps to reduce noise and improve cooling performance.
Rear IO on the card includes two dual link DVI connectors, one HDMI, and one DisplayPort output. The graphics card is powered by a single 8-pin PCI-E power connector.
As far as the nitty gritty specifications are concerned, Gigabyte has the GTX 1070 GPU clocked out of the box at 1531 MHz base and 1721 MHz boost. Using the company’s Xtreme Engine utility, users can enable “OC Mode”, which automatically clocks the card further to 1556 MHz base and 1746 MHz boost. The OC Mode in particular is a decent factory overclock over the reference clocks of 1506 MHz base and 1683 MHz boost. The 8 GB of GDDR5 memory remains effectively untouched at 8008 MHz.
Unfortunately, as is usually the case with these kinds of launches, pricing and availability have not yet been announced. From a cursory look around Newegg, I would guess that the card will land somewhere around $465 (accounting for both the factory overclock and the SFF premium).
Subject: Graphics Cards | July 4, 2016 - 03:27 PM | Jeremy Hellstrom
Tagged: asus, ROG, GTX 1080 STRIX GAMING, GTX 1080, factory overclocked
It is rather difficult to rate the cost-to-performance ratio of GTX 1080s, as prices and availability are in a constant state of flux, but we can certainly peg the overall performance of the cards. [H]ard|OCP recently strapped the new ASUS ROG GTX 1080 STRIX GAMING to their testbed to see how it performs. Right out of the box, the card's base clock is 1759 MHz with a boost clock of 1898 MHz and 10 GHz GDDR5X, which [H] successfully raised to an 1836 MHz base and 1973 MHz boost, with in-game frequencies reaching 2139 MHz and the GDDR5X running at 11.3 GHz. This had a noticeable effect on performance.
"Today we review in full detail our first custom GeForce GTX 1080 video card. ASUS has decked the ROG GTX 1080 STRIX GAMING out with a factory overclock, the STRIX cooling system, and a fully customizable lighting system. Let's see this beast overclock and compare it to the previous gen's GTX 980 Ti and Radeon R9 Fury X."
Here are some more Graphics Card articles from around the web:
- EVGA GeForce GTX 1080 FTW GAMING ACX 3.0 @ Bjorn3d
- MSI GTX 1080 Gaming X 8G RGB @ Kitguru
- MSI GTX 1070 Gaming X 8 GB @ techPowerUp
- Radeon RX 480 @ Hardware Secrets
- The OpenGL Speed & Performance-Per-Watt From The Radeon RX 480 To HD 4850/4870 @ Phoronix
Subject: Graphics Cards | July 2, 2016 - 01:25 AM | Scott Michaud
Tagged: nvidia, geforce, geforce experience
GeForce Experience will be getting an updated UI soon, and a beta release is available now. It has basically been fully redesigned, although the NVIDIA Control Panel remains as it has been. Even though GeForce Experience is the newer of the two, it could benefit from a good overhaul, especially in terms of start-up delay. NVIDIA says the new version uses half the memory and loads 3X faster. It still shows a loading bar briefly, but for less than a second.
Interestingly, I noticed that, even though I skipped over Sharing Settings on first launch, Instant Replay was set to On by default. This could have been carried over from my previous instance of GeForce Experience, although I'm pretty sure I left it off. Privacy-conscious folks might want to verify that ShadowPlay isn't running, just in case.
One downside for some of our users is that you now need an NVIDIA account (or to connect your Google account to NVIDIA) to access it. Previously, you could use features like ShadowPlay while logged out, but that no longer appears to be the case. This will no doubt upset some of our audience, but it's not entirely unexpected given NVIDIA's previous statements about requiring an NVIDIA account for beta drivers. The rest of GeForce Experience following suit isn't too surprising.
We'll now end where we began: installation. For testing (and hopefully providing feedback) during the beta, NVIDIA will be giving away GTX 1080s on a weekly basis. To enter, you apparently just need to install the Beta and log in with your NVIDIA (or Google) account.
Subject: Graphics Cards | June 30, 2016 - 07:54 PM | Scott Michaud
Tagged: amd, nvidia, FinFET, Polaris, polaris 10, pascal
If you're trying to purchase a Pascal or Polaris-based GPU, then you are probably well aware that patience is a required virtue. The problem is that, as a hardware website, we don't really know whether the issue is high demand or low supply. Both are manufactured on a new process node, which could mean that yield is a problem. On the other hand, it's been about four years since the last fabrication node, which means that chips got much smaller for the same performance.
Over time, manufacturing processes will mature and yield will increase. But what about right now? AMD made a very small chip that produces roughly GTX 970-level performance. NVIDIA is sticking with their typical 3XX mm² chip, which ended up producing higher-than-Titan X levels of performance.
It turns out that, according to online retailer Overclockers UK (via Fudzilla), both the RX 480 and GTX 1080 have sold over a thousand units at that location alone. That's quite a bit, especially when you consider that the figure covers only one (large) online retailer in Europe. It's difficult to say how much stock other stores (and regions) received by comparison, but it's still a thousand units in a day.
It's sounding like, for both vendors, pent-up demand might be the dominant factor.
Too much power to the people?
UPDATE (7/1/16): I have added a third page to this story that looks at the power consumption and power draw of the ASUS GeForce GTX 960 Strix card. This card was pointed out by many readers on our site and on reddit as having the same problem as the Radeon RX 480. As it turns out...not so much. Check it out!
UPDATE 2 (7/2/16): We have an official statement from AMD this morning.
As you know, we continuously tune our GPUs in order to maximize their performance within their given power envelopes and the speed of the memory interface, which in this case is an unprecedented 8Gbps for GDDR5. Recently, we identified select scenarios where the tuning of some RX 480 boards was not optimal. Fortunately, we can adjust the GPU's tuning via software in order to resolve this issue. We are already testing a driver that implements a fix, and we will provide an update to the community on our progress on Tuesday (July 5, 2016).
Honestly, that doesn't tell us much. And AMD appears to be deflecting slightly by using words like "some RX 480 boards". I don't believe this is limited to a subset of cards, or review samples only. AMD does indicate that the 8 Gbps memory on the 8GB variant might be partially to blame - which is an interesting correlation to test out later. The company does promise a fix for the problem via a driver update on Tuesday - we'll be sure to give that a test and see what changes are measured in both performance and in power consumption.
The launch of the AMD Radeon RX 480 has generally been considered a success. Our review of the new reference card shows impressive gains in architectural efficiency, improved positioning against NVIDIA’s competing parts in the same price range as well as VR-ready gaming performance starting at $199 for the 4GB model. AMD has every right to be proud of the new product and should have this lone position until the GeForce product line brings a Pascal card down into the same price category.
If you read carefully through my review, there was some interesting data that cropped up around the power consumption and delivery on the new RX 480. Looking at our power consumption numbers, measured directly from the card rather than at the wall, the card was drawing slightly more than its advertised 150 watt TDP. This was measured at 1920x1080 in both Rise of the Tomb Raider and The Witcher 3.
When overclocked, the results were even higher, approaching the 200 watt mark in Rise of the Tomb Raider!
A portion of the review over at Tom’s Hardware produced similar results but detailed the power consumption from the motherboard PCI Express connection versus the power provided by the 6-pin PCIe power cable. There has been a considerable amount of discussion in the community about the amount of power the RX 480 draws through the motherboard, whether it is out of spec and what kind of impact it might have on the stability or life of the PC the RX 480 is installed in.
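For context on why slot draw matters, the nominal PCI Express power budgets are easy to tabulate: 75 W from the x16 slot, 75 W from a 6-pin connector, and 150 W from an 8-pin connector, per the PCIe CEM spec. This minimal sketch (the 50/50 split is a hypothetical assumption for illustration, not a measured figure) shows how a card at or above its TDP can push the motherboard connection out of spec:

```python
# Nominal PCI Express power budgets (per the PCIe CEM specification):
SLOT_W = 75    # x16 slot (12V + 3.3V combined)
PIN6_W = 75    # 6-pin auxiliary power connector
PIN8_W = 150   # 8-pin auxiliary power connector

# Reference RX 480: slot plus one 6-pin connector.
rx480_budget = SLOT_W + PIN6_W  # 150 W total, exactly the card's TDP


def slot_draw(total_w, slot_fraction):
    """Watts pulled through the motherboard slot for a given total draw
    and an assumed fraction routed through the slot (hypothetical split)."""
    return total_w * slot_fraction


# If a card drawing 160 W splits power evenly between slot and 6-pin,
# the slot carries 80 W, which exceeds its 75 W budget.
print(rx480_budget, slot_draw(160, 0.5))  # 150 80.0
```

In other words, with the budget equal to the TDP there is no headroom: any draw above 150 W, or any split skewed toward the slot, puts one of the two sources over its limit.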
As it turns out, we have the ability to measure the exact same kind of data, albeit through a different method than Tom’s, and wanted to see if the result we saw broke down in the same way.
Our Testing Methods
This is a complex topic so it makes sense to detail the methodology of our advanced power testing capability up front.
How do we do it? Simple in theory but surprisingly difficult in practice, we are intercepting the power being sent through the PCI Express bus as well as the ATX power connectors before they go to the graphics card and are directly measuring power draw with a 10 kHz DAQ (data acquisition) device. A huge thanks goes to Allyn for getting the setup up and running. We built a PCI Express bridge that is tapped to measure both 12V and 3.3V power and built some Corsair power cables that measure the 12V coming through those as well.
The result is data that looks like this.
What you are looking at here is the power measured from the GTX 1080. From time 0 to time 8 seconds or so, the system is idle, from 8 seconds to about 18 seconds Steam is starting up the title. From 18-26 seconds the game is at the menus, we load the game from 26-39 seconds and then we play through our benchmark run after that.
There are four lines drawn in the graph, the 12V and 3.3V results are from the PCI Express bus interface, while the one labeled PCIE is from the PCIE power connection from the power supply to the card. We have the ability to measure two power inputs there but because the GTX 1080 only uses a single 8-pin connector, there is only one shown here. Finally, the blue line is labeled total and is simply that: a total of the other measurements to get combined power draw and usage by the graphics card in question.
From this we can see a couple of interesting data points. First, the idle power of the GTX 1080 Founders Edition is only about 7.5 watts. Second, under a gaming load of Rise of the Tomb Raider, the card is pulling about 165-170 watts on average, though there are plenty of intermittent spikes. Keep in mind we are sampling the power at 10 kHz, so this kind of behavior is more or less expected.
Different games and applications impose different loads on the GPU and can cause it to draw drastically different power. Even if a game runs slowly, it may not be drawing maximum power from the card if a certain system on the GPU (memory, shaders, ROPs) is bottlenecking other systems.
One interesting note on our data compared to what Tom’s Hardware presents – we are using a second-order low-pass filter to smooth out the data to make it more readable and more indicative of how power draw is handled by the components on the PCB. Tom’s story reported “maximum” power draw at 300 watts for the RX 480, and while that is technically accurate, those figures represent instantaneous power draw. That is interesting data in some circumstances, and may actually indicate other potential issues with excessively noisy power circuitry, but to us, it makes more sense to sample data at a high rate (10 kHz) but to filter it and present it in a more readable way that better meshes with the continuous power delivery capabilities of the system.
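To illustrate the idea (this is a minimal sketch, not our actual DAQ pipeline, and the 50 Hz cutoff is a hypothetical value chosen for demonstration), a second-order low-pass response can be approximated by cascading two first-order IIR smoothing stages. A single-sample 300 W spike riding on a 165 W load, like the phase-switch transients discussed above, is heavily attenuated while the average is preserved:

```python
import math

FS = 10_000   # sampling rate in Hz (10 kHz, as in the setup described above)
CUTOFF = 50   # hypothetical cutoff frequency in Hz, for illustration only


def first_order_lowpass(samples, fs, fc):
    """Single-pole IIR low-pass: y[n] = y[n-1] + a * (x[n] - y[n-1])."""
    a = 1 - math.exp(-2 * math.pi * fc / fs)
    out, y = [], samples[0]
    for x in samples:
        y += a * (x - y)
        out.append(y)
    return out


def second_order_lowpass(samples, fs=FS, fc=CUTOFF):
    """Two cascaded first-order stages give a second-order roll-off."""
    return first_order_lowpass(first_order_lowpass(samples, fs, fc), fs, fc)


# A steady 165 W load with one instantaneous 300 W sample, like a phase switch:
trace = [165.0] * 1000
trace[500] = 300.0
smoothed = second_order_lowpass(trace)
print(max(trace), round(max(smoothed), 1))  # the 300 W spike is largely filtered out
```

The raw trace peaks at 300 W, but the filtered trace stays within a few watts of the 165 W continuous load, which is exactly why the filtered view better reflects the sustained power delivery the VRM and motherboard actually have to handle.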
Image source: E2E Texas Instruments
An example of instantaneous voltage spikes on power supply phase changes
Some gamers have expressed concern over that “maximum” power draw of 300 watts on the RX 480 that Tom’s Hardware reported. While that power measurement is technically accurate, it doesn’t represent the continuous power draw of the hardware. Instead, that measure is a result of a high frequency data acquisition system that may take a reading at the exact moment that a power phase on the card switches. Any DC switching power supply that is riding close to a certain power level is going to exceed that on the leading edges of phase switches for some minute amount of time. This is another reason why our low pass filter on power data can help represent real-world power consumption accurately. That doesn’t mean the spikes they measure are not a potential cause for concern, that’s just not what we are focused on with our testing.