Subject: Graphics Cards | November 18, 2016 - 12:24 AM | Scott Michaud
Tagged: nvidia, graphics drivers
Update, November 17th @ 7:21pm: NVIDIA has released 375.95 Hotfix to fix this issue. They are working on getting it WHQL certified for their website and GeForce Experience. You can download and install it directly, though.
Update, November 16th @ 12:56pm: NVIDIA has reproduced the low memory clocks issue, found its cause, and is working on a fix. They believe it only affects certain factory-overclocked cards. It is obviously a high priority, so a hotfix driver will likely be issued (unless they can get it WHQL certified quickly enough that a hotfix would be pointless).
Original post below:
NVIDIA has just released a new graphics driver. GeForce Game Ready 375.86 provides optimized support for Ubisoft's Steep, which is an open-world game with wingsuiters, skiers, snowboarders, and paragliders. It also rolls in extra optimizations for previous game ready games that are receiving patches: Battlefield 1, Civilization VI, and Tom Clancy's The Division.
Before you install, though, there is one particularly annoying issue being reported on the GeForce forums. NVIDIA is currently investigating reports that certain, but not all, Pascal GPUs are having their video memory stuck at 810 MHz, leading to (as you would expect) severe performance loss. It's possible that the affected users are all running a specific overclocking application or something. If you are in a bit of a rush and don't want to put up with potentially rolling back, then you might want to skip this version.
Thankfully, both discrete graphics vendors have been releasing multiple versions per month. The wait shouldn't be too long.
Subject: Graphics Cards | November 15, 2016 - 07:58 PM | Jeremy Hellstrom
Tagged: rx 480, nvidia, GTX1060, amd
On one side of the ring is the RX 480, with 2304 Stream Processors, 32 ROPs, and 144 Texture Units. In the opposite corner, at 1280 CUDA Cores, 48 ROPs, and 80 Texture Units, is the GTX 1060. The two cards retail between $200 and $250, depending on the features present on the card as well as any sales. [H]ard|OCP tested the two cards head to head, looking not just at raw performance numbers but also at the stability of the GPU frequencies, power draw, and temperatures. All games were tested at base clocks and at the highest stable overclock, and the results were back and forth: in some games AMD pulled ahead, while in others NVIDIA was the clear winner. It is worth keeping in mind that these results do not include VR.
"We take GIGABYTE’s Radeon RX 480 G1 GAMING video card and pit it against a MSI GeForce GTX 1060 GAMING X video card in today’s evaluation. We will overclock both video cards as high as possible and compare performance and find out what both video cards have to offer in the upper $200 price range for gaming."
Here are some more Graphics Card articles from around the web:
- Galax GTX 1070 EXOC Sniper Review @ OCC
- Zotac GeForce GTX 1050 @ Hardware Secrets
- Gigabyte GTX 1050 Ti G1 Gaming 4 GB @ techPowerUp
- MSI GTX 1050 Ti 4GB Gaming X 4G @ Kitguru
Subject: Graphics Cards | November 14, 2016 - 04:22 PM | Ryan Shrout
Tagged: video, the last hope, serious sam vr, rx 480, radeon, Polaris, multi-gpu, liquidvr, amd, affinity
While VR excitement might have cooled slightly in the enthusiast community, there continues to be innovation and software releases on both the Oculus Rift and HTC Vive that are bringing me back to what we believe to be part of the future of PC gaming. Serious Sam VR: The Last Hope was announced at E3 this year and is now available as an early access game on Steam. It is a dual-wielding shooter that combines the enemies of the previous games with the crazy weapons that made the series iconic.
And hey, there is something awesome about using a missile launcher that takes up half the screen.
One interesting technology addition to the game is the use of AMD's LiquidVR affinity multi-GPU. A Croteam developer recently posted a blog on the GPUOpen.com site discussing the implementation.
We wanted to add LiquidVR Affinity Multi-GPU rendering support to our engine because two GPUs can render the two eye views in almost half the time compared to a single GPU and this would greatly reduce our GPU bottlenecks. Affinity MGPU can either be done in one pass or with a separate pass for each eye, in which case we reap the GPU side benefits while the CPU workload stays the same.
We needed about a week to modify all shaders and to make sure that correct data is set for each eye. Single pass rendering with Affinity Multi-GPU gave us a huge speed improvement on both CPU and GPU from our original VR implementation. In the end, it took us less time to do single pass rendering correctly than it took us to fix all the problems caused by multi pass multi-GPU rendering.
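Croteam's reasoning above boils down to a timing argument, which a toy model can make concrete. The numbers below are invented for illustration (they are not Croteam's actual frame costs); the point is where the two approaches spend CPU and GPU time:

```python
# Toy frame-time model for VR stereo rendering (illustrative numbers only).
# Single GPU, two passes: the CPU records/submits one pass per eye and one
# GPU renders both eyes serially. Affinity MGPU, single pass: the CPU submits
# once; each GPU renders one eye in parallel, roughly halving GPU-side time.

CPU_SUBMIT_PER_PASS_MS = 2.0   # hypothetical CPU cost to record/submit a pass
GPU_EYE_RENDER_MS = 8.0        # hypothetical GPU cost to render one eye view

def single_gpu_two_pass():
    cpu = 2 * CPU_SUBMIT_PER_PASS_MS   # one submission per eye
    gpu = 2 * GPU_EYE_RENDER_MS        # one GPU renders both eyes back to back
    return cpu, gpu

def affinity_mgpu_single_pass():
    cpu = CPU_SUBMIT_PER_PASS_MS       # one submission, broadcast to both GPUs
    gpu = GPU_EYE_RENDER_MS            # each GPU renders its eye in parallel
    return cpu, gpu

cpu1, gpu1 = single_gpu_two_pass()
cpu2, gpu2 = affinity_mgpu_single_pass()
print(f"single GPU, two pass:      CPU {cpu1:.1f} ms, GPU {gpu1:.1f} ms")
print(f"affinity MGPU, single pass: CPU {cpu2:.1f} ms, GPU {gpu2:.1f} ms")
```

This is why the blog notes that a multi-pass MGPU approach only reaps the GPU-side benefit: the CPU still submits per eye, while the single-pass path shrinks both columns at once.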
After the interest in the Deus Ex multi-GPU scaling video I thought I would see if the Serious Sam implementation was actually beneficial to gamers.
- Test System
- Core i7-5960X
- X99 MB + 16GB DDR4
- AMD Radeon RX 480 8GB
- Driver: 16.10.2
The test was simple: I found that a single RX 480 could run the game at Medium settings perfectly well, but could it be playable on High with multi-GPU? By adding in a second Radeon RX 480 I was able to bring the performance up by 55% or so, making the VR experience nearly flawless.
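A quick bit of arithmetic puts that 55% gain in context against the theoretical ceiling of two cards:

```python
# Back-of-the-envelope scaling math for the dual RX 480 result above.
single_gpu_perf = 1.00          # normalized single-card performance
dual_gpu_perf = 1.55            # roughly 55% faster with a second RX 480

speedup = dual_gpu_perf / single_gpu_perf   # 1.55x against an ideal 2.00x
efficiency = speedup / 2.0                  # fraction of perfect scaling
print(f"speedup: {speedup:.2f}x, scaling efficiency: about {efficiency:.2f}")
```

Roughly three quarters of the second card's potential is realized, which is solid for a first-wave VR multi-GPU implementation.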
It's not perfect scaling, but the benefits of multi-GPU for VR, when properly implemented, are obvious. As more games and experiences are released that require higher compute capability or have in-game settings that allow for better image quality, the ability to scale across GPUs will be a welcome addition to the ecosystem.
Check out the video here if you haven't seen any Serious Sam VR gameplay yet!
Subject: Graphics Cards | November 11, 2016 - 01:27 AM | Scott Michaud
Tagged: quarterly earnings, nvidia
NVIDIA's most recent quarter, covering the three months ending on October 30th, has just passed $2 billion USD in revenue, an increase of 54% from last year. All told, this leads to $542 million in GAAP net income, up 108% from last quarter (and up 120% from the same quarter last year).
NVIDIA doesn't attribute this increase to any specific line of products. Instead, CEO Jen-Hsun Huang takes the opportunity to promote the “years of work and billions of dollars” they spent on the Pascal architecture, applying it all over the place. While I'm guessing a lot of the sales are carried over from last quarter's parts, which are now able to keep up with demand, NVIDIA points to laptop SKUs of 10-series GPUs, the launch of Tesla P4 and P40 GPUs, and initial shipments of the DGX-1 as new and notable for this quarter.
NVIDIA expects an even better quarter over the holidays, guiding to $2.1 billion USD, plus or minus a couple percent. A lot more details are available on NVIDIA's blog, including their Switch announcement with Nintendo, their Drive PX2 platform, and their next-generation Tegra processor, codenamed Xavier.
Subject: Graphics Cards, Systems | November 10, 2016 - 04:44 PM | Ryan Shrout
Tagged: VR, rift, Oculus, atw, asynchronous timewarp, asynchronous spacewarp, asw
Oculus has announced that as of today, support for Asynchronous Spacewarp is available and active for all users that install the 1.10 runtime. Announced at the Oculus Connect 3 event in October, ASW promises to complement existing Asynchronous Timewarp (ATW) technology to improve the experience of VR for lower performance systems that might otherwise result in stutter.
A quick refresher on Asynchronous Timewarp is probably helpful. ATW was introduced to help alleviate the impact of missed frames on VR headsets, and its development started back with the Oculus DK2 headset. By shifting the image on the VR headset, without input from the game engine, based on relative head motion that occurred AFTER the last VR pose was sent to the game, timewarp presents a more accurate image to the user. While this technology was first used as a band-aid for slow frame rates, Oculus felt confident enough in its advantages that it is enabled on the Rift for all frames of all applications, regardless of frame rate.
ATW moves the entire frame as a whole, shifting it only based on relative changes to the user's head rotation. The new Asynchronous Spacewarp attempts to shift objects and motion inside the scene by generating new frames to insert between "real" frames from the game engine when the game is running at 45 FPS. With a goal of maintaining a smooth, enjoyable, and nausea-free experience, Oculus says that ASW "includes character movement, camera movement, Touch controller movement, and the player's own positional movement."
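Conceptually, ASW is an extrapolation: given how the scene changed between the last two real frames, it pushes that motion one half-step further forward. Here is a heavily simplified 1D sketch of that core idea (this is my illustration, not Oculus's actual algorithm, which operates on per-block motion vectors of the rendered image):

```python
# Simplified 1D illustration of spacewarp-style frame extrapolation.
# A single "object position" stands in for the whole scene here; the real
# runtime estimates motion per region of the image.

def extrapolate(prev_pos: float, curr_pos: float) -> float:
    """Predict the synthetic frame shown one half frame-interval after
    curr_pos, assuming motion continues at the same velocity."""
    velocity_per_frame = curr_pos - prev_pos
    return curr_pos + 0.5 * velocity_per_frame

# The game renders at 45 FPS while the headset displays at 90 Hz, so every
# other displayed frame is synthesized from the two most recent real frames.
real_frames = [0.0, 2.0, 4.0]                  # positions from the game engine
synthetic = extrapolate(real_frames[0], real_frames[1])
print(synthetic)                               # 3.0, between the real 2.0 and 4.0
```

The synthesized frame lands between the two real frames it straddles, which is exactly the continuation-of-motion trick described above.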
To those of you who are familiar with the idea of timewarp, this might sound like black magic. Oculus presents this example on their website to help explain what is happening.
Seeing the hand with the gun in motion, ASW generates a frame that continues the animation of the gun to the left, tricking the user into seeing the continuation of the motion they are going through. When the next actual frame is presented just after, the gun will have likely moved slightly more than that, and then the pattern repeats.
You can notice a couple of things about ASW in this animation example, however. If you look just to the right of the gun barrel in the generated frame, the pixels are stretched in an artificial way; the wheel looks like something out of Dr. Strange. That said, this is likely an effect that would not be noticeable in real time and should not dramatically impact the user experience. And, as Oculus would tell us, it is better than the alternative of simply missing frames and animation updates.
Some ASW interpolation changes will be easier than others thanks to secondary data being available. For example, with the Oculus Touch controller, the runtime will know how much the player's hand has moved, and thus how much the object being held has moved, and can better estimate the new object location. Positional movement would also have this advantage. If a developer has properly implemented the different layers of abstraction for Oculus and its runtime, separating out backgrounds from cameras from characters, etc., then the new frames being created are less likely to have significant distortions.
I am interested in how this new feature affects the current library of games on PCs that do in fact drop below that 90 FPS mark. In October, Oculus was on stage telling users that the minimum spec for VR systems was dropping from requiring a GTX 970 graphics card to a GTX 960. This clearly expands the potential install base for the Rift. Will the magic behind ASW live up to its stated potential without an abundance of visual artifacts?
In a blog post on the Oculus website, they mention some other specific examples of "imperfect extrapolation." Rapid brightness changes, object disocclusion trails (an object moving out of the way of another object), repeated patterns, or head-locked elements (that aren't designated as such in the runtime) can all cause distracting artifacts in the animation if not balanced and thought through. Oculus isn't telling game developers to go back and modify their titles, but instead to "be mindful of their appearance."
Oculus does include a couple of recommendations for developers looking to optimize quality for ASW: using locked layers, using real time rather than frame counts for animation steps, and offering easily adjustable image quality settings. It's worth noting that this new technology is enabled by default as of runtime 1.10 and will only kick in once a game drops below the 90 FPS line. If your title stays over 90 FPS, then you get the advantages of Asynchronous Timewarp without the potential issues of Asynchronous Spacewarp.
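The "real time rather than frame counts" recommendation matters because under ASW the game simulates half as many frames as the headset displays. A minimal sketch of the difference (my own illustration, not Oculus SDK code):

```python
# Frame-count animation advances a fixed step per rendered frame, so a game
# that drops from 90 FPS to 45 FPS animates at half speed. Real-time
# animation scales each step by elapsed wall-clock time and stays correct.

DEG_PER_SECOND = 90.0   # desired rotation speed, independent of frame rate

def step_by_frame(angle: float, deg_per_frame: float) -> float:
    return angle + deg_per_frame              # wrong under a variable frame rate

def step_by_time(angle: float, dt_seconds: float) -> float:
    return angle + DEG_PER_SECOND * dt_seconds

# One second of animation at 90 FPS vs. 45 FPS using the time-based step:
a90 = a45 = 0.0
for _ in range(90):
    a90 = step_by_time(a90, 1 / 90)
for _ in range(45):
    a45 = step_by_time(a45, 1 / 45)
print(round(a90), round(a45))                 # both land on ~90 degrees
```

With `step_by_frame`, the 45 FPS case would end the second at half the rotation, and ASW's synthesized in-between frames would make the slowdown even more obvious.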
The impact of ASW will be interesting to see. For as long as Oculus has been around, they have trumpeted the need for 90 FPS to ensure a smooth gaming experience free of headaches and nausea. With ASW, that, in theory, drops to 45 FPS, though with the caveats mentioned above. Many believe, as do I, that this new technology was built to help Microsoft's partnership with Oculus bring VR to the Scorpio Xbox console coming next year. Because the power of that new hardware will still lag behind the recommended specification from both Oculus and Valve for VR PCs, something had to give. The result is a new "minimum" specification for Oculus Rift gaming PCs and a level of performance that makes console-based integrations of the Rift possible.
Subject: Graphics Cards | November 10, 2016 - 02:52 AM | Scott Michaud
Tagged: graphics drivers, dishonored 2, crimson, amd
Just a handful of days into this busy month for video game companies, AMD has released their third Radeon Software Crimson Edition driver for November. 16.11.3, like 16.11.2 and 16.11.1, is not WHQL certified. From a quality standpoint, Microsoft certification hasn't exactly made a difference over the last year or so. In fact, both graphics vendors rapidly releasing hotfixes between regular WHQL milestones seems to provide a better user experience.
Unfortunately, this does mean that users of clean installed Windows 10 1607 with Secure Boot enabled will be missing out. Correction: The drivers are actually signed by Microsoft with the attestation process.
As for the driver itself, 16.11.3 rolls in AMD's optimizations for Dishonored 2. The game goes live in two days, so this should give users an opportunity to find a good time to install and reboot before launch. It also fixes an issue where Valve's Steam client and EA's Origin client would fail when an external GPU, using AMD's X-Connect Technology standard, is detached.
Subject: Graphics Cards | November 7, 2016 - 11:00 PM | Scott Michaud
Tagged: nvidia, graphics drivers
Update, November 7th @ 5:25pm EST:
First, NVIDIA gave Ryan their official statement, which I included below verbatim.
GeForce Experience collects data to improve the application experience; this includes crash and bug reports as well as system information needed to deliver the correct drivers and optimal settings. NVIDIA does not share any personally identifiable information collected by GeForce Experience outside the company. NVIDIA may share aggregate-level data with select partners, but does not share user-level data. The nature of the information collected has remained consistent since the introduction of GeForce Experience 1.0. The change with GeForce Experience 3.0 is that this error reporting and data collection is now being done in real-time.
They also pointed to their GeForce Experience FAQ.
It sounds like there's a general consensus, both from NVIDIA and even their harshest critics, that telemetry only affects GeForce Experience, and not the base driver. I still believe that there should be a more granular opt-out that still allows access to GeForce Experience, much as web browsers and Visual Studio prompt with a checkbox during install. Still, if this concerns you (and, like Windows 10, it might not, and that's okay), you can remove GeForce Experience.
Also, GamersNexus yet again did a very technical breakdown of the situation. I think they made an error, though, since they claimed to have recorded traffic "for about an hour," which may not have included the once-per-day reporting time from Windows Task Scheduler. (My image below suggests, at least for my system, monitoring once per hour but reporting at 12:25pm and at user login.) I reached out to them on Twitter for clarification, but it looks like they may have just captured GeForce Experience's typical traffic.
Update, November 7th @ 7:15pm EST: Heard back from GamersNexus. They did check at the Windows Task Scheduler time as well, and they claim that they didn't see anything unusual. They aren't finished with their research, though.
Original news, posted November 6th @ 4:25pm EST, below.
Over the last day, users have found NVIDIA Telemetry Monitor added to Windows Task Scheduler. We currently don't know what it is or exactly when it was added, but we do know its schedule. When the user logs in, it runs an application that monitors... something... once every hour while the computer is active. Then, once per day (at just after noon on my PC) and once on login, it runs an application that reports that data, which I assume means sends it to NVIDIA.
Before we begin, NVIDIA (or anyone) should absolutely not be collecting data from personal devices without clearly explaining the bounds and giving a clear option to disable it. Lots of applications, from browsers to software development tools, include crash and error reporting, but they usually and rightfully ask you to opt-in. Microsoft is receiving a lot of crap for this practice in Windows 10, even with their “Basic” option, and, while most of those points are nonsense, there is ground for some concern.
I've asked NVIDIA if they have a statement regarding what it is, what it collects, and what their policy will be for opt-in and opt-out. I haven't received a response yet, because I sent it less than an hour ago on a weekend, but we'll keep you updated.
Subject: Graphics Cards | November 7, 2016 - 02:32 PM | Josh Walrath
Tagged: WX 7100, WX 5100, WX 4100, workstation, radeon pro, radeon, quadro, Polaris, amd
The professional card market is a lucrative one. For many years NVIDIA has had a near strangle-hold on it with their Quadro series of cards. Offering features and extended support far beyond that of their regular desktop cards, Quadros became the go-to cards for many professional applications. AMD has not been overlooking this area though and have had a history of professional cards that have also included features and support not seen in the standard desktop arena. AMD has slowly been chipping away at Quadro’s marketshare and they hope that today’s announcement will help further that particular goal.
It has now been around five months since the initial release of the Polaris-based graphics cards from AMD. Featuring the 4th-generation GCN architecture and fabricated on Samsung's latest 14nm process, the RX 4x0 series of chips has proven to be a popular option in the sub-$250 range of cards. While these products may not have been the slam-dunk that many were hoping for from AMD, they have kept the company competitive in terms of power and performance. AMD has also seen a positive impact on the overall bottom line from sales of these products.
Today AMD is announcing three new professional cards based on the latest Polaris based GPUs. These range in power and performance from a sub 50 watt part up to a very reasonable 130 watts. These currently do not feature the SSD that was shown off earlier this year.
The lowest end offering is the Radeon Pro WX 4100. This is a low profile, single slot card that consumes less than 50 watts. It features 1024 stream units, which is greater than that of the desktop RX 460’s 896. The WX 4100 features 2.4 TFLOPS of performance while the RX 460 is at 2.2 TFLOPS. AMD did not specify exactly what chips were used in the professional cards, but the assumption here is that this one is a fully enabled Polaris 11.
The power consumption of this card is probably its most impressive aspect. Also of great interest are the DisplayPort 1.4 support and the four outputs. Finally, the card supports 5K monitors at 60 Hz. This is a small, quiet, cool-running part that comes with AMD's full enterprise-grade Radeon software support for the professional market.
The next card up is the Pro WX 5100. This features a sub-75 watt GPU with 1792 stream units; we guess that this chip is a cut-down Polaris 10. On the desktop side it is similar to the RX 470, but that particular card features more stream units and a faster clockspeed: the RX 470 is rated at 4.9 TFLOPS while the WX 5100 is at 3.9 TFLOPS. Fewer stream units and a lower clockspeed allow it to hit that sub-75 watt figure.
It supports the same number of outputs as the 4100, but they are full sized DP. The card is full sized but still only single slot due to the very conservative TDP.
The final card is the WX 7100. This is based on the fully enabled Polaris 10 GPU and is physically similar to the RX 480. Both feature 2304 stream units, but the WX 7100 is clocked slightly lower than the RX 480, as it offers 5.7 TFLOPS of performance vs. 5.8 TFLOPS. The card is rated below 130 watts TDP, which is about 20 watts lower than a standard RX 480. AMD did not explain how they were able to lower the TDP of this card, but it could be simple binning of parts or an upcoming revision of Polaris 10 with improved thermals.
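The TFLOPS figures quoted for these cards follow directly from shader count and clock, since GCN's peak FP32 rate is stream processors × 2 FLOPs per clock (fused multiply-add). Working backwards gives a rough idea of the clocks AMD is targeting:

```python
# Peak FP32 throughput for GCN: stream processors x 2 FLOPs (FMA) x clock.

def peak_tflops(stream_processors: int, clock_ghz: float) -> float:
    return stream_processors * 2 * clock_ghz / 1000.0

def implied_clock_ghz(stream_processors: int, tflops: float) -> float:
    return tflops * 1000.0 / (stream_processors * 2)

# RX 480 sanity check: 2304 SPs at its 1.266 GHz boost clock.
print(round(peak_tflops(2304, 1.266), 2))      # 5.83, matching the 5.8 TFLOPS quoted

# Implied boost clocks from the quoted professional-card TFLOPS numbers:
print(round(implied_clock_ghz(2304, 5.7), 3))  # WX 7100: ~1.237 GHz
print(round(implied_clock_ghz(1792, 3.9), 3))  # WX 5100: ~1.088 GHz
```

So the quoted ratings imply only a modest clock reduction on the WX 7100, consistent with the binning explanation above.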
This card is again full sized but single slot. It features the same 4 DP connectors as the WX 5100 and the full monitor support that the 1.4 standard entails.
These products are slated for initial availability this month, though plans may of course change and they could be introduced slightly later. Currently the 7100 and 4100 are expected after the 10th, while the 5100 should show up on the 18th.
AMD is also releasing Radeon Pro Software. This is essentially their professional driver program, which improves features, stability, and performance over time. AMD aims to release new drivers for this market on the fourth Thursday of each quarter.
This is certainly an important area for AMD to address with their new cards and this updated software scheme. NVIDIA has made a pretty penny over the years from their Quadro stack due to the extremely robust margins for these cards. The latest generation of AMD Radeon Pro WX cards look to stack up favorably against the latest products from NVIDIA.
The WX 7100 will come in at a $799 price point, while the WX 5100 and WX 4100 will hit $499 and $399 respectively.
Subject: Graphics Cards | November 6, 2016 - 12:19 AM | Scott Michaud
Tagged: linux, DOTA 2, valve, nvidia, vulkan, opengl
Phoronix published interesting benchmark results for OpenGL vs. Vulkan on Linux, across a wide spread of thirteen NVIDIA GPUs. Before we begin, the CPU they chose was an 80W Intel Xeon E3-1280 v5, which fits somewhere between the Skylake-based Core i7-6700K and Core i7-6700 (no suffix). You may think that a Xeon v5 would be based on Broadwell, but, for some reason, Intel chose the E3-1200 v5 series to be based on Skylake. Regardless, the choice of CPU will come into play.
They will apparently follow up this article with AMD results.
A trend arose throughout the whole article. At 1080p, everything, from the GTX 760 to the GTX 1080, was rendering at ~101 FPS on OpenGL and ~115 FPS on Vulkan. The obvious explanation is that the game is 100% CPU-bound on both APIs, but Vulkan is able to relax the main CPU thread enough to squeeze out about 14% more frames.
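Converting those CPU-bound FPS numbers into frame times makes the size of the win clearer, since frame time is the more natural unit for a CPU bottleneck:

```python
# Convert the CPU-bound FPS figures above into per-frame CPU time.
opengl_fps = 101.0
vulkan_fps = 115.0

opengl_ms = 1000.0 / opengl_fps      # ~9.90 ms per frame
vulkan_ms = 1000.0 / vulkan_fps      # ~8.70 ms per frame

print(f"OpenGL: {opengl_ms:.2f} ms, Vulkan: {vulkan_ms:.2f} ms")
print(f"saved per frame: {opengl_ms - vulkan_ms:.2f} ms "
      f"({vulkan_fps / opengl_fps - 1:.1%} more frames)")
```

In other words, moving to Vulkan frees up a bit over a millisecond of main-thread time per frame, which is where that roughly 14% figure comes from.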
The thing is, the Xeon E3-1280 v5 is about as high-end a mainstream CPU as you can get. It runs the most modern architecture and can achieve clocks up to 4 GHz on all cores. DOTA 2 can get harsh on the CPU when a lot of units are on screen, but this result is surprisingly low. Then again, I don't have any experience running DOTA 2 benchmarks, so maybe it's a known thing, or maybe even a Linux-version thing?
Moving on, running the game at 4K, the results get more interesting. In GPU-bound scenarios, NVIDIA's driver shows a fairly high performance gain on OpenGL. Basically all GPUs up to the GTX 1060 run at a higher frame rate in OpenGL, only switching to Vulkan with the GTX 1070 and GTX 1080, where OpenGL hits that 101 FPS ceiling and Vulkan goes a little above.
Again, it will be interesting to see how AMD fares against this line of products, both in Vulkan and OpenGL. Those results will apparently come "soon".
Subject: Graphics Cards | November 4, 2016 - 09:57 PM | Scott Michaud
Tagged: amd, graphics drivers, crimson
AMD has released another hotfix driver, just a day after releasing 16.11.1. This version has a single listed change: “Improved Shader Cache storage limit”. I'm not sure why the company decided to release this update so abruptly, since I'm not aware of any critical issue that relies upon it, but there's certainly nothing wrong with rapidly releasing optional software. I'm guessing at least one new game has a performance issue with the previous maximum, though.
If this has been an issue for you, and you are able to install drivers that are unsigned, it's available.
Subject: Graphics Cards | November 3, 2016 - 08:27 PM | Jeremy Hellstrom
Tagged: amd, Crimson Edition 16.11.1
By the time you read this you should be able to grab the new Radeon Crimson Edition 16.11.1 driver from AMD. While mostly focused on the new CoD games there are also some fixes for existing games. Check here for the version which is right for you.
Radeon Software Crimson Edition is AMD’s revolutionary new graphics software that delivers redesigned functionality, supercharged graphics performance, remarkable new features, and innovation that redefines the overall user experience. Every Radeon Software release strives to deliver new features, better performance and stability improvements.
Radeon Software Crimson Edition 16.11.1 Highlights
- Support For: Call of Duty: Infinite Warfare
- Call of Duty: Modern Warfare Remastered
- New AMD CrossFire profile added for DirectX® 11: Titanfall 2
Fixed Issues
- AMD XConnect Technology will now allow Microsoft Office applications to migrate to iGPU on unplug.
Known Issues
- Flickering may be observed on some surfaces in a few maps or locations in Battlefield 1 in AMD CrossFire mode.
- Radeon R9 390 graphics series may experience a crash or application hang when running Unigine Heaven using OpenGL.
- The Radeon WattMan feature may intermittently display a Radeon Software popup error regarding Radeon WattMan for non-supported products.
- The Division may experience an application freeze or hang when running in AMD CrossFire mode after extended periods of play.
- OBS screen capture may stutter after extended periods of use while capturing video and watching or streaming content in a web browser.
Subject: General Tech, Graphics Cards | November 3, 2016 - 05:48 PM | Jeremy Hellstrom
Tagged: VRMark, Futuremark, Blue Room, Orange Room, VR
Futuremark's VRMark is available today via Steam or directly from Futuremark. As with 3DMark the basic version is free while the Advanced Edition is $20, with a 25% discount for the first week of its release.
The difference between the two versions is the inclusion of the Blue Room in addition to the Orange Room; the Blue Room is for high-end systems which surpass the basic VR requirements and need heavier loads to test. The two rooms can be used to run either a standard benchmark or to enter Experience Mode, which lets you wander the room on your own to get a feel for the headset's reprojection performance, as well as spatial audio and an interactive flashlight to test lighting.
VRMark Basic Edition - free download
- See if your PC meets the performance requirements for HTC Vive and Oculus Rift
- Test your system's VR readiness with the Orange Room benchmark
- Explore the Orange Room in Experience mode
VRMark Advanced Edition - $19.99
- Unlock the Blue Room benchmark for high-performance PCs
- See detailed results and hardware monitoring charts
- Explore both rooms in Experience mode
- Make tests more or less demanding with custom settings.
VRMark comes with two VR benchmark tests, which you can run on your desktop monitor, no headset required, or on a connected HMD. There is also a free-roaming Experience mode that lets you judge the quality of a system's VR performance with your own eyes.
The performance requirements for VR games are much higher than for typical PC games. So if you're thinking about buying an HTC Vive or an Oculus Rift this holiday, wouldn't it be good to know that your PC is ready for VR?
VRMark includes two VR benchmark tests that run on your monitor, no headset required. At the end of each test, you'll see whether your PC is VR-ready, and if not, how far it falls short.
Orange Room benchmark
The VRMark Orange Room benchmark shows the impressive level of detail that can be achieved on a PC that meets the recommended hardware requirements for the HTC Vive and Oculus Rift. If your PC passes this test, it's ready for the two most popular VR systems available today.
Blue Room benchmark
The VRMark Blue Room benchmark is a more demanding test with a greater level of detail. It is the ideal benchmark for comparing high-end systems with specs above the recommended requirements for the HTC Vive and Oculus Rift. A PC that passes this test will be able to run the latest VR games at the highest settings, and may even be VR-ready for the next generation of VR headsets.
Results and reporting
After running a benchmark, you'll see clearly whether your PC is VR-ready or not. To pass, your PC has to meet or exceed the target frame rate without dropping frames. You also get an overall score, which you can use to compare systems.
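The pass/fail logic described above amounts to checking every frame against the target frame budget. Here is a simple sketch of that idea (my illustration, not Futuremark's actual scoring code):

```python
# Illustrative VR-readiness check: every frame must fit the target budget.

def vr_ready(frame_times_ms, target_fps=90.0):
    """Return (passed, average FPS) for a list of per-frame times in ms."""
    budget_ms = 1000.0 / target_fps           # ~11.1 ms per frame at 90 FPS
    dropped = [t for t in frame_times_ms if t > budget_ms]
    avg_fps = 1000.0 * len(frame_times_ms) / sum(frame_times_ms)
    return len(dropped) == 0, avg_fps

ok, fps = vr_ready([10.2, 10.8, 9.9, 10.5])
print(ok, round(fps, 1))                      # True, since no frame missed 11.1 ms
```

Note that the no-dropped-frames requirement is stricter than an average: a run averaging well over 90 FPS can still fail if even one frame blows the budget.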
Hardware monitoring charts show how your PC performed frame-by-frame. There are charts for frame rate, GPU frequency, GPU load, and GPU temperature.
VR headsets use clever techniques to compensate for missed frames. With Experience mode, you can judge the quality of the VR experience with your own eyes. VRMark Experience mode features free movement, spatial audio, and an interactive flashlight for lighting up the details of the scene. Explore each scene in your own time in VR or on your monitor.
Subject: Graphics Cards | November 2, 2016 - 11:10 PM | Jeremy Hellstrom
Tagged: pascal, nvidia, GTX1070, GTX1060, GTX 1080, fail, evga, ACX 3.0
Checklist time readers, do you have the following:
- A GTX 1060/1070/1080
- Which is from EVGA
- With an ACX 3.0 cooler
- With one of the model numbers above
If not, make like Bobby McFerrin.
If so, you have a reason to be concerned, and EVGA offers their apologies and, more importantly, a fix. EVGA's tests, which emulate the ones performed at Tom's Hardware, show that the temperature of the PWM and memory was just marginally within spec. That is a fancy way of saying that in certain circumstances the PWM was running just short of causing a critical thermal incident, also known as catching on fire and letting out the magic smoke. They claim that this happened because their testing focused on GPU temperature and the lowest acoustic levels possible and did not involve measuring the heat produced on the memory or the VRM, which is, as they say, a problem.
You have several choices of remedy from EVGA, please remember that you should reach out directly to their support, not NVIDIA's. You can try requesting a refund from the store you purchased it at but your best bet is EVGA.
The first option is a cross-ship RMA. Contact EVGA as a guest or with your account to set up an RMA and they will ship you a replacement card with a new VBIOS which will not have this issue and you won't need to send yours back until the replacement arrives.
You can flash the new VBIOS, which will adjust the fan-speed curve to ensure that your fans run at higher than 30% and provide sufficient cooling to additional portions of the card. Your card will be louder, but it will also be less likely to commit suicide in a dramatic fashion.
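The VBIOS change is essentially a floor on the fan curve. Here is a hypothetical before/after sketch (the curve values are invented for illustration; only the 30% floor comes from EVGA's description):

```python
# Hypothetical fan curve: duty cycle (%) as a function of GPU temperature (C).
# The original curve could idle the fans very low when the GPU core was cool,
# even while the VRM and memory (which the curve never measured) ran hot.
# The updated VBIOS clamps the minimum duty cycle to 30%.

def fan_duty(temp_c: float, min_duty: float = 0.0) -> float:
    # Simple linear ramp: 20% at 40C rising to 100% at 85C (made-up values).
    ramp = 20.0 + (temp_c - 40.0) * (100.0 - 20.0) / (85.0 - 40.0)
    return max(min_duty, min(100.0, ramp))

for t in (35, 50, 70):
    old = fan_duty(t)                  # original behavior
    new = fan_duty(t, min_duty=30.0)   # updated VBIOS behavior
    print(f"{t}C core: {old:.0f}% -> {new:.0f}% duty")
```

Only the low end of the curve changes, which is why the fix costs some noise at idle but leaves load behavior, where the fans were already spinning fast, essentially untouched.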
Lastly you can request a thermal pad kit, which EVGA suggests is unnecessary but certainly sounds like a good idea especially as it is free although requires you sign up for an EVGA account. Hopefully in the spare seconds currently available to the team we can get our hands on an ACX 3.0 cooled Pascal card with the VBIOS update and thermal pads so we can verify this for you.
This issue should not have happened, and it does reflect badly on certain aspects of EVGA's testing. On the other hand, their response has been very appropriate: if you are affected, you can get a replacement card with no issues, or you can fix the issue yourself. Any cards shipped, though not necessarily purchased, after Nov. 1st will have the new VBIOS, so be careful if you are picking up a new EVGA Pascal card.
Subject: Graphics Cards | November 2, 2016 - 08:01 PM | Scott Michaud
Tagged: nvidia, graphics drivers
The release of NVIDIA's GeForce 375.57 graphics drivers wasn't the smoothest. It introduced a few bugs into the package, likely because of all the games that were coming out at the time. One issue introduced artifacts into animated GIFs, which could show up as seconds' worth of black blotches. This was supposed to be fixed in the next WHQL driver, but it slipped. Since the next WHQL driver looks to be a couple of weeks out, NVIDIA released a hotfix.
The driver also fixes “occasional flicker on high refresh rate monitors”. I'm not sure how old this bug is. I've heard some people complain about it with recent drivers, but Allyn and I have noticed weird snowy flickers for several months now. (Allyn actually took slow motion video of one occurrence back in May.) I guess we'll see if this is the same issue.
You can pick up 375.76 Hotfix from NVIDIA's CustHelp.
Subject: General Tech, Graphics Cards, Motherboards | November 1, 2016 - 09:23 PM | Scott Michaud
Tagged: msi, giveaway, giveaways, pc gaming
To celebrate their 30th anniversary, MSI is holding a massive giveaway. Each day, from today (November 1st) to November 30th, you can answer a trivia question to be entered into that day's drawing. Being that it's MSI, they also require that you capitalize every letter of your answer. I'm not joking; that really is in their How to Enter process. You also need to follow MSI and HyperX on Twitter to enter but, although the form is hosted through Facebook, it looks like you do not need a Facebook account. I could be wrong about that last part, though.
Also, winning a prize does not exclude you from winning future prizes. Don't bother trying to game the system by, say, waiting to enter for the “good prizes” while skipping the “great prizes” that will draw too many entries. Try every day if you can, even if you have already won.
The prize for today is the GeForce GTX 1050 Ti GAMING 4G from MSI, but prizes vary wildly from day to day. Even though NVIDIA is a partner in this giveaway, along with HyperX and Intel, there are even some AMD cards scattered throughout the month. It makes sense: MSI sells AMD cards. Their contest page claims that the total prize pool is up to $14,000 USD.
Subject: Graphics Cards | November 1, 2016 - 03:57 PM | Ryan Shrout
Tagged: video, rx 480, radeon, nvidia, multi-gpu, gtx 1060, geforce, dx12, deus ex: mankind divided, amd
Last week, a new update was pushed out to Deus Ex: Mankind Divided that made DX12 part of the main line build and also integrated early multi-GPU support under DX12. I wanted to quickly see what kind of scaling it provides, as we still have very few proof points on the benefit of running more than one graphics card in games utilizing the DX12 API.
As it turns out, the current build and driver combination only shows scaling on the AMD side of things. NVIDIA still doesn't have DX12 multi-GPU support enabled at this point for this title.
- Test System
- Core i7-5960X
- X99 MB + 16GB DDR4
- AMD Radeon RX 480 8GB
- Driver: 16.10.2
- NVIDIA GeForce GTX 1060 6GB
- Driver: 375.63
Not only do we see great scaling in terms of average frame rates but, using PresentMon for frame time measurement, we also see that the frame pacing is consistent and provides the user with a smooth gaming experience.
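For anyone curious what "using PresentMon for frame time measurement" looks like in practice: PresentMon writes a CSV log whose MsBetweenPresents column records the milliseconds between consecutive frame presents, and a few lines of Python can turn that into average FPS and a pacing-consistency figure. This is a generic sketch of that workflow, not the exact tooling used for the article.

```python
import csv
import statistics

def frame_stats(csv_path):
    """Summarize a PresentMon CSV log into frame rate and pacing numbers.

    Reads the MsBetweenPresents column (milliseconds between consecutive
    presents) and reports the average FPS plus the frame-time standard
    deviation; a low stdev relative to the mean indicates smooth pacing.
    """
    with open(csv_path, newline="") as f:
        times = [float(row["MsBetweenPresents"]) for row in csv.DictReader(f)]
    avg_ms = statistics.mean(times)
    return {
        "avg_fps": 1000.0 / avg_ms,
        "avg_frametime_ms": avg_ms,
        "frametime_stdev_ms": statistics.stdev(times),
    }
```

The distinction matters because two cards can post the same average FPS while one of them alternates fast and slow frames; the stdev (or a percentile breakdown) is what catches the stutter that averages hide.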
Subject: Graphics Cards | October 31, 2016 - 05:21 PM | Jeremy Hellstrom
Tagged: powercolor, devil box, external gpu
Thunderbolt 3, when properly implemented, provides enough bandwidth to make external GPUs possible. The rather large Devil Box dock offers all the connectivity generally found in a docking station but can also handle even the most recently released GPUs. Overclockers Club tested the effectiveness of the Devil Box with an RX 480, comparing the performance of the card when installed internally and externally. As you would reasonably expect, performance is slower over Thunderbolt, by a fair margin in most cases but not as much in the DX12 Ashes of the Singularity. Drop by to see the full review and ponder whether adding an external desktop GPU to your laptop is interesting enough for you to invest in.
"If you are using a laptop, you get single connection to everything you need via Thunderbolt 3. External storage, connecting USB peripherals, Gigabit LAN connectivity, display output, and charging all through one cable. Pricing will come in at $375 US for just the Devil Box enclosure and included Thunderbolt 3 40Gbps cable. Add in the cost of a good, solid $200 GPU and you fast approach $600."
Here are some more Graphics Card articles from around the web:
- MSI GeForce GTX 1050 Ti GAMING X vs AMD Radeon RX 470 @ [H]ard|OCP
- $100-$150 Best Playable Roundup: AMD’s RX 460 & NVIDIA’s GTX 1050 / 1050 Ti @ Techgage
- NVIDIA GeForce GTX 1050 Ti Linux Benchmarks @ Phoronix
- NVIDIA GeForce GTX 1050 OpenGL/Vulkan/OpenCL Linux Performance @ Phoronix
- MSI GeForce GTX 1050 Ti GAMING X 4G @ [H]ard|OCP
- The GeForce GTX 1050 & 1050 Ti Performance Comparison @ Tech ARP
- AMD & NVIDIA GPU VR Performance - theBlu: Encounter @ [H]ard|OCP
Subject: Graphics Cards | October 30, 2016 - 07:09 AM | Scott Michaud
Tagged: fail, evga
About a week ago, EVGA acknowledged an issue with their brand of custom-cooled GTX 1070 and GTX 1080 FTW cards. This came after the German branch of Tom's Hardware measured, back on October 6th, very high temperatures on the voltage regulator modules (VRMs), caused by those components being unable to shed heat adequately. To remedy the situation, EVGA offers cooling pads to all affected customers, which they can install under the backplate and under the heatsink fins.
Image Credit: EVGA
Over the last day or so, users have been reporting that their cards are breaking, and even allegedly catching fire. According to GamersNexus and their source, Buildzoid of Actually Hardcore Overclocking, VRMs that fail will simply burn out without warning. The user in question claims that they were just playing Shadow Warrior 2 when their computer shut down with a sparkle and magic smoke. Taking the card out, they noticed a scorch mark on the PCB, right in the middle of the VRMs.
Regardless of how gloriously pyrotechnic this issue became, the consensus is that the thermal pads will still fix the issue. If you're not comfortable adding them yourself, then you should contact EVGA support.
Subject: Graphics Cards | October 30, 2016 - 04:08 AM | Scott Michaud
Tagged: titanfall 2, graphics drivers, amd
If you are experiencing crashes in Titanfall 2, and you are using an AMD graphics card, then you will probably be interested in AMD's Radeon Software Crimson Edition 16.10.3. According to its release notes, that is the only issue this hotfix driver addresses.
Also, being a hotfix driver, you might have issues on clean installs of Windows 10 Anniversary Update, because I'm not sure whether it's signed by Microsoft. It might be, but that's obviously a fairly narrow subset of hardware and software that I cannot test on a single machine. If that is the case, though, you can temporarily disable Secure Boot... or just wait until AMD releases a signed driver.
I should note that, while we're posting this a couple of days late, like our news about NVIDIA's driver, AMD was able to release this the day before Titanfall 2 launched. Our readers, at least I hope, found out about the update before now, rather than suffering through some crashes when a fix was already available. Sorry that I didn't get a post up sooner, though; AMD did their part.
Subject: Graphics Cards | October 30, 2016 - 03:45 AM | Scott Michaud
Tagged: nvidia, gtx 1070, vbios
So apparently I completely missed this news for over a week. It's probably something that our readers would like to know, though, because it affects the stability of GTX 1070 cards. Video RAM chips are purchased from a variety of vendors, and they should ideally be interchangeable. It turns out that, while NVIDIA seems to ship their cards with Samsung memory, some partners have switched to Micron GDDR5 modules.
According to DigitalTrends, the original VBIOS installed in these graphics cards cannot ramp voltage quickly enough for the Micron modules, so data would be stored improperly. This reminds me of when I had a 7900 GT, which apparently had issues with the voltage regulators feeding the VRAM, leading to interesting failures when the card got hot, like random red, green, and blue dots scattered across the screen, even during POST.
Anywho, AIB vendors have been releasing updated VBIOSes through their websites. DigitalTrends listed EVGA, Gainward, and Palit, but progress has been made since then. I found updates at ASUS, released a couple of days ago, that claim to fix Micron memory stability, but it looks like Gigabyte and MSI are still MIA. The best approach is to run GPU-Z and, if Micron produced your GDDR5 memory, check your vendor's website for a new VBIOS.
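GPU-Z is the easy way to read the memory vendor, but after flashing you may also want to confirm which VBIOS is actually installed. A small sketch, assuming you have NVIDIA's nvidia-smi command-line tool available (it reports the VBIOS version in its `-q` output, though not the memory vendor, so GPU-Z still handles that half of the check):

```python
import shutil
import subprocess

def extract_vbios_versions(smi_output: str):
    """Pull the 'VBIOS Version' values out of `nvidia-smi -q` text output."""
    return [line.split(":", 1)[1].strip()
            for line in smi_output.splitlines()
            if "VBIOS Version" in line]

def query_vbios():
    """Run nvidia-smi if present; return None on machines without it."""
    if shutil.which("nvidia-smi") is None:
        return None
    result = subprocess.run(["nvidia-smi", "-q"],
                            capture_output=True, text=True)
    return extract_vbios_versions(result.stdout)
```

Compare the reported version against the one your vendor lists alongside the Micron-fix download; if they match, the flash took.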
It's a pain, but this sort of issue goes beyond driver updates.