Subject: Graphics Cards | October 25, 2017 - 03:34 PM | Tim Verry
Tagged: zotac, gtx 1080 ti, SFF, water cooler
Zotac finally made its water-cooled GTX 1080 Ti ArcticStorm Mini official last week. First teased at Computex, the ArcticStorm Mini is a dual-slot card with a metal backplate and a full-cover water block that has been significantly shortened so that it can fit into many more cases, including Micro ATX and some Mini ITX form factors. Specifically, the ArcticStorm Mini measures 212mm (8.35”) x 164mm (6.46”) and uses a custom shortened PCB that appears to be the same platform as the dual-fan air-cooled model.
The star of the ArcticStorm Mini is the full-cover water block with a nickel-plated copper base and a tinted acrylic top cover. According to Zotac, the water block uses 0.3mm micro channels above the GPU to improve cooling performance by moving as much heat from the GPU into the water loop as possible. There are ports for vertical or horizontal barb orientation, though I would have loved to see a card that routed the water cooling inlet and outlet to the rear of the card rather than the side, especially since this is aimed at small form factor builds. The water block accommodates standard G1/4” fittings, and Zotac includes two barbs that support 10mm ID (inner diameter) tubing in the box. A metal backplate helps prevent the PCB from warping under the weight of the water cooling hardware, which can be rather hefty.
While there is no RGB on this card, Zotac did go with an always-on white LED which, along with the gray and silver colors of the card itself, is supposed to be color neutral and allow it to fit into more builds (as opposed to Zotac’s usual yellow and black). Around the front are five display outputs: DVI-D, HDMI 2.0b, and three DisplayPort 1.4 connections.
Out of the box, the GTX 1080 Ti ArcticStorm Mini comes with a modest factory overclock that pushes the GP102’s 3,584 CUDA cores to 1506 MHz base and 1620 MHz boost. The 11GB of GDDR5X remains clocked at the stock 11 GHz, however. (For comparison, reference clocks are 1480 MHz base and 1582 MHz boost.) The graphics card is powered by two 8-pin PCI-E power connectors, and enthusiasts should be able to push it quite a bit further than the out-of-the-box clocks simply by increasing the power target, as we saw in our review of the 1080 Ti. Barring any silicon lottery duds, this card should be able to clock higher and hold more stable clocks than our card thanks to the liquid cooler.
As is usual with these things, Zotac did not reveal exact pricing or availability, but with the full-sized GTX 1080 Ti ArcticStorm already selling for $809 on Amazon and $820 over at Newegg, I would expect the little SFF brother to sell at a bit of a premium beyond that, say $840 at launch, with the price coming down a bit as sales appear later.
It would have been nice to see this be a single-slot card, and giving up DVI would have been worth it, but you can’t have everything (heh). I am looking forward to seeing the systems modders and enthusiasts are able to cram this card (or two) into!
Forza Motorsport 7 Performance
The first full Forza Motorsport title available for the PC, Forza Motorsport 7 on Windows 10 launched simultaneously with the Xbox version earlier this month. With native 4K assets, HDR support, and new visual features like fully dynamic weather, this title is an excellent showcase of what modern PC hardware can do.
Now that both AMD and NVIDIA have released drivers optimized for Forza 7, we've taken the opportunity to measure performance across an array of different GPUs. After the significant performance mishaps with last year's Forza Horizon 3 at launch on PC, we are excited to see whether Forza Motorsport 7 brings any much-needed improvements.
For this testing, we used our standard GPU testbed, including an 8-core Haswell-E processor and plenty of memory and storage.
| PC Perspective GPU Testbed | |
|---|---|
| Processor | Intel Core i7-5960X Haswell-E |
| Motherboard | ASUS Rampage V Extreme X99 |
| Memory | G.Skill Ripjaws 16GB DDR4-3200 |
| Storage | OCZ Agility 4 256GB (OS), Adata SP610 500GB (games) |
| Power Supply | Corsair AX1500i 1500 watt |
| OS | Windows 10 x64 |
| Drivers | AMD: 17.10.1 (Beta) |
As with a lot of modern console-first titles, Forza 7 defaults to "Dynamic" image quality settings. This means the game engine is supposed to find the best image settings for your hardware automatically and dynamically adjust them so that you hit a target frame rate (adjustable between 30 and 60fps) no matter what is going on in the scene currently being rendered.
While this is a good strategy for consoles, and even for casual PC gamers, it poses a problem when trying to measure equivalent performance across GPUs. Luckily, Forza Motorsport 7 developer Turn 10 Studios still lets you disable the dynamic control and configure the image quality settings as you desire.
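To make the idea concrete, here is a minimal sketch of the kind of feedback loop a dynamic quality system implies, written in Python purely as illustration; the quality level names, headroom threshold, and stepping logic are my assumptions, not Turn 10's actual implementation.

```python
# Hypothetical sketch of a dynamic image quality feedback loop; not
# Turn 10's implementation, just the general mechanism described above.

TARGET_FPS = 60          # Forza 7 lets you pick a target between 30 and 60
HEADROOM = 1.10          # assumed: require ~10% headroom before stepping up
QUALITY_LEVELS = ["low", "medium", "high", "ultra"]  # assumed level names

def adjust_quality(recent_frame_times_ms, level):
    """Step the quality level based on the average of recent frame times."""
    avg_ms = sum(recent_frame_times_ms) / len(recent_frame_times_ms)
    target_ms = 1000.0 / TARGET_FPS
    if avg_ms > target_ms and level > 0:
        level -= 1       # missing the target: drop one quality level
    elif avg_ms * HEADROOM < target_ms and level < len(QUALITY_LEVELS) - 1:
        level += 1       # comfortable headroom: raise one quality level
    return level

# Example: ~20 ms frames (about 50 fps) against a 60 fps (16.7 ms) target
level = QUALITY_LEVELS.index("ultra")
level = adjust_quality([20.1, 19.8, 20.3], level)
print(QUALITY_LEVELS[level])  # -> "high"
```

Because the settings themselves move in response to load, a benchmark run with dynamic quality enabled measures a moving target, which is exactly why being able to lock the settings matters.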
One quirk, however, is that in order for V-Sync to be disabled, the rendering resolution within the game must match the desktop resolution of your monitor. This means that if you want to run 2560x1440 on your 4K monitor, you must first set the resolution within Windows to 2560x1440 in order to run the game in V-Sync-off mode.
We did our testing with an array of three different resolutions (1080p, 1440p, and 4K) at maximum image quality settings. We tested both AMD and NVIDIA graphics cards in similar price and performance segments. The built-in benchmark mode for this game was used, which does feature some variance due to dynamic weather patterns. However, our testing within the full game matched the results of the benchmark mode closely, so we used it for our final results.
Right off the bat, I have been impressed by how well optimized Forza Motorsport 7 seems to be on the PC. Compared to the unoptimized disaster that was Forza Horizon 3 when it launched on PC last year, it's clear that Turn 10 Studios and Microsoft have come a long way.
Even gamers looking to play on a 4K display at 60Hz can seemingly get away with cheaper, more mainstream GPUs such as the RX 580 or the GTX 1060, with acceptable performance in most scenarios.
Gamers on high-refresh-rate displays don't appear to have the same luxury. If you want to game at a resolution such as 2560x1440 at a full 144Hz, neither the RX Vega 64 nor the GTX 1080 will manage it with maximum image quality settings, although both GPUs appear to be close enough that you could turn down a few settings to achieve your full refresh rate.
For some reason, the RX Vega cards didn't seem to show any scaling in performance when moving from 2560x1440 to 1920x1080, unlike the Polaris-based RX 580 and the NVIDIA options. We aren't quite sure of the cause of this and have reached out to AMD for clarification.
As far as frame times are concerned, we also gathered some data with our Frame Rating capture analysis system.
Taking a look at the first chart, we can see that while the GTX 1080's frame times are extremely consistent, the RX Vega 64 shows some additional variance.
However, the frame time variance chart shows that over 95% of the RX Vega 64's frame times come in at under 2ms of variance, which will still provide a smooth gameplay experience in most scenarios. This matches our experience playing on both AMD and NVIDIA hardware, where we saw no major issues with gameplay smoothness.
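As a rough illustration of how a figure like that can be derived from captured frame times, here is a small Python sketch; this is not our actual Frame Rating tooling, and the simple frame-to-frame delta used as the "variance" metric is an assumption for demonstration purposes.

```python
# Illustrative frame time variance calculation; not the actual Frame
# Rating pipeline. "Variance" here is the simple frame-to-frame delta.

def share_under_threshold(frame_times_ms, threshold_ms=2.0):
    """Fraction of consecutive frame-time deltas below threshold_ms."""
    deltas = [abs(b - a) for a, b in zip(frame_times_ms, frame_times_ms[1:])]
    return sum(1 for d in deltas if d < threshold_ms) / len(deltas)

# Example: mostly steady ~16.7 ms frames with a single ~5 ms hitch
times = [16.7, 16.6, 16.8, 21.8, 16.7, 16.6]
print(f"{share_under_threshold(times):.0%} of deltas under 2 ms")  # -> 60%
```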
Forza Motorsport 7 seems to be a great addition to the PC gaming world (if you don't mind using the Microsoft Store exclusively) and will run great on a wide array of hardware. Whether you have an NVIDIA or AMD GPU, you should be able to enjoy this fantastic racing simulator.
Subject: Graphics Cards | October 20, 2017 - 04:30 PM | Jeremy Hellstrom
Tagged: amd, RX VEGA 64, 4k
[H]ard|OCP updated their benchmarking suite with several new games and published a review of AMD's Vega 64 focusing on 4K performance. The race between the GTX 1080 and Vega 64 is quite close, with many benchmarks showing less than a 10% difference in performance. Neither card came close to touching the GTX 1080 Ti, which is still the only card that can truly handle 4K gaming with graphics options on high or ultra. At 1440p, the GTX 1080 is better overall, but the Vega 64 is still a very strong contender.
Pop over for a look at the detailed results.
"Does the AMD Radeon RX Vega 64 play games well at 4K resolution? What game settings work best at 4K, and how does it compare to GeForce GTX 1080 and GeForce GTX 1080 Ti? Ten games are tested, new and old, DX11, DX12, and Vulkan at playable game settings and pushed to the max in this all out 4K brawl."
Here are some more Graphics Card articles from around the web:
- MSI GTX 1080 Ti Gaming X Trio 11 GB @ TechPowerUp
- MSI GeForce GTX 1080 Ti Gaming X Trio @ Guru3D
- ASUS ROG GeForce GTX 1080 Ti Poseidon @ Guru3D
- Then and Now: 6 Generations of GeForce Graphics Compared @ TechSpot
- EKWB and Bykski Water Blocks tested on Asus GTX 1080 Ti Strix @ TechPowerUp
Subject: Graphics Cards, Processors | October 16, 2017 - 05:07 PM | Ryan Shrout
Tagged: amd, raven ridge, APU, ryzen 7 2700u, Ryzen 5 2500U, ryzen 7 pro 2700u
Hot on the heels of the HP leak that showed the first AMD Raven Ridge based notebook that may hit store shelves later this year, another leak of potential Raven Ridge APU performance is making the rounds. The AMD Ryzen 7 2700U, with integrated Vega-based graphics and a rumored ~35-watt TDP, is showing 3DMark11 graphics scores near those of the discrete NVIDIA GeForce MX150.
With a graphics score of 4072, the integrated graphics on the upcoming AMD APU is slightly behind the MX150's score of 4570, a difference of 11.5%. The Raven Ridge APU's Physics score of 6419 is solid as well, and it casts an interesting light on the 8th gen KBL-R processors. As you can see in the graph below, built from two quad-core systems we already have in-house, CPU performance is going to vary dramatically from one machine to the next depending on the thermal headroom of the physical implementation.
The HP Spectre x360 with the Core i7-8550U and the MX150 GPU is able to generate a Physics score of 8278, well above the leaked result for the Raven Ridge APU. However, when we ran 3DMark11 on the ASUS Zenbook 3 UX490UA with the same Core i7-8550U, the Physics score was 6627, a 19% drop! Clearly, configuration differences will shift the performance of the 8th gen Intel parts; we are diving more into this effect in a couple of upcoming reviews.
Though the true power consumption of these Ryzen 7 2700U systems is still up in the air, AMD has claimed for some time that it would be able to compete with Intel for the first time in several generations. If these solutions turn out to be in the 35-watt range, at or below the typical 15-watt Intel CPU and 25-watt NVIDIA discrete GPU combined, AMD may have a winning combination for mobile performance users to consider.
Subject: Graphics Cards | October 12, 2017 - 03:23 PM | Jeremy Hellstrom
Tagged: msi, gtx 1080 ti, gtx 1080 ti gaming x trio, TRI-FROZR
MSI have just announced the GTX 1080 Ti GAMING X TRIO, which will hit the market in November, though with the current price of Bitcoin you may have trouble locating one.
The card will feature MSI's Tri-Frozr cooler, with two 10cm and one 9cm TORX 2.0 fans along with a pair of 8mm SuperPipes, which together provide 300W of heat dissipation for those planning on pushing the overclock even further. It will also have Mystic Light, offering three zones of controllable RGB LEDs, with the option to synchronize the light show emanating from your various components.
Subject: Graphics Cards | October 9, 2017 - 09:28 PM | Scott Michaud
Tagged: nvidia, graphics drivers
NVIDIA gave their graphics drivers a decent version bump today, from 385.69 to 387.92. When the first number jumps, it usually means that we are on a new feature branch, rather than just adding bug fixes and game-specific improvements to an existing branch. (Sometimes, though, they have simply run out of numbers in the second group. You can tell the difference because the release notes will typically state the old branch number. For example, the release notes for 385.69, the previous driver release, state “Release 384 Graphics Drivers for Windows, Version 385.69”.)
There are a bunch of new features this time, including OpenGL 4.6 support (assuming the driver passes conformance), HDR in NVIDIA GameStream, Fast Sync in SLI mode, 32-bit optimizations for Vulkan, and support for DXIL. This last one is kind of interesting for two reasons: first, it allows shaders to be compiled to an LLVM-based bitcode, like Vulkan’s SPIR-V, and, second, it introduces Shader Model 6.0. This isn’t as big as the jumps that we saw in the DirectX 9 era, but it allows operations that cross between shader threads, like wave ballots and reductions.
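For a flavor of what those cross-thread operations do, here is a Python simulation; the real things are HLSL intrinsics in Shader Model 6.0 (WaveActiveBallot and WaveActiveSum, for instance) that execute across the lanes of a single GPU wave in parallel, not sequential loops like these.

```python
# Python simulation of SM 6.0-style wave operations. The real intrinsics
# run across the lanes of one GPU wave in lockstep; the loops here only
# model the results, not the execution.

WAVE_SIZE = 32  # a typical wave/warp width

def wave_active_ballot(lane_predicates):
    """Pack one boolean per lane into a bitmask, like a wave ballot."""
    mask = 0
    for lane, predicate in enumerate(lane_predicates):
        if predicate:
            mask |= 1 << lane
    return mask

def wave_active_sum(lane_values):
    """Reduce a per-lane value across the whole wave."""
    return sum(lane_values)

lane_values = list(range(WAVE_SIZE))  # lane i holds the value i
print(hex(wave_active_ballot(v % 2 == 0 for v in lane_values)))  # 0x55555555
print(wave_active_sum(lane_values))   # -> 496
```

The practical win is that votes and reductions like these previously required round-trips through group shared memory; wave operations let threads in the same wave exchange values directly.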
In this release, NVIDIA has also added game-specific optimizations for Arktika.1, The Evil Within 2, Forza Motorsport 7, and tomorrow’s Middle-Earth: Shadow of War. The following games were also given a new SLI profile: Earthfall, Lawbreakers, Middle-Earth: Shadow of War, Nex Machina, ReCore, RiME, Snake Pass, Tekken 7, The Evil Within 2, and We Happy Few.
Pick it up from GeForce Experience or NVIDIA’s website.
Subject: Graphics Cards | September 28, 2017 - 02:46 PM | Jeremy Hellstrom
Tagged: corsair, gtx 1080 ti, hydro gfx, liquid cooled, factory overclocked
Corsair's Hydro GFX GTX 1080 Ti liquid-cooled GPU offers two preset modes: a respectable Gaming mode with frequencies of 1544MHz base and 1657MHz boost, and a more impressive OC Mode which runs at 1569MHz and 1683MHz. [H]ard|OCP blasted past those frequencies when overclocking, hitting 2050MHz on the GPU and 11.6GHz on the memory after increasing the power settings. This was enough to allow playable frame rates at 4K in the games they tested, even with graphics settings pushed up. If 4K gaming is in your plans, this review is worth checking out.
"We’ve got an exciting new video card for you today, the Corsair Hydro GFX GTX 1080 Ti Liquid Cooled Graphics Card with a Corsair Hydro Series AIO liquid cooling package on board. We find out how well this video card performs, how cool it runs, and how well it will overclock at 4K and 1440p."
Here are some more Graphics Card articles from around the web:
- Zotac GTX 1080 Ti Mini @ Kitguru
- ASUS ROG Poseidon GTX 1080 Ti 11G @ Modders-Inc
- ASUS ROG Radeon RX Vega 64 STRIX Gaming @ Guru3D
- The AMD Radeon RX Vega 56 @ TechARP
- The RX Vega 56 vs. GTX 1070 FE Overclocking Showdown @ BabelTechReview
- The Best Graphics Cards @ Techspot
- Koolance VID-NX1080 GPU Water Block @ techPowerUp
Subject: Graphics Cards | September 24, 2017 - 12:33 PM | Scott Michaud
Tagged: pc gaming, nvidia, graphics drivers
New graphics drivers for GeForce cards were published a few days ago. Unfortunately, I became a bit reliant upon GeForce Experience to notify me, and it didn’t this time, so I am a bit late on the draw. The 385.69 update adds “Game Ready” optimizations for a bunch of new games: Project Cars 2, Call of Duty: WWII open beta, Total War: WARHAMMER II, Forza Motorsport 7, EVE: Valkyrie - Warzone, FIFA 18, Raiders of the Broken Planet, and Star Wars Battlefront 2 open beta.
We’re starting the holiday games rush, folks!
There aren’t really any major new features in this driver per se. It’s a lot of game-specific optimizations and a whole page of bug fixes, ranging from flickering in DOOM to NVENC freaking out at frame rates greater than 240 FPS.
One open issue is that GeForce TITAN (which I’m assuming refers to the original, Kepler-based one) cannot be installed on a Threadripper-based motherboard in Windows 10. The OS refuses to boot after the initial install. I’m guessing this has been around for a while, but in case you’re planning on upgrading to Threadripper (or buying a second-hand TITAN) it might be good to know.
If you haven’t received notification to update your drivers yet, poke GeForce Experience to make sure that it’s running and checking. Or, of course, you can download them from NVIDIA’s website.
Subject: Graphics Cards, Mobile | September 23, 2017 - 09:59 PM | Scott Michaud
Tagged: Imagination Technologies
Canyon Bridge, a private investment LLC and a believable codename for an Intel processor architecture, has just reached an agreement with Imagination Technologies to acquire most of the company. The deal is valued at £550 million and does not include MIPS Technologies, Inc., which Imagination Technologies purchased on February 8th, 2013.
According to Anandtech, however, MIPS Technologies, Inc. will be purchased by Tallwood Venture Capital for $65 million USD.
The reason Imagination Technologies is expected to be split in two like this is that purchasing CPU companies places the buyer under national security review in the United States, and Canyon Bridge is backed by the Chinese government. As such, they can grab everything but the CPU division, which lets another party swoop in for a good price on the leftovers.
That said, it is currently unclear what either company, Canyon Bridge Capital Partners or Tallwood Venture Capital, wants to do with Imagination Technologies or MIPS Technologies, Inc., respectively. When Canyon Bridge attempted to purchase Lattice Semiconductor last year, they mentioned that they were interested in Lattice's FPGAs, its “video connectivity” products (HDMI, MHL, etc.), and its wireless products (60 GHz, etc.). I would assume that they’re just picking up good technology deals, but it’s also possible that they’re looking into accelerated compute companies in particular.
There are still a few barriers before the sale closes, but it’s looking like we’re not going to end up with Imagination simply merging into an existing player.
Subject: Graphics Cards | September 23, 2017 - 12:16 AM | Scott Michaud
Tagged: google, nvidia, p100, GP100
NVIDIA seems to have scored a fairly large customer lately, as Google has just added Tesla P100 GPUs to their cloud infrastructure. Effective immediately, you can attach up to four of these GPUs to your rented servers on an hourly or monthly basis. According to their pricing calculator, each GPU adds $2.30 per hour to your server’s fee in Oregon and South Carolina, which isn’t a lot if you only use them for short periods of time.
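For a rough sense of scale, here is a back-of-the-envelope using the $2.30 per hour figure quoted above; it deliberately ignores the cost of the VM itself, regional pricing differences, and any discounts.

```python
# Back-of-the-envelope GPU cost using the ~$2.30/hour Oregon and South
# Carolina rate quoted above; ignores the VM itself and any discounts.

GPU_HOURLY_USD = 2.30

def gpu_cost(num_gpus, hours):
    return num_gpus * GPU_HOURLY_USD * hours

print(f"${gpu_cost(1, 8):,.2f}")        # one GPU for a workday: $18.40
print(f"${gpu_cost(4, 24 * 30):,.2f}")  # four GPUs all month: $6,624.00
```

Short bursts are cheap; running four of them around the clock is not, which is where the discounts below come in.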
If you need to use them long-term, though, Google has also announced “sustained use discounts” in the same blog post.
While NVIDIA has technically launched a successor to the P100, the Volta-based V100, the Pascal-based part is still quite interesting. The main focus of the GP100 GPU design was bringing FP64 performance up to its theoretical maximum of 1/2 the FP32 rate. It also has very high memory bandwidth thanks to its HBM2 stacks, and memory bandwidth is often a huge bottleneck for GPU-based applications.
For NVIDIA, selling high-end GPUs is obviously good. The enterprise market is lucrative, and it validates their push into really large die sizes. For Google, it gives interested parties a strong reason to consider them rather than just defaulting to Amazon. AWS has GPU instances, but they’re currently limited to Kepler and Maxwell (and they offer FPGA-based acceleration, too). Amazon can always catch up, but they haven’t yet, and that's good for Google.