Subject: Graphics Cards | May 10, 2016 - 07:50 PM | Scott Michaud
Tagged: nvidia, maxwell, GTX 980 Ti, GTX 970, GTX 1080, geforce
The GTX 1080 announcement is starting to ripple into retailers, leading to price cuts on the previous generation, Maxwell-based SKUs. If you were interested in the GTX 1080, or an AMD graphics card of course, then you probably want to keep waiting. That said, you can take advantage of the discounts to get a VR-ready GPU, or to pick up a cheap SLI buddy if you already have a Maxwell card.
This tip comes from a NeoGAF thread. Microcenter has several cards on sale, but EVGA seems to have the biggest price cuts. This 980 Ti has dropped from $750 USD down to $499.99 (or $474.99 if you'll promise yourself to do that mail-in rebate). That's a whole third of its price slashed, and puts it about a hundred dollars under the GTX 1080's $599 MSRP. Granted, it will also be slower than the GTX 1080, with 2GB less video RAM, but saving $100 might be worth that trade-off for you.
Subject: Graphics Cards | May 10, 2016 - 07:29 PM | Ryan Shrout
Tagged: video, pascal, nvidia, GTX 1080, gtx 1070, geforce
After the live streamed event announcing the GeForce GTX 1080 and GTX 1070, Allyn and I spent a few minutes this afternoon going over the information as it was provided, discussing our excitement about the product and coming to grips with what in the world a "Founder's Edition" even is.
If you haven't yet done so, check out Scott's summary post on the GTX 1080 and GTX 1070 specs right here.
Subject: Graphics Cards | May 10, 2016 - 04:06 PM | Sebastian Peak
Tagged: video card, reference cooler, pascal, nvidia, GTX 1080, graphics card, GeForce GTX 1080, Founder's Edition
The first non-reference GTX 1080 has been revealed courtesy of Galax, and the images (via VideoCardz.com) look a lot different than the Founder's Edition.
Galax GTX 1080 (Image Credit: VideoCardz)
The Galax is the first custom implementation of the GTX 1080 we've seen, and as such the first example of a $599 variant of the GTX 1080. The Founder's Edition cards carry a $100 premium (and offer that really nice industrial design), but ultimately it's about performance, and the Galax card will presumably offer completely stock specifications.
(Image Credit: VideoCardz)
Expect to see a deluge of aftermarket cooling from EVGA, ASUS, MSI, and others soon enough - most of which will presumably use a dual- or triple-fan cooler, not a simple blower like this.
Subject: Graphics Cards | May 10, 2016 - 12:11 PM | Ryan Shrout
Tagged: windows 10, windows, vrr, variable refresh rate, uwp, microsoft, g-sync, freesync
Back in March, Microsoft's Phil Spencer addressed some of the concerns over the Universal Windows Platform and PC gaming during his keynote address at the Build Conference. He noted that MS would "plan to open up VSync off, FreeSync, and G-Sync in May" and the company would "allow modding and overlays in UWP applications" sometime further into the future. Well, it appears that Microsoft is on point with the May UWP update.
According to the MS DirectX Developer Blog, a Windows 10 update being pushed out today will enable UWP to support unlocked frame rates and variable refresh rate monitors in both G-Sync and FreeSync varieties.
As a direct response to your feedback, we’re excited to announce the release today of new updates to Windows 10 that make gaming even better for game developers and gamers.
Later today, Windows 10 will be updated with two key new features:
Support for AMD’s FreesyncTM and NVIDIA’s G-SYNC™ in Universal Windows Platform games and apps
Unlocked frame rate for Universal Windows Platform (UWP) games and apps
Once applications take advantage of these new features, you will be able to play your UWP games with unlocked frame rates. We expect Gears of War: UE and Forza Motorsport 6: Apex to lead the way by adding this support in the very near future.
This OS update will be gradually rolled out to all machines, but you can download it directly here.
These updates to UWP join the already great support for unlocked frame rate and AMD and NVIDIA’s technologies in Windows 10 for classic Windows (Win32) apps.
Please keep the feedback coming!
Today's update won't automatically enable these features in UWP games like Gears of War or Quantum Break; each title will still need to be updated individually by its developer. MS states that Gears of War and Forza will be the first to see these changes, but there is no mention of Quantum Break here, which is a game that could DEFINITELY benefit from the love of variable refresh rate monitors.
Microsoft describes an unlocked frame rate thus:
Vsync refers to the ability of an application to synchronize game rendering frames with the refresh rate of the monitor. When you use a game menu to “Disable vsync”, you instruct applications to render frames out of sync with the monitor refresh. Being able to render out of sync with the monitor refresh allows the game to render as fast as the graphics card is capable (unlocked frame rate), but this also means that “tearing” will occur. Tearing occurs when part of two different frames are on the screen at the same time.
I should note that these changes do not indicate that Microsoft is going to allow UWP games to go into an exclusive full screen mode - it still believes the disadvantages of that configuration outweigh the advantages. MS wants its overlays and a user's ability to easily Alt-Tab around Windows 10 to remain. Even though MS mentions screen tearing, I don't think that non-exclusive full screen applications will exhibit tearing.
Gears of War on Windows 10 is a game that could definitely use an uncapped render rate and VRR support.
Instead, what is likely occurring, as we saw with the second iteration of the Ashes of the Singularity benchmark, is that the game will have an uncapped render rate internally, but frames rendered OVER 60 FPS (or the refresh rate of the display) will not be shown. This will improve perceived latency, as the game will be able to present the most recent frame (with the most up-to-date input data) when the monitor is ready for a new refresh.
UPDATE 5/10/16 @ 4:31pm: Microsoft just got back to me and said that my above statement wasn't correct. Screen tearing will be able to occur in UWP games on Windows 10 after they integrate support for today's patch. Interesting!!
For G-Sync and FreeSync users, the ability to draw to the screen at any range of render rates offers an even further advantage: uncapped frame rates with no tearing, and no "dropped" frames caused by running at off-ratios of a standard monitor's refresh rate.
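As a rough illustration of those "dropped" (or, more precisely, repeated) frames, here's a quick simulation. This is entirely my own sketch, not anything from Microsoft's patch: with vsync on a fixed 60 Hz display, a game rendering at 45 FPS has to show some frames twice, while a VRR display simply refreshes as each frame completes.

```python
# Sketch (my own illustration, not Microsoft's code): count repeated frames
# when a game renders at render_fps on a fixed refresh_hz display with
# vsync. A VRR display avoids these repeats by refreshing as frames complete.

def repeated_frames(render_fps, refresh_hz, duration_s=1):
    # times at which each rendered frame becomes available
    frame_times = [i / render_fps for i in range(int(render_fps * duration_s))]
    shown = []
    for tick in range(int(refresh_hz * duration_s)):
        t = tick / refresh_hz
        # vsync shows the newest frame completed at or before this refresh
        latest = max(i for i, ft in enumerate(frame_times) if ft <= t)
        shown.append(latest)
    # consecutive refreshes showing the same frame = visible judder
    return sum(1 for a, b in zip(shown, shown[1:]) if a == b)

print(repeated_frames(45, 60))  # 15 repeats per second at 45 FPS on 60 Hz
print(repeated_frames(60, 60))  # 0 when render rate matches refresh
```

Off-ratio render rates like 45 on 60 are exactly where VRR pays off: every rendered frame gets its own refresh, so pacing stays even.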
I'm glad to see Microsoft taking these steps at a brisk pace after the feedback from the PC community early in the year. As for UWP's continued evolution, the blog post does tease that we should "expect to see some exciting developments on multiple GPUs in DirectX 12 in the near future."
Subject: Graphics Cards, Cases and Cooling | May 10, 2016 - 08:55 AM | Sebastian Peak
Tagged: water cooling, radeon pro duo, radeon, pro duo, liquid cooling, graphics cards, gpu cooler, gpu, EKWB, amd
While AMD's latest dual-GPU powerhouse comes with a rather beefy-looking liquid cooling system out of the box, the team at EK Water Blocks have nonetheless created their own full-cover block for the Pro Duo, which is now available in a pair of versions.
"Radeon™ has done it again by creating the fastest gaming card in the world. Improving over the Radeon™ R9 295 X2, the Radeon Pro Duo card is faster and uses the 3rd generation GCN architecture featuring asynchronous shaders enables the latest DirectX™ 12 and Vulkan™ titles to deliver amazing 4K and VR gaming experiences. And now EK Water Blocks made sure, the owners can get the best possible liquid cooling solution for the card as well!"
Nickel version (top), Acetal+Nickel version (bottom)
The blocks include a single-slot I/O bracket, which will allow the Pro Duo to fit in many more systems (and allow even more of them to be installed per motherboard!).
"EK-FC Radeon Pro Duo water block features EK unique central inlet split-flow cooling engine with a micro fin design for best possible cooling performance of both GPU cores. The block design also allows flawless operation with reversed water flow without adversely affecting the cooling performance. Moreover, such design offers great hydraulic performance, allowing this product to be used in liquid cooling systems using weaker water pumps.
The base is made of nickel-plated electrolytic copper while the top is made of quality POM Acetal or acrylic (depending on the variant). Screw-in brass standoffs are pre-installed and allow for safe installation procedure."
Suggested pricing is set at 155.95€ for the blocks (approx. $177 US), and they are "readily available for purchase through EK Webshop and Partner Reseller Network".
Subject: Graphics Cards | May 9, 2016 - 05:00 PM | Scott Michaud
Tagged: pascal, nvidia, GTX 1080, geforce
During the GeForce GTX 1080 launch event, NVIDIA announced two prices for the card. The new GPU has an MSRP of $599 USD, while a Founders Edition will be available for $699 USD. They did not really elaborate on the difference at the keynote, but they apparently clarified the product structure for the attending press.
According to GamersNexus, the “Founders Edition” is NVIDIA's new branding for their reference design, which has been updated with the GeForce GTX 1080. That is it. Normally, a reference design is pretty much bottom-tier in a product stack. Apart from AMD's water-cooling experiments, reference designs are relatively simple, single-fan blower coolers. NVIDIA's reference cooler, though, at least on their top-three-or-so models of any given generation, is pretty good: fairly quiet, effective, and aesthetically pleasing. When searching for a specific GPU online, you will often see a half-dozen entries based on this reference design from various AIB partners, and another half-dozen custom offerings from those same companies, which are very different. MSI does their Twin Frozr thing, while ASUS has their DirectCU and Poseidon coolers.
If you want the $599 model, then, counter to what we've been conditioned to expect, you will not be buying NVIDIA's reference cooler. These will come from AIB partners, which means that NVIDIA is (at least somewhat) allowing them to set a minimum product this time around. They expect reference cards to be intrinsically valuable, not just purchased because they rank highest on a “sort by lowest price” metric.
This is interesting for a number of reasons. It wasn't too long ago that NVIDIA finally allowed AIB vendors to customize Titan-level graphics cards. Before that, NVIDIA's reference cooler was the only option. When they released control to their partners, we started to see water-cooled Titan Xs. There are two ways to look at it: either NVIDIA is relaxing their policy of controlling the user experience, or they want their own brand to be more than the cheapest offering of their part. Granted, the GTX 1080 is supposed to be their high-end, but still mainstream, offering.
It's just interesting to see this decision and rationalize it both as a release of control over user experience, and, simultaneously, as an increase of it.
Subject: Graphics Cards | May 9, 2016 - 02:05 PM | Scott Michaud
Tagged: amd, graphics drivers, crimson
This is good to see. AMD has released Radeon Software Crimson Edition 16.5.1 to align with Forza Motorsport 6: Apex. The drivers are classified as Beta, and so is the game, coincidentally, which means 16.5.1 is not WHQL-certified. That doesn't have the weight that it used to, though. Its only listed feature is performance improvements with that title, especially for the R9 Fury X graphics card. Game-specific optimizations near launch appear to be becoming consistent, and that was an area AMD really needed to improve upon, historically.
There are a handful of known issues, but they don't seem particularly concerning. The AMD Gaming Evolved overlay may crash in some titles, and The Witcher 3 may flicker in Crossfire, both of which could be annoying if they affect a game that you have been focusing on, but that's about it. There might be other issues (and improvements) that are not listed in the notes, but that's all I have to work with at the moment.
If you're interested in Forza 6: Apex, check out AMD's download page.
Subject: Graphics Cards | May 6, 2016 - 10:38 PM | Scott Michaud
Tagged: pascal, nvidia, GTX 1080, gtx 1070, GP104, geforce
So NVIDIA has announced their next generation of graphics processors, based on the Pascal architecture. They introduced it as “a new king,” because they claim that it is faster than the Titan X, even at a lower power. It will be available “around the world” on May 27th for $599 USD (MSRP). The GTX 1070 was also announced, with slightly reduced specifications, and it will be available on June 10th for $379 USD (MSRP).
Pascal is built on TSMC's 16nm process, which gives them a lot of headroom. The GTX 1080 has fewer shaders than the Titan X, but a significantly higher clock rate. It also uses GDDR5X, which is an incremental improvement over GDDR5. We knew it wasn't going to use HBM2, like Big Pascal does, but it's interesting that they did not stick with old, reliable GDDR5.
The full specifications of the GTX 1080 are as follows:
- 2560 CUDA Cores
- 1607 MHz Base Clock (8.2 TFLOPs)
- 1733 MHz Boost Clock (8.9 TFLOPs)
- 8GB GDDR5X Memory at 320 GB/s (256-bit)
- 180W Listed Power (Update: uses 1x 8-pin power)
We do not currently have the specifications of the GTX 1070, apart from it being 6.5 TFLOPs.
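The listed TFLOP and bandwidth figures fall out of the standard formulas: two FLOPs (one fused multiply-add) per CUDA core per clock for compute, and bus width times effective data rate for bandwidth. A quick sanity check, with the roughly 10 Gbps GDDR5X data rate implied by the 320 GB/s figure:

```python
# Sanity-check the spec-sheet numbers above (standard formulas, not
# anything NVIDIA published beyond the listed figures).

def tflops(cuda_cores, clock_mhz):
    # two FLOPs (one fused multiply-add) per CUDA core per clock
    return cuda_cores * 2 * clock_mhz * 1e6 / 1e12

def mem_bandwidth_gb_s(bus_width_bits, data_rate_gbps):
    # bytes per transfer across the bus, times effective transfer rate
    return bus_width_bits / 8 * data_rate_gbps

print(round(tflops(2560, 1607), 1))  # ~8.2 at base clock
print(round(tflops(2560, 1733), 1))  # ~8.9 at boost clock
print(mem_bandwidth_gb_s(256, 10))   # 320.0 GB/s on a 256-bit bus
```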
It also looks like it has five display outputs: 3x DisplayPort 1.2, which are “ready” for 1.3 and 1.4, 1x HDMI 2.0b, and 1x DL-DVI. They do not explicitly state that all three DisplayPorts will run on the same standard, even though that seems likely. They also do not state whether all five outputs can be used simultaneously, but I hope that they can be.
They also have a new SLI bridge, called the SLI HB Bridge, that is supposed to have double the bandwidth of Maxwell's. I'm not sure what that will mean for multi-GPU systems, but it will probably be something we'll find out about soon.
Subject: Graphics Cards | May 5, 2016 - 02:38 PM | Scott Michaud
Tagged: nvidia, pascal, geforce
We're expecting a major announcement tomorrow... at some point. NVIDIA created a teaser website, called “Order of 10,” that is counting down to 1 PM EDT. On the same day, at 9 PM EDT, they will have a live stream on their Twitch channel. This wasn't planned as far in advance as their Game24 event, which turned out to be a GTX 970 and GTX 980 launch party, but it wouldn't surprise me if it ended up being a similar format. I don't know for sure whether one or both events will be about the new mainstream Pascal, but it would be surprising if Friday ends (for North America) without a GPU launch of some sort.
VideoCardz got a hold of 3DMark Fire Strike Extreme benchmarks, though. The card is registered as an 8GB part with a GPU clock of 1860 MHz. While synthetic benchmarks, let alone a single benchmark of anything, aren't necessarily representative of overall performance, it scores slightly higher than a reasonably overclocked GTX 980 Ti (and way above a stock one). Specifically, this card yields a graphics score of 10102 on Fire Strike Extreme 1.1, while the 980 Ti achieved 7781 for us without an overclock.
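For context, the arithmetic on those two scores (numbers as reported above):

```python
# Quick comparison of the leaked score against our stock 980 Ti result.
leaked_1080 = 10102  # Fire Strike Extreme 1.1 graphics score (reported)
stock_980ti = 7781   # our stock GTX 980 Ti result

pct_faster = round((leaked_1080 / stock_980ti - 1) * 100)
print(pct_faster)  # ~30 (% above a stock GTX 980 Ti)
```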
We expected a substantial bump in clock rate, especially after GP100 was announced at GTC. That "full" Pascal chip was listed at a 1328 MHz base clock, with a 1480 MHz boost. Enterprise GPUs are often underclocked compared to consumer parts, stock to stock. As we've said a few times, overclocking headroom could account for a huge gap, too. The GTX 980 Ti was able to go from 1190 MHz to 1465 MHz. On the other hand, consumer Pascal's recorded 1860 MHz could itself be an overclock. We won't know until NVIDIA makes an official release. If not, maybe we could see these new parts break 2 GHz in general use?
Subject: Graphics Cards | April 29, 2016 - 07:09 PM | Jeremy Hellstrom
Tagged: amd, dx12, async shaders
Earlier in the month, [H]ard|OCP investigated the performance scaling that Intel processors display in DX12; now they have finished their tests on AMD processors. These tests include Async Compute information, so be warned before venturing forth into the comments. [H] tested an FX 8370 at 2GHz and 4.3GHz to see what effect this had on the games; the 3GHz tests did not add any value and were dropped in favour of these two frequencies. There are some rather interesting results and discussion, so drop by for the details.
"One thing that has been on our minds about the new DX12 API is its ability to distribute workloads better on the CPU side. Now that we finally have a couple of new DX12 games that have been released to test, we spend a bit of time getting to bottom of what DX12 might be able to do for you. And a couple sentences on Async Compute."
Here are some more Graphics Card articles from around the web:
- XFX R9 Fury Triple Dissipation Review @ OCC
- AMD Radeon Pro Duo Preview @ techPowerUp
- NVIDIA VR performance featuring ASUS @ Kitguru
- ASUS GTX 950 2 GB (no power connector) @ techPowerUp
- Windows 10 vs. Ubuntu 16.04 NVIDIA OpenGL Performance @ Phoronix
Subject: Graphics Cards | April 25, 2016 - 04:41 PM | Jeremy Hellstrom
Tagged: graphics driver, crimson, amd
AMD's new Crimson driver has just been released with new features including official support for the new Radeon Pro Duo as well as both the Oculus Rift and HTC Vive VR headsets. It also adds enhanced support for AMD's XConnect technology for external GPUs connected via a Thunderbolt 3 interface. Crossfire profile updates include Hitman, Elite Dangerous, and Need for Speed, and AMD has also resolved the ongoing issue with the internal update procedure not seeing the newest drivers. If you are having issues with games crashing to desktop on launch, you will still need to disable the AMD Gaming Evolved overlay, unfortunately.
"The latest version of Radeon Software Crimson Edition is here with 16.4.2. With this version, AMD delivers many quality improvements, updated/introduced new CrossFire profiles and delivered full support for AMD’s XConnect technology (including plug’n’play simplicity for Thunderbolt 3 eGFX enclosures configured with Radeon R9 Fury, Nano or 300 Series GPUs.) Best of all, our DirectX 12 leadership continues to be strong, as shown by the performance numbers below."
Subject: Graphics Cards | April 22, 2016 - 10:16 AM | Sebastian Peak
Tagged: rumor, report, pascal, nvidia, leak, graphics card, gpu, gddr5x, GDDR5
According to a report from VideoCardz (via Overclock.net/Chip Hell) high quality images have leaked of the upcoming GP104 die, which is expected to power the GeForce GTX 1070 graphics card.
Image credit: VideoCardz.com
"This GP104-200 variant is supposedly planned for GeForce GTX 1070. Although it is a cut-down version of GP104-400, both GPUs will look exactly the same. The only difference being modified GPU configuration. The high quality picture is perfect material for comparison."
A couple of interesting things emerge from this die shot: the relatively small size of the GPU (die size estimated at 333 mm²), and the assumption that this part will be using conventional GDDR5 memory - based on a previously leaked photo of the die on a PCB.
Alleged photo of GP104 using GDDR5 memory (Image credit: VideoCardz via ChipHell)
"Leaker also says that GTX 1080 will feature GDDR5X memory, while GTX 1070 will stick to GDDR5 standard, both using 256-bit memory bus. Cards based on GP104 GPU are to be equipped with three DisplayPorts, HDMI and DVI."
While this is no doubt disappointing to those anticipating HBM with the upcoming Pascal consumer GPUs, the move isn't all that surprising considering the consistent rumors that GTX 1080 would use GDDR5X.
Is the lack of HBM (or HBM2) enough to make you skip this generation of GeForce GPU? This author points out that AMD's Fury X - the first GPU to use HBM - was still unable to beat a GTX 980 Ti in many tests, even though the 980 Ti uses conventional GDDR5. Memory is obviously important, but the core defines the performance of the GPU.
If NVIDIA has made improvements to performance and efficiency we should see impressive numbers, but this might be a more iterative update than originally expected - which only gives AMD more of a chance to win marketshare with their upcoming Radeon 400-series GPUs. It should be an interesting summer.
Subject: Graphics Cards | April 21, 2016 - 08:37 AM | Sebastian Peak
Zotac has released a new variant of the low-power NVIDIA GeForce GT 710, and while this wouldn't normally be news, this card has a very important distinction: its PCI-E x1 interface.
With a single-slot design, low-profile ready (with a pair of brackets included), and that PCI-E x1 interface, this card can go places where GPUs have never been able to go (AFAIK). Granted, you won't be doing much gaming on a GT 710, which features 192 CUDA cores and 1GB of DDR3 memory, but this card does provide support for up to 3 monitors via DVI, HDMI, and VGA outputs.
A PCI-E x1 GPU would certainly provide some interesting options for ultra-compact systems such as those based on thin mini-ITX, which does not offer a full-length PCI Express slot; or for adding additional monitor support for business machines that only offer a single PCI-E x16 slot, but have a x1 slot available.
Specifications from Zotac:
- GPU: GeForce GT 710
- CUDA cores: 192
- Video Memory: 1GB DDR3
- Memory Bus: 64-bit
- Engine Clock: 954 MHz
- Memory Clock: 1600 MHz
- PCI Express: PCI-E x1
- Display Outputs: DL-DVI, VGA, HDMI
- HDCP Support: Yes
- Multi Display Capability: 3
- Recommended Power Supply: 300W
- Power Consumption: 25W
- Power Input: N/A
- API Support: DirectX 12 (feature level 11_0), OpenGL 4.5
- Cooling: Passive
- Slot Size: Single Slot
- SLI: N/A
- Supported OS: Windows 10 / 8 / 7 / Vista / XP
- Card Dimensions: 146.05mm x 111.15mm
- Accessories: 2x low-profile I/O brackets, Driver Disk, User Manual
The card, listed as model ZT-71304-20L, has not yet appeared on any U.S. sites for purchase (that I can find, anyway), so we will have to wait to see where pricing lands.
Subject: Graphics Cards, Processors | April 19, 2016 - 11:21 AM | Ryan Shrout
Tagged: sony, ps4, Playstation, neo, giant bomb, APU, amd
Based on a new report coming from Giant Bomb, Sony is set to release a new console this year with upgraded processing power and a focus on 4K capabilities, code named NEO. We have been hearing for several weeks that both Microsoft and Sony were planning partial generation upgrades but it appears that details for Sony's update have started leaking out in greater detail, if you believe the reports.
Giant Bomb isn't known for tossing around speculation and tends to only report details it can safely confirm. Austin Walker says "multiple sources have confirmed for us details of the project, which is internally referred to as the NEO."
The current PlayStation 4 APU
Image source: iFixIt.com
There are plenty of interesting details in the story, including Sony's determination to not split the user base with multiple consoles by forcing developers to have a mode for the "base" PS4 and one for NEO. But most interesting to us is the possible hardware upgrade.
The NEO will feature a higher clock speed than the original PS4, an improved GPU, and higher bandwidth on the memory. The documents we've received note that the HDD in the NEO is the same as that in the original PlayStation 4, but it's not clear if that means in terms of capacity or connection speed.
Games running in NEO mode will be able to use the hardware upgrades (and an additional 512 MiB in the memory budget) to offer increased and more stable frame rate and higher visual fidelity, at least when those games run at 1080p on HDTVs. The NEO will also support 4K image output, but games themselves are not required to be 4K native.
Giant Bomb even has details on the architectural changes.
Shipping PS4 vs. PS4 "NEO":
- CPU: 8 Jaguar Cores @ 1.6 GHz → 8 Jaguar Cores @ 2.1 GHz
- GPU: AMD GCN, 18 CUs @ 800 MHz → AMD GCN+, 36 CUs @ 911 MHz
- Stream Processors: 1152 SPs (~ HD 7870 equiv.) → 2304 SPs (~ R9 390 equiv.)
- Memory: 8GB GDDR5 @ 176 GB/s → 8GB GDDR5 @ 218 GB/s
(We actually did a full video teardown of the PS4 on launch day!)
If the Compute Unit count is right from the GB report, then the PS4 NEO system will have 2,304 stream processors running at 911 MHz, giving it performance nearing that of a consumer Radeon R9 390 graphics card. The R9 390 has 2,560 SPs running at around 1.0 GHz, so while the NEO would be slower, it would be a substantial upgrade over the current PS4 hardware and the Xbox One. Memory bandwidth on NEO is still much lower than a desktop add-in card (218 GB/s vs 384 GB/s).
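The "nearing an R9 390" math checks out with the usual GCN throughput estimate: stream processors times two FLOPs per clock. A quick sketch using the figures from the table above (the R9 390's clock rounded to 1.0 GHz as mentioned):

```python
# Back-of-the-envelope GCN compute throughput from the reported specs.
# GCN packs 64 stream processors per Compute Unit; each SP does one
# fused multiply-add (2 FLOPs) per clock.

def gcn_tflops(stream_processors, clock_mhz):
    return stream_processors * 2 * clock_mhz * 1e6 / 1e12

ps4    = gcn_tflops(18 * 64, 800)   # 18 CUs x 64 SPs = 1,152
neo    = gcn_tflops(36 * 64, 911)   # 36 CUs x 64 SPs = 2,304
r9_390 = gcn_tflops(2560, 1000)     # desktop card, ~1.0 GHz

print(round(ps4, 2), round(neo, 2), round(r9_390, 2))  # ~1.84, ~4.2, ~5.12
```

So NEO would land at roughly 4.2 TFLOPS against the R9 390's ~5.1, which matches the "slower, but a substantial upgrade" characterization.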
Could Sony's NEO platform rival the R9 390?
If the NEO hardware is based on the Grenada / Hawaii GPU design, there are some interesting questions to ask. With the push into 4K that we expect with the upgraded PlayStation, it would be painful if the GPU didn't natively support HDMI 2.0 (4K @ 60 Hz). With the modularity of current semi-custom APU designs, it is likely that AMD could swap out the display controller on NEO for one that supports HDMI 2.0, even though no shipping consumer graphics card in the 300-series does so.
It is also POSSIBLE that NEO is based on the upcoming AMD Polaris GPU architecture, which supports HDR and HDMI 2.0 natively. That would be a much more impressive feat for both Sony and AMD, as we have yet to see Polaris released in any consumer GPU. Couple that with the variables of 14/16nm FinFET process production and you have a complicated production pipeline that would need significant monitoring. It would potentially lower cost on the build side and lower power consumption for the NEO device, but I would be surprised if Sony wanted to take a chance on the first generation of tech from AMD / Samsung / Global Foundries.
However, if you look at recent rumors swirling about the June announcement of the Radeon R9 480 using the Polaris architecture, it is said to have 2,304 stream processors, perfectly matching the NEO specs above.
New features of the AMD Polaris architecture due this summer
There is a lot Sony and game developers could do with roughly twice the GPU compute capability on a console like NEO. This could make the PlayStation VR a much more comparable platform to the Oculus Rift and HTC Vive, though the necessity to work with the original PS4 platform might hinder the upgrade path.
The other obvious use is to upgrade the image quality and/or rendering resolution of current games and games in development, or simply to improve the frame rate, an area where many current-generation console titles have been slipping.
In the documents we’ve received, Sony offers suggestions for reaching 4K/UltraHD resolutions for NEO mode game builds, but they're also giving developers a degree of freedom with how to approach this. 4K TV owners should expect the NEO to upscale games to fit the format, but one place Sony is unwilling to bend is on frame rate. Throughout the documents, Sony repeatedly reminds developers that the frame rate of games in NEO Mode must meet or exceed the frame rate of the game on the original PS4 system.
There is still plenty to read in the Giant Bomb report, and I suggest you head over and do so. If you thought the summer was going to be interesting solely because of new GPU releases from AMD and NVIDIA, it appears that Sony and Microsoft have their own agenda as well.
Subject: Graphics Cards | April 19, 2016 - 11:08 AM | Sebastian Peak
Tagged: rumor, report, nvidia, leak, GTX 1080, graphics card, gpu, geforce
Another reported photo of an upcoming GTX 1080 graphics card has appeared online, this time via a post on Baidu.
(Image credit: VR-Zone, via Baidu)
The image is the typical low-resolution shot, with the slightly soft focus we've come to expect from alleged leaks. That doesn't mean it's not legitimate, and this isn't the first time we have seen this design. The image also appears to show only the cooler, without an actual graphics card board underneath.
We have reported on the upcoming GPU rumored to be named "GTX 1080" in the recent past, and while no official announcement has been made it seems safe to assume that a successor to the current 900-series GPUs is forthcoming.
Subject: Graphics Cards | April 14, 2016 - 06:44 PM | Scott Michaud
Tagged: nvidia, graphics drivers
The GeForce 364.xx line of graphics drivers hasn't been smooth for NVIDIA. Granted, they tried to merge Vulkan support into their main branch at the same time as several new games, including DirectX 12 ones, launched. It was probably a very difficult period for NVIDIA, but WHQL-certified drivers should be better than this.
Regardless, they're trying, and today they released GeForce Hot Fix Driver 364.96. Some of the early reactions mock NVIDIA for adding “Support for DOOM Open Beta” as the only listed feature of a “hotfix” driver, but I don't see it. It's entirely possible that the current drivers have a known issue with DOOM Open Beta and, thus, they require a hotfix. It's not necessarily “just a profile,” and “profiles” isn't exactly what a hardware vendor does to support a new title.
But anyway, Manuel Guzman, one of the faces of NVIDIA Customer Care, also says that this driver includes fixes for FPS drops in Dark Souls 3. According to some forum-goers, despite its numbering, it also does not contain the Vulkan updates from 364.91. This is probably a good thing, because it would be a bit silly to merge developer-branch features into a customer driver that only intends to solve problems before an official driver can be certified. I mean, that's like patching a flat tire, then drilling a hole in one of the good ones to mess around with it, too.
The GeForce 364.96 Hotfix Drivers are available at NVIDIA's website. If you're having problems, then it might be your solution. Otherwise? Wait until NVIDIA has an official release (or you start getting said problems).
Subject: Graphics Cards | April 14, 2016 - 06:17 PM | Scott Michaud
Tagged: microsoft, windows 10, uwp, DirectX 12, dx12
At the PC Gaming Conference from last year's E3 Expo, Microsoft announced that they were looking to bring more first-party titles to Windows. They used to be one of the better PC gaming publishers, back in the Mechwarrior 4 and earlier Flight Simulator days, but they got distracted as Xbox 360 rose and Windows Vista fell.
Again, part of that is because they attempted to push users to Windows Vista and Games for Windows Live, holding back troubled titles like Halo 2: Vista and technologies like DirectX 10 from Windows XP, which drove users to Valve's then-small Steam platform. Epic Games was also a canary in the coalmine at that time, warning users that Microsoft was considering certification for Games for Windows Live, which threatened mod support “because Microsoft's afraid of what you might put into it”.
It's sometimes easy to conform history to fit a specific viewpoint, but it does sound... familiar.
Anyway, we're glad that Microsoft is bringing first-party content to the PC, and they are perfectly within their rights to structure it however they please. We are also within our rights to point out its flaws and ask for them to be corrected. Turns out that Quantum Break, like Gears of War before it, has some severe performance issues. Let's be clear, these will likely be fixed, and I'm glad that Microsoft didn't artificially delay the PC version to give the console an exclusive window. Also, had they delayed the PC version until it was fixed, we wouldn't have known whether it needed the time.
Still, the game apparently has issues with a 50 FPS top-end cap, on top of pacing-based stutters. One idle thought: because Digital Foundry is a European publication, could the 50Hz ceiling be caused by their port deriving from a PAL version of the game? Even as I suggest it, I would be shocked if that were the case, but it's hard to see why anyone would set a ceiling at that specific interval otherwise. They are also seeing NVIDIA's graphics drivers frequently crash, which probably means that some areas of their DirectX 12 support are not quite what the game expects. Again, that is solvable by drivers.
It's been a shaky start for both DirectX 12 and the Windows 10 UWP platform. We'll need to keep waiting and see what happens going forward. I hope this doesn't discourage Microsoft too much, but also that they robustly fix the problems we're discussing.
Subject: Graphics Cards | April 12, 2016 - 01:34 PM | Jeremy Hellstrom
Tagged: asus, 980 Ti, GTX 980 Ti MATRIX Platinum, DirectCU II
The ASUS GTX 980 Ti MATRIX Platinum comes with an interesting mix of features, including a memory defroster; the card is designed with LN2 cooling in mind, so we may see it appear in some of this year's overclocking contests. It uses the older dual-fan DirectCU II cooler, not the newer DirectCU III, but the card still stayed around 60C under full load when [H]ard|OCP tested it. The one-press VBIOS reload is perfect if you run into trouble while overclocking, and this card will overclock: [H] hit 1266MHz Base / 1367MHz Boost / 1503MHz in-game with VRAM at 8.2GHz. That overclocking potential, along with an asking price currently under MSRP, helped this card win the Gold and surpass the MSI Lightning in the full review.
"Today we review the ASUS GTX 980 Ti MATRIX Platinum, a gaming enthusiast centered video card which boasts enthusiast air cooling and an enthusiast overclock on air cooling. This high-end video card features DirectCU II cooling, making it the perfect comparison to the MSI GTX 980 TI LIGHTNING in class, price, performance and cooling."
Here are some more Graphics Card articles from around the web:
- Gigabyte GTX 980 Ti XtremeGaming 6GB @ techPowerUp
- ASUS Radeon R9 Fury STRIX Graphics Card Review @ NikKTech
- AMD VR Performance featuring Sapphire @ Kitguru
Subject: Graphics Cards | April 11, 2016 - 11:23 AM | Ryan Shrout
Tagged: rtg, radeon technologies group, radeon, driver, crimson, amd
For longer than AMD would like to admit, Radeon drivers and software were criticized for issues plaguing performance, stability and features. As the graphics card market evolved and software became a critical part of the equation, that deficit hurt AMD substantially.
In fact, despite the advantages that modern AMD Radeon parts typically have over GeForce options in terms of pure frame rate for your dollar, I recommended an NVIDIA GeForce GTX 970, 980 and 980 Ti for our three different VR Build Guides last month ($900, $1500, $2500) in large part due to confidence in NVIDIA’s driver team to continue delivering updated drivers to provide excellent experiences for gamers.
But back in September of 2015 we started to see changes inside AMD. There was a drastic reorganization of the company and its leadership. AMD set up the Radeon Technologies Group, a new entity inside the organization with complete control over graphics hardware and software direction, and put one of the most respected people in the industry at its helm: Raja Koduri. On November 24th AMD launched Radeon Software Crimson, a totally new branding, style and implementation for controlling your Radeon GPU. I talked about it at the time, but the upgrade was noticeable; everything was faster, easier to find and… pretty.
Since then, AMD has rolled out several new drivers with key feature additions, improvements and, of course, game performance increases. Thus far in 2016 the Radeon Technologies Group has released 7 new drivers, three of which have been WHQL certified. Compare that to the same period last year: AMD shipped just one driver in all of Q1 2015, and it was not WHQL certified.
Maybe most important of all, the team at Radeon Technologies Group claims to be putting a new emphasis on “day one” support for major PC titles. Done right, that lets enthusiasts and PC gamers who want to stay on the cutting edge play optimized titles on the day of release. Getting updated drivers that fix bugs and improve performance weeks or months later is great, but for gamers who are already done with that game, the updates are worthless. AMD was guilty of this for years, shipping driver updates that fixed performance issues on Radeon hardware in time for reviewer testing but missed the majority of early adopters' play time.
Thus far, AMD has only just started down this path. Newer games like Far Cry Primal, The Division, Hitman and Ashes of the Singularity all had drivers from AMD on or before release with performance improvements, CrossFire profiles or both. A few others were close to day-one ready, including Rise of the Tomb Raider, Plants vs Zombies 2 and Gears of War Ultimate Edition.
| Game | Release Date | First Driver Mention | Driver Date | Feature / Support |
|------|--------------|----------------------|-------------|-------------------|
| Rise of the Tomb Raider | 01-28-2016 | 16.1.1 | 02-05-2016 | Performance and CrossFire Profile |
| Plants vs Zombies 2 | 02-23-2016 | 16.2.1 | 03-01-2016 | Performance |
| Gears Ultimate Edition | 03-01-2016 | 16.3 | 03-10-2016 | Performance |
| Far Cry Primal | 03-01-2016 | 16.2.1 | 03-01-2016 | CrossFire Profile |
| The Division | 03-08-2016 | 16.1 | 02-25-2016 | CrossFire Profile |
| Hitman | 03-11-2016 | 16.3 | 03-10-2016 | Performance, CrossFire Profile |
| Need for Speed | 03-15-2016 | 16.3.1 | 03-18-2016 | Performance, CrossFire Profile |
| Ashes of the Singularity | 03-31-2016 | 16.2 | 02-25-2016 | Performance |
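The dates in that table make the day-one claim easy to quantify. A quick sketch using the table's own data (negative or zero gaps mean the driver landed on or before release day):

```python
from datetime import date

# (game, game release date, first supporting driver's date) from the table above
releases = [
    ("Rise of the Tomb Raider",  date(2016, 1, 28), date(2016, 2, 5)),
    ("Plants vs Zombies 2",      date(2016, 2, 23), date(2016, 3, 1)),
    ("Gears Ultimate Edition",   date(2016, 3, 1),  date(2016, 3, 10)),
    ("Far Cry Primal",           date(2016, 3, 1),  date(2016, 3, 1)),
    ("The Division",             date(2016, 3, 8),  date(2016, 2, 25)),
    ("Hitman",                   date(2016, 3, 11), date(2016, 3, 10)),
    ("Need for Speed",           date(2016, 3, 15), date(2016, 3, 18)),
    ("Ashes of the Singularity", date(2016, 3, 31), date(2016, 2, 25)),
]

for game, released, driver in releases:
    gap = (driver - released).days  # days after release; <= 0 counts as day one
    status = "day one" if gap <= 0 else f"{gap} days late"
    print(f"{game}: {status}")
```

Run it and four of the eight titles come out as day-one (Far Cry Primal, The Division, Hitman, Ashes), with the stragglers trailing by about a week, which matches the "on or before release" vs "close" split above.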
AMD claims that the push for this “day one” experience will continue going forward, pointing at a 35% boost in performance in Quantum Break between Radeon Crimson 16.3.2 and 16.4.1. There will be plenty of opportunities in the coming weeks and months to test AMD (and NVIDIA) on this “day one” focus with PC titles that will have support for DX12, UWP and VR.
The software team at RTG has also added quite a few interesting features since the release of the first Radeon Crimson driver. Support for the Vulkan API and a DX12 capability called Quick Response Queue, along with new additions to the Radeon settings (Per-game display scaling, CrossFire status indicator, power efficiency toggle, etc.) are just a few.
Critical for consumers buying into VR, the Radeon Crimson drivers launched with support for the Oculus Rift and HTC Vive. Both of these new virtual reality systems put significant strain on the GPU of a modern PC, and properly implementing techniques like timewarp is crucial to a good user experience. Though Oculus and HTC / Valve were using NVIDIA-based systems more or less exclusively during our time at the Game Developers Conference last month, AMD still has approved platforms and software from both vendors. In fact, in a recent change to the HTC Vive minimum specifications, Valve retroactively added the Radeon R9 280 to the list, giving AMD a slight edge in component pricing.
AMD was also the first to enable full support for external graphics solutions like the Razer Core external enclosure in its drivers with XConnect. We wrote about that release in early March, and I’m eager to get my hands on a product combo to give it a shot. As of this writing and after talking with Razer, NVIDIA had still not fully implemented external GPU functionality for hot/live device removal.
For some measure of acceptance, AMD pointed us to a survey it ran to gauge approval and satisfaction with Crimson. After 1,700+ submissions, customers scored it a 4.4 out of 5.0 - pretty significant praise, even coming from AMD customers. We don't know exactly how the poll was run or where it was posted, but the Crimson driver release has definitely improved the perception of Radeon drivers among many enthusiasts.
I’m not going to sit here and tell everyone that AMD is absolved of past sins and that we should immediately become believers. What I can say is that the Radeon Technologies Group is moving in the right direction, down a path that shows a change in leadership and a change in mindset. I talked in September about the respect I have for Raja Koduri and interviewed him after AMD’s Capsaicin event at GDC; you can already see the changes he is making inside this division. He has put a priority on software: not just making it look pretty, but promising to make good on proper multi-GPU support, improved timeliness of releases and innovative features. AMD and RTG still have a ways to go before they can unwind years of negativity, but the groundwork is there.
The company and every team member have a sizeable task ahead of them as we approach the summer. The Radeon Technologies Group will depend on the Polaris architecture and its products to swing the pendulum back against NVIDIA, gaining market share, mind share and respect. From what we have seen, Polaris looks impressive and departs from Hawaii and Fiji fairly dramatically. But this product was already well baked before Raja got total control, and we may have to see another generation pass before the GPU portfolio fully reflects the new leadership. NVIDIA isn’t sitting idle, either: the Pascal architecture also promises improved performance, while leaning on the software and driver investments that have made NVIDIA the dominant market leader it is today.
I’m looking forward to working with AMD throughout 2016 on what promises to be an exciting and market-shifting time period.
Subject: Graphics Cards | April 10, 2016 - 09:04 PM | Scott Michaud
Tagged: nvidia, vulkan, graphics drivers
This is not a main-line, WHQL driver. This is not even a mainstream beta driver. The beta GeForce 364.91 drivers (364.16 on Linux) are only available on the NVIDIA developer website, which, yes, is publicly accessible, but should probably not be installed unless you are intending to write software and every day counts. Also, some who have installed it claim that certain Vulkan demos stop working. I'm not sure whether that means the demo is out-of-date due to a rare conformance ambiguity, the driver has bugs, or the reports themselves are simply unreliable.
That said, if you are a software developer, and you don't mind rolling back if things go awry, you can check out the new version at NVIDIA's website. It updates Vulkan to 1.0.8, which is just documentation bugs and conformance tweaks. These things happen over time. In fact, the initial Vulkan release was actually Vulkan 1.0.3, if I remember correctly.
The driver also addresses issues with Vulkan and NVIDIA Optimus technologies, which is interesting. Optimus controls which GPU acts as primary in a laptop, switching between the discrete NVIDIA one and the Intel integrated one, depending on load and power. Vulkan and DirectX 12, however, expose all GPUs to the system. I'm curious how NVIDIA knows whether to sleep one or the other, and what that would look like to software that enumerates all compatible devices. Would it omit listing one of the GPUs? Or would it allow the software to wake the system out of Optimus should it want more performance?
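To make that enumeration question concrete, here's a minimal sketch of the selection logic an application might run once the API exposes every adapter. The device names and the `pick_adapter` helper are hypothetical illustrations (real Vulkan code would call `vkEnumeratePhysicalDevices` and inspect `VkPhysicalDeviceType`); the point is what happens if Optimus hides the sleeping GPU versus exposing it:

```python
# Hypothetical sketch of Vulkan-style adapter selection on an Optimus laptop.
# The open question from the text: does the sleeping discrete GPU appear in
# the enumerated list at all, or can the app select it and wake it up?

DISCRETE, INTEGRATED = "discrete", "integrated"

def pick_adapter(devices):
    """Prefer a discrete GPU if the driver exposes one, else fall back."""
    for kind in (DISCRETE, INTEGRATED):
        for name, dev_kind in devices:
            if dev_kind == kind:
                return name
    raise RuntimeError("no compatible GPU enumerated")

# If Optimus omits the sleeping GeForce, the app silently lands on Intel:
print(pick_adapter([("Intel HD 530", INTEGRATED)]))
# If both are exposed, the app can implicitly wake the dGPU by selecting it:
print(pick_adapter([("Intel HD 530", INTEGRATED),
                    ("GeForce GTX 980M", DISCRETE)]))
```

Either behavior is defensible, which is presumably why the driver needed explicit Optimus fixes: software that enumerates all devices has to get a consistent answer.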
Anywho, the driver is available now, but you probably should wait for official releases. The interesting thing is this seems to mean that NVIDIA will continue to release non-public Vulkan drivers. Hmm.