Subject: Graphics Cards | October 15, 2015 - 04:01 PM | Ryan Shrout
Tagged: nvidia, geforce experience, beta drivers
NVIDIA just released a new driver, version 358.50, with an updated version of GeForce Experience that brings about some interesting changes to the program. First, let's talk about the positive changes, including beta access to the updated NVIDIA Share utility and improvements in GameStream.
As we first detailed with the release of the GeForce GTX 950, NVIDIA is making some impressive additions to the ShadowPlay portion of GeForce Experience, along with renaming it to NVIDIA Share.
The idea is to add functionality to the ShadowPlay feature, including an in-game overlay to control the settings and options for local recording, and even an in-overlay editor and previewer for your videos. This allows the gamer to view, edit, snip and then upload those completed videos to YouTube directly, without ever having to leave the game. (Though you’ll obviously want to pause it before going through that process.) Capture and “Instant Replay” are now capable of 4K / 60 Hz recording and upload as well – nice!
Besides the added capability for the local recording portion of Share, NVIDIA is also adding some new features to the mix. NVIDIA Share will now allow for point-to-point stream sharing, giving you the ability to send a link to a friend that they can open in a web browser to watch the game you are playing with very low latency. You could use this as a way to show a friend that new skill you learned for Rocket League, to try to convince them to pick up their own copy, or even just as a social event. It supports voice communication too, for the ability to talk smack if necessary.
But it goes beyond just viewing the game – this point-to-point streaming allows the remote player to take over the controls to teach the local gamer something new or to finish a difficult portion of the game you might be stuck on. And if the game supports local multiplayer, you can BOTH play, as the remote gaming session will emulate a second attached Xbox / SHIELD controller on the system! Sessions do have a time limit of one hour, as a means to persuade game developers and publishers not to throw a hissy fit.
The demo I saw recently was very impressive and it all worked surprisingly well out of the box.
Fans of NVIDIA's local-network GameStream might enjoy the upgrade to streaming games at 4K 60 FPS - as long as you have an NVIDIA SHIELD Android TV device connected to a 4K-capable TV in your home. Clearly this will make the visual presentation of your games on your television more impressive than ever, and NVIDIA has added support for 5.1-channel surround sound pass-through as well.
There is another change coming with this release of GFE that might turn some heads, and it concerns the frequently updated "Game Ready" drivers NVIDIA puts out for specific game launches. These drivers have been a huge part of NVIDIA's success in recent years, as the day-one experience for GeForce users has been better than AMD's in many instances. It is vital for drivers and performance to be optimal on the day of a game's release, as enthusiast gamers are the ones preloading titles and lining up for midnight unlocks.
Future "Game Ready" drivers will no longer be made available through GeForce.com and instead will ONLY be delivered through GeForce Experience. You'll also be required to have a validated email address to get the downloads for beta drivers - though NVIDIA admitted to me you would be able to opt-out of the mailing list anytime after signing up.
NVIDIA told the media that this method of driver release lays groundwork for future plans: gamers will get early access to new features, chances to win free hardware, and the ability to take part in the driver development process like never before. Honestly, though, this is a way to get users to sign up for a marketing mailing list that has some specific purpose going forward. Not all mailing lists are bad, obviously (have you signed up for the PC Perspective Live! Mailing List yet?!?), but there are bound to be some raised eyebrows over this.
NVIDIA says that more than 90% of its driver downloads today already come through GeForce Experience, so changes to the user experience should be minimal. We'll wait to see how the crowd reacts, but I imagine that once we get past the initial shock of the changeover to this system, the rollouts will be fast, clean and simple. But dammit - we fear change.
Subject: Graphics Cards | October 14, 2015 - 03:24 PM | Sebastian Peak
Tagged: radeon, dx12, DirectX 12, Catalyst 15.10 beta, catalyst, ashes of the singularity, amd
The AMD Catalyst 15.9 beta driver was released just two weeks ago, and already AMD is ready with a new version. 15.10 is available now and offers several bug fixes, though the point of emphasis is DX12 performance improvements to the Ashes of the Singularity benchmark.
Highlights of AMD Catalyst 15.10 Beta Windows Driver
- Ashes of the Singularity - DirectX 12 Quality and Performance optimizations
Resolved Issues:
- Video playback of MPEG2 video fails with a playback error/error code message
- A TDR error or crash is experienced when running the Unreal Engine 4 DirectX benchmark
- Star Wars: Battlefront is able to use high performance graphics when launched on mobile devices with switchable graphics
- Intermittent playback issues with Cyberlink PowerDVD when connecting to a 3D display with an HDMI cable
Known Issues:
- Ashes of the Singularity - A 'Driver has stopped responding' error may be experienced in DirectX 12 mode
- Driver installation may halt on some configurations
- A TDR error may be experienced while toggling between minimized and maximized mode while viewing 4K YouTube content
- Ashes of the Singularity may crash on some AMD 300 series GPUs
- Core clock fluctuations may be experienced when FreeSync and FRTC are both enabled on some AMD CrossFire systems
- Ashes of the Singularity may fail to launch on some GPUs with 2GB Video Memory. AMD continues to work with Stardock to resolve the issue. In the meantime, deleting the game config file helps resolve the issue
- The secondary display adapter is missing in the Device Manager and the AMD Catalyst Control Center after installing the driver on a Microsoft Windows 8.1 system
- Elite: Dangerous - poor performance may be experienced in SuperCruise mode
- A black screen may be encountered on bootup on Windows 10 systems. The system will ultimately continue to the Windows login screen
The driver is available now from AMD's Catalyst beta download page.
Last month NVIDIA introduced the world to the GTX 980 in a new form factor: gaming notebooks. Using the same Maxwell GPU and the same performance levels, but with slightly tweaked power delivery and TDPs, notebooks powered by the GTX 980 promise to be a noticeable step faster than anything before them.
Late last week I got my hands on the updated MSI GT72S Dominator Pro G, the first retail-ready gaming notebook to integrate not only the new GTX 980 GPU but also an unlocked Skylake mobile processor.
This machine is something to behold - though it looks very similar to previous GT72 versions, it hides hardware unlike anything we have been able to carry in a backpack before. And the sexy red exterior, with the MSI Dragon Army logo emblazoned across the back, definitely helps it stand out in a crowd. If you happen to be in a crowd of notebooks.
A quick spin around the GT72S reveals a sizeable collection of hardware and connections. On the left you'll find a set of four USB 3.0 ports as well as four audio inputs and outputs and an SD card reader.
On the opposite side there are two more USB 3.0 ports (totaling six) and the optical / Blu-ray burner. With that many USB 3.0 ports you should never struggle with accessory availability - headset, mouse, keyboard, hard drive and portable fan? Check.
Subject: Graphics Cards | October 7, 2015 - 05:45 PM | Scott Michaud
Tagged: opengl es 3.2, nvidia, graphics drivers, geforce
The GeForce Game Ready 358.50 WHQL driver has been released so users can perform their updates before the Star Wars Battlefront beta goes live tomorrow (unless you already received a key). As with every “Game Ready” driver, NVIDIA ensures that the essential performance and stability tweaks are rolled into this version, and tests it against the title. It is WHQL certified too, which is a recent priority for NVIDIA. Years ago, “Game Ready” drivers were often classified as Beta, but the company now intends to pass its work through Microsoft for a final sniff test.
Another interesting addition to this driver is the inclusion of the OpenGL 2015 ARB extensions and OpenGL ES 3.2. Previously, to use OpenGL ES 3.2 on the PC - if you wanted to develop software with it, for instance - you needed a separate release that has existed since the spec debuted at SIGGRAPH. It has now been rolled into the main, public driver. The mobile devs who use their production machines to play Battlefront rejoice, I guess. It might also be useful if developers, at Mozilla or Google for instance, want to create pre-release implementations of future WebGL specs.
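If you want to poke at it, here is a minimal sketch of requesting an ES context on the desktop. GLFW is my choice of windowing layer here, not something NVIDIA's notes mention, and it assumes GLFW 3.x plus a driver that actually exposes ES 3.2:

    /* Minimal sketch: ask a desktop driver for an OpenGL ES 3.2 context.
     * Uses GLFW 3.x as the windowing layer (our choice, not NVIDIA's). */
    #include <stdio.h>
    #include <GLFW/glfw3.h>

    int main(void)
    {
        if (!glfwInit())
            return 1;

        /* Request an OpenGL ES context instead of desktop OpenGL. With
         * 358.50+ this should succeed without a separate developer driver. */
        glfwWindowHint(GLFW_CLIENT_API, GLFW_OPENGL_ES_API);
        glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
        glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 2);

        GLFWwindow *win = glfwCreateWindow(640, 480, "ES 3.2 check", NULL, NULL);
        if (!win) {
            fprintf(stderr, "driver did not offer an ES 3.2 context\n");
            glfwTerminate();
            return 1;
        }

        glfwMakeContextCurrent(win);
        /* Should report something like "OpenGL ES 3.2 NVIDIA 358.50". */
        printf("context: %s\n", (const char *)glGetString(GL_VERSION));
        glfwDestroyWindow(win);
        glfwTerminate();
        return 0;
    }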
Subject: Graphics Cards | October 7, 2015 - 11:01 AM | Scott Michaud
Tagged: opengl, metal, apple
Ars Technica took it upon themselves to benchmark Metal in the latest OS X El Capitan release. Even though OpenGL on OS X is not considered to be on par with its Linux counterpart, probably due to the driver situation until recently, OpenGL pulls ahead of Metal in many situations.
Image Credit: Ars Technica
Unlike the other new graphics APIs, Metal uses the traditional binding model. Basically, you have a GPU object that you attach your data to, then call one of a handful of “draw” functions to signal the driver. DirectX 12, Vulkan, and Mantle, on the other hand, treat work like commands on queues. The latter model works better in multi-core environments, and it aligns with how GPU compute APIs operate, but the former is easier to port OpenGL and DirectX 11 applications to.
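To make the contrast concrete, here is a self-contained toy sketch of the two submission styles. Every type and function below is an invented stand-in for illustration only; none of them belong to Metal, Vulkan, or any real API:

    /* Toy stand-ins, invented for illustration -- not a real graphics API. */
    #include <stdio.h>

    /* Binding model (OpenGL / DirectX 11 style): mutate state on a single
     * context object, then draw. The driver re-validates state per draw. */
    typedef struct { int buffer, texture; } Context;

    void bind_buffer(Context *c, int id)  { c->buffer  = id; }
    void bind_texture(Context *c, int id) { c->texture = id; }
    void draw(Context *c, int verts) {
        printf("draw %d verts (buffer %d, texture %d validated now)\n",
               verts, c->buffer, c->texture);
    }

    /* Command-queue model (DirectX 12 / Vulkan / Mantle style): record
     * pre-validated commands into buffers, possibly one per CPU core,
     * and only touch the shared queue at submission time. */
    typedef struct { int draws[64]; int count; } CmdBuffer;

    void record_draw(CmdBuffer *cb, int verts) { cb->draws[cb->count++] = verts; }
    void submit(CmdBuffer *cb) {
        for (int i = 0; i < cb->count; i++)
            printf("queued draw of %d verts\n", cb->draws[i]);
    }

    int main(void) {
        Context ctx = {0};
        bind_buffer(&ctx, 1);
        bind_texture(&ctx, 2);
        draw(&ctx, 36);           /* every call talks to the "driver" */

        CmdBuffer cb = {0};
        record_draw(&cb, 36);     /* recording is thread-friendly */
        submit(&cb);              /* one synchronized hand-off */
        return 0;
    }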
Ars Technica notes that faster GPUs, such as the NVIDIA GeForce GTX 680MX, show higher gains than slower ones. Their “best explanation” is that “faster GPUs can offload more work from the CPU”. That is pretty much true, yes. The new APIs are designed to keep GPUs loaded and working as much as possible, because they really do sit around doing nothing a lot. If you are already able to keep a GPU loaded, because it can't accept much work in the first place, then there is little benefit to decreasing CPU load or spreading submission across multiple cores.
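A toy frame-time model shows why: a frame cannot complete before both the CPU's submission work and the GPU's rendering work are done, so cutting CPU overhead only matters when the CPU was the slower side. The millisecond figures here are invented purely for illustration:

    /* Toy model: the slower of CPU submission and GPU rendering sets the
     * frame time. Numbers are invented purely for illustration. */
    #include <stdio.h>

    double frame_ms(double cpu_ms, double gpu_ms)
    {
        return cpu_ms > gpu_ms ? cpu_ms : gpu_ms;
    }

    int main(void)
    {
        /* Slow GPU: already the bottleneck; halving CPU cost changes nothing. */
        printf("slow GPU: %.0f ms -> %.0f ms\n",
               frame_ms(10, 25), frame_ms(5, 25));   /* 25 -> 25 */

        /* Fast GPU: the CPU was the limit, so the same API win shows up fully. */
        printf("fast GPU: %.0f ms -> %.0f ms\n",
               frame_ms(10, 6), frame_ms(5, 6));     /* 10 -> 6  */
        return 0;
    }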
Granted, there are many ways that benchmarks like these could be incorrectly used. I'll assume that Ars Technica and GFXBench are not making any simple mistakes, but it's good to be critical just in case.
Subject: Graphics Cards | October 7, 2015 - 05:51 AM | Sebastian Peak
The latest game bundle for NVIDIA GPU customers offers the buyer a choice between Tom Clancy’s Rainbow Six Siege and Assassin’s Creed Syndicate.
To qualify for the free game you need to purchase a GTX 980 Ti, GTX 980, or GTX 970 graphics card. On the mobile side of things, purchasing a laptop with GTX 970M or better graphics earns the game.
"It’s the final few months of the year, and as always that means a rush of new triple-A games that promise to excite and delight over the Holiday season. This year, Ubisoft's Assassin’s Creed Syndicate andTom Clancy's Rainbow Six Siege are vying for glory. And to ensure the definitive versions are found on PC we’ve teamed up with Ubisoft once again to add NVIDIA GameWorks effects to each, bringing richer, more detailed experiences to your desktop."
The Bullets or Blades bundle is already underway as of 10/06/15, and to qualify for a game code your purchase must be made through a participating retailer. Full details are available from NVIDIA here.
Subject: Graphics Cards | October 6, 2015 - 06:40 PM | Jeremy Hellstrom
Tagged: 4k, gtx titan x, fury x, GTX 980 Ti, crossfire, sli
[H]ard|OCP shows off just what you can achieve when you spend over $1000 on graphics cards and have a 4K monitor in their latest review. In Project Cars you can expect never to see less than 40 fps with everything cranked to maximum, and if you invested in Titan Xs you can even enable DS2X anti-aliasing, which renders at double the resolution before downsampling. The Witcher 3 is a bit more challenging, and no card is up for HairWorks without a noticeable hit to performance. Far Cry 4 still refuses to believe in CrossFire, and as far as NVIDIA performance goes, if you want to see soft shadows you are going to have to invest in a pair of Titan Xs. Check out the full review to see what the best of the current market is capable of.
"The ultimate 4K battle is about to begin, AMD Radeon R9 Fury X CrossFire, NVIDIA GeForce GTX 980 Ti SLI, and NVIDIA GeForce GTX TITAN X SLI will compete for the best gameplay experience at 4K resolution. Find out what $1300 to $2000 worth of GPU backbone will buy you. And find out if Fiji really can 4K."
Here are some more Graphics Card articles from around the web:
- Sapphire R7 370 Nitro Review @ OCC
- PNY GTX 950 2GB @ Kitguru
- Gigabyte GTX 950 Xtreme Gaming 2GB @ Kitguru
- Nvidia's GeForce GTX 950 @ The Tech Report
Subject: Graphics Cards | October 5, 2015 - 11:13 AM | Scott Michaud
Tagged: graphics drivers, amd
Apparently users of AMD's Catalyst 15.9 drivers have been experiencing issues. Specifically, “major memory leaks” could be caused by adjusting windows, such as resizing them or snapping them to edges of the desktop. According to PC Gamer, AMD immediately told users to roll back when they found out about the bug.
They have since fixed it with Catalyst 15.9.1 Beta. This subversion driver also fixes crashes and potential “signal loss” problems with a BenQ FreeSync monitor. As such, if you were interested in playing around with the Catalyst 15.9 beta driver, it should be safe to do so now. I wish I could offer more input, but I just found out about it and it seems pretty cut-and-dried: if you had problems, they should be fixed. The update is available here.
Subject: Graphics Cards | October 5, 2015 - 06:33 AM | Sebastian Peak
Tagged: rumor, report, radeon, graphics cards, Gemini, fury x, fiji xt, dual-GPU, amd
The AMD R9 Fury X, Fury, and Nano have all been released, but a dual-GPU Fiji XT card could be on the way soon according to a new report.
Back in June at AMD's E3 event we were shown Project Quantum, AMD's concept for a powerful dual-GPU system in a very small form factor. It was speculated that the system was actually housing an unreleased dual-GPU graphics card, which would have made sense given the very small size of the system (and the mini-ITX motherboard therein). Now a report from WCCFtech is pointing to a manifest that just might be a shipment of this new dual-GPU card, and the code-name is Gemini.
"Gemini is the code-name AMD has previously used in the past for dual GPU variants and surprisingly, the manifest also contains another phrase: ‘Tobermory’. Now this could simply be a reference to the port that the card shipped from...or it could be the actual codename of the card, with Gemini just being the class itself."
The manifest also indicates a cooler from Cooler Master, the maker of the liquid cooling solution for the Fury X. As the Fury X has had its share of criticism for pump whine issues, it would be interesting to see how a dual-GPU cooling solution would fare in that department, though we could be seeing an entirely new generation of the pump as well. Of course, speculation on an unreleased product like this could be incorrect, and verifiable hard details aren't available yet. Still, if the dual-GPU card is based on a pair of full Fiji XT cores, the specs could be very impressive, to say the least:
- Core: Fiji XT x2
- Stream Processors: 8192
- GCN Compute Units: 128
- ROPs: 128
- TMUs: 512
- Memory: 8 GB (4GB per GPU)
- Memory Interface: 4096-bit x2
- Memory Bandwidth: 1024 GB/s
In addition to the specifics above, the report also discussed the possibility of 17.2 TFLOPS of performance, based on 2x the performance of Fury X, which would make the Gemini product one of the most powerful single-card GPU solutions in the world. The card seems close enough to the final stage that we should expect to hear something official soon, but for now it's fun to speculate - unless of course the speculation concerns a high initial retail price, and unfortunately something at or above $1000 is quite likely. We shall see.
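For the curious, the 17.2 TFLOPS figure falls straight out of the rumored shader count, if you assume both GPUs hold the Fury X's 1050 MHz clock, which is not confirmed anywhere. A quick sanity check:

    /* Sanity check of the rumored 17.2 TFLOPS figure. Assumes both GPUs
     * keep the Fury X's 1050 MHz clock, which is NOT confirmed. */
    #include <stdio.h>

    int main(void)
    {
        double shaders = 4096;   /* stream processors per Fiji XT GPU    */
        double ghz     = 1.05;   /* Fury X clock, assumed to carry over  */
        double fma     = 2.0;    /* two FLOPs per shader per cycle (FMA) */

        double per_gpu = shaders * ghz * fma / 1000.0;
        printf("per GPU: %.1f TFLOPS\n", per_gpu);        /*  ~8.6 */
        printf("Gemini:  %.1f TFLOPS\n", per_gpu * 2.0);  /* ~17.2 */
        return 0;
    }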
GPU Enthusiasts Are Throwing a FET
NVIDIA is rumored to launch Pascal in early (~April-ish) 2016, although some are skeptical that it will even appear before the summer. The design was finalized months ago, and unconfirmed shipping information claims that chips are being stockpiled, which is typical when preparing to launch a product. It is expected to compete against AMD's rumored Arctic Islands architecture, which will, according to its also rumored numbers, be very similar to Pascal.
This architecture is a big one for several reasons.
Image Credit: WCCFTech
First, it will jump two full process nodes. Current desktop GPUs are manufactured at 28nm, which NVIDIA first shipped with the GeForce GTX 680 all the way back in early 2012, but Pascal will be manufactured on TSMC's 16nm FinFET+ technology. Smaller features have several advantages, but a huge one for GPUs is the ability to fit more complex circuitry in the same die area. This means that you can include more copies of elements, such as shader cores, and do more in fixed-function hardware, like video encode and decode.
That said, we got a lot more life out of 28nm than we really should have. Chips like GM200 and Fiji are huge, relatively power-hungry, and complex, which is a terrible combination to manufacture when yields are low. I asked Josh Walrath, who is our go-to for analysis of fab processes, and he believes that FinFET+ is probably even more complicated today than 28nm was in the 2012 timeframe, when it launched for GPUs.
It's two full steps forward from where we started, but we've been tiptoeing since then.
Image Credit: WCCFTech
Second, Pascal will introduce HBM 2.0 to NVIDIA hardware. HBM 1.0 was introduced with AMD's Radeon Fury X, and it helped in numerous ways -- from smaller card size to a triple-digit percentage increase in memory bandwidth. The 980 Ti can talk to its memory at roughly 336 GB/s, while Pascal is rumored to push that to 1 TB/s. Capacity won't be sacrificed, either. The top-end card is expected to contain 16GB of global memory, which is twice what any console has. This means less streaming, higher-resolution textures, and probably even left-over scratch space for the GPU to generate content in with compute shaders. Also, according to AMD, HBM is an easier architecture to communicate with than GDDR, which should mean a savings in die space that could be used for other things.
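Those bandwidth figures are simple arithmetic: bus width times per-pin data rate, divided by eight bits per byte. The HBM2 line below uses the rumored four-stack, 2 Gbps configuration, not anything NVIDIA has confirmed:

    /* Bandwidth = bus width (bits) x data rate (Gbps per pin) / 8 bits/byte.
     * The HBM2 line assumes the rumored 4-stack, 2 Gbps configuration. */
    #include <stdio.h>

    int main(void)
    {
        /* GTX 980 Ti: 384-bit GDDR5 at 7 Gbps per pin. */
        printf("980 Ti GDDR5: %.0f GB/s\n", 384 * 7.0 / 8.0);      /*  336 */

        /* Rumored Pascal: four HBM2 stacks, 1024 bits each, 2 Gbps per pin. */
        printf("4-stack HBM2: %.0f GB/s\n", 4 * 1024 * 2.0 / 8.0); /* 1024 */
        return 0;
    }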
Third, the architecture includes native support for three levels of floating point precision. Maxwell, due to how limited 28nm was, saved on complexity by reducing 64-bit IEEE 754 floating point performance to 1/32nd the rate of 32-bit numbers, because FP64 values are rarely used in video games. This saved transistors, but was a huge, order-of-magnitude step back from the 1/3rd ratio found on the Kepler-based GK110. (On the Titan X, for instance, roughly 6.1 TFLOPS of FP32 works out to under 200 GFLOPS of FP64.) While it probably won't be back to the 1/2 ratio that was found in Fermi, Pascal should be much better suited for GPU compute.
Image Credit: WCCFTech
Mixed precision could help video games too, though. Remember how I said it supports three levels? The third one is 16-bit, which is half the width of the 32-bit format commonly used in video games. Sometimes, that is sufficient. If so, Pascal is said to do these calculations at twice the rate of 32-bit. We'll need to see whether enough games (and other applications) are willing to drop down in precision to justify the die space that these dedicated circuits require, but it should double the performance of anything that does.
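To see why halving precision is a real trade-off, here is a small sketch that quantizes a float to the nearest FP16-representable value. It is simplified to normal-range inputs (no subnormals or infinities) and written purely for illustration, not as anything NVIDIA ships:

    /* FP16 keeps 11 significant bits, so the gap between representable
     * values grows with magnitude. Simplified demo: normal range only,
     * no subnormal/infinity handling. */
    #include <math.h>
    #include <stdio.h>

    float to_fp16(float x)
    {
        int e;
        float m = frexpf(x, &e);        /* x = m * 2^e, 0.5 <= |m| < 1 */
        return ldexpf(roundf(ldexpf(m, 11)), e - 11);  /* keep 11 bits */
    }

    int main(void)
    {
        printf("1.001    -> %g\n",   to_fp16(1.001f));       /* ~1.00098      */
        printf("pi       -> %.6f\n", to_fp16(3.14159265f));  /* 3.140625      */
        printf("4096 + 1 -> %g\n",   to_fp16(4097.0f));      /* 4096, +1 lost */
        return 0;
    }

Above 4096, adjacent FP16 values are already four apart, which is why half precision tends to suit things like color and lighting math far better than, say, world-space positions.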
So basically, this generation should provide the massive jump in performance that enthusiasts have been waiting for. Memory bandwidth and the amount of circuitry that can be printed into the die are two major bottlenecks for most modern games and GPU-accelerated software, and this generation pushes both forward. We'll need to wait for benchmarks to see how the theoretical maps to the practical, but it's a good sign.