Subject: Graphics Cards | March 30, 2016 - 02:58 AM | Tim Verry
Tagged: maxwell, gtx 950, GM206, asus
Asus is launching a new midrange gaming graphics card clad in arctic camouflage. The Echelon GTX 950 Limited Edition is a Maxwell-based card that will come factory overclocked and paired with Asus features normally reserved for their higher end cards.
This dual slot, dual fan graphics card features "Auto-Extreme Technology," which is Asus marketing speak for high-end capacitors, chokes, and other components. Further, the card uses a DirectCU II cooler that Asus claims offers 20% better cooling performance while being three times quieter than the NVIDIA reference cooler. Asus tweaked the shroud on this card to resemble a white and gray arctic camouflage design. There is also a reinforced backplate that continues the stealthy camo theme.
I/O on the Echelon GTX 950 Limited Edition includes:
- 1 x DVI-D
- 1 x DVI-I
- 1 x HDMI 2.0
- 1 x DisplayPort
The card supports NVIDIA’s G-Sync technology, and the inclusion of an HDMI 2.0 port allows it to be used in an HTPC/gaming PC build for the living room, though case selection would be limited since it’s a larger dual slot card.
Beneath the stealthy exterior, Asus conceals a GM206-derived GTX 950 GPU with 768 CUDA cores, 48 Texture Units, and 32 ROPs as well as 2GB of GDDR5 memory. Out of the box, users have two factory overclocks to choose from that Asus calls Gaming and Overclock modes. In Gaming Mode, the Echelon GTX 950 GPU is clocked at 1,140 MHz base and 1,329 MHz boost. Turning the card to OC Mode, clockspeeds are further increased to 1,165 MHz base and 1,355 MHz boost.
For reference, the, well, reference GTX 950 clockspeeds are 1,024 MHz base and 1,186 MHz boost.
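Putting those factory overclocks in perspective, a quick back-of-the-envelope calculation (using the clocks quoted above) shows roughly an 11–14% uplift over reference:

```python
# Factory overclock deltas vs. the reference GTX 950, clocks in MHz.
reference = {"base": 1024, "boost": 1186}
gaming    = {"base": 1140, "boost": 1329}
oc_mode   = {"base": 1165, "boost": 1355}

def uplift(mode, ref):
    # Percent increase over the reference clock, rounded to one decimal.
    return {k: round((mode[k] - ref[k]) / ref[k] * 100, 1) for k in ref}

print(uplift(gaming, reference))   # base ~11.3%, boost ~12.1%
print(uplift(oc_mode, reference))  # base ~13.8%, boost ~14.2%
```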
Asus also ever-so-slightly overclocked the GDDR5 memory to 6,610 MHz, which is unfortunately a mere 10 MHz over reference. The memory sits on a 128-bit bus, and while a factory overclock is nice to see, transfer speed increases will be minimal at best.
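To see just how minimal, here is the bandwidth math, assuming (as is conventional for GDDR5) that the quoted figures are effective data rates:

```python
# Memory bandwidth from effective data rate and bus width.
def bandwidth_gbps(effective_mhz, bus_bits):
    # bytes/s = effective rate (Hz) * bus width in bytes
    return effective_mhz * 1e6 * (bus_bits / 8) / 1e9

print(bandwidth_gbps(6600, 128))  # reference GTX 950: 105.6 GB/s
print(bandwidth_gbps(6610, 128))  # Echelon: ~105.8 GB/s, a ~0.15% gain
```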
In our review of the GTX 950, which focused on the Asus Strix variant, Ryan found it to be a good option for 1080p gamers wanting a bit more graphical prowess than the 750 Ti for their games.
Maximum PC reports that the camo-clad Echelon GTX 950 will be available at the end of the month. Pricing has not been released by Asus, but I would expect this card to come with an MSRP of around $180 USD.
Subject: General Tech, Graphics Cards | March 28, 2016 - 11:24 PM | Ryan Shrout
Tagged: pcper, hardware, technology, review, Oculus, rift, Kickstarter, nvidia, geforce, GTX 980 Ti
It's Oculus Rift launch day and the team and I spent the afternoon setting up the Rift, running through a set of game play environments and getting some good first impressions on performance, experience and more. Oh, and we entered a green screen into the mix today as well.
Subject: Graphics Cards | March 28, 2016 - 10:20 AM | Ryan Shrout
Tagged: vive, valve, steamvr, rift, Oculus, nvidia, htc, amd
As the first Oculus Rift retail units begin hitting hands in the US and abroad, both AMD and NVIDIA have released new drivers to help gamers ease into the world of VR gaming.
Up first is AMD, with Radeon Software Crimson Edition 16.3.2. It adds support for Oculus SDK v1.3 and the Radeon Pro Duo...for all none of you that have that product in your hands. AMD claims that this driver will offer "the most stable and compatible driver for developing VR experiences on the Rift to-date." AMD tells us that the latest implementation of LiquidVR features in the software help the SDKs and VR games at release take better advantage of AMD Radeon GPUs. This includes capabilities like asynchronous shaders (which AMD thinks should be capitalized for some reason??) and Quick Response Queue (which I think refers to the ability to process without context change penalties) to help Oculus implement Asynchronous Timewarp.
NVIDIA's release is a bit more substantial, with GeForce Game Ready 364.72 WHQL drivers adding support for the Oculus Rift, HTC Vive and improvements for Dark Souls III, Killer Instinct, Paragon early access and even Quantum Break.
For the optimum experience when using the Oculus Rift, and when playing the thirty games launching alongside the headset, upgrade to today's VR-optimized Game Ready driver. Whether you're playing Chronos, Elite Dangerous, EVE: Valkyrie, or any of the other VR titles, you'll want our latest driver to minimize latency, improve performance, and add support for our newest VRWorks features that further enhance your experience.
Today's Game Ready driver also supports the HTC Vive Virtual Reality headset, which launches next week. As with the Oculus Rift, our new driver optimizes and improves the experience, and adds support for the latest Virtual Reality-enhancing technology.
Good to see both GPU vendors giving us new drivers for the release of the Oculus Rift...let's hope it pans out well and the response from the first buyers is positive!
Subject: General Tech, Graphics Cards | March 26, 2016 - 12:11 AM | Ryan Shrout
Tagged: VR, vive pre, vive, virtual reality, video, pre, htc
On Friday I was able to get a pre-release HTC Vive Pre in the office and spend some time with it. Not only was I interested in getting more hands-on time with the hardware without a time limit but we were also experimenting with how to stream and record VR demos and environments.
Enjoy and mock!
Subject: Graphics Cards | March 24, 2016 - 02:04 PM | Jeremy Hellstrom
Tagged: Ubuntu 16.04, linux, vulkan, amd, nvidia
Last week AMD released a new GPU-PRO beta driver stack, and this Monday NVIDIA released the 364.12 beta driver, both of which support Vulkan, which meant that Phoronix had a lot of work to do. Up for testing were the GTX 950, 960, 970, 980, and 980 Ti as well as the R9 Fury, 290, and 285. Logically, they used The Talos Principle for testing; their results compare not only the cards but also the performance delta between OpenGL and Vulkan, and they finished up with several OpenGL benchmarks to see if there were any performance improvements from the new drivers. The results look good for Vulkan, as it beats OpenGL across the board, as you can see in the review.
"Thanks to AMD having released their new GPU-PRO "hybrid" Linux driver a few days ago, there is now Vulkan API support for Radeon GPU owners on Linux. This new AMD Linux driver holds much potential and the closed-source bits are now limited to user-space, among other benefits covered in dozens of Phoronix articles over recent months. With having this new driver in hand plus NVIDIA promoting their Vulkan support to the 364 Linux driver series, it's a great time for some benchmarking. Here are OpenGL and Vulkan atop Ubuntu 16.04 Linux for both AMD Radeon and NVIDIA GeForce graphics cards."
Here are some more Graphics Card articles from around the web:
- XFX R9 390 Double Dissipation Black Edition @ [H]ard|OCP
- Far Cry Primal Graphics Card Performance Analysis @ eTeknix
- Inno3D GTX 980Ti iChill Black @ eTeknix
Subject: Graphics Cards | March 19, 2016 - 03:02 PM | Ryan Shrout
Tagged: VR, vive, valve, htc, gdc 2016, GDC
A story posted over at UploadVR has some interesting information that came out of the final days of GDC last week. We know that Valve, HTC and Oculus have recommended users have a Radeon R9 290 or GTX 970 GPU or higher to run virtual reality content on both the Vive and the Rift, and that comes with a high cost for users that weren't already invested in PC gaming. Valve’s Alex Vlachos has other plans that might enable graphics cards from as far back as 2012 to work in Valve's VR ecosystem.
Valve wants to lower the requirements for VR
Obviously there are some trade-offs to consider. The reason GPUs have such high requirements for the Rift and Vive is their need to run at 90 FPS / 90 Hz without dropping frames to create a smooth and effective immersion. Deviation from that means the potential for motion sickness and poor VR experiences in general.
From UploadVR's story:
“As long as the GPU can hit 45 HZ we want for people to be able to run VR,” Vlachos told UploadVR after the talk. “We’ve said the recommended spec is a 970, same as Oculus, but we do want lesser GPUs to work. We’re trying to reduce the cost [of VR].”
It's interesting that Valve would be talking about a 45 FPS target now, implying there would be some kind of frame doubling or frame interpolation to get back to the 90 FPS mark that the company believes is required for a good VR experience.
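The timing math behind that guess is straightforward. Here is a rough model (my own illustration, not anything Valve has published) of rendering every other refresh and re-presenting each frame once with an updated head pose:

```python
# Rough timing model for the 45 FPS -> 90 Hz idea described above.
REFRESH_HZ = 90
frame_budget_ms = 1000 / REFRESH_HZ          # ~11.1 ms per displayed frame

def displayed_frames_per_sec(render_fps):
    """Each rendered frame is presented twice when rendering at half rate,
    so the display still refreshes at the full 90 Hz."""
    presents_per_frame = REFRESH_HZ // render_fps
    return render_fps * presents_per_frame

print(round(frame_budget_ms, 1))       # 11.1 ms budget per refresh
print(displayed_frames_per_sec(45))    # 90: 45 newly rendered + 45 re-presented
```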
Image source: UploadVR
Vlachos also mentioned some other avenues that Valve could expand on to help improve performance. One of them is "adaptive quality", a feature we first saw discussed with the release of the Valve SteamVR Performance Test. This would allow the game to lower the image quality dynamically (texture detail, draw distance, etc.) based on hardware performance, but it might also include something called fixed foveated rendering. With FFR, only the center of the image is rendered at maximum detail while the surrounding image runs at lower quality; the theory is that you are only focused on the center of the screen anyway, and human vision blurs the periphery already. This is similar to NVIDIA's multi-res shading technology that is already integrated into UE4, so I'm curious to see how this one might shake out.
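To get a feel for why foveated rendering helps, here is a toy pixel-count model; the region split and periphery scale are made-up illustrative numbers, not anything Valve or NVIDIA has specified:

```python
# Toy fixed foveated rendering model: a central square renders at full
# resolution, the periphery at a reduced scale in each dimension.
def pixels_rendered(width, height, center_frac=0.5, periphery_scale=0.5):
    full = width * height
    center = full * center_frac ** 2                           # full-res center
    periphery = (full - center) * periphery_scale ** 2         # half-res edges
    return int(center + periphery)

full = 1080 * 1200                       # per-eye panel of the Vive/Rift
print(pixels_rendered(1080, 1200))       # 567,000 pixels shaded
print(full)                              # vs. 1,296,000 at full resolution
```

With these illustrative numbers, the GPU shades well under half the pixels per eye, which is the kind of headroom that could let older cards keep up.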
Another quote from UploadVR:
“I can run Aperture [a graphically rich Valve-built VR experience] on a 680 without dropping frames at a lower quality, and, for me, that’s enough of a proof of concept,” Vlachos said.
I have always said that neither Valve nor Oculus are going to lock out older hardware, but that they wouldn't directly support it. That a Valve developer can run its performance test (with adaptive quality) on a GTX 680 is a good sign.
The Valve SteamVR Performance Test
But Vlachos' point that "most art we’re seeing in VR isn’t as dense" as in other PC titles is a bit worrisome. We WANT VR games to improve to the same image quality and realism levels that we see in modern PC titles, not to depend solely on artistic angles to get to the necessary performance levels for high quality virtual reality. Yes, the entry price today for PC-based VR is going to be steep, but I think "console-ifying" the platform will do a disservice in the long run.
Subject: Graphics Cards | March 18, 2016 - 01:59 PM | Jeremy Hellstrom
Tagged: msi, GTX 980 Ti, MSI GTX 980 Ti GOLDEN Edition, nvidia, factory overclocked
Apart from the golden fan and HDMI port MSI's 980 Ti GOLDEN Edition also comes with a moderate factory overclock, 1140MHz Base, 1228MHz Boost and 7GHz memory, with an observed frequency of 1329MHz in game. [H]ard|OCP managed to up those to 1290MHz Base and 1378MHz Boost and 7.8GHz memory with the card hitting 1504MHz in game. That overclock produced noticeable results in many games and pushed it close to the performance of [H]'s overclocked MSI 980 Ti LIGHTNING. The LIGHTNING proved to be the better card in terms of performance, both graphically and thermally, however it is also more expensive than the GOLDEN and does not have quite the same aesthetics, if that is important to you.
"Today we evaluate the MSI GTX 980 Ti GOLDEN Edition video card. This video card features a pure copper heatsink geared towards faster heat dissipation and better temps on air than other air cooled video cards. We will compare it to the MSI GTX 980 Ti LIGHTNING, placing the two video cards head to head in an overclocking shootout. "
Here are some more Graphics Card articles from around the web:
- Gigabyte GeForce GTX 980Ti Xtreme @ eTeknix
- ASUS GeForce GTX 980 Ti Matrix 6 GB @ techPowerUp
- 4 Weeks with NVIDIA TITAN X SLI at 4K Resolution @ [H]ard|OCP
- NVIDIA GeForce GT 710: Trying NVIDIA's Newest Sub-$50 GPU On Linux @ Phoronix
Subject: Graphics Cards | March 16, 2016 - 09:29 PM | Sebastian Peak
Razer has announced pricing and availability for their Core external GPU enclosure, which allows GPUs of up to 375W to run over Thunderbolt 3 with compatible devices.
"The Razer Core is the world’s first true plug and play Thunderbolt 3 (USB-C) external graphics enclosure, allowing you to transform your notebook into a desktop gaming experience. Featuring plug and play support with compatible graphics cards, you won’t need to reboot your system every time you connect your Razer Blade Stealth to Razer Core. Connect to the future with the most advanced and versatile external desktop graphics solution available."
The Razer Core will cost $499 alone, or $399 when purchased with a Razer laptop. It will be available in April.
What's this? The new Core i7 Skull Canyon NUC connected to the Core eGPU??
An interesting addition to this announcement, the Razer Core is certified with the upcoming Core i7 Skull Canyon NUC, which features Thunderbolt 3. I don't know about you, but the idea of portable, external, upgradable graphics is awesome.
So what do you think? $499 as a standalone product for a user-upgradable external GPU solution with power supply? The $399 price is obviously more attractive, but you'd need to be in the market for a new laptop as well (and again, it would need to be a Razer laptop to get that $100 discount). In any case, AMD's XConnect technology certainly makes the Core a compelling possibility.
Subject: Graphics Cards | March 16, 2016 - 10:00 AM | Ryan Shrout
Tagged: video, rift, Oculus
As part of our second day at GDC, Ken and I spent 4+ hours with Oculus during their "Game Days 2016" event, an opportunity for us to taste test games in 30 minute blocks, getting more hands on time than we ever have before. The event was perfectly organized and easy to work in, and it helps that the product is amazing as well.
Of the 40-ish games available to play, 30 of them will be available on the Rift launch day, March 28th. We were able to spend some time with the following:
We aren't game reviewers here, but we obviously have a deep interest in games, and thus, having access to these games is awesome. But more than that, access to the best software that VR will have to offer this spring is invaluable as we continue to evaluate hardware accurately for our readers.
Ken and I sat down after the Oculus event to talk about the games we played, the experiences we had and what input the developers had about the technical issues and concerns surrounding VR development.
Subject: Graphics Cards | March 15, 2016 - 02:02 AM | Ryan Shrout
Tagged: vulkan, raja koduri, Polaris, HBM2, hbm, dx12, crossfire, amd
After hosting the AMD Capsaicin event at GDC tonight, the SVP and Chief Architect of the Radeon Technologies Group Raja Koduri sat down with me to talk about the event and offered up some additional details on the Radeon Pro Duo, upcoming Polaris GPUs and more. The video below has the full interview but there are several highlights that stand out as noteworthy.
- Raja claimed that one of the reasons to launch the dual-Fiji card as the Radeon Pro Duo for developers rather than pure Radeon, aimed at gamers, was to “get past CrossFire.” He believes we are at an inflection point with APIs. Where previously you would abstract two GPUs to appear as a single to the game engine, with DX12 and Vulkan the problem is more complex than that as we have seen in testing with early titles like Ashes of the Singularity.
But with the dual-Fiji product mostly developed and prepared, AMD was able to find a market between the enthusiast and the creator to target, and thus the Radeon Pro branding was born.
Raja further expands on it, telling me that in order to make multi-GPU useful and productive for the next generation of APIs, getting multi-GPU hardware solutions in the hands of developers is crucial. He admitted that CrossFire in the past has had performance scaling concerns and compatibility issues, and that getting multi-GPU correct from the ground floor here is crucial.
- With changes in Moore’s Law and the realities of process technology and processor construction, multi-GPU is going to be more important for the entire product stack, not just the extreme enthusiast crowd. Why? Because realities are dictating that GPU vendors build smaller, more power efficient GPUs, and to scale performance overall, multi-GPU solutions need to be efficient and plentiful. The “economics of the smaller die” are much better for AMD (and we assume NVIDIA) and by 2017-2019, this is the reality and will be how graphics performance will scale.
Getting the software ecosystem going now is going to be crucial to ease into that standard.
- The naming scheme of Polaris (10, 11…) follows no equation; it’s just “a sequence of numbers” and we should only expect it to increase going forward. The next Polaris chip will be bigger than 11, that’s the secret he gave us.
There have been concerns that AMD was only going to go for the mainstream gaming market with Polaris but Raja promised me and our readers that we “would be really really pleased.” We expect to see Polaris-based GPUs across the entire performance stack.
- AMD’s primary goal here is to get many millions of gamers VR-ready, though getting the enthusiasts “that last millisecond” is still a goal and it will happen from Radeon.
- No solid date on Polaris parts at all – I tried! (Other than that the launches start in June.) Though Raja did promise that after tonight, he will not have his next alcoholic beverage until the launch of Polaris. Serious commitment!
- Curious about the HBM2 inclusion in Vega on the roadmap and what that means for Polaris? Though he didn’t say it outright, it appears that Polaris will be using HBM1, leaving me to wonder about the memory capacity limitations inherent in that. Has AMD found a way to get past the 4GB barrier? We are trying to figure that out for sure.
Why is Polaris going to use HBM1? Raja pointed towards the extreme cost and expense of building the HBM ecosystem and prepping the pipeline for the new memory technology as the culprit; AMD obviously wants to recoup some of that cost with another generation of GPU usage.
Speaking with Raja is always interesting, and the confidence and knowledge he showcases still give me assurance that the Radeon Technologies Group is headed in the correct direction. This is going to be a very interesting year for graphics, PC gaming and GPU technologies, as showcased throughout the Capsaicin event, and I think everyone should be looking forward to it.
Subject: Graphics Cards | March 14, 2016 - 07:00 PM | Ryan Shrout
Tagged: VR, radeon pro duo, radeon, Fiji, dual fiji, capsaicin, amd
It’s finally here, and AMD is ready to ship it, the much discussed and often debated dual-Fiji graphics card that the company first showed with the launch of the Fury series of Radeon cards way back in June of last year. It was unnamed then, and I started calling it the AMD Fury X2, but it seems that AMD has other plans for this massive compute powerhouse, now with a price tag of $1,499.
As part of the company’s Capsaicin event at GDC tonight, AMD showed the AMD Radeon Pro Duo, calling it the “most powerful platform for VR” among other things. The card itself is a dual-slot configuration with what appears to be a (very thick) 120mm self-contained liquid cooler, similar to the Fury X design. You’ll need three 8-pin power connectors for the Radeon Pro Duo as well, but assuming you are investing in this kind of hardware that should be no issue.
Even with the integration of HBM to help minimize the footprint of the GPU and memory system, the Radeon Pro Duo is a bit taller than the standard bracket and is closer in length to a standard graphics card.
AMD isn’t telling us much about performance in the early data provided, only mentioning again that the card provides 16 teraflops of compute performance. This is just about double that of the Fury X, single GPU variant released last year; clearly the benefit of water cooling the Pro Duo is that it can run at maximum clock speeds.
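That 16 TFLOPS figure is easy to sanity check against Fiji's shader count. For GCN, peak FP32 throughput is shaders × 2 ops per clock (one fused multiply-add) × clock speed:

```python
# Peak FP32 throughput for a GCN GPU: shaders * 2 ops/clock (FMA) * clock.
def tflops(shaders, clock_mhz):
    return shaders * 2 * clock_mhz * 1e6 / 1e12

fury_x = tflops(4096, 1050)              # single Fiji at the Fury X's 1050 MHz
print(round(fury_x, 2))                  # ~8.6 TFLOPS
print(round(2 * fury_x, 1))              # ~17.2 TFLOPS if both GPUs held 1050 MHz
print(round(16e6 / (2 * 4096 * 2)))      # 16 TFLOPS implies clocks around 977 MHz
```

So AMD's 16 TFLOPS claim implies the two GPUs sustain clocks just slightly under the single Fury X, which fits the "maximum clock speeds" framing above.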
Probably the biggest change from what we learned about the dual-GPU card in June to today is its target market. AMD claims that the Radeon Pro Duo is “aimed at all aspects of the VR developer lifestyle: developing content more rapidly for tomorrow’s killer VR experiences while at work, and playing the latest DirectX® 12 experiences at maximum fidelity while off work.” Of course you can use this card for gaming – it will show up in your system just as any dual-GPU configuration would and can be taken advantage of at the same level that owning two Fury X cards would provide.
The Radeon Pro Duo is cooled by a rear-mounted liquid cooler in this photo
That being said, with a price tag of $1,499, it makes very little sense for gamers to invest in this product for gaming alone. Just as we have said about the NVIDIA TITAN line of products, they are the best of the best but are priced to attract developers rather than gamers. In the past AMD had ridiculed NVIDIA for this kind of move but it seems that the math just works here – the dual-Fiji card is likely a high cost, low yield, low production part. Add to that the fact that it was originally promised in Q3 2015, and that AMD has publicly stated that its Polaris-based GPUs would be ready starting in June, and the window for a consumer variant of the Radeon Pro Duo is likely closed.
"The Radeon Pro Duo is AMD's evolution of their high-end graphics card strategy with them positioning the Radeon Pro Duo more towards a content creator audience rather than gamers. This helps justify the higher price and lower volumes as well as gives developers the frame of mind to develop for multi-GPU VR from the get go rather than as an afterthought." - Anshel Sag, Analyst at Moor Insights & Strategy
For engineers, developers, educational outlets and other professional landscapes, though, the pure processing power compressed into a single board will be incredibly useful. And of course, there are those gamers crazy enough out there with the unlimited budget and the need to go against our recommendations.
Update: AMD's Capsaicin livestream included some interesting slides on the new Pro Duo GPU, including some of the capabilities and a look at the cooling system.
The industrial design has carried over from the Fury X
As we see from this slide, the Pro Duo offers 2x Fiji GPUs with 8GB of HBM, and boasts 16 TFLOPS of compute power (the AMD Nano offers 8.19 TFLOPS, so this is consistent with a dual-Nano setup).
The cooling system is again a Cooler Master design, with a separate block for each GPU. The hoses have a nice braided cover, and lead to a very thick looking radiator with a pre-attached fan.
From the look of the fan blades this looks like it's designed to move quite a bit of air, and it will need to considering a single (120 mm?) radiator is handling cooling for a pair of high-end GPUs. Temperatures and noise levels will be something to look for when we have hardware in hand.
Subject: Graphics Cards | March 14, 2016 - 07:00 PM | Ryan Shrout
Tagged: VR, radeon pro duo, radeon, capsaicin, amd
As part of AMD’s Capsaicin event in San Francisco today, the company is making some bold statements around its strategy for VR, including massive market share dominance, new readiness programs and the future of VR with the Polaris architecture due out this year.
The most surprising statement made by AMD at the event was the claim that “AMD is powering the overwhelming majority of home entertainment VR systems around the world, with an estimated 83 percent market share.” This is obviously not based on discrete GPU sales in the PC market alone, but instead includes the sales of the PlayStation 4 game console, for which Sony will launch its own PlayStation VR headset later this year. (Side note, does JPR not include the array of Samsung phones to be “home entertainment VR” systems?)
There is no denying that Sony's install base with the PS4 has put AMD in the driver's seat when it comes to global gaming GPU distribution, but as of today this advantage has not amounted to anything noticeable in the PC space – a stance that AMD was selling hard before the consoles’ launch. I am hesitant to put any weight behind AMD’s PS4 integration for VR moving forward, so the company will have to prove that this is in fact an advantage for the chip maker going into 2016.
AMD is talking up other partnerships as well, including those with HTC and Oculus for their respective headset launches, due in the next 30 days. Beyond that, AMD hardware is being used in the just announced Sulon Q wireless VR headset and has deals in place with various healthcare, media and educational outlets to seed development hardware.
For system vendors and add-in card builders, AMD is launching a certification program that will create labels of “Radeon™ VR Ready Premium” and “Radeon™ VR Ready Creator”. The former will be assigned to graphics cards at Radeon R9 290 performance and above to indicate they are capable of meeting the specifications required by Oculus and HTC for their VR headsets; the latter is going to be assigned only to the Radeon Pro Duo dual-Fiji graphics card, meant to target developers that need maximum performance.
Finally, AMD is showing that its next generation graphics architecture, Polaris, is capable of VR as well.
AMD today demonstrated for the first time ever the company’s forthcoming Polaris 10 GPU running Valve’s Aperture Science Robot Repair demo powered by the HTC Vive Pre. The sample GPU features the recently announced Polaris GPU architecture designed for 14nm FinFET, optimized for DirectX® 12 and VR, and boasts significant architectural improvements over previous AMD architectures including HDR monitor support, industry-leading performance-per-watt, and AMD’s 4th generation Graphics Core Next (GCN) architecture.
We are still waiting to see if this is the same silicon that AMD showed at CES, a mainstream part, or if we might be witnessing the first demo of a higher end part, whetting the appetite of the enthusiast community.
Subject: Graphics Cards | March 14, 2016 - 07:00 PM | Sebastian Peak
Tagged: crytek, CRYENGINE, amd
AMD will be the sole GPU presence in the labs at universities participating in Crytek’s VR First initiative, which “provides colleges and universities a ready-made VR solution for developers, students and researchers”, according to AMD.
AMD is leveraging the newly-announced Radeon Pro Duo graphics cards for this partnership, which lends immediate credibility to their positioning of the new GPU for VR development.
“The new labs will be equipped with AMD Radeon™ Pro Duo graphics cards with LiquidVR™ SDK, the world’s fastest VR content creator platform bridging content creation and consumption and offering an astonishing 16 teraflops of compute power. Designed to be compatible with multiple head mounted displays, including the Oculus Rift™ and HTC Vive™, AMD Radeon™ Pro Duo cards will encourage grassroots VR development around the world. The initial VR First Lab at Bahçeşehir University in Istanbul is already up and running in January of this year.”
Crytek CEO Cevat Yerli explains VR First:
“VR First labs will become key incubators for nurturing new talent in VR development and creating a global community well-prepared to innovate in this exciting and emerging field. VR experiences, harnessing the power of the CRYENGINE and developed using world-class Radeon™ hardware and software, will have the potential to fundamentally transform how we interact with technology.”
This certainly appears to be an early win for AMD in VR development, at least in the higher education sector.
Subject: Graphics Cards | March 14, 2016 - 04:41 PM | Ryan Shrout
Tagged: video, live, capsaicin, amd
Subject: Graphics Cards | March 11, 2016 - 05:03 PM | Sebastian Peak
Tagged: rumor, report, pascal, nvidia, HBM2, gtx1080, GTX 1080, gtx, GP104, geforce, gddr5x
We are expecting news of the next NVIDIA graphics card this spring, and as usual whenever an announcement is imminent we have started seeing some rumors about the next GeForce card.
(Image credit: NVIDIA)
Pascal is the name we've all been hearing about, and along with this next-gen core we've been expecting HBM2 (second-gen High Bandwidth Memory). This makes today's rumor all the more interesting, as VideoCardz is reporting (via BenchLife) that a card called either the GTX 1080 or GTX 1800 will be announced, using the GP104 GPU core with 8GB of GDDR5X - and not HBM2.
The report also claims that NVIDIA CEO Jen-Hsun Huang will have an announcement for Pascal in April, which leads us to believe a shipping product based on Pascal is finally in the works. Taking in all of the information from the BenchLife report, VideoCardz has created this list to summarize the rumors (taken directly from the source link):
- Pascal launch in April
- GTX 1080/1800 launch in May 27th
- GTX 1080/1800 has GP104 Pascal GPU
- GTX 1080/1800 has 8GB GDDR5X memory
- GTX 1080/1800 has one 8pin power connector
- GTX 1080/1800 has 1x DVI, 1x HDMI, 2x DisplayPort
- First Pascal board with HBM would be GP100 (Big Pascal)
Rumored GTX 1080 Specs (Credit: VideoCardz)
The alleged single 8-pin power connector with this GTX 1080 would place the power limit at 225W, though it could very well require less power. The GTX 980 is only a 165W part, with the GTX 980 Ti rated at 250W.
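The 225W figure comes straight from PCIe power delivery limits: 75W from the slot plus 150W per 8-pin connector (and 75W per 6-pin):

```python
# PCIe board power limits per the spec: 75W slot, 75W per 6-pin, 150W per 8-pin.
SLOT_W, SIX_PIN_W, EIGHT_PIN_W = 75, 75, 150

def board_power_limit(six_pins=0, eight_pins=0):
    return SLOT_W + six_pins * SIX_PIN_W + eight_pins * EIGHT_PIN_W

print(board_power_limit(eight_pins=1))               # 225W - rumored GTX 1080 config
print(board_power_limit(six_pins=1, eight_pins=1))   # 300W - GTX 980 Ti's connectors
```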
As always, only time will tell how accurate these rumors are. VideoCardz points out that "BenchLife stories are usually correct," though they are skeptical of this report because of the GTX 1080 name (even though it would follow the current naming scheme of GeForce cards).
Subject: Graphics Cards | March 10, 2016 - 01:27 PM | Sebastian Peak
Tagged: XConnect, thunderbolt 3, radeon, graphics card, gpu, gaming laptop, external gpu, amd
AMD has announced their new external GPU technology called XConnect, which leverages support from the latest Radeon driver to support AMD graphics over Thunderbolt 3.
The technology showcased by AMD is powered by Razer, who partnered with AMD to come up with an expandable solution that supports up to 375W GPUs, including R9 Fury, R9 Nano, and all R9 300 series GPUs up to the R9 390X (there is no liquid cooling support, and the R9 Fury X isn't listed as being compatible). The notebook in AMD's marketing material is the Razer Blade Stealth, which offers the Razer Core external GPU enclosure as an optional accessory. (More information about these products from Razer here.) XConnect is not tied to any vendor, however; this is "generic driver" support for GPUs over Thunderbolt 3.
AMD has posted this video with the head of Global Technical Marketing, Robert Hallock, to explain the new tech and show off the Razer hardware:
The exciting part has to be the promise of an industry standard for external graphics, something many have hoped for. Not everyone will produce a product exactly like Razer has, since there is no requirement to provide a future upgrade path in a larger enclosure like this, but the important thing is that Thunderbolt 3 support is built in to the newest Radeon Crimson drivers.
Here are the system requirements for AMD XConnect from AMD:
- Radeon Software 16.2.2 driver (or later)
- 1x Thunderbolt 3 port
- 40Gbps Thunderbolt 3 cable
- Windows 10 build 10586 (or later)
- BIOS support for external graphics over Thunderbolt 3 (check with system vendor for details)
- Certified Thunderbolt 3 graphics enclosure configured with supported Radeon R9 Series GPU
- Thunderbolt firmware (NVM) v.16
The announcement introduces all sorts of possibilities. How awesome would it be to see a tiny solution with an R9 Nano powered by, say, an SFX power supply? Or what about a dual-GPU enclosure (possibly requiring 2 Thunderbolt 3 connections?), or an enclosure supporting liquid cooling (and the R9 Fury X)? The potential is certainly there, and with a standard in place we could see some really interesting products in the near future (or even DIY solutions). It's a promising time for mobile gaming!
Subject: Graphics Cards, Systems | March 10, 2016 - 11:38 AM | Sebastian Peak
Tagged: zotac, zbox, VR, SFF, nvidia, mini-pc, MAGNUS EN980, liquid cooling, GTX980, GTX 980, graphics, gpu, geforce
ZOTAC is teasing a new mini PC "ready for virtual reality" leading up to Cebit 2016, happening later this month. The ZBOX MAGNUS EN980 supplants the EN970 as the most powerful version of ZOTAC's gaming mini systems, and will come equipped with no less than an NVIDIA GeForce GTX 980.
(Image via Guru3D)
Some questions remain ahead of a more formal announcement, and foremost among them is which version of the GTX 980 the system uses. Is this the full desktop variant, or the GTX 980M? It seems to be the former, if we can read into the "factory-installed water-cooling solution", especially if that pertains to the GPU. In any case, this will easily be the most powerful mini-PC ZOTAC has released; even the current MAGNUS EN970 doesn't actually ship with a GTX 970 as the name would imply. Rather, a GTX 960 handles discrete graphics duties according to the specs.
The MAGNUS EN980's GTX 980 GPU - mobile or not - will make this a formidable gaming system, paired as it is with a 6th-gen Intel Skylake CPU (the specific model was not mentioned in the press release; the current high-end EN970 with discrete graphics uses the Intel Core i5-5200U). Other details include support for up to four displays via HDMI and DisplayPort, USB 3.0 and 3.1 Type-C inputs, and built-in 802.11ac wireless.
We'll have to wait until Cebit (which runs from March 14 - 18) for more details. Full press release after the break.
Subject: Graphics Cards | March 9, 2016 - 07:42 PM | Scott Michaud
Tagged: amd, radeon, graphics drivers, vulkan, dx12, DirectX 12
New graphics drivers from AMD have just been published, and it's a fairly big release. First, Catalyst 16.3 adds Vulkan support to main-branch drivers, which AMD claims is conformant to the 1.0 specification. The Khronos Group website still doesn't list AMD as conforming, but I assume that they will be added shortly (rather than there being some semantic distinction between "conformant" and "fully conformant"). This is great for the platform, as we are still in the launch window of DirectX 12.
Performance has apparently increased significantly as well. This is especially true in the DirectX 12 title Gears of War: Ultimate Edition. AMD claims that the Fury X will see up to a 60% increase in that title, and the R9 380 will gain up to 44%. It's unclear how much that translates to in real-world performance, especially in terms of the stutter and jank that apparently plague that game.
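For a sense of scale, the uplift arithmetic is straightforward; a quick sketch using a hypothetical 40 FPS baseline (the baseline is my own illustration, not an AMD figure):

```python
def uplifted_fps(baseline_fps, percent_gain):
    """Apply a claimed percentage performance uplift to a baseline frame rate."""
    return baseline_fps * (1 + percent_gain / 100)

# AMD's claimed best-case gains in Gears of War: Ultimate Edition,
# applied to an assumed 40 FPS starting point
fury_x = uplifted_fps(40, 60)  # Fury X: up to 60% faster
r9_380 = uplifted_fps(40, 44)  # R9 380: up to 44% faster
```

On that assumed baseline, a 60% gain is the difference between 40 and 64 FPS, which is why the real-world behavior (frame pacing included) matters so much.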
The driver also has a few other interesting features. One that I don't quite understand is "Power Efficiency Toggle". This supposedly "allows the user to disable some power efficiency optimizations". I would assume that means keeping your GPU up-clocked under certain conditions, but I don't believe that was much of an issue for the last few generations. That said, the resolved issues section claims that some games were choppy because of core clock fluctuation, and lists this option as the solution, so maybe it was. It is only available on "select" Radeon 300 GPUs and the Fury X; that is, the Fury X specifically, not the regular Fury or the Nano. I expect Ryan will be playing around with it in the next little while.
Last of the main features: the driver adds support for XConnect, AMD's new external graphics standard. It requires a BIOS that supports external GPUs; AMD lists the Razer Blade Stealth as one such system. Also noteworthy, Eyefinity can now be enabled with just two displays, and Display Scaling can be set per-game. I avoid per-application driver settings myself, even with my Wacom tablet, but that's probably great for those who use them.
As a final note: the Ashes of the Singularity 2.0 benchmark now supports DirectFlip.
If you have a recent AMD GPU, grab the drivers from AMD's website.
Subject: Graphics Cards | March 9, 2016 - 03:30 PM | Scott Michaud
Tagged: ubuntu, graphics drivers, graphics driver, amd
AMD has been transitioning their kernel driver from the closed-source fglrx to the open-source AMDGPU driver that was announced last year. This forms the base that both closed and open user-mode drivers will utilize. For the upcoming Ubuntu 16.04 LTS, Canonical has decided to deprecate fglrx and remove it from the system upon upgrade. Users can then choose to install an AMDGPU-based one, or reinstall the Radeon driver. That will need to be done without Canonical's support, though.
It makes sense that they would choose Ubuntu 16.04 to pull the plug. This is the version that Canonical will be maintaining for the next five years, which could be a headache given that AMD has spent the last year trying to move away from fglrx. AMDGPU is a much safer target as the years roll forward. On the other hand, GPUs prior to Fiji will not have the luxury of choosing, because AMD still hasn't announced AMDGPU support for GCN 1.0 and 1.1. (Update March 9th @ 6pm: Fixed typo)
Subject: Graphics Cards | March 9, 2016 - 11:55 AM | Scott Michaud
Tagged: nvidia, graphics drivers
The last couple of days were not too great for software patches. Microsoft released a Windows 10 update that breaks 5K monitors, and NVIDIA's driver bug, mentioned in the last post, was bigger than they realized. It turns out that the issue is not isolated to multiple monitors, but rather had something to do with choosing “Express Install” in the setup screen.
In response, NVIDIA has removed 364.47 from their website. For those who want "Game Ready" drivers for games like The Division, NVIDIA has provided a 364.51 beta driver that supposedly corrects this issue. People on the forums still claim to have problems with this driver, but nothing has been confirmed yet. It's difficult to tell whether other issues exist with the drivers, whether users are having unrelated issues that they attribute to the drivers, or whether it's just a few hoaxes. (Update on March 9th @ 12:41pm: Still nothing confirmed, but one of our commenters claims to have experienced issues personally.) If you are concerned, you can roll back to 362.00.
Fortunately for me, I chose to clean install 364.47 and have not had any issues with them. I asked a representative from NVIDIA on Twitter whether I should upgrade to 364.51, and he said that a few other bugs were fixed but I shouldn't bother.
If you managed to properly install 364.47, then you should be fine staying there.