Subject: Graphics Cards | March 14, 2016 - 07:00 PM | Sebastian Peak
Tagged: crytek, CRYENGINE, amd
AMD will be the sole GPU presence in the labs at universities participating in Crytek’s VR First initiative, which “provides colleges and universities a ready-made VR solution for developers, students and researchers”, according to AMD.
AMD is leveraging the newly-announced Radeon Pro Duo graphics cards for this partnership, which lends immediate credibility to their positioning of the new GPU for VR development.
“The new labs will be equipped with AMD Radeon™ Pro Duo graphics cards with LiquidVR™ SDK, the world’s fastest VR content creator platform bridging content creation and consumption and offering an astonishing 16 teraflops of compute power. Designed to be compatible with multiple head mounted displays, including the Oculus Rift™ and HTC Vive™, AMD Radeon™ Pro Duo cards will encourage grassroots VR development around the world. The initial VR First Lab at Bahçeşehir University in Istanbul is already up and running in January of this year.”
Crytek CEO Cevat Yerli explains VR First:
“VR First labs will become key incubators for nurturing new talent in VR development and creating a global community well-prepared to innovate in this exciting and emerging field. VR experiences, harnessing the power of the CRYENGINE and developed using world-class Radeon™ hardware and software, will have the potential to fundamentally transform how we interact with technology.”
This certainly appears to be an early win for AMD in VR development, at least in the higher education sector.
Subject: Graphics Cards | March 11, 2016 - 05:03 PM | Sebastian Peak
Tagged: rumor, report, pascal, nvidia, HBM2, gtx1080, GTX 1080, gtx, GP104, geforce, gddr5x
We are expecting news of NVIDIA's next graphics card this spring, and as usual, rumors about the next GeForce card have started to appear ahead of any imminent announcement.
(Image credit: NVIDIA)
Pascal is the name we've all been hearing about, and along with this next-gen core we've been expecting HBM2 (second-gen High Bandwidth Memory). This makes today's rumor all the more interesting, as VideoCardz is reporting (via BenchLife) that a card called either the GTX 1080 or GTX 1800 will be announced, using the GP104 GPU core with 8GB of GDDR5X - and not HBM2.
The report also claims that NVIDIA CEO Jen-Hsun Huang will have an announcement for Pascal in April, which leads us to believe a shipping product based on Pascal is finally in the works. Taking in all of the information from the BenchLife report, VideoCardz has created this list to summarize the rumors (taken directly from the source link):
- Pascal launch in April
- GTX 1080/1800 launch on May 27th
- GTX 1080/1800 has GP104 Pascal GPU
- GTX 1080/1800 has 8GB GDDR5X memory
- GTX 1080/1800 has one 8-pin power connector
- GTX 1080/1800 has 1x DVI, 1x HDMI, 2x DisplayPort
- First Pascal board with HBM would be GP100 (Big Pascal)
Rumored GTX 1080 Specs (Credit: VideoCardz)
The alleged single 8-pin power connector on this GTX 1080 would place the power limit at 225W (75W from the PCIe slot plus 150W from the 8-pin connector), though the card could very well require less power. The GTX 980 is only a 165W part, with the GTX 980 Ti rated at 250W.
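For a quick sanity check on that 225W figure, here's a minimal sketch; the 75W slot and the 75W/150W connector limits come from the PCI Express specification:

```python
# Rough board power budget for the rumored single 8-pin GTX 1080.
# Per the PCI Express spec, an x16 slot supplies up to 75W, a 6-pin
# connector up to 75W, and an 8-pin connector up to 150W.
SLOT_W = 75
CONNECTOR_W = {"6-pin": 75, "8-pin": 150}

def board_power_limit(connectors):
    """Total power available from the slot plus auxiliary connectors."""
    return SLOT_W + sum(CONNECTOR_W[c] for c in connectors)

print(board_power_limit(["8-pin"]))           # 225W - the rumored GTX 1080
print(board_power_limit(["6-pin", "8-pin"]))  # 300W - a GTX 980 Ti-class board
```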
As always, only time will tell how accurate these rumors are. VideoCardz does point out that "BenchLife stories are usually correct", though they are skeptical of the report because of the GTX 1080 name (even though it would follow the current GeForce naming scheme).
Subject: Graphics Cards | March 10, 2016 - 01:27 PM | Sebastian Peak
Tagged: XConnect, thunderbolt 3, radeon, graphics card, gpu, gaming laptop, external gpu, amd
AMD has announced their new external GPU technology, called XConnect, which uses support in the latest Radeon driver to enable AMD graphics over Thunderbolt 3.
The technology showcased by AMD is powered by Razer, who partnered with AMD to come up with an expandable solution that supports up to 375W GPUs, including R9 Fury, R9 Nano, and all R9 300 series GPUs up to the R9 390X (there is no liquid cooling support, and the R9 Fury X isn't listed as being compatible). The notebook in AMD's marketing material is the Razer Blade Stealth, which offers the Razer Core external GPU enclosure as an optional accessory. (More information about these products from Razer here.) XConnect is not tied to any vendor, however; this is "generic driver" support for GPUs over Thunderbolt 3.
AMD has posted this video with the head of Global Technical Marketing, Robert Hallock, to explain the new tech and show off the Razer hardware:
The exciting part has to be the promise of an industry standard for external graphics, something many have hoped for. Not everyone will produce a product exactly like Razer has, since there is no requirement to provide a future upgrade path in a larger enclosure like this, but the important thing is that Thunderbolt 3 support is built into the newest Radeon Crimson drivers.
Here are AMD's stated system requirements for XConnect (a hypothetical pre-flight check sketch follows the list):
- Radeon Software 16.2.2 driver (or later)
- 1x Thunderbolt 3 port
- 40Gbps Thunderbolt 3 cable
- Windows 10 build 10586 (or later)
- BIOS support for external graphics over Thunderbolt 3 (check with system vendor for details)
- Certified Thunderbolt 3 graphics enclosure configured with supported Radeon R9 Series GPU
- Thunderbolt firmware (NVM) v.16
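As promised, a hypothetical pre-flight check mirroring the list above. AMD ships no such tool, and every field name and detection value below is a made-up stand-in for illustration, not a real platform API:

```python
# Hypothetical XConnect pre-flight check mirroring AMD's requirements list.
# Illustrative only: AMD does not provide such a tool, and the "sysinfo"
# fields below are invented stand-ins, not real detection APIs.
requirements = {
    "Radeon Software 16.2.2 or later": lambda s: s["radeon_driver"] >= (16, 2, 2),
    "Windows 10 build 10586 or later": lambda s: s["windows_build"] >= 10586,
    "Thunderbolt 3 port present":      lambda s: s["tb3_ports"] >= 1,
    "Thunderbolt firmware (NVM) v.16": lambda s: s["tb3_nvm"] >= 16,
    "BIOS external GPU support":       lambda s: s["bios_egpu"],
}

def check_xconnect(sysinfo):
    ok = True
    for name, test in requirements.items():
        passed = test(sysinfo)
        ok = ok and passed
        print(f"[{'PASS' if passed else 'FAIL'}] {name}")
    return ok

# Example values loosely modeled on a Razer Blade Stealth (made up):
example = {"radeon_driver": (16, 2, 2), "windows_build": 10586,
           "tb3_ports": 1, "tb3_nvm": 16, "bios_egpu": True}
print("XConnect ready!" if check_xconnect(example) else "Missing requirements.")
```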
The announcement introduces all sorts of possibilities. How awesome would it be to see a tiny solution with an R9 Nano powered by, say, an SFX power supply? Or what about a dual-GPU enclosure (possibly requiring 2 Thunderbolt 3 connections?), or an enclosure supporting liquid cooling (and the R9 Fury X)? The potential is certainly there, and with a standard in place we could see some really interesting products in the near future (or even DIY solutions). It's a promising time for mobile gaming!
Subject: Graphics Cards, Systems | March 10, 2016 - 11:38 AM | Sebastian Peak
Tagged: zotac, zbox, VR, SFF, nvidia, mini-pc, MAGNUS EN980, liquid cooling, GTX980, GTX 980, graphics, gpu, geforce
ZOTAC is teasing a new mini PC "ready for virtual reality" leading up to CeBIT 2016, happening later this month. The ZBOX MAGNUS EN980 supplants the EN970 as the most powerful version of ZOTAC's gaming mini systems, and will come equipped with no less than an NVIDIA GeForce GTX 980.
(Image via Guru3D)
Some questions remain ahead of a more formal announcement, foremost among them the version of the system's GTX 980. Is this the full desktop variant, or the GTX 980M? It seems to be the former, if we can read into the "factory-installed water-cooling solution", especially if that pertains to the GPU. In any case this will easily be the most powerful mini-PC ZOTAC has released, as even the current MAGNUS EN970 doesn't actually ship with a GTX 970 as the name would imply; rather, a GTX 960 handles discrete graphics duties according to the specs.
The MAGNUS EN980's GTX 980 GPU - mobile or not - will make this a formidable gaming system, paired as it is with a 6th-gen Intel Skylake CPU (the specific model was not mentioned in the press release; the current high-end EN970 with discrete graphics uses the Intel Core i5-5200U). Other details include support for up to four displays via HDMI and DisplayPort, USB 3.0 and 3.1 Type-C inputs, and built-in 802.11ac wireless.
We'll have to wait until CeBIT (which runs from March 14 - 18) for more details. Full press release after the break.
Subject: Graphics Cards | March 9, 2016 - 07:42 PM | Scott Michaud
Tagged: amd, radeon, graphics drivers, vulkan, dx12, DirectX 12
New graphics drivers from AMD have just been published, and it's a fairly big release. First, Catalyst 16.3 adds Vulkan support to main-branch drivers, which AMD claims is conformant to the 1.0 specification. The Khronos Group website still doesn't list AMD as conforming, but I assume they will be added shortly (rather than there being some semantic distinction between "conformant" and "fully conformant"). This is great for the platform, as we are still in the launch window of DirectX 12.
Performance has also apparently increased, significantly so in the DirectX 12 title Gears of War Ultimate Edition. AMD claims that the Fury X will see up to a 60% increase in that title, and the R9 380 will gain up to 44%. It's unclear how much of that translates to real-world performance, especially in terms of stutter and jank, which apparently plague that game.
The driver also has a few other interesting features. One that I don't quite understand is "Power Efficiency Toggle", which supposedly "allows the user to disable some power efficiency optimizations". I would assume that means keeping your GPU up-clocked under certain conditions, but I don't believe that was much of an issue for the last few generations. That said, the resolved issues section claims that some games were choppy because of core clock fluctuation, and lists this option as the solution, so maybe it was. It is only available on "select" Radeon 300 GPUs and the Fury X - that is, the Fury X specifically, not the regular Fury or the Nano. I expect Ryan will be playing around with it in the next little while.
Last of the main features, the driver adds support for XConnect, which is AMD's new external graphics standard. It requires a BIOS that supports external GPUs; AMD lists the Razer Blade Stealth as one such system. Also noteworthy, Eyefinity can now be enabled with just two displays, and Display Scaling can be set per-game. I avoid per-application driver settings, even for my Wacom tablet, but that's probably great for those who use them.
As a final note: the Ashes of the Singularity 2.0 benchmark now supports DirectFlip.
If you have a recent AMD GPU, grab the drivers from AMD's website.
Subject: Graphics Cards | March 9, 2016 - 03:30 PM | Scott Michaud
Tagged: ubuntu, graphics drivers, graphics driver, amd
AMD has been transitioning their kernel driver from the closed-source fglrx to the open-source AMDGPU driver that was announced last year. This forms the base that both closed and open user-mode drivers will utilize. For the upcoming Ubuntu 16.04 LTS, Canonical has decided to deprecate fglrx and remove it from the system upon upgrade. Users can then choose to install an AMDGPU-based driver, or reinstall the Radeon driver. That will need to be done without Canonical's support, though.
It makes sense that they would choose Ubuntu 16.04 to pull the plug. This is the version that Canonical will be maintaining for the next five years, which could become a headache when AMD has spent the last year trying to retire fglrx. AMDGPU is a much safer target as the years roll forward. On the other hand, GPUs prior to Fiji will not have the luxury of choosing, because AMD still hasn't announced AMDGPU support for GCN 1.0 and 1.1. (Update March 9th @ 6pm: Fixed typo.)
Subject: Graphics Cards | March 9, 2016 - 11:55 AM | Scott Michaud
Tagged: nvidia, graphics drivers
The last couple of days were not too great for software patches. Microsoft released a Windows 10 update that breaks 5K monitors, and NVIDIA's driver bug, mentioned in the last post, was bigger than they realized. It turns out that the issue is not isolated to multiple monitors, but rather had something to do with choosing “Express Install” in the setup screen.
In response, NVIDIA has removed 364.47 from their website. For those who want "Game Ready" drivers for games like "The Division," NVIDIA has provided a 364.51 beta driver that supposedly corrects this issue. People on the forums still claim to have problems with this driver, but nothing has been confirmed yet. It's difficult to tell whether other issues exist with the drivers, whether users are having unrelated issues that they attribute to the drivers, or if it's just a few hoaxes. ((Update on March 9th @ 12:41pm: Still nothing confirmed, but one of our commenters claims they've experienced issues personally.)) If you are concerned, then you can roll back to 362.00.
Fortunately for me, I chose to clean install 364.47 and have not had any issues with them. I asked a representative from NVIDIA on Twitter whether I should upgrade to 364.51, and he said that a few other bugs were fixed but I shouldn't bother.
If you managed to properly install 364.47, then you should be fine staying there.
Subject: General Tech, Graphics Cards | March 8, 2016 - 10:18 PM | Ryan Shrout
Tagged: video, polygon.com, ben kuchera, VR, htc, vive, Oculus, rift
During our 12-hour live streaming event, cleverly titled "Streaming Out Loud", we invited Ben Kuchera from Polygon.com to stop in and talk about a subject he is very passionate about: virtual reality. Ben has been a VR enthusiast since the beginning, getting a demo of the first Rift prototype from John Carmack himself. He brought his HTC Vive Pre unit to the office for some show and tell, and answered questions about the experiences he has had so far, hardware requirements, and much more.
Subject: Graphics Cards | March 7, 2016 - 06:15 PM | Scott Michaud
Tagged: vulkan, nvidia, graphics drivers, game ready
This new driver from NVIDIA brings Vulkan support to their current, supported branch. This is particularly interesting for me, because the Vulkan branch used to pre-date fixes for Adobe Creative Cloud, which meant that things like "Export As..." in Photoshop CC didn't work and After Effects CC would crash. The driver is also WHQL-certified, and it rolls in all of the "Game Ready" fixes and optimizations released since ~October, which would be mostly new for Vulkan's branch.
... This is going to be annoying to temporarily disable...
Speaking of which, the GeForce 364.47 driver is itself classified as "Game Ready". The four titles optimized with this release are Tom Clancy's The Division, Need For Speed, Hitman, and Ashes of the Singularity. If you are interested in playing those games, then this driver is what NVIDIA recommends you use.
Note that an installation bug has been reported, however. When installing with multiple monitors, NVIDIA suggests that you disable all but one during the setup process, but you can safely re-enable them after. For me, with four monitors and a fairly meticulous desktop icon layout, this was highly annoying, but something I've had to deal with over time (especially the last two, beta Vulkan drivers). It's probably a good idea to close all applications and screenshot your icons before running the installer.
Subject: Graphics Cards | March 4, 2016 - 04:48 PM | Sebastian Peak
Tagged: PCIe power, PCI Express, nvidia, GTX 950 2G, gtx 950, graphics card, gpu, geforce, asus, 75W
ASUS has released a new version of the GTX 950 called the GTX 950 2G, and the interesting part isn't what's been added, but what was taken away; namely, the PCIe power requirement.
When NVIDIA announced the GTX 950 (which Ryan reviewed here) it carried a TDP of 90W, which prevented it from running without a PCIe power connector. The GTX 950 was (seemingly) the replacement for the GTX 750, which didn't require anything beyond motherboard power via the PCIe slot, and the same held true for the more powerful GTX 750 Ti. Without the need for PCIe power, the GTX 750 Ti became our (and many others') default recommendation for turning any PC into a gaming machine (an idea we just happened to cover in depth here).
Here's a look at the specs from ASUS for the GTX 950 2G (a quick bandwidth check follows the list):
- Graphics Engine: NVIDIA GeForce GTX 950
- Interface: PCI Express 3.0
- Video Memory: GDDR5 2GB
- CUDA Cores: 768
- Memory Clock: 6610 MHz
- Memory Interface: 128-bit
- Engine Clock:
  - Gaming Mode (Default): GPU Boost Clock 1190 MHz, GPU Base Clock 1026 MHz
  - OC Mode: GPU Boost Clock 1228 MHz, GPU Base Clock 1051 MHz
- Display Outputs: HDMI 2.0, DisplayPort, DVI
- Power Consumption: Up to 75W, no additional PCIe power required
- Dimensions: 8.3 x 4.5 x 1.6 inches
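That memory clock works out to the same bandwidth as a stock GTX 950; here's a quick back-of-the-envelope check using only the numbers above:

```python
# Back-of-the-envelope memory bandwidth from the ASUS spec sheet above.
# The GDDR5 "memory clock" is listed as an effective data rate, so
# bandwidth is simply data rate (transfers/s) x bus width (bytes).
effective_rate_mhz = 6610   # effective GDDR5 data rate from the spec list
bus_width_bits = 128

bandwidth_gbs = effective_rate_mhz * 1e6 * (bus_width_bits / 8) / 1e9
print(f"{bandwidth_gbs:.1f} GB/s")  # ~105.8 GB/s, in line with a stock GTX 950
```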
Whether this model has any relation to the rumored "GTX 950 SE/LP" remains to be seen (other than power, this card appears to have stock GTX 950 specs), but the option of adding a GPU without concern over power requirements makes this a very attractive upgrade for older builds or OEM PCs, depending on cost.
The full model number is ASUS GTX950-2G, and a listing is up on Amazon, though seemingly only a placeholder at the moment. (Link removed. The listing was apparently for an existing GTX 950 product.)
Subject: Graphics Cards | March 3, 2016 - 03:00 PM | Ryan Shrout
Tagged: uwp, radeon, dx12, amd
AMD's Robert Hallock, frequenter of the PC Perspective live streams and a favorite of the team here, is doing an AMAA on reddit today. While you can find some excellent information and views from Robert in that Q&A session, two particular answers stood out to me.
Asked by user CataclysmZA: Can you comment on the recent developments regarding Ashes of the Singularity and DirectX 12 in PC Perspective and Extremetech's tests? Will changes in AMD's driver to include FlipEx support fix the framerate issues and allow high-refresh monitor owners to enjoy their hardware fully? http://www.pcper.com/reviews/General-Tech/PC-Gaming-Shakeup-Ashes-Singularity-DX12-and-Microsoft-Store
Answer from Robert: We will add DirectFlip support shortly.
Well, there you have it. This is the first official confirmation I have from AMD that its driver was in fact causing the differences in behavior between Radeon and GeForce cards in Ashes of the Singularity last week. It appears a new driver is incoming (sometime) that will enable DirectFlip / FlipEx, allowing exclusive full screen modes in DX12 titles. Some of our fear of the unknown can likely be resolved - huzzah!
Ashes of the Singularity wouldn't enter exclusive full screen mode on AMD Radeon hardware.
Another question also piqued my interest:
Asked by user CataclysmZA: Can you comment on how FreeSync is affected by the way games sold through the Windows Store run in borderless windowed mode?
Answer from Robert: This article discusses the issue thoroughly. Quote: "games sold through Steam, Origin [and] anywhere else will have the ability to behave with DX12 as they do today with DX11."
While not exactly spelling it out, this answer seems to indicate that for the time being, AMD doesn't think FreeSync will work with Microsoft Store sold games in the forced borderless windowed mode. NVIDIA has stated that G-Sync works in some scenarios with the new Gears of War (a Universal Windows Platform app), but it seems they too have issues.
As more information continues to come in, from whatever sources we can validate, I'll keep you updated!
Subject: Graphics Cards | March 3, 2016 - 11:54 AM | Ryan Shrout
Tagged: uwp, uwa, universal windows platform, microsoft, full screen, dx12, DirectX 12
The second release of Ashes of the Singularity's DX12 benchmark mode brought plenty of debate and discussion: questions about full screen capabilities on AMD hardware, and arguments over the impact the Microsoft Store and Universal Windows Platform could have on PC gaming. So we went to the source of the debate to try and get some feedback. Microsoft was willing to talk about the issues that arose from this most recent storm, though honestly, what it is willing to say on the record today is limited.
When asked specifically about the UWP and PC games made available on the Windows 10 Store, Microsoft reiterated its desire to work with gamers and the community to find what works.
“UWP (Universal Windows Platform) allows developers to create experiences that are easily deployed across all Windows 10 devices, from PCs to tablets to phones to Xbox One. When it comes to a UWP game on Windows 10 PCs, we’re early in our journey. We’re listening to the feedback from the community – multiple GPUs, SLI, crossfire, v-sync, etc. We’re embracing the feedback and working to ensure gamers on Windows 10 have a great experience. We’ll have more to discuss in the coming months.” – a Microsoft spokesperson
It's good to know that Microsoft is listening to the media and gamers, and seems willing to make changes based on feedback. It remains to be seen, though, how much of that feedback gets implemented, and in what time frame.
Universal Windows Platform
One particular fear for some gamers is that Microsoft would attempt to move to the WDDM compositing model not just for games sold in the Windows Store, but for all games that run on the OS. I asked Microsoft directly:
To answer your question, can we assume that those full screen features that work today with DX12 will work in the future as well – yes.
This should ease the worries of people thinking the very worst for Windows and DX12 gaming going forward. As long as DX12 allows for games to enter into an exclusive full screen mode, like the FlipEx option we discussed in a previous story, games sold through Steam, Origin and anywhere else will have the ability to behave with DX12 as they do today with DX11.
Windows 10 Store
I have some meetings set up with various viewpoints on this debate for GDC in a couple of weeks, so expect more then!
Subject: Graphics Cards | March 2, 2016 - 05:30 PM | Sebastian Peak
Tagged: nvidia, geforce, game ready, 362.00 WHQL
The new Far Cry game is out (Far Cry Primal), and for NVIDIA graphics card owners this means a new GeForce Game Ready driver. The 362.00 WHQL-certified driver provides "performance optimizations and a SLI profile" for the new game, and is now available via GeForce Experience as well as the manual driver download page.
(Image credit: Ubisoft)
The 362.00 WHQL driver also supports the new Gears of War: Ultimate Edition, a remastered version of the 2007 PC release that includes Windows 10-only enhancements such as 4K resolution support and unlocked frame rates. (Why these "need" to be Windows 10 exclusives can be explained by checking the name of the game's publisher: Microsoft Studios.)
(Image credit: Microsoft)
Here’s a list of what’s new in version 362.00 of the driver:
- Added Beta support on GeForce GTX GPUs for external graphics over Thunderbolt 3. Supported GPUs include all GTX 900 series, Titan X, and the GeForce GTX 750 and 750 Ti.
- As of Windows 10 November Update, Fermi GPUs now use WDDM 2.0 in single GPU configurations.
For multi-GPU configurations, WDDM usage is as follows:
- In non-SLI multi-GPU configurations, Fermi GPUs use WDDM 2.0. This includes configurations where a Fermi GPU is used with Kepler or Maxwell GPUs.
- In SLI mode, Fermi GPUs still use WDDM 1.3.
Application SLI Profiles
Added or updated the following SLI profiles:
- Assassin's Creed Syndicate - SLI profile changed (with driver code as well) to make the application scale better
- Bless - DirectX 9 SLI profile added, SLI set to SLI-Single
- DayZ - SLI AA and NVIDIA Control Panel AA enhance disabled
- Dungeon Defenders 2 - DirectX 9 SLI profile added
- Elite Dangerous - 64-bit EXE added
- Hard West - DirectX 11 SLI profile added
- Metal Gear Solid V: The Phantom Pain - multiplayer EXE added to profile
- Need for Speed - profile EXEs updated to support trial version of the game
- Plants vs Zombies Garden Warfare 2 - SLI profile added
- Rise of the Tomb Raider - profile added
- Sebastien Loeb Rally Evo - profile updated to match latest app behavior
- Tom Clancy's Rainbow Six: Siege - profile updated to match latest app behavior
- Tom Clancy's The Division - profile added
- XCOM 2 - SLI profile added (including necessary code change)
The "beta support on GeForce GTX GPUs for external graphics over Thunderbolt 3" is certainly interesting addition, and one that could eventually lead to external solutions for notebooks, coming on the heels of AMD teasing their own standardization of external GPUs.
The full release 361 (GeForce 362.00) notes can be viewed here (warning: PDF).
Subject: Graphics Cards, Processors | February 29, 2016 - 06:48 PM | Scott Michaud
Tagged: tesla motors, tesla, SoC, Peter Bannon, Jim Keller
When we found out that Jim Keller had joined Tesla, we were a bit confused. He is highly skilled in processor design, and he moved to a company that does not design processors. Kind of weird, right? There are two possibilities that leap to mind: either he wanted to try something new in life and Elon Musk hired him for his general management skills, or Tesla wants to get more involved in the production of their SoCs, possibly even designing their own.
Now Peter Bannon, who was a colleague of Jim Keller at Apple, has been hired by Tesla Motors. Chances are, the two of them were not independently interested in an abrupt career change that just happened to lead them to the same company; that seems highly unlikely, to say the least. So it appears that Tesla Motors wants experienced chip designers in house. What for? We don't know. This is a lot of talent just to look over the shoulders of NVIDIA and other SoC partners to make sure Tesla has the upper hand in negotiations. Jim Keller is at Tesla as their "Vice-President of Autopilot Hardware Engineering." We don't know what Peter Bannon's title will be.
And then, if Tesla Motors does get into creating their own hardware, we wonder what they will do with it. The company has a history of open development and releasing patents (etc.) to the public. That said, SoC design is a highly patent-encumbered field, depending on what, specifically, they end up doing, and we have no idea about that.
Subject: Graphics Cards | February 29, 2016 - 02:06 PM | Scott Michaud
Tagged: nvidia, amd, AIB, pc gaming
Jon Peddie Research, a market analysis firm that specializes in PC hardware, has compiled another report about add-in board (AIB) sales. There are a few interesting aspects to this report. First, shipments of enthusiast AIBs (ie: discrete GPUs) are up, not by a handful of percent, but two-fold. Second, AMD's GPU market share climbed once again, from 18.8% up to 21.1%.
This image seems to contradict their report, which claims the orange line rose from 44 million in 2014 to 50 million in 2015. I'm not sure where the error is, so I didn't mention it in the news post.
Image Credit: JPR
The report claims that neither AMD nor NVIDIA released a "killer new AIB in 2015." That... depends on how you look at it. They're clearly referring to the upper mainstream, which sits just below the flagships and contributes a large chunk of enthusiast sales. If they were including the flagships, then they ignored the Titan X, 980 Ti, and Fury line of GPUs, which would just be silly. Since they were counting shipped units, though, it makes sense to neglect those SKUs, because they are priced well above the inflection point in actual adoption.
Image Credit: JPR
But that's not the only "well... sort-of" with JPR's statement. Unlike most generations, the GTX 970 and 980 launched late in 2014, rather than on their usual spring-ish cadence; apart from the GeForce GTX 580, that cadence has held since the GeForce 9000-series. As such, those late-2014 launches could influence 2015 shipments much like an early-2015 product line would in a typical year. Add a bit of VR hype, plus the now-common knowledge that consoles are lower powered than PCs this generation, and these numbers make a little more sense.
Even still, a 100% increase in enthusiast AIB shipments is quite interesting. This doesn't only mean that game developers can target higher-end hardware. The same hardware used to consume content can be used to create it, which boosts both sides of the artist / viewer conversation in art. Beyond its benefits to society, this could snowball into more GPU adoption going forward.
Subject: Graphics Cards, Mobile, Shows and Expos | February 23, 2016 - 08:46 PM | Scott Michaud
Tagged: raytracing, ray tracing, PowerVR, mwc 16, MWC, Imagination Technologies
For the last couple of years, Imagination Technologies has been pushing hardware-accelerated ray tracing. One of the major problems in computer graphics is knowing what geometry and material corresponds to a specific pixel on the screen. Several methods exist, although typical GPUs crush a 3D scene into the virtual camera's 2D space and do a point-in-triangle test on it. Once they know where in the triangle the pixel is, if it is in the triangle at all, it can be colored by a pixel shader.
Another method is casting light rays into the scene and assigning a color based on the material each ray lands on. This is ray tracing, and it has a few advantages. First, it is much easier to handle reflections, transparency, shadows, and other effects that require information beyond what the affected geometry and its material provide. There are usually ways around this without resorting to ray tracing, but they each have their own trade-offs. Second, it can be more efficient for certain data sets. Rasterization, since it's based around a "where in a triangle is this point" algorithm, needs geometry to be made up of polygons.
It also has the appeal of being what the real world sort-of does (assuming we don't need to model Gaussian beams). That doesn't necessarily mean anything, though.
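To make the contrast with rasterization concrete, here is a minimal sketch of the ray-triangle intersection test at the core of ray tracing (the standard Möller-Trumbore algorithm in plain Python, not anything resembling Imagination's actual hardware):

```python
# Minimal Möller-Trumbore ray/triangle intersection - the core test that
# dedicated ray tracing hardware accelerates. Real implementations run
# this (usually with a BVH to cull triangles) enormous numbers of times.
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def sub(a, b):
    return (a[0]-b[0], a[1]-b[1], a[2]-b[2])

def ray_hits_triangle(origin, direction, v0, v1, v2, eps=1e-8):
    """Return the distance t along the ray to the triangle, or None on a miss."""
    e1, e2 = sub(v1, v0), sub(v2, v0)
    p = cross(direction, e2)
    det = dot(e1, p)
    if abs(det) < eps:                 # ray is parallel to the triangle's plane
        return None
    inv_det = 1.0 / det
    t_vec = sub(origin, v0)
    u = dot(t_vec, p) * inv_det        # first barycentric coordinate
    if u < 0.0 or u > 1.0:
        return None
    q = cross(t_vec, e1)
    v = dot(direction, q) * inv_det    # second barycentric coordinate
    if v < 0.0 or u + v > 1.0:
        return None
    t = dot(e2, q) * inv_det
    return t if t > eps else None      # hit only if in front of the origin

# A ray straight down the z-axis, hitting a triangle that sits at z = 5:
print(ray_hits_triangle((0, 0, 0), (0, 0, 1),
                        (-1, -1, 5), (1, -1, 5), (0, 1, 5)))  # 5.0
```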
At Mobile World Congress, Imagination Technologies once again showed off their ray tracing hardware, embodied in the PowerVR GR6500 GPU. This graphics processor has dedicated circuitry to calculate rays, and they use it in a couple of different ways. They presented several demos that modified Unity 5 to take advantage of the ray tracing hardware. One particularly interesting example was a quick, seven-second video that added ray traced reflections atop an otherwise rasterized scene.
It was a little too smooth, creating reflections that were too glossy, but that could probably be downplayed in the material. ((Update: Feb 24th @ 5pm: Car paint is actually that glossy. It's a different issue.)) Back when I was working on a GPU-accelerated software renderer, before Mantle, Vulkan, and DirectX 12, I was hoping to use OpenCL-based ray traced highlights on otherwise idle GPUs, if I didn't have any other purposes for them. Now, though, those GPUs can be exposed to graphics APIs directly, so they might not be so idle.
The downside of dedicated ray tracing hardware is that, well, the die area could have been used for something else. Extra shaders, for compute, vertex, and material effects, might be more useful in the real world... or maybe not. Add in the fact that fixed-function circuitry already exists for rasterization, and you're left balancing gain against cost.
It could be cool, but it has its trade-offs, like anything else.
Subject: Graphics Cards | February 22, 2016 - 06:03 PM | Ryan Shrout
Tagged: vive, valve, steamvr, steam, rift, performance test, Oculus, htc
Though I am away from my stacks of hardware at the office attending Mobile World Congress in Barcelona, Valve dropped a bomb on us today in the form of a new hardware performance test that gamers can use to determine if they are ready for the SteamVR revolution. The aptly named "SteamVR Performance Test" is a free title available through Steam that any user can download and run to get a report card on their installed hardware. No VR headset required!
And unlike the Oculus Compatibility Checker, the application from Valve runs actual game content to measure your system. Oculus' app only checks your installed hardware against its certification list, without measuring your system's performance in any way. (Overclockers and users with Ivy Bridge Core i7 processors have been reporting failed results on the Oculus test for some time.)
The SteamVR Performance Test runs a set of scenes from the Aperture Science Robot Repair demo, an experience developed directly for the HTC Vive and one that I was able to run through during CES last month. Valve is using a very interesting new feature called "dynamic fidelity" that adjusts the image quality of the game to avoid dropped frames and frame rates under 90 FPS, maintaining a smooth and comfortable experience for the VR user. Though it is the first time I have seen it used, it sounds similar to what John Carmack did with the id Tech 5 engine, attempting to balance performance on hardware while maintaining a targeted frame rate.
The technology could be a perfect match for VR content, where holding the 90 FPS target is more important than visual fidelity (in nearly all cases). I am curious to see how Valve may or may not pursue and push this technology in its own games, and for the Vive / Rift in general. I have some questions pending with them, so we'll see what they come back with.
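Valve hasn't published how dynamic fidelity works internally, but the general idea of a quality feedback loop is easy to sketch. Everything below is hypothetical except the 90 FPS target:

```python
# Hypothetical "dynamic fidelity" controller: nudge render quality down when
# a frame misses the 90 FPS budget, and back up when there is headroom.
# Valve's actual algorithm is unpublished; this only illustrates the idea.
TARGET_FRAME_MS = 1000.0 / 90.0      # ~11.1 ms per frame at 90 FPS

def adjust_fidelity(scale, frame_ms, step=0.05, lo=0.5, hi=1.5):
    """Return a new resolution/quality scale based on the last frame time."""
    if frame_ms > TARGET_FRAME_MS:           # missed budget: drop quality quickly
        scale -= step * 2
    elif frame_ms < TARGET_FRAME_MS * 0.8:   # plenty of headroom: raise it slowly
        scale += step
    return max(lo, min(hi, scale))

scale = 1.0
for frame_ms in (9.0, 10.5, 12.3, 13.0, 10.0, 8.5):  # simulated frame times
    scale = adjust_fidelity(scale, frame_ms)
    print(f"frame {frame_ms:4.1f} ms -> fidelity scale {scale:.2f}")
```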
A result for a Radeon R9 Fury provided by AMD
Valve's test offers a very simple three-tiered breakdown for your system: Not Ready, Capable, and Ready. For a more detailed explanation you can expand the data to see metrics like the number of frames that were CPU bound, frames below the very important 90 FPS mark, and how many frames were tested in the run. The Average Fidelity metric is the number we are reporting below; it essentially tells us "how much quality" the test estimates you can run while maintaining that 90 FPS mark. What else that fidelity result means is still unknown - but again, we are trying to find out. The short answer is that the higher that number goes, the better off you are, and the more demanding game content you'll be able to run at acceptable performance levels. At least, according to Valve.
Because I am not at the office to run my own tests, I decided to write up this story using results from a third party. That third party is AMD - let the complaining begin. Obviously this does NOT count as independent testing but, in truth, it would be hard to cheat on these results unless you went WAY out of your way to change control panel settings, etc. The demo is self-running, and AMD detailed the hardware and drivers used in the results.
- Intel i7-6700K
- 2x4GB DDR4-2666 RAM
- Z170 motherboard
- Radeon Software 16.1.1
- NVIDIA driver 361.91
- Win10 64-bit
Average Fidelity scores (higher is better):
- 2x Radeon R9 Nano: 11.0
- GeForce GTX 980 Ti: 11.0
- Radeon R9 Fury X: 9.6
- Radeon R9 Fury: 9.2
- GeForce GTX 980: 8.1
- Radeon R9 Nano: 8.0
- Radeon R9 390X: 7.8
- Radeon R9 390: 7.0
- GeForce GTX 970: 6.5
These results were provided by AMD in an email to the media. Take that for what you will until we can run our own tests.
First, the GeForce GTX 980 Ti is the highest performing single GPU tested, with a score of 11 - because of course it goes to 11. The same score is reported for the multi-GPU configuration with two Radeon R9 Nanos, so clearly we are seeing a ceiling for this version of the SteamVR Performance Test. With a single R9 Nano scoring 8.0, that is only a 37.5% scaling improvement, but I think we are limited by the test in this case. Either way, it's great news to see that AMD has affinity multi-GPU up and running, utilizing one GPU for each eye's rendering. (AMD pointed out that users who want to test the multi-GPU implementation will need to add the -multigpu launch option.) I still need to confirm if GeForce cards scale accordingly. UPDATE: Ken at the office ran a quick check with a pair of GeForce GTX 970 cards and the same -multigpu option and saw no scaling improvements. It appears NVIDIA has work to do here.
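For reference, that scaling figure falls straight out of the two Nano scores in the list above:

```python
# Multi-GPU scaling implied by the Average Fidelity scores above.
single_nano, dual_nano = 8.0, 11.0
print(f"{(dual_nano / single_nano - 1) * 100:.1f}% scaling")  # 37.5%, likely capped by the test's 11.0 ceiling
```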
Moving down the stack, it's clear why AMD was so excited to send out these early results. The R9 Fury X and R9 Fury both come out ahead of the GeForce GTX 980, while the R9 Nano, R9 390X, and R9 390 all post better scores than NVIDIA's GeForce GTX 970. This comes as no surprise - AMD's Radeon parts tend to offer better performance per dollar in benchmarks and many games.
There is obviously a lot more to consider than the results this SteamVR Performance Test provides when picking hardware for a VR system, but we are glad to see Valve out in front of the many, many questions that are flooding forums across the web. Is your system ready??
Subject: Graphics Cards | February 19, 2016 - 07:11 PM | Scott Michaud
Tagged: vulkan, linux
Update: Venn continued to benchmark and came across a few extra discoveries. For example, he disabled VDPAU and jumped to 89.6 FPS in OpenGL and 80.6 FPS in Vulkan. Basically, be sure to read the whole thread; it might even be updated further. Original post below (unless otherwise stated).
On Windows, the Vulkan patch of The Talos Principle leads to a net loss in performance relative to DirectX 11. This is to be expected when a studio like Croteam optimizes their game for existing APIs, then ports all that work to a new, very different standard with a single developer and three months of work. They explicitly state, multiple times, not to expect good performance.
Image Credit: Venn Stone of LinuxGameCast
On Linux, Venn Stone of LinuxGameCast found different results. With everything maxed out at 1080p, his OpenGL benchmark reports 38.2 FPS, while the Vulkan version raises this to an average of 66.5 FPS. Granted, this was with an eight-core AMD FX-8150, which launched with the Bulldozer architecture back in 2011. It did not have the fastest single-threaded performance, falling behind even AMD's own, earlier Phenom II parts in that regard.
Still, this is a scenario that allowed the game to scale across Bulldozer's multiple cores and circumvent a lot of the driver overhead in OpenGL. It resulted in a roughly 75% increase in performance, at least for people who pair a GeForce 980 Ti ((Update: The Ti was a typo. Venn uses a standard GeForce GTX 980.)) with an eight-core Bulldozer CPU from 2011.
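For reference, the headline figure comes straight from Venn's two averages:

```python
# The headline speed-up, computed from Venn's reported averages.
opengl_fps, vulkan_fps = 38.2, 66.5
print(f"{(vulkan_fps / opengl_fps - 1) * 100:.0f}% faster in Vulkan")  # ~74%, i.e. the "roughly 75%" above
```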
Subject: Graphics Cards | February 16, 2016 - 12:01 PM | Sebastian Peak
Tagged: rumor, report, nvidia, Maxwell 2.0, GTX 950 SE, GTX 950 LP, gtx 950, gtx 750, graphics card, gpu
A report from VideoCardz.com claims that NVIDIA is working on another GTX 950 graphics card, but not the 950 Ti you might have expected.
Reference GTX 950 (Image credit: NVIDIA)
While the GTX 750 Ti was succeeded by the GTX 950 in August of last year, the higher specs of the new GPU came at the cost of a higher TDP (90W vs. 60W). The newly rumored GTX 950, which might be called either 950 SE or 950 LP according to the report, would be a lower power version of the GTX 950, and would actually have a lot more in common with the outgoing GTX 750 Ti than with the plain GTX 750, as we can see from this chart:
(Image credit: VideoCardz)
As you can see, the GTX 750 Ti is based on GM107 (Maxwell 1.0) and has 640 CUDA cores, 40 TUs, and 16 ROPs, and it operates at a 1020 MHz base clock and 1085 MHz boost clock. The reported specs of this new GTX 950 SE/LP are nearly identical, though it is based on GM206 (Maxwell 2.0) and offers greater memory bandwidth (at slightly higher power consumption).
The VideoCardz report was sourced from Expreview, which claimed that this GTX 950 SE/LP product would arrive at some point next month. This report is a little more vague than some of the rumors we see, but it could very well be that NVIDIA has a planned replacement for the remaining Maxwell 1.0 products on the market. I would have personally expected to see a "Ti" product before any "SE/LP" version of the GTX 950, and the reported name sounds more like an OEM product than a retail part. We will have to wait and see if this report is accurate.