Subject: Graphics Cards | March 24, 2016 - 02:04 PM | Jeremy Hellstrom
Tagged: Ubuntu 16.04, linux, vulkan, amd, nvidia
Last week AMD released a new GPU-PRO beta driver stack and this Monday NVIDIA released the 364.12 beta driver, both of which support Vulkan, which meant Phoronix had a lot of work to do. Up for testing were the GTX 950, 960, 970, 980, and 980 Ti as well as the R9 Fury, 290, and 285. Logically, they used The Talos Principle as their test; the results compare not only the cards but also the performance delta between OpenGL and Vulkan, and the review finishes up with several OpenGL benchmarks to see if the new drivers brought any performance improvements. The results look good for Vulkan, which beats OpenGL across the board, as you can see in the review.
"Thanks to AMD having released their new GPU-PRO "hybrid" Linux driver a few days ago, there is now Vulkan API support for Radeon GPU owners on Linux. This new AMD Linux driver holds much potential and the closed-source bits are now limited to user-space, among other benefits covered in dozens of Phoronix articles over recent months. With having this new driver in hand plus NVIDIA promoting their Vulkan support to the 364 Linux driver series, it's a great time for some benchmarking. Here are OpenGL and Vulkan atop Ubuntu 16.04 Linux for both AMD Radeon and NVIDIA GeForce graphics cards."
Here are some more Graphics Card articles from around the web:
Subject: General Tech | March 9, 2016 - 01:14 PM | Jeremy Hellstrom
Tagged: linux, Fedora, ubuntu, debian, CentOS, opensuse, Antergos, Sabayon, Void Linux, Zenwalk, KaOS, Clear, Alpine, Skylake
Phoronix have just wrapped up a marathon benchmarking session comparing 15 different flavours of Linux on a system with a Skylake-based Xeon E3-1280 v5 and an MSI Radeon R7 370. They tested a long list of programs, from SQLite through OpenGL-based games and multi-threaded ray-tracing benchmarks. They wrap up the review with a table showing all the results in an easy-to-read format for you to reference when choosing your preferred Linux distro. If you know what tasks your machine will be assigned, you can see which of these 15 distros will offer you the best performance, as not every Linux machine is used for the same purpose.
"Succeeding January's 10-way Linux distribution battle is now a 15-way Linux distribution comparison on an Intel Xeon "Skylake" system with Radeon R7 graphics. Distributions part of this Linux OS performance showdown include Fedora, Ubuntu, Debian, CentOS, OpenSUSE, Antergos, Sabayon, Void Linux, Zenwalk, KaOS, Clear Linux, and Alpine Linux."
Here is some more Tech News from around the web:
Subject: Graphics Cards | February 19, 2016 - 07:11 PM | Scott Michaud
Tagged: vulkan, linux
Update: Venn continued to benchmark and came across a few extra discoveries. For example, he disabled VDPAU and jumped to 89.6 FPS in OpenGL and 80.6 FPS in Vulkan. Basically, be sure to read the whole thread; it may be updated further still. Original post below (unless otherwise stated).
On Windows, the Vulkan patch of The Talos Principle leads to a net loss in performance, relative to DirectX 11. This is to be expected when a developer like Croteam optimizes their game for existing APIs, and tries to port all that work to a new, very different standard, with a single developer and three months of work. They explicitly state, multiple times, not to expect good performance.
Image Credit: Venn Stone of LinuxGameCast
On Linux, Venn Stone of LinuxGameCast found different results. With everything maxed out at 1080p, his OpenGL benchmark reports 38.2 FPS, while Vulkan raises this to an average of 66.5 FPS. Granted, this was with an eight-core AMD FX-8150, which launched with the Bulldozer architecture back in 2011. It did not have the fastest single-threaded performance, falling behind even AMD's own earlier Phenom II parts in that regard.
Still, this is a scenario that allowed the game to scale across Bulldozer's multiple cores and circumvent a lot of the driver overhead in OpenGL. It resulted in a roughly 75% increase in performance, at least for people who pair a GeForce 980 Ti ((Update: The Ti was a typo. Venn uses a standard GeForce GTX 980.)) with an eight-core Bulldozer CPU from 2011.
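The quoted improvement is easy to sanity-check from the reported averages; a quick calculation (rounding accounts for the ~75% figure in the text):

```python
# Verify the speed-up from the reported benchmark averages.
opengl_fps = 38.2
vulkan_fps = 66.5
increase = (vulkan_fps / opengl_fps - 1) * 100
print(round(increase, 1))  # 74.1
```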
Caught Up to DirectX 12 in a Single Day
I'm not just talking about the specification. Members of the Khronos Group have also released compatible drivers, SDKs and tools to support them, conformance tests, and a proof-of-concept patch for Croteam's The Talos Principle. To reiterate, this is not a soft launch. The API, and its entire ecosystem, is out and ready for the public on Windows (at least 7+ at launch but a surprise Vista or XP announcement is technically possible) and several distributions of Linux. Google will provide an Android SDK in the near future.
I'm going to editorialize for the next two paragraphs. There was a concern that Vulkan would be too late. The thing is, as of today, Vulkan is now just as mature as DirectX 12. Of course, that could change at a moment's notice; we still don't know how the two APIs are being adopted behind the scenes. A few DirectX 12 titles are planned to launch in a few months, but no full, non-experimental, non-early access game currently exists. Each time I say this, someone links the Wikipedia list of DirectX 12 games. If you look at each entry, though, you'll see that all of them are either: early access, awaiting an unreleased DirectX 12 patch, or using a third-party engine (like Unreal Engine 4) that only lists DirectX 12 as an experimental preview. No full, released, non-experimental DirectX 12 game exists today. Besides, if the latter counts, then you'll need to accept The Talos Principle's proof-of-concept patch, too.
But again, that could change. While today's launch speaks well to the Khronos Group and the API itself, it still needs to be adopted by third party engines, middleware, and software. These partners could, like the Khronos Group before today, be privately supporting Vulkan with the intent to flood out announcements; we won't know until they do... or don't. With the support of popular engines and frameworks, dependent software really just needs to enable it. This has not happened for DirectX 12 yet, and, now, there doesn't seem to be anything keeping it from happening for Vulkan at any moment. With the Game Developers Conference just a month away, we should soon find out.
But back to the announcement.
Vulkan-compatible drivers are launching today across multiple vendors and platforms, but I do not have a complete list. On Windows, I was told to expect drivers from NVIDIA for Windows 7, 8.x, 10 on Kepler and Maxwell GPUs. The standard is compatible with Fermi GPUs, but NVIDIA does not plan on supporting the API for those users due to its low market share. That said, they are paying attention to user feedback and they are not ruling it out, which probably means that they are keeping an open mind in case some piece of software gets popular and depends upon Vulkan. I have not heard from AMD or Intel about Vulkan drivers as of this writing, one way or the other. They could even arrive day one.
On Linux, NVIDIA, Intel, and Imagination Technologies have submitted conformant drivers.
Drivers alone do not make a hard launch, though. SDKs and tools have also arrived, including the LunarG SDK for Windows and Linux. LunarG is a company co-founded by Lens Owen, who had a previous graphics software company that was purchased by VMware. LunarG is backed by Valve, who also backed Vulkan in several other ways. The LunarG SDK helps developers validate their code, inspect what the API is doing, and otherwise debug. Even better, it is also open source, which means the community can rapidly enhance it, even though it is in a releasable state as-is. RenderDoc, the open-source graphics debugger by Crytek, will also add Vulkan support. ((Update (Feb 16 @ 12:39pm EST): Baldur Karlsson has just emailed me to let me know that it was a personal project at Crytek, not a Crytek project in general, and their GitHub page is much more up-to-date than the linked site.))
The major downside is that Vulkan (like Mantle and DX12) isn't simple.
These APIs are verbose and very different from previous ones, which requires more effort.
Image Credit: NVIDIA
There really isn't much to say about the Vulkan launch beyond this. What graphics APIs really try to accomplish is standardizing signals that enter and leave video cards, such that the GPUs know what to do with them. For the last two decades, we've settled on an arbitrary, single, global object that you attach buffers of data to, in specific formats, and call one of a half-dozen functions to send it.
Compute APIs, like CUDA and OpenCL, decided it was more efficient to handle queues, allowing the application to write commands and send them wherever they need to go. Multiple threads can write commands, and multiple accelerators (GPUs in our case) can be targeted individually. Vulkan, like Mantle and DirectX 12, takes this metaphor and adds graphics-specific instructions to it. Moreover, GPUs can schedule memory, compute, and graphics instructions at the same time, as long as the graphics task has leftover compute and memory resources, and / or the compute task has leftover memory resources.
This is not necessarily a “better” way to do graphics programming... it's different. That said, it has the potential to be much more efficient when dealing with lots of simple tasks that are sent from multiple CPU threads, especially to multiple GPUs (which currently require the driver to figure out how to convert draw calls into separate workloads -- leading to simplifications like mirrored memory and splitting workloads across neighboring frames). Many small tasks align well with video games, especially ones with lots of simple objects: strategy games, shooters with lots of debris, or any game with large crowds of people. As the API becomes ubiquitous, we'll see this bottleneck disappear, and games will no longer need to be designed around these limitations. It might even be used for drawing with cross-platform 2D APIs, like Qt, or even webpages, although those two examples (especially the Web) each have other, higher-priority bottlenecks. There are also other benefits to Vulkan.
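The command-queue metaphor described above can be sketched as a toy model: each CPU thread records its own list of commands without touching global state, then submits it to a shared queue for the "GPU" to drain. The class and function names here are illustrative only, not the real Vulkan API (which uses command pools, vkCmd* recording calls, and vkQueueSubmit):

```python
import threading
from queue import Queue

class CommandBuffer:
    """Per-thread recording target; no global lock needed while recording."""
    def __init__(self):
        self.commands = []

    def record(self, command):
        self.commands.append(command)

submit_queue = Queue()  # stands in for a hardware queue

def worker(thread_id, num_draws):
    cb = CommandBuffer()
    for i in range(num_draws):
        cb.record(f"draw(object_{thread_id}_{i})")
    submit_queue.put(cb)  # one cheap submission per thread

threads = [threading.Thread(target=worker, args=(t, 4)) for t in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# The "GPU" consumes the queue; total work equals everything recorded.
total = sum(len(submit_queue.get().commands) for _ in range(3))
print(total)  # 12
```

The point of the design is visible even in the toy: recording is embarrassingly parallel, and only the final submission touches a shared resource, which is where OpenGL's single global context forced serialization.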
The WebGL comparison is probably not as common knowledge as Khronos Group believes.
Still, Khronos Group was criticized when WebGL launched as "it was too tough for Web developers".
It didn't need to be easy. Frameworks arrived and simplified everything. It's now ubiquitous.
In fact, Adobe Animate CC (the successor to Flash Pro) is now a WebGL editor (experimentally).
Open platforms are required for this to become commonplace. Engines will probably target several APIs from their internal management APIs, but you can't target users who don't fit in any bucket. Vulkan brings this capability to basically any platform, as long as it has a compute-capable GPU and a driver developer who cares.
Thankfully, it arrived before any competitor established market share.
Subject: Graphics Cards | January 20, 2016 - 03:26 PM | Scott Michaud
Tagged: nvidia, linux, tesla, fermi, kepler, maxwell
It's nice to see long-term roundups every once in a while. They do not really provide useful information for someone looking to make a purchase, but they show how our industry is changing (or not). In this case, Phoronix tested twenty-seven NVIDIA GeForce cards across four architectures: Tesla, Fermi, Kepler, and Maxwell. In other words, from the GeForce 8 series all the way up to the GTX 980 Ti.
Image Credit: Phoronix
Nine years of advancements in ASIC design, with a doubling time-step of 18 months, should yield a 64-fold improvement. The number of transistors falls short, showing about a 12-fold improvement between the Titan X and the largest first-wave Tesla, although that means nothing for a fabless semiconductor designer. The main reason why I include this figure is to show the actual Moore's Law trend over this time span, but it also highlights the slowdown in process technology.
Performance per watt does depend on NVIDIA, though, and the ratio between the GTX 980 Ti and the 8500 GT is about 72:1. While this is slightly better than the target 64:1 ratio, these parts come from very different locations in their respective product stacks. Swap the 8500 GT for the following year's 9800 GTX, which makes for a comparison between top-of-the-line GPUs of their respective times, and you see a 6.2x improvement in performance per watt for the GTX 980 Ti. On the other hand, that part was outstanding for its era.
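The back-of-the-envelope Moore's Law math above works out like this, assuming one doubling every 18 months over the nine-year span:

```python
# Expected improvement: one doubling every 18 months over nine years.
years = 9
doubling_period = 1.5  # years
expected = 2 ** (years / doubling_period)
print(expected)  # 64.0

# Ratios reported in the article for comparison against that trend line.
transistor_ratio = 12     # Titan X vs. largest first-wave Tesla
perf_per_watt_ratio = 72  # GTX 980 Ti vs. 8500 GT
print(perf_per_watt_ratio / expected)  # 1.125, slightly ahead of trend
```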
I should note that each of these tests takes place on Linux. It might not perfectly reflect the landscape on Windows, but again, it's interesting in its own right.
Subject: Graphics Cards, Processors | January 8, 2016 - 02:38 AM | Scott Michaud
Tagged: Intel, kaby lake, linux, mesa
Quick post about something that came to light over at Phoronix. Someone noticed that Intel published a handful of PCI device IDs for graphics processors to Mesa and libdrm. It will take a few months for graphics drivers to catch up, although this suggests that Kaby Lake will be releasing relatively soon.
It also gives us hints about what Kaby Lake will be. Of the published batch, there will be six tiers of performance: GT1 has five IDs, GT1.5 has three IDs, GT2 has six IDs, GT2F has one ID, GT3 has three IDs, and GT4 has four IDs. Adding them up, we see that Intel plans 22 GPU devices. The Phoronix post lists what those device IDs are, but that is probably not interesting for our readers. Whether some of those devices overlap in performance or numbering is unclear, but it would make sense given how few SKUs Intel usually provides. I have zero experience in GPU driver development.
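The tier counts above can be tallied directly (the tier labels and counts are those reported from the Mesa/libdrm commits):

```python
# Published device-ID counts per graphics tier.
tiers = {"GT1": 5, "GT1.5": 3, "GT2": 6, "GT2F": 1, "GT3": 3, "GT4": 4}
total_ids = sum(tiers.values())
print(total_ids)  # 22
```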
Subject: Graphics Cards | December 29, 2015 - 07:05 AM | Scott Michaud
Tagged: opengl, mesa, linux, Intel
The open-source driver for Intel is known to be a little behind on Linux. Because Intel does not provide as much support as they should, the driver still does not support OpenGL 4.0, although that is changing. One large chunk of that API is support for tessellation, which comes from DirectX 11, and recent patches are adding it for supported hardware. Proprietary drivers exist, at least for some platforms, but they have their own issues.
According to the Phoronix article, once the driver succeeds in supporting OpenGL 4.0, it should not take long to open the path to 4.2. Tessellation is a huge hurdle, partially because it involves adding two whole shading stages to the rendering pipeline. Broadwell GPUs were recently added, but a patch that was committed yesterday will expand that to Ivy Bridge and Haswell. On Windows, Intel is far ahead -- pushing OpenGL 4.4 for Skylake-based graphics, although that platform only has proprietary drivers. AMD and NVIDIA are up to OpenGL 4.5, which is the latest version.
While all of this is happening, Valve is working on an open-source Vulkan driver for Intel on Linux. This API will be released adjacent to OpenGL, and is built for high-performance graphics and compute. (Note that OpenCL is more sophisticated than Vulkan "1.0" will be on the compute side of things.) As nice as it would be to get high-end OpenGL support, especially for developers who want a more simplified structure to communicate to GPUs with, Vulkan will probably be the API that matters most for high-end video games. But again, that only applies to games that are developed for it.
Subject: Processors | November 12, 2015 - 01:22 PM | Jeremy Hellstrom
Tagged: linux, Skylake, Intel, i5-6600K, hd 530, Ubuntu 15.10
A great way to shave money off of a minimalist system is to skip buying a discrete GPU and use the one present on modern processors, as well as to install Linux instead of buying a Windows license. The problem with doing so is that playing demanding games is going to be beyond your computer's abilities, at least without turning off most of the features that make the game look good. To help you figure out what your machine would be capable of, Phoronix has published this article. Their tests show that Windows 10 currently has a very large performance lead over the same hardware running Ubuntu, as the Windows OpenGL driver is superior to the open-source Linux driver. This may change sooner rather than later, but be aware that, for now, you will not get the most out of your Skylake's GPU on Linux.
"As it's been a while since my last Windows vs. Linux graphics comparison and haven't yet done such a comparison for Intel's latest-generation Skylake HD Graphics, the past few days I was running Windows 10 Pro x64 versus Ubuntu 15.10 graphics benchmarks with a Core i5 6600K sporting HD Graphics 530."
Here are some more Processor articles from around the web:
- Intel Core i5 6500: A Great Skylake CPU For $200, Works Well On Linux @ Phoronix
- CPU Battle - Old and High-End vs. New and Entry-Level @ Hardware Secrets
- Which is the faster CPU: old but high-end or entry-level and new? - Part 2 @ Hardware Secrets
- AMD FX 8320E CPU Review @ Neoseeker
Subject: Graphics Cards | October 23, 2015 - 03:19 PM | Jeremy Hellstrom
Tagged: linux, amd, nvidia, steam os
Steam Machines powered by SteamOS are due to hit stores in the coming months, and in order to get the best performance you need to make sure that the GPU inside the machine plays nicely with the new OS. To that end Phoronix has tested 22 GPUs: 15 NVIDIA cards ranging from a GTX 460 straight through to a TITAN X, and seven AMD cards from an HD 6570 through to the new R9 Fury. Part of the reason they used fewer AMD cards in the testing stems from driver issues which prevented some models from functioning properly. They tested BioShock Infinite, both Metro Redux games, CS:GO, and one of Josh's favourites, DiRT Showdown. The performance results may not be what you expect and are worth checking out in full. Phoronix also included cost-to-performance findings for budget-conscious gamers.
"With Steam Machines set to begin shipping next month and SteamOS beginning to interest more gamers as an alternative to Windows for building a living room gaming PC, in this article I've carried out a twenty-two graphics card comparison with various NVIDIA GeForce and AMD Radeon GPUs while testing them on the Debian Linux-based SteamOS 2.0 "Brewmaster" operating system using a variety of Steam Linux games."
Here are some more Graphics Card articles from around the web:
- MSI GeForce GTX 980 Ti LIGHTNING @ [H]ard|OCP
- Gigabyte GTX 950 Xtreme @ Modders-Inc
- The EVGA GTX 980 Ti Hybrid Review @ Hardware Canucks
- MSI GTX 980 Ti Sea Hawk Review @ Hardware Canucks
- EVGA GeForce GTX 980 Ti FTW Graphics Card Review @ Techgage
- PNY GTX 950 2GB @ eTeknix
- MSI GTX 980 Ti Lightning 6GB @ Kitguru
- PowerColor Devil 13 Dual Core R9 390 @ [H]ard|OCP
Subject: Systems | October 13, 2015 - 03:23 PM | Jeremy Hellstrom
Tagged: server farm, linux, DIY
Phoronix recently built a server farm and bar, a perfect use for a basement. In building the server farm they learned quite a bit about the process of creating your own, as well as the costs involved. For instance, their power bill has gone up considerably: including the air conditioning, they are seeing usage of 3,000 kWh a month, so you might want to do some calculations before setting up your own. Take a look at how the mostly finished design worked out, and if you are interested you can find a link to the original article covering the build on the last page.
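To put that 3,000 kWh/month figure in perspective, here is a rough cost estimate. The $0.12/kWh electricity rate is an assumed figure for illustration only, not the article's actual price; plug in your local rate:

```python
# Rough power-cost estimate for a similar basement server farm.
monthly_kwh = 3000
rate_per_kwh = 0.12  # USD per kWh -- assumed, check your own utility rate
monthly_cost = monthly_kwh * rate_per_kwh
print(monthly_cost)       # 360.0 per month
print(monthly_cost * 12)  # 4320.0 per year
```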
"It's been just over six months since I completed construction on the large 60+ system server room where a ton of Linux benchmarking takes place just not for Phoronix.com but also the new LinuxBenchmarking.com daily performance tracking initiative and testing and development around our Phoronix Test Suite, Phoromatic, and OpenBenchmarking.org software. Here's a look back, a few recommendations to reiterate for those aspiring to turn their cellar into a server farm, and a few things I'd do differently next time around."
Here are some more Systems articles from around the web:
- TechPowerUp 4K Gaming Build Guide @ techPowerUp
- OCUK Titan Electron Intel Core i3 Mini-ITX gaming PC @ Kitguru
- ASRock Beebox Mini PC @ techPowerUp
- Scan 3XS GW-HTX35 Workstation (w/ Quadro M6000) @ Kitguru