Tessellation Support Expands for Intel's Open Linux Driver

Subject: Graphics Cards | December 29, 2015 - 12:05 PM |
Tagged: opengl, mesa, linux, Intel

Intel's open-source Linux driver is known to lag a little behind. Because Intel does not put as much support behind it as they should, the driver still does not support OpenGL 4.0, although that is changing. One large chunk of that API is tessellation, a feature that arrived on the DirectX side with DirectX 11, and recent patches are adding it for supported hardware. Proprietary drivers exist, at least for some platforms, but they have their own issues.

intel-2015-linux-driver-mesa.png

According to the Phoronix article, once the driver succeeds in supporting OpenGL 4.0, the path to 4.2 should open up shortly thereafter. Tessellation is a huge hurdle, partially because it involves adding two whole shading stages, tessellation control and tessellation evaluation, to the rendering pipeline. Support for Broadwell GPUs was recently added, and a patch that was committed yesterday expands that to Ivy Bridge and Haswell. On Windows, Intel is far ahead, pushing OpenGL 4.4 for Skylake-based graphics, although that platform only has proprietary drivers. AMD and NVIDIA are up to OpenGL 4.5, which is the latest version.
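
For readers curious what those two extra stages look like from the application side, here is a minimal C sketch, assuming a GLEW-loaded OpenGL 4.0 context; the shader source strings, the usual vertex and fragment stages, and all error checking are omitted, and everything shown is core OpenGL 4.0 rather than anything specific to Intel's driver.

```c
/* Minimal sketch: the two pipeline stages tessellation adds in OpenGL 4.0.
 * Assumes a 4.0 context with functions loaded via GLEW (glewInit() done). */
#include <GL/glew.h>

GLuint build_tessellated_program(const char *tcs_src, const char *tes_src)
{
    GLuint prog = glCreateProgram();

    /* New stage 1 of 2: the tessellation control shader runs per patch and
     * decides how finely it is subdivided (gl_TessLevelInner/Outer). */
    GLuint tcs = glCreateShader(GL_TESS_CONTROL_SHADER);
    glShaderSource(tcs, 1, &tcs_src, NULL);
    glCompileShader(tcs);
    glAttachShader(prog, tcs);

    /* New stage 2 of 2: the tessellation evaluation shader positions every
     * vertex that the fixed-function tessellator generates. */
    GLuint tes = glCreateShader(GL_TESS_EVALUATION_SHADER);
    glShaderSource(tes, 1, &tes_src, NULL);
    glCompileShader(tes);
    glAttachShader(prog, tes);

    glLinkProgram(prog);

    /* Tessellated draws consume patches instead of triangles. */
    glPatchParameteri(GL_PATCH_VERTICES, 3);
    return prog;
}
```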

While all of this is happening, Valve is working on an open-source Vulkan driver for Intel on Linux. That API will be released alongside OpenGL, and is built for high-performance graphics and compute. (Note that OpenCL is more sophisticated than Vulkan "1.0" will be on the compute side of things.) As nice as it would be to get high-end OpenGL support, especially for developers who want a simpler way of talking to the GPU, Vulkan will probably be the API that matters most for high-end video games. But again, that only applies to games that are developed for it.

Source: Phoronix
Manufacturer: AMD

May the Radeon be with You

In celebration of the release of The Force Awakens as well as the new Star Wars Battlefront game from DICE and EA, AMD sent over some hardware for us to use in a system build, targeted at getting users up and running in Battlefront with impressive quality and performance while staying on a reasonable budget. Pairing an AMD processor, an MSI motherboard, and a Sapphire GPU with a low-cost chassis, an SSD and more, the combined system, including a FreeSync monitor, comes in at around $1,200.

swbf.jpg

Holiday breaks are MADE for Star Wars Battlefront

Though the holiday is already here and you'd be hard pressed to build this system in time for it, I have a feeling that quite a few of our readers and viewers will find themselves with some cash and gift certificates in hand, just ITCHING for a place to invest in a new gaming PC.

The video above runs through the list of components, briefly shows the build process, and shows us getting our gaming on with Star Wars Battlefront. Interested in building a system similar to the one above on your own? Here's the hardware breakdown.

  AMD Powered Star Wars Battlefront System
  Processor       AMD FX-8370 - $197
  Cooler          Cooler Master Hyper 212 EVO - $29
  Motherboard     MSI 990FXA Gaming - $137
  Memory          AMD Radeon Memory DDR3-2400 - $79
  Graphics Card   Sapphire NITRO Radeon R9 380X - $266
  Storage         SanDisk Ultra II 240GB SSD - $79
  Case            Corsair Carbide 300R - $68
  Power Supply    Seasonic 600 watt 80 Plus - $69
  Monitor         AOC G2460PF 1920x1080 144Hz FreeSync - $259
  Total Price     Full System (without monitor) - Amazon.com - $924

For under $1,000, plus another $250 or so for the AOC FreeSync-capable 1080p monitor, you can have a complete gaming rig for your winter break. Let's detail some of the specific components.

cpu.jpg

AMD sent over the FX-8370 processor for our build, a 4-module / 8-core CPU that runs at 4.0 GHz and is more than capable of handling any gaming workload you can toss at it. And if you need to do some transcoding, video work or, heaven forbid, school or productivity work, the FX-8370 has you covered there too.

cooler.jpg

For the motherboard, AMD sent over the MSI 990FXA Gaming board, one of the newer AMD platforms, with support for USB 3.1 to give you plenty of headroom for future expansion. The Cooler Master Hyper 212 EVO was our selection to keep the FX-8370 running smoothly, and 8GB of AMD Radeon DDR3-2133 memory is enough to keep applications and the Windows 10 operating system happy.

Continue reading about our AMD system build for Star Wars Battlefront!!

NVIDIA GameWorks VR 1.1 arrives with support for OpenGL VR SLI and the Oculus SDK

Subject: Graphics Cards | December 21, 2015 - 06:04 PM |
Tagged: GameWorks VR 1.1, nvidia, Oculus, opengl, vive

If you are blessed with the good fortune of already having a VR headset and happen to be running an NVIDIA GPU, then there is a new driver you want to grab as soon as you can.  The driver includes a new OpenGL extension that enables NVIDIA SLI support for OpenGL apps that display on an Oculus or Vive headset.  NVIDIA's PR suggests you can expect performance to improve by up to 1.7x: not quite doubling, but certainly a noticeable improvement.  The update covers both GeForce and Quadro cards.

vrlock.jpg

NVIDIA describes how the updated driver generates images for each eye in their blog post.  They imply that the code changes required to benefit from this update are minimal, and that it also reduces the CPU overhead of rendering images for the right and left eyes.  Read on if you are interested in the developer side of this update; otherwise, download your new driver and keep an eye out for application updates that enable support for SLI in VR.
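
As a rough illustration of the developer side, an application would typically confirm the new extension is present before taking the multi-GPU path. Below is a hedged C sketch using the standard OpenGL extension query; the extension string named in the usage comment is our assumption, so check NVIDIA's blog post for the exact name the 1.1 driver exposes.

```c
/* Standard OpenGL extension check (the query calls are core OpenGL 3.0+;
 * only the extension string in the usage example below is our guess). */
#include <string.h>
#include <GL/glew.h>

int has_gl_extension(const char *wanted)
{
    GLint count = 0;
    glGetIntegerv(GL_NUM_EXTENSIONS, &count);
    for (GLint i = 0; i < count; ++i) {
        const char *ext = (const char *)glGetStringi(GL_EXTENSIONS, (GLuint)i);
        if (ext && strcmp(ext, wanted) == 0)
            return 1;   /* driver advertises the extension */
    }
    return 0;
}

/* Usage (the extension name here is an assumption on our part):
 *   if (has_gl_extension("GL_NVX_linked_gpu_multicast")) { enable VR SLI path }
 *   else { fall back to single-GPU rendering }                              */
```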

Source: NVIDIA

Vulkan API Slips to 2016

Subject: Graphics Cards | December 21, 2015 - 12:25 PM |
Tagged: vulkan, Mantle, Khronos, dx12, DirectX 12

The Khronos Group announced on Friday that the Vulkan API will not ship until next year. The standards body had been expecting to launch it at some point in 2015; in fact, when I was first briefed on it, they specifically said that 2015 was an “under-promise and over-deliver” estimate. Vulkan is an open graphics and compute standard that was derived from AMD's Mantle. Like OpenCL 2.1, it uses the SPIR-V intermediate language for compute and shading, which can be compiled from subsets of a variety of languages.
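
For a sense of what SPIR-V ingestion looks like in practice, here is a minimal C sketch of wrapping a pre-compiled SPIR-V binary in a shader module; the vkCreateShaderModule call and structure names follow the Vulkan material Khronos has published so far, so treat the exact interface as provisional until the final specification ships.

```c
/* Minimal sketch: hand a pre-compiled SPIR-V binary to the driver.
 * "device" is an already-created VkDevice; the binary could come from
 * GLSL (or another language subset) via an offline SPIR-V compiler. */
#include <vulkan/vulkan.h>

VkShaderModule load_spirv(VkDevice device, const uint32_t *words, size_t size_in_bytes)
{
    VkShaderModuleCreateInfo info = {0};
    info.sType    = VK_STRUCTURE_TYPE_SHADER_MODULE_CREATE_INFO;
    info.codeSize = size_in_bytes;  /* byte count, even though... */
    info.pCode    = words;          /* ...SPIR-V is a stream of 32-bit words */

    VkShaderModule module = VK_NULL_HANDLE;
    if (vkCreateShaderModule(device, &info, NULL, &module) != VK_SUCCESS)
        return VK_NULL_HANDLE;  /* compilation happened offline; this just wraps the binary */
    return module;
}
```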

khronos-vulkan-logo.png

I know that most people will be quick to blame The Khronos Group for this, because industry bodies moving slowly is a stereotype, but I don't think it applies here. When AMD created Mantle, the project bore significant delays at all levels: its drivers and software were held back, and the public release of its SDK was delayed out of existence. Again, it would be easy to blame AMD for this, but hold on, because we now get to Microsoft. DirectX 12, which is maybe even closer to Mantle than Vulkan is due to its shading language, didn't roll out as aggressively as Microsoft expected, either. Software is still pretty much non-existent, even though Microsoft claimed, at GDC 2014, that about 50% of PC games would be DX12-compatible by Holiday 2015. We currently have... zero (excluding pre-release).

Say what you like about the three examples individually, but when all three show problems, there might just be a few issues that took longer than expected to solve. Again, this is a completely new method of translating the voltages coming through a PCI Express bus into fancy graphics and GPU compute, and all of the supporting ecosystems need to be created, too.

Speaking of ecosystems, The Khronos Group has also announced that Google has upgraded its membership to “Promoter” to get more involved with Vulkan development. Google has been sort-of hostile towards certain Khronos standards on Android in the past, such as disabling OpenCL on Nexus devices and steering developers towards the Android Extension Pack and RenderScript instead. They seem to want to use Vulkan proper this time, which is always healthy for an API.

I guess look forward to Vulkan in 2016... hopefully early.

Thought you wouldn't see a new Radeon Crimson driver before the New Year, eh?

Subject: Graphics Cards | December 17, 2015 - 10:49 PM |
Tagged: radeon, crimson, amd

unnamed.jpg

That's right folks, the official AMD Radeon Software Crimson Edition 15.12 has just launched for you to install.  This includes the fixes for fan speeds when you are using AMD OverDrive: your settings will stick, and the fans will revert to normal once you return to the desktop from an intense gaming session.  There are multiple fixes for Star Wars Battlefront and Fallout 4, plus several GUI fixes within the software itself.  As always there are still a few kinks being worked out, but overall it is worth popping over to AMD to grab the new driver.  You should also have fewer issues upgrading from within Crimson after this update.

Source: AMD

NVIDIA Updates GeForce Experience with Beta Features

Subject: Graphics Cards | December 16, 2015 - 01:12 PM |
Tagged: nvidia, geforce experience, geforce

A new version of GeForce Experience was published yesterday. It is classified as a beta, which I'm guessing means that NVIDIA wanted to release it before the Holidays without being on the hook for fixing post-launch issues during the break; ship it as a beta, and users can simply roll back if something doesn't work. That said, NVIDIA suggests that beta releases will be a recurring theme under their new "Early Access" program.

It has a few interesting features, though. First, it has a screenshot function that connects with Imgur. Steam's F12 function is pretty good for almost any title, but there are times when you don't want to register a game with Steam, so a second option is welcome. They also added an overlay to control your stream, rather than just an indicator icon.

nvidia-2015-geforce-experience-streamstuff.png

They added the ability to choose the Twitch ingest server, which is the server the broadcaster connects to in order to deliver the stream into Twitch's back-end. I haven't used ShadowPlay for a while, but you previously had to use whatever server GeForce Experience chose; if it wasn't the best connection (e.g., one across the continent), then you basically had to deal with it. OBS and OBS Studio have had these features for some time, of course. I'll be clear: NVIDIA is playing catch-up to open-source software in this area.

The last feature to be mentioned might just be the most interesting, though. A while ago, we mentioned that NVIDIA wanted to enable online co-op through a GameStream-like service. They now have it, and it's called GameStream Co-op. The viewer can watch, take over your input, or register as a second gamepad. It requires a GeForce GTX 650 (or 660M) or higher.

GeForce Experience Beta is available now.

Source: NVIDIA
Manufacturer: AMD

Open Source your GPU!

As part of AMD’s recent RTG (Radeon Technologies Group) Summit in Sonoma, the company released information about a new initiative called GPUOpen, meant to help drive development and evolution in the world of gaming.  As the name implies, the idea is to apply an open-source mentality to drivers, libraries, SDKs and more, improving the relationship between AMD’s hardware and the game development ecosystem.

gpuopen-5.jpg

When the current generation of consoles was first announced, AMD was riding a wave of positive PR that it hadn’t felt in many years. Because AMD Radeon hardware was at the root of both the PlayStation 4 and the Xbox One, game developers would become much more adept at programming for AMD’s GCN architecture, and that expertise would waterfall down to PC gamers. At least, that was the plan. In practice, though, I think you’d be hard pressed to find any analyst willing to put their name on a statement claiming that AMD’s proclamation actually transpired. It just hasn’t happened, but that does not mean that it still can’t if all the pieces fall into place.

gpuopen-7.jpg

The issue that AMD, NVIDIA, and game developers have to work around is a divided development ecosystem. While programmers on the console side tend to have very close-to-the-metal access to CPU and GPU hardware, that hasn’t been the case on PCs until very recently. AMD was the first to make moves in this area with the Mantle API, but now we have DirectX 12, a competing low-level API that will have much wider reach than Mantle or Vulkan (what Mantle has become).

AMD also believes, as do many developers, that a “black box” development environment for tools and effects packages is having a negative effect on the PC gaming ecosystem. The black box mentality means that developers don’t have access to the source code of some packages and thus cannot tweak performance and features to their liking.

Continue reading our overview of the new GPUOpen initiative from the Radeon Technologies Group!!

Pushing the ASUS STRIX R9 380X DirectCU II OC to the limit

Subject: Graphics Cards | December 14, 2015 - 08:55 PM |
Tagged: amd, asus, STRIX R9 380X DirectCU II OC, overclock

Out of the box, the ASUS STRIX R9 380X OC has a top GPU speed of 1030MHz and memory at 5.7GHz, enough to outperform a stock GTX 960 4GB at 1440p but not enough to provide satisfactory performance at that resolution.  After spending some time with the card, [H]ard|OCP determined that the best overclock they could coax out of this particular GPU was 1175MHz core and 6.5GHz memory, roughly a 14% bump on both, so they set about testing performance at 1440p again.  To make it fair, they also overclocked their STRIX GTX 960 OC 4GB to 1527MHz and 8GHz.  Read the full review for the detailed results; you will see that overclocking your 380X really does increase the value you get for your money.

1450065721TtW76q7G9J_1_1.gif

"We take the new ASUS STRIX R9 380X DirectCU II OC based on AMD's new Radeon R9 380X GPU and overclock this video card to its highest potential. We'll compare performance in six games, including Fallout 4, to a highly overclocked ASUS GeForce GTX 960 4GB video card and find out who dominates 1440p gaming."

Source: [H]ard|OCP

What is this, a GPU for Seattle fans? The MSI GTX 980 Ti SEA HAWK

Subject: Graphics Cards | December 8, 2015 - 08:56 PM |
Tagged: msi, GTX 980 Ti SEA HAWK, 980 Ti, 4k

[H]ard|OCP recently wrapped up a review of the MSI GTX 980 Ti SEA HAWK and are now revisiting the GPU, focusing on performance at 4K.  The particular card they have tops out at a 1340MHz base clock and a 1441MHz boost clock, which results in an in-game frequency of 1567MHz.  For comparison, they tested their overclocked card against the SEA HAWK at its factory overclock and against a reference 980 Ti, all at 4K.  Read the full review to see if this watercooled 980 Ti is worth the premium price.

1449478389yrHHlrQR28_1_1.jpg

"Our second installment with the MSI GTX 980 Ti SEA HAWK focuses in on gameplay at 4K and how viable it is with the fastest overclocked GTX 980 Ti we have seen yet. We will be using a reference GeForce GTX 980 Ti to show the full spectrum of the 980 Ti’s performance capabilities and emphasize the MSI GTX 980 Ti SEA HAWK’s value."

Source: [H]ard|OCP
Manufacturer: AMD

What RTG has planned for 2016

Last week the Radeon Technologies Group invited a handful of press and analysts to a secluded location in Sonoma, CA to discuss the future of graphics, GPUs and, of course, Radeon. For those of you that are a bit confused, the RTG (Radeon Technologies Group) was spun up inside AMD to encompass all of the graphics products and IP inside the company. Though today’s story is not going to focus on the fundamental changes that RTG brings to the future of AMD, I will note, without commentary, that we did not see a single AMD logo in our presentations or in the signage present throughout the week.

Much of what I learned during the RTG Summit in Sonoma is under NDA and will likely be so for some time. We learned about the future architectures, direction and product theories that will find their way into a range of solutions available in 2016 and 2017.

What I can discuss today is a pair of features being updated and improved for current-generation graphics cards and for Radeon GPUs coming in 2016: FreeSync and HDR displays. The former is one that readers of PC Perspective should be very familiar with, while the latter offers a new window into content coming in late 2016.

High Dynamic Range Displays: Better Pixels

In just the last couple of years we have seen a spike in resolution for mobile, desktop and notebook displays. We now regularly see 4K monitors on sale for around $500 and very good quality 4K panels going for something in the $1000 range. Couple that with the growing market share of 21:9 panels at 3440x1440, and clearly there is demand from consumers for a better visual experience on their PCs.

rtg1-8.jpg

But what if the answer isn’t just more pixels, but better pixels? We already debate this weekly when comparing render resolutions in games (4K at lower image quality settings versus 2560x1440 at maximum IQ settings, for example), but the truth is that panel technology has the ability to dramatically change how we view all content, whether games, movies or productivity, with the introduction of HDR: high dynamic range.

rtg1-10.jpg

As the slide above demonstrates, there is a wide range of luminance in the real world that our eyes can see. Sunlight crosses the 1.6 billion nits mark, while basic fluorescent lighting in our homes and offices exceeds 10,000 nits. Compare that to the most modern PC displays, which range from 0.1 nits to 250 nits, and you can already tell where the discussion is heading. Even the best LCD TVs on the market today have a range of 0.1 to 400 nits.
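
To put those numbers in perspective, here is a quick back-of-the-envelope calculation in C, using only the figures quoted above, expressing each range as a contrast ratio and in photographic "stops" (doublings of luminance):

```c
/* Back-of-the-envelope math using only the figures quoted above.
 * Build with: cc hdr.c -lm */
#include <math.h>
#include <stdio.h>

int main(void)
{
    const double monitor_min = 0.1, monitor_max = 250.0; /* modern PC display */
    const double tv_max = 400.0;                         /* best LCD TVs today */
    const double fluorescent = 10000.0;                  /* office lighting */

    printf("Monitor: %.0f:1 contrast (%.1f stops)\n",
           monitor_max / monitor_min, log2(monitor_max / monitor_min));
    printf("Best TV: %.0f:1 contrast (%.1f stops)\n",
           tv_max / monitor_min, log2(tv_max / monitor_min));
    printf("Office lighting alone is %.0fx a monitor's peak brightness\n",
           fluorescent / monitor_max);
    return 0;
}
```

That works out to about 2500:1 (11.3 stops) for a typical monitor and 4000:1 (12 stops) for the best TVs, while ordinary fluorescent lighting alone sits 40x above a monitor's peak; that is the gap HDR panels aim to close.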

Continue reading our overview of new FreeSync and HDR features for Radeon in 2016!!