Manufacturer: Intel

It comes after 8, but before 10

As the week of Intel’s Developer Forum (IDF) begins, you can expect to see a lot of information about Intel’s 6th Generation Core architecture, codenamed Skylake, finally revealed. When I posted my review of the Core i7-6700K, the first product based on that architecture to be released in any capacity, I was surprised that Intel was willing to ship product without the normal amount of background information for media and developers. Rather than give us the details and then ship product, which has happened for essentially every consumer product release I have been a part of, Intel did the reverse: ship a consumer-friendly CPU and then promise to tell us how it all works later in the month at IDF.

Today I came across a document posted on Intel’s website that dives into very specific detail on the new Gen9 graphics and compute architecture of Skylake. Details on the Core architecture changes are not present; instead we are given details on how the traditional GPU portion of the SoC has changed. To be clear: I haven’t had any formal briefing from Intel on this topic or anything surrounding the architecture of Skylake or the new Gen9 graphics system, but I wanted to share the details we found available. I am sure we’ll learn more this week as IDF progresses, so I will update this story where necessary.

What Intel calls Processor Graphics is what, for the longest time, we simply called integrated graphics. The purpose and role of processor graphics has changed drastically over the years; it is now responsible not only for 3D graphics rendering but also for the compute, media, and display capabilities of the Intel Skylake SoC (when discrete add-in graphics is not used). The architecture document used to source this story focuses on Gen9 graphics, the compute architecture utilized in the latest Skylake CPUs. The Intel HD Graphics 530 on the Core i7-6700K / Core i5-6600K is the first product released and announced using Gen9 graphics and is also the first to adopt Intel’s new 3-digit naming scheme.

skylakegen9-4.jpg

This die shot of the Core i7-6700K shows the increased size and prominence of the Gen9 graphics in the overall SoC design. Containing four traditional x86 CPU cores and one “slice” implementation of Gen9 graphics (with three visible sub-slices we’ll describe below), this is not likely to be the highest performing iteration of the latest Intel HD Graphics technology.

skylakegen9-4.2.jpg

Like the Intel processors before it, the Skylake design utilizes a ring bus architecture to connect the different components of the SoC. This bi-directional interconnect has a 32-byte wide data bus and connects to multiple “agents” on the CPU. Each individual CPU core is considered its own agent, while the Gen9 compute architecture is considered one complete agent. The system agent bundles the DRAM memory controller, the display controller, PCI Express, and the other I/O interfaces that communicate with the rest of the PC. Any off-chip memory requests and transactions occur through this bus, while on-chip data transfers tend to be handled differently.

Continue reading our look at the new Gen9 graphics and compute architecture on Skylake!!

Overall GPU Shipments Down from Last Year, PC Industry Drops 10%

Subject: Graphics Cards, Systems | August 17, 2015 - 11:00 AM |
Tagged: NPD, gpu, discrete gpu, graphics, marketshare, PC industry

News from NPD Research today shows a sharp decline in discrete graphics shipments from all major vendors. Not great news for the PC industry, but not all that surprising, either.

graphics_shipments.jpg

These numbers don’t indicate a lack of discrete GPU interest in the PC enthusiast community, of course, but they certainly show how the mainstream market has changed. OEM laptop and (more recently) desktop makers predominantly use processor graphics from Intel and AMD APUs, though the decrease of over 7% for Intel GPUs suggests a decline in PC shipments overall.

Here are the highlights, quoted directly from NPD Research:

  • AMD's overall unit shipments decreased -25.82% quarter-to-quarter, Intel's total shipments decreased -7.39% from last quarter, and Nvidia's decreased -16.19%.
  • The attach rate of GPUs (includes integrated and discrete GPUs) to PCs for the quarter was 137% which was down -10.82% from last quarter, and 26.43% of PCs had discrete GPUs, which is down -4.15%.
  • The overall PC market decreased -4.05% quarter-to-quarter, and decreased -10.40% year-to-year.
  • Desktop graphics add-in boards (AIBs) that use discrete GPUs decreased -16.81% from last quarter.

marketshare.png

An overall decrease of 10.4% year-to-year indicates what I'll call the continuing evolution of the PC (rather than a decline, per se), and shows how many have come to depend on smartphones for the basic computing tasks (email, web browsing) that once required a PC. Tablets didn’t replace the PC in the way that was predicted only 5 years ago, and it’s almost become essential to pair a PC with a smartphone for a complete personal computing experience (sorry, tablets – we just don’t NEED you as much).

I would guess anyone reading this on a PC enthusiast site is not only using a PC, but probably one with discrete graphics, too. Or maybe you exclusively view our site on a tablet or smartphone? I for one won’t stop buying PC components until they just aren’t available anymore, and that dark day is probably still many years off.

Source: NPD Research

AMD Radeon R9 Fury Unlocked as Fury X, Overclocked to 1 GHz HBM

Subject: Graphics Cards | August 12, 2015 - 05:29 PM |
Tagged: STRIX R9 Fury, Radeon R9 Fury, overclocking, oc, LN2, hbm, fury x, asus, amd

What happens when you unlock an AMD Fury to have the Compute Units of a Fury X, and then overclock the snot out of it using LN2? User Xtreme Addict in the HWBot forums has created a comprehensive guide to do just this, and the results are incredible.

fury_ln2_01.JPG

Not for the faint of heart (image credit: Xtreme Addict)

"The steps include unlocking the Compute Units to enable Fury X grade performance, enabling the hotwire soldering pads, a 0.95v Rail mod, and of course the trimpot/hotwire VGPU, VMEM, VPLL (VDDCI) mods.

The result? A GPU frequency of 1450 MHz and HBM frequency of 1000 MHz. For the HBM that's a 100% overclock."

Beginning with a stock ASUS R9 Fury STRIX card, Xtreme Addict performed some surgery to fully unlock the voltage, and unlocked the Compute Units using a tool from this Overclock.net thread.

fury_ln2_02.jpg

The results? Staggering. HBM at 1000 MHz is double the rate of the stock Fury X, and a GPU core of 1450 MHz is a 400 MHz increase. So what kind of performance did this heavily overclocked card achieve?

"The performance goes up from 6237 points at default to 6756 after unlocking the CUs, then 8121 points after overclock on air cooling, to eventually end up at 9634 points when fully unleashed with liquid nitrogen."

Apparently they were able to push the card even further, ending up with a whopping 10033 score in 3DMark Fire Strike Extreme.

fury_ln2_03.JPG

While this method is far too extreme for 99% of enthusiasts, the idea of unlocking a retail Fury to the level of a Fury X through software/BIOS mods is much more accessible, as is the possibility of reaching much higher clocks through advanced cooling methods.

Unfortunately, if reading through this makes you want to run out and grab one of these STRIX cards, availability is still limited. Hopefully supply catches up to demand in the near future.

fury_strix.PNG

A quick look at stock status on Newegg for the featured R9 Fury card

Source: HWBot

3dfx Voodoo 3 2000 PCI Unboxing - What year is it??!?

Subject: Graphics Cards | August 12, 2015 - 04:43 PM |
Tagged: what year is it, voodoo 3, voodoo, video, unboxing, pci, 3dfx

What do you do when you have a new-in-box 3dfx Voodoo 3 2000 graphics card that gets some water damage? You do a classic unboxing and then try to get that PCI graphics card from 1999 up and running and playing some Unreal Tournament.
 
pic1.jpg
 
Were we successful?
 

GIGABYTE GTX 980 Ti G1 GAMING loves it when you overclock

Subject: Graphics Cards | August 12, 2015 - 02:44 PM |
Tagged: GTX 980 Ti G1 GAMING, gigabyte, GTX 980 Ti, factory overclocked

The Gigabyte GTX 980 Ti G1 GAMING card comes with a 1152MHz Base Clock and 1241MHz Boost Clock straight out of the box and uses two 8-pin power connectors as opposed to an 8-pin and a 6-pin.  That extra power and the WINDFORCE 3X custom cooler help you when overclocking the card beyond the frequencies it ships at.  [H]ard|OCP used OC GURU II to up the voltage provided to this card and reached an overclock that hit 1367MHz in game with a 7GHz clock for the VRAM.  Manually they managed to go even further: the VRAM could reach 8GHz and the GPU clock was measured at 1535MHz in game, a rather significant increase.  The overclock increased performance by around 10% in most of the tests, which makes this card impressive even before you consider some of the other beneficial features you can read about at [H]ard|OCP.

1439202407lsYxTzB0s7_1_15_l.jpg

"Today we review a custom built retail factory overclocked GIGABYTE GTX 980 Ti G1 GAMING video card. This video card is built to overclock in every way. We'll take this video card, compare it to the AMD Radeon R9 Fury X and overclock the GIGABYTE GTX 980 Ti G1 GAMING to its highest potential. The overclocking potential is amazing."

Here are some more Graphics Card articles from around the web:


Source: [H]ard|OCP

Qualcomm Introduces Adreno 5xx Architecture for Snapdragon 820

Subject: Graphics Cards, Processors, Mobile | August 12, 2015 - 07:30 AM |
Tagged: snapdragon 820, snapdragon, siggraph 2015, Siggraph, qualcomm, adreno 530, adreno

Despite the success of the Snapdragon 805 and even the 808, Qualcomm’s flagship Snapdragon 810 SoC had a tumultuous lifespan.  Rumors and stories about the chip’s inability to run in phone form factors without overheating and/or draining battery life were rampant, despite the company’s insistence that the problem was fixed with a very quick second revision of the part. Very few devices used the 810; instead we saw most of the flagship smartphones use the slightly cut-back SD 808 or the SD 805.

Today at Siggraph, Qualcomm begins the reveal of a new flagship SoC, the Snapdragon 820. As the event coinciding with the launch is a graphics-specific show, QC is focusing on a high-level overview of the graphics portion of the Snapdragon 820: the updated Adreno 5xx architecture and associated designs, and a new camera image signal processor (ISP) aimed at improving the quality of photos and recordings on our mobile devices.

sd820-1.jpg

A modern SoC from Qualcomm features many different processors working in tandem to shape the user experience on the device. While the only details we are getting today focus on the Adreno 530 GPU and Spectra ISP, other segments like connectivity (wireless), video processing, and digital signal processing (DSP) are important parts of the computing story. And we are well aware that Qualcomm is readying its own 64-bit processor architecture for the Kryo CPU rather than implementing the off-the-shelf cores from ARM used in the 810.

We also know that Qualcomm is targeting a “leading edge” FinFET process technology for SD 820 and, though we haven’t been able to confirm anything, it looks very likely that this chip will be built on the Samsung 14nm line that also built the Exynos 7420.

But over half of the processing on the upcoming Snapdragon 820 will focus on visual processing: from graphics to gaming to UI animations to image capture and video output, this chip’s die will be dominated by high-performance visuals.

Qualcomm’s list of target goals for SD 820 visuals reads as you would expect: wanting perfection in every area. Wouldn’t we all love a phone or tablet that takes perfect photos every time, always focusing on the right things (or everything), with exceptional low-light performance? Though a lesser-known problem for consumers, having accurate color reproduction from capture, through processing, and on to the display would be a big advantage. And of course, we all want graphics performance that impresses and a user interface that is smooth and reliable while enabling new experiences that we haven’t even thought of in the mobile form factor. Qualcomm thinks that Snapdragon 820 will be able to deliver on all of that.

Continue reading about the new Adreno 5xx architecture!!

Source: Qualcomm

Is this the GTX 950?

Subject: Graphics Cards | August 11, 2015 - 02:54 PM |
Tagged: rumour, nvidia, gtx 950

Rumours of the impending release of a GTX 950, and perhaps even a GTX 950 Ti, continue to spread, most recently at Videocardz, who have developed a reputation for this kind of report.  Little is known at this time and the specifications remain unconfirmed, but they have found a page showing an ASUS STRIX GTX 950 with 2GB of memory and a DirectCU II cooler. The prices shown are unlikely to represent the actual retail price, even in Finland, where the capture is from.

PNY-GeForce-GTX-950.jpg

Also spotted is a PNY GTX 950 retail box, which shows us little in the way of details; the power plug is facing away from the camera, so we are still unsure how many power connectors will be needed.  Videocardz also reiterates their belief from the first leak that the card will use 75% of a GM206 Maxwell graphics processor, with 768 CUDA cores and a 128-bit memory interface.

Source: Videocardz

Overclock any NVIDIA GPU on Desktop and Mobile with a New Utility

Subject: Graphics Cards | August 10, 2015 - 06:14 PM |
Tagged: overclocking, overclock, open source, nvidia, MSI Afterburner, API

An author called "2PKAQWTUQM2Q7DJG" (likely not a real name) has published a fascinating little article today on his/her Wordpress blog entitled, "Overclocking Tools for NVIDIA GPUs Suck. I Made My Own". What it contains is a full account of the process of creating an overclocking tool beyond the constraints of common utilities such as MSI Afterburner.

By probing MSI's OC utility using Ollydbg (an x86 "assembler level analysing debugger"), the author was able to track down how Afterburner was working.

nvapiload.png

“nvapi.dll” definitely gets loaded here using LoadLibrary/GetModuleHandle. We’re on the right track. Now where exactly is that lib used? ... That’s simple, with the program running and the realtime graph disabled (it polls NvAPI constantly adding noise to the mass of API calls). we place a memory breakpoint on the .Text memory segment of the NVapi.dll inside MSI Afterburner’s process... Then we set the sliders in the MSI tool to get some negligible GPU underclock and hit the “apply” button. It breaks inside NvAPI… magic!

After further explaining the process and his/her source code for an overclocking utility, the user goes on to show the finished product in the form of a command line utility.
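To make the mechanics a bit more concrete, here is a minimal C++ sketch of the probing approach the article describes, assuming the widely reported nvapi_QueryInterface export that NVIDIA's driver library exposes. The function ID below is a made-up placeholder, not a real NVAPI ID; recovering the real, undocumented IDs is exactly the reverse-engineering work the author did with Ollydbg.

```cpp
// Minimal sketch of the probing approach: load NVIDIA's nvapi.dll
// (nvapi64.dll in 64-bit processes) and pull out its single exported entry
// point, nvapi_QueryInterface, which hands back internal function pointers
// by 32-bit ID. The ID used below is a made-up placeholder.
#include <windows.h>
#include <cstdint>
#include <cstdio>

// nvapi_QueryInterface takes a 32-bit function ID and returns a function pointer.
using NvQueryInterface_t = void* (__cdecl*)(uint32_t id);

int main() {
    HMODULE nvapi = LoadLibraryA("nvapi64.dll");   // "nvapi.dll" for 32-bit processes
    if (!nvapi) {
        std::printf("nvapi64.dll not found (NVIDIA driver not installed?)\n");
        return 1;
    }

    auto queryInterface = reinterpret_cast<NvQueryInterface_t>(
        GetProcAddress(nvapi, "nvapi_QueryInterface"));
    if (!queryInterface) {
        std::printf("nvapi_QueryInterface export not found\n");
        return 1;
    }

    // Hypothetical ID: the real IDs for clock/voltage control are undocumented
    // and have to be recovered by watching which ones a tool like Afterburner requests.
    constexpr uint32_t kHypotheticalClockControlId = 0xDEADBEEF;
    void* fn = queryInterface(kHypotheticalClockControlId);
    std::printf("function pointer for 0x%08X: %p\n", kHypotheticalClockControlId, fn);

    FreeLibrary(nvapi);
    return 0;
}
```

Everything past that point, which IDs actually control clocks and voltages and what structures they expect, is the proprietary part the article digs into.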

overclock.png

There is a link to the finished version of this utility at the end of the article, as well as the entire process with all source code. It makes for an interesting read (even for the painfully inept at programming, such as myself), and the provided link to download this mysterious overclocking utility (disguised as a JPG image file, no less) makes it both tempting and a little dubious. Does this really allow overclocking of any NVIDIA GPU, including mobile? What could be the harm in trying?? In all seriousness, however, since some of what was seemingly uncovered in the article is no doubt proprietary, how long will this information be available?

It would probably be wise to follow the link to the Wordpress page ASAP!

Source: Wordpress
Manufacturer: PC Perspective

It's Basically a Function Call for GPUs

Mantle, Vulkan, and DirectX 12 all claim to reduce overhead and provide a staggering increase in “draw calls”. As mentioned in the previous editorial, the way tasks are loaded onto the graphics card will change drastically in these new APIs. With DirectX 10 and earlier, applications would assign attributes to (what they are told is) the global state of the graphics card. After everything is configured and bound, one of a few “draw” functions is called, which queues the task in the graphics driver as a “draw call”.

While this suggests that just a single graphics device is to be defined, which we also mentioned in the previous article, it also implies that one thread needs to be the authority. This limitation was known for a while, and it contributed to the meme that consoles can squeeze out all the performance they have while PCs are “too high level” for that. Microsoft tried to combat this with “Deferred Contexts” in DirectX 11. This feature allows virtual, shadow states to be recorded from secondary threads, which can then be appended to the global state, whole. It was a compromise between each thread being able to create its own commands and the legacy decision to have a single, global state for the GPU.
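To make the two models concrete, here is a rough C++ sketch of the DirectX 11 approach, assuming a device, immediate context, and vertex buffer created elsewhere and omitting all error handling; the helper function names are mine, not from any article or SDK sample.

```cpp
// Sketch of the DirectX 11 model: the main thread binds state to the one
// global (immediate) context and draws, while a worker thread can only record
// into a deferred context, producing a command list that is later replayed,
// whole, on the immediate context.
#include <d3d11.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Main thread: classic bind-then-draw against the global state.
void DrawOnImmediateContext(ID3D11DeviceContext* immediate,
                            ID3D11Buffer* vb, UINT stride) {
    UINT offset = 0;
    immediate->IASetVertexBuffers(0, 1, &vb, &stride, &offset);
    immediate->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
    immediate->Draw(3, 0);                               // queued as one draw call
}

// Worker thread: record into a shadow (deferred) context.
ComPtr<ID3D11CommandList> RecordOnWorkerThread(ID3D11Device* device,
                                               ID3D11Buffer* vb, UINT stride) {
    ComPtr<ID3D11DeviceContext> deferred;
    device->CreateDeferredContext(0, &deferred);

    UINT offset = 0;
    deferred->IASetVertexBuffers(0, 1, &vb, &stride, &offset);
    deferred->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
    deferred->Draw(3, 0);

    ComPtr<ID3D11CommandList> commandList;
    deferred->FinishCommandList(FALSE, &commandList);    // close the recording
    return commandList;
}

// Back on the main thread: the recorded work is appended to the global state.
void Submit(ID3D11DeviceContext* immediate, ID3D11CommandList* commandList) {
    immediate->ExecuteCommandList(commandList, TRUE);    // restore context state after
}
```

Note that the worker thread never talks to the GPU directly; it only records a command list that the owning thread replays against the one global state, which is exactly the compromise described above.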

Some developers experienced gains, while others lost a bit. It didn't live up to expectations.

pcper-2015-dx12-290x.png

The paradigm used to load graphics cards is the problem. It doesn't make sense anymore. A developer might not want to draw a primitive with every poke of the GPU. At times, they might want to shove a workload of simple linear algebra through it, while other requests could simply be pushing memory around to set up a later task (or to read the result of a previous one). More importantly, any thread could want to do this to any graphics device.
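As a rough illustration of where the new APIs land, here is a hedged C++ sketch in DirectX 12 terms. The device, queues, per-thread allocators, root signature, pipeline state, and resources are assumed to exist already, and the helper names are illustrative only; the point is simply that any thread can record compute or copy work, and submission is a separate, explicit step.

```cpp
// Sketch of the new model: any thread records its own command list -- a compute
// dispatch (the "simple linear algebra" case) or a plain buffer copy (the
// "pushing memory around" case) -- and then hands it to whichever queue fits.
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Worker thread A: record a compute dispatch into its own command list.
// 'allocator' is a D3D12_COMMAND_LIST_TYPE_COMPUTE allocator owned by this thread.
ComPtr<ID3D12GraphicsCommandList> RecordComputeWork(ID3D12Device* device,
                                                    ID3D12CommandAllocator* allocator,
                                                    ID3D12PipelineState* computePso,
                                                    ID3D12RootSignature* rootSig) {
    ComPtr<ID3D12GraphicsCommandList> list;
    device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_COMPUTE,
                              allocator, computePso, IID_PPV_ARGS(&list));
    list->SetComputeRootSignature(rootSig);
    list->Dispatch(256, 1, 1);            // pure compute work, no primitive drawn
    list->Close();
    return list;
}

// Worker thread B: just move memory around to set up a later task.
// 'allocator' is a D3D12_COMMAND_LIST_TYPE_COPY allocator owned by this thread.
ComPtr<ID3D12GraphicsCommandList> RecordCopyWork(ID3D12Device* device,
                                                 ID3D12CommandAllocator* allocator,
                                                 ID3D12Resource* dst,
                                                 ID3D12Resource* src,
                                                 UINT64 bytes) {
    ComPtr<ID3D12GraphicsCommandList> list;
    device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_COPY,
                              allocator, nullptr, IID_PPV_ARGS(&list));
    list->CopyBufferRegion(dst, 0, src, 0, bytes);
    list->Close();
    return list;
}

// Any thread: hand finished lists to whichever queue matches the work.
void Submit(ID3D12CommandQueue* queue, ID3D12GraphicsCommandList* list) {
    ID3D12CommandList* lists[] = { list };
    queue->ExecuteCommandLists(1, lists);
}
```

Vulkan follows the same general shape, with command buffers recorded on any thread and then submitted to queues explicitly.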

pcper-2015-dx12-980.png

The new graphics APIs allow developers to submit their tasks quicker and smarter, and they allow the drivers to schedule compatible tasks better, even simultaneously. In fact, the driver's job has been massively simplified altogether. When we tested 3DMark back in March, two interesting things were revealed:

  • Both AMD and NVIDIA are only a two-digit percentage of draw call performance apart
  • Both AMD and NVIDIA saw an order of magnitude increase in draw calls

Read on to see what this means for games and game development.

Khronos Group at SIGGRAPH 2015

Subject: Graphics Cards, Processors, Mobile, Shows and Expos | August 10, 2015 - 09:01 AM |
Tagged: vulkan, spir, siggraph 2015, Siggraph, opengl sc, OpenGL ES, opengl, opencl, Khronos

When the Khronos Group announced Vulkan at GDC, they mentioned that the API is coming this year, and that this date is intended to under-promise and over-deliver. Recently, fans were hoping that it would be published at SIGGRAPH, which officially began yesterday. Unfortunately, Vulkan has not been released. It does hold a significant chunk of the news, however. Also, it's not like DirectX 12 is holding a commanding lead at the moment: the headers have been public for only a few months, and the code samples are less than two weeks old.

khronos-2015-siggraph-sixapis.png

The organization made announcements for six products today: OpenGL, OpenGL ES, OpenGL SC, OpenCL, SPIR, and, as mentioned, Vulkan. They wanted to make clear their commitment to all of their standards. Vulkan is urgent, but some developers will still want the framework of OpenGL: bind what you need to the context, then issue a draw, and, if you do it wrong, the driver will often clean up the mess for you anyway (a rough sketch of that model follows below). The briefing was structured to make it evident that OpenGL is still on their mind, which is likely why they made sure three OpenGL logos greeted me in their slide deck as early as possible. They are also taking and closely examining feedback about who wants to use Vulkan or OpenGL, and why.
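For contrast with the explicit command submission of Vulkan, here is a minimal C++ sketch of that classic bind-then-draw pattern, assuming a context, shader program, and vertex array object created elsewhere (and a loader such as glad for the post-1.1 entry points); the function name is illustrative.

```cpp
// The classic OpenGL model: latch state onto the single global context,
// then issue a draw. Validation is lazy, and a bad binding tends to be
// tolerated or reported quietly rather than failing loudly.
#include <glad/glad.h>   // function loader for modern GL entry points

void DrawFrameOldSchool(GLuint program, GLuint vao) {
    glUseProgram(program);              // bind to the global context...
    glBindVertexArray(vao);
    glDrawArrays(GL_TRIANGLES, 0, 3);   // ...then draw

    // Errors surface lazily, if at all; checking after the fact is often the
    // first hint that something was bound incorrectly.
    if (glGetError() != GL_NO_ERROR) {
        // In practice the driver has usually "cleaned up the mess" already.
    }
}
```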

As for Vulkan, confirmed platforms have been announced. Vendors have committed to drivers on Windows 7, 8, and 10, Linux (including SteamOS), and Tizen (OSX and iOS are absent, though). Beyond all of that, Google will accept Vulkan on Android. This is a big deal, as Google, despite its open nature, has been avoiding several Khronos Group standards. For instance, Nexus phones and tablets do not have OpenCL drivers, although Google isn't stopping third parties, like Samsung and NVIDIA, from rolling it into their devices. Direct support of Vulkan should help cross-platform development and, more importantly, better target the multi-core, relatively slow-threaded processors of those devices. This could even be of significant use for web browsers, especially on sites with a lot of simple 2D effects. Google is also contributing support from its drawElements Quality Program (dEQP), a conformance test suite that it bought back in 2014. They are going to expand it to Vulkan so that developers will have more consistency between devices, which is a big win for Android.

google-android-opengl-es-extensions.jpg

While we're not done with Vulkan, one of the biggest announcements is OpenGL ES 3.2, and it fits here nicely. At around the time that OpenGL ES 3.1 brought Compute Shaders to the embedded platform, Google launched the Android Extension Pack (AEP). This absorbed OpenGL ES 3.1 and added Tessellation, Geometry Shaders, and ASTC texture compression to it. It also created more tension between Google and cross-platform developers, who felt like Google was trying to pull its developers away from the Khronos Group. Today, OpenGL ES 3.2 was announced and includes each of the AEP features, plus a few more (like “enhanced” blending). Better yet, Google will support it directly.

Next up are the desktop standards, before we finish with a resurrected embedded standard.

OpenGL gained a few new extensions. One interesting addition is the ability to assign the locations of the multi-samples within a pixel. There is a whole list of sub-pixel layouts, such as rotated grid and Poisson disc, and this extension apparently allows developers to choose among them, as certain algorithms work better or worse for certain geometries and structures. There were probably vendor-specific extensions for this for a while, but now it's a ratified one. Another extension allows “streamlined sparse textures”, which helps manage data where the number of unpopulated entries outweighs the number of populated ones.

OpenCL 2.0 was given a refresh, too. It contains a few bug fixes and clarifications that should help it be adopted. C++ headers were also released, although I cannot comment much on them; I do not know the state that OpenCL 2.0 was in before now.

And this is when we make our way back to Vulkan.

khronos-2015-siggraph-spirv.png

SPIR-V, the code that runs on the GPU (or other offloading device, including the other cores of a CPU) in OpenCL and Vulkan, is seeing a lot of community support. Projects are under way to allow developers to write GPU code in several interesting languages: Python, .NET (C#), Rust, Haskell, and many more. The slide lists nine that Khronos Group knows about, but those four are pretty interesting. Again, this means you can write code in the aforementioned languages and have it run directly on a GPU. Curiously missing is HLSL, and the President of Khronos Group agreed that it would be a useful language. The ability to cross-compile HLSL into SPIR-V would mean that shader code written for DirectX 9, 10, 11, and 12 could be compiled for Vulkan. He expects that it won't take long for such a project to start, and one might already be happening somewhere beyond the reach of his Google searches. Regardless, those who are afraid to program in the C-like GLSL and HLSL shading languages might find C# and Python to be a bit more their speed, and those options seem to be coming through SPIR-V.

As mentioned, we'll end on something completely different.

khronos-2015-siggraph-sc.png

For several years, OpenGL SC has been on hiatus. This group defines standards for graphics (and soon GPU compute) in “safety critical” applications. For the longest time, this meant aircraft. The dozens of planes (which I assume means dozens of models of planes) that adopted this technology were fine with a fixed-function pipeline. It has been about ten years since OpenGL SC 1.0 launched, which was based on OpenGL ES 1.0. SC 2.0 is planned to launch in 2016 and will be based on the much more modern OpenGL ES 2 and ES 3 APIs that allow pixel and vertex shaders. The Khronos Group is asking for participation to direct SC 2.0, as well as a future graphics and compute API that is potentially based on Vulkan.

The devices that this platform intends to target are: aircraft (again), automobiles, drones, and robots. There are a lot of ways that GPUs can help these devices, but they need a good API to certify against. It needs to withstand more than an Ouya, because crashes could be much more literal.