GTA V: The GPU review

Subject: Graphics Cards | April 21, 2015 - 04:07 PM |
Tagged: GTA5, gaming, titan x, GTX 980, R9 290X, r9 295x2

Some sort of game involving driving stolen cars into prostitutes in an open sore world has arrived, and questions about what it takes to make the game look good are popping up like pills.  [H]ard|OCP seems to have heard of the game and tested its performance on the top-performing video cards from AMD and NVIDIA, in both singles and doubles.  You will get more out of a double, but unfortunately only around a 50% improvement, so obviously that second shot is watered down a bit.  In the end the GTX TITAN X was the best choice for those who want to crank everything up, with the 980 tasting slightly better than the 290X for those who actually have to ask the price.  Check the full review here.


"Grand Theft Auto V has finally been released on the PC. In this preview we will look at some video card comparisons in performance, maximize graphics settings at 1440p and 4K. We will briefly test AMD CHS Shadow and NVIDIA PCSS shadow and talk about them. We will even see if SLI and CrossFire work."

Here are some more Graphics Card articles from around the web:


Source: [H]ard|OCP

Red Hat Joins Khronos Group

Subject: General Tech, Graphics Cards | April 20, 2015 - 07:30 AM |
Tagged: Red Hat, Khronos

With a brief blog post, Red Hat has announced that they are now members of the Khronos Group. Red Hat, one of the largest vendors of Linux software and services, would like to influence the direction of OpenGL and the upcoming Vulkan API. Apart from Valve, they are also one of the few Linux vendors that contribute to the Khronos Group as an organization. I hope that their input counter-balances Apple, Google, and Microsoft, who are all members, in areas that benefit the open-source operating system.


For now, Red Hat intends to use their membership to propose OpenGL extensions as well as influence Vulkan, as previously mentioned. It also seems reasonable that they would push for Vulkan extensions, which the Khronos Group confirmed at GDC the API will support, especially if something they need fails to reach “core” status. While this feels late, I am glad that they at least joined now.

Source: Red Hat

Moore's Law Is Fifty Years Old!

Subject: General Tech, Graphics Cards, Processors | April 19, 2015 - 02:08 PM |
Tagged: moores law, Intel

While he was the director of research and development at Fairchild Semiconductor, Gordon E. Moore predicted that the number of components in an integrated circuit would double every year. Later, this cadence would slow to every two years; you can occasionally hear people talk about eighteen months too, but I am not sure who derived that number. A few years after the prediction, he would go on to found Intel with Robert Noyce, where the company now spends tens of billions of dollars annually to keep up with the prophecy.


It works out for the most part, but we have been running into physical issues over the last few years. One major issue is that, with process technology dipping into the single- and low double-digit nanometers, we are running out of physical atoms to manipulate. The spacing between silicon atoms in a solid at room temperature is about 0.5nm, so a 14nm feature is only about 28 atoms across, give or take a few for rounding error.
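As a quick sanity check on those numbers, here is a back-of-the-envelope sketch in Python. It uses only the ~0.5 nm atomic spacing and the two-year doubling cadence mentioned above, so treat it as illustrative arithmetic rather than anything rigorous.

```python
# Back-of-the-envelope arithmetic for the figures above.
# Assumes the ~0.5 nm silicon atom spacing and the two-year doubling
# cadence quoted in the text; purely illustrative.

SILICON_SPACING_NM = 0.5   # approximate atom-to-atom spacing in solid silicon
DOUBLING_PERIOD_YEARS = 2  # the revised Moore's Law cadence

def atoms_across(feature_nm: float) -> float:
    """How many silicon atoms span a feature of the given size."""
    return feature_nm / SILICON_SPACING_NM

def density_growth(years: float) -> float:
    """Factor by which component counts grow after `years` of doublings."""
    return 2 ** (years / DOUBLING_PERIOD_YEARS)

print(atoms_across(14))                  # ~28 atoms across a 14 nm feature
print(f"{density_growth(50):,.0f}")      # ~33 million-fold growth over 50 years
```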

Josh has a good editorial that discusses this implication with a focus on GPUs.

It has been a good fifty years since the start of Moore's Law. Humanity has been developing plans for how to cope with the eventual end of silicon lithography process shrinks. We will probably transition to smaller atoms and molecules and later consider alternative technologies like photonic crystals, which route light in the hundreds of terahertz through a series of waveguides that make up an integrated circuit. Another interesting thought: will these technologies fall in line with Moore's Law in some way?

Source: Tom Merritt

NVIDIA Released 350.12 WHQL and AMD Released 15.4 Beta for Grand Theft Auto V

Subject: Graphics Cards | April 14, 2015 - 01:27 AM |
Tagged: nvidia, amd, GTA5

Grand Theft Auto V launched worldwide today at around midnight, UK time, which corresponded to 7 PM EDT for those of us in North America. Well, add a little time for Steam to unlock the title and a bit longer for Rockstar to get enough servers online. One thing you did not need to wait for was new video card drivers: both AMD and NVIDIA have day-one drivers that provide support.


You can get the NVIDIA drivers at their landing page

You can get the AMD drivers at their release notes

Personally, I ran the game for about a half hour on Windows 10 (Build 10049) with a GeForce GTX 670. Since these day-one drivers do not support the pre-release operating system, I tried running the game on 349.90 to see how it performed before upgrading. Surprisingly, it seems to be okay (apart from a tree that flickered in and out of existence during a cut-scene). I would definitely update my drivers if a supported release were available, but I'm glad the game seems to be playable even on Windows 10.

Source: AMD
Manufacturer: MSI

Notebook Specifications

Way back in January of this year, while attending CES 2015 in Las Vegas, we wandered into the MSI suite without any idea of what new and exciting products we might see. Besides the GT80 notebook with its mechanical keyboard, the MSI GS30 Shadow was easily the most interesting and exciting piece of technology on display. Although MSI is not the first company to try this, the Shadow is the most recent attempt to combine the benefits of a thin and light notebook with the performance of a discrete, high-end GPU housed in a docking station.


The idea has always been simple but the implementation has always been complex. Take a thin, light, moderately powered notebook that is usable and high quality in its own right and combine it with the ability to connect a discrete GPU at home for gaming. In theory, this is the best of both worlds: a notebook PC for mobile productivity with gaming capability courtesy of an external GPU. But as the years have gone on, more companies have tried and more have failed; the integration is just never as seamless as we hope.

Today we see if MSI and the GS30 Shadow can fare any better. Does the combination of a very high performance thin and light notebook and the GamingDock truly create a mobile and gaming system that is worth your investment?

Continue reading our review of the MSI GS30 Shadow Notebook and GamingDock!!

90-some percent of the performance for 70 percent of the price; PowerColor's PCS+ R9 290X

Subject: Graphics Cards | April 6, 2015 - 04:54 PM |
Tagged: factory overclocked, powercolor pcs+, R9 290X

The lowest priced GTX 980 on Amazon is currently $530 while the PowerColor PCS+ R9 290X is $380, about 72% of the price of the GTX 980.  The performance [H]ard|OCP saw after overclocking the 290X was much closer than that gap suggests: in some games it even matched the GTX 980, though it was usually about 5-10% slower, making it quite obvious which card is the better value.  The GTX 970 is a different story; you can find a card for $310 and its performance is only slightly behind the 290X, although the 290X takes a larger lead at higher resolutions.  Read through the review carefully, as the performance delta and overall smoothness vary from game to game, but unless you like paying to brag about a handful of extra frames, the 970 and 290X are the cards offering the best bang for your buck.
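To put the value argument in rough numbers, here is a small sketch using the street prices above; the relative performance figures are illustrative midpoints taken from the text ("5-10% slower", "slightly behind"), not measured results.

```python
# Rough performance-per-dollar comparison using the prices quoted above.
# Relative performance values are illustrative estimates based on the
# text, not benchmark data.

cards = {
    # name: (price in USD, performance relative to a GTX 980 = 1.00)
    "GTX 980":      (530, 1.00),
    "PCS+ R9 290X": (380, 0.92),  # roughly 5-10% behind the 980 when overclocked
    "GTX 970":      (310, 0.88),  # assumed: slightly behind the 290X
}

for name, (price, perf) in cards.items():
    print(f"{name:14s} {perf / price * 1000:.2f} relative performance per $1000")
# The 970 and 290X come out well ahead of the 980 on this metric.
```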


"Today we examine what value the PowerColor PCS+ R9 290X holds compared to overclocked GeForce GTX 970. AMD's Radeon R9 290X pricing has dropped considerably since launch and constitutes a great value and competition for the GeForce GTX 970. At $350 this may be an excellent value compared to the competition."

Here are some more Graphics Card articles from around the web:



Source: [H]ard|OCP

Saved so much using Linux you can afford a Titan X?

Subject: Graphics Cards | March 27, 2015 - 04:02 PM |
Tagged: gtx titan x, linux, nvidia

Perhaps somewhere out there is a Linux user who wants a TITAN X, and if so, they will like the results of Phoronix's testing.  The card works perfectly straight out of the box with the latest 346.47 driver as well as the 349.12 beta; if you want to use Nouveau, then don't buy this card.  The TITAN X did not win any awards for power efficiency, but in OpenCL tests, synthetic OpenGL benchmarks and Unigine on Linux it walked away a clear winner.  Phoronix, and many others, hope that AMD is working on an updated Linux driver to accompany the upcoming 300 series of cards, to help them be more competitive on open-source systems.

If you are sick of TITAN X reviews by now, just skip to their 22 GPU performance roundup of Metro Redux.


"Last week NVIDIA unveiled the GeForce GTX TITAN X during their annual GPU Tech Conference. Of course, all of the major reviews at launch were under Windows and thus largely focused on the Direct3D performance. Now that our review sample arrived this week, I've spent the past few days hitting the TITAN X hard under Linux with various OpenGL and OpenCL workloads compared to other NVIDIA and AMD hardware on the binary Linux drivers."

Here are some more Graphics Card articles from around the web:


Source: Phoronix
Manufacturer: Various

It's more than just a branding issue

As part of my look at the first wave of AMD FreeSync monitors hitting the market, I wrote an analysis of how the competing FreeSync and G-Sync technologies differ from one another. It was a complex topic that I tried to state as succinctly as possible, given the time constraints and the fact that the article was focused on FreeSync specifically. I'm going to include a portion of that discussion here, to recap:

First, we need to look inside the VRR window, the zone in which the monitor and AMD claim that variable refresh should work without tears and without stutter. On the LG 34UM67, for example, that range is 48-75 Hz, so frame rates between 48 FPS and 75 FPS should be smooth. Next we want to look above the window, at frame rates above the 75 Hz maximum refresh rate. Finally, and maybe most importantly, we need to look below the window, at frame rates under the minimum rated variable refresh target, which in this example is 48 FPS.

AMD FreeSync offers more flexibility for the gamer than G-Sync around this VRR window. Both above and below the variable refresh range, AMD allows gamers to continue to select a VSync enabled or disabled setting, and that setting is handled just as it is today whenever your game's frame rate falls outside the VRR window. So, for our 34UM67 example, if your game is capable of rendering at 85 FPS then you will either see tearing on your screen (if you have VSync disabled) or you will get a static frame rate of 75 FPS, matching the top refresh rate of the panel itself. If your game is rendering at 40 FPS, below the minimum of the VRR window, then you will again see tearing (with VSync off) or the potential for stutter and hitching (with VSync on).

But what happens with this FreeSync monitor, and with a theoretical G-Sync monitor, below the window? AMD's implementation means that you get the option of disabling or enabling VSync.  For the 34UM67, as soon as your game's frame rate drops under 48 FPS you will either see tearing on your screen or you will begin to see hints of stutter and judder as the typical (and previously mentioned) VSync concerns crop up again. At these lower frame rates (below the window) those artifacts will actually impact your gaming experience much more dramatically than at the higher frame rates above the window.
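To make that behavior concrete, here is a minimal sketch of the outside-the-window logic as described above, using the 34UM67's 48-75 Hz range; the function and its return strings are my own illustration, not anything from AMD's driver.

```python
# Minimal sketch of how a FreeSync panel handles a given game frame rate,
# based on the behavior described above for the LG 34UM67 (48-75 Hz window).
# Function names and outputs are illustrative, not AMD driver code.

VRR_MIN, VRR_MAX = 48, 75  # the 34UM67's variable refresh window, in Hz

def freesync_behavior(fps: float, vsync_on: bool) -> str:
    if VRR_MIN <= fps <= VRR_MAX:
        return f"VRR active: panel refreshes at {fps:.0f} Hz, no tearing or stutter"
    if fps > VRR_MAX:
        return ("capped at 75 FPS (VSync on)" if vsync_on
                else "tearing above the window (VSync off)")
    # below the window: classic VSync trade-offs return
    return ("stutter/judder as classic VSync takes over" if vsync_on
            else "tearing below the window (VSync off)")

print(freesync_behavior(85, vsync_on=False))  # tearing above the window
print(freesync_behavior(40, vsync_on=True))   # stutter below the window
```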

G-Sync treats this “below the window” scenario very differently. Rather than reverting to VSync on or off, the module in the G-Sync display is responsible for re-drawing the screen when the frame rate dips below the minimum refresh of the panel, which would otherwise be affected by flicker. So, on a 30-144 Hz G-Sync monitor, we have measured that when the frame rate drops to 29 FPS the display is actually refreshing at 58 Hz, each frame being “drawn” one extra time to avoid pixel flicker while still maintaining tear-free and stutter-free animation. If the frame rate dips to 25 FPS, the screen draws at 50 Hz. If the frame rate drops to something more extreme like 14 FPS, we actually see the module draw each frame four times, taking the refresh rate back up to 56 Hz. It’s a clever trick that preserves the goals of variable refresh and prevents a degradation of the gaming experience. But this method requires a local frame buffer and logic on the display controller to work; hence the current implementation in a G-Sync module.
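The G-Sync redraw trick can be sketched the same way. One simple rule that reproduces all three measured data points is to keep doubling the number of times each frame is drawn until the effective refresh clears the panel's minimum; this is a reconstruction from our measurements, not NVIDIA's disclosed algorithm.

```python
# Sketch of the G-Sync "below the window" redraw behavior measured above:
# the module redraws each frame enough times that the effective refresh
# rate stays at or above the panel minimum. Reconstructed from the
# observed numbers (29->58 Hz, 25->50 Hz, 14->56 Hz), not NVIDIA's code.

PANEL_MIN, PANEL_MAX = 30, 144  # a typical 30-144 Hz G-Sync panel

def gsync_refresh(fps: float) -> tuple[int, float]:
    """Return (redraws per frame, effective panel refresh in Hz)."""
    multiplier = 1
    while fps * multiplier < PANEL_MIN:
        multiplier *= 2   # doubling, rather than +1, matches the measured numbers
    return multiplier, fps * multiplier

for fps in (29, 25, 14):
    m, hz = gsync_refresh(fps)
    print(f"{fps} FPS -> each frame drawn {m}x, panel refreshes at {hz:.0f} Hz")
# 29 FPS -> 2x (58 Hz), 25 FPS -> 2x (50 Hz), 14 FPS -> 4x (56 Hz)
```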

As you can see, the topic is complicated. So Allyn and I (and an aging analog oscilloscope) decided to take it upon ourselves to try to understand, and teach, the implementation differences with the help of some science. The video below is where the heart of this story is focused, though I have some visual aids embedded after it.

Still not clear on what this means for frame rates and refresh rates on current FreeSync and G-Sync monitors? Maybe this will help.

Continue reading our story dissecting NVIDIA G-Sync and AMD FreeSync!!

Manufacturer: Futuremark

Our first DX12 Performance Results

Late last week, Microsoft approached me to see if I would be interested in working with them and with Futuremark on the release of the new 3DMark API Overhead Feature Test. Of course I jumped at the chance, with DirectX 12 being one of the hottest discussion topics among gamers, PC enthusiasts and developers in recent history. Microsoft set us up with the latest iteration of 3DMark and the latest DX12-ready drivers from AMD, NVIDIA and Intel. From there, off we went.

First we need to discuss exactly what the 3DMark API Overhead Feature Test is (and also what it is not). The feature test will be a part of the next revision of 3DMark, which will likely ship alongside the full Windows 10 release. Futuremark claims that it is the "world's first independent" test that allows you to compare the performance of three different APIs: DX12, DX11 and even Mantle.

It was almost one year ago that Microsoft officially unveiled the plans for DirectX 12: a move to a more efficient API that can better utilize the CPU and platform capabilities of future, and most importantly current, systems. Josh wrote up a solid editorial on what we believe DX12 means for the future of gaming, and in particular for PC gaming, that you should check out if you want more background on the direction DX12 has set.


One of the keys to DX12's improved efficiency is the ability for developers to get "closer to the metal," a phrase indicating that game and engine coders can access more of the system's power (CPU and GPU) without having their hands held by the API itself. The most direct benefit of this, as we saw with AMD's Mantle implementation over the past couple of years, is an increase in the number of draw calls that a given hardware system can push through a game engine.
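Conceptually, the feature test piles on more and more tiny draw calls per frame until the frame rate collapses, then reports the last sustainable count. A toy sketch of that measurement loop is below; submit_draw_call and present_frame are stand-ins of my own, since the real test drives D3D11, D3D12 and Mantle directly.

```python
# Toy sketch of an API overhead measurement loop: keep adding draw calls
# per frame until the frame rate falls below a threshold, then report the
# last sustainable draw call count. `submit_draw_call` and `present_frame`
# are hypothetical stand-ins for real API submission and present calls.

import time

FPS_FLOOR = 30      # stop once the frame rate drops below this
STEP = 10_000       # add this many draw calls each iteration

def measure_overhead(submit_draw_call, present_frame) -> int:
    draws = STEP
    while True:
        start = time.perf_counter()
        for _ in range(draws):
            submit_draw_call()          # tiny, unique draw call (no batching)
        present_frame()
        fps = 1.0 / (time.perf_counter() - start)
        if fps < FPS_FLOOR:
            return draws - STEP         # last count that held the frame rate floor
        draws += STEP

# Example with do-nothing stand-ins, just to show the shape of the loop:
print(measure_overhead(lambda: None, lambda: None))
```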

Continue reading our overview of the new 3DMark API Overhead Feature Test with early DX12 Performance Results!!

NVIDIA Quadro M6000 Announced

Subject: Graphics Cards | March 23, 2015 - 07:30 AM |
Tagged: quadro, nvidia, m6000, gm200

Alongside the TITAN X, NVIDIA has announced the Quadro M6000. In terms of hardware, they are basically the same component: 12 GB of GDDR5 on a 384-bit memory bus, 3072 CUDA cores, and double precision performance reduced to 1/32nd of single precision. The memory, but not the cache, is capable of ECC (error correction) for enterprises who do not want a stray photon to mess up their computations. That might be the only hardware difference between it and the TITAN X.
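For a sense of what that 1/32 ratio means in raw throughput, here is the standard peak-FLOPS arithmetic (two FLOPs per CUDA core per clock for fused multiply-add). The 1.0 GHz clock is an assumed, illustrative figure, since no clock speed is quoted here.

```python
# Peak-throughput arithmetic for the 1/32 FP64 ratio mentioned above.
# The 1.0 GHz clock is an assumption for illustration only; the CUDA
# core count and the 1/32 ratio are from the text.

CUDA_CORES = 3072
CLOCK_GHZ = 1.0          # assumed clock, illustrative only
FP64_RATIO = 1 / 32

fp32_tflops = 2 * CUDA_CORES * CLOCK_GHZ / 1000   # 2 FLOPs per core per clock (FMA)
fp64_tflops = fp32_tflops * FP64_RATIO

print(f"FP32: ~{fp32_tflops:.1f} TFLOPS, FP64: ~{fp64_tflops:.2f} TFLOPS")
# -> roughly 6.1 TFLOPS single precision, 0.19 TFLOPS double precision
```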


Compared to other Quadro cards, it loses some double precision performance, as mentioned earlier, but it will be an upgrade in single precision (FP32). The add-in board connects to the power supply with just a single eight-pin plug. Technically, with its 250W TDP, it is slightly over the 225W that one eight-pin PCIe connector (150W) plus the slot (75W) are rated to deliver, but NVIDIA told Anandtech that they're confident it won't matter for the card's intended systems.

That is probably true, but I wouldn't put it past someone to do something spiteful given recent events.

The lack of double precision performance (IEEE 754 FP64) could be disappointing for some. While NVIDIA certainly knows their own market better than I do, I was under the impression that a common workstation configuration for GPU compute was a Quadro driving a few Teslas (such as two of these). It would seem weird for such a high-end Quadro to be paired with Teslas that have such a significant advantage in FP64 compute. I wonder what this means for the Tesla line, and whether we will see a variant of Maxwell with a large boost in 64-bit performance, or if that line will be in an awkward place until Pascal.

Or maybe not? Maybe NVIDIA is planning to launch products based on an unannounced, FP64-focused architecture? The aim could be to let the Quadro handle the heavy FP32 calculations while the customer loads up on co-processors according to their double precision needs. It's an interesting thought as I sit here musing to myself, but then I immediately wonder why they did not announce such a thing at GTC if that were the case. And if it were the case (honestly, I doubt it, because I'm just typing unfiltered thoughts here), you would think the two would kind-of need to be sold together. Or maybe not. I don't know.

Pricing and availability are not currently known, except that the latter is “soon”.

Source: Anandtech