Manufacturer: Microsoft

Things are about to get...complicated

Earlier this week, the team behind Ashes of the Singularity released an updated version of its early access game, expanding its features and capabilities. With support for both DirectX 11 and DirectX 12, plus multiple graphics card support, the game's benchmark mode got quite a lot of attention. We saw stories based on that software posted by Anandtech, Guru3D and ExtremeTech, all of which had varying views on the advantages of one GPU or another.

That isn’t the focus of my editorial here today, though.

Shortly after the initial release, a discussion began around results from the Guru3D story that measured frame time consistency and smoothness with FCAT, a capture-based testing methodology much like the Frame Rating process we use here at PC Perspective. In a post on ExtremeTech, Joel Hruska claims that the results and conclusions from Guru3D are wrong because the FCAT capture method assumes its measured output matches what the user actually experiences. Maybe everyone is wrong?

First, a bit of background: I have been working with Oxide and the Ashes of the Singularity benchmark for a couple of weeks, hoping to put together a story that I was happy with and felt was complete before having to head out the door to Barcelona for Mobile World Congress. That didn't happen; such is life with an 8-month-old. But in my time with the benchmark, I found a couple of things that were very interesting, even concerning, that I was working through with the developers.

vsyncoff.jpg

FCAT overlay as part of the Ashes benchmark

First, the initial implementation of the FCAT overlay (which Oxide should be PRAISED for including, since we don't have, and likely won't have, a universal DX12 variant) was done incorrectly, with duplicated color swatches that made the results from capture-based testing inaccurate. I don't know if Guru3D used that version for its FCAT testing, but I was able to get updated EXEs of the game from the developer in order to get the overlay working correctly. Once that was corrected, I found yet another problem: an issue with frame presentation order on NVIDIA GPUs that likely has to do with asynchronous shaders. Whether that issue is on the NVIDIA driver side or the game engine side is still being investigated by Oxide, but it's interesting to note that this problem couldn't have been found without a proper FCAT implementation.
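As a quick aside on why duplicated swatches matter: a FCAT-style overlay tints a thin bar on each rendered frame with the next color in a fixed, repeating sequence, and the analysis pass walks the captured video to count how long each color (and therefore each frame) actually stayed on screen. The snippet below is my own simplified model of that idea, not FCAT or Frame Rating code, but it shows how frame times fall out of the color sequence and why two frames sharing a color would be silently merged.

```c
/* Simplified model of capture-based overlay analysis (not FCAT itself).
 * Each captured scanline reports the overlay color index it carried. */
#include <stdio.h>

/* A change in color marks a new rendered frame; the scanline count is a
 * proxy for how long that frame was on screen. If two consecutive frames
 * reused the same color (a duplicated swatch), they would be counted as
 * one long frame here, which is exactly the inaccuracy described above. */
static void tally_frames(const int *colors, int count)
{
    int prev = -1, lines = 0;
    for (int i = 0; i < count; i++) {
        if (colors[i] != prev) {
            if (prev != -1)
                printf("frame (color %2d): %d scanlines\n", prev, lines);
            prev = colors[i];
            lines = 0;
        }
        lines++;
    }
    if (prev != -1)
        printf("frame (color %2d): %d scanlines\n", prev, lines);
}

int main(void)
{
    /* Example captured column: three frames, the middle one on screen
     * only briefly (a "runt" in Frame Rating terms). */
    int column[] = { 0, 0, 0, 0, 0, 0, 1, 2, 2, 2, 2, 2 };
    tally_frames(column, (int)(sizeof(column) / sizeof(column[0])));
    return 0;
}
```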

With all of that behind me, I set out to benchmark this latest version of Ashes under DX12 to measure performance across a range of AMD and NVIDIA hardware. The data showed some abnormalities, though: some results just didn't make sense in the context of what I was seeing in the game and what the overlay was indicating. It appeared that Vsync (vertical sync) was behaving differently than I had seen in any other game on the PC.

For the NVIDIA platform, tested using a GTX 980 Ti, the game seemingly starts up at random with Vsync on or off, with no clear indicator of the cause, despite the in-game settings being set the way I wanted them. The Frame Rating capture data was still usable, though; Vsync being enabled doesn't mean captured results can't be analyzed, it just changes how they have to be interpreted. I have written stories on what Vsync-enabled captured data looks like and what it means as far back as April 2013. Obviously, to get the best and most relevant data from Frame Rating, turning vertical sync off is ideal. Running into more frustration than answers, I moved over to an AMD platform.

Continue reading PC Gaming Shakeup: Ashes of the Singularity, DX12 and the Microsoft Store!!

JPR Notes Huge Increase in Enthusiast AIBs for 2015

Subject: Graphics Cards | February 29, 2016 - 02:06 PM |
Tagged: nvidia, amd, AIB, pc gaming

Jon Peddie Research, a market analysis firm that specializes in PC hardware, has compiled another report about add-in board (AIB) sales. There are a few interesting aspects to this report. First, shipments of enthusiast AIBs (i.e. discrete graphics cards) are up, not by a handful of percent, but two-fold. Second, AMD's GPU market share climbed once again, from 18.8% up to 21.1%.

jpr-aib-february-2016.png

This image seems to contradict their report, which claims the orange line rose from 44 million in 2014 to 50 million in 2015. I'm not sure where the error is, so I didn't mention it in the news post.
Image Credit: JPR

The report claims that neither AMD nor NVIDIA released a “killer new AIB in 2015.” That... depends on how you look at it. They're clearly referring to the upper mainstream, the cards that sit just below the flagships and contribute a large chunk of enthusiast sales. If they were including the flagships, then they ignored the Titan X, 980 Ti, and Fury line of GPUs, which would just be silly. Since they were counting shipped units, though, it makes sense to neglect those SKUs because they are priced well above the inflection point in actual adoption.

jpr-aib-february-2016-table.png

Image Credit: JPR

But that's not the only “well... sort-of” with JPR's statement. Unlike most generations, the GTX 970 and 980 launched late in 2014 rather than following the usual spring-ish cadence; apart from the GeForce GTX 580, that cadence had held since the GeForce 9000-series. As such, those late-2014 launches could have influenced 2015 much like an early-2015 product line would in a typical year. Add a bit of VR hype, plus the now-common knowledge that consoles are lower powered than PCs this generation, and these numbers make a little more sense.

Even so, a 100% increase in enthusiast AIB shipments is quite interesting. It doesn't only mean that game developers can target higher-end hardware; the same hardware used to consume content can also be used to create it, which boosts both sides of the artist / viewer conversation. Beyond its benefits to society, this could snowball into more GPU adoption going forward.

Source: JPR

MWC 16: Imagination Technologies Ray Tracing Accelerator

Subject: Graphics Cards, Mobile, Shows and Expos | February 23, 2016 - 08:46 PM |
Tagged: raytracing, ray tracing, PowerVR, mwc 16, MWC, Imagination Technologies

For the last couple of years, Imagination Technologies has been pushing hardware-accelerated ray tracing. One of the major problems in computer graphics is knowing what geometry and material corresponds to a specific pixel on the screen. Several methods exist, although typical GPUs crush a 3D scene into the virtual camera's 2D space and do a point-in-triangle test on it. Once they know where in the triangle the pixel is, if it is in the triangle at all, it can be colored by a pixel shader.
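To make that point-in-triangle test a little more concrete, here is a minimal edge-function version in C. Real GPUs do this with massively parallel fixed-function hardware and far more care (sub-pixel precision, fill rules, and so on), so treat this strictly as an illustration of the math:

```c
#include <stdbool.h>
#include <stdio.h>

/* Signed area of the parallelogram spanned by (b-a) and (c-a).
 * Positive when c lies to the left of the directed edge a->b. */
static float edge(float ax, float ay, float bx, float by, float cx, float cy)
{
    return (bx - ax) * (cy - ay) - (by - ay) * (cx - ax);
}

/* Point-in-triangle test: the point must be on the same side of all
 * three edges (counter-clockwise winding assumed). */
bool inside_triangle(float px, float py,
                     float x0, float y0, float x1, float y1, float x2, float y2)
{
    float w0 = edge(x1, y1, x2, y2, px, py);
    float w1 = edge(x2, y2, x0, y0, px, py);
    float w2 = edge(x0, y0, x1, y1, px, py);
    return w0 >= 0.0f && w1 >= 0.0f && w2 >= 0.0f;
}

int main(void)
{
    /* A triangle projected into screen space, and one candidate pixel. */
    printf("%s\n", inside_triangle(2.0f, 1.0f,
                                   0.0f, 0.0f, 4.0f, 0.0f, 0.0f, 4.0f)
                       ? "covered" : "not covered");
    return 0;
}
```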

imagtech-2016-PowerVR-GR6500-GPU-PowerVR-Wizard-GPUs.png

Another method is casting light rays into the scene and assigning a color based on the material each ray lands on. This is ray tracing, and it has a few advantages. First, it is much easier to handle reflections, transparency, shadows, and other effects where information is required beyond what the affected geometry and its material provide. There are usually ways around this without resorting to ray tracing, but they each have their own trade-offs. Second, it can be more efficient for certain data sets. Rasterization, since it's based around a “where in a triangle is this point” algorithm, needs geometry to be made up of polygons.

It also has the appeal of being what the real world sort-of does (assuming we don't need to model Gaussian beams). That doesn't necessarily mean anything, though.

At Mobile World Congress, Imagination Technologies once again showed off its ray tracing hardware, embodied in the PowerVR GR6500 GPU. This graphics processor has dedicated circuitry to calculate rays, and the company uses it in a couple of different ways. They presented several demos that modified Unity 5 to take advantage of the ray tracing hardware. One particularly interesting example was a quick, seven-second video that added ray-traced reflections atop an otherwise rasterized scene. It was a little too smooth, creating reflections that were too glossy, but that could probably be downplayed in the material (Update, Feb 24th @ 5pm: Car paint actually is that glossy; it's a different issue). Back when I was working on a GPU-accelerated software renderer, before Mantle, Vulkan, and DirectX 12, I was hoping to use OpenCL-based ray traced highlights on otherwise idle GPUs if I didn't have any other purpose for them. Now, though, those GPUs can be exposed to graphics APIs directly, so they might not be so idle.

The downside of dedicated ray tracing hardware is that, well, the die area could have been used for something else. Extra shaders, for compute, vertex, and material effects, might be more useful in the real world... or maybe not. Add in the fact that fixed-function circuitry already exists for rasterization, and you have to weigh the gain against the cost.

It could be cool, but it has its trade-offs, like anything else.

Valve Releases SteamVR Performance Test - Is Your Rig Ready?

Subject: Graphics Cards | February 22, 2016 - 06:03 PM |
Tagged: vive, valve, steamvr, steam, rift, performance test, Oculus, htc

Though I am away from my stacks of hardware at the office attending Mobile World Congress in Barcelona, Valve dropped a bomb on us today in the form of a new hardware performance test that gamers can use to determine if they are ready for the SteamVR revolution. The aptly named "SteamVR Performance Test" is a free title available through Steam that any user can download and run to get a report card on their installed hardware. No VR headset required!

And unlike the Oculus Compatibility Checker, the application from Valve runs actual game content to measure your system. Oculus' app only looks at the hardware installed in your system for certification, without taking its actual performance into account in any way. (Overclockers and users with Ivy Bridge Core i7 processors have been reporting failed results on the Oculus test for some time.)

screen1.jpg

The SteamVR Performance Test runs a set of scenes from the Aperture Science Robot Repair demo, an experience developed directly for the HTC Vive and one that I was able to run through during CES last month. Valve is using a very interesting new feature called "dynamic fidelity" that adjusts the game's image quality to avoid dropped frames and frame rates under 90 FPS, in order to maintain a smooth and comfortable experience for the VR user. Though this is the first time I have seen it used, it sounds similar to what John Carmack did with the id Tech 5 engine, attempting to balance performance across hardware while maintaining a targeted frame rate.
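Valve hasn't published how dynamic fidelity works internally, but the general shape of such a system is easy to imagine: compare the last frame's render time against the roughly 11.1 ms budget that 90 Hz allows and nudge a quality knob, typically render resolution scale, up or down. The sketch below is purely my own guess at that kind of control loop; the thresholds and names are placeholders, not anything taken from the SteamVR Performance Test.

```c
#include <stdio.h>

#define TARGET_FRAME_MS (1000.0 / 90.0)   /* 90 Hz VR refresh budget, ~11.1 ms */

/* Hypothetical controller: shrink the render scale when a frame approaches
 * the budget, grow it slowly when there is comfortable headroom. */
double adjust_fidelity(double render_scale, double last_frame_ms)
{
    if (last_frame_ms > TARGET_FRAME_MS * 0.9)        /* in danger of missing refresh */
        render_scale *= 0.90;
    else if (last_frame_ms < TARGET_FRAME_MS * 0.7)   /* plenty of headroom */
        render_scale *= 1.02;

    if (render_scale < 0.5) render_scale = 0.5;       /* clamp to sane bounds */
    if (render_scale > 1.5) render_scale = 1.5;
    return render_scale;
}

int main(void)
{
    double scale = 1.0;
    double frame_times[] = { 9.0, 12.5, 13.0, 8.0, 7.5 };  /* ms, made-up samples */
    for (int i = 0; i < 5; i++) {
        scale = adjust_fidelity(scale, frame_times[i]);
        printf("frame %d: %.1f ms -> render scale %.2f\n", i, frame_times[i], scale);
    }
    return 0;
}
```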

The technology could be a perfect match for VR content, where keeping frame rates at or above the 90 FPS target is more important than visual fidelity in nearly all cases. I am curious to see how Valve may or may not pursue and push this technology in its own games and for the Vive / Rift in general. I have some questions pending with them, so we'll see what they come back with.

fury.png

A result for a Radeon R9 Fury provided by AMD

Valve's test offers a very simple three-tiered breakdown for your system: Not Ready, Capable and Ready. For a more detailed explanation, you can expand the data to see metrics like the number of frames that were CPU bound, the number of frames below the very important 90 FPS mark, and how many frames were tested in the run. The Average Fidelity metric is the number that we are reporting below; it essentially tells us "how much quality" the test estimates you can run at while maintaining that 90 FPS mark. What else that fidelity result means is still unknown - but again, we are trying to find out. The short answer is that the higher that number goes, the better off you are, and the more demanding game content you'll be able to run at acceptable performance levels. At least, according to Valve.

screen2.jpg

Because I am not at the office to run my own tests, I decided to write up this story using results from a third party. That third party is AMD - let the complaining begin. Obviously this does NOT count as independent testing but, in truth, it would be hard to cheat on these results unless you went WAY out of your way to change control panel settings, etc. The test runs itself, and AMD detailed the hardware and drivers used for the results.

  • Intel i7-6700K
  • 2x4GB DDR4-2666 RAM
  • Z170 motherboard
  • Radeon Software 16.1.1
  • NVIDIA driver 361.91
  • Win10 64-bit

GPU                   Score
2x Radeon R9 Nano     11.0
GeForce GTX 980 Ti    11.0
Radeon R9 Fury X       9.6
Radeon R9 Fury         9.2
GeForce GTX 980        8.1
Radeon R9 Nano         8.0
Radeon R9 390X         7.8
Radeon R9 390          7.0
GeForce GTX 970        6.5

These results were provided by AMD in an email to the media. Take that for what you will until we can run our own tests.

First, the GeForce GTX 980 Ti is the highest performing single GPU tested, with a score of 11 - because of course it goes to 11. The same score is reported for the multi-GPU configuration with two Radeon R9 Nanos, so clearly we are seeing a ceiling in this version of the SteamVR Performance Test. With a single R9 Nano scoring 8.0 in the table above, that works out to only 38% scaling, but I think we are limited by the test's cap in this case. Either way, it's great news to see that AMD has affinity multi-GPU up and running, utilizing one GPU for each eye's rendering. (AMD pointed out that users who want to test the multi-GPU implementation will need to add the -multigpu launch option; see the note below.) I still need to confirm if GeForce cards scale accordingly. UPDATE: Ken at the office ran a quick check with a pair of GeForce GTX 970 cards with the same -multigpu option and saw no scaling improvements. It appears NVIDIA has work to do here.
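(For anyone who hasn't used Steam launch options before: right-click the SteamVR Performance Test in your Steam library, choose Properties, click Set Launch Options, and enter -multigpu.)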

Moving down the stack, it's clear why AMD was so excited to send out these early results. The R9 Fury X and R9 Fury both come out ahead of the GeForce GTX 980, while the R9 Nano, R9 390X and R9 390 post better scores than NVIDIA's GeForce GTX 970. This comes as no surprise - AMD's Radeon parts tend to offer better performance per dollar in many benchmarks and games.

badresult2.jpg

There is obviously a lot more to consider than the results this SteamVR Performance Test provides when picking hardware for a VR system, but we are glad to see Valve out in front of the many, many questions that are flooding forums across the web. Is your system ready??

Source: Valve

LinuxGameCast Benchmarks Vulkan on Linux

Subject: Graphics Cards | February 19, 2016 - 07:11 PM |
Tagged: vulkan, linux

Update: Venn continued to benchmark and came across a few extra discoveries. For example, he disabled VDPAU and jumped to 89.6 FPS in OpenGL and 80.6 FPS in Vulkan. Basically, be sure to read the whole thread; it may be updated further still. Original post below (unless otherwise stated).

On Windows, the Vulkan patch of The Talos Principle leads to a net loss in performance, relative to DirectX 11. This is to be expected when a developer like Croteam optimizes their game for existing APIs, and tries to port all that work to a new, very different standard, with a single developer and three months of work. They explicitly state, multiple times, not to expect good performance.

croteam-2016-talos-benchmark-vulkan.png

Image Credit: Venn Stone of LinuxGameCast

On Linux, Venn Stone of LinuxGameCast found different results. With everything maxed out at 1080p, his OpenGL benchmark reports 38.2 FPS, while the Vulkan path raises this to an average of 66.5 FPS. Granted, this was with an eight-core AMD FX-8150, which launched with the Bulldozer architecture back in 2011. It did not have the fastest single-threaded performance, falling behind even AMD's own earlier Phenom II parts in that regard.

Still, this is a scenario that allowed the game to scale across Bulldozer's multiple cores and circumvent a lot of the driver overhead in OpenGL. It resulted in roughly a 74% increase in performance (66.5 FPS vs. 38.2 FPS), at least for people who pair a GeForce 980 Ti (Update: The Ti was a typo; Venn uses a standard GeForce GTX 980.) with an eight-core Bulldozer CPU from 2011.

Report: NVIDIA Working on Another GeForce GTX 950 GPU

Subject: Graphics Cards | February 16, 2016 - 12:01 PM |
Tagged: rumor, report, nvidia, Maxwell 2.0, GTX 950 SE, GTX 950 LP, gtx 950, gtx 750, graphics card, gpu

A report from VideoCardz.com claims that NVIDIA is working on another GTX 950 graphics card, but not the 950 Ti you might have expected.

gtx950.jpg

Reference GTX 950 (Image credit: NVIDIA)

While the GTX 750 Ti was succeeded by the GTX 950 in August of last year, the higher specs for this new GPU came at the cost of a higher TDP (90W vs. 60W). This new rumored GTX 950, which might be called either 950 SE or 950 LP according to the report, would be a lower power version of the GTX 950, and would actually have a lot more in common with the outgoing GTX 750 Ti than the plain GTX 750 as we can see from this chart:

chart.png

(Image credit: VideoCardz)

As you can see the GTX 750 Ti is based on GM107 (Maxwell 1.0) and has 640 CUDA cores, 40 TUs, 16 ROPs, and it operates at 1020 MHz Base/1085 MHz Boost clocks. The reported specs of this new GTX 950 SE/LP would be nearly identical, though based on GM206 (Maxwell 2.0) and offering greater memory bandwidth (and slightly higher power consumption).

The VideoCardz report was sourced from Expreview, which claimed that this GTX 950 SE/LP product would arrive at some point next month. This report is a little more vague than some of the rumors we see, but it could very well be that NVIDIA has a planned replacement for the remaining Maxwell 1.0 products on the market. I would have personally expected to see a “Ti” product before any “SE/LP” version of the GTX 950, and this reported name seems more like an OEM product than a retail part. We will have to wait and see if this report is accurate.

Source: VideoCardz
Manufacturer: PC Perspective

Caught Up to DirectX 12 in a Single Day

The wait for Vulkan is over.

I'm not just talking about the specification. Members of the Khronos Group have also released compatible drivers, SDKs and tools to support them, conformance tests, and a proof-of-concept patch for Croteam's The Talos Principle. To reiterate, this is not a soft launch. The API, and its entire ecosystem, is out and ready for the public on Windows (at least 7+ at launch but a surprise Vista or XP announcement is technically possible) and several distributions of Linux. Google will provide an Android SDK in the near future.

khronos-2016-vulkan-why.png

I'm going to editorialize for the next two paragraphs. There was a concern that Vulkan would be too late. The thing is, as of today, Vulkan is just as mature as DirectX 12. Of course, that could change at a moment's notice; we still don't know how the two APIs are being adopted behind the scenes. A few DirectX 12 titles are planned to launch in a few months, but no full, non-experimental, non-early access game currently exists. Each time I say this, someone links the Wikipedia list of DirectX 12 games. If you look at each entry, though, you'll see that all of them are either early access, awaiting an unreleased DirectX 12 patch, or using a third-party engine (like Unreal Engine 4) that only lists DirectX 12 as an experimental preview. Besides, if that last category counts, then you'll need to accept The Talos Principle's proof-of-concept Vulkan patch, too.

But again, that could change. While today's launch speaks well to the Khronos Group and the API itself, it still needs to be adopted by third party engines, middleware, and software. These partners could, like the Khronos Group before today, be privately supporting Vulkan with the intent to flood out announcements; we won't know until they do... or don't. With the support of popular engines and frameworks, dependent software really just needs to enable it. This has not happened for DirectX 12 yet, and, now, there doesn't seem to be anything keeping it from happening for Vulkan at any moment. With the Game Developers Conference just a month away, we should soon find out.

khronos-2016-vulkan-drivers.png

But back to the announcement.

Vulkan-compatible drivers are launching today across multiple vendors and platforms, but I do not have a complete list. On Windows, I was told to expect drivers from NVIDIA for Windows 7, 8.x, and 10 on Kepler and Maxwell GPUs. The standard is compatible with Fermi GPUs, but NVIDIA does not plan on supporting the API for those users due to their low market share. That said, they are paying attention to user feedback and are not ruling it out, which probably means they are keeping an open mind in case some popular piece of software comes to depend upon Vulkan. I have not heard from AMD or Intel about Vulkan drivers as of this writing, one way or the other. They could even arrive day one.

On Linux, NVIDIA, Intel, and Imagination Technologies have submitted conformant drivers.

Drivers alone do not make a hard launch, though. SDKs and tools have also arrived, including the LunarG SDK for Windows and Linux. LunarG is a company co-founded by Jens Owen, who had a previous graphics software company that was purchased by VMware. LunarG is backed by Valve, which also backed Vulkan in several other ways. The LunarG SDK helps developers validate their code, inspect what the API is doing, and otherwise debug. Even better, it is also open source, which means the community can rapidly enhance it, even though it's already in a releasable state. RenderDoc, the open-source graphics debugger by Crytek, will also add Vulkan support. (Update (Feb 16 @ 12:39pm EST): Baldur Karlsson has just emailed me to let me know that it was a personal project at Crytek, not a Crytek project in general, and their GitHub page is much more up-to-date than the linked site.)

vulkan_gltransition_maintenance1.png

The major downside is that Vulkan (like Mantle and DX12) isn't simple.
These APIs are verbose and very different from previous ones, which requires more effort.

Image Credit: NVIDIA

There really isn't much to say about the Vulkan launch beyond this. What graphics APIs really try to accomplish is standardizing signals that enter and leave video cards, such that the GPUs know what to do with them. For the last two decades, we've settled on an arbitrary, single, global object that you attach buffers of data to, in specific formats, and call one of a half-dozen functions to send it.

Compute APIs, like CUDA and OpenCL, decided it was more efficient to handle queues, allowing the application to write commands and send them wherever they need to go. Multiple threads can write commands, and multiple accelerators (GPUs in our case) can be targeted individually. Vulkan, like Mantle and DirectX 12, takes this metaphor and adds graphics-specific instructions to it. Moreover, GPUs can schedule memory, compute, and graphics instructions at the same time, as long as the graphics task has leftover compute and memory resources, and / or the compute task has leftover memory resources.
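To give a feel for that queue-and-command-buffer model, here is a stripped-down Vulkan fragment in C. It assumes the instance, device, queue, render pass, framebuffer, and pipeline have already been created (that setup is where most of Vulkan's verbosity lives) and it omits synchronization and error checking entirely, so read it as a sketch of the programming model rather than a working renderer:

```c
/* Sketch: recording and submitting a Vulkan command buffer.
 * Setup of the device, swapchain, pipeline, etc. is assumed elsewhere. */
#include <vulkan/vulkan.h>

void record_and_submit(VkQueue queue, VkCommandBuffer cmd,
                       VkRenderPass renderPass, VkFramebuffer framebuffer,
                       VkExtent2D extent, VkPipeline pipeline)
{
    VkCommandBufferBeginInfo begin = {
        .sType = VK_STRUCTURE_TYPE_COMMAND_BUFFER_BEGIN_INFO,
    };
    vkBeginCommandBuffer(cmd, &begin);

    VkClearValue clear = { .color = { .float32 = { 0.f, 0.f, 0.f, 1.f } } };
    VkRenderPassBeginInfo rp = {
        .sType = VK_STRUCTURE_TYPE_RENDER_PASS_BEGIN_INFO,
        .renderPass = renderPass,
        .framebuffer = framebuffer,
        .renderArea = { .offset = { 0, 0 }, .extent = extent },
        .clearValueCount = 1,
        .pClearValues = &clear,
    };
    vkCmdBeginRenderPass(cmd, &rp, VK_SUBPASS_CONTENTS_INLINE);
    vkCmdBindPipeline(cmd, VK_PIPELINE_BIND_POINT_GRAPHICS, pipeline);
    vkCmdDraw(cmd, 3, 1, 0, 0);          /* one triangle, no vertex buffer */
    vkCmdEndRenderPass(cmd);
    vkEndCommandBuffer(cmd);

    /* Submission is explicit: the application decides which thread records
     * which command buffers, and which queue (on which GPU) receives them. */
    VkSubmitInfo submit = {
        .sType = VK_STRUCTURE_TYPE_SUBMIT_INFO,
        .commandBufferCount = 1,
        .pCommandBuffers = &cmd,
    };
    vkQueueSubmit(queue, 1, &submit, VK_NULL_HANDLE);
}
```

The important part is the last call: the application, not the driver, chooses when command buffers are built, on which threads, and where they are submitted.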

This is not necessarily a “better” way to do graphics programming... it's different. That said, it has the potential to be much more efficient when dealing with lots of simple tasks that are sent from multiple CPU threads, especially to multiple GPUs (which currently require the driver to figure out how to convert draw calls into separate workloads, leading to simplifications like mirrored memory and splitting the workload across neighboring frames). Lots of small tasks align well with video games, especially ones with lots of simple objects: strategy games, shooters with lots of debris, or any game with large crowds of people. As the API becomes ubiquitous, we'll see this bottleneck disappear, and games will no longer need to be designed around these limitations. It might even be used for drawing with cross-platform 2D APIs, like Qt, or even webpages, although those two examples (especially the Web) each have other, higher-priority bottlenecks. There are also other benefits to Vulkan.

khronos-2016-vulkan-middleware.png

The WebGL comparison is probably not as common knowledge as Khronos Group believes.
Still, Khronos Group was criticized when WebGL launched as "it was too tough for Web developers".
It didn't need to be easy. Frameworks arrived and simplified everything. It's now ubiquitous.
In fact, Adobe Animate CC (the successor to Flash Pro) is now a WebGL editor (experimentally).

Open platforms are required for this to become commonplace. Engines will probably target several APIs from their internal management APIs, but you can't target users who don't fit in any bucket. Vulkan brings this capability to basically any platform, as long as it has a compute-capable GPU and a driver developer who cares.

Thankfully, it arrived before any competitor established market share.

EVGA Releases GeForce GTX 980 Ti VR Edition

Subject: Graphics Cards | February 10, 2016 - 05:59 PM |
Tagged: VR, vive vr, Oculus, evga, 980 Ti

You might wonder what makes a graphics card “designed for VR,” but this is actually quite interesting. Rather than plugging your headset into the back of your desktop, EVGA includes a 5.25” bay that provides 2x USB 3.0 ports and 1x HDMI 2.0 connection. The use case is that some users will want to easily connect and disconnect their VR devices, which, knowing a few indie VR developers, seems to be a part of their workflow. The same may be true of gamers, but I'm not sure.

evga-2016-980Ti-VR-Header_BG.jpg

While the bay allows everything, including the HDMI plug via an on-card port, to be connected internally, you will need a spare USB 3.0 header on your motherboard to hook it up. It would have been interesting to see whether EVGA could have put a USB 3.0 controller on the add-in board itself, but that might have been impossible (or impractical) given that the PCIe connection would need to be shared with the GPU, not to mention the added board complexity. Also, most motherboards should have at least one spare header; if not, you can find USB 3.0 add-in cards with internal headers.

evga-2016-980ti.jpg

The card comes in two sub-versions, one with the NVIDIA-style blower cooler, and the other with EVGA's ACX 2.0+ cooler. I tend to prefer exposed fan GPUs because they're easier to blow air into after a few years, but you might have other methods to control dust.

Both are currently available for $699.99 on Newegg.com, while Amazon only lists the ACX 2.0+ cooler version, and that one is out of stock. It is also $699.99, though, so that should be the price to expect.

Source: EVGA
Manufacturer: Various

Early testing for higher end GPUs

UPDATE 2/5/16: Nixxes released a new version of Rise of the Tomb Raider today with some significant changes. I have added another page at the end of this story that looks at results with the new version of the game and a new AMD driver, and I've also included some SLI and CrossFire results.

I will fully admit to being jaded by the industry on many occasions. I love my PC games and I love hardware, but it takes a lot for me to get genuinely excited about anything. After hearing game reviewers talk up the newest installment of the Tomb Raider franchise, Rise of the Tomb Raider, since its release on the Xbox One last year, I've been waiting for its PC release to give it a shot with real hardware. As you'll see in the screenshots and video in this story, the game doesn't appear to disappoint.

rotr-screen1.jpg

Rise of the Tomb Raider takes the exploration and "tomb raiding" aspects that made the first games in the series successful and applies them to the visual quality and character design brought in with the reboot of the series a couple of years back. The result is a PC game that looks stunning at any resolution, and even more so in 4K, while pushing your hardware to its limits. When it comes to single GPU performance, even the GTX 980 Ti and Fury X struggle to keep their heads above water.

In this short article we'll look at the performance of Rise of the Tomb Raider with a handful of GPUs, leaning towards the high end of the product stack, and offer up my view on whether each hardware vendor is living up to expectations.

Continue reading our look at GPU performance in Rise of the Tomb Raider!!

Mods like memory; the Gainward GTX 960 Phantom 4GB

Subject: Graphics Cards | February 4, 2016 - 05:51 PM |
Tagged: gainward, GTX 960 Phantom 4GB, gtx 960, nvidia, 4GB

If you don't have a lot of cash on hand for games or hardware, a 4K adaptive sync monitor with two $600 GPUs and a collection of $80 AAA titles simply isn't on your radar. That doesn't mean you have to trade your love of gaming for occasional free-to-play sessions; you just have to adapt. A prime example is the die-hard Skyrim fans who have modded the game to oblivion over the past few years, alongside many other games and communities that may not be new but are still thriving. Chances are you are playing at 1080p, so a high-powered GPU is not needed; however, mods that upscale textures, among many others, do love huge tracts of RAM.

So for those outside of North America looking for a card they can afford after a bit of penny-pinching, check out Legion Hardware's review of the 4GB version of the Gainward GTX 960 Phantom. It won't break any benchmarking records, but it will let you play the games you love, and even new games as their prices inevitably decrease over time.

Image_03S.jpg

"Today we are checking out Gainward’s premier GeForce GTX 960 graphics card, the Phantom 4GB. Equipped with twice the memory buffer of standard cards, it is designed for extreme 1080p gaming. Therefore it will be interesting to see how the Phantom 4GB compares to a 2GB GTX 960..."

Here are some more Graphics Card articles from around the web:

Graphics Cards