Author:
Manufacturer: Intel

Our Intel Skylake launch coverage is intense! Make sure you hit up all the stories and videos that are interesting for you!

Today marks the release of Intel's newest CPU architecture, codenamed Skylake. I already posted my full review of the Core i7-6700K processor, so if you are looking for CPU performance and specification details on that part, you should start there. What we are looking at in this story is the answer to a very simple, but also very important question:

Is it time for gamers using Sandy Bridge systems to finally bite the bullet and upgrade?

I think you'll find that answer will depend on a few things, including your gaming resolution and aptitude for multi-GPU configuration, but even I was surprised by the differences I saw in testing.

sli1.jpg

Our testing scenario was quite simple: compare the gaming performance of an Intel Core i7-6700K processor and Z170 motherboard running both a single GTX 980 and a pair of GTX 980s in SLI against an Intel Core i7-2600K and Z77 motherboard using the same GPUs. I installed both the latest NVIDIA GeForce drivers and the latest Intel system drivers for each platform.

                Skylake System               Sandy Bridge System
Processor       Intel Core i7-6700K          Intel Core i7-2600K
Motherboard     ASUS Z170-Deluxe             Gigabyte Z68-UD3H B3
Memory          16GB DDR4-2133               8GB DDR3-1600
Graphics Card   1x GeForce GTX 980           1x GeForce GTX 980
                2x GeForce GTX 980 (SLI)     2x GeForce GTX 980 (SLI)
OS              Windows 8.1                  Windows 8.1

Our testing methodology follows our Frame Rating process, which uses hardware capture to measure frame times as they reach the screen (rather than trusting the software's interpretation).

If you aren't familiar with it, you should probably do a little research into our testing methodology, as it is quite different from others you may see online. Rather than using FRAPS to measure frame rates or frame times, we use a secondary PC to capture the output from the tested graphics card directly, then use post-processing on the resulting video to determine frame rates, frame times, frame variance and much more.

This amount of data can be pretty confusing if you are attempting to read it without the proper background, but I strongly believe that the results we present paint a much more thorough picture of performance than other options. So please, read up on the full discussion about our Frame Rating methods before moving forward!!

While there are literally dozens of files created for each “run” of benchmarks, there are several resulting graphs that FCAT produces, as well as several more that we are generating with additional code of our own.
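
To give a rough sense of what that additional code produces, here is a minimal sketch, assuming an already-extracted list of per-frame display times in milliseconds. It is not our actual FCAT or extraction code, just an illustration of how numbers like average FPS, percentile frame times, and frame-to-frame variance fall out of the captured data; the frame times below are made up.

    # Minimal sketch (not the actual FCAT/extraction pipeline) of the statistics
    # derived from per-frame display times pulled out of a captured video.
    from statistics import mean

    frame_times_ms = [16.7, 16.9, 17.2, 33.4, 16.5, 16.8, 17.0, 16.6, 45.1, 16.7]  # illustrative data

    avg_fps = 1000.0 / mean(frame_times_ms)
    sorted_ft = sorted(frame_times_ms)
    p99 = sorted_ft[int(0.99 * (len(sorted_ft) - 1))]        # approximate 99th percentile frame time
    swings = [abs(b - a) for a, b in zip(frame_times_ms, frame_times_ms[1:])]  # frame-to-frame variance

    print(f"Average FPS: {avg_fps:.1f}")
    print(f"99th percentile frame time: {p99:.1f} ms")
    print(f"Worst frame-to-frame swing: {max(swings):.1f} ms")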

If you need some more background on how we evaluate gaming performance on PCs, just check out my most recent GPU review for a full breakdown.

I only had time to test four different PC titles:

  • Bioshock Infinite
  • Grand Theft Auto V
  • GRID 2
  • Metro: Last Light

Continue reading our look at discrete GPU scaling on Skylake compared to Sandy Bridge!!

Which 980 Titanium should you put on your card?

Subject: Graphics Cards | August 4, 2015 - 06:06 PM |
Tagged: 980 Ti, asus, msi, gigabyte, evga, GTX 980 Ti G1 GAMING, GTX 980 Ti STRIX OC, GTX 980 Ti gaming 6g

If you've decided that the GTX 980 Ti is the card for you due to price, performance or other less tangible reasons, you will find that there are quite a few to choose from. Each has the same basic design, but the coolers and frequencies vary between manufacturers, as do the prices. That is why it is handy that The Tech Report has put together a roundup of four models for a direct comparison. In the article you will see the EVGA GeForce GTX 980 Ti SC+, Gigabyte's GTX 980 Ti G1 Gaming, MSI's GTX 980 Ti Gaming 6G and the ASUS Strix GTX 980 Ti OC Edition. The cards are not only checked for stock and overclocked performance; there are also noise levels and power consumption to think about, so check out the full review.

4cards.jpg

"The GeForce GTX 980 Ti is pretty much the fastest GPU you can buy.The aftermarket cards offer higher clocks and better cooling than Nvidia's reference design. But which one is right for you?"

Here are some more Graphics Card articles from around the web:

Graphics Cards

Rumor: NVIDIA to Replace Maxwell GTX 750 Ti

Subject: Graphics Cards | August 1, 2015 - 07:31 AM |
Tagged: nvidia, maxwell, gtx 960, gtx 950 ti, gtx 950

A couple of sites are claiming that NVIDIA intends to replace the first-generation GeForce GTX 750 Ti with more Maxwell, in the form of the GeForce GTX 950 and/or GTX 950 Ti. The general consensus is that it will run on a cut-down GM206 chip, which is currently found in the GTX 960. I will go light on the rumored specifications because this part of the rumor is single-source, from accounts of an HWBattle page that has since been deleted. But for a general ballpark of performance, the GTX 960 has a full GM206 chip while the 950(/Ti) is expected to lose about a quarter of its printed shader units.

nvidia-geforce.png

The particularly interesting part is the power, though. As we reported, Maxwell was branded as a power-efficient successor to the Kepler architecture. This led to a high-end graphics card that could be powered by the PCIe bus alone. According to these rumors, the new card will require a single 8-pin power connector on top of the 75W provided by the bus. This has one of two interesting implications that I can think of.

Either:

  • The 750 Ti did not sell for existing systems as well as anticipated, or
  • The GM206 chip just couldn't hit that power target and they didn't want to make another die

Whichever is true, it will be interesting to see how NVIDIA brands this if/when the card launches. Creating a graphics card for systems without available power rails was a novel concept and it seemed to draw attention. That said, the rumors claim they're not doing it this time... for some reason.

Source: VR-Zone
Author:
Manufacturer: Wasabi Mango

Overview

A few years ago, we took our first look at the inexpensive 27" 1440p monitors which were starting to flood the market via eBay sellers located in Korea. These monitors proved to be immensely popular and were largely credited with moving a large number of gamers past 1080p.

However, in the past few months we have seen a new trend from some of these same Korean monitor manufacturers: large 4K displays, like the Seiki Pro SM40UNP 40" 4K display that we took a look at a few weeks ago.

Built around a 42" LG AH-IPS panel, the Wasabi Mango UHD420 is an impressive display. The inclusion of HDMI 2.0 and DisplayPort 1.2 allows you to run 4K at a full 60Hz with 4:4:4 chroma. At a cost of just under $800 on Amazon, this is an incredibly appealing value.

IMG_2939.JPG

Whether the UHD420 is a TV or a monitor is actually quite the tossup. The lack of a tuner might initially lead you to believe it's not a TV, while the inclusion of a DisplayPort connector and USB 3.0 hub might make you believe it's a monitor, yet it comes bundled with a remote control (entirely in Korean). In reality, this display could be used for either purpose (unless you rely on OTA tuning), and it really starts to blur the line between a "dumb" TV and a monitor. You'll also find VESA 400x400mm mounting holes on this display for easy wall mounting.

Continue reading our overview of the Wasabi Mango UHD420 4K HDMI 2.0 FreeSync Display!!

You have a 4K monitor and $650USD, what do you do?

Subject: Graphics Cards | July 27, 2015 - 04:33 PM |
Tagged: 4k, amd, R9 FuryX, GTX 980 Ti, gtx titan x

[H]ard|OCP has set up a testbed for a 4K showdown between the similarly priced GTX 980 Ti and Radeon R9 Fury X, with the $1000 TITAN X tossed in for those with more money than sense. The test uses the new Catalyst 15.7 and GeForce 353.30 drivers to give a more even playing field while benchmarking Witcher 3, GTA V and other games. When the dust settled the pattern was obvious and the performance differences were clear. The deltas were not huge, but when you are paying $650 plus tax for a GPU, even a few extra frames or one more graphical option you can enable really matters. Perhaps the most interesting result was the redemption of the TITAN X; its extra price was reflected in its performance. Check out the results for yourself here.

1437535126iwTl74Zfm5_1_1.gif

"We take the new AMD Radeon R9 Fury X and evaluate the 4K gaming experience. We will also compare against the price competitive GeForce GTX 980 Ti as well as a GeForce GTX TITAN X. Which video card provides the best experience and performance when gaming at glorious 4K resolution?"

Here are some more Graphics Card articles from around the web:

Graphics Cards

Source: [H]ard|OCP
Manufacturer: PC Perspective

... But Is the Timing Right?

Windows 10 is about to launch and, with it, DirectX 12. Apart from the massive increase in draw calls, Explicit Multiadapter, both Linked and Unlinked, has been the cause of a few pockets of excitement here and there. I am a bit concerned, though. People seem to treat this as a new, novel concept that gives game developers tools they've never had before. It really isn't. Depending on what you want to do with secondary GPUs, game developers could have used them for years. Years!

Before we talk about the cross-platform examples, we should talk about Mantle. It is the closest analog to DirectX 12 and Vulkan that we have. It served as the base specification for Vulkan, which the Khronos Group modified to use SPIR-V instead of HLSL, among other changes. Some claim that it was also the foundation of DirectX 12, which would not surprise me given what I've seen online and in the SDK. Allow me to show you how the API works.

amd-2015-mantle-execution-model.png

Mantle is an interface that mixes Graphics, Compute, and DMA (memory access) into queues of commands. This is easily done in parallel, as each thread can create commands on its own, which is great for multi-core processors. Each queue, essentially a list of commands headed for the GPU, can be handled independently, too. An interesting side effect is that, since each device uses standard data structures, such as IEEE 754 floating-point numbers, no one cares where these queues go as long as the work is done quickly enough.

Since each queue is independent, an application can choose to manage many of them. None of these lists really need to know what is happening to any other. As such, they can be pointed to multiple, even wildly different graphics devices. Different model GPUs with different capabilities can work together, as long as they support the core of Mantle.
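
To make the queue metaphor concrete, here is a deliberately toy sketch. It is not the Mantle, DirectX 12, or Vulkan API; it just models independent command lists being filled from any thread, drained in parallel, and pointed at whichever devices you like. Every name in it is invented for the example.

    # Toy model of the queue metaphor: independent command lists, each drained by
    # its own worker, each potentially backed by a different physical GPU. This is
    # an illustration, not a real graphics API.
    import queue
    import threading

    class FakeDevice:
        """Stand-in for a GPU; a real API would submit command buffers to hardware."""
        def __init__(self, name):
            self.name = name

        def execute(self, command):
            print(f"{self.name}: executing {command}")

    def drain(device, command_queue):
        while True:
            cmd = command_queue.get()
            if cmd is None:          # sentinel: no more work for this queue
                break
            device.execute(cmd)

    devices = [FakeDevice("GPU 0 (vendor A)"), FakeDevice("GPU 1 (vendor B)")]
    queues = [queue.Queue() for _ in devices]
    workers = [threading.Thread(target=drain, args=(d, q)) for d, q in zip(devices, queues)]
    for w in workers:
        w.start()

    # Any thread can record commands into any queue; no queue needs to know about
    # the others, which is what makes pointing them at different GPUs so easy.
    queues[0].put("graphics: draw call batch")
    queues[1].put("compute: post-processing job")
    for q in queues:
        q.put(None)
    for w in workers:
        w.join()

The point is only that once work is expressed as self-contained command lists, where those lists end up executing becomes a scheduling decision rather than an architectural one.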

microsoft-dx12-build15-ue4frame.png

DirectX 12 and Vulkan adopted this metaphor so that developers could use this functionality across vendors. Mantle did not invent the concept, however. What Mantle did was expose this architecture to graphics, which can make use of all the fixed-function hardware that is unique to GPUs. Prior to AMD's usage, this was how GPU compute architectures were designed. Game developers could have spun up an OpenCL workload to process physics, audio, pathfinding, visibility, or even lighting and post-processing effects... on a secondary GPU, even from a completely different vendor.

Vista's multi-GPU bug might get in the way, but it was possible in 7 and, I believe, XP too.
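
As a sketch of the kind of secondary-GPU compute offload that has long been possible, the snippet below uses PyOpenCL to find a GPU other than the primary one and give it its own command queue. The library choice, and the assumption that the second enumerated GPU is the spare one, are mine and purely for illustration.

    # Enumerate every OpenCL-visible GPU and create a command queue on a secondary
    # one for offloaded work (physics, visibility, post-processing, and so on).
    # Assumes PyOpenCL is installed; which GPU counts as "spare" is a simplification.
    import pyopencl as cl

    gpus = []
    for platform in cl.get_platforms():
        try:
            gpus.extend(platform.get_devices(device_type=cl.device_type.GPU))
        except cl.Error:
            pass  # this platform exposes no GPU devices

    if len(gpus) > 1:
        secondary = gpus[1]                 # any GPU that is not driving the display
        ctx = cl.Context(devices=[secondary])
        compute_queue = cl.CommandQueue(ctx)
        print("Offloading compute work to:", secondary.name)
    else:
        print("Only one GPU found; nothing to offload to.")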

Read on to see a couple reasons why we are only getting this now...

Rumor: NVIDIA Pascal up to 17 Billion Transistors, 32GB HBM2

Subject: Graphics Cards | July 24, 2015 - 12:16 PM |
Tagged: rumor, pascal, nvidia, HBM2, hbm, graphics card, gpu

An exclusive report from Fudzilla claims some outlandish numbers for the upcoming NVIDIA Pascal GPU, including 17 billion transistors and a massive amount of second-gen HBM memory.

According to the report:

"Pascal is the successor to the Maxwell Titan X GM200 and we have been tipped off by some reliable sources that it will have  more than a double the number of transistors. The huge increase comes from  Pascal's 16 nm FinFET process and its transistor size is close to two times smaller."

PascalBoard.jpg

The NVIDIA Pascal board (Image credit: Legit Reviews)

Pascal's 16nm FinFET production will be a major change from the existing 28nm process found on all current NVIDIA GPUs. And if this report is accurate, they are taking full advantage, considering that the transistor count is more than double the 8 billion found in the TITAN X.

PlanarFinFET.jpg

(Image credit: Fudzilla)

And what about memory? We have long known that Pascal will be NVIDIA's first foray into HBM, and Fudzilla is reporting that up to 32GB of second-gen HBM (HBM2) will be present on the highest model, which is a rather outrageous number even compared to the 12GB TITAN X.

"HBM2 enables cards with 4 HBM 2.0 cards with 4GB per chip, or four HBM 2.0 cards with 8GB per chips results with 16GB and 32GB respectively. Pascal has power to do both, depending on the SKU."

Pascal is expected in 2016, so we'll have plenty of time to speculate on these and doubtless other rumors to come.

Source: Fudzilla

NVIDIA Adds Metal Gear Solid V: The Phantom Pain Bundle to GeForce Cards

Subject: Graphics Cards | July 23, 2015 - 10:52 AM |
Tagged: nvidia, geforce, gtx, bundle, metal gear solid, phantom pain

NVIDIA continues with its pattern of flagship game bundles with today's announcement. Starting today, GeForce GTX 980 Ti, 980, 970 and 960 GPUs from select retailers will include a copy of Metal Gear Solid V: The Phantom Pain, due out September 15th. (Bundle is live on Amazon.com.) Also, notebooks that use the GTX 980M or 970M GPU qualify.

mgsv-bundle-header-new.png

From NVIDIA's marketing on the bundle:

Only GeForce GTX gives you the power and performance to game like the Big Boss. Experience the METAL GEAR SOLID V: THE PHANTOM PAIN with incredible visuals, uncompromised gameplay, and advanced technologies. NVIDIA G-SYNC™ delivers smooth and stutter-free gaming, GeForce Experience™ provides optimal playable settings, and NVIDIA GameStream™ technology streams your game to any NVIDIA SHIELD™ device.

It appears that Amazon.com already has its landing page up and ready for the MGS V bundle program, so if you are hunting for a new graphics card stop there and see what they have in your range.

Let's hope that this game release goes a bit more smoothly than Batman: Arkham Knight...

Source: NVIDIA

Stop me if you've heard this before; change your .exe and change your performance

Subject: Graphics Cards | July 20, 2015 - 02:00 PM |
Tagged: amd, linux, CS:GO

Thankfully it has been quite a while since we saw GPU driver optimizations tied to specific .exe filenames on Windows; in the past, both major vendors have tweaked performance based on the name of the executable that launches the game. That particular flavour of underhandedness had become passé, at least until now. Phoronix has spotted it once again, this time seeing a big jump in performance in CS:GO when they rename the csgo_linux binary to hl2_Linux. The game is built on the same engine, but the optimizations for the Source engine are not properly applied to CS:GO.

There is nothing nefarious about this particular example; it seems more a case of AMD's driver team being lazy or, more likely, short-staffed. If you play CS:GO on Linux, rename your binary and you will see a jump in performance with no deleterious side effects. Phoronix is investigating more games to see if there are other inconsistently applied optimizations.
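
If you would rather script the workaround than do it by hand, something along these lines should work. The Steam install path and the choice to copy rather than rename the binary are my assumptions; the hl2_Linux filename comes from the Phoronix report.

    # Rough sketch of the workaround: expose CS:GO's binary under the Source-engine
    # name the Catalyst driver already optimizes for. The install path and the
    # copy-instead-of-rename choice are assumptions, not from the report.
    import os
    import shutil

    csgo_dir = os.path.expanduser(
        "~/.steam/steam/steamapps/common/Counter-Strike Global Offensive")
    original = os.path.join(csgo_dir, "csgo_linux")
    renamed = os.path.join(csgo_dir, "hl2_Linux")

    if os.path.isfile(original) and not os.path.exists(renamed):
        shutil.copy2(original, renamed)  # keep the original so game updates still work
        print("Created", renamed, "- launch that binary to pick up the Source optimizations")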

csgo.jpg

"Should you be using a Radeon graphics card with the AMD Catalyst Linux driver and are disappointed by the poor performance, there is a very easy workaround for gaining much better performance under Linux... In some cases a simple tweak will yield around 40% better performance!"

Here are some more Graphics Card articles from around the web:

Graphics Cards

Source: Phoronix

TSMC Plans 10nm, 7nm, and "Very Steep" Ramping of 16nm.

Subject: Graphics Cards, Processors, Mobile | July 19, 2015 - 06:59 AM |
Tagged: Zen, TSMC, Skylake, pascal, nvidia, Intel, Cannonlake, amd, 7nm, 16nm, 10nm

Getting smaller features allows a chip designer to create products that are faster, cheaper, and consume less power. Years ago, most chip designers had their own production facilities, but that is getting rare. IBM has just finished selling its manufacturing off to GlobalFoundries, which was spun out of AMD when it divested from fabrication in 2009. Texas Instruments, on the other hand, decided that they would continue manufacturing but get out of the chip design business. Intel and Samsung are arguably the last two players with a strong commitment to both sides of the “let's make a chip” coin.

tsmc.jpg

So where do these chip designers go? TSMC is the name that comes up most. Any given discrete GPU in the last several years has probably been produced there, along with several CPUs and SoCs from a variety of fabless semiconductor companies.

Several years ago, when the GeForce 600-series launched, TSMC's 28nm line suffered shortages that left GPUs out of stock for quite some time. Since then, 28nm has been the stable workhorse for countless high-performance products. Recent chips have been physically huge, thanks to how mature the process has become, granting fewer defects. The designers are anxious to get on smaller processes, though.

In a conference call at 2 AM (EDT) on Thursday, which is 2 PM in Taiwan, Mark Liu of TSMC announced that “the ramping of our 16 nanometer will be very steep, even steeper than our 20nm”. By that, they mean this year. Hopefully this translates to production that could be used for GPUs and CPUs early, as AMD needs it to launch their Zen CPU architecture in 2016, as early in that year as possible. Graphics cards have also been stuck on 28nm for over three years. It's time.

Also interesting is how TSMC believes that they can hit 10nm by the end of 2016. If so, this might put them ahead of Intel. That said, Intel was also confident that they could reach 10nm by the end of 2016, right until they announced Kaby Lake a few days ago. We will need to see if it pans out. If it does, competitors could actually beat Intel to the market at that feature size -- although that could end up being mobile SoCs and other integrated circuits that are uninteresting for the PC market.

Following the announcement from IBM Research, 7nm was also mentioned in TSMC's call. Apparently they expect to start qualifying it in Q1 2017. That does not provide an estimate for production but, if their 10nm schedule is both accurate and representative of 7nm, that would put production somewhere in 2018. Note that I just speculated on an if of an if of a speculation, so take that with a mine of salt. There is probably a very good reason that this date wasn't mentioned in the call.

Back to the 16nm discussion, what are you hoping for most? New GPUs from NVIDIA, new GPUs from AMD, a new generation of mobile SoCs, or the launch of AMD's new CPU architecture? This should make for a highly entertaining comments section on a Sunday morning, don't you agree?