Report: NVIDIA GP104 Die Pictured; GTX 1080 Does Not Use HBM

Subject: Graphics Cards | April 22, 2016 - 10:16 AM |
Tagged: rumor, report, pascal, nvidia, leak, graphics card, gpu, gddr5x, GDDR5

According to a report from VideoCardz (via ChipHell), high-quality images have leaked of the upcoming GP104 die, which is expected to power the GeForce GTX 1070 graphics card.


Image credit: VideoCardz (via ChipHell)

"This GP104-200 variant is supposedly planned for GeForce GTX 1070. Although it is a cut-down version of GP104-400, both GPUs will look exactly the same. The only difference being modified GPU configuration. The high quality picture is perfect material for comparison."

A couple of interesting things emerge from this die shot: the relatively small size of the GPU (die size estimated at 333 mm²), and the assumption - based on a previously leaked photo of the die on a PCB - that the card will use conventional GDDR5 memory.


Alleged photo of GP104 using GDDR5 memory (Image credit: VideoCardz via ChipHell)

"Leaker also says that GTX 1080 will feature GDDR5X memory, while GTX 1070 will stick to GDDR5 standard, both using 256-bit memory bus. Cards based on GP104 GPU are to be equipped with three DisplayPorts, HDMI and DVI."

While this is no doubt disappointing to those anticipating HBM with the upcoming Pascal consumer GPUs, the move isn't all that surprising considering the consistent rumors that GTX 1080 would use GDDR5X.
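
For a sense of what that memory split could mean in practice, here is a quick back-of-the-envelope bandwidth sketch in Python. The 256-bit bus comes from the report; the per-pin data rates (8 Gbps for GDDR5, 10 Gbps for GDDR5X) are illustrative assumptions, not figures from the leak.

```python
def peak_bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Theoretical peak bandwidth: (bus width / 8 bits per byte) * per-pin rate."""
    return bus_width_bits / 8 * data_rate_gbps

BUS_WIDTH = 256  # both cards reportedly use a 256-bit memory bus

# Assumed per-pin data rates - plausible for these memory types, not confirmed:
print(f"GDDR5  @  8 Gbps: {peak_bandwidth_gb_s(BUS_WIDTH, 8.0):.0f} GB/s")   # 256 GB/s
print(f"GDDR5X @ 10 Gbps: {peak_bandwidth_gb_s(BUS_WIDTH, 10.0):.0f} GB/s")  # 320 GB/s
```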

Is the lack of HBM (or HBM2) enough to make you skip this generation of GeForce GPU? This author points out that AMD's Fury X - the first GPU to use HBM - was still unable to beat a GTX 980 Ti in many tests, even though the 980 Ti uses conventional GDDR5. Memory is obviously important, but the core defines the performance of the GPU.

If NVIDIA has made improvements to performance and efficiency we should see impressive numbers, but this might be a more iterative update than originally expected - which only gives AMD more of a chance to win marketshare with their upcoming Radeon 400-series GPUs. It should be an interesting summer.

Source: VideoCardz

Report: NVIDIA GTX 1080 GPU Cooler Pictured

Subject: Graphics Cards | April 19, 2016 - 11:08 AM |
Tagged: rumor, report, nvidia, leak, GTX 1080, graphics card, gpu, geforce

Another reported photo of an upcoming GTX 1080 graphics card has appeared online, this time via a post on Baidu.


(Image credit: VR-Zone, via Baidu)

The image is typically low-resolution and features the slightly soft focus we've come to expect from alleged leaks. That doesn't mean it isn't legitimate, and this isn't the first time we have seen this design. The image also appears to show only the cooler, without an actual graphics card board underneath.

We have reported on the upcoming GPU rumored to be named "GTX 1080" in the recent past, and while no official announcement has been made, it seems safe to assume that a successor to the current 900-series GPUs is forthcoming.

Source: VR-Zone

EVGA Releases NVIDIA GeForce GTX 950 Low Power Cards

Subject: Graphics Cards | April 5, 2016 - 11:57 AM |
Tagged: PCIe power, nvidia, low-power, GTX950, GTX 950 Low Power, graphics card, gpu, GeForce GTX 950, evga

EVGA has announced new low-power versions of the NVIDIA GeForce GTX 950, some of which do not require any PCIe power connection to work.


"The EVGA GeForce GTX 950 is now available in special low power models, but still retains all the performance intact. In fact, several of these models do not even have a 6-Pin power connector!"

With or without PCIe power, all of these cards are full GTX 950s, with 768 CUDA cores and 2GB of GDDR5 memory. The primary difference is clock speed, and EVGA provides a chart illustrating which models still require PCIe power and how they compare in performance.
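
To put those power requirements in context, here is a minimal Python sketch of PCIe power budgeting using the spec's standard values (75W from a x16 slot, 75W per 6-pin connector, 150W per 8-pin connector):

```python
def board_power_budget_w(six_pin: int = 0, eight_pin: int = 0) -> int:
    """Maximum board power available: slot power plus auxiliary connectors."""
    SLOT_W, SIX_PIN_W, EIGHT_PIN_W = 75, 75, 150  # PCIe spec values
    return SLOT_W + six_pin * SIX_PIN_W + eight_pin * EIGHT_PIN_W

print(board_power_budget_w())           # 75 W: the slot-only low-power models
print(board_power_budget_w(six_pin=1))  # 150 W: ample for a standard 90 W GTX 950
```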


It looks like the links to the 75W (no PCIe power required) models aren't working just yet on EVGA's site, but active listings with pricing and availability info should appear soon.

Source: EVGA

AMD Announces XConnect Technology for External Graphics

Subject: Graphics Cards | March 10, 2016 - 01:27 PM |
Tagged: XConnect, thunderbolt 3, radeon, graphics card, gpu, gaming laptop, external gpu, amd

AMD has announced their new external GPU technology called XConnect, which leverages the latest Radeon driver to enable AMD graphics over Thunderbolt 3.


The technology was showcased with hardware from Razer, who partnered with AMD on an expandable solution that supports GPUs up to 375W, including the R9 Fury, R9 Nano, and all R9 300-series GPUs up to the R9 390X (there is no liquid cooling support, so the R9 Fury X isn't listed as compatible). The notebook in AMD's marketing material is the Razer Blade Stealth, which offers the Razer Core external GPU enclosure as an optional accessory. (More information about these products from Razer here.) XConnect is not tied to any vendor, however; this is "generic driver" support for GPUs over Thunderbolt 3.

AMD has posted this video with the head of Global Technical Marketing, Robert Hallock, to explain the new tech and show off the Razer hardware:

The exciting part has to be the promise of an industry standard for external graphics, something many have hoped for. Not everyone will produce a product exactly like Razer's, since there is no requirement to provide a future upgrade path via a larger enclosure like this one, but the important thing is that Thunderbolt 3 support is built into the newest Radeon Crimson drivers.

Here are the system requirements for AMD XConnect from AMD:

  • Radeon Software 16.2.2 driver (or later)
  • 1x Thunderbolt 3 port
  • 40Gbps Thunderbolt 3 cable
  • Windows 10 build 10586 (or later)
  • BIOS support for external graphics over Thunderbolt 3 (check with system vendor for details)
  • Certified Thunderbolt 3 graphics enclosure configured with supported Radeon R9 Series GPU
  • Thunderbolt firmware (NVM) v.16
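
Most of these items have to be verified by hand (or with vendor tools), but the OS requirement at least is easy to check programmatically. A minimal, hypothetical Python sketch; the 10586 build threshold comes from AMD's list above, while the script itself is purely illustrative:

```python
import sys

MIN_BUILD = 10586  # Windows 10 build required by AMD's XConnect list

if sys.platform == "win32":
    build = sys.getwindowsversion().build  # available on Windows only
    verdict = "meets" if build >= MIN_BUILD else "is below"
    print(f"Windows build {build} {verdict} the XConnect minimum ({MIN_BUILD}).")
else:
    print("XConnect requires Windows 10; this is not a Windows system.")
```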


The announcement introduces all sorts of possibilities. How awesome would it be to see a tiny solution with an R9 Nano powered by, say, an SFX power supply? Or what about a dual-GPU enclosure (possibly requiring 2 Thunderbolt 3 connections?), or an enclosure supporting liquid cooling (and the R9 Fury X)? The potential is certainly there, and with a standard in place we could see some really interesting products in the near future (or even DIY solutions). It's a promising time for mobile gaming!

Source: AMD

ZOTAC Introduces ZBOX MAGNUS EN980 VR Ready Mini-PC

Subject: Graphics Cards, Systems | March 10, 2016 - 11:38 AM |
Tagged: zotac, zbox, VR, SFF, nvidia, mini-pc, MAGNUS EN980, liquid cooling, GTX980, GTX 980, graphics, gpu, geforce

ZOTAC is teasing a new mini PC "ready for virtual reality" leading up to CeBIT 2016, happening later this month. The ZBOX MAGNUS EN980 supplants the EN970 as the most powerful of ZOTAC's gaming mini systems, and will come equipped with no less than an NVIDIA GeForce GTX 980.


(Image via Guru3D)

Some questions remain ahead of a more formal announcement, and foremost among them is the version of the system's GTX 980. Is this the full desktop variant, or the GTX 980M? It seems to be the former, if we can read into the "factory-installed water-cooling solution", especially if that pertains to the GPU. In any case, this will easily be the most powerful mini-PC ZOTAC has released, as even the current MAGNUS EN970 doesn't actually ship with a GTX 970 as the name would imply; rather, a GTX 960 handles discrete graphics duties according to the specs.

The MAGNUS EN980's GTX 980 GPU - mobile or not - will make this a formidable gaming system, paired as it is with a 6th-gen Intel Skylake CPU (the specific model was not mentioned in the press release; the current high-end EN970 with discrete graphics uses the Intel Core i5-5200U). Other details include support for up to four displays via HDMI and DisplayPort, USB 3.0 and 3.1 Type-C inputs, and built-in 802.11ac wireless.

We'll have to wait until CeBIT (which runs March 14 - 18) for more details. Full press release after the break.

Source: ZOTAC

New ASUS GeForce GTX 950 2G Requires No PCIe Power

Subject: Graphics Cards | March 4, 2016 - 04:48 PM |
Tagged: PCIe power, PCI Express, nvidia, GTX 950 2G, gtx 950, graphics card, gpu, geforce, asus, 75W

ASUS has released a new version of the GTX 950 called the GTX 950 2G, and the interesting part isn't what's been added, but what was taken away; namely, the PCIe power requirement.


When NVIDIA announced the GTX 950 (which Ryan reviewed here) it carried a TDP of 90W, putting it above the 75W a PCIe x16 slot can supply and thus requiring a 6-pin power connector. The GTX 950 was (seemingly) the replacement for the GTX 750, which didn't require anything beyond motherboard power via the PCIe slot, and the same held true for the more powerful GTX 750 Ti. Without the need for PCIe power, the GTX 750 Ti became our (and many others') default recommendation for turning any PC into a gaming machine (an idea we just happened to cover in depth here).

Here's a look at the specs from ASUS for the GTX 950 2G:

  • Graphics Engine: NVIDIA GeForce GTX 950
  • Interface: PCI Express 3.0
  • Video Memory: GDDR5 2GB
  • CUDA Cores: 768
  • Memory Clock: 6610 MHz
  • Memory Interface: 128-bit
  • Engine Clock
    • Gaming Mode (Default) - GPU Boost Clock: 1190 MHz, GPU Base Clock: 1026 MHz
    • OC Mode - GPU Boost Clock: 1228 MHz, GPU Base Clock: 1051 MHz
  • Outputs: HDMI 2.0, DisplayPort, DVI
  • Power Consumption: Up to 75W, no additional PCIe power required
  • Dimensions: 8.3 x 4.5 x 1.6 inches
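
As a quick sanity check on those numbers, peak memory bandwidth follows directly from the spec list; the 6610 MHz "memory clock" is the effective (quad-pumped) GDDR5 data rate, i.e. roughly 6.61 Gbps per pin:

```python
effective_rate_gbps = 6.610  # "Memory Clock: 6610 MHz" = effective GDDR5 data rate
bus_width_bits = 128         # "Memory Interface: 128-bit"

# Peak bandwidth = bus width in bytes * per-pin data rate
bandwidth_gb_s = bus_width_bits / 8 * effective_rate_gbps
print(f"Peak memory bandwidth: {bandwidth_gb_s:.1f} GB/s")  # ~105.8 GB/s
```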


Whether this model has any relation to the rumored "GTX 950 SE/LP" remains to be seen (other than power, this card appears to have stock GTX 950 specs), but the option of adding a GPU without any concern over power requirements makes this a very attractive upgrade proposition for older builds or OEM PCs, depending on cost.

The full model number is ASUS GTX950-2G, and a listing is up on Amazon, though seemingly only a placeholder at the moment. (Link removed. The listing was apparently for an existing GTX 950 product.)

Source: ASUS

Report: NVIDIA Working on Another GeForce GTX 950 GPU

Subject: Graphics Cards | February 16, 2016 - 12:01 PM |
Tagged: rumor, report, nvidia, Maxwell 2.0, GTX 950 SE, GTX 950 LP, gtx 950, gtx 750, graphics card, gpu

A report from VideoCardz claims that NVIDIA is working on another GTX 950 graphics card, but not the 950 Ti you might have expected.


Reference GTX 950 (Image credit: NVIDIA)

While the GTX 750 Ti was succeeded by the GTX 950 in August of last year, the higher specs of the new GPU came at the cost of a higher TDP (90W vs. 60W). This newly rumored GTX 950, which might be called either 950 SE or 950 LP according to the report, would be a lower-power version of the GTX 950, and would actually have a lot more in common with the outgoing GTX 750 Ti than with the plain GTX 750, as we can see from this chart:


(Image credit: VideoCardz)

As you can see, the GTX 750 Ti is based on GM107 (Maxwell 1.0) and has 640 CUDA cores, 40 texture units, and 16 ROPs, operating at a 1020 MHz base and 1085 MHz boost clock. The reported specs of the new GTX 950 SE/LP are nearly identical, though it is based on GM206 (Maxwell 2.0) and offers greater memory bandwidth (with slightly higher power consumption).
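
For a rough sense of the compute side, the usual FMA-based estimate (CUDA cores x 2 ops per clock x frequency) can be applied to the GTX 750 Ti figures from the chart; since the rumored 950 SE/LP specs are nearly identical, it should land in the same ballpark:

```python
def fp32_gflops(cuda_cores: int, clock_mhz: float) -> float:
    """Each CUDA core can retire one FMA (2 FLOPs) per clock."""
    return cuda_cores * 2 * clock_mhz / 1000.0

# GTX 750 Ti figures from the chart above
print(f"Base clock:  {fp32_gflops(640, 1020):.0f} GFLOPS")  # ~1306
print(f"Boost clock: {fp32_gflops(640, 1085):.0f} GFLOPS")  # ~1389
```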

The VideoCardz report was sourced from Expreview, which claimed that this GTX 950 SE/LP product would arrive at some point next month. This report is a little more vague than some of the rumors we see, but it could very well be that NVIDIA has a planned replacement for the remaining Maxwell 1.0 products on the market. I would have personally expected to see a "Ti" product before any "SE/LP" version of the GTX 950, and the reported name seems more like an OEM product than a retail part. We will have to wait and see if this report is accurate.

Source: VideoCardz
Manufacturer: Various

Early testing for higher-end GPUs

UPDATE 2/5/16: Nixxes released a new version of Rise of the Tomb Raider today with some significant changes. I have added another page at the end of this story that looks at results with the new version of the game and a new AMD driver, and includes some SLI and CrossFire results.

I will fully admit to being jaded by the industry on many occasions. I love my PC games and I love hardware, but it takes a lot for me to get genuinely excited about anything. After hearing game reviewers talk up the newest installment of the Tomb Raider franchise, Rise of the Tomb Raider, since its release on the Xbox One last year, I've been waiting for its PC release to give it a shot with real hardware. As you'll see in the screenshots and video in this story, the game doesn't appear to disappoint.


Rise of the Tomb Raider takes the exploration and "tomb raiding" aspects that made the first games in the series successful and applies them to the visual quality and character design introduced with the reboot of the series a couple of years back. The result is a PC game that looks stunning at any resolution (even more so in 4K) and pushes your hardware to its limits. For single-GPU performance, even the GTX 980 Ti and Fury X struggle to keep their heads above water.

In this short article we'll look at the performance of Rise of the Tomb Raider with a handful of GPUs, leaning towards the high end of the product stack, and offer up my view on whether each hardware vendor is living up to expectations.

Continue reading our look at GPU performance in Rise of the Tomb Raider!!

Manufacturer: PC Perspective
Tagged: moores law, gpu, cpu

Are Computers Still Getting Faster?

It looks like CES is starting to wind down, which makes sense because it ended three days ago. Now that we're mostly caught up, I found a new video from The 8-Bit Guy. He doesn't really explain any old technologies in this one. Instead, he poses an open question about computer speed. He was able to have a functional computing experience on a ten-year-old Apple laptop, which made him wonder if the rate of computer advancement is slowing down.

I believe that he (and his guest hosts) made great points, but also missed a few important ones.

One of his main arguments is that software seems to have slowed down relative to hardware. I don't believe that is true, but I believe he's looking in the right area. PCs these days are more than capable of doing just about anything we would want in terms of 2D user interfaces, and they do so with a lot of overhead left for inefficient platforms and sub-optimal programming (relative to the '80s and '90s, at the very least). The areas that require extra horsepower usually involve large batches of many related tasks. GPUs are key in this area, and they are keeping up as fast as they can, despite some stagnation in fabrication processes and difficulty (at least until HBM takes hold) in keeping up with memory bandwidth.

For the last five to ten years or so, CPUs have been evolving toward efficiency while GPUs are adopted for the tasks that need to scale up. I'm guessing that AMD, when they designed the Bulldozer architecture, hoped GPUs would be adopted much more aggressively; but even as graphics devices, they now have a huge effect on Web, UI, and media applications.


These are also tasks that can scale well between devices by lowering resolution (and so forth). The primary thing that a main CPU thread needs to do is figure out the system's state and keep the graphics card fed before the frame-train leaves the station. In my experience, that doesn't scale well (although you can sometimes reduce the number of tracked objects for games and so forth). Moreover, it is easier to add GPU performance than single-threaded CPU performance, because increasing frequency and single-threaded IPC is more complicated than laying out more duplicated blocks of shaders. These factors combine to give lower-end hardware a similar experience in the most noticeable areas.
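
To make the resolution-scaling point concrete: GPU-bound work grows roughly linearly with pixel count, so the ratios below are a reasonable first-order proxy for relative frame cost (a simplification that ignores geometry, CPU work, and memory effects):

```python
resolutions = {"1080p": (1920, 1080), "1440p": (2560, 1440), "4K": (3840, 2160)}
base_pixels = 1920 * 1080

for name, (w, h) in resolutions.items():
    pixels = w * h
    print(f"{name}: {pixels:>9,} pixels ({pixels / base_pixels:.2f}x the pixel work of 1080p)")
```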

So, up to this point, we discussed:

  • Software is often scaling in ways that are GPU (and RAM) limited.
  • CPUs are scaling down in power more than up in performance.
  • GPU-limited tasks can often be approximated with smaller workloads.
    • Software gets heavier, but it doesn't need to be "all the way up" (e.g. resolution).
    • Some latencies are hard to notice anyway.

Back to the Original Question

This is where “Are computers still getting faster?” can be open to interpretation.


Tasks are diverging from one class of processor into two, and both have separate industries, each with their own, multiple goals. As stated, CPUs are mostly progressing in power efficiency, which extends a (presumably) sufficient amount of performance downward to multiple types of devices. GPUs are definitely getting faster, but they can't do everything. At the same time, RAM is plentiful, but its contribution to performance can be approximated by paging unused chunks to the hard disk or, more recently on Windows, compressing them in place. Newer computers with extra RAM won't help as long as any single task only uses a manageable amount of it - unless it's seen from a viewpoint that cares about multi-tasking.

In short, computers are still progressing, but the paths are now forked and winding.

Rumor: Polaris Is the next AMD Radeon Core Architecture

Subject: Graphics Cards | December 31, 2015 - 01:41 PM |
Tagged: rumor, report, radeon, Polaris, graphics card, gpu, GCN, amd

A report claims that Polaris will succeed GCN (Graphics Core Next) as the next AMD Radeon GPU core, which will power the 400-series graphics cards.



As these rumors go, this is about as convoluted a sourcing chain as it gets. VideoCardz published the story, sourced from WCCFtech, which was reporting on a post of supposedly leaked slides at HardwareBattle. The primary slide in question has since been pulled, and appears below:


(Leaked slide via HardwareBattle)

Of course the name does nothing to provide architectural information on this presumptive GCN replacement, and a new core for the 400-series GPUs was expected anyway after the 300-series was largely a rebranded 200-series (that's a lot of series). Let's hope actual details emerge soon, but for now we can speculate on mysterious tweets from certain interested parties:


Source: VideoCardz