Report: AMD Radeon R9 380X Coming November 15 for $249

Subject: Graphics Cards | November 7, 2015 - 04:46 PM |
Tagged: tonga, rumor, report, Radeon R9 380X, r9 285, graphics card, gpu, GDDR5, amd

AMD will reportedly be launching their latest performance graphics card soon, and specs for this rumored R9 380X have now been reported at VR-Zone (via Hardware Battle).


(Image credit: VR-Zone)

Here are the full specifications from this report:

  • GPU Codename: Antigua
  • Process: 28 nm
  • Stream Processors: 2048
  • GPU Clock: Up to 1000 – 1100 MHz (exact number not known)
  • Memory Size: 4096 MB
  • Memory Type: GDDR5
  • Memory Interface: 256-bit
  • Memory Clock: 5500 – 6000 MHz (exact number not known)
  • Display Output: DisplayPort 1.2, HDMI 1.4, Dual-Link DVI-D

The launch date is reportedly November 15, and the card will (again, reportedly) carry a $249 MSRP at launch.


The 380X would build on the existing R9 285

Compared to the R9 280X, which also offers 2048 stream processors, a boost clock up to 1000 MHz, and 6000 MHz GDDR5, the R9 380X would lose memory bandwidth due to the move from a 384-bit memory interface to 256-bit. Actual performance won't be directly comparable, however, as the core (Antigua, previously Tonga) will share more in common with the R9 285 (Tonga), though the R9 285 offered only 1792 stream processors and 2 GB of GDDR5.
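As a back-of-envelope check of that bandwidth gap (assuming the top of the rumored 6000 MHz effective memory clock for both cards), the math works out like this:

```python
# Peak GDDR5 bandwidth: effective data rate (MHz) x bus width in bytes.
def gddr5_bandwidth_gbs(effective_clock_mhz, bus_width_bits):
    """Peak memory bandwidth in GB/s."""
    return effective_clock_mhz * 1e6 * (bus_width_bits // 8) / 1e9

r9_280x = gddr5_bandwidth_gbs(6000, 384)  # 288.0 GB/s on the 384-bit bus
r9_380x = gddr5_bandwidth_gbs(6000, 256)  # 192.0 GB/s on the 256-bit bus
```

So at identical memory clocks, the narrower bus would cost the 380X a third of the 280X's peak bandwidth.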

You can check out our review of the R9 285 here to see how it performed against the R9 280X, and it will certainly be interesting to see how this R9 380X will fare if these specifications are accurate.

Source: VR-Zone

Report: AMD Radeon 400 Series Taped Out, Coming 2016

Subject: Graphics Cards | October 23, 2015 - 01:49 AM |
Tagged: tape out, rumor, report, Radeon 400 Series, radeon, graphics card, gpu, Ellesmere, Baffin, amd

Details are almost nonexistent, but a new report claims that AMD has reached tape out for an upcoming Radeon 400 series of graphics cards, which could be the true successor to the R9 200-series after the rebranded 3xx cards.


Image credit: WCCFtech

According to the report:

"AMD has reportedly taped out two of its next-gen GPUs, with "Ellesmere" and "Baffin" both taping out - and both part of the upcoming Radeon 400 series of video cards."

I wish there were more here to report, but if this is accurate we should start to hear some details about these new cards fairly soon. The important thing is that AMD is working on new performance mainstream cards so soon after releasing what was largely a simple rebrand across much of the 300-series GPUs this year.

Source: WCCFTech

ASUS Has Created a White AMD Radeon R9 Nano

Subject: Graphics Cards | October 23, 2015 - 12:29 AM |
Tagged: r9 nano, mITX, mini-itx, graphics card, gpu, asus, amd

AMD's Radeon R9 Nano is a really cool product, able to provide much of the power of the bigger R9 Fury X without the need for more than a standard air cooler, and doing so at an impossibly tiny size for a full graphics card. And while mini-ITX graphics cards serve a small segment of the market, just who might be buying a white one when this is released?


According to a report published first by Computer Base in Germany, ASUS is releasing an all-white AMD R9 Nano, and it looks really sharp. The stock R9 Nano is no slouch in the looks department as you can see here in our full review of AMD's newest GPU, but with this design ASUS provides a totally different look that could help unify the style of your build depending on your other component choices. White is just starting to show up for things like motherboard PCBs, but it's pretty rare in part due to the difficulty in manufacturing white parts that stay white when they are subjected to heat.


There was no mention of a specific release window for the ASUS R9 Nano White, so we'll have to wait for official word on that. It is possible that ASUS has also implemented their own custom PCB, though details are not known just yet. We should know more by the end of next month according to the report.

Manufacturer: NVIDIA

GPU Enthusiasts Are Throwing a FET

NVIDIA is rumored to launch Pascal in early (~April-ish) 2016, although some are skeptical that it will even appear before the summer. The design was finalized months ago, and unconfirmed shipping information claims that chips are being stockpiled, which is typical when preparing to launch a product. It is expected to compete against AMD's rumored Arctic Islands architecture, which will, according to its also rumored numbers, be very similar to Pascal.

This architecture is a big one for several reasons.


Image Credit: WCCFTech

First, it will jump two full process nodes. Current desktop GPUs are manufactured at 28nm, which was first introduced with the GeForce GTX 680 all the way back in early 2012, but Pascal will be manufactured on TSMC's 16nm FinFET+ technology. Smaller features have several advantages, but a huge one for GPUs is the ability to fit more complex circuitry in the same die area. This means that you can include more copies of elements, such as shader cores, and do more in fixed-function hardware, like video encode and decode.
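A naive geometric reading of the node names gives a sense of the scale of that jump, though the real density gain is smaller, since marketing names like "16nm" no longer track actual feature pitch:

```python
# Ideal area scaling if every feature shrank linearly with the node name.
# Real FinFET nodes don't scale this cleanly, so treat this as an upper bound.
old_node_nm = 28
new_node_nm = 16
density_gain = (old_node_nm / new_node_nm) ** 2  # ~3.06x transistors per unit area
```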

That said, we got a lot more life out of 28nm than we really should have. Chips like GM200 and Fiji are huge, relatively power-hungry, and complex, a combination that would be a terrible idea to produce when yields are low. I asked Josh Walrath, who is our go-to for analysis of fab processes, and he believes that FinFET+ is probably even more complicated today than 28nm was in the 2012 timeframe, which was when it launched for GPUs.

It's two full steps forward from where we started, but we've been tiptoeing since then.


Image Credit: WCCFTech

Second, Pascal will introduce HBM 2.0 to NVIDIA hardware. HBM 1.0 was introduced with AMD's Radeon Fury X, and it helped in numerous ways -- from smaller card size to a triple-digit percentage increase in memory bandwidth. The 980 Ti can talk to its memory at roughly 336GB/s, while Pascal is rumored to push that to 1TB/s. Capacity won't be sacrificed, either. The top-end card is expected to contain 16GB of global memory, which is twice what any console has. This means less streaming, higher resolution textures, and probably even left-over scratch space for the GPU to generate content in with compute shaders. Also, according to AMD, HBM is an easier architecture to communicate with than GDDR, which should mean a savings in die space that could be used for other things.
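The rumored 1TB/s figure lines up with published HBM2 stack parameters (roughly 2 Gb/s per pin over a 1024-bit interface per stack). The numbers below are a sketch based on those JEDEC-level figures, not confirmed Pascal specs:

```python
# Per-stack bandwidth in GB/s: pin rate (Gb/s) x pin count, divided by 8 bits per byte.
def hbm_stack_bandwidth_gbs(pin_rate_gbps, pins_per_stack=1024):
    return pin_rate_gbps * pins_per_stack / 8

per_stack = hbm_stack_bandwidth_gbs(2.0)  # 256.0 GB/s per HBM2 stack
total = 4 * per_stack                     # 1024.0 GB/s across four stacks, ~1 TB/s
```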

Third, the architecture includes native support for three levels of floating point precision. Maxwell, due to how limited 28nm was, saved on complexity by reducing 64-bit IEEE 754 floating point performance to 1/32nd the rate of 32-bit, because FP64 values are rarely used in video games. This saved transistors, but was a huge, order-of-magnitude step back from the 1/3rd ratio found on the Kepler-based GK110. While it probably won't return to the 1/2 ratio found in Fermi, Pascal should be much better suited for GPU compute.
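To put those ratios in perspective, here is the FP64 throughput each one implies for a hypothetical card with 6 TFLOPS of FP32 compute (the card and its FP32 number are invented for illustration):

```python
fp32_tflops = 6.0
fp64_ratios = {"Fermi": 1 / 2, "Kepler GK110": 1 / 3, "Maxwell": 1 / 32}

# FP64 throughput implied by each architecture's ratio at the same FP32 rate.
fp64_tflops = {arch: fp32_tflops * ratio for arch, ratio in fp64_ratios.items()}
# Maxwell's 1/32 rate yields just 0.1875 TFLOPS here, versus 2.0 for Kepler --
# the order-of-magnitude step back described above.
```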


Image Credit: WCCFTech

Mixed precision could help video games, too. Remember how I said it supports three levels? The third is 16-bit, half the width of the 32-bit format commonly used in video games. Sometimes that is sufficient, and when it is, Pascal is said to do these calculations at twice the rate of 32-bit. We'll need to see whether enough games (and other applications) are willing to drop down in precision to justify the die space that these dedicated circuits require, but it should double the performance of anything that does.

So basically, this generation should provide a massive jump in performance that enthusiasts have been waiting for. Increases in GPU memory bandwidth and the amount of features that can be printed into the die are two major bottlenecks for most modern games and GPU-accelerated software. We'll need to wait for benchmarks to see how the theoretical maps to practical, but it's a good sign.

Report: TSMC To Produce NVIDIA Pascal On 16 nm FinFET

Subject: Graphics Cards | September 16, 2015 - 09:16 AM |
Tagged: TSMC, Samsung, pascal, nvidia, hbm, graphics card, gpu

According to a report by BusinessKorea, TSMC has been selected to produce the upcoming Pascal GPU after initially competing with Samsung for the contract.


Though some had considered the possibility of both Samsung and TSMC sharing production (albeit on two different process nodes, as Samsung is on 14 nm FinFET), in the end the duties fall on TSMC's 16 nm FinFET alone if this report is accurate. The move is not too surprising considering the longstanding position TSMC has maintained as a fab for GPU makers and Samsung's lack of experience in this area.

The report didn't make the release date for Pascal any more clear, naming it "next year" for the new HBM-powered GPU, which will also reportedly feature 16 GB of HBM 2 memory for the flagship version of the card. This would potentially be the first GPU released at 16 nm (unless AMD has something in the works before Pascal's release), as all current AMD and NVIDIA GPUs are manufactured at 28 nm.

Detailed Photos of AMD Radeon R9 Nano Surface (Confirmed)

Subject: Graphics Cards | August 25, 2015 - 02:23 PM |
Tagged: Radeon R9 Nano, radeon, r9 nano, hbm, graphics, gpu, amd

New detailed photos of the upcoming Radeon R9 Nano have surfaced, and Ryan has confirmed with AMD that these are in fact real.


We've seen the outside of the card before, but for the first time we are provided a detailed look under the hood.


The cooler is quite compact and has copper heatpipes for both core and VRM

The R9 Nano is a very small card, and it will be powered by a single 8-pin power connector directed toward the back.



Connectivity is provided via three DisplayPort outputs and a single HDMI port

And fans of backplates will need to seek 3rd-party offerings as it looks like this will have a bare PCB around back.


We will keep you updated if any official specifications become available, and of course we'll have complete coverage once the R9 Nano is officially launched!

Overall GPU Shipments Down from Last Year, PC Industry Drops 10%

Subject: Graphics Cards, Systems | August 17, 2015 - 11:00 AM |
Tagged: NPD, gpu, discrete gpu, graphics, marketshare, PC industry

News from NPD Research today shows a sharp decline in discrete graphics shipments from all major vendors. Not great news for the PC industry, but not all that surprising, either.


These numbers don’t indicate a lack of discrete GPU interest in the PC enthusiast community of course, but certainly show how the mainstream market has changed. OEM laptop and (more recently) desktop makers predominantly use processor graphics from Intel and AMD APUs, though the decrease of over 7% for Intel GPUs suggests a decline in PC shipments overall.

Here are the highlights, quoted directly from NPD Research:

  • AMD's overall unit shipments decreased -25.82% quarter-to-quarter, Intel's total shipments decreased -7.39% from last quarter, and Nvidia's decreased -16.19%.
  • The attach rate of GPUs (includes integrated and discrete GPUs) to PCs for the quarter was 137% which was down -10.82% from last quarter, and 26.43% of PCs had discrete GPUs, which is down -4.15%.
  • The overall PC market decreased -4.05% quarter-to-quarter, and decreased -10.40% year-to-year.
  • Desktop graphics add-in boards (AIBs) that use discrete GPUs decreased -16.81% from last quarter.
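The 137% attach rate can exceed 100% because a single PC may count both an integrated GPU and a discrete card. A toy example, with numbers invented to match the reported rate:

```python
# Attach rate = total GPUs shipped (integrated + discrete) / PCs shipped, as a percent.
def attach_rate_percent(gpus_shipped, pcs_shipped):
    return gpus_shipped / pcs_shipped * 100

# Hypothetical quarter: 100 PCs, each with an integrated GPU,
# and 37 of them also carrying a discrete add-in board.
rate = attach_rate_percent(100 + 37, 100)  # 137.0
```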


An overall decrease of 10.4% year-to-year indicates what I'll call the continuing evolution of the PC (rather than a decline, per se), and shows how many have come to depend on smartphones for the basic computing tasks (email, web browsing) that once required a PC. Tablets didn't replace the PC in the way that was predicted only 5 years ago, and it's almost become essential to pair a PC with a smartphone for a complete personal computing experience (sorry, tablets – we just don't NEED you as much).

I would guess anyone reading this on a PC enthusiast site is not only using a PC, but probably one with discrete graphics, too. Or maybe you exclusively view our site on a tablet or smartphone? I for one won’t stop buying PC components until they just aren’t available anymore, and that dark day is probably still many years off.

Source: NPD Research

Rumor: NVIDIA Pascal up to 17 Billion Transistors, 32GB HBM2

Subject: Graphics Cards | July 24, 2015 - 12:16 PM |
Tagged: rumor, pascal, nvidia, HBM2, hbm, graphics card, gpu

An exclusive report from Fudzilla claims some outlandish numbers for the upcoming NVIDIA Pascal GPU, including 17 billion transistors and a massive amount of second-gen HBM memory.

According to the report:

"Pascal is the successor to the Maxwell Titan X GM200 and we have been tipped off by some reliable sources that it will have more than a double the number of transistors. The huge increase comes from Pascal's 16 nm FinFET process and its transistor size is close to two times smaller."


The NVIDIA Pascal board (Image credit: Legit Reviews)

Pascal's 16nm FinFET production will be a major change from the existing 28nm process found on all current NVIDIA GPUs. And if this report is accurate they are taking full advantage considering that transistor count is more than double the 8 billion found in the TITAN X.


(Image credit: Fudzilla)

And what about memory? We have long known that Pascal will be NVIDIA's first foray into HBM, and Fudzilla is reporting that up to 32GB of second-gen HBM (HBM2) will be present on the highest model, which is a rather outrageous number even compared to the 12GB TITAN X.

"HBM2 enables cards with 4 HBM 2.0 cards with 4GB per chip, or four HBM 2.0 cards with 8GB per chips results with 16GB and 32GB respectively. Pascal has power to do both, depending on the SKU."
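The garbled quote boils down to simple stack math: four HBM2 stacks at either 4 GB or 8 GB per stack, depending on the SKU:

```python
stacks = 4
# Total capacity per configuration: four stacks of 4 GB, or four stacks of 8 GB.
capacities_gb = [stacks * gb_per_stack for gb_per_stack in (4, 8)]  # [16, 32]
```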

Pascal is expected in 2016, so we'll have plenty of time to speculate on these and doubtless other rumors to come.

Source: Fudzilla
Subject: Systems
Manufacturer: PC Perspective
Tagged: quad-core, gpu, gaming, cpu

Introduction and Test Hardware


The PC gaming world has become divided by two distinct types of games: those that were designed and programmed specifically for the PC, and console ports. Unfortunately for PC gamers it seems that far too many titles are simply ported over (or at least optimized for consoles first) these days, and while PC users can usually enjoy higher detail levels and unlocked frame rates there is now the issue of processor core-count to consider. This may seem artificial, but in recent months quite a few games have been released that require at least a quad-core CPU to even run (without modifying the game).

One possible explanation for this is current console hardware: PS4 and Xbox One systems are based on multi-core AMD APUs (the 8-core AMD "Jaguar"). While a quad-core (or higher) processor might not be technically required to run current games on PCs, the fact that these exist on consoles might help to explain quad-core CPUs as a minimum spec. This trend could simply be the result of current x86 console hardware, as development of console versions of games is often prioritized (and porting has become common for PC versions of games). So it is that popular dual-core processors like the $69 Intel Pentium Anniversary Edition (G3258) are suddenly less viable for a future-proofed gaming build. While hacking these games might make dual-core CPUs work, and might be the only way to get such a game to even load since the CPU is checked at launch, this is obviously far from ideal.


Is this much CPU really necessary?

Rather than rail against this quad-core trend and question its necessity, I decided instead to see just how much of a difference the processor alone might make with some game benchmarks. This quickly escalated into more and more system configurations as I accumulated parts, eventually arriving at 36 different configurations at various price points. Yeah, I said 36. (Remember that Budget Gaming Shootout article from last year? It's bigger than that!) Some of the charts that follow are really long (you've been warned), and there’s a lot of information to parse here. I wanted this to be as fair as possible, so there is a theme to the component selection. I started with three processors each (low, mid, and high price) from AMD and Intel, and then three graphics cards (again, low, mid, and high price) from AMD and NVIDIA.

Here’s the component rundown with current pricing*:

Processors tested:

Graphics cards tested:

  • AMD Radeon R7 260X (ASUS 2GB OC) - $137.24
  • AMD Radeon R9 280 (Sapphire Dual-X) - $169.99
  • AMD Radeon R9 290X (MSI Lightning) - $399
  • NVIDIA GeForce GTX 750 Ti (OEM) - $149.99
  • NVIDIA GeForce GTX 770 (OEM) - $235
  • NVIDIA GeForce GTX 980 (ASUS STRIX) - $519

*These prices were current as of 6/29/15, and of course fluctuate.

Continue reading our Quad-Core Gaming Roundup: How Much CPU Do You Really Need?

Introduction and Technical Specifications


In our previous article here, we demonstrated how to mod the EVGA GTX 970 SC ACX 2.0 video card to get higher performance and significantly lower running temps. Now we decided to take two of these custom modded EVGA GTX 970 cards to see how well they perform in an SLI configuration. ASUS was kind enough to supply us with one of their newly introduced ROG Enthusiast SLI Bridges for our experiments.

ASUS ROG Enthusiast SLI Bridge


Courtesy of ASUS


Courtesy of ASUS

For the purposes of running the two EVGA GTX 970 SC ACX 2.0 video cards in SLI, we chose to use the 3-way variant of ASUS' ROG Enthusiast SLI Bridge so that we could run the tests with full 16x bandwidth across both cards (with the cards in PCIe 3.0 x16 slots 1 and 3 in our test board). This customized SLI adapter features a powered red-colored ROG logo embedded in its brushed aluminum upper surface. The adapter supports 2-way and 3-way SLI in a variety of board configurations.


Courtesy of ASUS

ASUS offers their ROG Enthusiast SLI Bridge in three sizes covering 2-way, 3-way, and 4-way SLI configurations. All bridges feature the top brushed-aluminum cap with embedded glowing ROG logo.

Continue reading our article on Modding the EVGA GTX 970 SC Graphics Card!


Courtesy of ASUS

The smallest bridge supports 2-way SLI configurations with either a two or three slot separation. The middle sized bridge supports up to a 3-way SLI configuration with a two slot separation required between each card. The largest bridge supports up to a 4-way SLI configuration, also requiring a two slot separation between each card used.

Technical Specifications (taken from the ASUS website)

  • Dimensions (L x W x H, mm): 2-WAY: 97 x 43 x 21; 3-WAY: 108 x 53 x 21; 4-WAY: 140 x 53 x 21
  • Weight: 2-WAY: 70 g; 3-WAY: 91 g; 4-WAY: 123 g
  • Compatible GPU set-ups: 2-WAY: 2-WAY-S & 2-WAY-M; 3-WAY: 2-WAY-L & 3-WAY; 4-WAY: 4-WAY
  • Contents: 2-WAY: 1 x optional power cable & 2 PCBs included for varying configurations; 3-WAY: 1 x optional power cable; 4-WAY: 1 x optional power cable

Continue reading our story!