Subject: Graphics Cards | July 24, 2015 - 04:16 PM | Sebastian Peak
Tagged: rumor, pascal, nvidia, HBM2, hbm, graphics card, gpu
An exclusive report from Fudzilla claims some outlandish numbers for the upcoming NVIDIA Pascal GPU, including 17 billion transistors and a massive amount of second-gen HBM memory.
According to the report:
"Pascal is the successor to the Maxwell Titan X GM200 and we have been tipped off by some reliable sources that it will have more than a double the number of transistors. The huge increase comes from Pascal's 16 nm FinFET process and its transistor size is close to two times smaller."
The NVIDIA Pascal board (Image credit: Legit Reviews)
Pascal's 16nm FinFET production will be a major change from the existing 28nm process used for all current NVIDIA GPUs, and if this report is accurate NVIDIA is taking full advantage of the shrink: the claimed transistor count is more than double the 8 billion found in the TITAN X.
(Image credit: Fudzilla)
And what about memory? We have long known that Pascal will be NVIDIA's first foray into HBM, and Fudzilla is reporting that up to 32GB of second-gen HBM (HBM2) will be present on the highest-end model, which is a rather outrageous number even compared to the 12GB TITAN X.
"HBM2 enables cards with 4 HBM 2.0 cards with 4GB per chip, or four HBM 2.0 cards with 8GB per chips results with 16GB and 32GB respectively. Pascal has power to do both, depending on the SKU."
Pascal is expected in 2016, so we'll have plenty of time to speculate on these and doubtless other rumors to come.
Introduction and Test Hardware
The PC gaming world has become divided between two distinct types of games: those designed and programmed specifically for the PC, and console ports. Unfortunately for PC gamers, it seems that far too many titles are simply ported over (or at least optimized for consoles first) these days, and while PC users can usually enjoy higher detail levels and unlocked frame rates, there is now the issue of processor core count to consider. This may seem artificial, but in recent months quite a few games have been released that require at least a quad-core CPU to even run (without modifying the game).
One possible explanation is current console hardware: the PS4 and Xbox One are both based on multi-core AMD APUs (the 8-core AMD "Jaguar"). While a quad-core (or better) processor might not be technically required to run current games on a PC, its presence in the consoles helps explain quad-core CPUs showing up as a minimum spec, since development of the console version of a game is often prioritized and PC versions are frequently ports. So it is that popular dual-core processors like the $69 Intel Pentium Anniversary Edition (G3258) are suddenly less viable for a future-proofed gaming build. Hacking these games might make a dual-core CPU work, and may be the only way to get such a game to even load when the CPU is checked at launch, but this is obviously far from ideal.
Is this much CPU really necessary?
Rather than rail against this quad-core trend and question its necessity, I decided instead to see just how much of a difference the processor alone might make with some game benchmarks. This quickly escalated into more and more system configurations as I accumulated parts, eventually arriving at 36 different configurations at various price points. Yeah, I said 36. (Remember that Budget Gaming Shootout article from last year? It's bigger than that!) Some of the charts that follow are really long (you've been warned), and there's a lot of information to parse here. I wanted this to be as fair as possible, so there is a theme to the component selection: I started with three processors each (low, mid, and high price) from AMD and Intel, and then three graphics cards (again low, mid, and high price) from AMD and NVIDIA. A quick sketch of the resulting test matrix follows the component list below.
Here’s the component rundown with current pricing*:
- AMD Athlon X4 860K - $74.99
- AMD FX 8350 - $165.93
- AMD FX 9590 (with AIO cooler) - $259.99
- Intel Core i3-4130 - $118
- Intel Core i5-4440 - $184.29
- Intel Core i7-4790K - $338.99
Graphics cards tested:
- AMD Radeon R7 260X (ASUS 2GB OC) - $137.24
- AMD Radeon R9 280 (Sapphire Dual-X) - $169.99
- AMD Radeon R9 290X (MSI Lightning) - $399
- NVIDIA GeForce GTX 750 Ti (OEM) - $149.99
- NVIDIA GeForce GTX 770 (OEM) - $235
- NVIDIA GeForce GTX 980 (ASUS STRIX) - $519
*These prices were current as of 6/29/15, and of course fluctuate.
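To make the scale of the testing concrete, here is a minimal sketch of where the 36 configurations come from: every CPU paired with every GPU. The shorthand names are ours.

```python
# Every CPU paired with every GPU: 6 x 6 = 36 test configurations.
from itertools import product

cpus = ["Athlon X4 860K", "FX 8350", "FX 9590",
        "i3-4130", "i5-4440", "i7-4790K"]
gpus = ["R7 260X", "R9 280", "R9 290X",
        "GTX 750 Ti", "GTX 770", "GTX 980"]

configs = list(product(cpus, gpus))
print(len(configs))  # 36
for cpu, gpu in configs[:3]:
    print(cpu, "+", gpu)
```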
Introduction and Technical Specifications
In our previous article here, we demonstrated how to mod the EVGA GTX 970 SC ACX 2.0 video card for higher performance and significantly lower running temps. Now we have decided to take two of these custom-modded EVGA GTX 970 cards and see how well they perform in an SLI configuration. ASUS was kind enough to supply us with one of their newly introduced ROG Enthusiast SLI Bridges for our experiments.
ASUS ROG Enthusiast SLI Bridge
Courtesy of ASUS
For the purposes of running the two EVGA GTX 970 SC ACX 2.0 video cards in SLI, we chose the 3-way variant of ASUS' ROG Enthusiast SLI Bridge so that we could run the tests with full x16 bandwidth to both cards (with the cards in PCIe 3.0 x16 slots 1 and 3 of our test board). This customized SLI adapter features an illuminated red ROG logo embedded in its brushed-aluminum upper surface, and it supports 2-way and 3-way SLI in a variety of board configurations.
Courtesy of ASUS
ASUS offers their ROG Enthusiast SLI Bridge in three sizes covering various 2-way, 3-way, and 4-way SLI configurations. All bridges feature the brushed-aluminum top cap with an embedded glowing ROG logo.
Courtesy of ASUS
The smallest bridge supports 2-way SLI configurations with either a two- or three-slot separation. The middle-sized bridge supports up to a 3-way SLI configuration with a two-slot separation required between each card. The largest bridge supports up to a 4-way SLI configuration, also requiring a two-slot separation between each card used.
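Those rules are easy to mix up, so here is an illustrative encoding of them. The data structure and function names are ours, not anything ASUS publishes.

```python
# Bridge sizes mapped to supported card counts and slot separations,
# per the description above. Structure and names are our own shorthand.
BRIDGES = {
    "small":  {"max_ways": 2, "slot_separations": (2, 3)},
    "middle": {"max_ways": 3, "slot_separations": (2,)},
    "large":  {"max_ways": 4, "slot_separations": (2,)},
}

def supports(bridge: str, n_cards: int, separation: int) -> bool:
    spec = BRIDGES[bridge]
    return n_cards <= spec["max_ways"] and separation in spec["slot_separations"]

# Our setup: two cards on the 3-way bridge in slots 1 and 3 (two-slot spacing)
print(supports("middle", 2, 2))  # True
```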
Technical Specifications (taken from the ASUS website)
| Specification | 2-WAY | 3-WAY | 4-WAY |
|---|---|---|---|
| Dimensions (L x W x H, mm) | 97 x 43 x 21 | 108 x 53 x 21 | 140 x 53 x 21 |
| Weight | 70 g | 91 g | |
| Compatible GPU set-ups | 2-WAY-S & 2-WAY-M | 2-WAY-L & 3-WAY | |
| Contents | 1 x optional power cable & 2 PCBs included for varying configurations | 1 x optional power cable | 1 x optional power cable |
Qualcomm’s GPU History
Despite its market dominance, Qualcomm may be one of the least known contenders in the battle for the mobile space. While players like Apple, Samsung, and even NVIDIA are often cited as the most exciting and most revolutionary, none come close to the sheer sales, breadth of technology, and market share that Qualcomm occupies. Brands like Krait and Snapdragon have helped push the company into the top 3 semiconductor companies in the world, following only Intel and Samsung.
In July 1985, seven industry veterans came together in the den of Dr. Irwin Jacobs’ San Diego home to discuss an idea. They wanted to build “Quality Communications” (thus the name Qualcomm), and they outlined a plan that evolved into one of the telecommunications industry’s great start-up success stories.
Though Qualcomm sold its handset business to Kyocera in 1999, many of today’s most popular mobile devices are powered by Qualcomm’s Snapdragon chipsets, with integrated CPU, GPU, DSP, multimedia codecs, power management, baseband logic, and more. In fact, a typical “chipset” from Qualcomm encompasses up to 20 different chips with different functions beyond the main application processor. If you own a Galaxy Note 4, Motorola Droid Turbo, Nexus 6, or Samsung Galaxy S5, then you are most likely a user of one of Qualcomm’s Snapdragon chipsets.
Qualcomm’s GPU History
Before 2006, the mobile GPU as we know it today was largely unnecessary. Feature phones and “dumb” phones still made up the large majority of the market, with smartphones and mobile tablets in the early stages of development. At that point, all of the visual data presented on screen, whether on a small monochrome display or the color screen of a PDA, was drawn by a software renderer running on traditional CPU cores.
But by 2007, the first fixed-function, OpenGL ES 1.0-class GPUs had started shipping in mobile devices. These dedicated graphics processors were originally focused on drawing and updating the user interface on smartphones and personal data devices. Eventually these graphics units took on what would be considered the most basic gaming tasks.
Introduction and Technical Specifications
The measure of a true modder is not how powerful a system you can build by throwing money at it, but how well you can innovate to make the components you have on hand run better. Some make artistic statements with truly awe-inspiring cases, while others take the Dremel and clamps to their beloved video cards in an attempt to eke out that last bit of performance. This article serves the latter group. Don't get me wrong, the card will look nice once we're done with it, but the point here is to re-use components on hand where possible to minimize cost while maximizing the performance (and acoustic) benefits.
EVGA GTX 970 SC Graphics Card
Courtesy of EVGA
We started with an EVGA GTX 970 SC card with 4GB of RAM, bundled with ACX 2.0, the new revision of EVGA's ACX cooler. The card is well built and carries a slight factory overclock out of the box. The ACX 2.0 cooler is a redesign of the original cooler included with the card, offering better cooling potential along with a semi-passive mode: the fans do not spin up until the GPU block temperature exceeds 60°C.
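As a toy model of that semi-passive behavior: the 60°C threshold comes from EVGA's description, while treating it as a simple on/off check (rather than a full fan curve) is our simplification.

```python
# Fans stay off below the threshold; EVGA's stated trigger is 60C.
FAN_ON_TEMP_C = 60

def fans_active(gpu_temp_c: float) -> bool:
    return gpu_temp_c > FAN_ON_TEMP_C

for temp in (45, 59, 61, 80):
    print(f"{temp}C -> {'fans on' if fans_active(temp) else 'passive'}")
```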
Courtesy of EVGA
WATERCOOL HeatKiller GPU-X3 Core GPU Waterblock
Courtesy of WATERCOOL
For water cooling the EVGA GTX 970 SC, we decided to use the WATERCOOL HeatKiller GPU-X3 Core water block. This block features a POM body with a copper core for superior heat transfer from the GPU to the liquid. The HeatKiller GPU-X3 Core is a GPU-only cooler, meaning the memory and integrated VRM circuitry are not actively cooled by the block. The decision to use a GPU-only block rather than a full-cover block was twofold: availability and cost. I had a few of these on hand, making it an easy decision cost-wise.
Subject: Graphics Cards | June 1, 2015 - 06:00 AM | Sebastian Peak
Tagged: computex, Poseidon GTX 980 Ti, GTX 980 Ti, gpu, ASUS ROG, computex 2015
ASUS has already announced a Poseidon version of the new NVIDIA GeForce GTX 980 Ti graphics card, which is part of the company's Republic of Gamers (ROG) lineup.
No photo of the GTX 980 Ti version is available yet, so here's the GTX 980 version for reference
"ROG Poseidon GTX 980 Ti incorporates the DirectCU H2O hybrid cooling solution with a combined vapor chamber and water channels to give users cooler temperatures along with improved noise reduction for 3x quieter performance. ASUS graphics cards are produced via exclusive Auto-Extreme technology, an industry-first 100% automated process, and feature aerospace-grade Super Alloy Power II components for unsurpassed quality and reliability. ROG Poseidon GTX 980Ti also features GPU Tweak II with XSplit Gamecaster for intuitive performance tweaks and gameplay streaming."
We'll keep you posted on pricing and availability (and actual product photos) once they're available!
Subject: Graphics Cards | May 29, 2015 - 03:05 PM | Sebastian Peak
Tagged: rumors, radeon, hbm, graphics, gpu, Fury, Fiji, amd
Another rumor has emerged about an upcoming GPU from AMD, and this time it's a possible name for the HBM-powered Fiji card a lot of us have been speculating about.
The rumor, from VideoCardz via Expreview (you have to love the multiple layers of reporting here), states that the new card will be named Radeon Fury:
"Radeon Fury would be AMD’s response to growing popularity of TITAN series. It is yet unclear how AMD is planning to adopt Fury naming schema. Are we going to see Fury XT or Fury PRO? Well, let’s just wait and see. This rumor also means that Radeon R9 390X will be a direct rebrand of R9 290X with 8GB memory."
Of course this is completely unsubstantiated, and Fury is branding that dates back to the ATI days (remember the Rage Fury?), but who knows? I can only hope that if true, AMD will adopt all caps: TITAN! FURY! Feel the excitement. What do you think of this possible name for the upcoming AMD flagship GPU?
Subject: Graphics Cards | May 26, 2015 - 09:03 PM | Sebastian Peak
Tagged: rumors, nvidia, leaks, GTX 980 Ti, gpu, gm200
Who doesn’t love rumor and speculation about unreleased products? (Other than the manufacturers of such products, of course.) Today VideoCardz is reporting, via HardwareBattle, a GPU-Z screenshot reportedly showing specs for an NVIDIA GeForce GTX 980 Ti.
Image credit: HardwareBattle via VideoCardz.com
First off, the HardwareBattle logo conveniently obscures the hardware ID (as well as the ROP/TMU counts). What is visible is the 2816 shader count, which places the card between the GTX 980 (2048) and the TITAN X (3072). The 6 GB of GDDR5 memory sits on a 384-bit interface at 7 Gbps, so bandwidth should be the same 336 GB/s as the TITAN X. As for core clocks on this GPU (which seems likely to be a cut-down GM200), they too match the TITAN X, with 1000 MHz base and 1076 MHz boost clocks shown in the screenshot.
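That bandwidth figure is easy to verify from the leaked specs; a quick sketch of the math:

```python
# Memory bandwidth = bus width (bits) / 8 * effective data rate (Gbps per pin).
bus_width_bits = 384
effective_rate_gbps = 7

bandwidth = bus_width_bits / 8 * effective_rate_gbps
print(f"{bandwidth} GB/s")  # 336.0 GB/s, matching the TITAN X
```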
Image credit: VideoCardz.com
We await any official announcement, but from the frequency of the leaks it seems we won’t have to wait too long.
Subject: Graphics Cards | May 6, 2015 - 06:21 PM | Ryan Shrout
Tagged: amd, hbm, radeon, gpu
During today's 2015 AMD Financial Analyst Day, CEO Dr. Lisa Su discussed some of the details of the upcoming enthusiast Radeon graphics product. Though it wasn't given a name, she repeatedly said that the product would be announced "in the coming weeks...at upcoming industry events."
You won't find specifications here, but understanding the goals and targets AMD has for this new flagship product helps tell its story. Dr. Su sees AMD investing at very specific inflection points, the most recent of which are DirectX 12, 4K displays, and VR technology. With the adoption of HBM (high-bandwidth memory), which sits on the same package as the GPU rather than being spread across a physical PCB, we will see both a reduction in power consumption and a significant increase in GPU memory bandwidth.
HBM will accelerate performance improvements at those key inflection points Dr. Su mentioned. The additional memory bandwidth will help discrete GPUs push 4K resolutions and beyond without being held back by large texture sizes. AMD's LiquidVR software, in conjunction with HBM, should be able to improve latency and reduce performance concerns on current and future generations of virtual reality hardware.
One interesting comment made during the conference was that HBM will enable new form factors for GPUs now that memory no longer needs to be spread out across a PCB. While there isn't much room for differentiation in the add-in card market, in the mobile space that could mean some very interesting things for higher-performance gaming notebooks.
Mark Papermaster, AMD's CTO, said earlier in the conference that HBM would aid performance, but perhaps more importantly it will lower power and improve total GPU efficiency. HBM will offer more than 3x the performance per watt of GDDR5 while also drawing more than 50% less power. Lower-power, higher-performance upgrades don't happen often, so I am really excited to see what AMD does with it.
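To put those claims in context, here is a rough bandwidth comparison. The GDDR5 figures are the R9 290X's; the HBM figures assume the first-generation HBM configuration AMD and SK Hynix have published (four 1024-bit stacks at 1 Gbps effective), since AMD has not confirmed this product's memory setup.

```python
# Bandwidth = bus width (bits) / 8 * effective data rate (Gbps per pin).
def bandwidth_gbs(bus_width_bits: int, rate_gbps: float) -> float:
    return bus_width_bits / 8 * rate_gbps

gddr5 = bandwidth_gbs(512, 5)       # R9 290X: 320.0 GB/s
hbm = 4 * bandwidth_gbs(1024, 1)    # four HBM1 stacks: 512.0 GB/s
print(f"GDDR5: {gddr5} GB/s, HBM: {hbm} GB/s")
```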
There weren't any more details on the next flagship Radeon GPU but it doesn't look like we'll have to wait much longer.
Subject: General Tech | May 4, 2015 - 10:19 PM | Jeremy Hellstrom
Tagged: R9 380, R7 A360, R7 A330, leak, gpu, amd
HP announced their upcoming lineup of desktops, including new Pavilions, ENVYs, and a Spectre studio display with 4K resolution. An astute reader noticed something else they announced, unintentionally: the model names of three unreleased AMD GPUs. The machines will be available starting June 10th, which gives us a rough release timeline. The pricing does not reveal all that much, as it references the base models, so it is hard to know what discrete GPU, if any, is included in the base model.
The HP Pavilion All-in-One PCs will sport USB 3.0 and your choice of an AMD Radeon R7 A330 or R7 A360. As these are all-in-one PCs, such as the one below, you can expect these cards to represent the mid-range of AMD's upcoming lineup, though they could still put out a decent amount of power: the cooling in these systems is effective enough that HP offers models with Intel Core i7 and AMD A10 chips.
What most people will likely get excited about is in the HP ENVY and HP ENVY Phoenix towers: the R9 380, which is offered as an alternative to the GTX 980. These machines also offer USB 3.0 as well as an option for a 512GB SSD in place of a 3TB HDD. The R9 380 will be powerful enough to handle the new 32" HP Spectre Studio Display, a 4K display with built-in speakers and a 178° viewing angle, which implies an IPS panel, albeit with an unknown refresh rate.
That is about all we know for now, but you can keep an eye out for more news about the R7 A330, R7 A360 and R9 380 right here.