Subject: Graphics Cards | May 5, 2016 - 06:38 PM | Scott Michaud
Tagged: nvidia, pascal, geforce
We're expecting a major announcement tomorrow... at some point. NVIDIA has created a teaser website, called “Order of 10,” that is counting down to 1 PM EDT. On the same day, at 9 PM EDT, they will host a live stream on their Twitch channel. This doesn't appear to have been planned as far in advance as their Game24 event, which turned out to be a GTX 970 and GTX 980 launch party, but it wouldn't surprise me if it ended up being a similar format. I don't know for sure whether one or both events will be about the new mainstream Pascal, but it would be surprising if Friday ends (for North America) without a GPU launch of some sort.
VideoCardz got a hold of 3DMark Fire Strike Extreme benchmarks, though. The card is registered as an 8GB part with a GPU clock of 1860 MHz. While a synthetic benchmark, let alone a single run of anything, isn't necessarily representative of overall performance, it scores slightly higher than a reasonably overclocked GTX 980 Ti (and way above a stock one). Specifically, this card yields a graphics score of 10102 on Fire Strike Extreme 1.1, while the 980 Ti achieved 7781 for us without an overclock.
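For a sense of scale, the leaked graphics score works out to roughly a 30 percent lead over our stock result. This is just back-of-the-envelope arithmetic on the two scores quoted above, not a formal benchmark comparison:

```python
# Rough lead of the leaked card over our stock GTX 980 Ti result,
# using the two Fire Strike Extreme 1.1 graphics scores quoted above.
leaked_pascal = 10102   # leaked card's graphics score
stock_980_ti = 7781     # our stock GTX 980 Ti graphics score

print(f"{leaked_pascal / stock_980_ti - 1:.0%} above a stock 980 Ti")
# prints "30% above a stock 980 Ti"
```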
We expected a substantial bump in clock rate, especially after GP100 was announced at GTC. This “full” Pascal chip was listed at a 1328 MHz base clock with a 1480 MHz boost, and enterprise GPUs are often underclocked compared to consumer parts, stock to stock. As stated a few times, overclocking headroom could widen the gap further, too; the GTX 980 Ti was able to go from 1190 MHz to 1465 MHz. On the other hand, consumer Pascal's recorded 1860 MHz could itself be an overclock. We won't know until NVIDIA makes an official release. If not, maybe we will see these new parts break 2 GHz in general use?
Subject: Graphics Cards | April 29, 2016 - 11:09 PM | Jeremy Hellstrom
Tagged: amd, dx12, async shaders
Earlier in the month, [H]ard|OCP investigated the performance scaling that Intel processors display in DX12; now they have finished their tests on AMD processors. These tests include async compute information, so be warned before venturing forth into the comments. [H] tested an FX 8370 at 2 GHz and 4.3 GHz to see what effect clock speed had on the games; the 3 GHz tests did not add any value and were dropped in favour of those two frequencies. There are some rather interesting results and discussion, so drop by for the details.
"One thing that has been on our minds about the new DX12 API is its ability to distribute workloads better on the CPU side. Now that we finally have a couple of new DX12 games that have been released to test, we spend a bit of time getting to bottom of what DX12 might be able to do for you. And a couple sentences on Async Compute."
Here are some more Graphics Card articles from around the web:
- XFX R9 Fury Triple Dissipation Review @ OCC
- AMD Radeon Pro Duo Preview @ techPowerUp
- NVIDIA VR performance featuring ASUS @ Kitguru
- ASUS GTX 950 2 GB (no power connector) @ techPowerUp
- Windows 10 vs. Ubuntu 16.04 NVIDIA OpenGL Performance @ Phoronix
History and Specifications
The Radeon Pro Duo has had an interesting history. First shown as an unbranded, dual-GPU PCB during E3 2015 last June, it was touted by AMD as the ultimate graphics card for both gamers and professionals. At that time, the company thought that an October launch was feasible, but that clearly didn’t work out. When pressed for information in the Oct/Nov timeframe, AMD said that it had delayed the product into Q2 2016 to better align with the launch of the VR systems from Oculus and HTC/Valve.
During a GDC press event in March, AMD finally unveiled the Radeon Pro Duo brand, but they were also walking back on the idea of the dual-Fiji beast being aimed at the gaming crowd, even partially. Instead, the company talked up the benefits for game developers and content creators, such as its 8192 stream processors for offline rendering, or even to aid game devs in the implementation and improvement of multi-GPU for upcoming games.
Anyone who pays attention to the graphics card market can see why AMD would make this positional shift with the Radeon Pro Duo. The Fiji architecture is on the way out, with Polaris due in June by AMD’s own proclamation. At $1500, the Radeon Pro Duo will stand in stark contrast to the prices of the Polaris GPUs this summer, and it sits well above the price of any NVIDIA part in the GeForce line. And, though CrossFire has made drastic improvements over the last several years thanks to new testing techniques, the multi-GPU ecosystem is going through a major shift with both DX12 and VR bearing down on it.
So yes, the Radeon Pro Duo has both RADEON and PRO right there in the name. What’s a respectable PC Perspective graphics reviewer supposed to do when a card like that finds its way into the office? Test it, of course! I’ll take a look at a handful of recent games as well as a new feature that AMD has integrated with 3DS Max, called FireRender, to showcase some of the professional chops of the new card.
Subject: Graphics Cards | April 25, 2016 - 08:41 PM | Jeremy Hellstrom
Tagged: graphics driver, crimson, amd
AMD's new Crimson driver has just been released with new features including official support for the new Radeon Pro Duo as well as both the Oculus Rift and HTC Vive VR headsets. It also adds enhanced support for AMD's XConnect technology for external GPUs connected via a Thunderbolt 3 interface. Crossfire profile updates include Hitman, Elite Dangerous and Need for Speed and they have also resolved the ongoing issue with the internal update procedure not seeing the newest drivers. If you are having issues with games crashing to desktop on launch you will still need to disable the AMD Gaming Evolved overlay, unfortunately.
"The latest version of Radeon Software Crimson Edition is here with 16.4.2. With this version, AMD delivers many quality improvements, updated/introduced new CrossFire profiles and delivered full support for AMD’s XConnect technology (including plug’n’play simplicity for Thunderbolt 3 eGFX enclosures configured with Radeon R9 Fury, Nano or 300 Series GPUs.) Best of all, our DirectX 12 leadership continues to be strong, as shown by the performance numbers below."
The Dual-Fiji Card Finally Arrives
This weekend, leaks of information on both WCCFTech and VideoCardz.com have revealed all the information about the pending release of AMD’s dual-GPU giant, the Radeon Pro Duo. While no one at PC Perspective has been briefed on the product officially, all of the interesting data surrounding the product is clearly outlined in the slides on those websites, minus some independent benchmark testing that we are hoping to get to next week. Based on the report from both sites, the Radeon Pro Duo will be released on April 26th.
AMD actually revealed the product and branding for the Radeon Pro Duo back in March, during its live streamed Capsaicin event surrounding GDC. At that point we were given the following information:
- Dual Fiji XT GPUs
- 8GB of total HBM memory
- 4x DisplayPort (this has since been modified)
- 16 TFLOPS of compute
- $1499 price tag
The design of the card follows the same industrial design as the reference designs of the Radeon Fury X, and integrates a dual-pump cooler and external fan/radiator to keep both GPUs running cool.
Based on the slides leaked today, AMD has revised the Radeon Pro Duo design to include three DisplayPort connections and one HDMI port. This was a necessary change, as the Oculus Rift requires an HDMI port to work; only the HTC Vive has built-in support for a DisplayPort connection, and even in that case you would need a full-size to mini-DisplayPort cable.
The 8GB of HBM (high bandwidth memory) on the card is split between the two Fiji XT GPUs, just like other multi-GPU options on the market. The 350 watt power draw is exceptionally high, exceeded only by AMD’s previous dual-GPU beast, the Radeon R9 295X2, which used 500+ watts, and the NVIDIA GeForce GTX Titan Z, which draws 375 watts!
Here is the specification breakdown of the Radeon Pro Duo. The card has 8192 total stream processors and 128 Compute Units, split evenly between the two GPUs. You are getting two full Fiji XT GPUs in this card, an impressive feat made possible in part by the use of High Bandwidth Memory and its smaller physical footprint.
| | Radeon Pro Duo | R9 Nano | R9 Fury | R9 Fury X | GTX 980 Ti | TITAN X | GTX 980 | R9 290X |
|---|---|---|---|---|---|---|---|---|
| GPU | Fiji XT x 2 | Fiji XT | Fiji Pro | Fiji XT | GM200 | GM200 | GM204 | Hawaii XT |
| Rated Clock | up to 1000 MHz | up to 1000 MHz | 1000 MHz | 1050 MHz | 1000 MHz | 1000 MHz | 1126 MHz | 1000 MHz |
| Memory | 8GB (4GB x 2) | 4GB | 4GB | 4GB | 6GB | 12GB | 4GB | 4GB |
| Memory Clock | 500 MHz | 500 MHz | 500 MHz | 500 MHz | 7000 MHz | 7000 MHz | 7000 MHz | 5000 MHz |
| Memory Interface | 4096-bit (HBM) x 2 | 4096-bit (HBM) | 4096-bit (HBM) | 4096-bit (HBM) | 384-bit | 384-bit | 256-bit | 512-bit |
| Memory Bandwidth | 1024 GB/s | 512 GB/s | 512 GB/s | 512 GB/s | 336 GB/s | 336 GB/s | 224 GB/s | 320 GB/s |
| TDP | 350 watts | 175 watts | 275 watts | 275 watts | 250 watts | 250 watts | 165 watts | 290 watts |
| Peak Compute | 16.38 TFLOPS | 8.19 TFLOPS | 7.20 TFLOPS | 8.60 TFLOPS | 5.63 TFLOPS | 6.14 TFLOPS | 4.61 TFLOPS | 5.63 TFLOPS |
| Transistor Count | 8.9B x 2 | 8.9B | 8.9B | 8.9B | 8.0B | 8.0B | 5.2B | 6.2B |
The Radeon Pro Duo has a rated clock speed of up to 1000 MHz. That’s the same clock speed as the R9 Fury and the rated “up to” frequency on the R9 Nano. It’s worth noting that we did see a handful of instances where the R9 Nano’s power limiting capability resulted in some extremely variable clock speeds in practice. AMD recently added a feature to its Crimson driver to disable power metering on the Nano, at the expense of more power draw, and I would assume the same option would work for the Pro Duo.
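As a sanity check, the Peak Compute and Memory Bandwidth figures in the specification table follow directly from the listed specs. This short sketch reproduces a few of them, assuming the usual conventions: two FP32 operations (one fused multiply-add) per stream processor per clock, double data rate on HBM's 500 MHz clock, and the GDDR5 clocks in the table already being effective transfer rates:

```python
def peak_tflops(stream_processors, clock_mhz):
    """Theoretical FP32 peak: SPs * 2 ops (one FMA) per clock."""
    return stream_processors * 2 * clock_mhz / 1e6

def bandwidth_gb_s(bus_bits, effective_mt_s):
    """Memory bandwidth: bus width in bytes * effective transfer rate."""
    return (bus_bits / 8) * effective_mt_s / 1e3

# Radeon Pro Duo: 8192 SPs at up to 1000 MHz
print(peak_tflops(8192, 1000))        # 16.384 -> the table's 16.38 TFLOPS

# One Fiji GPU: 4096-bit HBM at 500 MHz, double data rate (1000 MT/s)
print(bandwidth_gb_s(4096, 500 * 2))  # 512.0 GB/s

# GTX 980 Ti: 384-bit GDDR5 at an effective 7000 MT/s
print(bandwidth_gb_s(384, 7000))      # 336.0 GB/s
```

The Pro Duo's headline 1024 GB/s is simply that 512 GB/s per Fiji GPU summed across both, the same way its 16.38 TFLOPS doubles the Nano's 8.19.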
Subject: Graphics Cards | April 22, 2016 - 02:16 PM | Sebastian Peak
Tagged: rumor, report, pascal, nvidia, leak, graphics card, gpu, gddr5x, GDDR5
According to a report from VideoCardz (via Overclock.net/ChipHell), high-quality images of the upcoming GP104 die have leaked; this particular variant is expected to power the GeForce GTX 1070 graphics card.
Image credit: VideoCardz.com
"This GP104-200 variant is supposedly planned for GeForce GTX 1070. Although it is a cut-down version of GP104-400, both GPUs will look exactly the same. The only difference being modified GPU configuration. The high quality picture is perfect material for comparison."
A couple of interesting things emerge from this die shot: the relatively small size of the GPU (die size estimated at 333 mm²), and the assumption that it will use conventional GDDR5 memory, based on a previously leaked photo of the die on a PCB.
Alleged photo of GP104 using GDDR5 memory (Image credit: VideoCardz via ChipHell)
"Leaker also says that GTX 1080 will feature GDDR5X memory, while GTX 1070 will stick to GDDR5 standard, both using 256-bit memory bus. Cards based on GP104 GPU are to be equipped with three DisplayPorts, HDMI and DVI."
While this is no doubt disappointing to those anticipating HBM with the upcoming Pascal consumer GPUs, the move isn't all that surprising considering the consistent rumors that GTX 1080 would use GDDR5X.
Is the lack of HBM (or HBM2) enough to make you skip this generation of GeForce GPU? This author points out that AMD's Fury X - the first GPU to use HBM - was still unable to beat a GTX 980 Ti in many tests, even though the 980 Ti uses conventional GDDR5. Memory is obviously important, but the core defines the performance of the GPU.
If NVIDIA has made improvements to performance and efficiency we should see impressive numbers, but this might be a more iterative update than originally expected - which only gives AMD more of a chance to win marketshare with their upcoming Radeon 400-series GPUs. It should be an interesting summer.
Subject: Graphics Cards | April 21, 2016 - 12:37 PM | Sebastian Peak
Zotac has released a new variant of the low-power NVIDIA GeForce GT 710, and while this wouldn't normally be news, this card has a very important distinction: its PCI-E x1 interface.
With its single-slot design, low-profile support (a pair of brackets is included), and that PCI-E x1 interface, this card can go places where GPUs have never been able to go (AFAIK). Granted, you won't be doing much gaming on a GT 710, which features 192 CUDA cores and 1GB of DDR3 memory, but it does provide support for up to 3 monitors via its DVI, HDMI, and VGA outputs.
A PCI-E x1 GPU would certainly provide some interesting options for ultra-compact systems such as those based on thin mini-ITX, which does not offer a full-length PCI Express slot; or for adding additional monitor support for business machines that only offer a single PCI-E x16 slot, but have a x1 slot available.
Specifications from Zotac:
- GPU: GeForce GT 710
- CUDA cores: 192
- Video Memory: 1GB DDR3
- Memory Bus: 64-bit
- Engine Clock: 954 MHz
- Memory Clock: 1600 MHz
- PCI Express: PCI-E x1
- Display Outputs: DL-DVI, VGA, HDMI
- HDCP Support: Yes
- Multi Display Capability: 3
- Recommended Power Supply: 300W
- Power Consumption: 25W
- Power Input: N/A
- API Support: DirectX 12 (feature level 11_0), OpenGL 4.5
- Cooling: Passive
- Slot Size: Single Slot
- SLI: N/A
- Supported OS: Windows 10 / 8 / 7 / Vista / XP
- Card Dimensions: 146.05mm x 111.15mm
- Accessories: 2x low-profile I/O brackets, driver disk, user manual
The card, listed under model number ZT-71304-20L, has not yet appeared on any U.S. sites for purchase (that I can find, anyway), so we will have to wait to see where pricing lands.
Subject: Graphics Cards, Processors | April 19, 2016 - 03:21 PM | Ryan Shrout
Tagged: sony, ps4, Playstation, neo, giant bomb, APU, amd
Based on a new report coming from Giant Bomb, Sony is set to release a new console this year with upgraded processing power and a focus on 4K capabilities, code named NEO. We have been hearing for several weeks that both Microsoft and Sony were planning partial generation upgrades but it appears that details for Sony's update have started leaking out in greater detail, if you believe the reports.
Giant Bomb isn't known for tossing around speculation and tends to only report details it can safely confirm. Austin Walker says "multiple sources have confirmed for us details of the project, which is internally referred to as the NEO."
The current PlayStation 4 APU
Image source: iFixIt.com
There are plenty of interesting details in the story, including Sony's determination not to split the user base between consoles: developers will be required to ship both a mode for the "base" PS4 and one for the NEO. But most interesting to us is the possible hardware upgrade.
The NEO will feature a higher clock speed than the original PS4, an improved GPU, and higher bandwidth on the memory. The documents we've received note that the HDD in the NEO is the same as that in the original PlayStation 4, but it's not clear if that means in terms of capacity or connection speed.
Games running in NEO mode will be able to use the hardware upgrades (and an additional 512 MiB in the memory budget) to offer increased and more stable frame rate and higher visual fidelity, at least when those games run at 1080p on HDTVs. The NEO will also support 4K image output, but games themselves are not required to be 4K native.
Giant Bomb even has details on the architectural changes.
| | Shipping PS4 | PS4 "NEO" |
|---|---|---|
| CPU | 8 Jaguar Cores @ 1.6 GHz | 8 Jaguar Cores @ 2.1 GHz |
| GPU | AMD GCN, 18 CUs @ 800 MHz | AMD GCN+, 36 CUs @ 911 MHz |
| Stream Processors | 1152 SPs ~ HD 7870 equiv. | 2304 SPs ~ R9 390 equiv. |
| Memory | 8GB GDDR5 @ 176 GB/s | 8GB GDDR5 @ 218 GB/s |
(We actually did a full video teardown of the PS4 on launch day!)
If the Compute Unit count is right from the GB report, then the PS4 NEO system will have 2,304 stream processors running at 911 MHz, giving it performance nearing that of a consumer Radeon R9 390 graphics card. The R9 390 has 2,560 SPs running at around 1.0 GHz, so while the NEO would be slower, it would be a substantial upgrade over the current PS4 hardware and the Xbox One. Memory bandwidth on NEO is still much lower than a desktop add-in card (218 GB/s vs 384 GB/s).
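That comparison is easy to sanity-check with the standard peak-FLOPS estimate (stream processors × 2 ops per clock × frequency); on paper, the leaked NEO specs land at a bit over 80% of the R9 390's theoretical throughput:

```python
def peak_tflops(sps, clock_mhz):
    """Theoretical FP32 peak: SPs * 2 ops (one FMA) per clock."""
    return sps * 2 * clock_mhz / 1e6

neo = peak_tflops(2304, 911)      # ~4.20 TFLOPS per the leaked specs
r9_390 = peak_tflops(2560, 1000)  # 5.12 TFLOPS for the desktop card

print(f"NEO: {neo:.2f} TFLOPS vs. R9 390: {r9_390:.2f} TFLOPS")
print(f"NEO at {neo / r9_390:.0%} of the R9 390's theoretical peak")
```

Theoretical peaks ignore memory bandwidth and the console's thermal envelope, of course, but they back up the "nearing an R9 390" characterization.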
Could Sony's NEO platform rival the R9 390?
If the NEO hardware is based on the Grenada / Hawaii GPU design, there are some interesting questions to ask. With the push into 4K that we expect with the upgraded PlayStation, it would be painful if the GPU didn't natively support HDMI 2.0 (4K @ 60 Hz). Given the modularity of current semi-custom APU designs, it is likely that AMD could swap the display controller on NEO for one that supports HDMI 2.0, even though no shipping consumer graphics card in the 300-series does so.
It is also POSSIBLE that NEO is based on the upcoming AMD Polaris GPU architecture, which supports HDR and HDMI 2.0 natively. That would be a much more impressive feat for both Sony and AMD, as we have yet to see Polaris released in any consumer GPU. Couple that with the variables of 14/16nm FinFET process production and you have a complicated production pipeline that would need significant monitoring. Polaris could lower build costs and power consumption for the NEO device, but I would be surprised if Sony wanted to take a chance on the first generation of tech from AMD / Samsung / Global Foundries.
However, if you look at recent rumors swirling about the June announcement of the Radeon R9 480 using the Polaris architecture, it is said to have 2,304 stream processors, perfectly matching the NEO specs above.
New features of the AMD Polaris architecture due this summer
There is a lot Sony and game developers could do with roughly twice the GPU compute capability on a console like the NEO. It could make PlayStation VR a much more comparable platform to the Oculus Rift and HTC Vive, though the need to also support the original PS4 platform might hinder the upgrade path.
The other obvious use is to upgrade the image quality and/or rendering resolution of current games and games in development, or simply to improve frame rates, an area where many current-generation console titles have been slipping.
In the documents we’ve received, Sony offers suggestions for reaching 4K/UltraHD resolutions for NEO mode game builds, but they're also giving developers a degree of freedom with how to approach this. 4K TV owners should expect the NEO to upscale games to fit the format, but one place Sony is unwilling to bend is on frame rate. Throughout the documents, Sony repeatedly reminds developers that the frame rate of games in NEO Mode must meet or exceed the frame rate of the game on the original PS4 system.
There is still plenty to read in the Giant Bomb report, and I suggest you head over and do so. If you thought the summer was going to be interesting solely because of new GPU releases from AMD and NVIDIA, it appears that Sony and Microsoft have their own agenda as well.
Subject: Graphics Cards | April 19, 2016 - 03:08 PM | Sebastian Peak
Tagged: rumor, report, nvidia, leak, GTX 1080, graphics card, gpu, geforce
Another reported photo of an upcoming GTX 1080 graphics card has appeared online, this time via a post on Baidu.
(Image credit: VR-Zone, via Baidu)
The image is, as is typical for alleged leaks, low-resolution with slightly soft focus. That doesn't mean it's not legitimate, and this isn't the first time we have seen this design. The photo also appears to show only the cooler, without an actual graphics card board underneath.
We have reported on the upcoming GPU rumored to be named "GTX 1080" in the recent past, and while no official announcement has been made it seems safe to assume that a successor to the current 900-series GPUs is forthcoming.
Subject: Graphics Cards | April 14, 2016 - 10:44 PM | Scott Michaud
Tagged: nvidia, graphics drivers
The GeForce 364.xx line of graphics drivers hasn't been smooth for NVIDIA. Granted, they tried to merge Vulkan support into their main branch at the same time as several new games, including DirectX 12 ones, launched. It was probably a very difficult period for NVIDIA, but WHQL-certified drivers should be better than this.
Regardless, they're trying, and today they released GeForce Hot Fix Driver 364.96. Some of the early reactions mock NVIDIA for listing “Support for DOOM Open Beta” as the only feature of a “hotfix” driver, but I don't see the problem. It's entirely possible that the current drivers have a known issue with the DOOM Open Beta and thus require a hotfix. It's not necessarily “just a profile,” and profiles aren't the whole of what a hardware vendor does to support a new title.
But anyway, Manuel Guzman, one of the faces of NVIDIA Customer Care, also says that this driver includes fixes for FPS drops in Dark Souls 3. According to some forum-goers, despite its numbering, it does not contain the Vulkan updates from 364.91. This is probably a good thing, because it would be a bit silly to merge developer-branch features into a consumer driver that is only intended to solve problems before an official driver can be certified. I mean, that's like patching a flat tire, then drilling a hole in one of the good ones to mess around with it, too.
The GeForce 364.96 Hotfix Drivers are available at NVIDIA's website. If you're having problems, then it might be your solution. Otherwise? Wait until NVIDIA has an official release (or you start getting said problems).