Subject: Graphics Cards | July 29, 2016 - 02:51 AM | Tim Verry
Tagged: water cooling, RGB LED, phanteks, GPU Water Block
Phanteks, a company that produces cases, CPU coolers, and fans, has unveiled its first GPU cooler in the form of a full cover water block for Nvidia's GTX 1080 Founders Edition (and any partner cards that use the reference PCB). The PH-GB1080-X is a full cover nickel plated copper block with an acrylic top and black (aluminum?) accents on the edges of the block. There are two inlet/outlet ports on both the top and bottom, so users could SLI multiple cards and water cool them in series or parallel. Phanteks allegedly uses DuPont Viton for the gaskets, a "high-performance seal elastomer" used in the aerospace industry (and overkill for the temperatures that will be seen in a PC water loop, heh).
In addition to the acrylic top, users can plug three (1mm) RGB LEDs into the bottom edge of the card to add a glow effect. Oddly, Phanteks shows the LEDs using three individual cables that then run off to a reportedly proprietary power adapter that can plug into RGB motherboards or Phanteks' cases. Having the LEDs run off a single cable (or a bundled one) coming off the back edge of the card closest to the motherboard would have been much better for cable management!
Phanteks' new water block is available for pre-order now for $129.99.
Using a water block on the GTX 1080 should let users easily push past 2000 MHz GPU clocks with much more stable clockspeeds than on air. Gamers Nexus tested their GTX 1080 with an EVGA all-in-one cooler and managed to crank the GPU clockspeed up to 2164 MHz and the memory up to 5602 MHz. That 2164 MHz is quite the overclock, and while it was only a bit above what they achieved on air, the clocks were far more stable and could actually be maintained during long gaming sessions, unlike on air. A custom water loop with a block like the one Phanteks is selling should match Gamers Nexus' results, if not beat them ever so slightly.
If you already have a water loop in your system and have been waiting for a block to go with your GTX 1080 you now have another option!
Subject: Graphics Cards | July 29, 2016 - 01:09 AM | Ryan Shrout
Tagged: rx 470, rx 460, radeon, polaris 11, polaris 10, Polaris, amd
We know pretty much all there is to know about AMD's new Polaris architecture thanks to our Radeon RX 480 review, but AMD is taking the covers off of the lower priced, lower performance products based on the same architecture tonight. We previously covered AMD's launch event in Australia where the company officially introduced the Polaris 10 RX 470 and Polaris 11 RX 460 and talked about the broader specifications. Now, we have a bit more information to share on specifics and release dates. Specifically, AMD's RX 470 will launch on Thursday, August 4th and the RX 460 will launch on the following Monday, August 8th.
First up is the Radeon RX 470, based on the same Polaris 10 GPU as the RX 480, but with some CUs disabled to lower performance and increase yields.
This card is aimed at 1080p gaming at top quality settings with AA enabled at 60 FPS. Obviously that is a very vague statement, but it gives you an idea of what price point and target segment the RX 470 is going after.
The only comparison we have from AMD pits the upcoming RX 470 against the R9 270, where Polaris offers a range from 1.5x to 2.4x improvement in a handful of titles, which include DX12 and Vulkan enabled games, of course.
From a specifications standpoint, the RX 470 will include 2048 stream processors running at a base clock of 926 MHz and a rated boost frequency of 1206 MHz. That gives us 4.9 TFLOPS of theoretical peak performance to pair with a 6.6 Gbps memory interface capable of 211 GB/s of peak bandwidth. With a 4GB frame buffer and a 120 watt TDP, the RX 470 should offer some compelling performance in the ~$150 price segment (this price is just a guess on my part... though yields should be better – they can salvage RX 480 dies – and partners being able to use memory chips that do not have to hit 8 Gbps should help to lower costs).
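As a quick sanity check, both rated figures fall straight out of the specs: stream processors × boost clock × 2 FLOPs per clock (one multiply plus one add) for compute, and effective data rate × bus width for bandwidth. The 256-bit bus width below is my inference from the quoted 211 GB/s figure, not something AMD has stated here:

```python
# Back-of-the-envelope check of AMD's rated RX 470 numbers.
STREAM_PROCESSORS = 2048
BOOST_CLOCK_HZ = 1206e6       # rated boost, 1206 MHz
FLOPS_PER_CLOCK = 2           # fused multiply-add = 2 FLOPs per SP per cycle

peak_tflops = STREAM_PROCESSORS * BOOST_CLOCK_HZ * FLOPS_PER_CLOCK / 1e12
print(f"Peak compute:   {peak_tflops:.1f} TFLOPS")   # -> 4.9 TFLOPS

DATA_RATE_GBPS = 6.6          # effective GDDR5 data rate per pin
BUS_WIDTH_BITS = 256          # inferred: 6.6 * 256 / 8 = 211.2

peak_bw_gbs = DATA_RATE_GBPS * BUS_WIDTH_BITS / 8
print(f"Peak bandwidth: {peak_bw_gbs:.0f} GB/s")     # -> 211 GB/s
```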
Going down another step to the Radeon RX 460, AMD is targeting this card at 1080p resolutions at "high" image quality settings. The obvious game categories here are eSports titles like MOBAs, CS:GO, Overwatch, etc.
Again, AMD provides a comparison to other AMD hardware: in this case the R7 260X. You'll find 1.2x to 1.3x performance improvements in these types of titles. Clearly we want to know where the performance rests against the GeForce line, but this comparison seems somewhat modest.
Based on the smaller Polaris 11 GPU, which is a new chip that we have not seen before, the RX 460 features up to 2.2 TFLOPS of computing capability with 896 stream processors (14 CUs enabled out of 16 total in full Polaris 11) running between 1090 MHz and 1200 MHz. The memory system is actually running faster on the RX 460 than the RX 470, though with half the memory bus width at 128-bits. The TDP of this card is sub-75 watts and thus we should find cards that don't require any kind of external power. The RX 460 GPU will be used in desktop cards as well as notebooks (though with lower TDPs and clocks).
The chart below outlines the comparison between the three known Polaris graphics processors.
|                   | RX 480          | RX 470          | RX 460          |
|-------------------|-----------------|-----------------|-----------------|
| GPU Clock (Base)  | 1120 MHz        | 926 MHz         | 1090 MHz        |
| GPU Clock (Boost) | 1266 MHz        | 1206 MHz        | 1200 MHz        |
| Memory            | 4 or 8 GB GDDR5 | 4 or 8 GB GDDR5 | 2 or 4 GB GDDR5 |
| Memory Bandwidth  | 256 GB/s        | 211 GB/s        | 112 GB/s        |
| GPU               | Polaris 10      | Polaris 10      | Polaris 11      |
There is still much to learn about these new products, most importantly, prices. AMD is still shying away from telling us that important data point. The RX 470 will be on sale and will have reviews on August 4th, with the RX 460 following that on August 8th, so we'll have details and costs in our hands very soon.
It is not clear how many or what kinds of cards we can expect to see on the August 4th and August 8th release days, though it would stand to reason that they will be mostly based upon reference designs, especially for the RX 460 (though Gamers Nexus did spot a dual fan Sapphire card). With that said, we may well see custom cooled RX 470 graphics cards: while AMD does technically have a reference design with a blower style cooler, the company expects most if not all of its partners to go their own direction with this board, including their own single and dual fan coolers.
For gamers looking to buy into the truly budget card segment, stay tuned just a little longer!
NVIDIA Offers Preliminary Settlement To GeForce GTX 970 Buyers In False Advertising Class Action Lawsuit
Subject: Graphics Cards | July 28, 2016 - 07:07 PM | Tim Verry
Tagged: nvidia, maxwell, GTX 970, GM204, 3.5gb memory
A recent post on Top Class Actions suggests that buyers of NVIDIA GTX 970 graphics cards may soon see a payout from a settlement agreement as part of the series of class action lawsuits facing NVIDIA over claims of false advertising. NVIDIA has reportedly offered up a preliminary settlement of $30 to "all consumers who purchased the GTX 970 graphics card" with no cap on the total payout amount along with a whopping $1.3 million in attorney's fees.
This settlement offer is in response to several class action lawsuits that consumers filed against the graphics giant following the controversy over mis-advertised specifications (particularly the number of ROP units and amount of L2 cache) and the method in which NVIDIA's GM204 GPU addressed the four total gigabytes of graphics memory.
Specifically, the graphics card specifications initially indicated that it had 64 ROPs and 2048 KB of L2 cache, but it was later revealed to have only 56 ROPs and 1792 KB of L2. On the memory front, the "3.5 GB memory controversy" spawned many memes and investigations into how the 3.5 GB and 0.5 GB pools of memory worked and how both real-world and theoretical performance were affected by the memory setup.
(My opinions follow)
It was quite the PR disaster, and had NVIDIA been upfront with all the correct details on specifications and the new memory implementation, the controversy could have been avoided. As it stands, though, buyers were not able to make informed decisions about the card, and at the end of the day that is what is important and why the lawsuits have merit.
As such, I do expect both sides to reach a settlement rather than see this come to a full trial, but it may not be exactly the $30 per buyer payout as that amount still needs to be approved by the courts to ensure that it is "fair and reasonable."
For more background on the GTX 970 memory issue (it has been awhile since this all came about after all, so you may need a refresher):
- NVIDIA Discloses Full Memory Structure and Limitations of GTX 970
- NVIDIA Responds to GTX 970 3.5GB Memory Issue
- Frame Rating: GTX 970 Memory Issues Tested in SLI
- Frame Rating: Looking at GTX 970 Memory Performance
Subject: Editorial, Graphics Cards | July 28, 2016 - 02:36 PM | Ryan Shrout
Tagged: video, sapphire, rx 480, radeon, Polaris, pcper live, live, amd
UPDATE: Did you miss the live event? No worries, see what trouble Ed and I got into with the recording embedded below!!
When it comes to GPU releases, we at PC Perspective take things up a level in the kind of content we produce as well as the amount of information we provide to the community. Part of that commitment is our drive to bring in the very best people from around the industry to talk directly to the consumers, providing interesting and honest views on where their technology is going.
Though the Radeon RX 480, based on AMD's latest Polaris architecture, was released last month, we are only now bringing in our first board partner. Ed Crisler, NA PR/Marketing Manager for Sapphire, will be joining us in studio to talk about the RX 480 and Sapphire's plans for custom cards.
The Sapphire Nitro+ RX 480 Graphics Card
Sapphire Live Stream and Giveaway with Ed Crisler and Ryan Shrout
10:00am PT / 1:00pm ET - July 29th
Need a reminder? Join our live stream notification list!
The event will take place Friday, July 29th at 1:00pm ET / 10:00am PT at http://www.pcper.com/live. There you’ll be able to catch the live video stream as well as use our chat room to interact with the audience, asking questions for me and Ed to answer live.
As the price for hosting Sapphire in the offices, we demand a sacrifice in the form of hardware to give away to our viewers! We'll have a brand new Sapphire Nitro+ RX 480 8GB to hand out during the live stream! All you have to do to win on the 29th is watch the live stream!
If you have questions, please leave them in the comments below and we'll look through them just before the start of the live stream. Of course you'll be able to tweet us questions @pcper and we'll be keeping an eye on the IRC chat as well for more inquiries. What do you want to know and hear from Ed or me?
So join us! Set your calendar for this coming Friday at 1:00pm ET / 10:00am PT and be here at PC Perspective to catch it. If you are a forgetful type of person, sign up for the PC Perspective Live notification list that we use exclusively to notify users of upcoming live streaming events including these types of specials and our regular live podcast. I promise, no spam will be had!
Subject: Graphics Cards, Systems, Mobile | July 27, 2016 - 07:58 PM | Scott Michaud
Tagged: nvidia, Nintendo, nintendo nx, tegra, Tegra X1, tegra x2, pascal, maxwell
Okay, so there are a few rumors going around, mostly from Eurogamer / DigitalFoundry, that claim the Nintendo NX is going to be powered by an NVIDIA Tegra system on a chip (SoC). DigitalFoundry, specifically, cites multiple sources who claim that their Nintendo NX development kits integrate the Tegra X1 design, as seen in the Google Pixel C. That said, the Nintendo NX release date, March 2017, does provide enough time for Nintendo to switch to NVIDIA's upcoming Pascal-based Tegra design, rumored to be called the Tegra X2, which uses NVIDIA's custom-designed Denver CPU cores.
Preamble aside, here's what I think about the whole situation.
First, the Tegra X1 would be quite a small jump in performance over the WiiU. The WiiU's GPU, “Latte”, has 320 shaders clocked at 550 MHz, and it was based on AMD's TeraScale 1 architecture. Because these stream processors have single-cycle multiply-add for floating point values, you can get its FLOP rating by multiplying 320 shaders, 550,000,000 cycles per second, and 2 operations per clock (one multiply and one add). This yields 352 GFLOPs. The Tegra X1 is rated at 512 GFLOPs, which is just 45% more than the previous generation.
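The arithmetic above, spelled out as a quick script (the 512 GFLOPS figure is simply NVIDIA's rating as cited, not something derived here):

```python
# WiiU "Latte" GPU: shaders * clock * 2 ops (single-cycle multiply-add).
WIIU_SHADERS = 320
WIIU_CLOCK_HZ = 550e6
OPS_PER_CLOCK = 2             # one multiply plus one add per cycle

wiiu_gflops = WIIU_SHADERS * WIIU_CLOCK_HZ * OPS_PER_CLOCK / 1e9
x1_gflops = 512               # Tegra X1's rated figure

gain = x1_gflops / wiiu_gflops - 1
print(f"WiiU: {wiiu_gflops:.0f} GFLOPS, Tegra X1: +{gain:.0%}")
# -> WiiU: 352 GFLOPS, Tegra X1: +45%
```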
This is a very tiny jump, unless they do indeed use Pascal-based graphics. If they stick with the Tegra X1, you will likely see a launch selection of games ported from the WiiU and a few games that use whatever new feature Nintendo has. One rumor is that the console will be kind of like the WiiU controller, with detachable controllers. If this is true, it's a bit unclear how this will affect games in a revolutionary way, but we might be missing a key bit of info that ties it all together.
As for the choice of ARM over x86... well. First, this obviously allows Nintendo to choose from a wider selection of manufacturers than AMD, Intel, and VIA, and certainly more than IBM with their previous, Power-based chips. It also jibes with Nintendo's interest in the mobile market. They joined The Khronos Group, and I'm pretty sure they've said they are interested in Vulkan, which is becoming the high-end graphics API for Android, supported by Google and others. That said, I'm not sure how many engineers out there specialize in ARM optimization, as most mobile platforms try to abstract the instruction set away as much as possible, but this could be Nintendo's attempt to settle on a standardized instruction set, opting for mobile over PC (versus Sony and especially Microsoft, who want consoles to follow high-end gaming on the desktop).
Why? Well that would just be speculating on speculation about speculation. I'll stop here.
Subject: Graphics Cards | July 27, 2016 - 03:43 AM | Tim Verry
Tagged: Twin Frozr VI, Radeon RX 480, polaris 10, msi
MSI is jumping full force into custom RX 480s with its upcoming line of Radeon RX 480 Gaming series cards, including factory overclocked Gaming X and (slightly lower end) Gaming models in both 8GB and 4GB SKUs. All four of the new graphics cards use a custom 8-phase power design, a custom PCB with Military Class 4 components, and, perhaps most importantly, a beefy Twin Frozr VI cooler. The overclockable cards will be available by the middle of next month.
Specifically, MSI will be launching the RX 480 GAMING X 8G and RX 480 GAMING X 4G with 8GB and 4GB of GDDR5 memory respectively. These cards will have solid metal backplates and the highest factory overclocks. Below these cards sit the RX 480 GAMING 8G and RX480 GAMING 4G with the same TWIN FROZR VI cooler but sans backplate and with lower out of the box clockspeeds. Aside from those aspects, the cards all appear to offer identical features.
The new Gaming series graphics cards feature an 8-pin PCI-E power connector and an 8-phase power design on a custom PCB that should allow users to push Polaris 10 quite a bit without overheating the VRMs. The Twin Frozr VI cooler uses a nickel plated copper base plate, three 8mm copper heatpipes, a large aluminum fin array, and two large fans that spin down while the GPU temperature is under 60°C. The heatsink makes for a card that is both wider and longer (276mm) than the reference design, but MSI claims the extra size buys 22% better cooling performance. Further, RGB LEDs backlight the MSI logo on the side of the card. The metal backplate on the X variants should help dissipate slightly more heat than on the non-X models.
All four Polaris-based graphics cards offer a single DL-DVI, two HDMI, and two DisplayPort video outputs. The inclusion of two HDMI ports rather than three DP ports is allegedly meant to better support VR users by letting them keep an HDMI-connected monitor and a headset attached at the same time without using adapters.
|                         | RX 480 Gaming X 8G | RX 480 Gaming X 4G | RX 480 Gaming 8G | RX 480 Gaming 4G | RX 480 Reference           |
|-------------------------|--------------------|--------------------|------------------|------------------|----------------------------|
| GPU Clock (OC Mode)     | 1316 MHz           | 1316 MHz           | 1292 MHz         | 1292 MHz         | 1266 MHz                   |
| GPU Clock (Gaming Mode) | 1303 MHz           | 1303 MHz           | 1279 MHz         | 1279 MHz         | 1266 MHz                   |
| GPU Clock (Silent Mode) | 1266 MHz           | 1266 MHz           | 1266 MHz         | 1266 MHz         | 1266 MHz                   |
| Memory                  | 8GB GDDR5          | 4GB GDDR5          | 8GB GDDR5        | 4GB GDDR5        | 8GB or 4GB GDDR5           |
| Memory Clock            | 8100 MHz           | 8100 MHz           | 8000 MHz (?)     | 8000 MHz (?)     | 8000 MHz                   |
| MSRP                    | ?                  | ?                  | ?                | ?                | $249 for 8GB, $199 for 4GB |
The GAMING and GAMING X RX 480s offer two tiers of factory overclocks that users can select using MSI's software utility. The non X GAMING cards will clock up to 1279 MHz in Gaming Mode and 1292 MHz in OC Mode. In Silent Mode the card will run at the same 1266 MHz boost speed as AMD's reference design card. Meanwhile, the RX 480 GAMING X cards will boost up to 1303 MHz in Gaming Mode and 1316 MHz in OC Mode. In addition, MSI is bumping up the memory clockspeed to 8100 MHz in OC Mode, which is a nice surprise! MSI's announcement is not exactly clear, but it appears that the non X versions do not have factory overclocked memory and it remains at the reference 8000 MHz.
Pricing has not yet been announced, but the cards will reportedly be on sale worldwide by mid August.
I am looking forward to seeing how far reviewers and users are able to push Polaris 10 with the Twin Frozr cooler and 8-phase VRMs!
Subject: Graphics Cards | July 27, 2016 - 01:56 AM | Tim Verry
Tagged: solid state, radeon pro, Polaris, gpgpu, amd
UPDATE (July 27th, 1am ET): More information on the Radeon Pro SSG has surfaced since the original article. According to AnandTech, the prototype graphics card actually uses an AMD Fiji GPU. The Fiji GPU is paired with onboard PCI-E based storage using the same PEX8747 bridge chip used in the Radeon Pro Duo. Storage is handled by two PCI-E 3.0 x4 M.2 slots that can accommodate up to 1TB of NAND flash storage. As I mentioned below, having the storage on board the graphics card vastly reduces latency by reducing the number of hops and not having to send requests out to the rest of the system. AMD had more numbers to share following their demo, however.
From the 8K video editing demo, the dual Samsung 950 Pro PCI-E SSDs (in RAID 0) on board the Radeon Pro SSG hit 4 GB/s while scrubbing through the video. That same video source stored on a Samsung 950 Pro attached to the motherboard managed a throughput of only 900 MB/s. In theory, reaching out to system RAM still has raw throughput advantages (DDR4 at 3200 MHz on a Haswell-E platform is theoretically capable of 62 GB/s reads and 47 GB/s writes), though that would be bottlenecked by the graphics card having to go over the PCI-E 3.0 x16 link and its maximum of 15.754 GB/s. Of course, if you can hold the data in (much smaller) GDDR5 (300+ GB/s depending on clocks and memory bus width) or HBM (1 TB/s) and never go out to any other storage tier, that's ideal, but it's not always feasible, especially in the HPC world.
However, having storage on the same board as the GPU, only a single "hop" away, vastly reduces latency and offers much more total capacity than most systems have in DDR3 or DDR4. In essence, the solid state storage on the graphics card (which developers will need to specifically code for) acts as a massive cache for streaming in assets for data sets and workloads that are highly impacted by latency. This storage is not the fastest, but it is the next best thing for holding active data outside of GDDR5/X or HBM. For throughput intensive workloads, reaching out to system RAM will be better. Finally, reaching out to system attached storage should be the last resort, as it will be the slowest and highest latency tier. Several commenters mentioned using a PCI-E based SSD in a second slot on the motherboard, accessed much like GPUs in CrossFire communicate now (DMA over the PCI-E bus), which is an interesting idea that I had not considered.
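Pulling the storage tiers discussed above into one rough ranking by the raw throughput figures already cited (latency, the SSG's actual selling point, is not reflected in these numbers):

```python
# Rough throughput ceilings (GB/s) for each storage tier, as cited above.
tiers = {
    "HBM (on package)":                  1000.0,
    "GDDR5 (on card)":                    300.0,
    "PCI-E 3.0 x16 link (to system RAM)":  15.754,
    "Radeon Pro SSG on-card SSDs (demo)":   4.0,
    "Motherboard M.2 950 Pro (demo)":       0.9,
}

# Print the tiers from fastest to slowest by raw throughput.
for name, gb_s in sorted(tiers.items(), key=lambda kv: -kv[1]):
    print(f"{name:38s} {gb_s:8.2f} GB/s")
```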
Per my understanding of the situation, the onboard SSG storage should still be slightly more beneficial than this setup, but it would get you close. (I am assuming the GPU can directly request data from the SSD controller without relying on the system CPU to do that work, but I may well be mistaken; I will have to look into this further and ask the experts, heh.) On the prototype Radeon Pro SSG, the M.2 slots can actually be seen as drives by the system and OS, so it is essentially acting as if there were a PCI-E adapter card holding those drives in a slot on the motherboard, though that may not be the case should this product actually hit the market. I do question the choice of Fiji rather than Polaris, but it sounds like they built the prototype off of the Radeon Pro Duo platform, so I suppose it makes sense there.
Hopefully the final versions in 2017 or beyond use at least Vega though :).
Alongside the launch of the new Radeon Pro WX (workstation) series graphics cards, AMD teased an interesting new Radeon Pro product: the Radeon Pro SSG. This new professional graphics card pairs a Polaris GPU with up to a terabyte of onboard solid state storage and seeks to solve one of the biggest hurdles in GPGPU performance when dealing with extremely large datasets: latency.
One of the core focuses of AMD's HSA (Heterogeneous System Architecture) is unified memory: the ability of various processors (CPU, GPU, specialized co-processors, et al.) to work together efficiently by accessing and manipulating data in the same memory pool without having to copy it back and forth between CPU-accessible and GPU-accessible memory. The Radeon Pro SSG does not fully realize this idea (it is more of a sidestep), but it does move performance forward. It does not eliminate the need to copy data to the GPU before working on it, but once copied, the GPU will be able to work on data stored in what AMD describes as a one terabyte frame buffer. This memory will be solid state and very fast, but more importantly, it will be able to get at the data with much lower latency than previous methods. AMD claims the solid state storage (likely NAND, though they have not said) will link with the GPU over a dedicated PCI-E bus. I suppose that if you can't bring the GPU to the data, you bring the data to the GPU!
Considering AMD's previous memory champ – the FirePro W9100 – maxed out at 32GB of GDDR5, the teased Radeon Pro SSG with its 1TB of purportedly low latency onboard flash storage opens up a slew of new possibilities for researchers and professionals in media, medical, and scientific roles working with massive datasets for imaging, creation, and simulations! I expect that many professionals out there are eager to get their hands on one of these cards! They will be able to as well, thanks to a beta program launching shortly – so long as they have $10,000 for the hardware!
AMD gave a couple of examples in its PR of the potential benefits of "solid state graphics," including imaging a patient's beating heart in real time so medical professionals can examine and spot issues as early as possible, and using the Radeon Pro SSG to edit and scrub through 8K video in real time at 90 FPS versus 17 FPS with current offerings. On the scientific side of things, being able to load entire models into the new graphics memory (not as low latency as GDDR5 or HBM, certainly) will be a boon, as will getting data sets as close to the GPU as possible in servers running GPU accelerated databases that power websites accessed by millions of users.
It is not exactly the HSA future I have been waiting for ever so impatiently, but it is a nice advancement and an intriguing idea; I am very curious to see how well it pans out and whether developers and researchers will truly take advantage of it to further their projects. I suspect something like this could be great for deep learning tasks as well (such as powering the "clouds" behind self driving cars, perhaps).
Stay tuned to PC Perspective for more information as it develops.
This is definitely a product that I will be watching and I hope that it does well. I am curious what Nvidia's and Intel's plans are here as well! What are your thoughts on AMD's "Solid State Graphics" card? All hype or something promising?
Subject: Graphics Cards | July 26, 2016 - 12:36 AM | Tim Verry
Tagged: windforce, pascal, gigabyte, GeForce GTX 1060
In a recent press release, Gigabyte announced that it will soon be adding four new GTX 1060 graphics cards to its lineup. The new cards feature Windforce series coolers and custom PCBs. At the high end is the GTX 1060 G1 Gaming followed by the GTX 1060 Windforce OC, small form factor friendly GTX 1060 Mini ITX OC, and the budget minded GTX 1060 D5. While the company has yet to divulge pricing or availability, the cards should be out within the next month or two.
All of the upcoming cards use a custom PCB and power phase setup paired with Gigabyte's dual – or in the case of the Mini ITX card, single – fan Windforce air cooler. Unfortunately, exact specifications, including core and memory clocks, are unknown for all of the cards except the high end model. The coolers use dual composite heatpipes that directly touch the GPU; heat is pulled away and dissipated by an aluminum fin stack. The fans are 90mm on all of the cards, with the dual fan models having each fan spin in the opposite direction of the other. The cards feature 6GB of GDDR5 memory as well as DVI, HDMI, and DisplayPort video outputs. For example, the Mini ITX OC graphics card (which is only 17cm long) features two DVI, one HDMI, and one DP output.
More information is available on the GTX 1060 G1 Gaming. This card is a dual slot, dual fan design with a 6+1 power phase (reference is 3+1) powered by a single 8-pin power connector. The fans are shrouded and there is a metal backplate to aid in stability and cooling. Gigabyte claims that its "GPU Gauntlet" technology ensures users get heavily overclockable chips thanks to a binning process that sorts out the most promising silicon.
The 16nm Pascal GPU is factory overclocked to 1847 MHz boost and 1620 MHz base clockspeeds in OC mode and 1809 MHz boost and 1594 MHz base in gaming mode. Users will be able to use the company's Xtreme Engine software to dial up the overclocks further as well as mess with the RGB LEDs. For comparison, the reference clockspeeds are 1708 MHz boost and 1506 MHz base. Gigabyte has left the 6GB of GDDR5 memory untouched at 8008 MHz.
The other cards should have similarly decent factory overclocks, but it is hard to say exactly what they will be out of the box. While I am not a big fan of the aesthetics, the Windforce coolers should let users push Pascal fairly far (for air cooling).
I would guess that the Gigabyte GTX 1060 G1 Gaming will MSRP for just above $300 while the lower end cards will be around $260 (the Mini ITX OC may be at a slight premium above that).
What do you think about Gigabyte's new cards?
Subject: General Tech, Graphics Cards | July 25, 2016 - 09:48 PM | Sebastian Peak
Tagged: siggraph 2016, Siggraph, capsaicin, amd, 3D rendering
At its Capsaicin Siggraph event tonight, AMD announced that what was previously previewed as the FireRender rendering engine is officially launching as AMD Radeon ProRender, and that it is becoming open source as part of AMD's GPUOpen initiative.
From AMD's press release:
AMD today announced its powerful physically-based rendering engine is becoming open source, giving developers access to the source code.
As part of GPUOpen, Radeon ProRender (formerly previewed as AMD FireRender) enables creators to bring ideas to life through high-performance applications and workflows enhanced by photorealistic rendering.
GPUOpen is an AMD initiative designed to assist developers in creating ground-breaking games, professional graphics applications and GPU computing applications with much greater performance and lifelike experiences, at no cost and using open development tools and software.
Unlike other renderers, Radeon ProRender can simultaneously use and balance the compute capabilities of multiple GPUs and CPUs – on the same system, at the same time – and deliver state-of-the-art GPU acceleration to produce rapid, accurate results.
Radeon ProRender plugins are available today for many popular 3D content creation applications, including Autodesk® 3ds Max®, SOLIDWORKS by Dassault Systèmes and Rhino®, with Autodesk® Maya® coming soon. Radeon ProRender works across Windows®, OS X and Linux®, and supports AMD GPUs, CPUs and APUs as well as those of other vendors.
Subject: Graphics Cards | July 25, 2016 - 09:30 PM | Sebastian Peak
Tagged: siggraph 2016, Siggraph, Radeon Pro WX Series, Radeon Pro WX 7100, Radeon Pro WX 5100, Radeon Pro WX 4100, radeon, capsaicin, amd
AMD has announced new Polaris-based professional graphics cards at Siggraph 2016 this evening, with the Radeon Pro WX 4100, WX 5100, and WX 7100 GPUs.
The AMD Radeon Pro WX 7100 GPU (Image credit: AMD)
From AMD's official press release:
AMD today unveils powerful new solutions to address modern content creation and engineering: the new Radeon Pro WX Series of professional graphics cards, which harness the award-winning Polaris architecture and is designed to deliver exceptional capabilities for the immersive computing era.
Radeon Pro solutions and the new Radeon Pro WX Series of professional graphics cards represent a fundamentally different approach for professionals rooted in a commitment to open, non-proprietary software and performant, feature-rich hardware that empowers people to create the “art of the impossible”.
The new Radeon Pro WX series graphics cards deliver on the promise of this new era of creation, are optimized for open source software, and are designed for creative professionals and those pushing the boundaries of science, technology and engineering.
The AMD Radeon Pro WX 5100 GPU (Image credit: AMD)
Radeon Pro WX Series professional graphics cards are designed to address specific demands of the modern content creation era:
- Radeon Pro WX 7100 GPU is capable of handling demanding design engineering and media and entertainment workflows and is AMD’s most affordable workstation solution for professional VR content creation.
- Radeon Pro WX 5100 GPU is the ideal solution for product development, powered by the impending game-engine revolution in design visualization.
- Radeon Pro WX 4100 GPU provides great performance in a half-height design, finally bringing mid-range application performance demanded by CAD professionals to small form factor (SFF) workstations
The AMD Radeon Pro WX 4100 GPU (Image credit: AMD)
A breakdown of the known specifications for these new GPUs was provided by AnandTech in their report on the WX Series:
Subject: Graphics Cards | July 25, 2016 - 08:49 PM | Tim Verry
Tagged: sapphire, Radeon RX 480, polaris 10, nitro+, nitro
UPDATE (July 27th, 1am ET): The 8GB overclocked Sapphire Nitro+ will MSRP for $269 while the 4GB version will be $219. For more information on Sapphire's new Polaris 10 graphics card check out our archived livestream with Sapphire's Ed Crisler!
More details on custom graphics cards based around AMD's RX 480 reference GPU are starting to trickle out now that the official shipping dates are approaching (it appears many of the cards will be available next month). Sapphire is the latest AIB to provide all the juicy details on its custom Nitro+ Radeon RX 480 card!
The Nitro+ RX 480 is a dual slot card with a Dual X cooler that features two 95mm quick connect fans, vented aluminum backplate, black shroud, and aluminum heatsink. The graphics card is powered by a single 8-pin PCI-E power connector which should be enough to allow overclocking headroom and alleviate any worries over pulling too much amperage over the PEG slot on the motherboard.
Sapphire is using high-end capacitors and Black Diamond 4 chokes. The twin-fan cooler supports "quick connect," which lets users easily pull out the fans for cleaning or replacement (a neat feature considering how dusty my PC can get; it doesn't help that my corgi loves to lay against my tower, heh). RGB LEDs illuminate the Sapphire logo and the fans.
Of course, all of the LEDs can be controlled by software or a button on the back of the card; they can change colors in response to temperature or fan speed, cycle through all colors, or be turned off completely.
The company also uses an aluminum backplate with a nice design to it (nice to see the only part of the card most people will see getting some attention for once, heh) as well as vents that allow hot air to escape. Air is pulled into the card by the two fans and pushed out the back of the card and up through the backplate. I am interested to see how much this design actually improves cooling.
Rear I/O includes a single DL-DVI output along with two DisplayPort 1.4 and two HDMI 2.0b video outputs. This configuration results in a smaller air intake but also lets you hook up both an HDMI monitor and a VR headset. While there are five connectors, only four may be used at the same time.
While Sapphire did not touch the memory, it did factory overclock the Polaris 10 GPU to up to 1,342 MHz boost. Compared to the reference boost clockspeed of 1,266 MHz, this is a decent jump, especially for an out-of-the-box factory overclock. Users should be able to push the GPU further, though exactly how far remains to be seen and will depend on the cooler and the quality of their specific chip.
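For a rough sense of what that jump amounts to, here is a back-of-the-envelope sketch (using only the clock figures quoted above; this is my arithmetic, not Sapphire's marketing):

```python
# Back-of-the-envelope: Nitro+ RX 480 factory overclock vs. reference boost
reference_boost_mhz = 1266  # AMD reference boost clock
nitro_boost_mhz = 1342      # Sapphire's factory overclock

uplift_pct = (nitro_boost_mhz / reference_boost_mhz - 1) * 100
print(f"Factory overclock uplift: {uplift_pct:.1f}%")  # ~6.0%
```

A roughly 6% boost-clock bump is solid for a factory overclock, though real-world gains also depend on how well the cooler sustains boost.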
Sapphire's Nitro+ RX 480 will reportedly be available as soon as next week in both 4GB and 8GB models. The 4GB will run $220 while the 8GB card will cost $269. If these numbers hold true, that is only a $20 premium over the reference designs which certainly seems like a great value all things considered! I am looking forward to the reviews on this slick looking card and I hope that the performance and build quality are up to snuff!
Subject: Graphics Cards | July 25, 2016 - 06:51 PM | Jeremy Hellstrom
Tagged: msi, gtx 1070, Gaming Z, Twin Frozr VI, factory overclocked
The Tech Report had a chance to see what the MSI Twin Frozr VI cooler can do for a GTX 1070; they have just wrapped up a review of the Gaming Z edition of that NVIDIA card. It comes with a respectable frequency bump when you enable OC mode: 1657 MHz base and 1860 MHz boost. When they tested it under load, the GPU stayed below 70C, so there should be room to push the card further. Check out the full benchmark suite in their review.
"Nvidia's second Pascal graphics card, the GeForce GTX 1070, aims to set a new bar for graphics performance in the $379-and-up price range. We put MSI's GeForce GTX 1070 Gaming Z card through the wringer to see how a more affordable Pascal card performs."
Here are some more Graphics Card articles from around the web:
- Gigabyte GeForce GTX 1070 Xtreme Gaming @ Modders-Inc
- MSI GTX 1080 Gaming X 8G RGB SLI @ Kitguru
- NVIDIA GeForce GTX 1080 Founders Edition 8GB Graphics Card Review @ NikKTech
- MSI GTX 1060 Gaming X @ eTeknix
- MSI GTX 1060 Gaming X 6G Review @ OCC
- ASUS RX 480 STRIX OC 8 GB @ techPowerUp
Subject: Graphics Cards | July 25, 2016 - 04:48 PM | Scott Michaud
Tagged: siggraph 2016, Siggraph, quadro, nvidia
SIGGRAPH is the big, professional graphics event of the year, bringing together tens of thousands of attendees. They include engineers from Adobe, AMD, Blender, Disney (including ILM, Pixar, etc.), NVIDIA, The Khronos Group, and many, many others. Not only are new products announced, but many technologies are explained in detail, down to the specific algorithms that are used, so colleagues can advance their own research and share in kind.
But new products will indeed be announced.
The NVIDIA Quadro P6000
NVIDIA, having just launched a few Pascal GPUs to other markets, decided to announce updates to their Quadro line at the event. Two cards have been added, the Quadro P5000 and the Quadro P6000, both at the top end of the product stack. Interestingly, both use GDDR5X memory, meaning that neither will be based on the GP100 design, which is built around HBM2 memory.
The NVIDIA Quadro P5000
The lower-end of the two, the Quadro P5000, should look somewhat familiar to our readers. Exact clocks are not specified, but the chip has 2560 CUDA cores. This is identical to the GTX 1080, but with twice the memory: 16GB of GDDR5X.
Above it sits the Quadro P6000. This chip has 3840 CUDA cores, paired with 24GB of GDDR5X. We have not seen a GPU with exactly these specifications before. It has the same number of FP32 shaders as a fully unlocked GP100 die, but it doesn't have HBM2 memory. On the other hand, the new Titan X uses GP102, combining 3584 CUDA cores with GDDR5X memory, although only 12GB of it. This means that the Quadro P6000 has 256 more (single-precision) shader units than the Titan X, but otherwise very similar specifications.
Both graphics cards have four DisplayPort 1.4 connectors as well as a single DVI output. These five connectors can be used to drive up to four 4K 120Hz monitors, or four 5K 60Hz ones. It would be nice if all five connections could be used at once, but what can you do.
Pascal has other benefits for professional users, too. For instance, Simultaneous Multi-Projection (SMP) is used in VR applications to essentially double the GPU's geometry processing ability. NVIDIA will be pushing professional VR at SIGGRAPH this year, also launching Iray VR. This uses light fields, rendered on devices like the DGX-1, with its eight GP100 chips connected by NVLink, to provide accurately lit environments. This is particularly useful for architectural visualization.
No price is given for either of these cards, but they will launch in October of this year.
Subject: Graphics Cards | July 22, 2016 - 05:51 PM | Scott Michaud
Tagged: pascal, nvidia, graphics drivers
Turns out the Pascal-based GPUs suffer from DPC latency issues, and there's been an ongoing discussion about it for a little over a month. This is not an area I know a lot about, but DPC is a system that schedules workloads by priority, providing regular windows of time for sound and video devices to update. It can be stalled by long-running driver code, though, which can manifest as stutter, audio hitches, and other performance issues. With a 10-series GeForce device installed, users have reported that this latency increases about 10-20x, from ~20us to ~300-400us. This can increase to 1000us or more under load. (8333us is ~1 whole frame at 120FPS.)
NVIDIA has acknowledged the issue and, just yesterday, released an optional hotfix. Upon installing the driver, while it could just be psychosomatic, the system felt a lot more responsive. I ran LatencyMon (DPCLat isn't compatible with Windows 8.x or Windows 10) before and after, and the latency measurement did drop significantly. It was consistently the largest source of latency, spiking in the thousands of microseconds, before the update. After the update, it was hidden by other drivers for the first night, although today it seems to have a few spikes again. That said, Microsoft's networking driver is also spiking in the ~200-300us range, so a good portion of it might be the sad state of my current OS install. I've been meaning to do a good system wipe for a while...
Measurement taken after the hotfix, while running Spotify.
That said, my computer's a mess right now.
Still, some of the post-hotfix driver spikes are reaching ~570us (mostly when I play music on Spotify through my Blue Yeti Pro). Also, Photoshop CC 2015 started complaining about graphics acceleration issues after installing the hotfix, so only install it if you're experiencing problems. As for the latency, if it's not just my machine, NVIDIA might still have some work to do.
It does feel a lot better, though.
Subject: Graphics Cards | July 21, 2016 - 10:21 PM | Ryan Shrout
Tagged: titan x, titan, pascal, nvidia, gp102
Donning the leather jacket he goes very few places without, NVIDIA CEO Jen-Hsun Huang showed up at an AI meet-up at Stanford this evening to show, for the very first time, a graphics card based on a never before seen Pascal GP102 GPU.
Source: Twitter (NVIDIA)
Rehashing an old name, NVIDIA will call this new graphics card the Titan X. You know, like the "new iPad," this is the "new Titan X." Here is the data we know so far:
| | Titan X (Pascal) | GTX 1080 | GTX 980 Ti | TITAN X | GTX 980 | R9 Fury X | R9 Fury | R9 Nano | R9 390X |
|---|---|---|---|---|---|---|---|---|---|
| GPU | GP102 | GP104 | GM200 | GM200 | GM204 | Fiji XT | Fiji Pro | Fiji XT | Hawaii XT |
| Rated Clock | 1417 MHz | 1607 MHz | 1000 MHz | 1000 MHz | 1126 MHz | 1050 MHz | 1000 MHz | up to 1000 MHz | 1050 MHz |
| Texture Units | 224 (?) | 160 | 176 | 192 | 128 | 256 | 224 | 256 | 176 |
| ROP Units | 96 (?) | 64 | 96 | 96 | 64 | 64 | 64 | 64 | 64 |
| Memory Clock | 10000 MHz | 10000 MHz | 7000 MHz | 7000 MHz | 7000 MHz | 500 MHz | 500 MHz | 500 MHz | 6000 MHz |
| Memory Interface | 384-bit G5X | 256-bit G5X | 384-bit | 384-bit | 256-bit | 4096-bit (HBM) | 4096-bit (HBM) | 4096-bit (HBM) | 512-bit |
| Memory Bandwidth | 480 GB/s | 320 GB/s | 336 GB/s | 336 GB/s | 224 GB/s | 512 GB/s | 512 GB/s | 512 GB/s | 320 GB/s |
| TDP | 250 watts | 180 watts | 250 watts | 250 watts | 165 watts | 275 watts | 275 watts | 175 watts | 275 watts |
| Peak Compute | 11.0 TFLOPS | 8.2 TFLOPS | 5.63 TFLOPS | 6.14 TFLOPS | 4.61 TFLOPS | 8.60 TFLOPS | 7.20 TFLOPS | 8.19 TFLOPS | 5.63 TFLOPS |
Note: everything with a ? is an educated guess on our part.
Obviously there is a lot for us to still learn about this new GPU and graphics card, including why in the WORLD it is still being called Titan X, rather than...just about anything else. That aside, GP102 will feature 40% more CUDA cores than the GP104 at slightly lower clock speeds. The rated 11 TFLOPS of single precision compute of the new Titan X is 34% better than that of the GeForce GTX 1080 and I would expect gaming performance to scale in line with that difference.
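That 34% figure falls straight out of the rated numbers in the spec table above; a quick sanity check:

```python
# Peak single-precision compute, from the spec table above
titan_x_pascal_tflops = 11.0
gtx_1080_tflops = 8.2

advantage_pct = (titan_x_pascal_tflops / gtx_1080_tflops - 1) * 100
print(f"Titan X (Pascal) advantage over GTX 1080: {advantage_pct:.0f}%")  # ~34%
```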
The new Titan X will feature 12GB of GDDR5X memory, not the HBM2 that the GP100 chip uses, so this is clearly a new chip with a new memory interface. NVIDIA claims it will have 480 GB/s of bandwidth, and I am guessing it is built on a 384-bit memory interface running at the same 10 Gbps as the GTX 1080. It's truly amazing hardware.
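That guess is consistent with NVIDIA's claimed figure: bandwidth is just bus width (in bytes) times the per-pin data rate. A quick check of the arithmetic behind the guess:

```python
# Sanity check: memory bandwidth = bus width (bits) / 8 * per-pin data rate (Gbps)
bus_width_bits = 384   # guessed 384-bit interface
data_rate_gbps = 10    # same 10 Gbps effective GDDR5X rate as the GTX 1080

bandwidth_gbs = bus_width_bits / 8 * data_rate_gbps
print(f"Theoretical bandwidth: {bandwidth_gbs:.0f} GB/s")  # 480 GB/s, matching NVIDIA's claim
```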
What will you be asked to pay? $1200, going on sale on August 2nd, and only on NVIDIA.com, at least for now. Considering the prices of GeForce GTX 1080 cards with such limited availability, the $1200 price tag MIGHT NOT seem so insane. Still, that's higher than the $999 starting price of the Maxwell-based Titan X in March of 2015 - the claims that NVIDIA is artificially raising prices in each segment will continue, it seems.
I am curious about the TDP on the new Titan X - will it hit the 250 watt mark of the previous version? Yes, apparently it will hit that 250 watt TDP - specs above updated. Does this also mean we'll see a GeForce GTX 1080 Ti that falls between the GTX 1080 and this new Titan X? Maybe, but we are likely looking at an $899 or higher SEP - so get those wallets ready.
That's it for now; we'll have a briefing where we can get more details soon, and hopefully a review ready for you on August 2nd when the cards go on sale!
Subject: Graphics Cards | July 21, 2016 - 02:04 PM | Jeremy Hellstrom
Tagged: gtx 460, gtx 760, gtx 960, gtx 1060, fermi, kepler, maxwell, pascal
Phoronix took a look at how the Linux performance of NVIDIA's mid-range cards has changed over the past four generations of GPUs: Fermi, Kepler, Maxwell, and finally Pascal. CS:GO was run at 4K to push the newer GPUs, as was DOTA, much to the dismay of the GTX 460. The scaling is rather interesting; there is a very large delta between Fermi and Kepler, which comes close to being replicated when comparing Maxwell to Pascal. From the looks of the vast majority of the tests, the GTX 1060 will be a noticeable upgrade for Linux users no matter which previous mid-range card they are currently using. We will likely see a similar article covering AMD in the near future.
"To complement yesterday's launch-day GeForce GTX 1060 Linux review, here are some more benchmark results with the various NVIDIA x60 graphics cards I have available for testing going back to the GeForce GTX 460 Fermi. If you are curious about the raw OpenGL/OpenCL/CUDA performance and performance-per-Watt for these mid-range x60 graphics cards from Fermi, Kepler, Maxwell, and Pascal, here are these benchmarks from Ubuntu 16.04 Linux."
Here are some more Graphics Card articles from around the web:
- ASUS ROG STRIX-GTX1070-O8G-GAMING: GTX 1070, Strix Style! @ Bjorn3d
- MSI GeForce GTX 1060 Gaming X Review @HiTech Legion
- EVGA GeForce GTX 1070 SC Gaming ACX 3.0 Review - Affordable Enthusiast Gaming @HiTech Legion
- Radeon RX 480 performance revisited with AMD's 16.7.1 driver @ The Tech Report
- AMD Radeon RX 480 8GB CrossFire @ [H]ard|OCP
Subject: Graphics Cards | July 20, 2016 - 12:19 PM | Sebastian Peak
Tagged: VideoCardz, rumor, report, nvidia, GTX 1070M, GTX 1060M, GeForce GTX 1070, GeForce GTX 1060, 2048 CUDA Cores
Specifications for the upcoming mobile version of NVIDIA's GTX 1070 GPU may have leaked, and according to the report at VideoCardz.com, this GTX 1070M will have 2048 CUDA cores - 128 more than the desktop version's 1920.
The report comes via BenchLife, with the screenshot of GPU-Z showing the higher CUDA core count (though VideoCardz mentions the TMU count should be 128). The memory interface remains at 256-bit for the mobile version, with 8GB of GDDR5.
VideoCardz reported another GPU-Z screenshot (via PurePC) of the mobile GTX 1060, which appears to offer the same specs as the desktop version, at a slightly lower clock speed.
Finally, this chart was provided for reference:
Image credit: VideoCardz
Note the absence of information about a mobile variant of the GTX 1080, details of which are still unknown (for now).
Subject: Graphics Cards | July 19, 2016 - 01:54 PM | Jeremy Hellstrom
Tagged: pascal, nvidia, gtx 1060, gp106, geforce, founders edition
The GTX 1060 Founders Edition has arrived, and it also happens to be our first look at the 16nm FinFET GP106 silicon; the GTX 1080 and 1070 used GP104. This card features 10 SMs, 1280 CUDA cores, 48 ROPs, and 80 texture units - in many ways it is half of a GTX 1080. The GPU is clocked at a base of 1506MHz with a boost of 1708MHz, and the 6GB of VRAM runs at 8GHz. [H]ard|OCP put this card through its paces, contrasting it with the RX 480 and the GTX 980 at 1440p as well as the more common 1080p. As they do not use the frame rating tools which are the basis of our graphics testing of all cards, including the GTX 1060 of course, they included the new DOOM in their test suite. Read on to see how they felt the card compared to the competition ... just don't expect to see a follow-up article on SLI performance.
"NVIDIA's GeForce GTX 1060 video card is launched today in the $249 and $299 price point for the Founders Edition. We will find out how it performs in comparison to AMD Radeon RX 480 in DOOM with the Vulkan API as well as DX12 and DX11 games. We'll also see how a GeForce GTX 980 compares in real world gaming."
Here are some more Graphics Card articles from around the web:
- The NVIDIA GTX 1060 6GB Review @ Hardware Canucks
- A quick look at Nvidia's GeForce GTX 1060 @ The Tech Report
- NVIDIA GeForce GTX 1060 Founders Edition Review @ OCC
- NVIDIA GeForce GTX 1060 Founder’s Edition @ Tech ARP
- NVIDIA GeForce GTX 1060 6GB Graphics Card Review @ Techgage
- GeForce GTX 1060 @ Hardwareheaven
- Nvidia GTX 1060 6GB Founders Edition @ Kitguru
- MSI GeForce GTX 1060 Gaming X 6 GB @ techPowerUp
- NVIDIA GeForce GTX 1060 6 GB @ techPowerUp
- NVIDIA GeForce GTX 1060 Review - Enthusiast Gaming at a Mainstream Price @ HiTech Legion
- NVIDIA GeForce GTX 1060 Offers Great Performance On Linux @ Phoronix
Subject: Graphics Cards | July 19, 2016 - 01:07 AM | Scott Michaud
Honestly, when I first received this news, I thought it was a mistaken re-announcement of the contest from a few months ago. The original Order of 10 challenge was made up of a series of puzzles, and the first handful of people to solve it received a GTX 10-Series graphics card. Turns out, NVIDIA is doing it again.
For four weeks, starting on July 21st, NVIDIA will add four new challenges and, more importantly, 100 new “chances to win”. They did not announce what those prizes will be or whether all of them will be distributed to the first 25 complete entries of each challenge, though. Some high-profile YouTube personalities, such as some of the members of Rooster Teeth, were streaming their attempts the last time around, so there might be some of that again this time, too.
Subject: Graphics Cards | July 16, 2016 - 11:03 PM | Tim Verry
Tagged: rx 480, ROG, Radeon RX 480, polaris 10 xt, polaris 10, DirectCU III, asus
Following its previous announcement, Asus has released more information on the Republic of Gamers STRIX RX 480 graphics card. Pricing is still a mystery but the factory overclocked card will be available in the middle of next month!
In my previous coverage, I detailed that the STRIX RX 480 would be using a custom PCB along with Asus' DirectCU III cooler and Aura RGB backlighting. Yesterday, Asus revealed that the card also has a custom VRM solution that, in an interesting twist, draws all of the graphics card's power from the two PCI-E power connectors and nothing from the PCI-E slot. This would explain the inclusion of both a 6-pin and an 8-pin power connector on the card! I do think that it is a bit of an overreaction to not draw anything from the slot, but it is an interesting take on powering a graphics card, and I'm interested to see how it all works out once the reviews hit and overclockers get a hold of it!
The custom graphics card is assembled using Asus' custom "Auto Extreme" automated assembly process and uses "Super Alloy Power II" components (which is to say that Asus claims to be using high quality hardware and build quality). The DirectCU III cooler is similar to the one used on the STRIX GTX 1080 and features direct contact heatpipes, an aluminum fin stack, and three Wing Blade fans that can spin down to zero RPMs when the card is being used on the desktop or during "casual gaming." The fan shroud and backplate are both made of metal which is a nice touch. Asus claims that the cooler is 30% cooler and three times quieter than the RX 480 reference cooler.
Last but certainly not least, Asus revealed boost clock speeds! The STRIX RX 480 will clock up to 1,330 MHz in OC Mode and up to 1,310 MHz in Gaming Mode. Further, Asus has not touched the GDDR5 memory frequency, which stays at the reference 8 GHz. Asus did not reveal base (average) GPU clocks. I was somewhat surprised by the factory overclock as I did not expect much out of the box, but 1,330 MHz is fairly respectable. This card should have a lot more headroom beyond that, though, and fortunately Asus provides software that will automatically overclock the card even further with one click (GPU Tweak II also lets advanced users manually overclock the card). Users should be able to hit at least 1,450 MHz assuming they do decently in the silicon lottery.
For reference, stock RX 480s are clocked at 1,120 MHz base and up to 1,266 MHz boost. Asus claims their factory overclock results in a 15% higher score in 3DMark Fire Strike and 19% more performance in DOOM and Hitman.
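Relative to those reference clocks, the factory overclock is fairly modest; here is a quick sketch of the uplift (my arithmetic from the quoted clocks, separate from Asus' performance claims):

```python
# STRIX RX 480 factory overclock vs. AMD's reference boost clock
reference_boost_mhz = 1266

for mode, clock_mhz in (("OC Mode", 1330), ("Gaming Mode", 1310)):
    uplift = (clock_mhz / reference_boost_mhz - 1) * 100
    print(f"{mode}: {clock_mhz} MHz (+{uplift:.1f}% over reference)")
```

A roughly 5% clock bump suggests Asus' larger claimed performance gains likely also reflect the triple-fan cooler sustaining boost clocks better than the reference blower does.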
Other features of the STRIX RX 480 include FanConnect - two 4-pin fan headers that let users hook up two case fans and have them controlled by the GPU. Aura RGB LEDs on the shroud and backplate allow users to match their build's aesthetics. Asus also includes XSplit GameCaster for game streaming with the card.
No word on pricing yet, but you will be able to get your hands on the card in the middle of next month (specifically "worldwide from mid-August")!
This card is definitely one of the most interesting RX 480 designs so far and I am anxiously awaiting the full reviews!
How far do you think the triple fan cooler can push AMD's Polaris 10 XT GPU?