Subject: Graphics Cards | July 25, 2016 - 06:51 PM | Jeremy Hellstrom
Tagged: msi, gtx 1070, Gaming Z, Twin Frozr VI, factory overclocked
The Tech Report has just wrapped up a review of the Gaming Z edition of NVIDIA's GTX 1070, giving them a chance to see what the MSI Twin Frozr VI cooler can do for the card. It comes with a respectable frequency bump when you enable OC mode: 1657 MHz base and 1860 MHz boost. When they tested it under load, the GPU stayed below 70C, so there should be room to push the card further. Check out the full benchmark suite in their review.
"Nvidia's second Pascal graphics card, the GeForce GTX 1070, aims to set a new bar for graphics performance in the $379-and-up price range. We put MSI's GeForce GTX 1070 Gaming Z card through the wringer to see how a more affordable Pascal card performs."
Here are some more Graphics Card articles from around the web:
- Gigabyte GeForce GTX 1070 Xtreme Gaming @ Modders-Inc
- MSI GTX 1080 Gaming X 8G RGB SLI @ Kitguru
- NVIDIA GeForce GTX 1080 Founders Edition 8GB Graphics Card Review @ NikKTech
- MSI GTX 1060 Gaming X @ eTeknix
- MSI GTX 1060 Gaming X 6G Review @ OCC
- ASUS RX 480 STRIX OC 8 GB @ techPowerUp
Subject: Graphics Cards | July 25, 2016 - 04:48 PM | Scott Michaud
Tagged: siggraph 2016, Siggraph, quadro, nvidia
SIGGRAPH is the big, professional graphics event of the year, bringing together tens of thousands of attendees. They include engineers from Adobe, AMD, Blender, Disney (including ILM, Pixar, etc.), NVIDIA, The Khronos Group, and many, many others. Not only are new products announced, but many technologies are explained in detail, down to the specific algorithms that are used, so colleagues can advance their own research and share in kind.
But new products will indeed be announced.
The NVIDIA Quadro P6000
NVIDIA, having just launched a few Pascal GPUs to other markets, decided to announce updates to their Quadro line at the event. Two cards have been added, the Quadro P5000 and the Quadro P6000, both at the top end of the product stack. Interestingly, both use GDDR5X memory, meaning that neither will be based on the GP100 design, which is built around HBM2 memory.
The NVIDIA Quadro P5000
The lower-end card, the Quadro P5000, should look somewhat familiar to our readers. Exact clocks are not specified, but the chip has 2560 CUDA cores. This is identical to the GTX 1080, but with twice the memory: 16GB of GDDR5X.
Above it sits the Quadro P6000. This chip has 3840 CUDA cores, paired with 24GB of GDDR5X. We have not seen a GPU with exactly these specifications before. It has the same number of FP32 shaders as a fully unlocked GP100 die, but it doesn't have HBM2 memory. On the other hand, the new Titan X uses GP102, combining 3584 CUDA cores with GDDR5X memory, although only 12GB of it. This means that the Quadro P6000 has 256 more (single-precision) shader units than the Titan X, but otherwise very similar specifications.
Both graphics cards have four DisplayPort 1.4 connectors, as well as a single DVI output. These five connectors can be used to drive up to four 4K 120Hz monitors, or four 5K 60Hz ones. It would be nice if all five connections could be used at once, but what can you do.
Pascal has other benefits for professional users, too. For instance, Simultaneous Multi-Projection (SMP) is used in VR applications to essentially double the GPU's geometry processing ability. NVIDIA will be pushing professional VR at SIGGRAPH this year, also launching Iray VR. This uses light fields, rendered on devices like the DGX-1, with its eight GP100 chips connected by NVLink, to provide accurately lit environments. This is particularly useful for architectural visualization.
No price is given for either of these cards, but they will launch in October of this year.
Subject: Graphics Cards | July 22, 2016 - 05:51 PM | Scott Michaud
Tagged: pascal, nvidia, graphics drivers
Turns out the Pascal-based GPUs suffered from DPC latency issues, and there's been an ongoing discussion about it for a little over a month. This is not an area that I know a lot about, but it's a system that schedules workloads by priority, which provides regular windows of time for sound and video devices to update. It can be stalled by long-running driver code, though, which could manifest as stutter, audio hitches, and other performance issues. With a 10-series GeForce device installed, users have reported that this latency increases about 10-20x, from ~20us to ~300-400us. This can increase to 1000us or more under load. (8333us is ~1 whole frame at 120FPS.)
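To put those reported numbers in perspective, here is a quick back-of-envelope sketch (the latency figures are the rough ranges from the user reports above, not our own measurements):

```python
# Frame budget at 120 FPS, and how much of it various reported DPC latencies eat.
FPS = 120
frame_budget_us = 1_000_000 / FPS  # ~8333 us per frame

reported = {"healthy system": 20, "10-series idle": 400, "10-series under load": 1000}
for label, latency_us in reported.items():
    share = 100 * latency_us / frame_budget_us
    print(f"{label}: {latency_us} us = {share:.1f}% of one 120 FPS frame")
```

Even the loaded-case spikes are a modest slice of a single frame, which is consistent with the symptoms being intermittent stutter and audio hitches rather than outright frame drops.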
NVIDIA has acknowledged the issue and, just yesterday, released an optional hotfix. Upon installing the driver, while it could just be psychosomatic, the system felt a lot more responsive. I ran LatencyMon (DPCLat isn't compatible with Windows 8.x or Windows 10) before and after, and the latency measurement did drop significantly. It was consistently the largest source of latency, spiking in the thousands of microseconds, before the update. After the update, it was hidden by other drivers for the first night, although today it seems to have a few spikes again. That said, Microsoft's networking driver is also spiking in the ~200-300us range, so a good portion of it might be the sad state of my current OS install. I've been meaning to do a good system wipe for a while...
Measurement taken after the hotfix, while running Spotify.
That said, my computer's a mess right now.
That said, some of the post-hotfix driver spikes are reaching ~570us (mostly when I play music on Spotify through my Blue Yeti Pro). Also, Photoshop CC 2015 started complaining about graphics acceleration issues after installing the hotfix, so only install it if you're experiencing problems. As for the latency, if it's not just my machine, NVIDIA might still have some work to do.
It does feel a lot better, though.
Subject: Graphics Cards | July 21, 2016 - 10:21 PM | Ryan Shrout
Tagged: titan x, titan, pascal, nvidia, gp102
Donning the leather jacket he goes very few places without, NVIDIA CEO Jen-Hsun Huang showed up at an AI meet-up at Stanford this evening to show, for the very first time, a graphics card based on a never before seen Pascal GP102 GPU.
Source: Twitter (NVIDIA)
Rehashing an old name, NVIDIA will call this new graphics card the Titan X. You know, like the "new iPad," this is the "new Titan X." Here is the data we know about thus far:
| | Titan X (Pascal) | GTX 1080 | GTX 980 Ti | TITAN X | GTX 980 | R9 Fury X | R9 Fury | R9 Nano | R9 390X |
|---|---|---|---|---|---|---|---|---|---|
| GPU | GP102 | GP104 | GM200 | GM200 | GM204 | Fiji XT | Fiji Pro | Fiji XT | Hawaii XT |
| Rated Clock | 1417 MHz | 1607 MHz | 1000 MHz | 1000 MHz | 1126 MHz | 1050 MHz | 1000 MHz | up to 1000 MHz | 1050 MHz |
| Texture Units | 224 (?) | 160 | 176 | 192 | 128 | 256 | 224 | 256 | 176 |
| ROP Units | 96 (?) | 64 | 96 | 96 | 64 | 64 | 64 | 64 | 64 |
| Memory Clock | 10000 MHz | 10000 MHz | 7000 MHz | 7000 MHz | 7000 MHz | 500 MHz | 500 MHz | 500 MHz | 6000 MHz |
| Memory Interface | 384-bit G5X | 256-bit G5X | 384-bit | 384-bit | 256-bit | 4096-bit (HBM) | 4096-bit (HBM) | 4096-bit (HBM) | 512-bit |
| Memory Bandwidth | 480 GB/s | 320 GB/s | 336 GB/s | 336 GB/s | 224 GB/s | 512 GB/s | 512 GB/s | 512 GB/s | 320 GB/s |
| TDP | 250 watts | 180 watts | 250 watts | 250 watts | 165 watts | 275 watts | 275 watts | 175 watts | 275 watts |
| Peak Compute | 11.0 TFLOPS | 8.2 TFLOPS | 5.63 TFLOPS | 6.14 TFLOPS | 4.61 TFLOPS | 8.60 TFLOPS | 7.20 TFLOPS | 8.19 TFLOPS | 5.63 TFLOPS |
Note: anything marked with a (?) is an educated guess on our part.
Obviously there is a lot for us to still learn about this new GPU and graphics card, including why in the WORLD it is still being called Titan X, rather than...just about anything else. That aside, GP102 will feature 40% more CUDA cores than the GP104 at slightly lower clock speeds. The rated 11 TFLOPS of single precision compute of the new Titan X is 34% better than that of the GeForce GTX 1080 and I would expect gaming performance to scale in line with that difference.
The new Titan X will feature 12GB of GDDR5X memory, not the HBM2 of the GP100 chip, so this is clearly a new chip with a new memory interface. NVIDIA claims it will have 480 GB/s of bandwidth, and I am guessing it is built on a 384-bit memory controller interface running at the same 10 Gbps as the GTX 1080. It's truly amazing hardware.
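Those guesses are easy to sanity-check with the standard back-of-envelope formulas (this is just arithmetic on the numbers above, not confirmed specs):

```python
# Memory bandwidth: bus width (bits) / 8 * per-pin data rate (Gbps) = GB/s
bus_bits, data_rate_gbps = 384, 10
bandwidth_gbs = bus_bits / 8 * data_rate_gbps
print(bandwidth_gbs)  # 480.0 GB/s, matching NVIDIA's claim

# Peak FP32 compute: cores * 2 ops per clock (FMA) * clock speed.
# Working backward from 11 TFLOPS and 3584 cores gives the implied clock:
cores, tflops = 3584, 11.0
implied_clock_ghz = tflops * 1e12 / (cores * 2) / 1e9
print(round(implied_clock_ghz, 2))  # ~1.53 GHz, above the 1417 MHz rated clock
```

The implied ~1.53 GHz is presumably a boost clock, which would explain the gap between the 11 TFLOPS figure and the 1417 MHz rated clock in the table.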
What will you be asked to pay? $1200, going on sale on August 2nd, and only on NVIDIA.com, at least for now. Considering the prices of GeForce GTX 1080 cards with such limited availability, the $1200 price tag MIGHT NOT seem so insane. That's higher than the $999 starting price of the Titan X based on Maxwell in March of 2015 - the claims that NVIDIA is artificially raising prices of cards in each segment will continue, it seems.
I am curious about the TDP on the new Titan X - will it hit the 250 watt mark of the previous version? Yes, apparently it will hit that 250 watt TDP - specs above updated. Does this also mean we'll see a GeForce GTX 1080 Ti that falls between the GTX 1080 and this new Titan X? Maybe, but we are likely looking at an $899 or higher SEP - so get those wallets ready.
That's it for now; we'll have a briefing where we can get more details soon, and hopefully a review ready for you on August 2nd when the cards go on sale!
Subject: Graphics Cards | July 21, 2016 - 02:04 PM | Jeremy Hellstrom
Tagged: gtx 460, gtx 760, gtx 960, gtx 1060, fermi, kepler, maxwell, pascal
Phoronix took a look at how the Linux performance of NVIDIA's mid-range cards has changed over the past four generations of GPU: Fermi, Kepler, Maxwell, and finally Pascal. CS:GO and DOTA were run at 4K to push the newer GPUs, much to the dismay of the GTX 460. The scaling is rather interesting; there is a very large delta between Fermi and Kepler, which comes close to being replicated when comparing Maxwell to Pascal. From the looks of the vast majority of the tests, the GTX 1060 will be a noticeable upgrade for Linux users no matter which previous mid-range card they are currently using. We will likely see a similar article covering AMD in the near future.
"To complement yesterday's launch-day GeForce GTX 1060 Linux review, here are some more benchmark results with the various NVIDIA x60 graphics cards I have available for testing going back to the GeForce GTX 460 Fermi. If you are curious about the raw OpenGL/OpenCL/CUDA performance and performance-per-Watt for these mid-range x60 graphics cards from Fermi, Kepler, Maxwell, and Pascal, here are these benchmarks from Ubuntu 16.04 Linux." Here are some more Graphics Card articles from around the web:
- ASUS ROG STRIX-GTX1070-O8G-GAMING: GTX 1070, Strix Style! @ Bjorn3d
- MSI GeForce GTX 1060 Gaming X Review @HiTech Legion
- EVGA GeForce GTX 1070 SC Gaming ACX 3.0 Review - Affordable Enthusiast Gaming @HiTech Legion
- Radeon RX 480 performance revisited with AMD's 16.7.1 driver @ The Tech Report
- AMD Radeon RX 480 8GB CrossFire @ [H]ard|OCP
Subject: Graphics Cards | July 20, 2016 - 12:19 PM | Sebastian Peak
Tagged: VideoCardz, rumor, report, nvidia, GTX 1070M, GTX 1060M, GeForce GTX 1070, GeForce GTX 1060, 2048 CUDA Cores
Specifications for the upcoming mobile version of NVIDIA's GTX 1070 GPU may have leaked, and according to the report at VideoCardz.com this GTX 1070M will have 2048 CUDA cores, 128 more than the desktop version's 1920.
The report comes via BenchLife, with the screenshot of GPU-Z showing the higher CUDA core count (though VideoCardz mentions the TMU count should be 128). The memory interface remains at 256-bit for the mobile version, with 8GB of GDDR5.
VideoCardz reported another GPU-Z screenshot (via PurePC) of the mobile GTX 1060, which appears to offer the same specs as the desktop version, at a slightly lower clock speed.
Finally, this chart was provided for reference:
Image credit: VideoCardz
Note the absence of information about a mobile variant of the GTX 1080, details of which are still unknown (for now).
Subject: Graphics Cards | July 19, 2016 - 01:54 PM | Jeremy Hellstrom
Tagged: pascal, nvidia, gtx 1060, gp106, geforce, founders edition
The GTX 1060 Founders Edition has arrived, and it also happens to be our first look at the 16nm FinFET GP106 silicon; the GTX 1080 and 1070 used GP104. This card features 10 SMs, 1280 CUDA cores, 48 ROPs, and 80 texture units; in many ways it is half of a GTX 1080. The GPU is clocked at a base of 1506MHz with a boost of 1708MHz, with the 6GB of VRAM at 8GHz. [H]ard|OCP took the card through its paces, contrasting it with the RX 480 and the GTX 980 at 1440p as well as the more common 1080p. As they do not use the frame rating tools which are the basis of our own graphics testing of all cards, including the GTX 1060 of course, they included the new DOOM in their test suite. Read on to see how they felt the card compared to the competition ... just don't expect to see a follow-up article on SLI performance.
"NVIDIA's GeForce GTX 1060 video card is launched today in the $249 and $299 price point for the Founders Edition. We will find out how it performs in comparison to AMD Radeon RX 480 in DOOM with the Vulkan API as well as DX12 and DX11 games. We'll also see how a GeForce GTX 980 compares in real world gaming."
Here are some more Graphics Card articles from around the web:
- The NVIDIA GTX 1060 6GB Review @ Hardware Canucks
- A quick look at Nvidia's GeForce GTX 1060 @ The Tech Report
- NVIDIA GeForce GTX 1060 Founders Edition Review @ OCC
- NVIDIA GeForce GTX 1060 Founder’s Edition @ Tech ARP
- NVIDIA GeForce GTX 1060 6GB Graphics Card Review @ Techgage
- GeForce GTX 1060 @ Hardwareheaven
- Nvidia GTX 1060 6GB Founders Edition @ Kitguru
- MSI GeForce GTX 1060 Gaming X 6 GB @ techPowerUp
- NVIDIA GeForce GTX 1060 6 GB @ techPowerUp
- NVIDIA GeForce GTX 1060 Review - Enthusiast Gaming at a Mainstream Price @ HiTech Legion
- NVIDIA GeForce GTX 1060 Offers Great Performance On Linux @ Phoronix
Subject: Graphics Cards | July 19, 2016 - 01:07 AM | Scott Michaud
Honestly, when I first received this news, I thought it was a mistaken re-announcement of the contest from a few months ago. The original Order of 10 challenge was made up of a series of puzzles, and the first handful of people to solve it received a GTX 10-Series graphics card. Turns out, NVIDIA is doing it again.
For four weeks, starting on July 21st, NVIDIA will add four new challenges and, more importantly, 100 new “chances to win”. They did not announce what those prizes will be or whether all of them will be distributed to the first 25 complete entries of each challenge, though. Some high-profile YouTube personalities, such as some of the members of Rooster Teeth, were streaming their attempts the last time around, so there might be some of that again this time, too.
Subject: Graphics Cards | July 16, 2016 - 11:03 PM | Tim Verry
Tagged: rx 480, ROG, Radeon RX 480, polaris 10 xt, polaris 10, DirectCU III, asus
Following its previous announcement, Asus has released more information on the Republic of Gamers STRIX RX 480 graphics card. Pricing is still a mystery but the factory overclocked card will be available in the middle of next month!
In my previous coverage, I detailed that the STRIX RX 480 would be using a custom PCB along with Asus' DirectCU III cooler and Aura RGB back lighting. Yesterday, Asus revealed that the card also has a custom VRM solution that, in an interesting twist, draws all of the graphics card's power from the two PCI-E power connectors and nothing from the PCI-E slot. This would explain the inclusion of both a 6-pin and 8-pin power connector on the card! I do think that it is a bit of an over-reaction to not draw anything from the slot, but it is an interesting take on powering a graphics card and I'm interested to see how it all works out once the reviews hit and overclockers get a hold of it!
The custom graphics card is assembled using Asus' custom "Auto Extreme" automated assembly process and uses "Super Alloy Power II" components (which is to say that Asus claims to be using high quality hardware and build quality). The DirectCU III cooler is similar to the one used on the STRIX GTX 1080 and features direct contact heatpipes, an aluminum fin stack, and three Wing Blade fans that can spin down to zero RPMs when the card is being used on the desktop or during "casual gaming." The fan shroud and backplate are both made of metal which is a nice touch. Asus claims that the cooler is 30% cooler and three times quieter than the RX 480 reference cooler.
Last but certainly not least, Asus revealed boost clock speeds! The STRIX RX 480 will clock up to 1,330 MHz in OC Mode and up to 1,310 MHz in Gaming Mode. Further, Asus has not touched the GDDR5 memory frequency, which stays at the reference 8 GHz. Asus did not reveal base (average) GPU clocks. I was somewhat surprised by the factory overclock as I did not expect much out of the box, but 1,330 MHz is fairly respectable. This card should have a lot more headroom beyond that though, and fortunately Asus provides software that will automatically overclock the card even further with one click (GPU Tweak II also lets advanced users manually overclock the card). Users should be able to hit at least 1,450 MHz assuming they do decently in the silicon lottery.
For reference, stock RX 480s are clocked at 1,120 MHz base and up to 1,266 MHz boost. Asus claims their factory overclock results in a 15% higher score in 3DMark Fire Strike and 19% more performance in DOOM and Hitman.
Other features of the STRIX RX 480 include FanConnect which is two 4-pin fan headers that allows users to hook up two case fans and allow them to be controlled by the GPU. Aura RGB LEDs on the shroud and backplate allow users to match their build aesthetics. Asus also includes XSplit GameCaster for game streaming with the card.
No word on pricing yet, but you will be able to get your hands on the card in the middle of next month (specifically "worldwide from mid-August")!
This card is definitely one of the most interesting RX 480 designs so far and I am anxiously awaiting the full reviews!
How far do you think the triple fan cooler can push AMD's Polaris 10 XT GPU?
Subject: Graphics Cards | July 16, 2016 - 06:37 PM | Scott Michaud
Tagged: Volta, pascal, nvidia, maxwell, 16nm
For the past few generations, NVIDIA has roughly been trying to release a new architecture on a new process node, then release a refresh the following year. This ran into a hitch when Maxwell was delayed a year, apart from the GTX 750 Ti, and then pushed back to the same 28nm process that Kepler utilized. Pascal caught up with 16nm, although we know that some hard, physical limitations are right around the corner. The lattice spacing for silicon at room temperature is around ~0.5nm, so we're talking about features on the order of the low 30s of atoms in width.
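That back-of-envelope figure is easy to check, using the ~0.5 nm lattice spacing quoted above:

```python
# Rough feature width in silicon lattice spacings for a few process nodes.
lattice_nm = 0.5  # approximate silicon lattice spacing at room temperature
for node_nm in (28, 16, 10):
    print(f"{node_nm} nm ~= {node_nm / lattice_nm:.0f} lattice spacings across")
```

For the 16nm node that works out to roughly 32 spacings, i.e. the "low 30s of atoms" mentioned above; at 10nm it drops to around 20.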
This rumor claims that NVIDIA is not going with 10nm for Volta. Instead, Volta will remain on the same 16nm node that Pascal currently occupies. This is quite interesting, because GPUs scale quite well with added complexity, as they have many functional units running at relatively low clock rates, so the only real ways to increase performance are to make the existing architecture more efficient, or to make a larger chip.
That said, GP100 leaves a lot of room on the table for an FP32-optimized, ~600mm2 part to crush its performance at the high end, similar to how GM200 replaced GK110. The rumored GP102, expected in the ~450mm2 range for Titan or GTX 1080 Ti-style parts, has some room to grow. Like GM200, however, it would also be unappealing to GPU compute users who need FP64. If this is what is going on, and we're totally just speculating at the moment, it would signal that enterprise customers should expect a new GPGPU card every second gaming generation.
That is, of course, unless NVIDIA recognized ways to make the Maxwell-based architecture significantly more die-space efficient in Volta. Clocks could get higher, or the circuits themselves could get simpler. You would think that, especially in the latter case, they would have integrated those ideas into Maxwell and Pascal, though; but, like HBM2 memory, there might have been a reason why they couldn't.
We'll need to wait and see. The entire rumor could be crap, who knows?
Subject: Graphics Cards | July 16, 2016 - 01:10 AM | Tim Verry
Tagged: rx 470, rx 460, polaris 11, polaris 10, gcn4, esports, amd
At a launch event in Australia earlier this week, AMD talked about its Polaris architecture, launched the RX 480, and revealed the specifications for the Polaris 10-based RX 470 and the Polaris 11-derived RX 460. The new budget GPUs are aimed at 1080p or lower gaming and will allegedly be available for purchase sometime in August.
First up is the AMD Radeon RX 470. This GPU is based on Polaris 10 (like the RX 480) but has some hardware disabled (mainly a lower number of stream processors). Built on the same 14nm process, the GPU has 2,048 cores running at as-yet-unknown clocks. Thankfully, AMD has left the memory interface intact, and the RX 470 uses the same 256-bit memory bus, pairing the GPU with 4GB of GDDR5 memory on the reference design and up to 8GB on partner cards.
Speaking of the reference design, the reference RX 470 will utilize a blower style cooler that AIBs can use but AMD expects that partners will opt to use their own custom dual and triple fan coolers (as would I). The card is powered by a single 6-pin power connector though, again, AIBs are allowed to design a card with more.
This card is reportedly aimed at 1080p gaming at "ultra and max settings". Video outputs will include DisplayPort 1.3/1.4 HDR support.
Breaking away from Polaris 10 is the RX 460, the first GPU AMD has talked about using Polaris 11. This GCN4 architecture is similar to its larger Polaris sibling but is further cut down and engineered for low power and mobile environments. While the "full" Polaris 11 appears to have 16 CUs (Compute Units), the RX 460 will feature 14 of them (this should open up opportunities for lots of salvaged dies, and once yields are good enough we might see an RX 465 or similar with all of its stream processors enabled). With 14 CUs, the RX 460 has 896 stream processors (again, clock speeds were not discussed) and a 128-bit memory bus. AMD's reference design will pair the card with 2GB of GDDR5, but I would not be surprised to see 4GB versions, possibly in a gaming laptop SKU, if only because it looks better (heh). There is no external PCI-E power connector on this card, so it will draw all of its power from the PCI-E slot on the motherboard.
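The stream processor counts follow directly from the CU counts, since each GCN Compute Unit contains 64 stream processors:

```python
# GCN stream processor count = Compute Units * 64 SPs per CU.
SP_PER_CU = 64
full_polaris11_cus, rx460_cus = 16, 14
print(rx460_cus * SP_PER_CU)           # 896, matching the RX 460 spec above
print(full_polaris11_cus * SP_PER_CU)  # 1024 on a fully enabled Polaris 11
```

That 1024-SP figure is what a hypothetical fully-enabled part (the "RX 465 or similar" speculated above) would carry.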
The reference graphics card is a tiny affair with a single fan HSF and support for DP 1.3/1.4 HDR. AMD further mentions 4K H.264 / HEVC encoding/decoding support. AMD is positioning this card at HTPCs and "eSports" budget gamers.
One other tidbit of information from the announcement: AMD reiterated its new "RX" naming scheme, saying that RX will be reserved for gaming and we will no longer see R9, R7, and R5 branding, though AMD did not rule out future non-RX products aimed at non-gaming workloads. I would expect that this will eventually apply to APU graphics as well.
Naturally, AMD is not talking exact shipping dates or pricing but expect them to be well under the $239 of the RX 480! I would guess that RX 470 would be around the $150 mark while RX 460 will be a sub $100 part (if only barely).
What do you think about the RX 470 and RX 460? If you are interested in watching the whole event, there is a two part video of it available on YouTube. Part 1 and Part 2 are embedded below the break.
Subject: Graphics Cards | July 13, 2016 - 09:20 PM | Scott Michaud
Tagged: vulkan, R9 Fury X, nvidia, Mantle, gtx 1070, fury x, doom, amd
We haven't yet benchmarked DOOM on Vulkan. Update (immediately after posting): Ryan has just informed me that, apparently, we did benchmark Vulkan on our YouTube page (embed below). I knew we were working on it, I just didn't realize we'd published content yet. Original post continues below.
As far as I know, we're trying to get our frame time analysis software running on the new API, but other sites have posted framerate-based results. The results show that AMD's cards benefit greatly from the new, Mantle-derived interface (versus the OpenGL one). On the other hand, while NVIDIA never really sees a decrease of more than about 1%, it doesn't really get much of a boost, either.
Image Credit: ComputerBase.de
I tweeted at id's lead renderer programmer, Tiago Sousa, to ask whether they take advantage of NVIDIA-specific extensions on the OpenGL path (like command submission queues). I haven't received a response yet, so it's difficult to tell whether this speaks more to NVIDIA's OpenGL performance or AMD's Vulkan performance. In the end, it doesn't really matter, though. AMD's Fury X (which can be found for as low as $399 with a mail-in rebate) is beating the GTX 1070 (which is in stock for the low $400s) by a fair margin. The Fury X also beats its own OpenGL performance by up to 66% (at 1080p) with the new API.
The API should also make it easier for games to pace their frames, which should allow smoother animation at these higher rates. That said, we don't know for sure because we can't test that from just seeing FPS numbers. The gains are impressive from AMD, though.
Subject: Graphics Cards | July 12, 2016 - 12:01 AM | Tim Verry
Tagged: strix, rx 480, Radeon RX 480, polaris 10, asus, amd
Alongside the launch of AMD's reference design Radeon RX 480, the company's various AIB (Add-In Board) partners began announcing their own custom versions, pairing AMD's Polaris 10 GPU with custom PCBs and coolers. Asus took the launch to heart and teased its Radeon RX 480 STRIX under its ROG lineup. The press release was rather scant on details, but it does look like a promising card that will let users really push Polaris 10 to its limits.
Thanks to forum user Eroticus over at VideoCardz, we can see that the RX 480 STRIX looks to use a custom PCB and power delivery design that feeds the GPU via two PCI-E power connectors in addition to the PCI-E slot. Asus is not talking clock speeds on the GPU, but it did reveal that it is going with 8GB of GDDR5 memory at 8 GHz. The DirectCU III cooler pairs heatpipes and an aluminum fin stack with three shrouded fans. There is also a backplate (with an LED backlit logo, of course) which should help support the card and provide a bit more cooling.
I would not expect too much of a factory (out of the box) overclock from this card. However, I do expect that users will be able to seriously overclock the Polaris 10 GPU thanks to the extra power connector (allegedly one 6-pin and one 8-pin which seems a bit much but we’ll see!) and beefy air cooler.
For reference, the, well, reference design RX 480 has base and boost clock speeds of 1120 MHz and 1266 MHz respectively. The Polaris 10 GPU has 2,304 cores, 144 texture units, and 32 raster operators. If buyers get a good chip in their RX 480 Strix, it may be possible for them to get to 1400 MHz boost as some of the rumors around the Internet claim though it’s hard to say for sure as that may require quite a bit more voltage (and heat) to reach. I wouldn’t put it out of the realm of possibility though!
Of course it would not be Republic of Gamers’ material without LEDs, and ASUS delivers with the inclusion of its Aura RGB LEDs on the cooler shroud and backplate which I believe are user configurable in Asus’ software utility.
Beyond that, not much is known about the upcoming RX 480 STRIX graphics card. Stay tuned to PC Perspective for more information as it gets closer to availability!
- The AMD Radeon RX 480 Review - The Polaris Promise
- PowerColor Radeon RX 480 Red Devil Leak
- PCPer Live! Radeon RX 480 Live Stream with Raja Koduri!
- Meet ASUS' DirectCU III on the Radeon Fury
Subject: Graphics Cards | July 11, 2016 - 01:59 PM | Jeremy Hellstrom
Tagged: GTX 1080, GameRock Premium, palit, factory overclocked
Palit's card is certainly unique looking in the GTX 1080 market; that blue, white, and silver is not a colour palette used by other manufacturers. That is not the only difference between this card and a stock GTX 1080: it is also overclocked, with a core of 1746 MHz and VRAM at 1315 MHz, along with a cooler that covers the entire card and takes up three slots. That extra cooling ability translates into a card that runs at 30dBA under load, and TechPowerUp did not see temperatures exceeding 72°C. It is a little on the expensive side, but if you have space in your case this is a worthy contender for your hard-earned cash.
"Palit's GTX 1080 GameRock uses a mighty triple-slot dual-fan design, which provides excellent temperatures and noise levels better than any GTX 1080 we tested so far. The fans also turn off in idle, and thanks to the large overclock out the box, the card is the fastest GTX 1080 we ever tested, too."
Here are some more Graphics Card articles from around the web:
- EVGA GeForce GTX 1070 SuperClocked 8 GB @ techPowerUp
- ASUS STRIX GAMING GTX 1080 @ eTeknix
- ASUS GTX 950-2G “Unplugged” @ Kitguru
- PNY GTX 950 2GB and GTX 960 4GB XLR8 OC Gaming @ Kitguru
- Radeon Software 16.7.1 Performance Comparison @ Tech ARP
Subject: Graphics Cards | July 7, 2016 - 10:13 PM | Scott Michaud
Tagged: nvidia, GTX 1080, ea, dice, battlefield, battlefield 1
Battlefield 1 looks pretty good. To compare how it scales between its settings, DigitalFoundry captured a short video at 4K across all four omnibus graphics settings: Low, Medium, High, and Ultra. These are, as should be expected for a high-end PC game, broken down into more specific categories, like lighting quality and texture filtering, but I can't blame them for not adding that many permutations to a single video. It would just be a mess.
The rendering itself doesn't change too much between settings to my eye. Higher quality settings draw more distant objects than lower ones, and increase the range at which level of detail falls off, too. About a third of the way into the video, they show a house from a moderate distance. The lowest quality version was almost completely devoid of shadowing, and its windows would not even draw. The lighting then scaled up from there as the settings were moved progressively toward Ultra.
Image Credit: DigitalFoundry
While it's still Alpha-level code, a single GTX 1080 was getting between 50 and 60 FPS at 4K. This is a good range to be in for a G-Sync monitor, as the single-card machine doesn't need to deal with multi-GPU issues, like pacing and driver support.
Subject: Graphics Cards | July 7, 2016 - 04:37 PM | Scott Michaud
Tagged: amd, rx 480, powercolor
According to Videocardz, a custom RX 480 from PowerColor has been caught on camera. The most interesting part about this variant is that it connects to the power supply with a single eight-pin PCIe connector. With AMD's latest driver, and hopefully even a modified vBIOS and PCB, this should be plenty of power for the GPU, even with overclocking.
Image Credit: Videocardz
The card itself is a three-fan design with three DisplayPorts, one HDMI, and a single DVI. This retains the reference design's three DisplayPorts, but also adds the option to use DVI without an adapter. I'm not sure whether all five connectors can be used simultaneously, which isn't too bad -- apparently the GTX 1080 also cannot use all five connectors at the same time, so I wouldn't plan on connecting five monitors to a single-GPU system, just in case.
No pricing and availability yet... this is just a picture. We don't even know clock rates.
Subject: Graphics Cards | July 7, 2016 - 02:50 PM | Sebastian Peak
Tagged: rx480, rx 480, Radeon RX 480, radeon, power draw, PCIe power, graphics drivers, driver, Crimson Edition 16.7.1, amd
As promised, AMD has released an updated driver for the RX 480 graphics card, and the Radeon Software Crimson Edition 16.7.1 promises a fix for the power consumption concerns we have been covering in-depth.
Note: We have published our full analysis of the new 16.7.1 driver, available here.
AMD lists these highlights for the new Crimson Edition 16.7.1 software:
"The Radeon RX 480’s power distribution has been improved for AMD reference boards, lowering the current drawn from the PCIe bus.
A new 'compatibility mode' UI toggle has been made available in the Global Settings menu of Radeon Settings. This option is designed to reduce total power with minimal performance impact if end users experience any further issues. This toggle is 'off' by default.
Performance improvements for the Polaris architecture that yield performance uplifts in popular game titles of up to 3%. These optimizations are designed to improve the performance of the Radeon RX 480, and should substantially offset the performance impact for users who choose to activate the 'compatibility' toggle."
You can download the updated driver directly from AMD's page: http://support.amd.com/en-us/download/desktop?os=Windows%2010%20-%2064
Subject: Graphics Cards | July 6, 2016 - 11:56 PM | Scott Michaud
Tagged: titan, pascal, nvidia, gtx 1080 ti, gp102, GP100
Normally, I pose these sorts of rumors as "Well, here you go, and here's a grain of salt." This one I'm fairly sure is bogus, at least to some extent. I could be wrong, but the GP100 aspects of it, especially, just don't make sense.
Before I get to that, the rumor is that NVIDIA will announce a GeForce GTX Titan P at Gamescom in Germany. The event occurs mid-August (17th - 21st) and it has been basically Europe's E3 in terms of gaming announcements. It also overlaps with Europe's Game Developers Conference (GDC), which occurs in March for us. The rumor says that it will use GP100 (!?!) with either 12GB of VRAM, 16GB of VRAM, or two variants as we've seen with the Tesla P100 accelerator.
The rumor also acknowledges the previously rumored GP102 die, claims that it will be for the GTX 1080 Ti, and suggests that it will have up to 3840 CUDA cores. This is the same number of CUDA cores as the GP100, which is where I get confused. This would mean that NVIDIA made a special die, which other rumors claim is ~450mm2, for just the GeForce GTX 1080 Ti.
I mean, it's possible that NVIDIA would split the GTX 1080 Ti and the next Titan by similar gaming performance, just with better half- and double-precision performance and faster memory for GPGPU developers. That would be very weird to me, though: developing two different GPU dies for the consumer market with probably the same gaming performance.
And they would be announcing the Titan P first???
The harder to yield one???
When the Tesla version isn't even expected until Q4???
I can see it happening, but I seriously doubt it. Something may be announced, but I'd have to believe it will be at least slightly different from the rumors that we are hearing now.
Subject: Graphics Cards | July 6, 2016 - 09:37 PM | Scott Michaud
Tagged: amd, linux, graphics drivers, rx 480, Polaris
Linux support from AMD seems to be improving, as it has been on Windows. We'll be combining two separate, tiny stories into one, so bear with us. The first is from Fudzilla, and it states that AMD has AMDGPU-PRO 16.30 drivers for the RX 480 out on day one. It's nice to see that their Radeon driver initiative applies to Linux, too.
That brings us to the second story, this one from Phoronix. On Windows, the Crimson 16.7.1 drivers will include a fix for the RX 480 power issues (which we will obviously test). Michael Larabel was apparently talking with AMD's Linux team, and it seems likely that this update will roll into the Linux driver as well. They "are still investigating", of course, but it is apparently on their radar.
Subject: Graphics Cards | July 6, 2016 - 08:11 PM | Scott Michaud
Tagged: rx 480, Polaris, graphics drivers, amd
In the next 24 hours or so, AMD will publish Radeon Software 16.7.1, which addresses the power distribution issues in the AMD Radeon RX 480. The driver makes two major changes. First, AMD claims that it will lower the draw from the PCIe bus. While they don't explicitly say how, it sounds like it will increase the load on the 6-pin PCIe cable, which is typically over-provisioned. In fact, many power supplies have 6-pin connectors that have the extra two pins of an 8-pin connector hanging off of them.
Second, seemingly for those who aren't comfortable with the extra load on the 6-pin PCIe connector, a UI control has been added to lower overall power. Given that the option is called "compatibility", it sounds like it should put the RX 480 back into spec on both the slot and the extra power connector. Again, AMD says that they believe it's not necessary, and that seems to be the case, as the option is off by default.
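The reasoning behind this redistribution can be sketched with the nominal PCI Express limits (75 W from the slot, 75 W from a 6-pin connector). The wattage figures below are purely illustrative, not measured values; the point is how shifting load moves the slot back under its limit while nominally pushing the cable over its own, which is why the "compatibility" toggle exists for cautious users.

```python
# Nominal limits from the PCI Express spec, in watts.
SLOT_LIMIT = 75     # x16 slot
SIX_PIN_LIMIT = 75  # 6-pin auxiliary connector

def over_spec(slot_w, six_pin_w):
    """Report which power sources exceed their nominal limit."""
    return {
        "slot_over": slot_w > SLOT_LIMIT,
        "six_pin_over": six_pin_w > SIX_PIN_LIMIT,
    }

# Illustrative split for a ~160 W total board draw, before and after
# the driver shifts load from the slot onto the cable.
before = over_spec(slot_w=85, six_pin_w=75)
after = over_spec(slot_w=72, six_pin_w=88)

print(before)  # slot over its nominal limit
print(after)   # slot back in spec; cable nominally over, but typically over-provisioned
```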
Beyond these changes, the driver also adds a bunch of game optimizations. Allyn and Ryan have been working on this coverage, so expect more content from them in the very near future.