Subject: Graphics Cards | September 22, 2017 - 04:36 PM | Jeremy Hellstrom
Tagged: Radeon Software 17.9.2, crossfire, Vega
The newest Radeon Software ReLive 17.9.2 is especially worth grabbing if you have, or plan to have, more than one Vega-based card in your system, as it marks the return of CrossFire support. You can pair up Vega 64 or Vega 56 cards, but do make sure they are a matched set. We haven't had time to test the performance ourselves yet, but you can be sure we will be working on that in the near future. Below are the results AMD suggests you can expect in several different games, as well as a look at the other notes associated with this new driver.
Radeon™ Software Crimson ReLive Edition is AMD's advanced graphics software for enabling high-performance gaming and engaging VR experiences. Create, capture, and share your remarkable moments. Effortlessly boost performance and efficiency. Experience Radeon Software with industry-leading user satisfaction, rigorously-tested stability, comprehensive certification, and more.
Radeon Software Crimson ReLive Edition 17.9.2 Highlights
- Radeon RX Vega Series Up to 2x Multi GPU support
- Project CARS 2™ Multi GPU profile support added
Hearts of Iron IV™ may experience a system hang when the campaign scenario is launched.
Radeon Software may display an erroneous "1603 Error" after installing Radeon Software. This error will not affect your Radeon Software installation.
Two Vegas...ha ha ha
When the preorders for the Radeon Vega Frontier Edition went up last week, I made the decision to place orders in a few different locations to make sure we got it in as early as possible. Well, as it turned out, we actually had the cards show up very quickly…from two different locations.
So, what is a person to do if TWO of the newest, most coveted GPUs show up on their doorstep? After you do the first, full review of the single GPU iteration, you plug those both into your system and do some multi-GPU CrossFire testing!
There of course needs to be some discussion up front about this testing and our write-up. If you read my first review of the Vega Frontier Edition you will clearly note my stance on the ideas that “this is not a gaming card” and that “the drivers aren’t ready.” Essentially, I said these potential excuses for performance were distractions and unwarranted based on the current state of Vega development and the proximity of the consumer iteration, Radeon RX Vega.
But for multi-GPU, it’s a different story. Both competitors in the GPU space will tell you that developing drivers for CrossFire and SLI is incredibly difficult. Much more than simply splitting the work across different processors, multi-GPU requires extra attention to specific games, game engines, and effects rendering that are not required in single GPU environments. Add to that the fact that the market size for CrossFire and SLI has been shrinking, from an already small state, and you can see why multi-GPU is going to get less attention from AMD here.
Even more, when CrossFire and SLI support gets a focus from the driver teams, it is often late in the process, nearly last in the list of technologies to address before launch.
With that in mind, we should all understand that the results we are going to show you might be indicative of the CrossFire scaling when Radeon RX Vega launches, but they very well might not be. I would treat the data we are presenting today as the “current state” of CrossFire for Vega.
Subject: Systems | June 12, 2017 - 07:00 PM | Sebastian Peak
Tagged: Threadripper, sli, ryzen, RX 580, PC, gtx 1080 ti, gaming, desktop, dell, crossfire, amd, alienware
Dell has revealed their new Alienware Area-51 gaming desktops featuring the latest high-performance AMD and Intel processors. We will begin with a look at the Alienware Area-51 Threadripper Edition, and Dell has an exclusive on pre-built systems using the new Ryzen Threadripper CPUs.
"Through 2017, Dell will be the exclusive OEM partner to deliver AMD Ryzen Threadripper pre-built systems to the market and the high-end 16-core will be factory-overclocked across all 16-cores and 32 logical threads. The Area-51 Threadripper Edition is ideal for customers who explore the world of mega-tasking, doing many system demanding tasks at the same time, and are looking for a complete, reliable solution from a trusted brand."
The systems are based on the X399 chipset and can be configured with either a 12-core/24-thread or a 16-core/32-thread AMD Ryzen Threadripper processor, which is liquid-cooled in all configurations. Standard memory configurations begin with quad-channel 2667 MHz DDR4 up to 64GB, with 2933 MHz HyperX memory available up to the same quad-channel 64GB. Graphics options begin with a choice between an NVIDIA GeForce GTX 1050 Ti or AMD Radeon RX 570, and max out at either dual GTX 1080 Ti or triple Radeon RX 580 cards.
Storage options include up to a 1TB M.2 PCIe SSD and 2TB 7200RPM SATA 6Gb/s HDD, and networking is handled by dual Killer E2500 Gigabit NICs and a choice of either Dell 1820 802.11ac 2x2 or Killer 1535 802.11ac 2x2 Wi-Fi. (A look at the other Area-51 desktop announcement provides a more complete look at the rest of the general specifications - with a few chipset-related differences.)
Features from Dell/Alienware:
- Designed for Megatasking, game streaming and more, the new Area 51 Threadripper Edition is ready for today’s most demanding PC gaming enthusiast and supports high performance configurations with a chipset that enables up to 64 PCIe Gen 3 lanes.
- All configurations come standard with unlocked AMD Ryzen Threadripper CPUs, factory-overclocked across all cores and liquid cooled with Alienware's most powerful liquid cooling unit to date.
- Iconic, high-quality Triad chassis, uniquely engineered and built to deliver exceptional airflow, thermal management, and user ergonomics for daily use and future upgrades.
- Supports NVIDIA SLI and AMD Crossfire graphics technology, with dual and triple GPU options
- Introduces M.2 storage options to Area-51.
- Built for gaming enthusiasts who want the absolute best gaming performance on a VR, 4K, or 8K display
- Alienware Command Center includes AlienFX, AlienAdrenaline, AlienFusion, Thermal and Overclocking Controls
The Alienware Area-51 Threadripper Edition will be available beginning July 27, and pricing information is not yet announced.
Subject: Graphics Cards | March 21, 2017 - 07:47 PM | Scott Michaud
Tagged: windows 10, vulkan, sli, multi-gpu, crossfire
Update (March 22nd @ 3:50pm EDT): And the Khronos Group has just responded to my follow-up questions. LDA has existed since Windows Vista, at the time for assisting with SLI and Crossfire support. Its implementation has changed in Windows 10, but that's not really relevant for Vulkan's multi-GPU support. To prove this, they showed LDA referenced in a Windows 8.1 MSDN post.
Vulkan's multi-GPU extensions can be used on Windows 7 and Windows 8.x. The exact process will vary from OS to OS, but the GPU vendor can implement these extensions if they choose, and LDA mode isn't exclusive to Windows 10.
Update (March 21st @ 11:55pm EDT): I came across a Microsoft Support page that discusses issues with LDA in Windows 7, so it seems like that functionality isn't limited to WDDM 2.0 and Windows 10. (Why have a support page otherwise?) Previously, I looked up an MSDN article that had it listed as a WDDM 2.0 feature, so I figured DSOGaming's assertion that it was introduced with WDDM 2.0 was correct.
As such, LDA might not require a GPU vendor's implementation at all. It'll probably be more clear when the Khronos Group responds to my earlier request, though.
That said, we're arguing over how much a GPU vendor needs to implement; either way, it will be possible to use the multi-GPU extensions in Windows 7 and Windows 8.x if the driver supports it.
Update (March 21st @ 7:30pm EDT): The Khronos Group has just released their statement. It's still a bit unclear, and I've submitted another request for clarification.
Specifically, the third statement:
If an implementation on Windows does decide to use LDA mode, it is NOT tied to Windows 10. LDA mode has been available on many versions of Windows, including Windows 7 and 8.X.
... doesn't elaborate on what is required for LDA mode on Windows outside of 10. (It could be Microsoft-supported, vendor-supported, or something else entirely.) I'll update again when that information is available. For now, it seems like the table below should actually look something like this:
|macOS||Apple doesn't allow the Vulkan API to ship in graphics drivers.
|Linux / etc.||✓||✓||✓|
... but we will update, again, should this be inaccurate.
Update (March 20th @ 3:50pm EDT): The Khronos Group has just responded that the other posts are incorrect. They haven't yet confirmed whether this post (which separates "device groups" from the more general "multi-GPU in Vulkan") is correct, though, because they're preparing an official statement. We'll update when we have more info.
Original Post Below (March 19th @ 9:36pm EDT)
A couple of days ago, some sites noticed a bullet point claiming that Windows-based GPU drivers will need WDDM in “linked display adapter” mode for “Native multi-GPU support for NVIDIA SLI and AMD Crossfire platforms” on Vulkan. This note came from an official Khronos Group slide deck published during the recent Game Developers Conference, GDC 2017. The concern is that “linked display adapter” mode is part of WDDM 2.0, which is exclusive to Windows 10.
This is being interpreted as “Vulkan does not support multi-GPU under Windows 7 or 8.x”.
I reached out to the Khronos Group for clarification, but I’m fairly sure I know what this does (and doesn’t) mean. Rather than starting with a written-out explanation in prose, I will summarize it in a table, below, outlining what is possible on each platform, and then elaborate on it.
|macOS||Apple doesn't allow the Vulkan API to ship in graphics drivers.
|Linux / etc.||✓||✓||✓|
So the good news is that it’s possible for a game developer to support multi-GPU (through what DirectX 12 would call MDA) on Windows 7 and Windows 8.x; the bad news is that no one might bother with the heavy lifting. Linked display adapters allow the developer to assume that all GPUs have roughly the same performance and the same amount of usable memory, and can be accessed through a single driver interface. On top of these assumptions, device groups also hide some annoying and tedious work inside the graphics driver, like producing a texture on one graphics card and quickly handing it to another GPU for rendering.
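The distinction above can be sketched as a toy model. This is purely illustrative Python (all class and method names are invented for the sketch, not real Vulkan or DirectX API calls), contrasting what the driver hides in a linked device group with what the application must do itself with unlinked adapters:

```python
# A toy model (illustrative names only, not real Vulkan code) of the two
# multi-GPU styles described above.

class LinkedDeviceGroup:
    """Device-group (LDA) style: the driver presents N similar GPUs as
    one logical device and hides cross-GPU copies from the application."""
    def __init__(self, gpu_count, vram_per_gpu_gb):
        self.gpu_count = gpu_count
        # Resources are mirrored on every GPU, so usable memory is
        # one GPU's worth, not the sum.
        self.usable_vram_gb = vram_per_gpu_gb

    def share_texture(self, texture, src, dst):
        # The driver performs the peer-to-peer copy; the app does nothing.
        return f"driver copied {texture} from GPU{src} to GPU{dst}"


class UnlinkedAdapters:
    """Explicit (MDA-style) multi-GPU: the application addresses each
    device separately, even mismatched ones, and does its own transfers."""
    def __init__(self, vram_per_gpu_gb):
        self.devices = list(vram_per_gpu_gb)  # e.g. a 4GB and an 8GB card

    def share_texture(self, texture, src, dst):
        # The app must stage the copy itself, one step per device.
        return [f"read {texture} back from device {src}",
                f"upload {texture} to device {dst}"]


linked = LinkedDeviceGroup(gpu_count=2, vram_per_gpu_gb=4)
explicit = UnlinkedAdapters(vram_per_gpu_gb=[4, 8])
print(linked.share_texture("shadow_map", 0, 1))    # one driver-managed step
print(explicit.share_texture("shadow_map", 0, 1))  # two app-managed steps
```

The extra steps in the unlinked case are exactly the "heavy lifting" that makes developers reluctant to support anything beyond device groups.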
Basically, if the developer will go through the trouble of supporting AMD + NVIDIA or discrete GPU + integrated GPU systems, then they can support Windows 7 / 8.x in multi-GPU as well. Otherwise? Your extra GPUs will be sitting out unless you switch to DirectX 11 or OpenGL (or you use it for video encoding or something else outside the game).
On the other hand, this limitation might pressure some developers to support unlinked multi-GPU configurations. There are some interesting possibilities, including post-processing and GPGPU tasks like AI visibility and physics, which might be ignored in titles whose developers were seduced by the simplicity of device groups. On the whole, device groups were apparently a high-priority request from game developers, and their inclusion will lead to more multi-GPU content. Developers who can justify doing it themselves, though, now have another reason to re-invent a few wheels.
Or... you could just use Linux. That works, too.
Again, we are still waiting on the Khronos Group to confirm this story. See the latest update, above.
Subject: Motherboards | February 26, 2017 - 01:29 AM | Tim Verry
Tagged: x370, sli, ryzen, PCI-E 3.0, gaming, crossfire, b350, amd
Computerbase.de recently published an update (translated) to an article outlining the differences between AMD’s AM4 motherboard chipsets. As it stands, the X370 and B350 chipsets are set to be the most popular chipsets for desktop PCs (with X300 catering to the small form factor crowd) especially among enthusiasts. One key differentiator between the two chipsets was initially support for multi-GPU configurations with X370. Now that motherboards have been revealed and are up for pre-order now, it turns out that the multi-GPU lines have been blurred a bit. As it stands, both B350 and X370 will support AMD’s CrossFire multi-GPU technology and the X370 alone will also have support for NVIDIA’s SLI technology.
The AM4 motherboards equipped with the B350 and X370 chipsets that feature two PCI-E x16 expansion slots will run as x8 in each slot in a dual GPU setup. (In a single GPU setup, the top slot can run at full x16 speeds.) Which is to say that the slots behave the same across both chipsets. Where the chipsets differ is in support for specific GPU technologies where NVIDIA’s SLI is locked to X370. TechPowerUp speculates that the decision to lock SLI to its top-end chipset is due, at least in part, to licensing costs. This is not a bad thing as B350 was originally not going to support any dual x16 slot multi-GPU configurations, but now motherboard manufacturers are being allowed to enable it by including a second slot and AMD will reportedly permit CrossFire usage (which costs AMD nothing in licensing). Meanwhile the most expensive X370 chipset will support SLI for those serious gamers that demand and can afford it. Had B350 supported SLI and carried the SLI branding, they likely would have been ever so slightly more expensive than they are now. Of course, DirectX 12's multi-adapter will work on either chipset so long as the game supports it.
|   | X370 | B350 | A320 | X300 / B300 / A300 | Ryzen CPU | Bristol Ridge APU |
|---|------|------|------|--------------------|-----------|-------------------|
| PCI-E 3.0 | 0 | 0 | 0 | 4 | 20 (18 w/ 2 SATA) | 10 |
| USB 3.1 Gen 2 | 2 | 2 | 1 | 1 | 0 | 0 |
| USB 3.1 Gen 1 | 6 | 2 | 2 | 2 | 4 | 4 |
| SATA 6 Gbps | 4 | 2 | 2 | 2 | 2 | 2 |
| Overclocking Capable? | Yes | Yes | No | Yes (X300 only) | | |
Multi-GPU is not the only differentiator though. Moving up from B350 to X370 will get you 6 USB 3.1 Gen 1 (USB 3.0) ports versus 2 on B350/A320/X300, two more PCI-E 2.0 lanes (8 versus 6), and two more SATA ports (6 total usable; 4 versus 2 coming from the chipset).
Note that X370, B350, and X300 all support CPU overclocking. Hopefully this helps you when trying to decide which AM4 motherboard to pair with your Ryzen CPU once the independent benchmarks are out. In short, if you must have SLI you are stuck ponying up for X370, but if you plan to only ever run a single GPU, or tend to stick with AMD GPUs and CrossFire, B350 gets you most of the way to an X370 for a lot less money! You do not even have to give up any USB 3.1 Gen 2 ports, though you do limit your SATA drive options (it’s all about M.2 these days anyway, heh).
For those curious, looking around on Newegg I notice that most of the B350 motherboards have that second PCI-E 3.0 x16 slot and CrossFire support listed in their specifications and seem to average around $99. Meanwhile X370 starts at $140 and rockets up from there (up to $299!) depending on how much bling you are looking for!
Are you going for a motherboard with the B350 or X370 chipset? Will you be rocking multiple graphics cards?
- AMD Ryzen Pre-order Starts Today, Specs and Performance Revealed
- AMD Launching Ryzen 5 Six Core Processors Soon (Q2 2017)
- AMD Details AM4 Chipsets and Upcoming Motherboards
- AMD Officially Launches Bristol Ridge Processors And Zen-Ready AM4 Platform
- Biostar Launches X370GT7 Flagship Motherboard For Ryzen CPUs
- Gigabyte is Ryzen up to the challenge of their rivals
- Mid-Range Gigabyte Socket AM4 (B350 Chipset) Micro ATX Motherboard Pictured
- CES 2017: MSI Shows Off X370 XPower Gaming Titanium AM4 Motherboard
Why Two 4GB GPUs Isn't Necessarily 8GB
We're trying something new here at PC Perspective. Some topics are fairly difficult to explain cleanly without accompanying images. We also like to go fairly deep into specific topics, so we're hoping that we can provide educational cartoons that explain these issues.
This pilot episode is about load-balancing and memory management in multi-GPU configurations. There seems to be a lot of confusion around what was (and was not) possible with DirectX 11 and OpenGL, and even more confusion about what DirectX 12, Mantle, and Vulkan allow developers to do. It highlights three different load-balancing algorithms, and even briefly mentions what LucidLogix was attempting to accomplish almost ten years ago.
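As one concrete example of such a load-balancing algorithm, here is a toy sketch of alternate-frame rendering (AFR), the most common multi-GPU scheme (details are simplified for illustration, not taken from any vendor's driver). It also shows why two 4GB cards don't behave like a single 8GB card:

```python
# Toy sketch of alternate-frame rendering (AFR): whole frames alternate
# between GPUs, so each GPU must hold its own full copy of the scene's
# assets. Simplified for illustration; real drivers are far more involved.

def assign_frames_afr(num_frames, num_gpus):
    """Return which GPU renders each frame under AFR (round-robin)."""
    return [frame % num_gpus for frame in range(num_frames)]

def usable_memory_afr(vram_per_gpu_gb):
    """Assets are mirrored on every GPU, so memory does not pool."""
    return vram_per_gpu_gb  # two 4GB cards still leave ~4GB for assets

print(assign_frames_afr(num_frames=6, num_gpus=2))  # [0, 1, 0, 1, 0, 1]
print(usable_memory_afr(4))                         # 4, not 8
```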
If you like it, and want to see more, please share and support us on Patreon. We're putting this out not knowing if it's popular enough to be sustainable. The best way to see more of this is to share!
Subject: Graphics Cards | May 16, 2016 - 03:52 PM | Jeremy Hellstrom
Tagged: amd, r9 380x, crossfire
A pair of R9 380Xs will cost you around $500, a bit more than $100 less than a single GTX 980 Ti and on par with, or a little less expensive than, a single GTX 980. You have likely seen these cards compared, but how often have you seen them pitted against a pair of GTX 960s, which cost a little bit less than two 380X cards? [H]ard|OCP decided it was worth investigating, perhaps for those who currently have a single one of these cards and are considering a second if the price is right. The results are very tight; overall the two setups performed very similarly, with some games favouring AMD and others NVIDIA. Check out the full review here.
"We are evaluating two Radeon R9 380X video cards in CrossFire against two GeForce GTX 960 video cards in a SLI arrangement. We will overclock each setup to its highest, to experience the full gaming benefit each configuration has to offer. Additionally we will compare a Radeon R9 380 CrossFire setup to help determine the best value."
Here are some more Graphics Card articles from around the web:
- With Pascal Ahead, A 16-Way Recap From NVIDIA's 9800 GTX To Maxwell @ Phoronix
- Maxwell Quadro For All: NVIDIA Quadro M2000 Workstation Graphics Card Review @ Techgage
- Desktop Graphics Card Comparison Guide @ TechARP
- PCI Express 3.0 vs. 2.0: Is There a Gaming Performance Gain? @ Hardware Secrets
The Dual-Fiji Card Finally Arrives
This weekend, leaks on both WCCFTech and VideoCardz.com revealed the information about the pending release of AMD’s dual-GPU giant, the Radeon Pro Duo. While no one at PC Perspective has been briefed on the product officially, all of the interesting data surrounding it is clearly outlined in the slides on those websites, minus some independent benchmark testing that we are hoping to get to next week. Based on the reports from both sites, the Radeon Pro Duo will be released on April 26th.
AMD actually revealed the product and branding for the Radeon Pro Duo back in March, during its live streamed Capsaicin event surrounding GDC. At that point we were given the following information:
- Dual Fiji XT GPUs
- 8GB of total HBM memory
- 4x DisplayPort (this has since been modified)
- 16 TFLOPS of compute
- $1499 price tag
The design of the card follows the same industrial design as the reference designs of the Radeon Fury X, and integrates a dual-pump cooler and external fan/radiator to keep both GPUs running cool.
Based on the slides leaked out today, AMD has revised the Radeon Pro Duo design to include a set of three DisplayPort connections and one HDMI port. This was a necessary change, as the Oculus Rift requires an HDMI port to work; only the HTC Vive has built-in support for a DisplayPort connection, and even in that case you would need a full-size to mini-DisplayPort cable.
The 8GB of HBM (high bandwidth memory) on the card is split between the two Fiji XT GPUs, just like other multi-GPU options on the market. The 350 watt power draw mark is exceptionally high, exceeded only by AMD’s previous dual-GPU beast, the Radeon R9 295X2, which used 500+ watts, and the NVIDIA GeForce GTX Titan Z, which draws 375 watts!
Here is the specification breakdown of the Radeon Pro Duo. The card has 8192 total stream processors and 128 Compute Units, split evenly between the two GPUs. You are getting two full Fiji XT GPUs in this card, an impressive feat made possible in part by the use of High Bandwidth Memory and its smaller physical footprint.
|   | Radeon Pro Duo | R9 Nano | R9 Fury | R9 Fury X | GTX 980 Ti | TITAN X | GTX 980 | R9 290X |
|---|----------------|---------|---------|-----------|------------|---------|---------|---------|
| GPU | Fiji XT x 2 | Fiji XT | Fiji Pro | Fiji XT | GM200 | GM200 | GM204 | Hawaii XT |
| Rated Clock | up to 1000 MHz | up to 1000 MHz | 1000 MHz | 1050 MHz | 1000 MHz | 1000 MHz | 1126 MHz | 1000 MHz |
| Memory | 8GB (4GB x 2) | 4GB | 4GB | 4GB | 6GB | 12GB | 4GB | 4GB |
| Memory Clock | 500 MHz | 500 MHz | 500 MHz | 500 MHz | 7000 MHz | 7000 MHz | 7000 MHz | 5000 MHz |
| Memory Interface | 4096-bit (HBM) x 2 | 4096-bit (HBM) | 4096-bit (HBM) | 4096-bit (HBM) | 384-bit | 384-bit | 256-bit | 512-bit |
| Memory Bandwidth | 1024 GB/s | 512 GB/s | 512 GB/s | 512 GB/s | 336 GB/s | 336 GB/s | 224 GB/s | 320 GB/s |
| TDP | 350 watts | 175 watts | 275 watts | 275 watts | 250 watts | 250 watts | 165 watts | 290 watts |
| Peak Compute | 16.38 TFLOPS | 8.19 TFLOPS | 7.20 TFLOPS | 8.60 TFLOPS | 5.63 TFLOPS | 6.14 TFLOPS | 4.61 TFLOPS | 5.63 TFLOPS |
| Transistor Count | 8.9B x 2 | 8.9B | 8.9B | 8.9B | 8.0B | 8.0B | 5.2B | 6.2B |
The Radeon Pro Duo has a rated clock speed of up to 1000 MHz. That’s the same clock speed as the R9 Fury and the rated “up to” frequency on the R9 Nano. It’s worth noting that we did see a handful of instances where the R9 Nano’s power limiting capability resulted in some extremely variable clock speeds in practice. AMD recently added a feature to its Crimson driver to disable power metering on the Nano, at the expense of more power draw, and I would assume the same option would work for the Pro Duo.
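The compute and bandwidth figures in the table fall out of the standard back-of-the-envelope formulas (peak FP32 compute is shader count times 2 FLOPs per clock for a fused multiply-add; bandwidth is bus width times effective data rate). A quick sanity check, noting that HBM's listed 500 MHz clock is double-data-rate, so 1000 MT/s effective:

```python
# Back-of-the-envelope checks for the spec table above, using the
# standard peak-compute and memory-bandwidth formulas.

def peak_tflops(shaders, clock_mhz):
    """Peak FP32 compute: each shader issues one FMA (2 FLOPs) per clock."""
    return shaders * 2 * clock_mhz * 1e6 / 1e12

def bandwidth_gb_s(bus_width_bits, transfers_per_second):
    """Memory bandwidth: bus width in bytes times effective data rate."""
    return bus_width_bits / 8 * transfers_per_second / 1e9

# Radeon Pro Duo: 8192 shaders total at up to 1000 MHz.
print(round(peak_tflops(8192, 1000), 2))    # 16.38 TFLOPS
# One Fiji XT with HBM: 4096-bit bus, 500 MHz DDR = 1000 MT/s effective.
print(round(bandwidth_gb_s(4096, 1000e6)))  # 512 GB/s per GPU
# GTX 980 Ti for comparison: 384-bit bus at 7000 MT/s effective.
print(round(bandwidth_gb_s(384, 7000e6)))   # 336 GB/s
```

The same arithmetic reproduces the single-GPU entries too (e.g. the R9 Fury X: 4096 shaders at 1050 MHz gives 8.60 TFLOPS), which is reassuring given the leaked nature of the slides.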
Subject: Graphics Cards | March 15, 2016 - 02:02 AM | Ryan Shrout
Tagged: vulkan, raja koduri, Polaris, HBM2, hbm, dx12, crossfire, amd
After hosting the AMD Capsaicin event at GDC tonight, the SVP and Chief Architect of the Radeon Technologies Group Raja Koduri sat down with me to talk about the event and offered up some additional details on the Radeon Pro Duo, upcoming Polaris GPUs and more. The video below has the full interview but there are several highlights that stand out as noteworthy.
- Raja claimed that one of the reasons to launch the dual-Fiji card as the Radeon Pro Duo for developers, rather than a pure Radeon aimed at gamers, was to “get past CrossFire.” He believes we are at an inflection point with APIs: where previously you would abstract two GPUs to appear as a single GPU to the game engine, with DX12 and Vulkan the problem is more complex than that, as we have seen in testing with early titles like Ashes of the Singularity.
But with the dual-Fiji product mostly developed and prepared, AMD was able to find a market between the enthusiast and the creator to target, and thus the Radeon Pro branding was born.
Raja further expands on it, telling me that in order to make multi-GPU useful and productive for the next generation of APIs, getting multi-GPU hardware solutions in the hands of developers is crucial. He admitted that CrossFire in the past has had performance scaling concerns and compatibility issues, and that getting multi-GPU correct from the ground floor here is crucial.
- With changes in Moore’s Law and the realities of process technology and processor construction, multi-GPU is going to be more important for the entire product stack, not just the extreme enthusiast crowd. Why? Because realities are dictating that GPU vendors build smaller, more power efficient GPUs, and to scale performance overall, multi-GPU solutions need to be efficient and plentiful. The “economics of the smaller die” are much better for AMD (and we assume NVIDIA) and by 2017-2019, this is the reality and will be how graphics performance will scale.
Getting the software ecosystem going now is going to be crucial to ease into that standard.
- The naming scheme of Polaris (10, 11…) follows no equation; it’s just “a sequence of numbers,” and we should only expect it to increase going forward. The next Polaris chip will be numbered higher than 11, that’s the secret he gave us.
There have been concerns that AMD was only going to go for the mainstream gaming market with Polaris but Raja promised me and our readers that we “would be really really pleased.” We expect to see Polaris-based GPUs across the entire performance stack.
- AMD’s primary goal here is to get many millions of gamers VR-ready, though getting the enthusiasts “that last millisecond” is still a goal and it will happen from Radeon.
- No solid date on Polaris parts at all – I tried! (Other than that the launches start in June.) Though Raja did promise that, after tonight, he will not have another alcoholic beverage until the launch of Polaris. Serious commitment!
- Curious about the HBM2 inclusion in Vega on the roadmap and what that means for Polaris? Though he didn’t say it outright, it appears that Polaris will be using HBM1, leaving me to wonder about the memory capacity limitations inherent in that. Has AMD found a way to get past the 4GB barrier? We are trying to figure that out for sure.
Why is Polaris going to use HBM1? Raja pointed to the extreme cost and expense of building the HBM ecosystem and prepping the pipeline for the new memory technology as the culprit, and AMD obviously wants to recoup some of that cost with another generation of GPU usage.
Speaking with Raja is always interesting, and the confidence and knowledge he showcases are still what give me assurance that the Radeon Technologies Group is headed in the correct direction. This is going to be a very interesting year for graphics, PC gaming, and GPU technologies, as showcased throughout the Capsaicin event, and I think everyone should be looking forward to it.
Subject: Motherboards | November 3, 2015 - 04:00 AM | Sebastian Peak
Tagged: Z170A SLI PLUS, X99 SLI PLUS, sli, msi, motherboard, crossfire, black PCB
MSI has announced their third PRO Series motherboard, the all-black Z170A SLI PLUS.
"MSI, leading in motherboard design, completes the PRO Series motherboard line-up with the launch of the all black Z170A SLI PLUS motherboard. Inheriting DNA from the critically acclaimed X99A SLI PLUS motherboard, the Z170A SLI PLUS is powerful, packed with features and styled with class."
While not lacking in stealthy looks from the black PCB to the matching RAM slots and heatsinks, the specifications of this new SLI PLUS board place it firmly into the premium category.
Features include Intel Gigabit LAN, MSI’s DDR4 Boost technology (which isolates memory traces on the board), Turbo M.2 (and U.2) with speeds up to 32 Gb/s with NVMe SSDs, and the Steel Armor PCI Express slots. And while SLI is right there in the title, there is also support for AMD Crossfire with those super-strong PCIe slots.
The board is fabricated with MSI’s Military Class 4 components, including super ferrite chokes and all solid capacitors, and also offers an “EZ Debug” LED and overvoltage protection. In addition to support for the fastest internal storage standards (M.2, U.2, and SATA Express), the Z170A SLI PLUS offers full USB 3.1 Gen2 support for transfers up to 10 Gb/s with compatible devices.
- Supports 6th Gen Intel Core / Pentium / Celeron processors for LGA 1151 socket
- Supports DDR4-3600 Memory (OC)
- DDR4 Boost
- USB 3.1 Gen2
- Turbo M.2 32Gb/s + Turbo U.2 ready + USB 3.1 Gen2 Type-C + SATA 6Gb/s
- Steel Armor PCI-E slots; Supports NVIDIA SLI and AMD Crossfire
- OC Genie 4
- Click BIOS 5
- Audio Boost
- Military Class 4
- EZ Debug LED
- Overvoltage Protection
- 4K UHD Support
- Windows 10 Ready
Pricing and availability were not specified with this morning’s announcement.