Subject: Processors
Manufacturer: AMD

Just a little taste

In a surprise move, and with little indication as to why now, AMD has decided to reveal some of the most interesting details surrounding Threadripper and Ryzen 3, both due out in just a few short weeks. AMD CEO Lisa Su and CVP of Marketing John Taylor (along with guest star Robert Hallock) appear in a video launching on AMD’s YouTube channel today to divulge the naming, clock speeds, and pricing for the new flagship HEDT product line under the Ryzen brand.

people.jpg

We already know a lot about Threadripper, AMD’s answer to Intel’s X299/X99 high-end desktop platforms, including that it would be coming this summer, would offer up to 16 cores and 32 threads of compute, and would include 64 lanes of PCI Express 3.0 for a massive amount of connectivity for the prosumer.

Now we know that there will be two models launching and available in early August: the Ryzen Threadripper 1920X and the Ryzen Threadripper 1950X.

|   | Core i9-7980XE | Core i9-7960X | Core i9-7940X | Core i9-7920X | Core i9-7900X | Core i7-7820X | Core i7-7800X | Threadripper 1950X | Threadripper 1920X |
|---|---|---|---|---|---|---|---|---|---|
| Architecture | Skylake-X | Skylake-X | Skylake-X | Skylake-X | Skylake-X | Skylake-X | Skylake-X | Zen | Zen |
| Process Tech | 14nm+ | 14nm+ | 14nm+ | 14nm+ | 14nm+ | 14nm+ | 14nm+ | 14nm | 14nm |
| Cores/Threads | 18/36 | 16/32 | 14/28 | 12/24 | 10/20 | 8/16 | 6/12 | 16/32 | 12/24 |
| Base Clock | ? | ? | ? | ? | 3.3 GHz | 3.6 GHz | 3.5 GHz | 3.4 GHz | 3.5 GHz |
| Turbo Boost 2.0 | ? | ? | ? | ? | 4.3 GHz | 4.3 GHz | 4.0 GHz | 4.0 GHz | 4.0 GHz |
| Turbo Boost Max 3.0 | ? | ? | ? | ? | 4.5 GHz | 4.5 GHz | N/A | N/A | N/A |
| Cache | 16.5MB (?) | 16.5MB (?) | 16.5MB (?) | 16.5MB (?) | 13.75MB | 11MB | 8.25MB | 40MB | ? |
| Memory Support | ? | ? | ? | ? | DDR4-2666 Quad Channel | DDR4-2666 Quad Channel | DDR4-2666 Quad Channel | DDR4-2666 Quad Channel | DDR4-2666 Quad Channel |
| PCIe Lanes | ? | ? | ? | ? | 44 | 28 | 28 | 64 | 64 |
| TDP | 165 watts (?) | 165 watts (?) | 165 watts (?) | 165 watts (?) | 140 watts | 140 watts | 140 watts | 180 watts | 180 watts |
| Socket | 2066 | 2066 | 2066 | 2066 | 2066 | 2066 | 2066 | TR4 | TR4 |
| Price | $1999 | $1699 | $1399 | $1199 | $999 | $599 | $389 | $999 | $799 |

 

|   | Threadripper 1950X | Threadripper 1920X | Ryzen 7 1800X | Ryzen 7 1700X | Ryzen 7 1700 | Ryzen 5 1600X | Ryzen 5 1600 | Ryzen 5 1500X | Ryzen 5 1400 |
|---|---|---|---|---|---|---|---|---|---|
| Architecture | Zen | Zen | Zen | Zen | Zen | Zen | Zen | Zen | Zen |
| Process Tech | 14nm | 14nm | 14nm | 14nm | 14nm | 14nm | 14nm | 14nm | 14nm |
| Cores/Threads | 16/32 | 12/24 | 8/16 | 8/16 | 8/16 | 6/12 | 6/12 | 4/8 | 4/8 |
| Base Clock | 3.4 GHz | 3.5 GHz | 3.6 GHz | 3.4 GHz | 3.0 GHz | 3.6 GHz | 3.2 GHz | 3.5 GHz | 3.2 GHz |
| Turbo/Boost Clock | 4.0 GHz | 4.0 GHz | 4.0 GHz | 3.8 GHz | 3.7 GHz | 4.0 GHz | 3.6 GHz | 3.7 GHz | 3.4 GHz |
| Cache | 40MB | ? | 20MB | 20MB | 20MB | 16MB | 16MB | 16MB | 8MB |
| Memory Support | DDR4-2666 Quad Channel | DDR4-2666 Quad Channel | DDR4-2400 Dual Channel | DDR4-2400 Dual Channel | DDR4-2400 Dual Channel | DDR4-2400 Dual Channel | DDR4-2400 Dual Channel | DDR4-2400 Dual Channel | DDR4-2400 Dual Channel |
| PCIe Lanes | 64 | 64 | 20 | 20 | 20 | 20 | 20 | 20 | 20 |
| TDP | 180 watts | 180 watts | 95 watts | 95 watts | 65 watts | 95 watts | 65 watts | 65 watts | 65 watts |
| Socket | TR4 | TR4 | AM4 | AM4 | AM4 | AM4 | AM4 | AM4 | AM4 |
| Price | $999 | $799 | $499 | $399 | $329 | $249 | $219 | $189 | $169 |

Continue reading about the announcement of the Ryzen Threadripper and Ryzen 3 processors!

As cheap as Mortar? MSI's new B350 motherboard for Ryzen

Subject: Motherboards | July 10, 2017 - 03:38 PM |
Tagged: amd, b350, B350 Mortar, msi, AM4, mATX

MSI's B350 Mortar comes in the model you see below as well as an Arctic version if you prefer a different colour scheme. AMD's B350 chipset carries a lower cost than the X370 series but retains most of the features enthusiasts delight in, such as M.2 support, DDR4-3200 memory support, a USB 3.1 Gen1 Type-C port, and a Realtek ALC892 HD audio codec. Indeed, about the only thing you lose is the ability to run multiple GPUs, which is not exactly a common need on an mATX build. Modders-Inc were taken with this low-cost motherboard, especially the amount of customization available in the UEFI to adjust your fan speeds ... and yes, it has your RGBs.

B350mortar_07.jpg

"AMD's B350 chipset is challenging Intel's market dominance in a different subset that the chip giant did not expect: affordability. If AMD's Ryzen product releases sound too familiar with that of Intel's line, that is because it is deliberate. It is basically an aggressive move by AMD, challenging Intel directly that they can take over the naming scheme and do …"

Here are some more Motherboard articles from around the web:

Motherboards

 

Source: Modders Inc
Manufacturer: Sapphire

Overview

There has been a lot of news lately about the release of cryptocurrency-specific graphics cards from both NVIDIA and AMD add-in board partners. While we covered the current cryptomining phenomenon in an earlier article, today we are taking a look at one of these cards geared specifically toward miners.

IMG_4681.JPG

It's worth noting that I purchased this card myself from Newegg, and neither AMD nor Sapphire was involved in this article. I saw this card pop up on Newegg a few days ago, and my curiosity got the best of me.

There has been a lot of speculation, and little official information from vendors, about what these mining cards will actually entail.

From the outward appearance, it is virtually impossible to distinguish this "new" RX 470 from the previous Sapphire Nitro+ RX 470, besides the lack of additional display outputs beyond the DVI connection. Even the branding and labels on the card identify it as a Nitro+ RX 470.

In order to test the hashing rates of this GPU, we are using Claymore's Dual Miner Version 9.6 (mining Ethereum only) against a reference design RX 470, also from Sapphire.

IMG_4684.JPG

On the reference RX 470 out of the box, we hit rates of about 21.8 MH/s while mining Ethereum. 

Once we moved to the Sapphire mining card, we jumped to at least 24 MH/s right from the start.
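For a rough sense of scale, that out-of-the-box difference works out to roughly a 10% uplift. Here is a minimal sketch of the arithmetic, using only the two hash rates quoted above; any profitability math on top of this would need current network difficulty and power draw, which we are not assuming here.

```python
# Out-of-the-box Ethereum hash rates measured above (Claymore's Dual Miner 9.6).
reference_rx470_mhs = 21.8   # reference-design RX 470
mining_rx470_mhs = 24.0      # Sapphire RX 470 mining card, lower bound

# Relative uplift of the mining card over the reference design.
uplift = mining_rx470_mhs / reference_rx470_mhs - 1.0
print(f"Mining card uplift: {uplift:.1%}")   # ~10.1%
```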

Continue reading about the Sapphire Radeon RX 470 Mining Edition!

Manufacturer: AKiTiO

A long time coming

External video cards for laptops have long been a dream of many PC enthusiasts, and for good reason. It’s compelling to have a thin-and-light notebook with great battery life for things like meetings or class, with the ability to plug it into a dock at home and enjoy your favorite PC games.

Many times we have been promised that external GPUs for notebooks would become a viable option. Over the years there have been many commercial solutions involving both industry-standard protocols like ExpressCard, as well as proprietary connections that allow you to externally connect PCIe devices. Enterprising hackers have also tried their hand at this for many years, cobbling together interesting solutions using mPCIe and M.2 ports on their notebooks that were meant for other devices.

With the introduction of Intel’s Thunderbolt standard in 2011, there was a hope that we would finally achieve external graphics nirvana. A modern, Intel-backed protocol promising PCIe x4 speeds (PCIe 2.0 at that point) sounded like it would be ideal for connecting GPUs to notebooks, and in some ways it was. Once again the external graphics communities managed to get it to work through the use of enclosures meant to connect other non-GPU PCIe devices such as RAID and video capture cards to systems. However, software support was still a limiting factor. You were required to use an external monitor to display your video, and it still felt like you were just riding the line between usability and a total hack. It felt like we were never going to get true universal support for external GPUs on notebooks.
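To put those link widths in perspective, here is a back-of-the-envelope, one-direction bandwidth comparison between the PCIe 2.0 x4 tunnel of the original Thunderbolt and the PCIe 3.0 x4 link that Thunderbolt 3 exposes. The transfer rates and encoding overheads are the standard published PCIe figures; real-world throughput over a Thunderbolt controller will be lower.

```python
# Rough usable bandwidth (one direction) of the PCIe links discussed above.
def pcie_bandwidth_gb_per_s(transfer_rate_gt_s, encoding_efficiency, lanes):
    """Usable bandwidth in GB/s for one direction of a PCIe link."""
    return transfer_rate_gt_s * encoding_efficiency * lanes / 8  # Gb -> GB

tb1_pcie2_x4 = pcie_bandwidth_gb_per_s(5.0, 8 / 10, 4)      # original Thunderbolt era
tb3_pcie3_x4 = pcie_bandwidth_gb_per_s(8.0, 128 / 130, 4)   # Thunderbolt 3 era

print(f"PCIe 2.0 x4: ~{tb1_pcie2_x4:.1f} GB/s")   # ~2.0 GB/s
print(f"PCIe 3.0 x4: ~{tb3_pcie3_x4:.1f} GB/s")   # ~3.9 GB/s
```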

Then, seemingly out of nowhere, Intel decided to promote native support for external GPUs as a priority when it introduced Thunderbolt 3. Fast forward, and we've already seen a much larger adoption of Thunderbolt 3 on PC notebooks than we ever did with the previous Thunderbolt implementations. Taking all of this into account, we figured it was time to finally dip our toes into the eGPU market.

For our testing, we decided on the AKiTio Node for several reasons. First, at around $300, it's by far the lowest cost enclosure built to support GPUs. Additionally, it seems to be one of the most compatible devices currently on the market according to the very helpful comparison chart over at eGPU.io. The eGPU site is a wonderful resource for everything external GPU, over any interface possible, and I would highly recommend heading over there to do some reading if you are interested in trying out an eGPU for yourself.

The Node unit itself is a very utilitarian design. Essentially you get a folded sheet metal box with a Thunderbolt controller and 400W SFX power supply inside.

DSC03490.JPG

In order to install a GPU into the Node, you must first unscrew the enclosure from the back and slide the outer shell off of the device.

DSC03495.JPG

Once inside, we can see that there is ample room for any graphics card you might want to install in this enclosure. In fact, it seems a little too large for any of the GPUs we installed, including GTX 1080 Ti models. Here, you can see a more reasonable RX 570 installed.

Beyond opening up the enclosure to install a GPU, there is very little configuration required. My unit required a firmware update, but that was easily applied with the tools from the AKiTio site.

From here, I simply connected the Node to a ThinkPad X1, installed the NVIDIA drivers for our GTX 1080 Ti, and everything seemed to work — including using the 1080 Ti with the integrated notebook display and no external monitor!
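As a quick sanity check before benchmarking, it is worth confirming that the card in the Node actually enumerates. A minimal sketch, assuming the NVIDIA driver is already installed; the nvidia-smi query below is a generic driver query, not an AKiTiO-specific tool.

```python
# List the GPUs the NVIDIA driver can see; a correctly enumerated eGPU shows
# up here alongside any internal GPU.
import subprocess

result = subprocess.run(
    ["nvidia-smi", "--query-gpu=index,name,driver_version", "--format=csv,noheader"],
    capture_output=True, text=True, check=True,
)
for line in result.stdout.strip().splitlines():
    print(line)   # e.g. "0, GeForce GTX 1080 Ti, ..."
```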

Now that we've got the Node working, let's take a look at some performance numbers.

Continue reading our look at external graphics with the Thunderbolt 3 AKiTiO Node!

Manufacturer: AMD

Two Vegas...ha ha ha

When the preorders for the Radeon Vega Frontier Edition went up last week, I made the decision to place orders in a few different locations to make sure we got it in as early as possible. Well, as it turned out, we actually had the cards show up very quickly…from two different locations.

dualvega.jpg

So, what is a person to do if TWO of the newest, most coveted GPUs show up on their doorstep? After you do the first, full review of the single GPU iteration, you plug those both into your system and do some multi-GPU CrossFire testing!

There of course needs to be some discussion up front about this testing and our write-up. If you read my first review of the Vega Frontier Edition, you will clearly note my stance on the ideas that “this is not a gaming card” and that “the drivers aren’t ready.” Essentially, I said these potential excuses for performance were a distraction and unwarranted based on the current state of Vega development and the proximity of the consumer iteration, Radeon RX Vega.

IMG_4688.JPG

But for multi-GPU, it’s a different story. Both competitors in the GPU space will tell you that developing drivers for CrossFire and SLI is incredibly difficult. Much more than simply splitting the work across different processors, multi-GPU requires extra attention to specific games, game engines, and effects rendering that are not required in single GPU environments. Add to that the fact that the market size for CrossFire and SLI has been shrinking, from an already small state, and you can see why multi-GPU is going to get less attention from AMD here.

Even more, when CrossFire and SLI support gets a focus from the driver teams, it is often late in the process, nearly last in the list of technologies to address before launch.

With that in mind, we all should understand that the results we are going to show you might be indicative of the CrossFire scaling when Radeon RX Vega launches, but they very well might not be. I would look at the data we are presenting today as the “current state” of CrossFire for Vega.
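When the numbers do come up on the following pages, the scaling math itself is simple. A minimal sketch with placeholder FPS values, not our measured results:

```python
# CrossFire scaling relative to a single card; 100% would be perfect 2x scaling.
def crossfire_scaling_pct(single_gpu_fps, dual_gpu_fps):
    return (dual_gpu_fps / single_gpu_fps - 1.0) * 100

# Hypothetical example: 60 FPS on one Vega FE, 99 FPS on two.
print(f"{crossfire_scaling_pct(60, 99):.0f}% scaling")   # 65% scaling
```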

Continue reading our look at a pair of Vega Frontier Edition cards in CrossFire!

Manufacturer: NVIDIA

Performance not two-die four.

When designing an integrated circuit, you are attempting to fit as much complexity as possible within your budget of space, power, and so forth. One harsh limitation for GPUs is that, while your workloads could theoretically benefit from more and more processing units, the number of usable chips from a batch shrinks as designs grow, and the reticle limit of a fab’s manufacturing node is basically a brick wall.

What’s one way around it? Split your design across multiple dies!

nvidia-2017-multidie.png

NVIDIA published a research paper discussing just that. In their diagram, they show two examples. In the first, the GPU is a single, typical die surrounded by four stacks of HBM, like GP100; the second configuration breaks the GPU into five dies, four GPU modules and an I/O controller, with each GPU module attached to a pair of HBM stacks.

NVIDIA ran simulations to determine how this chip would perform, and, in various workloads, they found that it out-performed the largest possible single-chip GPU by about 45.5%. They scaled up the single-chip design until it had the same number of compute units as the multi-die design, even though this wouldn't work in the real world because no fab could actually lithograph it. Regardless, that hypothetical, impossible design was only ~10% faster than the actually-possible multi-chip one, showing that the overhead of splitting the design is only around that much, according to their simulation. The multi-die design was also faster than the multi-card equivalent by 26.8%.
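To keep those relative numbers straight, here is how I read them once everything is normalized to the largest buildable single die; the arrangement below is my interpretation of the paper's simulated averages, not additional data.

```python
# Relative performance, normalized to the biggest GPU a fab could actually build.
largest_buildable = 1.0
multi_die = largest_buildable * 1.455    # paper: +45.5% over the largest buildable die
impossible_mono = multi_die * 1.10       # hypothetical monolithic, ~10% faster than multi-die
multi_card = multi_die / 1.268           # multi-die is 26.8% faster than the multi-card setup

for name, perf in [("largest buildable die", largest_buildable),
                   ("multi-die package", multi_die),
                   ("hypothetical monolithic", impossible_mono),
                   ("multi-card setup", multi_card)]:
    print(f"{name:24s} {perf:.2f}x")
```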

While NVIDIA’s simulations, run on 48 different benchmarks, have accounted for this, I still can’t visualize how this would work in an automated way. I don’t know how the design would automatically account for fetching data that’s associated with other GPU modules, as this would probably be a huge stall. That said, they spent quite a bit of time discussing how much bandwidth is required within the package, and figures of 768 GB/s to 3TB/s were mentioned, so it’s possible that it’s just the same tricks as fetching from global memory. The paper touches on the topic several times, but I didn’t really see anything explicit about what they were doing.

amd-2017-epyc-breakdown.jpg

If you’ve been following the site over the last couple of months, you’ll note that this is basically the same as AMD is doing with Threadripper and EPYC. The main difference is that CPU cores are isolated, so sharing data between them is explicit. In fact, when that product was announced, I thought, “Huh, that would be cool for GPUs. I wonder if it’s possible, or if it would just end up being Crossfire / SLI.”

Apparently not? It should be possible?

I should note that I doubt this will be relevant for consumers. The GPU is the most expensive part of a graphics card. While the thought of four GP102-level chips working together sounds great for 4K gaming (which is 4x1080p in resolution), quadrupling the expensive part sounds like a giant price tag. That said, the market for GP100 (and the upcoming GV100) would pay five-plus digits for the absolute fastest compute device for deep learning, scientific research, and so forth.

The only way I could see this working for gamers is if NVIDIA finds the sweet-spot for performance-to-yield (for a given node and time) and they scale their product stack with multiples of that. In that case, it might be cost-advantageous to hit some level of performance, versus trying to do it with a single, giant chip.

This is just my speculation, however. It’ll be interesting to see where this goes, whenever it does.

AMD's market share is Ryzen

Subject: General Tech | July 4, 2017 - 12:40 PM |
Tagged: passmark, amd, Intel, ryzen, market share

The designers of the PassMark benchmarking software have noticed a trend in the past year: a surge in the number of AMD processors being tested. The jump is quite impressive, and even if it does not directly represent sales it certainly suggests that AMD's recent launch of Ryzen has been attracting enthusiasts. At the beginning of the year AMD accounted for just over 18% of the benchmarks being run, but as of now over a quarter of all benchmarks are being run on AMD processors. With Threadripper on the horizon this number could grow, though perhaps not as dramatically as with the launch of the lower-priced Ryzen family. Drop by The Inquirer for more.

maxresdefault.jpg

"However, AMD's share has bounced back this year, rising from 18.1 per cent logged at the beginning of the first quarter to 26.2 per cent at the beginning of the third quarter. Intel's share has dipped to 73.8 per cent at the same time."

Here is some more Tech News from around the web:

Tech Talk

 

Source: The Inquirer

Radeon Vega Frontier Edition GPU and PCB Exposed

Subject: Graphics Cards | June 30, 2017 - 02:17 PM |
Tagged: Vega, radeon, Frontier Edition, amd

Hopefully you have already read up on my review of the new Radeon Vega Frontier Edition graphics card; it is full of interesting information about the gaming and professional application performance. 

IMG_4620.JPG

But I thought it would be interesting to share the bare card and GPU in its own post, just to help people find it later on.

For measurements, here's what we were able to glean with the calipers (the quick area arithmetic is sketched after the list).

(Editor's Update: we have updated the die measurements after doing a remeasure. I think my first was a bit loose as I didn't want to impact the GPU directly.)

  • Die size: 25.90mm x 19.80mm (GPU only, not including memory stacks)
    • Area: 512.82mm2
  • Package size: 47.3mm x 47.3mm
    • Area: 2,237mm2
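A minimal sketch of where those area figures come from, using only the caliper measurements above:

```python
# Vega FE die and package area from the caliper measurements above.
die_w, die_h = 25.90, 19.80   # mm, GPU die only (HBM2 stacks excluded)
pkg_w, pkg_h = 47.3, 47.3     # mm, full package

print(f"Die area:     {die_w * die_h:.2f} mm^2")    # 512.82 mm^2
print(f"Package area: {pkg_w * pkg_h:,.0f} mm^2")   # 2,237 mm^2
```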

Enjoy the sexy!

DSC03538 copy.jpg

DSC03539 copy.jpg

DSC03536 copy.jpg

DSC03540.JPG

DSC03541.JPG

DSC03544.JPG

Interesting notes:

  • There is a LOT of empty PCB space on the Vega FE card. This is likely indicative of the added area needed for a large heatsink and fan to cool a 300-375 watt TDP without throttling.
  • The benefits of the smaller HBM-based package appear to come at the cost of SMT components on the GPU substrate and the PCB.
  • The Vega die is large, bigger even than GP102, despite running at a much lower performance level. It will be interesting to see how AMD answers the question of why the die has grown as much as it did.

Feel free to leave us some comments if anything stands out!

Manufacturer: AMD

An interesting night of testing

Last night I did our first-ever live benchmarking session using the just-arrived, air-cooled Radeon Vega Frontier Edition graphics card. Purchasing the card directly from a reseller, rather than being sampled by AMD, gave us the opportunity to test a new flagship product without an NDA in place to keep us silent, so I thought it would be fun to let the audience and community go along for the ride of a traditional benchmarking session. Though I didn’t get all of what I wanted done in that 4.5-hour window, it was great to see the interest and excitement for the product and the results that we were able to generate.

But to the point of the day – our review of the Radeon Vega Frontier Edition graphics card. Based on the latest flagship GPU architecture from AMD, the Radeon Vega FE card has a lot riding on its shoulders, despite not being aimed at gamers. It is the FIRST card to be released with Vega at its heart. It is the FIRST instance of HBM2 being utilized in a consumer graphics card. It is the FIRST in a new attempt from AMD to target the group of users between gamers and professional users (like NVIDIA has addressed with Titan previously). And, it is the FIRST to command as much attention and expectation for the future of a company, a product line, and a fan base.

IMG_4621.JPG

Other than the architectural details that AMD gave us previously, we honestly haven’t been briefed on the performance expectations or the advancements in Vega that we should know about. The Vega FE products were released to the market with very little background, only well-spun turns of phrase emphasizing high performance and compatibility for creators. There has been no typical “tech day” for the media to learn fully about Vega, and there were no samples from AMD to media or analysts (that I know of). Unperturbed by that, I purchased one (several, actually, to see which would show up first) and decided to do our testing.

On the following pages, you will see a collection of tests and benchmarks that range from 3DMark to The Witcher 3 to SPECviewperf to LuxMark, attempting to give as wide a viewpoint of the Vega FE product as I can in a rather short time window. The card is sexy (maybe the best looking I have yet seen), but it will disappoint many on the gaming front. For professional users who are okay without certified drivers, the performance is more likely to raise some impressed eyebrows.

Radeon Vega Frontier Edition Specifications

Through leaks and purposeful information dumps over the past couple of months, we already knew a lot about the Radeon Vega Frontier Edition card prior to the official sale date this week. But now with final specifications in hand, we can start to dissect what this card actually is.

|   | Vega Frontier Edition | Titan Xp | GTX 1080 Ti | Titan X (Pascal) | GTX 1080 | TITAN X | GTX 980 | R9 Fury X | R9 Fury |
|---|---|---|---|---|---|---|---|---|---|
| GPU | Vega | GP102 | GP102 | GP102 | GP104 | GM200 | GM204 | Fiji XT | Fiji Pro |
| GPU Cores | 4096 | 3840 | 3584 | 3584 | 2560 | 3072 | 2048 | 4096 | 3584 |
| Base Clock | 1382 MHz | 1480 MHz | 1480 MHz | 1417 MHz | 1607 MHz | 1000 MHz | 1126 MHz | 1050 MHz | 1000 MHz |
| Boost Clock | 1600 MHz | 1582 MHz | 1582 MHz | 1480 MHz | 1733 MHz | 1089 MHz | 1216 MHz | - | - |
| Texture Units | ? | 224 | 224 | 224 | 160 | 192 | 128 | 256 | 224 |
| ROP Units | 64 | 96 | 88 | 96 | 64 | 96 | 64 | 64 | 64 |
| Memory | 16GB | 12GB | 11GB | 12GB | 8GB | 12GB | 4GB | 4GB | 4GB |
| Memory Clock | 1890 MHz | 11400 MHz | 11000 MHz | 10000 MHz | 10000 MHz | 7000 MHz | 7000 MHz | 1000 MHz | 1000 MHz |
| Memory Interface | 2048-bit HBM2 | 384-bit G5X | 352-bit | 384-bit G5X | 256-bit G5X | 384-bit | 256-bit | 4096-bit (HBM) | 4096-bit (HBM) |
| Memory Bandwidth | 483 GB/s | 547.7 GB/s | 484 GB/s | 480 GB/s | 320 GB/s | 336 GB/s | 224 GB/s | 512 GB/s | 512 GB/s |
| TDP | 300 watts | 250 watts | 250 watts | 250 watts | 180 watts | 250 watts | 165 watts | 275 watts | 275 watts |
| Peak Compute | 13.1 TFLOPS | 12.0 TFLOPS | 10.6 TFLOPS | 10.1 TFLOPS | 8.2 TFLOPS | 6.14 TFLOPS | 4.61 TFLOPS | 8.60 TFLOPS | 7.20 TFLOPS |
| Transistor Count | ? | 12.0B | 12.0B | 12.0B | 7.2B | 8.0B | 5.2B | 8.9B | 8.9B |
| Process Tech | 14nm | 16nm | 16nm | 16nm | 16nm | 28nm | 28nm | 28nm | 28nm |
| MSRP (current) | $999 | $1,200 | $699 | $1,200 | $599 | $999 | $499 | $649 | $549 |

The Vega FE shares enough of its specification listing with the Fury X that the comparison deserves special recognition. Both cards sport 4096 stream processors, 64 ROPs, and 256 texture units. The Vega FE runs at much higher clock speeds (roughly 30-50% higher, depending on whether you compare the typical or peak clock), upgrades to the next generation of high-bandwidth memory, and quadruples the memory capacity. Still, there will be plenty of comparisons between the two products, looking to measure IPC changes from the CUs (compute units) of Fiji to the NCUs built for Vega.
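For reference, here is a quick sketch of where the clock and peak-compute comparisons come from, using only figures from the specification table above; the FP32 number assumes the usual two FLOPs per stream processor per clock (one FMA).

```python
# Vega FE vs. Fury X, from the specification table above.
stream_processors = 4096      # identical on both GPUs
fury_x_clock_mhz = 1050       # Fury X clock
vega_typical_mhz = 1382       # Vega FE "typical" clock
vega_peak_mhz = 1600          # Vega FE peak clock

print(f"Typical clock uplift: {vega_typical_mhz / fury_x_clock_mhz - 1:.1%}")   # ~31.6%
print(f"Peak clock uplift:    {vega_peak_mhz / fury_x_clock_mhz - 1:.1%}")      # ~52.4%

# Peak FP32 throughput: shaders x 2 FLOPs (FMA) x clock.
peak_tflops = stream_processors * 2 * vega_peak_mhz * 1e6 / 1e12
print(f"Vega FE peak compute: {peak_tflops:.1f} TFLOPS")                        # ~13.1 TFLOPS
```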

DSC03536 copy.jpg

The Radeon Vega GPU

The clock speeds also see another shift this time around with the adoption of “typical” clock speeds. This is something NVIDIA has been doing for a few generations since the introduction of GPU Boost, and it tells the consumer how high they should expect clocks to go in a nominal workload. Normally I would say a gaming workload, but since this card is supposedly for professional users and the like, I assume this applies across the board. So even though the GPU is rated at a “peak” clock rate of 1600 MHz, the “typical” clock rate is 1382 MHz. (As an early aside, I did NOT see 1600 MHz in any of my testing time with our Vega FE, but it did settle in at a ~1440 MHz clock most of the time.)

Continue reading our review of the AMD Radeon Vega Frontier Edition!

How exactly do we type these new Ryzen Pro parts? R7P, Ryzen 7P, R7 Pro?

Subject: General Tech | June 29, 2017 - 12:43 PM |
Tagged: amd, ryzen pro, EPYC

Official news about Ryzen Pro has finally arrived and The Tech Report was right on top of it.  This is the first we have seen of the "3" parts, a Ryzen 3 Pro 1300 and Ryzen 3 Pro 1200, their four non-SMT cores clocked at a decent 3.5/3.7GHz and 3.1/3.4GHz respectively.  That makes the Ryzen 3 Pro 1300 essentially the same chip as the Ryzen 5 Pro 1500 but with half the total cache and without multi-threading, theoretically reducing the price.  Five of the six new parts have a TDP of 65W with only the top tier Ryzen 7 Pro 1700X hitting 95W, with its 8 cores, 16 threads operating at 3.5/3.7GHz.

The speeds and core counts are not the most important features of these chips, however; it is the features they share with AMD's soon-to-arrive EPYC chips: the AMD Secure Processor, TPM 2.0, and DASH, which offers capabilities similar to Intel's vPro. This is one area in which AMD offers a broader choice of products than Intel, whose Core i3 parts do not support these enterprise features; at least not yet. Click the link above to check out more.

ryzenpro-stability.png

"AMD's Ryzen Pro platform blends business-class security and management features with the performance of the Zen architecture. We take an early look at how AMD plans to grapple with Intel in the battle for the standard corporate desktop."

Here is some more Tech News from around the web:

Tech Talk