Author:
Manufacturer: ASUS

Overview

To say that the consumer wired networking market has stagnated would be an understatement. While we've seen generational improvements to NICs from companies like Intel, and companies like Rivet trying to add their own unique spin on things with their Killer products, the basic idea has remained mostly unchanged.

And for its time, Gigabit networking was an amazing thing. In the era of hard drive-based storage as your only option, 100 MB/s seemed like a great data transfer speed for your home network — who could want more?

Now that we've moved well into the era of flash-based storage technologies capable of upwards of 3 GB/s transfer speeds, and even high capacity hard drives hitting the 200 MB/s category, Gigabit networking is a frustrating bottleneck when trying to move files from PC to PC.

For the enterprise market, there has been a solution to this for a long time. 10 Gigabit networking has been available in enterprise equipment for over 10 years, and is already old news there, with faster specifications like 40 and 100 Gbps interfaces available.

So why then are consumers mostly stuck at 1Gbps? As is the case with most enterprise technologies, 10 Gigabit equipment still carries a steep premium over its slower sibling. In fact, we've only just started to see enterprise-level 10 Gigabit NICs integrated on consumer motherboards, like the ASUS X99-E 10G WS at a staggering $650 price point.

However, there is hope. Companies like Aquantia are starting to aggressively push down the price point of 10 Gigabit networking, which brings us to the product we are taking a look at today — the ASUS XG-C100C 10 Gigabit Network Adapter.

IMG_4714.JPG

Continue reading about the ASUS XG-C100C 10GigE add-in card!

Author:
Subject: Processors
Manufacturer: Intel

A massive lineup

The number and significance of the product and platform launches occurring today with the Intel Xeon Scalable family are staggering. Intel is launching more than 50 processors and 7 chipsets under the Xeon Scalable product brand, targeting data centers and enterprise customers in a wide range of markets and segments. From SMB users to “Super 7” data center clients, the new lineup of Xeon parts is likely to have an option targeting nearly everyone.

All of this comes at an important point in time, with AMD fielding its new EPYC family of processors and platforms, becoming competitive in the space for the first time in nearly a decade. That decade of clear dominance in the data center has been good to Intel, giving it the ability to bring in profits and high margins without the direct fear of a strong competitor. Intel did not spend those 10 years flat-footed though; instead it has been developing complementary technologies including new Ethernet controllers, ASICs, Omni-Path, FPGAs, solid state storage tech and much more.

cpus.jpg

Our story today will give you an overview of the new processors and the changes that Intel’s latest Xeon architecture offers to business customers. The Skylake-SP core has some significant upgrades over the Broadwell design before it, but in other aspects the processors and platforms will be quite similar. What changes can you expect with the new Xeon family?

01-11 copy.jpg

Per-core performance has been improved with the updated Skylake-SP microarchitecture and a new cache memory hierarchy that we had a preview of with the Skylake-X consumer release last month. The memory and PCIe interfaces have been upgraded with more channels and more lanes, giving the platform more flexibility for expansion. Socket-level performance also goes up with higher core counts available and the improved UPI interface that makes socket to socket communication more efficient. AVX-512 doubles the peak FLOPS/clock on Skylake over Broadwell, beneficial for HPC and analytics workloads. Intel QuickAssist improves cryptography and compression performance to allow for faster connectivity implementation. Security and agility get an upgrade as well with Boot Guard, RunSure, and VMD for better NVMe storage management. While on the surface this is a simple upgrade, there is a lot that gets improved under the hood.
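
As a quick sanity check on where the AVX-512 doubling of peak FLOPS/clock comes from, here is a rough sketch of peak double-precision FLOPS per core per clock, assuming two FMA units per core (as on the Platinum and Gold 61xx parts listed below) and counting each fused multiply-add as two operations:

```python
def peak_dp_flops_per_clock(fma_units: int, vector_bits: int) -> int:
    """Peak double-precision FLOPS per core per clock: each FMA counts as
    two floating-point operations (multiply + add) per 64-bit lane."""
    lanes = vector_bits // 64
    return fma_units * lanes * 2

print(peak_dp_flops_per_clock(2, 256))  # Broadwell-EP with AVX2: 16 FLOPS/clock
print(peak_dp_flops_per_clock(2, 512))  # Skylake-SP with AVX-512: 32 FLOPS/clock
```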

01-12 copy.jpg

We already had a good look at the new mesh architecture used for the inter-core component communication. This transition away from the ring bus that was in use since Nehalem gives Skylake-SP a couple of unique traits: slightly longer latencies but with more consistency and room for expansion to higher core counts.

01-18 copy.jpg

Intel has changed the naming scheme with the Xeon Scalable release, moving away from “E5/E7” and “v4” to a Platinum, Gold, Silver, Bronze nomenclature. The product differentiation remains much the same, with the Platinum processors offering the highest feature support, including 8-socket configurations, the highest core counts, the highest memory speeds, the most connectivity options, and more. To be clear: there are a lot of new processors, and trying to create an easy-to-read table of features and clocks is nearly impossible. The highlights of the different families are:

  • Xeon Platinum (81xx)
    • Up to 28 cores
    • Up to 8 sockets
    • Up to 3 UPI links
    • 6-channel DDR4-2666
    • Up to 1.5TB of memory
    • 48 lanes of PCIe 3.0
    • AVX-512 with 2 FMA per core
  • Xeon Gold (61xx)
    • Up to 22 cores
    • Up to 4 sockets
    • Up to 3 UPI links
    • 6-channel DDR4-2666
    • AVX-512 with 2 FMA per core
  • Xeon Gold (51xx)
    • Up to 14 cores
    • Up to 2 sockets
    • 2 UPI links
    • 6-channel DDR4-2400
    • AVX-512 with 1 FMA per core
  • Xeon Silver (41xx)
    • Up to 12 cores
    • Up to 2 sockets
    • 2 UPI links
    • 6-channel DDR4-2400
    • AVX-512 with 1 FMA per core
  • Xeon Bronze (31xx)
    • Up to 8 cores
    • Up to 2 sockets
    • 2 UPI links
    • No Turbo Boost
    • 6-channel DDR4-2133
    • AVX-512 with 1 FMA per core

That’s…a lot. And it only gets worse when you start to look at the entire SKU lineup with clocks, Turbo Speeds, cache size differences, etc. It’s easy to see why the simplicity argument that AMD made with EPYC is so attractive to an overwhelmed IT department.

01-20 copy.jpg

Two sub-categories exist with the T or F suffix. The former indicates a 10-year life cycle (thermal specific) while the F is used to indicate units that integrate the Omni-Path fabric on package. M models can address 1.5TB of system memory. The diagram above, which you should click to see a larger view, shows the scope of the Xeon Scalable launch in a single slide. This release offers buyers flexibility, but at the expense of configuration complexity.

Continue reading about the new Intel Xeon Scalable Skylake-SP platform!

Author:
Manufacturer: Sapphire

Overview

There has been a lot of news lately about the release of cryptocurrency-specific graphics cards from both NVIDIA and AMD add-in board partners. While we covered the current cryptomining phenomenon in an earlier article, today we are taking a look at one of these cards geared towards miners.

IMG_4681.JPG

It's worth noting that I purchased this card myself from Newegg, and neither AMD nor Sapphire was involved in this article. I saw this card pop up on Newegg a few days ago, and my curiosity got the best of me.

There has been a lot of speculation, and little official information from vendors about what these mining cards will actually entail.

From the outward appearance, it is virtually impossible to distinguish this "new" RX 470 from the previous Sapphire Nitro+ RX 470, besides the lack of additional display outputs beyond the DVI connection. Even the branding and labels on the card identify it as a Nitro+ RX 470.

In order to test the hashing rates of this GPU, we are using Claymore's Dual Miner Version 9.6 (mining Ethereum only) against a reference design RX 470, also from Sapphire.

IMG_4684.JPG

On the reference RX 470 out of the box, we hit rates of about 21.8 MH/s while mining Ethereum. 

Once we moved to the Sapphire mining card, we moved up to at least 24 MH/s from the start.
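
For a quick back-of-the-envelope comparison of those two out-of-the-box numbers:

```python
reference_mhs = 21.8    # reference RX 470, out of the box
mining_card_mhs = 24.0  # Sapphire mining card, out of the box

gain = (mining_card_mhs - reference_mhs) / reference_mhs
print(f"~{gain:.1%} higher hash rate before any manual tuning")  # ~10.1%
```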

Continue reading about the Sapphire Radeon RX 470 Mining Edition!

Subject: Motherboards
Manufacturer: GIGABYTE

Introduction and Technical Specifications

Introduction

02-board_0.jpg

Courtesy of GIGABYTE

For the launch of the Intel X299 chipset motherboards, GIGABYTE chose their AORUS brand to lead the charge. The AORUS branding differentiates the enthusiast and gamer-friendly products from other GIGABYTE product lines, similar to how ASUS uses the ROG branding to differentiate their high-performance product line. The X299 AORUS Gaming 3 is among GIGABYTE's initial release boards offering support for the latest Intel HEDT chipset and processor line. Built around the Intel X299 chipset, the board supports the Intel LGA2066 processor line, including the Skylake-X and Kaby Lake-X processors, with support for Quad-Channel DDR4 memory running at 2667MHz speeds. The X299 AORUS Gaming 3 can be found in retail with an MSRP of $279.99.

03-board-profile_0.jpg

Courtesy of GIGABYTE

GIGABYTE integrated the following features into the X299 AORUS Gaming 3 motherboard: eight SATA III 6Gbps ports; two M.2 PCIe Gen3 x4 32Gbps capable ports with Intel Optane support built-in; an Intel I219-V Gigabit RJ-45 port; five PCI-Express x16 slots; Realtek® ALC1220 8-Channel audio subsystem; and USB 2.0, 3.0, and 3.1 Type-A and Type-C port support.

04-pwr-system.jpg

Courtesy of GIGABYTE

To power the board, GIGABYTE integrated a 9-phase digital power delivery system into the X299 AORUS Gaming 3's design. The digital power system was designed with IR digital power controllers and PowIRstage ICs, Server Level Chokes, and Durable Black capacitors.

05-pcie-armor.jpg

Courtesy of GIGABYTE

Designed to withstand the punishment of even the largest video cards, GIGABYTE's Ultra Durable PCIe Armor gives added strength and retention force to the primary and secondary PCIe x16 video card slots (PCIe X16 slots 1 and 3). The PCIe slots are reinforced with a metal overlay that is anchored to the board, giving the slot better hold capabilities (both side-to-side and card retention) when the board is used in a vertical orientation.

Continue reading our preview of the GIGABYTE X299 AORUS Gaming 3 motherboard!

Author:
Manufacturer: AKiTiO

A long time coming

External video cards for laptops have long been a dream of many PC enthusiasts, and for good reason. It’s compelling to have a thin-and-light notebook with great battery life for things like meetings or class, with the ability to plug it into a dock at home and enjoy your favorite PC games.

Many times we have been promised that external GPUs for notebooks would be a viable option. Over the years there have been many commercial solutions involving both industry-standard protocols like ExpressCard, as well as proprietary connections to allow you to externally connect PCIe devices. Enterprising hackers have also tried their hand at this for many years, cobbling together interesting solutions using the mPCIe and M.2 ports on their notebooks which were meant for other devices.

With the introduction of Intel’s Thunderbolt standard in 2011, there was a hope that we would finally achieve external graphics nirvana. A modern, Intel-backed protocol promising PCIe x4 speeds (PCIe 2.0 at that point) sounded like it would be ideal for connecting GPUs to notebooks, and in some ways it was. Once again the external graphics communities managed to get it to work through the use of enclosures meant to connect other non-GPU PCIe devices such as RAID and video capture cards to systems. However, software support was still a limiting factor. You were required to use an external monitor to display your video, and it still felt like you were just riding the line between usability and a total hack. It felt like we were never going to get true universal support for external GPUs on notebooks.

Then, seemingly out of nowhere, Intel decided to promote native support for external GPUs as a priority when they introduced Thunderbolt 3. Fast forward, and we've already seen a much larger adoption of Thunderbolt 3 on PC notebooks than we ever did with the previous Thunderbolt implementations. Taking all of this into account, we figured it was time to finally dip our toes into the eGPU market.
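
For context on the raw bandwidth involved, here is a rough sketch of one-direction PCIe link rates; note that Thunderbolt 3 tunnels PCIe 3.0 x4 alongside DisplayPort and other traffic, so usable eGPU throughput will land somewhat below the x4 figure:

```python
def pcie_gb_per_s(gt_per_s: float, lanes: int, encoding: float) -> float:
    """Raw one-direction PCIe bandwidth in GB/s for a given transfer rate and lane count."""
    return gt_per_s * lanes * encoding / 8

print(f"PCIe 2.0 x4 (early Thunderbolt): {pcie_gb_per_s(5, 4, 8/10):.1f} GB/s")     # ~2.0
print(f"PCIe 3.0 x4 (Thunderbolt 3):     {pcie_gb_per_s(8, 4, 128/130):.2f} GB/s")  # ~3.94
print(f"PCIe 3.0 x16 (desktop slot):     {pcie_gb_per_s(8, 16, 128/130):.1f} GB/s") # ~15.8
```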

For our testing, we decided on the AKiTio Node for several reasons. First, at around $300, it's by far the lowest cost enclosure built to support GPUs. Additionally, it seems to be one of the most compatible devices currently on the market according to the very helpful comparison chart over at eGPU.io. The eGPU site is a wonderful resource for everything external GPU, over any interface possible, and I would highly recommend heading over there to do some reading if you are interested in trying out an eGPU for yourself.

The Node unit itself is a very utilitarian design. Essentially you get a folded sheet metal box with a Thunderbolt controller and 400W SFX power supply inside.

DSC03490.JPG

In order to install a GPU into the Node, you must first unscrew the enclosure from the back and slide the outer shell off of the device.

DSC03495.JPG

Once inside, we can see that there is ample room for any graphics card you might want to install in this enclosure. In fact, it seems a little too large for any of the GPUs we installed, including GTX 1080 Ti models. Here, you can see a more reasonable RX 570 installed.

Beyond opening up the enclosure to install a GPU, there is very little configuration required. My unit required a firmware update, but that was easily applied with the tools from the AKiTio site.

From here, I simply connected the Node to a ThinkPad X1, installed the NVIDIA drivers for our GTX 1080 Ti, and everything seemed to work — including using the 1080 Ti with the integrated notebook display and no external monitor!

Now that we've got the Node working, let's take a look at some performance numbers.

Continue reading our look at external graphics with the Thunderbolt 3 AKiTiO Node!

Author:
Manufacturer: AMD

Two Vegas...ha ha ha

When the preorders for the Radeon Vega Frontier Edition went up last week, I made the decision to place orders in a few different locations to make sure we got it in as early as possible. Well, as it turned out, we actually had the cards show up very quickly…from two different locations.

dualvega.jpg

So, what is a person to do if TWO of the newest, most coveted GPUs show up on their doorstep? After you do the first, full review of the single GPU iteration, you plug those both into your system and do some multi-GPU CrossFire testing!

There of course needs to be some discussion up front about this testing and our write up. If you read my first review of the Vega Frontier Edition, you will clearly note my stance on the idea that “this is not a gaming card” and that “the drivers aren’t ready.” Essentially, I said these potential excuses for performance were a distraction and unwarranted based on the current state of Vega development and the proximity of the consumer iteration, Radeon RX.

IMG_4688.JPG

But for multi-GPU, it’s a different story. Both competitors in the GPU space will tell you that developing drivers for CrossFire and SLI is incredibly difficult. Much more than simply splitting the work across different processors, multi-GPU requires extra attention to specific games, game engines, and effects rendering that are not required in single GPU environments. Add to that the fact that the market size for CrossFire and SLI has been shrinking, from an already small state, and you can see why multi-GPU is going to get less attention from AMD here.

Even more, when CrossFire and SLI support gets a focus from the driver teams, it is often late in the process, nearly last in the list of technologies to address before launch.

With that in mind, we all should understand that the results we are going to show you might be indicative of the CrossFire scaling when Radeon RX Vega launches, but they very well might not be. I would look at the data we are presenting today as the “current state” of CrossFire for Vega.

Continue reading our look at a pair of Vega Frontier Edition cards in CrossFire!

Manufacturer: NVIDIA

Performance not two-die four.

When designing an integrated circuit, you are attempting to fit as much complexity as possible within your budget of space, power, and so forth. One harsh limitation for GPUs is that, while your workloads could theoretically benefit from more and more processing units, the number of usable chips from a batch shrinks as designs grow, and the reticle limit of a fab’s manufacturing node is basically a brick wall.

What’s one way around it? Split your design across multiple dies!

nvidia-2017-multidie.png

NVIDIA published a research paper discussing just that. In their diagram, they show two examples. In the first, the GPU is a single, typical die that’s surrounded by four stacks of HBM, like GP100; the second configuration breaks the GPU into five dies, four GPU modules and an I/O controller, with each GPU module attached to a pair of HBM stacks.

NVIDIA ran simulations to determine how this chip would perform, and, in various workloads, they found that it out-performed the largest possible single-chip GPU by about 45.5%. They scaled up the single-chip design until it had the same amount of compute units as the multi-die design, even though this wouldn’t work in the real world because no fab could actually lithograph it. Regardless, that hypothetical, impossible design was only ~10% faster than the actually-possible multi-chip one, showing that the overhead of splitting the design is only around that much, according to their simulation. The multi-die design was also faster than the multi-card equivalent by 26.8%.
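
One way to put the paper's percentages on a single scale (my own normalization, treating the largest buildable monolithic GPU as 1.0):

```python
buildable_monolithic = 1.00
multi_die            = buildable_monolithic * 1.455  # out-performs it by ~45.5%
hypothetical_mono    = multi_die * 1.10              # the impossible scaled-up chip is ~10% faster
multi_card           = multi_die / 1.268             # multi-die beats multi-card by 26.8%

for name, perf in [("largest buildable chip", buildable_monolithic),
                   ("multi-card (4 GPUs)", multi_card),
                   ("multi-die package", multi_die),
                   ("hypothetical monolithic", hypothetical_mono)]:
    print(f"{name:25s} {perf:.2f}x")
```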

While NVIDIA’s simulations, run on 48 different benchmarks, have accounted for this, I still can’t visualize how this would work in an automated way. I don’t know how the design would automatically account for fetching data that’s associated with other GPU modules, as this would probably be a huge stall. That said, they spent quite a bit of time discussing how much bandwidth is required within the package, and figures of 768 GB/s to 3TB/s were mentioned, so it’s possible that it’s just the same tricks as fetching from global memory. The paper touches on the topic several times, but I didn’t really see anything explicit about what they were doing.

amd-2017-epyc-breakdown.jpg

If you’ve been following the site over the last couple of months, you’ll note that this is basically the same thing AMD is doing with Threadripper and EPYC. The main difference is that CPU cores are isolated, so sharing data between them is explicit. In fact, when that product was announced, I thought, “Huh, that would be cool for GPUs. I wonder if it’s possible, or if it would just end up being Crossfire / SLI.”

Apparently not? It should be possible?

I should note that I doubt this will be relevant for consumers. The GPU is the most expensive part of a graphics card. While the thought of four GP102-level chips working together sounds great for 4K (which is 4x1080p in resolution) gaming, quadrupling the expensive part sounds like a giant price tag. That said, the market for GP100 (and the upcoming GV100) would pay five-plus digits for the absolute fastest compute device for deep learning, scientific research, and so forth.

The only way I could see this working for gamers is if NVIDIA finds the sweet-spot for performance-to-yield (for a given node and time) and they scale their product stack with multiples of that. In that case, it might be cost-advantageous to hit some level of performance, versus trying to do it with a single, giant chip.

This is just my speculation, however. It’ll be interesting to see where this goes, whenever it does.

Author:
Manufacturer: Galax

GTX 1060 keeps on kicking

Despite the market for graphics cards being disrupted by the cryptocurrency mining craze, board partners like Galax continue to build high quality options for gamers...if they can get their hands on them. We recently received a new Galax GTX 1060 EXOC White 6GB card that offers impressive performance and features as well as a visual style to help it stand out from the crowd.

We have worked with GeForce GTX 1060 graphics cards quite a bit on PC Perspective, so there is not a need to dive into the history of the GPU itself. If you need a refresher on the GP106 GPU and where it stands in the pantheon of the current GPU market, check out my launch review of the GTX 1060 from last year. The release of AMD’s Radeon RX 580 did change things a bit in the market landscape, so that review might be worth looking at too.

Our quick review of the Galax GTX 1060 EXOC White will look at performance (briefly), overclocking, and cost. But first, let’s take a look at this thing.

The Galax GTX 1060 EXOC White

As the name implies, the EXOC White card from Galax is both overclocked and uses a white fan shroud to add a little flair to the design. The PCB is a standard black color, but with the fan and back plate both a bright white, the card will be a point of interest for nearly any PC build. Pairing this with a white-accented motherboard, like the recent ASUS Prime series, would be an excellent visual combination.

IMG_4674.JPG

The fans on the EXOC White have clear-ish white blades that are illuminated by the white LEDs that shine through the fan openings on the shroud.

IMG_4675.JPG

IMG_4676.JPG

The cooler that Galax has implemented is substantial, with three heatpipes used to distribute the load from the GPU across the fins. There is a 6-pin power connector (standard for the GTX 1060) but that doesn’t appear to hold back the overclocking capability of the GPU.

IMG_4677.JPG

There is a lot of detail on the heatsink shroud – and either you like it or you don’t.

IMG_4678.JPG

Galax has included a white backplate that doubles as artistic style and heatsink. I do think that with most users’ cases showcasing the rear of the graphics card more than the front, a good quality back plate is a big selling point.

IMG_4680.JPG

The output connectivity includes a pair of DVI ports, a full-size HDMI and a full-size DisplayPort; more than enough for nearly any buyer of this class of GPU.

Continue reading about the Galax GTX 1060 EXOC White 6GB!

Author:
Manufacturer: Seasonic

Introduction and Features

Introduction

Seasonic is in the process of overhauling their entire PC power supply lineup. They began with the introduction of the PRIME Series in late 2016 and are now introducing the new FOCUS family, which will include three different series ranging from 450W up to 850W output capacity with either Platinum or Gold efficiency certification. In this review we will be taking a detailed look at the new Seasonic FOCUS PLUS Gold (FX) 650W power supply. And just to prove that reviewers are not being sent hand-picked golden samples, our PSU was delivered straight from Newegg.com inventory.

2-Banner.jpg

The Seasonic FOCUS PLUS Gold series includes four models: 550W, 650W, 750W, and 850W. In addition to 80 Plus Gold certification, the FOCUS Plus (FX) series features a small footprint chassis (140mm deep), all modular cables, high quality components, and comes backed by a 10-year warranty.

•    FOCUS PLUS Gold (FX) 550W: $79.90 USD
•    FOCUS PLUS Gold (FX) 650W: $89.90 USD
•    FOCUS PLUS Gold (FX) 750W: $99.90 USD
•    FOCUS PLUS Gold (FX) 850W: $109.90 USD

3-Diag-cables.jpg

Seasonic FOCUS PLUS 650W Gold (FX) PSU Key Features:

•    650W Continuous DC output at up to 50°C (715W peak)
•    80 PLUS Gold certified for high efficiency
•    Small footprint: chassis measures just 140mm (5.5”) deep
•    Micro Tolerance load regulation @ 1%
•    Fully-modular cables
•    DC-to-DC Voltage converters
•    Single +12V output
•    Multi-GPU Technology support
•    Quiet 120mm Fluid Dynamic Bearing (FDB) cooling fan
•    Haswell support
•    Active Power Factor correction with Universal AC input (100 to 240 VAC)
•    Safety protections: OPP, OVP, UVP, OCP, OTP and SCP
•    10-Year warranty

Please continue reading our review of the FOCUS PLUS 650W Gold PSU!!!

Author:
Manufacturer: AMD

An interesting night of testing

Last night I did our first ever live benchmarking session using the just-arrived Radeon Vega Frontier Edition air-cooled graphics card. Purchasing it directly from a reseller, rather than being sampled by AMD, gave us the opportunity to test a new flagship product without an NDA in place to keep us silent, so I thought it would be fun to let the audience and community go along for the ride of a traditional benchmarking session. Though I didn’t get all of what I wanted done in that 4.5-hour window, it was great to see the interest and excitement for the product and the results that we were able to generate.

But to the point of the day – our review of the Radeon Vega Frontier Edition graphics card. Based on the latest flagship GPU architecture from AMD, the Radeon Vega FE card has a lot riding on its shoulders, despite not being aimed at gamers. It is the FIRST card to be released with Vega at its heart. It is the FIRST instance of HBM2 being utilized in a consumer graphics card. It is the FIRST in a new attempt from AMD to target the group of users between gamers and professional users (like NVIDIA has addressed with Titan previously). And, it is the FIRST to command as much attention and expectation for the future of a company, a product line, and a fan base.

IMG_4621.JPG

Other than the architectural details that AMD gave us previously, we honestly haven’t been briefed on the performance expectations or the advancements in Vega that we should know about. The Vega FE products were released to the market with very little background, only well-spun turns of phrase emphasizing the value of the high performance and compatibility for creators. There has been no typical “tech day” for the media to learn fully about Vega and there were no samples from AMD to media or analysts (that I know of). Undeterred, I purchased one (several actually, to see which would show up first) and decided to do our own testing.

On the following pages, you will see a collection of tests and benchmarks that range from 3DMark to The Witcher 3 to SPECviewperf to LuxMark, attempting to give as wide a viewpoint of the Vega FE product as I can in a rather short time window. The card is sexy (maybe the best looking I have yet seen), but will disappoint many on the gaming front. For professional users that are okay not having certified drivers, performance there is more likely to raise some impressed eyebrows.

Radeon Vega Frontier Edition Specifications

Through leaks and purposeful information dumps over the past couple of months, we already knew a lot about the Radeon Vega Frontier Edition card prior to the official sale date this week. But now with final specifications in hand, we can start to dissect what this card actually is.

  Vega Frontier Edition Titan Xp GTX 1080 Ti Titan X (Pascal) GTX 1080 TITAN X GTX 980 R9 Fury X R9 Fury
GPU Vega GP102 GP102 GP102 GP104 GM200 GM204 Fiji XT Fiji Pro
GPU Cores 4096 3840 3584 3584 2560 3072 2048 4096 3584
Base Clock 1382 MHz 1480 MHz 1480 MHz 1417 MHz 1607 MHz 1000 MHz 1126 MHz 1050 MHz 1000 MHz
Boost Clock 1600 MHz 1582 MHz 1582 MHz 1480 MHz 1733 MHz 1089 MHz 1216 MHz - -
Texture Units ? 224 224 224 160 192 128 256 224
ROP Units 64 96 88 96 64 96 64 64 64
Memory 16GB 12GB 11GB 12GB 8GB 12GB 4GB 4GB 4GB
Memory Clock 1890 MHz 11400 MHz 11000 MHz 10000 MHz 10000 MHz 7000 MHz 7000 MHz 1000 MHz 1000 MHz
Memory Interface 2048-bit HBM2 384-bit G5X 352-bit 384-bit G5X 256-bit G5X 384-bit 256-bit 4096-bit (HBM) 4096-bit (HBM)
Memory Bandwidth 483 GB/s 547.7 GB/s 484 GB/s 480 GB/s 320 GB/s 336 GB/s 224 GB/s 512 GB/s 512 GB/s
TDP 300 watts 250 watts 250 watts 250 watts 180 watts 250 watts 165 watts 275 watts 275 watts
Peak Compute 13.1 TFLOPS 12.0 TFLOPS 10.6 TFLOPS 10.1 TFLOPS 8.2 TFLOPS 6.14 TFLOPS 4.61 TFLOPS 8.60 TFLOPS 7.20 TFLOPS
Transistor Count ? 12.0B 12.0B 12.0B 7.2B 8.0B 5.2B 8.9B 8.9B
Process Tech 14nm 16nm 16nm 16nm 16nm 28nm 28nm 28nm 28nm
MSRP (current) $999 $1200 $699 $1,200 $599 $999 $499 $649 $549

The Vega FE shares enough of a specification listing with the Fury X that it deserves special recognition. Both cards sport 4096 stream processors, 64 ROPs and 256 texture units. The Vega FE runs at much higher clock speeds (35-40% higher), upgrades to the next generation of high-bandwidth memory, and quadruples the capacity. Still, there will be plenty of comparisons between the two products, looking to measure IPC changes from the CUs (compute units) of Fiji to the NCUs built for Vega.
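
The peak compute figures in the table fall straight out of the shader count and clock speed, since each stream processor executes one fused multiply-add (two FLOPS) per clock; a quick sanity check:

```python
def peak_tflops(stream_processors: int, clock_ghz: float) -> float:
    # 2 FLOPS per stream processor per clock (one FMA)
    return stream_processors * 2 * clock_ghz / 1000

print(f"Vega FE @ 1.60 GHz boost: {peak_tflops(4096, 1.60):.1f} TFLOPS")  # ~13.1
print(f"R9 Fury X @ 1.05 GHz:     {peak_tflops(4096, 1.05):.1f} TFLOPS")  # ~8.6
```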

DSC03536 copy.jpg

The Radeon Vega GPU

The clock speeds also see another shift this time around with the adoption of “typical” clock speeds. This is something that NVIDIA has been using for a few generations since the introduction of GPU Boost, and it tells the consumer how high they should expect clocks to go in a nominal workload. Normally I would say a gaming workload, but since this card is supposedly for professional users and the like, I assume this applies across the board. So even though the GPU is rated at a “peak” clock rate of 1600 MHz, the “typical” clock rate is 1382 MHz. (As an early aside, I did NOT see 1600 MHz in any of my testing time with our Vega FE, but it did settle in at a ~1440 MHz clock most of the time.)

Continue reading our review of the AMD Radeon Vega Frontier Edition!

Subject: Mobile
Manufacturer: Samsung

Introduction and Specifications

The Galaxy S8 Plus is Samsung's first ‘big’ phone since the Note7 fiasco, and just looking at it, the design and engineering process seems to have paid off. Simply put, the GS8/GS8+ might just be the most striking handheld devices ever made. The U.S. version sports the newest and fastest Qualcomm platform with the Snapdragon 835, while the international version of the handset uses Samsung’s Exynos 8895 Octa SoC. We have the former on hand, and it was this MSM8998-powered version of the 6.2-inch GS8+ that I spent some quality time with over the past two weeks.

DSC_0836.jpg

There is more to a phone than its looks, and even in that department the Galaxy S8+ raises questions about durability with that large, curved glass screen. With the front and back panels wrapping around as they do, the phone has a very slim, elegant look that feels fantastic in hand. And while one drop could easily ruin your day with any smartphone, this design is particularly unforgiving - and screen replacement costs with these new S8 phones are particularly high due to the difficulty in repairing the screen and the need to replace the AMOLED display along with the laminated glass.

Forgetting the fragility for a moment, I was sorely tempted to embrace the case-free lifestyle, since I didn't want to spoil the best in-hand feel I've ever had from a handset (or hide its knockout design behind a case). Eventually, though, I got down to objectively assessing the phone's performance. This is the first production phone we have had on hand with the new Snapdragon 835 platform, and we will be able to draw some definitive performance conclusions compared to SoCs in other shipping phones.

DSC_0815.jpg

Samsung Galaxy S8+ Specifications (US Version)
Display 6.2-inch 1440x2960 AMOLED
SoC Qualcomm Snapdragon 835 (MSM8998)
CPU Cores 4x 2.45 GHz Kryo
4x 1.90 GHz Kryo
GPU Cores Adreno 540
RAM 4 / 6 GB LPDDR4 (6 GB with 128 GB storage option)
Storage 64 / 128 GB
Network Snapdragon X16 LTE
Connectivity 802.11ac Wi-Fi
2x2 MU-MIMO
Bluetooth 5.0; A2DP, aptX
USB 3.1 (Type-C)
NFC
Battery 3500 mAh Li-Ion
Dimensions 159.5 x 73.4 x 8.1 mm, 173 g
OS Android 7.0

Continue reading our review of the Samsung Galaxy S8+ smartphone!

Subject: Storage
Manufacturer: Intel

Introduction, Specifications and Packaging

Introduction

Today Intel is launching a new line of client SSDs - the SSD 545S Series. These are simple, 2.5" SATA parts that aim to offer good performance at an economical price point. Low-cost SSDs are not typically Intel's strong suit, mainly because the company is extremely rigorous about design and testing, but the ramping up of IMFT 3D NAND, now entering its second generation stacked to 64 layers, should finally help them get the cost/GB down to levels previously enjoyed by other manufacturers.

diag.jpg

Intel and Micron jointly announced 3D NAND just over two years ago, and a year ago we talked about the next IMFT capacity bump coming as a 'double' move. Well, that's only partially happening today. The 545S line will carry the new IMFT 64-layer flash, but the capacity per die remains the same 256Gbit (32GB) as the previous generation parts. The dies will be smaller, meaning more can fit on a wafer, which drives down production costs, but the larger 512Gbit dies won't be coming until later on (and in a different product line - Intel told us they do not intend to mix die types within the same lines as we've seen Samsung do in the past).

Specifications

specs.png

There are no surprises here, though I am happy to see a 'sustained sequential performance' specification stated by an SSD maker, and I'm happier to see Intel claiming such a high figure for sustained writes (implying this is the TLC writing speed as the SLC cache would be exhausted in sustained writes).

I'm also happy to see sensible endurance specs for once. We've previously seen oddly non-scaling figures in prior SSD releases from multiple companies. Clearly stating a specific TBW 'per 128GB' makes a lot of sense here, and the number itself isn't that bad, either.
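
As a quick illustration of why a per-128GB rating scales so cleanly across capacities (the figure below is a placeholder for illustration, not Intel's published spec):

```python
TBW_PER_128GB = 36  # hypothetical placeholder; substitute the published per-128GB figure

for capacity_gb in (128, 256, 512, 1024):
    print(f"{capacity_gb} GB -> {capacity_gb / 128 * TBW_PER_128GB:.0f} TBW")
```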

Packaging

packaging.jpg

Simplified packaging from Intel here, apparently to help further reduce shipping costs.

Read on for our full review of the Intel 545S 512GB SSD!

Introduction and Technical Specifications

Introduction

02-cooler-all.jpg

Courtesy of Thermalright

Thermalright is a well-established brand name, known for their high performance air coolers. The newest addition to their TRUE Spirit Series line of air coolers, the TRUE Spirit 140 Direct, is a redesigned version of their TRUE Spirit 140 Power air cooler offering a similar level of performance at a lower price point. The Thermalright TRUE Spirit 140 Direct cooler is a slim, single tower cooler featuring a nickel-plated copper base and an aluminum radiator with a 140mm fan. Additionally, Thermalright designed the cooler to be compatible with all modern platforms. The TRUE Spirit 140 Direct cooler is available with an MSRP of $46.95.

03-cooler-front-fan.jpg

Courtesy of Thermalright

04-cooler-profile-nofan.jpg

Courtesy of Thermalright

05-cooler-side-fan.jpg

Courtesy of Thermalright

06-cooler-specs.jpg

Courtesy of Thermalright

The Thermalright TRUE Spirit 140 Direct cooler consists of a single finned aluminum tower radiator fed by five 6mm diameter nickel-plated copper heat pipes in a U-shaped configuration. The cooler can accommodate up to two 140mm fans, but comes standard with a single fan only. The fans are held to the radiator tower using metal clips that hook through the radiator tower body. The cooler is held to the CPU using screws on either side of the mount plate that fasten to the unit's mounting cage installed on the motherboard.

Continue reading our review of the Thermalright TRUE Spirit 140 Direct CPU air cooler!

Subject: General Tech
Manufacturer: Logitech

Introduction and Specifications

Logitech has been releasing gaming headphones with steady regularity of late, and this summer we have another new model to examine in the G433 Gaming Headset, which has just been released (along with the G233). This wired, 7.1-channel capable headset is quite different visually from previous Logitech models, as it is finished with an interesting “lightweight, hydrophobic fabric shell” and offered in various colors (our review pair is a bright red). But the G433s have function to go along with the style, as Logitech has focused on both digital and analog sound quality with this third model to incorporate Logitech’s Pro-G drivers. How do they sound? We’ll find out!

DSC_0555.jpg

One of the main reasons to consider a gaming headset like this in the first place is the ability to take advantage of multi-channel surround sound from your PC, and with the G433’s (as with the previously reviewed G533) this is accomplished via DTS Headphone:X, a technology which in my experience is capable of producing a convincing sound field that is very close to that of multiple surround drivers. All of this is being created via the same pair of left/right drivers that handle music, and here Logitech is able to boast of some very impressive engineering that produced the Pro-G driver introduced two years ago. An included DAC/headphone amp interfaces with your PC via USB to drive the surround experience, and without this you still have a standard stereo headset that can connect to anything with a 3.5 mm jack.

g433_colors.jpg

The G433 is available in four colors, of which we have the red on hand today

If you have not read up on Logitech’s exclusive Pro-G driver, you will find in their description far more similarities to an audiophile headphone company than what we typically associate with a computer peripheral maker. Logitech explains the thinking behind the technology:

“The intent of the Pro-G driver design innovation is to minimize distortion that commonly occurs in headphone drivers. When producing lower frequencies (<1kHz), most speaker diaphragms operate as a solid mass, like a piston in an engine, without bending. When producing many different frequencies at the same time, traditional driver designs can experience distortion caused by different parts of the diaphragm bending when other parts are not. This distortion caused by rapid transition in the speaker material can be tuned and minimized by combining a more flexible material with a specially designed acoustic enclosure. We designed the hybrid-mesh material for the Pro-G driver, along with a unique speaker housing design, to allow for a more smooth transition of movement resulting in a more accurate and less distorted output. This design also yields a more efficient speaker due to less overall output loss due to distortion. The result is an extremely accurate and clear sounding audio experience putting the gamer closer to the original audio of the source material.”

Logitech’s claims about the Pro-G have, in my experience with the previous models featuring these drivers (G633/G933 Artemis Spectrum and G533 Wireless), been spot on, and I have found them to produce a clarity and detail that rivals ‘audiophile’ stereo headphones.

DSC_0573.jpg

Continue reading our review of the Logitech G433 7.1 Wired Surround Gaming Headset!

Author:
Manufacturer: ASUS

Overview

It feels like forever that we've been hearing about 802.11ad. For years it's been an up-and-coming technology, seeing some releases in devices like Dell's WiGig-powered wireless docking stations for Latitude notebooks.

However, with the release of the first wave of 802.11ad routers earlier this year from Netgear and TP-Link there has been new attention drawn to more traditional networking applications for it. This was compounded with the announcement of a plethora of X299-chipset based motherboards at Computex, with some integrating 802.11ad radios.

That brings us to today, where we have the ASUS Prime X299-Deluxe motherboard, which we used in our Skylake-X review. This almost $500 motherboard is the first device we've had our hands on that features both 802.11ac and 802.11ad networking, which presented a great opportunity to get some experience with WiGig. With promises of wireless transfer speeds up to 4.6Gbps, how could we not?

For our router, we decided to go with the Netgear Nighthawk X10. While the TP-Link and Netgear options appear to share the same model radio for 802.11ad usage, the Netgear has a port for 10 Gigabit networking, something necessary to test the full bandwidth promises of 802.11ad from a wired connection to a wireless client.

IMG_4611.JPG

The Nighthawk X10 is a beast of a router (with a $500 price tag to match) in its own right, but today we are solely focusing on it for 802.11ad testing.

Making things a bit complicated, the Nighthawk X10's 10GbE port utilizes an SFP+ connector, and the 10GbE NIC on our test server, with the ASUS X99‑E‑10G WS motherboard, uses an RJ45 connection for its 10 Gigabit port. In order to remedy this in a manner where we could still move the router away from the test client to test the range, we used a Netgear ProSAFE XS716E 10GigE switch as the go-between.

IMG_4610.JPG

Essentially, it works like this. We connect the Nighthawk X10 to the ProSAFE switch through an SFP+ cable, and then to the test server through 10GBase-T. The 802.11ad client is of course connected wirelessly to the Nighthawk X10.

On the software side, we are using the tried and true iPerf3. You run this software in server mode on the host machine and connect to that machine through the same piece of software in client mode. In this case, we are running iPerf with 10 parallel clients, over a 30-second period which is then averaged to get the resulting bandwidth of the connection.
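
For anyone wanting to reproduce this, a minimal sketch of our client-side procedure looks something like the following; the server IP is a placeholder, the remote machine is assumed to already be running iperf3 in server mode (iperf3 -s), and the JSON field names are those iperf3 emits for TCP tests:

```python
import json
import subprocess

SERVER_IP = "192.168.1.10"  # placeholder for the wired test server

# 10 parallel streams (-P 10) for 30 seconds (-t 30), JSON output (-J) for parsing
result = subprocess.run(
    ["iperf3", "-c", SERVER_IP, "-P", "10", "-t", "30", "-J"],
    capture_output=True, text=True, check=True
)

data = json.loads(result.stdout)
bps = data["end"]["sum_received"]["bits_per_second"]
print(f"Average throughput: {bps / 1e9:.2f} Gbps")
```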

bandwith-comparison.png

There are two main takeaways from this chart - the maximum bandwidth comparison to 802.11ac, and the scaling of 802.11ad with distance.

First, it's impressive to see such high bandwidth over a wireless connection. In a world where the vast majority of the Ethernet connections are still limited to 1Gbps, seeing up to 2.2Gbps over a wireless connection is very promising.

However, when you take a look at the bandwidth drops as we move the router and client further and further away, we start to see some of the main issues with 802.11ad.

Instead of using more traditional frequency ranges like 2.4GHz and 5.0GHz, as Wi-Fi has for so many years, 802.11ad operates in the unlicensed 60GHz spectrum. Without getting too technical about RF technology, this essentially means that 802.11ad is capable of extremely high bandwidth rates but cannot penetrate walls, with clear line of sight between devices being ideal. In our testing, we even found that the orientation of the router made a big difference; rotating the router 180 degrees was the difference between connecting and not connecting in some scenarios.
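
Even ignoring walls entirely, free-space path loss by itself shows the penalty of moving up to 60GHz. A quick sketch (free-space only, so it actually understates the real-world drop-off):

```python
import math

def fspl_db(distance_m: float, freq_ghz: float) -> float:
    """Free-space path loss in dB: 20*log10(d_km) + 20*log10(f_GHz) + 92.45."""
    return 20 * math.log10(distance_m / 1000) + 20 * math.log10(freq_ghz) + 92.45

for feet in (15, 35):
    d = feet * 0.3048
    print(f"{feet} ft: 5 GHz = {fspl_db(d, 5):.1f} dB, 60 GHz = {fspl_db(d, 60):.1f} dB")
# 60 GHz pays a constant ~21.6 dB penalty over 5 GHz at any given distance.
```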

As you can see, the drop off in bandwidth for the 802.11ad connection between our test locations 15 feet away from the client and 35 feet away from the client was quite stark. 

That being said, taking another look at our results you can see that in all cases the 802.11ad connection is faster than the 802.11ac results, which is good. For the promised applications of 802.11ad where the device and router are in the same room of reasonable size, WiGig seems to be delivering most of what is promised.

IMG_4613.JPG

It is likely we won't see high adoption rates of 802.11ad for networking computers. The range limitations are just too stark to be a solution that works for most homes. However, I do think WiGig has a lot of promise to replace cables in other situations. We've seen notebook docks utilizing WiGig and there has been a lot of buzz about VR headsets utilizing WiGig for wireless connectivity to gaming PCs.

802.11ad networking is in its infancy, so this is all subject to change. Stay tuned to PC Perspective for continuing news on 802.11ad and other wireless technologies!

Author:
Subject: Processors
Manufacturer: AMD

EPYC makes its move into the data center

Because we traditionally focus and feed on the excitement and build up surrounding consumer products, the AMD Ryzen 7 and Ryzen 5 launches were huge for us and our community. Finally seeing competition to Intel’s hold on the consumer market was welcome and necessary to move the industry forward, and we are already seeing the results of some of that with this week’s Core i9 release and pricing. AMD is, and deserves to be, proud of these accomplishments. But from a business standpoint, the impact of Ryzen on the bottom line will likely pale in comparison to how EPYC could fundamentally change the financial stability of AMD.

AMD EPYC is the server processor that takes aim at the Intel Xeon and its dominant status on the data center market. The enterprise field is a high margin, high profit area and while AMD once had significant share in this space with Opteron, that has essentially dropped to zero over the last 6+ years. AMD hopes to use the same tactic in the data center as they did on the consumer side to shock and awe the industry into taking notice; AMD is providing impressive new performance levels while undercutting the competition on pricing.

Introducing the AMD EPYC 7000 Series

Targeting the single- and 2-socket systems that make up ~95% of the market for data centers and enterprise, AMD EPYC is smartly not trying to punch above its weight class. This offers an enormous opportunity for AMD to take market share from Intel with minimal risk.

epyc-13.jpg

Many of the specifications here have been slowly shared by AMD over time, including at the recent financial analyst day, but seeing it placed on a single slide like this puts everything in perspective. In a single socket design, servers will be able to integrate 32 cores with 64 threads, 8x DDR4 memory channels with up to 2TB of memory capacity per CPU, 128 PCI Express 3.0 lanes for connectivity, and more.
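
For reference, the 2TB-per-socket figure follows directly from the channel count if you assume two DIMMs per channel populated with 128GB modules (my assumption on the DIMM configuration, not an AMD-stated breakdown):

```python
channels = 8
dimms_per_channel = 2    # assumed two DIMMs per channel
dimm_capacity_gb = 128   # assumed 128GB LRDIMMs
print(f"{channels * dimms_per_channel * dimm_capacity_gb} GB per socket")  # 2048 GB = 2TB
```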

Worth noting on this slide, and originally announced at the financial analyst day as well, is AMD’s intent to maintain socket compatibility for the next two generations. Both Rome and Milan, based on 7nm technology, will be drop-in upgrades for customers buying into EPYC platforms today. That kind of commitment from AMD is crucial to regain the trust of a market that needs those reassurances.

epyc-14.jpg

Here is the lineup as AMD is providing it for us today. The model numbers in the 7000 series use the second and third characters as a performance indicator (755x will be faster than 750x, for example) and the fourth character to indicate the generation of EPYC (here, the 1 indicates first gen). AMD has created four different core count divisions along with a few TDP options to provide choices for all types of potential customers. It is worth noting that though this table might seem a bit intimidating, it is drastically more streamlined than the Intel Xeon product line that exists today, or that will exist in the future. AMD is offering immediate availability of the top five CPUs in this stack, with the bottom four due before the end of July.

Continue reading about the AMD EPYC data center processor!

Author:
Subject: Processors
Manufacturer: Intel

Specifications and Design

Intel is at an important crossroads for its consumer product lines. Long accused of ignoring the gaming and enthusiast markets, focusing instead on laptops and smartphones/tablets at the direct expense of the DIY user, Intel had raised prices and shown only a limited ability to increase per-die performance over a fairly extended period. The release of the AMD Ryzen processor, along with the pending release of the Threadripper product line with up to 16 cores, has moved Intel into a higher gear; it is now more prepared to increase features and performance, and to lower prices.

We have already talked about the majority of the specifications, pricing, and feature changes of the Core i9/Core i7 lineup with the Skylake-X designation, but it is worth including them here, again, in our review of the Core i9-7900X for reference purposes.

  Core i9-7980XE Core i9-7960X Core i9-7940X Core i9-7920X Core i9-7900X Core i7-7820X Core i7-7800X Core i7-7740X Core i5-7640X
Architecture Skylake-X Skylake-X Skylake-X Skylake-X Skylake-X Skylake-X Skylake-X Kaby Lake-X Kaby Lake-X
Process Tech 14nm+ 14nm+ 14nm+ 14nm+ 14nm+ 14nm+ 14nm+ 14nm+ 14nm+
Cores/Threads 18/36 16/32 14/28 12/24 10/20 8/16 6/12 4/8 4/4
Base Clock ? ? ? ? 3.3 GHz 3.6 GHz 3.5 GHz 4.3 GHz 4.0 GHz
Turbo Boost 2.0 ? ? ? ? 4.3 GHz 4.3 GHz 4.0 GHz 4.5 GHz 4.2 GHz
Turbo Boost Max 3.0 ? ? ? ? 4.5 GHz 4.5 GHz N/A N/A N/A
Cache 16.5MB (?) 16.5MB (?) 16.5MB (?) 16.5MB (?) 13.75MB 11MB 8.25MB 8MB 6MB
Memory Support ? ? ? ? Quad-Channel DDR4-2666 Quad-Channel DDR4-2666 Quad-Channel DDR4-2666 Dual-Channel DDR4-2666 Dual-Channel DDR4-2666
PCIe Lanes ? ? ? ? 44 28 28 16 16
TDP 165 watts (?) 165 watts (?) 165 watts (?) 165 watts (?) 140 watts 140 watts 140 watts 112 watts 112 watts
Socket 2066 2066 2066 2066 2066 2066 2066 2066 2066
Price $1999 $1699 $1399 $1199 $999 $599 $389 $339 $242

There is a lot to take in here. The three most interesting points are that, one, Intel plans to one-up AMD Threadripper by offering an 18-core processor. Two, which is potentially more interesting, is that it also wants to change the perception of the X299-class platform by offering lower price, lower core count CPUs like the quad-core, non-HyperThreaded Core i5-7640X. Third, we also see the first ever branding of Core i9.

Intel only provided detailed specifications up to the Core i9-7900X, which is a 10-core / 20-thread processor that has a base clock of 3.3 GHz and a Turbo peak of 4.5 GHz (using the new Turbo Boost Max Technology 3.0). It sports 13.75MB of cache thanks to an updated cache configuration, it includes 44 lanes of PCIe 3.0, an increase of 4 lanes over Broadwell-E, it has quad-channel DDR4 memory up to 2666 MHz and it has a 140 watt TDP. The new LGA2066 socket will be utilized. Pricing for this CPU is set at $999, which is interesting for a couple of reasons. First, it is $700 less than the starting MSRP of the 10c/20t Core i7-6950X from one year ago; obviously a big plus. However, there is quite a ways UP the stack, with the 18c/36t Core i9-7980XE coming in at a cool $1999.

  Core i9-7900X Core i7-6950X Core i7-7700K
Architecture Skylake-X Broadwell-E Kaby Lake
Process Tech 14nm+ 14nm+ 14nm+
Cores/Threads 10/20 10/20 4/8
Base Clock 3.3 GHz 3.0 GHz 4.2 GHz
Turbo Boost 2.0 4.3 GHz 3.5 GHz 4.5 GHz
Turbo Boost Max 3.0 4.5 GHz 4.0 GHz N/A
Cache 13.75MB 25MB 8MB
Memory Support Quad-Channel DDR4-2666 Quad-Channel DDR4-2400 Dual-Channel DDR4-2400
PCIe Lanes 44 40 16
TDP 140 watts 140 watts 91 watts
Socket 2066 2011 1151
Price (Launch) $999 $1700 $339

The next CPU down the stack is compelling as well. The Core i7-7820X is the new 8-core / 16-thread HEDT option from Intel, with similar clock speeds to the 10-core above it (save the higher base clock). It has 11MB of L3 cache and 28 lanes of PCI Express (4 higher than Broadwell-E), but carries a $599 price tag. Compared to the 8-core 6900K, that is ~$400 lower, while the new Skylake-X part includes a 700 MHz clock speed advantage. That’s huge, and is a direct attack on the AMD Ryzen 7 1800X, which sells for $499 today and cut Intel off at the knees this March. In fact, the base clock of the Core i7-7820X is only 100 MHz lower than the maximum Turbo Boost clock of the Core i7-6900K!

intel1.jpg

It is worth noting the pricing gap between the 7820X and the 7900X. That $400 jump seems huge and out of place when compared to the deltas in the rest of the stack, which never exceed $300 (and that is at the top two slots). Intel is clearly concerned about the Ryzen 7 1800X and making sure it has options to compete at that point (and below), but feels less threatened by the upcoming Threadripper CPUs. Pricing out the 10+ core CPUs today, without knowing what AMD is going to do there, is a risk and could put Intel in the same position it was in with the Ryzen 7 release.

Continue reading our review of the Intel Core i9-7900X Processor!

Author:
Manufacturer: Corsair

Introduction and Features

Introduction

2-Products.jpg

(Courtesy of Corsair)

Corsair recently refreshed their TX Series power supplies which now include four new models: the TX550M, TX650M, TX750M, and TX850M. The new TX-M Series sits right in the middle of Corsair’s PC power supply lineup and was designed to offer efficient operation and easy installation. Corsair states the TX-M Series power supplies provide industrial build quality, 80 Plus Gold efficiency, extremely tight voltages and come with a semi-modular cable set. In addition, the TX-M Series power supplies use a compact chassis measuring only 140mm deep and come backed by a 7-year warranty.

3-Side-diag.jpg

We will be taking a detailed look at the TX-M Series 750W power supply in this review.

Corsair TX-M Series PSU Key Features:

•    550W, 650W, 750W, and 850W models
•    Server-grade 50°C max operating temperature
•    7-Year warranty
•    80 PLUS Gold certified
•    All capacitors are Japanese brand, 105°C rated
•    Compact chassis measures only 140mm (5.5”) deep
•    Quiet 120mm cooling fan
•    Semi-modular cable set
•    Comply with ATX12V v2.4 and EPS 2.92 standards
•    6th Generation Intel Core processor Ready
•    Full suite of protection circuits: OVP, UVP, SCP, OPP and OTP
•    Active PFC with full range AC input (100-240 VAC)
•    MSRP for the TX750M is $99.99 USD

4-Front-diag.jpg

Here is what Corsair has to say about the new TX-M Series power supplies: “TX Series semi-modular power supplies are ideal for basic desktop systems where low energy use, low noise, and simple installation are essential. All of the capacitors are 105°C rated, Japanese brand, to insure solid power delivery and long term reliability. 80 Plus Gold efficiency reduces operating costs and excess heat. Flat, sleeved black modular cables with clearly marked connectors make installation fast and straightforward, with good-looking results."

Please continue reading our review of the Corsair TX750M PSU!!!

Author:
Manufacturer: PC Perspective

Why?

Astute readers of the site might remember the original story we did on Bitcoin mining in 2011, the good ole' days where the concept of the blockchain was new and exciting and mining Bitcoin on a GPU was still plenty viable.

gpu-bitcoin.jpg

However, that didn't last long, as the race for cash led people to develop Application Specific Integrated Circuits (ASICs) dedicated solely to mining Bitcoin quickly while sipping power. Use of the expensive ASICs drove the difficulty of mining Bitcoin through the roof and killed any chance of profitability for mere mortals mining the cryptocurrency.
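
To make the idea of "difficulty" concrete, here is a toy proof-of-work loop (a simplified sketch, not Bitcoin's actual header format): the network periodically raises the number of leading zero bits required, and ASICs grind through these double-SHA-256 attempts orders of magnitude faster per watt than any GPU.

```python
import hashlib

def mine(header: bytes, difficulty_bits: int) -> int:
    """Toy proof-of-work: find a nonce so that the double-SHA-256 of
    header+nonce, read as an integer, falls below the target."""
    target = 1 << (256 - difficulty_bits)  # more difficulty bits -> smaller target
    nonce = 0
    while True:
        candidate = header + nonce.to_bytes(8, "little")
        digest = hashlib.sha256(hashlib.sha256(candidate).digest()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

# A handful of difficulty bits is instant on a CPU; Bitcoin's 2017 difficulty
# corresponds to roughly 70 leading zero bits, which is why ASICs took over.
print(mine(b"example header", 16))
```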

Cryptomining saw a resurgence in late 2013 with the popular adoption of alternate cryptocurrencies, specifically Litecoin, which was based on the Scrypt algorithm instead of SHA-256 like Bitcoin. This meant that the ASICs developed for mining Bitcoin were useless. This is also the period of time that many of you may remember as the "Dogecoin" era, my personal favorite cryptocurrency of all time.

dogecoin-300.png

Defenders of these new "altcoins" claimed that Scrypt was different enough that ASICs would never be developed for it, and GPU mining would remain viable for a larger portion of users. As it turns out, the promise of money always wins out, and we soon saw Scrypt ASICs. Once again, the market for GPU mining crashed.

That brings us to today, and what I am calling "Third-wave Cryptomining." 

While the general populace stopped caring about cryptocurrency as a whole, the dedicated group that was left continued to develop altcoins. These different currencies are based on various algorithms and other proofs of work (see technologies like Storj, which use the blockchain for a decentralized Dropbox-like service!).

As you may have predicted, for various reasons that might be difficult to historically quantify, there is another very popular cryptocurrency from this wave of development, Ethereum.

ETHEREUM-LOGO_LANDSCAPE_Black.png

Ethereum is based on the Dagger-Hashimoto algorithm and has a whole host of quirks that set it apart from other cryptocurrencies. We aren't here to get deep in the woods on the methods behind different blockchain implementations, but if you have some time check out the Ethereum White Paper. It's all very fascinating.

Continue reading our look at this third wave of cryptocurrency!

Author:
Subject: Mobile
Manufacturer: Dell

Overview

Editor’s Note: After our review of the Dell XPS 13 2-in-1, Dell contacted us about our performance results. They found our numbers were significantly lower than their own internal benchmarks. They offered to send us a replacement notebook to test, and we have done so. After spending some time with the new unit we have seen much higher results, more in line with Dell’s performance claims. We haven’t been able to find any differences between our initial sample and the new notebook, and our old sample has been sent back to Dell for further analysis. Due to these changes, the performance results and conclusion of this review have been edited to reflect the higher performance results.

It's difficult to believe that it's only been a little over 2 years since we got our hands on the revised Dell XPS 13. Placing an emphasis on minimalistic design, large displays in small chassis, and high-quality construction, the Dell XPS 13 seems to have influenced the "thin and light" market in some noticeable ways.

IMG_4579.JPG

Aiming their sights at a slightly different corner of the market, this year Dell unveiled the XPS 13 2-in-1, a convertible tablet with a 360-degree hinge. However, instead of just putting a new hinge on the existing XPS 13, Dell has designed the all-new XPS 13 2-in-1 from the ground up to be even more "thin and light" than its older sibling, which has meant some substantial design changes.

Since we are a PC hardware-focused site, let's take a look under the hood to get an idea of what exactly we are talking about with the Dell XPS 13 2-in-1.

Dell XPS 13 2-in-1
MSRP $999 $1199 $1299 $1399
Screen 13.3” FHD (1920 x 1080) InfinityEdge touch display
CPU Core i5-7Y54 Core i7-7Y75
GPU Intel HD Graphics 615
RAM 4GB 8GB 16GB
Storage 128GB SATA 256GB PCIe
Network Intel 8265 802.11ac MIMO (2.4 GHz, 5.0 GHz)
Bluetooth 4.2
Display Output 1 x Thunderbolt 3, 1 x USB 3.1 Type-C (DisplayPort)
Connectivity USB 3.0 Type-C, 3.5mm headphone, USB 3.0 x 2 (MateDock)
Audio Dual Array Digital Microphone
Stereo Speakers (1W x 2)
Weight 2.7 lbs ( 1.24 kg)
Dimensions 11.98-in x 7.81-in x 0.32-0.54-in
(304mm x 199mm x 8 -13.7 mm)
Battery 46 WHr
Operating System Windows 10 Home / Pro (+$50)

One of the more striking design decisions from a hardware perspective is the decision to go with the low power Core i5-7Y54 processor, or as you may be more familiar with it from its older naming scheme, Core M. For the Kaby Lake generation, Intel has decided to drop the Core M branding (though oddly Core m3 still exists) and integrate these lower power parts into the regular Core branding scheme.

Click here to continue reading our review of the Dell XPS 13 2-in-1