Author:
Manufacturer: Seasonic

Introduction and Features

Introduction

Seasonic is in the process of overhauling their entire PC power supply lineup. They began with the introduction of the PRIME Series in late 2016 and are now introducing the new FOCUS family, which will include three different series ranging from 450W up to 850W output capacity with either Platinum or Gold efficiency certification. In this review we will be taking a detailed look at the new Seasonic FOCUS PLUS Gold (FX) 650W power supply. And just to prove that reviewers are not being sent hand-picked golden samples, our PSU was delivered straight from Newegg.com inventory.

2-Banner.jpg

The Seasonic FOCUS PLUS Gold series includes four models: 550W, 650W, 750W, and 850W. In addition to 80 Plus Gold certification, the FOCUS Plus (FX) series features a small footprint chassis (140mm deep), all modular cables, high quality components, and comes backed by a 10-year warranty.

•    FOCUS PLUS Gold (FX) 550W: $79.90 USD
•    FOCUS PLUS Gold (FX) 650W: $89.90 USD
•    FOCUS PLUS Gold (FX) 750W: $99.90 USD
•    FOCUS PLUS Gold (FX) 850W: $109.90 USD

3-Diag-cables.jpg

Seasonic FOCUS PLUS 650W Gold (FX) PSU Key Features:

•    650W Continuous DC output at up to 50°C (715W peak)
•    80 PLUS Gold certified for high efficiency
•    Small footprint: chassis measures just 140mm (5.5”) deep
•    Micro Tolerance load regulation @ 1%
•    Fully-modular cables
•    DC-to-DC Voltage converters
•    Single +12V output
•    Multi-GPU Technology support
•    Quiet 120mm Fluid Dynamic Bearing (FDB) cooling fan
•    Haswell support
•    Active Power Factor correction with Universal AC input (100 to 240 VAC)
•    Safety protections: OPP, OVP, UVP, OCP, OTP and SCP
•    10-Year warranty

Please continue reading our review of the FOCUS PLUS 650W Gold PSU!!!

Author:
Manufacturer: AMD

An interesting night of testing

Last night I did our first ever live benchmarking session using the just-arrived Radeon Vega Frontier Edition air-cooled graphics card. Purchasing the card directly from a reseller, rather than being sampled by AMD, gave us the opportunity to test a new flagship product without an NDA in place to keep us silent, so I thought it would be fun to let the audience and community go along for the ride of a traditional benchmarking session. Though I didn’t get everything I wanted done in that 4.5-hour window, it was great to see the interest and excitement for the product and the results that we were able to generate.

But to the point of the day – our review of the Radeon Vega Frontier Edition graphics card. Based on the latest flagship GPU architecture from AMD, the Radeon Vega FE card has a lot riding on its shoulders, despite not being aimed at gamers. It is the FIRST card to be released with Vega at its heart. It is the FIRST instance of HBM2 being utilized in a consumer graphics card. It is the FIRST in a new attempt from AMD to target the group of users between gamers and professional users (like NVIDIA has addressed with Titan previously). And, it is the FIRST to command as much attention and expectation for the future of a company, a product line, and a fan base.

IMG_4621.JPG

Other than the architectural details that AMD gave us previously, we honestly haven’t been briefed on the performance expectations or the advancements in Vega that we should know about. The Vega FE products were released to the market with very little background, only well-spun turns of phrase emphasizing the value of the high performance and compatibility for creators. There has been no typical “tech day” for the media to learn fully about Vega and there were no samples from AMD to media or analysts (that I know of). Unperturbed by that, I purchased one (several actually, seeing which would show up first) and decided to do our testing.

On the following pages, you will see a collection of tests and benchmarks that range from 3DMark to The Witcher 3 to SPECviewperf to LuxMark, attempting to give as wide a viewpoint of the Vega FE product as I can in a rather short time window. The card is sexy (maybe the best looking I have yet seen), but will disappoint many on the gaming front. For professional users that are okay not having certified drivers, performance there is more likely to raise some impressed eyebrows.

Radeon Vega Frontier Edition Specifications

Through leaks and purposeful information dumps over the past couple of months, we already knew a lot about the Radeon Vega Frontier Edition card prior to the official sale date this week. But now with final specifications in hand, we can start to dissect what this card actually is.

  Vega Frontier Edition Titan Xp GTX 1080 Ti Titan X (Pascal) GTX 1080 TITAN X GTX 980 R9 Fury X R9 Fury
GPU Vega GP102 GP102 GP102 GP104 GM200 GM204 Fiji XT Fiji Pro
GPU Cores 4096 3840 3584 3584 2560 3072 2048 4096 3584
Base Clock 1382 MHz 1480 MHz 1480 MHz 1417 MHz 1607 MHz 1000 MHz 1126 MHz 1050 MHz 1000 MHz
Boost Clock 1600 MHz 1582 MHz 1582 MHz 1480 MHz 1733 MHz 1089 MHz 1216 MHz - -
Texture Units ? 224 224 224 160 192 128 256 224
ROP Units 64 96 88 96 64 96 64 64 64
Memory 16GB 12GB 11GB 12GB 8GB 12GB 4GB 4GB 4GB
Memory Clock 1890 MHz 11400 MHz 11000 MHz 10000 MHz 10000 MHz 7000 MHz 7000 MHz 1000 MHz 1000 MHz
Memory Interface 2048-bit HBM2 384-bit G5X 352-bit 384-bit G5X 256-bit G5X 384-bit 256-bit 4096-bit (HBM) 4096-bit (HBM)
Memory Bandwidth 483 GB/s 547.7 GB/s 484 GB/s 480 GB/s 320 GB/s 336 GB/s 224 GB/s 512 GB/s 512 GB/s
TDP 300 watts 250 watts 250 watts 250 watts 180 watts 250 watts 165 watts 275 watts 275 watts
Peak Compute 13.1 TFLOPS 12.0 TFLOPS 10.6 TFLOPS 10.1 TFLOPS 8.2 TFLOPS 6.14 TFLOPS 4.61 TFLOPS 8.60 TFLOPS 7.20 TFLOPS
Transistor Count ? 12.0B 12.0B 12.0B 7.2B 8.0B 5.2B 8.9B 8.9B
Process Tech 14nm 16nm 16nm 16nm 16nm 28nm 28nm 28nm 28nm
MSRP (current) $999 $1200 $699 $1200 $599 $999 $499 $649 $549

The Vega FE shares enough of its specification listing with the Fury X that it deserves special recognition. Both cards sport 4096 stream processors, 64 ROPs and 256 texture units. The Vega FE runs at much higher clock speeds (35-40% higher), upgrades to the next generation of high-bandwidth memory, and quadruples capacity. Still, there will be plenty of comparisons between the two products, looking to measure IPC changes moving from the CUs (compute units) of Fiji to the NCUs built for Vega.

DSC03536 copy.jpg

The Radeon Vega GPU

The clock speeds also see another shift this time around with the adoption of “typical” clock speeds. This is something that NVIDIA has been using for a few generations with the introduction of GPU Boost, and it tells the consumer how high they should expect clocks to go in a nominal workload. Normally I would say a gaming workload, but since this card is supposedly for professional users and the like, I assume this applies across the board. So even though the GPU is rated at a “peak” clock rate of 1600 MHz, the “typical” clock rate is 1382 MHz. (As an early aside, I did NOT see 1600 MHz in any of my testing time with our Vega FE, but it did settle in at a ~1440 MHz clock most of the time.)

Continue reading our review of the AMD Radeon Vega Frontier Edition!

Subject: Mobile
Manufacturer: Samsung

Introduction and Specifications

The Galaxy S8 Plus is Samsung's first ‘big’ phone since the Note7 fiasco, and just looking at it, the design and engineering effort seems to have paid off. Simply put, the GS8/GS8+ might just be the most striking handheld devices ever made. The U.S. version sports the newest and fastest Qualcomm platform with the Snapdragon 835, while the international version of the handset uses Samsung’s Exynos 8895 Octa SoC. We have the former on hand, and it was this MSM8998-powered version of the 6.2-inch GS8+ that I spent some quality time with over the past two weeks.

DSC_0836.jpg

There is more to a phone than its looks, and even in that department the Galaxy S8+ raises questions about durability with that large, curved glass screen. With the front and back panels wrapping around as they do, the phone has a very slim, elegant look that feels fantastic in hand. And while one drop could easily ruin your day with any smartphone, this design is particularly unforgiving - and screen replacement costs with these new S8 phones are especially high due to the difficulty of the repair and the need to replace the AMOLED display along with the laminated glass.

Setting the fragility aside for a moment and embracing the case-free lifestyle I was so tempted to adopt (I didn't want to spoil the best in-hand feel I've had from a handset, or hide its knockout design), I got down to objectively assessing the phone's performance. This is the first production phone we have had on hand with the new Snapdragon 835 platform, and we will be able to draw some definitive performance conclusions compared to SoCs in other shipping phones.

DSC_0815.jpg

Samsung Galaxy S8+ Specifications (US Version)
Display 6.2-inch 1440x2960 AMOLED
SoC Qualcomm Snapdragon 835 (MSM8998)
CPU Cores 4x 2.45 GHz Kryo
4x 1.90 GHz Kryo
GPU Cores Adreno 540
RAM 4 / 6 GB LPDDR4 (6 GB with 128 GB storage option)
Storage 64 / 128 GB
Network Snapdragon X16 LTE
Connectivity 802.11ac Wi-Fi
2x2 MU-MIMO
Bluetooth 5.0; A2DP, aptX
USB 3.1 (Type-C)
NFC
Battery 3500 mAh Li-Ion
Dimensions 159.5 x 73.4 x 8.1 mm, 173 g
OS Android 7.0

Continue reading our review of the Samsung Galaxy S8+ smartphone!

Subject: Storage
Manufacturer: Intel

Introduction, Specifications and Packaging

Introduction

Today Intel is launching a new line of client SSDs - the SSD 545S Series. These are simple 2.5" SATA parts that aim to offer good performance at an economical price point. Low-cost SSDs are not typically Intel's strong suit, mainly because the company is extremely rigorous in its design and testing, but the ramping up of IMFT 3D NAND, now entering its second generation stacked to 64 layers, should finally help Intel get the cost/GB down to levels previously enjoyed by other manufacturers.

diag.jpg

Intel and Micron jointly announced 3D NAND just over two years ago, and a year ago we talked about the next IMFT capacity bump coming as a 'double' move. Well, that's only partially happening today. The 545S line will carry the new IMFT 64-layer flash, but the capacity per die remains the same 256Gbit (32GB) as the previous generation parts. The dies will be smaller, meaning more can fit on a wafer, which drives down production costs, but the larger 512Gbit dies won't be coming until later on (and in a different product line - Intel told us they do not intend to mix die types within the same lines as we've seen Samsung do in the past).

Specifications

specs.png

There are no surprises here, though I am happy to see a 'sustained sequential performance' specification stated by an SSD maker, and I'm happier to see Intel claiming such a high figure for sustained writes (implying this is the TLC writing speed as the SLC cache would be exhausted in sustained writes).

I'm also happy to see sensible endurance specs for once. We've seen oddly non-scaling figures in prior SSD releases from multiple companies. Clearly stating a specific TBW 'per 128GB' makes a lot of sense here, and the number itself isn't that bad, either.

Packaging

packaging.jpg

Simplified packaging from Intel here, apparently to help further reduce shipping costs.

Read on for our full review of the Intel 545S 512GB SSD!

Introduction and Technical Specifications

Introduction

02-cooler-all.jpg

Courtesy of Thermalright

Thermalright is a well-established brand name, known for their high-performance air coolers. The newest addition to their TRUE Spirit Series line of air coolers, the TRUE Spirit 140 Direct, is a redesigned version of their TRUE Spirit 140 Power air cooler, offering a similar level of performance at a lower price point. The Thermalright TRUE Spirit 140 Direct is a slim, single-tower cooler featuring a nickel-plated copper base and an aluminum radiator with a 140mm fan. Additionally, Thermalright designed the cooler to be compatible with all modern platforms. The TRUE Spirit 140 Direct cooler is available with an MSRP of $46.95.

03-cooler-front-fan.jpg

Courtesy of Thermalright

04-cooler-profile-nofan.jpg

Courtesy of Thermalright

05-cooler-side-fan.jpg

Courtesy of Thermalright

06-cooler-specs.jpg

Courtesy of Thermalright

The Thermalright TRUE Spirit 140 Direct cooler consists of a single finned aluminum tower radiator fed by five 6mm diameter nickel-plated copper heat pipes in a U-shaped configuration. The cooler can accommodate up to two 140mm fans, but comes standard with a single fan. The fans are held to the radiator tower using metal clips that run through the tower body. The cooler is secured to the CPU using screws on either side of the mount plate that fasten to the unit's mounting cage installed on the motherboard.

Continue reading our review of the Thermalright TRUE Spirit 140 Direct CPU air cooler!

Subject: General Tech
Manufacturer: Logitech

Introduction and Specifications

Logitech has been releasing gaming headphones with steady regularity of late, and this summer we have another new model to examine in the G433 Gaming Headset, which has just been released (along with the G233). This wired, 7.1-channel capable headset is quite different visually from previous Logitech models, as it is finished with an interesting “lightweight, hydrophobic fabric shell” and offered in various colors (our review pair is a bright red). But the G433 has function to go along with the style, as Logitech has focused on both digital and analog sound quality with this third model to incorporate Logitech’s Pro-G drivers. How do they sound? We’ll find out!

DSC_0555.jpg

One of the main reasons to consider a gaming headset like this in the first place is the ability to take advantage of multi-channel surround sound from your PC, and with the G433 (as with the previously reviewed G533) this is accomplished via DTS Headphone:X, a technology which in my experience is capable of producing a convincing sound field that is very close to that of multiple surround drivers. All of this is created via the same pair of left/right drivers that handle music, and here Logitech is able to boast of some very impressive engineering that produced the Pro-G driver introduced two years ago. An included DAC/headphone amp interfaces with your PC via USB to drive the surround experience, and without this you still have a standard stereo headset that can connect to anything with a 3.5 mm jack.

g433_colors.jpg

The G433 is available in four colors, of which we have the red on hand today

If you have not read up on Logitech’s exclusive Pro-G driver, you will find in their description far more similarities to an audiophile headphone company than what we typically associate with a computer peripheral maker. Logitech explains the thinking behind the technology:

“The intent of the Pro-G driver design innovation is to minimize distortion that commonly occurs in headphone drivers. When producing lower frequencies (<1kHz), most speaker diaphragms operate as a solid mass, like a piston in an engine, without bending. When producing many different frequencies at the same time, traditional driver designs can experience distortion caused by different parts of the diaphragm bending when other parts are not. This distortion caused by rapid transition in the speaker material can be tuned and minimized by combining a more flexible material with a specially designed acoustic enclosure. We designed the hybrid-mesh material for the Pro-G driver, along with a unique speaker housing design, to allow for a more smooth transition of movement resulting in a more accurate and less distorted output. This design also yields a more efficient speaker due to less overall output loss due to distortion. The result is an extremely accurate and clear sounding audio experience putting the gamer closer to the original audio of the source material.”

Logitech’s claims about the Pro-G have, in my experience with the previous models featuring these drivers (G633/G933 Artemis Spectrum and G533 Wireless), been spot on, and I have found them to produce a clarity and detail that rivals ‘audiophile’ stereo headphones.

DSC_0573.jpg

Continue reading our review of the Logitech G433 7.1 Wired Surround Gaming Headset!

Author:
Manufacturer: ASUS

Overview

It feels like forever that we've been hearing about 802.11ad. For years it's been an up-and-coming technology, seeing some releases in devices like Dell's WiGig-powered wireless docking stations for Latitude notebooks.

However, with the release of the first wave of 802.11ad routers earlier this year from Netgear and TP-Link, new attention has been drawn to more traditional networking applications for it. This was compounded by the announcement of a plethora of X299-chipset based motherboards at Computex, some of which integrate 802.11ad radios.

That brings us to today, where we have the ASUS Prime X299-Deluxe motherboard, which we used in our Skylake-X review. This almost $500 motherboard is the first device we've had our hands on that features both 802.11ac and 802.11ad networking, which presented a great opportunity to get experience with WiGig. With promises of wireless transfer speeds up to 4.6Gbps, how could we not?

For our router, we decided to go with the Netgear Nighthawk X10. While the TP-Link and Netgear options appear to share the same model radio for 802.11ad usage, the Netgear has a port for 10 Gigabit networking, something necessary to test the full bandwidth promises of 802.11ad from a wired connection to a wireless client.

IMG_4611.JPG

The Nighthawk X10 is a beast of a router (with a $500 price tag to match) in its own right, but today we are solely focusing on it for 802.11ad testing.

Making things a bit complicated, the Nighthawk X10's 10GbE port utilizes an SFP+ connector, while the 10GbE NIC on our test server, with the ASUS X99‑E‑10G WS motherboard, uses an RJ45 connection for its 10 Gigabit port. To bridge the two connector types in a way that still let us move the router away from the test client for range testing, we used a Netgear ProSAFE XS716E 10GigE switch as the go-between.

IMG_4610.JPG

Essentially, it works like this. We connect the Nighthawk X10 to the ProSAFE switch through an SFP+ cable, and then to the test server through 10GBase-T. The 802.11ad client is of course connected wirelessly to the Nighthawk X10.

On the software side, we are using the tried and true iPerf3. You run this software in server mode on the host machine and connect to that machine through the same piece of software in client mode. In this case, we are running iPerf with 10 parallel streams over a 30-second period, which is then averaged to get the resulting bandwidth of the connection.
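
For anyone who wants to reproduce this at home, here is a minimal sketch of the kind of invocation we used, wrapped in Python so the JSON report can be parsed. The server IP is a placeholder, iperf3 must be installed on both ends with 'iperf3 -s' already running on the wired host, and the exact JSON field layout can vary slightly between iperf3 versions:

import json
import subprocess

# Run the iperf3 client: 10 parallel streams (-P 10) for 30 seconds (-t 30),
# asking for machine-readable output (--json).
SERVER = "192.168.1.10"  # placeholder address for the wired 10GbE iperf3 server

result = subprocess.run(
    ["iperf3", "-c", SERVER, "-P", "10", "-t", "30", "--json"],
    capture_output=True, text=True, check=True
)
report = json.loads(result.stdout)

# Total across all 10 streams, averaged over the 30-second run.
bits_per_second = report["end"]["sum_received"]["bits_per_second"]
print(f"{bits_per_second / 1e9:.2f} Gbps")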

bandwith-comparison.png

There are two main takeaways from this chart - the maximum bandwidth comparison to 802.11ac, and the scaling of 802.11ad with distance.

First, it's impressive to see such high bandwidth over a wireless connection. In a world where the vast majority of Ethernet connections are still limited to 1Gbps, seeing up to 2.2Gbps over a wireless connection is very promising.

However, when you take a look at the bandwidth drops as we move the router and client further and further away, we start to see some of the main issues with 802.11ad.

Instead of using the more traditional 2.4GHz and 5.0GHz frequency ranges we've seen from Wi-Fi for so many years, 802.11ad operates in the unlicensed 60GHz spectrum. Without getting too technical about RF, this means that 802.11ad is capable of extremely high bandwidth rates but cannot penetrate walls, so line of sight between devices is ideal. In our testing, we even found that the orientation of the router made a big difference; rotating the router 180 degrees was the difference between connecting and not connecting in some scenarios.
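
As a quick back-of-envelope illustration (our arithmetic, not a figure from any vendor): free-space path loss grows with 20·log10(frequency), so moving from 5 GHz up to 60 GHz adds roughly 20·log10(60/5) ≈ 21.6 dB of extra loss at the same distance, and that is before the 60 GHz band's poor ability to pass through walls and bodies is even considered.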

As you can see, the drop off in bandwidth for the 802.11ad connection between our test locations 15 feet away from the client and 35 feet away from the client was quite stark. 

That being said, taking another look at our results you can see that in all cases the 802.11ad connection is faster than the 802.11ac results, which is good. For the promised applications of 802.11ad where the device and router are in the same room of reasonable size, WiGig seems to be delivering most of what is promised.

IMG_4613.JPG

It is likely we won't see high adoption rates of 802.11ad for networking computers. The range limitations are just too stark to be a solution that works for most homes. However, I do think WiGig has a lot of promise to replace cables in other situations. We've seen notebook docks utilizing WiGig and there has been a lot of buzz about VR headsets utilizing WiGig for wireless connectivity to gaming PCs.

802.11ad networking is in its infancy, so this is all subject to change. Stay tuned to PC Perspective for continuing news on 802.11ad and other wireless technologies!

Author:
Subject: Processors
Manufacturer: AMD

EPYC makes its move into the data center

Because we traditionally focus and feed on the excitement and build up surrounding consumer products, the AMD Ryzen 7 and Ryzen 5 launches were huge for us and our community. Finally seeing competition to Intel’s hold on the consumer market was welcome and necessary to move the industry forward, and we are already seeing the results of some of that with this week’s Core i9 release and pricing. AMD is, and deserves to be, proud of these accomplishments. But from a business standpoint, the impact of Ryzen on the bottom line will likely pale in comparison to how EPYC could fundamentally change the financial stability of AMD.

AMD EPYC is the server processor that takes aim at the Intel Xeon and its dominant status in the data center market. The enterprise field is a high margin, high profit area, and while AMD once had significant share in this space with Opteron, that share has essentially dropped to zero over the last 6+ years. AMD hopes to use the same tactic in the data center as it did on the consumer side to shock and awe the industry into taking notice: impressive new performance levels while undercutting the competition on pricing.

Introducing the AMD EPYC 7000 Series

Targeting the single and 2-socket systems that make up ~95% of the market for data centers and enterprise, AMD EPYC is smartly not trying to punch above its weight class. This offers an enormous opportunity for AMD to take market share from Intel with minimal risk.

epyc-13.jpg

Many of the specifications here have been slowly shared by AMD over time, including at the recent financial analyst day, but seeing it placed on a single slide like this puts everything in perspective. In a single socket design, servers will be able to integrate 32 cores with 64 threads, 8x DDR4 memory channels with up to 2TB of memory capacity per CPU, 128 PCI Express 3.0 lanes for connectivity, and more.

Worth noting on this slide, and originally announced at the financial analyst day as well, is AMD’s intent to maintain socket compatibility for the next two generations. Both Rome and Milan, based on 7nm technology, will be drop-in upgrades for customers buying into EPYC platforms today. That kind of commitment from AMD is crucial to regain the trust of a market that needs those reassurances.

epyc-14.jpg

Here is the lineup as AMD is providing it for us today. The model numbers in the 7000 series use the second and third characters as a performance indicator (755x will be faster than 750x, for example) and the fourth character to indicate the generation of EPYC (here, the 1 indicates first gen). AMD has created four different core count divisions along with a few TDP options to cover all types of potential customers. Though this table might seem a bit intimidating, it is drastically simpler than the Intel Xeon product line that exists today, or that will exist in the future. AMD is offering immediate availability of the top five CPUs in this stack, with the bottom four due before the end of July.

Continue reading about the AMD EPYC data center processor!

Author:
Subject: Processors
Manufacturer: Intel

Specifications and Design

Intel is at an important crossroads for its consumer product lines. Long accused of ignoring the gaming and enthusiast markets, focusing instead on laptops and smartphones/tablets at the direct expense of the DIY user, Intel had raised prices and shown only a limited ability to increase per-die performance over a fairly extended period. The release of the AMD Ryzen processor, along with the pending release of the Threadripper product line with up to 16 cores, has moved Intel into a higher gear; the company now appears more willing to add features, increase performance, and lower prices.

We have already talked about the majority of the specifications, pricing, and feature changes of the Core i9/Core i7 lineup with the Skylake-X designation, but it is worth including them here, again, in our review of the Core i9-7900X for reference purposes.

  Core i9-7980XE Core i9-7960X Core i9-7940X Core i9-7920X Core i9-7900X Core i7-7820X Core i7-7800X Core i7-7740X Core i5-7640X
Architecture Skylake-X Skylake-X Skylake-X Skylake-X Skylake-X Skylake-X Skylake-X Kaby Lake-X Kaby Lake-X
Process Tech 14nm+ 14nm+ 14nm+ 14nm+ 14nm+ 14nm+ 14nm+ 14nm+ 14nm+
Cores/Threads 18/36 16/32 14/28 12/24 10/20 8/16 6/12 4/8 4/4
Base Clock ? ? ? ? 3.3 GHz 3.6 GHz 3.5 GHz 4.3 GHz 4.0 GHz
Turbo Boost 2.0 ? ? ? ? 4.3 GHz 4.3 GHz 4.0 GHz 4.5 GHz 4.2 GHz
Turbo Boost Max 3.0 ? ? ? ? 4.5 GHz 4.5 GHz N/A N/A N/A
Cache 16.5MB (?) 16.5MB (?) 16.5MB (?) 16.5MB (?) 13.75MB 11MB 8.25MB 8MB 6MB
Memory Support ? ? ? ? DDR4-2666 Quad Channel DDR4-2666 Quad Channel DDR4-2666 Quad Channel DDR4-2666 Dual Channel DDR4-2666 Dual Channel
PCIe Lanes ? ? ? ? 44 28 28 16 16
TDP 165 watts (?) 165 watts (?) 165 watts (?) 165 watts (?) 140 watts 140 watts 140 watts 112 watts 112 watts
Socket 2066 2066 2066 2066 2066 2066 2066 2066 2066
Price $1999 $1699 $1399 $1199 $999 $599 $389 $339 $242

There is a lot to take in here. The three most interesting points: first, Intel plans to one-up AMD Threadripper by offering an 18-core processor. Second, and potentially more interesting, Intel also wants to change the perception of the X299-class platform by offering lower-priced, lower core count CPUs like the quad-core, non-HyperThreaded Core i5-7640X. Third, we also see the first ever branding of Core i9.

Intel only provided detailed specifications up to the Core i9-7900X, which is a 10-core / 20-thread processor with a base clock of 3.3 GHz and a Turbo peak of 4.5 GHz (using the new Turbo Boost Max Technology 3.0). It sports 13.75MB of cache thanks to an updated cache configuration, includes 44 lanes of PCIe 3.0 (an increase of 4 lanes over Broadwell-E), supports quad-channel DDR4 memory up to 2666 MHz, and carries a 140 watt TDP. The new LGA2066 socket will be utilized. Pricing for this CPU is set at $999, which is interesting for a couple of reasons. First, it is $700 less than the starting MSRP of the 10c/20t Core i7-6950X from one year ago; obviously a big plus. However, there is quite a ways UP the stack, with the 18c/36t Core i9-7980XE coming in at a cool $1999.

  Core i9-7900X Core i7-6950X Core i7-7700K
Architecture Skylake-X Broadwell-E Kaby Lake
Process Tech 14nm+ 14nm+ 14nm+
Cores/Threads 10/20 10/20 4/8
Base Clock 3.3 GHz 3.0 GHz 4.2 GHz
Turbo Boost 2.0 4.3 GHz 3.5 GHz 4.5 GHz
Turbo Boost Max 3.0 4.5 GHz 4.0 GHz N/A
Cache 13.75MB 25MB 8MB
Memory Support DDR4-2666 Quad Channel DDR4-2400 Quad Channel DDR4-2400 Dual Channel
PCIe Lanes 44 40 16
TDP 140 watts 140 watts 91 watts
Socket 2066 2011 1151
Price (Launch) $999 $1700 $339

The next CPU down the stack is compelling as well. The Core i7-7820X is the new 8-core / 16-thread HEDT option from Intel, with similar clock speeds to the 10-core above it (save the higher base clock). It has 11MB of L3 cache, 28 lanes of PCI Express (4 higher than Broadwell-E) but has a $599 price tag. Compared to the 8-core 6900K, that is ~$400 lower, while the new Skylake-X part includes a 700 MHz clock speed advantage. That’s huge, and is a direct attack on the AMD Ryzen 7 1800X, which sells for $499 today and cut Intel off at the knees this March. In fact, the base clock of the Core i7-7820X is only 100 MHz lower than the maximum Turbo Boost clock of the Core i7-6900K!

intel1.jpg

It is worth noting the price gap between the 7820X and the 7900X. That $400 gap seems huge and out of place when compared to the deltas in the rest of the stack, which never exceed $300 (and that is at the top two slots). Intel is clearly concerned about the Ryzen 7 1800X and is making sure it has options to compete at that point (and below), but feels less threatened by the upcoming Threadripper CPUs. Pricing out the 10+ core CPUs today, without knowing what AMD is going to do with Threadripper, is a risk and could put Intel in the same position it was in with the Ryzen 7 release.

Continue reading our review of the Intel Core i9-7900X Processor!

Author:
Manufacturer: Corsair

Introduction and Features

Introduction

2-Products.jpg

(Courtesy of Corsair)

Corsair recently refreshed their TX Series power supplies which now include four new models: the TX550M, TX650M, TX750M, and TX850M. The new TX-M Series sits right in the middle of Corsair’s PC power supply lineup and was designed to offer efficient operation and easy installation. Corsair states the TX-M Series power supplies provide industrial build quality, 80 Plus Gold efficiency, extremely tight voltages and come with a semi-modular cable set. In addition, the TX-M Series power supplies use a compact chassis measuring only 140mm deep and come backed by a 7-year warranty.

3-Side-diag.jpg

We will be taking a detailed look at the TX-M Series 750W power supply in this review.

Corsair TX-M Series PSU Key Features:

•    550W, 650W, 750W, and 850W models
•    Server-grade 50°C max operating temperature
•    7-Year warranty
•    80 PLUS Gold certified
•    All capacitors are Japanese brand, 105°C rated
•    Compact chassis measures only 140mm (5.5”) deep
•    Quiet 120mm cooling fan
•    Semi-modular cable set
•    Complies with ATX12V v2.4 and EPS 2.92 standards
•    6th Generation Intel Core processor Ready
•    Full suite of protection circuits: OVP, UVP, SCP, OPP and OTP
•    Active PFC with full range AC input (100-240 VAC)
•    MSRP for the TX750M is $99.99 USD

4-Front-diag.jpg

Here is what Corsair has to say about the new TX-M Series power supplies: “TX Series semi-modular power supplies are ideal for basic desktop systems where low energy use, low noise, and simple installation are essential. All of the capacitors are 105°C rated, Japanese brand, to insure solid power delivery and long term reliability. 80 Plus Gold efficiency reduces operating costs and excess heat. Flat, sleeved black modular cables with clearly marked connectors make installation fast and straightforward, with good-looking results."

Please continue reading our review of the Corsair TX750M PSU!!!

Author:
Manufacturer: PC Perspective

Why?

Astute readers of the site might remember the original story we did on Bitcoin mining in 2011, the good ol' days when the concept of the blockchain was new and exciting and mining Bitcoin on a GPU was still plenty viable.

gpu-bitcoin.jpg

However, that didn't last long, as the race for cash led people to develop Application Specific Integrated Circuits (ASICs) dedicated solely to mining Bitcoin quickly while sipping power. Use of the expensive ASICs drove the difficulty of mining Bitcoin through the roof and killed any chance of profitability for mere mortals mining cryptocurrency.

Cryptomining saw a resurgence in late 2013 with the popular adoption of alternative cryptocurrencies, specifically Litecoin, which was based on the Scrypt algorithm instead of SHA-256 like Bitcoin. This meant that the ASICs developed for mining Bitcoin were useless. This is also the period of time that many of you may remember as the "Dogecoin" era, my personal favorite cryptocurrency of all time.

dogecoin-300.png

Defenders of these new "altcoins" claimed that Scrypt was different enough that ASICs would never be developed for it, and GPU mining would remain viable for a larger portion of users. As it turns out, the promise of money always wins out, and we soon saw Scrypt ASICs. Once again, the market for GPU mining crashed.

That brings us to today, and what I am calling "Third-wave Cryptomining." 

While the mass populace stopped caring about cryptocurrency as a whole, the dedicated group that was left continued to develop altcoins. These currencies are based on various algorithms and other proofs of work (see technologies like Storj, which uses the blockchain for a decentralized Dropbox-like service!).

As you may have predicted, for various reasons that might be difficult to quantify historically, another very popular cryptocurrency has emerged from this wave of development: Ethereum.

ETHEREUM-LOGO_LANDSCAPE_Black.png

Ethereum is based on the Dagger-Hashimoto algorithm and has a whole host of quirks that make it different from other cryptocurrencies. We aren't here to get deep in the woods on the methods behind different blockchain implementations, but if you have some time, check out the Ethereum White Paper. It's all very fascinating.

Continue reading our look at this third wave of cryptocurrency!

Author:
Subject: Mobile
Manufacturer: Dell

Overview

Editor’s Note: After our review of the Dell XPS 13 2-in-1, Dell contacted us about our performance results. They found our numbers were significantly lower than their own internal benchmarks. They offered to send us a replacement notebook to test, and we have done so. After spending some time with the new unit we have seen much higher results, more in line with Dell’s performance claims. We haven’t been able to find any differences between our initial sample and the new notebook, and our old sample has been sent back to Dell for further analysis. Due to these changes, the performance results and conclusion of this review have been edited to reflect the higher performance results.

It's difficult to believe that it's only been a little over 2 years since we got our hands on the revised Dell XPS 13. Placing an emphasis on minimalistic design, large displays in small chassis, and high-quality construction, the Dell XPS 13 seems to have influenced the "thin and light" market in some noticeable ways.

IMG_4579.JPG

Aiming their sights at a slightly different corner of the market, this year Dell unveiled the XPS 13 2-in-1, a convertible tablet with a 360-degree hinge. However, instead of just putting a new hinge on the existing XPS 13, Dell has designed the all-new XPS 13 2-in-1 from the ground up to be even more "thin and light" than its older sibling, which has meant some substantial design changes.

Since we are a PC hardware-focused site, let's take a look under the hood to get an idea of what exactly we are talking about with the Dell XPS 13 2-in-1.

Dell XPS 13 2-in-1
MSRP $999 $1199 $1299 $1399
Screen 13.3” FHD (1920 x 1080) InfinityEdge touch display
CPU Core i5-7Y54 Core i7-7Y75
GPU Intel HD Graphics 615
RAM 4GB 8GB 16GB
Storage 128GB SATA 256GB PCIe
Network Intel 8265 802.11ac MIMO (2.4 GHz, 5.0 GHz)
Bluetooth 4.2
Display Output 1 x Thunderbolt 3
1 x USB 3.1 Type-C (DisplayPort)
Connectivity USB 3.0 Type-C
3.5mm headphone
USB 3.0 x 2 (MateDock)
Audio Dual Array Digital Microphone
Stereo Speakers (1W x 2)
Weight 2.7 lbs ( 1.24 kg)
Dimensions 11.98-in x 7.81-in x 0.32-0.54-in
(304mm x 199mm x 8 -13.7 mm)
Battery 46 WHr
Operating System Windows 10 Home / Pro (+$50)

One of the more striking design decisions from a hardware perspective is the choice of the low-power Core i5-7Y54 processor, or as you may know it from its older naming scheme, Core M. In the Kaby Lake generation, Intel has decided to drop the Core M branding (though oddly Core m3 still exists) and integrate these lower power parts into the regular Core branding scheme.

Click here to continue reading our review of the Dell XPS 13 2-in-1

Subject: Mobile
Manufacturer: Google

Introduction and Design

In case you have not heard by now, Pixel is the re-imagining of the Nexus phone concept by Google: a fully stock version of the Android experience on custom, Google-authorized hardware - and with the promise of the latest OS updates as they are released. So how does the hardware stack up? We are late into the life of the Pixel, and this is more of a long-term review, as I have had the smaller version of the phone on hand for some weeks now. As a result I can offer my candid view of the less-covered of the two Pixel handsets (most reviews center around the Pixel XL), and its performance.

DSC_0186.jpg

There was always a certain cachet to owning a Nexus phone, and you could rest assured that you would be running the latest version of Android before anyone on operator-controlled hardware. The Nexus phones were sold primarily by Google, unlocked, with operator/retail availability at times during their run. Things took a turn when Google opted to offer a carrier-branded version of the Nexus 6 back in November of 2014, along with their usual unlocked Google Play store offering. But this departure was not just an issue of branding, as the price jumped to a full $649, the off-contract cost of premium handsets such as Apple’s iPhone. How could Google hope to compete in a space dominated by Apple and Samsung phones purchased by and large with operator subsidies and installment plans? They did not compete, of course, and the Nexus 6 flopped.

Pixel, coming after the Huawei-manufactured Nexus 6P and LG-manufactured Nexus 5X, drops the “Nexus” branding while continuing the tradition of a reference Android experience - and the more recent tradition of premium pricing. As we have seen in the months since its release, the Pixel did not put much of a dent into the Apple/Samsung dominated handset market. But even during the budget-friendly Nexus era, which offered a compelling mix of day-one Android OS update availability and inexpensive, unlocked hardware (think Nexus 4 at $299 and Nexus 5 at $349), Google's own phones were never mainstream. Still, in keeping with iPhone and Galaxy flagships, $649 nets you a Pixel, which also launched through Verizon in an exclusive operator deal. Of course a larger version of the Pixel exists, and I would be remiss if I did not mention the Pixel XL. Unfortunately, I would also be remiss if I didn't mention that stock for the XL has been quite low, with availability constantly in question.

DSC_0169.jpg

The Pixel is hard to distinguish from an iPhone 7 from a distance (other than the home button)

Google Pixel Specifications
Display 5.0-inch 1080x1920 AMOLED
SoC Qualcomm Snapdragon 821 (MSM8996)
CPU Cores 2x 2.15 GHz Kryo
2x 1.60 GHz Kryo
GPU Cores Adreno 530
RAM 4GB LPDDR4
Storage 32 / 128 GB
Network Snapdragon X12 LTE
Connectivity 802.11ac Wi-Fi
2x2 MU-MIMO
Bluetooth 4.2
USB 3.0
NFC
Dimensions 143.8 x 69.5 x 8.5 mm, 143 g
OS Android 7.1

Continue reading our review of the Google Pixel smartphone!

Author:
Subject: General Tech
Manufacturer: Logitech

Logitech G413 Mechanical Gaming Keyboard

The rise in popularity of mechanical gaming keyboards has been accompanied by the spread of RGB backlighting. But RGBs, which often include intricate control systems and software, can significantly raise the price of an already expensive peripheral. There are many cheaper non-backlit mechanical keyboards out there, but they are often focused on typing, and lack the design and features that are unique to the gaming keyboard market.

logitech_g413.jpg

Gamers on a budget, or those who simply dislike fancy RGB lights, are therefore faced with a relative dearth of options, and it's exactly this market segment that Logitech is targeting with its G413 Mechanical Gaming Keyboard.

Continue reading our review of the Logitech G413 Mechanical Gaming Keyboard!

Subject: Systems
Manufacturer: ECS

Introduction and First Impressions

The LIVA family of mini PCs has been refreshed regularly since its introduction in 2014, and the LIVA Z represents a change to sleek industrial design as well as the expected updates to the internal hardware.

DSC_0542.jpg

The LIVA Z we have for review today is powered by an Intel Apollo Lake SoC, and the product family includes SKUs with both Celeron and Pentium processors. Our review unit is the entry-level model with a Celeron N3350 processor, 4GB memory, and 32GB storage. Memory and storage support are improved compared to past LIVAs; this is really more of a mini-PC kit like an Intel NUC, as the LIVA Z includes an M.2 slot (SATA 6.0 Gbps) for storage expansion and a pair of SODIMM slots supporting up to 8 GB of DDR3L memory (a single 4GB SODIMM is installed by default).

The LIVA Z is a very small device, just a bit bigger than your typical set-top streaming box, and like all LIVAs it is fanless, making it totally silent in operation. This is important for many people in applications such as media consumption in a living room, and like previous LIVA models the Z includes a VESA mount for installation on the back of a TV or monitor. So how does it perform? We will find out!

Continue reading our review of the ECS LIVA Z fanless mini PC!

Subject: General Tech
Manufacturer: The Khronos Group

A Data Format for Whole 3D Scenes

The Khronos Group has finalized the glTF 2.0 specification, and they recommend that interested parties integrate this 3D scene format into their content pipeline starting now. It’s ready.

khronos-2017-glTF_500px_June16.png

glTF is a format to deliver 3D content, especially full scenes, in a compact and quick-loading data structure. These features differentiate glTF from other 3D formats, like Autodesk’s FBX and even the Khronos Group’s Collada, which are more like intermediate formats between tools, such as 3D editing software (ex: Maya and Blender) and game engines. The Khronos Group doesn’t see a competing format for final scenes that are designed to be ingested directly, staying quick to load and small on disk.

glTF 2.0 makes several important changes.

The previous version of glTF was based on a defined GLSL material, which limited how it could be used, although it did align with WebGL at the time (and that spurred some early adoption). The new version switches to Physically Based Rendering (PBR) workflows to define its materials, which has a few advantages.

khronos-2017-PBR material model in glTF 2.0.jpg

First, PBR can represent a wide range of materials with just a handful of parameters. Rather than dictating a specific shader, the data structure can just... structure the data. The industry has settled on two main workflows, metallic-roughness and specular-gloss, and glTF 2.0 supports them both. (Metallic-roughness is the core workflow, but specular-gloss is provided as an extension, and they can be used together in the same scene. Also, during the briefing, I noticed that transparency was not explicitly mentioned in the slide deck, but the Khronos Group confirmed that it is stored as the alpha channel of the base color, and thus supported.) Because the format is now based on existing workflows, the implementation can be programmed in OpenGL, Vulkan, DirectX, Metal, or even something like a software renderer. In fact, Microsoft was a specification editor on glTF 2.0, and they have publicly announced using the format in their upcoming products.
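
To make the metallic-roughness workflow a little more concrete, here is a minimal sketch of what such a material looks like inside a glTF 2.0 file, written as a Python dictionary that mirrors the JSON structure. The property names come from the glTF 2.0 specification, while the values are purely illustrative and not taken from any shipping asset:

import json

# A single metallic-roughness material; the alpha of baseColorFactor carries transparency.
gltf_fragment = {
    "materials": [{
        "name": "scuffed_plastic",
        "pbrMetallicRoughness": {
            "baseColorFactor": [0.8, 0.1, 0.1, 1.0],   # RGBA
            "metallicFactor": 0.0,                      # dielectric
            "roughnessFactor": 0.6                      # fairly matte
        },
        "alphaMode": "OPAQUE"
    }]
}
print(json.dumps(gltf_fragment, indent=2))

The specular-gloss workflow would instead hang off the KHR_materials_pbrSpecularGlossiness extension on the same material entry.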

The original GLSL material, from glTF 1.0, is available as an extension (for backward compatibility).

A second advantage of PBR is that it is lighting-independent. When you define a PBR material for an object, it can be placed in any environment and it will behave as expected. Notable, albeit extreme, examples of where this would have been useful are the outdoor scenes of Doom 3 and the indoor scenes of Battlefield 2. It also simplifies asset creation. Some applications, like Substance Painter and Quixel, have artists stencil materials onto their geometry, like gold, rusted iron, and scuffed plastic, and automatically generate the appropriate textures. It also aligns well with deferred rendering, see below, which performs lighting as a post-process step and thus skips pixels (fragments) that are overwritten.

epicgames-2017-suntempledeferred.png

PBR Deferred Buffers in Unreal Engine 4 Sun Temple.
Lighting is applied to these completed buffers, not every fragment.

glTF 2.0 also improves support for complex animations by adding morph targets. Most 3D animations, beyond just moving, rotating, and scaling whole objects, are based on skeletal animation. This method works by binding vertices to bones and moving, rotating, and scaling a hierarchy of joints. This works well for humans, animals, hinges, and other collections of joints and sockets, and it was already supported in glTF 1.0. Morph targets, on the other hand, allow the artist to directly control individual vertices between defined states. This is often demonstrated with a facial animation, interpolating between smiles and frowns, but, in an actual game, this is often approximated with skeletal animations (for performance reasons). Regardless, glTF 2.0 now supports morph targets, too, letting the artists make the choice that best suits their content.
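
As a rough illustration of the idea (a sketch with made-up vertex data, not an excerpt from the spec): each morph target stores per-vertex displacements from the base mesh, and the renderer scales those displacements by animated weights and adds them to the base positions:

import numpy as np

# Base mesh positions for three vertices (made-up data).
base = np.array([[0.0, 0.0, 0.0],
                 [1.0, 0.0, 0.0],
                 [0.0, 1.0, 0.0]])

# Two morph targets, each storing per-vertex displacements from the base mesh.
smile = np.array([[0.0,  0.10, 0.0],
                  [0.0,  0.20, 0.0],
                  [0.0,  0.00, 0.0]])
frown = np.array([[0.0, -0.10, 0.0],
                  [0.0, -0.20, 0.0],
                  [0.0,  0.00, 0.0]])

weights = [0.75, 0.0]  # animated per frame by the animation channels

# Blended result: base + sum(weight_i * displacement_i)
blended = base + weights[0] * smile + weights[1] * frown
print(blended)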

Speaking of performance, the Khronos Group is also promoting “enhanced performance” as a benefit of glTF 2.0. I asked whether they had anything to elaborate on, and they responded with a little story. While glTF 1.0 validators were being created, one of the engineers compiled a list of design choices that would lead to minor performance issues. The fixes for these were originally supposed to be embodied in a glTF 1.1 specification, but PBR workflows and Microsoft’s request to abstract the format away from GLSL led to glTF 2.0, which is where the performance optimization finally ended up. Basically, there wasn’t one or two changes that made a big impact; it was the result of many tiny changes adding up.

Also, the binary version of glTF is now a core feature in glTF 2.0.

khronos-2017-gltfroadmap.png

The slide looks at the potential future of glTF, after 2.0.

Looking forward, the Khronos Group has a few items on their glTF roadmap. These did not make glTF 2.0, but they are current topics for future versions. One potential addition is mesh compression, via the Google Draco team, to further decrease file size of 3D geometry. Another roadmap entry is progressive geometry streaming, via Fraunhofer SRC, which should speed up runtime performance.

Yet another roadmap entry is “Unified Compression Texture Format for Transmission”, specifically Basis by Binomial, for texture compression that remains as small as possible on the GPU. Graphics processors can only natively operate on a handful of formats, like DXT and ASTC, so textures need to be converted when they are loaded by an engine. Often, when a texture is loaded at runtime (rather than imported by the editor), it will be decompressed and left in that state on the GPU. Some engines, like Unity, have a runtime compress method that converts textures to DXT, but the developer needs to explicitly call it, and the documentation says it’s lower quality than the algorithm used by the editor (although I haven’t tested this). Suffice it to say, having a format that can circumvent all of that would be nice.

Again, if you’re interested in adding glTF 2.0 to your content pipeline, then get started. It’s ready. Microsoft is doing it, too.

Introduction, How PCM Works, Reading, Writing, and Tweaks

I’ve seen a bit of flawed logic floating around related to discussions about 3D XPoint technology. Some are directly comparing the cost per die to NAND flash (you can’t - 3D XPoint likely has fewer fab steps than NAND - especially when compared with 3D NAND). Others are repeating a bunch of terminology and element names without taking the time to actually explain how it works, and far too many folks out there can't even pronounce it correctly (it's spoken 'cross-point'). My plan is to address as much of the confusion as I can with this article, and I hope you walk away understanding how XPoint and its underlying technologies (most likely) work. While we do not have absolute confirmation of the precise material compositions, there is a significant amount of evidence pointing to one particular set of technologies. With Optane Memory now out in the wild and purchasable by folks wielding electron microscopes and mass spectrometers, I have seen enough additional information come across to assume XPoint is, in fact, PCM based.

XPoint.png

XPoint memory. Note the shape of the cell/selector structure. This will be significant later.

While we were initially told at the XPoint announcement event Q&A that the technology was not phase change based, there is overwhelming evidence to the contrary, and it is likely that Intel did not want to let the cat out of the bag too early. The funny thing about that is that both Intel and Micron were briefing on PCM-based memory developments five years earlier, and nearly everything about those briefings lines up perfectly with what appears to have ended up in the XPoint that we have today.

comparison.png

Some die-level performance characteristics of various memory types. source

The above figures were sourced from a 2011 paper and may be a bit dated, but they do a good job of putting some actual numbers to the die-level performance of the various solid state memory technologies. We can also see where the ~1000x speed and ~1000x endurance comparisons of XPoint to NAND Flash came from. Now, of course, those performance characteristics do not directly translate to the performance of a complete SSD package containing those dies. Controller overhead and management must take their respective cuts, as is shown with the performance of the first generation XPoint SSD we saw come out of Intel:

gap.png

The ‘bridging the gap’ Latency Percentile graph from our Intel SSD DC P4800X review.
(The P4800X comes in at 10us above).

There have been a few very vocal folks out there chanting 'not good enough', without the basic understanding that the first publicly available iteration of a new technology never represents its ultimate performance capabilities. It took NAND flash decades to make it into usable SSDs, and another decade before climbing to the performance levels we enjoy today. Time will tell if this holds true for XPoint, but given Micron's demos and our own observed performance of Intel's P4800X and Optane Memory SSDs, I'd argue that it is most certainly off to a good start!

XPoint Die.jpg

A 3D XPoint die, submitted for your viewing pleasure (click for larger version).

You want to know how this stuff works, right? Read on to find out!

Author:
Manufacturer: AMD

We are up to two...

UPDATE (5/31/2017): Crystal Dynamics was able to get back to us with a couple of points on the changes that were made with this patch to affect the performance of AMD Ryzen processors.

  1. Rise of the Tomb Raider splits rendering tasks to run on different threads. By tuning the size of those tasks – breaking some up, allowing multicore CPUs to contribute in more cases, and combining some others, to reduce overheads in the scheduler – the game can more efficiently exploit extra threads on the host CPU.
     
  2. An optimization was identified in texture management that improves the combination of AMD CPU and NVIDIA GPU.  Overhead was reduced by packing texture descriptor uploads into larger chunks.

There you have it, a bit more detail on the software changes made to help adapt the game engine to AMD's Ryzen architecture. Not only that, but it does confirm our information that there was slightly MORE to address in the Ryzen+GeForce combinations.
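
To visualize what point 1 above describes in the abstract, here is a purely illustrative Python sketch of the task-granularity idea; it is not Crystal Dynamics' code, and a real engine would use a native job system rather than Python threads, but it shows the 'chunk size' knob that gets tuned so extra cores actually have work to pick up:

from concurrent.futures import ThreadPoolExecutor

def process(chunk):
    # Stand-in for a slice of rendering work (e.g. building part of a command list).
    return sum(x * x for x in chunk)

work = list(range(1_000_000))
CHUNK = 10_000  # the task-size knob: too large starves idle threads, too small adds scheduler overhead
chunks = [work[i:i + CHUNK] for i in range(0, len(work), CHUNK)]

# 16 workers to mirror an 8-core / 16-thread Ryzen 7.
with ThreadPoolExecutor(max_workers=16) as pool:
    results = list(pool.map(process, chunks))

print(sum(results))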

END UPDATE

Despite a couple of growing pains out of the gate, the Ryzen processor launch appears to have been a success for AMD. Both the Ryzen 7 and the Ryzen 5 releases proved to be very competitive with Intel’s dominant CPUs in the market and took significant leads in areas of massive multi-threading and performance per dollar. An area that AMD has struggled in though has been 1080p gaming – performance in those instances on both Ryzen 7 and 5 processors fell behind comparable Intel parts by (sometimes) significant margins.

Our team continues to watch the story to see how AMD and game developers work through the issue. Most recently I posted a look at the memory latency differences between Ryzen and Intel Core processors. As it turns out, the memory latency differences are a significant part of the initial problem for AMD:

Because of this, I think it is fair to claim that some, if not most, of the 1080p gaming performance deficits we have seen with AMD Ryzen processors are a result of this particular memory system intricacy. You can combine memory latency with the thread-to-thread communication issue we discussed previously into one overall system level complication: the Zen memory system behaves differently than anything we have seen prior and it currently suffers in a couple of specific areas because of it.

In that story I detailed our coverage of the Ryzen processor and its gaming performance succinctly:

Our team has done quite a bit of research and testing on this topic. This included a detailed look at the first asserted reason for the performance gap, the Windows 10 scheduler. Our summary there was that the scheduler was working as expected and that minimal difference was seen when moving between different power modes. We also talked directly with AMD to find out its then current stance on the results, backing up our claims on the scheduler and presented a better outlook for gaming going forward. When AMD wanted to test a new custom Windows 10 power profile to help improve performance in some cases, we took part in that too. In late March we saw the first gaming performance update occur courtesy of Ashes of the Singularity: Escalation where an engine update to utilize more threads resulted in as much as 31% average frame increase.

Quick on the heels of the Ryzen 7 release, AMD worked with the developer Oxide on the Ashes of the Singularity: Escalation engine. Through tweaks and optimizations, the game was able to showcase as much as a 30% increase in average frame rate on the integrated benchmark. While this was only a single use case, it does prove that through work with the developers, AMD has the ability to improve the 1080p gaming positioning of Ryzen against Intel.

rotr-screen4-small.jpg

Fast forward to today and I was surprised to find a new patch for Rise of the Tomb Raider, a game that was actually one of the worst case scenarios for AMD with Ryzen. (Patch #12, v1.0.770.1) The patch notes mention the following:

The following changes are included in this patch

- Fix certain DX12 crashes reported by users on the forums.

- Improve DX12 performance across a variety of hardware, in CPU bound situations. Especially performance on AMD Ryzen CPUs can be significantly improved.

While we expect this patch to be an improvement for everyone, if you do have trouble with this patch and prefer to stay on the old version we made a Beta available on Steam, build 767.2, which can be used to switch back to the previous version.

We will keep monitoring for feedback and will release further patches as it seems required. We always welcome your feedback!

Obviously the data point that stood out for me was the improved DX12 performance “in CPU bound situations. Especially on AMD Ryzen CPUs…”

Remember how the situation appeared in April?

rotr.png

The Ryzen 7 1800X was 24% slower than the Intel Core i7-7700K – a dramatic difference for a processor that should only have been ~8-10% slower in single threaded workloads.

How does this new patch to RoTR affect performance? We tested it on the same Ryzen 7 1800X benchmark platform from previous testing, including the ASUS Crosshair VI Hero motherboard, 16GB of DDR4-2400 memory and a GeForce GTX 1080 Founders Edition using the 378.78 driver. All testing was done under the DX12 code path.

tr-1.png

tr-2.png

The Ryzen 7 1800X score jumps from 107 FPS to 126.44 FPS, an increase of 18%! That is a significant boost in performance at 1080p while still running at the Very High image quality preset, indicating that the developer (and likely AMD) were able to find substantial inefficiencies in the engine. For comparison, the 8-core / 16-thread Intel Core i7-6900K only sees a 2.4% increase from this new game revision. This tells us that the changes to the game were specific to Ryzen processors and their design, but that no performance was taken away from the Intel platforms.

Continue reading our look at the new Rise of the Tomb Raider patch for Ryzen!

Author:
Manufacturer: Intel

An abundance of new processors

During its press conference at Computex 2017, Intel officially announced the upcoming release of an entire new family of HEDT (high-end desktop) processors, along with a new chipset and platform to power them. Though it has only been a year since Intel launched the Core i7-6950X, a Broadwell-E processor with 10 cores and 20 threads, it feels like it has been much longer than that. At the time, Intel was accused of "sitting" on the market, offering only slight performance upgrades while raising prices on the segment with a flagship CPU costing $1700. With what can only be described as a scathing press circuit, coupled with a revived and aggressive competitor in AMD and its Ryzen product line, Intel and its executive teams have decided it's time to take the enthusiast and high-end prosumer markets seriously once again.

slides-3.jpg

Though the company doesn't want to admit to anything publicly, it seems obvious that Intel feels threatened by the release of the Ryzen 7 product line. The Ryzen 7 1800X launched at $499 and offered 8 cores and 16 threads of processing, competing well in most tests against the likes of the Intel Core i7-6900K that sold for over $1000. Adding to the pressure was the announcement at AMD's Financial Analyst Day that a new brand of processors called Threadripper would be coming this summer, offering up to 16 cores and 32 threads of processing for that same high-end consumer market. Even without pricing, clocks, or availability timeframes, it was clear that AMD was going to come after the HEDT market with a rebranded derivative of its EPYC server processors, just as Intel does with Xeon.

The New Processors

Normally I would jump into the new platform, technologies and features added to the processors, or something like that before giving you the goods on the CPU specifications, but that’s not the mood we are in. Instead, let’s start with the table of nine (9!!) new products and work backwards.

| | Core i9-7980XE | Core i9-7960X | Core i9-7940X | Core i9-7920X | Core i9-7900X | Core i7-7820X | Core i7-7800X | Core i7-7740X | Core i5-7640X |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Architecture | Skylake-X | Skylake-X | Skylake-X | Skylake-X | Skylake-X | Skylake-X | Skylake-X | Kaby Lake-X | Kaby Lake-X |
| Process Tech | 14nm+ | 14nm+ | 14nm+ | 14nm+ | 14nm+ | 14nm+ | 14nm+ | 14nm+ | 14nm+ |
| Cores/Threads | 18/36 | 16/32 | 14/28 | 12/24 | 10/20 | 8/16 | 6/12 | 4/8 | 4/4 |
| Base Clock | ? | ? | ? | ? | 3.3 GHz | 3.6 GHz | 3.5 GHz | 4.3 GHz | 4.0 GHz |
| Turbo Boost 2.0 | ? | ? | ? | ? | 4.3 GHz | 4.3 GHz | 4.0 GHz | 4.5 GHz | 4.2 GHz |
| Turbo Boost Max 3.0 | ? | ? | ? | ? | 4.5 GHz | 4.5 GHz | N/A | N/A | N/A |
| Cache | 16.5MB (?) | 16.5MB (?) | 16.5MB (?) | 16.5MB (?) | 13.75MB | 11MB | 8.25MB | 8MB | 6MB |
| Memory Support | ? | ? | ? | ? | DDR4-2666 Quad Channel | DDR4-2666 Quad Channel | DDR4-2666 Quad Channel | DDR4-2666 Dual Channel | DDR4-2666 Dual Channel |
| PCIe Lanes | ? | ? | ? | ? | 44 | 28 | 28 | 16 | 16 |
| TDP | 165 watts (?) | 165 watts (?) | 165 watts (?) | 165 watts (?) | 140 watts | 140 watts | 140 watts | 112 watts | 112 watts |
| Socket | 2066 | 2066 | 2066 | 2066 | 2066 | 2066 | 2066 | 2066 | 2066 |
| Price | $1999 | $1699 | $1399 | $1199 | $999 | $599 | $389 | $339 | $242 |

There is a lot to take in here. The most interesting points are that Intel plans to one-up AMD's Threadripper by offering an 18-core processor, but it also wants to change the perception of the X299-class platform by offering lower-priced, lower-core-count CPUs like the quad-core, non-HyperThreaded Core i5-7640X. We also see the first-ever use of Core i9 branding.

Intel only provided detailed specifications up through the Core i9-7900X, a 10-core / 20-thread processor with a base clock of 3.3 GHz and a Turbo peak of 4.5 GHz using the new Turbo Boost Max Technology 3.0. It sports 13.75MB of cache thanks to an updated cache configuration, includes 44 lanes of PCIe 3.0 (an increase of 4 lanes over Broadwell-E), supports quad-channel DDR4 memory up to 2666 MHz, and carries a 140 watt TDP. The new LGA2066 socket will be utilized. Pricing for this CPU is set at $999, which is interesting for a couple of reasons. First, it is $700 less than the starting MSRP of the 10c/20t Core i7-6950X from one year ago, obviously a big plus. However, there is quite a ways to go UP the stack, with the 18c/36t Core i9-7980XE coming in at a cool $1999.
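To put those prices in rough perspective, here is a quick back-of-the-envelope cost-per-core comparison across the announced lineup; this is purely our own illustrative script built from the prices and core counts in the table above, not anything Intel provided:

```python
# Announced price (USD) and core count for each new HEDT CPU, taken from the table above
lineup = {
    "Core i9-7980XE": (1999, 18),
    "Core i9-7960X":  (1699, 16),
    "Core i9-7940X":  (1399, 14),
    "Core i9-7920X":  (1199, 12),
    "Core i9-7900X":  (999, 10),
    "Core i7-7820X":  (599, 8),
    "Core i7-7800X":  (389, 6),
    "Core i7-7740X":  (339, 4),
    "Core i5-7640X":  (242, 4),
}

for name, (price, cores) in lineup.items():
    print(f"{name}: ${price} for {cores} cores -> ${price / cores:.0f} per core")
```

By that crude measure, the 18-core Core i9-7980XE works out to roughly $111 per core, while last year's $1700, 10-core Core i7-6950X sat at about $170 per core.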

intel1.jpg

The next CPU down the stack is compelling as well. The Core i7-7820X is the new 8-core / 16-thread HEDT option from Intel, with clock speeds similar to the 10-core part above it, save for its higher base clock. It has 11MB of L3 cache and 28 lanes of PCI Express (4 more than Broadwell-E), yet carries a $599 price tag. Compared to the 8-core Core i7-6900K, that is ~$400 lower, while the new Skylake-X part carries a 700 MHz clock speed advantage. That's huge, and it is a direct attack on the AMD Ryzen 7 1800X that sells for $499 today and cut Intel off at the knees this March. In fact, the base clock of the Core i7-7820X is only 100 MHz lower than the maximum Turbo Boost clock of the Core i7-6900K!

Continue reading about the Intel Core i9 series announcement!

Author:
Manufacturer: ARM

ARM Refreshes All the Things

This past April, ARM invited us to visit Cambridge, England so they could discuss their plans for the coming year with us. Quite a bit has changed for the company since our last ARM Tech Day in 2016. They were acquired by SoftBank but continue to operate essentially as their own company. They now have access to more funds, are less risk averse, and have a greater ability to expand in the ever-growing mobile and IoT marketplaces.

dynamiq_01.png

The ARM of today is certainly quite different from the company we knew 10 years ago, when we saw its technology used in the first iPhone. Back then ARM had good technology but a relatively small head count. They kept pace with the industry, but were not nearly as aggressive as other chip companies in some areas. Over the past 10 years they have grown not only in numbers but in the technologies they have constantly expanded on. The company became more PR savvy and communicated more effectively with the press and, ultimately, their primary users. Where once ARM would announce new products and not expect to see them shipping for upwards of three years, we are now seeing the company be much more aggressive with their designs and getting them out to their partners, so that production happens in months rather than years.

Several days of meetings and presentations left us a bit overwhelmed by what ARM is bringing to market towards the end of 2017 and, most likely, the beginning of 2018. On the surface it appears that ARM has only done a refresh of its CPU and GPU products, but once we look at how these products fit into the greater scheme and interact with DynamIQ, we see that ARM has changed the mobile computing landscape dramatically. This new computing concept allows for greater performance, flexibility, and efficiency in designs. Partners will have far more control over these licensed products, letting them create more value and differentiation than in years past.

dynamiq_02.png

We previously covered DynamIQ at PCPer this past March. ARM wanted to seed that concept before jumping into deeper discussions of its latest CPUs and GPUs. Previous Cortex products cannot be used with DynamIQ; to leverage that technology, new CPU designs are required. In this article we are covering the Cortex-A55 and Cortex-A75. On the surface these two new CPUs look more like a refresh, but when we dig in we see that massive changes have been made throughout. ARM has taken the concepts of the previous A53 and A73 and expanded upon them fairly dramatically, not only to work with DynamIQ but also to remove significant bottlenecks that have impeded theoretical performance.

Continue reading our overview of the new family of ARM CPUs and GPU!