Two Vegas...ha ha ha
When the preorders for the Radeon Vega Frontier Edition went up last week, I made the decision to place orders in a few different locations to make sure we got it in as early as possible. Well, as it turned out, we actually had the cards show up very quickly…from two different locations.
So, what is a person to do if TWO of the newest, most coveted GPUs show up on their doorstep? After you do the first, full review of the single GPU iteration, you plug those both into your system and do some multi-GPU CrossFire testing!
There of course needs to be some discussion up front about this testing and our write-up. If you read my first review of the Vega Frontier Edition, you will clearly note my stance on the ideas that “this is not a gaming card” and that “the drivers aren’t ready.” Essentially, I said these potential excuses for performance were distractions, unwarranted given the current state of Vega development and the proximity of the consumer iteration, Radeon RX Vega.
But for multi-GPU, it’s a different story. Both competitors in the GPU space will tell you that developing drivers for CrossFire and SLI is incredibly difficult. Much more than simply splitting the work across different processors, multi-GPU requires extra attention to specific games, game engines, and effects rendering that are not required in single GPU environments. Add to that the fact that the market size for CrossFire and SLI has been shrinking, from an already small state, and you can see why multi-GPU is going to get less attention from AMD here.
Even more, when CrossFire and SLI support gets a focus from the driver teams, it is often late in the process, nearly last in the list of technologies to address before launch.
With that in mind, we should all understand that the results we are going to show you might be indicative of CrossFire scaling when Radeon RX Vega launches, but they very well might not be. I would look at the data we are presenting today as the “current state” of CrossFire for Vega.
Performance not two-die four.
When designing an integrated circuit, you are attempting to fit as much complexity as possible within your budget of space, power, and so forth. One harsh limitation for GPUs is that, while your workloads could theoretically benefit from more and more processing units, the number of usable chips from a batch shrinks as designs grow, and the reticle limit of a fab’s manufacturing node is basically a brick wall.
What’s one way around it? Split your design across multiple dies!
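The yield pressure behind that choice can be made concrete with a classic first-order model that treats killer defects as Poisson-distributed over die area. This is a rough sketch with an illustrative defect density, not actual foundry data:

```python
import math

def poisson_yield(area_mm2: float, d0_per_mm2: float) -> float:
    # Probability that a die of this area contains zero killer defects
    return math.exp(-area_mm2 * d0_per_mm2)

d0 = 0.002  # hypothetical defect density (0.2 defects per cm^2), for illustration only

big = poisson_yield(600, d0)    # one 600 mm^2 monolithic GPU
small = poisson_yield(150, d0)  # one 150 mm^2 GPU module
print(f"600 mm^2 die yield: {big:.1%}")
print(f"150 mm^2 die yield: {small:.1%}")
# Silicon cost per working chip scales roughly with 1/yield, so four small
# modules can deliver the same total area at far better effective yield.
```

Under these made-up numbers the big die yields ~30% while each small module yields ~74%, which is why splitting a design becomes attractive well before you ever hit the reticle limit.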
NVIDIA published a research paper discussing just that. In their diagram, they show two examples. In the first diagram, the GPU is a single, typical die that’s surrounded by four stacks of HBM, like GP100; the second configuration breaks the GPU into five dies, four GPU modules and an I/O controller, with each GPU module attached to a pair of HBM stacks.
NVIDIA ran simulations to determine how this chip would perform, and, across various workloads, they found that it out-performed the largest possible single-chip GPU by about 45.5%. They also scaled up the single-chip design until it had the same number of compute units as the multi-die design, even though this wouldn’t work in the real world because no fab could actually lithograph it. Regardless, that hypothetical, impossible design was only ~10% faster than the actually-possible multi-chip one, showing that the overhead of splitting the design is only around that much, according to their simulation. The multi-chip design was also faster than the equivalent multi-card setup by 26.8%.
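Taking the paper's percentages at face value, the relationships between those three configurations can be sanity-checked with some quick arithmetic, normalizing the largest buildable monolithic GPU to 1.0:

```python
# Relative performance, normalized to the largest buildable monolithic GPU = 1.0
buildable_mono = 1.0
mcm = buildable_mono * 1.455      # multi-chip module beats it by 45.5% (paper's figure)
impossible_mono = mcm * 1.10      # hypothetical same-size monolithic die, ~10% faster still
multi_card = mcm / 1.268          # MCM beats the multi-card setup by 26.8%

print(f"MCM vs. buildable monolithic:      {mcm / buildable_mono:.3f}x")
print(f"MCM efficiency vs. impossible die: {mcm / impossible_mono:.1%}")
print(f"Multi-card vs. buildable monolithic: {multi_card:.3f}x")
```

In other words, the multi-die approach captures roughly 91% of the performance of a single giant die that cannot actually be manufactured, while still comfortably beating the multi-card alternative.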
While NVIDIA’s simulations, run on 48 different benchmarks, have accounted for this, I still can’t visualize how this would work in an automated way. I don’t know how the design would automatically account for fetching data that’s associated with other GPU modules, as this would probably cause a huge stall. That said, they spent quite a bit of time discussing how much bandwidth is required within the package, with figures from 768 GB/s to 3 TB/s mentioned, so it’s possible that it just uses the same tricks as fetching from global memory. The paper touches on the topic several times, but I didn’t see anything explicit about what they were doing.
If you’ve been following the site over the last couple of months, you’ll note that this is basically the same thing AMD is doing with Threadripper and EPYC. The main difference is that CPU cores are isolated, so sharing data between them is explicit. In fact, when that product was announced, I thought, “Huh, that would be cool for GPUs. I wonder if it’s possible, or if it would just end up being CrossFire / SLI.”
Apparently not? It should be possible?
I should note that I doubt this will be relevant for consumers. The GPU is the most expensive part of a graphics card. While the thought of four GP102-level chips working together sounds great for 4K (which is 4x1080p in resolution) gaming, quadrupling the expensive part sounds like a giant price-tag. That said, the market of GP100 (and the upcoming GV100) would pay five-plus digits for the absolute fastest compute device for deep-learning, scientific research, and so forth.
The only way I could see this working for gamers is if NVIDIA finds the sweet-spot for performance-to-yield (for a given node and time) and they scale their product stack with multiples of that. In that case, it might be cost-advantageous to hit some level of performance, versus trying to do it with a single, giant chip.
This is just my speculation, however. It’ll be interesting to see where this goes, whenever it does.
GTX 1060 keeps on kicking
Despite the market for graphics cards being disrupted by the cryptocurrency mining craze, board partners like Galax continue to build high quality options for gamers...if they can get their hands on them. We recently received a new Galax GTX 1060 EXOC White 6GB card that offers impressive performance and features as well as a visual style to help it stand out from the crowd.
We have worked with GeForce GTX 1060 graphics cards quite a bit at PC Perspective, so there is no need to dive into the history of the GPU itself. If you need a refresher on the GP106 GPU and where it stands in the pantheon of the current GPU market, check out my launch review of the GTX 1060 from last year. The release of AMD’s Radeon RX 580 did change the market landscape a bit, so that review might be worth looking at too.
Our quick review of the Galax GTX 1060 EXOC White will look at performance (briefly), overclocking, and cost. But first, let’s take a look at this thing.
The Galax GTX 1060 EXOC White
As the name implies, the EXOC White card from Galax is both overclocked and uses a white fan shroud to add a little flair to the design. The PCB is a standard black color, but with the fan and back plate both a bright white, the card will be a point of interest for nearly any PC build. Pairing this with a white-accented motherboard, like the recent ASUS Prime series, would be an excellent visual combination.
The fans on the EXOC White have clear-ish white blades that are illuminated by the white LEDs that shine through the fan openings on the shroud.
The cooler that Galax has implemented is substantial, with three heatpipes used to distribute the load from the GPU across the fins. There is a 6-pin power connector (standard for the GTX 1060) but that doesn’t appear to hold back the overclocking capability of the GPU.
There is a lot of detail on the heatsink shroud – and either you like it or you don’t.
Galax has included a white backplate that doubles as both decoration and heatsink. Since most users’ cases showcase the rear of the graphics card more than the front, I think a good quality back plate is a big selling point.
The output connectivity includes a pair of DVI ports, a full-size HDMI port, and a full-size DisplayPort - more than enough for nearly any buyer of this class of GPU.
Introduction and Features
Seasonic is in the process of overhauling their entire PC power supply lineup. They began with the introduction of the PRIME Series in late 2016 and are now introducing the new FOCUS family, which will include three different series ranging from 450W up to 850W output capacity with either Platinum or Gold efficiency certification. In this review we will be taking a detailed look at the new Seasonic FOCUS PLUS Gold (FX) 650W power supply. And just to prove that reviewers are not being sent hand-picked golden samples, our PSU was delivered straight from Newegg.com inventory.
The Seasonic FOCUS PLUS Gold series includes four models: 550W, 650W, 750W, and 850W. In addition to 80 Plus Gold certification, the FOCUS Plus (FX) series features a small footprint chassis (140mm deep), all modular cables, high quality components, and comes backed by a 10-year warranty.
• FOCUS PLUS Gold (FX) 550W: $79.90 USD
• FOCUS PLUS Gold (FX) 650W: $89.90 USD
• FOCUS PLUS Gold (FX) 750W: $99.90 USD
• FOCUS PLUS Gold (FX) 850W: $109.90 USD
Seasonic FOCUS PLUS 650W Gold (FX) PSU Key Features:
• 650W Continuous DC output at up to 50°C (715W peak)
• 80 PLUS Gold certified for high efficiency
• Small footprint: chassis measures just 140mm (5.5”) deep
• Micro Tolerance load regulation @ 1%
• Fully-modular cables
• DC-to-DC Voltage converters
• Single +12V output
• Multi-GPU Technology support
• Quiet 120mm Fluid Dynamic Bearing (FDB) cooling fan
• Haswell support
• Active Power Factor correction with Universal AC input (100 to 240 VAC)
• Safety protections: OPP, OVP, UVP, OCP, OTP and SCP
• 10-Year warranty
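To illustrate what the ±1% "Micro Tolerance" load regulation spec means in practice, here is a minimal sketch of the pass/fail check a load tester applies to each rail. The voltage readings below are hypothetical examples, not our review measurements:

```python
def within_regulation(nominal_v: float, measured_v: float, tolerance: float = 0.01) -> bool:
    # True if the measured rail voltage is within +/- tolerance (1% default) of nominal
    return abs(measured_v - nominal_v) <= nominal_v * tolerance

# Hypothetical readings from a programmable load tester, for illustration only
nominals = {"+12V": 12.0, "+5V": 5.0, "+3.3V": 3.3}
readings = {"+12V": 11.98, "+5V": 5.03, "+3.3V": 3.28}

for rail, measured in readings.items():
    ok = within_regulation(nominals[rail], measured)
    print(f"{rail}: {measured:.2f} V -> {'PASS' if ok else 'FAIL'}")
```

At ±1%, the +12V rail must stay between 11.88 V and 12.12 V under load, a much tighter window than the ±5% the ATX specification requires.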
An interesting night of testing
Last night I did our first-ever live benchmarking session using the just-arrived Radeon Vega Frontier Edition air-cooled graphics card. Purchasing it directly from a reseller, rather than being sampled by AMD, gave us the opportunity to test a new flagship product without an NDA in place to keep us silenced, so I thought it would be fun to let the audience and community go along for the ride of a traditional benchmarking session. Though I didn’t get everything I wanted done in that 4.5-hour window, it was great to see the interest and excitement for the product and the results that we were able to generate.
But to the point of the day – our review of the Radeon Vega Frontier Edition graphics card. Based on the latest flagship GPU architecture from AMD, the Radeon Vega FE card has a lot riding on its shoulders, despite not being aimed at gamers. It is the FIRST card to be released with Vega at its heart. It is the FIRST instance of HBM2 being utilized in a consumer graphics card. It is the FIRST in a new attempt from AMD to target the group of users between gamers and professional users (like NVIDIA has addressed with Titan previously). And, it is the FIRST to command as much attention and expectation for the future of a company, a product line, and a fan base.
Other than the architectural details that AMD gave us previously, we honestly haven’t been briefed on the performance expectations or the advancements in Vega that we should know about. The Vega FE products were released to the market with very little background, only well-spun turns of phrase emphasizing the value of the high performance and compatibility for creators. There has been no typical “tech day” for the media to learn fully about Vega and there were no samples from AMD to media or analysts (that I know of). Unperturbed by that, I purchased one (several actually, seeing which would show up first) and decided to do our testing.
On the following pages, you will see a collection of tests and benchmarks that range from 3DMark to The Witcher 3 to SPECviewperf to LuxMark, attempting to give as wide a viewpoint of the Vega FE product as I can in a rather short time window. The card is sexy (maybe the best looking I have yet seen), but will disappoint many on the gaming front. For professional users who are okay without certified drivers, the performance is more likely to raise some impressed eyebrows.
Radeon Vega Frontier Edition Specifications
Through leaks and purposeful information dumps over the past couple of months, we already knew a lot about the Radeon Vega Frontier Edition card prior to the official sale date this week. But now with final specifications in hand, we can start to dissect what this card actually is.
|Vega Frontier Edition||Titan Xp||GTX 1080 Ti||Titan X (Pascal)||GTX 1080||TITAN X||GTX 980||R9 Fury X||R9 Fury|
|GPU||Vega||GP102||GP102||GP102||GP104||GM200||GM204||Fiji XT||Fiji Pro|
|Base Clock||1382 MHz||1480 MHz||1480 MHz||1417 MHz||1607 MHz||1000 MHz||1126 MHz||1050 MHz||1000 MHz|
|Boost Clock||1600 MHz||1582 MHz||1582 MHz||1480 MHz||1733 MHz||1089 MHz||1216 MHz||-||-|
|Memory Clock||1890 MHz||11400 MHz||11000 MHz||10000 MHz||10000 MHz||7000 MHz||7000 MHz||1000 MHz||1000 MHz|
|Memory Interface||2048-bit HBM2||384-bit G5X||352-bit||384-bit G5X||256-bit G5X||384-bit||256-bit||4096-bit (HBM)||4096-bit (HBM)|
|Memory Bandwidth||483 GB/s||547.7 GB/s||484 GB/s||480 GB/s||320 GB/s||336 GB/s||224 GB/s||512 GB/s||512 GB/s|
|TDP||300 watts||250 watts||250 watts||250 watts||180 watts||250 watts||165 watts||275 watts||275 watts|
|Peak Compute||13.1 TFLOPS||12.0 TFLOPS||10.6 TFLOPS||10.1 TFLOPS||8.2 TFLOPS||6.14 TFLOPS||4.61 TFLOPS||8.60 TFLOPS||7.20 TFLOPS|
The Vega FE shares enough of its specification listing with the Fury X that it deserves special recognition. Both cards sport 4096 stream processors, 64 ROPs, and 256 texture units. The Vega FE runs at much higher clock speeds (35-40% higher), upgrades to the next generation of high-bandwidth memory, and quadruples the memory capacity. Still, there will be plenty of comparisons between the two products, looking to measure IPC changes from the CUs (compute units) of Fiji to the NCUs built for Vega.
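The compute and bandwidth figures in the table above fall out of two simple formulas: peak compute is shader count × 2 FLOPs per clock (one FMA) × boost clock, and memory bandwidth is bus width × effective data rate. A quick sketch reproducing the Vega FE and Fury X numbers:

```python
def peak_tflops(shaders: int, boost_ghz: float) -> float:
    # One fused multiply-add = 2 floating-point operations per shader per clock
    return shaders * 2 * boost_ghz / 1000

def bandwidth_gbs(bus_width_bits: int, effective_rate_gtps: float) -> float:
    # Bus width in bits / 8 = bytes per transfer; times transfers per second
    return bus_width_bits / 8 * effective_rate_gtps

print(f"Vega FE: {peak_tflops(4096, 1.6):.1f} TFLOPS, "
      f"{bandwidth_gbs(2048, 1.89):.1f} GB/s")
print(f"Fury X:  {peak_tflops(4096, 1.05):.1f} TFLOPS, "
      f"{bandwidth_gbs(4096, 1.0):.1f} GB/s")
```

This is also why the identical shader counts matter for IPC testing: at matched clocks, any gaming performance difference between Fiji and Vega comes from the architecture, not raw throughput.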
The Radeon Vega GPU
The clock speeds also see another shift this time around with the adoption of “typical” clock speeds. This is something that NVIDIA has been using for a few generations with the introduction of GPU Boost, and tells the consumer how high they should expect clocks to go in a nominal workload. Normally I would say a gaming workload, but since this card is supposedly for professional users and the like, I assume this applies across the board. So even though the GPU is rated at a “peak” clock rate of 1600 MHz, the “typical” clock rate is 1382 MHz. (As an early aside, I did NOT see 1600 MHz at any point in my testing with our Vega FE; the card settled in at ~1440 MHz most of the time.)
Introduction and Specifications
The Galaxy S8 Plus is Samsung's first ‘big’ phone since the Note7 fiasco, and just looking at it the design and engineering process seems to have paid off. Simply put, the GS8/GS8+ might just be the most striking handheld devices ever made. The U.S. version sports the newest and fastest Qualcomm platform with the Snapdragon 835, and the international version of the handset uses Samsung’s Exynos 8895 Octa SoC. We have the former on hand, and it was this MSM8998-powered version of the 6.2-inch GS8+ that I spent some quality time with over the past two weeks.
There is more to a phone than its looks, and even in that department the Galaxy S8+ raises questions about durability with that large, curved glass screen. With the front and back panels wrapping around as they do, the phone has a very slim, elegant look that feels fantastic in hand. And while one drop could easily ruin your day with any smartphone, this design is particularly unforgiving - and screen replacement costs with these new S8 phones are particularly high due to the difficulty of the repair and the need to replace the AMOLED display along with the laminated glass.
Forgetting the fragility for a moment, I was sorely tempted to embrace the case-free lifestyle, since a case would change the best in-hand feel I have ever had from a handset (and I didn’t want to hide its knockout design, either). With that out of the way, I got down to objectively assessing the phone's performance. This is the first production phone we have had on hand with the new Snapdragon 835 platform, so we will be able to draw some definitive performance conclusions compared to the SoCs in other shipping phones.
|Samsung Galaxy S8+ Specifications (US Version)|
|Display||6.2-inch 1440x2960 AMOLED|
|SoC||Qualcomm Snapdragon 835 (MSM8998)|
|CPU Cores||4x 2.45 GHz Kryo + 4x 1.90 GHz Kryo|
|GPU Cores||Adreno 540|
|RAM||4 / 6 GB LPDDR4 (6 GB with 128 GB storage option)|
|Storage||64 / 128 GB|
|Network||Snapdragon X16 LTE|
|Connectivity||Bluetooth 5.0 (A2DP, aptX), USB 3.1 (Type-C)|
|Battery||3500 mAh Li-Ion|
|Dimensions||159.5 x 73.4 x 8.1 mm, 173 g|
Introduction, Specifications and Packaging
Today Intel is launching a new line of client SSDs - the SSD 545S Series. These are simple, 2.5" SATA parts that aim to offer good performance at an economical price point. Low-cost SSDs are not typically Intel's strong suit, mainly because they are extremely rigorous about design and testing, but the ramp-up of IMFT 3D NAND, now entering its second generation stacked to 64 layers, should finally help them get cost/GB down to levels previously enjoyed by other manufacturers.
Intel and Micron jointly announced 3D NAND just over two years ago, and a year ago we talked about the next IMFT capacity bump coming as a 'double' move. Well, that's only partially happening today. The 545S line will carry the new IMFT 64-layer flash, but the capacity per die remains the same 256Gbit (32GB) as the previous generation parts. The dies will be smaller, meaning more can fit on a wafer, which drives down production costs, but the larger 512Gbit dies won't be coming until later on (and in a different product line - Intel told us they do not intend to mix die types within the same lines as we've seen Samsung do in the past).
There are no surprises here, though I am happy to see a 'sustained sequential performance' specification stated by an SSD maker, and I'm happier to see Intel claiming such a high figure for sustained writes (implying this is the TLC writing speed as the SLC cache would be exhausted in sustained writes).
I'm also happy to see sensical endurance specs for once. We've previously seen oddly non-scaling figures in prior SSD releases from multiple companies. Clearly stating a specific TBW 'per 128GB' makes a lot of sense here, and the number itself isn't that bad, either.
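Because the endurance rating is quoted per 128GB, the TBW for any capacity in the line is a straightforward linear scale. A sketch with a placeholder per-128GB figure (check Intel's spec sheet for the actual number, which I have not confirmed here):

```python
def rated_tbw(capacity_gb: int, tbw_per_128gb: float) -> float:
    # Endurance scales linearly with capacity when quoted per 128 GB
    return capacity_gb / 128 * tbw_per_128gb

TBW_PER_128GB = 72  # hypothetical illustration value, not Intel's published figure

for capacity in (128, 256, 512):
    print(f"{capacity} GB -> {rated_tbw(capacity, TBW_PER_128GB):.0f} TBW")
```

This is exactly the sensible scaling the non-linear specs of some earlier drives failed to follow: doubling the capacity doubles the flash available for wear-leveling, so it should double the rated writes.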
Simplified packaging from Intel here, apparently to help further reduce shipping costs.
Introduction and Technical Specifications
Courtesy of Thermalright
Thermalright is a well-established brand name, known for their high performance air coolers. The newest addition to their TRUE Spirit series of air coolers, the TRUE Spirit 140 Direct, is a redesigned version of their TRUE Spirit 140 Power, offering a similar level of performance at a lower price point. The Thermalright TRUE Spirit 140 Direct is a slim, single tower cooler featuring a nickel-plated copper base and an aluminum radiator with a 140mm fan. Additionally, Thermalright designed the cooler to be compatible with all modern platforms. The TRUE Spirit 140 Direct cooler is available at an MSRP of $46.95.
The Thermalright TRUE Spirit 140 Direct cooler consists of a single finned aluminum tower radiator fed by five 6mm diameter nickel-plated copper heat pipes in a U-shaped configuration. The cooler can accommodate up to two 140mm fans, but comes standard with a single fan only. The fans are held to the radiator tower using metal clips through the radiator tower body. The cooler is held to the CPU using screws on either side of the mount plate that fix to the unit's mounting cage installed to the motherboard.
Introduction and Specifications
Logitech has been releasing gaming headphones with steady regularity of late, and this summer we have another new model to examine in the G433 Gaming Headset, which has just been released (along with the G233). This wired, 7.1-channel capable headset is quite different visually from previous Logitech models, as it is finished with an interesting “lightweight, hydrophobic fabric shell” and offered in various colors (our review pair is a bright red). But the G433s have function to go along with the style, as Logitech has focused on both digital and analog sound quality with this third model to incorporate Logitech’s Pro-G drivers. How do they sound? We’ll find out!
One of the main reasons to consider a gaming headset like this in the first place is the ability to take advantage of multi-channel surround sound from your PC, and with the G433’s (as with the previously reviewed G533) this is accomplished via DTS Headphone:X, a technology which in my experience is capable of producing a convincing sound field that is very close to that of multiple surround drivers. All of this is being created via the same pair of left/right drivers that handle music, and here Logitech is able to boast of some very impressive engineering that produced the Pro-G driver introduced two years ago. An included DAC/headphone amp interfaces with your PC via USB to drive the surround experience, and without this you still have a standard stereo headset that can connect to anything with a 3.5 mm jack.
The G433 is available in four colors, of which we have the red on hand today
If you have not read up on Logitech’s exclusive Pro-G driver, you will find in their description far more similarities to an audiophile headphone company than what we typically associate with a computer peripheral maker. Logitech explains the thinking behind the technology:
“The intent of the Pro-G driver design innovation is to minimize distortion that commonly occurs in headphone drivers. When producing lower frequencies (<1kHz), most speaker diaphragms operate as a solid mass, like a piston in an engine, without bending. When producing many different frequencies at the same time, traditional driver designs can experience distortion caused by different parts of the diaphragm bending when other parts are not. This distortion caused by rapid transition in the speaker material can be tuned and minimized by combining a more flexible material with a specially designed acoustic enclosure. We designed the hybrid-mesh material for the Pro-G driver, along with a unique speaker housing design, to allow for a more smooth transition of movement resulting in a more accurate and less distorted output. This design also yields a more efficient speaker due to less overall output loss due to distortion. The result is an extremely accurate and clear sounding audio experience putting the gamer closer to the original audio of the source material.”
Logitech’s claims about the Pro-G have, in my experience with the previous models featuring these drivers (G633/G933 Artemis Spectrum and G533 Wireless), been spot on, and I have found them to produce a clarity and detail that rivals ‘audiophile’ stereo headphones.
It feels like forever that we've been hearing about 802.11ad. For years it's been an up-and-coming technology, seeing some releases in devices like Dell's WiGig-powered wireless docking stations for Latitude notebooks.
However, with the release of the first wave of 802.11ad routers earlier this year from Netgear and TP-Link there has been new attention drawn to more traditional networking applications for it. This was compounded with the announcement of a plethora of X299-chipset based motherboards at Computex, with some integrating 802.11ad radios.
That brings us to today, where we have the ASUS Prime X299-Deluxe motherboard, which we used in our Skylake-X review. This almost $500 motherboard is the first device we've had our hands on which features both 802.11ac and 802.11ad networking, which presented a great opportunity to get experience with WiGig. With promises of wireless transfer speeds up to 4.6Gbps how could we not?
For our router, we decided to go with the Netgear Nighthawk X10. While the TP-Link and Netgear options appear to share the same model radio for 802.11ad usage, the Netgear has a port for 10 Gigabit networking, something necessary to test the full bandwidth promises of 802.11ad from a wired connection to a wireless client.
The Nighthawk X10 is a beast of a router (with a $500 price tag to match) in its own right, but today we are solely focusing on it for 802.11ad testing.
Making things a bit complicated, the Nighthawk X10's 10GbE port utilizes an SFP+ connector, while the 10GbE NIC on our test server, with the ASUS X99-E-10G WS motherboard, uses an RJ45 connection for its 10 Gigabit port. In order to remedy this in a manner where we could still move the router away from the test client to test range, we used a Netgear ProSAFE XS716E 10GigE switch as the go-between.
Essentially, it works like this: we connect the Nighthawk X10 to the ProSAFE switch through an SFP+ cable, and then to the test server through 10GBase-T. The 802.11ad client is, of course, connected wirelessly to the Nighthawk X10.
On the software side, we are using the tried and true iPerf3. You run this software in server mode on the host machine and connect to that machine through the same piece of software in client mode. In this case, we are running iPerf3 with 10 parallel streams over a 30-second period, which is then averaged to get the resulting bandwidth of the connection.
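For anyone wanting to script runs like ours, iperf3 can emit machine-readable results (a client invocation along the lines of `iperf3 -c <server> -P 10 -t 30 --json`), which makes it easy to pull out the averaged throughput programmatically. A minimal sketch, using a trimmed, hypothetical sample of that JSON rather than our actual capture:

```python
import json

# Trimmed, hypothetical iperf3 --json output; real output contains much more detail
sample = '{"end": {"sum_received": {"bits_per_second": 2.2e9}}}'

def received_gbps(iperf_json: str) -> float:
    # Pull the run-average receive throughput and convert bits/s to Gbps
    result = json.loads(iperf_json)
    return result["end"]["sum_received"]["bits_per_second"] / 1e9

print(f"Average throughput: {received_gbps(sample):.2f} Gbps")
```

The `end.sum_received` section is the same run-level average we report in the charts, just taken from the parsed output instead of the console summary.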
There are two main takeaways from this chart - the maximum bandwidth comparison to 802.11ac, and the scaling of 802.11ad with distance.
First, it's impressive to see such high bandwidth over a wireless connection. In a world where the vast majority of the Ethernet connections are still limited to 1Gbps, seeing up to 2.2Gbps over a wireless connection is very promising.
However, when you take a look at the bandwidth drops as we move the router and client further and further away, we start to see some of the main issues with 802.11ad.
Instead of using the more traditional 2.4 GHz and 5.0 GHz frequency ranges we've seen from Wi-Fi for so many years, 802.11ad uses frequencies in the unlicensed 60 GHz spectrum. Without getting too technical about RF, this essentially means that 802.11ad is capable of extremely high bandwidth rates but cannot penetrate walls, and line of sight between devices is ideal. In our testing, we even found that the orientation of the router made a big difference; rotating the router 180 degrees could make the difference between connecting and not connecting in some scenarios.
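The frequency penalty can be quantified with the standard free-space path loss formula. Note this models only open-air loss and ignores walls entirely (where 60 GHz fares far worse), so it is a best-case view of the disadvantage:

```python
import math

def fspl_db(distance_m: float, freq_mhz: float) -> float:
    # Free-space path loss: 20*log10(d) + 20*log10(f) - 27.55, d in meters, f in MHz
    return 20 * math.log10(distance_m) + 20 * math.log10(freq_mhz) - 27.55

d = 10  # meters, roughly our 35-foot test distance
print(f" 5 GHz @ {d} m: {fspl_db(d, 5000):.1f} dB")
print(f"60 GHz @ {d} m: {fspl_db(d, 60000):.1f} dB")
print(f"Extra loss at 60 GHz: {fspl_db(d, 60000) - fspl_db(d, 5000):.1f} dB")
```

Going from 5 GHz to 60 GHz costs about 21.6 dB at any given distance before obstructions are even considered, which is why 802.11ad leans so heavily on beamforming and line of sight.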
As you can see, the drop off in bandwidth for the 802.11ad connection between our test locations 15 feet away from the client and 35 feet away from the client was quite stark.
That being said, taking another look at our results you can see that in all cases the 802.11ad connection is faster than the 802.11ac results, which is good. For the promised applications of 802.11ad where the device and router are in the same room of reasonable size, WiGig seems to be delivering most of what is promised.
It is likely we won't see high adoption rates of 802.11ad for networking computers. The range limitations are just too stark to be a solution that works for most homes. However, I do think WiGig has a lot of promise to replace cables in other situations. We've seen notebook docks utilizing WiGig and there has been a lot of buzz about VR headsets utilizing WiGig for wireless connectivity to gaming PCs.
802.11ad networking is in its infancy, so this is all subject to change. Stay tuned to PC Perspective for continuing news on 802.11ad and other wireless technologies!
EPYC makes its move into the data center
Because we traditionally focus and feed on the excitement and build up surrounding consumer products, the AMD Ryzen 7 and Ryzen 5 launches were huge for us and our community. Finally seeing competition to Intel’s hold on the consumer market was welcome and necessary to move the industry forward, and we are already seeing the results of some of that with this week’s Core i9 release and pricing. AMD is, and deserves to be, proud of these accomplishments. But from a business standpoint, the impact of Ryzen on the bottom line will likely pale in comparison to how EPYC could fundamentally change the financial stability of AMD.
AMD EPYC is the server processor that takes aim at the Intel Xeon and its dominant status in the data center market. The enterprise field is a high margin, high profit area, and while AMD once had significant share in this space with Opteron, that share has essentially dropped to zero over the last 6+ years. AMD hopes to use the same tactic in the data center as it did on the consumer side to shock and awe the industry into taking notice: providing impressive new performance levels while undercutting the competition on pricing.
Introducing the AMD EPYC 7000 Series
Targeting the single and 2-socket systems that make up ~95% of the market for data centers and enterprise, AMD EPYC is smartly not trying to punch above its weight class. This offers an enormous opportunity for AMD to take market share from Intel with minimal risk.
Many of the specifications here have been slowly shared by AMD over time, including at the recent financial analyst day, but seeing it placed on a single slide like this puts everything in perspective. In a single socket design, servers will be able to integrate 32 cores with 64 threads, 8x DDR4 memory channels with up to 2TB of memory capacity per CPU, 128 PCI Express 3.0 lanes for connectivity, and more.
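Those memory figures follow directly from the channel count. Assuming DDR4-2666 operation and two DIMMs per channel with (future) 128GB modules, the headline bandwidth and capacity numbers can be derived in a few lines:

```python
def mem_bandwidth_gbs(channels: int, mt_per_s: int, bytes_per_transfer: int = 8) -> float:
    # Each 64-bit DDR4 channel moves 8 bytes per transfer
    return channels * mt_per_s * bytes_per_transfer / 1000

def max_capacity_tb(channels: int, dimms_per_channel: int, gb_per_dimm: int) -> float:
    return channels * dimms_per_channel * gb_per_dimm / 1024

# Assumes DDR4-2666 and 128 GB DIMMs; check AMD's platform docs for supported configs
print(f"Peak bandwidth per socket: {mem_bandwidth_gbs(8, 2666):.1f} GB/s")
print(f"Max capacity per socket:   {max_capacity_tb(8, 2, 128):.0f} TB")
```

Eight channels is double what contemporary quad-channel platforms offer, and it is the capacity ceiling (2TB per CPU) as much as the bandwidth that matters for the memory-bound workloads EPYC is chasing.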
Worth noting on this slide, and originally announced at the financial analyst day as well, is AMD’s intent to maintain socket compatibility for the next two generations. Both Rome and Milan, based on 7nm technology, will be drop-in upgrades for customers buying into EPYC platforms today. That kind of commitment from AMD is crucial to regaining the trust of a market that needs those reassurances.
Here is the lineup as AMD is providing it today. The model numbers in the 7000 series use the second and third characters as a performance indicator (a 755x will be faster than a 750x, for example) and the fourth character to indicate the generation of EPYC (here, the 1 indicates first gen). AMD has created four different core count divisions along with a few TDP options to provide choices for all types of potential customers. It is worth noting that though this table might seem a bit intimidating, the lineup is drastically simpler than the Intel Xeon product line that exists today, or that will exist in the future. AMD is offering immediate availability of the top five CPUs in this stack, with the bottom four due before the end of July.
Specifications and Design
Intel is at an important crossroads for its consumer product lines. Long accused of ignoring the gaming and enthusiast markets, focusing instead on laptops and smartphones/tablets at the direct expense of the DIY user, Intel had raised prices and shown only a limited ability to increase per-die performance over a fairly extended period. The release of the AMD Ryzen processors, along with the pending release of the Threadripper product line with up to 16 cores, has moved Intel into a higher gear; the company is now more prepared to increase features and performance, and to lower prices.
We have already talked about the majority of the specifications, pricing, and feature changes of the Core i9/Core i7 lineup with the Skylake-X designation, but it is worth including them here, again, in our review of the Core i9-7900X for reference purposes.
|Core i9-7980XE||Core i9-7960X||Core i9-7940X||Core i9-7920X||Core i9-7900X||Core i7-7820X||Core i7-7800X||Core i7-7740X||Core i5-7640X|
|Architecture||Skylake-X||Skylake-X||Skylake-X||Skylake-X||Skylake-X||Skylake-X||Skylake-X||Kaby Lake-X||Kaby Lake-X|
|Base Clock||?||?||?||?||3.3 GHz||3.6 GHz||3.5 GHz||4.3 GHz||4.0 GHz|
|Turbo Boost 2.0||?||?||?||?||4.3 GHz||4.3 GHz||4.0 GHz||4.5 GHz||4.2 GHz|
|Turbo Boost Max 3.0||?||?||?||?||4.5 GHz||4.5 GHz||N/A||N/A||N/A|
|Cache||16.5MB (?)||16.5MB (?)||16.5MB (?)||16.5MB (?)||13.75MB||11MB||8.25MB||8MB||6MB|
|Memory||Quad-channel DDR4-2666 (dual-channel on Kaby Lake-X)|
|TDP||165 watts (?)||165 watts (?)||165 watts (?)||165 watts (?)||140 watts||140 watts||140 watts||112 watts||112 watts|
There is a lot to take in here. The three most interesting points are that, first, Intel plans to one-up AMD Threadripper by offering an 18-core processor. Second, and potentially more interesting, Intel also wants to change the perception of the X299-class platform by offering lower priced, lower core count CPUs like the quad-core, non-Hyper-Threaded Core i5-7640X. Third, we also see the first ever branding of Core i9.
Intel only provided detailed specifications up to the Core i9-7900X, a 10-core / 20-thread processor with a base clock of 3.3 GHz and a Turbo peak of 4.5 GHz (using the new Turbo Boost Max Technology 3.0). It sports 13.75MB of cache thanks to an updated cache configuration, includes 44 lanes of PCIe 3.0 (an increase of 4 lanes over Broadwell-E), supports quad-channel DDR4 memory up to 2666 MHz, and has a 140 watt TDP. The new LGA2066 socket will be utilized. Pricing for this CPU is set at $999, which is interesting for a couple of reasons. First, it is $700 less than the starting MSRP of the 10c/20t Core i7-6950X from one year ago; obviously a big plus. However, there is quite a ways to go up the stack, with the 18c/36t Core i9-7980XE coming in at a cool $1999.
|Core i9-7900X||Core i7-6950X||Core i7-7700K|
|Base Clock||3.3 GHz||3.0 GHz||4.2 GHz|
|Turbo Boost 2.0||4.3 GHz||3.5 GHz||4.5 GHz|
|Turbo Boost Max 3.0||4.5 GHz||4.0 GHz||N/A|
|TDP||140 watts||140 watts||91 watts|
The next CPU down the stack is compelling as well. The Core i7-7820X is the new 8-core / 16-thread HEDT option from Intel, with clock speeds similar to the 10-core above it (save the higher base clock). It has 11MB of L3 cache and 28 lanes of PCI Express (down from 40 on the Core i7-6900K), but carries a $599 price tag. Compared to the 8-core 6900K, that is ~$400 lower, while the new Skylake-X part holds a 700 MHz clock speed advantage. That’s huge, and is a direct attack on the AMD Ryzen 7 1800X, which sells for $499 today and cut Intel off at the knees this March. In fact, the base clock of the Core i7-7820X is only 100 MHz lower than the maximum Turbo Boost clock of the Core i7-6900K!
It is worth noting the pricing gap between the 7820X and the 7900X. That $400 gap seems huge and out of place compared to the deltas in the rest of the stack, which never exceed $300 (and that is at the top two slots). Intel is clearly concerned about the Ryzen 7 1800X and making sure it has options to compete at that price point (and below), but feels less threatened by the upcoming Threadripper CPUs. Pricing the 10+ core CPUs today, without knowing what AMD is going to do with Threadripper, is a risk and could put Intel in the same position it was in with the Ryzen 7 release.
Introduction and Features
(Courtesy of Corsair)
Corsair recently refreshed its TX Series power supplies with four new models: the TX550M, TX650M, TX750M, and TX850M. The new TX-M Series sits right in the middle of Corsair’s PC power supply lineup and was designed to offer efficient operation and easy installation. Corsair states the TX-M Series power supplies provide industrial build quality, 80 Plus Gold efficiency, and extremely tight voltage regulation, and come with a semi-modular cable set. In addition, the TX-M Series power supplies use a compact chassis measuring only 140mm deep and come backed by a 7-year warranty.
We will be taking a detailed look at the TX-M Series 750W power supply in this review.
Corsair TX-M Series PSU Key Features:
• 550W, 650W, 750W, and 850W models
• Server-grade 50°C max operating temperature
• 7-Year warranty
• 80 PLUS Gold certified
• All capacitors are Japanese brand, 105°C rated
• Compact chassis measures only 140mm (5.5”) deep
• Quiet 120mm cooling fan
• Semi-modular cable set
• Complies with ATX12V v2.4 and EPS 2.92 standards
• 6th Generation Intel Core processor Ready
• Full suite of protection circuits: OVP, UVP, SCP, OPP and OTP
• Active PFC with full range AC input (100-240 VAC)
• MSRP for the TX750M is $99.99 USD
Here is what Corsair has to say about the new TX-M Series power supplies: “TX Series semi-modular power supplies are ideal for basic desktop systems where low energy use, low noise, and simple installation are essential. All of the capacitors are 105°C rated, Japanese brand, to ensure solid power delivery and long term reliability. 80 Plus Gold efficiency reduces operating costs and excess heat. Flat, sleeved black modular cables with clearly marked connectors make installation fast and straightforward, with good-looking results."
Astute readers of the site might remember the original story we did on Bitcoin mining in 2011, the good ole' days where the concept of the blockchain was new and exciting and mining Bitcoin on a GPU was still plenty viable.
However, that didn't last long, as the race for cash led people to develop Application Specific Integrated Circuits (ASICs) dedicated solely to mining Bitcoin quickly while sipping power. The use of these expensive ASICs drove the difficulty of mining Bitcoin through the roof and killed any chance of profitability for mere mortals mining cryptocurrency.
Cryptomining saw a resurgence in late 2013 with the popular adoption of alternative cryptocurrencies, specifically Litecoin, which was based on the Scrypt algorithm instead of SHA-256 like Bitcoin. This meant that the ASICs developed for mining Bitcoin were useless. This is also the period of time that many of you may remember as the "Dogecoin" era, my personal favorite cryptocurrency of all time.
Defenders of these new "altcoins" claimed that Scrypt was different enough that ASICs would never be developed for it, and GPU mining would remain viable for a larger portion of users. As it turns out, the promise of money always wins out, and we soon saw Scrypt ASICs. Once again, the market for GPU mining crashed.
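To illustrate the arms race described above, here is a toy proof-of-work loop in Python. It is a deliberately simplified sketch: real Bitcoin mining uses double SHA-256 over an 80-byte block header against a full 256-bit target, and Scrypt's memory-hard design is precisely what made it (temporarily) harder to build ASICs for:

```python
import hashlib

def mine(header: bytes, difficulty_bits: int) -> int:
    """Toy proof-of-work: find a nonce such that SHA-256(header || nonce)
    has at least `difficulty_bits` leading zero bits.

    Each additional difficulty bit doubles the expected number of hash
    attempts, which is why dedicated hardware wins this race.
    """
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(header + nonce.to_bytes(8, "little")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

# Low difficulty so this finishes in a fraction of a second.
nonce = mine(b"block header", 12)
```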
That brings us to today, and what I am calling "Third-wave Cryptomining."
While the general populace stopped caring about cryptocurrency as a whole, the dedicated group that remained continued to develop altcoins. These currencies are based on various algorithms and other proofs of work (see technologies like Storj, which uses the blockchain for a decentralized, Dropbox-like service!).
As you may have predicted, and for reasons that are difficult to quantify historically, another very popular cryptocurrency has emerged from this wave of development: Ethereum.
Ethereum is based on the Dagger-Hashimoto algorithm and has a whole host of quirks that make it different from other cryptocurrencies. We aren't here to get deep in the woods on the methods behind different blockchain implementations, but if you have some time, check out the Ethereum White Paper. It's all very fascinating.
Editor’s Note: After our review of the Dell XPS 13 2-in-1, Dell contacted us about our performance results. They found our numbers were significantly lower than their own internal benchmarks. They offered to send us a replacement notebook to test, and we have done so. After spending some time with the new unit we have seen much higher results, more in line with Dell’s performance claims. We haven’t been able to find any differences between our initial sample and the new notebook, and our old sample has been sent back to Dell for further analysis. Due to these changes, the performance results and conclusion of this review have been edited to reflect the higher performance results.
It's difficult to believe that it's only been a little over 2 years since we got our hands on the revised Dell XPS 13. Placing an emphasis on minimalistic design, large displays in small chassis, and high-quality construction, the Dell XPS 13 seems to have influenced the "thin and light" market in some noticeable ways.
Aiming its sights at a slightly different corner of the market, this year Dell unveiled the XPS 13 2-in-1, a convertible tablet with a 360-degree hinge. However, instead of just putting a new hinge on the existing XPS 13, Dell has designed the all-new XPS 13 2-in-1 from the ground up to be even more "thin and light" than its older sibling, which has meant some substantial design changes.
Since we are a PC hardware-focused site, let's take a look under the hood to get an idea of what exactly we are talking about with the Dell XPS 13 2-in-1.
|Dell XPS 13 2-in-1|
|Screen||13.3” FHD (1920 x 1080) InfinityEdge touch display|
|CPU||Core i5-7Y54||Core i7-7Y75|
|GPU||Intel HD Graphics 615|
|Storage||128GB SATA||256GB PCIe|
|Network||Intel 8265 802.11ac MIMO (2.4 GHz, 5.0 GHz)|
|Connectivity||1 x Thunderbolt 3
USB 3.0 Type-C
USB 3.0 x 2 (MateDock)|
|Audio||Dual Array Digital Microphone
Stereo Speakers (1W x 2)|
|Weight||2.7 lbs (1.24 kg)|
|Dimensions||11.98-in x 7.81-in x 0.32-0.54-in
(304 mm x 199 mm x 8-13.7 mm)|
|Operating System||Windows 10 Home / Pro (+$50)|
One of the more striking design decisions from a hardware perspective is the choice of the low power Core i5-7Y54 processor, or, as you may know it from its older naming scheme, Core M. In the Kaby Lake generation, Intel has decided to drop the Core M branding (though oddly Core m3 still exists) and integrate these lower power parts into the regular Core branding scheme.
Introduction and Design
In case you have not heard by now, Pixel is the re-imagining of the Nexus phone concept by Google; a fully stock version of the Android experience on custom, Google-authorized hardware - and with the promise of the latest OS updates as they are released. So how does the hardware stack up? We are late into the life of the Pixel by now, and this is more of a long-term review as I have had the smaller version of the phone on hand for some weeks now. As a result I can offer my candid view of the less-covered of the two Pixel handsets (most reviews center around the Pixel XL), and its performance.
There was always a certain cachet to owning a Nexus phone, and you could rest assured that you would be running the latest version of Android before anyone on operator-controlled hardware. The Nexus phones were sold primarily by Google, unlocked, with operator/retail availability at times during their run. Things took a turn when Google opted to offer a carrier-branded version of the Nexus 6 back in November of 2014, along with their usual unlocked Google Play store offering. But this departure was not just an issue of branding, as the price jumped to a full $649; the off-contract cost of premium handsets such as Apple’s iPhone. How could Google hope to compete in a space dominated by Apple and Samsung phones purchased by and large with operator subsidies and installment plans? They did not compete, of course, and the Nexus 6 flopped.
Pixel, coming after the Huawei-manufactured Nexus 6P and LG-manufactured Nexus 5X, drops the “Nexus” branding while continuing the tradition of a reference Android experience - and the more recent tradition of premium pricing. As we have seen in the months since its release, the Pixel did not put much of a dent into the Apple/Samsung dominated handset market. But even during the budget-friendly Nexus era, which offered a compelling mix of day-one Android OS update availability and inexpensive, unlocked hardware (think Nexus 4 at $299 and Nexus 5 at $349), Google's own phones were never mainstream. Still, in keeping with iPhone and Galaxy flagships, $649 nets you a Pixel, which also launched through Verizon in an exclusive operator deal. Of course a larger version of the Pixel exists, and I would be remiss if I did not mention the Pixel XL. Unfortunately I would also be remiss if I didn't mention that stock for the XL has been quite low, with availability constantly in question.
The Pixel is hard to distinguish from an iPhone 7 from a distance (other than the home button)
|Google Pixel Specifications|
|Display||5.0-inch 1080x1920 AMOLED|
|SoC||Qualcomm Snapdragon 821 (MSM8996)|
|CPU Cores||2x 2.15 GHz Kryo
2x 1.60 GHz Kryo|
|GPU Cores||Adreno 530|
|Storage||32 / 128 GB|
|Network||Snapdragon X12 LTE|
|Dimensions||143.8 x 69.5 x 8.5 mm, 143 g|
Logitech G413 Mechanical Gaming Keyboard
The rise in popularity of mechanical gaming keyboards has been accompanied by the spread of RGB backlighting. But RGBs, which often include intricate control systems and software, can significantly raise the price of an already expensive peripheral. There are many cheaper non-backlit mechanical keyboards out there, but they are often focused on typing, and lack the design and features that are unique to the gaming keyboard market.
Gamers on a budget, or those who simply dislike fancy RGB lights, are therefore faced with a relative dearth of options, and it's exactly this market segment that Logitech is targeting with its G413 Mechanical Gaming Keyboard.
Introduction and First Impressions
The LIVA family of mini PCs has been refreshed regularly since its introduction in 2014, and the LIVA Z represents a change to sleek industrial design as well as the expected updates to the internal hardware.
The LIVA Z we have for review today is powered by an Intel Apollo Lake SoC, and the product family includes SKUs with both Celeron and Pentium processors. Our review unit is the entry-level model with a Celeron N3350 processor, 4GB memory, and 32GB storage. Memory and storage support are improved compared to past LIVAs; this is really more of a mini-PC kit like an Intel NUC, with an M.2 slot (SATA 6.0 Gbps) for storage expansion and a pair of SODIMM slots supporting up to 8 GB of DDR3L memory (a single 4GB SODIMM is installed by default).
The LIVA Z is a very small device, just a bit bigger than your typical set-top streaming box, and like all LIVAs it is fanless, making it totally silent in operation. That is important for applications such as media consumption in a living room, and like previous LIVA models the Z includes a VESA mount for installation on the back of a TV or monitor. So how does it perform? We will find out!
A Data Format for Whole 3D Scenes
The Khronos Group has finalized the glTF 2.0 specification, and they recommend that interested parties integrate this 3D scene format into their content pipeline starting now. It’s ready.
glTF is a format to deliver 3D content, especially full scenes, in a compact and quick-loading data structure. These features differentiate glTF from other 3D formats, like Autodesk’s FBX and even the Khronos Group’s own Collada, which are more like intermediate formats between tools, such as 3D editing software (ex: Maya and Blender) and game engines. The Khronos Group does not see a competing format for final scenes that are designed to be ingested directly, quickly, and with a small footprint.
glTF 2.0 makes several important changes.
The previous version of glTF was based on a defined GLSL material, which limited how it could be used, although it did align with WebGL at the time (and that spurred some early adoption). The new version switches to Physically Based Rendering (PBR) workflows to define its materials, which has a few advantages.
First, PBR can represent a wide range of materials with just a handful of parameters. Rather than dictating a specific shader, the data structure can just... structure the data. The industry has settled on two main workflows, metallic-roughness and specular-gloss, and glTF 2.0 supports them both. (Metallic-roughness is the core workflow, but specular-gloss is provided as an extension, and they can be used together in the same scene. Also, during the briefing, I noticed that transparency was not explicitly mentioned in the slide deck, but the Khronos Group confirmed that it is stored as the alpha channel of the base color, and thus supported.) Because the format is now based on existing workflows, the implementation can be programmed in OpenGL, Vulkan, DirectX, Metal, or even something like a software renderer. In fact, Microsoft was a specification editor on glTF 2.0, and they have publicly announced using the format in their upcoming products.
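As a concrete, deliberately minimal illustration of the "handful of parameters" point, here is the kind of JSON fragment a glTF 2.0 file carries for a metallic-roughness material. The material name and factor values are made up for this example, but `pbrMetallicRoughness`, `baseColorFactor`, `metallicFactor`, and `roughnessFactor` are the property names defined by the 2.0 specification, with alpha riding in the fourth component of the base color as described above:

```python
import json

# A minimal glTF 2.0 metallic-roughness material. The shader is not
# dictated; the data structure just structures the data.
material = {
    "name": "scuffed_gold",  # illustrative name, not from any real asset
    "pbrMetallicRoughness": {
        "baseColorFactor": [1.0, 0.85, 0.57, 1.0],  # RGBA; A is transparency
        "metallicFactor": 1.0,    # fully metallic
        "roughnessFactor": 0.4,   # partly scuffed surface
    },
}

# Materials live in a top-level "materials" array in the glTF document.
gltf_fragment = json.dumps({"materials": [material]}, indent=2)
print(gltf_fragment)
```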
The original GLSL material, from glTF 1.0, is available as an extension (for backward compatibility).
A second advantage of PBR is that it is lighting-independent. When you define a PBR material for an object, it can be placed in any environment and it will behave as expected. Notable, albeit extreme, examples of where this would have been useful are the outdoor scenes of Doom 3 and the indoor scenes of Battlefield 2. It also simplifies asset creation. Some applications, like Substance Painter and Quixel, have artists stencil materials onto their geometry, like gold, rusted iron, and scuffed plastic, and automatically generate the appropriate textures. PBR also aligns well with deferred rendering (see below), which performs lighting as a post-process step and thus skips pixels (fragments) that are overwritten.
PBR Deferred Buffers in Unreal Engine 4 Sun Temple.
Lighting is applied to these completed buffers, not every fragment.
glTF 2.0 also improves support for complex animations by adding morph targets. Most 3D animations, beyond just moving, rotating, and scaling whole objects, are based on skeletal animation. This method works by binding vertices to bones and moving, rotating, and scaling a hierarchy of joints. It works well for humans, animals, hinges, and other collections of joints and sockets, and it was already supported in glTF 1.0. Morph targets, on the other hand, allow the artist to directly interpolate individual vertices between defined states. This is often demonstrated with facial animation, interpolating between smiles and frowns, but, in an actual game, that effect is often approximated with skeletal animation (for performance reasons). Regardless, glTF 2.0 now supports morph targets, too, letting artists make the choice that best suits their content.
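The blend itself is simple: glTF 2.0 stores each morph target as per-vertex displacements from the base mesh, and the rendered position is the base plus the weighted sum of those displacements. A minimal sketch (the function and data names are my own, not from the spec):

```python
def apply_morph_targets(base, displacements, weights):
    """Blend morph targets as glTF 2.0 defines them.

    `base` is a list of (x, y, z) vertex positions; each entry of
    `displacements` is a per-vertex list of (dx, dy, dz) offsets for
    one target; `weights` holds one weight per target. The result is
    base + sum(weight_i * displacement_i) for every vertex.
    """
    blended = []
    for v, vertex in enumerate(base):
        blended.append(tuple(
            c + sum(w * d[v][axis] for w, d in zip(weights, displacements))
            for axis, c in enumerate(vertex)
        ))
    return blended

# One vertex and one "smile" target displacing it upward, at half weight.
base = [(0.0, 0.0, 0.0)]
smile = [(0.0, 1.0, 0.0)]
result = apply_morph_targets(base, [smile], [0.5])  # → [(0.0, 0.5, 0.0)]
```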
Speaking of performance, the Khronos Group is also promoting “enhanced performance” as a benefit of glTF 2.0. I asked whether they had anything to elaborate on, and they responded with a little story. While glTF 1.0 validators were being created, one of the engineers compiled a list of design choices that would lead to minor performance issues. The fixes for these were originally supposed to be embodied in a glTF 1.1 specification, but PBR workflows and Microsoft’s request to abstract the format away from GLSL led to glTF 2.0, which is where the performance optimization finally ended up. Basically, there wasn’t just one or two changes that made a big impact; it was the result of many tiny changes that add up.
Also, the binary version of glTF is now a core feature in glTF 2.0.
This slide looks at the potential future of glTF, beyond 2.0.
Looking forward, the Khronos Group has a few items on their glTF roadmap. These did not make glTF 2.0, but they are current topics for future versions. One potential addition is mesh compression, via the Google Draco team, to further decrease file size of 3D geometry. Another roadmap entry is progressive geometry streaming, via Fraunhofer SRC, which should speed up runtime performance.
Yet another roadmap entry is “Unified Compression Texture Format for Transmission”, specifically Basis by Binomial, for texture compression that remains as small as possible on the GPU. Graphics processors can only natively operate on a handful of formats, like DXT and ASTC, so textures need to be converted when they are loaded by an engine. Often, when a texture is loaded at runtime (rather than imported by the editor), it will be decompressed and left in that state on the GPU. Some engines, like Unity, have a runtime compress method that converts textures to DXT, but the developer needs to explicitly call it, and the documentation says it’s lower quality than the algorithm used by the editor (although I haven’t tested this). Suffice it to say, having a format that can circumvent all of that would be nice.
Again, if you’re interested in adding glTF 2.0 to your content pipeline, then get started. It’s ready. Microsoft is doing it, too.
Introduction, How PCM Works, Reading, Writing, and Tweaks
I’ve seen a bit of flawed logic floating around related to discussions about 3D XPoint technology. Some are directly comparing the cost per die to NAND flash (you can’t - 3D XPoint likely has fewer fab steps than NAND - especially when compared with 3D NAND). Others are repeating a bunch of terminology and element names without taking the time to actually explain how it works, and far too many folks out there can't even pronounce it correctly (it's spoken 'cross-point'). My plan is to address as much of the confusion as I can with this article, and I hope you walk away understanding how XPoint and its underlying technologies (most likely) work. While we do not have absolute confirmation of the precise material compositions, there is a significant amount of evidence pointing to one particular set of technologies. With Optane Memory now out in the wild and purchasable by folks wielding electron microscopes and mass spectrometers, I have seen enough additional information come across to assume XPoint is, in fact, PCM based.
XPoint memory. Note the shape of the cell/selector structure. This will be significant later.
While we were initially told at the XPoint announcement event Q&A that the technology was not phase change based, there is overwhelming evidence to the contrary, and it is likely that Intel did not want to let the cat out of the bag too early. The funny thing about that is that both Intel and Micron were briefing on PCM-based memory developments five years earlier, and nearly everything about those briefings lines up perfectly with what appears to have ended up in the XPoint that we have today.
Some die-level performance characteristics of various memory types. source
The above figures were sourced from a 2011 paper and may be a bit dated, but they do a good job putting some actual numbers with the die-level performance of the various solid state memory technologies. We can also see where the ~1000x speed and ~1000x endurance comparisons with XPoint to NAND Flash came from. Now, of course, those performance characteristics do not directly translate to the performance of a complete SSD package containing those dies. Controller overhead and management must take their respective cuts, as is shown with the performance of the first generation XPoint SSD we saw come out of Intel:
The ‘bridging the gap’ Latency Percentile graph from our Intel SSD DC P4800X review.
(The P4800X comes in at 10us above).
There have been a few very vocal folks out there chanting 'not good enough', without the basic understanding that the first publicly available iteration of a new technology never represents its ultimate performance capabilities. It took NAND flash decades to make it into usable SSDs, and another decade before climbing to the performance levels we enjoy today. Time will tell if this holds true for XPoint, but given Micron's demos and our own observed performance of Intel's P4800X and Optane Memory SSDs, I'd argue that it is most certainly off to a good start!
A 3D XPoint die, submitted for your viewing pleasure (click for larger version).