Manufacturer: PC Perspective

Why?

Astute readers of the site might remember the original story we did on Bitcoin mining in 2011, the good ol' days when the concept of the blockchain was new and exciting and mining Bitcoin on a GPU was still plenty viable.

gpu-bitcoin.jpg

However, that didn't last long, as the race for cash led people to develop Application Specific Integrated Circuits (ASICs) dedicated solely to mining Bitcoin quickly while sipping power. The use of these expensive ASICs drove the difficulty of mining Bitcoin through the roof and killed any chance of profitability for mere mortals mining cryptocurrency.

Cryptomining saw a resurgence in late 2013 with the popular adoption of alternate cryptocurrencies, specifically Litecoin, which was based on the Scrypt algorithm instead of SHA-256 like Bitcoin. This meant that the ASICs developed for mining Bitcoin were useless. This is also the period of time that many of you may remember as the "Dogecoin" era, my personal favorite cryptocurrency of all time.

dogecoin-300.png

Defenders of these new "altcoins" claimed that Scrypt was different enough that ASICs would never be developed for it, and GPU mining would remain viable for a larger portion of users. As it turns out, the promise of money always wins out, and we soon saw Scrypt ASICs. Once again, the market for GPU mining crashed.

That brings us to today, and what I am calling "Third-wave Cryptomining." 

While the mass populace stopped caring about cryptocurrency as a whole, the dedicated group that was left continued to develop altcoins. These different currencies are based on various algorithms and other proofs of work (see technologies like Storj, which uses the blockchain for a decentralized Dropbox-like service!).

As you may have predicted, for various reasons that might be difficult to historically quantify, there is another very popular cryptocurrency from this wave of development, Ethereum.

ETHEREUM-LOGO_LANDSCAPE_Black.png

Ethereum is based on the Dagger-Hashimoto algorithm and has a whole host of quirks that make it different from other cryptocurrencies. We aren't here to get deep into the weeds on the methods behind different blockchain implementations, but if you have some time check out the Ethereum White Paper. It's all very fascinating.

Continue reading our look at this third wave of cryptocurrency!

Subject: Mobile
Manufacturer: Dell

Overview

Editor’s Note: After our review of the Dell XPS 13 2-in-1, Dell contacted us about our performance results. They found our numbers were significantly lower than their own internal benchmarks. They offered to send us a replacement notebook to test, and we have done so. After spending some time with the new unit we have seen much higher results, more in line with Dell’s performance claims. We haven’t been able to find any differences between our initial sample and the new notebook, and our old sample has been sent back to Dell for further analysis. Due to these changes, the performance results and conclusion of this review have been edited to reflect the higher performance results.

It's difficult to believe that it's only been a little over 2 years since we got our hands on the revised Dell XPS 13. Placing an emphasis on minimalistic design, large displays in small chassis, and high-quality construction, the Dell XPS 13 seems to have influenced the "thin and light" market in some noticeable ways.

IMG_4579.JPG

Aiming their sights at a slightly different corner of the market, this year Dell unveiled the XPS 13 2-in-1, a convertible tablet with a 360-degree hinge. However, instead of just putting a new hinge on the existing XPS 13, Dell has designed the all-new XPS 13 2-in-1 from the ground up to be even more "thin and light" than its older sibling, which has meant some substantial design changes.

Since we are a PC hardware-focused site, let's take a look under the hood to get an idea of what exactly we are talking about with the Dell XPS 13 2-in-1.

Dell XPS 13 2-in-1
MSRP $999 $1199 $1299 $1399
Screen 13.3” FHD (1920 x 1080) InfinityEdge touch display
CPU Core i5-7Y54 Core i7-7Y75
GPU Intel HD Graphics 615
RAM 4GB 8GB 16GB
Storage 128GB SATA 256GB PCIe
Network Intel 8265 802.11ac MIMO (2.4 GHz, 5.0 GHz)
Bluetooth 4.2
Display Output 1 x Thunderbolt 3, 1 x USB 3.1 Type-C (DisplayPort)
Connectivity USB 3.0 Type-C
3.5mm headphone
USB 3.0 x 2 (MateDock)
Audio Dual Array Digital Microphone
Stereo Speakers (1W x 2)
Weight 2.7 lbs (1.24 kg)
Dimensions 11.98-in x 7.81-in x 0.32-0.54-in (304mm x 199mm x 8-13.7mm)
Battery 46 WHr
Operating System Windows 10 Home / Pro (+$50)

One of the more striking hardware decisions is the choice of the low power Core i5-7Y54 processor, or, as you may be more familiar with it from its older naming scheme, Core M. In the Kaby Lake generation, Intel has decided to drop the Core M branding (though oddly Core m3 still exists) and integrate these lower power parts into the regular Core branding scheme.

Click here to continue reading our review of the Dell XPS 13 2-in-1

Subject: Mobile
Manufacturer: Google

Introduction and Design

In case you have not heard by now, Pixel is the re-imagining of the Nexus phone concept by Google; a fully stock version of the Android experience on custom, Google-authorized hardware - and with the promise of the latest OS updates as they are released. So how does the hardware stack up? We are late into the life of the Pixel by now, and this is more of a long-term review as I have had the smaller version of the phone on hand for some weeks now. As a result I can offer my candid view of the less-covered of the two Pixel handsets (most reviews center around the Pixel XL), and its performance.

DSC_0186.jpg

There was always a certain cachet to owning a Nexus phone, and you could rest assured that you would be running the latest version of Android before anyone on operator-controlled hardware. The Nexus phones were sold primarily by Google, unlocked, with operator/retail availability at times during their run. Things took a turn when Google opted to offer a carrier-branded version of the Nexus 6 back in November of 2014, along with their usual unlocked Google Play store offering. But this departure was not just an issue of branding, as the price jumped to a full $649; the off-contract cost of premium handsets such as Apple’s iPhone. How could Google hope to compete in a space dominated by Apple and Samsung phones purchased by and large with operator subsidies and installment plans? They did not compete, of course, and the Nexus 6 flopped.

Pixel, coming after the Huawei-manufactured Nexus 6P and LG-manufactured Nexus 5X, drops the "Nexus" branding while continuing the tradition of a reference Android experience - and the more recent tradition of premium pricing. As we have seen in the months since its release, the Pixel did not put much of a dent in the Apple/Samsung-dominated handset market. But even during the budget-friendly Nexus era, which offered a compelling mix of day-one Android OS update availability and inexpensive, unlocked hardware (think Nexus 4 at $299 and Nexus 5 at $349), Google's own phones were never mainstream. Still, in keeping with iPhone and Galaxy flagships, $649 nets you a Pixel, which also launched through Verizon in an exclusive operator deal. Of course a larger version of the Pixel exists, and I would be remiss if I did not mention the Pixel XL. Unfortunately I would also be remiss if I didn't mention that stock for the XL has been quite low, with availability constantly in question.

DSC_0169.jpg

The Pixel is hard to distinguish from an iPhone 7 from a distance (other than the home button)

Google Pixel Specifications
Display 5.0-inch 1080x1920 AMOLED
SoC Qualcomm Snapdragon 821 (MSM8996)
CPU Cores 2x 2.15 GHz Kryo
2x 1.60 GHz Kryo
GPU Cores Adreno 530
RAM 4GB LPDDR4
Storage 32 / 128 GB
Network Snapdragon X12 LTE
Connectivity 802.11ac Wi-Fi
2x2 MU-MIMO
Bluetooth 4.2
USB 3.0
NFC
Dimensions 143.8 x 69.5 x 8.5 mm, 143 g
OS Android 7.1

Continue reading our review of the Google Pixel smartphone!

Subject: General Tech
Manufacturer: Logitech

Logitech G413 Mechanical Gaming Keyboard

The rise in popularity of mechanical gaming keyboards has been accompanied by the spread of RGB backlighting. But RGBs, which often include intricate control systems and software, can significantly raise the price of an already expensive peripheral. There are many cheaper non-backlit mechanical keyboards out there, but they are often focused on typing, and lack the design and features that are unique to the gaming keyboard market.

logitech_g413.jpg

Gamers on a budget, or those who simply dislike fancy RGB lights, are therefore faced with a relative dearth of options, and it's exactly this market segment that Logitech is targeting with its G413 Mechanical Gaming Keyboard.

Continue reading our review of the Logitech G413 Mechanical Gaming Keyboard!

Subject: Systems
Manufacturer: ECS

Introduction and First Impressions

The LIVA family of mini PCs has been refreshed regularly since its introduction in 2014, and the LIVA Z represents a change to sleek industrial design as well as the expected updates to the internal hardware.

DSC_0542.jpg

The LIVA Z we have for review today is powered by an Intel Apollo Lake SoC, and the product family includes SKUs with both Celeron and Pentium processors. Our review unit is the entry-level model with a Celeron N3350 processor, 4GB memory, and 32GB storage. Memory and storage support are improved compared to past LIVAs, as this is really more of a mini-PC kit like an Intel NUC: the LIVA Z includes an M.2 slot (SATA 6.0 Gbps) for storage expansion, and a pair of SODIMM slots supports up to 8 GB of DDR3L memory (a single 4GB SODIMM is installed by default).

The LIVA Z is a very small device, just a bit bigger than your typical set-top streaming box, and like all LIVAs it is fanless, making it totally silent in operation. This is important for many people in applications such as media consumption in a living room, and like previous LIVA models the Z includes a VESA mount for installation on the back of a TV or monitor. So how does it perform? We will find out!

Continue reading our review of the ECS LIVA Z fanless mini PC!

Subject: General Tech
Manufacturer: The Khronos Group

A Data Format for Whole 3D Scenes

The Khronos Group has finalized the glTF 2.0 specification, and they recommend that interested parties integrate this 3D scene format into their content pipeline starting now. It’s ready.

khronos-2017-glTF_500px_June16.png

glTF is a format to deliver 3D content, especially full scenes, in a compact and quick-loading data structure. These features differentiate glTF from other 3D formats, like Autodesk's FBX and even the Khronos Group's Collada, which are more like intermediate formats between tools, such as 3D editing software (ex: Maya and Blender) and game engines. The Khronos Group doesn't see a competing format for final scenes that are designed to be ingested directly, quickly, and with a small footprint.

glTF 2.0 makes several important changes.

The previous version of glTF was based on a defined GLSL material, which limited how it could be used, although it did align with WebGL at the time (and that spurred some early adoption). The new version switches to Physically Based Rendering (PBR) workflows to define their materials, which has a few advantages.

khronos-2017-PBR material model in glTF 2.0.jpg

First, PBR can represent a wide range of materials with just a handful of parameters. Rather than dictating a specific shader, the data structure can just... structure the data. The industry has settled on two main workflows, metallic-roughness and specular-gloss, and glTF 2.0 supports them both. (Metallic-roughness is the core workflow, but specular-gloss is provided as an extension, and they can be used together in the same scene. Also, during the briefing, I noticed that transparency was not explicitly mentioned in the slide deck, but the Khronos Group confirmed that it is stored as the alpha channel of the base color, and thus supported.) Because the format is now based on existing workflows, the implementation can be programmed in OpenGL, Vulkan, DirectX, Metal, or even something like a software renderer. In fact, Microsoft was a specification editor on glTF 2.0, and they have publicly announced using the format in their upcoming products.
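To make that a bit more concrete, here is a minimal sketch of what a metallic-roughness material looks like inside a glTF 2.0 file, embedded as a raw JSON string in a tiny C++ program for illustration. The material name and factor values are invented for the example; only the field names (baseColorFactor, metallicFactor, roughnessFactor, alphaMode) come from the core specification.

```cpp
// Sketch: a minimal glTF 2.0 material using the core metallic-roughness workflow.
// The values and the material name are made up; only the keys are from the spec.
#include <cstdio>

int main()
{
    const char* material_json = R"({
      "materials": [{
        "name": "scuffed_gold",
        "pbrMetallicRoughness": {
          "baseColorFactor": [1.0, 0.77, 0.34, 1.0],
          "metallicFactor": 1.0,
          "roughnessFactor": 0.45
        },
        "alphaMode": "OPAQUE"
      }]
    })";

    // Transparency, when needed, rides in the alpha component of the base color,
    // with "alphaMode" switched to "BLEND" or "MASK" instead of "OPAQUE".
    std::puts(material_json);
    return 0;
}
```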

The original GLSL material, from glTF 1.0, is available as an extension (for backward compatibility).

A second advantage of PBR is that it is lighting-independent. When you define a PBR material for an object, it can be placed in any environment and it will behave as expected. Noticeable, albeit extreme, examples of where this would have been useful are the outdoor scenes of Doom 3 and the indoor scenes of Battlefield 2. It also simplifies asset creation. Some applications, like Substance Painter and Quixel, have artists stencil materials onto their geometry, like gold, rusted iron, and scuffed plastic, and automatically generate the appropriate textures. It also aligns well with deferred rendering (see below), which performs lighting as a post-process step and thus skips pixels (fragments) that are overwritten.

epicgames-2017-suntempledeferred.png

PBR Deferred Buffers in Unreal Engine 4 Sun Temple.
Lighting is applied to these completed buffers, not every fragment.

glTF 2.0 also improves support for complex animations by adding morph targets. Most 3D animations, beyond just moving, rotating, and scaling whole objects, are based on skeletal animation. This method works by binding vertices to bones, and moving, rotating, and scaling a hierarchy of joints. This works well for humans, animals, hinges, and other collections of joints and sockets, and it was already supported in glTF 1.0. Morph targets, on the other hand, allow the artist to directly control individual vertices between defined states. This is often demonstrated with a facial animation, interpolating between smiles and frowns, but, in an actual game, this is often approximated with skeletal animations (for performance reasons). Regardless, glTF 2.0 now supports morph targets, too, letting the artists make the choice that best suits their content.
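As a rough illustration of how a renderer consumes those morph targets (glTF stores each target as per-vertex deltas that get scaled by weights and added to the base attributes), a simple CPU-side blend might look like the following sketch:

```cpp
// Sketch of glTF 2.0 morph-target blending: result = base + sum(weight[t] * delta[t]).
// Targets store per-vertex position deltas with the same vertex count as the base mesh.
#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };

std::vector<Vec3> blendMorphTargets(const std::vector<Vec3>& basePositions,
                                    const std::vector<std::vector<Vec3>>& targetDeltas,
                                    const std::vector<float>& weights)
{
    std::vector<Vec3> out = basePositions;
    for (std::size_t t = 0; t < targetDeltas.size(); ++t) {
        const float w = weights[t];  // e.g. 0.0 = neutral face, 1.0 = full smile
        for (std::size_t v = 0; v < out.size(); ++v) {
            out[v].x += w * targetDeltas[t][v].x;
            out[v].y += w * targetDeltas[t][v].y;
            out[v].z += w * targetDeltas[t][v].z;
        }
    }
    return out;
}
```

In practice an engine would do this blend in a vertex shader or compute pass, but the math is the same.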

Speaking of performance, the Khronos Group is also promoting “enhanced performance” as a benefit of glTF 2.0. I asked whether they have anything to elaborate on, and they responded with a little story. While glTF 1.0 validators were being created, one of the engineers compiled a list of design choices that would lead to minor performance issues. The fixes for these were originally supposed to be embodied in a glTF 1.1 specification, but PBR workflows and Microsoft’s request to abstract the format away from GLSL led to glTF 2.0, which is where the performance optimizations finally ended up. Basically, there weren’t just one or two changes that made a big impact; it was the result of many tiny changes that add up.

Also, the binary version of glTF is now a core feature in glTF 2.0.
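For reference, the binary container (GLB) is a thin framing around the same JSON: a 12-byte header followed by length-prefixed chunks, with the JSON scene description in one chunk and raw buffer data in another. A sketch of that layout, as I understand the published 2.0 container format:

```cpp
// Sketch of the glTF 2.0 binary container (GLB) framing. All fields are
// little-endian 32-bit unsigned integers.
#include <cstdint>

struct GlbHeader {
    uint32_t magic;    // 0x46546C67, ASCII "glTF"
    uint32_t version;  // 2 for glTF 2.0
    uint32_t length;   // total file size in bytes, including this header
};

struct GlbChunkHeader {
    uint32_t chunkLength;  // size of the chunk data in bytes
    uint32_t chunkType;    // 0x4E4F534A ("JSON") or 0x004E4942 ("BIN")
    // ...followed by chunkLength bytes of chunk data
};
```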

khronos-2017-gltfroadmap.png

The slide looks at the potential future of glTF, after 2.0.

Looking forward, the Khronos Group has a few items on their glTF roadmap. These did not make glTF 2.0, but they are current topics for future versions. One potential addition is mesh compression, via the Google Draco team, to further decrease file size of 3D geometry. Another roadmap entry is progressive geometry streaming, via Fraunhofer SRC, which should speed up runtime performance.

Yet another roadmap entry is “Unified Compression Texture Format for Transmission”, specifically Basis by Binomial, for texture compression that remains as small as possible on the GPU. Graphics processors can only natively operate on a handful of formats, like DXT and ASTC, so textures need to be converted when they are loaded by an engine. Often, when a texture is loaded at runtime (rather than imported by the editor) it will be decompressed and left in that state on the GPU. Some engines, like Unity, have a runtime compress method that converts textures to DXT, but the developer needs to explicitly call it and the documentation says it’s lower quality than the algorithm used by the editor (although I haven’t tested this). Suffice it to say, having a format that can circumvent all of that would be nice.

Again, if you’re interested in adding glTF 2.0 to your content pipeline, then get started. It’s ready. Microsoft is doing it, too.

Introduction, How PCM Works, Reading, Writing, and Tweaks

I’ve seen a bit of flawed logic floating around related to discussions about 3D XPoint technology. Some are directly comparing the cost per die to NAND flash (you can’t - 3D XPoint likely has fewer fab steps than NAND - especially when compared with 3D NAND). Others are repeating a bunch of terminology and element names without taking the time to actually explain how it works, and far too many folks out there can't even pronounce it correctly (it's spoken 'cross-point'). My plan is to address as much of the confusion as I can with this article, and I hope you walk away understanding how XPoint and its underlying technologies (most likely) work. While we do not have absolute confirmation of the precise material compositions, there is a significant amount of evidence pointing to one particular set of technologies. With Optane Memory now out in the wild and purchasable by folks wielding electron microscopes and mass spectrometers, I have seen enough additional information come across to assume XPoint is, in fact, PCM based.

XPoint.png

XPoint memory. Note the shape of the cell/selector structure. This will be significant later.

While we were initially told at the XPoint announcement event Q&A that the technology was not phase change based, there is overwhelming evidence to the contrary, and it is likely that Intel did not want to let the cat out of the bag too early. The funny thing about that is that both Intel and Micron were briefing on PCM-based memory developments five years earlier, and nearly everything about those briefings lines up perfectly with what appears to have ended up in the XPoint that we have today.

comparison.png

Some die-level performance characteristics of various memory types. source

The above figures were sourced from a 2011 paper and may be a bit dated, but they do a good job putting some actual numbers with the die-level performance of the various solid state memory technologies. We can also see where the ~1000x speed and ~1000x endurance comparisons with XPoint to NAND Flash came from. Now, of course, those performance characteristics do not directly translate to the performance of a complete SSD package containing those dies. Controller overhead and management must take their respective cuts, as is shown with the performance of the first generation XPoint SSD we saw come out of Intel:

gap.png

The ‘bridging the gap’ Latency Percentile graph from our Intel SSD DC P4800X review.
(The P4800X comes in at 10us above).

There have been a few very vocal folks out there chanting 'not good enough', without the basic understanding that the first publicly available iteration of a new technology never represents its ultimate performance capabilities. It took NAND flash decades to make it into usable SSDs, and another decade before climbing to the performance levels we enjoy today. Time will tell if this holds true for XPoint, but given Micron's demos and our own observed performance of Intel's P4800X and Optane Memory SSDs, I'd argue that it is most certainly off to a good start!

XPoint Die.jpg

A 3D XPoint die, submitted for your viewing pleasure (click for larger version).

You want to know how this stuff works, right? Read on to find out!

Manufacturer: AMD

We are up to two...

UPDATE (5/31/2017): Crystal Dynamics was able to get back to us with a couple of points on the changes that were made with this patch to affect the performance of AMD Ryzen processors.

  1. Rise of the Tomb Raider splits rendering tasks to run on different threads. By tuning the size of those tasks – breaking some up, allowing multicore CPUs to contribute in more cases, and combining some others, to reduce overheads in the scheduler – the game can more efficiently exploit extra threads on the host CPU.
     
  2. An optimization was identified in texture management that improves the combination of AMD CPU and NVIDIA GPU.  Overhead was reduced by packing texture descriptor uploads into larger chunks.

There you have it, a bit more detail on the software changes made to help adapt the game engine to AMD's Ryzen architecture. Not only that, but it does confirm our information that there was slightly MORE to address in the Ryzen+GeForce combinations.

END UPDATE

Despite a couple of growing pains out of the gate, the Ryzen processor launch appears to have been a success for AMD. Both the Ryzen 7 and the Ryzen 5 releases proved to be very competitive with Intel’s dominant CPUs in the market and took significant leads in areas of massive multi-threading and performance per dollar. An area that AMD has struggled in though has been 1080p gaming – performance in those instances on both Ryzen 7 and 5 processors fell behind comparable Intel parts by (sometimes) significant margins.

Our team continues to watch the story to see how AMD and game developers work through the issue. Most recently I posted a look at the memory latency differences between Ryzen and Intel Core processors. As it turns out, the memory latency differences are a significant part of the initial problem for AMD:

Because of this, I think it is fair to claim that some, if not most, of the 1080p gaming performance deficits we have seen with AMD Ryzen processors are a result of this particular memory system intricacy. You can combine memory latency with the thread-to-thread communication issue we discussed previously into one overall system level complication: the Zen memory system behaves differently than anything we have seen prior and it currently suffers in a couple of specific areas because of it.

In that story I detailed our coverage of the Ryzen processor and its gaming performance succinctly:

Our team has done quite a bit of research and testing on this topic. This included a detailed look at the first asserted reason for the performance gap, the Windows 10 scheduler. Our summary there was that the scheduler was working as expected and that minimal difference was seen when moving between different power modes. We also talked directly with AMD to find out its then current stance on the results, backing up our claims on the scheduler and presented a better outlook for gaming going forward. When AMD wanted to test a new custom Windows 10 power profile to help improve performance in some cases, we took part in that too. In late March we saw the first gaming performance update occur courtesy of Ashes of the Singularity: Escalation where an engine update to utilize more threads resulted in as much as 31% average frame increase.

Quick on the heels of the Ryzen 7 release, AMD worked with the developer Oxide on the Ashes of the Singularity: Escalation engine. Through tweaks and optimizations, the game was able to showcase as much as a 30% increase in average frame rate on the integrated benchmark. While this was only a single use case, it does prove that through work with the developers, AMD has the ability to improve the 1080p gaming positioning of Ryzen against Intel.

rotr-screen4-small.jpg

Fast forward to today and I was surprised to find a new patch for Rise of the Tomb Raider, a game that was actually one of the worst case scenarios for AMD with Ryzen. (Patch #12, v1.0.770.1) The patch notes mention the following:

The following changes are included in this patch

- Fix certain DX12 crashes reported by users on the forums.

- Improve DX12 performance across a variety of hardware, in CPU bound situations. Especially performance on AMD Ryzen CPUs can be significantly improved.

While we expect this patch to be an improvement for everyone, if you do have trouble with this patch and prefer to stay on the old version we made a Beta available on Steam, build 767.2, which can be used to switch back to the previous version.

We will keep monitoring for feedback and will release further patches as it seems required. We always welcome your feedback!

Obviously the data point that stood out for me was the improved DX12 performance “in CPU bound situations. Especially on AMD Ryzen CPUs…”

Remember how the situation appeared in April?

rotr.png

The Ryzen 7 1800X was 24% slower than the Intel Core i7-7700K – a dramatic difference for a processor that should only have been ~8-10% slower in single threaded workloads.

How does this new patch to RoTR affect performance? We tested it on the same Ryzen 7 1800X benchmark platform from previous testing, including the ASUS Crosshair VI Hero motherboard, 16GB of DDR4-2400 memory, and a GeForce GTX 1080 Founders Edition using the 378.78 driver. All testing was done under the DX12 code path.

tr-1.png

tr-2.png

The Ryzen 7 1800X score jumps from 107 FPS to 126.44 FPS, an increase of 17%! That is a significant boost in performance at 1080p while still running at the Very High image quality preset, indicating that the developer (and likely AMD) were able to find substantial inefficiencies in the engine. For comparison, the 8-core / 16-thread Intel Core i7-6900K only sees a 2.4% increase from this new game revision. This tells us that the changes to the game were specific to Ryzen processors and their design, but that no performance was taken away from the Intel platforms.

Continue reading our look at the new Rise of the Tomb Raider patch for Ryzen!

Manufacturer: Intel

An abundance of new processors

During its press conference at Computex 2017, Intel officially announced the upcoming release of an entire new family of HEDT (high-end desktop) processors along with a new chipset and platform to power it. Though it has only been a year since Intel launched the Core i7-6950X, a Broadwell-E processor with 10 cores and 20 threads, it feels like it has been much longer than that. At the time Intel was accused of “sitting” on the market – offering only slight performance upgrades and raising prices on the segment with a flagship CPU cost of $1700. With what can only be described as a scathing press circuit, coupled with a revived and aggressive competitor in AMD and its Ryzen product line, Intel and its executive teams have decided it’s time to take the enthusiast and high-end prosumer markets seriously once again.

slides-3.jpg

Though the company doesn’t want to admit to anything publicly, it seems obvious that Intel feels threatened by the release of the Ryzen 7 product line. The Ryzen 7 1800X was launched at $499 and offered 8 cores and 16 threads of processing, competing well in most tests against the likes of the Intel Core i7-6900K that sold for over $1000. Adding to the pressure was the announcement at AMD’s Financial Analyst Day that a new brand of processors called Threadripper would be coming this summer, offering up to 16 cores and 32 threads of processing for that same high-end consumer market. Even without pricing, clocks, or availability timeframes, it was clear that AMD was going to come after this HEDT market with a brand shift of its EPYC server processors, just like Intel does with Xeon.

The New Processors

Normally I would jump into the new platform, technologies and features added to the processors, or something like that before giving you the goods on the CPU specifications, but that’s not the mood we are in. Instead, let’s start with the table of nine (9!!) new products and work backwards.

  Core i9-7980XE Core i9-7960X Core i9-7940X Core i9-7920X Core i9-7900X Core i7-7820X Core i7-7800X Core i7-7740X Core i5-7640X
Architecture Skylake-X Skylake-X Skylake-X Skylake-X Skylake-X Skylake-X Skylake-X Kaby Lake-X Kaby Lake-X
Process Tech 14nm+ 14nm+ 14nm+ 14nm+ 14nm+ 14nm+ 14nm+ 14nm+ 14nm+
Cores/Threads 18/36 16/32 14/28 12/24 10/20 8/16 6/12 4/8 4/4
Base Clock ? ? ? ? 3.3 GHz 3.6 GHz 3.5 GHz 4.3 GHz 4.0 GHz
Turbo Boost 2.0 ? ? ? ? 4.3 GHz 4.3 GHz 4.0 GHz 4.5 GHz 4.2 GHz
Turbo Boost Max 3.0 ? ? ? ? 4.5 GHz 4.5 GHz N/A N/A N/A
Cache 16.5MB (?) 16.5MB (?) 16.5MB (?) 16.5MB (?) 13.75MB 11MB 8.25MB 8MB 6MB
Memory Support ? ? ? ? DDR4-2666 DDR4-2666 DDR4-2666 DDR4-2666 DDR4-2666
Memory Channels ? ? ? ? Quad Quad Quad Dual Dual
PCIe Lanes ? ? ? ? 44 28 28 16 16
TDP 165 watts (?) 165 watts (?) 165 watts (?) 165 watts (?) 140 watts 140 watts 140 watts 112 watts 112 watts
Socket 2066 2066 2066 2066 2066 2066 2066 2066 2066
Price $1999 $1699 $1399 $1199 $999 $599 $389 $339 $242

There is a lot to take in here. The most interesting points are that Intel plans to one-up AMD Threadripper by offering an 18-core processor, but it also wants to change the perception of the X299-class platform by offering lower-priced, lower-core-count CPUs like the quad-core, non-HyperThreaded Core i5-7640X. We also see the first-ever branding of Core i9.

Intel only provided detailed specifications up to the Core i9-7900X, a 10-core / 20-thread processor with a base clock of 3.3 GHz and a Turbo peak of 4.5 GHz using the new Turbo Boost Max Technology 3.0. It sports 13.75MB of cache thanks to an updated cache configuration, includes 44 lanes of PCIe 3.0, an increase of 4 lanes over Broadwell-E, quad-channel DDR4 memory up to 2666 MHz and a 140 watt TDP. The new LGA2066 socket will be utilized. Pricing for this CPU is set at $999, which is interesting for a couple of reasons. First, it is $700 less than the starting MSRP of the 10c/20t Core i7-6950X from one year ago; obviously a big plus. However, there is quite a ways UP the stack, with the 18c/36t Core i9-7980XE coming in at a cool $1999.

intel1.jpg

The next CPU down the stack is compelling as well. The Core i7-7820X is the new 8-core / 16-thread HEDT option from Intel, with similar clock speeds to the 10-core above it, save the higher base clock. It has 11MB of L3 cache and 28 lanes of PCI Express (down from the 40 lanes of the 8-core Broadwell-E), but has a $599 price tag. Compared to the 8-core 6900K, that is ~$400 lower, while the new Skylake-X part includes a 700 MHz clock speed advantage. That’s huge, and is a direct attack on the AMD Ryzen 7 1800X that sells for $499 today and cut Intel off at the knees this March. In fact, the base clock of the Core i7-7820X is only 100 MHz lower than the maximum Turbo Boost clock of the Core i7-6900K!

Continue reading about the Intel Core i9 series announcement!

Manufacturer: ARM

ARM Refreshes All the Things

This past April ARM invited us to visit Cambridge, England so they could discuss with us their plans for the next year.  Quite a bit has changed for the company since our last ARM Tech Day in 2016.  They were acquired by SoftBank, but continue to essentially operate as their own company.  They now have access to more funds, are less risk averse, and have a greater ability to expand in the ever-growing mobile and IoT marketplaces.

dynamiq_01.png

The ARM of today certainly is quite different than what we knew 10 years ago, when we saw their technology used in the first iPhone.  The company back then had good technology, but a relatively small head count.  They kept pace with the industry, but were not nearly as aggressive as other chip companies in some areas.  Through the past 10 years they have grown not only in numbers, but in technologies that they have constantly expanded on.  The company became more PR savvy and communicated more effectively with the press and, in the end, their primary users.  Where once ARM would announce new products and not expect to see shipping products for upwards of 3 years, we are now seeing the company be much more aggressive with their designs and getting them out to their partners so that production ends up happening in months as compared to years.

Several days of meetings and presentations left us a bit overwhelmed by what ARM is bringing to market towards the end of 2017 and most likely beginning of 2018.  On the surface it appears that ARM has only done a refresh of the CPU and GPU products, but once we start looking at these products in the greater scheme and how they interact with DynamIQ we see that ARM has changed the mobile computing landscape dramatically.  This new computing concept allows greater performance, flexibility, and efficiency in designs.  Partners will have far more control over these licensed products to create more value and differentiation as compared to years past.

dynamiq_02.png

We previously covered DynamIQ at PCPer this past March.  ARM wanted to seed that concept before they jumped into more discussions on their latest CPUs and GPUs.  Previous Cortex products cannot be used with DynamIQ; to leverage that technology we must have new CPU designs.  In this article we are covering the Cortex-A55 and Cortex-A75.  These two new CPUs on the surface look more like a refresh, but when we dig in we see that some massive changes have been wrought throughout.  ARM has taken the concepts of the previous A53 and A73 and expanded upon them fairly dramatically, not only to work with DynamIQ but also to remove significant bottlenecks that have impeded theoretical performance.

Continue reading our overview of the new family of ARM CPUs and GPU!

Subject: Motherboards
Manufacturer: MSI

Introduction and Technical Specifications

Introduction

02-board.jpg

Courtesy of MSI

The MSI Z270 Gaming Pro Carbon board features a black PCB with a carbon fiber overlay covering the board's heat sinks and rear panel cover. MSI also liberally sprinkled RGB LED-enabled components across the board's surface and under the board for an interesting ground effects type look. The board is designed around the Intel Z270 chipset with built-in support for the latest Intel LGA1151 Kaby Lake processor line (as well as support for Skylake processors) and Dual Channel DDR4 memory running at 2400MHz. The Z270 Gaming Pro Carbon can be found in retail with an MSRP of $174.99.

03-board-profile.jpg

Courtesy of MSI

04-board-flyapart.jpg

Courtesy of MSI

MSI integrated the following features into the Z270 Gaming Pro Carbon motherboard: six SATA III 6Gbps ports; two M.2 PCIe Gen3 x4 32Gbps capable ports with Intel Optane support built-in; an RJ-45 Intel I219-V Gigabit NIC; three PCI-Express x16 slots; three PCI-Express x1 slots; a Realtek ALC1220 8-Channel audio subsystem; integrated DVI-D and HDMI video ports; and USB 2.0, 3.0, and 3.1 Type-A and Type-C port support.

05-military-class-v.jpg

Courtesy of MSI

To power the Z270 Gaming Pro Carbon motherboard, MSI integrated a 10-phase (8+2) digital power delivery system dubbed Military Class V. The Military Class V integrated components include Titanium chokes, 10-year-rated Dark capacitors, and Dark chokes.

06-rear-panel-flyapart.jpg

Courtesy of MSI

Continue reading our preview of the MSI Z270 Gaming Pro Carbon motherboard!

Manufacturer: Alphacool

Introduction and Specifications

Alphacool's Eisbaer is a line of pre-assembled liquid CPU coolers using standard parts that add quick-release connections to make adding components to the loop simple. Today we'll have a look at the 360 mm and 280 mm versions of the Eisbaer and see what kind of performance you can expect from an all-in-one solution from a respected brand in custom liquid cooling.

DSC_0203.jpg

"With the Alphacool “Eisbaer”, we’re offering an extremely quiet high-performance cooler for every CPU on the market currently. A closed water cooling system that’s easy to install and can be easily and safely expanded with its quick-lock closure."

Not all AiO liquid coolers are created equal, of course, with different materials and approaches, and there are generally tradeoffs to be made between design and pricing. The best-performing units can have pumps and fans that produce more noise than a high-performance air solution, while some liquid coolers manage to balance noise and performance in a way that makes liquid a far more attractive option - especially when overclocking a CPU.

DSC_0205.jpg

True to the premium nature of a product line like this, Alphacool has incorporated first-rate components into the Eisbaer series including all-copper radiators, high-performance fans, and touches like anti-kink springs for the hoses. The ability to easily add a GPU to the loop with the quick-lock closure (Alphacool offers a line of GPU products called "Eiswolf" that connect with these quick-lock closures) is a nice plus, and the use of standard G1/4 fittings ensures compatibility with custom parts for future expansion/modification.

DSC_0100.jpg

The Eisbaer 360 spending some quality time on the test bench

Continue reading our review of the Alphacool Eisbaer 360 and 280 liquid CPU coolers!

Manufacturer: The Khronos Group

The Right People to Interview

Last week, we reported that OpenCL’s roadmap would be merging into Vulkan, and OpenCL would, starting at some unspecified time in the future, be based “on an extended version of the Vulkan API”. This was based on quotes from several emails between myself and the Khronos Group.

Since that post, I had the opportunity to have a phone interview with Neil Trevett, president of the Khronos Group and chairman of the OpenCL working group, and Tom Olson, chairman of the Vulkan working group. We spent a little over a half hour going over Neil’s International Workshop on OpenCL (IWOCL) presentation, discussing the decision, and answering a few lingering questions. This post will present the results of that conference call in a clean, readable way.

khronos-officiallogo-Vulkan_500px_Dec16.png

First and foremost, while OpenCL is planning to merge into the Vulkan API, the Khronos Group wants to make it clear that “all of the merging” is coming from the OpenCL working group. The Vulkan API roadmap is not affected by this decision. Of course, the Vulkan working group will be able to take advantage of technologies that are dropping into their lap, but those discussions have not even begun yet.

Neil: Vulkan has its mission and its roadmap, and it’s going ahead on that. OpenCL is doing all of the merging. We’re kind-of coming in to head in the Vulkan direction.

Does that mean, in the future, that there’s a bigger wealth of opportunity to figure out how we can take advantage of all this kind of mutual work? The answer is yes, but we haven’t started those discussions yet. I’m actually excited to have those discussions, and are many people, but that’s a clarity. We haven’t started yet on how Vulkan, itself, is changed (if at all) by this. So that’s kind-of the clarity that I think is important for everyone out there trying to understand what’s going on.

Tom also prepared an opening statement. It’s not as easy to abbreviate, so it’s here unabridged.

Tom: I think that’s fair. From the Vulkan point of view, the way the working group thinks about this is that Vulkan is an abstract machine, or at least there’s an abstract machine underlying it. We have a programming language for it, called SPIR-V, and we have an interface controlling it, called the API. And that machine, in its full glory… it’s a GPU, basically, and it’s got lots of graphics functionality. But you don’t have to use that. And the API and the programming language are very general. And you can build lots of things with them. So it’s great, from our point of view, that the OpenCL group, with their special expertise, can use that and leverage that. That’s terrific, and we’re fully behind it, and we’ll help them all we can. We do have our own constituency to serve, which is the high-performance game developer first and foremost, and we are going to continue to serve them as our main mission.

So we’re not changing our roadmap so much as trying to make sure we’re a good platform for other functionality to be built on.

Neil then went on to mention that the decision to merge OpenCL’s roadmap into the Vulkan API took place only a couple of weeks ago. The purpose of the press release was to reach OpenCL developers and get their feedback. According to him, they did a show of hands at the conference, with a room full of a hundred OpenCL developers, and no-one was against moving to the Vulkan API. This gives them confidence that developers will accept the decision, and that their needs will be served by it.

Next up is the why. Read on for more.

Subject: Storage
Manufacturer: Sony
Tagged: ssd, ps4 pro, ps4, consoles

Intro and Upgrading the PS4 Pro Hard Drive

When Sony launched the PS4 Pro late last year, it introduced an unusual mid-cycle performance update to its latest console platform. But in addition to increased processing and graphics performance, Sony also addressed one of the original PS4's shortcomings: the storage bus.

The original, non-Pro PlayStation 4 utilized a SATA II bus, capping speeds at 3Gb/s. This was more than adequate for keeping up with the console's stock hard drive, but those who elected to take advantage of Sony's user-upgradeable storage policy and install an SSD faced the prospect of a storage bus bottleneck. As we saw in our original look at upgrading the PS4 Pro with a solid state drive, the SSD brought some performance improvements in terms of load times, but these improvements weren't always as impressive as we might expect.
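For a rough sense of scale (assuming the usual 8b/10b encoding on the SATA physical layer), the theoretical payload ceilings work out to:

3 Gb/s × 8/10 = 2.4 Gb/s ≈ 300 MB/s (SATA II)
6 Gb/s × 8/10 = 4.8 Gb/s ≈ 600 MB/s (SATA III)

A modern SATA SSD capable of 500+ MB/s in sequential transfers is clearly constrained by the former, while a mechanical hard drive is not.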

ps4-pro-ssd-vs-hdd.jpg

We therefore set out to see what performance improvements, if any, could be gained by the inclusion of SATA III in the PS4 Pro, and if this new Pro model makes a stronger case for users to shell out even more cash for a high capacity solid state drive. We weren't the only ones interested in this test. Digital Foundry conducted their own tests of the PS4 Pro's SATA III interface. They found that while a solid state drive in the PS4 Pro clearly outperformed the stock hard drive in the original PS4, it generally didn't offer much improvement over the SATA II-bottlenecked SSD in the original PS4, or even, in some cases, the stock HDD in the PS4 Pro.

ocz-trion-100.jpg

But we noticed a major issue with Digital Foundry's testing process. For their SSD tests, they used the OCZ Trion 100, an older SSD with relatively mediocre performance compared to its latest competitors. The Trion 100 also has a relatively low write endurance and we therefore don't know the condition and performance characteristics of Digital Foundry's drive.

samsung-850-evo-1tb.jpg

To address these issues, we conducted our tests with a brand new 1TB Samsung 850 EVO. While it is far from the cheapest, or even the most reasonable, option for a PS4 Pro upgrade, our aim is to assess the "best case scenario" when it comes to SSD performance via the PS4 Pro's SATA III bus.

Continue reading our analysis of PS4 Pro loading times with an SSD upgrade!

Subject: Systems, Mobile
Manufacturer: Apple

What have we here?

The latest iteration of the Apple MacBook Pro has been a polarizing topic to both Mac and PC enthusiasts. Replacing the aging Retina MacBook Pro introduced in 2012, the Apple MacBook Pro 13-inch with Touch Bar introduced late last year offered some radical design changes. After much debate (and a good Open Box deal), I decided to pick up one of these MacBooks to see if it could replace my 11" MacBook Air from 2013, which was certainly starting to show its age.

DSC02852.JPG

I'm sure that a lot of our readers, even if they aren't Mac users, are familiar with some of the major changes Apple made with this new MacBook Pro. One of the biggest changes comes when you take a look at the available connectivity on the machine. Gone are the ports you might expect, like USB Type-A, HDMI, and Mini DisplayPort. These ports have been replaced with 4 Thunderbolt 3 ports and a single 3.5mm headphone jack.

While it seems like USB-C (which is compatible with Thunderbolt 3) is eventually poised to take over the peripheral market, there are obvious issues with replacing all of the connectivity on a machine aimed at professionals with Type-C connectors. Currently, Type-C devices are few and far between, meaning you will have to rely on a series of dongles to connect the devices you already own.

DSC02860.JPG

I will say, however, that it ultimately hasn't been that much of an issue for me so far in the limited time that I've owned this MacBook. In order to evaluate how bad the dongle issue was, I only purchased a single, simple adapter with my MacBook, which provided me with a Type-A USB port and a pass-through Type-C port for charging.

Continue reading our look at using the MacBook Pro with Windows!

Manufacturer: Seasonic

Introduction and Features

Introduction

2-Prime-logo.jpg

Sea Sonic Electronics Co., Ltd has been designing and building PC power supplies since 1981 and they are one of the most highly respected manufacturers on the planet. Not only do they market power supplies under their own Seasonic name but they are the OEM for many other big name brands.

Seasonic began introducing their new PRIME Series power supplies last year and we reviewed several of the flagship Titanium units and found them to be among the best power supplies we have tested to date. But the PRIME Series now includes many more units with both Platinum and Gold level efficiency certification.

3-Prime-compare.jpg

The power supply we have in for review is the Seasonic PRIME 1200W Gold. This unit comes with all modular cables and is certified to comply with the 80 Plus Gold criteria for high efficiency. The power supply is designed to deliver very tight voltage regulation on the three primary rails (+3.3V, +5V and +12V) and provides superior AC ripple and noise suppression. Add in a super-quiet 135mm cooling fan with a Fluid Dynamic Bearing, top-quality components and a 12-year warranty, and you have the makings for another outstanding power supply.

4a-Side.jpg

Seasonic PRIME Gold Series PSU Key Features:

•    650W, 750W, 850W, 1000W or 1200W continuous DC output
•    High efficiency, 80 PLUS Gold certified
•    Micro-Tolerance Load Regulation (MTLR)
•    Top-quality 135mm Fluid Dynamic Bearing fan
•    Premium Hybrid Fan Control (allows fanless operation at low power)
•    Superior AC ripple and noise suppression
•    Fully modular cabling design
•    Multi-GPU technologies supported
•    Gold-plated high-current terminals
•    Protections: OPP, OVP, UVP, SCP, OCP and OTP
•    12-Year Manufacturer’s warranty
•    MSRP for the PRIME 1200W Gold is $199.90 USD

4b-Front-cables.jpg

Here is what Seasonic has to say about the new PRIME power supply line: “The creation of the PRIME Series is a renewed testimony of Sea Sonic’s determination to push the limits of power supply design in every aspect. This elegant-looking, exclusive lineup of new products will include 80 Plus Titanium – in the range of 650W to 1000W, and Platinum-Gold-rated units in the range of 650W to 1200W, with excellent electrical characteristics, top-level components and fully modular cabling.

Seasonic employs the most efficient manufacturing methods, uses the best materials and works with most reliable suppliers to produce reliable products. The PRIME Series layout, revolutionary manufacturing solutions and solid design attest to the highest level of ingenuity of Seasonic’s engineers and product developers. Demonstrating confidence in its power supplies, Seasonic stands out in the industry by offering PRIME Series a generous 12-year manufacturer’s warranty period.”

Please continue reading our review of the Seasonic PRIME 1200W Gold PSU!

Manufacturer: AMD

Is it time to buy that new GPU?

Testing commissioned by AMD. This means that AMD paid us for our time, but had no say in the results or presentation of them.

Earlier this week Bethesda and Arkane Studios released Prey, a first-person shooter that is a re-imagining of the 2006 game of the same name. Fans of System Shock will find a lot to love about this new title and I have found myself enamored with the game…in the name of science of course.

Prey-2017-05-06-15-52-04-16.jpg

While doing my due diligence and performing some preliminary testing to see if we would utilize Prey for graphics testing going forward, AMD approached me to discuss this exact title. With the release of the Radeon RX 580 in April, one of the key storylines is that the card offers a reasonably priced upgrade path for users of 2+ year old hardware. With that upgrade you should see some substantial performance improvements and as I will show you here, the new Prey is a perfect example of that.

Targeting the Radeon R9 380, a graphics card that was originally released back in May of 2015, the RX 580 offers substantially better performance at a very similar launch price. The same is true for the GeForce GTX 960: launched in January of 2015, it is slightly longer in the tooth. AMD’s data shows that 80% of the users on Steam are running R9 380X or slower graphics cards and that only 10% of them upgraded in 2016. Considering the great GPUs that were available then (including the RX 480 and the GTX 10-series), it seems more and more likely that we are going to hit an upgrade inflection point in the market.

slides-5.jpg

A simple experiment was set up: does the new Radeon RX 580 offer a worthwhile upgrade path for the many users of R9 380 or GTX 960 class graphics cards (or older)?

  Radeon RX 580 Radeon R9 380 GeForce GTX 960
GPU Polaris 20 Tonga Pro GM206
GPU Cores 2304 1792 1024
Rated Clock 1340 MHz 918 MHz 1127 MHz
Memory 4GB / 8GB 4GB 2GB / 4GB
Memory Interface 256-bit 256-bit 128-bit
TDP 185 watts 190 watts 120 watts
MSRP (at launch) $199 (4GB) / $239 (8GB) $219 $199

Continue reading our look at the Radeon RX 580 in Prey!

Subject: General Tech
Manufacturer: The Khronos Group

It Started with an OpenCL 2.2 Press Release

Update (May 18 @ 4pm EDT): A few comments across the internet believes that the statements from The Khronos Group were inaccurately worded, so I emailed them yet again. The OpenCL working group has released yet another statement:

OpenCL is announcing that their strategic direction is to support CL style computing on an extended version of the Vulkan API. The Vulkan group is agreeing to advise on the extensions.

In other words, this article was and is accurate. The Khronos Group are converging OpenCL and Vulkan into a single API: Vulkan. There was no misinterpretation.

Original post below

Earlier today, we published a news post about the finalized specifications for OpenCL 2.2 and SPIR-V 1.2. This was announced through a press release that also contained an odd little statement at the end of the third paragraph.

We are also working to converge with, and leverage, the Khronos Vulkan API — merging advanced graphics and compute into a single API.

khronos-vulkan-logo.png

This statement seems to suggest that OpenCL and Vulkan are expecting to merge into a single API for compute and graphics at some point in the future. This seemed like a huge announcement to bury that deep into the press blast, so I emailed The Khronos Group for confirmation (and any further statements). As it turns out, this interpretation is correct, and they provided a more explicit statement:

The OpenCL working group has taken the decision to converge its roadmap with Vulkan, and use Vulkan as the basis for the next generation of explicit compute APIs – this also provides the opportunity for the OpenCL roadmap to merge graphics and compute.

This statement adds a new claim: The Khronos Group plans to merge OpenCL into Vulkan, specifically, at some point in the future. Making the move in this direction, from OpenCL to Vulkan, makes sense for a handful of reasons, which I will highlight in my analysis, below.

Going Vulkan to Live Long and Prosper?

The first reason for merging OpenCL into Vulkan, from my perspective, is that Apple, who originally created OpenCL, still owns the trademarks (and some other rights) to it. The Khronos Group licenses these bits of IP from Apple. Vulkan, based on AMD’s donation of the Mantle API, should be easier to manage from the legal side of things.

khronos-2016-vulkan-why.png

The second reason for going in that direction is the actual structure of the APIs. When Mantle was announced, it looked a lot like an API that wrapped OpenCL with a graphics-specific layer. Also, Vulkan isn’t specifically limited to GPUs in its implementation.

Aside: When you are selecting a physical device, you can query the driver to see what type of device it identifies as by reading its VkPhysicalDeviceType. Currently, as of Vulkan 1.0.49, the options are Other, Integrated GPU, Discrete GPU, Virtual GPU, and CPU. While this is just a clue, to make it easier to select a device for a given task, and isn’t useful to determine what the device is capable of, it should illustrate that other devices, like FPGAs, could support some subset of the API. It’s just up to the developer to check for features before they’re used, and target it at the devices they expect.
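As a concrete sketch of that clue, reading the device type takes only a properties query per enumerated device (this assumes a VkInstance has already been created elsewhere, and omits error handling):

```cpp
// Sketch: enumerate Vulkan physical devices and print their VkPhysicalDeviceType.
#include <vulkan/vulkan.h>
#include <cstdio>
#include <vector>

void listDeviceTypes(VkInstance instance)
{
    uint32_t count = 0;
    vkEnumeratePhysicalDevices(instance, &count, nullptr);
    std::vector<VkPhysicalDevice> devices(count);
    vkEnumeratePhysicalDevices(instance, &count, devices.data());

    for (VkPhysicalDevice dev : devices) {
        VkPhysicalDeviceProperties props{};
        vkGetPhysicalDeviceProperties(dev, &props);

        const char* type = "Other";
        switch (props.deviceType) {
            case VK_PHYSICAL_DEVICE_TYPE_INTEGRATED_GPU: type = "Integrated GPU"; break;
            case VK_PHYSICAL_DEVICE_TYPE_DISCRETE_GPU:   type = "Discrete GPU";   break;
            case VK_PHYSICAL_DEVICE_TYPE_VIRTUAL_GPU:    type = "Virtual GPU";    break;
            case VK_PHYSICAL_DEVICE_TYPE_CPU:            type = "CPU";            break;
            default: break;
        }
        std::printf("%s: %s\n", props.deviceName, type);
    }
}
```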

If you were to go in the other direction, you would need to wedge graphics tasks into OpenCL. You would be creating Vulkan all over again. From my perspective, pushing OpenCL into Vulkan seems like the path of least resistance.

The third reason (that I can think of) is probably marketing. DirectX 12 isn’t attempting to seduce FPGA developers. Telling a game studio to program their engine on a new, souped-up OpenCL might make them break out in a cold sweat, even if both parties know that it’s an evolution of Vulkan with cross-pollination from OpenCL. OpenCL developers, on the other hand, are probably using the API because they need it, and are less likely to be shaken off.

What OpenCL Could Give Vulkan (and Vice Versa)

From the very outset, OpenCL and Vulkan were occupying similar spaces, but there are some things that OpenCL does “better”. The most obvious, and previously mentioned, element is that OpenCL supports a wide range of compute devices, such as FPGAs. That’s not the limit of what Vulkan can borrow, though, and it could make for an interesting landscape if FPGAs become commonplace in the coming years and decades.

khronos-SYCL_Color_Mar14_154_75.png

Personally, I wonder how SYCL could affect game engine development. This standard attempts to guide GPU- (and other device-) accelerated code into a single-source, C++ model. For over a decade, Tim Sweeney of Epic Games has talked about writing engines like he did back in the software-rendering era, but without giving up the ridiculous performance (and efficiency) provided by GPUs.
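For a sense of what that single-source model looks like in practice, here is a minimal SYCL-style sketch (written against the SYCL 1.2.1 interface as I understand it; treat it as illustrative rather than canonical). The host code and the device lambda live in the same C++ file, and the runtime dispatches the kernel to whatever device backs the queue:

```cpp
// Sketch of SYCL's single-source model: the lambda passed to parallel_for is the
// device kernel, compiled from the same translation unit as the host code.
#include <CL/sycl.hpp>
#include <vector>

int main()
{
    std::vector<float> data(1024, 1.0f);

    {
        cl::sycl::queue q;  // default device selection (GPU, CPU, or other)
        cl::sycl::buffer<float, 1> buf(data.data(), cl::sycl::range<1>(data.size()));

        q.submit([&](cl::sycl::handler& h) {
            auto acc = buf.get_access<cl::sycl::access::mode::read_write>(h);
            h.parallel_for<class scale_kernel>(
                cl::sycl::range<1>(data.size()),
                [=](cl::sycl::id<1> i) { acc[i] *= 2.0f; });
        });
    }  // buffer goes out of scope here, copying results back into 'data'

    return 0;
}
```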

Long-time readers of PC Perspective might remember that I was investigating GPU-accelerated software rendering in WebCL (via Nokia’s implementation). The thought was that I could concede the raster performance of modern GPUs and make up for it with added control, the ability to explicitly target secondary compute devices, and the ability to run in a web browser. This took place in 2013, before AMD announced Mantle and browser vendors expressed a clear disinterest in exposing OpenCL through JavaScript. Seeing the idea was about to be crushed, I pulled out the GPU-accelerated audio ideas into a more-focused project, but that part of my history is irrelevant to this post.

The reason for bringing up this anecdote is because, if OpenCL is moving into Vulkan, and SYCL is still being developed, then it seems likely that SYCL will eventually port into Vulkan. If this is the case, then future game engines can gain benefits that I was striving toward without giving up access to fixed-function features, like hardware rasterization. If Vulkan comes to web browsers some day, it would literally prune off every advantage I was hoping to capture, and it would do so with a better implementation.

microsoft-2015-directx12-logo.jpg

More importantly, SYCL is something that Microsoft cannot provide with today’s DirectX.

Admittedly, it’s hard to think of something that OpenCL can acquire from Vulkan, besides just a lot more interest from potential developers. Vulkan was already somewhat of a subset of OpenCL that had graphics tasks (cleanly) integrated over top of it. On the other hand, OpenCL has been struggling to acquire mainstream support, so that could, in fact, be Vulkan’s greatest gift.

The Khronos Group has not provided a timeline for this change. It’s just a roadmap declaration.

Subject: General Tech
Manufacturer: YouTube TV

YouTube Tries Everything

Back in March, Google-owned YouTube announced a new live TV streaming service called YouTube TV to compete with the likes of Sling, DirecTV Now, PlayStation Vue, and upcoming offerings from Hulu, Amazon, and others. All these services aim to deliver curated bundles of channels, targeted at cord cutters, that run over the top of customers' internet-only connections as replacements for, or in addition to, cable television subscriptions.  YouTube TV is the latest entrant to this market, with the service only available in seven test markets currently, but it is off to a good start with a decent selection of content and features including both broadcast and cable channels, on-demand media, and live and DVR viewing options. A responsive user interface and a generous number of family sharing options (six account logins and three simultaneous streams) will need to be balanced against the requirement to watch ads (even on some DVR'ed shows) and the $35 per month cost.

Get YouTube TV 1.jpg

YouTube TV launched in five cities with more on the way. Fortunately, I live close enough to Chicago to be in-market and was able to test out Google’s streaming TV service. While this is not a full review, the following are my first impressions of YouTube TV.

Setup / Sign Up

YouTube TV is available with a one-month free trial, after which you will be charged $35 a month. Sign-up is a simple affair and can be started by going to tv.youtube.com or clicking the YouTube TV link from the “hamburger” menu on YouTube. If you are on a mobile device, YouTube TV uses a separate app from the default YouTube app, which weighs in at 9.11 MB for the Android version. After verifying your location, the following screens show you the channels available in your market and give you the option of adding Showtime ($11) and/or Fox Soccer ($15) for additional monthly fees. After that, you are prompted for a payment method, which can be the one already linked to your Google account and used for app purchases and other subscriptions. As for the free trial, I was not charged anything and there was no hold on my account for the $35. I like that Google makes it easy to see exactly how many days you have left on your trial and when you will be charged if you do not cancel. Further, the cancel link is not buried away and is intuitively found by clicking your account photo in the upper right > Personal > Membership. Google is doing things right here. After signup, a tour is offered to show you the various features, but you can skip this if you want to get right to it.

In my specific market, I have the following channels. When I first started testing, some of the channels were not available and were only added today. I hope to see more networks added, and if Google can manage that, YouTube TV and its $35/month price are going to shape up to be a great deal.

  • ABC 7, CBS 2, Fox 32, NBC 5, ESPN, CSN, CSN Plus, FS1, CW, USA, FX, Free Form, NBC SN, ESPN 2, FS2, Disney, E!, Bravo, Oxygen, BTN, SEC ESPN Network, ESPN News, CBS Sports, FXX, Syfy, Disney Junior, Disney XD, MSNBC, Fox News, CNBC, Fox Business, National Geographic, FXM, Sprout, Universal, Nat Geo Wild, Chiller, NBC Golf, YouTube Red Originals
  • Plus: AMC, BBC America, IFC, Sundance TV, We TV, Telemundo, and NBC Universal (just added).
  • Optional Add-Ons: Showtime and Fox Soccer.

I tested YouTube TV out on my Windows PCs and an Android phone. You can also watch YouTube TV on iOS devices, and on your TV using Android TV devices and Chromecasts (at the time of writing, Google will send you a free Chromecast after your first month). See here for a full list of supported devices. There are currently no Roku or Apple TV apps.

Get YouTube TV_full list.jpg

Each YouTube TV account can share the subscription across six total logins, where each household member gets their own login and DVR library. Up to three people can be streaming TV at the same time. While out and about, I noticed that YouTube TV required me to turn on location services in order to use the app. Looking further into it, the YouTube TV FAQ states that you need to verify your location in order to stream live TV, and can only do so if you are physically in a market where YouTube TV has launched. You can watch your DVR shows anywhere in the US. However, if you are traveling internationally you will not be able to use YouTube TV at all (I’m not sure if VPNs will get around this or if YouTube TV blocks them like Netflix does). Users will need to log in from their home market at least once every three months to keep their account active and able to stream content (every month for MLB content).

YouTube TV verifying location in Chrome (left) and in the Android app (right).

I can understand that this was probably necessary for YouTube TV to negotiate its licensing deals, and the terms do seem pretty fair. I will have to do more testing on this, as I wasn’t able to stream from the DVR without turning on location services on my Android phone, but I can chalk that up to growing pains and it may already be fixed.

Features & First Impressions

YouTube TV has an interface that is perhaps best described as a slimmed-down YouTube that takes cues from Netflix (things like the horizontal scrolling of shows in categories). The main interface is broken down into three sections: Library, Home, and Live, with Home being the first screen you see when logging in. You navigate by scrolling and clicking, and, while streaming TV, by pulling menus up from the bottom just as you would on YouTube.

YouTube TV Home.jpg

Continue reading for my first impressions of YouTube TV!

Author:
Subject: Processors
Manufacturer: Various

Application Profiling Tells the Story

It should come as no surprise to anyone that has been paying attention over the last two months that the latest AMD Ryzen processors and architecture are getting a lot of attention. Ryzen 7 launched with a $499 part that bested Intel’s $1000 CPU in heavily threaded applications, and Ryzen 5 launched with great value as well, positioning a 6-core/12-thread CPU against quad-core parts from the competition. But part of the story that permeated both the Ryzen 7 and the Ryzen 5 launches was the situation surrounding gaming performance, 1080p gaming in particular, and the surprising deltas that we see in some games.

Our team has done quite a bit of research and testing on this topic. This included a detailed look at the first asserted reason for the performance gap, the Windows 10 scheduler. Our summary there was that the scheduler was working as expected and that minimal difference was seen when moving between different power modes. We also talked directly with AMD to find out its then-current stance on the results; the company backed up our claims on the scheduler and presented a better outlook for gaming going forward. When AMD wanted to test a new custom Windows 10 power profile to help improve performance in some cases, we took part in that too. In late March we saw the first gaming performance update occur, courtesy of Ashes of the Singularity: Escalation, where an engine update to utilize more threads resulted in as much as a 31% increase in average frame rate.

ping-amd.png

As a part of that dissection of the Windows 10 scheduler story, we also discovered interesting data about the CCX construction and how the two modules on the 1800X communicate. The result was significantly longer thread-to-thread latencies than we had seen on any platform before, and the cause was the fabric implementation that AMD integrated with the Zen architecture.
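
For readers curious how that kind of thread-to-thread number is usually gathered, the sketch below shows the general “ping-pong” technique (it is not the tool we used): two threads bounce an atomic flag through a shared cache line, and the round-trip time is averaged over many iterations. A real per-core map also requires pinning each thread to a specific core, via SetThreadAffinityMask on Windows or pthread_setaffinity_np on Linux, which is omitted here for brevity; crossing the CCX boundary is what makes the number jump on Ryzen.

```cpp
// Sketch of a core-to-core "ping-pong" latency test (illustrative only).
// Thread pinning is omitted; add it to map specific core pairs.
#include <atomic>
#include <chrono>
#include <cstdio>
#include <thread>

int main() {
    constexpr int iters = 1000000;
    std::atomic<int> flag{0};

    std::thread responder([&] {
        for (int i = 0; i < iters; ++i) {
            while (flag.load(std::memory_order_acquire) != 1) { /* spin */ }
            flag.store(0, std::memory_order_release);            // pong
        }
    });

    auto start = std::chrono::steady_clock::now();
    for (int i = 0; i < iters; ++i) {
        flag.store(1, std::memory_order_release);                // ping
        while (flag.load(std::memory_order_acquire) != 0) { /* spin */ }
    }
    auto stop = std::chrono::steady_clock::now();
    responder.join();

    double ns = std::chrono::duration<double, std::nano>(stop - start).count();
    std::printf("average round-trip latency: ~%.1f ns\n", ns / iters);
    return 0;
}
```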

This has led me down another rabbit hole recently, wondering if we could further compartmentalize the gaming performance of the Ryzen processors using memory latency. As I showed in my Ryzen 5 review, memory frequency and throughput directly correlate with gaming performance improvements, on the order of 14% in some cases. But what about looking at memory latency alone?
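
As background on how latency can be isolated from frequency and throughput at all, the standard trick is pointer chasing: each load depends on the result of the previous one, so the CPU cannot overlap requests, and once the working set is far larger than the caches the time per hop approaches the raw memory access latency. The sketch below is illustrative only, not the benchmark used in this article, and the buffer size and hop count are arbitrary.

```cpp
// Sketch of a pointer-chasing memory latency test (illustrative only).
#include <algorithm>
#include <chrono>
#include <cstdio>
#include <numeric>
#include <random>
#include <vector>

int main() {
    // ~64 MB of indices, well beyond any L3 cache on these CPUs.
    constexpr size_t n = 64 * 1024 * 1024 / sizeof(size_t);
    std::vector<size_t> next(n);

    // Build a random single-cycle permutation so the prefetcher can't follow it.
    std::vector<size_t> order(n);
    std::iota(order.begin(), order.end(), size_t{0});
    std::shuffle(order.begin(), order.end(), std::mt19937_64{42});
    for (size_t i = 0; i < n; ++i)
        next[order[i]] = order[(i + 1) % n];

    constexpr size_t hops = 20000000;
    size_t p = order[0];
    auto start = std::chrono::steady_clock::now();
    for (size_t i = 0; i < hops; ++i)
        p = next[p];                         // fully serialized, dependent loads
    auto stop = std::chrono::steady_clock::now();

    double ns = std::chrono::duration<double, std::nano>(stop - start).count();
    std::printf("average latency per hop: ~%.1f ns (chain ends at %zu)\n",
                ns / hops, p);               // printing p keeps the chain live
    return 0;
}
```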

Continue reading our analysis of memory latency, 1080p gaming, and how it impacts Ryzen!