Podcast #484 - New Samsung SSDs, Spectre and Meltdown updates, and more!

Subject: General Tech | January 25, 2018 - 01:26 PM |
Tagged: spectre, Samsung, podcast, plex, meltdown, Intel, inspiron 13, dell, amd, 860 pro, 860 evo

PC Perspective Podcast #484 - 01/25/18

Join us this week for a recap of news and reviews, including new SSDs from Samsung, updates on Spectre and Meltdown, building the ultimate Plex server, and more!

You can subscribe to us through iTunes and you can still access it directly through the RSS page HERE.

The URL for the podcast is: http://pcper.com/podcast - Share with your friends!

Hosts: Ryan Shrout, Jeremy Hellstrom, Josh Walrath, Allyn Malventano

Peanut Gallery: Ken Addison

Program length: 1:28:56

Podcast topics of discussion:

  1. Week in Review:
  2. 0:41:30 Thanks to Casper for supporting our channel. Save $50 on select mattresses at http://www.casper.com/pcper code: pcper
  3. News items of interest:
  4. 1:14:10 Picks of the Week:
    1. Ryan:
  5. Closing/outro
 
Subject: Storage
Manufacturer: Samsung

Introduction, Specifications and Packaging

Introduction:

Samsung launched their 850 line of SSDs in mid-2014 (over three years ago now). The line evolved significantly over time, with the additions of PRO and EVO models, capacity expansions reaching up to 4TB, and a later silent migration to 64-layer V-NAND. Samsung certainly got their money's worth out of the 850 name, but it is now time to move on to something newer:

180123-114944.jpg

Specifications:

specs.png

Of note above is a significantly higher endurance rating as compared to the 850 Series products, along with an update to a new 'MJX' controller, which accounts for a slight performance bump across the board. Not mentioned here is the addition of queued TRIM, which is more of a carryover from enterprise / Linux systems (Windows 10 does not queue its TRIM commands).

Packaging:

180123-112150.jpg

Aside from some updated specs and the new name, packaging remains very much the same.

Read on for our review of the Samsung 860 PRO and EVO SSDs (in multiple capacities!)

(Those of you interested in Samsung's press release for this launch will find it after the break)

Samsung Begins Mass Production Of 18 Gbps 16-Gigabit GDDR6 Memory

Subject: Memory | January 18, 2018 - 12:34 AM |
Tagged: Samsung, graphics memory, graphics cards, gddr6, 19nm

Samsung is now mass producing new higher density GDDR6 memory built on its 10nm-class process technology that it claims offers twice the speed and density of its previous 20nm GDDR5. Samsung's new GDDR6 memory uses 16 Gb dies (2 GB) featuring pin speeds of 18 Gbps (gigabits-per-second) and is able to hit data transfer speeds of up to 72 GB/s per chip.
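
For those curious where the 72 GB/s per-chip figure comes from, here is a quick back-of-the-envelope check (assuming the standard 32-bit-wide GDDR6 device interface, organized as two 16-bit channels per package):

```python
# Quick sanity check on the 72 GB/s per-chip figure, assuming the standard
# 32-bit-wide GDDR6 device interface (two 16-bit channels per package).
pin_speed_gbps = 18   # data rate per pin, in gigabits per second
pins_per_chip = 32    # I/O width of one GDDR6 package, in bits

bandwidth_gb_per_s = pin_speed_gbps * pins_per_chip / 8
print(f"{bandwidth_gb_per_s:.0f} GB/s per chip")  # -> 72 GB/s
```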

Samsung GDDR6_PhotoFs.png

According to Samsung, its new GDDR6 uses a new circuit design that allows it to run on a mere 1.35 volts. Also good news for Samsung, and for memory supply (and thus product pricing and availability), is that the company is seeing a 30% gain in manufacturing productivity cranking out its 16Gb GDDR6 versus its 20nm GDDR5.

Running at 18 Gbps, the new GDDR6 offers up quite a bit of bandwidth and will allow for graphics cards with much higher amounts of VRAM. Per package, Samsung's 16Gb GDDR6 offers 72 GB/s, which is double the density and more than twice the pin speed and bandwidth of its 8Gb GDDR5 running at 8 Gbps and 1.5V with data transfers of 32 GB/s. (Note that SK Hynix has announced plans to produce 9 Gbps and 10 Gbps dies which max out at 40 GB/s.) GDDR5X gets closer to this mark, and in theory is able to hit up to 16 Gbps per pin and 64 GB/s per die, but so far the G5X used in real world products has been much slower (the Titan Xp runs at 11.4 Gbps, for example).

The Titan Xp runs 12 8Gb (1GB) dies at 11.4 Gbps on a 384-bit memory bus for a maximum memory bandwidth of 547 GB/s. Moving to GDDR6 would enable that same graphics card to have 24 GB of memory (with the same number of dies) and up to 864 GB/s of bandwidth, which approaches High Bandwidth Memory levels of performance (though it still falls short of newer HBM2, and in practice the graphics card would likely be more conservative on memory speeds). Still, it is an impressive jump in memory performance that widens the gap between GDDR6 and GDDR5X.

I am curious how the GPU memory market will shake out in 2018 and 2019 with GDDR5, GDDR5X, GDDR6, HBM, HBM2, and HBM3 all readily available for use in graphics cards, and where each memory type will land, especially on mid-range and high-end consumer cards (HBM2/3 still holds the performance crown and is ideal for the HPC market).
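
The card-level numbers in the comparison above follow from the same bandwidth formula; here is a minimal sketch, with the GDDR6 line being a hypothetical drop-in swap rather than an announced product:

```python
# Card-level bandwidth = bus width (bits) * per-pin data rate (Gbps) / 8.
# Figures match the Titan Xp comparison above; the GDDR6 line is a
# hypothetical drop-in swap, not an announced product.
def card_bandwidth_gb_per_s(bus_width_bits, pin_speed_gbps):
    return bus_width_bits * pin_speed_gbps / 8

print(card_bandwidth_gb_per_s(384, 11.4))  # Titan Xp GDDR5X -> 547.2 GB/s
print(card_bandwidth_gb_per_s(384, 18.0))  # same bus, GDDR6 -> 864.0 GB/s
print(12 * 2)                              # 12 x 16Gb (2GB) dies -> 24 GB
```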

Samsung is aiming its new 18Gbps 16Gb memory at high performance graphics cards, game consoles, vehicles, and networking devices. Stay tuned for more information on GDDR6 as it develops!


Source: Samsung

Samsung Mass Producing Second Generation "Aquabolt" HBM2: Better, Faster, and Stronger

Subject: Memory | January 12, 2018 - 05:46 PM |
Tagged: supercomputing, Samsung, HPC, HBM2, graphics cards, aquabolt

Samsung recently announced that it has begun mass production of its second generation HBM2 memory which it is calling “Aquabolt”. Samsung has refined the design of its 8GB HBM2 packages allowing them to achieve an impressive 2.4 Gbps per pin data transfer rates without needing more power than its first generation 1.2V HBM2.

SAMSUNG-HBM2_C.jpg

Reportedly Samsung is using new TSV (through-silicon via) design techniques and adding additional thermal bumps between dies to improve clocks and thermal control. Each 8GB HBM2 “Aquabolt” package comprises eight 8Gb dies, each of which is vertically interconnected using 5,000 TSVs, which is a huge number considering how small and tightly packed these dies are. Further, Samsung has added a new protective layer at the bottom of the stack to reinforce the package’s physical strength. While the press release did not go into detail, it does mention that Samsung had to overcome challenges relating to “collateral clock skewing” as a result of the sheer number of TSVs.

On the performance front, Samsung claims that Aquabolt offers a 50% increase in per-package performance versus its first generation “Flarebolt” memory, which ran at 1.6 Gbps per pin and 1.2V. Interestingly, Aquabolt is also faster than Samsung’s 2.0 Gbps per pin HBM2 product (which needed 1.35V) without needing additional power. Samsung also compares Aquabolt to GDDR5, stating that it offers 9.6 times the bandwidth: a single package of HBM2 hits 307 GB/s versus 32 GB/s for a GDDR5 chip. Thanks to the 2.4 Gbps per pin speed, Aquabolt offers 307 GB/s of bandwidth per package, and with four packages, products such as graphics cards can take advantage of 1.2 TB/s of bandwidth.
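
A quick sketch of where those figures come from, assuming the standard 1024-bit interface of an HBM2 stack:

```python
# Per-package HBM2 bandwidth = interface width (bits) * per-pin rate (Gbps) / 8,
# with each HBM2 stack exposing a 1024-bit interface.
def hbm2_gb_per_s(pin_rate_gbps, interface_bits=1024):
    return interface_bits * pin_rate_gbps / 8

aquabolt = hbm2_gb_per_s(2.4)
print(aquabolt)            # -> 307.2 GB/s per package
print(hbm2_gb_per_s(2.0))  # 2.0 Gbps HBM2 -> 256.0 GB/s
print(4 * aquabolt)        # four packages -> 1228.8 GB/s (~1.2 TB/s)
print(aquabolt / 32)       # vs. a 32 GB/s GDDR5 chip -> 9.6x
```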

This second generation HBM2 memory is a decent step up in performance (with HBM hitting 128 GB/s and first generation HBM2 hitting 256 GB/s per package, or 512 GB/s and 1 TB/s with four packages respectively), but the interesting bit is that it is faster without needing more power. The increased bandwidth and data transfer speeds will be a boon to the HPC and supercomputing market and useful for working with massive databases, simulations, neural networks and AI training, and other “big data” tasks.

Aquabolt looks particularly promising for the mobile market, though, with future products succeeding the current mobile Vega GPU in Kaby Lake-G processors, Ryzen Mobile APUs, and eventually discrete Vega mobile graphics cards potentially getting a nice performance boost (it’s likely too late for AMD to go with this new HBM2 on these specific products, but future refreshes or generations may be able to take advantage of it). I’m sure it will also see usage in the SoCs used in Intel’s and NVIDIA’s driverless car projects.

Source: Samsung

CES 2018: Samsung Plans To Launch Galaxy S9 at MWC Next Month

Subject: General Tech, Mobile | January 11, 2018 - 05:20 PM |
Tagged: snapdragon 845, Samsung, MWC, galaxy s9, galaxy, exynos 9810, CES 2018, CES

Samsung confirmed to ZDNet at CES that it plans to launch its new flagship Galaxy S smartphone at Mobile World Congress next month. The company has managed to keep a tight lid on the new devices, which are expected to be named the Galaxy S9 and Galaxy S9+, with surprisingly few leaks. Samsung will reportedly show off the smartphone and announce the official specifications along with the release date and pricing information at its MWC keynote event.

Samsung.jpg

Thanks to the rumor mill, there are potential specifications floating about, with a few conflicting bits of information, particularly where the fingerprint scanner is concerned. Looking around, there seem to be several corroborated (but still rumored) specifications for the Galaxy S9 and Galaxy S9+. Allegedly the Galaxy smartphones will feature curved Super AMOLED displays with QHD+ (3200x1800) resolutions measuring 5.8" on the Galaxy S9 and 6.2" on the Galaxy S9+. Further, Samsung is equipping them with dual rear-facing cameras, USB-C, and a 3.5mm headphone jack. There are conflicting rumors on the fingerprint scanner, with some claiming it will be embedded in the display while others claim that Samsung ran into issues and instead opted for a rear-mounted sensor.

Internally, the Galaxy S9 and Galaxy S9+ will be powered by a Qualcomm Snapdragon 845 in the US and Samsung's own Exynos 9810 SoC outside of the US. Cat 18 LTE support is present in either case, with faster-than-gigabit download speeds possible (though less in real world situations). The Galaxy S9 will allegedly be offered with 4GB of RAM and either 64GB or 128GB of storage, while the S9+ will have 6GB of RAM and up to 256GB of internal flash storage.

In any case, the Galaxy S9 and S9+ are set to be powerhouses with the latest SoCs and (hopefully) large batteries for those infinity displays! It seems that we will have to wait another month for official information, but the phones should be out within the first quarter, which is actually pretty fast considering it seems like the Galaxy S8 just came out (although it was actually last March, heh). Mobile World Congress 2018 is scheduled from February 26th to March 1st in Barcelona, Spain.

What are your thoughts on the Galaxy S9 rumors so far? Do you plan to upgrade? This year may be the year I upgrade my LG G3 since the display is dying, but we'll see!


Source: Ars Technica

Samsung SZ985 Z-NAND SSD - Upcoming Competition for Intel's P4800X?

Subject: Storage | November 20, 2017 - 10:56 PM |
Tagged: Z-NAND, SZ985, slc, Samsung, P4800X, nand, Intel, flash

We haven't heard much about Samsung's 'XPoint Killer' Z-NAND since Flash Memory Summit 2017, but now we have a bit more to go on:

z-nand specs.png

Yes, actual specs. In print. Not bad either, considering the Samsung SZ985 appears to offer a bus-saturating 3.2GB/s for reads and writes. The 30 DWPD figure matches Intel's P4800X, which is impressive given that Samsung's part uses flash derived from their V-NAND line (operating in a different mode). The most important figures here are latency, so let's focus there for a bit:

bridging the gap samsung z-nand.png

While the SZ985 runs at ~1/3rd the latency of Samsung's own NAND SSDs, it has roughly double the latency of the P4800X. For the moment that is not as bad as it seems, as it takes a fair amount of platform optimization to see the full performance benefits of Optane, and operating slightly higher on the latency spectrum helps mask the impact of incorrectly optimized platforms:

02 - IRQ service time penalty.png

Source: Shrout Research

As you can see above, operating at slightly higher latencies, while netting lower overall performance, does lessen the sting of platform-induced IRQ latency penalties.
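
To illustrate that point, here is a minimal sketch using purely hypothetical latency numbers (not measured P4800X or SZ985 figures): a fixed platform penalty costs the faster device proportionally more.

```python
# Illustrative only: a fixed platform/IRQ overhead costs a faster device
# proportionally more. The microsecond values below are made-up round
# numbers, not measured P4800X or SZ985 latencies.
overhead_us = 5  # hypothetical extra IRQ service time

for name, device_us in [("XPoint-class (lower latency)", 10),
                        ("Z-NAND-class (higher latency)", 20)]:
    effective = device_us + overhead_us
    penalty = (effective / device_us - 1) * 100
    print(f"{name}: {device_us}us -> {effective}us (+{penalty:.0f}%)")

# The same 5us penalty is a 50% hit on the 10us device, but only 25% on the 20us one.
```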

170808-120245.jpg

Now to discuss costs. While we don't have any hard figures, we do have the above slide from FMS 2017, where Samsung stressed that they are trying to get the costs of Z-NAND down while keeping latencies as low as possible.

z-nand.jpg

Image Source: ExtremeTech

Samsung backed up their performance claims with a Technology Brief (available here), which showed decent performance gains and cited use cases paralleling those we've seen from Intel. The takeaway here is that Samsung *may* be able to compete with the Intel P4800X in a similar performance bracket - not matching the performance but perhaps beating it on cost. The big gotcha is that we have yet to see a single Samsung enterprise NVMe SSD come through our labs for testing (or appear anywhere on the market, for that matter), so take these sorts of announcements with a grain of salt until these products gain broader adoption/distribution.

Source: Samsung

Samsung Odyssey VR Headset Announced

Subject: General Tech | October 3, 2017 - 10:00 PM |
Tagged: VR, Samsung, pc gaming, microsoft

The upcoming Fall Creators Update will be Microsoft’s launch into XR, with headsets from a variety of vendors. You can now add Samsung to that list with their Odyssey VR headset and motion controllers, which is important for two reasons. First, Samsung has a lot of experience in VR technology, as they led the charge (with their partner, Oculus) in the mobile space.

samsung-2017-odyssey-pc-vr-headset.png

Second, and speaking of Oculus, the Samsung Odyssey actually has a higher resolution than both the Rift and the HTC Vive (2880x1600 total for Samsung vs 2160x1200 total for the other two). This doesn’t seem like a lot, but it’s actually 77% more pixels, which might be significant for text and other fine details. The refresh rate is still 90 Hz, and the field of view is around 110 degrees, which is the same as the HTC Vive. Of course the screen technology itself is AMOLED, being that it’s from Samsung, and deeper blacks are more important in an enclosed cavity than brightness. In fact, you probably want to reduce brightness in a VR headset so you don’t strain the eyes.
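
For those wondering where the 77% figure comes from, the pixel math is straightforward:

```python
# Pixel math behind the "77% more pixels" claim.
odyssey_pixels = 2880 * 1600    # 4,608,000 total (both panels combined)
vive_rift_pixels = 2160 * 1200  # 2,592,000 total

print(f"{(odyssey_pixels / vive_rift_pixels - 1) * 100:.1f}% more pixels")  # -> 77.8%
```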

According to Peter Bright of Ars Technica, Microsoft is supporting SteamVR titles, which gives the platform a nice catalog to launch with. The Samsung Odyssey VR headset launches November 6th for $499 USD.

Source: Microsoft
Subject: Storage
Manufacturer: PC Perspective

Introduction

We've been hearing about Intel's VROC (NVMe RAID) technology for a few months now. ASUS began slipping clues into their X299 motherboard releases back in May. The idea was very exciting, as prior NVMe RAID implementations on Z170 and Z270 platforms were bottlenecked by the chipset's PCIe 3.0 x4 DMI link to the CPU, and they also had to trade away SATA ports for M.2 PCIe lanes in order to accomplish the feat. X99 motherboards supported SATA RAID and even sported four additional ports, but they were left out of bootable NVMe RAID altogether. It would be foolish of Intel to launch a successor to their higher end workstation-class platform without a feature available in two (soon to be three) generations of their consumer platform.
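
As a rough illustration of that DMI bottleneck, here is the ceiling math, assuming roughly 985 MB/s of usable bandwidth per PCIe 3.0 lane:

```python
# Rough ceiling math for chipset-attached NVMe RAID, assuming ~985 MB/s of
# usable bandwidth per PCIe 3.0 lane (the exact figure varies with overhead).
per_lane_gb_per_s = 0.985

dmi_x4 = 4 * per_lane_gb_per_s    # chipset-to-CPU DMI link -> ~3.9 GB/s
cpu_x16 = 16 * per_lane_gb_per_s  # CPU-attached x16 slot   -> ~15.8 GB/s
print(round(dmi_x4, 1), round(cpu_x16, 1))

# Two fast NVMe SSDs (~3.2-3.5 GB/s each) can already saturate the DMI link,
# while four of them fit comfortably under a CPU-attached x16 slot.
```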

To get a grip on what VROC is all about, let's set up some context with a few slides:

slide-1.png

First, we have a slide laying out what the acronyms mean:

  • VROC = Virtual RAID on CPU
  • VMD = Volume Management Device

What's a VMD you say?

slide-2.png

...so the VMD is extra logic present on Intel Skylake-SP CPUs, which enables the processor to group up to 16 lanes of storage (4x4) into a single PCIe storage domain. There are three VMD controllers per CPU.

slide-3.png

VROC is the next logical step and takes things a bit further. While boot support is restricted to a single VMD, PCIe switches can be added downstream to create a bootable RAID possibly exceeding 4 SSDs. So long as the array need not be bootable, VROC enables spanning across multiple VMDs and even across CPUs!
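
To make the bootability rule concrete, here is a minimal sketch (a simplification for illustration, not Intel's actual tooling or API):

```python
# Simplified sketch of the rules described above (not Intel's actual API):
# an array is bootable only when every member SSD sits behind the same VMD
# domain; spanning multiple VMDs or CPUs is allowed for data volumes only.
def array_is_bootable(member_vmd_ids):
    """member_vmd_ids: the VMD domain ID each SSD in the array hangs off."""
    return len(set(member_vmd_ids)) == 1

print(array_is_bootable([0, 0, 0, 0]))  # four SSDs on one VMD          -> True
print(array_is_bootable([0, 0, 1, 1]))  # spanning two VMDs (data only) -> False
```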

Assembling the Missing Pieces

Unlike prior Intel storage technology launches, the VROC launch has been piecemeal at best and contradictory at worst. We initially heard that VROC would only support Intel SSDs, but Intel later published a FAQ stating that 'selected third-party SSDs' would also be supported. One thing they have remained steadfast on is the requirement for a hardware key to unlock RAID-1 and RAID-5 modes - a seemingly silly requirement given that their consumer chipsets support bootable RAID-0/1/5 without any key (and VROC only supports one additional SSD over Z170/Z270/Z370, which can boot from 3-drive arrays).

On the 'piecemeal' topic, we need three things for VROC to work:

  • BIOS support for enabling VMD Domains for select groups of PCIe lanes.
  • Hardware for connecting a group of NVMe SSDs to that group of PCIe lanes.
  • A driver for OS mounting and managing of the array.

Let's run down this list and see what is currently available:

BIOS support?

170927121509.png

Check. Hardware for connecting multiple drives to the configured set of lanes?

170927-165526.jpg

Check (960 PRO pic here). Note that the ASUS Hyper M.2 X16 Card will only work on motherboards supporting PCIe bifurcation, which allows the CPU to split its PCIe lanes into subgroups without the need for a PLX chip. You can see two bifurcated modes in the BIOS screenshot above - one is intended for VMD/VROC, while the other (data) selection enables bifurcation without enabling the VMD controller. That option presents the four SSDs to the OS without the need for any special driver.

With the above installed, and the slot configured for VROC in the BIOS, we are greeted by the expected disappointing result:

VROC-2.png

Now for that pesky driver. After a bit of digging around the dark corners of the internet:

VROC-11.png

Check! (well, that's what it looked like after I rapidly clicked my way through the array creation)

Don't even pretend like you won't read the rest of this review! (click here now!)

Still no good news on the DRAM front

Subject: General Tech | September 22, 2017 - 02:05 PM |
Tagged: DRAM, Samsung, SK Hynix, micron

The transition to newer process technology continues to have a negative effect on DRAM supplies, and according to the story posted at Electronics Weekly there is no good news in sight. The three major vendors, Samsung, SK Hynix, and Micron, are all slowing production as new fabs are built and existing production lines are upgraded for new process technology such as EUV. This will ensure that prices continue to slowly creep up over the remainder of this year and likely into 2018. Drop by for more information on the challenges each vendor is facing.

index.jpg

"While overall DRAM demand will remain high in 2018, new fabs being planned will not be ready for mass production until 2019 at the earliest."

Here is some more Tech News from around the web:

Tech Talk

Intel Technology and Manufacturing Day in China

Subject: General Tech | September 19, 2017 - 11:33 PM |
Tagged: Intel, China, cannon lake, coffee lake, 10nm, 14nm+, 14nm++, 22FFL, GLOBALFOUNDRIES, Samsung, 22FDX

Today in China, Intel is holding their Technology and Manufacturing Day. Unlike previous "IDF" events, this one appears to be far more centered on the manufacturing aspects of Intel's latest process nodes. During the presentations, Intel talked about their latest steps down the process ladder to smaller and smaller geometries, all the while improving performance and power efficiency.
 
Mark-Bohr-Intel-Manufacturing.jpg
Mark Bohr presenting at Intel Technology and Manufacturing Day in China. (Image courtesy of Intel Corporation)
 
It really does not seem as though 14nm has been around as long as it has, but the first Intel products based on that node were released in the 2nd half of 2014.  Intel has since done further work on the process. Today the company talked about two other processes as well as products being made on these nodes.
 
The 10nm process has been in development for some time, and we will not see products this year. Instead, we will see two product cycles based on 14nm+ and 14nm++ parts. Intel did show off a wafer of 10nm Cannon Lake dies. Intel claims that their 10nm process is still around 3 years more advanced than the competition. Other foundry groups have announced and shown off 10nm parts, but their overall transistor density and performance do not look to match what Intel has to offer.
 
We have often talked about the marketing names that these nodes have been given, and how often their actual specifications have not really lived up to those names. Intel is not immune to this, but they are closer to accurately describing these structures than the competition. Even though this gap does exist, the competition is improving its products and offering compelling solutions at decent prices, so fabless semi firms can mostly keep up with Intel.
 
Stacy-Smith-Intel-Manufacturing.jpg
Nothing like handling a 10nm Cannon Lake wafer with bare hands! (Image courtesy of Intel Corporation)
 
A new and interesting process is being offered by Intel in the form of 22FFL. This is an obviously larger process node, but it is highly optimized for low power operation, with far better leakage characteristics than the previous 22nm FF process that Intel used all those years ago. It is aimed at ultra-mobile devices with speeds above 2 GHz. This seems to be a response to other low power lines like the 22FDX product from GLOBALFOUNDRIES. Intel did not mention potential RF implementations, which is something of great interest to those also looking at 22FDX.
 
Perhaps the biggest news released today is that of Intel Custom Foundry announcing an agreement with ARM to develop and implement ARM CPUs on the upcoming 10nm process. This could have a huge impact depending on the amount of 10nm line space that Intel is willing to sell to ARM's partners, as well as what timelines they are looking at to deliver products. ARM showed off a 10nm test wafer of Cortex-A75 CPUs. The company claims that they were able to design and implement these cores using industry standard design flows (automated place and route, rather than fully custom) while achieving performance in excess of 3 GHz.
 
ARM-Intel-manufcturing.jpg
Gus Yeung of ARM holding a wafer of 10nm Cortex-A75 based CPUs built on Intel's process. (Image courtesy of Intel Corporation)
 
Intel continues to move forward and invest a tremendous amount of money in their process technology. They have the ability to continue at this rate far beyond that of other competitors. Typically the company does a lot of the heavy lifting with the tooling partners, which then trickles down to the other manufacturers. This has allowed Intel to stay so far ahead of the competition, and with the introduction of 14nm+, 14nm++, and 10nm they will keep much of that lead. Now we must wait and see what kind of clock speed and power performance we get from these new nodes, and how well (and when) the competition can react.