Phison Previews PCIe Gen4 x4 NVMe with PS5016-E16 Controller

Subject: Storage | January 11, 2019 - 09:36 AM
Tagged: ssd controller, ssd, solid state drive, PS5016-E16, phison, PCIe Gen4, PCI Express 4.0, NVMe

One area that will see an immediate impact from PCI Express 4.0, which first arrives with AMD's upcoming Ryzen desktop processors, is storage, and to that end Phison is not waiting around to show just what we can expect from the first generation of PCIe Gen4 SSDs.


Phison PS5016-E16 performance slide (image credit: ComputerBase)

The company’s PS5016-E16 controller was on display at CES in a prototype device, and is powered by a quad-core solution combining two ARM cores with a pair of proprietary CO-X processor cores from Phison. Basic specs from Phison include:

  • PCIe Gen4 x4 NVMe
  • 8 Channels with 32 CEs
  • NAND interface: 800 MT/s support
  • DDR4 interface: 1600 Mb/s support
  • 3D TLC and QLC support
  • Designed with Phison’s 4th Gen LDPC Engine


Phison PS5016-E16 prototype device (image credit: Legit Reviews)

As for performance, Phison lists sequential speeds of 4000 MB/s reads and 4100 MB/s writes, while providing a graphic showing CrystalDiskMark results slightly exceeding these numbers. How can Phison exceed the potential of PCIe Gen3 x4 with this early demo? As reported by Legit Reviews, Phison is using a Gen4HOST add-in card from PLDA, which “uses a PCIe 3.0 x16 (upstream) to PCIe 4.0 x8 (downstream) integration backplane for development and validation of PCIe 4.0 endpoints”.
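A quick back-of-the-envelope check shows why those sequential numbers are out of reach for a Gen3 x4 link. This is a sketch only; the helper function and names below are illustrative, not anything from Phison:

```python
# Rough per-lane PCIe throughput after 128b/130b line encoding
# (used by PCIe 3.0 and 4.0). Figures are theoretical ceilings,
# ignoring protocol overhead beyond the line code.

def pcie_lane_mb_s(gt_per_s: float, payload_bits: int = 128, frame_bits: int = 130) -> float:
    """Usable MB/s on one lane: transfer rate (GT/s) scaled by encoding efficiency."""
    return gt_per_s * 1000 * payload_bits / frame_bits / 8

gen3_x4 = pcie_lane_mb_s(8.0) * 4    # PCIe 3.0 x4: ~3938 MB/s ceiling
gen4_x4 = pcie_lane_mb_s(16.0) * 4   # PCIe 4.0 x4: ~7877 MB/s ceiling

print(f"PCIe 3.0 x4 ceiling: {gen3_x4:.0f} MB/s")
print(f"PCIe 4.0 x4 ceiling: {gen4_x4:.0f} MB/s")
```

A Gen3 x4 link tops out around 3.9 GB/s, below Phison's claimed 4.0/4.1 GB/s, which is why the demo needs a Gen4 downstream link at all.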


Phison PS5016-E16 demo system in action (image credit: Legit Reviews)

The Phison PCIe Gen4 x4 NVMe controller is expected to hit the consumer market by Q3 2019.

Source: ComputerBase


January 11, 2019 | 10:46 AM - Posted by LappyNeedThatPCIe4sLoveEvenMore (not verified)

Laptops need PCIe 4.0 more than desktops, what with laptops having more limited numbers of PCIe lanes relative to PCs. TB3 requires a lot of PCIe 3.0 lanes, or else that 40 Gb/s is going to be limited by not enough PCIe bandwidth.

This TB3 bandwidth discrepancy occurred on some OEMs' laptops where the OEM only provided 2 PCIe 3.0 lanes. Most laptop usage does not come close to TB3's maximum bandwidth, but external TB3/PCIe enclosures with desktop GPUs and high-resolution monitors interfaced via TB3 are bandwidth taxing even at TB3's maximum 40 Gb/s limit.

There is also USB 3.2, with its two USB 3.1 Gen 2 (10 Gb/s) links bonded/aggregated to provide a total of 20 Gb/s of bandwidth, and that is also going to increase the bandwidth demands on laptops. Add in the PCIe lanes required for high-performance NVMe SSDs and laptops are really becoming I/O bandwidth constrained with the limited number of PCIe 3.0 lanes that are currently the standard.

PC motherboards are just now starting to offer integrated 2.5 Gb/s Ethernet connectivity, and hopefully that may come to laptops at some point as well.

January 11, 2019 | 08:27 PM - Posted by First Ever (not verified)

Money, money, money, money ... and only 4 GB/s :P
This is an old picture from last year. I've now got 27 GB/s on the same machine! :)

January 11, 2019 | 08:52 PM - Posted by Paul A. Mitchell (not verified)

Many thanks for reporting this news, Sebastian!

PCIe 4.0 is a very significant development for the PCI-Express bus in general. It retains the 128b/130b "jumbo frame" and doubles the clock rate from 8 GT/s to 16 GT/s.

Thus, 16G / 8.125 bits per byte = 1,969.2 MB/second per lane, aka MAX HEADROOM.

This doubling will also double the available bandwidth for NVMe devices, because most of them already use x4 lanes. One exception was the initial versions of the M.2 Optanes, which only used x2 PCIe 3.0 lanes.

Thus, MAX HEADROOM for x4 NVMe PCIe 4.0 devices = 1,969.2 x 4 = 7,876.8 MB/second.

As such, this vastly increased raw bandwidth should present some very exciting challenges for all storage vendors.

Pretty amazing, to be sure!

January 11, 2019 | 10:01 PM - Posted by YouAreNotTuringComplete (not verified)

Paul A. Mitchell, are you a not-very-well-trained AI? MAX HEADROOM, isn't he that snarky AI from the 1980s?

Either way, your AI appears to have been trained by ingesting petabytes of marketing speak along with a very large side order of non sequiturs!

January 12, 2019 | 04:37 PM - Posted by Paul A. Mitchell (not verified)

Yes, indeed, MAX HEADROOM here, thrilling my aging circuits with more new math to chew on! 16G/8.125x4=7876 -- LUV IT!!

January 14, 2019 | 06:48 AM - Posted by Abc123 (not verified)

This is great news; it looks like 2019 will be the year of the cheap SATA SSD and fast NVMe drives!! Looking forward to a faster drive... Sucks Sebastian left :((

January 14, 2019 | 01:26 PM - Posted by Sebastian Peak

I didn't leave. Well, I mean I left CES, but I'm still here. 2019 will indeed be a great year for storage if trends continue...

January 14, 2019 | 06:36 PM - Posted by oneofthebest...taughtjeremyeverythingheknows (not verified)

agreed... Thanks for your many great years of service to PC Per, Sebastian. Look forward to seeing what you accomplish in the future. Cheers and Godspeed o7 :)

January 14, 2019 | 09:32 PM - Posted by Paul A. Mitchell (not verified)

Ditto that: our thanks to y'all!!

January 14, 2019 | 09:53 PM - Posted by Paul A. Mitchell (not verified)

Here's how we compute the gross controller overhead
using that Gen4HOST development and validation card:

take the best number from that CDM measurement
(photo above "Phison PS5016-E16 demo system in action"):

1.0 - (4,244.4 / 7,876.8) = 1.0 - 0.5388 = 46.12% overhead

So there is obviously a lot of overhead that results from an extra "translation" layer of hardware and firmware. That layer was necessary because the host motherboard uses PCIe 3.0 lanes.

The 46% overhead should drop significantly when that extra layer is gone.

Use the same formula to compare the overhead
of a Samsung SATA SSD doing READs at 560 MB/second
on a SATA 6G cable:

1.0 - (560 / 600) = 1.0 - 0.9333 = 6.67%

Now, predict PCIe 4.0 performance, using the latter:

7,876.8 x 0.9333 = ~7,350 MB/second (!!)
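The formula above can be sketched in a few lines of Python. The helper and variable names are illustrative only; the inputs are the CDM read figure and the theoretical ceilings from this thread:

```python
# Fraction of theoretical link bandwidth lost to controller/link overhead,
# per the comparison in the comment above.

def overhead(measured: float, ceiling: float) -> float:
    """1 - (measured / theoretical ceiling), as a fraction."""
    return 1.0 - measured / ceiling

demo = overhead(4244.4, 7876.8)   # demo via the Gen4HOST backplane: ~46%
sata = overhead(560.0, 600.0)     # mature SATA 6G SSD: ~6.7%

# Predicted native PCIe 4.0 x4 throughput if only SATA-like overhead remains:
predicted = 7876.8 * (1.0 - sata)

print(f"demo overhead: {demo:.1%}")
print(f"SATA overhead: {sata:.1%}")
print(f"predicted native PCIe 4.0 x4: {predicted:.0f} MB/s")
```

Note the demo figure works out to roughly 46% overhead, and the SATA-style prediction lands near 7,350 MB/s, matching the estimate above.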

January 14, 2019 | 10:06 PM - Posted by Paul A. Mitchell (not verified)

Have you heard or read any mention of a PCIe 4.0 version in the M.2 form factor? That device will be a very welcome addition to motherboards that have native PCIe 4.0 support in their chipsets, as long as the M.2 socket is wired directly to the CPU.

January 15, 2019 | 06:33 PM - Posted by chipman (not verified)

Who the hell needs an overheating PCI-E SSD controller in the M.2 form factor? That's a complete waste of money!

SATA-IO would do better to develop a new SATA revision providing multiple lanes (similar to or better than PCI-E x16), with a new connector kept backward compatible by means of a notch separating basic pins from extended pins.

The SATA Express hybrid connector (SATA + PCI-E) is huge crap that nobody needed!

The huge code base developed from ATA to SATA won't carry over to PCI-E storage devices... starting with the bootstrap from the BIOS.

It's by far less expensive to invest in new backward-compatible hardware than in many man-hours of software development.

January 17, 2019 | 07:40 PM - Posted by Paul A. Mitchell (not verified)

Good points: several years ago, we proposed "syncing" storage with chipsets: x4 NVMe sockets do that. Along those same lines, SATA-IV could adopt the 128b/130b "jumbo frame" and up the clock rate at least to 12G, which is now available with some SAS controllers.

I've seen a few articles that claim the 16G clock in PCIe 4.0 limits cable and trace lengths, but 12G seems to be working AOK with current SAS devices and regular cables. My current theory is that the IT oligopoly effectively "froze" SATA development as a means of pushing prosumers and the "bleeding edge" to adopt NVMe.

p.s. There are now lots of M.2 cooling solutions. My favorite is the ASRock Ultra Quad M.2 add-in card. Also, a few ASUS motherboards support their DIMM.2 socket with an optional fan.

January 15, 2019 | 07:01 PM - Posted by chipman (not verified)

About the new SATA revision... it could be better to define it as a new major SATA standard, such as "Wide SATA" for example, but not SATA II or SATA-X, which could lead to confusion with SATA revision 2 (aka SATA-300) or SATA Express (i.e. the SATA + PCI-E hybrid connector).

January 17, 2019 | 07:52 PM - Posted by Paul A. Mitchell (not verified)

FYI: we proposed something similar back in 2012, at the Storage Developer Conference.

What emerged instead was NVMe in a few form factors e.g. x4 M.2. Each PCIe 3.0 lane now clocks at 8G and uses the 128b/130b "jumbo frame".

Your points about software development are right on. For example, most modern 2.5" SSDs work fine with SATA-I and SATA-II host controllers.

Future SATA-IV SSDs should be able to negotiate a target speed with the host controller and do so transparently to the user.

Take a look at the current prices for 12G SAS SSDs! Last time we looked, they were SKY HIGH.

If the next SATA spec retains a single "Serial" channel, at least the next spec could accommodate the 16G clock which will be standard in upcoming PCIe 4.0 chipsets.

And, while they are at it, the SATA standards group should also throw in the jumbo frame (only makes sense).

One lingering issue is the engineering limit on cable and trace lengths oscillating at 16 GHz.

Many thanks for sharing your valuable insights!

January 17, 2019 | 08:06 PM - Posted by Paul A. Mitchell (not verified)

At Newegg today, $2,900 USD for Seagate 1200.2 ST3840FM0003 2.5" 3,840 GB Dual 12Gb/s SAS eMLC Enterprise Solid State Disk:

Max Sequential READ:
Up to 1850 MBps (Dual port)
Up to 1100 MBps (Single port)

Thus, doubling the clock rate does seem to "scale"
even though SAS still uses the 8b/10b legacy frame:
1,100 MBps / 2 = 550 MBps

January 16, 2019 | 02:41 PM - Posted by ButtonPuncher (not verified)

4GB/sec on the desktop but 10Gbps ethernet is still rare/expensive. Argh.

March 23, 2019 | 10:21 PM - Posted by peter j connell (not verified)

On a few select asus am4 mobos, right now, there is lane space for 3x nvme & an 8 lane dgpu.

extrapolate that to pcie 4 & ~7GB/s nvme drives, thats a whopping 21 GB/s in a 3x raid 0 - about half the memory bandwidth.

Used for a page file, that's ~unlimited ~memory on AM4.
