The Connector Formerly Known as SFF-8639 - Now Called U.2

Subject: Storage | June 8, 2015 - 04:04 PM
Tagged: U.2, ssd, SFF-8639, pcie, NVMe, Intel, computex 2015, computex

Intel has announced that the SSD Form Factor Working Group has finally come up with a name to replace the long-winded SFF-8639 label currently applied to 2.5" devices that connect via PCIe.

As Hardwarezone spotted in the photo above, the SFF-8639 connector will now be called U.2 (pronounced 'U dot 2'). The name appropriately parallels the M.2 connector currently used in portable and small form factor devices today, just with a new letter before the dot.

An M.2 NVMe PCIe device placed on top of a U.2 NVMe PCIe device.

Just as the M.2 connector can carry both SATA and PCIe signaling, the U.2 connector is an extension of the standard SATA / SAS connectors:

Not only are there an additional 7 pins between the repurposed SATA data and power pins; there are also an additional 40 pins on the back side. Together these can carry up to a PCIe 3.0 x4 link to the connected device. Here is what those pins look like on the connector itself:

Further details about the SFF-8639 / U.2 connector can be seen in the slide below, taken from the P3700 press briefing:

With throughput of up to 4 GB/sec and the ability to employ the new low-latency NVMe protocol, the U.2 and M.2 standards are expected to quickly displace SATA Express. An additional look at the U.2 standard (then called SFF-8639), as well as a means of adapting from M.2 to U.2, can be found in our Intel SSD 750 Review.
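
For reference, that ceiling falls straight out of the PCIe 3.0 numbers; a minimal back-of-the-envelope sketch (Python, purely illustrative):

    # PCIe 3.0 runs at 8 GT/s per lane with 128b/130b line encoding,
    # so each lane moves 8 * (128/130) = ~7.88 usable gigabits per second.
    raw_gbps = 8.0            # gigatransfers/s == gigabits/s on the wire
    encoding = 128 / 130      # usable fraction after 128b/130b encoding
    lanes    = 4              # U.2 carries up to a x4 link
    gb_per_s = raw_gbps * encoding / 8 * lanes
    print(f"{gb_per_s:.2f} GB/s")   # ~3.94 GB/s -- the 'up to 4 GB/sec' figure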

Source: Hardwarezone

June 8, 2015 | 04:41 PM - Posted by Anonymous (not verified)

cluster

June 8, 2015 | 06:59 PM - Posted by Zabojnik

With or without the U.2, I'm getting an NVMe SSD soon. (I'm sorry.)

June 8, 2015 | 07:06 PM - Posted by Anonymous (not verified)

Sounds like you've found what you're looking for.

June 8, 2015 | 07:50 PM - Posted by Jeremy Hellstrom

Strange, it doesn't look like an edge device.

September 15, 2015 | 09:29 PM - Posted by Anonymous (not verified)

Yes, where the SSDs have no name.

September 30, 2015 | 04:46 PM - Posted by Anonymous (not verified)

bono-voyage to the old name, I guess.

June 8, 2015 | 09:25 PM - Posted by MRFS (not verified)

Thanks for the HEADS UP, Allyn.

Keep up the good work!

MRFS

June 9, 2015 | 08:48 AM - Posted by Mobile_Dom

If NVMe boot and boards with M.2 slots and U.2 connectors don't come when Zen launches, it'll feel as if AMD doesn't want to compete at all.

June 9, 2015 | 09:38 AM - Posted by MRFS (not verified)

Good point! Yes, we need motherboards with integrated U.2 ports and support for normal RAID modes + TRIM for RAID-0. If the motherboard manufacturers won't build them, then maybe Add-In Cards ("AIC") will do the job until motherboards integrate enough U.2 ports.

2.5" NVMe SSDs will become an industry-wide standard, so it's reasonable to expect prices to come down eventually. I will enjoy the combination of speed and reliability that should result from a RAID-5 array of 4 x 2.5" Intel 750 SSDs, or even a RAID-0 array of 2 such SSDs.

We have one OS running on a RAID-0 array of 4 x Samsung 840 Pro SSDs, and the responsiveness of that workstation is notable: ATTO reports READs at 1,846 MBps:

http://supremelaw.org/systems/io.tests/4xSamsung.840.Pro.SSD.RR2720.P5Q....

June 9, 2015 | 10:28 AM - Posted by willmore (not verified)

It's going to take a lot of work for either AMD or Intel to support this. A current socket 1150 processor only has 16 lanes of PCI-E coming off of it. That's four drive connectors--and no video card.

We're either going to have to move to x8 PCI-E for our video cards (freeing up two drives' worth of bandwidth), we're going to need to see a lot more lanes of PCI-E coming off the CPU, or we're going to start seeing PCI-E switches on the motherboards again.

Only the first of those is going to make users happy, while the middle one is a reasonable compromise, and the last will be the slowest and most expensive.

Even a high-end 40-lane processor can only handle two x16 PCI-E slots (video cards) and two of these x4 PCI-E drive connectors. That leaves nothing for Ethernet, USB 3.1, or Thunderbolt.
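
Spelled out as a quick sketch (illustrative arithmetic only; Python):

    # Budget for a 40-lane CPU (e.g. Haswell-E), fully allocated:
    total_lanes = 40
    gpu_lanes   = 2 * 16      # two x16 graphics slots
    drive_lanes = 2 * 4       # two x4 U.2 drives
    leftover    = total_lanes - gpu_lanes - drive_lanes
    print(leftover)           # 0 -- nothing left over for NICs, USB 3.1, etc.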

June 10, 2015 | 10:41 AM - Posted by MRFS (not verified)

(2 @ 16) + (2 @ 4) = 32 + 8 = 40.

Thanks for that!

What is Supermicro doing with their NVMe backplanes?

I seem to recall reading one article about their NVMe solutions for high-end servers.

June 10, 2015 | 10:56 AM - Posted by MRFS (not verified)

I don't understand many of the acronyms listed here:

http://www.supermicro.com/products/nfo/NVMe.cfm

Right below "4x NVMe and 8x 2.5" Hot-swap SAS3 HDD bays"

... what are "1x InfiniBand port (FDR, 56Gbps), w/ QSFP connector ("FR" SKUs)" ?

June 10, 2015 | 08:31 PM - Posted by MRFS (not verified)

According to Supermicro, their server model SS2028TP-DNCR uses LSI 3008 RAID controllers. Here's what I found about that controller at storagereview.com:

http://www.storagereview.com/supermicro_lsi_sas3008_hba_review

Host Bus: x8 lane, PCI Express 3.0 compliant
PCI Data Burst Transfer Rate: Half Duplex x8, PCIe 3.0, 8000MB/s

8000 MB/s is consistent with the PCIe 3.0 spec:
8 Gbps / 8.125 bits per byte = ~1 GB/s per lane (128b/130b encoding puts 8.125 raw bits on the wire for every data byte).
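
A quick sketch to double-check (illustrative Python; note the datasheet's 8000 MB/s is the raw x8 rate before encoding overhead):

    # 8 Gbps per lane on the wire; 128b/130b encoding costs 8.125 raw bits
    # for every byte of data actually delivered.
    per_lane_gb_s = 8 / 8.125               # ~0.985 GB/s usable per lane
    lanes = 8
    print(8 * lanes / 8)                    # 8.0 GB/s raw -- the '8000MB/s' figure
    print(round(per_lane_gb_s * lanes, 2))  # ~7.88 GB/s actually usable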

And one of the photos at that review shows three of these LSI model 3008 controllers.

Does that mean those 3 controllers are using 24 PCIe lanes?

The motherboard manual mentions the Intel C612 PCH chipset.

I'll see what I can find about that C612 chipset.

June 10, 2015 | 08:38 PM - Posted by MRFS (not verified)

Intel® C612 Chipset
(Intel® DH82029 PCH)

http://ark.intel.com/products/81759/Intel-DH82029-PCH

PCI Express Revision Gen 2 <--- CAN THIS BE CORRECT ??

June 10, 2015 | 10:44 AM - Posted by MRFS (not verified)

> PCI-E switches on the MBs again

Could those be mounted on PCIe NVMe add-on RAID controllers, instead of directly on the motherboard?

June 10, 2015 | 11:20 AM - Posted by willmore (not verified)

They could, but that would only allow downstream devices to share upstream bandwidth. So, if you have 4 lanes of v3 PCI-E bandwidth and two drives with similar connections, you're not going to see double the read speed from the drives--unless the drives only used half of that BW to start with. If you are copying from one drive to the other, then you'll be in a better situation, as PCI-E has separate lanes for up and down.
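
Roughly, in numbers (an illustrative Python sketch assuming a x4 Gen3 uplink and both drives reading at once):

    uplink_gb_s = 3.94            # usable PCIe 3.0 x4 bandwidth into the switch
    drives = 2
    print(uplink_gb_s / drives)   # ~1.97 GB/s per drive -- not double anything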

A card that fits in a 4x PCI-E v3 slot and provides 2-4 of these 4x PCI-E v3 U.2 connectors would be useful, but it would only be a stopgap until more lanes become available from the CPU.

June 10, 2015 | 07:33 PM - Posted by MRFS (not verified)

Excellent analysis! Thanks. It's possible that PCIe hardware designers are banking on PCIe 4.0's 16 GT/s signaling, which will double upstream bandwidth "across the board", relieving some of the pressure to increase the total number of PCIe lanes:

https://www.pcisig.com/news_room/faqs/FAQ_PCI_Express_4.0/#EQ3

e.g.

"Q: What are the results of the feasibility testing for the PCIe 4.0 specification?

A: After technical analysis, the PCI-SIG has determined that 16 GT/s on copper, which will double the bandwidth over the PCIe 3.0 specification, is technically feasible at approximately PCIe 3.0 power levels."
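
Worked out as a rough sketch (Python; assumes the 128b/130b encoding carries over to PCIe 4.0):

    def lane_gb_s(gt_per_s, encoding=128/130):
        """Usable GB/s per lane after line-encoding overhead."""
        return gt_per_s * encoding / 8

    for gen, rate in (("3.0", 8.0), ("4.0", 16.0)):
        print(f"PCIe {gen} x4: {lane_gb_s(rate) * 4:.2f} GB/s")
    # PCIe 3.0 x4: 3.94 GB/s
    # PCIe 4.0 x4: 7.88 GB/s -- double, as the FAQ says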

June 10, 2015 | 04:55 AM - Posted by Prodeous

Any more pins and my IDE cables can be used again.

The return of PATA :)

June 10, 2015 | 08:27 AM - Posted by JohnGR

"Not only are there an additional 7 pins between the repurposed SATA data and power pins"

Allyn, 6 pins

June 12, 2015 | 07:59 AM - Posted by Anonymous (not verified)

From what I can tell from the SFF-8639 spec sheet (if anything has changed for U.2, it's not in any public document), this ONLY covers the drive end of things. What connector ends up at the other end of the cable on the motherboard is undefined. Could be HD Mini SAS as in the current M.2 interposer solution, could be something else entirely.

June 12, 2015 | 01:11 PM - Posted by MRFS (not verified)

> What connector ends up at the other end of the cable on the motherboard is undefined. Could be HD Mini SAS as in the current M.2 interposer solution, could be something else entirely.

Possibly a connector like the one on this LSI SAS 3008 HBA:

http://supremelaw.org/systems/lsi/LSI.SAS3008.HBA.JPG

Source:
http://www.storagereview.com/supermicro_lsi_sas3008_hba_review

June 19, 2015 | 02:54 PM - Posted by MRFS (not verified)

repeating from the other thread:

https://forums.servethehome.com/index.php?threads/nvme-2-5-sff-drives-wo...

MRFS
