Intel Expands Optane SSD 905P HHHL AIC and U.2 to 1.5TB Capacity + Quick Look Review

Subject: Storage
Manufacturer: Intel
Tagged: U.2, ssd, Optane, Intel, HHHL, AIC, 905P


Intel just sent over a note that they have officially launched the 1.5TB capacity for the Optane SSD 905P (for both HHHL and U.2 form factors). We'd been expecting this for a while now, considering we had tested a full system incorporating the U.2 version of this very capacity two months ago. That system has now been given away, but I borrowed the SSD while Ken was tearing down the system for his review. With the product now officially launched, I thought it appropriate to take a quick look at this higher capacity part, both inside and out.



Seven packages on one side of a single PCB. This is unexpected for a U.2 SSD, which usually has some sort of folded-over PCB sandwich that doubles the available area for packages. A single PCB is an odd find here given the large 1.5TB capacity combined with XPoint dies that hold only 16GB each.


Seven more packages along with the now-standard XPoint controller. No DRAM is necessary because, well, XPoint can easily pull double duty in that respect. Alright, so we have 1.5TB spread across only 14 packages. Across every Intel SSD we have ever laid our hands on for review, we've never seen *any* product (NAND or 3D XPoint) stack more than 4 dies per package. Had Intel stuck with that limit here, we would have a maximum raw media capacity of only 896GB (14 packages x 4 dies x 16GB per die). This is a 1.5TB SSD, so the only possible answer is that we apparently have the first 8-die-per-package SSD to come out of Intel.
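For the curious, the package math above can be sketched as a quick back-of-the-envelope check. The die capacity and package count are the figures cited in this article, not official Intel specifications:

```python
# Sanity check of the raw capacity math (figures as cited in the article).
DIE_CAPACITY_GB = 16   # each 3D XPoint die holds 128Gb = 16GB
PACKAGES = 14          # 7 packages per side of the single PCB

# Intel's previous observed limit of 4 dies per package falls short:
raw_4_stack = PACKAGES * 4 * DIE_CAPACITY_GB
print(raw_4_stack)     # 896 GB -- not enough for a 1.5TB drive

# An 8-die stack per package gets us there:
raw_8_stack = PACKAGES * 8 * DIE_CAPACITY_GB
print(raw_8_stack)     # 1792 GB raw, enough to yield 1.5TB usable
```

The gap between 1792GB raw and 1.5TB usable leaves room for spare area and overprovisioning, consistent with what we see on other Optane parts.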

Read on for the test results!


We were not expecting any groundbreaking results here compared to the previous 905P, so I'll quickly step through some comparative results just to make sure nothing has changed. Details of our testing process are available here.


Random workload performance was identical to that of the 960GB capacity. Within 0.1%, anyway.


The story was similar with sequential performance.


...and again, identical performance in our mixed workload test.

Pricing and Conclusion

Pricing is listed as 'currently unavailable' for this new capacity, but an early number we saw in passing was $2199. That works out to $1.47/GB, which is a bit on the high side compared with the lower capacity models we have previously tested.
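The cost-per-gigabyte figure is easy to verify. Note the $2,199 price is only the early number mentioned above, not official Intel pricing:

```python
# Quick check of the $/GB figure (price is the unofficial early number).
price_usd = 2199
capacity_gb = 1500   # 1.5TB, in decimal gigabytes

print(round(price_usd / capacity_gb, 2))  # 1.47
```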

This is the same exact performance we're used to seeing out of the SSD 905P line, now expanded to a higher capacity point. The identical results reinforce one point that I'd like to make: it's probably time for Intel to update the Optane controller to handle greater throughput, because it is clearly not the media speed that is the limiting factor. While Optane dominates in random performance, the controller holds it back, allowing some of the faster NAND-based SSDs to beat it in sequential performance.


Review Terms and Disclosure
All Information as of the Date of Publication
How product was obtained: The product was part of a Falcon Northwest Tiki system that came in for review.
What happens to product after review: This particular SSD went back into the source system and was then part of a giveaway.
Company involvement: Intel had no control over the content of the review and was not consulted prior to publication.
PC Perspective Compensation: Neither PC Perspective nor any of its staff was paid or compensated in any way by Intel for this review.
Advertising Disclosure: Intel has not purchased advertising at PC Perspective during the past twelve months.
Affiliate links: This article contains affiliate links to online retailers. PC Perspective may receive compensation for purchases through those links.
Consulting Disclosure: Intel is not a current client of Shrout Research for products or services related to this review.


September 20, 2018 | 11:37 PM - Posted by chipman (not verified)

No DRAM for an overpriced SSD with esoteric 3D XPoint memory chips... that's a complete joke!

We use DRAM to accelerate the processing of data on non-volatile memory devices (e.g. HDDs), and AFAIK 3D XPoint is way slower than DRAM (cf. access time).

September 21, 2018 | 01:47 AM - Posted by James

That is kind of like complaining that an electric car doesn't have a carburetor. It obviously doesn't need a DRAM cache with how low the latency is on XPoint. It outperforms flash-based SSDs in everything except sequential read, although it is unclear where you would get 2.9 GB/s (or even 2.3 GB/s) to actually sustain writes at that speed for any length of time in a consumer system. The write speed is a little higher on the 970 Pro, but not by much.

September 21, 2018 | 04:15 AM - Posted by chipman (not verified)

A NAND Flash or 3D XPoint cache is stupid since massive writes will rapidly destroy the cache (cf. planned obsolescence), and although 3D XPoint memory chips' access time may be assumed "low" compared to other non-volatile memory technologies, it's still significantly higher than DRAM's.

To be even more idiotic, you could also replace your DRAM modules with 3D XPoint ones... for bad swapping, while multiple accesses to DRAM would be more efficient in the end.

September 22, 2018 | 02:42 PM - Posted by Allyn Malventano

You appear to have some misconceptions about the raw speed of XPoint media. It is way closer to the speed of DRAM than to the speed of NAND. The current bottlenecks are more due to the controller and protocol limitations. Optane DIMMs are already entering the server space, and caching is one of its primary use cases. Use as a cache falls well within the rated lifespan of the product - the Optane Memory (cache-only) modules sold greater than 1 million units last quarter. NAND flash caches also do just fine - they are present in the majority of shipping (TLC) SSDs today.

The other thing you are missing is that while DRAM is 'more efficient', XPoint is cheaper. Not everyone can throw 192GB of DRAM into every server, and some workloads can perform just as well with less DRAM and more Optane. Based on the market today, it seems pretty much the entire planet disagrees with you.

September 26, 2018 | 12:50 AM - Posted by chipman (not verified)

You appear to have a biased opinion (always sponsored by your Intel friend?) about the 3D XPoint technology.

Indeed, 3D XPoint memory's access time is significantly (>= 10x) lower than NAND Flash's, but also significantly HIGHER than DRAM's.

Actually, 3D XPoint memory is not as close to DRAM as it is to NAND Flash. It sits between the two from a performance perspective, but that's all.

The worst part of 3D XPoint is its non-volatile side, which makes it naturally bad for massive writes given its limited write cycles, which is not the case for DRAM.

Replacing DRAM with 3D XPoint is NOT a big deal even at a lower price per gigabyte, since it is significantly slower and definitely destructible.

I am not sure it is really economically efficient to compensate for bad DRAM memory management skills with massive 3D XPoint garbage...

To conclude, the entire planet is NOT restricted to your biased commercial speech, since the server market is a niche compared to the global computer market.

September 28, 2018 | 12:33 PM - Posted by tomatotree (not verified)

"...since the server market is a niche compared to the global computer market"

Huh? The server market is HUGE. Looking just at Intel, their Data Center Group accounted for ~30% of revenue, which, while smaller than the Client Computing Group, is far from insignificant.

More importantly, unlike the client computing market, the server market is rapidly growing. Source:

Your points about XPoint aren't so much wrong as there are just a lot of use cases where those drawbacks don't matter, and the lower cost vs. DRAM makes it attractive. Although I'm not sure why you think XPoint is "naturally bad" at massive writes, since its endurance is still much higher than any NAND-based solution.

September 28, 2018 | 01:22 PM - Posted by Allyn Malventano

Well it’s close enough to DRAM speed to negate the need for DRAM on these products, and it works just fine. Not sure what your beef is with the technology, but you appear way more biased against it than the facts determine. Also, this is a client product, yet you somehow believe that it needs to compete directly with DRAM. If you want to make that argument, do so on the P4800X articles out there.

September 21, 2018 | 02:22 AM - Posted by James

I have been wondering about the current state of Intel VROC. Hopefully you (Allyn) are working on something? I need to replace the NFS server for a small cluster at work. It seems like we should use SSDs for the scratch space. It only needs about 3 or 4 TB of scratch space, accessible from all machines. Consumer SSDs are not going to be sufficient since it could easily write a couple hundred GB an hour when tests are running. It is unclear what the current requirements are for VROC. I was thinking that four 1TB cheaper, but still enterprise, devices in a RAID 1 or 5 would be sufficient. The larger, higher durability drives get ridiculously expensive. This Optane device would have super high durability, but that is way out of the price range. Two of those might be over $4000. It also would be massive overkill. We might just need to get hard drives, although we don't really need the space. Without the SSDs, I probably could stand to get at least 128 GB of memory, which will be expensive. More would be better, but I assume that going to more than 128 GB would be a big cost increase.

September 21, 2018 | 10:46 PM - Posted by Paul A. Mitchell (not verified)

James, if I were you, I would contact Samsung with a few questions about their current enterprise-class M.2 NVMe SSDs. Samsung already understands their real competition, and they are marketing devices that take Intel's high prices into consideration e.g.:
"With speedy response times, SM963 is the perfect enterprise SSD for Write-Intensive data centers. MLC NAND technology enhances reliability at 3.6 DWPD. Our offerings of the compact M.2 form factor or the 2.5” form factor allow for significantly space-efficient server designs."

September 22, 2018 | 01:35 PM - Posted by HolyGrailsAgainMaybe (not verified)

Allyn, what do you think about this IP(1)?

The Wikichip Fuse article states this:

"The diagram above should give you a good idea of the kind of characteristics you can expect from NRAM. It’s very fast with a 5ns or less read/write access. That makes it fast enough to replace DRAM and possibly find interesting uses as another level of cache (low-level caches are a bit faster at 0.5ns). It’s got a data retention of around 12,000 years or under the worst condition (>300 °C) down to a few 100s making it ideal for harsh conditions such as automotive. Nantero reports >10 trillions of cycles for endurance and the energy required to change the state of the cell is 5 fJ/bit, matching or beating DRAM (usually 5-7 fJ/bit). From a cost perspective, NRAM is also cheaper than DRAM (16Gb for $6 on 28nm, discussed later). By the way, if all of that wasn’t enough, NRAM is also particularly suited for aerospace application because of its alpha particles immunity.

It’s interesting to point out that Nantero doesn’t really see Intel’s Optane and similar technologies as direct competition. Instead, they view them as complementary technologies. This isn’t too surprising considering the performance characteristics of NRAM." (1) [See page 2 of the Wikichip Fuse article]

"Nantero’s NRAM, A Universal Memory Candidate?"

September 22, 2018 | 02:51 PM - Posted by Paul A. Mitchell (not verified)

This is also a very interesting claim:

"Nantero is currently working on their first reference product -- a drop-in DRAM replacement. And they really mean a drop-in replacement -- down to the exact JEDEC specs -- performance, timing, power, and unlimited endurance requirements."

Looking at that claim from the perspective of an OS, it now seems feasible to enable INSTANT-ON capabilities once the OS is loaded into memory, after a cold start and after POST (Power On Self-Test).

This could be enabled as a variation on HIBERNATION, perhaps by eliminating the need to write regions of RAM to non-volatile storage. After a monitor is powered OFF, and after spinning HDDs are spun down, the residual 5V current that supports WAKE-ON-LAN may be all that's required to support such an INSTANT-ON capability.

September 22, 2018 | 06:51 PM - Posted by ThisNewMemoryTechnologyHasManyAdvantagesAndRisksAlso (not verified)

Instant On is not really that important compared to the other implications of said technology. This has a much better usage model for many non-consumer usage scenarios than consumer-only usage, and my 5-year-old Ivy Bridge/spinning rust laptop starts up just fine. I'll have nothing to do with always-on, because in the consumer space always-on equates to always spied upon!

Let's keep it at better Storage Class Memory/better memory in general, with better data retention should something go wrong, and keep the filthy ad pushers and the retailers at a distance on consumer kit if at all possible.

If someone cannot multi-task while their PC/laptop is starting up, then the problem is with the human and not the device. That consumer stuff needs to remain on phones and tablets as far as some folks are concerned.

This IP, if it works out, will be great for the server and research folks working on big datasets for many uses, and I'd rather have some things go away intentionally on power down, like encryption keys and other data that needs to remain private. So I hope there can be some human-initiated Power Down With Memory Wipe settings, as this new carbon nanotube memory does not require refresh to retain its state! And private data on any Storage Class DIMMs that can be removed from a motherboard while still retaining its state is going to become a big security issue if this new IP becomes widespread!

I'd rather see some form of DIMM protection that zeros out the DIMMs on any unauthorized removal, or that the memory be required to be always encrypted! I prefer the zeroing method, as encryption can be broken.

This memory IP, if the entire article has been read, also has some latency advantages, as there are no memory refresh cycles to work around that can disrupt lower-latency read/write traffic; even error correction has to be scheduled around refresh cycles, among other memory-related tasks affected by refresh.

I'd still rather focus on the technical advantages the new memory IP has over regular DRAM than on any consumer-related end usage at the moment. How this relates to OS and application latency, other technical issues, and even security-related issues is more important at the moment.

September 24, 2018 | 11:44 AM - Posted by Paul A. Mitchell (not verified)

All good points: THANKS!

September 26, 2018 | 07:48 AM - Posted by Stef (not verified)

Hi Allyn, NAND likes operating at high temp; does phase-change memory exhibit the same behavior?
