Intel SSD 910 Series 800GB PCIe SSD First Look

Subject: Storage
Manufacturer: Intel
Tagged: ssd, pcie, Intel, 910, 800gb

Background and Internals

A little over two weeks back, Intel briefed me on their new SSD 910 Series PCIe SSD. Since that day I've been patiently awaiting its arrival, which happened just a few short hours ago. I've burned the midnight oil to get some additional details out there. Before we get into the goods, here's a quick recap of the specs for the 800GB (or 400GB) model:

"Performance Mode" is a feature that can be enabled through the Intel Data Center Tool Software. This feature is only possible on the 800GB model, but not for the reason you might think. The 400GB model is *always* in Performance Mode, since it can go full speed without drawing greater than the standard PCIe 25W power specification. The 800GB model has twice the components to drive yet it stays below the 25W limit so long as it is in its Default Mode. Switching the 800GB model to Performance Mode increases that draw to 38W (the initial press briefing stated 28W, which appears to have been a typo). Note that this increased draw is only seen during writes.

Ok, now into the goodies:

Behold the 800GB SSD 910!

The four capacitors pictured are aluminum electrolytic "V" Type "FK" Series parts. Each is rated at 330µF, and each appears to be routed to its own power converter circuit, which in turn drives one of the four 200GB SAS SSD units.

A side profile of the 910 shows the stacked layout, which I only got to look at long enough to take this photo before the screwdriver came out:

The top two PCBs contain nothing but flash, while the bottom PCB holds the four SAS SSD controllers and the LSI SAS HBA (hidden under the heatsink, which I opted not to remove considering I hadn't even fired up the 910 for the first time yet):

Each SAS controller gets a fair chunk of DDR RAM (bottom right), while the LSI HBA gets a little to itself as well (center left).

The connectors mating each flash memory PCB to the main board are fairly stout:

And finally, the backs of the three PCBs for your viewing pleasure. Power converters and additional RAM for the controllers line the bottom of the main board, while the large chip in the center holds the firmware for the LSI SAS HBA.

Continue on page two for more pictures and preliminary benchmarks.

April 27, 2012 | 12:52 PM - Posted by Compton (not verified)

It's true that on an 1155 mainboard/CPU combo you should keep all available PCIe bandwidth for the 910. But over at The SSD Review they were testing it on an X79 with 40 PCIe lanes rather than 16.

April 27, 2012 | 08:11 PM - Posted by Allyn Malventano

Re-tested using the same QD=64 and saw the same result. Updated the piece with that analysis. Thanks!

April 27, 2012 | 01:11 PM - Posted by Eastside (not verified)

You are comparing your 4k random write speeds to their posted 4k random read speeds. Their read results are actually higher than your posted results, and they did not post their 4k write results.

April 27, 2012 | 08:11 PM - Posted by Allyn Malventano

You're absolutely correct! I've fixed this just now.

April 27, 2012 | 10:02 PM - Posted by Eastside (not verified)

When you are comparing the LUN performance, are you using these as individual volumes, or are they in a RAID configuration? So with 4 LUNs, is that a 4-drive RAID 0, or 4 separate volumes being accessed simultaneously?

April 28, 2012 | 08:35 AM - Posted by Allyn Malventano

The ATTO run was using standard Windows RAID-0 for the 4 LUNs combined. The IOMeter run accessed the tested LUNs simultaneously in RAW form. The latter was done to properly evaluate IOPS scaling of the LSI HBA without adding variables caused by the Windows RAID layer.
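
To illustrate what I mean by IOPS scaling, here's a rough sketch of the comparison with made-up numbers (not results from this review): sum the per-LUN IOPS from the simultaneous raw run and compare the total against a single-LUN baseline.

def scaling_efficiency(per_lun_iops, single_lun_iops):
    """Aggregate IOPS from a simultaneous run divided by perfect linear scaling."""
    ideal = single_lun_iops * len(per_lun_iops)
    return sum(per_lun_iops) / ideal

# Hypothetical per-LUN results from a 4-LUN simultaneous raw run.
simultaneous = [45000.0, 44500.0, 44800.0, 44650.0]
baseline = 45200.0  # hypothetical single-LUN result
print(f"aggregate: {sum(simultaneous):.0f} IOPS, "
      f"scaling efficiency: {scaling_efficiency(simultaneous, baseline):.1%}")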

April 29, 2012 | 01:12 AM - Posted by Paul Alcorn (not verified)

Therein lies the issue with the max latency. The results that we posted on thessdreview.com were from a RAID 0 of the four volumes.
Under tests similar to yours, with the same parameters, the results come in at 29310.72 IOPS, 895.74 MB/s (binary), an average response time of 1.116263 ms, and a maximum response time of 2.170227 ms. CPU utilization is 11.39%.
The higher maximum latency reported from RAID 0 is indicative of typical RAID overhead with Windows. These were cursory benchmarks, run before the SSD went into an automated test regimen.
Of note: the maximum latency is the single I/O that requires the longest time to complete. If there is a correlation between a very high maximum latency and an overall higher average latency, that can be indicative of a device/host issue. Even with the RAID result kicking out an appreciably higher maximum latency, that result would have to be in conjunction with higher overall latency to indicate a serious problem.
The SLI GPUs are rarely used during bench sessions, unless we are doing 3D benchmarks. They are on an entirely separate loop, allowing them to be used or removed easily. During all testing thus far, we have used a 9800 GT as the video card.
No worries, the X79 Patsburg (C600) chipset is designed for servers and high-end workstations. Plenty of bandwidth there.

April 30, 2012 | 08:11 PM - Posted by Allyn Malventano

Actually, if you were testing a single RAID volume with 4 workers and QD=64, you were really testing at QD=256, which might have upped the latency. From the Iometer User's Guide:

---
9.4 # of Outstanding I/Os

The # of Outstanding I/Os control specifies the maximum number of outstanding asynchronous I/O operations per disk the selected worker(s) will attempt to have active at one time. (The actual queue depth seen by the disks may be less if the operations complete very quickly.) The default value is 1.

Note that the value of this control applies to each selected worker and each selected disk.
---
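
In other words, the setting multiplies across workers and targets. Here's a minimal sketch of that arithmetic using the figures discussed in this thread:

def effective_queue_depth(workers, targets_per_worker, outstanding_ios):
    """Upper bound on in-flight I/Os: the setting applies per worker and per target."""
    return workers * targets_per_worker * outstanding_ios

# 4 workers, each aimed at the single RAID volume, with the field set to 64:
print(effective_queue_depth(4, 1, 64))   # 256, not 64
# 4 workers at 16 outstanding I/Os each gives the intended overall QD of 64:
print(effective_queue_depth(4, 1, 16))   # 64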

 

May 1, 2012 | 11:48 PM - Posted by Paul Alcorn (not verified)

It was with 16 QD for each worker; it is elementary that it adds up to 64. The reason we stated that the QD was 64 is that 16 × 4 = 64. When listing results, they are typically listed as the overall QD, with the number of workers noted.
If there were an overall issue with the latency, it would show in the average latency measurement, which is actually slightly lower than your results. Did you receive the email with the Iometer results that I sent you?

May 7, 2012 | 11:09 PM - Posted by Paul Alcorn (not verified)

Our full review is up, you should give it a glance, Allyn: http://thessdreview.com/our-reviews/intel-910-pcie-ssd-review-amazing-pe...
