Micron launches M600 SATA SSD with innovative SLC/MLC Dynamic Write Acceleration

Subject: Storage, Shows and Expos | September 16, 2014 - 02:29 PM |
Tagged: ssd, slc, sata, mlc, micron, M600, crucial

You may already be familiar with the Micron Crucial M550 line of SSDs (if not, familiarize yourself with our full capacity roundup here). Today Micron is pushing their tech further by releasing a new M600 line. The M600s are the first full lineup from Micron to use their 16nm flash (previously found only in the MX100 line). Aside from the die shrink, Micron has addressed the glaring issue we noted in our M550 review: the sharp falloff in write speeds at the lower capacities of that line. Their solution is rather innovative, to say the least.

Recall the Samsung 840 EVO's 'TurboWrite' cache, which gave that drive a burst of write speed during short sustained writes. The 840 EVO accomplished this by setting aside a small SLC section of flash on each TLC die. All written data passed through this cache, and once it was full (a few GB, varying with drive capacity), write speed dropped to TLC levels until the host system stopped writing long enough for the SSD to flush the cached data from SLC to TLC.

high_res_M600D_form_factors_1.jpg

The Micron M600 SSD in 2.5" SATA, MSATA, and M.2 form factors.

Micron flips the 'typical' concept of caching on its head. The M600 does employ two different types of flash writing (SLC and MLC), but the first big difference is that the SLC is not really a cache at all - not in the traditional sense, at least. The M600 controller, coupled with some changes made to Micron's 16nm flash, is able to dynamically change the mode of each flash memory die *on the fly*. For example, the M600 can place most of the individual 16GB (MLC) dies into SLC mode when the SSD is empty. This halves the capacity of each die, but with the added benefit of much faster and more power-efficient writes. As a result, the M600 performs more like an SLC-only SSD so long as it is kept less than half full.
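
To make that capacity math concrete, here is a minimal sketch (in Python, with an invented die count and a greedy policy - Micron's actual firmware logic is not public) of how much SLC-mode area could remain available as user data grows:

```python
# Rough sketch of the Dynamic Write Acceleration capacity trade-off (illustrative only).
# Assumption: a 256GB-class drive built from sixteen 16GB MLC dies; a die switched to
# SLC mode holds half as much (8GB) but writes much faster.

DIE_MLC_GB = 16               # capacity of one die in MLC mode
DIE_SLC_GB = DIE_MLC_GB / 2   # the same die in SLC mode stores half as much
NUM_DIES = 16                 # hypothetical 256GB drive

def max_slc_dies(user_data_gb):
    """Largest number of dies that can stay in SLC mode while the drive
    still has room for user_data_gb across all dies."""
    for slc in range(NUM_DIES, -1, -1):
        mlc = NUM_DIES - slc
        if slc * DIE_SLC_GB + mlc * DIE_MLC_GB >= user_data_gb:
            return slc
    return 0

for used in (0, 64, 128, 192, 240):
    slc = max_slc_dies(used)
    print(f"{used:>3} GB stored -> {slc:2d} dies in SLC mode "
          f"({slc * DIE_SLC_GB:.0f} GB of fast SLC-mode area)")
```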

M600-1.png

As you fill the SSD towards (and beyond) half capacity, the controller incrementally clears the SLC-written data, moving it onto dies configured in MLC mode. Once an SLC-mode die is empty, it is switched over to MLC mode, freeing more flash area for the growing amount of user data stored on the SSD. This process repeats as the drive fills, meaning you will see less SLC area available for accelerated writing (see chart above). Writing to the SLC area is also advantageous in mobile devices, as those writes not only complete more quickly, they also consume less power in the process:

M600-2.png

For those worst case / power user scenarios, here is a graph of what a sustained sequential write to the entire drive area would look like:

M600-3.png

This is not typical usage, but if it happened, you would see SLC speeds for the first ~45% of the drive, followed by MLC speeds for another 10%. After the 65% point, the drive is forced to start clearing SLC and flipping dies over to MLC while the host write is still in progress, resulting in the relatively slow write speed (~50 MB/sec) seen above. In normal use (i.e. not filling the entire drive at full speed in one go), garbage collection can rearrange data in the background during idle time, so write speeds should sit near full SLC speed the vast majority of the time. Even with the SSD nearly full, there should be at least a few GB of SLC-mode flash available for short bursts of SLC-speed writes.
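
For a rough sense of scale, here is a back-of-the-envelope estimate (in Python) of that worst-case full-drive fill, using the approximate phase boundaries above and the 400 MB/sec SLC / 190 MB/sec MLC figures quoted further down for the 128GB class, plus the ~50 MB/sec folding speed from the graph description. These are illustrative assumptions, not measurements:

```python
# Back-of-the-envelope worst-case fill time for a hypothetical 128GB M600-class drive,
# using the phase boundaries described above and the speeds quoted later in this post.
# Purely illustrative: real behaviour depends on firmware policy and flash condition.

CAPACITY_GB = 128
phases = [
    # (fraction of drive, assumed MB/s)
    (0.45, 400),   # SLC-mode writes for roughly the first 45%
    (0.10, 190),   # direct MLC-mode writes for the next ~10%
    (0.45, 50),    # forced SLC-to-MLC folding while the host keeps writing
]

total_seconds = 0.0
for fraction, speed_mb_s in phases:
    data_mb = CAPACITY_GB * 1024 * fraction
    total_seconds += data_mb / speed_mb_s

print(f"Estimated worst-case full-drive fill: {total_seconds / 60:.1f} minutes")
```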

This caching has enabled some increased specs over the prior generation models:

M600-4.png

M600-5.png

Note the differences in write speeds, particularly in the lower capacity models. The 128GB M550 was limited to 190MB/sec, while the M600 can write at 400MB/sec in SLC mode (which is where it should sit most of the time).

We'll be testing the M600 shortly and will come back with a full evaluation of the SSD as a whole and more specifically how it handles this new tech under real usage scenarios.

Full press blast after the break.

Source: Micron

IDF 2014 Storage Roundup - RAM and NVMe and IOPS! Oh my!

Subject: Storage, Shows and Expos | September 16, 2014 - 12:49 PM |
Tagged: ram, NVMe, IOPS, idf 2014, idf, ddr4, DDR

The Intel Developer Forum was last week, and there were many things to be seen for sure. Mixed in with all of the wearable and miniature technology news, there was a sprinkling of storage goodness. Kicking off the show, we saw new cold storage announcements from both HGST and Western Digital, but that was about it for HDD news, as the growing trend these days is with solid state storage technologies. I'll start with RAM:

First up was ADATA, who were showing off 64GB DDR3 (!) DIMMs:

DSC05446.JPG

Next up were various manufacturers pushing DDR4 technology quite far. First were SK Hynix's TSV-based 128GB DIMMs (covered in much greater depth last week):

DSC05415.JPG

Next up was Kingston, who were showing a server chassis equipped with 256GB of DDR4:

DSC05460.JPG

If you look closer at the stats, you'll note there is more RAM in this system than flash:

DSC05462.JPG

Next up was IDT, who were showing off their LRDIMM technology:

DSC05454.JPG

This technology adds special data buffers to the DIMM modules, enabling significantly more installed RAM in a single system, with a 1-2 step de-rating of clock speeds as you push capacities to the far extremes. The above server has 768GB of DDR4 installed and running:

DSC05455.JPG

Moving on to flash memory, Scott covered Intel's new 40 Gbit Ethernet technology last week. At IDF, Intel had a demo showing off some of the potential of these new, faster links:

DSC05430.JPG

This demo used a custom network stack that allowed a P3700 in a local system to be matched in IOPS by an identical P3700 *being accessed over the network*. Both local and networked storage turned in the same 450k IOPS, with the remote link adding only 8ms of latency. Here's a close-up of one of the SFF-8639 (2.5" PCIe 3.0 x4) SSDs and the 40 Gbit network card above it (low speed fans were installed in these demo systems to keep some air flowing across the cards):

DSC05438.JPG

Stepping up the IOPS a bit further, Microsoft was showing off the capabilities of their inbox NVMe driver, shown here driving a pair of P3700s at a total of 1.5 million IOPS:

DSC05445.JPG

...for those who want to get their hands on this 'Inbox driver', guess what? You already have it! "Inbox" is Microsoft's way of saying the driver is 'in the box', meaning it comes with Windows 8. Bear in mind you may get better performance with manufacturer-specific drivers, but it's still a decent showing for a default driver.

Now for even more IOPS:

DSC05441.JPG

Yes, you are reading that correctly. That screen is showing a system running over 11 million IOPS. Think it's RAM? Wrong. This is flash memory pulling those numbers. Remember the 2.5" P3700 from a few pics back? How about 24 of them:

DSC05443.JPG

The above photo shows three 2U systems (bottom), all connected to a single 2U flash memory chassis (top). The top chassis supports three submodules, each with eight SFF-8639 SSDs. The system, assembled by Newisys, demonstrates just how much high-speed flash you can fit within an 8U space. The main reason for connecting three systems to one flash chassis is that it takes those three hosts to process the full IOPS capability of 24 low-latency NVMe SSDs (that's 96 lanes of PCIe 3.0 in total!).
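
As a quick sanity check on those figures (using the ~450k IOPS per drive from the earlier P3700 demo as an assumption for this rig):

```python
# Quick sanity check on the Newisys demo figures quoted above.
drives = 24
iops_per_drive = 450_000   # assumption: roughly the per-P3700 figure from the network demo
lanes_per_drive = 4        # each SFF-8639 P3700 is PCIe 3.0 x4

print(f"Aggregate IOPS  : {drives * iops_per_drive / 1e6:.1f} million")  # ~10.8 million
print(f"Total PCIe lanes: {drives * lanes_per_drive}")                   # 96 lanes of PCIe 3.0
```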

So there you have it, IDF storage tech in a nutshell. More to come as we follow these emerging technologies to their maturity.

Intel Loves Exponential Trends: Shrinking Mini-PCs

Subject: General Tech, Cases and Cooling, Systems, Shows and Expos | September 12, 2014 - 02:20 PM |
Tagged: idf, idf 2014, nuc, Intel, SFF, small form factor

A few years ago, Intel introduced the NUC line of small form factor PCs. At this year's IDF, they announced plans for even smaller, and cheaper, specifications intended for OEMs to install Windows, Linux, Android, and Chrome OS on. The initiative is not yet named; unlike the more user-serviceable NUC, it will consist of mostly soldered components, leaving basically just the wireless adapters user-replaceable.

intel-idf-mini-pc.jpg

Image Credit: Liliputing

Being the owner of Moore's Law, Intel just couldn't help but fit this to some type of exponential curve. While it is with respect to generation rather than time, Intel expects the new, currently unnamed form factor to halve both the volume (size) and bill of materials (BOM) cost of the NUC. The company then said that another generation after that ("Future SFF") will halve the BOM cost again, to a quarter of the NUC's.

What do our readers think? Would you be willing to give up socketed components for smaller and cheaper devices in this category or does this just become indistinguishable from mobile devices (which we already know can be cheap and packed into small spaces)?

Source: Liliputing

IDF 2014: Through Silicon Via - Connecting memory dies without wires

Subject: Storage, Shows and Expos | September 10, 2014 - 03:34 PM |
Tagged: TSV, Through Silicon Via, memory, idf 2014, idf

If you're a general computer user, you might have never heard the term "Through Silicon Via". If you geek out on photos of chip dies and wafers, and how chips are assembled and packaged, you might have heard about it. Regardless of your current knowledge of TSV, it's about to be a thing that impacts all of you in the near future.

Let's go into a bit of background first. We're going to talk about how chips are packaged. Micron has an excellent video on the process here:

The part we are going to focus on appears at 1:31 in the above video:

die wiring.png

This is how chip dies are currently connected to the outside world. The dies are stacked (four high in the above pic) and a machine has to individually wire them to a substrate, which in turn communicates with the rest of the system. As you might imagine, things get more complex with this process as you stack more and more dies on top of each other:

chip stacking.png

16 layer die stack, pic courtesy NovaChips

...so we have these microchips with extremely small features, but to connect them we are limited to a relatively bulky process (called package-on-package). Stacking these flat planes of storage is a tricky thing to do, and you naturally want to limit how many of those wires you need to connect. The catch is that those wires also equate to available throughput from the device (i.e. one wire per bit of the data bus). So, just how can we improve this method and increase data bus widths, throughput, etc.?

Before I answer that, let me lead up to it by showing how flash memory has just taken a leap in performance. Samsung has recently made the jump to VNAND:

vnand crop--.png

By stacking flash memory cells vertically within a die, Samsung was able to make many advances in flash memory, simply because they had more room within each die. Because of the complexity of the process, they also had to revert to an older (larger) feature size. That compromise means the capacity of each die is similar to current 2D NAND, but the new process brings speed, longevity, and power-consumption advantages.

I showed you the VNAND example because it bears a striking resemblance to what is now happening in the area of die stacking and packaging. Imagine if you could stack dies by punching holes straight through them and making the connections directly through the bottom of each die. As it turns out, that's actually a thing:

tsv cross section.png

Read on for more info about TSV!

IDF 2014: Skylake Silicon Up and Running for 2H 2015 Release

Subject: Shows and Expos | September 9, 2014 - 05:27 PM |
Tagged: Skylake, Intel, idf 2014, idf, 14nm

2015 is shaping up to be an interesting year for Intel's consumer processor product lines. We still expect to see Broadwell make some kind of debut in socketed form in addition to the mobile releases trickling out beginning this holiday, but it looks like we will also get our first taste of Skylake late next year.

skylake1.jpg

Skylake is Intel's next microarchitecture and will be built on the same 14nm process technology currently shipping with Broadwell-Y. Intel stated that it expects to see dramatic improvements in all areas of measurement including performance, power consumption and silicon efficiency.

On stage, the company demoed Skylake running the 3DMark Fire Strike benchmark, though without providing any kind of performance result (obviously). That graphics demo was running on an engineering development board and platform, and though it looked incredibly good from where we were sitting, we can't make any guess as to performance quite yet.

skylake3.jpg

Intel then surprised us by bringing a notebook out from behind the monitor, showing Skylake up and running in a mobile form factor, decoding and playing back 4K video. Once again, the demo was smooth and impressive, though you would expect nothing less from an overly rehearsed keynote.

skylake2.jpg

Intel concluded that it was "excited about the health of Skylake" and that they should be in mass production in the first quarter of 2015 with samples going out to customers. Looking even further down the rabbit hole the company believes they have a "great line of sight to 10nm and beyond." 

Even though details were sparse, it is good news for Intel that they are willing to show Skylake so early, and yet I can't help but worry about a potentially shorter-than-expected life span for Broadwell in the desktop space. Mobile users will find the increased emphasis on power efficiency a big win for thin and light notebooks, but enthusiasts are still on the lookout for a new product to really drive performance up in the mainstream.

IDF 2014: Western Digital announces new Ae HDD series for archival / cold storage

Subject: Storage, Shows and Expos | September 9, 2014 - 04:51 PM |
Tagged: WDC, Western Digital, WD, idf 2014, idf, hdd, Cold, Archival, Ae

We talked about helium-filled, shingled HDDs from HGST earlier today. Helium may give you reduced power demands, but at the added expense of hermetically sealed enclosures compared to conventional HDDs. Shingling may give added capacity, but at the expense of being forced into specific writing methods. Now we know Western Digital's angle on archival / cold storage:

WD_AE_PRN.jpg

...so instead of going with newer, higher-cost technologies, WD is taking their consumer products and making them more robust. They are also getting rid of conventional capacity increments and moving to 100GB steps. The idea is that once a large company or distributor has qualified a specific HDD model on their hardware, that model will stick around for a while, but will continue at increased capacities as platter density yields improve over time. WD has also told me that capacities may even be mixed and matched within a 20-box of drives, so long as the average capacity matches the box label (a quick sketch of that averaging rule follows the list below). This works in the field of archival / cold storage for a few reasons:

  • Archival storage systems generally do not use conventional RAID (where an entire array of matching-capacity disks spins simultaneously). Drives are spun up and written to individually, or spun up individually to service the occasional read request. This saves power overall, and it also means the individual drives can vary in capacity with no ill effects.
  • Allowing for variable capacity binning helps WD ship more usable platters/drives overall (i.e. not rejecting drives that can't meet 6TB). This should drive overall costs down.
  • Increasing capacity by only a few hundred GB per drive turns into *huge* differences in cost when you scale that difference up to the number of drives you would need to handle a very large total capacity (i.e. Exabytes).
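
Here is a trivial Python sketch of that 20-box averaging rule. The individual capacities below are hypothetical; only the "average must match the label" rule comes from WD's description above:

```python
# Hypothetical 20-box of Ae drives sold under a 6.0TB label. Individual capacities vary
# in 100GB steps, but (per WD's description) the average must meet the label.

BOX_LABEL_TB = 6.0
drive_capacities_tb = [5.9, 6.0, 6.1, 6.0, 5.9, 6.1, 6.2, 6.0, 5.9, 6.0,
                       6.1, 6.0, 6.0, 5.9, 6.1, 6.0, 6.2, 5.9, 6.0, 6.0]

average = sum(drive_capacities_tb) / len(drive_capacities_tb)
print(f"Average capacity: {average:.3f} TB "
      f"({'meets' if average >= BOX_LABEL_TB else 'falls short of'} the {BOX_LABEL_TB} TB label)")
```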

So the idea here is that WD is choosing to stick with what they do best, which they can potentially do even more cheaply than their consumer products. That said, this is really meant for enterprise use and not as a way for a home power user to save a few bucks on a half-dozen drives for their home NAS. You really need an infrastructure in place that can handle variable-capacity drives seamlessly. While these drives do not employ SMR to reach greater capacities, that may work out as a bonus, as writes can be performed in a way that all systems are currently compatible with (even though I suspect the drives will be tuned more for sequential write workloads).

Here's an illustration of this difference:

capacity 1.png

The 'old' method meant that drives on the left half of the above bell curve would have to be sold as 5TB units.

capacity 2.png

With the 'new' method, drives can be sold based on a spec closer to their actual capacity yield. For a given model, shipping capacities would increase as time goes on (top to bottom of the above graphic).

To further clarify what is meant by the term 'cold storage' - the data itself is cold, as in rarely if ever accessed:

tiers.png

Examples of this would be Facebook posts / images from months or years ago. That data may be rarely touched, but it needs to be accessible enough to be browsed to via the internet. The few-second spin-up of an archival HDD can handle this sort of thing, while a tape system would take far too long and would likely time out the data request.

WD's Ae press blast after the break.

IDF 2014: HGST announces 3.2TB NVMe SSDs, shingled 10TB HDDs

Subject: Storage, Shows and Expos | September 9, 2014 - 02:00 PM |
Tagged: ssd, SMR, pcie, NVMe, idf 2014, idf, hgst, hdd, 10TB

It's the first day of IDF, so it's only natural that we see a bunch of non-IDF news start pouring out :). I'll kick them off with a few announcements from HGST. First item up is their new SN100 line of PCIe SSDs:

Ultrastar_SN100_Family_CMYK_Master.jpg

These are NVMe-capable PCIe SSDs, available in capacities from 800GB to 3.2TB and in 2.5" (PCIe-based, not SATA) as well as half-height PCIe card form factors.

Next up is an expansion of their HelioSeal (Helium filled) drive line:

10TB_Market_applications_HR.jpg

Through the use of Shingled Magnetic Recording (SMR), HGST can make an even bigger improvement in storage density. This does not come completely free: due to the way SMR writes to the disk, it is primarily meant to be a sequential-write / random-read storage device. Picture roofing shingles, but for hard drives. The tracks are slightly overlapped as they are written to disk. This increases density greatly, but writing to the middle of a shingled section is not possible without potentially overwriting two shingled tracks simultaneously. Think of it as CD-RW writing, but for hard disks. This tech is primarily geared towards 'cold storage', or data that is not actively being written. Think archival data. The ability to still read that data randomly and on demand makes these drives more appealing than retrieving the same data from tape-based archival methods.
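
To illustrate why shingling pushes you toward sequential writes, here is a deliberately simplified zone model (not HGST's firmware) showing how an in-place update in the middle of a shingled region forces everything written after it to be rewritten:

```python
# Deliberately simplified SMR zone model: track N partially overlaps track N+1, so
# rewriting track N in place would clobber its neighbour. An in-place update therefore
# means reading and rewriting every track from N to the end of the zone.

TRACKS_PER_ZONE = 256

def tracks_rewritten(updated_track):
    """Tracks that must be rewritten to update a single track in a full zone."""
    return TRACKS_PER_ZONE - updated_track

for t in (0, 128, 255):
    print(f"Update track {t:3d} -> rewrite {tracks_rewritten(t):3d} tracks")

# Appending at the end of a zone (i.e. writing sequentially) only touches the new track,
# which is why SMR drives are aimed at sequential-write / random-read workloads.
```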

Details on the above releases are scarce at present, but we will keep you posted as they develop.

Full press blast for the SN100 after the break.

Source: HGST

Intel Developer Forum (IDF) 2014 Keynote Live Blog

Subject: Processors, Shows and Expos | September 9, 2014 - 11:02 AM |
Tagged: idf, idf 2014, Intel, keynote, live blog

Today is the beginning of the 2014 Intel Developer Forum in San Francisco!  Join me at 9am PT for the first of our live blogs of the main Intel keynote where we will learn what direction Intel is taking on many fronts!

intelicon.jpg

NVIDIA Announces GAME24: Global PC Gaming Event

Subject: General Tech, Shows and Expos | September 2, 2014 - 05:51 PM |
Tagged: nvidia, game24, pc gaming

At 6PM PDT on September 18th, 2014, NVIDIA and partners will be hosting GAME24. The event will start at that time, all around the world, and finish 24 hours later. The three main event locations are Los Angeles, California, USA; London, England; and Shanghai, China. Four smaller events will be held in Chicago, Illinois, USA; Indianapolis, Indiana, USA; Mission Viejo, California, USA; and Stockholm, Sweden. It will also be live streamed on the official website.

nvidia-game24-2014.png

Registration and attendance are free. If you will be in the area and want to join, sign up. Registration closes an hour before the event, and spots are first-come, first-served. Good luck. Have fun. Good game.

Source: NVIDIA

Tune in this Saturday! Celebrate 30 Years of Graphics and Gaming

Subject: General Tech, Shows and Expos | August 22, 2014 - 04:53 PM |
Tagged: richard huddy, kick ass, amd

amd-radeon-graphics-30-years.png

Join AMD’s Chief Gaming Scientist, Richard Huddy on Saturday, Aug. 23, 2014 at 10:00 AM EDT/7:00 AM PDT to celebrate 30 Years of Graphics and Gaming.  The event will feature interviews with Raja Koduri, AMD’s Corporate VP, Visual Computing; John Byrne, AMD’s Senior VP and General Manager, Computing and Graphics Business Group; and several special guests.   You can also expect new product announcements along with stories covering the history of AMD.  You can watch the twitch.tv livestream below once the festivities kick off!

Watch live video from AMD on www.twitch.tv

There is also a contest for those who follow @AMDRadeon and retweet their tweet of "Follow @AMDRadeon Tune into #AMD30Live 8/23/14 at 9AM CT www.amd.com/AMD30Live – Follow & Retweet for a chance to win! www.amd.com/AMD30Live"

Source: AMD

Khronos Announces "Next" OpenGL & Releases OpenGL 4.5

Subject: General Tech, Graphics Cards, Shows and Expos | August 15, 2014 - 08:33 PM |
Tagged: siggraph 2014, Siggraph, OpenGL Next, opengl 4.5, opengl, nvidia, Mantle, Khronos, Intel, DirectX 12, amd

Let's be clear: there are two stories here. The first is the release of OpenGL 4.5 and the second is the announcement of the "Next Generation OpenGL Initiative". They both appear in the same press release, but they are two different statements.

OpenGL 4.5 Released

OpenGL 4.5 expands the core specification with a few extensions. Compatible hardware with OpenGL 4.5 drivers is guaranteed to support these. This includes features like direct_state_access, which allows objects to be accessed and modified without first binding them to the context, and support for OpenGL ES 3.1 features that are traditionally missing from OpenGL 4, which allows easier porting of OpenGL ES 3.1 applications to OpenGL.

opengl_logo.jpg

It also adds a few new extensions as an option:

ARB_pipeline_statistics_query lets a developer ask the GPU what it has been doing. This could be useful for "profiling" an application (list completed work to identify optimization points).

ARB_sparse_buffer allows developers to perform calculations on pieces of generic buffers without loading the whole buffer into memory. This is similar to ARB_sparse_texture... except that extension is for textures. Buffers are useful for things like vertex data (and so forth).

ARB_transform_feedback_overflow_query is apparently designed to let developers choose whether or not to draw objects based on whether the buffer is overflowed. I might be wrong, but it seems like this would be useful for deciding whether or not to draw objects generated by geometry shaders.

KHR_blend_equation_advanced allows new blending equations between objects. If you use Photoshop, this would be "multiply", "screen", "darken", "lighten", "difference", and so forth. On NVIDIA's side, this will be directly supported on Maxwell and Tegra K1 (and later). Fermi and Kepler will support the functionality, but the driver will perform the calculations with shaders. AMD has yet to comment, as far as I can tell.

nvidia-opengl-debugger.jpg

Image from NVIDIA GTC Presentation

For developers, NVIDIA has launched 340.65 (340.23.01 for Linux) beta drivers. If you are not looking to create OpenGL 4.5 applications, do not get this driver; you really should not have any use for it at all.

Next Generation OpenGL Initiative Announced

The Khronos Group has also announced "a call for participation" to outline a new specification for graphics and compute. They want it to give developers explicit control over CPU and GPU tasks, be multithreaded, have minimal overhead, have a common shader language, and undergo "rigorous conformance testing". This sounds a lot like the design goals of Mantle (and what we know of DirectX 12).

amd-mantle-queues.jpg

And really, from what I hear and understand, that is what OpenGL needs at this point. Graphics cards look nothing like they did a decade ago (or over two decades ago). They each have very similar interfaces and data structures, even if their fundamental architectures vary greatly. If we can draw a line in the sand, legacy APIs can be supported but not optimized heavily by the drivers. After a short time, available performance for legacy applications would be so high that it wouldn't matter, as long as they continue to run.

Add to that, next-generation drivers should be significantly easier to develop, considering the reduced error checking (and other responsibilities). As I said in Intel's DirectX 12 story, it is still unclear whether this will lead to enough of a performance increase to make most optimizations, such as those which increase workload or developer effort in exchange for queuing fewer GPU commands, unnecessary. We will need to wait for game developers to use it for a bit before we know.

Intel and Microsoft Show DirectX 12 Demo and Benchmark

Subject: General Tech, Graphics Cards, Processors, Mobile, Shows and Expos | August 13, 2014 - 09:55 PM |
Tagged: siggraph 2014, Siggraph, microsoft, Intel, DirectX 12, directx 11, DirectX

Along with GDC Europe and Gamescom, Siggraph 2014 is going on in Vancouver, BC. There, Intel had a DirectX 12 demo at their booth. This scene, containing 50,000 asteroids, each in its own draw call, was developed on both Direct3D 11 and Direct3D 12 code paths, and the two could apparently be switched between while the demo is running. Intel claims to have measured both power and frame rate.

intel-dx12-LockedFPS.png

Variable power to hit a desired frame rate, DX11 and DX12.

The test system is a Surface Pro 3 with an Intel HD 4400 GPU. Doing a bit of digging, this would make it the i5-based Surface Pro 3. Removing another shovel-load of mystery, that would be the Intel Core i5-4300U with two cores, four threads, a 1.9 GHz base clock, up to a 2.9 GHz turbo clock, 3MB of cache, and (of course) the Haswell architecture.

While not top-of-the-line, it is also not bottom-of-the-barrel. It is a respectable CPU.

Intel's demo on this processor shows a significant power reduction in the CPU, and even a slight decrease in GPU power, for the same target frame rate. If power was not throttled, Intel's demo goes from 19 FPS all the way up to a playable 33 FPS.

Intel will discuss more during a video interview, tomorrow (Thursday) at 5pm EDT.

intel-dx12-unlockedFPS-1.jpg

Maximum power in DirectX 11 mode.

For my contribution to the story, I would like to address the first comment on the MSDN article. It claims that this is just an "ideal scenario" of a scene that is bottlenecked by draw calls. The thing is: that is the point. Sure, a game developer could optimize the scene to (maybe) instance objects together, and so forth, but that is unnecessary work. Why should programmers, or worse, artists, need to spend so much of their time developing art so that it can be batched together into fewer, bigger commands? Would it not be much easier, and all-around better, if the content could be developed as it most naturally comes together?

That, of course, depends on how much performance improvement we will see from DirectX 12 compared to theoretical maximum efficiency. If pushing two workloads through a DX12 GPU takes about the same time as pushing one double-sized workload, then it allows developers to, quite literally, pursue whatever solution is most direct.
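
To make that reasoning concrete, here is a toy frame-time model in Python. The per-draw overhead and GPU frame cost below are invented for illustration; only the 50,000 draw calls and the rough 19-to-33 FPS improvement come from the demo described above:

```python
# Toy model of a draw-call-bound frame: CPU cost per frame = draws * per-draw overhead,
# and frame time is set by whichever of CPU or GPU takes longer (they overlap).
# The overhead and GPU numbers are invented; only the 50,000 draws and the rough
# 19 -> 33 FPS jump come from the demo described above.

DRAWS = 50_000
GPU_FRAME_MS = 30.0   # assumed GPU-side cost per frame (made up, ~33 FPS when GPU-bound)

def fps(per_draw_overhead_us):
    cpu_ms = DRAWS * per_draw_overhead_us / 1000.0
    frame_ms = max(cpu_ms, GPU_FRAME_MS)
    return 1000.0 / frame_ms

print(f"DX11-like overhead (1.0 us/draw): {fps(1.0):.0f} FPS")  # CPU-bound at ~20 FPS
print(f"DX12-like overhead (0.1 us/draw): {fps(0.1):.0f} FPS")  # GPU-bound at ~33 FPS
```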

intel-dx12-unlockedFPS-2.jpg

Maximum power when switching to DirectX 12 mode.

If, on the other hand, pushing two workloads is 1000x slower than pushing a single, double-sized one, but DirectX 11 was 10,000x slower, then it could be less relevant because developers will still need to do their tricks in those situations. The closer it gets, the fewer occasions that strict optimization is necessary.

If there are any DirectX 11 game developers, artists, and producers out there, we would like to hear from you. How much would a (let's say) 90% reduction in draw call latency (which is around what Mantle claims) give you, in terms of fewer required optimizations? Can you afford to solve problems "the naive way" now? Some of the time? Most of the time? Would it still be worth it to do things like object instancing and fewer, larger materials and shaders? How often?

FMS 2014: Silicon Motion announces new SM2256 controller driving 1xnm TLC NAND

Subject: Storage, Shows and Expos | August 7, 2014 - 05:37 PM |
Tagged: ssd, SM2256, silicon motion, sata, FMS 2014, FMS

Silicon Motion has announced their SM2256 controller. We caught a glimpse of this new controller on the Flash Memory Summit show floor:

DSC04256.JPG

The big deal here is the fact that this controller is a complete drop-in solution that can drive multiple different types of flash, as seen below:

DSC04258.JPG

The SM2256 can drive all variants of TLC flash.

The controller itself looks to have decent specs, considering it is meant to drive 1xnm TLC flash: just under 100k random 4K IOPS, with writes understandably below the saturation point of SATA 6Gb/sec at 400MB/sec (writing to TLC is tricky!). There is also mention of Silicon Motion's NANDXtend Technology, which claims to add extra ECC and DSP capability aimed at correcting the bit errors that become more likely as you venture into 3-bit-per-cell (TLC) territory.
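
As a quick reminder of why error correction matters more as you pack extra bits into each cell, here is a small back-of-the-envelope calculation (crudely assuming equally spaced levels across a fixed voltage window):

```python
# Each additional bit per cell doubles the number of charge levels that must be told
# apart, shrinking the margin between adjacent levels and raising the raw bit error
# rate - hence pairing TLC with stronger ECC/DSP, as the SM2256 does.

for bits, name in ((1, "SLC"), (2, "MLC"), (3, "TLC")):
    levels = 2 ** bits
    relative_margin = 1 / (levels - 1)   # crude: equal spacing across a fixed window
    print(f"{name}: {levels} levels per cell, relative margin ~{relative_margin:.2f}")
```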

Press blast after the break:

FMS 2014: Phison announces new quad-core PS3110 SATA 6Gb/s SSD controller

Subject: Storage, Shows and Expos | August 7, 2014 - 05:25 PM |
Tagged: ssd, sata, PS5007, PS3110, phison, pcie, FMS 2014, FMS

At the Flash Memory Summit, Phison has updated their SSD controller lineup with a new quad-core SSD controller.

DSC04264.JPG

The PS3110 is capable of handling TLC as well as MLC flash, and the added horsepower lets it push as high as 100k IOPS.

DSC04260.JPG

Also seen was an upcoming PS5007 controller, capable of pushing PCIe 3.0 x4 SSDs to 300k IOPS and close to 3GB/sec of sequential throughput. While there were no actual devices using this new controller on display, we did spot the full specs:

DSC04263.JPG

Full press blast on the PS3110 appears after the break:

Source: Phison

FMS 2014: HGST Claims 3 Million IOPS and 1.5us Access Time SSD - updated with pics

Subject: General Tech, Storage, Shows and Expos | August 7, 2014 - 02:17 PM |
Tagged: ssd, phase change memory, PCM, hgst, FMS 2014, FMS

According to an HGST press release, the company will bring an SSD based on phase change memory to the 2014 Flash Memory Summit in Santa Clara, California. They claim that it will actually be at their booth, on the show floor, for two days (August 6th and 7th).

The device, which is not branded, connects via PCIe 2.0 x4. It is designed for speed. It is allegedly capable of 3 million IOPS, with just 1.5 microseconds required for a single access. For comparison, the 800GB Intel SSD DC P3700, recently reviewed by Allyn, had a dominating lead over the competitors that he tested. It was just shy of 250 thousand IOPS. This is, supposedly, about twelve times faster.

HGST_CompanyLogo.png

While it is based on a different technology than NAND, and thus not directly comparable, the PCM chips are apparently manufactured at 45nm. Regardless, that is significantly larger lithography than competing products. Intel is manufacturing their flash at 20nm, while Samsung managed to use a 30nm process for their recent V-NAND launch.

What does concern me is the capacity per chip. According to the press release, it is 1Gb per chip. That is about two orders of magnitude smaller than what NAND is pushing. That is, also, the only reference to capacity in the entire press release. It makes me wonder how small the total drive capacity will be, especially compared to RAM drives.

Of course, because it does not seem to be a marketed product yet, there is nothing on pricing or availability. It will almost definitely be aimed at the enterprise market, though (especially given HGST's track record).

*** Update from Allyn ***

I'm hijacking Scott's news post with photos of the actual PCM SSD, from the FMS show floor:

DSC04122.JPG

DSC04124.JPG

In case you all are wondering, yes, it does in fact work:

DSC04125.JPG

DSC04126.JPG

DSC04127.JPG

One of the advantages of PCM is that it is addressed in smaller sections than typical flash memory. This means you can see ~700k *single sector* random IOPS at QD=1. You can only pull off that sort of figure with extremely low IO latency. They only showed this output at their display, but ramping up to QD > 1 should reasonably lead to the 3 million figure claimed in their release.
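
Those figures hang together: at QD=1, IOPS is bounded by the reciprocal of per-IO latency, so the claimed 1.5 microsecond access time and the ~700k QD=1 number are consistent. A quick check:

```python
# At QD=1, IOPS is bounded by 1 / (per-IO latency). The claimed 1.5 microsecond access
# time lines up with the ~700k single-sector QD=1 figure shown on the show floor, and
# Little's law gives the outstanding IOs needed to reach the 3 million IOPS claim.

access_time_s = 1.5e-6
qd1_ceiling = 1 / access_time_s
print(f"QD=1 ceiling at 1.5 us per IO : {qd1_ceiling:,.0f} IOPS")   # ~667,000

target_iops = 3_000_000
outstanding_needed = target_iops * access_time_s   # Little's law: L = X * R
print(f"Outstanding IOs needed for 3M IOPS: ~{outstanding_needed:.1f}")
```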

Source: HGST

FMS 2014: Marvell announces new 88SS1093 PCIe SSD controller

Subject: Storage, Shows and Expos | August 6, 2014 - 03:03 PM |
Tagged: ssd, pcie, NVMe, Marvell, FMS 2014, FMS, controller, 88SS1093

Marvell is well known for being the first to bring a 6Gb/sec SATA controller to market, and they continue to do very well in that area. Their very capable 88SS9189 controller powers the Crucial MX100 and M550, as well as the ADATA SP920.

chip-shot-88SS1093.jpg

Today they announced a newer controller, the 88SS1093. Despite the confusing numbering, the 88SS1093 has a PCIe 3.0 x4 host interface and will support the full NVMe protocol. The provided specs are on the light side, as the performance of this controller will ultimately depend on the speed and parallelism of the attached flash, but it's sure to be a decent performer. I suspect it will behave like their SATA part, only no longer bottlenecked by SATA 6Gb/sec speeds.

More to follow, as I hope to see this controller in person in the exhibition hall (which opens to press in a few hours). Full press blast after the break.

*** Update ***

Apologies, as there was no photo to be taken - Marvell had no booth in the exhibition space at FMS.

Source: Marvell

FMS 2014: Samsung announces 3D TLC VNAND, Storage Intelligence initiative

Subject: Storage, Shows and Expos | August 5, 2014 - 04:19 PM |
Tagged: FMS, vnand, tlc, ssd, Samsung, FMS 2014, Flash Memory Summit

Just minutes ago at the Flash Memory Summit, Samsung announced the production of 32-layer TLC VNAND:

DSC03974.JPG

This is the key to production of a soon-to-be-released 850 EVO, which should bring the excellent performance of the 850 Pro along with the reduced-cost benefit we saw with the previous-generation 840 EVO. Here's what the progression to 3D VNAND looks like:

progression slide.png

3D TLC VNAND will look identical to the rightmost image in the above slide, but the difference is that each cell must store more distinct charge states. Given that Samsung's VNAND tech has more volume available to store electrons when compared to competing 2D planar flash, it's a safe bet that this new TLC will come with higher endurance ratings than those other technologies. There is much more information on Samsung's VNAND technology on page 1 of our 850 Pro review. Be sure to check that out if you haven't already!

Another announcement made was more of an initiative, but a very interesting one at that. SSDs are generally dumb when it comes to coordinating with the host - in that there is virtually no coordination. An SSD has no idea which pieces of files were meant to be grouped together, etc (top half of this slide):

DSC04016.JPG

Stuff comes into the SSD and it puts it where it can based on its best guess as to how it should optimize those writes. What you'd want to have, ideally, is a more intelligent method of coordination between the host system and the SSD (more like the bottom half of the above slide). Samsung has been dabbling in the possibilities here and has seen some demonstrable gains to be made. In a system where they made the host software aware of the SSD flash space, and vice versa, they were able to significantly reduce write latency during high IOPS activity.

DSC04014.JPG

The key is that if the host / host software has more control over where and how data is stored on the SSD, the end result is a much more optimized write pattern, which ultimately boosts overall throughput and IOPS. We are still in the experimentation stage on Storage Intelligence, with more to follow as standards are developed and the industry pushes forward.

It might be a while before we see Storage Intelligence go mainstream, but I'm definitely eager to see 3D TLC VNAND hit the market, and now we know it's coming! More to follow in the coming days as we continue our live coverage of the Flash Memory Summit!

PC Perspective Hardware Workshop 2014 @ Quakecon 2014 in Dallas, TX

Subject: Editorial, General Tech, Shows and Expos | July 23, 2014 - 04:43 PM |
Tagged: workshop, video, streaming, quakecon, prizes, live, giveaways

UPDATE: The event is over, but the video is embedded below if you want to see the presentations! Thanks again to everyone that attended and all of our sponsors!

It is that time of year again: another installment of the PC Perspective Hardware Workshop!  Once again we will be presenting on the main stage at Quakecon 2014 being held in Dallas, TX July 17-20th.

logo-1500px.jpg
 

Main Stage - Quakecon 2014

Saturday, July 19th, 12:00pm CT

Our thanks go out to the organizers of Quakecon for allowing us and our partners to put together a show that we are proud of every year.  We love giving back to the community of enthusiasts and gamers that drive us to do what we do!  Get ready for 2 hours of prizes, games and raffles and the chances are pretty good that you'll take something out with you - really, they are pretty good!

Our primary partners at the event are those that threw in for our ability to host the workshop at Quakecon and for the hundreds of shirts we have ready to toss out!  Our thanks to NVIDIA, Seasonic, and Logitech!

nvidia_logo_small.png

seasonic-transparent.png

logitech-transparent.png

Live Streaming

If you can't make it to the workshop - don't worry!  You can still watch the workshop live on our live page as we stream it over one of several online services.  Just remember this URL: http://pcper.com/live and you will find your way!

 

PC Perspective LIVE Podcast and Meetup

We are planning on hosting any fans that want to watch us record our weekly PC Perspective Podcast (http://pcper.com/podcast) on Wednesday or Thursday evening in our meeting room at the Hilton Anatole.  I don't yet know exactly WHEN or WHERE the location will be, but I will update this page accordingly on Wednesday July 16th when we get the data.  You might also consider following me on Twitter for updates on that status as well.

After the recording, we'll hop over to the hotel bar for a couple drinks and hang out.  We have room for at least 50-60 people to join us in the room, but we'll still be recording if just ONE of you shows up.  :)

Prize List (will continue to grow!)

Continue reading to see the list of prizes for the workshop!!!

John Carmack's Replacement at id: Tiago Sousa (Crytek)

Subject: General Tech, Shows and Expos | July 19, 2014 - 05:13 PM |
Tagged: quakecon 2014, quakecon, id, crytek

Tiago Sousa was "Lead R&D Graphics Engineer" at Crytek, according to his now defunct Twitter account, "@CRYTEK_TIAGO". According to his new Twitter account, "@idSoftwareTiago", he will be joining id Software to help with DOOM and idTech 6.

id-logo.jpg

A little less DOOM and gloom.

I find this more interesting because idTech 5 has not exactly seen much usage, outside of RAGE. Wolfenstein: The New Order was also released on the technology, two months ago. There is one other game planned -- and that is it. Sure, RAGE is almost three years old and the engine was first revealed in 2007, making it seven-year-old technology, basically. Still, that is a significant investment to see basically no return on, especially considering that its sales figures were not too impressive (Steam and other digital delivery services excluded).

I also cannot tell if this looks positive for id, after mixed comments from current and former employees (or people who claim to be), or bad for Crytek. The latter company was rumored to be hurting for cash since 2011 and saw the departure of many employees. I expect that there will be more to this story in the coming months and years.

Source: Twitter

Win a BYOC Seat at Quakecon 2014!

Subject: Shows and Expos | July 9, 2014 - 05:27 PM |
Tagged: workshop, quakecon, contest, byoc

Are you interested in attending Quakecon 2014 next weekend in Dallas, TX but just can't swing the BYOC spot? Well, thanks to our friends at Quakecon and at PC Part Picker, we have two BYOC spots up for grabs for fans of PC Perspective!

qconlogo-small.jpg

While we are excited to be hosting our PC Perspective Hardware Workshop with thousands of dollars in giveaways to pass out on Saturday the 19th, I know that the big draw is the chance to spend Thursday, Friday and Saturday at North America's largest LAN Party.

logo-1500px.jpg

The giveaway is simple. 

  1. Fill out the form below with your name and email address.
     
  2. Make sure you are able and willing to attend Quakecon from July 17th - July 20th. There is no point in winning a free BYOC spot that you cannot use!
     
  3. We'll pick a winner on Friday, July 11th so you'll have enough time to make plans.

There you have it. Get to it, guys, and we'll see you in Dallas!