Micron Launches 32GB NVDIMM-N - Intel Announces 3D XPoint NVDIMM

Subject: Storage | November 15, 2017 - 09:59 PM
Tagged: NVDIMM, XPoint, 3D XPoint, 32GB, NVDIMM-N, NVDIMM-F, NVDIMM-P, DIMM

We're finally starting to see NVDIMMs materialize beyond unobtainium status. Micron recently announced a 32GB NVDIMM-N:


These come with 32GB of DRAM plus 64GB of SLC NAND flash.


These are in the NVDIMM-N form factor and can offer some very impressive latency improvements over other non-volatile storage methods.

Next up is Intel, who recently presented at the UBS Global Technology Conference:


We've seen Intel's Optane in many different forms, and now it looks like we finally have a date for 3D XPoint DIMMs - 2nd half of 2018! There are lots of hurdles to overcome, as the JEDEC spec is not yet finalized (and might not be by the time this launches). Motherboard and BIOS support also needs to become more widespread for this to take off.

Don't expect this to be in your desktop machine anytime soon, but one can hope!

Press blast for the Micron 32GB NVDIMM-N appears after the break.

Micron Advances Persistent Memory with 32GB NVDIMM

First Solution Delivering 2933 MT/s Speeds to Eliminate Storage Bottlenecks

DENVER, Nov. 13, 2017 (GLOBE NEWSWIRE) -- At the SC17 show, Micron Technology, Inc. (Nasdaq:MU), today announced a new 32GB NVDIMM-N offering twice the capacity of existing NVDIMMs, providing system designers and original equipment manufacturers (OEMs) with new flexibility to work with larger data sets in fast persistent memory.

The solution is architected to support the increasing performance, energy efficiency and uptime requirements of data analytics and online transaction processing applications. Compared to server configurations using traditional far storage, deploying NVDIMMs can deliver up to 400 percent performance benefits.

As data center storage volumes grow, database queries increasingly need key datasets retained in memory to improve access speeds, driven by the rising business requirement for higher availability. Many businesses are seeing increased value in placing fast memory near the processor to reduce the need to transfer data from far storage.

Persistent memory offers a unique balance of latency, bandwidth, capacity and cost by delivering ultra-fast DRAM speeds for critical data. What sets it apart from standard server DRAM is its ability to preserve information in the event of a power loss. Micron's technology provides a unique solution for near-memory data analysis and addresses rising bandwidth demands of data-rich applications in markets such as finance, medicine, retail, and oil and gas exploration.

NVDIMM has emerged as a critical persistent memory technology due to its ability to deliver the performance levels of DRAM combined with the persistent reliability of NAND. It reduces the bandwidth gap between memory and storage.

Applications that require frequent updates, such as journaling or transactional logging of metadata, now have the capability to leverage NVDIMM for these functions instead of traditional far storage. Micron's NVDIMM allows customers to raise read-centric performance by 11 percent and write-centric performance by 63 percent for block-level data.
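To illustrate the journaling idea (this is not Micron's API): on Linux, an NVDIMM-N region is typically exposed as a pmem device, and an application can mmap a file on a DAX-mounted filesystem and append log records with ordinary stores, flushing to make them persistent. A minimal Python sketch, where the DAX mount point, the length-prefixed record format, and the `append_record` helper are all assumptions for illustration (a temporary file stands in for the pmem device so the sketch runs anywhere):

```python
import mmap
import os
import struct
import tempfile

SIZE = 1 << 20  # 1 MiB journal region

# On a real system this file would live on a DAX-mounted NVDIMM
# (e.g. a filesystem on /dev/pmem0 -- an assumption); a temporary
# file keeps the sketch runnable on any machine.
tmp = tempfile.NamedTemporaryFile(delete=False)
tmp.truncate(SIZE)
tmp.close()

fd = os.open(tmp.name, os.O_RDWR)
mm = mmap.mmap(fd, SIZE)

def append_record(offset: int, payload: bytes) -> int:
    """Append a length-prefixed record, then flush it to persistence."""
    record = struct.pack("<I", len(payload)) + payload
    mm[offset:offset + len(record)] = record
    # mmap.flush() (msync under the hood) is the portable persistence
    # point; pmem-aware libraries like PMDK use finer-grained
    # CLWB/CLFLUSHOPT cache-line flushes instead.
    mm.flush(0, offset + len(record))
    return offset + len(record)

off = 0
off = append_record(off, b"txn-begin id=42")
off = append_record(off, b"txn-commit id=42")

# Records flushed before a crash are recoverable afterwards.
n = struct.unpack_from("<I", mm, 0)[0]
print(mm[4:4 + n].decode())  # -> txn-begin id=42
```

The point of the flush call is that a store to persistent memory is only durable once it has left the CPU caches; until then a power loss can still lose it, which is why pmem programming models pair every journal append with an explicit flush.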

"As data sets get larger and larger, data access becomes increasingly critical to application performance," said Tom Eby, senior vice president for Micron's Compute and Networking Business Unit. "Our new 32GB NVDIMM-N equips system architects with a high-capacity persistent memory solution that can dramatically increase throughput and improve total cost of ownership."

VMware and Dell are collaborating with Micron to increase the performance for virtualized applications. With virtual persistent memory, customers can now run multiple operating systems in a virtualized environment while reducing overall network traffic.

"As the global leader in cloud infrastructure and business mobility, VMware recognized early the significant reduction of database and local storage latencies that Micron NVDIMM-N can bring to our virtualized customers using Dell PowerEdge servers," said Richard A. Brunner, chief platform architect and vice president of Server Platform Technologies at VMware, Inc. "Using the 16 GB NVDIMM-N from Micron for the Dell PowerEdge 14G servers, a future version of VMware vSphere(R) intends to efficiently grow the number and size of virtualized persistent memory workloads in the data center while ensuring the benefits of live migration, check-pointing, and legacy storage optimizations for NVDIMM. VMware looks forward to the improvements that can arise when the server industry starts deploying the new 32 GB Micron NVDIMM-N to our customers."

"Persistent memory solutions enable our customers to optimize intensive database and analytics workloads," said Robert Hormuth, vice president and fellow, Server Division CTO at Dell EMC. "Micron's advancement in persistent memory offerings and Dell EMC engineering efforts to enhance the NVDIMM capability of PowerEdge servers will boost application performance, reduce system crash recovery time and enhance SSD endurance for our customers."

Demonstrations of the new persistent memory solutions using Micron NVDIMMs running on a Dell PowerEdge 14G server will be showcased at SC17 at the Micron Booth (#1963).


November 16, 2017 | 12:30 AM - Posted by Bill (not verified)

Does that mean instant (sub 3 seconds) boot for Windows??

November 16, 2017 | 04:30 AM - Posted by Power (not verified)

Depends on how fast you can buy a supported motherboard and CPU.

November 16, 2017 | 12:08 PM - Posted by Dvon-E (not verified)

No. Whenever faster hardware comes out, Microsoft finds a way of slowing it down. I've seen it happen when upgraded hardware arrived at the office before the "upgraded" software.

Now that MS is requiring new hardware to run the latest version of Windows, you won't be able to compare old hardware with old Windows to new hardware with old Windows any more, and the speed differences of new hardware with or without whatever super-fast component will be masked by mandatory poorly-implemented useless features added to Windows.

Putting it another way, to support X, Microsoft will add x, y, and z "for a better user experience."

November 16, 2017 | 02:55 PM - Posted by serpico (not verified)

Meh. Windows 10 boots pretty fast. It definitely lowered my boot time when I upgraded my laptop from Windows 7 to 10. There are other things I don't like about Windows 10, but general performance is not one of them.

November 20, 2017 | 03:33 PM - Posted by Godzilla (not verified)

An HP Spectre laptop boots from power off to the Windows 10 sign-on page in about 7 seconds. And this isn't recovering from sleep, but from a full power shutdown. Dunno what HP did to make that happen, but they did a good job, as allegedly faster desktop systems still take ~25 seconds.

November 16, 2017 | 09:04 AM - Posted by Anonymously Anonymous (not verified)

I think this would be the tech I'd finally want to upgrade for, from my 2600k. Maybe 2 to 3 years from now we'll see this on desktop? I think my cpu will manage until then.

November 16, 2017 | 10:16 AM - Posted by notmuch (not verified)

Finally, the almost "philosophical absurdity" of volatility in computing is being addressed.

Soon, all latest tech will be "pre-historic".

November 16, 2017 | 10:50 AM - Posted by NotYetHereForAverageJoe (not verified)

Hopefully Micron can get its QuantX/XPoint products to market in a form that makes use of some future JEDEC standard for NVDIMMs, both XPoint- and NAND-based, but it looks like Intel will have the XPoint market all to itself in 2017, as Micron's XPoint competition is MIA.

How much memory bandwidth must be shared between DRAM accesses and the on-DIMM NAND?

November 16, 2017 | 11:07 AM - Posted by notmuch (not verified)

So, besides missing last year's new year celebrations, what else have you been up to? :-)

November 16, 2017 | 11:24 AM - Posted by MoreXPointsFerAll (not verified)

What's New Years?

November 16, 2017 | 11:23 AM - Posted by MoreXPointsFerAll (not verified)

And some joint XPoint fab embiggen news:

"Intel and Micron have expanded their XPoint production fab in Utah, USA, as the clock ticks down to the launch of XPoint DIMMs in the second half of 2018." (1)

And future generations of XPoint:

"Today's first-generation chips have two layers, and Intel has just announced a 750GB Optane drive using them. Micron has its own QuantX brand of 3D XPoint products, but so far hasn't actually got any shipping gear.

We understand from Micron that the next two generations of XPoint are being developed. These might extend the layering inside XPoint chips from two to four, say, and then on to eight, doubling chip capacity each time. Alternatively, today's chips could be stacked to achieve the same end." (1)

(1)

"Intel-Micron scrap the summer diet, enlarge 3D XPoint mem DIMM fab"

https://www.theregister.co.uk/2017/11/16/intel_micron_3d_xpoint/

November 16, 2017 | 12:16 PM - Posted by notmuch (not verified)

Once they get to 8 layers, it will be possible to flip a whole Byte with only one induction within the crossing.

The theoretical result is 1/2 the consumption, 2x the speed and fewer joules. The latter will be the main concern with crosspoint tech.

November 16, 2017 | 11:29 AM - Posted by notmuch (not verified)

Crosspoint is the way to go. Life expectancy is vital here.

Also, I remember 2 years ago, Micron saying that it was cheaper to produce.

Better + cheaper = no-brainer.

November 16, 2017 | 11:42 AM - Posted by notmuch (not verified)

Well, that's unless Raja comes up with some sort of HBM on batteries...

November 16, 2017 | 10:18 PM - Posted by XPointOnTheHBMStacks (not verified)

Not really; just stick some XPoint in with the DRAM on the HBM2 stacks. That bottom die on the HBM2 stack can have different controller IP on it than just the DRAM controller IP. Better yet, the HBM memory makers can all get together and create an HBM-NVM JEDEC standard, as they are all practicing members of JEDEC along with a lot of CPU/GPU makers. I'd like to see HBM with XPoint on the stacks, and extra controller logic to handle all that DRAM-to-XPoint data transfer in the background via a back-plane bus built specifically for this sort of HBM2/newer-HBM# DRAM to on-stack XPoint NVM persistent memory.

If they can get XPoint's durability up enough that it can last at least 10 years, then that's enough for inclusion on the HBM2/newer-HBM# die stacks. Look at that Vega 10 die-based Radeon WX 9100 SSD variant, and now imagine all the GPUs that make use of HBM2 also having loads of XPoint right on the HBM2 stacks, for even lower latency than that Radeon WX 9100 SSD variant PCIe card with its own on-card SSD.

Any XPoint right on the HBM2 stacks will have the lowest latency, as there will be no need to even go off the interposer to get at the NVM like the current WX 9100 SSD variant has to. That Radeon WX 9100 SSD variant is already faster than going all the way off the PCIe card, via an extra hop, to an SSD on the motherboard; with XPoint on the HBM stacks there would be no latency-inducing hops over any PCIe protocol at all, since the NVM would sit right there alongside the DRAM dies.

November 17, 2017 | 09:54 AM - Posted by notmuch (not verified)

Bingo!

That's what I meant when I said (in other posts) that the SSG was the only real (and rational) innovation from RTG (the likes we expect from AMD).

There was a demo showing what the SSG can do that all others don't, in dealing with an enormous quantity of polygons (billions? millions? Can't remember).

I guess Intel saw in there the right niche for crosspoint.

If they hold to their proprietary side of crosspoint and Micron keeps its own on hold for licensing, then we'll have a crushing Intel in the AI/Compute/HSA field.

So, all eyes on Micron for the sake of competition.

November 17, 2017 | 12:14 PM - Posted by ItOnlyAboutTheProfessionalMarkets (not verified)

"That's what I meant when I said (in other posts) that the SSG was the only real (and rational) innovation from RTG (the likes we expect from AMD)."

No, Vega 10 has plenty of innovative features, like primitive shaders and HBCC; it's just that Vega 10 is for the Radeon Pro WX 9100 and Radeon Instinct MI25 professional compute/AI markets first, and can be binned down for gaming usage for any Vega 10 dies that do not make the professional grade.

For AMD, RTG's focus was on the professional markets, and AMD has no great market share in the gaming GPU market, where the markups and margins are terrible. So AMD, being a CPU company from way back, is very rationally focusing on the professional market for both Zen and Vega 10, and so are Nvidia and Intel with their respective offerings. If you look at what JHH at Nvidia has been focusing on, it's GV100 and the compute/AI markets, and also the automotive markets. So that's very logically where the best revenues, and the best sources of new revenues, are coming from. Intel hired Raja to help Intel create a GV100 competitor.

The gaming market has poor markups relative to the professional markets, so AMD and Nvidia/Intel will always focus most on the professional compute/AI markets, with that low-margin consumer market there mostly as somewhere to sell the binned dies and try to recoup some money from them.

The gaming market really leeches off the R&D paid for by the professional markets, but that is somewhat counteracted by the CPU/GPU makers' ability to dump their not-so-performant die bins onto the consumer/gaming markets. Intel even makes extra revenue from gaming/consumer by forcing motherboard upgrades and segmenting its consumer lines of CPU/SOC/MB products to maximize the already poor markups obtained from the mainstream consumer gaming market.

Look for Intel to continue to source semi-custom Radeon dies directly from AMD (no IP licensing required) while Raja and his Intel team focus like a laser on getting a product out to compete with Nvidia's GV100. That Raja-designed accelerator does not necessarily need ROPs or TMUs; it just needs TPUs (tensor processing units) and shader (processor) cores by the thousands, with plenty of DP FP units that can each do packed 32-bit math (one 64-bit FP unit handling two 32-bit FP values), and TPU units also able to do 16-, 8-, and even 4-bit math for AI workloads.

Intel can source from AMD for any low-margin gaming-focused GPUs to place on that EMIB/MCM module for the low-margin consumer gaming markets; Raja's expertise is needed for the real revenue-producing products in the professional markets.

November 16, 2017 | 03:58 PM - Posted by Chi (not verified)

This is only relevant for servers because high-end desktop motherboards / chipsets support either 64 GB or 128 GB max. This tech will not be accessible for the average consumer until they can buy motherboards that support 512 GB or more RAM.

November 16, 2017 | 08:21 PM - Posted by notmuch (not verified)

Bill and you. Both visionaries. Can't write more. 640KB reached.
