Subject: General Tech | June 26, 2018 - 01:17 PM | Jeremy Hellstrom
Tagged: phase change memory, PCM, antimony
Researchers from IBM and a German university have come up with a new way of creating phase change memory which should be far more effective than the current process. Current PCMs are made from a complex alloy of materials in order to limit the amount of energy required to flip their state, keeping temperatures from building up, but this leads to a scalability issue: as memory cells shrink, the purity of the alloy needs to improve, since even a single wandering atom can render a cell unusable. These researchers have created PCM cells between 3 and 10 nm thick from pure antimony, separated by SiO2 insulation layers 40-200 nm thick, which can change state in a mere 50 nanoseconds.
There is still a long way to go; the process they used creates a form of antimony which remains stable for just 51 hours at a temperature of 20°C. That is not quite good enough for prime time, but it could lead to usable materials in the future. Drop by physicsworld for more coverage.
"Monatomic glassy antimony might be used as a new type of single-element phase change memory. This is the new finding from researchers at IBM Research-Zurich and RWTH Aachen University who say that their approach avoids the problem of local compositional variations in conventional multi-element PCMs. This problem becomes ever more important as devices get smaller."
Here is some more Tech News from around the web:
- Nvidia prototype board leak teases GDDR6 for upcoming Turing GPU @ The Inquirer
- Nvidia adds nine nifty AI supercomputing containers to the cloud @ The Register
- WPA3 rollout begins to give WiFi security a kick up the backside @ The Inquirer
- Hyperthreading under scrutiny with new TLBleed crypto key leak @ Ars Technica
- AIM Has Been Resurrected. Kind Of. @ Slashdot
- Qualcomm still serious about Windows 10 on Arm: Engineers work on '12W' Snapdragon 1000 @ The Register
Subject: General Tech | November 1, 2017 - 02:03 PM | Jeremy Hellstrom
Tagged: PCM, IBM
A team of researchers at IBM Zurich has come up with a way to utilize PCM as a simple computational device which does not follow the traditional von Neumann architecture. Phase change memory works in a way somewhat analogous to optical storage, with changes to the physical state of the storage medium used to represent a 1 or 0. In this case it is a substance that switches from amorphous to crystalline and back again with the application of electrical current; the article at The Register describes this in more detail.
This research envisions connecting a sensor which can send an electrical pulse to the PCM to change its state; the example given involves detecting rain and changing the memory to a 1 if rain is detected, a 0 if not. With the application of an algorithm to detect the state of the PCM, you can read out rainfall patterns from storage without requiring a processor. While the computational power of PCM will be quite simple, describing how it works is certainly not, so follow the links to the research if your curiosity is piqued; a toy model of the idea appears after the quote below.
"But memory has no processor so some aspect of a memory device has to be used, an aspect that changes its nature depending upon the data contents of the memory device. Also the computation is going to be quite primitive"
Here is some more Tech News from around the web:
- Android Oreo Bug Sends Thousands of Phones Into Infinite Boot Loops @ Slashdot
- Microsoft slowly closes Outlook Premium's door while Office 365 winks at you across the street @ The Register
- Russia's Anti-VPN Law Goes Into Effect @ Slashdot
- Only good guys would use an automated GPU-powered password-cracker ... right? @ The Register
- The underground story of Cobra, the 1980s’ illicit handmade computer @ Ars Technica
- We May Not Have Enough Minerals To Even Meet Electric Car Demand @ Slashdot
I’ve seen a bit of flawed logic floating around related to discussions about 3D XPoint technology. Some are directly comparing the cost per die to NAND flash (you can’t - 3D XPoint likely has fewer fab steps than NAND - especially when compared with 3D NAND). Others are repeating a bunch of terminology and element names without taking the time to actually explain how it works, and far too many folks out there can't even pronounce it correctly (it's spoken 'cross-point'). My plan is to address as much of the confusion as I can with this article, and I hope you walk away understanding how XPoint and its underlying technologies (most likely) work. While we do not have absolute confirmation of the precise material compositions, there is a significant amount of evidence pointing to one particular set of technologies. With Optane Memory now out in the wild and purchasable by folks wielding electron microscopes and mass spectrometers, I have seen enough additional information come across to assume XPoint is, in fact, PCM based.
XPoint memory. Note the shape of the cell/selector structure. This will be significant later.
While we were initially told at the XPoint announcement event Q&A that the technology was not phase change based, there is overwhelming evidence to the contrary, and it is likely that Intel did not want to let the cat out of the bag too early. The funny thing about that is that both Intel and Micron were briefing on PCM-based memory developments five years earlier, and nearly everything about those briefings lines up perfectly with what appears to have ended up in the XPoint that we have today.
Some die-level performance characteristics of various memory types. source
The above figures were sourced from a 2011 paper and may be a bit dated, but they do a good job of putting actual numbers to the die-level performance of the various solid state memory technologies. We can also see where the ~1000x speed and ~1000x endurance comparisons of XPoint to NAND flash came from. Now, of course, those performance characteristics do not directly translate to the performance of a complete SSD package containing those dies. Controller overhead and management must take their respective cuts, as is shown with the performance of the first generation XPoint SSD we saw come out of Intel:
The ‘bridging the gap’ Latency Percentile graph from our Intel SSD DC P4800X review.
(The P4800X comes in at 10us above).
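To put rough numbers on that "cut": assuming a die-level PCM read on the order of 100 ns (a ballpark consistent with tables like the one above - this figure is my assumption, not a measured spec), almost all of the P4800X's ~10 µs drive-level latency is controller, interface, and software overhead rather than the media itself:

```python
# Rough latency budget with ballpark numbers. Illustrative estimates,
# not measured values: the die-level read figure is assumed.
die_read_ns   = 100        # assumed die-level PCM read latency
drive_read_ns = 10_000     # ~10 us observed for the P4800X at QD=1

overhead_ns = drive_read_ns - die_read_ns
print(f"Controller/interface/software overhead: ~{overhead_ns} ns "
      f"({overhead_ns / drive_read_ns:.0%} of total)")   # -> ~99%
```

If the media accounts for only a sliver of the total, later controller and interface generations have plenty of room to close the gap without the memory itself changing at all.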
There have been a few very vocal folks out there chanting 'not good enough', without the basic understanding that the first publicly available iteration of a new technology never represents its ultimate performance capabilities. It took NAND flash decades to make it into usable SSDs, and another decade before climbing to the performance levels we enjoy today. Time will tell if this holds true for XPoint, but given Micron's demos and our own observed performance of Intel's P4800X and Optane Memory SSDs, I'd argue that it is most certainly off to a good start!
A 3D XPoint die, submitted for your viewing pleasure (click for larger version).
Bluetooth has come a long way since the technology was introduced in 1998. The addition of the Advanced Audio Distribution Profile (A2DP) in 2003 brought support for high-quality audio streaming, but Bluetooth still didn’t offer anywhere near the quality of a wired connection. This unfortunate fact is often overlooked in favor of the technology's convenience factor, but what if we could have the best of both worlds? This is where Qualcomm's aptX comes in, and it is a departure from the methods in place since the introduction of Bluetooth audio.
What is aptX audio? It's actually a codec that compresses audio in a very different manner than that of the standard Bluetooth codec, and the result is as close to uncompressed audio as the bandwidth-constrained Bluetooth technology can possibly allow. Qualcomm describes aptX audio as "a bit-rate efficiency technology that ensures you receive the highest possible sound quality from your Bluetooth audio device," and there is actual science to back up this claim. After doing quite a bit of reading on the subject as I prepared for this review, I found that the technology behind aptX audio, and its history, is very interesting.
A Brief History of aptX Audio
The aptX codec has actually been around since long before Bluetooth, with its invention in the 1980s and first commercial applications beginning in the 1990s. The version now found in compatible Bluetooth devices is 4th-generation aptX, and in the very beginning it was actually a hardware product (the APTX100ED chip). The technology has had a continued presence in pro audio for three decades now, with a wider reach than I had ever imagined when I started researching the topic. For example, aptX is used for ISDN line connections for remote voice work (voice over, ADR, foreign language dubs, etc.) in movie production, and even for mix approvals on film soundtracks. In fact, aptX was also the compression technology behind DTS theater sound, which had its introduction in 1993 with Jurassic Park. It is in use in over 30,000 radio stations around the world, where it has long been used for digital music playback.
So, while it is clear that aptX is a respected technology with a long history in the audio industry, how exactly does this translate into improvements for someone who just wants to listen to music over a bandwidth-constrained Bluetooth connection? The nature of the codec and its differences/advantages vs. SBC, the default codec mandated by A2DP, is a complex topic, but I will attempt to explain in plain language how it actually can make Bluetooth audio sound better. Having science behind the claim of better sound goes a long way in legitimizing perceptual improvements in audio quality, particularly as the high-end audio industry is full of dubious - and often ridiculous - claims. There is no snake oil to be sold here, as we are simply talking about a different way to compress and decompress an audio signal - which is the purpose of a codec (code, decode) to begin with.
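As a taste of what is coming, here is a toy single-band ADPCM coder. This is my own illustrative sketch, not Qualcomm's algorithm: real aptX splits the signal into four frequency subbands and runs a carefully tuned ADPCM quantizer in each, but the core trick - transmit a small quantized prediction error instead of the full sample - looks like this:

```python
# Minimal sketch of the ADPCM idea underlying aptX-style codecs.
# Single-band toy: codes the *difference* from a running prediction
# using only 4 bits per sample, with an adaptive step size.

def adpcm_encode(samples):
    predicted, step = 0.0, 1.0
    codes = []
    for s in samples:
        diff = s - predicted                          # prediction error
        code = max(-8, min(7, round(diff / step)))    # 4-bit quantizer
        codes.append(code)
        predicted += code * step       # track what the decoder will see
        step = max(0.5, step * (1.5 if abs(code) > 4 else 0.9))  # adapt
    return codes

def adpcm_decode(codes):
    predicted, step = 0.0, 1.0
    out = []
    for code in codes:
        predicted += code * step
        out.append(predicted)
        step = max(0.5, step * (1.5 if abs(code) > 4 else 0.9))  # same rule
    return out

samples = [0, 2, 5, 9, 12, 13, 12, 9, 5, 2]   # a made-up waveform
codes = adpcm_encode(samples)
print(adpcm_decode(codes))  # tracks the input using only 4 bits/sample
```

Because encoder and decoder update their prediction and step size by the same rule, no side information needs to be transmitted - only the tiny error codes.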
Subject: Displays | January 4, 2015 - 10:00 PM | Tim Verry
Tagged: thinkvision, thin bezel, PCM, neo-blade ips, Lenovo, ips, ces 2015
Today at the Consumer Electronics Show, Lenovo announced updates and new additions to its Think-branded products aimed at business customers. New ThinkPad PCs, ThinkVision displays, and stackable ThinkPad accessories are launching early this year.
Lenovo is also expanding its line of professional displays with the ThinkVision X24. This monitor is a slim full HD display with a thin bezel aimed at business users desiring single or dual monitor setups. The ThinkVision X24 is a 23.6" Neo-Blade IPS panel with a resolution of 1920x1080. Lenovo used pre-coated metal (PCM) for the rear panel to get the monitor down to just 7.5mm thick. The chrome stand supports tilt adjustments but not swivel or height.
The ThinkVision X24 offers HDMI and DisplayPort inputs, a 7ms response time, a 1000:1 contrast ratio, 250 cd/m² brightness, and 178-degree viewing angles. The left and right bezels are extremely thin to allow for favorable dual monitor setups. The ThinkVision X24 provides a new budget option for the ThinkVision family.
The ThinkVision X24 will be available in April starting at $249.
Follow all of our coverage of the show at http://pcper.com/ces!
Subject: General Tech, Storage, Shows and Expos | August 7, 2014 - 02:17 PM | Scott Michaud
Tagged: ssd, phase change memory, PCM, hgst, FMS 2014, FMS
According to an HGST press release, the company will bring an SSD based on phase change memory to the 2014 Flash Memory Summit in Santa Clara, California. They claim that it will actually be at their booth, on the show floor, for two days (August 6th and 7th).
The device, which is not branded, connects via PCIe 2.0 x4. It is designed for speed. It is allegedly capable of 3 million IOPS, with just 1.5 microseconds required for a single access. For comparison, the 800GB Intel SSD DC P3700, recently reviewed by Allyn, had a dominating lead over the competitors that he tested. It was just shy of 250 thousand IOPS. This is, supposedly, about twelve times faster.
While it is based on a different technology than NAND, and thus not directly comparable, the PCM chips are apparently manufactured at 45nm. Regardless, that is significantly larger lithography than competing products. Intel is manufacturing their flash at 20nm, while Samsung managed to use a 30nm process for their recent V-NAND launch.
What does concern me is the capacity per chip. According to the press release, it is 1Gb per chip. That is about two orders of magnitude smaller than what NAND is pushing. That is, also, the only reference to capacity in the entire press release. It makes me wonder how small the total drive capacity will be, especially compared to RAM drives.
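To see why that number is worrying, here is the back-of-the-envelope math. The 64-chip board and the 128Gb NAND die are my assumptions for illustration; only the 1Gb-per-chip figure comes from the press release:

```python
# Back-of-the-envelope capacity comparison (chip counts are hypothetical).
pcm_gb_per_chip  = 1 / 8      # 1 Gb  = 0.125 GB, per the press release
nand_gb_per_chip = 128 / 8    # ~128 Gb dies were typical for NAND in 2014

chips = 64                    # assumed chip count for a PCIe card
print(f"PCM drive:  {chips * pcm_gb_per_chip:.0f} GB")    # -> ~8 GB
print(f"NAND drive: {chips * nand_gb_per_chip:.0f} GB")   # -> ~1024 GB
```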
Of course, because it does not seem to be a marketed product yet, there is nothing about pricing or availability. It will almost definitely be aimed at the enterprise market, though (especially given HGST's track record).
*** Update from Allyn ***
I'm hijacking Scott's news post with photos of the actual PCM SSD, from the FMS show floor:
In case you all are wondering, yes, it does in fact work:
One of the advantages of PCM is that it can be addressed in much smaller sections than typical flash memory. This means you can see ~700k *single sector* random IOPS at QD=1. You can only pull off that sort of figure with extremely low IO latency. They only showed this output at their display, but ramping up to QD > 1 should reasonably lead to the 3 million figure claimed in their release.
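The math behind that statement is simple: at QD=1 there is never more than one IO in flight, so each IO must fully complete before the next is issued, and per-IO latency is just the inverse of the IOPS figure:

```python
# Why 700k IOPS at QD=1 implies extremely low latency.
qd1_iops = 700_000
latency_us = 1 / qd1_iops * 1e6
print(f"Implied per-IO latency: {latency_us:.2f} us")   # -> ~1.43 us
```

That ~1.43 µs lines up nicely with the 1.5 µs single-access figure HGST claims, and it is why deeper queues could plausibly reach the 3 million IOPS number.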