Subject: Storage | July 28, 2015 - 12:41 PM | Allyn Malventano
Tagged: XPoint, non-volatile RAM, micron, memory, Intel
Everyone who reads SSD reviews knows that NAND flash memory comes with advantages and disadvantages. It is relatively inexpensive compared to RAM, and the data remains even with power removed (it is non-volatile), but it carries a penalty in its relatively slow programming (write) speeds. To help solve this, Intel and Micron today jointly launched a new type of memory technology.
XPoint (pronounced 'cross point') is a new class of memory technology with some amazing characteristics: 10x the density of DRAM and, compared to current NAND flash technology, 1000x the speed and, most importantly, 1000x the endurance.
128Gb XPoint memory dies, currently being made by Intel / Micron, are of a similar capacity to current generation NAND dies. This is impressive for a first generation part, especially since it is physically smaller than a current gen NAND die of the same capacity.
Intel stated that the method used to store the bits is vastly different from what is used in NAND flash memory today: the 'whole cell' properties change as a bit is being programmed, the fundamental physics involved is different, and it is writable in small amounts (NAND flash must be erased in large blocks). While they did not specifically state it, it looked to be phase change memory (*edit* at the Q&A Intel stated this is not Phase Change). The cost of this technology should end up falling somewhere between that of DRAM and NAND flash.
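That write-in-place property is the key difference worth dwelling on. Here is a minimal sketch, using assumed page and block sizes rather than any real device geometry, of why a small update is so much more expensive on NAND:

```python
# Illustrative sketch (assumed sizes, not real device geometry) contrasting
# write-in-place memory with NAND flash, which must erase whole blocks.

PAGE = 4096           # assumed NAND program granularity, in bytes
PAGES_PER_BLOCK = 64  # assumed erase granularity: one block = many pages

def nand_update_one_byte() -> int:
    """To change 1 byte, worst case, NAND must copy the live data out of
    the block, erase the entire block, then reprogram every page in it."""
    return PAGE * PAGES_PER_BLOCK  # bytes touched: the whole erase unit

def write_in_place_update_one_byte() -> int:
    """A write-in-place memory (as XPoint is described) changes only the
    cells that actually hold the new data."""
    return 1

amplification = nand_update_one_byte() / write_in_place_update_one_byte()
print(f"NAND touches {nand_update_one_byte()} bytes to update 1 byte "
      f"({amplification:.0f}x worst-case write amplification)")
```

The exact numbers vary by part, but the shape of the problem (a whole erase block touched per small write) is why block-erased flash pays such a penalty on small random writes.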
3D XPoint memory is already being produced at the Intel / Micron Flash Technology plant at Lehi, Utah. We toured this facility a few years ago.
Intel and Micron stated that this technology is coming very soon. 2016 was stated as a launch year, and there was a wafer shown to us on stage:
You know I'm a sucker for good wafer / die photos. As soon as this session breaks I'll get a better shot!
There will be more analysis to follow on this exciting new technology, but for now I need to run to a Q&A meeting with the engineers who worked on it. Feel free to throw some questions in the comments and I'll answer what I can!
*edit* - here's a die shot:
Added note - this wafer was manufactured on a 20nm process, and consists of a 2-layer matrix. Future versions should scale with additional layers to achieve higher capacities.
Subject: Graphics Cards | May 19, 2015 - 03:51 PM | Jeremy Hellstrom
Tagged: memory, high bandwidth memory, hbm, Fiji, amd
Ryan and the rest of the crew here at PC Perspective are excited about AMD's new memory architecture and the fact that they will be first to market with it. However, as any intelligent reader knows, a second opinion on the topic is worth finding. Look no further than The Tech Report, who have also been briefed on AMD's new memory architecture. Read on to see what they learned from Joe Macri, along with their thoughts on this successor to GDDR5 and on HBM2, which is already in the works.
"HBM is the next generation of memory for high-bandwidth applications like graphics, and AMD has helped usher it to market. Read on to find out more about HBM and what we've learned about the memory subsystem in AMD's next high-end GPU, code-named Fiji."
Here are some more Graphics Card articles from around the web:
- AMD HBM High Bandwidth Memory Technology Unveiled @ [H]ard|OCP
- Diamond Wireless Video Stream HD 1080P HDMI @ eTeknix
- KFA2 GeForce GTX 980 ‘8Pack Edition’ 4096MB @ Kitguru
- Gigabyte GTX 960 OC 2 GB @ techPowerUp
- GeForce GTX TITAN X Video Card Review @ Hardware Secrets
High Bandwidth Memory
UPDATE: I have embedded an excerpt from our PC Perspective Podcast that discusses the HBM technology that you might want to check out in addition to the story below.
The chances are good that if you have been reading PC Perspective or almost any other website that focuses on GPU technologies for the past year, you have read the acronym HBM. You might have even seen its full name: high bandwidth memory. HBM is a new technology that aims to turn the way a processor (GPU, CPU, APU, etc.) accesses memory upside down, almost literally. AMD has already publicly stated that its next-generation flagship Radeon GPU will use HBM as part of its design, but it wasn’t until today that we could talk about what HBM actually offers to a high performance processor like Fiji. At its core, HBM drastically changes how the memory interface works, how much power it requires, and what metrics we will use to compare competing memory architectures. AMD and its partners started working on HBM with the industry more than 7 years ago, and with the first retail product nearly ready to ship, it’s time to learn about HBM.
We got some time with AMD’s Joe Macri, Corporate Vice President and Product CTO, to talk about AMD’s move to HBM and how it will shift the direction of AMD products going forward.
The first step in understanding HBM is to understand why it’s needed in the first place. Current GPUs, including the AMD Radeon R9 290X and the NVIDIA GeForce GTX 980, utilize a memory technology known as GDDR5. This architecture has scaled well over the past several GPU generations, but we are starting to enter the world of diminishing returns. Balancing memory performance and power consumption is always a tough battle; just ask ARM about it. On the desktop component side we have much larger power envelopes to work inside, but if you plot it far enough into the future, the power curve that GDDR5 is on will soon hit a wall. The result would be either drastically more power-hungry graphics cards or stalled performance improvements in the graphics market – something we have not really seen in its history.
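For context, GDDR5 peak bandwidth is straightforward to compute: bus width times per-pin transfer rate. The inputs below are the published specifications of the two cards just mentioned, used here only as a back-of-envelope sketch rather than measured results:

```python
# Peak memory bandwidth: bus width (bits) x per-pin rate (GT/s) / 8 bits/byte.

def peak_bw_gb_per_s(bus_width_bits: int, data_rate_gtps: float) -> float:
    """Theoretical peak memory bandwidth in GB/s."""
    return bus_width_bits * data_rate_gtps / 8

# Published specs: R9 290X uses a 512-bit bus at 5 GT/s GDDR5,
# GTX 980 a 256-bit bus at 7 GT/s GDDR5.
print(peak_bw_gb_per_s(512, 5.0))  # R9 290X: 320.0 GB/s
print(peak_bw_gb_per_s(256, 7.0))  # GTX 980: 224.0 GB/s
```

Pushing those per-pin rates ever higher is exactly where the GDDR5 power curve bites; HBM instead goes wide and slow, trading signaling speed for bus width.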
While it’s clearly possible that current and maybe even next-generation GPU designs could still depend on GDDR5 as the memory interface, a move to a different solution is needed for the future; AMD is just making the jump earlier than the rest of the industry.
Subject: Storage, Shows and Expos | September 10, 2014 - 03:34 PM | Allyn Malventano
Tagged: TSV, Through Silicon Via, memory, idf 2014, idf
If you're a general computer user, you might have never heard the term "Through Silicon Via". If you geek out on photos of chip dies and wafers, and how chips are assembled and packaged, you might have heard about it. Regardless of your current knowledge of TSV, it's about to be a thing that impacts all of you in the near future.
Let's go into a bit of background first. We're going to talk about how chips are packaged. Micron has an excellent video on the process here:
The part we are going to focus on appears at 1:31 in the above video:
This is how chip dies are currently connected to the outside world. The dies are stacked (four high in the above pic) and a machine has to individually wire them to a substrate, which in turn communicates with the rest of the system. As you might imagine, things get more complex with this process as you stack more and more dies on top of each other:
16 layer die stack, pic courtesy NovaChips
...so we have these microchips with extremely small features, but to connect them we are limited to a relatively bulky process (called package-on-package). Stacking these flat planes of storage is a tricky thing to do, and one would naturally want to limit how many of those wires need to be connected. The catch is that those wires also equate to available throughput from the device (i.e. one wire per bit of the data bus). So, just how can we improve this method and increase data bus widths, throughput, etc.?
Before I answer that, let me lead up to it by showing how flash memory has just taken a leap in performance. Samsung has recently made the jump to VNAND:
By stacking flash memory cells vertically within a die, Samsung was able to make many advances in flash memory, simply because they had more room within each die. Because of the complexity of the process, they also had to revert to an older (larger) feature size. That compromise means the capacity of each die is similar to current 2D NAND tech, but the payoff is gains in speed, longevity, and power consumption from the new process.
I showed you the VNAND example because it bears a striking resemblance to what is now happening in the area of die stacking and packaging. Imagine if you could stack dies by punching holes straight through them and making the connections directly through the bottom of each die. As it turns out, that's actually a thing:
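Circling back to the wires-equal-throughput point above: since each wire carries one bit of the data bus, aggregate bandwidth scales directly with connection count, which is exactly what punching connections straight through the die makes practical. A minimal sketch, with an assumed round-number per-wire rate rather than a real device spec:

```python
# Sketch of the wires-equal-throughput tradeoff: at a fixed per-wire
# signaling rate, aggregate bandwidth scales directly with the number
# of data connections. The 1 Gb/s per-wire rate is assumed for
# illustration only.

def throughput_gb_per_s(data_wires: int, gbits_per_wire: float) -> float:
    """Aggregate throughput in GB/s for a bus of `data_wires` lines."""
    return data_wires * gbits_per_wire / 8

# A narrow wire-bonded bus vs. a much wider (e.g. TSV-enabled) bus:
print(throughput_gb_per_s(64, 1.0))    # 8.0 GB/s
print(throughput_gb_per_s(1024, 1.0))  # 128.0 GB/s
```

Wire bonding makes wide buses physically impractical; through-die connections remove that ceiling, which is the whole point of TSV.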
Subject: General Tech, Motherboards, Memory | July 6, 2014 - 03:53 AM | Scott Michaud
Tagged: overclocking, memory, gigabyte
About a week ago, HWBOT posted a video of a new DDR3 memory clock record, which was apparently beaten the very next day after the video was published. Tom's Hardware reported on the first of the two, allegedly performed by Gigabyte on their Z97X-SOC Force LN2 motherboard. The Tom's Hardware article also, erroneously, lists the second-place overclock (then first place) at 4.56 GHz when the actual clock was really half that (2.28 GHz), because DDR transfers data on both clock edges. This team posted their video with a recording of the overclock being measured by an oscilloscope, which supports the claim that they did not falsify the HWBOT result.
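The factor-of-two confusion is easy to pin down. DDR (double data rate) memory moves data on both the rising and falling clock edges, so the advertised transfer rate is exactly twice the actual clock frequency:

```python
# DDR advertised rate vs. actual clock: data moves on both clock edges.

def ddr_effective_mtps(actual_clock_mhz: float) -> float:
    """Effective transfer rate (MT/s) of a double-data-rate bus."""
    return actual_clock_mhz * 2

# The record run: a 2280 MHz actual clock is marketed as "DDR3-4560",
# which is where the erroneous 4.56 GHz figure came from.
print(ddr_effective_mtps(2280))  # 4560
```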
The now-first-place team, which managed 2.31 GHz on the same motherboard, did not go to the same level of proof, as far as I can tell.
This is the 2nd fastest overclock...
... but the fastest to be recorded with an oscilloscope, as far as I can tell
Before the machine crashes to a blue screen, the oscilloscope actually reports 2.29 GHz. I am not sure why they took 10 MHz off, but I expect it is because the system crashed before HWBOT was able to record that higher frequency. Either way, 2.28 GHz was a new world record, verified by video, whether or not it was immediately beaten.
Tom's Hardware also claims that liquid nitrogen was used to cool the system, which makes sense of why they would use an LN2 board. It could have been chosen just for its overclocking features, but that would have been a weird tradeoff: the LN2 board doesn't have mounting points for a CPU air or water cooler, so the extra features would have been offset by the need to build a custom CPU cooler if liquid nitrogen were not being used. It is also unclear how the memory was cooled: whether it was, somehow, liquid nitrogen-cooled too, or simply exposed to the air.
Subject: Memory, Storage | June 4, 2014 - 11:15 AM | Sebastian Peak
Tagged: ssd, solid state drive, pcie, pci-e ssd, memory, M.2, ddr4, computex 2014, computex, adata, 2tb ssd
ADATA has been showing off some upcoming products at Computex, spanning both PCIe SSDs and DRAM.
We'll begin with an upcoming line of PCIe Enterprise/Server SSDs powered by the SandForce SF3700-series controller. We've been waiting for products with the SF3700 controller since January, when ADATA showed a prototype board at CES, and ADATA is now showcasing the controller in the "SR1020" series drives.
The first is a 2TB 2.5" drive, but the interface was not announced (and the sample on the floor appeared to be an empty shell). The listed specs are performance up to 1800MB/s and 150K IOPS, with the drive powered by the SF-3739 controller. Support for both AHCI and NVMe is also listed, along with the usual TRIM, NCQ, and SMART support.
Another 2TB SSD was shown with exactly the same specs as the 2.5" version, but this one is built on the M.2 spec. The drive will connect via 4 lanes of Gen 2 PCI Express. Both drives in ADATA's SR1020 PCIe SSD lineup will be available in capacities from 240GB to 2TB, and retail pricing and availability is forthcoming.
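As a sanity check on those numbers, the claimed 1800MB/s fits under the ceiling of a 4-lane Gen 2 PCI Express link. Using the standard PCIe 2.0 figures (5 GT/s per lane, 8b/10b line encoding):

```python
# Usable bandwidth of a PCIe 2.0 link: 5 GT/s per lane, with 8b/10b
# encoding carrying 8 data bits per 10 bits transferred on the wire.

def pcie2_usable_mb_per_s(lanes: int) -> float:
    raw_gt_per_s = 5.0         # transfers per second per lane (GT/s)
    encoding_efficiency = 0.8  # 8b/10b: 8 data bits per 10 line bits
    bits_per_byte = 8
    return lanes * raw_gt_per_s * encoding_efficiency * 1000 / bits_per_byte

print(pcie2_usable_mb_per_s(4))  # 2000.0 MB/s, above the claimed 1800MB/s
```

So the drive's rated sequential speed sits just under the link's theoretical limit, which is typical for a Gen 2 x4 design.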
Moving on to the DRAM side, ADATA also showed new DDR4 modules in commodity and enthusiast flavors. Both of the registered DIMMs on display (an ultra-low-profile DIMM was also shown) carried standard DDR4 specs of 2133MHz at 1.2V, but ADATA also showed some performance DDR4 at their booth.
A pair of XPG Z1 DDR4 modules in action
No pricing or availability just yet on these products.
Ultra-Speed RAM, APU-Style
In our review of the Kingston HyperX Predator 2666MHz kit, we discovered what those knowledgeable about Intel memory scaling already knew: for most applications, and specifically games, there is no significant advantage to increases in memory speed past the current 1600MHz DDR3 standard. But this was only half of the story. What about memory scaling with an AMD processor, and specifically an APU? To find out, we put AMD’s top APU, the A10-7850K, to the test!
Ready for some APU memory testing!
AMD has created a compelling option with their APU lineup, and the inclusion of powerful integrated graphics allows for interesting build options with lower power and space requirements, even making tiny mini-ITX gaming systems realistic. It’s this graphical prowess, compared to any other onboard solution, that creates an interesting value proposition for any gamer looking at a new low-cost build. The newest Kaveri APUs are getting a lot of attention, and they beg the question: is a discrete graphics card really needed for gaming at reasonable settings?
Subject: Storage, Shows and Expos | January 8, 2014 - 12:57 AM | Allyn Malventano
Tagged: ram, micron, memory, ddr4, CES 2014, CES
While Crucial did not have much in the way of new flash memory product launches this year, Micron as a whole has been busily churning out further revisions of DDR4 memory. Where our visit last year revealed only a single prototype for us to look at, now we have all of the typical form factors covered:
From top down we have enterprise, enthusiast, OEM, and SO-DIMM form factors, all populated with DDR4 parts. All that needs to happen now is for motherboard and portable manufacturers to get on board with the new technology. As with all chicken-and-egg launches, someone needs to take the first plunge, and here we can see Micron has certainly been on the leading edge of things. That enterprise part above is a full 16GB (not bits!) of DDR4 capacity.
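Since chip capacities are typically quoted in gigabits (Gb) while module capacities come in gigabytes (GB), the bits-versus-bytes distinction called out above is just a factor of eight:

```python
# Gigabytes (GB) to gigabits (Gb): one byte is eight bits.

def gb_to_gbit(gigabytes: float) -> float:
    """Convert a capacity in gigabytes to gigabits."""
    return gigabytes * 8

print(gb_to_gbit(16))  # the 16 GB enterprise DIMM above holds 128.0 Gb
```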
Follow all of our coverage of the show at http://pcper.com/ces!
Subject: Memory | June 3, 2013 - 05:50 AM | Tim Verry
Tagged: xmp, overclocking, memory, haswell, G.Skill Trident X, G.Skill, ddr3 3000, ddr3
G.Skill is a company known for its DDR3 memory products and overclocking contests. It recently unveiled a new 32GB DDR3 RAM kit in its TridentX series that is clocked at an impressive 3,000 MHz!
The new G.Skill DDR3 3000MHz 32GB (4 x 8GB) memory kit is aimed at enthusiasts running Intel Haswell processors on Z87 motherboards. It features CAS12 latencies and can be run at 1.65V. It also supports Intel's XMP (Extreme Memory Profiles) standard, which will allow the motherboard to automatically configure the RAM for the full 3000 MHz clockspeed, though it requires a slight CPU overclock as well.
In G.Skill's own benchmark tests, the company managed to run its new 32GB TridentX memory at 3,000 MHz with CAS latencies of 12-14-14-35-CR2 at 1.65V. The Memtest Pro benchmark run was done on a system with an Intel Core i7-4770K and an ASUS Maximus VI Extreme Z87 motherboard. The Intel chip was running with a bus speed of 102.32 MHz and a multiplier of 39 for a total 3.99 GHz core clockspeed with all cores under load. Considering the i7-4770K is only rated for a maximum of DDR3-1600 memory, seeing it running DDR3 at 3GHz is impressive!
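The clock figures in that run check out: core clock is simply the base (bus) clock times the CPU multiplier. A quick sketch using the numbers reported above:

```python
# Verifying the reported clocks: core clock = base (bus) clock x multiplier.

def core_clock_ghz(bclk_mhz: float, multiplier: int) -> float:
    """CPU core clock in GHz from base clock (MHz) and multiplier."""
    return bclk_mhz * multiplier / 1000

# 102.32 MHz bus x 39 multiplier, as in the benchmark system above:
print(f"{core_clock_ghz(102.32, 39):.2f} GHz")  # 3.99 GHz
```

Note the slightly raised 102.32 MHz base clock is also what nudges the memory past a round DDR3-3000, which is the "slight CPU overclock" XMP requires here.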
The new 32GB (4x8GB) TridentX kit is joined by 8GB (2x4GB) and 16GB kits, all rated for DDR3-3000 speeds. The kits continue to be covered by G.Skill's lifetime warranty. The company has not announced pricing or availability, but expect to pay a hefty premium for this super-fast RAM: think upwards of $1,750, considering the existing 32GB DDR3-2933 C12 G.Skill kit is going for $1,700 on Newegg.
Subject: Memory | May 8, 2013 - 12:01 AM | Josh Walrath
Tagged: radeon ramdisk, radeon, memory, amd, 4GB, 2133, 1.65v
AMD makes memory! Ok, they likely contract out memory. Then they brand it! Then they throw in some software to make RAMDisks out of all that memory that you are not using. Let us face it; AMD is not particularly doing anything new here with memory. It is very much a commodity market that is completely saturated with quality parts from multiple manufacturers.
So why is AMD doing it? Well, I guess part of it is simply brand recognition, and potentially another source of income to help pad the bottom line. They will not sell these parts at a loss, and they will find buyers among the diehard AMD fans. Tim covered the previous release of AMD memory pretty well, and he looked at the performance results of the free RAMDisk software that AMD bundled with the DIMMs. It does exactly what it is supposed to, but of course it takes a portion of memory away. When dealing with upwards of 16 GB of memory in a desktop computer, sacrificing half of that is really not that big a deal unless heavy-duty image and video editing is required.
*Tomb Raider not included with Radeon Memory. Radeon RAMDisk instead!
Today AMD is announcing a new memory product and a new bundled version of the RAMDisk software. The top-end SKU is now the AMD Radeon RG2133 line of DDR3 modules. It comes in packages of up to 4 x 4GB DIMMs and carries a CAS latency of 10 at a reasonable 1.65v. These modules are programmed with both the Intel-based XMP and the AMD-based AMP profiles (MP stands for Memory Profiles… if that wasn’t entirely obvious). The modules themselves are reasonable in terms of size (they will fit in any board, even with larger heatsinks on the CPU). AMD claims that they are all high-quality parts, which again is not entirely surprising, since I do not know of anyone who advertises that their DIMMs feature only the most mediocre memory modules available.
Faster memory is faster, water is wet, and Ken still needs a girlfriend.
AMD goes on to claim that faster memory does improve overall system performance. Furthermore AMD has revealed that UV light is in fact a cancer causing agent, Cocoa Puffs will turn any milk brown, and passing gas in church will rarely be commented upon (unless it is truly rank or you start calling yourself “Legion”). Many graphs were presented that essentially showed an overclocked APU with this memory will outperform a non-overclocked APU with DDR-3 1600 units. Truly eye opening, to say the least.
How much RAMDisk can any one man take? AMD wants to know!
The one big piece of the pie that we have yet to talk about is the enhanced version of Radeon RAMDisk (is Farva naming these things?). This particular version can carve out up to 64 GB of memory for a RAMDisk! I can tell you this now, me and my 8 GB of installed memory will get a LOT of mileage out of this one! I can only imagine the product meeting. “Hey, I’ve got a great idea! We can give them up to 64 GB of RAMDisk!” While another person replies, “How do you propose getting people above 64 GB, much less 32 GB of memory on a consumer level product…?” After much hand wringing and mumbling someone comes up with, “I know! They can span it across two motherboards! That way they have to buy an extra motherboard AND a CPU! Think of our attach rate!” And there was much rejoicing.
So yes, more memory that goes faster is better. Radeon RAMDisk is not just a comic superhero, it can improve overall system performance. Combine the two and we have AMD Radeon Memory RG2133 with 64 GB of RAMDisk. Considering that the top SKU will feature 4 x 4GB DIMMS, a user only needs to buy four kits and four motherboards and processors to get a 64GB RAMDisk. Better throw in another CPU and motherboard so a user can at least have 16GB of memory available as, you know, memory.
Update and Clarification
Perhaps my tone was a bit too sarcastic, but I am just not seeing the value here. Apparently (and I was not given this info beforehand) the 4 x 4 GB kits with the 64 GB RAMDisk will retail at $155. Taking a quick look at Newegg, I see that a user can buy quite a few different 2 x 8 GB 2133 kits anywhere from $139 to $145 with similar or better latencies and voltages. Around $155, users will get better latencies and voltages down to 1.5v. For 4 x 4GB kits we again see prices start at the $139 mark, but there are a significant number of other kits with, again, better voltages and latencies from $144 through $155.
Users can also get the free version of Radeon RAMDisk, which will utilize up to 4GB of space. There are multiple other software packages for not a whole lot of money (less than $10) that will provide up to 16 GB of RAMDisk. I just find the whole kit to be comparable to what is currently out there. Offering a 64 GB RAMDisk for use with 16 GB of total system memory just seems really silly. The only way that could possibly be interesting would be if you could allocate 8 GB of it to RAM and the other 56 GB to a fast SSD. I do not believe that to be the case with this software, but I would love to be proven wrong.