Subject: Memory | August 25, 2016 - 02:39 AM | Tim Verry
Tagged: TSV, SK Hynix, Samsung, hot chips, hbm3, hbm
Samsung and SK Hynix were in attendance at the Hot Chips Symposium in Cupertino, California to (among other things) talk about the future of High Bandwidth Memory (HBM). In fact, the companies are working on two new HBM products: HBM3 and an as-yet-unbranded "low cost HBM." HBM3 will replace HBM2 at the high end and is aimed at the HPC and "prosumer" markets while the low cost HBM technology lowers the barrier to entry and is intended to be used in mainstream consumer products.
As currently planned, HBM3 (Samsung refers to its implementation as Extreme HBM) features double the density per layer and at least double the bandwidth of the current HBM2 (which so far is only used in NVIDIA's planned Tesla P100). Specifically, the new memory technology offers 16Gb (~2GB) per layer, and eight (or more) layers can be stacked together using TSVs into a single chip. So far we have seen GPUs use four HBM chips on a single package, and if that holds true with HBM3 and interposer size limits, we may well see future graphics cards with 64GB of memory! Considering the HBM2-based Tesla will have 16GB and AMD's HBM-based Fury X cards had 4GB, HBM3 is a sizable jump!
Capacity is not the only benefit, though. HBM3 doubles the bandwidth versus HBM2, with 512GB/s (or more) of peak bandwidth per stack! In the theoretical example of a graphics card with 64GB of HBM3 (four stacks), that works out to roughly 2 TB/s of theoretical maximum peak bandwidth! Real world numbers will be lower, but that is still an enormous amount of bandwidth, which is exciting because it opens a lot of possibilities for gaming, especially as developers push graphics further towards photorealism and resolutions keep increasing. HBM3 should be plenty for a while as far as keeping the GPU fed with data on the consumer and gaming side of things, though I'm sure the HPC market will still crave more bandwidth.
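For the curious, the headline numbers can be sanity checked with a bit of arithmetic. The four-stack, eight-layer card here is hypothetical, matching the example in the text:

```python
# Back-of-the-envelope check of the HBM3 figures above (hypothetical
# card: 4 stacks, 8 layers per stack, 16Gb per layer, 512GB/s per stack).

GB_PER_LAYER = 16 / 8          # 16Gb per layer = 2GB
LAYERS_PER_STACK = 8
STACKS = 4
BW_PER_STACK_GBPS = 512        # peak bandwidth per stack, GB/s

capacity_gb = GB_PER_LAYER * LAYERS_PER_STACK * STACKS
total_bw = BW_PER_STACK_GBPS * STACKS

print(f"Capacity: {capacity_gb:.0f} GB")   # 64 GB
print(f"Peak bandwidth: {total_bw} GB/s")  # 2048 GB/s, i.e. ~2 TB/s
```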
Samsung further claims that HBM3 will operate at similar (~500MHz) clocks to HBM2, but will use "much less" core voltage (HBM2 is 1.2V).
Stacked HBM memory on an interposer surrounding a processor. Upcoming HBM technologies will allow memory stacks with double the number of layers.
HBM3 is perhaps the most interesting technologically; however, the "low cost HBM" is exciting in that it will enable HBM to be used in the systems and graphics cards most people purchase. There were fewer details available on this new lower cost variant, but Samsung did share a few specifics. The low cost HBM will offer up to 200GB/s of peak bandwidth per stack while being much cheaper to produce than current HBM2. In order to reduce the cost of production, there is no buffer die or ECC support, and the number of Through Silicon Via (TSV) connections has been reduced. To compensate for the lower number of TSVs, the pin speed has been increased to 3Gbps (versus 2Gbps on HBM2). Interestingly, Samsung would like for low cost HBM to support traditional silicon as well as potentially cheaper organic interposers. According to NVIDIA, TSV formation is the most expensive part of interposer fabrication, so making reductions there (and somewhat making up for it with increased per-connection speeds) makes sense for a cost-conscious product. It is unclear whether organic interposers will win out here, but it is nice to see them get a mention, and they are an alternative worth looking into.
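Out of curiosity, we can estimate what the TSV reduction might look like; the pin counts below are inferred from the quoted bandwidths and pin speeds, not taken from Samsung's slides:

```python
# Rough sanity check on the TSV-reduction tradeoff described above.
# Pin counts here are inferred, not official figures.

def data_pins_needed(peak_gbytes_s, pin_speed_gbps):
    """Data pins required to hit a peak bandwidth at a given per-pin rate."""
    return peak_gbytes_s * 8 / pin_speed_gbps

hbm2_pins = data_pins_needed(256, 2.0)      # HBM2: 256GB/s at 2Gbps per pin
low_cost_pins = data_pins_needed(200, 3.0)  # low cost HBM: 200GB/s at 3Gbps per pin

print(f"HBM2: ~{hbm2_pins:.0f} data pins")          # ~1024, matching HBM2's 1024-bit bus
print(f"Low cost: ~{low_cost_pins:.0f} data pins")  # ~533, roughly half as many
```

So the faster pins let Samsung cut the data connections roughly in half while giving up only about 20% of HBM2's per-stack bandwidth.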
Both new HBM technologies are still years away and the designs are subject to change, but so far both plans are looking rather promising. I am intrigued by the possibilities and hope to see new products take advantage of the increased performance (and, in the latter case, lower cost). On the graphics front, HBM3 is way too far out to make a Vega release, but it may come just in time for AMD to incorporate it into its high end Navi GPUs, and by 2020 the battle between GDDR and HBM in the mainstream should be heating up.
What are your thoughts on the proposed HBM technologies?
Subject: Memory | August 20, 2016 - 01:25 AM | Tim Verry
Tagged: X99, Samsung, ripjaws, overclocking, G.Skill, ddr4, Broadwell-E
Early this week at the Intel Developer Forum in San Francisco, California, G.Skill showed off new low latency DDR4 memory modules for desktops and notebooks. The company launched two Trident series DDR4 3333 MHz kits and one Ripjaws branded DDR4 3333 MHz SO-DIMM kit. While these speeds are not close to the fastest we have seen from them, these modules offer much tighter timings. All of the new memory modules use Samsung 8Gb chips and will be available soon.
On the desktop side of things, G.Skill demonstrated a 128GB (8x16GB) DDR4-3333 kit with timings of 14-14-14-34 running on an Asus ROG Rampage V Edition 10 motherboard with an Intel Core i7 6800K processor. They also showed a 64GB (8x8GB) kit clocked at 3333 MHz with timings of 13-13-13-33, running on a system with the same i7 6800K and an Asus X99 Deluxe II motherboard.
G.Skill demonstrating 128GB DDR4-3333 memory kit at IDF 2016.
In addition to the desktop DIMMs, G.Skill showed a 32GB Ripjaws kit (2x16GB) clocked at 3333 MHz running on an Intel Skull Canyon NUC. The SO-DIMM had timings of 16-18-18-43 and ran at 1.35V.
Nowadays lower latency is not quite as important as it once was, but there is still a slight performance advantage to be had from tighter timings, and pure clockspeed is not the only RAM metric that matters. Overclocking can get you lower CAS latencies (sometimes at the cost of more voltage), but if you are not into that tedious process and are buying RAM anyway, you might as well go for the modules with the lowest latencies out of the box at the clockspeeds you are looking for. To be honest, though, I am not sure how popular RAM overclocking is these days outside of benchmark runs and extreme overclockers.
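For reference, the absolute latency implied by a CAS figure is easy to work out: CL cycles divided by the memory clock, which is half the effective data rate for DDR. A quick sketch using the kits above:

```python
# First-word latency in nanoseconds for DDR memory:
# CL cycles / (data_rate / 2) clocks per second = CL * 2000 / data_rate_MTps ns.

def cas_latency_ns(cl, data_rate_mtps):
    return cl * 2000 / data_rate_mtps

# The three new G.Skill kits: CL13 and CL14 desktop, CL16 SO-DIMM.
for cl in (13, 14, 16):
    print(f"DDR4-3333 CL{cl}: {cas_latency_ns(cl, 3333):.2f} ns")
# CL13 ~7.80 ns, CL14 ~8.40 ns, CL16 ~9.60 ns
```

This is also why raw clockspeed alone can mislead: a slower kit with tight timings can match a faster kit with loose ones on first-word latency.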
Overclocking Innovation session at IDF 2016.
With regards to extreme overclocking, there was reportedly an "Overclocking Innovation" event at IDF where Asus overclocker Elmor achieved a new CPU overclocking record of 5,731.78 MHz on the i7 6950X, running on a system with G.Skill memory and an Asus motherboard. The company's DDR4 record of 5,189.2 MHz was not beaten at the event, G.Skill notes in its press release (heh).
Are RAM timings important to you when looking for memory? What are your thoughts on the ever increasing clocks of new DDR4 kits with how overclocking works on the newer processors/motherboards?
It always feels a little odd covering NVIDIA’s quarterly earnings due to how the company presents its financial calendar. No, we are not reporting from the future. Yes, it can be confusing when comparing results and getting your dates mixed up. Regardless of the dates involved, NVIDIA did exceptionally well in a quarter that is typically the second weakest after Q1.
NVIDIA reported revenue of $1.43 billion. This is a jump from an already strong Q1 in which they took in $1.30 billion. Compare this to the $1.027 billion of its competitor AMD, which provides CPUs as well as GPUs. NVIDIA sold a lot of GPUs as well as other products. Their primary money makers were consumer GPUs and the professional and compute markets, where they have a virtual stranglehold at the moment. The company’s GAAP net income is a very respectable $253 million.
The release of the latest Pascal based GPUs was the primary mover for this latest quarter's gains. AMD has had a hard time competing with NVIDIA for market share. The older Maxwell based chips performed well against the entire line of AMD offerings and typically did so with better power and heat characteristics. Even though the GTX 970 was somewhat limited in its memory configuration compared to the AMD products (3.5 GB + 0.5 GB vs. a full 4 GB implementation), it was a top seller in its class. The same could be said for the products up and down the stack.
Pascal was released at the end of May, but the company had already been shipping chips to its partners as well as building the “Founder’s Edition” models to its exacting specifications. These were strong sellers from the end of May through the end of the quarter. NVIDIA recently unveiled their latest Pascal based Quadro cards, but we do not know how much of an impact those have had on this quarter. NVIDIA has also been shipping, in very limited quantities, Tesla P100 based units to select customers and outfits.
Subject: Storage | August 10, 2016 - 02:00 PM | Allyn Malventano
Tagged: 2.5, V-NAND, ssd, Samsung, nand, FMS 2016, FMS, flash, 64-Layer, 32TB, SAS, datacenter
...now this picture has been corrected for extreme parallax and was taken in far from ideal conditions, but you get the point. Samsung's keynote is coming up later today, and I have a hunch this will be a big part of what they present. We did know 64-Layer was coming, as it was mentioned in Samsung's last earnings announcement, but confirmation is nice.
*edit* now that the press conference has taken place, here are a few relevant slides:
With 48-Layer V-NAND announced last year (and still rolling out), it's good to see Samsung pushing hard into higher capacity dies. 64-Layer enables 512Gbits (64GB) per die, and 100MB/s per die maximum throughput means even lower capacity SSDs should offer impressive sequentials.
Samsung 48-Layer V-NAND. Pic courtesy of TechInsights.
We will know more shortly, but for now, dream of even higher capacity SSDs :)
*edit* and this just happened:
*additional edit* - here's a better picture taken after the keynote:
The 32TB model in their 2.5" form factor displaces last year's 16TB model. The drive itself is essentially identical, but the flash packages now contain 64-layer dies, doubling the available capacity of the device.
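The doubling checks out on paper if the 32TB drive reuses the 16TB model's layout; the 16-die packages and 32-package count below are assumptions carried over from last year's drive, not confirmed for this one:

```python
# Capacity math for the 32TB drive, assuming it keeps the 16TB model's
# layout (16 dies per package, 32 packages) and swaps in 64-layer dies.

DIE_GB = 512 / 8        # 512Gb 64-layer die = 64GB (48-layer was 256Gb = 32GB)
DIES_PER_PACKAGE = 16   # assumed, same as predecessor
PACKAGES = 32           # assumed, same as predecessor

package_gb = DIE_GB * DIES_PER_PACKAGE
total_tb = package_gb * PACKAGES / 1024

print(f"{package_gb:.0f} GB per package, {total_tb:.0f} TB raw")  # 1024 GB, 32 TB
```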
Subject: Storage | August 9, 2016 - 06:44 PM | Jeremy Hellstrom
Tagged: ssd t3, Samsung, portable storage
Just because you are on the road, there is no reason to subject yourself to HDD speeds when transferring files. Not only will an SSD be quieter and more resilient, but the drive's USB 3.1 Gen 1 Type-C port offers up to a rated 450MB/s transfer speed. This particular 2TB portable SSD uses the same MGX controller as the 850 EVO, and the NAND is Samsung's 48-layer TLC V-NAND. The Tech Report previously tried out the T1 model, so their expectation was that this drive would improve performance in addition to offering larger drive sizes. Does it live up to those expectations? Find out in their full review.
"Not all new SSDs go inside your computer. We take a quick look at Samsung's latest V-NAND-powered external drive, the Portable SSD T3, to see what it's like to put 2TB of fast storage in one's pocket."
Here are some more Storage reviews from around the web:
- Corsair Neutron XTi SSD Review (480GB) @ The SSD Review
- Kingston SSDNow UV400 SSD Review (480GB) @ The SSD Review
- Crucial MX300 750GB Limited Edition SSD Review @ NikKTech
- Apricorn Aegis Secure Key 3.0 Review – Data Protection For Every Security Need @ The SSD Review
- Kingston 512GB SDXC Card @ The SSD Review
- QNAP TurboNAS TS-531P-8G NAS Server Review @ NikKTech
- Synology DiskStation DS916+ 4-Bay SMB NAS @ eTeknix
- QNAP TS-453A QTS-Ubuntu Combo NAS @ eTeknix
- Synology DS916+ 4-bay NAS @ techPowerUp
Subject: General Tech | August 9, 2016 - 02:55 PM | Jeremy Hellstrom
Tagged: XPoint, Samsung, Intel, HybriDIMM
A bit of a correction, folks: Netlist developed the HybriDIMMs using Samsung DRAM and NAND, united using their own proprietary interface. This, and my confusion, is in part due to some nasty and very costly IP litigation behind the scenes which led to Samsung getting more credit for this than they deserved.
Netlist and Diablo Technologies worked together for a while and then parted ways. Soon after the split, Diablo licensed SMART to produce ULLtraDIMMs and a court case was born. Not long afterwards SanDisk grabbed the IP from Diablo, and now WD is buying SanDisk, making this an utter nightmare for a smaller company. Samsung invested $23m in Netlist and offered a source of chips, albeit likely with strings attached, which has allowed Netlist to develop HybriDIMMs.
Netlist has developed HybriDIMMs, replacing some of the DRAM on a memory module with NAND. This allows you to significantly increase the amount of memory available on a DIMM while reducing the price dramatically. The drawback is that NAND is significantly slower than DRAM; they intend to overcome that with predictive algorithms, which they call PreSight, that pre-fetch data from the NAND and stage it in DRAM. This will compete with Intel's Optane XPoint DIMMs once those are released, and will mean the DRAM market splits in two: the DRAM we are currently used to and these hybrid NAND DIMMs. Check out more details over at The Register.
"Gold plate can give a durable and affordable alloy a 24-carat veneer finish, adding value to cheap metal. DRAM gives Samsung-Netlist Hybrid DIMMs a cache veneer, providing what looks like DRAM to applications but is really persistent NAND underneath, cheaper than DRAM and lots of it."
Here is some more Tech News from around the web:
- Linux subsystem could cause Windows 10 Anniversary Update to eat itself @ The Inquirer
- Google study shows unwanted software is a bigger headache than malware @ The Inquirer
- Save Up To 70% On Steamcrate Subscriptions: Get 10 New Games Each Month @ Gizmodo
- Hacker Uses Fake Boarding Pass App To Get Into Fancy Airline Lounges @ Slashdot
Subject: Storage | August 1, 2016 - 11:03 PM | Scott Michaud
Tagged: ssd, Samsung, enterprise ssd
Allyn first mentioned this device last year, but they're apparently now shipping for a whopping $10,000 USD. To refresh, the PM1633a is an SSD from Samsung that packs 15.36TB into a 2.5-inch form factor. According to Samsung, it does this by stacking 16 dies, each containing 48 layers of flash cells, into a 512GB package.
It's unclear how many packages are installed in the device, because we don't know how much over-provisioning Samsung provides, but the advertised capacity equates to exactly 30 packages. Update @ 11:30pm: Turns out I was staring right at it in the old press release. The drive has 32 packages, so 16384 GB, once you account for over-provisioning.
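As a quick sanity check on those figures, treating everything in the same GB units for simplicity (real flash is binary-sized while advertised capacities are decimal, so actual over-provisioning will be somewhat higher than this):

```python
# Raw vs. advertised capacity for the PM1633a, per the figures above.
# Units are simplified: both sides treated as plain GB.

PACKAGE_GB = 512        # 16 dies of 48-layer 256Gb flash per package
PACKAGES = 32           # per the updated press release figure
ADVERTISED_GB = 15360   # 15.36TB advertised capacity

raw_gb = PACKAGE_GB * PACKAGES
overprovision = (raw_gb - ADVERTISED_GB) / raw_gb

print(f"Raw: {raw_gb} GB, over-provisioning ~{overprovision:.2%}")  # 16384 GB, ~6.25%
```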
Image Credit: Samsung
Down at CDW, they are selling them for $10,311.99 USD with the option to lease for $321.73 / month. That's only 2.1c/GB... per month... for probably three whole years. No Ryan, that doesn't count. The warranty period doesn't seem to be listed, but Samsung will cover up to 15.36TB per day in writes. I mean, we knew it would be expensive, given its size and performance. At least it's only ~65c/GB.
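Working the math out: the ~65c/GB figure uses the round $10,000 headline price, while the actual CDW list price lands slightly higher per GB.

```python
# Per-GB cost of the PM1633a at the prices quoted above.

cdw_price_usd = 10311.99
lease_per_month_usd = 321.73
capacity_gb = 15360               # 15.36TB advertised

buy_cents_per_gb = cdw_price_usd / capacity_gb * 100
lease_cents_per_gb_month = lease_per_month_usd / capacity_gb * 100

print(f"Buy (CDW list): ~{buy_cents_per_gb:.0f}c/GB")          # ~67c/GB
print(f"Lease: ~{lease_cents_per_gb_month:.1f}c/GB per month") # ~2.1c/GB per month
```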
Subject: General Tech | July 14, 2016 - 01:50 PM | Ryan Shrout
Tagged: video, Samsung, rx 480, radeon, Primochill, praxis, power consumption, podcast, phononic, gtx 1060, amd, 850 EVO, 4TB
PC Perspective Podcast #408 - 07/14/2016
Join us this week as we discuss a conclusion to the RX 480 power issue, the GTX 1060, a 4TB Samsung 850 EVO and more!
The URL for the podcast is: http://pcper.com/podcast - Share with your friends!
- iTunes - Subscribe to the podcast directly through the Store (audio only)
- Google Play - Subscribe to our audio podcast directly through Google Play!
- RSS - Subscribe through your regular RSS reader (audio only)
- MP3 - Direct download link to the MP3 file
This episode of the PC Perspective Podcast is sponsored by Lenovo!
Hosts: Ryan Shrout, Allyn Malventano, Jeremy Hellstrom, and Josh Walrath
Introduction, Specifications, and Packaging
Everyone expects SSD makers to keep pushing out higher and higher capacity SSDs, but the thing holding them back is whether there is sufficient market demand for that capacity. With that in mind, it appears Samsung has decided it was high time for a 4TB model of their 850 EVO. Today we will be looking at this huge capacity point, paying close attention to any performance dips that sometimes result from pushing a given SSD controller / architecture to extreme capacities.
This new 4TB model benefits from the higher density of Samsung’s 48-layer V-NAND. We performed a side-by-side comparison of 32 and 48 layer products back in March, and found the newer flash to bring Latency Percentile profiles closer to the MLC-equipped Pro model than the 32-layer (TLC) EVO did:
Latency Percentile showing reduced latency of Samsung’s new 48-layer V-NAND
We’ll be looking into all of this in today’s review, along with trying our hand at some new mixed paced workload testing, so let’s get to it!
Subject: General Tech | July 7, 2016 - 12:37 PM | Jeremy Hellstrom
Tagged: UFS, Samsung, microSD
Samsung just announced the first product based on the new Universal Flash Storage standard, which will make microSD cards as obsolete as your old mix tapes. The cards will come in sizes from 32GB up to 256GB, but it is the speed of these new storage devices that will impress, not the density. Samsung quotes sequential read speeds of up to 530MB/s, allowing you to quickly dump HD quality video to a PC, along with random reads of 40,000 IOPS if you have a usage scenario that reads in such a manner. For recording video you can expect up to 170MB/s sequential write speed or 35,000 random write IOPS; 4K drone recordings won't be limited by bandwidth anymore.
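To put those speeds in perspective, here is a rough illustration of offload times; the 32GB clip size and the ~90MB/s UHS-I microSD figure are assumptions for comparison, not from Samsung's announcement:

```python
# Illustrative offload-time comparison: quoted UFS sequential read speed
# vs. an assumed fast microSD card (~90MB/s UHS-I, a typical figure).

def transfer_seconds(size_gb, mb_per_s):
    return size_gb * 1000 / mb_per_s

ufs = transfer_seconds(32, 530)      # UFS card at 530MB/s
microsd = transfer_seconds(32, 90)   # assumed fast microSD

print(f"UFS: ~{ufs:.0f}s, microSD: ~{microsd:.0f}s")  # ~60s vs ~356s
```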
Unfortunately, as The Inquirer points out, no one can use these yet as we haven't a place to stick them.
"What UFS does mean already is that we'll start to see a bottleneck lifted in storage speeds in phones and tablets. As we've already seen, MicroSD doesn't cut it in the speed stakes, and it doesn't seem so long ago that we reported on torn down phones with 'internal' memory that was really just an SD card hidden away."
Here is some more Tech News from around the web:
- NVIDIA Announces The GeForce GTX 1060, Linux Tests Happening @ Phoronix
- Official NVIDIA GeForce GTX 1060 Announcement @ [H]ard|OCP
- Symantec admits it won't patch 'catastrophic' security flaws until mid-July @ The Inquirer
- Hackers Can Use Smart Watch Movements To Reveal A Wearer's ATM PIN @ Slashdot
- Huge double boxset of Android patches lands after Qualcomm disk encryption blown open @ The Register
- ⌘+c malware smacks Macs, drains keychains, pours over Tor @ The Register
- TP-Link abandons 'forgotten' router config domains @ The Register