PCPer Mailbag #59 - Nearly 1 Hour of Storage Discussion With Mr. Malventano

Subject: Editorial | October 19, 2018 - 09:00 AM
Tagged: video, pcper mailbag, Allyn Malventano

It's time for the PCPer Mailbag, our weekly show where Ryan and the team answer your questions about the tech industry, the latest and greatest GPUs, the process of running a tech review website, and more!

Allyn takes the hot seat this week to answer your storage questions:

00:23 - Would you worry about NVMe cooling? If you’re running video editing workloads, should you spend time trimming thermal pads on motherboard heatsinks to avoid overcooling the flash? Also, NVMe or Optane for an editing rig?

10:39 - I recently cloned my Samsung 850 EVO to a new ADATA SX8200. All was well at first, but when I formatted the EVO, Windows refused to boot and gave a BSoD. It would only boot once I removed every other drive from the system except the NVMe. Any ideas why?

13:51 - If I have a 3-year-old SM941 and it works fine, what kind of upgrade path do I have? MLC/NVMe are still good...better than TLC/QLC. So, Optane?

19:49 - What are the developments needed to give us even faster speeds than today’s NVMe? Where is the bottleneck that limits current speeds?

28:01 - Will NVMe SSD pricing ever catch up with SATA SSD pricing?

30:33 - Will QLC replace TLC on lower-end SSDs?

32:42 - Can you overclock SSDs? If so, what kind of positive and negative impacts would it have?

37:20 - Does regularly TRIMing an SSD extend its life? If so, how often should I run a TRIM command on my drives?

43:57 - Is there any negative impact on an SSD from leaving it connected to power at all times even when idle compared to only powering it up when I need to use it? I use an SSD for my wireless Samba server in my semi-truck to stream videos in my off time but most of the day it sits there doing nothing.

46:55 - Allyn, have you and Steve Gibson ever done a podcast or talk together?

Want to have your question answered on a future Mailbag? Leave a comment on this post or in the YouTube comments for the latest video. Check out new Mailbag videos (usually) each week!

Be sure to subscribe to our YouTube Channel to make sure you never miss our weekly reviews and podcasts, and please consider supporting PC Perspective via Patreon to help us keep videos like our weekly mailbag coming!

Source: YouTube

October 19, 2018 | 12:21 PM - Posted by Kareha

I'm stuck at work for another 2 hours so can't wait to get home and watch this :)

October 19, 2018 | 01:49 PM - Posted by grrrumpcat (not verified)

Here's a question: when will Nvidia start shipping the RTX 2080 Ti with 12 gigabytes of RAM on board?

October 19, 2018 | 11:51 PM - Posted by bobhumplick (not verified)

So in HWiNFO64, under Samsung 960, it has 2 temp readings. One's at 30C and one's at 48C. Is one the controller and one the flash? If not, what are they?

If that one was an easy answer, then here is a backup question. OK, so people get wound up because Intel limits NVMe to the DMI bandwidth. If you have two 970 EVOs, they both basically share a Gen 3 x4 link, and one of them can almost max that out by itself. But my question is: what, if any, real-world workloads would that actually bottleneck? I mean, for video editing you could have a third of that reading from a source drive, a third reading and writing to a scratch disk, and the final third writing to a destination drive. Will that actually bottleneck in something like Premiere? Or anything besides file transfers and/or maybe some kind of server app like a database or something crazy?

October 22, 2018 | 03:47 PM - Posted by Allyn Malventano

I'm pretty sure 1 is for the media (flash) and 2 is the internal controller temp, which is expected to run hot while active (similar to any other internal CPU temp reading).

For the chipset bottleneck thing, consider that most NVMe SSDs don't fully saturate x4, so you see an overall advantage just moving to a RAID-0 of a pair of drives, even if bottlenecked by the chipset link. Another part of the equation is that random performance still sees a decent boost, since most NAND SSDs can't come close to saturating a x4 link when performing more random ops. Another factor is generally lower loading on the multiple parts dividing the same workload - queue depths will be lower to each drive, and SSD caches are additive across the array, increasing the likelihood that you are writing at full DMI throughput even under heavier loads. I did a review of triple M.2 RAID a while back that shows off the advantages using Latency Percentile, etc.
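The bandwidth sharing above can be sketched as a quick back-of-envelope. Note the GB/s figures below are ballpark assumptions for illustration, not measurements:

```python
# DMI 3.0 is electrically a PCIe 3.0 x4 link; assume roughly 3.5 GB/s of
# usable throughput, shared by everything hanging off the chipset.
DMI_BW = 3.5          # GB/s, assumed usable DMI 3.0 throughput
SEQ_PER_DRIVE = 3.4   # GB/s, assumed peak sequential read of a 970 EVO-class drive
RND_PER_DRIVE = 1.5   # GB/s, assumed throughput of the same drive under random ops

# Two drives in RAID-0 behind the chipset:
seq_pair = min(2 * SEQ_PER_DRIVE, DMI_BW)  # sequential: capped by the shared link
rnd_pair = min(2 * RND_PER_DRIVE, DMI_BW)  # random: combined load fits under the cap

print(seq_pair)  # capped at ~3.5 - still a hair above a single drive
print(rnd_pair)  # 3.0 - roughly double a single drive, no DMI bottleneck
```

So sequential transfers hit the chipset ceiling almost immediately, while random workloads scale nearly linearly across the pair, which is the case that matters for most real-world use.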

October 20, 2018 | 12:55 AM - Posted by Dark_wizzie

If I have the same number of cores and frequency as another chip but more L3, I should expect higher single-thread perf, right? That's both higher L3 per core and higher total L3. What about just total L3? 4 cores / 8MB vs 8 cores / 16MB - which has faster single-thread perf? Does the answer vary between Intel and AMD?

October 20, 2018 | 11:51 AM - Posted by EddieObscurant (not verified)

Allyn, I think you didn't explain yourself correctly. Trimming an SSD or not doesn't affect its lifespan. It affects its performance for sure, but it doesn't affect its health.

For example, if you have the SSD on an external USB-to-SATA controller that doesn't pass the TRIM command, the performance of the SSD will be much slower if you write the whole disk, then format it and start using it again.

The disk will be empty, but since the TRIM command wouldn't have passed, the cells will be dirty and the performance will be slow (apart from the internal garbage collection of the SSD). But its lifespan shouldn't be affected. The SSD would just be faster if the TRIM command were supported by the controller.

October 22, 2018 | 03:39 PM - Posted by Allyn Malventano

The slowness observed from an untrimmed external SSD is the result of the additional shuffling of data that must take place in the background. That additional data movement translates to more block erasures for a given amount of host writes (increased write amplification), which means more wear on the flash. Using TRIM is effectively similar to running an SSD with greater amounts of overprovisioning, which is a technique to increase endurance.

October 22, 2018 | 08:36 PM - Posted by Jimbo Jam (not verified)

What are the biggest limitations with 4K rendering? Even with the 2080Ti, 4K benchmarks vary from over 100 FPS in games like Wolfenstein II, DOOM, and Battlefield 1 to sub 60 FPS timings with games like Shadow of the Tomb Raider, FF15, Ghost Recon Wildlands, etc. Is 4K more of a software limitation at this point or is it still more of a hardware limitation?

October 25, 2018 | 08:57 AM - Posted by Kokorniokos (not verified)

I would like to learn how easy or difficult it is for Microsoft to make the (dark) UI consistent. How does the company allocate developer and artist resources? How much time and how many people have to work to make, for example, all right-click menus the same color? I am confused about why they introduce new stuff in every update but fail to tie up loose ends.

November 8, 2018 | 03:44 AM - Posted by dreamcat4

Allyn, we often hear that NVMe SSDs don't make much speed difference over SATA for regular client desktop workloads, which is already pretty well understood. But what about running ZFS / Linux on a laptop with two 1TB SSDs in RAID 1 (let's assume 32GB of RAM here), for a workload that includes VMs, sometimes compiling / building things, and similar tasks - in some circumstances also being forced to use certain really poorly optimized software? Would this kind of 'in-between' workload translate into any worthwhile difference for NVMe vs SATA? And what if we consider the whole lifetime of the machine - in excess of 5 years, not just 2? I am asking because SATA drives still seem to remain cheaper than NVMe ones, and this price difference seems to add up more when buying the larger capacities. In fact, why should it stack up on the larger sizes? If the NAND flash costs the same for both types of drive, shouldn't the cost of the controller become a smaller fraction of the total BOM cost? At what point will companies consider their R&D investment in NVMe controllers to have been paid back? Does this have anything to do with market forces in relation to OEM SSDs that use Silicon Motion or Phison controllers?
