SSD as Long Term Storage Testing

(htwingnut.com)

132 points | by userbinator 363 days ago

11 comments

  • Robotbeat 362 days ago
    Interestingly, unpowered flash memory has much longer data retention at low temperatures and much SHORTER retention at elevated temperatures (the typical JEDEC spec is just 1 year of retention at 30C unpowered).

    Flash memory in a freezer (assuming you don't have cold-induced circuit board failures due to CTE mismatch) could last hundreds of years. In a hot car, maybe a month. https://www.curtisswrightds.com/media-center/blog/extended-t...

    None of that is particularly surprising, but what's interesting is that temperature has the opposite effect on writing... Writing at high temperature (followed by cooling to ambient... or lower) actually improves data retention over just writing at ambient.

    • userbinator 362 days ago
      Is the 1-year spec after all the rated program/erase cycles have already been used, so the flash cells are worn and much "leakier"?

      That's still rather disturbing, since I have datasheets for NAND flash from around 2 decades ago that specify 10 years of retention after 100K cycles (although probably at 25C), and some slightly newer ones with 5 years / 1.5K cycles (MLC); it also explains the increasing secrecy surrounding the flash industry. The very few datasheets I could find for TLC don't even mention retention, specify endurance only vaguely, and refer you to some (probably super-secret) qualification report for the actual numbers.

      Then again, perhaps I shouldn't be surprised, ever since they came up with the misnomers that are "TLC" and now "QLC" and nearly drove SLC to extinction. I don't want 3x or 4x more storage for the same price (or at 1/3 or 1/4 the price) if it's exponentially more unreliable --- with 8 or 16 levels to distinguish per cell, that's how the physics works and there's no way around it --- but that's what they seem to be pushing for.

      You can get $13 128GB TLC SSDs as mentioned in the article, but I don't see any $39 128GB SLC SSDs being made, nor $13 ~40GB SLC SSDs, despite the fact that such a device would have the exact same cost per NAND cell (and, as a bonus, be much faster and simpler, since SLC needs far less ECC and far simpler wear-leveling algorithms than TLC/QLC).

      • drtgh 362 days ago
        Not just SLC (1 bit per cell). One thing that exasperates me is that one can't even find MLC drives anymore (2 bits per cell); now everything is TLC or QLC (QLC sounds like a bad joke; how are they able to sell that thing?).

        A few years ago Samsung Pro SSDs were MLC drives, but then they quietly switched them to TLC. Calling them "3-bit MLC" is shameless; it's a pure oxymoron. Three bits per cell is TLC mode, and the degradation is higher than in MLC mode. Basically it's a price increase by deceiving the consumer (and to pull it off, they reduced the specs of their other lines as well; shameless).

        • myself248 361 days ago
          The misnomer drives me nuts; they should be called 8LC and 16LC for the ever-finer states they have to resolve, and then the dismal endurance and reliability would make sense.
      • smartbit 362 days ago
        > but I don't see any $39 128GB SLC SSDs being made, nor $13 ~40GB SLC SSDs

        Shouldn’t it be 2^3 times more expensive than TLC? IOW a $104 128GB SLC SSD or a $13 16GB SLC SSD.

        Edit: guess you’re right userbinator

        • drtgh 362 days ago
          NAND cell modes: SLC 1-bit per cell, MLC 2-bit, TLC 3-bit, QLC 4-bit.

          And those cell modes are usually determined by the firmware/hardware controller of the NAND memory.

          The more bits stored per cell, the greater the degradation. On average, SLC mode can take 2.2 times more erase cycles than MLC before reaching the same error rate [1].

          So the differences in prices make it look as if some manufacturers are playing with us...

          [1] https://www.researchgate.net/publication/254005562_Software_...

          PS: QLC mode is the worst on all counts: the highest degradation and the lowest speed.

          • vbezhenar 362 days ago
            Does that mean that with "raw" flash it's possible to use it in any of these modes? I guess it shouldn't be impossible to create a DIY SSD?
            • ilyt 361 days ago
              It's essentially charging a cell to a level on write and measuring the voltage on read.

              So technically it's up to the firmware to decide; assuming the hardware has the circuitry (ADC/DAC etc.) to handle it, it would be possible.
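
              A minimal sketch of that idea (hypothetical numbers, nobody's actual controller firmware): the same physical cell can be read as SLC, MLC, or TLC just by quantizing the sensed voltage against 2^bits reference levels.

              ```python
              def read_cell(voltage: float, bits_per_cell: int, v_max: float = 4.0) -> int:
                  """Quantize a sensed cell voltage into one of 2**bits_per_cell levels."""
                  levels = 2 ** bits_per_cell
                  step = v_max / levels
                  return min(int(voltage / step), levels - 1)  # the stored bit pattern

              # The same 2.1 V cell interpreted in three modes:
              print(read_cell(2.1, 1))  # SLC: one of 2 levels -> 1
              print(read_cell(2.1, 2))  # MLC: one of 4 levels -> 2 (0b10)
              print(read_cell(2.1, 3))  # TLC: one of 8 levels -> 4 (0b100)
              ```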

        • userbinator 362 days ago
          No. Each cell in TLC stores three bits (and has 8 actual voltage levels, hence the misnomer), and thus TLC should either be a third of the cost of SLC at the same density, or three times the density of SLC for the same cost.
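
          As a quick back-of-the-envelope check of that argument (prices taken from this thread, not from any vendor), die cost tracks the cell count while capacity scales only linearly with bits per cell:

          ```python
          GiB = 2**30
          tlc_bits = 128 * GiB * 8       # the $13 128GB TLC drive mentioned above
          cells = tlc_bits / 3           # TLC stores 3 bits per cell

          slc_gib = cells * 1 / 8 / GiB  # the same cells run as SLC (1 bit per cell)
          print(round(slc_gib, 1))       # ~42.7 -> the hypothetical "$13 ~40GB SLC SSD"
          ```
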
    • fbdab103 362 days ago
      Nuts! I have been thinking of leaving an emergency encrypted backup in my car, but evidently it's likely to cook itself almost immediately. I assumed the lifetime wasn't great, but that is far more aggressive than I had feared.
      • sgtnoodle 362 days ago
        Maybe use a 2.5" hard drive rather than an SSD? I can't think of any downsides for that purpose.
        • Dalewyn 362 days ago
          Assuming the car is actually used, an HDD would be subject to numerous physical stresses. Not exactly something HDDs like.
          • upofadown 361 days ago
            2.5" hard drives are usually very robust when powered down. That comes from the age when they were expected to be used in laptops.
            • gymbeaux 361 days ago
              I was about to say “the age? You mean today” and realized 2.5” “laptop” hard drives aren’t really used in laptops anymore, even the low-end ones. Damn.
              • crazygringo 361 days ago
                Funny, I thought the exact same thing. I still think of SSD's as "fancy" like it's still 2008 or something.
        • fbdab103 361 days ago
          I had some immediate disgust thinking, "Moving parts? I am done with those!", but I think you are totally right. The 2.5" hard drive is likely to have better endurance in the poor environment and repeated thermal cycling of a car trunk.

          Great idea. Hopefully I never have to try and read data from it.

          • sgtnoodle 361 days ago
            Yep, an unpowered hard disk will be rated to something like 250 g's of shock and -40C to +70C temperatures. You could put it in a sealed pyrex container with some silica desiccant and some foam padding. Just keep it from sliding around loose, and it should physically survive anything but direct impact in a collision.
          • Robotbeat 361 days ago
            I’m not sure either would be great… An untested backup is a backup that may as well never have happened, make sure to test your backups!
            • fbdab103 361 days ago
              This is very much a backup option of last resort. I maintain a real backup solution inside my house, but have basically nowhere to keep something off-site. I do not consider the cloud acceptable. For my top-tier priority data (low total file size), I really want something stored elsewhere.

              Given the non-ideal state of the car, I think the plan would be to just replace that drive on an annual basis.

            • sgtnoodle 361 days ago
              It's still very valuable to have a reasonably robust backup even if you skimp on maintenance, though. No one's going to smash your hard disk with a hammer to punish you for being lazy. 90% isn't much different than 99% in an exceptional circumstance.
      • cout 362 days ago
        There are plenty of places under your car where you can securely mount a box to hold the drive, if you really want to do this, but there are probably better places to keep a backup than a moving vehicle.
        • fbdab103 361 days ago
          No argument that it is not an ideal storage environment, but it costs little to set up. I do not want to rely upon the cloud, and I have limited access to off-site locations.

          Given unfortunate circumstances, I might lose the home one day (fire, burglary) or the car, but unlikely to be both.

      • myself248 361 days ago
        You can get industrial SLC; it's just expensive and small. Pick your 8 most precious GB...
      • 7speter 362 days ago
        Would it cook in the trunk (assuming your car has one)?
        • usefulcat 362 days ago
          In a warm climate the trunk is likely no better..
    • NikolaNovak 362 days ago
      I don't understand this. Most laptops will have internal/SSD temperatures over 50C, and yet data usually lasts for years?

      >>"client class SSD must maintain its data integrity at the defined BER for only 500 hours at 52°C (less than 21 days) or 96 hours at 66°C (only four days)."

      • schemester 362 days ago
        Those are powered on when the laptop is on.
        • NikolaNovak 362 days ago
          Right, but through which mechanism does that make a difference? Does the controller periodically rewrite all contents? In other words, how does being powered on increase retention of static data?
          • Tuna-Fish 362 days ago
            Yes, it does precisely that.
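
            Roughly the idea, as a hedged sketch rather than any vendor's actual firmware (the block interface and threshold below are invented): while powered, the controller can re-read data in the background, watch how many bits ECC had to correct, and rewrite blocks whose charge has drifted.

            ```python
            REFRESH_THRESHOLD = 8  # hypothetical: correctable bit errors tolerated per block

            def background_scrub(blocks):
                """Periodically refresh blocks whose stored charge is drifting."""
                for block in blocks:
                    data, corrected_bits = block.read_with_ecc()  # assumed device interface
                    if corrected_bits > REFRESH_THRESHOLD:
                        block.erase()
                        block.program(data)  # reprogramming restores full voltage margins
            ```
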
        • pclmulqdq 362 days ago
          Also all of these are minimum requirements. They don't actually imply much, except about the state of technology when the statement was written. Flash has been improving very quickly on all axes.
          • jeffbee 362 days ago
            And yet there are still people who will post about the good old days and the evils of MLC, as if current SSDs were not demonstrably better in every way.
            • Robotbeat 362 days ago
              They aren't. Write endurance is significantly lower than SLC's; it's just compensated for by lots of wear leveling.
              • userbinator 362 days ago
                More precisely, endurance and retention become exponentially lower with each additional bit stored per cell, while capacity only increases multiplicatively.
              • pclmulqdq 361 days ago
                They are better. SLC is improving as much as MLC. The ratio of speed, durability, and capacity is the same between SLC/MLC/TLC, but modern MLC is faster and more durable than 5-year-old SLC.
                • jeffbee 361 days ago
                  Exactly. And I do not care what the low-level bit performance is if the device-level performance is better.
                • rasz 361 days ago
                  >modern MLC is faster and more durable than 5-year-old SLC

                  great April fools joke

                  • pclmulqdq 361 days ago
                    You are probably writing this from a computer using a TLC SSD. Outside of applications that need extreme latency, pure SLC has almost completely disappeared from the storage world. From materials science to management algorithms, a lot has advanced in flash technology in terms of durability.
                    • Tuna-Fish 361 days ago
                      It's true that TLC and MLC have, for good reason, displaced SLC. However, they are nowhere near the old SLC in durability. SLC from ~5 years ago literally had 100 times the write endurance (as in how many times you can rewrite each bit, not the total amount of writes to the drive) of typical modern TLC.
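
                      Rough arithmetic behind that gap (the ~100x ratio is from this thread; the absolute P/E figures and the write-amplification factor are assumed, typical-order numbers): the total data you can write before wear-out scales with capacity times rated P/E cycles divided by write amplification.

                      ```python
                      def tbw(capacity_tb: float, pe_cycles: int, write_amp: float = 2.0) -> float:
                          """Approximate total terabytes writable before rated wear-out."""
                          return capacity_tb * pe_cycles / write_amp

                      print(tbw(1.0, 100_000))  # 1TB of old-style SLC at ~100k P/E -> 50,000 TB
                      print(tbw(1.0, 1_000))    # 1TB of typical modern TLC at ~1k P/E -> 500 TB
                      ```
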
                      • pclmulqdq 361 days ago
                        That is pretty much true, and it's pretty much the only stat that hasn't improved. However, write endurance isn't really a factor in data durability. It has more to do with the drive's ability to keep being written than with the safety of data that has already been written. If your drive goes read-only or loses capacity, that has to do with write endurance. Neither of those involves data loss.
                • rubatuga 361 days ago
                  Citation needed
            • ezconnect 362 days ago
              I still remember bubble memory, which was going to replace the HDD someday, back in the 80s.
    • Helmut10001 362 days ago
      I cool my server so that HDD temperatures stay at 30-35°C, because that's supposed to increase their life. However, all my SSDs then sit at 23-27°C [1]. I think I saw some long endurance tests that pointed out that failure rates for SSDs increase slightly below 25°C. Tricky tradeoff.

      [1]: https://i.ibb.co/dtt6dwj/ssd-hdd-tmp.png

      • Dalewyn 362 days ago
        HDDs and SSDs operate on fundamentally different technologies, so it shouldn't be a surprise that they desire vastly different environments to reside in.
        • sgtnoodle 362 days ago
          I would think that over many years of iteration by engineers, both would converge on functioning reliably in the same environment. They're both intended to be integrated into computer systems after all.
          • Dalewyn 362 days ago
            They do operate reliably in the same environment; what you're talking about is operating in ideal environments.

            When we're talking ideals, it shouldn't come as a surprise that two fundamentally different things desire two fundamentally different ideal environments.

            • sgtnoodle 361 days ago
              I appreciate your distinction, but I still think my point stands. As the technology is iterated on and refined based on years of feedback from real world use, I would expect the ideal environment for a given class of technology to converge on the typical operating environment. Decisions will be made that shift the physical properties one way or another. Silicon will be doped with different impurities. Transistors will be made with different sizes and geometries. Different metals and plastics will be used. Different voltages will be applied.

              People desire practical technological solutions to serve their needs. A box of "spinning rust" doesn't desire much of anything. A CPU is just a rock that's better at arithmetic than another rock.

      • Robotbeat 361 days ago
        Yeah, it's interesting because I think some parts of the SSD want cooler temperatures but writing to the flash might actually do less damage to the cells if you're writing at higher temperatures (higher charge mobility in silicon).
  • ipv6ipv4 362 days ago
    The longest lived data is replicated data.

    Making more copies, with some geographic distribution, is more important than the durability of any particular technology. This applies to everything from SSDs, HDDs, CDs, to paper, to DNA.

    If you want your data to last, replicate it all the time.

    • jbverschoor 362 days ago
      Sounds like data is a living organism
      • barelyauser 362 days ago
        A virus has data. Is it alive?
    • gymbeaux 361 days ago
      Wouldn’t that exacerbate bit rot?
      • ilyt 361 days ago
        More chances of bit rot, but many more chances of spotting it early and correcting it, assuming copies are periodically checked against each other and errors are corrected.
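
        A minimal sketch of that cross-checking idea (paths and layout are hypothetical; it assumes each replica directory holds the same file tree): hash every file on every copy and repair any outlier from the majority.

        ```python
        import hashlib
        from collections import Counter
        from pathlib import Path

        def sha256(path: Path) -> str:
            h = hashlib.sha256()
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(1 << 20), b""):
                    h.update(chunk)
            return h.hexdigest()

        def cross_check(replicas: list[Path]) -> None:
            """Compare each file across replicas; rewrite outliers from the majority copy."""
            files = [p.relative_to(replicas[0]) for p in replicas[0].rglob("*") if p.is_file()]
            for rel in files:
                digests = {r: sha256(r / rel) for r in replicas}
                majority, _ = Counter(digests.values()).most_common(1)[0]
                good = next(r for r, d in digests.items() if d == majority)
                for r, d in digests.items():
                    if d != majority:
                        (r / rel).write_bytes((good / rel).read_bytes())  # repair from majority copy

        # e.g. cross_check([Path("/mnt/copy1"), Path("/mnt/copy2"), Path("/mnt/copy3")])
        ```
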
  • beauHD 362 days ago
    > data needs to be validated regularly and cycled to a new media every 5-10 years in order to ensure it’s safe and easily accessible

    This is what I do. I hate doing it, but it's for posterity's sake. I'd be lost without certain data. I have old virtual machine disk images that I've been using for years, ISOs of obscure software, and other rarities. Every 4 years I buy a new 4TB HDD and copy over files to a freshly bought disk.

    • jjav 362 days ago
      > data needs to be validated regularly and cycled to a new media every 5-10 years in order to ensure it’s safe and easily accessible

      I used to do that but found it to be a gamble. I have files going back to the 80s, so I rotated them from 5.25" floppies to 3.5" floppies to Zip drives to CD-R and then DVD-R. But it's a fragile system: files can get corrupted somewhere along the line, and if I don't migrate in time it can be hard to go back. For instance, I lost a handful of files during the Iomega Zip drive phase when the drive died and I had no way to recover (and the files weren't important enough to try to source a new Iomega drive).

      Now I simply keep everything online in a big zfs mirror pool.

    • justinclift 362 days ago
      Hang on, that sounds like you're copying critical data to a single disk?

      That's not actually the case is it?!?!?!

      • Juke_Ellington 362 days ago
        It makes sense if you keep the old disks around until they kick it. You can always have 3-5 copies around in a decently readable state
        • favorited 362 days ago
          If you're not periodically checking the data for corruption, and only moving from a single drive to a replacement single drive, eventually you'll have some corrupted data which gets copied from drive to drive without you noticing.

          4 TB drives are dirt cheap. If someone would really be "lost" without this data, having some redundancy would be inexpensive and easy.

          • lazide 362 days ago
            If stored on ZFS, at least it would be validated each time it was copied.
            • jjav 362 days ago
              Also, run zpool scrub regularly to detect and repair any corruption early.
            • kadoban 362 days ago
              If you keep a ZFS mirror of the most-recent N drives, and store them separately, that should be pretty good.

              I forget how ZFS behaves if a mirror is missing drives though, if some are off-site. Hopefully it's smart enough to let you do that and just rotate through.

              • lazide 362 days ago
                ZFS doesn’t handle that well generally.

                You’d have better luck making a full (manual) copy most likely (ZFS send/recv, or mirror then un-mirror even better), assuming you’d run a scrub after.

                Or manually make checksum files I guess. I’ve done that, less ‘magic’ that way.
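
                A minimal sketch of that manual route (the manifest format and paths are made up, not a standard tool): write a checksum file when the archive copy is made, then diff against it at every later check.

                ```python
                import hashlib, json
                from pathlib import Path

                def manifest(root: Path) -> dict[str, str]:
                    """SHA-256 of every file under root, keyed by relative path."""
                    return {str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
                            for p in sorted(root.rglob("*")) if p.is_file()}

                # At archive time:
                #   Path("archive.sums").write_text(json.dumps(manifest(Path("/mnt/archive"))))
                # At each later check:
                #   old = json.loads(Path("archive.sums").read_text())
                #   bad = [f for f, d in manifest(Path("/mnt/archive")).items() if old.get(f) != d]
                ```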

                • jjav 362 days ago
                  > ZFS doesn’t handle that well generally.

                  Can you expand on that?

                  The purpose and benefit of a zfs mirror is that every disk in the mirror contains everything. So it's expensive in space usage, but great for reliability. As long as any one of the disks in a mirror survives, you can recover everything.

                  • lazide 361 days ago
                    The issue is that a mirror in ZFS is assumed to be ‘live’, unless you explicitly sever the relationship (unmirror it). Having tons of unreachable devices (because you have them sitting on a shelf somewhere) makes things unhappy operationally.

                    So if you have one ‘live’ copy, create a mirror, then sever the relationship before taking the external device offline, it’s fine.

                    Even taking it offline sometimes when it’s a normal live mirror is fine (though it will complain of course, depending on how it’s done).

                    But if you want to make copies, so add a bunch of mirrors, taking them offline ‘forever’ (really over multiple boot cycles) it makes the running ZFS instance angry because it expects those mirror copies to still be accessible somewhere (after all, ZFS is a live filesystem) and will keep trying to use/access them, and won’t be able to.

                    I don’t think you’ll lose data or brick anything, but it will be a hassle at some point.

                    Also, if you reconnect those old instances, it will try to resilver the older copies (and hence modify them).

                    Which is not what I would want for an archive, unless I manually told it to do so anyway.

                    Which is easy enough to do of course even after severing the mirror relationship later, albeit with more disk I/O.

                    I’ve done this kind of archiving before, there is built in ZFS support for making a new zpool when splitting mirrors this way, and it works well.

                    The way I ended up doing it was primary/secondary disks (both external hard drives).

                    Setup as a mirror, copy archive data over. Split the mirror, now you have two (slightly differently named) unmirrored zpools with identical data that can both be mounted at the same time, scrubbed independently, etc.

                    Having one of them ‘live’ and the other one the archived copy (on the external disk) would be trivial, and allows you to zpool export the archived/external copy, name it with the date it was made, etc. - which is what you want to make everyone happy.

                    P.S. if doing this, be REALLY careful about what kernel/OS/ZFS features you are using with your pools or you can end up with an unmountable ZFS copy! (As I found out). Built-in ZFS encryption is a major footgun here. Better to use dmcrypt style encryption for several reasons.

                    • jjav 361 days ago
                      Thanks, I see. It feels as if you're describing using a mirror pool as something that it isn't so I get the impedance mismatch.

                      > taking them offline ‘forever’ (really over multiple boot cycles) it makes the running ZFS instance angry because it expects those mirror copies to still be accessible somewhere

                      If a drive dies that's normal, it can be removed from the pool.

                      > Also, if you reconnect those old instances, it will try to resilver the older copies (and hence modify them).

                      I mean yes, because that's what one would want if it is a mirror. Every device should contain the same data so if one drive comes back from the dead it should be resilvered to be up to date.

                      If what you need is archival as opposed to a mirror pool, I'd say use snapshot and send to preserve that point in time in a backup somewhere.

              • nubinetwork 362 days ago
                If you offline or detach the drive, it should be fine afaik. Detach is probably the better way to go though, because it removes the drive from the status listing.
                • lazide 361 days ago
                  You can’t ZFS import a pool on a drive detached this way without things happening to that drive, is the issue. So if you want a point in time archive, it’s dangerous to do that.

                  I think it’s nearly impossible to actually ZFS import a pool from just a single detached drive too if the original is nuked, but I imagine there is some special magic operation that might make it possible.

                  Splitting the new disk off into its own pool doesn't have any of these issues, and is a much cleaner way to handle it.

    • ilyt 361 days ago
      You'd at least need to check checksums of everything post-copy, and preferably store it in an error-resistant way (RAID5/6 or other error correction). Otherwise you might just be copying along the errors that sneak in. It might not even be the source hard drive producing them; it could just be a transient bit flip in RAM.
  • Joel_Mckay 362 days ago
    To work around the various tricks a drive can play, one may wish to use a capacity tester in addition to hardware stats.

    Fight Flash Fraud (f3)

    https://github.com/AltraMayor/f3

    One could set up a destructive wear test, but the results may not generalize between lots with the same model number. This is because some manufacturers sell performant products for early reviews/shills, and eventually start cost-optimizing the identical product SKU with degraded performance.

    As annoying as the bait-and-switch trend became, for off-brand consumer hardware YMMV.

    Good luck =)

    • Dalewyn 362 days ago
      Kind of going on a tangent, but I think it's relevant so please bear with me:

      I never understood the point of cheaping out on storage media.

      Look, I get it. Most of us have budgets to work with, not all of us can afford enterprise 40TB Kioxia SSDs or enterprise HDDs hot off the presses.

      But if I actually, truly care about my data I'm going to at least shell out for (brand new!) drives from reputable brands like Samsung, Crucial, Seagate, Western Digital, Kingston, and so on. The ease of mind is worth the cost, as far as I'm concerned.

      What is the rationale behind buying used drives, or drives from off-brand vendors of unknown or even ill reputation? Aside from just goofing around, I mean. I never can justify the idea, no matter how strapped for cash I could be.

      • NikolaNovak 362 days ago
        Partially it's because the details are somewhat opaque. I do buy new drives from reputable brands, but I can find it hard to know what I gain and what I lose at different price points within those manufacturers. It's hard to even find out which drives are DRAM-less, never mind what other features they have that impact reliability. From the days of the IBM Deskstars I've also learned that a drive model's reliability is likely only understood by the time it's off the market.

        (I agree in principle that used or off brand drives seem insane to me, but at the same time, I do live on laptops I buy used so their drives are actually used as well :/)

      • dannyw 362 days ago
        You can never rely on a storage medium being perfect, so you must always plan for redundancy (e.g. ZFS, off-site backups).

        When you have that, cheaping out on storage doesn’t matter so much anymore.

        • Dalewyn 362 days ago
          I am assuming appropriate precautions are being taken, so that is beside the point.

          I'm talking about saving a dime on cheapo, non-reputable drives only to then spend extra time verifying they are actually fit for service.

          Why? Why would someone do this? I'm of the mind that buying a drive from a reputable vendor and saving yourself the time of verification and other red tape is worth the additional cost premium.

          • Joel_Mckay 362 days ago
            "Why would someone do this? "

            Personally, I have used f3 to identify counterfeit/buggy hardware. However, direct manufacturer sales or large retail-chain outlets have proven far more trustworthy than online 3rd party sales options.

            It is also something people do if the hardware serial number looks suspicious. =)

          • dannyw 360 days ago
            Some people enjoy the process of hardware and software tinkering. I personally like to "cost golf" all my home gaming PCs, by taking advantage of great deals, auctions, and some judicious part selection.

            I have a lot of fun doing so.

      • ilyt 361 days ago
        There is little proof that the expensive "enterprise" ones are that much more durable, let alone for the price. Enterprise ones usually have better power-loss protection and some more spare flash for write endurance, but that's about it. Hell, we just had 2 of the enterprise Intel ones outright die in the last month (out of a lot of ~30), at 98% life left!

        On spinning rust there is practically no difference in reliability (assuming you buy drives designed for 24/7 operation, not some green shit); the main difference is that you can attach SAS to it. We've got stacks of dead drives to prove it.

        > What is the rationale behind buying used drives, or drives from off-brand vendors of unknown or even ill reputation? Aside from just goofing around, I mean. I never can justify the idea, no matter how strapped for cash I could be.

        That the flash is the same, just strapped to a different controller.

        And if you truly care about your data you want redundancy, not more expensive storage. Put the saved money into a home server with ECC memory (at home) or into an extra node or hot spare (at work).

      • Joel_Mckay 362 days ago
        Use cases differ even for storage, and cost is sometimes a competitive advantage in low-value products, while high-end SSDs include onboard supercapacitors to keep the hardware stable during power failures, larger DRAM buffers with fetch prediction, and sector sparing with wear leveling.

        If your stack uses anything dependent on classic transactional integrity, then cheap SSDs don't make sense once you account for the long-term hidden IT costs of their failures.

        "buy cheap, buy twice" as they say. =)

  • cantrevealname 362 days ago
    I think it’s a mistake to test the worn and fresh disks at different intervals. I.e., testing worn disks in years 1 and 3, and fresh disks in years 2 and 4.

    Let’s say that the worn disks are found to have failed the hash check in year 1 and the fresh disks are found to have failed in year 2. Can you conclude that worn and fresh are equally bad? No, you can’t, because maybe the fresh disks were still OK in year 1 — but you didn’t check them in year 1.

    As another example, suppose the worn disks are found to be good in year 1 but the fresh disks are found to be bad in year 2. This seems like an unlikely result, but if it happened, what could you conclude? Well, you can’t conclude anything. Maybe worn is better because they are still good in year 2, but you aren’t checking them in year 2. Maybe fresh is better because the worn will fail in year 1.1 but the fresh last until year 1.9 before failing. Maybe all the disks fail in year 1.5 so they are equally bad.

    I think it’s better to test the disks at the same intervals since you can always draw a conclusion.

    • NKosmatos 362 days ago
      Fully agree. The test would be better if all parameters were the same: the same set of data written (why put a different amount of GB on each SSD?) and the same testing periods. Nevertheless, it's an interesting little experiment :-)
  • 71a54xd 362 days ago
    I wonder if storing SSDs in a Faraday bag or a thick steel box could prevent degradation caused by cosmic rays / errant charged particles passing through.

    Curiously, acrylic plastic is one of the best materials to absorb highly energetic particles https://www.space.com/21561-space-exploration-radiation-prot...

    • wtallis 362 days ago
      The most practical highly-effective radiation shielding would be putting your drives in a waterproof container and storing them at the bottom of a swimming pool. That will also do a pretty good job of keeping the temperature stable and not too high.
    • sgtnoodle 362 days ago
      I don't think a Faraday cage would help at all against actual particles with mass? It seems like even a steel box would have to be rather thick to make a meaningful dent. Astronauts on the International Space Station are essentially in a solid metal Faraday cage, and they see speckles in their vision each time their orbit passes through weak spots in the Earth's magnetic field.

      If you're really worried about cosmic rays, maybe you could try to figure out their predominant direction of travel for your location, then store your SSD in an orientation that minimizes its cross-sectional area. I naively assume they're coming from straight up?

    • KennyBlanken 362 days ago
      That's like saying "let's put bulletproof vests on the ballistic gel dummies for our test on how survivable this particular bullet round is on the human body."
  • nubinetwork 362 days ago
    I might be able to test this, I have a 4790k that I haven't touched in years...
    • Aachen 362 days ago
      I'm interested, but will OP deliver? Not sure I want to get my hopes up just yet :(

      Related, I was wondering how I'm going to learn of the results of the submitted article. Calendar reminder with a link to the blog perhaps? Putting my email address somewhere would be much preferred for me tbh but I didn't see any sign-up form.

      • 0cVlTeIATBs 362 days ago
        I had a Q6600 machine with some boot SSD, unplugged for years. Maybe a Crucial Vertex 4 128GB? Started right up like nothing happened.
        • spockz 362 days ago
          I’ve had some similarly old hardware that didn’t boot and a desktop that had been powered off over an holiday. Both had corrupt data but on the second one it was in a file crucial for windows to boot so I had to repair it. Data corruption can occur without notice.
          • 411111111111111 362 days ago
            And it's all pretty anecdotal, with single-digit sample sizes.

            I don't think this can really be answered unless we get something like the reliability reports from Backblaze. Still, I hope I'll read about the final results of the article in a few years.

            • Aachen 361 days ago
              Anecdotal, yes, but having any data is a better indication than having no data. If nobody can find evidence of the phenomenon you know you're more likely fine (no guarantee) than when people start plugging in old drives and some discover rotting.
      • lionkor 362 days ago
        Nice username! I'm from/in Aachen ;) Saw your meta comment about your name as well.
        • Aachen 360 days ago
          Sent you an email btw, in case your ISP misclassified it as spam
  • petergreen 362 days ago
    I wish I saw this in 2026!
    • majkinetor 362 days ago
      Thanks for the laughs man :)
  • zeroping 362 days ago
    So, check back in 5 years.
  • avi_vallarapu 362 days ago
    Good luck with your tests. Nice article though.
  • plebianRube 362 days ago
    I've had magnetic drives with excellent data integrity that spent the last 20 years untouched in self-storage units.

    I have read CD-RWs of approximately the same age with no data loss.

    SSDs sacrifice durability of data for I/O speed.

    • mrjin 362 days ago
      I have HDDs from around 20 years ago, hardly touched in the last 10 years, which are still good (able to read everything without any issues; random checksums are good, but not all files validated).

      But my optical backups were a complete disaster. At just a little over 5 years, over 50% could not be read at all, and ~30% could be read but the content was corrupted. Maybe ~10% were still good, but it was too time-consuming to check, so I dumped them all. I still have optical drives, but I can't really remember the last time I used one.

      For SSDs, I have a couple that were left cold for around 3 years; I just checked them a couple of days ago and they seem to be good. I'm not sure how much longer they can hold, as there were known issues with the Samsung 840 series.

    • sebazzz 362 days ago
      Given the recent debacle with Samsung SSD firmware, I'd love to read the internal hardware and firmware engineers' notes and concerns. I think there are quite a few bodies in the closet that is the consumer SSD market.

      You'd at least hope that enterprise SSDs with a Dell sticker on them are better.

      • eyegor 362 days ago
        The enterprise stuff is almost always longer-lasting, but the only kind that truly lasts is (was?) Optane. You shouldn't trust an SSD long-term, especially modern ones. I've probably seen 100 drive failures in total (HDD and SSD), and COVID-era SSDs are garbage for longevity. The big downside of enterprise SSDs (besides price) is performance. You can literally double your speed by buying consumer-grade (and it's roughly the same price to buy two drives for every one enterprise-grade).
        • sekh60 362 days ago
          Consumer-grade drives have some fast cache in front of them, and while initially crazy fast, they can't do sustained writes without slowing down.
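
          A toy model of that behavior (all numbers invented, not measured from any drive): writes land in a fast pseudo-SLC cache until it fills, after which throughput drops to the native TLC rate.

          ```python
          CACHE_GB, FAST_GBPS, SLOW_GBPS = 30, 3.0, 0.5  # hypothetical drive parameters

          def write_seconds(total_gb: float) -> float:
              fast = min(total_gb, CACHE_GB)             # portion absorbed by the SLC cache
              slow = total_gb - fast                     # the rest goes at native TLC speed
              return fast / FAST_GBPS + slow / SLOW_GBPS

          print(write_seconds(20))   # fits in the cache: ~6.7 s
          print(write_seconds(200))  # sustained write: 10 s fast + 340 s slow = 350 s
          ```
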
    • zamadatix 362 days ago
      The article doesn't really support this claim. It argues SSDs sacrifice speed and durability for capacity and cost and then posits an unanswered question about absolute durability with an ongoing test to find out.

      I do wish the test had more than $13 TLC drives though.

    • philjohn 361 days ago
      That's a trade-off I'm fine with making, because time is in short supply: a snappier experience using my computer is worth it, and I have everything backed up in three places (Backblaze, a 6-drive RAIDZ2 array in my home server, and from there to JottaCloud via Restic).