• Eximius@lemmy.world · 2 days ago

      A better way to word it: SMR is only suited for archival use. Large sequential writes, little to no random writes.

        • Eximius@lemmy.world · 2 days ago

          If you know how SMR is laid out, you can see straight away that read performance is not impacted. Writing is, because each sector write forces multiple writes (the tracks overlap to squeeze in the extra density).

          Impacted write performance, coupled with HDDs already being slow at random writes, plus the extra potential for data loss from less-atomic sector writes, makes them terrible drives for everything except archival use.
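The overlap described above is why random writes hurt so much. A toy sketch of the effect (purely illustrative: the band size is made up, and real SMR firmware uses indirection tables and media caches that soften this):

```python
# Toy model of SMR write amplification. In SMR, tracks within a band
# overlap like roof shingles, so an in-place write to one track
# disturbs every track shingled on top of it.

BAND_TRACKS = 40  # hypothetical number of shingled tracks per band

def tracks_rewritten(track_in_band: int) -> int:
    """An in-place write to track i forces a rewrite of tracks
    i..(end of band), since they all overlap the modified track."""
    return BAND_TRACKS - track_in_band

if __name__ == "__main__":
    print(tracks_rewritten(BAND_TRACKS - 1))  # appending at band end: 1
    print(tracks_rewritten(0))                # worst-case random write: 40
```

Sequential (archival-style) writes append at the end of a band and pay no penalty; a random write near the start of a band rewrites almost the whole band.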

    • linearchaos@lemmy.world · 2 days ago

      Wonder what happens if you throw them in an unraid BTRFS/jbod configuration with a CMR parity drive.

  • Ugurcan@lemmy.world · 3 days ago

    If you’re eyeing these, keep in mind that these babies tend to be LOUD AS FUCK, so they might not be suitable for home server use.

      • Todd Bonzalez@lemm.ee · 2 days ago

        Drives like this are hermetically sealed with an inert gas like argon or helium on the inside. Even the presence of oxygen and nitrogen molecules can compromise the drive. If dust is getting to the moving parts of your hard drive, it’s toast no matter where it’s installed.

    • Jarix@lemmy.world · 3 days ago

      Are they any louder than any HDD from the last 30 years?

      If so, I’m actually curious why that is.

      Edit: fixed to say HDD not SSD

      • frezik · 1 day ago

        My NAS uses a pair of SAS drives, and they make noises at boot up that would be concerning in a desktop. They’re quite obnoxious. But I keep them in part of the house where they don’t bother me.

      • Cocodapuf@lemmy.world · 3 days ago

        Well, I have no experience with these particular drives, but they do seem to have 11 platters, which is beyond insane as far as I’m concerned. More platters means more moving parts, more friction, and more noise (all other things being equal).

      • Ugurcan@lemmy.world · 3 days ago

        Oops, yes. I definitely would expect these to be much louder than your 6 GB 1998-model HDD wrangling under the stress of copying files at 30 MB/s.

        • Onsotumenh@discuss.tchncs.de · 2 days ago

          Tell that to my IBM 10GB 10,000 RPM U2W SCSI from back then. To this day I have never witnessed a noisier hard drive… But that PC was pretty epic, including the biggest mf of a mainboard I ever had (the SCSI controller was onboard).

          • varyingExpertise@feddit.org · 2 days ago

            Ah, the sound of turning on the SCSI storage tower.

            KA-TSCHONK. WeeeeeeeeEEEEEIIIIIII… skrrrt, skrrrt, clack.

            Either that or KA-TSCHONK, silence, if there were already too many boxes on that circuit at a lan party 😁

        • MonkderVierte@lemmy.ml · 2 days ago

          Your everyday modern HDD doesn’t do much more than 60 MB/s once the on-disk cache (a few GB) is full.

          • DaPorkchop_@lemmy.ml · 1 day ago

            Not sure what you’re on about; I have some cheap 500GB USB 3 drives from like 2016 lying around, and even those can happily deal with sustained writes over 130MB/s.

            • frezik · 1 day ago

              When the cache isn’t full, yes, that’s true. Copy a file that’s significantly bigger than cache and performance will drop part way through.

              • DaPorkchop_@lemmy.ml · 17 hours ago

                You’ve made me uncertain if I’ve somehow never noticed this before, so I gave it a shot. I’ve been dd-ing /dev/random onto one of those drives for the last 20 minutes and the transfer rate has only dropped by about 4MB/s since I started, which is about the kind of slowdown I would expect as the drive head gets closer to the center of the platter.

                EDIT: I’ve now been doing 1.2GB/s onto an 8 drive RAID0 (8x 600GB 15k SAS Seagates) for over 10 minutes with no noticeable slowdown. That comes out to 150MB/s per drive, and these drives are from 2014 or 2015. If you’re only getting 60MB/s on a modern non-SMR HDD, especially something as dense as an 18TB drive, you’ve either configured something wrong or your hardware is broken.

    • varyingExpertise@feddit.org · 2 days ago

      I’ve found that the only thing you can hear through a closed basement door is noisy high-speed fans, e.g. from used 19" servers; disks produce much less noise.

        • varyingExpertise@feddit.org · 2 days ago

          Nah, I live outside the US; my home is made from proper brick and concrete. A bit slower to build, but rather good when it comes to sound insulation. I could imagine that with those strand-board walls it might be a problem, though.

  • addie@feddit.uk · 3 days ago

    Assuming these have a fairly impressive 100 MB/s sustained write speed, it’s going to take about 72 hours to write the whole contents of the disk: basically three days. That’s a long time to replace a failed drive in a RAID array; you’d need to consider multiple disks of redundancy just in case another one fails while you’re resilvering the first.
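The arithmetic is easy to check; at 100 MB/s a 26 TB drive takes roughly 72 hours to fill end to end, and even at the higher sustained rates reported elsewhere in the thread it’s well over a day:

```python
def full_write_hours(capacity_tb: float, mb_per_s: float) -> float:
    """Hours to write an entire drive end to end at a sustained rate.
    Uses decimal units (1 TB = 1e12 bytes, 1 MB = 1e6 bytes), as drive
    vendors do."""
    seconds = capacity_tb * 1e12 / (mb_per_s * 1e6)
    return seconds / 3600

print(f"{full_write_hours(26, 100):.1f}")  # ~72.2 hours at 100 MB/s
print(f"{full_write_hours(26, 180):.1f}")  # ~40.1 hours at 180 MB/s
```

Real resilvers are often slower than a pure sequential write, since the array keeps serving normal traffic at the same time.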

    • DaPorkchop_@lemmy.ml · 1 day ago

      My 16TB Ultrastars get upwards of 180MB/s sustained read and write; these will presumably be faster than that, since the density is higher.

      • frezik · 1 day ago

        I’m guessing that only works if the file is smaller than the RAM cache of the drives. Transfer a file that’s bigger than that, and it will go fast at first, but then fill the cache and the rate starts to drop closer to 100 MB/s.

        My data hoarder drives are a pair of WD ultrastar 18TB SAS drives on RAID1, and that’s how they tend to behave.

        • DaPorkchop_@lemmy.ml · 17 hours ago

          This is for very long sustained writes, like 40TiB at a time. I can’t say I’ve ever noticed any slowdown, but I’ll keep a closer eye on it next time I do another huge copy. I’ve also never seen any kind of noticeable slowdown on my 4 8TB SATA WD golds, although they only get to about 150MB/s each.

          EDIT: The effect would be obvious pretty fast at even moderate write speeds, I’ve never seen a drive with more than a GB of cache. My 16TB drives have 256MB, and the 8TB drives only 64MB of cache.

    • AmbiguousProps@lemmy.today (OP) · 3 days ago

      This is one of the reasons I use unRAID with two parity disks. If one fails, I’ll still have access to my data while I rebuild the data on the replacement drive.

      Although, parity checks with these would take forever, of course…

    • catloaf@lemm.ee · 3 days ago

      That’s a pretty common failure scenario in SANs. If you buy a bunch of drives, they’re almost guaranteed to come from the same batch, meaning they’re likely to fail around the same time. The extra load of a rebuild can kill drives that are already close to failure.

      Which is why SANs have hot spares that can be allocated instantly on failure. And you should use a RAID level with enough redundancy to meet your reliability needs. And RAID is not backup, you should have backups too.

      • kalleboo@lemmy.world · 1 day ago

        Also why you need to schedule periodic parity scrubs: that way the “extra load of a rebuild” is exercised regularly, so weak drives are found long before a rebuild is needed.

    • C126@sh.itjust.works · 3 days ago

      Two-disk parity is standard and should still be adequate. The likelihood of two failures within the rebuild window on the same array is small.
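“Small” can be put in rough numbers. The AFR, array size, and rebuild window below are all assumptions for illustration, and the independence assumption is exactly what the same-batch concern raised elsewhere in the thread undermines:

```python
import math

def p_drive_fails(afr: float, window_days: float) -> float:
    """Chance a single drive fails during the window, assuming a constant
    failure rate consistent with the given annualized failure rate (AFR)."""
    rate = -math.log(1.0 - afr)              # failures per drive-year
    return 1.0 - math.exp(-rate * window_days / 365.0)

def p_second_failure(surviving_drives: int, afr: float, window_days: float) -> float:
    """Chance at least one surviving drive also fails during the rebuild,
    treating drives as independent (same-batch drives are not)."""
    return 1.0 - (1.0 - p_drive_fails(afr, window_days)) ** surviving_drives

if __name__ == "__main__":
    # Hypothetical numbers: 1.5% AFR, 7 surviving drives, 4-day rebuild.
    print(f"{p_second_failure(7, 0.015, 4):.4%}")
```

Around a tenth of a percent per rebuild under these assumptions: small per incident, but correlated batch failures and elevated load during resilvering can push the real number well above the independent-drives estimate.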

      • dan@upvote.au · 2 days ago

        It’s more likely if you bought all the drives from the same store (since that increases the likelihood that they’re from the same batch), so you should make sure that you buy them from different stores.

    • SoGrumpy@lemmy.ml · 2 days ago

      Except these drives are SMR - not something you’d want in a RAID.

      • Telodzrum@lemmy.world · 2 days ago

        Title literally says SMR for one size and CMR for another. Not that I should expect much from a .ml account.

  • Longpork3@lemmy.nz · 3 days ago

    When will it be commercially available though? Supposedly Seagate has had 30TB drives out for the better part of a year, but I can’t find anything larger than 24TB actually available for purchase.

    • dan@upvote.au · 2 days ago

      I’d guess that they’re commercially available but only for hyperscalers - large companies like Google, Amazon (AWS), etc that need a huge amount of storage.

    • Pyotr@lemmy.world · 3 days ago

      I’ve been waiting for a 32TB to become available as well, Seagate announced that drive last year and it’s still not available outside data centers. I suspect the WD one will be the same.

  • Teils13@lemmy.eco.br · 2 days ago

    There is already a Samsung 8 TB SSD being sold on Amazon. Buying 4 of those will be far cheaper than this monstrosity. And it will be silent, actually useful in a home server, and much faster too.

    • hobovision@lemm.ee · 2 days ago

      No shot 4 SSDs will be the same price as an HDD of the same capacity yet. HDD is still the king of GB/$.

      If I’m wrong… Can you send me some links? I could use some cheap 8TB SSDs.

    • randombullet@programming.dev · 2 days ago

      Nah I don’t believe you at all.

      SAMSUNG 870 QVO SATA 8TB = $683.38 x 4 = $2,733.52

      8TB x 4 = 32TB

      $2,733.52 / 32TB = $85.4225/TB

      Yeah one of these disks does not cost more than $25/TB.

      26TB x $25 = $650
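The arithmetic above checks out; here it is as a sketch, where the $650 HDD figure is the assumed $25/TB price from the comment, not a quoted street price:

```python
def dollars_per_tb(price_usd: float, capacity_tb: float) -> float:
    """Unit cost of storage in USD per TB."""
    return price_usd / capacity_tb

# Four 8 TB Samsung 870 QVOs at the quoted $683.38 each.
ssd = dollars_per_tb(683.38 * 4, 8 * 4)
# One 26 TB HDD at the assumed $25/TB price point.
hdd = dollars_per_tb(650.00, 26)

print(round(ssd, 2))  # 85.42
print(round(hdd, 2))  # 25.0
```

Roughly a 3.4x premium for flash at these capacities, before counting the cost of extra drive bays or an enclosure for four drives.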

      • dan@upvote.au · 2 days ago

        QVO drives are trash though. Would not recommend. Very slow and they don’t last as long as Samsung’s EVO and PRO drives.

      • olympicyes@lemmy.world · 2 days ago

        FWIW, in July last year Amazon was selling these as low as $320. My biggest fear with a 26 TB HDD is getting all 26 TB of data off of it, if I ever needed to, without the drive dying.

          • frezik · 1 day ago

            It’s really difficult/expensive for a home user to do a 3-2-1 backup properly. Especially if you’re pushing beyond a few TB.

          • olympicyes@lemmy.world · 1 day ago

            That’s true, but I’m more concerned with rebuilding the RAID than with actually losing the data. I have to admit that I’m lazy with backups, and I’ve had my ass saved by RAID 6.