I have been lurking in this community for a while now and have really enjoyed the informational and instructional posts, but a topic I don’t see come up very often is scaling and hoarding. Currently, I have a 20TB server which I am rapidly filling, and most posts about expanding recommend simply buying larger drives and slotting them into a single machine. That is definitely the easiest way to expand, but it seems like it would only get you to about 100TB before you can’t reasonably do that anymore. So how do you set up 100TB+ setups with multiple servers?

My main concern is that currently all my services are dockerized on a single machine running Ubuntu, which works extremely well. It is space efficient thanks to hardlinking and I can still seed back everything. From the posts I’ve read, it seems like as people scale they either give up on hardlinks and eat up a lot of their storage with duplicate copies, or they eventually delete their seeds and just keep the content. Do the Arr suite and qBittorrent allow dynamically selecting servers based on available space? Or are there other ways to solve these issues with additional tools? How do you guys set up large systems, and what recommendations would you make? Any advice is appreciated, from hardware to software!
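
From what I understand, the reason hardlinks stop being an option once you split across machines is that a hardlink is just a second directory entry pointing at the same inode, so it can only exist within a single filesystem. A rough illustration with made-up paths:

    # Works: both paths live on the same filesystem/mount
    ln /tank/downloads/movie.mkv /tank/media/movies/movie.mkv

    # Fails: /tank and /vault are separate filesystems (a second pool, or another server)
    ln /tank/downloads/movie.mkv /vault/media/movies/movie.mkv
    # ln: failed to create hard link ... : Invalid cross-device link

    # Docker-wise, mounting one parent path (e.g. /tank) into both qBittorrent and the
    # Arrs keeps everything on one filesystem; otherwise imports fall back to full copies.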

Also, a huge shout-out to Saik0 from this thread: https://lemmy.dbzer0.com/post/24219297. I learned a ton from his post, but it seemed like the tip of the iceberg!

  • ReallyActuallyFrankenstein@lemmynsfw.com · 18 hours ago

    Your setup is closer to “statistic” than “anecdote,” so I’m curious: how many drive failures have you had?

    What is the primary OS you run to manage all of the components?

    • tenchiken@lemmy.dbzer0.com · 17 hours ago

      Most of my drives are in the 3TB/4TB range… Something about that timeframe made for some reliable disks. Newer disks have had more issues, really. A few boxes run some 8TB or 12TB drives, and I keep some external 8TB drives for evacuation purposes, but I don’t think I trust most options lately.

      HGST and Toshiba seem to have done well by me overall, but that’s certainly subjective.

      I have two Seagates I need to pull from one of the older boxes right now, but they are 2TB and well past due:

      root@Mizuho:~# smartctl -a /dev/sdc | grep -E "Vendor|Product|Capacity|minutes"
      Vendor:               SEAGATE
      Product:              ST2000NM0021
      User Capacity:        2,000,398,934,016 bytes [2.00 TB]
      Accumulated power on time, hours:minutes 41427:43

      root@Mizuho:~# smartctl -a /dev/sdh | grep -E "Vendor|Product|Capacity|minutes"
      Vendor:               SEAGATE
      Product:              ST2000NM0021
      User Capacity:        2,000,398,934,016 bytes [2.00 TB]
      Accumulated power on time, hours:minutes 23477:56

      Typically I’m a Debian/Ubuntu guy. It’s the easiest multi-tool for my needs.

      I usually use OpenMediaVault for my simple NAS needs.

      Proxmox and XCP-NG for hypervisors. I was involved in the initial development of OpenStack, and have much love for classic Xen itself (screw Citrix and their mistreatment of XenServer).

      My Docker stacks run either via DockGE or the compose plugins under OMV; I’m leaning more toward DockGE lately for simplicity and eye candy.

      Overall, I’ve had my share of disk failures, usually from being sloppy. I only trust software RAID, as I have a better shot at recovery if I’m stupid enough to store something critical on less than N+2.
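
      If anyone is wondering, N+2 for me means RAID 6 style redundancy: two disks of parity, so any two members can die. Under mdadm a minimal sketch looks roughly like this (device names made up):

      # Hypothetical 6-disk RAID 6 array; survives any two member failures
      mdadm --create /dev/md0 --level=6 --raid-devices=6 /dev/sd[b-g]

      # Keep an eye on the initial sync and ongoing health
      cat /proc/mdstat
      mdadm --detail /dev/md0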

      I usually buy drives only a generation behind current, and even then only when the price absolutely craters. The former is due to being bitten by new models crapping out early, and the latter due to being too poor to support my bad habits.

      Nearly all of my SATA disks came from externals, but that’s become tenuous lately… SMR disks are getting stuck into these more and more, and manufacturers are getting sneakier about hiding shit design.

      Used SAS drives from a place with a solid warranty seem to be the most reliable. About half my fleet was bought used, and I’ve only lost about a quarter of those with less than five years of active run time.