I’m curious how you have automated/optimized your workflows for downloading, saving, and archiving media.

For instance:

  1. On my laptop, I download an epub into a folder that Calibre watches (a rough automation sketch is below).
  2. Calibre imports that epub into its library and removes the original file.
  3. The Calibre library is hooked up to Syncthing, which passes the epub to my eReader.

My workflow is probably not the most efficient, but I’m hoping to be inspired by other people’s approaches.
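
For anyone who wants to script step 1, here’s a rough sketch of a polling loop that sweeps freshly downloaded epubs into the watched folder. The paths (DOWNLOADS and CALIBRE_WATCH) are made up for illustration, so adjust them to your own setup.

```python
# Sketch: move freshly downloaded epubs into the folder that Calibre's auto-add watches.
# Both paths are assumptions; point them at your real downloads and watched folders.
import shutil
import time
from pathlib import Path

DOWNLOADS = Path.home() / "Downloads"          # where the browser drops epubs (assumption)
CALIBRE_WATCH = Path.home() / "calibre-inbox"  # the folder Calibre watches (assumption)

def sweep() -> None:
    """Move every .epub in the downloads folder into the Calibre watched folder."""
    for epub in DOWNLOADS.glob("*.epub"):
        shutil.move(str(epub), str(CALIBRE_WATCH / epub.name))
        print(f"queued for Calibre import: {epub.name}")

if __name__ == "__main__":
    # Plain polling loop; a cron job or a watchdog-style filesystem watcher works just as well.
    while True:
        sweep()
        time.sleep(30)
```

From there, Calibre’s auto-add and Syncthing handle steps 2 and 3 unchanged.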

  • DogMuffins@discuss.tchncs.de · 1 year ago

    I’ve been doing catch and release for 5 years or so now.

    Archiving is such a huge drain on time / effort / resources.

    • LocustOfControl@reddthat.com · 1 year ago

      “catch and release”

      Brilliant phrase! I’m an archiver myself partly because it takes me ages to watch things, and partly because some things get returned to again and again. I could definitely do with a cull, but it’s easier to commit to more storage.

      • DogMuffins@discuss.tchncs.de · 1 year ago

        Yeah look, everyone has to find their own way. I’m not trying to make the case that catch & release is better for everyone, and there’s certainly a case to be made for archiving.

        The thing that eventually got me was maintaining a big RAID array: lots of heat, lots of power, drives dying every now and again. When it only takes a few minutes to download something and I never go near my bandwidth quota (or maybe it’s unlimited), switching to catch & release made a lot of sense. I’m not religious about it, but I generally delete things after I’ve listened or watched.

    • cm0002@lemmy.world · 1 year ago

      Yeah, I feel that. If it weren’t for my many years of GDrive unlimited (RIP unlimited), I wouldn’t have anywhere near 200 TB+ of “Linux ISOs” lmao