Just thought I’d share this since it’s working for me at my home instance of federate.cc, even though it’s not documented in the Lemmy hosting guide.

The image server used by Lemmy, pict-rs, recently added support for S3-compatible object storage (Amazon S3 and equivalents from other providers) instead of serving images directly off the local disk. This is potentially interesting to you because object storage is an order of magnitude cheaper than adding the same space as VM disk.

By way of example, I’m hosting my setup on Vultr, but this applies to, say, DigitalOcean or AWS as well. Going from a 50GB to a 100GB VM instance on Vultr takes you from $12 to $24/month, and up to 180GB is $48/month. Those steps include CPU and RAM bumps too, but I’m focusing only on disk space for now.

Vultr’s object storage, by comparison, is $5/month for 1TB of storage and includes a separate 1TB of bandwidth that doesn’t count against your main VM. On top of that, the content is served off Vultr’s CDN instead of your instance, meaning even less CPU load for you.

This is pretty easy to do. We’ll diverge slightly from the official Lemmy Ansible setup to add some extra environment variables to pict-rs.

After step 5, before running the Ansible playbook, we’re going to modify the Ansible template slightly:

cd templates/

cp docker-compose.yml docker-compose.yml.original

Now we’re going to edit docker-compose.yml with your favourite text editor. Personally I like micro, but vim, emacs, nano or whatever will do:

favourite-editor docker-compose.yml

Down around line 67 is the section for pictrs. You’ll notice that under its environment section there are a bunch of variables the Lemmy devs have predefined. We’re going to add a few more to take advantage of the new object storage support in pict-rs 0.4+.

At the bottom of the environment section we’ll add these new vars:

  - PICTRS__STORE__TYPE=object_storage
  - PICTRS__STORE__ENDPOINT=Your Object Store Endpoint
  - PICTRS__STORE__BUCKET_NAME=Your Bucket Name
  - PICTRS__STORE__REGION=Your Bucket Region
  - PICTRS__STORE__USE_PATH_STYLE=false
  - PICTRS__STORE__ACCESS_KEY=Your Access Key
  - PICTRS__STORE__SECRET_KEY=Your Secret Key

So your whole pictrs section looks something like this: https://pastebin.com/X1dP1jew
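In case that pastebin link ever goes away, here’s a rough sketch of what the finished pictrs service block can look like. The image tag, user, volume path and any pre-existing variables are only approximations of what the Lemmy ansible template ships (it changes between releases), so treat everything except the new PICTRS__STORE__* lines as illustrative:

  pictrs:
    image: asonix/pictrs:0.4
    user: 991:991
    restart: always
    volumes:
      - ./volumes/pictrs:/mnt
    environment:
      # ...keep whatever variables the template already defines here...
      - PICTRS__STORE__TYPE=object_storage
      - PICTRS__STORE__ENDPOINT=Your Object Store Endpoint
      - PICTRS__STORE__BUCKET_NAME=Your Bucket Name
      - PICTRS__STORE__REGION=Your Bucket Region
      - PICTRS__STORE__USE_PATH_STYLE=false
      - PICTRS__STORE__ACCESS_KEY=Your Access Key
      - PICTRS__STORE__SECRET_KEY=Your Secret Key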

The actual bucket name, region, access key and secret key will come from your provider. If you’re using Vultr like me, they’re shown in your object store’s details after you’ve created it, under Overview -> S3 Credentials. On Vultr your endpoint will be something like sjc1.vultrobjects.com, and your region is the subdomain prefix, in this case sjc1.
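To make that concrete, here’s roughly what those values could look like filled in for a Vultr object store in the sjc1 region. The bucket name and keys below are made up, and as far as I can tell pict-rs wants the endpoint as a full URL, so include the https:// scheme:

  - PICTRS__STORE__TYPE=object_storage
  - PICTRS__STORE__ENDPOINT=https://sjc1.vultrobjects.com
  - PICTRS__STORE__BUCKET_NAME=my-lemmy-images
  - PICTRS__STORE__REGION=sjc1
  - PICTRS__STORE__USE_PATH_STYLE=false
  - PICTRS__STORE__ACCESS_KEY=EXAMPLEACCESSKEY
  - PICTRS__STORE__SECRET_KEY=EXAMPLESECRETKEY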

Now you can install as usual. If you have an existing instance already deployed, there is an additional migration command you have to run to move your on-disk images into the object storage.
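For reference, that migration uses pict-rs’s migrate-store subcommand, run against the same paths and credentials as above. The sketch below is only approximate, since the flag names and the on-disk path vary between pict-rs releases; check the pict-rs README for your version before running anything:

  # run inside the pictrs container; paths and flags are approximate
  pict-rs \
    migrate-store \
    filesystem -p /mnt \
    object-storage \
      -e https://sjc1.vultrobjects.com \
      -b my-lemmy-images \
      -r sjc1 \
      -a EXAMPLEACCESSKEY \
      -s EXAMPLESECRETKEY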

You’re now good to go, and things should behave pretty much as before, except that pict-rs will save images to your designated cloud/object store, and when serving images it will redirect clients to pull directly from the object store instead. That saves you a lot of storage, CPU and bandwidth, and therefore money.

Hope this helps someone. I’m not an expert in either Lemmy administration or Linux sysadmin stuff, but I can say I’ve done this on my own instance at federate.cc and so far I can’t see any ill effects.

Happy Lemmy-ing!

  • sparky@lemmy.federate.cc (OP):
    Thinking about this a little more: I think, yeah, the HTTP requests will always hit your VPS, but if what you’re saying is that pict-rs loads images from the object store and then re-serves them off your VPS, then an NGINX rule might be able to redirect the GET directly to the object store, so that instead of transferring the actual image bytes it just 30x-redirects the browser through to the object store. I don’t know how feasible this is, but I may play around with it to see.

    • cablepick@lemmy.cablepick.net:
      Pict-rs uses a database to match the URI hash to the file name on disk or in the object store. This allows for deduplication. It always needs to sit between storage and requests. I have my instance set up to use a separate CDN domain and caching servers to reduce load on my instance. One day soon I hope to get a write-up done on how to do it.

      • Lodion 🇦🇺@aussie.zone:
        “This allows for deduplication”

        Really? I’ve found uploading the same image to pict-rs multiple times gives a different hash. It does not seem to dedupe at all.