I was waiting for new versions, but the disk filled up faster than I expected. I’m moving the 400GB image archive from local disk to object storage. Unfortunately the process is very slow; it will take several hours.
Update: Migration finished 🎉
I’ve read the cloud-engineering-headache write-up from a few different Lemmy servers now, so I can no longer remember which ones are on CDNs and which ones aren’t. Are you backed by anything in particular? Once you’re on object storage, you’re likely to find a surprising egress bill if you aren’t caching the images and other content at a higher CDN tier.
You should also more or less immediately set up lifecycle policies in your object store to tier older posts into cheaper storage. It’s 10–20 minutes of learning effort that stands to save you thousands of dollars. The usual trade-off with storage classes is that cheaper storage comes with more expensive reads. Since Lemmy promotes fresh content, there’s likely a balance you can strike in your lifecycle policies that drives storage costs down for older content that is unlikely to be loaded many times.
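For example, if the bucket is S3-compatible, a single lifecycle rule is enough to move older objects into an infrequent-access class. This is only a sketch; the bucket name, rule ID, and the 90-day threshold are placeholders you’d tune for your own instance and provider:

```sh
# Hypothetical bucket name, rule ID, and threshold; adjust for your provider/instance.
aws s3api put-bucket-lifecycle-configuration \
  --bucket my-lemmy-media \
  --lifecycle-configuration '{
    "Rules": [
      {
        "ID": "tier-old-media",
        "Status": "Enabled",
        "Filter": {"Prefix": ""},
        "Transitions": [
          {"Days": 90, "StorageClass": "STANDARD_IA"}
        ]
      }
    ]
  }'
```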
While I don’t like Cloudflare for centralizing the web, I’m grateful they serve almost 40TB of data for free every month.
It’s mad to think that that’s small change for them
Hey, I’ve noticed two things since the downtime and migration.
- NSFW is no longer automatically checked on submissions. I can deal with that, but it used to be automatic here.
- When I submit a new entry, the submission hangs with a spinning wheel, but I notice that the actual submission goes through. Don’t know why the web server just hangs like that, but it is annoying.
Thanks for all your hard work!
I’ll add that both the first and second one should be fixed by now 🫡
I wanted to ask whether you are storing images after transcoding them or not. A 400GB image archive seems quite high to me.
If you are not, I highly recommend that you transcode the images that users upload; it will significantly reduce the size of your image library.
I recommend transcoding to the webp format. I tried making avif work but it wouldn’t, and jxl is still a ways off.
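As a rough illustration of the savings (a generic example, not numbers from this instance), converting a single upload with cwebp from the libwebp tools looks like this; the filenames and quality setting are placeholders:

```sh
# Convert a JPEG upload to lossy WebP at quality 80 (placeholder filenames/quality).
cwebp -q 80 upload.jpg -o upload.webp

# Compare the before/after sizes.
ls -lh upload.jpg upload.webp
```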
Images are transcoded, but that’s not the problem. The problem is Lemmy literally downloading every remote post’s thumbnails into our archive. There is no option to opt out.
I looked at the images just now and I can tell you for a fact that they are not being transcoded before being saved. To transcode and save images in the desired format, you have to set the following environment variable for the `pictrs` service in the docker-compose file: `PICTRS__MEDIA__FORMAT=`
When I look at your communities, I find images saved as png, jpeg, and jpg, but not as webp. Saving images as webp/avif/jxl will result in an incredible reduction in storage requirements.
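If it helps, this is roughly what that looks like in a typical Lemmy docker-compose setup; the service name, image, and the webp value are assumptions on my end, so check them against your own file:

```yaml
# Sketch of the relevant part of docker-compose.yml (service name, image, and format are assumptions).
services:
  pictrs:
    image: asonix/pictrs
    environment:
      # Transcode uploaded media to webp before storing it.
      - PICTRS__MEDIA__FORMAT=webp
```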
Oh shit, you are right! It used to be the default but was probably removed in newer versions. I added it back. Thanks!
- Buy a 1TB USB drive.
- Rsync the photo archive onto the 1TB USB drive.
- Format the 400GB drive.
- Replace the 400GB drive with something more substantial.
- Rsync the photo archive from the USB drive to the replacement.
You know I’m not serving the site from my local computer, right? 😀
Just drive to the data centre and let them know you have an extra 1TB hard drive to plug in; they’ll walk you right in and serve you mimosas.
So?
My server has attached storage for cases like this. Before that, I had a mirror setup.
Someone tell me that you’re not in IT.
deleted by creator
You can rsync a remote drive to another remote drive.
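In practice you run the rsync on one of the two servers, since a single rsync invocation can’t have both ends remote. Something along these lines, run from the source server; the destination host and the paths are placeholders:

```sh
# Run on the source server; destination host and paths are placeholders.
rsync -aHP /var/lib/pictrs/ admin@new-storage-host:/var/lib/pictrs/
```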
No shit, Sherlock. You can also use apps like Resilio to copy files from any server to any server. I use it for a ton of stuff.
I would hope that the admins of the instance know something as basic as rsync.