You know this hobby of yours is directly or indirectly bad for the environment, for society (Middle East tensions, see 9/11), for road safety when SUVs block the driver’s view of children in front of them, for city planning, and I could go on. But still you’re a “car person”, so none of that matters.
Normal people will have to wait for you and other “car people” to die off for the planet to become a better place. Until then, you’re actively making things worse.
You’re worried it’s not loud enough and that people will laugh at you in car shows?? You’re part of the problem.
One day we’ll know more about the Roman Empire than the early Web
The “tragedy of the commons” is misunderstood and maybe not even a thing https://aeon.co/essays/the-tragedy-of-the-commons-is-a-false-and-dangerous-myth
I couldn’t do this with OPNSense, but you can do it with nginx without TLS termination - I just used a separate Linux VM for it: https://nginx.org/en/docs/stream/ngx_stream_ssl_preread_module.html
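Roughly what that looks like (hostnames and backend addresses below are placeholders, not my real setup): the stream module peeks at the SNI in the TLS ClientHello and forwards the raw connection, so the certificates stay on the backends.

```nginx
# Route TLS connections by SNI without terminating them.
stream {
    map $ssl_preread_server_name $backend {
        git.example.com    192.168.1.10:443;
        media.example.com  192.168.1.20:443;
        default            192.168.1.30:443;
    }

    server {
        listen 443;
        ssl_preread on;        # read the ClientHello to populate $ssl_preread_server_name
        proxy_pass $backend;   # pass the untouched TLS stream to the chosen backend
    }
}
```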
Wrote a blog post today with a working example https://blog.nootch.net/post/my-home-network-setup-in-2024/
I’m getting real “people person” vibes here.
Boeing’s stock kept rising over the last 10 years because they were sacrificing what they should have been doing for shareholder value. Stock price alone is not a good metric for a company.
I’m not supporting them, I haven’t bought anything from Blizzard since the last Protoss SC2 game ages ago. But I don’t want to lose access to my games.
It’s not that easy. My Blizzard account is over 10 years old - I never thought they’d go downhill so much. What’s the solution, to never create accounts online anywhere? Even if a service looks good and you support it, a corporation like Activision can come along and have their asshole CEO infect everything.
Walking away from my account now means throwing away a lot of money spent on it.
Yes, it would cause downtime for the one being migrated - right? Or does that not count as downtime?
You’ve never had to run migrations that lock tables or rebuild an index in two decades?
The official Postgres Docker image is geared towards single database per instance for several reasons - security through isolation and the ability to run different versions easily on the same server chief among them. The performance overhead is negligible.
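A rough sketch of what I mean (service names, versions and passwords are just examples): two independent Postgres containers of different major versions on the same host, each with its own data volume.

```yaml
services:
  app-db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: changeme   # example only
    volumes:
      - app-db-data:/var/lib/postgresql/data

  legacy-db:
    image: postgres:12
    environment:
      POSTGRES_PASSWORD: changeme   # example only
    volumes:
      - legacy-db-data:/var/lib/postgresql/data

volumes:
  app-db-data:
  legacy-db-data:
```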
As for devs not supporting custom installation methods, I’m more inclined to think it’s a lack of time - they can’t support every individual native setup, so they just respond to tickets about their official one (which is also why Docker exists in the first place).
I used to be a sysadmin in 2002/3 and let me tell you - Docker makes all that menial, boring work go away and services just work. Which is what I want, instead of messing with php.ini extensions or custom iptables rules.
Not really. The docker-compose file has services in it, and they’re separate from each other. If I want to update sonarr but not jellyfin (or its DB service) I can.
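Something like this (service names are just examples) - pulling and restarting one service leaves everything else running:

```sh
# Update only sonarr; jellyfin and its DB service keep running untouched.
docker compose pull sonarr
docker compose up -d sonarr
```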
It’s kinda weird to see the Docker scepticism around here. I run 40ish services on my server, all with a single docker-compose YAML file. It just works.
Manually tweaking every project I run seems impossibly time-draining in comparison. I don’t care about the underlying mechanics, just want shit to work.
I’ve been using it with a 6800 for a few months now, all it needs is a few env vars.
I keep trying it every couple of years to see if it works better, but nah. Even with MySQL/PG + Redis, it’s still slow and clunky. Maybe in 2026
Nah, plenty of these people will look back fondly on their sports cars, cocaine and lush homes.