Hey, this is really useful.
I wanted to ask a few follow-ups, because the jump from 16 GB to 64 GB sounds pretty dramatic:
- What kind of storage were you using when it was struggling — HDD, SSD, NVMe?
- Did you only increase RAM, or did storage / CPU / other settings change too?
- Roughly what kind of workload was this? Number of users, subscribed communities, amount of federated traffic, image-heavy browsing, etc.
- Do you remember what the actual bottleneck looked like — high RAM use, swap, I/O wait, Postgres getting slow, pictrs, federation queue buildup? (Rough sketch of the kind of check I mean below the list.)
- When you say disabling image proxying helped, how much did it help in practice?
- Was this on a recent Lemmy version, or a while back?
I’m trying to separate “Lemmy really needs big hardware” from “a specific part of the stack was the real problem”.
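For context on the bottleneck question: this is roughly the kind of snapshot I'd want to compare notes against. It's a minimal sketch, assuming a Linux host and the third-party `psutil` package — the output format and what it checks are my own framing, not anything from Lemmy itself:

```python
# Hypothetical resource snapshot -- not from the original thread.
# Assumes Linux and psutil (pip install psutil).
import psutil

def snapshot():
    mem = psutil.virtual_memory()
    swap = psutil.swap_memory()
    # cpu_times_percent exposes iowait on Linux; sample over 1 second.
    cpu = psutil.cpu_times_percent(interval=1)
    print(f"RAM used:  {mem.percent}% "
          f"({mem.used / 2**30:.1f} of {mem.total / 2**30:.1f} GiB)")
    print(f"Swap used: {swap.percent}%")
    # High iowait with modest RAM use would point at disk, not memory.
    print(f"I/O wait:  {cpu.iowait}%")
    # Note: Postgres slowness isn't visible here -- that would need a
    # look at pg_stat_activity or the slow-query log.

if __name__ == "__main__":
    snapshot()
```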
Sorry if some of these questions are a bit basic or oddly specific — I’m using AI to help gather as much real-world Lemmy hosting experience as possible, and it generated most of these follow-up questions for me.


Hey, super helpful comment.
A few of the details you mentioned are exactly the kind of practical stuff I’m trying to collect, so I wanted to ask a bit more:
- The “Waiting for X workers” log message you mentioned — what was actually going on when it showed up?
- For backups, is it just pg_dump + VPS backups, or are you also separately backing up pictrs, configs, secrets, and the proxy setup? (Rough sketch of the split I mean below.)

I’m mostly interested in the boring operational side of running Lemmy long-term: backup/restore, federation lag, storage growth, and early warning signs before things get messy.
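To make that backup question concrete, here's the split I'm asking about. This is a rough sketch only — the container name, database name, user, and every path are assumptions about a typical docker-compose Lemmy install, not anything you described:

```python
# Hypothetical backup sketch -- all names and paths are guesses about a
# typical docker-compose Lemmy setup; adjust to the real install.
import datetime
import pathlib
import subprocess

STAMP = datetime.date.today().isoformat()
DEST = pathlib.Path("/var/backups/lemmy")
DEST.mkdir(parents=True, exist_ok=True)

# 1. Database: pg_dump inside the postgres container
#    (container "postgres", user "lemmy", db "lemmy" are assumptions).
with open(DEST / f"lemmy-db-{STAMP}.sql", "wb") as out:
    subprocess.run(
        ["docker", "exec", "postgres", "pg_dump", "-U", "lemmy", "lemmy"],
        stdout=out, check=True,
    )

# 2. Everything pg_dump does NOT cover: pictrs data, config, secrets,
#    and the reverse-proxy setup. Paths assume this runs from the
#    compose directory.
subprocess.run(
    ["tar", "czf", str(DEST / f"lemmy-files-{STAMP}.tar.gz"),
     "volumes/pictrs", "lemmy.hjson", "nginx.conf", ".env"],
    check=True,
)
```

In other words: is the database dump the whole plan, or does the pictrs/config/secrets half get captured somewhere too?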
Sorry if some of these questions are a bit basic or oddly specific — I’m using AI to help gather as much real-world Lemmy hosting experience as possible, and it generated most of these follow-up questions for me.