• 1 Post
  • 3 Comments
Joined 2 months ago
Cake day: February 8th, 2026

  • Hey, super helpful comment.

    A few of the details you mentioned are exactly the kind of practical stuff I’m trying to collect, so I wanted to ask a bit more:

    • When you say you pushed federation workers up to 128, which exact setting are you referring to?
    • Roughly how big is your instance in practice — users, subscriptions, remote communities, storage size, daily activity?
    • What were the first signs that federation was falling behind, besides the “Waiting for X workers” log message?
    • Did increasing workers fully solve it, or did it just move the bottleneck somewhere else?
    • What kind of Postgres tuning ended up mattering most for you?
    • For backups, are you only doing weekly pg_dump + VPS backups, or also separately backing up pictrs, configs, secrets, and proxy setup?
    • Have you tested full restore end-to-end on another machine?
    • For pictrs growth, have you found any good way to keep storage under control, or is it mostly just “plan for it to grow”?
    • For monitoring/logging, if you were starting over, what would you set up from day one?

    I’m mostly interested in the boring operational side of running Lemmy long-term: backup/restore, federation lag, storage growth, and early warning signs before things get messy.
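    For reference, when I say “app-level backups” I mean something roughly like the sketch below (container name, DB user/name, and paths are placeholders I made up, not your actual setup):

```shell
#!/bin/sh
# Sketch of an app-level Lemmy backup, assuming a docker-compose deployment.
# The container name 'lemmy-postgres', DB user/name 'lemmy', and all paths
# below are illustrative assumptions -- adjust them to your deployment.
set -eu

backup_lemmy() {
    backup_dir=${BACKUP_DIR:-/var/backups/lemmy}
    stamp=$(date +%Y-%m-%d)
    mkdir -p "$backup_dir"

    # 1. Logical Postgres dump in custom format (restorable with pg_restore)
    docker exec lemmy-postgres pg_dump -U lemmy -Fc lemmy \
        > "$backup_dir/lemmy-db-$stamp.dump"

    # 2. pictrs media store -- usually the largest item by far
    tar czf "$backup_dir/pictrs-$stamp.tar.gz" -C /srv/lemmy volumes/pictrs

    # 3. Config, secrets, compose file -- small but easy to forget
    tar czf "$backup_dir/config-$stamp.tar.gz" \
        /srv/lemmy/lemmy.hjson /srv/lemmy/docker-compose.yml
}
```

    That's the shape of setup I'm asking about: whether you back up all three pieces separately, or only some of them.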

    Sorry if some of these questions are a bit basic or oddly specific — I’m using AI to help gather as much real-world Lemmy hosting experience as possible, and it generated most of these follow-up questions for me.


  • Hey, this is really useful.

    I wanted to ask a few follow-ups, because the jump from 16 GB to 64 GB sounds pretty dramatic:

    • What kind of storage were you using when it was struggling — HDD, SSD, NVMe?
    • Did you only increase RAM, or did storage / CPU / other settings change too?
    • Roughly what kind of workload was this (number of users, subscribed communities, amount of federated traffic, image-heavy browsing, etc.)?
    • Do you remember what the actual bottleneck looked like — high RAM use, swap, I/O wait, Postgres getting slow, pictrs, federation queue buildup?
    • When you say disabling image proxying helped, how much did it help in practice?
    • Was this on a recent Lemmy version, or a while back?

    I’m trying to separate “Lemmy really needs big hardware” from “a specific part of the stack was the real problem”.
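    To make the bottleneck question concrete, this is roughly the kind of check I mean (the container name 'lemmy-postgres' is my assumption, not necessarily yours):

```shell
#!/bin/sh
# Sketch: quick checks to tell memory pressure apart from disk I/O wait
# and from a slow Postgres. Container/DB names are illustrative assumptions.

check_host() {
    free -h       # is swap in use? is memory actually exhausted?
    vmstat 1 5    # a high 'wa' column means the CPU is waiting on disk I/O
}

check_postgres() {
    # Long-running active queries suggest the DB, not RAM, is the bottleneck
    docker exec lemmy-postgres psql -U lemmy -d lemmy -c \
        "SELECT pid, now() - query_start AS runtime, left(query, 60)
         FROM pg_stat_activity
         WHERE state = 'active' ORDER BY runtime DESC;"
}
```

    Basically: did it look like the host view or the Postgres view when things got slow?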

    Sorry if some of these questions are a bit basic or oddly specific — I’m using AI to help gather as much real-world Lemmy hosting experience as possible, and it generated most of these follow-up questions for me.


  • nachitima@lemmy.ml (OP) to Selfhosted@lemmy.world · Hosting Lemmy experience · edited 5 hours ago

    Hey, thanks for sharing this.

    I’m trying to get a clearer picture of what a reliable Lemmy backup/restore setup looks like in practice, especially for self-hosting.

    A few things I’d be curious about in your setup:

    • Are your Proxmox backups enough on their own, or do you also make separate Postgres dumps?
    • Are you backing up the whole container/VM image, or do you also separately keep pictrs data, config files, secrets, reverse proxy config, etc.?
    • Have you actually tested a full restore from backup onto another machine? If yes, did it come back cleanly?
    • Do you do local-only backups, or also offsite copies?
    • When you update Lemmy, do you rely on rollback from snapshots if something breaks, or do you have another recovery path?

    Main thing I’m trying to understand is whether Proxmox-only backups are “good enough” operationally, or whether people still end up needing app-level backups too.
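    By “tested a full restore” I mean a drill along these lines, assuming a pg_dump custom-format dump plus a pictrs tarball (names and paths below are placeholders, not your actual layout):

```shell
#!/bin/sh
# Sketch of a restore drill on a scratch machine. Assumes backups made with
# 'pg_dump -Fc' and a pictrs tarball; container names and paths are
# illustrative assumptions -- adjust to your deployment.

restore_lemmy() {
    dump=$1
    pictrs_tar=$2

    # Recreate the database and load the logical dump
    docker exec lemmy-postgres createdb -U lemmy lemmy
    docker exec -i lemmy-postgres pg_restore -U lemmy -d lemmy < "$dump"

    # Unpack media back into place before starting lemmy/pictrs
    tar xzf "$pictrs_tar" -C /srv/lemmy

    docker compose -f /srv/lemmy/docker-compose.yml up -d
}
```

    If a Proxmox snapshot restore alone brings all of that back cleanly, that would answer my question, so I'm curious whether you've actually run it through.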

    Sorry if some of these questions are a bit basic or oddly specific — I’m using AI to help gather as much real-world Lemmy hosting experience as possible, and it generated most of these follow-up questions for me.