I’m pretty new to self-hosting in general, so I’m sorry if I’m not using the correct terminology or if this is a dumb question.
I did a big archival project last year, and ripped all 700 or so DVDs/Blu-rays I own. Ngl, I had originally planned on just having them all in a big media folder and picking out whatever I wanted to watch that way. Fortunately, I discovered Jellyfin, and went with that instead.
So I bought a mini pc to run Ubuntu server on, and I just installed Jellyfin directly there. Eventually I decided to try hosting a few other services (like Home Assistant and BookLore (R.I.P.)), which I did through Docker.
So I’m wondering, should I be running Jellyfin through Docker as well? Are there advantages to running Jellyfin through Docker as opposed to installed directly on the server? Would transitioning my Jellyfin instance to Docker be a complicated process (bearing in mind that I’m new and dumb)?
Thanks for any assistance.
I used to do everything in VMs or containers (not sure what to call them now, LXCs?). But I migrated everything to docker; it is just so much easier. Easier to back up, update, and roll back.
I just use docker compose for everything. I like how everything pertaining to a service can be contained within a single directory, and there’s minimal file permission management. Also, lots of services need their own databases, which might conflict on system installs.
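For example, a minimal sketch of that one-directory layout (jellyfin/jellyfin is the official image; the media path and user/group IDs here are assumptions you’d swap for your own):

```yaml
# ~/jellyfin/docker-compose.yml — everything for the service lives in this one directory
services:
  jellyfin:
    image: jellyfin/jellyfin          # official image
    container_name: jellyfin
    user: "1000:1000"                 # run as your user, not root (assumed UID:GID)
    volumes:
      - ./config:/config              # config + database sit next to the compose file
      - ./cache:/cache
      - /srv/media:/media:ro          # hypothetical media path; read-only is safer
    ports:
      - "8096:8096"                   # Jellyfin's default web UI port
    restart: unless-stopped
```

Backing up the service is then just copying that one directory.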
Docker and Docker Compose make things very easy to maintain, restart, update, and migrate. I don’t see downsides, other than maybe taking a bit longer to get started in the first place?
My recommendation is to go with docker. I don’t know the process to migrate your database from bare metal to a container, but I am sure this question has been answered somewhere.
You should know how to host something without using docker, because, well… that’s how you’d write a Dockerfile.
But you should not self-host without containerization. The whole idea is that your self-hosted applications are not polluting your environment. Your system doesn’t need all these development libraries and packages. Once you remove an application, you will realize that the environment is permanently polluted, and it is often difficult to “reset” it to its previous state (without leftover dependencies and random files).
However with docker none of that happens. Your environment is in the same state you left it.
It’s pretty easy to just unzip the tarball and set it up once manually. Upgrades are just unzipping a new tarball. Create the systemd unit file and a start script once; those are very short, and that’s all.
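A sketch of what that unit file might look like (the /opt/jellyfin install path and the jellyfin user here are illustrative assumptions, not anything the package creates for you):

```ini
# /etc/systemd/system/jellyfin.service — minimal sketch for a tarball install
[Unit]
Description=Jellyfin Media Server
After=network.target

[Service]
User=jellyfin
ExecStart=/opt/jellyfin/jellyfin --datadir /var/lib/jellyfin
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Then `systemctl enable --now jellyfin` once, and upgrades don’t touch the unit at all.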
The biggest advantage of Docker is that it’s a little bit easier to manage all the dependencies of a service. And often enough the Docker images come from the official vendor and thus should in theory be configured optimally out of the box and give you timely updates.
But if you don’t have any problems with your current install I wouldn’t touch it.
I run it in docker and it’s fine. It’s not because I don’t know how to run it natively - I’m a Linux sysadmin - it’s just that very often, docker is easier to do this stuff with. Easier to migrate to other machines, easier to upgrade, easier to install, easier to remove if you want to.
By all means go native if you want to learn. Pros and cons in each method, but for me, docker works just fine for most things.
I prefer to run processes directly on the host system if I can. Jellyfin is well behaved, running as its own user and not hogging RAM, and it doesn’t need dependencies that conflict with other apps/services. So I don’t see a need to add a layer of port/volume/stderr mapping.
I also ran HA and AppDaemon just in Python virtual envs. Glad to share Ansible playbooks if you’re interested.
Ngl, I used an ansible playbook one time and I felt like a fourth grader trying to perform open heart surgery. Again, I am just so very very new and dumb lmao
Isolating network services from the rest of your system is a good thing
Bearing that in mind, I now have a new problem, which is that apparently none of my containers actually have internet access? I hadn’t noticed because I mostly just run local media servers, and I tend to clean up all the metadata before I upload anything (i.e. I usually clean up my ebooks in Calibre before I send them to BookLore, so I’ve never had to actually use BookLore to fetch anything from the web).
Only way I was able to get internet access in any of my containers was adding network_mode: "host" to the docker-compose.yml files, which, if I’m understanding correctly, negates the point of isolating network services, no? So something is broken somewhere, but I have no idea what it is or how to fix it, so I guess my JF server is staying on bare metal for now lol
Do you mean the ability of Jellyfin to access the internet, or the ability for network access to Jellyfin?
If you mean the second then you need to map ports https://docs.docker.com/get-started/docker-concepts/running-containers/publishing-ports/
If you mean the first, then something is wonky. But even so, using host mode still doesn’t negate the point: you’re still only allowing the processes in the container to access the directories you’ve specified, and they’re isolated from the other processes on the system. It’s about limiting the blast radius if an exploit against your network application occurs.
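For the second case, publishing a port in compose looks like this (a sketch using Jellyfin’s default 8096 web port):

```yaml
services:
  jellyfin:
    image: jellyfin/jellyfin
    ports:
      - "8096:8096"    # host_port:container_port — only this port is reachable from outside
    # network_mode: "host" would bypass the bridge entirely; ports: keeps the isolation
```

With `ports:` the container stays on the bridge network and only the ports you list are exposed on the host.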
Jellyfin isn’t running in a docker container, so it’s working fine. I’ve just noticed that everything I am running in a container doesn’t have network access, unless I change network mode to host in that container’s compose yml. So I guess docker’s network bridge isn’t configured correctly? Which makes sense, as I have basically no idea what I’m doing lmao. So until I figure out what’s going on there, I think I’ll just let my JF server run as is. I’d prefer it in a container I think, but not before I figure out what exactly I broke.
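A few checks that usually narrow down broken bridge networking (standard docker and host commands; alpine is just a small throwaway image to test with, and the firewall step assumes an Ubuntu-ish host):

```
# 1. Can a fresh container reach an IP directly? (rules out DNS)
docker run --rm alpine ping -c 2 1.1.1.1

# 2. Can it resolve names? (if step 1 works but this fails, it's a DNS problem)
docker run --rm alpine nslookup google.com

# 3. Is IP forwarding enabled on the host? (should report 1)
sysctl net.ipv4.ip_forward

# 4. If you run ufw or firewalld, its FORWARD policy may be dropping
#    container traffic — a common culprit on Ubuntu servers
sudo iptables -L FORWARD -n | head -3
```

If step 1 fails it’s usually forwarding or firewall rules; if only step 2 fails it’s DNS, which you can work around by setting a dns: entry in the compose file.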
I don’t think the migration will be that awful going from Linux to Linux container? I just gave up and nuked it going from Windows to a Linux container, but that was after hours of playing whack-a-mole with Windows -> Linux path issues.
The main thing is you’ll probably want to mount your media location as a volume in docker using the same path as it was on bare metal, as otherwise I think you’ll need to fix all those paths in Jellyfin’s DBs. You’ll also need to locate Jellyfin’s config/etc directory and mount it in docker with the appropriate binds, and while doing that you’ll probably want to move it to a spot that’s more appropriate for container config storage.
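Concretely, if you installed from the Debian/Ubuntu package, I believe the data lives in /var/lib/jellyfin and the config in /etc/jellyfin, while the official container keeps everything under /config (with config files in /config/config). A rough sketch of the copy, assuming a compose file that bind-mounts ./config as /config — double-check these paths on your system before trusting any of it:

```
# Sketch only: the left-hand paths are the Debian/Ubuntu package defaults;
# verify them on your system first.
sudo systemctl stop jellyfin

mkdir -p ~/jellyfin/config/config
sudo cp -a /var/lib/jellyfin/. ~/jellyfin/config/         # data, metadata, plugins
sudo cp -a /etc/jellyfin/.     ~/jellyfin/config/config/  # system.xml etc.
sudo chown -R 1000:1000 ~/jellyfin/config                 # match the container's user

cd ~/jellyfin && docker compose up -d
```

If the libraries come up empty afterwards, it’s almost always a path mismatch between the old install and the new mounts.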
An additional thing is that the container will need to be explicitly given access to your GPU for transcoding if needed, but that changes with your system and is just part of Jellyfin docker setup.
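Both points look roughly like this in compose (a sketch: /srv/media is a hypothetical path — the trick is keeping it identical inside and outside the container — and /dev/dri covers the common Intel/AMD VAAPI case; NVIDIA needs a different setup):

```yaml
services:
  jellyfin:
    image: jellyfin/jellyfin
    volumes:
      - /srv/media:/srv/media:ro   # same path inside and out, so existing DB paths keep working
      - ./config:/config
    devices:
      - /dev/dri:/dev/dri          # Intel/AMD GPU for hardware transcoding (VAAPI/QSV)
    ports:
      - "8096:8096"
    restart: unless-stopped
```

After that you still enable hardware acceleration in Jellyfin’s dashboard as usual.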
Contrary to the other poster, I prefer Docker over directly on the main OS. For one simple reason: uninstall. I tend to install/uninstall stuff frequently. Sure, Jellyfin is great now, but what about next year when something happens and I want to switch to a fork, or Emby, or something else? Uninstalling in Linux is a crapshoot. Not too bad if you’re using a package manager, but oftentimes the things I install aren’t in the package manager. Uninstalling binaries, cleaning up directories, removing users and groups, and removing dependencies is a massive pain. Back before docker, instead of doing dist upgrades on my Ubuntu server, I’d reinstall from scratch just to clean everything up.
With docker, cleanup is a breeze.
The official docker image takes the thinking and updating challenges away.
Don’t change now if you don’t have any issues, in my opinion. However, if you have the space for the Jellyfin backup, it should be a pretty simple transition. I always prefer deploying using docker compose for all my services: I have backups of the compose files, and it handles all the networking between all the services (VPN, *arr stack, qbt, seer, jellyfin). When I had to move off of my ancient server after it kicked the bucket, it was as simple as copying my compose files, a single docker deployment per stack, and loading the backups for specific services. I’ve not had any issues with Jellyfin on docker, but I am using GPU passthrough to allow for hardware-accelerated transcoding.
LXC all the way
Imperative installations are messy to deal with and maintain; I recommend using either Docker Compose or NixOS.