this post was submitted on 10 Jan 2024
77 points (86.7% liked)

Selfhosted


Hi! Question in the title.

I get that it's super easy to set up. But is it really worthwhile to have something that:

  • runs everything as root (not many well-built images with proper user management, it seems)
  • you can't really know what's in the images: you have to trust whoever built them
  • lots of mess in the system (mounts, fake networks, rules...)

I always host on bare metal when I can, but sometimes (Immich, I'm looking at you!) it seems almost impossible.

I get docker in a work environment, but for self-hosting? Is it really worthwhile? I would like to hear your opinions, fellow hosters.

[–] shalva97 39 points 10 months ago (3 children)

Life is too short to install everything on baremetal.

[–] [email protected] 36 points 10 months ago (1 children)

people are rebuffing the criticism already.

here's the main advantage imo:

no messy system or leftovers. some programs use directories all over the place and it gets annoying fast if you host many services. sometimes you will have some issue that requires you to do quite a bit of hunting and redoing things.

docker makes this painless. you can deploy and redeploy stuff easily and quickly, without a mess. updates are painless and quick too, with everything neatly self-contained.

much easier to maintain once you get the hang of things.

[–] million 2 points 10 months ago

Quick addition: for the "messy" argument, the way I'd articulate it for folks running servers is that it helps you move from pets to cattle.

[–] [email protected] 33 points 10 months ago (1 children)

About the root problem: as of now, new installs encourage the user to run everything as a limited user. And the program is run as root inside the container, so in order to escape from it an attacker would need a double zero-day exploit (one to achieve RCE in the container, one to escape the container).

The alternative to "don't really know what's in the image" is usually: "just download this easy, minified and incomprehensible trustmeimtotallynotavirus.sh script and run it as root". That requires much more trust than a container you can delete, without a trace, in literally seconds.

If the program you want to run requires Python modules or Node modules, it will make much more of a mess on the system than a container would.

Downgrading to a previous version (or a beta preview) of the app you're running due to bugs is trivial: you just change a tag and launch it again. Doing this on bare metal requires you to be a terminal guru.

Finally, migrating to a fresh new server is just docker compose down, then rsync to the new server, then docker compose up -d. And no praying to ten different gods because after three years you've forgotten how you installed the app on bare metal.

Docker is perfect for ordinary people like us self-hosting at home; the professionals at work use Kubernetes.
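The tag-change rollback described above can be sketched with a hypothetical compose file (service name, image and paths are invented for illustration):

```yaml
# docker-compose.yml (hypothetical)
services:
  photos:
    # pin an explicit tag; to roll back, change the tag
    # and run `docker compose up -d` again
    image: ghcr.io/example/photoapp:1.4.2
    volumes:
      - ./data:/app/data
```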

[–] [email protected] 29 points 10 months ago* (last edited 10 months ago) (3 children)

Imo, yes.

  • only run containers from trusted sources (btw. Google, MS and Apple have proven they can't be trusted either)
  • run apps without dependency hell
  • even if someone breaks in, they’re not in your system but in a container
  • have everything web facing separate from the rest
  • get per app resource statistics

Those are just what was in my head. Probably more to be said.

[–] [email protected] 6 points 10 months ago (1 children)

Also the ability to snapshot an image, goof around with changes, and if you don’t like them restore the snapshot makes it much easier to experiment than trying to unwind all the changes you make.
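That snapshot-and-restore loop can be sketched with the docker CLI (container and tag names are made up). Note that docker commit captures the container's filesystem, not data in volumes:

```shell
# snapshot the current state of a running container as a local image tag
docker commit myapp myapp:before-experiment

# ...experiment with changes...

# not happy? discard the container and restore from the snapshot
docker rm -f myapp
docker run -d --name myapp myapp:before-experiment
```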

[–] [email protected] 20 points 10 months ago

Docker is messy and not ideal, but it was born out of necessity. Getting multiple services to coexist outside of containers can be a nightmare, updating and moving configuration is a nightmare, and removing things can leave stuff behind that gets messier and messier over time. Docker just standardises most of the configuration whilst requiring minimal effort from the developer.

[–] [email protected] 19 points 10 months ago (1 children)

I find it makes my life easier, personally, because I can set up and tear down environments I'm playing with easily.

As for your user & permissions concern, are you aware that Docker these days can be configured to map "root" in the container to a different user? Personally I prefer to use Podman though, which doesn't have that problem to begin with.
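The mapping referred to here is Docker's user-namespace remapping, a documented daemon option. A minimal sketch of the config (placed in /etc/docker/daemon.json, daemon restart required; "default" tells Docker to create a dockremap user and map container root onto an unprivileged subordinate UID range on the host):

```json
{
  "userns-remap": "default"
}
```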

[–] micka190 2 points 10 months ago* (last edited 10 months ago) (1 children)

I find it makes my life easier, personally, because I can set up and tear down environments I’m playing with easily.

Same here. I self-host a bunch of dev tools for my personal toy projects, and I decided to migrate from Drone CI to Woodpecker CI this week. Didn't have to worry about uninstalling anything, learning what commands I need to start/stop/restart Woodpecker properly, etc. I just commented out my Drone CI/Runner services in my docker-compose file, added the Woodpecker stuff, pointed it at my Gitea variables and ran docker compose up -d.

If my server ever crashes, I can just copy it over and start from scratch.

[–] [email protected] 19 points 10 months ago (1 children)

Others have addressed the root and trust questions, so I thought I'd mention the "mess" question:

Even the messiest bowl of ravioli is easier to untangle than a bowl of spaghetti.

The mounts/networks/rules and such aren't "mess", they are isolation. They're commoditization. They're abstraction - Ways to tell whatever is running in the container what it wants to hear, so that you can treat the container as a "black box" that solves the problem you want solved.

Think of Docker containers less like pets and more like cattle, and it very quickly justifies a lot of that stuff because it makes the container disposable, even if the data it's handling isn't.

[–] ssdfsdf3488sd 18 points 10 months ago

Because if you use relative bind mounts you can move a whole docker compose set of containers to a new host with docker compose stop, then rsync it over, then docker compose up -d.

Portability and backup are dead simple.
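Roughly, that migration looks like this (directory names and hostname are hypothetical):

```shell
# on the old host: stop the stack so its data files are quiescent
docker compose stop

# copy the project directory (compose file plus relative bind mounts)
rsync -avz ~/stacks/myapp/ newhost:~/stacks/myapp/

# on the new host, inside ~/stacks/myapp: bring the stack back up
docker compose up -d
```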

[–] MigratingtoLemmy 11 points 10 months ago

Docker can be run rootless. Podman is rootless by default.

I build certain containers from scratch. Very popular FOSS software can be trusted, but if you're that paranoid you should probably be running the bare minimum of software in the first place.

It's a mess if you're not used to it. But yes, normal Unix networking is somewhat simpler (like someone mentioned, LXC containers can be a decent option). You'll realise that Docker is not really top dog in terms of complexity when you start playing with the big boys like full-fledged k8s.

[–] [email protected] 10 points 10 months ago (1 children)

To answer each question:

  • You can run rootless containers but, importantly, you don't need to run Docker as root. Should the unthinkable happen, and someone "breaks out" of docker jail, they'll only be running in the context of the user running the docker daemon on the physical host.
  • True but, in my experience, most docker images are open source and have git repos - you can freely download the repo, inspect the build files, and build your own. I do this for some images I feel I want 100% control of, and have my own local Docker repo server to hold them.
  • It's the opposite - you don't really need to care about docker networks, unless you have an explicit need to contain a given container's traffic to its own local net, and bind mounts are just maps to physical folders/files on the host system, with the added benefit of mounting read-only where required.
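The read-only mounts mentioned in that last point are just a :ro suffix on the bind mount; a hypothetical compose fragment (image and paths are illustrative):

```yaml
services:
  web:
    image: nginx:alpine
    volumes:
      # host folder mounted read-only inside the container
      - ./site:/usr/share/nginx/html:ro
```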

I run containers on top of containers - Proxmox cluster, with a Linux container (CT) for each service. Most of those CTs are simply a Debian image I've created, running Docker and a couple of other bits. The services then sit inside Docker (usually) on each CT.

It's not messy at all. I use Portainer to manage all my Docker services, and Proxmox to manage the hosts themselves.

Why? I like to play.

Proxmox gives me full separation of each service - each one has its own CT. Think of that as me running dozens of Raspberry Pis, without the headache of managing all that hardware. Docker gives me complete portability and recoverability. I can move services around quite easily, and can update/rollback with ease.

Finally, the combination of the two gives me a huge advantage over bare metal for rapid prototyping.

Let's say there's a new contender that competes with Immich. I have Immich hosted on a CT, using Docker, and hiding behind Nginx Proxy Manager (also on a CT).

I can spin up a Proxmox CT from my own template, use my Ansible playbook to provision Docker and all the other bits, load it in my Portainer management platform, and spin up the latest and greatest Immich competitor, all within mere minutes. Like, literally 10 minutes max.

I have a play with the competitor for a bit. If I don't like it, I just delete the CT and move on. If I do, I can point my photos... hostname (via Nginx Proxy Manager) to the new service and start using it full-time. Importantly, I can still keep my original Immich CT in place - maybe shutdown, maybe not - just in case I discover something I don't like about the new kid on the block.

[–] [email protected] 2 points 10 months ago* (last edited 10 months ago) (1 children)

Should the unthinkable happen, and someone "breaks out" of docker jail, they'll only be running in the context of the user running the docker daemon on the physical host.

There is no daemon in rootless mode. Instead of a daemon running containers in client/server mode you have regular user processes running containers using fork/exec. Not running as root is part and parcel of this approach and it's a good thing, but the main motivator was not "what if someone breaks out of the container" (which doesn't necessarily mean they'd get all the privileges of the running user on the host and anyway it would require a kernel exploit, which is a pretty tall order). There are many benefits to making running containers as easy as running any kind of process on a Linux host. And it also enabled some cool new features like the ability to run only partial layers of a container, or nested containers.

[–] [email protected] 2 points 10 months ago

Yep, all true. I was oversimplifying in my explanation, but you're right. There's a lot more to it than what I wrote - I was more relating docker to what we used to do with chroot jails.

[–] [email protected] 9 points 10 months ago

Well, docker tends to be more secure if you configure it right. As far as images go, it really is just a matter of getting your images from official sources. If there isn't an image already available, you can make one.

The big advantage to containers is that they are highly reproducible. You no longer need to worry about issues that arise when running on the host directly.

Also if you are looking for a container runtime that runs as a local user you should check out podman. Podman works very similarly to docker and can even run your containers as a systemd user service.
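A sketch of that podman systemd user-service setup (container name and port are invented; on newer Podman versions Quadlet files are the recommended replacement for `podman generate systemd`, but the idea is the same):

```shell
# run a container as a regular user
podman run -d --name myapp -p 8080:80 docker.io/library/nginx:alpine

# generate a systemd user unit that recreates the container on start
podman generate systemd --new --name myapp \
  > ~/.config/systemd/user/container-myapp.service

# enable it for your user
systemctl --user daemon-reload
systemctl --user enable --now container-myapp.service
```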

[–] [email protected] 7 points 10 months ago

Portability is the key for me, because I tend to switch things around a lot. Containers generally isolate the persistent data from the runtime really well.

Docker is not the only, or even the best, way IMO to run containers. If I was providing services for customers, I would definitely build most container images daily in some automated way. Well, I do that already for quite a few.

The mess is only a mess if you don't really understand what you're doing, same goes for traditional services.

[–] aleq 5 points 10 months ago

the biggest selling point for me is that I'll have a mounted folder or two, a shell script for creating the container, and then if I want to move the service to a new computer I just move these files/folders and run the script. it's awesome. the initial setup is also a lot easier because all dependencies and stuff are bundled with the app.

in short, it's basically the exe-file of the server world

runs everything as root (not many well-built images with proper user management, it seems)

that's true I guess, but for the most part shit's stuck inside the container anyway so how much does it really matter?

you cannot really know which stuff is in the images: you must trust whoever built it

you kinda can, reading a Dockerfile is pretty much like reading a very basic shell script for the most part. regardless, I do trust most creators of images I use. most of the images I have running are either created by the people who made the app, or official docker images. if I trust them enough to run their apps, why wouldn't I trust their images?
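To illustrate the point, a typical small Dockerfile reads about like this (base image, app and file names invented):

```dockerfile
FROM node:20-alpine
WORKDIR /app

# install pinned dependencies first so the layer caches well
COPY package*.json ./
RUN npm ci --omit=dev

COPY . .

# drop root inside the container
USER node

EXPOSE 3000
CMD ["node", "server.js"]
```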

lots of mess in the system (mounts, fake networks, rules...)

that's sort of the point, isn't it? stuff is isolated

[–] [email protected] 5 points 10 months ago
  1. I don't run any of my containers as root
  2. Dockerfiles aren't hard to read so you can pretty easily figure out what they're doing
  3. I find managing dependencies for non-containerized services to be worse than one messy docker directory I never look at

Plus having all my services in a couple docker-compose files also means I can move them around incredibly easily.

[–] [email protected] 4 points 10 months ago

I have VMs on my metal, one specific for containers.

Though I use LXC. Docker started with LXC, then grew bigger, and I don't like how big it is.

If I can set up one simple NAT and run everything inside a container, I don't need Docker.

Docker's main advantage is the hub.

[–] specseaweed 4 points 10 months ago

I know enough to be dangerous. I know enough to follow FAQs, but I'm dumb enough not to back up like I should.

So I’d be running my server on bare metal and have a couple services going and sooner or later, shit would get borked. Shit that was miles past my competence to fix. Sometimes I’d set up a DB wrong, or break it, or an update would screw it up, and then it would all fall apart and I’m there cursing and wiping and starting all over.

Docker fixes that completely. It’s not perfect, but it has drastically lowered my time working on my server.

My server used to be a hobby that I loved dumping hours into. Now, I just want shit to work.

[–] [email protected] 3 points 10 months ago

Docker gives you a few different things which might or might not matter. Note that all of the following can be gotten in ways other than docker as well. Sometimes those ways are better, but often what is better is just opinion. There are downsides to some of the following as well that may not be obvious.

With docker you can take a container and roll it out to 100s of different machines quickly. this is great for scaling if your application can scale that way.

With docker you can run two services on the same machine that use incompatible versions of some library. It isn't unheard of to try to upgrade your system and discover that something you need isn't compatible with the new library, while something else you need to upgrade requires it. Docker means each service gets separate copies of what it needs, and when you upgrade one you can leave the other behind.
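For instance (images and tags invented), two services can each keep their own incompatible dependency stacks side by side:

```yaml
services:
  legacy-app:
    image: example/legacy-app:2.1   # built against the old library
  new-app:
    image: example/new-app:5.0      # built against the new one
```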

With docker you can test an upgrade and then when you roll it out know you are rolling out the same thing everywhere.

With docker you can move a service from one machine to another somewhat easily if needed, either to save money on servers, or to add machines as more power is needed. Since the service itself is in a container you can just start the container elsewhere and change pointers.

With docker if someone does manage to break into a container they probably cannot break into other containers running on the same system. (if this is a worry you need to do more risk assessment, they can still do plenty of damage)

[–] [email protected] 3 points 10 months ago

In short, yes, yes it's worth it.

[–] [email protected] 3 points 10 months ago

About the trust issue: there's no more or less trust than running on bare metal. Sure, you could compile everything from source, but you probably won't; and you might trust your distro's package manager, but that has a similar problem.

[–] [email protected] 3 points 10 months ago

I concur with most of your points. Docker is a nice thing for some use cases, but if I can easily use a package or set up my own configuration, then I will do that instead of using a docker container every time. My main issues with docker:

  • Containers are not updated with the rest of the host OS
  • firewall and mounting complexities which make securing it more difficult
[–] [email protected] 2 points 10 months ago* (last edited 10 months ago) (6 children)

How is this meaningfully different than using Deb packages? Or building from source without inspecting the build commands? Or even just building from source without auditing the source?

In the end, Dockerfiles are just instructions for running software that sets up other software. Just like every other shell script or config file in existence since the mid-seventies.

[–] TCB13 2 points 10 months ago (2 children)

Why docker?

It's all about companies re-creating and reconfiguring the way people develop software so everyone will be hostage to their platforms. We see this in everything now; Docker/Docker Hub/Kubernetes and GitHub Actions were the first signs of this cancer.

We now have a generation of developers that doesn't understand the basics of their tech stack: networking, DNS, how to deploy a simple thing onto a server that doesn't use Docker or some 3rd-party cloud deploy-from-GitHub service.

oh but the underlying technologies aren’t proprietary

True, but this Docker hype invariably and inevitably leads people down a path that then requires some proprietary solution or dependency somewhere, only needed because the "new" technology alone doesn't deliver as others did in the past. In this particular case it's the Docker Hub / Kubernetes BS and all the cloud garbage around it.

oh but there are alternatives like podman

It doesn't really matter if there are truly open-source and open ecosystems of containerization technologies, because in the end people and companies will pick the proprietary/closed option just because "it's easier to use" or some other specific thing that is good in the short term and very bad in the long term. This happened with CentOS vs Debian, is currently unfolding with Docker vs LXC/RKT/Podman, and will happen with Ubuntu vs Debian for all those who moved from CentOS to Ubuntu.

lots of mess in the system (mounts, fake networks, rules…)

Yes, a total mess of devices that's hard to audit, constant RAM wasting, and worst of all it isn't as easy to change a docker image / develop things as it used to be.

[–] [email protected] 3 points 10 months ago

This is a really bad take. I'm all for OSS, but that doesn't mean there isn't value in things like Docker.

Yes, developers know less about infra. I'd argue that can be a good thing. I don't need my devs to understand VLANs, the nuances of DNS, or any of that. I need them to code, and code well. That's why we have devops/infra people. If my devs do know it? Awesome. But docker and containerization allow them to focus on code and let my ops teams figure out how they want to put it in production.

As for OSS - sure, someone can come along and make an OSS solution. Until then - I don't really care. Same thing with cloud providers. It's all well and good to have opinions about OSS, but when it comes to companies being able to push code quickly and scalably, then yeah I'm hiring the ops team who knows kubernetes and containerization vs someone who's going to spend weeks trying to spin up bare iron machines.

[–] [email protected] 2 points 10 months ago (1 children)

Is all this true? It's a perspective I hadn't considered, but it feels true. I don't know if it is, though.

[–] [email protected] 2 points 10 months ago

I'll answer your question of why with your own frustration: bare metal is difficult. Every engineer uses a different language/framework/dependencies/whathaveyou, and usually they'll conflict with others. Docker solves this by containing those apps in their own space. Their code, projects and dependencies are already installed and taken care of; you don't need to worry about it.

Take yourself out of homelab and put yourself into a sysadmin. Now instead of knowing how packages may conflict with others, or if updating this OS will break applications, you just need to know docker. If you know docker, you can run any docker app.

So, yes, volumes and environments are a bit difficult at first. But it's difficult because it is a standard. Every docker container is going to need a couple mounts, a couple variables, a port or two open, and if you're going crazy maybe a GPU. It doesn't matter if you're running 1 or 50 containers on a system, you aren't going to get conflicts.

As for the security concerns, they are indeed security concerns. Again, imagine you're a sysadmin: you could direct developers that they can't use root, and that images need to be built on OSes with the latest patches. But you're at home, so you're at the mercy of whoever built the image.

Now that being said, since you're at their mercy, their code isn't going to get much safer whether you run it bare-iron or containerized. So, do you want to spend hours for each app figuring out how to run it, or spend a few hours now to learn docker and then have it standardized?
