this post was submitted on 11 Mar 2024

Selfhosted

A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.


I never understood how to use Docker. What makes it so special? I would really like to use it on my Raspberry Pi 3 Model B+ to ease the setup process of selfhosting different things.

I'm currently running these things without Docker:

  • Mumble server with a Discord bridge and a music bot
  • Maubot, a plugin-based Matrix bot
  • FTP server
  • Two Discord Music bots

All of these things are running as systemd services in the background. Should I change this? A lot of the things I'm hosting offer Docker images.

It would also be great if someone could give me a quick-start guide for Docker. Thanks in advance!

[–] TCB13 -5 points 8 months ago* (last edited 8 months ago) (1 children)

The thing with Docker is that people don't want to learn how to use Linux and are buying into an overhyped solution that makes their life easier without understanding the long-term consequences. Most of the pro-Docker arguments revolve around security, and that's mostly BS because 1) systemd can provide as much isolation as Docker containers and 2) there are other container solutions that are at least as safe as Docker and nobody cares about them.

Companies such as Microsoft and GitHub are all about re-creating and reconfiguring the way people develop software so everyone will be hostage to their platforms. We see this in everything now; Docker/DockerHub/Kubernetes and GitHub Actions were the first signs of this cancer. We now have a generation that doesn't understand the basics of their tech stack, about networking, about DNS, about how to deploy a simple thing onto a server that doesn't use some Docker BS or isn't a 3rd-party cloud xyz deploy-from-github service.

Before anyone comments that Docker isn't totally proprietary and there's Podman, consider the following: it doesn't really matter that there are truly open-source and open ecosystems of containerization technologies. In the end people/companies will pick the proprietary/closed option just because "it's easier to use" or some other specific thing that will be good in the short term and very bad in the long term.

Docker may make development and deployment very easy and lower the bar for newcomers, but it has the dark side of being designed to reconfigure and envelop the way development gets done so someone can profit from it. That is sad and, above all, sets dangerous precedents and creates generations of engineers and developers who don't have truly open tools like we did. There's a LOT of money in transitioning everyone to the "deploy-from-github-to-cloud-x-with-hooks" model, so those companies will keep pushing for it.

Note that technologies such as Docker keep commoditizing development; it's a vicious cycle that never ends. Yes, I say commoditizing development because, if you look at it, those techs only make things easier for the entry-level developer, and instead of hiring developers for their knowledge and ability to develop, companies are just hiring "cheap monkeys" that are able to configure those technologies and cloud platforms to deliver something. At the end of the day, the business of those cloud companies is transforming developer knowledge into products/services that companies can buy with a click.

[–] [email protected] 8 points 8 months ago (2 children)

Most of the pro-Docker arguments revolve around security

Actually, Docker and the success of containers is mostly due to the ease of shipping code that carries its own dependencies and can be run anywhere. Security is a side effect and definitely not the reason why containers took off.
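
For example, a few lines of Dockerfile are enough to pin the runtime and dependencies so the image behaves the same on any host (a made-up example, file names and all):

```dockerfile
# Hypothetical bot packaged together with its own runtime and dependencies
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["python", "bot.py"]
```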

1) systemd can provide as much isolation as Docker containers and 2) there are other container solutions that are at least as safe as Docker and nobody cares about them

Yes, and it's much harder to achieve the same. In systemd you need to use 30 different options to get what containers give you almost instantly and with much less hassle. I wrote an example on my blog where I decided to run blocky with systemd and not in Docker. It's just less convenient and accessible, harder to debug, and it relies on each individual user to get it right, while with containers a lot gets packed into the image and is therefore harder to mess up.
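
Just as a sketch of what I mean (not my actual unit; the binary path, config path and image name are approximate), getting container-like confinement from a plain unit means stacking directives like these:

```ini
[Service]
ExecStart=/usr/local/bin/blocky --config /etc/blocky/config.yml
DynamicUser=yes
ProtectSystem=strict
ProtectHome=yes
PrivateTmp=yes
PrivateDevices=yes
NoNewPrivileges=yes
RestrictNamespaces=yes
RestrictAddressFamilies=AF_INET AF_INET6
CapabilityBoundingSet=CAP_NET_BIND_SERVICE
AmbientCapabilities=CAP_NET_BIND_SERVICE
SystemCallFilter=@system-service
ProtectKernelTunables=yes
ProtectControlGroups=yes
LockPersonality=yes
```

whereas the container route is roughly a single `docker run -d -p 53:53/udp -v ./config.yml:/app/config.yml spx01/blocky`, with most of the filesystem and process isolation implied by the image and runtime.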

Docker isn’t totally proprietary

There are many container runtimes (CRI-O, Podman, Mirantis, containerd, etc.). Docker is just a convenient API; containers are implemented entirely with native Linux features (namespaces, seccomp, capabilities, cgroups) and images follow an open standard (OCI).
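
That's easy to see in practice: the same image reference runs under either front-end (alpine used here just as an example):

```sh
# Same OCI image, different runtimes/front-ends
docker run --rm docker.io/library/alpine:3.19 echo "hello from docker"
podman run --rm docker.io/library/alpine:3.19 echo "hello from podman"
```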

I will avoid commenting on what looks like a rant, but I want to simply remind you that containers are the successor of VMs (virtualize everything!), platforms that were completely proprietary and in the hands of a handful of vendors, while containers use only native OS features and are therefore a step towards openness.

[–] TCB13 3 points 8 months ago* (last edited 8 months ago) (2 children)

Docker and the success of containers is mostly due to the ease of shipping code that carries its own dependencies and can be run anywhere

I don't disagree with you, but that also shows that most modern software is poorly written. Usually it's a bunch of solutions that hardly work, and nobody is able to reproduce their setup in a quick, sane and secure way.

There are many container runtimes (CRI-O, Podman, Mirantis, containerd, etc.). Docker is just a convenient API; containers are implemented entirely with native Linux features (namespaces, seccomp, capabilities, cgroups) and images follow an open standard (OCI).

Yes, that's exactly my point. There are many options, yet people stick with Docker and DockerHub (which is anything but open).

In systemd you need to use 30 different options to get what containers give you almost instantly and with much less hassle.

Yes... maybe we just need some automation/orchestration tool for that. This is like saying that it's way too hard to download the rootfs of some distro, unpack it and then use unshare to launch a shell in an isolated namespace... Docker, as you said, provides a convenient API, but that doesn't mean we can't do the same for systemd.
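
And, for the record, it really is only a handful of commands (exact rootfs URL/version will drift, and rootless user namespaces need a reasonably recent util-linux/kernel):

```sh
# Hand-rolled "container": unpack a distro rootfs and enter it in fresh namespaces
curl -LO https://dl-cdn.alpinelinux.org/alpine/v3.19/releases/x86_64/alpine-minirootfs-3.19.1-x86_64.tar.gz
mkdir rootfs && tar -xzf alpine-minirootfs-3.19.1-x86_64.tar.gz -C rootfs
unshare --user --map-root-user --mount --pid --fork chroot rootfs /bin/sh
```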

but I want to simply remind you that containers are the successor of VMs (virtualize everything!), platforms that were completely proprietary and in the hands of a handful of vendors

Completely proprietary... like QEMU/libvirt? :P

[–] [email protected] 3 points 8 months ago

but that also shows that most modern software is poorly written

Does it? I mean, this is especially annoying with old software, maybe dynamically linked or PHP, or stuff like that. Modern tools (Go, Rust) don't even have this problem. Dependencies are annoying in general; I don't think it's a property of modern software.
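
For example, with Go the whole deployment is one static binary (names made up; a Pi 3 would want GOARCH=arm or arm64 depending on the OS):

```sh
# One self-contained binary; nothing to install on the target host
CGO_ENABLED=0 GOOS=linux GOARCH=arm64 go build -o myservice .
```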

Yes, that's exactly my point. There are many options, yet people stick with Docker and DockerHub (which is anything but open).

Who are these people? There are tons of registries that people use; GitHub has its own, quay.io, etc. You can also simply publish Dockerfiles and people can build the images themselves. Of course Docker has the edge because it was the first mainstream tool, and it's still a great choice for single-machine deployments, but it's far from the only one in use. Kubernetes dropped Docker as its default runtime years ago, for example... who are you referring to?
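
Just to make it concrete, the registry is simply part of the image name, and a published Dockerfile removes the registry from the picture entirely (the references here are illustrative):

```sh
# Images can live on any OCI registry; the host is part of the reference
docker pull ghcr.io/home-assistant/home-assistant:stable
docker pull quay.io/prometheus/prometheus:latest

# Or build locally from a project's published Dockerfile
git clone https://github.com/0xERR0R/blocky && docker build -t blocky ./blocky
```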

Yes... maybe we just need some automation/orchestration tool for that. This is like saying that it's way too hard to download the rootfs of some distro, unpack it and then use unshare to launch a shell in an isolated namespace... Docker, as you said, provides a convenient API, but that doesn't mean we can't do the same for systemd.

But systemd also uses unshare, chroot, etc. They are at the same level of abstraction. Docker (and container runtimes) are simply specialized tools, while systemd is not. Why wouldn't I use a tool that is meant for this when it's available? I suppose bubblewrap does something similar too (it's what Flatpak uses), and I am sure there are more.
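
Rough examples of such specialized tools from the command line (paths illustrative; systemd-nspawn is systemd's own take on this, bubblewrap is what Flatpak builds on):

```sh
# systemd-nspawn: run a command inside a directory tree as a lightweight container
sudo systemd-nspawn -D /var/lib/machines/alpine /bin/sh

# bubblewrap: ad-hoc sandbox over a read-only view of the host
bwrap --ro-bind / / --dev /dev --proc /proc --unshare-pid /bin/sh
```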

Completely proprietary… like QEMU/libvirt? :P

Right, because organizations generally run QEMU, not VMware, Nutanix and another handful of proprietary platforms... :)

[–] [email protected] 1 points 8 months ago (1 children)

I use ghcr, and I have no issues pulling images from Amazon ECR or wherever.
Docker just got there first with the adoption and marketing.

Automation tools like Ansible and Terraform have existed for ages, and are great for running things without containers.
OCI just makes it a hell of a lot easier and more portable.

[–] TCB13 1 points 8 months ago

I’ve been using ansible as well and it’s great.

[–] [email protected] 1 points 8 months ago (1 children)

but I want to simply remind you that containers are the successor of VMs

Successor implies replacement. I think containers are another tool in the server/hosting toolkit, but not a replacement for VMs.

[–] [email protected] 1 points 8 months ago

Well, I did not mean replacement (in fact, most orgs run in clouds, which use VMs), but rather that a lot of orgs moved from VMs to containers/Kubernetes as the way to slice their compute. Often the technologies are combined, so you are right.