this post was submitted on 12 Jul 2023
127 points (93.2% liked)

Selfhosted


A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.

Rules:

  1. Be civil: we're here to support and learn from one another. Insults won't be tolerated. Flame wars are frowned upon.

  2. No spam posting.

  3. Posts have to be centered around self-hosting. There are other communities for discussing hardware or home computing. If it's not obvious why your post topic revolves around selfhosting, please include details to make it clear.

  4. Don't duplicate the full text of your blog or github here. Just post the link for folks to click.

  5. Submission headline should match the article title (don’t cherry-pick information from the title to fit your agenda).

  6. No trolling.


For the vast majority of docker images, the documentation only mentions a super long, hard-to-understand "docker run" one-liner.

Why is nobody putting an example docker-compose.yml in their documentation? It's so tidy and easy to understand, and much easier to run in the future: just set and forget.

If every image came with a yml to just copy, I could get it running in a few seconds; instead I have to decode the one-liner and turn it into a yml.

I want to know if it's just me who's out of touch and should use "docker run", or if it's just that a one-liner looks tidier in the docs. As if to say: "hey, just copy and paste this line to run the container. You don't understand what it does? Who cares."

The worst are the ones that pipe directly from curl into "sudo bash"...
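To show what I mean, here's a made-up example (image name, ports and options invented): the kind of one-liner the docs give you is in the comment, and below it the compose file I'd rather just copy:

```yaml
# The docs usually give you something like this (made-up image):
#   docker run -d --name app --restart unless-stopped \
#     -p 8080:80 -v ./data:/data -e TZ=Europe/Rome example/app:latest
# ...which I then have to decode into this:
services:
  app:
    image: example/app:latest
    container_name: app
    restart: unless-stopped
    ports:
      - "8080:80"
    volumes:
      - ./data:/data
    environment:
      - TZ=Europe/Rome
```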

[–] OmltCat 80 points 1 year ago (8 children)

Because it’s “quick start”. Least effort to get a taste of it. For actual deployment I would use compose as well.

Many projects also have an example docker-compose.yml in the repository if you dig just a little.

There is https://www.composerize.com to convert a run command to compose. Works ~80% of the time.

I honestly don't understand why anyone would make "curl and bash" the official installation method these days, with docker around. Unless this is the ONLY thing you install on the system, so many things can go wrong.
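If I remember right, composerize is also published on npm, so you can do the conversion locally instead of pasting commands into a website (exact invocation from memory, check the project README):

```sh
# Convert a run command to a compose file locally via npx.
# The nginx command here is just an example input.
npx composerize docker run -d --name web -p 8080:80 nginx
# prints the equivalent docker-compose.yml to stdout
```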

[–] Shrek 35 points 1 year ago (2 children)

I used to host composerize. Now I host it-tools which has its own version and many other super helpful tools!

[–] Heastes 10 points 1 year ago (1 children)

I was going to mention it-tools. It's great!
And if you need more stuff in a similar vein, cyberchef is also pretty neat.

[–] [email protected] 4 points 1 year ago (1 children)

You have changed my life today.

[–] Shrek 2 points 1 year ago

No, the creator of it-tools did. I just told you about it. Give them a star on GitHub and maybe donate if you can ❤️

[–] [email protected] 38 points 1 year ago (2 children)

You don't have to decode anything, just throw it in here:

https://www.composerize.com

[–] ilmagico 10 points 1 year ago (1 children)

I don't think you're out of touch, just use docker compose. It's not that hard to convert the docker run example command line into a neat docker-compose.yml if they don't already provide one for you. So much better than running containers manually.

Also, you should always understand what any command or docker compose file does before you run it! And don't blindly curl | bash either; download the bash script and look at it first.
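Concretely, that just means something like this (placeholder URL):

```sh
# Instead of: curl -fsSL https://example.com/install.sh | sudo bash
curl -fsSL https://example.com/install.sh -o install.sh
less install.sh          # read what it's actually going to do
sudo bash install.sh     # run it only once you're satisfied
```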

[–] [email protected] 7 points 1 year ago (1 children)

Nah I'll just copy paste half the tutorial in one go and then blame others when things break

[–] [email protected] 4 points 1 year ago

Average linux user /s

[–] [email protected] 10 points 1 year ago

Plain docker is useful when running some simple containers, or even one-off things. A lot of people think of containers as long-running services, but there are also many containers that essentially run a single command to completion and then shut down.

There's also alternate ways to handle containers, for example Podman is typically used with systemd services as unlike Docker it doesn't work through a persistent daemon, so the configuration goes to a service.

I typically skip the docker-compose for simple containers, and turn to compose for either containers with loads of arguments or multi-container things.

Also switching between Docker and Podman depending on the machine and needs.
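For the Podman + systemd route, one way it can look in practice (container name and image are made up; newer Podman versions also have Quadlet for this, mentioned elsewhere in the thread):

```sh
# Create the container, then let podman write a systemd unit for it.
podman create --name mycontainer -p 8080:80 docker.io/library/nginx
podman generate systemd --new --name mycontainer \
  > ~/.config/systemd/user/mycontainer.service
systemctl --user daemon-reload
systemctl --user enable --now mycontainer.service
```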

[–] AlexKalopsia 10 points 1 year ago

I used docker run when I first started, I think it's a fairly easy entry point that "just works".

However I would never really go back to it, since compose is a lot tighter and offers a better sense of overview and control

[–] [email protected] 9 points 1 year ago

I too am endlessly frustrated by documentation that lacks compose file examples.

Fortunately, this exists: Docker Compose Generator

[–] [email protected] 8 points 1 year ago (4 children)

I've started replacing my docker compose files with pure ansible that is the equivalent of doing docker run. My ansible playbooks look almost exactly like my compose files, but they can also create folders, set config files, or cycle services when configs are updated.

It's been a bit of a learning process, but it's replaced a lot of what was previously documentation with code instead.
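A rough sketch of what such a task list can look like, assuming the community.docker collection (names and paths here are placeholders, not from my actual setup):

```yaml
- name: Create config directory
  ansible.builtin.file:
    path: /opt/myapp
    state: directory
    mode: "0755"

- name: Run the container
  community.docker.docker_container:
    name: myapp
    image: example/myapp:latest
    state: started
    restart_policy: unless-stopped
    ports:
      - "8080:80"
    volumes:
      - /opt/myapp:/config
```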

[–] [email protected] 4 points 1 year ago (1 children)

Check out the GitHub project ansible-nas

[–] [email protected] 2 points 1 year ago

ansible-nas

Wow, yeah this is exactly the sort of roles/playbooks that I've been building. I'm definitely using this as a source before starting my own from scratch. Thanks for sharing.

[–] [email protected] 3 points 1 year ago (3 children)

I've done something similar, but I'm using compose files orchestrated by Ansible instead.

[–] [email protected] 7 points 1 year ago

Personally, I do usually want the docker run command. Much easier to use when orchestrating the deployment with other tools.

For readability, I just line-break the command after each argument...
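Something like this (made-up image), which reads about as well as a compose file to me:

```sh
docker run -d \
  --name app \
  --restart unless-stopped \
  -p 8080:80 \
  -v ./data:/data \
  -e TZ=Europe/Rome \
  example/app:latest
```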

[–] [email protected] 7 points 1 year ago (2 children)

I've almost completely moved to podman managed by systemd and I highly recommend it.

[–] [email protected] 3 points 1 year ago (2 children)

I do this out of habit because this is how my work does it, but I honestly don't know the benefits of doing it this way. Can you explain (or provide a link?)

[–] [email protected] 3 points 1 year ago (1 children)

Do you use podman run followed by podman generate or are you using quadlet?

Quadlet is integrated in podman 4.4 and up and makes it possible to declare your containers in .container files that look like systemd unit files and still get the full systemd integration: https://www.redhat.com/sysadmin/multi-container-application-podman-quadlet
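A minimal .container file can look something like this (name, image, and ports are placeholders; see the linked article for the full syntax):

```ini
# ~/.config/containers/systemd/myapp.container
[Unit]
Description=My app container

[Container]
Image=docker.io/example/myapp:latest
PublishPort=8080:80
Volume=/opt/myapp:/config:Z

[Service]
Restart=always

[Install]
WantedBy=default.target
```

After a systemctl --user daemon-reload, it shows up as a regular myapp.service you can start and enable like any other unit.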

[–] [email protected] 4 points 1 year ago (1 children)

I prefer to use ansible to define and provision my containers (docker/podman over containerd). For work, of course, k8s and Helm take the cake. No reason to run k8s for personal self-hosting, though.

[–] [email protected] 3 points 1 year ago

No reason aside from building endless unnecessary complexity, which--let's be honest--is 90% of the point of running a home lab.

Shit's broken at work: hate it. Shit's broken at home: ooh a project!

[–] [email protected] 4 points 1 year ago

Honestly I never really saw the point of it, it just seems like another dependency. The compose file and the docker run commands have almost the same info. I'd rather jump to kubectl and skip compose entirely. I'd like to see a tool that can convert between these 3 formats for you. As for piping into bash, no - I'd only do it for a very trusted package.

[–] SilentMobius 4 points 1 year ago* (last edited 1 year ago)

Docker-compose is an orchestration tool that wraps around the built-in docker functions exposed as "docker run" and the like. When teaching people a tool, you generally explain its base functions first and then explain wrappers around the tool in terms of the functions already learned.

Similarly when you have a standalone container you generally provide the information to get the container running in terms of base docker, not an orchestration tool... unless the container must be used alongside other containers, then orchestration config is often provided.

[–] [email protected] 3 points 1 year ago (1 children)

I always use docker-compose. It is very handy if you ever want a good backup or to move the whole server to another one. Copy over the files -> docker compose up -d and you are done. Beginners should use docker compose from the start; it's easier than docker run.

If you ever want to convert those one-liner to a proper .yml then use this converter
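The move described above boils down to something like this (hostname and directory layout are hypothetical):

```sh
# One directory per stack, compose file plus bind-mounted data inside.
rsync -a /docker/ newserver:/docker/
ssh newserver 'cd /docker/immich && docker compose up -d'
```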

[–] [email protected] 2 points 1 year ago (2 children)

That is one docker compose up -d for each file you copied over, right? Or are you doing something even smarter?

[–] [email protected] 3 points 1 year ago (1 children)

I have one docker-compose.yml for each service. You can use docker compose -f /path/to/docker-compose.yml up -d in scripts

I would never use one big file for everything. You only run into problems that way, imo.
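With that layout, scripting it is just a small loop (directory structure hypothetical):

```sh
# Bring up every stack, one compose file per directory.
for f in /docker/*/docker-compose.yml; do
  docker compose -f "$f" up -d
done
```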

[–] SheeEttin 3 points 1 year ago (1 children)

You use a separate file for each service? Why? I use one file for each stack, and if anything, breaking them out would give me issues.

[–] [email protected] 2 points 1 year ago

I meant stack 😸

My structure is like

/docker/immich/docker-compose
/docker/synapse/docker...

But I read that some people make one big file for everything

[–] [email protected] 2 points 1 year ago

I have all services in one compose file. docker compose up -d starts them all; adding a service name is more selective.
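That is (service name invented):

```sh
docker compose up -d            # start every service in the file
docker compose up -d jellyfin   # start just the one service
```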

[–] [email protected] 3 points 1 year ago (1 children)

I'm curious to hear from the docker run folks. I use compose and I feel the same: it's more readable and editable, and it lets me back up the command by backing up the docker-compose.yml.

[–] [email protected] 2 points 1 year ago (1 children)

When orchestration or provisioning tools are used (Ansible, Kubernetes, etc.), creating networks and containers is equally readable in code. The way docker compose is designed makes it hard to integrate with these tools.

[–] [email protected] 2 points 1 year ago

This is the response I was hoping to hear. I'm primarily a home-automation/self-hosted enthusiast, not necessarily an infrastructure enthusiast. As of yet, I haven't felt the need for more involved orchestration tools/infra.

[–] [email protected] 3 points 1 year ago

I am not using docker-compose personally, and moving away from it at work, because it is only a CLI client and doesn't integrate with other tools except shell scripts.

[–] yaaaaayPancakes 3 points 1 year ago

First version of my server, I wrote a bunch of custom shell scripts to execute docker run statements to launch all my containers b/c I didn't know docker at all and didn't want to learn compose.

Current version of my server, I use docker compose. But all the containers I use come from linuxserver.io, and they always give examples for both. I use ansible to deploy everything.

[–] [email protected] 2 points 1 year ago (2 children)

Something that always confused me was how docker doesn’t come with compose installed as a core component.

[–] [email protected] 5 points 1 year ago

It does now.

[–] [email protected] 2 points 1 year ago (1 children)

docker compose vs docker-compose. Yes I know it’s stupid.

[–] [email protected] 2 points 1 year ago

Well, today I learned something! I've been using docker-compose for 5+ years now and I never happened upon the addition of compose to docker, haha.

That's also the issue with the internet and all its fantastic guides: even if they were written 12 months ago, they're already out of date!

[–] eager_eagle 2 points 1 year ago

Even for the one-liner argument - a better one liner than any docker run is docker compose up [-d].

[–] TitanLaGrange 2 points 1 year ago* (last edited 1 year ago)

Previously my server was just a Debian box where I had a 'docker' directory with a bunch of .sh files containing 'docker run' commands (and a couple of docker-compose files for services that have closely related containers). That works really well; it's easy to understand and manage. I had nginx running natively to expose stuff as necessary.

Recently I decided to try TrueNAS Scale (I wanted more reliable storage for my media library, which is large enough to be annoying to replace when a simple drive fails), and I'm still trying to figure it out. It's kind of a pain in the ass for running containers since the documentation is garbage. The web interface is kind of nice (other than constantly logging me out), but the learning curve for charts and exposing services has been tough, and it seems that ZFS is just a bad choice for Docker.

I was attracted to the idea of being able to run my services on my NAS server as one appliance, but it's feeling like TrueNAS Scale is way too complicated for home-scale (and way too primitive for commercial use, not entirely sure what market they are aiming for), and I'm considering dumping it and setting up two servers: one for NAS and one for running my containers and VMs.

[–] eager_eagle 2 points 1 year ago* (last edited 1 year ago)

It turns out GPT converts plain docker commands into docker compose files well enough for me; it's been my go-to when I need to create a compose YAML. Checking a YAML and making one or two small corrections is even faster than entering all the info in a form like Docker Compose Generator.

[–] lkami 2 points 1 year ago

I use docker to test individual container images. Anything long running is getting a Kubernetes manifest. I never use docker compose, except when supporting developers.

[–] [email protected] 2 points 1 year ago

Totally agree. I then need to pick apart the run command to make the docker compose file, get something wrong, and have to go searching again.
