this post was submitted on 01 Jul 2023
33 points (94.6% liked)

DevOps


I'll start:

When I was first learning to use Docker, I didn't realize that most tutorials that include a database don't configure the database to persist. Imagine my surprise when I couldn't figure out why the database kept getting wiped!

top 22 comments
[–] [email protected] 22 points 1 year ago* (last edited 1 year ago) (1 children)

A lot of people don't realize that publishing a port as 127.0.0.1:8080 (or localhost:8080) is different from just 8080. The first binds only to the loopback interface, so remote machines can't access it; the bare port binds to all interfaces. Combine this with the fact that people don't realize Docker bypasses firewalls like ufw or firewalld, editing ip/nftables rules directly...

If you use an ip/port search engine like shodan or zoomeye, you can find loads of docker containers that probably weren't supposed to be on the net.
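To see the difference side by side (a sketch; nginx and the port numbers are just illustrative):

```shell
# Binds 0.0.0.0 on the host: reachable from other machines,
# and Docker's own NAT rules sidestep ufw/firewalld
docker run -d -p 8080:80 nginx

# Binds loopback only: remote hosts can't connect
docker run -d -p 127.0.0.1:8081:80 nginx

# Inspect what Docker actually wrote into the NAT table
sudo iptables -t nat -L DOCKER -n
```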

[–] [email protected] 8 points 1 year ago* (last edited 1 year ago)

Protip: you can configure the default host bind IP via /etc/docker/daemon.json. You could for example set:

{
  "ip": "127.0.0.1"
}

which would result in -p "8080:8080" being equivalent to -p "127.0.0.1:8080:8080"

[–] [email protected] 21 points 1 year ago* (last edited 1 year ago) (2 children)

The biggest footgun I encounter every time I set up a Raspberry Pi or other Linux host for a side project is forgetting that Docker doesn't rotate containers' logs by default. The result: the service goes down, and you see a sweat-inducing ENOSPC error when you ssh in to check it out.

You can configure this by creating /etc/docker/daemon.json and either setting up log rotation with log-opts or switching to the local logging driver (the default is json-file) if you're not shipping container logs anywhere and only read them locally. The local driver compresses logs and rotates them automatically:

{
  "log-driver": "local",
  "log-opts": {
     "max-size": "10m",
     "max-file": "3"
  }
}
[–] [email protected] 8 points 1 year ago* (last edited 1 year ago)

TIL. Thank you! (Now I will ssh into all my VPSes and set this up!)

(cool username btw)

[–] vegetaaaaaaa 6 points 1 year ago (1 children)

I prefer this method:

{
  "log-driver": "syslog",
  "log-opts": {
    "tag": "docker.{{.Name}}"
  }
}

This way container logs are forwarded to /var/log/syslog, which already contains all other services logs, and has sane rotation rules by default (and it allows rsyslog to manage log forwarding/shipping if needed).

[–] [email protected] 3 points 1 year ago

Thanks, good to know! I had no idea about the tags. Looks like there's a lot more variables available.

I just reread the docs on the log drivers - they mention that as of Docker 20.x, reading logs locally works with all drivers, since Docker buffers the logs locally as well. I think this is probably why I hadn't explored the other drivers before: I couldn't use docker-compose logs.

[–] vegetaaaaaaa 14 points 1 year ago* (last edited 1 year ago)
  • Docker swarm does not respect its own compose spec, exposes services on all interfaces and bypasses firewall rules [1], [2]
  • 1 million SLOC daemon running as root [1]
  • Buggy network implementation, sometimes requires restarting the daemon to release bridges [1]
  • Requires frequent rebuilds to keep up to date with security patches [1] [2] [3]
  • No proper support for external config files/templating, not possible to do graceful reloads, requires full restarts/downtime for simple configuration changes [1]
  • Buggy NAT implementation [1]
  • Buggy overlay network implementation, causes TCP resets [1]
  • No support for PID limits/fork bomb protection [1], no support for I/O limits [2]
  • No sane/safe garbage collection mechanism, docker system prune --all deletes all unused volumes - including named volumes which are unused because the container/swarm service that uses them is stopped at that particular moment for whatever reason. Eats disk space like mad [1] [2]
  • Requires heavy tooling if you're serious about it (CI, container scanning tools, highly-available registry...) [1], Docker development and infrastructure is fully controlled by Docker Inc. [1] [2] [3] [4] [5] [6]
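On the garbage-collection point: on recent Docker versions, volumes are only pruned when you pass --volumes explicitly, so a safer cleanup looks roughly like this (a sketch; behavior has varied across Docker versions, check yours first):

```shell
# Remove stopped containers, unused images and networks,
# but leave ALL volumes alone (no --volumes flag)
docker system prune --all --force

# Inspect volumes first, then prune them deliberately, if at all
docker volume ls
docker volume prune   # still asks for confirmation
```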
[–] [email protected] 13 points 1 year ago (4 children)

Using Docker Desktop at work without a license. Use Rancher Desktop instead. It's essentially what Oracle did with the Oracle JDK: to my knowledge they haven't gone after anyone, but it is technically a license violation to use it for work without a license. I could not (easily) find a way to install Docker on a Mac without using Docker Desktop, but Rancher Desktop worked fine.

Also, podman exists as a drop-in replacement for Docker, for the curious. I haven't tried it myself though, so this isn't a recommendation.

[–] [email protected] 10 points 1 year ago

I can vouch for podman. It can run daemonless and rootless, symlinks to docker.sock and the ui works with both kubernetes (kind & minikube) and most of the docker desktop extensions.

[–] [email protected] 5 points 1 year ago (2 children)

Interesting — coming from the Linux world where docker is an 'apt install' away, I struggled with docker installation on Mac and settled on their client because of various “gotchas” I saw online. And even then got pissed because the client overwrote my local 'kubectl' bin.

Guess I’ll have to reevaluate.

[–] [email protected] 3 points 1 year ago

Podman is just as easy to install--admittedly they give way more support for Ubuntu and Fedora than other platforms (unfortunately). But once you've switched, you won't go back; it really is a 'seamless' transition, and you can use the same Dockerfiles and docker-compose files with it.

[–] [email protected] 2 points 1 year ago

It's been a drop-in replacement for me as far as I can tell. I have had one problem with Docker since I switched to Rancher Desktop on Mac that a coworker didn't hit (I assume they use Docker Desktop), but I'm not convinced it was due to Rancher (something about an sftp container and file permissions).

[–] [email protected] 3 points 1 year ago

Podman is great and is now compatible with the Docker API. Having rootless containers by default is awesome! There's also a utility called podman-compose that I also highly recommend.
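For anyone curious, the switch can be close to a one-liner (a sketch; package names vary by distro, and Fedora ships podman by default):

```shell
# Debian/Ubuntu-style install
sudo apt install podman podman-compose

# Existing muscle memory keeps working
alias docker=podman
docker run --rm alpine echo hello   # runs rootless, no daemon needed

# Compose files work too
podman-compose up -d
```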

[–] UncommonBagOfLoot 2 points 1 year ago* (last edited 1 year ago)

A workmate mentioned it to me recently. Fortunately, our company does buy the license. Will be contacting the team responsible for it on Monday.

My manager is very forgetful, so not surprised he didn't mention it.

[–] [email protected] 6 points 1 year ago (1 children)

Be really careful when building images that require secrets for build configuration. Secrets can be passed in as build args, but you MUST UNSET THEM IN THE DOCKERFILE and then repass them in as environment variables at runtime (or else you are leaking your secrets with your image).

Also, image != container. Image is the thing you publish to a registry (e.g. dockerhub). Container is an instance of an image.

[–] [email protected] 11 points 1 year ago (1 children)

This is no longer true with BuildKit - you can use the --secret flag to pass a secret in securely.
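Roughly, the BuildKit pattern looks like this (hypothetical file and id names; the secret is mounted only for that RUN step and never baked into a layer or visible in docker history):

```dockerfile
# syntax=docker/dockerfile:1
FROM alpine
# The secret is available at /run/secrets/api_token only
# while this RUN instruction executes
RUN --mount=type=secret,id=api_token \
    TOKEN="$(cat /run/secrets/api_token)" && \
    echo "use $TOKEN to fetch private dependencies here"
```

built with:

```shell
DOCKER_BUILDKIT=1 docker build --secret id=api_token,src=./token.txt .
```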

[–] [email protected] 4 points 1 year ago

Thanks for sharing! I will need to look deeper into BuildKit. Containers aren't my main artifacts, unfortunately, so it sounds like I'm still building them the old way.

[–] [email protected] 5 points 1 year ago

Spending hours and hours debugging an issue where files weren’t being written before finally realising I was looking in my host file system not the container… fml.

[–] [email protected] 4 points 1 year ago (1 children)

I still don't really know how to get mounted folders to not wreck permissions on the files. You can use a non-root user, but that requires users to have UID 1000 when you distribute an image.

[–] [email protected] 6 points 1 year ago

The closest thing I've found is to allow users to specify the UID and GID to run with, but there's no good way to auto-detect that at runtime, unfortunately.
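One common workaround is to let the caller pass their own IDs at run time (a sketch; it only works if the image doesn't assume a fixed user):

```shell
# Run the container as the invoking user, so files created in the
# bind mount are owned by you rather than root or UID 1000
docker run --rm \
  --user "$(id -u):$(id -g)" \
  -v "$PWD/data:/data" \
  alpine sh -c 'touch /data/file && ls -ln /data'
```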

[–] [email protected] 3 points 1 year ago

Early in the history of Docker, a lot of bits and bobs hadn't been worked out yet, and I had a bug land on my desk where a service was leaking memory until it crashed, but only when running in a container. Turns out, the JVM at the time wasn't container-aware: it read memory figures from /proc, which reflected the host rather than the container's limit, so it thought it had far more heap available than k8s had actually granted. It would only collect if an allocation request arrived while a GC was already underway.

[–] [email protected] 3 points 1 year ago

Forgetting about SELinux.
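On SELinux hosts (Fedora, RHEL), bind mounts often fail with "permission denied" unless you relabel them. A minimal sketch:

```shell
# :Z relabels the host directory for this container's exclusive use;
# use :z (lowercase) instead if several containers share the mount
docker run --rm -v "$PWD/config:/etc/app:Z" alpine ls /etc/app
```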
