this post was submitted on 09 Jan 2025
1094 points (98.4% liked)

Programmer Humor

top 50 comments
[–] [email protected] 8 points 1 day ago (1 children)

Now if only Docker could solve the "hey, I'm caching a layer that I think didn't change" (Narrator: it did) problem, where even setting the "don't fucking cache" flag doesn't always work. So many debugging sessions come down to devs not realizing this: "but I changed the file, and the change doesn't work!"

docker system prune -a and beat that SSD into submission until it dies, alas.
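
For anyone fighting this right now: --no-cache alone doesn't clear BuildKit's stored build cache, so the heavy-handed fix is usually both of these (standard Docker CLI; the image tag is a placeholder):

    # rebuild every layer from scratch and re-pull the base image
    docker build --no-cache --pull -t myapp:dev .

    # separately wipe BuildKit's stored build cache
    docker builder prune --all --force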

[–] [email protected] 2 points 1 day ago

My favourite is when Docker refuses to build unless you enable the no-cache option. It claims there's no disk space (there's plenty; it might be some issue with Qubes and dynamic disk sizing). I set up a network cache to cache packages at the network level, and it's saved me years in build time.
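
A sketch of that kind of package cache, assuming apt-cacher-ng on a hypothetical 10.0.0.5 (sameersbn/apt-cacher-ng is one commonly used community image):

    # on the cache box: run apt-cacher-ng itself as a container
    docker run -d --name apt-cache -p 3142:3142 sameersbn/apt-cacher-ng

    # in each Dockerfile: route apt through the cache before installing anything
    RUN echo 'Acquire::http::Proxy "http://10.0.0.5:3142";' \
          > /etc/apt/apt.conf.d/01proxy \
     && apt-get update && apt-get install -y build-essential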

[–] MoonlightFox 85 points 2 days ago* (last edited 2 days ago) (10 children)

There's another important reason Docker is valuable, beyond most of the issues pointed out here.

Security.

By using containerization, Docker effectively creates another barrier that is incredibly hard to escape: the OS boundary of the container.

If one server is running multiple Docker containers, a vulnerability in one system does not expose the others. This is a huge security improvement: now an attacker needs to breach the application and then break out of the container in order to directly access other parts of the host.

Also, if a Docker image is big, the dev just needs to select a smaller base image. You can easily have containers of around 100 MB now. With "distroless" base images it's maybe down to something like 30 MB, if I recall correctly. Far from 1 GB.
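
For reference, the small sizes come from multi-stage builds; here's a minimal sketch with a hypothetical Go app (golang:1.22 and gcr.io/distroless/static-debian12 are published images):

    # build stage: full toolchain, never shipped
    FROM golang:1.22 AS build
    WORKDIR /src
    COPY . .
    RUN CGO_ENABLED=0 go build -o /app .

    # final image: a few MB of base, no shell, no package manager
    FROM gcr.io/distroless/static-debian12
    COPY --from=build /app /app
    ENTRYPOINT ["/app"]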

Reproducibility is also a huge efficiency booster. "Here, run this command and it will work perfectly on your machine." And it actually does.

It also reliably enables self-healing servers, which means businesses don't actually need to have people available 24/7.
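
The self-healing part is mostly just restart policies plus health checks; a minimal Compose sketch (service name, image, and endpoint are placeholders):

    services:
      api:
        image: example/api:1.0
        restart: unless-stopped   # the daemon revives it whenever it crashes
        healthcheck:              # flags it unhealthy so an orchestrator can act
          test: ["CMD", "curl", "-f", "http://localhost:8080/health"]
          interval: 30s
          timeout: 5s
          retries: 3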

The use of containerization is maybe one of the greatest marvels in software dev of the last 10+ years.

[–] [email protected] 13 points 1 day ago* (last edited 1 day ago) (2 children)

Oof. I'm anxious that folks are going to get the wrong idea here.

While OCI does provide security benefits, it is not a part of a healthy security architecture.

If you see containers advertised on a security architecture diagram, be alarmed.

If a malicious user gets terminal access inside a container, it is nice that there's a decent chance that they won't get further.

But OCI was not designed to prevent malicious actors from escaping containers.

It is not safe to assume that a malicious actor inside a container will be unable to break out.
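
If you do want a container to pull more defensive weight, the standard hardening knobs look like this (all real docker run flags; the image name is a placeholder):

    # read-only rootfs, no capabilities, no setuid escalation, capped PIDs
    docker run --rm \
      --read-only \
      --cap-drop ALL \
      --security-opt no-new-privileges \
      --pids-limit 256 \
      myapp:latest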

Don't get me wrong, your point stands: Security loves it when we use containers.

I just wish folks would stop treating containers as "load bearing" in their security plans.

[–] Landless2029 49 points 2 days ago (2 children)
[–] [email protected] 17 points 2 days ago

Yes, yes you really should

[–] [email protected] 14 points 2 days ago (2 children)

I said this a year and a half ago and I still haven't. Awful decision. I now own servers too, so I should really learn them.

[–] Landless2029 7 points 2 days ago (5 children)

The worst part is having the gear and STILL not learning/playing with it.

I got stuff to start [email protected] like an old i5 minipc and even a 64gb i7 pro series laptop...

They're just sitting unplugged with Mint on them.

[–] [email protected] 21 points 2 days ago (3 children)

Isn't Docker massively insecure when compared to the likes of Podman, since Docker has to run as a root daemon?

[–] MoonlightFox 17 points 2 days ago* (last edited 2 days ago) (2 children)

I don't have in-depth knowledge of the differences or how big they are, so take the following with a grain of salt.

My main point is that using containerization is a huge security improvement. Podman seems to be even more secure. Calling Docker massively insecure makes it seem like something we should avoid, which takes focus away from the enormous security benefit containerization gives. I believe Docker is fine; I use Podman myself, but only because Podman Desktop is free and Dockerfiles seem to run fine with Podman.

Edit: After reading a bit I am more convinced that the Podman way of handling it is superior, and that the improvement is big enough to recommend it over Docker in most cases.
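
For anyone considering the switch, it really is mostly mechanical (standard Podman commands):

    # rootless by default: the container's "root" is your own UID, remapped
    podman run --rm docker.io/library/alpine id

    # the CLI is docker-compatible enough that this alias mostly just works
    alias docker=podman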

[–] [email protected] 2 points 1 day ago* (last edited 1 day ago)

Of course containerization is still better than running things natively in terms of security (which is why I said "compared to Podman"), but that's mostly a side effect of its main thing: reproducible runtime environments. Afaik it's not really good security though, and shouldn't be relied upon in that regard at all; but I don't know too much about it.

[–] [email protected] 14 points 2 days ago (1 children)

I prefer Podman. But Docker can run rootless. It does run under root by default, though.
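
Per Docker's own rootless docs, the setup is roughly this (assuming the rootless extras package is installed):

    # create a per-user daemon instead of the root one
    dockerd-rootless-setuptool.sh install

    # point the client at the user-level socket
    export DOCKER_HOST=unix:///run/user/$(id -u)/docker.sock
    docker run --rm alpine echo "no root daemon involved"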

[–] [email protected] 2 points 1 day ago* (last edited 1 day ago)

Afaik it's still using a daemon, compared to Podman being daemonless, right? Of course it's better to run it in userspace, though I can't recall whether that limited some of the features or how easy it was to set up.

[–] [email protected] 9 points 2 days ago (4 children)

Not only that, but containers in general run on the host system's kernel; the actual isolation of containers is pretty minimal compared to virtual machines, for example.
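
Easy to verify for yourself; both commands report the same kernel:

    uname -r                           # the host's kernel
    docker run --rm alpine uname -r    # identical: the container shares it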

[–] [email protected] 4 points 1 day ago* (last edited 1 day ago) (1 children)

It amused me that the votes on your comment (a simple factual statement) reflect how many people here vote without knowing what the fuck they're talking about.

[–] [email protected] 4 points 1 day ago* (last edited 1 day ago)

I think many people don't understand the difference between containers and VMs.

[–] stetech 6 points 2 days ago

… with the tradeoff being that containers are much more lightweight and have much less overhead than VMs …

[–] [email protected] 6 points 2 days ago

Sounds like an ugly retrofit of BSD jails.

[–] [email protected] 4 points 2 days ago

Containerized software is huge in the sciences for reproducible research. Or at least it will/should be (speaking as someone adjacent to bioinformatics and genomics)

[–] [email protected] 86 points 2 days ago (17 children)

Call me crusty, an old fart, unwilling to embrace change... but Docker has always felt like a cop-out to me as a dev. Figure out what breaks and fix it so your app is more robust; stop being lazy.

I pretty much refuse to install any app that only ships as a Docker install.

No need to reply to this, you don't have to agree and I know the battle has been already lost. I don't care. Hmmph.

[–] [email protected] 71 points 2 days ago (8 children)

Docker is more than a cop-out for that one use case. It's a way to quickly deploy an app irrespective of the environment, so you can scale and rebuild quickly. It solves a problem that used to be solved by VMs, and in that way it's more efficient.

[–] SpaceNoodle 77 points 2 days ago (1 children)

Why put in a little effort when we can just waste a gigabyte of your hard drive instead?

I have similar feelings about how every website is now a JavaScript application.

[–] roofuskit 28 points 2 days ago (3 children)

Yeah, my time is way more valuable than a gigabyte of drive space. In what world is anyone's not today?

[–] SpaceNoodle 20 points 2 days ago (4 children)

It's a gigabyte of every customer's drive space.

[–] [email protected] 17 points 2 days ago (2 children)

Mine, on my 128 GB dual-boot laptop.

[–] [email protected] 3 points 1 day ago (1 children)

I've got you beat: 32 GB eMMC laptop.

I need every last MB on this thing. It's kind of nice because I literally cannot have bloat, so I clear out folders before I forget where things went. I only really use it for the internets and to SSH into my servers, but it's also where I usually make my bootable USB drives, so I'll need 2-5 GB free for whichever ISO I want to try out. I really detest the idea of downloading to one USB, then dd-ing that to another. I should probably start using Ventoy or something, but I guess I'm old-school stubborn.
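
(For the record, the usual dd incantation; /dev/sdX is a placeholder, so check lsblk twice before running it:)

    # identify the target stick first; dd will happily eat the wrong disk
    lsblk
    sudo dd if=distro.iso of=/dev/sdX bs=4M status=progress oflag=sync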

I tried using flatpak and docker, but it's just not gonna happen.

[–] [email protected] 2 points 1 day ago

:-)

Going back in time is cheating a bit, but around 2013 my computer was an 8 GB netbook. I carefully segregated my files into a couple of GB that I'd keep available, and the rest on an external HDD. To this day I keep that large/small scheme, though both parts have grown since then.

[–] roofuskit 10 points 2 days ago (4 children)

How many Docker containers would you deploy on a laptop? Also, 128 GB is tiny even for an SSD these days.

[–] [email protected] 4 points 2 days ago* (last edited 2 days ago)

A gigabyte of drive space is something like 10-20 cents on a good SSD.

[–] mlg 6 points 1 day ago

You ever notice how most Docker images are based on Ubuntu, arguably the worst distro for dependency management?

The other core issue is people using docker as a configuration solution with stuff like compose.

If I want containers, I usually just use LXC.

The only Docker project I liked was docker-osx, which made spinning up OSX VMs easy, but again, it was basically 80% configuration for libvirt.

[–] [email protected] 46 points 2 days ago (2 children)

If this is your take, your exposure has been pretty limited. While I agree some devs take it to the extreme, Docker is not a cop-out. It, and similar containerization platforms, are invaluable tools.

Using devcontainers (Docker containers in the IDE, basically) I'm able to get my team developing in a consistent environment in mere minutes, without needing to bother IT.
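
A minimal .devcontainer/devcontainer.json is enough for that; the image and command here are just examples:

    {
      // any OCI image can serve as the shared dev environment
      "image": "mcr.microsoft.com/devcontainers/base:ubuntu",
      "postCreateCommand": "make setup"
    }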

Using Docker orchestration I'm able to do a lot in prod, such as automatic scaling, continuous deployment with automated testing, and, in the worst case, near-instantaneous reverts to a previously good state.

And that's just how I use it as a dev.

As a self-hosting enthusiast, I can deploy new OSS projects without stepping through a lengthy install guide listing various obscure requirements, and if I do want to skip the container (which I've only done for a few things) I can simply read the Dockerfile to figure out what I need to do, instead of hoping the install guide covers all the bases.

And if I need to migrate to a new host? A few DNS updates and SCP/rsync later and I'm done.
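
That migration really can be that boring; a sketch, with host and paths as placeholders:

    # copy the compose project plus its bind-mounted data
    rsync -az /srv/myapp/ newhost:/srv/myapp/

    # bring everything back up on the new box
    ssh newhost 'cd /srv/myapp && docker compose up -d'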

[–] [email protected] 10 points 2 days ago (5 children)

I've been really trying to push for more usage of dev containers at my org. I deal with so much hassle helping people install dependencies and deal with bizarre environment issues. And then doing it all over again every time there is turnover or someone gets a new laptop. We're an Ops team though so it's a real struggle to add the additional complexity of running and troubleshooting containers on top of mostly new dev concepts anyway.

[–] [email protected] 24 points 2 days ago (1 children)

I hate that it puts package management in devs' hands. The same devs that usually want root access to run their application and couldn't read a vulnerability scan for the life of them. So now, rather than having the one up-to-date version of a package on my system, I may have three different old ones with differing vulnerabilities, and devs who don't want to change them because "I need this version because it works!"

[–] [email protected] 19 points 2 days ago

Docker, or containers in general, provide isolation too, not just declarative image generation. It's all neatly packaged into one tool that isn't that heavy on the system either. It's not a cop-out at all.

If I could choose, not for laziness, but for reproducibility and compatibility, I would only package software in 3 formats:

  • Nix package
  • Container image
  • Flatpak

The rest of the native packaging formats are all good in their own way, but not as good. Some have specific use cases that make them the best choice, like AppImage. So, the upshot: yeah, no universal packaging format yet.

[–] [email protected] 3 points 1 day ago* (last edited 1 day ago)

It eliminates the problem of depending on a specific distribution and, maybe more importantly, it solves the problem of depending on a specific distribution version (i.e. working fine now but maybe not working at all later in the very same distribution, because some libraries are missing or the default configuration is different).

For example, one of the games I have in my GOG library is over 10 years old and has a native Linux binary, which won't work in a modern Debian-based distro by default because some of the libraries it requires aren't installed (meanwhile, the Windows binary will work just fine with Wine). It would be kinda deluded to expect the devs to keep updating the native Linux build (or even the Windows one) for over a decade, whilst if it had been released as a Docker app, that would not be a problem.
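
Something like this sketch would pin such a game to period-correct libraries (paths hypothetical; debian:8 is a decade-old base image, and passing the X11 socket through is one common way to get graphics out):

    # run the old binary against Debian 8-era libs, using the host's display
    docker run --rm -it \
      -v "$HOME/games/oldgame:/game" \
      -v /tmp/.X11-unix:/tmp/.X11-unix \
      -e DISPLAY="$DISPLAY" \
      debian:8 /game/start.sh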

So yeah, stuff like Docker does have a reasonable justification when it comes to isolating an app from external dependencies its devs have no control over, especially when it comes to future-proofing: the Docker API itself needs to remain backwards compatible, but there is no requirement that Linux distros be backwards compatible (something which would be much harder to guarantee).

Mind you, Docker and similar are a bit of a hack to solve a systemic (cultural, even) problem in software development, which is that devs don't really do proper dependency management and just throw everything and the kitchen sink in terms of external libraries (which then depend on external libraries which in turn depend on more external libraries) into the simplest of apps. But that's a broader software-development-culture problem, and most present-day developers only ever learned the "find some library that does what you need and add it to your build tool's list of dependencies" way of programming.

I would love it if we solved what's essentially the core technical architecture problem of present-day software development practices, but I have no idea how we can do so; hence the "hack" of things like Docker pretty much shipping the whole runtime environment (funnily enough, a variant of the old practice of building your apps statically with every dependency included) to work around it.
