this post was submitted on 15 Aug 2023
41 points (100.0% liked)

Selfhosted

40187 readers
710 users here now

A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.

Rules:

  1. Be civil: we're here to support and learn from one another. Insults won't be tolerated. Flame wars are frowned upon.

  2. No spam posting.

  3. Posts have to be centered around self-hosting. There are other communities for discussing hardware or home computing. If it's not obvious why your post topic revolves around selfhosting, please include details to make it clear.

  4. Don't duplicate the full text of your blog or github here. Just post the link for folks to click.

  5. Submission headline should match the article title (don’t cherry-pick information from the title to fit your agenda).

  6. No trolling.

Resources:

Any issues on the community? Report it using the report flag.

Questions? DM the mods!

founded 1 year ago
MODERATORS
 

Just wondering what tools and techniques people are using to keep on top of updates, particularly security-related updates, for their self-hosting fleet.

I'm not talking about docker containers - that's relatively easy. I have Watchtower pull (not update) latest images once per week. My Saturday mornings are usually spent combing through Portainer and hitting the recreate button for those containers with updated images. After checking the service is good, I manually delete the old images.
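For reference, my Watchtower setup is roughly this (a sketch only - the schedule is an example, and you should check the containrrr/watchtower docs for current variable names):

```yaml
# Watchtower in monitor-only mode: pulls new images and notifies,
# but never recreates containers on its own
services:
  watchtower:
    image: containrrr/watchtower
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      - WATCHTOWER_MONITOR_ONLY=true
      - WATCHTOWER_SCHEDULE=0 0 6 * * 6   # 06:00 every Saturday (6-field cron)
```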

But, I don't have a centralised, automated solution for all my Linux hosts. I have a few RasPis and a bunch of LXCs on a pair of Proxmox nodes, all running their respective variation of Debian.

Not a lot of this stuff is exposed direct to the internet - less than a handful of services, with the rest only accessible over Wireguard. I'm also running OPNsense with IPS enabled, so this problem isn't exactly keeping me up at night right now. But, as we all know, security is about layers.

Some time ago, on one of my RasPis, I did set up Unattended Upgrades and it works OK, but there was a little bit of work involved in getting it set up just right. I don't relish the idea of doing that another 40 or so times for the rest of my fleet.

I also don't want all of those hosts grabbing updates at around the same time, smashing my internet link (yes, I could randomise the cron job within a time range, but I'd rather not have to).
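(Though, as I understand it, Debian's stock apt timers already add jitter for you - apt-daily.timer and apt-daily-upgrade.timer ship with a RandomizedDelaySec, and you can widen it with a drop-in; the 4h value below is arbitrary:)

```ini
# systemctl edit apt-daily-upgrade.timer
[Timer]
RandomizedDelaySec=4h
```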

I have a fledgling Ansible setup that I'm just starting to wrap my head around. Is that the answer? Is there something better?

Would love to hear how others are dealing with this.

Cheers!

top 26 comments
[–] [email protected] 21 points 1 year ago* (last edited 1 year ago) (1 children)

Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I've seen in this thread:

Fewer Letters More Letters
LXC Linux Containers
Plex Brand of media server package
RPi Raspberry Pi brand of SBC
SBC Single-Board Computer
VPS Virtual Private Server (opposed to shared hosting)

4 acronyms in this thread; the most compressed thread commented on today has 20 acronyms.

[Thread #47 for this sub, first seen 15th Aug 2023, 07:25] [FAQ] [Full list] [Contact] [Source code]

[–] [email protected] 1 points 1 year ago

Damn good bot!

[–] [email protected] 4 points 1 year ago (1 children)

A few simple rules make it quite simple for me:

  • Firstly, I do not run anything critical myself. I cannot guarantee that I will have time to resolve issues as they come up. Therefore, I tolerate a moderate risk of a borked update.
  • All servers run the same OS. Therefore, I don't have to resolve different issues for different machines. There is then the risk that one update will take them all out, but see my first point.
  • That OS is stable - in my case, Debian - so updates are rare and generally safe to apply without much thought.
  • Run as little as possible on bare metal and avoid third party repos or downloading individual binaries unless absolutely necessary. Complex services should run in containers and update by updating the container image.
  • Run unattended-upgrades on all of them. I deploy the configuration via Ansible. Since they all run the same OS, I only need to figure out the right configuration once and then it's just a matter of using Ansible to deploy it everywhere. I do blacklist kernel updates on my main server, because it has ZFS through DKMS on it so it's too risky to blindly apply.
  • Have postfix set up so that unattended-upgrades can email me when a reboot is required. I reboot only when I know I'll have some time to fix anything that breaks. For the blacklisted packages I will get an email that they've been held back so I know that I need to update manually.
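To give an idea, the relevant bits of the config I deploy to /etc/apt/apt.conf.d/50unattended-upgrades look roughly like this (the email address and blacklist patterns are placeholders; option names are from the template the Debian package ships):

```
Unattended-Upgrade::Origins-Pattern {
        "origin=Debian,codename=${distro_codename},label=Debian-Security";
};
// On the ZFS/DKMS host only: hold back kernel packages
Unattended-Upgrade::Package-Blacklist {
        "linux-image-.*";
        "linux-headers-.*";
};
Unattended-Upgrade::Mail "admin@example.com";
Unattended-Upgrade::MailReport "on-change";
```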

This has been working great for me for the past several months.

For containers, I rely on Podman auto-update and systemd - actually, my own script that imitates their behavior, because I had issues with Podman pulling images that weren't new but nevertheless triggered restarts of the containers. However, I pin the major version number manually, and check and apply major version updates by hand. Major version updates stung me too much in the past when I'd apply them after a long break.
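For anyone curious, the stock Podman flow I'm imitating looks roughly like this (the image name and tag are just examples):

```shell
# opt a container into auto-update from its registry tag
podman run -d --name myapp \
  --label io.containers.autoupdate=registry \
  docker.io/library/nginx:1.25        # pin the tag you trust

# wrap it in a systemd user unit and enable the built-in update timer
podman generate systemd --new --name myapp \
  > ~/.config/systemd/user/container-myapp.service
systemctl --user daemon-reload
systemctl --user enable --now container-myapp.service podman-auto-update.timer
```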

[–] [email protected] 1 points 1 year ago (1 children)

I deploy the configuration via Ansible. Since they all run the same OS, I only need to figure out the right configuration once and then it’s just a matter of using Ansible to deploy it everywhere. I do blacklist kernel updates on my main server

Yep, this is what I was thinking I'd have to do. So, from your perspective, Unattended Updates is still the best way to achieve this on Debian, with the right config? Cheers.

[–] [email protected] 2 points 1 year ago* (last edited 1 year ago) (1 children)

Correct. And getting the right configuration is pretty easy. Debian has good defaults. The only changes I make are configuring it to send emails to me when updates are installed. These emails will also tell you in the subject line if you need to reboot, which is very convenient. As I said, I also blacklist kernel updates on the server that uses ZFS, as recompiling the modules causes inconsistencies between kernel and user space until a reboot. If you set up emails, you will also know when these updates are ready to be installed, because you'll be notified that they're being held back.

So yea, I strongly recommend unattended-upgrades with email configured.

Edit: you can also make it reboot itself if you want to. Might be worth it on devices that don't run anything very important and that can handle downtime.
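For reference, the self-reboot is just two more lines in the same config (the time is an example):

```
Unattended-Upgrade::Automatic-Reboot "true";
Unattended-Upgrade::Automatic-Reboot-Time "03:30";
```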

[–] [email protected] 1 points 1 year ago

Yep, cool. The single host I have with UU running on it does send the listchanges via email already, which I've found useful.

Well, time to refresh my memory on how I have it set up and build up an Ansible playbook to repeat that success everywhere else.
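I'm picturing something like this (the host group and template name are my own placeholders, not gospel):

```yaml
- hosts: debian_fleet
  become: true
  tasks:
    - name: Install unattended-upgrades
      ansible.builtin.apt:
        name: unattended-upgrades
        state: present
        update_cache: true

    - name: Deploy the shared config
      ansible.builtin.template:
        src: 50unattended-upgrades.j2
        dest: /etc/apt/apt.conf.d/50unattended-upgrades
        owner: root
        group: root
        mode: "0644"
```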

Cheers.

[–] [email protected] 4 points 1 year ago* (last edited 1 year ago) (1 children)

I'm in the process of migrating my servers to NixOS. It takes a lot of time and the learning curve is steep, but I have one config shared across all the servers and PCs. I have set up the servers to automatically pull the latest configuration every day and even restart if there's a kernel update.

This means I just need to update my laptop and push the changes to the repository, and all the servers will also update.

I haven't had this setup long enough to know if things will break unexpectedly with updates tho. NixOS has a great feature where you can rollback to a previous configuration (generation) with a single command. You can always keep using containers to isolate updates, if you want (Nix allows you to declare those in the config as well).
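The auto-update part is just the system.autoUpgrade module, roughly like this (the flake URL is a placeholder; option names are from the NixOS manual):

```nix
system.autoUpgrade = {
  enable = true;
  flake = "github:example/nixos-config";  # your config repo
  allowReboot = true;                     # reboot when the kernel changes
  dates = "04:00";
  randomizedDelaySec = "45min";           # spread the pulls out
};
```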

As an example, you can take a look at my config.

EDIT: Systemd timers have an option to randomize the time a service runs; I use it all the time. Nix's config pulling runs via systemd timers, so you can use that option there too.

[–] [email protected] 1 points 1 year ago* (last edited 1 year ago)

OK, that does sound really good. Reminds me of a CVS & Perl based config management system I worked on many (many) years ago (was invented by one of the other sys admins before I got there). That was for OpenBSD, but similar concept - centralised config, pushed to client, with automation of service/server restarts as required.

I might have to consider NixOS for a long-term strategy. Cheers.

[–] [email protected] 3 points 1 year ago (1 children)

I set up flexo for Arch Linux update caching and squid proxy for Alpine, Debian. This stops me from having to download the same files over and over.
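On the client side, pointing hosts at the cache is one file each, something like this (the cache hostname and port are mine - adjust for your setup):

```shell
# Debian client: route APT through the squid cache
echo 'Acquire::http::Proxy "http://cache.lan:3128/";' \
  > /etc/apt/apt.conf.d/01proxy

# Alpine client: apk honors the standard proxy env vars
echo 'export http_proxy=http://cache.lan:3128' > /etc/profile.d/proxy.sh
```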

[–] [email protected] 1 points 1 year ago (1 children)

Yeah - a caching proxy would alleviate the pain on the internet link, for sure. So flexo is similar to Unattended Upgrades for Debian, yeah? Automates pacman?

[–] [email protected] 2 points 1 year ago (1 children)

No, Flexo is not like Unattended Upgrades. Flexo just downloads packages into a cache, from which you fetch them locally using pacman as usual. It's mainly to increase download speeds and avoid downloading the same files repeatedly for different clients on the same network. Unattended Upgrades actually installs security updates automatically without user input. That is, by design, not supported and not possible on Arch Linux.

[–] [email protected] 2 points 1 year ago

Ah, gotcha. Missed that bit about Squid being for Alpine and Debian. Makes more sense now. Cheers.

[–] cow 3 points 1 year ago

apk upgrade -U in a cronjob daily and hope it does not break.
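i.e. a one-liner in root's crontab, roughly (the time is arbitrary):

```shell
# root crontab on Alpine (BusyBox crond): -U refreshes indexes before upgrading
30 4 * * * apk upgrade -U
```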

[–] vegetaaaaaaa 2 points 1 year ago (1 children)

I did setup Unattended Upgrades and it works OK, but there was a little bit of work involved in getting it setup just right. I don’t relish the idea of doing that another 40 or so times for the rest of my fleet.

automate it! I run unattended-upgrades on dozens of servers without any problems: [1] [2]. Configuration is actually really simple.

I use other methods for things that are not distribution packages [3], but for APT upgrades unattended-upgrades is the only correct™ solution.

[–] [email protected] 2 points 1 year ago

Yep, I'm working on a test Ansible playbook now. Thanks for those repo links - very useful stuff in there.

[–] [email protected] 1 points 1 year ago (1 children)

I consolidated everything onto one slightly beefier VPS, so that if I update that one VPS, 90% of my stuff updates itself. The rest are 3 RPis which run Home Assistant in different places; those I go through manually and update when I see that there is a new update.

[–] [email protected] 1 points 1 year ago (1 children)

Wow. No concerns an update will bork that 90% of your fleet that sits on the VPS? That's one reason I'm loving LXCs - anything that screws with one specific service doesn't pose a risk to any other service.

[–] [email protected] 4 points 1 year ago* (last edited 1 year ago) (2 children)

It hasn't in the last 10 or so years, but if it does, it's not a problem - I have backups which I can get up and running within half an hour.

I'm not running anything mission critical, just single-user instances of Mastodon, Lemmy, Nextcloud, PeerTube, Matrix, my website, Firefox Sync, and some old static websites of mine and my sister's which are basically archived. So even if it's down for a week, nobody but me cares.

[–] [email protected] 1 points 1 year ago (1 children)

That's actually insanely cool! I'm on a similar path right now: 10+ containers running services, and I'm thinking of adding PeerTube, Lemmy and co, as well as my webpages. But it's still a home server, so I'd need to go VPS at some point.

Did you start at home or directly go to vps? How was your journey?

In any case, thanks for sharing and have a good one. :)

[–] [email protected] 1 points 1 year ago (1 children)

Actually, my goal is to move everything to a home lab server, but my last one broke a year ago and I didn't want to spend all the money at once on a new one, so I just moved everything to the VPS where I already had my website.

[–] [email protected] 1 points 1 year ago (1 children)

Hrhr, that's actually very funny - you're basically the other car in the meme, driving in the opposite direction. How did you keep it from being hacked?

[–] [email protected] 1 points 1 year ago (1 children)

Just the normal stuff: keep everything up to date and don't fuck with scriptkiddies.

[–] [email protected] 1 points 1 year ago (1 children)

How does one fuck with scriptkiddies? You mean provoke them or allow them on your server?

[–] [email protected] 2 points 1 year ago (1 children)

Yeah, mostly provoke them - I mean it mostly jokingly ^^

[–] [email protected] 1 points 1 year ago

Okay thanks :)

[–] [email protected] 1 points 1 year ago

Yep, understood. My setup is a little more "mission" critical, if you consider availability of my Plex, *arrs, Home Assistant and Pi-holes being the mission, and the critical bit being that I have impatient teenagers in the house.