Usually Debian as base, then Ansible to set up OpenSSH for access. For the longest time I just ran docker-compose straight on bare metal; these days, though, I prefer k3s.
Selfhosted
A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.
Rules:
- Be civil: we're here to support and learn from one another. Insults won't be tolerated. Flame wars are frowned upon.
- No spam posting.
- Posts have to be centered around self-hosting. There are other communities for discussing hardware or home computing. If it's not obvious why your post topic revolves around selfhosting, please include details to make it clear.
- Don't duplicate the full text of your blog or github here. Just post the link for folks to click.
- Submission headline should match the article title (don't cherry-pick information from the title to fit your agenda).
- No trolling.
Resources:
- selfh.st Newsletter and index of selfhosted software and apps
- awesome-selfhosted software
- awesome-sysadmin resources
- Self-Hosted Podcast from Jupiter Broadcasting
Any issues on the community? Report it using the report flag.
Questions? DM the mods!
Up until now I've been using docker and mostly manually configuring by dumping docker compose files in /opt/whatever and calling it a day. Portainer is running, but I mainly use it for monitoring and occasional admin tasks. Yesterday, though, I spun up machine number 3, and I'm strongly considering setting up something better for provisioning/config. Once it's all set up right it's never been a big problem, but there are a couple of bits of initial setup that are a bit of a pain (mostly hooking up wireguard, which I use as a tunnel for remote admin and off-site reverse proxying).
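For anyone who hasn't wired that part up before, a wireguard container in compose is a fairly small unit; a minimal sketch (the linuxserver image and paths are illustrative, not a specific recommendation):

```yaml
# /opt/wireguard/docker-compose.yml - illustrative sketch only
services:
  wireguard:
    image: lscr.io/linuxserver/wireguard:latest
    cap_add:
      - NET_ADMIN                      # required to create the wg interface
    sysctls:
      - net.ipv4.conf.all.src_valid_mark=1
    volumes:
      - ./config:/config               # wg0.conf with peers for remote admin / off-site proxying
    ports:
      - "51820:51820/udp"
    restart: unless-stopped
```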
Salt is probably the strongest contender for me, though that's just because I've got a bit of experience with it.
For me it’s Ubuntu Server as the OS base, swag as reverse proxy and docker-compose for the services. So mostly SSH and yolo but with containers. I’d guess having something like Portainer running would probably be useful, but for me the terminal was enough.
As folder structure I just have a services directory with subfolders for each app/service.
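A minimal sketch of one such subfolder with swag in front (domain, validation method, and UID/GID are placeholders):

```yaml
# services/swag/docker-compose.yml - sketch only
services:
  swag:
    image: lscr.io/linuxserver/swag:latest
    environment:
      - PUID=1000
      - PGID=1000
      - URL=example.com        # placeholder domain
      - VALIDATION=http        # or dns + DNSPLUGIN, depending on the setup
    volumes:
      - ./config:/config       # nginx site confs and certs live here
    ports:
      - "443:443"
      - "80:80"                # only needed for http validation
    restart: unless-stopped
```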
NixOS instances running Nomad/Vault/Consul. Each service behind Traefik with LE certs. Containers can mount NFS shares from a separate NAS which optionally gets backed up to cloud blob storage.
I use SSH and some CLI commands for deployment, but only because that's faster than CI/CD. I'm only running `nomad run …` for the most part.
The goal was to be resilient to single node failures and align with a stack I might use for production ops work. It’s also nice to be able to remove/add nodes fairly easily without worrying about breaking any home automation or hosting.
A series of VPSes running AlmaLinux. I have a relatively big Ansible playbook to set up everything after the server goes online. The idea is that I can at any time wipe the server, install an OS, put back all the persistent data (Docker volumes and a /srv partition with all the heavy data), and run a playbook.
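A stripped-down sketch of that kind of rebuild playbook (role names, the data device, and paths are purely illustrative):

```yaml
# rebuild.yml - structure only; roles, device, and paths are placeholders
- hosts: vps
  become: true
  roles:
    - base_hardening                   # users, SSH keys, firewalld
    - docker                           # engine + compose plugin
  tasks:
    - name: Mount the persistent data partition at /srv
      ansible.posix.mount:
        path: /srv
        src: /dev/vdb1                 # placeholder device
        fstype: ext4
        state: mounted
    - name: Bring the services back up from their compose project
      community.docker.docker_compose_v2:
        project_src: /srv/compose      # placeholder project directory
```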
Docker Compose for services; the last time I checked Podman, podman-compose didn't work properly, and learning a new orchestration tool would take an unjustifiable amount of time.
I try to avoid shell scripts as much as possible because they are hard to write in a way that handles all possible scenarios, they are difficult to debug, and they can make a mess when not done properly. Premade scripts are usually the big offenders here, and they are a nice way to leave you without a single clue how the stuff they set up works.
I don't have a selfhosting addiction.
I've recently switched my entire self-hosted infrastructure to NixOS, but only after a few years of evaluation; it's quite a paradigm shift, but well worth it imho.
Before that I used to stick to a solid base of Debian with some docker containers. There are still a few of those remaining that I have yet to migrate to my NixOS infra (namely mosquitto, gotify, nodered and portainer for managing them).
Cloud VPS with Debian. Then fix/update whatever weird or outdated image my VPS provider gave me (over SSH). Then set up SSH certs instead of a password. I use tmux a lot. Sometimes I have local scripts with scp to move some files around.
Usually I'm just hosting mosquitto, maybe apache2 webserver and WordPress or Flask. The latter two are only for development and get moved to other servers when done.
I don't usually use containers.
I'm better at hardware development than all this newfangled web stuff, so mostly just give me a command line without abstractions and I'm happy.
I'd like to use rootless podman, but since I include zerotier in my containers, they need access to the tunnel device and net_admin, so rootless isn't an option right now.
Podman-compose works for me. I'd like to learn how to use Ansible and Kubernetes, but right now, it's just my Lemmy VPS and my Raspberry Pi 4, so I don't have much need for automation at the moment. Maybe some day.
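For context, zerotier in a container is the kind of workload that forces that: it needs the tun device and NET_ADMIN. A rough compose sketch (image tag and data path are illustrative):

```yaml
# compose snippet - illustrative only
services:
  zerotier:
    image: zerotier/zerotier:latest
    devices:
      - /dev/net/tun                   # tunnel device rootless podman can't hand out
    cap_add:
      - NET_ADMIN                      # the capability that currently rules out rootless
    volumes:
      - ./zerotier-one:/var/lib/zerotier-one   # identity and network state
    restart: unless-stopped
```

Joining a network can then be done with `zerotier-cli join` inside the container.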
You can add net_admin to the user running podman. I've added it to the ambient capability set before, which acts like an inherited override for everything the user runs.
I run Debian + Docker, and use Portainer to manage the docker stacks
Web servers: unikernel (nanos) + fastly to backblaze
Mail servers: tailscale (for admin ssh), fail2ban, letsencrypt nginx, docker compose, mailu, heavy use of vps firewall
Raspberry pi "servers": k3s, managed with lens desktop pro, but honestly learning kubernetes isn't exactly enjoyable and feels painful compared to a shell script that launches dead simple API deployments
I used to do a lot of Ansible to manage a lot of servers but most of the time the thing I need to fix is manual, and a setup.md in my personal notes is all I need for "how do I make a new minio backup host" or whatever.
I usually set up SSH keys and disable password login.
Then I git-pull my base docker-compose stack that sets up:
- Nginx proxy manager
- Portainer
- Frontend and backend networks
I have a handful of other docker-compose files that hook into that setup to make it easy to quickly deploy various services wherever in a modular way.
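A trimmed-down sketch of that kind of base stack (published ports and volume paths are illustrative):

```yaml
# base stack - sketch only
services:
  npm:
    image: jc21/nginx-proxy-manager:latest
    ports:
      - "80:80"
      - "443:443"
      - "81:81"                        # admin UI
    volumes:
      - ./npm/data:/data
      - ./npm/letsencrypt:/etc/letsencrypt
    networks: [frontend, backend]
    restart: unless-stopped

  portainer:
    image: portainer/portainer-ce:latest
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ./portainer:/data
    networks: [backend]
    restart: unless-stopped

networks:
  frontend:
    name: frontend                     # fixed names so other stacks can attach to them
  backend:
    name: backend
```

The other compose files can then declare `frontend`/`backend` as external networks and get reverse-proxied without publishing their own ports.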
Proxmox + mostly Debian + currently documenting my builds for future automation.
Lots of snapshots and clones/backups, in case I want to roll back, or in case I want a head start in the future.
For example, I have a couple LAMP stack VMs backed up. If I need another LAMP VM, I clone (restore-as-unique) the backup in Proxmox, twiddle a few settings to make it actually unique, and go.
I don't do Docker or anything like it currently, and eventually I'm sure I'll learn, but having a crapload of VMs (true VM or LXC) suits me just fine for now. I will likely learn how to do my deployments with Ansible before learning Docker et al.
I have a bunch of different stuff: a dedicated server with Debian, plus 4 Raspberry Pis and 1 micro computer that acts as LB/router/DHCP/DNS for the Pis.
In general I would say that my logic is as follows:
- Every OS change is done through Ansible. This is sometimes a pain: you want to just `apt install X` and instead you might need to create a new playbook for it, but in the long term it has paid off multiple times (a minimal example of such a task is sketched after this list). I do have a default playbook that does basic config (user, SSH key provisioning, some default packages) and hardening (SSH config, iptables).
- I then try to keep the OS logic to a minimum and do everything else as code. On my older dedicated server I mostly run docker-compose with systemd + templated docker-compose files dropped by Ansible. The Pis instead run Kubernetes with Flux, and all my applications are either managed directly via Flux or have Helm in between. This means I can destroy a cluster, create another one, point it to my Flux repository, and I am pretty much back where I started.
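To make the first point concrete, a hedged sketch of the task that replaces a bare `apt install X` (package names are placeholders):

```yaml
# tasks/packages.yml - placeholder package list
- name: Install base packages
  ansible.builtin.apt:
    name:
      - htop
      - tmux
      - wireguard-tools
    state: present
    update_cache: true
```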
Sounds cool. Ansible could never convince me, though, because playbook writing is so annoying.
Oh, I'm there with you on that. I got used to it at my previous job, where everything was done with Ansible, but I still find myself copy-pasting and changing things most of the time. I actually much prefer a declarative approach à la Terraform.
Overall though there is a lot of community material, and once the playbooks are written it's quite good!
I guess if I had automated my base setups with Ansible, so that I had a good foundation and had learned the tool properly, I would stick with it, but it was one of those cases where I was pushed away right from the start.
I try to have most of the common parts set up with Ansible, and over time I keep adding more and more. This is especially useful for things you may not do, or need, often, where it's not as fresh in your mind how you set it up last time.
Any configuration management system would work; I find Ansible very approachable and fast to get productive with.
Kubernetes.
I deploy all of my container/Kubernetes definitions from GitHub.
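One way to wire up that kind of pull-from-Git setup (Flux is an assumption here, and the repo URL, branch, and path are hypothetical) is a GitRepository plus a Kustomization that keeps the cluster synced to the repo:

```yaml
# Flux sketch - repo URL, branch, and path are hypothetical
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: homelab
  namespace: flux-system
spec:
  interval: 5m
  url: https://github.com/example/homelab    # hypothetical repo
  ref:
    branch: main
---
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: apps
  namespace: flux-system
spec:
  interval: 10m
  sourceRef:
    kind: GitRepository
    name: homelab
  path: ./apps                               # hypothetical path in the repo
  prune: true
```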
I have a stupid, overcomplicated networking script that never works, so every time I set up a new server I need to fix a myriad of weird issues I've never seen before. Usually I set up a server with a keyboard and mouse because SSH needs networking; if it's a cloud machine, it's the QEMU console or hundreds of reboots.
Only ssh, nvim, htop, and screen. The rest is whatever is required. I like to keep things minimal until I really need the server to do anything specific.
I resort to docker only if I need the application temporarily or the application setup is awkward/annoying.
Probably the odd one out here with Arch Linux + docker compose, and still a lot of manual labor.
Updating it every 4 weeks at the latest is enough; containers more often.
I deploy bare-metal with a mix of Ansible and Docker Compose.
For years I've done an Ubuntu LTS base with Docker, but I've just recently started using a Debian base. Moved to Debian for my workstation as well.