I've been in the process of migrating a lot of things back to Kubernetes, and I'm debating whether I should have separate private and public clusters.

Some stuff I'll keep out of Kubernetes and leave in separate VMs, like Nextcloud/Immich/etc. Basically anything I think is more likely to hold sensitive data.

I also have a few public-facing things like websites, a Matrix server, etc.

Right now I'm solving this by having two separate ingress controllers in one cluster: one for private stuff that's only reachable over a VPN, and one that's only reachable over public IPs.
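For context, the split looks roughly like this (the class names and hostnames below are placeholders for illustration, not my real config):

```yaml
# Two Ingress resources in the same cluster, bound to different controllers
# via ingressClassName. "internal" only listens on the VPN side, while
# "external" is the controller exposed on public IPs.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: private-app
  namespace: private
spec:
  ingressClassName: internal
  rules:
    - host: app.internal.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: private-app
                port:
                  number: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: public-site
  namespace: public
spec:
  ingressClassName: external
  rules:
    - host: www.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: public-site
                port:
                  number: 80
```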

The main concern I'd have is reducing the blast radius if something gets compromised. But I also don't know if I really want to maintain multiple personal clusters. I am using Omni+Talos for Kubernetes, so it's not too difficult to maintain two clusters, but it would be less efficient resource-wise, since some of the nodes are bare-metal servers and others are just VMs. I wouldn't be able to share a large bare-metal server anymore unless I split it into VMs.

What are y'all's opinions on whether to keep everything in one cluster or not?

[–] [email protected] 12 points 2 months ago (1 children)

You're looking for namespaces. Have a public and a private namespace.

[–] [email protected] 4 points 2 months ago

Just to add to this point: I have been running a separate namespace for CI, and it is possible to limit total CPU and memory use for each namespace. This saved me from having to run a separate VM. Everything (even junk) goes onto k8s, isolated by separate namespaces.

If limits and namespaces like this are interesting to you, the k8s resources to read up on are ResourceQuota and LimitRange.
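A rough sketch of what that looks like, assuming a namespace called `ci` (the numbers are made up for illustration):

```yaml
# Cap the total resources the namespace can request/use...
apiVersion: v1
kind: ResourceQuota
metadata:
  name: ci-quota
  namespace: ci
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
---
# ...and give containers defaults when they don't set their own
# requests/limits, so the quota can actually be enforced.
apiVersion: v1
kind: LimitRange
metadata:
  name: ci-defaults
  namespace: ci
spec:
  limits:
    - type: Container
      default:
        cpu: 500m
        memory: 512Mi
      defaultRequest:
        cpu: 100m
        memory: 128Mi
```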

[–] [email protected] 6 points 2 months ago (1 children)

I really don't see much benefit to running two clusters.

I'm also running single clusters with multiple ingress controllers both at home and at work.

If you are concerned with blast radius, you should probably first look into setting up Network Policies to ensure that pods can't talk to things they shouldn't.
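As a minimal sketch of where to start (the `public` namespace name is just an example): deny everything by default, then add explicit allows for what each workload actually needs.

```yaml
# Default-deny all ingress and egress for pods in the namespace.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: public
spec:
  podSelector: {}
  policyTypes:
    - Ingress
    - Egress
---
# Allow DNS egress to kube-system so pods can still resolve names.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns-egress
  namespace: public
spec:
  podSelector: {}
  policyTypes:
    - Egress
  egress:
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
      ports:
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53
```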

There is of course still the risk of something escaping the container, but the risk is rather low in comparison. There are options out there for hardening the container runtime further.

You might also look into adding things that can monitor the cluster for intrusions or prevent them. Stuff like running CrowdSec on your ingresses, and using Falco to watch for various malicious behaviour.

[–] [email protected] 1 points 2 months ago

Network Policies are a good idea, thanks.

I was more worried about container escapes, but maybe I shouldn't be. I'm using Talos as the OS now, and there isn't much on the OS as it is. I can probably also enforce that all of my public services run as non-root users and disallow privileged containers, etc.
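One way I could enforce that cluster-side is Pod Security Admission labels on the namespaces holding public workloads; the `restricted` profile requires containers to run as non-root and forbids privileged mode (namespace name below is just an example):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: public
  labels:
    # Reject pods that don't meet the "restricted" profile
    pod-security.kubernetes.io/enforce: restricted
    # Also surface warnings at admission time
    pod-security.kubernetes.io/warn: restricted
```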

Thanks for recommending CrowdSec/Falco too. I'll look into those.

[–] [email protected] 4 points 2 months ago (2 children)

At work we use separate clusters for various things. We built an Ansible collection to manage the lot so it's not too much overhead.

For home use I skipped K8s and went to rootless Quadlet manifests. Each quadlet runs under a separate non-root user with lingering enabled, to reduce exposure from a container breakout.
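A minimal sketch of one of those per-user quadlets (the image, port and names are placeholders, not my actual setup):

```ini
# ~/.config/containers/systemd/whoami.container, owned by the dedicated user
[Unit]
Description=Example rootless quadlet

[Container]
Image=docker.io/traefik/whoami:latest
PublishPort=127.0.0.1:8080:80

[Install]
WantedBy=default.target
```

Then `loginctl enable-linger <user>` keeps that user's services running without an active login, and `systemctl --user daemon-reload && systemctl --user start whoami.service` brings up the generated unit.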

[–] anyhow2503 1 points 2 months ago (1 children)

If I may ask: how practical is monitoring / administering rootless quadlets? I'm running rootless podman containers via systemd for home use, but splitting the single rootless user into multiple has proven to be quite the pain.

[–] [email protected] 2 points 2 months ago

Yeah, it is a bit of a pain. I currently only have a few users. Tooling-wise, there are ways to tail the journals (if you're using journalctl) and collate them, but I haven't gotten around to doing this myself yet.
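For reference, the plain-journalctl version of that is something like the following (usernames and unit names are placeholders, and reading another user's journal needs root or membership in `systemd-journal`):

```sh
# As the dedicated user itself:
journalctl --user -u whoami.service -f

# From an admin account, filtering on the rootless user's UID:
journalctl _UID="$(id -u quadlet-whoami)" -f
```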

[–] [email protected] 1 points 2 months ago

Quadlet

I haven't heard of Quadlet before this, thanks I'll take a look at it.

[–] wirehead 3 points 2 months ago (1 children)

Well, one option, which may be pushing the boundaries of self-hosted for some, would be to use a hosted k8s service for your public-facing stuff and a real k8s cluster at home for the rest of it.

[–] [email protected] 1 points 2 months ago

This is an option; my main reason for not wanting to use a hosted k8s service is cost. I already have the hardware, so I'd rather use it first if possible.

Though I have been thinking of converting some sites to be statically-generated and hosted externally.

[–] just_another_person 2 points 2 months ago

Container orchestration is solely for redundancy and reliability. If you really feel the need for it, go ahead.

[–] [email protected] 2 points 2 months ago (1 children)

I've dealt with exactly the same dilemma in my homelab. I used to have 3 clusters, because you'd always want an "infra" cluster the others can talk to (for monitoring, logs, a Docker registry, and similar workloads). In the end, I decided it's not worth it.

I separated on the public/private boundary and moved everything publicly facing to a separate cluster. It can only talk to my primary cluster via specific endpoints (over a Tailscale ingress), and I no longer run a multi-cluster mesh (I used to have Istio for that, then Cilium). This way, the public cluster doesn't have to be very large capacity-wise; e.g. all the S3 API needs are served by garage from the private cluster, and the public cluster just reverse-proxies into it for specific needs.
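One generic way to do that kind of reverse-proxying from the public cluster, in case it's useful (a sketch rather than my exact setup; the tailnet hostname, namespace and port are placeholders, with 3900 being garage's default S3 port):

```yaml
# An ExternalName Service pointing at the private cluster's garage endpoint...
apiVersion: v1
kind: Service
metadata:
  name: garage-private
  namespace: proxy
spec:
  type: ExternalName
  externalName: garage.private-cluster.ts.net
---
# ...fronted by a normal Ingress on the public cluster.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: s3-passthrough
  namespace: proxy
spec:
  rules:
    - host: s3.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: garage-private
                port:
                  number: 3900
```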

[–] [email protected] 1 points 2 months ago (1 children)

I did actually consider a 3rd cluster for infra stuff like DNS/monitoring/etc., but at the moment I have those things in separate VMs so that they don't depend on me not breaking Kubernetes.

Do you have your actual public services running in the public cluster, or only the load balancer/ingress for those public resources?

Also how are you liking garage so far? I was looking at it (instead of minio) to set up backups for a few things.

[–] [email protected] 2 points 2 months ago (1 children)

Actual public services run there, yeah. If any of them is compromised, it can only access limited internal resources, and an attacker would have to fully compromise the cluster to get the secrets needed to access those in the first place.

I really like garage. I remember when minio was straightforward and easy to work with; garage is that thing now. I use it because it's just so much easier to handle file serving when you have S3-compatible uploads, even if you don't do any real clustering.

[–] [email protected] 2 points 2 months ago (1 children)

Do you use garage for backups by any chance? I was wanting to deploy it in Kubernetes, but one of my uses would be backing up volumes, and that doesn't really help me if the Kubernetes cluster itself breaks somehow and I have to rebuild it.

I kind of want to avoid a separate cluster for storage, or even separate VMs. I'm still thinking of deploying garage in k8s and then just using rclone or something to copy the contents from garage S3 to my NAS.
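Something along these lines is what I had in mind (the remote name, endpoint, credentials, bucket, and NAS path are all placeholders):

```ini
# ~/.config/rclone/rclone.conf
[garage]
type = s3
provider = Other
endpoint = https://s3.example.com
access_key_id = GK_PLACEHOLDER
secret_access_key = PLACEHOLDER
region = garage
```

```sh
# Periodic one-way copy of the bucket onto the NAS
rclone sync garage:backups /mnt/nas/garage-backups --checksum
```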

[–] [email protected] 1 points 2 months ago

No. It's my in-cluster storage that I only use for things that are easier to work with via the S3 API, and I do backups outside of the k8s scope (it's a bunch of various solutions that boil down to offsite ZFS replication, basically). I'd suggest you take a look at garage's replication features if you want it to be durable.
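The replication part, reduced to its simplest form, is basically just snapshot-and-send (dataset names and the target host are placeholders; in practice a tool like syncoid or zrepl handles the scheduling and incremental sends):

```sh
SNAP="tank/apps@$(date +%F)"
zfs snapshot -r "$SNAP"
# Stream the snapshot (and its children) to the offsite box
zfs send -R "$SNAP" | ssh backup-host zfs recv -Fdu backup/apps
```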

[–] [email protected] 1 points 2 months ago (1 children)

Right now I'm solving this by having two separate ingress controllers in one cluster: one for private stuff that's only reachable over a VPN, and one that's only reachable over public IPs.

How's this working out? What kinda alternatives are there with a single cluster?

[–] [email protected] 2 points 2 months ago

It's mostly working fine for me.

An alternative I tried before was just whitelisting which IPs are allowed to access specific ingresses, while having the ingress listen on both the public and private networks. I like having a separate ingress controller better, because I know the ingress isn't accessible at all from a public IP. It keeps the logs separated as well.
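For reference, with ingress-nginx the whitelist approach is just an annotation per Ingress (the CIDR below is a placeholder VPN subnet):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: private-app
  annotations:
    # Only clients from this source range are allowed through
    nginx.ingress.kubernetes.io/whitelist-source-range: "10.8.0.0/24"
spec:
  ingressClassName: nginx
  rules:
    - host: app.internal.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: private-app
                port:
                  number: 80
```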

Another alternative would be an external load balancer or reverse proxy that can access your cluster. It'd act as the "public" ingress, but would need to be configured to allow specific hostnames/services through.
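A sketch of that external-proxy option, using Caddy as an example (the hostnames and upstream ingress address are placeholders): only the hostnames you explicitly list get a site block, so anything else simply isn't proxied.

```caddyfile
# Forward only these public hostnames to the cluster's ingress;
# Caddy terminates TLS for them itself.
www.example.com, matrix.example.com {
    reverse_proxy 192.0.2.10:80
}
```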