this post was submitted on 06 May 2024
18 points (82.1% liked)

Selfhosted


cross-posted from: https://lemmy.cloudhub.social/post/347779

I am running a Kubernetes cluster for this domain, and I'm looking at more services to run (right now I have Mastodon and Lemmy).

I was considering WriteFreely and PixelFed, but they don't seem to have an easy solution for running on Kubernetes (WriteFreely doesn't even have a production-ready docker image).

Is anyone else running federated services in their lab? Do you run any of them on Kubernetes?

top 25 comments
[–] [email protected] 4 points 6 months ago (1 children)

I run plenty of services in my lab on k3s. Most of them I had to manually build charts for and add them to my cluster. I doubt anyone has built charts for Lemmy, maybe mastodon. Anything that's dockerized like Lemmy can be put into kubernetes, but it's going to take some doing. Good luck out there, and of course if you get it working then I think the maintainers would be happy to get a helm chart merged into their repo.

[–] [email protected] 1 points 6 months ago (2 children)

Yeah, that's the pain point - building and maintaining the charts.

Also, I know the charts likely wouldn't have to be super complex, but I'm used to working with Bitnami's charts that are massively complex - I just don't have the time to go that in-depth.

[–] [email protected] 3 points 6 months ago (2 children)

I start simple with mine. Begin with the deployment, just getting the pod up and running, completely stateless. Once that's stabilized and happy, move on to the service and connecting to the UI. Then I finish anything else I can before adding in state - environment variables (first just in the deployment, then worry about a config map and secrets). Adding volumes just complicates things, because then you have to reset those volumes whenever you want to go back to a clean state.

As a list:

  • Deployment (stateless) - get it to not crash
  • Service - connect to it
  • Environment variables, extracting out what can be
  • Secrets, adding those to the chart
  • Volumes, adding in state
  • Move to a nice clean setup with a clean values.yaml.

Finally, once you're happy, make it conform to other standards. Getting it stood up in k8s is the hard part; customizing can come after.
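
Roughly, the first two steps of that list might look something like the sketch below. The app name, image tag, and port are placeholders (WriteFreely is just used as an example app), not anything official:

```yaml
# Sketch of steps 1-2: a stateless Deployment plus a Service.
# App name, image, and ports are placeholder examples only.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: writefreely
spec:
  replicas: 1
  selector:
    matchLabels:
      app: writefreely
  template:
    metadata:
      labels:
        app: writefreely
    spec:
      containers:
        - name: writefreely
          image: writeas/writefreely:latest   # placeholder - swap in whatever image you use
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: writefreely
spec:
  selector:
    app: writefreely
  ports:
    - port: 80
      targetPort: 8080
```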

[–] [email protected] 2 points 6 months ago (1 children)

That's actually super helpful! I haven't done much custom Helm chart-ing, and was kinda lost on where to start. That really helps break the process down, and the tip about skipping state to start is very wise.

[–] [email protected] 3 points 6 months ago (1 children)

I've done... an annoying amount of them now. I hope my trials help you.

Be ready, it's a very annoying and slow process: watching logs, figuring out why things are failing, debugging, digging through GitHub issues, everything. I just did one last week that was saying it couldn't write to /etc/passXXXXX, and it took 2 hours to track down that there was an optional command I could pass in that would change the running user of the container (separate from the kube user). It's a slog, but when you get it running it's a rush of endorphins.

Biggest thing - the kubernetes filesystem is effectively read-only compared to docker. Good devs minimize writes to the filesystem unless they have to, and keep them localized; bad devs write everywhere - and that gets sticky fast. So, if you're getting write errors, know that there's probably another volume you need to attach. The kicker is knowing which ones can be an emptyDir scratch directory and which ones actually need to persist. If you have a docker-compose file, it's a great place to start: just set all the volumes to emptyDir to start off with.
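
As a rough illustration of the emptyDir trick (the paths and names here are invented, not from any particular app): mount a throwaway volume over whatever directory the app insists on writing to, and only promote it to a PersistentVolumeClaim once you know it actually needs to survive restarts.

```yaml
# Sketch: quiet write errors by mounting throwaway emptyDir volumes over
# the paths the app writes to. All paths and names are invented examples.
spec:
  template:
    spec:
      containers:
        - name: app
          image: example/app:latest          # placeholder image
          volumeMounts:
            - name: scratch-cache
              mountPath: /var/cache/app      # app scribbles here; safe to lose
            - name: scratch-tmp
              mountPath: /tmp
      volumes:
        - name: scratch-cache
          emptyDir: {}
        - name: scratch-tmp
          emptyDir: {}
```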

Good luck!

[–] [email protected] 1 points 6 months ago* (last edited 6 months ago)

I think both of the ones I mentioned have docker-compose files, which I think I can convert with kompose convert? I guess from there I would follow your steps and then start parameterizing it once it's running properly.

Thanks! I think I'll start trying out PixelFed tomorrow.

[–] ChapulinColorado 1 points 6 months ago

This all makes sense to me, since we deal with it at work. I would maybe add a service vs. route step to differentiate things like the UI that need external exposure. The main difference is we use kustomize instead of helm. Out of curiosity, do you have experience with both, and why did you settle on helm?
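
(For anyone reading along who hasn't used kustomize: the no-templating equivalent can be as small as a single kustomization.yaml that aggregates plain manifests. The file and namespace names below are made up.)

```yaml
# Minimal kustomize sketch: one kustomization.yaml pointing at plain
# manifests, no templating. File names and namespace are made up.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: pixelfed
resources:
  - deployment.yaml
  - service.yaml
  - ingress.yaml
```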

[–] [email protected] 1 points 6 months ago

I don’t like helm, so I use nix to maintain my fediverse deployments in kubernetes. Typically that'd just autoupdate itself to new releases, but for lemmy specifically I upgrade by hand nowadays since one release some time ago broke my deployment and its schema change was incompatible with the automated rollback.

My setup is a combination of https://github.com/farcaller/nixdockertag (auto-updated docker images for things where I fully own the deployments) and https://github.com/farcaller/nixhelm (for helm charts that I either consume verbatim or have local patches on). Both just auto-update nightly thanks to github.

[–] NegativeLookBehind 2 points 6 months ago* (last edited 6 months ago) (1 children)

If they’re containerized they’ll run in a Kubernetes cluster, I’m sure you know this. Why not build the K8s infrastructure you need to get them functional? Or maybe Helm charts if you’re up for it.

[–] [email protected] 1 points 6 months ago (1 children)

Oh, I know I could get them to run with enough work. I just don't have that much time to spend on initial implementation and upkeep of the charts.

I'm using FluxCD, which I believe can do deployments of plain Kubernetes manifests, but that still requires a decent amount of overhead to keep up to date.
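
For reference, pointing Flux at a directory of plain manifests is a single Kustomization object - something like the sketch below, where the repo, path, and app names are hypothetical:

```yaml
# Sketch of a Flux Kustomization deploying plain manifests from a Git repo.
# Repository name, path, and app name are hypothetical.
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: pixelfed
  namespace: flux-system
spec:
  interval: 10m
  path: ./apps/pixelfed
  prune: true
  sourceRef:
    kind: GitRepository
    name: homelab
```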

[–] vegetaaaaaaa 0 points 6 months ago

I just don’t have that much time to spend on initial implementation and upkeep

Well k8s is a poor choice of platform for you :D

[–] [email protected] 1 points 6 months ago* (last edited 6 months ago)

Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I've seen in this thread:

Fewer Letters    More Letters
CSAM             Child Sexual Abuse Material
HA               Home Assistant automation software
HA               High Availability
HTTP             Hypertext Transfer Protocol, the Web
k8s              Kubernetes container management package
nginx            Popular HTTP server

4 acronyms in this thread; the most compressed thread commented on today has 5 acronyms.

[Thread #740 for this sub, first seen 6th May 2024, 19:45] [FAQ] [Full list] [Contact] [Source code]

[–] [email protected] 1 points 6 months ago* (last edited 6 months ago) (1 children)

It uses a different federation protocol, but Matrix servers would be the other big one.

Edit: you also mentioned trouble creating them. I suggest looking into OperatorHub and using operators for postgres, redis, and auth (keycloak?). This can also take you down the rabbit hole of making everything highly available.
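
As one example of the operator route (CloudNativePG here, but other postgres operators look similar), a small HA database cluster ends up being only a few lines of YAML - the cluster name and storage size below are arbitrary:

```yaml
# Sketch: a 3-instance HA postgres cluster via the CloudNativePG operator.
# Cluster name and storage size are arbitrary examples.
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: lemmy-db
spec:
  instances: 3
  storage:
    size: 10Gi
```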

[–] [email protected] 2 points 6 months ago (1 children)

Yeah, I used to host a Matrix instance - could do that for this one too.

The issue is more about setting up the Kubernetes manifests and templating them. I usually use the chart's built-in postgres and redis config, though using an operator would make it more scalable for sure.

I'm using Authentik for auth, but I do also like Keycloak.

[–] [email protected] 2 points 6 months ago (1 children)

Yeah, it's a bit of work sometimes. Synapse kinda sucks too, with their philosophy of no environment variables for secrets. I ended up making an init container that hijacks my config map and injects the environment variables into the config.
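
Roughly what that init-container pattern looks like (the image, secret names, placeholder token, and sed expression are all illustrative - Synapse's real homeserver.yaml keys will differ): the init container reads a templated config from the ConfigMap, substitutes in a secret passed as an env var, and writes the rendered file to an emptyDir that the Synapse container then mounts.

```yaml
# Sketch of the init-container trick: render the real config from a
# templated ConfigMap, substituting a secret passed via env var, before
# the main Synapse container starts. All names here are illustrative.
initContainers:
  - name: render-config
    image: busybox:1.36
    command:
      - sh
      - -c
      - sed "s|__REG_SECRET__|$REGISTRATION_SECRET|" /template/homeserver.yaml > /rendered/homeserver.yaml
    env:
      - name: REGISTRATION_SECRET
        valueFrom:
          secretKeyRef:
            name: synapse-secrets
            key: registration-shared-secret
    volumeMounts:
      - name: config-template     # ConfigMap containing the placeholder token
        mountPath: /template
      - name: rendered-config     # emptyDir shared with the synapse container
        mountPath: /rendered
```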

[–] [email protected] 1 points 6 months ago (1 children)

They store the secrets in a file? Gross. What a poor way of handling that. Pretty sure environment variables would be more secure. Especially in Kubernetes.

[–] [email protected] 1 points 6 months ago (1 children)

Yeah, I want to switch when other implementations catch up. Unfortunately I think that will take some more time, especially since you can't migrate from Synapse and have to start fresh. One day though!

I did the same for Lemmy at one point, then found out all the configs are mapped to environment variables by convention. My Lemmy setup is the most advanced: it has HA postgres, and all of its modules separated and HA. The proxy setup for it in k8s was rough, but I eventually got it working directly on ingress-nginx too.

[–] [email protected] 1 points 6 months ago (1 children)

Huh, do you have your lemmy config documented somewhere? I keep running into issues with it and I'm not sure which component exactly is failing, but it's annoying. I'm using this helm chart currently: ananace/lemmy. It works, but I don't have pict-rs set up in HA either.

[–] [email protected] 0 points 6 months ago (1 children)

I got all my yaml files source controlled privately right now but I can share if you want them. I disabled Pictrs around the time of CSAM attacks and have yet to bother enabling it again haha

[–] [email protected] 1 points 6 months ago (1 children)

I disabled Pictrs around the time of CSAM attacks and have yet to bother enabling it again

Uhh… what?? When did that happen? I thought pictrs was a requirement also…

[–] [email protected] 0 points 6 months ago (1 children)

Nah, not a requirement. I think it was like 3 months or so after the reddit API shutdown. Big instances got local AI models to detect it, and the Lemmy server now supports disabling the caching of images from other instances, so I'd probably disable that if I ever enable it again haha

[–] [email protected] 1 points 6 months ago

I should look into how to do that on my instance probably. Pictrs always seemed like a bit of a security nightmare.

[–] mesamunefire 0 points 6 months ago* (last edited 6 months ago) (1 children)

It's not kubernetes, but I run a family-sized yunohost. It's great at installing and updating webapps. They have an awesome selection of federated apps like mastodon, writefreely, misskey, bookwyrm, and more.

For fewer than 5 users I personally don't need kube, but if I were to scale I would probably go that direction.

[–] [email protected] 1 points 6 months ago

I've seen that around, but I prefer to run my own services instead of relying on a ready-built system like that. I find they usually don't offer many customization options.

[–] Ctrl_Alt_Banana 0 points 6 months ago