But I want it so badly! All I need to figure out is:

reverse proxies, as sketched below (I stumbled through getting one Caddy instance set up so far, but gosh, I struggle with that too; Nginx Proxy Manager seems like my next step)

a rock-solid backup/restore setup (but first I need to figure out where the Vaultwarden Alpine container's files live, then be able to get those off of the Proxmox VM)
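
For the reverse-proxy piece, this is roughly all a minimal Caddyfile needs to say (the hostname and the port Vaultwarden is published on are made up for the example, not taken from this post):

```
# Caddyfile: hypothetical hostname and upstream port, adjust to your setup
vault.example.com {
    # Caddy obtains and renews the TLS certificate for this hostname automatically
    reverse_proxy 127.0.0.1:8080
}
```

Nginx Proxy Manager does the same job through a web UI; under the hood it is still just "this hostname forwards to that host:port".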

This is more of a vent than a request for someone to spell it all out for me, but I wouldn't be upset if anyone had the time to point me in the right direction.

Would it just be easier to run a KeePassXC and Syncthing setup?

[email protected] 42 points 2 months ago

Bitwarden is cheap enough, and I trust them as a company enough that I have no interest in self-hosting Vaultwarden.

However, all these hoops you have had to jump through are excellent learning experiences that will benefit the rest of your self-hosted setup.

Reverse proxies are the backbone of hosting services these days.
Learning how to inspect Docker containers, source code, config files and documentation to find where critical files are stored is extremely useful.
Learning how to set up more useful/granular backups, beyond a basic VM snapshot in Proxmox, can be applied to any install anywhere.
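
For example, to answer the "where do the Vaultwarden files live" question from the post, something along these lines does the job (the container name and host paths are assumptions):

```
# List the vaultwarden container's mounts to see where /data sits on the host
docker inspect vaultwarden \
  --format '{{ range .Mounts }}{{ .Source }} -> {{ .Destination }}{{ "\n" }}{{ end }}'

# Vaultwarden keeps its SQLite database, attachments and keys under /data inside
# the container. Stop it briefly so the database isn't copied mid-write, then
# pull the host-side directory off the VM:
docker stop vaultwarden
rsync -av /srv/vaultwarden/data/ user@backup-host:/backups/vaultwarden/
docker start vaultwarden
```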

The most annoying thing about a lot of these is that tutorials are "minimal viable setup" sorta things.
Like "now you have it set up, make sure you tune it for production", and it just ends.
And other tutorials that cover the next step, getting things production-ready, often reference outdated versions or have different core setups, so they don't quite apply.

I understand your frustrations.

model_tar_gz 7 points 2 months ago (last edited 2 months ago)

> The most annoying thing about a lot of these is that tutorials are "minimal viable setup" sorta things. Like "now you have it set up, make sure you tune it for production"

Dude I’m already in pain from trying to serve these models and you just have to go rub salt into my eyes. “Simplify your stack with ” they said. “Share your resources effectively and easily with ” they said. “Here’s your fuckin’ ‘Hello, World’ now GRTFM and buzz off” they said.

Working close to the metal do be like that.

[email protected] 10 points 2 months ago

At the homelab scale, Proxmox is great.
Create a VM, install Docker, and use Docker Compose for various services.
Create additional VMs when you feel the need. You might never feel the need, and that's fine. Or you might want a VM per service for isolation purposes.
Have Proxmox take regular snapshots of the VMs.
Every now and then, copy those backups onto an external USB hard drive.
Take snapshots before, during and after tinkering so you have checkpoints to restore to. Copy the latest snapshot onto an external USB drive once you are happy with the tinkering.
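
A sketch of what that looks like from the Proxmox host's shell (the VM ID, storage name and USB mount point are examples):

```
# Back up VM 100 with vzdump while it keeps running (snapshot mode)
vzdump 100 --mode snapshot --storage local --compress zstd

# The default "local" storage writes dumps to /var/lib/vz/dump;
# copy them onto an external USB drive mounted at /mnt/usb
rsync -av /var/lib/vz/dump/ /mnt/usb/proxmox-backups/
```

The same backups can also be scheduled from the web UI under Datacenter -> Backup.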

Create a private git repository (on GitHub or whatever), and use it to store your docker-compose files, related config files, and little readmes describing how to get that compose file to work.
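
As a sketch of what ends up in such a repo (names are made up; the published port matches the Caddyfile example earlier in the thread):

```
homelab/
├── vaultwarden/
│   ├── docker-compose.yml
│   ├── Caddyfile
│   └── README.md        # how to bring it up, where the data and backups live
└── ...
```

```yaml
# homelab/vaultwarden/docker-compose.yml: minimal sketch
services:
  vaultwarden:
    image: vaultwarden/server:latest
    restart: unless-stopped
    volumes:
      - ./data:/data        # everything worth backing up lives in here
    ports:
      - "8080:80"           # the upstream port the reverse proxy points at
```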

Proxmox solves a lot of headaches. Docker solves a lot of headaches. Both are widely used, so plenty of examples and documentation about them.

That's all you really need to do.
At some point, you will run into an issue or limitation. Then you have to solve for that problem, update your VMs, compose files, config files, readmes and git repo.
Until you hit those limitations, what's the point in over-engineering it? It's just going to overcomplicate things. I'm guilty of this.

The need to automate any of the above will become apparent when tinkering stops being fun.

The best thing to do to learn all these services is to comb the documentation, read GitHub issues, and browse the source a bit.

ChapulinColorado 2 points 2 months ago

Great points. As someone who is very happy with their current home automation and services, checking the config files into a git repo was the critical step. Also back up volumes, since many containers tend to store state in some binary or internal DB. At the very least, try restoring the config to verify you have what's needed; the containers should start even if they have no media on them.
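
A sketch of both halves, dumping a named volume and then test-restoring it (the volume names and paths are examples):

```
# Dump a named volume to a tarball using a throwaway alpine container
docker run --rm \
  -v vaultwarden_data:/data:ro \
  -v "$(pwd)":/backup \
  alpine tar czf /backup/vaultwarden_data.tar.gz -C /data .

# Restore test: unpack into a fresh volume and point a container at it.
# It should start cleanly even if bulky media was left out of the backup.
docker run --rm \
  -v restore_test:/data \
  -v "$(pwd)":/backup \
  alpine tar xzf /backup/vaultwarden_data.tar.gz -C /data
```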

As for tinkering not being fun anymore: that's okay, sometimes you need a break.

A point that is not brought up enough, in my opinion, is to plan for losses. What can you afford to lose if you can't back up everything (due to price, etc.)? Config files, photos, and personal data are relatively small (compared to something like a media library) and should be prioritized.

[email protected] 2 points 2 months ago

> and I trust them as a company enough that I have no interest in self-hosting Vaultwarden.

I pay the subscription, but I trust no company that much.