this post was submitted on 30 Aug 2024
103 points (96.4% liked)

Selfhosted

top 19 comments
[–] [email protected] 16 points 3 months ago (1 children)

Nice article.

why bother? Why I self host

Most of this article is not purely about that question, but I dislike clickbait, so I’ll actually answer the question from the title: Two reasons.

First of all, I like to be independent - or at least, as much as I can. Same reason we have backup power, why I know how to bake bread, preserve food, and generally LARP as a grandmother desperate to feed her 12 grandchildren until they are no longer capable of self propelled movement. It makes me reasonably independent of whatever evil scheme your local $MEGA_CORP is up to these days (hint: it’s probably a subscription).

It’s basically the Linux and Firefox argument - competition is good, and freedom is too.

If that’s too abstract for you: the second reason, and what this article is really about, is the fact that it teaches you a lot. And that is a truth I hold to be self-evident: Learning things is good & useful.

Turns out, forcing yourself to either do something you don’t do every day, or to get better at something you do occasionally, or to simply learn something that sounds fun makes you better at it. Wild concept, I know.

Contents

Introduction
My Services
Why I self host
Reasoning about complex systems
Things that broke in the last 6 months
Things I learned (or recalled) in the last 6 months

  • You can self host VS Code
  • UPS batteries die silently and quicker than you think
  • Redundant DNS is good DNS
  • Raspberry PIs run ARN, Proxmox does not
  • zfs + Proxmox eat memory and will OOM kill your VMs
  • The mystery of random crashes (Is it hardware? It’s always hardware.)
  • SNMP(v3) is still cool
  • Don’t trust your VPS vendor
  • Gotta go fast
  • CIFS is still not fast
  • Blob storage, blob fish, and file systems: It’s all “meh”
  • CrowdSec

Conclusion

[–] [email protected] 4 points 3 months ago (1 children)

Raspberry PIs run ARN

Sooo RPis are now processing proteins? XD Sorry, I'm a bit drunk... Thanks for the short insight, was too lazy to click!!

[–] [email protected] 4 points 3 months ago

Enjoy your Friday

[–] [email protected] 16 points 3 months ago (2 children)

Good post; kinda surprised sshfs is outperforming cifs. Makes me want to take a second look at that because, boy, do I ever not like how samba performs, though I'm willing to chalk some of that up to configuration weirdness on my end, since I have samba configured so that any version of Windows that could ever connect to smb/cifs shares still can. (Retro computing, yay.)
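For context, supporting ancient Windows clients generally means lowering samba's protocol floor, which is one classic source of slow transfers (modern clients still negotiate upward, but anything actually speaking SMB1 will be slow). A sketch of the relevant smb.conf settings; values are assumed, not the commenter's actual config:

```ini
# /etc/samba/smb.conf -- [global] section
# NT1 (aka SMB1) is what pre-Vista clients speak
server min protocol = NT1
# very old clients also need the weaker auth schemes re-enabled
ntlm auth = yes
lanman auth = yes
```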

Also, I'd like to toss in iDrive e2 as a cheap S3 blob storage provider.

I'm paying ~$30 a year for 1TB, with "free" egress. (They operate on the IT'S ON SALE! pricing nonsense, so your price will certainly vary because, well, it's always on sale, just by different amounts; $30 is the usual-ish price.)

You get zero useful support and less-than-stellar performance, but it's shockingly cheap, and in the last ~2 years (out of the VA datacenter) I've had exactly ONE outage, lasting about three hours.

Good enough to stuff server backups and object storage for a couple of websites.

Oh, and "free" egress means up to 3x the amount you have stored, so it's probably a bad fit if your majority use is going to be public downloads; if it's not, it'll probably never be an issue. I have like 600GB of backups sitting there, so lots of buffer.
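Since e2 speaks S3, standard tooling only needs its endpoint overridden; a sketch (bucket name invented; e2 shows you your region-specific endpoint in its dashboard):

```shell
# aws-cli against an S3-compatible store; set E2_ENDPOINT from your e2 dashboard
aws s3 cp backup.tar.gz "s3://my-backups/" --endpoint-url "https://$E2_ENDPOINT"
```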

[–] theRealBassist 3 points 3 months ago (1 children)

I really need to move my CIFS shares to NFS now that I've migrated to Linux for everything. It'd probably fix half the errors I regularly have, tbh.
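For what it's worth, the server side of that migration is a one-liner per share; a sketch with invented paths and subnet:

```
# /etc/exports on the server
/srv/share  192.168.1.0/24(rw,sync,no_subtree_check)

# then on the server:   exportfs -ra
# and on each client:   mount -t nfs4 server:/srv/share /mnt/share
```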

[–] c10l 5 points 3 months ago

Ah NFS… It’s so good when it works! When it doesn’t though, figuring out why is like trying to navigate someone else’s house in pitch dark.

[–] [email protected] 3 points 3 months ago* (last edited 3 months ago) (1 children)

CIFS supports leases. That is, hosts will try to ask for exclusive access to a file, so that they can assume that it hasn't changed.

IIRC sshfs just doesn't care much about cache coherency across hosts; it kind of assumes that things haven't changed underfoot and uses a timer to expire the cache.

considers

Honestly, with inotify, it'd probably be possible to make a newer sshfs that does support leases.

I suspect that the Unixy thing to do is to use NFSv4 which also does cache coherency correctly.

It is easy to deploy sshfs, though, so I do appreciate why people use it; I do so myself.
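That ease of deployment is real; a sketch of a typical mount, including the timer-based cache expiry mentioned above (host, user, and paths invented; option names as in sshfs 3.x, so check your version's manpage):

```shell
# sshfs only needs a working ssh login on the remote side
sshfs alice@nas:/srv/data /mnt/data \
    -o reconnect \
    -o dcache_timeout=20   # directory cache expires on a timer, not on change

# unmount when done
fusermount -u /mnt/data
```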

kagis to see if anyone has benchmarks

https://blog.ja-ke.tech/2019/08/27/nas-performance-sshfs-nfs-smb.html

Here are some 2019 benchmarks that show NFSv4 to generally be the most-performant.

The really obnoxious thing about NFSv4, IMHO, is that ssh is pretty trivial to set up, and sshfs just requires a working ssh connection and sshfs software installed, whereas if you want secure NFSv4, you need to set up Kerberos. Setting up Kerberos is a pain. It's great for large organizations, but for "I have three computers that I want to make talk together", it's just overkill.
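To make that concrete, the NFSv4-with-Kerberos path adds a `sec=` flavour on top of a KDC, per-host `nfs/` principals, and keytabs; a sketch (realm, hostnames, and paths are hypothetical):

```
# /etc/exports on the server, requiring Kerberos with privacy (encryption)
/srv/data  *.example.org(rw,sync,sec=krb5p)

# client side, once its keytab holds nfs/client.example.org@EXAMPLE.ORG:
#   mount -t nfs4 -o sec=krb5p nas.example.org:/srv/data /mnt/data
```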

[–] [email protected] 2 points 3 months ago

NFSv4

I'm an idiot. I do have NFS set up on the NAS (I mean, because why not?) but I always forget it's there, since one client OS (macOS) doesn't support it basically at all, and the other (Windows) does, but it's not really integrated into the GUI at all, and I'm lazy. I should see what the performance difference between the Windows SMB and NFS implementations is.

As for your key storage, I bloody love my (pair of) Yubikey 5s. I've stuffed a giant pile of keys and certs in there and basically don't think about managing them anymore because, well, it's just there and just works*.

*Okay the setup was a fuck and a half, but I mean, that does technically qualify as works.

[–] [email protected] 8 points 3 months ago (3 children)

Retail UPS batteries don't even last a single year, in my experience. The weekly brownouts and momentary blackouts probably aren't helping.

At this point, I'm just thinking of building my own with a charge controller, inverter and a bank of car batteries.

[–] [email protected] 14 points 3 months ago

If you do, make sure you get deep-cycle/marine batteries as a normal car battery will die reeeealllly fast under the typical workload you'll have from a UPS.

They're pretty simple and I considered one before deciding I really didn't want to deal with a whole bunch of batteries sitting on a shelf and just bought a normal UPS.
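If anyone does go the DIY route, the back-of-envelope runtime math is simple; a sketch where every number is an assumption, not a measurement:

```shell
CAPACITY_WH=2400   # e.g. two 12V 100Ah deep-cycle batteries
DOD=0.5            # safe depth of discharge for deep-cycle (car batteries: far less)
INV_EFF=0.85       # typical inverter efficiency
LOAD_W=300         # steady draw from the UPS

# runtime = usable energy after inverter losses, divided by the load
awk -v c="$CAPACITY_WH" -v d="$DOD" -v e="$INV_EFF" -v l="$LOAD_W" \
    'BEGIN { printf "estimated runtime: %.1f hours\n", c * d * e / l }'
```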

[–] [email protected] 2 points 3 months ago (1 children)

Do you mind sharing what brand retail UPS weren't lasting a year?

I'm dealing with similar brownouts and also an area with lots of lightning. I got about 5 years out of my UPS batteries. Wondering if I've just been lucky.

[–] [email protected] 2 points 3 months ago (1 children)

Schneider Electric APC Back-UPS 1500VA, 900W.

They power-on self-test okay, but go on to just fail to switch over during outages. I'm still trying to figure out if it's a factor of cumulative time, running hours, or whether they're only good for a fixed number of power failures. And whether it's the battery or the UPS device itself.

It feels like crashing your car, and then the airbags go off after you're already mangled and bleeding out.

[–] [email protected] 1 points 2 months ago

I have the same model, powering 3 machines with an average load of ~125W when it switches to battery power. I have a NUT host on one of the servers which broadcasts the outage to the other machines; the whole stack shuts down after 30 seconds and switches off the UPS at the very end. Gone through about 4 or 5 true power events now and double that in testing (overzealous, I know), but the UPS is 2.5 years old now and doing just fine. I have a spare battery because I heard ~3 years is normal, but so far no indication it's due for replacement yet.
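A NUT stack like that boils down to a couple of small config files; a sketch with invented hostnames and passwords (the staged 30-second delay would live in upssched, omitted here):

```ini
# /etc/nut/ups.conf on the NUT host (usbhid-ups suits most USB APC units)
[apc]
    driver = usbhid-ups
    port = auto

# /etc/nut/upsmon.conf on the NUT host ("upsmon"/password must match upsd.users)
MONITOR apc@localhost 1 upsmon secretpass primary
SHUTDOWNCMD "/sbin/shutdown -h +0"

# /etc/nut/upsmon.conf on each other machine on the LAN
MONITOR apc@nut-host.lan 1 upsmon secretpass secondary
```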

I think the important thing for these is to not run them down to 0. They’re only good for one event at a time and shouldn’t constantly be switching over without basically a full day of recharging again (more like 16h to recharge).

I can see consistent brownouts and events being a problem for these little machines. I’m planning on upgrading to a rack solution soon and relegating this one to my desktop in the other room (with a fresh battery of course).

[–] [email protected] 1 points 3 months ago

I use an Allpower power station that has a built-in UPS.

[–] [email protected] 5 points 3 months ago* (last edited 2 months ago) (1 children)

Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I've seen in this thread:

Fewer Letters More Letters
DNS Domain Name Service/System
NAS Network-Attached Storage
NFS Network File System, a Unix-based file-sharing protocol known for performance and efficiency
RPi Raspberry Pi brand of SBC
SBC Single-Board Computer
SMB Server Message Block protocol for file and printer sharing; Windows-native
SSH Secure Shell for remote terminal access
VPS Virtual Private Server (opposed to shared hosting)

7 acronyms in this thread; the most compressed thread commented on today has 7 acronyms.

[Thread #942 for this sub, first seen 30th Aug 2024, 18:35] [FAQ] [Full list] [Contact] [Source code]

[–] meonkeys 1 points 3 months ago (1 children)

I think "VPS" in Christian's blog post does refer to shared hosting.

[–] [email protected] 3 points 3 months ago

I think shared hosting there is meant to refer to the older "upload your files in webmin and we'll shove them in /cgi-bin/ with everybody else's" style of hosting, where multiple users' sites run on a single instance of a webserver, versus a VPS giving you a VM with SSH access?

[–] [email protected] 5 points 3 months ago (1 children)

Interesting read, Hetzner's been on my radar for a while.

[–] FlightyPenguin 3 points 3 months ago

Highly recommend. I moved my web hosting from my home server to a CPX11 server for better uptime (my tinkering around in the homelab was always bringing things down) and couldn't be happier. It's dirt cheap (cheaper than shared web hosting, even), performant (and faster than shared web hosting), and reliable. With a 20TB bandwidth limit at the lowest tier, I can reverse proxy for most of my homelab, too.
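The reverse-proxy leg can be as small as a Caddyfile on the VPS forwarding over a tunnel back home; a sketch (the hostname and the WireGuard-style internal address are invented):

```
# Caddyfile on the Hetzner box; 10.0.0.2 is the homelab end of an assumed VPN tunnel
photos.example.com {
    reverse_proxy 10.0.0.2:2342
}
```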