this post was submitted on 06 Jul 2023

Selfhosted


A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.


I figured most of you could relate to this.

I was updating my Proxmox servers from 7.4 to 8. The first one went without problems. That second one though... yeah, not so much. I THINK it's GRUB, but I'm not sure yet.

Now my Nextcloud, NAS, main reverse proxy and half my DNS went down. And no time to fix it before work. Lovely πŸ€• Well I now know what I'll be doing when I get home.
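
If it does turn out to be GRUB, the usual recovery is booting a live ISO, chrooting into the install, and reinstalling the bootloader; roughly something like this, with placeholder device names (a ZFS root goes through proxmox-boot-tool instead):

    # from a Debian/Proxmox live environment
    mount /dev/sdX3 /mnt                    # root filesystem (placeholder)
    mount /dev/sdX2 /mnt/boot/efi           # EFI partition, if booting UEFI
    for d in dev proc sys; do mount --bind /$d /mnt/$d; done
    chroot /mnt
    grub-install /dev/sdX                   # BIOS boot; UEFI setups differ slightly
    update-grub
    exit
    reboot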

Out of morbid curiosity, what are some of y'all's self-hosting horror stories?

top 23 comments
[–] Molecular0079 17 points 1 year ago* (last edited 1 year ago) (2 children)

Oh man, I empathize with you. Sometimes your self-hosted services go down at really bad times and you just don't have time to fix it in the moment. Then the fact that it's broken starts nagging at you throughout the rest of the day. Hope you get your stuff back up without too much fuss.

My current horror story is that my QNAP TS-453 Pro NAS, which was hosting my Jellyfin and Nextcloud, shut off on its own several weeks back and then refused to boot up. Turns out there's a known manufacturing defect in the Intel J1900 chip the NAS uses that causes clock drift, so every TS-451 and TS-453 NAS ever sold is basically a ticking time bomb, and it was my turn to get bit. QNAP never issued a recall even though they knew about the issue, and they're refusing to help the customers affected by it. Now I'm hoping I can use the resistor fix in that forum post to briefly revive my NAS so I can back up all the data onto a DIY NAS that I'm still ordering parts for. Picked up some good deals, but man, DIY is still expensive. Hopefully it's worth it, because I never want to use turnkey solutions again after this experience.

[–] ech0 16 points 1 year ago (1 children)

The fact that QNAP knew about this and didn't warn their customers would cause me to boycott them for life. This isn't just like a gaming PC. This is a NAS. Some people's entire lives are on there.

There are lots of reasons to avoid QNAP but that's rough.

So glad I went DIY with Ryzen and Unraid

[–] Molecular0079 3 points 1 year ago (1 children)

That's why I'm doing a DIY NAS now. I don't think I'll ever buy another QNAP after this experience. Is your DIY build mini-ITX, by the way? I've been having a hell of a time figuring out whether I can get PCIe bifurcation for my NVMe SSDs while using a 5600G CPU.

What are your thoughts on Unraid, btw? I've been looking into TrueNAS Scale.

[–] [email protected] 2 points 1 year ago (2 children)

Not OP, but Unraid is fantastic. I know ZFS expansion is coming at some point, but being able to slap in another drive, add it to the pool, and have parity "just work" is worth the money. Plus it makes Docker containers much easier to manage (not like Portainer is that hard, but it's nice to have configs already set to go).

[–] [email protected] 1 points 1 year ago

The ZFS update is live now

[–] Molecular0079 1 points 1 year ago

Nice, I'll definitely take a look at it. Both my Jellyfin and Nextcloud are set up using docker-compose, so having easier Docker management is definitely a plus for me.
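
For anyone curious what that kind of stack looks like, here's a minimal docker-compose sketch; the images are the official Jellyfin and Nextcloud ones, but the ports and host paths are just placeholders, not anyone's actual config:

    cat > docker-compose.yml <<'EOF'
    services:
      jellyfin:
        image: jellyfin/jellyfin:latest
        ports:
          - "8096:8096"                 # web UI
        volumes:
          - ./jellyfin/config:/config
          - /mnt/media:/media:ro        # placeholder media path
        restart: unless-stopped
      nextcloud:
        image: nextcloud:latest
        ports:
          - "8080:80"
        volumes:
          - ./nextcloud:/var/www/html
        restart: unless-stopped
    EOF
    docker compose up -d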

[–] [email protected] 5 points 1 year ago

A loooot of PCs were affected by that one. Synology was also hit, for example.

[–] [email protected] 13 points 1 year ago (1 children)

Used to have a Dell R710 in a rack in the garage. The rack doesn't have a door, but it was cheap and fits the space like a glove.

One day I was down there with the wife and kids sorting some stuff out at one end of the garage. Look over and see that the little one had pulled all the disks out of the server.

Managed to recover all my VMs that were running ext4 with a quick fsck. My main data storage VM that was using btrfs just locked me out with no possibility of mounting it even read only. From then on I will not touch btrfs with a barge pole.
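
For the record, the ext4 side was nothing fancier than a forced fsck; btrfs does have its own rescue tooling too, for anyone who ends up in the same spot (device names and the destination path below are placeholders):

    # ext4: force a check and auto-accept the repairs
    fsck.ext4 -fy /dev/sdX1

    # btrfs: things worth trying before giving up
    mount -o ro,usebackuproot /dev/sdY1 /mnt     # try an older tree root, read-only
    btrfs check --readonly /dev/sdY1             # report damage without touching anything
    btrfs restore /dev/sdY1 /path/to/rescue/dir  # copy out whatever is still readable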

[–] hutchmcnugget 7 points 1 year ago (1 children)

A moment to practice deep breathing for sure. How old was your kid?

[–] [email protected] 5 points 1 year ago

3, so not his fault at all

[–] [email protected] 8 points 1 year ago* (last edited 1 year ago)

Had a mini heart attack when my Cryptpad told me the username or password was wrong, even though I use a password manager. There are important documents in there and there is no "forgot password" function.

Turns out I had saved the wrong password for Cryptpad in Vaultwarden. Thanks to Vaultwarden's password history, I found the right one and got logged in again 😅

Good luck with your Proxmox problem.

[–] [email protected] 7 points 1 year ago

I've been carrying an OMV VM along since Proxmox 5. During one of the major version updates, usrmerge made a mess and forced me to reinstall the boot disk and re-hook everything up; not ideal, but it works. Updated again recently, and my disks started falling into read-only mode. Tried the usual: rebooting into single-user mode, fsck'ing the volume, remounting, etc., and "hey look, it came back online!" only for it to go back into read-only mode again. Since it was a virtual disk on a RAID6 array and nothing else was breaking, it really boggled my mind. It kept doing that despite still having a couple TB of free space available... or at least so I thought.

Turns out:

I had allocated 19TB of my 24TB of available space to the virtual disk. The qcow2 file is lazily allocated, so even though ls showed it as 19TB on disk, it only actually consumed as much space as the VM had written. Usage grew to 16TB, the qcow2 file tried to write more data, and 16TB is the ext4 file size limit on my system. Oops.
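
That kind of mismatch is easy to spot after the fact: qemu-img reports both the size the guest sees and the space the image file actually occupies on the host (16TB being ext4's maximum file size with the default 4KiB block size). The path below is just a placeholder:

    qemu-img info /var/lib/vz/images/100/vm-100-disk-0.qcow2
    #   virtual size: what the guest thinks it has (e.g. 19T)
    #   disk size:    what the qcow2 file really occupies on the host
    du -h /var/lib/vz/images/100/vm-100-disk-0.qcow2   # same number, from the host's side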

I ended up ordering 3 more drives, expanding to 8x8TB in RAID6 with roughly 48TB of workable space, copied the data out into separate volumes with none of them exceeding 15TB, then finally deleted the old "19TB" volume. Now I have over 25TB of room to grow, and a newfound appreciation for the 16TB limit :)

[–] [email protected] 6 points 1 year ago

My first power outage was a very bad experience since I was absolutely not prepared for it.

I have no UPS since the grid here is very stable (it's been the one and only outage in 5 years). The outage was short, but I had forgotten to enable the BIOS option on my server to power back on when AC is restored, so the server stayed shut down after electricity came back. Of course, I happened to be away for the whole week when this happened, with no way to access the server physically.

This is the event that made me learn about and start using a KVM that I can use remotely.

[–] [email protected] 6 points 1 year ago

Had my entire home setup (all my arr services, Nextcloud, Home Assistant, monitoring, etc.) running in my k3s cluster on like 5 VMs at home. Had Velero backups of it and everything.

Fast forward to: I have no idea what happened, but my masters just died. Nothing would sync anymore. Nobody in the k3s community had an idea either. So I lost my entire cluster, and the backups weren't much use since the cluster itself was dead.
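
That's the catch with Velero: the backups sit safely in object storage, but restoring assumes there's a healthy cluster with Velero reinstalled to restore into, so a dead control plane means rebuilding first anyway. The flow is roughly this (backup name is a placeholder):

    # taken back when the cluster was still healthy
    velero backup create nightly              # all namespaces by default

    # after standing up a new cluster and pointing Velero at the same bucket
    velero backup get                         # confirm the old backups are visible
    velero restore create --from-backup nightly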

Rebuilt with Talos. But man that sucked.

[–] [email protected] 5 points 1 year ago* (last edited 1 year ago) (1 children)

I'm using 3 Ubiquiti APs and running my own management instance on my server in a docker container.

I still haven't been able to figure out why (except maybe crappy Ubiquiti firmware), but if that container goes down or loses connectivity, the APs flood my router with traffic and my whole network goes down.

Even wired connections don't work since the router is locked up, and when my server comes back up it won't be able to reestablish connection because the router is still dead.

The only way I've found to fix it is to power cycle the APs which is obviously a huge pain.

Can't get any support from Ubiquiti on it since I'm not using one of their controllers even though it's obviously a firmware issue. Definitely do not recommend.
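
One band-aid for the "controller container went down" half of this (it does nothing about the firmware behaviour itself) is to let Docker restart the controller aggressively. A sketch using one of the community UniFi images; the image name, ports and host path here are assumptions, not this particular setup:

    docker run -d --name unifi-controller \
      --restart unless-stopped \
      -p 8080:8080 -p 8443:8443 -p 3478:3478/udp \
      -v /srv/unifi:/unifi \
      jacobalberty/unifi:latest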

[–] [email protected] 1 points 1 year ago (1 children)

That's an odd one. I've dealt with Unifi at a lot of scales and never heard of them acting up when the controller goes down. Do you perhaps use a guest network with an intercept page? That's the only thing I can imagine possibly causing any issue.

[–] [email protected] 1 points 1 year ago

No guest network. I have a really simple setup at home in general; the 3 Ubiquiti APs are the only ones broadcasting, firmware is up to date and everything.

[–] RonnyZittledong 4 points 1 year ago (1 children)

It's times like these that the love I have for my PiKVM is renewed ever stronger.

[–] [email protected] 1 points 1 year ago* (last edited 1 year ago)

I really gotta get on building one (or two or three) of those. My employer has free colocation and I’m tech support for my parents’ server. Sigh.

[–] [email protected] 3 points 1 year ago (1 children)

I have a beefy Unraid server for Docker containers and VMs. The idea was to have it replace all my computers. At home the VMs output to a monitor, so that's my desktop. Remotely, I connect my phone to my home VPN, hook it up to a lapdock, and use it as a thin client to connect to my VMs: NoMachine for Linux/work, Moonlight for Windows/gaming.

Well, it's been over a year of not being able to get my server past 15 days of uptime, and I have no fucking idea why. There are no traces of any error anywhere.
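
When a box resets and leaves nothing behind, the usual trick is to make sure logs survive the crash: Unraid can mirror its syslog to the flash drive or a remote syslog server, and on the VMs (any systemd distro) the previous boot's journal is worth a look. A rough sketch:

    journalctl --list-boots            # which boots the journal still has
    journalctl -b -1 -p err            # errors from the boot right before the reset
    # journald often keeps logs only in RAM; make them persistent:
    mkdir -p /var/log/journal && systemctl restart systemd-journald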

[–] Nagairius 2 points 1 year ago

That's too bad. I'm at the beginning of exactly that path; I have Unraid running my containers and just started building VMs for myself, but I've had much better luck with uptime.

[–] [email protected] 2 points 1 year ago (1 children)

Ugh, this happened to me during a minor release. For whatever reason I had to lug the PC into my office, connect keyboard and mouse, boot it up, and press a key. Then it would boot normally again. I get jealous of those of you with servers that have those remote KVM capabilities.

[–] [email protected] 1 points 1 year ago

My issue wasn't quite that easy, but it wasn't as headache-inducing as I had feared. Turns out, the last time I rejiggered my services I had failed to delete a now-unused fstab entry. One pound sign, a saved file, and a reboot later, everything was back up and running correctly. I lucked out! Now time to move my Nextcloud backups off that machine!
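
In other words, the whole fix was commenting out the stale mount so boot stops hanging on it; something like this, with a made-up share for illustration (adding nofail is the other common way to keep an optional mount from blocking boot):

    # /etc/fstab -- stale entry for a share that no longer exists
    #192.168.1.50:/export/oldshare  /mnt/oldshare  nfs  defaults  0  0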
