this post was submitted on 04 Nov 2024
117 points (97.6% liked)

Selfhosted


I currently have a home server which I use a lot and which holds a few important things, so I'd kindly ask for help making this setup safer.

I have an OpenWrt router on my home network with the firewall active. The only open ports are 443 (for all my services) and 853 (for DoT).

I am behind NAT, but I have IPv6, so I use a domain that points to my IPv6 address, which is how I access my server when I am not on the LAN and how I share stuff with friends.

On port 443 I have nginx acting as a reverse proxy for all my services, and on port 853 I have AdGuard Home. I use a Let's Encrypt certificate with this proxy.

Nginx, AdGuard Home, and almost all of my services run in containers. I use rootless Podman, with pasta as the network driver, and no container has "--net host". The containers can still reach host services because they have the "--map-guest-addr" option set, so I don't know whether this is any safer than "--net host".
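For reference, a rootless Podman invocation along those lines might look like the sketch below. The image, ports, and mapped address are illustrative, not from the post, and the exact pasta option syntax varies by Podman/passt version, so check `man podman-run` before copying:

```shell
# Hedged sketch: a rootless container using the pasta network driver.
# --map-guest-addr makes the given address inside the container point back
# at the host, without sharing the host network namespace like --net host.
podman run -d --name proxy \
  --network pasta:--map-guest-addr,169.254.1.2 \
  -p 8443:443 \
  docker.io/library/nginx:stable
```

Compared with "--net host", the container keeps its own network namespace, so a compromise inside it can only reach the host services you explicitly map.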

I have two means of accessing the server via SSH, either password+2FA or an SSH key, but the SSH port is LAN-only, so I believe this is fine.

My main concern is that I have a lot of personal data on this server: some things that I access only locally, such as family photos and docs (these are literally not accessible over the WAN, and I wouldn't want them to be), and some less critical things which are indeed accessible externally, such as my calendars and tasks (using CalDAV and Baikal), for example.

I run daily encrypted backups to OneDrive using restic+Backrest, so if the server were to die I believe this would be fine. But I wouldn't want anyone to actually get access to that data, although I believe an invader would more likely than not be interested in running cryptominers or something like that.

I am not concerned about DoS attacks, because I don't think I am a worthy target, and even if it were to happen I could wait a few hours to bring the server back up.

I have heard a lot about WireGuard, but I don't really understand how it adds security. I would basically just change which ports I open. Or am I missing something?

So I was hoping we could talk about ways to improve my server's security.

[–] satanmat 39 points 1 month ago (2 children)

The single best thing you can do security wise, is to NOT have any personal data on a web facing server.

Separate the data

Rereading it, it does look like you are doing things right; so just audit what is on the public side. Your calendar and tasks: cool.

Your photo and docs, do those need to be on there?

they are not accessible on the WAN

If they are on a server that is publicly accessible, please move them to a different location

Otherwise it sounds like you're doing well

[–] [email protected] 6 points 1 month ago (2 children)

That was a great answer, thank you so much!

Yes, I didn't even notice that the family photos and docs don't need to be on that same server. Initially I just put them there to act as a local file share. But you are absolutely right, moving them off the public server is the best thing I can do to protect them.

I will look into setting up a second server for the private stuff that is not publicly accessible

[–] Lyricism6055 8 points 1 month ago (1 children)

If this server is publicly accessible and gets pwned, they can use it as a jump box for your internal devices.

[–] [email protected] 3 points 1 month ago (1 children)

That's a good point, I hadn't thought about it before. I like the possibility of sharing these files on my intranet, but I suppose you are right. Maybe I could use OpenWrt to split off two networks, one for public stuff only, but my knowledge of networking is quite limited.

[–] just_another_person 1 points 1 month ago

Also, this answer.

[–] [email protected] 3 points 1 month ago (1 children)

Your photo and docs

At least in my case, it's really handy to share photos with other family members. But certainly you don't need all of them available on the same public service.

[–] [email protected] 3 points 1 month ago

That's a good point. Maybe I can get away with just temporary file sharing, so when someone wants something I can upload it to the server and send a link. I bet even Nextcloud could do that.

Still way less scary than having everything on the server all the time

[–] bokherif 25 points 1 month ago (1 children)

Start with the basics:

  • Harden SSH by allowing only public key authentication, and use strong keys to authenticate instead of passwords.
  • Set up fail2ban (lots of online resources, check the Linode guides) to block malicious IPs temporarily.
  • If the data you store is something only you should see, then it should not ever be connected to the internet, airgap wherever possible.
  • And finally, keep your shit updated.
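The first bullet might look like this in practice. This is a minimal sketch; the drop-in path follows common distro conventions, and you should check it against your distro's defaults before reloading:

```
# /etc/ssh/sshd_config.d/10-hardening.conf -- key-only SSH, no root login.
PasswordAuthentication no
KbdInteractiveAuthentication no
PermitRootLogin no
PubkeyAuthentication yes
# Validate and reload afterwards:
#   sudo sshd -t && sudo systemctl reload sshd
```

Make sure your key actually works before closing the password door behind you.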
[–] [email protected] 10 points 1 month ago

To be even more explicit on the last point, that means regularly updating OpenWrt and all your containers, not just the server's base OS.

[–] [email protected] 15 points 1 month ago (5 children)

Admittedly I'm paranoid, but I'd be looking to:

  1. Isolate your personal data from any web facing servers as much as possible. I break my own rule here with Immich, but I also...
  2. Use a Cloudflare tunnel instead of opening ports on your router directly. This gets your IP address out of public record.
  3. Use Cloudflare's WAF features to limit ingress to trusted countries at a minimum.
  4. If you can get your head around it, lock things down more with features like Cloudflare device authentication.
  5. Especially if you don't do step 4: Integrate Crowdsec into your Nginx setup to block probes, known bot IPs, and common attack vectors.

All of the above is free, but everything past step 2 can be difficult to set up. The peace of mind once it is, however, is worth it to me.
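Step 5 might be sketched roughly as below. Package names assume a Debian/Ubuntu system with the CrowdSec repository already added (see their install docs), and the collection name is the upstream nginx one:

```shell
# Hedged sketch: CrowdSec parsing nginx logs, with the firewall bouncer
# doing the actual blocking of flagged IPs.
sudo apt install crowdsec crowdsec-firewall-bouncer-iptables
sudo cscli collections install crowdsecurity/nginx   # nginx parsers + scenarios
sudo systemctl reload crowdsec
sudo cscli decisions list                            # inspect currently banned IPs
```

CrowdSec ships community blocklists too, so you benefit from IPs flagged by other users before they ever probe you.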

[–] [email protected] 4 points 1 month ago (1 children)

Thanks for your reply!

Suggestion 1 definitely makes a lot of sense and I will be doing exactly that asap. It's something I didn't think through before, but it would give me much more peace of mind.

Suggestions 2-4 sound very reasonable. I have indeed searched for a way to self-host a WAF but didn't find much info. My only concern with your points is... Cloudflare. From my understanding that would indeed add a lot of security to the whole setup, but they would then be able to see everything going through my network, is that right?

[–] [email protected] 2 points 1 month ago (1 children)

Yes and no? It's not quite as black and white as that though. Yes, they can technically decrypt anything that's been encrypted with a cert that they've issued. But they can't see through any additional encryption layers applied to that traffic (eg. encrypted password vault blobs) or see any traffic on your LAN that's not specifically passing through the tunnel to or from the outside.

Cloudflare is a massive CDN provider, trusted to do exactly this sort of thing with the private data of equally massive companies, and they're compliant with GDPR and other such regulations. Ultimately, the likelihood that they give the slightest jot about what passes through your tunnel as an individual user is minute, but whether you're comfortable with them handling your data is something only you can decide.

There's a decent question and answer about the same thing here: https://community.cloudflare.com/t/what-data-does-cloudflare-actually-see/28660

[–] [email protected] 2 points 1 month ago

Yes, absolutely. For work, most of my clients use Cloudflare's different services, so I understand they have credibility.

For me though, part of the reason I self host is to get away from some big tech companies' grasp. But I understand I am a bit extreme at times.

So thanks for opening my mind and pointing me to that very interesting discussion, as well as for sharing your setup, it sure seems to be very sound security wise.

[–] [email protected] 1 points 1 month ago

Sounds exactly like my setup for the last 5 years, minus nginx (don't need it with Cloudflared, since each service is its own Proxmox container and uses its own exclusive tunnel).

[–] just_another_person 11 points 1 month ago (1 children)

WireGuard is a VPN, so that's not going to help you much here unless you're forwarding all your traffic through a remote server, in which case anyone who gets in there will still be able to get to your local machines. It's another hop in the chain, but that's about it.

If you want to be more on guard about reacting to attacks, or just bad traffic, you probably want something like CrowdSec. You'll at least be able to detect and ban IPs probing your services. If that's too much work, leverage OpenWrt reporting and some scripting to ban bad actors that probe your firewall and open ports. That's a good first step.

If you're concerned about the containers, consider using something more secure than dockerd. Rootless Podman with a dedicated service user is a good start. Then maybe look at something more complex: Kata, gVisor, LXC, etc. The goal is sandboxing the containers more to prevent jailbreaks.

[–] [email protected] 4 points 1 month ago (1 children)

Thanks for the amazing reply, and especially for the explanation regarding WireGuard.

I didn't know about CrowdSec and Kata Containers, both amazing projects. I will definitely look into them and try to set them up.

Just one quick follow-up question: when you mention a dedicated service user, do you mean it's best to have a separate user for each service, such as one for nginx, one for AdGuard Home, and so on? Currently all of them run under the same user, and I hadn't thought about this possibility before.

[–] just_another_person 3 points 1 month ago (1 children)

Yeah, so if you're running rootless containers, they aren't run by root, and for added security you don't want them run by your normal user either, because if they get broken into, they'd have access to whatever your user has access to. Just create another user that only runs containers and doesn't have access to your things or to root.
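A sketch of that setup, where "svc-containers" is an illustrative name and the image is just an example:

```shell
# Dedicated unprivileged user whose only job is running containers.
sudo useradd --create-home --shell /usr/sbin/nologin svc-containers
# Let its user services (and rootless containers) keep running with no login session:
sudo loginctl enable-linger svc-containers
# Start containers in a proper session for that user; plain `sudo -u` often
# lacks the XDG_RUNTIME_DIR that rootless Podman needs:
sudo machinectl shell svc-containers@ /usr/bin/podman run -d --name adguard \
  docker.io/adguard/adguardhome
```

You can go further and use one such user per service, so a breakout from one container can't touch the others' storage.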

[–] [email protected] 1 points 1 month ago

That makes a lot of sense. That's also very easy to set up, so I will do it tonight.

Thanks again for your amazing input!

[–] [email protected] 8 points 1 month ago (3 children)

Something you might want to look into is using mTLS, or client certificate authentication, on any external facing services that aren't intended for anybody but yourself or close friends/family. Basically, it means nobody can even connect to your server without having a certificate that was pre-generated by you. On the server end, you just create the certificate, and on the client end, you install it to the device and select it when asked.

The viability of this depends on what applications you use, as support for it must be implemented by its developers. For anything only accessed via web browser, it's perfect. All web browsers (except Firefox on mobile...) can handle mTLS certs. Lots of Android apps also support it. I use it for Nextcloud on Android (so Files, Tasks, Notes, Photos, RSS, and DAVx5 apps all work) and support works across the board there. It also works for Home Assistant and Gotify apps. It looks like Immich does indeed support it too. In my configuration, I only require it on external connections by having 443 on the router be forwarded to 444 on the server, so I can apply different settings easily without having to do any filtering.

As far as security and privacy goes, mTLS is virtually impenetrable so long as you protect the certificate and configure the proxy correctly, and similar in concept to using Wireguard. Nearly everything I publicly expose is protected via mTLS, with very rare exceptions like Navidrome due to lack of support in subsonic clients, and a couple other things that I actually want to be universally reachable.
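The certificate side of this is just a private CA plus one cert per trusted client. A minimal sketch with openssl (file names and subject strings are illustrative):

```shell
# 1. Create a private CA (self-signed, 10 years).
openssl req -x509 -newkey rsa:4096 -nodes -days 3650 \
  -keyout ca.key -out ca.crt -subj "/CN=my-private-ca"
# 2. Create a key + signing request for one client device.
openssl req -newkey rsa:4096 -nodes \
  -keyout client.key -out client.csr -subj "/CN=my-phone"
# 3. Sign the client cert with the CA.
openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key \
  -CAcreateserial -days 825 -out client.crt
# 4. Bundle for importing into a phone/browser (prompts for an export password):
#    openssl pkcs12 -export -inkey client.key -in client.crt -out client.p12
```

On the nginx side, `ssl_client_certificate /path/to/ca.crt;` plus `ssl_verify_client on;` in the server block that listens on the externally forwarded port enforces it; keep ca.key offline.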

[–] [email protected] 4 points 1 month ago

mTLS is great and it's a shame Firefox mobile still doesn't support it.

[–] [email protected] 2 points 1 month ago

Wow, that's very, very nice. I didn't know this even existed.

I suppose if it had widespread support it would be the perfect solution.

Firefox mobile not supporting it might be a dealbreaker though, since it is the browser I use and the one I persuaded all my friends and family to switch to...

But this is an incredibly interesting technology, and I will surely look into implementing it at least partially if that works.

Thanks a lot for sharing!

[–] [email protected] 1 points 1 month ago

Sounds like the clearnet equivalent to i2p encrypted lease sets

[–] root 8 points 1 month ago (1 children)

Is keeping everything inside of a local “walled garden”, then exposing the minimum amount of services needed to a WireGuard VPN not sufficient?

There would be no attack surface from the WAN other than the port opened for WireGuard
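For concreteness, the server side of that single WireGuard port could look like the fragment below. Addresses, port, and keys are placeholders; generate real keys with `wg genkey | tee private.key | wg pubkey`:

```
# /etc/wireguard/wg0.conf (server side) -- bring up with `wg-quick up wg0`.
[Interface]
Address = 10.8.0.1/24
ListenPort = 51820
PrivateKey = <server-private-key>

# One [Peer] block per device allowed in.
[Peer]
PublicKey = <client-public-key>
AllowedIPs = 10.8.0.2/32
```

Since WireGuard silently drops packets that don't authenticate, port scanners can't even tell the port is open.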

[–] linearchaos 1 points 1 month ago

Minimum open services is indeed best practice, but be careful about claiming the attack surface is limited to open inbound ports.

Even enterprise gear gets hit every now and then with a vulnerability that's able to bypass closed-port blocking from the outside. Cisco had some nasty ones where you could DDoS a firewall to the point the rules engine would let things through. It's rare, but things like that do happen.

You can also have vulnerabilities with clients/services inside your network. Somebody gets someone in your family to click on something, or someone slips a mickey into one of your container updates, and all of a sudden you have a RAT on the inside. Hell, even baby monitors are a liability these days.

I wish all the home hardware was better at zero trust. Keeping crap in isolation networks and setting up firewalls between your garden and your clients can either be prudent or overkill depending on your situation. Personally I think it's best for stuff that touches the web to only be allowed a minimum amount of network access to internal devices. Keep that Plex server isolated from your document store if you can.

[–] slug 6 points 1 month ago (3 children)

does anyone have an actual horror story about anything happening via an exposed web service? let's set aside SSH

[–] [email protected] 12 points 1 month ago* (last edited 1 month ago)

Counter question

How would you know something went wrong? Do you monitor all the logs? Do you have alerting?

What happens if one service has a serious vulnerability and is compromised? Would an adversary be able to do lateral movement? For that matter are you scanning/checking for vulnerabilities? Do you monitor security tracker?

All of these are things to consider

[–] linearchaos 5 points 1 month ago (2 children)

Yeah, a company got toasted because one of their admins was running Plex and had Tautulli installed and open to the outside, figuring it was read-only and safe.

A zero-day bug in Tautulli exposed his Plex token. They then used another vulnerability in Plex to get remote code execution. He was self-hosting a GitHub copy of all the company's code.

[–] mint_tamas 2 points 1 month ago

This guy was running a three-year-old version of Plex with a known (and later fixed) RCE, and was working for LastPass.

[–] [email protected] 1 points 1 month ago

Last time they’ll ever do that! Pass the buck of hosting web-facing Plex servers onto somebody else.

[–] [email protected] 3 points 1 month ago (1 children)

I'd like to know as well. I definitely don't want to be the first person in that story though.

I've heard of someone who exposed the Docker management port on the internet and woke up to malware running on their server. But that's of course not the same as web services.

[–] [email protected] 3 points 1 month ago

Once a server is compromised there are lots of uses, everything from DDoS attacks to obscuring attacks against other targets. An attacker doesn't want to be discovered, so they will likely hide as much as they can.

[–] [email protected] 6 points 1 month ago (2 children)

Just do what I do and consistently forget to set up DDNS and also be bad at noticing when your ISP juggles your IP address.

[–] [email protected] 6 points 1 month ago (2 children)

Been there, done that lol, my ISP doesn't change my IP half as much as I should like, and I renew my certs half as often as they deserve.

Seriously though, I had certs expire twice until I finally decided to get this setup properly.

[–] [email protected] 2 points 1 month ago

It makes sense that Bilbo would run a homeserver.

[–] [email protected] 2 points 1 month ago

Get a VPS and route traffic into an isolated network

[–] filister 5 points 1 month ago

Why don't you use something like Tailscale? Other than that using non standard ports greatly reduces the risks of you getting compromised. The majority of attacks come from port scanners scanning for default ports and trying to use known vulnerabilities.

[–] [email protected] 5 points 1 month ago

It's great that you self-host, but security, especially of services directly exposed to the internet, is very difficult. Use some kind of direct VPN or services like Tailscale, etc.

[–] Lyricism6055 2 points 1 month ago (1 children)

Just close 443 and use a VPN, with ACME DNS challenges for your certs. That'll help make it even more secure. Nothing is foolproof though, and a VPN is a good first step
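With a DNS-01 challenge, no inbound port is needed for renewal at all, so the VPN can stay the only way in. A sketch using certbot's Cloudflare plugin (plugin choice, domain, and credentials path are assumptions; other DNS providers have equivalent plugins):

```shell
# Prove domain ownership via a DNS TXT record instead of an HTTP endpoint.
certbot certonly --dns-cloudflare \
  --dns-cloudflare-credentials /root/.secrets/cloudflare.ini \
  -d home.example.com -d '*.home.example.com'
```

This also lets you issue wildcard certs, which plain HTTP-01 challenges can't do.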

[–] [email protected] 1 points 1 month ago (2 children)

Thanks for replying!

I do use DNS challenges for renewing my certs. But I use port 443 for application data, not for certs.

Is a VPN always safer than a reverse proxy? Do you use WireGuard, or do you have any other options worth looking into?

[–] [email protected] 4 points 1 month ago* (last edited 1 month ago) (1 children)

Is a VPN always safer than a reverse proxy?

Depends on what you trust, I guess.

A reverse proxy on a standard port is a bigger target for automated scripts than one on a non-standard port. A VPN goes through the VPN's authentication, whereas a reverse proxy relies on whatever each app's authentication is. So whether it's secure enough depends on the VPN configuration, what you're hosting, etc.

I'm behind CGNAT, so I have limitations you don't, but here's my setup:

  • VPS at the edge for my public services - basically the same as a reverse proxy because the application is directly exposed
  • self-hosted VPN at the VPS to facilitate the reverse proxy - I could shut down public access any time and just log in w/ the VPN
  • static DNS entries on my router so I can use my domains inside my network (TLS also works properly)

I like this approach because I can eat my cake (nice domain names instead of IPs and ports) and have it too (fast connection inside LAN, can disable reverse proxy if I want better security). You could get the same w/o the VPS, and if you require WireGuard VPN access outside the LAN, you get better security than a public-facing service.

[–] [email protected] 2 points 1 month ago

I didn't mention it in my original post, but I do have a virtual machine on GCP, which I use to run MongoDB. I didn't mention it because I am not too concerned with it; it mostly follows the same practices, the exception being that SSH is open, and it has no private data in it.

But I suppose I could do something similar to what you mentioned. The idea of having my cake and eating it too is very nice. And if something went wrong I could turn off public access and still have the VPN working.

I will consider implementing something like that as well. Thanks a lot for sharing your thoughts!

[–] Blue_Morpho 1 points 1 month ago (3 children)

You might want to consider that backups only protect very old data from ransomware.

Ransomware works by getting on a machine and sitting for several months before activating. During that time, your data is encrypted, but you don't know, because when you open a file your computer decrypts it and shows you what you expect to see. So your backups are working, but they are saving files that will be lost once the ransomware activates.

The only solution is to frequently manually verify the backup from a known safe computer. Years ago I looked for something to automate this but didn't find it. (Something like a raspberry pi with no Internet that can only see the PC it's testing, compares a known file, then touches the file so it gets backed up again.)

[–] [email protected] 2 points 1 month ago

Thanks a lot for your input. I honestly had not considered this possibility.

Others in the post recommended removing those important files from the public-facing server so that in the case of an attack they wouldn't be exposed. So I will try to follow this recommendation asap.

But your answer still applies to everything else I will be hosting, so I am concerned. I had no idea ransomware was this smart. I will research more about this topic, but basically, if I access a file from two different servers and it's fine, does that mean the file is free from infection?

[–] [email protected] 1 points 1 month ago (1 children)

During that time, your data is encrypted but you don't know because when you open a file, your computer decrypts it and shows you what you expect to see.

First time I hear of that. Are you sure? It would be really risky, since you would basically need to hijack the complete filesystem communication to do that. Also, for that to work, you would need the private and public keys of the encryption on the system at run time. Really risky, and unlikely that this is the case imho.

[–] [email protected] 1 points 1 month ago (1 children)

I don't know much about ransomware, but that's what got me concerned. I always assumed that if I were to be infected, restic would just create a new snapshot of the files and I'd be able to restore after nuking the server.

[–] [email protected] 2 points 1 month ago* (last edited 1 month ago) (1 children)

I doubt that this is the case, whether it is encrypted or not. The complexity and risks involved with decrypting it on the fly make it really unrealistic, and it's unheard of by me (I have not heard of everything, but still).

Also, the ransomware would need to differentiate between the user and the backup program. And when you do incremental backups (like restic) with some monitoring, you would notice the huge size of the new data that gets pushed to your repo.

Edit: The important thing about your backup is to protect it against overwrites and deletes, and to have separate admin credentials that are not managed by the AD or LDAP of the server that gets backed up.

[–] [email protected] 1 points 1 month ago (2 children)

I see, I appreciate you sharing your knowledge on the matter.

Yeah, I thought about the spike in size, which I would definitely notice, because the amount of data is pretty stable and I have limited cloud storage.

Regarding your last point, I currently have everything under one user account: the data I am backing up, the applications, and restic itself all run on the same user account. Would it be a good idea to run restic as root? Or as a different service account?

[–] [email protected] 2 points 1 month ago (1 children)

You want your backup functional even if the system is compromised, so yes, another system is required for that, or push it to the cloud. It's important that you do not allow deleting or editing of the backup even if the credentials used for backing up are compromised. Basically, append-only storage.

Most cloud storage, like Amazon S3 (or other S3-compatible providers like Backblaze), offers such a setting.
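Since the thread already uses restic, one self-hosted way to get that append-only behavior is restic's own rest-server. A sketch (host, paths, and repo name are illustrative):

```shell
# On the backup host: serve repos in append-only mode, so the backup
# credentials can add snapshots but never delete or rewrite old ones.
rest-server --path /srv/restic --append-only

# On the server being backed up:
restic -r rest:https://backup.example.com:8000/home backup /srv/data
```

Pruning old snapshots then happens only from a trusted machine with separate credentials, which is exactly the split described above.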

[–] [email protected] 1 points 1 month ago

Oh, now I get what you mean, thanks for the explanation.

Yeah, it makes sense. I had originally gone with OneDrive for the much cheaper price, but I will take a look at S3-compatible storage and consider migrating in the future.
