this post was submitted on 14 Jun 2023
71 points (100.0% liked)

Selfhosted


I have a few selfhosted services, but I'm slowly adding more. Currently they're all on subdomains like linkding.sekoia.example etc. However, that means more DNS records to fetch and more setup for each new service. Is there some reason I shouldn't put all my services under a single subdomain with paths (using a reverse proxy), like selfhosted.sekoia.example/linkding?
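
Concretely, what I have in mind is something like this in the reverse proxy. This is just a sketch assuming nginx; the ports, backend addresses and cert paths are made up:

server {
    listen 443 ssl;
    server_name selfhosted.sekoia.example;
    # placeholder cert paths
    ssl_certificate     /etc/ssl/selfhosted.sekoia.example/fullchain.pem;
    ssl_certificate_key /etc/ssl/selfhosted.sekoia.example/privkey.pem;

    # linkding under /linkding/
    location /linkding/ {
        # trailing slash on proxy_pass strips the /linkding/ prefix
        proxy_pass http://127.0.0.1:9090/;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

    # each additional service gets another location block
    location /otherapp/ {
        proxy_pass http://127.0.0.1:8080/;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}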

[–] [email protected] 0 points 1 year ago* (last edited 1 year ago) (4 children)

Everyone is saying subdomains, so I'll try to give a reason for paths. Using subdomains makes local access a bit harder. With paths you can use httpS://192etc/example, but if you use subdomains, how do you connect internally with https? https://example.192etc won't work, as you can't mix an IP address with domain resolution. You'll have to use http://192etc:port. So no httpS for internal access. I got around this by hosting AdGuard as a local DNS and adding an override so that my domain resolves to the local IP (sketch below). But this won't work if you're connected to a VPN, as it'll capture your DNS requests; if you use paths, you could exclude the IP from the VPN.
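
If anyone wants to copy that: in AdGuard Home the override is a "DNS rewrite" (Filters → DNS rewrites in the web UI). In AdGuardHome.yaml it ends up looking roughly like the snippet below; the exact section the rewrites live under has moved between versions, and the domain/IP are made-up examples, so treat this as a sketch:

filtering:
  rewrites:
    # point the whole domain (and any subdomain) at the reverse proxy's LAN IP
    - domain: "*.sekoia.example"
      answer: 192.168.1.100
    - domain: sekoia.example
      answer: 192.168.1.100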

Edit: not sure what you mean by "more setup"; you should be using a reverse proxy either way.

[–] [email protected] 3 points 1 year ago (1 children)

If your router has NAT reflection, then the problem you describe is nonexistent. I use the same domain/protocol both inside and outside my network.

[–] [email protected] 1 points 1 year ago (2 children)

Does NAT reflection still work if your PC is connected to a VPN?

[–] [email protected] 1 points 1 year ago

I don't know for sure... but my instinct is that NAT reflection is moot in that case, because your connection is going out past the edge router and doing the DNS query there, which will then direct you back to your public IP. I'm sure there's somebody around that knows the answer for certain!

[–] BombOmOm -2 points 1 year ago

Depends:

  • If you have your VPN set up so it sends all traffic to the internet, then your request will pass through the VPN server and then come back to your location from the internet.

  • If you have your VPN set up to exempt LAN traffic, then if you specify a local IP your traffic will stay on your LAN; however, if you specify the domain, the VPN will almost certainly continue to treat it as internet-bound traffic and route it through its servers. This is possibly avoidable if you also put your own public IP on the exempt list, if that is a feature (see the sketch below).
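
As a concrete example, assuming a WireGuard-based VPN (that's an assumption; many commercial clients expose the same thing as a "split tunneling" / "allow LAN access" toggle), the difference lives in the client's AllowedIPs. Endpoint and key below are placeholders:

[Peer]
PublicKey = <provider key>
Endpoint = vpn.example.net:51820
# Full tunnel: ALL traffic, including anything that resolves to your own
# public IP, goes to the VPN provider.
AllowedIPs = 0.0.0.0/0, ::/0
# "Exempt LAN traffic" amounts to replacing 0.0.0.0/0 with a list of ranges
# that leaves out your LAN (everything except e.g. 192.168.1.0/24), so only
# literal LAN IPs bypass the tunnel; a domain name that resolves to your
# public IP still goes through the VPN.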

[–] [email protected] 2 points 1 year ago (1 children)

You’ll have to use http://192etc:port. So no httpS for internal access

This is not really correct. When you use http, it implies that you want to connect to port 80 without encryption, while using https implies that you want an SSL/TLS connection to port 443.

You can still use https on a different port; Proxmox, for example, exposes itself on https://proxmox-ip:8006 by default.

It's still better to use (sub)domains, as then you don't have to remember strings of numbers.

[–] [email protected] 0 points 1 year ago* (last edited 1 year ago) (1 children)

I understand, though if the services you're hosting are all plain http by themselves and only https thanks to a reverse proxy, then if you attempt to connect to the reverse proxy by IP it'll only serve the root service. I'm not aware of a method of getting to the subdomain-based services through the reverse proxy if you try to reach it locally via IP.

[–] macgregor 4 points 1 year ago (1 children)

Generally a hostname-based reverse proxy routes requests based on the Host header, which some tools let you set. For example, curl:

curl -H 'Host: my.local.service.com' http://192.168.1.100

Here, 192.168.1.100 is the LAN IP address of your reverse proxy, and my.local.service.com is the service behind the proxy you are trying to reach. This can be helpful for tracking down network routing problems.

If TLS (https) is in the mix and you care about it being fully secure even locally, it can get a little tricky depending on whether the route is pass-through (the application handles certs) or terminate-and-reencrypt (the reverse proxy handles certs). Most commonly you'll run into problems with the client not trusting the server because the "hostname" (the LAN IP address when accessing directly) doesn't match what the certificate says (the DNS name). There are lots of ways around that as well, for example adding the service's LAN IP address to the cert's subject alternative names (SAN), which feels wrong but works.
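
If you want to see exactly what the client is being handed in that situation, something like this shows which certificate the proxy serves for a given SNI name and what its SANs contain (LAN IP made up, hostname reused from the example above):

# connect by LAN IP but present the real hostname via SNI, then print
# the cert's subject and subjectAltName entries
openssl s_client -connect 192.168.1.100:443 \
        -servername my.local.service.com </dev/null 2>/dev/null \
  | openssl x509 -noout -subject -ext subjectAltName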

Personally I just run a little DNS server so I can resolve the various services to their LAN IP addresses and TLS still works properly. You can use your /etc/hosts file for a quick and dirty "DNS server" for your dev machine.
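
The quick and dirty version is literally just lines like these in /etc/hosts (IP made up, names taken from the OP's examples; note hosts files don't do wildcards, so every subdomain needs its own line):

192.168.1.100  linkding.sekoia.example
192.168.1.100  selfhosted.sekoia.example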

[–] Goldenderp 1 points 1 year ago

TLS SNI will take care of that issue just fine; most reverse proxies will handle it for you, especially if you use certbot (i.e. usually Let's Encrypt).
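
You can even test that without touching DNS: curl's --resolve pins the name to the proxy's LAN IP for one request while still sending the real hostname for SNI and certificate validation (hostname from the OP's example, IP made up):

# pin the name to the LAN IP; SNI and cert validation use the real hostname
curl --resolve linkding.sekoia.example:443:192.168.1.100 \
     https://linkding.sekoia.example/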

[–] [email protected] 2 points 1 year ago (1 children)

Edit: not sure what you mean by “more setup”; you should be using a reverse proxy either way.

I'm using cloudflare tunnels (because I don't have a static IP and I'm behind NAT, so I would need to port forward and stuff, which is annoying). For me specifically, that means I have to do a bit of admin on the cloudflare dashboard for every subdomain, whereas with paths I can just configure the reverse proxy.
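
For reference, the tunnel side can be set up so the reverse proxy does all the routing: cloudflared's config file takes ingress rules, so a single hostname can hand everything to the local proxy and the dashboard only needs touching once. Rough sketch (tunnel ID, paths and port are placeholders):

# ~/.cloudflared/config.yml
tunnel: <tunnel-uuid>
credentials-file: /home/user/.cloudflared/<tunnel-uuid>.json
ingress:
  # send everything for this hostname to the local reverse proxy,
  # which then routes by path
  - hostname: selfhosted.sekoia.example
    service: http://localhost:8080
  # required catch-all rule
  - service: http_status:404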

[–] [email protected] 1 points 1 year ago

because I don’t have a static IP and I’m behind NAT, so I would need to port forward and stuff, which is annoying

This week I discovered that Porkbun DNS has a nice little API that makes it easy to update your DNS programmatically. I set up Quentin's DDNS Updater https://github.com/qdm12/ddns-updater

Setup is a little fiddly, as you have to write some JSON by hand, but once you've done that, it's done and done. (Potential upside: you could have another tool manage or integrate with it by just emitting that JSON file.) This effectively gets me dynamic DNS updates.
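
In case it saves someone the digging, the "JSON by hand" part is a config.json in the container's data directory. Mine looks roughly like the sketch below; the per-provider field names (here for Porkbun) are written from memory, so double-check them against the project's README before copying, and the domain is just the OP's example:

# sketch only; verify field names against https://github.com/qdm12/ddns-updater
mkdir -p ~/ddns-updater/data
cat > ~/ddns-updater/data/config.json <<'EOF'
{
  "settings": [
    {
      "provider": "porkbun",
      "domain": "sekoia.example",
      "host": "@",
      "api_key": "pk1_xxx",
      "secret_api_key": "sk1_xxx",
      "ip_version": "ipv4"
    }
  ]
}
EOF
docker run -d --name ddns-updater \
  -v ~/ddns-updater/data:/updater/data \
  -p 8000:8000 \
  qmcgaw/ddns-updater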

[–] [email protected] 1 points 1 year ago

I got around this by hosting AdGuard as a local DNS and adding an override so that my domain resolves to the local IP. But this won't work if you're connected to a VPN, as it'll capture your DNS requests

Why didn't you use your hosts file?