this post was submitted on 13 Dec 2024
26 points (93.3% liked)

Linux


cross-posted from: https://lemmy.world/post/23071801

Considering a lot of people here self-host both private stuff, like a NAS, and public stuff, like websites and whatnot, how do you approach segmentation in the context of virtual machines versus dedicated machines?

This is generally how I see the community approach this:

Scenario 1: Fully Isolated Machine for Public Stuff

Two servers: one for the internal stuff (NAS) and another for the public stuff (websites, email, etc.), totally isolated from your LAN. Preferably the public machine has its own public IP, distinct from your LAN's, and its traffic doesn't go through your main router. For example, a switch between the ISP ONT and your router also has a cable running to the isolated machine. This way the machine is completely isolated from your network and not dependent on it.

Scenario 2: Single server with VM exposed

A single server hosting two VMs, one to host a NAS along with a few internal services running in containers, and another to host publicly exposed websites. Each website could have its own container inside the VM for added isolation, with a reverse proxy container managing traffic.

For networking, I typically see two main options:

  • Option A: Completely isolate the "public-facing" VM from the internal network by using a dedicated NIC in passthrough mode for the VM;
  • Option B: Use a switch to deliver two VLANs to the host—one for the internal network and one for public internet access. In this scenario, the host would have two VLAN-tagged interfaces (e.g., eth0.X) and bridge one of them with the "public" VM’s network interface. Here’s a diagram for reference: https://ibb.co/PTkQVBF

In the second option, a firewall inside the "public" VM would drop all inbound traffic except HTTP/HTTPS. The host would simply act as a bridge and would not participate in the network in any way.
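Option B could be sketched roughly like this on a Linux host. This is a minimal sketch, assuming VLAN 10 is the public VLAN on eth0; interface names, VLAN IDs, and the bridge name are placeholders, not anything from the diagram:

```shell
# Hypothetical sketch of Option B: VLAN-tagged interface bridged to the VM.
# Names and IDs are assumptions; adapt to your own setup.

# Create a VLAN sub-interface for the "public" VLAN (here VLAN 10) on eth0
ip link add link eth0 name eth0.10 type vlan id 10

# Create a bridge and enslave the VLAN interface to it;
# the VM's tap/vnet interface gets attached to this same bridge
ip link add br-public type bridge
ip link set eth0.10 master br-public
ip link set eth0.10 up
ip link set br-public up
# Deliberately assign no IP address to br-public or eth0.10:
# the host only forwards frames and does not participate in the public network

# Inside the "public" VM (not on the host), an nftables policy that drops
# all inbound except HTTP/HTTPS and established traffic might look like:
nft add table inet filter
nft add chain inet filter input '{ type filter hook input priority 0; policy drop; }'
nft add rule inet filter input ct state established,related accept
nft add rule inet filter input iif lo accept
nft add rule inet filter input tcp dport '{ 80, 443 }' accept
```
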

Scenario 3: Exposed VM on a Windows/Linux Desktop Host

Windows/Linux desktop machine that runs KVM/VirtualBox/VMware to host a VM that is directly exposed to the internet with its own public IP assigned by the ISP. In this setup, a dedicated NIC would be passed through to the VM for isolation.
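On a Linux/KVM host, the NIC passthrough part usually means binding the NIC to vfio-pci and handing the whole device to the guest. A rough sketch with libvirt, assuming the NIC sits at PCI address 0000:03:00.0 and the guest is named "public-vm" (both are placeholders):

```shell
# Hypothetical PCI passthrough of a dedicated NIC to a KVM guest.
# The PCI address and the domain name "public-vm" are assumptions.

# Detach the NIC from its host driver so vfio-pci can claim it
virsh nodedev-detach pci_0000_03_00_0

# Describe the device and attach it to the guest persistently
cat > nic-hostdev.xml <<'EOF'
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
  </source>
</hostdev>
EOF
virsh attach-device public-vm nic-hostdev.xml --persistent
```

With this, the host never sees the public traffic at all; the guest's driver talks to the NIC directly.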

The host OS would be used as a personal desktop and contain sensitive information.

Scenario 4: Dual-Boot Between Desktop and Server

A dual-boot setup where the user switches between an OS for daily usage and another for hosting when needed (with a public IP assigned by the ISP). The machine would have a single Ethernet interface, and the user would manually switch the network cable between: a) the router (NAT/internal network) when running the "personal" OS, and b) a direct connection to the switch (and ISP) when running the "public/hosting" OS.

For increased security, each OS would be installed on a separate NVMe drive, and the "personal" one would use TPM-backed full disk encryption to protect sensitive data in case the "public/hosting" system were compromised.

The theory here is that, if properly done, the TPM doesn't release the keys to decrypt the "personal" OS disk while the user is booted into the "public/hosting" OS.
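On a systemd-based distro, that TPM binding could be sketched like this. The device path is a placeholder, and sealing against PCR 7 alone (Secure Boot state) may not be strict enough for this threat model; which PCRs to bind is a judgment call:

```shell
# Hypothetical: enroll a LUKS2 volume against the TPM, sealed to PCR 7.
# Booting the other OS changes the measured state, so the TPM should
# refuse to unseal the key there. /dev/nvme0n1p2 is a placeholder.
systemd-cryptenroll --tpm2-device=auto --tpm2-pcrs=7 /dev/nvme0n1p2
```
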

People also seem to combine these scenarios with Cloudflare tunnels or reverse proxies on a cheap VPS.


What's your approach / paranoia level :D

Do you think using separate physical machines is really the only sensible way to go? How likely do you think VM escape attacks and VLAN hopping or other networking-based attacks are?

Let's discuss how secure these setups are, what pitfalls one should watch out for on each one, and what considerations need to be addressed.

top 15 comments
sorted by: hot top controversial new old
[–] fhein 16 points 4 days ago

My paranoia level: Even though I'm pretty good with computers in general, I would not trust myself to set up a safe public facing service, which is the reason that I don't have any of those on my home server. If I needed something like that I wouldn't self host it.

[–] [email protected] 1 points 2 days ago (1 children)

Don't expose anything publicly; instead, set up WireGuard for every VM. Connect your phone, PC, etc. to the VPN so you have full access without publicly exposing anything.
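A minimal sketch of that approach on one VM might look like this; addresses, port, and key file names are all placeholders, and the client keypair is assumed to exist already:

```shell
# Hypothetical WireGuard setup on a VM (addresses/port/keys are made up).
# Generate the VM's keypair
wg genkey | tee server.key | wg pubkey > server.pub

# Bring up the tunnel interface
ip link add wg0 type wireguard
ip addr add 10.8.0.1/24 dev wg0
wg set wg0 listen-port 51820 private-key server.key

# Allow one client (your phone/PC); client.pub is its public key
wg set wg0 peer "$(cat client.pub)" allowed-ips 10.8.0.2/32
ip link set wg0 up
```

Only UDP 51820 needs to be reachable, and WireGuard silently drops packets that don't authenticate, so the VM doesn't even look "open" to a scanner.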

You may have touched on this but your post was way too long so I only read the headings

[–] TCB13 1 points 2 days ago (1 children)

If you did you would know I wasn't looking for advice. You also knew that exposing stuff publicly was a prerequisite.

[–] [email protected] 1 points 2 days ago (1 children)

Fair, but you were asking how people approach security for self hosted solutions and I guess I'm challenging why anything needs to be public. Self hosting is typically for your own services which can usually be hidden behind a VPN.

The exception I guess is email, but I never understand why people attempt self-hosted mail servers.

[–] TCB13 2 points 2 days ago (1 children)

Why only email? Why not also a website? :)

"self-hosting both private stuff, like a NAS and also some other is public like websites and whatnot"

Some people do it, and to be fair a website is way simpler and less prone to issues than mail.

[–] [email protected] 1 points 2 days ago (1 children)

Do websites come under the remit of self hosting?

Personally I host static websites with GitHub, CloudFront, Netlify, onrender, etc. Trivial to set up, more reliable, and better CDN distribution. Anything dynamic lives in a data center rather than a self-hosted setup.

[–] TCB13 2 points 2 days ago

You may not want to depend on those cloud services, and if you need something that isn't static, they don't cut it.

[–] subtext 13 points 4 days ago* (last edited 4 days ago) (2 children)

Scenario 5: put it all in one big long docker-compose.yml and cross your fingers that docker isolation does its job.

E: definitely not what I do, no siree

[–] [email protected] 2 points 3 days ago

Nope don't do that either.

[–] [email protected] 1 points 4 days ago

😱😱😱

[–] [email protected] 2 points 4 days ago (1 children)

Kinda Scenario 1 is the standard way: firewall at the perimeter with separately isolated networks for DMZ, LAN & Wifi

The Firewall provides a proxy for anything in the DMZ, so all the filtering is done there and not on the DMZ device(s).

GeoIP on the firewall, so anything that's opened to the interweb - inc. inbound VPNs can only come from selected regions.

Fail2Ban on DMZ device(s), to prevent repeated login attacks.
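For the Fail2Ban part, a minimal jail on a DMZ host might look like the sketch below; the thresholds are illustrative, not recommendations:

```ini
# /etc/fail2ban/jail.local (sketch; tune thresholds to taste)
[sshd]
enabled  = true
maxretry = 5
findtime = 10m
bantime  = 1h
```
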

Wifi has multiple SSIDs to block / permit outbound access to the internet (IoT stuff), LAN (Guests), etc.

Then regular updates / patching / backups....

[–] TCB13 1 points 4 days ago (1 children)

Kinda Scenario 1 is the standard way: firewall at the perimeter with separately isolated networks for DMZ, LAN & Wifi

What you're describing is close to scenario 1, but not purely scenario 1. It's a mix, with public and private traffic on a single public IP address and a single firewall, that a lot of people use because they can't get two separate public IP addresses on their connection.

The advantage of pure scenario 1 is that it greatly reduces the attack surface by NOT exposing your home network's public IP to whatever you're hosting, and by not relying on the same firewall for both. Even if your entire hosting stack gets hacked, there's no way the attacker can get into your home network, because they're two separate networks.

Scenario 1 describes having two public IPs: a switch after the ISP ONT, with one cable going to the home firewall/router and another to the server (or to another router/firewall). Much more isolated. It isn't a simple DMZ; it's literally the same as having two different internet connections, one for each thing.

[–] [email protected] 1 points 4 days ago (1 children)

Fair point, I neglected to mention that I have >1 public IP. The firewall directs traffic as required.

[–] TCB13 1 points 3 days ago (1 children)

That's a good setup with multiple IPs, but you still have a single firewall that might be compromised if someone gets access to the "public" machine. :)

[–] [email protected] 2 points 3 days ago

I have a single connection to the 'net, hence a single firewall.

I've port-scanned my firewall externally when travelling for work, so I've verified what's exposed and that GeoIP works (I forgot to enable a region before travelling there), so I've reached the point where I'm happy with this setup.