this post was submitted on 08 Jan 2024
61 points (96.9% liked)

Selfhosted


I am currently running most of my stuff from an unraid box built from spare parts I have. It seems like I am hitting its limit, and I just want to turn it into a NAS. Micro PCs/USFF are what I am planning on moving stuff to (probably a cluster of 2 for now, but I might expand later). Just a few quick questions:

  1. I'd be running the arr services on a Proxmox cluster, downloading to a device on the same network. I don't think there would be any problems, but I wanted to see what changes would need to be made.

  2. Which micro PCs are you running? I am leaning towards the HP ProDesk or Lenovo 7xx/9xx series at around $200 each. I don't really plan on getting more than 2-3 and don't run too many things, but I'd want enough overhead in case I switch stuff over to Home Assistant and spin up Windows and Linux VMs if needed.

  3. Any best practices you recommend when starting a Proxmox cluster? I've learned over time that it's better to set things up correctly from the start than to try to fix them once they're running. I wish I could coach my past self from 7 years ago. Would've saved a lot of headaches lol.

[–] [email protected] 6 points 8 months ago (18 children)

Do two NICs. I have a bigger setup, and it's all running on one LAN, and it is starting to run into problems. Starting with a two-network setup from the outset would probably have saved me a lot of grief.

[–] [email protected] 1 points 8 months ago (16 children)

So dual NICs on each device, and set up another LAN on my router? Sorry if it seems like a dumb question, but I just want to make sure.

[–] fuckwit_mcbumcrumble 1 points 8 months ago (15 children)

Why would you need two NICs unless you're planning on having a Proxmox VM be your router?

[–] FiduciaryOne 1 points 8 months ago (2 children)

I think two NICs are required to do VLANs properly? Not 100% sure.

[–] [email protected] 4 points 8 months ago* (last edited 8 months ago) (2 children)

Nope - Proxmox lets you create VLAN trunks, just like a physical switch.

Edit: here's one of my Proxmox server network configs.

[–] monkinto 2 points 8 months ago (2 children)

Is there a reason to do this over just giving the NIC for the VM/container a VLAN tag?

[–] [email protected] 2 points 8 months ago

You still need to do that, but the Linux bridge interface needs VLANs defined as well, since the physical switch port that trunks the traffic is going to tag the respective VLANs to/from the Proxmox server and its virtual guests.

So, vmbr1 maps to physical interface enp2s0f0. On vmbr1, I have two VLAN interfaces defined - vmbr1.100 (Proxmox guest VLAN) and vmbr1.60 (physical infrastructure VLAN).
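Roughly, that setup looks like this in /etc/network/interfaces (a sketch only - the addresses and subnets here are invented for illustration; just the interface names and VLAN IDs match what I described):

```
# /etc/network/interfaces (sketch -- addresses are placeholders)
auto enp2s0f0
iface enp2s0f0 inet manual

# VLAN-aware Linux bridge on the trunked port
auto vmbr1
iface vmbr1 inet manual
    bridge-ports enp2s0f0
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 60 100

# Proxmox server's own address, on the infrastructure VLAN
auto vmbr1.60
iface vmbr1.60 inet static
    address 192.168.60.2/24
    gateway 192.168.60.1

# VLAN interface for the guest VLAN
auto vmbr1.100
iface vmbr1.100 inet static
    address 192.168.100.2/24
```

The key bits are `bridge-vlan-aware yes` and `bridge-vids`, which make the single bridge carry multiple tagged VLANs like a switch would.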

My Proxmox server has its own address in vlan60, and my Proxmox guests have addresses (and vlan tag) for vlan100.

The added headfuck (especially at setup) is that I also run an OPNsense VM on Proxmox, and it has its own vlan interfaces defined - essentially virtual interfaces on top of a virtual interface. So, I have:

  • switch trunk port
    • enp2s0f0 (physical)
      • vmbr1 (Linux bridge)
        • vmbr1.60 (Proxmox server interface)
        • vmbr1.100 (Proxmox VLAN interface)
          • virtual guest nic (w/ vlan tag and IP address)
        • vtnet1 (OPNsense "physical" nic, but actually virtual)
          • vtnet1_vlan[xxx] (OPNsense virtual nic per vlan)

All virtual guests default route via OPNsense's IP address in vlan100, which maps to OPNsense virtual interface vtnet1_vlan100.
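Attaching a guest to the bridge with its VLAN tag is just the normal NIC setting - from the CLI it's something like this (the VM ID here is an example):

```
# Give VM 101 a virtio NIC on vmbr1, tagged for VLAN 100
qm set 101 --net0 virtio,bridge=vmbr1,tag=100
```

The same tag field is available in the web UI when you edit a VM's network device.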

Like I said, it's a headfuck when you first set it up. Interface-ception.

The only unnecessary bit in my setup is that my Proxmox server also has an IP address in vlan100 (via vmbr1.100). I had it there when I originally thought I'd use Proxmox firewalling as well, to effectively create a zero trust network for my Proxmox cluster. But, for me, that would've been overkill.

[–] FiduciaryOne 2 points 8 months ago (1 children)

Huh, cool, thank you! I'm going to have to look into that. I'd love for some of my containers and VMs to be on a different VLAN from others. I appreciate the correction. 😊

[–] [email protected] 2 points 8 months ago (1 children)

No worries mate. Sing out if you get stuck - happy to provide more details about my setup if you think it'll help.

[–] FiduciaryOne 2 points 8 months ago

Thanks for the kind offer! I won't get to this for a while, but I may take you up on it if I get stuck.

[–] [email protected] 2 points 8 months ago

No, you can do more than one VLAN per port. It's called a trunk.
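On plain Linux the same idea is just VLAN subinterfaces stacked on one physical port - a sketch (interface name and VLAN IDs are examples):

```
# Two VLANs riding the same physical NIC (eth0) -- i.e. a trunk
ip link add link eth0 name eth0.60 type vlan id 60
ip link add link eth0 name eth0.100 type vlan id 100
ip link set eth0.60 up
ip link set eth0.100 up
```

Each subinterface sends and receives frames tagged with its VLAN ID over the one cable.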
