this post was submitted on 30 Jun 2023
56 points (95.2% liked)

Selfhosted


A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.


This was a very nerve-racking experience, as I'd never been through a major Proxmox version upgrade before and had spent a lot of time getting everything just so, with lots of config around disks and VLANs. The instructions were also one big, long page, which never fills me with confidence as it usually means there are a lot of holes to fall into.

My initial issue was that the instructions say to perform the upgrade with no VMs running, but the upgrade requires an internet connection and my router is OPNsense in a VM. Thankfully, running apt dist-upgrade --download-only first, then shutting down the OPNsense VM and running apt dist-upgrade, did the trick.
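For anyone in the same boat, the sequence above roughly looks like this (the VM ID is a placeholder; adjust for your own setup):

```shell
# 1. Refresh package lists and download everything while the OPNsense VM
#    (and therefore the internet connection) is still up.
apt update
apt dist-upgrade --download-only

# 2. Shut down the router VM (VM ID 100 here is an assumption).
qm shutdown 100

# 3. Run the actual upgrade entirely from the local package cache,
#    with no network connection needed.
apt dist-upgrade
```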

A few config files changed, and I always hate this part of Debian upgrades, but nothing major was impacted.

A nervous reboot and everything was back up, running the new Proxmox with the new kernel. Surprisingly smooth overall; the most time-consuming part by far was backing up my VMs just in case. The upgrade itself, including the reboot, took about 15 minutes; the backups and making sure I was prepared and mentally ready took about an hour.

Compared to upgrading ESXi on old hardware like I was doing last year, it was a breeze.

Highly recommended, would upgrade again.

top 19 comments
[–] [email protected] 8 points 2 years ago (1 children)

Working as a systems engineer, I've really come to appreciate having test systems. A simple virtualised install of Proxmox that replicates some small part of your environment is great for simply going through the upgrade once or twice.

[–] [email protected] 2 points 2 years ago

I'd like to run a small cluster of mini PCs or have extra hardware running a mirror setup, but the cost has put me off.

[–] [email protected] 6 points 2 years ago (1 children)

Like you, I have OPNsense in a VM on one of my PVE nodes. But I only made sure the nightly VM backup ran, and didn't even bother shutting down the VMs during the upgrade. The VMs got restarted during the final reboot, as they would with any other reboot, and I was back in business.

[–] [email protected] 6 points 2 years ago (1 children)

It's a bold strategy, Cotton. I'm glad it worked out for you.

[–] [email protected] 2 points 2 years ago* (last edited 2 years ago) (1 children)

:-)

But seriously, I was wondering about the requirement to shut down the VMs and couldn't come up with a solid reason. I mean, even if QEMU/KVM/the kernel get replaced during a version upgrade or a more routine update, all of those kick in only after the reboot, right? And how is me shutting down VMs manually any different from the OS shutting them down during a reboot?

I know I'm speculating and may not have the full picture; it's probably a question for the Proxmox team. There may be some corner case where this is indeed important.

By the way, Mexican or US black Strat? :-)

[–] [email protected] 2 points 2 years ago (1 children)

I have no idea why, but I figured there must be some good reason to document it and to put the check into the test tool.
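(For context, I believe the test tool referred to here is the pre-upgrade checker that ships with PVE 7; running it with the full option includes the longer-running checks:)

```shell
# Pre-upgrade checklist script shipped with Proxmox VE 7;
# --full enables the more time-consuming checks as well.
pve7to8 --full
```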

I don't have a black Strat yet. I'm considering the Player series or, as a non-Fender option, a Vintage V6.

[–] jackiebrown 2 points 2 years ago

I don't know why either, but I figured the same as you: if they bothered to document it, I'd bother to follow it. I did the download-only option too, since I also run OPNsense in a VM.

[–] [email protected] 4 points 2 years ago (1 children)

I’m about to do the same thing. Thanks for sharing your experience.

[–] [email protected] 1 points 2 years ago

Best of luck. FWIW, I did the upgrade over an SSH connection and didn't have any issues. I just had to be careful not to wipe my custom sshd_config during the update.
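One way to handle this (my suggestion, not something from the post) is to tell dpkg to keep your currently installed config files for the whole run, then review what the maintainer versions would have changed afterwards:

```shell
# Keep existing config files whenever a package ships a new version;
# dpkg writes the maintainer's copy alongside as *.dpkg-dist instead.
apt dist-upgrade -o Dpkg::Options::="--force-confold"

# Afterwards, find any leftover maintainer/old copies to merge by hand.
find /etc -name '*.dpkg-dist' -o -name '*.dpkg-old'
```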

[–] [email protected] 2 points 2 years ago

Upgraded 4 instances without a problem. Really love PVE and PBS!

[–] [email protected] 2 points 2 years ago

Highly recommended, would upgrade again.

rofl! Highlight of the post!

Happy to read this though, thanks. I've been putting off 7->8 myself ... this gives me quite a bit more confidence!

[–] mrginger 2 points 2 years ago (1 children)

I did the same a few months ago and was extremely nervous. I have a 4-node cluster running 30 VMs in production. After migrating the VMs off of one node, I quickly realized what a pleasure it was to do. No muss, no fuss. I migrated the VMs back and continued on with the other 3 nodes.

[–] [email protected] 3 points 2 years ago (1 children)

That's pretty cool that it worked so well. Does migrating the VMs result in any downtime, or is it a seamless crossover?

I waited a few days before upgrading, as I wanted to make sure I wasn't going to get stung by any teething troubles. I would ideally have waited longer, but I had a few free hours available to do it without the family being annoyed by any downtime.

[–] mrginger 1 points 2 years ago

Sorry for the late reply. Using ZFS and replicating the VM first makes it really quick: less than 5 minutes of downtime.
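A rough sketch of that workflow on the Proxmox CLI (the VM ID, job ID, and node name are placeholders, and the exact schedule is my assumption): set up storage replication to the target node first, so the migration only has to send the delta since the last replica.

```shell
# Replicate VM 101's ZFS disks to node pve2 every 15 minutes.
pvesr create-local-job 101-0 pve2 --schedule "*/15"

# With a recent replica on the target, the migration itself only
# transfers the remaining delta, keeping downtime short.
qm migrate 101 pve2
```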

[–] [email protected] 2 points 2 years ago

PVE 8 was surprisingly painless on my homelab. The only thing that bit me was a leftover ceph package that caused apt to want to uninstall pve. After I figured that out and upgraded ceph, I was back in business.

[–] SheeEttin 1 points 2 years ago

I just did mine yesterday. One node stopped responding mid-upgrade and I wasn't able to reconnect, but I was able to log in at the console and run dpkg --configure -a until I got the network back. Then apt install --reinstall proxmox-ve pve-manager got those packages to finish installing, and everything worked.
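The recovery steps above, roughly as they'd be run from the console (the package names are from the comment; a different broken set may need different reinstalls):

```shell
# Finish configuring any packages left unpacked when the upgrade died;
# this is the standard first step after an interrupted apt run.
dpkg --configure -a

# Then force a reinstall of the core Proxmox packages so their
# install scripts run to completion.
apt install --reinstall proxmox-ve pve-manager
```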

[–] iks 1 points 2 years ago

I always feared the same, ever since PVE 1.2, and it was almost always painless (once you have all the needed parachutes in place in case anything goes wrong). I have two nodes (not clustered), one for the prod environment and the other for firewall and VoIP services. Only the jump from 1.2 to 1.5 (if I remember correctly) was a bit of a pain, due to the added local-LVM, but that was many years ago. Now it's a breeze.

[–] RonnyZittledong 1 points 2 years ago (1 children)

This is one of the reasons I'm very much against virtualizing core foundational services like routers and NAS. It just causes way too many headaches.

[–] SheeEttin 1 points 2 years ago

Or if you do, at least make them HA.
