gansheim

joined 1 year ago
[–] gansheim 5 points 3 weeks ago

I was making the wife and me some turkey meatballs with a little cube of mozzarella in the middle, trying to be fancy. Turns out, when the mozzarella melts and you don't have a food thermometer, it's REALLY hard to tell if the meatballs are done in the middle... We both spent two days being sick from both ends in a one-bathroom apartment. Also had to pay $300 for the doctor's note for my crappy job so they wouldn't fire me for missing work for two days. Not my proudest cooking moment...

[–] gansheim 7 points 1 month ago

I used it for about 9 months on my gaming laptop. From a gaming perspective it was great. I have AMD and everything just worked out of the box. Installing new packages takes some getting used to, but 99% of the time you can find what you need on Flathub, which is super easy to use and maintain. I did notice that it seemed to take significantly longer to boot compared to my previous distros, on average probably a good 15-30 seconds longer, which feels like forever when you're waiting for it to come up. Not sure if this is common or just something funky with my setup. I used Silverblue a few years back and don't remember it being that slow. I also never could get my Canon printer drivers to install and function properly, which is what inevitably led me to switch back to Nobara. I like being able to use "dnf install" for something, or tweak a vendor-provided install script, and just be done with it.
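For comparison, a rough sketch of the workflow difference I mean (the package name is just a placeholder, not a real driver package):

```
# on an image-based distro like Silverblue, extra packages get layered
# onto the OS image and generally need a reboot to take effect:
rpm-ostree install some-driver
systemctl reboot

# on a traditional distro like Nobara, it's one step and you're done:
sudo dnf install some-driver
```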

[–] gansheim 1 points 2 months ago

I have for years. All the bands you listed, As I Lay Dying (their albums all the way up to An Ocean Between Us. I listened to them after that too, but couldn't really call them Christian metal at that point), In the Midst of Lions, O Sleeper, and August Burns Red (not strictly a Christian metal band, but at least "Alternative").

[–] gansheim 2 points 3 months ago* (last edited 3 months ago) (1 children)

That's fair, I should have clarified that on most Enterprise Linux distros LVM is definitely the norm. I know Fedora switched to btrfs a few releases back, and you may be right about Suse Tumbleweed, but I'm pretty sure Suse Leap uses LVM. CentOS, RHEL, Alma, etc. all still default to LVM, since keeping everything on a single partition is a bad idea and managing multiple partitions is significantly easier with LVM. More than likely that'll change when btrfs has a little more mileage on it and is trusted as "enterprise ready", but for now LVM is the way they go. MBR vs GPT and EFI vs non-EFI don't have a lot to do with it though; it's more about the ease of managing multiple partitions (or subvolumes if you're used to btrfs), since cramming root, var, and home into a single partition is bad idea jeans.
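As a rough sketch of what that layout looks like (disk and volume names here are made up):

```
# carve one disk into separate logical volumes for root, var, and home
sudo pvcreate /dev/sdb1
sudo vgcreate vg_system /dev/sdb1
sudo lvcreate -L 20G -n root vg_system
sudo lvcreate -L 10G -n var vg_system
sudo lvcreate -l 100%FREE -n home vg_system
```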

[–] gansheim 2 points 3 months ago (3 children)

LVM is actually super common. Most Linux distros default to LVM unless you do custom partitioning. It's not just about the max number of partitions the partition table supports, either: LVM provides a TON more flexibility and makes managing partitions much easier.
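For example, growing a volume that's running out of space is basically a one-liner, and you can pull a whole new disk into the pool if you need more room (VG/LV and device names here are hypothetical):

```
# grow the home LV by 50G and resize its filesystem in one shot, while mounted
sudo lvextend -r -L +50G /dev/vg_system/home

# out of free space in the VG? add another disk to it first
sudo pvcreate /dev/sdc1
sudo vgextend vg_system /dev/sdc1
```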

[–] gansheim 3 points 3 months ago (5 children)

Definitely second this. If you're using LVM, the PV is identified by its UUID. You have to update that on the new drive so LVM knows where the VG and LVs are supposed to live.
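Something like this, assuming the clone ended up on /dev/sdb1 and both drives are attached at once (device and VG names are placeholders):

```
# duplicate PV UUIDs show up after a block-level clone
sudo pvs -o +pv_uuid

# give the cloned PV a new UUID and rename its VG so it doesn't clash
sudo vgimportclone --basevgname vg_clone /dev/sdb1
```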

[–] gansheim 11 points 1 year ago

Horizon Zero Dawn showed us that this doesn't have any potential to be problematic...

[–] gansheim 1 points 1 year ago

At least in my experience, this isn't true for all pets. I've only ever had one dog that ate all the food in her bowl no matter what. All the other dogs and cats I've had, including the 2 dogs and cat I have now, will eat until they're full, even if the bowl isn't empty, and then come back to it later.

 

My 8-year-old son told me this randomly after getting out of the shower this morning.

[–] gansheim 16 points 1 year ago* (last edited 1 year ago) (1 children)

Sounds like you're conflating terms a bit. The storage volumes you described sound like ProxMox volumes, but then you switched to VM volumes in your example.

The hypervisor (ProxMox) has storage volumes that are used for multiple things; these are sometimes called Datastores. One will be for the local ProxMox OS, another might be for sending logs to, and lastly you'll have one for VM storage. The VM storage volume(s) are where your virtual machine files live, including any virtual disks you provision.

The storage type of the volume you create in ProxMox dictates what you can and can't do with VM disks in that storage. They provide a list here: https://pve.proxmox.com/wiki/Storage

Pay attention to things like whether you can take live VM snapshots and such.

Getting back to your original issue, it sounds like you tried to create multiple VM disks using the same bus channel (e.g. SCSI0 or IDE0). You have to specify a different channel for each disk: the first disk is channel 0, the next is 1, etc.

Assuming you configure a datastore for each storage class (slow spinny, fast SSD, super fast flash), you would then configure each of the VM's virtual disks to be backed by the datastore for the storage class you want. Let's assume you're using the SCSI storage driver: disk 0 is your boot drive and you want that on datastore0, which is super fast NVMe; disk 1 is your general OS drive and you want that on datastore1, which is fast SSD; and disk 2 is your data drive and you want that on slow HDD. You'd configure your disks similar to the following:

Disk0 → SCSI0 → Datastore0

Disk1 → SCSI1 → Datastore1

Disk2 → SCSI2 → Datastore2
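On the CLI that mapping would look something like this (the VM ID, datastore names, and sizes are just for illustration):

```
# create three new disks for VM 100, one per datastore / storage class
qm set 100 --scsi0 datastore0:32    # boot drive on the NVMe-backed store
qm set 100 --scsi1 datastore1:128   # general OS drive on the SSD store
qm set 100 --scsi2 datastore2:512   # data drive on the HDD store
```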

Lastly, if you're wondering which storage driver to use, it comes down to performance and compatibility for your VM. IDE is the slowest but has good compatibility with pretty much everything, although I've heard rumors that newer systems may be dropping support for IDE.

SCSI is usually a happy medium, but has compatibility issues with older legacy systems.

VirtIO is generally supposed to be the fastest, but has compatibility issues with a wider range of "older" systems.

Compatibility issues for each can often be resolved with the right drivers, but then you might need to custom-build your boot image to include them, especially on Windows.

There's an older post on the forum that goes over the differences and should still be relevant here: https://forum.proxmox.com/threads/differences-between-ide-virtio-scsi.8356/

I've only been using ProxMox for a couple of years in my homelab, and I mostly use VMware, so I may have crossed the terms a bit, but hopefully this gives you the gist. I hope I understood your question/issue and that this helps.

[–] gansheim 2 points 1 year ago

Yeah, I've been secretly hoping someone will pick it up again. They left it in such a way that it shouldn't be hard to pick up and continue, even with a different cast if they need to.
