this post was submitted on 26 Jun 2023

Linux


I run several VMs in libvirt; the host is Debian 12. I want to create one dedicated to Samba sharing. What is the best way to expose more disks to a QEMU guest? I have 6 SSDs I want to use in this VM.

  1. A Filesystem hardware node (virtiofs or virtio-9p)?
  2. A physical disk exposed to the VM via a Storage hardware node?
  3. Something more exotic, like a ZFS storage pool?

"Best" for me is more about reliability than anything else. I have tried sharing folders before using the Filesystem hardware node with Plan 9 (virtio-9p), but it was a little wonky when it came to permissions. I may not have set it up right, however.
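For reference, a virtiofs share (option 1) is defined in the domain XML roughly like this. This is a minimal sketch: the export path and tag are made-up examples, and virtiofs requires shared memory backing:

```xml
<!-- virtiofs needs shared memory backing in the domain definition -->
<memoryBacking>
  <source type="memfd"/>
  <access mode="shared"/>
</memoryBacking>

<!-- goes under <devices>; dir and tag are example values -->
<filesystem type="mount" accessmode="passthrough">
  <driver type="virtiofs"/>
  <source dir="/srv/export"/>
  <target dir="export"/>
</filesystem>
```

Inside the guest the tag is then mounted with `mount -t virtiofs export /mnt`.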

top 3 comments
[email protected] 2 points 1 year ago

It kind of depends on where you want to manage the filesystems. If you want the host to also be able to use the disks, you probably want to manage them on the host. If you want the VM to handle any potential RAID and the filesystems, then you likely want to pass through the whole disks.

My experience with filesystem passthrough is that it doesn't work that well. virtiofs works but it's still got rough edges, and you introduce additional host<>VM overhead to access the files, on top of the overhead of your network share.

I tend to do a bit of a hybrid approach for mine: the host manages the ZFS pools, then gives a zvol or qcow2 to the VM for it to do its duties. That makes it easier to share the space across multiple VMs. But I also run NFS/Samba on the host as part of the host's responsibilities, so VMs that really need to share data with the host filesystem use those shares.
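As a concrete sketch of that hybrid approach (pool, zvol and VM names here are made up; adjust sizes and names to taste):

```shell
# On the host: carve a 500G zvol out of an existing pool called "tank"
zfs create -V 500G tank/samba-vm-data

# Hand the zvol to the VM as a virtio block device
virsh attach-disk samba-vm /dev/zvol/tank/samba-vm-data vdb \
  --targetbus virtio --persistent
```

The guest just sees a plain /dev/vdb and can put whatever filesystem it wants on it, while snapshots and redundancy stay the host's problem.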

One thing I find helpful is defining clearly what the responsibilities of the VM are: is it just a file server, or a general storage server? If it's only a file server, I'd manage the disks on the host and give the VM the final volume(s) to host the shares on. If it's a storage server that needs to manage the disks itself (a TrueNAS or unRAID VM, for example), then I'd pass the disks in as virtio-blk/virtio-scsi and let the VM manage them.
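Whole-disk passthrough as virtio-blk looks roughly like this in the domain XML. The by-id path below is a placeholder; using /dev/disk/by-id rather than /dev/sdX keeps the mapping stable across reboots:

```xml
<disk type="block" device="disk">
  <driver name="qemu" type="raw" cache="none" io="native"/>
  <!-- placeholder serial; use the real /dev/disk/by-id entry for each SSD -->
  <source dev="/dev/disk/by-id/ata-EXAMPLE_SSD_SERIAL"/>
  <target dev="vdb" bus="virtio"/>
</disk>
```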

Nullpointer 1 points 1 year ago

Thank you, very helpful! In this case the VM is just a file server; the bulk of the traffic is for Time Machine. In the past I had the host manage Samba shares on btrfs, but I was exploring delegating that to a VM and playing with ZFS. This is all home network admin fun, and your response gave some good info.

piranhaphish 1 points 1 year ago (last edited 1 year ago)

I am partial to an LVM storage pool.

If there is nothing particularly distinct about any of your drives, then just lump them together into a volume group and add that as a storage pool to libvirt. Then you can divvy them out in whatever capacities you want.
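For example (device names and the volume group name here are made up; substitute your actual SSDs):

```shell
# Lump the SSDs together into one volume group
vgcreate vg_vmstore /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg

# Register it with libvirt as a "logical" storage pool
virsh pool-define-as vmstore logical --source-name vg_vmstore \
  --target /dev/vg_vmstore
virsh pool-start vmstore
virsh pool-autostart vmstore
```

From there, each new volume you allocate from the pool shows up as a logical volume you can hand to any VM.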

This assumes you don't need direct access to the drives from the host (other than the obvious Samba share).

Disclaimer: I don't have any experience with ZFS, but I suspect it serves a similar role and may be just as well or better suited.