this post was submitted on 07 Oct 2023

Sysadmin


A community dedicated to the profession of IT Systems Administration

founded 5 years ago

Hi all, I want to set up a fileserver as a KVM guest which will use a 2TB disk partition to store its data. I see 5 options for doing this:

  1. Attach the whole disk to the VM and access the partition as you do in the host machine. -> contraindicated by the RHEL documentation for security reasons.

  2. Attach only the partition to the VM. Inside the VM, the partition appears as a drive which needs a new partition table. This seems good to me (for reasons I'll explain later), but I don't know how the partition-table-inside-a-partition thing works and what implications it comes with.

  3. Create a sparse max-2TB qcow2 image, store it in the physical partition and attach it to the VM. -> rejected by me because the partition inside the qcow2 image needs constant resizing as your storage needs grow.

  4. Create a fully initialized 2TB qcow2 image. -> current way of doing it, no resizes, no security concerns (I guess). The only drawback I perceive is the time required to initialize a 2TB image (~2.5 hours on an HDD).

  5. Share the physical partition over NFS. I haven't really investigated this solution - nor am I experienced with NFS - but it seems like it will require some configuration on the host too, which is something I want to avoid because I don't want to redeploy the host in case shit hits the fan.
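If it helps to compare options 3 and 4 concretely: they differ only in the preallocation mode chosen at image-creation time. A minimal sketch, assuming the partition is mounted at /mnt/storage (path and file name are hypothetical):

```shell
# Option 3: sparse qcow2 - virtual size is 2T, on-disk size starts near zero
qemu-img create -f qcow2 /mnt/storage/fileserver.qcow2 2T

# Option 4: fully preallocated qcow2 - slow to create on an HDD;
# preallocation=falloc is much faster where the filesystem supports fallocate
qemu-img create -f qcow2 -o preallocation=full /mnt/storage/fileserver.qcow2 2T
```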

So, why does option 2 seem good to me? No resizes as in 3 and no long setup times (image initialization) as in 4.

Is there any other solution that I have missed? If not, out of these, which should I choose?

Sorry for the long post, I tried to be as detailed as possible.

top 6 comments
[–] [email protected] 3 points 1 year ago (1 children)

Create a sparse max-2TB qcow2 image, store it in the physical partition and attach it to the VM. -> rejected by me because the partition inside the qcow2 image needs constant resizing as your storage needs grow.

I don't see how this is an issue? If you set the partition to e.g. 1TB, the qcow2 image will automatically resize itself as the drive is filling up, right?

[–] [email protected] 1 points 1 year ago

Yes, the VM will resize the image. The problem is the partition inside it: when I tried this method, the image's actual size was ~200KB, so when I created a partition table inside it I could only create a ~200KB partition. I think that when that partition fills up, the VM will reserve more space inside the image, but it will appear as unallocated space in the guest; the partition won't grow automatically. I might have overlooked something, though, so I will try this method again.

[–] SheeEttin 1 points 1 year ago (1 children)

Create a 2tb virtual disk, whatever that means on your platform, and attach it to the VM. Growing a qcow2 image is trivial (qemu-img resize disk.qcow2 +10G). Yes, you will also have to grow the partition inside the VM, but that's always going to be true and should also be trivial.
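The grow sequence described above might look like this (device names are examples, adjust to your layout):

```shell
# Host: grow the qcow2 image by 10 GiB (shut the guest down first,
# or use `virsh blockresize` on a running domain)
qemu-img resize disk.qcow2 +10G

# Guest: grow the last partition, then the filesystem on it
growpart /dev/vda 1      # growpart comes from the cloud-utils package
resize2fs /dev/vda1      # for ext4; use xfs_growfs for XFS
```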

[–] [email protected] 1 points 1 year ago (1 children)

The problem isn't the steps involved but whether the resizing is manual or automatic. E.g., if the partition inside the VM is full and a user tries to send a big file, is it easy to automate resizing the partition so that the file can fit? If it requires manual intervention, I can't use this solution.

I think the qcow2 image doesn't have to be resized manually; only the partition inside it does. When you create the image, the size you specify is the maximum size it is allowed to reach. When I first tried this, I created a max-2TB image and its actual size was ~200KB.
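That matches how sparse qcow2 images behave: the size passed at creation is the virtual (maximum) size, while the file on disk starts tiny and grows as blocks are written. Both numbers are visible with `qemu-img info` (file name is an example):

```shell
qemu-img create -f qcow2 scratch.qcow2 2T
qemu-img info scratch.qcow2
# "virtual size" reports 2 TiB; "disk size" reports the small actual on-disk usage
```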

[–] SheeEttin 1 points 1 year ago (1 children)

I'm not aware of any on-prem solution that will automatically resize if it needs more space. You could set it up to expand if it hits some low disk space threshold. But if your use case is users randomly sending giant files, consider cloud storage.

Actually, you might be able to do some kind of object storage on-prem, Ceph or something. Personally I would get some enterprise storage, like a full SAN.

[–] [email protected] 1 points 1 year ago

You could set it up to expand if it hits some low disk space threshold

That's a good idea, I can be proactive about some things; e.g. it won't suddenly get more than 30GB of data, so I could resize once the free space drops to ~50GB. I'll look into it, thanks!
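A threshold check like that could run from cron inside the guest. A minimal sketch, assuming an ext4 filesystem mounted at /srv/data on /dev/vda1 (all paths, devices, and numbers are assumptions):

```shell
#!/bin/sh
# Hypothetical sketch: grow the data filesystem when free space drops
# below a threshold. Paths and device names are placeholders.
THRESHOLD_GB=50
MOUNT=/srv/data

free_gb() {
    # available space on a mount point, in whole GiB (GNU df)
    df -BG --output=avail "$1" | tail -n 1 | tr -dc '0-9'
}

needs_resize() {
    # true when free space (GiB) is at or below the threshold
    [ "$1" -le "$THRESHOLD_GB" ]
}

if [ -d "$MOUNT" ] && needs_resize "$(free_gb "$MOUNT")"; then
    # grow the last partition and the ext4 filesystem on it
    growpart /dev/vda 1 && resize2fs /dev/vda1
fi
```

Note that this only resizes the partition and filesystem inside an already-large virtual disk; if the qcow2 image itself is at its maximum, `qemu-img resize` on the host is still needed first.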