this post was submitted on 03 Mar 2024
28 points (100.0% liked)

Selfhosted


Hi all,

I need to expose an iSCSI disk to be used as the main disk of a VM. Because I'm pretty new to this, I'd like to ask for some tips and good practices to avoid rookie mistakes that can really hurt performance or availability.

What are the common things I should take into account before deploying everything?

Thanks in advance

all 11 comments
[–] theit8514 3 points 8 months ago (1 children)

Since we don't know what server or VM tech you're using, the advice will be pretty generic. For self-hosting you can likely get away with your iSCSI traffic sharing the LAN interface with your usual VM traffic, but if you need high throughput you will want iSCSI-optimized NICs and to turn on jumbo frames (an MTU of 9000 is the standard here). This requires a switch that supports jumbo frames as well.
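
A quick way to sanity-check that jumbo frames actually survive the whole path is to send don't-fragment pings sized just under the 9000-byte MTU. A minimal sketch, assuming a Linux host with iputils ping; the portal address is a placeholder:

```python
#!/usr/bin/env python3
"""Rough end-to-end jumbo frame check toward the iSCSI portal."""
import subprocess

ISCSI_PORTAL = "192.168.50.10"   # placeholder: your target's portal IP
PAYLOAD = 9000 - 28              # ICMP payload that just fits a 9000-byte MTU (28 bytes of IP + ICMP headers)

# -M do -> set the Don't Fragment bit so an undersized hop fails loudly
# -s    -> ICMP payload size; -c 3 -> three probes is plenty for a sanity check
result = subprocess.run(
    ["ping", "-M", "do", "-c", "3", "-s", str(PAYLOAD), ISCSI_PORTAL],
    capture_output=True, text=True,
)

if result.returncode == 0:
    print("Jumbo frames look fine end to end.")
else:
    print("Ping failed -- check the MTU on the NIC, the switch, and the target.")
    print(result.stdout or result.stderr)
```

If this fails with "message too long", some hop (host NIC, switch, or target interface) is still at 1500.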

For Windows, I find the iSCSI support to be very lacking. Every time I have used it I've had sporadic loss of connectivity, failure to mount on boot, and other issues. I would avoid it.

For ESXi you can map an iSCSI LUN as a datastore and create VMDKs on top. This functions the same as using actual FC LUNs or NFS mounts, and I've had no issues with reliability. There's also RDM (Raw Device Mapping), which can mount the iSCSI LUN directly as a disk of the VM. If you're using vSphere I would advise against this, as you lose the ability to vMotion or use DRS.

[–] thelittleblackbird 1 points 8 months ago

Thanks for the comment, I will check, but performance should not be an issue. In the end it's a personal self-hosted service.

[–] [email protected] 2 points 8 months ago (1 children)

Not presenting LUNs to VMs is the best practice. The only time you should do it is if you're absolutely forced to because you're doing something that doesn't support better shared-disk options.

[–] thelittleblackbird 1 points 8 months ago (1 children)

Then you recommend mounting them via the hypervisor?

I was certainly planning to use it in the VM itself...

[–] [email protected] 0 points 8 months ago

The problem with external LUNs is that they're out of the control of the hypervisor. You can run into situations where migration events cause access issues. If you can have the data presented through the hypervisor, it will lead to fewer potential issues. Object storage or NFS are also good options, if available.

[–] [email protected] 1 points 8 months ago* (last edited 8 months ago)

Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I've seen in this thread:

| Fewer Letters | More Letters |
|---|---|
| ESXi | VMWare virtual machine hypervisor |
| HA | Home Assistant automation software |
| ~ | High Availability |
| NAS | Network-Attached Storage |

3 acronyms in this thread; the most compressed thread commented on today has 11 acronyms.

[Thread #568 for this sub, first seen 3rd Mar 2024, 16:55] [FAQ] [Full list] [Contact] [Source code]

[–] Thade780 1 points 8 months ago (1 children)

Is there a specific reason to mount the LUN directly as opposed to creating a virtual disk? Performance, maybe?

[–] MigratingtoLemmy 1 points 8 months ago* (last edited 8 months ago)

More like, if you wanted the storage under the LUN to be shared through the VM. Essentially, mount the LUN into the VM and then run NFS/SMB from the VM as a NAS. Works out pretty well since with a little bit of trickery you can have a NAS that is also HA (assuming the storage pool doesn't go down).
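
A minimal sketch of that idea, assuming the LUN already shows up inside the VM as a formatted filesystem mounted at /srv/share (the mountpoint, subnet, and NFS options here are placeholders, not anything from this thread):

```python
#!/usr/bin/env python3
"""Re-export a LUN-backed mountpoint over NFS from inside the VM (run as root)."""
import subprocess
from pathlib import Path

MOUNTPOINT = "/srv/share"   # placeholder: where the LUN-backed filesystem is mounted
EXPORT = f"{MOUNTPOINT} 192.168.1.0/24(rw,sync,no_subtree_check)\n"

exports = Path("/etc/exports")
current = exports.read_text() if exports.exists() else ""

# Append the export only once, then tell the NFS server to re-read its config.
if EXPORT not in current:
    with exports.open("a") as f:
        f.write(EXPORT)
subprocess.run(["exportfs", "-ra"], check=True)
print(f"{MOUNTPOINT} exported over NFS")
```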

With that said, I'm very interested too.

Unless I completely misunderstood your question

[–] [email protected] 1 points 8 months ago (1 children)

Unless you are forced to use the same network interface, always use a dedicated NIC, and a VLAN when possible.

Like others mentioned, if the VM is on a hypervisor that can present the disk for you, you should try that.

An example would be a NAS box with two interfaces: use the second one for iSCSI. Connect that to the switch on a different VLAN. Connect something like Proxmox's second NIC to the iSCSI VLAN. Add the remote disk in Proxmox from the iSCSI NAS. Then add the disk to the VM.

This idea spans all different tech.
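
For the Proxmox step specifically, a rough sketch of what registering the target looks like from the host's CLI; the storage ID, portal IP, and IQN below are placeholders, not values from this thread:

```python
#!/usr/bin/env python3
"""Register an iSCSI target as a Proxmox storage via pvesm (run on the Proxmox host as root)."""
import subprocess

PORTAL = "192.168.60.10"                          # placeholder: NAS interface on the storage VLAN
TARGET = "iqn.2024-03.lan.nas:storage.vm-disks"   # placeholder: IQN reported by the NAS
STORAGE_ID = "nas-iscsi"                          # placeholder: how Proxmox will refer to it

# 'content none' because the LUNs will be attached to VMs directly,
# not used as a pool for disk images.
subprocess.run(
    ["pvesm", "add", "iscsi", STORAGE_ID,
     "--portal", PORTAL, "--target", TARGET, "--content", "none"],
    check=True,
)

# List the LUNs Proxmox can now see; from here a LUN can be attached to a VM
# in the GUI (Hardware -> Add -> Hard Disk) or with qm set.
subprocess.run(["pvesm", "list", STORAGE_ID], check=True)
```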

[–] thelittleblackbird 2 points 8 months ago

Thanks, I will keep that in mind, but I have consumer-grade hardware and I'm afraid VLANs are not possible on my switch.

In any case, thanks for all the tips.