this post was submitted on 15 Mar 2024
8 points (83.3% liked)

Selfhosted


I can’t even figure out how to tell if it’s supported or not. If it is supported, I can’t figure out how to enable it. If it is enabled, I don't know where I should be seeing it in Proxmox!

Can anyone point me in the right direction?

all 26 comments
[–] AlphaAutist 4 points 8 months ago (1 children)

It looks like it should be possible, as both your CPU and motherboard support Intel VT-d:

https://ark.intel.com/content/www/us/en/ark/products/236781/intel-core-i7-processor-14700-33m-cache-up-to-5-40-ghz.html

https://download.asrock.com/Manual/Z690%20Extreme.pdf

PCIe passthrough isn’t enabled by default in Proxmox and requires some manual changes to the bootloader (GRUB or systemd-boot) as well as loading some kernel modules. You may also need to enable VT-d in your BIOS. You can read Proxmox's guide for enabling PCIe passthrough here:

https://pve.proxmox.com/wiki/PCI(e)_Passthrough
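For reference, the core of that guide boils down to something like the following; this is only a minimal sketch, assuming an Intel CPU and a host booting with GRUB (the wiki covers systemd-boot and the finer details):

# Enable the IOMMU on the kernel command line by editing /etc/default/grub:
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"
update-grub

# Load the VFIO modules at boot:
cat >> /etc/modules <<'EOF'
vfio
vfio_iommu_type1
vfio_pci
EOF

update-initramfs -u -k all
reboot

After the reboot, dmesg | grep -e DMAR -e IOMMU should show the IOMMU coming up.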

[–] [email protected] 1 points 8 months ago (2 children)

The motherboard needs to support IOMMU, not VT-d.

[–] [email protected] 1 points 8 months ago (1 children)

Are you sure? I thought VT-d is the Intel virtualization extension that is used by the IOMMU.

[–] [email protected] 0 points 8 months ago (1 children)

Yes, I'm sure. They are related and you need VT-d for IOMMU, but not all motherboards isolate all the PCIe devices separately. Server/enterprise boards always do, but consumer-grade stuff can be hit or miss. Maybe it's a little better with more recent hardware, though; I haven't checked in a couple of gens.
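A quick way to see how a given board actually splits things up (assuming the IOMMU is already enabled and the host has been rebooted with it):

find /sys/kernel/iommu_groups/ -type l
# Each group is one isolation unit; a device can only be passed through cleanly
# if everything else in its group goes with it (or is just a PCIe bridge).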

[–] [email protected] 1 points 8 months ago (1 children)

Can you name an Intel system from the last 2 years that doesn't support it?

[–] [email protected] 0 points 8 months ago* (last edited 8 months ago) (1 children)

There are THOUSANDS of motherboards; it's impossible to know for sure, sadly. I literally just told you I haven't checked in a couple of gens, so no, I cannot tell you about the last two years.

OP has yet to provide the requested info, so we don't know for their specific case.

[–] [email protected] 0 points 8 months ago (1 children)

It has nothing to do with the board. It is the CPU that matters, as PCIe is controlled by the CPU.

This person's CPU supports it: https://ark.intel.com/content/www/us/en/ark/products/236783/intel-core-i7-processor-14700k-33m-cache-up-to-5-60-ghz.html

[–] [email protected] 0 points 8 months ago* (last edited 8 months ago) (1 children)

Isn't it fun being confidently wrong? https://iommu.info/

VT-d is a CPU function, but IOMMU groups are a function of the CPU, chipset, and board configuration combined.

[–] [email protected] 1 points 8 months ago (1 children)

Where does it say that? On some boards you need to adjust IOMMU settings, but other than that it shouldn't be a problem if the CPU supports it. I am not an expert, and if I am wrong I would like you to prove it.

[–] [email protected] 0 points 8 months ago* (last edited 8 months ago) (1 children)

The previous site already shows this in practice, but here's a more technical explanation for you

https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/sect-iommu-deep-dive

Here are some more real-life examples: https://forum.level1techs.com/t/vfio-passthrough-in-2023-call-to-arms/199671/18

https://old.reddit.com/r/VFIO/comments/16czih3/how_to_find_a_motherboard_with_good_iommu_groups/

Past that, you can take your skepticism somewhere else. I don't really care that you don't believe me; it doesn't change the reality of how IOMMU groups work.

Grouping has become better overall since X570, but it's still not a given.

[–] nemanin 1 points 8 months ago (2 children)

Ok. So they are different?

How do I tell which motherboards support IOMMU?

I can’t find it as a filter or search option on any websites…?

[–] [email protected] 0 points 8 months ago (1 children)

Yes, they are different. VT-d is purely a function of the CPU (past the BIOS enable option).

First you will want to look at the output of acpidump | egrep "DMAR|IVRS", then you will also want to verify that the IOMMU groups don't lump your GFX in with something that won't be passed through, using something like: https://gist.github.com/r15ch13/ba2d738985fce8990a4e9f32d07c6ada
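Roughly, something like this (a sketch assuming a Debian-based host where acpidump ships in the acpica-tools package):

apt install acpica-tools         # provides acpidump (assumption: not installed by default)
acpidump | egrep "DMAR|IVRS"     # DMAR = Intel VT-d ACPI table, IVRS = the AMD-Vi equivalent
dmesg | grep -e DMAR -e IOMMU    # sanity check that the kernel actually enabled the IOMMU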

[–] nemanin 1 points 8 months ago (1 children)

This is in Proxmox?

How can I tell if my mobo even supports it?

[–] [email protected] 2 points 8 months ago (2 children)

Run those two commands on the command line and post the result here.

[–] nemanin 1 points 6 months ago

Sorry this took so long... you know, life. Trying that command all together, I get this response: -bash: acpidump: command not found

Trying just egrep "DMAR|IVRS" (in case they are two commands) seems to hang the terminal session.

I tried following a guide to enable PCIe passthrough and got this. One important thing: there is no discrete GPU at the moment; I'm trying to pass through an HBA.

root@prox:~# dmesg | grep -e IOMMU
[ 0.100411] DMAR: IOMMU enabled
[ 0.254862] DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 1
[ 0.629143] pci 0000:00:02.0: DMAR: Skip IOMMU disabling for graphics
[ 0.713978] DMAR: IOMMU feature fl1gp_support inconsistent
[ 0.713979] DMAR: IOMMU feature pgsel_inv inconsistent
[ 0.713980] DMAR: IOMMU feature nwfs inconsistent
[ 0.713981] DMAR: IOMMU feature dit inconsistent
[ 0.713982] DMAR: IOMMU feature sc_support inconsistent
[ 0.713983] DMAR: IOMMU feature dev_iotlb_support inconsistent

[–] nemanin 1 points 6 months ago (1 children)

This may also help, my HBA is there:

root@prox:~# for d in /sys/kernel/iommu_groups/*/devices/*; do n=${d#*/iommu_groups/*}; n=${n%%/*}; printf 'IOMMU group %s ' "$n"; lspci -nns "${d##*/}"; done
IOMMU group 0 00:02.0 VGA compatible controller [0300]: Intel Corporation Raptor Lake-S GT1 [UHD Graphics 770] [8086:a780] (rev 04)

IOMMU group 10 00:1f.0 ISA bridge [0601]: Intel Corporation Z690 Chipset LPC/eSPI Controller [8086:7a84] (rev 11)

IOMMU group 10 00:1f.3 Audio device [0403]: Intel Corporation Alder Lake-S HD Audio Controller [8086:7ad0] (rev 11)

IOMMU group 10 00:1f.4 SMBus [0c05]: Intel Corporation Alder Lake-S PCH SMBus Controller [8086:7aa3] (rev 11)

IOMMU group 10 00:1f.5 Serial bus controller [0c80]: Intel Corporation Alder Lake-S PCH SPI Controller [8086:7aa4] (rev 11)

IOMMU group 10 00:1f.6 Ethernet controller [0200]: Intel Corporation Ethernet Connection (17) I219-V [8086:1a1d] (rev 11)

IOMMU group 11 01:00.0 Non-Volatile memory controller [0108]: Sandisk Corp Western Digital WD Black SN850X NVMe SSD [15b7:5030] (rev 01)

IOMMU group 12 02:00.0 Ethernet controller [0200]: Realtek Semiconductor Co., Ltd. RTL8125 2.5GbE Controller [10ec:8125] (rev 05)

IOMMU group 13 03:00.0 Non-Volatile memory controller [0108]: Micron/Crucial Technology Device [c0a9:5415] (rev 01)

IOMMU group 14 04:00.0 PCI bridge [0604]: PLX Technology, Inc. PEX 8724 24-Lane, 6-Port PCI Express Gen 3 (8 GT/s) Switch, 19 x 19mm FCBGA [10b5:8724] (rev ca)

IOMMU group 15 05:00.0 PCI bridge [0604]: PLX Technology, Inc. PEX 8724 24-Lane, 6-Port PCI Express Gen 3 (8 GT/s) Switch, 19 x 19mm FCBGA [10b5:8724] (rev ca)

IOMMU group 16 05:08.0 PCI bridge [0604]: PLX Technology, Inc. PEX 8724 24-Lane, 6-Port PCI Express Gen 3 (8 GT/s) Switch, 19 x 19mm FCBGA [10b5:8724] (rev ca)

IOMMU group 17 05:09.0 PCI bridge [0604]: PLX Technology, Inc. PEX 8724 24-Lane, 6-Port PCI Express Gen 3 (8 GT/s) Switch, 19 x 19mm FCBGA [10b5:8724] (rev ca)

IOMMU group 18 06:00.0 Serial Attached SCSI controller [0107]: Broadcom / LSI SAS3008 PCI-Express Fusion-MPT SAS-3 [1000:0097] (rev 02)

IOMMU group 19 08:00.0 Serial Attached SCSI controller [0107]: Broadcom / LSI SAS3008 PCI-Express Fusion-MPT SAS-3 [1000:0097] (rev 02)

IOMMU group 1 00:00.0 Host bridge [0600]: Intel Corporation Device [8086:a740] (rev 01)

IOMMU group 2 00:14.0 USB controller [0c03]: Intel Corporation Alder Lake-S PCH USB 3.2 Gen 2x2 XHCI Controller [8086:7ae0] (rev 11)

IOMMU group 2 00:14.2 RAM memory [0500]: Intel Corporation Alder Lake-S PCH Shared SRAM [8086:7aa7] (rev 11)

IOMMU group 3 00:15.0 Serial bus controller [0c80]: Intel Corporation Alder Lake-S PCH Serial IO I2C Controller #0 [8086:7acc] (rev 11)

IOMMU group 4 00:16.0 Communication controller [0780]: Intel Corporation Alder Lake-S PCH HECI Controller #1 [8086:7ae8] (rev 11)

IOMMU group 5 00:17.0 SATA controller [0106]: Intel Corporation Alder Lake-S PCH SATA Controller [AHCI Mode] [8086:7ae2] (rev 11)

IOMMU group 6 00:1a.0 PCI bridge [0604]: Intel Corporation Alder Lake-S PCH PCI Express Root Port #25 [8086:7ac8] (rev 11)

IOMMU group 7 00:1c.0 PCI bridge [0604]: Intel Corporation Alder Lake-S PCH PCI Express Root Port #2 [8086:7ab9] (rev 11)

IOMMU group 8 00:1c.4 PCI bridge [0604]: Intel Corporation Alder Lake-S PCH PCI Express Root Port #5 [8086:7abc] (rev 11)

IOMMU group 9 00:1d.0 PCI bridge [0604]: Intel Corporation Alder Lake-S PCH PCI Express Root Port #9 [8086:7ab0] (rev 11)

[–] [email protected] 2 points 6 months ago* (last edited 6 months ago)

You should be good to go. Make sure vfio is loaded via modules-load.d:

vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd

Make sure the module options are set correctly and the host kernel module is blacklisted in /etc/modprobe.d/vfio.conf:

options vfio-pci ids=1000:0097
blacklist MODULE_NAME

Make sure IOMMU is enabled on your kernel command line (e.g. via GRUB): intel_iommu=on iommu=pt

This is probably not complete, but it should get you pretty far toward being able to add the PCI device in the hardware config of your VM.
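Pulling those steps together for this particular HBA, a rough sketch might look like the below. The host driver name (mpt3sas for an LSI SAS3008) is my assumption; confirm it with lspci -nnk -s 06:00.0, and only blacklist it if the host itself doesn't need any disks behind the HBA:

# Bind both SAS3008 functions to vfio-pci instead of the host driver:
echo "options vfio-pci ids=1000:0097" >  /etc/modprobe.d/vfio.conf
echo "blacklist mpt3sas"              >> /etc/modprobe.d/vfio.conf
update-initramfs -u -k all
reboot
# Afterwards, "Kernel driver in use: vfio-pci" should show up here:
lspci -nnk -s 06:00.0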

[–] [email protected] 0 points 8 months ago

It's your CPU, and yes, it does support it, as all Intel CPUs made within the last few years have support.

[–] [email protected] 1 points 6 months ago* (last edited 6 months ago)

Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I've seen in this thread:

NAS: Network-Attached Storage
NVMe: Non-Volatile Memory Express (interface for mass storage)
PCIe: Peripheral Component Interconnect Express
SATA: Serial AT Attachment (interface for mass storage)
SSD: Solid State Drive (mass storage)

5 acronyms in this thread; the most compressed thread commented on today has 10 acronyms.

[Thread #739 for this sub, first seen 6th May 2024, 05:15] [FAQ] [Full list] [Contact] [Source code]

[–] [email protected] 1 points 8 months ago

Start by going into your firmware settings (hold Del on boot) and enabling virtualization and all its extensions.

Once that's done, you can just click on Add Hardware and then PCI Device. If it's a GPU, it will require more setup.
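If you prefer the CLI, the equivalent is roughly this (assuming VM ID 100 and the HBA at 06:00.0 from the lspci output earlier in the thread):

qm set 100 -hostpci0 0000:06:00.0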

[–] [email protected] 0 points 8 months ago (2 children)

The thing called bifurcation?

[–] roofuskit 1 points 8 months ago

No, bifurcation is splitting the lanes in the actual slot between devices, usually for something like an M.2 adapter board.

[–] nemanin 1 points 8 months ago (1 children)

Well, that's at least part of my problem. I have no idea what it would be called, so it's hard to Google. I guess the underlying technology is all 'IOMMU', but each motherboard manufacturer, and Intel and AMD, all have other names for it...

I'm trying to pass an HBA through to a VM for a NAS.

[–] [email protected] 1 points 8 months ago

Right, I confused it with the splitting of lanes (M.2 x4 bays).