From: Oleksandr Nahnybida
Date: Thu, 2 Oct 2025 17:40:07 +0300
Subject: How to use vfio-pci in vmware VM with PCI passthrough?
To: users
Cc: Vitaliy Ivanov, Taras Bilous
List-Id: DPDK usage discussions

Hi all,

We are working on migrating our DPDK application from igb_uio to vfio-pci. Our target environment is a VMware ESXi host running on an AMD Epyc server, with NICs configured for PCI passthrough to a guest VM running Debian Bookworm (kernel 6.1.0-39-amd64).

We've encountered a couple of issues.

Problem 1:

Initially, attempting to use vfio-pci failed with error code -22, and the /sys/class/iommu/ directory was empty. We discovered the "expose IOMMU to guest OS" option in VMware and enabled it.
This led to a new error:
"The virtual machine cannot be powered on because IOMMU virtualization is not compatible with PCI passthru on AMD platforms"

We found a workaround by adding amd.iommu.supportsPcip = "TRUE" to the VM's configuration. The VM now boots, and the IOMMU is visible in the guest.
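For reference, this is the exact line we appended to the VM's .vmx file (in addition to enabling "Expose IOMMU to guest OS" in the vSphere UI; we found this key in community posts, so treat it as undocumented):

```
# Workaround added to the VM's .vmx configuration file:
amd.iommu.supportsPcip = "TRUE"
```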
However, when we run our DPDK application, it hangs after printing "EAL: VFIO support initialized", and shortly afterwards the guest kernel panics with a soft-lockup error, eventually making the system unresponsive:

BUG: soft lockup - CPU#34 stuck for 75s! [kcompactd0:529]

Problem 2:

Separately, we've noticed that our IOMMU groups are not ideal. Many groups contain not only the NICs we need to bind, but also other devices such as PCI bridges:

IOMMU Group 7:
        0000:00:17.0 - PCI bridge: VMware PCI Express Root Port
        0000:00:17.1
        0000:00:17.2
        0000:00:17.3
        0000:00:17.4
        0000:00:17.5
        0000:00:17.6
        0000:00:17.7
        0000:13:00.0 - nic
        0000:13:00.1 - nic
        0000:14:00.0 - nic
        0000:14:00.1 - nic

Questions:

1. Is enabling guest IOMMU virtualization in VMware, together with the amd.iommu.supportsPcip workaround, the correct approach here?
2. I see that vfio-pci can be run in unsafe no-IOMMU mode, but is there any benefit to using it over igb_uio in this case?
   1. In my understanding, the hypervisor uses the actual hardware IOMMU to implement PCI passthrough anyway, so what is the point of having it inside the guest VM again?
   2. Also, enable_unsafe_noiommu_mode is usually not compiled in, so we would need to recompile vfio separately; and since we already compile igb_uio anyway, vfio would be no better in terms of deployment.
3. There also seems to be an option of using SR-IOV instead of passthrough, but we haven't explored it yet. Do you still need to "expose IOMMU" to be able to bind a VF to vfio? And what is the correct workflow here in general?

Best Regards,
Oleksandr