https://bugs.dpdk.org/show_bug.cgi?id=1161

Bug ID: 1161
Summary: [dpdk-23.03] virtio_user_as_exceptional_path/vhost_exception_path_with_virtio_user: launch testpmd as virtio-user failed
Product: DPDK
Version: 22.03
Hardware: x86
OS: Linux
Status: UNCONFIRMED
Severity: normal
Priority: Normal
Component: vhost/virtio
Assignee: dev@dpdk.org
Reporter: weix.ling@intel.com
Target Milestone: ---

[Environment]

DPDK version: DPDK-23.03.0-rc0
Other software versions: N/A
OS: Ubuntu 22.04.1 LTS / Linux 5.15.45-051545-generic
Compiler: gcc version 11.3.0 (Ubuntu 11.3.0-1ubuntu1~22.04)
Hardware platform: Intel(R) Xeon(R) Platinum 8380 CPU @ 2.30GHz
NIC hardware: Intel Ethernet Controller XL710 for 40GbE QSFP+ (1583)
Driver/NIC firmware: i40e-2.22.8 / firmware-version: 9.10 0x8000d01e 1.3295.0

[Test Setup]

Steps to reproduce:

1. Build DPDK:

   rm -fr x86_64-native-linuxapp-gcc
   CC=gcc meson --werror -Denable_kmods=True -Dlibdir=lib -Dexamples=all --default-library=static x86_64-native-linuxapp-gcc
   ninja -C x86_64-native-linuxapp-gcc

2. Bind 1 NIC port to vfio-pci:

   dpdk-devbind.py --bind=vfio-pci 0000:31:00.0

3. Start testpmd as virtio-user:

   x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 5-6 -n 8 -a 0000:31:00.0 --file-prefix=dpdk_1925052_20230217094416 --vdev=virtio_user0,mac=52:54:00:00:00:01,path=/dev/vhost-net,queue_size=1024 -- -i --rxd=1024 --txd=1024
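The repro above depends on two host prerequisites that also show up in the failure log below: a mounted hugetlbfs and a usable /dev/vhost-net character device. A minimal, hypothetical pre-flight sketch (not part of the official test plan) that checks both before launching testpmd:

```python
import os

def hugetlbfs_mounted(mounts_path="/proc/mounts"):
    """Return True if any hugetlbfs filesystem is currently mounted."""
    try:
        with open(mounts_path) as f:
            return any("hugetlbfs" in line for line in f)
    except OSError:
        # /proc not available (e.g. non-Linux host): report not mounted
        return False

def vhost_net_present(dev="/dev/vhost-net"):
    """Return True if the vhost-net device node exists (vhost_net module loaded)."""
    return os.path.exists(dev)

print("hugetlbfs mounted:", hugetlbfs_mounted())
print("/dev/vhost-net present:", vhost_net_present())
```

If /dev/vhost-net is absent, `modprobe vhost_net` is the usual remedy; note, however, that in this bug the device node exists and the failure happens later, inside the vhost-kernel feature negotiation.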
# Start testpmd as virtio-user failed:

root@dut245:~/dpdk# x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 5-6 -n 8 -a 0000:31:00.0 --file-prefix=dpdk_1925052_20230217094416 --vdev=virtio_user0,mac=52:54:00:00:00:01,path=/dev/vhost-net,queue_size=1024 -- -i --rxd=1024 --txd=1024
EAL: Detected CPU lcores: 128
EAL: Detected NUMA nodes: 2
EAL: Detected static linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/dpdk_1925052_20230217094416/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: 4096 hugepages of size 2097152 reserved, but no mounted hugetlbfs found for that size
EAL: VFIO support initialized
EAL: Using IOMMU type 1 (Type 1)
EAL: Ignore mapping IO port bar(1)
EAL: Ignore mapping IO port bar(4)
EAL: Probe PCI driver: net_i40e (8086:1583) device: 0000:31:00.0 (socket 0)
i40e_GLQF_reg_init(): i40e device 0000:31:00.0 changed global register [0x002689a0]. original: 0x00000021, new: 0x00000029
vhost_kernel_ioctl(): Vhost-kernel ioctl 2148052736 failed (Bad file descriptor)
vhost_kernel_get_features(): Failed to get features
virtio_user_dev_init(): (/dev/vhost-net) Failed to get device features
virtio_user_pmd_probe(): virtio_user_dev_init fails
vdev_probe(): failed to initialize virtio_user0 device
EAL: Bus (vdev) probe failed.
Interactive-mode selected
testpmd: create a new mbuf pool: n=155456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
Warning! port-topology=paired and odd forward ports number, the last port will pair with itself.
Configuring Port 0 (socket 0)
Port 0: 00:00:00:00:01:00
Checking link statuses...
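The raw ioctl number in the "Vhost-kernel ioctl 2148052736 failed (Bad file descriptor)" line can be decoded with the standard Linux _IOC field layout, which identifies which vhost request was rejected. A small sketch of that decoding (the field layout is from linux's ioctl encoding; the interpretation of 0xAF as the VHOST magic and nr 0 as VHOST_GET_FEATURES matches linux/vhost.h):

```python
# Decode the failing ioctl request number from the log.
req = 2148052736

# Linux _IOC encoding: dir(2 bits) | size(14 bits) | type(8 bits) | nr(8 bits)
direction = (req >> 30) & 0x3      # 2 == _IOC_READ
size      = (req >> 16) & 0x3FFF   # payload size in bytes
ioc_type  = (req >> 8) & 0xFF      # 0xAF == VHOST_VIRTIO magic
nr        = req & 0xFF             # command number within the VHOST group

print(hex(req))                             # 0x8008af00
print(direction, size, hex(ioc_type), nr)   # 2 8 0xaf 0
# _IOR(VHOST_VIRTIO, 0x00, __u64) == VHOST_GET_FEATURES, which matches
# the vhost_kernel_get_features() error on the next log line.
```

So the kernel backend rejects VHOST_GET_FEATURES with EBADF, i.e. the request is issued on a file descriptor the vhost-net backend no longer considers valid, before feature negotiation can complete.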
Done
testpmd> Port 0: link state change event

testpmd> show port summary all
Number of available ports: 1
Port MAC Address       Name         Driver   Status Link
0    00:00:00:00:01:00 0000:31:00.0 net_i40e up     40 Gbps
testpmd>

[Expected Result]

# Start testpmd as virtio-user succeeded:

root@dut245:~/dpdk# x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 5-6 -n 8 -a 0000:31:00.0 --file-prefix=dpdk_1925052_20230217094416 --vdev=virtio_user0,mac=52:54:00:00:00:01,path=/dev/vhost-net,queue_size=1024 -- -i --rxd=1024 --txd=1024
EAL: Detected CPU lcores: 128
EAL: Detected NUMA nodes: 2
EAL: Detected static linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/dpdk_1925052_20230217094416/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: 4096 hugepages of size 2097152 reserved, but no mounted hugetlbfs found for that size
EAL: VFIO support initialized
EAL: Using IOMMU type 1 (Type 1)
EAL: Ignore mapping IO port bar(1)
EAL: Ignore mapping IO port bar(4)
EAL: Probe PCI driver: net_i40e (8086:1583) device: 0000:31:00.0 (socket 0)
Interactive-mode selected
Warning: NUMA should be configured manually by using --port-numa-config and --ring-numa-config parameters along with --numa.
testpmd: create a new mbuf pool: n=155456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
Configuring Port 0 (socket 0)
i40e_set_mac_max_frame(): Set max frame size at port level not applicable on link down
Port 0: 00:00:00:00:01:00
Configuring Port 1 (socket 0)
Port 1: 52:54:00:00:00:01
Checking link statuses...
Done
Error during enabling promiscuous mode for port 1: Operation not supported - ignore
testpmd> Port 0: link state change event

testpmd> show port summary all
Number of available ports: 2
Port MAC Address       Name         Driver          Status Link
0    00:00:00:00:01:00 0000:31:00.0 net_i40e        up     40 Gbps
1    52:54:00:00:00:01 virtio_user0 net_virtio_user up     Unknown
testpmd>

[Regression]

Is this issue a regression: Y

Version the regression was introduced:

7be724856315a0dab9645c20a617fba276607294 is the first bad commit

commit 7be724856315a0dab9645c20a617fba276607294
Author: Maxime Coquelin
Date:   Thu Feb 9 10:17:04 2023 +0100

    net/virtio-user: get max number of queue pairs from device

    When supported by the backend (only vDPA for now), this patch gets the
    maximum number of queue pairs supported by the device by querying it in
    its config space. This is required for adding backend control queue
    support, as its index equals the maximum number of queues supported by
    the device, as described by the Virtio specification.

    Signed-off-by: Maxime Coquelin
    Reviewed-by: Chenbo Xia
    Acked-by: Eugenio Pérez

 drivers/net/virtio/virtio_user/virtio_user_dev.c | 93 ++++++++++++++++++------
 drivers/net/virtio/virtio_user_ethdev.c          |  7 --
 2 files changed, 71 insertions(+), 29 deletions(-)
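For context on what the first bad commit changed: it makes virtio-user read the maximum number of queue pairs from the device config space. In the virtio-net device config defined by the Virtio specification, the layout begins with `u8 mac[6]; le16 status; le16 max_virtqueue_pairs;`, so `max_virtqueue_pairs` sits at byte offset 8. A sketch of that parsing against a synthetic config blob (the blob contents here are made-up example values, not data from this bug):

```python
import struct

# Hypothetical first 10 bytes of a virtio-net device config space,
# per the spec layout: u8 mac[6]; le16 status; le16 max_virtqueue_pairs.
cfg = bytes([0x52, 0x54, 0x00, 0x00, 0x00, 0x01]) + struct.pack('<HH', 1, 4)

mac = cfg[0:6]
status, max_vq_pairs = struct.unpack_from('<HH', cfg, 6)

print(':'.join(f'{b:02x}' for b in mac))   # 52:54:00:00:00:01
print(status, max_vq_pairs)                # 1 4
```

The commit message notes that only the vDPA backend supports this query; the failure here suggests the new path interacts badly with the vhost-kernel backend used by /dev/vhost-net.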