DPDK patches and discussions
* [Bug 1196] [pvp]  Failed to start a vm with two vhost-user interfaces based on the host dpdk-testpmd
@ 2023-03-22 12:22 bugzilla
From: bugzilla @ 2023-03-22 12:22 UTC (permalink / raw)
  To: dev


https://bugs.dpdk.org/show_bug.cgi?id=1196

            Bug ID: 1196
           Summary: [pvp]  Failed to start a vm with two vhost-user
                    interfaces based on the host dpdk-testpmd
           Product: DPDK
           Version: 23.03
          Hardware: x86
                OS: Linux
            Status: UNCONFIRMED
          Severity: normal
          Priority: Normal
         Component: testpmd
          Assignee: dev@dpdk.org
          Reporter: yanghliu@redhat.com
  Target Milestone: ---

Description of problem:
Starting a VM with two vhost-user interfaces backed by the host dpdk-testpmd
hangs the VM.


Version-Release number of selected component (if applicable):
host:
5.14.0-70.50.1.el9_0.x86_64
qemu-kvm-6.2.0-11.el9_0.7.x86_64
dpdk 23.03-rc3
Ethernet Controller 10-Gigabit X540-AT2
guest:
5.14.0-70.50.1.el9_0.x86_64


How reproducible:
100%

Steps to Reproduce:
1. Set up the host kernel options
# echo "isolated_cores=2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,31,29,27,25,23,21,19,17,15,13,11" >> /etc/tuned/cpu-partitioning-variables.conf
# tuned-adm profile cpu-partitioning
# grubby --args="iommu=pt intel_iommu=on default_hugepagesz=1G" --update-kernel=`grubby --default-kernel`
# reboot
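
(For reference, a check that is not in the original report: after the reboot,
the kernel command line can be inspected to confirm the options took effect.)
# cat /proc/cmdline | grep -oE 'iommu=pt|intel_iommu=on|default_hugepagesz=1G'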

2. Set up the hugepage count
# echo 20 > /sys/devices/system/node/node0/hugepages/hugepages-1048576kB/nr_hugepages
# echo 20 > /sys/devices/system/node/node1/hugepages/hugepages-1048576kB/nr_hugepages
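
(Not part of the original report: the per-node allocation can be confirmed as
follows before binding the NIC.)
# cat /sys/devices/system/node/node*/hugepages/hugepages-1048576kB/nr_hugepages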

3. Bind the X540 ports to the vfio-pci driver
# modprobe vfio-pci
# modprobe vfio
# dpdk-devbind.py --bind=vfio-pci 0000:5e:00.0
# dpdk-devbind.py --bind=vfio-pci 0000:5e:00.1
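
(As a sanity check, not in the original report, the resulting bindings can be
listed.)
# dpdk-devbind.py --status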



4. Start dpdk-testpmd
# /usr/local/bin/dpdk-testpmd -l 1,3,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30 \
    --socket-mem 1024,1024 -n 4 \
    --vdev 'net_vhost0,iface=/tmp/vhost-user1,queues=4,client=1,iommu-support=1' \
    --vdev 'net_vhost1,iface=/tmp/vhost-user2,queues=4,client=1,iommu-support=1' \
    -b 0000:3b:00.0 -b 0000:3b:00.1 \
    -- --portmask=f -i --rxd=512 --txd=512 --rxq=4 --txq=4 --nb-cores=16 --forward-mode=io
EAL: Detected CPU lcores: 64
EAL: Detected NUMA nodes: 2
EAL: Detected static linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: VFIO support initialized
EAL: Using IOMMU type 1 (Type 1)
EAL: Probe PCI driver: net_ixgbe (8086:1528) device: 0000:5e:00.0 (socket 0)
EAL: Probe PCI driver: net_ixgbe (8086:1528) device: 0000:5e:00.1 (socket 0)
TELEMETRY: No legacy callbacks, legacy socket not created
Interactive-mode selected
Set io packet forwarding mode
testpmd: create a new mbuf pool <mb_pool_1>: n=275456, size=2176, socket=1
testpmd: preferred mempool ops selected: ring_mp_mc
testpmd: create a new mbuf pool <mb_pool_0>: n=275456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
Configuring Port 0 (socket 0)
Port 0: B4:96:91:14:22:C4
Configuring Port 1 (socket 0)
Port 1: B4:96:91:14:22:C6
Configuring Port 2 (socket 1)
VHOST_CONFIG: (/tmp/vhost-user1) vhost-user client: socket created, fd: 92
VHOST_CONFIG: (/tmp/vhost-user1) failed to connect: Connection refused
VHOST_CONFIG: (/tmp/vhost-user1) reconnecting...
Port 2: 56:48:4F:53:54:02
Configuring Port 3 (socket 1)
VHOST_CONFIG: (/tmp/vhost-user2) vhost-user client: socket created, fd: 95
VHOST_CONFIG: (/tmp/vhost-user2) failed to connect: Connection refused
VHOST_CONFIG: (/tmp/vhost-user2) reconnecting...
Port 3: 56:48:4F:53:54:03
Checking link statuses...
Done
testpmd> set portlist 0,2,1,3
testpmd> start
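
(Note, not part of the original log: with io forwarding, "set portlist 0,2,1,3"
pairs physical port 0 with vhost-user port 2 and physical port 1 with
vhost-user port 3; the resulting forwarding configuration can be inspected
with:)
testpmd> show config fwd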

5. Start a VM with two vhost-user interfaces

# virsh start rhel9.0
Domain 'rhel9.0' started

# virsh list --all
 Id   Name      State
-------------------------
 2    rhel9.0   running   <-- The domain is reported as running


The related XML:

    <interface type='vhostuser'>
      <mac address='88:66:da:5f:dd:12'/>
      <source type='unix' path='/tmp/vhost-user1' mode='server'/>
      <target dev=''/>
      <model type='virtio'/>
      <driver name='vhost' queues='4' rx_queue_size='1024' iommu='on' ats='on'/>
    </interface>

    <interface type='vhostuser'>
      <mac address='88:66:da:5f:dd:13'/>
      <source type='unix' path='/tmp/vhost-user2' mode='server'/>
      <target dev=''/>
      <model type='virtio'/>
      <driver name='vhost' queues='4' rx_queue_size='1024' iommu='on' ats='on'/>
    </interface>
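
(For reference, not shown in the report: vhost-user also requires the guest
memory to be hugepage-backed and shared, so the <memoryBacking> section of the
same domain XML is worth checking; assuming the domain name rhel9.0 used
above, for example:)
# virsh dumpxml rhel9.0 | grep -A 3 '<memoryBacking>'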

6. Try to connect to the VM and check its status

We cannot log in to the VM because it hangs during boot.
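
(Not in the original report: the boot hang can be observed directly on the
guest serial console, assuming the domain name rhel9.0 used above.)
# virsh console rhel9.0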



Actual results:
The VM hangs during boot.

Expected results:
The VM starts successfully and works well.

Additional info:
(1) This issue cannot be reproduced with dpdk-21.11.

