From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124])
by inbox.dpdk.org (Postfix) with ESMTP id 8556D42809;
Wed, 22 Mar 2023 13:22:20 +0100 (CET)
Received: from mails.dpdk.org (localhost [127.0.0.1])
by mails.dpdk.org (Postfix) with ESMTP id 59BA040E09;
Wed, 22 Mar 2023 13:22:20 +0100 (CET)
Received: from inbox.dpdk.org (inbox.dpdk.org [95.142.172.178])
by mails.dpdk.org (Postfix) with ESMTP id 8085940A84
for ; Wed, 22 Mar 2023 13:22:18 +0100 (CET)
Received: by inbox.dpdk.org (Postfix, from userid 33)
id 76BF64280A; Wed, 22 Mar 2023 13:22:18 +0100 (CET)
From: bugzilla@dpdk.org
To: dev@dpdk.org
Subject: [Bug 1196] [pvp] Failed to start a vm with two vhost-user
interfaces based on the host dpdk-testpmd
Date: Wed, 22 Mar 2023 12:22:18 +0000
X-Bugzilla-Reason: AssignedTo
X-Bugzilla-Type: new
X-Bugzilla-Watch-Reason: None
X-Bugzilla-Product: DPDK
X-Bugzilla-Component: testpmd
X-Bugzilla-Version: 23.03
X-Bugzilla-Keywords:
X-Bugzilla-Severity: normal
X-Bugzilla-Who: yanghliu@redhat.com
X-Bugzilla-Status: UNCONFIRMED
X-Bugzilla-Resolution:
X-Bugzilla-Priority: Normal
X-Bugzilla-Assigned-To: dev@dpdk.org
X-Bugzilla-Target-Milestone: ---
X-Bugzilla-Flags:
X-Bugzilla-Changed-Fields: bug_id short_desc product version rep_platform
op_sys bug_status bug_severity priority component assigned_to reporter
target_milestone
Message-ID:
Content-Type: multipart/alternative; boundary=16794877380.7f396cc.1698054
Content-Transfer-Encoding: 7bit
X-Bugzilla-URL: http://bugs.dpdk.org/
Auto-Submitted: auto-generated
X-Auto-Response-Suppress: All
MIME-Version: 1.0
X-BeenThere: dev@dpdk.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: DPDK patches and discussions
List-Unsubscribe: ,
List-Archive:
List-Post:
List-Help:
List-Subscribe: ,
Errors-To: dev-bounces@dpdk.org
--16794877380.7f396cc.1698054
Date: Wed, 22 Mar 2023 13:22:18 +0100
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: quoted-printable
X-Bugzilla-URL: http://bugs.dpdk.org/
Auto-Submitted: auto-generated
X-Auto-Response-Suppress: All
https://bugs.dpdk.org/show_bug.cgi?id=1196
Bug ID: 1196
Summary: [pvp] Failed to start a vm with two vhost-user
interfaces based on the host dpdk-testpmd
Product: DPDK
Version: 23.03
Hardware: x86
OS: Linux
Status: UNCONFIRMED
Severity: normal
Priority: Normal
Component: testpmd
Assignee: dev@dpdk.org
Reporter: yanghliu@redhat.com
Target Milestone: ---
Description of problem:
Start a VM with two vhost-user interfaces backed by the host dpdk-testpmd; the
VM hangs.
Version-Release number of selected component (if applicable):
host:
5.14.0-70.50.1.el9_0.x86_64
qemu-kvm-6.2.0-11.el9_0.7.x86_64
23.03-rc3 dpdk
Ethernet Controller 10-Gigabit X540-AT2
guest:
5.14.0-70.50.1.el9_0.x86_64
How reproducible:
100%
Steps to Reproduce:
1. setup the host kernel options
# echo "isolated_cores=2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,31,29,27,25,23,21,19,17,15,13,11" >> /etc/tuned/cpu-partitioning-variables.conf
# tuned-adm profile cpu-partitioning
# grubby --args="iommu=pt intel_iommu=on default_hugepagesz=1G" --update-kernel=`grubby --default-kernel`
# reboot
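As a quick sanity check after the reboot (a sketch, not part of the original report), the arguments added by grubby can be verified by tokenizing the kernel command line; the sample string below is hypothetical, on a real host read /proc/cmdline:

```python
# Sketch: verify the IOMMU/hugepage kernel arguments added by grubby took effect.
# The sample cmdline is hypothetical; on a real host read /proc/cmdline instead.
sample_cmdline = (
    "BOOT_IMAGE=(hd0,gpt2)/vmlinuz-5.14.0-70.50.1.el9_0.x86_64 "
    "root=/dev/mapper/rhel-root ro iommu=pt intel_iommu=on default_hugepagesz=1G"
)

def check_dpdk_args(cmdline: str) -> bool:
    """Return True if all arguments needed for VFIO + 1G hugepages are present."""
    tokens = cmdline.split()
    required = {"iommu=pt", "intel_iommu=on", "default_hugepagesz=1G"}
    return required.issubset(tokens)

print(check_dpdk_args(sample_cmdline))  # True once the grubby step took effect
```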
2. setup hugepage number
# echo 20 >
/sys/devices/system/node/node0/hugepages/hugepages-1048576kB/nr_hugepages
# echo 20 >
/sys/devices/system/node/node1/hugepages/hugepages-1048576kB/nr_hugepages
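The reservation above is easy to cross-check against what testpmd will consume (a sketch using the figures from this report; any VM memory also drawn from these pages is not counted here):

```python
# Sketch: confirm the reserved 1 GiB hugepages cover testpmd's --socket-mem request.
# All figures come from the steps in this report.
PAGE_MB = 1024            # default_hugepagesz=1G
pages_per_node = 20       # echo 20 > .../nr_hugepages on each node
nodes = 2

total_mb = PAGE_MB * pages_per_node * nodes   # total hugepage memory reserved
testpmd_mb = 1024 + 1024                      # --socket-mem 1024,1024

print(total_mb)                               # 40960 MB reserved in total
assert total_mb >= testpmd_mb                 # plenty of headroom for testpmd
```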
3. bind the X540 ports to vfio-pci
# modprobe vfio-pci
# modprobe vfio
# dpdk-devbind.py --bind=vfio-pci 0000:5e:00.0
# dpdk-devbind.py --bind=vfio-pci 0000:5e:00.1
4. start dpdk-testpmd
# /usr/local/bin/dpdk-testpmd -l 1,3,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30 \
  --socket-mem 1024,1024 -n 4 \
  --vdev 'net_vhost0,iface=/tmp/vhost-user1,queues=4,client=1,iommu-support=1' \
  --vdev 'net_vhost1,iface=/tmp/vhost-user2,queues=4,client=1,iommu-support=1' \
  -b 0000:3b:00.0 -b 0000:3b:00.1 \
  -- --portmask=f -i --rxd=512 --txd=512 --rxq=4 --txq=4 --nb-cores=16 --forward-mode=io
EAL: Detected CPU lcores: 64
EAL: Detected NUMA nodes: 2
EAL: Detected static linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: VFIO support initialized
EAL: Using IOMMU type 1 (Type 1)
EAL: Probe PCI driver: net_ixgbe (8086:1528) device: 0000:5e:00.0 (socket 0)
EAL: Probe PCI driver: net_ixgbe (8086:1528) device: 0000:5e:00.1 (socket 0)
TELEMETRY: No legacy callbacks, legacy socket not created
Interactive-mode selected
Set io packet forwarding mode
testpmd: create a new mbuf pool <mb_pool_1>: n=275456, size=2176, socket=1
testpmd: preferred mempool ops selected: ring_mp_mc
testpmd: create a new mbuf pool <mb_pool_0>: n=275456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
Configuring Port 0 (socket 0)
Port 0: B4:96:91:14:22:C4
Configuring Port 1 (socket 0)
Port 1: B4:96:91:14:22:C6
Configuring Port 2 (socket 1)
VHOST_CONFIG: (/tmp/vhost-user1) vhost-user client: socket created, fd: 92
VHOST_CONFIG: (/tmp/vhost-user1) failed to connect: Connection refused
VHOST_CONFIG: (/tmp/vhost-user1) reconnecting...
Port 2: 56:48:4F:53:54:02
Configuring Port 3 (socket 1)
VHOST_CONFIG: (/tmp/vhost-user2) vhost-user client: socket created, fd: 95
VHOST_CONFIG: (/tmp/vhost-user2) failed to connect: Connection refused
VHOST_CONFIG: (/tmp/vhost-user2) reconnecting...
Port 3: 56:48:4F:53:54:03
Checking link statuses...
Done
testpmd> set portlist 0,2,1,3
testpmd> start
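The resource accounting behind the testpmd invocation can be made explicit (a sketch; the values come from the reported command line, and the pairing rule reflects testpmd's default "paired" forwarding topology, which is an assumption worth double-checking against the testpmd docs):

```python
# Sketch of the core/queue accounting for the testpmd command in this report.
lcores = [1, 3, 2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24, 26, 28, 30]
main_lcore, fwd_lcores = lcores[0], lcores[1:]
assert len(fwd_lcores) == 16          # matches --nb-cores=16

ports, rxq = 4, 4                     # 2 ixgbe + 2 vhost-user ports, --rxq=4
streams = ports * rxq
assert streams == len(fwd_lcores)     # one forwarding stream per forwarding core

# "set portlist 0,2,1,3": consecutive entries form forwarding pairs, so each
# physical NIC port is cross-connected to a vhost-user port (assumed paired topology).
portlist = [0, 2, 1, 3]
pairs = [(portlist[i], portlist[i + 1]) for i in range(0, len(portlist), 2)]
print(pairs)  # [(0, 2), (1, 3)]
```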
5. start a vm with two vhost-user interfaces
# virsh start rhel9.0
Domain 'rhel9.0' started
# virsh list --all
 Id   Name      State
-------------------------
 2    rhel9.0   running
The related xml:
    <interface type='vhostuser'>
      <mac address='88:66:da:5f:dd:12'/>
      <source type='unix' path='/tmp/vhost-user1' mode='server'/>
      <target dev=''/>
      <model type='virtio'/>
      <driver name='vhost' queues='4' rx_queue_size='1024' iommu='on' ats='on'/>
    </interface>
    <interface type='vhostuser'>
      <mac address='88:66:da:5f:dd:13'/>
      <source type='unix' path='/tmp/vhost-user2' mode='server'/>
      <target dev=''/>
      <model type='virtio'/>
      <driver name='vhost' queues='4' rx_queue_size='1024' iommu='on' ats='on'/>
    </interface>
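The libvirt interface settings have to line up with the vdev arguments on the testpmd side; a small parser (a sketch, with the first interface's XML from this report inlined so it runs standalone) makes the pairing explicit:

```python
# Sketch: extract the vhost-user settings libvirt uses and compare them with
# the testpmd vdev arguments from this report.
import xml.etree.ElementTree as ET

iface_xml = """
<interface type='vhostuser'>
  <mac address='88:66:da:5f:dd:12'/>
  <source type='unix' path='/tmp/vhost-user1' mode='server'/>
  <model type='virtio'/>
  <driver name='vhost' queues='4' rx_queue_size='1024' iommu='on' ats='on'/>
</interface>
"""

iface = ET.fromstring(iface_xml)
source = iface.find("source")
driver = iface.find("driver")

# client=1 on the testpmd vdev means QEMU must own the socket (mode='server').
assert source.get("mode") == "server"
assert source.get("path") == "/tmp/vhost-user1"   # matches iface=/tmp/vhost-user1
assert driver.get("queues") == "4"                # matches queues=4 on the vdev
assert driver.get("iommu") == "on"                # matches iommu-support=1
print("libvirt interface config matches the testpmd vdev")
```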
6. try to connect to the vm and check its status
We cannot log in to the VM because it hangs during boot.
Actual results:
The VM hangs during boot.
Expected results:
The VM boots successfully and both vhost-user interfaces work.
Additional info:
(1) This issue cannot be reproduced with dpdk 21.11.
-- 
You are receiving this mail because:
You are the assignee for the bug.