DPDK patches and discussions
* [Bug 1119] [dpdk-22.11]pvp_virtio_bonding/vhost_virtio_bonding_mode_from_0_to_6: start bonding device failed and core dumped when quit testpmd
@ 2022-11-01  9:58 bugzilla
  2022-11-15  6:13 ` bugzilla
  0 siblings, 1 reply; 2+ messages in thread
From: bugzilla @ 2022-11-01  9:58 UTC (permalink / raw)
  To: dev

https://bugs.dpdk.org/show_bug.cgi?id=1119

            Bug ID: 1119
           Summary: [dpdk-22.11]pvp_virtio_bonding/vhost_virtio_bonding_mode_from_0_to_6:
                    start bonding device failed and core dumped when quit testpmd
           Product: DPDK
           Version: 22.11
          Hardware: All
                OS: All
            Status: UNCONFIRMED
          Severity: normal
          Priority: Normal
         Component: testpmd
          Assignee: dev@dpdk.org
          Reporter: dukaix.yuan@intel.com
  Target Milestone: ---

[Environment]

DPDK version: 22.11-rc2

Other software versions: QEMU 7.1.0
OS: 5.15.45-051545-generic
Compiler: gcc-11.2.0
Hardware platform: Intel(R) Xeon(R) Platinum 8280M CPU @ 2.70GHz
NIC hardware: Ethernet Controller XL710 for 40GbE QSFP+ 1583
NIC firmware: 9.00 0x8000c8d4 1.3179.0
NIC driver: i40e 2.20.12

[Test Setup]
Steps to reproduce:

1. Bind one NIC port to vfio-pci:

usertools/dpdk-devbind.py --force --bind=vfio-pci 0000:af:00.0

2. Start vhost testpmd:

x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 -a 0000:af:00.0
--file-prefix=vhost_61927_20221101094930   --vdev
'net_vhost0,iface=vhost-net0,client=1,queues=1' --vdev
'net_vhost1,iface=vhost-net1,client=1,queues=1' --vdev
'net_vhost2,iface=vhost-net2,client=1,queues=1' --vdev
'net_vhost3,iface=vhost-net3,client=1,queues=1'  -- -i --port-topology=chained
--nb-cores=4 --txd=1024 --rxd=1024

testpmd> set fwd mac
testpmd> start

3. Start the VM:

taskset -c 20,21,22,23,24,25,26,27 /home/QEMU/qemu-7.1.0/bin/qemu-system-x86_64
 -name vm0 -enable-kvm -pidfile /tmp/.vm0.pid -daemonize -monitor
unix:/tmp/vm0_monitor.sock,server,nowait -netdev
user,id=nttsip1,hostfwd=tcp:10.239.252.214:6000-:22 -device
e1000,netdev=nttsip1  -cpu host -smp 6 -m 16384 -object
memory-backend-file,id=mem,size=16384M,mem-path=/mnt/huge,share=on -numa
node,memdev=mem -mem-prealloc -chardev
socket,path=/tmp/vm0_qga0.sock,server,nowait,id=vm0_qga0 -device virtio-serial
-device virtserialport,chardev=vm0_qga0,name=org.qemu.guest_agent.0 -vnc :4
-drive file=/home/image/ubuntu2004.img -chardev
socket,id=char0,path=./vhost-net0,server -netdev
type=vhost-user,id=netdev0,chardev=char0,vhostforce -device
virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01 -chardev
socket,id=char1,path=./vhost-net1,server -netdev
type=vhost-user,id=netdev1,chardev=char1,vhostforce -device
virtio-net-pci,netdev=netdev1,mac=52:54:00:00:00:02 -chardev
socket,id=char2,path=./vhost-net2,server -netdev
type=vhost-user,id=netdev2,chardev=char2,vhostforce -device
virtio-net-pci,netdev=netdev2,mac=52:54:00:00:00:03 -chardev
socket,id=char3,path=./vhost-net3,server -netdev
type=vhost-user,id=netdev3,chardev=char3,vhostforce -device
virtio-net-pci,netdev=netdev3,mac=52:54:00:00:00:04

4. Start testpmd in the VM:

echo 0 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
echo 1024 >
/sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
umount `awk '/hugetlbfs/ { print $2 }' /proc/mounts`
mkdir -p /mnt/huge
mount -t hugetlbfs nodev /mnt/huge
modprobe vfio
modprobe vfio-pci
echo 1 > /sys/module/vfio/parameters/enable_unsafe_noiommu_mode

usertools/dpdk-devbind.py --force --bind=vfio-pci 0000:00:05.0 0000:00:06.0
0000:00:07.0 0000:00:08.0

x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 0-5 -n 1 -a 0000:00:05.0 -a
0000:00:06.0 -a 0000:00:07.0 -a 0000:00:08.0
--file-prefix=dpdk_61927_20221101095244    -- -i --port-topology=chained
--nb-cores=5

testpmd> create bonded device 0 0 
testpmd> add bonding slave 0 4 
testpmd> add bonding slave 1 4 
testpmd> add bonding slave 2 4 
testpmd> port start 4 
testpmd> show bonding config 4
testpmd> quit
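
For reference, the testpmd bonding commands above correspond to the bonding PMD's C API. Below is a minimal, hypothetical sketch of that equivalent (the function name, the caller-supplied mbuf pool and the trimmed error handling are illustrative and not taken from the test scripts; DPDK 22.11 headers are assumed):

#include <rte_ethdev.h>
#include <rte_eth_bond.h>

static int
start_bonded_port(struct rte_mempool *mb_pool)
{
        struct rte_eth_conf conf = { 0 };
        int bond_port;
        uint16_t member;

        /* "create bonded device 0 0": mode 0 = BONDING_MODE_ROUND_ROBIN,
         * socket 0; the device name matches the one testpmd reports. */
        bond_port = rte_eth_bond_create("net_bonding_testpmd_0",
                                        BONDING_MODE_ROUND_ROBIN, 0);
        if (bond_port < 0)
                return bond_port;

        /* "add bonding slave <port> 4" for member ports 0, 1 and 2 */
        for (member = 0; member < 3; member++)
                if (rte_eth_bond_slave_add(bond_port, member) != 0)
                        return -1;

        /* "port start 4": configure, set up one rx/tx queue pair, start */
        if (rte_eth_dev_configure(bond_port, 1, 1, &conf) != 0 ||
            rte_eth_rx_queue_setup(bond_port, 0, 1024, 0, NULL, mb_pool) != 0 ||
            rte_eth_tx_queue_setup(bond_port, 0, 1024, 0, NULL) != 0)
                return -1;

        return rte_eth_dev_start(bond_port);
}

In the failing run below it is the final start step on the bonded port that is rejected with "Cannot start port since there are no slave devices".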



[Output]

root@virtiovm:~/dpdk# x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 0-5 -n 1
-a 0000:00:05.0 -a 0000:00:06.0 -a 0000:00:07.0 -a 0000:00:08.0
--file-prefix=dpdk_61927_20221101095244    -- -i --port-topology=chained
--nb-cores=5
EAL: Detected CPU lcores: 6
EAL: Detected NUMA nodes: 1
EAL: Detected static linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/dpdk_61927_20221101095244/mp_socket
EAL: Selected IOVA mode 'PA'
EAL: VFIO support initialized
EAL: Probe PCI driver: net_virtio (1af4:1000) device: 0000:00:05.0 (socket -1)
EAL: Using IOMMU type 8 (No-IOMMU)
EAL: Ignore mapping IO port bar(0)
EAL: Probe PCI driver: net_virtio (1af4:1000) device: 0000:00:06.0 (socket -1)
EAL: Ignore mapping IO port bar(0)
EAL: Probe PCI driver: net_virtio (1af4:1000) device: 0000:00:07.0 (socket -1)
EAL: Ignore mapping IO port bar(0)
EAL: Probe PCI driver: net_virtio (1af4:1000) device: 0000:00:08.0 (socket -1)
EAL: Ignore mapping IO port bar(0)
Interactive-mode selected
Warning: NUMA should be configured manually by using --port-numa-config and
--ring-numa-config parameters along with --numa.
testpmd: create a new mbuf pool <mb_pool_0>: n=187456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
Configuring Port 0 (socket 0)
EAL: Error disabling MSI-X interrupts for fd 39
Port 0: 52:54:00:00:00:01
Configuring Port 1 (socket 0)
EAL: Error disabling MSI-X interrupts for fd 43
Port 1: 52:54:00:00:00:02
Configuring Port 2 (socket 0)
EAL: Error disabling MSI-X interrupts for fd 47
Port 2: 52:54:00:00:00:03
Configuring Port 3 (socket 0)
EAL: Error disabling MSI-X interrupts for fd 51
Port 3: 52:54:00:00:00:04
Checking link statuses...
Done
testpmd> create bonded device 0 0
Created new bonded device net_bonding_testpmd_0 on (port 4).
testpmd> add bonding slave 0 4
testpmd> add bonding slave 1 4
testpmd> add bonding slave 2 4
testpmd> port start 4
Configuring Port 4 (socket 0)
bond_ethdev_start(1985) - Cannot start port since there are no slave devices
Fail to start port 4: Operation not permitted
Please stop the ports first
Done
testpmd> quit
Stopping port 0...
Stopping ports...
Please remove port 0 from bonded device.
Done
Stopping port 1...
Stopping ports...
Please remove port 1 from bonded device.
Done
Stopping port 2...
Stopping ports...
Please remove port 2 from bonded device.
Done
Stopping port 3...
Stopping ports...
Done
Stopping port 4...
Stopping ports...
Done
Shutting down port 0...
Closing ports...
Please remove port 0 from bonded device.
Done
Shutting down port 1...
Closing ports...
Please remove port 1 from bonded device.
Done
Shutting down port 2...
Closing ports...
Please remove port 2 from bonded device.
Done
Shutting down port 3...
Closing ports...
EAL: Error disabling MSI-X interrupts for fd 51
EAL: Releasing PCI mapped resource for 0000:00:08.0
EAL: Calling pci_unmap_resource for 0000:00:08.0 at 0x110080f000
EAL: Calling pci_unmap_resource for 0000:00:08.0 at 0x1100810000
Port 3 is closed
Done
Shutting down port 4...
Closing ports...
Port 4 is closed
Done
Bye...
EAL: Releasing PCI mapped resource for 0000:00:05.0
EAL: Calling pci_unmap_resource for 0000:00:05.0 at 0x1100800000
EAL: Calling pci_unmap_resource for 0000:00:05.0 at 0x1100801000
Port 0 is closed
Segmentation fault (core dumped)

[Expected Result]

The bonding device should start normally, and testpmd should quit without a core dump.


[Bad commit]

commit 339f1ba5135367e566c3ca9db68910fd8c7a6448 (HEAD, refs/bisect/bad)
Author: Ivan Malov <ivan.malov@oktetlabs.ru>
Date:   Tue Oct 18 22:45:49 2022 +0300

    net/bonding: make configure method re-entrant

    According to the documentation, rte_eth_dev_configure()
    can be invoked repeatedly while in stopped state.
    The current implementation in the bonding driver
    allows for that (technically), but the user sees
    warnings which say that back-end devices have
    already been harnessed. Re-factor the code
    to have cleanup before each (re-)configure.

    Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
    Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
    Acked-by: Chas Williams <3chas3@gmail.com>
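
For context, a minimal sketch of the call pattern the commit message describes (hypothetical fragment: port_id is assumed to be a stopped bonded port with member ports already attached, and the rx/tx queue setup normally done between configure and start is omitted):

#include <rte_ethdev.h>

static int
reconfigure_and_start(uint16_t port_id)
{
        struct rte_eth_conf conf = { 0 };

        /* rte_eth_dev_configure() is documented as callable again while
         * the port is stopped; the bisected commit makes the bonding PMD
         * clean up its back-end (member) devices before every such
         * (re-)configure. */
        if (rte_eth_dev_configure(port_id, 1, 1, &conf) != 0)
                return -1;
        if (rte_eth_dev_configure(port_id, 1, 1, &conf) != 0)
                return -1;

        /* In the failing run above, starting the bonded port reports
         * "Cannot start port since there are no slave devices". */
        return rte_eth_dev_start(port_id);
}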

-- 
You are receiving this mail because:
You are the assignee for the bug.


* [Bug 1119] [dpdk-22.11]pvp_virtio_bonding/vhost_virtio_bonding_mode_from_0_to_6: start bonding device failed and core dumped when quit testpmd
  2022-11-01  9:58 [Bug 1119] [dpdk-22.11]pvp_virtio_bonding/vhost_virtio_bonding_mode_from_0_to_6: start bonding device failed and core dumped when quit testpmd bugzilla
@ 2022-11-15  6:13 ` bugzilla
  0 siblings, 0 replies; 2+ messages in thread
From: bugzilla @ 2022-11-15  6:13 UTC (permalink / raw)
  To: dev

https://bugs.dpdk.org/show_bug.cgi?id=1119

Yuan,Dukai (dukaix.yuan@intel.com) changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
         Resolution|---                         |FIXED
             Status|CONFIRMED                   |RESOLVED

--- Comment #5 from Yuan,Dukai (dukaix.yuan@intel.com) ---
Verified on dpdk-22.11.0-rc2 (commit 10d9e91a769).
The test passed, so the bug is closed.

OS: Ubuntu 22.04.1 LTS/5.15.45-051545-generic

NIC type: Ethernet Controller XL710 for 40GbE QSFP+ 1583

NIC driver: i40e-2.20.12

NIC firmware: 9.00 0x8000c8d4 1.3179.0

-- 
You are receiving this mail because:
You are the assignee for the bug.

