From: bugzilla@dpdk.org
To: dev@dpdk.org
Subject: [Bug 1119] [dpdk-22.11]pvp_virtio_bonding/vhost_virtio_bonding_mode_from_0_to_6: start bonding device failed and core dumped when quit testpmd
Date: Tue, 01 Nov 2022 09:58:41 +0000

https://bugs.dpdk.org/show_bug.cgi?id=1119

            Bug ID: 1119
           Summary: [dpdk-22.11]pvp_virtio_bonding/vhost_virtio_bonding_mode_from_0_to_6:
                    start bonding device failed and core dumped when quit
                    testpmd
           Product: DPDK
           Version: 22.11
          Hardware: All
                OS: All
            Status: UNCONFIRMED
          Severity: normal
          Priority: Normal
         Component: testpmd
          Assignee: dev@dpdk.org
          Reporter: dukaix.yuan@intel.com
  Target Milestone: ---

[Environment]

DPDK version: DPDK 22.11-rc2
Other software versions: QEMU 7.1.0
OS: 5.15.45-051545-generic
Compiler: gcc-11.2.0
Hardware platform: Intel(R) Xeon(R) Platinum 8280M CPU @ 2.70GHz
NIC hardware: Ethernet Controller XL710 for 40GbE QSFP+ 1583
NIC firmware: 9.00 0x8000c8d4 1.3179.0
NIC driver: i40e 2.20.12

[Test Setup]
Steps to reproduce:
1. Bind 1 NIC port to vfio-pci:

usertools/dpdk-devbind.py --force --bind=vfio-pci 0000:af:00.0

2. Start vhost testpmd:

x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 -a 0000:af:00.0 \
  --file-prefix=vhost_61927_20221101094930 \
  --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=1' \
  --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1' \
  --vdev 'net_vhost2,iface=vhost-net2,client=1,queues=1' \
  --vdev 'net_vhost3,iface=vhost-net3,client=1,queues=1' \
  -- -i --port-topology=chained --nb-cores=4 --txd=1024 --rxd=1024

testpmd> set fwd mac
testpmd> start

3. Start VM:

taskset -c 20,21,22,23,24,25,26,27 /home/QEMU/qemu-7.1.0/bin/qemu-system-x86_64 \
  -name vm0 -enable-kvm -pidfile /tmp/.vm0.pid -daemonize \
  -monitor unix:/tmp/vm0_monitor.sock,server,nowait \
  -netdev user,id=nttsip1,hostfwd=tcp:10.239.252.214:6000-:22 \
  -device e1000,netdev=nttsip1 -cpu host -smp 6 -m 16384 \
  -object memory-backend-file,id=mem,size=16384M,mem-path=/mnt/huge,share=on \
  -numa node,memdev=mem -mem-prealloc \
  -chardev socket,path=/tmp/vm0_qga0.sock,server,nowait,id=vm0_qga0 \
  -device virtio-serial \
  -device virtserialport,chardev=vm0_qga0,name=org.qemu.guest_agent.0 -vnc :4 \
  -drive file=/home/image/ubuntu2004.img \
  -chardev socket,id=char0,path=./vhost-net0,server \
  -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
  -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01 \
  -chardev socket,id=char1,path=./vhost-net1,server \
  -netdev type=vhost-user,id=netdev1,chardev=char1,vhostforce \
  -device virtio-net-pci,netdev=netdev1,mac=52:54:00:00:00:02 \
  -chardev socket,id=char2,path=./vhost-net2,server \
  -netdev type=vhost-user,id=netdev2,chardev=char2,vhostforce \
  -device virtio-net-pci,netdev=netdev2,mac=52:54:00:00:00:03 \
  -chardev socket,id=char3,path=./vhost-net3,server \
  -netdev type=vhost-user,id=netdev3,chardev=char3,vhostforce \
  -device virtio-net-pci,netdev=netdev3,mac=52:54:00:00:00:04

4. Start testpmd in VM:

echo 0 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
echo 1024 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
umount `awk '/hugetlbfs/ { print $2 }' /proc/mounts`
mkdir -p /mnt/huge
mount -t hugetlbfs nodev /mnt/huge
modprobe vfio
modprobe vfio-pci
echo 1 > /sys/module/vfio/parameters/enable_unsafe_noiommu_mode
usertools/dpdk-devbind.py --force --bind=vfio-pci 0000:00:05.0 0000:00:06.0 0000:00:07.0 0000:00:08.0
x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 0-5 -n 1 -a 0000:00:05.0 \
  -a 0000:00:06.0 -a 0000:00:07.0 -a 0000:00:08.0 \
  --file-prefix=dpdk_61927_20221101095244 -- -i --port-topology=chained --nb-cores=5

testpmd> create bonded device 0 0
testpmd> add bonding slave 0 4
testpmd> add bonding slave 1 4
testpmd> add bonding slave 2 4
testpmd> port start 4
testpmd> show bonding config 4
testpmd> quit
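For reference, the bonding commands in step 4 map onto the DPDK 22.11 bonding API roughly as follows. This is a minimal sketch, not an extract from testpmd: EAL initialisation is assumed to have happened already, the mempool name "sketch_pool" and the single queue pair are illustrative, and error handling is simplified.

/* Sketch of what "create bonded device 0 0", "add bonding slave N 4"
 * and "port start 4" do through the 22.11 bonding API. Assumes
 * rte_eal_init() has run and ports 0-2 are the virtio-net candidates. */
#include <rte_ethdev.h>
#include <rte_eth_bond.h>
#include <rte_mbuf.h>

static int bond_bringup(void)
{
	struct rte_eth_conf port_conf = {0};
	struct rte_mempool *pool;
	uint16_t slave;
	int bond_port;

	/* "create bonded device 0 0": mode 0 (round-robin) on socket 0. */
	bond_port = rte_eth_bond_create("net_bonding_testpmd_0",
					BONDING_MODE_ROUND_ROBIN, 0);
	if (bond_port < 0)
		return bond_port;

	/* "add bonding slave 0|1|2 4". */
	for (slave = 0; slave < 3; slave++)
		if (rte_eth_bond_slave_add((uint16_t)bond_port, slave) != 0)
			return -1;

	/* "port start 4": configure, set up one Rx/Tx queue pair, start. */
	if (rte_eth_dev_configure((uint16_t)bond_port, 1, 1, &port_conf) != 0)
		return -1;
	pool = rte_pktmbuf_pool_create("sketch_pool", 8192, 256, 0,
				       RTE_MBUF_DEFAULT_BUF_SIZE, 0);
	if (pool == NULL)
		return -1;
	if (rte_eth_rx_queue_setup((uint16_t)bond_port, 0, 1024, 0, NULL,
				   pool) != 0)
		return -1;
	if (rte_eth_tx_queue_setup((uint16_t)bond_port, 0, 1024, 0, NULL) != 0)
		return -1;

	/* This is the step that fails in the report. */
	return rte_eth_dev_start((uint16_t)bond_port);
}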
[Output]

root@virtiovm:~/dpdk# x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 0-5 -n 1 -a 0000:00:05.0 -a 0000:00:06.0 -a 0000:00:07.0 -a 0000:00:08.0 --file-prefix=dpdk_61927_20221101095244 -- -i --port-topology=chained --nb-cores=5
EAL: Detected CPU lcores: 6
EAL: Detected NUMA nodes: 1
EAL: Detected static linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/dpdk_61927_20221101095244/mp_socket
EAL: Selected IOVA mode 'PA'
EAL: VFIO support initialized
EAL: Probe PCI driver: net_virtio (1af4:1000) device: 0000:00:05.0 (socket -1)
EAL: Using IOMMU type 8 (No-IOMMU)
EAL: Ignore mapping IO port bar(0)
EAL: Probe PCI driver: net_virtio (1af4:1000) device: 0000:00:06.0 (socket -1)
EAL: Ignore mapping IO port bar(0)
EAL: Probe PCI driver: net_virtio (1af4:1000) device: 0000:00:07.0 (socket -1)
EAL: Ignore mapping IO port bar(0)
EAL: Probe PCI driver: net_virtio (1af4:1000) device: 0000:00:08.0 (socket -1)
EAL: Ignore mapping IO port bar(0)
Interactive-mode selected
Warning: NUMA should be configured manually by using --port-numa-config and --ring-numa-config parameters along with --numa.
testpmd: create a new mbuf pool: n=187456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
Configuring Port 0 (socket 0)
EAL: Error disabling MSI-X interrupts for fd 39
Port 0: 52:54:00:00:00:01
Configuring Port 1 (socket 0)
EAL: Error disabling MSI-X interrupts for fd 43
Port 1: 52:54:00:00:00:02
Configuring Port 2 (socket 0)
EAL: Error disabling MSI-X interrupts for fd 47
Port 2: 52:54:00:00:00:03
Configuring Port 3 (socket 0)
EAL: Error disabling MSI-X interrupts for fd 51
Port 3: 52:54:00:00:00:04
Checking link statuses...
Done
testpmd> create bonded device 0 0
Created new bonded device net_bonding_testpmd_0 on (port 4).
testpmd> add bonding slave 0 4
testpmd> add bonding slave 1 4
testpmd> add bonding slave 2 4
testpmd> port start 4
Configuring Port 4 (socket 0)
bond_ethdev_start(1985) - Cannot start port since there are no slave devices
Fail to start port 4: Operation not permitted
Please stop the ports first
Done
testpmd> quit
Stopping port 0...
Stopping ports...
Please remove port 0 from bonded device.
Done
Stopping port 1...
Stopping ports...
Please remove port 1 from bonded device.
Done
Stopping port 2...
Stopping ports...
Please remove port 2 from bonded device.
Done
Stopping port 3...
Stopping ports...
Done
Stopping port 4...
Stopping ports...
Done
Shutting down port 0...
Closing ports...
Please remove port 0 from bonded device.
Done
Shutting down port 1...
Closing ports...
Please remove port 1 from bonded device.
Done
Shutting down port 2...
Closing ports...
Please remove port 2 from bonded device.
Done
Shutting down port 3...
Closing ports...
EAL: Error disabling MSI-X interrupts for fd 51
EAL: Releasing PCI mapped resource for 0000:00:08.0
EAL: Calling pci_unmap_resource for 0000:00:08.0 at 0x110080f000
EAL: Calling pci_unmap_resource for 0000:00:08.0 at 0x1100810000
Port 3 is closed
Done
Shutting down port 4...
Closing ports...
Port 4 is closed
Done
Bye...
EAL: Releasing PCI mapped resource for 0000:00:05.0
EAL: Calling pci_unmap_resource for 0000:00:05.0 at 0x1100800000
EAL: Calling pci_unmap_resource for 0000:00:05.0 at 0x1100801000
Port 0 is closed
Segmentation fault (core dumped)
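The "no slave devices" failure implies the slave list is already empty by the time the bonded port starts, even though three slaves were added just before. A quick way to confirm that state from application code might look like the following hedged sketch; dump_bond_slaves() is a hypothetical helper built on the 22.11 slave-query API.

/* Sketch: after "port start" fails, check whether the bonded device
 * still knows about its slaves. In the failing case this is expected
 * to print 0. bond_port (4 in the report) is assumed to be valid. */
#include <stdio.h>
#include <rte_ethdev.h>
#include <rte_eth_bond.h>

static void dump_bond_slaves(uint16_t bond_port)
{
	uint16_t slaves[RTE_MAX_ETHPORTS];
	int n = rte_eth_bond_slaves_get(bond_port, slaves, RTE_MAX_ETHPORTS);
	int active = rte_eth_bond_active_slaves_get(bond_port, slaves,
						    RTE_MAX_ETHPORTS);

	printf("bond port %u: %d slaves configured, %d active\n",
	       bond_port, n, active);
}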
[Expected Result]
The bonding device starts normally and there is no core dump when quitting testpmd.

[Bad commit]
commit 339f1ba5135367e566c3ca9db68910fd8c7a6448 (HEAD, refs/bisect/bad)
Author: Ivan Malov
Date:   Tue Oct 18 22:45:49 2022 +0300

    net/bonding: make configure method re-entrant

    According to the documentation, rte_eth_dev_configure()
    can be invoked repeatedly while in stopped state.
    The current implementation in the bonding driver allows
    for that (technically), but the user sees warnings which
    say that back-end devices have already been harnessed.
    Re-factor the code to have cleanup before each (re-)configure.

    Signed-off-by: Ivan Malov
    Reviewed-by: Andrew Rybchenko
    Acked-by: Chas Williams <3chas3@gmail.com>
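This commit adds a cleanup pass before every (re-)configure of the bonded device, and the symptom above suggests that the cleanup also drops slaves that were added beforehand. A minimal probe of the documented contract, i.e. that rte_eth_dev_configure() may be called repeatedly while the port is stopped without losing the slave set, could look like the sketch below; probe_reconfigure() is a hypothetical helper and the queue counts are illustrative.

/* Sketch: exercise the re-entrant configure path the commit touches.
 * Per the ethdev documentation, calling rte_eth_dev_configure() again
 * on a stopped port is legal, and the slave set added earlier should
 * survive it. In the failing case even the first configure may already
 * have emptied the list. */
#include <stdio.h>
#include <rte_ethdev.h>
#include <rte_eth_bond.h>

static int probe_reconfigure(uint16_t bond_port, uint16_t nb_slaves)
{
	struct rte_eth_conf conf = {0};
	uint16_t slaves[RTE_MAX_ETHPORTS];
	int before, after;

	if (rte_eth_dev_configure(bond_port, 1, 1, &conf) != 0)
		return -1;
	before = rte_eth_bond_slaves_get(bond_port, slaves, RTE_MAX_ETHPORTS);

	/* Second configure while stopped: allowed by the API contract. */
	if (rte_eth_dev_configure(bond_port, 1, 1, &conf) != 0)
		return -1;
	after = rte_eth_bond_slaves_get(bond_port, slaves, RTE_MAX_ETHPORTS);

	printf("slaves before reconfigure: %d, after: %d (expected %u)\n",
	       before, after, nb_slaves);
	return (before == after && after == (int)nb_slaves) ? 0 : -1;
}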