From: Jørgen Østergaard Sloth
To: users@dpdk.org
Date: Tue, 11 Aug 2020 05:33:28 +0000
Subject: [dpdk-users] [testpmd][mlx5] crashes when using: flow dump 0

Hi

DPDK 20.08 - Ubuntu 18.04.4 - OFED MLNX_OFED_LINUX-5.1-0.6.6.0-ubuntu18.04-x86_64 - ConnectX-5 CCAT

testpmd crashes when using: flow dump 0

It might have something to do with the whitelist option dv_flow_en=0, because with dv_flow_en=1 it dumps the flow info without crashing.
I'm using dv_flow_en=0 because RSS for ETH/VLAN/IP/UDP does not seem to work otherwise (i.e. when the whitelist argument is dv_flow_en=1).

Console output:

root@server102:/home/xciuser/SW/20.08# gdb --args ./testpmd -c 0xff -n 4 -w 86:00.0,dv_flow_en=0 -- --port-numa-config=0,1,1,1 --socket-num=0 --burst=64 --txd=1024 --rxd=1024 --mbcache=512 --rxq=8 --txq=8 --nb-cores=2 --forward-mode=rxonly -i --rss-ip
GNU gdb (Ubuntu 8.1-0ubuntu3) 8.1.0.20180409-git
Copyright (C) 2018 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
Type "show copying" and "show warranty" for details.
This GDB was configured as "x86_64-linux-gnu".
Type "show configuration" for configuration details.
For bug reporting instructions, please see: .
Find the GDB manual and other documentation resources online at: .
For help, type "help".
Type "apropos word" to search for commands related to "word"...
Reading symbols from ./testpmd...done.
(gdb) r
Starting program: /home/xciuser/SW/20.08/testpmd -c 0xff -n 4 -w 86:00.0,dv_flow_en=0 -- --port-numa-config=0,1,1,1 --socket-num=0 --burst=64 --txd=1024 --rxd=1024 --mbcache=512 --rxq=8 --txq=8 --nb-cores=2 --forward-mode=rxonly -i --rss-ip
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
EAL: Detected 72 lcore(s)
EAL: Detected 4 NUMA nodes
[New Thread 0x7ffff630c700 (LWP 12830)]
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
[New Thread 0x7ffff5b0b700 (LWP 12831)]
EAL: Selected IOVA mode 'PA'
EAL: No free hugepages reported in hugepages-2048kB
EAL: No free hugepages reported in hugepages-2048kB
EAL: No free hugepages reported in hugepages-2048kB
EAL: No free hugepages reported in hugepages-2048kB
EAL: No available hugepages reported in hugepages-2048kB
EAL: Probing VFIO support...
EAL: VFIO support initialized
[New Thread 0x7ffff530a700 (LWP 12832)]
[New Thread 0x7ffff4b09700 (LWP 12833)]
[New Thread 0x7fffeffff700 (LWP 12834)]
[New Thread 0x7fffef7fe700 (LWP 12835)]
[New Thread 0x7fffeeffd700 (LWP 12836)]
[New Thread 0x7fffee7fc700 (LWP 12837)]
[New Thread 0x7fffedffb700 (LWP 12838)]
EAL: Probe PCI driver: mlx5_pci (15b3:1017) device: 0000:86:00.0 (socket 2)
[New Thread 0x7fffed7fa700 (LWP 12841)]
EAL: No legacy callbacks, legacy socket not created
Set rxonly packet forwarding mode
Interactive-mode selected
testpmd: create a new mbuf pool : n=262144, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
testpmd: create a new mbuf pool : n=262144, size=2176, socket=2
testpmd: preferred mempool ops selected: ring_mp_mc
Warning! port-topology=paired and odd forward ports number, the last port will pair with itself.
Configuring Port 0 (socket 2)
Port 0: 0C:42:A1:46:5B:6A
Checking link statuses...
Done
testpmd> start
rxonly packet forwarding - ports=1 - cores=2 - streams=8 - NUMA support enabled, MP allocation mode: native
Logical Core 1 (socket 0) forwards packets on 4 streams:
  RX P=0/Q=0 (socket 2) -> TX P=0/Q=0 (socket 2) peer=02:00:00:00:00:00
  RX P=0/Q=1 (socket 2) -> TX P=0/Q=1 (socket 2) peer=02:00:00:00:00:00
  RX P=0/Q=2 (socket 2) -> TX P=0/Q=2 (socket 2) peer=02:00:00:00:00:00
  RX P=0/Q=3 (socket 2) -> TX P=0/Q=3 (socket 2) peer=02:00:00:00:00:00
Logical Core 2 (socket 0) forwards packets on 4 streams:
  RX P=0/Q=4 (socket 2) -> TX P=0/Q=4 (socket 2) peer=02:00:00:00:00:00
  RX P=0/Q=5 (socket 2) -> TX P=0/Q=5 (socket 2) peer=02:00:00:00:00:00
  RX P=0/Q=6 (socket 2) -> TX P=0/Q=6 (socket 2) peer=02:00:00:00:00:00
  RX P=0/Q=7 (socket 2) -> TX P=0/Q=7 (socket 2) peer=02:00:00:00:00:00

  rxonly packet forwarding packets/burst=64
  nb forwarding cores=2 - nb forwarding ports=1
  port 0: RX queue number: 8 Tx queue number: 8
    Rx offloads=0x0 Tx offloads=0x0
    RX queue: 0
      RX desc=1024 - RX free threshold=0
      RX threshold registers: pthresh=0 hthresh=0 wthresh=0
      RX Offloads=0x0
    TX queue: 0
      TX desc=1024 - TX free threshold=0
      TX threshold registers: pthresh=0 hthresh=0 wthresh=0
      TX offloads=0x0 - TX RS bit threshold=0
testpmd> flow dump 0
 [PORT ID]: port identifier
testpmd> flow dump 0
 [PORT ID]: port identifier
testpmd> flow dump 0

Thread 1 "testpmd" received signal SIGSEGV, Segmentation fault.
__strlen_avx2 () at ../sysdeps/x86_64/multiarch/strlen-avx2.S:62
62      ../sysdeps/x86_64/multiarch/strlen-avx2.S: No such file or directory.
(gdb) bt
#0  __strlen_avx2 () at ../sysdeps/x86_64/multiarch/strlen-avx2.S:62
#1  0x00007ffff6e6d4d3 in _IO_vfprintf_internal (s=0x7ffff71fc760 <_IO_2_1_stdout_>, format=0x5555564b4088 "%s(): Caught PMD error type %d (%s): %s%s: %s\n", ap=ap@entry=0x7fffffff7fb0) at vfprintf.c:1643
#2  0x00007ffff6f422ec in ___printf_chk (flag=1, format=) at printf_chk.c:35
#3  0x00005555558bec31 in port_flow_dump ()
#4  0x0000555555af1cf8 in cmdline_parse ()
#5  0x0000555555af0cf0 in cmdline_valid_buffer ()
#6  0x0000555555af44b1 in rdline_char_in ()
#7  0x0000555555af09f0 in cmdline_in.part.1.constprop ()
#8  0x0000555555af0fab in cmdline_interact ()
#9  0x00005555558abfe0 in prompt ()
#10 0x00005555556b05f2 in main ()
(gdb) q
A debugging session is active.
        Inferior 1 [process 12826] will be killed.
Quit anyway? (y or n) y
root@server102:/home/xciuser/SW/20.08#

Br
Jorgen