DPDK usage discussions
From: "Jørgen Østergaard Sloth" <Jorgen.Sloth@xci.dk>
To: "users@dpdk.org" <users@dpdk.org>
Subject: [dpdk-users] [testpmd][mlx5] crashes when using: flow dump 0
Date: Tue, 11 Aug 2020 05:33:28 +0000
Message-ID: <1461b05f7761459db0b21e978d92a0cf@xci.dk>

Hi

DPDK 20.08 - Ubuntu 18.04.4 - OFED MLNX_OFED_LINUX-5.1-0.6.6.0-ubuntu18.04-x86_64 - ConnectX-5 CCAT
Testpmd crashes when using: flow dump 0

It seems related to the whitelist devarg dv_flow_en=0: with dv_flow_en=1 the flow info is dumped without crashing.
I'm running with dv_flow_en=0 because RSS for ETH/VLAN/IP/UDP does not seem to work with dv_flow_en=1.
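
For reference, a defensive way to call the dump API from application code. This is a sketch, not the actual testpmd source: dump_port_flows() is a made-up helper, and it assumes the DPDK 20.08 signature rte_flow_dev_dump(port_id, file, error). The point is that the rte_flow_error struct is zeroed before the call, so a PMD that fails without filling it in leaves error.message NULL instead of pointing at indeterminate stack memory:

#include <stdio.h>
#include <string.h>

#include <rte_flow.h>
#include <rte_errno.h>

/* Hypothetical helper, assuming the 20.08 rte_flow_dev_dump()
 * signature. Zero-initializing the error struct means a PMD that
 * returns an error without calling rte_flow_error_set() leaves
 * error.message NULL rather than pointing at stack garbage. */
static int
dump_port_flows(uint16_t port_id, FILE *out)
{
        struct rte_flow_error error;

        memset(&error, 0, sizeof(error));
        if (rte_flow_dev_dump(port_id, out, &error) != 0) {
                fprintf(stderr, "flow dump failed: %s\n",
                        error.message ? error.message
                                      : rte_strerror(rte_errno));
                return -1;
        }
        return 0;
}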


Console output:
root@server102:/home/xciuser/SW/20.08# gdb --args ./testpmd -c 0xff -n 4 -w 86:00.0,dv_flow_en=0 -- --port-numa-config=0,1,1,1 --socket-num=0 --burst=64 --txd=1024 --rxd=1024 --mbcache=512 --rxq=8 --txq=8 --nb-cores=2 --forward-mode=rxonly -i --rss-ip
GNU gdb (Ubuntu 8.1-0ubuntu3) 8.1.0.20180409-git
Copyright (C) 2018 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-linux-gnu".
Type "show configuration" for configuration details.
For bug reporting instructions, please see:
<http://www.gnu.org/software/gdb/bugs/>.
Find the GDB manual and other documentation resources online at:
<http://www.gnu.org/software/gdb/documentation/>.
For help, type "help".
Type "apropos word" to search for commands related to "word"...
Reading symbols from ./testpmd...done.
(gdb) r
Starting program: /home/xciuser/SW/20.08/testpmd -c 0xff -n 4 -w 86:00.0,dv_flow_en=0 -- --port-numa-config=0,1,1,1 --socket-num=0 --burst=64 --txd=1024 --rxd=1024 --mbcache=512 --rxq=8 --txq=8 --nb-cores=2 --forward-mode=rxonly -i --rss-ip
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
EAL: Detected 72 lcore(s)
EAL: Detected 4 NUMA nodes
[New Thread 0x7ffff630c700 (LWP 12830)]
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
[New Thread 0x7ffff5b0b700 (LWP 12831)]
EAL: Selected IOVA mode 'PA'
EAL: No free hugepages reported in hugepages-2048kB
EAL: No free hugepages reported in hugepages-2048kB
EAL: No free hugepages reported in hugepages-2048kB
EAL: No free hugepages reported in hugepages-2048kB
EAL: No available hugepages reported in hugepages-2048kB
EAL: Probing VFIO support...
EAL: VFIO support initialized
[New Thread 0x7ffff530a700 (LWP 12832)]
[New Thread 0x7ffff4b09700 (LWP 12833)]
[New Thread 0x7fffeffff700 (LWP 12834)]
[New Thread 0x7fffef7fe700 (LWP 12835)]
[New Thread 0x7fffeeffd700 (LWP 12836)]
[New Thread 0x7fffee7fc700 (LWP 12837)]
[New Thread 0x7fffedffb700 (LWP 12838)]
EAL: Probe PCI driver: mlx5_pci (15b3:1017) device: 0000:86:00.0 (socket 2)
[New Thread 0x7fffed7fa700 (LWP 12841)]
EAL: No legacy callbacks, legacy socket not created
Set rxonly packet forwarding mode
Interactive-mode selected
testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=262144, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
testpmd: create a new mbuf pool <mbuf_pool_socket_2>: n=262144, size=2176, socket=2
testpmd: preferred mempool ops selected: ring_mp_mc

Warning! port-topology=paired and odd forward ports number, the last port will pair with itself.

Configuring Port 0 (socket 2)
Port 0: 0C:42:A1:46:5B:6A
Checking link statuses...
Done
testpmd> start
rxonly packet forwarding - ports=1 - cores=2 - streams=8 - NUMA support enabled, MP allocation mode: native
Logical Core 1 (socket 0) forwards packets on 4 streams:
  RX P=0/Q=0 (socket 2) -> TX P=0/Q=0 (socket 2) peer=02:00:00:00:00:00
  RX P=0/Q=1 (socket 2) -> TX P=0/Q=1 (socket 2) peer=02:00:00:00:00:00
  RX P=0/Q=2 (socket 2) -> TX P=0/Q=2 (socket 2) peer=02:00:00:00:00:00
  RX P=0/Q=3 (socket 2) -> TX P=0/Q=3 (socket 2) peer=02:00:00:00:00:00
Logical Core 2 (socket 0) forwards packets on 4 streams:
  RX P=0/Q=4 (socket 2) -> TX P=0/Q=4 (socket 2) peer=02:00:00:00:00:00
  RX P=0/Q=5 (socket 2) -> TX P=0/Q=5 (socket 2) peer=02:00:00:00:00:00
  RX P=0/Q=6 (socket 2) -> TX P=0/Q=6 (socket 2) peer=02:00:00:00:00:00
  RX P=0/Q=7 (socket 2) -> TX P=0/Q=7 (socket 2) peer=02:00:00:00:00:00

  rxonly packet forwarding packets/burst=64
  nb forwarding cores=2 - nb forwarding ports=1
  port 0: RX queue number: 8 Tx queue number: 8
    Rx offloads=0x0 Tx offloads=0x0
    RX queue: 0
      RX desc=1024 - RX free threshold=0
      RX threshold registers: pthresh=0 hthresh=0  wthresh=0
      RX Offloads=0x0
    TX queue: 0
      TX desc=1024 - TX free threshold=0
      TX threshold registers: pthresh=0 hthresh=0  wthresh=0
      TX offloads=0x0 - TX RS bit threshold=0
testpmd> flow dump
 0 [PORT ID]: port identifier
testpmd> flow dump
 0 [PORT ID]: port identifier
testpmd> flow dump 0

Thread 1 "testpmd" received signal SIGSEGV, Segmentation fault.
__strlen_avx2 () at ../sysdeps/x86_64/multiarch/strlen-avx2.S:62
62      ../sysdeps/x86_64/multiarch/strlen-avx2.S: No such file or directory.
(gdb) bt
#0  __strlen_avx2 () at ../sysdeps/x86_64/multiarch/strlen-avx2.S:62
#1  0x00007ffff6e6d4d3 in _IO_vfprintf_internal (s=0x7ffff71fc760 <_IO_2_1_stdout_>, format=0x5555564b4088 "%s(): Caught PMD error type %d (%s): %s%s: %s\n", ap=ap@entry=0x7fffffff7fb0) at vfprintf.c:1643
#2  0x00007ffff6f422ec in ___printf_chk (flag=1, format=<optimized out>) at printf_chk.c:35
#3  0x00005555558bec31 in port_flow_dump ()
#4  0x0000555555af1cf8 in cmdline_parse ()
#5  0x0000555555af0cf0 in cmdline_valid_buffer ()
#6  0x0000555555af44b1 in rdline_char_in ()
#7  0x0000555555af09f0 in cmdline_in.part.1.constprop ()
#8  0x0000555555af0fab in cmdline_interact ()
#9  0x00005555558abfe0 in prompt ()
#10 0x00005555556b05f2 in main ()
(gdb) q
A debugging session is active.

        Inferior 1 [process 12826] will be killed.

Quit anyway? (y or n) y
root@server102:/home/xciuser/SW/20.08#
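
My reading of the backtrace, for what it's worth: frame #1 is vfprintf handling testpmd's "Caught PMD error ..." format, and frame #0 is glibc running strlen() on one of the %s arguments. Since glibc special-cases an exactly-NULL %s pointer as "(null)", a fault inside __strlen_avx2 suggests one of the strings (likely error.message, given that port_flow_dump passes a stack-allocated rte_flow_error that the verbs engine may never fill in) is a non-NULL garbage pointer. A minimal stand-alone repro of that failure mode, with a deliberately wild pointer standing in for the unset field:

#include <stdio.h>

struct err {
        int type;
        const char *message;
};

int
main(void)
{
        /* Deliberately wild pointer, standing in for an rte_flow_error
         * field the PMD never set. */
        struct err e = { 1, (const char *)0x1 };

        /* glibc's vfprintf calls strlen() (the AVX2 variant on this
         * CPU) on every %s argument; a garbage pointer makes that read
         * unmapped memory and fault inside __strlen_avx2, matching
         * frame #0 above. */
        printf("Caught PMD error type %d: %s\n", e.type, e.message);
        return 0;
}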

Br Jorgen
