DPDK usage discussions
From: "jiangheng (G)" <jiangheng14@huawei.com>
To: Slava Ovsiienko <viacheslavo@nvidia.com>, Matan Azrad <matan@nvidia.com>
Cc: "users@dpdk.org" <users@dpdk.org>
Subject: Re: dpdk coredump when I used mlx5 NIC to send packets in secondary process
Date: Thu, 13 Jul 2023 14:14:39 +0000	[thread overview]
Message-ID: <631f8752ee294d069d87f79d2a0b1b71@huawei.com> (raw)
In-Reply-To: <DM6PR12MB375399CC6D90035110083A91DF37A@DM6PR12MB3753.namprd12.prod.outlook.com>


Hi Slava,
Sorry, the coredump was caused by my app. The primary process uses port 1, but the secondary process incorrectly uses port 0, which was never initialized.
Please close this issue.


From: Slava Ovsiienko [mailto:viacheslavo@nvidia.com]
Sent: Thursday, July 13, 2023 15:29
To: jiangheng (G) <jiangheng14@huawei.com>; Matan Azrad <matan@nvidia.com>
Cc: users@dpdk.org
Subject: RE: dpdk coredump when I used mlx5 NIC to send packets in secondary process


We see the array of queue data (Rx/Tx) is not filled correctly (NULLs).
Was the device started?
When was the secondary process started? After the primary probed the device and started traffic successfully?
Could you please describe the scenario in which the primary/secondary processes are started?

With best regards,

From: jiangheng (G) <jiangheng14@huawei.com>
Sent: Wednesday, July 12, 2023 3:08 PM
To: Matan Azrad <matan@nvidia.com>; Slava Ovsiienko <viacheslavo@nvidia.com>
Cc: users@dpdk.org
Subject: dpdk coredump when I used mlx5 NIC to send packets in secondary process

Hi Matan and Slava:
I observed a dpdk coredump when I used the mlx5 NIC to send packets in a secondary process:

Thread 6 received signal SIGSEGV, Segmentation fault.
[Switching to LWP 593263]
0x00007ffff7f5b44e in rte_eth_tx_burst (port_id=0, queue_id=3, tx_pkts=0x7fffefff7f28, nb_pkts=1)
at /usr/local/include/rte_ethdev.h:5777
5777 qd = p->txq.data[queue_id];
(gdb) bt
#0 0x00007ffff7f5b44e in rte_eth_tx_burst (port_id=0, queue_id=3, tx_pkts=0x7fffefff7f28, nb_pkts=1)
at /usr/local/include/rte_ethdev.h:5777
#1 0x00007ffff7f5ed7a in vdev_tx_xmit (stack=0x7fffe8000b20, pkts=0x7fffefff7f28, nr_pkts=1) at netif/lstack_vdev.c:142
#2 0x00007ffff7f4bd77 in eth_dev_output (netif=0x7fffe8002528, pbuf=0x0) at netif/lstack_ethdev.c:876
#3 0x00007ffff7e84ab8 in etharp_raw (netif=0x7fffe8002528, ethsrc_addr=0x7fffe8002564,
ethdst_addr=0x7ffff7f775da <ethbroadcast>, hwsrc_addr=0x7fffe8002564, ipsrc_addr=0x7fffe8002530,
hwdst_addr=0x7ffff7f775d4 <ethzero>, ipdst_addr=0x7fffe8002530, opcode=1) at core/ipv4/etharp.c:1179
#4 0x00007ffff7e84ec8 in etharp_request_dst (hw_dst_addr=<optimized out>, ipaddr=<optimized out>, netif=<optimized out>)
at core/ipv4/etharp.c:1206
#5 etharp_request (netif=<optimized out>, ipaddr=<optimized out>) at core/ipv4/etharp.c:1224

(gdb) p *p
$1 = {rx_pkt_burst = 0x7ffff63288c0 <mlx5_rx_burst_vec>, rx_queue_count = 0x0,
rx_descriptor_status = 0x7ffff61709d0 <mlx5_rx_descriptor_status>, rxq = {data = 0x0,
clbk = 0x7ffff6e515a8 <rte_eth_devices+104>}, reserved1 = {0, 0, 0}, tx_pkt_burst = 0x7ffff623ba60 <mlx5_tx_burst_none>,
tx_pkt_prepare = 0x0, tx_descriptor_status = 0x7ffff618a870 <mlx5_tx_descriptor_status>, txq = {data = 0x0,
clbk = 0x7ffff6e535a8 <rte_eth_devices+8296>}, reserved2 = {0, 0, 0}}
(gdb) p p->txq.data
$2 = (void **) 0x0

p->txq.data is NULL.

I used dpdk-21.11 (https://github.com/DPDK/dpdk/tree/v22.11). I looked through the stable branch, and there does not seem to be a relevant bugfix patch.
The same app does not produce this coredump on i40e/ixgbe NICs.




Thread overview: 4+ messages
2023-07-12 12:07 jiangheng (G)
2023-07-13  7:19 ` Maayan Kashani
2023-07-13  7:29 ` Slava Ovsiienko
2023-07-13 14:14   ` jiangheng (G) [this message]
