From: bugzilla@dpdk.org
To: dev@dpdk.org
Subject: [Bug 990] mlx5 pmd crashing when trying to free mbuf from secondary process when mprq is enabled
Date: Fri, 01 Apr 2022 10:49:42 +0000
Message-ID: <bug-990-3@http.bugs.dpdk.org/>
https://bugs.dpdk.org/show_bug.cgi?id=990
Bug ID: 990
Summary: mlx5 pmd crashing when trying to free mbuf from
secondary process when mprq is enabled
Product: DPDK
Version: 21.11
Hardware: x86
OS: Linux
Status: UNCONFIRMED
Severity: critical
Priority: Normal
Component: ethdev
Assignee: dev@dpdk.org
Reporter: sahithi.singam@oracle.com
Target Milestone: ---
This issue is always reproducible when two DPDK applications (a primary and a
secondary) are run on Mellanox virtual functions using the mlx5 PMD with MPRQ
enabled: a segmentation fault occurs whenever the secondary application tries
to free a packet received by the primary application. The received packet size
must be greater than 128 bytes to reproduce the issue.
Run the multi-process sample application as below on a node with Mellanox
virtual functions using the mlx5 PMD.
First, start the server application:
1. /opt/dpdk-mp_server -l 2-3 -n 4 \
     --allow 0000:00:05.0,mprq_en=1,rxqs_min_mprq=1,mprq_log_stride_num=9 \
     --allow 0000:00:06.0,mprq_en=1,rxqs_min_mprq=1,mprq_log_stride_num=9 \
     --proc-type=primary -- -p 0x3 -n 1
2. Once the server is started, start the client application:
~ # /opt/dpdk-mp_client -l 4 -n 4 --proc-type=auto -- -n 0
3. Then, from a different machine connected to the above machine, send
traffic to the server/client pair using testpmd run as below (note: the
packet size must be greater than 128 bytes to reproduce this crash; I am
using 256-byte packets).
/opt/dpdk-testpmd -l 2-3 -m 4 \
  --allow 0000:00:04.0,mprq_en=1,rxqs_min_mprq=1,mprq_log_stride_num=9 -- \
  --portmask=0x1 --mbcache=64 --forward-mode=txonly \
  --eth-peer=0,02:00:17:0A:4B:FB --stats-period=10 --txpkts=256 \
  --tx-ip=10.1.12.253,10.1.12.27 --txq 1 --rxq 1
Once the traffic arrives, the server receives the packets into mbufs and
passes them to the client. The client then tries to free the mbufs, at which
point it hits a segmentation fault.
Root cause of this issue:
=========================
When the server receives a packet, the mlx5 PMD internally calls
mprq_buf_to_pkt(), which attaches an external buffer to the mbuf (because the
packets sent from testpmd are larger than 128 bytes).
When external buffers are used, the mlx5 PMD sets the callback that is
supposed to free this external buffer to the address of mlx5_mprq_buf_free_cb,
i.e. mbuf->shinfo->free_cb is set to the address of mlx5_mprq_buf_free_cb in
the primary application (i.e. dpdk-mp_server).
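For reference, here is a minimal sketch of what the driver effectively does
when it attaches a stride as an external buffer. This is illustrative code,
not the actual mlx5 source: attach_stride_as_extbuf and its parameter names
are made up, only mlx5_mprq_buf_free_cb and the rte_mbuf API are real.

#include <rte_mbuf.h>

extern void mlx5_mprq_buf_free_cb(void *addr, void *opaque);

static void
attach_stride_as_extbuf(struct rte_mbuf *pkt, void *strd_addr,
                        rte_iova_t strd_iova, uint16_t strd_len,
                        struct rte_mbuf_ext_shared_info *shinfo,
                        void *mprq_buf)
{
        /* The callback address stored here is only meaningful in the
         * primary's address space (dpdk-mp_server). */
        shinfo->free_cb = mlx5_mprq_buf_free_cb;
        shinfo->fcb_opaque = mprq_buf;
        rte_mbuf_ext_refcnt_set(shinfo, 1);

        /* Attach the MPRQ stride to the mbuf as an external buffer;
         * the mbuf now carries a pointer to this shinfo. */
        rte_pktmbuf_attach_extbuf(pkt, strd_addr, strd_iova, strd_len, shinfo);
}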
But when this mbuf is handed to the secondary application (dpdk-mp_client),
freeing the mbuf makes the secondary application invoke free_cb from the
mbuf's shared info. That address is not mapped to the same function in the
secondary process, so the call ends in a segmentation fault or could lead to
corruption.
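A sketch of the failing path on the client side, assuming the standard
rte_pktmbuf_free() flow for mbufs carrying external buffers (client_free is
an illustrative wrapper; the comment paraphrases what the mbuf library does
internally):

#include <rte_mbuf.h>

static void
client_free(struct rte_mbuf *m)
{
        /*
         * rte_pktmbuf_free() detaches the external buffer and, once the
         * extbuf refcount drops to zero, effectively does:
         *
         *     m->shinfo->free_cb(m->buf_addr, m->shinfo->fcb_opaque);
         *
         * free_cb still holds the address mlx5_mprq_buf_free_cb had in
         * dpdk-mp_server; in dpdk-mp_client that address points into
         * unrelated code or unmapped memory, so the indirect call faults.
         */
        rte_pktmbuf_free(m);
}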
The symbol indeed resolves to a different address in each binary:
[linux-shell]$ nm dpdk_build/examples/dpdk-mp_client | grep -i "mlx5_mprq_buf_free_cb"
00000000008886e0 T mlx5_mprq_buf_free_cb
[linux-shell]$ nm dpdk_build/examples/dpdk-mp_server | grep -i "mlx5_mprq_buf_free_cb"
0000000000888e00 T mlx5_mprq_buf_free_cb
--
You are receiving this mail because:
You are the assignee for the bug.