From: bugzilla@dpdk.org
To: dev@dpdk.org
Subject: [DPDK/ethdev Bug 1536] net/tap: crash in tap pmd when using more than RTE_MP_MAX_FD_NUM rx queues
Date: Fri, 06 Sep 2024 13:57:54 +0000 [thread overview]
Message-ID: <bug-1536-3@http.bugs.dpdk.org/> (raw)
https://bugs.dpdk.org/show_bug.cgi?id=1536
Bug ID: 1536
Summary: net/tap: crash in tap pmd when using more than RTE_MP_MAX_FD_NUM rx queues
Product: DPDK
Version: 22.03
Hardware: All
OS: All
Status: UNCONFIRMED
Severity: normal
Priority: Normal
Component: ethdev
Assignee: dev@dpdk.org
Reporter: edwin.brossette@6wind.com
Target Milestone: ---
Hello,
I have recently stumbled upon an issue with my DPDK-based application running
the failsafe pmd. This pmd uses a tap device, and with it my application fails
to start if more than 8 rx queues are used. The issue appears to be related to
this patch:
https://git.dpdk.org/dpdk/commit/?id=c36ce7099c2187926cd62cff7ebd479823554929
I have seen in the documentation that there is a limit of 8 queues when a tap
device is shared between multiple processes. However, my application uses a
single primary process, with no secondary processes, yet it still runs into
this limitation.
Now if we look at this small chunk of code:
    memset(&msg, 0, sizeof(msg));
    strlcpy(msg.name, TAP_MP_REQ_START_RXTX, sizeof(msg.name));
    strlcpy(request_param->port_name, dev->data->name,
            sizeof(request_param->port_name));
    msg.len_param = sizeof(*request_param);
    for (i = 0; i < dev->data->nb_tx_queues; i++) {
        msg.fds[fd_iterator++] = process_private->txq_fds[i];
        msg.num_fds++;
        request_param->txq_count++;
    }
    for (i = 0; i < dev->data->nb_rx_queues; i++) {
        msg.fds[fd_iterator++] = process_private->rxq_fds[i];
        msg.num_fds++;
        request_param->rxq_count++;
    }
(Note that I am not using the latest DPDK version, but stable v23.11.1.
However, I believe the issue is still present on the latest version.)
There is no check on the maximum value i can take in these for loops. Since
msg.fds is limited to RTE_MP_MAX_FD_NUM entries by the IPC API (at most 8 fds
shared between processes), a buffer overflow can happen here.
See the struct declaration:
struct rte_mp_msg {
    char name[RTE_MP_MAX_NAME_LEN];
    int len_param;
    int num_fds;
    uint8_t param[RTE_MP_MAX_PARAM_LEN];
    int fds[RTE_MP_MAX_FD_NUM];
};
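For reference, the size of the fds array comes from the EAL IPC limits (in my
v23.11 tree this is lib/eal/include/rte_eal.h; the exact location may differ
in other versions):

    #define RTE_MP_MAX_FD_NUM  8    /* The max amount of fds */

So msg.fds[] can hold at most 8 descriptors in total, rx and tx combined.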
This means that if more than 8 queue fds are copied in total, the writes run
past the end of msg.fds[] and the program crashes. This is what happens on my
end, as I get the following log:
*** stack smashing detected ***: terminated
Reverting the commit mentioned above fixes my issue. Adding a check like this
also works for me:

    if (dev->data->nb_tx_queues + dev->data->nb_rx_queues > RTE_MP_MAX_FD_NUM)
        return -1;
I've made the changes on my local branch to fix my issue.
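A minimal sketch of that guard, assuming it sits right before the loops shown
above that fill msg.fds[] (dev, msg and process_private are the same variables
as in that snippet; TAP_LOG is the tap PMD log macro):

    /* Refuse to build the IPC message if the queue fds cannot all fit. */
    if (dev->data->nb_tx_queues + dev->data->nb_rx_queues >
            RTE_MP_MAX_FD_NUM) {
        TAP_LOG(ERR,
            "%d queue fds do not fit in the %d slots of msg.fds[]",
            dev->data->nb_tx_queues + dev->data->nb_rx_queues,
            RTE_MP_MAX_FD_NUM);
        return -1;
    }

This only turns the silent overflow into a clean error at start time; it does
not lift the 8-queue limit itself.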
----------
Potential fixes discussed:
1. Add "nb_rx_queues > RTE_MP_MAX_FD_NUM" check to not blindly update the
'msg.fds[]'
2. Prevent this to be a limit for tap PMD when there is only a primary process.
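For option 2, one possible shape (only a rough sketch; the placement in the
primary-side request-building code and the decision to merely warn are my
assumptions, not an agreed design) would be to skip sharing the fds over IPC,
with a warning, when they do not fit, so that a primary-only setup can still
use more than RTE_MP_MAX_FD_NUM queues:

    /* Hypothetical: if the queue fds do not fit in one IPC message, skip
     * sharing them with secondary processes instead of overflowing
     * msg.fds[]. A primary-only deployment then keeps working. */
    if (dev->data->nb_tx_queues + dev->data->nb_rx_queues >
            RTE_MP_MAX_FD_NUM) {
        TAP_LOG(WARNING,
            "too many queue fds to share over IPC; secondary processes will not get the tap queue fds");
        return 0;
    }

The downside is that a secondary process attaching later would silently lack
the queue fds, so this would need to be documented or reported more loudly.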