DPDK patches and discussions
From: Stephen Hemminger <stephen@networkplumber.org>
To: Ferruh Yigit <ferruh.yigit@amd.com>
Cc: Edwin Brossette <edwin.brossette@6wind.com>,
	dev@dpdk.org, Olivier Matz <olivier.matz@6wind.com>,
	Didier Pallard <didier.pallard@6wind.com>,
	Laurent Hardy <laurent.hardy@6wind.com>,
	kparameshwar@vmware.com, ferruh.yigit@intel.com
Subject: Re: Crash in tap pmd when using more than 8 rx queues
Date: Tue, 10 Sep 2024 09:58:02 -0700
Message-ID: <20240910095802.22f3ab60@hermes.local>
In-Reply-To: <ba62a81b-dcd1-4caf-a8c8-6a25f410ff8c@amd.com>

On Fri, 6 Sep 2024 12:16:47 +0100
Ferruh Yigit <ferruh.yigit@amd.com> wrote:

> On 9/5/2024 1:55 PM, Edwin Brossette wrote:
> > Hello,
> > 
> > I have recently stumbled into an issue with my DPDK-based application
> > running the failsafe pmd. This pmd uses a tap device, with which my
> > application fails to start if more than 8 rx queues are used. This issue
> > appears to be related to this patch:
> > https://git.dpdk.org/dpdk/commit/?id=c36ce7099c2187926cd62cff7ebd479823554929
> > 
> > I have seen in the documentation that there is a limit of 8 queues when
> > a tap device is shared between multiple processes. However, my
> > application uses a single primary process with no secondary process,
> > yet it appears I am still running into this limitation.
> > 
> > Now if we look at this small chunk of code:
> > 
> > memset(&msg, 0, sizeof(msg));
> > strlcpy(msg.name, TAP_MP_REQ_START_RXTX, sizeof(msg.name));
> > strlcpy(request_param->port_name, dev->data->name,
> >         sizeof(request_param->port_name));
> > msg.len_param = sizeof(*request_param);
> > for (i = 0; i < dev->data->nb_tx_queues; i++) {
> >     msg.fds[fd_iterator++] = process_private->txq_fds[i];
> >     msg.num_fds++;
> >     request_param->txq_count++;
> > }
> > for (i = 0; i < dev->data->nb_rx_queues; i++) {
> >     msg.fds[fd_iterator++] = process_private->rxq_fds[i];
> >     msg.num_fds++;
> >     request_param->rxq_count++;
> > }
> > (Note that I am not using the latest DPDK version, but stable v23.11.1.
> > I believe the issue is still present on the latest, however.)
> > 
> > There is no check on the maximum value i can take in these for loops.
> > Since the size of msg.fds is limited to 8 entries by the IPC API
> > (RTE_MP_MAX_FD_NUM), copying the fds of more than 8 queues overflows
> > the array.
> > 
> > See the struct declaration:
> > struct rte_mp_msg {
> >      char name[RTE_MP_MAX_NAME_LEN];
> >      int len_param;
> >      int num_fds;
> >      uint8_t param[RTE_MP_MAX_PARAM_LEN];
> >      int fds[RTE_MP_MAX_FD_NUM];
> > };
> > 
> > This means that if more than 8 queues are used, the program will
> > crash. This is what happens on my end, as I get the following log:
> > *** stack smashing detected ***: terminated
> > 
> > Reverting the commit mentioned above fixes my issue. Adding a check
> > like this also works for me:
> > 
> > if (dev->data->nb_tx_queues + dev->data->nb_rx_queues > RTE_MP_MAX_FD_NUM)
> >      return -1;
> > 
> > I've made the changes on my local branch to fix my issue. This mail is
> > just to bring attention to this problem.
> > Thank you in advance for considering it.
> >   
> 
> Hi Edwin,
> 
> Thanks for the report, I confirm the issue is valid, although that code
> has changed a little since then (to increase the limit of 8) [3].
> 
> And in this release Stephen has submitted another patch [1] to increase
> the limit even more, but regardless of the limit, the tap code needs to
> be fixed.
> 
> To fix:
> 1. We need to add the "nb_rx_queues > RTE_MP_MAX_FD_NUM" check you
> mentioned, so that 'msg.fds[]' is not updated blindly.
> 2. We should prevent this from becoming a limit for the tap PMD when
> there is only a primary process; this seems to have been an oversight
> on our end.
> 

It is not clear what the error handling should be if the user requests
10 queues but RTE_MP_MAX_FD_NUM is 8. Ideally, it should work if no secondary
process is used. But there is no good way to know that in the driver.
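
(For reference, the runtime guard Edwin suggested would look roughly like
this; TAP_LOG and the field names come from the tap PMD source, but the
exact message and placement here are only a sketch, not a committed fix:

    if (dev->data->nb_tx_queues + dev->data->nb_rx_queues >
            RTE_MP_MAX_FD_NUM) {
        TAP_LOG(ERR,
            "%u rx + %u tx queue fds do not fit in rte_mp_msg.fds[%d]",
            dev->data->nb_rx_queues, dev->data->nb_tx_queues,
            RTE_MP_MAX_FD_NUM);
        return -1;
    }

That fails the request cleanly instead of smashing the stack, but it still
turns a valid primary-only configuration into an error.)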

That is why it is best to just set TAP max queues to be less than
or equal to RTE_MP_MAX_FD_NUM, and enforce that with a static assertion
at compile time.
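
Something along these lines in the driver header would do it (a sketch
only: RTE_PMD_TAP_MAX_QUEUES is the existing cap in rte_eth_tap.h, but
defining it from RTE_MP_MAX_FD_NUM as shown is an assumption, not the
committed change):

    #include <assert.h>
    #include <rte_eal.h>    /* RTE_MP_MAX_FD_NUM, struct rte_mp_msg */

    /* Cap the PMD at what one rte_mp_msg can carry ... */
    #define RTE_PMD_TAP_MAX_QUEUES  RTE_MP_MAX_FD_NUM

    /* ... and fail the build if the relationship is ever broken. */
    static_assert(RTE_PMD_TAP_MAX_QUEUES <= RTE_MP_MAX_FD_NUM,
                  "TAP queue fds must fit in rte_mp_msg.fds[]");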

Thread overview: 6+ messages
2024-09-05 12:55 Edwin Brossette
2024-09-06 11:16 ` Ferruh Yigit
2024-09-06 14:04   ` Edwin Brossette
2024-09-06 14:14     ` Ferruh Yigit
2024-09-10 16:58   ` Stephen Hemminger [this message]
2024-09-10 17:25     ` Ferruh Yigit
