DPDK patches and discussions
From: "Morten Brørup" <mb@smartsharesystems.com>
To: "David Marchand" <david.marchand@redhat.com>,
	"Stephen Hemminger" <stephen@networkplumber.org>
Cc: <dev@dpdk.org>, <konstantin.v.ananyev@yandex.ru>
Subject: RE: [PATCH] dumpcap: fix mbuf pool ring type
Date: Mon, 2 Oct 2023 10:42:53 +0200	[thread overview]
Message-ID: <98CBD80474FA8B44BF855DF32C47DC35D87C27@smartserver.smartshare.dk> (raw)
In-Reply-To: <CAJFAV8yktcK4DZXCYeLnsJ==tEWvD3y=uS9rZFZX=HjZHdEvnQ@mail.gmail.com>

> From: David Marchand [mailto:david.marchand@redhat.com]
> Sent: Monday, 2 October 2023 09.34
> 
> On Fri, Aug 4, 2023 at 6:16 PM Stephen Hemminger
> <stephen@networkplumber.org> wrote:
> >
> > The ring used to store mbufs needs to be multiple producer,
> > multiple consumer because multiple queues might on multiple
> > cores might be allocating and same time (consume) and in
> > case of ring full, the mbufs will be returned (multiple producer).
> 
> I think I get the idea, but can you rephrase please?
> 
> 
> >
> > Bugzilla ID: 1271
> > Fixes: cb2440fd77af ("dumpcap: fix mbuf pool ring type")
> 
> This Fixes: tag looks wrong.
> 
> 
> > Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
> > ---
> >  app/dumpcap/main.c | 7 +++----
> >  1 file changed, 3 insertions(+), 4 deletions(-)
> >
> > diff --git a/app/dumpcap/main.c b/app/dumpcap/main.c
> > index 64294bbfb3e6..991174e95022 100644
> > --- a/app/dumpcap/main.c
> > +++ b/app/dumpcap/main.c
> > @@ -691,10 +691,9 @@ static struct rte_mempool *create_mempool(void)
> >                         data_size = mbuf_size;
> >         }
> >
> > -       mp = rte_pktmbuf_pool_create_by_ops(pool_name, num_mbufs,
> > -                                           MBUF_POOL_CACHE_SIZE, 0,
> > -                                           data_size,
> > -                                           rte_socket_id(), "ring_mp_sc");
> > +       mp = rte_pktmbuf_pool_create(pool_name, num_mbufs,
> > +                                    MBUF_POOL_CACHE_SIZE, 0,
> > +                                    data_size, rte_socket_id());
> 
> Switching to rte_pktmbuf_pool_create() still leaves the user with the
> possibility to shoot himself in the foot (I was thinking of setting
> some --mbuf-pool-ops-name EAL option).
> 
> This application has explicit requirements in terms of concurrent
> access (and I don't think the mempool library exposes per driver
> capabilities in that regard).
> The application was enforcing the use of mempool/ring so far.
> 
> I think it is safer to go with an explicit
> rte_pktmbuf_pool_create_by_ops(... "ring_mp_mc").
> WDYT?

<feature creep>
Or perhaps one of "ring_mt_rts" or "ring_mt_hts", if one of those mbuf pool drivers is specified on the command line; otherwise fall back to "ring_mp_mc".

Actually, I prefer Stephen's suggestion of using the default mbuf pool driver. The option is there for a reason.

However, David is right: We want to prevent the user from using a thread-unsafe mempool driver in this use case.

And I guess there are other use cases than this one where a thread-safe mempool driver is required. So a generalized function for getting the "upgraded" (i.e. thread-safe) variant of a mempool driver would be nice.
</feature creep>

Feel free to ignore my suggested feature creep, and go ahead with David's suggestion instead.

> 
> 
> >         if (mp == NULL)
> >                 rte_exit(EXIT_FAILURE,
> >                          "Mempool (%s) creation failed: %s\n", pool_name,
> > --
> > 2.39.2
> >
> 
> Thanks.
> 
> --
> David Marchand


Thread overview: 18+ messages
2023-08-04 16:16 Stephen Hemminger
2023-08-05  9:05 ` Morten Brørup
2023-10-02  7:33 ` David Marchand
2023-10-02  8:42   ` Morten Brørup [this message]
2023-11-06 19:23     ` Stephen Hemminger
2023-11-06 21:50       ` Morten Brørup
2023-11-07  2:36         ` Stephen Hemminger
2023-11-07  7:22           ` Morten Brørup
2023-11-07 16:41             ` Stephen Hemminger
2023-11-07 17:38               ` Morten Brørup
2023-11-08 16:50                 ` Stephen Hemminger
2023-11-08 17:43                   ` Morten Brørup
2023-11-07 17:00             ` Stephen Hemminger
2023-11-06 19:24   ` Stephen Hemminger
2023-11-06 19:34 ` [PATCH v2] " Stephen Hemminger
2023-11-08 17:47 ` [PATCH v3] " Stephen Hemminger
2023-11-09  7:21   ` Morten Brørup
2023-11-12 14:05     ` Thomas Monjalon
