From: David Marchand <david.marchand@redhat.com>
To: Stephen Hemminger <stephen@networkplumber.org>
Cc: dev@dpdk.org, "Morten Brørup" <mb@smartsharesystems.com>
Subject: Re: [PATCH] dumpcap: fix mbuf pool ring type
Date: Mon, 2 Oct 2023 09:33:50 +0200
Message-ID: <CAJFAV8yktcK4DZXCYeLnsJ==tEWvD3y=uS9rZFZX=HjZHdEvnQ@mail.gmail.com>
In-Reply-To: <20230804161604.61050-1-stephen@networkplumber.org>
On Fri, Aug 4, 2023 at 6:16 PM Stephen Hemminger
<stephen@networkplumber.org> wrote:
>
> The ring used to store mbufs needs to be multiple producer,
> multiple consumer because multiple queues on multiple cores
> might be allocating at the same time (consume), and when the
> ring is full, the mbufs will be returned (multiple producer).
I think I get the idea, but can you rephrase please?
>
> Bugzilla ID: 1271
> Fixes: cb2440fd77af ("dumpcap: fix mbuf pool ring type")
This Fixes: tag looks wrong.
> Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
> ---
> app/dumpcap/main.c | 7 +++----
> 1 file changed, 3 insertions(+), 4 deletions(-)
>
> diff --git a/app/dumpcap/main.c b/app/dumpcap/main.c
> index 64294bbfb3e6..991174e95022 100644
> --- a/app/dumpcap/main.c
> +++ b/app/dumpcap/main.c
> @@ -691,10 +691,9 @@ static struct rte_mempool *create_mempool(void)
> data_size = mbuf_size;
> }
>
> - mp = rte_pktmbuf_pool_create_by_ops(pool_name, num_mbufs,
> - MBUF_POOL_CACHE_SIZE, 0,
> - data_size,
> - rte_socket_id(), "ring_mp_sc");
> + mp = rte_pktmbuf_pool_create(pool_name, num_mbufs,
> + MBUF_POOL_CACHE_SIZE, 0,
> + data_size, rte_socket_id());
Switching to rte_pktmbuf_pool_create() still leaves the user with the
possibility to shoot himself in the foot (I was thinking of the
--mbuf-pool-ops-name EAL option, which overrides the default mempool
ops used by this API).
This application has explicit requirements in terms of concurrent
access (and I don't think the mempool library exposes per-driver
capabilities in that regard).
The application was enforcing the use of mempool/ring so far.
I think it is safer to go with an explicit
rte_pktmbuf_pool_create_by_ops(... "ring_mp_mc").
WDYT?
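For reference, a minimal sketch of what I have in mind: the same call
as before this patch, with only the ops name changed from "ring_mp_sc"
to "ring_mp_mc" (multi-producer, multi-consumer):

    mp = rte_pktmbuf_pool_create_by_ops(pool_name, num_mbufs,
                                        MBUF_POOL_CACHE_SIZE, 0,
                                        data_size,
                                        rte_socket_id(), "ring_mp_mc");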
> if (mp == NULL)
> rte_exit(EXIT_FAILURE,
> "Mempool (%s) creation failed: %s\n", pool_name,
> --
> 2.39.2
>
Thanks.
--
David Marchand