* [PATCH] dumpcap: fix mbuf pool ring type
@ 2023-08-04 16:16 Stephen Hemminger
2023-08-05 9:05 ` Morten Brørup
` (3 more replies)
0 siblings, 4 replies; 18+ messages in thread
From: Stephen Hemminger @ 2023-08-04 16:16 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger
The ring used to store mbufs needs to be multiple producer,
multiple consumer because multiple queues on multiple cores
might be allocating at the same time (consume) and, in case
of ring full, the mbufs will be returned (multiple producer).
Bugzilla ID: 1271
Fixes: cb2440fd77af ("dumpcap: fix mbuf pool ring type")
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
app/dumpcap/main.c | 7 +++----
1 file changed, 3 insertions(+), 4 deletions(-)
diff --git a/app/dumpcap/main.c b/app/dumpcap/main.c
index 64294bbfb3e6..991174e95022 100644
--- a/app/dumpcap/main.c
+++ b/app/dumpcap/main.c
@@ -691,10 +691,9 @@ static struct rte_mempool *create_mempool(void)
data_size = mbuf_size;
}
- mp = rte_pktmbuf_pool_create_by_ops(pool_name, num_mbufs,
- MBUF_POOL_CACHE_SIZE, 0,
- data_size,
- rte_socket_id(), "ring_mp_sc");
+ mp = rte_pktmbuf_pool_create(pool_name, num_mbufs,
+ MBUF_POOL_CACHE_SIZE, 0,
+ data_size, rte_socket_id());
if (mp == NULL)
rte_exit(EXIT_FAILURE,
"Mempool (%s) creation failed: %s\n", pool_name,
--
2.39.2
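For context, the access pattern behind this requirement looks roughly
like the sketch below (illustrative only: capture_one and capture_ring
are hypothetical names, not dumpcap's actual code):

    #include <rte_mbuf.h>
    #include <rte_ring.h>

    /* Runs in every capturing queue, potentially on many cores. */
    static void
    capture_one(struct rte_mbuf *pkt, struct rte_mempool *mp,
                struct rte_ring *capture_ring)
    {
            /* Allocating a copy dequeues from the pool's backing ring;
             * all queues/cores do this concurrently (multiple consumers). */
            struct rte_mbuf *copy = rte_pktmbuf_copy(pkt, mp, 0, UINT32_MAX);

            if (copy == NULL)
                    return;

            /* When the capture ring is full, each queue/core frees its
             * copy, enqueuing it back into the pool's ring (multiple
             * producers). */
            if (rte_ring_enqueue(capture_ring, copy) != 0)
                    rte_pktmbuf_free(copy);
    }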
* RE: [PATCH] dumpcap: fix mbuf pool ring type
2023-08-04 16:16 [PATCH] dumpcap: fix mbuf pool ring type Stephen Hemminger
@ 2023-08-05 9:05 ` Morten Brørup
2023-10-02 7:33 ` David Marchand
` (2 subsequent siblings)
3 siblings, 0 replies; 18+ messages in thread
From: Morten Brørup @ 2023-08-05 9:05 UTC (permalink / raw)
To: Stephen Hemminger, dev
> From: Stephen Hemminger [mailto:stephen@networkplumber.org]
> Sent: Friday, 4 August 2023 18.16
>
> The ring used to store mbufs needs to be multiple producer,
> multiple consumer because multiple queues on multiple cores
> might be allocating at the same time (consume) and, in case
> of ring full, the mbufs will be returned (multiple producer).
>
> Bugzilla ID: 1271
> Fixes: cb2440fd77af ("dumpcap: fix mbuf pool ring type")
> Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
> ---
Reviewed-by: Morten Brørup <mb@smartsharesystems.com>
* Re: [PATCH] dumpcap: fix mbuf pool ring type
2023-08-04 16:16 [PATCH] dumpcap: fix mbuf pool ring type Stephen Hemminger
2023-08-05 9:05 ` Morten Brørup
@ 2023-10-02 7:33 ` David Marchand
2023-10-02 8:42 ` Morten Brørup
2023-11-06 19:24 ` Stephen Hemminger
2023-11-06 19:34 ` [PATCH v2] " Stephen Hemminger
2023-11-08 17:47 ` [PATCH v3] " Stephen Hemminger
3 siblings, 2 replies; 18+ messages in thread
From: David Marchand @ 2023-10-02 7:33 UTC (permalink / raw)
To: Stephen Hemminger; +Cc: dev, Morten Brørup
On Fri, Aug 4, 2023 at 6:16 PM Stephen Hemminger
<stephen@networkplumber.org> wrote:
>
> The ring used to store mbufs needs to be multiple producer,
> multiple consumer because multiple queues on multiple cores
> might be allocating at the same time (consume) and, in case
> of ring full, the mbufs will be returned (multiple producer).
I think I get the idea, but can you rephrase please?
>
> Bugzilla ID: 1271
> Fixes: cb2440fd77af ("dumpcap: fix mbuf pool ring type")
This Fixes: tag looks wrong.
> Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
> ---
> app/dumpcap/main.c | 7 +++----
> 1 file changed, 3 insertions(+), 4 deletions(-)
>
> diff --git a/app/dumpcap/main.c b/app/dumpcap/main.c
> index 64294bbfb3e6..991174e95022 100644
> --- a/app/dumpcap/main.c
> +++ b/app/dumpcap/main.c
> @@ -691,10 +691,9 @@ static struct rte_mempool *create_mempool(void)
> data_size = mbuf_size;
> }
>
> - mp = rte_pktmbuf_pool_create_by_ops(pool_name, num_mbufs,
> - MBUF_POOL_CACHE_SIZE, 0,
> - data_size,
> - rte_socket_id(), "ring_mp_sc");
> + mp = rte_pktmbuf_pool_create(pool_name, num_mbufs,
> + MBUF_POOL_CACHE_SIZE, 0,
> + data_size, rte_socket_id());
Switching to rte_pktmbuf_pool_create() still leaves the user with the
possibility to shoot himself in the foot (I was thinking of setting
some --mbuf-pool-ops-name EAL option).
This application has explicit requirements in terms of concurrent
access (and I don't think the mempool library exposes per driver
capabilities in that regard).
The application was enforcing the use of mempool/ring so far.
I think it is safer to go with an explicit
rte_pktmbuf_pool_create_by_ops(... "ring_mp_mc").
WDYT?
> if (mp == NULL)
> rte_exit(EXIT_FAILURE,
> "Mempool (%s) creation failed: %s\n", pool_name,
> --
> 2.39.2
>
Thanks.
--
David Marchand
* RE: [PATCH] dumpcap: fix mbuf pool ring type
2023-10-02 7:33 ` David Marchand
@ 2023-10-02 8:42 ` Morten Brørup
2023-11-06 19:23 ` Stephen Hemminger
2023-11-06 19:24 ` Stephen Hemminger
1 sibling, 1 reply; 18+ messages in thread
From: Morten Brørup @ 2023-10-02 8:42 UTC (permalink / raw)
To: David Marchand, Stephen Hemminger; +Cc: dev, konstantin.v.ananyev
> From: David Marchand [mailto:david.marchand@redhat.com]
> Sent: Monday, 2 October 2023 09.34
>
> On Fri, Aug 4, 2023 at 6:16 PM Stephen Hemminger
> <stephen@networkplumber.org> wrote:
> >
> > The ring used to store mbufs needs to be multiple producer,
> > multiple consumer because multiple queues on multiple cores
> > might be allocating at the same time (consume) and, in case
> > of ring full, the mbufs will be returned (multiple producer).
>
> I think I get the idea, but can you rephrase please?
>
>
> >
> > Bugzilla ID: 1271
> > Fixes: cb2440fd77af ("dumpcap: fix mbuf pool ring type")
>
> This Fixes: tag looks wrong.
>
>
> > Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
> > ---
> > app/dumpcap/main.c | 7 +++----
> > 1 file changed, 3 insertions(+), 4 deletions(-)
> >
> > diff --git a/app/dumpcap/main.c b/app/dumpcap/main.c
> > index 64294bbfb3e6..991174e95022 100644
> > --- a/app/dumpcap/main.c
> > +++ b/app/dumpcap/main.c
> > @@ -691,10 +691,9 @@ static struct rte_mempool *create_mempool(void)
> > data_size = mbuf_size;
> > }
> >
> > - mp = rte_pktmbuf_pool_create_by_ops(pool_name, num_mbufs,
> > - MBUF_POOL_CACHE_SIZE, 0,
> > - data_size,
> > - rte_socket_id(), "ring_mp_sc");
> > + mp = rte_pktmbuf_pool_create(pool_name, num_mbufs,
> > + MBUF_POOL_CACHE_SIZE, 0,
> > + data_size, rte_socket_id());
>
> Switching to rte_pktmbuf_pool_create() still leaves the user with the
> possibility to shoot himself in the foot (I was thinking of setting
> some --mbuf-pool-ops-name EAL option).
>
> This application has explicit requirements in terms of concurrent
> access (and I don't think the mempool library exposes per driver
> capabilities in that regard).
> The application was enforcing the use of mempool/ring so far.
>
> I think it is safer to go with an explicit
> rte_pktmbuf_pool_create_by_ops(... "ring_mp_mc").
> WDYT?
<feature creep>
Or perhaps one of "ring_mt_rts" or "ring_mt_hts", if any of those mbuf pool drivers are specified on the command line; otherwise fall back to "ring_mp_mc".
Actually, I prefer Stephen's suggestion of using the default mbuf pool driver. The option is there for a reason.
However, David is right: We want to prevent the user from using a thread-unsafe mempool driver in this use case.
And I guess there might be other use cases than this one, where a thread-safe mempool driver is required. So adding a generalized function to get the "upgraded" (i.e. thread safe) variant of a mempool driver would be nice.
</feature creep>
Feel free to ignore my suggested feature creep, and go ahead with David's suggestion instead.
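A minimal sketch of what such an "upgrade" helper could look like
(entirely hypothetical: neither the function nor the mapping exists in
DPDK today):

    #include <string.h>

    /* Hypothetical: map thread-unsafe ring ops names to the
     * multi-producer/multi-consumer variant; names that are already
     * thread safe ("ring_mp_mc", "ring_mt_rts", "ring_mt_hts")
     * pass through unchanged. */
    static const char *
    mempool_ops_thread_safe_variant(const char *ops_name)
    {
            if (strcmp(ops_name, "ring_sp_sc") == 0 ||
                strcmp(ops_name, "ring_mp_sc") == 0 ||
                strcmp(ops_name, "ring_sp_mc") == 0)
                    return "ring_mp_mc";
            return ops_name;
    }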
>
>
> > if (mp == NULL)
> > rte_exit(EXIT_FAILURE,
> > "Mempool (%s) creation failed: %s\n", pool_name,
> > --
> > 2.39.2
> >
>
> Thanks.
>
> --
> David Marchand
* Re: [PATCH] dumpcap: fix mbuf pool ring type
2023-10-02 8:42 ` Morten Brørup
@ 2023-11-06 19:23 ` Stephen Hemminger
2023-11-06 21:50 ` Morten Brørup
0 siblings, 1 reply; 18+ messages in thread
From: Stephen Hemminger @ 2023-11-06 19:23 UTC (permalink / raw)
To: Morten Brørup; +Cc: David Marchand, dev, konstantin.v.ananyev
On Mon, 2 Oct 2023 10:42:53 +0200
Morten Brørup <mb@smartsharesystems.com> wrote:
> > Switching to rte_pktmbuf_pool_create() still leaves the user with the
> > possibility to shoot himself in the foot (I was thinking of setting
> > some --mbuf-pool-ops-name EAL option).
> >
> > This application has explicit requirements in terms of concurrent
> > access (and I don't think the mempool library exposes per driver
> > capabilities in that regard).
> > The application was enforcing the use of mempool/ring so far.
> >
> > I think it is safer to go with an explicit
> > rte_pktmbuf_pool_create_by_ops(... "ring_mp_mc").
> > WDYT?
>
> <feature creep>
> Or perhaps one of "ring_mt_rts" or "ring_mt_hts", if any of those mbuf pool drivers are specified on the command line; otherwise fall back to "ring_mp_mc".
>
> Actually, I prefer Stephen's suggestion of using the default mbuf pool driver. The option is there for a reason.
>
> However, David is right: We want to prevent the user from using a thread-unsafe mempool driver in this use case.
>
> And I guess there might be other use cases than this one, where a thread-safe mempool driver is required. So adding a generalized function to get the "upgraded" (i.e. thread safe) variant of a mempool driver would be nice.
> </feature creep>
If the user overrides the default mbuf pool type, then it will need to be thread safe for
the general case of driver as well (or they are on single cpu).
I think the patch should be applied as is.
* Re: [PATCH] dumpcap: fix mbuf pool ring type
2023-10-02 7:33 ` David Marchand
2023-10-02 8:42 ` Morten Brørup
@ 2023-11-06 19:24 ` Stephen Hemminger
1 sibling, 0 replies; 18+ messages in thread
From: Stephen Hemminger @ 2023-11-06 19:24 UTC (permalink / raw)
To: David Marchand; +Cc: dev, Morten Brørup
On Mon, 2 Oct 2023 09:33:50 +0200
David Marchand <david.marchand@redhat.com> wrote:
> Switching to rte_pktmbuf_pool_create() still leaves the user with the
> possibility to shoot himself in the foot (I was thinking of setting
> some --mbuf-pool-ops-name EAL option).
>
> This application has explicit requirements in terms of concurrent
> access (and I don't think the mempool library exposes per driver
> capabilities in that regard).
> The application was enforcing the use of mempool/ring so far.
>
> I think it is safer to go with an explicit
> rte_pktmbuf_pool_create_by_ops(... "ring_mp_mc").
> WDYT?
The dumpcap command does not have EAL command line arguments that can
be changed by the user.
* [PATCH v2] dumpcap: fix mbuf pool ring type
2023-08-04 16:16 [PATCH] dumpcap: fix mbuf pool ring type Stephen Hemminger
2023-08-05 9:05 ` Morten Brørup
2023-10-02 7:33 ` David Marchand
@ 2023-11-06 19:34 ` Stephen Hemminger
2023-11-08 17:47 ` [PATCH v3] " Stephen Hemminger
3 siblings, 0 replies; 18+ messages in thread
From: Stephen Hemminger @ 2023-11-06 19:34 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger
The internal buffer pool used for copies of mbufs captured
needs to be thread safe. If capturing on multiple interfaces
or multiple queues, the same pool will be used (consumers).
And if the capture ring gets full, the queues will need
to put back the capture buffer which leads to multiple producers.
Since this is the same use case as normal drivers, and
the default pool type can not be overridden on the dumpcap
command line, it is OK to use the default pool type.
Bugzilla ID: 1271
Fixes: cbb44143be74 ("app/dumpcap: add new packet capture application")
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
v2 - reword commit message
app/dumpcap/main.c | 7 +++----
1 file changed, 3 insertions(+), 4 deletions(-)
diff --git a/app/dumpcap/main.c b/app/dumpcap/main.c
index 64294bbfb3e6..991174e95022 100644
--- a/app/dumpcap/main.c
+++ b/app/dumpcap/main.c
@@ -691,10 +691,9 @@ static struct rte_mempool *create_mempool(void)
data_size = mbuf_size;
}
- mp = rte_pktmbuf_pool_create_by_ops(pool_name, num_mbufs,
- MBUF_POOL_CACHE_SIZE, 0,
- data_size,
- rte_socket_id(), "ring_mp_sc");
+ mp = rte_pktmbuf_pool_create(pool_name, num_mbufs,
+ MBUF_POOL_CACHE_SIZE, 0,
+ data_size, rte_socket_id());
if (mp == NULL)
rte_exit(EXIT_FAILURE,
"Mempool (%s) creation failed: %s\n", pool_name,
--
2.39.2
* RE: [PATCH] dumpcap: fix mbuf pool ring type
2023-11-06 19:23 ` Stephen Hemminger
@ 2023-11-06 21:50 ` Morten Brørup
2023-11-07 2:36 ` Stephen Hemminger
0 siblings, 1 reply; 18+ messages in thread
From: Morten Brørup @ 2023-11-06 21:50 UTC (permalink / raw)
To: Stephen Hemminger; +Cc: David Marchand, dev, konstantin.v.ananyev
> From: Stephen Hemminger [mailto:stephen@networkplumber.org]
> Sent: Monday, 6 November 2023 20.24
>
> On Mon, 2 Oct 2023 10:42:53 +0200
> Morten Brørup <mb@smartsharesystems.com> wrote:
>
> > > Switching to rte_pktmbuf_pool_create() still leaves the user with the
> > > possibility to shoot himself in the foot (I was thinking of setting
> > > some --mbuf-pool-ops-name EAL option).
> > >
> > > This application has explicit requirements in terms of concurrent
> > > access (and I don't think the mempool library exposes per driver
> > > capabilities in that regard).
> > > The application was enforcing the use of mempool/ring so far.
> > >
> > > I think it is safer to go with an explicit
> > > rte_pktmbuf_pool_create_by_ops(... "ring_mp_mc").
> > > WDYT?
> >
> > <feature creep>
> > Or perhaps one of "ring_mt_rts" or "ring_mt_hts", if any of those
> > mbuf pool drivers are specified on the command line; otherwise fall
> > back to "ring_mp_mc".
> >
> > Actually, I prefer Stephen's suggestion of using the default mbuf
> > pool driver. The option is there for a reason.
> >
> > However, David is right: We want to prevent the user from using a
> > thread-unsafe mempool driver in this use case.
> >
> > And I guess there might be other use cases than this one, where a
> > thread-safe mempool driver is required. So adding a generalized
> > function to get the "upgraded" (i.e. thread safe) variant of a mempool
> > driver would be nice.
> > </feature creep>
>
> If the user overrides the default mbuf pool type, then it will need to
> be thread safe for
> the general case of driver as well (or they are on single cpu).
If they have chosen a thread safe pool type, using the chosen type will work for dumpcap too. I think we all agree on that.
I'm not sure I understand your single-cpu argument, so let me try to paraphrase:
If their application only uses one EAL thread, and they have chosen "ring_sp_sc", dumpcap also works, because mempool accesses are still single-threaded. Is this correctly understood?
Or are you saying that if they want to use dumpcap, they must choose a thread safe pool type for their application (regardless if the application is single-threaded or not)?
>
>
> I think the patch should be applied as is.
* Re: [PATCH] dumpcap: fix mbuf pool ring type
2023-11-06 21:50 ` Morten Brørup
@ 2023-11-07 2:36 ` Stephen Hemminger
2023-11-07 7:22 ` Morten Brørup
0 siblings, 1 reply; 18+ messages in thread
From: Stephen Hemminger @ 2023-11-07 2:36 UTC (permalink / raw)
To: Morten Brørup; +Cc: David Marchand, dev, konstantin.v.ananyev
On Mon, 6 Nov 2023 22:50:50 +0100
Morten Brørup <mb@smartsharesystems.com> wrote:
> > > And I guess there might be other use cases than this one, where a
> > > thread-safe mempool driver is required. So adding a generalized
> > > function to get the "upgraded" (i.e. thread safe) variant of a mempool
> > > driver would be nice.
> > > </feature creep>
> >
> > If the user overrides the default mbuf pool type, then it will need to
> > be thread safe for
> > the general case of driver as well (or they are on single cpu).
>
> If they have chosen a thread safe pool type, using the chosen type will work for dumpcap too. I think we all agree on that.
>
> I'm not sure I understand your single-cpu argument, so let me try to paraphrase:
>
> If their application only uses one EAL thread, and they have chosen "ring_sp_sc", dumpcap also works, because mempool accesses are still single-threaded. Is this correctly understood?
>
> Or are you saying that if they want to use dumpcap, they must choose a thread safe pool type for their application (regardless if the application is single-threaded or not)?
There are no EAL command line options in dumpcap.
This is intentional.
QED: overriding the default pool type is not going to be possible.
* RE: [PATCH] dumpcap: fix mbuf pool ring type
2023-11-07 2:36 ` Stephen Hemminger
@ 2023-11-07 7:22 ` Morten Brørup
2023-11-07 16:41 ` Stephen Hemminger
2023-11-07 17:00 ` Stephen Hemminger
0 siblings, 2 replies; 18+ messages in thread
From: Morten Brørup @ 2023-11-07 7:22 UTC (permalink / raw)
To: Stephen Hemminger; +Cc: David Marchand, dev, konstantin.v.ananyev
> From: Stephen Hemminger [mailto:stephen@networkplumber.org]
> Sent: Tuesday, 7 November 2023 03.36
>
> On Mon, 6 Nov 2023 22:50:50 +0100
> Morten Brørup <mb@smartsharesystems.com> wrote:
>
> > > > And I guess there might be other use cases than this one, where a
> > > > thread-safe mempool driver is required. So adding a generalized
> > > > function to get the "upgraded" (i.e. thread safe) variant of a
> > > > mempool driver would be nice.
> > > > </feature creep>
> > >
> > > If the user overrides the default mbuf pool type, then it will need
> > > to be thread safe for the general case of driver as well (or they
> > > are on single cpu).
> >
> > If they have chosen a thread safe pool type, using the chosen type
> > will work for dumpcap too. I think we all agree on that.
> >
> > I'm not sure I understand your single-cpu argument, so let me try to
> > paraphrase:
> >
> > If their application only uses one EAL thread, and they have chosen
> > "ring_sp_sc", dumpcap also works, because mempool accesses are still
> > single-threaded. Is this correctly understood?
> >
> > Or are you saying that if they want to use dumpcap, they must choose
> > a thread safe pool type for their application (regardless if the
> > application is single-threaded or not)?
>
> There are no EAL command line options in dumpcap.
> This is intentional.
> QED: overriding the default pool type is not going to be possible.
The preferred mbuf pool type can be configured in the primary process by EAL params. If so configured, it is stored in a memzone named "mbuf_user_pool_ops".
And if it is set there, the secondary process will also use it as its preferred mbuf pool type.
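For reference, a minimal sketch of how a process can resolve that
setting (rte_mbuf_user_mempool_ops() and rte_mbuf_best_mempool_ops()
are the existing accessors from rte_mbuf_pool_ops.h; the print is only
for illustration):

    #include <stdio.h>
    #include <rte_mbuf_pool_ops.h>

    static void
    show_mbuf_pool_ops(void)
    {
            /* Ops name set via --mbuf-pool-ops-name, if any; in a
             * secondary process this reads the "mbuf_user_pool_ops"
             * memzone created by the primary. */
            const char *user_ops = rte_mbuf_user_mempool_ops();

            /* User choice if set, else platform ops, else the
             * compile-time default RTE_MBUF_DEFAULT_MEMPOOL_OPS. */
            const char *best_ops = rte_mbuf_best_mempool_ops();

            printf("user ops: %s, effective ops: %s\n",
                   user_ops == NULL ? "(none)" : user_ops, best_ops);
    }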
* Re: [PATCH] dumpcap: fix mbuf pool ring type
2023-11-07 7:22 ` Morten Brørup
@ 2023-11-07 16:41 ` Stephen Hemminger
2023-11-07 17:38 ` Morten Brørup
2023-11-07 17:00 ` Stephen Hemminger
1 sibling, 1 reply; 18+ messages in thread
From: Stephen Hemminger @ 2023-11-07 16:41 UTC (permalink / raw)
To: Morten Brørup; +Cc: David Marchand, dev, konstantin.v.ananyev
On Tue, 7 Nov 2023 08:22:37 +0100
Morten Brørup <mb@smartsharesystems.com> wrote:
> > >
> > > Or are you saying that if they want to use dumpcap, they must choose
> > > a thread safe pool type for their application (regardless if the
> > > application is single-threaded or not)?
> >
> > There are no EAL command line options in dumpcap.
> > This is intentional.
> > QED: overriding the default pool type is not going to be possible.
>
> The preferred mbuf pool type can be configured in the primary process by EAL params. If so configured, it is stored in a memzone named "mbuf_user_pool_ops".
> And if it is set there, the secondary process will also use it as its preferred mbuf pool type.
If the user decides to use a thread-unsafe mempool, wouldn't it break in every PMD as well?
* Re: [PATCH] dumpcap: fix mbuf pool ring type
2023-11-07 7:22 ` Morten Brørup
2023-11-07 16:41 ` Stephen Hemminger
@ 2023-11-07 17:00 ` Stephen Hemminger
1 sibling, 0 replies; 18+ messages in thread
From: Stephen Hemminger @ 2023-11-07 17:00 UTC (permalink / raw)
To: Morten Brørup; +Cc: David Marchand, dev, konstantin.v.ananyev
On Tue, 7 Nov 2023 08:22:37 +0100
Morten Brørup <mb@smartsharesystems.com> wrote:
> > > Or are you saying that if they want to use dumpcap, they must choose
> > > a thread safe pool type for their application (regardless if the
> > > application is single-threaded or not)?
> >
> > There are no EAL command line options in dumpcap.
> > This is intentional.
> > QED: overriding the default pool type is not going to be possible.
>
> The preferred mbuf pool type can be configured in the primary process by EAL params. If so configured, it is stored in a memzone named "mbuf_user_pool_ops".
> And if it is set there, the secondary process will also use it as its preferred mbuf pool type.
I notice that no other app or example is using the create_by_ops except pdump/pcapng/dumpcap.
~/DPDK/main/examples $ git grep rte_pktmbuf_pool_create_by_ops
~/DPDK/main/app $ git grep rte_pktmbuf_pool_create_by_ops
dumpcap/main.c: mp = rte_pktmbuf_pool_create_by_ops(pool_name, num_mbufs,
pdump/main.c: mbuf_pool = rte_pktmbuf_pool_create_by_ops(mempool_name,
test/test_pcapng.c: mp = rte_pktmbuf_pool_create_by_ops("pcapng_test_pool", IOV_MAX + NUM_PACKETS,
* RE: [PATCH] dumpcap: fix mbuf pool ring type
2023-11-07 16:41 ` Stephen Hemminger
@ 2023-11-07 17:38 ` Morten Brørup
2023-11-08 16:50 ` Stephen Hemminger
0 siblings, 1 reply; 18+ messages in thread
From: Morten Brørup @ 2023-11-07 17:38 UTC (permalink / raw)
To: Stephen Hemminger; +Cc: David Marchand, dev, konstantin.v.ananyev
> From: Stephen Hemminger [mailto:stephen@networkplumber.org]
> Sent: Tuesday, 7 November 2023 17.41
>
> On Tue, 7 Nov 2023 08:22:37 +0100
> Morten Brørup <mb@smartsharesystems.com> wrote:
>
> > > >
> > > > Or are you saying that if they want to use dumpcap, they must
> > > > choose a thread safe pool type for their application (regardless
> > > > if the application is single-threaded or not)?
> > >
> > > There are no EAL command line options in dumpcap.
> > > This is intentional.
> > > QED: overriding the default pool type is not going to be possible.
> >
> > The preferred mbuf pool type can be configured in the primary process
> > by EAL params. If so configured, it is stored in a memzone named
> > "mbuf_user_pool_ops".
> > And if it is set there, the secondary process will also use it as its
> > preferred mbuf pool type.
>
> If the user decides to use a thread-unsafe mempool, wouldn't it break
> in every PMD as well?
Yes, except if the application only uses one thread for packet processing. Then thread safety is not necessary.
* Re: [PATCH] dumpcap: fix mbuf pool ring type
2023-11-07 17:38 ` Morten Brørup
@ 2023-11-08 16:50 ` Stephen Hemminger
2023-11-08 17:43 ` Morten Brørup
0 siblings, 1 reply; 18+ messages in thread
From: Stephen Hemminger @ 2023-11-08 16:50 UTC (permalink / raw)
To: Morten Brørup; +Cc: David Marchand, dev, konstantin.v.ananyev
On Tue, 7 Nov 2023 18:38:32 +0100
Morten Brørup <mb@smartsharesystems.com> wrote:
> > If the user decides to use a thread-unsafe mempool, wouldn't it break
> > in every PMD as well?
>
> Yes, except if the application only uses one thread for packet processing. Then thread safety is not necessary.
>
If the application only used a single thread, then a single-consumer pool
would work fine for dumpcap. Only the primary thread would ever allocate.
But both secondary and primary could be freeing mbufs.
* RE: [PATCH] dumpcap: fix mbuf pool ring type
2023-11-08 16:50 ` Stephen Hemminger
@ 2023-11-08 17:43 ` Morten Brørup
0 siblings, 0 replies; 18+ messages in thread
From: Morten Brørup @ 2023-11-08 17:43 UTC (permalink / raw)
To: Stephen Hemminger; +Cc: David Marchand, dev, konstantin.v.ananyev
> From: Stephen Hemminger [mailto:stephen@networkplumber.org]
> Sent: Wednesday, 8 November 2023 17.51
>
> On Tue, 7 Nov 2023 18:38:32 +0100
> Morten Brørup <mb@smartsharesystems.com> wrote:
>
> > > If the user decides to use a thread-unsafe mempool, wouldn't it
> > > break in every PMD as well?
> >
> > Yes, except if the application only uses one thread for packet
> > processing. Then thread safety is not necessary.
> >
>
>
> If the application only used a single thread, then a single-consumer
> pool would work fine for dumpcap. Only the primary thread would ever
> allocate. But both secondary and primary could be freeing mbufs.
I think it is an acceptable risk. Perhaps mention somewhere that dumpcap breaks the application if it uses non-thread-safe mbuf ops.
Acked-by: Morten Brørup <mb@smartsharesystems.com>
Optionally, dumpcap could compare the return value of rte_mbuf_best_mempool_ops() [1] with the few known unsafe ops. However, this is probably going to be forgotten, if new non-thread safe mempool ops are ever added to DPDK.
[1]: https://elixir.bootlin.com/dpdk/latest/source/lib/mbuf/rte_mbuf_pool_ops.c#L88
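A minimal sketch of that check, assuming a hand-maintained list of the
non-thread-safe ring ops names (the helper itself is hypothetical):

    #include <stdbool.h>
    #include <string.h>
    #include <rte_common.h>
    #include <rte_mbuf_pool_ops.h>

    /* Hand-maintained; would silently go stale if a new non-thread-safe
     * mempool ops were ever added to DPDK. */
    static const char * const non_thread_safe_ops[] = {
            "ring_sp_sc", "ring_mp_sc", "ring_sp_mc",
    };

    static bool
    default_mbuf_ops_thread_safe(void)
    {
            const char *ops = rte_mbuf_best_mempool_ops();
            unsigned int i;

            for (i = 0; i < RTE_DIM(non_thread_safe_ops); i++)
                    if (strcmp(ops, non_thread_safe_ops[i]) == 0)
                            return false;
            return true;
    }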
* [PATCH v3] dumpcap: fix mbuf pool ring type
2023-08-04 16:16 [PATCH] dumpcap: fix mbuf pool ring type Stephen Hemminger
` (2 preceding siblings ...)
2023-11-06 19:34 ` [PATCH v2] " Stephen Hemminger
@ 2023-11-08 17:47 ` Stephen Hemminger
2023-11-09 7:21 ` Morten Brørup
3 siblings, 1 reply; 18+ messages in thread
From: Stephen Hemminger @ 2023-11-08 17:47 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger
The internal buffer pool used for copies of mbufs captured
needs to be thread safe. If capturing on multiple interfaces
or multiple queues, the same pool will be used (consumers).
And if the capture ring gets full, the queues will need
to put back the capture buffer which leads to multiple producers.
Bugzilla ID: 1271
Fixes: cbb44143be74 ("app/dumpcap: add new packet capture application")
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
v3 - just change ops, don't use default pool ops
app/dumpcap/main.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/app/dumpcap/main.c b/app/dumpcap/main.c
index 64294bbfb3e6..4f581bd341d8 100644
--- a/app/dumpcap/main.c
+++ b/app/dumpcap/main.c
@@ -694,7 +694,7 @@ static struct rte_mempool *create_mempool(void)
mp = rte_pktmbuf_pool_create_by_ops(pool_name, num_mbufs,
MBUF_POOL_CACHE_SIZE, 0,
data_size,
- rte_socket_id(), "ring_mp_sc");
+ rte_socket_id(), "ring_mp_mc");
if (mp == NULL)
rte_exit(EXIT_FAILURE,
"Mempool (%s) creation failed: %s\n", pool_name,
--
2.39.2
* RE: [PATCH v3] dumpcap: fix mbuf pool ring type
2023-11-08 17:47 ` [PATCH v3] " Stephen Hemminger
@ 2023-11-09 7:21 ` Morten Brørup
2023-11-12 14:05 ` Thomas Monjalon
0 siblings, 1 reply; 18+ messages in thread
From: Morten Brørup @ 2023-11-09 7:21 UTC (permalink / raw)
To: Stephen Hemminger, dev
> From: Stephen Hemminger [mailto:stephen@networkplumber.org]
> Sent: Wednesday, 8 November 2023 18.48
>
> The internal buffer pool used for copies of mbufs captured
> needs to be thread safe. If capturing on multiple interfaces
> or multiple queues, the same pool will be used (consumers).
> And if the capture ring gets full, the queues will need
> to put back the capture buffer which leads to multiple producers.
>
> Bugzilla ID: 1271
> Fixes: cbb44143be74 ("app/dumpcap: add new packet capture application")
> Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
> ---
Reviewed-by: Morten Brørup <mb@smartsharesystems.com>
* Re: [PATCH v3] dumpcap: fix mbuf pool ring type
2023-11-09 7:21 ` Morten Brørup
@ 2023-11-12 14:05 ` Thomas Monjalon
0 siblings, 0 replies; 18+ messages in thread
From: Thomas Monjalon @ 2023-11-12 14:05 UTC (permalink / raw)
To: Stephen Hemminger; +Cc: dev, Morten Brørup
09/11/2023 08:21, Morten Brørup:
> > From: Stephen Hemminger [mailto:stephen@networkplumber.org]
> > Sent: Wednesday, 8 November 2023 18.48
> >
> > The internal buffer pool used for copies of mbufs captured
> > needs to be thread safe. If capturing on multiple interfaces
> > or multiple queues, the same pool will be used (consumers).
> > And if the capture ring gets full, the queues will need
> > to put back the capture buffer which leads to multiple producers.
> >
> > Bugzilla ID: 1271
> > Fixes: cbb44143be74 ("app/dumpcap: add new packet capture application")
> > Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
>
> Reviewed-by: Morten Brørup <mb@smartsharesystems.com>
Applied, thanks.