DPDK patches and discussions
 help / color / mirror / Atom feed
* [PATCH] lib/mempool : rte_mempool_avail_count, fixing return bigger than mempool size
@ 2023-05-16 13:41 Yasin CANER
  2023-05-16 15:23 ` Stephen Hemminger
  0 siblings, 1 reply; 12+ messages in thread
From: Yasin CANER @ 2023-05-16 13:41 UTC (permalink / raw)
  To: dev; +Cc: Yasin CANER, Olivier Matz, Andrew Rybchenko

From: Yasin CANER <yasin.caner@ulakhaberlesme.com.tr>

After running for a while, the rte_mempool_avail_count() function returns a
count bigger than the mempool size, which causes rte_mempool_in_use_count()
to be miscalculated.

Clamping the count to the mempool size avoids the miscalculation of
rte_mempool_in_use_count().

Bugzilla ID: 1229

Signed-off-by: Yasin CANER <yasin.caner@ulakhaberlesme.com.tr>
---
 lib/mempool/rte_mempool.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/lib/mempool/rte_mempool.c b/lib/mempool/rte_mempool.c
index cf5dea2304..17ed1eb845 100644
--- a/lib/mempool/rte_mempool.c
+++ b/lib/mempool/rte_mempool.c
@@ -1009,8 +1009,11 @@ rte_mempool_avail_count(const struct rte_mempool *mp)
 
 	count = rte_mempool_ops_get_count(mp);
 
-	if (mp->cache_size == 0)
+	if (mp->cache_size == 0) {
+		if (count > mp->size)
+			return mp->size;
 		return count;
+	}
 
 	for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++)
 		count += mp->local_cache[lcore_id].len;
-- 
2.25.1


^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH] lib/mempool : rte_mempool_avail_count, fixing return bigger than mempool size
  2023-05-16 13:41 [PATCH] lib/mempool : rte_mempool_avail_count, fixing return bigger than mempool size Yasin CANER
@ 2023-05-16 15:23 ` Stephen Hemminger
  2023-05-16 16:03   ` Morten Brørup
  0 siblings, 1 reply; 12+ messages in thread
From: Stephen Hemminger @ 2023-05-16 15:23 UTC (permalink / raw)
  To: Yasin CANER; +Cc: dev, Yasin CANER, Olivier Matz, Andrew Rybchenko

On Tue, 16 May 2023 13:41:46 +0000
Yasin CANER <yasinncaner@gmail.com> wrote:

> From: Yasin CANER <yasin.caner@ulakhaberlesme.com.tr>
> 
> after a while working rte_mempool_avail_count function returns bigger
> than mempool size that cause miscalculation rte_mempool_in_use_count.
> 
> it helps to avoid miscalculation rte_mempool_in_use_count.
> 
> Bugzilla ID: 1229
> 
> Signed-off-by: Yasin CANER <yasin.caner@ulakhaberlesme.com.tr>

An alternative that avoids some code duplication.

diff --git a/lib/mempool/rte_mempool.c b/lib/mempool/rte_mempool.c
index cf5dea2304a7..2406b112e7b0 100644
--- a/lib/mempool/rte_mempool.c
+++ b/lib/mempool/rte_mempool.c
@@ -1010,7 +1010,7 @@ rte_mempool_avail_count(const struct rte_mempool *mp)
        count = rte_mempool_ops_get_count(mp);
 
        if (mp->cache_size == 0)
-               return count;
+               goto exit;
 
        for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++)
                count += mp->local_cache[lcore_id].len;
@@ -1019,6 +1019,7 @@ rte_mempool_avail_count(const struct rte_mempool *mp)
         * due to race condition (access to len is not locked), the
         * total can be greater than size... so fix the result
         */
+exit:
        if (count > mp->size)
                return mp->size;
        return count;

^ permalink raw reply	[flat|nested] 12+ messages in thread

* RE: [PATCH] lib/mempool : rte_mempool_avail_count, fixing return bigger than mempool size
  2023-05-16 15:23 ` Stephen Hemminger
@ 2023-05-16 16:03   ` Morten Brørup
  2023-05-17  8:01     ` Yasin CANER
  2023-05-18 15:40     ` Mattias Rönnblom
  0 siblings, 2 replies; 12+ messages in thread
From: Morten Brørup @ 2023-05-16 16:03 UTC (permalink / raw)
  To: Stephen Hemminger, Yasin CANER
  Cc: dev, Yasin CANER, Olivier Matz, Andrew Rybchenko

> From: Stephen Hemminger [mailto:stephen@networkplumber.org]
> Sent: Tuesday, 16 May 2023 17.24
> 
> On Tue, 16 May 2023 13:41:46 +0000
> Yasin CANER <yasinncaner@gmail.com> wrote:
> 
> > From: Yasin CANER <yasin.caner@ulakhaberlesme.com.tr>
> >
> > after a while working rte_mempool_avail_count function returns bigger
> > than mempool size that cause miscalculation rte_mempool_in_use_count.
> >
> > it helps to avoid miscalculation rte_mempool_in_use_count.
> >
> > Bugzilla ID: 1229
> >
> > Signed-off-by: Yasin CANER <yasin.caner@ulakhaberlesme.com.tr>
> 
> An alternative that avoids some code duplication.
> 
> diff --git a/lib/mempool/rte_mempool.c b/lib/mempool/rte_mempool.c
> index cf5dea2304a7..2406b112e7b0 100644
> --- a/lib/mempool/rte_mempool.c
> +++ b/lib/mempool/rte_mempool.c
> @@ -1010,7 +1010,7 @@ rte_mempool_avail_count(const struct rte_mempool
> *mp)
>         count = rte_mempool_ops_get_count(mp);
> 
>         if (mp->cache_size == 0)
> -               return count;
> +               goto exit;

This bug can only occur here (i.e. with cache_size==0) if rte_mempool_ops_get_count() returns an incorrect value. The bug should be fixed there instead.

> 
>         for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++)
>                 count += mp->local_cache[lcore_id].len;
> @@ -1019,6 +1019,7 @@ rte_mempool_avail_count(const struct rte_mempool
> *mp)
>          * due to race condition (access to len is not locked), the
>          * total can be greater than size... so fix the result
>          */
> +exit:
>         if (count > mp->size)
>                 return mp->size;
>         return count;

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH] lib/mempool : rte_mempool_avail_count, fixing return bigger than mempool size
  2023-05-16 16:03   ` Morten Brørup
@ 2023-05-17  8:01     ` Yasin CANER
  2023-05-17  8:38       ` Morten Brørup
  2023-05-18 15:40     ` Mattias Rönnblom
  1 sibling, 1 reply; 12+ messages in thread
From: Yasin CANER @ 2023-05-17  8:01 UTC (permalink / raw)
  To: Morten Brørup
  Cc: Stephen Hemminger, dev, Yasin CANER, Olivier Matz, Andrew Rybchenko

[-- Attachment #1: Type: text/plain, Size: 2151 bytes --]

Hello,

I don't have full knowledge of how rte_mempool_ops_get_count() works, but
there is another comment about it. Maybe it is related.
/*
 * due to race condition (access to len is not locked), the
 * total can be greater than size... so fix the result
 */

Best regards.

Morten Brørup <mb@smartsharesystems.com>, 16 May 2023 Sal, 19:04 tarihinde
şunu yazdı:

> > From: Stephen Hemminger [mailto:stephen@networkplumber.org]
> > Sent: Tuesday, 16 May 2023 17.24
> >
> > On Tue, 16 May 2023 13:41:46 +0000
> > Yasin CANER <yasinncaner@gmail.com> wrote:
> >
> > > From: Yasin CANER <yasin.caner@ulakhaberlesme.com.tr>
> > >
> > > after a while working rte_mempool_avail_count function returns bigger
> > > than mempool size that cause miscalculation rte_mempool_in_use_count.
> > >
> > > it helps to avoid miscalculation rte_mempool_in_use_count.
> > >
> > > Bugzilla ID: 1229
> > >
> > > Signed-off-by: Yasin CANER <yasin.caner@ulakhaberlesme.com.tr>
> >
> > An alternative that avoids some code duplication.
> >
> > diff --git a/lib/mempool/rte_mempool.c b/lib/mempool/rte_mempool.c
> > index cf5dea2304a7..2406b112e7b0 100644
> > --- a/lib/mempool/rte_mempool.c
> > +++ b/lib/mempool/rte_mempool.c
> > @@ -1010,7 +1010,7 @@ rte_mempool_avail_count(const struct rte_mempool
> > *mp)
> >         count = rte_mempool_ops_get_count(mp);
> >
> >         if (mp->cache_size == 0)
> > -               return count;
> > +               goto exit;
>
> This bug can only occur here (i.e. with cache_size==0) if
> rte_mempool_ops_get_count() returns an incorrect value. The bug should be
> fixed there instead.
>
> >
> >         for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++)
> >                 count += mp->local_cache[lcore_id].len;
> > @@ -1019,6 +1019,7 @@ rte_mempool_avail_count(const struct rte_mempool
> > *mp)
> >          * due to race condition (access to len is not locked), the
> >          * total can be greater than size... so fix the result
> >          */
> > +exit:
> >         if (count > mp->size)
> >                 return mp->size;
> >         return count;
>

[-- Attachment #2: Type: text/html, Size: 3123 bytes --]

^ permalink raw reply	[flat|nested] 12+ messages in thread

* RE: [PATCH] lib/mempool : rte_mempool_avail_count, fixing return bigger than mempool size
  2023-05-17  8:01     ` Yasin CANER
@ 2023-05-17  8:38       ` Morten Brørup
  2023-05-17  9:04         ` Morten Brørup
  0 siblings, 1 reply; 12+ messages in thread
From: Morten Brørup @ 2023-05-17  8:38 UTC (permalink / raw)
  To: Yasin CANER
  Cc: Stephen Hemminger, dev, Yasin CANER, Olivier Matz, Andrew Rybchenko

[-- Attachment #1: Type: text/plain, Size: 2509 bytes --]

From: Yasin CANER [mailto:yasinncaner@gmail.com] 
Sent: Wednesday, 17 May 2023 10.01
To: Morten Brørup
Cc: Stephen Hemminger; dev@dpdk.org; Yasin CANER; Olivier Matz; Andrew Rybchenko
Subject: Re: [PATCH] lib/mempool : rte_mempool_avail_count, fixing return bigger than mempool size

 

Hello,

 

I don't have full knowledge of how to work rte_mempool_ops_get_count() but there is another comment about it. Maybe it relates. 

/*
 * due to race condition (access to len is not locked), the
 * total can be greater than size... so fix the result
 */

 

MB: This comment relates to the race when accessing the per-lcore cache counters.

 

Best regards.

 

Morten Brørup <mb@smartsharesystems.com>, 16 May 2023 Sal, 19:04 tarihinde şunu yazdı:

	> From: Stephen Hemminger [mailto:stephen@networkplumber.org]
	> Sent: Tuesday, 16 May 2023 17.24
	> 
	> On Tue, 16 May 2023 13:41:46 +0000
	> Yasin CANER <yasinncaner@gmail.com> wrote:
	> 
	> > From: Yasin CANER <yasin.caner@ulakhaberlesme.com.tr>
	> >
	> > after a while working rte_mempool_avail_count function returns bigger
	> > than mempool size that cause miscalculation rte_mempool_in_use_count.
	> >
	> > it helps to avoid miscalculation rte_mempool_in_use_count.
	> >
	> > Bugzilla ID: 1229
	> >
	> > Signed-off-by: Yasin CANER <yasin.caner@ulakhaberlesme.com.tr>
	> 
	> An alternative that avoids some code duplication.
	> 
	> diff --git a/lib/mempool/rte_mempool.c b/lib/mempool/rte_mempool.c
	> index cf5dea2304a7..2406b112e7b0 100644
	> --- a/lib/mempool/rte_mempool.c
	> +++ b/lib/mempool/rte_mempool.c
	> @@ -1010,7 +1010,7 @@ rte_mempool_avail_count(const struct rte_mempool
	> *mp)
	>         count = rte_mempool_ops_get_count(mp);
	> 
	>         if (mp->cache_size == 0)
	> -               return count;
	> +               goto exit;
	
	This bug can only occur here (i.e. with cache_size==0) if rte_mempool_ops_get_count() returns an incorrect value. The bug should be fixed there instead.
	
	> 
	>         for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++)
	>                 count += mp->local_cache[lcore_id].len;
	> @@ -1019,6 +1019,7 @@ rte_mempool_avail_count(const struct rte_mempool
	> *mp)
	>          * due to race condition (access to len is not locked), the
	>          * total can be greater than size... so fix the result
	>          */
	> +exit:
	>         if (count > mp->size)
	>                 return mp->size;
	>         return count;


[-- Attachment #2: Type: text/html, Size: 8930 bytes --]

^ permalink raw reply	[flat|nested] 12+ messages in thread

* RE: [PATCH] lib/mempool : rte_mempool_avail_count, fixing return bigger than mempool size
  2023-05-17  8:38       ` Morten Brørup
@ 2023-05-17  9:04         ` Morten Brørup
  2023-05-17 11:37           ` Yasin CANER
  2023-05-17 11:53           ` David Marchand
  0 siblings, 2 replies; 12+ messages in thread
From: Morten Brørup @ 2023-05-17  9:04 UTC (permalink / raw)
  To: Yasin CANER
  Cc: Stephen Hemminger, dev, Yasin CANER, Olivier Matz, Andrew Rybchenko

[-- Attachment #1: Type: text/plain, Size: 2694 bytes --]

From: Morten Brørup [mailto:mb@smartsharesystems.com] 
Sent: Wednesday, 17 May 2023 10.38



From: Yasin CANER [mailto:yasinncaner@gmail.com] 
Sent: Wednesday, 17 May 2023 10.01



Hello,

 

I don't have full knowledge of how to work rte_mempool_ops_get_count() but there is another comment about it. Maybe it relates. 

/*
 * due to race condition (access to len is not locked), the
 * total can be greater than size... so fix the result
 */

 

MB: This comment relates to the race when accessing the per-lcore cache counters.

 

MB (continued): I have added more information, regarding mempool drivers, in Bugzilla: https://bugs.dpdk.org/show_bug.cgi?id=1229

 

 

Best regards.

 

Morten Brørup <mb@smartsharesystems.com>, 16 May 2023 Sal, 19:04 tarihinde şunu yazdı:

	> From: Stephen Hemminger [mailto:stephen@networkplumber.org]
	> Sent: Tuesday, 16 May 2023 17.24
	> 
	> On Tue, 16 May 2023 13:41:46 +0000
	> Yasin CANER <yasinncaner@gmail.com> wrote:
	> 
	> > From: Yasin CANER <yasin.caner@ulakhaberlesme.com.tr>
	> >
	> > after a while working rte_mempool_avail_count function returns bigger
	> > than mempool size that cause miscalculation rte_mempool_in_use_count.
	> >
	> > it helps to avoid miscalculation rte_mempool_in_use_count.
	> >
	> > Bugzilla ID: 1229
	> >
	> > Signed-off-by: Yasin CANER <yasin.caner@ulakhaberlesme.com.tr>
	> 
	> An alternative that avoids some code duplication.
	> 
	> diff --git a/lib/mempool/rte_mempool.c b/lib/mempool/rte_mempool.c
	> index cf5dea2304a7..2406b112e7b0 100644
	> --- a/lib/mempool/rte_mempool.c
	> +++ b/lib/mempool/rte_mempool.c
	> @@ -1010,7 +1010,7 @@ rte_mempool_avail_count(const struct rte_mempool
	> *mp)
	>         count = rte_mempool_ops_get_count(mp);
	> 
	>         if (mp->cache_size == 0)
	> -               return count;
	> +               goto exit;
	
	This bug can only occur here (i.e. with cache_size==0) if rte_mempool_ops_get_count() returns an incorrect value. The bug should be fixed there instead.

	 

	MB (continued): The bug must be in the underlying mempool driver. I took a look at the ring and stack drivers, and they seem fine.

	
	
	> 
	>         for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++)
	>                 count += mp->local_cache[lcore_id].len;
	> @@ -1019,6 +1019,7 @@ rte_mempool_avail_count(const struct rte_mempool
	> *mp)
	>          * due to race condition (access to len is not locked), the
	>          * total can be greater than size... so fix the result
	>          */
	> +exit:
	>         if (count > mp->size)
	>                 return mp->size;
	>         return count;


[-- Attachment #2: Type: text/html, Size: 10173 bytes --]

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH] lib/mempool : rte_mempool_avail_count, fixing return bigger than mempool size
  2023-05-17  9:04         ` Morten Brørup
@ 2023-05-17 11:37           ` Yasin CANER
  2023-05-17 11:53           ` David Marchand
  1 sibling, 0 replies; 12+ messages in thread
From: Yasin CANER @ 2023-05-17 11:37 UTC (permalink / raw)
  To: Morten Brørup
  Cc: Stephen Hemminger, dev, Yasin CANER, Olivier Matz, Andrew Rybchenko

[-- Attachment #1: Type: text/plain, Size: 3795 bytes --]

Hello,

The mempool drivers are listed here:
https://doc.dpdk.org/guides/mempool/index.html

My app loads *rte_mempool_ring* and also *rte_mempool_stack*. According to
the docs, rte_mempool_ring is the default mempool driver.

"-d librte_mbuf.so -d librte_mempool.so -d librte_mempool_ring.so -d
librte_mempool_stack.so -d librte_mempool_bucket.so -d librte_kni.so"

EAL: open shared lib librte_mbuf.so
EAL: open shared lib librte_mempool.so
EAL: open shared lib librte_mempool_ring.so
EAL: open shared lib librte_mempool_stack.so
EAL: lib.stack log level changed from disabled to notice
EAL: open shared lib librte_mempool_bucket.so
EAL: open shared lib librte_kni.so
EAL: open shared lib
DPDK_LIBS/lib/x86_64-linux-gnu/dpdk/pmds-23.0/librte_mempool_octeontx.so
EAL: pmd.octeontx.mbox log level changed from disabled to notice
EAL: pmd.mempool.octeontx log level changed from disabled to notice



Morten Brørup <mb@smartsharesystems.com>, 17 May 2023 Çar, 12:04 tarihinde
şunu yazdı:

> *From:* Morten Brørup [mailto:mb@smartsharesystems.com]
> *Sent:* Wednesday, 17 May 2023 10.38
>
> *From:* Yasin CANER [mailto:yasinncaner@gmail.com]
> *Sent:* Wednesday, 17 May 2023 10.01
>
> Hello,
>
>
>
> I don't have full knowledge of how to work rte_mempool_ops_get_count() but
> there is another comment about it. Maybe it relates.
>
> /*
>  * due to race condition (access to len is not locked), the
>  * total can be greater than size... so fix the result
>  */
>
>
>
> MB: This comment relates to the race when accessing the per-lcore cache
> counters.
>
>
>
> MB (continued): I have added more information, regarding mempool drivers,
> in Bugzilla: https://bugs.dpdk.org/show_bug.cgi?id=1229
>
>
>
>
>
> Best regards.
>
>
>
> Morten Brørup <mb@smartsharesystems.com>, 16 May 2023 Sal, 19:04
> tarihinde şunu yazdı:
>
> > From: Stephen Hemminger [mailto:stephen@networkplumber.org]
> > Sent: Tuesday, 16 May 2023 17.24
> >
> > On Tue, 16 May 2023 13:41:46 +0000
> > Yasin CANER <yasinncaner@gmail.com> wrote:
> >
> > > From: Yasin CANER <yasin.caner@ulakhaberlesme.com.tr>
> > >
> > > after a while working rte_mempool_avail_count function returns bigger
> > > than mempool size that cause miscalculation rte_mempool_in_use_count.
> > >
> > > it helps to avoid miscalculation rte_mempool_in_use_count.
> > >
> > > Bugzilla ID: 1229
> > >
> > > Signed-off-by: Yasin CANER <yasin.caner@ulakhaberlesme.com.tr>
> >
> > An alternative that avoids some code duplication.
> >
> > diff --git a/lib/mempool/rte_mempool.c b/lib/mempool/rte_mempool.c
> > index cf5dea2304a7..2406b112e7b0 100644
> > --- a/lib/mempool/rte_mempool.c
> > +++ b/lib/mempool/rte_mempool.c
> > @@ -1010,7 +1010,7 @@ rte_mempool_avail_count(const struct rte_mempool
> > *mp)
> >         count = rte_mempool_ops_get_count(mp);
> >
> >         if (mp->cache_size == 0)
> > -               return count;
> > +               goto exit;
>
> This bug can only occur here (i.e. with cache_size==0) if
> rte_mempool_ops_get_count() returns an incorrect value. The bug should be
> fixed there instead.
>
>
>
> MB (continued): The bug must be in the underlying mempool driver. I took a
> look at the ring and stack drivers, and they seem fine.
>
>
>
> >
> >         for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++)
> >                 count += mp->local_cache[lcore_id].len;
> > @@ -1019,6 +1019,7 @@ rte_mempool_avail_count(const struct rte_mempool
> > *mp)
> >          * due to race condition (access to len is not locked), the
> >          * total can be greater than size... so fix the result
> >          */
> > +exit:
> >         if (count > mp->size)
> >                 return mp->size;
> >         return count;
>
>

[-- Attachment #2: Type: text/html, Size: 8566 bytes --]

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH] lib/mempool : rte_mempool_avail_count, fixing return bigger than mempool size
  2023-05-17  9:04         ` Morten Brørup
  2023-05-17 11:37           ` Yasin CANER
@ 2023-05-17 11:53           ` David Marchand
  2023-05-17 12:23             ` Morten Brørup
  1 sibling, 1 reply; 12+ messages in thread
From: David Marchand @ 2023-05-17 11:53 UTC (permalink / raw)
  To: Morten Brørup, Yasin CANER
  Cc: Stephen Hemminger, dev, Yasin CANER, Olivier Matz, Andrew Rybchenko

On Wed, May 17, 2023 at 11:05 AM Morten Brørup <mb@smartsharesystems.com> wrote:
> > On Tue, 16 May 2023 13:41:46 +0000
> > Yasin CANER <yasinncaner@gmail.com> wrote:
> >
> > > From: Yasin CANER <yasin.caner@ulakhaberlesme.com.tr>
> > >
> > > after a while working rte_mempool_avail_count function returns bigger
> > > than mempool size that cause miscalculation rte_mempool_in_use_count.
> > >
> > > it helps to avoid miscalculation rte_mempool_in_use_count.

Is this issue reproduced with an application of the reporter, or a
DPDK in-tree application?


> > >
> > > Bugzilla ID: 1229
> > >
> > > Signed-off-by: Yasin CANER <yasin.caner@ulakhaberlesme.com.tr>
> >
> > An alternative that avoids some code duplication.
> >
> > diff --git a/lib/mempool/rte_mempool.c b/lib/mempool/rte_mempool.c
> > index cf5dea2304a7..2406b112e7b0 100644
> > --- a/lib/mempool/rte_mempool.c
> > +++ b/lib/mempool/rte_mempool.c
> > @@ -1010,7 +1010,7 @@ rte_mempool_avail_count(const struct rte_mempool
> > *mp)
> >         count = rte_mempool_ops_get_count(mp);
> >
> >         if (mp->cache_size == 0)
> > -               return count;
> > +               goto exit;
>
> This bug can only occur here (i.e. with cache_size==0) if rte_mempool_ops_get_count() returns an incorrect value. The bug should be fixed there instead.
>
>
>
> MB (continued): The bug must be in the underlying mempool driver. I took a look at the ring and stack drivers, and they seem fine.

Or it could indicate a double free (or equivalent) issue from the
application (either through direct call to mempool API, or indirectly
like sending/freeing an already sent/freed packet for example).


-- 
David Marchand


^ permalink raw reply	[flat|nested] 12+ messages in thread

* RE: [PATCH] lib/mempool : rte_mempool_avail_count, fixing return bigger than mempool size
  2023-05-17 11:53           ` David Marchand
@ 2023-05-17 12:23             ` Morten Brørup
  2023-05-18  8:37               ` Yasin CANER
  0 siblings, 1 reply; 12+ messages in thread
From: Morten Brørup @ 2023-05-17 12:23 UTC (permalink / raw)
  To: David Marchand, Yasin CANER
  Cc: Stephen Hemminger, dev, Yasin CANER, Olivier Matz, Andrew Rybchenko

> From: David Marchand [mailto:david.marchand@redhat.com]
> Sent: Wednesday, 17 May 2023 13.53
> 
> On Wed, May 17, 2023 at 11:05 AM Morten Brørup <mb@smartsharesystems.com>
> wrote:
> > > On Tue, 16 May 2023 13:41:46 +0000
> > > Yasin CANER <yasinncaner@gmail.com> wrote:
> > >
> > > > From: Yasin CANER <yasin.caner@ulakhaberlesme.com.tr>
> > > >
> > > > after a while working rte_mempool_avail_count function returns bigger
> > > > than mempool size that cause miscalculation rte_mempool_in_use_count.
> > > >
> > > > it helps to avoid miscalculation rte_mempool_in_use_count.
> 
> Is this issue reproduced with an application of the reporter, or a
> DPDK in-tree application?
> 
> 
> > > >
> > > > Bugzilla ID: 1229
> > > >
> > > > Signed-off-by: Yasin CANER <yasin.caner@ulakhaberlesme.com.tr>
> > >
> > > An alternative that avoids some code duplication.
> > >
> > > diff --git a/lib/mempool/rte_mempool.c b/lib/mempool/rte_mempool.c
> > > index cf5dea2304a7..2406b112e7b0 100644
> > > --- a/lib/mempool/rte_mempool.c
> > > +++ b/lib/mempool/rte_mempool.c
> > > @@ -1010,7 +1010,7 @@ rte_mempool_avail_count(const struct rte_mempool
> > > *mp)
> > >         count = rte_mempool_ops_get_count(mp);
> > >
> > >         if (mp->cache_size == 0)
> > > -               return count;
> > > +               goto exit;
> >
> > This bug can only occur here (i.e. with cache_size==0) if
> rte_mempool_ops_get_count() returns an incorrect value. The bug should be
> fixed there instead.
> >
> >
> >
> > MB (continued): The bug must be in the underlying mempool driver. I took a
> look at the ring and stack drivers, and they seem fine.
> 
> Or it could indicate a double free (or equivalent) issue from the
> application (either through direct call to mempool API, or indirectly
> like sending/freeing an already sent/freed packet for example).

Good point, David.

@Yasin, if you build DPDK and your application with RTE_LIBRTE_MEMPOOL_DEBUG set in config/rte_config.h, the mempool cookies should catch any double frees.
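
For reference, the mempool cookies are a compile-time feature, not a runtime
switch. A hedged sketch of one way to enable them in a standard DPDK source
tree (paths assumed; adjust to your setup and DPDK version):

```shell
# Define RTE_LIBRTE_MEMPOOL_DEBUG in the build-time config, then rebuild.
# With it set, cookies are placed around each object and operations are
# audited, so a double free triggers a panic instead of silent corruption.
echo '#define RTE_LIBRTE_MEMPOOL_DEBUG 1' >> config/rte_config.h
meson setup build
ninja -C build
```

The application must then be rebuilt and run against this debug build of the
libraries.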


^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH] lib/mempool : rte_mempool_avail_count, fixing return bigger than mempool size
  2023-05-17 12:23             ` Morten Brørup
@ 2023-05-18  8:37               ` Yasin CANER
  2023-05-18  8:53                 ` Morten Brørup
  0 siblings, 1 reply; 12+ messages in thread
From: Yasin CANER @ 2023-05-18  8:37 UTC (permalink / raw)
  To: Morten Brørup
  Cc: David Marchand, Stephen Hemminger, dev, Yasin CANER,
	Olivier Matz, Andrew Rybchenko

[-- Attachment #1: Type: text/plain, Size: 3115 bytes --]

Hello,

I found a second free call in my code and removed it. David pointed in the
right direction.

On the other hand, do you think we still need to guard against the
miscalculation? Is it better to apply the patch or not?

Or does the application just need to be aware of the double free?

Sharing more information about the environment.

# ethtool -i mgmt
driver: virtio_net
version: 1.0.0
firmware-version:
expansion-rom-version:
bus-info: 0000:00:03.0
supports-statistics: yes
supports-test: no
supports-eeprom-access: no
supports-register-dump: no
supports-priv-flags: no

NAME="Ubuntu"
VERSION="20.04.4 LTS (Focal Fossa)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 20.04.4 LTS"
VERSION_ID="20.04"

Linux spgw-dpdk 5.4.0-146-generic #163-Ubuntu SMP Fri Mar 17 18:26:02 UTC
2023 x86_64 x86_64 x86_64 GNU/Linux



Best regards.

Morten Brørup <mb@smartsharesystems.com>, 17 May 2023 Çar, 15:23 tarihinde
şunu yazdı:

> > From: David Marchand [mailto:david.marchand@redhat.com]
> > Sent: Wednesday, 17 May 2023 13.53
> >
> > On Wed, May 17, 2023 at 11:05 AM Morten Brørup <mb@smartsharesystems.com
> >
> > wrote:
> > > > On Tue, 16 May 2023 13:41:46 +0000
> > > > Yasin CANER <yasinncaner@gmail.com> wrote:
> > > >
> > > > > From: Yasin CANER <yasin.caner@ulakhaberlesme.com.tr>
> > > > >
> > > > > after a while working rte_mempool_avail_count function returns
> bigger
> > > > > than mempool size that cause miscalculation
> rte_mempool_in_use_count.
> > > > >
> > > > > it helps to avoid miscalculation rte_mempool_in_use_count.
> >
> > Is this issue reproduced with an application of the reporter, or a
> > DPDK in-tree application?
> >
> >
> > > > >
> > > > > Bugzilla ID: 1229
> > > > >
> > > > > Signed-off-by: Yasin CANER <yasin.caner@ulakhaberlesme.com.tr>
> > > >
> > > > An alternative that avoids some code duplication.
> > > >
> > > > diff --git a/lib/mempool/rte_mempool.c b/lib/mempool/rte_mempool.c
> > > > index cf5dea2304a7..2406b112e7b0 100644
> > > > --- a/lib/mempool/rte_mempool.c
> > > > +++ b/lib/mempool/rte_mempool.c
> > > > @@ -1010,7 +1010,7 @@ rte_mempool_avail_count(const struct
> rte_mempool
> > > > *mp)
> > > >         count = rte_mempool_ops_get_count(mp);
> > > >
> > > >         if (mp->cache_size == 0)
> > > > -               return count;
> > > > +               goto exit;
> > >
> > > This bug can only occur here (i.e. with cache_size==0) if
> > rte_mempool_ops_get_count() returns an incorrect value. The bug should be
> > fixed there instead.
> > >
> > >
> > >
> > > MB (continued): The bug must be in the underlying mempool driver. I
> took a
> > look at the ring and stack drivers, and they seem fine.
> >
> > Or it could indicate a double free (or equivalent) issue from the
> > application (either through direct call to mempool API, or indirectly
> > like sending/freeing an already sent/freed packet for example).
>
> Good point, David.
>
> @Yasin, if you build DPDK and your application with
> RTE_LIBRTE_MEMPOOL_DEBUG set in config/rte_config.h, the mempool cookies
> should catch any double frees.
>
>

[-- Attachment #2: Type: text/html, Size: 4531 bytes --]

^ permalink raw reply	[flat|nested] 12+ messages in thread

* RE: [PATCH] lib/mempool : rte_mempool_avail_count, fixing return bigger than mempool size
  2023-05-18  8:37               ` Yasin CANER
@ 2023-05-18  8:53                 ` Morten Brørup
  0 siblings, 0 replies; 12+ messages in thread
From: Morten Brørup @ 2023-05-18  8:53 UTC (permalink / raw)
  To: Yasin CANER
  Cc: David Marchand, Stephen Hemminger, dev, Yasin CANER,
	Olivier Matz, Andrew Rybchenko

[-- Attachment #1: Type: text/plain, Size: 3364 bytes --]

From: Yasin CANER [mailto:yasinncaner@gmail.com] 
Sent: Thursday, 18 May 2023 10.37



Hello, 

 

I found a second free command in my code and removed it. David pointed to the right .

 

MB: Good to hear.

 

On the other hand, do you think we need to avoid miscalculations? Is it better to patch it or not?

 

MB: There is no bug in the library, so no need to patch it. The returned value was a consequence of using the library incorrectly (freeing twice).

 

or it needs to be aware of the second free command. 

 

Sharing more information about env.

 

# ethtool -i mgmt
driver: virtio_net
version: 1.0.0
firmware-version:
expansion-rom-version:
bus-info: 0000:00:03.0
supports-statistics: yes
supports-test: no
supports-eeprom-access: no
supports-register-dump: no
supports-priv-flags: no

 

NAME="Ubuntu"
VERSION="20.04.4 LTS (Focal Fossa)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 20.04.4 LTS"
VERSION_ID="20.04"

 

Linux spgw-dpdk 5.4.0-146-generic #163-Ubuntu SMP Fri Mar 17 18:26:02 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux

 

 

 

Best regards.

 

Morten Brørup <mb@smartsharesystems.com>, 17 May 2023 Çar, 15:23 tarihinde şunu yazdı:

	> From: David Marchand [mailto:david.marchand@redhat.com]
	> Sent: Wednesday, 17 May 2023 13.53
	> 
	> On Wed, May 17, 2023 at 11:05 AM Morten Brørup <mb@smartsharesystems.com>
	> wrote:
	> > > On Tue, 16 May 2023 13:41:46 +0000
	> > > Yasin CANER <yasinncaner@gmail.com> wrote:
	> > >
	> > > > From: Yasin CANER <yasin.caner@ulakhaberlesme.com.tr>
	> > > >
	> > > > after a while working rte_mempool_avail_count function returns bigger
	> > > > than mempool size that cause miscalculation rte_mempool_in_use_count.
	> > > >
	> > > > it helps to avoid miscalculation rte_mempool_in_use_count.
	> 
	> Is this issue reproduced with an application of the reporter, or a
	> DPDK in-tree application?
	> 
	> 
	> > > >
	> > > > Bugzilla ID: 1229
	> > > >
	> > > > Signed-off-by: Yasin CANER <yasin.caner@ulakhaberlesme.com.tr>
	> > >
	> > > An alternative that avoids some code duplication.
	> > >
	> > > diff --git a/lib/mempool/rte_mempool.c b/lib/mempool/rte_mempool.c
	> > > index cf5dea2304a7..2406b112e7b0 100644
	> > > --- a/lib/mempool/rte_mempool.c
	> > > +++ b/lib/mempool/rte_mempool.c
	> > > @@ -1010,7 +1010,7 @@ rte_mempool_avail_count(const struct rte_mempool
	> > > *mp)
	> > >         count = rte_mempool_ops_get_count(mp);
	> > >
	> > >         if (mp->cache_size == 0)
	> > > -               return count;
	> > > +               goto exit;
	> >
	> > This bug can only occur here (i.e. with cache_size==0) if
	> rte_mempool_ops_get_count() returns an incorrect value. The bug should be
	> fixed there instead.
	> >
	> >
	> >
	> > MB (continued): The bug must be in the underlying mempool driver. I took a
	> look at the ring and stack drivers, and they seem fine.
	> 
	> Or it could indicate a double free (or equivalent) issue from the
	> application (either through direct call to mempool API, or indirectly
	> like sending/freeing an already sent/freed packet for example).
	
	Good point, David.
	
	@Yasin, if you build DPDK and your application with RTE_LIBRTE_MEMPOOL_DEBUG set in config/rte_config.h, the mempool cookies should catch any double frees.


[-- Attachment #2: Type: text/html, Size: 7948 bytes --]

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH] lib/mempool : rte_mempool_avail_count, fixing return bigger than mempool size
  2023-05-16 16:03   ` Morten Brørup
  2023-05-17  8:01     ` Yasin CANER
@ 2023-05-18 15:40     ` Mattias Rönnblom
  1 sibling, 0 replies; 12+ messages in thread
From: Mattias Rönnblom @ 2023-05-18 15:40 UTC (permalink / raw)
  To: Morten Brørup, Stephen Hemminger, Yasin CANER
  Cc: dev, Yasin CANER, Olivier Matz, Andrew Rybchenko

On 2023-05-16 18:03, Morten Brørup wrote:
>> From: Stephen Hemminger [mailto:stephen@networkplumber.org]
>> Sent: Tuesday, 16 May 2023 17.24
>>
>> On Tue, 16 May 2023 13:41:46 +0000
>> Yasin CANER <yasinncaner@gmail.com> wrote:
>>
>>> From: Yasin CANER <yasin.caner@ulakhaberlesme.com.tr>
>>>
>>> after a while working rte_mempool_avail_count function returns bigger
>>> than mempool size that cause miscalculation rte_mempool_in_use_count.
>>>
>>> it helps to avoid miscalculation rte_mempool_in_use_count.
>>>
>>> Bugzilla ID: 1229
>>>
>>> Signed-off-by: Yasin CANER <yasin.caner@ulakhaberlesme.com.tr>
>>
>> An alternative that avoids some code duplication.
>>
>> diff --git a/lib/mempool/rte_mempool.c b/lib/mempool/rte_mempool.c
>> index cf5dea2304a7..2406b112e7b0 100644
>> --- a/lib/mempool/rte_mempool.c
>> +++ b/lib/mempool/rte_mempool.c
>> @@ -1010,7 +1010,7 @@ rte_mempool_avail_count(const struct rte_mempool
>> *mp)
>>          count = rte_mempool_ops_get_count(mp);
>>
>>          if (mp->cache_size == 0)
>> -               return count;
>> +               goto exit;
> 
> This bug can only occur here (i.e. with cache_size==0) if rte_mempool_ops_get_count() returns an incorrect value. The bug should be fixed there instead.
> 
>>
>>          for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++)
>>                  count += mp->local_cache[lcore_id].len;
>> @@ -1019,6 +1019,7 @@ rte_mempool_avail_count(const struct rte_mempool
>> *mp)

This loop may result in the same symptoms (i.e., count > mp->size). For 
example, the LF stack has a relaxed atomic load to retrieve the usage 
count, which may well be reordered (by the CPU and/or compiler) in 
relation to these non-atomic loads. The same issue may well exist on the 
writer side, with the updates to the global count and the cache counts not 
being ordered with respect to each other.

So while sanity checking the count is not needed without caching, you can 
argue it is needed with caching.

>>           * due to race condition (access to len is not locked), the
>>           * total can be greater than size... so fix the result
>>           */
>> +exit:
>>          if (count > mp->size)
>>                  return mp->size;
>>          return count;

^ permalink raw reply	[flat|nested] 12+ messages in thread

end of thread, other threads:[~2023-05-18 15:40 UTC | newest]

Thread overview: 12+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-05-16 13:41 [PATCH] lib/mempool : rte_mempool_avail_count, fixing return bigger than mempool size Yasin CANER
2023-05-16 15:23 ` Stephen Hemminger
2023-05-16 16:03   ` Morten Brørup
2023-05-17  8:01     ` Yasin CANER
2023-05-17  8:38       ` Morten Brørup
2023-05-17  9:04         ` Morten Brørup
2023-05-17 11:37           ` Yasin CANER
2023-05-17 11:53           ` David Marchand
2023-05-17 12:23             ` Morten Brørup
2023-05-18  8:37               ` Yasin CANER
2023-05-18  8:53                 ` Morten Brørup
2023-05-18 15:40     ` Mattias Rönnblom

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).