From: Yasin CANER <yasinncaner@gmail.com>
Date: Wed, 17 May 2023 14:37:46 +0300
Subject: Re: [PATCH] lib/mempool : rte_mempool_avail_count, fixing return bigger than mempool size
To: Morten Brørup <mb@smartsharesystems.com>
Cc: Stephen Hemminger <stephen@networkplumber.org>, dev@dpdk.org, Yasin CANER, Olivier Matz, Andrew Rybchenko

Hello,

The mempool drivers are listed here:
https://doc.dpdk.org/guides/mempool/index.html

My app loads rte_mempool_ring and also rte_mempool_stack. According to the
doc, rte_mempool_ring is the default mempool driver.

"-d librte_mbuf.so -d librte_mempool.so -d librte_mempool_ring.so
-d librte_mempool_stack.so -d librte_mempool_bucket.so -d librte_kni.so"

EAL: open shared lib librte_mbuf.so
EAL: open shared lib librte_mempool.so
EAL: open shared lib librte_mempool_ring.so
EAL: open shared lib librte_mempool_stack.so
EAL: lib.stack log level changed from disabled to notice
EAL: open shared lib librte_mempool_bucket.so
EAL: open shared lib librte_kni.so
EAL: open shared lib DPDK_LIBS/lib/x86_64-linux-gnu/dpdk/pmds-23.0/librte_mempool_octeontx.so
EAL: pmd.octeontx.mbox log level changed from disabled to notice
EAL: pmd.mempool.octeontx log level changed from disabled to notice

On Wed, 17 May 2023 at 12:04, Morten Brørup <mb@smartsharesystems.com> wrote:

> *From:* Morten Brørup [mailto:mb@smartsharesystems.com]
> *Sent:* Wednesday, 17 May 2023 10.38
>
> *From:* Yasin CANER [mailto:yasinncaner@gmail.com]
> *Sent:* Wednesday, 17 May 2023 10.01
>
> Hello,
>
> I don't have full knowledge of how rte_mempool_ops_get_count() works, but
> there is another comment about it. Maybe it is related:
>
> /*
>  * due to race condition (access to len is not locked), the
>  * total can be greater than size... so fix the result
>  */
>
> MB: This comment relates to the race when accessing the per-lcore cache
> counters.
>
> MB (continued): I have added more information, regarding mempool drivers,
> in Bugzilla: https://bugs.dpdk.org/show_bug.cgi?id=1229
>
> Best regards.
>
> On Tue, 16 May 2023 at 19:04, Morten Brørup <mb@smartsharesystems.com> wrote:
>
> > From: Stephen Hemminger [mailto:stephen@networkplumber.org]
> > Sent: Tuesday, 16 May 2023 17.24
> >
> > On Tue, 16 May 2023 13:41:46 +0000
> > Yasin CANER wrote:
> >
> > > From: Yasin CANER
> > >
> > > After a while of operation, the rte_mempool_avail_count function
> > > returns a value bigger than the mempool size, which causes
> > > rte_mempool_in_use_count to be miscalculated.
> > >
> > > This helps to avoid miscalculating rte_mempool_in_use_count.
> > >
> > > Bugzilla ID: 1229
> > >
> > > Signed-off-by: Yasin CANER
> >
> > An alternative that avoids some code duplication.
> >
> > diff --git a/lib/mempool/rte_mempool.c b/lib/mempool/rte_mempool.c
> > index cf5dea2304a7..2406b112e7b0 100644
> > --- a/lib/mempool/rte_mempool.c
> > +++ b/lib/mempool/rte_mempool.c
> > @@ -1010,7 +1010,7 @@ rte_mempool_avail_count(const struct rte_mempool
> > *mp)
> >         count = rte_mempool_ops_get_count(mp);
> >
> >         if (mp->cache_size == 0)
> > -               return count;
> > +               goto exit;
>
> This bug can only occur here (i.e. with cache_size == 0) if
> rte_mempool_ops_get_count() returns an incorrect value. The bug should be
> fixed there instead.
>
> MB (continued): The bug must be in the underlying mempool driver. I took a
> look at the ring and stack drivers, and they seem fine.
>
> >
> >         for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++)
> >                 count += mp->local_cache[lcore_id].len;
> > @@ -1019,6 +1019,7 @@ rte_mempool_avail_count(const struct rte_mempool
> > *mp)
> >          * due to race condition (access to len is not locked), the
> >          * total can be greater than size... so fix the result
> >          */
> > +exit:
> >         if (count > mp->size)
> >                 return mp->size;
> >         return count;
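
Coming back to the driver question and the clamp in the quoted diff: the
over-count matters because rte_mempool_in_use_count() is derived from
mp->size minus rte_mempool_avail_count(), so an avail over-count wraps the
unsigned in-use figure to a huge number, which is the miscalculation
described in Bugzilla 1229. Below is a minimal diagnostic sketch, assuming
only the standard rte_mempool API; check_mempool_counters() is a
hypothetical helper name and is not part of the patch. It also prints the
pool's ops name, which shows which of the loaded mempool drivers the pool
actually uses.

/*
 * Minimal diagnostic sketch, illustrative only.
 */
#include <stdio.h>

#include <rte_mempool.h>

static void
check_mempool_counters(const struct rte_mempool *mp)
{
        /* Which mempool driver (ops) does this pool actually use? */
        struct rte_mempool_ops *ops = rte_mempool_get_ops(mp->ops_index);

        unsigned int avail = rte_mempool_avail_count(mp);
        /* in_use is mp->size minus avail; if avail were ever larger than
         * mp->size, this unsigned value would wrap to a huge number. */
        unsigned int in_use = rte_mempool_in_use_count(mp);

        printf("mempool %s (ops \"%s\"): avail %u, in use %u, size %u\n",
               mp->name, ops->name, avail, in_use,
               (unsigned int)mp->size);

        /* With cache_size == 0, the current upstream code returns the
         * driver count unclamped, so this can trigger if the driver's
         * get_count() over-counts. */
        if (avail > mp->size)
                printf("  WARNING: avail %u > size %u, driver get_count() may over-count\n",
                       avail, (unsigned int)mp->size);
}

Calling such a helper periodically from the application should show whether
the over-count only appears on pools backed by a particular driver.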