From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: <dev-bounces@dpdk.org>
MIME-Version: 1.0
References: <20230516134146.480047-1-yasinncaner@gmail.com>
 <20230516082349.041c0e68@hermes.local>
 <98CBD80474FA8B44BF855DF32C47DC35D87923@smartserver.smartshare.dk>
In-Reply-To: <98CBD80474FA8B44BF855DF32C47DC35D87923@smartserver.smartshare.dk>
From: Yasin CANER <yasinncaner@gmail.com>
Date: Wed, 17 May 2023 11:01:01 +0300
Message-ID: <CAP5epcMAC1H9wE=Dk9fk3mQ8AeEBeavyR01u2FgvsDZSskv=Gw@mail.gmail.com>
Subject: Re: [PATCH] lib/mempool : rte_mempool_avail_count, fixing return
 bigger than mempool size
To: Morten =?UTF-8?Q?Br=C3=B8rup?= <mb@smartsharesystems.com>
Cc: Stephen Hemminger <stephen@networkplumber.org>, dev@dpdk.org, 
 Yasin CANER <yasin.caner@ulakhaberlesme.com.tr>,
 Olivier Matz <olivier.matz@6wind.com>, 
 Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Content-Type: multipart/alternative; boundary="00000000000093790105fbdf1d24"
List-Id: DPDK patches and discussions <dev.dpdk.org>

--00000000000093790105fbdf1d24
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

Hello,

I don't have full knowledge of how rte_mempool_ops_get_count() works, but
there is another comment about it that may be related:
/*
 * due to race condition (access to len is not locked), the
 * total can be greater than size... so fix the result
 */

Best regards.

Morten Brørup <mb@smartsharesystems.com> wrote on Tue, 16 May 2023 at 19:04:

> > From: Stephen Hemminger [mailto:stephen@networkplumber.org]
> > Sent: Tuesday, 16 May 2023 17.24
> >
> > On Tue, 16 May 2023 13:41:46 +0000
> > Yasin CANER <yasinncaner@gmail.com> wrote:
> >
> > > From: Yasin CANER <yasin.caner@ulakhaberlesme.com.tr>
> > >
> > > After running for a while, the rte_mempool_avail_count function returns
> > > a value bigger than the mempool size, which causes a miscalculation in
> > > rte_mempool_in_use_count.
> > >
> > > This patch helps to avoid that miscalculation in rte_mempool_in_use_count.
> > >
> > > Bugzilla ID: 1229
> > >
> > > Signed-off-by: Yasin CANER <yasin.caner@ulakhaberlesme.com.tr>
> >
> > An alternative that avoids some code duplication.
> >
> > diff --git a/lib/mempool/rte_mempool.c b/lib/mempool/rte_mempool.c
> > index cf5dea2304a7..2406b112e7b0 100644
> > --- a/lib/mempool/rte_mempool.c
> > +++ b/lib/mempool/rte_mempool.c
> > @@ -1010,7 +1010,7 @@ rte_mempool_avail_count(const struct rte_mempool
> > *mp)
> >         count = rte_mempool_ops_get_count(mp);
> >
> >         if (mp->cache_size == 0)
> > -               return count;
> > +               goto exit;
>
> This bug can only occur here (i.e. with cache_size==0) if
> rte_mempool_ops_get_count() returns an incorrect value. The bug should be
> fixed there instead.
>
> >
> >         for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++)
> >                 count += mp->local_cache[lcore_id].len;
> > @@ -1019,6 +1019,7 @@ rte_mempool_avail_count(const struct rte_mempool
> > *mp)
> >          * due to race condition (access to len is not locked), the
> >          * total can be greater than size... so fix the result
> >          */
> > +exit:
> >         if (count > mp->size)
> >                 return mp->size;
> >         return count;
>

--00000000000093790105fbdf1d24--