From mboxrd@z Thu Jan 1 00:00:00 1970
From: Yasin CANER <yasinncaner@gmail.com>
Date: Wed, 17 May 2023 11:01:01 +0300
Subject: Re: [PATCH] lib/mempool : rte_mempool_avail_count, fixing return bigger than mempool size
To: Morten Brørup <mb@smartsharesystems.com>
Cc: Stephen Hemminger <stephen@networkplumber.org>, dev@dpdk.org, Yasin CANER, Olivier Matz, Andrew Rybchenko
In-Reply-To: <98CBD80474FA8B44BF855DF32C47DC35D87923@smartserver.smartshare.dk>
References: <20230516134146.480047-1-yasinncaner@gmail.com> <20230516082349.041c0e68@hermes.local> <98CBD80474FA8B44BF855DF32C47DC35D87923@smartserver.smartshare.dk>
List-Id: DPDK patches and discussions

Hello,

I don't have full knowledge of how rte_mempool_ops_get_count() works, but
there is another comment about it in rte_mempool_avail_count(); maybe it is
related:

/*
 * due to race condition (access to len is not locked), the
 * total can be greater than size... so fix the result
 */
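For what it is worth, here is a minimal standalone sketch of why a too-large
available count matters for rte_mempool_in_use_count(): as far as I can tell,
the in-use count is computed as mp->size minus rte_mempool_avail_count(mp) on
unsigned values, so an available count above the pool size makes the
subtraction wrap around. The names and numbers below are made up for
illustration only; this is not DPDK code.

/*
 * Standalone illustration only: if the available count ever exceeds the
 * pool size, the unsigned subtraction used for the "in use" count wraps
 * around to a huge value instead of going negative.
 */
#include <stdio.h>

static unsigned int
in_use_count(unsigned int pool_size, unsigned int avail)
{
	/* same shape as: mp->size - rte_mempool_avail_count(mp) */
	return pool_size - avail;
}

int
main(void)
{
	unsigned int size = 1024;	/* hypothetical pool size */

	printf("avail = 1000 -> in use = %u\n", in_use_count(size, 1000));
	printf("avail = 1025 -> in use = %u\n", in_use_count(size, 1025));
	/* the second line prints 4294967295 (with 32-bit unsigned int) */
	return 0;
}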
Best regards.

On Tue, 16 May 2023 at 19:04, Morten Brørup <mb@smartsharesystems.com> wrote:

> > From: Stephen Hemminger [mailto:stephen@networkplumber.org]
> > Sent: Tuesday, 16 May 2023 17.24
> >
> > On Tue, 16 May 2023 13:41:46 +0000
> > Yasin CANER <yasinncaner@gmail.com> wrote:
> >
> > > From: Yasin CANER <yasin.caner@ulakhaberlesme.com.tr>
> > >
> > > After a while of operation, the rte_mempool_avail_count() function can
> > > return a value bigger than the mempool size, which leads to a
> > > miscalculated rte_mempool_in_use_count().
> > >
> > > This change helps to avoid the rte_mempool_in_use_count() miscalculation.
> > >
> > > Bugzilla ID: 1229
> > >
> > > Signed-off-by: Yasin CANER <yasin.caner@ulakhaberlesme.com.tr>
> >
> > An alternative that avoids some code duplication:
> >
> > diff --git a/lib/mempool/rte_mempool.c b/lib/mempool/rte_mempool.c
> > index cf5dea2304a7..2406b112e7b0 100644
> > --- a/lib/mempool/rte_mempool.c
> > +++ b/lib/mempool/rte_mempool.c
> > @@ -1010,7 +1010,7 @@ rte_mempool_avail_count(const struct rte_mempool *mp)
> >  	count = rte_mempool_ops_get_count(mp);
> >
> >  	if (mp->cache_size == 0)
> > -		return count;
> > +		goto exit;
>
> This bug can only occur here (i.e. with cache_size == 0) if
> rte_mempool_ops_get_count() returns an incorrect value. The bug should be
> fixed there instead.
>
> >
> >  	for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++)
> >  		count += mp->local_cache[lcore_id].len;
> > @@ -1019,6 +1019,7 @@ rte_mempool_avail_count(const struct rte_mempool *mp)
> >  	 * due to race condition (access to len is not locked), the
> >  	 * total can be greater than size... so fix the result
> >  	 */
> > +exit:
> >  	if (count > mp->size)
> >  		return mp->size;
> >  	return count;
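For readability, this is roughly how rte_mempool_avail_count() would look
with Stephen's change applied, reconstructed from the hunks above; the local
variable declarations and anything else outside the quoted context lines are
assumptions on my part, so treat it as a sketch rather than the exact
upstream source:

unsigned int
rte_mempool_avail_count(const struct rte_mempool *mp)
{
	unsigned int count;	/* declarations assumed, not shown in the diff */
	unsigned int lcore_id;

	count = rte_mempool_ops_get_count(mp);

	if (mp->cache_size == 0)
		goto exit;	/* was: return count; */

	for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++)
		count += mp->local_cache[lcore_id].len;

	/*
	 * due to race condition (access to len is not locked), the
	 * total can be greater than size... so fix the result
	 */
exit:
	if (count > mp->size)
		return mp->size;
	return count;
}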