Date: Tue, 16 May 2023 08:23:49 -0700
From: Stephen Hemminger
To: Yasin CANER
Cc: dev@dpdk.org, Yasin CANER, Olivier Matz, Andrew Rybchenko
Subject: Re: [PATCH] lib/mempool : rte_mempool_avail_count, fixing return bigger than mempool size
Message-ID: <20230516082349.041c0e68@hermes.local>
In-Reply-To: <20230516134146.480047-1-yasinncaner@gmail.com>
References: <20230516134146.480047-1-yasinncaner@gmail.com>
List-Id: DPDK patches and discussions

On Tue, 16 May 2023 13:41:46 +0000
Yasin CANER wrote:

> From: Yasin CANER
>
> after a while working rte_mempool_avail_count function returns bigger
> than mempool size that cause miscalculation rte_mempool_in_use_count.
>
> it helps to avoid miscalculation rte_mempool_in_use_count.
>
> Bugzilla ID: 1229
>
> Signed-off-by: Yasin CANER

An alternative that avoids some code duplication.
diff --git a/lib/mempool/rte_mempool.c b/lib/mempool/rte_mempool.c
index cf5dea2304a7..2406b112e7b0 100644
--- a/lib/mempool/rte_mempool.c
+++ b/lib/mempool/rte_mempool.c
@@ -1010,7 +1010,7 @@ rte_mempool_avail_count(const struct rte_mempool *mp)
 	count = rte_mempool_ops_get_count(mp);
 
 	if (mp->cache_size == 0)
-		return count;
+		goto exit;
 
 	for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++)
 		count += mp->local_cache[lcore_id].len;
@@ -1019,6 +1019,7 @@ rte_mempool_avail_count(const struct rte_mempool *mp)
 	 * due to race condition (access to len is not locked), the
 	 * total can be greater than size... so fix the result
 	 */
+exit:
 	if (count > mp->size)
 		return mp->size;
 	return count;