From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Mon, 17 Jul 2023 09:43:05 -0700
From: Stephen Hemminger
To: Fengnan Chang
Cc: Olivier Matz, david.marchand@redhat.com, mb@smartsharesystems.com,
 dev@dpdk.org
Subject: Re: [External] Re: [PATCH v2] mempool: fix rte_mempool_avail_count
 may segment fault when used in multiprocess
Message-ID: <20230717094305.6035eca1@hermes.local>
References: <20221115123502.12560-1-changfengnan@bytedance.com>

On Tue, 29 Nov 2022 17:57:05 +0800
Fengnan Chang wrote:

> Olivier Matz wrote on Tue, 22 Nov 2022 at 23:25:
> >
> > Hi,
> >
> > On Tue, Nov 15, 2022 at 08:35:02PM +0800, Fengnan Chang wrote:
> > > rte_mempool_create() puts the tailq entry into the rte_mempool_tailq
> > > list before populating the pool, and pool_data is only set during
> > > populate. So in multi-process mode, if process A creates a mempool,
> > > process B can find it through rte_mempool_lookup() before pool_data
> > > is set; if B then calls rte_mempool_avail_count(), it causes a
> > > segmentation fault.
> > >
> > > Fix this by putting the tailq entry into rte_mempool_tailq after
> > > populate.
> > >
> > > Signed-off-by: Fengnan Chang

Why not just handle this in rte_mempool_avail_count? It would be much
simpler there.

diff --git a/lib/mempool/rte_mempool.c b/lib/mempool/rte_mempool.c
index 4d337fca8dcd..14855e21801f 100644
--- a/lib/mempool/rte_mempool.c
+++ b/lib/mempool/rte_mempool.c
@@ -1006,6 +1006,10 @@ rte_mempool_avail_count(const struct rte_mempool *mp)
 	unsigned count;
 	unsigned lcore_id;
 
+	/* Handle race where pool created but ops not allocated yet */
+	if (!(mp->flags & RTE_MEMPOOL_F_POOL_CREATED))
+		return 0;
+
 	count = rte_mempool_ops_get_count(mp);
 
 	if (mp->cache_size == 0)
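
For reference, a minimal sketch (not from the patch; the pool name
"test_pool" and the EAL setup are illustrative) of the secondary-process
path that hits the race described above:

#include <stdio.h>

#include <rte_eal.h>
#include <rte_mempool.h>

int
main(int argc, char **argv)
{
	struct rte_mempool *mp;

	/* Run as secondary: --proc-type=secondary */
	if (rte_eal_init(argc, argv) < 0)
		return -1;

	/* Lookup succeeds as soon as the tailq entry is visible... */
	mp = rte_mempool_lookup("test_pool");
	if (mp == NULL)
		return -1;

	/*
	 * ...but if the primary has not finished populating the pool,
	 * mp->pool_data is still NULL and the ops get_count() callback
	 * dereferences it: segmentation fault.
	 */
	printf("avail: %u\n", rte_mempool_avail_count(mp));

	return 0;
}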
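
Until a fix is merged, an application could mirror that guard itself; a
hedged sketch (mempool_avail_count_safe is a hypothetical helper, not a
DPDK API), assuming the caller can read the public mp->flags field:

#include <rte_mempool.h>

/* Treat a pool whose RTE_MEMPOOL_F_POOL_CREATED flag is not yet set
 * as empty, mirroring the check in the diff above. */
static unsigned int
mempool_avail_count_safe(const struct rte_mempool *mp)
{
	if (!(mp->flags & RTE_MEMPOOL_F_POOL_CREATED))
		return 0;

	return rte_mempool_avail_count(mp);
}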