From mboxrd@z Thu Jan 1 00:00:00 1970
From: David Marchand
Date: Mon, 2 Oct 2023 09:33:50 +0200
Subject: Re: [PATCH] dumpcap: fix mbuf pool ring type
To: Stephen Hemminger
Cc: dev@dpdk.org, Morten Brørup
In-Reply-To: <20230804161604.61050-1-stephen@networkplumber.org>
References: <20230804161604.61050-1-stephen@networkplumber.org>
List-Id: DPDK patches and discussions

On Fri, Aug 4, 2023 at 6:16 PM Stephen Hemminger wrote:
>
> The ring used to store mbufs needs to be multiple producer,
> multiple consumer because multiple queues might on multiple
> cores might be allocating and same time (consume) and in
> case of ring full, the mbufs will be returned (multiple producer).

I think I get the idea, but can you rephrase please?

>
> Bugzilla ID: 1271
> Fixes: cb2440fd77af ("dumpcap: fix mbuf pool ring type")

This Fixes: tag looks wrong.

> Signed-off-by: Stephen Hemminger
> ---
>  app/dumpcap/main.c | 7 +++----
>  1 file changed, 3 insertions(+), 4 deletions(-)
>
> diff --git a/app/dumpcap/main.c b/app/dumpcap/main.c
> index 64294bbfb3e6..991174e95022 100644
> --- a/app/dumpcap/main.c
> +++ b/app/dumpcap/main.c
> @@ -691,10 +691,9 @@ static struct rte_mempool *create_mempool(void)
>                 data_size = mbuf_size;
>         }
>
> -       mp = rte_pktmbuf_pool_create_by_ops(pool_name, num_mbufs,
> -                                           MBUF_POOL_CACHE_SIZE, 0,
> -                                           data_size,
> -                                           rte_socket_id(), "ring_mp_sc");
> +       mp = rte_pktmbuf_pool_create(pool_name, num_mbufs,
> +                                    MBUF_POOL_CACHE_SIZE, 0,
> +                                    data_size, rte_socket_id());

Switching to rte_pktmbuf_pool_create() still leaves the user with the
possibility to shoot himself in the foot (I was thinking of setting
some --mbuf-pool-ops-name EAL option).

This application has explicit requirements in terms of concurrent
access (and I don't think the mempool library exposes per driver
capabilities in that regard).
The application was enforcing the use of mempool/ring so far.

I think it is safer to go with an explicit
rte_pktmbuf_pool_create_by_ops(... "ring_mp_mc").
WDYT?

>         if (mp == NULL)
>                 rte_exit(EXIT_FAILURE,
>                          "Mempool (%s) creation failed: %s\n", pool_name,
> --
> 2.39.2
>

Thanks.

-- 
David Marchand
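
For illustration, a minimal sketch of the explicit-ops approach suggested
above might look like the following. The helper name create_mp_mc_pool and
its parameters are hypothetical; only rte_pktmbuf_pool_create_by_ops() and
the "ring_mp_mc" ops name come from the discussion, and the actual dumpcap
change may differ.

    /* Sketch only: pinning the mempool to the "ring_mp_mc" ops keeps the
     * backing ring multi-producer/multi-consumer regardless of the EAL
     * default (e.g. a --mbuf-pool-ops-name override). */
    #include <rte_lcore.h>
    #include <rte_mbuf.h>
    #include <rte_mempool.h>

    static struct rte_mempool *
    create_mp_mc_pool(const char *pool_name, unsigned int num_mbufs,
                      unsigned int cache_size, uint16_t data_size)
    {
            return rte_pktmbuf_pool_create_by_ops(pool_name, num_mbufs,
                                                  cache_size,
                                                  0 /* priv_size */,
                                                  data_size,
                                                  rte_socket_id(),
                                                  "ring_mp_mc");
    }

As in the error path quoted above, callers would check the returned pointer
against NULL before using the pool.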