From: Stephen Hemminger
To: dev@dpdk.org
Cc: Stephen Hemminger
Subject: [PATCH v3] dumpcap: fix mbuf pool ring type
Date: Wed, 8 Nov 2023 09:47:38 -0800
Message-Id: <20231108174738.185933-1-stephen@networkplumber.org>
In-Reply-To: <20230804161604.61050-1-stephen@networkplumber.org>
References: <20230804161604.61050-1-stephen@networkplumber.org>
List-Id: DPDK patches and discussions

The internal buffer pool used for copies of captured mbufs needs to be
thread safe. When capturing on multiple interfaces or multiple queues,
the same pool is shared, so there are multiple consumers. And if the
capture ring fills up, each queue has to put its capture buffers back
into the pool, which creates multiple producers as well.
Bugzilla ID: 1271
Fixes: cbb44143be74 ("app/dumpcap: add new packet capture application")
Signed-off-by: Stephen Hemminger
---
v3 - just change ops, don't use default pool ops

 app/dumpcap/main.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/app/dumpcap/main.c b/app/dumpcap/main.c
index 64294bbfb3e6..4f581bd341d8 100644
--- a/app/dumpcap/main.c
+++ b/app/dumpcap/main.c
@@ -694,7 +694,7 @@ static struct rte_mempool *create_mempool(void)
 	mp = rte_pktmbuf_pool_create_by_ops(pool_name, num_mbufs,
 					    MBUF_POOL_CACHE_SIZE, 0,
 					    data_size,
-					    rte_socket_id(), "ring_mp_sc");
+					    rte_socket_id(), "ring_mp_mc");
 	if (mp == NULL)
 		rte_exit(EXIT_FAILURE,
 			 "Mempool (%s) creation failed: %s\n", pool_name,
-- 
2.39.2
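For readers less familiar with mempool ops: the effect of the one-line change is that the pool's backing ring becomes multi-producer/multi-consumer instead of multi-producer/single-consumer. A minimal standalone sketch of creating such a pool follows; this is not dumpcap's code, and the pool name, sizes, and cache value are illustrative placeholders, not the values the application uses.

```c
/* Sketch: create a pktmbuf pool backed by a thread-safe MP/MC ring,
 * as dumpcap needs when several queues both take buffers from and
 * return buffers to the same pool.  Placeholder values throughout. */
#include <stdlib.h>

#include <rte_eal.h>
#include <rte_debug.h>
#include <rte_mbuf.h>
#include <rte_mempool.h>

#define NUM_MBUFS   8192	/* illustrative pool size */
#define CACHE_SIZE  32		/* illustrative per-lcore cache */

int main(int argc, char **argv)
{
	struct rte_mempool *mp;

	if (rte_eal_init(argc, argv) < 0)
		rte_exit(EXIT_FAILURE, "EAL init failed\n");

	/* "ring_mp_mc" selects multi-producer/multi-consumer ring ops;
	 * "ring_mp_sc" would be safe for only a single consumer thread. */
	mp = rte_pktmbuf_pool_create_by_ops("capture_pool", NUM_MBUFS,
					    CACHE_SIZE, 0,
					    RTE_MBUF_DEFAULT_BUF_SIZE,
					    rte_socket_id(), "ring_mp_mc");
	if (mp == NULL)
		rte_exit(EXIT_FAILURE, "mempool creation failed\n");

	rte_mempool_free(mp);
	return 0;
}
```

The single-consumer variant is slightly cheaper per dequeue, which is presumably why it was chosen originally; it is only correct while exactly one thread ever allocates from the pool.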