From: "Lombardo, Ed"
To: Ivan Malov
Cc: Stephen Hemminger, users
Subject: RE: dpdk Tx falling short
Date: Wed, 9 Jul 2025 21:58:41 +0000
Hi Ivan,

Do you see any benefit to creating two mempools, one per NUMA node, versus both on the same NUMA node as the NIC?

If I try creating hugepage memory on both NUMA nodes and the associated mempools, do I need a DPDK lcore on each NUMA node, or can I get by with one lcore (strictly for housekeeping)? We use POSIX threads in our application, which DPDK knows nothing about.

Thanks,
Ed

-----Original Message-----
From: Lombardo, Ed
Sent: Tuesday, July 8, 2025 9:09 PM
To: Ivan Malov
Cc: Stephen Hemminger; users
Subject: RE: dpdk Tx falling short

Hi Ivan,
I added two mempools per port pair as you suggested. Tx performance improved, and turning on one port pair no longer affects the second port pair. The tx ring no longer fills up; it drains to near empty.

Improved: Tx went from 1.5 Mpps to 8.3 Mpps, and from 1.5 Mpps to 11.2 Mpps.

I need to do the perf analysis again but wanted to give you the results. I still need to improve Tx performance further, but this is a much-needed breakthrough (with your help).

Thanks,
Ed

-----Original Message-----
From: Ivan Malov
Sent: Tuesday, July 8, 2025 12:53 PM
To: Lombardo, Ed
Cc: Stephen Hemminger; users
Subject: RE: dpdk Tx falling short

Hi Ed,

On Tue, 8 Jul 2025, Lombardo, Ed wrote:

> Hi Ivan,
> Thanks, this clears up my confusion. Using API [2] to create one mempool for the network Rx and Tx queues means it must be MP/MC. The CPU cycles spent in common_ring_mp_enqueue increase as more ports are transmitting. Does transmitting make the Rx and Tx queues fight for access to the mbuf mempool because there is only one mempool?

Not really. Mempools in DPDK in general (and, in particular, as shown in your monitor printout) have a per-lcore object cache, which, if I'm not mistaken, is there to avoid such contention when accessing the pool. And, since only a single pool is used in your case, the use of MP/MC seems logical, as does the use of the per-lcore object cache. But it's not obvious whether this is optimal in your case.
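For what it's worth, a minimal sketch of how one pool per port (or per port pair) could be created on that port's NUMA node with rte_pktmbuf_pool_create() (API [2]) is below; the object count, cache size and pool naming are illustrative assumptions, not values from the application in this thread:

    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    /* Illustrative sizes only; tune for the real traffic profile. */
    #define NB_MBUF    262144
    #define CACHE_SIZE 512

    static struct rte_mempool *
    create_pool_near_port(uint16_t port_id, const char *name)
    {
        /* Place the pool on the NUMA node the port is attached to. */
        int socket = rte_eth_dev_socket_id(port_id);

        /* A non-zero cache_size gives every lcore a private object cache,
         * so most mbuf alloc/free calls never touch the shared MP/MC ring
         * that backs the pool. */
        return rte_pktmbuf_pool_create(name, NB_MBUF, CACHE_SIZE,
                                       0, RTE_MBUF_DEFAULT_BUF_SIZE, socket);
    }

Each port pair's Rx and Tx queues would then be set up with their own pool, which also keeps the MBUF_FAST_FREE prerequisite (all mbufs on a given Tx queue coming from one mempool) easy to satisfy.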
> This is why you suggested creating two mempools, one for each pair of ports.

It could be low-hanging fruit to do a quick check with two separate mempools, probably also MP/MC (allocated via the same API [2]), to learn whether it affects performance or not. Again, as Stephen noted, this may even worsen CPU cache performance, but it may still pay to do a quick check after all.

> If I go this route, what are the precautions I need to take?
>
> I will try the RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE offload flag first.

This is somewhat unrelated to pools and rings, yet it should enable the PMD's internal Tx handling to accumulate bulks of mbufs to be freed upon transmission via bulk operations, which, akin to Tx and Rx bursts, may also improve CPU cache utilisation and overall performance. The only prerequisite is that all mbufs passed to a given Tx queue have to come from the same mempool. Hopefully this holds for you, provided the logic does not intermix packets from the two pools into the same Tx queue.

Maybe Stephen's suggestion to use the Tx buffer API is also worth a shot.

Thank you.

> Thanks,
> Ed
>
> -----Original Message-----
> From: Ivan Malov
> Sent: Tuesday, July 8, 2025 10:49 AM
> To: Lombardo, Ed
> Cc: Stephen Hemminger; users
> Subject: RE: dpdk Tx falling short
>
> On Tue, 8 Jul 2025, Lombardo, Ed wrote:
>
>> Hi Ivan,
>> Yes, only the user space created rings.
>> Can you add more to your thoughts?
>
> I was seeking to address the probable confusion here. If the application creates an SC / MP ring for its own pipeline logic using API [1] and then invokes another API [2] to create a common "mbuf mempool" to be used with the Rx and Tx queues of the network ports, then the observed appearance of "common_ring_mp_enqueue" is likely attributable to the fact that API [2] creates a ring-based mempool internally, in MP / MC mode by default. The latter ring is not the same as the one created by the application logic. These are two independent rings.
>
> BTW, does your application set the RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE offload flag when configuring Tx port/queue offloads on the network ports?
>
> Thank you.
>
> [1] https://urldefense.com/v3/__https://doc.dpdk.org/api-25.03/rte__ring_8h.html*a155cb48ef311eddae9b2e34808338b17__;Iw!!Nzg7nt7_!GXTS2DQR0JZFGhdahtcpSBjmoh-AZ4dzS73R_J9A1I0JxlORLHvylHea80X_KHTZRcZV4qcMEvJd7Z7izij40zap9fvA$
> [2] https://urldefense.com/v3/__https://doc.dpdk.org/api-25.03/rte__mbuf_8h.html*a8f4abb0d54753d2fde515f35c1ba402a__;Iw!!Nzg7nt7_!GXTS2DQR0JZFGhdahtcpSBjmoh-AZ4dzS73R_J9A1I0JxlORLHvylHea80X_KHTZRcZV4qcMEvJd7Z7izij407rwGv1P$
> [3] https://urldefense.com/v3/__https://doc.dpdk.org/api-25.03/rte__mempool_8h.html*a0b64d611bc140a4d2a0c94911580efd5__;Iw!!Nzg7nt7_!GXTS2DQR0JZFGhdahtcpSBjmoh-AZ4dzS73R_J9A1I0JxlORLHvylHea80X_KHTZRcZV4qcMEvJd7Z7izij402Z4uOww$
>
>> Ed
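Regarding the RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE flag asked about above, a hedged sketch of how it is typically requested at configure time follows; the queue count, descriptor count and helper name are assumptions, not taken from the application in this thread:

    #include <rte_ethdev.h>

    static int
    configure_port_with_fast_free(uint16_t port_id, uint16_t nb_txd)
    {
        struct rte_eth_dev_info dev_info;
        struct rte_eth_conf port_conf = { 0 };
        struct rte_eth_txconf txconf;
        int ret;

        ret = rte_eth_dev_info_get(port_id, &dev_info);
        if (ret != 0)
            return ret;

        /* Only request the offload if the PMD advertises it. */
        if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
            port_conf.txmode.offloads |= RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;

        /* One Rx and one Tx queue here purely for illustration. */
        ret = rte_eth_dev_configure(port_id, 1, 1, &port_conf);
        if (ret != 0)
            return ret;

        /* Propagate the port-level offloads into the Tx queue config. */
        txconf = dev_info.default_txconf;
        txconf.offloads = port_conf.txmode.offloads;

        return rte_eth_tx_queue_setup(port_id, 0, nb_txd,
                                      rte_eth_dev_socket_id(port_id), &txconf);
    }

The prerequisite mentioned above still applies: with this flag set, every mbuf handed to that Tx queue must come from a single mempool and must not be multi-referenced.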
>> -----Original Message-----
>> From: Ivan Malov
>> Sent: Tuesday, July 8, 2025 10:19 AM
>> To: Lombardo, Ed
>> Cc: Stephen Hemminger; users
>> Subject: RE: dpdk Tx falling short
>>
>> Hi Ed,
>>
>> On Tue, 8 Jul 2025, Lombardo, Ed wrote:
>>
>>> Hi Stephen,
>>> When I replace rte_eth_tx_burst() with an mbuf free bulk, I do not see the tx ring fill up. I think this is valuable information. Also, perf analysis of the tx thread shows common_ring_mp_enqueue and rte_atomic32_cmpset, which I did not expect to see since I created all the Tx rings as SP and SC (and the worker and ack rings as well, essentially all 16 rings).
>>>
>>> Perf report snippet:
>>> + 57.25%  DPDK_TX_1  test  [.] common_ring_mp_enqueue
>>> + 25.51%  DPDK_TX_1  test  [.] rte_atomic32_cmpset
>>> +  9.13%  DPDK_TX_1  test  [.] i40e_xmit_pkts
>>> +  6.50%  DPDK_TX_1  test  [.] rte_pause
>>>    0.21%  DPDK_TX_1  test  [.] rte_mempool_ops_enqueue_bulk.isra.0
>>>    0.20%  DPDK_TX_1  test  [.] dpdk_tx_thread
>>>
>>> The traffic load is a constant 10 Gbps of 84-byte packets with no idle periods. The burst size of 512 is the desired burst of mbufs; however, the tx thread will transmit whatever it can get from the Tx ring.
>>>
>>> I think resolving why the perf analysis shows the ring as MP when it was created as SP / SC should resolve this issue.
>>
>> The 'common_ring_mp_enqueue' is the enqueue method of the mempool variant 'ring', that is, one based on an RTE ring internally. When you say the ring has been created as SP / SC, you seemingly refer to the regular RTE ring created by your application logic, not the internal ring of the mempool. Am I missing something?
>>
>> Thank you.
>>
>>> Thanks,
>>> ed
>>>
>>> -----Original Message-----
>>> From: Stephen Hemminger
>>> Sent: Tuesday, July 8, 2025 9:47 AM
>>> To: Lombardo, Ed
>>> Cc: Ivan Malov; users
>>> Subject: Re: dpdk Tx falling short
>>>
>>> On Tue, 8 Jul 2025 04:10:05 +0000
>>> "Lombardo, Ed" wrote:
>>>
>>>> Hi Stephen,
>>>> I ensured that every pipeline stage that enqueues or dequeues mbufs uses the burst version; perf showed the repercussions of doing single-mbuf dequeue and enqueue.
>>>> For the receive stage rte_eth_rx_burst() is used, and in the Tx stage we use rte_eth_tx_burst(). The burst size used in tx_thread for the dequeue burst is 512 mbufs.
>>>
>>> You might try buffering like rte_eth_tx_buffer does.
>>> Need to add an additional mechanism to ensure that buffer gets flushed when you detect an idle period.
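To round this out, a rough sketch of the rte_eth_tx_buffer approach suggested above, including a time-based flush for idle periods, might look like the following; the loop name, burst size, drain interval and port/queue IDs are all assumptions:

    #include <rte_cycles.h>
    #include <rte_ethdev.h>
    #include <rte_malloc.h>
    #include <rte_mbuf.h>
    #include <rte_ring.h>

    #define TX_BURST  512   /* matches the dequeue burst size mentioned above */
    #define DRAIN_US  100   /* flush a partially filled buffer after 100 us of idle */

    static void
    dpdk_tx_loop(uint16_t port_id, uint16_t queue_id, struct rte_ring *tx_ring)
    {
        const uint64_t drain_tsc =
            (rte_get_tsc_hz() + US_PER_S - 1) / US_PER_S * DRAIN_US;
        struct rte_eth_dev_tx_buffer *buffer;
        struct rte_mbuf *pkts[TX_BURST];
        uint64_t prev_tsc = rte_get_tsc_cycles();
        unsigned int n, i;

        buffer = rte_zmalloc_socket("tx_buffer", RTE_ETH_TX_BUFFER_SIZE(TX_BURST),
                                    0, rte_eth_dev_socket_id(port_id));
        if (buffer == NULL)
            return;
        rte_eth_tx_buffer_init(buffer, TX_BURST);

        for (;;) {
            n = rte_ring_dequeue_burst(tx_ring, (void **)pkts, TX_BURST, NULL);
            for (i = 0; i < n; i++)
                /* Buffers the mbuf; a burst is transmitted automatically
                 * once TX_BURST packets have accumulated. */
                rte_eth_tx_buffer(port_id, queue_id, buffer, pkts[i]);

            uint64_t cur_tsc = rte_get_tsc_cycles();
            if (n > 0) {
                prev_tsc = cur_tsc;
            } else if (cur_tsc - prev_tsc > drain_tsc) {
                /* Ring ran dry: push out whatever is sitting in the buffer. */
                rte_eth_tx_buffer_flush(port_id, queue_id, buffer);
                prev_tsc = cur_tsc;
            }
        }
    }

rte_eth_tx_buffer_init() installs a default error callback that frees any packets the PMD could not accept, so unsent mbufs are not leaked; a counting callback can be installed with rte_eth_tx_buffer_set_err_callback() if drops need to be tracked.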