From: Honnappa Nagarahalli <Honnappa.Nagarahalli@arm.com>
To: Bruce Richardson, Morten Brørup
CC: dev@dpdk.org, olivier.matz@6wind.com, andrew.rybchenko@oktetlabs.ru,
 Kamalakshitha Aligeri, nd
Subject: RE: [RFC] mempool: zero-copy cache put bulk
Date: Fri, 11 Nov 2022 04:24:52 +0000

> > > > > > > From: Honnappa Nagarahalli [mailto:Honnappa.Nagarahalli@arm.com]
> > > > > > > Sent: Sunday, 6 November 2022 00.11
> > > > > > >
> > > > > > > + Akshitha, she is working on similar patch
> > > > > > >
> > > > > > > Few comments inline
> > > > > > >
> > > > > > > > From: Morten Brørup
> > > > > > > > Sent: Saturday, November 5, 2022 8:40 AM
> > > > > > > >
> > > > > > > > Zero-copy access to the mempool cache is beneficial for PMD
> > > > > > > > performance, and must be provided by the mempool library to
> > > > > > > > fix [Bug 1052] without a performance regression.
> > > > > > > >
> > > > > > > > [Bug 1052]: https://bugs.dpdk.org/show_bug.cgi?id=1052
> > > > > > > >
> > > > > > > > This RFC offers a conceptual zero-copy put function, where the
> > > > > > > > application promises to store some objects, and in return gets
> > > > > > > > an address where to store them.
> > > > > > > >
> > > > > > > > I would like some early feedback.
> > > > > > > >
> > > > > > > > Notes:
> > > > > > > > * Allowing the 'cache' parameter to be NULL, and getting it
> > > > > > > >   from the mempool instead, was inspired by rte_mempool_cache_flush().
> > > > > > > I am not sure why the 'cache' parameter is required for this API.
> > > > > > > This API should take the mempool as the parameter.
> > > > > > >
> > > > > > > We have based our API on 'rte_mempool_do_generic_put' and removed
> > > > > > > the 'cache' parameter.
> > > > > >
> > > > > > I thoroughly considered omitting the 'cache' parameter, but included
> > > > > > it for two reasons:
> > > > > >
> > > > > > 1. The function is a "mempool cache" function (i.e. primarily working
> > > > > > on the mempool cache), not a "mempool" function.
> > > > > >
> > > > > > So it is appropriate to have a pointer directly to the structure it
> > > > > > is working on. Following this through, I also made 'cache' the first
> > > > > > parameter and 'mp' the second, like in rte_mempool_cache_flush().
> > > > > I am wondering if the PMD should be aware of the cache or not. For ex:
> > > > > in the case of pipeline mode, the RX and TX side of the PMD are
> > > > > running on different cores.
> > > >
> > > > In that example, the PMD can store two cache pointers, one for each of
> > > > the RX and TX side.
> > > I did not understand this. If RX core and TX core have their own
> > > per-core caches the logic would not work. For ex: the RX core cache
> > > would not get filled.
> > >
> > > In the case of pipeline mode, there will not be a per-core cache.
> > > The buffers would be allocated and freed from a global ring or a
> > > global lockless stack.
> >
> > Aha... Now I understand what you mean: You are referring to use cases
> > where the mempool is configured to *not* have a mempool cache.
> >
> > For a mempool without a mempool cache, the proposed "mempool cache"
> > zero-copy functions can obviously not be used.
> >
> > We need "mempool" zero-copy functions for the mempools that have no
> > mempool cache.
> >
> > However, those functions depend on the mempool's underlying backing
> > store.
> >
> > E.g. zero-copy access to a ring has certain requirements [1].
> >
> > [1]: http://doc.dpdk.org/guides/prog_guide/ring_lib.html#ring-peek-zero-copy-api
> >
> > For a stack, I think it is possible to locklessly zero-copy pop objects.
> > But it is impossible to locklessly zero-copy push elements to a stack;
> > another thread can race to pop some objects from the stack before the
> > pushing thread has finished writing them into the stack.
> >
> > Furthermore, the ring zero-copy get function cannot return a consecutive
> > array of objects when wrapping, and PMD functions using vector
> > instructions usually rely on handling chunks of e.g. 8 objects.
> >
> > Just for a second, let me theorize into the absurd: Even worse, if a
> > mempool's underlying backing store does not use an array of pointers as
> > its internal storage structure, it is impossible to use a pointer to an
> > array of pointers for zero-copy transactions. E.g. if the backing store
> > uses a list or a tree structure for its storage, a pointer to somewhere
> > in the list or tree structure is not an array of object pointers.
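
For reference, the ring zero-copy (peek) API in [1] illustrates both points:
it only works for rings created in single-thread or HTS sync modes, and the
space it hands back may come as two segments when the ring wraps, so a caller
that needs one contiguous array of n pointers cannot always be served. Below
is a rough sketch of a zero-copy enqueue along those lines; the function and
field names follow the ring library guide, but treat this as an illustrative
sketch rather than verified, version-specific code.

#include <string.h>
#include <rte_ring.h>
#include <rte_ring_peek_zc.h> /* ring must use RTE_RING_SYNC_ST or HTS mode */

static int
ring_put_zc(struct rte_ring *r, void * const objs[], unsigned int n)
{
        struct rte_ring_zc_data zcd;

        /* Reserve space for n object pointers directly inside the ring. */
        if (rte_ring_enqueue_zc_bulk_start(r, n, &zcd, NULL) != n)
                return -1; /* no room, caller falls back to the copying API */

        /* The reserved area may wrap: zcd.n1 slots at zcd.ptr1 and the
         * remainder at zcd.ptr2, i.e. not one contiguous array. */
        memcpy(zcd.ptr1, objs, zcd.n1 * sizeof(void *));
        if (zcd.n1 != n)
                memcpy(zcd.ptr2, objs + zcd.n1, (n - zcd.n1) * sizeof(void *));

        rte_ring_enqueue_zc_finish(r, n);
        return 0;
}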
> >
> > Anyway, we could consider designing a generic API for zero-copy mempool
> > get/put; but it should be compatible with all underlying backing stores -
> > or return failure, so the PMD can fall back to the standard functions, if
> > the mempool is in a state where zero-copy access to a contiguous burst
> > cannot be provided. E.g. zero-copy get from a ring can return failure when
> > zero-copy access to the ring is temporarily unavailable due to being at a
> > point where it would wrap.
> >
> > Here is a conceptual proposal for such an API.
> >
> > /* Mempool zero-copy transaction state. Opaque outside the mempool API. */
> > struct rte_mempool_zc_transaction_state {
> >         char opaque[RTE_CACHE_LINE_SIZE];
> > };
> >
> > /** Start zero-copy get/put bulk transaction.
> >  *
> >  * @param[in] mp
> >  *   Pointer to the mempool.
> >  * @param[out] obj_table_ptr
> >  *   Where to store the pointer to
> >  *   the zero-copy array of objects in the mempool.
> >  * @param[in] n
> >  *   Number of objects in the transaction.
> >  * @param[in] cache
> >  *   Pointer to the mempool cache. May be NULL if unknown.
> >  * @param[out] transaction
> >  *   Where to store the opaque transaction information.
> >  *   Used internally by the mempool library.
> >  * @return
> >  *   - 1: Transaction completed;
> >  *        '_finish' must not be called.
> >  *   - 0: Transaction started;
> >  *        '_finish' must be called to complete the transaction.
> >  *   - <0: Error; failure code.
> >  */
> > static __rte_always_inline int
> > rte_mempool_get/put_zc_bulk_start(
> >         struct rte_mempool *mp,
> >         void ***obj_table_ptr,
> >         unsigned int n,
> >         struct rte_mempool_cache *cache,
> >         rte_mempool_zc_transaction_state *transaction);
> >
> > /** Finish zero-copy get/put bulk transaction.
> >  *
> >  * @param[in] mp
> >  *   Pointer to the mempool.
> >  * @param[in] obj_table_ptr
> >  *   Pointer to the zero-copy array of objects in the mempool,
> >  *   returned by the 'start' function.
> >  * @param[in] n
> >  *   Number of objects in the transaction.
> >  *   Must be the same as for the 'start' function.
> >  * @param[in] transaction
> >  *   Opaque transaction information,
> >  *   returned by the 'start' function.
> >  *   Used internally by the mempool library.
> >  */
> > static __rte_always_inline void
> > rte_mempool_get/put_zc_bulk_finish(
> >         struct rte_mempool *mp,
> >         void **obj_table,
> >         unsigned int n,
> >         rte_mempool_zc_transaction_state *transaction);
> >
> > Note that these are *bulk* functions, so 'n' has to remain the same for a
> > 'finish' call as it was for the 'start' call of a transaction.
> >
> > And then the underlying backing stores would need to provide callbacks
> > that implement these functions, if they offer zero-copy functionality.
> >
> > The mempool implementation of these could start by checking for a
> > mempool cache, and use the "mempool cache" zero-copy if present.
> >
> > Some internal state information (from the mempool library or the
> > underlying mempool backing store) may need to be carried over from the
> > 'start' to the 'finish' function, so I have added a transaction state
> > parameter. The transaction state must be held by the application for
> > thread safety reasons. Think of this like the 'flags' parameter to the
> > Linux kernel's spin_lock_irqsave/irqrestore() functions.
> >
> > We could omit the 'obj_table' and 'n' parameters from the 'finish'
> > functions and store them in the transaction state if needed; but we might
> > possibly achieve higher performance by passing them as parameters instead.
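
To make the intended calling sequence concrete, here is a hypothetical caller
of the conceptual API above, with 'put' substituted for the 'get/put'
placeholder. None of the zero-copy symbols exist in DPDK today; they simply
mirror the proposal. The fallback path uses the existing
rte_mempool_default_cache() and rte_mempool_put_bulk().

#include <rte_lcore.h>
#include <rte_mempool.h>

/* Hypothetical TX free path using the conceptual zero-copy put transaction. */
static void
tx_free_bufs_zc(struct rte_mempool *mp, void *objs[], unsigned int n)
{
        struct rte_mempool_cache *cache =
                rte_mempool_default_cache(mp, rte_lcore_id());
        struct rte_mempool_zc_transaction_state txn;
        void **slots;
        unsigned int i;
        int ret;

        ret = rte_mempool_put_zc_bulk_start(mp, &slots, n, cache, &txn);
        if (ret < 0) {
                /* Zero-copy not possible here; fall back to the copying API. */
                rte_mempool_put_bulk(mp, objs, n);
                return;
        }

        /* Write the object pointers straight into the mempool's storage. */
        for (i = 0; i < n; i++)
                slots[i] = objs[i];

        if (ret == 0) /* per the proposal, 1 means already completed */
                rte_mempool_put_zc_bulk_finish(mp, slots, n, &txn);
}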
> >
> > > >
> > > > And if the PMD is unaware of the cache pointer, it can look it up at
> > > > runtime using rte_lcore_id(), like it does in the current Intel PMDs.
> > > >
> > > > > However, since the rte_mempool_cache_flush API is provided, may
> > > > > be that decision is already done? Interestingly,
> > > > > rte_mempool_cache_flush is called by just a single PMD.
> > > >
> > > > I intentionally aligned this RFC with rte_mempool_cache_flush() to
> > > > maintain consistency.
> > > >
> > > > However, the API is not set in stone. It should always be acceptable
> > > > to consider improved alternatives.
> > > >
> > > > > So, the question is, should we allow zero-copy only for per-core
> > > > > cache or for other cases as well.
> > > >
> > > > I suppose that the mempool library was designed to have a mempool
> > > > associated with exactly one mempool cache per core. (Alternatively,
> > > > the mempool can be configured with no mempool caches at all.)
> > > >
> > > > We should probably stay loyal to that design concept, and only allow
> > > > zero-copy for per-core cache.
> > > >
> > > > If you can come up with an example of the opposite, I would like to
> > > > explore that option too... I can't think of a good example myself,
> > > > and perhaps I'm overlooking a relevant use case.
> > > The use case I am talking about is the pipeline mode as I mentioned
> > > above. Let me know if you agree.
> >
> > I see what you mean, and I don't object. :-)
> >
> > However, I still think the "mempool cache" zero-copy functions could be
> > useful.
> >
> > They would be needed for the generic "mempool" zero-copy functions
> > anyway.
> >
> > And the "mempool cache" zero-copy functions are much simpler to design,
> > implement and use than the "mempool" zero-copy functions, so it is a
> > good first step.
>
> I would think that even in pipeline mode applications a mempool cache
> would still be very useful, as it would likely reduce the number of
> accesses to the underlying shared ring or stack resources.
>
> For example, if, as is common, the application works in bursts of up to
> 32, but the mempool cache was configured as 128, it means that the TX
> side would flush buffers to the shared pool at most every 4 bursts, and
> likely less frequently than that due to the bursts themselves not always
> being the full 32. Since accesses to the underlying ring and stack tend
> to be slow due to locking or atomic operations, the more you can reduce
> the accesses the better.
>
> Therefore, I would very much consider use of a mempool without a cache
> as an edge case - one we need to support, but not optimize for, since
> mempool accesses without a cache would already be rather slow.

Understood. Looks like supporting APIs for per-core cache is enough for now.
I do not see a lot of advantages for mempool without a cache.

> My 2c here.
> /Bruce
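
As a closing illustration of what a per-core-cache zero-copy put boils down
to, here is a rough sketch built only on the existing public fields of
struct rte_mempool_cache. The function name is made up for the example, and
the flush-threshold and bounds handling are simplified compared to what a
real implementation would need.

#include <rte_mempool.h>

/* Illustrative only: hand the caller n slots inside the per-lcore cache so
 * it can store freed object pointers there directly, instead of passing an
 * array to rte_mempool_put_bulk() and having the library copy it into the
 * cache. */
static inline void **
cache_zc_put_bulk_sketch(struct rte_mempool_cache *cache,
                struct rte_mempool *mp, unsigned int n)
{
        void **slots;

        if (cache->len + n > cache->flushthresh) {
                /* No room: spill the cached objects to the backing store. */
                rte_mempool_ops_enqueue_bulk(mp, cache->objs, cache->len);
                cache->len = 0;
        }
        /* Bounds checks (n larger than the cache itself) are omitted here. */
        slots = &cache->objs[cache->len];
        cache->len += n;
        return slots; /* caller writes its n object pointers into slots[] */
}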