From mboxrd@z Thu Jan 1 00:00:00 1970
From: Rma Ma <rma.ma@jaguarmicro.com>
To: Morten Brørup <mb@smartsharesystems.com>, dpdk-dev <dev@dpdk.org>, Olivier Matz <olivier.matz@6wind.com>, Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Cc: Ashwin Sekhar T K <asekhar@marvell.com>, Pavan Nikhilesh <pbhagavatula@marvell.com>, Harman Kalra <hkalra@marvell.com>, Hemant Agrawal <hemant.agrawal@nxp.com>, Sachin Saxena <sachin.saxena@oss.nxp.com>
Subject: Re: [PATCH v1] mempool: fix some errors in html api
Date: Wed, 2 Aug 2023 09:46:03 +0000
References: <20230703061743.10576-1-rma.ma@jaguarmicro.com> <98CBD80474FA8B44BF855DF32C47DC35D87AB7@smartserver.smartshare.dk>
In-Reply-To: <98CBD80474FA8B44BF855DF32C47DC35D87AB7@smartserver.smartshare.dk>
>
> So, how do we want to fix this:
>
> 1. Specify the allowed return values in the documentation for the mempool
> library callback types, and require the mempool drivers to follow the updated
> specification?
> 2. Update mempool API documentation to list the many currently possible error
> return values (-ENOBUFS, -ENOENT, -ENOMEM, and -EPERM (= -1))?
> 3. Update mempool API documentation to say something along the lines of
> "negative return value indicates error; rte_errno is not set if -1"?
> 4. Switch to a general convention of returning -1 and setting rte_errno in all
> DPDK APIs, starting with the mempool library? However, this would still require a
> documented list of possible rte_errno values set by the functions - essentially
> the problem would remain the same: "possible return values" vs. "possible
> rte_errno values".
>
> Personally, I am in strong favor of any option that tightens the API specification
> and treats non-conformance as bugs.

Thanks for your detailed reply!

I couldn't agree more with your third option, which is the most general.
Best wishes,
Rma

________________________________
From: Morten Brørup <mb@smartsharesystems.com>
Sent: Wednesday, August 2, 2023 16:58
To: Rma Ma <rma.ma@jaguarmicro.com>; dpdk-dev <dev@dpdk.org>; Olivier Matz <olivier.matz@6wind.com>; Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Cc: Ashwin Sekhar T K <asekhar@marvell.com>; Pavan Nikhilesh <pbhagavatula@marvell.com>; Harman Kalra <hkalra@marvell.com>; Hemant Agrawal <hemant.agrawal@nxp.com>; Sachin Saxena <sachin.saxena@oss.nxp.com>
Subject: RE: [PATCH v1] mempool: fix some errors in html api

+CC various mempool driver maintainers

> From: Rma Ma [mailto:rma.ma@jaguarmicro.com]
> Sent: Monday, 3 July 2023 08.18
>
> This patch fixes some incorrect descriptions of
> return values in the mempool API which appear in the html api.
>
> Signed-off-by: Rma Ma <rma.ma@jaguarmicro.com>
> ---
>  lib/mempool/rte_mempool.h | 12 ++++++------
>  1 file changed, 6 insertions(+), 6 deletions(-)
>
> diff --git a/lib/mempool/rte_mempool.h b/lib/mempool/rte_mempool.h
> index 160975a7e7..d4d707533a 100644
> --- a/lib/mempool/rte_mempool.h
> +++ b/lib/mempool/rte_mempool.h
> @@ -1610,7 +1610,7 @@ rte_mempool_do_generic_get(struct rte_mempool *mp, void
> **obj_table,
>   * Get several objects from the mempool.
>   *
>   * If cache is enabled, objects will be retrieved first from cache,
> - * subsequently from the common pool. Note that it can return -ENOENT when
> + * subsequently from the common pool. Note that it can return -ENOBUFS when
>   * the local cache and common pool are empty, even if cache from other
>   * lcores are full.
>   *
> @@ -1624,7 +1624,7 @@ rte_mempool_do_generic_get(struct rte_mempool *mp, void
> **obj_table,
>   *   A pointer to a mempool cache structure. May be NULL if not needed.
>   * @return
>   *   - 0: Success; objects taken.
> - *   - -ENOENT: Not enough entries in the mempool; no object is retrieved.
> + *   - -ENOBUFS: Not enough entries in the mempool; no object is retrieved.
>   */
>  static __rte_always_inline int
>  rte_mempool_generic_get(struct rte_mempool *mp, void **obj_table,
> @@ -1646,7 +1646,7 @@ rte_mempool_generic_get(struct rte_mempool *mp, void
> **obj_table,
>   * mempool creation time (see flags).
>   *
>   * If cache is enabled, objects will be retrieved first from cache,
> - * subsequently from the common pool. Note that it can return -ENOENT when
> + * subsequently from the common pool. Note that it can return -ENOBUFS when
>   * the local cache and common pool are empty, even if cache from other
>   * lcores are full.
>   *
> @@ -1658,7 +1658,7 @@ rte_mempool_generic_get(struct rte_mempool *mp, void
> **obj_table,
>   *   The number of objects to get from the mempool to obj_table.
>   * @return
>   *   - 0: Success; objects taken
> - *   - -ENOENT: Not enough entries in the mempool; no object is retrieved.
> + *   - -ENOBUFS: Not enough entries in the mempool; no object is retrieved.
>   */
>  static __rte_always_inline int
>  rte_mempool_get_bulk(struct rte_mempool *mp, void **obj_table, unsigned int
> n)
> @@ -1677,7 +1677,7 @@ rte_mempool_get_bulk(struct rte_mempool *mp, void
> **obj_table, unsigned int n)
>   * mempool creation (see flags).
>   *
>   * If cache is enabled, objects will be retrieved first from cache,
> - * subsequently from the common pool. Note that it can return -ENOENT when
> + * subsequently from the common pool. Note that it can return -ENOBUFS when
>   * the local cache and common pool are empty, even if cache from other
>   * lcores are full.
>   *
> @@ -1687,7 +1687,7 @@ rte_mempool_get_bulk(struct rte_mempool *mp, void
> **obj_table, unsigned int n)
>   *   A pointer to a void * pointer (object) that will be filled.
>   * @return
>   *   - 0: Success; objects taken.
> - *   - -ENOENT: Not enough entries in the mempool; no object is retrieved.
> + *   - -ENOBUFS: Not enough entries in the mempool; no object is retrieved.
>   */
>  static __rte_always_inline int
>  rte_mempool_get(struct rte_mempool *mp, void **obj_p)
> --
> 2.17.1

Unfortunately, it is not that simple...

Here is the list of where mempool drivers are registered:
https://elixir.bootlin.com/dpdk/v23.07/C/ident/RTE_MEMPOOL_REGISTER_OPS

Some of the mempool drivers return -ENOBUFS, e.g.
the ring driver and the stack driver:
https://elixir.bootlin.com/dpdk/v23.07/source/drivers/mempool/ring/rte_mempool_ring.c#L145
https://elixir.bootlin.com/dpdk/v23.07/source/drivers/mempool/ring/rte_mempool_ring.c#L48
https://elixir.bootlin.com/dpdk/v23.07/source/drivers/mempool/stack/rte_mempool_stack.c#L78
https://elixir.bootlin.com/dpdk/v23.07/source/drivers/mempool/stack/rte_mempool_stack.c#L59

However, I found one mempool driver (Marvell cnxk) that returns -ENOENT:
https://elixir.bootlin.com/dpdk/v23.07/source/drivers/mempool/cnxk/cn10k_hwpool_ops.c#L261
https://elixir.bootlin.com/dpdk/v23.07/source/drivers/mempool/cnxk/cn10k_hwpool_ops.c#L69

And one mempool driver (Cavium OCTEON TX) returns -ENOMEM:
https://elixir.bootlin.com/dpdk/v23.07/source/drivers/mempool/octeontx/rte_mempool_octeontx.c#L194
https://elixir.bootlin.com/dpdk/v23.07/source/drivers/mempool/octeontx/rte_mempool_octeontx.c#L111

One mempool driver (NXP dpaa) returns -ENOBUFS, or (if requesting too many objects) -1:
https://elixir.bootlin.com/dpdk/v23.07/source/drivers/mempool/dpaa/dpaa_mempool.c#L351
https://elixir.bootlin.com/dpdk/v23.07/source/drivers/mempool/dpaa/dpaa_mempool.c#L225
https://elixir.bootlin.com/dpdk/v23.07/source/drivers/mempool/dpaa/dpaa_mempool.c#L257

And one mempool driver (NXP dpaa2) returns -ENOBUFS, or in rare cases -ENOENT or -1:
https://elixir.bootlin.com/dpdk/v23.07/source/drivers/mempool/dpaa2/dpaa2_hw_mempool.c#L471
https://elixir.bootlin.com/dpdk/v23.07/source/drivers/mempool/dpaa2/dpaa2_hw_mempool.c#L373
https://elixir.bootlin.com/dpdk/v23.07/source/drivers/mempool/dpaa2/dpaa2_hw_mempool.c#L336
https://elixir.bootlin.com/dpdk/v23.07/source/drivers/mempool/dpaa2/dpaa2_hw_mempool.c#L342

The root cause of this misery is the general lack of documentation of callbacks in DPDK (not just in the mempool library!), which leaves the callback implementers unaware of what to return. E.g. the mempool driver enqueue and dequeue callback types have no descriptions of their return values:
https://elixir.bootlin.com/dpdk/v23.07/source/lib/mempool/rte_mempool.h#L467
https://elixir.bootlin.com/dpdk/v23.07/source/lib/mempool/rte_mempool.h#L473

So in theory, the mempool drivers are free to return any value... Initially, I didn't think any mempool drivers would return -1 and set rte_errno, but apparently they do (except the driver doesn't set rte_errno when returning -1)!

On a techboard meeting not long ago, Tyler mentioned the lack of consistency in return value conventions as a general annoyance. Now, having looked at these mempool drivers, I realize that the situation is much worse than I would have imagined in my worst nightmare!

So, how do we want to fix this:

1. Specify the allowed return values in the documentation for the mempool library callback types, and require the mempool drivers to follow the updated specification?
2. Update mempool API documentation to list the many currently possible error return values (-ENOBUFS, -ENOENT, -ENOMEM, and -EPERM (= -1))?
3. Update mempool API documentation to say something along the lines of "negative return value indicates error; rte_errno is not set if -1"?
4. Switch to a general convention of returning -1 and setting rte_errno in all DPDK APIs, starting with the mempool library? However, this would still require a documented list of possible rte_errno values set by the functions - essentially the problem would remain the same: "possible return values" vs. "possible rte_errno values".

Personally, I am in strong favor of any option that tightens the API specification and treats non-conformance as bugs.
