From: Thomas Monjalon
To: Dmitry Kozlyuk
Cc: dev@dpdk.org, Matan Azrad, Olivier Matz, Andrew Rybchenko, Ray Kinsella, Anatoly Burakov
Date: Tue, 05 Oct 2021 18:34:07 +0200
Message-ID: <2041497.iWrRf74lhJ@thomas>
In-Reply-To: <20210929145249.2176811-2-dkozlyuk@nvidia.com>
References: <20210818090755.2419483-1-dkozlyuk@nvidia.com>
 <20210929145249.2176811-1-dkozlyuk@nvidia.com>
 <20210929145249.2176811-2-dkozlyuk@nvidia.com>
Subject: Re: [dpdk-dev] [PATCH v2 1/4] mempool: add event callbacks
List-Id: DPDK patches and discussions
29/09/2021 16:52, dkozlyuk@oss.nvidia.com:
> From: Dmitry Kozlyuk
>
> Performance of MLX5 PMD of different classes can benefit if PMD knows
> which memory it will need to handle in advance, before the first mbuf
> is sent to the PMD. It is impractical, however, to consider
> all allocated memory for this purpose. Most often mbuf memory comes
> from mempools that can come and go. PMD can enumerate existing mempools
> on device start, but it also needs to track creation and destruction
> of mempools after the forwarding starts but before an mbuf from the new
> mempool is sent to the device.

I'm not sure this introduction about mlx5 is appropriate.

> Add an internal API to register callback for mempool lify cycle events,

lify -> life

> currently RTE_MEMPOOL_EVENT_READY (after populating)
> and RTE_MEMPOOL_EVENT_DESTROY (before freeing):
> * rte_mempool_event_callback_register()
> * rte_mempool_event_callback_unregister()
> Provide a unit test for the new API.
[...]
> -rte_mempool_create_empty(const char *name, unsigned n, unsigned elt_size,
> -	unsigned cache_size, unsigned private_data_size,
> -	int socket_id, unsigned flags)
> +rte_mempool_create_empty(const char *name, unsigned int n,
> +	unsigned int elt_size, unsigned int cache_size,
> +	unsigned int private_data_size, int socket_id, unsigned int flags)

This change looks unrelated.

> +enum rte_mempool_event {
> +	/** Occurs after a mempool is successfully populated. */
> +	RTE_MEMPOOL_EVENT_READY = 0,
> +	/** Occurs before destruction of a mempool begins. */
> +	RTE_MEMPOOL_EVENT_DESTROY = 1,
> +};

These events look OK.

> +typedef void (rte_mempool_event_callback)(
> +		enum rte_mempool_event event,
> +		struct rte_mempool *mp,
> +		void *arg);

Instead of "arg", I prefer the name "user_data".
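
To make sure I read the intent right, a driver would use this roughly as
below. This is only a sketch based on the declarations quoted above:
I'm assuming register/unregister take the callback plus a user_data
pointer, and all example_* names are made up, not part of the patch.

	#include <rte_mempool.h>

	/* Hypothetical driver callback; example_* names are illustrative. */
	static void
	example_mempool_event_cb(enum rte_mempool_event event,
				 struct rte_mempool *mp, void *user_data)
	{
		(void)mp;
		(void)user_data; /* pointer given at registration time */

		switch (event) {
		case RTE_MEMPOOL_EVENT_READY:
			/* Mempool fully populated: e.g. pre-register its
			 * memory with the device before any mbuf arrives. */
			break;
		case RTE_MEMPOOL_EVENT_DESTROY:
			/* Mempool about to be freed: drop per-mempool state. */
			break;
		}
	}

	static int
	example_dev_start(void *priv)
	{
		/* Assumed signature: callback + user_data. */
		return rte_mempool_event_callback_register(
				example_mempool_event_cb, priv);
	}

	static void
	example_dev_stop(void *priv)
	{
		rte_mempool_event_callback_unregister(
				example_mempool_event_cb, priv);
	}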