From: Andrew Rybchenko
Organization: OKTET Labs
To: Dmitry Kozlyuk, dev@dpdk.org
Cc: Thomas Monjalon, Matan Azrad, Olivier Matz, Ray Kinsella, Anatoly Burakov
Subject: Re: [dpdk-dev] [PATCH v3 1/4] mempool: add event callbacks
Date: Tue, 12 Oct 2021 09:33:16 +0300
In-Reply-To: <20211012000409.2751908-2-dkozlyuk@nvidia.com>
References: <20210929145249.2176811-1-dkozlyuk@nvidia.com>
 <20211012000409.2751908-1-dkozlyuk@nvidia.com>
 <20211012000409.2751908-2-dkozlyuk@nvidia.com>

On 10/12/21 3:04 AM, Dmitry Kozlyuk wrote:
> Data path performance can benefit if the PMD knows which memory it will
> need to handle in advance, before the first mbuf is sent to the PMD.
> It is impractical, however, to consider all allocated memory for this
> purpose. Most often mbuf memory comes from mempools that can come and
> go. PMD can enumerate existing mempools on device start, but it also
> needs to track creation and destruction of mempools after the forwarding
> starts but before an mbuf from the new mempool is sent to the device.
>
> Add an internal API to register callback for mempool life cycle events:
> * rte_mempool_event_callback_register()
> * rte_mempool_event_callback_unregister()
> Currently tracked events are:
> * RTE_MEMPOOL_EVENT_READY (after populating a mempool)
> * RTE_MEMPOOL_EVENT_DESTROY (before freeing a mempool)
> Provide a unit test for the new API.

Good idea.

> Signed-off-by: Dmitry Kozlyuk
> Acked-by: Matan Azrad

[snip]

I think it would be very useful to test two callbacks as well,
including creation of a new mempool after one of the callbacks is
unregistered, plus registering/unregistering a callback from within
a callback itself.
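
Roughly what I have in mind, just as a sketch against the API in this
patch (the test_* names are made up, a pktmbuf pool is only a convenient
way to trigger populate, and it assumes READY fires once the pool is
fully populated and DESTROY fires from rte_mempool_free()):

#include <rte_mbuf.h>
#include <rte_mempool.h>

static unsigned int calls_a;
static unsigned int calls_b;

static void
test_cb_a(enum rte_mempool_event event, struct rte_mempool *mp, void *user_data)
{
	RTE_SET_USED(event);
	RTE_SET_USED(mp);
	RTE_SET_USED(user_data);
	calls_a++;
}

static void
test_cb_b(enum rte_mempool_event event, struct rte_mempool *mp, void *user_data)
{
	RTE_SET_USED(mp);
	RTE_SET_USED(user_data);
	calls_b++;
	/* Unregistering from inside a callback is the interesting case. */
	if (event == RTE_MEMPOOL_EVENT_READY)
		rte_mempool_event_callback_unregister(test_cb_b, NULL);
}

static int
test_mempool_two_event_callbacks(void)
{
	struct rte_mempool *mp;

	calls_a = 0;
	calls_b = 0;
	if (rte_mempool_event_callback_register(test_cb_a, NULL) != 0 ||
	    rte_mempool_event_callback_register(test_cb_b, NULL) != 0)
		return -1;

	/* Both callbacks should see READY; test_cb_b unregisters itself. */
	mp = rte_pktmbuf_pool_create("ev_two_cb_1", 64, 0, 0,
				     RTE_MBUF_DEFAULT_BUF_SIZE, SOCKET_ID_ANY);
	if (mp == NULL || calls_a != 1 || calls_b != 1)
		return -1;

	/* DESTROY must reach only the callback that is still registered. */
	rte_mempool_free(mp);
	if (calls_a != 2 || calls_b != 1)
		return -1;

	/* Mempool created after the unregistration: only test_cb_a fires. */
	mp = rte_pktmbuf_pool_create("ev_two_cb_2", 64, 0, 0,
				     RTE_MBUF_DEFAULT_BUF_SIZE, SOCKET_ID_ANY);
	if (mp == NULL || calls_a != 3 || calls_b != 1)
		return -1;
	rte_mempool_free(mp);

	return rte_mempool_event_callback_unregister(test_cb_a, NULL);
}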
Feel free to drop it, since increasing test coverage is almost endless :)

> diff --git a/lib/mempool/rte_mempool.h b/lib/mempool/rte_mempool.h
> index f57ecbd6fc..e2bf40aa09 100644
> --- a/lib/mempool/rte_mempool.h
> +++ b/lib/mempool/rte_mempool.h
> @@ -1774,6 +1774,62 @@ void rte_mempool_walk(void (*func)(struct rte_mempool *, void *arg),
>  int
>  rte_mempool_get_page_size(struct rte_mempool *mp, size_t *pg_sz);
>
> +/**
> + * Mempool event type.
> + * @internal
> + */
> +enum rte_mempool_event {
> +	/** Occurs after a mempool is successfully populated. */

successfully -> fully ?

> +	RTE_MEMPOOL_EVENT_READY = 0,
> +	/** Occurs before destruction of a mempool begins. */
> +	RTE_MEMPOOL_EVENT_DESTROY = 1,
> +};
> +
> +/**
> + * @internal
> + * Mempool event callback.
> + */
> +typedef void (rte_mempool_event_callback)(
> +		enum rte_mempool_event event,
> +		struct rte_mempool *mp,
> +		void *user_data);
> +
> +/**
> + * @internal

I'd like to understand why the API is internal (not experimental).
I think the reasons should be clear from the function description.

> + * Register a callback invoked on mempool life cycle event.
> + * Callbacks will be invoked in the process that creates the mempool.
> + *
> + * @param func
> + *   Callback function.
> + * @param user_data
> + *   User data.
> + *
> + * @return
> + *   0 on success, negative on failure and rte_errno is set.
> + */
> +__rte_internal
> +int
> +rte_mempool_event_callback_register(rte_mempool_event_callback *func,
> +				    void *user_data);
> +
> +/**
> + * @internal
> + * Unregister a callback added with rte_mempool_event_callback_register().
> + * @p func and @p user_data must exactly match registration parameters.
> + *
> + * @param func
> + *   Callback function.
> + * @param user_data
> + *   User data.
> + *
> + * @return
> + *   0 on success, negative on failure and rte_errno is set.
> + */
> +__rte_internal
> +int
> +rte_mempool_event_callback_unregister(rte_mempool_event_callback *func,
> +				      void *user_data);
> +
>  #ifdef __cplusplus
>  }
>  #endif
> diff --git a/lib/mempool/version.map b/lib/mempool/version.map
> index 9f77da6fff..1b7d7c5456 100644
> --- a/lib/mempool/version.map
> +++ b/lib/mempool/version.map
> @@ -64,3 +64,11 @@ EXPERIMENTAL {
>  	__rte_mempool_trace_ops_free;
>  	__rte_mempool_trace_set_ops_byname;
>  };
> +
> +INTERNAL {
> +	global:
> +
> +	# added in 21.11
> +	rte_mempool_event_callback_register;
> +	rte_mempool_event_callback_unregister;
> +};
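
BTW, to double-check my understanding of the intended driver-side usage,
something like this is what I imagine (a sketch only; struct my_dev and
the my_dev_* helpers are hypothetical and not part of the patch):

#include <stdio.h>

#include <rte_mempool.h>

struct my_dev { int port_id; };	/* hypothetical driver context */

static void
my_dev_register_mr(struct my_dev *dev, struct rte_mempool *mp)
{
	/* A real PMD would e.g. DMA-map or register mp's memory with HW. */
	printf("port %d: registering mempool %s\n", dev->port_id, mp->name);
}

static void
my_dev_unregister_mr(struct my_dev *dev, struct rte_mempool *mp)
{
	printf("port %d: dropping mempool %s\n", dev->port_id, mp->name);
}

static void
my_mempool_event_cb(enum rte_mempool_event event, struct rte_mempool *mp,
		    void *user_data)
{
	struct my_dev *dev = user_data;

	if (event == RTE_MEMPOOL_EVENT_READY)
		my_dev_register_mr(dev, mp);
	else if (event == RTE_MEMPOOL_EVENT_DESTROY)
		my_dev_unregister_mr(dev, mp);
}

static void
my_dev_walk_cb(struct rte_mempool *mp, void *arg)
{
	my_dev_register_mr(arg, mp);
}

static int
my_dev_start(struct my_dev *dev)
{
	/* Pick up mempools that already exist at device start... */
	rte_mempool_walk(my_dev_walk_cb, dev);
	/* ...and track the ones created or destroyed afterwards. */
	return rte_mempool_event_callback_register(my_mempool_event_cb, dev);
}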