From mboxrd@z Thu Jan 1 00:00:00 1970
From: Dmitry Kozlyuk
CC: Dmitry Kozlyuk, Matan Azrad, Olivier Matz, Andrew Rybchenko,
	Ray Kinsella, Anatoly Burakov
Date: Wed, 29 Sep 2021 17:52:46 +0300
Message-ID: <20210929145249.2176811-2-dkozlyuk@nvidia.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20210929145249.2176811-1-dkozlyuk@nvidia.com>
References: <20210818090755.2419483-1-dkozlyuk@nvidia.com>
	<20210929145249.2176811-1-dkozlyuk@nvidia.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
Subject: [dpdk-dev] [PATCH v2 1/4] mempool: add event callbacks
List-Id: DPDK patches and discussions

From: Dmitry Kozlyuk

Performance of MLX5 PMDs of different classes can benefit if the PMD
knows in advance which memory it will need to handle, before the first
mbuf is sent to the PMD. It is impractical, however, to consider all
allocated memory for this purpose. Most often mbuf memory comes from
mempools that can come and go. A PMD can enumerate existing mempools
at device start, but it also needs to track creation and destruction
of mempools after forwarding starts, yet before an mbuf from a new
mempool is sent to the device.

Add an internal API to register callbacks for mempool life cycle
events, currently RTE_MEMPOOL_EVENT_READY (after populating) and
RTE_MEMPOOL_EVENT_DESTROY (before freeing):
* rte_mempool_event_callback_register()
* rte_mempool_event_callback_unregister()

Provide a unit test for the new API.

Signed-off-by: Dmitry Kozlyuk
Acked-by: Matan Azrad
---
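Not part of the patch, only an illustration for reviewers: a minimal sketch
of how a consumer might use the new callbacks. The driver_mempool_event()
and driver_track_mempools() names and the priv argument are hypothetical,
and since the API is marked internal, only in-tree components built with
internal API access can call it.

    #include <stdio.h>
    #include <rte_mempool.h>

    /* Invoked in the process that created the mempool. */
    static void
    driver_mempool_event(enum rte_mempool_event event,
                         struct rte_mempool *mp, void *arg)
    {
        (void)arg; /* would carry the driver's private context */

        if (event == RTE_MEMPOOL_EVENT_READY)
            printf("mempool %s fully populated, register its memory\n",
                   mp->name);
        else if (event == RTE_MEMPOOL_EVENT_DESTROY)
            printf("mempool %s about to be freed, drop its registrations\n",
                   mp->name);
    }

    /* Called once, e.g. at device start. */
    static int
    driver_track_mempools(void *priv)
    {
        /* Mempools that already exist can be found with rte_mempool_walk();
         * the callback covers pools populated or destroyed afterwards.
         */
        return rte_mempool_event_callback_register(driver_mempool_event,
                                                   priv);
    }
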
 app/test/test_mempool.c   |  75 ++++++++++++++++++++
 lib/mempool/rte_mempool.c | 143 +++++++++++++++++++++++++++++++++++++-
 lib/mempool/rte_mempool.h |  56 +++++++++++++++
 lib/mempool/version.map   |   8 +++
 4 files changed, 279 insertions(+), 3 deletions(-)

diff --git a/app/test/test_mempool.c b/app/test/test_mempool.c
index 7675a3e605..0c4ed7c60b 100644
--- a/app/test/test_mempool.c
+++ b/app/test/test_mempool.c
@@ -14,6 +14,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -471,6 +472,74 @@ test_mp_mem_init(struct rte_mempool *mp,
 	data->ret = 0;
 }
 
+struct test_mempool_events_data {
+	struct rte_mempool *mp;
+	enum rte_mempool_event event;
+	bool invoked;
+};
+
+static void
+test_mempool_events_cb(enum rte_mempool_event event,
+		       struct rte_mempool *mp, void *arg)
+{
+	struct test_mempool_events_data *data = arg;
+
+	data->mp = mp;
+	data->event = event;
+	data->invoked = true;
+}
+
+static int
+test_mempool_events(int (*populate)(struct rte_mempool *mp))
+{
+	struct test_mempool_events_data data;
+	struct rte_mempool *mp;
+	int ret;
+
+	ret = rte_mempool_event_callback_register(NULL, &data);
+	RTE_TEST_ASSERT_NOT_EQUAL(ret, 0, "Registered a NULL callback");
+
+	memset(&data, 0, sizeof(data));
+	ret = rte_mempool_event_callback_register(test_mempool_events_cb,
+						  &data);
+	RTE_TEST_ASSERT_EQUAL(ret, 0, "Failed to register the callback: %s",
+			      rte_strerror(rte_errno));
+
+	mp = rte_mempool_create_empty("empty", MEMPOOL_SIZE,
+				      MEMPOOL_ELT_SIZE, 0, 0,
+				      SOCKET_ID_ANY, 0);
+	RTE_TEST_ASSERT_NOT_NULL(mp, "Cannot create an empty mempool: %s",
+				 rte_strerror(rte_errno));
+	RTE_TEST_ASSERT_EQUAL(data.invoked, false,
+			      "Callback invoked on an empty mempool creation");
+
+	rte_mempool_set_ops_byname(mp, rte_mbuf_best_mempool_ops(), NULL);
+	ret = populate(mp);
+	RTE_TEST_ASSERT_EQUAL(ret, (int)mp->size, "Failed to populate the mempool: %s",
+			      rte_strerror(rte_errno));
+	RTE_TEST_ASSERT_EQUAL(data.invoked, true,
+			      "Callback not invoked on an empty mempool population");
+	RTE_TEST_ASSERT_EQUAL(data.event, RTE_MEMPOOL_EVENT_READY,
+			      "Wrong callback invoked, expected READY");
+	RTE_TEST_ASSERT_EQUAL(data.mp, mp,
+			      "Callback invoked for a wrong mempool");
+
+	memset(&data, 0, sizeof(data));
+	rte_mempool_free(mp);
+	RTE_TEST_ASSERT_EQUAL(data.invoked, true,
+			      "Callback not invoked on mempool destruction");
+	RTE_TEST_ASSERT_EQUAL(data.event, RTE_MEMPOOL_EVENT_DESTROY,
+			      "Wrong callback invoked, expected DESTROY");
+	RTE_TEST_ASSERT_EQUAL(data.mp, mp,
+			      "Callback invoked for a wrong mempool");
+
+	ret = rte_mempool_event_callback_unregister(test_mempool_events_cb,
+						    &data);
+	RTE_TEST_ASSERT_EQUAL(ret, 0, "Failed to unregister the callback: %s",
+			      rte_strerror(rte_errno));
+	return 0;
+}
+
 static int
 test_mempool(void)
 {
@@ -645,6 +714,12 @@ test_mempool(void)
 	if (test_mempool_basic(default_pool, 1) < 0)
 		GOTO_ERR(ret, err);
 
+	/* test mempool event callbacks */
+	if (test_mempool_events(rte_mempool_populate_default) < 0)
+		GOTO_ERR(ret, err);
+	if (test_mempool_events(rte_mempool_populate_anon) < 0)
+		GOTO_ERR(ret, err);
+
 	rte_mempool_list_dump(stdout);
 
 	ret = 0;
diff --git a/lib/mempool/rte_mempool.c b/lib/mempool/rte_mempool.c
index 59a588425b..c6cb99ba48 100644
--- a/lib/mempool/rte_mempool.c
+++ b/lib/mempool/rte_mempool.c
@@ -42,6 +42,18 @@ static struct rte_tailq_elem rte_mempool_tailq = {
 };
 EAL_REGISTER_TAILQ(rte_mempool_tailq)
 
+TAILQ_HEAD(mempool_callback_list, rte_tailq_entry);
+
+static struct rte_tailq_elem callback_tailq = {
+	.name = "RTE_MEMPOOL_CALLBACK",
+};
+EAL_REGISTER_TAILQ(callback_tailq)
+
+/* Invoke all registered mempool event callbacks. */
+static void
+mempool_event_callback_invoke(enum rte_mempool_event event,
+			      struct rte_mempool *mp);
+
 #define CACHE_FLUSHTHRESH_MULTIPLIER 1.5
 #define CALC_CACHE_FLUSHTHRESH(c)	\
 	((typeof(c))((c) * CACHE_FLUSHTHRESH_MULTIPLIER))
@@ -360,6 +372,10 @@ rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
 	STAILQ_INSERT_TAIL(&mp->mem_list, memhdr, next);
 	mp->nb_mem_chunks++;
 
+	/* Report the mempool as ready only when fully populated. */
+	if (mp->populated_size >= mp->size)
+		mempool_event_callback_invoke(RTE_MEMPOOL_EVENT_READY, mp);
+
 	rte_mempool_trace_populate_iova(mp, vaddr, iova, len, free_cb, opaque);
 	return i;
 
@@ -722,6 +738,7 @@ rte_mempool_free(struct rte_mempool *mp)
 	}
 	rte_mcfg_tailq_write_unlock();
 
+	mempool_event_callback_invoke(RTE_MEMPOOL_EVENT_DESTROY, mp);
 	rte_mempool_trace_free(mp);
 	rte_mempool_free_memchunks(mp);
 	rte_mempool_ops_free(mp);
@@ -779,9 +796,9 @@ rte_mempool_cache_free(struct rte_mempool_cache *cache)
 
 /* create an empty mempool */
 struct rte_mempool *
-rte_mempool_create_empty(const char *name, unsigned n, unsigned elt_size,
-	unsigned cache_size, unsigned private_data_size,
-	int socket_id, unsigned flags)
+rte_mempool_create_empty(const char *name, unsigned int n,
+	unsigned int elt_size, unsigned int cache_size,
+	unsigned int private_data_size, int socket_id, unsigned int flags)
 {
 	char mz_name[RTE_MEMZONE_NAMESIZE];
 	struct rte_mempool_list *mempool_list;
@@ -1343,3 +1360,123 @@ void rte_mempool_walk(void (*func)(struct rte_mempool *, void *),
 
 	rte_mcfg_mempool_read_unlock();
 }
+
+struct mempool_callback {
+	rte_mempool_event_callback *func;
+	void *arg;
+};
+
+static void
+mempool_event_callback_invoke(enum rte_mempool_event event,
+			      struct rte_mempool *mp)
+{
+	struct mempool_callback_list *list;
+	struct rte_tailq_entry *te;
+	void *tmp_te;
+
+	rte_mcfg_tailq_read_lock();
+	list = RTE_TAILQ_CAST(callback_tailq.head, mempool_callback_list);
+	TAILQ_FOREACH_SAFE(te, list, next, tmp_te) {
+		struct mempool_callback *cb = te->data;
+		rte_mcfg_tailq_read_unlock();
+		cb->func(event, mp, cb->arg);
+		rte_mcfg_tailq_read_lock();
+	}
+	rte_mcfg_tailq_read_unlock();
+}
+
+int
+rte_mempool_event_callback_register(rte_mempool_event_callback *func,
+				    void *arg)
+{
+	struct mempool_callback_list *list;
+	struct rte_tailq_entry *te = NULL;
+	struct mempool_callback *cb;
+	void *tmp_te;
+	int ret;
+
+	if (func == NULL) {
+		rte_errno = EINVAL;
+		return -rte_errno;
+	}
+
+	rte_mcfg_mempool_read_lock();
+	rte_mcfg_tailq_write_lock();
+
+	list = RTE_TAILQ_CAST(callback_tailq.head, mempool_callback_list);
+	TAILQ_FOREACH_SAFE(te, list, next, tmp_te) {
+		struct mempool_callback *cb =
+				(struct mempool_callback *)te->data;
+		if (cb->func == func && cb->arg == arg) {
+			ret = -EEXIST;
+			goto exit;
+		}
+	}
+
+	te = rte_zmalloc("MEMPOOL_TAILQ_ENTRY", sizeof(*te), 0);
+	if (te == NULL) {
+		RTE_LOG(ERR, MEMPOOL,
+			"Cannot allocate event callback tailq entry!\n");
+		ret = -ENOMEM;
+		goto exit;
+	}
+
+	cb = rte_malloc("MEMPOOL_EVENT_CALLBACK", sizeof(*cb), 0);
+	if (cb == NULL) {
+		RTE_LOG(ERR, MEMPOOL,
+			"Cannot allocate event callback!\n");
+		rte_free(te);
+		ret = -ENOMEM;
+		goto exit;
+	}
+
+	cb->func = func;
+	cb->arg = arg;
+	te->data = cb;
+	TAILQ_INSERT_TAIL(list, te, next);
+	ret = 0;
+
+exit:
+	rte_mcfg_tailq_write_unlock();
+	rte_mcfg_mempool_read_unlock();
+	rte_errno = -ret;
+	return ret;
+}
+
+int
+rte_mempool_event_callback_unregister(rte_mempool_event_callback *func,
+				      void *arg)
+{
+	struct mempool_callback_list *list;
+	struct rte_tailq_entry *te = NULL;
+	struct mempool_callback *cb;
+	int ret;
+
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
+		rte_errno = EPERM;
+		return -1;
+	}
+
+	rte_mcfg_mempool_read_lock();
+	rte_mcfg_tailq_write_lock();
+	ret = -ENOENT;
+	list = RTE_TAILQ_CAST(callback_tailq.head, mempool_callback_list);
+	TAILQ_FOREACH(te, list, next) {
+		cb = (struct mempool_callback *)te->data;
+		if (cb->func == func && cb->arg == arg)
+			break;
+	}
+	if (te != NULL) {
+		TAILQ_REMOVE(list, te, next);
+		ret = 0;
+	}
+	rte_mcfg_tailq_write_unlock();
+	rte_mcfg_mempool_read_unlock();
+
+	if (ret == 0) {
+		rte_free(te);
+		rte_free(cb);
+	}
+	rte_errno = -ret;
+	return ret;
+}
diff --git a/lib/mempool/rte_mempool.h b/lib/mempool/rte_mempool.h
index 4235d6f0bf..c81e488851 100644
--- a/lib/mempool/rte_mempool.h
+++ b/lib/mempool/rte_mempool.h
@@ -1775,6 +1775,62 @@ void rte_mempool_walk(void (*func)(struct rte_mempool *, void *arg),
 int
 rte_mempool_get_page_size(struct rte_mempool *mp, size_t *pg_sz);
 
+/**
+ * Mempool event type.
+ * @internal
+ */
+enum rte_mempool_event {
+	/** Occurs after a mempool is successfully populated. */
+	RTE_MEMPOOL_EVENT_READY = 0,
+	/** Occurs before destruction of a mempool begins. */
+	RTE_MEMPOOL_EVENT_DESTROY = 1,
+};
+
+/**
+ * @internal
+ * Mempool event callback.
+ */
+typedef void (rte_mempool_event_callback)(
+		enum rte_mempool_event event,
+		struct rte_mempool *mp,
+		void *arg);
+
+/**
+ * @internal
+ * Register a callback invoked on mempool life cycle event.
+ * Callbacks will be invoked in the process that creates the mempool.
+ *
+ * @param cb
+ *   Callback function.
+ * @param cb_arg
+ *   User data.
+ *
+ * @return
+ *   0 on success, negative on failure and rte_errno is set.
+ */
+__rte_internal
+int
+rte_mempool_event_callback_register(rte_mempool_event_callback *cb,
+				    void *cb_arg);
+
+/**
+ * @internal
+ * Unregister a callback added with rte_mempool_event_callback_register().
+ * @p cb and @p arg must exactly match registration parameters.
+ *
+ * @param cb
+ *   Callback function.
+ * @param cb_arg
+ *   User data.
+ *
+ * @return
+ *   0 on success, negative on failure and rte_errno is set.
+ */
+__rte_internal
+int
+rte_mempool_event_callback_unregister(rte_mempool_event_callback *cb,
+				      void *cb_arg);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/mempool/version.map b/lib/mempool/version.map
index 9f77da6fff..1b7d7c5456 100644
--- a/lib/mempool/version.map
+++ b/lib/mempool/version.map
@@ -64,3 +64,11 @@ EXPERIMENTAL {
 	__rte_mempool_trace_ops_free;
 	__rte_mempool_trace_set_ops_byname;
 };
+
+INTERNAL {
+	global:
+
+	# added in 21.11
+	rte_mempool_event_callback_register;
+	rte_mempool_event_callback_unregister;
+};
-- 
2.25.1