From mboxrd@z Thu Jan 1 00:00:00 1970
Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124])
 by inbox.dpdk.org (Postfix) with ESMTP id ACB15A0C55;
 Wed, 13 Oct 2021 13:01:57 +0200 (CEST)
From: Dmitry Kozlyuk
To:
CC: Andrew Rybchenko, Matan Azrad, Olivier Matz, Ray Kinsella,
 Anatoly Burakov
Date: Wed, 13 Oct 2021 14:01:28 +0300
Message-ID: <20211013110131.2909604-2-dkozlyuk@nvidia.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20211013110131.2909604-1-dkozlyuk@nvidia.com>
References: <20211012000409.2751908-1-dkozlyuk@nvidia.com>
 <20211013110131.2909604-1-dkozlyuk@nvidia.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
Subject: [dpdk-dev] [PATCH v4 1/4] mempool: add event callbacks
X-BeenThere: dev@dpdk.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: DPDK patches and discussions
Errors-To: dev-bounces@dpdk.org
Sender: "dev"

Data path performance can benefit if the PMD knows which memory it
will need to handle in advance, before the first mbuf is sent to the
PMD. It is impractical, however, to consider all allocated memory for
this purpose. Most often mbuf memory comes from mempools that can come
and go. A PMD can enumerate existing mempools at device start, but it
also needs to track the creation and destruction of mempools after
forwarding starts but before an mbuf from a new mempool is sent to the
device.
Add an API to register callbacks for mempool life cycle events:
* rte_mempool_event_callback_register()
* rte_mempool_event_callback_unregister()
Currently tracked events are:
* RTE_MEMPOOL_EVENT_READY (after populating a mempool)
* RTE_MEMPOOL_EVENT_DESTROY (before freeing a mempool)
Provide a unit test for the new API.
The new API is internal, because it is primarily demanded by PMDs that
may need to deal with any mempools and do not control their creation,
while an application, on the other hand, knows which mempools it
creates and doesn't care about internal mempools PMDs might create.

Signed-off-by: Dmitry Kozlyuk
Acked-by: Matan Azrad
---
 app/test/test_mempool.c   | 209 ++++++++++++++++++++++++++++++++++++++
 lib/mempool/rte_mempool.c | 137 +++++++++++++++++++++++++
 lib/mempool/rte_mempool.h |  61 +++++++++++
 lib/mempool/version.map   |   8 ++
 4 files changed, 415 insertions(+)

diff --git a/app/test/test_mempool.c b/app/test/test_mempool.c
index 7675a3e605..bc0cc9ed48 100644
--- a/app/test/test_mempool.c
+++ b/app/test/test_mempool.c
@@ -14,6 +14,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -471,6 +472,206 @@ test_mp_mem_init(struct rte_mempool *mp,
 	data->ret = 0;
 }
 
+struct test_mempool_events_data {
+	struct rte_mempool *mp;
+	enum rte_mempool_event event;
+	bool invoked;
+};
+
+static void
+test_mempool_events_cb(enum rte_mempool_event event,
+		       struct rte_mempool *mp, void *user_data)
+{
+	struct test_mempool_events_data *data = user_data;
+
+	data->mp = mp;
+	data->event = event;
+	data->invoked = true;
+}
+
+static int
+test_mempool_events(int (*populate)(struct rte_mempool *mp))
+{
+	static const size_t CB_NUM = 3;
+	static const size_t MP_NUM = 2;
+
+	struct test_mempool_events_data data[CB_NUM];
+	struct rte_mempool *mp[MP_NUM];
+	char name[RTE_MEMPOOL_NAMESIZE];
+	size_t i, j;
+	int ret;
+
+	for (i = 0; i < CB_NUM; i++) {
+		ret = rte_mempool_event_callback_register
+				(test_mempool_events_cb, &data[i]);
+		RTE_TEST_ASSERT_EQUAL(ret, 0,
+				      "Failed to register the callback %zu: %s",
+				      i, rte_strerror(rte_errno));
+	}
+	ret = rte_mempool_event_callback_unregister(test_mempool_events_cb, mp);
+	RTE_TEST_ASSERT_NOT_EQUAL(ret, 0, "Unregistered a non-registered callback");
+	/* NULL argument has no special meaning in this API. */
+	ret = rte_mempool_event_callback_unregister(test_mempool_events_cb,
+						    NULL);
+	RTE_TEST_ASSERT_NOT_EQUAL(ret, 0, "Unregistered a non-registered callback with NULL argument");
+
+	/* Create mempool 0 that will be observed by all callbacks. */
+	memset(&data, 0, sizeof(data));
+	strcpy(name, "empty0");
+	mp[0] = rte_mempool_create_empty(name, MEMPOOL_SIZE,
+					 MEMPOOL_ELT_SIZE, 0, 0,
+					 SOCKET_ID_ANY, 0);
+	RTE_TEST_ASSERT_NOT_NULL(mp[0], "Cannot create mempool %s: %s",
+				 name, rte_strerror(rte_errno));
+	for (j = 0; j < CB_NUM; j++)
+		RTE_TEST_ASSERT_EQUAL(data[j].invoked, false,
+				      "Callback %zu invoked on %s mempool creation",
+				      j, name);
+
+	rte_mempool_set_ops_byname(mp[0], rte_mbuf_best_mempool_ops(), NULL);
+	ret = populate(mp[0]);
+	RTE_TEST_ASSERT_EQUAL(ret, (int)mp[0]->size, "Failed to populate mempool %s: %s",
+			      name, rte_strerror(rte_errno));
+	for (j = 0; j < CB_NUM; j++) {
+		RTE_TEST_ASSERT_EQUAL(data[j].invoked, true,
+				      "Callback %zu not invoked on mempool %s population",
+				      j, name);
+		RTE_TEST_ASSERT_EQUAL(data[j].event,
+				      RTE_MEMPOOL_EVENT_READY,
+				      "Wrong callback invoked, expected READY");
+		RTE_TEST_ASSERT_EQUAL(data[j].mp, mp[0],
+				      "Callback %zu invoked for a wrong mempool instead of %s",
+				      j, name);
+	}
+
+	/* Check that unregistered callback 0 observes no events. */
+	ret = rte_mempool_event_callback_unregister(test_mempool_events_cb,
+						    &data[0]);
+	RTE_TEST_ASSERT_EQUAL(ret, 0, "Failed to unregister callback 0: %s",
+			      rte_strerror(rte_errno));
+	memset(&data, 0, sizeof(data));
+	strcpy(name, "empty1");
+	mp[1] = rte_mempool_create_empty(name, MEMPOOL_SIZE,
+					 MEMPOOL_ELT_SIZE, 0, 0,
+					 SOCKET_ID_ANY, 0);
+	RTE_TEST_ASSERT_NOT_NULL(mp[1], "Cannot create mempool %s: %s",
+				 name, rte_strerror(rte_errno));
+	rte_mempool_set_ops_byname(mp[1], rte_mbuf_best_mempool_ops(), NULL);
+	ret = populate(mp[1]);
+	RTE_TEST_ASSERT_EQUAL(ret, (int)mp[1]->size, "Failed to populate mempool %s: %s",
+			      name, rte_strerror(rte_errno));
+	RTE_TEST_ASSERT_EQUAL(data[0].invoked, false,
+			      "Unregistered callback 0 invoked on %s mempool population",
+			      name);
+
+	for (i = 0; i < MP_NUM; i++) {
+		memset(&data, 0, sizeof(data));
+		sprintf(name, "empty%zu", i);
+		rte_mempool_free(mp[i]);
+		for (j = 1; j < CB_NUM; j++) {
+			RTE_TEST_ASSERT_EQUAL(data[j].invoked, true,
+					      "Callback %zu not invoked on mempool %s destruction",
+					      j, name);
+			RTE_TEST_ASSERT_EQUAL(data[j].event,
+					      RTE_MEMPOOL_EVENT_DESTROY,
+					      "Wrong callback invoked, expected DESTROY");
+			RTE_TEST_ASSERT_EQUAL(data[j].mp, mp[i],
+					      "Callback %zu invoked for a wrong mempool instead of %s",
+					      j, name);
+		}
+		RTE_TEST_ASSERT_EQUAL(data[0].invoked, false,
+				      "Unregistered callback 0 invoked on %s mempool destruction",
+				      name);
+	}
+
+	for (j = 1; j < CB_NUM; j++) {
+		ret = rte_mempool_event_callback_unregister
+				(test_mempool_events_cb, &data[j]);
+		RTE_TEST_ASSERT_EQUAL(ret, 0, "Failed to unregister the callback %zu: %s",
+				      j, rte_strerror(rte_errno));
+	}
+	return 0;
+}
+
+struct test_mempool_events_safety_data {
+	bool invoked;
+	int (*api_func)(rte_mempool_event_callback *func, void *user_data);
+	rte_mempool_event_callback *cb_func;
+	void *cb_user_data;
+	int ret;
+};
+
+static void
+test_mempool_events_safety_cb(enum rte_mempool_event event,
+			      struct rte_mempool *mp, void *user_data)
+{
+	struct test_mempool_events_safety_data *data = user_data;
+
+	RTE_SET_USED(event);
+	RTE_SET_USED(mp);
+	data->invoked = true;
+	data->ret = data->api_func(data->cb_func, data->cb_user_data);
+}
+
+static int
+test_mempool_events_safety(void)
+{
+	struct test_mempool_events_data data;
+	struct test_mempool_events_safety_data sdata[2];
+	struct rte_mempool *mp;
+	size_t i;
+	int ret;
+
+	/* removes itself */
+	sdata[0].api_func = rte_mempool_event_callback_unregister;
+	sdata[0].cb_func = test_mempool_events_safety_cb;
+	sdata[0].cb_user_data = &sdata[0];
+	sdata[0].ret = -1;
+	rte_mempool_event_callback_register(test_mempool_events_safety_cb,
+					    &sdata[0]);
+	/* inserts a callback after itself */
+	sdata[1].api_func = rte_mempool_event_callback_register;
+	sdata[1].cb_func = test_mempool_events_cb;
+	sdata[1].cb_user_data = &data;
+	sdata[1].ret = -1;
+	rte_mempool_event_callback_register(test_mempool_events_safety_cb,
+					    &sdata[1]);
+
+	mp = rte_mempool_create_empty("empty", MEMPOOL_SIZE,
+				      MEMPOOL_ELT_SIZE, 0, 0,
+				      SOCKET_ID_ANY, 0);
+	RTE_TEST_ASSERT_NOT_NULL(mp, "Cannot create mempool: %s",
+				 rte_strerror(rte_errno));
+	memset(&data, 0, sizeof(data));
+	ret = rte_mempool_populate_default(mp);
+	RTE_TEST_ASSERT_EQUAL(ret, (int)mp->size, "Failed to populate mempool: %s",
+			      rte_strerror(rte_errno));
+
+	RTE_TEST_ASSERT_EQUAL(sdata[0].ret, 0, "Callback failed to unregister itself: %s",
+			      rte_strerror(rte_errno));
+	RTE_TEST_ASSERT_EQUAL(sdata[1].ret, 0, "Failed to insert a new callback: %s",
+			      rte_strerror(rte_errno));
+	RTE_TEST_ASSERT_EQUAL(data.invoked, false,
+			      "Inserted callback is invoked on mempool population");
+
+	memset(&data, 0, sizeof(data));
+	sdata[0].invoked = false;
+	rte_mempool_free(mp);
+	RTE_TEST_ASSERT_EQUAL(sdata[0].invoked, false,
+			      "Callback that unregistered itself was called");
+	RTE_TEST_ASSERT_EQUAL(sdata[1].ret, -EEXIST,
+			      "New callback inserted twice");
+	RTE_TEST_ASSERT_EQUAL(data.invoked, true,
+			      "Inserted callback is not invoked on mempool destruction");
+
+	/* cleanup, don't care which callbacks are already removed */
+	rte_mempool_event_callback_unregister(test_mempool_events_cb, &data);
+	for (i = 0; i < RTE_DIM(sdata); i++)
+		rte_mempool_event_callback_unregister
+				(test_mempool_events_safety_cb,
+				 &sdata[i]);
+	return 0;
+}
+
 static int
 test_mempool(void)
 {
@@ -645,6 +846,14 @@ test_mempool(void)
 	if (test_mempool_basic(default_pool, 1) < 0)
 		GOTO_ERR(ret, err);
 
+	/* test mempool event callbacks */
+	if (test_mempool_events(rte_mempool_populate_default) < 0)
+		GOTO_ERR(ret, err);
+	if (test_mempool_events(rte_mempool_populate_anon) < 0)
+		GOTO_ERR(ret, err);
+	if (test_mempool_events_safety() < 0)
+		GOTO_ERR(ret, err);
+
 	rte_mempool_list_dump(stdout);
 
 	ret = 0;
diff --git a/lib/mempool/rte_mempool.c b/lib/mempool/rte_mempool.c
index c5f859ae71..51c0ba2931 100644
--- a/lib/mempool/rte_mempool.c
+++ b/lib/mempool/rte_mempool.c
@@ -42,6 +42,18 @@ static struct rte_tailq_elem rte_mempool_tailq = {
 };
 EAL_REGISTER_TAILQ(rte_mempool_tailq)
 
+TAILQ_HEAD(mempool_callback_list, rte_tailq_entry);
+
+static struct rte_tailq_elem callback_tailq = {
+	.name = "RTE_MEMPOOL_CALLBACK",
+};
+EAL_REGISTER_TAILQ(callback_tailq)
+
+/* Invoke all registered mempool event callbacks. */
+static void
+mempool_event_callback_invoke(enum rte_mempool_event event,
+			      struct rte_mempool *mp);
+
 #define CACHE_FLUSHTHRESH_MULTIPLIER 1.5
 #define CALC_CACHE_FLUSHTHRESH(c)	\
 	((typeof(c))((c) * CACHE_FLUSHTHRESH_MULTIPLIER))
@@ -360,6 +372,10 @@ rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
 	STAILQ_INSERT_TAIL(&mp->mem_list, memhdr, next);
 	mp->nb_mem_chunks++;
 
+	/* Report the mempool as ready only when fully populated. */
+	if (mp->populated_size >= mp->size)
+		mempool_event_callback_invoke(RTE_MEMPOOL_EVENT_READY, mp);
+
 	rte_mempool_trace_populate_iova(mp, vaddr, iova, len, free_cb, opaque);
 	return i;
 
@@ -722,6 +738,7 @@ rte_mempool_free(struct rte_mempool *mp)
 	}
 	rte_mcfg_tailq_write_unlock();
 
+	mempool_event_callback_invoke(RTE_MEMPOOL_EVENT_DESTROY, mp);
 	rte_mempool_trace_free(mp);
 	rte_mempool_free_memchunks(mp);
 	rte_mempool_ops_free(mp);
@@ -1343,3 +1360,123 @@ void rte_mempool_walk(void (*func)(struct rte_mempool *, void *),
 
 	rte_mcfg_mempool_read_unlock();
 }
+
+struct mempool_callback {
+	rte_mempool_event_callback *func;
+	void *user_data;
+};
+
+static void
+mempool_event_callback_invoke(enum rte_mempool_event event,
+			      struct rte_mempool *mp)
+{
+	struct mempool_callback_list *list;
+	struct rte_tailq_entry *te;
+	void *tmp_te;
+
+	rte_mcfg_tailq_read_lock();
+	list = RTE_TAILQ_CAST(callback_tailq.head, mempool_callback_list);
+	RTE_TAILQ_FOREACH_SAFE(te, list, next, tmp_te) {
+		struct mempool_callback *cb = te->data;
+		rte_mcfg_tailq_read_unlock();
+		cb->func(event, mp, cb->user_data);
+		rte_mcfg_tailq_read_lock();
+	}
+	rte_mcfg_tailq_read_unlock();
+}
+
+int
+rte_mempool_event_callback_register(rte_mempool_event_callback *func,
+				    void *user_data)
+{
+	struct mempool_callback_list *list;
+	struct rte_tailq_entry *te = NULL;
+	struct mempool_callback *cb;
+	void *tmp_te;
+	int ret;
+
+	if (func == NULL) {
+		rte_errno = EINVAL;
+		return -rte_errno;
+	}
+
+	rte_mcfg_mempool_read_lock();
+	rte_mcfg_tailq_write_lock();
+
+	list = RTE_TAILQ_CAST(callback_tailq.head, mempool_callback_list);
+	RTE_TAILQ_FOREACH_SAFE(te, list, next, tmp_te) {
+		struct mempool_callback *cb =
+					(struct mempool_callback *)te->data;
+		if (cb->func == func && cb->user_data == user_data) {
+			ret = -EEXIST;
+			goto exit;
+		}
+	}
+
+	te = rte_zmalloc("MEMPOOL_TAILQ_ENTRY", sizeof(*te), 0);
+	if (te == NULL) {
+		RTE_LOG(ERR, MEMPOOL,
+			"Cannot allocate event callback tailq entry!\n");
+		ret = -ENOMEM;
+		goto exit;
+	}
+
+	cb = rte_malloc("MEMPOOL_EVENT_CALLBACK", sizeof(*cb), 0);
+	if (cb == NULL) {
+		RTE_LOG(ERR, MEMPOOL,
+			"Cannot allocate event callback!\n");
+		rte_free(te);
+		ret = -ENOMEM;
+		goto exit;
+	}
+
+	cb->func = func;
+	cb->user_data = user_data;
+	te->data = cb;
+	TAILQ_INSERT_TAIL(list, te, next);
+	ret = 0;
+
+exit:
+	rte_mcfg_tailq_write_unlock();
+	rte_mcfg_mempool_read_unlock();
+	rte_errno = -ret;
+	return ret;
+}
+
+int
+rte_mempool_event_callback_unregister(rte_mempool_event_callback *func,
+				      void *user_data)
+{
+	struct mempool_callback_list *list;
+	struct rte_tailq_entry *te = NULL;
+	struct mempool_callback *cb;
+	int ret;
+
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
+		rte_errno = EPERM;
+		return -1;
+	}
+
+	rte_mcfg_mempool_read_lock();
+	rte_mcfg_tailq_write_lock();
+	ret = -ENOENT;
+	list = RTE_TAILQ_CAST(callback_tailq.head, mempool_callback_list);
+	TAILQ_FOREACH(te, list, next) {
+		cb = (struct mempool_callback *)te->data;
+		if (cb->func == func && cb->user_data == user_data)
+			break;
+	}
+	if (te != NULL) {
+		TAILQ_REMOVE(list, te, next);
+		ret = 0;
+	}
+	rte_mcfg_tailq_write_unlock();
+	rte_mcfg_mempool_read_unlock();
+
+	if (ret == 0) {
+		rte_free(te);
+		rte_free(cb);
+	}
+	rte_errno = -ret;
+	return ret;
+}
diff --git a/lib/mempool/rte_mempool.h b/lib/mempool/rte_mempool.h
index f57ecbd6fc..663123042f 100644
--- a/lib/mempool/rte_mempool.h
+++ b/lib/mempool/rte_mempool.h
@@ -1774,6 +1774,67 @@ void rte_mempool_walk(void (*func)(struct rte_mempool *, void *arg),
 int
 rte_mempool_get_page_size(struct rte_mempool *mp, size_t *pg_sz);
 
+/**
+ * Mempool event type.
+ * @internal
+ */
+enum rte_mempool_event {
+	/** Occurs after a mempool is successfully populated. */
+	RTE_MEMPOOL_EVENT_READY = 0,
+	/** Occurs before destruction of a mempool begins. */
+	RTE_MEMPOOL_EVENT_DESTROY = 1,
+};
+
+/**
+ * @internal
+ * Mempool event callback.
+ *
+ * rte_mempool_event_callback_register() may be called from within the callback,
+ * but the callbacks registered this way will not be invoked for the same event.
+ * rte_mempool_event_callback_unregister() may only be safely called
+ * to remove the running callback.
+ */
+typedef void (rte_mempool_event_callback)(
+		enum rte_mempool_event event,
+		struct rte_mempool *mp,
+		void *user_data);
+
+/**
+ * @internal
+ * Register a callback invoked on mempool life cycle event.
+ * Callbacks will be invoked in the process that creates the mempool.
+ *
+ * @param func
+ *   Callback function.
+ * @param user_data
+ *   User data.
+ *
+ * @return
+ *   0 on success, negative on failure and rte_errno is set.
+ */
+__rte_internal
+int
+rte_mempool_event_callback_register(rte_mempool_event_callback *func,
+				    void *user_data);
+
+/**
+ * @internal
+ * Unregister a callback added with rte_mempool_event_callback_register().
+ * @p func and @p user_data must exactly match registration parameters.
+ *
+ * @param func
+ *   Callback function.
+ * @param user_data
+ *   User data.
+ *
+ * @return
+ *   0 on success, negative on failure and rte_errno is set.
+ */
+__rte_internal
+int
+rte_mempool_event_callback_unregister(rte_mempool_event_callback *func,
+				      void *user_data);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/mempool/version.map b/lib/mempool/version.map
index 9f77da6fff..1b7d7c5456 100644
--- a/lib/mempool/version.map
+++ b/lib/mempool/version.map
@@ -64,3 +64,11 @@ EXPERIMENTAL {
 	__rte_mempool_trace_ops_free;
 	__rte_mempool_trace_set_ops_byname;
 };
+
+INTERNAL {
+	global:
+
+	# added in 21.11
+	rte_mempool_event_callback_register;
+	rte_mempool_event_callback_unregister;
+};
-- 
2.25.1