From: Dmitry Kozlyuk <dkozlyuk@nvidia.com>
To: dev@dpdk.org
Cc: Thomas Monjalon, Matan Azrad, Olivier Matz, Andrew Rybchenko,
	Ray Kinsella, Anatoly Burakov
Date: Tue, 12 Oct 2021 03:04:06 +0300
Message-ID: <20211012000409.2751908-2-dkozlyuk@nvidia.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20211012000409.2751908-1-dkozlyuk@nvidia.com>
References: <20210929145249.2176811-1-dkozlyuk@nvidia.com>
	<20211012000409.2751908-1-dkozlyuk@nvidia.com>
MIME-Version: 1.0
Content-Type: text/plain
Subject: [dpdk-dev] [PATCH v3 1/4] mempool: add event callbacks

Data path performance can benefit if the PMD knows in advance which
memory it will need to handle, before the first mbuf is sent to the
PMD. It is impractical, however, to consider all allocated memory for
this purpose. Most often mbuf memory comes from mempools that can
come and go. A PMD can enumerate existing mempools at device start,
but it also needs to track creation and destruction of mempools after
forwarding starts but before an mbuf from a new mempool is sent to
the device.

Add an internal API to register callbacks for mempool life cycle
events:

* rte_mempool_event_callback_register()
* rte_mempool_event_callback_unregister()

Currently tracked events are:

* RTE_MEMPOOL_EVENT_READY (after populating a mempool)
* RTE_MEMPOOL_EVENT_DESTROY (before freeing a mempool)

Provide a unit test for the new API.

Signed-off-by: Dmitry Kozlyuk <dkozlyuk@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
A hypothetical usage sketch for PMD authors is appended after the diff.

 app/test/test_mempool.c  |  75 +++++++++++++++++++++
 lib/mempool/rte_mempool.c | 137 ++++++++++++++++++++++++++++++++++++++
 lib/mempool/rte_mempool.h |  56 ++++++++++++++++
 lib/mempool/version.map   |   8 +++
 4 files changed, 276 insertions(+)

diff --git a/app/test/test_mempool.c b/app/test/test_mempool.c
index 7675a3e605..0c4ed7c60b 100644
--- a/app/test/test_mempool.c
+++ b/app/test/test_mempool.c
@@ -14,6 +14,7 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 #include 
 #include 
@@ -471,6 +472,74 @@ test_mp_mem_init(struct rte_mempool *mp,
 	data->ret = 0;
 }
 
+struct test_mempool_events_data {
+	struct rte_mempool *mp;
+	enum rte_mempool_event event;
+	bool invoked;
+};
+
+static void
+test_mempool_events_cb(enum rte_mempool_event event,
+		       struct rte_mempool *mp, void *arg)
+{
+	struct test_mempool_events_data *data = arg;
+
+	data->mp = mp;
+	data->event = event;
+	data->invoked = true;
+}
+
+static int
+test_mempool_events(int (*populate)(struct rte_mempool *mp))
+{
+	struct test_mempool_events_data data;
+	struct rte_mempool *mp;
+	int ret;
+
+	ret = rte_mempool_event_callback_register(NULL, &data);
+	RTE_TEST_ASSERT_NOT_EQUAL(ret, 0, "Registered a NULL callback");
+
+	memset(&data, 0, sizeof(data));
+	ret = rte_mempool_event_callback_register(test_mempool_events_cb,
+						  &data);
+	RTE_TEST_ASSERT_EQUAL(ret, 0, "Failed to register the callback: %s",
+			      rte_strerror(rte_errno));
+
+	mp = rte_mempool_create_empty("empty", MEMPOOL_SIZE,
+				      MEMPOOL_ELT_SIZE, 0, 0,
+				      SOCKET_ID_ANY, 0);
+	RTE_TEST_ASSERT_NOT_NULL(mp, "Cannot create an empty mempool: %s",
+				 rte_strerror(rte_errno));
+	RTE_TEST_ASSERT_EQUAL(data.invoked, false,
+			      "Callback invoked on an empty mempool creation");
+
+	rte_mempool_set_ops_byname(mp, rte_mbuf_best_mempool_ops(), NULL);
+	ret = populate(mp);
+	RTE_TEST_ASSERT_EQUAL(ret, (int)mp->size, "Failed to populate the mempool: %s",
+			      rte_strerror(rte_errno));
+	RTE_TEST_ASSERT_EQUAL(data.invoked, true,
+			      "Callback not invoked on an empty mempool population");
+	RTE_TEST_ASSERT_EQUAL(data.event, RTE_MEMPOOL_EVENT_READY,
+			      "Wrong callback invoked, expected READY");
+	RTE_TEST_ASSERT_EQUAL(data.mp, mp,
+			      "Callback invoked for a wrong mempool");
+
+	memset(&data, 0, sizeof(data));
+	rte_mempool_free(mp);
+	RTE_TEST_ASSERT_EQUAL(data.invoked, true,
+			      "Callback not invoked on mempool destruction");
destruction"); + RTE_TEST_ASSERT_EQUAL(data.event, RTE_MEMPOOL_EVENT_DESTROY, + "Wrong callback invoked, expected DESTROY"); + RTE_TEST_ASSERT_EQUAL(data.mp, mp, + "Callback invoked for a wrong mempool"); + + ret = rte_mempool_event_callback_unregister(test_mempool_events_cb, + &data); + RTE_TEST_ASSERT_EQUAL(ret, 0, "Failed to unregister the callback: %s", + rte_strerror(rte_errno)); + return 0; +} + static int test_mempool(void) { @@ -645,6 +714,12 @@ test_mempool(void) if (test_mempool_basic(default_pool, 1) < 0) GOTO_ERR(ret, err); + /* test mempool event callbacks */ + if (test_mempool_events(rte_mempool_populate_default) < 0) + GOTO_ERR(ret, err); + if (test_mempool_events(rte_mempool_populate_anon) < 0) + GOTO_ERR(ret, err); + rte_mempool_list_dump(stdout); ret = 0; diff --git a/lib/mempool/rte_mempool.c b/lib/mempool/rte_mempool.c index c5f859ae71..51c0ba2931 100644 --- a/lib/mempool/rte_mempool.c +++ b/lib/mempool/rte_mempool.c @@ -42,6 +42,18 @@ static struct rte_tailq_elem rte_mempool_tailq = { }; EAL_REGISTER_TAILQ(rte_mempool_tailq) +TAILQ_HEAD(mempool_callback_list, rte_tailq_entry); + +static struct rte_tailq_elem callback_tailq = { + .name = "RTE_MEMPOOL_CALLBACK", +}; +EAL_REGISTER_TAILQ(callback_tailq) + +/* Invoke all registered mempool event callbacks. */ +static void +mempool_event_callback_invoke(enum rte_mempool_event event, + struct rte_mempool *mp); + #define CACHE_FLUSHTHRESH_MULTIPLIER 1.5 #define CALC_CACHE_FLUSHTHRESH(c) \ ((typeof(c))((c) * CACHE_FLUSHTHRESH_MULTIPLIER)) @@ -360,6 +372,10 @@ rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr, STAILQ_INSERT_TAIL(&mp->mem_list, memhdr, next); mp->nb_mem_chunks++; + /* Report the mempool as ready only when fully populated. */ + if (mp->populated_size >= mp->size) + mempool_event_callback_invoke(RTE_MEMPOOL_EVENT_READY, mp); + rte_mempool_trace_populate_iova(mp, vaddr, iova, len, free_cb, opaque); return i; @@ -722,6 +738,7 @@ rte_mempool_free(struct rte_mempool *mp) } rte_mcfg_tailq_write_unlock(); + mempool_event_callback_invoke(RTE_MEMPOOL_EVENT_DESTROY, mp); rte_mempool_trace_free(mp); rte_mempool_free_memchunks(mp); rte_mempool_ops_free(mp); @@ -1343,3 +1360,123 @@ void rte_mempool_walk(void (*func)(struct rte_mempool *, void *), rte_mcfg_mempool_read_unlock(); } + +struct mempool_callback { + rte_mempool_event_callback *func; + void *user_data; +}; + +static void +mempool_event_callback_invoke(enum rte_mempool_event event, + struct rte_mempool *mp) +{ + struct mempool_callback_list *list; + struct rte_tailq_entry *te; + void *tmp_te; + + rte_mcfg_tailq_read_lock(); + list = RTE_TAILQ_CAST(callback_tailq.head, mempool_callback_list); + RTE_TAILQ_FOREACH_SAFE(te, list, next, tmp_te) { + struct mempool_callback *cb = te->data; + rte_mcfg_tailq_read_unlock(); + cb->func(event, mp, cb->user_data); + rte_mcfg_tailq_read_lock(); + } + rte_mcfg_tailq_read_unlock(); +} + +int +rte_mempool_event_callback_register(rte_mempool_event_callback *func, + void *user_data) +{ + struct mempool_callback_list *list; + struct rte_tailq_entry *te = NULL; + struct mempool_callback *cb; + void *tmp_te; + int ret; + + if (func == NULL) { + rte_errno = EINVAL; + return -rte_errno; + } + + rte_mcfg_mempool_read_lock(); + rte_mcfg_tailq_write_lock(); + + list = RTE_TAILQ_CAST(callback_tailq.head, mempool_callback_list); + RTE_TAILQ_FOREACH_SAFE(te, list, next, tmp_te) { + struct mempool_callback *cb = + (struct mempool_callback *)te->data; + if (cb->func == func && cb->user_data == user_data) { + ret = -EEXIST; + goto exit; + 
} + } + + te = rte_zmalloc("MEMPOOL_TAILQ_ENTRY", sizeof(*te), 0); + if (te == NULL) { + RTE_LOG(ERR, MEMPOOL, + "Cannot allocate event callback tailq entry!\n"); + ret = -ENOMEM; + goto exit; + } + + cb = rte_malloc("MEMPOOL_EVENT_CALLBACK", sizeof(*cb), 0); + if (cb == NULL) { + RTE_LOG(ERR, MEMPOOL, + "Cannot allocate event callback!\n"); + rte_free(te); + ret = -ENOMEM; + goto exit; + } + + cb->func = func; + cb->user_data = user_data; + te->data = cb; + TAILQ_INSERT_TAIL(list, te, next); + ret = 0; + +exit: + rte_mcfg_tailq_write_unlock(); + rte_mcfg_mempool_read_unlock(); + rte_errno = -ret; + return ret; +} + +int +rte_mempool_event_callback_unregister(rte_mempool_event_callback *func, + void *user_data) +{ + struct mempool_callback_list *list; + struct rte_tailq_entry *te = NULL; + struct mempool_callback *cb; + int ret; + + if (rte_eal_process_type() != RTE_PROC_PRIMARY) { + rte_errno = EPERM; + return -1; + } + + rte_mcfg_mempool_read_lock(); + rte_mcfg_tailq_write_lock(); + ret = -ENOENT; + list = RTE_TAILQ_CAST(callback_tailq.head, mempool_callback_list); + TAILQ_FOREACH(te, list, next) { + cb = (struct mempool_callback *)te->data; + if (cb->func == func && cb->user_data == user_data) + break; + } + if (te != NULL) { + TAILQ_REMOVE(list, te, next); + ret = 0; + } + rte_mcfg_tailq_write_unlock(); + rte_mcfg_mempool_read_unlock(); + + if (ret == 0) { + rte_free(te); + rte_free(cb); + } + rte_errno = -ret; + return ret; +} diff --git a/lib/mempool/rte_mempool.h b/lib/mempool/rte_mempool.h index f57ecbd6fc..e2bf40aa09 100644 --- a/lib/mempool/rte_mempool.h +++ b/lib/mempool/rte_mempool.h @@ -1774,6 +1774,62 @@ void rte_mempool_walk(void (*func)(struct rte_mempool *, void *arg), int rte_mempool_get_page_size(struct rte_mempool *mp, size_t *pg_sz); +/** + * Mempool event type. + * @internal + */ +enum rte_mempool_event { + /** Occurs after a mempool is successfully populated. */ + RTE_MEMPOOL_EVENT_READY = 0, + /** Occurs before destruction of a mempool begins. */ + RTE_MEMPOOL_EVENT_DESTROY = 1, +}; + +/** + * @internal + * Mempool event callback. + */ +typedef void (rte_mempool_event_callback)( + enum rte_mempool_event event, + struct rte_mempool *mp, + void *user_data); + +/** + * @internal + * Register a callback invoked on mempool life cycle event. + * Callbacks will be invoked in the process that creates the mempool. + * + * @param func + * Callback function. + * @param user_data + * User data. + * + * @return + * 0 on success, negative on failure and rte_errno is set. + */ +__rte_internal +int +rte_mempool_event_callback_register(rte_mempool_event_callback *func, + void *user_data); + +/** + * @internal + * Unregister a callback added with rte_mempool_event_callback_register(). + * @p func and @p user_data must exactly match registration parameters. + * + * @param func + * Callback function. + * @param user_data + * User data. + * + * @return + * 0 on success, negative on failure and rte_errno is set. + */ +__rte_internal +int +rte_mempool_event_callback_unregister(rte_mempool_event_callback *func, + void *user_data); + #ifdef __cplusplus } #endif diff --git a/lib/mempool/version.map b/lib/mempool/version.map index 9f77da6fff..1b7d7c5456 100644 --- a/lib/mempool/version.map +++ b/lib/mempool/version.map @@ -64,3 +64,11 @@ EXPERIMENTAL { __rte_mempool_trace_ops_free; __rte_mempool_trace_set_ops_byname; }; + +INTERNAL { + global: + + # added in 21.11 + rte_mempool_event_callback_register; + rte_mempool_event_callback_unregister; +}; -- 2.25.1