From: Ophir Munk <ophirmu@nvidia.com>
To: dev@dpdk.org, Bruce Richardson, Devendra Singh Rawat, Alok Prasad
CC: Ophir Munk, Matan Azrad, Thomas Monjalon, Lior Margalit
Subject: [RFC] lib: set/get max memzone segments
Date: Wed, 19 Apr 2023 11:36:34 +0300
Message-ID: <20230419083634.2027689-1-ophirmu@nvidia.com>
X-Mailer: git-send-email 2.14.1
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

In current DPDK, the RTE_MAX_MEMZONE definition is unconditionally hard-coded
as 2560. For applications that require a different value, it is more convenient
to set the maximum via a DPDK API at run time than to change the DPDK source
code for each application. In many organizations, building a private DPDK
library for a particular application is not an option at all. With this change
there is no need to recompile DPDK, and an in-box packaged DPDK can be used
as is.

An example use case for raising the memzone limit is an application that uses
the DPDK mempool library, which is built on top of the DPDK memzone library.
Such an application may need to create a large number of steering tables, each
of which requires its own mempool allocation. This commit is not about
optimizing the application's use of mempools, nor about improving the
memzone-based mempool implementation; it is only about making the maximum
number of memzones a run-time setting.

This commit adds an API which must be called before rte_eal_init():
rte_memzone_max_set(uint32_t max). If it is not called, the default maximum of
2560 memzones is used. There is also an API to query the effective maximum:
rte_memzone_max_get().
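
For illustration, an application linked against a DPDK build with this patch
could raise the limit before EAL initialization roughly as follows
(hypothetical application code, not part of the patch; the value 4096 is an
arbitrary example):

    #include <stdio.h>

    #include <rte_eal.h>
    #include <rte_memzone.h>

    int
    main(int argc, char **argv)
    {
        /* Must run before rte_eal_init(); returns -1 once EAL init completed. */
        if (rte_memzone_max_set(4096) != 0) {
            fprintf(stderr, "cannot set max memzones\n");
            return -1;
        }

        if (rte_eal_init(argc, argv) < 0) {
            fprintf(stderr, "cannot init EAL\n");
            return -1;
        }

        /* Reports 4096 here, or 2560 if rte_memzone_max_set() was never called. */
        printf("max memzones: %u\n", rte_memzone_max_get());

        rte_eal_cleanup();
        return 0;
    }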

Signed-off-by: Ophir Munk <ophirmu@nvidia.com>
---
 app/test/test_func_reentrancy.c     |  2 +-
 app/test/test_malloc_perf.c         |  2 +-
 app/test/test_memzone.c             |  2 +-
 config/rte_config.h                 |  1 -
 drivers/net/qede/base/bcm_osal.c    | 26 +++++++++++++++++++++-----
 drivers/net/qede/base/bcm_osal.h    |  3 +++
 drivers/net/qede/qede_main.c        |  7 +++++++
 lib/eal/common/eal_common_memzone.c | 28 +++++++++++++++++++++++++---
 lib/eal/include/rte_memzone.h       | 20 ++++++++++++++++++++
 lib/eal/version.map                 |  4 ++++
 10 files changed, 83 insertions(+), 12 deletions(-)

diff --git a/app/test/test_func_reentrancy.c b/app/test/test_func_reentrancy.c
index d1ed5d4..ae9de6f 100644
--- a/app/test/test_func_reentrancy.c
+++ b/app/test/test_func_reentrancy.c
@@ -51,7 +51,7 @@ typedef void (*case_clean_t)(unsigned lcore_id);
 #define MEMPOOL_ELT_SIZE (sizeof(uint32_t))
 #define MEMPOOL_SIZE (4)
 
-#define MAX_LCORES (RTE_MAX_MEMZONE / (MAX_ITER_MULTI * 4U))
+#define MAX_LCORES (rte_memzone_max_get() / (MAX_ITER_MULTI * 4U))
 
 static uint32_t obj_count;
 static uint32_t synchro;
diff --git a/app/test/test_malloc_perf.c b/app/test/test_malloc_perf.c
index ccec43a..9bd1662 100644
--- a/app/test/test_malloc_perf.c
+++ b/app/test/test_malloc_perf.c
@@ -165,7 +165,7 @@ test_malloc_perf(void)
 		return -1;
 
 	if (test_alloc_perf("rte_memzone_reserve", memzone_alloc, memzone_free,
-			NULL, memset_us_gb, RTE_MAX_MEMZONE - 1) < 0)
+			NULL, memset_us_gb, rte_memzone_max_get() - 1) < 0)
 		return -1;
 
 	return 0;
diff --git a/app/test/test_memzone.c b/app/test/test_memzone.c
index c9255e5..a315826 100644
--- a/app/test/test_memzone.c
+++ b/app/test/test_memzone.c
@@ -871,7 +871,7 @@ test_memzone_bounded(void)
 static int
 test_memzone_free(void)
 {
-	const struct rte_memzone *mz[RTE_MAX_MEMZONE + 1];
+	const struct rte_memzone *mz[rte_memzone_max_get() + 1];
 	int i;
 	char name[20];
 
diff --git a/config/rte_config.h b/config/rte_config.h
index 7b8c85e..400e44e 100644
--- a/config/rte_config.h
+++ b/config/rte_config.h
@@ -34,7 +34,6 @@
 #define RTE_MAX_MEM_MB_PER_LIST 32768
 #define RTE_MAX_MEMSEG_PER_TYPE 32768
 #define RTE_MAX_MEM_MB_PER_TYPE 65536
-#define RTE_MAX_MEMZONE 2560
 #define RTE_MAX_TAILQ 32
 #define RTE_LOG_DP_LEVEL RTE_LOG_INFO
 #define RTE_MAX_VFIO_CONTAINERS 64
diff --git a/drivers/net/qede/base/bcm_osal.c b/drivers/net/qede/base/bcm_osal.c
index 2c59397..f195f2c 100644
--- a/drivers/net/qede/base/bcm_osal.c
+++ b/drivers/net/qede/base/bcm_osal.c
@@ -47,10 +47,26 @@ void osal_poll_mode_dpc(osal_int_ptr_t hwfn_cookie)
 }
 
 /* Array of memzone pointers */
-static const struct rte_memzone *ecore_mz_mapping[RTE_MAX_MEMZONE];
+static const struct rte_memzone **ecore_mz_mapping;
 /* Counter to track current memzone allocated */
 static uint16_t ecore_mz_count;
 
+int ecore_mz_mapping_alloc(void)
+{
+	ecore_mz_mapping = rte_malloc("ecore_mz_map", 0,
+		rte_memzone_max_get() * sizeof(struct rte_memzone *));
+
+	if (!ecore_mz_mapping)
+		return -ENOMEM;
+
+	return 0;
+}
+
+void ecore_mz_mapping_free(void)
+{
+	rte_free(ecore_mz_mapping);
+}
+
 unsigned long qede_log2_align(unsigned long n)
 {
 	unsigned long ret = n ? 1 : 0;
@@ -132,9 +148,9 @@ void *osal_dma_alloc_coherent(struct ecore_dev *p_dev,
 	uint32_t core_id = rte_lcore_id();
 	unsigned int socket_id;
 
-	if (ecore_mz_count >= RTE_MAX_MEMZONE) {
+	if (ecore_mz_count >= rte_memzone_max_get()) {
 		DP_ERR(p_dev, "Memzone allocation count exceeds %u\n",
-		       RTE_MAX_MEMZONE);
+		       rte_memzone_max_get());
 		*phys = 0;
 		return OSAL_NULL;
 	}
@@ -171,9 +187,9 @@ void *osal_dma_alloc_coherent_aligned(struct ecore_dev *p_dev,
 	uint32_t core_id = rte_lcore_id();
 	unsigned int socket_id;
 
-	if (ecore_mz_count >= RTE_MAX_MEMZONE) {
+	if (ecore_mz_count >= rte_memzone_max_get()) {
 		DP_ERR(p_dev, "Memzone allocation count exceeds %u\n",
-		       RTE_MAX_MEMZONE);
+		       rte_memzone_max_get());
 		*phys = 0;
 		return OSAL_NULL;
 	}
diff --git a/drivers/net/qede/base/bcm_osal.h b/drivers/net/qede/base/bcm_osal.h
index 67e7f75..97e261d 100644
--- a/drivers/net/qede/base/bcm_osal.h
+++ b/drivers/net/qede/base/bcm_osal.h
@@ -477,4 +477,7 @@ enum dbg_status qed_dbg_alloc_user_data(struct ecore_hwfn *p_hwfn,
 	qed_dbg_alloc_user_data(p_hwfn, user_data_ptr)
 #define OSAL_DB_REC_OCCURRED(p_hwfn) nothing
 
+int ecore_mz_mapping_alloc(void);
+void ecore_mz_mapping_free(void);
+
 #endif /* __BCM_OSAL_H */
diff --git a/drivers/net/qede/qede_main.c b/drivers/net/qede/qede_main.c
index 0303903..f116e86 100644
--- a/drivers/net/qede/qede_main.c
+++ b/drivers/net/qede/qede_main.c
@@ -78,6 +78,12 @@ qed_probe(struct ecore_dev *edev, struct rte_pci_device *pci_dev,
 		return rc;
 	}
 
+	rc = ecore_mz_mapping_alloc();
+	if (rc) {
+		DP_ERR(edev, "mem zones array allocation failed\n");
+		return rc;
+	}
+
 	return rc;
 }
 
@@ -721,6 +727,7 @@ static void qed_remove(struct ecore_dev *edev)
 	if (!edev)
 		return;
 
+	ecore_mz_mapping_free();
 	ecore_hw_remove(edev);
 }
 
diff --git a/lib/eal/common/eal_common_memzone.c b/lib/eal/common/eal_common_memzone.c
index a9cd91f..6c43b7f 100644
--- a/lib/eal/common/eal_common_memzone.c
+++ b/lib/eal/common/eal_common_memzone.c
@@ -22,6 +22,10 @@
 #include "eal_private.h"
 #include "eal_memcfg.h"
 
+#define RTE_DEFAULT_MAX_MEMZONE 2560
+
+static uint32_t memzone_max = RTE_DEFAULT_MAX_MEMZONE;
+
 static inline const struct rte_memzone *
 memzone_lookup_thread_unsafe(const char *name)
 {
@@ -81,8 +85,9 @@ memzone_reserve_aligned_thread_unsafe(const char *name, size_t len,
 	/* no more room in config */
 	if (arr->count >= arr->len) {
 		RTE_LOG(ERR, EAL,
-			"%s(): Number of requested memzone segments exceeds RTE_MAX_MEMZONE\n",
-			__func__);
+			"%s(): Number of requested memzone segments exceeds max "
+			"memzone segments (%d >= %d)\n",
+			__func__, arr->count, arr->len);
 		rte_errno = ENOSPC;
 		return NULL;
 	}
@@ -396,7 +401,7 @@ rte_eal_memzone_init(void)
 
 	if (rte_eal_process_type() == RTE_PROC_PRIMARY &&
 			rte_fbarray_init(&mcfg->memzones, "memzone",
-			RTE_MAX_MEMZONE, sizeof(struct rte_memzone))) {
+			rte_memzone_max_get(), sizeof(struct rte_memzone))) {
 		RTE_LOG(ERR, EAL, "Cannot allocate memzone list\n");
 		ret = -1;
 	} else if (rte_eal_process_type() == RTE_PROC_SECONDARY &&
@@ -430,3 +435,20 @@ void rte_memzone_walk(void (*func)(const struct rte_memzone *, void *),
 	}
 	rte_rwlock_read_unlock(&mcfg->mlock);
 }
+
+int
+rte_memzone_max_set(uint32_t max)
+{
+	/* Setting max memzone must occur before calling rte_eal_init() */
+	if (eal_get_internal_configuration()->init_complete > 0)
+		return -1;
+
+	memzone_max = max;
+	return 0;
+}
+
+uint32_t
+rte_memzone_max_get(void)
+{
+	return memzone_max;
+}
diff --git a/lib/eal/include/rte_memzone.h b/lib/eal/include/rte_memzone.h
index 5302caa..ca60409 100644
--- a/lib/eal/include/rte_memzone.h
+++ b/lib/eal/include/rte_memzone.h
@@ -305,6 +305,26 @@ void rte_memzone_dump(FILE *f);
 void rte_memzone_walk(void (*func)(const struct rte_memzone *, void *arg),
 		      void *arg);
 
+/**
+ * Set max memzone value
+ *
+ * @param max
+ *   Value of max memzone allocations
+ * @return
+ *   0 on success, -1 otherwise
+ */
+__rte_experimental
+int rte_memzone_max_set(uint32_t max);
+
+/**
+ * Get max memzone value
+ *
+ * @return
+ *   Value of max memzone allocations
+ */
+__rte_experimental
+uint32_t rte_memzone_max_get(void);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/eal/version.map b/lib/eal/version.map
index 6d6978f..717c5b2 100644
--- a/lib/eal/version.map
+++ b/lib/eal/version.map
@@ -430,6 +430,10 @@ EXPERIMENTAL {
 	rte_thread_create_control;
 	rte_thread_set_name;
 	__rte_eal_trace_generic_blob;
+
+	# added in 23.07
+	rte_memzone_max_set;
+	rte_memzone_max_get;
 };
 
 INTERNAL {
-- 
2.8.4