From: Ashwin Sekhar T K <asekhar@marvell.com>
To: dev@dpdk.org, Ashwin Sekhar T K, Pavan Nikhilesh, Nithin Dabilpuram,
    Kiran Kumar K, Sunil Kumar Kori, Satha Rao
Subject: [PATCH v2 1/5] mempool/cnxk: use pool config to pass flags
Date: Tue, 23 May 2023 16:24:29 +0530
Message-ID: <20230523105433.719998-1-asekhar@marvell.com>
In-Reply-To: <20230411075528.1125799-1-asekhar@marvell.com>
References: <20230411075528.1125799-1-asekhar@marvell.com>

Use lower bits of pool_config to pass flags specific to cnxk mempool
PMD ops.
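Illustration (not part of this patch): after this change, a consumer that
needs the zero aura simply passes the flag in the low bits of the opaque
pool_config argument of rte_mempool_set_ops_byname(); no separate
npa_aura_s allocation is required. A minimal caller-side sketch with a
hypothetical helper name, assuming the driver-internal cnxk_mempool.h
header is reachable:

/* Hypothetical helper, for illustration only. Assumed includes:
 * <rte_errno.h>, <rte_lcore.h>, <rte_mbuf.h>, <rte_mbuf_pool_ops.h>,
 * <rte_mempool.h> and the driver-internal cnxk_mempool.h.
 */
static int
example_create_zero_aura_pool(const char *name, uint32_t nb_bufs,
			      uint32_t buf_sz, struct rte_mempool **out)
{
	struct rte_mempool *mp;
	int rc;

	mp = rte_mempool_create_empty(name, nb_bufs, buf_sz, 0,
				      sizeof(struct rte_pktmbuf_pool_private),
				      rte_socket_id(), 0);
	if (mp == NULL)
		return -rte_errno;

	/* The flag rides in the low bits of pool_config; the cnxk alloc op
	 * extracts it with CNXK_MEMPOOL_FLAGS() and requests aura zero.
	 */
	rc = rte_mempool_set_ops_byname(mp, rte_mbuf_platform_mempool_ops(),
					PLT_PTR_CAST(CNXK_MEMPOOL_F_ZERO_AURA));
	if (rc != 0) {
		rte_mempool_free(mp);
		return rc;
	}

	*out = mp;
	return 0;
}

The cnxk alloc op recovers the flag with CNXK_MEMPOOL_FLAGS() and maps it
to ROC_NPA_ZERO_AURA_F, as done in cnxk_mempool_alloc() below.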
Signed-off-by: Ashwin Sekhar T K <asekhar@marvell.com>
---
 drivers/mempool/cnxk/cnxk_mempool.h     | 24 ++++++++++++++++++++++++
 drivers/mempool/cnxk/cnxk_mempool_ops.c | 17 ++++++++++-------
 drivers/net/cnxk/cnxk_ethdev_sec.c      | 25 ++++++-------------------
 3 files changed, 40 insertions(+), 26 deletions(-)

diff --git a/drivers/mempool/cnxk/cnxk_mempool.h b/drivers/mempool/cnxk/cnxk_mempool.h
index 3405aa7663..fc2e4b5b70 100644
--- a/drivers/mempool/cnxk/cnxk_mempool.h
+++ b/drivers/mempool/cnxk/cnxk_mempool.h
@@ -7,6 +7,30 @@
 
 #include <rte_mempool.h>
 
+enum cnxk_mempool_flags {
+	/* This flag is used to ensure that only aura zero is allocated.
+	 * If aura zero is not available, then mempool creation fails.
+	 */
+	CNXK_MEMPOOL_F_ZERO_AURA = RTE_BIT64(0),
+	/* Here the pool create will use the npa_aura_s structure passed
+	 * as pool config to create the pool.
+	 */
+	CNXK_MEMPOOL_F_CUSTOM_AURA = RTE_BIT64(1),
+};
+
+#define CNXK_MEMPOOL_F_MASK 0xFUL
+
+#define CNXK_MEMPOOL_FLAGS(_m)                                                 \
+	(PLT_U64_CAST((_m)->pool_config) & CNXK_MEMPOOL_F_MASK)
+#define CNXK_MEMPOOL_CONFIG(_m)                                                \
+	(PLT_PTR_CAST(PLT_U64_CAST((_m)->pool_config) & ~CNXK_MEMPOOL_F_MASK))
+#define CNXK_MEMPOOL_SET_FLAGS(_m, _f)                                         \
+	do {                                                                   \
+		void *_c = CNXK_MEMPOOL_CONFIG(_m);                            \
+		uint64_t _flags = CNXK_MEMPOOL_FLAGS(_m) | (_f);               \
+		(_m)->pool_config = PLT_PTR_CAST(PLT_U64_CAST(_c) | _flags);   \
+	} while (0)
+
 unsigned int cnxk_mempool_get_count(const struct rte_mempool *mp);
 ssize_t cnxk_mempool_calc_mem_size(const struct rte_mempool *mp,
 				   uint32_t obj_num, uint32_t pg_shift,
diff --git a/drivers/mempool/cnxk/cnxk_mempool_ops.c b/drivers/mempool/cnxk/cnxk_mempool_ops.c
index 3769afd3d1..1b6c4591bb 100644
--- a/drivers/mempool/cnxk/cnxk_mempool_ops.c
+++ b/drivers/mempool/cnxk/cnxk_mempool_ops.c
@@ -72,7 +72,7 @@ cnxk_mempool_calc_mem_size(const struct rte_mempool *mp, uint32_t obj_num,
 int
 cnxk_mempool_alloc(struct rte_mempool *mp)
 {
-	uint32_t block_count, flags = 0;
+	uint32_t block_count, flags, roc_flags = 0;
 	uint64_t aura_handle = 0;
 	struct npa_aura_s aura;
 	struct npa_pool_s pool;
@@ -96,15 +96,18 @@ cnxk_mempool_alloc(struct rte_mempool *mp)
 	pool.nat_align = 1;
 	pool.buf_offset = mp->header_size / ROC_ALIGN;
 
-	/* Use driver specific mp->pool_config to override aura config */
-	if (mp->pool_config != NULL)
-		memcpy(&aura, mp->pool_config, sizeof(struct npa_aura_s));
+	flags = CNXK_MEMPOOL_FLAGS(mp);
+	if (flags & CNXK_MEMPOOL_F_ZERO_AURA) {
+		roc_flags = ROC_NPA_ZERO_AURA_F;
+	} else if (flags & CNXK_MEMPOOL_F_CUSTOM_AURA) {
+		struct npa_aura_s *paura;
 
-	if (aura.ena && aura.pool_addr == 0)
-		flags = ROC_NPA_ZERO_AURA_F;
+		paura = CNXK_MEMPOOL_CONFIG(mp);
+		memcpy(&aura, paura, sizeof(struct npa_aura_s));
+	}
 
 	rc = roc_npa_pool_create(&aura_handle, block_size, block_count, &aura,
-				 &pool, flags);
+				 &pool, roc_flags);
 	if (rc) {
 		plt_err("Failed to alloc pool or aura rc=%d", rc);
 		goto error;
diff --git a/drivers/net/cnxk/cnxk_ethdev_sec.c b/drivers/net/cnxk/cnxk_ethdev_sec.c
index aa8a378a00..cd64daacc0 100644
--- a/drivers/net/cnxk/cnxk_ethdev_sec.c
+++ b/drivers/net/cnxk/cnxk_ethdev_sec.c
@@ -3,6 +3,7 @@
  */
 
 #include <cnxk_ethdev.h>
+#include <cnxk_mempool.h>
 
 #define CNXK_NIX_INL_META_POOL_NAME "NIX_INL_META_POOL"
 
@@ -43,7 +44,6 @@ cnxk_nix_inl_meta_pool_cb(uint64_t *aura_handle, uintptr_t *mpool, uint32_t buf_
 {
 	const char *mp_name = NULL;
 	struct rte_pktmbuf_pool_private mbp_priv;
-	struct npa_aura_s *aura;
 	struct rte_mempool *mp;
 	uint16_t first_skip;
 	int rc;
@@ -65,7 +65,6 @@ cnxk_nix_inl_meta_pool_cb(uint64_t *aura_handle, uintptr_t *mpool, uint32_t buf_
 			return -EINVAL;
 		}
 
-		plt_free(mp->pool_config);
 		rte_mempool_free(mp);
 
 		*aura_handle = 0;
@@ -84,22 +83,12 @@ cnxk_nix_inl_meta_pool_cb(uint64_t *aura_handle, uintptr_t *mpool, uint32_t buf_
 		return -EIO;
 	}
 
-	/* Indicate to allocate zero aura */
-	aura = plt_zmalloc(sizeof(struct npa_aura_s), 0);
-	if (!aura) {
-		rc = -ENOMEM;
-		goto free_mp;
-	}
-	aura->ena = 1;
-	if (!mempool_name)
-		aura->pool_addr = 0;
-	else
-		aura->pool_addr = 1; /* Any non zero value, so that alloc from next free Index */
-
-	rc = rte_mempool_set_ops_byname(mp, rte_mbuf_platform_mempool_ops(), aura);
+	rc = rte_mempool_set_ops_byname(mp, rte_mbuf_platform_mempool_ops(),
+					mempool_name ?
+					NULL : PLT_PTR_CAST(CNXK_MEMPOOL_F_ZERO_AURA));
 	if (rc) {
 		plt_err("Failed to setup mempool ops for meta, rc=%d", rc);
-		goto free_aura;
+		goto free_mp;
 	}
 
 	/* Init mempool private area */
@@ -113,15 +102,13 @@ cnxk_nix_inl_meta_pool_cb(uint64_t *aura_handle, uintptr_t *mpool, uint32_t buf_
 	rc = rte_mempool_populate_default(mp);
 	if (rc < 0) {
 		plt_err("Failed to create inline meta pool, rc=%d", rc);
-		goto free_aura;
+		goto free_mp;
 	}
 
 	rte_mempool_obj_iter(mp, rte_pktmbuf_init, NULL);
 	*aura_handle = mp->pool_id;
 	*mpool = (uintptr_t)mp;
 	return 0;
-free_aura:
-	plt_free(aura);
 free_mp:
 	rte_mempool_free(mp);
 	return rc;
-- 
2.25.1
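A further illustrative sketch (not part of the patch): a driver-internal
user that still wants to supply its own npa_aura_s would now pass the
pointer through pool_config and tag it with CNXK_MEMPOOL_F_CUSTOM_AURA via
CNXK_MEMPOOL_SET_FLAGS(). The helper name below is hypothetical; it assumes
the aura structure comes from rte_zmalloc() so the pointer bits covered by
CNXK_MEMPOOL_F_MASK are naturally clear:

/* Hypothetical driver-internal helper, for illustration only. Assumed
 * includes: <errno.h>, <rte_malloc.h>, <rte_mbuf_pool_ops.h>,
 * <rte_mempool.h> and the driver-internal cnxk_mempool.h.
 */
static int
example_use_custom_aura(struct rte_mempool *mp, const struct npa_aura_s *tmpl)
{
	struct npa_aura_s *aura;
	int rc;

	/* rte_zmalloc() returns cache-line aligned memory, so the pointer's
	 * low bits covered by CNXK_MEMPOOL_F_MASK are guaranteed to be zero.
	 */
	aura = rte_zmalloc(NULL, sizeof(*aura), 0);
	if (aura == NULL)
		return -ENOMEM;
	*aura = *tmpl;

	/* Hand the aura template to the cnxk ops through pool_config ... */
	rc = rte_mempool_set_ops_byname(mp, rte_mbuf_platform_mempool_ops(), aura);
	if (rc != 0) {
		rte_free(aura);
		return rc;
	}

	/* ... and tag it so cnxk_mempool_alloc() copies it before pool create. */
	CNXK_MEMPOOL_SET_FLAGS(mp, CNXK_MEMPOOL_F_CUSTOM_AURA);
	return 0;
}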