From: Ashwin Sekhar T K
To: Ashwin Sekhar T K, Pavan Nikhilesh, Nithin Dabilpuram, Kiran Kumar K,
	Sunil Kumar Kori, Satha Rao
Subject: [PATCH 1/5] mempool/cnxk: use pool config to pass flags
Date: Tue, 11 Apr 2023 13:25:24 +0530
Message-ID: <20230411075528.1125799-2-asekhar@marvell.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230411075528.1125799-1-asekhar@marvell.com>
References: <20230411075528.1125799-1-asekhar@marvell.com>
List-Id: DPDK patches and discussions

Use lower bits of pool_config to pass flags specific to cnxk mempool
PMD ops.
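For context, the scheme tags the low four bits of the otherwise
pointer-valued pool_config field with flag bits and keeps the real aura
config pointer in the remaining bits. Below is a minimal standalone
sketch of that pointer-tagging idea; the helper names (cfg_pack,
CFG_F_*) and the 16-byte-alignment assumption are illustrative only and
merely mirror the new CNXK_MEMPOOL_* macros in the patch below, they are
not driver API:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define CFG_F_MASK	0xFUL		/* low 4 bits reserved for flags */
#define CFG_F_ZERO_AURA	(1UL << 0)	/* stand-in for CNXK_MEMPOOL_F_ZERO_AURA */

/* Pack a (suitably aligned) config pointer and flag bits into one pointer. */
static void *
cfg_pack(void *cfg, uint64_t flags)
{
	return (void *)(((uintptr_t)cfg & ~CFG_F_MASK) | (flags & CFG_F_MASK));
}

/* Recover the flag bits, as CNXK_MEMPOOL_FLAGS() does. */
static uint64_t
cfg_flags(const void *pool_config)
{
	return (uintptr_t)pool_config & CFG_F_MASK;
}

/* Recover the original config pointer, as CNXK_MEMPOOL_CONFIG() does. */
static void *
cfg_ptr(const void *pool_config)
{
	return (void *)((uintptr_t)pool_config & ~CFG_F_MASK);
}

int
main(void)
{
	/* 16-byte alignment keeps the low 4 bits of the pointer zero. */
	void *aura_cfg = aligned_alloc(16, 64);
	void *pool_config;

	if (!aura_cfg)
		return 1;
	pool_config = cfg_pack(aura_cfg, CFG_F_ZERO_AURA);
	printf("flags=0x%" PRIx64 " cfg=%p\n",
	       cfg_flags(pool_config), cfg_ptr(pool_config));
	free(aura_cfg);
	return 0;
}

The driver macros added below do the same masking with PLT_U64_CAST and
PLT_PTR_CAST, which is why only flag values within CNXK_MEMPOOL_F_MASK
can be carried this way.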
Signed-off-by: Ashwin Sekhar T K
---
 drivers/mempool/cnxk/cnxk_mempool.h     | 24 ++++++++++++++++++++++++
 drivers/mempool/cnxk/cnxk_mempool_ops.c | 17 ++++++++++-------
 drivers/net/cnxk/cnxk_ethdev_sec.c      | 25 ++++++-------------------
 3 files changed, 40 insertions(+), 26 deletions(-)

diff --git a/drivers/mempool/cnxk/cnxk_mempool.h b/drivers/mempool/cnxk/cnxk_mempool.h
index 3405aa7663..fc2e4b5b70 100644
--- a/drivers/mempool/cnxk/cnxk_mempool.h
+++ b/drivers/mempool/cnxk/cnxk_mempool.h
@@ -7,6 +7,30 @@
 
 #include <rte_mempool.h>
 
+enum cnxk_mempool_flags {
+	/* This flag is used to ensure that only aura zero is allocated.
+	 * If aura zero is not available, then mempool creation fails.
+	 */
+	CNXK_MEMPOOL_F_ZERO_AURA = RTE_BIT64(0),
+	/* Here the pool create will use the npa_aura_s structure passed
+	 * as pool config to create the pool.
+	 */
+	CNXK_MEMPOOL_F_CUSTOM_AURA = RTE_BIT64(1),
+};
+
+#define CNXK_MEMPOOL_F_MASK 0xFUL
+
+#define CNXK_MEMPOOL_FLAGS(_m)                                                 \
+	(PLT_U64_CAST((_m)->pool_config) & CNXK_MEMPOOL_F_MASK)
+#define CNXK_MEMPOOL_CONFIG(_m)                                                \
+	(PLT_PTR_CAST(PLT_U64_CAST((_m)->pool_config) & ~CNXK_MEMPOOL_F_MASK))
+#define CNXK_MEMPOOL_SET_FLAGS(_m, _f)                                         \
+	do {                                                                   \
+		void *_c = CNXK_MEMPOOL_CONFIG(_m);                            \
+		uint64_t _flags = CNXK_MEMPOOL_FLAGS(_m) | (_f);               \
+		(_m)->pool_config = PLT_PTR_CAST(PLT_U64_CAST(_c) | _flags);   \
+	} while (0)
+
 unsigned int cnxk_mempool_get_count(const struct rte_mempool *mp);
 ssize_t cnxk_mempool_calc_mem_size(const struct rte_mempool *mp,
 				   uint32_t obj_num, uint32_t pg_shift,
diff --git a/drivers/mempool/cnxk/cnxk_mempool_ops.c b/drivers/mempool/cnxk/cnxk_mempool_ops.c
index 3769afd3d1..1b6c4591bb 100644
--- a/drivers/mempool/cnxk/cnxk_mempool_ops.c
+++ b/drivers/mempool/cnxk/cnxk_mempool_ops.c
@@ -72,7 +72,7 @@ cnxk_mempool_calc_mem_size(const struct rte_mempool *mp, uint32_t obj_num,
 int
 cnxk_mempool_alloc(struct rte_mempool *mp)
 {
-	uint32_t block_count, flags = 0;
+	uint32_t block_count, flags, roc_flags = 0;
 	uint64_t aura_handle = 0;
 	struct npa_aura_s aura;
 	struct npa_pool_s pool;
@@ -96,15 +96,18 @@ cnxk_mempool_alloc(struct rte_mempool *mp)
 	pool.nat_align = 1;
 	pool.buf_offset = mp->header_size / ROC_ALIGN;
 
-	/* Use driver specific mp->pool_config to override aura config */
-	if (mp->pool_config != NULL)
-		memcpy(&aura, mp->pool_config, sizeof(struct npa_aura_s));
+	flags = CNXK_MEMPOOL_FLAGS(mp);
+	if (flags & CNXK_MEMPOOL_F_ZERO_AURA) {
+		roc_flags = ROC_NPA_ZERO_AURA_F;
+	} else if (flags & CNXK_MEMPOOL_F_CUSTOM_AURA) {
+		struct npa_aura_s *paura;
 
-	if (aura.ena && aura.pool_addr == 0)
-		flags = ROC_NPA_ZERO_AURA_F;
+		paura = CNXK_MEMPOOL_CONFIG(mp);
+		memcpy(&aura, paura, sizeof(struct npa_aura_s));
+	}
 
 	rc = roc_npa_pool_create(&aura_handle, block_size, block_count, &aura,
-				 &pool, flags);
+				 &pool, roc_flags);
 	if (rc) {
 		plt_err("Failed to alloc pool or aura rc=%d", rc);
 		goto error;
diff --git a/drivers/net/cnxk/cnxk_ethdev_sec.c b/drivers/net/cnxk/cnxk_ethdev_sec.c
index aa8a378a00..cd64daacc0 100644
--- a/drivers/net/cnxk/cnxk_ethdev_sec.c
+++ b/drivers/net/cnxk/cnxk_ethdev_sec.c
@@ -3,6 +3,7 @@
  */
 
 #include <cnxk_ethdev.h>
+#include <cnxk_mempool.h>
 
 #define CNXK_NIX_INL_META_POOL_NAME "NIX_INL_META_POOL"
 
@@ -43,7 +44,6 @@ cnxk_nix_inl_meta_pool_cb(uint64_t *aura_handle, uintptr_t *mpool, uint32_t buf_
 {
 	const char *mp_name = NULL;
 	struct rte_pktmbuf_pool_private mbp_priv;
-	struct npa_aura_s *aura;
 	struct rte_mempool *mp;
 	uint16_t first_skip;
 	int rc;
@@ -65,7 +65,6 @@ cnxk_nix_inl_meta_pool_cb(uint64_t *aura_handle, uintptr_t *mpool, uint32_t buf_
 			return -EINVAL;
 		}
 
-		plt_free(mp->pool_config);
 		rte_mempool_free(mp);
 
 		*aura_handle = 0;
@@ -84,22 +83,12 @@ cnxk_nix_inl_meta_pool_cb(uint64_t *aura_handle, uintptr_t *mpool, uint32_t buf_
 		return -EIO;
 	}
 
-	/* Indicate to allocate zero aura */
-	aura = plt_zmalloc(sizeof(struct npa_aura_s), 0);
-	if (!aura) {
-		rc = -ENOMEM;
-		goto free_mp;
-	}
-	aura->ena = 1;
-	if (!mempool_name)
-		aura->pool_addr = 0;
-	else
-		aura->pool_addr = 1; /* Any non zero value, so that alloc from next free Index */
-
-	rc = rte_mempool_set_ops_byname(mp, rte_mbuf_platform_mempool_ops(), aura);
+	rc = rte_mempool_set_ops_byname(mp, rte_mbuf_platform_mempool_ops(),
+					mempool_name ?
+					NULL : PLT_PTR_CAST(CNXK_MEMPOOL_F_ZERO_AURA));
 	if (rc) {
 		plt_err("Failed to setup mempool ops for meta, rc=%d", rc);
-		goto free_aura;
+		goto free_mp;
 	}
 
 	/* Init mempool private area */
@@ -113,15 +102,13 @@ cnxk_nix_inl_meta_pool_cb(uint64_t *aura_handle, uintptr_t *mpool, uint32_t buf_
 	rc = rte_mempool_populate_default(mp);
 	if (rc < 0) {
 		plt_err("Failed to create inline meta pool, rc=%d", rc);
-		goto free_aura;
+		goto free_mp;
 	}
 
 	rte_mempool_obj_iter(mp, rte_pktmbuf_init, NULL);
 	*aura_handle = mp->pool_id;
 	*mpool = (uintptr_t)mp;
 	return 0;
-free_aura:
-	plt_free(aura);
 free_mp:
 	rte_mempool_free(mp);
 	return rc;
-- 
2.25.1