From: Ashwin Sekhar T K <asekhar@marvell.com>
To: <dev@dpdk.org>, Ashwin Sekhar T K <asekhar@marvell.com>, Pavan Nikhilesh
Subject: [PATCH 5/5] mempool/cnxk: add support for exchanging mbufs between pools
Date: Tue, 11 Apr 2023 13:25:28 +0530
Message-ID: <20230411075528.1125799-6-asekhar@marvell.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230411075528.1125799-1-asekhar@marvell.com>
References: <20230411075528.1125799-1-asekhar@marvell.com>
List-Id: DPDK patches and discussions <dev.dpdk.org>

Add the following cnxk mempool PMD APIs to facilitate exchanging mbufs
between pools:

 * rte_pmd_cnxk_mempool_is_hwpool() - Allows the user to check whether
   a pool is a hwpool or not.
 * rte_pmd_cnxk_mempool_range_check_disable() - Disables range checking
   on any rte_mempool.
 * rte_pmd_cnxk_mempool_mbuf_exchange() - Exchanges mbufs between any
   two rte_mempools on which range checking has been disabled.
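For illustration, a minimal usage sketch (not part of this patch). The
pool names "pool" and "hwpool" and the helper swap_one_mbuf() are
hypothetical; the two pools are assumed to already exist and to have
matching element/header/trailer sizes, as the exchange API requires:

    /* Hypothetical example: exchange one mbuf between a regular cnxk
     * mempool ("pool") and a hwpool ("hwpool").
     */
    #include <errno.h>

    #include <rte_mbuf.h>
    #include <rte_mempool.h>
    #include <rte_pmd_cnxk_mempool.h>

    static int
    swap_one_mbuf(struct rte_mempool *pool, struct rte_mempool *hwpool)
    {
            struct rte_mbuf *m1, *m2;
            int rc;

            if (!rte_pmd_cnxk_mempool_is_hwpool(hwpool))
                    return -EINVAL;

            /* Range checking must be disabled on both pools before
             * mbufs can be exchanged between them.
             */
            rc = rte_pmd_cnxk_mempool_range_check_disable(pool);
            if (rc < 0)
                    return rc;
            rc = rte_pmd_cnxk_mempool_range_check_disable(hwpool);
            if (rc < 0)
                    return rc;

            m1 = rte_pktmbuf_alloc(pool);
            m2 = rte_pktmbuf_alloc(hwpool);
            if (m1 == NULL || m2 == NULL) {
                    rc = -ENOMEM;
                    goto out;
            }

            /* On success m1 belongs to hwpool and m2 to pool; freeing
             * each mbuf returns it to its new parent pool.
             */
            rc = rte_pmd_cnxk_mempool_mbuf_exchange(m1, m2);
    out:
            rte_pktmbuf_free(m1);
            rte_pktmbuf_free(m2);
            return rc;
    }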
Signed-off-by: Ashwin Sekhar T K <asekhar@marvell.com>
---
 drivers/mempool/cnxk/cn10k_hwpool_ops.c     | 63 ++++++++++++++++++++-
 drivers/mempool/cnxk/cnxk_mempool.h         |  4 ++
 drivers/mempool/cnxk/meson.build            |  1 +
 drivers/mempool/cnxk/rte_pmd_cnxk_mempool.h | 56 ++++++++++++++++++
 drivers/mempool/cnxk/version.map            | 10 ++++
 5 files changed, 133 insertions(+), 1 deletion(-)
 create mode 100644 drivers/mempool/cnxk/rte_pmd_cnxk_mempool.h
 create mode 100644 drivers/mempool/cnxk/version.map

diff --git a/drivers/mempool/cnxk/cn10k_hwpool_ops.c b/drivers/mempool/cnxk/cn10k_hwpool_ops.c
index 9238765155..b234481ec1 100644
--- a/drivers/mempool/cnxk/cn10k_hwpool_ops.c
+++ b/drivers/mempool/cnxk/cn10k_hwpool_ops.c
@@ -3,11 +3,14 @@
  */

 #include <rte_mempool.h>
+#include <rte_pmd_cnxk_mempool.h>

 #include "roc_api.h"
 #include "cnxk_mempool.h"

-#define CN10K_HWPOOL_MEM_SIZE 128
+#define CN10K_HWPOOL_MEM_SIZE	 128
+#define CN10K_NPA_IOVA_RANGE_MIN 0x0
+#define CN10K_NPA_IOVA_RANGE_MAX 0x1fffffffffff80

 static int __rte_hot
 cn10k_hwpool_enq(struct rte_mempool *hp, void *const *obj_table, unsigned int n)
@@ -197,6 +200,64 @@ cn10k_hwpool_populate(struct rte_mempool *hp, unsigned int max_objs,
 	return hp->size;
 }

+int
+rte_pmd_cnxk_mempool_mbuf_exchange(struct rte_mbuf *m1, struct rte_mbuf *m2)
+{
+	struct rte_mempool_objhdr *hdr;
+
+#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
+	if (!(CNXK_MEMPOOL_FLAGS(m1->pool) & CNXK_MEMPOOL_F_NO_RANGE_CHECK) ||
+	    !(CNXK_MEMPOOL_FLAGS(m2->pool) & CNXK_MEMPOOL_F_NO_RANGE_CHECK)) {
+		plt_err("Pools must have range check disabled");
+		return -EINVAL;
+	}
+	if (m1->pool->elt_size != m2->pool->elt_size ||
+	    m1->pool->header_size != m2->pool->header_size ||
+	    m1->pool->trailer_size != m2->pool->trailer_size ||
+	    m1->pool->size != m2->pool->size) {
+		plt_err("Parameters of pools involved in exchange do not match");
+		return -EINVAL;
+	}
+#endif
+	RTE_SWAP(m1->pool, m2->pool);
+	hdr = rte_mempool_get_header(m1);
+	hdr->mp = m1->pool;
+	hdr = rte_mempool_get_header(m2);
+	hdr->mp = m2->pool;
+	return 0;
+}
+
+int
+rte_pmd_cnxk_mempool_is_hwpool(struct rte_mempool *mp)
+{
+	return !!(CNXK_MEMPOOL_FLAGS(mp) & CNXK_MEMPOOL_F_IS_HWPOOL);
+}
+
+int
+rte_pmd_cnxk_mempool_range_check_disable(struct rte_mempool *mp)
+{
+	if (rte_pmd_cnxk_mempool_is_hwpool(mp)) {
+		/* Disable only aura range check for hardware pools */
+		roc_npa_aura_op_range_set(mp->pool_id, CN10K_NPA_IOVA_RANGE_MIN,
+					  CN10K_NPA_IOVA_RANGE_MAX);
+		CNXK_MEMPOOL_SET_FLAGS(mp, CNXK_MEMPOOL_F_NO_RANGE_CHECK);
+		mp = CNXK_MEMPOOL_CONFIG(mp);
+	}
+
+	/* No need to disable again if already disabled */
+	if (CNXK_MEMPOOL_FLAGS(mp) & CNXK_MEMPOOL_F_NO_RANGE_CHECK)
+		return 0;
+
+	/* Disable aura/pool range check */
+	roc_npa_pool_op_range_set(mp->pool_id, CN10K_NPA_IOVA_RANGE_MIN,
+				  CN10K_NPA_IOVA_RANGE_MAX);
+	if (roc_npa_pool_range_update_check(mp->pool_id) < 0)
+		return -EBUSY;
+
+	CNXK_MEMPOOL_SET_FLAGS(mp, CNXK_MEMPOOL_F_NO_RANGE_CHECK);
+	return 0;
+}
+
 static struct rte_mempool_ops cn10k_hwpool_ops = {
 	.name = "cn10k_hwpool_ops",
 	.alloc = cn10k_hwpool_alloc,
diff --git a/drivers/mempool/cnxk/cnxk_mempool.h b/drivers/mempool/cnxk/cnxk_mempool.h
index 4ca05d53e1..669e617952 100644
--- a/drivers/mempool/cnxk/cnxk_mempool.h
+++ b/drivers/mempool/cnxk/cnxk_mempool.h
@@ -20,6 +20,10 @@ enum cnxk_mempool_flags {
 	 * This flag is set by the driver.
 	 */
 	CNXK_MEMPOOL_F_IS_HWPOOL = RTE_BIT64(2),
+	/* This flag indicates whether range check has been disabled for
+	 * the pool. This flag is set by the driver.
+	 */
+	CNXK_MEMPOOL_F_NO_RANGE_CHECK = RTE_BIT64(3),
 };

 #define CNXK_MEMPOOL_F_MASK 0xFUL

diff --git a/drivers/mempool/cnxk/meson.build b/drivers/mempool/cnxk/meson.build
index ce152bedd2..e388cce26a 100644
--- a/drivers/mempool/cnxk/meson.build
+++ b/drivers/mempool/cnxk/meson.build
@@ -17,5 +17,6 @@ sources = files(
         'cn10k_hwpool_ops.c',
 )

+headers = files('rte_pmd_cnxk_mempool.h')
 deps += ['eal', 'mbuf', 'kvargs', 'bus_pci', 'common_cnxk', 'mempool']
 require_iova_in_mbuf = false
diff --git a/drivers/mempool/cnxk/rte_pmd_cnxk_mempool.h b/drivers/mempool/cnxk/rte_pmd_cnxk_mempool.h
new file mode 100644
index 0000000000..b040d0414f
--- /dev/null
+++ b/drivers/mempool/cnxk/rte_pmd_cnxk_mempool.h
@@ -0,0 +1,56 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2023 Marvell.
+ */
+
+/**
+ * @file rte_pmd_cnxk_mempool.h
+ * Marvell CNXK Mempool PMD specific functions.
+ *
+ **/
+
+#ifndef _PMD_CNXK_MEMPOOL_H_
+#define _PMD_CNXK_MEMPOOL_H_
+
+#include <rte_mbuf.h>
+#include <rte_mempool.h>
+
+/**
+ * Exchange mbufs between two mempools.
+ *
+ * @param m1
+ *   First mbuf
+ * @param m2
+ *   Second mbuf
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise.
+ */
+__rte_experimental
+int rte_pmd_cnxk_mempool_mbuf_exchange(struct rte_mbuf *m1,
+				       struct rte_mbuf *m2);
+
+/**
+ * Check whether a mempool is a hwpool.
+ *
+ * @param mp
+ *   Mempool to check.
+ *
+ * @return
+ *   1 if mp is a hwpool, 0 otherwise.
+ */
+__rte_experimental
+int rte_pmd_cnxk_mempool_is_hwpool(struct rte_mempool *mp);
+
+/**
+ * Disable buffer address range check on a mempool.
+ *
+ * @param mp
+ *   Mempool to disable range check on.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise.
+ */
+__rte_experimental
+int rte_pmd_cnxk_mempool_range_check_disable(struct rte_mempool *mp);
+
+#endif /* _PMD_CNXK_MEMPOOL_H_ */
diff --git a/drivers/mempool/cnxk/version.map b/drivers/mempool/cnxk/version.map
new file mode 100644
index 0000000000..755731e3b5
--- /dev/null
+++ b/drivers/mempool/cnxk/version.map
@@ -0,0 +1,10 @@
+DPDK_23 {
+	local: *;
+};
+
+EXPERIMENTAL {
+	global:
+
+	rte_pmd_cnxk_mempool_is_hwpool;
+	rte_pmd_cnxk_mempool_mbuf_exchange;
+	rte_pmd_cnxk_mempool_range_check_disable;
+};
--
2.25.1