From: Jerin Jacob
Date: Wed, 24 May 2023 15:03:49 +0530
Subject: Re: [PATCH v2 5/5] mempool/cnxk: add support for exchanging mbufs between pools
To: Ashwin Sekhar T K
Cc: dev@dpdk.org, Pavan Nikhilesh, jerinj@marvell.com, skori@marvell.com, skoteshwar@marvell.com, kirankumark@marvell.com, psatheesh@marvell.com, anoobj@marvell.com, gakhil@marvell.com, hkalra@marvell.com, ndabilpuram@marvell.com
In-Reply-To: <20230523090431.717460-5-asekhar@marvell.com>
References: <20230411075528.1125799-1-asekhar@marvell.com> <20230523090431.717460-1-asekhar@marvell.com> <20230523090431.717460-5-asekhar@marvell.com>
List-Id: DPDK patches and discussions <dev.dpdk.org>

On Tue, May 23, 2023 at 6:30 PM Ashwin Sekhar T K wrote:
>
> Add the following cnxk mempool PMD APIs to facilitate exchanging mbufs
> between pools.
> * rte_pmd_cnxk_mempool_is_hwpool() - Allows user to check whether a pool
>   is hwpool or not.
> * rte_pmd_cnxk_mempool_range_check_disable() - Disables range checking on
>   any rte_mempool.
> * rte_pmd_cnxk_mempool_mbuf_exchange() - Exchanges mbufs between any two
>   rte_mempool where the range check is disabled.
>
> Signed-off-by: Ashwin Sekhar T K

Series applied to dpdk-next-net-mrvl/for-next-net.
Thanks

> ---
>  doc/api/doxy-api-index.md                   |  1 +
>  doc/api/doxy-api.conf.in                    |  1 +
>  drivers/mempool/cnxk/cn10k_hwpool_ops.c     | 63 ++++++++++++++++++++-
>  drivers/mempool/cnxk/cnxk_mempool.h         |  4 ++
>  drivers/mempool/cnxk/meson.build            |  1 +
>  drivers/mempool/cnxk/rte_pmd_cnxk_mempool.h | 56 ++++++++++++++++++
>  drivers/mempool/cnxk/version.map            | 10 ++++
>  7 files changed, 135 insertions(+), 1 deletion(-)
>  create mode 100644 drivers/mempool/cnxk/rte_pmd_cnxk_mempool.h
>  create mode 100644 drivers/mempool/cnxk/version.map
>
> diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
> index c709fd48ad..a781b8f408 100644
> --- a/doc/api/doxy-api-index.md
> +++ b/doc/api/doxy-api-index.md
> @@ -49,6 +49,7 @@ The public API headers are grouped by topics:
>    [iavf](@ref rte_pmd_iavf.h),
>    [bnxt](@ref rte_pmd_bnxt.h),
>    [cnxk](@ref rte_pmd_cnxk.h),
> +  [cnxk_mempool](@ref rte_pmd_cnxk_mempool.h),
>    [dpaa](@ref rte_pmd_dpaa.h),
>    [dpaa2](@ref rte_pmd_dpaa2.h),
>    [mlx5](@ref rte_pmd_mlx5.h),
> diff --git a/doc/api/doxy-api.conf.in b/doc/api/doxy-api.conf.in
> index d230a19e1f..7e68e43c64 100644
> --- a/doc/api/doxy-api.conf.in
> +++ b/doc/api/doxy-api.conf.in
> @@ -9,6 +9,7 @@ INPUT                  = @TOPDIR@/doc/api/doxy-api-index.md \
>                           @TOPDIR@/drivers/crypto/scheduler \
>                           @TOPDIR@/drivers/dma/dpaa2 \
>                           @TOPDIR@/drivers/event/dlb2 \
> +                         @TOPDIR@/drivers/mempool/cnxk \
>                           @TOPDIR@/drivers/mempool/dpaa2 \
>                           @TOPDIR@/drivers/net/ark \
>                           @TOPDIR@/drivers/net/bnxt \
> diff --git a/drivers/mempool/cnxk/cn10k_hwpool_ops.c b/drivers/mempool/cnxk/cn10k_hwpool_ops.c
> index 9238765155..b234481ec1 100644
> --- a/drivers/mempool/cnxk/cn10k_hwpool_ops.c
> +++ b/drivers/mempool/cnxk/cn10k_hwpool_ops.c
> @@ -3,11 +3,14 @@
>   */
>
>  #include <rte_mempool.h>
> +#include <rte_pmd_cnxk_mempool.h>
>
>  #include "roc_api.h"
>  #include "cnxk_mempool.h"
>
> -#define CN10K_HWPOOL_MEM_SIZE 128
> +#define CN10K_HWPOOL_MEM_SIZE	 128
> +#define CN10K_NPA_IOVA_RANGE_MIN 0x0
> +#define CN10K_NPA_IOVA_RANGE_MAX 0x1fffffffffff80
>
>  static int __rte_hot
>  cn10k_hwpool_enq(struct rte_mempool *hp, void *const *obj_table, unsigned int n)
> @@ -197,6 +200,64 @@ cn10k_hwpool_populate(struct rte_mempool *hp, unsigned int max_objs,
>  	return hp->size;
>  }
>
> +int
> +rte_pmd_cnxk_mempool_mbuf_exchange(struct rte_mbuf *m1, struct rte_mbuf *m2)
> +{
> +	struct rte_mempool_objhdr *hdr;
> +
> +#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
> +	if (!(CNXK_MEMPOOL_FLAGS(m1->pool) & CNXK_MEMPOOL_F_NO_RANGE_CHECK) ||
> +	    !(CNXK_MEMPOOL_FLAGS(m2->pool) & CNXK_MEMPOOL_F_NO_RANGE_CHECK)) {
> +		plt_err("Pools must have range check disabled");
> +		return -EINVAL;
> +	}
> +	if (m1->pool->elt_size != m2->pool->elt_size ||
> +	    m1->pool->header_size != m2->pool->header_size ||
> +	    m1->pool->trailer_size != m2->pool->trailer_size ||
> +	    m1->pool->size != m2->pool->size) {
> +		plt_err("Parameters of pools involved in exchange does not match");
> +		return -EINVAL;
> +	}
> +#endif
> +	RTE_SWAP(m1->pool, m2->pool);
> +	hdr = rte_mempool_get_header(m1);
> +	hdr->mp = m1->pool;
> +	hdr = rte_mempool_get_header(m2);
> +	hdr->mp = m2->pool;
> +	return 0;
> +}
> +
> +int
> +rte_pmd_cnxk_mempool_is_hwpool(struct rte_mempool *mp)
> +{
> +	return !!(CNXK_MEMPOOL_FLAGS(mp) & CNXK_MEMPOOL_F_IS_HWPOOL);
> +}
> +
> +int
> +rte_pmd_cnxk_mempool_range_check_disable(struct rte_mempool *mp)
> +{
> +	if (rte_pmd_cnxk_mempool_is_hwpool(mp)) {
> +		/* Disable only aura range check for hardware pools */
> +		roc_npa_aura_op_range_set(mp->pool_id, CN10K_NPA_IOVA_RANGE_MIN,
> +					  CN10K_NPA_IOVA_RANGE_MAX);
> +		CNXK_MEMPOOL_SET_FLAGS(mp, CNXK_MEMPOOL_F_NO_RANGE_CHECK);
> +		mp = CNXK_MEMPOOL_CONFIG(mp);
> +	}
> +
> +	/* No need to disable again if already disabled */
> +	if (CNXK_MEMPOOL_FLAGS(mp) & CNXK_MEMPOOL_F_NO_RANGE_CHECK)
> +		return 0;
> +
> +	/* Disable aura/pool range check */
> +	roc_npa_pool_op_range_set(mp->pool_id, CN10K_NPA_IOVA_RANGE_MIN,
> +				  CN10K_NPA_IOVA_RANGE_MAX);
> +	if (roc_npa_pool_range_update_check(mp->pool_id) < 0)
> +		return -EBUSY;
> +
> +	CNXK_MEMPOOL_SET_FLAGS(mp, CNXK_MEMPOOL_F_NO_RANGE_CHECK);
> +	return 0;
> +}
> +
>  static struct rte_mempool_ops cn10k_hwpool_ops = {
>  	.name = "cn10k_hwpool_ops",
>  	.alloc = cn10k_hwpool_alloc,
> diff --git a/drivers/mempool/cnxk/cnxk_mempool.h b/drivers/mempool/cnxk/cnxk_mempool.h
> index 4ca05d53e1..669e617952 100644
> --- a/drivers/mempool/cnxk/cnxk_mempool.h
> +++ b/drivers/mempool/cnxk/cnxk_mempool.h
> @@ -20,6 +20,10 @@ enum cnxk_mempool_flags {
>  	 * This flag is set by the driver.
>  	 */
>  	CNXK_MEMPOOL_F_IS_HWPOOL = RTE_BIT64(2),
> +	/* This flag indicates whether range check has been disabled for
> +	 * the pool. This flag is set by the driver.
> +	 */
> +	CNXK_MEMPOOL_F_NO_RANGE_CHECK = RTE_BIT64(3),
>  };
>
>  #define CNXK_MEMPOOL_F_MASK 0xFUL
> diff --git a/drivers/mempool/cnxk/meson.build b/drivers/mempool/cnxk/meson.build
> index ce152bedd2..e388cce26a 100644
> --- a/drivers/mempool/cnxk/meson.build
> +++ b/drivers/mempool/cnxk/meson.build
> @@ -17,5 +17,6 @@ sources = files(
>          'cn10k_hwpool_ops.c',
>  )
>
> +headers = files('rte_pmd_cnxk_mempool.h')
>  deps += ['eal', 'mbuf', 'kvargs', 'bus_pci', 'common_cnxk', 'mempool']
>  require_iova_in_mbuf = false
> diff --git a/drivers/mempool/cnxk/rte_pmd_cnxk_mempool.h b/drivers/mempool/cnxk/rte_pmd_cnxk_mempool.h
> new file mode 100644
> index 0000000000..ada6e7cd4d
> --- /dev/null
> +++ b/drivers/mempool/cnxk/rte_pmd_cnxk_mempool.h
> @@ -0,0 +1,56 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(C) 2023 Marvell.
> + */
> +
> +/**
> + * @file rte_pmd_cnxk_mempool.h
> + * Marvell CNXK Mempool PMD specific functions.
> + *
> + **/
> +
> +#ifndef _PMD_CNXK_MEMPOOL_H_
> +#define _PMD_CNXK_MEMPOOL_H_
> +
> +#include <rte_mbuf.h>
> +#include <rte_mempool.h>
> +
> +/**
> + * Exchange mbufs between two mempools.
> + *
> + * @param m1
> + *   First mbuf
> + * @param m2
> + *   Second mbuf
> + *
> + * @return
> + *   0 on success, a negative errno value otherwise.
> + */
> +__rte_experimental
> +int rte_pmd_cnxk_mempool_mbuf_exchange(struct rte_mbuf *m1,
> +				       struct rte_mbuf *m2);
> +
> +/**
> + * Check whether a mempool is a hwpool.
> + *
> + * @param mp
> + *   Mempool to check.
> + *
> + * @return
> + *   1 if mp is a hwpool, 0 otherwise.
> + */
> +__rte_experimental
> +int rte_pmd_cnxk_mempool_is_hwpool(struct rte_mempool *mp);
> +
> +/**
> + * Disable buffer address range check on a mempool.
> + *
> + * @param mp
> + *   Mempool to disable range check on.
> + *
> + * @return
> + *   0 on success, a negative errno value otherwise.
> + */
> +__rte_experimental
> +int rte_pmd_cnxk_mempool_range_check_disable(struct rte_mempool *mp);
> +
> +#endif /* _PMD_CNXK_MEMPOOL_H_ */
> diff --git a/drivers/mempool/cnxk/version.map b/drivers/mempool/cnxk/version.map
> new file mode 100644
> index 0000000000..755731e3b5
> --- /dev/null
> +++ b/drivers/mempool/cnxk/version.map
> @@ -0,0 +1,10 @@
> +DPDK_23 {
> +	local: *;
> +};
> +
> +EXPERIMENTAL {
> +	global:
> +	rte_pmd_cnxk_mempool_is_hwpool;
> +	rte_pmd_cnxk_mempool_mbuf_exchange;
> +	rte_pmd_cnxk_mempool_range_check_disable;
> +};
> --
> 2.25.1
>