DPDK patches and discussions
From: Jerin Jacob <jerinjacobk@gmail.com>
To: Ashwin Sekhar T K <asekhar@marvell.com>
Cc: dev@dpdk.org, Pavan Nikhilesh <pbhagavatula@marvell.com>,
	jerinj@marvell.com,  skori@marvell.com, skoteshwar@marvell.com,
	kirankumark@marvell.com,  psatheesh@marvell.com,
	anoobj@marvell.com, gakhil@marvell.com,  hkalra@marvell.com,
	ndabilpuram@marvell.com
Subject: Re: [PATCH v2 5/5] mempool/cnxk: add support for exchanging mbufs between pools
Date: Wed, 24 May 2023 15:03:49 +0530
Message-ID: <CALBAE1PqJ-E1PpSQHwUnYA_LTRBLmt7VUK01xWN3rDrCYCz6=A@mail.gmail.com>
In-Reply-To: <20230523090431.717460-5-asekhar@marvell.com>

On Tue, May 23, 2023 at 6:30 PM Ashwin Sekhar T K <asekhar@marvell.com> wrote:
>
> Add the following cnxk mempool PMD APIs to facilitate exchanging mbufs
> between pools.
>  * rte_pmd_cnxk_mempool_is_hwpool() - Allows the user to check whether a
>    pool is a hwpool or not.
>  * rte_pmd_cnxk_mempool_range_check_disable() - Disables range checking on
>    any rte_mempool.
>  * rte_pmd_cnxk_mempool_mbuf_exchange() - Exchanges mbufs between any two
>    rte_mempools on which the range check has been disabled.
>
> Signed-off-by: Ashwin Sekhar T K <asekhar@marvell.com>

Series applied to dpdk-next-net-mrvl/for-next-net. Thanks
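
For anyone picking this series up later, here is a minimal usage sketch of
the new calls (my illustration, not part of the patch: the pool handles and
error handling are assumptions; only the rte_pmd_cnxk_mempool_* and
rte_pktmbuf_* calls are real APIs). rte_pmd_cnxk_mempool_is_hwpool() merely
tells the two kinds of pools apart and is not exercised here.

#include <errno.h>

#include <rte_mbuf.h>
#include <rte_mempool.h>
#include <rte_pmd_cnxk_mempool.h>

static int
exchange_one_mbuf(struct rte_mempool *pool_a, struct rte_mempool *pool_b)
{
	struct rte_mbuf *m1, *m2;
	int rc;

	/* Range checking must be disabled on both pools before any
	 * exchange is attempted.
	 */
	rc = rte_pmd_cnxk_mempool_range_check_disable(pool_a);
	if (rc == 0)
		rc = rte_pmd_cnxk_mempool_range_check_disable(pool_b);
	if (rc < 0)
		return rc;

	m1 = rte_pktmbuf_alloc(pool_a);
	m2 = rte_pktmbuf_alloc(pool_b);
	if (m1 == NULL || m2 == NULL) {
		rc = -ENOMEM;
		goto out;
	}

	/* On success, m1 now belongs to pool_b and m2 to pool_a, so the
	 * frees below return each buffer to its new pool.
	 */
	rc = rte_pmd_cnxk_mempool_mbuf_exchange(m1, m2);

out:
	/* rte_pktmbuf_free() is a no-op for NULL. */
	rte_pktmbuf_free(m1);
	rte_pktmbuf_free(m2);
	return rc;
}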


> ---
>  doc/api/doxy-api-index.md                   |  1 +
>  doc/api/doxy-api.conf.in                    |  1 +
>  drivers/mempool/cnxk/cn10k_hwpool_ops.c     | 63 ++++++++++++++++++++-
>  drivers/mempool/cnxk/cnxk_mempool.h         |  4 ++
>  drivers/mempool/cnxk/meson.build            |  1 +
>  drivers/mempool/cnxk/rte_pmd_cnxk_mempool.h | 56 ++++++++++++++++++
>  drivers/mempool/cnxk/version.map            | 10 ++++
>  7 files changed, 135 insertions(+), 1 deletion(-)
>  create mode 100644 drivers/mempool/cnxk/rte_pmd_cnxk_mempool.h
>  create mode 100644 drivers/mempool/cnxk/version.map
>
> diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
> index c709fd48ad..a781b8f408 100644
> --- a/doc/api/doxy-api-index.md
> +++ b/doc/api/doxy-api-index.md
> @@ -49,6 +49,7 @@ The public API headers are grouped by topics:
>    [iavf](@ref rte_pmd_iavf.h),
>    [bnxt](@ref rte_pmd_bnxt.h),
>    [cnxk](@ref rte_pmd_cnxk.h),
> +  [cnxk_mempool](@ref rte_pmd_cnxk_mempool.h),
>    [dpaa](@ref rte_pmd_dpaa.h),
>    [dpaa2](@ref rte_pmd_dpaa2.h),
>    [mlx5](@ref rte_pmd_mlx5.h),
> diff --git a/doc/api/doxy-api.conf.in b/doc/api/doxy-api.conf.in
> index d230a19e1f..7e68e43c64 100644
> --- a/doc/api/doxy-api.conf.in
> +++ b/doc/api/doxy-api.conf.in
> @@ -9,6 +9,7 @@ INPUT                   = @TOPDIR@/doc/api/doxy-api-index.md \
>                            @TOPDIR@/drivers/crypto/scheduler \
>                            @TOPDIR@/drivers/dma/dpaa2 \
>                            @TOPDIR@/drivers/event/dlb2 \
> +                          @TOPDIR@/drivers/mempool/cnxk \
>                            @TOPDIR@/drivers/mempool/dpaa2 \
>                            @TOPDIR@/drivers/net/ark \
>                            @TOPDIR@/drivers/net/bnxt \
> diff --git a/drivers/mempool/cnxk/cn10k_hwpool_ops.c b/drivers/mempool/cnxk/cn10k_hwpool_ops.c
> index 9238765155..b234481ec1 100644
> --- a/drivers/mempool/cnxk/cn10k_hwpool_ops.c
> +++ b/drivers/mempool/cnxk/cn10k_hwpool_ops.c
> @@ -3,11 +3,14 @@
>   */
>
>  #include <rte_mempool.h>
> +#include <rte_pmd_cnxk_mempool.h>
>
>  #include "roc_api.h"
>  #include "cnxk_mempool.h"
>
> -#define CN10K_HWPOOL_MEM_SIZE 128
> +#define CN10K_HWPOOL_MEM_SIZE   128
> +#define CN10K_NPA_IOVA_RANGE_MIN 0x0
> +#define CN10K_NPA_IOVA_RANGE_MAX 0x1fffffffffff80
>
>  static int __rte_hot
>  cn10k_hwpool_enq(struct rte_mempool *hp, void *const *obj_table, unsigned int n)
> @@ -197,6 +200,64 @@ cn10k_hwpool_populate(struct rte_mempool *hp, unsigned int max_objs,
>         return hp->size;
>  }
>
> +int
> +rte_pmd_cnxk_mempool_mbuf_exchange(struct rte_mbuf *m1, struct rte_mbuf *m2)
> +{
> +       struct rte_mempool_objhdr *hdr;
> +
> +#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
> +       if (!(CNXK_MEMPOOL_FLAGS(m1->pool) & CNXK_MEMPOOL_F_NO_RANGE_CHECK) ||
> +           !(CNXK_MEMPOOL_FLAGS(m2->pool) & CNXK_MEMPOOL_F_NO_RANGE_CHECK)) {
> +               plt_err("Pools must have range check disabled");
> +               return -EINVAL;
> +       }
> +       if (m1->pool->elt_size != m2->pool->elt_size ||
> +           m1->pool->header_size != m2->pool->header_size ||
> +           m1->pool->trailer_size != m2->pool->trailer_size ||
> +           m1->pool->size != m2->pool->size) {
> +               plt_err("Parameters of pools involved in exchange do not match");
> +               return -EINVAL;
> +       }
> +#endif
> +       RTE_SWAP(m1->pool, m2->pool);
> +       hdr = rte_mempool_get_header(m1);
> +       hdr->mp = m1->pool;
> +       hdr = rte_mempool_get_header(m2);
> +       hdr->mp = m2->pool;
> +       return 0;
> +}
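
A side note for readers (my own illustration, not part of the patch): since
the code above updates both the mbuf's pool pointer and the mempool object
header, the two views stay consistent after a successful exchange, which is
what lets a later rte_pktmbuf_free() land in the new pool. A check along
these lines should hold, with m1/m2 as handed to the exchange call:

#include <rte_debug.h>
#include <rte_mbuf.h>
#include <rte_mempool.h>

static void
check_after_exchange(struct rte_mbuf *m1, struct rte_mbuf *m2)
{
	/* Object header and mbuf pool pointer agree for both buffers. */
	RTE_VERIFY(rte_mempool_from_obj(m1) == m1->pool);
	RTE_VERIFY(rte_mempool_from_obj(m2) == m2->pool);
}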
> +
> +int
> +rte_pmd_cnxk_mempool_is_hwpool(struct rte_mempool *mp)
> +{
> +       return !!(CNXK_MEMPOOL_FLAGS(mp) & CNXK_MEMPOOL_F_IS_HWPOOL);
> +}
> +
> +int
> +rte_pmd_cnxk_mempool_range_check_disable(struct rte_mempool *mp)
> +{
> +       if (rte_pmd_cnxk_mempool_is_hwpool(mp)) {
> +               /* Disable only aura range check for hardware pools */
> +               roc_npa_aura_op_range_set(mp->pool_id, CN10K_NPA_IOVA_RANGE_MIN,
> +                                         CN10K_NPA_IOVA_RANGE_MAX);
> +               CNXK_MEMPOOL_SET_FLAGS(mp, CNXK_MEMPOOL_F_NO_RANGE_CHECK);
> +               mp = CNXK_MEMPOOL_CONFIG(mp);
> +       }
> +
> +       /* No need to disable again if already disabled */
> +       if (CNXK_MEMPOOL_FLAGS(mp) & CNXK_MEMPOOL_F_NO_RANGE_CHECK)
> +               return 0;
> +
> +       /* Disable aura/pool range check */
> +       roc_npa_pool_op_range_set(mp->pool_id, CN10K_NPA_IOVA_RANGE_MIN,
> +                                 CN10K_NPA_IOVA_RANGE_MAX);
> +       if (roc_npa_pool_range_update_check(mp->pool_id) < 0)
> +               return -EBUSY;
> +
> +       CNXK_MEMPOOL_SET_FLAGS(mp, CNXK_MEMPOOL_F_NO_RANGE_CHECK);
> +       return 0;
> +}
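
One behaviour worth calling out (my reading of the code above, not
something the patch states explicitly): when mp is a hwpool, the call
disables the aura range check on the hwpool and then falls through to the
rte_mempool the hwpool is attached to, so the attached pool does not need a
separate call. A hedged sketch, with a hypothetical handle name:

#include <rte_mempool.h>
#include <rte_pmd_cnxk_mempool.h>

/* hp is assumed to be a hwpool that was attached to a regular rte_mempool
 * at creation time.
 */
static int
disable_checks_for_pair(struct rte_mempool *hp)
{
	/* Disables the aura range check on hp and the pool range check on
	 * the rte_mempool hp is attached to, in one call.
	 */
	return rte_pmd_cnxk_mempool_range_check_disable(hp);
}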
> +
>  static struct rte_mempool_ops cn10k_hwpool_ops = {
>         .name = "cn10k_hwpool_ops",
>         .alloc = cn10k_hwpool_alloc,
> diff --git a/drivers/mempool/cnxk/cnxk_mempool.h b/drivers/mempool/cnxk/cnxk_mempool.h
> index 4ca05d53e1..669e617952 100644
> --- a/drivers/mempool/cnxk/cnxk_mempool.h
> +++ b/drivers/mempool/cnxk/cnxk_mempool.h
> @@ -20,6 +20,10 @@ enum cnxk_mempool_flags {
>          * This flag is set by the driver.
>          */
>         CNXK_MEMPOOL_F_IS_HWPOOL = RTE_BIT64(2),
> +       /* This flag indicates whether range check has been disabled for
> +        * the pool. This flag is set by the driver.
> +        */
> +       CNXK_MEMPOOL_F_NO_RANGE_CHECK = RTE_BIT64(3),
>  };
>
>  #define CNXK_MEMPOOL_F_MASK 0xFUL
> diff --git a/drivers/mempool/cnxk/meson.build b/drivers/mempool/cnxk/meson.build
> index ce152bedd2..e388cce26a 100644
> --- a/drivers/mempool/cnxk/meson.build
> +++ b/drivers/mempool/cnxk/meson.build
> @@ -17,5 +17,6 @@ sources = files(
>          'cn10k_hwpool_ops.c',
>  )
>
> +headers = files('rte_pmd_cnxk_mempool.h')
>  deps += ['eal', 'mbuf', 'kvargs', 'bus_pci', 'common_cnxk', 'mempool']
>  require_iova_in_mbuf = false
> diff --git a/drivers/mempool/cnxk/rte_pmd_cnxk_mempool.h b/drivers/mempool/cnxk/rte_pmd_cnxk_mempool.h
> new file mode 100644
> index 0000000000..ada6e7cd4d
> --- /dev/null
> +++ b/drivers/mempool/cnxk/rte_pmd_cnxk_mempool.h
> @@ -0,0 +1,56 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(C) 2023 Marvell.
> + */
> +
> +/**
> + * @file rte_pmd_cnxk_mempool.h
> + * Marvell CNXK Mempool PMD specific functions.
> + *
> + **/
> +
> +#ifndef _PMD_CNXK_MEMPOOL_H_
> +#define _PMD_CNXK_MEMPOOL_H_
> +
> +#include <rte_mbuf.h>
> +#include <rte_mempool.h>
> +
> +/**
> + * Exchange mbufs between two mempools.
> + *
> + * @param m1
> + *   First mbuf
> + * @param m2
> + *   Second mbuf
> + *
> + * @return
> + *   0 on success, a negative errno value otherwise.
> + */
> +__rte_experimental
> +int rte_pmd_cnxk_mempool_mbuf_exchange(struct rte_mbuf *m1,
> +                                      struct rte_mbuf *m2);
> +
> +/**
> + * Check whether a mempool is a hwpool.
> + *
> + * @param mp
> + *   Mempool to check.
> + *
> + * @return
> + *   1 if mp is a hwpool, 0 otherwise.
> + */
> +__rte_experimental
> +int rte_pmd_cnxk_mempool_is_hwpool(struct rte_mempool *mp);
> +
> +/**
> + * Disable buffer address range check on a mempool.
> + *
> + * @param mp
> + *   Mempool to disable range check on.
> + *
> + * @return
> + *   0 on success, a negative errno value otherwise.
> + */
> +__rte_experimental
> +int rte_pmd_cnxk_mempool_range_check_disable(struct rte_mempool *mp);
> +
> +#endif /* _PMD_CNXK_MEMPOOL_H_ */
> diff --git a/drivers/mempool/cnxk/version.map b/drivers/mempool/cnxk/version.map
> new file mode 100644
> index 0000000000..755731e3b5
> --- /dev/null
> +++ b/drivers/mempool/cnxk/version.map
> @@ -0,0 +1,10 @@
> + DPDK_23 {
> +       local: *;
> + };
> +
> + EXPERIMENTAL {
> +       global:
> +       rte_pmd_cnxk_mempool_is_hwpool;
> +       rte_pmd_cnxk_mempool_mbuf_exchange;
> +       rte_pmd_cnxk_mempool_range_check_disable;
> + };
> --
> 2.25.1
>

