DPDK patches and discussions
* [PATCH 0/6] mlx5: external RxQ support
@ 2022-02-22 21:04 Michael Baum
  2022-02-22 21:04 ` [PATCH 1/6] common/mlx5: glue device and PD importation Michael Baum
                   ` (6 more replies)
  0 siblings, 7 replies; 26+ messages in thread
From: Michael Baum @ 2022-02-22 21:04 UTC (permalink / raw)
  To: dev; +Cc: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko

These patches add support for external Rx queues.
An external queue is a queue managed by a process external to the PMD,
which uses the PMD only to generate its flow rules.

For the hardware to allow the DPDK process to set rules for it, the
process needs to use the same PD as the external process. In addition,
queue indexes in hardware are represented by 32 bits, whereas rte_flow
queue indexes are represented by 16 bits, so the processes need to
share a mapping between the indexes.

These patches allow the external process to provide devargs that enable
importing its context and PD instead of preparing new ones. In addition,
an API is provided for mapping between the queue indexes.

Depends-on: series-21791 ("refactore mlx5 guides")

Michael Baum (6):
  common/mlx5: glue device and PD importation
  common/mlx5: add remote PD and CTX support
  net/mlx5: optimize RxQ/TxQ control structure
  net/mlx5: add external RxQ mapping API
  net/mlx5: support queue/RSS action for external RxQ
  app/testpmd: add test for external RxQ

 app/test-pmd/cmdline.c                       | 157 +++++++++++
 app/test-pmd/meson.build                     |   3 +
 doc/guides/nics/mlx5.rst                     |   1 +
 doc/guides/platform/mlx5.rst                 |  37 ++-
 doc/guides/rel_notes/release_22_03.rst       |   6 +
 doc/guides/testpmd_app_ug/testpmd_funcs.rst  |  16 ++
 drivers/common/mlx5/linux/meson.build        |   2 +
 drivers/common/mlx5/linux/mlx5_common_os.c   | 227 ++++++++++++++--
 drivers/common/mlx5/linux/mlx5_common_os.h   |   6 -
 drivers/common/mlx5/linux/mlx5_glue.c        |  41 +++
 drivers/common/mlx5/linux/mlx5_glue.h        |   4 +
 drivers/common/mlx5/mlx5_common.c            |  62 ++++-
 drivers/common/mlx5/mlx5_common.h            |  28 +-
 drivers/common/mlx5/version.map              |   4 +
 drivers/common/mlx5/windows/mlx5_common_os.c |  52 +++-
 drivers/common/mlx5/windows/mlx5_common_os.h |   1 -
 drivers/net/mlx5/linux/mlx5_os.c             |  18 ++
 drivers/net/mlx5/mlx5.c                      |   6 +
 drivers/net/mlx5/mlx5.h                      |   1 +
 drivers/net/mlx5/mlx5_defs.h                 |   3 +
 drivers/net/mlx5/mlx5_devx.c                 |  52 ++--
 drivers/net/mlx5/mlx5_ethdev.c               |  18 +-
 drivers/net/mlx5/mlx5_flow.c                 |  43 ++--
 drivers/net/mlx5/mlx5_flow_dv.c              |  14 +-
 drivers/net/mlx5/mlx5_rx.h                   |  49 +++-
 drivers/net/mlx5/mlx5_rxq.c                  | 258 +++++++++++++++++--
 drivers/net/mlx5/mlx5_trigger.c              |  36 +--
 drivers/net/mlx5/mlx5_tx.h                   |   7 +-
 drivers/net/mlx5/mlx5_txq.c                  |  14 +-
 drivers/net/mlx5/rte_pmd_mlx5.h              |  50 +++-
 drivers/net/mlx5/version.map                 |   3 +
 31 files changed, 1047 insertions(+), 172 deletions(-)

-- 
2.25.1



* [PATCH 1/6] common/mlx5: glue device and PD importation
  2022-02-22 21:04 [PATCH 0/6] mlx5: external RxQ support Michael Baum
@ 2022-02-22 21:04 ` Michael Baum
  2022-02-22 21:04 ` [PATCH 2/6] common/mlx5: add remote PD and CTX support Michael Baum
                   ` (5 subsequent siblings)
  6 siblings, 0 replies; 26+ messages in thread
From: Michael Baum @ 2022-02-22 21:04 UTC (permalink / raw)
  To: dev; +Cc: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko

Add support for the rdma-core API to import a device.
This API takes an ibv_context file descriptor and returns an
ibv_context pointer associated with the given file descriptor.
Also add support for the rdma-core API to import a PD.
This API takes an ibv_context and a PD handle and returns a protection
domain (PD) associated with the given handle in the given context.

Signed-off-by: Michael Baum <michaelba@nvidia.com>
---
 drivers/common/mlx5/linux/meson.build |  2 ++
 drivers/common/mlx5/linux/mlx5_glue.c | 41 +++++++++++++++++++++++++++
 drivers/common/mlx5/linux/mlx5_glue.h |  4 +++
 3 files changed, 47 insertions(+)

diff --git a/drivers/common/mlx5/linux/meson.build b/drivers/common/mlx5/linux/meson.build
index 4c7b53b9bd..ed48245c67 100644
--- a/drivers/common/mlx5/linux/meson.build
+++ b/drivers/common/mlx5/linux/meson.build
@@ -202,6 +202,8 @@ has_sym_args = [
             'mlx5dv_dr_domain_allow_duplicate_rules' ],
         [ 'HAVE_MLX5_IBV_REG_MR_IOVA', 'infiniband/verbs.h',
             'ibv_reg_mr_iova' ],
+        [ 'HAVE_MLX5_IBV_IMPORT_CTX_PD_AND_MR', 'infiniband/verbs.h',
+            'ibv_import_device' ],
 ]
 config = configuration_data()
 foreach arg:has_sym_args
diff --git a/drivers/common/mlx5/linux/mlx5_glue.c b/drivers/common/mlx5/linux/mlx5_glue.c
index bc6622053f..450dd6a06a 100644
--- a/drivers/common/mlx5/linux/mlx5_glue.c
+++ b/drivers/common/mlx5/linux/mlx5_glue.c
@@ -34,6 +34,32 @@ mlx5_glue_dealloc_pd(struct ibv_pd *pd)
 	return ibv_dealloc_pd(pd);
 }
 
+static struct ibv_pd *
+mlx5_glue_import_pd(struct ibv_context *context, uint32_t pd_handle)
+{
+#ifdef HAVE_MLX5_IBV_IMPORT_CTX_PD_AND_MR
+	return ibv_import_pd(context, pd_handle);
+#else
+	(void)context;
+	(void)pd_handle;
+	errno = ENOTSUP;
+	return NULL;
+#endif
+}
+
+static int
+mlx5_glue_unimport_pd(struct ibv_pd *pd)
+{
+#ifdef HAVE_MLX5_IBV_IMPORT_CTX_PD_AND_MR
+	ibv_unimport_pd(pd);
+	return 0;
+#else
+	(void)pd;
+	errno = ENOTSUP;
+	return -errno;
+#endif
+}
+
 static struct ibv_device **
 mlx5_glue_get_device_list(int *num_devices)
 {
@@ -52,6 +78,18 @@ mlx5_glue_open_device(struct ibv_device *device)
 	return ibv_open_device(device);
 }
 
+static struct ibv_context *
+mlx5_glue_import_device(int cmd_fd)
+{
+#ifdef HAVE_MLX5_IBV_IMPORT_CTX_PD_AND_MR
+	return ibv_import_device(cmd_fd);
+#else
+	(void)cmd_fd;
+	errno = ENOTSUP;
+	return NULL;
+#endif
+}
+
 static int
 mlx5_glue_close_device(struct ibv_context *context)
 {
@@ -1402,9 +1440,12 @@ const struct mlx5_glue *mlx5_glue = &(const struct mlx5_glue) {
 	.fork_init = mlx5_glue_fork_init,
 	.alloc_pd = mlx5_glue_alloc_pd,
 	.dealloc_pd = mlx5_glue_dealloc_pd,
+	.import_pd = mlx5_glue_import_pd,
+	.unimport_pd = mlx5_glue_unimport_pd,
 	.get_device_list = mlx5_glue_get_device_list,
 	.free_device_list = mlx5_glue_free_device_list,
 	.open_device = mlx5_glue_open_device,
+	.import_device = mlx5_glue_import_device,
 	.close_device = mlx5_glue_close_device,
 	.query_device = mlx5_glue_query_device,
 	.query_device_ex = mlx5_glue_query_device_ex,
diff --git a/drivers/common/mlx5/linux/mlx5_glue.h b/drivers/common/mlx5/linux/mlx5_glue.h
index 4e6d31f263..c4903a6dce 100644
--- a/drivers/common/mlx5/linux/mlx5_glue.h
+++ b/drivers/common/mlx5/linux/mlx5_glue.h
@@ -151,9 +151,13 @@ struct mlx5_glue {
 	int (*fork_init)(void);
 	struct ibv_pd *(*alloc_pd)(struct ibv_context *context);
 	int (*dealloc_pd)(struct ibv_pd *pd);
+	struct ibv_pd *(*import_pd)(struct ibv_context *context,
+				    uint32_t pd_handle);
+	int (*unimport_pd)(struct ibv_pd *pd);
 	struct ibv_device **(*get_device_list)(int *num_devices);
 	void (*free_device_list)(struct ibv_device **list);
 	struct ibv_context *(*open_device)(struct ibv_device *device);
+	struct ibv_context *(*import_device)(int cmd_fd);
 	int (*close_device)(struct ibv_context *context);
 	int (*query_device)(struct ibv_context *context,
 			    struct ibv_device_attr *device_attr);
-- 
2.25.1



* [PATCH 2/6] common/mlx5: add remote PD and CTX support
  2022-02-22 21:04 [PATCH 0/6] mlx5: external RxQ support Michael Baum
  2022-02-22 21:04 ` [PATCH 1/6] common/mlx5: glue device and PD importation Michael Baum
@ 2022-02-22 21:04 ` Michael Baum
  2022-02-22 21:04 ` [PATCH 3/6] net/mlx5: optimize RxQ/TxQ control structure Michael Baum
                   ` (4 subsequent siblings)
  6 siblings, 0 replies; 26+ messages in thread
From: Michael Baum @ 2022-02-22 21:04 UTC (permalink / raw)
  To: dev; +Cc: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko

Add an option to probe the common device using the import CTX/PD
functions instead of the create functions.
This option requires passing the context FD and the PD handle as
devargs.

This sharing can be useful for applications that use the PMD for only
some operations. For example, an application that creates its queues
itself and uses the PMD just to configure flow rules.

Signed-off-by: Michael Baum <michaelba@nvidia.com>
---
 doc/guides/platform/mlx5.rst                 |  37 ++-
 drivers/common/mlx5/linux/mlx5_common_os.c   | 227 +++++++++++++++++--
 drivers/common/mlx5/linux/mlx5_common_os.h   |   6 -
 drivers/common/mlx5/mlx5_common.c            |  62 +++--
 drivers/common/mlx5/mlx5_common.h            |  28 ++-
 drivers/common/mlx5/version.map              |   4 +
 drivers/common/mlx5/windows/mlx5_common_os.c |  52 ++++-
 drivers/common/mlx5/windows/mlx5_common_os.h |   1 -
 8 files changed, 366 insertions(+), 51 deletions(-)

diff --git a/doc/guides/platform/mlx5.rst b/doc/guides/platform/mlx5.rst
index a8553405ce..c928bf1265 100644
--- a/doc/guides/platform/mlx5.rst
+++ b/doc/guides/platform/mlx5.rst
@@ -81,6 +81,12 @@ Limitations
 - On Windows, only ``eth`` and ``crypto`` are supported.
 
 
+Features
+--------
+
+- Remote PD and CTX - Linux only.
+
+
 .. _mlx5_common_compilation:
 
 Compilation Prerequisites
@@ -613,4 +619,33 @@ and below are the arguments supported by the common mlx5 layer.
 
   If ``sq_db_nc`` is omitted, the preset (if any) environment variable
   "MLX5_SHUT_UP_BF" value is used. If there is no "MLX5_SHUT_UP_BF", the
-  default ``sq_db_nc`` value is zero for ARM64 hosts and one for others.
\ No newline at end of file
+  default ``sq_db_nc`` value is zero for ARM64 hosts and one for others.
+
+- ``cmd_fd`` parameter [int]
+
+  File descriptor of ibv_context created outside the PMD.
+  PMD will use this FD to import remote CTX. The ``cmd_fd`` is obtained from
+  the ibv_context cmd_fd member, which must be dup'd before being passed.
+  This parameter is valid only if ``pd_handle`` parameter is specified.
+
+  By default, the PMD will ignore this parameter and create a new ibv_context.
+
+  .. note::
+
+     When the FD comes from another process, it is the user's responsibility to
+     share the FD between the processes (e.g. via SCM_RIGHTS).
+
+- ``pd_handle`` parameter [int]
+
+  Protection domain handle of ibv_pd created outside the PMD.
+  PMD will use this handle to import remote PD. The ``pd_handle`` can be
+  achieved from the original PD by getting its ibv_pd->handle member value.
+  This parameter is valid only if ``cmd_fd`` parameter is specified, and its
+  value must be a valid kernel handle for a PD object in the context represented
+  by given ``cmd_fd``.
+
+  By default, the PMD will ignore this parameter and allocate a new PD.
+
+  .. note::
+
+     The ibv_pd->handle member is different from the mlx5dv_pd->pdn member.
\ No newline at end of file
diff --git a/drivers/common/mlx5/linux/mlx5_common_os.c b/drivers/common/mlx5/linux/mlx5_common_os.c
index a752d79e8e..21c970fd9e 100644
--- a/drivers/common/mlx5/linux/mlx5_common_os.c
+++ b/drivers/common/mlx5/linux/mlx5_common_os.c
@@ -408,27 +408,128 @@ mlx5_glue_constructor(void)
 }
 
 /**
- * Allocate Protection Domain object and extract its pdn using DV API.
+ * Validate user arguments for remote PD and CTX.
+ *
+ * @param config
+ *   Pointer to device configuration structure.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+int
+mlx5_os_remote_pd_and_ctx_validate(struct mlx5_common_dev_config *config)
+{
+	int device_fd = config->device_fd;
+	int pd_handle = config->pd_handle;
+
+#ifdef HAVE_MLX5_IBV_IMPORT_CTX_PD_AND_MR
+	if (device_fd == MLX5_ARG_UNSET && pd_handle != MLX5_ARG_UNSET) {
+		DRV_LOG(ERR, "Remote PD without CTX is not supported.");
+		rte_errno = EINVAL;
+		return -rte_errno;
+	}
+	if (device_fd != MLX5_ARG_UNSET && pd_handle == MLX5_ARG_UNSET) {
+		DRV_LOG(ERR, "Remote CTX without PD is not supported.");
+		rte_errno = EINVAL;
+		return -rte_errno;
+	}
+	DRV_LOG(DEBUG, "Remote PD and CTX is supported: (cmd_fd=%d, "
+		"pd_handle=%d).", device_fd, pd_handle);
+#else
+	if (pd_handle != MLX5_ARG_UNSET || device_fd != MLX5_ARG_UNSET) {
+		DRV_LOG(ERR,
+			"Remote PD and CTX is not supported - maybe old rdma-core version?");
+		rte_errno = ENOTSUP;
+		return -rte_errno;
+	}
+#endif
+	return 0;
+}
+
+/**
+ * Release Protection Domain object.
  *
  * @param[out] cdev
  *   Pointer to the mlx5 device.
  *
  * @return
- *   0 on success, a negative errno value otherwise and rte_errno is set.
+ *   0 on success, a negative errno value otherwise.
  */
 int
+mlx5_os_pd_release(struct mlx5_common_device *cdev)
+{
+	if (cdev->config.pd_handle == MLX5_ARG_UNSET)
+		return mlx5_glue->dealloc_pd(cdev->pd);
+	else
+		return mlx5_glue->unimport_pd(cdev->pd);
+}
+
+/**
+ * Allocate Protection Domain object.
+ *
+ * @param[out] cdev
+ *   Pointer to the mlx5 device.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise.
+ */
+static int
 mlx5_os_pd_create(struct mlx5_common_device *cdev)
+{
+	cdev->pd = mlx5_glue->alloc_pd(cdev->ctx);
+	if (cdev->pd == NULL) {
+		DRV_LOG(ERR, "Failed to allocate PD: %s", rte_strerror(errno));
+		return errno ? -errno : -ENOMEM;
+	}
+	return 0;
+}
+
+/**
+ * Import Protection Domain object according to given PD handle.
+ *
+ * @param[out] cdev
+ *   Pointer to the mlx5 device.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise.
+ */
+static int
+mlx5_os_pd_import(struct mlx5_common_device *cdev)
+{
+	cdev->pd = mlx5_glue->import_pd(cdev->ctx, cdev->config.pd_handle);
+	if (cdev->pd == NULL) {
+		DRV_LOG(ERR, "Failed to import PD using handle=%d: %s",
+			cdev->config.pd_handle, rte_strerror(errno));
+		return errno ? -errno : -ENOMEM;
+	}
+	return 0;
+}
+
+/**
+ * Prepare Protection Domain object and extract its pdn using DV API.
+ *
+ * @param[out] cdev
+ *   Pointer to the mlx5 device.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+int
+mlx5_os_pd_prepare(struct mlx5_common_device *cdev)
 {
 #ifdef HAVE_IBV_FLOW_DV_SUPPORT
 	struct mlx5dv_obj obj;
 	struct mlx5dv_pd pd_info;
-	int ret;
 #endif
+	int ret;
 
-	cdev->pd = mlx5_glue->alloc_pd(cdev->ctx);
-	if (cdev->pd == NULL) {
-		DRV_LOG(ERR, "Failed to allocate PD.");
-		return errno ? -errno : -ENOMEM;
+	if (cdev->config.pd_handle == MLX5_ARG_UNSET)
+		ret = mlx5_os_pd_create(cdev);
+	else
+		ret = mlx5_os_pd_import(cdev);
+	if (ret) {
+		rte_errno = -ret;
+		return ret;
 	}
 	if (cdev->config.devx == 0)
 		return 0;
@@ -438,15 +539,17 @@ mlx5_os_pd_create(struct mlx5_common_device *cdev)
 	ret = mlx5_glue->dv_init_obj(&obj, MLX5DV_OBJ_PD);
 	if (ret != 0) {
 		DRV_LOG(ERR, "Fail to get PD object info.");
-		mlx5_glue->dealloc_pd(cdev->pd);
+		rte_errno = errno;
+		claim_zero(mlx5_os_pd_release(cdev));
 		cdev->pd = NULL;
-		return -errno;
+		return -rte_errno;
 	}
 	cdev->pdn = pd_info.pdn;
 	return 0;
 #else
 	DRV_LOG(ERR, "Cannot get pdn - no DV support.");
-	return -ENOTSUP;
+	rte_errno = ENOTSUP;
+	return -rte_errno;
 #endif /* HAVE_IBV_FLOW_DV_SUPPORT */
 }
 
@@ -648,28 +751,28 @@ mlx5_restore_doorbell_mapping_env(int value)
 /**
  * Function API to open IB device.
  *
- *
  * @param cdev
  *   Pointer to the mlx5 device.
  * @param classes
  *   Chosen classes come from device arguments.
  *
  * @return
- *   0 on success, a negative errno value otherwise and rte_errno is set.
+ *   Pointer to ibv_context on success, NULL otherwise and rte_errno is set.
  */
-int
-mlx5_os_open_device(struct mlx5_common_device *cdev, uint32_t classes)
+static struct ibv_context *
+mlx5_open_device(struct mlx5_common_device *cdev, uint32_t classes)
 {
 	struct ibv_device *ibv;
 	struct ibv_context *ctx = NULL;
 	int dbmap_env;
 
+	MLX5_ASSERT(cdev->config.device_fd == MLX5_ARG_UNSET);
 	if (classes & MLX5_CLASS_VDPA)
 		ibv = mlx5_vdpa_get_ibv_dev(cdev->dev);
 	else
 		ibv = mlx5_os_get_ibv_dev(cdev->dev);
 	if (!ibv)
-		return -rte_errno;
+		return NULL;
 	DRV_LOG(INFO, "Dev information matches for device \"%s\".", ibv->name);
 	/*
 	 * Configure environment variable "MLX5_BF_SHUT_UP" before the device
@@ -682,29 +785,109 @@ mlx5_os_open_device(struct mlx5_common_device *cdev, uint32_t classes)
 	ctx = mlx5_glue->dv_open_device(ibv);
 	if (ctx) {
 		cdev->config.devx = 1;
-		DRV_LOG(DEBUG, "DevX is supported.");
 	} else if (classes == MLX5_CLASS_ETH) {
 		/* The environment variable is still configured. */
 		ctx = mlx5_glue->open_device(ibv);
 		if (ctx == NULL)
 			goto error;
-		DRV_LOG(DEBUG, "DevX is NOT supported.");
 	} else {
 		goto error;
 	}
 	/* The device is created, no need for environment. */
 	mlx5_restore_doorbell_mapping_env(dbmap_env);
-	/* Hint libmlx5 to use PMD allocator for data plane resources */
-	mlx5_set_context_attr(cdev->dev, ctx);
-	cdev->ctx = ctx;
-	return 0;
+	return ctx;
 error:
 	rte_errno = errno ? errno : ENODEV;
 	/* The device creation is failed, no need for environment. */
 	mlx5_restore_doorbell_mapping_env(dbmap_env);
 	DRV_LOG(ERR, "Failed to open IB device \"%s\".", ibv->name);
-	return -rte_errno;
+	return NULL;
 }
+
+/**
+ * Function API to import IB device.
+ *
+ * @param cdev
+ *   Pointer to the mlx5 device.
+ *
+ * @return
+ *   Pointer to ibv_context on success, NULL otherwise and rte_errno is set.
+ */
+static struct ibv_context *
+mlx5_import_device(struct mlx5_common_device *cdev)
+{
+	struct ibv_context *ctx = NULL;
+
+	MLX5_ASSERT(cdev->config.device_fd != MLX5_ARG_UNSET);
+	ctx = mlx5_glue->import_device(cdev->config.device_fd);
+	if (!ctx) {
+		DRV_LOG(ERR, "Failed to import device for fd=%d: %s",
+			cdev->config.device_fd, rte_strerror(errno));
+		rte_errno = errno;
+	}
+	return ctx;
+}
+
+/**
+ * Function API to prepare IB device.
+ *
+ * @param cdev
+ *   Pointer to the mlx5 device.
+ * @param classes
+ *   Chosen classes come from device arguments.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+int
+mlx5_os_open_device(struct mlx5_common_device *cdev, uint32_t classes)
+{
+	struct ibv_context *ctx = NULL;
+
+	if (cdev->config.device_fd == MLX5_ARG_UNSET)
+		ctx = mlx5_open_device(cdev, classes);
+	else
+		ctx = mlx5_import_device(cdev);
+	if (ctx == NULL)
+		return -rte_errno;
+	/* Hint libmlx5 to use PMD allocator for data plane resources */
+	mlx5_set_context_attr(cdev->dev, ctx);
+	cdev->ctx = ctx;
+	return 0;
+}
+
+/**
+ * Query HCA attributes.
+ * For a remote context, it also checks whether DevX is supported.
+ *
+ * @param cdev
+ *   Pointer to mlx5 device structure.
+ *
+ * @return
+ *   0 on success, a negative value otherwise.
+ */
+int
+mlx5_os_query_hca_attr(struct mlx5_common_device *cdev)
+{
+	int ret;
+
+	ret = mlx5_devx_cmd_query_hca_attr(cdev->ctx, &cdev->config.hca_attr);
+	if (ret) {
+		if (cdev->config.device_fd == MLX5_ARG_UNSET ||
+		    cdev->classes_loaded != MLX5_CLASS_ETH) {
+			rte_errno = ENOTSUP;
+			return -rte_errno;
+		}
+		DRV_LOG(DEBUG,
+			"The imported device %s has been opened by Verbs.",
+			mlx5_os_get_ctx_device_name(cdev->ctx));
+		cdev->config.devx = 0;
+		return 0;
+	}
+	cdev->config.devx = 1;
+	return 0;
+}
+
 int
 mlx5_get_device_guid(const struct rte_pci_addr *dev, uint8_t *guid, size_t len)
 {
diff --git a/drivers/common/mlx5/linux/mlx5_common_os.h b/drivers/common/mlx5/linux/mlx5_common_os.h
index 83066e752d..246e8b2784 100644
--- a/drivers/common/mlx5/linux/mlx5_common_os.h
+++ b/drivers/common/mlx5/linux/mlx5_common_os.h
@@ -203,12 +203,6 @@ mlx5_os_get_devx_uar_page_id(void *uar)
 #endif
 }
 
-static inline int
-mlx5_os_dealloc_pd(void *pd)
-{
-	return mlx5_glue->dealloc_pd(pd);
-}
-
 __rte_internal
 static inline void *
 mlx5_os_umem_reg(void *ctx, void *addr, size_t size, uint32_t access)
diff --git a/drivers/common/mlx5/mlx5_common.c b/drivers/common/mlx5/mlx5_common.c
index 8cf391df13..c285803c61 100644
--- a/drivers/common/mlx5/mlx5_common.c
+++ b/drivers/common/mlx5/mlx5_common.c
@@ -24,6 +24,12 @@ uint8_t haswell_broadwell_cpu;
 /* Driver type key for new device global syntax. */
 #define MLX5_DRIVER_KEY "driver"
 
+/* Device parameter to get file descriptor for import device. */
+#define MLX5_DEVICE_FD "cmd_fd"
+
+/* Device parameter to get PD number for import Protection Domain. */
+#define MLX5_PD_HANDLE "pd_handle"
+
 /* Enable extending memsegs when creating a MR. */
 #define MLX5_MR_EXT_MEMSEG_EN "mr_ext_memseg_en"
 
@@ -283,6 +289,10 @@ mlx5_common_args_check_handler(const char *key, const char *val, void *opaque)
 		config->mr_mempool_reg_en = !!tmp;
 	} else if (strcmp(key, MLX5_SYS_MEM_EN) == 0) {
 		config->sys_mem_en = !!tmp;
+	} else if (strcmp(key, MLX5_DEVICE_FD) == 0) {
+		config->device_fd = tmp;
+	} else if (strcmp(key, MLX5_PD_HANDLE) == 0) {
+		config->pd_handle = tmp;
 	}
 	return 0;
 }
@@ -310,6 +320,8 @@ mlx5_common_config_get(struct mlx5_kvargs_ctrl *mkvlist,
 		MLX5_MR_EXT_MEMSEG_EN,
 		MLX5_SYS_MEM_EN,
 		MLX5_MR_MEMPOOL_REG_EN,
+		MLX5_DEVICE_FD,
+		MLX5_PD_HANDLE,
 		NULL,
 	};
 	int ret = 0;
@@ -321,13 +333,19 @@ mlx5_common_config_get(struct mlx5_kvargs_ctrl *mkvlist,
 	config->mr_mempool_reg_en = 1;
 	config->sys_mem_en = 0;
 	config->dbnc = MLX5_ARG_UNSET;
+	config->device_fd = MLX5_ARG_UNSET;
+	config->pd_handle = MLX5_ARG_UNSET;
 	/* Process common parameters. */
 	ret = mlx5_kvargs_process(mkvlist, params,
 				  mlx5_common_args_check_handler, config);
 	if (ret) {
 		rte_errno = EINVAL;
-		ret = -rte_errno;
+		return -rte_errno;
 	}
+	/* Validate user arguments for remote PD and CTX if it is given. */
+	ret = mlx5_os_remote_pd_and_ctx_validate(config);
+	if (ret)
+		return ret;
 	DRV_LOG(DEBUG, "mr_ext_memseg_en is %u.", config->mr_ext_memseg_en);
 	DRV_LOG(DEBUG, "mr_mempool_reg_en is %u.", config->mr_mempool_reg_en);
 	DRV_LOG(DEBUG, "sys_mem_en is %u.", config->sys_mem_en);
@@ -645,7 +663,7 @@ static void
 mlx5_dev_hw_global_release(struct mlx5_common_device *cdev)
 {
 	if (cdev->pd != NULL) {
-		claim_zero(mlx5_os_dealloc_pd(cdev->pd));
+		claim_zero(mlx5_os_pd_release(cdev));
 		cdev->pd = NULL;
 	}
 	if (cdev->ctx != NULL) {
@@ -674,20 +692,25 @@ mlx5_dev_hw_global_prepare(struct mlx5_common_device *cdev, uint32_t classes)
 	ret = mlx5_os_open_device(cdev, classes);
 	if (ret < 0)
 		return ret;
-	/* Allocate Protection Domain object and extract its pdn. */
-	ret = mlx5_os_pd_create(cdev);
+	/*
+	 * When CTX is created by Verbs, query HCA attribute is unsupported.
+	 * When CTX is imported, we cannot know if it is created by DevX or
+	 * Verbs. So, we use query HCA attribute function to check it.
+	 */
+	if (cdev->config.devx || cdev->config.device_fd != MLX5_ARG_UNSET) {
+		/* Query HCA attributes. */
+		ret = mlx5_os_query_hca_attr(cdev);
+		if (ret) {
+			DRV_LOG(ERR, "Unable to read HCA capabilities.");
+			rte_errno = ENOTSUP;
+			goto error;
+		}
+	}
+	DRV_LOG(DEBUG, "DevX is %ssupported.", cdev->config.devx ? "" : "NOT ");
+	/* Prepare Protection Domain object and extract its pdn. */
+	ret = mlx5_os_pd_prepare(cdev);
 	if (ret)
 		goto error;
-	/* All actions taken below are relevant only when DevX is supported */
-	if (cdev->config.devx == 0)
-		return 0;
-	/* Query HCA attributes. */
-	ret = mlx5_devx_cmd_query_hca_attr(cdev->ctx, &cdev->config.hca_attr);
-	if (ret) {
-		DRV_LOG(ERR, "Unable to read HCA capabilities.");
-		rte_errno = ENOTSUP;
-		goto error;
-	}
 	return 0;
 error:
 	mlx5_dev_hw_global_release(cdev);
@@ -826,6 +849,17 @@ mlx5_common_probe_again_args_validate(struct mlx5_common_device *cdev,
 			cdev->dev->name);
 		goto error;
 	}
+	if (cdev->config.device_fd ^ config->device_fd) {
+		DRV_LOG(ERR, "\"cmd_fd\" configuration mismatch for device %s.",
+			cdev->dev->name);
+		goto error;
+	}
+	if (cdev->config.pd_handle ^ config->pd_handle) {
+		DRV_LOG(ERR,
+			"\"pd_handle\" configuration mismatch for device %s.",
+			cdev->dev->name);
+		goto error;
+	}
 	if (cdev->config.sys_mem_en ^ config->sys_mem_en) {
 		DRV_LOG(ERR,
 			"\"sys_mem_en\" configuration mismatch for device %s.",
diff --git a/drivers/common/mlx5/mlx5_common.h b/drivers/common/mlx5/mlx5_common.h
index 49bcea1d91..1911d922e8 100644
--- a/drivers/common/mlx5/mlx5_common.h
+++ b/drivers/common/mlx5/mlx5_common.h
@@ -446,6 +446,8 @@ void mlx5_common_init(void);
 struct mlx5_common_dev_config {
 	struct mlx5_hca_attr hca_attr; /* HCA attributes. */
 	int dbnc; /* Skip doorbell register write barrier. */
+	int device_fd; /* Device file descriptor for importation. */
+	int pd_handle; /* Protection Domain handle for importation.  */
 	unsigned int devx:1; /* Whether devx interface is available or not. */
 	unsigned int sys_mem_en:1; /* The default memory allocator. */
 	unsigned int mr_mempool_reg_en:1;
@@ -465,6 +467,23 @@ struct mlx5_common_device {
 	struct mlx5_common_dev_config config; /* Device configuration. */
 };
 
+/**
+ * Indicates whether PD and CTX are imported from another process,
+ * or created by this process.
+ *
+ * @param cdev
+ *   Pointer to common device.
+ *
+ * @return
+ *   True if PD and CTX are imported from another process, False otherwise.
+ */
+static inline bool
+mlx5_imported_pd_and_ctx(struct mlx5_common_device *cdev)
+{
+	return (cdev->config.device_fd != MLX5_ARG_UNSET &&
+		cdev->config.pd_handle != MLX5_ARG_UNSET);
+}
+
 /**
  * Initialization function for the driver called during device probing.
  */
@@ -554,7 +573,14 @@ mlx5_devx_uar_release(struct mlx5_uar *uar);
 /* mlx5_common_os.c */
 
 int mlx5_os_open_device(struct mlx5_common_device *cdev, uint32_t classes);
-int mlx5_os_pd_create(struct mlx5_common_device *cdev);
+__rte_internal
+int mlx5_os_pd_prepare(struct mlx5_common_device *cdev);
+__rte_internal
+int mlx5_os_pd_release(struct mlx5_common_device *cdev);
+__rte_internal
+int mlx5_os_remote_pd_and_ctx_validate(struct mlx5_common_dev_config *config);
+__rte_internal
+int mlx5_os_query_hca_attr(struct mlx5_common_device *cdev);
 
 /* mlx5 PMD wrapped MR struct. */
 struct mlx5_pmd_wrapped_mr {
diff --git a/drivers/common/mlx5/version.map b/drivers/common/mlx5/version.map
index 1c6153c576..be8846f347 100644
--- a/drivers/common/mlx5/version.map
+++ b/drivers/common/mlx5/version.map
@@ -136,6 +136,10 @@ INTERNAL {
 
 	mlx5_os_umem_dereg;
 	mlx5_os_umem_reg;
+	mlx5_os_pd_prepare;
+	mlx5_os_pd_release;
+	mlx5_os_remote_pd_and_ctx_validate;
+	mlx5_os_query_hca_attr;
 
 	mlx5_os_wrapped_mkey_create;
 	mlx5_os_wrapped_mkey_destroy;
diff --git a/drivers/common/mlx5/windows/mlx5_common_os.c b/drivers/common/mlx5/windows/mlx5_common_os.c
index c3cfc315f2..82f4bb2393 100644
--- a/drivers/common/mlx5/windows/mlx5_common_os.c
+++ b/drivers/common/mlx5/windows/mlx5_common_os.c
@@ -25,21 +25,61 @@ mlx5_glue_constructor(void)
 {
 }
 
+/**
+ * Validate user arguments for remote PD and CTX.
+ *
+ * @param config
+ *   Pointer to device configuration structure.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+int
+mlx5_os_remote_pd_and_ctx_validate(struct mlx5_common_dev_config *config)
+{
+	int device_fd = config->device_fd;
+	int pd_handle = config->pd_handle;
+
+	if (pd_handle != MLX5_ARG_UNSET || device_fd != MLX5_ARG_UNSET) {
+		DRV_LOG(ERR, "Remote PD and CTX is not supported on Windows.");
+		rte_errno = ENOTSUP;
+		return -rte_errno;
+	}
+	return 0;
+}
+
+/**
+ * Query HCA attributes.
+ *
+ * @param cdev
+ *   Pointer to mlx5 device structure.
+ *
+ * @return
+ *   0 on success, a negative value otherwise.
+ */
+int
+mlx5_os_query_hca_attr(struct mlx5_common_device *cdev)
+{
+	return mlx5_devx_cmd_query_hca_attr(cdev->ctx, &cdev->config.hca_attr);
+}
+
 /**
  * Release PD. Releases a given mlx5_pd object
  *
- * @param[in] pd
- *   Pointer to mlx5_pd.
+ * @param[in] cdev
+ *   Pointer to the mlx5 device.
  *
  * @return
  *   Zero if pd is released successfully, negative number otherwise.
  */
 int
-mlx5_os_dealloc_pd(void *pd)
+mlx5_os_pd_release(struct mlx5_common_device *cdev)
 {
+	struct mlx5_pd *pd = cdev->pd;
+
 	if (!pd)
 		return -EINVAL;
-	mlx5_devx_cmd_destroy(((struct mlx5_pd *)pd)->obj);
+	mlx5_devx_cmd_destroy(pd->obj);
 	mlx5_free(pd);
 	return 0;
 }
@@ -47,14 +87,14 @@ mlx5_os_dealloc_pd(void *pd)
 /**
  * Allocate Protection Domain object and extract its pdn using DV API.
  *
- * @param[out] dev
+ * @param[out] cdev
  *   Pointer to the mlx5 device.
  *
  * @return
  *   0 on success, a negative value otherwise.
  */
 int
-mlx5_os_pd_create(struct mlx5_common_device *cdev)
+mlx5_os_pd_prepare(struct mlx5_common_device *cdev)
 {
 	struct mlx5_pd *pd;
 
diff --git a/drivers/common/mlx5/windows/mlx5_common_os.h b/drivers/common/mlx5/windows/mlx5_common_os.h
index 61fc8dd761..ee7973f1ec 100644
--- a/drivers/common/mlx5/windows/mlx5_common_os.h
+++ b/drivers/common/mlx5/windows/mlx5_common_os.h
@@ -248,7 +248,6 @@ mlx5_os_devx_subscribe_devx_event(void *eventc,
 	return -ENOTSUP;
 }
 
-int mlx5_os_dealloc_pd(void *pd);
 __rte_internal
 void *mlx5_os_umem_reg(void *ctx, void *addr, size_t size, uint32_t access);
 __rte_internal
-- 
2.25.1



* [PATCH 3/6] net/mlx5: optimize RxQ/TxQ control structure
  2022-02-22 21:04 [PATCH 0/6] mlx5: external RxQ support Michael Baum
  2022-02-22 21:04 ` [PATCH 1/6] common/mlx5: glue device and PD importation Michael Baum
  2022-02-22 21:04 ` [PATCH 2/6] common/mlx5: add remote PD and CTX support Michael Baum
@ 2022-02-22 21:04 ` Michael Baum
  2022-02-22 21:04 ` [PATCH 4/6] net/mlx5: add external RxQ mapping API Michael Baum
                   ` (3 subsequent siblings)
  6 siblings, 0 replies; 26+ messages in thread
From: Michael Baum @ 2022-02-22 21:04 UTC (permalink / raw)
  To: dev; +Cc: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko

The RxQ/TxQ control structure has a field named type, an enum with
values for standard and hairpin. Its only use is to check whether the
queue is of the hairpin type or the standard type.

This patch replaces it with a boolean field that indicates whether the
queue is a hairpin queue.

Signed-off-by: Michael Baum <michaelba@nvidia.com>
---
 drivers/net/mlx5/mlx5_devx.c    | 26 ++++++++++--------------
 drivers/net/mlx5/mlx5_ethdev.c  |  2 +-
 drivers/net/mlx5/mlx5_flow.c    | 14 ++++++-------
 drivers/net/mlx5/mlx5_flow_dv.c | 14 +++++--------
 drivers/net/mlx5/mlx5_rx.h      | 13 +++---------
 drivers/net/mlx5/mlx5_rxq.c     | 33 +++++++++++-------------------
 drivers/net/mlx5/mlx5_trigger.c | 36 ++++++++++++++++-----------------
 drivers/net/mlx5/mlx5_tx.h      |  7 +------
 drivers/net/mlx5/mlx5_txq.c     | 14 ++++++-------
 9 files changed, 64 insertions(+), 95 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_devx.c b/drivers/net/mlx5/mlx5_devx.c
index f18b18b1a2..154df99251 100644
--- a/drivers/net/mlx5/mlx5_devx.c
+++ b/drivers/net/mlx5/mlx5_devx.c
@@ -88,7 +88,7 @@ mlx5_devx_modify_rq(struct mlx5_rxq_priv *rxq, uint8_t type)
 	default:
 		break;
 	}
-	if (rxq->ctrl->type == MLX5_RXQ_TYPE_HAIRPIN)
+	if (rxq->ctrl->is_hairpin)
 		return mlx5_devx_cmd_modify_rq(rxq->ctrl->obj->rq, &rq_attr);
 	return mlx5_devx_cmd_modify_rq(rxq->devx_rq.rq, &rq_attr);
 }
@@ -162,7 +162,7 @@ mlx5_rxq_devx_obj_release(struct mlx5_rxq_priv *rxq)
 
 	if (rxq_obj == NULL)
 		return;
-	if (rxq_obj->rxq_ctrl->type == MLX5_RXQ_TYPE_HAIRPIN) {
+	if (rxq_obj->rxq_ctrl->is_hairpin) {
 		if (rxq_obj->rq == NULL)
 			return;
 		mlx5_devx_modify_rq(rxq, MLX5_RXQ_MOD_RDY2RST);
@@ -476,7 +476,7 @@ mlx5_rxq_devx_obj_new(struct mlx5_rxq_priv *rxq)
 
 	MLX5_ASSERT(rxq_data);
 	MLX5_ASSERT(tmpl);
-	if (rxq_ctrl->type == MLX5_RXQ_TYPE_HAIRPIN)
+	if (rxq_ctrl->is_hairpin)
 		return mlx5_rxq_obj_hairpin_new(rxq);
 	tmpl->rxq_ctrl = rxq_ctrl;
 	if (rxq_ctrl->irq && !rxq_ctrl->started) {
@@ -583,7 +583,7 @@ mlx5_devx_ind_table_create_rqt_attr(struct rte_eth_dev *dev,
 		struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, queues[i]);
 
 		MLX5_ASSERT(rxq != NULL);
-		if (rxq->ctrl->type == MLX5_RXQ_TYPE_HAIRPIN)
+		if (rxq->ctrl->is_hairpin)
 			rqt_attr->rq_list[i] = rxq->ctrl->obj->rq->id;
 		else
 			rqt_attr->rq_list[i] = rxq->devx_rq.rq->id;
@@ -706,17 +706,13 @@ mlx5_devx_tir_attr_set(struct rte_eth_dev *dev, const uint8_t *rss_key,
 		       int tunnel, struct mlx5_devx_tir_attr *tir_attr)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
-	enum mlx5_rxq_type rxq_obj_type;
+	bool is_hairpin;
 	bool lro = true;
 	uint32_t i;
 
 	/* NULL queues designate drop queue. */
 	if (ind_tbl->queues != NULL) {
-		struct mlx5_rxq_ctrl *rxq_ctrl =
-				mlx5_rxq_ctrl_get(dev, ind_tbl->queues[0]);
-		rxq_obj_type = rxq_ctrl != NULL ? rxq_ctrl->type :
-						  MLX5_RXQ_TYPE_STANDARD;
-
+		is_hairpin = mlx5_rxq_is_hairpin(dev, ind_tbl->queues[0]);
 		/* Enable TIR LRO only if all the queues were configured for. */
 		for (i = 0; i < ind_tbl->queues_n; ++i) {
 			struct mlx5_rxq_data *rxq_i =
@@ -728,7 +724,7 @@ mlx5_devx_tir_attr_set(struct rte_eth_dev *dev, const uint8_t *rss_key,
 			}
 		}
 	} else {
-		rxq_obj_type = priv->drop_queue.rxq->ctrl->type;
+		is_hairpin = priv->drop_queue.rxq->ctrl->is_hairpin;
 	}
 	memset(tir_attr, 0, sizeof(*tir_attr));
 	tir_attr->disp_type = MLX5_TIRC_DISP_TYPE_INDIRECT;
@@ -759,7 +755,7 @@ mlx5_devx_tir_attr_set(struct rte_eth_dev *dev, const uint8_t *rss_key,
 			(!!(hash_fields & MLX5_L4_DST_IBV_RX_HASH)) <<
 			 MLX5_RX_HASH_FIELD_SELECT_SELECTED_FIELDS_L4_DPORT;
 	}
-	if (rxq_obj_type == MLX5_RXQ_TYPE_HAIRPIN)
+	if (is_hairpin)
 		tir_attr->transport_domain = priv->sh->td->id;
 	else
 		tir_attr->transport_domain = priv->sh->tdn;
@@ -932,7 +928,7 @@ mlx5_rxq_devx_obj_drop_create(struct rte_eth_dev *dev)
 		goto error;
 	}
 	rxq_obj->rxq_ctrl = rxq_ctrl;
-	rxq_ctrl->type = MLX5_RXQ_TYPE_STANDARD;
+	rxq_ctrl->is_hairpin = false;
 	rxq_ctrl->sh = priv->sh;
 	rxq_ctrl->obj = rxq_obj;
 	rxq->ctrl = rxq_ctrl;
@@ -1232,7 +1228,7 @@ mlx5_txq_devx_obj_new(struct rte_eth_dev *dev, uint16_t idx)
 	struct mlx5_txq_ctrl *txq_ctrl =
 			container_of(txq_data, struct mlx5_txq_ctrl, txq);
 
-	if (txq_ctrl->type == MLX5_TXQ_TYPE_HAIRPIN)
+	if (txq_ctrl->is_hairpin)
 		return mlx5_txq_obj_hairpin_new(dev, idx);
 #if !defined(HAVE_MLX5DV_DEVX_UAR_OFFSET) && defined(HAVE_INFINIBAND_VERBS_H)
 	DRV_LOG(ERR, "Port %u Tx queue %u cannot create with DevX, no UAR.",
@@ -1369,7 +1365,7 @@ void
 mlx5_txq_devx_obj_release(struct mlx5_txq_obj *txq_obj)
 {
 	MLX5_ASSERT(txq_obj);
-	if (txq_obj->txq_ctrl->type == MLX5_TXQ_TYPE_HAIRPIN) {
+	if (txq_obj->txq_ctrl->is_hairpin) {
 		if (txq_obj->tis)
 			claim_zero(mlx5_devx_cmd_destroy(txq_obj->tis));
 #if defined(HAVE_MLX5DV_DEVX_UAR_OFFSET) || !defined(HAVE_INFINIBAND_VERBS_H)
diff --git a/drivers/net/mlx5/mlx5_ethdev.c b/drivers/net/mlx5/mlx5_ethdev.c
index 72bf8ac914..406761ccf8 100644
--- a/drivers/net/mlx5/mlx5_ethdev.c
+++ b/drivers/net/mlx5/mlx5_ethdev.c
@@ -173,7 +173,7 @@ mlx5_dev_configure_rss_reta(struct rte_eth_dev *dev)
 	for (i = 0, j = 0; i < rxqs_n; i++) {
 		struct mlx5_rxq_ctrl *rxq_ctrl = mlx5_rxq_ctrl_get(dev, i);
 
-		if (rxq_ctrl && rxq_ctrl->type == MLX5_RXQ_TYPE_STANDARD)
+		if (rxq_ctrl && !rxq_ctrl->is_hairpin)
 			rss_queue_arr[j++] = i;
 	}
 	rss_queue_n = j;
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index 1c3f648491..5e8454f5f5 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -1676,7 +1676,7 @@ mlx5_validate_rss_queues(struct rte_eth_dev *dev,
 			 const char **error, uint32_t *queue_idx)
 {
 	const struct mlx5_priv *priv = dev->data->dev_private;
-	enum mlx5_rxq_type rxq_type = MLX5_RXQ_TYPE_UNDEFINED;
+	bool is_hairpin = false;
 	uint32_t i;
 
 	for (i = 0; i != queues_n; ++i) {
@@ -1693,9 +1693,9 @@ mlx5_validate_rss_queues(struct rte_eth_dev *dev,
 			*queue_idx = i;
 			return -EINVAL;
 		}
-		if (i == 0)
-			rxq_type = rxq_ctrl->type;
-		if (rxq_type != rxq_ctrl->type) {
+		if (i == 0 && rxq_ctrl->is_hairpin)
+			is_hairpin = true;
+		if (is_hairpin != rxq_ctrl->is_hairpin) {
 			*error = "combining hairpin and regular RSS queues is not supported";
 			*queue_idx = i;
 			return -ENOTSUP;
@@ -5767,15 +5767,13 @@ flow_create_split_metadata(struct rte_eth_dev *dev,
 			const struct rte_flow_action_queue *queue;
 
 			queue = qrss->conf;
-			if (mlx5_rxq_get_type(dev, queue->index) ==
-			    MLX5_RXQ_TYPE_HAIRPIN)
+			if (mlx5_rxq_is_hairpin(dev, queue->index))
 				qrss = NULL;
 		} else if (qrss->type == RTE_FLOW_ACTION_TYPE_RSS) {
 			const struct rte_flow_action_rss *rss;
 
 			rss = qrss->conf;
-			if (mlx5_rxq_get_type(dev, rss->queue[0]) ==
-			    MLX5_RXQ_TYPE_HAIRPIN)
+			if (mlx5_rxq_is_hairpin(dev, rss->queue[0]))
 				qrss = NULL;
 		}
 	}
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index ce69b6ff3a..3034dbd70e 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -5767,8 +5767,7 @@ flow_dv_validate_action_sample(uint64_t *action_flags,
 	}
 	/* Continue validation for Xcap actions.*/
 	if ((sub_action_flags & MLX5_FLOW_XCAP_ACTIONS) &&
-	    (queue_index == 0xFFFF ||
-	     mlx5_rxq_get_type(dev, queue_index) != MLX5_RXQ_TYPE_HAIRPIN)) {
+	    (queue_index == 0xFFFF || !mlx5_rxq_is_hairpin(dev, queue_index))) {
 		if ((sub_action_flags & MLX5_FLOW_XCAP_ACTIONS) ==
 		     MLX5_FLOW_XCAP_ACTIONS)
 			return rte_flow_error_set(error, ENOTSUP,
@@ -7953,8 +7952,7 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
 	 */
 	if ((action_flags & (MLX5_FLOW_XCAP_ACTIONS |
 			     MLX5_FLOW_VLAN_ACTIONS)) &&
-	    (queue_index == 0xFFFF ||
-	     mlx5_rxq_get_type(dev, queue_index) != MLX5_RXQ_TYPE_HAIRPIN ||
+	    (queue_index == 0xFFFF || !mlx5_rxq_is_hairpin(dev, queue_index) ||
 	     ((conf = mlx5_rxq_get_hairpin_conf(dev, queue_index)) != NULL &&
 	     conf->tx_explicit != 0))) {
 		if ((action_flags & MLX5_FLOW_XCAP_ACTIONS) ==
@@ -10944,10 +10942,8 @@ flow_dv_translate_item_tx_queue(struct rte_eth_dev *dev,
 {
 	const struct mlx5_rte_flow_item_tx_queue *queue_m;
 	const struct mlx5_rte_flow_item_tx_queue *queue_v;
-	void *misc_m =
-		MLX5_ADDR_OF(fte_match_param, matcher, misc_parameters);
-	void *misc_v =
-		MLX5_ADDR_OF(fte_match_param, key, misc_parameters);
+	void *misc_m = MLX5_ADDR_OF(fte_match_param, matcher, misc_parameters);
+	void *misc_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters);
 	struct mlx5_txq_ctrl *txq;
 	uint32_t queue, mask;
 
@@ -10958,7 +10954,7 @@ flow_dv_translate_item_tx_queue(struct rte_eth_dev *dev,
 	txq = mlx5_txq_get(dev, queue_v->queue);
 	if (!txq)
 		return;
-	if (txq->type == MLX5_TXQ_TYPE_HAIRPIN)
+	if (txq->is_hairpin)
 		queue = txq->obj->sq->id;
 	else
 		queue = txq->obj->sq_obj.sq->id;
diff --git a/drivers/net/mlx5/mlx5_rx.h b/drivers/net/mlx5/mlx5_rx.h
index 7e417819f7..2d7c1a983a 100644
--- a/drivers/net/mlx5/mlx5_rx.h
+++ b/drivers/net/mlx5/mlx5_rx.h
@@ -141,12 +141,6 @@ struct mlx5_rxq_data {
 	/* Buffer split segment descriptions - sizes, offsets, pools. */
 } __rte_cache_aligned;
 
-enum mlx5_rxq_type {
-	MLX5_RXQ_TYPE_STANDARD, /* Standard Rx queue. */
-	MLX5_RXQ_TYPE_HAIRPIN, /* Hairpin Rx queue. */
-	MLX5_RXQ_TYPE_UNDEFINED,
-};
-
 /* RX queue control descriptor. */
 struct mlx5_rxq_ctrl {
 	struct mlx5_rxq_data rxq; /* Data path structure. */
@@ -154,7 +148,7 @@ struct mlx5_rxq_ctrl {
 	LIST_HEAD(priv, mlx5_rxq_priv) owners; /* Owner rxq list. */
 	struct mlx5_rxq_obj *obj; /* Verbs/DevX elements. */
 	struct mlx5_dev_ctx_shared *sh; /* Shared context. */
-	enum mlx5_rxq_type type; /* Rxq type. */
+	bool is_hairpin; /* Whether RxQ type is Hairpin. */
 	unsigned int socket; /* CPU socket ID for allocations. */
 	LIST_ENTRY(mlx5_rxq_ctrl) share_entry; /* Entry in shared RXQ list. */
 	uint32_t share_group; /* Group ID of shared RXQ. */
@@ -254,7 +248,7 @@ uint32_t mlx5_hrxq_get(struct rte_eth_dev *dev,
 		       struct mlx5_flow_rss_desc *rss_desc);
 int mlx5_hrxq_release(struct rte_eth_dev *dev, uint32_t hxrq_idx);
 uint32_t mlx5_hrxq_verify(struct rte_eth_dev *dev);
-enum mlx5_rxq_type mlx5_rxq_get_type(struct rte_eth_dev *dev, uint16_t idx);
+bool mlx5_rxq_is_hairpin(struct rte_eth_dev *dev, uint16_t idx);
 const struct rte_eth_hairpin_conf *mlx5_rxq_get_hairpin_conf
 	(struct rte_eth_dev *dev, uint16_t idx);
 struct mlx5_hrxq *mlx5_drop_action_create(struct rte_eth_dev *dev);
@@ -628,8 +622,7 @@ mlx5_mprq_enabled(struct rte_eth_dev *dev)
 	for (i = 0; i < priv->rxqs_n; ++i) {
 		struct mlx5_rxq_ctrl *rxq_ctrl = mlx5_rxq_ctrl_get(dev, i);
 
-		if (rxq_ctrl == NULL ||
-		    rxq_ctrl->type != MLX5_RXQ_TYPE_STANDARD)
+		if (rxq_ctrl == NULL || rxq_ctrl->is_hairpin)
 			continue;
 		n_ibv++;
 		if (mlx5_rxq_mprq_enabled(&rxq_ctrl->rxq))
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index 2625fa3308..4d45d494c0 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -1393,8 +1393,7 @@ mlx5_mprq_alloc_mp(struct rte_eth_dev *dev)
 		struct mlx5_rxq_ctrl *rxq_ctrl = mlx5_rxq_ctrl_get(dev, i);
 		struct mlx5_rxq_data *rxq;
 
-		if (rxq_ctrl == NULL ||
-		    rxq_ctrl->type != MLX5_RXQ_TYPE_STANDARD)
+		if (rxq_ctrl == NULL || rxq_ctrl->is_hairpin)
 			continue;
 		rxq = &rxq_ctrl->rxq;
 		n_ibv++;
@@ -1482,8 +1481,7 @@ mlx5_mprq_alloc_mp(struct rte_eth_dev *dev)
 	for (i = 0; i != priv->rxqs_n; ++i) {
 		struct mlx5_rxq_ctrl *rxq_ctrl = mlx5_rxq_ctrl_get(dev, i);
 
-		if (rxq_ctrl == NULL ||
-		    rxq_ctrl->type != MLX5_RXQ_TYPE_STANDARD)
+		if (rxq_ctrl == NULL || rxq_ctrl->is_hairpin)
 			continue;
 		rxq_ctrl->rxq.mprq_mp = mp;
 	}
@@ -1810,7 +1808,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, struct mlx5_rxq_priv *rxq,
 		rte_errno = ENOSPC;
 		goto error;
 	}
-	tmpl->type = MLX5_RXQ_TYPE_STANDARD;
+	tmpl->is_hairpin = false;
 	if (mlx5_mr_ctrl_init(&tmpl->rxq.mr_ctrl,
 			      &priv->sh->cdev->mr_scache.dev_gen, socket)) {
 		/* rte_errno is already set. */
@@ -1975,7 +1973,7 @@ mlx5_rxq_hairpin_new(struct rte_eth_dev *dev, struct mlx5_rxq_priv *rxq,
 	LIST_INIT(&tmpl->owners);
 	rxq->ctrl = tmpl;
 	LIST_INSERT_HEAD(&tmpl->owners, rxq, owner_entry);
-	tmpl->type = MLX5_RXQ_TYPE_HAIRPIN;
+	tmpl->is_hairpin = true;
 	tmpl->socket = SOCKET_ID_ANY;
 	tmpl->rxq.rss_hash = 0;
 	tmpl->rxq.port_id = dev->data->port_id;
@@ -2126,7 +2124,7 @@ mlx5_rxq_release(struct rte_eth_dev *dev, uint16_t idx)
 			mlx5_free(rxq_ctrl->obj);
 			rxq_ctrl->obj = NULL;
 		}
-		if (rxq_ctrl->type == MLX5_RXQ_TYPE_STANDARD) {
+		if (!rxq_ctrl->is_hairpin) {
 			if (!rxq_ctrl->started)
 				rxq_free_elts(rxq_ctrl);
 			dev->data->rx_queue_state[idx] =
@@ -2135,7 +2133,7 @@ mlx5_rxq_release(struct rte_eth_dev *dev, uint16_t idx)
 	} else { /* Refcnt zero, closing device. */
 		LIST_REMOVE(rxq, owner_entry);
 		if (LIST_EMPTY(&rxq_ctrl->owners)) {
-			if (rxq_ctrl->type == MLX5_RXQ_TYPE_STANDARD)
+			if (!rxq_ctrl->is_hairpin)
 				mlx5_mr_btree_free
 					(&rxq_ctrl->rxq.mr_ctrl.cache_bh);
 			if (rxq_ctrl->rxq.shared)
@@ -2175,7 +2173,7 @@ mlx5_rxq_verify(struct rte_eth_dev *dev)
 }
 
 /**
- * Get a Rx queue type.
+ * Check whether RxQ type is Hairpin.
  *
  * @param dev
  *   Pointer to Ethernet device.
@@ -2183,17 +2181,15 @@ mlx5_rxq_verify(struct rte_eth_dev *dev)
  *   Rx queue index.
  *
  * @return
- *   The Rx queue type.
+ *   True if Rx queue type is Hairpin, otherwise False.
  */
-enum mlx5_rxq_type
-mlx5_rxq_get_type(struct rte_eth_dev *dev, uint16_t idx)
+bool
+mlx5_rxq_is_hairpin(struct rte_eth_dev *dev, uint16_t idx)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
 	struct mlx5_rxq_ctrl *rxq_ctrl = mlx5_rxq_ctrl_get(dev, idx);
 
-	if (idx < priv->rxqs_n && rxq_ctrl != NULL)
-		return rxq_ctrl->type;
-	return MLX5_RXQ_TYPE_UNDEFINED;
+	return (idx < priv->rxqs_n && rxq_ctrl != NULL && rxq_ctrl->is_hairpin);
 }
 
 /*
@@ -2210,14 +2206,9 @@ mlx5_rxq_get_type(struct rte_eth_dev *dev, uint16_t idx)
 const struct rte_eth_hairpin_conf *
 mlx5_rxq_get_hairpin_conf(struct rte_eth_dev *dev, uint16_t idx)
 {
-	struct mlx5_priv *priv = dev->data->dev_private;
 	struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, idx);
 
-	if (idx < priv->rxqs_n && rxq != NULL) {
-		if (rxq->ctrl->type == MLX5_RXQ_TYPE_HAIRPIN)
-			return &rxq->hairpin_conf;
-	}
-	return NULL;
+	return mlx5_rxq_is_hairpin(dev, idx) ? &rxq->hairpin_conf : NULL;
 }
 
 /**
diff --git a/drivers/net/mlx5/mlx5_trigger.c b/drivers/net/mlx5/mlx5_trigger.c
index 74c3bc8a13..fe8b42c414 100644
--- a/drivers/net/mlx5/mlx5_trigger.c
+++ b/drivers/net/mlx5/mlx5_trigger.c
@@ -59,7 +59,7 @@ mlx5_txq_start(struct rte_eth_dev *dev)
 
 		if (!txq_ctrl)
 			continue;
-		if (txq_ctrl->type == MLX5_TXQ_TYPE_STANDARD)
+		if (!txq_ctrl->is_hairpin)
 			txq_alloc_elts(txq_ctrl);
 		MLX5_ASSERT(!txq_ctrl->obj);
 		txq_ctrl->obj = mlx5_malloc(flags, sizeof(struct mlx5_txq_obj),
@@ -77,7 +77,7 @@ mlx5_txq_start(struct rte_eth_dev *dev)
 			txq_ctrl->obj = NULL;
 			goto error;
 		}
-		if (txq_ctrl->type == MLX5_TXQ_TYPE_STANDARD) {
+		if (!txq_ctrl->is_hairpin) {
 			size_t size = txq_data->cqe_s * sizeof(*txq_data->fcqs);
 
 			txq_data->fcqs = mlx5_malloc(flags, size,
@@ -167,7 +167,7 @@ mlx5_rxq_ctrl_prepare(struct rte_eth_dev *dev, struct mlx5_rxq_ctrl *rxq_ctrl,
 {
 	int ret = 0;
 
-	if (rxq_ctrl->type == MLX5_RXQ_TYPE_STANDARD) {
+	if (!rxq_ctrl->is_hairpin) {
 		/*
 		 * Pre-register the mempools. Regardless of whether
 		 * the implicit registration is enabled or not,
@@ -280,7 +280,7 @@ mlx5_hairpin_auto_bind(struct rte_eth_dev *dev)
 		txq_ctrl = mlx5_txq_get(dev, i);
 		if (!txq_ctrl)
 			continue;
-		if (txq_ctrl->type != MLX5_TXQ_TYPE_HAIRPIN ||
+		if (!txq_ctrl->is_hairpin ||
 		    txq_ctrl->hairpin_conf.peers[0].port != self_port) {
 			mlx5_txq_release(dev, i);
 			continue;
@@ -299,7 +299,7 @@ mlx5_hairpin_auto_bind(struct rte_eth_dev *dev)
 		if (!txq_ctrl)
 			continue;
 		/* Skip hairpin queues with other peer ports. */
-		if (txq_ctrl->type != MLX5_TXQ_TYPE_HAIRPIN ||
+		if (!txq_ctrl->is_hairpin ||
 		    txq_ctrl->hairpin_conf.peers[0].port != self_port) {
 			mlx5_txq_release(dev, i);
 			continue;
@@ -322,7 +322,7 @@ mlx5_hairpin_auto_bind(struct rte_eth_dev *dev)
 			return -rte_errno;
 		}
 		rxq_ctrl = rxq->ctrl;
-		if (rxq_ctrl->type != MLX5_RXQ_TYPE_HAIRPIN ||
+		if (!rxq_ctrl->is_hairpin ||
 		    rxq->hairpin_conf.peers[0].queue != i) {
 			rte_errno = ENOMEM;
 			DRV_LOG(ERR, "port %u Tx queue %d can't be binded to "
@@ -412,7 +412,7 @@ mlx5_hairpin_queue_peer_update(struct rte_eth_dev *dev, uint16_t peer_queue,
 				dev->data->port_id, peer_queue);
 			return -rte_errno;
 		}
-		if (txq_ctrl->type != MLX5_TXQ_TYPE_HAIRPIN) {
+		if (!txq_ctrl->is_hairpin) {
 			rte_errno = EINVAL;
 			DRV_LOG(ERR, "port %u queue %d is not a hairpin Txq",
 				dev->data->port_id, peer_queue);
@@ -444,7 +444,7 @@ mlx5_hairpin_queue_peer_update(struct rte_eth_dev *dev, uint16_t peer_queue,
 			return -rte_errno;
 		}
 		rxq_ctrl = rxq->ctrl;
-		if (rxq_ctrl->type != MLX5_RXQ_TYPE_HAIRPIN) {
+		if (!rxq_ctrl->is_hairpin) {
 			rte_errno = EINVAL;
 			DRV_LOG(ERR, "port %u queue %d is not a hairpin Rxq",
 				dev->data->port_id, peer_queue);
@@ -510,7 +510,7 @@ mlx5_hairpin_queue_peer_bind(struct rte_eth_dev *dev, uint16_t cur_queue,
 				dev->data->port_id, cur_queue);
 			return -rte_errno;
 		}
-		if (txq_ctrl->type != MLX5_TXQ_TYPE_HAIRPIN) {
+		if (!txq_ctrl->is_hairpin) {
 			rte_errno = EINVAL;
 			DRV_LOG(ERR, "port %u queue %d not a hairpin Txq",
 				dev->data->port_id, cur_queue);
@@ -570,7 +570,7 @@ mlx5_hairpin_queue_peer_bind(struct rte_eth_dev *dev, uint16_t cur_queue,
 			return -rte_errno;
 		}
 		rxq_ctrl = rxq->ctrl;
-		if (rxq_ctrl->type != MLX5_RXQ_TYPE_HAIRPIN) {
+		if (!rxq_ctrl->is_hairpin) {
 			rte_errno = EINVAL;
 			DRV_LOG(ERR, "port %u queue %d not a hairpin Rxq",
 				dev->data->port_id, cur_queue);
@@ -644,7 +644,7 @@ mlx5_hairpin_queue_peer_unbind(struct rte_eth_dev *dev, uint16_t cur_queue,
 				dev->data->port_id, cur_queue);
 			return -rte_errno;
 		}
-		if (txq_ctrl->type != MLX5_TXQ_TYPE_HAIRPIN) {
+		if (!txq_ctrl->is_hairpin) {
 			rte_errno = EINVAL;
 			DRV_LOG(ERR, "port %u queue %d not a hairpin Txq",
 				dev->data->port_id, cur_queue);
@@ -683,7 +683,7 @@ mlx5_hairpin_queue_peer_unbind(struct rte_eth_dev *dev, uint16_t cur_queue,
 			return -rte_errno;
 		}
 		rxq_ctrl = rxq->ctrl;
-		if (rxq_ctrl->type != MLX5_RXQ_TYPE_HAIRPIN) {
+		if (!rxq_ctrl->is_hairpin) {
 			rte_errno = EINVAL;
 			DRV_LOG(ERR, "port %u queue %d not a hairpin Rxq",
 				dev->data->port_id, cur_queue);
@@ -751,7 +751,7 @@ mlx5_hairpin_bind_single_port(struct rte_eth_dev *dev, uint16_t rx_port)
 		txq_ctrl = mlx5_txq_get(dev, i);
 		if (txq_ctrl == NULL)
 			continue;
-		if (txq_ctrl->type != MLX5_TXQ_TYPE_HAIRPIN) {
+		if (!txq_ctrl->is_hairpin) {
 			mlx5_txq_release(dev, i);
 			continue;
 		}
@@ -791,7 +791,7 @@ mlx5_hairpin_bind_single_port(struct rte_eth_dev *dev, uint16_t rx_port)
 		txq_ctrl = mlx5_txq_get(dev, i);
 		if (txq_ctrl == NULL)
 			continue;
-		if (txq_ctrl->type != MLX5_TXQ_TYPE_HAIRPIN) {
+		if (!txq_ctrl->is_hairpin) {
 			mlx5_txq_release(dev, i);
 			continue;
 		}
@@ -886,7 +886,7 @@ mlx5_hairpin_unbind_single_port(struct rte_eth_dev *dev, uint16_t rx_port)
 		txq_ctrl = mlx5_txq_get(dev, i);
 		if (txq_ctrl == NULL)
 			continue;
-		if (txq_ctrl->type != MLX5_TXQ_TYPE_HAIRPIN) {
+		if (!txq_ctrl->is_hairpin) {
 			mlx5_txq_release(dev, i);
 			continue;
 		}
@@ -1016,7 +1016,7 @@ mlx5_hairpin_get_peer_ports(struct rte_eth_dev *dev, uint16_t *peer_ports,
 			txq_ctrl = mlx5_txq_get(dev, i);
 			if (!txq_ctrl)
 				continue;
-			if (txq_ctrl->type != MLX5_TXQ_TYPE_HAIRPIN) {
+			if (!txq_ctrl->is_hairpin) {
 				mlx5_txq_release(dev, i);
 				continue;
 			}
@@ -1040,7 +1040,7 @@ mlx5_hairpin_get_peer_ports(struct rte_eth_dev *dev, uint16_t *peer_ports,
 			if (rxq == NULL)
 				continue;
 			rxq_ctrl = rxq->ctrl;
-			if (rxq_ctrl->type != MLX5_RXQ_TYPE_HAIRPIN)
+			if (!rxq_ctrl->is_hairpin)
 				continue;
 			pp = rxq->hairpin_conf.peers[0].port;
 			if (pp >= RTE_MAX_ETHPORTS) {
@@ -1318,7 +1318,7 @@ mlx5_traffic_enable(struct rte_eth_dev *dev)
 		if (!txq_ctrl)
 			continue;
 		/* Only Tx implicit mode requires the default Tx flow. */
-		if (txq_ctrl->type == MLX5_TXQ_TYPE_HAIRPIN &&
+		if (txq_ctrl->is_hairpin &&
 		    txq_ctrl->hairpin_conf.tx_explicit == 0 &&
 		    txq_ctrl->hairpin_conf.peers[0].port ==
 		    priv->dev_data->port_id) {
diff --git a/drivers/net/mlx5/mlx5_tx.h b/drivers/net/mlx5/mlx5_tx.h
index c4b8271f6f..00cc9e19d4 100644
--- a/drivers/net/mlx5/mlx5_tx.h
+++ b/drivers/net/mlx5/mlx5_tx.h
@@ -166,17 +166,12 @@ struct mlx5_txq_data {
 	/* Storage for queued packets, must be the last field. */
 } __rte_cache_aligned;
 
-enum mlx5_txq_type {
-	MLX5_TXQ_TYPE_STANDARD, /* Standard Tx queue. */
-	MLX5_TXQ_TYPE_HAIRPIN, /* Hairpin Tx queue. */
-};
-
 /* TX queue control descriptor. */
 struct mlx5_txq_ctrl {
 	LIST_ENTRY(mlx5_txq_ctrl) next; /* Pointer to the next element. */
 	uint32_t refcnt; /* Reference counter. */
 	unsigned int socket; /* CPU socket ID for allocations. */
-	enum mlx5_txq_type type; /* The txq ctrl type. */
+	bool is_hairpin; /* Whether TxQ type is Hairpin. */
 	unsigned int max_inline_data; /* Max inline data. */
 	unsigned int max_tso_header; /* Max TSO header size. */
 	struct mlx5_txq_obj *obj; /* Verbs/DevX queue object. */
diff --git a/drivers/net/mlx5/mlx5_txq.c b/drivers/net/mlx5/mlx5_txq.c
index edbaa50692..c2cc0c84ab 100644
--- a/drivers/net/mlx5/mlx5_txq.c
+++ b/drivers/net/mlx5/mlx5_txq.c
@@ -526,7 +526,7 @@ txq_uar_init_secondary(struct mlx5_txq_ctrl *txq_ctrl, int fd)
 		return -rte_errno;
 	}
 
-	if (txq_ctrl->type != MLX5_TXQ_TYPE_STANDARD)
+	if (txq_ctrl->is_hairpin)
 		return 0;
 	MLX5_ASSERT(ppriv);
 	/*
@@ -569,7 +569,7 @@ txq_uar_uninit_secondary(struct mlx5_txq_ctrl *txq_ctrl)
 		rte_errno = ENOMEM;
 	}
 
-	if (txq_ctrl->type != MLX5_TXQ_TYPE_STANDARD)
+	if (txq_ctrl->is_hairpin)
 		return;
 	addr = ppriv->uar_table[txq_ctrl->txq.idx].db;
 	rte_mem_unmap(RTE_PTR_ALIGN_FLOOR(addr, page_size), page_size);
@@ -630,7 +630,7 @@ mlx5_tx_uar_init_secondary(struct rte_eth_dev *dev, int fd)
 			continue;
 		txq = (*priv->txqs)[i];
 		txq_ctrl = container_of(txq, struct mlx5_txq_ctrl, txq);
-		if (txq_ctrl->type != MLX5_TXQ_TYPE_STANDARD)
+		if (txq_ctrl->is_hairpin)
 			continue;
 		MLX5_ASSERT(txq->idx == (uint16_t)i);
 		ret = txq_uar_init_secondary(txq_ctrl, fd);
@@ -1106,7 +1106,7 @@ mlx5_txq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 		goto error;
 	}
 	__atomic_fetch_add(&tmpl->refcnt, 1, __ATOMIC_RELAXED);
-	tmpl->type = MLX5_TXQ_TYPE_STANDARD;
+	tmpl->is_hairpin = false;
 	LIST_INSERT_HEAD(&priv->txqsctrl, tmpl, next);
 	return tmpl;
 error:
@@ -1149,7 +1149,7 @@ mlx5_txq_hairpin_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 	tmpl->txq.port_id = dev->data->port_id;
 	tmpl->txq.idx = idx;
 	tmpl->hairpin_conf = *hairpin_conf;
-	tmpl->type = MLX5_TXQ_TYPE_HAIRPIN;
+	tmpl->is_hairpin = true;
 	__atomic_fetch_add(&tmpl->refcnt, 1, __ATOMIC_RELAXED);
 	LIST_INSERT_HEAD(&priv->txqsctrl, tmpl, next);
 	return tmpl;
@@ -1208,7 +1208,7 @@ mlx5_txq_release(struct rte_eth_dev *dev, uint16_t idx)
 		mlx5_free(txq_ctrl->obj);
 		txq_ctrl->obj = NULL;
 	}
-	if (txq_ctrl->type == MLX5_TXQ_TYPE_STANDARD) {
+	if (!txq_ctrl->is_hairpin) {
 		if (txq_ctrl->txq.fcqs) {
 			mlx5_free(txq_ctrl->txq.fcqs);
 			txq_ctrl->txq.fcqs = NULL;
@@ -1217,7 +1217,7 @@ mlx5_txq_release(struct rte_eth_dev *dev, uint16_t idx)
 		dev->data->tx_queue_state[idx] = RTE_ETH_QUEUE_STATE_STOPPED;
 	}
 	if (!__atomic_load_n(&txq_ctrl->refcnt, __ATOMIC_RELAXED)) {
-		if (txq_ctrl->type == MLX5_TXQ_TYPE_STANDARD)
+		if (!txq_ctrl->is_hairpin)
 			mlx5_mr_btree_free(&txq_ctrl->txq.mr_ctrl.cache_bh);
 		LIST_REMOVE(txq_ctrl, next);
 		mlx5_free(txq_ctrl);
-- 
2.25.1


^ permalink raw reply	[flat|nested] 26+ messages in thread

* [PATCH 4/6] net/mlx5: add external RxQ mapping API
  2022-02-22 21:04 [PATCH 0/6] mlx5: external RxQ support Michael Baum
                   ` (2 preceding siblings ...)
  2022-02-22 21:04 ` [PATCH 3/6] net/mlx5: optimize RxQ/TxQ control structure Michael Baum
@ 2022-02-22 21:04 ` Michael Baum
  2022-02-22 21:04 ` [PATCH 5/6] net/mlx5: support queue/RSS action for external RxQ Michael Baum
                   ` (2 subsequent siblings)
  6 siblings, 0 replies; 26+ messages in thread
From: Michael Baum @ 2022-02-22 21:04 UTC (permalink / raw)
  To: dev; +Cc: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko

An external queue is a queue that has been created and is managed
outside the PMD. The queue owner may still use the PMD to generate flow
rules that reference these external queues.

When a queue is created in hardware, it is given a 32-bit ID. In
contrast, queue indexes in the PMD are 16-bit. To let the PMD generate
flow rules for such queues, the queue owner must provide a mapping
between the 32-bit HW index and a 16-bit index as used by the rte_flow
API.

This patch adds an API for inserting and removing a mapping between a
HW queue ID and an rte_flow queue ID.

Signed-off-by: Michael Baum <michaelba@nvidia.com>
---
 drivers/net/mlx5/linux/mlx5_os.c |  18 +++++
 drivers/net/mlx5/mlx5.c          |   2 +
 drivers/net/mlx5/mlx5.h          |   1 +
 drivers/net/mlx5/mlx5_defs.h     |   3 +
 drivers/net/mlx5/mlx5_ethdev.c   |  16 ++++-
 drivers/net/mlx5/mlx5_rx.h       |   6 ++
 drivers/net/mlx5/mlx5_rxq.c      | 109 +++++++++++++++++++++++++++++++
 drivers/net/mlx5/rte_pmd_mlx5.h  |  50 +++++++++++++-
 drivers/net/mlx5/version.map     |   3 +
 9 files changed, 204 insertions(+), 4 deletions(-)

diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c
index ecf823da56..058c140fe1 100644
--- a/drivers/net/mlx5/linux/mlx5_os.c
+++ b/drivers/net/mlx5/linux/mlx5_os.c
@@ -1156,6 +1156,22 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
 		err = ENOMEM;
 		goto error;
 	}
+	/*
+	 * When user configures remote PD and CTX and device creates RxQ by
+	 * DevX, external RxQ is both supported and requested.
+	 */
+	if (mlx5_imported_pd_and_ctx(sh->cdev) && mlx5_devx_obj_ops_en(sh)) {
+		priv->ext_rxqs = mlx5_malloc(MLX5_MEM_ZERO | MLX5_MEM_RTE,
+					     sizeof(struct mlx5_external_rxq) *
+					     MLX5_MAX_EXT_RX_QUEUES, 0,
+					     SOCKET_ID_ANY);
+		if (priv->ext_rxqs == NULL) {
+			DRV_LOG(ERR, "Fail to allocate external RxQ array.");
+			err = ENOMEM;
+			goto error;
+		}
+		DRV_LOG(DEBUG, "External RxQ is supported.");
+	}
 	priv->sh = sh;
 	priv->dev_port = spawn->phys_port;
 	priv->pci_dev = spawn->pci_dev;
@@ -1613,6 +1629,8 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
 			mlx5_list_destroy(priv->hrxqs);
 		if (eth_dev && priv->flex_item_map)
 			mlx5_flex_item_port_cleanup(eth_dev);
+		if (priv->ext_rxqs)
+			mlx5_free(priv->ext_rxqs);
 		mlx5_free(priv);
 		if (eth_dev != NULL)
 			eth_dev->data->dev_private = NULL;
diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index 9f65a8f901..415e0fe2f2 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -1855,6 +1855,8 @@ mlx5_dev_close(struct rte_eth_dev *dev)
 		close(priv->nl_socket_rdma);
 	if (priv->vmwa_context)
 		mlx5_vlan_vmwa_exit(priv->vmwa_context);
+	if (priv->ext_rxqs)
+		mlx5_free(priv->ext_rxqs);
 	ret = mlx5_hrxq_verify(dev);
 	if (ret)
 		DRV_LOG(WARNING, "port %u some hash Rx queue still remain",
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 35ea3fb47c..a316f628ec 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -1420,6 +1420,7 @@ struct mlx5_priv {
 	/* RX/TX queues. */
 	unsigned int rxqs_n; /* RX queues array size. */
 	unsigned int txqs_n; /* TX queues array size. */
+	struct mlx5_external_rxq *ext_rxqs; /* External RX queues array. */
 	struct mlx5_rxq_priv *(*rxq_privs)[]; /* RX queue non-shared data. */
 	struct mlx5_txq_data *(*txqs)[]; /* TX queues. */
 	struct rte_mempool *mprq_mp; /* Mempool for Multi-Packet RQ. */
diff --git a/drivers/net/mlx5/mlx5_defs.h b/drivers/net/mlx5/mlx5_defs.h
index 2d48fde010..15728fb41f 100644
--- a/drivers/net/mlx5/mlx5_defs.h
+++ b/drivers/net/mlx5/mlx5_defs.h
@@ -175,6 +175,9 @@
 /* Maximum number of indirect actions supported by rte_flow */
 #define MLX5_MAX_INDIRECT_ACTIONS 3
 
+/* Maximum number of external Rx queues supported by rte_flow */
+#define MLX5_MAX_EXT_RX_QUEUES (UINT16_MAX - MLX5_EXTERNAL_RX_QUEUE_ID_MIN + 1)
+
 /*
  * Linux definition of static_assert is found in /usr/include/assert.h.
  * Windows does not require a redefinition.
diff --git a/drivers/net/mlx5/mlx5_ethdev.c b/drivers/net/mlx5/mlx5_ethdev.c
index 406761ccf8..de0ba2b1ff 100644
--- a/drivers/net/mlx5/mlx5_ethdev.c
+++ b/drivers/net/mlx5/mlx5_ethdev.c
@@ -27,6 +27,7 @@
 #include "mlx5_tx.h"
 #include "mlx5_autoconf.h"
 #include "mlx5_devx.h"
+#include "rte_pmd_mlx5.h"
 
 /**
  * Get the interface index from device name.
@@ -81,9 +82,10 @@ mlx5_dev_configure(struct rte_eth_dev *dev)
 		rte_errno = EINVAL;
 		return -rte_errno;
 	}
-	priv->rss_conf.rss_key =
-		mlx5_realloc(priv->rss_conf.rss_key, MLX5_MEM_RTE,
-			    MLX5_RSS_HASH_KEY_LEN, 0, SOCKET_ID_ANY);
+	priv->rss_conf.rss_key = mlx5_realloc(priv->rss_conf.rss_key,
+					      MLX5_MEM_RTE,
+					      MLX5_RSS_HASH_KEY_LEN, 0,
+					      SOCKET_ID_ANY);
 	if (!priv->rss_conf.rss_key) {
 		DRV_LOG(ERR, "port %u cannot allocate RSS hash key memory (%u)",
 			dev->data->port_id, rxqs_n);
@@ -127,6 +129,14 @@ mlx5_dev_configure(struct rte_eth_dev *dev)
 		rte_errno = EINVAL;
 		return -rte_errno;
 	}
+	if (priv->ext_rxqs && rxqs_n >= MLX5_EXTERNAL_RX_QUEUE_ID_MIN) {
+		DRV_LOG(ERR, "port %u cannot handle this many Rx queues (%u), "
+			"the maximal number of internal Rx queues is %u",
+			dev->data->port_id, rxqs_n,
+			MLX5_EXTERNAL_RX_QUEUE_ID_MIN - 1);
+		rte_errno = EINVAL;
+		return -rte_errno;
+	}
 	if (rxqs_n != priv->rxqs_n) {
 		DRV_LOG(INFO, "port %u Rx queues number update: %u -> %u",
 			dev->data->port_id, priv->rxqs_n, rxqs_n);
diff --git a/drivers/net/mlx5/mlx5_rx.h b/drivers/net/mlx5/mlx5_rx.h
index 2d7c1a983a..1e191a5704 100644
--- a/drivers/net/mlx5/mlx5_rx.h
+++ b/drivers/net/mlx5/mlx5_rx.h
@@ -175,6 +175,12 @@ struct mlx5_rxq_priv {
 	uint32_t hairpin_status; /* Hairpin binding status. */
 };
 
+/* External RX queue descriptor. */
+struct mlx5_external_rxq {
+	uint32_t hw_id; /* Queue index in the Hardware. */
+	uint32_t refcnt; /* Reference counter. */
+};
+
 /* mlx5_rxq.c */
 
 extern uint8_t rss_hash_default_key[];
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index 4d45d494c0..93adc09369 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -30,6 +30,7 @@
 #include "mlx5_utils.h"
 #include "mlx5_autoconf.h"
 #include "mlx5_devx.h"
+#include "rte_pmd_mlx5.h"
 
 
 /* Default RSS hash key also used for ConnectX-3. */
@@ -2989,3 +2990,111 @@ mlx5_rxq_timestamp_set(struct rte_eth_dev *dev)
 		data->rt_timestamp = sh->dev_cap.rt_timestamp;
 	}
 }
+
+/**
+ * Validate the given external RxQ rte_flow index, and get a pointer to the
+ * corresponding external RxQ object to map/unmap.
+ *
+ * @param[in] port_id
+ *   The port identifier of the Ethernet device.
+ * @param[in] dpdk_idx
+ *   Queue index in rte_flow.
+ *
+ * @return
+ *   Pointer to the corresponding external RxQ on success,
+ *   NULL otherwise and rte_errno is set.
+ */
+static struct mlx5_external_rxq *
+mlx5_external_rx_queue_get_validate(uint16_t port_id, uint16_t dpdk_idx)
+{
+	struct rte_eth_dev *dev;
+	struct mlx5_priv *priv;
+
+	if (dpdk_idx < MLX5_EXTERNAL_RX_QUEUE_ID_MIN) {
+		DRV_LOG(ERR, "Queue index %u should be in range: [%u, %u].",
+			dpdk_idx, MLX5_EXTERNAL_RX_QUEUE_ID_MIN, UINT16_MAX);
+		rte_errno = EINVAL;
+		return NULL;
+	}
+	if (rte_eth_dev_is_valid_port(port_id) < 0) {
+		DRV_LOG(ERR, "There is no Ethernet device for port %u.",
+			port_id);
+		rte_errno = ENODEV;
+		return NULL;
+	}
+	dev = &rte_eth_devices[port_id];
+	priv = dev->data->dev_private;
+	if (!mlx5_imported_pd_and_ctx(priv->sh->cdev)) {
+		DRV_LOG(ERR, "Port %u "
+			"external RxQ isn't supported on local PD and CTX.",
+			port_id);
+		rte_errno = ENOTSUP;
+		return NULL;
+	}
+	if (!mlx5_devx_obj_ops_en(priv->sh)) {
+		DRV_LOG(ERR,
+			"Port %u external RxQ isn't supported by Verbs API.",
+			port_id);
+		rte_errno = ENOTSUP;
+		return NULL;
+	}
+	/*
+	 * When user configures remote PD and CTX and device creates RxQ by
+	 * DevX, external RxQs array is allocated.
+	 */
+	MLX5_ASSERT(priv->ext_rxqs != NULL);
+	return &priv->ext_rxqs[dpdk_idx - MLX5_EXTERNAL_RX_QUEUE_ID_MIN];
+}
+
+int
+rte_pmd_mlx5_external_rx_queue_id_map(uint16_t port_id, uint16_t dpdk_idx,
+				      uint32_t hw_idx)
+{
+	struct mlx5_external_rxq *ext_rxq;
+
+	ext_rxq = mlx5_external_rx_queue_get_validate(port_id, dpdk_idx);
+	if (ext_rxq == NULL)
+		return -rte_errno;
+	if (__atomic_load_n(&ext_rxq->refcnt, __ATOMIC_RELAXED)) {
+		if (ext_rxq->hw_id != hw_idx) {
+			DRV_LOG(ERR, "Port %u external RxQ index %u "
+				"is already mapped to HW index (requesting is "
+				"%u, existing is %u).",
+				port_id, dpdk_idx, hw_idx, ext_rxq->hw_id);
+			rte_errno = EEXIST;
+			return -rte_errno;
+		}
+		DRV_LOG(WARNING, "Port %u external RxQ index %u "
+			"is already mapped to the requested HW index (%u)",
+			port_id, dpdk_idx, hw_idx);
+
+	} else {
+		ext_rxq->hw_id = hw_idx;
+		__atomic_store_n(&ext_rxq->refcnt, 1, __ATOMIC_RELAXED);
+		DRV_LOG(DEBUG, "Port %u external RxQ index %u "
+			"is successfully mapped to the requested HW index (%u)",
+			port_id, dpdk_idx, hw_idx);
+	}
+	return 0;
+}
+
+int
+rte_pmd_mlx5_external_rx_queue_id_unmap(uint16_t port_id, uint16_t dpdk_idx)
+{
+	struct mlx5_external_rxq *ext_rxq;
+
+	ext_rxq = mlx5_external_rx_queue_get_validate(port_id, dpdk_idx);
+	if (ext_rxq == NULL)
+		return -rte_errno;
+	if (__atomic_load_n(&ext_rxq->refcnt, __ATOMIC_RELAXED) == 0) {
+		DRV_LOG(ERR, "Port %u external RxQ index %u doesn't exist.",
+			port_id, dpdk_idx);
+		rte_errno = EINVAL;
+		return -rte_errno;
+	}
+	__atomic_store_n(&ext_rxq->refcnt, 0, __ATOMIC_RELAXED);
+	DRV_LOG(DEBUG,
+		"Port %u external RxQ index %u is successfully unmapped.",
+		port_id, dpdk_idx);
+	return 0;
+}
diff --git a/drivers/net/mlx5/rte_pmd_mlx5.h b/drivers/net/mlx5/rte_pmd_mlx5.h
index fc37a386db..9a78e522d1 100644
--- a/drivers/net/mlx5/rte_pmd_mlx5.h
+++ b/drivers/net/mlx5/rte_pmd_mlx5.h
@@ -61,8 +61,56 @@ int rte_pmd_mlx5_get_dyn_flag_names(char *names[], unsigned int n);
 __rte_experimental
 int rte_pmd_mlx5_sync_flow(uint16_t port_id, uint32_t domains);
 
+/**
+ * External Rx queue rte_flow index minimal value.
+ */
+#define MLX5_EXTERNAL_RX_QUEUE_ID_MIN (UINT16_MAX - 1000 + 1)
+
+/**
+ * Update mapping between rte_flow queue index (16 bits) and HW queue index
+ * (32 bits) for RxQs created by an external process.
+ *
+ * @param[in] port_id
+ *   The port identifier of the Ethernet device.
+ * @param[in] dpdk_idx
+ *   Queue index in rte_flow.
+ * @param[in] hw_idx
+ *   Queue index in hardware.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ *   Possible values for rte_errno:
+ *   - EEXIST - a mapping with the same rte_flow index already exists.
+ *   - EINVAL - invalid rte_flow index, out of range.
+ *   - ENODEV - there is no Ethernet device for this port id.
+ *   - ENOTSUP - the port doesn't support external RxQ.
+ */
+__rte_experimental
+int rte_pmd_mlx5_external_rx_queue_id_map(uint16_t port_id, uint16_t dpdk_idx,
+					  uint32_t hw_idx);
+
+/**
+ * Remove mapping between rte_flow queue index (16 bits) and HW queue index
+ * (32 bits) for RxQs created by an external process.
+ *
+ * @param[in] port_id
+ *   The port identifier of the Ethernet device.
+ * @param[in] dpdk_idx
+ *   Queue index in rte_flow.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ *   Possible values for rte_errno:
+ *   - EINVAL - invalid index, out of range or doesn't exist.
+ *   - ENODEV - there is no Ethernet device for this port id.
+ *   - ENOTSUP - the port doesn't support external RxQ.
+ */
+__rte_experimental
+int rte_pmd_mlx5_external_rx_queue_id_unmap(uint16_t port_id,
+					    uint16_t dpdk_idx);
+
 #ifdef __cplusplus
 }
 #endif
 
-#endif
+#endif /* RTE_PMD_PRIVATE_MLX5_H_ */
diff --git a/drivers/net/mlx5/version.map b/drivers/net/mlx5/version.map
index 0af7a12488..79cb79acc6 100644
--- a/drivers/net/mlx5/version.map
+++ b/drivers/net/mlx5/version.map
@@ -9,4 +9,7 @@ EXPERIMENTAL {
 	rte_pmd_mlx5_get_dyn_flag_names;
 	# added in 20.11
 	rte_pmd_mlx5_sync_flow;
+	# added in 22.03
+	rte_pmd_mlx5_external_rx_queue_id_map;
+	rte_pmd_mlx5_external_rx_queue_id_unmap;
 };
-- 
2.25.1


^ permalink raw reply	[flat|nested] 26+ messages in thread

* [PATCH 5/6] net/mlx5: support queue/RSS action for external RxQ
  2022-02-22 21:04 [PATCH 0/6] mlx5: external RxQ support Michael Baum
                   ` (3 preceding siblings ...)
  2022-02-22 21:04 ` [PATCH 4/6] net/mlx5: add external RxQ mapping API Michael Baum
@ 2022-02-22 21:04 ` Michael Baum
  2022-02-22 21:04 ` [PATCH 6/6] app/testpmd: add test " Michael Baum
  2022-02-23 18:48 ` [PATCH v2 0/6] mlx5: external RxQ support Michael Baum
  6 siblings, 0 replies; 26+ messages in thread
From: Michael Baum @ 2022-02-22 21:04 UTC (permalink / raw)
  To: dev; +Cc: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko

Add support for queue/RSS actions with external RxQs.
When creating the indirection table, the queue index is taken from the
mapping array.

Signed-off-by: Michael Baum <michaelba@nvidia.com>
---
 doc/guides/nics/mlx5.rst               |   1 +
 doc/guides/rel_notes/release_22_03.rst |   6 ++
 drivers/net/mlx5/mlx5.c                |   8 +-
 drivers/net/mlx5/mlx5_devx.c           |  30 +++++--
 drivers/net/mlx5/mlx5_flow.c           |  29 +++++--
 drivers/net/mlx5/mlx5_rx.h             |  30 +++++++
 drivers/net/mlx5/mlx5_rxq.c            | 116 +++++++++++++++++++++++--
 7 files changed, 194 insertions(+), 26 deletions(-)

diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index 748939527d..724e34d98b 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -38,6 +38,7 @@ Features
 - Multiple TX and RX queues.
 - Shared Rx queue.
 - Rx queue delay drop.
+- Support steering for external Rx queue.
 - Support for scattered TX frames.
 - Advanced support for scattered Rx frames with tunable buffer attributes.
 - IPv4, IPv6, TCPv4, TCPv6, UDPv4 and UDPv6 RSS on any number of queues.
diff --git a/doc/guides/rel_notes/release_22_03.rst b/doc/guides/rel_notes/release_22_03.rst
index 41923f50e6..b5dd5d9913 100644
--- a/doc/guides/rel_notes/release_22_03.rst
+++ b/doc/guides/rel_notes/release_22_03.rst
@@ -118,6 +118,12 @@ New Features
   * Added PPPoL2TPv2oUDP FDIR distribute packets based on inner IP
     src/dst address and UDP/TCP src/dst port.
 
+* **Updated Mellanox mlx5 driver.**
+
+  Updated the Mellanox mlx5 driver with new features and improvements, including:
+
+  * Support steering for external Rx queue.
+
 * **Updated Wangxun ngbe driver.**
 
   * Added support for devices of custom PHY interfaces.
diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index 415e0fe2f2..9760f52b46 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -1855,8 +1855,6 @@ mlx5_dev_close(struct rte_eth_dev *dev)
 		close(priv->nl_socket_rdma);
 	if (priv->vmwa_context)
 		mlx5_vlan_vmwa_exit(priv->vmwa_context);
-	if (priv->ext_rxqs)
-		mlx5_free(priv->ext_rxqs);
 	ret = mlx5_hrxq_verify(dev);
 	if (ret)
 		DRV_LOG(WARNING, "port %u some hash Rx queue still remain",
@@ -1869,6 +1867,10 @@ mlx5_dev_close(struct rte_eth_dev *dev)
 	if (ret)
 		DRV_LOG(WARNING, "port %u some Rx queue objects still remain",
 			dev->data->port_id);
+	ret = mlx5_ext_rxq_verify(dev);
+	if (ret)
+		DRV_LOG(WARNING, "Port %u some external RxQs still remain.",
+			dev->data->port_id);
 	ret = mlx5_rxq_verify(dev);
 	if (ret)
 		DRV_LOG(WARNING, "port %u some Rx queues still remain",
@@ -1887,6 +1889,8 @@ mlx5_dev_close(struct rte_eth_dev *dev)
 			dev->data->port_id);
 	if (priv->hrxqs)
 		mlx5_list_destroy(priv->hrxqs);
+	if (priv->ext_rxqs)
+		mlx5_free(priv->ext_rxqs);
 	/*
 	 * Free the shared context in last turn, because the cleanup
 	 * routines above may use some shared fields, like
diff --git a/drivers/net/mlx5/mlx5_devx.c b/drivers/net/mlx5/mlx5_devx.c
index 154df99251..19510a540c 100644
--- a/drivers/net/mlx5/mlx5_devx.c
+++ b/drivers/net/mlx5/mlx5_devx.c
@@ -580,13 +580,21 @@ mlx5_devx_ind_table_create_rqt_attr(struct rte_eth_dev *dev,
 		return rqt_attr;
 	}
 	for (i = 0; i != queues_n; ++i) {
-		struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, queues[i]);
+		if (mlx5_is_external_rxq(dev, queues[i])) {
+			struct mlx5_external_rxq *ext_rxq =
+					mlx5_ext_rxq_get(dev, queues[i]);
 
-		MLX5_ASSERT(rxq != NULL);
-		if (rxq->ctrl->is_hairpin)
-			rqt_attr->rq_list[i] = rxq->ctrl->obj->rq->id;
-		else
-			rqt_attr->rq_list[i] = rxq->devx_rq.rq->id;
+			rqt_attr->rq_list[i] = ext_rxq->hw_id;
+		} else {
+			struct mlx5_rxq_priv *rxq =
+					mlx5_rxq_get(dev, queues[i]);
+
+			MLX5_ASSERT(rxq != NULL);
+			if (rxq->ctrl->is_hairpin)
+				rqt_attr->rq_list[i] = rxq->ctrl->obj->rq->id;
+			else
+				rqt_attr->rq_list[i] = rxq->devx_rq.rq->id;
+		}
 	}
 	MLX5_ASSERT(i > 0);
 	for (j = 0; i != rqt_n; ++j, ++i)
@@ -711,7 +719,13 @@ mlx5_devx_tir_attr_set(struct rte_eth_dev *dev, const uint8_t *rss_key,
 	uint32_t i;
 
 	/* NULL queues designate drop queue. */
-	if (ind_tbl->queues != NULL) {
+	if (ind_tbl->queues == NULL) {
+		is_hairpin = priv->drop_queue.rxq->ctrl->is_hairpin;
+	} else if (mlx5_is_external_rxq(dev, ind_tbl->queues[0])) {
+		/* External RxQ supports neither Hairpin nor LRO. */
+		is_hairpin = false;
+		lro = false;
+	} else {
 		is_hairpin = mlx5_rxq_is_hairpin(dev, ind_tbl->queues[0]);
 		/* Enable TIR LRO only if all the queues were configured for. */
 		for (i = 0; i < ind_tbl->queues_n; ++i) {
@@ -723,8 +737,6 @@ mlx5_devx_tir_attr_set(struct rte_eth_dev *dev, const uint8_t *rss_key,
 				break;
 			}
 		}
-	} else {
-		is_hairpin = priv->drop_queue.rxq->ctrl->is_hairpin;
 	}
 	memset(tir_attr, 0, sizeof(*tir_attr));
 	tir_attr->disp_type = MLX5_TIRC_DISP_TYPE_INDIRECT;
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index 5e8454f5f5..1f81cedecb 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -1631,6 +1631,12 @@ mlx5_flow_validate_action_queue(const struct rte_flow_action *action,
 					  RTE_FLOW_ERROR_TYPE_ACTION, NULL,
 					  "can't have 2 fate actions in"
 					  " same flow");
+	if (attr->egress)
+		return rte_flow_error_set(error, ENOTSUP,
+					  RTE_FLOW_ERROR_TYPE_ATTR_EGRESS, NULL,
+					  "queue action not supported for egress.");
+	if (mlx5_is_external_rxq(dev, queue->index))
+		return 0;
 	if (!priv->rxqs_n)
 		return rte_flow_error_set(error, EINVAL,
 					  RTE_FLOW_ERROR_TYPE_ACTION_CONF,
@@ -1645,11 +1651,6 @@ mlx5_flow_validate_action_queue(const struct rte_flow_action *action,
 					  RTE_FLOW_ERROR_TYPE_ACTION_CONF,
 					  &queue->index,
 					  "queue is not configured");
-	if (attr->egress)
-		return rte_flow_error_set(error, ENOTSUP,
-					  RTE_FLOW_ERROR_TYPE_ATTR_EGRESS, NULL,
-					  "queue action not supported for "
-					  "egress");
 	return 0;
 }
 
@@ -1664,7 +1665,7 @@ mlx5_flow_validate_action_queue(const struct rte_flow_action *action,
  *   Size of the @p queues array.
  * @param[out] error
  *   On error, filled with a textual error description.
- * @param[out] queue
+ * @param[out] queue_idx
  *   On error, filled with an offending queue index in @p queues array.
  *
  * @return
@@ -1677,17 +1678,27 @@ mlx5_validate_rss_queues(struct rte_eth_dev *dev,
 {
 	const struct mlx5_priv *priv = dev->data->dev_private;
 	bool is_hairpin = false;
+	bool is_ext_rss = false;
 	uint32_t i;
 
 	for (i = 0; i != queues_n; ++i) {
-		struct mlx5_rxq_ctrl *rxq_ctrl = mlx5_rxq_ctrl_get(dev,
-								   queues[i]);
+		struct mlx5_rxq_ctrl *rxq_ctrl;
 
+		if (mlx5_is_external_rxq(dev, queues[i])) {
+			is_ext_rss = true;
+			continue;
+		}
+		if (is_ext_rss) {
+			*error = "Combining external and regular RSS queues is not supported";
+			*queue_idx = i;
+			return -ENOTSUP;
+		}
 		if (queues[i] >= priv->rxqs_n) {
 			*error = "queue index out of range";
 			*queue_idx = i;
 			return -EINVAL;
 		}
+		rxq_ctrl = mlx5_rxq_ctrl_get(dev, queues[i]);
 		if (rxq_ctrl == NULL) {
 			*error =  "queue is not configured";
 			*queue_idx = i;
@@ -1782,7 +1793,7 @@ mlx5_validate_action_rss(struct rte_eth_dev *dev,
 					  RTE_FLOW_ERROR_TYPE_ACTION_CONF, NULL,
 					  "L4 partial RSS requested but L4 RSS"
 					  " type not specified");
-	if (!priv->rxqs_n)
+	if (!priv->rxqs_n && priv->ext_rxqs == NULL)
 		return rte_flow_error_set(error, EINVAL,
 					  RTE_FLOW_ERROR_TYPE_ACTION_CONF,
 					  NULL, "No Rx queues configured");
diff --git a/drivers/net/mlx5/mlx5_rx.h b/drivers/net/mlx5/mlx5_rx.h
index 1e191a5704..353c7c05b2 100644
--- a/drivers/net/mlx5/mlx5_rx.h
+++ b/drivers/net/mlx5/mlx5_rx.h
@@ -18,6 +18,7 @@
 
 #include "mlx5.h"
 #include "mlx5_autoconf.h"
+#include "rte_pmd_mlx5.h"
 
 /* Support tunnel matching. */
 #define MLX5_FLOW_TUNNEL 10
@@ -218,8 +219,14 @@ uint32_t mlx5_rxq_deref(struct rte_eth_dev *dev, uint16_t idx);
 struct mlx5_rxq_priv *mlx5_rxq_get(struct rte_eth_dev *dev, uint16_t idx);
 struct mlx5_rxq_ctrl *mlx5_rxq_ctrl_get(struct rte_eth_dev *dev, uint16_t idx);
 struct mlx5_rxq_data *mlx5_rxq_data_get(struct rte_eth_dev *dev, uint16_t idx);
+struct mlx5_external_rxq *mlx5_ext_rxq_ref(struct rte_eth_dev *dev,
+					   uint16_t idx);
+uint32_t mlx5_ext_rxq_deref(struct rte_eth_dev *dev, uint16_t idx);
+struct mlx5_external_rxq *mlx5_ext_rxq_get(struct rte_eth_dev *dev,
+					   uint16_t idx);
 int mlx5_rxq_release(struct rte_eth_dev *dev, uint16_t idx);
 int mlx5_rxq_verify(struct rte_eth_dev *dev);
+int mlx5_ext_rxq_verify(struct rte_eth_dev *dev);
 int rxq_alloc_elts(struct mlx5_rxq_ctrl *rxq_ctrl);
 int mlx5_ind_table_obj_verify(struct rte_eth_dev *dev);
 struct mlx5_ind_table_obj *mlx5_ind_table_obj_get(struct rte_eth_dev *dev,
@@ -639,4 +646,27 @@ mlx5_mprq_enabled(struct rte_eth_dev *dev)
 	return n == n_ibv;
 }
 
+/**
+ * Check whether given RxQ is external.
+ *
+ * @param dev
+ *   Pointer to Ethernet device.
+ * @param queue_idx
+ *   Rx queue index.
+ *
+ * @return
+ *   True if the queue is an external RxQ, false otherwise.
+ */
+static __rte_always_inline bool
+mlx5_is_external_rxq(struct rte_eth_dev *dev, uint16_t queue_idx)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5_external_rxq *rxq;
+
+	if (!priv->ext_rxqs || queue_idx < MLX5_EXTERNAL_RX_QUEUE_ID_MIN)
+		return false;
+	rxq = &priv->ext_rxqs[queue_idx - MLX5_EXTERNAL_RX_QUEUE_ID_MIN];
+	return __atomic_load_n(&rxq->refcnt, __ATOMIC_RELAXED);
+}
+
 #endif /* RTE_PMD_MLX5_RX_H_ */
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index 93adc09369..720a98650c 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -2090,6 +2090,65 @@ mlx5_rxq_data_get(struct rte_eth_dev *dev, uint16_t idx)
 	return rxq == NULL ? NULL : &rxq->ctrl->rxq;
 }
 
+/**
+ * Increase an external Rx queue reference count.
+ *
+ * @param dev
+ *   Pointer to Ethernet device.
+ * @param idx
+ *   External RX queue index.
+ *
+ * @return
+ *   A pointer to the queue if it exists, NULL otherwise.
+ */
+struct mlx5_external_rxq *
+mlx5_ext_rxq_ref(struct rte_eth_dev *dev, uint16_t idx)
+{
+	struct mlx5_external_rxq *rxq = mlx5_ext_rxq_get(dev, idx);
+
+	__atomic_fetch_add(&rxq->refcnt, 1, __ATOMIC_RELAXED);
+	return rxq;
+}
+
+/**
+ * Decrease an external Rx queue reference count.
+ *
+ * @param dev
+ *   Pointer to Ethernet device.
+ * @param idx
+ *   External RX queue index.
+ *
+ * @return
+ *   Updated reference count.
+ */
+uint32_t
+mlx5_ext_rxq_deref(struct rte_eth_dev *dev, uint16_t idx)
+{
+	struct mlx5_external_rxq *rxq = mlx5_ext_rxq_get(dev, idx);
+
+	return __atomic_sub_fetch(&rxq->refcnt, 1, __ATOMIC_RELAXED);
+}
+
+/**
+ * Get an external Rx queue.
+ *
+ * @param dev
+ *   Pointer to Ethernet device.
+ * @param idx
+ *   External Rx queue index.
+ *
+ * @return
+ *   A pointer to the queue if it exists, NULL otherwise.
+ */
+struct mlx5_external_rxq *
+mlx5_ext_rxq_get(struct rte_eth_dev *dev, uint16_t idx)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+
+	MLX5_ASSERT(mlx5_is_external_rxq(dev, idx));
+	return &priv->ext_rxqs[idx - MLX5_EXTERNAL_RX_QUEUE_ID_MIN];
+}
+
 /**
  * Release a Rx queue.
  *
@@ -2173,6 +2232,37 @@ mlx5_rxq_verify(struct rte_eth_dev *dev)
 	return ret;
 }
 
+/**
+ * Verify the external Rx Queue list is empty.
+ *
+ * @param dev
+ *   Pointer to Ethernet device.
+ *
+ * @return
+ *   The number of objects not released.
+ */
+int
+mlx5_ext_rxq_verify(struct rte_eth_dev *dev)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5_external_rxq *rxq;
+	uint32_t i;
+	int ret = 0;
+
+	if (priv->ext_rxqs == NULL)
+		return 0;
+
+	for (i = MLX5_EXTERNAL_RX_QUEUE_ID_MIN; i <= UINT16_MAX; ++i) {
+		rxq = mlx5_ext_rxq_get(dev, i);
+		if (rxq->refcnt < 2)
+			continue;
+		DRV_LOG(DEBUG, "Port %u external RxQ %u still referenced.",
+			dev->data->port_id, i);
+		++ret;
+	}
+	return ret;
+}
+
 /**
  * Check whether RxQ type is Hairpin.
  *
@@ -2188,8 +2278,11 @@ bool
 mlx5_rxq_is_hairpin(struct rte_eth_dev *dev, uint16_t idx)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
-	struct mlx5_rxq_ctrl *rxq_ctrl = mlx5_rxq_ctrl_get(dev, idx);
+	struct mlx5_rxq_ctrl *rxq_ctrl;
 
+	if (mlx5_is_external_rxq(dev, idx))
+		return false;
+	rxq_ctrl = mlx5_rxq_ctrl_get(dev, idx);
 	return (idx < priv->rxqs_n && rxq_ctrl != NULL && rxq_ctrl->is_hairpin);
 }
 
@@ -2367,9 +2460,16 @@ mlx5_ind_table_obj_setup(struct rte_eth_dev *dev,
 
 	if (ref_qs)
 		for (i = 0; i != queues_n; ++i) {
-			if (mlx5_rxq_ref(dev, queues[i]) == NULL) {
-				ret = -rte_errno;
-				goto error;
+			if (mlx5_is_external_rxq(dev, queues[i])) {
+				if (mlx5_ext_rxq_ref(dev, queues[i]) == NULL) {
+					ret = -rte_errno;
+					goto error;
+				}
+			} else {
+				if (mlx5_rxq_ref(dev, queues[i]) == NULL) {
+					ret = -rte_errno;
+					goto error;
+				}
 			}
 		}
 	ret = priv->obj_ops.ind_table_new(dev, n, ind_tbl);
@@ -2380,8 +2480,12 @@ mlx5_ind_table_obj_setup(struct rte_eth_dev *dev,
 error:
 	if (ref_qs) {
 		err = rte_errno;
-		for (j = 0; j < i; j++)
-			mlx5_rxq_deref(dev, queues[j]);
+		for (j = 0; j < i; j++) {
+			if (mlx5_is_external_rxq(dev, queues[j]))
+				mlx5_ext_rxq_deref(dev, queues[j]);
+			else
+				mlx5_rxq_deref(dev, queues[j]);
+		}
 		rte_errno = err;
 	}
 	DRV_LOG(DEBUG, "Port %u cannot setup indirection table.",
-- 
2.25.1


^ permalink raw reply	[flat|nested] 26+ messages in thread

* [PATCH 6/6] app/testpmd: add test for external RxQ
  2022-02-22 21:04 [PATCH 0/6] mlx5: external RxQ support Michael Baum
                   ` (4 preceding siblings ...)
  2022-02-22 21:04 ` [PATCH 5/6] net/mlx5: support queue/RSS action for external RxQ Michael Baum
@ 2022-02-22 21:04 ` Michael Baum
  2022-02-23 18:48 ` [PATCH v2 0/6] mlx5: external RxQ support Michael Baum
  6 siblings, 0 replies; 26+ messages in thread
From: Michael Baum @ 2022-02-22 21:04 UTC (permalink / raw)
  To: dev; +Cc: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko

Add a test for mapping and unmapping external RxQs.
This patch adds runtime commands to the testpmd application to exercise
the mapping API.

To insert a mapping, use this command:

  testpmd> port (port_id) ext_rxq map (rte_queue_id) (hw_queue_id)

To remove a mapping, use this command:

  testpmd> port (port_id) ext_rxq unmap (rte_queue_id)

Signed-off-by: Michael Baum <michaelba@nvidia.com>
---
 app/test-pmd/cmdline.c                      | 157 ++++++++++++++++++++
 app/test-pmd/meson.build                    |   3 +
 doc/guides/testpmd_app_ug/testpmd_funcs.rst |  16 ++
 3 files changed, 176 insertions(+)

diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index b4ba8da2b0..c0899ca6c5 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -63,6 +63,9 @@
 #ifdef RTE_NET_BNXT
 #include <rte_pmd_bnxt.h>
 #endif
+#ifdef RTE_NET_MLX5
+#include <rte_pmd_mlx5.h>
+#endif
 #include "testpmd.h"
 #include "cmdline_mtr.h"
 #include "cmdline_tm.h"
@@ -911,6 +914,15 @@ static void cmd_help_long_parsed(void *parsed_result,
 
 			"port cleanup (port_id) txq (queue_id) (free_cnt)\n"
 			"    Cleanup txq mbufs for a specific Tx queue\n\n"
+
+#ifdef RTE_NET_MLX5
+			"port (port_id) ext_rxq map (rte_queue_id) (hw_queue_id)\n"
+			"    Map HW queue index (32 bit) to rte_flow queue"
+			" index (16 bit) for external RxQ\n\n"
+
+			"port (port_id) ext_rxq unmap (rte_queue_id)\n"
+			"    Unmap external Rx queue rte_flow index mapping\n\n"
+#endif
 		);
 	}
 
@@ -17806,6 +17818,147 @@ cmdline_parse_inst_t cmd_show_port_flow_transfer_proxy = {
 	}
 };
 
+#ifdef RTE_NET_MLX5
+
+/* Map HW queue index to rte queue index. */
+struct cmd_map_ext_rxq {
+	cmdline_fixed_string_t port;
+	portid_t port_id;
+	cmdline_fixed_string_t ext_rxq;
+	cmdline_fixed_string_t map;
+	uint16_t rte_queue_id;
+	uint32_t hw_queue_id;
+};
+
+cmdline_parse_token_string_t cmd_map_ext_rxq_port =
+	TOKEN_STRING_INITIALIZER(struct cmd_map_ext_rxq, port, "port");
+cmdline_parse_token_num_t cmd_map_ext_rxq_port_id =
+	TOKEN_NUM_INITIALIZER(struct cmd_map_ext_rxq, port_id, RTE_UINT16);
+cmdline_parse_token_string_t cmd_map_ext_rxq_ext_rxq =
+	TOKEN_STRING_INITIALIZER(struct cmd_map_ext_rxq, ext_rxq, "ext_rxq");
+cmdline_parse_token_string_t cmd_map_ext_rxq_map =
+	TOKEN_STRING_INITIALIZER(struct cmd_map_ext_rxq, map, "map");
+cmdline_parse_token_num_t cmd_map_ext_rxq_rte_queue_id =
+	TOKEN_NUM_INITIALIZER(struct cmd_map_ext_rxq, rte_queue_id, RTE_UINT16);
+cmdline_parse_token_num_t cmd_map_ext_rxq_hw_queue_id =
+	TOKEN_NUM_INITIALIZER(struct cmd_map_ext_rxq, hw_queue_id, RTE_UINT32);
+
+static void
+cmd_map_ext_rxq_parsed(void *parsed_result,
+		       __rte_unused struct cmdline *cl,
+		       __rte_unused void *data)
+{
+	struct cmd_map_ext_rxq *res = parsed_result;
+	int ret;
+
+	if (port_id_is_invalid(res->port_id, ENABLED_WARN))
+		return;
+	ret = rte_pmd_mlx5_external_rx_queue_id_map(res->port_id,
+						    res->rte_queue_id,
+						    res->hw_queue_id);
+	switch (ret) {
+	case 0:
+		break;
+	case -EINVAL:
+		fprintf(stderr, "invalid rte_flow index (%u), out of range\n",
+			res->rte_queue_id);
+		break;
+	case -ENODEV:
+		fprintf(stderr, "invalid port_id %u\n", res->port_id);
+		break;
+	case -ENOTSUP:
+		fprintf(stderr, "function not implemented or supported\n");
+		break;
+	case -EEXIST:
+		fprintf(stderr, "mapping with index %u already exists\n",
+			res->rte_queue_id);
+		break;
+	default:
+		fprintf(stderr, "programming error: (%s)\n", strerror(-ret));
+	}
+}
+
+cmdline_parse_inst_t cmd_map_ext_rxq = {
+	.f = cmd_map_ext_rxq_parsed,
+	.data = NULL,
+	.help_str = "port <port_id> ext_rxq map <rte_queue_id> <hw_queue_id>",
+	.tokens = {
+		(void *)&cmd_map_ext_rxq_port,
+		(void *)&cmd_map_ext_rxq_port_id,
+		(void *)&cmd_map_ext_rxq_ext_rxq,
+		(void *)&cmd_map_ext_rxq_map,
+		(void *)&cmd_map_ext_rxq_rte_queue_id,
+		(void *)&cmd_map_ext_rxq_hw_queue_id,
+		NULL,
+	}
+};
+
+/* Unmap HW queue index from rte queue index. */
+struct cmd_unmap_ext_rxq {
+	cmdline_fixed_string_t port;
+	portid_t port_id;
+	cmdline_fixed_string_t ext_rxq;
+	cmdline_fixed_string_t unmap;
+	uint16_t queue_id;
+};
+
+cmdline_parse_token_string_t cmd_unmap_ext_rxq_port =
+	TOKEN_STRING_INITIALIZER(struct cmd_unmap_ext_rxq, port, "port");
+cmdline_parse_token_num_t cmd_unmap_ext_rxq_port_id =
+	TOKEN_NUM_INITIALIZER(struct cmd_unmap_ext_rxq, port_id, RTE_UINT16);
+cmdline_parse_token_string_t cmd_unmap_ext_rxq_ext_rxq =
+	TOKEN_STRING_INITIALIZER(struct cmd_unmap_ext_rxq, ext_rxq, "ext_rxq");
+cmdline_parse_token_string_t cmd_unmap_ext_rxq_unmap =
+	TOKEN_STRING_INITIALIZER(struct cmd_unmap_ext_rxq, unmap, "unmap");
+cmdline_parse_token_num_t cmd_unmap_ext_rxq_queue_id =
+	TOKEN_NUM_INITIALIZER(struct cmd_unmap_ext_rxq, queue_id, RTE_UINT16);
+
+static void
+cmd_unmap_ext_rxq_parsed(void *parsed_result,
+			 __rte_unused struct cmdline *cl,
+			 __rte_unused void *data)
+{
+	struct cmd_unmap_ext_rxq *res = parsed_result;
+	int ret;
+
+	if (port_id_is_invalid(res->port_id, ENABLED_WARN))
+		return;
+	ret = rte_pmd_mlx5_external_rx_queue_id_unmap(res->port_id,
+						      res->queue_id);
+	switch (ret) {
+	case 0:
+		break;
+	case -EINVAL:
+		fprintf(stderr, "invalid rte_flow index (%u), "
+			"out of range or doesn't exist\n", res->queue_id);
+		break;
+	case -ENODEV:
+		fprintf(stderr, "invalid port_id %u\n", res->port_id);
+		break;
+	case -ENOTSUP:
+		fprintf(stderr, "function not implemented or supported\n");
+		break;
+	default:
+		fprintf(stderr, "programming error: (%s)\n", strerror(-ret));
+	}
+}
+
+cmdline_parse_inst_t cmd_unmap_ext_rxq = {
+	.f = cmd_unmap_ext_rxq_parsed,
+	.data = NULL,
+	.help_str = "port <port_id> ext_rxq unmap <queue_id>",
+	.tokens = {
+		(void *)&cmd_unmap_ext_rxq_port,
+		(void *)&cmd_unmap_ext_rxq_port_id,
+		(void *)&cmd_unmap_ext_rxq_ext_rxq,
+		(void *)&cmd_unmap_ext_rxq_unmap,
+		(void *)&cmd_unmap_ext_rxq_queue_id,
+		NULL,
+	}
+};
+
+#endif /* RTE_NET_MLX5 */
+
 /* ******************************************************************************** */
 
 /* list of instructions */
@@ -18092,6 +18245,10 @@ cmdline_parse_ctx_t main_ctx[] = {
 	(cmdline_parse_inst_t *)&cmd_show_capability,
 	(cmdline_parse_inst_t *)&cmd_set_flex_is_pattern,
 	(cmdline_parse_inst_t *)&cmd_set_flex_spec_pattern,
+#ifdef RTE_NET_MLX5
+	(cmdline_parse_inst_t *)&cmd_map_ext_rxq,
+	(cmdline_parse_inst_t *)&cmd_unmap_ext_rxq,
+#endif
 	NULL,
 };
 
diff --git a/app/test-pmd/meson.build b/app/test-pmd/meson.build
index 43130c8856..c4fd379e67 100644
--- a/app/test-pmd/meson.build
+++ b/app/test-pmd/meson.build
@@ -73,3 +73,6 @@ endif
 if dpdk_conf.has('RTE_NET_DPAA')
     deps += ['bus_dpaa', 'mempool_dpaa', 'net_dpaa']
 endif
+if dpdk_conf.has('RTE_NET_MLX5')
+    deps += 'net_mlx5'
+endif
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index 9cc248084f..613d281923 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -2462,6 +2462,22 @@ To cleanup txq mbufs currently cached by driver::
 
 If the value of ``free_cnt`` is 0, driver should free all cached mbufs.
 
+port map external RxQ
+~~~~~~~~~~~~~~~~~~~~~
+
+Map HW queue index (32 bit) to rte_flow queue index (16 bit) for external RxQ::
+
+   testpmd> port (port_id) ext_rxq map (rte_queue_id) (hw_queue_id)
+
+Unmap external Rx queue rte_flow index mapping::
+
+   testpmd> port (port_id) ext_rxq unmap (rte_queue_id)
+
+where:
+
+* ``rte_queue_id``: queue index in the range [64536, 65535].
+* ``hw_queue_id``: queue index given by HW in queue creation.
+
 Device Functions
 ----------------
 
-- 
2.25.1


^ permalink raw reply	[flat|nested] 26+ messages in thread

* [PATCH v2 0/6] mlx5: external RxQ support
  2022-02-22 21:04 [PATCH 0/6] mlx5: external RxQ support Michael Baum
                   ` (5 preceding siblings ...)
  2022-02-22 21:04 ` [PATCH 6/6] app/testpmd: add test " Michael Baum
@ 2022-02-23 18:48 ` Michael Baum
  2022-02-23 18:48   ` [PATCH v2 1/6] common/mlx5: consider local functions as internal Michael Baum
                     ` (7 more replies)
  6 siblings, 8 replies; 26+ messages in thread
From: Michael Baum @ 2022-02-23 18:48 UTC (permalink / raw)
  To: dev; +Cc: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko

These patches add support for external Rx queues.
An external queue is a queue that is managed by a process external to the
PMD, but which uses the PMD to generate its flow rules.

For the hardware to allow the DPDK process to set rules for it, the
process needs to use the same PD of the external process. In addition,
the indexes of the queues in hardware are represented by 32-bit compared
to the rte_flow indexes represented by 16-bit, so the processes need to
share some mapping between the indexes.

These patches allow the external process to provide devargs which enable
importing its context and PD, instead of preparing new ones. In addition,
an API is provided for mapping the indexes of the queues.

v2:
- Rebase.
- Add ABI exception for common/mlx5 library.
- Correct DevX flag updating.
- Improve explanations in doc and comments.
- Remove testpmd part.


Michael Baum (6):
  common/mlx5: consider local functions as internal
  common/mlx5: glue device and PD importation
  common/mlx5: add remote PD and CTX support
  net/mlx5: optimize RxQ/TxQ control structure
  net/mlx5: add external RxQ mapping API
  net/mlx5: support queue/RSS action for external RxQ

 devtools/libabigail.abignore                 |   4 +
 doc/guides/nics/mlx5.rst                     |   1 +
 doc/guides/platform/mlx5.rst                 |  37 ++-
 doc/guides/rel_notes/release_22_03.rst       |   1 +
 drivers/common/mlx5/linux/meson.build        |   2 +
 drivers/common/mlx5/linux/mlx5_common_os.c   | 196 ++++++++++++--
 drivers/common/mlx5/linux/mlx5_common_os.h   |   7 +-
 drivers/common/mlx5/linux/mlx5_glue.c        |  41 +++
 drivers/common/mlx5/linux/mlx5_glue.h        |   4 +
 drivers/common/mlx5/mlx5_common.c            |  64 ++++-
 drivers/common/mlx5/mlx5_common.h            |  23 +-
 drivers/common/mlx5/version.map              |   3 +
 drivers/common/mlx5/windows/mlx5_common_os.c |  37 ++-
 drivers/common/mlx5/windows/mlx5_common_os.h |   1 -
 drivers/net/mlx5/linux/mlx5_os.c             |  18 ++
 drivers/net/mlx5/mlx5.c                      |   6 +
 drivers/net/mlx5/mlx5.h                      |   1 +
 drivers/net/mlx5/mlx5_defs.h                 |   3 +
 drivers/net/mlx5/mlx5_devx.c                 |  52 ++--
 drivers/net/mlx5/mlx5_ethdev.c               |  18 +-
 drivers/net/mlx5/mlx5_flow.c                 |  43 ++--
 drivers/net/mlx5/mlx5_flow_dv.c              |  14 +-
 drivers/net/mlx5/mlx5_rx.h                   |  49 +++-
 drivers/net/mlx5/mlx5_rxq.c                  | 258 +++++++++++++++++--
 drivers/net/mlx5/mlx5_trigger.c              |  36 +--
 drivers/net/mlx5/mlx5_tx.h                   |   7 +-
 drivers/net/mlx5/mlx5_txq.c                  |  14 +-
 drivers/net/mlx5/rte_pmd_mlx5.h              |  50 +++-
 drivers/net/mlx5/version.map                 |   3 +
 29 files changed, 821 insertions(+), 172 deletions(-)

-- 
2.25.1


^ permalink raw reply	[flat|nested] 26+ messages in thread

* [PATCH v2 1/6] common/mlx5: consider local functions as internal
  2022-02-23 18:48 ` [PATCH v2 0/6] mlx5: external RxQ support Michael Baum
@ 2022-02-23 18:48   ` Michael Baum
  2022-02-23 18:48   ` [PATCH v2 2/6] common/mlx5: glue device and PD importation Michael Baum
                     ` (6 subsequent siblings)
  7 siblings, 0 replies; 26+ messages in thread
From: Michael Baum @ 2022-02-23 18:48 UTC (permalink / raw)
  To: dev; +Cc: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko

The functions which are not explicitly marked as internal
were exported because the local catch-all rule was missing in the
version script.
After adding the missing rule, all local functions are hidden.
The function mlx5_get_device_guid is used in another library,
so it needs to be exported (as internal).

Because the local functions were exported as non-internal
in DPDK 21.11, any change in these functions would break the ABI.
An ABI exception is added for this library, considering that all
functions are either local or internal.

Signed-off-by: Michael Baum <michaelba@nvidia.com>
---
 devtools/libabigail.abignore               | 4 ++++
 drivers/common/mlx5/linux/mlx5_common_os.h | 1 +
 drivers/common/mlx5/version.map            | 3 +++
 3 files changed, 8 insertions(+)

diff --git a/devtools/libabigail.abignore b/devtools/libabigail.abignore
index ef0602975a..78d57497e6 100644
--- a/devtools/libabigail.abignore
+++ b/devtools/libabigail.abignore
@@ -20,3 +20,7 @@
 ; Ignore changes to rte_crypto_asym_op, asymmetric crypto API is experimental
 [suppress_type]
         name = rte_crypto_asym_op
+
+; Ignore changes in common mlx5 driver, should be all internal
+[suppress_file]
+        soname_regexp = ^librte_common_mlx5\.
\ No newline at end of file
diff --git a/drivers/common/mlx5/linux/mlx5_common_os.h b/drivers/common/mlx5/linux/mlx5_common_os.h
index 83066e752d..edf356a30a 100644
--- a/drivers/common/mlx5/linux/mlx5_common_os.h
+++ b/drivers/common/mlx5/linux/mlx5_common_os.h
@@ -300,6 +300,7 @@ mlx5_set_context_attr(struct rte_device *dev, struct ibv_context *ctx);
  *  0 if OFED doesn't support.
  *  >0 if success.
  */
+__rte_internal
 int
 mlx5_get_device_guid(const struct rte_pci_addr *dev, uint8_t *guid, size_t len);
 
diff --git a/drivers/common/mlx5/version.map b/drivers/common/mlx5/version.map
index 1c6153c576..cb20a7d893 100644
--- a/drivers/common/mlx5/version.map
+++ b/drivers/common/mlx5/version.map
@@ -80,6 +80,7 @@ INTERNAL {
 
 	mlx5_free;
 
+	mlx5_get_device_guid; # WINDOWS_NO_EXPORT
 	mlx5_get_ifname_sysfs; # WINDOWS_NO_EXPORT
 	mlx5_get_pci_addr; # WINDOWS_NO_EXPORT
 
@@ -149,4 +150,6 @@ INTERNAL {
 	mlx5_mp_req_mempool_reg;
 	mlx5_mr_mempool2mr_bh;
 	mlx5_mr_mempool_populate_cache;
+
+	local: *;
 };
-- 
2.25.1


^ permalink raw reply	[flat|nested] 26+ messages in thread
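[Editorial note: the fix in this patch hinges on the `local: *;` catch-all added to the version script. As a minimal sketch (hypothetical symbol name, not one of the mlx5 ones), a GNU ld version script that exports one symbol and hides everything else looks like:]

```
LIB_INTERNAL {
	global:
	lib_do_work;   # explicitly exported
	local:
	*;             # every other symbol stays hidden
};
```

Without the `local: *;` line, symbols not listed under `global:` keep their default visibility and leak into the ABI, which is what the commit message above describes for the functions exported unintentionally in DPDK 21.11.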

* [PATCH v2 2/6] common/mlx5: glue device and PD importation
  2022-02-23 18:48 ` [PATCH v2 0/6] mlx5: external RxQ support Michael Baum
  2022-02-23 18:48   ` [PATCH v2 1/6] common/mlx5: consider local functions as internal Michael Baum
@ 2022-02-23 18:48   ` Michael Baum
  2022-02-23 18:48   ` [PATCH v2 3/6] common/mlx5: add remote PD and CTX support Michael Baum
                     ` (5 subsequent siblings)
  7 siblings, 0 replies; 26+ messages in thread
From: Michael Baum @ 2022-02-23 18:48 UTC (permalink / raw)
  To: dev; +Cc: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko

Add support for the rdma-core API to import a device.
The API receives an ibv_context file descriptor and returns an
ibv_context pointer associated with the given file descriptor.
Also add support for the rdma-core API to import a PD.
The API receives an ibv_context and a PD handle and returns a protection
domain (PD) associated with the given handle in the given context.

Signed-off-by: Michael Baum <michaelba@nvidia.com>
---
 drivers/common/mlx5/linux/meson.build |  2 ++
 drivers/common/mlx5/linux/mlx5_glue.c | 41 +++++++++++++++++++++++++++
 drivers/common/mlx5/linux/mlx5_glue.h |  4 +++
 3 files changed, 47 insertions(+)

diff --git a/drivers/common/mlx5/linux/meson.build b/drivers/common/mlx5/linux/meson.build
index 4c7b53b9bd..ed48245c67 100644
--- a/drivers/common/mlx5/linux/meson.build
+++ b/drivers/common/mlx5/linux/meson.build
@@ -202,6 +202,8 @@ has_sym_args = [
             'mlx5dv_dr_domain_allow_duplicate_rules' ],
         [ 'HAVE_MLX5_IBV_REG_MR_IOVA', 'infiniband/verbs.h',
             'ibv_reg_mr_iova' ],
+        [ 'HAVE_MLX5_IBV_IMPORT_CTX_PD_AND_MR', 'infiniband/verbs.h',
+            'ibv_import_device' ],
 ]
 config = configuration_data()
 foreach arg:has_sym_args
diff --git a/drivers/common/mlx5/linux/mlx5_glue.c b/drivers/common/mlx5/linux/mlx5_glue.c
index bc6622053f..450dd6a06a 100644
--- a/drivers/common/mlx5/linux/mlx5_glue.c
+++ b/drivers/common/mlx5/linux/mlx5_glue.c
@@ -34,6 +34,32 @@ mlx5_glue_dealloc_pd(struct ibv_pd *pd)
 	return ibv_dealloc_pd(pd);
 }
 
+static struct ibv_pd *
+mlx5_glue_import_pd(struct ibv_context *context, uint32_t pd_handle)
+{
+#ifdef HAVE_MLX5_IBV_IMPORT_CTX_PD_AND_MR
+	return ibv_import_pd(context, pd_handle);
+#else
+	(void)context;
+	(void)pd_handle;
+	errno = ENOTSUP;
+	return NULL;
+#endif
+}
+
+static int
+mlx5_glue_unimport_pd(struct ibv_pd *pd)
+{
+#ifdef HAVE_MLX5_IBV_IMPORT_CTX_PD_AND_MR
+	ibv_unimport_pd(pd);
+	return 0;
+#else
+	(void)pd;
+	errno = ENOTSUP;
+	return -errno;
+#endif
+}
+
 static struct ibv_device **
 mlx5_glue_get_device_list(int *num_devices)
 {
@@ -52,6 +78,18 @@ mlx5_glue_open_device(struct ibv_device *device)
 	return ibv_open_device(device);
 }
 
+static struct ibv_context *
+mlx5_glue_import_device(int cmd_fd)
+{
+#ifdef HAVE_MLX5_IBV_IMPORT_CTX_PD_AND_MR
+	return ibv_import_device(cmd_fd);
+#else
+	(void)cmd_fd;
+	errno = ENOTSUP;
+	return NULL;
+#endif
+}
+
 static int
 mlx5_glue_close_device(struct ibv_context *context)
 {
@@ -1402,9 +1440,12 @@ const struct mlx5_glue *mlx5_glue = &(const struct mlx5_glue) {
 	.fork_init = mlx5_glue_fork_init,
 	.alloc_pd = mlx5_glue_alloc_pd,
 	.dealloc_pd = mlx5_glue_dealloc_pd,
+	.import_pd = mlx5_glue_import_pd,
+	.unimport_pd = mlx5_glue_unimport_pd,
 	.get_device_list = mlx5_glue_get_device_list,
 	.free_device_list = mlx5_glue_free_device_list,
 	.open_device = mlx5_glue_open_device,
+	.import_device = mlx5_glue_import_device,
 	.close_device = mlx5_glue_close_device,
 	.query_device = mlx5_glue_query_device,
 	.query_device_ex = mlx5_glue_query_device_ex,
diff --git a/drivers/common/mlx5/linux/mlx5_glue.h b/drivers/common/mlx5/linux/mlx5_glue.h
index 4e6d31f263..c4903a6dce 100644
--- a/drivers/common/mlx5/linux/mlx5_glue.h
+++ b/drivers/common/mlx5/linux/mlx5_glue.h
@@ -151,9 +151,13 @@ struct mlx5_glue {
 	int (*fork_init)(void);
 	struct ibv_pd *(*alloc_pd)(struct ibv_context *context);
 	int (*dealloc_pd)(struct ibv_pd *pd);
+	struct ibv_pd *(*import_pd)(struct ibv_context *context,
+				    uint32_t pd_handle);
+	int (*unimport_pd)(struct ibv_pd *pd);
 	struct ibv_device **(*get_device_list)(int *num_devices);
 	void (*free_device_list)(struct ibv_device **list);
 	struct ibv_context *(*open_device)(struct ibv_device *device);
+	struct ibv_context *(*import_device)(int cmd_fd);
 	int (*close_device)(struct ibv_context *context);
 	int (*query_device)(struct ibv_context *context,
 			    struct ibv_device_attr *device_attr);
-- 
2.25.1


^ permalink raw reply	[flat|nested] 26+ messages in thread

* [PATCH v2 3/6] common/mlx5: add remote PD and CTX support
  2022-02-23 18:48 ` [PATCH v2 0/6] mlx5: external RxQ support Michael Baum
  2022-02-23 18:48   ` [PATCH v2 1/6] common/mlx5: consider local functions as internal Michael Baum
  2022-02-23 18:48   ` [PATCH v2 2/6] common/mlx5: glue device and PD importation Michael Baum
@ 2022-02-23 18:48   ` Michael Baum
  2022-02-23 18:48   ` [PATCH v2 4/6] net/mlx5: optimize RxQ/TxQ control structure Michael Baum
                     ` (4 subsequent siblings)
  7 siblings, 0 replies; 26+ messages in thread
From: Michael Baum @ 2022-02-23 18:48 UTC (permalink / raw)
  To: dev; +Cc: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko

Add an option to probe the common device using the import CTX/PD
functions instead of the create functions.
This option requires passing the context FD and the PD handle as
devargs.

Such sharing can be useful for applications that use the PMD for only
some operations, for example, an application that creates its queues
itself and uses the PMD only to configure flow rules.

Signed-off-by: Michael Baum <michaelba@nvidia.com>
---
 doc/guides/platform/mlx5.rst                 |  37 +++-
 drivers/common/mlx5/linux/mlx5_common_os.c   | 196 ++++++++++++++++---
 drivers/common/mlx5/linux/mlx5_common_os.h   |   6 -
 drivers/common/mlx5/mlx5_common.c            |  64 ++++--
 drivers/common/mlx5/mlx5_common.h            |  23 ++-
 drivers/common/mlx5/windows/mlx5_common_os.c |  37 +++-
 drivers/common/mlx5/windows/mlx5_common_os.h |   1 -
 7 files changed, 313 insertions(+), 51 deletions(-)

diff --git a/doc/guides/platform/mlx5.rst b/doc/guides/platform/mlx5.rst
index d073c213ca..d2ed094357 100644
--- a/doc/guides/platform/mlx5.rst
+++ b/doc/guides/platform/mlx5.rst
@@ -81,6 +81,12 @@ Limitations
 - On Windows, only ``eth`` and ``crypto`` are supported.
 
 
+Features
+--------
+
+- Remote PD and CTX - Linux only.
+
+
 .. _mlx5_common_compilation:
 
 Compilation Prerequisites
@@ -638,4 +644,33 @@ and below are the arguments supported by the common mlx5 layer.
 
   If ``sq_db_nc`` is omitted, the preset (if any) environment variable
   "MLX5_SHUT_UP_BF" value is used. If there is no "MLX5_SHUT_UP_BF", the
-  default ``sq_db_nc`` value is zero for ARM64 hosts and one for others.
\ No newline at end of file
+  default ``sq_db_nc`` value is zero for ARM64 hosts and one for others.
+
+- ``cmd_fd`` parameter [int]
+
+  File descriptor of ibv_context created outside the PMD.
+  PMD will use this FD to import the remote CTX. The ``cmd_fd`` is obtained
+  from the ibv_context cmd_fd member, which must be dup'd before being passed.
+  This parameter is valid only if the ``pd_handle`` parameter is specified.
+
+  By default, the PMD will ignore this parameter and create a new ibv_context.
+
+  .. note::
+
+     When the FD comes from another process, it is the user's responsibility
+     to share the FD between the processes (e.g. by SCM_RIGHTS).
+
+- ``pd_handle`` parameter [int]
+
+  Protection domain handle of ibv_pd created outside the PMD.
+  PMD will use this handle to import the remote PD. The ``pd_handle`` can be
+  obtained from the original PD by getting its ibv_pd->handle member value.
+  This parameter is valid only if the ``cmd_fd`` parameter is specified, and
+  its value must be a valid kernel handle for a PD object in the context
+  represented by the given ``cmd_fd``.
+
+  By default, the PMD will ignore this parameter and allocate a new PD.
+
+  .. note::
+
+     The ibv_pd->handle member is different from the mlx5dv_pd->pdn member.
\ No newline at end of file
diff --git a/drivers/common/mlx5/linux/mlx5_common_os.c b/drivers/common/mlx5/linux/mlx5_common_os.c
index a752d79e8e..a3c25638da 100644
--- a/drivers/common/mlx5/linux/mlx5_common_os.c
+++ b/drivers/common/mlx5/linux/mlx5_common_os.c
@@ -408,27 +408,128 @@ mlx5_glue_constructor(void)
 }
 
 /**
- * Allocate Protection Domain object and extract its pdn using DV API.
+ * Validate user arguments for remote PD and CTX.
+ *
+ * @param config
+ *   Pointer to device configuration structure.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+int
+mlx5_os_remote_pd_and_ctx_validate(struct mlx5_common_dev_config *config)
+{
+	int device_fd = config->device_fd;
+	int pd_handle = config->pd_handle;
+
+#ifdef HAVE_MLX5_IBV_IMPORT_CTX_PD_AND_MR
+	if (device_fd == MLX5_ARG_UNSET && pd_handle != MLX5_ARG_UNSET) {
+		DRV_LOG(ERR, "Remote PD without CTX is not supported.");
+		rte_errno = EINVAL;
+		return -rte_errno;
+	}
+	if (device_fd != MLX5_ARG_UNSET && pd_handle == MLX5_ARG_UNSET) {
+		DRV_LOG(ERR, "Remote CTX without PD is not supported.");
+		rte_errno = EINVAL;
+		return -rte_errno;
+	}
+	DRV_LOG(DEBUG, "Remote PD and CTX is supported: (cmd_fd=%d, "
+		"pd_handle=%d).", device_fd, pd_handle);
+#else
+	if (pd_handle != MLX5_ARG_UNSET || device_fd != MLX5_ARG_UNSET) {
+		DRV_LOG(ERR,
+			"Remote PD and CTX is not supported - maybe old rdma-core version?");
+		rte_errno = ENOTSUP;
+		return -rte_errno;
+	}
+#endif
+	return 0;
+}
+
+/**
+ * Release Protection Domain object.
  *
  * @param[out] cdev
  *   Pointer to the mlx5 device.
  *
  * @return
- *   0 on success, a negative errno value otherwise and rte_errno is set.
+ *   0 on success, a negative errno value otherwise.
  */
 int
+mlx5_os_pd_release(struct mlx5_common_device *cdev)
+{
+	if (cdev->config.pd_handle == MLX5_ARG_UNSET)
+		return mlx5_glue->dealloc_pd(cdev->pd);
+	else
+		return mlx5_glue->unimport_pd(cdev->pd);
+}
+
+/**
+ * Allocate Protection Domain object.
+ *
+ * @param[out] cdev
+ *   Pointer to the mlx5 device.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise.
+ */
+static int
 mlx5_os_pd_create(struct mlx5_common_device *cdev)
+{
+	cdev->pd = mlx5_glue->alloc_pd(cdev->ctx);
+	if (cdev->pd == NULL) {
+		DRV_LOG(ERR, "Failed to allocate PD: %s", rte_strerror(errno));
+		return errno ? -errno : -ENOMEM;
+	}
+	return 0;
+}
+
+/**
+ * Import Protection Domain object according to given PD handle.
+ *
+ * @param[out] cdev
+ *   Pointer to the mlx5 device.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise.
+ */
+static int
+mlx5_os_pd_import(struct mlx5_common_device *cdev)
+{
+	cdev->pd = mlx5_glue->import_pd(cdev->ctx, cdev->config.pd_handle);
+	if (cdev->pd == NULL) {
+		DRV_LOG(ERR, "Failed to import PD using handle=%d: %s",
+			cdev->config.pd_handle, rte_strerror(errno));
+		return errno ? -errno : -ENOMEM;
+	}
+	return 0;
+}
+
+/**
+ * Prepare Protection Domain object and extract its pdn using DV API.
+ *
+ * @param[out] cdev
+ *   Pointer to the mlx5 device.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+int
+mlx5_os_pd_prepare(struct mlx5_common_device *cdev)
 {
 #ifdef HAVE_IBV_FLOW_DV_SUPPORT
 	struct mlx5dv_obj obj;
 	struct mlx5dv_pd pd_info;
-	int ret;
 #endif
+	int ret;
 
-	cdev->pd = mlx5_glue->alloc_pd(cdev->ctx);
-	if (cdev->pd == NULL) {
-		DRV_LOG(ERR, "Failed to allocate PD.");
-		return errno ? -errno : -ENOMEM;
+	if (cdev->config.pd_handle == MLX5_ARG_UNSET)
+		ret = mlx5_os_pd_create(cdev);
+	else
+		ret = mlx5_os_pd_import(cdev);
+	if (ret) {
+		rte_errno = -ret;
+		return ret;
 	}
 	if (cdev->config.devx == 0)
 		return 0;
@@ -438,15 +539,17 @@ mlx5_os_pd_create(struct mlx5_common_device *cdev)
 	ret = mlx5_glue->dv_init_obj(&obj, MLX5DV_OBJ_PD);
 	if (ret != 0) {
 		DRV_LOG(ERR, "Fail to get PD object info.");
-		mlx5_glue->dealloc_pd(cdev->pd);
+		rte_errno = errno;
+		claim_zero(mlx5_os_pd_release(cdev));
 		cdev->pd = NULL;
-		return -errno;
+		return -rte_errno;
 	}
 	cdev->pdn = pd_info.pdn;
 	return 0;
 #else
 	DRV_LOG(ERR, "Cannot get pdn - no DV support.");
-	return -ENOTSUP;
+	rte_errno = ENOTSUP;
+	return -rte_errno;
 #endif /* HAVE_IBV_FLOW_DV_SUPPORT */
 }
 
@@ -648,28 +751,28 @@ mlx5_restore_doorbell_mapping_env(int value)
 /**
  * Function API to open IB device.
  *
- *
  * @param cdev
  *   Pointer to the mlx5 device.
  * @param classes
  *   Chosen classes come from device arguments.
  *
  * @return
- *   0 on success, a negative errno value otherwise and rte_errno is set.
+ *   Pointer to ibv_context on success, NULL otherwise and rte_errno is set.
  */
-int
-mlx5_os_open_device(struct mlx5_common_device *cdev, uint32_t classes)
+static struct ibv_context *
+mlx5_open_device(struct mlx5_common_device *cdev, uint32_t classes)
 {
 	struct ibv_device *ibv;
 	struct ibv_context *ctx = NULL;
 	int dbmap_env;
 
+	MLX5_ASSERT(cdev->config.device_fd == MLX5_ARG_UNSET);
 	if (classes & MLX5_CLASS_VDPA)
 		ibv = mlx5_vdpa_get_ibv_dev(cdev->dev);
 	else
 		ibv = mlx5_os_get_ibv_dev(cdev->dev);
 	if (!ibv)
-		return -rte_errno;
+		return NULL;
 	DRV_LOG(INFO, "Dev information matches for device \"%s\".", ibv->name);
 	/*
 	 * Configure environment variable "MLX5_BF_SHUT_UP" before the device
@@ -682,29 +785,78 @@ mlx5_os_open_device(struct mlx5_common_device *cdev, uint32_t classes)
 	ctx = mlx5_glue->dv_open_device(ibv);
 	if (ctx) {
 		cdev->config.devx = 1;
-		DRV_LOG(DEBUG, "DevX is supported.");
 	} else if (classes == MLX5_CLASS_ETH) {
 		/* The environment variable is still configured. */
 		ctx = mlx5_glue->open_device(ibv);
 		if (ctx == NULL)
 			goto error;
-		DRV_LOG(DEBUG, "DevX is NOT supported.");
 	} else {
 		goto error;
 	}
 	/* The device is created, no need for environment. */
 	mlx5_restore_doorbell_mapping_env(dbmap_env);
-	/* Hint libmlx5 to use PMD allocator for data plane resources */
-	mlx5_set_context_attr(cdev->dev, ctx);
-	cdev->ctx = ctx;
-	return 0;
+	return ctx;
 error:
 	rte_errno = errno ? errno : ENODEV;
 	/* The device creation is failed, no need for environment. */
 	mlx5_restore_doorbell_mapping_env(dbmap_env);
 	DRV_LOG(ERR, "Failed to open IB device \"%s\".", ibv->name);
-	return -rte_errno;
+	return NULL;
+}
+
+/**
+ * Function API to import IB device.
+ *
+ * @param cdev
+ *   Pointer to the mlx5 device.
+ *
+ * @return
+ *   Pointer to ibv_context on success, NULL otherwise and rte_errno is set.
+ */
+static struct ibv_context *
+mlx5_import_device(struct mlx5_common_device *cdev)
+{
+	struct ibv_context *ctx = NULL;
+
+	MLX5_ASSERT(cdev->config.device_fd != MLX5_ARG_UNSET);
+	ctx = mlx5_glue->import_device(cdev->config.device_fd);
+	if (!ctx) {
+		DRV_LOG(ERR, "Failed to import device for fd=%d: %s",
+			cdev->config.device_fd, rte_strerror(errno));
+		rte_errno = errno;
+	}
+	return ctx;
+}
+
+/**
+ * Function API to prepare IB device.
+ *
+ * @param cdev
+ *   Pointer to the mlx5 device.
+ * @param classes
+ *   Chosen classes come from device arguments.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+int
+mlx5_os_open_device(struct mlx5_common_device *cdev, uint32_t classes)
+{
+
+	struct ibv_context *ctx = NULL;
+
+	if (cdev->config.device_fd == MLX5_ARG_UNSET)
+		ctx = mlx5_open_device(cdev, classes);
+	else
+		ctx = mlx5_import_device(cdev);
+	if (ctx == NULL)
+		return -rte_errno;
+	/* Hint libmlx5 to use PMD allocator for data plane resources */
+	mlx5_set_context_attr(cdev->dev, ctx);
+	cdev->ctx = ctx;
+	return 0;
 }
+
 int
 mlx5_get_device_guid(const struct rte_pci_addr *dev, uint8_t *guid, size_t len)
 {
diff --git a/drivers/common/mlx5/linux/mlx5_common_os.h b/drivers/common/mlx5/linux/mlx5_common_os.h
index edf356a30a..a85f3b5f3c 100644
--- a/drivers/common/mlx5/linux/mlx5_common_os.h
+++ b/drivers/common/mlx5/linux/mlx5_common_os.h
@@ -203,12 +203,6 @@ mlx5_os_get_devx_uar_page_id(void *uar)
 #endif
 }
 
-static inline int
-mlx5_os_dealloc_pd(void *pd)
-{
-	return mlx5_glue->dealloc_pd(pd);
-}
-
 __rte_internal
 static inline void *
 mlx5_os_umem_reg(void *ctx, void *addr, size_t size, uint32_t access)
diff --git a/drivers/common/mlx5/mlx5_common.c b/drivers/common/mlx5/mlx5_common.c
index 8cf391df13..2175cea25d 100644
--- a/drivers/common/mlx5/mlx5_common.c
+++ b/drivers/common/mlx5/mlx5_common.c
@@ -24,6 +24,12 @@ uint8_t haswell_broadwell_cpu;
 /* Driver type key for new device global syntax. */
 #define MLX5_DRIVER_KEY "driver"
 
+/* Device parameter to get file descriptor for import device. */
+#define MLX5_DEVICE_FD "cmd_fd"
+
+/* Device parameter to get PD number for import Protection Domain. */
+#define MLX5_PD_HANDLE "pd_handle"
+
 /* Enable extending memsegs when creating a MR. */
 #define MLX5_MR_EXT_MEMSEG_EN "mr_ext_memseg_en"
 
@@ -283,6 +289,10 @@ mlx5_common_args_check_handler(const char *key, const char *val, void *opaque)
 		config->mr_mempool_reg_en = !!tmp;
 	} else if (strcmp(key, MLX5_SYS_MEM_EN) == 0) {
 		config->sys_mem_en = !!tmp;
+	} else if (strcmp(key, MLX5_DEVICE_FD) == 0) {
+		config->device_fd = tmp;
+	} else if (strcmp(key, MLX5_PD_HANDLE) == 0) {
+		config->pd_handle = tmp;
 	}
 	return 0;
 }
@@ -310,6 +320,8 @@ mlx5_common_config_get(struct mlx5_kvargs_ctrl *mkvlist,
 		MLX5_MR_EXT_MEMSEG_EN,
 		MLX5_SYS_MEM_EN,
 		MLX5_MR_MEMPOOL_REG_EN,
+		MLX5_DEVICE_FD,
+		MLX5_PD_HANDLE,
 		NULL,
 	};
 	int ret = 0;
@@ -321,13 +333,19 @@ mlx5_common_config_get(struct mlx5_kvargs_ctrl *mkvlist,
 	config->mr_mempool_reg_en = 1;
 	config->sys_mem_en = 0;
 	config->dbnc = MLX5_ARG_UNSET;
+	config->device_fd = MLX5_ARG_UNSET;
+	config->pd_handle = MLX5_ARG_UNSET;
 	/* Process common parameters. */
 	ret = mlx5_kvargs_process(mkvlist, params,
 				  mlx5_common_args_check_handler, config);
 	if (ret) {
 		rte_errno = EINVAL;
-		ret = -rte_errno;
+		return -rte_errno;
 	}
+	/* Validate user arguments for remote PD and CTX if it is given. */
+	ret = mlx5_os_remote_pd_and_ctx_validate(config);
+	if (ret)
+		return ret;
 	DRV_LOG(DEBUG, "mr_ext_memseg_en is %u.", config->mr_ext_memseg_en);
 	DRV_LOG(DEBUG, "mr_mempool_reg_en is %u.", config->mr_mempool_reg_en);
 	DRV_LOG(DEBUG, "sys_mem_en is %u.", config->sys_mem_en);
@@ -645,7 +663,7 @@ static void
 mlx5_dev_hw_global_release(struct mlx5_common_device *cdev)
 {
 	if (cdev->pd != NULL) {
-		claim_zero(mlx5_os_dealloc_pd(cdev->pd));
+		claim_zero(mlx5_os_pd_release(cdev));
 		cdev->pd = NULL;
 	}
 	if (cdev->ctx != NULL) {
@@ -674,20 +692,27 @@ mlx5_dev_hw_global_prepare(struct mlx5_common_device *cdev, uint32_t classes)
 	ret = mlx5_os_open_device(cdev, classes);
 	if (ret < 0)
 		return ret;
-	/* Allocate Protection Domain object and extract its pdn. */
-	ret = mlx5_os_pd_create(cdev);
+	/*
+	 * When CTX is created by Verbs, query HCA attribute is unsupported.
+	 * When CTX is imported, we cannot know if it is created by DevX or
+	 * Verbs. So, we use query HCA attribute function to check it.
+	 */
+	if (cdev->config.devx || cdev->config.device_fd != MLX5_ARG_UNSET) {
+		/* Query HCA attributes. */
+		ret = mlx5_devx_cmd_query_hca_attr(cdev->ctx,
+						   &cdev->config.hca_attr);
+		if (ret) {
+			DRV_LOG(ERR, "Unable to read HCA caps in DevX mode.");
+			rte_errno = ENOTSUP;
+			goto error;
+		}
+		cdev->config.devx = 1;
+	}
+	DRV_LOG(DEBUG, "DevX is %ssupported.", cdev->config.devx ? "" : "NOT ");
+	/* Prepare Protection Domain object and extract its pdn. */
+	ret = mlx5_os_pd_prepare(cdev);
 	if (ret)
 		goto error;
-	/* All actions taken below are relevant only when DevX is supported */
-	if (cdev->config.devx == 0)
-		return 0;
-	/* Query HCA attributes. */
-	ret = mlx5_devx_cmd_query_hca_attr(cdev->ctx, &cdev->config.hca_attr);
-	if (ret) {
-		DRV_LOG(ERR, "Unable to read HCA capabilities.");
-		rte_errno = ENOTSUP;
-		goto error;
-	}
 	return 0;
 error:
 	mlx5_dev_hw_global_release(cdev);
@@ -826,6 +851,17 @@ mlx5_common_probe_again_args_validate(struct mlx5_common_device *cdev,
 			cdev->dev->name);
 		goto error;
 	}
+	if (cdev->config.device_fd ^ config->device_fd) {
+		DRV_LOG(ERR, "\"cmd_fd\" configuration mismatch for device %s.",
+			cdev->dev->name);
+		goto error;
+	}
+	if (cdev->config.pd_handle ^ config->pd_handle) {
+		DRV_LOG(ERR,
+			"\"pd_handle\" configuration mismatch for device %s.",
+			cdev->dev->name);
+		goto error;
+	}
 	if (cdev->config.sys_mem_en ^ config->sys_mem_en) {
 		DRV_LOG(ERR,
 			"\"sys_mem_en\" configuration mismatch for device %s.",
diff --git a/drivers/common/mlx5/mlx5_common.h b/drivers/common/mlx5/mlx5_common.h
index 49bcea1d91..451cdb4fad 100644
--- a/drivers/common/mlx5/mlx5_common.h
+++ b/drivers/common/mlx5/mlx5_common.h
@@ -446,6 +446,8 @@ void mlx5_common_init(void);
 struct mlx5_common_dev_config {
 	struct mlx5_hca_attr hca_attr; /* HCA attributes. */
 	int dbnc; /* Skip doorbell register write barrier. */
+	int device_fd; /* Device file descriptor for importation. */
+	int pd_handle; /* Protection Domain handle for importation.  */
 	unsigned int devx:1; /* Whether devx interface is available or not. */
 	unsigned int sys_mem_en:1; /* The default memory allocator. */
 	unsigned int mr_mempool_reg_en:1;
@@ -465,6 +467,23 @@ struct mlx5_common_device {
 	struct mlx5_common_dev_config config; /* Device configuration. */
 };
 
+/**
+ * Indicates whether PD and CTX are imported from another process,
+ * or created by this process.
+ *
+ * @param cdev
+ *   Pointer to common device.
+ *
+ * @return
+ *   True if PD and CTX are imported from another process, False otherwise.
+ */
+static inline bool
+mlx5_imported_pd_and_ctx(struct mlx5_common_device *cdev)
+{
+	return (cdev->config.device_fd != MLX5_ARG_UNSET &&
+		cdev->config.pd_handle != MLX5_ARG_UNSET);
+}
+
 /**
  * Initialization function for the driver called during device probing.
  */
@@ -554,7 +573,9 @@ mlx5_devx_uar_release(struct mlx5_uar *uar);
 /* mlx5_common_os.c */
 
 int mlx5_os_open_device(struct mlx5_common_device *cdev, uint32_t classes);
-int mlx5_os_pd_create(struct mlx5_common_device *cdev);
+int mlx5_os_pd_prepare(struct mlx5_common_device *cdev);
+int mlx5_os_pd_release(struct mlx5_common_device *cdev);
+int mlx5_os_remote_pd_and_ctx_validate(struct mlx5_common_dev_config *config);
 
 /* mlx5 PMD wrapped MR struct. */
 struct mlx5_pmd_wrapped_mr {
diff --git a/drivers/common/mlx5/windows/mlx5_common_os.c b/drivers/common/mlx5/windows/mlx5_common_os.c
index c3cfc315f2..f2fc7cd494 100644
--- a/drivers/common/mlx5/windows/mlx5_common_os.c
+++ b/drivers/common/mlx5/windows/mlx5_common_os.c
@@ -25,21 +25,46 @@ mlx5_glue_constructor(void)
 {
 }
 
+/**
+ * Validate user arguments for remote PD and CTX.
+ *
+ * @param config
+ *   Pointer to device configuration structure.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+int
+mlx5_os_remote_pd_and_ctx_validate(struct mlx5_common_dev_config *config)
+{
+	int device_fd = config->device_fd;
+	int pd_handle = config->pd_handle;
+
+	if (pd_handle != MLX5_ARG_UNSET || device_fd != MLX5_ARG_UNSET) {
+		DRV_LOG(ERR, "Remote PD and CTX is not supported on Windows.");
+		rte_errno = ENOTSUP;
+		return -rte_errno;
+	}
+	return 0;
+}
+
 /**
  * Release PD. Releases a given mlx5_pd object
  *
- * @param[in] pd
- *   Pointer to mlx5_pd.
+ * @param[in] cdev
+ *   Pointer to the mlx5 device.
  *
  * @return
  *   Zero if pd is released successfully, negative number otherwise.
  */
 int
-mlx5_os_dealloc_pd(void *pd)
+mlx5_os_pd_release(struct mlx5_common_device *cdev)
 {
+	struct mlx5_pd *pd = cdev->pd;
+
 	if (!pd)
 		return -EINVAL;
-	mlx5_devx_cmd_destroy(((struct mlx5_pd *)pd)->obj);
+	mlx5_devx_cmd_destroy(pd->obj);
 	mlx5_free(pd);
 	return 0;
 }
@@ -47,14 +72,14 @@ mlx5_os_dealloc_pd(void *pd)
 /**
  * Allocate Protection Domain object and extract its pdn using DV API.
  *
- * @param[out] dev
+ * @param[out] cdev
  *   Pointer to the mlx5 device.
  *
  * @return
  *   0 on success, a negative value otherwise.
  */
 int
-mlx5_os_pd_create(struct mlx5_common_device *cdev)
+mlx5_os_pd_prepare(struct mlx5_common_device *cdev)
 {
 	struct mlx5_pd *pd;
 
diff --git a/drivers/common/mlx5/windows/mlx5_common_os.h b/drivers/common/mlx5/windows/mlx5_common_os.h
index 61fc8dd761..ee7973f1ec 100644
--- a/drivers/common/mlx5/windows/mlx5_common_os.h
+++ b/drivers/common/mlx5/windows/mlx5_common_os.h
@@ -248,7 +248,6 @@ mlx5_os_devx_subscribe_devx_event(void *eventc,
 	return -ENOTSUP;
 }
 
-int mlx5_os_dealloc_pd(void *pd);
 __rte_internal
 void *mlx5_os_umem_reg(void *ctx, void *addr, size_t size, uint32_t access);
 __rte_internal
-- 
2.25.1


^ permalink raw reply	[flat|nested] 26+ messages in thread

* [PATCH v2 4/6] net/mlx5: optimize RxQ/TxQ control structure
  2022-02-23 18:48 ` [PATCH v2 0/6] mlx5: external RxQ support Michael Baum
                     ` (2 preceding siblings ...)
  2022-02-23 18:48   ` [PATCH v2 3/6] common/mlx5: add remote PD and CTX support Michael Baum
@ 2022-02-23 18:48   ` Michael Baum
  2022-02-23 18:48   ` [PATCH v2 5/6] net/mlx5: add external RxQ mapping API Michael Baum
                     ` (3 subsequent siblings)
  7 siblings, 0 replies; 26+ messages in thread
From: Michael Baum @ 2022-02-23 18:48 UTC (permalink / raw)
  To: dev; +Cc: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko

The RxQ/TxQ control structure has a field named type. This type is an
enum with values for standard and hairpin.
The field is used only to check whether the queue is of the hairpin
type or the standard type.

This patch replaces it with a boolean variable recording whether the
queue is a hairpin queue.

Signed-off-by: Michael Baum <michaelba@nvidia.com>
---
 drivers/net/mlx5/mlx5_devx.c    | 26 ++++++++++--------------
 drivers/net/mlx5/mlx5_ethdev.c  |  2 +-
 drivers/net/mlx5/mlx5_flow.c    | 14 ++++++-------
 drivers/net/mlx5/mlx5_flow_dv.c | 14 +++++--------
 drivers/net/mlx5/mlx5_rx.h      | 13 +++---------
 drivers/net/mlx5/mlx5_rxq.c     | 33 +++++++++++-------------------
 drivers/net/mlx5/mlx5_trigger.c | 36 ++++++++++++++++-----------------
 drivers/net/mlx5/mlx5_tx.h      |  7 +------
 drivers/net/mlx5/mlx5_txq.c     | 14 ++++++-------
 9 files changed, 64 insertions(+), 95 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_devx.c b/drivers/net/mlx5/mlx5_devx.c
index a9b8c2a1b7..e4bc90a30e 100644
--- a/drivers/net/mlx5/mlx5_devx.c
+++ b/drivers/net/mlx5/mlx5_devx.c
@@ -88,7 +88,7 @@ mlx5_devx_modify_rq(struct mlx5_rxq_priv *rxq, uint8_t type)
 	default:
 		break;
 	}
-	if (rxq->ctrl->type == MLX5_RXQ_TYPE_HAIRPIN)
+	if (rxq->ctrl->is_hairpin)
 		return mlx5_devx_cmd_modify_rq(rxq->ctrl->obj->rq, &rq_attr);
 	return mlx5_devx_cmd_modify_rq(rxq->devx_rq.rq, &rq_attr);
 }
@@ -162,7 +162,7 @@ mlx5_rxq_devx_obj_release(struct mlx5_rxq_priv *rxq)
 
 	if (rxq_obj == NULL)
 		return;
-	if (rxq_obj->rxq_ctrl->type == MLX5_RXQ_TYPE_HAIRPIN) {
+	if (rxq_obj->rxq_ctrl->is_hairpin) {
 		if (rxq_obj->rq == NULL)
 			return;
 		mlx5_devx_modify_rq(rxq, MLX5_RXQ_MOD_RDY2RST);
@@ -476,7 +476,7 @@ mlx5_rxq_devx_obj_new(struct mlx5_rxq_priv *rxq)
 
 	MLX5_ASSERT(rxq_data);
 	MLX5_ASSERT(tmpl);
-	if (rxq_ctrl->type == MLX5_RXQ_TYPE_HAIRPIN)
+	if (rxq_ctrl->is_hairpin)
 		return mlx5_rxq_obj_hairpin_new(rxq);
 	tmpl->rxq_ctrl = rxq_ctrl;
 	if (rxq_ctrl->irq && !rxq_ctrl->started) {
@@ -583,7 +583,7 @@ mlx5_devx_ind_table_create_rqt_attr(struct rte_eth_dev *dev,
 		struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, queues[i]);
 
 		MLX5_ASSERT(rxq != NULL);
-		if (rxq->ctrl->type == MLX5_RXQ_TYPE_HAIRPIN)
+		if (rxq->ctrl->is_hairpin)
 			rqt_attr->rq_list[i] = rxq->ctrl->obj->rq->id;
 		else
 			rqt_attr->rq_list[i] = rxq->devx_rq.rq->id;
@@ -706,17 +706,13 @@ mlx5_devx_tir_attr_set(struct rte_eth_dev *dev, const uint8_t *rss_key,
 		       int tunnel, struct mlx5_devx_tir_attr *tir_attr)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
-	enum mlx5_rxq_type rxq_obj_type;
+	bool is_hairpin;
 	bool lro = true;
 	uint32_t i;
 
 	/* NULL queues designate drop queue. */
 	if (ind_tbl->queues != NULL) {
-		struct mlx5_rxq_ctrl *rxq_ctrl =
-				mlx5_rxq_ctrl_get(dev, ind_tbl->queues[0]);
-		rxq_obj_type = rxq_ctrl != NULL ? rxq_ctrl->type :
-						  MLX5_RXQ_TYPE_STANDARD;
-
+		is_hairpin = mlx5_rxq_is_hairpin(dev, ind_tbl->queues[0]);
 		/* Enable TIR LRO only if all the queues were configured for. */
 		for (i = 0; i < ind_tbl->queues_n; ++i) {
 			struct mlx5_rxq_data *rxq_i =
@@ -728,7 +724,7 @@ mlx5_devx_tir_attr_set(struct rte_eth_dev *dev, const uint8_t *rss_key,
 			}
 		}
 	} else {
-		rxq_obj_type = priv->drop_queue.rxq->ctrl->type;
+		is_hairpin = priv->drop_queue.rxq->ctrl->is_hairpin;
 	}
 	memset(tir_attr, 0, sizeof(*tir_attr));
 	tir_attr->disp_type = MLX5_TIRC_DISP_TYPE_INDIRECT;
@@ -759,7 +755,7 @@ mlx5_devx_tir_attr_set(struct rte_eth_dev *dev, const uint8_t *rss_key,
 			(!!(hash_fields & MLX5_L4_DST_IBV_RX_HASH)) <<
 			 MLX5_RX_HASH_FIELD_SELECT_SELECTED_FIELDS_L4_DPORT;
 	}
-	if (rxq_obj_type == MLX5_RXQ_TYPE_HAIRPIN)
+	if (is_hairpin)
 		tir_attr->transport_domain = priv->sh->td->id;
 	else
 		tir_attr->transport_domain = priv->sh->tdn;
@@ -932,7 +928,7 @@ mlx5_rxq_devx_obj_drop_create(struct rte_eth_dev *dev)
 		goto error;
 	}
 	rxq_obj->rxq_ctrl = rxq_ctrl;
-	rxq_ctrl->type = MLX5_RXQ_TYPE_STANDARD;
+	rxq_ctrl->is_hairpin = false;
 	rxq_ctrl->sh = priv->sh;
 	rxq_ctrl->obj = rxq_obj;
 	rxq->ctrl = rxq_ctrl;
@@ -1232,7 +1228,7 @@ mlx5_txq_devx_obj_new(struct rte_eth_dev *dev, uint16_t idx)
 	struct mlx5_txq_ctrl *txq_ctrl =
 			container_of(txq_data, struct mlx5_txq_ctrl, txq);
 
-	if (txq_ctrl->type == MLX5_TXQ_TYPE_HAIRPIN)
+	if (txq_ctrl->is_hairpin)
 		return mlx5_txq_obj_hairpin_new(dev, idx);
 #if !defined(HAVE_MLX5DV_DEVX_UAR_OFFSET) && defined(HAVE_INFINIBAND_VERBS_H)
 	DRV_LOG(ERR, "Port %u Tx queue %u cannot create with DevX, no UAR.",
@@ -1371,7 +1367,7 @@ void
 mlx5_txq_devx_obj_release(struct mlx5_txq_obj *txq_obj)
 {
 	MLX5_ASSERT(txq_obj);
-	if (txq_obj->txq_ctrl->type == MLX5_TXQ_TYPE_HAIRPIN) {
+	if (txq_obj->txq_ctrl->is_hairpin) {
 		if (txq_obj->tis)
 			claim_zero(mlx5_devx_cmd_destroy(txq_obj->tis));
 #if defined(HAVE_MLX5DV_DEVX_UAR_OFFSET) || !defined(HAVE_INFINIBAND_VERBS_H)
diff --git a/drivers/net/mlx5/mlx5_ethdev.c b/drivers/net/mlx5/mlx5_ethdev.c
index 72bf8ac914..406761ccf8 100644
--- a/drivers/net/mlx5/mlx5_ethdev.c
+++ b/drivers/net/mlx5/mlx5_ethdev.c
@@ -173,7 +173,7 @@ mlx5_dev_configure_rss_reta(struct rte_eth_dev *dev)
 	for (i = 0, j = 0; i < rxqs_n; i++) {
 		struct mlx5_rxq_ctrl *rxq_ctrl = mlx5_rxq_ctrl_get(dev, i);
 
-		if (rxq_ctrl && rxq_ctrl->type == MLX5_RXQ_TYPE_STANDARD)
+		if (rxq_ctrl && !rxq_ctrl->is_hairpin)
 			rss_queue_arr[j++] = i;
 	}
 	rss_queue_n = j;
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index a87ac8e6d7..58f0aba294 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -1676,7 +1676,7 @@ mlx5_validate_rss_queues(struct rte_eth_dev *dev,
 			 const char **error, uint32_t *queue_idx)
 {
 	const struct mlx5_priv *priv = dev->data->dev_private;
-	enum mlx5_rxq_type rxq_type = MLX5_RXQ_TYPE_UNDEFINED;
+	bool is_hairpin = false;
 	uint32_t i;
 
 	for (i = 0; i != queues_n; ++i) {
@@ -1693,9 +1693,9 @@ mlx5_validate_rss_queues(struct rte_eth_dev *dev,
 			*queue_idx = i;
 			return -EINVAL;
 		}
-		if (i == 0)
-			rxq_type = rxq_ctrl->type;
-		if (rxq_type != rxq_ctrl->type) {
+		if (i == 0 && rxq_ctrl->is_hairpin)
+			is_hairpin = true;
+		if (is_hairpin != rxq_ctrl->is_hairpin) {
 			*error = "combining hairpin and regular RSS queues is not supported";
 			*queue_idx = i;
 			return -ENOTSUP;
@@ -5767,15 +5767,13 @@ flow_create_split_metadata(struct rte_eth_dev *dev,
 			const struct rte_flow_action_queue *queue;
 
 			queue = qrss->conf;
-			if (mlx5_rxq_get_type(dev, queue->index) ==
-			    MLX5_RXQ_TYPE_HAIRPIN)
+			if (mlx5_rxq_is_hairpin(dev, queue->index))
 				qrss = NULL;
 		} else if (qrss->type == RTE_FLOW_ACTION_TYPE_RSS) {
 			const struct rte_flow_action_rss *rss;
 
 			rss = qrss->conf;
-			if (mlx5_rxq_get_type(dev, rss->queue[0]) ==
-			    MLX5_RXQ_TYPE_HAIRPIN)
+			if (mlx5_rxq_is_hairpin(dev, rss->queue[0]))
 				qrss = NULL;
 		}
 	}
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index abd1c27538..c4cd5c894b 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -5771,8 +5771,7 @@ flow_dv_validate_action_sample(uint64_t *action_flags,
 	}
 	/* Continue validation for Xcap actions.*/
 	if ((sub_action_flags & MLX5_FLOW_XCAP_ACTIONS) &&
-	    (queue_index == 0xFFFF ||
-	     mlx5_rxq_get_type(dev, queue_index) != MLX5_RXQ_TYPE_HAIRPIN)) {
+	    (queue_index == 0xFFFF || !mlx5_rxq_is_hairpin(dev, queue_index))) {
 		if ((sub_action_flags & MLX5_FLOW_XCAP_ACTIONS) ==
 		     MLX5_FLOW_XCAP_ACTIONS)
 			return rte_flow_error_set(error, ENOTSUP,
@@ -7957,8 +7956,7 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
 	 */
 	if ((action_flags & (MLX5_FLOW_XCAP_ACTIONS |
 			     MLX5_FLOW_VLAN_ACTIONS)) &&
-	    (queue_index == 0xFFFF ||
-	     mlx5_rxq_get_type(dev, queue_index) != MLX5_RXQ_TYPE_HAIRPIN ||
+	    (queue_index == 0xFFFF || !mlx5_rxq_is_hairpin(dev, queue_index) ||
 	     ((conf = mlx5_rxq_get_hairpin_conf(dev, queue_index)) != NULL &&
 	     conf->tx_explicit != 0))) {
 		if ((action_flags & MLX5_FLOW_XCAP_ACTIONS) ==
@@ -10948,10 +10946,8 @@ flow_dv_translate_item_tx_queue(struct rte_eth_dev *dev,
 {
 	const struct mlx5_rte_flow_item_tx_queue *queue_m;
 	const struct mlx5_rte_flow_item_tx_queue *queue_v;
-	void *misc_m =
-		MLX5_ADDR_OF(fte_match_param, matcher, misc_parameters);
-	void *misc_v =
-		MLX5_ADDR_OF(fte_match_param, key, misc_parameters);
+	void *misc_m = MLX5_ADDR_OF(fte_match_param, matcher, misc_parameters);
+	void *misc_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters);
 	struct mlx5_txq_ctrl *txq;
 	uint32_t queue, mask;
 
@@ -10962,7 +10958,7 @@ flow_dv_translate_item_tx_queue(struct rte_eth_dev *dev,
 	txq = mlx5_txq_get(dev, queue_v->queue);
 	if (!txq)
 		return;
-	if (txq->type == MLX5_TXQ_TYPE_HAIRPIN)
+	if (txq->is_hairpin)
 		queue = txq->obj->sq->id;
 	else
 		queue = txq->obj->sq_obj.sq->id;
diff --git a/drivers/net/mlx5/mlx5_rx.h b/drivers/net/mlx5/mlx5_rx.h
index 38335fd744..1fdf4ff161 100644
--- a/drivers/net/mlx5/mlx5_rx.h
+++ b/drivers/net/mlx5/mlx5_rx.h
@@ -141,12 +141,6 @@ struct mlx5_rxq_data {
 	/* Buffer split segment descriptions - sizes, offsets, pools. */
 } __rte_cache_aligned;
 
-enum mlx5_rxq_type {
-	MLX5_RXQ_TYPE_STANDARD, /* Standard Rx queue. */
-	MLX5_RXQ_TYPE_HAIRPIN, /* Hairpin Rx queue. */
-	MLX5_RXQ_TYPE_UNDEFINED,
-};
-
 /* RX queue control descriptor. */
 struct mlx5_rxq_ctrl {
 	struct mlx5_rxq_data rxq; /* Data path structure. */
@@ -154,7 +148,7 @@ struct mlx5_rxq_ctrl {
 	LIST_HEAD(priv, mlx5_rxq_priv) owners; /* Owner rxq list. */
 	struct mlx5_rxq_obj *obj; /* Verbs/DevX elements. */
 	struct mlx5_dev_ctx_shared *sh; /* Shared context. */
-	enum mlx5_rxq_type type; /* Rxq type. */
+	bool is_hairpin; /* Whether RxQ type is Hairpin. */
 	unsigned int socket; /* CPU socket ID for allocations. */
 	LIST_ENTRY(mlx5_rxq_ctrl) share_entry; /* Entry in shared RXQ list. */
 	uint32_t share_group; /* Group ID of shared RXQ. */
@@ -253,7 +247,7 @@ uint32_t mlx5_hrxq_get(struct rte_eth_dev *dev,
 		       struct mlx5_flow_rss_desc *rss_desc);
 int mlx5_hrxq_release(struct rte_eth_dev *dev, uint32_t hxrq_idx);
 uint32_t mlx5_hrxq_verify(struct rte_eth_dev *dev);
-enum mlx5_rxq_type mlx5_rxq_get_type(struct rte_eth_dev *dev, uint16_t idx);
+bool mlx5_rxq_is_hairpin(struct rte_eth_dev *dev, uint16_t idx);
 const struct rte_eth_hairpin_conf *mlx5_rxq_get_hairpin_conf
 	(struct rte_eth_dev *dev, uint16_t idx);
 struct mlx5_hrxq *mlx5_drop_action_create(struct rte_eth_dev *dev);
@@ -627,8 +621,7 @@ mlx5_mprq_enabled(struct rte_eth_dev *dev)
 	for (i = 0; i < priv->rxqs_n; ++i) {
 		struct mlx5_rxq_ctrl *rxq_ctrl = mlx5_rxq_ctrl_get(dev, i);
 
-		if (rxq_ctrl == NULL ||
-		    rxq_ctrl->type != MLX5_RXQ_TYPE_STANDARD)
+		if (rxq_ctrl == NULL || rxq_ctrl->is_hairpin)
 			continue;
 		n_ibv++;
 		if (mlx5_rxq_mprq_enabled(&rxq_ctrl->rxq))
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index 809006f66a..796497ab1a 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -1391,8 +1391,7 @@ mlx5_mprq_alloc_mp(struct rte_eth_dev *dev)
 		struct mlx5_rxq_ctrl *rxq_ctrl = mlx5_rxq_ctrl_get(dev, i);
 		struct mlx5_rxq_data *rxq;
 
-		if (rxq_ctrl == NULL ||
-		    rxq_ctrl->type != MLX5_RXQ_TYPE_STANDARD)
+		if (rxq_ctrl == NULL || rxq_ctrl->is_hairpin)
 			continue;
 		rxq = &rxq_ctrl->rxq;
 		n_ibv++;
@@ -1480,8 +1479,7 @@ mlx5_mprq_alloc_mp(struct rte_eth_dev *dev)
 	for (i = 0; i != priv->rxqs_n; ++i) {
 		struct mlx5_rxq_ctrl *rxq_ctrl = mlx5_rxq_ctrl_get(dev, i);
 
-		if (rxq_ctrl == NULL ||
-		    rxq_ctrl->type != MLX5_RXQ_TYPE_STANDARD)
+		if (rxq_ctrl == NULL || rxq_ctrl->is_hairpin)
 			continue;
 		rxq_ctrl->rxq.mprq_mp = mp;
 	}
@@ -1798,7 +1796,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 		rte_errno = ENOSPC;
 		goto error;
 	}
-	tmpl->type = MLX5_RXQ_TYPE_STANDARD;
+	tmpl->is_hairpin = false;
 	if (mlx5_mr_ctrl_init(&tmpl->rxq.mr_ctrl,
 			      &priv->sh->cdev->mr_scache.dev_gen, socket)) {
 		/* rte_errno is already set. */
@@ -1969,7 +1967,7 @@ mlx5_rxq_hairpin_new(struct rte_eth_dev *dev, struct mlx5_rxq_priv *rxq,
 	LIST_INIT(&tmpl->owners);
 	rxq->ctrl = tmpl;
 	LIST_INSERT_HEAD(&tmpl->owners, rxq, owner_entry);
-	tmpl->type = MLX5_RXQ_TYPE_HAIRPIN;
+	tmpl->is_hairpin = true;
 	tmpl->socket = SOCKET_ID_ANY;
 	tmpl->rxq.rss_hash = 0;
 	tmpl->rxq.port_id = dev->data->port_id;
@@ -2120,7 +2118,7 @@ mlx5_rxq_release(struct rte_eth_dev *dev, uint16_t idx)
 			mlx5_free(rxq_ctrl->obj);
 			rxq_ctrl->obj = NULL;
 		}
-		if (rxq_ctrl->type == MLX5_RXQ_TYPE_STANDARD) {
+		if (!rxq_ctrl->is_hairpin) {
 			if (!rxq_ctrl->started)
 				rxq_free_elts(rxq_ctrl);
 			dev->data->rx_queue_state[idx] =
@@ -2129,7 +2127,7 @@ mlx5_rxq_release(struct rte_eth_dev *dev, uint16_t idx)
 	} else { /* Refcnt zero, closing device. */
 		LIST_REMOVE(rxq, owner_entry);
 		if (LIST_EMPTY(&rxq_ctrl->owners)) {
-			if (rxq_ctrl->type == MLX5_RXQ_TYPE_STANDARD)
+			if (!rxq_ctrl->is_hairpin)
 				mlx5_mr_btree_free
 					(&rxq_ctrl->rxq.mr_ctrl.cache_bh);
 			if (rxq_ctrl->rxq.shared)
@@ -2169,7 +2167,7 @@ mlx5_rxq_verify(struct rte_eth_dev *dev)
 }
 
 /**
- * Get a Rx queue type.
+ * Check whether RxQ type is Hairpin.
  *
  * @param dev
  *   Pointer to Ethernet device.
@@ -2177,17 +2175,15 @@ mlx5_rxq_verify(struct rte_eth_dev *dev)
  *   Rx queue index.
  *
  * @return
- *   The Rx queue type.
+ *   True if the Rx queue type is Hairpin, false otherwise.
  */
-enum mlx5_rxq_type
-mlx5_rxq_get_type(struct rte_eth_dev *dev, uint16_t idx)
+bool
+mlx5_rxq_is_hairpin(struct rte_eth_dev *dev, uint16_t idx)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
 	struct mlx5_rxq_ctrl *rxq_ctrl = mlx5_rxq_ctrl_get(dev, idx);
 
-	if (idx < priv->rxqs_n && rxq_ctrl != NULL)
-		return rxq_ctrl->type;
-	return MLX5_RXQ_TYPE_UNDEFINED;
+	return (idx < priv->rxqs_n && rxq_ctrl != NULL && rxq_ctrl->is_hairpin);
 }
 
 /*
@@ -2204,14 +2200,9 @@ mlx5_rxq_get_type(struct rte_eth_dev *dev, uint16_t idx)
 const struct rte_eth_hairpin_conf *
 mlx5_rxq_get_hairpin_conf(struct rte_eth_dev *dev, uint16_t idx)
 {
-	struct mlx5_priv *priv = dev->data->dev_private;
 	struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, idx);
 
-	if (idx < priv->rxqs_n && rxq != NULL) {
-		if (rxq->ctrl->type == MLX5_RXQ_TYPE_HAIRPIN)
-			return &rxq->hairpin_conf;
-	}
-	return NULL;
+	return mlx5_rxq_is_hairpin(dev, idx) ? &rxq->hairpin_conf : NULL;
 }
 
 /**
diff --git a/drivers/net/mlx5/mlx5_trigger.c b/drivers/net/mlx5/mlx5_trigger.c
index 74c3bc8a13..fe8b42c414 100644
--- a/drivers/net/mlx5/mlx5_trigger.c
+++ b/drivers/net/mlx5/mlx5_trigger.c
@@ -59,7 +59,7 @@ mlx5_txq_start(struct rte_eth_dev *dev)
 
 		if (!txq_ctrl)
 			continue;
-		if (txq_ctrl->type == MLX5_TXQ_TYPE_STANDARD)
+		if (!txq_ctrl->is_hairpin)
 			txq_alloc_elts(txq_ctrl);
 		MLX5_ASSERT(!txq_ctrl->obj);
 		txq_ctrl->obj = mlx5_malloc(flags, sizeof(struct mlx5_txq_obj),
@@ -77,7 +77,7 @@ mlx5_txq_start(struct rte_eth_dev *dev)
 			txq_ctrl->obj = NULL;
 			goto error;
 		}
-		if (txq_ctrl->type == MLX5_TXQ_TYPE_STANDARD) {
+		if (!txq_ctrl->is_hairpin) {
 			size_t size = txq_data->cqe_s * sizeof(*txq_data->fcqs);
 
 			txq_data->fcqs = mlx5_malloc(flags, size,
@@ -167,7 +167,7 @@ mlx5_rxq_ctrl_prepare(struct rte_eth_dev *dev, struct mlx5_rxq_ctrl *rxq_ctrl,
 {
 	int ret = 0;
 
-	if (rxq_ctrl->type == MLX5_RXQ_TYPE_STANDARD) {
+	if (!rxq_ctrl->is_hairpin) {
 		/*
 		 * Pre-register the mempools. Regardless of whether
 		 * the implicit registration is enabled or not,
@@ -280,7 +280,7 @@ mlx5_hairpin_auto_bind(struct rte_eth_dev *dev)
 		txq_ctrl = mlx5_txq_get(dev, i);
 		if (!txq_ctrl)
 			continue;
-		if (txq_ctrl->type != MLX5_TXQ_TYPE_HAIRPIN ||
+		if (!txq_ctrl->is_hairpin ||
 		    txq_ctrl->hairpin_conf.peers[0].port != self_port) {
 			mlx5_txq_release(dev, i);
 			continue;
@@ -299,7 +299,7 @@ mlx5_hairpin_auto_bind(struct rte_eth_dev *dev)
 		if (!txq_ctrl)
 			continue;
 		/* Skip hairpin queues with other peer ports. */
-		if (txq_ctrl->type != MLX5_TXQ_TYPE_HAIRPIN ||
+		if (!txq_ctrl->is_hairpin ||
 		    txq_ctrl->hairpin_conf.peers[0].port != self_port) {
 			mlx5_txq_release(dev, i);
 			continue;
@@ -322,7 +322,7 @@ mlx5_hairpin_auto_bind(struct rte_eth_dev *dev)
 			return -rte_errno;
 		}
 		rxq_ctrl = rxq->ctrl;
-		if (rxq_ctrl->type != MLX5_RXQ_TYPE_HAIRPIN ||
+		if (!rxq_ctrl->is_hairpin ||
 		    rxq->hairpin_conf.peers[0].queue != i) {
 			rte_errno = ENOMEM;
 			DRV_LOG(ERR, "port %u Tx queue %d can't be binded to "
@@ -412,7 +412,7 @@ mlx5_hairpin_queue_peer_update(struct rte_eth_dev *dev, uint16_t peer_queue,
 				dev->data->port_id, peer_queue);
 			return -rte_errno;
 		}
-		if (txq_ctrl->type != MLX5_TXQ_TYPE_HAIRPIN) {
+		if (!txq_ctrl->is_hairpin) {
 			rte_errno = EINVAL;
 			DRV_LOG(ERR, "port %u queue %d is not a hairpin Txq",
 				dev->data->port_id, peer_queue);
@@ -444,7 +444,7 @@ mlx5_hairpin_queue_peer_update(struct rte_eth_dev *dev, uint16_t peer_queue,
 			return -rte_errno;
 		}
 		rxq_ctrl = rxq->ctrl;
-		if (rxq_ctrl->type != MLX5_RXQ_TYPE_HAIRPIN) {
+		if (!rxq_ctrl->is_hairpin) {
 			rte_errno = EINVAL;
 			DRV_LOG(ERR, "port %u queue %d is not a hairpin Rxq",
 				dev->data->port_id, peer_queue);
@@ -510,7 +510,7 @@ mlx5_hairpin_queue_peer_bind(struct rte_eth_dev *dev, uint16_t cur_queue,
 				dev->data->port_id, cur_queue);
 			return -rte_errno;
 		}
-		if (txq_ctrl->type != MLX5_TXQ_TYPE_HAIRPIN) {
+		if (!txq_ctrl->is_hairpin) {
 			rte_errno = EINVAL;
 			DRV_LOG(ERR, "port %u queue %d not a hairpin Txq",
 				dev->data->port_id, cur_queue);
@@ -570,7 +570,7 @@ mlx5_hairpin_queue_peer_bind(struct rte_eth_dev *dev, uint16_t cur_queue,
 			return -rte_errno;
 		}
 		rxq_ctrl = rxq->ctrl;
-		if (rxq_ctrl->type != MLX5_RXQ_TYPE_HAIRPIN) {
+		if (!rxq_ctrl->is_hairpin) {
 			rte_errno = EINVAL;
 			DRV_LOG(ERR, "port %u queue %d not a hairpin Rxq",
 				dev->data->port_id, cur_queue);
@@ -644,7 +644,7 @@ mlx5_hairpin_queue_peer_unbind(struct rte_eth_dev *dev, uint16_t cur_queue,
 				dev->data->port_id, cur_queue);
 			return -rte_errno;
 		}
-		if (txq_ctrl->type != MLX5_TXQ_TYPE_HAIRPIN) {
+		if (!txq_ctrl->is_hairpin) {
 			rte_errno = EINVAL;
 			DRV_LOG(ERR, "port %u queue %d not a hairpin Txq",
 				dev->data->port_id, cur_queue);
@@ -683,7 +683,7 @@ mlx5_hairpin_queue_peer_unbind(struct rte_eth_dev *dev, uint16_t cur_queue,
 			return -rte_errno;
 		}
 		rxq_ctrl = rxq->ctrl;
-		if (rxq_ctrl->type != MLX5_RXQ_TYPE_HAIRPIN) {
+		if (!rxq_ctrl->is_hairpin) {
 			rte_errno = EINVAL;
 			DRV_LOG(ERR, "port %u queue %d not a hairpin Rxq",
 				dev->data->port_id, cur_queue);
@@ -751,7 +751,7 @@ mlx5_hairpin_bind_single_port(struct rte_eth_dev *dev, uint16_t rx_port)
 		txq_ctrl = mlx5_txq_get(dev, i);
 		if (txq_ctrl == NULL)
 			continue;
-		if (txq_ctrl->type != MLX5_TXQ_TYPE_HAIRPIN) {
+		if (!txq_ctrl->is_hairpin) {
 			mlx5_txq_release(dev, i);
 			continue;
 		}
@@ -791,7 +791,7 @@ mlx5_hairpin_bind_single_port(struct rte_eth_dev *dev, uint16_t rx_port)
 		txq_ctrl = mlx5_txq_get(dev, i);
 		if (txq_ctrl == NULL)
 			continue;
-		if (txq_ctrl->type != MLX5_TXQ_TYPE_HAIRPIN) {
+		if (!txq_ctrl->is_hairpin) {
 			mlx5_txq_release(dev, i);
 			continue;
 		}
@@ -886,7 +886,7 @@ mlx5_hairpin_unbind_single_port(struct rte_eth_dev *dev, uint16_t rx_port)
 		txq_ctrl = mlx5_txq_get(dev, i);
 		if (txq_ctrl == NULL)
 			continue;
-		if (txq_ctrl->type != MLX5_TXQ_TYPE_HAIRPIN) {
+		if (!txq_ctrl->is_hairpin) {
 			mlx5_txq_release(dev, i);
 			continue;
 		}
@@ -1016,7 +1016,7 @@ mlx5_hairpin_get_peer_ports(struct rte_eth_dev *dev, uint16_t *peer_ports,
 			txq_ctrl = mlx5_txq_get(dev, i);
 			if (!txq_ctrl)
 				continue;
-			if (txq_ctrl->type != MLX5_TXQ_TYPE_HAIRPIN) {
+			if (!txq_ctrl->is_hairpin) {
 				mlx5_txq_release(dev, i);
 				continue;
 			}
@@ -1040,7 +1040,7 @@ mlx5_hairpin_get_peer_ports(struct rte_eth_dev *dev, uint16_t *peer_ports,
 			if (rxq == NULL)
 				continue;
 			rxq_ctrl = rxq->ctrl;
-			if (rxq_ctrl->type != MLX5_RXQ_TYPE_HAIRPIN)
+			if (!rxq_ctrl->is_hairpin)
 				continue;
 			pp = rxq->hairpin_conf.peers[0].port;
 			if (pp >= RTE_MAX_ETHPORTS) {
@@ -1318,7 +1318,7 @@ mlx5_traffic_enable(struct rte_eth_dev *dev)
 		if (!txq_ctrl)
 			continue;
 		/* Only Tx implicit mode requires the default Tx flow. */
-		if (txq_ctrl->type == MLX5_TXQ_TYPE_HAIRPIN &&
+		if (txq_ctrl->is_hairpin &&
 		    txq_ctrl->hairpin_conf.tx_explicit == 0 &&
 		    txq_ctrl->hairpin_conf.peers[0].port ==
 		    priv->dev_data->port_id) {
diff --git a/drivers/net/mlx5/mlx5_tx.h b/drivers/net/mlx5/mlx5_tx.h
index 0adc3f4839..89dac0c65a 100644
--- a/drivers/net/mlx5/mlx5_tx.h
+++ b/drivers/net/mlx5/mlx5_tx.h
@@ -169,17 +169,12 @@ struct mlx5_txq_data {
 	/* Storage for queued packets, must be the last field. */
 } __rte_cache_aligned;
 
-enum mlx5_txq_type {
-	MLX5_TXQ_TYPE_STANDARD, /* Standard Tx queue. */
-	MLX5_TXQ_TYPE_HAIRPIN, /* Hairpin Tx queue. */
-};
-
 /* TX queue control descriptor. */
 struct mlx5_txq_ctrl {
 	LIST_ENTRY(mlx5_txq_ctrl) next; /* Pointer to the next element. */
 	uint32_t refcnt; /* Reference counter. */
 	unsigned int socket; /* CPU socket ID for allocations. */
-	enum mlx5_txq_type type; /* The txq ctrl type. */
+	bool is_hairpin; /* Whether TxQ type is Hairpin. */
 	unsigned int max_inline_data; /* Max inline data. */
 	unsigned int max_tso_header; /* Max TSO header size. */
 	struct mlx5_txq_obj *obj; /* Verbs/DevX queue object. */
diff --git a/drivers/net/mlx5/mlx5_txq.c b/drivers/net/mlx5/mlx5_txq.c
index f128c3d1a5..0140f8b3b2 100644
--- a/drivers/net/mlx5/mlx5_txq.c
+++ b/drivers/net/mlx5/mlx5_txq.c
@@ -527,7 +527,7 @@ txq_uar_init_secondary(struct mlx5_txq_ctrl *txq_ctrl, int fd)
 		return -rte_errno;
 	}
 
-	if (txq_ctrl->type != MLX5_TXQ_TYPE_STANDARD)
+	if (txq_ctrl->is_hairpin)
 		return 0;
 	MLX5_ASSERT(ppriv);
 	/*
@@ -570,7 +570,7 @@ txq_uar_uninit_secondary(struct mlx5_txq_ctrl *txq_ctrl)
 		rte_errno = ENOMEM;
 	}
 
-	if (txq_ctrl->type != MLX5_TXQ_TYPE_STANDARD)
+	if (txq_ctrl->is_hairpin)
 		return;
 	addr = ppriv->uar_table[txq_ctrl->txq.idx].db;
 	rte_mem_unmap(RTE_PTR_ALIGN_FLOOR(addr, page_size), page_size);
@@ -631,7 +631,7 @@ mlx5_tx_uar_init_secondary(struct rte_eth_dev *dev, int fd)
 			continue;
 		txq = (*priv->txqs)[i];
 		txq_ctrl = container_of(txq, struct mlx5_txq_ctrl, txq);
-		if (txq_ctrl->type != MLX5_TXQ_TYPE_STANDARD)
+		if (txq_ctrl->is_hairpin)
 			continue;
 		MLX5_ASSERT(txq->idx == (uint16_t)i);
 		ret = txq_uar_init_secondary(txq_ctrl, fd);
@@ -1107,7 +1107,7 @@ mlx5_txq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 		goto error;
 	}
 	__atomic_fetch_add(&tmpl->refcnt, 1, __ATOMIC_RELAXED);
-	tmpl->type = MLX5_TXQ_TYPE_STANDARD;
+	tmpl->is_hairpin = false;
 	LIST_INSERT_HEAD(&priv->txqsctrl, tmpl, next);
 	return tmpl;
 error:
@@ -1150,7 +1150,7 @@ mlx5_txq_hairpin_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 	tmpl->txq.port_id = dev->data->port_id;
 	tmpl->txq.idx = idx;
 	tmpl->hairpin_conf = *hairpin_conf;
-	tmpl->type = MLX5_TXQ_TYPE_HAIRPIN;
+	tmpl->is_hairpin = true;
 	__atomic_fetch_add(&tmpl->refcnt, 1, __ATOMIC_RELAXED);
 	LIST_INSERT_HEAD(&priv->txqsctrl, tmpl, next);
 	return tmpl;
@@ -1209,7 +1209,7 @@ mlx5_txq_release(struct rte_eth_dev *dev, uint16_t idx)
 		mlx5_free(txq_ctrl->obj);
 		txq_ctrl->obj = NULL;
 	}
-	if (txq_ctrl->type == MLX5_TXQ_TYPE_STANDARD) {
+	if (!txq_ctrl->is_hairpin) {
 		if (txq_ctrl->txq.fcqs) {
 			mlx5_free(txq_ctrl->txq.fcqs);
 			txq_ctrl->txq.fcqs = NULL;
@@ -1218,7 +1218,7 @@ mlx5_txq_release(struct rte_eth_dev *dev, uint16_t idx)
 		dev->data->tx_queue_state[idx] = RTE_ETH_QUEUE_STATE_STOPPED;
 	}
 	if (!__atomic_load_n(&txq_ctrl->refcnt, __ATOMIC_RELAXED)) {
-		if (txq_ctrl->type == MLX5_TXQ_TYPE_STANDARD)
+		if (!txq_ctrl->is_hairpin)
 			mlx5_mr_btree_free(&txq_ctrl->txq.mr_ctrl.cache_bh);
 		LIST_REMOVE(txq_ctrl, next);
 		mlx5_free(txq_ctrl);
-- 
2.25.1


^ permalink raw reply	[flat|nested] 26+ messages in thread

* [PATCH v2 5/6] net/mlx5: add external RxQ mapping API
  2022-02-23 18:48 ` [PATCH v2 0/6] mlx5: external RxQ support Michael Baum
                     ` (3 preceding siblings ...)
  2022-02-23 18:48   ` [PATCH v2 4/6] net/mlx5: optimize RxQ/TxQ control structure Michael Baum
@ 2022-02-23 18:48   ` Michael Baum
  2022-02-23 18:48   ` [PATCH v2 6/6] net/mlx5: support queue/RSS action for external RxQ Michael Baum
                     ` (2 subsequent siblings)
  7 siblings, 0 replies; 26+ messages in thread
From: Michael Baum @ 2022-02-23 18:48 UTC (permalink / raw)
  To: dev; +Cc: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko

An external queue is a queue that has been created and is managed
outside the PMD. The queue's owner might still use the PMD to generate
flow rules that target these external queues.

When a queue is created in hardware, it is given a 32-bit ID. In
contrast, queue indexes in the PMD are represented by 16 bits. To enable
the use of the PMD for generating flow rules, the queue owner must
provide a mapping between the HW index and a 16-bit index corresponding
to the RTE Flow API.

This patch adds an API for inserting and removing a mapping between a
HW queue ID and an RTE Flow queue ID.
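
For illustration, the mapping and reference-count logic can be modeled
in a self-contained sketch. This is a hypothetical stand-in for the PMD
internals, not the driver code itself: `EXT_RXQ_ID_MIN` mirrors
`MLX5_EXTERNAL_RX_QUEUE_ID_MIN`, and the return codes follow the error
semantics described above.

```c
#include <assert.h>
#include <errno.h>
#include <stdint.h>

/* Hypothetical model of the external RxQ mapping table. */
#define EXT_RXQ_ID_MIN (UINT16_MAX - 1000 + 1)          /* 64536 */
#define MAX_EXT_RXQS   (UINT16_MAX - EXT_RXQ_ID_MIN + 1) /* 1000 */

struct ext_rxq {
	uint32_t hw_id;  /* Queue index in the hardware. */
	uint32_t refcnt; /* Non-zero when the mapping is in use. */
};

static struct ext_rxq ext_rxqs[MAX_EXT_RXQS];

static int
ext_rxq_map(uint16_t dpdk_idx, uint32_t hw_idx)
{
	struct ext_rxq *q;

	if (dpdk_idx < EXT_RXQ_ID_MIN)
		return -EINVAL; /* Index reserved for internal RxQs. */
	q = &ext_rxqs[dpdk_idx - EXT_RXQ_ID_MIN];
	if (q->refcnt != 0)
		/* Re-mapping to the same HW id is a no-op, anything
		 * else conflicts with the existing mapping. */
		return q->hw_id == hw_idx ? 0 : -EEXIST;
	q->hw_id = hw_idx;
	q->refcnt = 1;
	return 0;
}

static int
ext_rxq_unmap(uint16_t dpdk_idx)
{
	struct ext_rxq *q;

	if (dpdk_idx < EXT_RXQ_ID_MIN)
		return -EINVAL;
	q = &ext_rxqs[dpdk_idx - EXT_RXQ_ID_MIN];
	if (q->refcnt == 0)
		return -EINVAL; /* Mapping doesn't exist. */
	q->refcnt = 0;
	return 0;
}
```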

Signed-off-by: Michael Baum <michaelba@nvidia.com>
---
 drivers/net/mlx5/linux/mlx5_os.c |  18 +++++
 drivers/net/mlx5/mlx5.c          |   2 +
 drivers/net/mlx5/mlx5.h          |   1 +
 drivers/net/mlx5/mlx5_defs.h     |   3 +
 drivers/net/mlx5/mlx5_ethdev.c   |  16 ++++-
 drivers/net/mlx5/mlx5_rx.h       |   6 ++
 drivers/net/mlx5/mlx5_rxq.c      | 109 +++++++++++++++++++++++++++++++
 drivers/net/mlx5/rte_pmd_mlx5.h  |  50 +++++++++++++-
 drivers/net/mlx5/version.map     |   3 +
 9 files changed, 204 insertions(+), 4 deletions(-)

diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c
index ecf823da56..058c140fe1 100644
--- a/drivers/net/mlx5/linux/mlx5_os.c
+++ b/drivers/net/mlx5/linux/mlx5_os.c
@@ -1156,6 +1156,22 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
 		err = ENOMEM;
 		goto error;
 	}
+	/*
+	 * When user configures remote PD and CTX and device creates RxQ by
+	 * DevX, external RxQ is both supported and requested.
+	 */
+	if (mlx5_imported_pd_and_ctx(sh->cdev) && mlx5_devx_obj_ops_en(sh)) {
+		priv->ext_rxqs = mlx5_malloc(MLX5_MEM_ZERO | MLX5_MEM_RTE,
+					     sizeof(struct mlx5_external_rxq) *
+					     MLX5_MAX_EXT_RX_QUEUES, 0,
+					     SOCKET_ID_ANY);
+		if (priv->ext_rxqs == NULL) {
+			DRV_LOG(ERR, "Failed to allocate external RxQ array.");
+			err = ENOMEM;
+			goto error;
+		}
+		DRV_LOG(DEBUG, "External RxQ is supported.");
+	}
 	priv->sh = sh;
 	priv->dev_port = spawn->phys_port;
 	priv->pci_dev = spawn->pci_dev;
@@ -1613,6 +1629,8 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
 			mlx5_list_destroy(priv->hrxqs);
 		if (eth_dev && priv->flex_item_map)
 			mlx5_flex_item_port_cleanup(eth_dev);
+		if (priv->ext_rxqs)
+			mlx5_free(priv->ext_rxqs);
 		mlx5_free(priv);
 		if (eth_dev != NULL)
 			eth_dev->data->dev_private = NULL;
diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index 9f65a8f901..415e0fe2f2 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -1855,6 +1855,8 @@ mlx5_dev_close(struct rte_eth_dev *dev)
 		close(priv->nl_socket_rdma);
 	if (priv->vmwa_context)
 		mlx5_vlan_vmwa_exit(priv->vmwa_context);
+	if (priv->ext_rxqs)
+		mlx5_free(priv->ext_rxqs);
 	ret = mlx5_hrxq_verify(dev);
 	if (ret)
 		DRV_LOG(WARNING, "port %u some hash Rx queue still remain",
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 0f465d0e9e..fa27f65a36 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -1423,6 +1423,7 @@ struct mlx5_priv {
 	/* RX/TX queues. */
 	unsigned int rxqs_n; /* RX queues array size. */
 	unsigned int txqs_n; /* TX queues array size. */
+	struct mlx5_external_rxq *ext_rxqs; /* External RX queues array. */
 	struct mlx5_rxq_priv *(*rxq_privs)[]; /* RX queue non-shared data. */
 	struct mlx5_txq_data *(*txqs)[]; /* TX queues. */
 	struct rte_mempool *mprq_mp; /* Mempool for Multi-Packet RQ. */
diff --git a/drivers/net/mlx5/mlx5_defs.h b/drivers/net/mlx5/mlx5_defs.h
index 2d48fde010..15728fb41f 100644
--- a/drivers/net/mlx5/mlx5_defs.h
+++ b/drivers/net/mlx5/mlx5_defs.h
@@ -175,6 +175,9 @@
 /* Maximum number of indirect actions supported by rte_flow */
 #define MLX5_MAX_INDIRECT_ACTIONS 3
 
+/* Maximum number of external Rx queues supported by rte_flow */
+#define MLX5_MAX_EXT_RX_QUEUES (UINT16_MAX - MLX5_EXTERNAL_RX_QUEUE_ID_MIN + 1)
+
 /*
  * Linux definition of static_assert is found in /usr/include/assert.h.
  * Windows does not require a redefinition.
diff --git a/drivers/net/mlx5/mlx5_ethdev.c b/drivers/net/mlx5/mlx5_ethdev.c
index 406761ccf8..de0ba2b1ff 100644
--- a/drivers/net/mlx5/mlx5_ethdev.c
+++ b/drivers/net/mlx5/mlx5_ethdev.c
@@ -27,6 +27,7 @@
 #include "mlx5_tx.h"
 #include "mlx5_autoconf.h"
 #include "mlx5_devx.h"
+#include "rte_pmd_mlx5.h"
 
 /**
  * Get the interface index from device name.
@@ -81,9 +82,10 @@ mlx5_dev_configure(struct rte_eth_dev *dev)
 		rte_errno = EINVAL;
 		return -rte_errno;
 	}
-	priv->rss_conf.rss_key =
-		mlx5_realloc(priv->rss_conf.rss_key, MLX5_MEM_RTE,
-			    MLX5_RSS_HASH_KEY_LEN, 0, SOCKET_ID_ANY);
+	priv->rss_conf.rss_key = mlx5_realloc(priv->rss_conf.rss_key,
+					      MLX5_MEM_RTE,
+					      MLX5_RSS_HASH_KEY_LEN, 0,
+					      SOCKET_ID_ANY);
 	if (!priv->rss_conf.rss_key) {
 		DRV_LOG(ERR, "port %u cannot allocate RSS hash key memory (%u)",
 			dev->data->port_id, rxqs_n);
@@ -127,6 +129,14 @@ mlx5_dev_configure(struct rte_eth_dev *dev)
 		rte_errno = EINVAL;
 		return -rte_errno;
 	}
+	if (priv->ext_rxqs && rxqs_n >= MLX5_EXTERNAL_RX_QUEUE_ID_MIN) {
+		DRV_LOG(ERR, "port %u cannot handle this many Rx queues (%u), "
+			"the maximal number of internal Rx queues is %u",
+			dev->data->port_id, rxqs_n,
+			MLX5_EXTERNAL_RX_QUEUE_ID_MIN - 1);
+		rte_errno = EINVAL;
+		return -rte_errno;
+	}
 	if (rxqs_n != priv->rxqs_n) {
 		DRV_LOG(INFO, "port %u Rx queues number update: %u -> %u",
 			dev->data->port_id, priv->rxqs_n, rxqs_n);
diff --git a/drivers/net/mlx5/mlx5_rx.h b/drivers/net/mlx5/mlx5_rx.h
index 1fdf4ff161..754c526464 100644
--- a/drivers/net/mlx5/mlx5_rx.h
+++ b/drivers/net/mlx5/mlx5_rx.h
@@ -175,6 +175,12 @@ struct mlx5_rxq_priv {
 	uint32_t hairpin_status; /* Hairpin binding status. */
 };
 
+/* External RX queue descriptor. */
+struct mlx5_external_rxq {
+	uint32_t hw_id; /* Queue index in the Hardware. */
+	uint32_t refcnt; /* Reference counter. */
+};
+
 /* mlx5_rxq.c */
 
 extern uint8_t rss_hash_default_key[];
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index 796497ab1a..145da2dbbb 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -30,6 +30,7 @@
 #include "mlx5_utils.h"
 #include "mlx5_autoconf.h"
 #include "mlx5_devx.h"
+#include "rte_pmd_mlx5.h"
 
 
 /* Default RSS hash key also used for ConnectX-3. */
@@ -2983,3 +2984,111 @@ mlx5_rxq_timestamp_set(struct rte_eth_dev *dev)
 		data->rt_timestamp = sh->dev_cap.rt_timestamp;
 	}
 }
+
+/**
+ * Validate given external RxQ rte_flow index, and get pointer to the
+ * corresponding external RxQ object to map/unmap.
+ *
+ * @param[in] port_id
+ *   The port identifier of the Ethernet device.
+ * @param[in] dpdk_idx
+ *   Queue index in rte_flow.
+ *
+ * @return
+ *   Pointer to the corresponding external RxQ on success,
+ *   NULL otherwise and rte_errno is set.
+ */
+static struct mlx5_external_rxq *
+mlx5_external_rx_queue_get_validate(uint16_t port_id, uint16_t dpdk_idx)
+{
+	struct rte_eth_dev *dev;
+	struct mlx5_priv *priv;
+
+	if (dpdk_idx < MLX5_EXTERNAL_RX_QUEUE_ID_MIN) {
+		DRV_LOG(ERR, "Queue index %u should be in range: [%u, %u].",
+			dpdk_idx, MLX5_EXTERNAL_RX_QUEUE_ID_MIN, UINT16_MAX);
+		rte_errno = EINVAL;
+		return NULL;
+	}
+	if (rte_eth_dev_is_valid_port(port_id) < 0) {
+		DRV_LOG(ERR, "There is no Ethernet device for port %u.",
+			port_id);
+		rte_errno = ENODEV;
+		return NULL;
+	}
+	dev = &rte_eth_devices[port_id];
+	priv = dev->data->dev_private;
+	if (!mlx5_imported_pd_and_ctx(priv->sh->cdev)) {
+		DRV_LOG(ERR, "Port %u "
+			"external RxQ isn't supported on local PD and CTX.",
+			port_id);
+		rte_errno = ENOTSUP;
+		return NULL;
+	}
+	if (!mlx5_devx_obj_ops_en(priv->sh)) {
+		DRV_LOG(ERR,
+			"Port %u external RxQ isn't supported by Verbs API.",
+			port_id);
+		rte_errno = ENOTSUP;
+		return NULL;
+	}
+	/*
+	 * When user configures remote PD and CTX and device creates RxQ by
+	 * DevX, external RxQs array is allocated.
+	 */
+	MLX5_ASSERT(priv->ext_rxqs != NULL);
+	return &priv->ext_rxqs[dpdk_idx - MLX5_EXTERNAL_RX_QUEUE_ID_MIN];
+}
+
+int
+rte_pmd_mlx5_external_rx_queue_id_map(uint16_t port_id, uint16_t dpdk_idx,
+				      uint32_t hw_idx)
+{
+	struct mlx5_external_rxq *ext_rxq;
+
+	ext_rxq = mlx5_external_rx_queue_get_validate(port_id, dpdk_idx);
+	if (ext_rxq == NULL)
+		return -rte_errno;
+	if (__atomic_load_n(&ext_rxq->refcnt, __ATOMIC_RELAXED)) {
+		if (ext_rxq->hw_id != hw_idx) {
+			DRV_LOG(ERR, "Port %u external RxQ index %u "
+				"is already mapped to HW index (requesting is "
+				"%u, existing is %u).",
+				port_id, dpdk_idx, hw_idx, ext_rxq->hw_id);
+			rte_errno = EEXIST;
+			return -rte_errno;
+		}
+		DRV_LOG(WARNING, "Port %u external RxQ index %u "
+			"is already mapped to the requested HW index (%u)",
+			port_id, dpdk_idx, hw_idx);
+
+	} else {
+		ext_rxq->hw_id = hw_idx;
+		__atomic_store_n(&ext_rxq->refcnt, 1, __ATOMIC_RELAXED);
+		DRV_LOG(DEBUG, "Port %u external RxQ index %u "
+			"is successfully mapped to the requested HW index (%u)",
+			port_id, dpdk_idx, hw_idx);
+	}
+	return 0;
+}
+
+int
+rte_pmd_mlx5_external_rx_queue_id_unmap(uint16_t port_id, uint16_t dpdk_idx)
+{
+	struct mlx5_external_rxq *ext_rxq;
+
+	ext_rxq = mlx5_external_rx_queue_get_validate(port_id, dpdk_idx);
+	if (ext_rxq == NULL)
+		return -rte_errno;
+	if (__atomic_load_n(&ext_rxq->refcnt, __ATOMIC_RELAXED) == 0) {
+		DRV_LOG(ERR, "Port %u external RxQ index %u doesn't exist.",
+			port_id, dpdk_idx);
+		rte_errno = EINVAL;
+		return -rte_errno;
+	}
+	__atomic_store_n(&ext_rxq->refcnt, 0, __ATOMIC_RELAXED);
+	DRV_LOG(DEBUG,
+		"Port %u external RxQ index %u is successfully unmapped.",
+		port_id, dpdk_idx);
+	return 0;
+}
diff --git a/drivers/net/mlx5/rte_pmd_mlx5.h b/drivers/net/mlx5/rte_pmd_mlx5.h
index fc37a386db..92dc447648 100644
--- a/drivers/net/mlx5/rte_pmd_mlx5.h
+++ b/drivers/net/mlx5/rte_pmd_mlx5.h
@@ -61,8 +61,56 @@ int rte_pmd_mlx5_get_dyn_flag_names(char *names[], unsigned int n);
 __rte_experimental
 int rte_pmd_mlx5_sync_flow(uint16_t port_id, uint32_t domains);
 
+/**
+ * External Rx queue rte_flow index minimal value.
+ */
+#define MLX5_EXTERNAL_RX_QUEUE_ID_MIN (UINT16_MAX - 1000 + 1)
+
+/**
+ * Update mapping between rte_flow queue index (16 bits) and HW queue index (32
+ * bits) for RxQs which are created outside the PMD.
+ *
+ * @param[in] port_id
+ *   The port identifier of the Ethernet device.
+ * @param[in] dpdk_idx
+ *   Queue index in rte_flow.
+ * @param[in] hw_idx
+ *   Queue index in hardware.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ *   Possible values for rte_errno:
+ *   - EEXIST - a mapping with the same rte_flow index already exists.
+ *   - EINVAL - invalid rte_flow index, out of range.
+ *   - ENODEV - there is no Ethernet device for this port id.
+ *   - ENOTSUP - the port doesn't support external RxQ.
+ */
+__rte_experimental
+int rte_pmd_mlx5_external_rx_queue_id_map(uint16_t port_id, uint16_t dpdk_idx,
+					  uint32_t hw_idx);
+
+/**
+ * Remove mapping between rte_flow queue index (16 bits) and HW queue index (32
+ * bits) for RxQs which are created outside the PMD.
+ *
+ * @param[in] port_id
+ *   The port identifier of the Ethernet device.
+ * @param[in] dpdk_idx
+ *   Queue index in rte_flow.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ *   Possible values for rte_errno:
+ *   - EINVAL - invalid index, out of range or doesn't exist.
+ *   - ENODEV - there is no Ethernet device for this port id.
+ *   - ENOTSUP - the port doesn't support external RxQ.
+ */
+__rte_experimental
+int rte_pmd_mlx5_external_rx_queue_id_unmap(uint16_t port_id,
+					    uint16_t dpdk_idx);
+
 #ifdef __cplusplus
 }
 #endif
 
-#endif
+#endif /* RTE_PMD_PRIVATE_MLX5_H_ */
diff --git a/drivers/net/mlx5/version.map b/drivers/net/mlx5/version.map
index 0af7a12488..79cb79acc6 100644
--- a/drivers/net/mlx5/version.map
+++ b/drivers/net/mlx5/version.map
@@ -9,4 +9,7 @@ EXPERIMENTAL {
 	rte_pmd_mlx5_get_dyn_flag_names;
 	# added in 20.11
 	rte_pmd_mlx5_sync_flow;
+	# added in 22.03
+	rte_pmd_mlx5_external_rx_queue_id_map;
+	rte_pmd_mlx5_external_rx_queue_id_unmap;
 };
-- 
2.25.1



* [PATCH v2 6/6] net/mlx5: support queue/RSS action for external RxQ
  2022-02-23 18:48 ` [PATCH v2 0/6] mlx5: external RxQ support Michael Baum
                     ` (4 preceding siblings ...)
  2022-02-23 18:48   ` [PATCH v2 5/6] net/mlx5: add external RxQ mapping API Michael Baum
@ 2022-02-23 18:48   ` Michael Baum
  2022-02-24  8:38   ` [PATCH v2 0/6] mlx5: external RxQ support Matan Azrad
  2022-02-24 23:25   ` [PATCH v3 " Michael Baum
  7 siblings, 0 replies; 26+ messages in thread
From: Michael Baum @ 2022-02-23 18:48 UTC (permalink / raw)
  To: dev; +Cc: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko

Add support for the queue/RSS flow actions on external RxQs.
During indirection table creation, the queue index is taken from the
mapping array.

This feature supports neither LRO nor Hairpin.
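
The resolution step can be sketched as follows. This is a hedged,
self-contained model with hypothetical helper names and HW-id tables;
only the threshold constant mirrors the real
`MLX5_EXTERNAL_RX_QUEUE_ID_MIN` from the driver:

```c
#include <assert.h>
#include <stdint.h>

/* rte_flow indexes at or above this threshold denote external RxQs. */
#define EXT_RXQ_ID_MIN (UINT16_MAX - 1000 + 1)

/* Illustrative stand-ins for the internal RxQ HW ids and the
 * external RxQ mapping array. */
static const uint32_t internal_hw_ids[4] = { 0x10, 0x11, 0x12, 0x13 };
static const uint32_t external_hw_ids[2] = { 0xa000, 0xa001 };

static uint32_t
rqt_resolve(uint16_t queue_idx)
{
	if (queue_idx >= EXT_RXQ_ID_MIN)
		/* External RxQ: HW id comes from the mapping array. */
		return external_hw_ids[queue_idx - EXT_RXQ_ID_MIN];
	/* Internal RxQ: HW id of the PMD-created RQ. */
	return internal_hw_ids[queue_idx];
}
```

Under this model, an indirection table may freely mix internal and
external queue indexes, since both resolve to 32-bit HW ids.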

Signed-off-by: Michael Baum <michaelba@nvidia.com>
---
 doc/guides/nics/mlx5.rst               |   1 +
 doc/guides/rel_notes/release_22_03.rst |   1 +
 drivers/net/mlx5/mlx5.c                |   8 +-
 drivers/net/mlx5/mlx5_devx.c           |  30 +++++--
 drivers/net/mlx5/mlx5_flow.c           |  29 +++++--
 drivers/net/mlx5/mlx5_rx.h             |  30 +++++++
 drivers/net/mlx5/mlx5_rxq.c            | 116 +++++++++++++++++++++++--
 7 files changed, 189 insertions(+), 26 deletions(-)

diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index 8956cd1dd8..34be031360 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -38,6 +38,7 @@ Features
 - Multiple TX and RX queues.
 - Shared Rx queue.
 - Rx queue delay drop.
+- Support steering for external Rx queue created outside the PMD.
 - Support for scattered TX frames.
 - Advanced support for scattered Rx frames with tunable buffer attributes.
 - IPv4, IPv6, TCPv4, TCPv6, UDPv4 and UDPv6 RSS on any number of queues.
diff --git a/doc/guides/rel_notes/release_22_03.rst b/doc/guides/rel_notes/release_22_03.rst
index acd56e0a80..c093616d7f 100644
--- a/doc/guides/rel_notes/release_22_03.rst
+++ b/doc/guides/rel_notes/release_22_03.rst
@@ -123,6 +123,7 @@ New Features
   Updated the Mellanox mlx5 driver with new features and improvements, including:
 
   * Support ConnectX-7 capability to schedule traffic sending on timestamp
+  * Support steering for external Rx queue created outside the PMD.
 
 * **Updated Wangxun ngbe driver.**
 
diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index 415e0fe2f2..9760f52b46 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -1855,8 +1855,6 @@ mlx5_dev_close(struct rte_eth_dev *dev)
 		close(priv->nl_socket_rdma);
 	if (priv->vmwa_context)
 		mlx5_vlan_vmwa_exit(priv->vmwa_context);
-	if (priv->ext_rxqs)
-		mlx5_free(priv->ext_rxqs);
 	ret = mlx5_hrxq_verify(dev);
 	if (ret)
 		DRV_LOG(WARNING, "port %u some hash Rx queue still remain",
@@ -1869,6 +1867,10 @@ mlx5_dev_close(struct rte_eth_dev *dev)
 	if (ret)
 		DRV_LOG(WARNING, "port %u some Rx queue objects still remain",
 			dev->data->port_id);
+	ret = mlx5_ext_rxq_verify(dev);
+	if (ret)
+		DRV_LOG(WARNING, "Port %u some external RxQ still remain.",
+			dev->data->port_id);
 	ret = mlx5_rxq_verify(dev);
 	if (ret)
 		DRV_LOG(WARNING, "port %u some Rx queues still remain",
@@ -1887,6 +1889,8 @@ mlx5_dev_close(struct rte_eth_dev *dev)
 			dev->data->port_id);
 	if (priv->hrxqs)
 		mlx5_list_destroy(priv->hrxqs);
+	if (priv->ext_rxqs)
+		mlx5_free(priv->ext_rxqs);
 	/*
 	 * Free the shared context in last turn, because the cleanup
 	 * routines above may use some shared fields, like
diff --git a/drivers/net/mlx5/mlx5_devx.c b/drivers/net/mlx5/mlx5_devx.c
index e4bc90a30e..8aa68d9658 100644
--- a/drivers/net/mlx5/mlx5_devx.c
+++ b/drivers/net/mlx5/mlx5_devx.c
@@ -580,13 +580,21 @@ mlx5_devx_ind_table_create_rqt_attr(struct rte_eth_dev *dev,
 		return rqt_attr;
 	}
 	for (i = 0; i != queues_n; ++i) {
-		struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, queues[i]);
+		if (mlx5_is_external_rxq(dev, queues[i])) {
+			struct mlx5_external_rxq *ext_rxq =
+					mlx5_ext_rxq_get(dev, queues[i]);
 
-		MLX5_ASSERT(rxq != NULL);
-		if (rxq->ctrl->is_hairpin)
-			rqt_attr->rq_list[i] = rxq->ctrl->obj->rq->id;
-		else
-			rqt_attr->rq_list[i] = rxq->devx_rq.rq->id;
+			rqt_attr->rq_list[i] = ext_rxq->hw_id;
+		} else {
+			struct mlx5_rxq_priv *rxq =
+					mlx5_rxq_get(dev, queues[i]);
+
+			MLX5_ASSERT(rxq != NULL);
+			if (rxq->ctrl->is_hairpin)
+				rqt_attr->rq_list[i] = rxq->ctrl->obj->rq->id;
+			else
+				rqt_attr->rq_list[i] = rxq->devx_rq.rq->id;
+		}
 	}
 	MLX5_ASSERT(i > 0);
 	for (j = 0; i != rqt_n; ++j, ++i)
@@ -711,7 +719,13 @@ mlx5_devx_tir_attr_set(struct rte_eth_dev *dev, const uint8_t *rss_key,
 	uint32_t i;
 
 	/* NULL queues designate drop queue. */
-	if (ind_tbl->queues != NULL) {
+	if (ind_tbl->queues == NULL) {
+		is_hairpin = priv->drop_queue.rxq->ctrl->is_hairpin;
+	} else if (mlx5_is_external_rxq(dev, ind_tbl->queues[0])) {
+		/* External RxQ supports neither Hairpin nor LRO. */
+		is_hairpin = false;
+		lro = false;
+	} else {
 		is_hairpin = mlx5_rxq_is_hairpin(dev, ind_tbl->queues[0]);
 		/* Enable TIR LRO only if all the queues were configured for. */
 		for (i = 0; i < ind_tbl->queues_n; ++i) {
@@ -723,8 +737,6 @@ mlx5_devx_tir_attr_set(struct rte_eth_dev *dev, const uint8_t *rss_key,
 				break;
 			}
 		}
-	} else {
-		is_hairpin = priv->drop_queue.rxq->ctrl->is_hairpin;
 	}
 	memset(tir_attr, 0, sizeof(*tir_attr));
 	tir_attr->disp_type = MLX5_TIRC_DISP_TYPE_INDIRECT;
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index 58f0aba294..96f3402418 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -1631,6 +1631,12 @@ mlx5_flow_validate_action_queue(const struct rte_flow_action *action,
 					  RTE_FLOW_ERROR_TYPE_ACTION, NULL,
 					  "can't have 2 fate actions in"
 					  " same flow");
+	if (attr->egress)
+		return rte_flow_error_set(error, ENOTSUP,
+					  RTE_FLOW_ERROR_TYPE_ATTR_EGRESS, NULL,
+					  "queue action not supported for egress.");
+	if (mlx5_is_external_rxq(dev, queue->index))
+		return 0;
 	if (!priv->rxqs_n)
 		return rte_flow_error_set(error, EINVAL,
 					  RTE_FLOW_ERROR_TYPE_ACTION_CONF,
@@ -1645,11 +1651,6 @@ mlx5_flow_validate_action_queue(const struct rte_flow_action *action,
 					  RTE_FLOW_ERROR_TYPE_ACTION_CONF,
 					  &queue->index,
 					  "queue is not configured");
-	if (attr->egress)
-		return rte_flow_error_set(error, ENOTSUP,
-					  RTE_FLOW_ERROR_TYPE_ATTR_EGRESS, NULL,
-					  "queue action not supported for "
-					  "egress");
 	return 0;
 }
 
@@ -1664,7 +1665,7 @@ mlx5_flow_validate_action_queue(const struct rte_flow_action *action,
  *   Size of the @p queues array.
  * @param[out] error
  *   On error, filled with a textual error description.
- * @param[out] queue
+ * @param[out] queue_idx
  *   On error, filled with an offending queue index in @p queues array.
  *
  * @return
@@ -1677,17 +1678,27 @@ mlx5_validate_rss_queues(struct rte_eth_dev *dev,
 {
 	const struct mlx5_priv *priv = dev->data->dev_private;
 	bool is_hairpin = false;
+	bool is_ext_rss = false;
 	uint32_t i;
 
 	for (i = 0; i != queues_n; ++i) {
-		struct mlx5_rxq_ctrl *rxq_ctrl = mlx5_rxq_ctrl_get(dev,
-								   queues[i]);
+		struct mlx5_rxq_ctrl *rxq_ctrl;
 
+		if (mlx5_is_external_rxq(dev, queues[i])) {
+			is_ext_rss = true;
+			continue;
+		}
+		if (is_ext_rss) {
+			*error = "Combining external and regular RSS queues is not supported";
+			*queue_idx = i;
+			return -ENOTSUP;
+		}
 		if (queues[i] >= priv->rxqs_n) {
 			*error = "queue index out of range";
 			*queue_idx = i;
 			return -EINVAL;
 		}
+		rxq_ctrl = mlx5_rxq_ctrl_get(dev, queues[i]);
 		if (rxq_ctrl == NULL) {
 			*error =  "queue is not configured";
 			*queue_idx = i;
@@ -1782,7 +1793,7 @@ mlx5_validate_action_rss(struct rte_eth_dev *dev,
 					  RTE_FLOW_ERROR_TYPE_ACTION_CONF, NULL,
 					  "L4 partial RSS requested but L4 RSS"
 					  " type not specified");
-	if (!priv->rxqs_n)
+	if (!priv->rxqs_n && priv->ext_rxqs == NULL)
 		return rte_flow_error_set(error, EINVAL,
 					  RTE_FLOW_ERROR_TYPE_ACTION_CONF,
 					  NULL, "No Rx queues configured");
diff --git a/drivers/net/mlx5/mlx5_rx.h b/drivers/net/mlx5/mlx5_rx.h
index 754c526464..29652a8c9f 100644
--- a/drivers/net/mlx5/mlx5_rx.h
+++ b/drivers/net/mlx5/mlx5_rx.h
@@ -18,6 +18,7 @@
 
 #include "mlx5.h"
 #include "mlx5_autoconf.h"
+#include "rte_pmd_mlx5.h"
 
 /* Support tunnel matching. */
 #define MLX5_FLOW_TUNNEL 10
@@ -217,8 +218,14 @@ uint32_t mlx5_rxq_deref(struct rte_eth_dev *dev, uint16_t idx);
 struct mlx5_rxq_priv *mlx5_rxq_get(struct rte_eth_dev *dev, uint16_t idx);
 struct mlx5_rxq_ctrl *mlx5_rxq_ctrl_get(struct rte_eth_dev *dev, uint16_t idx);
 struct mlx5_rxq_data *mlx5_rxq_data_get(struct rte_eth_dev *dev, uint16_t idx);
+struct mlx5_external_rxq *mlx5_ext_rxq_ref(struct rte_eth_dev *dev,
+					   uint16_t idx);
+uint32_t mlx5_ext_rxq_deref(struct rte_eth_dev *dev, uint16_t idx);
+struct mlx5_external_rxq *mlx5_ext_rxq_get(struct rte_eth_dev *dev,
+					   uint16_t idx);
 int mlx5_rxq_release(struct rte_eth_dev *dev, uint16_t idx);
 int mlx5_rxq_verify(struct rte_eth_dev *dev);
+int mlx5_ext_rxq_verify(struct rte_eth_dev *dev);
 int rxq_alloc_elts(struct mlx5_rxq_ctrl *rxq_ctrl);
 int mlx5_ind_table_obj_verify(struct rte_eth_dev *dev);
 struct mlx5_ind_table_obj *mlx5_ind_table_obj_get(struct rte_eth_dev *dev,
@@ -638,4 +645,27 @@ mlx5_mprq_enabled(struct rte_eth_dev *dev)
 	return n == n_ibv;
 }
 
+/**
+ * Check whether given RxQ is external.
+ *
+ * @param dev
+ *   Pointer to Ethernet device.
+ * @param queue_idx
+ *   Rx queue index.
+ *
+ * @return
+ *   True if is external RxQ, otherwise false.
+ */
+static __rte_always_inline bool
+mlx5_is_external_rxq(struct rte_eth_dev *dev, uint16_t queue_idx)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5_external_rxq *rxq;
+
+	if (!priv->ext_rxqs || queue_idx < MLX5_EXTERNAL_RX_QUEUE_ID_MIN)
+		return false;
+	rxq = &priv->ext_rxqs[queue_idx - MLX5_EXTERNAL_RX_QUEUE_ID_MIN];
+	return !!__atomic_load_n(&rxq->refcnt, __ATOMIC_RELAXED);
+}
+
 #endif /* RTE_PMD_MLX5_RX_H_ */
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index 145da2dbbb..22679755a4 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -2084,6 +2084,65 @@ mlx5_rxq_data_get(struct rte_eth_dev *dev, uint16_t idx)
 	return rxq == NULL ? NULL : &rxq->ctrl->rxq;
 }
 
+/**
+ * Increase an external Rx queue reference count.
+ *
+ * @param dev
+ *   Pointer to Ethernet device.
+ * @param idx
+ *   External RX queue index.
+ *
+ * @return
+ *   A pointer to the queue if it exists, NULL otherwise.
+ */
+struct mlx5_external_rxq *
+mlx5_ext_rxq_ref(struct rte_eth_dev *dev, uint16_t idx)
+{
+	struct mlx5_external_rxq *rxq = mlx5_ext_rxq_get(dev, idx);
+
+	__atomic_fetch_add(&rxq->refcnt, 1, __ATOMIC_RELAXED);
+	return rxq;
+}
+
+/**
+ * Decrease an external Rx queue reference count.
+ *
+ * @param dev
+ *   Pointer to Ethernet device.
+ * @param idx
+ *   External RX queue index.
+ *
+ * @return
+ *   Updated reference count.
+ */
+uint32_t
+mlx5_ext_rxq_deref(struct rte_eth_dev *dev, uint16_t idx)
+{
+	struct mlx5_external_rxq *rxq = mlx5_ext_rxq_get(dev, idx);
+
+	return __atomic_sub_fetch(&rxq->refcnt, 1, __ATOMIC_RELAXED);
+}
+
+/**
+ * Get an external Rx queue.
+ *
+ * @param dev
+ *   Pointer to Ethernet device.
+ * @param idx
+ *   External Rx queue index.
+ *
+ * @return
+ *   A pointer to the queue if it exists, NULL otherwise.
+ */
+struct mlx5_external_rxq *
+mlx5_ext_rxq_get(struct rte_eth_dev *dev, uint16_t idx)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+
+	MLX5_ASSERT(mlx5_is_external_rxq(dev, idx));
+	return &priv->ext_rxqs[idx - MLX5_EXTERNAL_RX_QUEUE_ID_MIN];
+}
+
 /**
  * Release a Rx queue.
  *
@@ -2167,6 +2226,37 @@ mlx5_rxq_verify(struct rte_eth_dev *dev)
 	return ret;
 }
 
+/**
+ * Verify the external Rx Queue list is empty.
+ *
+ * @param dev
+ *   Pointer to Ethernet device.
+ *
+ * @return
+ *   The number of objects not released.
+ */
+int
+mlx5_ext_rxq_verify(struct rte_eth_dev *dev)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5_external_rxq *rxq;
+	uint32_t i;
+	int ret = 0;
+
+	if (priv->ext_rxqs == NULL)
+		return 0;
+
+	for (i = MLX5_EXTERNAL_RX_QUEUE_ID_MIN; i <= UINT16_MAX; ++i) {
+		rxq = mlx5_ext_rxq_get(dev, i);
+		if (rxq->refcnt < 2)
+			continue;
+		DRV_LOG(DEBUG, "Port %u external RxQ %u still referenced.",
+			dev->data->port_id, i);
+		++ret;
+	}
+	return ret;
+}
+
 /**
  * Check whether RxQ type is Hairpin.
  *
@@ -2182,8 +2272,11 @@ bool
 mlx5_rxq_is_hairpin(struct rte_eth_dev *dev, uint16_t idx)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
-	struct mlx5_rxq_ctrl *rxq_ctrl = mlx5_rxq_ctrl_get(dev, idx);
+	struct mlx5_rxq_ctrl *rxq_ctrl;
 
+	if (mlx5_is_external_rxq(dev, idx))
+		return false;
+	rxq_ctrl = mlx5_rxq_ctrl_get(dev, idx);
 	return (idx < priv->rxqs_n && rxq_ctrl != NULL && rxq_ctrl->is_hairpin);
 }
 
@@ -2361,9 +2454,16 @@ mlx5_ind_table_obj_setup(struct rte_eth_dev *dev,
 
 	if (ref_qs)
 		for (i = 0; i != queues_n; ++i) {
-			if (mlx5_rxq_ref(dev, queues[i]) == NULL) {
-				ret = -rte_errno;
-				goto error;
+			if (mlx5_is_external_rxq(dev, queues[i])) {
+				if (mlx5_ext_rxq_ref(dev, queues[i]) == NULL) {
+					ret = -rte_errno;
+					goto error;
+				}
+			} else {
+				if (mlx5_rxq_ref(dev, queues[i]) == NULL) {
+					ret = -rte_errno;
+					goto error;
+				}
 			}
 		}
 	ret = priv->obj_ops.ind_table_new(dev, n, ind_tbl);
@@ -2374,8 +2474,12 @@ mlx5_ind_table_obj_setup(struct rte_eth_dev *dev,
 error:
 	if (ref_qs) {
 		err = rte_errno;
-		for (j = 0; j < i; j++)
-			mlx5_rxq_deref(dev, queues[j]);
+		for (j = 0; j < i; j++) {
+			if (mlx5_is_external_rxq(dev, queues[j]))
+				mlx5_ext_rxq_deref(dev, queues[j]);
+			else
+				mlx5_rxq_deref(dev, queues[j]);
+		}
 		rte_errno = err;
 	}
 	DRV_LOG(DEBUG, "Port %u cannot setup indirection table.",
-- 
2.25.1


^ permalink raw reply	[flat|nested] 26+ messages in thread

* RE: [PATCH v2 0/6] mlx5: external RxQ support
  2022-02-23 18:48 ` [PATCH v2 0/6] mlx5: external RxQ support Michael Baum
                     ` (5 preceding siblings ...)
  2022-02-23 18:48   ` [PATCH v2 6/6] net/mlx5: support queue/RSS action for external RxQ Michael Baum
@ 2022-02-24  8:38   ` Matan Azrad
  2022-02-24 23:25   ` [PATCH v3 " Michael Baum
  7 siblings, 0 replies; 26+ messages in thread
From: Matan Azrad @ 2022-02-24  8:38 UTC (permalink / raw)
  To: Michael Baum, dev; +Cc: Raslan Darawsheh, Slava Ovsiienko



From: Michael Baum
> These patches add support to external Rx queues.
> External queue is a queue that is managed by a process external to PMD, but
> uses PMD process to generate its flow rules.
> 
> For the hardware to allow the DPDK process to set rules for it, the process needs
> to use the same PD of the external process. In addition, the indexes of the
> queues in hardware are represented by 32-bit compared to the rte_flow indexes
> represented by 16-bit, so the processes need to share some mapping between
> the indexes.
> 
> These patches allow the external process to provide devargs which enable
> importing its context and PD, instead of prepare new ones. In addition, an API is
> provided for mapping for the indexes of the queues.
> 
> v2:
> - Rebase.
> - Add ABI exception for common/mlx5 library.
> - Correct DevX flag updating.
> - Improve explanations in doc and comments.
> - Remove testpmd part.
> 

Series-acked-by: Matan Azrad <matan@nvidia.com>

> Michael Baum (6):
>   common/mlx5: consider local functions as internal
>   common/mlx5: glue device and PD importation
>   common/mlx5: add remote PD and CTX support
>   net/mlx5: optimize RxQ/TxQ control structure
>   net/mlx5: add external RxQ mapping API
>   net/mlx5: support queue/RSS action for external RxQ
> 
>  devtools/libabigail.abignore                 |   4 +
>  doc/guides/nics/mlx5.rst                     |   1 +
>  doc/guides/platform/mlx5.rst                 |  37 ++-
>  doc/guides/rel_notes/release_22_03.rst       |   1 +
>  drivers/common/mlx5/linux/meson.build        |   2 +
>  drivers/common/mlx5/linux/mlx5_common_os.c   | 196 ++++++++++++--
>  drivers/common/mlx5/linux/mlx5_common_os.h   |   7 +-
>  drivers/common/mlx5/linux/mlx5_glue.c        |  41 +++
>  drivers/common/mlx5/linux/mlx5_glue.h        |   4 +
>  drivers/common/mlx5/mlx5_common.c            |  64 ++++-
>  drivers/common/mlx5/mlx5_common.h            |  23 +-
>  drivers/common/mlx5/version.map              |   3 +
>  drivers/common/mlx5/windows/mlx5_common_os.c |  37 ++-
>  drivers/common/mlx5/windows/mlx5_common_os.h |   1 -
>  drivers/net/mlx5/linux/mlx5_os.c             |  18 ++
>  drivers/net/mlx5/mlx5.c                      |   6 +
>  drivers/net/mlx5/mlx5.h                      |   1 +
>  drivers/net/mlx5/mlx5_defs.h                 |   3 +
>  drivers/net/mlx5/mlx5_devx.c                 |  52 ++--
>  drivers/net/mlx5/mlx5_ethdev.c               |  18 +-
>  drivers/net/mlx5/mlx5_flow.c                 |  43 ++--
>  drivers/net/mlx5/mlx5_flow_dv.c              |  14 +-
>  drivers/net/mlx5/mlx5_rx.h                   |  49 +++-
>  drivers/net/mlx5/mlx5_rxq.c                  | 258 +++++++++++++++++--
>  drivers/net/mlx5/mlx5_trigger.c              |  36 +--
>  drivers/net/mlx5/mlx5_tx.h                   |   7 +-
>  drivers/net/mlx5/mlx5_txq.c                  |  14 +-
>  drivers/net/mlx5/rte_pmd_mlx5.h              |  50 +++-
>  drivers/net/mlx5/version.map                 |   3 +
>  29 files changed, 821 insertions(+), 172 deletions(-)
> 
> --
> 2.25.1


^ permalink raw reply	[flat|nested] 26+ messages in thread

* [PATCH v3 0/6] mlx5: external RxQ support
  2022-02-23 18:48 ` [PATCH v2 0/6] mlx5: external RxQ support Michael Baum
                     ` (6 preceding siblings ...)
  2022-02-24  8:38   ` [PATCH v2 0/6] mlx5: external RxQ support Matan Azrad
@ 2022-02-24 23:25   ` Michael Baum
  2022-02-24 23:25     ` [PATCH v3 1/6] common/mlx5: consider local functions as internal Michael Baum
                       ` (6 more replies)
  7 siblings, 7 replies; 26+ messages in thread
From: Michael Baum @ 2022-02-24 23:25 UTC (permalink / raw)
  To: dev; +Cc: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko

These patches add support for external Rx queues.
An external queue is a queue managed by a process external to the PMD,
but which uses the PMD to generate its flow rules.

For the hardware to allow the DPDK process to set rules for it, the
process needs to use the same PD as the external process. In addition,
queue indexes in hardware are represented by 32 bits, while rte_flow
indexes are represented by 16 bits, so the processes need to share a
mapping between the indexes.

These patches allow the external process to provide devargs which enable
importing its context and PD instead of preparing new ones. In addition,
an API is provided for mapping the queue indexes.

v1:
- initial commits.

v2:
- Rebase.
- Add ABI exception for common/mlx5 library.
- Correct DevX flag updating.
- Improve explanations in doc and comments.
- Remove testpmd part.

v3:
- Rebase.
- Fix compilation error.
- Avoid a TOCTOU issue in the external RxQ map/unmap functions.
- Add a check that the queue is still referenced in the unmapping function.
- Improve guide explanations for the new devargs.


Michael Baum (6):
  common/mlx5: consider local functions as internal
  common/mlx5: glue device and PD importation
  common/mlx5: add remote PD and CTX support
  net/mlx5: optimize RxQ/TxQ control structure
  net/mlx5: add external RxQ mapping API
  net/mlx5: support queue/RSS action for external RxQ

 devtools/libabigail.abignore                 |   4 +
 doc/guides/nics/mlx5.rst                     |   1 +
 doc/guides/platform/mlx5.rst                 |  37 ++-
 doc/guides/rel_notes/release_22_03.rst       |   1 +
 drivers/common/mlx5/linux/meson.build        |   2 +
 drivers/common/mlx5/linux/mlx5_common_os.c   | 196 ++++++++++++--
 drivers/common/mlx5/linux/mlx5_common_os.h   |   7 +-
 drivers/common/mlx5/linux/mlx5_glue.c        |  41 +++
 drivers/common/mlx5/linux/mlx5_glue.h        |   4 +
 drivers/common/mlx5/mlx5_common.c            |  84 ++++--
 drivers/common/mlx5/mlx5_common.h            |  23 +-
 drivers/common/mlx5/version.map              |   3 +
 drivers/common/mlx5/windows/mlx5_common_os.c |  37 ++-
 drivers/common/mlx5/windows/mlx5_common_os.h |   1 -
 drivers/net/mlx5/linux/mlx5_os.c             |  17 ++
 drivers/net/mlx5/mlx5.c                      |   5 +
 drivers/net/mlx5/mlx5.h                      |   1 +
 drivers/net/mlx5/mlx5_defs.h                 |   3 +
 drivers/net/mlx5/mlx5_devx.c                 |  52 ++--
 drivers/net/mlx5/mlx5_ethdev.c               |  18 +-
 drivers/net/mlx5/mlx5_flow.c                 |  43 +--
 drivers/net/mlx5/mlx5_flow_dv.c              |  14 +-
 drivers/net/mlx5/mlx5_rx.h                   |  49 +++-
 drivers/net/mlx5/mlx5_rxq.c                  | 266 +++++++++++++++++--
 drivers/net/mlx5/mlx5_trigger.c              |  36 +--
 drivers/net/mlx5/mlx5_tx.h                   |   7 +-
 drivers/net/mlx5/mlx5_txq.c                  |  14 +-
 drivers/net/mlx5/rte_pmd_mlx5.h              |  50 +++-
 drivers/net/mlx5/version.map                 |   3 +
 29 files changed, 838 insertions(+), 181 deletions(-)

-- 
2.25.1


^ permalink raw reply	[flat|nested] 26+ messages in thread

* [PATCH v3 1/6] common/mlx5: consider local functions as internal
  2022-02-24 23:25   ` [PATCH v3 " Michael Baum
@ 2022-02-24 23:25     ` Michael Baum
  2022-02-25 18:01       ` Ferruh Yigit
  2022-02-24 23:25     ` [PATCH v3 2/6] common/mlx5: glue device and PD importation Michael Baum
                       ` (5 subsequent siblings)
  6 siblings, 1 reply; 26+ messages in thread
From: Michael Baum @ 2022-02-24 23:25 UTC (permalink / raw)
  To: dev; +Cc: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko

The functions which are not explicitly marked as internal
were exported because the local catch-all rule was missing in the
version script.
After adding the missing rule, all local functions are hidden.
The function mlx5_get_device_guid is used in another library,
so it needs to be exported (as internal).

Because the local functions were exported as non-internal
in DPDK 21.11, any change in these functions would break the ABI.
An ABI exception is added for this library, considering that all
functions are either local or internal.

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
 devtools/libabigail.abignore               | 4 ++++
 drivers/common/mlx5/linux/mlx5_common_os.h | 1 +
 drivers/common/mlx5/version.map            | 3 +++
 3 files changed, 8 insertions(+)

diff --git a/devtools/libabigail.abignore b/devtools/libabigail.abignore
index ef0602975a..78d57497e6 100644
--- a/devtools/libabigail.abignore
+++ b/devtools/libabigail.abignore
@@ -20,3 +20,7 @@
 ; Ignore changes to rte_crypto_asym_op, asymmetric crypto API is experimental
 [suppress_type]
         name = rte_crypto_asym_op
+
+; Ignore changes in common mlx5 driver, should be all internal
+[suppress_file]
+        soname_regexp = ^librte_common_mlx5\.
\ No newline at end of file
diff --git a/drivers/common/mlx5/linux/mlx5_common_os.h b/drivers/common/mlx5/linux/mlx5_common_os.h
index 83066e752d..edf356a30a 100644
--- a/drivers/common/mlx5/linux/mlx5_common_os.h
+++ b/drivers/common/mlx5/linux/mlx5_common_os.h
@@ -300,6 +300,7 @@ mlx5_set_context_attr(struct rte_device *dev, struct ibv_context *ctx);
  *  0 if OFED doesn't support.
  *  >0 if success.
  */
+__rte_internal
 int
 mlx5_get_device_guid(const struct rte_pci_addr *dev, uint8_t *guid, size_t len);
 
diff --git a/drivers/common/mlx5/version.map b/drivers/common/mlx5/version.map
index 1c6153c576..cb20a7d893 100644
--- a/drivers/common/mlx5/version.map
+++ b/drivers/common/mlx5/version.map
@@ -80,6 +80,7 @@ INTERNAL {
 
 	mlx5_free;
 
+	mlx5_get_device_guid; # WINDOWS_NO_EXPORT
 	mlx5_get_ifname_sysfs; # WINDOWS_NO_EXPORT
 	mlx5_get_pci_addr; # WINDOWS_NO_EXPORT
 
@@ -149,4 +150,6 @@ INTERNAL {
 	mlx5_mp_req_mempool_reg;
 	mlx5_mr_mempool2mr_bh;
 	mlx5_mr_mempool_populate_cache;
+
+	local: *;
 };
-- 
2.25.1


^ permalink raw reply	[flat|nested] 26+ messages in thread

* [PATCH v3 2/6] common/mlx5: glue device and PD importation
  2022-02-24 23:25   ` [PATCH v3 " Michael Baum
  2022-02-24 23:25     ` [PATCH v3 1/6] common/mlx5: consider local functions as internal Michael Baum
@ 2022-02-24 23:25     ` Michael Baum
  2022-02-24 23:25     ` [PATCH v3 3/6] common/mlx5: add remote PD and CTX support Michael Baum
                       ` (4 subsequent siblings)
  6 siblings, 0 replies; 26+ messages in thread
From: Michael Baum @ 2022-02-24 23:25 UTC (permalink / raw)
  To: dev; +Cc: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko

Add support for the rdma-core API to import a device.
The API takes an ibv_context file descriptor and returns an ibv_context
pointer that is associated with the given file descriptor.
Also add support for the rdma-core API to import a PD.
The API takes an ibv_context and a PD handle and returns the protection
domain (PD) that is associated with the given handle in the given
context.

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
 drivers/common/mlx5/linux/meson.build |  2 ++
 drivers/common/mlx5/linux/mlx5_glue.c | 41 +++++++++++++++++++++++++++
 drivers/common/mlx5/linux/mlx5_glue.h |  4 +++
 3 files changed, 47 insertions(+)

diff --git a/drivers/common/mlx5/linux/meson.build b/drivers/common/mlx5/linux/meson.build
index 4c7b53b9bd..ed48245c67 100644
--- a/drivers/common/mlx5/linux/meson.build
+++ b/drivers/common/mlx5/linux/meson.build
@@ -202,6 +202,8 @@ has_sym_args = [
             'mlx5dv_dr_domain_allow_duplicate_rules' ],
         [ 'HAVE_MLX5_IBV_REG_MR_IOVA', 'infiniband/verbs.h',
             'ibv_reg_mr_iova' ],
+        [ 'HAVE_MLX5_IBV_IMPORT_CTX_PD_AND_MR', 'infiniband/verbs.h',
+            'ibv_import_device' ],
 ]
 config = configuration_data()
 foreach arg:has_sym_args
diff --git a/drivers/common/mlx5/linux/mlx5_glue.c b/drivers/common/mlx5/linux/mlx5_glue.c
index bc6622053f..450dd6a06a 100644
--- a/drivers/common/mlx5/linux/mlx5_glue.c
+++ b/drivers/common/mlx5/linux/mlx5_glue.c
@@ -34,6 +34,32 @@ mlx5_glue_dealloc_pd(struct ibv_pd *pd)
 	return ibv_dealloc_pd(pd);
 }
 
+static struct ibv_pd *
+mlx5_glue_import_pd(struct ibv_context *context, uint32_t pd_handle)
+{
+#ifdef HAVE_MLX5_IBV_IMPORT_CTX_PD_AND_MR
+	return ibv_import_pd(context, pd_handle);
+#else
+	(void)context;
+	(void)pd_handle;
+	errno = ENOTSUP;
+	return NULL;
+#endif
+}
+
+static int
+mlx5_glue_unimport_pd(struct ibv_pd *pd)
+{
+#ifdef HAVE_MLX5_IBV_IMPORT_CTX_PD_AND_MR
+	ibv_unimport_pd(pd);
+	return 0;
+#else
+	(void)pd;
+	errno = ENOTSUP;
+	return -errno;
+#endif
+}
+
 static struct ibv_device **
 mlx5_glue_get_device_list(int *num_devices)
 {
@@ -52,6 +78,18 @@ mlx5_glue_open_device(struct ibv_device *device)
 	return ibv_open_device(device);
 }
 
+static struct ibv_context *
+mlx5_glue_import_device(int cmd_fd)
+{
+#ifdef HAVE_MLX5_IBV_IMPORT_CTX_PD_AND_MR
+	return ibv_import_device(cmd_fd);
+#else
+	(void)cmd_fd;
+	errno = ENOTSUP;
+	return NULL;
+#endif
+}
+
 static int
 mlx5_glue_close_device(struct ibv_context *context)
 {
@@ -1402,9 +1440,12 @@ const struct mlx5_glue *mlx5_glue = &(const struct mlx5_glue) {
 	.fork_init = mlx5_glue_fork_init,
 	.alloc_pd = mlx5_glue_alloc_pd,
 	.dealloc_pd = mlx5_glue_dealloc_pd,
+	.import_pd = mlx5_glue_import_pd,
+	.unimport_pd = mlx5_glue_unimport_pd,
 	.get_device_list = mlx5_glue_get_device_list,
 	.free_device_list = mlx5_glue_free_device_list,
 	.open_device = mlx5_glue_open_device,
+	.import_device = mlx5_glue_import_device,
 	.close_device = mlx5_glue_close_device,
 	.query_device = mlx5_glue_query_device,
 	.query_device_ex = mlx5_glue_query_device_ex,
diff --git a/drivers/common/mlx5/linux/mlx5_glue.h b/drivers/common/mlx5/linux/mlx5_glue.h
index 4e6d31f263..c4903a6dce 100644
--- a/drivers/common/mlx5/linux/mlx5_glue.h
+++ b/drivers/common/mlx5/linux/mlx5_glue.h
@@ -151,9 +151,13 @@ struct mlx5_glue {
 	int (*fork_init)(void);
 	struct ibv_pd *(*alloc_pd)(struct ibv_context *context);
 	int (*dealloc_pd)(struct ibv_pd *pd);
+	struct ibv_pd *(*import_pd)(struct ibv_context *context,
+				    uint32_t pd_handle);
+	int (*unimport_pd)(struct ibv_pd *pd);
 	struct ibv_device **(*get_device_list)(int *num_devices);
 	void (*free_device_list)(struct ibv_device **list);
 	struct ibv_context *(*open_device)(struct ibv_device *device);
+	struct ibv_context *(*import_device)(int cmd_fd);
 	int (*close_device)(struct ibv_context *context);
 	int (*query_device)(struct ibv_context *context,
 			    struct ibv_device_attr *device_attr);
-- 
2.25.1


^ permalink raw reply	[flat|nested] 26+ messages in thread

* [PATCH v3 3/6] common/mlx5: add remote PD and CTX support
  2022-02-24 23:25   ` [PATCH v3 " Michael Baum
  2022-02-24 23:25     ` [PATCH v3 1/6] common/mlx5: consider local functions as internal Michael Baum
  2022-02-24 23:25     ` [PATCH v3 2/6] common/mlx5: glue device and PD importation Michael Baum
@ 2022-02-24 23:25     ` Michael Baum
  2022-02-24 23:25     ` [PATCH v3 4/6] net/mlx5: optimize RxQ/TxQ control structure Michael Baum
                       ` (3 subsequent siblings)
  6 siblings, 0 replies; 26+ messages in thread
From: Michael Baum @ 2022-02-24 23:25 UTC (permalink / raw)
  To: dev; +Cc: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko

Add an option to probe the common device using the import CTX/PD
functions instead of the create functions.
This option requires receiving the context FD and the PD handle as
devargs.

This sharing can be useful for applications that use the PMD for only
some operations. For example, an application that creates queues itself
and uses the PMD only to configure flow rules.

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
 doc/guides/platform/mlx5.rst                 |  37 +++-
 drivers/common/mlx5/linux/mlx5_common_os.c   | 196 ++++++++++++++++---
 drivers/common/mlx5/linux/mlx5_common_os.h   |   6 -
 drivers/common/mlx5/mlx5_common.c            |  84 +++++---
 drivers/common/mlx5/mlx5_common.h            |  23 ++-
 drivers/common/mlx5/windows/mlx5_common_os.c |  37 +++-
 drivers/common/mlx5/windows/mlx5_common_os.h |   1 -
 7 files changed, 324 insertions(+), 60 deletions(-)

diff --git a/doc/guides/platform/mlx5.rst b/doc/guides/platform/mlx5.rst
index d073c213ca..76b3f80315 100644
--- a/doc/guides/platform/mlx5.rst
+++ b/doc/guides/platform/mlx5.rst
@@ -81,6 +81,12 @@ Limitations
 - On Windows, only ``eth`` and ``crypto`` are supported.
 
 
+Features
+--------
+
+- Remote PD and CTX - Linux only.
+
+
 .. _mlx5_common_compilation:
 
 Compilation Prerequisites
@@ -638,4 +644,33 @@ and below are the arguments supported by the common mlx5 layer.
 
   If ``sq_db_nc`` is omitted, the preset (if any) environment variable
   "MLX5_SHUT_UP_BF" value is used. If there is no "MLX5_SHUT_UP_BF", the
-  default ``sq_db_nc`` value is zero for ARM64 hosts and one for others.
\ No newline at end of file
+  default ``sq_db_nc`` value is zero for ARM64 hosts and one for others.
+
+- ``cmd_fd`` parameter [int]
+
+  File descriptor of ``ibv_context`` created outside the PMD.
+  PMD will use this FD to import remote CTX. The ``cmd_fd`` is obtained from
+  the ``ibv_context->cmd_fd`` member, which must be dup'd before being passed.
+  This parameter is valid only if ``pd_handle`` parameter is specified.
+
+  By default, the PMD will create a new ``ibv_context``.
+
+  .. note::
+
+     When the FD comes from another process, it is the user's responsibility
+     to share the FD between the processes (e.g. via SCM_RIGHTS).
+
+- ``pd_handle`` parameter [int]
+
+  Protection domain handle of ``ibv_pd`` created outside the PMD.
+  PMD will use this handle to import the remote PD. The ``pd_handle`` can be
+  obtained from the original PD by reading its ``ibv_pd->handle`` member.
+  This parameter is valid only if ``cmd_fd`` parameter is specified, and its
+  value must be a valid kernel handle for a PD object in the context represented
+  by given ``cmd_fd``.
+
+  By default, the PMD will allocate a new PD.
+
+  .. note::
+
+     The ``ibv_pd->handle`` member is different from the ``mlx5dv_pd->pdn`` member.
\ No newline at end of file
diff --git a/drivers/common/mlx5/linux/mlx5_common_os.c b/drivers/common/mlx5/linux/mlx5_common_os.c
index a752d79e8e..a3c25638da 100644
--- a/drivers/common/mlx5/linux/mlx5_common_os.c
+++ b/drivers/common/mlx5/linux/mlx5_common_os.c
@@ -408,27 +408,128 @@ mlx5_glue_constructor(void)
 }
 
 /**
- * Allocate Protection Domain object and extract its pdn using DV API.
+ * Validate user arguments for remote PD and CTX.
+ *
+ * @param config
+ *   Pointer to device configuration structure.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+int
+mlx5_os_remote_pd_and_ctx_validate(struct mlx5_common_dev_config *config)
+{
+	int device_fd = config->device_fd;
+	int pd_handle = config->pd_handle;
+
+#ifdef HAVE_MLX5_IBV_IMPORT_CTX_PD_AND_MR
+	if (device_fd == MLX5_ARG_UNSET && pd_handle != MLX5_ARG_UNSET) {
+		DRV_LOG(ERR, "Remote PD without CTX is not supported.");
+		rte_errno = EINVAL;
+		return -rte_errno;
+	}
+	if (device_fd != MLX5_ARG_UNSET && pd_handle == MLX5_ARG_UNSET) {
+		DRV_LOG(ERR, "Remote CTX without PD is not supported.");
+		rte_errno = EINVAL;
+		return -rte_errno;
+	}
+	DRV_LOG(DEBUG, "Remote PD and CTX is supported: (cmd_fd=%d, "
+		"pd_handle=%d).", device_fd, pd_handle);
+#else
+	if (pd_handle != MLX5_ARG_UNSET || device_fd != MLX5_ARG_UNSET) {
+		DRV_LOG(ERR,
+			"Remote PD and CTX is not supported - maybe old rdma-core version?");
+		rte_errno = ENOTSUP;
+		return -rte_errno;
+	}
+#endif
+	return 0;
+}
+
+/**
+ * Release Protection Domain object.
  *
  * @param[out] cdev
  *   Pointer to the mlx5 device.
  *
  * @return
- *   0 on success, a negative errno value otherwise and rte_errno is set.
+ *   0 on success, a negative errno value otherwise.
  */
 int
+mlx5_os_pd_release(struct mlx5_common_device *cdev)
+{
+	if (cdev->config.pd_handle == MLX5_ARG_UNSET)
+		return mlx5_glue->dealloc_pd(cdev->pd);
+	else
+		return mlx5_glue->unimport_pd(cdev->pd);
+}
+
+/**
+ * Allocate Protection Domain object.
+ *
+ * @param[out] cdev
+ *   Pointer to the mlx5 device.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise.
+ */
+static int
 mlx5_os_pd_create(struct mlx5_common_device *cdev)
+{
+	cdev->pd = mlx5_glue->alloc_pd(cdev->ctx);
+	if (cdev->pd == NULL) {
+		DRV_LOG(ERR, "Failed to allocate PD: %s", rte_strerror(errno));
+		return errno ? -errno : -ENOMEM;
+	}
+	return 0;
+}
+
+/**
+ * Import Protection Domain object according to given PD handle.
+ *
+ * @param[out] cdev
+ *   Pointer to the mlx5 device.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise.
+ */
+static int
+mlx5_os_pd_import(struct mlx5_common_device *cdev)
+{
+	cdev->pd = mlx5_glue->import_pd(cdev->ctx, cdev->config.pd_handle);
+	if (cdev->pd == NULL) {
+		DRV_LOG(ERR, "Failed to import PD using handle=%d: %s",
+			cdev->config.pd_handle, rte_strerror(errno));
+		return errno ? -errno : -ENOMEM;
+	}
+	return 0;
+}
+
+/**
+ * Prepare Protection Domain object and extract its pdn using DV API.
+ *
+ * @param[out] cdev
+ *   Pointer to the mlx5 device.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+int
+mlx5_os_pd_prepare(struct mlx5_common_device *cdev)
 {
 #ifdef HAVE_IBV_FLOW_DV_SUPPORT
 	struct mlx5dv_obj obj;
 	struct mlx5dv_pd pd_info;
-	int ret;
 #endif
+	int ret;
 
-	cdev->pd = mlx5_glue->alloc_pd(cdev->ctx);
-	if (cdev->pd == NULL) {
-		DRV_LOG(ERR, "Failed to allocate PD.");
-		return errno ? -errno : -ENOMEM;
+	if (cdev->config.pd_handle == MLX5_ARG_UNSET)
+		ret = mlx5_os_pd_create(cdev);
+	else
+		ret = mlx5_os_pd_import(cdev);
+	if (ret) {
+		rte_errno = -ret;
+		return ret;
 	}
 	if (cdev->config.devx == 0)
 		return 0;
@@ -438,15 +539,17 @@ mlx5_os_pd_create(struct mlx5_common_device *cdev)
 	ret = mlx5_glue->dv_init_obj(&obj, MLX5DV_OBJ_PD);
 	if (ret != 0) {
 		DRV_LOG(ERR, "Fail to get PD object info.");
-		mlx5_glue->dealloc_pd(cdev->pd);
+		rte_errno = errno;
+		claim_zero(mlx5_os_pd_release(cdev));
 		cdev->pd = NULL;
-		return -errno;
+		return -rte_errno;
 	}
 	cdev->pdn = pd_info.pdn;
 	return 0;
 #else
 	DRV_LOG(ERR, "Cannot get pdn - no DV support.");
-	return -ENOTSUP;
+	rte_errno = ENOTSUP;
+	return -rte_errno;
 #endif /* HAVE_IBV_FLOW_DV_SUPPORT */
 }
 
@@ -648,28 +751,28 @@ mlx5_restore_doorbell_mapping_env(int value)
 /**
  * Function API to open IB device.
  *
- *
  * @param cdev
  *   Pointer to the mlx5 device.
  * @param classes
  *   Chosen classes come from device arguments.
  *
  * @return
- *   0 on success, a negative errno value otherwise and rte_errno is set.
+ *   Pointer to ibv_context on success, NULL otherwise and rte_errno is set.
  */
-int
-mlx5_os_open_device(struct mlx5_common_device *cdev, uint32_t classes)
+static struct ibv_context *
+mlx5_open_device(struct mlx5_common_device *cdev, uint32_t classes)
 {
 	struct ibv_device *ibv;
 	struct ibv_context *ctx = NULL;
 	int dbmap_env;
 
+	MLX5_ASSERT(cdev->config.device_fd == MLX5_ARG_UNSET);
 	if (classes & MLX5_CLASS_VDPA)
 		ibv = mlx5_vdpa_get_ibv_dev(cdev->dev);
 	else
 		ibv = mlx5_os_get_ibv_dev(cdev->dev);
 	if (!ibv)
-		return -rte_errno;
+		return NULL;
 	DRV_LOG(INFO, "Dev information matches for device \"%s\".", ibv->name);
 	/*
 	 * Configure environment variable "MLX5_BF_SHUT_UP" before the device
@@ -682,29 +785,78 @@ mlx5_os_open_device(struct mlx5_common_device *cdev, uint32_t classes)
 	ctx = mlx5_glue->dv_open_device(ibv);
 	if (ctx) {
 		cdev->config.devx = 1;
-		DRV_LOG(DEBUG, "DevX is supported.");
 	} else if (classes == MLX5_CLASS_ETH) {
 		/* The environment variable is still configured. */
 		ctx = mlx5_glue->open_device(ibv);
 		if (ctx == NULL)
 			goto error;
-		DRV_LOG(DEBUG, "DevX is NOT supported.");
 	} else {
 		goto error;
 	}
 	/* The device is created, no need for environment. */
 	mlx5_restore_doorbell_mapping_env(dbmap_env);
-	/* Hint libmlx5 to use PMD allocator for data plane resources */
-	mlx5_set_context_attr(cdev->dev, ctx);
-	cdev->ctx = ctx;
-	return 0;
+	return ctx;
 error:
 	rte_errno = errno ? errno : ENODEV;
 	/* The device creation is failed, no need for environment. */
 	mlx5_restore_doorbell_mapping_env(dbmap_env);
 	DRV_LOG(ERR, "Failed to open IB device \"%s\".", ibv->name);
-	return -rte_errno;
+	return NULL;
+}
+
+/**
+ * Function API to import IB device.
+ *
+ * @param cdev
+ *   Pointer to the mlx5 device.
+ *
+ * @return
+ *   Pointer to ibv_context on success, NULL otherwise and rte_errno is set.
+ */
+static struct ibv_context *
+mlx5_import_device(struct mlx5_common_device *cdev)
+{
+	struct ibv_context *ctx = NULL;
+
+	MLX5_ASSERT(cdev->config.device_fd != MLX5_ARG_UNSET);
+	ctx = mlx5_glue->import_device(cdev->config.device_fd);
+	if (!ctx) {
+		DRV_LOG(ERR, "Failed to import device for fd=%d: %s",
+			cdev->config.device_fd, rte_strerror(errno));
+		rte_errno = errno;
+	}
+	return ctx;
+}
+
+/**
+ * Function API to prepare IB device.
+ *
+ * @param cdev
+ *   Pointer to the mlx5 device.
+ * @param classes
+ *   Chosen classes come from device arguments.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+int
+mlx5_os_open_device(struct mlx5_common_device *cdev, uint32_t classes)
+{
+
+	struct ibv_context *ctx = NULL;
+
+	if (cdev->config.device_fd == MLX5_ARG_UNSET)
+		ctx = mlx5_open_device(cdev, classes);
+	else
+		ctx = mlx5_import_device(cdev);
+	if (ctx == NULL)
+		return -rte_errno;
+	/* Hint libmlx5 to use PMD allocator for data plane resources */
+	mlx5_set_context_attr(cdev->dev, ctx);
+	cdev->ctx = ctx;
+	return 0;
 }
+
 int
 mlx5_get_device_guid(const struct rte_pci_addr *dev, uint8_t *guid, size_t len)
 {
diff --git a/drivers/common/mlx5/linux/mlx5_common_os.h b/drivers/common/mlx5/linux/mlx5_common_os.h
index edf356a30a..a85f3b5f3c 100644
--- a/drivers/common/mlx5/linux/mlx5_common_os.h
+++ b/drivers/common/mlx5/linux/mlx5_common_os.h
@@ -203,12 +203,6 @@ mlx5_os_get_devx_uar_page_id(void *uar)
 #endif
 }
 
-static inline int
-mlx5_os_dealloc_pd(void *pd)
-{
-	return mlx5_glue->dealloc_pd(pd);
-}
-
 __rte_internal
 static inline void *
 mlx5_os_umem_reg(void *ctx, void *addr, size_t size, uint32_t access)
diff --git a/drivers/common/mlx5/mlx5_common.c b/drivers/common/mlx5/mlx5_common.c
index 8cf391df13..94c303ce81 100644
--- a/drivers/common/mlx5/mlx5_common.c
+++ b/drivers/common/mlx5/mlx5_common.c
@@ -24,6 +24,12 @@ uint8_t haswell_broadwell_cpu;
 /* Driver type key for new device global syntax. */
 #define MLX5_DRIVER_KEY "driver"
 
+/* Device parameter to get file descriptor for import device. */
+#define MLX5_DEVICE_FD "cmd_fd"
+
+/* Device parameter to get PD number for import Protection Domain. */
+#define MLX5_PD_HANDLE "pd_handle"
+
 /* Enable extending memsegs when creating a MR. */
 #define MLX5_MR_EXT_MEMSEG_EN "mr_ext_memseg_en"
 
@@ -283,6 +289,10 @@ mlx5_common_args_check_handler(const char *key, const char *val, void *opaque)
 		config->mr_mempool_reg_en = !!tmp;
 	} else if (strcmp(key, MLX5_SYS_MEM_EN) == 0) {
 		config->sys_mem_en = !!tmp;
+	} else if (strcmp(key, MLX5_DEVICE_FD) == 0) {
+		config->device_fd = tmp;
+	} else if (strcmp(key, MLX5_PD_HANDLE) == 0) {
+		config->pd_handle = tmp;
 	}
 	return 0;
 }
@@ -310,6 +320,8 @@ mlx5_common_config_get(struct mlx5_kvargs_ctrl *mkvlist,
 		MLX5_MR_EXT_MEMSEG_EN,
 		MLX5_SYS_MEM_EN,
 		MLX5_MR_MEMPOOL_REG_EN,
+		MLX5_DEVICE_FD,
+		MLX5_PD_HANDLE,
 		NULL,
 	};
 	int ret = 0;
@@ -321,13 +333,19 @@ mlx5_common_config_get(struct mlx5_kvargs_ctrl *mkvlist,
 	config->mr_mempool_reg_en = 1;
 	config->sys_mem_en = 0;
 	config->dbnc = MLX5_ARG_UNSET;
+	config->device_fd = MLX5_ARG_UNSET;
+	config->pd_handle = MLX5_ARG_UNSET;
 	/* Process common parameters. */
 	ret = mlx5_kvargs_process(mkvlist, params,
 				  mlx5_common_args_check_handler, config);
 	if (ret) {
 		rte_errno = EINVAL;
-		ret = -rte_errno;
+		return -rte_errno;
 	}
+	/* Validate user arguments for remote PD and CTX if it is given. */
+	ret = mlx5_os_remote_pd_and_ctx_validate(config);
+	if (ret)
+		return ret;
 	DRV_LOG(DEBUG, "mr_ext_memseg_en is %u.", config->mr_ext_memseg_en);
 	DRV_LOG(DEBUG, "mr_mempool_reg_en is %u.", config->mr_mempool_reg_en);
 	DRV_LOG(DEBUG, "sys_mem_en is %u.", config->sys_mem_en);
@@ -645,7 +663,7 @@ static void
 mlx5_dev_hw_global_release(struct mlx5_common_device *cdev)
 {
 	if (cdev->pd != NULL) {
-		claim_zero(mlx5_os_dealloc_pd(cdev->pd));
+		claim_zero(mlx5_os_pd_release(cdev));
 		cdev->pd = NULL;
 	}
 	if (cdev->ctx != NULL) {
@@ -674,20 +692,27 @@ mlx5_dev_hw_global_prepare(struct mlx5_common_device *cdev, uint32_t classes)
 	ret = mlx5_os_open_device(cdev, classes);
 	if (ret < 0)
 		return ret;
-	/* Allocate Protection Domain object and extract its pdn. */
-	ret = mlx5_os_pd_create(cdev);
+	/*
+	 * When the CTX is created by Verbs, querying HCA attributes is
+	 * unsupported. When the CTX is imported, there is no way to know
+	 * whether it was created by DevX or Verbs, so query the HCA
+	 * attributes to find out.
+	 */
+	if (cdev->config.devx || cdev->config.device_fd != MLX5_ARG_UNSET) {
+		/* Query HCA attributes. */
+		ret = mlx5_devx_cmd_query_hca_attr(cdev->ctx,
+						   &cdev->config.hca_attr);
+		if (ret) {
+			DRV_LOG(ERR, "Unable to read HCA caps in DevX mode.");
+			rte_errno = ENOTSUP;
+			goto error;
+		}
+		cdev->config.devx = 1;
+	}
+	DRV_LOG(DEBUG, "DevX is %ssupported.", cdev->config.devx ? "" : "NOT ");
+	/* Prepare Protection Domain object and extract its pdn. */
+	ret = mlx5_os_pd_prepare(cdev);
 	if (ret)
 		goto error;
-	/* All actions taken below are relevant only when DevX is supported */
-	if (cdev->config.devx == 0)
-		return 0;
-	/* Query HCA attributes. */
-	ret = mlx5_devx_cmd_query_hca_attr(cdev->ctx, &cdev->config.hca_attr);
-	if (ret) {
-		DRV_LOG(ERR, "Unable to read HCA capabilities.");
-		rte_errno = ENOTSUP;
-		goto error;
-	}
 	return 0;
 error:
 	mlx5_dev_hw_global_release(cdev);
@@ -814,26 +839,39 @@ mlx5_common_probe_again_args_validate(struct mlx5_common_device *cdev,
 	 * Checks the match between the temporary structure and the existing
 	 * common device structure.
 	 */
-	if (cdev->config.mr_ext_memseg_en ^ config->mr_ext_memseg_en) {
-		DRV_LOG(ERR, "\"mr_ext_memseg_en\" "
+	if (cdev->config.mr_ext_memseg_en != config->mr_ext_memseg_en) {
+		DRV_LOG(ERR, "\"" MLX5_MR_EXT_MEMSEG_EN "\" "
 			"configuration mismatch for device %s.",
 			cdev->dev->name);
 		goto error;
 	}
-	if (cdev->config.mr_mempool_reg_en ^ config->mr_mempool_reg_en) {
-		DRV_LOG(ERR, "\"mr_mempool_reg_en\" "
+	if (cdev->config.mr_mempool_reg_en != config->mr_mempool_reg_en) {
+		DRV_LOG(ERR, "\"" MLX5_MR_MEMPOOL_REG_EN "\" "
 			"configuration mismatch for device %s.",
 			cdev->dev->name);
 		goto error;
 	}
-	if (cdev->config.sys_mem_en ^ config->sys_mem_en) {
-		DRV_LOG(ERR,
-			"\"sys_mem_en\" configuration mismatch for device %s.",
+	if (cdev->config.device_fd != config->device_fd) {
+		DRV_LOG(ERR, "\"" MLX5_DEVICE_FD "\" "
+			"configuration mismatch for device %s.",
+			cdev->dev->name);
+		goto error;
+	}
+	if (cdev->config.pd_handle != config->pd_handle) {
+		DRV_LOG(ERR, "\"" MLX5_PD_HANDLE "\" "
+			"configuration mismatch for device %s.",
+			cdev->dev->name);
+		goto error;
+	}
+	if (cdev->config.sys_mem_en != config->sys_mem_en) {
+		DRV_LOG(ERR, "\"" MLX5_SYS_MEM_EN "\" "
+			"configuration mismatch for device %s.",
 			cdev->dev->name);
 		goto error;
 	}
-	if (cdev->config.dbnc ^ config->dbnc) {
-		DRV_LOG(ERR, "\"dbnc\" configuration mismatch for device %s.",
+	if (cdev->config.dbnc != config->dbnc) {
+		DRV_LOG(ERR, "\"" MLX5_SQ_DB_NC "\" "
+			"configuration mismatch for device %s.",
 			cdev->dev->name);
 		goto error;
 	}
diff --git a/drivers/common/mlx5/mlx5_common.h b/drivers/common/mlx5/mlx5_common.h
index 49bcea1d91..63f31437da 100644
--- a/drivers/common/mlx5/mlx5_common.h
+++ b/drivers/common/mlx5/mlx5_common.h
@@ -446,6 +446,8 @@ void mlx5_common_init(void);
 struct mlx5_common_dev_config {
 	struct mlx5_hca_attr hca_attr; /* HCA attributes. */
 	int dbnc; /* Skip doorbell register write barrier. */
+	int device_fd; /* Device file descriptor for importation. */
+	int pd_handle; /* Protection Domain handle for importation.  */
 	unsigned int devx:1; /* Whether devx interface is available or not. */
 	unsigned int sys_mem_en:1; /* The default memory allocator. */
 	unsigned int mr_mempool_reg_en:1;
@@ -465,6 +467,23 @@ struct mlx5_common_device {
 	struct mlx5_common_dev_config config; /* Device configuration. */
 };
 
+/**
+ * Indicates whether PD and CTX are imported from another process,
+ * or created by this process.
+ *
+ * @param cdev
+ *   Pointer to common device.
+ *
+ * @return
+ *   True if PD and CTX are imported from another process, False otherwise.
+ */
+static inline bool
+mlx5_imported_pd_and_ctx(struct mlx5_common_device *cdev)
+{
+	return cdev->config.device_fd != MLX5_ARG_UNSET &&
+	       cdev->config.pd_handle != MLX5_ARG_UNSET;
+}
+
 /**
  * Initialization function for the driver called during device probing.
  */
@@ -554,7 +573,9 @@ mlx5_devx_uar_release(struct mlx5_uar *uar);
 /* mlx5_common_os.c */
 
 int mlx5_os_open_device(struct mlx5_common_device *cdev, uint32_t classes);
-int mlx5_os_pd_create(struct mlx5_common_device *cdev);
+int mlx5_os_pd_prepare(struct mlx5_common_device *cdev);
+int mlx5_os_pd_release(struct mlx5_common_device *cdev);
+int mlx5_os_remote_pd_and_ctx_validate(struct mlx5_common_dev_config *config);
 
 /* mlx5 PMD wrapped MR struct. */
 struct mlx5_pmd_wrapped_mr {
diff --git a/drivers/common/mlx5/windows/mlx5_common_os.c b/drivers/common/mlx5/windows/mlx5_common_os.c
index c3cfc315f2..f2fc7cd494 100644
--- a/drivers/common/mlx5/windows/mlx5_common_os.c
+++ b/drivers/common/mlx5/windows/mlx5_common_os.c
@@ -25,21 +25,46 @@ mlx5_glue_constructor(void)
 {
 }
 
+/**
+ * Validate user arguments for remote PD and CTX.
+ *
+ * @param config
+ *   Pointer to device configuration structure.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+int
+mlx5_os_remote_pd_and_ctx_validate(struct mlx5_common_dev_config *config)
+{
+	int device_fd = config->device_fd;
+	int pd_handle = config->pd_handle;
+
+	if (pd_handle != MLX5_ARG_UNSET || device_fd != MLX5_ARG_UNSET) {
+		DRV_LOG(ERR, "Remote PD and CTX is not supported on Windows.");
+		rte_errno = ENOTSUP;
+		return -rte_errno;
+	}
+	return 0;
+}
+
 /**
  * Release PD. Releases a given mlx5_pd object
  *
- * @param[in] pd
- *   Pointer to mlx5_pd.
+ * @param[in] cdev
+ *   Pointer to the mlx5 device.
  *
  * @return
  *   Zero if pd is released successfully, negative number otherwise.
  */
 int
-mlx5_os_dealloc_pd(void *pd)
+mlx5_os_pd_release(struct mlx5_common_device *cdev)
 {
+	struct mlx5_pd *pd = cdev->pd;
+
 	if (!pd)
 		return -EINVAL;
-	mlx5_devx_cmd_destroy(((struct mlx5_pd *)pd)->obj);
+	mlx5_devx_cmd_destroy(pd->obj);
 	mlx5_free(pd);
 	return 0;
 }
@@ -47,14 +72,14 @@ mlx5_os_dealloc_pd(void *pd)
 /**
  * Allocate Protection Domain object and extract its pdn using DV API.
  *
- * @param[out] dev
+ * @param[out] cdev
  *   Pointer to the mlx5 device.
  *
  * @return
  *   0 on success, a negative value otherwise.
  */
 int
-mlx5_os_pd_create(struct mlx5_common_device *cdev)
+mlx5_os_pd_prepare(struct mlx5_common_device *cdev)
 {
 	struct mlx5_pd *pd;
 
diff --git a/drivers/common/mlx5/windows/mlx5_common_os.h b/drivers/common/mlx5/windows/mlx5_common_os.h
index 61fc8dd761..ee7973f1ec 100644
--- a/drivers/common/mlx5/windows/mlx5_common_os.h
+++ b/drivers/common/mlx5/windows/mlx5_common_os.h
@@ -248,7 +248,6 @@ mlx5_os_devx_subscribe_devx_event(void *eventc,
 	return -ENOTSUP;
 }
 
-int mlx5_os_dealloc_pd(void *pd);
 __rte_internal
 void *mlx5_os_umem_reg(void *ctx, void *addr, size_t size, uint32_t access);
 __rte_internal
-- 
2.25.1


^ permalink raw reply	[flat|nested] 26+ messages in thread

* [PATCH v3 4/6] net/mlx5: optimize RxQ/TxQ control structure
  2022-02-24 23:25   ` [PATCH v3 " Michael Baum
                       ` (2 preceding siblings ...)
  2022-02-24 23:25     ` [PATCH v3 3/6] common/mlx5: add remote PD and CTX support Michael Baum
@ 2022-02-24 23:25     ` Michael Baum
  2022-02-24 23:25     ` [PATCH v3 5/6] net/mlx5: add external RxQ mapping API Michael Baum
                       ` (2 subsequent siblings)
  6 siblings, 0 replies; 26+ messages in thread
From: Michael Baum @ 2022-02-24 23:25 UTC (permalink / raw)
  To: dev; +Cc: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko

The RxQ/TxQ control structure has a field named type. This type is an
enum with values for standard and hairpin queues, and its only use is to
check whether a queue is of the hairpin type or the standard type.

This patch replaces the enum with a boolean flag indicating whether the
queue is a hairpin queue.

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
 drivers/net/mlx5/mlx5_devx.c    | 26 ++++++++++--------------
 drivers/net/mlx5/mlx5_ethdev.c  |  2 +-
 drivers/net/mlx5/mlx5_flow.c    | 14 ++++++-------
 drivers/net/mlx5/mlx5_flow_dv.c | 14 +++++--------
 drivers/net/mlx5/mlx5_rx.h      | 13 +++---------
 drivers/net/mlx5/mlx5_rxq.c     | 33 +++++++++++-------------------
 drivers/net/mlx5/mlx5_trigger.c | 36 ++++++++++++++++-----------------
 drivers/net/mlx5/mlx5_tx.h      |  7 +------
 drivers/net/mlx5/mlx5_txq.c     | 14 ++++++-------
 9 files changed, 64 insertions(+), 95 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_devx.c b/drivers/net/mlx5/mlx5_devx.c
index 8d151fa4ab..bcd2358165 100644
--- a/drivers/net/mlx5/mlx5_devx.c
+++ b/drivers/net/mlx5/mlx5_devx.c
@@ -88,7 +88,7 @@ mlx5_devx_modify_rq(struct mlx5_rxq_priv *rxq, uint8_t type)
 	default:
 		break;
 	}
-	if (rxq->ctrl->type == MLX5_RXQ_TYPE_HAIRPIN)
+	if (rxq->ctrl->is_hairpin)
 		return mlx5_devx_cmd_modify_rq(rxq->ctrl->obj->rq, &rq_attr);
 	return mlx5_devx_cmd_modify_rq(rxq->devx_rq.rq, &rq_attr);
 }
@@ -162,7 +162,7 @@ mlx5_rxq_devx_obj_release(struct mlx5_rxq_priv *rxq)
 
 	if (rxq_obj == NULL)
 		return;
-	if (rxq_obj->rxq_ctrl->type == MLX5_RXQ_TYPE_HAIRPIN) {
+	if (rxq_obj->rxq_ctrl->is_hairpin) {
 		if (rxq_obj->rq == NULL)
 			return;
 		mlx5_devx_modify_rq(rxq, MLX5_RXQ_MOD_RDY2RST);
@@ -476,7 +476,7 @@ mlx5_rxq_devx_obj_new(struct mlx5_rxq_priv *rxq)
 
 	MLX5_ASSERT(rxq_data);
 	MLX5_ASSERT(tmpl);
-	if (rxq_ctrl->type == MLX5_RXQ_TYPE_HAIRPIN)
+	if (rxq_ctrl->is_hairpin)
 		return mlx5_rxq_obj_hairpin_new(rxq);
 	tmpl->rxq_ctrl = rxq_ctrl;
 	if (rxq_ctrl->irq && !rxq_ctrl->started) {
@@ -583,7 +583,7 @@ mlx5_devx_ind_table_create_rqt_attr(struct rte_eth_dev *dev,
 		struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, queues[i]);
 
 		MLX5_ASSERT(rxq != NULL);
-		if (rxq->ctrl->type == MLX5_RXQ_TYPE_HAIRPIN)
+		if (rxq->ctrl->is_hairpin)
 			rqt_attr->rq_list[i] = rxq->ctrl->obj->rq->id;
 		else
 			rqt_attr->rq_list[i] = rxq->devx_rq.rq->id;
@@ -706,17 +706,13 @@ mlx5_devx_tir_attr_set(struct rte_eth_dev *dev, const uint8_t *rss_key,
 		       int tunnel, struct mlx5_devx_tir_attr *tir_attr)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
-	enum mlx5_rxq_type rxq_obj_type;
+	bool is_hairpin;
 	bool lro = true;
 	uint32_t i;
 
 	/* NULL queues designate drop queue. */
 	if (ind_tbl->queues != NULL) {
-		struct mlx5_rxq_ctrl *rxq_ctrl =
-				mlx5_rxq_ctrl_get(dev, ind_tbl->queues[0]);
-		rxq_obj_type = rxq_ctrl != NULL ? rxq_ctrl->type :
-						  MLX5_RXQ_TYPE_STANDARD;
-
+		is_hairpin = mlx5_rxq_is_hairpin(dev, ind_tbl->queues[0]);
 		/* Enable TIR LRO only if all the queues were configured for. */
 		for (i = 0; i < ind_tbl->queues_n; ++i) {
 			struct mlx5_rxq_data *rxq_i =
@@ -728,7 +724,7 @@ mlx5_devx_tir_attr_set(struct rte_eth_dev *dev, const uint8_t *rss_key,
 			}
 		}
 	} else {
-		rxq_obj_type = priv->drop_queue.rxq->ctrl->type;
+		is_hairpin = priv->drop_queue.rxq->ctrl->is_hairpin;
 	}
 	memset(tir_attr, 0, sizeof(*tir_attr));
 	tir_attr->disp_type = MLX5_TIRC_DISP_TYPE_INDIRECT;
@@ -759,7 +755,7 @@ mlx5_devx_tir_attr_set(struct rte_eth_dev *dev, const uint8_t *rss_key,
 			(!!(hash_fields & MLX5_L4_DST_IBV_RX_HASH)) <<
 			 MLX5_RX_HASH_FIELD_SELECT_SELECTED_FIELDS_L4_DPORT;
 	}
-	if (rxq_obj_type == MLX5_RXQ_TYPE_HAIRPIN)
+	if (is_hairpin)
 		tir_attr->transport_domain = priv->sh->td->id;
 	else
 		tir_attr->transport_domain = priv->sh->tdn;
@@ -940,7 +936,7 @@ mlx5_rxq_devx_obj_drop_create(struct rte_eth_dev *dev)
 		goto error;
 	}
 	rxq_obj->rxq_ctrl = rxq_ctrl;
-	rxq_ctrl->type = MLX5_RXQ_TYPE_STANDARD;
+	rxq_ctrl->is_hairpin = false;
 	rxq_ctrl->sh = priv->sh;
 	rxq_ctrl->obj = rxq_obj;
 	rxq->ctrl = rxq_ctrl;
@@ -1242,7 +1238,7 @@ mlx5_txq_devx_obj_new(struct rte_eth_dev *dev, uint16_t idx)
 	struct mlx5_txq_ctrl *txq_ctrl =
 			container_of(txq_data, struct mlx5_txq_ctrl, txq);
 
-	if (txq_ctrl->type == MLX5_TXQ_TYPE_HAIRPIN)
+	if (txq_ctrl->is_hairpin)
 		return mlx5_txq_obj_hairpin_new(dev, idx);
 #if !defined(HAVE_MLX5DV_DEVX_UAR_OFFSET) && defined(HAVE_INFINIBAND_VERBS_H)
 	DRV_LOG(ERR, "Port %u Tx queue %u cannot create with DevX, no UAR.",
@@ -1381,7 +1377,7 @@ void
 mlx5_txq_devx_obj_release(struct mlx5_txq_obj *txq_obj)
 {
 	MLX5_ASSERT(txq_obj);
-	if (txq_obj->txq_ctrl->type == MLX5_TXQ_TYPE_HAIRPIN) {
+	if (txq_obj->txq_ctrl->is_hairpin) {
 		if (txq_obj->tis)
 			claim_zero(mlx5_devx_cmd_destroy(txq_obj->tis));
 #if defined(HAVE_MLX5DV_DEVX_UAR_OFFSET) || !defined(HAVE_INFINIBAND_VERBS_H)
diff --git a/drivers/net/mlx5/mlx5_ethdev.c b/drivers/net/mlx5/mlx5_ethdev.c
index 72bf8ac914..406761ccf8 100644
--- a/drivers/net/mlx5/mlx5_ethdev.c
+++ b/drivers/net/mlx5/mlx5_ethdev.c
@@ -173,7 +173,7 @@ mlx5_dev_configure_rss_reta(struct rte_eth_dev *dev)
 	for (i = 0, j = 0; i < rxqs_n; i++) {
 		struct mlx5_rxq_ctrl *rxq_ctrl = mlx5_rxq_ctrl_get(dev, i);
 
-		if (rxq_ctrl && rxq_ctrl->type == MLX5_RXQ_TYPE_STANDARD)
+		if (rxq_ctrl && !rxq_ctrl->is_hairpin)
 			rss_queue_arr[j++] = i;
 	}
 	rss_queue_n = j;
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index 5a4e000c12..09701a73c1 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -1788,7 +1788,7 @@ mlx5_validate_rss_queues(struct rte_eth_dev *dev,
 			 const char **error, uint32_t *queue_idx)
 {
 	const struct mlx5_priv *priv = dev->data->dev_private;
-	enum mlx5_rxq_type rxq_type = MLX5_RXQ_TYPE_UNDEFINED;
+	bool is_hairpin = false;
 	uint32_t i;
 
 	for (i = 0; i != queues_n; ++i) {
@@ -1805,9 +1805,9 @@ mlx5_validate_rss_queues(struct rte_eth_dev *dev,
 			*queue_idx = i;
 			return -EINVAL;
 		}
-		if (i == 0)
-			rxq_type = rxq_ctrl->type;
-		if (rxq_type != rxq_ctrl->type) {
+		if (i == 0 && rxq_ctrl->is_hairpin)
+			is_hairpin = true;
+		if (is_hairpin != rxq_ctrl->is_hairpin) {
 			*error = "combining hairpin and regular RSS queues is not supported";
 			*queue_idx = i;
 			return -ENOTSUP;
@@ -5885,15 +5885,13 @@ flow_create_split_metadata(struct rte_eth_dev *dev,
 			const struct rte_flow_action_queue *queue;
 
 			queue = qrss->conf;
-			if (mlx5_rxq_get_type(dev, queue->index) ==
-			    MLX5_RXQ_TYPE_HAIRPIN)
+			if (mlx5_rxq_is_hairpin(dev, queue->index))
 				qrss = NULL;
 		} else if (qrss->type == RTE_FLOW_ACTION_TYPE_RSS) {
 			const struct rte_flow_action_rss *rss;
 
 			rss = qrss->conf;
-			if (mlx5_rxq_get_type(dev, rss->queue[0]) ==
-			    MLX5_RXQ_TYPE_HAIRPIN)
+			if (mlx5_rxq_is_hairpin(dev, rss->queue[0]))
 				qrss = NULL;
 		}
 	}
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 7a012f7bb9..313dc64604 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -5771,8 +5771,7 @@ flow_dv_validate_action_sample(uint64_t *action_flags,
 	}
 	/* Continue validation for Xcap actions.*/
 	if ((sub_action_flags & MLX5_FLOW_XCAP_ACTIONS) &&
-	    (queue_index == 0xFFFF ||
-	     mlx5_rxq_get_type(dev, queue_index) != MLX5_RXQ_TYPE_HAIRPIN)) {
+	    (queue_index == 0xFFFF || !mlx5_rxq_is_hairpin(dev, queue_index))) {
 		if ((sub_action_flags & MLX5_FLOW_XCAP_ACTIONS) ==
 		     MLX5_FLOW_XCAP_ACTIONS)
 			return rte_flow_error_set(error, ENOTSUP,
@@ -7957,8 +7956,7 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
 	 */
 	if ((action_flags & (MLX5_FLOW_XCAP_ACTIONS |
 			     MLX5_FLOW_VLAN_ACTIONS)) &&
-	    (queue_index == 0xFFFF ||
-	     mlx5_rxq_get_type(dev, queue_index) != MLX5_RXQ_TYPE_HAIRPIN ||
+	    (queue_index == 0xFFFF || !mlx5_rxq_is_hairpin(dev, queue_index) ||
 	     ((conf = mlx5_rxq_get_hairpin_conf(dev, queue_index)) != NULL &&
 	     conf->tx_explicit != 0))) {
 		if ((action_flags & MLX5_FLOW_XCAP_ACTIONS) ==
@@ -10948,10 +10946,8 @@ flow_dv_translate_item_tx_queue(struct rte_eth_dev *dev,
 {
 	const struct mlx5_rte_flow_item_tx_queue *queue_m;
 	const struct mlx5_rte_flow_item_tx_queue *queue_v;
-	void *misc_m =
-		MLX5_ADDR_OF(fte_match_param, matcher, misc_parameters);
-	void *misc_v =
-		MLX5_ADDR_OF(fte_match_param, key, misc_parameters);
+	void *misc_m = MLX5_ADDR_OF(fte_match_param, matcher, misc_parameters);
+	void *misc_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters);
 	struct mlx5_txq_ctrl *txq;
 	uint32_t queue, mask;
 
@@ -10962,7 +10958,7 @@ flow_dv_translate_item_tx_queue(struct rte_eth_dev *dev,
 	txq = mlx5_txq_get(dev, queue_v->queue);
 	if (!txq)
 		return;
-	if (txq->type == MLX5_TXQ_TYPE_HAIRPIN)
+	if (txq->is_hairpin)
 		queue = txq->obj->sq->id;
 	else
 		queue = txq->obj->sq_obj.sq->id;
diff --git a/drivers/net/mlx5/mlx5_rx.h b/drivers/net/mlx5/mlx5_rx.h
index 295dba063b..fbc86dcef2 100644
--- a/drivers/net/mlx5/mlx5_rx.h
+++ b/drivers/net/mlx5/mlx5_rx.h
@@ -141,12 +141,6 @@ struct mlx5_rxq_data {
 	/* Buffer split segment descriptions - sizes, offsets, pools. */
 } __rte_cache_aligned;
 
-enum mlx5_rxq_type {
-	MLX5_RXQ_TYPE_STANDARD, /* Standard Rx queue. */
-	MLX5_RXQ_TYPE_HAIRPIN, /* Hairpin Rx queue. */
-	MLX5_RXQ_TYPE_UNDEFINED,
-};
-
 /* RX queue control descriptor. */
 struct mlx5_rxq_ctrl {
 	struct mlx5_rxq_data rxq; /* Data path structure. */
@@ -154,7 +148,7 @@ struct mlx5_rxq_ctrl {
 	LIST_HEAD(priv, mlx5_rxq_priv) owners; /* Owner rxq list. */
 	struct mlx5_rxq_obj *obj; /* Verbs/DevX elements. */
 	struct mlx5_dev_ctx_shared *sh; /* Shared context. */
-	enum mlx5_rxq_type type; /* Rxq type. */
+	bool is_hairpin; /* Whether RxQ type is Hairpin. */
 	unsigned int socket; /* CPU socket ID for allocations. */
 	LIST_ENTRY(mlx5_rxq_ctrl) share_entry; /* Entry in shared RXQ list. */
 	uint32_t share_group; /* Group ID of shared RXQ. */
@@ -258,7 +252,7 @@ struct mlx5_hrxq *mlx5_hrxq_get(struct rte_eth_dev *dev,
 int mlx5_hrxq_obj_release(struct rte_eth_dev *dev, struct mlx5_hrxq *hrxq);
 int mlx5_hrxq_release(struct rte_eth_dev *dev, uint32_t hxrq_idx);
 uint32_t mlx5_hrxq_verify(struct rte_eth_dev *dev);
-enum mlx5_rxq_type mlx5_rxq_get_type(struct rte_eth_dev *dev, uint16_t idx);
+bool mlx5_rxq_is_hairpin(struct rte_eth_dev *dev, uint16_t idx);
 const struct rte_eth_hairpin_conf *mlx5_rxq_get_hairpin_conf
 	(struct rte_eth_dev *dev, uint16_t idx);
 struct mlx5_hrxq *mlx5_drop_action_create(struct rte_eth_dev *dev);
@@ -632,8 +626,7 @@ mlx5_mprq_enabled(struct rte_eth_dev *dev)
 	for (i = 0; i < priv->rxqs_n; ++i) {
 		struct mlx5_rxq_ctrl *rxq_ctrl = mlx5_rxq_ctrl_get(dev, i);
 
-		if (rxq_ctrl == NULL ||
-		    rxq_ctrl->type != MLX5_RXQ_TYPE_STANDARD)
+		if (rxq_ctrl == NULL || rxq_ctrl->is_hairpin)
 			continue;
 		n_ibv++;
 		if (mlx5_rxq_mprq_enabled(&rxq_ctrl->rxq))
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index e7284f9da9..e96584d55d 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -1391,8 +1391,7 @@ mlx5_mprq_alloc_mp(struct rte_eth_dev *dev)
 		struct mlx5_rxq_ctrl *rxq_ctrl = mlx5_rxq_ctrl_get(dev, i);
 		struct mlx5_rxq_data *rxq;
 
-		if (rxq_ctrl == NULL ||
-		    rxq_ctrl->type != MLX5_RXQ_TYPE_STANDARD)
+		if (rxq_ctrl == NULL || rxq_ctrl->is_hairpin)
 			continue;
 		rxq = &rxq_ctrl->rxq;
 		n_ibv++;
@@ -1480,8 +1479,7 @@ mlx5_mprq_alloc_mp(struct rte_eth_dev *dev)
 	for (i = 0; i != priv->rxqs_n; ++i) {
 		struct mlx5_rxq_ctrl *rxq_ctrl = mlx5_rxq_ctrl_get(dev, i);
 
-		if (rxq_ctrl == NULL ||
-		    rxq_ctrl->type != MLX5_RXQ_TYPE_STANDARD)
+		if (rxq_ctrl == NULL || rxq_ctrl->is_hairpin)
 			continue;
 		rxq_ctrl->rxq.mprq_mp = mp;
 	}
@@ -1798,7 +1796,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 		rte_errno = ENOSPC;
 		goto error;
 	}
-	tmpl->type = MLX5_RXQ_TYPE_STANDARD;
+	tmpl->is_hairpin = false;
 	if (mlx5_mr_ctrl_init(&tmpl->rxq.mr_ctrl,
 			      &priv->sh->cdev->mr_scache.dev_gen, socket)) {
 		/* rte_errno is already set. */
@@ -1969,7 +1967,7 @@ mlx5_rxq_hairpin_new(struct rte_eth_dev *dev, struct mlx5_rxq_priv *rxq,
 	LIST_INIT(&tmpl->owners);
 	rxq->ctrl = tmpl;
 	LIST_INSERT_HEAD(&tmpl->owners, rxq, owner_entry);
-	tmpl->type = MLX5_RXQ_TYPE_HAIRPIN;
+	tmpl->is_hairpin = true;
 	tmpl->socket = SOCKET_ID_ANY;
 	tmpl->rxq.rss_hash = 0;
 	tmpl->rxq.port_id = dev->data->port_id;
@@ -2120,7 +2118,7 @@ mlx5_rxq_release(struct rte_eth_dev *dev, uint16_t idx)
 			mlx5_free(rxq_ctrl->obj);
 			rxq_ctrl->obj = NULL;
 		}
-		if (rxq_ctrl->type == MLX5_RXQ_TYPE_STANDARD) {
+		if (!rxq_ctrl->is_hairpin) {
 			if (!rxq_ctrl->started)
 				rxq_free_elts(rxq_ctrl);
 			dev->data->rx_queue_state[idx] =
@@ -2129,7 +2127,7 @@ mlx5_rxq_release(struct rte_eth_dev *dev, uint16_t idx)
 	} else { /* Refcnt zero, closing device. */
 		LIST_REMOVE(rxq, owner_entry);
 		if (LIST_EMPTY(&rxq_ctrl->owners)) {
-			if (rxq_ctrl->type == MLX5_RXQ_TYPE_STANDARD)
+			if (!rxq_ctrl->is_hairpin)
 				mlx5_mr_btree_free
 					(&rxq_ctrl->rxq.mr_ctrl.cache_bh);
 			if (rxq_ctrl->rxq.shared)
@@ -2169,7 +2167,7 @@ mlx5_rxq_verify(struct rte_eth_dev *dev)
 }
 
 /**
- * Get a Rx queue type.
+ * Check whether RxQ type is Hairpin.
  *
  * @param dev
  *   Pointer to Ethernet device.
@@ -2177,17 +2175,15 @@ mlx5_rxq_verify(struct rte_eth_dev *dev)
  *   Rx queue index.
  *
  * @return
- *   The Rx queue type.
+ *   True if Rx queue type is Hairpin, otherwise False.
  */
-enum mlx5_rxq_type
-mlx5_rxq_get_type(struct rte_eth_dev *dev, uint16_t idx)
+bool
+mlx5_rxq_is_hairpin(struct rte_eth_dev *dev, uint16_t idx)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
 	struct mlx5_rxq_ctrl *rxq_ctrl = mlx5_rxq_ctrl_get(dev, idx);
 
-	if (idx < priv->rxqs_n && rxq_ctrl != NULL)
-		return rxq_ctrl->type;
-	return MLX5_RXQ_TYPE_UNDEFINED;
+	return (idx < priv->rxqs_n && rxq_ctrl != NULL && rxq_ctrl->is_hairpin);
 }
 
 /*
@@ -2204,14 +2200,9 @@ mlx5_rxq_get_type(struct rte_eth_dev *dev, uint16_t idx)
 const struct rte_eth_hairpin_conf *
 mlx5_rxq_get_hairpin_conf(struct rte_eth_dev *dev, uint16_t idx)
 {
-	struct mlx5_priv *priv = dev->data->dev_private;
 	struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, idx);
 
-	if (idx < priv->rxqs_n && rxq != NULL) {
-		if (rxq->ctrl->type == MLX5_RXQ_TYPE_HAIRPIN)
-			return &rxq->hairpin_conf;
-	}
-	return NULL;
+	return mlx5_rxq_is_hairpin(dev, idx) ? &rxq->hairpin_conf : NULL;
 }
 
 /**
diff --git a/drivers/net/mlx5/mlx5_trigger.c b/drivers/net/mlx5/mlx5_trigger.c
index 74c3bc8a13..fe8b42c414 100644
--- a/drivers/net/mlx5/mlx5_trigger.c
+++ b/drivers/net/mlx5/mlx5_trigger.c
@@ -59,7 +59,7 @@ mlx5_txq_start(struct rte_eth_dev *dev)
 
 		if (!txq_ctrl)
 			continue;
-		if (txq_ctrl->type == MLX5_TXQ_TYPE_STANDARD)
+		if (!txq_ctrl->is_hairpin)
 			txq_alloc_elts(txq_ctrl);
 		MLX5_ASSERT(!txq_ctrl->obj);
 		txq_ctrl->obj = mlx5_malloc(flags, sizeof(struct mlx5_txq_obj),
@@ -77,7 +77,7 @@ mlx5_txq_start(struct rte_eth_dev *dev)
 			txq_ctrl->obj = NULL;
 			goto error;
 		}
-		if (txq_ctrl->type == MLX5_TXQ_TYPE_STANDARD) {
+		if (!txq_ctrl->is_hairpin) {
 			size_t size = txq_data->cqe_s * sizeof(*txq_data->fcqs);
 
 			txq_data->fcqs = mlx5_malloc(flags, size,
@@ -167,7 +167,7 @@ mlx5_rxq_ctrl_prepare(struct rte_eth_dev *dev, struct mlx5_rxq_ctrl *rxq_ctrl,
 {
 	int ret = 0;
 
-	if (rxq_ctrl->type == MLX5_RXQ_TYPE_STANDARD) {
+	if (!rxq_ctrl->is_hairpin) {
 		/*
 		 * Pre-register the mempools. Regardless of whether
 		 * the implicit registration is enabled or not,
@@ -280,7 +280,7 @@ mlx5_hairpin_auto_bind(struct rte_eth_dev *dev)
 		txq_ctrl = mlx5_txq_get(dev, i);
 		if (!txq_ctrl)
 			continue;
-		if (txq_ctrl->type != MLX5_TXQ_TYPE_HAIRPIN ||
+		if (!txq_ctrl->is_hairpin ||
 		    txq_ctrl->hairpin_conf.peers[0].port != self_port) {
 			mlx5_txq_release(dev, i);
 			continue;
@@ -299,7 +299,7 @@ mlx5_hairpin_auto_bind(struct rte_eth_dev *dev)
 		if (!txq_ctrl)
 			continue;
 		/* Skip hairpin queues with other peer ports. */
-		if (txq_ctrl->type != MLX5_TXQ_TYPE_HAIRPIN ||
+		if (!txq_ctrl->is_hairpin ||
 		    txq_ctrl->hairpin_conf.peers[0].port != self_port) {
 			mlx5_txq_release(dev, i);
 			continue;
@@ -322,7 +322,7 @@ mlx5_hairpin_auto_bind(struct rte_eth_dev *dev)
 			return -rte_errno;
 		}
 		rxq_ctrl = rxq->ctrl;
-		if (rxq_ctrl->type != MLX5_RXQ_TYPE_HAIRPIN ||
+		if (!rxq_ctrl->is_hairpin ||
 		    rxq->hairpin_conf.peers[0].queue != i) {
 			rte_errno = ENOMEM;
 			DRV_LOG(ERR, "port %u Tx queue %d can't be binded to "
@@ -412,7 +412,7 @@ mlx5_hairpin_queue_peer_update(struct rte_eth_dev *dev, uint16_t peer_queue,
 				dev->data->port_id, peer_queue);
 			return -rte_errno;
 		}
-		if (txq_ctrl->type != MLX5_TXQ_TYPE_HAIRPIN) {
+		if (!txq_ctrl->is_hairpin) {
 			rte_errno = EINVAL;
 			DRV_LOG(ERR, "port %u queue %d is not a hairpin Txq",
 				dev->data->port_id, peer_queue);
@@ -444,7 +444,7 @@ mlx5_hairpin_queue_peer_update(struct rte_eth_dev *dev, uint16_t peer_queue,
 			return -rte_errno;
 		}
 		rxq_ctrl = rxq->ctrl;
-		if (rxq_ctrl->type != MLX5_RXQ_TYPE_HAIRPIN) {
+		if (!rxq_ctrl->is_hairpin) {
 			rte_errno = EINVAL;
 			DRV_LOG(ERR, "port %u queue %d is not a hairpin Rxq",
 				dev->data->port_id, peer_queue);
@@ -510,7 +510,7 @@ mlx5_hairpin_queue_peer_bind(struct rte_eth_dev *dev, uint16_t cur_queue,
 				dev->data->port_id, cur_queue);
 			return -rte_errno;
 		}
-		if (txq_ctrl->type != MLX5_TXQ_TYPE_HAIRPIN) {
+		if (!txq_ctrl->is_hairpin) {
 			rte_errno = EINVAL;
 			DRV_LOG(ERR, "port %u queue %d not a hairpin Txq",
 				dev->data->port_id, cur_queue);
@@ -570,7 +570,7 @@ mlx5_hairpin_queue_peer_bind(struct rte_eth_dev *dev, uint16_t cur_queue,
 			return -rte_errno;
 		}
 		rxq_ctrl = rxq->ctrl;
-		if (rxq_ctrl->type != MLX5_RXQ_TYPE_HAIRPIN) {
+		if (!rxq_ctrl->is_hairpin) {
 			rte_errno = EINVAL;
 			DRV_LOG(ERR, "port %u queue %d not a hairpin Rxq",
 				dev->data->port_id, cur_queue);
@@ -644,7 +644,7 @@ mlx5_hairpin_queue_peer_unbind(struct rte_eth_dev *dev, uint16_t cur_queue,
 				dev->data->port_id, cur_queue);
 			return -rte_errno;
 		}
-		if (txq_ctrl->type != MLX5_TXQ_TYPE_HAIRPIN) {
+		if (!txq_ctrl->is_hairpin) {
 			rte_errno = EINVAL;
 			DRV_LOG(ERR, "port %u queue %d not a hairpin Txq",
 				dev->data->port_id, cur_queue);
@@ -683,7 +683,7 @@ mlx5_hairpin_queue_peer_unbind(struct rte_eth_dev *dev, uint16_t cur_queue,
 			return -rte_errno;
 		}
 		rxq_ctrl = rxq->ctrl;
-		if (rxq_ctrl->type != MLX5_RXQ_TYPE_HAIRPIN) {
+		if (!rxq_ctrl->is_hairpin) {
 			rte_errno = EINVAL;
 			DRV_LOG(ERR, "port %u queue %d not a hairpin Rxq",
 				dev->data->port_id, cur_queue);
@@ -751,7 +751,7 @@ mlx5_hairpin_bind_single_port(struct rte_eth_dev *dev, uint16_t rx_port)
 		txq_ctrl = mlx5_txq_get(dev, i);
 		if (txq_ctrl == NULL)
 			continue;
-		if (txq_ctrl->type != MLX5_TXQ_TYPE_HAIRPIN) {
+		if (!txq_ctrl->is_hairpin) {
 			mlx5_txq_release(dev, i);
 			continue;
 		}
@@ -791,7 +791,7 @@ mlx5_hairpin_bind_single_port(struct rte_eth_dev *dev, uint16_t rx_port)
 		txq_ctrl = mlx5_txq_get(dev, i);
 		if (txq_ctrl == NULL)
 			continue;
-		if (txq_ctrl->type != MLX5_TXQ_TYPE_HAIRPIN) {
+		if (!txq_ctrl->is_hairpin) {
 			mlx5_txq_release(dev, i);
 			continue;
 		}
@@ -886,7 +886,7 @@ mlx5_hairpin_unbind_single_port(struct rte_eth_dev *dev, uint16_t rx_port)
 		txq_ctrl = mlx5_txq_get(dev, i);
 		if (txq_ctrl == NULL)
 			continue;
-		if (txq_ctrl->type != MLX5_TXQ_TYPE_HAIRPIN) {
+		if (!txq_ctrl->is_hairpin) {
 			mlx5_txq_release(dev, i);
 			continue;
 		}
@@ -1016,7 +1016,7 @@ mlx5_hairpin_get_peer_ports(struct rte_eth_dev *dev, uint16_t *peer_ports,
 			txq_ctrl = mlx5_txq_get(dev, i);
 			if (!txq_ctrl)
 				continue;
-			if (txq_ctrl->type != MLX5_TXQ_TYPE_HAIRPIN) {
+			if (!txq_ctrl->is_hairpin) {
 				mlx5_txq_release(dev, i);
 				continue;
 			}
@@ -1040,7 +1040,7 @@ mlx5_hairpin_get_peer_ports(struct rte_eth_dev *dev, uint16_t *peer_ports,
 			if (rxq == NULL)
 				continue;
 			rxq_ctrl = rxq->ctrl;
-			if (rxq_ctrl->type != MLX5_RXQ_TYPE_HAIRPIN)
+			if (!rxq_ctrl->is_hairpin)
 				continue;
 			pp = rxq->hairpin_conf.peers[0].port;
 			if (pp >= RTE_MAX_ETHPORTS) {
@@ -1318,7 +1318,7 @@ mlx5_traffic_enable(struct rte_eth_dev *dev)
 		if (!txq_ctrl)
 			continue;
 		/* Only Tx implicit mode requires the default Tx flow. */
-		if (txq_ctrl->type == MLX5_TXQ_TYPE_HAIRPIN &&
+		if (txq_ctrl->is_hairpin &&
 		    txq_ctrl->hairpin_conf.tx_explicit == 0 &&
 		    txq_ctrl->hairpin_conf.peers[0].port ==
 		    priv->dev_data->port_id) {
diff --git a/drivers/net/mlx5/mlx5_tx.h b/drivers/net/mlx5/mlx5_tx.h
index 0adc3f4839..89dac0c65a 100644
--- a/drivers/net/mlx5/mlx5_tx.h
+++ b/drivers/net/mlx5/mlx5_tx.h
@@ -169,17 +169,12 @@ struct mlx5_txq_data {
 	/* Storage for queued packets, must be the last field. */
 } __rte_cache_aligned;
 
-enum mlx5_txq_type {
-	MLX5_TXQ_TYPE_STANDARD, /* Standard Tx queue. */
-	MLX5_TXQ_TYPE_HAIRPIN, /* Hairpin Tx queue. */
-};
-
 /* TX queue control descriptor. */
 struct mlx5_txq_ctrl {
 	LIST_ENTRY(mlx5_txq_ctrl) next; /* Pointer to the next element. */
 	uint32_t refcnt; /* Reference counter. */
 	unsigned int socket; /* CPU socket ID for allocations. */
-	enum mlx5_txq_type type; /* The txq ctrl type. */
+	bool is_hairpin; /* Whether TxQ type is Hairpin. */
 	unsigned int max_inline_data; /* Max inline data. */
 	unsigned int max_tso_header; /* Max TSO header size. */
 	struct mlx5_txq_obj *obj; /* Verbs/DevX queue object. */
diff --git a/drivers/net/mlx5/mlx5_txq.c b/drivers/net/mlx5/mlx5_txq.c
index f128c3d1a5..0140f8b3b2 100644
--- a/drivers/net/mlx5/mlx5_txq.c
+++ b/drivers/net/mlx5/mlx5_txq.c
@@ -527,7 +527,7 @@ txq_uar_init_secondary(struct mlx5_txq_ctrl *txq_ctrl, int fd)
 		return -rte_errno;
 	}
 
-	if (txq_ctrl->type != MLX5_TXQ_TYPE_STANDARD)
+	if (txq_ctrl->is_hairpin)
 		return 0;
 	MLX5_ASSERT(ppriv);
 	/*
@@ -570,7 +570,7 @@ txq_uar_uninit_secondary(struct mlx5_txq_ctrl *txq_ctrl)
 		rte_errno = ENOMEM;
 	}
 
-	if (txq_ctrl->type != MLX5_TXQ_TYPE_STANDARD)
+	if (txq_ctrl->is_hairpin)
 		return;
 	addr = ppriv->uar_table[txq_ctrl->txq.idx].db;
 	rte_mem_unmap(RTE_PTR_ALIGN_FLOOR(addr, page_size), page_size);
@@ -631,7 +631,7 @@ mlx5_tx_uar_init_secondary(struct rte_eth_dev *dev, int fd)
 			continue;
 		txq = (*priv->txqs)[i];
 		txq_ctrl = container_of(txq, struct mlx5_txq_ctrl, txq);
-		if (txq_ctrl->type != MLX5_TXQ_TYPE_STANDARD)
+		if (txq_ctrl->is_hairpin)
 			continue;
 		MLX5_ASSERT(txq->idx == (uint16_t)i);
 		ret = txq_uar_init_secondary(txq_ctrl, fd);
@@ -1107,7 +1107,7 @@ mlx5_txq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 		goto error;
 	}
 	__atomic_fetch_add(&tmpl->refcnt, 1, __ATOMIC_RELAXED);
-	tmpl->type = MLX5_TXQ_TYPE_STANDARD;
+	tmpl->is_hairpin = false;
 	LIST_INSERT_HEAD(&priv->txqsctrl, tmpl, next);
 	return tmpl;
 error:
@@ -1150,7 +1150,7 @@ mlx5_txq_hairpin_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 	tmpl->txq.port_id = dev->data->port_id;
 	tmpl->txq.idx = idx;
 	tmpl->hairpin_conf = *hairpin_conf;
-	tmpl->type = MLX5_TXQ_TYPE_HAIRPIN;
+	tmpl->is_hairpin = true;
 	__atomic_fetch_add(&tmpl->refcnt, 1, __ATOMIC_RELAXED);
 	LIST_INSERT_HEAD(&priv->txqsctrl, tmpl, next);
 	return tmpl;
@@ -1209,7 +1209,7 @@ mlx5_txq_release(struct rte_eth_dev *dev, uint16_t idx)
 		mlx5_free(txq_ctrl->obj);
 		txq_ctrl->obj = NULL;
 	}
-	if (txq_ctrl->type == MLX5_TXQ_TYPE_STANDARD) {
+	if (!txq_ctrl->is_hairpin) {
 		if (txq_ctrl->txq.fcqs) {
 			mlx5_free(txq_ctrl->txq.fcqs);
 			txq_ctrl->txq.fcqs = NULL;
@@ -1218,7 +1218,7 @@ mlx5_txq_release(struct rte_eth_dev *dev, uint16_t idx)
 		dev->data->tx_queue_state[idx] = RTE_ETH_QUEUE_STATE_STOPPED;
 	}
 	if (!__atomic_load_n(&txq_ctrl->refcnt, __ATOMIC_RELAXED)) {
-		if (txq_ctrl->type == MLX5_TXQ_TYPE_STANDARD)
+		if (!txq_ctrl->is_hairpin)
 			mlx5_mr_btree_free(&txq_ctrl->txq.mr_ctrl.cache_bh);
 		LIST_REMOVE(txq_ctrl, next);
 		mlx5_free(txq_ctrl);
-- 
2.25.1


^ permalink raw reply	[flat|nested] 26+ messages in thread

* [PATCH v3 5/6] net/mlx5: add external RxQ mapping API
  2022-02-24 23:25   ` [PATCH v3 " Michael Baum
                       ` (3 preceding siblings ...)
  2022-02-24 23:25     ` [PATCH v3 4/6] net/mlx5: optimize RxQ/TxQ control structure Michael Baum
@ 2022-02-24 23:25     ` Michael Baum
  2022-02-24 23:25     ` [PATCH v3 6/6] net/mlx5: support queue/RSS action for external RxQ Michael Baum
  2022-02-25 17:39     ` [PATCH v3 0/6] mlx5: external RxQ support Thomas Monjalon
  6 siblings, 0 replies; 26+ messages in thread
From: Michael Baum @ 2022-02-24 23:25 UTC (permalink / raw)
  To: dev; +Cc: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko

An external queue is a queue that has been created and is managed
outside the PMD. The queue's owner might use the PMD to generate flow
rules for these external queues.

When a queue is created in hardware, it is given a 32-bit ID. In
contrast, queue indexes in the PMD are represented by 16 bits. To let
the PMD generate flow rules for such a queue, its owner must provide a
mapping between the HW index and a 16-bit index corresponding to the
RTE Flow API.

This patch adds an API for inserting/removing a mapping between a HW
queue ID and an RTE Flow queue ID.
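
The map/unmap semantics introduced by this patch can be sketched as a
standalone model. This is a hypothetical, self-contained illustration
(names such as `ext_rxq_map`/`ext_rxq_unmap` are made up for the sketch;
the real entry points are `rte_pmd_mlx5_external_rx_queue_id_map()` and
`rte_pmd_mlx5_external_rx_queue_id_unmap()` added below, which use GCC
`__atomic` builtins rather than C11 atomics):

```c
#include <errno.h>
#include <stdatomic.h>
#include <stdint.h>

/* Mirrors MLX5_EXTERNAL_RX_QUEUE_ID_MIN: the last 1000 16-bit indexes. */
#define EXT_RXQ_ID_MIN (UINT16_MAX - 1000 + 1)
#define EXT_RXQ_NUM    (UINT16_MAX - EXT_RXQ_ID_MIN + 1)

struct ext_rxq {
	uint32_t hw_id;      /* 32-bit queue index in hardware. */
	atomic_uint refcnt;  /* 0 == slot is unmapped. */
};

static struct ext_rxq ext_rxqs[EXT_RXQ_NUM];

/* Map a 16-bit rte_flow index to a 32-bit HW index; 0 or -errno. */
static int
ext_rxq_map(uint16_t dpdk_idx, uint32_t hw_idx)
{
	unsigned int unmapped = 0;
	struct ext_rxq *q;

	if (dpdk_idx < EXT_RXQ_ID_MIN)
		return -EINVAL;
	q = &ext_rxqs[dpdk_idx - EXT_RXQ_ID_MIN];
	/* Atomically claim the slot only if it is currently unmapped. */
	if (!atomic_compare_exchange_strong(&q->refcnt, &unmapped, 1))
		return q->hw_id == hw_idx ? 0 : -EEXIST;
	q->hw_id = hw_idx;
	return 0;
}

/* Remove a mapping; fails if unmapped or still referenced by flows. */
static int
ext_rxq_unmap(uint16_t dpdk_idx)
{
	unsigned int mapped = 1;
	struct ext_rxq *q;

	if (dpdk_idx < EXT_RXQ_ID_MIN)
		return -EINVAL;
	q = &ext_rxqs[dpdk_idx - EXT_RXQ_ID_MIN];
	/* Only a refcnt of exactly 1 (mapped, unused) may drop to 0. */
	if (!atomic_compare_exchange_strong(&q->refcnt, &mapped, 0))
		return -EINVAL;
	return 0;
}
```

As in the patch, re-mapping an index to the same HW ID is tolerated with
a warning-level outcome, while mapping it to a different HW ID fails
with EEXIST.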

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
 drivers/net/mlx5/linux/mlx5_os.c |  17 +++++
 drivers/net/mlx5/mlx5.c          |   1 +
 drivers/net/mlx5/mlx5.h          |   1 +
 drivers/net/mlx5/mlx5_defs.h     |   3 +
 drivers/net/mlx5/mlx5_ethdev.c   |  16 ++++-
 drivers/net/mlx5/mlx5_rx.h       |   6 ++
 drivers/net/mlx5/mlx5_rxq.c      | 117 +++++++++++++++++++++++++++++++
 drivers/net/mlx5/rte_pmd_mlx5.h  |  50 ++++++++++++-
 drivers/net/mlx5/version.map     |   3 +
 9 files changed, 210 insertions(+), 4 deletions(-)

diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c
index 2e1606a733..a847ed13cc 100644
--- a/drivers/net/mlx5/linux/mlx5_os.c
+++ b/drivers/net/mlx5/linux/mlx5_os.c
@@ -1158,6 +1158,22 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
 		err = ENOMEM;
 		goto error;
 	}
+	/*
+	 * When user configures remote PD and CTX and device creates RxQ by
+	 * DevX, external RxQ is both supported and requested.
+	 */
+	if (mlx5_imported_pd_and_ctx(sh->cdev) && mlx5_devx_obj_ops_en(sh)) {
+		priv->ext_rxqs = mlx5_malloc(MLX5_MEM_ZERO | MLX5_MEM_RTE,
+					     sizeof(struct mlx5_external_rxq) *
+					     MLX5_MAX_EXT_RX_QUEUES, 0,
+					     SOCKET_ID_ANY);
+		if (priv->ext_rxqs == NULL) {
+			DRV_LOG(ERR, "Fail to allocate external RxQ array.");
+			err = ENOMEM;
+			goto error;
+		}
+		DRV_LOG(DEBUG, "External RxQ is supported.");
+	}
 	priv->sh = sh;
 	priv->dev_port = spawn->phys_port;
 	priv->pci_dev = spawn->pci_dev;
@@ -1617,6 +1633,7 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
 			mlx5_list_destroy(priv->hrxqs);
 		if (eth_dev && priv->flex_item_map)
 			mlx5_flex_item_port_cleanup(eth_dev);
+		mlx5_free(priv->ext_rxqs);
 		mlx5_free(priv);
 		if (eth_dev != NULL)
 			eth_dev->data->dev_private = NULL;
diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index 7611fdd62b..5ecca2dd1b 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -1930,6 +1930,7 @@ mlx5_dev_close(struct rte_eth_dev *dev)
 			dev->data->port_id);
 	if (priv->hrxqs)
 		mlx5_list_destroy(priv->hrxqs);
+	mlx5_free(priv->ext_rxqs);
 	/*
 	 * Free the shared context in last turn, because the cleanup
 	 * routines above may use some shared fields, like
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index bd69aa2334..0f825396a2 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -1461,6 +1461,7 @@ struct mlx5_priv {
 	/* RX/TX queues. */
 	unsigned int rxqs_n; /* RX queues array size. */
 	unsigned int txqs_n; /* TX queues array size. */
+	struct mlx5_external_rxq *ext_rxqs; /* External RX queues array. */
 	struct mlx5_rxq_priv *(*rxq_privs)[]; /* RX queue non-shared data. */
 	struct mlx5_txq_data *(*txqs)[]; /* TX queues. */
 	struct rte_mempool *mprq_mp; /* Mempool for Multi-Packet RQ. */
diff --git a/drivers/net/mlx5/mlx5_defs.h b/drivers/net/mlx5/mlx5_defs.h
index 2d48fde010..15728fb41f 100644
--- a/drivers/net/mlx5/mlx5_defs.h
+++ b/drivers/net/mlx5/mlx5_defs.h
@@ -175,6 +175,9 @@
 /* Maximum number of indirect actions supported by rte_flow */
 #define MLX5_MAX_INDIRECT_ACTIONS 3
 
+/* Maximum number of external Rx queues supported by rte_flow */
+#define MLX5_MAX_EXT_RX_QUEUES (UINT16_MAX - MLX5_EXTERNAL_RX_QUEUE_ID_MIN + 1)
+
 /*
  * Linux definition of static_assert is found in /usr/include/assert.h.
  * Windows does not require a redefinition.
diff --git a/drivers/net/mlx5/mlx5_ethdev.c b/drivers/net/mlx5/mlx5_ethdev.c
index 406761ccf8..de0ba2b1ff 100644
--- a/drivers/net/mlx5/mlx5_ethdev.c
+++ b/drivers/net/mlx5/mlx5_ethdev.c
@@ -27,6 +27,7 @@
 #include "mlx5_tx.h"
 #include "mlx5_autoconf.h"
 #include "mlx5_devx.h"
+#include "rte_pmd_mlx5.h"
 
 /**
  * Get the interface index from device name.
@@ -81,9 +82,10 @@ mlx5_dev_configure(struct rte_eth_dev *dev)
 		rte_errno = EINVAL;
 		return -rte_errno;
 	}
-	priv->rss_conf.rss_key =
-		mlx5_realloc(priv->rss_conf.rss_key, MLX5_MEM_RTE,
-			    MLX5_RSS_HASH_KEY_LEN, 0, SOCKET_ID_ANY);
+	priv->rss_conf.rss_key = mlx5_realloc(priv->rss_conf.rss_key,
+					      MLX5_MEM_RTE,
+					      MLX5_RSS_HASH_KEY_LEN, 0,
+					      SOCKET_ID_ANY);
 	if (!priv->rss_conf.rss_key) {
 		DRV_LOG(ERR, "port %u cannot allocate RSS hash key memory (%u)",
 			dev->data->port_id, rxqs_n);
@@ -127,6 +129,14 @@ mlx5_dev_configure(struct rte_eth_dev *dev)
 		rte_errno = EINVAL;
 		return -rte_errno;
 	}
+	if (priv->ext_rxqs && rxqs_n >= MLX5_EXTERNAL_RX_QUEUE_ID_MIN) {
+		DRV_LOG(ERR, "port %u cannot handle this many Rx queues (%u), "
+			"the maximal number of internal Rx queues is %u",
+			dev->data->port_id, rxqs_n,
+			MLX5_EXTERNAL_RX_QUEUE_ID_MIN - 1);
+		rte_errno = EINVAL;
+		return -rte_errno;
+	}
 	if (rxqs_n != priv->rxqs_n) {
 		DRV_LOG(INFO, "port %u Rx queues number update: %u -> %u",
 			dev->data->port_id, priv->rxqs_n, rxqs_n);
diff --git a/drivers/net/mlx5/mlx5_rx.h b/drivers/net/mlx5/mlx5_rx.h
index fbc86dcef2..aba05dffa7 100644
--- a/drivers/net/mlx5/mlx5_rx.h
+++ b/drivers/net/mlx5/mlx5_rx.h
@@ -175,6 +175,12 @@ struct mlx5_rxq_priv {
 	uint32_t hairpin_status; /* Hairpin binding status. */
 };
 
+/* External RX queue descriptor. */
+struct mlx5_external_rxq {
+	uint32_t hw_id; /* Queue index in the Hardware. */
+	uint32_t refcnt; /* Reference counter. */
+};
+
 /* mlx5_rxq.c */
 
 extern uint8_t rss_hash_default_key[];
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index e96584d55d..889428f48a 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -30,6 +30,7 @@
 #include "mlx5_utils.h"
 #include "mlx5_autoconf.h"
 #include "mlx5_devx.h"
+#include "rte_pmd_mlx5.h"
 
 
 /* Default RSS hash key also used for ConnectX-3. */
@@ -3008,3 +3009,119 @@ mlx5_rxq_timestamp_set(struct rte_eth_dev *dev)
 		data->rt_timestamp = sh->dev_cap.rt_timestamp;
 	}
 }
+
+/**
+ * Validate given external RxQ rte_flow index, and get pointer to concurrent
+ * external RxQ object to map/unmap.
+ *
+ * @param[in] port_id
+ *   The port identifier of the Ethernet device.
+ * @param[in] dpdk_idx
+ *   Queue index in rte_flow.
+ *
+ * @return
+ *   Pointer to concurrent external RxQ on success,
+ *   NULL otherwise and rte_errno is set.
+ */
+static struct mlx5_external_rxq *
+mlx5_external_rx_queue_get_validate(uint16_t port_id, uint16_t dpdk_idx)
+{
+	struct rte_eth_dev *dev;
+	struct mlx5_priv *priv;
+
+	if (dpdk_idx < MLX5_EXTERNAL_RX_QUEUE_ID_MIN) {
+		DRV_LOG(ERR, "Queue index %u should be in range: [%u, %u].",
+			dpdk_idx, MLX5_EXTERNAL_RX_QUEUE_ID_MIN, UINT16_MAX);
+		rte_errno = EINVAL;
+		return NULL;
+	}
+	if (rte_eth_dev_is_valid_port(port_id) < 0) {
+		DRV_LOG(ERR, "There is no Ethernet device for port %u.",
+			port_id);
+		rte_errno = ENODEV;
+		return NULL;
+	}
+	dev = &rte_eth_devices[port_id];
+	priv = dev->data->dev_private;
+	if (!mlx5_imported_pd_and_ctx(priv->sh->cdev)) {
+		DRV_LOG(ERR, "Port %u "
+			"external RxQ isn't supported on local PD and CTX.",
+			port_id);
+		rte_errno = ENOTSUP;
+		return NULL;
+	}
+	if (!mlx5_devx_obj_ops_en(priv->sh)) {
+		DRV_LOG(ERR,
+			"Port %u external RxQ isn't supported by Verbs API.",
+			port_id);
+		rte_errno = ENOTSUP;
+		return NULL;
+	}
+	/*
+	 * When user configures remote PD and CTX and device creates RxQ by
+	 * DevX, external RxQs array is allocated.
+	 */
+	MLX5_ASSERT(priv->ext_rxqs != NULL);
+	return &priv->ext_rxqs[dpdk_idx - MLX5_EXTERNAL_RX_QUEUE_ID_MIN];
+}
+
+int
+rte_pmd_mlx5_external_rx_queue_id_map(uint16_t port_id, uint16_t dpdk_idx,
+				      uint32_t hw_idx)
+{
+	struct mlx5_external_rxq *ext_rxq;
+	uint32_t unmapped = 0;
+
+	ext_rxq = mlx5_external_rx_queue_get_validate(port_id, dpdk_idx);
+	if (ext_rxq == NULL)
+		return -rte_errno;
+	if (!__atomic_compare_exchange_n(&ext_rxq->refcnt, &unmapped, 1, false,
+					 __ATOMIC_RELAXED, __ATOMIC_RELAXED)) {
+		if (ext_rxq->hw_id != hw_idx) {
+			DRV_LOG(ERR, "Port %u external RxQ index %u "
+				"is already mapped to HW index (requesting is "
+				"%u, existing is %u).",
+				port_id, dpdk_idx, hw_idx, ext_rxq->hw_id);
+			rte_errno = EEXIST;
+			return -rte_errno;
+		}
+		DRV_LOG(WARNING, "Port %u external RxQ index %u "
+			"is already mapped to the requested HW index (%u)",
+			port_id, dpdk_idx, hw_idx);
+
+	} else {
+		ext_rxq->hw_id = hw_idx;
+		DRV_LOG(DEBUG, "Port %u external RxQ index %u "
+			"is successfully mapped to the requested HW index (%u)",
+			port_id, dpdk_idx, hw_idx);
+	}
+	return 0;
+}
+
+int
+rte_pmd_mlx5_external_rx_queue_id_unmap(uint16_t port_id, uint16_t dpdk_idx)
+{
+	struct mlx5_external_rxq *ext_rxq;
+	uint32_t mapped = 1;
+
+	ext_rxq = mlx5_external_rx_queue_get_validate(port_id, dpdk_idx);
+	if (ext_rxq == NULL)
+		return -rte_errno;
+	if (ext_rxq->refcnt > 1) {
+		DRV_LOG(ERR, "Port %u external RxQ index %u still referenced.",
+			port_id, dpdk_idx);
+		rte_errno = EINVAL;
+		return -rte_errno;
+	}
+	if (!__atomic_compare_exchange_n(&ext_rxq->refcnt, &mapped, 0, false,
+					 __ATOMIC_RELAXED, __ATOMIC_RELAXED)) {
+		DRV_LOG(ERR, "Port %u external RxQ index %u doesn't exist.",
+			port_id, dpdk_idx);
+		rte_errno = EINVAL;
+		return -rte_errno;
+	}
+	DRV_LOG(DEBUG,
+		"Port %u external RxQ index %u is successfully unmapped.",
+		port_id, dpdk_idx);
+	return 0;
+}
diff --git a/drivers/net/mlx5/rte_pmd_mlx5.h b/drivers/net/mlx5/rte_pmd_mlx5.h
index fc37a386db..6e7907ee59 100644
--- a/drivers/net/mlx5/rte_pmd_mlx5.h
+++ b/drivers/net/mlx5/rte_pmd_mlx5.h
@@ -61,8 +61,56 @@ int rte_pmd_mlx5_get_dyn_flag_names(char *names[], unsigned int n);
 __rte_experimental
 int rte_pmd_mlx5_sync_flow(uint16_t port_id, uint32_t domains);
 
+/**
+ * External Rx queue rte_flow index minimal value.
+ */
+#define MLX5_EXTERNAL_RX_QUEUE_ID_MIN (UINT16_MAX - 1000 + 1)
+
+/**
+ * Update mapping between rte_flow queue index (16 bits) and HW queue index (32
+ * bits) for RxQs which are created outside the PMD.
+ *
+ * @param[in] port_id
+ *   The port identifier of the Ethernet device.
+ * @param[in] dpdk_idx
+ *   Queue index in rte_flow.
+ * @param[in] hw_idx
+ *   Queue index in hardware.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ *   Possible values for rte_errno:
+ *   - EEXIST - a mapping with the same rte_flow index already exists.
+ *   - EINVAL - invalid rte_flow index, out of range.
+ *   - ENODEV - there is no Ethernet device for this port id.
+ *   - ENOTSUP - the port doesn't support external RxQ.
+ */
+__rte_experimental
+int rte_pmd_mlx5_external_rx_queue_id_map(uint16_t port_id, uint16_t dpdk_idx,
+					  uint32_t hw_idx);
+
+/**
+ * Remove mapping between rte_flow queue index (16 bits) and HW queue index (32
+ * bits) for RxQs which are created outside the PMD.
+ *
+ * @param[in] port_id
+ *   The port identifier of the Ethernet device.
+ * @param[in] dpdk_idx
+ *   Queue index in rte_flow.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ *   Possible values for rte_errno:
+ *   - EINVAL - invalid index, out of range, still referenced or doesn't exist.
+ *   - ENODEV - there is no Ethernet device for this port id.
+ *   - ENOTSUP - the port doesn't support external RxQ.
+ */
+__rte_experimental
+int rte_pmd_mlx5_external_rx_queue_id_unmap(uint16_t port_id,
+					    uint16_t dpdk_idx);
+
 #ifdef __cplusplus
 }
 #endif
 
-#endif
+#endif /* RTE_PMD_PRIVATE_MLX5_H_ */
diff --git a/drivers/net/mlx5/version.map b/drivers/net/mlx5/version.map
index 0af7a12488..79cb79acc6 100644
--- a/drivers/net/mlx5/version.map
+++ b/drivers/net/mlx5/version.map
@@ -9,4 +9,7 @@ EXPERIMENTAL {
 	rte_pmd_mlx5_get_dyn_flag_names;
 	# added in 20.11
 	rte_pmd_mlx5_sync_flow;
+	# added in 22.03
+	rte_pmd_mlx5_external_rx_queue_id_map;
+	rte_pmd_mlx5_external_rx_queue_id_unmap;
 };
-- 
2.25.1


^ permalink raw reply	[flat|nested] 26+ messages in thread

* [PATCH v3 6/6] net/mlx5: support queue/RSS action for external RxQ
  2022-02-24 23:25   ` [PATCH v3 " Michael Baum
                       ` (4 preceding siblings ...)
  2022-02-24 23:25     ` [PATCH v3 5/6] net/mlx5: add external RxQ mapping API Michael Baum
@ 2022-02-24 23:25     ` Michael Baum
  2022-02-25 17:39     ` [PATCH v3 0/6] mlx5: external RxQ support Thomas Monjalon
  6 siblings, 0 replies; 26+ messages in thread
From: Michael Baum @ 2022-02-24 23:25 UTC (permalink / raw)
  To: dev; +Cc: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko

Add support for the queue/RSS action on external RxQs.
When the indirection table is created, the queue index is taken from
the mapping array.

This feature supports neither LRO nor Hairpin.
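
The entry-selection rule used when populating the indirection table can
be sketched in isolation. This is a hypothetical, simplified model
(`rqt_entry` and the two struct types are invented for the sketch; the
real logic lives in `mlx5_devx_ind_table_create_rqt_attr()` below):

```c
#include <stdbool.h>
#include <stdint.h>

/* Mirrors MLX5_EXTERNAL_RX_QUEUE_ID_MIN. */
#define EXT_RXQ_ID_MIN (UINT16_MAX - 1000 + 1)

/* Hypothetical, simplified queue records. */
struct ext_rxq { uint32_t hw_id; uint32_t refcnt; };
struct int_rxq { bool is_hairpin; uint32_t hairpin_rq_id; uint32_t devx_rq_id; };

/* Resolve one rte_flow queue index to the 32-bit RQ id placed in the RQT. */
static uint32_t
rqt_entry(uint16_t idx, const struct ext_rxq *ext_rxqs,
	  const struct int_rxq *int_rxqs)
{
	if (idx >= EXT_RXQ_ID_MIN) {
		/* External queue: the owner already supplied the HW id. */
		return ext_rxqs[idx - EXT_RXQ_ID_MIN].hw_id;
	}
	/* Internal queue: pick the hairpin RQ id or the DevX RQ id. */
	const struct int_rxq *q = &int_rxqs[idx];
	return q->is_hairpin ? q->hairpin_rq_id : q->devx_rq_id;
}
```

In other words, external queues bypass the PMD's queue objects entirely:
their 32-bit HW index goes straight into the RQT, which is why LRO and
hairpin attributes cannot be derived for them.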

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
 doc/guides/nics/mlx5.rst               |   1 +
 doc/guides/rel_notes/release_22_03.rst |   1 +
 drivers/net/mlx5/mlx5.c                |   4 +
 drivers/net/mlx5/mlx5_devx.c           |  30 +++++--
 drivers/net/mlx5/mlx5_flow.c           |  29 +++++--
 drivers/net/mlx5/mlx5_rx.h             |  30 +++++++
 drivers/net/mlx5/mlx5_rxq.c            | 116 +++++++++++++++++++++++--
 7 files changed, 187 insertions(+), 24 deletions(-)

diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index 7b04e9bac5..a5b3298f0c 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -38,6 +38,7 @@ Features
 - Multiple TX and RX queues.
 - Shared Rx queue.
 - Rx queue delay drop.
+- Support steering for external Rx queue created outside the PMD.
 - Support for scattered TX frames.
 - Advanced support for scattered Rx frames with tunable buffer attributes.
 - IPv4, IPv6, TCPv4, TCPv6, UDPv4 and UDPv6 RSS on any number of queues.
diff --git a/doc/guides/rel_notes/release_22_03.rst b/doc/guides/rel_notes/release_22_03.rst
index e66548558c..a29e96c37c 100644
--- a/doc/guides/rel_notes/release_22_03.rst
+++ b/doc/guides/rel_notes/release_22_03.rst
@@ -164,6 +164,7 @@ New Features
 
   * Support ConnectX-7 capability to schedule traffic sending on timestamp
   * Added WQE based hardware steering support with ``rte_flow_async`` API.
+  * Support steering for external Rx queue created outside the PMD.
 
 * **Updated Wangxun ngbe driver.**
 
diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index 5ecca2dd1b..74841caaf9 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -1912,6 +1912,10 @@ mlx5_dev_close(struct rte_eth_dev *dev)
 	if (ret)
 		DRV_LOG(WARNING, "port %u some Rx queue objects still remain",
 			dev->data->port_id);
+	ret = mlx5_ext_rxq_verify(dev);
+	if (ret)
+		DRV_LOG(WARNING, "Port %u some external RxQ still remain.",
+			dev->data->port_id);
 	ret = mlx5_rxq_verify(dev);
 	if (ret)
 		DRV_LOG(WARNING, "port %u some Rx queues still remain",
diff --git a/drivers/net/mlx5/mlx5_devx.c b/drivers/net/mlx5/mlx5_devx.c
index bcd2358165..af106bda50 100644
--- a/drivers/net/mlx5/mlx5_devx.c
+++ b/drivers/net/mlx5/mlx5_devx.c
@@ -580,13 +580,21 @@ mlx5_devx_ind_table_create_rqt_attr(struct rte_eth_dev *dev,
 		return rqt_attr;
 	}
 	for (i = 0; i != queues_n; ++i) {
-		struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, queues[i]);
+		if (mlx5_is_external_rxq(dev, queues[i])) {
+			struct mlx5_external_rxq *ext_rxq =
+					mlx5_ext_rxq_get(dev, queues[i]);
 
-		MLX5_ASSERT(rxq != NULL);
-		if (rxq->ctrl->is_hairpin)
-			rqt_attr->rq_list[i] = rxq->ctrl->obj->rq->id;
-		else
-			rqt_attr->rq_list[i] = rxq->devx_rq.rq->id;
+			rqt_attr->rq_list[i] = ext_rxq->hw_id;
+		} else {
+			struct mlx5_rxq_priv *rxq =
+					mlx5_rxq_get(dev, queues[i]);
+
+			MLX5_ASSERT(rxq != NULL);
+			if (rxq->ctrl->is_hairpin)
+				rqt_attr->rq_list[i] = rxq->ctrl->obj->rq->id;
+			else
+				rqt_attr->rq_list[i] = rxq->devx_rq.rq->id;
+		}
 	}
 	MLX5_ASSERT(i > 0);
 	for (j = 0; i != rqt_n; ++j, ++i)
@@ -711,7 +719,13 @@ mlx5_devx_tir_attr_set(struct rte_eth_dev *dev, const uint8_t *rss_key,
 	uint32_t i;
 
 	/* NULL queues designate drop queue. */
-	if (ind_tbl->queues != NULL) {
+	if (ind_tbl->queues == NULL) {
+		is_hairpin = priv->drop_queue.rxq->ctrl->is_hairpin;
+	} else if (mlx5_is_external_rxq(dev, ind_tbl->queues[0])) {
+		/* External RxQ supports neither Hairpin nor LRO. */
+		is_hairpin = false;
+		lro = false;
+	} else {
 		is_hairpin = mlx5_rxq_is_hairpin(dev, ind_tbl->queues[0]);
 		/* Enable TIR LRO only if all the queues were configured for. */
 		for (i = 0; i < ind_tbl->queues_n; ++i) {
@@ -723,8 +737,6 @@ mlx5_devx_tir_attr_set(struct rte_eth_dev *dev, const uint8_t *rss_key,
 				break;
 			}
 		}
-	} else {
-		is_hairpin = priv->drop_queue.rxq->ctrl->is_hairpin;
 	}
 	memset(tir_attr, 0, sizeof(*tir_attr));
 	tir_attr->disp_type = MLX5_TIRC_DISP_TYPE_INDIRECT;
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index 09701a73c1..3875160708 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -1743,6 +1743,12 @@ mlx5_flow_validate_action_queue(const struct rte_flow_action *action,
 					  RTE_FLOW_ERROR_TYPE_ACTION, NULL,
 					  "can't have 2 fate actions in"
 					  " same flow");
+	if (attr->egress)
+		return rte_flow_error_set(error, ENOTSUP,
+					  RTE_FLOW_ERROR_TYPE_ATTR_EGRESS, NULL,
+					  "queue action not supported for egress.");
+	if (mlx5_is_external_rxq(dev, queue->index))
+		return 0;
 	if (!priv->rxqs_n)
 		return rte_flow_error_set(error, EINVAL,
 					  RTE_FLOW_ERROR_TYPE_ACTION_CONF,
@@ -1757,11 +1763,6 @@ mlx5_flow_validate_action_queue(const struct rte_flow_action *action,
 					  RTE_FLOW_ERROR_TYPE_ACTION_CONF,
 					  &queue->index,
 					  "queue is not configured");
-	if (attr->egress)
-		return rte_flow_error_set(error, ENOTSUP,
-					  RTE_FLOW_ERROR_TYPE_ATTR_EGRESS, NULL,
-					  "queue action not supported for "
-					  "egress");
 	return 0;
 }
 
@@ -1776,7 +1777,7 @@ mlx5_flow_validate_action_queue(const struct rte_flow_action *action,
  *   Size of the @p queues array.
  * @param[out] error
  *   On error, filled with a textual error description.
- * @param[out] queue
+ * @param[out] queue_idx
  *   On error, filled with an offending queue index in @p queues array.
  *
  * @return
@@ -1789,17 +1790,27 @@ mlx5_validate_rss_queues(struct rte_eth_dev *dev,
 {
 	const struct mlx5_priv *priv = dev->data->dev_private;
 	bool is_hairpin = false;
+	bool is_ext_rss = false;
 	uint32_t i;
 
 	for (i = 0; i != queues_n; ++i) {
-		struct mlx5_rxq_ctrl *rxq_ctrl = mlx5_rxq_ctrl_get(dev,
-								   queues[i]);
+		struct mlx5_rxq_ctrl *rxq_ctrl;
 
+		if (mlx5_is_external_rxq(dev, queues[i])) {
+			is_ext_rss = true;
+			continue;
+		}
+		if (is_ext_rss) {
+			*error = "Combining external and regular RSS queues is not supported";
+			*queue_idx = i;
+			return -ENOTSUP;
+		}
 		if (queues[i] >= priv->rxqs_n) {
 			*error = "queue index out of range";
 			*queue_idx = i;
 			return -EINVAL;
 		}
+		rxq_ctrl = mlx5_rxq_ctrl_get(dev, queues[i]);
 		if (rxq_ctrl == NULL) {
 			*error =  "queue is not configured";
 			*queue_idx = i;
@@ -1894,7 +1905,7 @@ mlx5_validate_action_rss(struct rte_eth_dev *dev,
 					  RTE_FLOW_ERROR_TYPE_ACTION_CONF, NULL,
 					  "L4 partial RSS requested but L4 RSS"
 					  " type not specified");
-	if (!priv->rxqs_n)
+	if (!priv->rxqs_n && priv->ext_rxqs == NULL)
 		return rte_flow_error_set(error, EINVAL,
 					  RTE_FLOW_ERROR_TYPE_ACTION_CONF,
 					  NULL, "No Rx queues configured");
diff --git a/drivers/net/mlx5/mlx5_rx.h b/drivers/net/mlx5/mlx5_rx.h
index aba05dffa7..acebe3348c 100644
--- a/drivers/net/mlx5/mlx5_rx.h
+++ b/drivers/net/mlx5/mlx5_rx.h
@@ -18,6 +18,7 @@
 
 #include "mlx5.h"
 #include "mlx5_autoconf.h"
+#include "rte_pmd_mlx5.h"
 
 /* Support tunnel matching. */
 #define MLX5_FLOW_TUNNEL 10
@@ -217,8 +218,14 @@ uint32_t mlx5_rxq_deref(struct rte_eth_dev *dev, uint16_t idx);
 struct mlx5_rxq_priv *mlx5_rxq_get(struct rte_eth_dev *dev, uint16_t idx);
 struct mlx5_rxq_ctrl *mlx5_rxq_ctrl_get(struct rte_eth_dev *dev, uint16_t idx);
 struct mlx5_rxq_data *mlx5_rxq_data_get(struct rte_eth_dev *dev, uint16_t idx);
+struct mlx5_external_rxq *mlx5_ext_rxq_ref(struct rte_eth_dev *dev,
+					   uint16_t idx);
+uint32_t mlx5_ext_rxq_deref(struct rte_eth_dev *dev, uint16_t idx);
+struct mlx5_external_rxq *mlx5_ext_rxq_get(struct rte_eth_dev *dev,
+					   uint16_t idx);
 int mlx5_rxq_release(struct rte_eth_dev *dev, uint16_t idx);
 int mlx5_rxq_verify(struct rte_eth_dev *dev);
+int mlx5_ext_rxq_verify(struct rte_eth_dev *dev);
 int rxq_alloc_elts(struct mlx5_rxq_ctrl *rxq_ctrl);
 int mlx5_ind_table_obj_verify(struct rte_eth_dev *dev);
 struct mlx5_ind_table_obj *mlx5_ind_table_obj_get(struct rte_eth_dev *dev,
@@ -643,4 +650,27 @@ mlx5_mprq_enabled(struct rte_eth_dev *dev)
 	return n == n_ibv;
 }
 
+/**
+ * Check whether given RxQ is external.
+ *
+ * @param dev
+ *   Pointer to Ethernet device.
+ * @param queue_idx
+ *   Rx queue index.
+ *
+ * @return
+ *   True if is external RxQ, otherwise false.
+ */
+static __rte_always_inline bool
+mlx5_is_external_rxq(struct rte_eth_dev *dev, uint16_t queue_idx)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5_external_rxq *rxq;
+
+	if (!priv->ext_rxqs || queue_idx < MLX5_EXTERNAL_RX_QUEUE_ID_MIN)
+		return false;
+	rxq = &priv->ext_rxqs[queue_idx - MLX5_EXTERNAL_RX_QUEUE_ID_MIN];
+	return !!__atomic_load_n(&rxq->refcnt, __ATOMIC_RELAXED);
+}
+
 #endif /* RTE_PMD_MLX5_RX_H_ */
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index 889428f48a..ff293d9d56 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -2084,6 +2084,65 @@ mlx5_rxq_data_get(struct rte_eth_dev *dev, uint16_t idx)
 	return rxq == NULL ? NULL : &rxq->ctrl->rxq;
 }
 
+/**
+ * Increase an external Rx queue reference count.
+ *
+ * @param dev
+ *   Pointer to Ethernet device.
+ * @param idx
+ *   External Rx queue index.
+ *
+ * @return
+ *   A pointer to the queue if it exists, NULL otherwise.
+ */
+struct mlx5_external_rxq *
+mlx5_ext_rxq_ref(struct rte_eth_dev *dev, uint16_t idx)
+{
+	struct mlx5_external_rxq *rxq = mlx5_ext_rxq_get(dev, idx);
+
+	__atomic_fetch_add(&rxq->refcnt, 1, __ATOMIC_RELAXED);
+	return rxq;
+}
+
+/**
+ * Decrease an external Rx queue reference count.
+ *
+ * @param dev
+ *   Pointer to Ethernet device.
+ * @param idx
+ *   External Rx queue index.
+ *
+ * @return
+ *   Updated reference count.
+ */
+uint32_t
+mlx5_ext_rxq_deref(struct rte_eth_dev *dev, uint16_t idx)
+{
+	struct mlx5_external_rxq *rxq = mlx5_ext_rxq_get(dev, idx);
+
+	return __atomic_sub_fetch(&rxq->refcnt, 1, __ATOMIC_RELAXED);
+}
+
+/**
+ * Get an external Rx queue.
+ *
+ * @param dev
+ *   Pointer to Ethernet device.
+ * @param idx
+ *   External Rx queue index.
+ *
+ * @return
+ *   A pointer to the queue if it exists, NULL otherwise.
+ */
+struct mlx5_external_rxq *
+mlx5_ext_rxq_get(struct rte_eth_dev *dev, uint16_t idx)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+
+	MLX5_ASSERT(mlx5_is_external_rxq(dev, idx));
+	return &priv->ext_rxqs[idx - MLX5_EXTERNAL_RX_QUEUE_ID_MIN];
+}
+
 /**
  * Release a Rx queue.
  *
@@ -2167,6 +2226,37 @@ mlx5_rxq_verify(struct rte_eth_dev *dev)
 	return ret;
 }
 
+/**
+ * Verify that the external Rx queue list is empty.
+ *
+ * @param dev
+ *   Pointer to Ethernet device.
+ *
+ * @return
+ *   The number of objects not released.
+ */
+int
+mlx5_ext_rxq_verify(struct rte_eth_dev *dev)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5_external_rxq *rxq;
+	uint32_t i;
+	int ret = 0;
+
+	if (priv->ext_rxqs == NULL)
+		return 0;
+
+	for (i = MLX5_EXTERNAL_RX_QUEUE_ID_MIN; i <= UINT16_MAX; ++i) {
+		rxq = mlx5_ext_rxq_get(dev, i);
+		if (rxq->refcnt < 2)
+			continue;
+		DRV_LOG(DEBUG, "Port %u external RxQ %u still referenced.",
+			dev->data->port_id, i);
+		++ret;
+	}
+	return ret;
+}
+
 /**
  * Check whether RxQ type is Hairpin.
  *
@@ -2182,8 +2272,11 @@ bool
 mlx5_rxq_is_hairpin(struct rte_eth_dev *dev, uint16_t idx)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
-	struct mlx5_rxq_ctrl *rxq_ctrl = mlx5_rxq_ctrl_get(dev, idx);
+	struct mlx5_rxq_ctrl *rxq_ctrl;
 
+	if (mlx5_is_external_rxq(dev, idx))
+		return false;
+	rxq_ctrl = mlx5_rxq_ctrl_get(dev, idx);
 	return (idx < priv->rxqs_n && rxq_ctrl != NULL && rxq_ctrl->is_hairpin);
 }
 
@@ -2358,9 +2451,16 @@ mlx5_ind_table_obj_setup(struct rte_eth_dev *dev,
 
 	if (ref_qs)
 		for (i = 0; i != queues_n; ++i) {
-			if (mlx5_rxq_ref(dev, queues[i]) == NULL) {
-				ret = -rte_errno;
-				goto error;
+			if (mlx5_is_external_rxq(dev, queues[i])) {
+				if (mlx5_ext_rxq_ref(dev, queues[i]) == NULL) {
+					ret = -rte_errno;
+					goto error;
+				}
+			} else {
+				if (mlx5_rxq_ref(dev, queues[i]) == NULL) {
+					ret = -rte_errno;
+					goto error;
+				}
 			}
 		}
 	ret = priv->obj_ops.ind_table_new(dev, n, ind_tbl);
@@ -2371,8 +2471,12 @@ mlx5_ind_table_obj_setup(struct rte_eth_dev *dev,
 error:
 	if (ref_qs) {
 		err = rte_errno;
-		for (j = 0; j < i; j++)
-			mlx5_rxq_deref(dev, queues[j]);
+		for (j = 0; j < i; j++) {
+			if (mlx5_is_external_rxq(dev, queues[j]))
+				mlx5_ext_rxq_deref(dev, queues[j]);
+			else
+				mlx5_rxq_deref(dev, queues[j]);
+		}
 		rte_errno = err;
 	}
 	DRV_LOG(DEBUG, "Port %u cannot setup indirection table.",
-- 
2.25.1


^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v3 0/6] mlx5: external RxQ support
  2022-02-24 23:25   ` [PATCH v3 " Michael Baum
                       ` (5 preceding siblings ...)
  2022-02-24 23:25     ` [PATCH v3 6/6] net/mlx5: support queue/RSS action for external RxQ Michael Baum
@ 2022-02-25 17:39     ` Thomas Monjalon
  6 siblings, 0 replies; 26+ messages in thread
From: Thomas Monjalon @ 2022-02-25 17:39 UTC (permalink / raw)
  To: Michael Baum
  Cc: dev, Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko, asafp

> Michael Baum (6):
>   common/mlx5: consider local functions as internal
>   common/mlx5: glue device and PD importation
>   common/mlx5: add remote PD and CTX support
>   net/mlx5: optimize RxQ/TxQ control structure
>   net/mlx5: add external RxQ mapping API
>   net/mlx5: support queue/RSS action for external RxQ

Applied in next-net-mlx, thanks.

* Re: [PATCH v3 1/6] common/mlx5: consider local functions as internal
  2022-02-24 23:25     ` [PATCH v3 1/6] common/mlx5: consider local functions as internal Michael Baum
@ 2022-02-25 18:01       ` Ferruh Yigit
  2022-02-25 18:38         ` Thomas Monjalon
  0 siblings, 1 reply; 26+ messages in thread
From: Ferruh Yigit @ 2022-02-25 18:01 UTC (permalink / raw)
  To: Michael Baum, dev, Thomas Monjalon
  Cc: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko,
	David Marchand, Ray Kinsella

On 2/24/2022 11:25 PM, Michael Baum wrote:
> The functions which are not explicitly marked as internal
> were exported because the local catch-all rule was missing in the
> version script.
> After adding the missing rule, all local functions are hidden.
> The function mlx5_get_device_guid is used in another library,
> so it needs to be exported (as internal).
> 
> Because the local functions were exported as non-internal
> in DPDK 21.11, any change in these functions would break the ABI.
> An ABI exception is added for this library, considering that all
> functions are either local or internal.
> 

When a function is not listed explicitly in the .map file, it shouldn't
be exported at all.

So I am not sure if this exception is required: did you get
a warning from the tool, or is this theoretical?

cc'ed David and Ray for comment.

> Signed-off-by: Michael Baum <michaelba@nvidia.com>
> Acked-by: Matan Azrad <matan@nvidia.com>


<...>




* Re: [PATCH v3 1/6] common/mlx5: consider local functions as internal
  2022-02-25 18:01       ` Ferruh Yigit
@ 2022-02-25 18:38         ` Thomas Monjalon
  2022-02-25 19:13           ` Ferruh Yigit
  0 siblings, 1 reply; 26+ messages in thread
From: Thomas Monjalon @ 2022-02-25 18:38 UTC (permalink / raw)
  To: Ferruh Yigit
  Cc: Michael Baum, dev, Matan Azrad, Raslan Darawsheh,
	Viacheslav Ovsiienko, David Marchand, Ray Kinsella

25/02/2022 19:01, Ferruh Yigit:
> On 2/24/2022 11:25 PM, Michael Baum wrote:
> > The functions which are not explicitly marked as internal
> > were exported because the local catch-all rule was missing in the
> > version script.
> > After adding the missing rule, all local functions are hidden.
> > The function mlx5_get_device_guid is used in another library,
> > so it needs to be exported (as internal).
> > 
> > Because the local functions were exported as non-internal
> > in DPDK 21.11, any change in these functions would break the ABI.
> > An ABI exception is added for this library, considering that all
> > functions are either local or internal.
> > 
> 
> When a function is not listed explicitly in .map file, it shouldn't
> be exported at all.

It seems we need local:* to achieve this behaviour.
A few other libs are missing it. I plan to send a patch for them.

> So I am not sure if this exception is required, did you get
> warning for tool, or is this theoretical?

It is not theoretical; you can check with objdump:
objdump -T build/lib/librte_common_mlx5.so | sed -rn 's,^[[:xdigit:]]* g *(D[^0]*)[^ ]* *,\1,p'

I did not check the ABI tool without the exception.




* Re: [PATCH v3 1/6] common/mlx5: consider local functions as internal
  2022-02-25 18:38         ` Thomas Monjalon
@ 2022-02-25 19:13           ` Ferruh Yigit
  0 siblings, 0 replies; 26+ messages in thread
From: Ferruh Yigit @ 2022-02-25 19:13 UTC (permalink / raw)
  To: Thomas Monjalon
  Cc: Michael Baum, dev, Matan Azrad, Raslan Darawsheh,
	Viacheslav Ovsiienko, David Marchand, Ray Kinsella

On 2/25/2022 6:38 PM, Thomas Monjalon wrote:
> 25/02/2022 19:01, Ferruh Yigit:
>> On 2/24/2022 11:25 PM, Michael Baum wrote:
>>> The functions which are not explicitly marked as internal
>>> were exported because the local catch-all rule was missing in the
>>> version script.
>>> After adding the missing rule, all local functions are hidden.
>>> The function mlx5_get_device_guid is used in another library,
>>> so it needs to be exported (as internal).
>>>
>>> Because the local functions were exported as non-internal
>>> in DPDK 21.11, any change in these functions would break the ABI.
>>> An ABI exception is added for this library, considering that all
>>> functions are either local or internal.
>>>
>>
>> When a function is not listed explicitly in .map file, it shouldn't
>> be exported at all.
> 
> It seems we need local:* to achieve this behaviour.
> Few other libs are missing it. I plan to send a patch for them.
> 

+1 for this patch, thanks.
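For reference, the `local: *` catch-all being discussed is the last rule of a linker version script: any symbol not named in an explicit `global:` section is then hidden from the shared object. A minimal sketch follows; the section layout and comments are illustrative, not the actual mlx5 map file (only `mlx5_get_device_guid` is taken from the commit message above):

```
INTERNAL {
	global:

	mlx5_get_device_guid;   # needed by another library, kept exported

	local: *;               # everything else stays hidden
};
```

Without the `local: *;` line, symbols outside the `global:` list keep default visibility, which is exactly how the unlisted functions leaked into the 21.11 ABI.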

>> So I am not sure if this exception is required: did you get
>> a warning from the tool, or is this theoretical?
> 
> It is not theoretical; you can check with objdump:
> objdump -T build/lib/librte_common_mlx5.so | sed -rn 's,^[[:xdigit:]]* g *(D[^0]*)[^ ]* *,\1,p'
> 
> I did not check the ABI tool without the exception.
> 

Yes, the tool complains with change [1]; I will proceed with the original patch.

[1]
29 Removed functions:

   [D] 'function int mlx5_auxiliary_get_pci_str(const rte_auxiliary_device*, char*, size_t)'    {mlx5_auxiliary_get_pci_str}
   [D] 'function void mlx5_common_auxiliary_init()'    {mlx5_common_auxiliary_init}
   [D] 'function int mlx5_common_dev_dma_map(rte_device*, void*, uint64_t, size_t)'    {mlx5_common_dev_dma_map}
   [D] 'function int mlx5_common_dev_dma_unmap(rte_device*, void*, uint64_t, size_t)'    {mlx5_common_dev_dma_unmap}
   [D] 'function int mlx5_common_dev_probe(rte_device*)'    {mlx5_common_dev_probe}
   [D] 'function int mlx5_common_dev_remove(rte_device*)'    {mlx5_common_dev_remove}
   [D] 'function void mlx5_common_driver_on_register_pci(mlx5_class_driver*)'    {mlx5_common_driver_on_register_pci}
   [D] 'function void mlx5_common_pci_init()'    {mlx5_common_pci_init}
   [D] 'function mlx5_mr* mlx5_create_mr_ext(void*, uintptr_t, size_t, int, mlx5_reg_mr_t)'    {mlx5_create_mr_ext}
   [D] 'function bool mlx5_dev_pci_match(const mlx5_class_driver*, const rte_device*)'    {mlx5_dev_pci_match}
   [D] 'function int mlx5_dev_to_pci_str(const rte_device*, char*, size_t)'    {mlx5_dev_to_pci_str}
   [D] 'function void mlx5_free_mr_by_addr(mlx5_mr_share_cache*, const char*, void*, size_t)'    {mlx5_free_mr_by_addr}
   [D] 'function ibv_device* mlx5_get_aux_ibv_device(const rte_auxiliary_device*)'    {mlx5_get_aux_ibv_device}
   [D] 'function void mlx5_glue_constructor()'    {mlx5_glue_constructor}
   [D] 'function void mlx5_malloc_mem_select(uint32_t)'    {mlx5_malloc_mem_select}
   [D] 'function void mlx5_mr_btree_dump(mlx5_mr_btree*)'    {mlx5_mr_btree_dump}
   [D] 'function int mlx5_mr_create_cache(mlx5_mr_share_cache*, int)'    {mlx5_mr_create_cache}
   [D] 'function void mlx5_mr_free(mlx5_mr*, mlx5_dereg_mr_t)'    {mlx5_mr_free}
   [D] 'function int mlx5_mr_insert_cache(mlx5_mr_share_cache*, mlx5_mr*)'    {mlx5_mr_insert_cache}
   [D] 'function mlx5_mr* mlx5_mr_lookup_list(mlx5_mr_share_cache*, mr_cache_entry*, uintptr_t)'    {mlx5_mr_lookup_list}
   [D] 'function void mlx5_mr_rebuild_cache(mlx5_mr_share_cache*)'    {mlx5_mr_rebuild_cache}
   [D] 'function void mlx5_mr_release_cache(mlx5_mr_share_cache*)'    {mlx5_mr_release_cache}
   [D] 'function int mlx5_nl_devlink_family_id_get(int)'    {mlx5_nl_devlink_family_id_get}
   [D] 'function int mlx5_nl_enable_roce_get(int, int, const char*, int*)'    {mlx5_nl_enable_roce_get}
   [D] 'function int mlx5_nl_enable_roce_set(int, int, const char*, int)'    {mlx5_nl_enable_roce_set}
   [D] 'function int mlx5_os_open_device(mlx5_common_device*, uint32_t)'    {mlx5_os_open_device}
   [D] 'function int mlx5_os_pd_create(mlx5_common_device*)'    {mlx5_os_pd_create}
   [D] 'function void mlx5_os_set_reg_mr_cb(mlx5_reg_mr_t*, mlx5_dereg_mr_t*)'    {mlx5_os_set_reg_mr_cb}
   [D] 'function void mlx5_set_context_attr(rte_device*, ibv_context*)'    {mlx5_set_context_attr}

2 Removed variables:

   [D] 'uint32_t atomic_sn'    {atomic_sn}
   [D] 'int mlx5_common_logtype'    {mlx5_common_logtype}

1 Removed function symbol not referenced by debug info:

   [D] mlx5_mr_dump_cache


end of thread, other threads:[~2022-02-25 19:14 UTC | newest]

Thread overview: 26+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-02-22 21:04 [PATCH 0/6] mlx5: external RxQ support Michael Baum
2022-02-22 21:04 ` [PATCH 1/6] common/mlx5: glue device and PD importation Michael Baum
2022-02-22 21:04 ` [PATCH 2/6] common/mlx5: add remote PD and CTX support Michael Baum
2022-02-22 21:04 ` [PATCH 3/6] net/mlx5: optimize RxQ/TxQ control structure Michael Baum
2022-02-22 21:04 ` [PATCH 4/6] net/mlx5: add external RxQ mapping API Michael Baum
2022-02-22 21:04 ` [PATCH 5/6] net/mlx5: support queue/RSS action for external RxQ Michael Baum
2022-02-22 21:04 ` [PATCH 6/6] app/testpmd: add test " Michael Baum
2022-02-23 18:48 ` [PATCH v2 0/6] mlx5: external RxQ support Michael Baum
2022-02-23 18:48   ` [PATCH v2 1/6] common/mlx5: consider local functions as internal Michael Baum
2022-02-23 18:48   ` [PATCH v2 2/6] common/mlx5: glue device and PD importation Michael Baum
2022-02-23 18:48   ` [PATCH v2 3/6] common/mlx5: add remote PD and CTX support Michael Baum
2022-02-23 18:48   ` [PATCH v2 4/6] net/mlx5: optimize RxQ/TxQ control structure Michael Baum
2022-02-23 18:48   ` [PATCH v2 5/6] net/mlx5: add external RxQ mapping API Michael Baum
2022-02-23 18:48   ` [PATCH v2 6/6] net/mlx5: support queue/RSS action for external RxQ Michael Baum
2022-02-24  8:38   ` [PATCH v2 0/6] mlx5: external RxQ support Matan Azrad
2022-02-24 23:25   ` [PATCH v3 " Michael Baum
2022-02-24 23:25     ` [PATCH v3 1/6] common/mlx5: consider local functions as internal Michael Baum
2022-02-25 18:01       ` Ferruh Yigit
2022-02-25 18:38         ` Thomas Monjalon
2022-02-25 19:13           ` Ferruh Yigit
2022-02-24 23:25     ` [PATCH v3 2/6] common/mlx5: glue device and PD importation Michael Baum
2022-02-24 23:25     ` [PATCH v3 3/6] common/mlx5: add remote PD and CTX support Michael Baum
2022-02-24 23:25     ` [PATCH v3 4/6] net/mlx5: optimize RxQ/TxQ control structure Michael Baum
2022-02-24 23:25     ` [PATCH v3 5/6] net/mlx5: add external RxQ mapping API Michael Baum
2022-02-24 23:25     ` [PATCH v3 6/6] net/mlx5: support queue/RSS action for external RxQ Michael Baum
2022-02-25 17:39     ` [PATCH v3 0/6] mlx5: external RxQ support Thomas Monjalon
