DPDK patches and discussions
* [dpdk-dev] [PATCH 00/11] net/mlx5: support shared Rx queue
@ 2021-09-26 11:18 Xueming Li
  2021-09-26 11:18 ` [dpdk-dev] [PATCH 01/11] common/mlx5: support receive queue user index Xueming Li
                   ` (11 more replies)
  0 siblings, 12 replies; 29+ messages in thread
From: Xueming Li @ 2021-09-26 11:18 UTC (permalink / raw)
  To: dev; +Cc: xuemingl, Lior Margalit

Implementation of shared Rx queue.

Depends-on: series-18996 ("ethdev: introduce shared Rx queue")
Depends-on: series-18065 ("net/mlx5: keep indirect actions across port restart")
Depends-on: series-18939 ("ethdev: change queue release callback")

Xueming Li (11):
  common/mlx5: support receive queue user index
  common/mlx5: support receive memory pool
  net/mlx5: clean Rx queue code
  net/mlx5: split multiple packet Rq memory pool
  net/mlx5: split Rx queue
  net/mlx5: move Rx queue reference count
  net/mlx5: move Rx queue hairpin info to private data
  net/mlx5: remove port info from shareable Rx queue
  net/mlx5: move Rx queue DevX resource
  net/mlx5: remove Rx queue data list from device
  net/mlx5: support shared Rx queue

 doc/guides/nics/features/mlx5.ini        |   1 +
 doc/guides/nics/mlx5.rst                 |   6 +
 drivers/common/mlx5/mlx5_common_devx.c   | 310 ++++++++++--
 drivers/common/mlx5/mlx5_common_devx.h   |  19 +-
 drivers/common/mlx5/mlx5_devx_cmds.c     |  52 ++
 drivers/common/mlx5/mlx5_devx_cmds.h     |  16 +
 drivers/common/mlx5/mlx5_prm.h           |  93 +++-
 drivers/common/mlx5/version.map          |   1 +
 drivers/net/mlx5/linux/mlx5_os.c         |   2 +
 drivers/net/mlx5/linux/mlx5_verbs.c      | 161 +++---
 drivers/net/mlx5/mlx5.c                  |  11 +-
 drivers/net/mlx5/mlx5.h                  |  17 +-
 drivers/net/mlx5/mlx5_devx.c             | 249 ++++-----
 drivers/net/mlx5/mlx5_ethdev.c           |  16 +-
 drivers/net/mlx5/mlx5_flow.c             |  45 +-
 drivers/net/mlx5/mlx5_mr.c               |   7 +-
 drivers/net/mlx5/mlx5_rss.c              |   6 +-
 drivers/net/mlx5/mlx5_rx.c               |  31 +-
 drivers/net/mlx5/mlx5_rx.h               |  49 +-
 drivers/net/mlx5/mlx5_rxq.c              | 618 ++++++++++++++++-------
 drivers/net/mlx5/mlx5_rxtx.c             |   6 +-
 drivers/net/mlx5/mlx5_rxtx_vec.c         |   8 +-
 drivers/net/mlx5/mlx5_stats.c            |   9 +-
 drivers/net/mlx5/mlx5_trigger.c          | 161 +++---
 drivers/net/mlx5/mlx5_vlan.c             |  16 +-
 drivers/regex/mlx5/mlx5_regex_fastpath.c |   2 +-
 26 files changed, 1289 insertions(+), 623 deletions(-)

-- 
2.33.0



* [dpdk-dev] [PATCH 01/11] common/mlx5: support receive queue user index
  2021-09-26 11:18 [dpdk-dev] [PATCH 00/11] net/mlx5: support shared Rx queue Xueming Li
@ 2021-09-26 11:18 ` Xueming Li
  2021-09-26 11:18 ` [dpdk-dev] [PATCH 02/11] common/mlx5: support receive memory pool Xueming Li
                   ` (10 subsequent siblings)
  11 siblings, 0 replies; 29+ messages in thread
From: Xueming Li @ 2021-09-26 11:18 UTC (permalink / raw)
  To: dev; +Cc: xuemingl, Lior Margalit, Matan Azrad, Viacheslav Ovsiienko, Ori Kam

The RQ user index is saved in the CQE when a packet is received by the RQ.
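
For illustration only (not part of this patch), a consumer could
reassemble the 24-bit index from the new union fields, assuming the
big-endian CQE layout declared in mlx5_prm.h below and
<rte_byteorder.h> for the byte-order helper:

  /* Hypothetical helper, illustration only. */
  static inline uint32_t
  mlx5_cqe_user_index(const struct mlx5_cqe *cqe)
  {
  	/* Big-endian 24-bit field: one high byte, then a 16-bit low part. */
  	return ((uint32_t)cqe->user_index_hi << 16) |
  	       rte_be_to_cpu_16(cqe->user_index_low);
  }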

Signed-off-by: Xueming Li <xuemingl@nvidia.com>
---
 drivers/common/mlx5/mlx5_prm.h           | 8 +++++++-
 drivers/regex/mlx5/mlx5_regex_fastpath.c | 2 +-
 2 files changed, 8 insertions(+), 2 deletions(-)

diff --git a/drivers/common/mlx5/mlx5_prm.h b/drivers/common/mlx5/mlx5_prm.h
index d361bcf90ef..72af3710a8f 100644
--- a/drivers/common/mlx5/mlx5_prm.h
+++ b/drivers/common/mlx5/mlx5_prm.h
@@ -393,7 +393,13 @@ struct mlx5_cqe {
 	uint16_t hdr_type_etc;
 	uint16_t vlan_info;
 	uint8_t lro_num_seg;
-	uint8_t rsvd3[3];
+	union {
+		uint8_t user_index_bytes[3];
+		struct {
+			uint8_t user_index_hi;
+			uint16_t user_index_low;
+		} __rte_packed;
+	};
 	uint32_t flow_table_metadata;
 	uint8_t rsvd4[4];
 	uint32_t byte_cnt;
diff --git a/drivers/regex/mlx5/mlx5_regex_fastpath.c b/drivers/regex/mlx5/mlx5_regex_fastpath.c
index c79445ce7d3..a151e4ce8dc 100644
--- a/drivers/regex/mlx5/mlx5_regex_fastpath.c
+++ b/drivers/regex/mlx5/mlx5_regex_fastpath.c
@@ -567,7 +567,7 @@ mlx5_regexdev_dequeue(struct rte_regexdev *dev, uint16_t qp_id,
 		uint16_t wq_counter
 			= (rte_be_to_cpu_16(cqe->wqe_counter) + 1) &
 			  MLX5_REGEX_MAX_WQE_INDEX;
-		size_t sqid = cqe->rsvd3[2];
+		size_t sqid = cqe->user_index_bytes[2];
 		struct mlx5_regex_sq *sq = &queue->sqs[sqid];
 
 		/* UMR mode WQE counter move as WQE set(4 WQEBBS).*/
-- 
2.33.0



* [dpdk-dev] [PATCH 02/11] common/mlx5: support receive memory pool
  2021-09-26 11:18 [dpdk-dev] [PATCH 00/11] net/mlx5: support shared Rx queue Xueming Li
  2021-09-26 11:18 ` [dpdk-dev] [PATCH 01/11] common/mlx5: support receive queue user index Xueming Li
@ 2021-09-26 11:18 ` Xueming Li
  2021-09-26 11:18 ` [dpdk-dev] [PATCH 03/11] net/mlx5: clean Rx queue code Xueming Li
                   ` (9 subsequent siblings)
  11 siblings, 0 replies; 29+ messages in thread
From: Xueming Li @ 2021-09-26 11:18 UTC (permalink / raw)
  To: dev
  Cc: xuemingl, Lior Margalit, Matan Azrad, Viacheslav Ovsiienko, Ray Kinsella

Add DevX support for the PRM shared receive memory pool (RMP) object.
The RMP is used to support shared Rx queues: multiple RQs can share the
same RMP, and memory buffers are supplied to the RMP rather than to the
individual RQs.

This patch makes the RMP-based RQ optional: it is created only if
mlx5_devx_rq.rmp is set.
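
A usage sketch (illustrative only; "ctx", "wqe_size", "log_wqbb_n" and
"socket" are assumed to be prepared by the caller as for any DevX
queue). The standalone vs. shared behavior is selected purely through
the rmp pointer before calling mlx5_devx_rq_create():

  struct mlx5_devx_rmp rmp = { 0 };	/* pass the same object to all sharing RQs */
  struct mlx5_devx_rq rq = { 0 };
  struct mlx5_devx_create_rq_attr attr = { 0 };

  rq.rmp = &rmp;	/* non-NULL selects the shared, RMP-based RQ;
  			 * NULL keeps the standalone behavior.
  			 */
  if (mlx5_devx_rq_create(ctx, &rq, wqe_size, log_wqbb_n, &attr, socket) != 0)
  	return -rte_errno;	/* rte_errno is set by the callee */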

Signed-off-by: Xueming Li <xuemingl@nvidia.com>
---
 drivers/common/mlx5/mlx5_common_devx.c | 310 +++++++++++++++++++++----
 drivers/common/mlx5/mlx5_common_devx.h |  19 +-
 drivers/common/mlx5/mlx5_devx_cmds.c   |  52 +++++
 drivers/common/mlx5/mlx5_devx_cmds.h   |  16 ++
 drivers/common/mlx5/mlx5_prm.h         |  85 ++++++-
 drivers/common/mlx5/version.map        |   1 +
 drivers/net/mlx5/mlx5_devx.c           |   2 +-
 7 files changed, 434 insertions(+), 51 deletions(-)

diff --git a/drivers/common/mlx5/mlx5_common_devx.c b/drivers/common/mlx5/mlx5_common_devx.c
index 22c8d356c45..cd6f13a66b6 100644
--- a/drivers/common/mlx5/mlx5_common_devx.c
+++ b/drivers/common/mlx5/mlx5_common_devx.c
@@ -271,6 +271,39 @@ mlx5_devx_sq_create(void *ctx, struct mlx5_devx_sq *sq_obj, uint16_t log_wqbb_n,
 	return -rte_errno;
 }
 
+/**
+ * Destroy DevX Receive Queue resources.
+ *
+ * @param[in] rq_res
+ *   DevX RQ resource to destroy.
+ */
+static void
+mlx5_devx_wq_res_destroy(struct mlx5_devx_wq_res *rq_res)
+{
+	if (rq_res->umem_obj)
+		claim_zero(mlx5_os_umem_dereg(rq_res->umem_obj));
+	if (rq_res->umem_buf)
+		mlx5_free((void *)(uintptr_t)rq_res->umem_buf);
+	memset(rq_res, 0, sizeof(*rq_res));
+}
+
+/**
+ * Destroy DevX Receive Memory Pool.
+ *
+ * @param[in] rmp
+ *   DevX RMP to destroy.
+ */
+static void
+mlx5_devx_rmp_destroy(struct mlx5_devx_rmp *rmp)
+{
+	MLX5_ASSERT(rmp->ref_cnt == 0);
+	if (rmp->rmp) {
+		claim_zero(mlx5_devx_cmd_destroy(rmp->rmp));
+		rmp->rmp = NULL;
+	}
+	mlx5_devx_wq_res_destroy(&rmp->wq);
+}
+
 /**
  * Destroy DevX Receive Queue.
  *
@@ -280,55 +313,47 @@ mlx5_devx_sq_create(void *ctx, struct mlx5_devx_sq *sq_obj, uint16_t log_wqbb_n,
 void
 mlx5_devx_rq_destroy(struct mlx5_devx_rq *rq)
 {
-	if (rq->rq)
+	if (rq->rq) {
 		claim_zero(mlx5_devx_cmd_destroy(rq->rq));
-	if (rq->umem_obj)
-		claim_zero(mlx5_os_umem_dereg(rq->umem_obj));
-	if (rq->umem_buf)
-		mlx5_free((void *)(uintptr_t)rq->umem_buf);
+		rq->rq = NULL;
+	}
+	if (rq->rmp == NULL) {
+		mlx5_devx_wq_res_destroy(&rq->wq);
+	} else {
+		MLX5_ASSERT(rq->rmp->ref_cnt > 0);
+		rq->rmp->ref_cnt--;
+		if (rq->rmp->ref_cnt == 0)
+			mlx5_devx_rmp_destroy(rq->rmp);
+	}
+	rq->db_rec = 0;
 }
 
 /**
- * Create Receive Queue using DevX API.
- *
- * Get a pointer to partially initialized attributes structure, and updates the
- * following fields:
- *   wq_umem_valid
- *   wq_umem_id
- *   wq_umem_offset
- *   dbr_umem_valid
- *   dbr_umem_id
- *   dbr_addr
- *   log_wq_pg_sz
- * All other fields are updated by caller.
+ * Create WQ resources using DevX API.
  *
  * @param[in] ctx
  *   Context returned from mlx5 open_device() glue function.
- * @param[in/out] rq_obj
- *   Pointer to RQ to create.
+ * @param[in/out] rq_res
+ *   Pointer to RQ resource to create.
  * @param[in] wqe_size
  *   Size of WQE structure.
  * @param[in] log_wqbb_n
  *   Log of number of WQBBs in queue.
- * @param[in] attr
- *   Pointer to RQ attributes structure.
- * @param[in] socket
- *   Socket to use for allocation.
+ * @param[in] wq_attr
+ *   Pointer to WQ attributes structure.
  *
  * @return
  *   0 on success, a negative errno value otherwise and rte_errno is set.
  */
-int
-mlx5_devx_rq_create(void *ctx, struct mlx5_devx_rq *rq_obj, uint32_t wqe_size,
-		    uint16_t log_wqbb_n,
-		    struct mlx5_devx_create_rq_attr *attr, int socket)
+static int
+mlx5_devx_wq_res_create(void *ctx, struct mlx5_devx_wq_res *rq_res,
+			uint32_t wqe_size, uint16_t log_wqbb_n,
+			struct mlx5_devx_wq_attr *wq_attr, int socket)
 {
-	struct mlx5_devx_obj *rq = NULL;
 	struct mlx5dv_devx_umem *umem_obj = NULL;
 	void *umem_buf = NULL;
 	size_t alignment = MLX5_WQE_BUF_ALIGNMENT;
-	uint32_t umem_size, umem_dbrec;
-	uint16_t rq_size = 1 << log_wqbb_n;
+	uint32_t umem_size;
 	int ret;
 
 	if (alignment == (size_t)-1) {
@@ -337,8 +362,7 @@ mlx5_devx_rq_create(void *ctx, struct mlx5_devx_rq *rq_obj, uint32_t wqe_size,
 		return -rte_errno;
 	}
 	/* Allocate memory buffer for WQEs and doorbell record. */
-	umem_size = wqe_size * rq_size;
-	umem_dbrec = RTE_ALIGN(umem_size, MLX5_DBR_SIZE);
+	umem_size = wqe_size * (1 << log_wqbb_n);
 	umem_size += MLX5_DBR_SIZE;
 	umem_buf = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO, umem_size,
 			       alignment, socket);
@@ -355,14 +379,58 @@ mlx5_devx_rq_create(void *ctx, struct mlx5_devx_rq *rq_obj, uint32_t wqe_size,
 		rte_errno = errno;
 		goto error;
 	}
-	/* Fill attributes for RQ object creation. */
-	attr->wq_attr.wq_umem_valid = 1;
-	attr->wq_attr.wq_umem_id = mlx5_os_get_umem_id(umem_obj);
-	attr->wq_attr.wq_umem_offset = 0;
-	attr->wq_attr.dbr_umem_valid = 1;
-	attr->wq_attr.dbr_umem_id = attr->wq_attr.wq_umem_id;
-	attr->wq_attr.dbr_addr = umem_dbrec;
-	attr->wq_attr.log_wq_pg_sz = MLX5_LOG_PAGE_SIZE;
+	rq_res->umem_buf = umem_buf;
+	rq_res->umem_obj = umem_obj;
+	/* Fill WQ attributes. */
+	wq_attr->wq_umem_valid = 1;
+	wq_attr->wq_umem_id = mlx5_os_get_umem_id(umem_obj);
+	wq_attr->wq_umem_offset = 0;
+	wq_attr->dbr_umem_valid = 1;
+	wq_attr->dbr_umem_id = wq_attr->wq_umem_id;
+	wq_attr->dbr_addr = RTE_ALIGN(umem_size, MLX5_DBR_SIZE);
+	wq_attr->log_wq_pg_sz = MLX5_LOG_PAGE_SIZE;
+	return 0;
+error:
+	ret = rte_errno;
+	if (umem_obj)
+		claim_zero(mlx5_os_umem_dereg(umem_obj));
+	if (umem_buf)
+		mlx5_free((void *)(uintptr_t)umem_buf);
+	rte_errno = ret;
+	return -rte_errno;
+}
+
+/**
+ * Create standalone Receive Queue using DevX API.
+ *
+ * @param[in] ctx
+ *   Context returned from mlx5 open_device() glue function.
+ * @param[in/out] rq_obj
+ *   Pointer to RQ to create.
+ * @param[in] wqe_size
+ *   Size of WQE structure.
+ * @param[in] log_wqbb_n
+ *   Log of number of WQBBs in queue.
+ * @param[in] attr
+ *   Pointer to RQ attributes structure.
+ * @param[in] socket
+ *   Socket to use for allocation.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+static int
+mlx5_devx_rq_std_create(void *ctx, struct mlx5_devx_rq *rq_obj,
+			uint32_t wqe_size, uint16_t log_wqbb_n,
+			struct mlx5_devx_create_rq_attr *attr, int socket)
+{
+	struct mlx5_devx_obj *rq = NULL;
+	int ret;
+
+	ret = mlx5_devx_wq_res_create(ctx, &rq_obj->wq, wqe_size, log_wqbb_n,
+				      &attr->wq_attr, socket);
+	if (ret != 0)
+		return ret;
 	/* Create receive queue object with DevX. */
 	rq = mlx5_devx_cmd_create_rq(ctx, attr, socket);
 	if (!rq) {
@@ -370,18 +438,166 @@ mlx5_devx_rq_create(void *ctx, struct mlx5_devx_rq *rq_obj, uint32_t wqe_size,
 		rte_errno = ENOMEM;
 		goto error;
 	}
-	rq_obj->umem_buf = umem_buf;
-	rq_obj->umem_obj = umem_obj;
 	rq_obj->rq = rq;
-	rq_obj->db_rec = RTE_PTR_ADD(rq_obj->umem_buf, umem_dbrec);
 	return 0;
 error:
 	ret = rte_errno;
-	if (umem_obj)
-		claim_zero(mlx5_os_umem_dereg(umem_obj));
-	if (umem_buf)
-		mlx5_free((void *)(uintptr_t)umem_buf);
+	mlx5_devx_wq_res_destroy(&rq_obj->wq);
 	rte_errno = ret;
 	return -rte_errno;
 }
 
+/**
+ * Create Receive Memory Pool using DevX API.
+ *
+ * @param[in] ctx
+ *   Context returned from mlx5 open_device() glue function.
+ * @param[in/out] rmp_obj
+ *   Pointer to RMP to create.
+ * @param[in] wqe_size
+ *   Size of WQE structure.
+ * @param[in] log_wqbb_n
+ *   Log of number of WQBBs in queue.
+ * @param[in] wq_attr
+ *   Pointer to WQ attributes structure.
+ * @param[in] socket
+ *   Socket to use for allocation.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+static int
+mlx5_devx_rmp_create(void *ctx, struct mlx5_devx_rmp *rmp_obj,
+		     uint32_t wqe_size, uint16_t log_wqbb_n,
+		     struct mlx5_devx_wq_attr *wq_attr, int socket)
+{
+	struct mlx5_devx_create_rmp_attr rmp_attr = { 0 };
+	int ret;
+
+	rmp_attr.wq_attr = *wq_attr;
+	ret = mlx5_devx_wq_res_create(ctx, &rmp_obj->wq, wqe_size, log_wqbb_n,
+				      &rmp_attr.wq_attr, socket);
+	if (ret != 0)
+		return ret;
+	rmp_attr.state = MLX5_RMPC_STATE_RDY;
+	rmp_attr.basic_cyclic_rcv_wqe =
+		wq_attr->wq_type == MLX5_WQ_TYPE_CYCLIC_STRIDING_RQ ?
+		0 : 1;
+	/* Create receive memory pool object with DevX. */
+	rmp_obj->rmp = mlx5_devx_cmd_create_rmp(ctx, &rmp_attr, socket);
+	if (rmp_obj->rmp == NULL) {
+		DRV_LOG(ERR, "Can't create DevX RMP object.");
+		rte_errno = ENOMEM;
+		goto error;
+	}
+	return 0;
+error:
+	ret = rte_errno;
+	mlx5_devx_wq_res_destroy(&rmp_obj->wq);
+	rte_errno = ret;
+	return -rte_errno;
+}
+
+/**
+ * Create Shared Receive Queue based on RMP using DevX API.
+ *
+ * @param[in] ctx
+ *   Context returned from mlx5 open_device() glue function.
+ * @param[in/out] rq_obj
+ *   Pointer to RQ to create.
+ * @param[in] wqe_size
+ *   Size of WQE structure.
+ * @param[in] log_wqbb_n
+ *   Log of number of WQBBs in queue.
+ * @param[in] attr
+ *   Pointer to RQ attributes structure.
+ * @param[in] socket
+ *   Socket to use for allocation.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+static int
+mlx5_devx_rq_shared_create(void *ctx, struct mlx5_devx_rq *rq_obj,
+			   uint32_t wqe_size, uint16_t log_wqbb_n,
+			   struct mlx5_devx_create_rq_attr *attr, int socket)
+{
+	struct mlx5_devx_obj *rq = NULL;
+	int ret;
+
+	ret = mlx5_devx_rmp_create(ctx, rq_obj->rmp, wqe_size, log_wqbb_n,
+				   &attr->wq_attr, socket);
+	if (ret != 0)
+		return ret;
+	rq_obj->rmp->ref_cnt++;
+	attr->mem_rq_type = MLX5_RQC_MEM_RQ_TYPE_MEMORY_RQ_RMP;
+	attr->rmpn = rq_obj->rmp->rmp->id;
+	attr->flush_in_error_en = 0;
+	memset(&attr->wq_attr, 0, sizeof(attr->wq_attr));
+	/* Create receive queue object with DevX. */
+	rq = mlx5_devx_cmd_create_rq(ctx, attr, socket);
+	if (!rq) {
+		DRV_LOG(ERR, "Can't create DevX RMP RQ object.");
+		rte_errno = ENOMEM;
+		goto error;
+	}
+	rq_obj->rq = rq;
+	return 0;
+error:
+	ret = rte_errno;
+	mlx5_devx_rq_destroy(rq_obj);
+	rte_errno = ret;
+	return -rte_errno;
+}
+
+/**
+ * Create Receive Queue using DevX API. Shared RQ is created only if rmp set.
+ *
+ * Get a pointer to partially initialized attributes structure, and updates the
+ * following fields:
+ *   wq_umem_valid
+ *   wq_umem_id
+ *   wq_umem_offset
+ *   dbr_umem_valid
+ *   dbr_umem_id
+ *   dbr_addr
+ *   log_wq_pg_sz
+ * All other fields are updated by caller.
+ *
+ * @param[in] ctx
+ *   Context returned from mlx5 open_device() glue function.
+ * @param[in/out] rq_obj
+ *   Pointer to RQ to create.
+ * @param[in] wqe_size
+ *   Size of WQE structure.
+ * @param[in] log_wqbb_n
+ *   Log of number of WQBBs in queue.
+ * @param[in] attr
+ *   Pointer to RQ attributes structure.
+ * @param[in] socket
+ *   Socket to use for allocation.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+int
+mlx5_devx_rq_create(void *ctx, struct mlx5_devx_rq *rq_obj,
+		    uint32_t wqe_size, uint16_t log_wqbb_n,
+		    struct mlx5_devx_create_rq_attr *attr, int socket)
+{
+	uint32_t umem_size, umem_dbrec;
+	int ret;
+
+	if (rq_obj->rmp == NULL)
+		ret = mlx5_devx_rq_std_create(ctx, rq_obj, wqe_size,
+					      log_wqbb_n, attr, socket);
+	else
+		ret = mlx5_devx_rq_shared_create(ctx, rq_obj, wqe_size,
+						 log_wqbb_n, attr, socket);
+	if (ret != 0)
+		return ret;
+	umem_size = wqe_size * (1 << log_wqbb_n);
+	umem_dbrec = RTE_ALIGN(umem_size, MLX5_DBR_SIZE);
+	rq_obj->db_rec = RTE_PTR_ADD(rq_obj->wq.umem_buf, umem_dbrec);
+	return 0;
+}
diff --git a/drivers/common/mlx5/mlx5_common_devx.h b/drivers/common/mlx5/mlx5_common_devx.h
index aad0184e5ac..328b6ce9324 100644
--- a/drivers/common/mlx5/mlx5_common_devx.h
+++ b/drivers/common/mlx5/mlx5_common_devx.h
@@ -33,11 +33,26 @@ struct mlx5_devx_sq {
 	volatile uint32_t *db_rec; /* The SQ doorbell record. */
 };
 
+/* DevX WQ resource structure, used by RQ and RMP. */
+struct mlx5_devx_wq_res {
+	void *umem_obj; /* The RQ umem object. */
+	volatile void *umem_buf;
+};
+
+/* DevX Receive Memory Pool structure. */
+struct mlx5_devx_rmp {
+	struct mlx5_devx_obj *rmp; /* The RMP DevX object. */
+	uint32_t ref_cnt; /* Reference count. */
+	struct mlx5_devx_wq_res wq;
+};
+
 /* DevX Receive Queue structure. */
 struct mlx5_devx_rq {
 	struct mlx5_devx_obj *rq; /* The RQ DevX object. */
-	void *umem_obj; /* The RQ umem object. */
-	volatile void *umem_buf;
+	union {
+		struct mlx5_devx_rmp *rmp; /* Shared RQ RMP object. */
+		struct mlx5_devx_wq_res wq; /* WQ resource of standalone RQ. */
+	};
 	volatile uint32_t *db_rec; /* The RQ doorbell record. */
 };
 
diff --git a/drivers/common/mlx5/mlx5_devx_cmds.c b/drivers/common/mlx5/mlx5_devx_cmds.c
index 56407cc332f..120331e9c87 100644
--- a/drivers/common/mlx5/mlx5_devx_cmds.c
+++ b/drivers/common/mlx5/mlx5_devx_cmds.c
@@ -766,6 +766,8 @@ mlx5_devx_cmd_query_hca_attr(void *ctx,
 			MLX5_GET(cmd_hca_cap, hcattr, flow_counter_bulk_alloc);
 	attr->flow_counters_dump = MLX5_GET(cmd_hca_cap, hcattr,
 					    flow_counters_dump);
+	attr->log_max_rmp = MLX5_GET(cmd_hca_cap, hcattr, log_max_rmp);
+	attr->mem_rq_rmp = MLX5_GET(cmd_hca_cap, hcattr, mem_rq_rmp);
 	attr->log_max_rqt_size = MLX5_GET(cmd_hca_cap, hcattr,
 					  log_max_rqt_size);
 	attr->eswitch_manager = MLX5_GET(cmd_hca_cap, hcattr, eswitch_manager);
@@ -1250,6 +1252,56 @@ mlx5_devx_cmd_modify_rq(struct mlx5_devx_obj *rq,
 }
 
 /**
+ * Create RMP using DevX API.
+ *
+ * @param[in] ctx
+ *   Context returned from mlx5 open_device() glue function.
+ * @param [in] rmp_attr
+ *   Pointer to create RMP attributes structure.
+ * @param [in] socket
+ *   CPU socket ID for allocations.
+ *
+ * @return
+ *   The DevX object created, NULL otherwise and rte_errno is set.
+ */
+struct mlx5_devx_obj *
+mlx5_devx_cmd_create_rmp(void *ctx,
+			 struct mlx5_devx_create_rmp_attr *rmp_attr,
+			 int socket)
+{
+	uint32_t in[MLX5_ST_SZ_DW(create_rmp_in)] = {0};
+	uint32_t out[MLX5_ST_SZ_DW(create_rmp_out)] = {0};
+	void *rmp_ctx, *wq_ctx;
+	struct mlx5_devx_wq_attr *wq_attr;
+	struct mlx5_devx_obj *rmp = NULL;
+
+	rmp = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*rmp), 0, socket);
+	if (!rmp) {
+		DRV_LOG(ERR, "Failed to allocate RMP data");
+		rte_errno = ENOMEM;
+		return NULL;
+	}
+	MLX5_SET(create_rmp_in, in, opcode, MLX5_CMD_OP_CREATE_RMP);
+	rmp_ctx = MLX5_ADDR_OF(create_rmp_in, in, ctx);
+	MLX5_SET(rmpc, rmp_ctx, state, rmp_attr->state);
+	MLX5_SET(rmpc, rmp_ctx, basic_cyclic_rcv_wqe,
+		 rmp_attr->basic_cyclic_rcv_wqe);
+	wq_ctx = MLX5_ADDR_OF(rmpc, rmp_ctx, wq);
+	wq_attr = &rmp_attr->wq_attr;
+	devx_cmd_fill_wq_data(wq_ctx, wq_attr);
+	rmp->obj = mlx5_glue->devx_obj_create(ctx, in, sizeof(in), out,
+					      sizeof(out));
+	if (!rmp->obj) {
+		DRV_LOG(ERR, "Failed to create RMP using DevX");
+		rte_errno = errno;
+		mlx5_free(rmp);
+		return NULL;
+	}
+	rmp->id = MLX5_GET(create_rmp_out, out, rmpn);
+	return rmp;
+}
+
+/**
  * Create TIR using DevX API.
  *
  * @param[in] ctx
diff --git a/drivers/common/mlx5/mlx5_devx_cmds.h b/drivers/common/mlx5/mlx5_devx_cmds.h
index e576e30f242..fa8ba89abe6 100644
--- a/drivers/common/mlx5/mlx5_devx_cmds.h
+++ b/drivers/common/mlx5/mlx5_devx_cmds.h
@@ -101,6 +101,8 @@ struct mlx5_hca_flow_attr {
 struct mlx5_hca_attr {
 	uint32_t eswitch_manager:1;
 	uint32_t flow_counters_dump:1;
+	uint32_t mem_rq_rmp:1;
+	uint32_t log_max_rmp:5;
 	uint32_t log_max_rqt_size:5;
 	uint32_t parse_graph_flex_node:1;
 	uint8_t flow_counter_bulk_alloc_bitmap;
@@ -245,6 +247,17 @@ struct mlx5_devx_modify_rq_attr {
 	uint32_t lwm:16; /* Contained WQ lwm. */
 };
 
+/* Create RMP attributes structure, used by create RMP operation. */
+struct mlx5_devx_create_rmp_attr {
+	uint32_t rsvd0:8;
+	uint32_t state:4;
+	uint32_t rsvd1:20;
+	uint32_t basic_cyclic_rcv_wqe:1;
+	uint32_t rsvd4:31;
+	uint32_t rsvd8[10];
+	struct mlx5_devx_wq_attr wq_attr;
+};
+
 struct mlx5_rx_hash_field_select {
 	uint32_t l3_prot_type:1;
 	uint32_t l4_prot_type:1;
@@ -520,6 +533,9 @@ __rte_internal
 int mlx5_devx_cmd_modify_rq(struct mlx5_devx_obj *rq,
 			    struct mlx5_devx_modify_rq_attr *rq_attr);
 __rte_internal
+struct mlx5_devx_obj *mlx5_devx_cmd_create_rmp(void *ctx,
+			struct mlx5_devx_create_rmp_attr *rq_attr, int socket);
+__rte_internal
 struct mlx5_devx_obj *mlx5_devx_cmd_create_tir(void *ctx,
 					   struct mlx5_devx_tir_attr *tir_attr);
 __rte_internal
diff --git a/drivers/common/mlx5/mlx5_prm.h b/drivers/common/mlx5/mlx5_prm.h
index 72af3710a8f..df0991ee402 100644
--- a/drivers/common/mlx5/mlx5_prm.h
+++ b/drivers/common/mlx5/mlx5_prm.h
@@ -1061,6 +1061,10 @@ enum {
 	MLX5_CMD_OP_CREATE_RQ = 0x908,
 	MLX5_CMD_OP_MODIFY_RQ = 0x909,
 	MLX5_CMD_OP_QUERY_RQ = 0x90b,
+	MLX5_CMD_OP_CREATE_RMP = 0x90c,
+	MLX5_CMD_OP_MODIFY_RMP = 0x90d,
+	MLX5_CMD_OP_DESTROY_RMP = 0x90e,
+	MLX5_CMD_OP_QUERY_RMP = 0x90f,
 	MLX5_CMD_OP_CREATE_TIS = 0x912,
 	MLX5_CMD_OP_QUERY_TIS = 0x915,
 	MLX5_CMD_OP_CREATE_RQT = 0x916,
@@ -1557,7 +1561,8 @@ struct mlx5_ifc_cmd_hca_cap_bits {
 	u8 reserved_at_378[0x3];
 	u8 log_max_tis[0x5];
 	u8 basic_cyclic_rcv_wqe[0x1];
-	u8 reserved_at_381[0x2];
+	u8 reserved_at_381[0x1];
+	u8 mem_rq_rmp[0x1];
 	u8 log_max_rmp[0x5];
 	u8 reserved_at_388[0x3];
 	u8 log_max_rqt[0x5];
@@ -2159,6 +2164,84 @@ struct mlx5_ifc_query_rq_in_bits {
 	u8 reserved_at_60[0x20];
 };
 
+enum {
+	MLX5_RMPC_STATE_RDY = 0x1,
+	MLX5_RMPC_STATE_ERR = 0x3,
+};
+
+struct mlx5_ifc_rmpc_bits {
+	u8 reserved_at_0[0x8];
+	u8 state[0x4];
+	u8 reserved_at_c[0x14];
+	u8 basic_cyclic_rcv_wqe[0x1];
+	u8 reserved_at_21[0x1f];
+	u8 reserved_at_40[0x140];
+	struct mlx5_ifc_wq_bits wq;
+};
+
+struct mlx5_ifc_query_rmp_out_bits {
+	u8 status[0x8];
+	u8 reserved_at_8[0x18];
+	u8 syndrome[0x20];
+	u8 reserved_at_40[0xc0];
+	struct mlx5_ifc_rmpc_bits rmp_context;
+};
+
+struct mlx5_ifc_query_rmp_in_bits {
+	u8 opcode[0x10];
+	u8 reserved_at_10[0x10];
+	u8 reserved_at_20[0x10];
+	u8 op_mod[0x10];
+	u8 reserved_at_40[0x8];
+	u8 rmpn[0x18];
+	u8 reserved_at_60[0x20];
+};
+
+struct mlx5_ifc_modify_rmp_out_bits {
+	u8 status[0x8];
+	u8 reserved_at_8[0x18];
+	u8 syndrome[0x20];
+	u8 reserved_at_40[0x40];
+};
+
+struct mlx5_ifc_rmp_bitmask_bits {
+	u8 reserved_at_0[0x20];
+	u8 reserved_at_20[0x1f];
+	u8 lwm[0x1];
+};
+
+struct mlx5_ifc_modify_rmp_in_bits {
+	u8 opcode[0x10];
+	u8 uid[0x10];
+	u8 reserved_at_20[0x10];
+	u8 op_mod[0x10];
+	u8 rmp_state[0x4];
+	u8 reserved_at_44[0x4];
+	u8 rmpn[0x18];
+	u8 reserved_at_60[0x20];
+	struct mlx5_ifc_rmp_bitmask_bits bitmask;
+	u8 reserved_at_c0[0x40];
+	struct mlx5_ifc_rmpc_bits ctx;
+};
+
+struct mlx5_ifc_create_rmp_out_bits {
+	u8 status[0x8];
+	u8 reserved_at_8[0x18];
+	u8 syndrome[0x20];
+	u8 reserved_at_40[0x8];
+	u8 rmpn[0x18];
+	u8 reserved_at_60[0x20];
+};
+
+struct mlx5_ifc_create_rmp_in_bits {
+	u8 opcode[0x10];
+	u8 uid[0x10];
+	u8 reserved_at_20[0x10];
+	u8 op_mod[0x10];
+	u8 reserved_at_40[0xc0];
+	struct mlx5_ifc_rmpc_bits ctx;
+};
+
 struct mlx5_ifc_create_tis_out_bits {
 	u8 status[0x8];
 	u8 reserved_at_8[0x18];
diff --git a/drivers/common/mlx5/version.map b/drivers/common/mlx5/version.map
index e5cb6b70604..40975078cc4 100644
--- a/drivers/common/mlx5/version.map
+++ b/drivers/common/mlx5/version.map
@@ -31,6 +31,7 @@ INTERNAL {
 	mlx5_devx_cmd_create_geneve_tlv_option;
 	mlx5_devx_cmd_create_import_kek_obj;
 	mlx5_devx_cmd_create_qp;
+	mlx5_devx_cmd_create_rmp;
 	mlx5_devx_cmd_create_rq;
 	mlx5_devx_cmd_create_rqt;
 	mlx5_devx_cmd_create_sq;
diff --git a/drivers/net/mlx5/mlx5_devx.c b/drivers/net/mlx5/mlx5_devx.c
index 447d6bafb93..4d479c19e6c 100644
--- a/drivers/net/mlx5/mlx5_devx.c
+++ b/drivers/net/mlx5/mlx5_devx.c
@@ -514,7 +514,7 @@ mlx5_rxq_devx_obj_new(struct rte_eth_dev *dev, uint16_t idx)
 	ret = mlx5_devx_modify_rq(tmpl, MLX5_RXQ_MOD_RST2RDY);
 	if (ret)
 		goto error;
-	rxq_data->wqes = (void *)(uintptr_t)tmpl->rq_obj.umem_buf;
+	rxq_data->wqes = (void *)(uintptr_t)tmpl->rq_obj.wq.umem_buf;
 	rxq_data->rq_db = (uint32_t *)(uintptr_t)tmpl->rq_obj.db_rec;
 	rxq_data->cq_arm_sn = 0;
 	rxq_data->cq_ci = 0;
-- 
2.33.0



* [dpdk-dev] [PATCH 03/11] net/mlx5: clean Rx queue code
  2021-09-26 11:18 [dpdk-dev] [PATCH 00/11] net/mlx5: support shared Rx queue Xueming Li
  2021-09-26 11:18 ` [dpdk-dev] [PATCH 01/11] common/mlx5: support receive queue user index Xueming Li
  2021-09-26 11:18 ` [dpdk-dev] [PATCH 02/11] common/mlx5: support receive memory pool Xueming Li
@ 2021-09-26 11:18 ` Xueming Li
  2021-09-26 11:18 ` [dpdk-dev] [PATCH 04/11] net/mlx5: split multiple packet Rq memory pool Xueming Li
                   ` (8 subsequent siblings)
  11 siblings, 0 replies; 29+ messages in thread
From: Xueming Li @ 2021-09-26 11:18 UTC (permalink / raw)
  To: dev; +Cc: xuemingl, Lior Margalit, Matan Azrad, Viacheslav Ovsiienko

Remove unused Rx queue code.

Signed-off-by: Xueming Li <xuemingl@nvidia.com>
---
 drivers/net/mlx5/mlx5_rxq.c | 8 ++------
 1 file changed, 2 insertions(+), 6 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index 396de327d11..7e97cdd4bc0 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -674,9 +674,7 @@ mlx5_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 		    struct rte_mempool *mp)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
-	struct mlx5_rxq_data *rxq = (*priv->rxqs)[idx];
-	struct mlx5_rxq_ctrl *rxq_ctrl =
-		container_of(rxq, struct mlx5_rxq_ctrl, rxq);
+	struct mlx5_rxq_ctrl *rxq_ctrl;
 	struct rte_eth_rxseg_split *rx_seg =
 				(struct rte_eth_rxseg_split *)conf->rx_seg;
 	struct rte_eth_rxseg_split rx_single = {.mp = mp};
@@ -743,9 +741,7 @@ mlx5_rx_hairpin_queue_setup(struct rte_eth_dev *dev, uint16_t idx,
 			    const struct rte_eth_hairpin_conf *hairpin_conf)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
-	struct mlx5_rxq_data *rxq = (*priv->rxqs)[idx];
-	struct mlx5_rxq_ctrl *rxq_ctrl =
-		container_of(rxq, struct mlx5_rxq_ctrl, rxq);
+	struct mlx5_rxq_ctrl *rxq_ctrl;
 	int res;
 
 	res = mlx5_rx_queue_pre_setup(dev, idx, &desc);
-- 
2.33.0



* [dpdk-dev] [PATCH 04/11] net/mlx5: split multiple packet Rq memory pool
  2021-09-26 11:18 [dpdk-dev] [PATCH 00/11] net/mlx5: support shared Rx queue Xueming Li
                   ` (2 preceding siblings ...)
  2021-09-26 11:18 ` [dpdk-dev] [PATCH 03/11] net/mlx5: clean Rx queue code Xueming Li
@ 2021-09-26 11:18 ` Xueming Li
  2021-09-26 11:18 ` [dpdk-dev] [PATCH 05/11] net/mlx5: split Rx queue Xueming Li
                   ` (7 subsequent siblings)
  11 siblings, 0 replies; 29+ messages in thread
From: Xueming Li @ 2021-09-26 11:18 UTC (permalink / raw)
  To: dev; +Cc: xuemingl, Lior Margalit, Matan Azrad, Viacheslav Ovsiienko

Port info is invisible to a shared Rx queue, so split the Multi-Packet
RQ (MPRQ) mempool from per-device to per-Rx-queue scope, and change the
pool flag to single-consumer (MEMPOOL_F_SC_GET).
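
With one pool per Rx queue, only that queue's datapath takes buffers
from the pool, which is what makes the single-consumer get flag safe. A
generic sketch (hypothetical name and sizing values, not driver code):

  #include <rte_mempool.h>

  /* Hypothetical per-queue pool; the name and sizes are examples only. */
  struct rte_mempool *mp = rte_mempool_create(
  	"port-0-queue-0-mprq",	/* unique per-queue name */
  	4096,			/* number of elements */
  	2048,			/* element size in bytes */
  	32,			/* per-lcore cache size */
  	0, NULL, NULL, NULL, NULL,
  	SOCKET_ID_ANY,		/* or the port NUMA node */
  	MEMPOOL_F_SC_GET);	/* single consumer: the owning queue */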

Signed-off-by: Xueming Li <xuemingl@nvidia.com>
---
 drivers/net/mlx5/mlx5.c         |   1 -
 drivers/net/mlx5/mlx5_rx.h      |   4 +-
 drivers/net/mlx5/mlx5_rxq.c     | 109 ++++++++++++--------------------
 drivers/net/mlx5/mlx5_trigger.c |  10 ++-
 4 files changed, 47 insertions(+), 77 deletions(-)

diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index f84e061fe71..3abb8c97e76 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -1602,7 +1602,6 @@ mlx5_dev_close(struct rte_eth_dev *dev)
 		mlx5_drop_action_destroy(dev);
 	if (priv->mreg_cp_tbl)
 		mlx5_hlist_destroy(priv->mreg_cp_tbl);
-	mlx5_mprq_free_mp(dev);
 	if (priv->sh->ct_mng)
 		mlx5_flow_aso_ct_mng_close(priv->sh);
 	mlx5_os_free_shared_dr(priv);
diff --git a/drivers/net/mlx5/mlx5_rx.h b/drivers/net/mlx5/mlx5_rx.h
index d44c8078dea..a8e0c3162b0 100644
--- a/drivers/net/mlx5/mlx5_rx.h
+++ b/drivers/net/mlx5/mlx5_rx.h
@@ -179,8 +179,8 @@ struct mlx5_rxq_ctrl {
 extern uint8_t rss_hash_default_key[];
 
 unsigned int mlx5_rxq_cqe_num(struct mlx5_rxq_data *rxq_data);
-int mlx5_mprq_free_mp(struct rte_eth_dev *dev);
-int mlx5_mprq_alloc_mp(struct rte_eth_dev *dev);
+int mlx5_mprq_free_mp(struct rte_eth_dev *dev, struct mlx5_rxq_ctrl *rxq_ctrl);
+int mlx5_mprq_alloc_mp(struct rte_eth_dev *dev, struct mlx5_rxq_ctrl *rxq_ctrl);
 int mlx5_rx_queue_start(struct rte_eth_dev *dev, uint16_t queue_id);
 int mlx5_rx_queue_stop(struct rte_eth_dev *dev, uint16_t queue_id);
 int mlx5_rx_queue_start_primary(struct rte_eth_dev *dev, uint16_t queue_id);
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index 7e97cdd4bc0..14de8d0e6a4 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -1087,7 +1087,7 @@ mlx5_mprq_buf_init(struct rte_mempool *mp, void *opaque_arg,
 }
 
 /**
- * Free mempool of Multi-Packet RQ.
+ * Free the per-queue mempool of Multi-Packet RQ.
  *
  * @param dev
  *   Pointer to Ethernet device.
@@ -1096,16 +1096,15 @@ mlx5_mprq_buf_init(struct rte_mempool *mp, void *opaque_arg,
  *   0 on success, negative errno value on failure.
  */
 int
-mlx5_mprq_free_mp(struct rte_eth_dev *dev)
+mlx5_mprq_free_mp(struct rte_eth_dev *dev, struct mlx5_rxq_ctrl *rxq_ctrl)
 {
-	struct mlx5_priv *priv = dev->data->dev_private;
-	struct rte_mempool *mp = priv->mprq_mp;
-	unsigned int i;
+	struct mlx5_rxq_data *rxq = &rxq_ctrl->rxq;
+	struct rte_mempool *mp = rxq->mprq_mp;
 
 	if (mp == NULL)
 		return 0;
-	DRV_LOG(DEBUG, "port %u freeing mempool (%s) for Multi-Packet RQ",
-		dev->data->port_id, mp->name);
+	DRV_LOG(DEBUG, "port %u queue %hu freeing mempool (%s) for Multi-Packet RQ",
+		dev->data->port_id, rxq->idx, mp->name);
 	/*
 	 * If a buffer in the pool has been externally attached to a mbuf and it
 	 * is still in use by application, destroying the Rx queue can spoil
@@ -1123,34 +1122,28 @@ mlx5_mprq_free_mp(struct rte_eth_dev *dev)
 		return -rte_errno;
 	}
 	rte_mempool_free(mp);
-	/* Unset mempool for each Rx queue. */
-	for (i = 0; i != priv->rxqs_n; ++i) {
-		struct mlx5_rxq_data *rxq = (*priv->rxqs)[i];
-
-		if (rxq == NULL)
-			continue;
-		rxq->mprq_mp = NULL;
-	}
-	priv->mprq_mp = NULL;
+	rxq->mprq_mp = NULL;
 	return 0;
 }
 
 /**
- * Allocate a mempool for Multi-Packet RQ. All configured Rx queues share the
- * mempool. If already allocated, reuse it if there're enough elements.
+ * Allocate a per-queue mempool for Multi-Packet RQ.
+ * If already allocated, reuse it if there are enough elements.
  * Otherwise, resize it.
  *
  * @param dev
  *   Pointer to Ethernet device.
+ * @param rxq_ctrl
+ *   Pointer to RXQ.
  *
  * @return
  *   0 on success, negative errno value on failure.
  */
 int
-mlx5_mprq_alloc_mp(struct rte_eth_dev *dev)
+mlx5_mprq_alloc_mp(struct rte_eth_dev *dev, struct mlx5_rxq_ctrl *rxq_ctrl)
 {
-	struct mlx5_priv *priv = dev->data->dev_private;
-	struct rte_mempool *mp = priv->mprq_mp;
+	struct mlx5_rxq_data *rxq = &rxq_ctrl->rxq;
+	struct rte_mempool *mp = rxq->mprq_mp;
 	char name[RTE_MEMPOOL_NAMESIZE];
 	unsigned int desc = 0;
 	unsigned int buf_len;
@@ -1158,28 +1151,15 @@ mlx5_mprq_alloc_mp(struct rte_eth_dev *dev)
 	unsigned int obj_size;
 	unsigned int strd_num_n = 0;
 	unsigned int strd_sz_n = 0;
-	unsigned int i;
-	unsigned int n_ibv = 0;
 
-	if (!mlx5_mprq_enabled(dev))
+	if (rxq_ctrl == NULL || rxq_ctrl->type != MLX5_RXQ_TYPE_STANDARD)
 		return 0;
-	/* Count the total number of descriptors configured. */
-	for (i = 0; i != priv->rxqs_n; ++i) {
-		struct mlx5_rxq_data *rxq = (*priv->rxqs)[i];
-		struct mlx5_rxq_ctrl *rxq_ctrl = container_of
-			(rxq, struct mlx5_rxq_ctrl, rxq);
-
-		if (rxq == NULL || rxq_ctrl->type != MLX5_RXQ_TYPE_STANDARD)
-			continue;
-		n_ibv++;
-		desc += 1 << rxq->elts_n;
-		/* Get the max number of strides. */
-		if (strd_num_n < rxq->strd_num_n)
-			strd_num_n = rxq->strd_num_n;
-		/* Get the max size of a stride. */
-		if (strd_sz_n < rxq->strd_sz_n)
-			strd_sz_n = rxq->strd_sz_n;
-	}
+	/* Number of descriptors configured. */
+	desc = 1 << rxq->elts_n;
+	/* Get the max number of strides. */
+	strd_num_n = rxq->strd_num_n;
+	/* Get the max size of a stride. */
+	strd_sz_n = rxq->strd_sz_n;
 	MLX5_ASSERT(strd_num_n && strd_sz_n);
 	buf_len = (1 << strd_num_n) * (1 << strd_sz_n);
 	obj_size = sizeof(struct mlx5_mprq_buf) + buf_len + (1 << strd_num_n) *
@@ -1196,7 +1176,7 @@ mlx5_mprq_alloc_mp(struct rte_eth_dev *dev)
 	 * this Mempool gets available again.
 	 */
 	desc *= 4;
-	obj_num = desc + MLX5_MPRQ_MP_CACHE_SZ * n_ibv;
+	obj_num = desc + MLX5_MPRQ_MP_CACHE_SZ;
 	/*
 	 * rte_mempool_create_empty() has sanity check to refuse large cache
 	 * size compared to the number of elements.
@@ -1209,50 +1189,41 @@ mlx5_mprq_alloc_mp(struct rte_eth_dev *dev)
 		DRV_LOG(DEBUG, "port %u mempool %s is being reused",
 			dev->data->port_id, mp->name);
 		/* Reuse. */
-		goto exit;
-	} else if (mp != NULL) {
-		DRV_LOG(DEBUG, "port %u mempool %s should be resized, freeing it",
-			dev->data->port_id, mp->name);
+		return 0;
+	}
+	if (mp != NULL) {
+		DRV_LOG(DEBUG, "port %u queue %u mempool %s should be resized, freeing it",
+			dev->data->port_id, rxq->idx, mp->name);
 		/*
 		 * If failed to free, which means it may be still in use, no way
 		 * but to keep using the existing one. On buffer underrun,
 		 * packets will be memcpy'd instead of external buffer
 		 * attachment.
 		 */
-		if (mlx5_mprq_free_mp(dev)) {
+		if (mlx5_mprq_free_mp(dev, rxq_ctrl) != 0) {
 			if (mp->elt_size >= obj_size)
-				goto exit;
+				return 0;
 			else
 				return -rte_errno;
 		}
 	}
-	snprintf(name, sizeof(name), "port-%u-mprq", dev->data->port_id);
+	snprintf(name, sizeof(name), "port-%u-queue-%hu-mprq",
+		 dev->data->port_id, rxq->idx);
 	mp = rte_mempool_create(name, obj_num, obj_size, MLX5_MPRQ_MP_CACHE_SZ,
 				0, NULL, NULL, mlx5_mprq_buf_init,
-				(void *)((uintptr_t)1 << strd_num_n),
-				dev->device->numa_node, 0);
+				(void *)(uintptr_t)(1 << strd_num_n),
+				dev->device->numa_node, MEMPOOL_F_SC_GET);
 	if (mp == NULL) {
 		DRV_LOG(ERR,
-			"port %u failed to allocate a mempool for"
+			"port %u queue %hu failed to allocate a mempool for"
 			" Multi-Packet RQ, count=%u, size=%u",
-			dev->data->port_id, obj_num, obj_size);
+			dev->data->port_id, rxq->idx, obj_num, obj_size);
 		rte_errno = ENOMEM;
 		return -rte_errno;
 	}
-	priv->mprq_mp = mp;
-exit:
-	/* Set mempool for each Rx queue. */
-	for (i = 0; i != priv->rxqs_n; ++i) {
-		struct mlx5_rxq_data *rxq = (*priv->rxqs)[i];
-		struct mlx5_rxq_ctrl *rxq_ctrl = container_of
-			(rxq, struct mlx5_rxq_ctrl, rxq);
-
-		if (rxq == NULL || rxq_ctrl->type != MLX5_RXQ_TYPE_STANDARD)
-			continue;
-		rxq->mprq_mp = mp;
-	}
-	DRV_LOG(INFO, "port %u Multi-Packet RQ is configured",
-		dev->data->port_id);
+	rxq->mprq_mp = mp;
+	DRV_LOG(INFO, "port %u queue %hu Multi-Packet RQ is configured",
+		dev->data->port_id, rxq->idx);
 	return 0;
 }
 
@@ -1717,8 +1688,10 @@ mlx5_rxq_release(struct rte_eth_dev *dev, uint16_t idx)
 		dev->data->rx_queue_state[idx] = RTE_ETH_QUEUE_STATE_STOPPED;
 	}
 	if (!__atomic_load_n(&rxq_ctrl->refcnt, __ATOMIC_RELAXED)) {
-		if (rxq_ctrl->type == MLX5_RXQ_TYPE_STANDARD)
+		if (rxq_ctrl->type == MLX5_RXQ_TYPE_STANDARD) {
 			mlx5_mr_btree_free(&rxq_ctrl->rxq.mr_ctrl.cache_bh);
+			mlx5_mprq_free_mp(dev, rxq_ctrl);
+		}
 		LIST_REMOVE(rxq_ctrl, next);
 		mlx5_free(rxq_ctrl);
 		(*priv->rxqs)[idx] = NULL;
diff --git a/drivers/net/mlx5/mlx5_trigger.c b/drivers/net/mlx5/mlx5_trigger.c
index c3adf5082e6..0753dbad053 100644
--- a/drivers/net/mlx5/mlx5_trigger.c
+++ b/drivers/net/mlx5/mlx5_trigger.c
@@ -138,11 +138,6 @@ mlx5_rxq_start(struct rte_eth_dev *dev)
 	unsigned int i;
 	int ret = 0;
 
-	/* Allocate/reuse/resize mempool for Multi-Packet RQ. */
-	if (mlx5_mprq_alloc_mp(dev)) {
-		/* Should not release Rx queues but return immediately. */
-		return -rte_errno;
-	}
 	DRV_LOG(DEBUG, "Port %u device_attr.max_qp_wr is %d.",
 		dev->data->port_id, priv->sh->device_attr.max_qp_wr);
 	DRV_LOG(DEBUG, "Port %u device_attr.max_sge is %d.",
@@ -153,8 +148,11 @@ mlx5_rxq_start(struct rte_eth_dev *dev)
 		if (!rxq_ctrl)
 			continue;
 		if (rxq_ctrl->type == MLX5_RXQ_TYPE_STANDARD) {
-			/* Pre-register Rx mempools. */
 			if (mlx5_rxq_mprq_enabled(&rxq_ctrl->rxq)) {
+				/* Allocate/reuse/resize mempool for MPRQ. */
+				if (mlx5_mprq_alloc_mp(dev, rxq_ctrl) < 0)
+					goto error;
+				/* Pre-register Rx mempools. */
 				mlx5_mr_update_mp(dev, &rxq_ctrl->rxq.mr_ctrl,
 						  rxq_ctrl->rxq.mprq_mp);
 			} else {
-- 
2.33.0



* [dpdk-dev] [PATCH 05/11] net/mlx5: split Rx queue
  2021-09-26 11:18 [dpdk-dev] [PATCH 00/11] net/mlx5: support shared Rx queue Xueming Li
                   ` (3 preceding siblings ...)
  2021-09-26 11:18 ` [dpdk-dev] [PATCH 04/11] net/mlx5: split multiple packet Rq memory pool Xueming Li
@ 2021-09-26 11:18 ` Xueming Li
  2021-09-26 11:18 ` [dpdk-dev] [PATCH 06/11] net/mlx5: move Rx queue reference count Xueming Li
                   ` (6 subsequent siblings)
  11 siblings, 0 replies; 29+ messages in thread
From: Xueming Li @ 2021-09-26 11:18 UTC (permalink / raw)
  To: dev; +Cc: xuemingl, Lior Margalit, Matan Azrad, Viacheslav Ovsiienko

To prepare for shared Rx queues, split the Rx queue data into shareable
and private parts.
Struct mlx5_rxq_priv holds the per-queue private data.
Struct mlx5_rxq_ctrl holds the shareable queue resources and data.
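
The resulting ownership model, sketched with the types this patch
declares in mlx5_rx.h ("priv" and "idx" as elsewhere in the driver):

  struct mlx5_rxq_priv *rxq = (*priv->rxq_privs)[idx]; /* private, per queue */
  struct mlx5_rxq_ctrl *ctrl = rxq->ctrl;	/* shareable resources and data */
  struct mlx5_rxq_data *data = &ctrl->rxq;	/* datapath part, embedded in ctrl */
  /* ctrl->owners lists every mlx5_rxq_priv sharing this control structure. */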

Signed-off-by: Xueming Li <xuemingl@nvidia.com>
---
 drivers/net/mlx5/mlx5.c        |  4 +++
 drivers/net/mlx5/mlx5.h        |  5 ++-
 drivers/net/mlx5/mlx5_ethdev.c | 10 ++++++
 drivers/net/mlx5/mlx5_rx.h     | 15 ++++++--
 drivers/net/mlx5/mlx5_rxq.c    | 66 ++++++++++++++++++++++++++++------
 5 files changed, 86 insertions(+), 14 deletions(-)

diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index 3abb8c97e76..749729d6fbe 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -1585,6 +1585,10 @@ mlx5_dev_close(struct rte_eth_dev *dev)
 		mlx5_free(dev->intr_handle);
 		dev->intr_handle = NULL;
 	}
+	if (priv->rxq_privs != NULL) {
+		mlx5_free(priv->rxq_privs);
+		priv->rxq_privs = NULL;
+	}
 	if (priv->txqs != NULL) {
 		/* XXX race condition if mlx5_tx_burst() is still running. */
 		rte_delay_us_sleep(1000);
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index e02714e2319..d06f828ed33 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -1335,6 +1335,8 @@ enum mlx5_txq_modify_type {
 	MLX5_TXQ_MOD_ERR2RDY, /* modify state from error to ready. */
 };
 
+struct mlx5_rxq_priv;
+
 /* HW objects operations structure. */
 struct mlx5_obj_ops {
 	int (*rxq_obj_modify_vlan_strip)(struct mlx5_rxq_obj *rxq_obj, int on);
@@ -1404,7 +1406,8 @@ struct mlx5_priv {
 	/* RX/TX queues. */
 	unsigned int rxqs_n; /* RX queues array size. */
 	unsigned int txqs_n; /* TX queues array size. */
-	struct mlx5_rxq_data *(*rxqs)[]; /* RX queues. */
+	struct mlx5_rxq_priv *(*rxq_privs)[]; /* RX queue non-shared data. */
+	struct mlx5_rxq_data *(*rxqs)[]; /* (Shared) RX queues. */
 	struct mlx5_txq_data *(*txqs)[]; /* TX queues. */
 	struct rte_mempool *mprq_mp; /* Mempool for Multi-Packet RQ. */
 	struct rte_eth_rss_conf rss_conf; /* RSS configuration. */
diff --git a/drivers/net/mlx5/mlx5_ethdev.c b/drivers/net/mlx5/mlx5_ethdev.c
index 82e2284d986..7071a5f7039 100644
--- a/drivers/net/mlx5/mlx5_ethdev.c
+++ b/drivers/net/mlx5/mlx5_ethdev.c
@@ -104,6 +104,16 @@ mlx5_dev_configure(struct rte_eth_dev *dev)
 	       MLX5_RSS_HASH_KEY_LEN);
 	priv->rss_conf.rss_key_len = MLX5_RSS_HASH_KEY_LEN;
 	priv->rss_conf.rss_hf = dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf;
+	priv->rxq_privs = mlx5_realloc(priv->rxq_privs,
+				       MLX5_MEM_ANY | MLX5_MEM_ZERO,
+				       sizeof(void *) * rxqs_n, 0,
+				       SOCKET_ID_ANY);
+	if (priv->rxq_privs == NULL) {
+		DRV_LOG(ERR, "port %u cannot allocate rxq private data",
+			dev->data->port_id);
+		rte_errno = ENOMEM;
+		return -rte_errno;
+	}
 	priv->rxqs = (void *)dev->data->rx_queues;
 	priv->txqs = (void *)dev->data->tx_queues;
 	if (txqs_n != priv->txqs_n) {
diff --git a/drivers/net/mlx5/mlx5_rx.h b/drivers/net/mlx5/mlx5_rx.h
index a8e0c3162b0..db6252e8e86 100644
--- a/drivers/net/mlx5/mlx5_rx.h
+++ b/drivers/net/mlx5/mlx5_rx.h
@@ -161,7 +161,9 @@ struct mlx5_rxq_ctrl {
 	struct mlx5_rxq_data rxq; /* Data path structure. */
 	LIST_ENTRY(mlx5_rxq_ctrl) next; /* Pointer to the next element. */
 	uint32_t refcnt; /* Reference counter. */
+	LIST_HEAD(priv, mlx5_rxq_priv) owners; /* Owner rxq list. */
 	struct mlx5_rxq_obj *obj; /* Verbs/DevX elements. */
+	struct mlx5_dev_ctx_shared *sh; /* Shared context. */
 	struct mlx5_priv *priv; /* Back pointer to private data. */
 	enum mlx5_rxq_type type; /* Rxq type. */
 	unsigned int socket; /* CPU socket ID for allocations. */
@@ -174,6 +176,14 @@ struct mlx5_rxq_ctrl {
 	uint32_t hairpin_status; /* Hairpin binding status. */
 };
 
+/* RX queue private data. */
+struct mlx5_rxq_priv {
+	uint16_t idx; /* Queue index. */
+	struct mlx5_rxq_ctrl *ctrl; /* Shared Rx Queue. */
+	LIST_ENTRY(mlx5_rxq_priv) owner_entry; /* Entry in shared rxq_ctrl. */
+	struct mlx5_priv *priv; /* Back pointer to private data. */
+};
+
 /* mlx5_rxq.c */
 
 extern uint8_t rss_hash_default_key[];
@@ -197,13 +207,14 @@ void mlx5_rx_intr_vec_disable(struct rte_eth_dev *dev);
 int mlx5_rx_intr_enable(struct rte_eth_dev *dev, uint16_t rx_queue_id);
 int mlx5_rx_intr_disable(struct rte_eth_dev *dev, uint16_t rx_queue_id);
 int mlx5_rxq_obj_verify(struct rte_eth_dev *dev);
-struct mlx5_rxq_ctrl *mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx,
+struct mlx5_rxq_ctrl *mlx5_rxq_new(struct rte_eth_dev *dev,
+				   struct mlx5_rxq_priv *rxq,
 				   uint16_t desc, unsigned int socket,
 				   const struct rte_eth_rxconf *conf,
 				   const struct rte_eth_rxseg_split *rx_seg,
 				   uint16_t n_seg);
 struct mlx5_rxq_ctrl *mlx5_rxq_hairpin_new
-	(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
+	(struct rte_eth_dev *dev, struct mlx5_rxq_priv *rxq, uint16_t desc,
 	 const struct rte_eth_hairpin_conf *hairpin_conf);
 struct mlx5_rxq_ctrl *mlx5_rxq_get(struct rte_eth_dev *dev, uint16_t idx);
 int mlx5_rxq_release(struct rte_eth_dev *dev, uint16_t idx);
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index 14de8d0e6a4..70e73690aa7 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -674,6 +674,7 @@ mlx5_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 		    struct rte_mempool *mp)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5_rxq_priv *rxq;
 	struct mlx5_rxq_ctrl *rxq_ctrl;
 	struct rte_eth_rxseg_split *rx_seg =
 				(struct rte_eth_rxseg_split *)conf->rx_seg;
@@ -708,10 +709,23 @@ mlx5_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 	res = mlx5_rx_queue_pre_setup(dev, idx, &desc);
 	if (res)
 		return res;
-	rxq_ctrl = mlx5_rxq_new(dev, idx, desc, socket, conf, rx_seg, n_seg);
+	rxq = mlx5_malloc(MLX5_MEM_ANY | MLX5_MEM_ZERO, sizeof(*rxq), 0,
+			  SOCKET_ID_ANY);
+	if (!rxq) {
+		DRV_LOG(ERR, "port %u unable to allocate rx queue index %u private data",
+			dev->data->port_id, idx);
+		rte_errno = ENOMEM;
+		return -rte_errno;
+	}
+	rxq->priv = priv;
+	rxq->idx = idx;
+	(*priv->rxq_privs)[idx] = rxq;
+	rxq_ctrl = mlx5_rxq_new(dev, rxq, desc, socket, conf, rx_seg, n_seg);
 	if (!rxq_ctrl) {
-		DRV_LOG(ERR, "port %u unable to allocate queue index %u",
+		DRV_LOG(ERR, "port %u unable to allocate rx queue index %u",
 			dev->data->port_id, idx);
+		mlx5_free(rxq);
+		(*priv->rxq_privs)[idx] = NULL;
 		rte_errno = ENOMEM;
 		return -rte_errno;
 	}
@@ -741,6 +755,7 @@ mlx5_rx_hairpin_queue_setup(struct rte_eth_dev *dev, uint16_t idx,
 			    const struct rte_eth_hairpin_conf *hairpin_conf)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5_rxq_priv *rxq;
 	struct mlx5_rxq_ctrl *rxq_ctrl;
 	int res;
 
@@ -776,14 +791,27 @@ mlx5_rx_hairpin_queue_setup(struct rte_eth_dev *dev, uint16_t idx,
 			return -rte_errno;
 		}
 	}
-	rxq_ctrl = mlx5_rxq_hairpin_new(dev, idx, desc, hairpin_conf);
+	rxq = mlx5_malloc(MLX5_MEM_ANY | MLX5_MEM_ZERO, sizeof(*rxq), 0,
+			  SOCKET_ID_ANY);
+	if (!rxq) {
+		DRV_LOG(ERR, "port %u unable to allocate hairpin rx queue index %u private data",
+			dev->data->port_id, idx);
+		rte_errno = ENOMEM;
+		return -rte_errno;
+	}
+	rxq->priv = priv;
+	rxq->idx = idx;
+	(*priv->rxq_privs)[idx] = rxq;
+	rxq_ctrl = mlx5_rxq_hairpin_new(dev, rxq, desc, hairpin_conf);
 	if (!rxq_ctrl) {
-		DRV_LOG(ERR, "port %u unable to allocate queue index %u",
+		DRV_LOG(ERR, "port %u unable to allocate hairpin queue index %u",
 			dev->data->port_id, idx);
+		mlx5_free(rxq);
+		(*priv->rxq_privs)[idx] = NULL;
 		rte_errno = ENOMEM;
 		return -rte_errno;
 	}
-	DRV_LOG(DEBUG, "port %u adding Rx queue %u to list",
+	DRV_LOG(DEBUG, "port %u adding hairpin Rx queue %u to list",
 		dev->data->port_id, idx);
 	(*priv->rxqs)[idx] = &rxq_ctrl->rxq;
 	return 0;
@@ -1274,8 +1302,8 @@ mlx5_max_lro_msg_size_adjust(struct rte_eth_dev *dev, uint16_t idx,
  *
  * @param dev
  *   Pointer to Ethernet device.
- * @param idx
- *   RX queue index.
+ * @param rxq
+ *   RX queue private data.
  * @param desc
  *   Number of descriptors to configure in queue.
  * @param socket
@@ -1285,10 +1313,12 @@ mlx5_max_lro_msg_size_adjust(struct rte_eth_dev *dev, uint16_t idx,
  *   A DPDK queue object on success, NULL otherwise and rte_errno is set.
  */
 struct mlx5_rxq_ctrl *
-mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
+mlx5_rxq_new(struct rte_eth_dev *dev, struct mlx5_rxq_priv *rxq,
+	     uint16_t desc,
 	     unsigned int socket, const struct rte_eth_rxconf *conf,
 	     const struct rte_eth_rxseg_split *rx_seg, uint16_t n_seg)
 {
+	uint16_t idx = rxq->idx;
 	struct mlx5_priv *priv = dev->data->dev_private;
 	struct mlx5_rxq_ctrl *tmpl;
 	unsigned int mb_len = rte_pktmbuf_data_room_size(rx_seg[0].mp);
@@ -1331,6 +1361,9 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 		rte_errno = ENOMEM;
 		return NULL;
 	}
+	LIST_INIT(&tmpl->owners);
+	rxq->ctrl = tmpl;
+	LIST_INSERT_HEAD(&tmpl->owners, rxq, owner_entry);
 	MLX5_ASSERT(n_seg && n_seg <= MLX5_MAX_RXQ_NSEG);
 	/*
 	 * Build the array of actual buffer offsets and lengths.
@@ -1564,6 +1597,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 	tmpl->rxq.rss_hash = !!priv->rss_conf.rss_hf &&
 		(!!(dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS));
 	tmpl->rxq.port_id = dev->data->port_id;
+	tmpl->sh = priv->sh;
 	tmpl->priv = priv;
 	tmpl->rxq.mp = rx_seg[0].mp;
 	tmpl->rxq.elts_n = log2above(desc);
@@ -1591,8 +1625,8 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
  *
  * @param dev
  *   Pointer to Ethernet device.
- * @param idx
- *   RX queue index.
+ * @param rxq
+ *   RX queue.
  * @param desc
  *   Number of descriptors to configure in queue.
  * @param hairpin_conf
@@ -1602,9 +1636,11 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
  *   A DPDK queue object on success, NULL otherwise and rte_errno is set.
  */
 struct mlx5_rxq_ctrl *
-mlx5_rxq_hairpin_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
+mlx5_rxq_hairpin_new(struct rte_eth_dev *dev, struct mlx5_rxq_priv *rxq,
+		     uint16_t desc,
 		     const struct rte_eth_hairpin_conf *hairpin_conf)
 {
+	uint16_t idx = rxq->idx;
 	struct mlx5_priv *priv = dev->data->dev_private;
 	struct mlx5_rxq_ctrl *tmpl;
 
@@ -1614,10 +1650,14 @@ mlx5_rxq_hairpin_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 		rte_errno = ENOMEM;
 		return NULL;
 	}
+	LIST_INIT(&tmpl->owners);
+	rxq->ctrl = tmpl;
+	LIST_INSERT_HEAD(&tmpl->owners, rxq, owner_entry);
 	tmpl->type = MLX5_RXQ_TYPE_HAIRPIN;
 	tmpl->socket = SOCKET_ID_ANY;
 	tmpl->rxq.rss_hash = 0;
 	tmpl->rxq.port_id = dev->data->port_id;
+	tmpl->sh = priv->sh;
 	tmpl->priv = priv;
 	tmpl->rxq.mp = NULL;
 	tmpl->rxq.elts_n = log2above(desc);
@@ -1671,6 +1711,7 @@ mlx5_rxq_release(struct rte_eth_dev *dev, uint16_t idx)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
 	struct mlx5_rxq_ctrl *rxq_ctrl;
+	struct mlx5_rxq_priv *rxq = (*priv->rxq_privs)[idx];
 
 	if (priv->rxqs == NULL || (*priv->rxqs)[idx] == NULL)
 		return 0;
@@ -1692,9 +1733,12 @@ mlx5_rxq_release(struct rte_eth_dev *dev, uint16_t idx)
 			mlx5_mr_btree_free(&rxq_ctrl->rxq.mr_ctrl.cache_bh);
 			mlx5_mprq_free_mp(dev, rxq_ctrl);
 		}
+		LIST_REMOVE(rxq, owner_entry);
 		LIST_REMOVE(rxq_ctrl, next);
 		mlx5_free(rxq_ctrl);
 		(*priv->rxqs)[idx] = NULL;
+		mlx5_free(rxq);
+		(*priv->rxq_privs)[idx] = NULL;
 	}
 	return 0;
 }
-- 
2.33.0



* [dpdk-dev] [PATCH 06/11] net/mlx5: move Rx queue reference count
  2021-09-26 11:18 [dpdk-dev] [PATCH 00/11] net/mlx5: support shared Rx queue Xueming Li
                   ` (4 preceding siblings ...)
  2021-09-26 11:18 ` [dpdk-dev] [PATCH 05/11] net/mlx5: split Rx queue Xueming Li
@ 2021-09-26 11:18 ` Xueming Li
  2021-09-26 11:19 ` [dpdk-dev] [PATCH 07/11] net/mlx5: move Rx queue hairpin info to private data Xueming Li
                   ` (5 subsequent siblings)
  11 siblings, 0 replies; 29+ messages in thread
From: Xueming Li @ 2021-09-26 11:18 UTC (permalink / raw)
  To: dev; +Cc: xuemingl, Lior Margalit, Matan Azrad, Viacheslav Ovsiienko

The Rx queue reference count tracks the users of an RQ, e.g. entries in
an RQ table. To prepare for shared Rx queues, move it from rxq_ctrl to
the Rx queue private data.
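
A usage sketch for the new helpers added below (hypothetical caller;
"dev" and "idx" as elsewhere in the driver):

  struct mlx5_rxq_priv *rxq = mlx5_rxq_ref(dev, idx); /* refcnt++ */

  if (rxq == NULL) {
  	rte_errno = EINVAL;	/* no such queue */
  	return -rte_errno;
  }
  /* ... use rxq->ctrl and rxq->ctrl->rxq ... */
  if (mlx5_rxq_deref(dev, idx) == 0) {
  	/* Last reference dropped; the queue may now be released. */
  }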

Signed-off-by: Xueming Li <xuemingl@nvidia.com>
---
 drivers/net/mlx5/mlx5_rx.h      |   8 +-
 drivers/net/mlx5/mlx5_rxq.c     | 173 +++++++++++++++++++++-----------
 drivers/net/mlx5/mlx5_trigger.c |  57 +++++------
 3 files changed, 144 insertions(+), 94 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_rx.h b/drivers/net/mlx5/mlx5_rx.h
index db6252e8e86..fe19414c130 100644
--- a/drivers/net/mlx5/mlx5_rx.h
+++ b/drivers/net/mlx5/mlx5_rx.h
@@ -160,7 +160,6 @@ enum mlx5_rxq_type {
 struct mlx5_rxq_ctrl {
 	struct mlx5_rxq_data rxq; /* Data path structure. */
 	LIST_ENTRY(mlx5_rxq_ctrl) next; /* Pointer to the next element. */
-	uint32_t refcnt; /* Reference counter. */
 	LIST_HEAD(priv, mlx5_rxq_priv) owners; /* Owner rxq list. */
 	struct mlx5_rxq_obj *obj; /* Verbs/DevX elements. */
 	struct mlx5_dev_ctx_shared *sh; /* Shared context. */
@@ -179,6 +178,7 @@ struct mlx5_rxq_ctrl {
 /* RX queue private data. */
 struct mlx5_rxq_priv {
 	uint16_t idx; /* Queue index. */
+	uint32_t refcnt; /* Reference counter. */
 	struct mlx5_rxq_ctrl *ctrl; /* Shared Rx Queue. */
 	LIST_ENTRY(mlx5_rxq_priv) owner_entry; /* Entry in shared rxq_ctrl. */
 	struct mlx5_priv *priv; /* Back pointer to private data. */
@@ -216,7 +216,11 @@ struct mlx5_rxq_ctrl *mlx5_rxq_new(struct rte_eth_dev *dev,
 struct mlx5_rxq_ctrl *mlx5_rxq_hairpin_new
 	(struct rte_eth_dev *dev, struct mlx5_rxq_priv *rxq, uint16_t desc,
 	 const struct rte_eth_hairpin_conf *hairpin_conf);
-struct mlx5_rxq_ctrl *mlx5_rxq_get(struct rte_eth_dev *dev, uint16_t idx);
+struct mlx5_rxq_priv *mlx5_rxq_ref(struct rte_eth_dev *dev, uint16_t idx);
+uint32_t mlx5_rxq_deref(struct rte_eth_dev *dev, uint16_t idx);
+struct mlx5_rxq_priv *mlx5_rxq_get(struct rte_eth_dev *dev, uint16_t idx);
+struct mlx5_rxq_ctrl *mlx5_rxq_ctrl_get(struct rte_eth_dev *dev, uint16_t idx);
+struct mlx5_rxq_data *mlx5_rxq_data_get(struct rte_eth_dev *dev, uint16_t idx);
 int mlx5_rxq_release(struct rte_eth_dev *dev, uint16_t idx);
 int mlx5_rxq_verify(struct rte_eth_dev *dev);
 int rxq_alloc_elts(struct mlx5_rxq_ctrl *rxq_ctrl);
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index 70e73690aa7..7f28646f55c 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -386,15 +386,13 @@ mlx5_get_rx_port_offloads(void)
 static int
 mlx5_rxq_releasable(struct rte_eth_dev *dev, uint16_t idx)
 {
-	struct mlx5_priv *priv = dev->data->dev_private;
-	struct mlx5_rxq_ctrl *rxq_ctrl;
+	struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, idx);
 
-	if (!(*priv->rxqs)[idx]) {
+	if (rxq == NULL) {
 		rte_errno = EINVAL;
 		return -rte_errno;
 	}
-	rxq_ctrl = container_of((*priv->rxqs)[idx], struct mlx5_rxq_ctrl, rxq);
-	return (__atomic_load_n(&rxq_ctrl->refcnt, __ATOMIC_RELAXED) == 1);
+	return (__atomic_load_n(&rxq->refcnt, __ATOMIC_RELAXED) == 1);
 }
 
 /* Fetches and drops all SW-owned and error CQEs to synchronize CQ. */
@@ -874,8 +872,8 @@ mlx5_rx_intr_vec_enable(struct rte_eth_dev *dev)
 	intr_handle->type = RTE_INTR_HANDLE_EXT;
 	for (i = 0; i != n; ++i) {
 		/* This rxq obj must not be released in this function. */
-		struct mlx5_rxq_ctrl *rxq_ctrl = mlx5_rxq_get(dev, i);
-		struct mlx5_rxq_obj *rxq_obj = rxq_ctrl ? rxq_ctrl->obj : NULL;
+		struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, i);
+		struct mlx5_rxq_obj *rxq_obj = rxq ? rxq->ctrl->obj : NULL;
 		int rc;
 
 		/* Skip queues that cannot request interrupts. */
@@ -885,11 +883,9 @@ mlx5_rx_intr_vec_enable(struct rte_eth_dev *dev)
 			intr_handle->intr_vec[i] =
 				RTE_INTR_VEC_RXTX_OFFSET +
 				RTE_MAX_RXTX_INTR_VEC_ID;
-			/* Decrease the rxq_ctrl's refcnt */
-			if (rxq_ctrl)
-				mlx5_rxq_release(dev, i);
 			continue;
 		}
+		mlx5_rxq_ref(dev, i);
 		if (count >= RTE_MAX_RXTX_INTR_VEC_ID) {
 			DRV_LOG(ERR,
 				"port %u too many Rx queues for interrupt"
@@ -949,7 +945,7 @@ mlx5_rx_intr_vec_disable(struct rte_eth_dev *dev)
 		 * Need to access directly the queue to release the reference
 		 * kept in mlx5_rx_intr_vec_enable().
 		 */
-		mlx5_rxq_release(dev, i);
+		mlx5_rxq_deref(dev, i);
 	}
 free:
 	rte_intr_free_epoll_fd(intr_handle);
@@ -998,19 +994,14 @@ mlx5_arm_cq(struct mlx5_rxq_data *rxq, int sq_n_rxq)
 int
 mlx5_rx_intr_enable(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 {
-	struct mlx5_rxq_ctrl *rxq_ctrl;
-
-	rxq_ctrl = mlx5_rxq_get(dev, rx_queue_id);
-	if (!rxq_ctrl)
+	struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, rx_queue_id);
+	if (!rxq)
 		goto error;
-	if (rxq_ctrl->irq) {
-		if (!rxq_ctrl->obj) {
-			mlx5_rxq_release(dev, rx_queue_id);
+	if (rxq->ctrl->irq) {
+		if (!rxq->ctrl->obj)
 			goto error;
-		}
-		mlx5_arm_cq(&rxq_ctrl->rxq, rxq_ctrl->rxq.cq_arm_sn);
+		mlx5_arm_cq(&rxq->ctrl->rxq, rxq->ctrl->rxq.cq_arm_sn);
 	}
-	mlx5_rxq_release(dev, rx_queue_id);
 	return 0;
 error:
 	rte_errno = EINVAL;
@@ -1032,23 +1023,21 @@ int
 mlx5_rx_intr_disable(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
-	struct mlx5_rxq_ctrl *rxq_ctrl;
+	struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, rx_queue_id);
 	int ret = 0;
 
-	rxq_ctrl = mlx5_rxq_get(dev, rx_queue_id);
-	if (!rxq_ctrl) {
+	if (!rxq) {
 		rte_errno = EINVAL;
 		return -rte_errno;
 	}
-	if (!rxq_ctrl->obj)
+	if (!rxq->ctrl->obj)
 		goto error;
-	if (rxq_ctrl->irq) {
-		ret = priv->obj_ops.rxq_event_get(rxq_ctrl->obj);
+	if (rxq->ctrl->irq) {
+		ret = priv->obj_ops.rxq_event_get(rxq->ctrl->obj);
 		if (ret < 0)
 			goto error;
-		rxq_ctrl->rxq.cq_arm_sn++;
+		rxq->ctrl->rxq.cq_arm_sn++;
 	}
-	mlx5_rxq_release(dev, rx_queue_id);
 	return 0;
 error:
 	/**
@@ -1059,12 +1048,9 @@ mlx5_rx_intr_disable(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 		rte_errno = errno;
 	else
 		rte_errno = EINVAL;
-	ret = rte_errno; /* Save rte_errno before cleanup. */
-	mlx5_rxq_release(dev, rx_queue_id);
-	if (ret != EAGAIN)
+	if (rte_errno != EAGAIN)
 		DRV_LOG(WARNING, "port %u unable to disable interrupt on Rx queue %d",
 			dev->data->port_id, rx_queue_id);
-	rte_errno = ret; /* Restore rte_errno. */
 	return -rte_errno;
 }
 
@@ -1611,7 +1597,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, struct mlx5_rxq_priv *rxq,
 	tmpl->rxq.uar_lock_cq = &priv->sh->uar_lock_cq;
 #endif
 	tmpl->rxq.idx = idx;
-	__atomic_fetch_add(&tmpl->refcnt, 1, __ATOMIC_RELAXED);
+	mlx5_rxq_ref(dev, idx);
 	LIST_INSERT_HEAD(&priv->rxqsctrl, tmpl, next);
 	return tmpl;
 error:
@@ -1665,11 +1651,53 @@ mlx5_rxq_hairpin_new(struct rte_eth_dev *dev, struct mlx5_rxq_priv *rxq,
 	tmpl->rxq.mr_ctrl.cache_bh = (struct mlx5_mr_btree) { 0 };
 	tmpl->hairpin_conf = *hairpin_conf;
 	tmpl->rxq.idx = idx;
-	__atomic_fetch_add(&tmpl->refcnt, 1, __ATOMIC_RELAXED);
+	mlx5_rxq_ref(dev, idx);
 	LIST_INSERT_HEAD(&priv->rxqsctrl, tmpl, next);
 	return tmpl;
 }
 
+/**
+ * Increase Rx queue reference count.
+ *
+ * @param dev
+ *   Pointer to Ethernet device.
+ * @param idx
+ *   RX queue index.
+ *
+ * @return
+ *   A pointer to the queue if it exists, NULL otherwise.
+ */
+inline struct mlx5_rxq_priv *
+mlx5_rxq_ref(struct rte_eth_dev *dev, uint16_t idx)
+{
+	struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, idx);
+
+	if (rxq != NULL)
+		__atomic_fetch_add(&rxq->refcnt, 1, __ATOMIC_RELAXED);
+	return rxq;
+}
+
+/**
+ * Dereference a Rx queue.
+ *
+ * @param dev
+ *   Pointer to Ethernet device.
+ * @param idx
+ *   RX queue index.
+ *
+ * @return
+ *   Updated reference count.
+ */
+inline uint32_t
+mlx5_rxq_deref(struct rte_eth_dev *dev, uint16_t idx)
+{
+	struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, idx);
+
+	if (rxq == NULL)
+		return 0;
+	return __atomic_sub_fetch(&rxq->refcnt, 1, __ATOMIC_RELAXED);
+}
+
 /**
  * Get a Rx queue.
  *
@@ -1681,18 +1709,52 @@ mlx5_rxq_hairpin_new(struct rte_eth_dev *dev, struct mlx5_rxq_priv *rxq,
  * @return
  *   A pointer to the queue if it exists, NULL otherwise.
  */
-struct mlx5_rxq_ctrl *
+inline struct mlx5_rxq_priv *
 mlx5_rxq_get(struct rte_eth_dev *dev, uint16_t idx)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
-	struct mlx5_rxq_data *rxq_data = (*priv->rxqs)[idx];
-	struct mlx5_rxq_ctrl *rxq_ctrl = NULL;
 
-	if (rxq_data) {
-		rxq_ctrl = container_of(rxq_data, struct mlx5_rxq_ctrl, rxq);
-		__atomic_fetch_add(&rxq_ctrl->refcnt, 1, __ATOMIC_RELAXED);
-	}
-	return rxq_ctrl;
+	if (priv->rxq_privs == NULL)
+		return NULL;
+	return (*priv->rxq_privs)[idx];
+}
+
+/**
+ * Get Rx queue shareable control.
+ *
+ * @param dev
+ *   Pointer to Ethernet device.
+ * @param idx
+ *   RX queue index.
+ *
+ * @return
+ *   A pointer to the queue control if it exists, NULL otherwise.
+ */
+inline struct mlx5_rxq_ctrl *
+mlx5_rxq_ctrl_get(struct rte_eth_dev *dev, uint16_t idx)
+{
+	struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, idx);
+
+	return rxq == NULL ? NULL : rxq->ctrl;
+}
+
+/**
+ * Get Rx queue shareable data.
+ *
+ * @param dev
+ *   Pointer to Ethernet device.
+ * @param idx
+ *   RX queue index.
+ *
+ * @return
+ *   A pointer to the queue data if it exists, NULL otherwise.
+ */
+inline struct mlx5_rxq_data *
+mlx5_rxq_data_get(struct rte_eth_dev *dev, uint16_t idx)
+{
+	struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, idx);
+
+	return rxq == NULL ? NULL : &rxq->ctrl->rxq;
 }
 
 /**
@@ -1710,13 +1772,12 @@ int
 mlx5_rxq_release(struct rte_eth_dev *dev, uint16_t idx)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
-	struct mlx5_rxq_ctrl *rxq_ctrl;
-	struct mlx5_rxq_priv *rxq = (*priv->rxq_privs)[idx];
+	struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, idx);
+	struct mlx5_rxq_ctrl *rxq_ctrl = rxq->ctrl;
 
 	if (priv->rxqs == NULL || (*priv->rxqs)[idx] == NULL)
 		return 0;
-	rxq_ctrl = container_of((*priv->rxqs)[idx], struct mlx5_rxq_ctrl, rxq);
-	if (__atomic_sub_fetch(&rxq_ctrl->refcnt, 1, __ATOMIC_RELAXED) > 1)
+	if (mlx5_rxq_deref(dev, idx) > 1)
 		return 1;
 	if (rxq_ctrl->obj) {
 		priv->obj_ops.rxq_obj_release(rxq_ctrl->obj);
@@ -1728,7 +1789,7 @@ mlx5_rxq_release(struct rte_eth_dev *dev, uint16_t idx)
 		rxq_free_elts(rxq_ctrl);
 		dev->data->rx_queue_state[idx] = RTE_ETH_QUEUE_STATE_STOPPED;
 	}
-	if (!__atomic_load_n(&rxq_ctrl->refcnt, __ATOMIC_RELAXED)) {
+	if (!__atomic_load_n(&rxq->refcnt, __ATOMIC_RELAXED)) {
 		if (rxq_ctrl->type == MLX5_RXQ_TYPE_STANDARD) {
 			mlx5_mr_btree_free(&rxq_ctrl->rxq.mr_ctrl.cache_bh);
 			mlx5_mprq_free_mp(dev, rxq_ctrl);
@@ -1908,7 +1969,7 @@ mlx5_ind_table_obj_release(struct rte_eth_dev *dev,
 		return 1;
 	priv->obj_ops.ind_table_destroy(ind_tbl);
 	for (i = 0; i != ind_tbl->queues_n; ++i)
-		claim_nonzero(mlx5_rxq_release(dev, ind_tbl->queues[i]));
+		claim_nonzero(mlx5_rxq_deref(dev, ind_tbl->queues[i]));
 	mlx5_free(ind_tbl);
 	return 0;
 }
@@ -1965,7 +2026,7 @@ mlx5_ind_table_obj_setup(struct rte_eth_dev *dev,
 			       log2above(priv->config.ind_table_max_size);
 
 	for (i = 0; i != queues_n; ++i) {
-		if (!mlx5_rxq_get(dev, queues[i])) {
+		if (mlx5_rxq_ref(dev, queues[i]) == NULL) {
 			ret = -rte_errno;
 			goto error;
 		}
@@ -1978,7 +2039,7 @@ mlx5_ind_table_obj_setup(struct rte_eth_dev *dev,
 error:
 	err = rte_errno;
 	for (j = 0; j < i; j++)
-		mlx5_rxq_release(dev, ind_tbl->queues[j]);
+		mlx5_rxq_deref(dev, ind_tbl->queues[j]);
 	rte_errno = err;
 	DRV_LOG(DEBUG, "Port %u cannot setup indirection table.",
 		dev->data->port_id);
@@ -2074,7 +2135,7 @@ mlx5_ind_table_obj_modify(struct rte_eth_dev *dev,
 			  bool standalone)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
-	unsigned int i, j;
+	unsigned int i;
 	int ret = 0, err;
 	const unsigned int n = rte_is_power_of_2(queues_n) ?
 			       log2above(queues_n) :
@@ -2094,15 +2155,11 @@ mlx5_ind_table_obj_modify(struct rte_eth_dev *dev,
 	ret = priv->obj_ops.ind_table_modify(dev, n, queues, queues_n, ind_tbl);
 	if (ret)
 		goto error;
-	for (j = 0; j < ind_tbl->queues_n; j++)
-		mlx5_rxq_release(dev, ind_tbl->queues[j]);
 	ind_tbl->queues_n = queues_n;
 	ind_tbl->queues = queues;
 	return 0;
 error:
 	err = rte_errno;
-	for (j = 0; j < i; j++)
-		mlx5_rxq_release(dev, queues[j]);
 	rte_errno = err;
 	DRV_LOG(DEBUG, "Port %u cannot setup indirection table.",
 		dev->data->port_id);
@@ -2135,7 +2192,7 @@ mlx5_ind_table_obj_attach(struct rte_eth_dev *dev,
 		return ret;
 	}
 	for (i = 0; i < ind_tbl->queues_n; i++)
-		mlx5_rxq_get(dev, ind_tbl->queues[i]);
+		mlx5_rxq_ref(dev, ind_tbl->queues[i]);
 	return 0;
 }
 
@@ -2172,7 +2229,7 @@ mlx5_ind_table_obj_detach(struct rte_eth_dev *dev,
 		return ret;
 	}
 	for (i = 0; i < ind_tbl->queues_n; i++)
-		mlx5_rxq_release(dev, ind_tbl->queues[i]);
+		mlx5_rxq_deref(dev, ind_tbl->queues[i]);
 	return ret;
 }
 
diff --git a/drivers/net/mlx5/mlx5_trigger.c b/drivers/net/mlx5/mlx5_trigger.c
index 0753dbad053..a49254c96f6 100644
--- a/drivers/net/mlx5/mlx5_trigger.c
+++ b/drivers/net/mlx5/mlx5_trigger.c
@@ -143,10 +143,12 @@ mlx5_rxq_start(struct rte_eth_dev *dev)
 	DRV_LOG(DEBUG, "Port %u device_attr.max_sge is %d.",
 		dev->data->port_id, priv->sh->device_attr.max_sge);
 	for (i = 0; i != priv->rxqs_n; ++i) {
-		struct mlx5_rxq_ctrl *rxq_ctrl = mlx5_rxq_get(dev, i);
+		struct mlx5_rxq_priv *rxq = mlx5_rxq_ref(dev, i);
+		struct mlx5_rxq_ctrl *rxq_ctrl;
 
-		if (!rxq_ctrl)
+		if (rxq == NULL)
 			continue;
+		rxq_ctrl = rxq->ctrl;
 		if (rxq_ctrl->type == MLX5_RXQ_TYPE_STANDARD) {
 			if (mlx5_rxq_mprq_enabled(&rxq_ctrl->rxq)) {
 				/* Allocate/reuse/resize mempool for MPRQ. */
@@ -215,6 +217,7 @@ mlx5_hairpin_auto_bind(struct rte_eth_dev *dev)
 	struct mlx5_devx_modify_sq_attr sq_attr = { 0 };
 	struct mlx5_devx_modify_rq_attr rq_attr = { 0 };
 	struct mlx5_txq_ctrl *txq_ctrl;
+	struct mlx5_rxq_priv *rxq;
 	struct mlx5_rxq_ctrl *rxq_ctrl;
 	struct mlx5_devx_obj *sq;
 	struct mlx5_devx_obj *rq;
@@ -259,9 +262,8 @@ mlx5_hairpin_auto_bind(struct rte_eth_dev *dev)
 			return -rte_errno;
 		}
 		sq = txq_ctrl->obj->sq;
-		rxq_ctrl = mlx5_rxq_get(dev,
-					txq_ctrl->hairpin_conf.peers[0].queue);
-		if (!rxq_ctrl) {
+		rxq = mlx5_rxq_get(dev, txq_ctrl->hairpin_conf.peers[0].queue);
+		if (rxq == NULL) {
 			mlx5_txq_release(dev, i);
 			rte_errno = EINVAL;
 			DRV_LOG(ERR, "port %u no rxq object found: %d",
@@ -269,6 +271,7 @@ mlx5_hairpin_auto_bind(struct rte_eth_dev *dev)
 				txq_ctrl->hairpin_conf.peers[0].queue);
 			return -rte_errno;
 		}
+		rxq_ctrl = rxq->ctrl;
 		if (rxq_ctrl->type != MLX5_RXQ_TYPE_HAIRPIN ||
 		    rxq_ctrl->hairpin_conf.peers[0].queue != i) {
 			rte_errno = ENOMEM;
@@ -303,12 +306,10 @@ mlx5_hairpin_auto_bind(struct rte_eth_dev *dev)
 		rxq_ctrl->hairpin_status = 1;
 		txq_ctrl->hairpin_status = 1;
 		mlx5_txq_release(dev, i);
-		mlx5_rxq_release(dev, txq_ctrl->hairpin_conf.peers[0].queue);
 	}
 	return 0;
 error:
 	mlx5_txq_release(dev, i);
-	mlx5_rxq_release(dev, txq_ctrl->hairpin_conf.peers[0].queue);
 	return -rte_errno;
 }
 
@@ -381,27 +382,26 @@ mlx5_hairpin_queue_peer_update(struct rte_eth_dev *dev, uint16_t peer_queue,
 		peer_info->manual_bind = txq_ctrl->hairpin_conf.manual_bind;
 		mlx5_txq_release(dev, peer_queue);
 	} else { /* Peer port used as ingress. */
+		struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, peer_queue);
 		struct mlx5_rxq_ctrl *rxq_ctrl;
 
-		rxq_ctrl = mlx5_rxq_get(dev, peer_queue);
-		if (rxq_ctrl == NULL) {
+		if (rxq == NULL) {
 			rte_errno = EINVAL;
 			DRV_LOG(ERR, "Failed to get port %u Rx queue %d",
 				dev->data->port_id, peer_queue);
 			return -rte_errno;
 		}
+		rxq_ctrl = rxq->ctrl;
 		if (rxq_ctrl->type != MLX5_RXQ_TYPE_HAIRPIN) {
 			rte_errno = EINVAL;
 			DRV_LOG(ERR, "port %u queue %d is not a hairpin Rxq",
 				dev->data->port_id, peer_queue);
-			mlx5_rxq_release(dev, peer_queue);
 			return -rte_errno;
 		}
 		if (rxq_ctrl->obj == NULL || rxq_ctrl->obj->rq == NULL) {
 			rte_errno = ENOMEM;
 			DRV_LOG(ERR, "port %u no Rxq object found: %d",
 				dev->data->port_id, peer_queue);
-			mlx5_rxq_release(dev, peer_queue);
 			return -rte_errno;
 		}
 		peer_info->qp_id = rxq_ctrl->obj->rq->id;
@@ -409,7 +409,6 @@ mlx5_hairpin_queue_peer_update(struct rte_eth_dev *dev, uint16_t peer_queue,
 		peer_info->peer_q = rxq_ctrl->hairpin_conf.peers[0].queue;
 		peer_info->tx_explicit = rxq_ctrl->hairpin_conf.tx_explicit;
 		peer_info->manual_bind = rxq_ctrl->hairpin_conf.manual_bind;
-		mlx5_rxq_release(dev, peer_queue);
 	}
 	return 0;
 }
@@ -508,34 +507,32 @@ mlx5_hairpin_queue_peer_bind(struct rte_eth_dev *dev, uint16_t cur_queue,
 			txq_ctrl->hairpin_status = 1;
 		mlx5_txq_release(dev, cur_queue);
 	} else {
+		struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, cur_queue);
 		struct mlx5_rxq_ctrl *rxq_ctrl;
 		struct mlx5_devx_modify_rq_attr rq_attr = { 0 };
 
-		rxq_ctrl = mlx5_rxq_get(dev, cur_queue);
-		if (rxq_ctrl == NULL) {
+		if (rxq == NULL) {
 			rte_errno = EINVAL;
 			DRV_LOG(ERR, "Failed to get port %u Rx queue %d",
 				dev->data->port_id, cur_queue);
 			return -rte_errno;
 		}
+		rxq_ctrl = rxq->ctrl;
 		if (rxq_ctrl->type != MLX5_RXQ_TYPE_HAIRPIN) {
 			rte_errno = EINVAL;
 			DRV_LOG(ERR, "port %u queue %d not a hairpin Rxq",
 				dev->data->port_id, cur_queue);
-			mlx5_rxq_release(dev, cur_queue);
 			return -rte_errno;
 		}
 		if (rxq_ctrl->obj == NULL || rxq_ctrl->obj->rq == NULL) {
 			rte_errno = ENOMEM;
 			DRV_LOG(ERR, "port %u no Rxq object found: %d",
 				dev->data->port_id, cur_queue);
-			mlx5_rxq_release(dev, cur_queue);
 			return -rte_errno;
 		}
 		if (rxq_ctrl->hairpin_status != 0) {
 			DRV_LOG(DEBUG, "port %u Rx queue %d is already bound",
 				dev->data->port_id, cur_queue);
-			mlx5_rxq_release(dev, cur_queue);
 			return 0;
 		}
 		if (peer_info->tx_explicit !=
@@ -543,7 +540,6 @@ mlx5_hairpin_queue_peer_bind(struct rte_eth_dev *dev, uint16_t cur_queue,
 			rte_errno = EINVAL;
 			DRV_LOG(ERR, "port %u Rx queue %d and peer Tx rule mode"
 				" mismatch", dev->data->port_id, cur_queue);
-			mlx5_rxq_release(dev, cur_queue);
 			return -rte_errno;
 		}
 		if (peer_info->manual_bind !=
@@ -551,7 +547,6 @@ mlx5_hairpin_queue_peer_bind(struct rte_eth_dev *dev, uint16_t cur_queue,
 			rte_errno = EINVAL;
 			DRV_LOG(ERR, "port %u Rx queue %d and peer binding mode"
 				" mismatch", dev->data->port_id, cur_queue);
-			mlx5_rxq_release(dev, cur_queue);
 			return -rte_errno;
 		}
 		rq_attr.state = MLX5_SQC_STATE_RDY;
@@ -561,7 +556,6 @@ mlx5_hairpin_queue_peer_bind(struct rte_eth_dev *dev, uint16_t cur_queue,
 		ret = mlx5_devx_cmd_modify_rq(rxq_ctrl->obj->rq, &rq_attr);
 		if (ret == 0)
 			rxq_ctrl->hairpin_status = 1;
-		mlx5_rxq_release(dev, cur_queue);
 	}
 	return ret;
 }
@@ -626,34 +620,32 @@ mlx5_hairpin_queue_peer_unbind(struct rte_eth_dev *dev, uint16_t cur_queue,
 			txq_ctrl->hairpin_status = 0;
 		mlx5_txq_release(dev, cur_queue);
 	} else {
+		struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, cur_queue);
 		struct mlx5_rxq_ctrl *rxq_ctrl;
 		struct mlx5_devx_modify_rq_attr rq_attr = { 0 };
 
-		rxq_ctrl = mlx5_rxq_get(dev, cur_queue);
-		if (rxq_ctrl == NULL) {
+		if (rxq == NULL) {
 			rte_errno = EINVAL;
 			DRV_LOG(ERR, "Failed to get port %u Rx queue %d",
 				dev->data->port_id, cur_queue);
 			return -rte_errno;
 		}
+		rxq_ctrl = rxq->ctrl;
 		if (rxq_ctrl->type != MLX5_RXQ_TYPE_HAIRPIN) {
 			rte_errno = EINVAL;
 			DRV_LOG(ERR, "port %u queue %d not a hairpin Rxq",
 				dev->data->port_id, cur_queue);
-			mlx5_rxq_release(dev, cur_queue);
 			return -rte_errno;
 		}
 		if (rxq_ctrl->hairpin_status == 0) {
 			DRV_LOG(DEBUG, "port %u Rx queue %d is already unbound",
 				dev->data->port_id, cur_queue);
-			mlx5_rxq_release(dev, cur_queue);
 			return 0;
 		}
 		if (rxq_ctrl->obj == NULL || rxq_ctrl->obj->rq == NULL) {
 			rte_errno = ENOMEM;
 			DRV_LOG(ERR, "port %u no Rxq object found: %d",
 				dev->data->port_id, cur_queue);
-			mlx5_rxq_release(dev, cur_queue);
 			return -rte_errno;
 		}
 		rq_attr.state = MLX5_SQC_STATE_RST;
@@ -661,7 +653,6 @@ mlx5_hairpin_queue_peer_unbind(struct rte_eth_dev *dev, uint16_t cur_queue,
 		ret = mlx5_devx_cmd_modify_rq(rxq_ctrl->obj->rq, &rq_attr);
 		if (ret == 0)
 			rxq_ctrl->hairpin_status = 0;
-		mlx5_rxq_release(dev, cur_queue);
 	}
 	return ret;
 }
@@ -963,7 +954,6 @@ mlx5_hairpin_get_peer_ports(struct rte_eth_dev *dev, uint16_t *peer_ports,
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
 	struct mlx5_txq_ctrl *txq_ctrl;
-	struct mlx5_rxq_ctrl *rxq_ctrl;
 	uint32_t i;
 	uint16_t pp;
 	uint32_t bits[(RTE_MAX_ETHPORTS + 31) / 32] = {0};
@@ -992,24 +982,23 @@ mlx5_hairpin_get_peer_ports(struct rte_eth_dev *dev, uint16_t *peer_ports,
 		}
 	} else {
 		for (i = 0; i < priv->rxqs_n; i++) {
-			rxq_ctrl = mlx5_rxq_get(dev, i);
-			if (!rxq_ctrl)
+			struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, i);
+			struct mlx5_rxq_ctrl *rxq_ctrl;
+
+			if (rxq == NULL)
 				continue;
-			if (rxq_ctrl->type != MLX5_RXQ_TYPE_HAIRPIN) {
-				mlx5_rxq_release(dev, i);
+			rxq_ctrl = rxq->ctrl;
+			if (rxq_ctrl->type != MLX5_RXQ_TYPE_HAIRPIN)
 				continue;
-			}
 			pp = rxq_ctrl->hairpin_conf.peers[0].port;
 			if (pp >= RTE_MAX_ETHPORTS) {
 				rte_errno = ERANGE;
-				mlx5_rxq_release(dev, i);
 				DRV_LOG(ERR, "port %hu queue %u peer port "
 					"out of range %hu",
 					priv->dev_data->port_id, i, pp);
 				return -rte_errno;
 			}
 			bits[pp / 32] |= 1 << (pp % 32);
-			mlx5_rxq_release(dev, i);
 		}
 	}
 	for (i = 0; i < RTE_MAX_ETHPORTS; i++) {
-- 
2.33.0
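
For orientation, below is a minimal stand-alone sketch of the get/ref/deref
split that the hunks above introduce: mlx5_rxq_get() becomes a plain lookup
with no side effect, while mlx5_rxq_ref()/mlx5_rxq_deref() adjust the
per-queue reference count. The helper names come from the patch; everything
else (types, array, main) is a simplified stand-in for the driver, not the
actual mlx5 code.

#include <stdatomic.h>
#include <stdio.h>
#include <stdlib.h>

struct rxq_priv {
	atomic_uint refcnt;	/* Per-queue count (moved out of ctrl). */
	int idx;
};

static struct rxq_priv *queues[4];	/* Stand-in for (*priv->rxq_privs). */

/* Plain lookup: no side effect on the reference count. */
static struct rxq_priv *rxq_get(int idx) { return queues[idx]; }

/* Look up and take a reference, as mlx5_rxq_ref() does. */
static struct rxq_priv *rxq_ref(int idx)
{
	struct rxq_priv *rxq = rxq_get(idx);

	if (rxq != NULL)
		atomic_fetch_add_explicit(&rxq->refcnt, 1,
					  memory_order_relaxed);
	return rxq;
}

/* Drop a reference and return the updated count, as mlx5_rxq_deref() does. */
static unsigned int rxq_deref(int idx)
{
	struct rxq_priv *rxq = rxq_get(idx);

	if (rxq == NULL)
		return 0;
	/* fetch_sub returns the old value; subtract 1 for the new one. */
	return atomic_fetch_sub_explicit(&rxq->refcnt, 1,
					 memory_order_relaxed) - 1;
}

int main(void)
{
	queues[0] = calloc(1, sizeof(*queues[0]));
	atomic_init(&queues[0]->refcnt, 0);
	rxq_ref(0);			/* e.g. indirection table takes a ref */
	rxq_ref(0);			/* e.g. interrupt vector takes a ref  */
	printf("after deref: %u\n", rxq_deref(0));	/* prints 1 */
	printf("after deref: %u\n", rxq_deref(0));	/* prints 0 */
	free(queues[0]);
	return 0;
}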


^ permalink raw reply	[flat|nested] 29+ messages in thread

* [dpdk-dev] [PATCH 07/11] net/mlx5: move Rx queue hairpin info to private data
  2021-09-26 11:18 [dpdk-dev] [PATCH 00/11] net/mlx5: support shared Rx queue Xueming Li
                   ` (5 preceding siblings ...)
  2021-09-26 11:18 ` [dpdk-dev] [PATCH 06/11] net/mlx5: move Rx queue reference count Xueming Li
@ 2021-09-26 11:19 ` Xueming Li
  2021-09-26 11:19 ` [dpdk-dev] [PATCH 08/11] net/mlx5: remove port info from shareable Rx queue Xueming Li
                   ` (4 subsequent siblings)
  11 siblings, 0 replies; 29+ messages in thread
From: Xueming Li @ 2021-09-26 11:19 UTC (permalink / raw)
  To: dev; +Cc: xuemingl, Lior Margalit, Matan Azrad, Viacheslav Ovsiienko

Hairpin info of an Rx queue can't be shared, so move it to the private
queue data.
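
A minimal layout sketch of where the hairpin fields live after this patch
(simplified stand-in types, not the real mlx5 structures): the shareable
control keeps only state that all owners may share, while each owning queue
carries its own hairpin configuration and binding status.

#include <stdint.h>

struct hairpin_conf {
	uint16_t peer_queue;
	uint32_t tx_explicit:1;
	uint32_t manual_bind:1;
};

struct rxq_ctrl {			/* Shareable across queues. */
	uint32_t wqn;
	uint16_t dump_file_n;
	/* hairpin_conf / hairpin_status no longer live here. */
};

struct rxq_priv {			/* One instance per owning queue. */
	uint16_t idx;
	struct rxq_ctrl *ctrl;
	struct hairpin_conf hairpin_conf;	/* Moved from rxq_ctrl. */
	uint32_t hairpin_status;		/* Moved from rxq_ctrl. */
};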

Signed-off-by: Xueming Li <xuemingl@nvidia.com>
---
 drivers/net/mlx5/mlx5_rx.h      |  4 ++--
 drivers/net/mlx5/mlx5_rxq.c     | 13 +++++--------
 drivers/net/mlx5/mlx5_trigger.c | 24 ++++++++++++------------
 3 files changed, 19 insertions(+), 22 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_rx.h b/drivers/net/mlx5/mlx5_rx.h
index fe19414c130..2ed544556f5 100644
--- a/drivers/net/mlx5/mlx5_rx.h
+++ b/drivers/net/mlx5/mlx5_rx.h
@@ -171,8 +171,6 @@ struct mlx5_rxq_ctrl {
 	uint32_t flow_tunnels_n[MLX5_FLOW_TUNNEL]; /* Tunnels counters. */
 	uint32_t wqn; /* WQ number. */
 	uint16_t dump_file_n; /* Number of dump files. */
-	struct rte_eth_hairpin_conf hairpin_conf; /* Hairpin configuration. */
-	uint32_t hairpin_status; /* Hairpin binding status. */
 };
 
 /* RX queue private data. */
@@ -182,6 +180,8 @@ struct mlx5_rxq_priv {
 	struct mlx5_rxq_ctrl *ctrl; /* Shared Rx Queue. */
 	LIST_ENTRY(mlx5_rxq_priv) owner_entry; /* Entry in shared rxq_ctrl. */
 	struct mlx5_priv *priv; /* Back pointer to private data. */
+	struct rte_eth_hairpin_conf hairpin_conf; /* Hairpin configuration. */
+	uint32_t hairpin_status; /* Hairpin binding status. */
 };
 
 /* mlx5_rxq.c */
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index 7f28646f55c..21cb1000899 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -1649,8 +1649,8 @@ mlx5_rxq_hairpin_new(struct rte_eth_dev *dev, struct mlx5_rxq_priv *rxq,
 	tmpl->rxq.elts_n = log2above(desc);
 	tmpl->rxq.elts = NULL;
 	tmpl->rxq.mr_ctrl.cache_bh = (struct mlx5_mr_btree) { 0 };
-	tmpl->hairpin_conf = *hairpin_conf;
 	tmpl->rxq.idx = idx;
+	rxq->hairpin_conf = *hairpin_conf;
 	mlx5_rxq_ref(dev, idx);
 	LIST_INSERT_HEAD(&priv->rxqsctrl, tmpl, next);
 	return tmpl;
@@ -1869,14 +1869,11 @@ const struct rte_eth_hairpin_conf *
 mlx5_rxq_get_hairpin_conf(struct rte_eth_dev *dev, uint16_t idx)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
-	struct mlx5_rxq_ctrl *rxq_ctrl = NULL;
+	struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, idx);
 
-	if (idx < priv->rxqs_n && (*priv->rxqs)[idx]) {
-		rxq_ctrl = container_of((*priv->rxqs)[idx],
-					struct mlx5_rxq_ctrl,
-					rxq);
-		if (rxq_ctrl->type == MLX5_RXQ_TYPE_HAIRPIN)
-			return &rxq_ctrl->hairpin_conf;
+	if (idx < priv->rxqs_n && rxq != NULL) {
+		if (rxq->ctrl->type == MLX5_RXQ_TYPE_HAIRPIN)
+			return &rxq->hairpin_conf;
 	}
 	return NULL;
 }
diff --git a/drivers/net/mlx5/mlx5_trigger.c b/drivers/net/mlx5/mlx5_trigger.c
index a49254c96f6..f376f4d6fc4 100644
--- a/drivers/net/mlx5/mlx5_trigger.c
+++ b/drivers/net/mlx5/mlx5_trigger.c
@@ -273,7 +273,7 @@ mlx5_hairpin_auto_bind(struct rte_eth_dev *dev)
 		}
 		rxq_ctrl = rxq->ctrl;
 		if (rxq_ctrl->type != MLX5_RXQ_TYPE_HAIRPIN ||
-		    rxq_ctrl->hairpin_conf.peers[0].queue != i) {
+		    rxq->hairpin_conf.peers[0].queue != i) {
 			rte_errno = ENOMEM;
 			DRV_LOG(ERR, "port %u Tx queue %d can't be binded to "
 				"Rx queue %d", dev->data->port_id,
@@ -303,7 +303,7 @@ mlx5_hairpin_auto_bind(struct rte_eth_dev *dev)
 		if (ret)
 			goto error;
 		/* Qs with auto-bind will be destroyed directly. */
-		rxq_ctrl->hairpin_status = 1;
+		rxq->hairpin_status = 1;
 		txq_ctrl->hairpin_status = 1;
 		mlx5_txq_release(dev, i);
 	}
@@ -406,9 +406,9 @@ mlx5_hairpin_queue_peer_update(struct rte_eth_dev *dev, uint16_t peer_queue,
 		}
 		peer_info->qp_id = rxq_ctrl->obj->rq->id;
 		peer_info->vhca_id = priv->config.hca_attr.vhca_id;
-		peer_info->peer_q = rxq_ctrl->hairpin_conf.peers[0].queue;
-		peer_info->tx_explicit = rxq_ctrl->hairpin_conf.tx_explicit;
-		peer_info->manual_bind = rxq_ctrl->hairpin_conf.manual_bind;
+		peer_info->peer_q = rxq->hairpin_conf.peers[0].queue;
+		peer_info->tx_explicit = rxq->hairpin_conf.tx_explicit;
+		peer_info->manual_bind = rxq->hairpin_conf.manual_bind;
 	}
 	return 0;
 }
@@ -530,20 +530,20 @@ mlx5_hairpin_queue_peer_bind(struct rte_eth_dev *dev, uint16_t cur_queue,
 				dev->data->port_id, cur_queue);
 			return -rte_errno;
 		}
-		if (rxq_ctrl->hairpin_status != 0) {
+		if (rxq->hairpin_status != 0) {
 			DRV_LOG(DEBUG, "port %u Rx queue %d is already bound",
 				dev->data->port_id, cur_queue);
 			return 0;
 		}
 		if (peer_info->tx_explicit !=
-		    rxq_ctrl->hairpin_conf.tx_explicit) {
+		    rxq->hairpin_conf.tx_explicit) {
 			rte_errno = EINVAL;
 			DRV_LOG(ERR, "port %u Rx queue %d and peer Tx rule mode"
 				" mismatch", dev->data->port_id, cur_queue);
 			return -rte_errno;
 		}
 		if (peer_info->manual_bind !=
-		    rxq_ctrl->hairpin_conf.manual_bind) {
+		    rxq->hairpin_conf.manual_bind) {
 			rte_errno = EINVAL;
 			DRV_LOG(ERR, "port %u Rx queue %d and peer binding mode"
 				" mismatch", dev->data->port_id, cur_queue);
@@ -555,7 +555,7 @@ mlx5_hairpin_queue_peer_bind(struct rte_eth_dev *dev, uint16_t cur_queue,
 		rq_attr.hairpin_peer_vhca = peer_info->vhca_id;
 		ret = mlx5_devx_cmd_modify_rq(rxq_ctrl->obj->rq, &rq_attr);
 		if (ret == 0)
-			rxq_ctrl->hairpin_status = 1;
+			rxq->hairpin_status = 1;
 	}
 	return ret;
 }
@@ -637,7 +637,7 @@ mlx5_hairpin_queue_peer_unbind(struct rte_eth_dev *dev, uint16_t cur_queue,
 				dev->data->port_id, cur_queue);
 			return -rte_errno;
 		}
-		if (rxq_ctrl->hairpin_status == 0) {
+		if (rxq->hairpin_status == 0) {
 			DRV_LOG(DEBUG, "port %u Rx queue %d is already unbound",
 				dev->data->port_id, cur_queue);
 			return 0;
@@ -652,7 +652,7 @@ mlx5_hairpin_queue_peer_unbind(struct rte_eth_dev *dev, uint16_t cur_queue,
 		rq_attr.rq_state = MLX5_SQC_STATE_RST;
 		ret = mlx5_devx_cmd_modify_rq(rxq_ctrl->obj->rq, &rq_attr);
 		if (ret == 0)
-			rxq_ctrl->hairpin_status = 0;
+			rxq->hairpin_status = 0;
 	}
 	return ret;
 }
@@ -990,7 +990,7 @@ mlx5_hairpin_get_peer_ports(struct rte_eth_dev *dev, uint16_t *peer_ports,
 			rxq_ctrl = rxq->ctrl;
 			if (rxq_ctrl->type != MLX5_RXQ_TYPE_HAIRPIN)
 				continue;
-			pp = rxq_ctrl->hairpin_conf.peers[0].port;
+			pp = rxq->hairpin_conf.peers[0].port;
 			if (pp >= RTE_MAX_ETHPORTS) {
 				rte_errno = ERANGE;
 				DRV_LOG(ERR, "port %hu queue %u peer port "
-- 
2.33.0


^ permalink raw reply	[flat|nested] 29+ messages in thread

* [dpdk-dev] [PATCH 08/11] net/mlx5: remove port info from shareable Rx queue
  2021-09-26 11:18 [dpdk-dev] [PATCH 00/11] net/mlx5: support shared Rx queue Xueming Li
                   ` (6 preceding siblings ...)
  2021-09-26 11:19 ` [dpdk-dev] [PATCH 07/11] net/mlx5: move Rx queue hairpin info to private data Xueming Li
@ 2021-09-26 11:19 ` Xueming Li
  2021-09-26 11:19 ` [dpdk-dev] [PATCH 09/11] net/mlx5: move Rx queue DevX resource Xueming Li
                   ` (3 subsequent siblings)
  11 siblings, 0 replies; 29+ messages in thread
From: Xueming Li @ 2021-09-26 11:19 UTC (permalink / raw)
  To: dev; +Cc: xuemingl, Lior Margalit, Matan Azrad, Viacheslav Ovsiienko

To prepare for shared Rx queues, remove the port info from the
shareable Rx queue control structure.
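
A small, self-contained model of how the port is recovered once the
back-pointer is gone: the shareable control keeps a list of owners, and
RXQ_PORT() reads the private data of the first owner. The RXQ_PORT() macro
shape mirrors the one added by this patch; the surrounding types are
simplified stand-ins, not the real mlx5 structures.

#include <assert.h>
#include <stdint.h>
#include <sys/queue.h>

struct priv { uint16_t port_id; };

struct rxq_priv {
	struct priv *priv;			/* Back pointer to port. */
	LIST_ENTRY(rxq_priv) owner_entry;
};

struct rxq_ctrl {
	LIST_HEAD(owners_h, rxq_priv) owners;	/* Owner rxq list. */
};

/* Same shape as the RXQ_PORT() macro introduced below. */
#define RXQ_PORT(ctrl) (LIST_FIRST(&(ctrl)->owners)->priv)

int main(void)
{
	struct priv port = { .port_id = 3 };
	struct rxq_ctrl ctrl;
	struct rxq_priv rxq = { .priv = &port };

	LIST_INIT(&ctrl.owners);
	LIST_INSERT_HEAD(&ctrl.owners, &rxq, owner_entry);
	assert(RXQ_PORT(&ctrl)->port_id == 3);
	return 0;
}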

Signed-off-by: Xueming Li <xuemingl@nvidia.com>
---
 drivers/net/mlx5/mlx5_devx.c     |  2 +-
 drivers/net/mlx5/mlx5_mr.c       |  7 ++++---
 drivers/net/mlx5/mlx5_rx.c       | 15 +++------------
 drivers/net/mlx5/mlx5_rx.h       |  5 ++++-
 drivers/net/mlx5/mlx5_rxq.c      | 10 ++++------
 drivers/net/mlx5/mlx5_rxtx_vec.c |  2 +-
 6 files changed, 17 insertions(+), 24 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_devx.c b/drivers/net/mlx5/mlx5_devx.c
index 4d479c19e6c..71e4bce1588 100644
--- a/drivers/net/mlx5/mlx5_devx.c
+++ b/drivers/net/mlx5/mlx5_devx.c
@@ -916,7 +916,7 @@ mlx5_rxq_devx_obj_drop_create(struct rte_eth_dev *dev)
 	}
 	rxq->rxq_ctrl = rxq_ctrl;
 	rxq_ctrl->type = MLX5_RXQ_TYPE_STANDARD;
-	rxq_ctrl->priv = priv;
+	rxq_ctrl->sh = priv->sh;
 	rxq_ctrl->obj = rxq;
 	rxq_data = &rxq_ctrl->rxq;
 	/* Create CQ using DevX API. */
diff --git a/drivers/net/mlx5/mlx5_mr.c b/drivers/net/mlx5/mlx5_mr.c
index 44afda731fc..8d48b4614ee 100644
--- a/drivers/net/mlx5/mlx5_mr.c
+++ b/drivers/net/mlx5/mlx5_mr.c
@@ -82,10 +82,11 @@ mlx5_rx_addr2mr_bh(struct mlx5_rxq_data *rxq, uintptr_t addr)
 	struct mlx5_rxq_ctrl *rxq_ctrl =
 		container_of(rxq, struct mlx5_rxq_ctrl, rxq);
 	struct mlx5_mr_ctrl *mr_ctrl = &rxq->mr_ctrl;
-	struct mlx5_priv *priv = rxq_ctrl->priv;
+	struct mlx5_priv *priv = RXQ_PORT(rxq_ctrl);
+	struct mlx5_dev_ctx_shared *sh = rxq_ctrl->sh;
 
-	return mlx5_mr_addr2mr_bh(priv->sh->pd, &priv->mp_id,
-				  &priv->sh->share_cache, mr_ctrl, addr,
+	return mlx5_mr_addr2mr_bh(sh->pd, &priv->mp_id,
+				  &sh->share_cache, mr_ctrl, addr,
 				  priv->config.mr_ext_memseg_en);
 }
 
diff --git a/drivers/net/mlx5/mlx5_rx.c b/drivers/net/mlx5/mlx5_rx.c
index e3b1051ba46..09de26c0d39 100644
--- a/drivers/net/mlx5/mlx5_rx.c
+++ b/drivers/net/mlx5/mlx5_rx.c
@@ -118,15 +118,7 @@ int
 mlx5_rx_descriptor_status(void *rx_queue, uint16_t offset)
 {
 	struct mlx5_rxq_data *rxq = rx_queue;
-	struct mlx5_rxq_ctrl *rxq_ctrl =
-			container_of(rxq, struct mlx5_rxq_ctrl, rxq);
-	struct rte_eth_dev *dev = ETH_DEV(rxq_ctrl->priv);
 
-	if (dev->rx_pkt_burst == NULL ||
-	    dev->rx_pkt_burst == removed_rx_burst) {
-		rte_errno = ENOTSUP;
-		return -rte_errno;
-	}
 	if (offset >= (1 << rxq->cqe_n)) {
 		rte_errno = EINVAL;
 		return -rte_errno;
@@ -438,10 +430,10 @@ mlx5_rx_err_handle(struct mlx5_rxq_data *rxq, uint8_t vec)
 		sm.is_wq = 1;
 		sm.queue_id = rxq->idx;
 		sm.state = IBV_WQS_RESET;
-		if (mlx5_queue_state_modify(ETH_DEV(rxq_ctrl->priv), &sm))
+		if (mlx5_queue_state_modify(RXQ_DEV(rxq_ctrl), &sm))
 			return -1;
 		if (rxq_ctrl->dump_file_n <
-		    rxq_ctrl->priv->config.max_dump_files_num) {
+		    RXQ_PORT(rxq_ctrl)->config.max_dump_files_num) {
 			MKSTR(err_str, "Unexpected CQE error syndrome "
 			      "0x%02x CQN = %u RQN = %u wqe_counter = %u"
 			      " rq_ci = %u cq_ci = %u", u.err_cqe->syndrome,
@@ -478,8 +470,7 @@ mlx5_rx_err_handle(struct mlx5_rxq_data *rxq, uint8_t vec)
 			sm.is_wq = 1;
 			sm.queue_id = rxq->idx;
 			sm.state = IBV_WQS_RDY;
-			if (mlx5_queue_state_modify(ETH_DEV(rxq_ctrl->priv),
-						    &sm))
+			if (mlx5_queue_state_modify(RXQ_DEV(rxq_ctrl), &sm))
 				return -1;
 			if (vec) {
 				const uint32_t elts_n =
diff --git a/drivers/net/mlx5/mlx5_rx.h b/drivers/net/mlx5/mlx5_rx.h
index 2ed544556f5..4eed4176324 100644
--- a/drivers/net/mlx5/mlx5_rx.h
+++ b/drivers/net/mlx5/mlx5_rx.h
@@ -23,6 +23,10 @@
 /* Support tunnel matching. */
 #define MLX5_FLOW_TUNNEL 10
 
+#define RXQ_PORT(rxq_ctrl) LIST_FIRST(&(rxq_ctrl)->owners)->priv
+#define RXQ_DEV(rxq_ctrl) ETH_DEV(RXQ_PORT(rxq_ctrl))
+#define RXQ_PORT_ID(rxq_ctrl) PORT_ID(RXQ_PORT(rxq_ctrl))
+
 struct mlx5_rxq_stats {
 #ifdef MLX5_PMD_SOFT_COUNTERS
 	uint64_t ipackets; /**< Total of successfully received packets. */
@@ -163,7 +167,6 @@ struct mlx5_rxq_ctrl {
 	LIST_HEAD(priv, mlx5_rxq_priv) owners; /* Owner rxq list. */
 	struct mlx5_rxq_obj *obj; /* Verbs/DevX elements. */
 	struct mlx5_dev_ctx_shared *sh; /* Shared context. */
-	struct mlx5_priv *priv; /* Back pointer to private data. */
 	enum mlx5_rxq_type type; /* Rxq type. */
 	unsigned int socket; /* CPU socket ID for allocations. */
 	unsigned int irq:1; /* Whether IRQ is enabled. */
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index 21cb1000899..3aac7cc20ba 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -148,7 +148,7 @@ rxq_alloc_elts_sprq(struct mlx5_rxq_ctrl *rxq_ctrl)
 		buf = rte_pktmbuf_alloc(seg->mp);
 		if (buf == NULL) {
 			DRV_LOG(ERR, "port %u empty mbuf pool",
-				PORT_ID(rxq_ctrl->priv));
+				RXQ_PORT_ID(rxq_ctrl));
 			rte_errno = ENOMEM;
 			goto error;
 		}
@@ -195,7 +195,7 @@ rxq_alloc_elts_sprq(struct mlx5_rxq_ctrl *rxq_ctrl)
 	DRV_LOG(DEBUG,
 		"port %u SPRQ queue %u allocated and configured %u segments"
 		" (max %u packets)",
-		PORT_ID(rxq_ctrl->priv), rxq_ctrl->rxq.idx, elts_n,
+		RXQ_PORT_ID(rxq_ctrl), rxq_ctrl->rxq.idx, elts_n,
 		elts_n / (1 << rxq_ctrl->rxq.sges_n));
 	return 0;
 error:
@@ -207,7 +207,7 @@ rxq_alloc_elts_sprq(struct mlx5_rxq_ctrl *rxq_ctrl)
 		(*rxq_ctrl->rxq.elts)[i] = NULL;
 	}
 	DRV_LOG(DEBUG, "port %u SPRQ queue %u failed, freed everything",
-		PORT_ID(rxq_ctrl->priv), rxq_ctrl->rxq.idx);
+		RXQ_PORT_ID(rxq_ctrl), rxq_ctrl->rxq.idx);
 	rte_errno = err; /* Restore rte_errno. */
 	return -rte_errno;
 }
@@ -284,7 +284,7 @@ rxq_free_elts_sprq(struct mlx5_rxq_ctrl *rxq_ctrl)
 	uint16_t i;
 
 	DRV_LOG(DEBUG, "port %u Rx queue %u freeing %d WRs",
-		PORT_ID(rxq_ctrl->priv), rxq->idx, q_n);
+		RXQ_PORT_ID(rxq_ctrl), rxq->idx, q_n);
 	if (rxq->elts == NULL)
 		return;
 	/**
@@ -1584,7 +1584,6 @@ mlx5_rxq_new(struct rte_eth_dev *dev, struct mlx5_rxq_priv *rxq,
 		(!!(dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS));
 	tmpl->rxq.port_id = dev->data->port_id;
 	tmpl->sh = priv->sh;
-	tmpl->priv = priv;
 	tmpl->rxq.mp = rx_seg[0].mp;
 	tmpl->rxq.elts_n = log2above(desc);
 	tmpl->rxq.rq_repl_thresh =
@@ -1644,7 +1643,6 @@ mlx5_rxq_hairpin_new(struct rte_eth_dev *dev, struct mlx5_rxq_priv *rxq,
 	tmpl->rxq.rss_hash = 0;
 	tmpl->rxq.port_id = dev->data->port_id;
 	tmpl->sh = priv->sh;
-	tmpl->priv = priv;
 	tmpl->rxq.mp = NULL;
 	tmpl->rxq.elts_n = log2above(desc);
 	tmpl->rxq.elts = NULL;
diff --git a/drivers/net/mlx5/mlx5_rxtx_vec.c b/drivers/net/mlx5/mlx5_rxtx_vec.c
index ecd273e00a8..511681841ca 100644
--- a/drivers/net/mlx5/mlx5_rxtx_vec.c
+++ b/drivers/net/mlx5/mlx5_rxtx_vec.c
@@ -550,7 +550,7 @@ mlx5_rxq_check_vec_support(struct mlx5_rxq_data *rxq)
 	struct mlx5_rxq_ctrl *ctrl =
 		container_of(rxq, struct mlx5_rxq_ctrl, rxq);
 
-	if (!ctrl->priv->config.rx_vec_en || rxq->sges_n != 0)
+	if (!RXQ_PORT(ctrl)->config.rx_vec_en || rxq->sges_n != 0)
 		return -ENOTSUP;
 	if (rxq->lro)
 		return -ENOTSUP;
-- 
2.33.0


^ permalink raw reply	[flat|nested] 29+ messages in thread

* [dpdk-dev] [PATCH 09/11] net/mlx5: move Rx queue DevX resource
  2021-09-26 11:18 [dpdk-dev] [PATCH 00/11] net/mlx5: support shared Rx queue Xueming Li
                   ` (7 preceding siblings ...)
  2021-09-26 11:19 ` [dpdk-dev] [PATCH 08/11] net/mlx5: remove port info from shareable Rx queue Xueming Li
@ 2021-09-26 11:19 ` Xueming Li
  2021-09-26 11:19 ` [dpdk-dev] [PATCH 10/11] net/mlx5: remove Rx queue data list from device Xueming Li
                   ` (2 subsequent siblings)
  11 siblings, 0 replies; 29+ messages in thread
From: Xueming Li @ 2021-09-26 11:19 UTC (permalink / raw)
  To: dev
  Cc: xuemingl, Lior Margalit, Matan Azrad, Viacheslav Ovsiienko,
	Anatoly Burakov

To support shared Rx queues, move the DevX RQ, which is a per-queue
resource, to the Rx queue private data.
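
A minimal sketch of the resource split after this patch (simplified
stand-in types, not the real mlx5/DevX structures): the RQ becomes a
per-queue resource held in the queue private data, while the CQ and event
channel stay in the shareable object so future shared Rx queues can reuse
them.

#include <stdint.h>

struct devx_rq { uint32_t rq_id; };	/* Stand-in for mlx5_devx_rq. */
struct devx_cq { uint32_t cq_id; };	/* Stand-in for mlx5_devx_cq. */

struct rxq_obj {			/* Shareable HW object container. */
	struct devx_cq cq_obj;		/* CQ can be shared by owners. */
	void *devx_channel;		/* Event channel, also shared. */
	/* devx_rq no longer lives here. */
};

struct rxq_priv {			/* One instance per owning queue. */
	uint16_t idx;
	struct devx_rq devx_rq;		/* Moved here: one RQ per queue. */
	struct rxq_obj *obj;		/* Points at the shared container. */
};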

Signed-off-by: Xueming Li <xuemingl@nvidia.com>
---
 drivers/net/mlx5/linux/mlx5_verbs.c | 154 +++++++++++--------
 drivers/net/mlx5/mlx5.h             |  11 +-
 drivers/net/mlx5/mlx5_devx.c        | 227 ++++++++++++++--------------
 drivers/net/mlx5/mlx5_rx.h          |   1 +
 drivers/net/mlx5/mlx5_rxq.c         |  44 +++---
 drivers/net/mlx5/mlx5_rxtx.c        |   6 +-
 drivers/net/mlx5/mlx5_trigger.c     |   2 +-
 drivers/net/mlx5/mlx5_vlan.c        |  16 +-
 8 files changed, 241 insertions(+), 220 deletions(-)

diff --git a/drivers/net/mlx5/linux/mlx5_verbs.c b/drivers/net/mlx5/linux/mlx5_verbs.c
index d4fa202ac4b..a2a9b9c1f98 100644
--- a/drivers/net/mlx5/linux/mlx5_verbs.c
+++ b/drivers/net/mlx5/linux/mlx5_verbs.c
@@ -71,13 +71,13 @@ const struct mlx5_mr_ops mlx5_mr_verbs_ops = {
 /**
  * Modify Rx WQ vlan stripping offload
  *
- * @param rxq_obj
- *   Rx queue object.
+ * @param rxq
+ *   Rx queue.
  *
  * @return 0 on success, non-0 otherwise
  */
 static int
-mlx5_rxq_obj_modify_wq_vlan_strip(struct mlx5_rxq_obj *rxq_obj, int on)
+mlx5_rxq_obj_modify_wq_vlan_strip(struct mlx5_rxq_priv *rxq, int on)
 {
 	uint16_t vlan_offloads =
 		(on ? IBV_WQ_FLAGS_CVLAN_STRIPPING : 0) |
@@ -89,14 +89,14 @@ mlx5_rxq_obj_modify_wq_vlan_strip(struct mlx5_rxq_obj *rxq_obj, int on)
 		.flags = vlan_offloads,
 	};
 
-	return mlx5_glue->modify_wq(rxq_obj->wq, &mod);
+	return mlx5_glue->modify_wq(rxq->ctrl->obj->wq, &mod);
 }
 
 /**
  * Modifies the attributes for the specified WQ.
  *
- * @param rxq_obj
- *   Verbs Rx queue object.
+ * @param rxq
+ *   Verbs Rx queue.
  * @param type
  *   Type of change queue state.
  *
@@ -104,14 +104,14 @@ mlx5_rxq_obj_modify_wq_vlan_strip(struct mlx5_rxq_obj *rxq_obj, int on)
  *   0 on success, a negative errno value otherwise and rte_errno is set.
  */
 static int
-mlx5_ibv_modify_wq(struct mlx5_rxq_obj *rxq_obj, uint8_t type)
+mlx5_ibv_modify_wq(struct mlx5_rxq_priv *rxq, uint8_t type)
 {
 	struct ibv_wq_attr mod = {
 		.attr_mask = IBV_WQ_ATTR_STATE,
 		.wq_state = (enum ibv_wq_state)type,
 	};
 
-	return mlx5_glue->modify_wq(rxq_obj->wq, &mod);
+	return mlx5_glue->modify_wq(rxq->ctrl->obj->wq, &mod);
 }
 
 /**
@@ -181,21 +181,18 @@ mlx5_ibv_modify_qp(struct mlx5_txq_obj *obj, enum mlx5_txq_modify_type type,
 /**
  * Create a CQ Verbs object.
  *
- * @param dev
- *   Pointer to Ethernet device.
- * @param idx
- *   Queue index in DPDK Rx queue array.
+ * @param rxq
+ *   Pointer to Rx queue.
  *
  * @return
  *   The Verbs CQ object initialized, NULL otherwise and rte_errno is set.
  */
 static struct ibv_cq *
-mlx5_rxq_ibv_cq_create(struct rte_eth_dev *dev, uint16_t idx)
+mlx5_rxq_ibv_cq_create(struct mlx5_rxq_priv *rxq)
 {
-	struct mlx5_priv *priv = dev->data->dev_private;
-	struct mlx5_rxq_data *rxq_data = (*priv->rxqs)[idx];
-	struct mlx5_rxq_ctrl *rxq_ctrl =
-		container_of(rxq_data, struct mlx5_rxq_ctrl, rxq);
+	struct mlx5_priv *priv = rxq->priv;
+	struct mlx5_rxq_ctrl *rxq_ctrl = rxq->ctrl;
+	struct mlx5_rxq_data *rxq_data = &rxq_ctrl->rxq;
 	struct mlx5_rxq_obj *rxq_obj = rxq_ctrl->obj;
 	unsigned int cqe_n = mlx5_rxq_cqe_num(rxq_data);
 	struct {
@@ -241,7 +238,7 @@ mlx5_rxq_ibv_cq_create(struct rte_eth_dev *dev, uint16_t idx)
 		DRV_LOG(DEBUG,
 			"Port %u Rx CQE compression is disabled for HW"
 			" timestamp.",
-			dev->data->port_id);
+			priv->dev_data->port_id);
 	}
 #ifdef HAVE_IBV_MLX5_MOD_CQE_128B_PAD
 	if (RTE_CACHE_LINE_SIZE == 128) {
@@ -257,21 +254,18 @@ mlx5_rxq_ibv_cq_create(struct rte_eth_dev *dev, uint16_t idx)
 /**
  * Create a WQ Verbs object.
  *
- * @param dev
- *   Pointer to Ethernet device.
- * @param idx
- *   Queue index in DPDK Rx queue array.
+ * @param rxq
+ *   Pointer to Rx queue.
  *
  * @return
  *   The Verbs WQ object initialized, NULL otherwise and rte_errno is set.
  */
 static struct ibv_wq *
-mlx5_rxq_ibv_wq_create(struct rte_eth_dev *dev, uint16_t idx)
+mlx5_rxq_ibv_wq_create(struct mlx5_rxq_priv *rxq)
 {
-	struct mlx5_priv *priv = dev->data->dev_private;
-	struct mlx5_rxq_data *rxq_data = (*priv->rxqs)[idx];
-	struct mlx5_rxq_ctrl *rxq_ctrl =
-		container_of(rxq_data, struct mlx5_rxq_ctrl, rxq);
+	struct mlx5_priv *priv = rxq->priv;
+	struct mlx5_rxq_ctrl *rxq_ctrl = rxq->ctrl;
+	struct mlx5_rxq_data *rxq_data = &rxq_ctrl->rxq;
 	struct mlx5_rxq_obj *rxq_obj = rxq_ctrl->obj;
 	unsigned int wqe_n = 1 << rxq_data->elts_n;
 	struct {
@@ -338,7 +332,7 @@ mlx5_rxq_ibv_wq_create(struct rte_eth_dev *dev, uint16_t idx)
 			DRV_LOG(ERR,
 				"Port %u Rx queue %u requested %u*%u but got"
 				" %u*%u WRs*SGEs.",
-				dev->data->port_id, idx,
+				priv->dev_data->port_id, rxq->idx,
 				wqe_n >> rxq_data->sges_n,
 				(1 << rxq_data->sges_n),
 				wq_attr.ibv.max_wr, wq_attr.ibv.max_sge);
@@ -353,21 +347,20 @@ mlx5_rxq_ibv_wq_create(struct rte_eth_dev *dev, uint16_t idx)
 /**
  * Create the Rx queue Verbs object.
  *
- * @param dev
- *   Pointer to Ethernet device.
- * @param idx
- *   Queue index in DPDK Rx queue array.
+ * @param rxq
+ *   Pointer to Rx queue.
  *
  * @return
  *   0 on success, a negative errno value otherwise and rte_errno is set.
  */
 static int
-mlx5_rxq_ibv_obj_new(struct rte_eth_dev *dev, uint16_t idx)
+mlx5_rxq_ibv_obj_new(struct mlx5_rxq_priv *rxq)
 {
-	struct mlx5_priv *priv = dev->data->dev_private;
-	struct mlx5_rxq_data *rxq_data = (*priv->rxqs)[idx];
-	struct mlx5_rxq_ctrl *rxq_ctrl =
-		container_of(rxq_data, struct mlx5_rxq_ctrl, rxq);
+	uint16_t idx = rxq->idx;
+	struct mlx5_priv *priv = rxq->priv;
+	uint16_t port_id = priv->dev_data->port_id;
+	struct mlx5_rxq_ctrl *rxq_ctrl = rxq->ctrl;
+	struct mlx5_rxq_data *rxq_data = &rxq_ctrl->rxq;
 	struct mlx5_rxq_obj *tmpl = rxq_ctrl->obj;
 	struct mlx5dv_cq cq_info;
 	struct mlx5dv_rwq rwq;
@@ -382,17 +375,17 @@ mlx5_rxq_ibv_obj_new(struct rte_eth_dev *dev, uint16_t idx)
 				mlx5_glue->create_comp_channel(priv->sh->ctx);
 		if (!tmpl->ibv_channel) {
 			DRV_LOG(ERR, "Port %u: comp channel creation failure.",
-				dev->data->port_id);
+				port_id);
 			rte_errno = ENOMEM;
 			goto error;
 		}
 		tmpl->fd = ((struct ibv_comp_channel *)(tmpl->ibv_channel))->fd;
 	}
 	/* Create CQ using Verbs API. */
-	tmpl->ibv_cq = mlx5_rxq_ibv_cq_create(dev, idx);
+	tmpl->ibv_cq = mlx5_rxq_ibv_cq_create(rxq);
 	if (!tmpl->ibv_cq) {
 		DRV_LOG(ERR, "Port %u Rx queue %u CQ creation failure.",
-			dev->data->port_id, idx);
+			port_id, idx);
 		rte_errno = ENOMEM;
 		goto error;
 	}
@@ -407,7 +400,7 @@ mlx5_rxq_ibv_obj_new(struct rte_eth_dev *dev, uint16_t idx)
 		DRV_LOG(ERR,
 			"Port %u wrong MLX5_CQE_SIZE environment "
 			"variable value: it should be set to %u.",
-			dev->data->port_id, RTE_CACHE_LINE_SIZE);
+			port_id, RTE_CACHE_LINE_SIZE);
 		rte_errno = EINVAL;
 		goto error;
 	}
@@ -418,19 +411,19 @@ mlx5_rxq_ibv_obj_new(struct rte_eth_dev *dev, uint16_t idx)
 	rxq_data->cq_uar = cq_info.cq_uar;
 	rxq_data->cqn = cq_info.cqn;
 	/* Create WQ (RQ) using Verbs API. */
-	tmpl->wq = mlx5_rxq_ibv_wq_create(dev, idx);
+	tmpl->wq = mlx5_rxq_ibv_wq_create(rxq);
 	if (!tmpl->wq) {
 		DRV_LOG(ERR, "Port %u Rx queue %u WQ creation failure.",
-			dev->data->port_id, idx);
+			port_id, idx);
 		rte_errno = ENOMEM;
 		goto error;
 	}
 	/* Change queue state to ready. */
-	ret = mlx5_ibv_modify_wq(tmpl, IBV_WQS_RDY);
+	ret = mlx5_ibv_modify_wq(rxq, IBV_WQS_RDY);
 	if (ret) {
 		DRV_LOG(ERR,
 			"Port %u Rx queue %u WQ state to IBV_WQS_RDY failed.",
-			dev->data->port_id, idx);
+			port_id, idx);
 		rte_errno = ret;
 		goto error;
 	}
@@ -446,7 +439,7 @@ mlx5_rxq_ibv_obj_new(struct rte_eth_dev *dev, uint16_t idx)
 	rxq_data->cq_arm_sn = 0;
 	mlx5_rxq_initialize(rxq_data);
 	rxq_data->cq_ci = 0;
-	dev->data->rx_queue_state[idx] = RTE_ETH_QUEUE_STATE_STARTED;
+	priv->dev_data->rx_queue_state[idx] = RTE_ETH_QUEUE_STATE_STARTED;
 	rxq_ctrl->wqn = ((struct ibv_wq *)(tmpl->wq))->wq_num;
 	return 0;
 error:
@@ -464,12 +457,14 @@ mlx5_rxq_ibv_obj_new(struct rte_eth_dev *dev, uint16_t idx)
 /**
  * Release an Rx verbs queue object.
  *
- * @param rxq_obj
- *   Verbs Rx queue object.
+ * @param rxq
+ *   Pointer to Rx queue.
  */
 static void
-mlx5_rxq_ibv_obj_release(struct mlx5_rxq_obj *rxq_obj)
+mlx5_rxq_ibv_obj_release(struct mlx5_rxq_priv *rxq)
 {
+	struct mlx5_rxq_obj *rxq_obj = rxq->ctrl->obj;
+
 	MLX5_ASSERT(rxq_obj);
 	MLX5_ASSERT(rxq_obj->wq);
 	MLX5_ASSERT(rxq_obj->ibv_cq);
@@ -692,12 +687,24 @@ static void
 mlx5_rxq_ibv_obj_drop_release(struct rte_eth_dev *dev)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
-	struct mlx5_rxq_obj *rxq = priv->drop_queue.rxq;
+	struct mlx5_rxq_priv *rxq = priv->drop_queue.rxq;
+	struct mlx5_rxq_obj *rxq_obj;
 
-	if (rxq->wq)
-		claim_zero(mlx5_glue->destroy_wq(rxq->wq));
-	if (rxq->ibv_cq)
-		claim_zero(mlx5_glue->destroy_cq(rxq->ibv_cq));
+	if (rxq == NULL)
+		return;
+	if (rxq->ctrl == NULL)
+		goto free_priv;
+	rxq_obj = rxq->ctrl->obj;
+	if (rxq_obj == NULL)
+		goto free_ctrl;
+	if (rxq_obj->wq)
+		claim_zero(mlx5_glue->destroy_wq(rxq_obj->wq));
+	if (rxq_obj->ibv_cq)
+		claim_zero(mlx5_glue->destroy_cq(rxq_obj->ibv_cq));
+	mlx5_free(rxq_obj);
+free_ctrl:
+	mlx5_free(rxq->ctrl);
+free_priv:
 	mlx5_free(rxq);
 	priv->drop_queue.rxq = NULL;
 }
@@ -716,39 +723,58 @@ mlx5_rxq_ibv_obj_drop_create(struct rte_eth_dev *dev)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
 	struct ibv_context *ctx = priv->sh->ctx;
-	struct mlx5_rxq_obj *rxq = priv->drop_queue.rxq;
+	struct mlx5_rxq_priv *rxq = priv->drop_queue.rxq;
+	struct mlx5_rxq_ctrl *rxq_ctrl = NULL;
+	struct mlx5_rxq_obj *rxq_obj = NULL;
 
-	if (rxq)
+	if (rxq != NULL)
 		return 0;
 	rxq = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*rxq), 0, SOCKET_ID_ANY);
-	if (!rxq) {
+	if (rxq == NULL) {
 		DRV_LOG(DEBUG, "Port %u cannot allocate drop Rx queue memory.",
 		      dev->data->port_id);
 		rte_errno = ENOMEM;
 		return -rte_errno;
 	}
 	priv->drop_queue.rxq = rxq;
-	rxq->ibv_cq = mlx5_glue->create_cq(ctx, 1, NULL, NULL, 0);
-	if (!rxq->ibv_cq) {
+	rxq_ctrl = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*rxq_ctrl), 0,
+			       SOCKET_ID_ANY);
+	if (rxq_ctrl == NULL) {
+		DRV_LOG(DEBUG, "Port %u cannot allocate drop Rx queue control memory.",
+		      dev->data->port_id);
+		rte_errno = ENOMEM;
+		goto error;
+	}
+	rxq->ctrl = rxq_ctrl;
+	rxq_obj = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*rxq_obj), 0,
+			      SOCKET_ID_ANY);
+	if (rxq_obj == NULL) {
+		DRV_LOG(DEBUG, "Port %u cannot allocate drop Rx queue memory.",
+		      dev->data->port_id);
+		rte_errno = ENOMEM;
+		goto error;
+	}
+	rxq_ctrl->obj = rxq_obj;
+	rxq_obj->ibv_cq = mlx5_glue->create_cq(ctx, 1, NULL, NULL, 0);
+	if (!rxq_obj->ibv_cq) {
 		DRV_LOG(DEBUG, "Port %u cannot allocate CQ for drop queue.",
 		      dev->data->port_id);
 		rte_errno = errno;
 		goto error;
 	}
-	rxq->wq = mlx5_glue->create_wq(ctx, &(struct ibv_wq_init_attr){
+	rxq_obj->wq = mlx5_glue->create_wq(ctx, &(struct ibv_wq_init_attr){
 						    .wq_type = IBV_WQT_RQ,
 						    .max_wr = 1,
 						    .max_sge = 1,
 						    .pd = priv->sh->pd,
-						    .cq = rxq->ibv_cq,
+						    .cq = rxq_obj->ibv_cq,
 					      });
-	if (!rxq->wq) {
+	if (!rxq_obj->wq) {
 		DRV_LOG(DEBUG, "Port %u cannot allocate WQ for drop queue.",
 		      dev->data->port_id);
 		rte_errno = errno;
 		goto error;
 	}
-	priv->drop_queue.rxq = rxq;
 	return 0;
 error:
 	mlx5_rxq_ibv_obj_drop_release(dev);
@@ -777,7 +803,7 @@ mlx5_ibv_drop_action_create(struct rte_eth_dev *dev)
 	ret = mlx5_rxq_ibv_obj_drop_create(dev);
 	if (ret < 0)
 		goto error;
-	rxq = priv->drop_queue.rxq;
+	rxq = priv->drop_queue.rxq->ctrl->obj;
 	ind_tbl = mlx5_glue->create_rwq_ind_table
 				(priv->sh->ctx,
 				 &(struct ibv_rwq_ind_table_init_attr){
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index d06f828ed33..c674f5ba9c4 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -310,7 +310,7 @@ struct mlx5_vf_vlan {
 /* Flow drop context necessary due to Verbs API. */
 struct mlx5_drop {
 	struct mlx5_hrxq *hrxq; /* Hash Rx queue queue. */
-	struct mlx5_rxq_obj *rxq; /* Rx queue object. */
+	struct mlx5_rxq_priv *rxq; /* Rx queue. */
 };
 
 /* Loopback dummy queue resources required due to Verbs API. */
@@ -1257,7 +1257,6 @@ struct mlx5_rxq_obj {
 		};
 		struct mlx5_devx_obj *rq; /* DevX RQ object for hairpin. */
 		struct {
-			struct mlx5_devx_rq rq_obj; /* DevX RQ object. */
 			struct mlx5_devx_cq cq_obj; /* DevX CQ object. */
 			void *devx_channel;
 		};
@@ -1339,11 +1338,11 @@ struct mlx5_rxq_priv;
 
 /* HW objects operations structure. */
 struct mlx5_obj_ops {
-	int (*rxq_obj_modify_vlan_strip)(struct mlx5_rxq_obj *rxq_obj, int on);
-	int (*rxq_obj_new)(struct rte_eth_dev *dev, uint16_t idx);
+	int (*rxq_obj_modify_vlan_strip)(struct mlx5_rxq_priv *rxq, int on);
+	int (*rxq_obj_new)(struct mlx5_rxq_priv *rxq);
 	int (*rxq_event_get)(struct mlx5_rxq_obj *rxq_obj);
-	int (*rxq_obj_modify)(struct mlx5_rxq_obj *rxq_obj, uint8_t type);
-	void (*rxq_obj_release)(struct mlx5_rxq_obj *rxq_obj);
+	int (*rxq_obj_modify)(struct mlx5_rxq_priv *rxq, uint8_t type);
+	void (*rxq_obj_release)(struct mlx5_rxq_priv *rxq);
 	int (*ind_table_new)(struct rte_eth_dev *dev, const unsigned int log_n,
 			     struct mlx5_ind_table_obj *ind_tbl);
 	int (*ind_table_modify)(struct rte_eth_dev *dev,
diff --git a/drivers/net/mlx5/mlx5_devx.c b/drivers/net/mlx5/mlx5_devx.c
index 71e4bce1588..d219e255f0a 100644
--- a/drivers/net/mlx5/mlx5_devx.c
+++ b/drivers/net/mlx5/mlx5_devx.c
@@ -30,14 +30,16 @@
 /**
  * Modify RQ vlan stripping offload
  *
- * @param rxq_obj
- *   Rx queue object.
+ * @param rxq
+ *   Rx queue.
+ * @param on
+ *   Enable/disable VLAN stripping.
  *
  * @return
  *   0 on success, non-0 otherwise
  */
 static int
-mlx5_rxq_obj_modify_rq_vlan_strip(struct mlx5_rxq_obj *rxq_obj, int on)
+mlx5_rxq_obj_modify_rq_vlan_strip(struct mlx5_rxq_priv *rxq, int on)
 {
 	struct mlx5_devx_modify_rq_attr rq_attr;
 
@@ -46,14 +48,14 @@ mlx5_rxq_obj_modify_rq_vlan_strip(struct mlx5_rxq_obj *rxq_obj, int on)
 	rq_attr.state = MLX5_RQC_STATE_RDY;
 	rq_attr.vsd = (on ? 0 : 1);
 	rq_attr.modify_bitmask = MLX5_MODIFY_RQ_IN_MODIFY_BITMASK_VSD;
-	return mlx5_devx_cmd_modify_rq(rxq_obj->rq_obj.rq, &rq_attr);
+	return mlx5_devx_cmd_modify_rq(rxq->devx_rq.rq, &rq_attr);
 }
 
 /**
  * Modify RQ using DevX API.
  *
- * @param rxq_obj
- *   DevX Rx queue object.
+ * @param rxq
+ *   DevX rx queue.
  * @param type
  *   Type of change queue state.
  *
@@ -61,7 +63,7 @@ mlx5_rxq_obj_modify_rq_vlan_strip(struct mlx5_rxq_obj *rxq_obj, int on)
  *   0 on success, a negative errno value otherwise and rte_errno is set.
  */
 static int
-mlx5_devx_modify_rq(struct mlx5_rxq_obj *rxq_obj, uint8_t type)
+mlx5_devx_modify_rq(struct mlx5_rxq_priv *rxq, uint8_t type)
 {
 	struct mlx5_devx_modify_rq_attr rq_attr;
 
@@ -86,7 +88,7 @@ mlx5_devx_modify_rq(struct mlx5_rxq_obj *rxq_obj, uint8_t type)
 	default:
 		break;
 	}
-	return mlx5_devx_cmd_modify_rq(rxq_obj->rq_obj.rq, &rq_attr);
+	return mlx5_devx_cmd_modify_rq(rxq->devx_rq.rq, &rq_attr);
 }
 
 /**
@@ -145,42 +147,34 @@ mlx5_devx_modify_sq(struct mlx5_txq_obj *obj, enum mlx5_txq_modify_type type,
 	return 0;
 }
 
-/**
- * Destroy the Rx queue DevX object.
- *
- * @param rxq_obj
- *   Rxq object to destroy.
- */
-static void
-mlx5_rxq_release_devx_resources(struct mlx5_rxq_obj *rxq_obj)
-{
-	mlx5_devx_rq_destroy(&rxq_obj->rq_obj);
-	memset(&rxq_obj->rq_obj, 0, sizeof(rxq_obj->rq_obj));
-	mlx5_devx_cq_destroy(&rxq_obj->cq_obj);
-	memset(&rxq_obj->cq_obj, 0, sizeof(rxq_obj->cq_obj));
-}
-
 /**
  * Release an Rx DevX queue object.
  *
- * @param rxq_obj
- *   DevX Rx queue object.
+ * @param rxq
+ *   DevX Rx queue.
  */
 static void
-mlx5_rxq_devx_obj_release(struct mlx5_rxq_obj *rxq_obj)
+mlx5_rxq_devx_obj_release(struct mlx5_rxq_priv *rxq)
 {
-	MLX5_ASSERT(rxq_obj);
+	struct mlx5_rxq_ctrl *rxq_ctrl = rxq->ctrl;
+	struct mlx5_rxq_obj *rxq_obj = rxq_ctrl->obj;
+
+	MLX5_ASSERT(rxq != NULL);
+	MLX5_ASSERT(rxq_ctrl != NULL);
 	if (rxq_obj->rxq_ctrl->type == MLX5_RXQ_TYPE_HAIRPIN) {
 		MLX5_ASSERT(rxq_obj->rq);
-		mlx5_devx_modify_rq(rxq_obj, MLX5_RXQ_MOD_RDY2RST);
+		mlx5_devx_modify_rq(rxq, MLX5_RXQ_MOD_RDY2RST);
 		claim_zero(mlx5_devx_cmd_destroy(rxq_obj->rq));
 	} else {
-		MLX5_ASSERT(rxq_obj->cq_obj.cq);
-		MLX5_ASSERT(rxq_obj->rq_obj.rq);
-		mlx5_rxq_release_devx_resources(rxq_obj);
-		if (rxq_obj->devx_channel)
+		mlx5_devx_rq_destroy(&rxq->devx_rq);
+		memset(&rxq->devx_rq, 0, sizeof(rxq->devx_rq));
+		mlx5_devx_cq_destroy(&rxq_obj->cq_obj);
+		memset(&rxq_obj->cq_obj, 0, sizeof(rxq_obj->cq_obj));
+		if (rxq_obj->devx_channel) {
 			mlx5_os_devx_destroy_event_channel
 							(rxq_obj->devx_channel);
+			rxq_obj->devx_channel = NULL;
+		}
 	}
 }
 
@@ -224,21 +218,18 @@ mlx5_rx_devx_get_event(struct mlx5_rxq_obj *rxq_obj)
 /**
  * Create a RQ object using DevX.
  *
- * @param dev
- *   Pointer to Ethernet device.
- * @param rxq_data
- *   RX queue data.
+ * @param rxq
+ *   Pointer to Rx queue.
  *
  * @return
  *   0 on success, a negative errno value otherwise and rte_errno is set.
  */
 static int
-mlx5_rxq_create_devx_rq_resources(struct rte_eth_dev *dev,
-				  struct mlx5_rxq_data *rxq_data)
+mlx5_rxq_create_devx_rq_resources(struct mlx5_rxq_priv *rxq)
 {
-	struct mlx5_priv *priv = dev->data->dev_private;
-	struct mlx5_rxq_ctrl *rxq_ctrl =
-		container_of(rxq_data, struct mlx5_rxq_ctrl, rxq);
+	struct mlx5_priv *priv = rxq->priv;
+	struct mlx5_rxq_ctrl *rxq_ctrl = rxq->ctrl;
+	struct mlx5_rxq_data *rxq_data = &rxq->ctrl->rxq;
 	struct mlx5_devx_create_rq_attr rq_attr = { 0 };
 	uint16_t log_desc_n = rxq_data->elts_n - rxq_data->sges_n;
 	uint32_t wqe_size, log_wqe_size;
@@ -279,7 +270,7 @@ mlx5_rxq_create_devx_rq_resources(struct rte_eth_dev *dev,
 	rq_attr.wq_attr.pd = priv->sh->pdn;
 	rq_attr.counter_set_id = priv->counter_set_id;
 	/* Create RQ using DevX API. */
-	return mlx5_devx_rq_create(priv->sh->ctx, &rxq_ctrl->obj->rq_obj,
+	return mlx5_devx_rq_create(priv->sh->ctx, &rxq->devx_rq,
 				   wqe_size, log_desc_n, &rq_attr,
 				   rxq_ctrl->socket);
 }
@@ -287,24 +278,22 @@ mlx5_rxq_create_devx_rq_resources(struct rte_eth_dev *dev,
 /**
  * Create a DevX CQ object for an Rx queue.
  *
- * @param dev
- *   Pointer to Ethernet device.
- * @param rxq_data
- *   RX queue data.
+ * @param rxq
+ *   Pointer to Rx queue.
  *
  * @return
  *   0 on success, a negative errno value otherwise and rte_errno is set.
  */
 static int
-mlx5_rxq_create_devx_cq_resources(struct rte_eth_dev *dev,
-				  struct mlx5_rxq_data *rxq_data)
+mlx5_rxq_create_devx_cq_resources(struct mlx5_rxq_priv *rxq)
 {
 	struct mlx5_devx_cq *cq_obj = 0;
 	struct mlx5_devx_cq_attr cq_attr = { 0 };
-	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5_priv *priv = rxq->priv;
 	struct mlx5_dev_ctx_shared *sh = priv->sh;
-	struct mlx5_rxq_ctrl *rxq_ctrl =
-		container_of(rxq_data, struct mlx5_rxq_ctrl, rxq);
+	uint16_t port_id = priv->dev_data->port_id;
+	struct mlx5_rxq_ctrl *rxq_ctrl = rxq->ctrl;
+	struct mlx5_rxq_data *rxq_data = &rxq_ctrl->rxq;
 	unsigned int cqe_n = mlx5_rxq_cqe_num(rxq_data);
 	uint32_t log_cqe_n;
 	uint16_t event_nums[1] = { 0 };
@@ -345,7 +334,7 @@ mlx5_rxq_create_devx_cq_resources(struct rte_eth_dev *dev,
 		}
 		DRV_LOG(DEBUG,
 			"Port %u Rx CQE compression is enabled, format %d.",
-			dev->data->port_id, priv->config.cqe_comp_fmt);
+			port_id, priv->config.cqe_comp_fmt);
 		/*
 		 * For vectorized Rx, it must not be doubled in order to
 		 * make cq_ci and rq_ci aligned.
@@ -354,13 +343,12 @@ mlx5_rxq_create_devx_cq_resources(struct rte_eth_dev *dev,
 			cqe_n *= 2;
 	} else if (priv->config.cqe_comp && rxq_data->hw_timestamp) {
 		DRV_LOG(DEBUG,
-			"Port %u Rx CQE compression is disabled for HW"
-			" timestamp.",
-			dev->data->port_id);
+			"Port %u Rx CQE compression is disabled for HW timestamp.",
+			port_id);
 	} else if (priv->config.cqe_comp && rxq_data->lro) {
 		DRV_LOG(DEBUG,
 			"Port %u Rx CQE compression is disabled for LRO.",
-			dev->data->port_id);
+			port_id);
 	}
 	cq_attr.uar_page_id = mlx5_os_get_devx_uar_page_id(sh->devx_rx_uar);
 	log_cqe_n = log2above(cqe_n);
@@ -398,27 +386,23 @@ mlx5_rxq_create_devx_cq_resources(struct rte_eth_dev *dev,
 /**
  * Create the Rx hairpin queue object.
  *
- * @param dev
- *   Pointer to Ethernet device.
- * @param idx
- *   Queue index in DPDK Rx queue array.
+ * @param rxq
+ *   Pointer to Rx queue.
  *
  * @return
  *   0 on success, a negative errno value otherwise and rte_errno is set.
  */
 static int
-mlx5_rxq_obj_hairpin_new(struct rte_eth_dev *dev, uint16_t idx)
+mlx5_rxq_obj_hairpin_new(struct mlx5_rxq_priv *rxq)
 {
-	struct mlx5_priv *priv = dev->data->dev_private;
-	struct mlx5_rxq_data *rxq_data = (*priv->rxqs)[idx];
-	struct mlx5_rxq_ctrl *rxq_ctrl =
-		container_of(rxq_data, struct mlx5_rxq_ctrl, rxq);
+	uint16_t idx = rxq->idx;
+	struct mlx5_priv *priv = rxq->priv;
+	struct mlx5_rxq_ctrl *rxq_ctrl = rxq->ctrl;
 	struct mlx5_devx_create_rq_attr attr = { 0 };
 	struct mlx5_rxq_obj *tmpl = rxq_ctrl->obj;
 	uint32_t max_wq_data;
 
-	MLX5_ASSERT(rxq_data);
-	MLX5_ASSERT(tmpl);
+	MLX5_ASSERT(rxq != NULL && rxq->ctrl != NULL && tmpl != NULL);
 	tmpl->rxq_ctrl = rxq_ctrl;
 	attr.hairpin = 1;
 	max_wq_data = priv->config.hca_attr.log_max_hairpin_wq_data_sz;
@@ -447,39 +431,36 @@ mlx5_rxq_obj_hairpin_new(struct rte_eth_dev *dev, uint16_t idx)
 	if (!tmpl->rq) {
 		DRV_LOG(ERR,
 			"Port %u Rx hairpin queue %u can't create rq object.",
-			dev->data->port_id, idx);
+			priv->dev_data->port_id, idx);
 		rte_errno = errno;
 		return -rte_errno;
 	}
-	dev->data->rx_queue_state[idx] = RTE_ETH_QUEUE_STATE_HAIRPIN;
+	priv->dev_data->rx_queue_state[idx] = RTE_ETH_QUEUE_STATE_HAIRPIN;
 	return 0;
 }
 
 /**
  * Create the Rx queue DevX object.
  *
- * @param dev
- *   Pointer to Ethernet device.
- * @param idx
- *   Queue index in DPDK Rx queue array.
+ * @param rxq
+ *   Pointer to Rx queue.
  *
  * @return
  *   0 on success, a negative errno value otherwise and rte_errno is set.
  */
 static int
-mlx5_rxq_devx_obj_new(struct rte_eth_dev *dev, uint16_t idx)
+mlx5_rxq_devx_obj_new(struct mlx5_rxq_priv *rxq)
 {
-	struct mlx5_priv *priv = dev->data->dev_private;
-	struct mlx5_rxq_data *rxq_data = (*priv->rxqs)[idx];
-	struct mlx5_rxq_ctrl *rxq_ctrl =
-		container_of(rxq_data, struct mlx5_rxq_ctrl, rxq);
+	struct mlx5_priv *priv = rxq->priv;
+	struct mlx5_rxq_ctrl *rxq_ctrl = rxq->ctrl;
+	struct mlx5_rxq_data *rxq_data = &rxq_ctrl->rxq;
 	struct mlx5_rxq_obj *tmpl = rxq_ctrl->obj;
 	int ret = 0;
 
 	MLX5_ASSERT(rxq_data);
 	MLX5_ASSERT(tmpl);
 	if (rxq_ctrl->type == MLX5_RXQ_TYPE_HAIRPIN)
-		return mlx5_rxq_obj_hairpin_new(dev, idx);
+		return mlx5_rxq_obj_hairpin_new(rxq);
 	tmpl->rxq_ctrl = rxq_ctrl;
 	if (rxq_ctrl->irq) {
 		int devx_ev_flag =
@@ -497,34 +478,32 @@ mlx5_rxq_devx_obj_new(struct rte_eth_dev *dev, uint16_t idx)
 		tmpl->fd = mlx5_os_get_devx_channel_fd(tmpl->devx_channel);
 	}
 	/* Create CQ using DevX API. */
-	ret = mlx5_rxq_create_devx_cq_resources(dev, rxq_data);
+	ret = mlx5_rxq_create_devx_cq_resources(rxq);
 	if (ret) {
 		DRV_LOG(ERR, "Failed to create CQ.");
 		goto error;
 	}
 	/* Create RQ using DevX API. */
-	ret = mlx5_rxq_create_devx_rq_resources(dev, rxq_data);
+	ret = mlx5_rxq_create_devx_rq_resources(rxq);
 	if (ret) {
 		DRV_LOG(ERR, "Port %u Rx queue %u RQ creation failure.",
-			dev->data->port_id, idx);
+			priv->dev_data->port_id, rxq->idx);
 		rte_errno = ENOMEM;
 		goto error;
 	}
 	/* Change queue state to ready. */
-	ret = mlx5_devx_modify_rq(tmpl, MLX5_RXQ_MOD_RST2RDY);
+	ret = mlx5_devx_modify_rq(rxq, MLX5_RXQ_MOD_RST2RDY);
 	if (ret)
 		goto error;
-	rxq_data->wqes = (void *)(uintptr_t)tmpl->rq_obj.wq.umem_buf;
-	rxq_data->rq_db = (uint32_t *)(uintptr_t)tmpl->rq_obj.db_rec;
-	rxq_data->cq_arm_sn = 0;
-	rxq_data->cq_ci = 0;
+	rxq_data->wqes = (void *)(uintptr_t)rxq->devx_rq.wq.umem_buf;
+	rxq_data->rq_db = (uint32_t *)(uintptr_t)rxq->devx_rq.db_rec;
 	mlx5_rxq_initialize(rxq_data);
-	dev->data->rx_queue_state[idx] = RTE_ETH_QUEUE_STATE_STARTED;
-	rxq_ctrl->wqn = tmpl->rq_obj.rq->id;
+	priv->dev_data->rx_queue_state[rxq->idx] = RTE_ETH_QUEUE_STATE_STARTED;
+	rxq_ctrl->wqn = rxq->devx_rq.rq->id;
 	return 0;
 error:
 	ret = rte_errno; /* Save rte_errno before cleanup. */
-	mlx5_rxq_devx_obj_release(tmpl);
+	mlx5_rxq_devx_obj_release(rxq);
 	rte_errno = ret; /* Restore rte_errno. */
 	return -rte_errno;
 }
@@ -570,15 +549,15 @@ mlx5_devx_ind_table_create_rqt_attr(struct rte_eth_dev *dev,
 	rqt_attr->rqt_actual_size = rqt_n;
 	if (queues == NULL) {
 		for (i = 0; i < rqt_n; i++)
-			rqt_attr->rq_list[i] = priv->drop_queue.rxq->rq->id;
+			rqt_attr->rq_list[i] =
+					priv->drop_queue.rxq->devx_rq.rq->id;
 		return rqt_attr;
 	}
 	for (i = 0; i != queues_n; ++i) {
-		struct mlx5_rxq_data *rxq = (*priv->rxqs)[queues[i]];
-		struct mlx5_rxq_ctrl *rxq_ctrl =
-				container_of(rxq, struct mlx5_rxq_ctrl, rxq);
+		struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, queues[i]);
 
-		rqt_attr->rq_list[i] = rxq_ctrl->obj->rq_obj.rq->id;
+		MLX5_ASSERT(rxq != NULL);
+		rqt_attr->rq_list[i] = rxq->devx_rq.rq->id;
 	}
 	MLX5_ASSERT(i > 0);
 	for (j = 0; i != rqt_n; ++j, ++i)
@@ -717,7 +696,7 @@ mlx5_devx_tir_attr_set(struct rte_eth_dev *dev, const uint8_t *rss_key,
 			}
 		}
 	} else {
-		rxq_obj_type = priv->drop_queue.rxq->rxq_ctrl->type;
+		rxq_obj_type = priv->drop_queue.rxq->ctrl->type;
 	}
 	memset(tir_attr, 0, sizeof(*tir_attr));
 	tir_attr->disp_type = MLX5_TIRC_DISP_TYPE_INDIRECT;
@@ -889,16 +868,23 @@ mlx5_rxq_devx_obj_drop_create(struct rte_eth_dev *dev)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
 	int socket_id = dev->device->numa_node;
-	struct mlx5_rxq_ctrl *rxq_ctrl;
-	struct mlx5_rxq_data *rxq_data;
-	struct mlx5_rxq_obj *rxq = NULL;
+	struct mlx5_rxq_priv *rxq;
+	struct mlx5_rxq_ctrl *rxq_ctrl = NULL;
+	struct mlx5_rxq_obj *rxq_obj = NULL;
 	int ret;
 
 	/*
-	 * Initialize dummy control structures.
+	 * Initialize dummy Rx queue structures.
 	 * They are required to hold pointers for cleanup
 	 * and are only accessible via drop queue DevX objects.
 	 */
+	rxq = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*rxq), 0, socket_id);
+	if (rxq == NULL) {
+		DRV_LOG(ERR, "Port %u could not allocate drop queue",
+			dev->data->port_id);
+		rte_errno = ENOMEM;
+		goto error;
+	}
 	rxq_ctrl = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*rxq_ctrl),
 			       0, socket_id);
 	if (rxq_ctrl == NULL) {
@@ -907,27 +893,29 @@ mlx5_rxq_devx_obj_drop_create(struct rte_eth_dev *dev)
 		rte_errno = ENOMEM;
 		goto error;
 	}
-	rxq = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*rxq), 0, socket_id);
-	if (rxq == NULL) {
+	rxq_obj = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*rxq_obj), 0, socket_id);
+	if (rxq_obj == NULL) {
 		DRV_LOG(ERR, "Port %u could not allocate drop queue object",
 			dev->data->port_id);
 		rte_errno = ENOMEM;
 		goto error;
 	}
-	rxq->rxq_ctrl = rxq_ctrl;
+	rxq->priv = priv;
+	rxq->ctrl = rxq_ctrl;
+	LIST_INSERT_HEAD(&rxq_ctrl->owners, rxq, owner_entry);
+	rxq_obj->rxq_ctrl = rxq_ctrl;
 	rxq_ctrl->type = MLX5_RXQ_TYPE_STANDARD;
 	rxq_ctrl->sh = priv->sh;
-	rxq_ctrl->obj = rxq;
-	rxq_data = &rxq_ctrl->rxq;
+	rxq_ctrl->obj = rxq_obj;
 	/* Create CQ using DevX API. */
-	ret = mlx5_rxq_create_devx_cq_resources(dev, rxq_data);
+	ret = mlx5_rxq_create_devx_cq_resources(rxq);
 	if (ret != 0) {
 		DRV_LOG(ERR, "Port %u drop queue CQ creation failed.",
 			dev->data->port_id);
 		goto error;
 	}
 	/* Create RQ using DevX API. */
-	ret = mlx5_rxq_create_devx_rq_resources(dev, rxq_data);
+	ret = mlx5_rxq_create_devx_rq_resources(rxq);
 	if (ret != 0) {
 		DRV_LOG(ERR, "Port %u drop queue RQ creation failed.",
 			dev->data->port_id);
@@ -944,15 +932,18 @@ mlx5_rxq_devx_obj_drop_create(struct rte_eth_dev *dev)
 error:
 	ret = rte_errno; /* Save rte_errno before cleanup. */
 	if (rxq != NULL) {
-		if (rxq->rq_obj.rq != NULL)
-			mlx5_devx_rq_destroy(&rxq->rq_obj);
-		if (rxq->cq_obj.cq != NULL)
-			mlx5_devx_cq_destroy(&rxq->cq_obj);
-		if (rxq->devx_channel)
-			mlx5_os_devx_destroy_event_channel
-							(rxq->devx_channel);
+		if (rxq->devx_rq.rq != NULL)
+			claim_zero(mlx5_devx_rq_destroy(&rxq->devx_rq));
 		mlx5_free(rxq);
 	}
+	if (rxq_obj != NULL) {
+		if (rxq_obj->cq_obj.cq != NULL)
+			mlx5_devx_cq_destroy(&rxq_obj->cq_obj);
+		if (rxq_obj->devx_channel)
+			mlx5_os_devx_destroy_event_channel
+							(rxq_obj->devx_channel);
+		mlx5_free(rxq_obj);
+	}
 	if (rxq_ctrl != NULL)
 		mlx5_free(rxq_ctrl);
 	rte_errno = ret; /* Restore rte_errno. */
@@ -969,12 +960,14 @@ static void
 mlx5_rxq_devx_obj_drop_release(struct rte_eth_dev *dev)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
-	struct mlx5_rxq_obj *rxq = priv->drop_queue.rxq;
-	struct mlx5_rxq_ctrl *rxq_ctrl = rxq->rxq_ctrl;
+	struct mlx5_rxq_priv *rxq = priv->drop_queue.rxq;
+	struct mlx5_rxq_ctrl *rxq_ctrl = rxq->ctrl;
+	struct mlx5_rxq_obj *rxq_obj = rxq_ctrl->obj;
 
 	mlx5_rxq_devx_obj_release(rxq);
 	mlx5_free(rxq);
 	mlx5_free(rxq_ctrl);
+	mlx5_free(rxq_obj);
 	priv->drop_queue.rxq = NULL;
 }
 
@@ -994,7 +987,7 @@ mlx5_devx_drop_action_destroy(struct rte_eth_dev *dev)
 		mlx5_devx_tir_destroy(hrxq);
 	if (hrxq->ind_table->ind_table != NULL)
 		mlx5_devx_ind_table_destroy(hrxq->ind_table);
-	if (priv->drop_queue.rxq->rq != NULL)
+	if (priv->drop_queue.rxq != NULL)
 		mlx5_rxq_devx_obj_drop_release(dev);
 }
 
diff --git a/drivers/net/mlx5/mlx5_rx.h b/drivers/net/mlx5/mlx5_rx.h
index 4eed4176324..25f7fc2071a 100644
--- a/drivers/net/mlx5/mlx5_rx.h
+++ b/drivers/net/mlx5/mlx5_rx.h
@@ -183,6 +183,7 @@ struct mlx5_rxq_priv {
 	struct mlx5_rxq_ctrl *ctrl; /* Shared Rx Queue. */
 	LIST_ENTRY(mlx5_rxq_priv) owner_entry; /* Entry in shared rxq_ctrl. */
 	struct mlx5_priv *priv; /* Back pointer to private data. */
+	struct mlx5_devx_rq devx_rq;
 	struct rte_eth_hairpin_conf hairpin_conf; /* Hairpin configuration. */
 	uint32_t hairpin_status; /* Hairpin binding status. */
 };
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index 3aac7cc20ba..98408da3c8e 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -452,13 +452,13 @@ int
 mlx5_rx_queue_stop_primary(struct rte_eth_dev *dev, uint16_t idx)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
-	struct mlx5_rxq_data *rxq = (*priv->rxqs)[idx];
-	struct mlx5_rxq_ctrl *rxq_ctrl =
-			container_of(rxq, struct mlx5_rxq_ctrl, rxq);
+	struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, idx);
+	struct mlx5_rxq_ctrl *rxq_ctrl = rxq->ctrl;
 	int ret;
 
+	MLX5_ASSERT(rxq != NULL && rxq_ctrl != NULL);
 	MLX5_ASSERT(rte_eal_process_type() == RTE_PROC_PRIMARY);
-	ret = priv->obj_ops.rxq_obj_modify(rxq_ctrl->obj, MLX5_RXQ_MOD_RDY2RST);
+	ret = priv->obj_ops.rxq_obj_modify(rxq, MLX5_RXQ_MOD_RDY2RST);
 	if (ret) {
 		DRV_LOG(ERR, "Cannot change Rx WQ state to RESET:  %s",
 			strerror(errno));
@@ -466,7 +466,7 @@ mlx5_rx_queue_stop_primary(struct rte_eth_dev *dev, uint16_t idx)
 		return ret;
 	}
 	/* Remove all processes CQEs. */
-	rxq_sync_cq(rxq);
+	rxq_sync_cq(&rxq_ctrl->rxq);
 	/* Free all involved mbufs. */
 	rxq_free_elts(rxq_ctrl);
 	/* Set the actual queue state. */
@@ -538,26 +538,26 @@ int
 mlx5_rx_queue_start_primary(struct rte_eth_dev *dev, uint16_t idx)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
-	struct mlx5_rxq_data *rxq = (*priv->rxqs)[idx];
-	struct mlx5_rxq_ctrl *rxq_ctrl =
-			container_of(rxq, struct mlx5_rxq_ctrl, rxq);
+	struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, idx);
+	struct mlx5_rxq_data *rxq_data = &rxq->ctrl->rxq;
 	int ret;
 
-	MLX5_ASSERT(rte_eal_process_type() ==  RTE_PROC_PRIMARY);
+	MLX5_ASSERT(rxq != NULL && rxq->ctrl != NULL);
+	MLX5_ASSERT(rte_eal_process_type() == RTE_PROC_PRIMARY);
 	/* Allocate needed buffers. */
-	ret = rxq_alloc_elts(rxq_ctrl);
+	ret = rxq_alloc_elts(rxq->ctrl);
 	if (ret) {
 		DRV_LOG(ERR, "Cannot reallocate buffers for Rx WQ");
 		rte_errno = errno;
 		return ret;
 	}
 	rte_io_wmb();
-	*rxq->cq_db = rte_cpu_to_be_32(rxq->cq_ci);
+	*rxq_data->cq_db = rte_cpu_to_be_32(rxq_data->cq_ci);
 	rte_io_wmb();
 	/* Reset RQ consumer before moving queue to READY state. */
-	*rxq->rq_db = rte_cpu_to_be_32(0);
+	*rxq_data->rq_db = rte_cpu_to_be_32(0);
 	rte_io_wmb();
-	ret = priv->obj_ops.rxq_obj_modify(rxq_ctrl->obj, MLX5_RXQ_MOD_RST2RDY);
+	ret = priv->obj_ops.rxq_obj_modify(rxq, MLX5_RXQ_MOD_RST2RDY);
 	if (ret) {
 		DRV_LOG(ERR, "Cannot change Rx WQ state to READY:  %s",
 			strerror(errno));
@@ -565,8 +565,8 @@ mlx5_rx_queue_start_primary(struct rte_eth_dev *dev, uint16_t idx)
 		return ret;
 	}
 	/* Reinitialize RQ - set WQEs. */
-	mlx5_rxq_initialize(rxq);
-	rxq->err_state = MLX5_RXQ_ERR_STATE_NO_ERROR;
+	mlx5_rxq_initialize(rxq_data);
+	rxq_data->err_state = MLX5_RXQ_ERR_STATE_NO_ERROR;
 	/* Set actual queue state. */
 	dev->data->rx_queue_state[idx] = RTE_ETH_QUEUE_STATE_STARTED;
 	return 0;
@@ -1770,15 +1770,19 @@ int
 mlx5_rxq_release(struct rte_eth_dev *dev, uint16_t idx)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
-	struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, idx);
-	struct mlx5_rxq_ctrl *rxq_ctrl = rxq->ctrl;
+	struct mlx5_rxq_priv *rxq;
+	struct mlx5_rxq_ctrl *rxq_ctrl;
 
-	if (priv->rxqs == NULL || (*priv->rxqs)[idx] == NULL)
+	if (priv->rxq_privs == NULL)
+		return 0;
+	rxq = mlx5_rxq_get(dev, idx);
+	if (rxq == NULL)
 		return 0;
 	if (mlx5_rxq_deref(dev, idx) > 1)
 		return 1;
-	if (rxq_ctrl->obj) {
-		priv->obj_ops.rxq_obj_release(rxq_ctrl->obj);
+	rxq_ctrl = rxq->ctrl;
+	if (rxq_ctrl->obj != NULL) {
+		priv->obj_ops.rxq_obj_release(rxq);
 		LIST_REMOVE(rxq_ctrl->obj, next);
 		mlx5_free(rxq_ctrl->obj);
 		rxq_ctrl->obj = NULL;
diff --git a/drivers/net/mlx5/mlx5_rxtx.c b/drivers/net/mlx5/mlx5_rxtx.c
index 7b984eff35f..d44d6d8e4c3 100644
--- a/drivers/net/mlx5/mlx5_rxtx.c
+++ b/drivers/net/mlx5/mlx5_rxtx.c
@@ -374,11 +374,9 @@ mlx5_queue_state_modify_primary(struct rte_eth_dev *dev,
 	struct mlx5_priv *priv = dev->data->dev_private;
 
 	if (sm->is_wq) {
-		struct mlx5_rxq_data *rxq = (*priv->rxqs)[sm->queue_id];
-		struct mlx5_rxq_ctrl *rxq_ctrl =
-			container_of(rxq, struct mlx5_rxq_ctrl, rxq);
+		struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, sm->queue_id);
 
-		ret = priv->obj_ops.rxq_obj_modify(rxq_ctrl->obj, sm->state);
+		ret = priv->obj_ops.rxq_obj_modify(rxq, sm->state);
 		if (ret) {
 			DRV_LOG(ERR, "Cannot change Rx WQ state to %u  - %s",
 					sm->state, strerror(errno));
diff --git a/drivers/net/mlx5/mlx5_trigger.c b/drivers/net/mlx5/mlx5_trigger.c
index f376f4d6fc4..b3188f510fb 100644
--- a/drivers/net/mlx5/mlx5_trigger.c
+++ b/drivers/net/mlx5/mlx5_trigger.c
@@ -180,7 +180,7 @@ mlx5_rxq_start(struct rte_eth_dev *dev)
 			rte_errno = ENOMEM;
 			goto error;
 		}
-		ret = priv->obj_ops.rxq_obj_new(dev, i);
+		ret = priv->obj_ops.rxq_obj_new(rxq);
 		if (ret) {
 			mlx5_free(rxq_ctrl->obj);
 			goto error;
diff --git a/drivers/net/mlx5/mlx5_vlan.c b/drivers/net/mlx5/mlx5_vlan.c
index 60f97f2d2d1..586ba7166cb 100644
--- a/drivers/net/mlx5/mlx5_vlan.c
+++ b/drivers/net/mlx5/mlx5_vlan.c
@@ -91,11 +91,11 @@ void
 mlx5_vlan_strip_queue_set(struct rte_eth_dev *dev, uint16_t queue, int on)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
-	struct mlx5_rxq_data *rxq = (*priv->rxqs)[queue];
-	struct mlx5_rxq_ctrl *rxq_ctrl =
-		container_of(rxq, struct mlx5_rxq_ctrl, rxq);
+	struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, queue);
+	struct mlx5_rxq_data *rxq_data = &rxq->ctrl->rxq;
 	int ret = 0;
 
+	MLX5_ASSERT(rxq != NULL && rxq->ctrl != NULL);
 	/* Validate hw support */
 	if (!priv->config.hw_vlan_strip) {
 		DRV_LOG(ERR, "port %u VLAN stripping is not supported",
@@ -109,20 +109,20 @@ mlx5_vlan_strip_queue_set(struct rte_eth_dev *dev, uint16_t queue, int on)
 		return;
 	}
 	DRV_LOG(DEBUG, "port %u set VLAN stripping offloads %d for port %u queue %d",
-		dev->data->port_id, on, rxq->port_id, queue);
-	if (!rxq_ctrl->obj) {
+		dev->data->port_id, on, rxq_data->port_id, queue);
+	if (rxq->ctrl->obj == NULL) {
 		/* Update related bits in RX queue. */
-		rxq->vlan_strip = !!on;
+		rxq_data->vlan_strip = !!on;
 		return;
 	}
-	ret = priv->obj_ops.rxq_obj_modify_vlan_strip(rxq_ctrl->obj, on);
+	ret = priv->obj_ops.rxq_obj_modify_vlan_strip(rxq, on);
 	if (ret) {
 		DRV_LOG(ERR, "Port %u failed to modify object stripping mode:"
 			" %s", dev->data->port_id, strerror(rte_errno));
 		return;
 	}
 	/* Update related bits in RX queue. */
-	rxq->vlan_strip = !!on;
+	rxq_data->vlan_strip = !!on;
 }
 
 /**
-- 
2.33.0


^ permalink raw reply	[flat|nested] 29+ messages in thread

* [dpdk-dev] [PATCH 10/11] net/mlx5: remove Rx queue data list from device
  2021-09-26 11:18 [dpdk-dev] [PATCH 00/11] net/mlx5: support shared Rx queue Xueming Li
                   ` (8 preceding siblings ...)
  2021-09-26 11:19 ` [dpdk-dev] [PATCH 09/11] net/mlx5: move Rx queue DevX resource Xueming Li
@ 2021-09-26 11:19 ` Xueming Li
  2021-09-26 11:19 ` [dpdk-dev] [PATCH 11/11] net/mlx5: support shared Rx queue Xueming Li
  2021-10-16  9:12 ` [dpdk-dev] [PATCH v2 00/13] " Xueming Li
  11 siblings, 0 replies; 29+ messages in thread
From: Xueming Li @ 2021-09-26 11:19 UTC (permalink / raw)
  To: dev; +Cc: xuemingl, Lior Margalit, Matan Azrad, Viacheslav Ovsiienko

The Rx queue data list (priv->rxqs) can be replaced by the Rx queue
list (priv->rxq_privs). Remove it and access queue data through the
universal wrapper API instead.
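
For reference, this is the accessor pattern in condensed form.
mlx5_rxq_get() below follows this patch; the mlx5_rxq_ctrl_get() and
mlx5_rxq_data_get() bodies are assumptions inferred from how callers in
this series use them, not verbatim copies:

/* Per-queue private data; NULL if the queue is not configured. */
static struct mlx5_rxq_priv *
mlx5_rxq_get(struct rte_eth_dev *dev, uint16_t idx)
{
	struct mlx5_priv *priv = dev->data->dev_private;

	if (priv->rxq_privs == NULL)
		return NULL;
	return (*priv->rxq_privs)[idx];
}

/* Shared control structure owned by the queue, or NULL. */
static struct mlx5_rxq_ctrl *
mlx5_rxq_ctrl_get(struct rte_eth_dev *dev, uint16_t idx)
{
	struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, idx);

	return rxq == NULL ? NULL : rxq->ctrl;
}

/* Datapath structure embedded in the control structure, or NULL. */
static struct mlx5_rxq_data *
mlx5_rxq_data_get(struct rte_eth_dev *dev, uint16_t idx)
{
	struct mlx5_rxq_ctrl *rxq_ctrl = mlx5_rxq_ctrl_get(dev, idx);

	return rxq_ctrl == NULL ? NULL : &rxq_ctrl->rxq;
}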

Signed-off-by: Xueming Li <xuemingl@nvidia.com>
---
 drivers/net/mlx5/linux/mlx5_verbs.c |  7 ++---
 drivers/net/mlx5/mlx5.c             | 10 +------
 drivers/net/mlx5/mlx5.h             |  1 -
 drivers/net/mlx5/mlx5_devx.c        | 13 +++++----
 drivers/net/mlx5/mlx5_ethdev.c      |  6 +---
 drivers/net/mlx5/mlx5_flow.c        | 45 +++++++++++++++--------------
 drivers/net/mlx5/mlx5_rss.c         |  6 ++--
 drivers/net/mlx5/mlx5_rx.c          | 16 ++++------
 drivers/net/mlx5/mlx5_rx.h          |  9 +++---
 drivers/net/mlx5/mlx5_rxq.c         | 23 ++++++---------
 drivers/net/mlx5/mlx5_rxtx_vec.c    |  6 ++--
 drivers/net/mlx5/mlx5_stats.c       |  9 +++---
 drivers/net/mlx5/mlx5_trigger.c     |  2 +-
 13 files changed, 66 insertions(+), 87 deletions(-)

diff --git a/drivers/net/mlx5/linux/mlx5_verbs.c b/drivers/net/mlx5/linux/mlx5_verbs.c
index a2a9b9c1f98..0e68a13208b 100644
--- a/drivers/net/mlx5/linux/mlx5_verbs.c
+++ b/drivers/net/mlx5/linux/mlx5_verbs.c
@@ -527,11 +527,10 @@ mlx5_ibv_ind_table_new(struct rte_eth_dev *dev, const unsigned int log_n,
 
 	MLX5_ASSERT(ind_tbl);
 	for (i = 0; i != ind_tbl->queues_n; ++i) {
-		struct mlx5_rxq_data *rxq = (*priv->rxqs)[ind_tbl->queues[i]];
-		struct mlx5_rxq_ctrl *rxq_ctrl =
-				container_of(rxq, struct mlx5_rxq_ctrl, rxq);
+		struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev,
+							 ind_tbl->queues[i]);
 
-		wq[i] = rxq_ctrl->obj->wq;
+		wq[i] = rxq->ctrl->obj->wq;
 	}
 	MLX5_ASSERT(i > 0);
 	/* Finalise indirection table. */
diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index 749729d6fbe..6681b74c8f0 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -1572,20 +1572,12 @@ mlx5_dev_close(struct rte_eth_dev *dev)
 	mlx5_mp_os_req_stop_rxtx(dev);
 	/* Free the eCPRI flex parser resource. */
 	mlx5_flex_parser_ecpri_release(dev);
-	if (priv->rxqs != NULL) {
+	if (priv->rxq_privs != NULL) {
 		/* XXX race condition if mlx5_rx_burst() is still running. */
 		rte_delay_us_sleep(1000);
 		for (i = 0; (i != priv->rxqs_n); ++i)
 			mlx5_rxq_release(dev, i);
 		priv->rxqs_n = 0;
-		priv->rxqs = NULL;
-	}
-	if (priv->representor) {
-		/* Each representor has a dedicated interrupts handler */
-		mlx5_free(dev->intr_handle);
-		dev->intr_handle = NULL;
-	}
-	if (priv->rxq_privs != NULL) {
 		mlx5_free(priv->rxq_privs);
 		priv->rxq_privs = NULL;
 	}
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index c674f5ba9c4..6a9c99a8826 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -1406,7 +1406,6 @@ struct mlx5_priv {
 	unsigned int rxqs_n; /* RX queues array size. */
 	unsigned int txqs_n; /* TX queues array size. */
 	struct mlx5_rxq_priv *(*rxq_privs)[]; /* RX queue non-shared data. */
-	struct mlx5_rxq_data *(*rxqs)[]; /* (Shared) RX queues. */
 	struct mlx5_txq_data *(*txqs)[]; /* TX queues. */
 	struct rte_mempool *mprq_mp; /* Mempool for Multi-Packet RQ. */
 	struct rte_eth_rss_conf rss_conf; /* RSS configuration. */
diff --git a/drivers/net/mlx5/mlx5_devx.c b/drivers/net/mlx5/mlx5_devx.c
index d219e255f0a..371ff387c99 100644
--- a/drivers/net/mlx5/mlx5_devx.c
+++ b/drivers/net/mlx5/mlx5_devx.c
@@ -682,15 +682,16 @@ mlx5_devx_tir_attr_set(struct rte_eth_dev *dev, const uint8_t *rss_key,
 
 	/* NULL queues designate drop queue. */
 	if (ind_tbl->queues != NULL) {
-		struct mlx5_rxq_data *rxq_data =
-					(*priv->rxqs)[ind_tbl->queues[0]];
-		struct mlx5_rxq_ctrl *rxq_ctrl =
-			container_of(rxq_data, struct mlx5_rxq_ctrl, rxq);
-		rxq_obj_type = rxq_ctrl->type;
+		struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev,
+							 ind_tbl->queues[0]);
 
+		rxq_obj_type = rxq->ctrl->type;
 		/* Enable TIR LRO only if all the queues were configured for. */
 		for (i = 0; i < ind_tbl->queues_n; ++i) {
-			if (!(*priv->rxqs)[ind_tbl->queues[i]]->lro) {
+			struct mlx5_rxq_data *rxq_i =
+				mlx5_rxq_data_get(dev, ind_tbl->queues[i]);
+
+			if (rxq_i != NULL && !rxq_i->lro) {
 				lro = false;
 				break;
 			}
diff --git a/drivers/net/mlx5/mlx5_ethdev.c b/drivers/net/mlx5/mlx5_ethdev.c
index 7071a5f7039..16e96da8d24 100644
--- a/drivers/net/mlx5/mlx5_ethdev.c
+++ b/drivers/net/mlx5/mlx5_ethdev.c
@@ -114,7 +114,6 @@ mlx5_dev_configure(struct rte_eth_dev *dev)
 		rte_errno = ENOMEM;
 		return -rte_errno;
 	}
-	priv->rxqs = (void *)dev->data->rx_queues;
 	priv->txqs = (void *)dev->data->tx_queues;
 	if (txqs_n != priv->txqs_n) {
 		DRV_LOG(INFO, "port %u Tx queues number update: %u -> %u",
@@ -171,11 +170,8 @@ mlx5_dev_configure_rss_reta(struct rte_eth_dev *dev)
 		return -rte_errno;
 	}
 	for (i = 0, j = 0; i < rxqs_n; i++) {
-		struct mlx5_rxq_data *rxq_data;
-		struct mlx5_rxq_ctrl *rxq_ctrl;
+		struct mlx5_rxq_ctrl *rxq_ctrl = mlx5_rxq_ctrl_get(dev, i);
 
-		rxq_data = (*priv->rxqs)[i];
-		rxq_ctrl = container_of(rxq_data, struct mlx5_rxq_ctrl, rxq);
 		if (rxq_ctrl && rxq_ctrl->type == MLX5_RXQ_TYPE_STANDARD)
 			rss_queue_arr[j++] = i;
 	}
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index c10b9112593..49a74edd2e6 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -1166,10 +1166,11 @@ flow_drv_rxq_flags_set(struct rte_eth_dev *dev,
 		return;
 	for (i = 0; i != ind_tbl->queues_n; ++i) {
 		int idx = ind_tbl->queues[i];
-		struct mlx5_rxq_ctrl *rxq_ctrl =
-			container_of((*priv->rxqs)[idx],
-				     struct mlx5_rxq_ctrl, rxq);
+		struct mlx5_rxq_ctrl *rxq_ctrl = mlx5_rxq_ctrl_get(dev, idx);
 
+		MLX5_ASSERT(rxq_ctrl != NULL);
+		if (rxq_ctrl == NULL)
+			continue;
 		/*
 		 * To support metadata register copy on Tx loopback,
 		 * this must always be enabled (metadata may arrive
@@ -1261,10 +1262,11 @@ flow_drv_rxq_flags_trim(struct rte_eth_dev *dev,
 	MLX5_ASSERT(dev->data->dev_started);
 	for (i = 0; i != ind_tbl->queues_n; ++i) {
 		int idx = ind_tbl->queues[i];
-		struct mlx5_rxq_ctrl *rxq_ctrl =
-			container_of((*priv->rxqs)[idx],
-				     struct mlx5_rxq_ctrl, rxq);
+		struct mlx5_rxq_ctrl *rxq_ctrl = mlx5_rxq_ctrl_get(dev, idx);
 
+		MLX5_ASSERT(rxq_ctrl != NULL);
+		if (rxq_ctrl == NULL)
+			continue;
 		if (priv->config.dv_flow_en &&
 		    priv->config.dv_xmeta_en != MLX5_XMETA_MODE_LEGACY &&
 		    mlx5_flow_ext_mreg_supported(dev)) {
@@ -1325,18 +1327,16 @@ flow_rxq_flags_clear(struct rte_eth_dev *dev)
 	unsigned int i;
 
 	for (i = 0; i != priv->rxqs_n; ++i) {
-		struct mlx5_rxq_ctrl *rxq_ctrl;
+		struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, i);
 		unsigned int j;
 
-		if (!(*priv->rxqs)[i])
+		if (rxq == NULL || rxq->ctrl == NULL)
 			continue;
-		rxq_ctrl = container_of((*priv->rxqs)[i],
-					struct mlx5_rxq_ctrl, rxq);
-		rxq_ctrl->flow_mark_n = 0;
-		rxq_ctrl->rxq.mark = 0;
+		rxq->ctrl->flow_mark_n = 0;
+		rxq->ctrl->rxq.mark = 0;
 		for (j = 0; j != MLX5_FLOW_TUNNEL; ++j)
-			rxq_ctrl->flow_tunnels_n[j] = 0;
-		rxq_ctrl->rxq.tunnel = 0;
+			rxq->ctrl->flow_tunnels_n[j] = 0;
+		rxq->ctrl->rxq.tunnel = 0;
 	}
 }
 
@@ -1350,13 +1350,15 @@ void
 mlx5_flow_rxq_dynf_metadata_set(struct rte_eth_dev *dev)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
-	struct mlx5_rxq_data *data;
 	unsigned int i;
 
 	for (i = 0; i != priv->rxqs_n; ++i) {
-		if (!(*priv->rxqs)[i])
+		struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, i);
+		struct mlx5_rxq_data *data;
+
+		if (rxq == NULL || rxq->ctrl == NULL)
 			continue;
-		data = (*priv->rxqs)[i];
+		data = &rxq->ctrl->rxq;
 		if (!rte_flow_dynf_metadata_avail()) {
 			data->dynf_meta = 0;
 			data->flow_meta_mask = 0;
@@ -1547,7 +1549,7 @@ mlx5_flow_validate_action_queue(const struct rte_flow_action *action,
 					  RTE_FLOW_ERROR_TYPE_ACTION_CONF,
 					  &queue->index,
 					  "queue index out of range");
-	if (!(*priv->rxqs)[queue->index])
+	if (mlx5_rxq_get(dev, queue->index) == NULL)
 		return rte_flow_error_set(error, EINVAL,
 					  RTE_FLOW_ERROR_TYPE_ACTION_CONF,
 					  &queue->index,
@@ -1578,7 +1580,7 @@ mlx5_flow_validate_action_queue(const struct rte_flow_action *action,
  *   0 on success, a negative errno code on error.
  */
 static int
-mlx5_validate_rss_queues(const struct rte_eth_dev *dev,
+mlx5_validate_rss_queues(struct rte_eth_dev *dev,
 			 const uint16_t *queues, uint32_t queues_n,
 			 const char **error, uint32_t *queue_idx)
 {
@@ -1594,13 +1596,12 @@ mlx5_validate_rss_queues(const struct rte_eth_dev *dev,
 			*queue_idx = i;
 			return -EINVAL;
 		}
-		if (!(*priv->rxqs)[queues[i]]) {
+		rxq_ctrl = mlx5_rxq_ctrl_get(dev, queues[i]);
+		if (rxq_ctrl == NULL) {
 			*error =  "queue is not configured";
 			*queue_idx = i;
 			return -EINVAL;
 		}
-		rxq_ctrl = container_of((*priv->rxqs)[queues[i]],
-					struct mlx5_rxq_ctrl, rxq);
 		if (i == 0)
 			rxq_type = rxq_ctrl->type;
 		if (rxq_type != rxq_ctrl->type) {
diff --git a/drivers/net/mlx5/mlx5_rss.c b/drivers/net/mlx5/mlx5_rss.c
index c32129cdc2b..9ffc44b179f 100644
--- a/drivers/net/mlx5/mlx5_rss.c
+++ b/drivers/net/mlx5/mlx5_rss.c
@@ -65,9 +65,11 @@ mlx5_rss_hash_update(struct rte_eth_dev *dev,
 	priv->rss_conf.rss_hf = rss_conf->rss_hf;
 	/* Enable the RSS hash in all Rx queues. */
 	for (i = 0, idx = 0; idx != priv->rxqs_n; ++i) {
-		if (!(*priv->rxqs)[i])
+		struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, i);
+
+		if (rxq == NULL || rxq->ctrl == NULL)
 			continue;
-		(*priv->rxqs)[i]->rss_hash = !!rss_conf->rss_hf &&
+		rxq->ctrl->rxq.rss_hash = !!rss_conf->rss_hf &&
 			!!(dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS);
 		++idx;
 	}
diff --git a/drivers/net/mlx5/mlx5_rx.c b/drivers/net/mlx5/mlx5_rx.c
index 09de26c0d39..13fbd12b22c 100644
--- a/drivers/net/mlx5/mlx5_rx.c
+++ b/drivers/net/mlx5/mlx5_rx.c
@@ -148,10 +148,8 @@ void
 mlx5_rxq_info_get(struct rte_eth_dev *dev, uint16_t rx_queue_id,
 		  struct rte_eth_rxq_info *qinfo)
 {
-	struct mlx5_priv *priv = dev->data->dev_private;
-	struct mlx5_rxq_data *rxq = (*priv->rxqs)[rx_queue_id];
-	struct mlx5_rxq_ctrl *rxq_ctrl =
-		container_of(rxq, struct mlx5_rxq_ctrl, rxq);
+	struct mlx5_rxq_ctrl *rxq_ctrl = mlx5_rxq_ctrl_get(dev, rx_queue_id);
+	struct mlx5_rxq_data *rxq = mlx5_rxq_data_get(dev, rx_queue_id);
 
 	if (!rxq)
 		return;
@@ -162,7 +160,7 @@ mlx5_rxq_info_get(struct rte_eth_dev *dev, uint16_t rx_queue_id,
 	qinfo->conf.rx_thresh.wthresh = 0;
 	qinfo->conf.rx_free_thresh = rxq->rq_repl_thresh;
 	qinfo->conf.rx_drop_en = 1;
-	qinfo->conf.rx_deferred_start = rxq_ctrl ? 0 : 1;
+	qinfo->conf.rx_deferred_start = rxq_ctrl->obj == NULL ? 0 : 1;
 	qinfo->conf.offloads = dev->data->dev_conf.rxmode.offloads;
 	qinfo->scattered_rx = dev->data->scattered_rx;
 	qinfo->nb_desc = mlx5_rxq_mprq_enabled(rxq) ?
@@ -191,10 +189,8 @@ mlx5_rx_burst_mode_get(struct rte_eth_dev *dev,
 		       struct rte_eth_burst_mode *mode)
 {
 	eth_rx_burst_t pkt_burst = dev->rx_pkt_burst;
-	struct mlx5_priv *priv = dev->data->dev_private;
-	struct mlx5_rxq_data *rxq;
+	struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, rx_queue_id);
 
-	rxq = (*priv->rxqs)[rx_queue_id];
 	if (!rxq) {
 		rte_errno = EINVAL;
 		return -rte_errno;
@@ -245,15 +241,13 @@ mlx5_rx_burst_mode_get(struct rte_eth_dev *dev,
 uint32_t
 mlx5_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 {
-	struct mlx5_priv *priv = dev->data->dev_private;
-	struct mlx5_rxq_data *rxq;
+	struct mlx5_rxq_data *rxq = mlx5_rxq_data_get(dev, rx_queue_id);
 
 	if (dev->rx_pkt_burst == NULL ||
 	    dev->rx_pkt_burst == removed_rx_burst) {
 		rte_errno = ENOTSUP;
 		return -rte_errno;
 	}
-	rxq = (*priv->rxqs)[rx_queue_id];
 	if (!rxq) {
 		rte_errno = EINVAL;
 		return -rte_errno;
diff --git a/drivers/net/mlx5/mlx5_rx.h b/drivers/net/mlx5/mlx5_rx.h
index 25f7fc2071a..161399c764d 100644
--- a/drivers/net/mlx5/mlx5_rx.h
+++ b/drivers/net/mlx5/mlx5_rx.h
@@ -606,14 +606,13 @@ mlx5_mprq_enabled(struct rte_eth_dev *dev)
 		return 0;
 	/* All the configured queues should be enabled. */
 	for (i = 0; i < priv->rxqs_n; ++i) {
-		struct mlx5_rxq_data *rxq = (*priv->rxqs)[i];
-		struct mlx5_rxq_ctrl *rxq_ctrl = container_of
-			(rxq, struct mlx5_rxq_ctrl, rxq);
+		struct mlx5_rxq_ctrl *rxq_ctrl = mlx5_rxq_ctrl_get(dev, i);
 
-		if (rxq == NULL || rxq_ctrl->type != MLX5_RXQ_TYPE_STANDARD)
+		if (rxq_ctrl == NULL ||
+		    rxq_ctrl->type != MLX5_RXQ_TYPE_STANDARD)
 			continue;
 		n_ibv++;
-		if (mlx5_rxq_mprq_enabled(rxq))
+		if (mlx5_rxq_mprq_enabled(&rxq_ctrl->rxq))
 			++n;
 	}
 	/* Multi-Packet RQ can't be partially configured. */
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index 98408da3c8e..cde01a48022 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -729,7 +729,7 @@ mlx5_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 	}
 	DRV_LOG(DEBUG, "port %u adding Rx queue %u to list",
 		dev->data->port_id, idx);
-	(*priv->rxqs)[idx] = &rxq_ctrl->rxq;
+	dev->data->rx_queues[idx] = &rxq_ctrl->rxq;
 	return 0;
 }
 
@@ -811,7 +811,7 @@ mlx5_rx_hairpin_queue_setup(struct rte_eth_dev *dev, uint16_t idx,
 	}
 	DRV_LOG(DEBUG, "port %u adding hairpin Rx queue %u to list",
 		dev->data->port_id, idx);
-	(*priv->rxqs)[idx] = &rxq_ctrl->rxq;
+	dev->data->rx_queues[idx] = &rxq_ctrl->rxq;
 	return 0;
 }
 
@@ -1712,8 +1712,7 @@ mlx5_rxq_get(struct rte_eth_dev *dev, uint16_t idx)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
 
-	if (priv->rxq_privs == NULL)
-		return NULL;
+	MLX5_ASSERT(priv->rxq_privs != NULL);
 	return (*priv->rxq_privs)[idx];
 }
 
@@ -1799,7 +1798,7 @@ mlx5_rxq_release(struct rte_eth_dev *dev, uint16_t idx)
 		LIST_REMOVE(rxq, owner_entry);
 		LIST_REMOVE(rxq_ctrl, next);
 		mlx5_free(rxq_ctrl);
-		(*priv->rxqs)[idx] = NULL;
+		dev->data->rx_queues[idx] = NULL;
 		mlx5_free(rxq);
 		(*priv->rxq_privs)[idx] = NULL;
 	}
@@ -1845,14 +1844,10 @@ enum mlx5_rxq_type
 mlx5_rxq_get_type(struct rte_eth_dev *dev, uint16_t idx)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
-	struct mlx5_rxq_ctrl *rxq_ctrl = NULL;
+	struct mlx5_rxq_ctrl *rxq_ctrl = mlx5_rxq_ctrl_get(dev, idx);
 
-	if (idx < priv->rxqs_n && (*priv->rxqs)[idx]) {
-		rxq_ctrl = container_of((*priv->rxqs)[idx],
-					struct mlx5_rxq_ctrl,
-					rxq);
+	if (idx < priv->rxqs_n && rxq_ctrl != NULL)
 		return rxq_ctrl->type;
-	}
 	return MLX5_RXQ_TYPE_UNDEFINED;
 }
 
@@ -2619,13 +2614,13 @@ mlx5_rxq_timestamp_set(struct rte_eth_dev *dev)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
 	struct mlx5_dev_ctx_shared *sh = priv->sh;
-	struct mlx5_rxq_data *data;
 	unsigned int i;
 
 	for (i = 0; i != priv->rxqs_n; ++i) {
-		if (!(*priv->rxqs)[i])
+		struct mlx5_rxq_data *data = mlx5_rxq_data_get(dev, i);
+
+		if (data == NULL)
 			continue;
-		data = (*priv->rxqs)[i];
 		data->sh = sh;
 		data->rt_timestamp = priv->config.rt_timestamp;
 	}
diff --git a/drivers/net/mlx5/mlx5_rxtx_vec.c b/drivers/net/mlx5/mlx5_rxtx_vec.c
index 511681841ca..6212ce8247d 100644
--- a/drivers/net/mlx5/mlx5_rxtx_vec.c
+++ b/drivers/net/mlx5/mlx5_rxtx_vec.c
@@ -578,11 +578,11 @@ mlx5_check_vec_rx_support(struct rte_eth_dev *dev)
 		return -ENOTSUP;
 	/* All the configured queues should support. */
 	for (i = 0; i < priv->rxqs_n; ++i) {
-		struct mlx5_rxq_data *rxq = (*priv->rxqs)[i];
+		struct mlx5_rxq_data *rxq_data = mlx5_rxq_data_get(dev, i);
 
-		if (!rxq)
+		if (!rxq_data)
 			continue;
-		if (mlx5_rxq_check_vec_support(rxq) < 0)
+		if (mlx5_rxq_check_vec_support(rxq_data) < 0)
 			break;
 	}
 	if (i != priv->rxqs_n)
diff --git a/drivers/net/mlx5/mlx5_stats.c b/drivers/net/mlx5/mlx5_stats.c
index ae2f5668a74..732775954ad 100644
--- a/drivers/net/mlx5/mlx5_stats.c
+++ b/drivers/net/mlx5/mlx5_stats.c
@@ -107,7 +107,7 @@ mlx5_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
 	memset(&tmp, 0, sizeof(tmp));
 	/* Add software counters. */
 	for (i = 0; (i != priv->rxqs_n); ++i) {
-		struct mlx5_rxq_data *rxq = (*priv->rxqs)[i];
+		struct mlx5_rxq_data *rxq = mlx5_rxq_data_get(dev, i);
 
 		if (rxq == NULL)
 			continue;
@@ -181,10 +181,11 @@ mlx5_stats_reset(struct rte_eth_dev *dev)
 	unsigned int i;
 
 	for (i = 0; (i != priv->rxqs_n); ++i) {
-		if ((*priv->rxqs)[i] == NULL)
+		struct mlx5_rxq_data *rxq_data = mlx5_rxq_data_get(dev, i);
+
+		if (rxq_data == NULL)
 			continue;
-		memset(&(*priv->rxqs)[i]->stats, 0,
-		       sizeof(struct mlx5_rxq_stats));
+		memset(&rxq_data->stats, 0, sizeof(struct mlx5_rxq_stats));
 	}
 	for (i = 0; (i != priv->txqs_n); ++i) {
 		if ((*priv->txqs)[i] == NULL)
diff --git a/drivers/net/mlx5/mlx5_trigger.c b/drivers/net/mlx5/mlx5_trigger.c
index b3188f510fb..1e865e74e39 100644
--- a/drivers/net/mlx5/mlx5_trigger.c
+++ b/drivers/net/mlx5/mlx5_trigger.c
@@ -176,7 +176,7 @@ mlx5_rxq_start(struct rte_eth_dev *dev)
 		if (!rxq_ctrl->obj) {
 			DRV_LOG(ERR,
 				"Port %u Rx queue %u can't allocate resources.",
-				dev->data->port_id, (*priv->rxqs)[i]->idx);
+				dev->data->port_id, i);
 			rte_errno = ENOMEM;
 			goto error;
 		}
-- 
2.33.0


^ permalink raw reply	[flat|nested] 29+ messages in thread

* [dpdk-dev] [PATCH 11/11] net/mlx5: support shared Rx queue
  2021-09-26 11:18 [dpdk-dev] [PATCH 00/11] net/mlx5: support shared Rx queue Xueming Li
                   ` (9 preceding siblings ...)
  2021-09-26 11:19 ` [dpdk-dev] [PATCH 10/11] net/mlx5: remove Rx queue data list from device Xueming Li
@ 2021-09-26 11:19 ` Xueming Li
  2021-10-16  9:12 ` [dpdk-dev] [PATCH v2 00/13] " Xueming Li
  11 siblings, 0 replies; 29+ messages in thread
From: Xueming Li @ 2021-09-26 11:19 UTC (permalink / raw)
  To: dev; +Cc: xuemingl, Lior Margalit, Matan Azrad, Viacheslav Ovsiienko

This patch introduces shared RXQ. All shared Rx queues with the same
group and queue ID share the same rxq_ctrl. Both rxq_ctrl and rxq_data
are shared; all queues from different member ports share the same WQ
and CQ, essentially one Rx WQ, and mbufs are filled into this
singleton WQ.

The shared rxq_data is set as the Rx queue object in the device Rx
queues of all member ports and is used for receiving packets. Polling
a queue of any member port may return packets of any member;
mbuf->port identifies the source port.
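
From the application side, the expected usage is roughly the following
sketch. The RTE_ETH_RX_OFFLOAD_SHARED_RXQ flag and the shared_group
field come from the dependent "ethdev: introduce shared Rx queue"
series, and mbuf_pool is assumed to be created by the caller:

#include <rte_ethdev.h>

/* Join queue 0 of every probed port to share group 1, so that one
 * WQ/CQ pair backs all of them.
 */
static int
setup_shared_rxq(struct rte_mempool *mbuf_pool)
{
	struct rte_eth_rxconf rxconf = {
		.offloads = RTE_ETH_RX_OFFLOAD_SHARED_RXQ,
		.shared_group = 1,
	};
	uint16_t port_id;
	int ret;

	RTE_ETH_FOREACH_DEV(port_id) {
		ret = rte_eth_rx_queue_setup(port_id, 0, 512,
					     SOCKET_ID_ANY, &rxconf,
					     mbuf_pool);
		if (ret != 0)
			return ret;
	}
	/* Polling queue 0 of any member port may return packets of any
	 * member; mbuf->port identifies the source port.
	 */
	return 0;
}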

Signed-off-by: Xueming Li <xuemingl@nvidia.com>
---
 doc/guides/nics/features/mlx5.ini |   1 +
 doc/guides/nics/mlx5.rst          |   6 +
 drivers/net/mlx5/linux/mlx5_os.c  |   2 +
 drivers/net/mlx5/mlx5.h           |   2 +
 drivers/net/mlx5/mlx5_devx.c      |   9 +-
 drivers/net/mlx5/mlx5_rx.h        |   7 +
 drivers/net/mlx5/mlx5_rxq.c       | 208 ++++++++++++++++++++++++++----
 drivers/net/mlx5/mlx5_trigger.c   |  76 ++++++-----
 8 files changed, 255 insertions(+), 56 deletions(-)

diff --git a/doc/guides/nics/features/mlx5.ini b/doc/guides/nics/features/mlx5.ini
index f01abd4231f..ff5e669acc1 100644
--- a/doc/guides/nics/features/mlx5.ini
+++ b/doc/guides/nics/features/mlx5.ini
@@ -11,6 +11,7 @@ Removal event        = Y
 Rx interrupt         = Y
 Fast mbuf free       = Y
 Queue start/stop     = Y
+Shared Rx queue      = Y
 Burst mode info      = Y
 Power mgmt address monitor = Y
 MTU update           = Y
diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index ca3e7f560da..494ee957c1d 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -113,6 +113,7 @@ Features
 - Connection tracking.
 - Sub-Function representors.
 - Sub-Function.
+- Shared Rx queue.
 
 
 Limitations
@@ -464,6 +465,11 @@ Limitations
   - In order to achieve best insertion rate, application should manage the flows per lcore.
   - Better to disable memory reclaim by setting ``reclaim_mem_mode`` to 0 to accelerate the flow object allocation and release with cache.
 
+ Shared Rx queue:
+
+  - Counters of received packets and bytes of all devices in the same share group are identical.
+  - Counters of received packets and bytes of all queues with the same group and queue ID are identical.
+
 Statistics
 ----------
 
diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c
index 27233b679c6..b631768b4f9 100644
--- a/drivers/net/mlx5/linux/mlx5_os.c
+++ b/drivers/net/mlx5/linux/mlx5_os.c
@@ -457,6 +457,7 @@ mlx5_alloc_shared_dr(struct mlx5_priv *priv)
 			mlx5_glue->dr_create_flow_action_default_miss();
 	if (!sh->default_miss_action)
 		DRV_LOG(WARNING, "Default miss action is not supported.");
+	LIST_INIT(&sh->shared_rxqs);
 	return 0;
 error:
 	/* Rollback the created objects. */
@@ -531,6 +532,7 @@ mlx5_os_free_shared_dr(struct mlx5_priv *priv)
 	MLX5_ASSERT(sh && sh->refcnt);
 	if (sh->refcnt > 1)
 		return;
+	MLX5_ASSERT(LIST_EMPTY(&sh->shared_rxqs));
 #ifdef HAVE_MLX5DV_DR
 	if (sh->rx_domain) {
 		mlx5_glue->dr_destroy_domain(sh->rx_domain);
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 6a9c99a8826..c671c8a354f 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -1193,6 +1193,7 @@ struct mlx5_dev_ctx_shared {
 	struct mlx5_flex_parser_profiles fp[MLX5_FLEX_PARSER_MAX];
 	/* Flex parser profiles information. */
 	void *devx_rx_uar; /* DevX UAR for Rx. */
+	LIST_HEAD(shared_rxqs, mlx5_rxq_ctrl) shared_rxqs; /* Shared RXQs. */
 	struct mlx5_aso_age_mng *aso_age_mng;
 	/* Management data for aging mechanism using ASO Flow Hit. */
 	struct mlx5_geneve_tlv_option_resource *geneve_tlv_option_resource;
@@ -1257,6 +1258,7 @@ struct mlx5_rxq_obj {
 		};
 		struct mlx5_devx_obj *rq; /* DevX RQ object for hairpin. */
 		struct {
+			struct mlx5_devx_rmp devx_rmp; /* RMP for shared RQ. */
 			struct mlx5_devx_cq cq_obj; /* DevX CQ object. */
 			void *devx_channel;
 		};
diff --git a/drivers/net/mlx5/mlx5_devx.c b/drivers/net/mlx5/mlx5_devx.c
index 371ff387c99..01561639038 100644
--- a/drivers/net/mlx5/mlx5_devx.c
+++ b/drivers/net/mlx5/mlx5_devx.c
@@ -170,6 +170,8 @@ mlx5_rxq_devx_obj_release(struct mlx5_rxq_priv *rxq)
 		memset(&rxq->devx_rq, 0, sizeof(rxq->devx_rq));
 		mlx5_devx_cq_destroy(&rxq_obj->cq_obj);
 		memset(&rxq_obj->cq_obj, 0, sizeof(rxq_obj->cq_obj));
+		if (!RXQ_CTRL_LAST(rxq))
+			return;
 		if (rxq_obj->devx_channel) {
 			mlx5_os_devx_destroy_event_channel
 							(rxq_obj->devx_channel);
@@ -270,6 +272,8 @@ mlx5_rxq_create_devx_rq_resources(struct mlx5_rxq_priv *rxq)
 	rq_attr.wq_attr.pd = priv->sh->pdn;
 	rq_attr.counter_set_id = priv->counter_set_id;
 	/* Create RQ using DevX API. */
+	if (rxq_data->shared)
+		rxq->devx_rq.rmp = &rxq_ctrl->obj->devx_rmp;
 	return mlx5_devx_rq_create(priv->sh->ctx, &rxq->devx_rq,
 				   wqe_size, log_desc_n, &rq_attr,
 				   rxq_ctrl->socket);
@@ -495,7 +499,10 @@ mlx5_rxq_devx_obj_new(struct mlx5_rxq_priv *rxq)
 	ret = mlx5_devx_modify_rq(rxq, MLX5_RXQ_MOD_RST2RDY);
 	if (ret)
 		goto error;
-	rxq_data->wqes = (void *)(uintptr_t)rxq->devx_rq.wq.umem_buf;
+	if (rxq_data->shared)
+		rxq_data->wqes = (void *)(uintptr_t)tmpl->devx_rmp.wq.umem_buf;
+	else
+		rxq_data->wqes = (void *)(uintptr_t)rxq->devx_rq.wq.umem_buf;
 	rxq_data->rq_db = (uint32_t *)(uintptr_t)rxq->devx_rq.db_rec;
 	mlx5_rxq_initialize(rxq_data);
 	priv->dev_data->rx_queue_state[rxq->idx] = RTE_ETH_QUEUE_STATE_STARTED;
diff --git a/drivers/net/mlx5/mlx5_rx.h b/drivers/net/mlx5/mlx5_rx.h
index 161399c764d..a83fa6e8db1 100644
--- a/drivers/net/mlx5/mlx5_rx.h
+++ b/drivers/net/mlx5/mlx5_rx.h
@@ -26,6 +26,9 @@
 #define RXQ_PORT(rxq_ctrl) LIST_FIRST(&(rxq_ctrl)->owners)->priv
 #define RXQ_DEV(rxq_ctrl) ETH_DEV(RXQ_PORT(rxq_ctrl))
 #define RXQ_PORT_ID(rxq_ctrl) PORT_ID(RXQ_PORT(rxq_ctrl))
+#define RXQ_CTRL_LAST(rxq) \
+	(LIST_FIRST(&(rxq)->ctrl->owners) == (rxq) && \
+	LIST_NEXT((rxq), owner_entry) == NULL)
 
 struct mlx5_rxq_stats {
 #ifdef MLX5_PMD_SOFT_COUNTERS
@@ -107,6 +110,7 @@ struct mlx5_rxq_data {
 	unsigned int lro:1; /* Enable LRO. */
 	unsigned int dynf_meta:1; /* Dynamic metadata is configured. */
 	unsigned int mcqe_format:3; /* CQE compression format. */
+	unsigned int shared:1; /* Shared RXQ. */
 	volatile uint32_t *rq_db;
 	volatile uint32_t *cq_db;
 	uint16_t port_id;
@@ -169,6 +173,9 @@ struct mlx5_rxq_ctrl {
 	struct mlx5_dev_ctx_shared *sh; /* Shared context. */
 	enum mlx5_rxq_type type; /* Rxq type. */
 	unsigned int socket; /* CPU socket ID for allocations. */
+	LIST_ENTRY(mlx5_rxq_ctrl) share_entry; /* Entry in shared RXQ list. */
+	uint32_t share_group; /* Group ID of shared RXQ. */
+	unsigned int started:1; /* Whether (shared) RXQ has been started. */
 	unsigned int irq:1; /* Whether IRQ is enabled. */
 	uint32_t flow_mark_n; /* Number of Mark/Flag flows using this Queue. */
 	uint32_t flow_tunnels_n[MLX5_FLOW_TUNNEL]; /* Tunnels counters. */
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index cde01a48022..45f78ad076b 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -28,6 +28,7 @@
 #include "mlx5_rx.h"
 #include "mlx5_utils.h"
 #include "mlx5_autoconf.h"
+#include "mlx5_devx.h"
 
 
 /* Default RSS hash key also used for ConnectX-3. */
@@ -352,6 +353,9 @@ mlx5_get_rx_queue_offloads(struct rte_eth_dev *dev)
 		offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
 	if (MLX5_LRO_SUPPORTED(dev))
 		offloads |= DEV_RX_OFFLOAD_TCP_LRO;
+	if (priv->config.hca_attr.mem_rq_rmp &&
+	    priv->obj_ops.rxq_obj_new == devx_obj_ops.rxq_obj_new)
+		offloads |= RTE_ETH_RX_OFFLOAD_SHARED_RXQ;
 	return offloads;
 }
 
@@ -648,6 +652,114 @@ mlx5_rx_queue_pre_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t *desc)
 	return 0;
 }
 
+/**
+ * Get the shared Rx queue object that matches group and queue index.
+ *
+ * @param dev
+ *   Pointer to Ethernet device structure.
+ * @param group
+ *   Shared RXQ group.
+ * @param idx
+ *   RX queue index.
+ *
+ * @return
+ *   Shared RXQ object that matching, or NULL if not found.
+ */
+static struct mlx5_rxq_ctrl *
+mlx5_shared_rxq_get(struct rte_eth_dev *dev, uint32_t group, uint16_t idx)
+{
+	struct mlx5_rxq_ctrl *rxq_ctrl;
+	struct mlx5_priv *priv = dev->data->dev_private;
+
+	LIST_FOREACH(rxq_ctrl, &priv->sh->shared_rxqs, share_entry) {
+		if (rxq_ctrl->share_group == group && rxq_ctrl->rxq.idx == idx)
+			return rxq_ctrl;
+	}
+	return NULL;
+}
+
+/**
+ * Check whether requested Rx queue configuration matches shared RXQ.
+ *
+ * @param rxq_ctrl
+ *   Pointer to shared RXQ.
+ * @param dev
+ *   Pointer to Ethernet device structure.
+ * @param idx
+ *   Queue index.
+ * @param desc
+ *   Number of descriptors to configure in queue.
+ * @param socket
+ *   NUMA socket on which memory must be allocated.
+ * @param[in] conf
+ *   Thresholds parameters.
+ * @param mp
+ *   Memory pool for buffer allocations.
+ *
+ * @return
+ *   true if the requested configuration matches the shared RXQ, false otherwise.
+ */
+static bool
+mlx5_shared_rxq_match(struct mlx5_rxq_ctrl *rxq_ctrl, struct rte_eth_dev *dev,
+		      uint16_t idx, uint16_t desc, unsigned int socket,
+		      const struct rte_eth_rxconf *conf,
+		      struct rte_mempool *mp)
+{
+	struct mlx5_priv *spriv = LIST_FIRST(&rxq_ctrl->owners)->priv;
+	struct mlx5_priv *priv = dev->data->dev_private;
+	unsigned int mprq_stride_nums = priv->config.mprq.stride_num_n ?
+		priv->config.mprq.stride_num_n : MLX5_MPRQ_STRIDE_NUM_N;
+
+	RTE_SET_USED(conf);
+	if (rxq_ctrl->socket != socket) {
+		DRV_LOG(ERR, "port %u queue index %u failed to join shared group: socket mismatch",
+			dev->data->port_id, idx);
+		return false;
+	}
+	if (priv->config.mprq.enabled)
+		desc >>= mprq_stride_nums;
+	if (rxq_ctrl->rxq.elts_n != log2above(desc)) {
+		DRV_LOG(ERR, "port %u queue index %u failed to join shared group: descriptor number mismatch",
+			dev->data->port_id, idx);
+		return false;
+	}
+	if (priv->mtu != spriv->mtu) {
+		DRV_LOG(ERR, "port %u queue index %u failed to join shared group: mtu mismatch",
+			dev->data->port_id, idx);
+		return false;
+	}
+	if (priv->dev_data->dev_conf.intr_conf.rxq !=
+	    spriv->dev_data->dev_conf.intr_conf.rxq) {
+		DRV_LOG(ERR, "port %u queue index %u failed to join shared group: interrupt mismatch",
+			dev->data->port_id, idx);
+		return false;
+	}
+	if (!spriv->config.mprq.enabled && rxq_ctrl->rxq.mp != mp) {
+		DRV_LOG(ERR, "port %u queue index %u failed to join shared group: mempool mismatch",
+			dev->data->port_id, idx);
+		return false;
+	}
+	if (priv->config.hw_padding != spriv->config.hw_padding) {
+		DRV_LOG(ERR, "port %u queue index %u failed to join shared group: padding mismatch",
+			dev->data->port_id, idx);
+		return false;
+	}
+	if (memcmp(&priv->config.mprq, &spriv->config.mprq,
+		   sizeof(priv->config.mprq)) != 0) {
+		DRV_LOG(ERR, "port %u queue index %u failed to join shared group: MPRQ mismatch",
+			dev->data->port_id, idx);
+		return false;
+	}
+	if (priv->config.cqe_comp != spriv->config.cqe_comp ||
+	    (priv->config.cqe_comp &&
+	     priv->config.cqe_comp_fmt != spriv->config.cqe_comp_fmt)) {
+		DRV_LOG(ERR, "port %u queue index %u failed to join shared group: CQE compression mismatch",
+			dev->data->port_id, idx);
+		return false;
+	}
+	return true;
+}
+
 /**
  *
  * @param dev
@@ -673,12 +785,14 @@ mlx5_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
 	struct mlx5_rxq_priv *rxq;
-	struct mlx5_rxq_ctrl *rxq_ctrl;
+	struct mlx5_rxq_ctrl *rxq_ctrl = NULL;
 	struct rte_eth_rxseg_split *rx_seg =
 				(struct rte_eth_rxseg_split *)conf->rx_seg;
 	struct rte_eth_rxseg_split rx_single = {.mp = mp};
 	uint16_t n_seg = conf->rx_nseg;
 	int res;
+	uint64_t offloads = conf->offloads |
+			    dev->data->dev_conf.rxmode.offloads;
 
 	if (mp) {
 		/*
@@ -690,9 +804,6 @@ mlx5_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 		n_seg = 1;
 	}
 	if (n_seg > 1) {
-		uint64_t offloads = conf->offloads |
-				    dev->data->dev_conf.rxmode.offloads;
-
 		/* The offloads should be checked on rte_eth_dev layer. */
 		MLX5_ASSERT(offloads & DEV_RX_OFFLOAD_SCATTER);
 		if (!(offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT)) {
@@ -704,9 +815,32 @@ mlx5_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 		}
 		MLX5_ASSERT(n_seg < MLX5_MAX_RXQ_NSEG);
 	}
+	if (offloads & RTE_ETH_RX_OFFLOAD_SHARED_RXQ) {
+		if (!priv->config.hca_attr.mem_rq_rmp) {
+			DRV_LOG(ERR, "port %u queue index %u shared Rx queue not supported by fw",
+				     dev->data->port_id, idx);
+			rte_errno = EINVAL;
+			return -rte_errno;
+		}
+		if (priv->obj_ops.rxq_obj_new != devx_obj_ops.rxq_obj_new) {
+			DRV_LOG(ERR, "port %u queue index %u shared Rx queue needs DevX api",
+				     dev->data->port_id, idx);
+			rte_errno = EINVAL;
+			return -rte_errno;
+		}
+		/* Try to reuse shared RXQ. */
+		rxq_ctrl = mlx5_shared_rxq_get(dev, conf->shared_group, idx);
+		if (rxq_ctrl != NULL &&
+		    !mlx5_shared_rxq_match(rxq_ctrl, dev, idx, desc, socket,
+					   conf, mp)) {
+			rte_errno = EINVAL;
+			return -rte_errno;
+		}
+	}
 	res = mlx5_rx_queue_pre_setup(dev, idx, &desc);
 	if (res)
 		return res;
+	/* Allocate RXQ. */
 	rxq = mlx5_malloc(MLX5_MEM_ANY | MLX5_MEM_ZERO, sizeof(*rxq), 0,
 			  SOCKET_ID_ANY);
 	if (!rxq) {
@@ -718,14 +852,22 @@ mlx5_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 	rxq->priv = priv;
 	rxq->idx = idx;
 	(*priv->rxq_privs)[idx] = rxq;
-	rxq_ctrl = mlx5_rxq_new(dev, rxq, desc, socket, conf, rx_seg, n_seg);
-	if (!rxq_ctrl) {
-		DRV_LOG(ERR, "port %u unable to allocate rx queue index %u",
-			dev->data->port_id, idx);
-		mlx5_free(rxq);
-		(*priv->rxq_privs)[idx] = NULL;
-		rte_errno = ENOMEM;
-		return -rte_errno;
+	if (rxq_ctrl != NULL) {
+		/* Join owner list of shared RXQ. */
+		LIST_INSERT_HEAD(&rxq_ctrl->owners, rxq, owner_entry);
+		rxq->ctrl = rxq_ctrl;
+	} else {
+		/* Create new shared RXQ. */
+		rxq_ctrl = mlx5_rxq_new(dev, rxq, desc, socket, conf, rx_seg,
+					n_seg);
+		if (rxq_ctrl == NULL) {
+			DRV_LOG(ERR, "port %u unable to allocate rx queue index %u",
+				dev->data->port_id, idx);
+			mlx5_free(rxq);
+			(*priv->rxq_privs)[idx] = NULL;
+			rte_errno = ENOMEM;
+			return -rte_errno;
+		}
 	}
 	DRV_LOG(DEBUG, "port %u adding Rx queue %u to list",
 		dev->data->port_id, idx);
@@ -1071,6 +1213,9 @@ mlx5_rxq_obj_verify(struct rte_eth_dev *dev)
 	struct mlx5_rxq_obj *rxq_obj;
 
 	LIST_FOREACH(rxq_obj, &priv->rxqsobj, next) {
+		if (rxq_obj->rxq_ctrl->rxq.shared &&
+		    !LIST_EMPTY(&rxq_obj->rxq_ctrl->owners))
+			continue;
 		DRV_LOG(DEBUG, "port %u Rx queue %u still referenced",
 			dev->data->port_id, rxq_obj->rxq_ctrl->rxq.idx);
 		++ret;
@@ -1348,6 +1493,10 @@ mlx5_rxq_new(struct rte_eth_dev *dev, struct mlx5_rxq_priv *rxq,
 		return NULL;
 	}
 	LIST_INIT(&tmpl->owners);
+	if (offloads & RTE_ETH_RX_OFFLOAD_SHARED_RXQ) {
+		tmpl->rxq.shared = 1;
+		LIST_INSERT_HEAD(&priv->sh->shared_rxqs, tmpl, share_entry);
+	}
 	rxq->ctrl = tmpl;
 	LIST_INSERT_HEAD(&tmpl->owners, rxq, owner_entry);
 	MLX5_ASSERT(n_seg && n_seg <= MLX5_MAX_RXQ_NSEG);
@@ -1771,6 +1920,7 @@ mlx5_rxq_release(struct rte_eth_dev *dev, uint16_t idx)
 	struct mlx5_priv *priv = dev->data->dev_private;
 	struct mlx5_rxq_priv *rxq;
 	struct mlx5_rxq_ctrl *rxq_ctrl;
+	bool free_ctrl;
 
 	if (priv->rxq_privs == NULL)
 		return 0;
@@ -1780,24 +1930,36 @@ mlx5_rxq_release(struct rte_eth_dev *dev, uint16_t idx)
 	if (mlx5_rxq_deref(dev, idx) > 1)
 		return 1;
 	rxq_ctrl = rxq->ctrl;
-	if (rxq_ctrl->obj != NULL) {
+	/* Check whether this is the last owner entry of the shared RXQ. */
+	free_ctrl = RXQ_CTRL_LAST(rxq);
+	if (rxq->devx_rq.rq != NULL)
 		priv->obj_ops.rxq_obj_release(rxq);
-		LIST_REMOVE(rxq_ctrl->obj, next);
-		mlx5_free(rxq_ctrl->obj);
-		rxq_ctrl->obj = NULL;
+	if (free_ctrl) {
+		if (rxq_ctrl->obj != NULL) {
+			LIST_REMOVE(rxq_ctrl->obj, next);
+			mlx5_free(rxq_ctrl->obj);
+			rxq_ctrl->obj = NULL;
+		}
+		rxq_ctrl->started = false;
 	}
 	if (rxq_ctrl->type == MLX5_RXQ_TYPE_STANDARD) {
-		rxq_free_elts(rxq_ctrl);
+		if (free_ctrl)
+			rxq_free_elts(rxq_ctrl);
 		dev->data->rx_queue_state[idx] = RTE_ETH_QUEUE_STATE_STOPPED;
 	}
 	if (!__atomic_load_n(&rxq->refcnt, __ATOMIC_RELAXED)) {
-		if (rxq_ctrl->type == MLX5_RXQ_TYPE_STANDARD) {
-			mlx5_mr_btree_free(&rxq_ctrl->rxq.mr_ctrl.cache_bh);
-			mlx5_mprq_free_mp(dev, rxq_ctrl);
-		}
 		LIST_REMOVE(rxq, owner_entry);
-		LIST_REMOVE(rxq_ctrl, next);
-		mlx5_free(rxq_ctrl);
+		if (free_ctrl) {
+			if (rxq_ctrl->type == MLX5_RXQ_TYPE_STANDARD) {
+				mlx5_mr_btree_free
+					(&rxq_ctrl->rxq.mr_ctrl.cache_bh);
+				mlx5_mprq_free_mp(dev, rxq_ctrl);
+			}
+			if (rxq_ctrl->rxq.shared)
+				LIST_REMOVE(rxq_ctrl, share_entry);
+			LIST_REMOVE(rxq_ctrl, next);
+			mlx5_free(rxq_ctrl);
+		}
 		dev->data->rx_queues[idx] = NULL;
 		mlx5_free(rxq);
 		(*priv->rxq_privs)[idx] = NULL;
diff --git a/drivers/net/mlx5/mlx5_trigger.c b/drivers/net/mlx5/mlx5_trigger.c
index 1e865e74e39..2fd8c70cce5 100644
--- a/drivers/net/mlx5/mlx5_trigger.c
+++ b/drivers/net/mlx5/mlx5_trigger.c
@@ -122,6 +122,46 @@ mlx5_rxq_stop(struct rte_eth_dev *dev)
 		mlx5_rxq_release(dev, i);
 }
 
+static int
+mlx5_rxq_ctrl_prepare(struct rte_eth_dev *dev, struct mlx5_rxq_ctrl *rxq_ctrl,
+		      unsigned int idx)
+{
+	int ret = 0;
+
+	if (rxq_ctrl->type == MLX5_RXQ_TYPE_STANDARD) {
+		if (mlx5_rxq_mprq_enabled(&rxq_ctrl->rxq)) {
+			/* Allocate/reuse/resize mempool for MPRQ. */
+			if (mlx5_mprq_alloc_mp(dev, rxq_ctrl) < 0)
+				return -rte_errno;
+
+			/* Pre-register Rx mempools. */
+			mlx5_mr_update_mp(dev, &rxq_ctrl->rxq.mr_ctrl,
+					  rxq_ctrl->rxq.mprq_mp);
+		} else {
+			uint32_t s;
+			for (s = 0; s < rxq_ctrl->rxq.rxseg_n; s++)
+				mlx5_mr_update_mp(dev, &rxq_ctrl->rxq.mr_ctrl,
+						  rxq_ctrl->rxq.rxseg[s].mp);
+		}
+		ret = rxq_alloc_elts(rxq_ctrl);
+		if (ret)
+			return ret;
+	}
+	MLX5_ASSERT(!rxq_ctrl->obj);
+	rxq_ctrl->obj = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO,
+				    sizeof(*rxq_ctrl->obj), 0,
+				    rxq_ctrl->socket);
+	if (!rxq_ctrl->obj) {
+		DRV_LOG(ERR, "Port %u Rx queue %u can't allocate resources.",
+			dev->data->port_id, idx);
+		rte_errno = ENOMEM;
+		return -rte_errno;
+	}
+	DRV_LOG(DEBUG, "Port %u rxq %u updated with %p.", dev->data->port_id,
+		idx, (void *)&rxq_ctrl->obj);
+	return 0;
+}
+
 /**
  * Start traffic on Rx queues.
  *
@@ -149,45 +189,17 @@ mlx5_rxq_start(struct rte_eth_dev *dev)
 		if (rxq == NULL)
 			continue;
 		rxq_ctrl = rxq->ctrl;
-		if (rxq_ctrl->type == MLX5_RXQ_TYPE_STANDARD) {
-			if (mlx5_rxq_mprq_enabled(&rxq_ctrl->rxq)) {
-				/* Allocate/reuse/resize mempool for MPRQ. */
-				if (mlx5_mprq_alloc_mp(dev, rxq_ctrl) < 0)
-					goto error;
-				/* Pre-register Rx mempools. */
-				mlx5_mr_update_mp(dev, &rxq_ctrl->rxq.mr_ctrl,
-						  rxq_ctrl->rxq.mprq_mp);
-			} else {
-				uint32_t s;
-
-				for (s = 0; s < rxq_ctrl->rxq.rxseg_n; s++)
-					mlx5_mr_update_mp
-						(dev, &rxq_ctrl->rxq.mr_ctrl,
-						rxq_ctrl->rxq.rxseg[s].mp);
-			}
-			ret = rxq_alloc_elts(rxq_ctrl);
-			if (ret)
+		if (!rxq_ctrl->started) {
+			if (mlx5_rxq_ctrl_prepare(dev, rxq_ctrl, i) < 0)
 				goto error;
-		}
-		MLX5_ASSERT(!rxq_ctrl->obj);
-		rxq_ctrl->obj = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO,
-					    sizeof(*rxq_ctrl->obj), 0,
-					    rxq_ctrl->socket);
-		if (!rxq_ctrl->obj) {
-			DRV_LOG(ERR,
-				"Port %u Rx queue %u can't allocate resources.",
-				dev->data->port_id, i);
-			rte_errno = ENOMEM;
-			goto error;
+			LIST_INSERT_HEAD(&priv->rxqsobj, rxq_ctrl->obj, next);
+			rxq_ctrl->started = true;
 		}
 		ret = priv->obj_ops.rxq_obj_new(rxq);
 		if (ret) {
 			mlx5_free(rxq_ctrl->obj);
 			goto error;
 		}
-		DRV_LOG(DEBUG, "Port %u rxq %u updated with %p.",
-			dev->data->port_id, i, (void *)&rxq_ctrl->obj);
-		LIST_INSERT_HEAD(&priv->rxqsobj, rxq_ctrl->obj, next);
 	}
 	return 0;
 error:
-- 
2.33.0


^ permalink raw reply	[flat|nested] 29+ messages in thread

* [dpdk-dev] [PATCH v2 00/13] net/mlx5: support shared Rx queue
  2021-09-26 11:18 [dpdk-dev] [PATCH 00/11] net/mlx5: support shared Rx queue Xueming Li
                   ` (10 preceding siblings ...)
  2021-09-26 11:19 ` [dpdk-dev] [PATCH 11/11] net/mlx5: support shared Rx queue Xueming Li
@ 2021-10-16  9:12 ` Xueming Li
  2021-10-16  9:12   ` [dpdk-dev] [PATCH v2 01/13] common/mlx5: support receive queue user index Xueming Li
                     ` (13 more replies)
  11 siblings, 14 replies; 29+ messages in thread
From: Xueming Li @ 2021-10-16  9:12 UTC (permalink / raw)
  To: dev; +Cc: xuemingl, Lior Margalit

Implementation of shared Rx queue.

Depends-on: series-19708 ("ethdev: introduce shared Rx queue")
Depends-on: series-19698 ("Flow entites behavior on port restart")

v1:
- initial version
v2:
- rebased on latest dependent series
- fully tested

Viacheslav Ovsiienko (1):
  net/mlx5: add shared Rx queue port datapath support

Xueming Li (12):
  common/mlx5: support receive queue user index
  common/mlx5: support receive memory pool
  net/mlx5: fix Rx queue memory allocation return value
  net/mlx5: clean Rx queue code
  net/mlx5: split multiple packet Rq memory pool
  net/mlx5: split Rx queue
  net/mlx5: move Rx queue reference count
  net/mlx5: move Rx queue hairpin info to private data
  net/mlx5: remove port info from shareable Rx queue
  net/mlx5: move Rx queue DevX resource
  net/mlx5: remove Rx queue data list from device
  net/mlx5: support shared Rx queue

 doc/guides/nics/features/mlx5.ini        |   1 +
 doc/guides/nics/mlx5.rst                 |   6 +
 drivers/common/mlx5/mlx5_common_devx.c   | 296 +++++++++--
 drivers/common/mlx5/mlx5_common_devx.h   |  19 +-
 drivers/common/mlx5/mlx5_devx_cmds.c     |  52 ++
 drivers/common/mlx5/mlx5_devx_cmds.h     |  16 +
 drivers/common/mlx5/mlx5_prm.h           |  93 +++-
 drivers/common/mlx5/version.map          |   1 +
 drivers/net/mlx5/linux/mlx5_os.c         |   2 +
 drivers/net/mlx5/linux/mlx5_verbs.c      | 173 ++++---
 drivers/net/mlx5/mlx5.c                  |  11 +-
 drivers/net/mlx5/mlx5.h                  |  17 +-
 drivers/net/mlx5/mlx5_devx.c             | 275 +++++-----
 drivers/net/mlx5/mlx5_ethdev.c           |  21 +-
 drivers/net/mlx5/mlx5_flow.c             |  45 +-
 drivers/net/mlx5/mlx5_mr.c               |   7 +-
 drivers/net/mlx5/mlx5_rss.c              |   6 +-
 drivers/net/mlx5/mlx5_rx.c               |  35 +-
 drivers/net/mlx5/mlx5_rx.h               |  46 +-
 drivers/net/mlx5/mlx5_rxq.c              | 633 +++++++++++++++--------
 drivers/net/mlx5/mlx5_rxtx.c             |   6 +-
 drivers/net/mlx5/mlx5_rxtx_vec.c         |   8 +-
 drivers/net/mlx5/mlx5_rxtx_vec_altivec.h |  14 +-
 drivers/net/mlx5/mlx5_rxtx_vec_neon.h    |  12 +-
 drivers/net/mlx5/mlx5_rxtx_vec_sse.h     |   8 +-
 drivers/net/mlx5/mlx5_stats.c            |   9 +-
 drivers/net/mlx5/mlx5_trigger.c          | 161 +++---
 drivers/net/mlx5/mlx5_vlan.c             |  16 +-
 drivers/regex/mlx5/mlx5_regex_fastpath.c |   2 +-
 29 files changed, 1350 insertions(+), 641 deletions(-)

-- 
2.33.0


^ permalink raw reply	[flat|nested] 29+ messages in thread

* [dpdk-dev] [PATCH v2 01/13] common/mlx5: support receive queue user index
  2021-10-16  9:12 ` [dpdk-dev] [PATCH v2 00/13] " Xueming Li
@ 2021-10-16  9:12   ` Xueming Li
  2021-10-16  9:12   ` [dpdk-dev] [PATCH v2 02/13] common/mlx5: support receive memory pool Xueming Li
                     ` (12 subsequent siblings)
  13 siblings, 0 replies; 29+ messages in thread
From: Xueming Li @ 2021-10-16  9:12 UTC (permalink / raw)
  To: dev
  Cc: xuemingl, Lior Margalit, Matan Azrad, Viacheslav Ovsiienko,
	David Christensen, Ori Kam

The RQ user index is saved in the CQE when a packet is received by the RQ.
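
With the new layout, the 24-bit user index can be read back from a CQE
along these lines. This is a sketch; treating user_index_low as a
big-endian field is an assumption based on how other 16-bit CQE fields
(e.g. wqe_counter) are converted:

#include <rte_byteorder.h>

/* Reassemble the 24-bit RQ user index from the CQE fields added below. */
static inline uint32_t
mlx5_cqe_user_index(volatile struct mlx5_cqe *cqe)
{
	return ((uint32_t)cqe->user_index_hi << 16) |
	       rte_be_to_cpu_16(cqe->user_index_low);
}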

Signed-off-by: Xueming Li <xuemingl@nvidia.com>
---
 drivers/common/mlx5/mlx5_prm.h           | 8 +++++++-
 drivers/net/mlx5/mlx5_rxtx_vec_altivec.h | 8 ++++----
 drivers/regex/mlx5/mlx5_regex_fastpath.c | 2 +-
 3 files changed, 12 insertions(+), 6 deletions(-)

diff --git a/drivers/common/mlx5/mlx5_prm.h b/drivers/common/mlx5/mlx5_prm.h
index 54e62aa1531..5fd93958ac3 100644
--- a/drivers/common/mlx5/mlx5_prm.h
+++ b/drivers/common/mlx5/mlx5_prm.h
@@ -393,7 +393,13 @@ struct mlx5_cqe {
 	uint16_t hdr_type_etc;
 	uint16_t vlan_info;
 	uint8_t lro_num_seg;
-	uint8_t rsvd3[3];
+	union {
+		uint8_t user_index_bytes[3];
+		struct {
+			uint8_t user_index_hi;
+			uint16_t user_index_low;
+		} __rte_packed;
+	};
 	uint32_t flow_table_metadata;
 	uint8_t rsvd4[4];
 	uint32_t byte_cnt;
diff --git a/drivers/net/mlx5/mlx5_rxtx_vec_altivec.h b/drivers/net/mlx5/mlx5_rxtx_vec_altivec.h
index 68cef1a83ed..82586f012cb 100644
--- a/drivers/net/mlx5/mlx5_rxtx_vec_altivec.h
+++ b/drivers/net/mlx5/mlx5_rxtx_vec_altivec.h
@@ -974,10 +974,10 @@ rxq_cq_process_v(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cq,
 			(vector unsigned short)cqe_tmp1, cqe_sel_mask1);
 		cqe_tmp2 = (vector unsigned char)(vector unsigned long){
 			*(__rte_aligned(8) unsigned long *)
-			&cq[pos + p3].rsvd3[9], 0LL};
+			&cq[pos + p3].user_index_bytes[9], 0LL};
 		cqe_tmp1 = (vector unsigned char)(vector unsigned long){
 			*(__rte_aligned(8) unsigned long *)
-			&cq[pos + p2].rsvd3[9], 0LL};
+			&cq[pos + p2].user_index_bytes[9], 0LL};
 		cqes[3] = (vector unsigned char)
 			vec_sel((vector unsigned short)cqes[3],
 			(vector unsigned short)cqe_tmp2,
@@ -1037,10 +1037,10 @@ rxq_cq_process_v(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cq,
 			(vector unsigned short)cqe_tmp1, cqe_sel_mask1);
 		cqe_tmp2 = (vector unsigned char)(vector unsigned long){
 			*(__rte_aligned(8) unsigned long *)
-			&cq[pos + p1].rsvd3[9], 0LL};
+			&cq[pos + p1].user_index_bytes[9], 0LL};
 		cqe_tmp1 = (vector unsigned char)(vector unsigned long){
 			*(__rte_aligned(8) unsigned long *)
-			&cq[pos].rsvd3[9], 0LL};
+			&cq[pos].user_index_bytes[9], 0LL};
 		cqes[1] = (vector unsigned char)
 			vec_sel((vector unsigned short)cqes[1],
 			(vector unsigned short)cqe_tmp2, cqe_sel_mask2);
diff --git a/drivers/regex/mlx5/mlx5_regex_fastpath.c b/drivers/regex/mlx5/mlx5_regex_fastpath.c
index 0833b2817e2..e51e632c1f8 100644
--- a/drivers/regex/mlx5/mlx5_regex_fastpath.c
+++ b/drivers/regex/mlx5/mlx5_regex_fastpath.c
@@ -571,7 +571,7 @@ mlx5_regexdev_dequeue(struct rte_regexdev *dev, uint16_t qp_id,
 		uint16_t wq_counter
 			= (rte_be_to_cpu_16(cqe->wqe_counter) + 1) &
 			  MLX5_REGEX_MAX_WQE_INDEX;
-		size_t hw_qpid = cqe->rsvd3[2];
+		size_t hw_qpid = cqe->user_index_bytes[2];
 		struct mlx5_regex_hw_qp *qp_obj = &queue->qps[hw_qpid];
 
 		/* UMR mode WQE counter move as WQE set(4 WQEBBS).*/
-- 
2.33.0


^ permalink raw reply	[flat|nested] 29+ messages in thread

* [dpdk-dev] [PATCH v2 02/13] common/mlx5: support receive memory pool
  2021-10-16  9:12 ` [dpdk-dev] [PATCH v2 00/13] " Xueming Li
  2021-10-16  9:12   ` [dpdk-dev] [PATCH v2 01/13] common/mlx5: support receive queue user index Xueming Li
@ 2021-10-16  9:12   ` Xueming Li
  2021-10-20  9:32     ` Kinsella, Ray
  2021-10-16  9:12   ` [dpdk-dev] [PATCH v2 03/13] net/mlx5: fix Rx queue memory allocation return value Xueming Li
                     ` (11 subsequent siblings)
  13 siblings, 1 reply; 29+ messages in thread
From: Xueming Li @ 2021-10-16  9:12 UTC (permalink / raw)
  To: dev
  Cc: xuemingl, Lior Margalit, Matan Azrad, Viacheslav Ovsiienko, Ray Kinsella

Add DevX support for the PRM shared receive memory pool (RMP) object.
The RMP is used to support shared Rx queues: multiple RQs can share
the same RMP, and memory buffers are supplied to the RMP instead of
to the individual RQs.

This patch makes the RMP-based RQ optional; it is created only when
mlx5_devx_rq.rmp is set.
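
The opt-in contract can be summarized with the following sketch; ctx,
wqe_size, log_desc_n, rq_attr, socket and shared_rmp stand for
caller-provided values and are placeholders here:

/* Standalone RQ: leave rq->rmp NULL, a private WQ is allocated. */
struct mlx5_devx_rq rq = { .rmp = NULL, };
int ret = mlx5_devx_rq_create(ctx, &rq, wqe_size, log_desc_n,
			      &rq_attr, socket);

/* Shared RQ: point rq->rmp at a common mlx5_devx_rmp before the call.
 * The RMP and its WQ buffer are created once and reference counted;
 * every RQ created on top of it attaches to the same memory pool.
 */
rq.rmp = &shared_rmp; /* e.g. &rxq_ctrl->obj->devx_rmp */
ret = mlx5_devx_rq_create(ctx, &rq, wqe_size, log_desc_n,
			  &rq_attr, socket);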

Signed-off-by: Xueming Li <xuemingl@nvidia.com>
---
 drivers/common/mlx5/mlx5_common_devx.c | 296 +++++++++++++++++++++----
 drivers/common/mlx5/mlx5_common_devx.h |  19 +-
 drivers/common/mlx5/mlx5_devx_cmds.c   |  52 +++++
 drivers/common/mlx5/mlx5_devx_cmds.h   |  16 ++
 drivers/common/mlx5/mlx5_prm.h         |  85 ++++++-
 drivers/common/mlx5/version.map        |   1 +
 drivers/net/mlx5/mlx5_devx.c           |   4 +-
 7 files changed, 425 insertions(+), 48 deletions(-)

diff --git a/drivers/common/mlx5/mlx5_common_devx.c b/drivers/common/mlx5/mlx5_common_devx.c
index 825f84b1833..db019418b39 100644
--- a/drivers/common/mlx5/mlx5_common_devx.c
+++ b/drivers/common/mlx5/mlx5_common_devx.c
@@ -271,6 +271,39 @@ mlx5_devx_sq_create(void *ctx, struct mlx5_devx_sq *sq_obj, uint16_t log_wqbb_n,
 	return -rte_errno;
 }
 
+/**
+ * Destroy DevX Receive Queue resources.
+ *
+ * @param[in] rq_res
+ *   DevX RQ resource to destroy.
+ */
+static void
+mlx5_devx_wq_res_destroy(struct mlx5_devx_wq_res *rq_res)
+{
+	if (rq_res->umem_obj)
+		claim_zero(mlx5_os_umem_dereg(rq_res->umem_obj));
+	if (rq_res->umem_buf)
+		mlx5_free((void *)(uintptr_t)rq_res->umem_buf);
+	memset(rq_res, 0, sizeof(*rq_res));
+}
+
+/**
+ * Destroy DevX Receive Memory Pool.
+ *
+ * @param[in] rmp
+ *   DevX RMP to destroy.
+ */
+static void
+mlx5_devx_rmp_destroy(struct mlx5_devx_rmp *rmp)
+{
+	MLX5_ASSERT(rmp->ref_cnt == 0);
+	if (rmp->rmp) {
+		claim_zero(mlx5_devx_cmd_destroy(rmp->rmp));
+		rmp->rmp = NULL;
+	}
+	mlx5_devx_wq_res_destroy(&rmp->wq);
+}
+
 /**
  * Destroy DevX Queue Pair.
  *
@@ -389,55 +422,48 @@ mlx5_devx_qp_create(void *ctx, struct mlx5_devx_qp *qp_obj, uint16_t log_wqbb_n,
 void
 mlx5_devx_rq_destroy(struct mlx5_devx_rq *rq)
 {
-	if (rq->rq)
+	if (rq->rq) {
 		claim_zero(mlx5_devx_cmd_destroy(rq->rq));
-	if (rq->umem_obj)
-		claim_zero(mlx5_os_umem_dereg(rq->umem_obj));
-	if (rq->umem_buf)
-		mlx5_free((void *)(uintptr_t)rq->umem_buf);
+		rq->rq = NULL;
+		if (rq->rmp)
+			rq->rmp->ref_cnt--;
+	}
+	if (rq->rmp == NULL) {
+		mlx5_devx_wq_res_destroy(&rq->wq);
+	} else {
+		if (rq->rmp->ref_cnt == 0)
+			mlx5_devx_rmp_destroy(rq->rmp);
+	}
 }
 
 /**
- * Create Receive Queue using DevX API.
- *
- * Get a pointer to partially initialized attributes structure, and updates the
- * following fields:
- *   wq_umem_valid
- *   wq_umem_id
- *   wq_umem_offset
- *   dbr_umem_valid
- *   dbr_umem_id
- *   dbr_addr
- *   log_wq_pg_sz
- * All other fields are updated by caller.
+ * Create WQ resources using DevX API.
  *
  * @param[in] ctx
  *   Context returned from mlx5 open_device() glue function.
- * @param[in/out] rq_obj
- *   Pointer to RQ to create.
  * @param[in] wqe_size
  *   Size of WQE structure.
  * @param[in] log_wqbb_n
  *   Log of number of WQBBs in queue.
- * @param[in] attr
- *   Pointer to RQ attributes structure.
  * @param[in] socket
  *   Socket to use for allocation.
+ * @param[out] wq_attr
+ *   Pointer to WQ attributes structure.
+ * @param[out] wq_res
+ *   Pointer to WQ resource to create.
  *
  * @return
  *   0 on success, a negative errno value otherwise and rte_errno is set.
  */
-int
-mlx5_devx_rq_create(void *ctx, struct mlx5_devx_rq *rq_obj, uint32_t wqe_size,
-		    uint16_t log_wqbb_n,
-		    struct mlx5_devx_create_rq_attr *attr, int socket)
+static int
+mlx5_devx_wq_init(void *ctx, uint32_t wqe_size, uint16_t log_wqbb_n, int socket,
+		  struct mlx5_devx_wq_attr *wq_attr,
+		  struct mlx5_devx_wq_res *wq_res)
 {
-	struct mlx5_devx_obj *rq = NULL;
 	struct mlx5dv_devx_umem *umem_obj = NULL;
 	void *umem_buf = NULL;
 	size_t alignment = MLX5_WQE_BUF_ALIGNMENT;
 	uint32_t umem_size, umem_dbrec;
-	uint16_t rq_size = 1 << log_wqbb_n;
 	int ret;
 
 	if (alignment == (size_t)-1) {
@@ -446,7 +472,7 @@ mlx5_devx_rq_create(void *ctx, struct mlx5_devx_rq *rq_obj, uint32_t wqe_size,
 		return -rte_errno;
 	}
 	/* Allocate memory buffer for WQEs and doorbell record. */
-	umem_size = wqe_size * rq_size;
+	umem_size = wqe_size * (1 << log_wqbb_n);
 	umem_dbrec = RTE_ALIGN(umem_size, MLX5_DBR_SIZE);
 	umem_size += MLX5_DBR_SIZE;
 	umem_buf = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO, umem_size,
@@ -464,14 +490,60 @@ mlx5_devx_rq_create(void *ctx, struct mlx5_devx_rq *rq_obj, uint32_t wqe_size,
 		rte_errno = errno;
 		goto error;
 	}
+	/* Fill WQ attributes for RQ/RMP object creation. */
+	wq_attr->wq_umem_valid = 1;
+	wq_attr->wq_umem_id = mlx5_os_get_umem_id(umem_obj);
+	wq_attr->wq_umem_offset = 0;
+	wq_attr->dbr_umem_valid = 1;
+	wq_attr->dbr_umem_id = wq_attr->wq_umem_id;
+	wq_attr->dbr_addr = umem_dbrec;
+	wq_attr->log_wq_pg_sz = MLX5_LOG_PAGE_SIZE;
 	/* Fill attributes for RQ object creation. */
-	attr->wq_attr.wq_umem_valid = 1;
-	attr->wq_attr.wq_umem_id = mlx5_os_get_umem_id(umem_obj);
-	attr->wq_attr.wq_umem_offset = 0;
-	attr->wq_attr.dbr_umem_valid = 1;
-	attr->wq_attr.dbr_umem_id = attr->wq_attr.wq_umem_id;
-	attr->wq_attr.dbr_addr = umem_dbrec;
-	attr->wq_attr.log_wq_pg_sz = MLX5_LOG_PAGE_SIZE;
+	wq_res->umem_buf = umem_buf;
+	wq_res->umem_obj = umem_obj;
+	wq_res->db_rec = RTE_PTR_ADD(umem_buf, umem_dbrec);
+	return 0;
+error:
+	ret = rte_errno;
+	if (umem_obj)
+		claim_zero(mlx5_os_umem_dereg(umem_obj));
+	if (umem_buf)
+		mlx5_free((void *)(uintptr_t)umem_buf);
+	rte_errno = ret;
+	return -rte_errno;
+}
+
+/**
+ * Create standalone Receive Queue using DevX API.
+ *
+ * @param[in] ctx
+ *   Context returned from mlx5 open_device() glue function.
+ * @param[in/out] rq_obj
+ *   Pointer to RQ to create.
+ * @param[in] wqe_size
+ *   Size of WQE structure.
+ * @param[in] log_wqbb_n
+ *   Log of number of WQBBs in queue.
+ * @param[in] attr
+ *   Pointer to RQ attributes structure.
+ * @param[in] socket
+ *   Socket to use for allocation.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+static int
+mlx5_devx_rq_std_create(void *ctx, struct mlx5_devx_rq *rq_obj,
+			uint32_t wqe_size, uint16_t log_wqbb_n,
+			struct mlx5_devx_create_rq_attr *attr, int socket)
+{
+	struct mlx5_devx_obj *rq;
+	int ret;
+
+	ret = mlx5_devx_wq_init(ctx, wqe_size, log_wqbb_n, socket,
+				&attr->wq_attr, &rq_obj->wq);
+	if (ret != 0)
+		return ret;
 	/* Create receive queue object with DevX. */
 	rq = mlx5_devx_cmd_create_rq(ctx, attr, socket);
 	if (!rq) {
@@ -479,21 +551,161 @@ mlx5_devx_rq_create(void *ctx, struct mlx5_devx_rq *rq_obj, uint32_t wqe_size,
 		rte_errno = ENOMEM;
 		goto error;
 	}
-	rq_obj->umem_buf = umem_buf;
-	rq_obj->umem_obj = umem_obj;
 	rq_obj->rq = rq;
-	rq_obj->db_rec = RTE_PTR_ADD(rq_obj->umem_buf, umem_dbrec);
 	return 0;
 error:
 	ret = rte_errno;
-	if (umem_obj)
-		claim_zero(mlx5_os_umem_dereg(umem_obj));
-	if (umem_buf)
-		mlx5_free((void *)(uintptr_t)umem_buf);
+	mlx5_devx_wq_res_destroy(&rq_obj->wq);
 	rte_errno = ret;
 	return -rte_errno;
 }
 
+/**
+ * Create Receive Memory Pool using DevX API.
+ *
+ * @param[in] ctx
+ *   Context returned from mlx5 open_device() glue function.
+ * @param[in/out] rq_obj
+ *   Pointer to RQ to create.
+ * @param[in] wqe_size
+ *   Size of WQE structure.
+ * @param[in] log_wqbb_n
+ *   Log of number of WQBBs in queue.
+ * @param[in] attr
+ *   Pointer to RQ attributes structure.
+ * @param[in] socket
+ *   Socket to use for allocation.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+static int
+mlx5_devx_rmp_create(void *ctx, struct mlx5_devx_rmp *rmp_obj,
+		     uint32_t wqe_size, uint16_t log_wqbb_n,
+		     struct mlx5_devx_wq_attr *wq_attr, int socket)
+{
+	struct mlx5_devx_create_rmp_attr rmp_attr = { 0 };
+	int ret;
+
+	if (rmp_obj->rmp != NULL)
+		return 0;
+	rmp_attr.wq_attr = *wq_attr;
+	ret = mlx5_devx_wq_init(ctx, wqe_size, log_wqbb_n, socket,
+				&rmp_attr.wq_attr, &rmp_obj->wq);
+	if (ret != 0)
+		return ret;
+	rmp_attr.state = MLX5_RMPC_STATE_RDY;
+	rmp_attr.basic_cyclic_rcv_wqe =
+		wq_attr->wq_type == MLX5_WQ_TYPE_CYCLIC_STRIDING_RQ ?
+		0 : 1;
+	/* Create receive queue object with DevX. */
+	rmp_obj->rmp = mlx5_devx_cmd_create_rmp(ctx, &rmp_attr, socket);
+	if (rmp_obj->rmp == NULL) {
+		DRV_LOG(ERR, "Can't create DevX RMP object.");
+		rte_errno = ENOMEM;
+		goto error;
+	}
+	return 0;
+error:
+	ret = rte_errno;
+	mlx5_devx_wq_res_destroy(&rmp_obj->wq);
+	rte_errno = ret;
+	return -rte_errno;
+}
+
+/**
+ * Create Shared Receive Queue based on RMP using DevX API.
+ *
+ * @param[in] ctx
+ *   Context returned from mlx5 open_device() glue function.
+ * @param[in/out] rq_obj
+ *   Pointer to RQ to create.
+ * @param[in] wqe_size
+ *   Size of WQE structure.
+ * @param[in] log_wqbb_n
+ *   Log of number of WQBBs in queue.
+ * @param[in] attr
+ *   Pointer to RQ attributes structure.
+ * @param[in] socket
+ *   Socket to use for allocation.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+static int
+mlx5_devx_rq_shared_create(void *ctx, struct mlx5_devx_rq *rq_obj,
+			   uint32_t wqe_size, uint16_t log_wqbb_n,
+			   struct mlx5_devx_create_rq_attr *attr, int socket)
+{
+	struct mlx5_devx_obj *rq;
+	int ret;
+
+	ret = mlx5_devx_rmp_create(ctx, rq_obj->rmp, wqe_size, log_wqbb_n,
+				   &attr->wq_attr, socket);
+	if (ret != 0)
+		return ret;
+	attr->mem_rq_type = MLX5_RQC_MEM_RQ_TYPE_MEMORY_RQ_RMP;
+	attr->rmpn = rq_obj->rmp->rmp->id;
+	attr->flush_in_error_en = 0;
+	memset(&attr->wq_attr, 0, sizeof(attr->wq_attr));
+	/* Create receive queue object with DevX. */
+	rq = mlx5_devx_cmd_create_rq(ctx, attr, socket);
+	if (!rq) {
+		DRV_LOG(ERR, "Can't create DevX RMP RQ object.");
+		rte_errno = ENOMEM;
+		goto error;
+	}
+	rq_obj->rq = rq;
+	rq_obj->rmp->ref_cnt++;
+	return 0;
+error:
+	ret = rte_errno;
+	mlx5_devx_rq_destroy(rq_obj);
+	rte_errno = ret;
+	return -rte_errno;
+}
+
+/**
+ * Create Receive Queue using DevX API. Shared RQ is created if rmp is set.
+ *
+ * Get a pointer to partially initialized attributes structure, and updates the
+ * following fields:
+ *   wq_umem_valid
+ *   wq_umem_id
+ *   wq_umem_offset
+ *   dbr_umem_valid
+ *   dbr_umem_id
+ *   dbr_addr
+ *   log_wq_pg_sz
+ * All other fields are updated by caller.
+ *
+ * @param[in] ctx
+ *   Context returned from mlx5 open_device() glue function.
+ * @param[in/out] rq_obj
+ *   Pointer to RQ to create.
+ * @param[in] wqe_size
+ *   Size of WQE structure.
+ * @param[in] log_wqbb_n
+ *   Log of number of WQBBs in queue.
+ * @param[in] attr
+ *   Pointer to RQ attributes structure.
+ * @param[in] socket
+ *   Socket to use for allocation.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+int
+mlx5_devx_rq_create(void *ctx, struct mlx5_devx_rq *rq_obj,
+		    uint32_t wqe_size, uint16_t log_wqbb_n,
+		    struct mlx5_devx_create_rq_attr *attr, int socket)
+{
+	if (rq_obj->rmp == NULL)
+		return mlx5_devx_rq_std_create(ctx, rq_obj, wqe_size,
+					       log_wqbb_n, attr, socket);
+	return mlx5_devx_rq_shared_create(ctx, rq_obj, wqe_size,
+					  log_wqbb_n, attr, socket);
+}
 
 /**
  * Change QP state to RTS.
diff --git a/drivers/common/mlx5/mlx5_common_devx.h b/drivers/common/mlx5/mlx5_common_devx.h
index f699405f69b..b0d6e4a39e8 100644
--- a/drivers/common/mlx5/mlx5_common_devx.h
+++ b/drivers/common/mlx5/mlx5_common_devx.h
@@ -45,14 +45,27 @@ struct mlx5_devx_qp {
 	volatile uint32_t *db_rec; /* The QP doorbell record. */
 };
 
-/* DevX Receive Queue structure. */
-struct mlx5_devx_rq {
-	struct mlx5_devx_obj *rq; /* The RQ DevX object. */
+/* DevX Receive Queue resource structure. */
+struct mlx5_devx_wq_res {
 	void *umem_obj; /* The RQ umem object. */
 	volatile void *umem_buf;
 	volatile uint32_t *db_rec; /* The RQ doorbell record. */
 };
 
+/* DevX Receive Queue structure. */
+struct mlx5_devx_rmp {
+	struct mlx5_devx_obj *rmp; /* The RMP DevX object. */
+	uint32_t ref_cnt; /* Reference count. */
+	struct mlx5_devx_wq_res wq;
+};
+
+/* DevX Receive Queue structure. */
+struct mlx5_devx_rq {
+	struct mlx5_devx_obj *rq; /* The RQ DevX object. */
+	struct mlx5_devx_rmp *rmp; /* Shared RQ RMP object. */
+	struct mlx5_devx_wq_res wq; /* WQ resource of standalone RQ. */
+};
+
 /* mlx5_common_devx.c */
 
 __rte_internal
diff --git a/drivers/common/mlx5/mlx5_devx_cmds.c b/drivers/common/mlx5/mlx5_devx_cmds.c
index eefb869b7d8..8f6dae5c7f8 100644
--- a/drivers/common/mlx5/mlx5_devx_cmds.c
+++ b/drivers/common/mlx5/mlx5_devx_cmds.c
@@ -766,6 +766,8 @@ mlx5_devx_cmd_query_hca_attr(void *ctx,
 			MLX5_GET(cmd_hca_cap, hcattr, flow_counter_bulk_alloc);
 	attr->flow_counters_dump = MLX5_GET(cmd_hca_cap, hcattr,
 					    flow_counters_dump);
+	attr->log_max_rmp = MLX5_GET(cmd_hca_cap, hcattr, log_max_rmp);
+	attr->mem_rq_rmp = MLX5_GET(cmd_hca_cap, hcattr, mem_rq_rmp);
 	attr->log_max_rqt_size = MLX5_GET(cmd_hca_cap, hcattr,
 					  log_max_rqt_size);
 	attr->eswitch_manager = MLX5_GET(cmd_hca_cap, hcattr, eswitch_manager);
@@ -1259,6 +1261,56 @@ mlx5_devx_cmd_modify_rq(struct mlx5_devx_obj *rq,
 }
 
 /**
+ * Create RMP using DevX API.
+ *
+ * @param[in] ctx
+ *   Context returned from mlx5 open_device() glue function.
+ * @param [in] rmp_attr
+ *   Pointer to create RMP attributes structure.
+ * @param [in] socket
+ *   CPU socket ID for allocations.
+ *
+ * @return
+ *   The DevX object created, NULL otherwise and rte_errno is set.
+ */
+struct mlx5_devx_obj *
+mlx5_devx_cmd_create_rmp(void *ctx,
+			 struct mlx5_devx_create_rmp_attr *rmp_attr,
+			 int socket)
+{
+	uint32_t in[MLX5_ST_SZ_DW(create_rmp_in)] = {0};
+	uint32_t out[MLX5_ST_SZ_DW(create_rmp_out)] = {0};
+	void *rmp_ctx, *wq_ctx;
+	struct mlx5_devx_wq_attr *wq_attr;
+	struct mlx5_devx_obj *rmp = NULL;
+
+	rmp = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*rmp), 0, socket);
+	if (!rmp) {
+		DRV_LOG(ERR, "Failed to allocate RMP data");
+		rte_errno = ENOMEM;
+		return NULL;
+	}
+	MLX5_SET(create_rmp_in, in, opcode, MLX5_CMD_OP_CREATE_RMP);
+	rmp_ctx = MLX5_ADDR_OF(create_rmp_in, in, ctx);
+	MLX5_SET(rmpc, rmp_ctx, state, rmp_attr->state);
+	MLX5_SET(rmpc, rmp_ctx, basic_cyclic_rcv_wqe,
+		 rmp_attr->basic_cyclic_rcv_wqe);
+	wq_ctx = MLX5_ADDR_OF(rmpc, rmp_ctx, wq);
+	wq_attr = &rmp_attr->wq_attr;
+	devx_cmd_fill_wq_data(wq_ctx, wq_attr);
+	rmp->obj = mlx5_glue->devx_obj_create(ctx, in, sizeof(in), out,
+					      sizeof(out));
+	if (!rmp->obj) {
+		DRV_LOG(ERR, "Failed to create RMP using DevX");
+		rte_errno = errno;
+		mlx5_free(rmp);
+		return NULL;
+	}
+	rmp->id = MLX5_GET(create_rmp_out, out, rmpn);
+	return rmp;
+}
+
+/**
  * Create TIR using DevX API.
  *
  * @param[in] ctx
diff --git a/drivers/common/mlx5/mlx5_devx_cmds.h b/drivers/common/mlx5/mlx5_devx_cmds.h
index e149f8b4f57..5b2d4d4d8a4 100644
--- a/drivers/common/mlx5/mlx5_devx_cmds.h
+++ b/drivers/common/mlx5/mlx5_devx_cmds.h
@@ -101,6 +101,8 @@ struct mlx5_hca_flow_attr {
 struct mlx5_hca_attr {
 	uint32_t eswitch_manager:1;
 	uint32_t flow_counters_dump:1;
+	uint32_t mem_rq_rmp:1;
+	uint32_t log_max_rmp:5;
 	uint32_t log_max_rqt_size:5;
 	uint32_t parse_graph_flex_node:1;
 	uint8_t flow_counter_bulk_alloc_bitmap;
@@ -250,6 +252,17 @@ struct mlx5_devx_modify_rq_attr {
 	uint32_t lwm:16; /* Contained WQ lwm. */
 };
 
+/* Create RMP attributes structure, used by create RMP operation. */
+struct mlx5_devx_create_rmp_attr {
+	uint32_t rsvd0:8;
+	uint32_t state:4;
+	uint32_t rsvd1:20;
+	uint32_t basic_cyclic_rcv_wqe:1;
+	uint32_t rsvd4:31;
+	uint32_t rsvd8[10];
+	struct mlx5_devx_wq_attr wq_attr;
+};
+
 struct mlx5_rx_hash_field_select {
 	uint32_t l3_prot_type:1;
 	uint32_t l4_prot_type:1;
@@ -527,6 +540,9 @@ __rte_internal
 int mlx5_devx_cmd_modify_rq(struct mlx5_devx_obj *rq,
 			    struct mlx5_devx_modify_rq_attr *rq_attr);
 __rte_internal
+struct mlx5_devx_obj *mlx5_devx_cmd_create_rmp(void *ctx,
+			struct mlx5_devx_create_rmp_attr *rq_attr, int socket);
+__rte_internal
 struct mlx5_devx_obj *mlx5_devx_cmd_create_tir(void *ctx,
 					   struct mlx5_devx_tir_attr *tir_attr);
 __rte_internal
diff --git a/drivers/common/mlx5/mlx5_prm.h b/drivers/common/mlx5/mlx5_prm.h
index 5fd93958ac3..a267da42ba6 100644
--- a/drivers/common/mlx5/mlx5_prm.h
+++ b/drivers/common/mlx5/mlx5_prm.h
@@ -1061,6 +1061,10 @@ enum {
 	MLX5_CMD_OP_CREATE_RQ = 0x908,
 	MLX5_CMD_OP_MODIFY_RQ = 0x909,
 	MLX5_CMD_OP_QUERY_RQ = 0x90b,
+	MLX5_CMD_OP_CREATE_RMP = 0x90c,
+	MLX5_CMD_OP_MODIFY_RMP = 0x90d,
+	MLX5_CMD_OP_DESTROY_RMP = 0x90e,
+	MLX5_CMD_OP_QUERY_RMP = 0x90f,
 	MLX5_CMD_OP_CREATE_TIS = 0x912,
 	MLX5_CMD_OP_QUERY_TIS = 0x915,
 	MLX5_CMD_OP_CREATE_RQT = 0x916,
@@ -1559,7 +1563,8 @@ struct mlx5_ifc_cmd_hca_cap_bits {
 	u8 reserved_at_378[0x3];
 	u8 log_max_tis[0x5];
 	u8 basic_cyclic_rcv_wqe[0x1];
-	u8 reserved_at_381[0x2];
+	u8 reserved_at_381[0x1];
+	u8 mem_rq_rmp[0x1];
 	u8 log_max_rmp[0x5];
 	u8 reserved_at_388[0x3];
 	u8 log_max_rqt[0x5];
@@ -2166,6 +2171,84 @@ struct mlx5_ifc_query_rq_in_bits {
 	u8 reserved_at_60[0x20];
 };
 
+enum {
+	MLX5_RMPC_STATE_RDY = 0x1,
+	MLX5_RMPC_STATE_ERR = 0x3,
+};
+
+struct mlx5_ifc_rmpc_bits {
+	u8 reserved_at_0[0x8];
+	u8 state[0x4];
+	u8 reserved_at_c[0x14];
+	u8 basic_cyclic_rcv_wqe[0x1];
+	u8 reserved_at_21[0x1f];
+	u8 reserved_at_40[0x140];
+	struct mlx5_ifc_wq_bits wq;
+};
+
+struct mlx5_ifc_query_rmp_out_bits {
+	u8 status[0x8];
+	u8 reserved_at_8[0x18];
+	u8 syndrome[0x20];
+	u8 reserved_at_40[0xc0];
+	struct mlx5_ifc_rmpc_bits rmp_context;
+};
+
+struct mlx5_ifc_query_rmp_in_bits {
+	u8 opcode[0x10];
+	u8 reserved_at_10[0x10];
+	u8 reserved_at_20[0x10];
+	u8 op_mod[0x10];
+	u8 reserved_at_40[0x8];
+	u8 rmpn[0x18];
+	u8 reserved_at_60[0x20];
+};
+
+struct mlx5_ifc_modify_rmp_out_bits {
+	u8 status[0x8];
+	u8 reserved_at_8[0x18];
+	u8 syndrome[0x20];
+	u8 reserved_at_40[0x40];
+};
+
+struct mlx5_ifc_rmp_bitmask_bits {
+	u8 reserved_at_0[0x20];
+	u8 reserved_at_20[0x1f];
+	u8 lwm[0x1];
+};
+
+struct mlx5_ifc_modify_rmp_in_bits {
+	u8 opcode[0x10];
+	u8 uid[0x10];
+	u8 reserved_at_20[0x10];
+	u8 op_mod[0x10];
+	u8 rmp_state[0x4];
+	u8 reserved_at_44[0x4];
+	u8 rmpn[0x18];
+	u8 reserved_at_60[0x20];
+	struct mlx5_ifc_rmp_bitmask_bits bitmask;
+	u8 reserved_at_c0[0x40];
+	struct mlx5_ifc_rmpc_bits ctx;
+};
+
+struct mlx5_ifc_create_rmp_out_bits {
+	u8 status[0x8];
+	u8 reserved_at_8[0x18];
+	u8 syndrome[0x20];
+	u8 reserved_at_40[0x8];
+	u8 rmpn[0x18];
+	u8 reserved_at_60[0x20];
+};
+
+struct mlx5_ifc_create_rmp_in_bits {
+	u8 opcode[0x10];
+	u8 uid[0x10];
+	u8 reserved_at_20[0x10];
+	u8 op_mod[0x10];
+	u8 reserved_at_40[0xc0];
+	struct mlx5_ifc_rmpc_bits ctx;
+};
+
 struct mlx5_ifc_create_tis_out_bits {
 	u8 status[0x8];
 	u8 reserved_at_8[0x18];
diff --git a/drivers/common/mlx5/version.map b/drivers/common/mlx5/version.map
index d3c5040aac8..eee05917aab 100644
--- a/drivers/common/mlx5/version.map
+++ b/drivers/common/mlx5/version.map
@@ -31,6 +31,7 @@ INTERNAL {
 	mlx5_devx_cmd_create_geneve_tlv_option;
 	mlx5_devx_cmd_create_import_kek_obj;
 	mlx5_devx_cmd_create_qp;
+	mlx5_devx_cmd_create_rmp;
 	mlx5_devx_cmd_create_rq;
 	mlx5_devx_cmd_create_rqt;
 	mlx5_devx_cmd_create_sq;
diff --git a/drivers/net/mlx5/mlx5_devx.c b/drivers/net/mlx5/mlx5_devx.c
index 447d6bafb93..c65a6e5d4e7 100644
--- a/drivers/net/mlx5/mlx5_devx.c
+++ b/drivers/net/mlx5/mlx5_devx.c
@@ -514,8 +514,8 @@ mlx5_rxq_devx_obj_new(struct rte_eth_dev *dev, uint16_t idx)
 	ret = mlx5_devx_modify_rq(tmpl, MLX5_RXQ_MOD_RST2RDY);
 	if (ret)
 		goto error;
-	rxq_data->wqes = (void *)(uintptr_t)tmpl->rq_obj.umem_buf;
-	rxq_data->rq_db = (uint32_t *)(uintptr_t)tmpl->rq_obj.db_rec;
+	rxq_data->wqes = (void *)(uintptr_t)tmpl->rq_obj.wq.umem_buf;
+	rxq_data->rq_db = (uint32_t *)(uintptr_t)tmpl->rq_obj.wq.db_rec;
 	rxq_data->cq_arm_sn = 0;
 	rxq_data->cq_ci = 0;
 	mlx5_rxq_initialize(rxq_data);
-- 
2.33.0


^ permalink raw reply	[flat|nested] 29+ messages in thread

* [dpdk-dev] [PATCH v2 03/13] net/mlx5: fix Rx queue memory allocation return value
  2021-10-16  9:12 ` [dpdk-dev] [PATCH v2 00/13] " Xueming Li
  2021-10-16  9:12   ` [dpdk-dev] [PATCH v2 01/13] common/mlx5: support receive queue user index Xueming Li
  2021-10-16  9:12   ` [dpdk-dev] [PATCH v2 02/13] common/mlx5: support receive memory pool Xueming Li
@ 2021-10-16  9:12   ` Xueming Li
  2021-10-16  9:12   ` [dpdk-dev] [PATCH v2 04/13] net/mlx5: clean Rx queue code Xueming Li
                     ` (10 subsequent siblings)
  13 siblings, 0 replies; 29+ messages in thread
From: Xueming Li @ 2021-10-16  9:12 UTC (permalink / raw)
  To: dev; +Cc: xuemingl, Lior Margalit, akozyrev, Matan Azrad, Viacheslav Ovsiienko

If an error happened during Rx queue mbuf allocation, a boolean value
was returned, while the function description states that the return
value should be an error number.

This patch returns a negative errno value on failure instead.
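
For illustration only, the old expression collapsed any failure into
the boolean value 1:

	int ret = rxq_alloc_elts_mprq(rxq_ctrl);	/* say -ENOMEM */

	return (ret || rxq_alloc_elts_sprq(rxq_ctrl));	/* yields 1 */

After the fix the negative errno value (-ENOMEM in this sketch) is
propagated to the caller unchanged.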

Fixes: 0f20acbf5eda ("net/mlx5: implement vectorized MPRQ burst")
Cc: akozyrev@nvidia.com

Signed-off-by: Xueming Li <xuemingl@nvidia.com>
---
 drivers/net/mlx5/mlx5_rxq.c | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index fd2b5779fff..4036bcbe544 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -128,7 +128,7 @@ rxq_alloc_elts_mprq(struct mlx5_rxq_ctrl *rxq_ctrl)
  *   Pointer to RX queue structure.
  *
  * @return
- *   0 on success, errno value on failure.
+ *   0 on success, negative errno value on failure.
  */
 static int
 rxq_alloc_elts_sprq(struct mlx5_rxq_ctrl *rxq_ctrl)
@@ -219,7 +219,7 @@ rxq_alloc_elts_sprq(struct mlx5_rxq_ctrl *rxq_ctrl)
  *   Pointer to RX queue structure.
  *
  * @return
- *   0 on success, errno value on failure.
+ *   0 on success, negative errno value on failure.
  */
 int
 rxq_alloc_elts(struct mlx5_rxq_ctrl *rxq_ctrl)
@@ -232,7 +232,9 @@ rxq_alloc_elts(struct mlx5_rxq_ctrl *rxq_ctrl)
 	 */
 	if (mlx5_rxq_mprq_enabled(&rxq_ctrl->rxq))
 		ret = rxq_alloc_elts_mprq(rxq_ctrl);
-	return (ret || rxq_alloc_elts_sprq(rxq_ctrl));
+	if (ret == 0)
+		ret = rxq_alloc_elts_sprq(rxq_ctrl);
+	return ret;
 }
 
 /**
-- 
2.33.0


^ permalink raw reply	[flat|nested] 29+ messages in thread

* [dpdk-dev] [PATCH v2 04/13] net/mlx5: clean Rx queue code
  2021-10-16  9:12 ` [dpdk-dev] [PATCH v2 00/13] " Xueming Li
                     ` (2 preceding siblings ...)
  2021-10-16  9:12   ` [dpdk-dev] [PATCH v2 03/13] net/mlx5: fix Rx queue memory allocation return value Xueming Li
@ 2021-10-16  9:12   ` Xueming Li
  2021-10-16  9:12   ` [dpdk-dev] [PATCH v2 05/13] net/mlx5: split multiple packet Rq memory pool Xueming Li
                     ` (9 subsequent siblings)
  13 siblings, 0 replies; 29+ messages in thread
From: Xueming Li @ 2021-10-16  9:12 UTC (permalink / raw)
  To: dev; +Cc: xuemingl, Lior Margalit, Matan Azrad, Viacheslav Ovsiienko

Removes unused rxq code.

Signed-off-by: Xueming Li <xuemingl@nvidia.com>
---
 drivers/net/mlx5/mlx5_rxq.c | 8 ++------
 1 file changed, 2 insertions(+), 6 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index 4036bcbe544..1cb99de1ae7 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -674,9 +674,7 @@ mlx5_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 		    struct rte_mempool *mp)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
-	struct mlx5_rxq_data *rxq = (*priv->rxqs)[idx];
-	struct mlx5_rxq_ctrl *rxq_ctrl =
-		container_of(rxq, struct mlx5_rxq_ctrl, rxq);
+	struct mlx5_rxq_ctrl *rxq_ctrl;
 	struct rte_eth_rxseg_split *rx_seg =
 				(struct rte_eth_rxseg_split *)conf->rx_seg;
 	struct rte_eth_rxseg_split rx_single = {.mp = mp};
@@ -743,9 +741,7 @@ mlx5_rx_hairpin_queue_setup(struct rte_eth_dev *dev, uint16_t idx,
 			    const struct rte_eth_hairpin_conf *hairpin_conf)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
-	struct mlx5_rxq_data *rxq = (*priv->rxqs)[idx];
-	struct mlx5_rxq_ctrl *rxq_ctrl =
-		container_of(rxq, struct mlx5_rxq_ctrl, rxq);
+	struct mlx5_rxq_ctrl *rxq_ctrl;
 	int res;
 
 	res = mlx5_rx_queue_pre_setup(dev, idx, &desc);
-- 
2.33.0


^ permalink raw reply	[flat|nested] 29+ messages in thread

* [dpdk-dev] [PATCH v2 05/13] net/mlx5: split multiple packet Rq memory pool
  2021-10-16  9:12 ` [dpdk-dev] [PATCH v2 00/13] " Xueming Li
                     ` (3 preceding siblings ...)
  2021-10-16  9:12   ` [dpdk-dev] [PATCH v2 04/13] net/mlx5: clean Rx queue code Xueming Li
@ 2021-10-16  9:12   ` Xueming Li
  2021-10-16  9:12   ` [dpdk-dev] [PATCH v2 06/13] net/mlx5: split Rx queue Xueming Li
                     ` (8 subsequent siblings)
  13 siblings, 0 replies; 29+ messages in thread
From: Xueming Li @ 2021-10-16  9:12 UTC (permalink / raw)
  To: dev; +Cc: xuemingl, Lior Margalit, Matan Azrad, Viacheslav Ovsiienko

Port info is not visible from a shared Rx queue, so split the
Multi-Packet RQ (MPRQ) mempool from per-device to per-Rx-queue scope,
and change the pool flag to single-consumer (MEMPOOL_F_SC_GET).
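
The heart of the change, extracted from the diff below for readability
(identifiers as in the patch): each queue now creates its own pool, so
the single-consumer flag is safe because only that queue gets buffers
from it.

	snprintf(name, sizeof(name), "port-%u-queue-%hu-mprq",
		 dev->data->port_id, rxq->idx);
	mp = rte_mempool_create(name, obj_num, obj_size,
				MLX5_MPRQ_MP_CACHE_SZ,
				0, NULL, NULL, mlx5_mprq_buf_init,
				(void *)(((uintptr_t)1) << strd_num_n),
				dev->device->numa_node, MEMPOOL_F_SC_GET);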

Signed-off-by: Xueming Li <xuemingl@nvidia.com>
---
 drivers/net/mlx5/mlx5.c         |   1 -
 drivers/net/mlx5/mlx5_rx.h      |   4 +-
 drivers/net/mlx5/mlx5_rxq.c     | 109 ++++++++++++--------------------
 drivers/net/mlx5/mlx5_trigger.c |  10 ++-
 4 files changed, 47 insertions(+), 77 deletions(-)

diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index 45ccfe27845..1033c29cb82 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -1608,7 +1608,6 @@ mlx5_dev_close(struct rte_eth_dev *dev)
 		mlx5_drop_action_destroy(dev);
 	if (priv->mreg_cp_tbl)
 		mlx5_hlist_destroy(priv->mreg_cp_tbl);
-	mlx5_mprq_free_mp(dev);
 	if (priv->sh->ct_mng)
 		mlx5_flow_aso_ct_mng_close(priv->sh);
 	mlx5_os_free_shared_dr(priv);
diff --git a/drivers/net/mlx5/mlx5_rx.h b/drivers/net/mlx5/mlx5_rx.h
index d44c8078dea..a8e0c3162b0 100644
--- a/drivers/net/mlx5/mlx5_rx.h
+++ b/drivers/net/mlx5/mlx5_rx.h
@@ -179,8 +179,8 @@ struct mlx5_rxq_ctrl {
 extern uint8_t rss_hash_default_key[];
 
 unsigned int mlx5_rxq_cqe_num(struct mlx5_rxq_data *rxq_data);
-int mlx5_mprq_free_mp(struct rte_eth_dev *dev);
-int mlx5_mprq_alloc_mp(struct rte_eth_dev *dev);
+int mlx5_mprq_free_mp(struct rte_eth_dev *dev, struct mlx5_rxq_ctrl *rxq_ctrl);
+int mlx5_mprq_alloc_mp(struct rte_eth_dev *dev, struct mlx5_rxq_ctrl *rxq_ctrl);
 int mlx5_rx_queue_start(struct rte_eth_dev *dev, uint16_t queue_id);
 int mlx5_rx_queue_stop(struct rte_eth_dev *dev, uint16_t queue_id);
 int mlx5_rx_queue_start_primary(struct rte_eth_dev *dev, uint16_t queue_id);
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index 1cb99de1ae7..f29a8143967 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -1087,7 +1087,7 @@ mlx5_mprq_buf_init(struct rte_mempool *mp, void *opaque_arg,
 }
 
 /**
- * Free mempool of Multi-Packet RQ.
+ * Free the per-queue mempool of a Multi-Packet RQ.
  *
  * @param dev
  *   Pointer to Ethernet device.
@@ -1096,16 +1096,15 @@ mlx5_mprq_buf_init(struct rte_mempool *mp, void *opaque_arg,
  *   0 on success, negative errno value on failure.
  */
 int
-mlx5_mprq_free_mp(struct rte_eth_dev *dev)
+mlx5_mprq_free_mp(struct rte_eth_dev *dev, struct mlx5_rxq_ctrl *rxq_ctrl)
 {
-	struct mlx5_priv *priv = dev->data->dev_private;
-	struct rte_mempool *mp = priv->mprq_mp;
-	unsigned int i;
+	struct mlx5_rxq_data *rxq = &rxq_ctrl->rxq;
+	struct rte_mempool *mp = rxq->mprq_mp;
 
 	if (mp == NULL)
 		return 0;
-	DRV_LOG(DEBUG, "port %u freeing mempool (%s) for Multi-Packet RQ",
-		dev->data->port_id, mp->name);
+	DRV_LOG(DEBUG, "port %u queue %hu freeing mempool (%s) for Multi-Packet RQ",
+		dev->data->port_id, rxq->idx, mp->name);
 	/*
 	 * If a buffer in the pool has been externally attached to a mbuf and it
 	 * is still in use by application, destroying the Rx queue can spoil
@@ -1123,34 +1122,28 @@ mlx5_mprq_free_mp(struct rte_eth_dev *dev)
 		return -rte_errno;
 	}
 	rte_mempool_free(mp);
-	/* Unset mempool for each Rx queue. */
-	for (i = 0; i != priv->rxqs_n; ++i) {
-		struct mlx5_rxq_data *rxq = (*priv->rxqs)[i];
-
-		if (rxq == NULL)
-			continue;
-		rxq->mprq_mp = NULL;
-	}
-	priv->mprq_mp = NULL;
+	rxq->mprq_mp = NULL;
 	return 0;
 }
 
 /**
- * Allocate a mempool for Multi-Packet RQ. All configured Rx queues share the
- * mempool. If already allocated, reuse it if there're enough elements.
+ * Allocate a per-queue mempool for Multi-Packet RQ.
+ * If already allocated, reuse it if there are enough elements.
  * Otherwise, resize it.
  *
  * @param dev
  *   Pointer to Ethernet device.
+ * @param rxq_ctrl
+ *   Pointer to RXQ.
  *
  * @return
  *   0 on success, negative errno value on failure.
  */
 int
-mlx5_mprq_alloc_mp(struct rte_eth_dev *dev)
+mlx5_mprq_alloc_mp(struct rte_eth_dev *dev, struct mlx5_rxq_ctrl *rxq_ctrl)
 {
-	struct mlx5_priv *priv = dev->data->dev_private;
-	struct rte_mempool *mp = priv->mprq_mp;
+	struct mlx5_rxq_data *rxq = &rxq_ctrl->rxq;
+	struct rte_mempool *mp = rxq->mprq_mp;
 	char name[RTE_MEMPOOL_NAMESIZE];
 	unsigned int desc = 0;
 	unsigned int buf_len;
@@ -1158,28 +1151,15 @@ mlx5_mprq_alloc_mp(struct rte_eth_dev *dev)
 	unsigned int obj_size;
 	unsigned int strd_num_n = 0;
 	unsigned int strd_sz_n = 0;
-	unsigned int i;
-	unsigned int n_ibv = 0;
 
-	if (!mlx5_mprq_enabled(dev))
+	if (rxq_ctrl == NULL || rxq_ctrl->type != MLX5_RXQ_TYPE_STANDARD)
 		return 0;
-	/* Count the total number of descriptors configured. */
-	for (i = 0; i != priv->rxqs_n; ++i) {
-		struct mlx5_rxq_data *rxq = (*priv->rxqs)[i];
-		struct mlx5_rxq_ctrl *rxq_ctrl = container_of
-			(rxq, struct mlx5_rxq_ctrl, rxq);
-
-		if (rxq == NULL || rxq_ctrl->type != MLX5_RXQ_TYPE_STANDARD)
-			continue;
-		n_ibv++;
-		desc += 1 << rxq->elts_n;
-		/* Get the max number of strides. */
-		if (strd_num_n < rxq->strd_num_n)
-			strd_num_n = rxq->strd_num_n;
-		/* Get the max size of a stride. */
-		if (strd_sz_n < rxq->strd_sz_n)
-			strd_sz_n = rxq->strd_sz_n;
-	}
+	/* Number of descriptors configured. */
+	desc = 1 << rxq->elts_n;
+	/* Get the max number of strides. */
+	strd_num_n = rxq->strd_num_n;
+	/* Get the max size of a stride. */
+	strd_sz_n = rxq->strd_sz_n;
 	MLX5_ASSERT(strd_num_n && strd_sz_n);
 	buf_len = (1 << strd_num_n) * (1 << strd_sz_n);
 	obj_size = sizeof(struct mlx5_mprq_buf) + buf_len + (1 << strd_num_n) *
@@ -1196,7 +1176,7 @@ mlx5_mprq_alloc_mp(struct rte_eth_dev *dev)
 	 * this Mempool gets available again.
 	 */
 	desc *= 4;
-	obj_num = desc + MLX5_MPRQ_MP_CACHE_SZ * n_ibv;
+	obj_num = desc + MLX5_MPRQ_MP_CACHE_SZ;
 	/*
 	 * rte_mempool_create_empty() has sanity check to refuse large cache
 	 * size compared to the number of elements.
@@ -1209,50 +1189,41 @@ mlx5_mprq_alloc_mp(struct rte_eth_dev *dev)
 		DRV_LOG(DEBUG, "port %u mempool %s is being reused",
 			dev->data->port_id, mp->name);
 		/* Reuse. */
-		goto exit;
-	} else if (mp != NULL) {
-		DRV_LOG(DEBUG, "port %u mempool %s should be resized, freeing it",
-			dev->data->port_id, mp->name);
+		return 0;
+	}
+	if (mp != NULL) {
+		DRV_LOG(DEBUG, "port %u queue %u mempool %s should be resized, freeing it",
+			dev->data->port_id, rxq->idx, mp->name);
 		/*
 		 * If failed to free, which means it may be still in use, no way
 		 * but to keep using the existing one. On buffer underrun,
 		 * packets will be memcpy'd instead of external buffer
 		 * attachment.
 		 */
-		if (mlx5_mprq_free_mp(dev)) {
+		if (mlx5_mprq_free_mp(dev, rxq_ctrl) != 0) {
 			if (mp->elt_size >= obj_size)
-				goto exit;
+				return 0;
 			else
 				return -rte_errno;
 		}
 	}
-	snprintf(name, sizeof(name), "port-%u-mprq", dev->data->port_id);
+	snprintf(name, sizeof(name), "port-%u-queue-%hu-mprq",
+		 dev->data->port_id, rxq->idx);
 	mp = rte_mempool_create(name, obj_num, obj_size, MLX5_MPRQ_MP_CACHE_SZ,
 				0, NULL, NULL, mlx5_mprq_buf_init,
-				(void *)((uintptr_t)1 << strd_num_n),
-				dev->device->numa_node, 0);
+				(void *)(((uintptr_t)1) << strd_num_n),
+				dev->device->numa_node, MEMPOOL_F_SC_GET);
 	if (mp == NULL) {
 		DRV_LOG(ERR,
-			"port %u failed to allocate a mempool for"
+			"port %u queue %hu failed to allocate a mempool for"
 			" Multi-Packet RQ, count=%u, size=%u",
-			dev->data->port_id, obj_num, obj_size);
+			dev->data->port_id, rxq->idx, obj_num, obj_size);
 		rte_errno = ENOMEM;
 		return -rte_errno;
 	}
-	priv->mprq_mp = mp;
-exit:
-	/* Set mempool for each Rx queue. */
-	for (i = 0; i != priv->rxqs_n; ++i) {
-		struct mlx5_rxq_data *rxq = (*priv->rxqs)[i];
-		struct mlx5_rxq_ctrl *rxq_ctrl = container_of
-			(rxq, struct mlx5_rxq_ctrl, rxq);
-
-		if (rxq == NULL || rxq_ctrl->type != MLX5_RXQ_TYPE_STANDARD)
-			continue;
-		rxq->mprq_mp = mp;
-	}
-	DRV_LOG(INFO, "port %u Multi-Packet RQ is configured",
-		dev->data->port_id);
+	rxq->mprq_mp = mp;
+	DRV_LOG(INFO, "port %u queue %hu Multi-Packet RQ is configured",
+		dev->data->port_id, rxq->idx);
 	return 0;
 }
 
@@ -1717,8 +1688,10 @@ mlx5_rxq_release(struct rte_eth_dev *dev, uint16_t idx)
 		dev->data->rx_queue_state[idx] = RTE_ETH_QUEUE_STATE_STOPPED;
 	}
 	if (!__atomic_load_n(&rxq_ctrl->refcnt, __ATOMIC_RELAXED)) {
-		if (rxq_ctrl->type == MLX5_RXQ_TYPE_STANDARD)
+		if (rxq_ctrl->type == MLX5_RXQ_TYPE_STANDARD) {
 			mlx5_mr_btree_free(&rxq_ctrl->rxq.mr_ctrl.cache_bh);
+			mlx5_mprq_free_mp(dev, rxq_ctrl);
+		}
 		LIST_REMOVE(rxq_ctrl, next);
 		mlx5_free(rxq_ctrl);
 		(*priv->rxqs)[idx] = NULL;
diff --git a/drivers/net/mlx5/mlx5_trigger.c b/drivers/net/mlx5/mlx5_trigger.c
index c3adf5082e6..0753dbad053 100644
--- a/drivers/net/mlx5/mlx5_trigger.c
+++ b/drivers/net/mlx5/mlx5_trigger.c
@@ -138,11 +138,6 @@ mlx5_rxq_start(struct rte_eth_dev *dev)
 	unsigned int i;
 	int ret = 0;
 
-	/* Allocate/reuse/resize mempool for Multi-Packet RQ. */
-	if (mlx5_mprq_alloc_mp(dev)) {
-		/* Should not release Rx queues but return immediately. */
-		return -rte_errno;
-	}
 	DRV_LOG(DEBUG, "Port %u device_attr.max_qp_wr is %d.",
 		dev->data->port_id, priv->sh->device_attr.max_qp_wr);
 	DRV_LOG(DEBUG, "Port %u device_attr.max_sge is %d.",
@@ -153,8 +148,11 @@ mlx5_rxq_start(struct rte_eth_dev *dev)
 		if (!rxq_ctrl)
 			continue;
 		if (rxq_ctrl->type == MLX5_RXQ_TYPE_STANDARD) {
-			/* Pre-register Rx mempools. */
 			if (mlx5_rxq_mprq_enabled(&rxq_ctrl->rxq)) {
+				/* Allocate/reuse/resize mempool for MPRQ. */
+				if (mlx5_mprq_alloc_mp(dev, rxq_ctrl) < 0)
+					goto error;
+				/* Pre-register Rx mempools. */
 				mlx5_mr_update_mp(dev, &rxq_ctrl->rxq.mr_ctrl,
 						  rxq_ctrl->rxq.mprq_mp);
 			} else {
-- 
2.33.0


^ permalink raw reply	[flat|nested] 29+ messages in thread

* [dpdk-dev] [PATCH v2 06/13] net/mlx5: split Rx queue
  2021-10-16  9:12 ` [dpdk-dev] [PATCH v2 00/13] " Xueming Li
                     ` (4 preceding siblings ...)
  2021-10-16  9:12   ` [dpdk-dev] [PATCH v2 05/13] net/mlx5: split multiple packet Rq memory pool Xueming Li
@ 2021-10-16  9:12   ` Xueming Li
  2021-10-16  9:12   ` [dpdk-dev] [PATCH v2 07/13] net/mlx5: move Rx queue reference count Xueming Li
                     ` (7 subsequent siblings)
  13 siblings, 0 replies; 29+ messages in thread
From: Xueming Li @ 2021-10-16  9:12 UTC (permalink / raw)
  To: dev; +Cc: xuemingl, Lior Margalit, Matan Azrad, Viacheslav Ovsiienko

To prepare for shared Rx queues, split the Rx queue data into shareable
and private parts.
Struct mlx5_rxq_priv holds the per-queue private data.
Struct mlx5_rxq_ctrl holds the shareable queue resources and data.
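
A short sketch of the resulting relationship (names from the
structures in this patch; several private structs may later point at
one shared ctrl):

	struct mlx5_rxq_priv *rxq = (*priv->rxq_privs)[idx];
	struct mlx5_rxq_ctrl *ctrl = rxq->ctrl;		/* shareable part */
	struct mlx5_rxq_data *data = &ctrl->rxq;	/* datapath part */

All owners of a ctrl are linked through ctrl->owners and
rxq->owner_entry.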

Signed-off-by: Xueming Li <xuemingl@nvidia.com>
---
 drivers/net/mlx5/mlx5.c        |  4 +++
 drivers/net/mlx5/mlx5.h        |  5 ++-
 drivers/net/mlx5/mlx5_ethdev.c | 10 ++++++
 drivers/net/mlx5/mlx5_rx.h     | 15 ++++++--
 drivers/net/mlx5/mlx5_rxq.c    | 66 ++++++++++++++++++++++++++++------
 5 files changed, 86 insertions(+), 14 deletions(-)

diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index 1033c29cb82..477ad8c1bc9 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -1591,6 +1591,10 @@ mlx5_dev_close(struct rte_eth_dev *dev)
 		mlx5_free(dev->intr_handle);
 		dev->intr_handle = NULL;
 	}
+	if (priv->rxq_privs != NULL) {
+		mlx5_free(priv->rxq_privs);
+		priv->rxq_privs = NULL;
+	}
 	if (priv->txqs != NULL) {
 		/* XXX race condition if mlx5_tx_burst() is still running. */
 		rte_delay_us_sleep(1000);
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 3581414b789..b18ddb0b0fa 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -1335,6 +1335,8 @@ enum mlx5_txq_modify_type {
 	MLX5_TXQ_MOD_ERR2RDY, /* modify state from error to ready. */
 };
 
+struct mlx5_rxq_priv;
+
 /* HW objects operations structure. */
 struct mlx5_obj_ops {
 	int (*rxq_obj_modify_vlan_strip)(struct mlx5_rxq_obj *rxq_obj, int on);
@@ -1404,7 +1406,8 @@ struct mlx5_priv {
 	/* RX/TX queues. */
 	unsigned int rxqs_n; /* RX queues array size. */
 	unsigned int txqs_n; /* TX queues array size. */
-	struct mlx5_rxq_data *(*rxqs)[]; /* RX queues. */
+	struct mlx5_rxq_priv *(*rxq_privs)[]; /* RX queue non-shared data. */
+	struct mlx5_rxq_data *(*rxqs)[]; /* (Shared) RX queues. */
 	struct mlx5_txq_data *(*txqs)[]; /* TX queues. */
 	struct rte_mempool *mprq_mp; /* Mempool for Multi-Packet RQ. */
 	struct rte_eth_rss_conf rss_conf; /* RSS configuration. */
diff --git a/drivers/net/mlx5/mlx5_ethdev.c b/drivers/net/mlx5/mlx5_ethdev.c
index 8ebfd0bccb3..ee1189b929d 100644
--- a/drivers/net/mlx5/mlx5_ethdev.c
+++ b/drivers/net/mlx5/mlx5_ethdev.c
@@ -104,6 +104,16 @@ mlx5_dev_configure(struct rte_eth_dev *dev)
 	       MLX5_RSS_HASH_KEY_LEN);
 	priv->rss_conf.rss_key_len = MLX5_RSS_HASH_KEY_LEN;
 	priv->rss_conf.rss_hf = dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf;
+	priv->rxq_privs = mlx5_realloc(priv->rxq_privs,
+				       MLX5_MEM_ANY | MLX5_MEM_ZERO,
+				       sizeof(void *) * rxqs_n, 0,
+				       SOCKET_ID_ANY);
+	if (priv->rxq_privs == NULL) {
+		DRV_LOG(ERR, "port %u cannot allocate rxq private data",
+			dev->data->port_id);
+		rte_errno = ENOMEM;
+		return -rte_errno;
+	}
 	priv->rxqs = (void *)dev->data->rx_queues;
 	priv->txqs = (void *)dev->data->tx_queues;
 	if (txqs_n != priv->txqs_n) {
diff --git a/drivers/net/mlx5/mlx5_rx.h b/drivers/net/mlx5/mlx5_rx.h
index a8e0c3162b0..db6252e8e86 100644
--- a/drivers/net/mlx5/mlx5_rx.h
+++ b/drivers/net/mlx5/mlx5_rx.h
@@ -161,7 +161,9 @@ struct mlx5_rxq_ctrl {
 	struct mlx5_rxq_data rxq; /* Data path structure. */
 	LIST_ENTRY(mlx5_rxq_ctrl) next; /* Pointer to the next element. */
 	uint32_t refcnt; /* Reference counter. */
+	LIST_HEAD(priv, mlx5_rxq_priv) owners; /* Owner rxq list. */
 	struct mlx5_rxq_obj *obj; /* Verbs/DevX elements. */
+	struct mlx5_dev_ctx_shared *sh; /* Shared context. */
 	struct mlx5_priv *priv; /* Back pointer to private data. */
 	enum mlx5_rxq_type type; /* Rxq type. */
 	unsigned int socket; /* CPU socket ID for allocations. */
@@ -174,6 +176,14 @@ struct mlx5_rxq_ctrl {
 	uint32_t hairpin_status; /* Hairpin binding status. */
 };
 
+/* RX queue private data. */
+struct mlx5_rxq_priv {
+	uint16_t idx; /* Queue index. */
+	struct mlx5_rxq_ctrl *ctrl; /* Shared Rx Queue. */
+	LIST_ENTRY(mlx5_rxq_priv) owner_entry; /* Entry in shared rxq_ctrl. */
+	struct mlx5_priv *priv; /* Back pointer to private data. */
+};
+
 /* mlx5_rxq.c */
 
 extern uint8_t rss_hash_default_key[];
@@ -197,13 +207,14 @@ void mlx5_rx_intr_vec_disable(struct rte_eth_dev *dev);
 int mlx5_rx_intr_enable(struct rte_eth_dev *dev, uint16_t rx_queue_id);
 int mlx5_rx_intr_disable(struct rte_eth_dev *dev, uint16_t rx_queue_id);
 int mlx5_rxq_obj_verify(struct rte_eth_dev *dev);
-struct mlx5_rxq_ctrl *mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx,
+struct mlx5_rxq_ctrl *mlx5_rxq_new(struct rte_eth_dev *dev,
+				   struct mlx5_rxq_priv *rxq,
 				   uint16_t desc, unsigned int socket,
 				   const struct rte_eth_rxconf *conf,
 				   const struct rte_eth_rxseg_split *rx_seg,
 				   uint16_t n_seg);
 struct mlx5_rxq_ctrl *mlx5_rxq_hairpin_new
-	(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
+	(struct rte_eth_dev *dev, struct mlx5_rxq_priv *rxq, uint16_t desc,
 	 const struct rte_eth_hairpin_conf *hairpin_conf);
 struct mlx5_rxq_ctrl *mlx5_rxq_get(struct rte_eth_dev *dev, uint16_t idx);
 int mlx5_rxq_release(struct rte_eth_dev *dev, uint16_t idx);
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index f29a8143967..acd77a7ecc1 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -674,6 +674,7 @@ mlx5_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 		    struct rte_mempool *mp)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5_rxq_priv *rxq;
 	struct mlx5_rxq_ctrl *rxq_ctrl;
 	struct rte_eth_rxseg_split *rx_seg =
 				(struct rte_eth_rxseg_split *)conf->rx_seg;
@@ -708,10 +709,23 @@ mlx5_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 	res = mlx5_rx_queue_pre_setup(dev, idx, &desc);
 	if (res)
 		return res;
-	rxq_ctrl = mlx5_rxq_new(dev, idx, desc, socket, conf, rx_seg, n_seg);
+	rxq = mlx5_malloc(MLX5_MEM_ANY | MLX5_MEM_ZERO, sizeof(*rxq), 0,
+			  SOCKET_ID_ANY);
+	if (!rxq) {
+		DRV_LOG(ERR, "port %u unable to allocate rx queue index %u private data",
+			dev->data->port_id, idx);
+		rte_errno = ENOMEM;
+		return -rte_errno;
+	}
+	rxq->priv = priv;
+	rxq->idx = idx;
+	(*priv->rxq_privs)[idx] = rxq;
+	rxq_ctrl = mlx5_rxq_new(dev, rxq, desc, socket, conf, rx_seg, n_seg);
 	if (!rxq_ctrl) {
-		DRV_LOG(ERR, "port %u unable to allocate queue index %u",
+		DRV_LOG(ERR, "port %u unable to allocate rx queue index %u",
 			dev->data->port_id, idx);
+		mlx5_free(rxq);
+		(*priv->rxq_privs)[idx] = NULL;
 		rte_errno = ENOMEM;
 		return -rte_errno;
 	}
@@ -741,6 +755,7 @@ mlx5_rx_hairpin_queue_setup(struct rte_eth_dev *dev, uint16_t idx,
 			    const struct rte_eth_hairpin_conf *hairpin_conf)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5_rxq_priv *rxq;
 	struct mlx5_rxq_ctrl *rxq_ctrl;
 	int res;
 
@@ -776,14 +791,27 @@ mlx5_rx_hairpin_queue_setup(struct rte_eth_dev *dev, uint16_t idx,
 			return -rte_errno;
 		}
 	}
-	rxq_ctrl = mlx5_rxq_hairpin_new(dev, idx, desc, hairpin_conf);
+	rxq = mlx5_malloc(MLX5_MEM_ANY | MLX5_MEM_ZERO, sizeof(*rxq), 0,
+			  SOCKET_ID_ANY);
+	if (!rxq) {
+		DRV_LOG(ERR, "port %u unable to allocate hairpin rx queue index %u private data",
+			dev->data->port_id, idx);
+		rte_errno = ENOMEM;
+		return -rte_errno;
+	}
+	rxq->priv = priv;
+	rxq->idx = idx;
+	(*priv->rxq_privs)[idx] = rxq;
+	rxq_ctrl = mlx5_rxq_hairpin_new(dev, rxq, desc, hairpin_conf);
 	if (!rxq_ctrl) {
-		DRV_LOG(ERR, "port %u unable to allocate queue index %u",
+		DRV_LOG(ERR, "port %u unable to allocate hairpin queue index %u",
 			dev->data->port_id, idx);
+		mlx5_free(rxq);
+		(*priv->rxq_privs)[idx] = NULL;
 		rte_errno = ENOMEM;
 		return -rte_errno;
 	}
-	DRV_LOG(DEBUG, "port %u adding Rx queue %u to list",
+	DRV_LOG(DEBUG, "port %u adding hairpin Rx queue %u to list",
 		dev->data->port_id, idx);
 	(*priv->rxqs)[idx] = &rxq_ctrl->rxq;
 	return 0;
@@ -1274,8 +1302,8 @@ mlx5_max_lro_msg_size_adjust(struct rte_eth_dev *dev, uint16_t idx,
  *
  * @param dev
  *   Pointer to Ethernet device.
- * @param idx
- *   RX queue index.
+ * @param rxq
+ *   RX queue private data.
  * @param desc
  *   Number of descriptors to configure in queue.
  * @param socket
@@ -1285,10 +1313,12 @@ mlx5_max_lro_msg_size_adjust(struct rte_eth_dev *dev, uint16_t idx,
  *   A DPDK queue object on success, NULL otherwise and rte_errno is set.
  */
 struct mlx5_rxq_ctrl *
-mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
+mlx5_rxq_new(struct rte_eth_dev *dev, struct mlx5_rxq_priv *rxq,
+	     uint16_t desc,
 	     unsigned int socket, const struct rte_eth_rxconf *conf,
 	     const struct rte_eth_rxseg_split *rx_seg, uint16_t n_seg)
 {
+	uint16_t idx = rxq->idx;
 	struct mlx5_priv *priv = dev->data->dev_private;
 	struct mlx5_rxq_ctrl *tmpl;
 	unsigned int mb_len = rte_pktmbuf_data_room_size(rx_seg[0].mp);
@@ -1331,6 +1361,9 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 		rte_errno = ENOMEM;
 		return NULL;
 	}
+	LIST_INIT(&tmpl->owners);
+	rxq->ctrl = tmpl;
+	LIST_INSERT_HEAD(&tmpl->owners, rxq, owner_entry);
 	MLX5_ASSERT(n_seg && n_seg <= MLX5_MAX_RXQ_NSEG);
 	/*
 	 * Build the array of actual buffer offsets and lengths.
@@ -1564,6 +1597,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 	tmpl->rxq.rss_hash = !!priv->rss_conf.rss_hf &&
 		(!!(dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS));
 	tmpl->rxq.port_id = dev->data->port_id;
+	tmpl->sh = priv->sh;
 	tmpl->priv = priv;
 	tmpl->rxq.mp = rx_seg[0].mp;
 	tmpl->rxq.elts_n = log2above(desc);
@@ -1591,8 +1625,8 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
  *
  * @param dev
  *   Pointer to Ethernet device.
- * @param idx
- *   RX queue index.
+ * @param rxq
+ *   RX queue.
  * @param desc
  *   Number of descriptors to configure in queue.
  * @param hairpin_conf
@@ -1602,9 +1636,11 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
  *   A DPDK queue object on success, NULL otherwise and rte_errno is set.
  */
 struct mlx5_rxq_ctrl *
-mlx5_rxq_hairpin_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
+mlx5_rxq_hairpin_new(struct rte_eth_dev *dev, struct mlx5_rxq_priv *rxq,
+		     uint16_t desc,
 		     const struct rte_eth_hairpin_conf *hairpin_conf)
 {
+	uint16_t idx = rxq->idx;
 	struct mlx5_priv *priv = dev->data->dev_private;
 	struct mlx5_rxq_ctrl *tmpl;
 
@@ -1614,10 +1650,14 @@ mlx5_rxq_hairpin_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 		rte_errno = ENOMEM;
 		return NULL;
 	}
+	LIST_INIT(&tmpl->owners);
+	rxq->ctrl = tmpl;
+	LIST_INSERT_HEAD(&tmpl->owners, rxq, owner_entry);
 	tmpl->type = MLX5_RXQ_TYPE_HAIRPIN;
 	tmpl->socket = SOCKET_ID_ANY;
 	tmpl->rxq.rss_hash = 0;
 	tmpl->rxq.port_id = dev->data->port_id;
+	tmpl->sh = priv->sh;
 	tmpl->priv = priv;
 	tmpl->rxq.mp = NULL;
 	tmpl->rxq.elts_n = log2above(desc);
@@ -1671,6 +1711,7 @@ mlx5_rxq_release(struct rte_eth_dev *dev, uint16_t idx)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
 	struct mlx5_rxq_ctrl *rxq_ctrl;
+	struct mlx5_rxq_priv *rxq = (*priv->rxq_privs)[idx];
 
 	if (priv->rxqs == NULL || (*priv->rxqs)[idx] == NULL)
 		return 0;
@@ -1692,9 +1733,12 @@ mlx5_rxq_release(struct rte_eth_dev *dev, uint16_t idx)
 			mlx5_mr_btree_free(&rxq_ctrl->rxq.mr_ctrl.cache_bh);
 			mlx5_mprq_free_mp(dev, rxq_ctrl);
 		}
+		LIST_REMOVE(rxq, owner_entry);
 		LIST_REMOVE(rxq_ctrl, next);
 		mlx5_free(rxq_ctrl);
 		(*priv->rxqs)[idx] = NULL;
+		mlx5_free(rxq);
+		(*priv->rxq_privs)[idx] = NULL;
 	}
 	return 0;
 }
-- 
2.33.0


^ permalink raw reply	[flat|nested] 29+ messages in thread

* [dpdk-dev] [PATCH v2 07/13] net/mlx5: move Rx queue reference count
  2021-10-16  9:12 ` [dpdk-dev] [PATCH v2 00/13] " Xueming Li
                     ` (5 preceding siblings ...)
  2021-10-16  9:12   ` [dpdk-dev] [PATCH v2 06/13] net/mlx5: split Rx queue Xueming Li
@ 2021-10-16  9:12   ` Xueming Li
  2021-10-16  9:12   ` [dpdk-dev] [PATCH v2 08/13] net/mlx5: move Rx queue hairpin info to private data Xueming Li
                     ` (6 subsequent siblings)
  13 siblings, 0 replies; 29+ messages in thread
From: Xueming Li @ 2021-10-16  9:12 UTC (permalink / raw)
  To: dev; +Cc: xuemingl, Lior Margalit, Matan Azrad, Viacheslav Ovsiienko

The Rx queue reference count tracks how many users, such as RQ tables,
currently hold the queue. To prepare for shared Rx queues, move it from
rxq_ctrl to the Rx queue private data.
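
Typical use of the helpers added below (a sketch; the error handling is
illustrative only):

	struct mlx5_rxq_priv *rxq = mlx5_rxq_ref(dev, idx); /* refcnt++ */

	if (rxq == NULL)
		return -rte_errno;
	/* ... queue idx is referenced, e.g. by an RQ table ... */
	mlx5_rxq_deref(dev, idx);	/* returns the updated count */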

Signed-off-by: Xueming Li <xuemingl@nvidia.com>
---
 drivers/net/mlx5/mlx5_rx.h      |   8 +-
 drivers/net/mlx5/mlx5_rxq.c     | 173 +++++++++++++++++++++-----------
 drivers/net/mlx5/mlx5_trigger.c |  57 +++++------
 3 files changed, 144 insertions(+), 94 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_rx.h b/drivers/net/mlx5/mlx5_rx.h
index db6252e8e86..fe19414c130 100644
--- a/drivers/net/mlx5/mlx5_rx.h
+++ b/drivers/net/mlx5/mlx5_rx.h
@@ -160,7 +160,6 @@ enum mlx5_rxq_type {
 struct mlx5_rxq_ctrl {
 	struct mlx5_rxq_data rxq; /* Data path structure. */
 	LIST_ENTRY(mlx5_rxq_ctrl) next; /* Pointer to the next element. */
-	uint32_t refcnt; /* Reference counter. */
 	LIST_HEAD(priv, mlx5_rxq_priv) owners; /* Owner rxq list. */
 	struct mlx5_rxq_obj *obj; /* Verbs/DevX elements. */
 	struct mlx5_dev_ctx_shared *sh; /* Shared context. */
@@ -179,6 +178,7 @@ struct mlx5_rxq_ctrl {
 /* RX queue private data. */
 struct mlx5_rxq_priv {
 	uint16_t idx; /* Queue index. */
+	uint32_t refcnt; /* Reference counter. */
 	struct mlx5_rxq_ctrl *ctrl; /* Shared Rx Queue. */
 	LIST_ENTRY(mlx5_rxq_priv) owner_entry; /* Entry in shared rxq_ctrl. */
 	struct mlx5_priv *priv; /* Back pointer to private data. */
@@ -216,7 +216,11 @@ struct mlx5_rxq_ctrl *mlx5_rxq_new(struct rte_eth_dev *dev,
 struct mlx5_rxq_ctrl *mlx5_rxq_hairpin_new
 	(struct rte_eth_dev *dev, struct mlx5_rxq_priv *rxq, uint16_t desc,
 	 const struct rte_eth_hairpin_conf *hairpin_conf);
-struct mlx5_rxq_ctrl *mlx5_rxq_get(struct rte_eth_dev *dev, uint16_t idx);
+struct mlx5_rxq_priv *mlx5_rxq_ref(struct rte_eth_dev *dev, uint16_t idx);
+uint32_t mlx5_rxq_deref(struct rte_eth_dev *dev, uint16_t idx);
+struct mlx5_rxq_priv *mlx5_rxq_get(struct rte_eth_dev *dev, uint16_t idx);
+struct mlx5_rxq_ctrl *mlx5_rxq_ctrl_get(struct rte_eth_dev *dev, uint16_t idx);
+struct mlx5_rxq_data *mlx5_rxq_data_get(struct rte_eth_dev *dev, uint16_t idx);
 int mlx5_rxq_release(struct rte_eth_dev *dev, uint16_t idx);
 int mlx5_rxq_verify(struct rte_eth_dev *dev);
 int rxq_alloc_elts(struct mlx5_rxq_ctrl *rxq_ctrl);
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index acd77a7ecc1..17077762ecd 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -386,15 +386,13 @@ mlx5_get_rx_port_offloads(void)
 static int
 mlx5_rxq_releasable(struct rte_eth_dev *dev, uint16_t idx)
 {
-	struct mlx5_priv *priv = dev->data->dev_private;
-	struct mlx5_rxq_ctrl *rxq_ctrl;
+	struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, idx);
 
-	if (!(*priv->rxqs)[idx]) {
+	if (rxq == NULL) {
 		rte_errno = EINVAL;
 		return -rte_errno;
 	}
-	rxq_ctrl = container_of((*priv->rxqs)[idx], struct mlx5_rxq_ctrl, rxq);
-	return (__atomic_load_n(&rxq_ctrl->refcnt, __ATOMIC_RELAXED) == 1);
+	return (__atomic_load_n(&rxq->refcnt, __ATOMIC_RELAXED) == 1);
 }
 
 /* Fetches and drops all SW-owned and error CQEs to synchronize CQ. */
@@ -874,8 +872,8 @@ mlx5_rx_intr_vec_enable(struct rte_eth_dev *dev)
 	intr_handle->type = RTE_INTR_HANDLE_EXT;
 	for (i = 0; i != n; ++i) {
 		/* This rxq obj must not be released in this function. */
-		struct mlx5_rxq_ctrl *rxq_ctrl = mlx5_rxq_get(dev, i);
-		struct mlx5_rxq_obj *rxq_obj = rxq_ctrl ? rxq_ctrl->obj : NULL;
+		struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, i);
+		struct mlx5_rxq_obj *rxq_obj = rxq ? rxq->ctrl->obj : NULL;
 		int rc;
 
 		/* Skip queues that cannot request interrupts. */
@@ -885,11 +883,9 @@ mlx5_rx_intr_vec_enable(struct rte_eth_dev *dev)
 			intr_handle->intr_vec[i] =
 				RTE_INTR_VEC_RXTX_OFFSET +
 				RTE_MAX_RXTX_INTR_VEC_ID;
-			/* Decrease the rxq_ctrl's refcnt */
-			if (rxq_ctrl)
-				mlx5_rxq_release(dev, i);
 			continue;
 		}
+		mlx5_rxq_ref(dev, i);
 		if (count >= RTE_MAX_RXTX_INTR_VEC_ID) {
 			DRV_LOG(ERR,
 				"port %u too many Rx queues for interrupt"
@@ -949,7 +945,7 @@ mlx5_rx_intr_vec_disable(struct rte_eth_dev *dev)
 		 * Need to access directly the queue to release the reference
 		 * kept in mlx5_rx_intr_vec_enable().
 		 */
-		mlx5_rxq_release(dev, i);
+		mlx5_rxq_deref(dev, i);
 	}
 free:
 	rte_intr_free_epoll_fd(intr_handle);
@@ -998,19 +994,14 @@ mlx5_arm_cq(struct mlx5_rxq_data *rxq, int sq_n_rxq)
 int
 mlx5_rx_intr_enable(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 {
-	struct mlx5_rxq_ctrl *rxq_ctrl;
-
-	rxq_ctrl = mlx5_rxq_get(dev, rx_queue_id);
-	if (!rxq_ctrl)
+	struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, rx_queue_id);
+	if (!rxq)
 		goto error;
-	if (rxq_ctrl->irq) {
-		if (!rxq_ctrl->obj) {
-			mlx5_rxq_release(dev, rx_queue_id);
+	if (rxq->ctrl->irq) {
+		if (!rxq->ctrl->obj)
 			goto error;
-		}
-		mlx5_arm_cq(&rxq_ctrl->rxq, rxq_ctrl->rxq.cq_arm_sn);
+		mlx5_arm_cq(&rxq->ctrl->rxq, rxq->ctrl->rxq.cq_arm_sn);
 	}
-	mlx5_rxq_release(dev, rx_queue_id);
 	return 0;
 error:
 	rte_errno = EINVAL;
@@ -1032,23 +1023,21 @@ int
 mlx5_rx_intr_disable(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
-	struct mlx5_rxq_ctrl *rxq_ctrl;
+	struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, rx_queue_id);
 	int ret = 0;
 
-	rxq_ctrl = mlx5_rxq_get(dev, rx_queue_id);
-	if (!rxq_ctrl) {
+	if (!rxq) {
 		rte_errno = EINVAL;
 		return -rte_errno;
 	}
-	if (!rxq_ctrl->obj)
+	if (!rxq->ctrl->obj)
 		goto error;
-	if (rxq_ctrl->irq) {
-		ret = priv->obj_ops.rxq_event_get(rxq_ctrl->obj);
+	if (rxq->ctrl->irq) {
+		ret = priv->obj_ops.rxq_event_get(rxq->ctrl->obj);
 		if (ret < 0)
 			goto error;
-		rxq_ctrl->rxq.cq_arm_sn++;
+		rxq->ctrl->rxq.cq_arm_sn++;
 	}
-	mlx5_rxq_release(dev, rx_queue_id);
 	return 0;
 error:
 	/**
@@ -1059,12 +1048,9 @@ mlx5_rx_intr_disable(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 		rte_errno = errno;
 	else
 		rte_errno = EINVAL;
-	ret = rte_errno; /* Save rte_errno before cleanup. */
-	mlx5_rxq_release(dev, rx_queue_id);
-	if (ret != EAGAIN)
+	if (rte_errno != EAGAIN)
 		DRV_LOG(WARNING, "port %u unable to disable interrupt on Rx queue %d",
 			dev->data->port_id, rx_queue_id);
-	rte_errno = ret; /* Restore rte_errno. */
 	return -rte_errno;
 }
 
@@ -1611,7 +1597,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, struct mlx5_rxq_priv *rxq,
 	tmpl->rxq.uar_lock_cq = &priv->sh->uar_lock_cq;
 #endif
 	tmpl->rxq.idx = idx;
-	__atomic_fetch_add(&tmpl->refcnt, 1, __ATOMIC_RELAXED);
+	mlx5_rxq_ref(dev, idx);
 	LIST_INSERT_HEAD(&priv->rxqsctrl, tmpl, next);
 	return tmpl;
 error:
@@ -1665,11 +1651,53 @@ mlx5_rxq_hairpin_new(struct rte_eth_dev *dev, struct mlx5_rxq_priv *rxq,
 	tmpl->rxq.mr_ctrl.cache_bh = (struct mlx5_mr_btree) { 0 };
 	tmpl->hairpin_conf = *hairpin_conf;
 	tmpl->rxq.idx = idx;
-	__atomic_fetch_add(&tmpl->refcnt, 1, __ATOMIC_RELAXED);
+	mlx5_rxq_ref(dev, idx);
 	LIST_INSERT_HEAD(&priv->rxqsctrl, tmpl, next);
 	return tmpl;
 }
 
+/**
+ * Increase Rx queue reference count.
+ *
+ * @param dev
+ *   Pointer to Ethernet device.
+ * @param idx
+ *   RX queue index.
+ *
+ * @return
+ *   A pointer to the queue if it exists, NULL otherwise.
+ */
+struct mlx5_rxq_priv *
+mlx5_rxq_ref(struct rte_eth_dev *dev, uint16_t idx)
+{
+	struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, idx);
+
+	if (rxq != NULL)
+		__atomic_fetch_add(&rxq->refcnt, 1, __ATOMIC_RELAXED);
+	return rxq;
+}
+
+/**
+ * Dereference a Rx queue.
+ *
+ * @param dev
+ *   Pointer to Ethernet device.
+ * @param idx
+ *   RX queue index.
+ *
+ * @return
+ *   Updated reference count.
+ */
+uint32_t
+mlx5_rxq_deref(struct rte_eth_dev *dev, uint16_t idx)
+{
+	struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, idx);
+
+	if (rxq == NULL)
+		return 0;
+	return __atomic_sub_fetch(&rxq->refcnt, 1, __ATOMIC_RELAXED);
+}
+
 /**
  * Get a Rx queue.
  *
@@ -1681,18 +1709,52 @@ mlx5_rxq_hairpin_new(struct rte_eth_dev *dev, struct mlx5_rxq_priv *rxq,
  * @return
  *   A pointer to the queue if it exists, NULL otherwise.
  */
-struct mlx5_rxq_ctrl *
+struct mlx5_rxq_priv *
 mlx5_rxq_get(struct rte_eth_dev *dev, uint16_t idx)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
-	struct mlx5_rxq_data *rxq_data = (*priv->rxqs)[idx];
-	struct mlx5_rxq_ctrl *rxq_ctrl = NULL;
 
-	if (rxq_data) {
-		rxq_ctrl = container_of(rxq_data, struct mlx5_rxq_ctrl, rxq);
-		__atomic_fetch_add(&rxq_ctrl->refcnt, 1, __ATOMIC_RELAXED);
-	}
-	return rxq_ctrl;
+	if (priv->rxq_privs == NULL)
+		return NULL;
+	return (*priv->rxq_privs)[idx];
+}
+
+/**
+ * Get Rx queue shareable control.
+ *
+ * @param dev
+ *   Pointer to Ethernet device.
+ * @param idx
+ *   RX queue index.
+ *
+ * @return
+ *   A pointer to the queue control if it exists, NULL otherwise.
+ */
+struct mlx5_rxq_ctrl *
+mlx5_rxq_ctrl_get(struct rte_eth_dev *dev, uint16_t idx)
+{
+	struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, idx);
+
+	return rxq == NULL ? NULL : rxq->ctrl;
+}
+
+/**
+ * Get Rx queue shareable data.
+ *
+ * @param dev
+ *   Pointer to Ethernet device.
+ * @param idx
+ *   RX queue index.
+ *
+ * @return
+ *   A pointer to the queue data if it exists, NULL otherwise.
+ */
+struct mlx5_rxq_data *
+mlx5_rxq_data_get(struct rte_eth_dev *dev, uint16_t idx)
+{
+	struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, idx);
+
+	return rxq == NULL ? NULL : &rxq->ctrl->rxq;
 }
 
 /**
@@ -1710,13 +1772,12 @@ int
 mlx5_rxq_release(struct rte_eth_dev *dev, uint16_t idx)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
-	struct mlx5_rxq_ctrl *rxq_ctrl;
-	struct mlx5_rxq_priv *rxq = (*priv->rxq_privs)[idx];
+	struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, idx);
+	struct mlx5_rxq_ctrl *rxq_ctrl = rxq->ctrl;
 
 	if (priv->rxqs == NULL || (*priv->rxqs)[idx] == NULL)
 		return 0;
-	rxq_ctrl = container_of((*priv->rxqs)[idx], struct mlx5_rxq_ctrl, rxq);
-	if (__atomic_sub_fetch(&rxq_ctrl->refcnt, 1, __ATOMIC_RELAXED) > 1)
+	if (mlx5_rxq_deref(dev, idx) > 1)
 		return 1;
 	if (rxq_ctrl->obj) {
 		priv->obj_ops.rxq_obj_release(rxq_ctrl->obj);
@@ -1728,7 +1789,7 @@ mlx5_rxq_release(struct rte_eth_dev *dev, uint16_t idx)
 		rxq_free_elts(rxq_ctrl);
 		dev->data->rx_queue_state[idx] = RTE_ETH_QUEUE_STATE_STOPPED;
 	}
-	if (!__atomic_load_n(&rxq_ctrl->refcnt, __ATOMIC_RELAXED)) {
+	if (!__atomic_load_n(&rxq->refcnt, __ATOMIC_RELAXED)) {
 		if (rxq_ctrl->type == MLX5_RXQ_TYPE_STANDARD) {
 			mlx5_mr_btree_free(&rxq_ctrl->rxq.mr_ctrl.cache_bh);
 			mlx5_mprq_free_mp(dev, rxq_ctrl);
@@ -1908,7 +1969,7 @@ mlx5_ind_table_obj_release(struct rte_eth_dev *dev,
 		return 1;
 	priv->obj_ops.ind_table_destroy(ind_tbl);
 	for (i = 0; i != ind_tbl->queues_n; ++i)
-		claim_nonzero(mlx5_rxq_release(dev, ind_tbl->queues[i]));
+		claim_nonzero(mlx5_rxq_deref(dev, ind_tbl->queues[i]));
 	mlx5_free(ind_tbl);
 	return 0;
 }
@@ -1965,7 +2026,7 @@ mlx5_ind_table_obj_setup(struct rte_eth_dev *dev,
 			       log2above(priv->config.ind_table_max_size);
 
 	for (i = 0; i != queues_n; ++i) {
-		if (!mlx5_rxq_get(dev, queues[i])) {
+		if (mlx5_rxq_ref(dev, queues[i]) == NULL) {
 			ret = -rte_errno;
 			goto error;
 		}
@@ -1978,7 +2039,7 @@ mlx5_ind_table_obj_setup(struct rte_eth_dev *dev,
 error:
 	err = rte_errno;
 	for (j = 0; j < i; j++)
-		mlx5_rxq_release(dev, ind_tbl->queues[j]);
+		mlx5_rxq_deref(dev, ind_tbl->queues[j]);
 	rte_errno = err;
 	DRV_LOG(DEBUG, "Port %u cannot setup indirection table.",
 		dev->data->port_id);
@@ -2074,7 +2135,7 @@ mlx5_ind_table_obj_modify(struct rte_eth_dev *dev,
 			  bool standalone)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
-	unsigned int i, j;
+	unsigned int i;
 	int ret = 0, err;
 	const unsigned int n = rte_is_power_of_2(queues_n) ?
 			       log2above(queues_n) :
@@ -2094,15 +2155,11 @@ mlx5_ind_table_obj_modify(struct rte_eth_dev *dev,
 	ret = priv->obj_ops.ind_table_modify(dev, n, queues, queues_n, ind_tbl);
 	if (ret)
 		goto error;
-	for (j = 0; j < ind_tbl->queues_n; j++)
-		mlx5_rxq_release(dev, ind_tbl->queues[j]);
 	ind_tbl->queues_n = queues_n;
 	ind_tbl->queues = queues;
 	return 0;
 error:
 	err = rte_errno;
-	for (j = 0; j < i; j++)
-		mlx5_rxq_release(dev, queues[j]);
 	rte_errno = err;
 	DRV_LOG(DEBUG, "Port %u cannot setup indirection table.",
 		dev->data->port_id);
@@ -2135,7 +2192,7 @@ mlx5_ind_table_obj_attach(struct rte_eth_dev *dev,
 		return ret;
 	}
 	for (i = 0; i < ind_tbl->queues_n; i++)
-		mlx5_rxq_get(dev, ind_tbl->queues[i]);
+		mlx5_rxq_ref(dev, ind_tbl->queues[i]);
 	return 0;
 }
 
@@ -2172,7 +2229,7 @@ mlx5_ind_table_obj_detach(struct rte_eth_dev *dev,
 		return ret;
 	}
 	for (i = 0; i < ind_tbl->queues_n; i++)
-		mlx5_rxq_release(dev, ind_tbl->queues[i]);
+		mlx5_rxq_deref(dev, ind_tbl->queues[i]);
 	return ret;
 }
 
diff --git a/drivers/net/mlx5/mlx5_trigger.c b/drivers/net/mlx5/mlx5_trigger.c
index 0753dbad053..a49254c96f6 100644
--- a/drivers/net/mlx5/mlx5_trigger.c
+++ b/drivers/net/mlx5/mlx5_trigger.c
@@ -143,10 +143,12 @@ mlx5_rxq_start(struct rte_eth_dev *dev)
 	DRV_LOG(DEBUG, "Port %u device_attr.max_sge is %d.",
 		dev->data->port_id, priv->sh->device_attr.max_sge);
 	for (i = 0; i != priv->rxqs_n; ++i) {
-		struct mlx5_rxq_ctrl *rxq_ctrl = mlx5_rxq_get(dev, i);
+		struct mlx5_rxq_priv *rxq = mlx5_rxq_ref(dev, i);
+		struct mlx5_rxq_ctrl *rxq_ctrl;
 
-		if (!rxq_ctrl)
+		if (rxq == NULL)
 			continue;
+		rxq_ctrl = rxq->ctrl;
 		if (rxq_ctrl->type == MLX5_RXQ_TYPE_STANDARD) {
 			if (mlx5_rxq_mprq_enabled(&rxq_ctrl->rxq)) {
 				/* Allocate/reuse/resize mempool for MPRQ. */
@@ -215,6 +217,7 @@ mlx5_hairpin_auto_bind(struct rte_eth_dev *dev)
 	struct mlx5_devx_modify_sq_attr sq_attr = { 0 };
 	struct mlx5_devx_modify_rq_attr rq_attr = { 0 };
 	struct mlx5_txq_ctrl *txq_ctrl;
+	struct mlx5_rxq_priv *rxq;
 	struct mlx5_rxq_ctrl *rxq_ctrl;
 	struct mlx5_devx_obj *sq;
 	struct mlx5_devx_obj *rq;
@@ -259,9 +262,8 @@ mlx5_hairpin_auto_bind(struct rte_eth_dev *dev)
 			return -rte_errno;
 		}
 		sq = txq_ctrl->obj->sq;
-		rxq_ctrl = mlx5_rxq_get(dev,
-					txq_ctrl->hairpin_conf.peers[0].queue);
-		if (!rxq_ctrl) {
+		rxq = mlx5_rxq_get(dev, txq_ctrl->hairpin_conf.peers[0].queue);
+		if (rxq == NULL) {
 			mlx5_txq_release(dev, i);
 			rte_errno = EINVAL;
 			DRV_LOG(ERR, "port %u no rxq object found: %d",
@@ -269,6 +271,7 @@ mlx5_hairpin_auto_bind(struct rte_eth_dev *dev)
 				txq_ctrl->hairpin_conf.peers[0].queue);
 			return -rte_errno;
 		}
+		rxq_ctrl = rxq->ctrl;
 		if (rxq_ctrl->type != MLX5_RXQ_TYPE_HAIRPIN ||
 		    rxq_ctrl->hairpin_conf.peers[0].queue != i) {
 			rte_errno = ENOMEM;
@@ -303,12 +306,10 @@ mlx5_hairpin_auto_bind(struct rte_eth_dev *dev)
 		rxq_ctrl->hairpin_status = 1;
 		txq_ctrl->hairpin_status = 1;
 		mlx5_txq_release(dev, i);
-		mlx5_rxq_release(dev, txq_ctrl->hairpin_conf.peers[0].queue);
 	}
 	return 0;
 error:
 	mlx5_txq_release(dev, i);
-	mlx5_rxq_release(dev, txq_ctrl->hairpin_conf.peers[0].queue);
 	return -rte_errno;
 }
 
@@ -381,27 +382,26 @@ mlx5_hairpin_queue_peer_update(struct rte_eth_dev *dev, uint16_t peer_queue,
 		peer_info->manual_bind = txq_ctrl->hairpin_conf.manual_bind;
 		mlx5_txq_release(dev, peer_queue);
 	} else { /* Peer port used as ingress. */
+		struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, peer_queue);
 		struct mlx5_rxq_ctrl *rxq_ctrl;
 
-		rxq_ctrl = mlx5_rxq_get(dev, peer_queue);
-		if (rxq_ctrl == NULL) {
+		if (rxq == NULL) {
 			rte_errno = EINVAL;
 			DRV_LOG(ERR, "Failed to get port %u Rx queue %d",
 				dev->data->port_id, peer_queue);
 			return -rte_errno;
 		}
+		rxq_ctrl = rxq->ctrl;
 		if (rxq_ctrl->type != MLX5_RXQ_TYPE_HAIRPIN) {
 			rte_errno = EINVAL;
 			DRV_LOG(ERR, "port %u queue %d is not a hairpin Rxq",
 				dev->data->port_id, peer_queue);
-			mlx5_rxq_release(dev, peer_queue);
 			return -rte_errno;
 		}
 		if (rxq_ctrl->obj == NULL || rxq_ctrl->obj->rq == NULL) {
 			rte_errno = ENOMEM;
 			DRV_LOG(ERR, "port %u no Rxq object found: %d",
 				dev->data->port_id, peer_queue);
-			mlx5_rxq_release(dev, peer_queue);
 			return -rte_errno;
 		}
 		peer_info->qp_id = rxq_ctrl->obj->rq->id;
@@ -409,7 +409,6 @@ mlx5_hairpin_queue_peer_update(struct rte_eth_dev *dev, uint16_t peer_queue,
 		peer_info->peer_q = rxq_ctrl->hairpin_conf.peers[0].queue;
 		peer_info->tx_explicit = rxq_ctrl->hairpin_conf.tx_explicit;
 		peer_info->manual_bind = rxq_ctrl->hairpin_conf.manual_bind;
-		mlx5_rxq_release(dev, peer_queue);
 	}
 	return 0;
 }
@@ -508,34 +507,32 @@ mlx5_hairpin_queue_peer_bind(struct rte_eth_dev *dev, uint16_t cur_queue,
 			txq_ctrl->hairpin_status = 1;
 		mlx5_txq_release(dev, cur_queue);
 	} else {
+		struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, cur_queue);
 		struct mlx5_rxq_ctrl *rxq_ctrl;
 		struct mlx5_devx_modify_rq_attr rq_attr = { 0 };
 
-		rxq_ctrl = mlx5_rxq_get(dev, cur_queue);
-		if (rxq_ctrl == NULL) {
+		if (rxq == NULL) {
 			rte_errno = EINVAL;
 			DRV_LOG(ERR, "Failed to get port %u Rx queue %d",
 				dev->data->port_id, cur_queue);
 			return -rte_errno;
 		}
+		rxq_ctrl = rxq->ctrl;
 		if (rxq_ctrl->type != MLX5_RXQ_TYPE_HAIRPIN) {
 			rte_errno = EINVAL;
 			DRV_LOG(ERR, "port %u queue %d not a hairpin Rxq",
 				dev->data->port_id, cur_queue);
-			mlx5_rxq_release(dev, cur_queue);
 			return -rte_errno;
 		}
 		if (rxq_ctrl->obj == NULL || rxq_ctrl->obj->rq == NULL) {
 			rte_errno = ENOMEM;
 			DRV_LOG(ERR, "port %u no Rxq object found: %d",
 				dev->data->port_id, cur_queue);
-			mlx5_rxq_release(dev, cur_queue);
 			return -rte_errno;
 		}
 		if (rxq_ctrl->hairpin_status != 0) {
 			DRV_LOG(DEBUG, "port %u Rx queue %d is already bound",
 				dev->data->port_id, cur_queue);
-			mlx5_rxq_release(dev, cur_queue);
 			return 0;
 		}
 		if (peer_info->tx_explicit !=
@@ -543,7 +540,6 @@ mlx5_hairpin_queue_peer_bind(struct rte_eth_dev *dev, uint16_t cur_queue,
 			rte_errno = EINVAL;
 			DRV_LOG(ERR, "port %u Rx queue %d and peer Tx rule mode"
 				" mismatch", dev->data->port_id, cur_queue);
-			mlx5_rxq_release(dev, cur_queue);
 			return -rte_errno;
 		}
 		if (peer_info->manual_bind !=
@@ -551,7 +547,6 @@ mlx5_hairpin_queue_peer_bind(struct rte_eth_dev *dev, uint16_t cur_queue,
 			rte_errno = EINVAL;
 			DRV_LOG(ERR, "port %u Rx queue %d and peer binding mode"
 				" mismatch", dev->data->port_id, cur_queue);
-			mlx5_rxq_release(dev, cur_queue);
 			return -rte_errno;
 		}
 		rq_attr.state = MLX5_SQC_STATE_RDY;
@@ -561,7 +556,6 @@ mlx5_hairpin_queue_peer_bind(struct rte_eth_dev *dev, uint16_t cur_queue,
 		ret = mlx5_devx_cmd_modify_rq(rxq_ctrl->obj->rq, &rq_attr);
 		if (ret == 0)
 			rxq_ctrl->hairpin_status = 1;
-		mlx5_rxq_release(dev, cur_queue);
 	}
 	return ret;
 }
@@ -626,34 +620,32 @@ mlx5_hairpin_queue_peer_unbind(struct rte_eth_dev *dev, uint16_t cur_queue,
 			txq_ctrl->hairpin_status = 0;
 		mlx5_txq_release(dev, cur_queue);
 	} else {
+		struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, cur_queue);
 		struct mlx5_rxq_ctrl *rxq_ctrl;
 		struct mlx5_devx_modify_rq_attr rq_attr = { 0 };
 
-		rxq_ctrl = mlx5_rxq_get(dev, cur_queue);
-		if (rxq_ctrl == NULL) {
+		if (rxq == NULL) {
 			rte_errno = EINVAL;
 			DRV_LOG(ERR, "Failed to get port %u Rx queue %d",
 				dev->data->port_id, cur_queue);
 			return -rte_errno;
 		}
+		rxq_ctrl = rxq->ctrl;
 		if (rxq_ctrl->type != MLX5_RXQ_TYPE_HAIRPIN) {
 			rte_errno = EINVAL;
 			DRV_LOG(ERR, "port %u queue %d not a hairpin Rxq",
 				dev->data->port_id, cur_queue);
-			mlx5_rxq_release(dev, cur_queue);
 			return -rte_errno;
 		}
 		if (rxq_ctrl->hairpin_status == 0) {
 			DRV_LOG(DEBUG, "port %u Rx queue %d is already unbound",
 				dev->data->port_id, cur_queue);
-			mlx5_rxq_release(dev, cur_queue);
 			return 0;
 		}
 		if (rxq_ctrl->obj == NULL || rxq_ctrl->obj->rq == NULL) {
 			rte_errno = ENOMEM;
 			DRV_LOG(ERR, "port %u no Rxq object found: %d",
 				dev->data->port_id, cur_queue);
-			mlx5_rxq_release(dev, cur_queue);
 			return -rte_errno;
 		}
 		rq_attr.state = MLX5_SQC_STATE_RST;
@@ -661,7 +653,6 @@ mlx5_hairpin_queue_peer_unbind(struct rte_eth_dev *dev, uint16_t cur_queue,
 		ret = mlx5_devx_cmd_modify_rq(rxq_ctrl->obj->rq, &rq_attr);
 		if (ret == 0)
 			rxq_ctrl->hairpin_status = 0;
-		mlx5_rxq_release(dev, cur_queue);
 	}
 	return ret;
 }
@@ -963,7 +954,6 @@ mlx5_hairpin_get_peer_ports(struct rte_eth_dev *dev, uint16_t *peer_ports,
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
 	struct mlx5_txq_ctrl *txq_ctrl;
-	struct mlx5_rxq_ctrl *rxq_ctrl;
 	uint32_t i;
 	uint16_t pp;
 	uint32_t bits[(RTE_MAX_ETHPORTS + 31) / 32] = {0};
@@ -992,24 +982,23 @@ mlx5_hairpin_get_peer_ports(struct rte_eth_dev *dev, uint16_t *peer_ports,
 		}
 	} else {
 		for (i = 0; i < priv->rxqs_n; i++) {
-			rxq_ctrl = mlx5_rxq_get(dev, i);
-			if (!rxq_ctrl)
+			struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, i);
+			struct mlx5_rxq_ctrl *rxq_ctrl;
+
+			if (rxq == NULL)
 				continue;
-			if (rxq_ctrl->type != MLX5_RXQ_TYPE_HAIRPIN) {
-				mlx5_rxq_release(dev, i);
+			rxq_ctrl = rxq->ctrl;
+			if (rxq_ctrl->type != MLX5_RXQ_TYPE_HAIRPIN)
 				continue;
-			}
 			pp = rxq_ctrl->hairpin_conf.peers[0].port;
 			if (pp >= RTE_MAX_ETHPORTS) {
 				rte_errno = ERANGE;
-				mlx5_rxq_release(dev, i);
 				DRV_LOG(ERR, "port %hu queue %u peer port "
 					"out of range %hu",
 					priv->dev_data->port_id, i, pp);
 				return -rte_errno;
 			}
 			bits[pp / 32] |= 1 << (pp % 32);
-			mlx5_rxq_release(dev, i);
 		}
 	}
 	for (i = 0; i < RTE_MAX_ETHPORTS; i++) {
-- 
2.33.0
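
The mlx5_rxq_deref() helper above, paired with mlx5_rxq_ref(), is
plain reference counting with relaxed atomics: take a reference on
attach (queue setup, indirection tables), drop it on detach, and
release resources only when the last reference goes. A standalone
sketch of the same pattern (the toy_* names are illustrative, not
driver code):

#include <stdint.h>
#include <stdio.h>

struct toy_rxq {
	uint32_t refcnt;
};

static struct toy_rxq *
toy_rxq_ref(struct toy_rxq *rxq)
{
	__atomic_fetch_add(&rxq->refcnt, 1, __ATOMIC_RELAXED);
	return rxq;
}

/* Like mlx5_rxq_deref(): returns the count remaining after the drop. */
static uint32_t
toy_rxq_deref(struct toy_rxq *rxq)
{
	return __atomic_sub_fetch(&rxq->refcnt, 1, __ATOMIC_RELAXED);
}

int
main(void)
{
	struct toy_rxq q = { .refcnt = 0 };

	toy_rxq_ref(&q);		/* Queue setup takes a reference. */
	toy_rxq_ref(&q);		/* An indirection table takes another. */
	if (toy_rxq_deref(&q) > 0)	/* Table detach: queue still in use. */
		printf("still referenced\n");
	if (toy_rxq_deref(&q) == 0)	/* Last owner gone: safe to release. */
		printf("released\n");
	return 0;
}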


^ permalink raw reply	[flat|nested] 29+ messages in thread

* [dpdk-dev] [PATCH v2 08/13] net/mlx5: move Rx queue hairpin info to private data
  2021-10-16  9:12 ` [dpdk-dev] [PATCH v2 00/13] " Xueming Li
                     ` (6 preceding siblings ...)
  2021-10-16  9:12   ` [dpdk-dev] [PATCH v2 07/13] net/mlx5: move Rx queue reference count Xueming Li
@ 2021-10-16  9:12   ` Xueming Li
  2021-10-16  9:12   ` [dpdk-dev] [PATCH v2 09/13] net/mlx5: remove port info from shareable Rx queue Xueming Li
                     ` (5 subsequent siblings)
  13 siblings, 0 replies; 29+ messages in thread
From: Xueming Li @ 2021-10-16  9:12 UTC (permalink / raw)
  To: dev; +Cc: xuemingl, Lior Margalit, Matan Azrad, Viacheslav Ovsiienko

Hairpin info of an Rx queue can't be shared, so move it to the
private queue data.
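
Once a single mlx5_rxq_ctrl can be owned by several queues, state
that is inherently per queue, such as the hairpin peer and binding
status, has to live in mlx5_rxq_priv instead. A toy sketch of the
resulting split (the toy_* names are made up for illustration, not
driver code):

#include <stdint.h>
#include <stdio.h>

struct toy_hairpin_conf {
	uint16_t peer_queue;
};

struct toy_rxq_ctrl {		/* Shareable between owner queues. */
	uint32_t wqn;
};

struct toy_rxq_priv {		/* Strictly one per queue. */
	uint16_t idx;
	struct toy_rxq_ctrl *ctrl;
	struct toy_hairpin_conf hairpin_conf;	/* Moved out of the ctrl. */
	uint32_t hairpin_status;
};

int
main(void)
{
	struct toy_rxq_ctrl ctrl = { .wqn = 7 };
	/* Two owners of one ctrl can still peer with different Tx queues. */
	struct toy_rxq_priv q0 = { .idx = 0, .ctrl = &ctrl,
				   .hairpin_conf = { .peer_queue = 0 } };
	struct toy_rxq_priv q1 = { .idx = 1, .ctrl = &ctrl,
				   .hairpin_conf = { .peer_queue = 1 } };

	printf("q%u peers Tx %u, q%u peers Tx %u\n", q0.idx,
	       q0.hairpin_conf.peer_queue, q1.idx,
	       q1.hairpin_conf.peer_queue);
	return 0;
}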

Signed-off-by: Xueming Li <xuemingl@nvidia.com>
---
 drivers/net/mlx5/mlx5_rx.h      |  4 ++--
 drivers/net/mlx5/mlx5_rxq.c     | 13 +++++--------
 drivers/net/mlx5/mlx5_trigger.c | 24 ++++++++++++------------
 3 files changed, 19 insertions(+), 22 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_rx.h b/drivers/net/mlx5/mlx5_rx.h
index fe19414c130..2ed544556f5 100644
--- a/drivers/net/mlx5/mlx5_rx.h
+++ b/drivers/net/mlx5/mlx5_rx.h
@@ -171,8 +171,6 @@ struct mlx5_rxq_ctrl {
 	uint32_t flow_tunnels_n[MLX5_FLOW_TUNNEL]; /* Tunnels counters. */
 	uint32_t wqn; /* WQ number. */
 	uint16_t dump_file_n; /* Number of dump files. */
-	struct rte_eth_hairpin_conf hairpin_conf; /* Hairpin configuration. */
-	uint32_t hairpin_status; /* Hairpin binding status. */
 };
 
 /* RX queue private data. */
@@ -182,6 +180,8 @@ struct mlx5_rxq_priv {
 	struct mlx5_rxq_ctrl *ctrl; /* Shared Rx Queue. */
 	LIST_ENTRY(mlx5_rxq_priv) owner_entry; /* Entry in shared rxq_ctrl. */
 	struct mlx5_priv *priv; /* Back pointer to private data. */
+	struct rte_eth_hairpin_conf hairpin_conf; /* Hairpin configuration. */
+	uint32_t hairpin_status; /* Hairpin binding status. */
 };
 
 /* mlx5_rxq.c */
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index 17077762ecd..2b9ab7b3fc4 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -1649,8 +1649,8 @@ mlx5_rxq_hairpin_new(struct rte_eth_dev *dev, struct mlx5_rxq_priv *rxq,
 	tmpl->rxq.elts_n = log2above(desc);
 	tmpl->rxq.elts = NULL;
 	tmpl->rxq.mr_ctrl.cache_bh = (struct mlx5_mr_btree) { 0 };
-	tmpl->hairpin_conf = *hairpin_conf;
 	tmpl->rxq.idx = idx;
+	rxq->hairpin_conf = *hairpin_conf;
 	mlx5_rxq_ref(dev, idx);
 	LIST_INSERT_HEAD(&priv->rxqsctrl, tmpl, next);
 	return tmpl;
@@ -1869,14 +1869,11 @@ const struct rte_eth_hairpin_conf *
 mlx5_rxq_get_hairpin_conf(struct rte_eth_dev *dev, uint16_t idx)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
-	struct mlx5_rxq_ctrl *rxq_ctrl = NULL;
+	struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, idx);
 
-	if (idx < priv->rxqs_n && (*priv->rxqs)[idx]) {
-		rxq_ctrl = container_of((*priv->rxqs)[idx],
-					struct mlx5_rxq_ctrl,
-					rxq);
-		if (rxq_ctrl->type == MLX5_RXQ_TYPE_HAIRPIN)
-			return &rxq_ctrl->hairpin_conf;
+	if (idx < priv->rxqs_n && rxq != NULL) {
+		if (rxq->ctrl->type == MLX5_RXQ_TYPE_HAIRPIN)
+			return &rxq->hairpin_conf;
 	}
 	return NULL;
 }
diff --git a/drivers/net/mlx5/mlx5_trigger.c b/drivers/net/mlx5/mlx5_trigger.c
index a49254c96f6..f376f4d6fc4 100644
--- a/drivers/net/mlx5/mlx5_trigger.c
+++ b/drivers/net/mlx5/mlx5_trigger.c
@@ -273,7 +273,7 @@ mlx5_hairpin_auto_bind(struct rte_eth_dev *dev)
 		}
 		rxq_ctrl = rxq->ctrl;
 		if (rxq_ctrl->type != MLX5_RXQ_TYPE_HAIRPIN ||
-		    rxq_ctrl->hairpin_conf.peers[0].queue != i) {
+		    rxq->hairpin_conf.peers[0].queue != i) {
 			rte_errno = ENOMEM;
 			DRV_LOG(ERR, "port %u Tx queue %d can't be binded to "
 				"Rx queue %d", dev->data->port_id,
@@ -303,7 +303,7 @@ mlx5_hairpin_auto_bind(struct rte_eth_dev *dev)
 		if (ret)
 			goto error;
 		/* Qs with auto-bind will be destroyed directly. */
-		rxq_ctrl->hairpin_status = 1;
+		rxq->hairpin_status = 1;
 		txq_ctrl->hairpin_status = 1;
 		mlx5_txq_release(dev, i);
 	}
@@ -406,9 +406,9 @@ mlx5_hairpin_queue_peer_update(struct rte_eth_dev *dev, uint16_t peer_queue,
 		}
 		peer_info->qp_id = rxq_ctrl->obj->rq->id;
 		peer_info->vhca_id = priv->config.hca_attr.vhca_id;
-		peer_info->peer_q = rxq_ctrl->hairpin_conf.peers[0].queue;
-		peer_info->tx_explicit = rxq_ctrl->hairpin_conf.tx_explicit;
-		peer_info->manual_bind = rxq_ctrl->hairpin_conf.manual_bind;
+		peer_info->peer_q = rxq->hairpin_conf.peers[0].queue;
+		peer_info->tx_explicit = rxq->hairpin_conf.tx_explicit;
+		peer_info->manual_bind = rxq->hairpin_conf.manual_bind;
 	}
 	return 0;
 }
@@ -530,20 +530,20 @@ mlx5_hairpin_queue_peer_bind(struct rte_eth_dev *dev, uint16_t cur_queue,
 				dev->data->port_id, cur_queue);
 			return -rte_errno;
 		}
-		if (rxq_ctrl->hairpin_status != 0) {
+		if (rxq->hairpin_status != 0) {
 			DRV_LOG(DEBUG, "port %u Rx queue %d is already bound",
 				dev->data->port_id, cur_queue);
 			return 0;
 		}
 		if (peer_info->tx_explicit !=
-		    rxq_ctrl->hairpin_conf.tx_explicit) {
+		    rxq->hairpin_conf.tx_explicit) {
 			rte_errno = EINVAL;
 			DRV_LOG(ERR, "port %u Rx queue %d and peer Tx rule mode"
 				" mismatch", dev->data->port_id, cur_queue);
 			return -rte_errno;
 		}
 		if (peer_info->manual_bind !=
-		    rxq_ctrl->hairpin_conf.manual_bind) {
+		    rxq->hairpin_conf.manual_bind) {
 			rte_errno = EINVAL;
 			DRV_LOG(ERR, "port %u Rx queue %d and peer binding mode"
 				" mismatch", dev->data->port_id, cur_queue);
@@ -555,7 +555,7 @@ mlx5_hairpin_queue_peer_bind(struct rte_eth_dev *dev, uint16_t cur_queue,
 		rq_attr.hairpin_peer_vhca = peer_info->vhca_id;
 		ret = mlx5_devx_cmd_modify_rq(rxq_ctrl->obj->rq, &rq_attr);
 		if (ret == 0)
-			rxq_ctrl->hairpin_status = 1;
+			rxq->hairpin_status = 1;
 	}
 	return ret;
 }
@@ -637,7 +637,7 @@ mlx5_hairpin_queue_peer_unbind(struct rte_eth_dev *dev, uint16_t cur_queue,
 				dev->data->port_id, cur_queue);
 			return -rte_errno;
 		}
-		if (rxq_ctrl->hairpin_status == 0) {
+		if (rxq->hairpin_status == 0) {
 			DRV_LOG(DEBUG, "port %u Rx queue %d is already unbound",
 				dev->data->port_id, cur_queue);
 			return 0;
@@ -652,7 +652,7 @@ mlx5_hairpin_queue_peer_unbind(struct rte_eth_dev *dev, uint16_t cur_queue,
 		rq_attr.rq_state = MLX5_SQC_STATE_RST;
 		ret = mlx5_devx_cmd_modify_rq(rxq_ctrl->obj->rq, &rq_attr);
 		if (ret == 0)
-			rxq_ctrl->hairpin_status = 0;
+			rxq->hairpin_status = 0;
 	}
 	return ret;
 }
@@ -990,7 +990,7 @@ mlx5_hairpin_get_peer_ports(struct rte_eth_dev *dev, uint16_t *peer_ports,
 			rxq_ctrl = rxq->ctrl;
 			if (rxq_ctrl->type != MLX5_RXQ_TYPE_HAIRPIN)
 				continue;
-			pp = rxq_ctrl->hairpin_conf.peers[0].port;
+			pp = rxq->hairpin_conf.peers[0].port;
 			if (pp >= RTE_MAX_ETHPORTS) {
 				rte_errno = ERANGE;
 				DRV_LOG(ERR, "port %hu queue %u peer port "
-- 
2.33.0


^ permalink raw reply	[flat|nested] 29+ messages in thread

* [dpdk-dev] [PATCH v2 09/13] net/mlx5: remove port info from shareable Rx queue
  2021-10-16  9:12 ` [dpdk-dev] [PATCH v2 00/13] " Xueming Li
                     ` (7 preceding siblings ...)
  2021-10-16  9:12   ` [dpdk-dev] [PATCH v2 08/13] net/mlx5: move Rx queue hairpin info to private data Xueming Li
@ 2021-10-16  9:12   ` Xueming Li
  2021-10-16  9:12   ` [dpdk-dev] [PATCH v2 10/13] net/mlx5: move Rx queue DevX resource Xueming Li
                     ` (4 subsequent siblings)
  13 siblings, 0 replies; 29+ messages in thread
From: Xueming Li @ 2021-10-16  9:12 UTC (permalink / raw)
  To: dev; +Cc: xuemingl, Lior Margalit, Matan Azrad, Viacheslav Ovsiienko

To prepare for shared Rx queue, remove the port info from the
shareable Rx queue control.
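
With the back pointer removed, the owning port is recovered through
the control block's owners list, which is what the new RXQ_PORT()
macro does via LIST_FIRST(). A self-contained sketch of that lookup
(the toy_* types are assumptions; the sys/queue.h macros match the
driver's usage):

#include <sys/queue.h>
#include <stdio.h>
#include <stdint.h>

struct toy_priv {
	uint16_t port_id;
};

struct toy_rxq_priv {
	struct toy_priv *priv;
	LIST_ENTRY(toy_rxq_priv) owner_entry;
};

struct toy_rxq_ctrl {
	LIST_HEAD(toy_owners, toy_rxq_priv) owners;
};

/* Mirrors RXQ_PORT(): any owner can resolve the port; take the first. */
#define TOY_RXQ_PORT(ctrl) (LIST_FIRST(&(ctrl)->owners)->priv)

int
main(void)
{
	struct toy_priv priv = { .port_id = 3 };
	struct toy_rxq_priv rxq = { .priv = &priv };
	struct toy_rxq_ctrl ctrl;

	LIST_INIT(&ctrl.owners);
	LIST_INSERT_HEAD(&ctrl.owners, &rxq, owner_entry);
	printf("port %u\n", TOY_RXQ_PORT(&ctrl)->port_id);
	return 0;
}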

Signed-off-by: Xueming Li <xuemingl@nvidia.com>
---
 drivers/net/mlx5/mlx5_devx.c     |  2 +-
 drivers/net/mlx5/mlx5_mr.c       |  7 ++++---
 drivers/net/mlx5/mlx5_rx.c       | 15 +++------------
 drivers/net/mlx5/mlx5_rx.h       |  5 ++++-
 drivers/net/mlx5/mlx5_rxq.c      | 10 ++++------
 drivers/net/mlx5/mlx5_rxtx_vec.c |  2 +-
 6 files changed, 17 insertions(+), 24 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_devx.c b/drivers/net/mlx5/mlx5_devx.c
index c65a6e5d4e7..8411e6e6418 100644
--- a/drivers/net/mlx5/mlx5_devx.c
+++ b/drivers/net/mlx5/mlx5_devx.c
@@ -916,7 +916,7 @@ mlx5_rxq_devx_obj_drop_create(struct rte_eth_dev *dev)
 	}
 	rxq->rxq_ctrl = rxq_ctrl;
 	rxq_ctrl->type = MLX5_RXQ_TYPE_STANDARD;
-	rxq_ctrl->priv = priv;
+	rxq_ctrl->sh = priv->sh;
 	rxq_ctrl->obj = rxq;
 	rxq_data = &rxq_ctrl->rxq;
 	/* Create CQ using DevX API. */
diff --git a/drivers/net/mlx5/mlx5_mr.c b/drivers/net/mlx5/mlx5_mr.c
index 44afda731fc..8d48b4614ee 100644
--- a/drivers/net/mlx5/mlx5_mr.c
+++ b/drivers/net/mlx5/mlx5_mr.c
@@ -82,10 +82,11 @@ mlx5_rx_addr2mr_bh(struct mlx5_rxq_data *rxq, uintptr_t addr)
 	struct mlx5_rxq_ctrl *rxq_ctrl =
 		container_of(rxq, struct mlx5_rxq_ctrl, rxq);
 	struct mlx5_mr_ctrl *mr_ctrl = &rxq->mr_ctrl;
-	struct mlx5_priv *priv = rxq_ctrl->priv;
+	struct mlx5_priv *priv = RXQ_PORT(rxq_ctrl);
+	struct mlx5_dev_ctx_shared *sh = rxq_ctrl->sh;
 
-	return mlx5_mr_addr2mr_bh(priv->sh->pd, &priv->mp_id,
-				  &priv->sh->share_cache, mr_ctrl, addr,
+	return mlx5_mr_addr2mr_bh(sh->pd, &priv->mp_id,
+				  &sh->share_cache, mr_ctrl, addr,
 				  priv->config.mr_ext_memseg_en);
 }
 
diff --git a/drivers/net/mlx5/mlx5_rx.c b/drivers/net/mlx5/mlx5_rx.c
index e3b1051ba46..09de26c0d39 100644
--- a/drivers/net/mlx5/mlx5_rx.c
+++ b/drivers/net/mlx5/mlx5_rx.c
@@ -118,15 +118,7 @@ int
 mlx5_rx_descriptor_status(void *rx_queue, uint16_t offset)
 {
 	struct mlx5_rxq_data *rxq = rx_queue;
-	struct mlx5_rxq_ctrl *rxq_ctrl =
-			container_of(rxq, struct mlx5_rxq_ctrl, rxq);
-	struct rte_eth_dev *dev = ETH_DEV(rxq_ctrl->priv);
 
-	if (dev->rx_pkt_burst == NULL ||
-	    dev->rx_pkt_burst == removed_rx_burst) {
-		rte_errno = ENOTSUP;
-		return -rte_errno;
-	}
 	if (offset >= (1 << rxq->cqe_n)) {
 		rte_errno = EINVAL;
 		return -rte_errno;
@@ -438,10 +430,10 @@ mlx5_rx_err_handle(struct mlx5_rxq_data *rxq, uint8_t vec)
 		sm.is_wq = 1;
 		sm.queue_id = rxq->idx;
 		sm.state = IBV_WQS_RESET;
-		if (mlx5_queue_state_modify(ETH_DEV(rxq_ctrl->priv), &sm))
+		if (mlx5_queue_state_modify(RXQ_DEV(rxq_ctrl), &sm))
 			return -1;
 		if (rxq_ctrl->dump_file_n <
-		    rxq_ctrl->priv->config.max_dump_files_num) {
+		    RXQ_PORT(rxq_ctrl)->config.max_dump_files_num) {
 			MKSTR(err_str, "Unexpected CQE error syndrome "
 			      "0x%02x CQN = %u RQN = %u wqe_counter = %u"
 			      " rq_ci = %u cq_ci = %u", u.err_cqe->syndrome,
@@ -478,8 +470,7 @@ mlx5_rx_err_handle(struct mlx5_rxq_data *rxq, uint8_t vec)
 			sm.is_wq = 1;
 			sm.queue_id = rxq->idx;
 			sm.state = IBV_WQS_RDY;
-			if (mlx5_queue_state_modify(ETH_DEV(rxq_ctrl->priv),
-						    &sm))
+			if (mlx5_queue_state_modify(RXQ_DEV(rxq_ctrl), &sm))
 				return -1;
 			if (vec) {
 				const uint32_t elts_n =
diff --git a/drivers/net/mlx5/mlx5_rx.h b/drivers/net/mlx5/mlx5_rx.h
index 2ed544556f5..4eed4176324 100644
--- a/drivers/net/mlx5/mlx5_rx.h
+++ b/drivers/net/mlx5/mlx5_rx.h
@@ -23,6 +23,10 @@
 /* Support tunnel matching. */
 #define MLX5_FLOW_TUNNEL 10
 
+#define RXQ_PORT(rxq_ctrl) LIST_FIRST(&(rxq_ctrl)->owners)->priv
+#define RXQ_DEV(rxq_ctrl) ETH_DEV(RXQ_PORT(rxq_ctrl))
+#define RXQ_PORT_ID(rxq_ctrl) PORT_ID(RXQ_PORT(rxq_ctrl))
+
 struct mlx5_rxq_stats {
 #ifdef MLX5_PMD_SOFT_COUNTERS
 	uint64_t ipackets; /**< Total of successfully received packets. */
@@ -163,7 +167,6 @@ struct mlx5_rxq_ctrl {
 	LIST_HEAD(priv, mlx5_rxq_priv) owners; /* Owner rxq list. */
 	struct mlx5_rxq_obj *obj; /* Verbs/DevX elements. */
 	struct mlx5_dev_ctx_shared *sh; /* Shared context. */
-	struct mlx5_priv *priv; /* Back pointer to private data. */
 	enum mlx5_rxq_type type; /* Rxq type. */
 	unsigned int socket; /* CPU socket ID for allocations. */
 	unsigned int irq:1; /* Whether IRQ is enabled. */
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index 2b9ab7b3fc4..fc931939721 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -148,7 +148,7 @@ rxq_alloc_elts_sprq(struct mlx5_rxq_ctrl *rxq_ctrl)
 		buf = rte_pktmbuf_alloc(seg->mp);
 		if (buf == NULL) {
 			DRV_LOG(ERR, "port %u empty mbuf pool",
-				PORT_ID(rxq_ctrl->priv));
+				RXQ_PORT_ID(rxq_ctrl));
 			rte_errno = ENOMEM;
 			goto error;
 		}
@@ -195,7 +195,7 @@ rxq_alloc_elts_sprq(struct mlx5_rxq_ctrl *rxq_ctrl)
 	DRV_LOG(DEBUG,
 		"port %u SPRQ queue %u allocated and configured %u segments"
 		" (max %u packets)",
-		PORT_ID(rxq_ctrl->priv), rxq_ctrl->rxq.idx, elts_n,
+		RXQ_PORT_ID(rxq_ctrl), rxq_ctrl->rxq.idx, elts_n,
 		elts_n / (1 << rxq_ctrl->rxq.sges_n));
 	return 0;
 error:
@@ -207,7 +207,7 @@ rxq_alloc_elts_sprq(struct mlx5_rxq_ctrl *rxq_ctrl)
 		(*rxq_ctrl->rxq.elts)[i] = NULL;
 	}
 	DRV_LOG(DEBUG, "port %u SPRQ queue %u failed, freed everything",
-		PORT_ID(rxq_ctrl->priv), rxq_ctrl->rxq.idx);
+		RXQ_PORT_ID(rxq_ctrl), rxq_ctrl->rxq.idx);
 	rte_errno = err; /* Restore rte_errno. */
 	return -rte_errno;
 }
@@ -284,7 +284,7 @@ rxq_free_elts_sprq(struct mlx5_rxq_ctrl *rxq_ctrl)
 	uint16_t i;
 
 	DRV_LOG(DEBUG, "port %u Rx queue %u freeing %d WRs",
-		PORT_ID(rxq_ctrl->priv), rxq->idx, q_n);
+		RXQ_PORT_ID(rxq_ctrl), rxq->idx, q_n);
 	if (rxq->elts == NULL)
 		return;
 	/**
@@ -1584,7 +1584,6 @@ mlx5_rxq_new(struct rte_eth_dev *dev, struct mlx5_rxq_priv *rxq,
 		(!!(dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS));
 	tmpl->rxq.port_id = dev->data->port_id;
 	tmpl->sh = priv->sh;
-	tmpl->priv = priv;
 	tmpl->rxq.mp = rx_seg[0].mp;
 	tmpl->rxq.elts_n = log2above(desc);
 	tmpl->rxq.rq_repl_thresh =
@@ -1644,7 +1643,6 @@ mlx5_rxq_hairpin_new(struct rte_eth_dev *dev, struct mlx5_rxq_priv *rxq,
 	tmpl->rxq.rss_hash = 0;
 	tmpl->rxq.port_id = dev->data->port_id;
 	tmpl->sh = priv->sh;
-	tmpl->priv = priv;
 	tmpl->rxq.mp = NULL;
 	tmpl->rxq.elts_n = log2above(desc);
 	tmpl->rxq.elts = NULL;
diff --git a/drivers/net/mlx5/mlx5_rxtx_vec.c b/drivers/net/mlx5/mlx5_rxtx_vec.c
index ecd273e00a8..511681841ca 100644
--- a/drivers/net/mlx5/mlx5_rxtx_vec.c
+++ b/drivers/net/mlx5/mlx5_rxtx_vec.c
@@ -550,7 +550,7 @@ mlx5_rxq_check_vec_support(struct mlx5_rxq_data *rxq)
 	struct mlx5_rxq_ctrl *ctrl =
 		container_of(rxq, struct mlx5_rxq_ctrl, rxq);
 
-	if (!ctrl->priv->config.rx_vec_en || rxq->sges_n != 0)
+	if (!RXQ_PORT(ctrl)->config.rx_vec_en || rxq->sges_n != 0)
 		return -ENOTSUP;
 	if (rxq->lro)
 		return -ENOTSUP;
-- 
2.33.0


^ permalink raw reply	[flat|nested] 29+ messages in thread

* [dpdk-dev] [PATCH v2 10/13] net/mlx5: move Rx queue DevX resource
  2021-10-16  9:12 ` [dpdk-dev] [PATCH v2 00/13] " Xueming Li
                     ` (8 preceding siblings ...)
  2021-10-16  9:12   ` [dpdk-dev] [PATCH v2 09/13] net/mlx5: remove port info from shareable Rx queue Xueming Li
@ 2021-10-16  9:12   ` Xueming Li
  2021-10-16  9:12   ` [dpdk-dev] [PATCH v2 11/13] net/mlx5: remove Rx queue data list from device Xueming Li
                     ` (3 subsequent siblings)
  13 siblings, 0 replies; 29+ messages in thread
From: Xueming Li @ 2021-10-16  9:12 UTC (permalink / raw)
  To: dev
  Cc: xuemingl, Lior Margalit, Matan Azrad, Viacheslav Ovsiienko,
	Anatoly Burakov

To support shared Rx queue, move the DevX RQ, which is a per-queue
resource, to the Rx queue private data.
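
With the RQ made per queue, the HW-object callbacks in mlx5_obj_ops
can take the mlx5_rxq_priv handle directly instead of a (dev, idx)
pair, as the signature changes below show. A minimal standalone
sketch of that callback shape (the toy_* names are illustrative
only, not driver code):

#include <stdint.h>
#include <stdio.h>

struct toy_devx_rq {
	uint32_t rqn;			/* Per-queue RQ number. */
};

struct toy_rxq_priv {
	uint16_t idx;
	struct toy_devx_rq devx_rq;	/* Moved from the shared object. */
};

struct toy_obj_ops {			/* Callbacks now take the queue. */
	int (*rxq_obj_modify)(struct toy_rxq_priv *rxq, uint8_t type);
};

static int
toy_modify(struct toy_rxq_priv *rxq, uint8_t type)
{
	printf("modify RQ %u to state %u\n", rxq->devx_rq.rqn, type);
	return 0;
}

int
main(void)
{
	struct toy_obj_ops ops = { .rxq_obj_modify = toy_modify };
	struct toy_rxq_priv rxq = { .idx = 0, .devx_rq = { .rqn = 42 } };

	/* Callers no longer need (dev, idx) to reach the per-queue RQ. */
	return ops.rxq_obj_modify(&rxq, 1);
}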

Signed-off-by: Xueming Li <xuemingl@nvidia.com>
---
 drivers/net/mlx5/linux/mlx5_verbs.c | 154 +++++++++++--------
 drivers/net/mlx5/mlx5.h             |  11 +-
 drivers/net/mlx5/mlx5_devx.c        | 227 ++++++++++++++--------------
 drivers/net/mlx5/mlx5_rx.h          |   1 +
 drivers/net/mlx5/mlx5_rxq.c         |  44 +++---
 drivers/net/mlx5/mlx5_rxtx.c        |   6 +-
 drivers/net/mlx5/mlx5_trigger.c     |   2 +-
 drivers/net/mlx5/mlx5_vlan.c        |  16 +-
 8 files changed, 241 insertions(+), 220 deletions(-)

diff --git a/drivers/net/mlx5/linux/mlx5_verbs.c b/drivers/net/mlx5/linux/mlx5_verbs.c
index d4fa202ac4b..a2a9b9c1f98 100644
--- a/drivers/net/mlx5/linux/mlx5_verbs.c
+++ b/drivers/net/mlx5/linux/mlx5_verbs.c
@@ -71,13 +71,13 @@ const struct mlx5_mr_ops mlx5_mr_verbs_ops = {
 /**
  * Modify Rx WQ vlan stripping offload
  *
- * @param rxq_obj
- *   Rx queue object.
+ * @param rxq
+ *   Rx queue.
  *
  * @return 0 on success, non-0 otherwise
  */
 static int
-mlx5_rxq_obj_modify_wq_vlan_strip(struct mlx5_rxq_obj *rxq_obj, int on)
+mlx5_rxq_obj_modify_wq_vlan_strip(struct mlx5_rxq_priv *rxq, int on)
 {
 	uint16_t vlan_offloads =
 		(on ? IBV_WQ_FLAGS_CVLAN_STRIPPING : 0) |
@@ -89,14 +89,14 @@ mlx5_rxq_obj_modify_wq_vlan_strip(struct mlx5_rxq_obj *rxq_obj, int on)
 		.flags = vlan_offloads,
 	};
 
-	return mlx5_glue->modify_wq(rxq_obj->wq, &mod);
+	return mlx5_glue->modify_wq(rxq->ctrl->obj->wq, &mod);
 }
 
 /**
  * Modifies the attributes for the specified WQ.
  *
- * @param rxq_obj
- *   Verbs Rx queue object.
+ * @param rxq
+ *   Verbs Rx queue.
  * @param type
  *   Type of change queue state.
  *
@@ -104,14 +104,14 @@ mlx5_rxq_obj_modify_wq_vlan_strip(struct mlx5_rxq_obj *rxq_obj, int on)
  *   0 on success, a negative errno value otherwise and rte_errno is set.
  */
 static int
-mlx5_ibv_modify_wq(struct mlx5_rxq_obj *rxq_obj, uint8_t type)
+mlx5_ibv_modify_wq(struct mlx5_rxq_priv *rxq, uint8_t type)
 {
 	struct ibv_wq_attr mod = {
 		.attr_mask = IBV_WQ_ATTR_STATE,
 		.wq_state = (enum ibv_wq_state)type,
 	};
 
-	return mlx5_glue->modify_wq(rxq_obj->wq, &mod);
+	return mlx5_glue->modify_wq(rxq->ctrl->obj->wq, &mod);
 }
 
 /**
@@ -181,21 +181,18 @@ mlx5_ibv_modify_qp(struct mlx5_txq_obj *obj, enum mlx5_txq_modify_type type,
 /**
  * Create a CQ Verbs object.
  *
- * @param dev
- *   Pointer to Ethernet device.
- * @param idx
- *   Queue index in DPDK Rx queue array.
+ * @param rxq
+ *   Pointer to Rx queue.
  *
  * @return
  *   The Verbs CQ object initialized, NULL otherwise and rte_errno is set.
  */
 static struct ibv_cq *
-mlx5_rxq_ibv_cq_create(struct rte_eth_dev *dev, uint16_t idx)
+mlx5_rxq_ibv_cq_create(struct mlx5_rxq_priv *rxq)
 {
-	struct mlx5_priv *priv = dev->data->dev_private;
-	struct mlx5_rxq_data *rxq_data = (*priv->rxqs)[idx];
-	struct mlx5_rxq_ctrl *rxq_ctrl =
-		container_of(rxq_data, struct mlx5_rxq_ctrl, rxq);
+	struct mlx5_priv *priv = rxq->priv;
+	struct mlx5_rxq_ctrl *rxq_ctrl = rxq->ctrl;
+	struct mlx5_rxq_data *rxq_data = &rxq_ctrl->rxq;
 	struct mlx5_rxq_obj *rxq_obj = rxq_ctrl->obj;
 	unsigned int cqe_n = mlx5_rxq_cqe_num(rxq_data);
 	struct {
@@ -241,7 +238,7 @@ mlx5_rxq_ibv_cq_create(struct rte_eth_dev *dev, uint16_t idx)
 		DRV_LOG(DEBUG,
 			"Port %u Rx CQE compression is disabled for HW"
 			" timestamp.",
-			dev->data->port_id);
+			priv->dev_data->port_id);
 	}
 #ifdef HAVE_IBV_MLX5_MOD_CQE_128B_PAD
 	if (RTE_CACHE_LINE_SIZE == 128) {
@@ -257,21 +254,18 @@ mlx5_rxq_ibv_cq_create(struct rte_eth_dev *dev, uint16_t idx)
 /**
  * Create a WQ Verbs object.
  *
- * @param dev
- *   Pointer to Ethernet device.
- * @param idx
- *   Queue index in DPDK Rx queue array.
+ * @param rxq
+ *   Pointer to Rx queue.
  *
  * @return
  *   The Verbs WQ object initialized, NULL otherwise and rte_errno is set.
  */
 static struct ibv_wq *
-mlx5_rxq_ibv_wq_create(struct rte_eth_dev *dev, uint16_t idx)
+mlx5_rxq_ibv_wq_create(struct mlx5_rxq_priv *rxq)
 {
-	struct mlx5_priv *priv = dev->data->dev_private;
-	struct mlx5_rxq_data *rxq_data = (*priv->rxqs)[idx];
-	struct mlx5_rxq_ctrl *rxq_ctrl =
-		container_of(rxq_data, struct mlx5_rxq_ctrl, rxq);
+	struct mlx5_priv *priv = rxq->priv;
+	struct mlx5_rxq_ctrl *rxq_ctrl = rxq->ctrl;
+	struct mlx5_rxq_data *rxq_data = &rxq_ctrl->rxq;
 	struct mlx5_rxq_obj *rxq_obj = rxq_ctrl->obj;
 	unsigned int wqe_n = 1 << rxq_data->elts_n;
 	struct {
@@ -338,7 +332,7 @@ mlx5_rxq_ibv_wq_create(struct rte_eth_dev *dev, uint16_t idx)
 			DRV_LOG(ERR,
 				"Port %u Rx queue %u requested %u*%u but got"
 				" %u*%u WRs*SGEs.",
-				dev->data->port_id, idx,
+				priv->dev_data->port_id, rxq->idx,
 				wqe_n >> rxq_data->sges_n,
 				(1 << rxq_data->sges_n),
 				wq_attr.ibv.max_wr, wq_attr.ibv.max_sge);
@@ -353,21 +347,20 @@ mlx5_rxq_ibv_wq_create(struct rte_eth_dev *dev, uint16_t idx)
 /**
  * Create the Rx queue Verbs object.
  *
- * @param dev
- *   Pointer to Ethernet device.
- * @param idx
- *   Queue index in DPDK Rx queue array.
+ * @param rxq
+ *   Pointer to Rx queue.
  *
  * @return
  *   0 on success, a negative errno value otherwise and rte_errno is set.
  */
 static int
-mlx5_rxq_ibv_obj_new(struct rte_eth_dev *dev, uint16_t idx)
+mlx5_rxq_ibv_obj_new(struct mlx5_rxq_priv *rxq)
 {
-	struct mlx5_priv *priv = dev->data->dev_private;
-	struct mlx5_rxq_data *rxq_data = (*priv->rxqs)[idx];
-	struct mlx5_rxq_ctrl *rxq_ctrl =
-		container_of(rxq_data, struct mlx5_rxq_ctrl, rxq);
+	uint16_t idx = rxq->idx;
+	struct mlx5_priv *priv = rxq->priv;
+	uint16_t port_id = priv->dev_data->port_id;
+	struct mlx5_rxq_ctrl *rxq_ctrl = rxq->ctrl;
+	struct mlx5_rxq_data *rxq_data = &rxq_ctrl->rxq;
 	struct mlx5_rxq_obj *tmpl = rxq_ctrl->obj;
 	struct mlx5dv_cq cq_info;
 	struct mlx5dv_rwq rwq;
@@ -382,17 +375,17 @@ mlx5_rxq_ibv_obj_new(struct rte_eth_dev *dev, uint16_t idx)
 				mlx5_glue->create_comp_channel(priv->sh->ctx);
 		if (!tmpl->ibv_channel) {
 			DRV_LOG(ERR, "Port %u: comp channel creation failure.",
-				dev->data->port_id);
+				port_id);
 			rte_errno = ENOMEM;
 			goto error;
 		}
 		tmpl->fd = ((struct ibv_comp_channel *)(tmpl->ibv_channel))->fd;
 	}
 	/* Create CQ using Verbs API. */
-	tmpl->ibv_cq = mlx5_rxq_ibv_cq_create(dev, idx);
+	tmpl->ibv_cq = mlx5_rxq_ibv_cq_create(rxq);
 	if (!tmpl->ibv_cq) {
 		DRV_LOG(ERR, "Port %u Rx queue %u CQ creation failure.",
-			dev->data->port_id, idx);
+			port_id, idx);
 		rte_errno = ENOMEM;
 		goto error;
 	}
@@ -407,7 +400,7 @@ mlx5_rxq_ibv_obj_new(struct rte_eth_dev *dev, uint16_t idx)
 		DRV_LOG(ERR,
 			"Port %u wrong MLX5_CQE_SIZE environment "
 			"variable value: it should be set to %u.",
-			dev->data->port_id, RTE_CACHE_LINE_SIZE);
+			port_id, RTE_CACHE_LINE_SIZE);
 		rte_errno = EINVAL;
 		goto error;
 	}
@@ -418,19 +411,19 @@ mlx5_rxq_ibv_obj_new(struct rte_eth_dev *dev, uint16_t idx)
 	rxq_data->cq_uar = cq_info.cq_uar;
 	rxq_data->cqn = cq_info.cqn;
 	/* Create WQ (RQ) using Verbs API. */
-	tmpl->wq = mlx5_rxq_ibv_wq_create(dev, idx);
+	tmpl->wq = mlx5_rxq_ibv_wq_create(rxq);
 	if (!tmpl->wq) {
 		DRV_LOG(ERR, "Port %u Rx queue %u WQ creation failure.",
-			dev->data->port_id, idx);
+			port_id, idx);
 		rte_errno = ENOMEM;
 		goto error;
 	}
 	/* Change queue state to ready. */
-	ret = mlx5_ibv_modify_wq(tmpl, IBV_WQS_RDY);
+	ret = mlx5_ibv_modify_wq(rxq, IBV_WQS_RDY);
 	if (ret) {
 		DRV_LOG(ERR,
 			"Port %u Rx queue %u WQ state to IBV_WQS_RDY failed.",
-			dev->data->port_id, idx);
+			port_id, idx);
 		rte_errno = ret;
 		goto error;
 	}
@@ -446,7 +439,7 @@ mlx5_rxq_ibv_obj_new(struct rte_eth_dev *dev, uint16_t idx)
 	rxq_data->cq_arm_sn = 0;
 	mlx5_rxq_initialize(rxq_data);
 	rxq_data->cq_ci = 0;
-	dev->data->rx_queue_state[idx] = RTE_ETH_QUEUE_STATE_STARTED;
+	priv->dev_data->rx_queue_state[idx] = RTE_ETH_QUEUE_STATE_STARTED;
 	rxq_ctrl->wqn = ((struct ibv_wq *)(tmpl->wq))->wq_num;
 	return 0;
 error:
@@ -464,12 +457,14 @@ mlx5_rxq_ibv_obj_new(struct rte_eth_dev *dev, uint16_t idx)
 /**
  * Release an Rx verbs queue object.
  *
- * @param rxq_obj
- *   Verbs Rx queue object.
+ * @param rxq
+ *   Pointer to Rx queue.
  */
 static void
-mlx5_rxq_ibv_obj_release(struct mlx5_rxq_obj *rxq_obj)
+mlx5_rxq_ibv_obj_release(struct mlx5_rxq_priv *rxq)
 {
+	struct mlx5_rxq_obj *rxq_obj = rxq->ctrl->obj;
+
 	MLX5_ASSERT(rxq_obj);
 	MLX5_ASSERT(rxq_obj->wq);
 	MLX5_ASSERT(rxq_obj->ibv_cq);
@@ -692,12 +687,24 @@ static void
 mlx5_rxq_ibv_obj_drop_release(struct rte_eth_dev *dev)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
-	struct mlx5_rxq_obj *rxq = priv->drop_queue.rxq;
+	struct mlx5_rxq_priv *rxq = priv->drop_queue.rxq;
+	struct mlx5_rxq_obj *rxq_obj;
 
-	if (rxq->wq)
-		claim_zero(mlx5_glue->destroy_wq(rxq->wq));
-	if (rxq->ibv_cq)
-		claim_zero(mlx5_glue->destroy_cq(rxq->ibv_cq));
+	if (rxq == NULL)
+		return;
+	if (rxq->ctrl == NULL)
+		goto free_priv;
+	rxq_obj = rxq->ctrl->obj;
+	if (rxq_obj == NULL)
+		goto free_ctrl;
+	if (rxq_obj->wq)
+		claim_zero(mlx5_glue->destroy_wq(rxq_obj->wq));
+	if (rxq_obj->ibv_cq)
+		claim_zero(mlx5_glue->destroy_cq(rxq_obj->ibv_cq));
+	mlx5_free(rxq_obj);
+free_ctrl:
+	mlx5_free(rxq->ctrl);
+free_priv:
 	mlx5_free(rxq);
 	priv->drop_queue.rxq = NULL;
 }
@@ -716,39 +723,58 @@ mlx5_rxq_ibv_obj_drop_create(struct rte_eth_dev *dev)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
 	struct ibv_context *ctx = priv->sh->ctx;
-	struct mlx5_rxq_obj *rxq = priv->drop_queue.rxq;
+	struct mlx5_rxq_priv *rxq = priv->drop_queue.rxq;
+	struct mlx5_rxq_ctrl *rxq_ctrl = NULL;
+	struct mlx5_rxq_obj *rxq_obj = NULL;
 
-	if (rxq)
+	if (rxq != NULL)
 		return 0;
 	rxq = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*rxq), 0, SOCKET_ID_ANY);
-	if (!rxq) {
+	if (rxq == NULL) {
 		DRV_LOG(DEBUG, "Port %u cannot allocate drop Rx queue memory.",
 		      dev->data->port_id);
 		rte_errno = ENOMEM;
 		return -rte_errno;
 	}
 	priv->drop_queue.rxq = rxq;
-	rxq->ibv_cq = mlx5_glue->create_cq(ctx, 1, NULL, NULL, 0);
-	if (!rxq->ibv_cq) {
+	rxq_ctrl = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*rxq_ctrl), 0,
+			       SOCKET_ID_ANY);
+	if (rxq_ctrl == NULL) {
+		DRV_LOG(DEBUG, "Port %u cannot allocate drop Rx queue control memory.",
+		      dev->data->port_id);
+		rte_errno = ENOMEM;
+		goto error;
+	}
+	rxq->ctrl = rxq_ctrl;
+	rxq_obj = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*rxq_obj), 0,
+			      SOCKET_ID_ANY);
+	if (rxq_obj == NULL) {
+		DRV_LOG(DEBUG, "Port %u cannot allocate drop Rx queue memory.",
+		      dev->data->port_id);
+		rte_errno = ENOMEM;
+		goto error;
+	}
+	rxq_ctrl->obj = rxq_obj;
+	rxq_obj->ibv_cq = mlx5_glue->create_cq(ctx, 1, NULL, NULL, 0);
+	if (!rxq_obj->ibv_cq) {
 		DRV_LOG(DEBUG, "Port %u cannot allocate CQ for drop queue.",
 		      dev->data->port_id);
 		rte_errno = errno;
 		goto error;
 	}
-	rxq->wq = mlx5_glue->create_wq(ctx, &(struct ibv_wq_init_attr){
+	rxq_obj->wq = mlx5_glue->create_wq(ctx, &(struct ibv_wq_init_attr){
 						    .wq_type = IBV_WQT_RQ,
 						    .max_wr = 1,
 						    .max_sge = 1,
 						    .pd = priv->sh->pd,
-						    .cq = rxq->ibv_cq,
+						    .cq = rxq_obj->ibv_cq,
 					      });
-	if (!rxq->wq) {
+	if (!rxq_obj->wq) {
 		DRV_LOG(DEBUG, "Port %u cannot allocate WQ for drop queue.",
 		      dev->data->port_id);
 		rte_errno = errno;
 		goto error;
 	}
-	priv->drop_queue.rxq = rxq;
 	return 0;
 error:
 	mlx5_rxq_ibv_obj_drop_release(dev);
@@ -777,7 +803,7 @@ mlx5_ibv_drop_action_create(struct rte_eth_dev *dev)
 	ret = mlx5_rxq_ibv_obj_drop_create(dev);
 	if (ret < 0)
 		goto error;
-	rxq = priv->drop_queue.rxq;
+	rxq = priv->drop_queue.rxq->ctrl->obj;
 	ind_tbl = mlx5_glue->create_rwq_ind_table
 				(priv->sh->ctx,
 				 &(struct ibv_rwq_ind_table_init_attr){
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index b18ddb0b0fa..a2735cbb350 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -310,7 +310,7 @@ struct mlx5_vf_vlan {
 /* Flow drop context necessary due to Verbs API. */
 struct mlx5_drop {
 	struct mlx5_hrxq *hrxq; /* Hash Rx queue queue. */
-	struct mlx5_rxq_obj *rxq; /* Rx queue object. */
+	struct mlx5_rxq_priv *rxq; /* Rx queue. */
 };
 
 /* Loopback dummy queue resources required due to Verbs API. */
@@ -1257,7 +1257,6 @@ struct mlx5_rxq_obj {
 		};
 		struct mlx5_devx_obj *rq; /* DevX RQ object for hairpin. */
 		struct {
-			struct mlx5_devx_rq rq_obj; /* DevX RQ object. */
 			struct mlx5_devx_cq cq_obj; /* DevX CQ object. */
 			void *devx_channel;
 		};
@@ -1339,11 +1338,11 @@ struct mlx5_rxq_priv;
 
 /* HW objects operations structure. */
 struct mlx5_obj_ops {
-	int (*rxq_obj_modify_vlan_strip)(struct mlx5_rxq_obj *rxq_obj, int on);
-	int (*rxq_obj_new)(struct rte_eth_dev *dev, uint16_t idx);
+	int (*rxq_obj_modify_vlan_strip)(struct mlx5_rxq_priv *rxq, int on);
+	int (*rxq_obj_new)(struct mlx5_rxq_priv *rxq);
 	int (*rxq_event_get)(struct mlx5_rxq_obj *rxq_obj);
-	int (*rxq_obj_modify)(struct mlx5_rxq_obj *rxq_obj, uint8_t type);
-	void (*rxq_obj_release)(struct mlx5_rxq_obj *rxq_obj);
+	int (*rxq_obj_modify)(struct mlx5_rxq_priv *rxq, uint8_t type);
+	void (*rxq_obj_release)(struct mlx5_rxq_priv *rxq);
 	int (*ind_table_new)(struct rte_eth_dev *dev, const unsigned int log_n,
 			     struct mlx5_ind_table_obj *ind_tbl);
 	int (*ind_table_modify)(struct rte_eth_dev *dev,
diff --git a/drivers/net/mlx5/mlx5_devx.c b/drivers/net/mlx5/mlx5_devx.c
index 8411e6e6418..f7f7526dbf6 100644
--- a/drivers/net/mlx5/mlx5_devx.c
+++ b/drivers/net/mlx5/mlx5_devx.c
@@ -30,14 +30,16 @@
 /**
  * Modify RQ vlan stripping offload
  *
- * @param rxq_obj
- *   Rx queue object.
+ * @param rxq
+ *   Rx queue.
+ * @param on
+ *   Enable/disable VLAN stripping.
  *
  * @return
  *   0 on success, non-0 otherwise
  */
 static int
-mlx5_rxq_obj_modify_rq_vlan_strip(struct mlx5_rxq_obj *rxq_obj, int on)
+mlx5_rxq_obj_modify_rq_vlan_strip(struct mlx5_rxq_priv *rxq, int on)
 {
 	struct mlx5_devx_modify_rq_attr rq_attr;
 
@@ -46,14 +48,14 @@ mlx5_rxq_obj_modify_rq_vlan_strip(struct mlx5_rxq_obj *rxq_obj, int on)
 	rq_attr.state = MLX5_RQC_STATE_RDY;
 	rq_attr.vsd = (on ? 0 : 1);
 	rq_attr.modify_bitmask = MLX5_MODIFY_RQ_IN_MODIFY_BITMASK_VSD;
-	return mlx5_devx_cmd_modify_rq(rxq_obj->rq_obj.rq, &rq_attr);
+	return mlx5_devx_cmd_modify_rq(rxq->devx_rq.rq, &rq_attr);
 }
 
 /**
  * Modify RQ using DevX API.
  *
- * @param rxq_obj
- *   DevX Rx queue object.
+ * @param rxq
+ *   DevX Rx queue.
  * @param type
  *   Type of change queue state.
  *
@@ -61,7 +63,7 @@ mlx5_rxq_obj_modify_rq_vlan_strip(struct mlx5_rxq_obj *rxq_obj, int on)
  *   0 on success, a negative errno value otherwise and rte_errno is set.
  */
 static int
-mlx5_devx_modify_rq(struct mlx5_rxq_obj *rxq_obj, uint8_t type)
+mlx5_devx_modify_rq(struct mlx5_rxq_priv *rxq, uint8_t type)
 {
 	struct mlx5_devx_modify_rq_attr rq_attr;
 
@@ -86,7 +88,7 @@ mlx5_devx_modify_rq(struct mlx5_rxq_obj *rxq_obj, uint8_t type)
 	default:
 		break;
 	}
-	return mlx5_devx_cmd_modify_rq(rxq_obj->rq_obj.rq, &rq_attr);
+	return mlx5_devx_cmd_modify_rq(rxq->devx_rq.rq, &rq_attr);
 }
 
 /**
@@ -145,42 +147,34 @@ mlx5_devx_modify_sq(struct mlx5_txq_obj *obj, enum mlx5_txq_modify_type type,
 	return 0;
 }
 
-/**
- * Destroy the Rx queue DevX object.
- *
- * @param rxq_obj
- *   Rxq object to destroy.
- */
-static void
-mlx5_rxq_release_devx_resources(struct mlx5_rxq_obj *rxq_obj)
-{
-	mlx5_devx_rq_destroy(&rxq_obj->rq_obj);
-	memset(&rxq_obj->rq_obj, 0, sizeof(rxq_obj->rq_obj));
-	mlx5_devx_cq_destroy(&rxq_obj->cq_obj);
-	memset(&rxq_obj->cq_obj, 0, sizeof(rxq_obj->cq_obj));
-}
-
 /**
  * Release an Rx DevX queue object.
  *
- * @param rxq_obj
- *   DevX Rx queue object.
+ * @param rxq
+ *   DevX Rx queue.
  */
 static void
-mlx5_rxq_devx_obj_release(struct mlx5_rxq_obj *rxq_obj)
+mlx5_rxq_devx_obj_release(struct mlx5_rxq_priv *rxq)
 {
-	MLX5_ASSERT(rxq_obj);
+	struct mlx5_rxq_ctrl *rxq_ctrl = rxq->ctrl;
+	struct mlx5_rxq_obj *rxq_obj = rxq_ctrl->obj;
+
+	MLX5_ASSERT(rxq != NULL);
+	MLX5_ASSERT(rxq_ctrl != NULL);
 	if (rxq_obj->rxq_ctrl->type == MLX5_RXQ_TYPE_HAIRPIN) {
 		MLX5_ASSERT(rxq_obj->rq);
-		mlx5_devx_modify_rq(rxq_obj, MLX5_RXQ_MOD_RDY2RST);
+		mlx5_devx_modify_rq(rxq, MLX5_RXQ_MOD_RDY2RST);
 		claim_zero(mlx5_devx_cmd_destroy(rxq_obj->rq));
 	} else {
-		MLX5_ASSERT(rxq_obj->cq_obj.cq);
-		MLX5_ASSERT(rxq_obj->rq_obj.rq);
-		mlx5_rxq_release_devx_resources(rxq_obj);
-		if (rxq_obj->devx_channel)
+		mlx5_devx_rq_destroy(&rxq->devx_rq);
+		memset(&rxq->devx_rq, 0, sizeof(rxq->devx_rq));
+		mlx5_devx_cq_destroy(&rxq_obj->cq_obj);
+		memset(&rxq_obj->cq_obj, 0, sizeof(rxq_obj->cq_obj));
+		if (rxq_obj->devx_channel) {
 			mlx5_os_devx_destroy_event_channel
 							(rxq_obj->devx_channel);
+			rxq_obj->devx_channel = NULL;
+		}
 	}
 }
 
@@ -224,21 +218,18 @@ mlx5_rx_devx_get_event(struct mlx5_rxq_obj *rxq_obj)
 /**
  * Create a RQ object using DevX.
  *
- * @param dev
- *   Pointer to Ethernet device.
- * @param rxq_data
- *   RX queue data.
+ * @param rxq
+ *   Pointer to Rx queue.
  *
  * @return
  *   0 on success, a negative errno value otherwise and rte_errno is set.
  */
 static int
-mlx5_rxq_create_devx_rq_resources(struct rte_eth_dev *dev,
-				  struct mlx5_rxq_data *rxq_data)
+mlx5_rxq_create_devx_rq_resources(struct mlx5_rxq_priv *rxq)
 {
-	struct mlx5_priv *priv = dev->data->dev_private;
-	struct mlx5_rxq_ctrl *rxq_ctrl =
-		container_of(rxq_data, struct mlx5_rxq_ctrl, rxq);
+	struct mlx5_priv *priv = rxq->priv;
+	struct mlx5_rxq_ctrl *rxq_ctrl = rxq->ctrl;
+	struct mlx5_rxq_data *rxq_data = &rxq->ctrl->rxq;
 	struct mlx5_devx_create_rq_attr rq_attr = { 0 };
 	uint16_t log_desc_n = rxq_data->elts_n - rxq_data->sges_n;
 	uint32_t wqe_size, log_wqe_size;
@@ -279,7 +270,7 @@ mlx5_rxq_create_devx_rq_resources(struct rte_eth_dev *dev,
 	rq_attr.wq_attr.pd = priv->sh->pdn;
 	rq_attr.counter_set_id = priv->counter_set_id;
 	/* Create RQ using DevX API. */
-	return mlx5_devx_rq_create(priv->sh->ctx, &rxq_ctrl->obj->rq_obj,
+	return mlx5_devx_rq_create(priv->sh->ctx, &rxq->devx_rq,
 				   wqe_size, log_desc_n, &rq_attr,
 				   rxq_ctrl->socket);
 }
@@ -287,24 +278,22 @@ mlx5_rxq_create_devx_rq_resources(struct rte_eth_dev *dev,
 /**
  * Create a DevX CQ object for an Rx queue.
  *
- * @param dev
- *   Pointer to Ethernet device.
- * @param rxq_data
- *   RX queue data.
+ * @param rxq
+ *   Pointer to Rx queue.
  *
  * @return
  *   0 on success, a negative errno value otherwise and rte_errno is set.
  */
 static int
-mlx5_rxq_create_devx_cq_resources(struct rte_eth_dev *dev,
-				  struct mlx5_rxq_data *rxq_data)
+mlx5_rxq_create_devx_cq_resources(struct mlx5_rxq_priv *rxq)
 {
 	struct mlx5_devx_cq *cq_obj = 0;
 	struct mlx5_devx_cq_attr cq_attr = { 0 };
-	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5_priv *priv = rxq->priv;
 	struct mlx5_dev_ctx_shared *sh = priv->sh;
-	struct mlx5_rxq_ctrl *rxq_ctrl =
-		container_of(rxq_data, struct mlx5_rxq_ctrl, rxq);
+	uint16_t port_id = priv->dev_data->port_id;
+	struct mlx5_rxq_ctrl *rxq_ctrl = rxq->ctrl;
+	struct mlx5_rxq_data *rxq_data = &rxq_ctrl->rxq;
 	unsigned int cqe_n = mlx5_rxq_cqe_num(rxq_data);
 	uint32_t log_cqe_n;
 	uint16_t event_nums[1] = { 0 };
@@ -345,7 +334,7 @@ mlx5_rxq_create_devx_cq_resources(struct rte_eth_dev *dev,
 		}
 		DRV_LOG(DEBUG,
 			"Port %u Rx CQE compression is enabled, format %d.",
-			dev->data->port_id, priv->config.cqe_comp_fmt);
+			port_id, priv->config.cqe_comp_fmt);
 		/*
 		 * For vectorized Rx, it must not be doubled in order to
 		 * make cq_ci and rq_ci aligned.
@@ -354,13 +343,12 @@ mlx5_rxq_create_devx_cq_resources(struct rte_eth_dev *dev,
 			cqe_n *= 2;
 	} else if (priv->config.cqe_comp && rxq_data->hw_timestamp) {
 		DRV_LOG(DEBUG,
-			"Port %u Rx CQE compression is disabled for HW"
-			" timestamp.",
-			dev->data->port_id);
+			"Port %u Rx CQE compression is disabled for HW timestamp.",
+			port_id);
 	} else if (priv->config.cqe_comp && rxq_data->lro) {
 		DRV_LOG(DEBUG,
 			"Port %u Rx CQE compression is disabled for LRO.",
-			dev->data->port_id);
+			port_id);
 	}
 	cq_attr.uar_page_id = mlx5_os_get_devx_uar_page_id(sh->devx_rx_uar);
 	log_cqe_n = log2above(cqe_n);
@@ -398,27 +386,23 @@ mlx5_rxq_create_devx_cq_resources(struct rte_eth_dev *dev,
 /**
  * Create the Rx hairpin queue object.
  *
- * @param dev
- *   Pointer to Ethernet device.
- * @param idx
- *   Queue index in DPDK Rx queue array.
+ * @param rxq
+ *   Pointer to Rx queue.
  *
  * @return
  *   0 on success, a negative errno value otherwise and rte_errno is set.
  */
 static int
-mlx5_rxq_obj_hairpin_new(struct rte_eth_dev *dev, uint16_t idx)
+mlx5_rxq_obj_hairpin_new(struct mlx5_rxq_priv *rxq)
 {
-	struct mlx5_priv *priv = dev->data->dev_private;
-	struct mlx5_rxq_data *rxq_data = (*priv->rxqs)[idx];
-	struct mlx5_rxq_ctrl *rxq_ctrl =
-		container_of(rxq_data, struct mlx5_rxq_ctrl, rxq);
+	uint16_t idx = rxq->idx;
+	struct mlx5_priv *priv = rxq->priv;
+	struct mlx5_rxq_ctrl *rxq_ctrl = rxq->ctrl;
 	struct mlx5_devx_create_rq_attr attr = { 0 };
 	struct mlx5_rxq_obj *tmpl = rxq_ctrl->obj;
 	uint32_t max_wq_data;
 
-	MLX5_ASSERT(rxq_data);
-	MLX5_ASSERT(tmpl);
+	MLX5_ASSERT(rxq != NULL && rxq->ctrl != NULL && tmpl != NULL);
 	tmpl->rxq_ctrl = rxq_ctrl;
 	attr.hairpin = 1;
 	max_wq_data = priv->config.hca_attr.log_max_hairpin_wq_data_sz;
@@ -447,39 +431,36 @@ mlx5_rxq_obj_hairpin_new(struct rte_eth_dev *dev, uint16_t idx)
 	if (!tmpl->rq) {
 		DRV_LOG(ERR,
 			"Port %u Rx hairpin queue %u can't create rq object.",
-			dev->data->port_id, idx);
+			priv->dev_data->port_id, idx);
 		rte_errno = errno;
 		return -rte_errno;
 	}
-	dev->data->rx_queue_state[idx] = RTE_ETH_QUEUE_STATE_HAIRPIN;
+	priv->dev_data->rx_queue_state[idx] = RTE_ETH_QUEUE_STATE_HAIRPIN;
 	return 0;
 }
 
 /**
  * Create the Rx queue DevX object.
  *
- * @param dev
- *   Pointer to Ethernet device.
- * @param idx
- *   Queue index in DPDK Rx queue array.
+ * @param rxq
+ *   Pointer to Rx queue.
  *
  * @return
  *   0 on success, a negative errno value otherwise and rte_errno is set.
  */
 static int
-mlx5_rxq_devx_obj_new(struct rte_eth_dev *dev, uint16_t idx)
+mlx5_rxq_devx_obj_new(struct mlx5_rxq_priv *rxq)
 {
-	struct mlx5_priv *priv = dev->data->dev_private;
-	struct mlx5_rxq_data *rxq_data = (*priv->rxqs)[idx];
-	struct mlx5_rxq_ctrl *rxq_ctrl =
-		container_of(rxq_data, struct mlx5_rxq_ctrl, rxq);
+	struct mlx5_priv *priv = rxq->priv;
+	struct mlx5_rxq_ctrl *rxq_ctrl = rxq->ctrl;
+	struct mlx5_rxq_data *rxq_data = &rxq_ctrl->rxq;
 	struct mlx5_rxq_obj *tmpl = rxq_ctrl->obj;
 	int ret = 0;
 
 	MLX5_ASSERT(rxq_data);
 	MLX5_ASSERT(tmpl);
 	if (rxq_ctrl->type == MLX5_RXQ_TYPE_HAIRPIN)
-		return mlx5_rxq_obj_hairpin_new(dev, idx);
+		return mlx5_rxq_obj_hairpin_new(rxq);
 	tmpl->rxq_ctrl = rxq_ctrl;
 	if (rxq_ctrl->irq) {
 		int devx_ev_flag =
@@ -497,34 +478,32 @@ mlx5_rxq_devx_obj_new(struct rte_eth_dev *dev, uint16_t idx)
 		tmpl->fd = mlx5_os_get_devx_channel_fd(tmpl->devx_channel);
 	}
 	/* Create CQ using DevX API. */
-	ret = mlx5_rxq_create_devx_cq_resources(dev, rxq_data);
+	ret = mlx5_rxq_create_devx_cq_resources(rxq);
 	if (ret) {
 		DRV_LOG(ERR, "Failed to create CQ.");
 		goto error;
 	}
 	/* Create RQ using DevX API. */
-	ret = mlx5_rxq_create_devx_rq_resources(dev, rxq_data);
+	ret = mlx5_rxq_create_devx_rq_resources(rxq);
 	if (ret) {
 		DRV_LOG(ERR, "Port %u Rx queue %u RQ creation failure.",
-			dev->data->port_id, idx);
+			priv->dev_data->port_id, rxq->idx);
 		rte_errno = ENOMEM;
 		goto error;
 	}
 	/* Change queue state to ready. */
-	ret = mlx5_devx_modify_rq(tmpl, MLX5_RXQ_MOD_RST2RDY);
+	ret = mlx5_devx_modify_rq(rxq, MLX5_RXQ_MOD_RST2RDY);
 	if (ret)
 		goto error;
-	rxq_data->wqes = (void *)(uintptr_t)tmpl->rq_obj.wq.umem_buf;
-	rxq_data->rq_db = (uint32_t *)(uintptr_t)tmpl->rq_obj.wq.db_rec;
-	rxq_data->cq_arm_sn = 0;
-	rxq_data->cq_ci = 0;
+	rxq_data->wqes = (void *)(uintptr_t)rxq->devx_rq.wq.umem_buf;
+	rxq_data->rq_db = (uint32_t *)(uintptr_t)rxq->devx_rq.wq.db_rec;
 	mlx5_rxq_initialize(rxq_data);
-	dev->data->rx_queue_state[idx] = RTE_ETH_QUEUE_STATE_STARTED;
-	rxq_ctrl->wqn = tmpl->rq_obj.rq->id;
+	priv->dev_data->rx_queue_state[rxq->idx] = RTE_ETH_QUEUE_STATE_STARTED;
+	rxq_ctrl->wqn = rxq->devx_rq.rq->id;
 	return 0;
 error:
 	ret = rte_errno; /* Save rte_errno before cleanup. */
-	mlx5_rxq_devx_obj_release(tmpl);
+	mlx5_rxq_devx_obj_release(rxq);
 	rte_errno = ret; /* Restore rte_errno. */
 	return -rte_errno;
 }
@@ -570,15 +549,15 @@ mlx5_devx_ind_table_create_rqt_attr(struct rte_eth_dev *dev,
 	rqt_attr->rqt_actual_size = rqt_n;
 	if (queues == NULL) {
 		for (i = 0; i < rqt_n; i++)
-			rqt_attr->rq_list[i] = priv->drop_queue.rxq->rq->id;
+			rqt_attr->rq_list[i] =
+					priv->drop_queue.rxq->devx_rq.rq->id;
 		return rqt_attr;
 	}
 	for (i = 0; i != queues_n; ++i) {
-		struct mlx5_rxq_data *rxq = (*priv->rxqs)[queues[i]];
-		struct mlx5_rxq_ctrl *rxq_ctrl =
-				container_of(rxq, struct mlx5_rxq_ctrl, rxq);
+		struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, queues[i]);
 
-		rqt_attr->rq_list[i] = rxq_ctrl->obj->rq_obj.rq->id;
+		MLX5_ASSERT(rxq != NULL);
+		rqt_attr->rq_list[i] = rxq->devx_rq.rq->id;
 	}
 	MLX5_ASSERT(i > 0);
 	for (j = 0; i != rqt_n; ++j, ++i)
@@ -717,7 +696,7 @@ mlx5_devx_tir_attr_set(struct rte_eth_dev *dev, const uint8_t *rss_key,
 			}
 		}
 	} else {
-		rxq_obj_type = priv->drop_queue.rxq->rxq_ctrl->type;
+		rxq_obj_type = priv->drop_queue.rxq->ctrl->type;
 	}
 	memset(tir_attr, 0, sizeof(*tir_attr));
 	tir_attr->disp_type = MLX5_TIRC_DISP_TYPE_INDIRECT;
@@ -889,16 +868,23 @@ mlx5_rxq_devx_obj_drop_create(struct rte_eth_dev *dev)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
 	int socket_id = dev->device->numa_node;
-	struct mlx5_rxq_ctrl *rxq_ctrl;
-	struct mlx5_rxq_data *rxq_data;
-	struct mlx5_rxq_obj *rxq = NULL;
+	struct mlx5_rxq_priv *rxq;
+	struct mlx5_rxq_ctrl *rxq_ctrl = NULL;
+	struct mlx5_rxq_obj *rxq_obj = NULL;
 	int ret;
 
 	/*
-	 * Initialize dummy control structures.
+	 * Initialize dummy Rx queue structures.
 	 * They are required to hold pointers for cleanup
 	 * and are only accessible via drop queue DevX objects.
 	 */
+	rxq = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*rxq), 0, socket_id);
+	if (rxq == NULL) {
+		DRV_LOG(ERR, "Port %u could not allocate drop queue",
+			dev->data->port_id);
+		rte_errno = ENOMEM;
+		goto error;
+	}
 	rxq_ctrl = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*rxq_ctrl),
 			       0, socket_id);
 	if (rxq_ctrl == NULL) {
@@ -907,27 +893,29 @@ mlx5_rxq_devx_obj_drop_create(struct rte_eth_dev *dev)
 		rte_errno = ENOMEM;
 		goto error;
 	}
-	rxq = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*rxq), 0, socket_id);
-	if (rxq == NULL) {
+	rxq_obj = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*rxq_obj), 0, socket_id);
+	if (rxq_obj == NULL) {
 		DRV_LOG(ERR, "Port %u could not allocate drop queue object",
 			dev->data->port_id);
 		rte_errno = ENOMEM;
 		goto error;
 	}
-	rxq->rxq_ctrl = rxq_ctrl;
+	rxq->priv = priv;
+	rxq->ctrl = rxq_ctrl;
+	LIST_INSERT_HEAD(&rxq_ctrl->owners, rxq, owner_entry);
+	rxq_obj->rxq_ctrl = rxq_ctrl;
 	rxq_ctrl->type = MLX5_RXQ_TYPE_STANDARD;
 	rxq_ctrl->sh = priv->sh;
-	rxq_ctrl->obj = rxq;
-	rxq_data = &rxq_ctrl->rxq;
+	rxq_ctrl->obj = rxq_obj;
 	/* Create CQ using DevX API. */
-	ret = mlx5_rxq_create_devx_cq_resources(dev, rxq_data);
+	ret = mlx5_rxq_create_devx_cq_resources(rxq);
 	if (ret != 0) {
 		DRV_LOG(ERR, "Port %u drop queue CQ creation failed.",
 			dev->data->port_id);
 		goto error;
 	}
 	/* Create RQ using DevX API. */
-	ret = mlx5_rxq_create_devx_rq_resources(dev, rxq_data);
+	ret = mlx5_rxq_create_devx_rq_resources(rxq);
 	if (ret != 0) {
 		DRV_LOG(ERR, "Port %u drop queue RQ creation failed.",
 			dev->data->port_id);
@@ -944,15 +932,18 @@ mlx5_rxq_devx_obj_drop_create(struct rte_eth_dev *dev)
 error:
 	ret = rte_errno; /* Save rte_errno before cleanup. */
 	if (rxq != NULL) {
-		if (rxq->rq_obj.rq != NULL)
-			mlx5_devx_rq_destroy(&rxq->rq_obj);
-		if (rxq->cq_obj.cq != NULL)
-			mlx5_devx_cq_destroy(&rxq->cq_obj);
-		if (rxq->devx_channel)
-			mlx5_os_devx_destroy_event_channel
-							(rxq->devx_channel);
+		if (rxq->devx_rq.rq != NULL)
+			mlx5_devx_rq_destroy(&rxq->devx_rq);
 		mlx5_free(rxq);
 	}
+	if (rxq_obj != NULL) {
+		if (rxq_obj->cq_obj.cq != NULL)
+			mlx5_devx_cq_destroy(&rxq_obj->cq_obj);
+		if (rxq_obj->devx_channel)
+			mlx5_os_devx_destroy_event_channel
+							(rxq_obj->devx_channel);
+		mlx5_free(rxq_obj);
+	}
 	if (rxq_ctrl != NULL)
 		mlx5_free(rxq_ctrl);
 	rte_errno = ret; /* Restore rte_errno. */
@@ -969,12 +960,14 @@ static void
 mlx5_rxq_devx_obj_drop_release(struct rte_eth_dev *dev)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
-	struct mlx5_rxq_obj *rxq = priv->drop_queue.rxq;
-	struct mlx5_rxq_ctrl *rxq_ctrl = rxq->rxq_ctrl;
+	struct mlx5_rxq_priv *rxq = priv->drop_queue.rxq;
+	struct mlx5_rxq_ctrl *rxq_ctrl = rxq->ctrl;
+	struct mlx5_rxq_obj *rxq_obj = rxq_ctrl->obj;
 
 	mlx5_rxq_devx_obj_release(rxq);
 	mlx5_free(rxq);
 	mlx5_free(rxq_ctrl);
+	mlx5_free(rxq_obj);
 	priv->drop_queue.rxq = NULL;
 }
 
@@ -994,7 +987,7 @@ mlx5_devx_drop_action_destroy(struct rte_eth_dev *dev)
 		mlx5_devx_tir_destroy(hrxq);
 	if (hrxq->ind_table->ind_table != NULL)
 		mlx5_devx_ind_table_destroy(hrxq->ind_table);
-	if (priv->drop_queue.rxq->rq != NULL)
+	if (priv->drop_queue.rxq != NULL)
 		mlx5_rxq_devx_obj_drop_release(dev);
 }
 
diff --git a/drivers/net/mlx5/mlx5_rx.h b/drivers/net/mlx5/mlx5_rx.h
index 4eed4176324..25f7fc2071a 100644
--- a/drivers/net/mlx5/mlx5_rx.h
+++ b/drivers/net/mlx5/mlx5_rx.h
@@ -183,6 +183,7 @@ struct mlx5_rxq_priv {
 	struct mlx5_rxq_ctrl *ctrl; /* Shared Rx Queue. */
 	LIST_ENTRY(mlx5_rxq_priv) owner_entry; /* Entry in shared rxq_ctrl. */
 	struct mlx5_priv *priv; /* Back pointer to private data. */
+	struct mlx5_devx_rq devx_rq;
 	struct rte_eth_hairpin_conf hairpin_conf; /* Hairpin configuration. */
 	uint32_t hairpin_status; /* Hairpin binding status. */
 };
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index fc931939721..b9d39a28d78 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -452,13 +452,13 @@ int
 mlx5_rx_queue_stop_primary(struct rte_eth_dev *dev, uint16_t idx)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
-	struct mlx5_rxq_data *rxq = (*priv->rxqs)[idx];
-	struct mlx5_rxq_ctrl *rxq_ctrl =
-			container_of(rxq, struct mlx5_rxq_ctrl, rxq);
+	struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, idx);
+	struct mlx5_rxq_ctrl *rxq_ctrl = rxq->ctrl;
 	int ret;
 
+	MLX5_ASSERT(rxq != NULL && rxq_ctrl != NULL);
 	MLX5_ASSERT(rte_eal_process_type() == RTE_PROC_PRIMARY);
-	ret = priv->obj_ops.rxq_obj_modify(rxq_ctrl->obj, MLX5_RXQ_MOD_RDY2RST);
+	ret = priv->obj_ops.rxq_obj_modify(rxq, MLX5_RXQ_MOD_RDY2RST);
 	if (ret) {
 		DRV_LOG(ERR, "Cannot change Rx WQ state to RESET:  %s",
 			strerror(errno));
@@ -466,7 +466,7 @@ mlx5_rx_queue_stop_primary(struct rte_eth_dev *dev, uint16_t idx)
 		return ret;
 	}
 	/* Remove all processes CQEs. */
-	rxq_sync_cq(rxq);
+	rxq_sync_cq(&rxq_ctrl->rxq);
 	/* Free all involved mbufs. */
 	rxq_free_elts(rxq_ctrl);
 	/* Set the actual queue state. */
@@ -538,26 +538,26 @@ int
 mlx5_rx_queue_start_primary(struct rte_eth_dev *dev, uint16_t idx)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
-	struct mlx5_rxq_data *rxq = (*priv->rxqs)[idx];
-	struct mlx5_rxq_ctrl *rxq_ctrl =
-			container_of(rxq, struct mlx5_rxq_ctrl, rxq);
+	struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, idx);
+	struct mlx5_rxq_data *rxq_data = &rxq->ctrl->rxq;
 	int ret;
 
-	MLX5_ASSERT(rte_eal_process_type() ==  RTE_PROC_PRIMARY);
+	MLX5_ASSERT(rxq != NULL && rxq->ctrl != NULL);
+	MLX5_ASSERT(rte_eal_process_type() == RTE_PROC_PRIMARY);
 	/* Allocate needed buffers. */
-	ret = rxq_alloc_elts(rxq_ctrl);
+	ret = rxq_alloc_elts(rxq->ctrl);
 	if (ret) {
 		DRV_LOG(ERR, "Cannot reallocate buffers for Rx WQ");
 		rte_errno = errno;
 		return ret;
 	}
 	rte_io_wmb();
-	*rxq->cq_db = rte_cpu_to_be_32(rxq->cq_ci);
+	*rxq_data->cq_db = rte_cpu_to_be_32(rxq_data->cq_ci);
 	rte_io_wmb();
 	/* Reset RQ consumer before moving queue to READY state. */
-	*rxq->rq_db = rte_cpu_to_be_32(0);
+	*rxq_data->rq_db = rte_cpu_to_be_32(0);
 	rte_io_wmb();
-	ret = priv->obj_ops.rxq_obj_modify(rxq_ctrl->obj, MLX5_RXQ_MOD_RST2RDY);
+	ret = priv->obj_ops.rxq_obj_modify(rxq, MLX5_RXQ_MOD_RST2RDY);
 	if (ret) {
 		DRV_LOG(ERR, "Cannot change Rx WQ state to READY:  %s",
 			strerror(errno));
@@ -565,8 +565,8 @@ mlx5_rx_queue_start_primary(struct rte_eth_dev *dev, uint16_t idx)
 		return ret;
 	}
 	/* Reinitialize RQ - set WQEs. */
-	mlx5_rxq_initialize(rxq);
-	rxq->err_state = MLX5_RXQ_ERR_STATE_NO_ERROR;
+	mlx5_rxq_initialize(rxq_data);
+	rxq_data->err_state = MLX5_RXQ_ERR_STATE_NO_ERROR;
 	/* Set actual queue state. */
 	dev->data->rx_queue_state[idx] = RTE_ETH_QUEUE_STATE_STARTED;
 	return 0;
@@ -1770,15 +1770,19 @@ int
 mlx5_rxq_release(struct rte_eth_dev *dev, uint16_t idx)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
-	struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, idx);
-	struct mlx5_rxq_ctrl *rxq_ctrl = rxq->ctrl;
+	struct mlx5_rxq_priv *rxq;
+	struct mlx5_rxq_ctrl *rxq_ctrl;
 
-	if (priv->rxqs == NULL || (*priv->rxqs)[idx] == NULL)
+	if (priv->rxq_privs == NULL)
+		return 0;
+	rxq = mlx5_rxq_get(dev, idx);
+	if (rxq == NULL)
 		return 0;
 	if (mlx5_rxq_deref(dev, idx) > 1)
 		return 1;
-	if (rxq_ctrl->obj) {
-		priv->obj_ops.rxq_obj_release(rxq_ctrl->obj);
+	rxq_ctrl = rxq->ctrl;
+	if (rxq_ctrl->obj != NULL) {
+		priv->obj_ops.rxq_obj_release(rxq);
 		LIST_REMOVE(rxq_ctrl->obj, next);
 		mlx5_free(rxq_ctrl->obj);
 		rxq_ctrl->obj = NULL;
diff --git a/drivers/net/mlx5/mlx5_rxtx.c b/drivers/net/mlx5/mlx5_rxtx.c
index 7b984eff35f..d44d6d8e4c3 100644
--- a/drivers/net/mlx5/mlx5_rxtx.c
+++ b/drivers/net/mlx5/mlx5_rxtx.c
@@ -374,11 +374,9 @@ mlx5_queue_state_modify_primary(struct rte_eth_dev *dev,
 	struct mlx5_priv *priv = dev->data->dev_private;
 
 	if (sm->is_wq) {
-		struct mlx5_rxq_data *rxq = (*priv->rxqs)[sm->queue_id];
-		struct mlx5_rxq_ctrl *rxq_ctrl =
-			container_of(rxq, struct mlx5_rxq_ctrl, rxq);
+		struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, sm->queue_id);
 
-		ret = priv->obj_ops.rxq_obj_modify(rxq_ctrl->obj, sm->state);
+		ret = priv->obj_ops.rxq_obj_modify(rxq, sm->state);
 		if (ret) {
 			DRV_LOG(ERR, "Cannot change Rx WQ state to %u  - %s",
 					sm->state, strerror(errno));
diff --git a/drivers/net/mlx5/mlx5_trigger.c b/drivers/net/mlx5/mlx5_trigger.c
index f376f4d6fc4..b3188f510fb 100644
--- a/drivers/net/mlx5/mlx5_trigger.c
+++ b/drivers/net/mlx5/mlx5_trigger.c
@@ -180,7 +180,7 @@ mlx5_rxq_start(struct rte_eth_dev *dev)
 			rte_errno = ENOMEM;
 			goto error;
 		}
-		ret = priv->obj_ops.rxq_obj_new(dev, i);
+		ret = priv->obj_ops.rxq_obj_new(rxq);
 		if (ret) {
 			mlx5_free(rxq_ctrl->obj);
 			goto error;
diff --git a/drivers/net/mlx5/mlx5_vlan.c b/drivers/net/mlx5/mlx5_vlan.c
index 60f97f2d2d1..586ba7166cb 100644
--- a/drivers/net/mlx5/mlx5_vlan.c
+++ b/drivers/net/mlx5/mlx5_vlan.c
@@ -91,11 +91,11 @@ void
 mlx5_vlan_strip_queue_set(struct rte_eth_dev *dev, uint16_t queue, int on)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
-	struct mlx5_rxq_data *rxq = (*priv->rxqs)[queue];
-	struct mlx5_rxq_ctrl *rxq_ctrl =
-		container_of(rxq, struct mlx5_rxq_ctrl, rxq);
+	struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, queue);
+	struct mlx5_rxq_data *rxq_data = &rxq->ctrl->rxq;
 	int ret = 0;
 
+	MLX5_ASSERT(rxq != NULL && rxq->ctrl != NULL);
 	/* Validate hw support */
 	if (!priv->config.hw_vlan_strip) {
 		DRV_LOG(ERR, "port %u VLAN stripping is not supported",
@@ -109,20 +109,20 @@ mlx5_vlan_strip_queue_set(struct rte_eth_dev *dev, uint16_t queue, int on)
 		return;
 	}
 	DRV_LOG(DEBUG, "port %u set VLAN stripping offloads %d for port %uqueue %d",
-		dev->data->port_id, on, rxq->port_id, queue);
-	if (!rxq_ctrl->obj) {
+		dev->data->port_id, on, rxq_data->port_id, queue);
+	if (rxq->ctrl->obj == NULL) {
 		/* Update related bits in RX queue. */
-		rxq->vlan_strip = !!on;
+		rxq_data->vlan_strip = !!on;
 		return;
 	}
-	ret = priv->obj_ops.rxq_obj_modify_vlan_strip(rxq_ctrl->obj, on);
+	ret = priv->obj_ops.rxq_obj_modify_vlan_strip(rxq, on);
 	if (ret) {
 		DRV_LOG(ERR, "Port %u failed to modify object stripping mode:"
 			" %s", dev->data->port_id, strerror(rte_errno));
 		return;
 	}
 	/* Update related bits in RX queue. */
-	rxq->vlan_strip = !!on;
+	rxq_data->vlan_strip = !!on;
 }
 
 /**
-- 
2.33.0


^ permalink raw reply	[flat|nested] 29+ messages in thread

* [dpdk-dev] [PATCH v2 11/13] net/mlx5: remove Rx queue data list from device
  2021-10-16  9:12 ` [dpdk-dev] [PATCH v2 00/13] " Xueming Li
                     ` (9 preceding siblings ...)
  2021-10-16  9:12   ` [dpdk-dev] [PATCH v2 10/13] net/mlx5: move Rx queue DevX resource Xueming Li
@ 2021-10-16  9:12   ` Xueming Li
  2021-10-16  9:12   ` [dpdk-dev] [PATCH v2 12/13] net/mlx5: support shared Rx queue Xueming Li
                     ` (2 subsequent siblings)
  13 siblings, 0 replies; 29+ messages in thread
From: Xueming Li @ 2021-10-16  9:12 UTC (permalink / raw)
  To: dev; +Cc: xuemingl, Lior Margalit, Matan Azrad, Viacheslav Ovsiienko

The Rx queue data list (priv->rxqs) can be replaced by the Rx queue
private list (priv->rxq_privs). Remove it and access the queue data
through the universal wrapper APIs instead (see the sketch below).
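
For illustration, a minimal sketch (not part of the patch) of the
wrapper-based access pattern, assuming the mlx5_rxq_get(),
mlx5_rxq_data_get() and mlx5_rxq_ctrl_get() helpers introduced
earlier in this series:

    /* Hypothetical helper: walk configured Rx queues through the
     * wrapper APIs instead of dereferencing the removed priv->rxqs. */
    static void
    example_walk_rxqs(struct rte_eth_dev *dev)
    {
            struct mlx5_priv *priv = dev->data->dev_private;
            unsigned int i;

            for (i = 0; i != priv->rxqs_n; ++i) {
                    struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, i);
                    struct mlx5_rxq_data *data = mlx5_rxq_data_get(dev, i);

                    if (rxq == NULL || data == NULL)
                            continue; /* Queue not configured. */
                    /* Use rxq for per-port state, data for datapath state. */
            }
    }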

Signed-off-by: Xueming Li <xuemingl@nvidia.com>
---
 drivers/net/mlx5/linux/mlx5_verbs.c |  7 ++---
 drivers/net/mlx5/mlx5.c             | 10 +------
 drivers/net/mlx5/mlx5.h             |  1 -
 drivers/net/mlx5/mlx5_devx.c        | 13 +++++----
 drivers/net/mlx5/mlx5_ethdev.c      |  6 +---
 drivers/net/mlx5/mlx5_flow.c        | 45 +++++++++++++++--------------
 drivers/net/mlx5/mlx5_rss.c         |  6 ++--
 drivers/net/mlx5/mlx5_rx.c          | 19 +++++-------
 drivers/net/mlx5/mlx5_rx.h          |  9 +++---
 drivers/net/mlx5/mlx5_rxq.c         | 23 ++++++---------
 drivers/net/mlx5/mlx5_rxtx_vec.c    |  6 ++--
 drivers/net/mlx5/mlx5_stats.c       |  9 +++---
 drivers/net/mlx5/mlx5_trigger.c     |  2 +-
 13 files changed, 69 insertions(+), 87 deletions(-)

diff --git a/drivers/net/mlx5/linux/mlx5_verbs.c b/drivers/net/mlx5/linux/mlx5_verbs.c
index a2a9b9c1f98..0e68a13208b 100644
--- a/drivers/net/mlx5/linux/mlx5_verbs.c
+++ b/drivers/net/mlx5/linux/mlx5_verbs.c
@@ -527,11 +527,10 @@ mlx5_ibv_ind_table_new(struct rte_eth_dev *dev, const unsigned int log_n,
 
 	MLX5_ASSERT(ind_tbl);
 	for (i = 0; i != ind_tbl->queues_n; ++i) {
-		struct mlx5_rxq_data *rxq = (*priv->rxqs)[ind_tbl->queues[i]];
-		struct mlx5_rxq_ctrl *rxq_ctrl =
-				container_of(rxq, struct mlx5_rxq_ctrl, rxq);
+		struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev,
+							 ind_tbl->queues[i]);
 
-		wq[i] = rxq_ctrl->obj->wq;
+		wq[i] = rxq->ctrl->obj->wq;
 	}
 	MLX5_ASSERT(i > 0);
 	/* Finalise indirection table. */
diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index 477ad8c1bc9..6240f6f5dc6 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -1578,20 +1578,12 @@ mlx5_dev_close(struct rte_eth_dev *dev)
 	mlx5_mp_os_req_stop_rxtx(dev);
 	/* Free the eCPRI flex parser resource. */
 	mlx5_flex_parser_ecpri_release(dev);
-	if (priv->rxqs != NULL) {
+	if (priv->rxq_privs != NULL) {
 		/* XXX race condition if mlx5_rx_burst() is still running. */
 		rte_delay_us_sleep(1000);
 		for (i = 0; (i != priv->rxqs_n); ++i)
 			mlx5_rxq_release(dev, i);
 		priv->rxqs_n = 0;
-		priv->rxqs = NULL;
-	}
-	if (priv->representor) {
-		/* Each representor has a dedicated interrupts handler */
-		mlx5_free(dev->intr_handle);
-		dev->intr_handle = NULL;
-	}
-	if (priv->rxq_privs != NULL) {
 		mlx5_free(priv->rxq_privs);
 		priv->rxq_privs = NULL;
 	}
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index a2735cbb350..55612f777ea 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -1406,7 +1406,6 @@ struct mlx5_priv {
 	unsigned int rxqs_n; /* RX queues array size. */
 	unsigned int txqs_n; /* TX queues array size. */
 	struct mlx5_rxq_priv *(*rxq_privs)[]; /* RX queue non-shared data. */
-	struct mlx5_rxq_data *(*rxqs)[]; /* (Shared) RX queues. */
 	struct mlx5_txq_data *(*txqs)[]; /* TX queues. */
 	struct rte_mempool *mprq_mp; /* Mempool for Multi-Packet RQ. */
 	struct rte_eth_rss_conf rss_conf; /* RSS configuration. */
diff --git a/drivers/net/mlx5/mlx5_devx.c b/drivers/net/mlx5/mlx5_devx.c
index f7f7526dbf6..b767470dea0 100644
--- a/drivers/net/mlx5/mlx5_devx.c
+++ b/drivers/net/mlx5/mlx5_devx.c
@@ -682,15 +682,16 @@ mlx5_devx_tir_attr_set(struct rte_eth_dev *dev, const uint8_t *rss_key,
 
 	/* NULL queues designate drop queue. */
 	if (ind_tbl->queues != NULL) {
-		struct mlx5_rxq_data *rxq_data =
-					(*priv->rxqs)[ind_tbl->queues[0]];
-		struct mlx5_rxq_ctrl *rxq_ctrl =
-			container_of(rxq_data, struct mlx5_rxq_ctrl, rxq);
-		rxq_obj_type = rxq_ctrl->type;
+		struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev,
+							 ind_tbl->queues[0]);
 
+		rxq_obj_type = rxq->ctrl->type;
 		/* Enable TIR LRO only if all the queues were configured for. */
 		for (i = 0; i < ind_tbl->queues_n; ++i) {
-			if (!(*priv->rxqs)[ind_tbl->queues[i]]->lro) {
+			struct mlx5_rxq_data *rxq_i =
+				mlx5_rxq_data_get(dev, ind_tbl->queues[i]);
+
+			if (rxq_i != NULL && !rxq_i->lro) {
 				lro = false;
 				break;
 			}
diff --git a/drivers/net/mlx5/mlx5_ethdev.c b/drivers/net/mlx5/mlx5_ethdev.c
index ee1189b929d..070ff149488 100644
--- a/drivers/net/mlx5/mlx5_ethdev.c
+++ b/drivers/net/mlx5/mlx5_ethdev.c
@@ -114,7 +114,6 @@ mlx5_dev_configure(struct rte_eth_dev *dev)
 		rte_errno = ENOMEM;
 		return -rte_errno;
 	}
-	priv->rxqs = (void *)dev->data->rx_queues;
 	priv->txqs = (void *)dev->data->tx_queues;
 	if (txqs_n != priv->txqs_n) {
 		DRV_LOG(INFO, "port %u Tx queues number update: %u -> %u",
@@ -171,11 +170,8 @@ mlx5_dev_configure_rss_reta(struct rte_eth_dev *dev)
 		return -rte_errno;
 	}
 	for (i = 0, j = 0; i < rxqs_n; i++) {
-		struct mlx5_rxq_data *rxq_data;
-		struct mlx5_rxq_ctrl *rxq_ctrl;
+		struct mlx5_rxq_ctrl *rxq_ctrl = mlx5_rxq_ctrl_get(dev, i);
 
-		rxq_data = (*priv->rxqs)[i];
-		rxq_ctrl = container_of(rxq_data, struct mlx5_rxq_ctrl, rxq);
 		if (rxq_ctrl && rxq_ctrl->type == MLX5_RXQ_TYPE_STANDARD)
 			rss_queue_arr[j++] = i;
 	}
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index c10b9112593..49a74edd2e6 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -1166,10 +1166,11 @@ flow_drv_rxq_flags_set(struct rte_eth_dev *dev,
 		return;
 	for (i = 0; i != ind_tbl->queues_n; ++i) {
 		int idx = ind_tbl->queues[i];
-		struct mlx5_rxq_ctrl *rxq_ctrl =
-			container_of((*priv->rxqs)[idx],
-				     struct mlx5_rxq_ctrl, rxq);
+		struct mlx5_rxq_ctrl *rxq_ctrl = mlx5_rxq_ctrl_get(dev, idx);
 
+		MLX5_ASSERT(rxq_ctrl != NULL);
+		if (rxq_ctrl == NULL)
+			continue;
 		/*
 		 * To support metadata register copy on Tx loopback,
 		 * this must be always enabled (metadata may arive
@@ -1261,10 +1262,11 @@ flow_drv_rxq_flags_trim(struct rte_eth_dev *dev,
 	MLX5_ASSERT(dev->data->dev_started);
 	for (i = 0; i != ind_tbl->queues_n; ++i) {
 		int idx = ind_tbl->queues[i];
-		struct mlx5_rxq_ctrl *rxq_ctrl =
-			container_of((*priv->rxqs)[idx],
-				     struct mlx5_rxq_ctrl, rxq);
+		struct mlx5_rxq_ctrl *rxq_ctrl = mlx5_rxq_ctrl_get(dev, idx);
 
+		MLX5_ASSERT(rxq_ctrl != NULL);
+		if (rxq_ctrl == NULL)
+			continue;
 		if (priv->config.dv_flow_en &&
 		    priv->config.dv_xmeta_en != MLX5_XMETA_MODE_LEGACY &&
 		    mlx5_flow_ext_mreg_supported(dev)) {
@@ -1325,18 +1327,16 @@ flow_rxq_flags_clear(struct rte_eth_dev *dev)
 	unsigned int i;
 
 	for (i = 0; i != priv->rxqs_n; ++i) {
-		struct mlx5_rxq_ctrl *rxq_ctrl;
+		struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, i);
 		unsigned int j;
 
-		if (!(*priv->rxqs)[i])
+		if (rxq == NULL || rxq->ctrl == NULL)
 			continue;
-		rxq_ctrl = container_of((*priv->rxqs)[i],
-					struct mlx5_rxq_ctrl, rxq);
-		rxq_ctrl->flow_mark_n = 0;
-		rxq_ctrl->rxq.mark = 0;
+		rxq->ctrl->flow_mark_n = 0;
+		rxq->ctrl->rxq.mark = 0;
 		for (j = 0; j != MLX5_FLOW_TUNNEL; ++j)
-			rxq_ctrl->flow_tunnels_n[j] = 0;
-		rxq_ctrl->rxq.tunnel = 0;
+			rxq->ctrl->flow_tunnels_n[j] = 0;
+		rxq->ctrl->rxq.tunnel = 0;
 	}
 }
 
@@ -1350,13 +1350,15 @@ void
 mlx5_flow_rxq_dynf_metadata_set(struct rte_eth_dev *dev)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
-	struct mlx5_rxq_data *data;
 	unsigned int i;
 
 	for (i = 0; i != priv->rxqs_n; ++i) {
-		if (!(*priv->rxqs)[i])
+		struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, i);
+		struct mlx5_rxq_data *data;
+
+		if (rxq == NULL || rxq->ctrl == NULL)
 			continue;
-		data = (*priv->rxqs)[i];
+		data = &rxq->ctrl->rxq;
 		if (!rte_flow_dynf_metadata_avail()) {
 			data->dynf_meta = 0;
 			data->flow_meta_mask = 0;
@@ -1547,7 +1549,7 @@ mlx5_flow_validate_action_queue(const struct rte_flow_action *action,
 					  RTE_FLOW_ERROR_TYPE_ACTION_CONF,
 					  &queue->index,
 					  "queue index out of range");
-	if (!(*priv->rxqs)[queue->index])
+	if (mlx5_rxq_get(dev, queue->index) == NULL)
 		return rte_flow_error_set(error, EINVAL,
 					  RTE_FLOW_ERROR_TYPE_ACTION_CONF,
 					  &queue->index,
@@ -1578,7 +1580,7 @@ mlx5_flow_validate_action_queue(const struct rte_flow_action *action,
  *   0 on success, a negative errno code on error.
  */
 static int
-mlx5_validate_rss_queues(const struct rte_eth_dev *dev,
+mlx5_validate_rss_queues(struct rte_eth_dev *dev,
 			 const uint16_t *queues, uint32_t queues_n,
 			 const char **error, uint32_t *queue_idx)
 {
@@ -1594,13 +1596,12 @@ mlx5_validate_rss_queues(const struct rte_eth_dev *dev,
 			*queue_idx = i;
 			return -EINVAL;
 		}
-		if (!(*priv->rxqs)[queues[i]]) {
+		rxq_ctrl = mlx5_rxq_ctrl_get(dev, queues[i]);
+		if (rxq_ctrl == NULL) {
 			*error =  "queue is not configured";
 			*queue_idx = i;
 			return -EINVAL;
 		}
-		rxq_ctrl = container_of((*priv->rxqs)[queues[i]],
-					struct mlx5_rxq_ctrl, rxq);
 		if (i == 0)
 			rxq_type = rxq_ctrl->type;
 		if (rxq_type != rxq_ctrl->type) {
diff --git a/drivers/net/mlx5/mlx5_rss.c b/drivers/net/mlx5/mlx5_rss.c
index c32129cdc2b..9ffc44b179f 100644
--- a/drivers/net/mlx5/mlx5_rss.c
+++ b/drivers/net/mlx5/mlx5_rss.c
@@ -65,9 +65,11 @@ mlx5_rss_hash_update(struct rte_eth_dev *dev,
 	priv->rss_conf.rss_hf = rss_conf->rss_hf;
 	/* Enable the RSS hash in all Rx queues. */
 	for (i = 0, idx = 0; idx != priv->rxqs_n; ++i) {
-		if (!(*priv->rxqs)[i])
+		struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, i);
+
+		if (rxq == NULL || rxq->ctrl == NULL)
 			continue;
-		(*priv->rxqs)[i]->rss_hash = !!rss_conf->rss_hf &&
+		rxq->ctrl->rxq.rss_hash = !!rss_conf->rss_hf &&
 			!!(dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS);
 		++idx;
 	}
diff --git a/drivers/net/mlx5/mlx5_rx.c b/drivers/net/mlx5/mlx5_rx.c
index 09de26c0d39..3017a8da20c 100644
--- a/drivers/net/mlx5/mlx5_rx.c
+++ b/drivers/net/mlx5/mlx5_rx.c
@@ -148,10 +148,8 @@ void
 mlx5_rxq_info_get(struct rte_eth_dev *dev, uint16_t rx_queue_id,
 		  struct rte_eth_rxq_info *qinfo)
 {
-	struct mlx5_priv *priv = dev->data->dev_private;
-	struct mlx5_rxq_data *rxq = (*priv->rxqs)[rx_queue_id];
-	struct mlx5_rxq_ctrl *rxq_ctrl =
-		container_of(rxq, struct mlx5_rxq_ctrl, rxq);
+	struct mlx5_rxq_ctrl *rxq_ctrl = mlx5_rxq_ctrl_get(dev, rx_queue_id);
+	struct mlx5_rxq_data *rxq = mlx5_rxq_data_get(dev, rx_queue_id);
 
 	if (!rxq)
 		return;
@@ -162,7 +160,10 @@ mlx5_rxq_info_get(struct rte_eth_dev *dev, uint16_t rx_queue_id,
 	qinfo->conf.rx_thresh.wthresh = 0;
 	qinfo->conf.rx_free_thresh = rxq->rq_repl_thresh;
 	qinfo->conf.rx_drop_en = 1;
-	qinfo->conf.rx_deferred_start = rxq_ctrl ? 0 : 1;
+	if (rxq_ctrl == NULL || rxq_ctrl->obj == NULL)
+		qinfo->conf.rx_deferred_start = 0;
+	else
+		qinfo->conf.rx_deferred_start = 1;
 	qinfo->conf.offloads = dev->data->dev_conf.rxmode.offloads;
 	qinfo->scattered_rx = dev->data->scattered_rx;
 	qinfo->nb_desc = mlx5_rxq_mprq_enabled(rxq) ?
@@ -191,10 +192,8 @@ mlx5_rx_burst_mode_get(struct rte_eth_dev *dev,
 		       struct rte_eth_burst_mode *mode)
 {
 	eth_rx_burst_t pkt_burst = dev->rx_pkt_burst;
-	struct mlx5_priv *priv = dev->data->dev_private;
-	struct mlx5_rxq_data *rxq;
+	struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, rx_queue_id);
 
-	rxq = (*priv->rxqs)[rx_queue_id];
 	if (!rxq) {
 		rte_errno = EINVAL;
 		return -rte_errno;
@@ -245,15 +244,13 @@ mlx5_rx_burst_mode_get(struct rte_eth_dev *dev,
 uint32_t
 mlx5_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 {
-	struct mlx5_priv *priv = dev->data->dev_private;
-	struct mlx5_rxq_data *rxq;
+	struct mlx5_rxq_data *rxq = mlx5_rxq_data_get(dev, rx_queue_id);
 
 	if (dev->rx_pkt_burst == NULL ||
 	    dev->rx_pkt_burst == removed_rx_burst) {
 		rte_errno = ENOTSUP;
 		return -rte_errno;
 	}
-	rxq = (*priv->rxqs)[rx_queue_id];
 	if (!rxq) {
 		rte_errno = EINVAL;
 		return -rte_errno;
diff --git a/drivers/net/mlx5/mlx5_rx.h b/drivers/net/mlx5/mlx5_rx.h
index 25f7fc2071a..161399c764d 100644
--- a/drivers/net/mlx5/mlx5_rx.h
+++ b/drivers/net/mlx5/mlx5_rx.h
@@ -606,14 +606,13 @@ mlx5_mprq_enabled(struct rte_eth_dev *dev)
 		return 0;
 	/* All the configured queues should be enabled. */
 	for (i = 0; i < priv->rxqs_n; ++i) {
-		struct mlx5_rxq_data *rxq = (*priv->rxqs)[i];
-		struct mlx5_rxq_ctrl *rxq_ctrl = container_of
-			(rxq, struct mlx5_rxq_ctrl, rxq);
+		struct mlx5_rxq_ctrl *rxq_ctrl = mlx5_rxq_ctrl_get(dev, i);
 
-		if (rxq == NULL || rxq_ctrl->type != MLX5_RXQ_TYPE_STANDARD)
+		if (rxq_ctrl == NULL ||
+		    rxq_ctrl->type != MLX5_RXQ_TYPE_STANDARD)
 			continue;
 		n_ibv++;
-		if (mlx5_rxq_mprq_enabled(rxq))
+		if (mlx5_rxq_mprq_enabled(&rxq_ctrl->rxq))
 			++n;
 	}
 	/* Multi-Packet RQ can't be partially configured. */
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index b9d39a28d78..8eec9a4ed8d 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -729,7 +729,7 @@ mlx5_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 	}
 	DRV_LOG(DEBUG, "port %u adding Rx queue %u to list",
 		dev->data->port_id, idx);
-	(*priv->rxqs)[idx] = &rxq_ctrl->rxq;
+	dev->data->rx_queues[idx] = &rxq_ctrl->rxq;
 	return 0;
 }
 
@@ -811,7 +811,7 @@ mlx5_rx_hairpin_queue_setup(struct rte_eth_dev *dev, uint16_t idx,
 	}
 	DRV_LOG(DEBUG, "port %u adding hairpin Rx queue %u to list",
 		dev->data->port_id, idx);
-	(*priv->rxqs)[idx] = &rxq_ctrl->rxq;
+	dev->data->rx_queues[idx] = &rxq_ctrl->rxq;
 	return 0;
 }
 
@@ -1712,8 +1712,7 @@ mlx5_rxq_get(struct rte_eth_dev *dev, uint16_t idx)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
 
-	if (priv->rxq_privs == NULL)
-		return NULL;
+	MLX5_ASSERT(priv->rxq_privs != NULL);
 	return (*priv->rxq_privs)[idx];
 }
 
@@ -1799,7 +1798,7 @@ mlx5_rxq_release(struct rte_eth_dev *dev, uint16_t idx)
 		LIST_REMOVE(rxq, owner_entry);
 		LIST_REMOVE(rxq_ctrl, next);
 		mlx5_free(rxq_ctrl);
-		(*priv->rxqs)[idx] = NULL;
+		dev->data->rx_queues[idx] = NULL;
 		mlx5_free(rxq);
 		(*priv->rxq_privs)[idx] = NULL;
 	}
@@ -1845,14 +1844,10 @@ enum mlx5_rxq_type
 mlx5_rxq_get_type(struct rte_eth_dev *dev, uint16_t idx)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
-	struct mlx5_rxq_ctrl *rxq_ctrl = NULL;
+	struct mlx5_rxq_ctrl *rxq_ctrl = mlx5_rxq_ctrl_get(dev, idx);
 
-	if (idx < priv->rxqs_n && (*priv->rxqs)[idx]) {
-		rxq_ctrl = container_of((*priv->rxqs)[idx],
-					struct mlx5_rxq_ctrl,
-					rxq);
+	if (idx < priv->rxqs_n && rxq_ctrl != NULL)
 		return rxq_ctrl->type;
-	}
 	return MLX5_RXQ_TYPE_UNDEFINED;
 }
 
@@ -2619,13 +2614,13 @@ mlx5_rxq_timestamp_set(struct rte_eth_dev *dev)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
 	struct mlx5_dev_ctx_shared *sh = priv->sh;
-	struct mlx5_rxq_data *data;
 	unsigned int i;
 
 	for (i = 0; i != priv->rxqs_n; ++i) {
-		if (!(*priv->rxqs)[i])
+		struct mlx5_rxq_data *data = mlx5_rxq_data_get(dev, i);
+
+		if (data == NULL)
 			continue;
-		data = (*priv->rxqs)[i];
 		data->sh = sh;
 		data->rt_timestamp = priv->config.rt_timestamp;
 	}
diff --git a/drivers/net/mlx5/mlx5_rxtx_vec.c b/drivers/net/mlx5/mlx5_rxtx_vec.c
index 511681841ca..6212ce8247d 100644
--- a/drivers/net/mlx5/mlx5_rxtx_vec.c
+++ b/drivers/net/mlx5/mlx5_rxtx_vec.c
@@ -578,11 +578,11 @@ mlx5_check_vec_rx_support(struct rte_eth_dev *dev)
 		return -ENOTSUP;
 	/* All the configured queues should support. */
 	for (i = 0; i < priv->rxqs_n; ++i) {
-		struct mlx5_rxq_data *rxq = (*priv->rxqs)[i];
+		struct mlx5_rxq_data *rxq_data = mlx5_rxq_data_get(dev, i);
 
-		if (!rxq)
+		if (!rxq_data)
 			continue;
-		if (mlx5_rxq_check_vec_support(rxq) < 0)
+		if (mlx5_rxq_check_vec_support(rxq_data) < 0)
 			break;
 	}
 	if (i != priv->rxqs_n)
diff --git a/drivers/net/mlx5/mlx5_stats.c b/drivers/net/mlx5/mlx5_stats.c
index ae2f5668a74..732775954ad 100644
--- a/drivers/net/mlx5/mlx5_stats.c
+++ b/drivers/net/mlx5/mlx5_stats.c
@@ -107,7 +107,7 @@ mlx5_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
 	memset(&tmp, 0, sizeof(tmp));
 	/* Add software counters. */
 	for (i = 0; (i != priv->rxqs_n); ++i) {
-		struct mlx5_rxq_data *rxq = (*priv->rxqs)[i];
+		struct mlx5_rxq_data *rxq = mlx5_rxq_data_get(dev, i);
 
 		if (rxq == NULL)
 			continue;
@@ -181,10 +181,11 @@ mlx5_stats_reset(struct rte_eth_dev *dev)
 	unsigned int i;
 
 	for (i = 0; (i != priv->rxqs_n); ++i) {
-		if ((*priv->rxqs)[i] == NULL)
+		struct mlx5_rxq_data *rxq_data = mlx5_rxq_data_get(dev, i);
+
+		if (rxq_data == NULL)
 			continue;
-		memset(&(*priv->rxqs)[i]->stats, 0,
-		       sizeof(struct mlx5_rxq_stats));
+		memset(&rxq_data->stats, 0, sizeof(struct mlx5_rxq_stats));
 	}
 	for (i = 0; (i != priv->txqs_n); ++i) {
 		if ((*priv->txqs)[i] == NULL)
diff --git a/drivers/net/mlx5/mlx5_trigger.c b/drivers/net/mlx5/mlx5_trigger.c
index b3188f510fb..1e865e74e39 100644
--- a/drivers/net/mlx5/mlx5_trigger.c
+++ b/drivers/net/mlx5/mlx5_trigger.c
@@ -176,7 +176,7 @@ mlx5_rxq_start(struct rte_eth_dev *dev)
 		if (!rxq_ctrl->obj) {
 			DRV_LOG(ERR,
 				"Port %u Rx queue %u can't allocate resources.",
-				dev->data->port_id, (*priv->rxqs)[i]->idx);
+				dev->data->port_id, i);
 			rte_errno = ENOMEM;
 			goto error;
 		}
-- 
2.33.0


^ permalink raw reply	[flat|nested] 29+ messages in thread

* [dpdk-dev] [PATCH v2 12/13] net/mlx5: support shared Rx queue
  2021-10-16  9:12 ` [dpdk-dev] [PATCH v2 00/13] " Xueming Li
                     ` (10 preceding siblings ...)
  2021-10-16  9:12   ` [dpdk-dev] [PATCH v2 11/13] net/mlx5: remove Rx queue data list from device Xueming Li
@ 2021-10-16  9:12   ` Xueming Li
  2021-10-16  9:12   ` [dpdk-dev] [PATCH v2 13/13] net/mlx5: add shared Rx queue port datapath support Xueming Li
  2021-10-19  8:19   ` [dpdk-dev] [PATCH v2 00/13] net/mlx5: support shared Rx queue Slava Ovsiienko
  13 siblings, 0 replies; 29+ messages in thread
From: Xueming Li @ 2021-10-16  9:12 UTC (permalink / raw)
  To: dev; +Cc: xuemingl, Lior Margalit, Matan Azrad, Viacheslav Ovsiienko

This patch introduces shared RXQ. All shared Rx queues with the same
group and queue ID share the same rxq_ctrl. Rxq_ctrl and rxq_data are
shared: all queues from different member ports share the same WQ and
CQ, essentially one Rx WQ, and mbufs are filled into this singleton WQ.

The shared rxq_data is set as the rxq object into the device Rx queues
of all member ports and is used for receiving packets. Polling a queue
of any member port may return packets of any member; mbuf->port
identifies the source port (see the usage sketch below).
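
As a usage illustration (not part of this patch), a minimal
application-side sketch assuming the generic shared-Rx-queue API from
the dependent ethdev series; member ports passing the same non-zero
share_group for the same queue index land on one shared RxQ:

    /* Hypothetical setup: both member ports join share group 1 for
     * queue 0, mapping onto the same WQ/CQ pair. "mp" is assumed to
     * be a mempool shared by the member ports. */
    struct rte_eth_dev_info info;
    struct rte_eth_rxconf rxconf = { .share_group = 1 };
    uint16_t ports[] = { 0, 1 }; /* E.g. representors of one device. */
    unsigned int i;

    for (i = 0; i < RTE_DIM(ports); i++) {
            rte_eth_dev_info_get(ports[i], &info);
            if (!(info.dev_capa & RTE_ETH_DEV_CAPA_RXQ_SHARE))
                    rte_exit(EXIT_FAILURE, "shared RxQ not supported\n");
            rte_eth_rx_queue_setup(ports[i], 0, 512, SOCKET_ID_ANY,
                                   &rxconf, mp);
    }
    /* rte_eth_rx_burst() on queue 0 of either port may return mbufs
     * of any member; mbuf->port gives the actual source port. */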

Signed-off-by: Xueming Li <xuemingl@nvidia.com>
---
 doc/guides/nics/features/mlx5.ini   |   1 +
 doc/guides/nics/mlx5.rst            |   6 +
 drivers/net/mlx5/linux/mlx5_os.c    |   2 +
 drivers/net/mlx5/linux/mlx5_verbs.c |  12 +-
 drivers/net/mlx5/mlx5.h             |   4 +-
 drivers/net/mlx5/mlx5_devx.c        |  50 +++++--
 drivers/net/mlx5/mlx5_ethdev.c      |   5 +
 drivers/net/mlx5/mlx5_rx.h          |   4 +
 drivers/net/mlx5/mlx5_rxq.c         | 218 ++++++++++++++++++++++++----
 drivers/net/mlx5/mlx5_trigger.c     |  76 ++++++----
 10 files changed, 298 insertions(+), 80 deletions(-)

diff --git a/doc/guides/nics/features/mlx5.ini b/doc/guides/nics/features/mlx5.ini
index f01abd4231f..ff5e669acc1 100644
--- a/doc/guides/nics/features/mlx5.ini
+++ b/doc/guides/nics/features/mlx5.ini
@@ -11,6 +11,7 @@ Removal event        = Y
 Rx interrupt         = Y
 Fast mbuf free       = Y
 Queue start/stop     = Y
+Shared Rx queue      = Y
 Burst mode info      = Y
 Power mgmt address monitor = Y
 MTU update           = Y
diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index bae73f42d88..d26f274dec4 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -113,6 +113,7 @@ Features
 - Connection tracking.
 - Sub-Function representors.
 - Sub-Function.
+- Shared Rx queue.
 
 
 Limitations
@@ -464,6 +465,11 @@ Limitations
   - In order to achieve best insertion rate, application should manage the flows per lcore.
   - Better to disable memory reclaim by setting ``reclaim_mem_mode`` to 0 to accelerate the flow object allocation and release with cache.
 
+ Shared Rx queue:
+
+  - The packet and byte counters of all member devices in the same share group are identical.
+  - The packet and byte counters of all queues with the same group and queue ID are identical.
+
 Statistics
 ----------
 
diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c
index 985f0bd4892..49acbe34817 100644
--- a/drivers/net/mlx5/linux/mlx5_os.c
+++ b/drivers/net/mlx5/linux/mlx5_os.c
@@ -457,6 +457,7 @@ mlx5_alloc_shared_dr(struct mlx5_priv *priv)
 			mlx5_glue->dr_create_flow_action_default_miss();
 	if (!sh->default_miss_action)
 		DRV_LOG(WARNING, "Default miss action is not supported.");
+	LIST_INIT(&sh->shared_rxqs);
 	return 0;
 error:
 	/* Rollback the created objects. */
@@ -531,6 +532,7 @@ mlx5_os_free_shared_dr(struct mlx5_priv *priv)
 	MLX5_ASSERT(sh && sh->refcnt);
 	if (sh->refcnt > 1)
 		return;
+	MLX5_ASSERT(LIST_EMPTY(&sh->shared_rxqs));
 #ifdef HAVE_MLX5DV_DR
 	if (sh->rx_domain) {
 		mlx5_glue->dr_destroy_domain(sh->rx_domain);
diff --git a/drivers/net/mlx5/linux/mlx5_verbs.c b/drivers/net/mlx5/linux/mlx5_verbs.c
index 0e68a13208b..17183adf732 100644
--- a/drivers/net/mlx5/linux/mlx5_verbs.c
+++ b/drivers/net/mlx5/linux/mlx5_verbs.c
@@ -459,20 +459,24 @@ mlx5_rxq_ibv_obj_new(struct mlx5_rxq_priv *rxq)
  *
  * @param rxq
  *   Pointer to Rx queue.
+ * @return
+ *   True when it is safe to release the RxQ object.
  */
-static void
+static bool
 mlx5_rxq_ibv_obj_release(struct mlx5_rxq_priv *rxq)
 {
 	struct mlx5_rxq_obj *rxq_obj = rxq->ctrl->obj;
 
-	MLX5_ASSERT(rxq_obj);
-	MLX5_ASSERT(rxq_obj->wq);
-	MLX5_ASSERT(rxq_obj->ibv_cq);
+	if (rxq_obj == NULL || rxq_obj->wq == NULL)
+		return true;
 	claim_zero(mlx5_glue->destroy_wq(rxq_obj->wq));
+	rxq_obj->wq = NULL;
+	MLX5_ASSERT(rxq_obj->ibv_cq);
 	claim_zero(mlx5_glue->destroy_cq(rxq_obj->ibv_cq));
 	if (rxq_obj->ibv_channel)
 		claim_zero(mlx5_glue->destroy_comp_channel
 							(rxq_obj->ibv_channel));
+	return true;
 }
 
 /**
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 55612f777ea..647a18d3916 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -1193,6 +1193,7 @@ struct mlx5_dev_ctx_shared {
 	struct mlx5_flex_parser_profiles fp[MLX5_FLEX_PARSER_MAX];
 	/* Flex parser profiles information. */
 	void *devx_rx_uar; /* DevX UAR for Rx. */
+	LIST_HEAD(shared_rxqs, mlx5_rxq_ctrl) shared_rxqs; /* Shared RXQs. */
 	struct mlx5_aso_age_mng *aso_age_mng;
 	/* Management data for aging mechanism using ASO Flow Hit. */
 	struct mlx5_geneve_tlv_option_resource *geneve_tlv_option_resource;
@@ -1257,6 +1258,7 @@ struct mlx5_rxq_obj {
 		};
 		struct mlx5_devx_obj *rq; /* DevX RQ object for hairpin. */
 		struct {
+			struct mlx5_devx_rmp devx_rmp; /* RMP for shared RQ. */
 			struct mlx5_devx_cq cq_obj; /* DevX CQ object. */
 			void *devx_channel;
 		};
@@ -1342,7 +1344,7 @@ struct mlx5_obj_ops {
 	int (*rxq_obj_new)(struct mlx5_rxq_priv *rxq);
 	int (*rxq_event_get)(struct mlx5_rxq_obj *rxq_obj);
 	int (*rxq_obj_modify)(struct mlx5_rxq_priv *rxq, uint8_t type);
-	void (*rxq_obj_release)(struct mlx5_rxq_priv *rxq);
+	bool (*rxq_obj_release)(struct mlx5_rxq_priv *rxq);
 	int (*ind_table_new)(struct rte_eth_dev *dev, const unsigned int log_n,
 			     struct mlx5_ind_table_obj *ind_tbl);
 	int (*ind_table_modify)(struct rte_eth_dev *dev,
diff --git a/drivers/net/mlx5/mlx5_devx.c b/drivers/net/mlx5/mlx5_devx.c
index b767470dea0..94253047141 100644
--- a/drivers/net/mlx5/mlx5_devx.c
+++ b/drivers/net/mlx5/mlx5_devx.c
@@ -88,6 +88,8 @@ mlx5_devx_modify_rq(struct mlx5_rxq_priv *rxq, uint8_t type)
 	default:
 		break;
 	}
+	if (rxq->ctrl->type == MLX5_RXQ_TYPE_HAIRPIN)
+		return mlx5_devx_cmd_modify_rq(rxq->ctrl->obj->rq, &rq_attr);
 	return mlx5_devx_cmd_modify_rq(rxq->devx_rq.rq, &rq_attr);
 }
 
@@ -152,22 +154,27 @@ mlx5_devx_modify_sq(struct mlx5_txq_obj *obj, enum mlx5_txq_modify_type type,
  *
  * @param rxq
  *   DevX Rx queue.
+ * @return
+ *   True when it is safe to release the RxQ object.
  */
-static void
+static bool
 mlx5_rxq_devx_obj_release(struct mlx5_rxq_priv *rxq)
 {
-	struct mlx5_rxq_ctrl *rxq_ctrl = rxq->ctrl;
-	struct mlx5_rxq_obj *rxq_obj = rxq_ctrl->obj;
+	struct mlx5_rxq_obj *rxq_obj = rxq->ctrl->obj;
 
-	MLX5_ASSERT(rxq != NULL);
-	MLX5_ASSERT(rxq_ctrl != NULL);
+	if (rxq_obj == NULL)
+		return true;
 	if (rxq_obj->rxq_ctrl->type == MLX5_RXQ_TYPE_HAIRPIN) {
-		MLX5_ASSERT(rxq_obj->rq);
+		if (rxq_obj->rq == NULL)
+			return true;
 		mlx5_devx_modify_rq(rxq, MLX5_RXQ_MOD_RDY2RST);
 		claim_zero(mlx5_devx_cmd_destroy(rxq_obj->rq));
 	} else {
+		if (rxq->devx_rq.rq == NULL)
+			return true;
 		mlx5_devx_rq_destroy(&rxq->devx_rq);
-		memset(&rxq->devx_rq, 0, sizeof(rxq->devx_rq));
+		if (rxq->devx_rq.rmp != NULL && rxq->devx_rq.rmp->ref_cnt > 0)
+			return false;
 		mlx5_devx_cq_destroy(&rxq_obj->cq_obj);
 		memset(&rxq_obj->cq_obj, 0, sizeof(rxq_obj->cq_obj));
 		if (rxq_obj->devx_channel) {
@@ -176,6 +183,7 @@ mlx5_rxq_devx_obj_release(struct mlx5_rxq_priv *rxq)
 			rxq_obj->devx_channel = NULL;
 		}
 	}
+	return true;
 }
 
 /**
@@ -269,6 +277,8 @@ mlx5_rxq_create_devx_rq_resources(struct mlx5_rxq_priv *rxq)
 						MLX5_WQ_END_PAD_MODE_NONE;
 	rq_attr.wq_attr.pd = priv->sh->pdn;
 	rq_attr.counter_set_id = priv->counter_set_id;
+	if (rxq_data->shared) /* Create RMP based RQ. */
+		rxq->devx_rq.rmp = &rxq_ctrl->obj->devx_rmp;
 	/* Create RQ using DevX API. */
 	return mlx5_devx_rq_create(priv->sh->ctx, &rxq->devx_rq,
 				   wqe_size, log_desc_n, &rq_attr,
@@ -299,6 +309,8 @@ mlx5_rxq_create_devx_cq_resources(struct mlx5_rxq_priv *rxq)
 	uint16_t event_nums[1] = { 0 };
 	int ret = 0;
 
+	if (rxq_ctrl->started)
+		return 0;
 	if (priv->config.cqe_comp && !rxq_data->hw_timestamp &&
 	    !rxq_data->lro) {
 		cq_attr.cqe_comp_en = 1u;
@@ -364,6 +376,7 @@ mlx5_rxq_create_devx_cq_resources(struct mlx5_rxq_priv *rxq)
 	rxq_data->cq_uar = mlx5_os_get_devx_uar_base_addr(sh->devx_rx_uar);
 	rxq_data->cqe_n = log_cqe_n;
 	rxq_data->cqn = cq_obj->cq->id;
+	rxq_data->cq_ci = 0;
 	if (rxq_ctrl->obj->devx_channel) {
 		ret = mlx5_os_devx_subscribe_devx_event
 					      (rxq_ctrl->obj->devx_channel,
@@ -462,7 +475,7 @@ mlx5_rxq_devx_obj_new(struct mlx5_rxq_priv *rxq)
 	if (rxq_ctrl->type == MLX5_RXQ_TYPE_HAIRPIN)
 		return mlx5_rxq_obj_hairpin_new(rxq);
 	tmpl->rxq_ctrl = rxq_ctrl;
-	if (rxq_ctrl->irq) {
+	if (rxq_ctrl->irq && !rxq_ctrl->started) {
 		int devx_ev_flag =
 			  MLX5DV_DEVX_CREATE_EVENT_CHANNEL_FLAGS_OMIT_EV_DATA;
 
@@ -495,11 +508,19 @@ mlx5_rxq_devx_obj_new(struct mlx5_rxq_priv *rxq)
 	ret = mlx5_devx_modify_rq(rxq, MLX5_RXQ_MOD_RST2RDY);
 	if (ret)
 		goto error;
-	rxq_data->wqes = (void *)(uintptr_t)rxq->devx_rq.wq.umem_buf;
-	rxq_data->rq_db = (uint32_t *)(uintptr_t)rxq->devx_rq.wq.db_rec;
-	mlx5_rxq_initialize(rxq_data);
+	if (!rxq_data->shared) {
+		rxq_data->wqes = (void *)(uintptr_t)rxq->devx_rq.wq.umem_buf;
+		rxq_data->rq_db = (uint32_t *)(uintptr_t)rxq->devx_rq.wq.db_rec;
+	} else if (!rxq_ctrl->started) {
+		rxq_data->wqes = (void *)(uintptr_t)tmpl->devx_rmp.wq.umem_buf;
+		rxq_data->rq_db =
+				(uint32_t *)(uintptr_t)tmpl->devx_rmp.wq.db_rec;
+	}
+	if (!rxq_ctrl->started) {
+		mlx5_rxq_initialize(rxq_data);
+		rxq_ctrl->wqn = rxq->devx_rq.rq->id;
+	}
 	priv->dev_data->rx_queue_state[rxq->idx] = RTE_ETH_QUEUE_STATE_STARTED;
-	rxq_ctrl->wqn = rxq->devx_rq.rq->id;
 	return 0;
 error:
 	ret = rte_errno; /* Save rte_errno before cleanup. */
@@ -557,7 +578,10 @@ mlx5_devx_ind_table_create_rqt_attr(struct rte_eth_dev *dev,
 		struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, queues[i]);
 
 		MLX5_ASSERT(rxq != NULL);
-		rqt_attr->rq_list[i] = rxq->devx_rq.rq->id;
+		if (rxq->ctrl->type == MLX5_RXQ_TYPE_HAIRPIN)
+			rqt_attr->rq_list[i] = rxq->ctrl->obj->rq->id;
+		else
+			rqt_attr->rq_list[i] = rxq->devx_rq.rq->id;
 	}
 	MLX5_ASSERT(i > 0);
 	for (j = 0; i != rqt_n; ++j, ++i)
diff --git a/drivers/net/mlx5/mlx5_ethdev.c b/drivers/net/mlx5/mlx5_ethdev.c
index 070ff149488..ee6a70f315e 100644
--- a/drivers/net/mlx5/mlx5_ethdev.c
+++ b/drivers/net/mlx5/mlx5_ethdev.c
@@ -26,6 +26,7 @@
 #include "mlx5_rx.h"
 #include "mlx5_tx.h"
 #include "mlx5_autoconf.h"
+#include "mlx5_devx.h"
 
 /**
  * Get the interface index from device name.
@@ -336,9 +337,13 @@ mlx5_dev_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *info)
 	info->flow_type_rss_offloads = ~MLX5_RSS_HF_MASK;
 	mlx5_set_default_params(dev, info);
 	mlx5_set_txlimit_params(dev, info);
+	if (priv->config.hca_attr.mem_rq_rmp &&
+	    priv->obj_ops.rxq_obj_new == devx_obj_ops.rxq_obj_new)
+		info->dev_capa |= RTE_ETH_DEV_CAPA_RXQ_SHARE;
 	info->switch_info.name = dev->data->name;
 	info->switch_info.domain_id = priv->domain_id;
 	info->switch_info.port_id = priv->representor_id;
+	info->switch_info.rx_domain = 0; /* No sub Rx domains. */
 	if (priv->representor) {
 		uint16_t port_id;
 
diff --git a/drivers/net/mlx5/mlx5_rx.h b/drivers/net/mlx5/mlx5_rx.h
index 161399c764d..c293cb1b61c 100644
--- a/drivers/net/mlx5/mlx5_rx.h
+++ b/drivers/net/mlx5/mlx5_rx.h
@@ -107,6 +107,7 @@ struct mlx5_rxq_data {
 	unsigned int lro:1; /* Enable LRO. */
 	unsigned int dynf_meta:1; /* Dynamic metadata is configured. */
 	unsigned int mcqe_format:3; /* CQE compression format. */
+	unsigned int shared:1; /* Shared RXQ. */
 	volatile uint32_t *rq_db;
 	volatile uint32_t *cq_db;
 	uint16_t port_id;
@@ -169,6 +170,9 @@ struct mlx5_rxq_ctrl {
 	struct mlx5_dev_ctx_shared *sh; /* Shared context. */
 	enum mlx5_rxq_type type; /* Rxq type. */
 	unsigned int socket; /* CPU socket ID for allocations. */
+	LIST_ENTRY(mlx5_rxq_ctrl) share_entry; /* Entry in shared RXQ list. */
+	uint32_t share_group; /* Group ID of shared RXQ. */
+	unsigned int started:1; /* Whether (shared) RXQ has been started. */
 	unsigned int irq:1; /* Whether IRQ is enabled. */
 	uint32_t flow_mark_n; /* Number of Mark/Flag flows using this Queue. */
 	uint32_t flow_tunnels_n[MLX5_FLOW_TUNNEL]; /* Tunnels counters. */
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index 8eec9a4ed8d..494c9e3517f 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -28,6 +28,7 @@
 #include "mlx5_rx.h"
 #include "mlx5_utils.h"
 #include "mlx5_autoconf.h"
+#include "mlx5_devx.h"
 
 
 /* Default RSS hash key also used for ConnectX-3. */
@@ -648,6 +649,114 @@ mlx5_rx_queue_pre_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t *desc)
 	return 0;
 }
 
+/**
+ * Get the shared Rx queue object that matches group and queue index.
+ *
+ * @param dev
+ *   Pointer to Ethernet device structure.
+ * @param group
+ *   Shared RXQ group.
+ * @param idx
+ *   RX queue index.
+ *
+ * @return
+ *   Shared RXQ object that matches, or NULL if not found.
+ */
+static struct mlx5_rxq_ctrl *
+mlx5_shared_rxq_get(struct rte_eth_dev *dev, uint32_t group, uint16_t idx)
+{
+	struct mlx5_rxq_ctrl *rxq_ctrl;
+	struct mlx5_priv *priv = dev->data->dev_private;
+
+	LIST_FOREACH(rxq_ctrl, &priv->sh->shared_rxqs, share_entry) {
+		if (rxq_ctrl->share_group == group && rxq_ctrl->rxq.idx == idx)
+			return rxq_ctrl;
+	}
+	return NULL;
+}
+
+/**
+ * Check whether requested Rx queue configuration matches shared RXQ.
+ *
+ * @param rxq_ctrl
+ *   Pointer to shared RXQ.
+ * @param dev
+ *   Pointer to Ethernet device structure.
+ * @param idx
+ *   Queue index.
+ * @param desc
+ *   Number of descriptors to configure in queue.
+ * @param socket
+ *   NUMA socket on which memory must be allocated.
+ * @param[in] conf
+ *   Thresholds parameters.
+ * @param mp
+ *   Memory pool for buffer allocations.
+ *
+ * @return
+ *   True if the requested configuration matches the shared RXQ, false otherwise.
+ */
+static bool
+mlx5_shared_rxq_match(struct mlx5_rxq_ctrl *rxq_ctrl, struct rte_eth_dev *dev,
+		      uint16_t idx, uint16_t desc, unsigned int socket,
+		      const struct rte_eth_rxconf *conf,
+		      struct rte_mempool *mp)
+{
+	struct mlx5_priv *spriv = LIST_FIRST(&rxq_ctrl->owners)->priv;
+	struct mlx5_priv *priv = dev->data->dev_private;
+	unsigned int mprq_stride_nums = priv->config.mprq.stride_num_n ?
+		priv->config.mprq.stride_num_n : MLX5_MPRQ_STRIDE_NUM_N;
+
+	RTE_SET_USED(conf);
+	if (rxq_ctrl->socket != socket) {
+		DRV_LOG(ERR, "port %u queue index %u failed to join shared group: socket mismatch",
+			dev->data->port_id, idx);
+		return false;
+	}
+	if (priv->config.mprq.enabled)
+		desc >>= mprq_stride_nums;
+	if (rxq_ctrl->rxq.elts_n != log2above(desc)) {
+		DRV_LOG(ERR, "port %u queue index %u failed to join shared group: descriptor number mismatch",
+			dev->data->port_id, idx);
+		return false;
+	}
+	if (priv->mtu != spriv->mtu) {
+		DRV_LOG(ERR, "port %u queue index %u failed to join shared group: mtu mismatch",
+			dev->data->port_id, idx);
+		return false;
+	}
+	if (priv->dev_data->dev_conf.intr_conf.rxq !=
+	    spriv->dev_data->dev_conf.intr_conf.rxq) {
+		DRV_LOG(ERR, "port %u queue index %u failed to join shared group: interrupt mismatch",
+			dev->data->port_id, idx);
+		return false;
+	}
+	if (!spriv->config.mprq.enabled && rxq_ctrl->rxq.mp != mp) {
+		DRV_LOG(ERR, "port %u queue index %u failed to join shared group: mempool mismatch",
+			dev->data->port_id, idx);
+		return false;
+	}
+	if (priv->config.hw_padding != spriv->config.hw_padding) {
+		DRV_LOG(ERR, "port %u queue index %u failed to join shared group: padding mismatch",
+			dev->data->port_id, idx);
+		return false;
+	}
+	if (memcmp(&priv->config.mprq, &spriv->config.mprq,
+		   sizeof(priv->config.mprq)) != 0) {
+		DRV_LOG(ERR, "port %u queue index %u failed to join shared group: MPRQ mismatch",
+			dev->data->port_id, idx);
+		return false;
+	}
+	if (priv->config.cqe_comp != spriv->config.cqe_comp ||
+	    (priv->config.cqe_comp &&
+	     priv->config.cqe_comp_fmt != spriv->config.cqe_comp_fmt)) {
+		DRV_LOG(ERR, "port %u queue index %u failed to join shared group: CQE compression mismatch",
+			dev->data->port_id, idx);
+		return false;
+	}
+	return true;
+}
+
 /**
  *
  * @param dev
@@ -673,12 +782,14 @@ mlx5_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
 	struct mlx5_rxq_priv *rxq;
-	struct mlx5_rxq_ctrl *rxq_ctrl;
+	struct mlx5_rxq_ctrl *rxq_ctrl = NULL;
 	struct rte_eth_rxseg_split *rx_seg =
 				(struct rte_eth_rxseg_split *)conf->rx_seg;
 	struct rte_eth_rxseg_split rx_single = {.mp = mp};
 	uint16_t n_seg = conf->rx_nseg;
 	int res;
+	uint64_t offloads = conf->offloads |
+			    dev->data->dev_conf.rxmode.offloads;
 
 	if (mp) {
 		/*
@@ -690,9 +801,6 @@ mlx5_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 		n_seg = 1;
 	}
 	if (n_seg > 1) {
-		uint64_t offloads = conf->offloads |
-				    dev->data->dev_conf.rxmode.offloads;
-
 		/* The offloads should be checked on rte_eth_dev layer. */
 		MLX5_ASSERT(offloads & DEV_RX_OFFLOAD_SCATTER);
 		if (!(offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT)) {
@@ -704,9 +812,32 @@ mlx5_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 		}
 		MLX5_ASSERT(n_seg < MLX5_MAX_RXQ_NSEG);
 	}
+	if (conf->share_group > 0) {
+		if (!priv->config.hca_attr.mem_rq_rmp) {
+			DRV_LOG(ERR, "port %u queue index %u shared Rx queue not supported by fw",
+				     dev->data->port_id, idx);
+			rte_errno = EINVAL;
+			return -rte_errno;
+		}
+		if (priv->obj_ops.rxq_obj_new != devx_obj_ops.rxq_obj_new) {
+			DRV_LOG(ERR, "port %u queue index %u shared Rx queue needs DevX api",
+				     dev->data->port_id, idx);
+			rte_errno = EINVAL;
+			return -rte_errno;
+		}
+		/* Try to reuse shared RXQ. */
+		rxq_ctrl = mlx5_shared_rxq_get(dev, conf->share_group, idx);
+		if (rxq_ctrl != NULL &&
+		    !mlx5_shared_rxq_match(rxq_ctrl, dev, idx, desc, socket,
+					   conf, mp)) {
+			rte_errno = EINVAL;
+			return -rte_errno;
+		}
+	}
 	res = mlx5_rx_queue_pre_setup(dev, idx, &desc);
 	if (res)
 		return res;
+	/* Allocate RXQ. */
 	rxq = mlx5_malloc(MLX5_MEM_ANY | MLX5_MEM_ZERO, sizeof(*rxq), 0,
 			  SOCKET_ID_ANY);
 	if (!rxq) {
@@ -718,15 +849,23 @@ mlx5_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 	rxq->priv = priv;
 	rxq->idx = idx;
 	(*priv->rxq_privs)[idx] = rxq;
-	rxq_ctrl = mlx5_rxq_new(dev, rxq, desc, socket, conf, rx_seg, n_seg);
-	if (!rxq_ctrl) {
-		DRV_LOG(ERR, "port %u unable to allocate rx queue index %u",
-			dev->data->port_id, idx);
-		mlx5_free(rxq);
-		(*priv->rxq_privs)[idx] = NULL;
-		rte_errno = ENOMEM;
-		return -rte_errno;
+	if (rxq_ctrl != NULL) {
+		/* Join owner list. */
+		LIST_INSERT_HEAD(&rxq_ctrl->owners, rxq, owner_entry);
+		rxq->ctrl = rxq_ctrl;
+	} else {
+		rxq_ctrl = mlx5_rxq_new(dev, rxq, desc, socket, conf, rx_seg,
+					n_seg);
+		if (rxq_ctrl == NULL) {
+			DRV_LOG(ERR, "port %u unable to allocate rx queue index %u",
+				dev->data->port_id, idx);
+			mlx5_free(rxq);
+			(*priv->rxq_privs)[idx] = NULL;
+			rte_errno = ENOMEM;
+			return -rte_errno;
+		}
 	}
+	mlx5_rxq_ref(dev, idx);
 	DRV_LOG(DEBUG, "port %u adding Rx queue %u to list",
 		dev->data->port_id, idx);
 	dev->data->rx_queues[idx] = &rxq_ctrl->rxq;
@@ -1071,6 +1210,9 @@ mlx5_rxq_obj_verify(struct rte_eth_dev *dev)
 	struct mlx5_rxq_obj *rxq_obj;
 
 	LIST_FOREACH(rxq_obj, &priv->rxqsobj, next) {
+		if (rxq_obj->rxq_ctrl->rxq.shared &&
+		    !LIST_EMPTY(&rxq_obj->rxq_ctrl->owners))
+			continue;
 		DRV_LOG(DEBUG, "port %u Rx queue %u still referenced",
 			dev->data->port_id, rxq_obj->rxq_ctrl->rxq.idx);
 		++ret;
@@ -1348,6 +1490,11 @@ mlx5_rxq_new(struct rte_eth_dev *dev, struct mlx5_rxq_priv *rxq,
 		return NULL;
 	}
 	LIST_INIT(&tmpl->owners);
+	if (conf->share_group > 0) {
+		tmpl->rxq.shared = 1;
+		tmpl->share_group = conf->share_group;
+		LIST_INSERT_HEAD(&priv->sh->shared_rxqs, tmpl, share_entry);
+	}
 	rxq->ctrl = tmpl;
 	LIST_INSERT_HEAD(&tmpl->owners, rxq, owner_entry);
 	MLX5_ASSERT(n_seg && n_seg <= MLX5_MAX_RXQ_NSEG);
@@ -1596,7 +1743,6 @@ mlx5_rxq_new(struct rte_eth_dev *dev, struct mlx5_rxq_priv *rxq,
 	tmpl->rxq.uar_lock_cq = &priv->sh->uar_lock_cq;
 #endif
 	tmpl->rxq.idx = idx;
-	mlx5_rxq_ref(dev, idx);
 	LIST_INSERT_HEAD(&priv->rxqsctrl, tmpl, next);
 	return tmpl;
 error:
@@ -1771,33 +1917,45 @@ mlx5_rxq_release(struct rte_eth_dev *dev, uint16_t idx)
 	struct mlx5_priv *priv = dev->data->dev_private;
 	struct mlx5_rxq_priv *rxq;
 	struct mlx5_rxq_ctrl *rxq_ctrl;
+	bool free_obj;
+	uint32_t refcnt;
 
 	if (priv->rxq_privs == NULL)
 		return 0;
 	rxq = mlx5_rxq_get(dev, idx);
 	if (rxq == NULL)
 		return 0;
-	if (mlx5_rxq_deref(dev, idx) > 1)
-		return 1;
 	rxq_ctrl = rxq->ctrl;
-	if (rxq_ctrl->obj != NULL) {
-		priv->obj_ops.rxq_obj_release(rxq);
-		LIST_REMOVE(rxq_ctrl->obj, next);
-		mlx5_free(rxq_ctrl->obj);
-		rxq_ctrl->obj = NULL;
-	}
-	if (rxq_ctrl->type == MLX5_RXQ_TYPE_STANDARD) {
-		rxq_free_elts(rxq_ctrl);
-		dev->data->rx_queue_state[idx] = RTE_ETH_QUEUE_STATE_STOPPED;
-	}
-	if (!__atomic_load_n(&rxq->refcnt, __ATOMIC_RELAXED)) {
+	refcnt = mlx5_rxq_deref(dev, idx);
+	if (refcnt > 1) {
+		return 1;
+	} else if (refcnt == 1) { /* RxQ stopped. */
+		free_obj = priv->obj_ops.rxq_obj_release(rxq);
+		if (free_obj && rxq_ctrl->obj != NULL) {
+			LIST_REMOVE(rxq_ctrl->obj, next);
+			mlx5_free(rxq_ctrl->obj);
+			rxq_ctrl->obj = NULL;
+			rxq_ctrl->started = false;
+		}
 		if (rxq_ctrl->type == MLX5_RXQ_TYPE_STANDARD) {
-			mlx5_mr_btree_free(&rxq_ctrl->rxq.mr_ctrl.cache_bh);
-			mlx5_mprq_free_mp(dev, rxq_ctrl);
+			if (free_obj)
+				rxq_free_elts(rxq_ctrl);
+			dev->data->rx_queue_state[idx] =
+					RTE_ETH_QUEUE_STATE_STOPPED;
 		}
+	} else { /* Refcnt zero, closing device. */
 		LIST_REMOVE(rxq, owner_entry);
-		LIST_REMOVE(rxq_ctrl, next);
-		mlx5_free(rxq_ctrl);
+		if (LIST_EMPTY(&rxq_ctrl->owners)) {
+			if (rxq_ctrl->type == MLX5_RXQ_TYPE_STANDARD) {
+				mlx5_mr_btree_free
+					(&rxq_ctrl->rxq.mr_ctrl.cache_bh);
+				mlx5_mprq_free_mp(dev, rxq_ctrl);
+			}
+			if (rxq_ctrl->rxq.shared)
+				LIST_REMOVE(rxq_ctrl, share_entry);
+			LIST_REMOVE(rxq_ctrl, next);
+			mlx5_free(rxq_ctrl);
+		}
 		dev->data->rx_queues[idx] = NULL;
 		mlx5_free(rxq);
 		(*priv->rxq_privs)[idx] = NULL;
diff --git a/drivers/net/mlx5/mlx5_trigger.c b/drivers/net/mlx5/mlx5_trigger.c
index 1e865e74e39..b12ca3dde99 100644
--- a/drivers/net/mlx5/mlx5_trigger.c
+++ b/drivers/net/mlx5/mlx5_trigger.c
@@ -122,6 +122,46 @@ mlx5_rxq_stop(struct rte_eth_dev *dev)
 		mlx5_rxq_release(dev, i);
 }
 
+static int
+mlx5_rxq_ctrl_prepare(struct rte_eth_dev *dev, struct mlx5_rxq_ctrl *rxq_ctrl,
+		      unsigned int idx)
+{
+	int ret = 0;
+
+	if (rxq_ctrl->type == MLX5_RXQ_TYPE_STANDARD) {
+		if (mlx5_rxq_mprq_enabled(&rxq_ctrl->rxq)) {
+			/* Allocate/reuse/resize mempool for MPRQ. */
+			if (mlx5_mprq_alloc_mp(dev, rxq_ctrl) < 0)
+				return -rte_errno;
+
+			/* Pre-register Rx mempools. */
+			mlx5_mr_update_mp(dev, &rxq_ctrl->rxq.mr_ctrl,
+					  rxq_ctrl->rxq.mprq_mp);
+		} else {
+			uint32_t s;
+			for (s = 0; s < rxq_ctrl->rxq.rxseg_n; s++)
+				mlx5_mr_update_mp(dev, &rxq_ctrl->rxq.mr_ctrl,
+						  rxq_ctrl->rxq.rxseg[s].mp);
+		}
+		ret = rxq_alloc_elts(rxq_ctrl);
+		if (ret)
+			return ret;
+	}
+	MLX5_ASSERT(!rxq_ctrl->obj);
+	rxq_ctrl->obj = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO,
+				    sizeof(*rxq_ctrl->obj), 0,
+				    rxq_ctrl->socket);
+	if (!rxq_ctrl->obj) {
+		DRV_LOG(ERR, "Port %u Rx queue %u can't allocate resources.",
+			dev->data->port_id, idx);
+		rte_errno = ENOMEM;
+		return -rte_errno;
+	}
+	DRV_LOG(DEBUG, "Port %u rxq %u updated with %p.", dev->data->port_id,
+		idx, (void *)&rxq_ctrl->obj);
+	return 0;
+}
+
 /**
  * Start traffic on Rx queues.
  *
@@ -149,45 +189,17 @@ mlx5_rxq_start(struct rte_eth_dev *dev)
 		if (rxq == NULL)
 			continue;
 		rxq_ctrl = rxq->ctrl;
-		if (rxq_ctrl->type == MLX5_RXQ_TYPE_STANDARD) {
-			if (mlx5_rxq_mprq_enabled(&rxq_ctrl->rxq)) {
-				/* Allocate/reuse/resize mempool for MPRQ. */
-				if (mlx5_mprq_alloc_mp(dev, rxq_ctrl) < 0)
-					goto error;
-				/* Pre-register Rx mempools. */
-				mlx5_mr_update_mp(dev, &rxq_ctrl->rxq.mr_ctrl,
-						  rxq_ctrl->rxq.mprq_mp);
-			} else {
-				uint32_t s;
-
-				for (s = 0; s < rxq_ctrl->rxq.rxseg_n; s++)
-					mlx5_mr_update_mp
-						(dev, &rxq_ctrl->rxq.mr_ctrl,
-						rxq_ctrl->rxq.rxseg[s].mp);
-			}
-			ret = rxq_alloc_elts(rxq_ctrl);
-			if (ret)
+		if (!rxq_ctrl->started) {
+			if (mlx5_rxq_ctrl_prepare(dev, rxq_ctrl, i) < 0)
 				goto error;
-		}
-		MLX5_ASSERT(!rxq_ctrl->obj);
-		rxq_ctrl->obj = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO,
-					    sizeof(*rxq_ctrl->obj), 0,
-					    rxq_ctrl->socket);
-		if (!rxq_ctrl->obj) {
-			DRV_LOG(ERR,
-				"Port %u Rx queue %u can't allocate resources.",
-				dev->data->port_id, i);
-			rte_errno = ENOMEM;
-			goto error;
+			LIST_INSERT_HEAD(&priv->rxqsobj, rxq_ctrl->obj, next);
 		}
 		ret = priv->obj_ops.rxq_obj_new(rxq);
 		if (ret) {
 			mlx5_free(rxq_ctrl->obj);
 			goto error;
 		}
-		DRV_LOG(DEBUG, "Port %u rxq %u updated with %p.",
-			dev->data->port_id, i, (void *)&rxq_ctrl->obj);
-		LIST_INSERT_HEAD(&priv->rxqsobj, rxq_ctrl->obj, next);
+		rxq_ctrl->started = true;
 	}
 	return 0;
 error:
-- 
2.33.0


^ permalink raw reply	[flat|nested] 29+ messages in thread

* [dpdk-dev] [PATCH v2 13/13] net/mlx5: add shared Rx queue port datapath support
  2021-10-16  9:12 ` [dpdk-dev] [PATCH v2 00/13] " Xueming Li
                     ` (11 preceding siblings ...)
  2021-10-16  9:12   ` [dpdk-dev] [PATCH v2 12/13] net/mlx5: support shared Rx queue Xueming Li
@ 2021-10-16  9:12   ` Xueming Li
  2021-10-19  8:19   ` [dpdk-dev] [PATCH v2 00/13] net/mlx5: support shared Rx queue Slava Ovsiienko
  13 siblings, 0 replies; 29+ messages in thread
From: Xueming Li @ 2021-10-16  9:12 UTC (permalink / raw)
  To: dev
  Cc: Viacheslav Ovsiienko, xuemingl, Lior Margalit, Matan Azrad,
	David Christensen, Ruifeng Wang, Bruce Richardson,
	Konstantin Ananyev

From: Viacheslav Ovsiienko <viacheslavo@nvidia.com>

When receiving a packet, the mlx5 PMD sets the mbuf port number from
the rxq data.

To support shared RXQ, save the port number into the RQ context as the
user index. The port number of a received packet is then resolved from
the CQE user index, which is derived from the RQ context.

The legacy Verbs API doesn't support setting the RQ user index, so the
port number is still read from the rxq.
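
Condensed from the hunks below, the end-to-end mapping is:

    /* Control path: RQ creation stores the owning port ID. */
    rq_attr.user_index = rte_cpu_to_be_16(priv->dev_data->port_id);
    /* Datapath: the source port is recovered from the CQE. */
    pkt->port = unlikely(rxq->shared) ? cqe->user_index_low : rxq->port_id;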

Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
---
 drivers/net/mlx5/mlx5_devx.c             |  1 +
 drivers/net/mlx5/mlx5_rx.c               |  1 +
 drivers/net/mlx5/mlx5_rxq.c              |  3 ++-
 drivers/net/mlx5/mlx5_rxtx_vec_altivec.h |  6 ++++++
 drivers/net/mlx5/mlx5_rxtx_vec_neon.h    | 12 +++++++++++-
 drivers/net/mlx5/mlx5_rxtx_vec_sse.h     |  8 +++++++-
 6 files changed, 28 insertions(+), 3 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_devx.c b/drivers/net/mlx5/mlx5_devx.c
index 94253047141..11c426eee14 100644
--- a/drivers/net/mlx5/mlx5_devx.c
+++ b/drivers/net/mlx5/mlx5_devx.c
@@ -277,6 +277,7 @@ mlx5_rxq_create_devx_rq_resources(struct mlx5_rxq_priv *rxq)
 						MLX5_WQ_END_PAD_MODE_NONE;
 	rq_attr.wq_attr.pd = priv->sh->pdn;
 	rq_attr.counter_set_id = priv->counter_set_id;
+	rq_attr.user_index = rte_cpu_to_be_16(priv->dev_data->port_id);
 	if (rxq_data->shared) /* Create RMP based RQ. */
 		rxq->devx_rq.rmp = &rxq_ctrl->obj->devx_rmp;
 	/* Create RQ using DevX API. */
diff --git a/drivers/net/mlx5/mlx5_rx.c b/drivers/net/mlx5/mlx5_rx.c
index 3017a8da20c..6ee54b820f1 100644
--- a/drivers/net/mlx5/mlx5_rx.c
+++ b/drivers/net/mlx5/mlx5_rx.c
@@ -707,6 +707,7 @@ rxq_cq_to_mbuf(struct mlx5_rxq_data *rxq, struct rte_mbuf *pkt,
 {
 	/* Update packet information. */
 	pkt->packet_type = rxq_cq_to_pkt_type(rxq, cqe, mcqe);
+	pkt->port = unlikely(rxq->shared) ? cqe->user_index_low : rxq->port_id;
 
 	if (rxq->rss_hash) {
 		uint32_t rss_hash_res = 0;
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index 494c9e3517f..250922b0d7a 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -179,7 +179,8 @@ rxq_alloc_elts_sprq(struct mlx5_rxq_ctrl *rxq_ctrl)
 		mbuf_init->data_off = RTE_PKTMBUF_HEADROOM;
 		rte_mbuf_refcnt_set(mbuf_init, 1);
 		mbuf_init->nb_segs = 1;
-		mbuf_init->port = rxq->port_id;
+		/* For shared queues, the port is provided in the CQE. */
+		mbuf_init->port = rxq->shared ? 0 : rxq->port_id;
 		if (priv->flags & RTE_PKTMBUF_POOL_F_PINNED_EXT_BUF)
 			mbuf_init->ol_flags = EXT_ATTACHED_MBUF;
 		/*
diff --git a/drivers/net/mlx5/mlx5_rxtx_vec_altivec.h b/drivers/net/mlx5/mlx5_rxtx_vec_altivec.h
index 82586f012cb..115320a26f0 100644
--- a/drivers/net/mlx5/mlx5_rxtx_vec_altivec.h
+++ b/drivers/net/mlx5/mlx5_rxtx_vec_altivec.h
@@ -1189,6 +1189,12 @@ rxq_cq_process_v(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cq,
 
 		/* D.5 fill in mbuf - rearm_data and packet_type. */
 		rxq_cq_to_ptype_oflags_v(rxq, cqes, opcode, &pkts[pos]);
+		if (unlikely(rxq->shared)) {
+			pkts[pos]->port = cq[pos].user_index_low;
+			pkts[pos + p1]->port = cq[pos + p1].user_index_low;
+			pkts[pos + p2]->port = cq[pos + p2].user_index_low;
+			pkts[pos + p3]->port = cq[pos + p3].user_index_low;
+		}
 		if (rxq->hw_timestamp) {
 			int offset = rxq->timestamp_offset;
 			if (rxq->rt_timestamp) {
diff --git a/drivers/net/mlx5/mlx5_rxtx_vec_neon.h b/drivers/net/mlx5/mlx5_rxtx_vec_neon.h
index 5ff792f4cb5..9e78318129a 100644
--- a/drivers/net/mlx5/mlx5_rxtx_vec_neon.h
+++ b/drivers/net/mlx5/mlx5_rxtx_vec_neon.h
@@ -787,7 +787,17 @@ rxq_cq_process_v(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cq,
 		/* C.4 fill in mbuf - rearm_data and packet_type. */
 		rxq_cq_to_ptype_oflags_v(rxq, ptype_info, flow_tag,
 					 opcode, &elts[pos]);
-		if (rxq->hw_timestamp) {
+		if (unlikely(rxq->shared)) {
+			elts[pos]->port = container_of(p0, struct mlx5_cqe,
+					      pkt_info)->user_index_low;
+			elts[pos + 1]->port = container_of(p1, struct mlx5_cqe,
+					      pkt_info)->user_index_low;
+			elts[pos + 2]->port = container_of(p2, struct mlx5_cqe,
+					      pkt_info)->user_index_low;
+			elts[pos + 3]->port = container_of(p3, struct mlx5_cqe,
+					      pkt_info)->user_index_low;
+		}
+		if (unlikely(rxq->hw_timestamp)) {
 			int offset = rxq->timestamp_offset;
 			if (rxq->rt_timestamp) {
 				struct mlx5_dev_ctx_shared *sh = rxq->sh;
diff --git a/drivers/net/mlx5/mlx5_rxtx_vec_sse.h b/drivers/net/mlx5/mlx5_rxtx_vec_sse.h
index adf991f0139..97eb0adc9e6 100644
--- a/drivers/net/mlx5/mlx5_rxtx_vec_sse.h
+++ b/drivers/net/mlx5/mlx5_rxtx_vec_sse.h
@@ -736,7 +736,13 @@ rxq_cq_process_v(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cq,
 		*err |= _mm_cvtsi128_si64(opcode);
 		/* D.5 fill in mbuf - rearm_data and packet_type. */
 		rxq_cq_to_ptype_oflags_v(rxq, cqes, opcode, &pkts[pos]);
-		if (rxq->hw_timestamp) {
+		if (unlikely(rxq->shared)) {
+			pkts[pos]->port = cq[pos].user_index_low;
+			pkts[pos + p1]->port = cq[pos + p1].user_index_low;
+			pkts[pos + p2]->port = cq[pos + p2].user_index_low;
+			pkts[pos + p3]->port = cq[pos + p3].user_index_low;
+		}
+		if (unlikely(rxq->hw_timestamp)) {
 			int offset = rxq->timestamp_offset;
 			if (rxq->rt_timestamp) {
 				struct mlx5_dev_ctx_shared *sh = rxq->sh;
-- 
2.33.0


^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [dpdk-dev] [PATCH v2 00/13] net/mlx5: support shared Rx queue
  2021-10-16  9:12 ` [dpdk-dev] [PATCH v2 00/13] " Xueming Li
                     ` (12 preceding siblings ...)
  2021-10-16  9:12   ` [dpdk-dev] [PATCH v2 13/13] net/mlx5: add shared Rx queue port datapath support Xueming Li
@ 2021-10-19  8:19   ` Slava Ovsiienko
  2021-10-19  8:22     ` Slava Ovsiienko
  13 siblings, 1 reply; 29+ messages in thread
From: Slava Ovsiienko @ 2021-10-19  8:19 UTC (permalink / raw)
  To: Xueming(Steven) Li, dev; +Cc: Xueming(Steven) Li, Lior Margalit

Hi,

For the entire series:

Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>

With best regards, Slava

> -----Original Message-----
> From: dev <dev-bounces@dpdk.org> On Behalf Of Xueming Li
> Sent: Saturday, October 16, 2021 12:12
> To: dev@dpdk.org
> Cc: Xueming(Steven) Li <xuemingl@nvidia.com>; Lior Margalit
> <lmargalit@nvidia.com>
> Subject: [dpdk-dev] [PATCH v2 00/13] net/mlx5: support shared Rx queue
> 
> Implemetation of Shared Rx queue.
> 
> Depends-on: series-19708 ("ethdev: introduce shared Rx queue")
> Depends-on: series-19698 ("Flow entites behavior on port restart")
> 
> v1:
> - initial version
> v2:
> - rebased on latest dependent series
> - fully tested
> 
> Viacheslav Ovsiienko (1):
>   net/mlx5: add shared Rx queue port datapath support
> 
> Xueming Li (12):
>   common/mlx5: support receive queue user index
>   common/mlx5: support receive memory pool
>   net/mlx5: fix Rx queue memory allocation return value
>   net/mlx5: clean Rx queue code
>   net/mlx5: split multiple packet Rq memory pool
>   net/mlx5: split Rx queue
>   net/mlx5: move Rx queue reference count
>   net/mlx5: move Rx queue hairpin info to private data
>   net/mlx5: remove port info from shareable Rx queue
>   net/mlx5: move Rx queue DevX resource
>   net/mlx5: remove Rx queue data list from device
>   net/mlx5: support shared Rx queue
> 
>  doc/guides/nics/features/mlx5.ini        |   1 +
>  doc/guides/nics/mlx5.rst                 |   6 +
>  drivers/common/mlx5/mlx5_common_devx.c   | 296 +++++++++--
>  drivers/common/mlx5/mlx5_common_devx.h   |  19 +-
>  drivers/common/mlx5/mlx5_devx_cmds.c     |  52 ++
>  drivers/common/mlx5/mlx5_devx_cmds.h     |  16 +
>  drivers/common/mlx5/mlx5_prm.h           |  93 +++-
>  drivers/common/mlx5/version.map          |   1 +
>  drivers/net/mlx5/linux/mlx5_os.c         |   2 +
>  drivers/net/mlx5/linux/mlx5_verbs.c      | 173 ++++---
>  drivers/net/mlx5/mlx5.c                  |  11 +-
>  drivers/net/mlx5/mlx5.h                  |  17 +-
>  drivers/net/mlx5/mlx5_devx.c             | 275 +++++-----
>  drivers/net/mlx5/mlx5_ethdev.c           |  21 +-
>  drivers/net/mlx5/mlx5_flow.c             |  45 +-
>  drivers/net/mlx5/mlx5_mr.c               |   7 +-
>  drivers/net/mlx5/mlx5_rss.c              |   6 +-
>  drivers/net/mlx5/mlx5_rx.c               |  35 +-
>  drivers/net/mlx5/mlx5_rx.h               |  46 +-
>  drivers/net/mlx5/mlx5_rxq.c              | 633 +++++++++++++++--------
>  drivers/net/mlx5/mlx5_rxtx.c             |   6 +-
>  drivers/net/mlx5/mlx5_rxtx_vec.c         |   8 +-
>  drivers/net/mlx5/mlx5_rxtx_vec_altivec.h |  14 +-
>  drivers/net/mlx5/mlx5_rxtx_vec_neon.h    |  12 +-
>  drivers/net/mlx5/mlx5_rxtx_vec_sse.h     |   8 +-
>  drivers/net/mlx5/mlx5_stats.c            |   9 +-
>  drivers/net/mlx5/mlx5_trigger.c          | 161 +++---
>  drivers/net/mlx5/mlx5_vlan.c             |  16 +-
>  drivers/regex/mlx5/mlx5_regex_fastpath.c |   2 +-
>  29 files changed, 1350 insertions(+), 641 deletions(-)
> 
> --
> 2.33.0


^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [dpdk-dev] [PATCH v2 00/13] net/mlx5: support shared Rx queue
  2021-10-19  8:19   ` [dpdk-dev] [PATCH v2 00/13] net/mlx5: support shared Rx queue Slava Ovsiienko
@ 2021-10-19  8:22     ` Slava Ovsiienko
  0 siblings, 0 replies; 29+ messages in thread
From: Slava Ovsiienko @ 2021-10-19  8:22 UTC (permalink / raw)
  To: Xueming(Steven) Li, dev; +Cc: Xueming(Steven) Li, Lior Margalit

Hi,

Sorry, wrong series; revoking my "Acked-by".

With best regards,
Slava

> -----Original Message-----
> From: Slava Ovsiienko
> Sent: Tuesday, October 19, 2021 11:20
> To: Xueming Li <xuemingl@nvidia.com>; dev@dpdk.org
> Cc: Xueming(Steven) Li <xuemingl@nvidia.com>; Lior Margalit
> <lmargalit@nvidia.com>
> Subject: RE: [dpdk-dev] [PATCH v2 00/13] net/mlx5: support shared Rx queue
> 
> Hi,
> 
> For the entire series:
> 
> Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
> 
> With best regards, Slava
> 
> > -----Original Message-----
> > From: dev <dev-bounces@dpdk.org> On Behalf Of Xueming Li
> > Sent: Saturday, October 16, 2021 12:12
> > To: dev@dpdk.org
> > Cc: Xueming(Steven) Li <xuemingl@nvidia.com>; Lior Margalit
> > <lmargalit@nvidia.com>
> > Subject: [dpdk-dev] [PATCH v2 00/13] net/mlx5: support shared Rx queue
> >
> > Implementation of Shared Rx queue.
> >
> > Depends-on: series-19708 ("ethdev: introduce shared Rx queue")
> > Depends-on: series-19698 ("Flow entites behavior on port restart")
> >
> > v1:
> > - initial version
> > v2:
> > - rebased on latest dependent series
> > - fully tested
> >
> > Viacheslav Ovsiienko (1):
> >   net/mlx5: add shared Rx queue port datapath support
> >
> > Xueming Li (12):
> >   common/mlx5: support receive queue user index
> >   common/mlx5: support receive memory pool
> >   net/mlx5: fix Rx queue memory allocation return value
> >   net/mlx5: clean Rx queue code
> >   net/mlx5: split multiple packet Rq memory pool
> >   net/mlx5: split Rx queue
> >   net/mlx5: move Rx queue reference count
> >   net/mlx5: move Rx queue hairpin info to private data
> >   net/mlx5: remove port info from shareable Rx queue
> >   net/mlx5: move Rx queue DevX resource
> >   net/mlx5: remove Rx queue data list from device
> >   net/mlx5: support shared Rx queue
> >
> >  doc/guides/nics/features/mlx5.ini        |   1 +
> >  doc/guides/nics/mlx5.rst                 |   6 +
> >  drivers/common/mlx5/mlx5_common_devx.c   | 296 +++++++++--
> >  drivers/common/mlx5/mlx5_common_devx.h   |  19 +-
> >  drivers/common/mlx5/mlx5_devx_cmds.c     |  52 ++
> >  drivers/common/mlx5/mlx5_devx_cmds.h     |  16 +
> >  drivers/common/mlx5/mlx5_prm.h           |  93 +++-
> >  drivers/common/mlx5/version.map          |   1 +
> >  drivers/net/mlx5/linux/mlx5_os.c         |   2 +
> >  drivers/net/mlx5/linux/mlx5_verbs.c      | 173 ++++---
> >  drivers/net/mlx5/mlx5.c                  |  11 +-
> >  drivers/net/mlx5/mlx5.h                  |  17 +-
> >  drivers/net/mlx5/mlx5_devx.c             | 275 +++++-----
> >  drivers/net/mlx5/mlx5_ethdev.c           |  21 +-
> >  drivers/net/mlx5/mlx5_flow.c             |  45 +-
> >  drivers/net/mlx5/mlx5_mr.c               |   7 +-
> >  drivers/net/mlx5/mlx5_rss.c              |   6 +-
> >  drivers/net/mlx5/mlx5_rx.c               |  35 +-
> >  drivers/net/mlx5/mlx5_rx.h               |  46 +-
> >  drivers/net/mlx5/mlx5_rxq.c              | 633 +++++++++++++++--------
> >  drivers/net/mlx5/mlx5_rxtx.c             |   6 +-
> >  drivers/net/mlx5/mlx5_rxtx_vec.c         |   8 +-
> >  drivers/net/mlx5/mlx5_rxtx_vec_altivec.h |  14 +-
> >  drivers/net/mlx5/mlx5_rxtx_vec_neon.h    |  12 +-
> >  drivers/net/mlx5/mlx5_rxtx_vec_sse.h     |   8 +-
> >  drivers/net/mlx5/mlx5_stats.c            |   9 +-
> >  drivers/net/mlx5/mlx5_trigger.c          | 161 +++---
> >  drivers/net/mlx5/mlx5_vlan.c             |  16 +-
> >  drivers/regex/mlx5/mlx5_regex_fastpath.c |   2 +-
> >  29 files changed, 1350 insertions(+), 641 deletions(-)
> >
> > --
> > 2.33.0


^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [dpdk-dev] [PATCH v2 02/13] common/mlx5: support receive memory pool
  2021-10-16  9:12   ` [dpdk-dev] [PATCH v2 02/13] common/mlx5: support receive memory pool Xueming Li
@ 2021-10-20  9:32     ` Kinsella, Ray
  0 siblings, 0 replies; 29+ messages in thread
From: Kinsella, Ray @ 2021-10-20  9:32 UTC (permalink / raw)
  To: Xueming Li, dev; +Cc: Lior Margalit, Matan Azrad, Viacheslav Ovsiienko



On 16/10/2021 10:12, Xueming Li wrote:
> Adds DevX support for the PRM shared receive memory pool (RMP) object.
> The RMP is used to support shared Rx queues: multiple RQs can share the
> same RMP, and memory buffers are supplied to the RMP.
> 
> This patch makes the RMP optional per RQ; it is created only if
> mlx5_devx_rq.rmp is set.
> 
> Signed-off-by: Xueming Li <xuemingl@nvidia.com>
> ---
>   drivers/common/mlx5/mlx5_common_devx.c | 296 +++++++++++++++++++++----
>   drivers/common/mlx5/mlx5_common_devx.h |  19 +-
>   drivers/common/mlx5/mlx5_devx_cmds.c   |  52 +++++
>   drivers/common/mlx5/mlx5_devx_cmds.h   |  16 ++
>   drivers/common/mlx5/mlx5_prm.h         |  85 ++++++-
>   drivers/common/mlx5/version.map        |   1 +
>   drivers/net/mlx5/mlx5_devx.c           |   4 +-
>   7 files changed, 425 insertions(+), 48 deletions(-)
> 
Acked-by: Ray Kinsella <mdr@ashroe.eu>
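
For context, a conceptual sketch of the RQ/RMP relationship the commit
message describes. The types below are hypothetical stand-ins, not the
DevX objects or API added by the patch:

#include <stddef.h>

struct rmp { int n_buffers; };    /* shared receive memory pool (stand-in) */
struct rq  { struct rmp *rmp; };  /* RQ; rmp == NULL means a private ring  */

int main(void)
{
	struct rmp pool = { .n_buffers = 1024 };
	/* Multiple RQs can draw buffers from the same RMP... */
	struct rq rq0 = { .rmp = &pool };
	struct rq rq1 = { .rmp = &pool };
	/* ...while the RMP stays optional: an RQ without one keeps its own
	 * buffers, mirroring "created only if mlx5_devx_rq.rmp is set". */
	struct rq rq2 = { .rmp = NULL };
	(void)rq0; (void)rq1; (void)rq2;
	return 0;
}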

^ permalink raw reply	[flat|nested] 29+ messages in thread

end of thread, other threads:[~2021-10-20  9:32 UTC | newest]

Thread overview: 29+ messages
-- links below jump to the message on this page --
2021-09-26 11:18 [dpdk-dev] [PATCH 00/11] net/mlx5: support shared Rx queue Xueming Li
2021-09-26 11:18 ` [dpdk-dev] [PATCH 01/11] common/mlx5: support receive queue user index Xueming Li
2021-09-26 11:18 ` [dpdk-dev] [PATCH 02/11] common/mlx5: support receive memory pool Xueming Li
2021-09-26 11:18 ` [dpdk-dev] [PATCH 03/11] net/mlx5: clean Rx queue code Xueming Li
2021-09-26 11:18 ` [dpdk-dev] [PATCH 04/11] net/mlx5: split multiple packet Rq memory pool Xueming Li
2021-09-26 11:18 ` [dpdk-dev] [PATCH 05/11] net/mlx5: split Rx queue Xueming Li
2021-09-26 11:18 ` [dpdk-dev] [PATCH 06/11] net/mlx5: move Rx queue reference count Xueming Li
2021-09-26 11:19 ` [dpdk-dev] [PATCH 07/11] net/mlx5: move Rx queue hairpin info to private data Xueming Li
2021-09-26 11:19 ` [dpdk-dev] [PATCH 08/11] net/mlx5: remove port info from shareable Rx queue Xueming Li
2021-09-26 11:19 ` [dpdk-dev] [PATCH 09/11] net/mlx5: move Rx queue DevX resource Xueming Li
2021-09-26 11:19 ` [dpdk-dev] [PATCH 10/11] net/mlx5: remove Rx queue data list from device Xueming Li
2021-09-26 11:19 ` [dpdk-dev] [PATCH 11/11] net/mlx5: support shared Rx queue Xueming Li
2021-10-16  9:12 ` [dpdk-dev] [PATCH v2 00/13] " Xueming Li
2021-10-16  9:12   ` [dpdk-dev] [PATCH v2 01/13] common/mlx5: support receive queue user index Xueming Li
2021-10-16  9:12   ` [dpdk-dev] [PATCH v2 02/13] common/mlx5: support receive memory pool Xueming Li
2021-10-20  9:32     ` Kinsella, Ray
2021-10-16  9:12   ` [dpdk-dev] [PATCH v2 03/13] net/mlx5: fix Rx queue memory allocation return value Xueming Li
2021-10-16  9:12   ` [dpdk-dev] [PATCH v2 04/13] net/mlx5: clean Rx queue code Xueming Li
2021-10-16  9:12   ` [dpdk-dev] [PATCH v2 05/13] net/mlx5: split multiple packet Rq memory pool Xueming Li
2021-10-16  9:12   ` [dpdk-dev] [PATCH v2 06/13] net/mlx5: split Rx queue Xueming Li
2021-10-16  9:12   ` [dpdk-dev] [PATCH v2 07/13] net/mlx5: move Rx queue reference count Xueming Li
2021-10-16  9:12   ` [dpdk-dev] [PATCH v2 08/13] net/mlx5: move Rx queue hairpin info to private data Xueming Li
2021-10-16  9:12   ` [dpdk-dev] [PATCH v2 09/13] net/mlx5: remove port info from shareable Rx queue Xueming Li
2021-10-16  9:12   ` [dpdk-dev] [PATCH v2 10/13] net/mlx5: move Rx queue DevX resource Xueming Li
2021-10-16  9:12   ` [dpdk-dev] [PATCH v2 11/13] net/mlx5: remove Rx queue data list from device Xueming Li
2021-10-16  9:12   ` [dpdk-dev] [PATCH v2 12/13] net/mlx5: support shared Rx queue Xueming Li
2021-10-16  9:12   ` [dpdk-dev] [PATCH v2 13/13] net/mlx5: add shared Rx queue port datapath support Xueming Li
2021-10-19  8:19   ` [dpdk-dev] [PATCH v2 00/13] net/mlx5: support shared Rx queue Slava Ovsiienko
2021-10-19  8:22     ` Slava Ovsiienko
