DPDK patches and discussions
* [dpdk-dev] [PATCH v1 00/15] mlx5 Tx DevX/Verbs separation
@ 2020-10-01 14:09 Michael Baum
  2020-10-01 14:09 ` [dpdk-dev] [PATCH v1 01/15] net/mlx5: fix send queue doorbell typo Michael Baum
                   ` (15 more replies)
  0 siblings, 16 replies; 17+ messages in thread
From: Michael Baum @ 2020-10-01 14:09 UTC (permalink / raw)
  To: dev; +Cc: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko

This series rearranges the net/mlx5 driver as a step toward multi-OS
support, easing code management for operating systems that do or do not
support DevX/Verbs operations.
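
As a sketch of where the series lands (condensed from patch 07 and the
mlx5_os.c hunk there, not the verbatim driver code), Tx object handling
moves behind a per-device operation table that each OS layer fills at
spawn time:

    /* Per-device object callbacks (see struct mlx5_obj_ops, patch 07). */
    struct mlx5_obj_ops {
            struct mlx5_txq_obj *(*txq_obj_new)(struct rte_eth_dev *dev,
                                                uint16_t idx);
            void (*txq_obj_release)(struct mlx5_txq_obj *txq_obj);
            /* ... Rx and other object callbacks ... */
    };

    /* At device spawn (linux/mlx5_os.c), pick the implementation. */
    if (config->devx && config->dv_flow_en && config->dest_tir)
            priv->obj_ops = devx_obj_ops;
    else
            priv->obj_ops = ibv_obj_ops;
    /* Tx objects go through the Linux-specific wrappers. */
    priv->obj_ops.txq_obj_new = mlx5_os_txq_obj_new;
    priv->obj_ops.txq_obj_release = mlx5_os_txq_obj_release;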

Michael Baum (15):
  net/mlx5: fix send queue doorbell typo
  net/mlx5: fix unused variable in Txq creation
  net/mlx5: mitigate Tx queue reference counters
  net/mlx5: reorder Tx queue DevX object creation
  net/mlx5: reorder Tx queue Verbs object creation
  net/mlx5: reposition the event queue number field
  net/mlx5: separate Tx queue object creations
  net/mlx5: share Tx control code
  net/mlx5: rearrange SQ and CQ creation in DevX module
  net/mlx5: rearrange QP creation in Verbs module
  net/mlx5: separate Tx queue object modification
  net/mlx5: share Tx queue object modification
  net/mlx5: remove Tx queue object type field
  net/mlx5: separate Rx queue state modification
  net/mlx5: remove Rx queue object type field

 drivers/net/mlx5/linux/mlx5_os.c    |  80 ++++
 drivers/net/mlx5/linux/mlx5_verbs.c | 296 ++++++++++++-
 drivers/net/mlx5/linux/mlx5_verbs.h |   3 +
 drivers/net/mlx5/mlx5.c             |  10 +
 drivers/net/mlx5/mlx5.h             |  61 ++-
 drivers/net/mlx5/mlx5_devx.c        | 593 +++++++++++++++++++++++--
 drivers/net/mlx5/mlx5_devx.h        |   3 +
 drivers/net/mlx5/mlx5_rxq.c         |   4 +-
 drivers/net/mlx5/mlx5_rxtx.c        | 105 +----
 drivers/net/mlx5/mlx5_rxtx.h        |  45 +-
 drivers/net/mlx5/mlx5_trigger.c     |  40 +-
 drivers/net/mlx5/mlx5_txpp.c        |  28 +-
 drivers/net/mlx5/mlx5_txq.c         | 850 ++----------------------------------
 drivers/net/mlx5/mlx5_vlan.c        |   5 +-
 14 files changed, 1087 insertions(+), 1036 deletions(-)

-- 
1.8.3.1



* [dpdk-dev] [PATCH v1 01/15] net/mlx5: fix send queue doorbell typo
  2020-10-01 14:09 [dpdk-dev] [PATCH v1 00/15] mlx5 Tx DevX/Verbs separation Michael Baum
@ 2020-10-01 14:09 ` Michael Baum
  2020-10-01 14:09 ` [dpdk-dev] [PATCH v1 02/15] net/mlx5: fix unused variable in Txq creation Michael Baum
                   ` (14 subsequent siblings)
  15 siblings, 0 replies; 17+ messages in thread
From: Michael Baum @ 2020-10-01 14:09 UTC (permalink / raw)
  To: dev; +Cc: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko, stable

As part of SQ creation for Tx queue objects, HW doorbell memory must be
allocated and mapped to the HW.

The SQ doorbell handler was wrongly saved in the CQ fields, which caused
the wrong doorbell to be released in the Tx queue object destroy flow.

Save the SQ doorbell handler in the SQ fields.

Fixes: 3a87b964edd3 ("net/mlx5: create Tx queues with DevX")
Cc: stable@dpdk.org

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
 drivers/net/mlx5/mlx5_txq.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_txq.c b/drivers/net/mlx5/mlx5_txq.c
index 1bb667d..fc730fa 100644
--- a/drivers/net/mlx5/mlx5_txq.c
+++ b/drivers/net/mlx5/mlx5_txq.c
@@ -1050,8 +1050,8 @@
 			dev->data->port_id, txq_data->idx);
 		goto error;
 	}
-	/* Allocate doorbell record for completion queue. */
-	txq_obj->cq_dbrec_offset = mlx5_get_dbr(sh->ctx,
+	/* Allocate doorbell record for send queue. */
+	txq_obj->sq_dbrec_offset = mlx5_get_dbr(sh->ctx,
 						&priv->dbrpgs,
 						&txq_obj->sq_dbrec_page);
 	if (txq_obj->sq_dbrec_offset < 0)
@@ -1076,9 +1076,9 @@
 	sq_attr.wq_attr.log_wq_stride = rte_log2_u32(MLX5_WQE_SIZE);
 	sq_attr.wq_attr.log_wq_sz = txq_data->wqe_n;
 	sq_attr.wq_attr.dbr_umem_valid = 1;
-	sq_attr.wq_attr.dbr_addr = txq_obj->cq_dbrec_offset;
+	sq_attr.wq_attr.dbr_addr = txq_obj->sq_dbrec_offset;
 	sq_attr.wq_attr.dbr_umem_id =
-			mlx5_os_get_umem_id(txq_obj->cq_dbrec_page->umem);
+			mlx5_os_get_umem_id(txq_obj->sq_dbrec_page->umem);
 	sq_attr.wq_attr.wq_umem_valid = 1;
 	sq_attr.wq_attr.wq_umem_id = mlx5_os_get_umem_id(txq_obj->sq_umem);
 	sq_attr.wq_attr.wq_umem_offset = (uintptr_t)txq_obj->sq_buf % page_size;
-- 
1.8.3.1



* [dpdk-dev] [PATCH v1 02/15] net/mlx5: fix unused variable in Txq creation
  2020-10-01 14:09 [dpdk-dev] [PATCH v1 00/15] mlx5 Tx DevX/Verbs separation Michael Baum
  2020-10-01 14:09 ` [dpdk-dev] [PATCH v1 01/15] net/mlx5: fix send queue doorbell typo Michael Baum
@ 2020-10-01 14:09 ` Michael Baum
  2020-10-01 14:09 ` [dpdk-dev] [PATCH v1 03/15] net/mlx5: mitigate Tx queue reference counters Michael Baum
                   ` (13 subsequent siblings)
  15 siblings, 0 replies; 17+ messages in thread
From: Michael Baum @ 2020-10-01 14:09 UTC (permalink / raw)
  To: dev; +Cc: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko, stable

When a CQ is not created by DevX, it can be allocated either by a DV
function or by a regular Verbs function.

The CQ DV attributes variable was wrongly defined and initialized in the
Tx queue creation flow while the CQ is created by the regular Verbs
function, leaving the attributes variable unused.

Remove the unused variable.

Fixes: faf2667fe8d5 ("net/mlx5: separate DPDK from verbs Tx queue objects")
Cc: stable@dpdk.org

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
 drivers/net/mlx5/mlx5_txq.c | 4 ----
 1 file changed, 4 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_txq.c b/drivers/net/mlx5/mlx5_txq.c
index fc730fa..ef3137b 100644
--- a/drivers/net/mlx5/mlx5_txq.c
+++ b/drivers/net/mlx5/mlx5_txq.c
@@ -1168,7 +1168,6 @@ struct mlx5_txq_obj *
 	struct mlx5_txq_obj *txq_obj = NULL;
 	union {
 		struct ibv_qp_init_attr_ex init;
-		struct ibv_cq_init_attr_ex cq;
 		struct ibv_qp_attr mod;
 	} attr;
 	unsigned int cqe_n;
@@ -1198,9 +1197,6 @@ struct mlx5_txq_obj *
 		return NULL;
 	}
 	memset(&tmpl, 0, sizeof(struct mlx5_txq_obj));
-	attr.cq = (struct ibv_cq_init_attr_ex){
-		.comp_mask = 0,
-	};
 	cqe_n = desc / MLX5_TX_COMP_THRESH +
 		1 + MLX5_TX_COMP_THRESH_INLINE_DIV;
 	tmpl.cq = mlx5_glue->create_cq(priv->sh->ctx, cqe_n, NULL, NULL, 0);
-- 
1.8.3.1



* [dpdk-dev] [PATCH v1 03/15] net/mlx5: mitigate Tx queue reference counters
  2020-10-01 14:09 [dpdk-dev] [PATCH v1 00/15] mlx5 Tx DevX/Verbs separation Michael Baum
  2020-10-01 14:09 ` [dpdk-dev] [PATCH v1 01/15] net/mlx5: fix send queue doorbell typo Michael Baum
  2020-10-01 14:09 ` [dpdk-dev] [PATCH v1 02/15] net/mlx5: fix unused variable in Txq creation Michael Baum
@ 2020-10-01 14:09 ` Michael Baum
  2020-10-01 14:09 ` [dpdk-dev] [PATCH v1 04/15] net/mlx5: reorder Tx queue DevX object creation Michael Baum
                   ` (12 subsequent siblings)
  15 siblings, 0 replies; 17+ messages in thread
From: Michael Baum @ 2020-10-01 14:09 UTC (permalink / raw)
  To: dev; +Cc: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko

The Tx queue structures manage two different reference counters per
queue: the txq_ctrl reference counter and the txq_obj reference counter.

There is no real need to use two different counters; they just
complicate the release functions.
Remove the txq_obj counter and use only the txq_ctrl counter.
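
After the consolidation, the release flow reduces to a single decision
point. A condensed sketch of the mlx5_txq_release() change in the diff
below (not the complete function):

    if (!rte_atomic32_dec_and_test(&txq->refcnt))
            return 1;       /* Other references still exist. */
    if (txq->obj) {
            /* No second counter: release the object unconditionally. */
            mlx5_txq_obj_release(txq->obj);
            txq->obj = NULL;
    }
    txq_free_elts(txq);
    mlx5_free(txq);
    (*priv->txqs)[idx] = NULL;
    return 0;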

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
 drivers/net/mlx5/mlx5_rxtx.h |  4 +-
 drivers/net/mlx5/mlx5_txq.c  | 98 ++++++++++++++------------------------------
 2 files changed, 32 insertions(+), 70 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_rxtx.h b/drivers/net/mlx5/mlx5_rxtx.h
index 9ffa028..d947e0e 100644
--- a/drivers/net/mlx5/mlx5_rxtx.h
+++ b/drivers/net/mlx5/mlx5_rxtx.h
@@ -276,7 +276,6 @@ enum mlx5_txq_type {
 /* Verbs/DevX Tx queue elements. */
 struct mlx5_txq_obj {
 	LIST_ENTRY(mlx5_txq_obj) next; /* Pointer to the next element. */
-	rte_atomic32_t refcnt; /* Reference counter. */
 	struct mlx5_txq_ctrl *txq_ctrl; /* Pointer to the control queue. */
 	enum mlx5_txq_obj_type type; /* The txq object type. */
 	RTE_STD_C11
@@ -405,8 +404,7 @@ int mlx5_tx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 void mlx5_tx_uar_uninit_secondary(struct rte_eth_dev *dev);
 struct mlx5_txq_obj *mlx5_txq_obj_new(struct rte_eth_dev *dev, uint16_t idx,
 				      enum mlx5_txq_obj_type type);
-struct mlx5_txq_obj *mlx5_txq_obj_get(struct rte_eth_dev *dev, uint16_t idx);
-int mlx5_txq_obj_release(struct mlx5_txq_obj *txq_ibv);
+void mlx5_txq_obj_release(struct mlx5_txq_obj *txq_obj);
 int mlx5_txq_obj_verify(struct rte_eth_dev *dev);
 struct mlx5_txq_ctrl *mlx5_txq_new(struct rte_eth_dev *dev, uint16_t idx,
 				   uint16_t desc, unsigned int socket,
diff --git a/drivers/net/mlx5/mlx5_txq.c b/drivers/net/mlx5/mlx5_txq.c
index ef3137b..e8bf7d7 100644
--- a/drivers/net/mlx5/mlx5_txq.c
+++ b/drivers/net/mlx5/mlx5_txq.c
@@ -437,6 +437,7 @@
 	mlx5_txq_release(dev, idx);
 	return 0;
 }
+
 /**
  * DPDK callback to configure a TX queue.
  *
@@ -833,7 +834,6 @@
 	}
 	DRV_LOG(DEBUG, "port %u sxq %u updated with %p", dev->data->port_id,
 		idx, (void *)&tmpl);
-	rte_atomic32_inc(&tmpl->refcnt);
 	LIST_INSERT_HEAD(&priv->txqsobj, tmpl, next);
 	return tmpl;
 }
@@ -1126,7 +1126,6 @@
 	txq_ctrl->bf_reg = reg_addr;
 	txq_ctrl->uar_mmap_offset =
 		mlx5_os_get_devx_uar_mmap_offset(sh->tx_uar);
-	rte_atomic32_set(&txq_obj->refcnt, 1);
 	txq_uar_init(txq_ctrl);
 	LIST_INSERT_HEAD(&priv->txqsobj, txq_obj, next);
 	return txq_obj;
@@ -1360,7 +1359,6 @@ struct mlx5_txq_obj *
 #endif
 	txq_obj->qp = tmpl.qp;
 	txq_obj->cq = tmpl.cq;
-	rte_atomic32_inc(&txq_obj->refcnt);
 	txq_ctrl->bf_reg = qp.bf.reg;
 	if (qp.comp_mask & MLX5DV_QP_MASK_UAR_MMAP_OFFSET) {
 		txq_ctrl->uar_mmap_offset = qp.uar_mmap_offset;
@@ -1397,64 +1395,30 @@ struct mlx5_txq_obj *
 }
 
 /**
- * Get an Tx queue Verbs object.
- *
- * @param dev
- *   Pointer to Ethernet device.
- * @param idx
- *   Queue index in DPDK Tx queue array.
- *
- * @return
- *   The Verbs object if it exists.
- */
-struct mlx5_txq_obj *
-mlx5_txq_obj_get(struct rte_eth_dev *dev, uint16_t idx)
-{
-	struct mlx5_priv *priv = dev->data->dev_private;
-	struct mlx5_txq_ctrl *txq_ctrl;
-
-	if (idx >= priv->txqs_n)
-		return NULL;
-	if (!(*priv->txqs)[idx])
-		return NULL;
-	txq_ctrl = container_of((*priv->txqs)[idx], struct mlx5_txq_ctrl, txq);
-	if (txq_ctrl->obj)
-		rte_atomic32_inc(&txq_ctrl->obj->refcnt);
-	return txq_ctrl->obj;
-}
-
-/**
  * Release a Tx Verbs queue object.
  *
  * @param txq_obj
- *   Verbs Tx queue object.
- *
- * @return
- *   1 while a reference on it exists, 0 when freed.
+ *   Verbs Tx queue object.
  */
-int
+void
 mlx5_txq_obj_release(struct mlx5_txq_obj *txq_obj)
 {
 	MLX5_ASSERT(txq_obj);
-	if (rte_atomic32_dec_and_test(&txq_obj->refcnt)) {
-		if (txq_obj->type == MLX5_TXQ_OBJ_TYPE_DEVX_HAIRPIN) {
-			if (txq_obj->tis)
-				claim_zero(mlx5_devx_cmd_destroy(txq_obj->tis));
-		} else if (txq_obj->type == MLX5_TXQ_OBJ_TYPE_DEVX_SQ) {
-			txq_release_sq_resources(txq_obj);
-		} else {
-			claim_zero(mlx5_glue->destroy_qp(txq_obj->qp));
-			claim_zero(mlx5_glue->destroy_cq(txq_obj->cq));
-		}
-		if (txq_obj->txq_ctrl->txq.fcqs) {
-			mlx5_free(txq_obj->txq_ctrl->txq.fcqs);
-			txq_obj->txq_ctrl->txq.fcqs = NULL;
-		}
-		LIST_REMOVE(txq_obj, next);
-		mlx5_free(txq_obj);
-		return 0;
+	if (txq_obj->type == MLX5_TXQ_OBJ_TYPE_DEVX_HAIRPIN) {
+		if (txq_obj->tis)
+			claim_zero(mlx5_devx_cmd_destroy(txq_obj->tis));
+	} else if (txq_obj->type == MLX5_TXQ_OBJ_TYPE_DEVX_SQ) {
+		txq_release_sq_resources(txq_obj);
+	} else {
+		claim_zero(mlx5_glue->destroy_qp(txq_obj->qp));
+		claim_zero(mlx5_glue->destroy_cq(txq_obj->cq));
 	}
-	return 1;
+	if (txq_obj->txq_ctrl->txq.fcqs) {
+		mlx5_free(txq_obj->txq_ctrl->txq.fcqs);
+		txq_obj->txq_ctrl->txq.fcqs = NULL;
+	}
+	LIST_REMOVE(txq_obj, next);
+	mlx5_free(txq_obj);
 }
 
 /**
@@ -1967,12 +1931,11 @@ struct mlx5_txq_ctrl *
 mlx5_txq_get(struct rte_eth_dev *dev, uint16_t idx)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5_txq_data *txq_data = (*priv->txqs)[idx];
 	struct mlx5_txq_ctrl *ctrl = NULL;
 
-	if ((*priv->txqs)[idx]) {
-		ctrl = container_of((*priv->txqs)[idx], struct mlx5_txq_ctrl,
-				    txq);
-		mlx5_txq_obj_get(dev, idx);
+	if (txq_data) {
+		ctrl = container_of(txq_data, struct mlx5_txq_ctrl, txq);
 		rte_atomic32_inc(&ctrl->refcnt);
 	}
 	return ctrl;
@@ -1998,18 +1961,19 @@ struct mlx5_txq_ctrl *
 	if (!(*priv->txqs)[idx])
 		return 0;
 	txq = container_of((*priv->txqs)[idx], struct mlx5_txq_ctrl, txq);
-	if (txq->obj && !mlx5_txq_obj_release(txq->obj))
+	if (!rte_atomic32_dec_and_test(&txq->refcnt))
+		return 1;
+	if (txq->obj) {
+		mlx5_txq_obj_release(txq->obj);
 		txq->obj = NULL;
-	if (rte_atomic32_dec_and_test(&txq->refcnt)) {
-		txq_free_elts(txq);
-		mlx5_mr_btree_free(&txq->txq.mr_ctrl.cache_bh);
-		LIST_REMOVE(txq, next);
-		mlx5_free(txq);
-		(*priv->txqs)[idx] = NULL;
-		dev->data->tx_queue_state[idx] = RTE_ETH_QUEUE_STATE_STOPPED;
-		return 0;
 	}
-	return 1;
+	txq_free_elts(txq);
+	mlx5_mr_btree_free(&txq->txq.mr_ctrl.cache_bh);
+	LIST_REMOVE(txq, next);
+	mlx5_free(txq);
+	(*priv->txqs)[idx] = NULL;
+	dev->data->tx_queue_state[idx] = RTE_ETH_QUEUE_STATE_STOPPED;
+	return 0;
 }
 
 /**
-- 
1.8.3.1



* [dpdk-dev] [PATCH v1 04/15] net/mlx5: reorder Tx queue DevX object creation
  2020-10-01 14:09 [dpdk-dev] [PATCH v1 00/15] mlx5 Tx DevX/Verbs separation Michael Baum
                   ` (2 preceding siblings ...)
  2020-10-01 14:09 ` [dpdk-dev] [PATCH v1 03/15] net/mlx5: mitigate Tx queue reference counters Michael Baum
@ 2020-10-01 14:09 ` Michael Baum
  2020-10-01 14:09 ` [dpdk-dev] [PATCH v1 05/15] net/mlx5: reorder Tx queue Verbs " Michael Baum
                   ` (11 subsequent siblings)
  15 siblings, 0 replies; 17+ messages in thread
From: Michael Baum @ 2020-10-01 14:09 UTC (permalink / raw)
  To: dev; +Cc: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko

Move the creation of the send queue and the completion queue resources
from the mlx5_txq_obj_devx_new function into auxiliary functions.

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
 drivers/net/mlx5/mlx5_txq.c | 361 +++++++++++++++++++++++++++++---------------
 1 file changed, 239 insertions(+), 122 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_txq.c b/drivers/net/mlx5/mlx5_txq.c
index e8bf7d7..686b452 100644
--- a/drivers/net/mlx5/mlx5_txq.c
+++ b/drivers/net/mlx5/mlx5_txq.c
@@ -839,140 +839,133 @@
 }
 
 /**
- * Destroy the Tx queue DevX object.
+ * Release DevX SQ resources.
  *
- * @param txq_obj
- *   Txq object to destroy
+ * @param txq_obj
+ *   DevX Tx queue object.
  */
 static void
-txq_release_sq_resources(struct mlx5_txq_obj *txq_obj)
+txq_release_devx_sq_resources(struct mlx5_txq_obj *txq_obj)
 {
-	MLX5_ASSERT(txq_obj->type == MLX5_TXQ_OBJ_TYPE_DEVX_SQ);
-
 	if (txq_obj->sq_devx)
 		claim_zero(mlx5_devx_cmd_destroy(txq_obj->sq_devx));
-	if (txq_obj->sq_dbrec_page)
-		claim_zero(mlx5_release_dbr
-				(&txq_obj->txq_ctrl->priv->dbrpgs,
-				mlx5_os_get_umem_id
-					(txq_obj->sq_dbrec_page->umem),
-				txq_obj->sq_dbrec_offset));
 	if (txq_obj->sq_umem)
 		claim_zero(mlx5_glue->devx_umem_dereg(txq_obj->sq_umem));
 	if (txq_obj->sq_buf)
 		mlx5_free(txq_obj->sq_buf);
+	if (txq_obj->sq_dbrec_page)
+		claim_zero(mlx5_release_dbr(&txq_obj->txq_ctrl->priv->dbrpgs,
+					    mlx5_os_get_umem_id
+						 (txq_obj->sq_dbrec_page->umem),
+					    txq_obj->sq_dbrec_offset));
+}
+
+/**
+ * Release DevX Tx CQ resources.
+ *
+ * @param txq_obj
+ *   DevX Tx queue object.
+ */
+static void
+txq_release_devx_cq_resources(struct mlx5_txq_obj *txq_obj)
+{
 	if (txq_obj->cq_devx)
 		claim_zero(mlx5_devx_cmd_destroy(txq_obj->cq_devx));
-	if (txq_obj->cq_dbrec_page)
-		claim_zero(mlx5_release_dbr
-				(&txq_obj->txq_ctrl->priv->dbrpgs,
-				mlx5_os_get_umem_id
-					(txq_obj->cq_dbrec_page->umem),
-				txq_obj->cq_dbrec_offset));
 	if (txq_obj->cq_umem)
 		claim_zero(mlx5_glue->devx_umem_dereg(txq_obj->cq_umem));
 	if (txq_obj->cq_buf)
 		mlx5_free(txq_obj->cq_buf);
+	if (txq_obj->cq_dbrec_page)
+		claim_zero(mlx5_release_dbr(&txq_obj->txq_ctrl->priv->dbrpgs,
+					    mlx5_os_get_umem_id
+						 (txq_obj->cq_dbrec_page->umem),
+					    txq_obj->cq_dbrec_offset));
 }
 
 /**
- * Create the Tx queue DevX object.
+ * Destroy the Tx queue DevX object.
+ *
+ * @param txq_obj
+ *   Txq object to destroy.
+ */
+static void
+txq_release_devx_resources(struct mlx5_txq_obj *txq_obj)
+{
+	MLX5_ASSERT(txq_obj->type == MLX5_TXQ_OBJ_TYPE_DEVX_SQ);
+
+	txq_release_devx_sq_resources(txq_obj);
+	txq_release_devx_cq_resources(txq_obj);
+}
+
+#ifdef HAVE_MLX5DV_DEVX_UAR_OFFSET
+/**
+ * Create a DevX CQ object for a Tx queue.
  *
  * @param dev
  *   Pointer to Ethernet device.
+ * @param cqe_n
+ *   Number of entries in the CQ.
  * @param idx
- *   Queue index in DPDK Tx queue array
+ *   Queue index in DPDK Tx queue array.
+ * @param txq_obj
+ *   Pointer to Tx queue object data.
  *
  * @return
- *   The DevX object initialised, NULL otherwise and rte_errno is set.
+ *   The DevX CQ object initialized, NULL otherwise and rte_errno is set.
  */
-static struct mlx5_txq_obj *
-mlx5_txq_obj_devx_new(struct rte_eth_dev *dev, uint16_t idx)
+static struct mlx5_devx_obj *
+mlx5_devx_cq_new(struct rte_eth_dev *dev, uint32_t cqe_n, uint16_t idx,
+		 struct mlx5_txq_obj *txq_obj)
 {
-#ifndef HAVE_MLX5DV_DEVX_UAR_OFFSET
-	DRV_LOG(ERR, "port %u Tx queue %u cannot create with DevX, no UAR",
-		     dev->data->port_id, idx);
-	rte_errno = ENOMEM;
-	return NULL;
-#else
 	struct mlx5_priv *priv = dev->data->dev_private;
-	struct mlx5_dev_ctx_shared *sh = priv->sh;
 	struct mlx5_txq_data *txq_data = (*priv->txqs)[idx];
-	struct mlx5_txq_ctrl *txq_ctrl =
-		container_of(txq_data, struct mlx5_txq_ctrl, txq);
-	struct mlx5_devx_create_sq_attr sq_attr = { 0 };
-	struct mlx5_devx_modify_sq_attr msq_attr = { 0 };
+	struct mlx5_devx_obj *cq_obj = NULL;
 	struct mlx5_devx_cq_attr cq_attr = { 0 };
-	struct mlx5_txq_obj *txq_obj = NULL;
-	size_t page_size;
 	struct mlx5_cqe *cqe;
-	uint32_t i, nqe;
-	void *reg_addr;
-	size_t alignment = (size_t)-1;
-	int ret = 0;
+	size_t page_size;
+	size_t alignment;
+	uint32_t i;
+	int ret;
 
 	MLX5_ASSERT(txq_data);
-	MLX5_ASSERT(!txq_ctrl->obj);
+	MLX5_ASSERT(txq_obj);
 	page_size = rte_mem_page_size();
 	if (page_size == (size_t)-1) {
 		DRV_LOG(ERR, "Failed to get mem page size");
 		rte_errno = ENOMEM;
 		return NULL;
 	}
-	txq_obj = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO,
-			      sizeof(struct mlx5_txq_obj), 0,
-			      txq_ctrl->socket);
-	if (!txq_obj) {
-		DRV_LOG(ERR,
-			"port %u Tx queue %u cannot allocate memory resources",
-			dev->data->port_id, txq_data->idx);
+	/* Allocate memory buffer for CQEs. */
+	alignment = MLX5_CQE_BUF_ALIGNMENT;
+	if (alignment == (size_t)-1) {
+		DRV_LOG(ERR, "Failed to get CQE buf alignment");
 		rte_errno = ENOMEM;
 		return NULL;
 	}
-	txq_obj->type = MLX5_TXQ_OBJ_TYPE_DEVX_SQ;
-	txq_obj->txq_ctrl = txq_ctrl;
-	txq_obj->dev = dev;
-	/* Create the Completion Queue. */
-	nqe = (1UL << txq_data->elts_n) / MLX5_TX_COMP_THRESH +
-	       1 + MLX5_TX_COMP_THRESH_INLINE_DIV;
-	nqe = 1UL << log2above(nqe);
-	if (nqe > UINT16_MAX) {
+	cqe_n = 1UL << log2above(cqe_n);
+	if (cqe_n > UINT16_MAX) {
 		DRV_LOG(ERR,
 			"port %u Tx queue %u requests to many CQEs %u",
-			dev->data->port_id, txq_data->idx, nqe);
+			dev->data->port_id, txq_data->idx, cqe_n);
 		rte_errno = EINVAL;
-		goto error;
-	}
-	/* Allocate memory buffer for CQEs. */
-	alignment = MLX5_CQE_BUF_ALIGNMENT;
-	if (alignment == (size_t)-1) {
-		DRV_LOG(ERR, "Failed to get mem page size");
-		rte_errno = ENOMEM;
-		goto error;
+		return NULL;
 	}
 	txq_obj->cq_buf = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO,
-				      nqe * sizeof(struct mlx5_cqe),
+				      cqe_n * sizeof(struct mlx5_cqe),
 				      alignment,
-				      sh->numa_node);
+				      priv->sh->numa_node);
 	if (!txq_obj->cq_buf) {
 		DRV_LOG(ERR,
 			"port %u Tx queue %u cannot allocate memory (CQ)",
 			dev->data->port_id, txq_data->idx);
 		rte_errno = ENOMEM;
-		goto error;
+		return NULL;
 	}
-	txq_data->cqe_n = log2above(nqe);
-	txq_data->cqe_s = 1 << txq_data->cqe_n;
-	txq_data->cqe_m = txq_data->cqe_s - 1;
-	txq_data->cqes = (volatile struct mlx5_cqe *)txq_obj->cq_buf;
-	txq_data->cq_ci = 0;
-	txq_data->cq_pi = 0;
 	/* Register allocated buffer in user space with DevX. */
-	txq_obj->cq_umem = mlx5_glue->devx_umem_reg
-					(sh->ctx,
-					 (void *)txq_obj->cq_buf,
-					 nqe * sizeof(struct mlx5_cqe),
-					 IBV_ACCESS_LOCAL_WRITE);
+	txq_obj->cq_umem = mlx5_glue->devx_umem_reg(priv->sh->ctx,
+						(void *)txq_obj->cq_buf,
+						cqe_n * sizeof(struct mlx5_cqe),
+						IBV_ACCESS_LOCAL_WRITE);
 	if (!txq_obj->cq_umem) {
 		rte_errno = errno;
 		DRV_LOG(ERR,
@@ -981,46 +974,88 @@
 		goto error;
 	}
 	/* Allocate doorbell record for completion queue. */
-	txq_obj->cq_dbrec_offset = mlx5_get_dbr(sh->ctx,
+	txq_obj->cq_dbrec_offset = mlx5_get_dbr(priv->sh->ctx,
 						&priv->dbrpgs,
 						&txq_obj->cq_dbrec_page);
-	if (txq_obj->cq_dbrec_offset < 0)
+	if (txq_obj->cq_dbrec_offset < 0) {
+		rte_errno = errno;
+		DRV_LOG(ERR, "Failed to allocate CQ door-bell.");
 		goto error;
-	txq_data->cq_db = (volatile uint32_t *)(txq_obj->cq_dbrec_page->dbrs +
-						txq_obj->cq_dbrec_offset);
-	*txq_data->cq_db = 0;
-	/* Create completion queue object with DevX. */
+	}
 	cq_attr.cqe_size = (sizeof(struct mlx5_cqe) == 128) ?
 			    MLX5_CQE_SIZE_128B : MLX5_CQE_SIZE_64B;
-	cq_attr.uar_page_id = mlx5_os_get_devx_uar_page_id(sh->tx_uar);
-	cq_attr.eqn = sh->txpp.eqn;
+	cq_attr.uar_page_id = mlx5_os_get_devx_uar_page_id(priv->sh->tx_uar);
+	cq_attr.eqn = priv->sh->txpp.eqn;
 	cq_attr.q_umem_valid = 1;
 	cq_attr.q_umem_offset = (uintptr_t)txq_obj->cq_buf % page_size;
 	cq_attr.q_umem_id = mlx5_os_get_umem_id(txq_obj->cq_umem);
 	cq_attr.db_umem_valid = 1;
 	cq_attr.db_umem_offset = txq_obj->cq_dbrec_offset;
 	cq_attr.db_umem_id = mlx5_os_get_umem_id(txq_obj->cq_dbrec_page->umem);
-	cq_attr.log_cq_size = rte_log2_u32(nqe);
+	cq_attr.log_cq_size = rte_log2_u32(cqe_n);
 	cq_attr.log_page_size = rte_log2_u32(page_size);
-	txq_obj->cq_devx = mlx5_devx_cmd_create_cq(sh->ctx, &cq_attr);
-	if (!txq_obj->cq_devx) {
+	/* Create completion queue object with DevX. */
+	cq_obj = mlx5_devx_cmd_create_cq(priv->sh->ctx, &cq_attr);
+	if (!cq_obj) {
 		rte_errno = errno;
 		DRV_LOG(ERR, "port %u Tx queue %u CQ creation failure",
 			dev->data->port_id, idx);
 		goto error;
 	}
+	txq_data->cqe_n = log2above(cqe_n);
+	txq_data->cqe_s = 1 << txq_data->cqe_n;
 	/* Initial fill CQ buffer with invalid CQE opcode. */
 	cqe = (struct mlx5_cqe *)txq_obj->cq_buf;
 	for (i = 0; i < txq_data->cqe_s; i++) {
 		cqe->op_own = (MLX5_CQE_INVALID << 4) | MLX5_CQE_OWNER_MASK;
 		++cqe;
 	}
-	/* Create the Work Queue. */
-	nqe = RTE_MIN(1UL << txq_data->elts_n,
-		      (uint32_t)sh->device_attr.max_qp_wr);
+	return cq_obj;
+error:
+	ret = rte_errno;
+	txq_release_devx_cq_resources(txq_obj);
+	rte_errno = ret;
+	return NULL;
+}
+
+/**
+ * Create a SQ object using DevX.
+ *
+ * @param dev
+ *   Pointer to Ethernet device.
+ * @param idx
+ *   Queue index in DPDK Tx queue array.
+ * @param txq_obj
+ *   Pointer to Tx queue object data.
+ *
+ * @return
+ *   The DevX object initialized, NULL otherwise and rte_errno is set.
+ */
+static struct mlx5_devx_obj *
+mlx5_devx_sq_new(struct rte_eth_dev *dev, uint16_t idx,
+		 struct mlx5_txq_obj *txq_obj)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5_txq_data *txq_data = (*priv->txqs)[idx];
+	struct mlx5_devx_create_sq_attr sq_attr = { 0 };
+	struct mlx5_devx_obj *sq_obj = NULL;
+	size_t page_size;
+	uint32_t wqe_n;
+	int ret;
+
+	MLX5_ASSERT(txq_data);
+	MLX5_ASSERT(txq_obj);
+	page_size = rte_mem_page_size();
+	if (page_size == (size_t)-1) {
+		DRV_LOG(ERR, "Failed to get mem page size");
+		rte_errno = ENOMEM;
+		return NULL;
+	}
+	wqe_n = RTE_MIN(1UL << txq_data->elts_n,
+			(uint32_t)priv->sh->device_attr.max_qp_wr);
 	txq_obj->sq_buf = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO,
-				      nqe * sizeof(struct mlx5_wqe),
-				      page_size, sh->numa_node);
+				      wqe_n * sizeof(struct mlx5_wqe),
+				      page_size, priv->sh->numa_node);
 	if (!txq_obj->sq_buf) {
 		DRV_LOG(ERR,
 			"port %u Tx queue %u cannot allocate memory (SQ)",
@@ -1028,20 +1063,11 @@
 		rte_errno = ENOMEM;
 		goto error;
 	}
-	txq_data->wqe_n = log2above(nqe);
-	txq_data->wqe_s = 1 << txq_data->wqe_n;
-	txq_data->wqe_m = txq_data->wqe_s - 1;
-	txq_data->wqes = (struct mlx5_wqe *)txq_obj->sq_buf;
-	txq_data->wqes_end = txq_data->wqes + txq_data->wqe_s;
-	txq_data->wqe_ci = 0;
-	txq_data->wqe_pi = 0;
-	txq_data->wqe_comp = 0;
-	txq_data->wqe_thres = txq_data->wqe_s / MLX5_TX_COMP_THRESH_INLINE_DIV;
 	/* Register allocated buffer in user space with DevX. */
 	txq_obj->sq_umem = mlx5_glue->devx_umem_reg
-					(sh->ctx,
+					(priv->sh->ctx,
 					 (void *)txq_obj->sq_buf,
-					 nqe * sizeof(struct mlx5_wqe),
+					 wqe_n * sizeof(struct mlx5_wqe),
 					 IBV_ACCESS_LOCAL_WRITE);
 	if (!txq_obj->sq_umem) {
 		rte_errno = errno;
@@ -1051,30 +1077,28 @@
 		goto error;
 	}
 	/* Allocate doorbell record for send queue. */
-	txq_obj->sq_dbrec_offset = mlx5_get_dbr(sh->ctx,
+	txq_obj->sq_dbrec_offset = mlx5_get_dbr(priv->sh->ctx,
 						&priv->dbrpgs,
 						&txq_obj->sq_dbrec_page);
-	if (txq_obj->sq_dbrec_offset < 0)
+	if (txq_obj->sq_dbrec_offset < 0) {
+		rte_errno = errno;
+		DRV_LOG(ERR, "Failed to allocate SQ door-bell.");
 		goto error;
-	txq_data->qp_db = (volatile uint32_t *)
-					(txq_obj->sq_dbrec_page->dbrs +
-					 txq_obj->sq_dbrec_offset +
-					 MLX5_SND_DBR * sizeof(uint32_t));
-	*txq_data->qp_db = 0;
-	/* Create Send Queue object with DevX. */
+	}
 	sq_attr.tis_lst_sz = 1;
-	sq_attr.tis_num = sh->tis->id;
+	sq_attr.tis_num = priv->sh->tis->id;
 	sq_attr.state = MLX5_SQC_STATE_RST;
 	sq_attr.cqn = txq_obj->cq_devx->id;
 	sq_attr.flush_in_error_en = 1;
 	sq_attr.allow_multi_pkt_send_wqe = !!priv->config.mps;
 	sq_attr.allow_swp = !!priv->config.swp;
 	sq_attr.min_wqe_inline_mode = priv->config.hca_attr.vport_inline_mode;
-	sq_attr.wq_attr.uar_page = mlx5_os_get_devx_uar_page_id(sh->tx_uar);
+	sq_attr.wq_attr.uar_page =
+				mlx5_os_get_devx_uar_page_id(priv->sh->tx_uar);
 	sq_attr.wq_attr.wq_type = MLX5_WQ_TYPE_CYCLIC;
-	sq_attr.wq_attr.pd = sh->pdn;
+	sq_attr.wq_attr.pd = priv->sh->pdn;
 	sq_attr.wq_attr.log_wq_stride = rte_log2_u32(MLX5_WQE_SIZE);
-	sq_attr.wq_attr.log_wq_sz = txq_data->wqe_n;
+	sq_attr.wq_attr.log_wq_sz = log2above(wqe_n);
 	sq_attr.wq_attr.dbr_umem_valid = 1;
 	sq_attr.wq_attr.dbr_addr = txq_obj->sq_dbrec_offset;
 	sq_attr.wq_attr.dbr_umem_id =
@@ -1082,13 +1106,106 @@
 	sq_attr.wq_attr.wq_umem_valid = 1;
 	sq_attr.wq_attr.wq_umem_id = mlx5_os_get_umem_id(txq_obj->sq_umem);
 	sq_attr.wq_attr.wq_umem_offset = (uintptr_t)txq_obj->sq_buf % page_size;
-	txq_obj->sq_devx = mlx5_devx_cmd_create_sq(sh->ctx, &sq_attr);
-	if (!txq_obj->sq_devx) {
+	/* Create Send Queue object with DevX. */
+	sq_obj = mlx5_devx_cmd_create_sq(priv->sh->ctx, &sq_attr);
+	if (!sq_obj) {
 		rte_errno = errno;
 		DRV_LOG(ERR, "port %u Tx queue %u SQ creation failure",
 			dev->data->port_id, idx);
 		goto error;
 	}
+	txq_data->wqe_n = log2above(wqe_n);
+	return sq_obj;
+error:
+	ret = rte_errno;
+	txq_release_devx_sq_resources(txq_obj);
+	rte_errno = ret;
+	return NULL;
+}
+#endif
+
+/**
+ * Create the Tx queue DevX object.
+ *
+ * @param dev
+ *   Pointer to Ethernet device.
+ * @param idx
+ *   Queue index in DPDK Tx queue array.
+ *
+ * @return
+ *   The DevX object initialised, NULL otherwise and rte_errno is set.
+ */
+static struct mlx5_txq_obj *
+mlx5_txq_obj_devx_new(struct rte_eth_dev *dev, uint16_t idx)
+{
+#ifndef HAVE_MLX5DV_DEVX_UAR_OFFSET
+	DRV_LOG(ERR, "port %u Tx queue %u cannot create with DevX, no UAR",
+		     dev->data->port_id, idx);
+	rte_errno = ENOMEM;
+	return NULL;
+#else
+	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5_dev_ctx_shared *sh = priv->sh;
+	struct mlx5_txq_data *txq_data = (*priv->txqs)[idx];
+	struct mlx5_txq_ctrl *txq_ctrl =
+		container_of(txq_data, struct mlx5_txq_ctrl, txq);
+	struct mlx5_devx_modify_sq_attr msq_attr = { 0 };
+	struct mlx5_txq_obj *txq_obj = NULL;
+	void *reg_addr;
+	uint32_t cqe_n;
+	int ret = 0;
+
+	MLX5_ASSERT(txq_data);
+	MLX5_ASSERT(!txq_ctrl->obj);
+	txq_obj = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO,
+			      sizeof(struct mlx5_txq_obj), 0,
+			      txq_ctrl->socket);
+	if (!txq_obj) {
+		DRV_LOG(ERR,
+			"port %u Tx queue %u cannot allocate memory resources",
+			dev->data->port_id, txq_data->idx);
+		rte_errno = ENOMEM;
+		return NULL;
+	}
+	txq_obj->type = MLX5_TXQ_OBJ_TYPE_DEVX_SQ;
+	txq_obj->txq_ctrl = txq_ctrl;
+	txq_obj->dev = dev;
+	/* Create the Completion Queue. */
+	cqe_n = (1UL << txq_data->elts_n) / MLX5_TX_COMP_THRESH +
+		1 + MLX5_TX_COMP_THRESH_INLINE_DIV;
+	/* Create completion queue object with DevX. */
+	txq_obj->cq_devx = mlx5_devx_cq_new(dev, cqe_n, idx, txq_obj);
+	if (!txq_obj->cq_devx) {
+		rte_errno = errno;
+		goto error;
+	}
+	txq_data->cqe_m = txq_data->cqe_s - 1;
+	txq_data->cqes = (volatile struct mlx5_cqe *)txq_obj->cq_buf;
+	txq_data->cq_ci = 0;
+	txq_data->cq_pi = 0;
+	txq_data->cq_db = (volatile uint32_t *)(txq_obj->cq_dbrec_page->dbrs +
+						txq_obj->cq_dbrec_offset);
+	*txq_data->cq_db = 0;
+	/* Create Send Queue object with DevX. */
+	txq_obj->sq_devx = mlx5_devx_sq_new(dev, idx, txq_obj);
+	if (!txq_obj->sq_devx) {
+		rte_errno = errno;
+		goto error;
+	}
+	/* Create the Work Queue. */
+	txq_data->wqe_s = 1 << txq_data->wqe_n;
+	txq_data->wqe_m = txq_data->wqe_s - 1;
+	txq_data->wqes = (struct mlx5_wqe *)txq_obj->sq_buf;
+	txq_data->wqes_end = txq_data->wqes + txq_data->wqe_s;
+	txq_data->wqe_ci = 0;
+	txq_data->wqe_pi = 0;
+	txq_data->wqe_comp = 0;
+	txq_data->wqe_thres = txq_data->wqe_s / MLX5_TX_COMP_THRESH_INLINE_DIV;
+	txq_data->qp_db = (volatile uint32_t *)
+					(txq_obj->sq_dbrec_page->dbrs +
+					 txq_obj->sq_dbrec_offset +
+					 MLX5_SND_DBR * sizeof(uint32_t));
+	*txq_data->qp_db = 0;
 	txq_data->qp_num_8s = txq_obj->sq_devx->id << 8;
 	/* Change Send Queue state to Ready-to-Send. */
 	msq_attr.sq_state = MLX5_SQC_STATE_RST;
@@ -1131,7 +1248,7 @@
 	return txq_obj;
 error:
 	ret = rte_errno; /* Save rte_errno before cleanup. */
-	txq_release_sq_resources(txq_obj);
+	txq_release_devx_resources(txq_obj);
 	if (txq_data->fcqs) {
 		mlx5_free(txq_data->fcqs);
 		txq_data->fcqs = NULL;
@@ -1408,7 +1525,7 @@ struct mlx5_txq_obj *
 		if (txq_obj->tis)
 			claim_zero(mlx5_devx_cmd_destroy(txq_obj->tis));
 	} else if (txq_obj->type == MLX5_TXQ_OBJ_TYPE_DEVX_SQ) {
-		txq_release_sq_resources(txq_obj);
+		txq_release_devx_resources(txq_obj);
 	} else {
 		claim_zero(mlx5_glue->destroy_qp(txq_obj->qp));
 		claim_zero(mlx5_glue->destroy_cq(txq_obj->cq));
-- 
1.8.3.1



* [dpdk-dev] [PATCH v1 05/15] net/mlx5: reorder Tx queue Verbs object creation
  2020-10-01 14:09 [dpdk-dev] [PATCH v1 00/15] mlx5 Tx DevX/Verbs separation Michael Baum
                   ` (3 preceding siblings ...)
  2020-10-01 14:09 ` [dpdk-dev] [PATCH v1 04/15] net/mlx5: reorder Tx queue DevX object creation Michael Baum
@ 2020-10-01 14:09 ` Michael Baum
  2020-10-01 14:09 ` [dpdk-dev] [PATCH v1 06/15] net/mlx5: reposition the event queue number field Michael Baum
                   ` (10 subsequent siblings)
  15 siblings, 0 replies; 17+ messages in thread
From: Michael Baum @ 2020-10-01 14:09 UTC (permalink / raw)
  To: dev; +Cc: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko

Move the creation of the queue pair (QP) from the mlx5_txq_obj_new
function into an auxiliary function.

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
 drivers/net/mlx5/mlx5_txq.c | 117 +++++++++++++++++++++++++-------------------
 1 file changed, 68 insertions(+), 49 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_txq.c b/drivers/net/mlx5/mlx5_txq.c
index 686b452..f2ecfc4 100644
--- a/drivers/net/mlx5/mlx5_txq.c
+++ b/drivers/net/mlx5/mlx5_txq.c
@@ -1260,6 +1260,66 @@
 }
 
 /**
+ * Create a QP Verbs object.
+ *
+ * @param dev
+ *   Pointer to Ethernet device.
+ * @param idx
+ *   Queue index in DPDK Tx queue array.
+ * @param txq_obj
+ *   Pointer to Tx queue object data.
+ *
+ * @return
+ *   The QP Verbs object initialized, NULL otherwise and rte_errno is set.
+ */
+static struct ibv_qp *
+mlx5_ibv_qp_new(struct rte_eth_dev *dev, uint16_t idx,
+		struct mlx5_txq_obj *txq_obj)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5_txq_data *txq_data = (*priv->txqs)[idx];
+	struct mlx5_txq_ctrl *txq_ctrl =
+			container_of(txq_data, struct mlx5_txq_ctrl, txq);
+	struct ibv_qp *qp_obj = NULL;
+	struct ibv_qp_init_attr_ex qp_attr = { 0 };
+	const int desc = 1 << txq_data->elts_n;
+
+	MLX5_ASSERT(!txq_ctrl->obj);
+	/* CQ to be associated with the send queue. */
+	qp_attr.send_cq = txq_obj->cq;
+	/* CQ to be associated with the receive queue. */
+	qp_attr.recv_cq = txq_obj->cq;
+	/* Max number of outstanding WRs. */
+	qp_attr.cap.max_send_wr = ((priv->sh->device_attr.max_qp_wr < desc) ?
+				   priv->sh->device_attr.max_qp_wr : desc);
+	/*
+	 * Max number of scatter/gather elements in a WR, must be 1 to prevent
+	 * libmlx5 from trying to affect too much memory. TX gather is not
+	 * impacted by the device_attr.max_sge limit and will still work
+	 * properly.
+	 */
+	qp_attr.cap.max_send_sge = 1;
+	qp_attr.qp_type = IBV_QPT_RAW_PACKET;
+	/* Do *NOT* enable this, completion events are managed per Tx burst. */
+	qp_attr.sq_sig_all = 0;
+	qp_attr.pd = priv->sh->pd;
+	qp_attr.comp_mask = IBV_QP_INIT_ATTR_PD;
+	if (txq_data->inlen_send)
+		qp_attr.cap.max_inline_data = txq_ctrl->max_inline_data;
+	if (txq_data->tso_en) {
+		qp_attr.max_tso_header = txq_ctrl->max_tso_header;
+		qp_attr.comp_mask |= IBV_QP_INIT_ATTR_MAX_TSO_HEADER;
+	}
+	qp_obj = mlx5_glue->create_qp_ex(priv->sh->ctx, &qp_attr);
+	if (qp_obj == NULL) {
+		DRV_LOG(ERR, "port %u Tx queue %u QP creation failure",
+			dev->data->port_id, idx);
+		rte_errno = errno;
+	}
+	return qp_obj;
+}
+
+/**
  * Create the Tx queue Verbs object.
  *
  * @param dev
@@ -1282,10 +1342,7 @@ struct mlx5_txq_obj *
 		container_of(txq_data, struct mlx5_txq_ctrl, txq);
 	struct mlx5_txq_obj tmpl;
 	struct mlx5_txq_obj *txq_obj = NULL;
-	union {
-		struct ibv_qp_init_attr_ex init;
-		struct ibv_qp_attr mod;
-	} attr;
+	struct ibv_qp_attr mod;
 	unsigned int cqe_n;
 	struct mlx5dv_qp qp = { .comp_mask = MLX5DV_QP_MASK_UAR_MMAP_OFFSET };
 	struct mlx5dv_cq cq_info;
@@ -1322,56 +1379,18 @@ struct mlx5_txq_obj *
 		rte_errno = errno;
 		goto error;
 	}
-	attr.init = (struct ibv_qp_init_attr_ex){
-		/* CQ to be associated with the send queue. */
-		.send_cq = tmpl.cq,
-		/* CQ to be associated with the receive queue. */
-		.recv_cq = tmpl.cq,
-		.cap = {
-			/* Max number of outstanding WRs. */
-			.max_send_wr =
-				((priv->sh->device_attr.max_qp_wr <
-				  desc) ?
-				 priv->sh->device_attr.max_qp_wr :
-				 desc),
-			/*
-			 * Max number of scatter/gather elements in a WR,
-			 * must be 1 to prevent libmlx5 from trying to affect
-			 * too much memory. TX gather is not impacted by the
-			 * device_attr.max_sge limit and will still work
-			 * properly.
-			 */
-			.max_send_sge = 1,
-		},
-		.qp_type = IBV_QPT_RAW_PACKET,
-		/*
-		 * Do *NOT* enable this, completions events are managed per
-		 * Tx burst.
-		 */
-		.sq_sig_all = 0,
-		.pd = priv->sh->pd,
-		.comp_mask = IBV_QP_INIT_ATTR_PD,
-	};
-	if (txq_data->inlen_send)
-		attr.init.cap.max_inline_data = txq_ctrl->max_inline_data;
-	if (txq_data->tso_en) {
-		attr.init.max_tso_header = txq_ctrl->max_tso_header;
-		attr.init.comp_mask |= IBV_QP_INIT_ATTR_MAX_TSO_HEADER;
-	}
-	tmpl.qp = mlx5_glue->create_qp_ex(priv->sh->ctx, &attr.init);
+	tmpl.qp = mlx5_ibv_qp_new(dev, idx, &tmpl);
 	if (tmpl.qp == NULL) {
-		DRV_LOG(ERR, "port %u Tx queue %u QP creation failure",
-			dev->data->port_id, idx);
 		rte_errno = errno;
 		goto error;
 	}
-	attr.mod = (struct ibv_qp_attr){
+	mod = (struct ibv_qp_attr){
 		/* Move the QP to this state. */
 		.qp_state = IBV_QPS_INIT,
 		/* IB device port number. */
 		.port_num = (uint8_t)priv->dev_port,
 	};
-	ret = mlx5_glue->modify_qp(tmpl.qp, &attr.mod,
+	ret = mlx5_glue->modify_qp(tmpl.qp, &mod,
 				   (IBV_QP_STATE | IBV_QP_PORT));
 	if (ret) {
 		DRV_LOG(ERR,
@@ -1380,10 +1399,10 @@ struct mlx5_txq_obj *
 		rte_errno = errno;
 		goto error;
 	}
-	attr.mod = (struct ibv_qp_attr){
+	mod = (struct ibv_qp_attr){
 		.qp_state = IBV_QPS_RTR
 	};
-	ret = mlx5_glue->modify_qp(tmpl.qp, &attr.mod, IBV_QP_STATE);
+	ret = mlx5_glue->modify_qp(tmpl.qp, &mod, IBV_QP_STATE);
 	if (ret) {
 		DRV_LOG(ERR,
 			"port %u Tx queue %u QP state to IBV_QPS_RTR failed",
@@ -1391,8 +1410,8 @@ struct mlx5_txq_obj *
 		rte_errno = errno;
 		goto error;
 	}
-	attr.mod.qp_state = IBV_QPS_RTS;
-	ret = mlx5_glue->modify_qp(tmpl.qp, &attr.mod, IBV_QP_STATE);
+	mod.qp_state = IBV_QPS_RTS;
+	ret = mlx5_glue->modify_qp(tmpl.qp, &mod, IBV_QP_STATE);
 	if (ret) {
 		DRV_LOG(ERR,
 			"port %u Tx queue %u QP state to IBV_QPS_RTS failed",
-- 
1.8.3.1



* [dpdk-dev] [PATCH v1 06/15] net/mlx5: reposition the event queue number field
  2020-10-01 14:09 [dpdk-dev] [PATCH v1 00/15] mlx5 Tx DevX/Verbs separation Michael Baum
                   ` (4 preceding siblings ...)
  2020-10-01 14:09 ` [dpdk-dev] [PATCH v1 05/15] net/mlx5: reorder Tx queue Verbs " Michael Baum
@ 2020-10-01 14:09 ` Michael Baum
  2020-10-01 14:09 ` [dpdk-dev] [PATCH v1 07/15] net/mlx5: separate Tx queue object creations Michael Baum
                   ` (9 subsequent siblings)
  15 siblings, 0 replies; 17+ messages in thread
From: Michael Baum @ 2020-10-01 14:09 UTC (permalink / raw)
  To: dev; +Cc: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko

Move the eqn field to be a field of the shared device context (sh)
directly, since it is relevant for both Tx and Rx.
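
Condensed from the diffs below (a sketch, not the verbatim code): the
EQN is queried once when the shared device context is created, and every
later CQ creation simply reuses sh->eqn:

    /* mlx5.c, shared device context creation. */
    if (sh->devx) {
            uint32_t lcore = (uint32_t)rte_lcore_to_cpu_id(-1);

            if (mlx5_glue->devx_query_eqn(sh->ctx, lcore, &sh->eqn)) {
                    rte_errno = errno;
                    goto error;
            }
    }

    /* Any CQ creation afterwards (mlx5_devx.c/mlx5_txpp.c/mlx5_txq.c). */
    cq_attr.eqn = sh->eqn;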

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
 drivers/net/mlx5/mlx5.c      | 10 ++++++++++
 drivers/net/mlx5/mlx5.h      |  2 +-
 drivers/net/mlx5/mlx5_devx.c |  9 +--------
 drivers/net/mlx5/mlx5_txpp.c | 28 ++++++++--------------------
 drivers/net/mlx5/mlx5_txq.c  |  2 +-
 5 files changed, 21 insertions(+), 30 deletions(-)

diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index 01ead6e..e5ca392 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -925,6 +925,16 @@ struct mlx5_dev_ctx_shared *
 		goto error;
 	}
 	if (sh->devx) {
+		uint32_t lcore = (uint32_t)rte_lcore_to_cpu_id(-1);
+
+		/* Query the EQN for this core. */
+		err = mlx5_glue->devx_query_eqn(sh->ctx, lcore, &sh->eqn);
+		if (err) {
+			rte_errno = errno;
+			DRV_LOG(ERR, "Failed to query event queue number %d.",
+				rte_errno);
+			goto error;
+		}
 		err = mlx5_os_get_pdn(sh->pd, &sh->pdn);
 		if (err) {
 			DRV_LOG(ERR, "Fail to extract pdn from PD");
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index bd91e16..050d3a9 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -561,7 +561,6 @@ struct mlx5_dev_txpp {
 	uint32_t tick; /* Completion tick duration in nanoseconds. */
 	uint32_t test; /* Packet pacing test mode. */
 	int32_t skew; /* Scheduling skew. */
-	uint32_t eqn; /* Event Queue number. */
 	struct rte_intr_handle intr_handle; /* Periodic interrupt. */
 	void *echan; /* Event Channel. */
 	struct mlx5_txpp_wq clock_queue; /* Clock Queue. */
@@ -603,6 +602,7 @@ struct mlx5_dev_ctx_shared {
 	LIST_ENTRY(mlx5_dev_ctx_shared) next;
 	uint32_t refcnt;
 	uint32_t devx:1; /* Opened with DV. */
+	uint32_t eqn; /* Event Queue number. */
 	uint32_t max_port; /* Maximal IB device port index. */
 	void *ctx; /* Verbs/DV/DevX context. */
 	void *pd; /* Protection Domain. */
diff --git a/drivers/net/mlx5/mlx5_devx.c b/drivers/net/mlx5/mlx5_devx.c
index cb4a522..cddfe43 100644
--- a/drivers/net/mlx5/mlx5_devx.c
+++ b/drivers/net/mlx5/mlx5_devx.c
@@ -350,11 +350,9 @@
 	struct mlx5_rxq_ctrl *rxq_ctrl =
 		container_of(rxq_data, struct mlx5_rxq_ctrl, rxq);
 	size_t page_size = rte_mem_page_size();
-	uint32_t lcore = (uint32_t)rte_lcore_to_cpu_id(-1);
 	unsigned int cqe_n = mlx5_rxq_cqe_num(rxq_data);
 	struct mlx5_devx_dbr_page *dbr_page;
 	int64_t dbr_offset;
-	uint32_t eqn = 0;
 	void *buf = NULL;
 	uint16_t event_nums[1] = {0};
 	uint32_t log_cqe_n;
@@ -392,12 +390,6 @@
 		cq_attr.cqe_size = MLX5_CQE_SIZE_128B;
 	log_cqe_n = log2above(cqe_n);
 	cq_size = sizeof(struct mlx5_cqe) * (1 << log_cqe_n);
-	/* Query the EQN for this core. */
-	if (mlx5_glue->devx_query_eqn(priv->sh->ctx, lcore, &eqn)) {
-		DRV_LOG(ERR, "Failed to query EQN for CQ.");
-		goto error;
-	}
-	cq_attr.eqn = eqn;
 	buf = rte_calloc_socket(__func__, 1, cq_size, page_size,
 				rxq_ctrl->socket);
 	if (!buf) {
@@ -425,6 +417,7 @@
 	rxq_data->cq_uar =
 			mlx5_os_get_devx_uar_base_addr(priv->sh->devx_rx_uar);
 	/* Create CQ using DevX API. */
+	cq_attr.eqn = priv->sh->eqn;
 	cq_attr.uar_page_id =
 			mlx5_os_get_devx_uar_page_id(priv->sh->devx_rx_uar);
 	cq_attr.q_umem_id = mlx5_os_get_umem_id(rxq_ctrl->cq_umem);
diff --git a/drivers/net/mlx5/mlx5_txpp.c b/drivers/net/mlx5/mlx5_txpp.c
index 011e479..37355fa 100644
--- a/drivers/net/mlx5/mlx5_txpp.c
+++ b/drivers/net/mlx5/mlx5_txpp.c
@@ -31,36 +31,24 @@
 
 /* Destroy Event Queue Notification Channel. */
 static void
-mlx5_txpp_destroy_eqn(struct mlx5_dev_ctx_shared *sh)
+mlx5_txpp_destroy_event_channel(struct mlx5_dev_ctx_shared *sh)
 {
 	if (sh->txpp.echan) {
 		mlx5_glue->devx_destroy_event_channel(sh->txpp.echan);
 		sh->txpp.echan = NULL;
 	}
-	sh->txpp.eqn = 0;
 }
 
 /* Create Event Queue Notification Channel. */
 static int
-mlx5_txpp_create_eqn(struct mlx5_dev_ctx_shared *sh)
+mlx5_txpp_create_event_channel(struct mlx5_dev_ctx_shared *sh)
 {
-	uint32_t lcore;
-
 	MLX5_ASSERT(!sh->txpp.echan);
-	lcore = (uint32_t)rte_lcore_to_cpu_id(-1);
-	if (mlx5_glue->devx_query_eqn(sh->ctx, lcore, &sh->txpp.eqn)) {
-		rte_errno = errno;
-		DRV_LOG(ERR, "Failed to query EQ number %d.", rte_errno);
-		sh->txpp.eqn = 0;
-		return -rte_errno;
-	}
 	sh->txpp.echan = mlx5_glue->devx_create_event_channel(sh->ctx,
 			MLX5DV_DEVX_CREATE_EVENT_CHANNEL_FLAGS_OMIT_EV_DATA);
 	if (!sh->txpp.echan) {
-		sh->txpp.eqn = 0;
 		rte_errno = errno;
-		DRV_LOG(ERR, "Failed to create event channel %d.",
-			rte_errno);
+		DRV_LOG(ERR, "Failed to create event channel %d.", rte_errno);
 		return -rte_errno;
 	}
 	return 0;
@@ -285,7 +273,7 @@
 	cq_attr.cqe_size = (sizeof(struct mlx5_cqe) == 128) ?
 			    MLX5_CQE_SIZE_128B : MLX5_CQE_SIZE_64B;
 	cq_attr.uar_page_id = mlx5_os_get_devx_uar_page_id(sh->tx_uar);
-	cq_attr.eqn = sh->txpp.eqn;
+	cq_attr.eqn = sh->eqn;
 	cq_attr.q_umem_valid = 1;
 	cq_attr.q_umem_offset = 0;
 	cq_attr.q_umem_id = mlx5_os_get_umem_id(wq->cq_umem);
@@ -525,7 +513,7 @@
 	cq_attr.use_first_only = 1;
 	cq_attr.overrun_ignore = 1;
 	cq_attr.uar_page_id = mlx5_os_get_devx_uar_page_id(sh->tx_uar);
-	cq_attr.eqn = sh->txpp.eqn;
+	cq_attr.eqn = sh->eqn;
 	cq_attr.q_umem_valid = 1;
 	cq_attr.q_umem_offset = 0;
 	cq_attr.q_umem_id = mlx5_os_get_umem_id(wq->cq_umem);
@@ -951,7 +939,7 @@
 	sh->txpp.test = !!(tx_pp < 0);
 	sh->txpp.skew = priv->config.tx_skew;
 	sh->txpp.freq = priv->config.hca_attr.dev_freq_khz;
-	ret = mlx5_txpp_create_eqn(sh);
+	ret = mlx5_txpp_create_event_channel(sh);
 	if (ret)
 		goto exit;
 	ret = mlx5_txpp_alloc_pp_index(sh);
@@ -972,7 +960,7 @@
 		mlx5_txpp_destroy_rearm_queue(sh);
 		mlx5_txpp_destroy_clock_queue(sh);
 		mlx5_txpp_free_pp_index(sh);
-		mlx5_txpp_destroy_eqn(sh);
+		mlx5_txpp_destroy_event_channel(sh);
 		sh->txpp.tick = 0;
 		sh->txpp.test = 0;
 		sh->txpp.skew = 0;
@@ -994,7 +982,7 @@
 	mlx5_txpp_destroy_rearm_queue(sh);
 	mlx5_txpp_destroy_clock_queue(sh);
 	mlx5_txpp_free_pp_index(sh);
-	mlx5_txpp_destroy_eqn(sh);
+	mlx5_txpp_destroy_event_channel(sh);
 	sh->txpp.tick = 0;
 	sh->txpp.test = 0;
 	sh->txpp.skew = 0;
diff --git a/drivers/net/mlx5/mlx5_txq.c b/drivers/net/mlx5/mlx5_txq.c
index f2ecfc4..c678971 100644
--- a/drivers/net/mlx5/mlx5_txq.c
+++ b/drivers/net/mlx5/mlx5_txq.c
@@ -985,7 +985,7 @@
 	cq_attr.cqe_size = (sizeof(struct mlx5_cqe) == 128) ?
 			    MLX5_CQE_SIZE_128B : MLX5_CQE_SIZE_64B;
 	cq_attr.uar_page_id = mlx5_os_get_devx_uar_page_id(priv->sh->tx_uar);
-	cq_attr.eqn = priv->sh->txpp.eqn;
+	cq_attr.eqn = priv->sh->eqn;
 	cq_attr.q_umem_valid = 1;
 	cq_attr.q_umem_offset = (uintptr_t)txq_obj->cq_buf % page_size;
 	cq_attr.q_umem_id = mlx5_os_get_umem_id(txq_obj->cq_umem);
-- 
1.8.3.1



* [dpdk-dev] [PATCH v1 07/15] net/mlx5: separate Tx queue object creations
  2020-10-01 14:09 [dpdk-dev] [PATCH v1 00/15] mlx5 Tx DevX/Verbs separation Michael Baum
                   ` (5 preceding siblings ...)
  2020-10-01 14:09 ` [dpdk-dev] [PATCH v1 06/15] net/mlx5: reposition the event queue number field Michael Baum
@ 2020-10-01 14:09 ` Michael Baum
  2020-10-01 14:09 ` [dpdk-dev] [PATCH v1 08/15] net/mlx5: share Tx control code Michael Baum
                   ` (8 subsequent siblings)
  15 siblings, 0 replies; 17+ messages in thread
From: Michael Baum @ 2020-10-01 14:09 UTC (permalink / raw)
  To: dev; +Cc: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko

In preparation for Windows OS support, the Verbs operations should be
separated into another file.
This way, the build can easily exclude the unsupported Verbs APIs from
the compilation process.

Define an operation structure and a DevX module in addition to the
existing Linux Verbs module.
Separate the Tx object creation into the Verbs/DevX modules and update
the operation structure according to the OS support and the user
configuration.

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
 drivers/net/mlx5/linux/mlx5_os.c    |  73 ++++
 drivers/net/mlx5/linux/mlx5_verbs.c | 285 +++++++++++++
 drivers/net/mlx5/linux/mlx5_verbs.h |   4 +
 drivers/net/mlx5/mlx5.h             |  42 ++
 drivers/net/mlx5/mlx5_devx.c        | 531 +++++++++++++++++++++++-
 drivers/net/mlx5/mlx5_devx.h        |   4 +
 drivers/net/mlx5/mlx5_rxtx.h        |  43 +-
 drivers/net/mlx5/mlx5_trigger.c     |  11 +-
 drivers/net/mlx5/mlx5_txq.c         | 798 +-----------------------------------
 9 files changed, 942 insertions(+), 849 deletions(-)

diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c
index 188a6d4..c5332a0 100644
--- a/drivers/net/mlx5/linux/mlx5_os.c
+++ b/drivers/net/mlx5/linux/mlx5_os.c
@@ -512,6 +512,70 @@
 }
 
 /**
+ * Create the Tx queue DevX/Verbs object.
+ *
+ * @param dev
+ *   Pointer to Ethernet device.
+ * @param idx
+ *   Queue index in DPDK Tx queue array.
+ *
+ * @return
+ *   The DevX/Verbs object initialized, NULL otherwise and rte_errno is set.
+ */
+static struct mlx5_txq_obj *
+mlx5_os_txq_obj_new(struct rte_eth_dev *dev, uint16_t idx)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5_dev_config *config = &priv->config;
+	struct mlx5_txq_data *txq_data = (*priv->txqs)[idx];
+	struct mlx5_txq_ctrl *txq_ctrl =
+			container_of(txq_data, struct mlx5_txq_ctrl, txq);
+
+	/*
+	 * When DevX is supported and both DV flow and dest TIR are enabled,
+	 * hairpin functions use the DevX API.
+	 * When, in addition, DV E-Switch is enabled and the DevX UAR offset is
+	 * supported, all Tx functions also use the DevX API.
+	 * Otherwise, all Tx functions use the Verbs API.
+	 */
+	if (config->devx && config->dv_flow_en && config->dest_tir) {
+		if (txq_ctrl->type == MLX5_TXQ_TYPE_HAIRPIN)
+			return mlx5_txq_devx_obj_new(dev, idx);
+#ifdef HAVE_MLX5DV_DEVX_UAR_OFFSET
+		if (config->dv_esw_en)
+			return mlx5_txq_devx_obj_new(dev, idx);
+#endif
+	}
+	return mlx5_txq_ibv_obj_new(dev, idx);
+}
+
+/**
+ * Release a Tx DevX/Verbs queue object.
+ *
+ * @param txq_obj
+ *   DevX/Verbs Tx queue object.
+ */
+static void
+mlx5_os_txq_obj_release(struct mlx5_txq_obj *txq_obj)
+{
+	struct mlx5_dev_config *config = &txq_obj->txq_ctrl->priv->config;
+
+	if (config->devx && config->dv_flow_en && config->dest_tir) {
+#ifdef HAVE_MLX5DV_DEVX_UAR_OFFSET
+		if (config->dv_esw_en) {
+			mlx5_txq_devx_obj_release(txq_obj);
+			return;
+		}
+#endif
+		if (txq_obj->txq_ctrl->type == MLX5_TXQ_TYPE_HAIRPIN) {
+			mlx5_txq_devx_obj_release(txq_obj);
+			return;
+		}
+	}
+	mlx5_txq_ibv_obj_release(txq_obj);
+}
+
+/**
  * Spawn an Ethernet device from Verbs information.
  *
  * @param dpdk_dev
@@ -1299,6 +1363,12 @@
 			goto error;
 		}
 	}
+	/*
+	 * Initialize the dev_ops structure with DevX/Verbs function pointers.
+	 * When DevX is supported and both DV flow and dest tir are enabled, all
+	 * Rx functions use the DevX API (except for drop, which has not yet
+	 * been implemented in DevX).
+	 */
 	if (config->devx && config->dv_flow_en && config->dest_tir) {
 		priv->obj_ops = devx_obj_ops;
 		priv->obj_ops.drop_action_create =
@@ -1308,6 +1378,9 @@
 	} else {
 		priv->obj_ops = ibv_obj_ops;
 	}
+	/* The Tx objects are managed through specific Linux wrapper functions. */
+	priv->obj_ops.txq_obj_new = mlx5_os_txq_obj_new;
+	priv->obj_ops.txq_obj_release = mlx5_os_txq_obj_release;
 	/* Supported Verbs flow priority number detection. */
 	err = mlx5_flow_discover_priorities(eth_dev);
 	if (err < 0) {
diff --git a/drivers/net/mlx5/linux/mlx5_verbs.c b/drivers/net/mlx5/linux/mlx5_verbs.c
index 20f659e..c79c4a2 100644
--- a/drivers/net/mlx5/linux/mlx5_verbs.c
+++ b/drivers/net/mlx5/linux/mlx5_verbs.c
@@ -782,6 +782,289 @@
 	mlx5_rxq_ibv_obj_drop_release(dev);
 }
 
+/**
+ * Create a QP Verbs object.
+ *
+ * @param dev
+ *   Pointer to Ethernet device.
+ * @param idx
+ *   Queue index in DPDK Tx queue array.
+ * @param txq_obj
+ *   Pointer to Tx queue object data.
+ *
+ * @return
+ *   The QP Verbs object initialized, NULL otherwise and rte_errno is set.
+ */
+static struct ibv_qp *
+mlx5_ibv_qp_new(struct rte_eth_dev *dev, uint16_t idx,
+		struct mlx5_txq_obj *txq_obj)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5_txq_data *txq_data = (*priv->txqs)[idx];
+	struct mlx5_txq_ctrl *txq_ctrl =
+			container_of(txq_data, struct mlx5_txq_ctrl, txq);
+	struct ibv_qp *qp_obj = NULL;
+	struct ibv_qp_init_attr_ex qp_attr = { 0 };
+	const int desc = 1 << txq_data->elts_n;
+
+	MLX5_ASSERT(!txq_ctrl->obj);
+	/* CQ to be associated with the send queue. */
+	qp_attr.send_cq = txq_obj->cq;
+	/* CQ to be associated with the receive queue. */
+	qp_attr.recv_cq = txq_obj->cq;
+	/* Max number of outstanding WRs. */
+	qp_attr.cap.max_send_wr = ((priv->sh->device_attr.max_qp_wr < desc) ?
+				   priv->sh->device_attr.max_qp_wr : desc);
+	/*
+	 * Max number of scatter/gather elements in a WR, must be 1 to prevent
+	 * libmlx5 from trying to affect too much memory. TX gather is not
+	 * impacted by the device_attr.max_sge limit and will still work
+	 * properly.
+	 */
+	qp_attr.cap.max_send_sge = 1;
+	qp_attr.qp_type = IBV_QPT_RAW_PACKET;
+	/* Do *NOT* enable this, completion events are managed per Tx burst. */
+	qp_attr.sq_sig_all = 0;
+	qp_attr.pd = priv->sh->pd;
+	qp_attr.comp_mask = IBV_QP_INIT_ATTR_PD;
+	if (txq_data->inlen_send)
+		qp_attr.cap.max_inline_data = txq_ctrl->max_inline_data;
+	if (txq_data->tso_en) {
+		qp_attr.max_tso_header = txq_ctrl->max_tso_header;
+		qp_attr.comp_mask |= IBV_QP_INIT_ATTR_MAX_TSO_HEADER;
+	}
+	qp_obj = mlx5_glue->create_qp_ex(priv->sh->ctx, &qp_attr);
+	if (qp_obj == NULL) {
+		DRV_LOG(ERR, "Port %u Tx queue %u QP creation failure.",
+			dev->data->port_id, idx);
+		rte_errno = errno;
+	}
+	return qp_obj;
+}
+
+/**
+ * Create the Tx queue Verbs object.
+ *
+ * @param dev
+ *   Pointer to Ethernet device.
+ * @param idx
+ *   Queue index in DPDK Tx queue array.
+ *
+ * @return
+ *   The Verbs object initialized, NULL otherwise and rte_errno is set.
+ */
+struct mlx5_txq_obj *
+mlx5_txq_ibv_obj_new(struct rte_eth_dev *dev, uint16_t idx)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5_txq_data *txq_data = (*priv->txqs)[idx];
+	struct mlx5_txq_ctrl *txq_ctrl =
+		container_of(txq_data, struct mlx5_txq_ctrl, txq);
+	struct mlx5_txq_obj tmpl;
+	struct mlx5_txq_obj *txq_obj = NULL;
+	struct ibv_qp_attr mod;
+	unsigned int cqe_n;
+	struct mlx5dv_qp qp;
+	struct mlx5dv_cq cq_info;
+	struct mlx5dv_obj obj;
+	const int desc = 1 << txq_data->elts_n;
+	int ret = 0;
+
+	MLX5_ASSERT(txq_data);
+	priv->verbs_alloc_ctx.type = MLX5_VERBS_ALLOC_TYPE_TX_QUEUE;
+	priv->verbs_alloc_ctx.obj = txq_ctrl;
+	if (mlx5_getenv_int("MLX5_ENABLE_CQE_COMPRESSION")) {
+		DRV_LOG(ERR, "Port %u MLX5_ENABLE_CQE_COMPRESSION "
+			"must never be set.", dev->data->port_id);
+		rte_errno = EINVAL;
+		return NULL;
+	}
+	memset(&tmpl, 0, sizeof(struct mlx5_txq_obj));
+	cqe_n = desc / MLX5_TX_COMP_THRESH +
+		1 + MLX5_TX_COMP_THRESH_INLINE_DIV;
+	tmpl.cq = mlx5_glue->create_cq(priv->sh->ctx, cqe_n, NULL, NULL, 0);
+	if (tmpl.cq == NULL) {
+		DRV_LOG(ERR, "Port %u Tx queue %u CQ creation failure.",
+			dev->data->port_id, idx);
+		rte_errno = errno;
+		goto error;
+	}
+	tmpl.qp = mlx5_ibv_qp_new(dev, idx, &tmpl);
+	if (tmpl.qp == NULL) {
+		rte_errno = errno;
+		goto error;
+	}
+	mod = (struct ibv_qp_attr){
+		/* Move the QP to this state. */
+		.qp_state = IBV_QPS_INIT,
+		/* IB device port number. */
+		.port_num = (uint8_t)priv->dev_port,
+	};
+	ret = mlx5_glue->modify_qp(tmpl.qp, &mod, (IBV_QP_STATE | IBV_QP_PORT));
+	if (ret) {
+		DRV_LOG(ERR,
+			"Port %u Tx queue %u QP state to IBV_QPS_INIT failed.",
+			dev->data->port_id, idx);
+		rte_errno = errno;
+		goto error;
+	}
+	mod = (struct ibv_qp_attr){
+		.qp_state = IBV_QPS_RTR
+	};
+	ret = mlx5_glue->modify_qp(tmpl.qp, &mod, IBV_QP_STATE);
+	if (ret) {
+		DRV_LOG(ERR,
+			"Port %u Tx queue %u QP state to IBV_QPS_RTR failed.",
+			dev->data->port_id, idx);
+		rte_errno = errno;
+		goto error;
+	}
+	mod.qp_state = IBV_QPS_RTS;
+	ret = mlx5_glue->modify_qp(tmpl.qp, &mod, IBV_QP_STATE);
+	if (ret) {
+		DRV_LOG(ERR,
+			"Port %u Tx queue %u QP state to IBV_QPS_RTS failed.",
+			dev->data->port_id, idx);
+		rte_errno = errno;
+		goto error;
+	}
+	txq_obj = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO,
+			      sizeof(struct mlx5_txq_obj), 0,
+			      txq_ctrl->socket);
+	if (!txq_obj) {
+		DRV_LOG(ERR, "Port %u Tx queue %u cannot allocate memory.",
+			dev->data->port_id, idx);
+		rte_errno = ENOMEM;
+		goto error;
+	}
+	qp.comp_mask = MLX5DV_QP_MASK_UAR_MMAP_OFFSET;
+#ifdef HAVE_IBV_FLOW_DV_SUPPORT
+	/* If using DevX, need additional mask to read tisn value. */
+	if (priv->sh->devx && !priv->sh->tdn)
+		qp.comp_mask |= MLX5DV_QP_MASK_RAW_QP_HANDLES;
+#endif
+	obj.cq.in = tmpl.cq;
+	obj.cq.out = &cq_info;
+	obj.qp.in = tmpl.qp;
+	obj.qp.out = &qp;
+	ret = mlx5_glue->dv_init_obj(&obj, MLX5DV_OBJ_CQ | MLX5DV_OBJ_QP);
+	if (ret != 0) {
+		rte_errno = errno;
+		goto error;
+	}
+	if (cq_info.cqe_size != RTE_CACHE_LINE_SIZE) {
+		DRV_LOG(ERR,
+			"Port %u wrong MLX5_CQE_SIZE environment variable"
+			" value: it should be set to %u.",
+			dev->data->port_id, RTE_CACHE_LINE_SIZE);
+		rte_errno = EINVAL;
+		goto error;
+	}
+	txq_data->cqe_n = log2above(cq_info.cqe_cnt);
+	txq_data->cqe_s = 1 << txq_data->cqe_n;
+	txq_data->cqe_m = txq_data->cqe_s - 1;
+	txq_data->qp_num_8s = ((struct ibv_qp *)tmpl.qp)->qp_num << 8;
+	txq_data->wqes = qp.sq.buf;
+	txq_data->wqe_n = log2above(qp.sq.wqe_cnt);
+	txq_data->wqe_s = 1 << txq_data->wqe_n;
+	txq_data->wqe_m = txq_data->wqe_s - 1;
+	txq_data->wqes_end = txq_data->wqes + txq_data->wqe_s;
+	txq_data->qp_db = &qp.dbrec[MLX5_SND_DBR];
+	txq_data->cq_db = cq_info.dbrec;
+	txq_data->cqes = (volatile struct mlx5_cqe *)cq_info.buf;
+	txq_data->cq_ci = 0;
+	txq_data->cq_pi = 0;
+	txq_data->wqe_ci = 0;
+	txq_data->wqe_pi = 0;
+	txq_data->wqe_comp = 0;
+	txq_data->wqe_thres = txq_data->wqe_s / MLX5_TX_COMP_THRESH_INLINE_DIV;
+	txq_data->fcqs = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO,
+				     txq_data->cqe_s * sizeof(*txq_data->fcqs),
+				     RTE_CACHE_LINE_SIZE, txq_ctrl->socket);
+	if (!txq_data->fcqs) {
+		DRV_LOG(ERR, "Port %u Tx queue %u can't allocate memory (FCQ).",
+			dev->data->port_id, idx);
+		rte_errno = ENOMEM;
+		goto error;
+	}
+#ifdef HAVE_IBV_FLOW_DV_SUPPORT
+	/*
+	 * If using DevX need to query and store TIS transport domain value.
+	 * This is done once per port.
+	 * Will use this value on Rx, when creating matching TIR.
+	 */
+	if (priv->sh->devx && !priv->sh->tdn) {
+		ret = mlx5_devx_cmd_qp_query_tis_td(tmpl.qp, qp.tisn,
+						    &priv->sh->tdn);
+		if (ret) {
+			DRV_LOG(ERR, "Fail to query port %u Tx queue %u QP TIS "
+				"transport domain.", dev->data->port_id, idx);
+			rte_errno = EINVAL;
+			goto error;
+		} else {
+			DRV_LOG(DEBUG, "Port %u Tx queue %u TIS number %d "
+				"transport domain %d.", dev->data->port_id,
+				idx, qp.tisn, priv->sh->tdn);
+		}
+	}
+#endif
+	txq_obj->qp = tmpl.qp;
+	txq_obj->cq = tmpl.cq;
+	txq_ctrl->bf_reg = qp.bf.reg;
+	if (qp.comp_mask & MLX5DV_QP_MASK_UAR_MMAP_OFFSET) {
+		txq_ctrl->uar_mmap_offset = qp.uar_mmap_offset;
+		DRV_LOG(DEBUG, "Port %u: uar_mmap_offset 0x%" PRIx64 ".",
+			dev->data->port_id, txq_ctrl->uar_mmap_offset);
+	} else {
+		DRV_LOG(ERR,
+			"Port %u failed to retrieve UAR info, invalid"
+			" libmlx5.so",
+			dev->data->port_id);
+		rte_errno = EINVAL;
+		goto error;
+	}
+	txq_uar_init(txq_ctrl);
+	txq_obj->txq_ctrl = txq_ctrl;
+	LIST_INSERT_HEAD(&priv->txqsobj, txq_obj, next);
+	priv->verbs_alloc_ctx.type = MLX5_VERBS_ALLOC_TYPE_NONE;
+	return txq_obj;
+error:
+	ret = rte_errno; /* Save rte_errno before cleanup. */
+	if (tmpl.cq)
+		claim_zero(mlx5_glue->destroy_cq(tmpl.cq));
+	if (tmpl.qp)
+		claim_zero(mlx5_glue->destroy_qp(tmpl.qp));
+	if (txq_data->fcqs) {
+		mlx5_free(txq_data->fcqs);
+		txq_data->fcqs = NULL;
+	}
+	if (txq_obj)
+		mlx5_free(txq_obj);
+	priv->verbs_alloc_ctx.type = MLX5_VERBS_ALLOC_TYPE_NONE;
+	rte_errno = ret; /* Restore rte_errno. */
+	return NULL;
+}
+
+/**
+ * Release a Tx Verbs queue object.
+ *
+ * @param txq_obj
+ *   Verbs Tx queue object.
+ */
+void
+mlx5_txq_ibv_obj_release(struct mlx5_txq_obj *txq_obj)
+{
+	MLX5_ASSERT(txq_obj);
+	claim_zero(mlx5_glue->destroy_qp(txq_obj->qp));
+	claim_zero(mlx5_glue->destroy_cq(txq_obj->cq));
+	if (txq_obj->txq_ctrl->txq.fcqs) {
+		mlx5_free(txq_obj->txq_ctrl->txq.fcqs);
+		txq_obj->txq_ctrl->txq.fcqs = NULL;
+	}
+	LIST_REMOVE(txq_obj, next);
+	mlx5_free(txq_obj);
+}
+
 struct mlx5_obj_ops ibv_obj_ops = {
 	.rxq_obj_modify_vlan_strip = mlx5_rxq_obj_modify_wq_vlan_strip,
 	.rxq_obj_new = mlx5_rxq_ibv_obj_new,
@@ -794,4 +1077,6 @@ struct mlx5_obj_ops ibv_obj_ops = {
 	.hrxq_destroy = mlx5_ibv_qp_destroy,
 	.drop_action_create = mlx5_ibv_drop_action_create,
 	.drop_action_destroy = mlx5_ibv_drop_action_destroy,
+	.txq_obj_new = mlx5_txq_ibv_obj_new,
+	.txq_obj_release = mlx5_txq_ibv_obj_release,
 };
diff --git a/drivers/net/mlx5/linux/mlx5_verbs.h b/drivers/net/mlx5/linux/mlx5_verbs.h
index 2e69c0f..7f6bb99 100644
--- a/drivers/net/mlx5/linux/mlx5_verbs.h
+++ b/drivers/net/mlx5/linux/mlx5_verbs.h
@@ -12,6 +12,10 @@ struct mlx5_verbs_ops {
 	mlx5_dereg_mr_t dereg_mr;
 };
 
+struct mlx5_txq_obj *mlx5_txq_ibv_obj_new(struct rte_eth_dev *dev,
+					  uint16_t idx);
+void mlx5_txq_ibv_obj_release(struct mlx5_txq_obj *txq_obj);
+
 /* Verbs ops struct */
 extern const struct mlx5_verbs_ops mlx5_verbs_ops;
 extern struct mlx5_obj_ops ibv_obj_ops;
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 050d3a9..8679750 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -735,6 +735,45 @@ struct mlx5_hrxq {
 	uint8_t rss_key[]; /* Hash key. */
 };
 
+enum mlx5_txq_obj_type {
+	MLX5_TXQ_OBJ_TYPE_IBV,		/* mlx5_txq_obj with ibv_wq. */
+	MLX5_TXQ_OBJ_TYPE_DEVX_SQ,	/* mlx5_txq_obj with mlx5_devx_sq. */
+	MLX5_TXQ_OBJ_TYPE_DEVX_HAIRPIN,
+	/* mlx5_txq_obj with mlx5_devx_tq and hairpin support. */
+};
+
+/* Verbs/DevX Tx queue elements. */
+struct mlx5_txq_obj {
+	LIST_ENTRY(mlx5_txq_obj) next; /* Pointer to the next element. */
+	struct mlx5_txq_ctrl *txq_ctrl; /* Pointer to the control queue. */
+	enum mlx5_txq_obj_type type; /* The txq object type. */
+	RTE_STD_C11
+	union {
+		struct {
+			void *cq; /* Completion Queue. */
+			void *qp; /* Queue Pair. */
+		};
+		struct {
+			struct mlx5_devx_obj *sq;
+			/* DevX object for Sx queue. */
+			struct mlx5_devx_obj *tis; /* The TIS object. */
+		};
+		struct {
+			struct rte_eth_dev *dev;
+			struct mlx5_devx_obj *cq_devx;
+			void *cq_umem;
+			void *cq_buf;
+			int64_t cq_dbrec_offset;
+			struct mlx5_devx_dbr_page *cq_dbrec_page;
+			struct mlx5_devx_obj *sq_devx;
+			void *sq_umem;
+			void *sq_buf;
+			int64_t sq_dbrec_offset;
+			struct mlx5_devx_dbr_page *sq_dbrec_page;
+		};
+	};
+};
+
 /* HW objects operations structure. */
 struct mlx5_obj_ops {
 	int (*rxq_obj_modify_vlan_strip)(struct mlx5_rxq_obj *rxq_obj, int on);
@@ -750,6 +789,9 @@ struct mlx5_obj_ops {
 	void (*hrxq_destroy)(struct mlx5_hrxq *hrxq);
 	int (*drop_action_create)(struct rte_eth_dev *dev);
 	void (*drop_action_destroy)(struct rte_eth_dev *dev);
+	struct mlx5_txq_obj *(*txq_obj_new)(struct rte_eth_dev *dev,
+					    uint16_t idx);
+	void (*txq_obj_release)(struct mlx5_txq_obj *txq_obj);
 };
 
 struct mlx5_priv {
diff --git a/drivers/net/mlx5/mlx5_devx.c b/drivers/net/mlx5/mlx5_devx.c
index cddfe43..0b6e116 100644
--- a/drivers/net/mlx5/mlx5_devx.c
+++ b/drivers/net/mlx5/mlx5_devx.c
@@ -791,7 +791,7 @@
 mlx5_devx_drop_action_create(struct rte_eth_dev *dev)
 {
 	(void)dev;
-	DRV_LOG(ERR, "DevX drop action is not supported yet");
+	DRV_LOG(ERR, "DevX drop action is not supported yet.");
 	rte_errno = ENOTSUP;
 	return -rte_errno;
 }
@@ -806,10 +806,535 @@
 mlx5_devx_drop_action_destroy(struct rte_eth_dev *dev)
 {
 	(void)dev;
-	DRV_LOG(ERR, "DevX drop action is not supported yet");
+	DRV_LOG(ERR, "DevX drop action is not supported yet.");
 	rte_errno = ENOTSUP;
 }
 
+/**
+ * Create the Tx hairpin queue object.
+ *
+ * @param dev
+ *   Pointer to Ethernet device.
+ * @param idx
+ *   Queue index in DPDK Tx queue array.
+ *
+ * @return
+ *   The hairpin DevX object initialized, NULL otherwise and rte_errno is set.
+ */
+static struct mlx5_txq_obj *
+mlx5_txq_obj_hairpin_new(struct rte_eth_dev *dev, uint16_t idx)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5_txq_data *txq_data = (*priv->txqs)[idx];
+	struct mlx5_txq_ctrl *txq_ctrl =
+		container_of(txq_data, struct mlx5_txq_ctrl, txq);
+	struct mlx5_devx_create_sq_attr attr = { 0 };
+	struct mlx5_txq_obj *tmpl = NULL;
+	uint32_t max_wq_data;
+
+	MLX5_ASSERT(txq_data);
+	MLX5_ASSERT(!txq_ctrl->obj);
+	tmpl = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO, sizeof(*tmpl), 0,
+			   txq_ctrl->socket);
+	if (!tmpl) {
+		DRV_LOG(ERR,
+			"Port %u Tx queue %u cannot allocate memory resources.",
+			dev->data->port_id, txq_data->idx);
+		rte_errno = ENOMEM;
+		return NULL;
+	}
+	tmpl->type = MLX5_TXQ_OBJ_TYPE_DEVX_HAIRPIN;
+	tmpl->txq_ctrl = txq_ctrl;
+	attr.hairpin = 1;
+	attr.tis_lst_sz = 1;
+	max_wq_data = priv->config.hca_attr.log_max_hairpin_wq_data_sz;
+	/* Jumbo frames > 9KB should be supported, and more packets. */
+	if (priv->config.log_hp_size != (uint32_t)MLX5_ARG_UNSET) {
+		if (priv->config.log_hp_size > max_wq_data) {
+			DRV_LOG(ERR, "Total data size %u power of 2 is "
+				"too large for hairpin.",
+				priv->config.log_hp_size);
+			mlx5_free(tmpl);
+			rte_errno = ERANGE;
+			return NULL;
+		}
+		attr.wq_attr.log_hairpin_data_sz = priv->config.log_hp_size;
+	} else {
+		attr.wq_attr.log_hairpin_data_sz =
+				(max_wq_data < MLX5_HAIRPIN_JUMBO_LOG_SIZE) ?
+				 max_wq_data : MLX5_HAIRPIN_JUMBO_LOG_SIZE;
+	}
+	/* Set the packets number to the maximum value for performance. */
+	attr.wq_attr.log_hairpin_num_packets =
+			attr.wq_attr.log_hairpin_data_sz -
+			MLX5_HAIRPIN_QUEUE_STRIDE;
+	attr.tis_num = priv->sh->tis->id;
+	tmpl->sq = mlx5_devx_cmd_create_sq(priv->sh->ctx, &attr);
+	if (!tmpl->sq) {
+		DRV_LOG(ERR,
+			"Port %u tx hairpin queue %u can't create SQ object.",
+			dev->data->port_id, idx);
+		mlx5_free(tmpl);
+		rte_errno = errno;
+		return NULL;
+	}
+	DRV_LOG(DEBUG, "Port %u sxq %u updated with %p.", dev->data->port_id,
+		idx, (void *)&tmpl);
+	LIST_INSERT_HEAD(&priv->txqsobj, tmpl, next);
+	return tmpl;
+}
+
+#ifdef HAVE_MLX5DV_DEVX_UAR_OFFSET
+/**
+ * Release DevX SQ resources.
+ *
+ * @param txq_obj
+ *   DevX Tx queue object.
+ */
+static void
+txq_release_devx_sq_resources(struct mlx5_txq_obj *txq_obj)
+{
+	if (txq_obj->sq_devx)
+		claim_zero(mlx5_devx_cmd_destroy(txq_obj->sq_devx));
+	if (txq_obj->sq_umem)
+		claim_zero(mlx5_glue->devx_umem_dereg(txq_obj->sq_umem));
+	if (txq_obj->sq_buf)
+		mlx5_free(txq_obj->sq_buf);
+	if (txq_obj->sq_dbrec_page)
+		claim_zero(mlx5_release_dbr(&txq_obj->txq_ctrl->priv->dbrpgs,
+					    mlx5_os_get_umem_id
+						 (txq_obj->sq_dbrec_page->umem),
+					    txq_obj->sq_dbrec_offset));
+}
+
+/**
+ * Release DevX Tx CQ resources.
+ *
+ * @param txq_obj
+ *   DevX Tx queue object.
+ */
+static void
+txq_release_devx_cq_resources(struct mlx5_txq_obj *txq_obj)
+{
+	if (txq_obj->cq_devx)
+		claim_zero(mlx5_devx_cmd_destroy(txq_obj->cq_devx));
+	if (txq_obj->cq_umem)
+		claim_zero(mlx5_glue->devx_umem_dereg(txq_obj->cq_umem));
+	if (txq_obj->cq_buf)
+		mlx5_free(txq_obj->cq_buf);
+	if (txq_obj->cq_dbrec_page)
+		claim_zero(mlx5_release_dbr(&txq_obj->txq_ctrl->priv->dbrpgs,
+					    mlx5_os_get_umem_id
+						 (txq_obj->cq_dbrec_page->umem),
+					    txq_obj->cq_dbrec_offset));
+}
+
+/**
+ * Destroy the Tx queue DevX object.
+ *
+ * @param txq_obj
+ *   Txq object to destroy.
+ */
+static void
+txq_release_devx_resources(struct mlx5_txq_obj *txq_obj)
+{
+	MLX5_ASSERT(txq_obj->type == MLX5_TXQ_OBJ_TYPE_DEVX_SQ);
+
+	txq_release_devx_cq_resources(txq_obj);
+	txq_release_devx_sq_resources(txq_obj);
+}
+
+/**
+ * Create a DevX CQ object for a Tx queue.
+ *
+ * @param dev
+ *   Pointer to Ethernet device.
+ * @param cqe_n
+ *   Number of entries in the CQ.
+ * @param idx
+ *   Queue index in DPDK Tx queue array.
+ * @param txq_obj
+ *   Pointer to Tx queue object data.
+ *
+ * @return
+ *   The DevX CQ object initialized, NULL otherwise and rte_errno is set.
+ */
+static struct mlx5_devx_obj *
+mlx5_tx_devx_cq_new(struct rte_eth_dev *dev, uint32_t cqe_n, uint16_t idx,
+		    struct mlx5_txq_obj *txq_obj)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5_txq_data *txq_data = (*priv->txqs)[idx];
+	struct mlx5_devx_obj *cq_obj = NULL;
+	struct mlx5_devx_cq_attr cq_attr = { 0 };
+	struct mlx5_cqe *cqe;
+	size_t page_size;
+	size_t alignment;
+	uint32_t i;
+	int ret;
+
+	MLX5_ASSERT(txq_data);
+	MLX5_ASSERT(txq_obj);
+	page_size = rte_mem_page_size();
+	if (page_size == (size_t)-1) {
+		DRV_LOG(ERR, "Failed to get mem page size.");
+		rte_errno = ENOMEM;
+		return NULL;
+	}
+	/* Allocate memory buffer for CQEs. */
+	alignment = MLX5_CQE_BUF_ALIGNMENT;
+	if (alignment == (size_t)-1) {
+		DRV_LOG(ERR, "Failed to get CQE buf alignment.");
+		rte_errno = ENOMEM;
+		return NULL;
+	}
+	cqe_n = 1UL << log2above(cqe_n);
+	if (cqe_n > UINT16_MAX) {
+		DRV_LOG(ERR,
+			"Port %u Tx queue %u requests to many CQEs %u.",
+			dev->data->port_id, txq_data->idx, cqe_n);
+		rte_errno = EINVAL;
+		return NULL;
+	}
+	txq_obj->cq_buf = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO,
+				      cqe_n * sizeof(struct mlx5_cqe),
+				      alignment,
+				      priv->sh->numa_node);
+	if (!txq_obj->cq_buf) {
+		DRV_LOG(ERR,
+			"Port %u Tx queue %u cannot allocate memory (CQ).",
+			dev->data->port_id, txq_data->idx);
+		rte_errno = ENOMEM;
+		return NULL;
+	}
+	/* Register allocated buffer in user space with DevX. */
+	txq_obj->cq_umem = mlx5_glue->devx_umem_reg(priv->sh->ctx,
+						(void *)txq_obj->cq_buf,
+						cqe_n * sizeof(struct mlx5_cqe),
+						IBV_ACCESS_LOCAL_WRITE);
+	if (!txq_obj->cq_umem) {
+		rte_errno = errno;
+		DRV_LOG(ERR,
+			"Port %u Tx queue %u cannot register memory (CQ).",
+			dev->data->port_id, txq_data->idx);
+		goto error;
+	}
+	/* Allocate doorbell record for completion queue. */
+	txq_obj->cq_dbrec_offset = mlx5_get_dbr(priv->sh->ctx,
+						&priv->dbrpgs,
+						&txq_obj->cq_dbrec_page);
+	if (txq_obj->cq_dbrec_offset < 0) {
+		rte_errno = errno;
+		DRV_LOG(ERR, "Failed to allocate CQ door-bell.");
+		goto error;
+	}
+	cq_attr.cqe_size = (sizeof(struct mlx5_cqe) == 128) ?
+			    MLX5_CQE_SIZE_128B : MLX5_CQE_SIZE_64B;
+	cq_attr.uar_page_id = mlx5_os_get_devx_uar_page_id(priv->sh->tx_uar);
+	cq_attr.eqn = priv->sh->eqn;
+	cq_attr.q_umem_valid = 1;
+	cq_attr.q_umem_offset = (uintptr_t)txq_obj->cq_buf % page_size;
+	cq_attr.q_umem_id = mlx5_os_get_umem_id(txq_obj->cq_umem);
+	cq_attr.db_umem_valid = 1;
+	cq_attr.db_umem_offset = txq_obj->cq_dbrec_offset;
+	cq_attr.db_umem_id = mlx5_os_get_umem_id(txq_obj->cq_dbrec_page->umem);
+	cq_attr.log_cq_size = rte_log2_u32(cqe_n);
+	cq_attr.log_page_size = rte_log2_u32(page_size);
+	/* Create completion queue object with DevX. */
+	cq_obj = mlx5_devx_cmd_create_cq(priv->sh->ctx, &cq_attr);
+	if (!cq_obj) {
+		rte_errno = errno;
+		DRV_LOG(ERR, "Port %u Tx queue %u CQ creation failure.",
+			dev->data->port_id, idx);
+		goto error;
+	}
+	txq_data->cqe_n = log2above(cqe_n);
+	txq_data->cqe_s = 1 << txq_data->cqe_n;
+	/* Initial fill CQ buffer with invalid CQE opcode. */
+	cqe = (struct mlx5_cqe *)txq_obj->cq_buf;
+	for (i = 0; i < txq_data->cqe_s; i++) {
+		cqe->op_own = (MLX5_CQE_INVALID << 4) | MLX5_CQE_OWNER_MASK;
+		++cqe;
+	}
+	return cq_obj;
+error:
+	ret = rte_errno;
+	txq_release_devx_cq_resources(txq_obj);
+	rte_errno = ret;
+	return NULL;
+}
+
+/**
+ * Create a SQ object using DevX.
+ *
+ * @param dev
+ *   Pointer to Ethernet device.
+ * @param idx
+ *   Queue index in DPDK Tx queue array.
+ * @param txq_obj
+ *   Pointer to Tx queue object data.
+ *
+ * @return
+ *   The DevX SQ object initialized, NULL otherwise and rte_errno is set.
+ */
+static struct mlx5_devx_obj *
+mlx5_devx_sq_new(struct rte_eth_dev *dev, uint16_t idx,
+		 struct mlx5_txq_obj *txq_obj)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5_txq_data *txq_data = (*priv->txqs)[idx];
+	struct mlx5_devx_create_sq_attr sq_attr = { 0 };
+	struct mlx5_devx_obj *sq_obj = NULL;
+	size_t page_size;
+	uint32_t wqe_n;
+	int ret;
+
+	MLX5_ASSERT(txq_data);
+	MLX5_ASSERT(txq_obj);
+	page_size = rte_mem_page_size();
+	if (page_size == (size_t)-1) {
+		DRV_LOG(ERR, "Failed to get mem page size.");
+		rte_errno = ENOMEM;
+		return NULL;
+	}
+	wqe_n = RTE_MIN(1UL << txq_data->elts_n,
+			(uint32_t)priv->sh->device_attr.max_qp_wr);
+	txq_obj->sq_buf = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO,
+				      wqe_n * sizeof(struct mlx5_wqe),
+				      page_size, priv->sh->numa_node);
+	if (!txq_obj->sq_buf) {
+		DRV_LOG(ERR,
+			"Port %u Tx queue %u cannot allocate memory (SQ).",
+			dev->data->port_id, txq_data->idx);
+		rte_errno = ENOMEM;
+		goto error;
+	}
+	/* Register allocated buffer in user space with DevX. */
+	txq_obj->sq_umem = mlx5_glue->devx_umem_reg
+					(priv->sh->ctx,
+					 (void *)txq_obj->sq_buf,
+					 wqe_n * sizeof(struct mlx5_wqe),
+					 IBV_ACCESS_LOCAL_WRITE);
+	if (!txq_obj->sq_umem) {
+		rte_errno = errno;
+		DRV_LOG(ERR,
+			"Port %u Tx queue %u cannot register memory (SQ).",
+			dev->data->port_id, txq_data->idx);
+		goto error;
+	}
+	/* Allocate doorbell record for send queue. */
+	txq_obj->sq_dbrec_offset = mlx5_get_dbr(priv->sh->ctx,
+						&priv->dbrpgs,
+						&txq_obj->sq_dbrec_page);
+	if (txq_obj->sq_dbrec_offset < 0) {
+		rte_errno = errno;
+		DRV_LOG(ERR, "Failed to allocate SQ door-bell.");
+		goto error;
+	}
+	txq_data->wqe_n = log2above(wqe_n);
+	sq_attr.tis_lst_sz = 1;
+	sq_attr.tis_num = priv->sh->tis->id;
+	sq_attr.state = MLX5_SQC_STATE_RST;
+	sq_attr.cqn = txq_obj->cq_devx->id;
+	sq_attr.flush_in_error_en = 1;
+	sq_attr.allow_multi_pkt_send_wqe = !!priv->config.mps;
+	sq_attr.allow_swp = !!priv->config.swp;
+	sq_attr.min_wqe_inline_mode = priv->config.hca_attr.vport_inline_mode;
+	sq_attr.wq_attr.uar_page =
+				mlx5_os_get_devx_uar_page_id(priv->sh->tx_uar);
+	sq_attr.wq_attr.wq_type = MLX5_WQ_TYPE_CYCLIC;
+	sq_attr.wq_attr.pd = priv->sh->pdn;
+	sq_attr.wq_attr.log_wq_stride = rte_log2_u32(MLX5_WQE_SIZE);
+	sq_attr.wq_attr.log_wq_sz = txq_data->wqe_n;
+	sq_attr.wq_attr.dbr_umem_valid = 1;
+	sq_attr.wq_attr.dbr_addr = txq_obj->sq_dbrec_offset;
+	sq_attr.wq_attr.dbr_umem_id =
+			mlx5_os_get_umem_id(txq_obj->sq_dbrec_page->umem);
+	sq_attr.wq_attr.wq_umem_valid = 1;
+	sq_attr.wq_attr.wq_umem_id = mlx5_os_get_umem_id(txq_obj->sq_umem);
+	sq_attr.wq_attr.wq_umem_offset = (uintptr_t)txq_obj->sq_buf % page_size;
+	/* Create Send Queue object with DevX. */
+	sq_obj = mlx5_devx_cmd_create_sq(priv->sh->ctx, &sq_attr);
+	if (!sq_obj) {
+		rte_errno = errno;
+		DRV_LOG(ERR, "Port %u Tx queue %u SQ creation failure.",
+			dev->data->port_id, idx);
+		goto error;
+	}
+	return sq_obj;
+error:
+	ret = rte_errno;
+	txq_release_devx_sq_resources(txq_obj);
+	rte_errno = ret;
+	return NULL;
+}
+#endif
+
+/**
+ * Create the Tx queue DevX object.
+ *
+ * @param dev
+ *   Pointer to Ethernet device.
+ * @param idx
+ *   Queue index in DPDK Tx queue array.
+ *
+ * @return
+ *   The DevX object initialized, NULL otherwise and rte_errno is set.
+ */
+struct mlx5_txq_obj *
+mlx5_txq_devx_obj_new(struct rte_eth_dev *dev, uint16_t idx)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5_txq_data *txq_data = (*priv->txqs)[idx];
+	struct mlx5_txq_ctrl *txq_ctrl =
+			container_of(txq_data, struct mlx5_txq_ctrl, txq);
+
+	if (txq_ctrl->type == MLX5_TXQ_TYPE_HAIRPIN)
+		return mlx5_txq_obj_hairpin_new(dev, idx);
+#ifndef HAVE_MLX5DV_DEVX_UAR_OFFSET
+	DRV_LOG(ERR, "Port %u Tx queue %u cannot create with DevX, no UAR.",
+		     dev->data->port_id, idx);
+	rte_errno = ENOMEM;
+	return NULL;
+#else
+	struct mlx5_dev_ctx_shared *sh = priv->sh;
+	struct mlx5_devx_modify_sq_attr msq_attr = { 0 };
+	struct mlx5_txq_obj *txq_obj = NULL;
+	void *reg_addr;
+	uint32_t cqe_n;
+	int ret = 0;
+
+	MLX5_ASSERT(txq_data);
+	MLX5_ASSERT(!txq_ctrl->obj);
+	txq_obj = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO,
+			      sizeof(struct mlx5_txq_obj), 0,
+			      txq_ctrl->socket);
+	if (!txq_obj) {
+		DRV_LOG(ERR,
+			"Port %u Tx queue %u cannot allocate memory resources.",
+			dev->data->port_id, txq_data->idx);
+		rte_errno = ENOMEM;
+		return NULL;
+	}
+	txq_obj->type = MLX5_TXQ_OBJ_TYPE_DEVX_SQ;
+	txq_obj->txq_ctrl = txq_ctrl;
+	txq_obj->dev = dev;
+	/* Create the Completion Queue. */
+	cqe_n = (1UL << txq_data->elts_n) / MLX5_TX_COMP_THRESH +
+		1 + MLX5_TX_COMP_THRESH_INLINE_DIV;
+	/* Create completion queue object with DevX. */
+	txq_obj->cq_devx = mlx5_tx_devx_cq_new(dev, cqe_n, idx, txq_obj);
+	if (!txq_obj->cq_devx) {
+		rte_errno = errno;
+		goto error;
+	}
+	txq_data->cqe_m = txq_data->cqe_s - 1;
+	txq_data->cqes = (volatile struct mlx5_cqe *)txq_obj->cq_buf;
+	txq_data->cq_ci = 0;
+	txq_data->cq_pi = 0;
+	txq_data->cq_db = (volatile uint32_t *)(txq_obj->cq_dbrec_page->dbrs +
+						txq_obj->cq_dbrec_offset);
+	*txq_data->cq_db = 0;
+	/* Create Send Queue object with DevX. */
+	txq_obj->sq_devx = mlx5_devx_sq_new(dev, idx, txq_obj);
+	if (!txq_obj->sq_devx) {
+		rte_errno = errno;
+		goto error;
+	}
+	/* Create the Work Queue. */
+	txq_data->wqe_s = 1 << txq_data->wqe_n;
+	txq_data->wqe_m = txq_data->wqe_s - 1;
+	txq_data->wqes = (struct mlx5_wqe *)txq_obj->sq_buf;
+	txq_data->wqes_end = txq_data->wqes + txq_data->wqe_s;
+	txq_data->wqe_ci = 0;
+	txq_data->wqe_pi = 0;
+	txq_data->wqe_comp = 0;
+	txq_data->wqe_thres = txq_data->wqe_s / MLX5_TX_COMP_THRESH_INLINE_DIV;
+	txq_data->qp_db = (volatile uint32_t *)
+					(txq_obj->sq_dbrec_page->dbrs +
+					 txq_obj->sq_dbrec_offset +
+					 MLX5_SND_DBR * sizeof(uint32_t));
+	*txq_data->qp_db = 0;
+	txq_data->qp_num_8s = txq_obj->sq_devx->id << 8;
+	/* Change Send Queue state to Ready-to-Send. */
+	msq_attr.sq_state = MLX5_SQC_STATE_RST;
+	msq_attr.state = MLX5_SQC_STATE_RDY;
+	ret = mlx5_devx_cmd_modify_sq(txq_obj->sq_devx, &msq_attr);
+	if (ret) {
+		rte_errno = errno;
+		DRV_LOG(ERR,
+			"Port %u Tx queue %u SP state to SQC_STATE_RDY failed.",
+			dev->data->port_id, idx);
+		goto error;
+	}
+	txq_data->fcqs = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO,
+				     txq_data->cqe_s * sizeof(*txq_data->fcqs),
+				     RTE_CACHE_LINE_SIZE,
+				     txq_ctrl->socket);
+	if (!txq_data->fcqs) {
+		DRV_LOG(ERR,
+			"Port %u Tx queue %u cannot allocate memory (FCQ).",
+			dev->data->port_id, idx);
+		rte_errno = ENOMEM;
+		goto error;
+	}
+#ifdef HAVE_IBV_FLOW_DV_SUPPORT
+	/*
+	 * If using DevX need to query and store TIS transport domain value.
+	 * This is done once per port.
+	 * Will use this value on Rx, when creating matching TIR.
+	 */
+	if (!priv->sh->tdn)
+		priv->sh->tdn = priv->sh->td->id;
+#endif
+	MLX5_ASSERT(sh->tx_uar);
+	reg_addr = mlx5_os_get_devx_uar_reg_addr(sh->tx_uar);
+	MLX5_ASSERT(reg_addr);
+	txq_ctrl->bf_reg = reg_addr;
+	txq_ctrl->uar_mmap_offset =
+				mlx5_os_get_devx_uar_mmap_offset(sh->tx_uar);
+	txq_uar_init(txq_ctrl);
+	LIST_INSERT_HEAD(&priv->txqsobj, txq_obj, next);
+	return txq_obj;
+error:
+	ret = rte_errno; /* Save rte_errno before cleanup. */
+	txq_release_devx_resources(txq_obj);
+	if (txq_data->fcqs) {
+		mlx5_free(txq_data->fcqs);
+		txq_data->fcqs = NULL;
+	}
+	mlx5_free(txq_obj);
+	rte_errno = ret; /* Restore rte_errno. */
+	return NULL;
+#endif
+}
+
+/**
+ * Release a Tx DevX queue object.
+ *
+ * @param txq_obj
+ *   DevX Tx queue object.
+ */
+void
+mlx5_txq_devx_obj_release(struct mlx5_txq_obj *txq_obj)
+{
+	MLX5_ASSERT(txq_obj);
+	if (txq_obj->type == MLX5_TXQ_OBJ_TYPE_DEVX_HAIRPIN) {
+		if (txq_obj->tis)
+			claim_zero(mlx5_devx_cmd_destroy(txq_obj->tis));
+#ifdef HAVE_MLX5DV_DEVX_UAR_OFFSET
+	} else {
+		txq_release_devx_resources(txq_obj);
+#endif
+	}
+	if (txq_obj->txq_ctrl->txq.fcqs) {
+		mlx5_free(txq_obj->txq_ctrl->txq.fcqs);
+		txq_obj->txq_ctrl->txq.fcqs = NULL;
+	}
+	LIST_REMOVE(txq_obj, next);
+	mlx5_free(txq_obj);
+}
+
 struct mlx5_obj_ops devx_obj_ops = {
 	.rxq_obj_modify_vlan_strip = mlx5_rxq_obj_modify_rq_vlan_strip,
 	.rxq_obj_new = mlx5_rxq_devx_obj_new,
@@ -822,4 +1347,6 @@ struct mlx5_obj_ops devx_obj_ops = {
 	.hrxq_destroy = mlx5_devx_tir_destroy,
 	.drop_action_create = mlx5_devx_drop_action_create,
 	.drop_action_destroy = mlx5_devx_drop_action_destroy,
+	.txq_obj_new = mlx5_txq_devx_obj_new,
+	.txq_obj_release = mlx5_txq_devx_obj_release,
 };
diff --git a/drivers/net/mlx5/mlx5_devx.h b/drivers/net/mlx5/mlx5_devx.h
index 844985c..0bbbbc0 100644
--- a/drivers/net/mlx5/mlx5_devx.h
+++ b/drivers/net/mlx5/mlx5_devx.h
@@ -7,6 +7,10 @@
 
 #include "mlx5.h"
 
+struct mlx5_txq_obj *mlx5_txq_devx_obj_new(struct rte_eth_dev *dev,
+					   uint16_t idx);
+void mlx5_txq_devx_obj_release(struct mlx5_txq_obj *txq_obj);
+
 extern struct mlx5_obj_ops devx_obj_ops;
 
 #endif /* RTE_PMD_MLX5_DEVX_H_ */
diff --git a/drivers/net/mlx5/mlx5_rxtx.h b/drivers/net/mlx5/mlx5_rxtx.h
index d947e0e..674296e 100644
--- a/drivers/net/mlx5/mlx5_rxtx.h
+++ b/drivers/net/mlx5/mlx5_rxtx.h
@@ -261,50 +261,11 @@ struct mlx5_txq_data {
 	/* Storage for queued packets, must be the last field. */
 } __rte_cache_aligned;
 
-enum mlx5_txq_obj_type {
-	MLX5_TXQ_OBJ_TYPE_IBV,		/* mlx5_txq_obj with ibv_wq. */
-	MLX5_TXQ_OBJ_TYPE_DEVX_SQ,	/* mlx5_txq_obj with mlx5_devx_sq. */
-	MLX5_TXQ_OBJ_TYPE_DEVX_HAIRPIN,
-	/* mlx5_txq_obj with mlx5_devx_tq and hairpin support. */
-};
-
 enum mlx5_txq_type {
 	MLX5_TXQ_TYPE_STANDARD, /* Standard Tx queue. */
 	MLX5_TXQ_TYPE_HAIRPIN, /* Hairpin Rx queue. */
 };
 
-/* Verbs/DevX Tx queue elements. */
-struct mlx5_txq_obj {
-	LIST_ENTRY(mlx5_txq_obj) next; /* Pointer to the next element. */
-	struct mlx5_txq_ctrl *txq_ctrl; /* Pointer to the control queue. */
-	enum mlx5_txq_obj_type type; /* The txq object type. */
-	RTE_STD_C11
-	union {
-		struct {
-			void *cq; /* Completion Queue. */
-			void *qp; /* Queue Pair. */
-		};
-		struct {
-			struct mlx5_devx_obj *sq;
-			/* DevX object for Sx queue. */
-			struct mlx5_devx_obj *tis; /* The TIS object. */
-		};
-		struct {
-			struct rte_eth_dev *dev;
-			struct mlx5_devx_obj *cq_devx;
-			void *cq_umem;
-			void *cq_buf;
-			int64_t cq_dbrec_offset;
-			struct mlx5_devx_dbr_page *cq_dbrec_page;
-			struct mlx5_devx_obj *sq_devx;
-			void *sq_umem;
-			void *sq_buf;
-			int64_t sq_dbrec_offset;
-			struct mlx5_devx_dbr_page *sq_dbrec_page;
-		};
-	};
-};
-
 /* TX queue control descriptor. */
 struct mlx5_txq_ctrl {
 	LIST_ENTRY(mlx5_txq_ctrl) next; /* Pointer to the next element. */
@@ -400,11 +361,9 @@ int mlx5_tx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 	(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 	 const struct rte_eth_hairpin_conf *hairpin_conf);
 void mlx5_tx_queue_release(void *dpdk_txq);
+void txq_uar_init(struct mlx5_txq_ctrl *txq_ctrl);
 int mlx5_tx_uar_init_secondary(struct rte_eth_dev *dev, int fd);
 void mlx5_tx_uar_uninit_secondary(struct rte_eth_dev *dev);
-struct mlx5_txq_obj *mlx5_txq_obj_new(struct rte_eth_dev *dev, uint16_t idx,
-				      enum mlx5_txq_obj_type type);
-void mlx5_txq_obj_release(struct mlx5_txq_obj *txq_obj);
 int mlx5_txq_obj_verify(struct rte_eth_dev *dev);
 struct mlx5_txq_ctrl *mlx5_txq_new(struct rte_eth_dev *dev, uint16_t idx,
 				   uint16_t desc, unsigned int socket,
diff --git a/drivers/net/mlx5/mlx5_trigger.c b/drivers/net/mlx5/mlx5_trigger.c
index 0f4d031..6763042 100644
--- a/drivers/net/mlx5/mlx5_trigger.c
+++ b/drivers/net/mlx5/mlx5_trigger.c
@@ -55,16 +55,9 @@
 
 		if (!txq_ctrl)
 			continue;
-		if (txq_ctrl->type == MLX5_TXQ_TYPE_HAIRPIN) {
-			txq_ctrl->obj = mlx5_txq_obj_new
-				(dev, i, MLX5_TXQ_OBJ_TYPE_DEVX_HAIRPIN);
-		} else {
+		if (txq_ctrl->type == MLX5_TXQ_TYPE_STANDARD)
 			txq_alloc_elts(txq_ctrl);
-			txq_ctrl->obj = mlx5_txq_obj_new
-				(dev, i, priv->txpp_en ?
-				MLX5_TXQ_OBJ_TYPE_DEVX_SQ :
-				MLX5_TXQ_OBJ_TYPE_IBV);
-		}
+		txq_ctrl->obj = priv->obj_ops.txq_obj_new(dev, i);
 		if (!txq_ctrl->obj) {
 			rte_errno = ENOMEM;
 			goto error;
diff --git a/drivers/net/mlx5/mlx5_txq.c b/drivers/net/mlx5/mlx5_txq.c
index c678971..c1d36c3 100644
--- a/drivers/net/mlx5/mlx5_txq.c
+++ b/drivers/net/mlx5/mlx5_txq.c
@@ -20,7 +20,6 @@
 #include <mlx5_devx_cmds.h>
 #include <mlx5_common.h>
 #include <mlx5_common_mr.h>
-#include <mlx5_common_os.h>
 #include <mlx5_malloc.h>
 
 #include "mlx5_defs.h"
@@ -589,7 +588,7 @@
  * @param txq_ctrl
  *   Pointer to Tx queue control structure.
  */
-static void
+void
 txq_uar_init(struct mlx5_txq_ctrl *txq_ctrl)
 {
 	struct mlx5_priv *priv = txq_ctrl->priv;
@@ -765,799 +764,6 @@
 }
 
 /**
- * Create the Tx hairpin queue object.
- *
- * @param dev
- *   Pointer to Ethernet device.
- * @param idx
- *   Queue index in DPDK Tx queue array
- *
- * @return
- *   The hairpin DevX object initialised, NULL otherwise and rte_errno is set.
- */
-static struct mlx5_txq_obj *
-mlx5_txq_obj_hairpin_new(struct rte_eth_dev *dev, uint16_t idx)
-{
-	struct mlx5_priv *priv = dev->data->dev_private;
-	struct mlx5_txq_data *txq_data = (*priv->txqs)[idx];
-	struct mlx5_txq_ctrl *txq_ctrl =
-		container_of(txq_data, struct mlx5_txq_ctrl, txq);
-	struct mlx5_devx_create_sq_attr attr = { 0 };
-	struct mlx5_txq_obj *tmpl = NULL;
-	uint32_t max_wq_data;
-
-	MLX5_ASSERT(txq_data);
-	MLX5_ASSERT(!txq_ctrl->obj);
-	tmpl = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO, sizeof(*tmpl), 0,
-			   txq_ctrl->socket);
-	if (!tmpl) {
-		DRV_LOG(ERR,
-			"port %u Tx queue %u cannot allocate memory resources",
-			dev->data->port_id, txq_data->idx);
-		rte_errno = ENOMEM;
-		return NULL;
-	}
-	tmpl->type = MLX5_TXQ_OBJ_TYPE_DEVX_HAIRPIN;
-	tmpl->txq_ctrl = txq_ctrl;
-	attr.hairpin = 1;
-	attr.tis_lst_sz = 1;
-	max_wq_data = priv->config.hca_attr.log_max_hairpin_wq_data_sz;
-	/* Jumbo frames > 9KB should be supported, and more packets. */
-	if (priv->config.log_hp_size != (uint32_t)MLX5_ARG_UNSET) {
-		if (priv->config.log_hp_size > max_wq_data) {
-			DRV_LOG(ERR, "total data size %u power of 2 is "
-				"too large for hairpin",
-				priv->config.log_hp_size);
-			mlx5_free(tmpl);
-			rte_errno = ERANGE;
-			return NULL;
-		}
-		attr.wq_attr.log_hairpin_data_sz = priv->config.log_hp_size;
-	} else {
-		attr.wq_attr.log_hairpin_data_sz =
-				(max_wq_data < MLX5_HAIRPIN_JUMBO_LOG_SIZE) ?
-				 max_wq_data : MLX5_HAIRPIN_JUMBO_LOG_SIZE;
-	}
-	/* Set the packets number to the maximum value for performance. */
-	attr.wq_attr.log_hairpin_num_packets =
-			attr.wq_attr.log_hairpin_data_sz -
-			MLX5_HAIRPIN_QUEUE_STRIDE;
-	attr.tis_num = priv->sh->tis->id;
-	tmpl->sq = mlx5_devx_cmd_create_sq(priv->sh->ctx, &attr);
-	if (!tmpl->sq) {
-		DRV_LOG(ERR,
-			"port %u tx hairpin queue %u can't create sq object",
-			dev->data->port_id, idx);
-		mlx5_free(tmpl);
-		rte_errno = errno;
-		return NULL;
-	}
-	DRV_LOG(DEBUG, "port %u sxq %u updated with %p", dev->data->port_id,
-		idx, (void *)&tmpl);
-	LIST_INSERT_HEAD(&priv->txqsobj, tmpl, next);
-	return tmpl;
-}
-
-/**
- * Release DevX SQ resources.
- *
- * @param txq_ctrl
- *   DevX Tx queue object.
- */
-static void
-txq_release_devx_sq_resources(struct mlx5_txq_obj *txq_obj)
-{
-	if (txq_obj->sq_devx)
-		claim_zero(mlx5_devx_cmd_destroy(txq_obj->sq_devx));
-	if (txq_obj->sq_umem)
-		claim_zero(mlx5_glue->devx_umem_dereg(txq_obj->sq_umem));
-	if (txq_obj->sq_buf)
-		mlx5_free(txq_obj->sq_buf);
-	if (txq_obj->sq_dbrec_page)
-		claim_zero(mlx5_release_dbr(&txq_obj->txq_ctrl->priv->dbrpgs,
-					    mlx5_os_get_umem_id
-						 (txq_obj->sq_dbrec_page->umem),
-					    txq_obj->sq_dbrec_offset));
-}
-
-/**
- * Release DevX Tx CQ resources.
- *
- * @param txq_ctrl
- *   DevX Tx queue object.
- */
-static void
-txq_release_devx_cq_resources(struct mlx5_txq_obj *txq_obj)
-{
-	if (txq_obj->cq_devx)
-		claim_zero(mlx5_devx_cmd_destroy(txq_obj->cq_devx));
-	if (txq_obj->cq_umem)
-		claim_zero(mlx5_glue->devx_umem_dereg(txq_obj->cq_umem));
-	if (txq_obj->cq_buf)
-		mlx5_free(txq_obj->cq_buf);
-	if (txq_obj->cq_dbrec_page)
-		claim_zero(mlx5_release_dbr(&txq_obj->txq_ctrl->priv->dbrpgs,
-					    mlx5_os_get_umem_id
-						 (txq_obj->cq_dbrec_page->umem),
-					    txq_obj->cq_dbrec_offset));
-}
-
-/**
- * Destroy the Tx queue DevX object.
- *
- * @param txq_obj
- *   Txq object to destroy
- */
-static void
-txq_release_devx_resources(struct mlx5_txq_obj *txq_obj)
-{
-	MLX5_ASSERT(txq_obj->type == MLX5_TXQ_OBJ_TYPE_DEVX_SQ);
-
-	txq_release_devx_sq_resources(txq_obj);
-	txq_release_devx_cq_resources(txq_obj);
-}
-
-#ifdef HAVE_MLX5DV_DEVX_UAR_OFFSET
-/**
- * Create a DevX CQ object for a Tx queue.
- *
- * @param dev
- *   Pointer to Ethernet device.
- * @param cqe_n
- *   Number of entries in the CQ.
- * @param idx
- *   Queue index in DPDK Tx queue array.
- * @param type
- *   Type of the Tx queue object to create.
- *
- * @return
- *   The DevX CQ object initialized, NULL otherwise and rte_errno is set.
- */
-static struct mlx5_devx_obj *
-mlx5_devx_cq_new(struct rte_eth_dev *dev, uint32_t cqe_n, uint16_t idx,
-		 struct mlx5_txq_obj *txq_obj)
-{
-	struct mlx5_priv *priv = dev->data->dev_private;
-	struct mlx5_txq_data *txq_data = (*priv->txqs)[idx];
-	struct mlx5_devx_obj *cq_obj = NULL;
-	struct mlx5_devx_cq_attr cq_attr = { 0 };
-	struct mlx5_cqe *cqe;
-	size_t page_size;
-	size_t alignment;
-	uint32_t i;
-	int ret;
-
-	MLX5_ASSERT(txq_data);
-	MLX5_ASSERT(txq_obj);
-	page_size = rte_mem_page_size();
-	if (page_size == (size_t)-1) {
-		DRV_LOG(ERR, "Failed to get mem page size");
-		rte_errno = ENOMEM;
-		return NULL;
-	}
-	/* Allocate memory buffer for CQEs. */
-	alignment = MLX5_CQE_BUF_ALIGNMENT;
-	if (alignment == (size_t)-1) {
-		DRV_LOG(ERR, "Failed to get CQE buf alignment");
-		rte_errno = ENOMEM;
-		return NULL;
-	}
-	cqe_n = 1UL << log2above(cqe_n);
-	if (cqe_n > UINT16_MAX) {
-		DRV_LOG(ERR,
-			"port %u Tx queue %u requests to many CQEs %u",
-			dev->data->port_id, txq_data->idx, cqe_n);
-		rte_errno = EINVAL;
-		return NULL;
-	}
-	txq_obj->cq_buf = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO,
-				      cqe_n * sizeof(struct mlx5_cqe),
-				      alignment,
-				      priv->sh->numa_node);
-	if (!txq_obj->cq_buf) {
-		DRV_LOG(ERR,
-			"port %u Tx queue %u cannot allocate memory (CQ)",
-			dev->data->port_id, txq_data->idx);
-		rte_errno = ENOMEM;
-		return NULL;
-	}
-	/* Register allocated buffer in user space with DevX. */
-	txq_obj->cq_umem = mlx5_glue->devx_umem_reg(priv->sh->ctx,
-						(void *)txq_obj->cq_buf,
-						cqe_n * sizeof(struct mlx5_cqe),
-						IBV_ACCESS_LOCAL_WRITE);
-	if (!txq_obj->cq_umem) {
-		rte_errno = errno;
-		DRV_LOG(ERR,
-			"port %u Tx queue %u cannot register memory (CQ)",
-			dev->data->port_id, txq_data->idx);
-		goto error;
-	}
-	/* Allocate doorbell record for completion queue. */
-	txq_obj->cq_dbrec_offset = mlx5_get_dbr(priv->sh->ctx,
-						&priv->dbrpgs,
-						&txq_obj->cq_dbrec_page);
-	if (txq_obj->cq_dbrec_offset < 0) {
-		rte_errno = errno;
-		DRV_LOG(ERR, "Failed to allocate CQ door-bell.");
-		goto error;
-	}
-	cq_attr.cqe_size = (sizeof(struct mlx5_cqe) == 128) ?
-			    MLX5_CQE_SIZE_128B : MLX5_CQE_SIZE_64B;
-	cq_attr.uar_page_id = mlx5_os_get_devx_uar_page_id(priv->sh->tx_uar);
-	cq_attr.eqn = priv->sh->eqn;
-	cq_attr.q_umem_valid = 1;
-	cq_attr.q_umem_offset = (uintptr_t)txq_obj->cq_buf % page_size;
-	cq_attr.q_umem_id = mlx5_os_get_umem_id(txq_obj->cq_umem);
-	cq_attr.db_umem_valid = 1;
-	cq_attr.db_umem_offset = txq_obj->cq_dbrec_offset;
-	cq_attr.db_umem_id = mlx5_os_get_umem_id(txq_obj->cq_dbrec_page->umem);
-	cq_attr.log_cq_size = rte_log2_u32(cqe_n);
-	cq_attr.log_page_size = rte_log2_u32(page_size);
-	/* Create completion queue object with DevX. */
-	cq_obj = mlx5_devx_cmd_create_cq(priv->sh->ctx, &cq_attr);
-	if (!cq_obj) {
-		rte_errno = errno;
-		DRV_LOG(ERR, "port %u Tx queue %u CQ creation failure",
-			dev->data->port_id, idx);
-		goto error;
-	}
-	txq_data->cqe_n = log2above(cqe_n);
-	txq_data->cqe_s = 1 << txq_data->cqe_n;
-	/* Initial fill CQ buffer with invalid CQE opcode. */
-	cqe = (struct mlx5_cqe *)txq_obj->cq_buf;
-	for (i = 0; i < txq_data->cqe_s; i++) {
-		cqe->op_own = (MLX5_CQE_INVALID << 4) | MLX5_CQE_OWNER_MASK;
-		++cqe;
-	}
-	return cq_obj;
-error:
-	ret = rte_errno;
-	txq_release_devx_cq_resources(txq_obj);
-	rte_errno = ret;
-	return NULL;
-}
-
-/**
- * Create a SQ object using DevX.
- *
- * @param dev
- *   Pointer to Ethernet device.
- * @param idx
- *   Queue index in DPDK Tx queue array.
- * @param type
- *   Type of the Tx queue object to create.
- *
- * @return
- *   The DevX object initialized, NULL otherwise and rte_errno is set.
- */
-static struct mlx5_devx_obj *
-mlx5_devx_sq_new(struct rte_eth_dev *dev, uint16_t idx,
-		 struct mlx5_txq_obj *txq_obj)
-{
-	struct mlx5_priv *priv = dev->data->dev_private;
-	struct mlx5_txq_data *txq_data = (*priv->txqs)[idx];
-	struct mlx5_devx_create_sq_attr sq_attr = { 0 };
-	struct mlx5_devx_obj *sq_obj = NULL;
-	size_t page_size;
-	uint32_t wqe_n;
-	int ret;
-
-	MLX5_ASSERT(txq_data);
-	MLX5_ASSERT(txq_obj);
-	page_size = rte_mem_page_size();
-	if (page_size == (size_t)-1) {
-		DRV_LOG(ERR, "Failed to get mem page size");
-		rte_errno = ENOMEM;
-		return NULL;
-	}
-	wqe_n = RTE_MIN(1UL << txq_data->elts_n,
-			(uint32_t)priv->sh->device_attr.max_qp_wr);
-	txq_obj->sq_buf = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO,
-				      wqe_n * sizeof(struct mlx5_wqe),
-				      page_size, priv->sh->numa_node);
-	if (!txq_obj->sq_buf) {
-		DRV_LOG(ERR,
-			"port %u Tx queue %u cannot allocate memory (SQ)",
-			dev->data->port_id, txq_data->idx);
-		rte_errno = ENOMEM;
-		goto error;
-	}
-	/* Register allocated buffer in user space with DevX. */
-	txq_obj->sq_umem = mlx5_glue->devx_umem_reg
-					(priv->sh->ctx,
-					 (void *)txq_obj->sq_buf,
-					 wqe_n * sizeof(struct mlx5_wqe),
-					 IBV_ACCESS_LOCAL_WRITE);
-	if (!txq_obj->sq_umem) {
-		rte_errno = errno;
-		DRV_LOG(ERR,
-			"port %u Tx queue %u cannot register memory (SQ)",
-			dev->data->port_id, txq_data->idx);
-		goto error;
-	}
-	/* Allocate doorbell record for send queue. */
-	txq_obj->sq_dbrec_offset = mlx5_get_dbr(priv->sh->ctx,
-						&priv->dbrpgs,
-						&txq_obj->sq_dbrec_page);
-	if (txq_obj->sq_dbrec_offset < 0) {
-		rte_errno = errno;
-		DRV_LOG(ERR, "Failed to allocate SQ door-bell.");
-		goto error;
-	}
-	sq_attr.tis_lst_sz = 1;
-	sq_attr.tis_num = priv->sh->tis->id;
-	sq_attr.state = MLX5_SQC_STATE_RST;
-	sq_attr.cqn = txq_obj->cq_devx->id;
-	sq_attr.flush_in_error_en = 1;
-	sq_attr.allow_multi_pkt_send_wqe = !!priv->config.mps;
-	sq_attr.allow_swp = !!priv->config.swp;
-	sq_attr.min_wqe_inline_mode = priv->config.hca_attr.vport_inline_mode;
-	sq_attr.wq_attr.uar_page =
-				mlx5_os_get_devx_uar_page_id(priv->sh->tx_uar);
-	sq_attr.wq_attr.wq_type = MLX5_WQ_TYPE_CYCLIC;
-	sq_attr.wq_attr.pd = priv->sh->pdn;
-	sq_attr.wq_attr.log_wq_stride = rte_log2_u32(MLX5_WQE_SIZE);
-	sq_attr.wq_attr.log_wq_sz = log2above(wqe_n);
-	sq_attr.wq_attr.dbr_umem_valid = 1;
-	sq_attr.wq_attr.dbr_addr = txq_obj->sq_dbrec_offset;
-	sq_attr.wq_attr.dbr_umem_id =
-			mlx5_os_get_umem_id(txq_obj->sq_dbrec_page->umem);
-	sq_attr.wq_attr.wq_umem_valid = 1;
-	sq_attr.wq_attr.wq_umem_id = mlx5_os_get_umem_id(txq_obj->sq_umem);
-	sq_attr.wq_attr.wq_umem_offset = (uintptr_t)txq_obj->sq_buf % page_size;
-	/* Create Send Queue object with DevX. */
-	sq_obj = mlx5_devx_cmd_create_sq(priv->sh->ctx, &sq_attr);
-	if (!sq_obj) {
-		rte_errno = errno;
-		DRV_LOG(ERR, "port %u Tx queue %u SQ creation failure",
-			dev->data->port_id, idx);
-		goto error;
-	}
-	txq_data->wqe_n = log2above(wqe_n);
-	return sq_obj;
-error:
-	ret = rte_errno;
-	txq_release_devx_sq_resources(txq_obj);
-	rte_errno = ret;
-	return NULL;
-}
-#endif
-
-/**
- * Create the Tx queue DevX object.
- *
- * @param dev
- *   Pointer to Ethernet device.
- * @param idx
- *   Queue index in DPDK Tx queue array.
- *
- * @return
- *   The DevX object initialised, NULL otherwise and rte_errno is set.
- */
-static struct mlx5_txq_obj *
-mlx5_txq_obj_devx_new(struct rte_eth_dev *dev, uint16_t idx)
-{
-#ifndef HAVE_MLX5DV_DEVX_UAR_OFFSET
-	DRV_LOG(ERR, "port %u Tx queue %u cannot create with DevX, no UAR",
-		     dev->data->port_id, idx);
-	rte_errno = ENOMEM;
-	return NULL;
-#else
-	struct mlx5_priv *priv = dev->data->dev_private;
-	struct mlx5_dev_ctx_shared *sh = priv->sh;
-	struct mlx5_txq_data *txq_data = (*priv->txqs)[idx];
-	struct mlx5_txq_ctrl *txq_ctrl =
-		container_of(txq_data, struct mlx5_txq_ctrl, txq);
-	struct mlx5_devx_modify_sq_attr msq_attr = { 0 };
-	struct mlx5_txq_obj *txq_obj = NULL;
-	void *reg_addr;
-	uint32_t cqe_n;
-	int ret = 0;
-
-	MLX5_ASSERT(txq_data);
-	MLX5_ASSERT(!txq_ctrl->obj);
-	txq_obj = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO,
-			      sizeof(struct mlx5_txq_obj), 0,
-			      txq_ctrl->socket);
-	if (!txq_obj) {
-		DRV_LOG(ERR,
-			"port %u Tx queue %u cannot allocate memory resources",
-			dev->data->port_id, txq_data->idx);
-		rte_errno = ENOMEM;
-		return NULL;
-	}
-	txq_obj->type = MLX5_TXQ_OBJ_TYPE_DEVX_SQ;
-	txq_obj->txq_ctrl = txq_ctrl;
-	txq_obj->dev = dev;
-	/* Create the Completion Queue. */
-	cqe_n = (1UL << txq_data->elts_n) / MLX5_TX_COMP_THRESH +
-		1 + MLX5_TX_COMP_THRESH_INLINE_DIV;
-	/* Create completion queue object with DevX. */
-	txq_obj->cq_devx = mlx5_devx_cq_new(dev, cqe_n, idx, txq_obj);
-	if (!txq_obj->cq_devx) {
-		rte_errno = errno;
-		goto error;
-	}
-	txq_data->cqe_m = txq_data->cqe_s - 1;
-	txq_data->cqes = (volatile struct mlx5_cqe *)txq_obj->cq_buf;
-	txq_data->cq_ci = 0;
-	txq_data->cq_pi = 0;
-	txq_data->cq_db = (volatile uint32_t *)(txq_obj->cq_dbrec_page->dbrs +
-						txq_obj->cq_dbrec_offset);
-	*txq_data->cq_db = 0;
-	/* Create Send Queue object with DevX. */
-	txq_obj->sq_devx = mlx5_devx_sq_new(dev, idx, txq_obj);
-	if (!txq_obj->sq_devx) {
-		rte_errno = errno;
-		goto error;
-	}
-	/* Create the Work Queue. */
-	txq_data->wqe_s = 1 << txq_data->wqe_n;
-	txq_data->wqe_m = txq_data->wqe_s - 1;
-	txq_data->wqes = (struct mlx5_wqe *)txq_obj->sq_buf;
-	txq_data->wqes_end = txq_data->wqes + txq_data->wqe_s;
-	txq_data->wqe_ci = 0;
-	txq_data->wqe_pi = 0;
-	txq_data->wqe_comp = 0;
-	txq_data->wqe_thres = txq_data->wqe_s / MLX5_TX_COMP_THRESH_INLINE_DIV;
-	txq_data->qp_db = (volatile uint32_t *)
-					(txq_obj->sq_dbrec_page->dbrs +
-					 txq_obj->sq_dbrec_offset +
-					 MLX5_SND_DBR * sizeof(uint32_t));
-	*txq_data->qp_db = 0;
-	txq_data->qp_num_8s = txq_obj->sq_devx->id << 8;
-	/* Change Send Queue state to Ready-to-Send. */
-	msq_attr.sq_state = MLX5_SQC_STATE_RST;
-	msq_attr.state = MLX5_SQC_STATE_RDY;
-	ret = mlx5_devx_cmd_modify_sq(txq_obj->sq_devx, &msq_attr);
-	if (ret) {
-		rte_errno = errno;
-		DRV_LOG(ERR,
-			"port %u Tx queue %u SP state to SQC_STATE_RDY failed",
-			dev->data->port_id, idx);
-		goto error;
-	}
-	txq_data->fcqs = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO,
-				     txq_data->cqe_s * sizeof(*txq_data->fcqs),
-				     RTE_CACHE_LINE_SIZE,
-				     txq_ctrl->socket);
-	if (!txq_data->fcqs) {
-		DRV_LOG(ERR, "port %u Tx queue %u cannot allocate memory (FCQ)",
-			dev->data->port_id, idx);
-		rte_errno = ENOMEM;
-		goto error;
-	}
-#ifdef HAVE_IBV_FLOW_DV_SUPPORT
-	/*
-	 * If using DevX need to query and store TIS transport domain value.
-	 * This is done once per port.
-	 * Will use this value on Rx, when creating matching TIR.
-	 */
-	if (priv->config.devx && !priv->sh->tdn)
-		priv->sh->tdn = priv->sh->td->id;
-#endif
-	MLX5_ASSERT(sh->tx_uar);
-	reg_addr = mlx5_os_get_devx_uar_reg_addr(sh->tx_uar);
-	MLX5_ASSERT(reg_addr);
-	txq_ctrl->bf_reg = reg_addr;
-	txq_ctrl->uar_mmap_offset =
-		mlx5_os_get_devx_uar_mmap_offset(sh->tx_uar);
-	txq_uar_init(txq_ctrl);
-	LIST_INSERT_HEAD(&priv->txqsobj, txq_obj, next);
-	return txq_obj;
-error:
-	ret = rte_errno; /* Save rte_errno before cleanup. */
-	txq_release_devx_resources(txq_obj);
-	if (txq_data->fcqs) {
-		mlx5_free(txq_data->fcqs);
-		txq_data->fcqs = NULL;
-	}
-	mlx5_free(txq_obj);
-	rte_errno = ret; /* Restore rte_errno. */
-	return NULL;
-#endif
-}
-
-/**
- * Create a QP Verbs object.
- *
- * @param dev
- *   Pointer to Ethernet device.
- * @param idx
- *   Queue index in DPDK Tx queue array.
- * @param rxq_obj
- *   Pointer to Tx queue object data.
- *
- * @return
- *   The QP Verbs object initialized, NULL otherwise and rte_errno is set.
- */
-static struct ibv_qp *
-mlx5_ibv_qp_new(struct rte_eth_dev *dev, uint16_t idx,
-		struct mlx5_txq_obj *txq_obj)
-{
-	struct mlx5_priv *priv = dev->data->dev_private;
-	struct mlx5_txq_data *txq_data = (*priv->txqs)[idx];
-	struct mlx5_txq_ctrl *txq_ctrl =
-			container_of(txq_data, struct mlx5_txq_ctrl, txq);
-	struct ibv_qp *qp_obj = NULL;
-	struct ibv_qp_init_attr_ex qp_attr = { 0 };
-	const int desc = 1 << txq_data->elts_n;
-
-	MLX5_ASSERT(!txq_ctrl->obj);
-	/* CQ to be associated with the send queue. */
-	qp_attr.send_cq = txq_obj->cq;
-	/* CQ to be associated with the receive queue. */
-	qp_attr.recv_cq = txq_obj->cq;
-	/* Max number of outstanding WRs. */
-	qp_attr.cap.max_send_wr = ((priv->sh->device_attr.max_qp_wr < desc) ?
-				   priv->sh->device_attr.max_qp_wr : desc);
-	/*
-	 * Max number of scatter/gather elements in a WR, must be 1 to prevent
-	 * libmlx5 from trying to affect must be 1 to prevent libmlx5 from
-	 * trying to affect too much memory. TX gather is not impacted by the
-	 * device_attr.max_sge limit and will still work properly.
-	 */
-	qp_attr.cap.max_send_sge = 1;
-	qp_attr.qp_type = IBV_QPT_RAW_PACKET,
-	/* Do *NOT* enable this, completions events are managed per Tx burst. */
-	qp_attr.sq_sig_all = 0;
-	qp_attr.pd = priv->sh->pd;
-	qp_attr.comp_mask = IBV_QP_INIT_ATTR_PD;
-	if (txq_data->inlen_send)
-		qp_attr.cap.max_inline_data = txq_ctrl->max_inline_data;
-	if (txq_data->tso_en) {
-		qp_attr.max_tso_header = txq_ctrl->max_tso_header;
-		qp_attr.comp_mask |= IBV_QP_INIT_ATTR_MAX_TSO_HEADER;
-	}
-	qp_obj = mlx5_glue->create_qp_ex(priv->sh->ctx, &qp_attr);
-	if (qp_obj == NULL) {
-		DRV_LOG(ERR, "port %u Tx queue %u QP creation failure",
-			dev->data->port_id, idx);
-		rte_errno = errno;
-	}
-	return qp_obj;
-}
-
-/**
- * Create the Tx queue Verbs object.
- *
- * @param dev
- *   Pointer to Ethernet device.
- * @param idx
- *   Queue index in DPDK Tx queue array.
- * @param type
- *   Type of the Tx queue object to create.
- *
- * @return
- *   The Verbs object initialised, NULL otherwise and rte_errno is set.
- */
-struct mlx5_txq_obj *
-mlx5_txq_obj_new(struct rte_eth_dev *dev, uint16_t idx,
-		 enum mlx5_txq_obj_type type)
-{
-	struct mlx5_priv *priv = dev->data->dev_private;
-	struct mlx5_txq_data *txq_data = (*priv->txqs)[idx];
-	struct mlx5_txq_ctrl *txq_ctrl =
-		container_of(txq_data, struct mlx5_txq_ctrl, txq);
-	struct mlx5_txq_obj tmpl;
-	struct mlx5_txq_obj *txq_obj = NULL;
-	struct ibv_qp_attr mod;
-	unsigned int cqe_n;
-	struct mlx5dv_qp qp = { .comp_mask = MLX5DV_QP_MASK_UAR_MMAP_OFFSET };
-	struct mlx5dv_cq cq_info;
-	struct mlx5dv_obj obj;
-	const int desc = 1 << txq_data->elts_n;
-	int ret = 0;
-
-	if (type == MLX5_TXQ_OBJ_TYPE_DEVX_HAIRPIN)
-		return mlx5_txq_obj_hairpin_new(dev, idx);
-	if (type == MLX5_TXQ_OBJ_TYPE_DEVX_SQ)
-		return mlx5_txq_obj_devx_new(dev, idx);
-#ifdef HAVE_IBV_FLOW_DV_SUPPORT
-	/* If using DevX, need additional mask to read tisn value. */
-	if (priv->config.devx && !priv->sh->tdn)
-		qp.comp_mask |= MLX5DV_QP_MASK_RAW_QP_HANDLES;
-#endif
-	MLX5_ASSERT(txq_data);
-	priv->verbs_alloc_ctx.type = MLX5_VERBS_ALLOC_TYPE_TX_QUEUE;
-	priv->verbs_alloc_ctx.obj = txq_ctrl;
-	if (mlx5_getenv_int("MLX5_ENABLE_CQE_COMPRESSION")) {
-		DRV_LOG(ERR,
-			"port %u MLX5_ENABLE_CQE_COMPRESSION must never be set",
-			dev->data->port_id);
-		rte_errno = EINVAL;
-		return NULL;
-	}
-	memset(&tmpl, 0, sizeof(struct mlx5_txq_obj));
-	cqe_n = desc / MLX5_TX_COMP_THRESH +
-		1 + MLX5_TX_COMP_THRESH_INLINE_DIV;
-	tmpl.cq = mlx5_glue->create_cq(priv->sh->ctx, cqe_n, NULL, NULL, 0);
-	if (tmpl.cq == NULL) {
-		DRV_LOG(ERR, "port %u Tx queue %u CQ creation failure",
-			dev->data->port_id, idx);
-		rte_errno = errno;
-		goto error;
-	}
-	tmpl.qp = mlx5_ibv_qp_new(dev, idx, &tmpl);
-	if (tmpl.qp == NULL) {
-		rte_errno = errno;
-		goto error;
-	}
-	mod = (struct ibv_qp_attr){
-		/* Move the QP to this state. */
-		.qp_state = IBV_QPS_INIT,
-		/* IB device port number. */
-		.port_num = (uint8_t)priv->dev_port,
-	};
-	ret = mlx5_glue->modify_qp(tmpl.qp, &mod,
-				   (IBV_QP_STATE | IBV_QP_PORT));
-	if (ret) {
-		DRV_LOG(ERR,
-			"port %u Tx queue %u QP state to IBV_QPS_INIT failed",
-			dev->data->port_id, idx);
-		rte_errno = errno;
-		goto error;
-	}
-	mod = (struct ibv_qp_attr){
-		.qp_state = IBV_QPS_RTR
-	};
-	ret = mlx5_glue->modify_qp(tmpl.qp, &mod, IBV_QP_STATE);
-	if (ret) {
-		DRV_LOG(ERR,
-			"port %u Tx queue %u QP state to IBV_QPS_RTR failed",
-			dev->data->port_id, idx);
-		rte_errno = errno;
-		goto error;
-	}
-	mod.qp_state = IBV_QPS_RTS;
-	ret = mlx5_glue->modify_qp(tmpl.qp, &mod, IBV_QP_STATE);
-	if (ret) {
-		DRV_LOG(ERR,
-			"port %u Tx queue %u QP state to IBV_QPS_RTS failed",
-			dev->data->port_id, idx);
-		rte_errno = errno;
-		goto error;
-	}
-	txq_obj = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO,
-			      sizeof(struct mlx5_txq_obj), 0,
-			      txq_ctrl->socket);
-	if (!txq_obj) {
-		DRV_LOG(ERR, "port %u Tx queue %u cannot allocate memory",
-			dev->data->port_id, idx);
-		rte_errno = ENOMEM;
-		goto error;
-	}
-	obj.cq.in = tmpl.cq;
-	obj.cq.out = &cq_info;
-	obj.qp.in = tmpl.qp;
-	obj.qp.out = &qp;
-	ret = mlx5_glue->dv_init_obj(&obj, MLX5DV_OBJ_CQ | MLX5DV_OBJ_QP);
-	if (ret != 0) {
-		rte_errno = errno;
-		goto error;
-	}
-	if (cq_info.cqe_size != RTE_CACHE_LINE_SIZE) {
-		DRV_LOG(ERR,
-			"port %u wrong MLX5_CQE_SIZE environment variable"
-			" value: it should be set to %u",
-			dev->data->port_id, RTE_CACHE_LINE_SIZE);
-		rte_errno = EINVAL;
-		goto error;
-	}
-	txq_data->cqe_n = log2above(cq_info.cqe_cnt);
-	txq_data->cqe_s = 1 << txq_data->cqe_n;
-	txq_data->cqe_m = txq_data->cqe_s - 1;
-	txq_data->qp_num_8s = ((struct ibv_qp *)tmpl.qp)->qp_num << 8;
-	txq_data->wqes = qp.sq.buf;
-	txq_data->wqe_n = log2above(qp.sq.wqe_cnt);
-	txq_data->wqe_s = 1 << txq_data->wqe_n;
-	txq_data->wqe_m = txq_data->wqe_s - 1;
-	txq_data->wqes_end = txq_data->wqes + txq_data->wqe_s;
-	txq_data->qp_db = &qp.dbrec[MLX5_SND_DBR];
-	txq_data->cq_db = cq_info.dbrec;
-	txq_data->cqes = (volatile struct mlx5_cqe *)cq_info.buf;
-	txq_data->cq_ci = 0;
-	txq_data->cq_pi = 0;
-	txq_data->wqe_ci = 0;
-	txq_data->wqe_pi = 0;
-	txq_data->wqe_comp = 0;
-	txq_data->wqe_thres = txq_data->wqe_s / MLX5_TX_COMP_THRESH_INLINE_DIV;
-	txq_data->fcqs = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO,
-				     txq_data->cqe_s * sizeof(*txq_data->fcqs),
-				     RTE_CACHE_LINE_SIZE, txq_ctrl->socket);
-	if (!txq_data->fcqs) {
-		DRV_LOG(ERR, "port %u Tx queue %u cannot allocate memory (FCQ)",
-			dev->data->port_id, idx);
-		rte_errno = ENOMEM;
-		goto error;
-	}
-#ifdef HAVE_IBV_FLOW_DV_SUPPORT
-	/*
-	 * If using DevX need to query and store TIS transport domain value.
-	 * This is done once per port.
-	 * Will use this value on Rx, when creating matching TIR.
-	 */
-	if (priv->config.devx && !priv->sh->tdn) {
-		ret = mlx5_devx_cmd_qp_query_tis_td(tmpl.qp, qp.tisn,
-						    &priv->sh->tdn);
-		if (ret) {
-			DRV_LOG(ERR, "Fail to query port %u Tx queue %u QP TIS "
-				"transport domain", dev->data->port_id, idx);
-			rte_errno = EINVAL;
-			goto error;
-		} else {
-			DRV_LOG(DEBUG, "port %u Tx queue %u TIS number %d "
-				"transport domain %d", dev->data->port_id,
-				idx, qp.tisn, priv->sh->tdn);
-		}
-	}
-#endif
-	txq_obj->qp = tmpl.qp;
-	txq_obj->cq = tmpl.cq;
-	txq_ctrl->bf_reg = qp.bf.reg;
-	if (qp.comp_mask & MLX5DV_QP_MASK_UAR_MMAP_OFFSET) {
-		txq_ctrl->uar_mmap_offset = qp.uar_mmap_offset;
-		DRV_LOG(DEBUG, "port %u: uar_mmap_offset 0x%"PRIx64,
-			dev->data->port_id, txq_ctrl->uar_mmap_offset);
-	} else {
-		DRV_LOG(ERR,
-			"port %u failed to retrieve UAR info, invalid"
-			" libmlx5.so",
-			dev->data->port_id);
-		rte_errno = EINVAL;
-		goto error;
-	}
-	txq_uar_init(txq_ctrl);
-	LIST_INSERT_HEAD(&priv->txqsobj, txq_obj, next);
-	txq_obj->txq_ctrl = txq_ctrl;
-	priv->verbs_alloc_ctx.type = MLX5_VERBS_ALLOC_TYPE_NONE;
-	return txq_obj;
-error:
-	ret = rte_errno; /* Save rte_errno before cleanup. */
-	if (tmpl.cq)
-		claim_zero(mlx5_glue->destroy_cq(tmpl.cq));
-	if (tmpl.qp)
-		claim_zero(mlx5_glue->destroy_qp(tmpl.qp));
-	if (txq_data && txq_data->fcqs) {
-		mlx5_free(txq_data->fcqs);
-		txq_data->fcqs = NULL;
-	}
-	if (txq_obj)
-		mlx5_free(txq_obj);
-	priv->verbs_alloc_ctx.type = MLX5_VERBS_ALLOC_TYPE_NONE;
-	rte_errno = ret; /* Restore rte_errno. */
-	return NULL;
-}
-
-/**
- * Release an Tx verbs queue object.
- *
- * @param txq_obj
- *   Verbs Tx queue object..
- */
-void
-mlx5_txq_obj_release(struct mlx5_txq_obj *txq_obj)
-{
-	MLX5_ASSERT(txq_obj);
-	if (txq_obj->type == MLX5_TXQ_OBJ_TYPE_DEVX_HAIRPIN) {
-		if (txq_obj->tis)
-			claim_zero(mlx5_devx_cmd_destroy(txq_obj->tis));
-	} else if (txq_obj->type == MLX5_TXQ_OBJ_TYPE_DEVX_SQ) {
-		txq_release_devx_resources(txq_obj);
-	} else {
-		claim_zero(mlx5_glue->destroy_qp(txq_obj->qp));
-		claim_zero(mlx5_glue->destroy_cq(txq_obj->cq));
-	}
-	if (txq_obj->txq_ctrl->txq.fcqs) {
-		mlx5_free(txq_obj->txq_ctrl->txq.fcqs);
-		txq_obj->txq_ctrl->txq.fcqs = NULL;
-	}
-	LIST_REMOVE(txq_obj, next);
-	mlx5_free(txq_obj);
-}
-
-/**
  * Verify the Verbs Tx queue list is empty
  *
  * @param dev
@@ -2100,7 +1306,7 @@ struct mlx5_txq_ctrl *
 	if (!rte_atomic32_dec_and_test(&txq->refcnt))
 		return 1;
 	if (txq->obj) {
-		mlx5_txq_obj_release(txq->obj);
+		priv->obj_ops.txq_obj_release(txq->obj);
 		txq->obj = NULL;
 	}
 	txq_free_elts(txq);
-- 
1.8.3.1


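Note on the dispatch this patch introduces: generic code no longer calls the
Verbs or DevX constructors directly; it goes through priv->obj_ops, assigned
once per backend at device spawn. A minimal sketch of the start-path loop,
assuming the driver helpers mlx5_txq_get() and txq_alloc_elts() as used in
the mlx5_trigger.c hunk above (txqs_start itself is a hypothetical wrapper;
error unwinding is trimmed):

static int
txqs_start(struct rte_eth_dev *dev)
{
	struct mlx5_priv *priv = dev->data->dev_private;
	uint16_t i;

	for (i = 0; i != priv->txqs_n; ++i) {
		struct mlx5_txq_ctrl *txq_ctrl = mlx5_txq_get(dev, i);

		if (!txq_ctrl)
			continue;
		/* Hairpin queues carry no mbuf elements. */
		if (txq_ctrl->type == MLX5_TXQ_TYPE_STANDARD)
			txq_alloc_elts(txq_ctrl);
		/* Verbs or DevX, chosen once when obj_ops was assigned. */
		txq_ctrl->obj = priv->obj_ops.txq_obj_new(dev, i);
		if (!txq_ctrl->obj) {
			rte_errno = ENOMEM;
			return -rte_errno;
		}
	}
	return 0;
}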
^ permalink raw reply	[flat|nested] 17+ messages in thread

* [dpdk-dev] [PATCH v1 08/15] net/mlx5: share Tx control code
  2020-10-01 14:09 [dpdk-dev] [PATCH v1 00/15] mlx5 Tx DevX/Verbs separation Michael Baum
                   ` (6 preceding siblings ...)
  2020-10-01 14:09 ` [dpdk-dev] [PATCH v1 07/15] net/mlx5: separate Tx queue object creations Michael Baum
@ 2020-10-01 14:09 ` Michael Baum
  2020-10-01 14:09 ` [dpdk-dev] [PATCH v1 09/15] net/mlx5: rearrange SQ and CQ creation in DevX module Michael Baum
                   ` (7 subsequent siblings)
  15 siblings, 0 replies; 17+ messages in thread
From: Michael Baum @ 2020-10-01 14:09 UTC (permalink / raw)
  To: dev; +Cc: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko

Move the Tx queue object resource allocations and debug logs that are
duplicated between the DevX and Verbs modules to a shared location.

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
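A minimal sketch of the converged flow, not the literal hunks below:
txq_obj_prepare() is a hypothetical name, the other identifiers come from
this patch. The generic layer allocates the mlx5_txq_obj and stores it in
txq_ctrl->obj; the backend callback (Verbs or DevX) only fills in its
specifics and returns 0 or -rte_errno:

static int
txq_obj_prepare(struct rte_eth_dev *dev, uint16_t idx)
{
	struct mlx5_priv *priv = dev->data->dev_private;
	struct mlx5_txq_data *txq_data = (*priv->txqs)[idx];
	struct mlx5_txq_ctrl *txq_ctrl =
		container_of(txq_data, struct mlx5_txq_ctrl, txq);
	int ret;

	/* Shared allocation, previously duplicated per backend. */
	txq_ctrl->obj = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO,
				    sizeof(struct mlx5_txq_obj), 0,
				    txq_ctrl->socket);
	if (!txq_ctrl->obj) {
		rte_errno = ENOMEM;
		return -rte_errno;
	}
	/* Backend-specific setup fills the pre-allocated object. */
	ret = priv->obj_ops.txq_obj_new(dev, idx);
	if (ret < 0) {
		mlx5_free(txq_ctrl->obj);
		txq_ctrl->obj = NULL;
	}
	return ret;
}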
 drivers/net/mlx5/linux/mlx5_os.c    |  4 +-
 drivers/net/mlx5/linux/mlx5_verbs.c | 84 ++++++++++++-------------------------
 drivers/net/mlx5/linux/mlx5_verbs.h |  3 +-
 drivers/net/mlx5/mlx5.h             |  3 +-
 drivers/net/mlx5/mlx5_devx.c        | 75 +++++++--------------------------
 drivers/net/mlx5/mlx5_devx.h        |  3 +-
 drivers/net/mlx5/mlx5_trigger.c     | 31 +++++++++++++-
 drivers/net/mlx5/mlx5_txq.c         | 28 ++++++++-----
 8 files changed, 93 insertions(+), 138 deletions(-)

diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c
index c5332a0..0db2b5a 100644
--- a/drivers/net/mlx5/linux/mlx5_os.c
+++ b/drivers/net/mlx5/linux/mlx5_os.c
@@ -520,9 +520,9 @@
  *   Queue index in DPDK Tx queue array.
  *
  * @return
- *   The DevX/Verbs object initialized, NULL otherwise and rte_errno is set.
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
  */
-static struct mlx5_txq_obj *
+static int
 mlx5_os_txq_obj_new(struct rte_eth_dev *dev, uint16_t idx)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
diff --git a/drivers/net/mlx5/linux/mlx5_verbs.c b/drivers/net/mlx5/linux/mlx5_verbs.c
index c79c4a2..5568c75 100644
--- a/drivers/net/mlx5/linux/mlx5_verbs.c
+++ b/drivers/net/mlx5/linux/mlx5_verbs.c
@@ -807,7 +807,7 @@
 	struct ibv_qp_init_attr_ex qp_attr = { 0 };
 	const int desc = 1 << txq_data->elts_n;
 
-	MLX5_ASSERT(!txq_ctrl->obj);
+	MLX5_ASSERT(txq_ctrl->obj);
 	/* CQ to be associated with the send queue. */
 	qp_attr.send_cq = txq_obj->cq;
 	/* CQ to be associated with the receive queue. */
@@ -851,17 +851,16 @@
  *   Queue index in DPDK Tx queue array.
  *
  * @return
- *   The Verbs object initialized, NULL otherwise and rte_errno is set.
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
  */
-struct mlx5_txq_obj *
+int
 mlx5_txq_ibv_obj_new(struct rte_eth_dev *dev, uint16_t idx)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
 	struct mlx5_txq_data *txq_data = (*priv->txqs)[idx];
 	struct mlx5_txq_ctrl *txq_ctrl =
 		container_of(txq_data, struct mlx5_txq_ctrl, txq);
-	struct mlx5_txq_obj tmpl;
-	struct mlx5_txq_obj *txq_obj = NULL;
+	struct mlx5_txq_obj *txq_obj = txq_ctrl->obj;
 	struct ibv_qp_attr mod;
 	unsigned int cqe_n;
 	struct mlx5dv_qp qp;
@@ -871,26 +870,28 @@ struct mlx5_txq_obj *
 	int ret = 0;
 
 	MLX5_ASSERT(txq_data);
+	MLX5_ASSERT(txq_obj);
+	txq_obj->type = MLX5_TXQ_OBJ_TYPE_IBV;
+	txq_obj->txq_ctrl = txq_ctrl;
 	priv->verbs_alloc_ctx.type = MLX5_VERBS_ALLOC_TYPE_TX_QUEUE;
 	priv->verbs_alloc_ctx.obj = txq_ctrl;
 	if (mlx5_getenv_int("MLX5_ENABLE_CQE_COMPRESSION")) {
 		DRV_LOG(ERR, "Port %u MLX5_ENABLE_CQE_COMPRESSION "
 			"must never be set.", dev->data->port_id);
 		rte_errno = EINVAL;
-		return NULL;
+		return -rte_errno;
 	}
-	memset(&tmpl, 0, sizeof(struct mlx5_txq_obj));
 	cqe_n = desc / MLX5_TX_COMP_THRESH +
 		1 + MLX5_TX_COMP_THRESH_INLINE_DIV;
-	tmpl.cq = mlx5_glue->create_cq(priv->sh->ctx, cqe_n, NULL, NULL, 0);
-	if (tmpl.cq == NULL) {
+	txq_obj->cq = mlx5_glue->create_cq(priv->sh->ctx, cqe_n, NULL, NULL, 0);
+	if (txq_obj->cq == NULL) {
 		DRV_LOG(ERR, "Port %u Tx queue %u CQ creation failure.",
 			dev->data->port_id, idx);
 		rte_errno = errno;
 		goto error;
 	}
-	tmpl.qp = mlx5_ibv_qp_new(dev, idx, &tmpl);
-	if (tmpl.qp == NULL) {
+	txq_obj->qp = mlx5_ibv_qp_new(dev, idx, txq_obj);
+	if (txq_obj->qp == NULL) {
 		rte_errno = errno;
 		goto error;
 	}
@@ -900,7 +901,8 @@ struct mlx5_txq_obj *
 		/* IB device port number. */
 		.port_num = (uint8_t)priv->dev_port,
 	};
-	ret = mlx5_glue->modify_qp(tmpl.qp, &mod, (IBV_QP_STATE | IBV_QP_PORT));
+	ret = mlx5_glue->modify_qp(txq_obj->qp, &mod,
+				   (IBV_QP_STATE | IBV_QP_PORT));
 	if (ret) {
 		DRV_LOG(ERR,
 			"Port %u Tx queue %u QP state to IBV_QPS_INIT failed.",
@@ -911,7 +913,7 @@ struct mlx5_txq_obj *
 	mod = (struct ibv_qp_attr){
 		.qp_state = IBV_QPS_RTR
 	};
-	ret = mlx5_glue->modify_qp(tmpl.qp, &mod, IBV_QP_STATE);
+	ret = mlx5_glue->modify_qp(txq_obj->qp, &mod, IBV_QP_STATE);
 	if (ret) {
 		DRV_LOG(ERR,
 			"Port %u Tx queue %u QP state to IBV_QPS_RTR failed.",
@@ -920,7 +922,7 @@ struct mlx5_txq_obj *
 		goto error;
 	}
 	mod.qp_state = IBV_QPS_RTS;
-	ret = mlx5_glue->modify_qp(tmpl.qp, &mod, IBV_QP_STATE);
+	ret = mlx5_glue->modify_qp(txq_obj->qp, &mod, IBV_QP_STATE);
 	if (ret) {
 		DRV_LOG(ERR,
 			"Port %u Tx queue %u QP state to IBV_QPS_RTS failed.",
@@ -928,24 +930,15 @@ struct mlx5_txq_obj *
 		rte_errno = errno;
 		goto error;
 	}
-	txq_obj = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO,
-			      sizeof(struct mlx5_txq_obj), 0,
-			      txq_ctrl->socket);
-	if (!txq_obj) {
-		DRV_LOG(ERR, "Port %u Tx queue %u cannot allocate memory.",
-			dev->data->port_id, idx);
-		rte_errno = ENOMEM;
-		goto error;
-	}
 	qp.comp_mask = MLX5DV_QP_MASK_UAR_MMAP_OFFSET;
 #ifdef HAVE_IBV_FLOW_DV_SUPPORT
 	/* If using DevX, need additional mask to read tisn value. */
 	if (priv->sh->devx && !priv->sh->tdn)
 		qp.comp_mask |= MLX5DV_QP_MASK_RAW_QP_HANDLES;
 #endif
-	obj.cq.in = tmpl.cq;
+	obj.cq.in = txq_obj->cq;
 	obj.cq.out = &cq_info;
-	obj.qp.in = tmpl.qp;
+	obj.qp.in = txq_obj->qp;
 	obj.qp.out = &qp;
 	ret = mlx5_glue->dv_init_obj(&obj, MLX5DV_OBJ_CQ | MLX5DV_OBJ_QP);
 	if (ret != 0) {
@@ -963,7 +956,7 @@ struct mlx5_txq_obj *
 	txq_data->cqe_n = log2above(cq_info.cqe_cnt);
 	txq_data->cqe_s = 1 << txq_data->cqe_n;
 	txq_data->cqe_m = txq_data->cqe_s - 1;
-	txq_data->qp_num_8s = ((struct ibv_qp *)tmpl.qp)->qp_num << 8;
+	txq_data->qp_num_8s = ((struct ibv_qp *)txq_obj->qp)->qp_num << 8;
 	txq_data->wqes = qp.sq.buf;
 	txq_data->wqe_n = log2above(qp.sq.wqe_cnt);
 	txq_data->wqe_s = 1 << txq_data->wqe_n;
@@ -978,15 +971,6 @@ struct mlx5_txq_obj *
 	txq_data->wqe_pi = 0;
 	txq_data->wqe_comp = 0;
 	txq_data->wqe_thres = txq_data->wqe_s / MLX5_TX_COMP_THRESH_INLINE_DIV;
-	txq_data->fcqs = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO,
-				     txq_data->cqe_s * sizeof(*txq_data->fcqs),
-				     RTE_CACHE_LINE_SIZE, txq_ctrl->socket);
-	if (!txq_data->fcqs) {
-		DRV_LOG(ERR, "Port %u Tx queue %u can't allocate memory (FCQ).",
-			dev->data->port_id, idx);
-		rte_errno = ENOMEM;
-		goto error;
-	}
 #ifdef HAVE_IBV_FLOW_DV_SUPPORT
 	/*
 	 * If using DevX need to query and store TIS transport domain value.
@@ -994,7 +978,7 @@ struct mlx5_txq_obj *
 	 * Will use this value on Rx, when creating matching TIR.
 	 */
 	if (priv->sh->devx && !priv->sh->tdn) {
-		ret = mlx5_devx_cmd_qp_query_tis_td(tmpl.qp, qp.tisn,
+		ret = mlx5_devx_cmd_qp_query_tis_td(txq_obj->qp, qp.tisn,
 						    &priv->sh->tdn);
 		if (ret) {
 			DRV_LOG(ERR, "Fail to query port %u Tx queue %u QP TIS "
@@ -1008,8 +992,6 @@ struct mlx5_txq_obj *
 		}
 	}
 #endif
-	txq_obj->qp = tmpl.qp;
-	txq_obj->cq = tmpl.cq;
 	txq_ctrl->bf_reg = qp.bf.reg;
 	if (qp.comp_mask & MLX5DV_QP_MASK_UAR_MMAP_OFFSET) {
 		txq_ctrl->uar_mmap_offset = qp.uar_mmap_offset;
@@ -1024,25 +1006,17 @@ struct mlx5_txq_obj *
 		goto error;
 	}
 	txq_uar_init(txq_ctrl);
-	txq_obj->txq_ctrl = txq_ctrl;
-	LIST_INSERT_HEAD(&priv->txqsobj, txq_obj, next);
 	priv->verbs_alloc_ctx.type = MLX5_VERBS_ALLOC_TYPE_NONE;
-	return txq_obj;
+	return 0;
 error:
 	ret = rte_errno; /* Save rte_errno before cleanup. */
-	if (tmpl.cq)
-		claim_zero(mlx5_glue->destroy_cq(tmpl.cq));
-	if (tmpl.qp)
-		claim_zero(mlx5_glue->destroy_qp(tmpl.qp));
-	if (txq_data->fcqs) {
-		mlx5_free(txq_data->fcqs);
-		txq_data->fcqs = NULL;
-	}
-	if (txq_obj)
-		mlx5_free(txq_obj);
+	if (txq_obj->cq)
+		claim_zero(mlx5_glue->destroy_cq(txq_obj->cq));
+	if (txq_obj->qp)
+		claim_zero(mlx5_glue->destroy_qp(txq_obj->qp));
 	priv->verbs_alloc_ctx.type = MLX5_VERBS_ALLOC_TYPE_NONE;
 	rte_errno = ret; /* Restore rte_errno. */
-	return NULL;
+	return -rte_errno;
 }
 
 /**
@@ -1057,12 +1031,6 @@ struct mlx5_txq_obj *
 	MLX5_ASSERT(txq_obj);
 	claim_zero(mlx5_glue->destroy_qp(txq_obj->qp));
 	claim_zero(mlx5_glue->destroy_cq(txq_obj->cq));
-	if (txq_obj->txq_ctrl->txq.fcqs) {
-		mlx5_free(txq_obj->txq_ctrl->txq.fcqs);
-		txq_obj->txq_ctrl->txq.fcqs = NULL;
-	}
-	LIST_REMOVE(txq_obj, next);
-	mlx5_free(txq_obj);
 }
 
 struct mlx5_obj_ops ibv_obj_ops = {
diff --git a/drivers/net/mlx5/linux/mlx5_verbs.h b/drivers/net/mlx5/linux/mlx5_verbs.h
index 7f6bb99..0670f6c 100644
--- a/drivers/net/mlx5/linux/mlx5_verbs.h
+++ b/drivers/net/mlx5/linux/mlx5_verbs.h
@@ -12,8 +12,7 @@ struct mlx5_verbs_ops {
 	mlx5_dereg_mr_t dereg_mr;
 };
 
-struct mlx5_txq_obj *mlx5_txq_ibv_obj_new(struct rte_eth_dev *dev,
-					  uint16_t idx);
+int mlx5_txq_ibv_obj_new(struct rte_eth_dev *dev, uint16_t idx);
 void mlx5_txq_ibv_obj_release(struct mlx5_txq_obj *txq_obj);
 
 /* Verbs ops struct */
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 8679750..3093f6e 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -789,8 +789,7 @@ struct mlx5_obj_ops {
 	void (*hrxq_destroy)(struct mlx5_hrxq *hrxq);
 	int (*drop_action_create)(struct rte_eth_dev *dev);
 	void (*drop_action_destroy)(struct rte_eth_dev *dev);
-	struct mlx5_txq_obj *(*txq_obj_new)(struct rte_eth_dev *dev,
-					    uint16_t idx);
+	int (*txq_obj_new)(struct rte_eth_dev *dev, uint16_t idx);
 	void (*txq_obj_release)(struct mlx5_txq_obj *txq_obj);
 };
 
diff --git a/drivers/net/mlx5/mlx5_devx.c b/drivers/net/mlx5/mlx5_devx.c
index 0b6e116..f3437a6 100644
--- a/drivers/net/mlx5/mlx5_devx.c
+++ b/drivers/net/mlx5/mlx5_devx.c
@@ -819,9 +819,9 @@
  *   Queue index in DPDK Tx queue array.
  *
  * @return
- *   The hairpin DevX object initialized, NULL otherwise and rte_errno is set.
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
  */
-static struct mlx5_txq_obj *
+static int
 mlx5_txq_obj_hairpin_new(struct rte_eth_dev *dev, uint16_t idx)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
@@ -829,20 +829,11 @@
 	struct mlx5_txq_ctrl *txq_ctrl =
 		container_of(txq_data, struct mlx5_txq_ctrl, txq);
 	struct mlx5_devx_create_sq_attr attr = { 0 };
-	struct mlx5_txq_obj *tmpl = NULL;
+	struct mlx5_txq_obj *tmpl = txq_ctrl->obj;
 	uint32_t max_wq_data;
 
 	MLX5_ASSERT(txq_data);
-	MLX5_ASSERT(!txq_ctrl->obj);
-	tmpl = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO, sizeof(*tmpl), 0,
-			   txq_ctrl->socket);
-	if (!tmpl) {
-		DRV_LOG(ERR,
-			"Port %u Tx queue %u cannot allocate memory resources.",
-			dev->data->port_id, txq_data->idx);
-		rte_errno = ENOMEM;
-		return NULL;
-	}
+	MLX5_ASSERT(tmpl);
 	tmpl->type = MLX5_TXQ_OBJ_TYPE_DEVX_HAIRPIN;
 	tmpl->txq_ctrl = txq_ctrl;
 	attr.hairpin = 1;
@@ -854,9 +845,8 @@
 			DRV_LOG(ERR, "Total data size %u power of 2 is "
 				"too large for hairpin.",
 				priv->config.log_hp_size);
-			mlx5_free(tmpl);
 			rte_errno = ERANGE;
-			return NULL;
+			return -rte_errno;
 		}
 		attr.wq_attr.log_hairpin_data_sz = priv->config.log_hp_size;
 	} else {
@@ -874,14 +864,10 @@
 		DRV_LOG(ERR,
 			"Port %u tx hairpin queue %u can't create SQ object.",
 			dev->data->port_id, idx);
-		mlx5_free(tmpl);
 		rte_errno = errno;
-		return NULL;
+		return -rte_errno;
 	}
-	DRV_LOG(DEBUG, "Port %u sxq %u updated with %p.", dev->data->port_id,
-		idx, (void *)&tmpl);
-	LIST_INSERT_HEAD(&priv->txqsobj, tmpl, next);
-	return tmpl;
+	return 0;
 }
 
 #ifdef HAVE_MLX5DV_DEVX_UAR_OFFSET
@@ -1179,9 +1165,9 @@
  *   Queue index in DPDK Tx queue array.
  *
  * @return
- *   The DevX object initialized, NULL otherwise and rte_errno is set.
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
  */
-struct mlx5_txq_obj *
+int
 mlx5_txq_devx_obj_new(struct rte_eth_dev *dev, uint16_t idx)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
@@ -1195,27 +1181,17 @@ struct mlx5_txq_obj *
 	DRV_LOG(ERR, "Port %u Tx queue %u cannot create with DevX, no UAR.",
 		     dev->data->port_id, idx);
 	rte_errno = ENOMEM;
-	return NULL;
+	return -rte_errno;
 #else
 	struct mlx5_dev_ctx_shared *sh = priv->sh;
 	struct mlx5_devx_modify_sq_attr msq_attr = { 0 };
-	struct mlx5_txq_obj *txq_obj = NULL;
+	struct mlx5_txq_obj *txq_obj = txq_ctrl->obj;
 	void *reg_addr;
 	uint32_t cqe_n;
 	int ret = 0;
 
 	MLX5_ASSERT(txq_data);
-	MLX5_ASSERT(!txq_ctrl->obj);
-	txq_obj = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO,
-			      sizeof(struct mlx5_txq_obj), 0,
-			      txq_ctrl->socket);
-	if (!txq_obj) {
-		DRV_LOG(ERR,
-			"Port %u Tx queue %u cannot allocate memory resources.",
-			dev->data->port_id, txq_data->idx);
-		rte_errno = ENOMEM;
-		return NULL;
-	}
+	MLX5_ASSERT(txq_obj);
 	txq_obj->type = MLX5_TXQ_OBJ_TYPE_DEVX_SQ;
 	txq_obj->txq_ctrl = txq_ctrl;
 	txq_obj->dev = dev;
@@ -1267,17 +1243,6 @@ struct mlx5_txq_obj *
 			dev->data->port_id, idx);
 		goto error;
 	}
-	txq_data->fcqs = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO,
-				     txq_data->cqe_s * sizeof(*txq_data->fcqs),
-				     RTE_CACHE_LINE_SIZE,
-				     txq_ctrl->socket);
-	if (!txq_data->fcqs) {
-		DRV_LOG(ERR,
-			"Port %u Tx queue %u cannot allocate memory (FCQ).",
-			dev->data->port_id, idx);
-		rte_errno = ENOMEM;
-		goto error;
-	}
 #ifdef HAVE_IBV_FLOW_DV_SUPPORT
 	/*
 	 * If using DevX need to query and store TIS transport domain value.
@@ -1294,18 +1259,12 @@ struct mlx5_txq_obj *
 	txq_ctrl->uar_mmap_offset =
 				mlx5_os_get_devx_uar_mmap_offset(sh->tx_uar);
 	txq_uar_init(txq_ctrl);
-	LIST_INSERT_HEAD(&priv->txqsobj, txq_obj, next);
-	return txq_obj;
+	return 0;
 error:
 	ret = rte_errno; /* Save rte_errno before cleanup. */
 	txq_release_devx_resources(txq_obj);
-	if (txq_data->fcqs) {
-		mlx5_free(txq_data->fcqs);
-		txq_data->fcqs = NULL;
-	}
-	mlx5_free(txq_obj);
 	rte_errno = ret; /* Restore rte_errno. */
-	return NULL;
+	return -rte_errno;
 #endif
 }
 
@@ -1327,12 +1286,6 @@ struct mlx5_txq_obj *
 		txq_release_devx_resources(txq_obj);
 #endif
 	}
-	if (txq_obj->txq_ctrl->txq.fcqs) {
-		mlx5_free(txq_obj->txq_ctrl->txq.fcqs);
-		txq_obj->txq_ctrl->txq.fcqs = NULL;
-	}
-	LIST_REMOVE(txq_obj, next);
-	mlx5_free(txq_obj);
 }
 
 struct mlx5_obj_ops devx_obj_ops = {
diff --git a/drivers/net/mlx5/mlx5_devx.h b/drivers/net/mlx5/mlx5_devx.h
index 0bbbbc0..bc8a8d6 100644
--- a/drivers/net/mlx5/mlx5_devx.h
+++ b/drivers/net/mlx5/mlx5_devx.h
@@ -7,8 +7,7 @@
 
 #include "mlx5.h"
 
-struct mlx5_txq_obj *mlx5_txq_devx_obj_new(struct rte_eth_dev *dev,
-					   uint16_t idx);
+int mlx5_txq_devx_obj_new(struct rte_eth_dev *dev, uint16_t idx);
 void mlx5_txq_devx_obj_release(struct mlx5_txq_obj *txq_obj);
 
 extern struct mlx5_obj_ops devx_obj_ops;
diff --git a/drivers/net/mlx5/mlx5_trigger.c b/drivers/net/mlx5/mlx5_trigger.c
index 6763042..e72e5fb 100644
--- a/drivers/net/mlx5/mlx5_trigger.c
+++ b/drivers/net/mlx5/mlx5_trigger.c
@@ -52,16 +52,45 @@
 
 	for (i = 0; i != priv->txqs_n; ++i) {
 		struct mlx5_txq_ctrl *txq_ctrl = mlx5_txq_get(dev, i);
+		struct mlx5_txq_data *txq_data = &txq_ctrl->txq;
+		uint32_t flags = MLX5_MEM_RTE | MLX5_MEM_ZERO;
 
 		if (!txq_ctrl)
 			continue;
 		if (txq_ctrl->type == MLX5_TXQ_TYPE_STANDARD)
 			txq_alloc_elts(txq_ctrl);
-		txq_ctrl->obj = priv->obj_ops.txq_obj_new(dev, i);
+		MLX5_ASSERT(!txq_ctrl->obj);
+		txq_ctrl->obj = mlx5_malloc(flags, sizeof(struct mlx5_txq_obj),
+					    0, txq_ctrl->socket);
 		if (!txq_ctrl->obj) {
+			DRV_LOG(ERR, "Port %u Tx queue %u cannot allocate "
+				"memory resources.", dev->data->port_id,
+				txq_data->idx);
 			rte_errno = ENOMEM;
 			goto error;
 		}
+		ret = priv->obj_ops.txq_obj_new(dev, i);
+		if (ret < 0) {
+			mlx5_free(txq_ctrl->obj);
+			txq_ctrl->obj = NULL;
+			goto error;
+		}
+		if (txq_ctrl->type == MLX5_TXQ_TYPE_STANDARD) {
+			size_t size = txq_data->cqe_s * sizeof(*txq_data->fcqs);
+			txq_data->fcqs = mlx5_malloc(flags, size,
+						     RTE_CACHE_LINE_SIZE,
+						     txq_ctrl->socket);
+			if (!txq_data->fcqs) {
+				DRV_LOG(ERR, "Port %u Tx queue %u cannot "
+					"allocate memory (FCQ).",
+					dev->data->port_id, i);
+				rte_errno = ENOMEM;
+				goto error;
+			}
+		}
+		DRV_LOG(DEBUG, "Port %u txq %u updated with %p.",
+			dev->data->port_id, i, (void *)&txq_ctrl->obj);
+		LIST_INSERT_HEAD(&priv->txqsobj, txq_ctrl->obj, next);
 	}
 	return 0;
 error:
diff --git a/drivers/net/mlx5/mlx5_txq.c b/drivers/net/mlx5/mlx5_txq.c
index c1d36c3..23213d9 100644
--- a/drivers/net/mlx5/mlx5_txq.c
+++ b/drivers/net/mlx5/mlx5_txq.c
@@ -1298,21 +1298,29 @@ struct mlx5_txq_ctrl *
 mlx5_txq_release(struct rte_eth_dev *dev, uint16_t idx)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
-	struct mlx5_txq_ctrl *txq;
+	struct mlx5_txq_ctrl *txq_ctrl;
 
 	if (!(*priv->txqs)[idx])
 		return 0;
-	txq = container_of((*priv->txqs)[idx], struct mlx5_txq_ctrl, txq);
-	if (!rte_atomic32_dec_and_test(&txq->refcnt))
+	txq_ctrl = container_of((*priv->txqs)[idx], struct mlx5_txq_ctrl, txq);
+	if (!rte_atomic32_dec_and_test(&txq_ctrl->refcnt))
 		return 1;
-	if (txq->obj) {
-		priv->obj_ops.txq_obj_release(txq->obj);
-		txq->obj = NULL;
+	if (txq_ctrl->obj) {
+		priv->obj_ops.txq_obj_release(txq_ctrl->obj);
+		LIST_REMOVE(txq_ctrl->obj, next);
+		mlx5_free(txq_ctrl->obj);
+		txq_ctrl->obj = NULL;
+	}
+	if (txq_ctrl->type == MLX5_TXQ_TYPE_STANDARD) {
+		if (txq_ctrl->txq.fcqs) {
+			mlx5_free(txq_ctrl->txq.fcqs);
+			txq_ctrl->txq.fcqs = NULL;
+		}
+		txq_free_elts(txq_ctrl);
+		mlx5_mr_btree_free(&txq_ctrl->txq.mr_ctrl.cache_bh);
 	}
-	txq_free_elts(txq);
-	mlx5_mr_btree_free(&txq->txq.mr_ctrl.cache_bh);
-	LIST_REMOVE(txq, next);
-	mlx5_free(txq);
+	LIST_REMOVE(txq_ctrl, next);
+	mlx5_free(txq_ctrl);
 	(*priv->txqs)[idx] = NULL;
 	dev->data->tx_queue_state[idx] = RTE_ETH_QUEUE_STATE_STOPPED;
 	return 0;
-- 
1.8.3.1


^ permalink raw reply	[flat|nested] 17+ messages in thread

* [dpdk-dev] [PATCH v1 09/15] net/mlx5: rearrange SQ and CQ creation in DevX module
  2020-10-01 14:09 [dpdk-dev] [PATCH v1 00/15] mlx5 Tx DevX/Verbs separation Michael Baum
                   ` (7 preceding siblings ...)
  2020-10-01 14:09 ` [dpdk-dev] [PATCH v1 08/15] net/mlx5: share Tx control code Michael Baum
@ 2020-10-01 14:09 ` Michael Baum
  2020-10-01 14:09 ` [dpdk-dev] [PATCH v1 10/15] net/mlx5: rearrange QP creation in Verbs module Michael Baum
                   ` (6 subsequent siblings)
  15 siblings, 0 replies; 17+ messages in thread
From: Michael Baum @ 2020-10-01 14:09 UTC (permalink / raw)
  To: dev; +Cc: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko

1. Rename the functions to reflect the internal resources they create
   and release.
2. Reduce the number of function arguments; the helpers now derive the
   queue objects from the device and queue index (see the sketch below).
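
For illustration, a rough sketch of the resulting caller flow in
mlx5_txq_devx_obj_new(), abbreviated from this patch (surrounding setup
and the error label are omitted). Each helper now takes only the device
and the queue index, returns the ring size, and returns 0 on failure
with rte_errno set:

	uint32_t cqe_n, wqe_n;

	cqe_n = mlx5_txq_create_devx_cq_resources(dev, idx);
	if (!cqe_n) {
		rte_errno = errno;
		goto error;
	}
	txq_data->cqe_n = log2above(cqe_n);
	wqe_n = mlx5_txq_create_devx_sq_resources(dev, idx);
	if (!wqe_n) {
		rte_errno = errno;
		goto error;
	}
	txq_data->wqe_n = log2above(wqe_n);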

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
 drivers/net/mlx5/mlx5_devx.c | 127 +++++++++++++++++++++----------------------
 1 file changed, 62 insertions(+), 65 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_devx.c b/drivers/net/mlx5/mlx5_devx.c
index f3437a6..55fe946 100644
--- a/drivers/net/mlx5/mlx5_devx.c
+++ b/drivers/net/mlx5/mlx5_devx.c
@@ -79,7 +79,7 @@
  *   DevX Rx queue object.
  */
 static void
-rxq_release_devx_rq_resources(struct mlx5_rxq_ctrl *rxq_ctrl)
+mlx5_rxq_release_devx_rq_resources(struct mlx5_rxq_ctrl *rxq_ctrl)
 {
 	struct mlx5_devx_dbr_page *dbr_page = rxq_ctrl->rq_dbrec_page;
 
@@ -106,7 +106,7 @@
  *   DevX Rx queue object.
  */
 static void
-rxq_release_devx_cq_resources(struct mlx5_rxq_ctrl *rxq_ctrl)
+mlx5_rxq_release_devx_cq_resources(struct mlx5_rxq_ctrl *rxq_ctrl)
 {
 	struct mlx5_devx_dbr_page *dbr_page = rxq_ctrl->cq_dbrec_page;
 
@@ -147,8 +147,8 @@
 		if (rxq_obj->devx_channel)
 			mlx5_glue->devx_destroy_event_channel
 							(rxq_obj->devx_channel);
-		rxq_release_devx_rq_resources(rxq_obj->rxq_ctrl);
-		rxq_release_devx_cq_resources(rxq_obj->rxq_ctrl);
+		mlx5_rxq_release_devx_rq_resources(rxq_obj->rxq_ctrl);
+		mlx5_rxq_release_devx_cq_resources(rxq_obj->rxq_ctrl);
 	}
 }
 
@@ -247,7 +247,7 @@
  *   The DevX RQ object initialized, NULL otherwise and rte_errno is set.
  */
 static struct mlx5_devx_obj *
-rxq_create_devx_rq_resources(struct rte_eth_dev *dev, uint16_t idx)
+mlx5_rxq_create_devx_rq_resources(struct rte_eth_dev *dev, uint16_t idx)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
 	struct mlx5_rxq_data *rxq_data = (*priv->rxqs)[idx];
@@ -325,7 +325,7 @@
 		goto error;
 	return rq;
 error:
-	rxq_release_devx_rq_resources(rxq_ctrl);
+	mlx5_rxq_release_devx_rq_resources(rxq_ctrl);
 	return NULL;
 }
 
@@ -341,7 +341,7 @@
  *   The DevX CQ object initialized, NULL otherwise and rte_errno is set.
  */
 static struct mlx5_devx_obj *
-rxq_create_devx_cq_resources(struct rte_eth_dev *dev, uint16_t idx)
+mlx5_rxq_create_devx_cq_resources(struct rte_eth_dev *dev, uint16_t idx)
 {
 	struct mlx5_devx_obj *cq_obj = 0;
 	struct mlx5_devx_cq_attr cq_attr = { 0 };
@@ -451,7 +451,7 @@
 error:
 	if (cq_obj)
 		mlx5_devx_cmd_destroy(cq_obj);
-	rxq_release_devx_cq_resources(rxq_ctrl);
+	mlx5_rxq_release_devx_cq_resources(rxq_ctrl);
 	return NULL;
 }
 
@@ -558,13 +558,13 @@
 		tmpl->fd = mlx5_os_get_devx_channel_fd(tmpl->devx_channel);
 	}
 	/* Create CQ using DevX API. */
-	tmpl->devx_cq = rxq_create_devx_cq_resources(dev, idx);
+	tmpl->devx_cq = mlx5_rxq_create_devx_cq_resources(dev, idx);
 	if (!tmpl->devx_cq) {
 		DRV_LOG(ERR, "Failed to create CQ.");
 		goto error;
 	}
 	/* Create RQ using DevX API. */
-	tmpl->rq = rxq_create_devx_rq_resources(dev, idx);
+	tmpl->rq = mlx5_rxq_create_devx_rq_resources(dev, idx);
 	if (!tmpl->rq) {
 		DRV_LOG(ERR, "Port %u Rx queue %u RQ creation failure.",
 			dev->data->port_id, idx);
@@ -589,8 +589,8 @@
 		claim_zero(mlx5_devx_cmd_destroy(tmpl->devx_cq));
 	if (tmpl->devx_channel)
 		mlx5_glue->devx_destroy_event_channel(tmpl->devx_channel);
-	rxq_release_devx_rq_resources(rxq_ctrl);
-	rxq_release_devx_cq_resources(rxq_ctrl);
+	mlx5_rxq_release_devx_rq_resources(rxq_ctrl);
+	mlx5_rxq_release_devx_cq_resources(rxq_ctrl);
 	rte_errno = ret; /* Restore rte_errno. */
 	return -rte_errno;
 }
@@ -878,7 +878,7 @@
  *   DevX Tx queue object.
  */
 static void
-txq_release_devx_sq_resources(struct mlx5_txq_obj *txq_obj)
+mlx5_txq_release_devx_sq_resources(struct mlx5_txq_obj *txq_obj)
 {
 	if (txq_obj->sq_devx)
 		claim_zero(mlx5_devx_cmd_destroy(txq_obj->sq_devx));
@@ -900,7 +900,7 @@
  *   DevX Tx queue object.
  */
 static void
-txq_release_devx_cq_resources(struct mlx5_txq_obj *txq_obj)
+mlx5_txq_release_devx_cq_resources(struct mlx5_txq_obj *txq_obj)
 {
 	if (txq_obj->cq_devx)
 		claim_zero(mlx5_devx_cmd_destroy(txq_obj->cq_devx));
@@ -922,40 +922,38 @@
  *   Txq object to destroy.
  */
 static void
-txq_release_devx_resources(struct mlx5_txq_obj *txq_obj)
+mlx5_txq_release_devx_resources(struct mlx5_txq_obj *txq_obj)
 {
 	MLX5_ASSERT(txq_obj->type == MLX5_TXQ_OBJ_TYPE_DEVX_SQ);
 
-	txq_release_devx_cq_resources(txq_obj);
-	txq_release_devx_sq_resources(txq_obj);
+	mlx5_txq_release_devx_cq_resources(txq_obj);
+	mlx5_txq_release_devx_sq_resources(txq_obj);
 }
 
 /**
- * Create a DevX CQ object for an Tx queue.
+ * Create a DevX CQ object and its resources for a Tx queue.
  *
  * @param dev
  *   Pointer to Ethernet device.
- * @param cqe_n
- *   Number of entries in the CQ.
  * @param idx
  *   Queue index in DPDK Tx queue array.
- * @param rxq_obj
- *   Pointer to Tx queue object data.
  *
  * @return
- *   The DevX CQ object initialized, NULL otherwise and rte_errno is set.
+ *   Number of CQEs in CQ, 0 otherwise and rte_errno is set.
  */
-static struct mlx5_devx_obj *
-mlx5_tx_devx_cq_new(struct rte_eth_dev *dev, uint32_t cqe_n, uint16_t idx,
-		    struct mlx5_txq_obj *txq_obj)
+static uint32_t
+mlx5_txq_create_devx_cq_resources(struct rte_eth_dev *dev, uint16_t idx)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
 	struct mlx5_txq_data *txq_data = (*priv->txqs)[idx];
-	struct mlx5_devx_obj *cq_obj = NULL;
+	struct mlx5_txq_ctrl *txq_ctrl =
+			container_of(txq_data, struct mlx5_txq_ctrl, txq);
+	struct mlx5_txq_obj *txq_obj = txq_ctrl->obj;
 	struct mlx5_devx_cq_attr cq_attr = { 0 };
 	struct mlx5_cqe *cqe;
 	size_t page_size;
 	size_t alignment;
+	uint32_t cqe_n;
 	uint32_t i;
 	int ret;
 
@@ -965,22 +963,25 @@
 	if (page_size == (size_t)-1) {
 		DRV_LOG(ERR, "Failed to get mem page size.");
 		rte_errno = ENOMEM;
-		return NULL;
+		return 0;
 	}
 	/* Allocate memory buffer for CQEs. */
 	alignment = MLX5_CQE_BUF_ALIGNMENT;
 	if (alignment == (size_t)-1) {
 		DRV_LOG(ERR, "Failed to get CQE buf alignment.");
 		rte_errno = ENOMEM;
-		return NULL;
+		return 0;
 	}
+	/* Create the Completion Queue. */
+	cqe_n = (1UL << txq_data->elts_n) / MLX5_TX_COMP_THRESH +
+		1 + MLX5_TX_COMP_THRESH_INLINE_DIV;
 	cqe_n = 1UL << log2above(cqe_n);
 	if (cqe_n > UINT16_MAX) {
 		DRV_LOG(ERR,
 			"Port %u Tx queue %u requests to many CQEs %u.",
 			dev->data->port_id, txq_data->idx, cqe_n);
 		rte_errno = EINVAL;
-		return NULL;
+		return 0;
 	}
 	txq_obj->cq_buf = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO,
 				      cqe_n * sizeof(struct mlx5_cqe),
@@ -991,7 +992,7 @@
 			"Port %u Tx queue %u cannot allocate memory (CQ).",
 			dev->data->port_id, txq_data->idx);
 		rte_errno = ENOMEM;
-		return NULL;
+		return 0;
 	}
 	/* Register allocated buffer in user space with DevX. */
 	txq_obj->cq_umem = mlx5_glue->devx_umem_reg(priv->sh->ctx,
@@ -1027,50 +1028,47 @@
 	cq_attr.log_cq_size = rte_log2_u32(cqe_n);
 	cq_attr.log_page_size = rte_log2_u32(page_size);
 	/* Create completion queue object with DevX. */
-	cq_obj = mlx5_devx_cmd_create_cq(priv->sh->ctx, &cq_attr);
-	if (!cq_obj) {
+	txq_obj->cq_devx = mlx5_devx_cmd_create_cq(priv->sh->ctx, &cq_attr);
+	if (!txq_obj->cq_devx) {
 		rte_errno = errno;
 		DRV_LOG(ERR, "Port %u Tx queue %u CQ creation failure.",
 			dev->data->port_id, idx);
 		goto error;
 	}
-	txq_data->cqe_n = log2above(cqe_n);
-	txq_data->cqe_s = 1 << txq_data->cqe_n;
 	/* Initial fill CQ buffer with invalid CQE opcode. */
 	cqe = (struct mlx5_cqe *)txq_obj->cq_buf;
-	for (i = 0; i < txq_data->cqe_s; i++) {
+	for (i = 0; i < cqe_n; i++) {
 		cqe->op_own = (MLX5_CQE_INVALID << 4) | MLX5_CQE_OWNER_MASK;
 		++cqe;
 	}
-	return cq_obj;
+	return cqe_n;
 error:
 	ret = rte_errno;
-	txq_release_devx_cq_resources(txq_obj);
+	mlx5_txq_release_devx_cq_resources(txq_obj);
 	rte_errno = ret;
-	return NULL;
+	return 0;
 }
 
 /**
- * Create a SQ object using DevX.
+ * Create a SQ object and its resources using DevX.
  *
  * @param dev
  *   Pointer to Ethernet device.
  * @param idx
  *   Queue index in DPDK Tx queue array.
- * @param rxq_obj
- *   Pointer to Tx queue object data.
  *
  * @return
- *   The DevX SQ object initialized, NULL otherwise and rte_errno is set.
+ *   Number of WQEs in SQ, 0 otherwise and rte_errno is set.
  */
-static struct mlx5_devx_obj *
-mlx5_devx_sq_new(struct rte_eth_dev *dev, uint16_t idx,
-		 struct mlx5_txq_obj *txq_obj)
+static uint32_t
+mlx5_txq_create_devx_sq_resources(struct rte_eth_dev *dev, uint16_t idx)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
 	struct mlx5_txq_data *txq_data = (*priv->txqs)[idx];
+	struct mlx5_txq_ctrl *txq_ctrl =
+			container_of(txq_data, struct mlx5_txq_ctrl, txq);
+	struct mlx5_txq_obj *txq_obj = txq_ctrl->obj;
 	struct mlx5_devx_create_sq_attr sq_attr = { 0 };
-	struct mlx5_devx_obj *sq_obj = NULL;
 	size_t page_size;
 	uint32_t wqe_n;
 	int ret;
@@ -1081,7 +1079,7 @@
 	if (page_size == (size_t)-1) {
 		DRV_LOG(ERR, "Failed to get mem page size.");
 		rte_errno = ENOMEM;
-		return NULL;
+		return 0;
 	}
 	wqe_n = RTE_MIN(1UL << txq_data->elts_n,
 			(uint32_t)priv->sh->device_attr.max_qp_wr);
@@ -1117,7 +1115,6 @@
 		DRV_LOG(ERR, "Failed to allocate SQ door-bell.");
 		goto error;
 	}
-	txq_data->wqe_n = log2above(wqe_n);
 	sq_attr.tis_lst_sz = 1;
 	sq_attr.tis_num = priv->sh->tis->id;
 	sq_attr.state = MLX5_SQC_STATE_RST;
@@ -1131,7 +1128,7 @@
 	sq_attr.wq_attr.wq_type = MLX5_WQ_TYPE_CYCLIC;
 	sq_attr.wq_attr.pd = priv->sh->pdn;
 	sq_attr.wq_attr.log_wq_stride = rte_log2_u32(MLX5_WQE_SIZE);
-	sq_attr.wq_attr.log_wq_sz = txq_data->wqe_n;
+	sq_attr.wq_attr.log_wq_sz = log2above(wqe_n);
 	sq_attr.wq_attr.dbr_umem_valid = 1;
 	sq_attr.wq_attr.dbr_addr = txq_obj->sq_dbrec_offset;
 	sq_attr.wq_attr.dbr_umem_id =
@@ -1140,19 +1137,19 @@
 	sq_attr.wq_attr.wq_umem_id = mlx5_os_get_umem_id(txq_obj->sq_umem);
 	sq_attr.wq_attr.wq_umem_offset = (uintptr_t)txq_obj->sq_buf % page_size;
 	/* Create Send Queue object with DevX. */
-	sq_obj = mlx5_devx_cmd_create_sq(priv->sh->ctx, &sq_attr);
-	if (!sq_obj) {
+	txq_obj->sq_devx = mlx5_devx_cmd_create_sq(priv->sh->ctx, &sq_attr);
+	if (!txq_obj->sq_devx) {
 		rte_errno = errno;
 		DRV_LOG(ERR, "Port %u Tx queue %u SQ creation failure.",
 			dev->data->port_id, idx);
 		goto error;
 	}
-	return sq_obj;
+	return wqe_n;
 error:
 	ret = rte_errno;
-	txq_release_devx_sq_resources(txq_obj);
+	mlx5_txq_release_devx_sq_resources(txq_obj);
 	rte_errno = ret;
-	return NULL;
+	return 0;
 }
 #endif
 
@@ -1188,6 +1185,7 @@
 	struct mlx5_txq_obj *txq_obj = txq_ctrl->obj;
 	void *reg_addr;
 	uint32_t cqe_n;
+	uint32_t wqe_n;
 	int ret = 0;
 
 	MLX5_ASSERT(txq_data);
@@ -1195,15 +1193,13 @@
 	txq_obj->type = MLX5_TXQ_OBJ_TYPE_DEVX_SQ;
 	txq_obj->txq_ctrl = txq_ctrl;
 	txq_obj->dev = dev;
-	/* Create the Completion Queue. */
-	cqe_n = (1UL << txq_data->elts_n) / MLX5_TX_COMP_THRESH +
-		1 + MLX5_TX_COMP_THRESH_INLINE_DIV;
-	/* Create completion queue object with DevX. */
-	txq_obj->cq_devx = mlx5_tx_devx_cq_new(dev, cqe_n, idx, txq_obj);
-	if (!txq_obj->cq_devx) {
+	cqe_n = mlx5_txq_create_devx_cq_resources(dev, idx);
+	if (!cqe_n) {
 		rte_errno = errno;
 		goto error;
 	}
+	txq_data->cqe_n = log2above(cqe_n);
+	txq_data->cqe_s = 1 << txq_data->cqe_n;
 	txq_data->cqe_m = txq_data->cqe_s - 1;
 	txq_data->cqes = (volatile struct mlx5_cqe *)txq_obj->cq_buf;
 	txq_data->cq_ci = 0;
@@ -1212,12 +1208,13 @@
 						txq_obj->cq_dbrec_offset);
 	*txq_data->cq_db = 0;
 	/* Create Send Queue object with DevX. */
-	txq_obj->sq_devx = mlx5_devx_sq_new(dev, idx, txq_obj);
-	if (!txq_obj->sq_devx) {
+	wqe_n = mlx5_txq_create_devx_sq_resources(dev, idx);
+	if (!wqe_n) {
 		rte_errno = errno;
 		goto error;
 	}
 	/* Create the Work Queue. */
+	txq_data->wqe_n = log2above(wqe_n);
 	txq_data->wqe_s = 1 << txq_data->wqe_n;
 	txq_data->wqe_m = txq_data->wqe_s - 1;
 	txq_data->wqes = (struct mlx5_wqe *)txq_obj->sq_buf;
@@ -1262,7 +1259,7 @@
 	return 0;
 error:
 	ret = rte_errno; /* Save rte_errno before cleanup. */
-	txq_release_devx_resources(txq_obj);
+	mlx5_txq_release_devx_resources(txq_obj);
 	rte_errno = ret; /* Restore rte_errno. */
 	return -rte_errno;
 #endif
@@ -1283,7 +1280,7 @@
 			claim_zero(mlx5_devx_cmd_destroy(txq_obj->tis));
 #ifdef HAVE_MLX5DV_DEVX_UAR_OFFSET
 	} else {
-		txq_release_devx_resources(txq_obj);
+		mlx5_txq_release_devx_resources(txq_obj);
 #endif
 	}
 }
-- 
1.8.3.1


^ permalink raw reply	[flat|nested] 17+ messages in thread

* [dpdk-dev] [PATCH v1 10/15] net/mlx5: rearrange QP creation in Verbs module
  2020-10-01 14:09 [dpdk-dev] [PATCH v1 00/15] mlx5 Tx DevX/Verbs separation Michael Baum
                   ` (8 preceding siblings ...)
  2020-10-01 14:09 ` [dpdk-dev] [PATCH v1 09/15] net/mlx5: rearrange SQ and CQ creation in DevX module Michael Baum
@ 2020-10-01 14:09 ` Michael Baum
  2020-10-01 14:09 ` [dpdk-dev] [PATCH v1 11/15] net/mlx5: separate Tx queue object modification Michael Baum
                   ` (5 subsequent siblings)
  15 siblings, 0 replies; 17+ messages in thread
From: Michael Baum @ 2020-10-01 14:09 UTC (permalink / raw)
  To: dev; +Cc: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko

1. Rename the function to reflect the internal resources it creates.
2. Reduce the number of function arguments; the QP creation helper now
   derives the queue object from the device and queue index (see the
   sketch below).
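
A minimal sketch of the lookup that makes the explicit txq_obj argument
unnecessary. The names come from this series, but the wrapper function
itself is hypothetical and shown only to illustrate the pattern:

	/* Hypothetical helper: recover the Tx queue object from the
	 * device and queue index alone. */
	static struct mlx5_txq_obj *
	txq_obj_from_idx(struct rte_eth_dev *dev, uint16_t idx)
	{
		struct mlx5_priv *priv = dev->data->dev_private;
		struct mlx5_txq_data *txq_data = (*priv->txqs)[idx];
		struct mlx5_txq_ctrl *txq_ctrl =
			container_of(txq_data, struct mlx5_txq_ctrl, txq);

		return txq_ctrl->obj;
	}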

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
 drivers/net/mlx5/linux/mlx5_verbs.c | 15 ++++++---------
 1 file changed, 6 insertions(+), 9 deletions(-)

diff --git a/drivers/net/mlx5/linux/mlx5_verbs.c b/drivers/net/mlx5/linux/mlx5_verbs.c
index 5568c75..0476d94 100644
--- a/drivers/net/mlx5/linux/mlx5_verbs.c
+++ b/drivers/net/mlx5/linux/mlx5_verbs.c
@@ -789,15 +789,12 @@
  *   Pointer to Ethernet device.
  * @param idx
  *   Queue index in DPDK Tx queue array.
- * @param rxq_obj
- *   Pointer to Tx queue object data.
  *
  * @return
- *   The QP Verbs object initialized, NULL otherwise and rte_errno is set.
+ *   The QP Verbs object, NULL otherwise and rte_errno is set.
  */
 static struct ibv_qp *
-mlx5_ibv_qp_new(struct rte_eth_dev *dev, uint16_t idx,
-		struct mlx5_txq_obj *txq_obj)
+mlx5_txq_ibv_qp_create(struct rte_eth_dev *dev, uint16_t idx)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
 	struct mlx5_txq_data *txq_data = (*priv->txqs)[idx];
@@ -807,11 +804,11 @@
 	struct ibv_qp_init_attr_ex qp_attr = { 0 };
 	const int desc = 1 << txq_data->elts_n;
 
-	MLX5_ASSERT(txq_ctrl->obj);
+	MLX5_ASSERT(txq_ctrl->obj->cq);
 	/* CQ to be associated with the send queue. */
-	qp_attr.send_cq = txq_obj->cq;
+	qp_attr.send_cq = txq_ctrl->obj->cq;
 	/* CQ to be associated with the receive queue. */
-	qp_attr.recv_cq = txq_obj->cq;
+	qp_attr.recv_cq = txq_ctrl->obj->cq;
 	/* Max number of outstanding WRs. */
 	qp_attr.cap.max_send_wr = ((priv->sh->device_attr.max_qp_wr < desc) ?
 				   priv->sh->device_attr.max_qp_wr : desc);
@@ -890,7 +887,7 @@
 		rte_errno = errno;
 		goto error;
 	}
-	txq_obj->qp = mlx5_ibv_qp_new(dev, idx, txq_obj);
+	txq_obj->qp = mlx5_txq_ibv_qp_create(dev, idx);
 	if (txq_obj->qp == NULL) {
 		rte_errno = errno;
 		goto error;
-- 
1.8.3.1


^ permalink raw reply	[flat|nested] 17+ messages in thread

* [dpdk-dev] [PATCH v1 11/15] net/mlx5: separate Tx queue object modification
  2020-10-01 14:09 [dpdk-dev] [PATCH v1 00/15] mlx5 Tx DevX/Verbs separation Michael Baum
                   ` (9 preceding siblings ...)
  2020-10-01 14:09 ` [dpdk-dev] [PATCH v1 10/15] net/mlx5: rearrange QP creation in Verbs module Michael Baum
@ 2020-10-01 14:09 ` Michael Baum
  2020-10-01 14:09 ` [dpdk-dev] [PATCH v1 12/15] net/mlx5: share " Michael Baum
                   ` (4 subsequent siblings)
  15 siblings, 0 replies; 17+ messages in thread
From: Michael Baum @ 2020-10-01 14:09 UTC (permalink / raw)
  To: dev; +Cc: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko

Separate the Tx queue object modification code into the Verbs and DevX
modules, dispatched through a new txq_obj_modify callback (see the
sketch below).
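
A short usage sketch, taken from the error-recovery path converted by
this patch; the callback resolves to mlx5_ibv_modify_qp() under Verbs
and to mlx5_devx_modify_sq() under DevX:

	/* Recover a Tx queue from the error state, backend-agnostic. */
	ret = priv->obj_ops.txq_obj_modify(txq_ctrl->obj,
					   MLX5_TXQ_MOD_ERR2RDY,
					   (uint8_t)priv->dev_port);
	if (ret)
		return ret;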

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
 drivers/net/mlx5/linux/mlx5_os.c    |   7 +++
 drivers/net/mlx5/linux/mlx5_verbs.c |  65 ++++++++++++++++++++++
 drivers/net/mlx5/mlx5.h             |   9 ++++
 drivers/net/mlx5/mlx5_devx.c        |  57 ++++++++++++++++++++
 drivers/net/mlx5/mlx5_rxtx.c        |  78 ++-------------------------
 drivers/net/mlx5/mlx5_txq.c         | 104 ++++--------------------------------
 6 files changed, 152 insertions(+), 168 deletions(-)

diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c
index 0db2b5a..487714f 100644
--- a/drivers/net/mlx5/linux/mlx5_os.c
+++ b/drivers/net/mlx5/linux/mlx5_os.c
@@ -1375,6 +1375,13 @@
 						ibv_obj_ops.drop_action_create;
 		priv->obj_ops.drop_action_destroy =
 						ibv_obj_ops.drop_action_destroy;
+#ifndef HAVE_MLX5DV_DEVX_UAR_OFFSET
+		priv->obj_ops.txq_obj_modify = ibv_obj_ops.txq_obj_modify;
+#else
+		if (!config->dv_esw_en)
+			priv->obj_ops.txq_obj_modify =
+						ibv_obj_ops.txq_obj_modify;
+#endif
 	} else {
 		priv->obj_ops = ibv_obj_ops;
 	}
diff --git a/drivers/net/mlx5/linux/mlx5_verbs.c b/drivers/net/mlx5/linux/mlx5_verbs.c
index 0476d94..7d5ea37 100644
--- a/drivers/net/mlx5/linux/mlx5_verbs.c
+++ b/drivers/net/mlx5/linux/mlx5_verbs.c
@@ -113,6 +113,70 @@
 }
 
 /**
+ * Modify QP using Verbs API.
+ *
+ * @param txq_obj
+ *   Verbs Tx queue object.
+ * @param type
+ *   Type of change queue state.
+ * @param dev_port
+ *   IB device port number.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+static int
+mlx5_ibv_modify_qp(struct mlx5_txq_obj *obj, enum mlx5_txq_modify_type type,
+		   uint8_t dev_port)
+{
+	struct ibv_qp_attr mod = {
+		.qp_state = IBV_QPS_RESET,
+		.port_num = dev_port,
+	};
+	int attr_mask = (IBV_QP_STATE | IBV_QP_PORT);
+	int ret;
+
+	if (type != MLX5_TXQ_MOD_RST2RDY) {
+		ret = mlx5_glue->modify_qp(obj->qp, &mod, IBV_QP_STATE);
+		if (ret) {
+			DRV_LOG(ERR, "Cannot change Tx QP state to RESET %s",
+				strerror(errno));
+			rte_errno = errno;
+			return ret;
+		}
+		if (type == MLX5_TXQ_MOD_RDY2RST)
+			return 0;
+	}
+	if (type == MLX5_TXQ_MOD_ERR2RDY)
+		attr_mask = IBV_QP_STATE;
+	mod.qp_state = IBV_QPS_INIT;
+	ret = mlx5_glue->modify_qp(obj->qp, &mod, attr_mask);
+	if (ret) {
+		DRV_LOG(ERR, "Cannot change Tx QP state to INIT %s",
+			strerror(errno));
+		rte_errno = errno;
+		return ret;
+	}
+	mod.qp_state = IBV_QPS_RTR;
+	ret = mlx5_glue->modify_qp(obj->qp, &mod, IBV_QP_STATE);
+	if (ret) {
+		DRV_LOG(ERR, "Cannot change Tx QP state to RTR %s",
+			strerror(errno));
+		rte_errno = errno;
+		return ret;
+	}
+	mod.qp_state = IBV_QPS_RTS;
+	ret = mlx5_glue->modify_qp(obj->qp, &mod, IBV_QP_STATE);
+	if (ret) {
+		DRV_LOG(ERR, "Cannot change Tx QP state to RTS %s",
+			strerror(errno));
+		rte_errno = errno;
+		return ret;
+	}
+	return 0;
+}
+
+/**
  * Create a CQ Verbs object.
  *
  * @param dev
@@ -1043,5 +1107,6 @@ struct mlx5_obj_ops ibv_obj_ops = {
 	.drop_action_create = mlx5_ibv_drop_action_create,
 	.drop_action_destroy = mlx5_ibv_drop_action_destroy,
 	.txq_obj_new = mlx5_txq_ibv_obj_new,
+	.txq_obj_modify = mlx5_ibv_modify_qp,
 	.txq_obj_release = mlx5_txq_ibv_obj_release,
 };
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 3093f6e..7cbb09b 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -774,6 +774,13 @@ struct mlx5_txq_obj {
 	};
 };
 
+enum mlx5_txq_modify_type {
+	MLX5_TXQ_MOD_RDY2RDY, /* modify state from ready to ready. */
+	MLX5_TXQ_MOD_RST2RDY, /* modify state from reset to ready. */
+	MLX5_TXQ_MOD_RDY2RST, /* modify state from ready to reset. */
+	MLX5_TXQ_MOD_ERR2RDY, /* modify state from error to ready. */
+};
+
 /* HW objects operations structure. */
 struct mlx5_obj_ops {
 	int (*rxq_obj_modify_vlan_strip)(struct mlx5_rxq_obj *rxq_obj, int on);
@@ -790,6 +797,8 @@ struct mlx5_obj_ops {
 	int (*drop_action_create)(struct rte_eth_dev *dev);
 	void (*drop_action_destroy)(struct rte_eth_dev *dev);
 	int (*txq_obj_new)(struct rte_eth_dev *dev, uint16_t idx);
+	int (*txq_obj_modify)(struct mlx5_txq_obj *obj,
+			      enum mlx5_txq_modify_type type, uint8_t dev_port);
 	void (*txq_obj_release)(struct mlx5_txq_obj *txq_obj);
 };
 
diff --git a/drivers/net/mlx5/mlx5_devx.c b/drivers/net/mlx5/mlx5_devx.c
index 55fe946..7404a15 100644
--- a/drivers/net/mlx5/mlx5_devx.c
+++ b/drivers/net/mlx5/mlx5_devx.c
@@ -73,6 +73,62 @@
 }
 
 /**
+ * Modify SQ using DevX API.
+ *
+ * @param txq_obj
+ *   DevX Tx queue object.
+ * @param type
+ *   Type of change queue state.
+ * @param dev_port
+ *   Unused; kept only to match the Verbs callback signature.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+static int
+mlx5_devx_modify_sq(struct mlx5_txq_obj *obj, enum mlx5_txq_modify_type type,
+		    uint8_t dev_port)
+{
+	struct mlx5_devx_modify_sq_attr msq_attr = { 0 };
+	int ret;
+
+	if (type != MLX5_TXQ_MOD_RST2RDY) {
+		/* Change queue state to reset. */
+		if (type == MLX5_TXQ_MOD_ERR2RDY)
+			msq_attr.sq_state = MLX5_SQC_STATE_ERR;
+		else
+			msq_attr.sq_state = MLX5_SQC_STATE_RDY;
+		msq_attr.state = MLX5_SQC_STATE_RST;
+		ret = mlx5_devx_cmd_modify_sq(obj->sq_devx, &msq_attr);
+		if (ret) {
+			DRV_LOG(ERR, "Cannot change the Tx SQ state to RESET"
+				" %s", strerror(errno));
+			rte_errno = errno;
+			return ret;
+		}
+	}
+	if (type != MLX5_TXQ_MOD_RDY2RST) {
+		/* Change queue state to ready. */
+		msq_attr.sq_state = MLX5_SQC_STATE_RST;
+		msq_attr.state = MLX5_SQC_STATE_RDY;
+		ret = mlx5_devx_cmd_modify_sq(obj->sq_devx, &msq_attr);
+		if (ret) {
+			DRV_LOG(ERR, "Cannot change the Tx SQ state to READY"
+				" %s", strerror(errno));
+			rte_errno = errno;
+			return ret;
+		}
+	}
+	/*
+	 * The dev_port argument is relevant only in the Verbs API. The same
+	 * txq_obj_modify function pointer may reference either this function
+	 * or its Verbs counterpart, so both must share the same signature.
+	 */
+	(void)dev_port;
+	return 0;
+}
+
+/**
  * Release the resources allocated for an RQ DevX object.
  *
  * @param rxq_ctrl
@@ -1298,5 +1354,6 @@ struct mlx5_obj_ops devx_obj_ops = {
 	.drop_action_create = mlx5_devx_drop_action_create,
 	.drop_action_destroy = mlx5_devx_drop_action_destroy,
 	.txq_obj_new = mlx5_txq_devx_obj_new,
+	.txq_obj_modify = mlx5_devx_modify_sq,
 	.txq_obj_release = mlx5_txq_devx_obj_release,
 };
diff --git a/drivers/net/mlx5/mlx5_rxtx.c b/drivers/net/mlx5/mlx5_rxtx.c
index 0b87be1..af4b4ba 100644
--- a/drivers/net/mlx5/mlx5_rxtx.c
+++ b/drivers/net/mlx5/mlx5_rxtx.c
@@ -936,79 +936,11 @@ enum mlx5_txcmp_code {
 		struct mlx5_txq_ctrl *txq_ctrl =
 			container_of(txq, struct mlx5_txq_ctrl, txq);
 
-		if (txq_ctrl->obj->type == MLX5_TXQ_OBJ_TYPE_DEVX_SQ) {
-			struct mlx5_devx_modify_sq_attr msq_attr = { 0 };
-
-			/* Change queue state to reset. */
-			msq_attr.sq_state = MLX5_SQC_STATE_ERR;
-			msq_attr.state = MLX5_SQC_STATE_RST;
-			ret = mlx5_devx_cmd_modify_sq(txq_ctrl->obj->sq_devx,
-						      &msq_attr);
-			if (ret) {
-				DRV_LOG(ERR, "Cannot change the "
-					"Tx QP state to RESET %s",
-					strerror(errno));
-				rte_errno = errno;
-				return ret;
-			}
-			/* Change queue state to ready. */
-			msq_attr.sq_state = MLX5_SQC_STATE_RST;
-			msq_attr.state = MLX5_SQC_STATE_RDY;
-			ret = mlx5_devx_cmd_modify_sq(txq_ctrl->obj->sq_devx,
-						      &msq_attr);
-			if (ret) {
-				DRV_LOG(ERR, "Cannot change the "
-					"Tx QP state to READY %s",
-					strerror(errno));
-				rte_errno = errno;
-				return ret;
-			}
-		} else {
-			struct ibv_qp_attr mod = {
-				.qp_state = IBV_QPS_RESET,
-				.port_num = (uint8_t)priv->dev_port,
-			};
-			struct ibv_qp *qp = txq_ctrl->obj->qp;
-
-			MLX5_ASSERT
-				(txq_ctrl->obj->type == MLX5_TXQ_OBJ_TYPE_IBV);
-
-			ret = mlx5_glue->modify_qp(qp, &mod, IBV_QP_STATE);
-			if (ret) {
-				DRV_LOG(ERR, "Cannot change the "
-					"Tx QP state to RESET %s",
-					strerror(errno));
-				rte_errno = errno;
-				return ret;
-			}
-			mod.qp_state = IBV_QPS_INIT;
-			ret = mlx5_glue->modify_qp(qp, &mod, IBV_QP_STATE);
-			if (ret) {
-				DRV_LOG(ERR, "Cannot change the "
-					"Tx QP state to INIT %s",
-					strerror(errno));
-				rte_errno = errno;
-				return ret;
-			}
-			mod.qp_state = IBV_QPS_RTR;
-			ret = mlx5_glue->modify_qp(qp, &mod, IBV_QP_STATE);
-			if (ret) {
-				DRV_LOG(ERR, "Cannot change the "
-					"Tx QP state to RTR %s",
-					strerror(errno));
-				rte_errno = errno;
-				return ret;
-			}
-			mod.qp_state = IBV_QPS_RTS;
-			ret = mlx5_glue->modify_qp(qp, &mod, IBV_QP_STATE);
-			if (ret) {
-				DRV_LOG(ERR, "Cannot change the "
-					"Tx QP state to RTS %s",
-					strerror(errno));
-				rte_errno = errno;
-				return ret;
-			}
-		}
+		ret = priv->obj_ops.txq_obj_modify(txq_ctrl->obj,
+						   MLX5_TXQ_MOD_ERR2RDY,
+						   (uint8_t)priv->dev_port);
+		if (ret)
+			return ret;
 	}
 	return 0;
 }
diff --git a/drivers/net/mlx5/mlx5_txq.c b/drivers/net/mlx5/mlx5_txq.c
index 23213d9..c31e446 100644
--- a/drivers/net/mlx5/mlx5_txq.c
+++ b/drivers/net/mlx5/mlx5_txq.c
@@ -182,37 +182,10 @@
 
 	MLX5_ASSERT(rte_eal_process_type() == RTE_PROC_PRIMARY);
 	/* Move QP to RESET state. */
-	if (txq_ctrl->obj->type == MLX5_TXQ_OBJ_TYPE_DEVX_SQ) {
-		struct mlx5_devx_modify_sq_attr msq_attr = { 0 };
-
-		/* Change queue state to reset with DevX. */
-		msq_attr.sq_state = MLX5_SQC_STATE_RDY;
-		msq_attr.state = MLX5_SQC_STATE_RST;
-		ret = mlx5_devx_cmd_modify_sq(txq_ctrl->obj->sq_devx,
-					      &msq_attr);
-		if (ret) {
-			DRV_LOG(ERR, "Cannot change the "
-				"Tx QP state to RESET %s",
-				strerror(errno));
-			rte_errno = errno;
-			return ret;
-		}
-	} else {
-		struct ibv_qp_attr mod = {
-			.qp_state = IBV_QPS_RESET,
-			.port_num = (uint8_t)priv->dev_port,
-		};
-		struct ibv_qp *qp = txq_ctrl->obj->qp;
-
-		/* Change queue state to reset with Verbs. */
-		ret = mlx5_glue->modify_qp(qp, &mod, IBV_QP_STATE);
-		if (ret) {
-			DRV_LOG(ERR, "Cannot change the Tx QP state to RESET "
-				"%s", strerror(errno));
-			rte_errno = errno;
-			return ret;
-		}
-	}
+	ret = priv->obj_ops.txq_obj_modify(txq_ctrl->obj, MLX5_TXQ_MOD_RDY2RST,
+					   (uint8_t)priv->dev_port);
+	if (ret)
+		return ret;
 	/* Handle all send completions. */
 	txq_sync_cq(txq);
 	/* Free elts stored in the SQ. */
@@ -281,70 +254,11 @@
 	int ret;
 
 	MLX5_ASSERT(rte_eal_process_type() ==  RTE_PROC_PRIMARY);
-	if (txq_ctrl->obj->type == MLX5_TXQ_OBJ_TYPE_DEVX_SQ) {
-		struct mlx5_devx_modify_sq_attr msq_attr = { 0 };
-		struct mlx5_txq_obj *obj = txq_ctrl->obj;
-
-		msq_attr.sq_state = MLX5_SQC_STATE_RDY;
-		msq_attr.state = MLX5_SQC_STATE_RST;
-		ret = mlx5_devx_cmd_modify_sq(obj->sq_devx, &msq_attr);
-		if (ret) {
-			rte_errno = errno;
-			DRV_LOG(ERR,
-				"Cannot change the Tx QP state to RESET "
-				"%s", strerror(errno));
-			return ret;
-		}
-		msq_attr.sq_state = MLX5_SQC_STATE_RST;
-		msq_attr.state = MLX5_SQC_STATE_RDY;
-		ret = mlx5_devx_cmd_modify_sq(obj->sq_devx, &msq_attr);
-		if (ret) {
-			rte_errno = errno;
-			DRV_LOG(ERR,
-				"Cannot change the Tx QP state to READY "
-				"%s", strerror(errno));
-			return ret;
-		}
-	} else {
-		struct ibv_qp_attr mod = {
-			.qp_state = IBV_QPS_RESET,
-			.port_num = (uint8_t)priv->dev_port,
-		};
-		struct ibv_qp *qp = txq_ctrl->obj->qp;
-
-		ret = mlx5_glue->modify_qp(qp, &mod, IBV_QP_STATE);
-		if (ret) {
-			DRV_LOG(ERR, "Cannot change the Tx QP state to RESET "
-				"%s", strerror(errno));
-			rte_errno = errno;
-			return ret;
-		}
-		mod.qp_state = IBV_QPS_INIT;
-		ret = mlx5_glue->modify_qp(qp, &mod,
-					   (IBV_QP_STATE | IBV_QP_PORT));
-		if (ret) {
-			DRV_LOG(ERR, "Cannot change Tx QP state to INIT %s",
-				strerror(errno));
-			rte_errno = errno;
-			return ret;
-		}
-		mod.qp_state = IBV_QPS_RTR;
-		ret = mlx5_glue->modify_qp(qp, &mod, IBV_QP_STATE);
-		if (ret) {
-			DRV_LOG(ERR, "Cannot change Tx QP state to RTR %s",
-				strerror(errno));
-			rte_errno = errno;
-			return ret;
-		}
-		mod.qp_state = IBV_QPS_RTS;
-		ret = mlx5_glue->modify_qp(qp, &mod, IBV_QP_STATE);
-		if (ret) {
-			DRV_LOG(ERR, "Cannot change Tx QP state to RTS %s",
-				strerror(errno));
-			rte_errno = errno;
-			return ret;
-		}
-	}
+	ret = priv->obj_ops.txq_obj_modify(txq_ctrl->obj,
+					   MLX5_TXQ_MOD_RDY2RDY,
+					   (uint8_t)priv->dev_port);
+	if (ret)
+		return ret;
 	txq_ctrl->txq.wqe_ci = 0;
 	txq_ctrl->txq.wqe_pi = 0;
 	txq_ctrl->txq.elts_comp = 0;
-- 
1.8.3.1


^ permalink raw reply	[flat|nested] 17+ messages in thread

* [dpdk-dev] [PATCH v1 12/15] net/mlx5: share Tx queue object modification
  2020-10-01 14:09 [dpdk-dev] [PATCH v1 00/15] mlx5 Tx DevX/Verbs separation Michael Baum
                   ` (10 preceding siblings ...)
  2020-10-01 14:09 ` [dpdk-dev] [PATCH v1 11/15] net/mlx5: separate Tx queue object modification Michael Baum
@ 2020-10-01 14:09 ` Michael Baum
  2020-10-01 14:09 ` [dpdk-dev] [PATCH v1 13/15] net/mlx5: remove Tx queue object type field Michael Baum
                   ` (3 subsequent siblings)
  15 siblings, 0 replies; 17+ messages in thread
From: Michael Baum @ 2020-10-01 14:09 UTC (permalink / raw)
  To: dev; +Cc: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko

Use the new queue-state modify functions in the Tx object creation
flows of the DevX and Verbs modules, replacing the open-coded state
transitions (see the sketch below).
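
A hedged sketch of the simplification in the Verbs creation flow: the
former INIT/RTR/RTS sequence of mlx5_glue->modify_qp() calls collapses
into a single call, and the DevX flow gets the matching one-liner (the
port argument is ignored there):

	/* Verbs: RST2RDY walks INIT -> RTR -> RTS internally. */
	ret = mlx5_ibv_modify_qp(txq_obj, MLX5_TXQ_MOD_RST2RDY,
				 (uint8_t)priv->dev_port);

	/* DevX: move the SQ from RST to RDY. */
	ret = mlx5_devx_modify_sq(txq_obj, MLX5_TXQ_MOD_RST2RDY, 0);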

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
 drivers/net/mlx5/linux/mlx5_verbs.c | 34 +++-------------------------------
 drivers/net/mlx5/mlx5_devx.c        |  7 ++-----
 drivers/net/mlx5/mlx5_txq.c         |  2 --
 3 files changed, 5 insertions(+), 38 deletions(-)

diff --git a/drivers/net/mlx5/linux/mlx5_verbs.c b/drivers/net/mlx5/linux/mlx5_verbs.c
index 7d5ea37..ad6e3d7 100644
--- a/drivers/net/mlx5/linux/mlx5_verbs.c
+++ b/drivers/net/mlx5/linux/mlx5_verbs.c
@@ -922,7 +922,6 @@
 	struct mlx5_txq_ctrl *txq_ctrl =
 		container_of(txq_data, struct mlx5_txq_ctrl, txq);
 	struct mlx5_txq_obj *txq_obj = txq_ctrl->obj;
-	struct ibv_qp_attr mod;
 	unsigned int cqe_n;
 	struct mlx5dv_qp qp;
 	struct mlx5dv_cq cq_info;
@@ -956,37 +955,10 @@
 		rte_errno = errno;
 		goto error;
 	}
-	mod = (struct ibv_qp_attr){
-		/* Move the QP to this state. */
-		.qp_state = IBV_QPS_INIT,
-		/* IB device port number. */
-		.port_num = (uint8_t)priv->dev_port,
-	};
-	ret = mlx5_glue->modify_qp(txq_obj->qp, &mod,
-				   (IBV_QP_STATE | IBV_QP_PORT));
-	if (ret) {
-		DRV_LOG(ERR,
-			"Port %u Tx queue %u QP state to IBV_QPS_INIT failed.",
-			dev->data->port_id, idx);
-		rte_errno = errno;
-		goto error;
-	}
-	mod = (struct ibv_qp_attr){
-		.qp_state = IBV_QPS_RTR
-	};
-	ret = mlx5_glue->modify_qp(txq_obj->qp, &mod, IBV_QP_STATE);
+	ret = mlx5_ibv_modify_qp(txq_obj, MLX5_TXQ_MOD_RST2RDY,
+				 (uint8_t)priv->dev_port);
 	if (ret) {
-		DRV_LOG(ERR,
-			"Port %u Tx queue %u QP state to IBV_QPS_RTR failed.",
-			dev->data->port_id, idx);
-		rte_errno = errno;
-		goto error;
-	}
-	mod.qp_state = IBV_QPS_RTS;
-	ret = mlx5_glue->modify_qp(txq_obj->qp, &mod, IBV_QP_STATE);
-	if (ret) {
-		DRV_LOG(ERR,
-			"Port %u Tx queue %u QP state to IBV_QPS_RTS failed.",
+		DRV_LOG(ERR, "Port %u Tx queue %u QP state modification failed.",
 			dev->data->port_id, idx);
 		rte_errno = errno;
 		goto error;
diff --git a/drivers/net/mlx5/mlx5_devx.c b/drivers/net/mlx5/mlx5_devx.c
index 7404a15..c876ae9 100644
--- a/drivers/net/mlx5/mlx5_devx.c
+++ b/drivers/net/mlx5/mlx5_devx.c
@@ -1237,7 +1237,6 @@
 	return -rte_errno;
 #else
 	struct mlx5_dev_ctx_shared *sh = priv->sh;
-	struct mlx5_devx_modify_sq_attr msq_attr = { 0 };
 	struct mlx5_txq_obj *txq_obj = txq_ctrl->obj;
 	void *reg_addr;
 	uint32_t cqe_n;
@@ -1286,13 +1285,11 @@
 	*txq_data->qp_db = 0;
 	txq_data->qp_num_8s = txq_obj->sq_devx->id << 8;
 	/* Change Send Queue state to Ready-to-Send. */
-	msq_attr.sq_state = MLX5_SQC_STATE_RST;
-	msq_attr.state = MLX5_SQC_STATE_RDY;
-	ret = mlx5_devx_cmd_modify_sq(txq_obj->sq_devx, &msq_attr);
+	ret = mlx5_devx_modify_sq(txq_obj, MLX5_TXQ_MOD_RST2RDY, 0);
 	if (ret) {
 		rte_errno = errno;
 		DRV_LOG(ERR,
-			"Port %u Tx queue %u SP state to SQC_STATE_RDY failed.",
+			"Port %u Tx queue %u SQ state to SQC_STATE_RDY failed.",
 			dev->data->port_id, idx);
 		goto error;
 	}
diff --git a/drivers/net/mlx5/mlx5_txq.c b/drivers/net/mlx5/mlx5_txq.c
index c31e446..af84f5f 100644
--- a/drivers/net/mlx5/mlx5_txq.c
+++ b/drivers/net/mlx5/mlx5_txq.c
@@ -16,8 +16,6 @@
 #include <rte_common.h>
 #include <rte_eal_paging.h>
 
-#include <mlx5_glue.h>
-#include <mlx5_devx_cmds.h>
 #include <mlx5_common.h>
 #include <mlx5_common_mr.h>
 #include <mlx5_malloc.h>
-- 
1.8.3.1


^ permalink raw reply	[flat|nested] 17+ messages in thread

* [dpdk-dev] [PATCH v1 13/15] net/mlx5: remove Tx queue object type field
  2020-10-01 14:09 [dpdk-dev] [PATCH v1 00/15] mlx5 Tx DevX/Verbs separation Michael Baum
                   ` (11 preceding siblings ...)
  2020-10-01 14:09 ` [dpdk-dev] [PATCH v1 12/15] net/mlx5: share " Michael Baum
@ 2020-10-01 14:09 ` Michael Baum
  2020-10-01 14:09 ` [dpdk-dev] [PATCH v1 14/15] net/mlx5: separate Rx queue state modification Michael Baum
                   ` (2 subsequent siblings)
  15 siblings, 0 replies; 17+ messages in thread
From: Michael Baum @ 2020-10-01 14:09 UTC (permalink / raw)
  To: dev; +Cc: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko

Once the separation between Verbs and DevX is done using function
pointers, the type field of the Tx queue object structure becomes
redundant and is no longer used by any code.
Remove the unnecessary field from the structure (see the sketch below).
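
A simplified sketch of why the field is redundant: the Verbs/DevX
decision is taken once when the object operations are bound, so call
sites no longer need a per-object tag ('use_devx' below stands in for
the real selection logic, which also considers dv_flow_en and related
settings):

	/* Bound once during device spawn: */
	priv->obj_ops = use_devx ? devx_obj_ops : ibv_obj_ops;

	/* Call sites dispatch without inspecting a type field: */
	priv->obj_ops.txq_obj_release(txq_ctrl->obj);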

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
 drivers/net/mlx5/linux/mlx5_verbs.c | 1 -
 drivers/net/mlx5/mlx5.h             | 8 --------
 drivers/net/mlx5/mlx5_devx.c        | 6 +-----
 3 files changed, 1 insertion(+), 14 deletions(-)

diff --git a/drivers/net/mlx5/linux/mlx5_verbs.c b/drivers/net/mlx5/linux/mlx5_verbs.c
index ad6e3d7..6260b4e 100644
--- a/drivers/net/mlx5/linux/mlx5_verbs.c
+++ b/drivers/net/mlx5/linux/mlx5_verbs.c
@@ -931,7 +931,6 @@
 
 	MLX5_ASSERT(txq_data);
 	MLX5_ASSERT(txq_obj);
-	txq_obj->type = MLX5_TXQ_OBJ_TYPE_IBV;
 	txq_obj->txq_ctrl = txq_ctrl;
 	priv->verbs_alloc_ctx.type = MLX5_VERBS_ALLOC_TYPE_TX_QUEUE;
 	priv->verbs_alloc_ctx.obj = txq_ctrl;
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 7cbb09b..fd5dd87 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -735,18 +735,10 @@ struct mlx5_hrxq {
 	uint8_t rss_key[]; /* Hash key. */
 };
 
-enum mlx5_txq_obj_type {
-	MLX5_TXQ_OBJ_TYPE_IBV,		/* mlx5_txq_obj with ibv_wq. */
-	MLX5_TXQ_OBJ_TYPE_DEVX_SQ,	/* mlx5_txq_obj with mlx5_devx_sq. */
-	MLX5_TXQ_OBJ_TYPE_DEVX_HAIRPIN,
-	/* mlx5_txq_obj with mlx5_devx_tq and hairpin support. */
-};
-
 /* Verbs/DevX Tx queue elements. */
 struct mlx5_txq_obj {
 	LIST_ENTRY(mlx5_txq_obj) next; /* Pointer to the next element. */
 	struct mlx5_txq_ctrl *txq_ctrl; /* Pointer to the control queue. */
-	enum mlx5_txq_obj_type type; /* The txq object type. */
 	RTE_STD_C11
 	union {
 		struct {
diff --git a/drivers/net/mlx5/mlx5_devx.c b/drivers/net/mlx5/mlx5_devx.c
index c876ae9..430ad08 100644
--- a/drivers/net/mlx5/mlx5_devx.c
+++ b/drivers/net/mlx5/mlx5_devx.c
@@ -890,7 +890,6 @@
 
 	MLX5_ASSERT(txq_data);
 	MLX5_ASSERT(tmpl);
-	tmpl->type = MLX5_TXQ_OBJ_TYPE_DEVX_HAIRPIN;
 	tmpl->txq_ctrl = txq_ctrl;
 	attr.hairpin = 1;
 	attr.tis_lst_sz = 1;
@@ -980,8 +979,6 @@
 static void
 mlx5_txq_release_devx_resources(struct mlx5_txq_obj *txq_obj)
 {
-	MLX5_ASSERT(txq_obj->type == MLX5_TXQ_OBJ_TYPE_DEVX_SQ);
-
 	mlx5_txq_release_devx_cq_resources(txq_obj);
 	mlx5_txq_release_devx_sq_resources(txq_obj);
 }
@@ -1245,7 +1242,6 @@
 
 	MLX5_ASSERT(txq_data);
 	MLX5_ASSERT(txq_obj);
-	txq_obj->type = MLX5_TXQ_OBJ_TYPE_DEVX_SQ;
 	txq_obj->txq_ctrl = txq_ctrl;
 	txq_obj->dev = dev;
 	cqe_n = mlx5_txq_create_devx_cq_resources(dev, idx);
@@ -1328,7 +1324,7 @@
 mlx5_txq_devx_obj_release(struct mlx5_txq_obj *txq_obj)
 {
 	MLX5_ASSERT(txq_obj);
-	if (txq_obj->type == MLX5_TXQ_OBJ_TYPE_DEVX_HAIRPIN) {
+	if (txq_obj->txq_ctrl->type == MLX5_TXQ_TYPE_HAIRPIN) {
 		if (txq_obj->tis)
 			claim_zero(mlx5_devx_cmd_destroy(txq_obj->tis));
 #ifdef HAVE_MLX5DV_DEVX_UAR_OFFSET
-- 
1.8.3.1


^ permalink raw reply	[flat|nested] 17+ messages in thread

* [dpdk-dev] [PATCH v1 14/15] net/mlx5: separate Rx queue state modification
  2020-10-01 14:09 [dpdk-dev] [PATCH v1 00/15] mlx5 Tx DevX/Verbs separation Michael Baum
                   ` (12 preceding siblings ...)
  2020-10-01 14:09 ` [dpdk-dev] [PATCH v1 13/15] net/mlx5: remove Tx queue object type field Michael Baum
@ 2020-10-01 14:09 ` Michael Baum
  2020-10-01 14:09 ` [dpdk-dev] [PATCH v1 15/15] net/mlx5: remove Rx queue object type field Michael Baum
  2020-10-06 15:25 ` [dpdk-dev] [PATCH v1 00/15] mlx5 Tx DevX/Verbs separation Raslan Darawsheh
  15 siblings, 0 replies; 17+ messages in thread
From: Michael Baum @ 2020-10-01 14:09 UTC (permalink / raw)
  To: dev; +Cc: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko

Separate the Rx queue state modification code into the Verbs and DevX
modules, dispatched through the rxq_obj_modify callback (see the sketch
below).
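
A short usage sketch mirroring the Tx side; stopping and restarting an
Rx queue now goes through the backend-agnostic callback with the new
state-transition enum:

	/* Stop: move the queue back to reset. */
	ret = priv->obj_ops.rxq_obj_modify(rxq_ctrl->obj,
					   MLX5_RXQ_MOD_RDY2RST);
	/* ... reset indices and the RQ doorbell ... */
	/* Start: move the queue from reset to ready. */
	ret = priv->obj_ops.rxq_obj_modify(rxq_ctrl->obj,
					   MLX5_RXQ_MOD_RST2RDY);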

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
 drivers/net/mlx5/linux/mlx5_verbs.c |  9 +++++----
 drivers/net/mlx5/mlx5.h             |  9 ++++++++-
 drivers/net/mlx5/mlx5_devx.c        | 25 ++++++++++++++++++++-----
 drivers/net/mlx5/mlx5_rxq.c         |  4 ++--
 drivers/net/mlx5/mlx5_rxtx.c        | 27 +--------------------------
 5 files changed, 36 insertions(+), 38 deletions(-)

diff --git a/drivers/net/mlx5/linux/mlx5_verbs.c b/drivers/net/mlx5/linux/mlx5_verbs.c
index 6260b4e..b4a6b5e 100644
--- a/drivers/net/mlx5/linux/mlx5_verbs.c
+++ b/drivers/net/mlx5/linux/mlx5_verbs.c
@@ -4,7 +4,6 @@
 
 #include <stddef.h>
 #include <errno.h>
-#include <stdbool.h>
 #include <string.h>
 #include <stdint.h>
 #include <unistd.h>
@@ -97,16 +96,18 @@
  *
  * @param rxq_obj
  *   Verbs Rx queue object.
+ * @param type
+ *   Type of change queue state.
  *
  * @return
  *   0 on success, a negative errno value otherwise and rte_errno is set.
  */
 static int
-mlx5_ibv_modify_wq(struct mlx5_rxq_obj *rxq_obj, bool is_start)
+mlx5_ibv_modify_wq(struct mlx5_rxq_obj *rxq_obj, uint8_t type)
 {
 	struct ibv_wq_attr mod = {
 		.attr_mask = IBV_WQ_ATTR_STATE,
-		.wq_state = is_start ? IBV_WQS_RDY : IBV_WQS_RESET,
+		.wq_state = (enum ibv_wq_state)type,
 	};
 
 	return mlx5_glue->modify_wq(rxq_obj->wq, &mod);
@@ -418,7 +419,7 @@
 		goto error;
 	}
 	/* Change queue state to ready. */
-	ret = mlx5_ibv_modify_wq(tmpl, true);
+	ret = mlx5_ibv_modify_wq(tmpl, IBV_WQS_RDY);
 	if (ret) {
 		DRV_LOG(ERR,
 			"Port %u Rx queue %u WQ state to IBV_WQS_RDY failed.",
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index fd5dd87..f385b48 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -766,6 +766,13 @@ struct mlx5_txq_obj {
 	};
 };
 
+enum mlx5_rxq_modify_type {
+	MLX5_RXQ_MOD_ERR2RST, /* modify state from error to reset. */
+	MLX5_RXQ_MOD_RST2RDY, /* modify state from reset to ready. */
+	MLX5_RXQ_MOD_RDY2ERR, /* modify state from ready to error. */
+	MLX5_RXQ_MOD_RDY2RST, /* modify state from ready to reset. */
+};
+
 enum mlx5_txq_modify_type {
 	MLX5_TXQ_MOD_RDY2RDY, /* modify state from ready to ready. */
 	MLX5_TXQ_MOD_RST2RDY, /* modify state from reset to ready. */
@@ -778,7 +785,7 @@ struct mlx5_obj_ops {
 	int (*rxq_obj_modify_vlan_strip)(struct mlx5_rxq_obj *rxq_obj, int on);
 	int (*rxq_obj_new)(struct rte_eth_dev *dev, uint16_t idx);
 	int (*rxq_event_get)(struct mlx5_rxq_obj *rxq_obj);
-	int (*rxq_obj_modify)(struct mlx5_rxq_obj *rxq_obj, bool is_start);
+	int (*rxq_obj_modify)(struct mlx5_rxq_obj *rxq_obj, uint8_t type);
 	void (*rxq_obj_release)(struct mlx5_rxq_obj *rxq_obj);
 	int (*ind_table_new)(struct rte_eth_dev *dev, const unsigned int log_n,
 			     struct mlx5_ind_table_obj *ind_tbl);
diff --git a/drivers/net/mlx5/mlx5_devx.c b/drivers/net/mlx5/mlx5_devx.c
index 430ad08..a7c941c 100644
--- a/drivers/net/mlx5/mlx5_devx.c
+++ b/drivers/net/mlx5/mlx5_devx.c
@@ -52,22 +52,37 @@
  *
  * @param rxq_obj
  *   DevX Rx queue object.
+ * @param type
+ *   Type of change queue state.
  *
  * @return
  *   0 on success, a negative errno value otherwise and rte_errno is set.
  */
 static int
-mlx5_devx_modify_rq(struct mlx5_rxq_obj *rxq_obj, bool is_start)
+mlx5_devx_modify_rq(struct mlx5_rxq_obj *rxq_obj, uint8_t type)
 {
 	struct mlx5_devx_modify_rq_attr rq_attr;
 
 	memset(&rq_attr, 0, sizeof(rq_attr));
-	if (is_start) {
+	switch (type) {
+	case MLX5_RXQ_MOD_ERR2RST:
+		rq_attr.rq_state = MLX5_RQC_STATE_ERR;
+		rq_attr.state = MLX5_RQC_STATE_RST;
+		break;
+	case MLX5_RXQ_MOD_RST2RDY:
 		rq_attr.rq_state = MLX5_RQC_STATE_RST;
 		rq_attr.state = MLX5_RQC_STATE_RDY;
-	} else {
+		break;
+	case MLX5_RXQ_MOD_RDY2ERR:
+		rq_attr.rq_state = MLX5_RQC_STATE_RDY;
+		rq_attr.state = MLX5_RQC_STATE_ERR;
+		break;
+	case MLX5_RXQ_MOD_RDY2RST:
 		rq_attr.rq_state = MLX5_RQC_STATE_RDY;
 		rq_attr.state = MLX5_RQC_STATE_RST;
+		break;
+	default:
+		break;
 	}
 	return mlx5_devx_cmd_modify_rq(rxq_obj->rq, &rq_attr);
 }
@@ -194,7 +209,7 @@
 	MLX5_ASSERT(rxq_obj);
 	MLX5_ASSERT(rxq_obj->rq);
 	if (rxq_obj->type == MLX5_RXQ_OBJ_TYPE_DEVX_HAIRPIN) {
-		mlx5_devx_modify_rq(rxq_obj, false);
+		mlx5_devx_modify_rq(rxq_obj, MLX5_RXQ_MOD_RDY2RST);
 		claim_zero(mlx5_devx_cmd_destroy(rxq_obj->rq));
 	} else {
 		MLX5_ASSERT(rxq_obj->devx_cq);
@@ -628,7 +643,7 @@
 		goto error;
 	}
 	/* Change queue state to ready. */
-	ret = mlx5_devx_modify_rq(tmpl, true);
+	ret = mlx5_devx_modify_rq(tmpl, MLX5_RXQ_MOD_RST2RDY);
 	if (ret)
 		goto error;
 	rxq_data->cq_arm_sn = 0;
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index c059e21..f1d8373 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -513,7 +513,7 @@
 	int ret;
 
 	MLX5_ASSERT(rte_eal_process_type() == RTE_PROC_PRIMARY);
-	ret = priv->obj_ops.rxq_obj_modify(rxq_ctrl->obj, false);
+	ret = priv->obj_ops.rxq_obj_modify(rxq_ctrl->obj, MLX5_RXQ_MOD_RDY2RST);
 	if (ret) {
 		DRV_LOG(ERR, "Cannot change Rx WQ state to RESET:  %s",
 			strerror(errno));
@@ -612,7 +612,7 @@
 	/* Reset RQ consumer before moving queue ro READY state. */
 	*rxq->rq_db = rte_cpu_to_be_32(0);
 	rte_io_wmb();
-	ret = priv->obj_ops.rxq_obj_modify(rxq_ctrl->obj, true);
+	ret = priv->obj_ops.rxq_obj_modify(rxq_ctrl->obj, MLX5_RXQ_MOD_RST2RDY);
 	if (ret) {
 		DRV_LOG(ERR, "Cannot change Rx WQ state to READY:  %s",
 			strerror(errno));
diff --git a/drivers/net/mlx5/mlx5_rxtx.c b/drivers/net/mlx5/mlx5_rxtx.c
index af4b4ba..dc22479 100644
--- a/drivers/net/mlx5/mlx5_rxtx.c
+++ b/drivers/net/mlx5/mlx5_rxtx.c
@@ -16,8 +16,6 @@
 #include <rte_cycles.h>
 #include <rte_flow.h>
 
-#include <mlx5_glue.h>
-#include <mlx5_devx_cmds.h>
 #include <mlx5_prm.h>
 #include <mlx5_common.h>
 
@@ -901,30 +899,7 @@ enum mlx5_txcmp_code {
 		struct mlx5_rxq_ctrl *rxq_ctrl =
 			container_of(rxq, struct mlx5_rxq_ctrl, rxq);
 
-		if (rxq_ctrl->obj->type == MLX5_RXQ_OBJ_TYPE_IBV) {
-			struct ibv_wq_attr mod = {
-				.attr_mask = IBV_WQ_ATTR_STATE,
-				.wq_state = sm->state,
-			};
-
-			ret = mlx5_glue->modify_wq(rxq_ctrl->obj->wq, &mod);
-		} else { /* rxq_ctrl->obj->type == MLX5_RXQ_OBJ_TYPE_DEVX_RQ. */
-			struct mlx5_devx_modify_rq_attr rq_attr;
-
-			memset(&rq_attr, 0, sizeof(rq_attr));
-			if (sm->state == IBV_WQS_RESET) {
-				rq_attr.rq_state = MLX5_RQC_STATE_ERR;
-				rq_attr.state = MLX5_RQC_STATE_RST;
-			} else if (sm->state == IBV_WQS_RDY) {
-				rq_attr.rq_state = MLX5_RQC_STATE_RST;
-				rq_attr.state = MLX5_RQC_STATE_RDY;
-			} else if (sm->state == IBV_WQS_ERR) {
-				rq_attr.rq_state = MLX5_RQC_STATE_RDY;
-				rq_attr.state = MLX5_RQC_STATE_ERR;
-			}
-			ret = mlx5_devx_cmd_modify_rq(rxq_ctrl->obj->rq,
-						      &rq_attr);
-		}
+		ret = priv->obj_ops.rxq_obj_modify(rxq_ctrl->obj, sm->state);
 		if (ret) {
 			DRV_LOG(ERR, "Cannot change Rx WQ state to %u  - %s",
 					sm->state, strerror(errno));
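
The hunk above is the heart of the separation: instead of branching on
the object type and issuing Verbs or DevX calls inline, the caller now
makes a single indirect call through the obj_ops table selected at
device spawn. The stand-alone C sketch below illustrates only the
dispatch pattern; every name in it is invented for the example, and the
real backend calls are reduced to printf() stubs:

    #include <stdio.h>

    struct rxq_obj; /* opaque queue object, backend-specific in reality */

    /* Per-backend callbacks, filled in once at initialization. */
    struct obj_ops {
            int (*rxq_obj_modify)(struct rxq_obj *obj, unsigned char type);
    };

    static int verbs_rxq_modify(struct rxq_obj *obj, unsigned char type)
    {
            (void)obj;
            printf("Verbs backend: modify WQ, transition %u\n",
                   (unsigned)type);
            return 0;
    }

    static int devx_rxq_modify(struct rxq_obj *obj, unsigned char type)
    {
            (void)obj;
            printf("DevX backend: modify RQ, transition %u\n",
                   (unsigned)type);
            return 0;
    }

    int main(void)
    {
            int have_devx = 1; /* decided from device capabilities */
            struct obj_ops ops = {
                    .rxq_obj_modify = have_devx ? devx_rxq_modify
                                                : verbs_rxq_modify,
            };

            /* The caller no longer knows which backend it drives. */
            return ops.rxq_obj_modify(NULL, 1 /* e.g. RST2RDY */);
    }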
-- 
1.8.3.1


^ permalink raw reply	[flat|nested] 17+ messages in thread

* [dpdk-dev] [PATCH v1 15/15] net/mlx5: remove Rx queue object type field
  2020-10-01 14:09 [dpdk-dev] [PATCH v1 00/15] mlx5 Tx DevX/Verbs separation Michael Baum
                   ` (13 preceding siblings ...)
  2020-10-01 14:09 ` [dpdk-dev] [PATCH v1 14/15] net/mlx5: separate Rx queue state modification Michael Baum
@ 2020-10-01 14:09 ` Michael Baum
  2020-10-06 15:25 ` [dpdk-dev] [PATCH v1 00/15] mlx5 Tx DevX/Verbs separation Raslan Darawsheh
  15 siblings, 0 replies; 17+ messages in thread
From: Michael Baum @ 2020-10-01 14:09 UTC (permalink / raw)
  To: dev; +Cc: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko

Once the separation between Verbs and DevX is done using function
pointers, the type field of the Rx queue object structure becomes
redundant and is no longer used by any code.
Remove the unnecessary field from the structure.

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
 drivers/net/mlx5/linux/mlx5_verbs.c | 1 -
 drivers/net/mlx5/mlx5.h             | 8 --------
 drivers/net/mlx5/mlx5_devx.c        | 4 +---
 drivers/net/mlx5/mlx5_vlan.c        | 5 ++---
 4 files changed, 3 insertions(+), 15 deletions(-)

diff --git a/drivers/net/mlx5/linux/mlx5_verbs.c b/drivers/net/mlx5/linux/mlx5_verbs.c
index b4a6b5e..494ddba 100644
--- a/drivers/net/mlx5/linux/mlx5_verbs.c
+++ b/drivers/net/mlx5/linux/mlx5_verbs.c
@@ -368,7 +368,6 @@
 	MLX5_ASSERT(tmpl);
 	priv->verbs_alloc_ctx.type = MLX5_VERBS_ALLOC_TYPE_RX_QUEUE;
 	priv->verbs_alloc_ctx.obj = rxq_ctrl;
-	tmpl->type = MLX5_RXQ_OBJ_TYPE_IBV;
 	tmpl->rxq_ctrl = rxq_ctrl;
 	if (rxq_ctrl->irq) {
 		tmpl->ibv_channel =
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index f385b48..87d3c15 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -676,18 +676,10 @@ struct mlx5_proc_priv {
 #define MLX5_PROC_PRIV(port_id) \
 	((struct mlx5_proc_priv *)rte_eth_devices[port_id].process_private)
 
-enum mlx5_rxq_obj_type {
-	MLX5_RXQ_OBJ_TYPE_IBV,          /* mlx5_rxq_obj with ibv_wq. */
-	MLX5_RXQ_OBJ_TYPE_DEVX_RQ,      /* mlx5_rxq_obj with mlx5_devx_rq. */
-	MLX5_RXQ_OBJ_TYPE_DEVX_HAIRPIN,
-	/* mlx5_rxq_obj with mlx5_devx_rq and hairpin support. */
-};
-
 /* Verbs/DevX Rx queue elements. */
 struct mlx5_rxq_obj {
 	LIST_ENTRY(mlx5_rxq_obj) next; /* Pointer to the next element. */
 	struct mlx5_rxq_ctrl *rxq_ctrl; /* Back pointer to parent. */
-	enum mlx5_rxq_obj_type type;
 	int fd; /* File descriptor for event channel */
 	RTE_STD_C11
 	union {
diff --git a/drivers/net/mlx5/mlx5_devx.c b/drivers/net/mlx5/mlx5_devx.c
index a7c941c..11bda32 100644
--- a/drivers/net/mlx5/mlx5_devx.c
+++ b/drivers/net/mlx5/mlx5_devx.c
@@ -208,7 +208,7 @@
 {
 	MLX5_ASSERT(rxq_obj);
 	MLX5_ASSERT(rxq_obj->rq);
-	if (rxq_obj->type == MLX5_RXQ_OBJ_TYPE_DEVX_HAIRPIN) {
+	if (rxq_obj->rxq_ctrl->type == MLX5_RXQ_TYPE_HAIRPIN) {
 		mlx5_devx_modify_rq(rxq_obj, MLX5_RXQ_MOD_RDY2RST);
 		claim_zero(mlx5_devx_cmd_destroy(rxq_obj->rq));
 	} else {
@@ -550,7 +550,6 @@
 
 	MLX5_ASSERT(rxq_data);
 	MLX5_ASSERT(tmpl);
-	tmpl->type = MLX5_RXQ_OBJ_TYPE_DEVX_HAIRPIN;
 	tmpl->rxq_ctrl = rxq_ctrl;
 	attr.hairpin = 1;
 	max_wq_data = priv->config.hca_attr.log_max_hairpin_wq_data_sz;
@@ -611,7 +610,6 @@
 	MLX5_ASSERT(tmpl);
 	if (rxq_ctrl->type == MLX5_RXQ_TYPE_HAIRPIN)
 		return mlx5_rxq_obj_hairpin_new(dev, idx);
-	tmpl->type = MLX5_RXQ_OBJ_TYPE_DEVX_RQ;
 	tmpl->rxq_ctrl = rxq_ctrl;
 	if (rxq_ctrl->irq) {
 		int devx_ev_flag =
diff --git a/drivers/net/mlx5/mlx5_vlan.c b/drivers/net/mlx5/mlx5_vlan.c
index 290503a..dbb9d36 100644
--- a/drivers/net/mlx5/mlx5_vlan.c
+++ b/drivers/net/mlx5/mlx5_vlan.c
@@ -116,9 +116,8 @@
 	}
 	ret = priv->obj_ops.rxq_obj_modify_vlan_strip(rxq_ctrl->obj, on);
 	if (ret) {
-		DRV_LOG(ERR, "port %u failed to modify object %d stripping "
-			"mode: %s", dev->data->port_id,
-			rxq_ctrl->obj->type, strerror(rte_errno));
+		DRV_LOG(ERR, "Port %u failed to modify object stripping mode:"
+			" %s", dev->data->port_id, strerror(rte_errno));
 		return;
 	}
 	/* Update related bits in RX queue. */
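
The mlx5_devx.c hunk above also shows why the field can go safely: the
object keeps a back pointer to its mlx5_rxq_ctrl parent, whose own type
field (MLX5_RXQ_TYPE_HAIRPIN and friends) already carries the same
information, so the per-object enum was a second copy of a single fact.
A minimal stand-alone illustration of that refactoring (all names
invented for the example, not the driver structures):

    #include <assert.h>

    enum queue_type { QUEUE_STANDARD, QUEUE_HAIRPIN };

    struct queue_ctrl {
            enum queue_type type; /* the single source of truth */
    };

    struct queue_obj {
            struct queue_ctrl *ctrl; /* back pointer to the parent */
            /* no duplicated 'type' field: read it via the parent */
    };

    static int is_hairpin(const struct queue_obj *obj)
    {
            return obj->ctrl->type == QUEUE_HAIRPIN;
    }

    int main(void)
    {
            struct queue_ctrl ctrl = { .type = QUEUE_HAIRPIN };
            struct queue_obj obj = { .ctrl = &ctrl };

            assert(is_hairpin(&obj));
            return 0;
    }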
-- 
1.8.3.1


^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [dpdk-dev] [PATCH v1 00/15] mlx5 Tx DevX/Verbs separation
  2020-10-01 14:09 [dpdk-dev] [PATCH v1 00/15] mlx5 Tx DevX/Verbs separation Michael Baum
                   ` (14 preceding siblings ...)
  2020-10-01 14:09 ` [dpdk-dev] [PATCH v1 15/15] net/mlx5: remove Rx queue object type field Michael Baum
@ 2020-10-06 15:25 ` Raslan Darawsheh
  15 siblings, 0 replies; 17+ messages in thread
From: Raslan Darawsheh @ 2020-10-06 15:25 UTC (permalink / raw)
  To: Michael Baum, dev; +Cc: Matan Azrad, Slava Ovsiienko

Hi,

> -----Original Message-----
> From: Michael Baum <michaelba@nvidia.com>
> Sent: Thursday, October 1, 2020 5:09 PM
> To: dev@dpdk.org
> Cc: Matan Azrad <matan@nvidia.com>; Raslan Darawsheh
> <rasland@nvidia.com>; Slava Ovsiienko <viacheslavo@nvidia.com>
> Subject: [PATCH v1 00/15] mlx5 Tx DevX/Verbs separation
> 
> The series is an arrangement to multi-OS support by net/mlx5 driver so it
> comes to ease the code management for OS which supports\doesn't support
> DevX\Verbs operations.
> 
> Michael Baum (15):
>   net/mlx5: fix send queue doorbell typo
>   net/mlx5: fix unused variable in Txq creation
>   net/mlx5: mitigate Tx queue reference counters
>   net/mlx5: reorder Tx queue DevX object creation
>   net/mlx5: reorder Tx queue Verbs object creation
>   net/mlx5: reposition the event queue number field
>   net/mlx5: separate Tx queue object creations
>   net/mlx5: share Tx control code
>   net/mlx5: rearrange SQ and CQ creation in DevX module
>   net/mlx5: rearrange QP creation in Verbs module
>   net/mlx5: separate Tx queue object modification
>   net/mlx5: share Tx queue object modification
>   net/mlx5: remove Tx queue object type field
>   net/mlx5: separate Rx queue state modification
>   net/mlx5: remove Rx queue object type field
> 
>  drivers/net/mlx5/linux/mlx5_os.c    |  80 ++++
>  drivers/net/mlx5/linux/mlx5_verbs.c | 296 ++++++++++++-
>  drivers/net/mlx5/linux/mlx5_verbs.h |   3 +
>  drivers/net/mlx5/mlx5.c             |  10 +
>  drivers/net/mlx5/mlx5.h             |  61 ++-
>  drivers/net/mlx5/mlx5_devx.c        | 593 +++++++++++++++++++++++--
>  drivers/net/mlx5/mlx5_devx.h        |   3 +
>  drivers/net/mlx5/mlx5_rxq.c         |   4 +-
>  drivers/net/mlx5/mlx5_rxtx.c        | 105 +----
>  drivers/net/mlx5/mlx5_rxtx.h        |  45 +-
>  drivers/net/mlx5/mlx5_trigger.c     |  40 +-
>  drivers/net/mlx5/mlx5_txpp.c        |  28 +-
>  drivers/net/mlx5/mlx5_txq.c         | 850 ++----------------------------------
>  drivers/net/mlx5/mlx5_vlan.c        |   5 +-
>  14 files changed, 1087 insertions(+), 1036 deletions(-)
> 
> --
> 1.8.3.1

Series applied to next-net-mlx,

Kindest regards,
Raslan Darawsheh

^ permalink raw reply	[flat|nested] 17+ messages in thread
