DPDK patches and discussions
* [dpdk-dev] [PATCH 00/17] common/mlx5: share DevX resources creations
@ 2020-12-17 11:44 Michael Baum
  2020-12-17 11:44 ` [dpdk-dev] [PATCH 01/17] net/mlx5: fix ASO SQ creation error flow Michael Baum
                   ` (16 more replies)
  0 siblings, 17 replies; 57+ messages in thread
From: Michael Baum @ 2020-12-17 11:44 UTC (permalink / raw)
  To: dev; +Cc: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko

The CQ, SQ and RQ objects are created via DevX in many places, across
several drivers. Move these creations to the common library so the code
is shared.

Michael Baum (17):
  net/mlx5: fix ASO SQ creation error flow
  common/mlx5: share DevX CQ creation
  regex/mlx5: move DevX CQ creation to common
  vdpa/mlx5: move DevX CQ creation to common
  net/mlx5: move rearm and clock queue CQ creation to common
  net/mlx5: move ASO CQ creation to common
  net/mlx5: move Tx CQ creation to common
  net/mlx5: move Rx CQ creation to common
  common/mlx5: enhance page size configuration
  common/mlx5: share DevX SQ creation
  regex/mlx5: move DevX SQ creation to common
  net/mlx5: move rearm and clock queue SQ creation to common
  net/mlx5: move Tx SQ creation to common
  net/mlx5: move ASO SQ creation to common
  common/mlx5: share DevX RQ creation
  net/mlx5: move Rx RQ creation to common
  common/mlx5: remove doorbell allocation API

 drivers/common/mlx5/meson.build          |   1 +
 drivers/common/mlx5/mlx5_common.c        | 122 ------
 drivers/common/mlx5/mlx5_common.h        |  23 --
 drivers/common/mlx5/mlx5_common_devx.c   | 395 +++++++++++++++++++
 drivers/common/mlx5/mlx5_common_devx.h   |  56 +++
 drivers/common/mlx5/mlx5_devx_cmds.c     |  53 ++-
 drivers/net/mlx5/mlx5.c                  |   8 -
 drivers/net/mlx5/mlx5.h                  |  54 +--
 drivers/net/mlx5/mlx5_devx.c             | 643 +++++++------------------------
 drivers/net/mlx5/mlx5_flow_age.c         | 172 +++------
 drivers/net/mlx5/mlx5_rxtx.c             |   2 +-
 drivers/net/mlx5/mlx5_rxtx.h             |   8 -
 drivers/net/mlx5/mlx5_txpp.c             | 294 ++++----------
 drivers/regex/mlx5/mlx5_regex.c          |   6 -
 drivers/regex/mlx5/mlx5_regex.h          |  17 +-
 drivers/regex/mlx5/mlx5_regex_control.c  | 242 +++---------
 drivers/regex/mlx5/mlx5_regex_fastpath.c |  18 +-
 drivers/vdpa/mlx5/mlx5_vdpa.h            |  10 +-
 drivers/vdpa/mlx5/mlx5_vdpa_event.c      |  81 ++--
 drivers/vdpa/mlx5/mlx5_vdpa_virtq.c      |   2 +-
 20 files changed, 839 insertions(+), 1368 deletions(-)
 create mode 100644 drivers/common/mlx5/mlx5_common_devx.c
 create mode 100644 drivers/common/mlx5/mlx5_common_devx.h

-- 
1.8.3.1



* [dpdk-dev] [PATCH 01/17] net/mlx5: fix ASO SQ creation error flow
  2020-12-17 11:44 [dpdk-dev] [PATCH 00/17] common/mlx5: share DevX resources creations Michael Baum
@ 2020-12-17 11:44 ` Michael Baum
  2020-12-29  8:52   ` [dpdk-dev] [PATCH v2 00/17] common/mlx5: share DevX resources creations Michael Baum
  2020-12-17 11:44 ` [dpdk-dev] [PATCH 02/17] common/mlx5: share DevX CQ creation Michael Baum
                   ` (15 subsequent siblings)
  16 siblings, 1 reply; 57+ messages in thread
From: Michael Baum @ 2020-12-17 11:44 UTC (permalink / raw)
  To: dev; +Cc: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko, stable

In ASO SQ creation, the PMD allocates a umem buffer for the SQ.

When the umem buffer allocation fails, the MR and CQ memory are not
freed, which causes a memory leak.

Free them in the error flow.

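A minimal sketch of the fixed flow follows. The error label body here is
an assumption standing in for the function's actual cleanup code, which
is outside this hunk:

	sq->umem_buf = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO, wq_size +
				   sizeof(*sq->db_rec) * 2, 4096, socket);
	if (!sq->umem_buf) {
		DRV_LOG(ERR, "Can't allocate wqe buffer.");
		rte_errno = ENOMEM;
		goto error; /* Was "return -ENOMEM", leaking the MR and CQ. */
	}
	...
error:
	/* Assumed cleanup: release what was created before the failure. */
	mlx5_aso_cq_destroy(&sq->cq);
	mlx5_aso_devx_dereg_mr(&sq->mr);
	return -1;
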
Fixes: f935ed4b645a ("net/mlx5: support flow hit action for aging")
Cc: stable@dpdk.org

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
 drivers/net/mlx5/mlx5_flow_age.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/net/mlx5/mlx5_flow_age.c b/drivers/net/mlx5/mlx5_flow_age.c
index cea2cf7..0ea61be 100644
--- a/drivers/net/mlx5/mlx5_flow_age.c
+++ b/drivers/net/mlx5/mlx5_flow_age.c
@@ -278,7 +278,8 @@
 				   sizeof(*sq->db_rec) * 2, 4096, socket);
 	if (!sq->umem_buf) {
 		DRV_LOG(ERR, "Can't allocate wqe buffer.");
-		return -ENOMEM;
+		rte_errno = ENOMEM;
+		goto error;
 	}
 	sq->wqe_umem = mlx5_glue->devx_umem_reg(ctx,
 						(void *)(uintptr_t)sq->umem_buf,
-- 
1.8.3.1



* [dpdk-dev] [PATCH 02/17] common/mlx5: share DevX CQ creation
  2020-12-17 11:44 [dpdk-dev] [PATCH 00/17] common/mlx5: share DevX resources creations Michael Baum
  2020-12-17 11:44 ` [dpdk-dev] [PATCH 01/17] net/mlx5: fix ASO SQ creation error flow Michael Baum
@ 2020-12-17 11:44 ` Michael Baum
  2020-12-17 11:44 ` [dpdk-dev] [PATCH 03/17] regex/mlx5: move DevX CQ creation to common Michael Baum
                   ` (14 subsequent siblings)
  16 siblings, 0 replies; 57+ messages in thread
From: Michael Baum @ 2020-12-17 11:44 UTC (permalink / raw)
  To: dev; +Cc: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko

The DevX CQ object is created in several places, in several different
drivers.
In all of them almost all the details are the same; in particular, the
allocation of the required resources is duplicated.

Add a structure that contains all the resources, and provide creation
and release functions for it.

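For illustration, a minimal usage sketch of the new helpers; only the
structure and function names come from this patch, while the context
pointer, UAR page id and descriptor count are placeholders:

	struct mlx5_devx_cq cq_obj;
	struct mlx5_devx_cq_attr attr = {
		.uar_page_id = uar_page_id, /* Placeholder UAR page id. */
	};

	/* Create a CQ with 2^4 = 16 descriptors on any NUMA socket. */
	if (mlx5_devx_cq_create(ctx, &cq_obj, 4, &attr, SOCKET_ID_ANY))
		return -rte_errno;
	/* ... poll cq_obj.cqes, ring cq_obj.db_rec ... */
	mlx5_devx_cq_destroy(&cq_obj);
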
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
 drivers/common/mlx5/meson.build        |   1 +
 drivers/common/mlx5/mlx5_common_devx.c | 157 +++++++++++++++++++++++++++++++++
 drivers/common/mlx5/mlx5_common_devx.h |  28 ++++++
 3 files changed, 186 insertions(+)
 create mode 100644 drivers/common/mlx5/mlx5_common_devx.c
 create mode 100644 drivers/common/mlx5/mlx5_common_devx.h

diff --git a/drivers/common/mlx5/meson.build b/drivers/common/mlx5/meson.build
index 3dacc6f..26cee06 100644
--- a/drivers/common/mlx5/meson.build
+++ b/drivers/common/mlx5/meson.build
@@ -16,6 +16,7 @@ sources += files(
 	'mlx5_common_mr.c',
 	'mlx5_malloc.c',
 	'mlx5_common_pci.c',
+	'mlx5_common_devx.c',
 )
 
 cflags_options = [
diff --git a/drivers/common/mlx5/mlx5_common_devx.c b/drivers/common/mlx5/mlx5_common_devx.c
new file mode 100644
index 0000000..324c6ea
--- /dev/null
+++ b/drivers/common/mlx5/mlx5_common_devx.c
@@ -0,0 +1,157 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2020 Mellanox Technologies, Ltd
+ */
+#include <stdint.h>
+
+#include <rte_errno.h>
+#include <rte_common.h>
+#include <rte_eal_paging.h>
+
+#include <mlx5_glue.h>
+#include <mlx5_common_os.h>
+
+#include "mlx5_prm.h"
+#include "mlx5_devx_cmds.h"
+#include "mlx5_common_utils.h"
+#include "mlx5_malloc.h"
+#include "mlx5_common.h"
+#include "mlx5_common_devx.h"
+
+/**
+ * Destroy DevX Completion Queue.
+ *
+ * @param[in] cq
+ *   DevX CQ to destroy.
+ */
+void
+mlx5_devx_cq_destroy(struct mlx5_devx_cq *cq)
+{
+	if (cq->cq)
+		claim_zero(mlx5_devx_cmd_destroy(cq->cq));
+	if (cq->umem_obj)
+		claim_zero(mlx5_glue->devx_umem_dereg(cq->umem_obj));
+	if (cq->umem_buf)
+		mlx5_free((void *)(uintptr_t)cq->umem_buf);
+}
+
+/* Mark all CQEs initially as invalid. */
+static void
+mlx5_cq_init(struct mlx5_devx_cq *cq_obj, uint16_t cq_size)
+{
+	volatile struct mlx5_cqe *cqe = cq_obj->cqes;
+	uint16_t i;
+
+	for (i = 0; i < cq_size; i++, cqe++)
+		cqe->op_own = (MLX5_CQE_INVALID << 4) | MLX5_CQE_OWNER_MASK;
+}
+
+/**
+ * Create Completion Queue using DevX API.
+ *
+ * Get a pointer to a partially initialized attributes structure, and update
+ * the following fields:
+ *   q_umem_valid
+ *   q_umem_id
+ *   q_umem_offset
+ *   db_umem_valid
+ *   db_umem_id
+ *   db_umem_offset
+ *   eqn
+ *   log_cq_size
+ *   log_page_size
+ * All other fields are updated by the caller.
+ *
+ * @param[in] ctx
+ *   Context returned from mlx5 open_device() glue function.
+ * @param[in/out] cq_obj
+ *   Pointer to CQ to create.
+ * @param[in] log_desc_n
+ *   Log of number of descriptors in queue.
+ * @param[in] attr
+ *   Pointer to CQ attributes structure.
+ * @param[in] socket
+ *   Socket to use for allocation.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+int
+mlx5_devx_cq_create(void *ctx, struct mlx5_devx_cq *cq_obj, uint16_t log_desc_n,
+		    struct mlx5_devx_cq_attr *attr, int socket)
+{
+	struct mlx5_devx_obj *cq = NULL;
+	struct mlx5dv_devx_umem *umem_obj = NULL;
+	void *umem_buf = NULL;
+	size_t page_size = rte_mem_page_size();
+	size_t alignment = MLX5_CQE_BUF_ALIGNMENT;
+	uint32_t umem_size, umem_dbrec;
+	uint32_t eqn;
+	uint16_t cq_size = 1 << log_desc_n;
+	int ret;
+
+	if (page_size == (size_t)-1 || alignment == (size_t)-1) {
+		DRV_LOG(ERR, "Failed to get page_size.");
+		rte_errno = ENOMEM;
+		return -rte_errno;
+	}
+	/* Query first EQN. */
+	ret = mlx5_glue->devx_query_eqn(ctx, 0, &eqn);
+	if (ret) {
+		rte_errno = errno;
+		DRV_LOG(ERR, "Failed to query event queue number.");
+		goto error;
+	}
+	/* Allocate memory buffer for CQEs and doorbell record. */
+	umem_size = sizeof(struct mlx5_cqe) * cq_size;
+	umem_dbrec = RTE_ALIGN(umem_size, MLX5_DBR_SIZE);
+	umem_size += MLX5_DBR_SIZE;
+	umem_buf = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO, umem_size,
+			       alignment, socket);
+	if (!umem_buf) {
+		DRV_LOG(ERR, "Failed to allocate memory for CQ.");
+		rte_errno = ENOMEM;
+		return -rte_errno;
+	}
+	/* Register allocated buffer in user space with DevX. */
+	umem_obj = mlx5_glue->devx_umem_reg(ctx, (void *)(uintptr_t)umem_buf,
+					    umem_size, IBV_ACCESS_LOCAL_WRITE);
+	if (!umem_obj) {
+		DRV_LOG(ERR, "Failed to register umem for CQ.");
+		rte_errno = errno;
+		goto error;
+	}
+	/* Fill attributes for CQ object creation. */
+	attr->q_umem_valid = 1;
+	attr->q_umem_id = mlx5_os_get_umem_id(umem_obj);
+	attr->q_umem_offset = 0;
+	attr->db_umem_valid = 1;
+	attr->db_umem_id = attr->q_umem_id;
+	attr->db_umem_offset = umem_dbrec;
+	attr->eqn = eqn;
+	attr->log_cq_size = log_desc_n;
+	attr->log_page_size = rte_log2_u32(page_size);
+	/* Create completion queue object with DevX. */
+	cq = mlx5_devx_cmd_create_cq(ctx, attr);
+	if (!cq) {
+		DRV_LOG(ERR, "Can't create DevX CQ object.");
+		rte_errno  = ENOMEM;
+		goto error;
+	}
+	cq_obj->umem_buf = umem_buf;
+	cq_obj->umem_obj = umem_obj;
+	cq_obj->cq = cq;
+	cq_obj->db_rec = RTE_PTR_ADD(cq_obj->umem_buf, umem_dbrec);
+	/* Mark all CQEs initially as invalid. */
+	mlx5_cq_init(cq_obj, cq_size);
+	return 0;
+error:
+	ret = rte_errno;
+	if (cq)
+		claim_zero(mlx5_devx_cmd_destroy(cq));
+	if (umem_obj)
+		claim_zero(mlx5_glue->devx_umem_dereg(umem_obj));
+	if (umem_buf)
+		mlx5_free((void *)(uintptr_t)umem_buf);
+	rte_errno = ret;
+	return -rte_errno;
+}
diff --git a/drivers/common/mlx5/mlx5_common_devx.h b/drivers/common/mlx5/mlx5_common_devx.h
new file mode 100644
index 0000000..31cb804
--- /dev/null
+++ b/drivers/common/mlx5/mlx5_common_devx.h
@@ -0,0 +1,28 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2020 Mellanox Technologies, Ltd
+ */
+
+#ifndef RTE_PMD_MLX5_COMMON_DEVX_H_
+#define RTE_PMD_MLX5_COMMON_DEVX_H_
+
+#include "mlx5_devx_cmds.h"
+
+/* DevX Completion Queue structure. */
+struct mlx5_devx_cq {
+	struct mlx5_devx_obj *cq; /* The CQ DevX object. */
+	struct mlx5dv_devx_umem *umem_obj; /* The CQ umem object. */
+	union {
+		volatile void *umem_buf;
+		volatile struct mlx5_cqe *cqes; /* The CQ ring buffer. */
+	};
+	volatile uint32_t *db_rec; /* The CQ doorbell record. */
+};
+
+/* mlx5_common_devx.c */
+
+void mlx5_devx_cq_destroy(struct mlx5_devx_cq *cq);
+int mlx5_devx_cq_create(void *ctx, struct mlx5_devx_cq *cq_obj,
+			uint16_t log_desc_n, struct mlx5_devx_cq_attr *attr,
+			int socket);
+
+#endif /* RTE_PMD_MLX5_COMMON_DEVX_H_ */
-- 
1.8.3.1



* [dpdk-dev] [PATCH 03/17] regex/mlx5: move DevX CQ creation to common
  2020-12-17 11:44 [dpdk-dev] [PATCH 00/17] common/mlx5: share DevX resources creations Michael Baum
  2020-12-17 11:44 ` [dpdk-dev] [PATCH 01/17] net/mlx5: fix ASO SQ creation error flow Michael Baum
  2020-12-17 11:44 ` [dpdk-dev] [PATCH 02/17] common/mlx5: share DevX CQ creation Michael Baum
@ 2020-12-17 11:44 ` Michael Baum
  2020-12-17 11:44 ` [dpdk-dev] [PATCH 04/17] vdpa/mlx5: " Michael Baum
                   ` (13 subsequent siblings)
  16 siblings, 0 replies; 57+ messages in thread
From: Michael Baum @ 2020-12-17 11:44 UTC (permalink / raw)
  To: dev; +Cc: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko

Use the common function for DevX CQ creation.

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
 drivers/regex/mlx5/mlx5_regex.c          |  6 ---
 drivers/regex/mlx5/mlx5_regex.h          |  9 +---
 drivers/regex/mlx5/mlx5_regex_control.c  | 91 ++++++--------------------------
 drivers/regex/mlx5/mlx5_regex_fastpath.c |  4 +-
 4 files changed, 20 insertions(+), 90 deletions(-)

diff --git a/drivers/regex/mlx5/mlx5_regex.c b/drivers/regex/mlx5/mlx5_regex.c
index c91c444..c0d6331 100644
--- a/drivers/regex/mlx5/mlx5_regex.c
+++ b/drivers/regex/mlx5/mlx5_regex.c
@@ -170,12 +170,6 @@
 		rte_errno = rte_errno ? rte_errno : EINVAL;
 		goto error;
 	}
-	ret = mlx5_glue->devx_query_eqn(ctx, 0, &priv->eqn);
-	if (ret) {
-		DRV_LOG(ERR, "can't query event queue number.");
-		rte_errno = ENOMEM;
-		goto error;
-	}
 	/*
 	 * This PMD always claims the write memory barrier on UAR
 	 * registers writings, it is safe to allocate UAR with any
diff --git a/drivers/regex/mlx5/mlx5_regex.h b/drivers/regex/mlx5/mlx5_regex.h
index 2c4877c..9f7a388 100644
--- a/drivers/regex/mlx5/mlx5_regex.h
+++ b/drivers/regex/mlx5/mlx5_regex.h
@@ -12,6 +12,7 @@
 
 #include <mlx5_common.h>
 #include <mlx5_common_mr.h>
+#include <mlx5_common_devx.h>
 
 #include "mlx5_rxp.h"
 
@@ -30,13 +31,8 @@ struct mlx5_regex_sq {
 
 struct mlx5_regex_cq {
 	uint32_t log_nb_desc; /* Log 2 number of desc for this object. */
-	struct mlx5_devx_obj *obj; /* The CQ DevX object. */
-	int64_t dbr_offset; /* Door bell record offset. */
-	uint32_t dbr_umem; /* Door bell record umem id. */
-	volatile struct mlx5_cqe *cqe; /* The CQ ring buffer. */
-	struct mlx5dv_devx_umem *cqe_umem; /* CQ buffer umem. */
+	struct mlx5_devx_cq cq_obj; /* The CQ DevX object. */
 	size_t ci;
-	uint32_t *dbr;
 };
 
 struct mlx5_regex_qp {
@@ -75,7 +71,6 @@ struct mlx5_regex_priv {
 	struct mlx5_regex_db db[MLX5_RXP_MAX_ENGINES +
 				MLX5_RXP_EM_COUNT];
 	uint32_t nb_engines; /* Number of RegEx engines. */
-	uint32_t eqn; /* EQ number. */
 	struct mlx5dv_devx_uar *uar; /* UAR object. */
 	struct ibv_pd *pd;
 	struct mlx5_dbr_page_list dbrpgs; /* Door-bell pages. */
diff --git a/drivers/regex/mlx5/mlx5_regex_control.c b/drivers/regex/mlx5/mlx5_regex_control.c
index d6f452b..ca6c0f5 100644
--- a/drivers/regex/mlx5/mlx5_regex_control.c
+++ b/drivers/regex/mlx5/mlx5_regex_control.c
@@ -6,6 +6,7 @@
 
 #include <rte_log.h>
 #include <rte_errno.h>
+#include <rte_memory.h>
 #include <rte_malloc.h>
 #include <rte_regexdev.h>
 #include <rte_regexdev_core.h>
@@ -17,6 +18,7 @@
 #include <mlx5_devx_cmds.h>
 #include <mlx5_prm.h>
 #include <mlx5_common_os.h>
+#include <mlx5_common_devx.h>
 
 #include "mlx5_regex.h"
 #include "mlx5_regex_utils.h"
@@ -44,8 +46,6 @@
 /**
  * destroy CQ.
  *
- * @param priv
- *   Pointer to the priv object.
  * @param cp
  *   Pointer to the CQ to be destroyed.
  *
@@ -53,24 +53,10 @@
  *   0 on success, a negative errno value otherwise and rte_errno is set.
  */
 static int
-regex_ctrl_destroy_cq(struct mlx5_regex_priv *priv, struct mlx5_regex_cq *cq)
+regex_ctrl_destroy_cq(struct mlx5_regex_cq *cq)
 {
-	if (cq->cqe_umem) {
-		mlx5_glue->devx_umem_dereg(cq->cqe_umem);
-		cq->cqe_umem = NULL;
-	}
-	if (cq->cqe) {
-		rte_free((void *)(uintptr_t)cq->cqe);
-		cq->cqe = NULL;
-	}
-	if (cq->dbr_offset) {
-		mlx5_release_dbr(&priv->dbrpgs, cq->dbr_umem, cq->dbr_offset);
-		cq->dbr_offset = -1;
-	}
-	if (cq->obj) {
-		mlx5_devx_cmd_destroy(cq->obj);
-		cq->obj = NULL;
-	}
+	mlx5_devx_cq_destroy(&cq->cq_obj);
+	memset(cq, 0, sizeof(*cq));
 	return 0;
 }
 
@@ -89,65 +75,20 @@
 regex_ctrl_create_cq(struct mlx5_regex_priv *priv, struct mlx5_regex_cq *cq)
 {
 	struct mlx5_devx_cq_attr attr = {
-		.q_umem_valid = 1,
-		.db_umem_valid = 1,
-		.eqn = priv->eqn,
+		.uar_page_id = priv->uar->page_id,
 	};
-	struct mlx5_devx_dbr_page *dbr_page = NULL;
-	void *buf = NULL;
-	size_t pgsize = sysconf(_SC_PAGESIZE);
-	uint32_t cq_size = 1 << cq->log_nb_desc;
-	uint32_t i;
-
-	cq->dbr_offset = mlx5_get_dbr(priv->ctx, &priv->dbrpgs, &dbr_page);
-	if (cq->dbr_offset < 0) {
-		DRV_LOG(ERR, "Can't allocate cq door bell record.");
-		rte_errno  = ENOMEM;
-		goto error;
-	}
-	cq->dbr_umem = mlx5_os_get_umem_id(dbr_page->umem);
-	cq->dbr = (uint32_t *)((uintptr_t)dbr_page->dbrs +
-			       (uintptr_t)cq->dbr_offset);
+	int ret;
 
-	buf = rte_calloc(NULL, 1, sizeof(struct mlx5_cqe) * cq_size, 4096);
-	if (!buf) {
-		DRV_LOG(ERR, "Can't allocate cqe buffer.");
-		rte_errno  = ENOMEM;
-		goto error;
-	}
-	cq->cqe = buf;
-	for (i = 0; i < cq_size; i++)
-		cq->cqe[i].op_own = 0xff;
-	cq->cqe_umem = mlx5_glue->devx_umem_reg(priv->ctx, buf,
-						sizeof(struct mlx5_cqe) *
-						cq_size, 7);
 	cq->ci = 0;
-	if (!cq->cqe_umem) {
-		DRV_LOG(ERR, "Can't register cqe mem.");
-		rte_errno  = ENOMEM;
-		goto error;
-	}
-	attr.db_umem_offset = cq->dbr_offset;
-	attr.db_umem_id = cq->dbr_umem;
-	attr.q_umem_id = mlx5_os_get_umem_id(cq->cqe_umem);
-	attr.log_cq_size = cq->log_nb_desc;
-	attr.uar_page_id = priv->uar->page_id;
-	attr.log_page_size = rte_log2_u32(pgsize);
-	cq->obj = mlx5_devx_cmd_create_cq(priv->ctx, &attr);
-	if (!cq->obj) {
-		DRV_LOG(ERR, "Can't create cq object.");
-		rte_errno  = ENOMEM;
-		goto error;
+	ret = mlx5_devx_cq_create(priv->ctx, &cq->cq_obj, cq->log_nb_desc,
+				  &attr, SOCKET_ID_ANY);
+	if (ret) {
+		DRV_LOG(ERR, "Can't create CQ object.");
+		memset(cq, 0, sizeof(*cq));
+		rte_errno = ENOMEM;
+		return -rte_errno;
 	}
 	return 0;
-error:
-	if (cq->cqe_umem)
-		mlx5_glue->devx_umem_dereg(cq->cqe_umem);
-	if (buf)
-		rte_free(buf);
-	if (cq->dbr_offset)
-		mlx5_release_dbr(&priv->dbrpgs, cq->dbr_umem, cq->dbr_offset);
-	return -rte_errno;
 }
 
 #ifdef HAVE_IBV_FLOW_DV_SUPPORT
@@ -232,7 +173,7 @@
 	attr.tis_lst_sz = 0;
 	attr.tis_num = 0;
 	attr.user_index = q_ind;
-	attr.cqn = qp->cq.obj->id;
+	attr.cqn = qp->cq.cq_obj.cq->id;
 	wq_attr->uar_page = priv->uar->page_id;
 	regex_get_pdn(priv->pd, &pd_num);
 	wq_attr->pd = pd_num;
@@ -389,7 +330,7 @@
 err_btree:
 	for (i = 0; i < nb_sq_config; i++)
 		regex_ctrl_destroy_sq(priv, qp, i);
-	regex_ctrl_destroy_cq(priv, &qp->cq);
+	regex_ctrl_destroy_cq(&qp->cq);
 err_cq:
 	rte_free(qp->sqs);
 	return ret;
diff --git a/drivers/regex/mlx5/mlx5_regex_fastpath.c b/drivers/regex/mlx5/mlx5_regex_fastpath.c
index 5857617..255fd40 100644
--- a/drivers/regex/mlx5/mlx5_regex_fastpath.c
+++ b/drivers/regex/mlx5/mlx5_regex_fastpath.c
@@ -224,7 +224,7 @@ struct mlx5_regex_job {
 	size_t next_cqe_offset;
 
 	next_cqe_offset =  (cq->ci & (cq_size_get(cq) - 1));
-	cqe = (volatile struct mlx5_cqe *)(cq->cqe + next_cqe_offset);
+	cqe = (volatile struct mlx5_cqe *)(cq->cq_obj.cqes + next_cqe_offset);
 	rte_io_wmb();
 
 	int ret = check_cqe(cqe, cq_size_get(cq), cq->ci);
@@ -285,7 +285,7 @@ struct mlx5_regex_job {
 		}
 		cq->ci = (cq->ci + 1) & 0xffffff;
 		rte_wmb();
-		cq->dbr[0] = rte_cpu_to_be_32(cq->ci);
+		cq->cq_obj.db_rec[0] = rte_cpu_to_be_32(cq->ci);
 		queue->free_sqs |= (1 << sqid);
 	}
 
-- 
1.8.3.1



* [dpdk-dev] [PATCH 04/17] vdpa/mlx5: move DevX CQ creation to common
  2020-12-17 11:44 [dpdk-dev] [PATCH 00/17] common/mlx5: share DevX resources creations Michael Baum
                   ` (2 preceding siblings ...)
  2020-12-17 11:44 ` [dpdk-dev] [PATCH 03/17] regex/mlx5: move DevX CQ creation to common Michael Baum
@ 2020-12-17 11:44 ` Michael Baum
  2020-12-17 11:44 ` [dpdk-dev] [PATCH 05/17] net/mlx5: move rearm and clock queue " Michael Baum
                   ` (12 subsequent siblings)
  16 siblings, 0 replies; 57+ messages in thread
From: Michael Baum @ 2020-12-17 11:44 UTC (permalink / raw)
  To: dev; +Cc: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko

Use the common function for DevX CQ creation.

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
 drivers/vdpa/mlx5/mlx5_vdpa.h       | 10 +----
 drivers/vdpa/mlx5/mlx5_vdpa_event.c | 81 +++++++++++--------------------------
 drivers/vdpa/mlx5/mlx5_vdpa_virtq.c |  2 +-
 3 files changed, 26 insertions(+), 67 deletions(-)

diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.h b/drivers/vdpa/mlx5/mlx5_vdpa.h
index d039ada..ddee9dc 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa.h
+++ b/drivers/vdpa/mlx5/mlx5_vdpa.h
@@ -22,6 +22,7 @@
 
 #include <mlx5_glue.h>
 #include <mlx5_devx_cmds.h>
+#include <mlx5_common_devx.h>
 #include <mlx5_prm.h>
 
 
@@ -46,13 +47,7 @@ struct mlx5_vdpa_cq {
 	uint32_t armed:1;
 	int callfd;
 	rte_spinlock_t sl;
-	struct mlx5_devx_obj *cq;
-	struct mlx5dv_devx_umem *umem_obj;
-	union {
-		volatile void *umem_buf;
-		volatile struct mlx5_cqe *cqes;
-	};
-	volatile uint32_t *db_rec;
+	struct mlx5_devx_cq cq_obj;
 	uint64_t errors;
 };
 
@@ -144,7 +139,6 @@ struct mlx5_vdpa_priv {
 	uint32_t gpa_mkey_index;
 	struct ibv_mr *null_mr;
 	struct rte_vhost_memory *vmem;
-	uint32_t eqn;
 	struct mlx5dv_devx_event_channel *eventc;
 	struct mlx5dv_devx_event_channel *err_chnl;
 	struct mlx5dv_devx_uar *uar;
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_event.c b/drivers/vdpa/mlx5/mlx5_vdpa_event.c
index 3aeaeb8..ef92338 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa_event.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa_event.c
@@ -7,6 +7,7 @@
 #include <sys/eventfd.h>
 
 #include <rte_malloc.h>
+#include <rte_memory.h>
 #include <rte_errno.h>
 #include <rte_lcore.h>
 #include <rte_atomic.h>
@@ -15,6 +16,7 @@
 #include <rte_alarm.h>
 
 #include <mlx5_common.h>
+#include <mlx5_common_devx.h>
 #include <mlx5_glue.h>
 
 #include "mlx5_vdpa_utils.h"
@@ -47,7 +49,6 @@
 		priv->eventc = NULL;
 	}
 #endif
-	priv->eqn = 0;
 }
 
 /* Prepare all the global resources for all the event objects.*/
@@ -58,11 +59,6 @@
 
 	if (priv->eventc)
 		return 0;
-	if (mlx5_glue->devx_query_eqn(priv->ctx, 0, &priv->eqn)) {
-		rte_errno = errno;
-		DRV_LOG(ERR, "Failed to query EQ number %d.", rte_errno);
-		return -1;
-	}
 	priv->eventc = mlx5_glue->devx_create_event_channel(priv->ctx,
 			   MLX5DV_DEVX_CREATE_EVENT_CHANNEL_FLAGS_OMIT_EV_DATA);
 	if (!priv->eventc) {
@@ -97,12 +93,7 @@
 static void
 mlx5_vdpa_cq_destroy(struct mlx5_vdpa_cq *cq)
 {
-	if (cq->cq)
-		claim_zero(mlx5_devx_cmd_destroy(cq->cq));
-	if (cq->umem_obj)
-		claim_zero(mlx5_glue->devx_umem_dereg(cq->umem_obj));
-	if (cq->umem_buf)
-		rte_free((void *)(uintptr_t)cq->umem_buf);
+	mlx5_devx_cq_destroy(&cq->cq_obj);
 	memset(cq, 0, sizeof(*cq));
 }
 
@@ -112,12 +103,12 @@
 	uint32_t arm_sn = cq->arm_sn << MLX5_CQ_SQN_OFFSET;
 	uint32_t cq_ci = cq->cq_ci & MLX5_CI_MASK;
 	uint32_t doorbell_hi = arm_sn | MLX5_CQ_DBR_CMD_ALL | cq_ci;
-	uint64_t doorbell = ((uint64_t)doorbell_hi << 32) | cq->cq->id;
+	uint64_t doorbell = ((uint64_t)doorbell_hi << 32) | cq->cq_obj.cq->id;
 	uint64_t db_be = rte_cpu_to_be_64(doorbell);
 	uint32_t *addr = RTE_PTR_ADD(priv->uar->base_addr, MLX5_CQ_DOORBELL);
 
 	rte_io_wmb();
-	cq->db_rec[MLX5_CQ_ARM_DB] = rte_cpu_to_be_32(doorbell_hi);
+	cq->cq_obj.db_rec[MLX5_CQ_ARM_DB] = rte_cpu_to_be_32(doorbell_hi);
 	rte_wmb();
 #ifdef RTE_ARCH_64
 	*(uint64_t *)addr = db_be;
@@ -134,49 +125,23 @@
 mlx5_vdpa_cq_create(struct mlx5_vdpa_priv *priv, uint16_t log_desc_n,
 		    int callfd, struct mlx5_vdpa_cq *cq)
 {
-	struct mlx5_devx_cq_attr attr = {0};
-	size_t pgsize = sysconf(_SC_PAGESIZE);
-	uint32_t umem_size;
+	struct mlx5_devx_cq_attr attr = {
+		.use_first_only = 1,
+		.uar_page_id = priv->uar->page_id,
+	};
 	uint16_t event_nums[1] = {0};
-	uint16_t cq_size = 1 << log_desc_n;
 	int ret;
 
-	cq->log_desc_n = log_desc_n;
-	umem_size = sizeof(struct mlx5_cqe) * cq_size + sizeof(*cq->db_rec) * 2;
-	cq->umem_buf = rte_zmalloc(__func__, umem_size, 4096);
-	if (!cq->umem_buf) {
-		DRV_LOG(ERR, "Failed to allocate memory for CQ.");
-		rte_errno = ENOMEM;
-		return -ENOMEM;
-	}
-	cq->umem_obj = mlx5_glue->devx_umem_reg(priv->ctx,
-						(void *)(uintptr_t)cq->umem_buf,
-						umem_size,
-						IBV_ACCESS_LOCAL_WRITE);
-	if (!cq->umem_obj) {
-		DRV_LOG(ERR, "Failed to register umem for CQ.");
-		goto error;
-	}
-	attr.q_umem_valid = 1;
-	attr.db_umem_valid = 1;
-	attr.use_first_only = 1;
-	attr.overrun_ignore = 0;
-	attr.uar_page_id = priv->uar->page_id;
-	attr.q_umem_id = cq->umem_obj->umem_id;
-	attr.q_umem_offset = 0;
-	attr.db_umem_id = cq->umem_obj->umem_id;
-	attr.db_umem_offset = sizeof(struct mlx5_cqe) * cq_size;
-	attr.eqn = priv->eqn;
-	attr.log_cq_size = log_desc_n;
-	attr.log_page_size = rte_log2_u32(pgsize);
-	cq->cq = mlx5_devx_cmd_create_cq(priv->ctx, &attr);
-	if (!cq->cq)
+	ret = mlx5_devx_cq_create(priv->ctx, &cq->cq_obj, log_desc_n, &attr,
+				  SOCKET_ID_ANY);
+	if (ret)
 		goto error;
-	cq->db_rec = RTE_PTR_ADD(cq->umem_buf, (uintptr_t)attr.db_umem_offset);
 	cq->cq_ci = 0;
+	cq->log_desc_n = log_desc_n;
 	rte_spinlock_init(&cq->sl);
 	/* Subscribe CQ event to the event channel controlled by the driver. */
-	ret = mlx5_glue->devx_subscribe_devx_event(priv->eventc, cq->cq->obj,
+	ret = mlx5_glue->devx_subscribe_devx_event(priv->eventc,
+						   cq->cq_obj.cq->obj,
 						   sizeof(event_nums),
 						   event_nums,
 						   (uint64_t)(uintptr_t)cq);
@@ -187,8 +152,8 @@
 	}
 	cq->callfd = callfd;
 	/* Init CQ to ones to be in HW owner in the start. */
-	cq->cqes[0].op_own = MLX5_CQE_OWNER_MASK;
-	cq->cqes[0].wqe_counter = rte_cpu_to_be_16(UINT16_MAX);
+	cq->cq_obj.cqes[0].op_own = MLX5_CQE_OWNER_MASK;
+	cq->cq_obj.cqes[0].wqe_counter = rte_cpu_to_be_16(UINT16_MAX);
 	/* First arming. */
 	mlx5_vdpa_cq_arm(priv, cq);
 	return 0;
@@ -215,7 +180,7 @@
 	uint16_t cur_wqe_counter;
 	uint16_t comp;
 
-	last_word.word = rte_read32(&cq->cqes[0].wqe_counter);
+	last_word.word = rte_read32(&cq->cq_obj.cqes[0].wqe_counter);
 	cur_wqe_counter = rte_be_to_cpu_16(last_word.wqe_counter);
 	comp = cur_wqe_counter + (uint16_t)1 - next_wqe_counter;
 	if (comp) {
@@ -229,7 +194,7 @@
 			cq->errors++;
 		rte_io_wmb();
 		/* Ring CQ doorbell record. */
-		cq->db_rec[0] = rte_cpu_to_be_32(cq->cq_ci);
+		cq->cq_obj.db_rec[0] = rte_cpu_to_be_32(cq->cq_ci);
 		rte_io_wmb();
 		/* Ring SW QP doorbell record. */
 		eqp->db_rec[0] = rte_cpu_to_be_32(cq->cq_ci + cq_size);
@@ -245,7 +210,7 @@
 
 	for (i = 0; i < priv->nr_virtqs; i++) {
 		cq = &priv->virtqs[i].eqp.cq;
-		if (cq->cq && !cq->armed)
+		if (cq->cq_obj.cq && !cq->armed)
 			mlx5_vdpa_cq_arm(priv, cq);
 	}
 }
@@ -290,7 +255,7 @@
 		pthread_mutex_lock(&priv->vq_config_lock);
 		for (i = 0; i < priv->nr_virtqs; i++) {
 			cq = &priv->virtqs[i].eqp.cq;
-			if (cq->cq && !cq->armed) {
+			if (cq->cq_obj.cq && !cq->armed) {
 				uint32_t comp = mlx5_vdpa_cq_poll(cq);
 
 				if (comp) {
@@ -369,7 +334,7 @@
 		DRV_LOG(DEBUG, "Device %s virtq %d cq %d event was captured."
 			" Timer is %s, cq ci is %u.\n",
 			priv->vdev->device->name,
-			(int)virtq->index, cq->cq->id,
+			(int)virtq->index, cq->cq_obj.cq->id,
 			priv->timer_on ? "on" : "off", cq->cq_ci);
 		cq->armed = 0;
 	}
@@ -679,7 +644,7 @@
 		goto error;
 	}
 	attr.uar_index = priv->uar->page_id;
-	attr.cqn = eqp->cq.cq->id;
+	attr.cqn = eqp->cq.cq_obj.cq->id;
 	attr.log_page_size = rte_log2_u32(sysconf(_SC_PAGESIZE));
 	attr.rq_size = 1 << log_desc_n;
 	attr.log_rq_stride = rte_log2_u32(MLX5_WSEG_SIZE);
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
index 3e882e4..cc77314 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
@@ -497,7 +497,7 @@
 		return -1;
 	if (vq.size != virtq->vq_size || vq.kickfd != virtq->intr_handle.fd)
 		return 1;
-	if (virtq->eqp.cq.cq) {
+	if (virtq->eqp.cq.cq_obj.cq) {
 		if (vq.callfd != virtq->eqp.cq.callfd)
 			return 1;
 	} else if (vq.callfd != -1) {
-- 
1.8.3.1



* [dpdk-dev] [PATCH 05/17] net/mlx5: move rearm and clock queue CQ creation to common
  2020-12-17 11:44 [dpdk-dev] [PATCH 00/17] common/mlx5: share DevX resources creations Michael Baum
                   ` (3 preceding siblings ...)
  2020-12-17 11:44 ` [dpdk-dev] [PATCH 04/17] vdpa/mlx5: " Michael Baum
@ 2020-12-17 11:44 ` Michael Baum
  2020-12-17 11:44 ` [dpdk-dev] [PATCH 06/17] net/mlx5: move ASO " Michael Baum
                   ` (11 subsequent siblings)
  16 siblings, 0 replies; 57+ messages in thread
From: Michael Baum @ 2020-12-17 11:44 UTC (permalink / raw)
  To: dev; +Cc: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko

Use the common function for CQ creation of the rearm and clock queues.

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
 drivers/net/mlx5/mlx5.h      |   9 +--
 drivers/net/mlx5/mlx5_rxtx.c |   2 +-
 drivers/net/mlx5/mlx5_txpp.c | 147 +++++++++++--------------------------------
 3 files changed, 40 insertions(+), 118 deletions(-)

diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 121d726..00ccaee 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -26,6 +26,7 @@
 #include <mlx5_prm.h>
 #include <mlx5_common_mp.h>
 #include <mlx5_common_mr.h>
+#include <mlx5_common_devx.h>
 
 #include "mlx5_defs.h"
 #include "mlx5_utils.h"
@@ -612,13 +613,7 @@ struct mlx5_flow_id_pool {
 /* Tx pacing queue structure - for Clock and Rearm queues. */
 struct mlx5_txpp_wq {
 	/* Completion Queue related data.*/
-	struct mlx5_devx_obj *cq;
-	void *cq_umem;
-	union {
-		volatile void *cq_buf;
-		volatile struct mlx5_cqe *cqes;
-	};
-	volatile uint32_t *cq_dbrec;
+	struct mlx5_devx_cq cq_obj;
 	uint32_t cq_ci:24;
 	uint32_t arm_sn:2;
 	/* Send Queue related data.*/
diff --git a/drivers/net/mlx5/mlx5_rxtx.c b/drivers/net/mlx5/mlx5_rxtx.c
index d12d746..dad24a3 100644
--- a/drivers/net/mlx5/mlx5_rxtx.c
+++ b/drivers/net/mlx5/mlx5_rxtx.c
@@ -2277,7 +2277,7 @@ enum mlx5_txcmp_code {
 
 	qs = RTE_PTR_ADD(wqe, MLX5_WSEG_SIZE);
 	qs->max_index = rte_cpu_to_be_32(wci);
-	qs->qpn_cqn = rte_cpu_to_be_32(txq->sh->txpp.clock_queue.cq->id);
+	qs->qpn_cqn = rte_cpu_to_be_32(txq->sh->txpp.clock_queue.cq_obj.cq->id);
 	qs->reserved0 = RTE_BE32(0);
 	qs->reserved1 = RTE_BE32(0);
 }
diff --git a/drivers/net/mlx5/mlx5_txpp.c b/drivers/net/mlx5/mlx5_txpp.c
index 2438bf1..54ea572 100644
--- a/drivers/net/mlx5/mlx5_txpp.c
+++ b/drivers/net/mlx5/mlx5_txpp.c
@@ -13,6 +13,7 @@
 #include <rte_eal_paging.h>
 
 #include <mlx5_malloc.h>
+#include <mlx5_common_devx.h>
 
 #include "mlx5.h"
 #include "mlx5_rxtx.h"
@@ -126,12 +127,7 @@
 		claim_zero(mlx5_glue->devx_umem_dereg(wq->sq_umem));
 	if (wq->sq_buf)
 		mlx5_free((void *)(uintptr_t)wq->sq_buf);
-	if (wq->cq)
-		claim_zero(mlx5_devx_cmd_destroy(wq->cq));
-	if (wq->cq_umem)
-		claim_zero(mlx5_glue->devx_umem_dereg(wq->cq_umem));
-	if (wq->cq_buf)
-		mlx5_free((void *)(uintptr_t)wq->cq_buf);
+	mlx5_devx_cq_destroy(&wq->cq_obj);
 	memset(wq, 0, sizeof(*wq));
 }
 
@@ -181,19 +177,6 @@
 }
 
 static void
-mlx5_txpp_fill_cqe_rearm_queue(struct mlx5_dev_ctx_shared *sh)
-{
-	struct mlx5_txpp_wq *wq = &sh->txpp.rearm_queue;
-	struct mlx5_cqe *cqe = (struct mlx5_cqe *)(uintptr_t)wq->cqes;
-	uint32_t i;
-
-	for (i = 0; i < MLX5_TXPP_REARM_CQ_SIZE; i++) {
-		cqe->op_own = (MLX5_CQE_INVALID << 4) | MLX5_CQE_OWNER_MASK;
-		++cqe;
-	}
-}
-
-static void
 mlx5_txpp_fill_wqe_rearm_queue(struct mlx5_dev_ctx_shared *sh)
 {
 	struct mlx5_txpp_wq *wq = &sh->txpp.rearm_queue;
@@ -228,7 +211,8 @@
 		index = (i * MLX5_TXPP_REARM / 2 + MLX5_TXPP_REARM / 2) &
 			((1 << MLX5_CQ_INDEX_WIDTH) - 1);
 		qs->max_index = rte_cpu_to_be_32(index);
-		qs->qpn_cqn = rte_cpu_to_be_32(sh->txpp.clock_queue.cq->id);
+		qs->qpn_cqn =
+			   rte_cpu_to_be_32(sh->txpp.clock_queue.cq_obj.cq->id);
 	}
 }
 
@@ -238,7 +222,11 @@
 {
 	struct mlx5_devx_create_sq_attr sq_attr = { 0 };
 	struct mlx5_devx_modify_sq_attr msq_attr = { 0 };
-	struct mlx5_devx_cq_attr cq_attr = { 0 };
+	struct mlx5_devx_cq_attr cq_attr = {
+		.cqe_size = (sizeof(struct mlx5_cqe) == 128) ?
+					 MLX5_CQE_SIZE_128B : MLX5_CQE_SIZE_64B,
+		.uar_page_id = mlx5_os_get_devx_uar_page_id(sh->tx_uar),
+	};
 	struct mlx5_txpp_wq *wq = &sh->txpp.rearm_queue;
 	size_t page_size;
 	uint32_t umem_size, umem_dbrec;
@@ -249,50 +237,16 @@
 		DRV_LOG(ERR, "Failed to get mem page size");
 		return -ENOMEM;
 	}
-	/* Allocate memory buffer for CQEs and doorbell record. */
-	umem_size = sizeof(struct mlx5_cqe) * MLX5_TXPP_REARM_CQ_SIZE;
-	umem_dbrec = RTE_ALIGN(umem_size, MLX5_DBR_SIZE);
-	umem_size += MLX5_DBR_SIZE;
-	wq->cq_buf = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO, umem_size,
-				 page_size, sh->numa_node);
-	if (!wq->cq_buf) {
-		DRV_LOG(ERR, "Failed to allocate memory for Rearm Queue.");
-		return -ENOMEM;
-	}
-	/* Register allocated buffer in user space with DevX. */
-	wq->cq_umem = mlx5_glue->devx_umem_reg(sh->ctx,
-					       (void *)(uintptr_t)wq->cq_buf,
-					       umem_size,
-					       IBV_ACCESS_LOCAL_WRITE);
-	if (!wq->cq_umem) {
-		rte_errno = errno;
-		DRV_LOG(ERR, "Failed to register umem for Rearm Queue.");
-		goto error;
-	}
 	/* Create completion queue object for Rearm Queue. */
-	cq_attr.cqe_size = (sizeof(struct mlx5_cqe) == 128) ?
-			    MLX5_CQE_SIZE_128B : MLX5_CQE_SIZE_64B;
-	cq_attr.uar_page_id = mlx5_os_get_devx_uar_page_id(sh->tx_uar);
-	cq_attr.eqn = sh->eqn;
-	cq_attr.q_umem_valid = 1;
-	cq_attr.q_umem_offset = 0;
-	cq_attr.q_umem_id = mlx5_os_get_umem_id(wq->cq_umem);
-	cq_attr.db_umem_valid = 1;
-	cq_attr.db_umem_offset = umem_dbrec;
-	cq_attr.db_umem_id = mlx5_os_get_umem_id(wq->cq_umem);
-	cq_attr.log_cq_size = rte_log2_u32(MLX5_TXPP_REARM_CQ_SIZE);
-	cq_attr.log_page_size = rte_log2_u32(page_size);
-	wq->cq = mlx5_devx_cmd_create_cq(sh->ctx, &cq_attr);
-	if (!wq->cq) {
-		rte_errno = errno;
+	ret = mlx5_devx_cq_create(sh->ctx, &wq->cq_obj,
+				  log2above(MLX5_TXPP_REARM_CQ_SIZE), &cq_attr,
+				  sh->numa_node);
+	if (ret) {
 		DRV_LOG(ERR, "Failed to create CQ for Rearm Queue.");
-		goto error;
+		return ret;
 	}
-	wq->cq_dbrec = RTE_PTR_ADD(wq->cq_buf, umem_dbrec);
 	wq->cq_ci = 0;
 	wq->arm_sn = 0;
-	/* Mark all CQEs initially as invalid. */
-	mlx5_txpp_fill_cqe_rearm_queue(sh);
 	/*
 	 * Allocate memory buffer for Send Queue WQEs.
 	 * There should be no WQE leftovers in the cyclic queue.
@@ -323,7 +277,7 @@
 	sq_attr.state = MLX5_SQC_STATE_RST;
 	sq_attr.tis_lst_sz = 1;
 	sq_attr.tis_num = sh->tis->id;
-	sq_attr.cqn = wq->cq->id;
+	sq_attr.cqn = wq->cq_obj.cq->id;
 	sq_attr.cd_master = 1;
 	sq_attr.wq_attr.uar_page = mlx5_os_get_devx_uar_page_id(sh->tx_uar);
 	sq_attr.wq_attr.wq_type = MLX5_WQ_TYPE_CYCLIC;
@@ -466,7 +420,13 @@
 {
 	struct mlx5_devx_create_sq_attr sq_attr = { 0 };
 	struct mlx5_devx_modify_sq_attr msq_attr = { 0 };
-	struct mlx5_devx_cq_attr cq_attr = { 0 };
+	struct mlx5_devx_cq_attr cq_attr = {
+		.cqe_size = (sizeof(struct mlx5_cqe) == 128) ?
+					 MLX5_CQE_SIZE_128B : MLX5_CQE_SIZE_64B,
+		.use_first_only = 1,
+		.overrun_ignore = 1,
+		.uar_page_id = mlx5_os_get_devx_uar_page_id(sh->tx_uar),
+	};
 	struct mlx5_txpp_wq *wq = &sh->txpp.clock_queue;
 	size_t page_size;
 	uint32_t umem_size, umem_dbrec;
@@ -487,48 +447,14 @@
 	}
 	sh->txpp.ts_p = 0;
 	sh->txpp.ts_n = 0;
-	/* Allocate memory buffer for CQEs and doorbell record. */
-	umem_size = sizeof(struct mlx5_cqe) * MLX5_TXPP_CLKQ_SIZE;
-	umem_dbrec = RTE_ALIGN(umem_size, MLX5_DBR_SIZE);
-	umem_size += MLX5_DBR_SIZE;
-	wq->cq_buf = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO, umem_size,
-					page_size, sh->numa_node);
-	if (!wq->cq_buf) {
-		DRV_LOG(ERR, "Failed to allocate memory for Clock Queue.");
-		return -ENOMEM;
-	}
-	/* Register allocated buffer in user space with DevX. */
-	wq->cq_umem = mlx5_glue->devx_umem_reg(sh->ctx,
-					       (void *)(uintptr_t)wq->cq_buf,
-					       umem_size,
-					       IBV_ACCESS_LOCAL_WRITE);
-	if (!wq->cq_umem) {
-		rte_errno = errno;
-		DRV_LOG(ERR, "Failed to register umem for Clock Queue.");
-		goto error;
-	}
 	/* Create completion queue object for Clock Queue. */
-	cq_attr.cqe_size = (sizeof(struct mlx5_cqe) == 128) ?
-			    MLX5_CQE_SIZE_128B : MLX5_CQE_SIZE_64B;
-	cq_attr.use_first_only = 1;
-	cq_attr.overrun_ignore = 1;
-	cq_attr.uar_page_id = mlx5_os_get_devx_uar_page_id(sh->tx_uar);
-	cq_attr.eqn = sh->eqn;
-	cq_attr.q_umem_valid = 1;
-	cq_attr.q_umem_offset = 0;
-	cq_attr.q_umem_id = mlx5_os_get_umem_id(wq->cq_umem);
-	cq_attr.db_umem_valid = 1;
-	cq_attr.db_umem_offset = umem_dbrec;
-	cq_attr.db_umem_id = mlx5_os_get_umem_id(wq->cq_umem);
-	cq_attr.log_cq_size = rte_log2_u32(MLX5_TXPP_CLKQ_SIZE);
-	cq_attr.log_page_size = rte_log2_u32(page_size);
-	wq->cq = mlx5_devx_cmd_create_cq(sh->ctx, &cq_attr);
-	if (!wq->cq) {
-		rte_errno = errno;
+	ret = mlx5_devx_cq_create(sh->ctx, &wq->cq_obj,
+				  log2above(MLX5_TXPP_CLKQ_SIZE), &cq_attr,
+				  sh->numa_node);
+	if (ret) {
 		DRV_LOG(ERR, "Failed to create CQ for Clock Queue.");
 		goto error;
 	}
-	wq->cq_dbrec = RTE_PTR_ADD(wq->cq_buf, umem_dbrec);
 	wq->cq_ci = 0;
 	/* Allocate memory buffer for Send Queue WQEs. */
 	if (sh->txpp.test) {
@@ -574,7 +500,7 @@
 		sq_attr.static_sq_wq = 1;
 	}
 	sq_attr.state = MLX5_SQC_STATE_RST;
-	sq_attr.cqn = wq->cq->id;
+	sq_attr.cqn = wq->cq_obj.cq->id;
 	sq_attr.packet_pacing_rate_limit_index = sh->txpp.pp_id;
 	sq_attr.wq_attr.cd_slave = 1;
 	sq_attr.wq_attr.uar_page = mlx5_os_get_devx_uar_page_id(sh->tx_uar);
@@ -625,12 +551,13 @@
 	struct mlx5_txpp_wq *aq = &sh->txpp.rearm_queue;
 	uint32_t arm_sn = aq->arm_sn << MLX5_CQ_SQN_OFFSET;
 	uint32_t db_hi = arm_sn | MLX5_CQ_DBR_CMD_ALL | aq->cq_ci;
-	uint64_t db_be = rte_cpu_to_be_64(((uint64_t)db_hi << 32) | aq->cq->id);
+	uint64_t db_be =
+		rte_cpu_to_be_64(((uint64_t)db_hi << 32) | aq->cq_obj.cq->id);
 	base_addr = mlx5_os_get_devx_uar_base_addr(sh->tx_uar);
 	uint32_t *addr = RTE_PTR_ADD(base_addr, MLX5_CQ_DOORBELL);
 
 	rte_compiler_barrier();
-	aq->cq_dbrec[MLX5_CQ_ARM_DB] = rte_cpu_to_be_32(db_hi);
+	aq->cq_obj.db_rec[MLX5_CQ_ARM_DB] = rte_cpu_to_be_32(db_hi);
 	rte_wmb();
 #ifdef RTE_ARCH_64
 	*(uint64_t *)addr = db_be;
@@ -728,7 +655,7 @@
 mlx5_txpp_update_timestamp(struct mlx5_dev_ctx_shared *sh)
 {
 	struct mlx5_txpp_wq *wq = &sh->txpp.clock_queue;
-	struct mlx5_cqe *cqe = (struct mlx5_cqe *)(uintptr_t)wq->cqes;
+	struct mlx5_cqe *cqe = (struct mlx5_cqe *)(uintptr_t)wq->cq_obj.cqes;
 	union {
 		rte_int128_t u128;
 		struct mlx5_cqe_ts cts;
@@ -809,7 +736,7 @@
 	do {
 		volatile struct mlx5_cqe *cqe;
 
-		cqe = &wq->cqes[cq_ci & (MLX5_TXPP_REARM_CQ_SIZE - 1)];
+		cqe = &wq->cq_obj.cqes[cq_ci & (MLX5_TXPP_REARM_CQ_SIZE - 1)];
 		ret = check_cqe(cqe, MLX5_TXPP_REARM_CQ_SIZE, cq_ci);
 		switch (ret) {
 		case MLX5_CQE_STATUS_ERR:
@@ -841,7 +768,7 @@
 		}
 		/* Update doorbell record to notify hardware. */
 		rte_compiler_barrier();
-		*wq->cq_dbrec = rte_cpu_to_be_32(cq_ci);
+		*wq->cq_obj.db_rec = rte_cpu_to_be_32(cq_ci);
 		rte_wmb();
 		wq->cq_ci = cq_ci;
 		/* Fire new requests to Rearm Queue. */
@@ -936,9 +863,8 @@
 	}
 	/* Subscribe CQ event to the event channel controlled by the driver. */
 	ret = mlx5_glue->devx_subscribe_devx_event(sh->txpp.echan,
-						   sh->txpp.rearm_queue.cq->obj,
-						   sizeof(event_nums),
-						   event_nums, 0);
+					    sh->txpp.rearm_queue.cq_obj.cq->obj,
+					     sizeof(event_nums), event_nums, 0);
 	if (ret) {
 		DRV_LOG(ERR, "Failed to subscribe CQE event.");
 		rte_errno = errno;
@@ -1140,7 +1066,8 @@
 
 	if (sh->txpp.refcnt) {
 		struct mlx5_txpp_wq *wq = &sh->txpp.clock_queue;
-		struct mlx5_cqe *cqe = (struct mlx5_cqe *)(uintptr_t)wq->cqes;
+		struct mlx5_cqe *cqe =
+				(struct mlx5_cqe *)(uintptr_t)wq->cq_obj.cqes;
 		union {
 			rte_int128_t u128;
 			struct mlx5_cqe_ts cts;
-- 
1.8.3.1



* [dpdk-dev] [PATCH 06/17] net/mlx5: move ASO CQ creation to common
  2020-12-17 11:44 [dpdk-dev] [PATCH 00/17] common/mlx5: share DevX resources creations Michael Baum
                   ` (4 preceding siblings ...)
  2020-12-17 11:44 ` [dpdk-dev] [PATCH 05/17] net/mlx5: move rearm and clock queue " Michael Baum
@ 2020-12-17 11:44 ` Michael Baum
  2020-12-17 11:44 ` [dpdk-dev] [PATCH 07/17] net/mlx5: move Tx " Michael Baum
                   ` (10 subsequent siblings)
  16 siblings, 0 replies; 57+ messages in thread
From: Michael Baum @ 2020-12-17 11:44 UTC (permalink / raw)
  To: dev; +Cc: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko

Use the common function for ASO CQ creation.

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
 drivers/net/mlx5/mlx5.h          |  8 +---
 drivers/net/mlx5/mlx5_flow_age.c | 81 +++++++++-------------------------------
 2 files changed, 19 insertions(+), 70 deletions(-)

diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 00ccaee..e02faed 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -463,13 +463,7 @@ struct mlx5_flow_counter_mng {
 struct mlx5_aso_cq {
 	uint16_t log_desc_n;
 	uint32_t cq_ci:24;
-	struct mlx5_devx_obj *cq;
-	struct mlx5dv_devx_umem *umem_obj;
-	union {
-		volatile void *umem_buf;
-		volatile struct mlx5_cqe *cqes;
-	};
-	volatile uint32_t *db_rec;
+	struct mlx5_devx_cq cq_obj;
 	uint64_t errors;
 };
 
diff --git a/drivers/net/mlx5/mlx5_flow_age.c b/drivers/net/mlx5/mlx5_flow_age.c
index 0ea61be..60a8d2a 100644
--- a/drivers/net/mlx5/mlx5_flow_age.c
+++ b/drivers/net/mlx5/mlx5_flow_age.c
@@ -7,10 +7,12 @@
 
 #include <mlx5_malloc.h>
 #include <mlx5_common_os.h>
+#include <mlx5_common_devx.h>
 
 #include "mlx5.h"
 #include "mlx5_flow.h"
 
+
 /**
  * Destroy Completion Queue used for ASO access.
  *
@@ -20,12 +22,8 @@
 static void
 mlx5_aso_cq_destroy(struct mlx5_aso_cq *cq)
 {
-	if (cq->cq)
-		claim_zero(mlx5_devx_cmd_destroy(cq->cq));
-	if (cq->umem_obj)
-		claim_zero(mlx5_glue->devx_umem_dereg(cq->umem_obj));
-	if (cq->umem_buf)
-		mlx5_free((void *)(uintptr_t)cq->umem_buf);
+	if (cq->cq_obj.cq)
+		mlx5_devx_cq_destroy(&cq->cq_obj);
 	memset(cq, 0, sizeof(*cq));
 }
 
@@ -42,60 +40,21 @@
  *   Socket to use for allocation.
  * @param[in] uar_page_id
  *   UAR page ID to use.
- * @param[in] eqn
- *   EQ number.
  *
  * @return
  *   0 on success, a negative errno value otherwise and rte_errno is set.
  */
 static int
 mlx5_aso_cq_create(void *ctx, struct mlx5_aso_cq *cq, uint16_t log_desc_n,
-		   int socket, int uar_page_id, uint32_t eqn)
+		   int socket, int uar_page_id)
 {
-	struct mlx5_devx_cq_attr attr = { 0 };
-	size_t pgsize = sysconf(_SC_PAGESIZE);
-	uint32_t umem_size;
-	uint16_t cq_size = 1 << log_desc_n;
+	struct mlx5_devx_cq_attr attr = {
+		.uar_page_id = uar_page_id,
+	};
 
 	cq->log_desc_n = log_desc_n;
-	umem_size = sizeof(struct mlx5_cqe) * cq_size + sizeof(*cq->db_rec) * 2;
-	cq->umem_buf = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO, umem_size,
-				   4096, socket);
-	if (!cq->umem_buf) {
-		DRV_LOG(ERR, "Failed to allocate memory for CQ.");
-		rte_errno = ENOMEM;
-		return -ENOMEM;
-	}
-	cq->umem_obj = mlx5_glue->devx_umem_reg(ctx,
-						(void *)(uintptr_t)cq->umem_buf,
-						umem_size,
-						IBV_ACCESS_LOCAL_WRITE);
-	if (!cq->umem_obj) {
-		DRV_LOG(ERR, "Failed to register umem for aso CQ.");
-		goto error;
-	}
-	attr.q_umem_valid = 1;
-	attr.db_umem_valid = 1;
-	attr.use_first_only = 0;
-	attr.overrun_ignore = 0;
-	attr.uar_page_id = uar_page_id;
-	attr.q_umem_id = mlx5_os_get_umem_id(cq->umem_obj);
-	attr.q_umem_offset = 0;
-	attr.db_umem_id = attr.q_umem_id;
-	attr.db_umem_offset = sizeof(struct mlx5_cqe) * cq_size;
-	attr.eqn = eqn;
-	attr.log_cq_size = log_desc_n;
-	attr.log_page_size = rte_log2_u32(pgsize);
-	cq->cq = mlx5_devx_cmd_create_cq(ctx, &attr);
-	if (!cq->cq)
-		goto error;
-	cq->db_rec = RTE_PTR_ADD(cq->umem_buf, (uintptr_t)attr.db_umem_offset);
 	cq->cq_ci = 0;
-	memset((void *)(uintptr_t)cq->umem_buf, 0xFF, attr.db_umem_offset);
-	return 0;
-error:
-	mlx5_aso_cq_destroy(cq);
-	return -1;
+	return mlx5_devx_cq_create(ctx, &cq->cq_obj, log_desc_n, &attr, socket);
 }
 
 /**
@@ -194,8 +153,7 @@
 		mlx5_devx_cmd_destroy(sq->sq);
 		sq->sq = NULL;
 	}
-	if (sq->cq.cq)
-		mlx5_aso_cq_destroy(&sq->cq);
+	mlx5_aso_cq_destroy(&sq->cq);
 	mlx5_aso_devx_dereg_mr(&sq->mr);
 	memset(sq, 0, sizeof(*sq));
 }
@@ -246,8 +204,6 @@
  *   User Access Region object.
  * @param[in] pdn
  *   Protection Domain number to use.
- * @param[in] eqn
- *   EQ number.
  * @param[in] log_desc_n
  *   Log of number of descriptors in queue.
  *
@@ -257,7 +213,7 @@
 static int
 mlx5_aso_sq_create(void *ctx, struct mlx5_aso_sq *sq, int socket,
 		   struct mlx5dv_devx_uar *uar, uint32_t pdn,
-		   uint32_t eqn,  uint16_t log_desc_n)
+		   uint16_t log_desc_n)
 {
 	struct mlx5_devx_create_sq_attr attr = { 0 };
 	struct mlx5_devx_modify_sq_attr modify_attr = { 0 };
@@ -271,7 +227,7 @@
 				 sq_desc_n, &sq->mr, socket, pdn))
 		return -1;
 	if (mlx5_aso_cq_create(ctx, &sq->cq, log_desc_n, socket,
-				mlx5_os_get_devx_uar_page_id(uar), eqn))
+			       mlx5_os_get_devx_uar_page_id(uar)))
 		goto error;
 	sq->log_desc_n = log_desc_n;
 	sq->umem_buf = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO, wq_size +
@@ -295,7 +251,7 @@
 	attr.tis_lst_sz = 0;
 	attr.tis_num = 0;
 	attr.user_index = 0xFFFF;
-	attr.cqn = sq->cq.cq->id;
+	attr.cqn = sq->cq.cq_obj.cq->id;
 	wq_attr->uar_page = mlx5_os_get_devx_uar_page_id(uar);
 	wq_attr->pd = pdn;
 	wq_attr->wq_type = MLX5_WQ_TYPE_CYCLIC;
@@ -347,8 +303,7 @@
 mlx5_aso_queue_init(struct mlx5_dev_ctx_shared *sh)
 {
 	return mlx5_aso_sq_create(sh->ctx, &sh->aso_age_mng->aso_sq, 0,
-				  sh->tx_uar, sh->pdn, sh->eqn,
-				  MLX5_ASO_QUEUE_LOG_DESC);
+				  sh->tx_uar, sh->pdn, MLX5_ASO_QUEUE_LOG_DESC);
 }
 
 /**
@@ -458,7 +413,7 @@
 	struct mlx5_aso_cq *cq = &sq->cq;
 	uint32_t idx = cq->cq_ci & ((1 << cq->log_desc_n) - 1);
 	volatile struct mlx5_err_cqe *cqe =
-				(volatile struct mlx5_err_cqe *)&cq->cqes[idx];
+			(volatile struct mlx5_err_cqe *)&cq->cq_obj.cqes[idx];
 
 	cq->errors++;
 	idx = rte_be_to_cpu_16(cqe->wqe_counter) & (1u << sq->log_desc_n);
@@ -571,8 +526,8 @@
 	do {
 		idx = next_idx;
 		next_idx = (cq->cq_ci + 1) & mask;
-		rte_prefetch0(&cq->cqes[next_idx]);
-		cqe = &cq->cqes[idx];
+		rte_prefetch0(&cq->cq_obj.cqes[next_idx]);
+		cqe = &cq->cq_obj.cqes[idx];
 		ret = check_cqe(cqe, cq_size, cq->cq_ci);
 		/*
 		 * Be sure owner read is done before any other cookie field or
@@ -592,7 +547,7 @@
 		mlx5_aso_age_action_update(sh, i);
 		sq->tail += i;
 		rte_io_wmb();
-		cq->db_rec[0] = rte_cpu_to_be_32(cq->cq_ci);
+		cq->cq_obj.db_rec[0] = rte_cpu_to_be_32(cq->cq_ci);
 	}
 	return i;
 }
-- 
1.8.3.1



* [dpdk-dev] [PATCH 07/17] net/mlx5: move Tx CQ creation to common
  2020-12-17 11:44 [dpdk-dev] [PATCH 00/17] common/mlx5: share DevX resources creations Michael Baum
                   ` (5 preceding siblings ...)
  2020-12-17 11:44 ` [dpdk-dev] [PATCH 06/17] net/mlx5: move ASO " Michael Baum
@ 2020-12-17 11:44 ` Michael Baum
  2020-12-17 11:44 ` [dpdk-dev] [PATCH 08/17] net/mlx5: move Rx " Michael Baum
                   ` (9 subsequent siblings)
  16 siblings, 0 replies; 57+ messages in thread
From: Michael Baum @ 2020-12-17 11:44 UTC (permalink / raw)
  To: dev; +Cc: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko

Use the common function for Tx CQ creation.

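As a worked example of the CQ sizing below, assuming
MLX5_TX_COMP_THRESH == 32 and MLX5_TX_COMP_THRESH_INLINE_DIV == 8
(illustrative values only), a Tx queue with 2^10 = 1024 elements gives:

	cqe_n = 1024 / 32 + 1 + 8;     /* = 41 */
	log_desc_n = log2above(cqe_n); /* = 6 */
	cqe_n = 1UL << log_desc_n;     /* = 64 CQEs, well under UINT16_MAX */
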
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
 drivers/net/mlx5/mlx5.h      |   6 +-
 drivers/net/mlx5/mlx5_devx.c | 182 +++++++------------------------------------
 2 files changed, 31 insertions(+), 157 deletions(-)

diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index e02faed..2e75498 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -843,11 +843,7 @@ struct mlx5_txq_obj {
 		};
 		struct {
 			struct rte_eth_dev *dev;
-			struct mlx5_devx_obj *cq_devx;
-			void *cq_umem;
-			void *cq_buf;
-			int64_t cq_dbrec_offset;
-			struct mlx5_devx_dbr_page *cq_dbrec_page;
+			struct mlx5_devx_cq cq_obj;
 			struct mlx5_devx_obj *sq_devx;
 			void *sq_umem;
 			void *sq_buf;
diff --git a/drivers/net/mlx5/mlx5_devx.c b/drivers/net/mlx5/mlx5_devx.c
index de9b204..9560f2b 100644
--- a/drivers/net/mlx5/mlx5_devx.c
+++ b/drivers/net/mlx5/mlx5_devx.c
@@ -15,6 +15,7 @@
 
 #include <mlx5_glue.h>
 #include <mlx5_devx_cmds.h>
+#include <mlx5_common_devx.h>
 #include <mlx5_malloc.h>
 
 #include "mlx5.h"
@@ -1144,28 +1145,6 @@
 }
 
 /**
- * Release DevX Tx CQ resources.
- *
- * @param txq_obj
- *   DevX Tx queue object.
- */
-static void
-mlx5_txq_release_devx_cq_resources(struct mlx5_txq_obj *txq_obj)
-{
-	if (txq_obj->cq_devx)
-		claim_zero(mlx5_devx_cmd_destroy(txq_obj->cq_devx));
-	if (txq_obj->cq_umem)
-		claim_zero(mlx5_glue->devx_umem_dereg(txq_obj->cq_umem));
-	if (txq_obj->cq_buf)
-		mlx5_free(txq_obj->cq_buf);
-	if (txq_obj->cq_dbrec_page)
-		claim_zero(mlx5_release_dbr(&txq_obj->txq_ctrl->priv->dbrpgs,
-					    mlx5_os_get_umem_id
-						 (txq_obj->cq_dbrec_page->umem),
-					    txq_obj->cq_dbrec_offset));
-}
-
-/**
  * Destroy the Tx queue DevX object.
  *
  * @param txq_obj
@@ -1175,126 +1154,8 @@
 mlx5_txq_release_devx_resources(struct mlx5_txq_obj *txq_obj)
 {
 	mlx5_txq_release_devx_sq_resources(txq_obj);
-	mlx5_txq_release_devx_cq_resources(txq_obj);
-}
-
-/**
- * Create a DevX CQ object and its resources for an Tx queue.
- *
- * @param dev
- *   Pointer to Ethernet device.
- * @param idx
- *   Queue index in DPDK Tx queue array.
- *
- * @return
- *   Number of CQEs in CQ, 0 otherwise and rte_errno is set.
- */
-static uint32_t
-mlx5_txq_create_devx_cq_resources(struct rte_eth_dev *dev, uint16_t idx)
-{
-	struct mlx5_priv *priv = dev->data->dev_private;
-	struct mlx5_txq_data *txq_data = (*priv->txqs)[idx];
-	struct mlx5_txq_ctrl *txq_ctrl =
-			container_of(txq_data, struct mlx5_txq_ctrl, txq);
-	struct mlx5_txq_obj *txq_obj = txq_ctrl->obj;
-	struct mlx5_devx_cq_attr cq_attr = { 0 };
-	struct mlx5_cqe *cqe;
-	size_t page_size;
-	size_t alignment;
-	uint32_t cqe_n;
-	uint32_t i;
-	int ret;
-
-	MLX5_ASSERT(txq_data);
-	MLX5_ASSERT(txq_obj);
-	page_size = rte_mem_page_size();
-	if (page_size == (size_t)-1) {
-		DRV_LOG(ERR, "Failed to get mem page size.");
-		rte_errno = ENOMEM;
-		return 0;
-	}
-	/* Allocate memory buffer for CQEs. */
-	alignment = MLX5_CQE_BUF_ALIGNMENT;
-	if (alignment == (size_t)-1) {
-		DRV_LOG(ERR, "Failed to get CQE buf alignment.");
-		rte_errno = ENOMEM;
-		return 0;
-	}
-	/* Create the Completion Queue. */
-	cqe_n = (1UL << txq_data->elts_n) / MLX5_TX_COMP_THRESH +
-		1 + MLX5_TX_COMP_THRESH_INLINE_DIV;
-	cqe_n = 1UL << log2above(cqe_n);
-	if (cqe_n > UINT16_MAX) {
-		DRV_LOG(ERR,
-			"Port %u Tx queue %u requests to many CQEs %u.",
-			dev->data->port_id, txq_data->idx, cqe_n);
-		rte_errno = EINVAL;
-		return 0;
-	}
-	txq_obj->cq_buf = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO,
-				      cqe_n * sizeof(struct mlx5_cqe),
-				      alignment,
-				      priv->sh->numa_node);
-	if (!txq_obj->cq_buf) {
-		DRV_LOG(ERR,
-			"Port %u Tx queue %u cannot allocate memory (CQ).",
-			dev->data->port_id, txq_data->idx);
-		rte_errno = ENOMEM;
-		return 0;
-	}
-	/* Register allocated buffer in user space with DevX. */
-	txq_obj->cq_umem = mlx5_glue->devx_umem_reg(priv->sh->ctx,
-						(void *)txq_obj->cq_buf,
-						cqe_n * sizeof(struct mlx5_cqe),
-						IBV_ACCESS_LOCAL_WRITE);
-	if (!txq_obj->cq_umem) {
-		rte_errno = errno;
-		DRV_LOG(ERR,
-			"Port %u Tx queue %u cannot register memory (CQ).",
-			dev->data->port_id, txq_data->idx);
-		goto error;
-	}
-	/* Allocate doorbell record for completion queue. */
-	txq_obj->cq_dbrec_offset = mlx5_get_dbr(priv->sh->ctx,
-						&priv->dbrpgs,
-						&txq_obj->cq_dbrec_page);
-	if (txq_obj->cq_dbrec_offset < 0) {
-		rte_errno = errno;
-		DRV_LOG(ERR, "Failed to allocate CQ door-bell.");
-		goto error;
-	}
-	cq_attr.cqe_size = (sizeof(struct mlx5_cqe) == 128) ?
-			    MLX5_CQE_SIZE_128B : MLX5_CQE_SIZE_64B;
-	cq_attr.uar_page_id = mlx5_os_get_devx_uar_page_id(priv->sh->tx_uar);
-	cq_attr.eqn = priv->sh->eqn;
-	cq_attr.q_umem_valid = 1;
-	cq_attr.q_umem_offset = (uintptr_t)txq_obj->cq_buf % page_size;
-	cq_attr.q_umem_id = mlx5_os_get_umem_id(txq_obj->cq_umem);
-	cq_attr.db_umem_valid = 1;
-	cq_attr.db_umem_offset = txq_obj->cq_dbrec_offset;
-	cq_attr.db_umem_id = mlx5_os_get_umem_id(txq_obj->cq_dbrec_page->umem);
-	cq_attr.log_cq_size = rte_log2_u32(cqe_n);
-	cq_attr.log_page_size = rte_log2_u32(page_size);
-	/* Create completion queue object with DevX. */
-	txq_obj->cq_devx = mlx5_devx_cmd_create_cq(priv->sh->ctx, &cq_attr);
-	if (!txq_obj->cq_devx) {
-		rte_errno = errno;
-		DRV_LOG(ERR, "Port %u Tx queue %u CQ creation failure.",
-			dev->data->port_id, idx);
-		goto error;
-	}
-	/* Initial fill CQ buffer with invalid CQE opcode. */
-	cqe = (struct mlx5_cqe *)txq_obj->cq_buf;
-	for (i = 0; i < cqe_n; i++) {
-		cqe->op_own = (MLX5_CQE_INVALID << 4) | MLX5_CQE_OWNER_MASK;
-		++cqe;
-	}
-	return cqe_n;
-error:
-	ret = rte_errno;
-	mlx5_txq_release_devx_cq_resources(txq_obj);
-	rte_errno = ret;
-	return 0;
+	mlx5_devx_cq_destroy(&txq_obj->cq_obj);
+	memset(&txq_obj->cq_obj, 0, sizeof(txq_obj->cq_obj));
 }
 
 /**
@@ -1366,7 +1227,7 @@
 	sq_attr.tis_lst_sz = 1;
 	sq_attr.tis_num = priv->sh->tis->id;
 	sq_attr.state = MLX5_SQC_STATE_RST;
-	sq_attr.cqn = txq_obj->cq_devx->id;
+	sq_attr.cqn = txq_obj->cq_obj.cq->id;
 	sq_attr.flush_in_error_en = 1;
 	sq_attr.allow_multi_pkt_send_wqe = !!priv->config.mps;
 	sq_attr.allow_swp = !!priv->config.swp;
@@ -1430,8 +1291,13 @@
 #else
 	struct mlx5_dev_ctx_shared *sh = priv->sh;
 	struct mlx5_txq_obj *txq_obj = txq_ctrl->obj;
+	struct mlx5_devx_cq_attr cq_attr = {
+		.uar_page_id = mlx5_os_get_devx_uar_page_id(sh->tx_uar),
+		.cqe_size = (sizeof(struct mlx5_cqe) == 128) ?
+					MLX5_CQE_SIZE_128B : MLX5_CQE_SIZE_64B,
+	};
 	void *reg_addr;
-	uint32_t cqe_n;
+	uint32_t cqe_n, log_desc_n;
 	uint32_t wqe_n;
 	int ret = 0;
 
@@ -1439,19 +1305,31 @@
 	MLX5_ASSERT(txq_obj);
 	txq_obj->txq_ctrl = txq_ctrl;
 	txq_obj->dev = dev;
-	cqe_n = mlx5_txq_create_devx_cq_resources(dev, idx);
-	if (!cqe_n) {
-		rte_errno = errno;
+	cqe_n = (1UL << txq_data->elts_n) / MLX5_TX_COMP_THRESH +
+		1 + MLX5_TX_COMP_THRESH_INLINE_DIV;
+	log_desc_n = log2above(cqe_n);
+	cqe_n = 1UL << log_desc_n;
+	if (cqe_n > UINT16_MAX) {
+		DRV_LOG(ERR, "Port %u Tx queue %u requests too many CQEs %u.",
+			dev->data->port_id, txq_data->idx, cqe_n);
+		rte_errno = EINVAL;
+		return 0;
+	}
+	/* Create completion queue object with DevX. */
+	ret = mlx5_devx_cq_create(sh->ctx, &txq_obj->cq_obj, log_desc_n,
+				  &cq_attr, priv->sh->numa_node);
+	if (ret) {
+		DRV_LOG(ERR, "Port %u Tx queue %u CQ creation failure.",
+			dev->data->port_id, idx);
 		goto error;
 	}
-	txq_data->cqe_n = log2above(cqe_n);
-	txq_data->cqe_s = 1 << txq_data->cqe_n;
+	txq_data->cqe_n = log_desc_n;
+	txq_data->cqe_s = cqe_n;
 	txq_data->cqe_m = txq_data->cqe_s - 1;
-	txq_data->cqes = (volatile struct mlx5_cqe *)txq_obj->cq_buf;
+	txq_data->cqes = txq_obj->cq_obj.cqes;
 	txq_data->cq_ci = 0;
 	txq_data->cq_pi = 0;
-	txq_data->cq_db = (volatile uint32_t *)(txq_obj->cq_dbrec_page->dbrs +
-						txq_obj->cq_dbrec_offset);
+	txq_data->cq_db = txq_obj->cq_obj.db_rec;
 	*txq_data->cq_db = 0;
 	/* Create Send Queue object with DevX. */
 	wqe_n = mlx5_txq_create_devx_sq_resources(dev, idx);
-- 
1.8.3.1



* [dpdk-dev] [PATCH 08/17] net/mlx5: move Rx CQ creation to common
  2020-12-17 11:44 [dpdk-dev] [PATCH 00/17] common/mlx5: share DevX resources creations Michael Baum
                   ` (6 preceding siblings ...)
  2020-12-17 11:44 ` [dpdk-dev] [PATCH 07/17] net/mlx5: move Tx " Michael Baum
@ 2020-12-17 11:44 ` Michael Baum
  2020-12-17 11:44 ` [dpdk-dev] [PATCH 09/17] common/mlx5: enhance page size configuration Michael Baum
                   ` (8 subsequent siblings)
  16 siblings, 0 replies; 57+ messages in thread
From: Michael Baum @ 2020-12-17 11:44 UTC (permalink / raw)
  To: dev; +Cc: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko

Use the common function for Rx CQ creation.

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
 drivers/net/mlx5/mlx5.c      |   8 ---
 drivers/net/mlx5/mlx5.h      |   3 +-
 drivers/net/mlx5/mlx5_devx.c | 142 +++++++++++++------------------------------
 drivers/net/mlx5/mlx5_rxtx.h |   4 --
 4 files changed, 42 insertions(+), 115 deletions(-)

diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index 52a8a25..3c7e5d2 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -938,14 +938,6 @@ struct mlx5_dev_ctx_shared *
 		goto error;
 	}
 	if (sh->devx) {
-		/* Query the EQN for this core. */
-		err = mlx5_glue->devx_query_eqn(sh->ctx, 0, &sh->eqn);
-		if (err) {
-			rte_errno = errno;
-			DRV_LOG(ERR, "Failed to query event queue number %d.",
-				rte_errno);
-			goto error;
-		}
 		err = mlx5_os_get_pdn(sh->pd, &sh->pdn);
 		if (err) {
 			DRV_LOG(ERR, "Fail to extract pdn from PD");
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 2e75498..9a59c26 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -679,7 +679,6 @@ struct mlx5_dev_ctx_shared {
 	uint16_t bond_dev; /* Bond primary device id. */
 	uint32_t devx:1; /* Opened with DV. */
 	uint32_t flow_hit_aso_en:1; /* Flow Hit ASO is supported. */
-	uint32_t eqn; /* Event Queue number. */
 	uint32_t max_port; /* Maximal IB device port index. */
 	void *ctx; /* Verbs/DV/DevX context. */
 	void *pd; /* Protection Domain. */
@@ -787,7 +786,7 @@ struct mlx5_rxq_obj {
 		};
 		struct {
 			struct mlx5_devx_obj *rq; /* DevX Rx Queue object. */
-			struct mlx5_devx_obj *devx_cq; /* DevX CQ object. */
+			struct mlx5_devx_cq cq_obj; /* DevX CQ object. */
 			void *devx_channel;
 		};
 	};
diff --git a/drivers/net/mlx5/mlx5_devx.c b/drivers/net/mlx5/mlx5_devx.c
index 9560f2b..6ad70f2 100644
--- a/drivers/net/mlx5/mlx5_devx.c
+++ b/drivers/net/mlx5/mlx5_devx.c
@@ -172,30 +172,17 @@
 }
 
 /**
- * Release the resources allocated for the Rx CQ DevX object.
+ * Destroy the Rx queue DevX object.
  *
- * @param rxq_ctrl
- *   DevX Rx queue object.
+ * @param rxq_obj
+ *   Rxq object to destroy.
  */
 static void
-mlx5_rxq_release_devx_cq_resources(struct mlx5_rxq_ctrl *rxq_ctrl)
+mlx5_rxq_release_devx_resources(struct mlx5_rxq_ctrl *rxq_ctrl)
 {
-	struct mlx5_devx_dbr_page *dbr_page = rxq_ctrl->cq_dbrec_page;
-
-	if (rxq_ctrl->cq_umem) {
-		mlx5_glue->devx_umem_dereg(rxq_ctrl->cq_umem);
-		rxq_ctrl->cq_umem = NULL;
-	}
-	if (rxq_ctrl->rxq.cqes) {
-		rte_free((void *)(uintptr_t)rxq_ctrl->rxq.cqes);
-		rxq_ctrl->rxq.cqes = NULL;
-	}
-	if (dbr_page) {
-		claim_zero(mlx5_release_dbr(&rxq_ctrl->priv->dbrpgs,
-					    mlx5_os_get_umem_id(dbr_page->umem),
-					    rxq_ctrl->cq_dbr_offset));
-		rxq_ctrl->cq_dbrec_page = NULL;
-	}
+	mlx5_rxq_release_devx_rq_resources(rxq_ctrl);
+	mlx5_devx_cq_destroy(&rxq_ctrl->obj->cq_obj);
+	memset(&rxq_ctrl->obj->cq_obj, 0, sizeof(rxq_ctrl->obj->cq_obj));
 }
 
 /**
@@ -213,14 +200,12 @@
 		mlx5_devx_modify_rq(rxq_obj, MLX5_RXQ_MOD_RDY2RST);
 		claim_zero(mlx5_devx_cmd_destroy(rxq_obj->rq));
 	} else {
-		MLX5_ASSERT(rxq_obj->devx_cq);
+		MLX5_ASSERT(rxq_obj->cq_obj.cq);
 		claim_zero(mlx5_devx_cmd_destroy(rxq_obj->rq));
-		claim_zero(mlx5_devx_cmd_destroy(rxq_obj->devx_cq));
 		if (rxq_obj->devx_channel)
 			mlx5_glue->devx_destroy_event_channel
 							(rxq_obj->devx_channel);
-		mlx5_rxq_release_devx_rq_resources(rxq_obj->rxq_ctrl);
-		mlx5_rxq_release_devx_cq_resources(rxq_obj->rxq_ctrl);
+		mlx5_rxq_release_devx_resources(rxq_obj->rxq_ctrl);
 	}
 }
 
@@ -249,7 +234,7 @@
 		rte_errno = errno;
 		return -rte_errno;
 	}
-	if (out.event_resp.cookie != (uint64_t)(uintptr_t)rxq_obj->devx_cq) {
+	if (out.event_resp.cookie != (uint64_t)(uintptr_t)rxq_obj->cq_obj.cq) {
 		rte_errno = EINVAL;
 		return -rte_errno;
 	}
@@ -327,7 +312,7 @@
 		container_of(rxq_data, struct mlx5_rxq_ctrl, rxq);
 	struct mlx5_devx_create_rq_attr rq_attr = { 0 };
 	uint32_t wqe_n = 1 << (rxq_data->elts_n - rxq_data->sges_n);
-	uint32_t cqn = rxq_ctrl->obj->devx_cq->id;
+	uint32_t cqn = rxq_ctrl->obj->cq_obj.cq->id;
 	struct mlx5_devx_dbr_page *dbr_page;
 	int64_t dbr_offset;
 	uint32_t wq_size = 0;
@@ -410,31 +395,23 @@
  *   Queue index in DPDK Rx queue array.
  *
  * @return
- *   The DevX CQ object initialized, NULL otherwise and rte_errno is set.
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
  */
-static struct mlx5_devx_obj *
+static int
 mlx5_rxq_create_devx_cq_resources(struct rte_eth_dev *dev, uint16_t idx)
 {
-	struct mlx5_devx_obj *cq_obj = 0;
+	struct mlx5_devx_cq *cq_obj = 0;
 	struct mlx5_devx_cq_attr cq_attr = { 0 };
 	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5_dev_ctx_shared *sh = priv->sh;
 	struct mlx5_rxq_data *rxq_data = (*priv->rxqs)[idx];
 	struct mlx5_rxq_ctrl *rxq_ctrl =
 		container_of(rxq_data, struct mlx5_rxq_ctrl, rxq);
-	size_t page_size = rte_mem_page_size();
 	unsigned int cqe_n = mlx5_rxq_cqe_num(rxq_data);
-	struct mlx5_devx_dbr_page *dbr_page;
-	int64_t dbr_offset;
-	void *buf = NULL;
-	uint16_t event_nums[1] = {0};
 	uint32_t log_cqe_n;
-	uint32_t cq_size;
+	uint16_t event_nums[1] = { 0 };
 	int ret = 0;
 
-	if (page_size == (size_t)-1) {
-		DRV_LOG(ERR, "Failed to get page_size.");
-		goto error;
-	}
 	if (priv->config.cqe_comp && !rxq_data->hw_timestamp &&
 	    !rxq_data->lro) {
 		cq_attr.cqe_comp_en = 1u;
@@ -489,71 +466,37 @@
 	}
 	if (priv->config.cqe_pad)
 		cq_attr.cqe_size = MLX5_CQE_SIZE_128B;
+	cq_attr.uar_page_id = mlx5_os_get_devx_uar_page_id(sh->devx_rx_uar);
 	log_cqe_n = log2above(cqe_n);
-	cq_size = sizeof(struct mlx5_cqe) * (1 << log_cqe_n);
-	buf = rte_calloc_socket(__func__, 1, cq_size, page_size,
-				rxq_ctrl->socket);
-	if (!buf) {
-		DRV_LOG(ERR, "Failed to allocate memory for CQ.");
-		goto error;
-	}
-	rxq_data->cqes = (volatile struct mlx5_cqe (*)[])(uintptr_t)buf;
-	rxq_ctrl->cq_umem = mlx5_glue->devx_umem_reg(priv->sh->ctx, buf,
-						     cq_size,
-						     IBV_ACCESS_LOCAL_WRITE);
-	if (!rxq_ctrl->cq_umem) {
-		DRV_LOG(ERR, "Failed to register umem for CQ.");
-		goto error;
-	}
-	/* Allocate CQ door-bell. */
-	dbr_offset = mlx5_get_dbr(priv->sh->ctx, &priv->dbrpgs, &dbr_page);
-	if (dbr_offset < 0) {
-		DRV_LOG(ERR, "Failed to allocate CQ door-bell.");
-		goto error;
-	}
-	rxq_ctrl->cq_dbr_offset = dbr_offset;
-	rxq_ctrl->cq_dbrec_page = dbr_page;
-	rxq_data->cq_db = (uint32_t *)((uintptr_t)dbr_page->dbrs +
-			  (uintptr_t)rxq_ctrl->cq_dbr_offset);
-	rxq_data->cq_uar =
-			mlx5_os_get_devx_uar_base_addr(priv->sh->devx_rx_uar);
 	/* Create CQ using DevX API. */
-	cq_attr.eqn = priv->sh->eqn;
-	cq_attr.uar_page_id =
-			mlx5_os_get_devx_uar_page_id(priv->sh->devx_rx_uar);
-	cq_attr.q_umem_id = mlx5_os_get_umem_id(rxq_ctrl->cq_umem);
-	cq_attr.q_umem_valid = 1;
-	cq_attr.log_cq_size = log_cqe_n;
-	cq_attr.log_page_size = rte_log2_u32(page_size);
-	cq_attr.db_umem_offset = rxq_ctrl->cq_dbr_offset;
-	cq_attr.db_umem_id = mlx5_os_get_umem_id(dbr_page->umem);
-	cq_attr.db_umem_valid = 1;
-	cq_obj = mlx5_devx_cmd_create_cq(priv->sh->ctx, &cq_attr);
-	if (!cq_obj)
-		goto error;
+	ret = mlx5_devx_cq_create(sh->ctx, &rxq_ctrl->obj->cq_obj, log_cqe_n,
+				  &cq_attr, sh->numa_node);
+	if (ret)
+		return ret;
+	cq_obj = &rxq_ctrl->obj->cq_obj;
+	rxq_data->cqes = (volatile struct mlx5_cqe (*)[])
+							(uintptr_t)cq_obj->cqes;
+	rxq_data->cq_db = cq_obj->db_rec;
+	rxq_data->cq_uar = mlx5_os_get_devx_uar_base_addr(sh->devx_rx_uar);
 	rxq_data->cqe_n = log_cqe_n;
-	rxq_data->cqn = cq_obj->id;
+	rxq_data->cqn = cq_obj->cq->id;
 	if (rxq_ctrl->obj->devx_channel) {
 		ret = mlx5_glue->devx_subscribe_devx_event
-						(rxq_ctrl->obj->devx_channel,
-						 cq_obj->obj,
-						 sizeof(event_nums),
-						 event_nums,
-						 (uint64_t)(uintptr_t)cq_obj);
+					      (rxq_ctrl->obj->devx_channel,
+					       cq_obj->cq->obj,
+					       sizeof(event_nums),
+					       event_nums,
+					       (uint64_t)(uintptr_t)cq_obj->cq);
 		if (ret) {
 			DRV_LOG(ERR, "Fail to subscribe CQ to event channel.");
-			rte_errno = errno;
-			goto error;
+			ret = errno;
+			mlx5_devx_cq_destroy(cq_obj);
+			memset(cq_obj, 0, sizeof(*cq_obj));
+			rte_errno = ret;
+			return -ret;
 		}
 	}
-	/* Initialise CQ to 1's to mark HW ownership for all CQEs. */
-	memset((void *)(uintptr_t)rxq_data->cqes, 0xFF, cq_size);
-	return cq_obj;
-error:
-	if (cq_obj)
-		mlx5_devx_cmd_destroy(cq_obj);
-	mlx5_rxq_release_devx_cq_resources(rxq_ctrl);
-	return NULL;
+	return 0;
 }
 
 /**
@@ -657,8 +600,8 @@
 		tmpl->fd = mlx5_os_get_devx_channel_fd(tmpl->devx_channel);
 	}
 	/* Create CQ using DevX API. */
-	tmpl->devx_cq = mlx5_rxq_create_devx_cq_resources(dev, idx);
-	if (!tmpl->devx_cq) {
+	ret = mlx5_rxq_create_devx_cq_resources(dev, idx);
+	if (ret) {
 		DRV_LOG(ERR, "Failed to create CQ.");
 		goto error;
 	}
@@ -684,12 +627,9 @@
 	ret = rte_errno; /* Save rte_errno before cleanup. */
 	if (tmpl->rq)
 		claim_zero(mlx5_devx_cmd_destroy(tmpl->rq));
-	if (tmpl->devx_cq)
-		claim_zero(mlx5_devx_cmd_destroy(tmpl->devx_cq));
 	if (tmpl->devx_channel)
 		mlx5_glue->devx_destroy_event_channel(tmpl->devx_channel);
-	mlx5_rxq_release_devx_rq_resources(rxq_ctrl);
-	mlx5_rxq_release_devx_cq_resources(rxq_ctrl);
+	mlx5_rxq_release_devx_resources(rxq_ctrl);
 	rte_errno = ret; /* Restore rte_errno. */
 	return -rte_errno;
 }
diff --git a/drivers/net/mlx5/mlx5_rxtx.h b/drivers/net/mlx5/mlx5_rxtx.h
index 7989a50..6a71791 100644
--- a/drivers/net/mlx5/mlx5_rxtx.h
+++ b/drivers/net/mlx5/mlx5_rxtx.h
@@ -196,11 +196,7 @@ struct mlx5_rxq_ctrl {
 	struct mlx5_devx_dbr_page *rq_dbrec_page;
 	uint64_t rq_dbr_offset;
 	/* Storing RQ door-bell information, needed when freeing door-bell. */
-	struct mlx5_devx_dbr_page *cq_dbrec_page;
-	uint64_t cq_dbr_offset;
-	/* Storing CQ door-bell information, needed when freeing door-bell. */
 	void *wq_umem; /* WQ buffer registration info. */
-	void *cq_umem; /* CQ buffer registration info. */
 	struct rte_eth_hairpin_conf hairpin_conf; /* Hairpin configuration. */
 	uint32_t hairpin_status; /* Hairpin binding status. */
 };
-- 
1.8.3.1


^ permalink raw reply	[flat|nested] 57+ messages in thread

* [dpdk-dev] [PATCH 09/17] common/mlx5: enhance page size configuration
  2020-12-17 11:44 [dpdk-dev] [PATCH 00/17] common/mlx5: share DevX resources creations Michael Baum
                   ` (7 preceding siblings ...)
  2020-12-17 11:44 ` [dpdk-dev] [PATCH 08/17] net/mlx5: move Rx " Michael Baum
@ 2020-12-17 11:44 ` Michael Baum
  2020-12-17 11:44 ` [dpdk-dev] [PATCH 10/17] common/mlx5: share DevX SQ creation Michael Baum
                   ` (7 subsequent siblings)
  16 siblings, 0 replies; 57+ messages in thread
From: Michael Baum @ 2020-12-17 11:44 UTC (permalink / raw)
  To: dev; +Cc: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko

The PRM expresses page sizes in units of 4KB, so the log_wq_pg_sz
attribute must be reduced by the 4KB page shift (MLX5_ADAPTER_PAGE_SHIFT)
before it is written to the hardware.
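
As a worked example (a sketch, not driver code; MLX5_ADAPTER_PAGE_SHIFT
is 12, i.e. the PRM counts pages in 4KB units):

	/* The attribute carries log2 of the page size in bytes; the PRM
	 * field counts 4KB units, hence the subtraction and the guard
	 * against underflow for plain 4KB pages.
	 */
	uint32_t log_page_size = rte_log2_u32(65536); /* 64KB pages -> 16 */
	uint32_t prm_field = log_page_size > MLX5_ADAPTER_PAGE_SHIFT ?
			     log_page_size - MLX5_ADAPTER_PAGE_SHIFT : 0;
	/* prm_field == 4: the device sees 2^4 * 4KB = 64KB pages.
	 * For 4KB pages the field stays 0, which is the device default.
	 */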

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
 drivers/common/mlx5/mlx5_devx_cmds.c | 53 ++++++++++++++++--------------------
 drivers/net/mlx5/mlx5_devx.c         | 13 +++++----
 2 files changed, 30 insertions(+), 36 deletions(-)

diff --git a/drivers/common/mlx5/mlx5_devx_cmds.c b/drivers/common/mlx5/mlx5_devx_cmds.c
index 9c1d188..09e204b 100644
--- a/drivers/common/mlx5/mlx5_devx_cmds.c
+++ b/drivers/common/mlx5/mlx5_devx_cmds.c
@@ -268,9 +268,8 @@ struct mlx5_devx_obj *
 	MLX5_SET(mkc, mkc, mkey_7_0, attr->umem_id & 0xFF);
 	MLX5_SET(mkc, mkc, translations_octword_size, translation_size);
 	MLX5_SET(mkc, mkc, relaxed_ordering_write,
-		attr->relaxed_ordering_write);
-	MLX5_SET(mkc, mkc, relaxed_ordering_read,
-		attr->relaxed_ordering_read);
+		 attr->relaxed_ordering_write);
+	MLX5_SET(mkc, mkc, relaxed_ordering_read, attr->relaxed_ordering_read);
 	MLX5_SET64(mkc, mkc, start_addr, attr->addr);
 	MLX5_SET64(mkc, mkc, len, attr->size);
 	mkey->obj = mlx5_glue->devx_obj_create(ctx, in, in_size_dw * 4, out,
@@ -308,7 +307,7 @@ struct mlx5_devx_obj *
 	if (status) {
 		int syndrome = MLX5_GET(query_flow_counter_out, out, syndrome);
 
-		DRV_LOG(ERR, "Bad devX status %x, syndrome = %x", status,
+		DRV_LOG(ERR, "Bad DevX status %x, syndrome = %x", status,
 			syndrome);
 	}
 	return status;
@@ -374,8 +373,7 @@ struct mlx5_devx_obj *
 	syndrome = MLX5_GET(query_nic_vport_context_out, out, syndrome);
 	if (status) {
 		DRV_LOG(DEBUG, "Failed to query NIC vport context, "
-			"status %x, syndrome = %x",
-			status, syndrome);
+			"status %x, syndrome = %x", status, syndrome);
 		return -1;
 	}
 	vctx = MLX5_ADDR_OF(query_nic_vport_context_out, out,
@@ -662,8 +660,7 @@ struct mlx5_devx_obj *
 	syndrome = MLX5_GET(query_hca_cap_out, out, syndrome);
 	if (status) {
 		DRV_LOG(DEBUG, "Failed to query devx HCA capabilities, "
-			"status %x, syndrome = %x",
-			status, syndrome);
+			"status %x, syndrome = %x", status, syndrome);
 		return -1;
 	}
 	hcattr = MLX5_ADDR_OF(query_hca_cap_out, out, capability);
@@ -683,11 +680,11 @@ struct mlx5_devx_obj *
 		(cmd_hca_cap, hcattr, log_min_hairpin_wq_data_sz);
 	attr->vhca_id = MLX5_GET(cmd_hca_cap, hcattr, vhca_id);
 	attr->relaxed_ordering_write = MLX5_GET(cmd_hca_cap, hcattr,
-			relaxed_ordering_write);
+						relaxed_ordering_write);
 	attr->relaxed_ordering_read = MLX5_GET(cmd_hca_cap, hcattr,
-			relaxed_ordering_read);
+					       relaxed_ordering_read);
 	attr->access_register_user = MLX5_GET(cmd_hca_cap, hcattr,
-			access_register_user);
+					      access_register_user);
 	attr->eth_net_offloads = MLX5_GET(cmd_hca_cap, hcattr,
 					  eth_net_offloads);
 	attr->eth_virt = MLX5_GET(cmd_hca_cap, hcattr, eth_virt);
@@ -730,8 +727,7 @@ struct mlx5_devx_obj *
 			goto error;
 		if (status) {
 			DRV_LOG(DEBUG, "Failed to query devx QOS capabilities,"
-				" status %x, syndrome = %x",
-				status, syndrome);
+				" status %x, syndrome = %x", status, syndrome);
 			return -1;
 		}
 		hcattr = MLX5_ADDR_OF(query_hca_cap_out, out, capability);
@@ -761,17 +757,14 @@ struct mlx5_devx_obj *
 		 MLX5_GET_HCA_CAP_OP_MOD_NIC_FLOW_TABLE |
 		 MLX5_HCA_CAP_OPMOD_GET_CUR);
 
-	rc = mlx5_glue->devx_general_cmd(ctx,
-					 in, sizeof(in),
-					 out, sizeof(out));
+	rc = mlx5_glue->devx_general_cmd(ctx, in, sizeof(in), out, sizeof(out));
 	if (rc)
 		goto error;
 	status = MLX5_GET(query_hca_cap_out, out, status);
 	syndrome = MLX5_GET(query_hca_cap_out, out, syndrome);
 	if (status) {
 		DRV_LOG(DEBUG, "Failed to query devx HCA capabilities, "
-			"status %x, syndrome = %x",
-			status, syndrome);
+			"status %x, syndrome = %x", status, syndrome);
 		attr->log_max_ft_sampler_num = 0;
 		return -1;
 	}
@@ -788,9 +781,7 @@ struct mlx5_devx_obj *
 		 MLX5_GET_HCA_CAP_OP_MOD_ETHERNET_OFFLOAD_CAPS |
 		 MLX5_HCA_CAP_OPMOD_GET_CUR);
 
-	rc = mlx5_glue->devx_general_cmd(ctx,
-					 in, sizeof(in),
-					 out, sizeof(out));
+	rc = mlx5_glue->devx_general_cmd(ctx, in, sizeof(in), out, sizeof(out));
 	if (rc) {
 		attr->eth_net_offloads = 0;
 		goto error;
@@ -799,8 +790,7 @@ struct mlx5_devx_obj *
 	syndrome = MLX5_GET(query_hca_cap_out, out, syndrome);
 	if (status) {
 		DRV_LOG(DEBUG, "Failed to query devx HCA capabilities, "
-			"status %x, syndrome = %x",
-			status, syndrome);
+			"status %x, syndrome = %x", status, syndrome);
 		attr->eth_net_offloads = 0;
 		return -1;
 	}
@@ -916,7 +906,9 @@ struct mlx5_devx_obj *
 	MLX5_SET(wq, wq_ctx, hw_counter, wq_attr->hw_counter);
 	MLX5_SET(wq, wq_ctx, sw_counter, wq_attr->sw_counter);
 	MLX5_SET(wq, wq_ctx, log_wq_stride, wq_attr->log_wq_stride);
-	MLX5_SET(wq, wq_ctx, log_wq_pg_sz, wq_attr->log_wq_pg_sz);
+	if (wq_attr->log_wq_pg_sz > MLX5_ADAPTER_PAGE_SHIFT)
+		MLX5_SET(wq, wq_ctx, log_wq_pg_sz,
+			 wq_attr->log_wq_pg_sz - MLX5_ADAPTER_PAGE_SHIFT);
 	MLX5_SET(wq, wq_ctx, log_wq_sz, wq_attr->log_wq_sz);
 	MLX5_SET(wq, wq_ctx, dbr_umem_valid, wq_attr->dbr_umem_valid);
 	MLX5_SET(wq, wq_ctx, wq_umem_valid, wq_attr->wq_umem_valid);
@@ -1562,13 +1554,13 @@ struct mlx5_devx_obj *
 	MLX5_SET(cqc, cqctx, cc, attr->use_first_only);
 	MLX5_SET(cqc, cqctx, oi, attr->overrun_ignore);
 	MLX5_SET(cqc, cqctx, log_cq_size, attr->log_cq_size);
-	MLX5_SET(cqc, cqctx, log_page_size, attr->log_page_size -
-		 MLX5_ADAPTER_PAGE_SHIFT);
+	if (attr->log_page_size > MLX5_ADAPTER_PAGE_SHIFT)
+		MLX5_SET(cqc, cqctx, log_page_size,
+			 attr->log_page_size - MLX5_ADAPTER_PAGE_SHIFT);
 	MLX5_SET(cqc, cqctx, c_eqn, attr->eqn);
 	MLX5_SET(cqc, cqctx, uar_page, attr->uar_page_id);
 	MLX5_SET(cqc, cqctx, cqe_comp_en, !!attr->cqe_comp_en);
-	MLX5_SET(cqc, cqctx, mini_cqe_res_format,
-		 attr->mini_cqe_res_format);
+	MLX5_SET(cqc, cqctx, mini_cqe_res_format, attr->mini_cqe_res_format);
 	MLX5_SET(cqc, cqctx, mini_cqe_res_format_ext,
 		 attr->mini_cqe_res_format_ext);
 	MLX5_SET(cqc, cqctx, cqe_sz, attr->cqe_size);
@@ -1798,8 +1790,9 @@ struct mlx5_devx_obj *
 	if (attr->uar_index) {
 		MLX5_SET(qpc, qpc, pm_state, MLX5_QP_PM_MIGRATED);
 		MLX5_SET(qpc, qpc, uar_page, attr->uar_index);
-		MLX5_SET(qpc, qpc, log_page_size, attr->log_page_size -
-			 MLX5_ADAPTER_PAGE_SHIFT);
+		if (attr->log_page_size > MLX5_ADAPTER_PAGE_SHIFT)
+			MLX5_SET(qpc, qpc, log_page_size,
+				 attr->log_page_size - MLX5_ADAPTER_PAGE_SHIFT);
 		if (attr->sq_size) {
 			MLX5_ASSERT(RTE_IS_POWER_OF_2(attr->sq_size));
 			MLX5_SET(qpc, qpc, cqn_snd, attr->cqn);
diff --git a/drivers/net/mlx5/mlx5_devx.c b/drivers/net/mlx5/mlx5_devx.c
index 6ad70f2..fe103a7 100644
--- a/drivers/net/mlx5/mlx5_devx.c
+++ b/drivers/net/mlx5/mlx5_devx.c
@@ -320,7 +320,13 @@
 	uint32_t log_wqe_size = 0;
 	void *buf = NULL;
 	struct mlx5_devx_obj *rq;
+	size_t alignment = MLX5_WQE_BUF_ALIGNMENT;
 
+	if (alignment == (size_t)-1) {
+		DRV_LOG(ERR, "Failed to get mem page size");
+		rte_errno = ENOMEM;
+		return NULL;
+	}
 	/* Fill RQ attributes. */
 	rq_attr.mem_rq_type = MLX5_RQC_MEM_RQ_TYPE_MEMORY_RQ_INLINE;
 	rq_attr.flush_in_error_en = 1;
@@ -347,15 +353,10 @@
 	log_wqe_size = log2above(wqe_size) + rxq_data->sges_n;
 	rq_attr.wq_attr.log_wq_stride = log_wqe_size;
 	rq_attr.wq_attr.log_wq_sz = rxq_data->elts_n - rxq_data->sges_n;
+	rq_attr.wq_attr.log_wq_pg_sz = log2above(alignment);
 	/* Calculate and allocate WQ memory space. */
 	wqe_size = 1 << log_wqe_size; /* round up power of two.*/
 	wq_size = wqe_n * wqe_size;
-	size_t alignment = MLX5_WQE_BUF_ALIGNMENT;
-	if (alignment == (size_t)-1) {
-		DRV_LOG(ERR, "Failed to get mem page size");
-		rte_errno = ENOMEM;
-		return NULL;
-	}
 	buf = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO, wq_size,
 			  alignment, rxq_ctrl->socket);
 	if (!buf)
-- 
1.8.3.1


^ permalink raw reply	[flat|nested] 57+ messages in thread

* [dpdk-dev] [PATCH 10/17] common/mlx5: share DevX SQ creation
  2020-12-17 11:44 [dpdk-dev] [PATCH 00/17] common/mlx5: share DevX resources creations Michael Baum
                   ` (8 preceding siblings ...)
  2020-12-17 11:44 ` [dpdk-dev] [PATCH 09/17] common/mlx5: enhance page size configuration Michael Baum
@ 2020-12-17 11:44 ` Michael Baum
  2020-12-17 11:44 ` [dpdk-dev] [PATCH 11/17] regex/mlx5: move DevX SQ creation to common Michael Baum
                   ` (6 subsequent siblings)
  16 siblings, 0 replies; 57+ messages in thread
From: Michael Baum @ 2020-12-17 11:44 UTC (permalink / raw)
  To: dev; +Cc: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko

The DevX SQ object is created in several places across several different
drivers.
In all of these places almost all the details are the same, in
particular the allocation of the required resources.

Add a structure that contains all the resources, and provide creation
and release functions for it.
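
A minimal usage sketch of the new API (caller side; ctx, pdn,
uar_page_id and the pre-created CQ are placeholders for whatever the
calling driver already holds):

	struct mlx5_devx_sq sq_obj; /* Bundles SQ, umem and doorbell. */
	struct mlx5_devx_create_sq_attr attr = {
		.cqn = cq_obj.cq->id, /* CQ created beforehand. */
		.wq_attr = (struct mlx5_devx_wq_attr){
			.pd = pdn,
			.uar_page = uar_page_id,
		},
	};

	/* The helper allocates the WQE ring and the doorbell record in a
	 * single umem, registers it and creates the SQ on top of it.
	 */
	if (mlx5_devx_sq_create(ctx, &sq_obj, log_wqbb_n, &attr,
				SOCKET_ID_ANY))
		return -rte_errno; /* rte_errno is set by the helper. */
	/* Use sq_obj.wqes and sq_obj.db_rec, then release everything: */
	mlx5_devx_sq_destroy(&sq_obj);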

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
 drivers/common/mlx5/mlx5_common_devx.c | 122 +++++++++++++++++++++++++++++++++
 drivers/common/mlx5/mlx5_common_devx.h |  20 +++++-
 2 files changed, 140 insertions(+), 2 deletions(-)

diff --git a/drivers/common/mlx5/mlx5_common_devx.c b/drivers/common/mlx5/mlx5_common_devx.c
index 324c6ea..46404d8 100644
--- a/drivers/common/mlx5/mlx5_common_devx.c
+++ b/drivers/common/mlx5/mlx5_common_devx.c
@@ -155,3 +155,125 @@
 	rte_errno = ret;
 	return -rte_errno;
 }
+
+/**
+ * Destroy DevX Send Queue.
+ *
+ * @param[in] sq
+ *   DevX SQ to destroy.
+ */
+void
+mlx5_devx_sq_destroy(struct mlx5_devx_sq *sq)
+{
+	if (sq->sq)
+		claim_zero(mlx5_devx_cmd_destroy(sq->sq));
+	if (sq->umem_obj)
+		claim_zero(mlx5_glue->devx_umem_dereg(sq->umem_obj));
+	if (sq->umem_buf)
+		mlx5_free((void *)(uintptr_t)sq->umem_buf);
+}
+
+/**
+ * Create Send Queue using DevX API.
+ *
+ * Gets a pointer to a partially initialized attributes structure and updates
+ * the following fields:
+ *   wq_type
+ *   wq_umem_valid
+ *   wq_umem_id
+ *   wq_umem_offset
+ *   dbr_umem_valid
+ *   dbr_umem_id
+ *   dbr_addr
+ *   log_wq_stride
+ *   log_wq_sz
+ *   log_wq_pg_sz
+ * All other fields are updated by caller.
+ *
+ * @param[in] ctx
+ *   Context returned from mlx5 open_device() glue function.
+ * @param[in/out] sq_obj
+ *   Pointer to SQ to create.
+ * @param[in] log_wqbb_n
+ *   Log of number of WQBBs in queue.
+ * @param[in] attr
+ *   Pointer to SQ attributes structure.
+ * @param[in] socket
+ *   Socket to use for allocation.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+int
+mlx5_devx_sq_create(void *ctx, struct mlx5_devx_sq *sq_obj, uint16_t log_wqbb_n,
+		    struct mlx5_devx_create_sq_attr *attr, int socket)
+{
+	struct mlx5_devx_obj *sq = NULL;
+	struct mlx5dv_devx_umem *umem_obj = NULL;
+	void *umem_buf = NULL;
+	size_t page_size = rte_mem_page_size();
+	size_t alignment = MLX5_WQE_BUF_ALIGNMENT;
+	uint32_t umem_size, umem_dbrec;
+	uint16_t sq_size = 1 << log_wqbb_n;
+	int ret;
+
+	if (page_size == (size_t)-1 || alignment == (size_t)-1) {
+		DRV_LOG(ERR, "Failed to get page_size.");
+		rte_errno = ENOMEM;
+		return -rte_errno;
+	}
+	/* Allocate memory buffer for WQEs and doorbell record. */
+	umem_size = MLX5_WQE_SIZE * sq_size;
+	umem_dbrec = RTE_ALIGN(umem_size, MLX5_DBR_SIZE);
+	umem_size += MLX5_DBR_SIZE;
+	umem_buf = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO, umem_size,
+			       alignment, socket);
+	if (!umem_buf) {
+		DRV_LOG(ERR, "Failed to allocate memory for SQ.");
+		rte_errno = ENOMEM;
+		return -rte_errno;
+	}
+	/* Register allocated buffer in user space with DevX. */
+	umem_obj = mlx5_glue->devx_umem_reg(ctx, (void *)(uintptr_t)umem_buf,
+					    umem_size, IBV_ACCESS_LOCAL_WRITE);
+	if (!umem_obj) {
+		DRV_LOG(ERR, "Failed to register umem for SQ.");
+		rte_errno = errno;
+		goto error;
+	}
+	/* Fill attributes for SQ object creation. */
+	attr->wq_attr.wq_type = MLX5_WQ_TYPE_CYCLIC;
+	attr->wq_attr.wq_umem_valid = 1;
+	attr->wq_attr.wq_umem_id = mlx5_os_get_umem_id(umem_obj);
+	attr->wq_attr.wq_umem_offset = 0;
+	attr->wq_attr.dbr_umem_valid = 1;
+	attr->wq_attr.dbr_umem_id = attr->wq_attr.wq_umem_id;
+	attr->wq_attr.dbr_addr = umem_dbrec;
+	attr->wq_attr.log_wq_stride = rte_log2_u32(MLX5_WQE_SIZE);
+	attr->wq_attr.log_wq_sz = log_wqbb_n;
+	attr->wq_attr.log_wq_pg_sz = rte_log2_u32(page_size);
+	/* Create send queue object with DevX. */
+	sq = mlx5_devx_cmd_create_sq(ctx, attr);
+	if (!sq) {
+		DRV_LOG(ERR, "Can't create DevX SQ object.");
+		rte_errno = ENOMEM;
+		goto error;
+	}
+	sq_obj->umem_buf = umem_buf;
+	sq_obj->umem_obj = umem_obj;
+	sq_obj->sq = sq;
+	sq_obj->db_rec = RTE_PTR_ADD(sq_obj->umem_buf, umem_dbrec);
+	return 0;
+error:
+	ret = rte_errno;
+	if (sq)
+		claim_zero(mlx5_devx_cmd_destroy(sq));
+	if (umem_obj)
+		claim_zero(mlx5_glue->devx_umem_dereg(umem_obj));
+	if (umem_buf)
+		mlx5_free((void *)(uintptr_t)umem_buf);
+	rte_errno = ret;
+	return -rte_errno;
+}
+
+
diff --git a/drivers/common/mlx5/mlx5_common_devx.h b/drivers/common/mlx5/mlx5_common_devx.h
index 31cb804..88d520b 100644
--- a/drivers/common/mlx5/mlx5_common_devx.h
+++ b/drivers/common/mlx5/mlx5_common_devx.h
@@ -18,11 +18,27 @@ struct mlx5_devx_cq {
 	volatile uint32_t *db_rec; /* The CQ doorbell record. */
 };
 
+/* DevX Send Queue structure. */
+struct mlx5_devx_sq {
+	struct mlx5_devx_obj *sq; /* The SQ DevX object. */
+	struct mlx5dv_devx_umem *umem_obj; /* The SQ umem object. */
+	union {
+		volatile void *umem_buf;
+		volatile struct mlx5_wqe *wqes; /* The SQ ring buffer. */
+	};
+	volatile uint32_t *db_rec; /* The SQ doorbell record. */
+};
+
+
 /* mlx5_common_devx.c */
 
 void mlx5_devx_cq_destroy(struct mlx5_devx_cq *cq);
 int mlx5_devx_cq_create(void *ctx, struct mlx5_devx_cq *cq_obj,
-			uint16_t log_desc_n, struct mlx5_devx_cq_attr *attr,
-			int socket);
+			uint16_t log_desc_n,
+			struct mlx5_devx_cq_attr *attr, int socket);
+void mlx5_devx_sq_destroy(struct mlx5_devx_sq *sq);
+int mlx5_devx_sq_create(void *ctx, struct mlx5_devx_sq *sq_obj,
+			uint16_t log_wqbb_n,
+			struct mlx5_devx_create_sq_attr *attr, int socket);
 
 #endif /* RTE_PMD_MLX5_COMMON_DEVX_H_ */
-- 
1.8.3.1


^ permalink raw reply	[flat|nested] 57+ messages in thread

* [dpdk-dev] [PATCH 11/17] regex/mlx5: move DevX SQ creation to common
  2020-12-17 11:44 [dpdk-dev] [PATCH 00/17] common/mlx5: share DevX resources creations Michael Baum
                   ` (9 preceding siblings ...)
  2020-12-17 11:44 ` [dpdk-dev] [PATCH 10/17] common/mlx5: share DevX SQ creation Michael Baum
@ 2020-12-17 11:44 ` Michael Baum
  2020-12-17 11:44 ` [dpdk-dev] [PATCH 12/17] net/mlx5: move rearm and clock queue " Michael Baum
                   ` (5 subsequent siblings)
  16 siblings, 0 replies; 57+ messages in thread
From: Michael Baum @ 2020-12-17 11:44 UTC (permalink / raw)
  To: dev; +Cc: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko

Using common function for DevX SQ creation.
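
The resulting control flow in the queue setup, roughly (a sketch with
error handling abbreviated; the helper returns the SQ in RST state, so
the driver still moves it to RDY itself before posting WQEs):

	struct mlx5_devx_create_sq_attr attr = {
		.user_index = q_ind,
		.cqn = qp->cq.cq_obj.cq->id,
	};
	struct mlx5_devx_modify_sq_attr modify_attr = {
		.state = MLX5_SQC_STATE_RDY,
	};
	int ret;

	ret = mlx5_devx_sq_create(priv->ctx, &sq->sq_obj, log_nb_desc,
				  &attr, SOCKET_ID_ANY);
	if (ret)
		return -rte_errno;
	/* RST -> RDY transition stays in the driver. */
	ret = mlx5_devx_cmd_modify_sq(sq->sq_obj.sq, &modify_attr);
	if (ret) {
		regex_ctrl_destroy_sq(qp, q_ind);
		return -rte_errno;
	}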

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
 drivers/regex/mlx5/mlx5_regex.h          |   8 +-
 drivers/regex/mlx5/mlx5_regex_control.c  | 153 ++++++++++---------------------
 drivers/regex/mlx5/mlx5_regex_fastpath.c |  14 +--
 3 files changed, 55 insertions(+), 120 deletions(-)

diff --git a/drivers/regex/mlx5/mlx5_regex.h b/drivers/regex/mlx5/mlx5_regex.h
index 9f7a388..7e1b2a9 100644
--- a/drivers/regex/mlx5/mlx5_regex.h
+++ b/drivers/regex/mlx5/mlx5_regex.h
@@ -18,15 +18,10 @@
 
 struct mlx5_regex_sq {
 	uint16_t log_nb_desc; /* Log 2 number of desc for this object. */
-	struct mlx5_devx_obj *obj; /* The SQ DevX object. */
-	int64_t dbr_offset; /* Door bell record offset. */
-	uint32_t dbr_umem; /* Door bell record umem id. */
-	uint8_t *wqe; /* The SQ ring buffer. */
-	struct mlx5dv_devx_umem *wqe_umem; /* SQ buffer umem. */
+	struct mlx5_devx_sq sq_obj; /* The SQ DevX object. */
 	size_t pi, db_pi;
 	size_t ci;
 	uint32_t sqn;
-	uint32_t *dbr;
 };
 
 struct mlx5_regex_cq {
@@ -73,7 +68,6 @@ struct mlx5_regex_priv {
 	uint32_t nb_engines; /* Number of RegEx engines. */
 	struct mlx5dv_devx_uar *uar; /* UAR object. */
 	struct ibv_pd *pd;
-	struct mlx5_dbr_page_list dbrpgs; /* Door-bell pages. */
 	struct mlx5_mr_share_cache mr_scache; /* Global shared MR cache. */
 };
 
diff --git a/drivers/regex/mlx5/mlx5_regex_control.c b/drivers/regex/mlx5/mlx5_regex_control.c
index ca6c0f5..df57fad 100644
--- a/drivers/regex/mlx5/mlx5_regex_control.c
+++ b/drivers/regex/mlx5/mlx5_regex_control.c
@@ -112,6 +112,27 @@
 #endif
 
 /**
+ * Destroy the SQ object.
+ *
+ * @param qp
+ *   Pointer to the QP element
+ * @param q_ind
+ *   The index of the queue.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+static int
+regex_ctrl_destroy_sq(struct mlx5_regex_qp *qp, uint16_t q_ind)
+{
+	struct mlx5_regex_sq *sq = &qp->sqs[q_ind];
+
+	mlx5_devx_sq_destroy(&sq->sq_obj);
+	memset(sq, 0, sizeof(*sq));
+	return 0;
+}
+
+/**
  * create the SQ object.
  *
  * @param priv
@@ -131,84 +152,42 @@
 		     uint16_t q_ind, uint16_t log_nb_desc)
 {
 #ifdef HAVE_IBV_FLOW_DV_SUPPORT
-	struct mlx5_devx_create_sq_attr attr = { 0 };
-	struct mlx5_devx_modify_sq_attr modify_attr = { 0 };
-	struct mlx5_devx_wq_attr *wq_attr = &attr.wq_attr;
-	struct mlx5_devx_dbr_page *dbr_page = NULL;
+	struct mlx5_devx_create_sq_attr attr = {
+		.user_index = q_ind,
+		.cqn = qp->cq.cq_obj.cq->id,
+		.wq_attr = (struct mlx5_devx_wq_attr){
+			.uar_page = priv->uar->page_id,
+		},
+	};
+	struct mlx5_devx_modify_sq_attr modify_attr = {
+		.state = MLX5_SQC_STATE_RDY,
+	};
 	struct mlx5_regex_sq *sq = &qp->sqs[q_ind];
-	void *buf = NULL;
-	uint32_t sq_size;
 	uint32_t pd_num = 0;
 	int ret;
 
 	sq->log_nb_desc = log_nb_desc;
-	sq_size = 1 << sq->log_nb_desc;
-	sq->dbr_offset = mlx5_get_dbr(priv->ctx, &priv->dbrpgs, &dbr_page);
-	if (sq->dbr_offset < 0) {
-		DRV_LOG(ERR, "Can't allocate sq door bell record.");
-		rte_errno  = ENOMEM;
-		goto error;
-	}
-	sq->dbr_umem = mlx5_os_get_umem_id(dbr_page->umem);
-	sq->dbr = (uint32_t *)((uintptr_t)dbr_page->dbrs +
-			       (uintptr_t)sq->dbr_offset);
-
-	buf = rte_calloc(NULL, 1, 64 * sq_size, 4096);
-	if (!buf) {
-		DRV_LOG(ERR, "Can't allocate wqe buffer.");
-		rte_errno  = ENOMEM;
-		goto error;
-	}
-	sq->wqe = buf;
-	sq->wqe_umem = mlx5_glue->devx_umem_reg(priv->ctx, buf, 64 * sq_size,
-						7);
 	sq->ci = 0;
 	sq->pi = 0;
-	if (!sq->wqe_umem) {
-		DRV_LOG(ERR, "Can't register wqe mem.");
-		rte_errno  = ENOMEM;
-		goto error;
-	}
-	attr.state = MLX5_SQC_STATE_RST;
-	attr.tis_lst_sz = 0;
-	attr.tis_num = 0;
-	attr.user_index = q_ind;
-	attr.cqn = qp->cq.cq_obj.cq->id;
-	wq_attr->uar_page = priv->uar->page_id;
-	regex_get_pdn(priv->pd, &pd_num);
-	wq_attr->pd = pd_num;
-	wq_attr->wq_type = MLX5_WQ_TYPE_CYCLIC;
-	wq_attr->dbr_umem_id = sq->dbr_umem;
-	wq_attr->dbr_addr = sq->dbr_offset;
-	wq_attr->dbr_umem_valid = 1;
-	wq_attr->wq_umem_id = mlx5_os_get_umem_id(sq->wqe_umem);
-	wq_attr->wq_umem_offset = 0;
-	wq_attr->wq_umem_valid = 1;
-	wq_attr->log_wq_stride = 6;
-	wq_attr->log_wq_sz = sq->log_nb_desc;
-	sq->obj = mlx5_devx_cmd_create_sq(priv->ctx, &attr);
-	if (!sq->obj) {
-		DRV_LOG(ERR, "Can't create sq object.");
-		rte_errno  = ENOMEM;
-		goto error;
+	ret = regex_get_pdn(priv->pd, &pd_num);
+	if (ret)
+		return ret;
+	attr.wq_attr.pd = pd_num;
+	ret = mlx5_devx_sq_create(priv->ctx, &sq->sq_obj, log_nb_desc, &attr,
+				  SOCKET_ID_ANY);
+	if (ret) {
+		DRV_LOG(ERR, "Can't create SQ object.");
+		rte_errno = ENOMEM;
+		return -rte_errno;
 	}
-	modify_attr.state = MLX5_SQC_STATE_RDY;
-	ret = mlx5_devx_cmd_modify_sq(sq->obj, &modify_attr);
+	ret = mlx5_devx_cmd_modify_sq(sq->sq_obj.sq, &modify_attr);
 	if (ret) {
-		DRV_LOG(ERR, "Can't change sq state to ready.");
-		rte_errno  = ENOMEM;
-		goto error;
+		DRV_LOG(ERR, "Can't change SQ state to ready.");
+		regex_ctrl_destroy_sq(qp, q_ind);
+		rte_errno = ENOMEM;
+		return -rte_errno;
 	}
-
 	return 0;
-error:
-	if (sq->wqe_umem)
-		mlx5_glue->devx_umem_dereg(sq->wqe_umem);
-	if (buf)
-		rte_free(buf);
-	if (sq->dbr_offset)
-		mlx5_release_dbr(&priv->dbrpgs, sq->dbr_umem, sq->dbr_offset);
-	return -rte_errno;
 #else
 	(void)priv;
 	(void)qp;
@@ -220,44 +199,6 @@
 }
 
 /**
- * Destroy the SQ object.
- *
- * @param priv
- *   Pointer to the priv object.
- * @param qp
- *   Pointer to the QP element
- * @param q_ind
- *   The index of the queue.
- *
- * @return
- *   0 on success, a negative errno value otherwise and rte_errno is set.
- */
-static int
-regex_ctrl_destroy_sq(struct mlx5_regex_priv *priv, struct mlx5_regex_qp *qp,
-		      uint16_t q_ind)
-{
-	struct mlx5_regex_sq *sq = &qp->sqs[q_ind];
-
-	if (sq->wqe_umem) {
-		mlx5_glue->devx_umem_dereg(sq->wqe_umem);
-		sq->wqe_umem = NULL;
-	}
-	if (sq->wqe) {
-		rte_free((void *)(uintptr_t)sq->wqe);
-		sq->wqe = NULL;
-	}
-	if (sq->dbr_offset) {
-		mlx5_release_dbr(&priv->dbrpgs, sq->dbr_umem, sq->dbr_offset);
-		sq->dbr_offset = -1;
-	}
-	if (sq->obj) {
-		mlx5_devx_cmd_destroy(sq->obj);
-		sq->obj = NULL;
-	}
-	return 0;
-}
-
-/**
  * Setup the qp.
  *
  * @param dev
@@ -329,7 +270,7 @@
 	mlx5_mr_btree_free(&qp->mr_ctrl.cache_bh);
 err_btree:
 	for (i = 0; i < nb_sq_config; i++)
-		regex_ctrl_destroy_sq(priv, qp, i);
+		regex_ctrl_destroy_sq(qp, i);
 	regex_ctrl_destroy_cq(&qp->cq);
 err_cq:
 	rte_free(qp->sqs);
diff --git a/drivers/regex/mlx5/mlx5_regex_fastpath.c b/drivers/regex/mlx5/mlx5_regex_fastpath.c
index 255fd40..cd0f9bd 100644
--- a/drivers/regex/mlx5/mlx5_regex_fastpath.c
+++ b/drivers/regex/mlx5/mlx5_regex_fastpath.c
@@ -110,12 +110,12 @@ struct mlx5_regex_job {
 				  &priv->mr_scache, &qp->mr_ctrl,
 				  rte_pktmbuf_mtod(op->mbuf, uintptr_t),
 				  !!(op->mbuf->ol_flags & EXT_ATTACHED_MBUF));
-	uint8_t *wqe = (uint8_t *)sq->wqe + wqe_offset;
+	uint8_t *wqe = (uint8_t *)(uintptr_t)sq->sq_obj.wqes + wqe_offset;
 	int ds = 4; /*  ctrl + meta + input + output */
 
 	set_wqe_ctrl_seg((struct mlx5_wqe_ctrl_seg *)wqe, sq->pi,
-			 MLX5_OPCODE_MMO, MLX5_OPC_MOD_MMO_REGEX, sq->obj->id,
-			 0, ds, 0, 0);
+			 MLX5_OPCODE_MMO, MLX5_OPC_MOD_MMO_REGEX,
+			 sq->sq_obj.sq->id, 0, ds, 0, 0);
 	set_regex_ctrl_seg(wqe + 12, 0, op->group_id0, op->group_id1,
 			   op->group_id2,
 			   op->group_id3, 0);
@@ -137,12 +137,12 @@ struct mlx5_regex_job {
 {
 	size_t wqe_offset = (sq->db_pi & (sq_size_get(sq) - 1)) *
 		MLX5_SEND_WQE_BB;
-	uint8_t *wqe = (uint8_t *)sq->wqe + wqe_offset;
+	uint8_t *wqe = (uint8_t *)(uintptr_t)sq->sq_obj.wqes + wqe_offset;
 	((struct mlx5_wqe_ctrl_seg *)wqe)->fm_ce_se = MLX5_WQE_CTRL_CQ_UPDATE;
 	uint64_t *doorbell_addr =
 		(uint64_t *)((uint8_t *)uar->base_addr + 0x800);
 	rte_io_wmb();
-	sq->dbr[MLX5_SND_DBR] = rte_cpu_to_be_32((sq->db_pi + 1) &
+	sq->sq_obj.db_rec[MLX5_SND_DBR] = rte_cpu_to_be_32((sq->db_pi + 1) &
 						 MLX5_REGEX_MAX_WQE_INDEX);
 	rte_wmb();
 	*doorbell_addr = *(volatile uint64_t *)wqe;
@@ -301,7 +301,7 @@ struct mlx5_regex_job {
 	uint32_t job_id;
 	for (sqid = 0; sqid < queue->nb_obj; sqid++) {
 		struct mlx5_regex_sq *sq = &queue->sqs[sqid];
-		uint8_t *wqe = (uint8_t *)sq->wqe;
+		uint8_t *wqe = (uint8_t *)(uintptr_t)sq->sq_obj.wqes;
 		for (entry = 0 ; entry < sq_size_get(sq); entry++) {
 			job_id = sqid * sq_size_get(sq) + entry;
 			struct mlx5_regex_job *job = &queue->jobs[job_id];
@@ -334,7 +334,7 @@ struct mlx5_regex_job {
 		return -ENOMEM;
 
 	qp->metadata = mlx5_glue->reg_mr(pd, ptr,
-					 MLX5_REGEX_METADATA_SIZE*qp->nb_desc,
+					 MLX5_REGEX_METADATA_SIZE * qp->nb_desc,
 					 IBV_ACCESS_LOCAL_WRITE);
 	if (!qp->metadata) {
 		DRV_LOG(ERR, "Failed to register metadata");
-- 
1.8.3.1


^ permalink raw reply	[flat|nested] 57+ messages in thread

* [dpdk-dev] [PATCH 12/17] net/mlx5: move rearm and clock queue SQ creation to common
  2020-12-17 11:44 [dpdk-dev] [PATCH 00/17] common/mlx5: share DevX resources creations Michael Baum
                   ` (10 preceding siblings ...)
  2020-12-17 11:44 ` [dpdk-dev] [PATCH 11/17] regex/mlx5: move DevX SQ creation to common Michael Baum
@ 2020-12-17 11:44 ` Michael Baum
  2020-12-17 11:44 ` [dpdk-dev] [PATCH 13/17] net/mlx5: move Tx " Michael Baum
                   ` (4 subsequent siblings)
  16 siblings, 0 replies; 57+ messages in thread
From: Michael Baum @ 2020-12-17 11:44 UTC (permalink / raw)
  To: dev; +Cc: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko

Using common function for DevX SQ creation for rearm and clock queue.
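
Both queues now go through the same helper and differ only in the
attributes filled in beforehand; schematically (a sketch, values as in
the diff below):

	/* Rearm Queue attributes (taken from the diff below). */
	struct mlx5_devx_create_sq_attr sq_attr = {
		.cd_master = 1,
		.tis_lst_sz = 1,
		.tis_num = sh->tis->id,
	};
	int ret;

	/* The Clock Queue (non-test mode) instead sets non_wire = 1 and
	 * static_sq_wq = 1, plus the packet pacing rate limit index.
	 */
	ret = mlx5_devx_sq_create(sh->ctx, &wq->sq_obj,
				  log2above(wq->sq_size), &sq_attr,
				  sh->numa_node);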

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
 drivers/net/mlx5/mlx5.h      |   8 +--
 drivers/net/mlx5/mlx5_txpp.c | 147 +++++++++++--------------------------------
 2 files changed, 36 insertions(+), 119 deletions(-)

diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 9a59c26..192a5a7 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -611,15 +611,9 @@ struct mlx5_txpp_wq {
 	uint32_t cq_ci:24;
 	uint32_t arm_sn:2;
 	/* Send Queue related data.*/
-	struct mlx5_devx_obj *sq;
-	void *sq_umem;
-	union {
-		volatile void *sq_buf;
-		volatile struct mlx5_wqe *wqes;
-	};
+	struct mlx5_devx_sq sq_obj;
 	uint16_t sq_size; /* Number of WQEs in the queue. */
 	uint16_t sq_ci; /* Next WQE to execute. */
-	volatile uint32_t *sq_dbrec;
 };
 
 /* Tx packet pacing internal timestamp. */
diff --git a/drivers/net/mlx5/mlx5_txpp.c b/drivers/net/mlx5/mlx5_txpp.c
index 54ea572..b6ff7e0 100644
--- a/drivers/net/mlx5/mlx5_txpp.c
+++ b/drivers/net/mlx5/mlx5_txpp.c
@@ -121,12 +121,7 @@
 static void
 mlx5_txpp_destroy_send_queue(struct mlx5_txpp_wq *wq)
 {
-	if (wq->sq)
-		claim_zero(mlx5_devx_cmd_destroy(wq->sq));
-	if (wq->sq_umem)
-		claim_zero(mlx5_glue->devx_umem_dereg(wq->sq_umem));
-	if (wq->sq_buf)
-		mlx5_free((void *)(uintptr_t)wq->sq_buf);
+	mlx5_devx_sq_destroy(&wq->sq_obj);
 	mlx5_devx_cq_destroy(&wq->cq_obj);
 	memset(wq, 0, sizeof(*wq));
 }
@@ -155,6 +150,7 @@
 mlx5_txpp_doorbell_rearm_queue(struct mlx5_dev_ctx_shared *sh, uint16_t ci)
 {
 	struct mlx5_txpp_wq *wq = &sh->txpp.rearm_queue;
+	struct mlx5_wqe *wqe = (struct mlx5_wqe *)(uintptr_t)wq->sq_obj.wqes;
 	union {
 		uint32_t w32[2];
 		uint64_t w64;
@@ -163,11 +159,11 @@
 
 	wq->sq_ci = ci + 1;
 	cs.w32[0] = rte_cpu_to_be_32(rte_be_to_cpu_32
-		   (wq->wqes[ci & (wq->sq_size - 1)].ctrl[0]) | (ci - 1) << 8);
-	cs.w32[1] = wq->wqes[ci & (wq->sq_size - 1)].ctrl[1];
+			(wqe[ci & (wq->sq_size - 1)].ctrl[0]) | (ci - 1) << 8);
+	cs.w32[1] = wqe[ci & (wq->sq_size - 1)].ctrl[1];
 	/* Update SQ doorbell record with new SQ ci. */
 	rte_compiler_barrier();
-	*wq->sq_dbrec = rte_cpu_to_be_32(wq->sq_ci);
+	*wq->sq_obj.db_rec = rte_cpu_to_be_32(wq->sq_ci);
 	/* Make sure the doorbell record is updated. */
 	rte_wmb();
 	/* Write to doorbel register to start processing. */
@@ -180,7 +176,7 @@
 mlx5_txpp_fill_wqe_rearm_queue(struct mlx5_dev_ctx_shared *sh)
 {
 	struct mlx5_txpp_wq *wq = &sh->txpp.rearm_queue;
-	struct mlx5_wqe *wqe = (struct mlx5_wqe *)(uintptr_t)wq->wqes;
+	struct mlx5_wqe *wqe = (struct mlx5_wqe *)(uintptr_t)wq->sq_obj.wqes;
 	uint32_t i;
 
 	for (i = 0; i < wq->sq_size; i += 2) {
@@ -191,7 +187,7 @@
 		/* Build SEND_EN request with slave WQE index. */
 		cs = &wqe[i + 0].cseg;
 		cs->opcode = RTE_BE32(MLX5_OPCODE_SEND_EN | 0);
-		cs->sq_ds = rte_cpu_to_be_32((wq->sq->id << 8) | 2);
+		cs->sq_ds = rte_cpu_to_be_32((wq->sq_obj.sq->id << 8) | 2);
 		cs->flags = RTE_BE32(MLX5_COMP_ALWAYS <<
 				     MLX5_COMP_MODE_OFFSET);
 		cs->misc = RTE_BE32(0);
@@ -199,11 +195,12 @@
 		index = (i * MLX5_TXPP_REARM / 2 + MLX5_TXPP_REARM) &
 			((1 << MLX5_WQ_INDEX_WIDTH) - 1);
 		qs->max_index = rte_cpu_to_be_32(index);
-		qs->qpn_cqn = rte_cpu_to_be_32(sh->txpp.clock_queue.sq->id);
+		qs->qpn_cqn =
+			   rte_cpu_to_be_32(sh->txpp.clock_queue.sq_obj.sq->id);
 		/* Build WAIT request with slave CQE index. */
 		cs = &wqe[i + 1].cseg;
 		cs->opcode = RTE_BE32(MLX5_OPCODE_WAIT | 0);
-		cs->sq_ds = rte_cpu_to_be_32((wq->sq->id << 8) | 2);
+		cs->sq_ds = rte_cpu_to_be_32((wq->sq_obj.sq->id << 8) | 2);
 		cs->flags = RTE_BE32(MLX5_COMP_ONLY_ERR <<
 				     MLX5_COMP_MODE_OFFSET);
 		cs->misc = RTE_BE32(0);
@@ -220,7 +217,16 @@
 static int
 mlx5_txpp_create_rearm_queue(struct mlx5_dev_ctx_shared *sh)
 {
-	struct mlx5_devx_create_sq_attr sq_attr = { 0 };
+	struct mlx5_devx_create_sq_attr sq_attr = {
+		.cd_master = 1,
+		.state = MLX5_SQC_STATE_RST,
+		.tis_lst_sz = 1,
+		.tis_num = sh->tis->id,
+		.wq_attr = (struct mlx5_devx_wq_attr){
+			.pd = sh->pdn,
+			.uar_page = mlx5_os_get_devx_uar_page_id(sh->tx_uar),
+		},
+	};
 	struct mlx5_devx_modify_sq_attr msq_attr = { 0 };
 	struct mlx5_devx_cq_attr cq_attr = {
 		.cqe_size = (sizeof(struct mlx5_cqe) == 128) ?
@@ -228,15 +234,8 @@
 		.uar_page_id = mlx5_os_get_devx_uar_page_id(sh->tx_uar),
 	};
 	struct mlx5_txpp_wq *wq = &sh->txpp.rearm_queue;
-	size_t page_size;
-	uint32_t umem_size, umem_dbrec;
 	int ret;
 
-	page_size = rte_mem_page_size();
-	if (page_size == (size_t)-1) {
-		DRV_LOG(ERR, "Failed to get mem page size");
-		return -ENOMEM;
-	}
 	/* Create completion queue object for Rearm Queue. */
 	ret = mlx5_devx_cq_create(sh->ctx, &wq->cq_obj,
 				  log2above(MLX5_TXPP_REARM_CQ_SIZE), &cq_attr,
@@ -247,63 +246,25 @@
 	}
 	wq->cq_ci = 0;
 	wq->arm_sn = 0;
-	/*
-	 * Allocate memory buffer for Send Queue WQEs.
-	 * There should be no WQE leftovers in the cyclic queue.
-	 */
 	wq->sq_size = MLX5_TXPP_REARM_SQ_SIZE;
 	MLX5_ASSERT(wq->sq_size == (1 << log2above(wq->sq_size)));
-	umem_size =  MLX5_WQE_SIZE * wq->sq_size;
-	umem_dbrec = RTE_ALIGN(umem_size, MLX5_DBR_SIZE);
-	umem_size += MLX5_DBR_SIZE;
-	wq->sq_buf = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO, umem_size,
-				 page_size, sh->numa_node);
-	if (!wq->sq_buf) {
-		DRV_LOG(ERR, "Failed to allocate memory for Rearm Queue.");
-		rte_errno = ENOMEM;
-		goto error;
-	}
-	/* Register allocated buffer in user space with DevX. */
-	wq->sq_umem = mlx5_glue->devx_umem_reg(sh->ctx,
-					       (void *)(uintptr_t)wq->sq_buf,
-					       umem_size,
-					       IBV_ACCESS_LOCAL_WRITE);
-	if (!wq->sq_umem) {
-		rte_errno = errno;
-		DRV_LOG(ERR, "Failed to register umem for Rearm Queue.");
-		goto error;
-	}
 	/* Create send queue object for Rearm Queue. */
-	sq_attr.state = MLX5_SQC_STATE_RST;
-	sq_attr.tis_lst_sz = 1;
-	sq_attr.tis_num = sh->tis->id;
 	sq_attr.cqn = wq->cq_obj.cq->id;
-	sq_attr.cd_master = 1;
-	sq_attr.wq_attr.uar_page = mlx5_os_get_devx_uar_page_id(sh->tx_uar);
-	sq_attr.wq_attr.wq_type = MLX5_WQ_TYPE_CYCLIC;
-	sq_attr.wq_attr.pd = sh->pdn;
-	sq_attr.wq_attr.log_wq_stride = rte_log2_u32(MLX5_WQE_SIZE);
-	sq_attr.wq_attr.log_wq_sz = rte_log2_u32(wq->sq_size);
-	sq_attr.wq_attr.dbr_umem_valid = 1;
-	sq_attr.wq_attr.dbr_addr = umem_dbrec;
-	sq_attr.wq_attr.dbr_umem_id = mlx5_os_get_umem_id(wq->sq_umem);
-	sq_attr.wq_attr.wq_umem_valid = 1;
-	sq_attr.wq_attr.wq_umem_id = mlx5_os_get_umem_id(wq->sq_umem);
-	sq_attr.wq_attr.wq_umem_offset = 0;
-	wq->sq = mlx5_devx_cmd_create_sq(sh->ctx, &sq_attr);
-	if (!wq->sq) {
+	/* There should be no WQE leftovers in the cyclic queue. */
+	ret = mlx5_devx_sq_create(sh->ctx, &wq->sq_obj,
+				  log2above(MLX5_TXPP_REARM_SQ_SIZE), &sq_attr,
+				  sh->numa_node);
+	if (ret) {
 		rte_errno = errno;
 		DRV_LOG(ERR, "Failed to create SQ for Rearm Queue.");
 		goto error;
 	}
-	wq->sq_dbrec = RTE_PTR_ADD(wq->sq_buf, umem_dbrec +
-				   MLX5_SND_DBR * sizeof(uint32_t));
 	/* Build the WQEs in the Send Queue before goto Ready state. */
 	mlx5_txpp_fill_wqe_rearm_queue(sh);
 	/* Change queue state to ready. */
 	msq_attr.sq_state = MLX5_SQC_STATE_RST;
 	msq_attr.state = MLX5_SQC_STATE_RDY;
-	ret = mlx5_devx_cmd_modify_sq(wq->sq, &msq_attr);
+	ret = mlx5_devx_cmd_modify_sq(wq->sq_obj.sq, &msq_attr);
 	if (ret) {
 		DRV_LOG(ERR, "Failed to set SQ ready state Rearm Queue.");
 		goto error;
@@ -320,7 +281,7 @@
 mlx5_txpp_fill_wqe_clock_queue(struct mlx5_dev_ctx_shared *sh)
 {
 	struct mlx5_txpp_wq *wq = &sh->txpp.clock_queue;
-	struct mlx5_wqe *wqe = (struct mlx5_wqe *)(uintptr_t)wq->wqes;
+	struct mlx5_wqe *wqe = (struct mlx5_wqe *)(uintptr_t)wq->sq_obj.wqes;
 	struct mlx5_wqe_cseg *cs = &wqe->cseg;
 	uint32_t wqe_size, opcode, i;
 	uint8_t *dst;
@@ -338,7 +299,7 @@
 		opcode = MLX5_OPCODE_NOP;
 	}
 	cs->opcode = rte_cpu_to_be_32(opcode | 0); /* Index is ignored. */
-	cs->sq_ds = rte_cpu_to_be_32((wq->sq->id << 8) |
+	cs->sq_ds = rte_cpu_to_be_32((wq->sq_obj.sq->id << 8) |
 				     (wqe_size / MLX5_WSEG_SIZE));
 	cs->flags = RTE_BE32(MLX5_COMP_ALWAYS << MLX5_COMP_MODE_OFFSET);
 	cs->misc = RTE_BE32(0);
@@ -407,10 +368,11 @@
 	}
 wcopy:
 	/* Duplicate the pattern to the next WQEs. */
-	dst = (uint8_t *)(uintptr_t)wq->sq_buf;
+	dst = (uint8_t *)(uintptr_t)wq->sq_obj.umem_buf;
 	for (i = 1; i < MLX5_TXPP_CLKQ_SIZE; i++) {
 		dst += wqe_size;
-		rte_memcpy(dst, (void *)(uintptr_t)wq->sq_buf, wqe_size);
+		rte_memcpy(dst, (void *)(uintptr_t)wq->sq_obj.umem_buf,
+			   wqe_size);
 	}
 }
 
@@ -428,15 +390,8 @@
 		.uar_page_id = mlx5_os_get_devx_uar_page_id(sh->tx_uar),
 	};
 	struct mlx5_txpp_wq *wq = &sh->txpp.clock_queue;
-	size_t page_size;
-	uint32_t umem_size, umem_dbrec;
 	int ret;
 
-	page_size = rte_mem_page_size();
-	if (page_size == (size_t)-1) {
-		DRV_LOG(ERR, "Failed to get mem page size");
-		return -ENOMEM;
-	}
 	sh->txpp.tsa = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO,
 				   MLX5_TXPP_REARM_SQ_SIZE *
 				   sizeof(struct mlx5_txpp_ts),
@@ -469,26 +424,6 @@
 	}
 	/* There should not be WQE leftovers in the cyclic queue. */
 	MLX5_ASSERT(wq->sq_size == (1 << log2above(wq->sq_size)));
-	umem_size =  MLX5_WQE_SIZE * wq->sq_size;
-	umem_dbrec = RTE_ALIGN(umem_size, MLX5_DBR_SIZE);
-	umem_size += MLX5_DBR_SIZE;
-	wq->sq_buf = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO, umem_size,
-				 page_size, sh->numa_node);
-	if (!wq->sq_buf) {
-		DRV_LOG(ERR, "Failed to allocate memory for Clock Queue.");
-		rte_errno = ENOMEM;
-		goto error;
-	}
-	/* Register allocated buffer in user space with DevX. */
-	wq->sq_umem = mlx5_glue->devx_umem_reg(sh->ctx,
-					       (void *)(uintptr_t)wq->sq_buf,
-					       umem_size,
-					       IBV_ACCESS_LOCAL_WRITE);
-	if (!wq->sq_umem) {
-		rte_errno = errno;
-		DRV_LOG(ERR, "Failed to register umem for Clock Queue.");
-		goto error;
-	}
 	/* Create send queue object for Clock Queue. */
 	if (sh->txpp.test) {
 		sq_attr.tis_lst_sz = 1;
@@ -499,37 +434,25 @@
 		sq_attr.non_wire = 1;
 		sq_attr.static_sq_wq = 1;
 	}
-	sq_attr.state = MLX5_SQC_STATE_RST;
 	sq_attr.cqn = wq->cq_obj.cq->id;
 	sq_attr.packet_pacing_rate_limit_index = sh->txpp.pp_id;
 	sq_attr.wq_attr.cd_slave = 1;
 	sq_attr.wq_attr.uar_page = mlx5_os_get_devx_uar_page_id(sh->tx_uar);
-	sq_attr.wq_attr.wq_type = MLX5_WQ_TYPE_CYCLIC;
 	sq_attr.wq_attr.pd = sh->pdn;
-	sq_attr.wq_attr.log_wq_stride = rte_log2_u32(MLX5_WQE_SIZE);
-	sq_attr.wq_attr.log_wq_sz = rte_log2_u32(wq->sq_size);
-	sq_attr.wq_attr.dbr_umem_valid = 1;
-	sq_attr.wq_attr.dbr_addr = umem_dbrec;
-	sq_attr.wq_attr.dbr_umem_id = mlx5_os_get_umem_id(wq->sq_umem);
-	sq_attr.wq_attr.wq_umem_valid = 1;
-	sq_attr.wq_attr.wq_umem_id = mlx5_os_get_umem_id(wq->sq_umem);
-	/* umem_offset must be zero for static_sq_wq queue. */
-	sq_attr.wq_attr.wq_umem_offset = 0;
-	wq->sq = mlx5_devx_cmd_create_sq(sh->ctx, &sq_attr);
-	if (!wq->sq) {
+	ret = mlx5_devx_sq_create(sh->ctx, &wq->sq_obj, log2above(wq->sq_size),
+				  &sq_attr, sh->numa_node);
+	if (ret) {
 		rte_errno = errno;
 		DRV_LOG(ERR, "Failed to create SQ for Clock Queue.");
 		goto error;
 	}
-	wq->sq_dbrec = RTE_PTR_ADD(wq->sq_buf, umem_dbrec +
-				   MLX5_SND_DBR * sizeof(uint32_t));
 	/* Build the WQEs in the Send Queue before goto Ready state. */
 	mlx5_txpp_fill_wqe_clock_queue(sh);
 	/* Change queue state to ready. */
 	msq_attr.sq_state = MLX5_SQC_STATE_RST;
 	msq_attr.state = MLX5_SQC_STATE_RDY;
 	wq->sq_ci = 0;
-	ret = mlx5_devx_cmd_modify_sq(wq->sq, &msq_attr);
+	ret = mlx5_devx_cmd_modify_sq(wq->sq_obj.sq, &msq_attr);
 	if (ret) {
 		DRV_LOG(ERR, "Failed to set SQ ready state Clock Queue.");
 		goto error;
-- 
1.8.3.1


^ permalink raw reply	[flat|nested] 57+ messages in thread

* [dpdk-dev] [PATCH 13/17] net/mlx5: move Tx SQ creation to common
  2020-12-17 11:44 [dpdk-dev] [PATCH 00/17] common/mlx5: share DevX resources creations Michael Baum
                   ` (11 preceding siblings ...)
  2020-12-17 11:44 ` [dpdk-dev] [PATCH 12/17] net/mlx5: move rearm and clock queue " Michael Baum
@ 2020-12-17 11:44 ` Michael Baum
  2020-12-17 11:44 ` [dpdk-dev] [PATCH 14/17] net/mlx5: move ASO " Michael Baum
                   ` (3 subsequent siblings)
  16 siblings, 0 replies; 57+ messages in thread
From: Michael Baum @ 2020-12-17 11:44 UTC (permalink / raw)
  To: dev; +Cc: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko

Using common function for Tx SQ creation.
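
The ring sizing now happens in the caller before the common helper is
invoked; a sketch of the sequence (names as in the diff below):

	/* Clamp the WQE count to the device limit, then derive the log
	 * size that the shared SQ resources expect.
	 */
	uint32_t wqe_n = RTE_MIN(1UL << txq_data->elts_n,
				 (uint32_t)priv->sh->device_attr.max_qp_wr);
	uint16_t log_desc_n = log2above(wqe_n);
	int ret;

	ret = mlx5_txq_create_devx_sq_resources(dev, idx, log_desc_n);
	if (ret)
		goto error; /* rte_errno is already set. */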

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
 drivers/net/mlx5/mlx5.h      |   8 +--
 drivers/net/mlx5/mlx5_devx.c | 160 ++++++++++---------------------------------
 2 files changed, 40 insertions(+), 128 deletions(-)

diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 192a5a7..6977eac 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -837,11 +837,9 @@ struct mlx5_txq_obj {
 		struct {
 			struct rte_eth_dev *dev;
 			struct mlx5_devx_cq cq_obj;
-			struct mlx5_devx_obj *sq_devx;
-			void *sq_umem;
-			void *sq_buf;
-			int64_t sq_dbrec_offset;
-			struct mlx5_devx_dbr_page *sq_dbrec_page;
+			/* DevX CQ object and its resources. */
+			struct mlx5_devx_sq sq_obj;
+			/* DevX SQ object and its resources. */
 		};
 	};
 };
diff --git a/drivers/net/mlx5/mlx5_devx.c b/drivers/net/mlx5/mlx5_devx.c
index fe103a7..4154c52 100644
--- a/drivers/net/mlx5/mlx5_devx.c
+++ b/drivers/net/mlx5/mlx5_devx.c
@@ -115,7 +115,7 @@
 		else
 			msq_attr.sq_state = MLX5_SQC_STATE_RDY;
 		msq_attr.state = MLX5_SQC_STATE_RST;
-		ret = mlx5_devx_cmd_modify_sq(obj->sq_devx, &msq_attr);
+		ret = mlx5_devx_cmd_modify_sq(obj->sq_obj.sq, &msq_attr);
 		if (ret) {
 			DRV_LOG(ERR, "Cannot change the Tx SQ state to RESET"
 				" %s", strerror(errno));
@@ -127,7 +127,7 @@
 		/* Change queue state to ready. */
 		msq_attr.sq_state = MLX5_SQC_STATE_RST;
 		msq_attr.state = MLX5_SQC_STATE_RDY;
-		ret = mlx5_devx_cmd_modify_sq(obj->sq_devx, &msq_attr);
+		ret = mlx5_devx_cmd_modify_sq(obj->sq_obj.sq, &msq_attr);
 		if (ret) {
 			DRV_LOG(ERR, "Cannot change the Tx SQ state to READY"
 				" %s", strerror(errno));
@@ -1056,36 +1056,6 @@
 
 #ifdef HAVE_MLX5DV_DEVX_UAR_OFFSET
 /**
- * Release DevX SQ resources.
- *
- * @param txq_obj
- *   DevX Tx queue object.
- */
-static void
-mlx5_txq_release_devx_sq_resources(struct mlx5_txq_obj *txq_obj)
-{
-	if (txq_obj->sq_devx) {
-		claim_zero(mlx5_devx_cmd_destroy(txq_obj->sq_devx));
-		txq_obj->sq_devx = NULL;
-	}
-	if (txq_obj->sq_umem) {
-		claim_zero(mlx5_glue->devx_umem_dereg(txq_obj->sq_umem));
-		txq_obj->sq_umem = NULL;
-	}
-	if (txq_obj->sq_buf) {
-		mlx5_free(txq_obj->sq_buf);
-		txq_obj->sq_buf = NULL;
-	}
-	if (txq_obj->sq_dbrec_page) {
-		claim_zero(mlx5_release_dbr(&txq_obj->txq_ctrl->priv->dbrpgs,
-					    mlx5_os_get_umem_id
-						 (txq_obj->sq_dbrec_page->umem),
-					    txq_obj->sq_dbrec_offset));
-		txq_obj->sq_dbrec_page = NULL;
-	}
-}
-
-/**
  * Destroy the Tx queue DevX object.
  *
  * @param txq_obj
@@ -1094,7 +1064,8 @@
 static void
 mlx5_txq_release_devx_resources(struct mlx5_txq_obj *txq_obj)
 {
-	mlx5_txq_release_devx_sq_resources(txq_obj);
+	mlx5_devx_sq_destroy(&txq_obj->sq_obj);
+	memset(&txq_obj->sq_obj, 0, sizeof(txq_obj->sq_obj));
 	mlx5_devx_cq_destroy(&txq_obj->cq_obj);
 	memset(&txq_obj->cq_obj, 0, sizeof(txq_obj->cq_obj));
 }
@@ -1106,100 +1077,41 @@
  *   Pointer to Ethernet device.
  * @param idx
  *   Queue index in DPDK Tx queue array.
+ * @param[in] log_desc_n
+ *   Log of number of descriptors in queue.
  *
  * @return
- *   Number of WQEs in SQ, 0 otherwise and rte_errno is set.
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
  */
-static uint32_t
-mlx5_txq_create_devx_sq_resources(struct rte_eth_dev *dev, uint16_t idx)
+static int
+mlx5_txq_create_devx_sq_resources(struct rte_eth_dev *dev, uint16_t idx,
+				  uint16_t log_desc_n)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
 	struct mlx5_txq_data *txq_data = (*priv->txqs)[idx];
 	struct mlx5_txq_ctrl *txq_ctrl =
 			container_of(txq_data, struct mlx5_txq_ctrl, txq);
 	struct mlx5_txq_obj *txq_obj = txq_ctrl->obj;
-	struct mlx5_devx_create_sq_attr sq_attr = { 0 };
-	size_t page_size;
-	uint32_t wqe_n;
-	int ret;
+	struct mlx5_devx_create_sq_attr sq_attr = {
+		.flush_in_error_en = 1,
+		.allow_multi_pkt_send_wqe = !!priv->config.mps,
+		.min_wqe_inline_mode = priv->config.hca_attr.vport_inline_mode,
+		.allow_swp = !!priv->config.swp,
+		.cqn = txq_obj->cq_obj.cq->id,
+		.tis_lst_sz = 1,
+		.tis_num = priv->sh->tis->id,
+		.wq_attr = (struct mlx5_devx_wq_attr){
+			.pd = priv->sh->pdn,
+			.uar_page =
+				 mlx5_os_get_devx_uar_page_id(priv->sh->tx_uar),
+		},
+	};
 
 	MLX5_ASSERT(txq_data);
 	MLX5_ASSERT(txq_obj);
-	page_size = rte_mem_page_size();
-	if (page_size == (size_t)-1) {
-		DRV_LOG(ERR, "Failed to get mem page size.");
-		rte_errno = ENOMEM;
-		return 0;
-	}
-	wqe_n = RTE_MIN(1UL << txq_data->elts_n,
-			(uint32_t)priv->sh->device_attr.max_qp_wr);
-	txq_obj->sq_buf = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO,
-				      wqe_n * sizeof(struct mlx5_wqe),
-				      page_size, priv->sh->numa_node);
-	if (!txq_obj->sq_buf) {
-		DRV_LOG(ERR,
-			"Port %u Tx queue %u cannot allocate memory (SQ).",
-			dev->data->port_id, txq_data->idx);
-		rte_errno = ENOMEM;
-		goto error;
-	}
-	/* Register allocated buffer in user space with DevX. */
-	txq_obj->sq_umem = mlx5_glue->devx_umem_reg
-					(priv->sh->ctx,
-					 (void *)txq_obj->sq_buf,
-					 wqe_n * sizeof(struct mlx5_wqe),
-					 IBV_ACCESS_LOCAL_WRITE);
-	if (!txq_obj->sq_umem) {
-		rte_errno = errno;
-		DRV_LOG(ERR,
-			"Port %u Tx queue %u cannot register memory (SQ).",
-			dev->data->port_id, txq_data->idx);
-		goto error;
-	}
-	/* Allocate doorbell record for send queue. */
-	txq_obj->sq_dbrec_offset = mlx5_get_dbr(priv->sh->ctx,
-						&priv->dbrpgs,
-						&txq_obj->sq_dbrec_page);
-	if (txq_obj->sq_dbrec_offset < 0) {
-		rte_errno = errno;
-		DRV_LOG(ERR, "Failed to allocate SQ door-bell.");
-		goto error;
-	}
-	sq_attr.tis_lst_sz = 1;
-	sq_attr.tis_num = priv->sh->tis->id;
-	sq_attr.state = MLX5_SQC_STATE_RST;
-	sq_attr.cqn = txq_obj->cq_obj.cq->id;
-	sq_attr.flush_in_error_en = 1;
-	sq_attr.allow_multi_pkt_send_wqe = !!priv->config.mps;
-	sq_attr.allow_swp = !!priv->config.swp;
-	sq_attr.min_wqe_inline_mode = priv->config.hca_attr.vport_inline_mode;
-	sq_attr.wq_attr.uar_page =
-				mlx5_os_get_devx_uar_page_id(priv->sh->tx_uar);
-	sq_attr.wq_attr.wq_type = MLX5_WQ_TYPE_CYCLIC;
-	sq_attr.wq_attr.pd = priv->sh->pdn;
-	sq_attr.wq_attr.log_wq_stride = rte_log2_u32(MLX5_WQE_SIZE);
-	sq_attr.wq_attr.log_wq_sz = log2above(wqe_n);
-	sq_attr.wq_attr.dbr_umem_valid = 1;
-	sq_attr.wq_attr.dbr_addr = txq_obj->sq_dbrec_offset;
-	sq_attr.wq_attr.dbr_umem_id =
-			mlx5_os_get_umem_id(txq_obj->sq_dbrec_page->umem);
-	sq_attr.wq_attr.wq_umem_valid = 1;
-	sq_attr.wq_attr.wq_umem_id = mlx5_os_get_umem_id(txq_obj->sq_umem);
-	sq_attr.wq_attr.wq_umem_offset = (uintptr_t)txq_obj->sq_buf % page_size;
 	/* Create Send Queue object with DevX. */
-	txq_obj->sq_devx = mlx5_devx_cmd_create_sq(priv->sh->ctx, &sq_attr);
-	if (!txq_obj->sq_devx) {
-		rte_errno = errno;
-		DRV_LOG(ERR, "Port %u Tx queue %u SQ creation failure.",
-			dev->data->port_id, idx);
-		goto error;
-	}
-	return wqe_n;
-error:
-	ret = rte_errno;
-	mlx5_txq_release_devx_sq_resources(txq_obj);
-	rte_errno = ret;
-	return 0;
+	return mlx5_devx_sq_create(priv->sh->ctx, &txq_obj->sq_obj, log_desc_n,
+				   &sq_attr, priv->sh->numa_node);
 }
 #endif
 
@@ -1273,27 +1185,29 @@
 	txq_data->cq_db = txq_obj->cq_obj.db_rec;
 	*txq_data->cq_db = 0;
 	/* Create Send Queue object with DevX. */
-	wqe_n = mlx5_txq_create_devx_sq_resources(dev, idx);
-	if (!wqe_n) {
+	wqe_n = RTE_MIN(1UL << txq_data->elts_n,
+			(uint32_t)priv->sh->device_attr.max_qp_wr);
+	log_desc_n = log2above(wqe_n);
+	ret = mlx5_txq_create_devx_sq_resources(dev, idx, log_desc_n);
+	if (ret) {
+		DRV_LOG(ERR, "Port %u Tx queue %u SQ creation failure.",
+			dev->data->port_id, idx);
 		rte_errno = errno;
 		goto error;
 	}
 	/* Create the Work Queue. */
-	txq_data->wqe_n = log2above(wqe_n);
+	txq_data->wqe_n = log_desc_n;
 	txq_data->wqe_s = 1 << txq_data->wqe_n;
 	txq_data->wqe_m = txq_data->wqe_s - 1;
-	txq_data->wqes = (struct mlx5_wqe *)txq_obj->sq_buf;
+	txq_data->wqes = (struct mlx5_wqe *)(uintptr_t)txq_obj->sq_obj.wqes;
 	txq_data->wqes_end = txq_data->wqes + txq_data->wqe_s;
 	txq_data->wqe_ci = 0;
 	txq_data->wqe_pi = 0;
 	txq_data->wqe_comp = 0;
 	txq_data->wqe_thres = txq_data->wqe_s / MLX5_TX_COMP_THRESH_INLINE_DIV;
-	txq_data->qp_db = (volatile uint32_t *)
-					(txq_obj->sq_dbrec_page->dbrs +
-					 txq_obj->sq_dbrec_offset +
-					 MLX5_SND_DBR * sizeof(uint32_t));
+	txq_data->qp_db = txq_obj->sq_obj.db_rec;
 	*txq_data->qp_db = 0;
-	txq_data->qp_num_8s = txq_obj->sq_devx->id << 8;
+	txq_data->qp_num_8s = txq_obj->sq_obj.sq->id << 8;
 	/* Change Send Queue state to Ready-to-Send. */
 	ret = mlx5_devx_modify_sq(txq_obj, MLX5_TXQ_MOD_RST2RDY, 0);
 	if (ret) {
-- 
1.8.3.1


^ permalink raw reply	[flat|nested] 57+ messages in thread

* [dpdk-dev] [PATCH 14/17] net/mlx5: move ASO SQ creation to common
  2020-12-17 11:44 [dpdk-dev] [PATCH 00/17] common/mlx5: share DevX resources creations Michael Baum
                   ` (12 preceding siblings ...)
  2020-12-17 11:44 ` [dpdk-dev] [PATCH 13/17] net/mlx5: move Tx " Michael Baum
@ 2020-12-17 11:44 ` Michael Baum
  2020-12-17 11:44 ` [dpdk-dev] [PATCH 15/17] common/mlx5: share DevX RQ creation Michael Baum
                   ` (2 subsequent siblings)
  16 siblings, 0 replies; 57+ messages in thread
From: Michael Baum @ 2020-12-17 11:44 UTC (permalink / raw)
  To: dev; +Cc: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko

Using common function for ASO SQ creation.
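
After this patch the ASO queue only fills the attributes and calls the
common helper, reaching the ring through the new aso_wqes member (a
sketch extrapolated from the diff below, not the literal driver code):

	struct mlx5_devx_create_sq_attr attr = {
		.user_index = 0xFFFF,
		.cqn = sq->cq.cq_obj.cq->id,
		.wq_attr = (struct mlx5_devx_wq_attr){
			.pd = pdn,
			.uar_page = mlx5_os_get_devx_uar_page_id(uar),
		},
	};
	volatile struct mlx5_aso_wqe *wqe;
	int ret;

	ret = mlx5_devx_sq_create(ctx, &sq->sq_obj, log_desc_n, &attr,
				  socket);
	if (ret)
		return -1;
	wqe = &sq->sq_obj.aso_wqes[0]; /* Base of the ASO WQE ring. */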

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
 drivers/common/mlx5/mlx5_common_devx.h |  1 +
 drivers/net/mlx5/mlx5.h                |  8 +--
 drivers/net/mlx5/mlx5_flow_age.c       | 94 ++++++++++------------------------
 3 files changed, 30 insertions(+), 73 deletions(-)

diff --git a/drivers/common/mlx5/mlx5_common_devx.h b/drivers/common/mlx5/mlx5_common_devx.h
index 88d520b..8377d34 100644
--- a/drivers/common/mlx5/mlx5_common_devx.h
+++ b/drivers/common/mlx5/mlx5_common_devx.h
@@ -25,6 +25,7 @@ struct mlx5_devx_sq {
 	union {
 		volatile void *umem_buf;
 		volatile struct mlx5_wqe *wqes; /* The SQ ring buffer. */
+		volatile struct mlx5_aso_wqe *aso_wqes;
 	};
 	volatile uint32_t *db_rec; /* The SQ doorbell record. */
 };
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 6977eac..86ada23 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -483,13 +483,7 @@ struct mlx5_aso_sq_elem {
 struct mlx5_aso_sq {
 	uint16_t log_desc_n;
 	struct mlx5_aso_cq cq;
-	struct mlx5_devx_obj *sq;
-	struct mlx5dv_devx_umem *wqe_umem; /* SQ buffer umem. */
-	union {
-		volatile void *umem_buf;
-		volatile struct mlx5_aso_wqe *wqes;
-	};
-	volatile uint32_t *db_rec;
+	struct mlx5_devx_sq sq_obj;
 	volatile uint64_t *uar_addr;
 	struct mlx5_aso_devx_mr mr;
 	uint16_t pi;
diff --git a/drivers/net/mlx5/mlx5_flow_age.c b/drivers/net/mlx5/mlx5_flow_age.c
index 60a8d2a..9681cbf 100644
--- a/drivers/net/mlx5/mlx5_flow_age.c
+++ b/drivers/net/mlx5/mlx5_flow_age.c
@@ -141,18 +141,7 @@
 static void
 mlx5_aso_destroy_sq(struct mlx5_aso_sq *sq)
 {
-	if (sq->wqe_umem) {
-		mlx5_glue->devx_umem_dereg(sq->wqe_umem);
-		sq->wqe_umem = NULL;
-	}
-	if (sq->umem_buf) {
-		mlx5_free((void *)(uintptr_t)sq->umem_buf);
-		sq->umem_buf = NULL;
-	}
-	if (sq->sq) {
-		mlx5_devx_cmd_destroy(sq->sq);
-		sq->sq = NULL;
-	}
+	mlx5_devx_sq_destroy(&sq->sq_obj);
 	mlx5_aso_cq_destroy(&sq->cq);
 	mlx5_aso_devx_dereg_mr(&sq->mr);
 	memset(sq, 0, sizeof(*sq));
@@ -173,7 +162,7 @@
 	uint64_t addr;
 
 	/* All the next fields state should stay constant. */
-	for (i = 0, wqe = &sq->wqes[0]; i < size; ++i, ++wqe) {
+	for (i = 0, wqe = &sq->sq_obj.aso_wqes[0]; i < size; ++i, ++wqe) {
 		wqe->general_cseg.sq_ds = rte_cpu_to_be_32((sq->sqn << 8) |
 							  (sizeof(*wqe) >> 4));
 		wqe->aso_cseg.lkey = rte_cpu_to_be_32(sq->mr.mkey->id);
@@ -215,12 +204,18 @@
 		   struct mlx5dv_devx_uar *uar, uint32_t pdn,
 		   uint16_t log_desc_n)
 {
-	struct mlx5_devx_create_sq_attr attr = { 0 };
-	struct mlx5_devx_modify_sq_attr modify_attr = { 0 };
-	size_t pgsize = sysconf(_SC_PAGESIZE);
-	struct mlx5_devx_wq_attr *wq_attr = &attr.wq_attr;
+	struct mlx5_devx_create_sq_attr attr = {
+		.user_index = 0xFFFF,
+		.wq_attr = (struct mlx5_devx_wq_attr){
+			.pd = pdn,
+			.uar_page = mlx5_os_get_devx_uar_page_id(uar),
+		},
+	};
+	struct mlx5_devx_modify_sq_attr modify_attr = {
+		.state = MLX5_SQC_STATE_RDY,
+	};
 	uint32_t sq_desc_n = 1 << log_desc_n;
-	uint32_t wq_size = sizeof(struct mlx5_aso_wqe) * sq_desc_n;
+	uint16_t log_wqbb_n;
 	int ret;
 
 	if (mlx5_aso_devx_reg_mr(ctx, (MLX5_ASO_AGE_ACTIONS_PER_POOL / 8) *
@@ -230,58 +225,25 @@
 			       mlx5_os_get_devx_uar_page_id(uar)))
 		goto error;
 	sq->log_desc_n = log_desc_n;
-	sq->umem_buf = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO, wq_size +
-				   sizeof(*sq->db_rec) * 2, 4096, socket);
-	if (!sq->umem_buf) {
-		DRV_LOG(ERR, "Can't allocate wqe buffer.");
-		rte_errno = ENOMEM;
-		goto error;
-	}
-	sq->wqe_umem = mlx5_glue->devx_umem_reg(ctx,
-						(void *)(uintptr_t)sq->umem_buf,
-						wq_size +
-						sizeof(*sq->db_rec) * 2,
-						IBV_ACCESS_LOCAL_WRITE);
-	if (!sq->wqe_umem) {
-		DRV_LOG(ERR, "Failed to register umem for SQ.");
-		rte_errno = ENOMEM;
-		goto error;
-	}
-	attr.state = MLX5_SQC_STATE_RST;
-	attr.tis_lst_sz = 0;
-	attr.tis_num = 0;
-	attr.user_index = 0xFFFF;
 	attr.cqn = sq->cq.cq_obj.cq->id;
-	wq_attr->uar_page = mlx5_os_get_devx_uar_page_id(uar);
-	wq_attr->pd = pdn;
-	wq_attr->wq_type = MLX5_WQ_TYPE_CYCLIC;
-	wq_attr->log_wq_pg_sz = rte_log2_u32(pgsize);
-	wq_attr->wq_umem_id = mlx5_os_get_umem_id(sq->wqe_umem);
-	wq_attr->wq_umem_offset = 0;
-	wq_attr->wq_umem_valid = 1;
-	wq_attr->log_wq_stride = 6;
-	wq_attr->log_wq_sz = rte_log2_u32(wq_size) - 6;
-	wq_attr->dbr_umem_id = wq_attr->wq_umem_id;
-	wq_attr->dbr_addr = wq_size;
-	wq_attr->dbr_umem_valid = 1;
-	sq->sq = mlx5_devx_cmd_create_sq(ctx, &attr);
-	if (!sq->sq) {
-		DRV_LOG(ERR, "Can't create sq object.");
-		rte_errno  = ENOMEM;
+	/* An mlx5_aso_wqe is twice the size of mlx5_wqe, so double the WQBBs. */
+	log_wqbb_n = log_desc_n + 1;
+	ret = mlx5_devx_sq_create(ctx, &sq->sq_obj, log_wqbb_n, &attr, socket);
+	if (ret) {
+		DRV_LOG(ERR, "Can't create SQ object.");
+		rte_errno = ENOMEM;
 		goto error;
 	}
-	modify_attr.state = MLX5_SQC_STATE_RDY;
-	ret = mlx5_devx_cmd_modify_sq(sq->sq, &modify_attr);
+	ret = mlx5_devx_cmd_modify_sq(sq->sq_obj.sq, &modify_attr);
 	if (ret) {
-		DRV_LOG(ERR, "Can't change sq state to ready.");
-		rte_errno  = ENOMEM;
+		DRV_LOG(ERR, "Can't change SQ state to ready.");
+		rte_errno = ENOMEM;
 		goto error;
 	}
 	sq->pi = 0;
 	sq->head = 0;
 	sq->tail = 0;
-	sq->sqn = sq->sq->id;
-	sq->db_rec = RTE_PTR_ADD(sq->umem_buf, (uintptr_t)(wq_attr->dbr_addr));
+	sq->sqn = sq->sq_obj.sq->id;
 	sq->uar_addr = (volatile uint64_t *)((uint8_t *)uar->base_addr + 0x800);
 	mlx5_aso_init_sq(sq);
 	return 0;
@@ -345,8 +307,8 @@
 		return 0;
 	sq->elts[start_head & mask].burst_size = max;
 	do {
-		wqe = &sq->wqes[sq->head & mask];
-		rte_prefetch0(&sq->wqes[(sq->head + 1) & mask]);
+		wqe = &sq->sq_obj.aso_wqes[sq->head & mask];
+		rte_prefetch0(&sq->sq_obj.aso_wqes[(sq->head + 1) & mask]);
 		/* Fill next WQE. */
 		rte_spinlock_lock(&mng->resize_sl);
 		pool = mng->pools[sq->next];
@@ -371,7 +333,7 @@
 	wqe->general_cseg.flags = RTE_BE32(MLX5_COMP_ALWAYS <<
 							 MLX5_COMP_MODE_OFFSET);
 	rte_io_wmb();
-	sq->db_rec[MLX5_SND_DBR] = rte_cpu_to_be_32(sq->pi);
+	sq->sq_obj.db_rec[MLX5_SND_DBR] = rte_cpu_to_be_32(sq->pi);
 	rte_wmb();
 	*sq->uar_addr = *(volatile uint64_t *)wqe; /* Assume 64 bit ARCH.*/
 	rte_wmb();
@@ -418,7 +380,7 @@
 	cq->errors++;
 	idx = rte_be_to_cpu_16(cqe->wqe_counter) & (1u << sq->log_desc_n);
 	mlx5_aso_dump_err_objs((volatile uint32_t *)cqe,
-				 (volatile uint32_t *)&sq->wqes[idx]);
+			       (volatile uint32_t *)&sq->sq_obj.aso_wqes[idx]);
 }
 
 /**
@@ -613,7 +575,7 @@
 {
 	int retries = 1024;
 
-	if (!sh->aso_age_mng->aso_sq.sq)
+	if (!sh->aso_age_mng->aso_sq.sq_obj.sq)
 		return -EINVAL;
 	rte_errno = 0;
 	while (--retries) {
-- 
1.8.3.1


^ permalink raw reply	[flat|nested] 57+ messages in thread

* [dpdk-dev] [PATCH 15/17] common/mlx5: share DevX RQ creation
  2020-12-17 11:44 [dpdk-dev] [PATCH 00/17] common/mlx5: share DevX resources creations Michael Baum
                   ` (13 preceding siblings ...)
  2020-12-17 11:44 ` [dpdk-dev] [PATCH 14/17] net/mlx5: move ASO " Michael Baum
@ 2020-12-17 11:44 ` Michael Baum
  2020-12-17 11:44 ` [dpdk-dev] [PATCH 16/17] net/mlx5: move Rx RQ creation to common Michael Baum
  2020-12-17 11:44 ` [dpdk-dev] [PATCH 17/17] common/mlx5: remove doorbell allocation API Michael Baum
  16 siblings, 0 replies; 57+ messages in thread
From: Michael Baum @ 2020-12-17 11:44 UTC (permalink / raw)
  To: dev; +Cc: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko

The RQ object in DevX is currently used only in the net driver, but it
is shared for future use by the other drivers.

Add a structure that contains all the resources, and provide creation
and release functions for it.
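
A rough usage sketch from a driver's point of view (the attribute
values are illustrative only):

	struct mlx5_devx_rq rq_obj;
	struct mlx5_devx_create_rq_attr rq_attr = { .cqn = cq_id, /* ... */ };

	/* Allocate WQE ring + doorbell in one umem and create the RQ. */
	ret = mlx5_devx_rq_create(ctx, &rq_obj, wqe_size, log_wqbb_n,
				  &rq_attr, socket);
	if (ret)
		return ret; /* rte_errno is set. */
	/* ... use rq_obj.rq (DevX object), rq_obj.umem_buf, rq_obj.db_rec ... */
	mlx5_devx_rq_destroy(&rq_obj);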

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
 drivers/common/mlx5/mlx5_common_devx.c | 116 +++++++++++++++++++++++++++++++++
 drivers/common/mlx5/mlx5_common_devx.h |  11 ++++
 2 files changed, 127 insertions(+)

diff --git a/drivers/common/mlx5/mlx5_common_devx.c b/drivers/common/mlx5/mlx5_common_devx.c
index 46404d8..0ac67bd 100644
--- a/drivers/common/mlx5/mlx5_common_devx.c
+++ b/drivers/common/mlx5/mlx5_common_devx.c
@@ -276,4 +276,120 @@
 	return -rte_errno;
 }
 
+/**
+ * Destroy DevX Receive Queue.
+ *
+ * @param[in] rq
+ *   DevX RQ to destroy.
+ */
+void
+mlx5_devx_rq_destroy(struct mlx5_devx_rq *rq)
+{
+	if (rq->rq)
+		claim_zero(mlx5_devx_cmd_destroy(rq->rq));
+	if (rq->umem_obj)
+		claim_zero(mlx5_glue->devx_umem_dereg(rq->umem_obj));
+	if (rq->umem_buf)
+		mlx5_free((void *)(uintptr_t)rq->umem_buf);
+}
+
+/**
+ * Create Receive Queue using DevX API.
+ *
+ * Gets a pointer to a partially initialized attributes structure, and
+ * updates the following fields:
+ *   wq_umem_valid
+ *   wq_umem_id
+ *   wq_umem_offset
+ *   dbr_umem_valid
+ *   dbr_umem_id
+ *   dbr_addr
+ *   log_wq_pg_sz
+ * All other fields are updated by caller.
+ *
+ * @param[in] ctx
+ *   Context returned from mlx5 open_device() glue function.
+ * @param[in/out] rq_obj
+ *   Pointer to RQ to create.
+ * @param[in] wqe_size
+ *   Size of WQE structure.
+ * @param[in] log_wqbb_n
+ *   Log of number of WQBBs in queue.
+ * @param[in] attr
+ *   Pointer to RQ attributes structure.
+ * @param[in] socket
+ *   Socket to use for allocation.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+int
+mlx5_devx_rq_create(void *ctx, struct mlx5_devx_rq *rq_obj, uint32_t wqe_size,
+		    uint16_t log_wqbb_n,
+		    struct mlx5_devx_create_rq_attr *attr, int socket)
+{
+	struct mlx5_devx_obj *rq = NULL;
+	struct mlx5dv_devx_umem *umem_obj = NULL;
+	void *umem_buf = NULL;
+	size_t page_size = rte_mem_page_size();
+	size_t alignment = MLX5_WQE_BUF_ALIGNMENT;
+	uint32_t umem_size, umem_dbrec;
+	uint16_t rq_size = 1 << log_wqbb_n;
+	int ret;
+
+	if (page_size == (size_t)-1 || alignment == (size_t)-1) {
+		DRV_LOG(ERR, "Failed to get page_size.");
+		rte_errno = ENOMEM;
+		return -rte_errno;
+	}
+	/* Allocate memory buffer for WQEs and doorbell record. */
+	umem_size = wqe_size * rq_size;
+	umem_dbrec = RTE_ALIGN(umem_size, MLX5_DBR_SIZE);
+	umem_size += MLX5_DBR_SIZE;
+	umem_buf = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO, umem_size,
+			       alignment, socket);
+	if (!umem_buf) {
+		DRV_LOG(ERR, "Failed to allocate memory for RQ.");
+		rte_errno = ENOMEM;
+		return -rte_errno;
+	}
+	/* Register allocated buffer in user space with DevX. */
+	umem_obj = mlx5_glue->devx_umem_reg(ctx, (void *)(uintptr_t)umem_buf,
+					    umem_size, 0);
+	if (!umem_obj) {
+		DRV_LOG(ERR, "Failed to register umem for RQ.");
+		rte_errno = errno;
+		goto error;
+	}
+	/* Fill attributes for RQ object creation. */
+	attr->wq_attr.wq_umem_valid = 1;
+	attr->wq_attr.wq_umem_id = mlx5_os_get_umem_id(rq_obj->umem_obj);
+	attr->wq_attr.wq_umem_offset = 0;
+	attr->wq_attr.dbr_umem_valid = 1;
+	attr->wq_attr.dbr_umem_id = attr->wq_attr.wq_umem_id;
+	attr->wq_attr.dbr_addr = umem_dbrec;
+	attr->wq_attr.log_wq_pg_sz = rte_log2_u32(page_size);
+	/* Create receive queue object with DevX. */
+	rq = mlx5_devx_cmd_create_rq(ctx, attr, socket);
+	if (!rq) {
+		DRV_LOG(ERR, "Can't create DevX RQ object.");
+		rte_errno = ENOMEM;
+		goto error;
+	}
+	rq_obj->umem_buf = umem_buf;
+	rq_obj->umem_obj = umem_obj;
+	rq_obj->rq = rq;
+	rq_obj->db_rec = RTE_PTR_ADD(rq_obj->umem_buf, umem_dbrec);
+	return 0;
+error:
+	ret = rte_errno;
+	if (rq)
+		claim_zero(mlx5_devx_cmd_destroy(rq));
+	if (umem_obj)
+		claim_zero(mlx5_glue->devx_umem_dereg(umem_obj));
+	if (umem_buf)
+		mlx5_free((void *)(uintptr_t)umem_buf);
+	rte_errno = ret;
+	return -rte_errno;
+}
 
diff --git a/drivers/common/mlx5/mlx5_common_devx.h b/drivers/common/mlx5/mlx5_common_devx.h
index 8377d34..1dafbf5 100644
--- a/drivers/common/mlx5/mlx5_common_devx.h
+++ b/drivers/common/mlx5/mlx5_common_devx.h
@@ -30,6 +30,13 @@ struct mlx5_devx_sq {
 	volatile uint32_t *db_rec; /* The SQ doorbell record. */
 };
 
+/* DevX Receive Queue structure. */
+struct mlx5_devx_rq {
+	struct mlx5_devx_obj *rq; /* The RQ DevX object. */
+	struct mlx5dv_devx_umem *umem_obj; /* The RQ umem object. */
+	volatile void *umem_buf;
+	volatile uint32_t *db_rec; /* The RQ doorbell record. */
+};
 
 /* mlx5_common_devx.c */
 
@@ -41,5 +48,9 @@ int mlx5_devx_cq_create(void *ctx, struct mlx5_devx_cq *cq_obj,
 int mlx5_devx_sq_create(void *ctx, struct mlx5_devx_sq *sq_obj,
 			uint16_t log_wqbb_n,
 			struct mlx5_devx_create_sq_attr *attr, int socket);
+void mlx5_devx_rq_destroy(struct mlx5_devx_rq *rq);
+int mlx5_devx_rq_create(void *ctx, struct mlx5_devx_rq *rq_obj,
+			uint32_t wqe_size, uint16_t log_wqbb_n,
+			struct mlx5_devx_create_rq_attr *attr, int socket);
 
 #endif /* RTE_PMD_MLX5_COMMON_DEVX_H_ */
-- 
1.8.3.1


^ permalink raw reply	[flat|nested] 57+ messages in thread

* [dpdk-dev] [PATCH 16/17] net/mlx5: move Rx RQ creation to common
  2020-12-17 11:44 [dpdk-dev] [PATCH 00/17] common/mlx5: share DevX resources creations Michael Baum
                   ` (14 preceding siblings ...)
  2020-12-17 11:44 ` [dpdk-dev] [PATCH 15/17] common/mlx5: share DevX RQ creation Michael Baum
@ 2020-12-17 11:44 ` Michael Baum
  2020-12-17 11:44 ` [dpdk-dev] [PATCH 17/17] common/mlx5: remove doorbell allocation API Michael Baum
  16 siblings, 0 replies; 57+ messages in thread
From: Michael Baum @ 2020-12-17 11:44 UTC (permalink / raw)
  To: dev; +Cc: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko

Use the common function for Rx RQ creation.
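
For reference, the WQ sizing keeps the existing logic: the descriptor
count is derived from the element count and the scatter/gather factor,
and the stride covers all segments of one WQE. A worked example for the
non-MPRQ branch of this patch, with illustrative numbers:

	/* elts_n = 10 (1024 elements), sges_n = 2 (4 segments per WQE). */
	uint16_t log_desc_n = rxq_data->elts_n - rxq_data->sges_n;    /* 8 */
	uint32_t log_wqe_size =
		log2above(sizeof(struct mlx5_wqe_data_seg)) +         /* 4 */
		rxq_data->sges_n;                                     /* 6 */

giving an RQ of 256 WQEs, each 64 bytes (four 16B data segments).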

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
 drivers/net/mlx5/mlx5.h      |   4 +-
 drivers/net/mlx5/mlx5_devx.c | 178 +++++++++----------------------------------
 drivers/net/mlx5/mlx5_rxtx.h |   4 -
 3 files changed, 37 insertions(+), 149 deletions(-)

diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 86ada23..5bf6886 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -772,8 +772,9 @@ struct mlx5_rxq_obj {
 			void *ibv_cq; /* Completion Queue. */
 			void *ibv_channel;
 		};
+		struct mlx5_devx_obj *rq; /* DevX RQ object for hairpin. */
 		struct {
-			struct mlx5_devx_obj *rq; /* DevX Rx Queue object. */
+			struct mlx5_devx_rq rq_obj; /* DevX RQ object. */
 			struct mlx5_devx_cq cq_obj; /* DevX CQ object. */
 			void *devx_channel;
 		};
@@ -944,7 +945,6 @@ struct mlx5_priv {
 	/* Context for Verbs allocator. */
 	int nl_socket_rdma; /* Netlink socket (NETLINK_RDMA). */
 	int nl_socket_route; /* Netlink socket (NETLINK_ROUTE). */
-	struct mlx5_dbr_page_list dbrpgs; /* Door-bell pages. */
 	struct mlx5_nl_vlan_vmwa_context *vmwa_context; /* VLAN WA context. */
 	struct mlx5_hlist *mreg_cp_tbl;
 	/* Hash table of Rx metadata register copy table. */
diff --git a/drivers/net/mlx5/mlx5_devx.c b/drivers/net/mlx5/mlx5_devx.c
index 4154c52..9e825ce 100644
--- a/drivers/net/mlx5/mlx5_devx.c
+++ b/drivers/net/mlx5/mlx5_devx.c
@@ -45,7 +45,7 @@
 	rq_attr.state = MLX5_RQC_STATE_RDY;
 	rq_attr.vsd = (on ? 0 : 1);
 	rq_attr.modify_bitmask = MLX5_MODIFY_RQ_IN_MODIFY_BITMASK_VSD;
-	return mlx5_devx_cmd_modify_rq(rxq_obj->rq, &rq_attr);
+	return mlx5_devx_cmd_modify_rq(rxq_obj->rq_obj.rq, &rq_attr);
 }
 
 /**
@@ -85,7 +85,7 @@
 	default:
 		break;
 	}
-	return mlx5_devx_cmd_modify_rq(rxq_obj->rq, &rq_attr);
+	return mlx5_devx_cmd_modify_rq(rxq_obj->rq_obj.rq, &rq_attr);
 }
 
 /**
@@ -145,44 +145,18 @@
 }
 
 /**
- * Release the resources allocated for an RQ DevX object.
- *
- * @param rxq_ctrl
- *   DevX Rx queue object.
- */
-static void
-mlx5_rxq_release_devx_rq_resources(struct mlx5_rxq_ctrl *rxq_ctrl)
-{
-	struct mlx5_devx_dbr_page *dbr_page = rxq_ctrl->rq_dbrec_page;
-
-	if (rxq_ctrl->wq_umem) {
-		mlx5_glue->devx_umem_dereg(rxq_ctrl->wq_umem);
-		rxq_ctrl->wq_umem = NULL;
-	}
-	if (rxq_ctrl->rxq.wqes) {
-		mlx5_free((void *)(uintptr_t)rxq_ctrl->rxq.wqes);
-		rxq_ctrl->rxq.wqes = NULL;
-	}
-	if (dbr_page) {
-		claim_zero(mlx5_release_dbr(&rxq_ctrl->priv->dbrpgs,
-					    mlx5_os_get_umem_id(dbr_page->umem),
-					    rxq_ctrl->rq_dbr_offset));
-		rxq_ctrl->rq_dbrec_page = NULL;
-	}
-}
-
-/**
  * Destroy the Rx queue DevX object.
  *
  * @param rxq_obj
  *   Rxq object to destroy.
  */
 static void
-mlx5_rxq_release_devx_resources(struct mlx5_rxq_ctrl *rxq_ctrl)
+mlx5_rxq_release_devx_resources(struct mlx5_rxq_obj *rxq_obj)
 {
-	mlx5_rxq_release_devx_rq_resources(rxq_ctrl);
-	mlx5_devx_cq_destroy(&rxq_ctrl->obj->cq_obj);
-	memset(&rxq_ctrl->obj->cq_obj, 0, sizeof(rxq_ctrl->obj->cq_obj));
+	mlx5_devx_rq_destroy(&rxq_obj->rq_obj);
+	memset(&rxq_obj->rq_obj, 0, sizeof(rxq_obj->rq_obj));
+	mlx5_devx_cq_destroy(&rxq_obj->cq_obj);
+	memset(&rxq_obj->cq_obj, 0, sizeof(rxq_obj->cq_obj));
 }
 
 /**
@@ -195,17 +169,17 @@
 mlx5_rxq_devx_obj_release(struct mlx5_rxq_obj *rxq_obj)
 {
 	MLX5_ASSERT(rxq_obj);
-	MLX5_ASSERT(rxq_obj->rq);
 	if (rxq_obj->rxq_ctrl->type == MLX5_RXQ_TYPE_HAIRPIN) {
+		MLX5_ASSERT(rxq_obj->rq);
 		mlx5_devx_modify_rq(rxq_obj, MLX5_RXQ_MOD_RDY2RST);
 		claim_zero(mlx5_devx_cmd_destroy(rxq_obj->rq));
 	} else {
-		MLX5_ASSERT(rxq_obj->cq_obj);
-		claim_zero(mlx5_devx_cmd_destroy(rxq_obj->rq));
+		MLX5_ASSERT(rxq_obj->cq_obj.cq);
+		MLX5_ASSERT(rxq_obj->rq_obj.rq);
+		mlx5_rxq_release_devx_resources(rxq_obj);
 		if (rxq_obj->devx_channel)
 			mlx5_glue->devx_destroy_event_channel
 							(rxq_obj->devx_channel);
-		mlx5_rxq_release_devx_resources(rxq_obj->rxq_ctrl);
 	}
 }
 
@@ -247,52 +221,6 @@
 }
 
 /**
- * Fill common fields of create RQ attributes structure.
- *
- * @param rxq_data
- *   Pointer to Rx queue data.
- * @param cqn
- *   CQ number to use with this RQ.
- * @param rq_attr
- *   RQ attributes structure to fill..
- */
-static void
-mlx5_devx_create_rq_attr_fill(struct mlx5_rxq_data *rxq_data, uint32_t cqn,
-			      struct mlx5_devx_create_rq_attr *rq_attr)
-{
-	rq_attr->state = MLX5_RQC_STATE_RST;
-	rq_attr->vsd = (rxq_data->vlan_strip) ? 0 : 1;
-	rq_attr->cqn = cqn;
-	rq_attr->scatter_fcs = (rxq_data->crc_present) ? 1 : 0;
-}
-
-/**
- * Fill common fields of DevX WQ attributes structure.
- *
- * @param priv
- *   Pointer to device private data.
- * @param rxq_ctrl
- *   Pointer to Rx queue control structure.
- * @param wq_attr
- *   WQ attributes structure to fill..
- */
-static void
-mlx5_devx_wq_attr_fill(struct mlx5_priv *priv, struct mlx5_rxq_ctrl *rxq_ctrl,
-		       struct mlx5_devx_wq_attr *wq_attr)
-{
-	wq_attr->end_padding_mode = priv->config.hw_padding ?
-					MLX5_WQ_END_PAD_MODE_ALIGN :
-					MLX5_WQ_END_PAD_MODE_NONE;
-	wq_attr->pd = priv->sh->pdn;
-	wq_attr->dbr_addr = rxq_ctrl->rq_dbr_offset;
-	wq_attr->dbr_umem_id =
-			mlx5_os_get_umem_id(rxq_ctrl->rq_dbrec_page->umem);
-	wq_attr->dbr_umem_valid = 1;
-	wq_attr->wq_umem_id = mlx5_os_get_umem_id(rxq_ctrl->wq_umem);
-	wq_attr->wq_umem_valid = 1;
-}
-
-/**
  * Create a RQ object using DevX.
  *
  * @param dev
@@ -301,9 +229,9 @@
  *   Queue index in DPDK Rx queue array.
  *
  * @return
- *   The DevX RQ object initialized, NULL otherwise and rte_errno is set.
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
  */
-static struct mlx5_devx_obj *
+static int
 mlx5_rxq_create_devx_rq_resources(struct rte_eth_dev *dev, uint16_t idx)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
@@ -311,26 +239,15 @@
 	struct mlx5_rxq_ctrl *rxq_ctrl =
 		container_of(rxq_data, struct mlx5_rxq_ctrl, rxq);
 	struct mlx5_devx_create_rq_attr rq_attr = { 0 };
-	uint32_t wqe_n = 1 << (rxq_data->elts_n - rxq_data->sges_n);
-	uint32_t cqn = rxq_ctrl->obj->cq_obj.cq->id;
-	struct mlx5_devx_dbr_page *dbr_page;
-	int64_t dbr_offset;
-	uint32_t wq_size = 0;
-	uint32_t wqe_size = 0;
-	uint32_t log_wqe_size = 0;
-	void *buf = NULL;
-	struct mlx5_devx_obj *rq;
-	size_t alignment = MLX5_WQE_BUF_ALIGNMENT;
+	uint16_t log_desc_n = rxq_data->elts_n - rxq_data->sges_n;
+	uint32_t wqe_size, log_wqe_size;
 
-	if (alignment == (size_t)-1) {
-		DRV_LOG(ERR, "Failed to get mem page size");
-		rte_errno = ENOMEM;
-		return NULL;
-	}
 	/* Fill RQ attributes. */
 	rq_attr.mem_rq_type = MLX5_RQC_MEM_RQ_TYPE_MEMORY_RQ_INLINE;
 	rq_attr.flush_in_error_en = 1;
-	mlx5_devx_create_rq_attr_fill(rxq_data, cqn, &rq_attr);
+	rq_attr.vsd = (rxq_data->vlan_strip) ? 0 : 1;
+	rq_attr.cqn = rxq_ctrl->obj->cq_obj.cq->id;
+	rq_attr.scatter_fcs = (rxq_data->crc_present) ? 1 : 0;
 	/* Fill WQ attributes for this RQ. */
 	if (mlx5_rxq_mprq_enabled(rxq_data)) {
 		rq_attr.wq_attr.wq_type = MLX5_WQ_TYPE_CYCLIC_STRIDING_RQ;
@@ -351,40 +268,17 @@
 		wqe_size = sizeof(struct mlx5_wqe_data_seg);
 	}
 	log_wqe_size = log2above(wqe_size) + rxq_data->sges_n;
-	rq_attr.wq_attr.log_wq_stride = log_wqe_size;
-	rq_attr.wq_attr.log_wq_sz = rxq_data->elts_n - rxq_data->sges_n;
-	rq_attr.wq_attr.log_wq_pg_sz = log2above(alignment);
-	/* Calculate and allocate WQ memory space. */
 	wqe_size = 1 << log_wqe_size; /* round up power of two.*/
-	wq_size = wqe_n * wqe_size;
-	buf = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO, wq_size,
-			  alignment, rxq_ctrl->socket);
-	if (!buf)
-		return NULL;
-	rxq_data->wqes = buf;
-	rxq_ctrl->wq_umem = mlx5_glue->devx_umem_reg(priv->sh->ctx,
-						     buf, wq_size, 0);
-	if (!rxq_ctrl->wq_umem)
-		goto error;
-	/* Allocate RQ door-bell. */
-	dbr_offset = mlx5_get_dbr(priv->sh->ctx, &priv->dbrpgs, &dbr_page);
-	if (dbr_offset < 0) {
-		DRV_LOG(ERR, "Failed to allocate RQ door-bell.");
-		goto error;
-	}
-	rxq_ctrl->rq_dbr_offset = dbr_offset;
-	rxq_ctrl->rq_dbrec_page = dbr_page;
-	rxq_data->rq_db = (uint32_t *)((uintptr_t)dbr_page->dbrs +
-			  (uintptr_t)rxq_ctrl->rq_dbr_offset);
+	rq_attr.wq_attr.log_wq_stride = log_wqe_size;
+	rq_attr.wq_attr.log_wq_sz = log_desc_n;
+	rq_attr.wq_attr.end_padding_mode = priv->config.hw_padding ?
+						MLX5_WQ_END_PAD_MODE_ALIGN :
+						MLX5_WQ_END_PAD_MODE_NONE;
+	rq_attr.wq_attr.pd = priv->sh->pdn;
 	/* Create RQ using DevX API. */
-	mlx5_devx_wq_attr_fill(priv, rxq_ctrl, &rq_attr.wq_attr);
-	rq = mlx5_devx_cmd_create_rq(priv->sh->ctx, &rq_attr, rxq_ctrl->socket);
-	if (!rq)
-		goto error;
-	return rq;
-error:
-	mlx5_rxq_release_devx_rq_resources(rxq_ctrl);
-	return NULL;
+	return mlx5_devx_rq_create(priv->sh->ctx, &rxq_ctrl->obj->rq_obj,
+				   wqe_size, log_desc_n, &rq_attr,
+				   rxq_ctrl->socket);
 }
 
 /**
@@ -607,8 +501,8 @@
 		goto error;
 	}
 	/* Create RQ using DevX API. */
-	tmpl->rq = mlx5_rxq_create_devx_rq_resources(dev, idx);
-	if (!tmpl->rq) {
+	ret = mlx5_rxq_create_devx_rq_resources(dev, idx);
+	if (ret) {
 		DRV_LOG(ERR, "Port %u Rx queue %u RQ creation failure.",
 			dev->data->port_id, idx);
 		rte_errno = ENOMEM;
@@ -618,19 +512,17 @@
 	ret = mlx5_devx_modify_rq(tmpl, MLX5_RXQ_MOD_RST2RDY);
 	if (ret)
 		goto error;
+	rxq_data->wqes = (void *)(uintptr_t)tmpl->rq_obj.umem_buf;
+	rxq_data->rq_db = (uint32_t *)(uintptr_t)tmpl->rq_obj.db_rec;
 	rxq_data->cq_arm_sn = 0;
-	mlx5_rxq_initialize(rxq_data);
 	rxq_data->cq_ci = 0;
+	mlx5_rxq_initialize(rxq_data);
 	dev->data->rx_queue_state[idx] = RTE_ETH_QUEUE_STATE_STARTED;
-	rxq_ctrl->wqn = tmpl->rq->id;
+	rxq_ctrl->wqn = tmpl->rq_obj.rq->id;
 	return 0;
 error:
 	ret = rte_errno; /* Save rte_errno before cleanup. */
-	if (tmpl->rq)
-		claim_zero(mlx5_devx_cmd_destroy(tmpl->rq));
-	if (tmpl->devx_channel)
-		mlx5_glue->devx_destroy_event_channel(tmpl->devx_channel);
-	mlx5_rxq_release_devx_resources(rxq_ctrl);
+	mlx5_rxq_devx_obj_release(tmpl);
 	rte_errno = ret; /* Restore rte_errno. */
 	return -rte_errno;
 }
@@ -674,7 +566,7 @@
 		struct mlx5_rxq_ctrl *rxq_ctrl =
 				container_of(rxq, struct mlx5_rxq_ctrl, rxq);
 
-		rqt_attr->rq_list[i] = rxq_ctrl->obj->rq->id;
+		rqt_attr->rq_list[i] = rxq_ctrl->obj->rq_obj.rq->id;
 	}
 	MLX5_ASSERT(i > 0);
 	for (j = 0; i != rqt_n; ++j, ++i)
diff --git a/drivers/net/mlx5/mlx5_rxtx.h b/drivers/net/mlx5/mlx5_rxtx.h
index 6a71791..b0041c9 100644
--- a/drivers/net/mlx5/mlx5_rxtx.h
+++ b/drivers/net/mlx5/mlx5_rxtx.h
@@ -193,10 +193,6 @@ struct mlx5_rxq_ctrl {
 	uint32_t flow_tunnels_n[MLX5_FLOW_TUNNEL]; /* Tunnels counters. */
 	uint32_t wqn; /* WQ number. */
 	uint16_t dump_file_n; /* Number of dump files. */
-	struct mlx5_devx_dbr_page *rq_dbrec_page;
-	uint64_t rq_dbr_offset;
-	/* Storing RQ door-bell information, needed when freeing door-bell. */
-	void *wq_umem; /* WQ buffer registration info. */
 	struct rte_eth_hairpin_conf hairpin_conf; /* Hairpin configuration. */
 	uint32_t hairpin_status; /* Hairpin binding status. */
 };
-- 
1.8.3.1


^ permalink raw reply	[flat|nested] 57+ messages in thread

* [dpdk-dev] [PATCH 17/17] common/mlx5: remove doorbell allocation API
  2020-12-17 11:44 [dpdk-dev] [PATCH 00/17] common/mlx5: share DevX resources creations Michael Baum
                   ` (15 preceding siblings ...)
  2020-12-17 11:44 ` [dpdk-dev] [PATCH 16/17] net/mlx5: move Rx RQ creation to common Michael Baum
@ 2020-12-17 11:44 ` Michael Baum
  16 siblings, 0 replies; 57+ messages in thread
From: Michael Baum @ 2020-12-17 11:44 UTC (permalink / raw)
  To: dev; +Cc: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko

The mlx5_devx_dbr_page structure was used to allocate and release the
umem of the doorbells.
Since the doorbell and the queue buffer now share the same umem, this
structure is no longer needed.
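
The shared layout that makes the per-page doorbell allocator redundant
is built in mlx5_common_devx.c: the doorbell record is simply appended,
cache-line aligned, to the queue ring inside a single umem:

	umem_size = wqe_size * desc_n;              /* ring buffer       */
	umem_dbrec = RTE_ALIGN(umem_size, MLX5_DBR_SIZE);
	umem_size += MLX5_DBR_SIZE;                 /* + doorbell record */
	/* ... allocate and register umem_size bytes ... */
	obj->db_rec = RTE_PTR_ADD(obj->umem_buf, umem_dbrec);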

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
 drivers/common/mlx5/mlx5_common.c | 122 --------------------------------------
 drivers/common/mlx5/mlx5_common.h |  23 -------
 2 files changed, 145 deletions(-)

diff --git a/drivers/common/mlx5/mlx5_common.c b/drivers/common/mlx5/mlx5_common.c
index 0445132..c26a2cf 100644
--- a/drivers/common/mlx5/mlx5_common.c
+++ b/drivers/common/mlx5/mlx5_common.c
@@ -8,12 +8,10 @@
 
 #include <rte_errno.h>
 #include <rte_mempool.h>
-#include <rte_malloc.h>
 
 #include "mlx5_common.h"
 #include "mlx5_common_os.h"
 #include "mlx5_common_utils.h"
-#include "mlx5_malloc.h"
 #include "mlx5_common_pci.h"
 
 int mlx5_common_logtype;
@@ -126,126 +124,6 @@ static inline void mlx5_cpu_id(unsigned int level,
 }
 
 /**
- * Allocate page of door-bells and register it using DevX API.
- *
- * @param [in] ctx
- *   Pointer to the device context.
- *
- * @return
- *   Pointer to new page on success, NULL otherwise.
- */
-static struct mlx5_devx_dbr_page *
-mlx5_alloc_dbr_page(void *ctx)
-{
-	struct mlx5_devx_dbr_page *page;
-
-	/* Allocate space for door-bell page and management data. */
-	page = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO,
-			   sizeof(struct mlx5_devx_dbr_page),
-			   RTE_CACHE_LINE_SIZE, SOCKET_ID_ANY);
-	if (!page) {
-		DRV_LOG(ERR, "cannot allocate dbr page");
-		return NULL;
-	}
-	/* Register allocated memory. */
-	page->umem = mlx5_glue->devx_umem_reg(ctx, page->dbrs,
-					      MLX5_DBR_PAGE_SIZE, 0);
-	if (!page->umem) {
-		DRV_LOG(ERR, "cannot umem reg dbr page");
-		mlx5_free(page);
-		return NULL;
-	}
-	return page;
-}
-
-/**
- * Find the next available door-bell, allocate new page if needed.
- *
- * @param [in] ctx
- *   Pointer to device context.
- * @param [in] head
- *   Pointer to the head of dbr pages list.
- * @param [out] dbr_page
- *   Door-bell page containing the page data.
- *
- * @return
- *   Door-bell address offset on success, a negative error value otherwise.
- */
-int64_t
-mlx5_get_dbr(void *ctx,  struct mlx5_dbr_page_list *head,
-	     struct mlx5_devx_dbr_page **dbr_page)
-{
-	struct mlx5_devx_dbr_page *page = NULL;
-	uint32_t i, j;
-
-	LIST_FOREACH(page, head, next)
-		if (page->dbr_count < MLX5_DBR_PER_PAGE)
-			break;
-	if (!page) { /* No page with free door-bell exists. */
-		page = mlx5_alloc_dbr_page(ctx);
-		if (!page) /* Failed to allocate new page. */
-			return (-1);
-		LIST_INSERT_HEAD(head, page, next);
-	}
-	/* Loop to find bitmap part with clear bit. */
-	for (i = 0;
-	     i < MLX5_DBR_BITMAP_SIZE && page->dbr_bitmap[i] == UINT64_MAX;
-	     i++)
-		; /* Empty. */
-	/* Find the first clear bit. */
-	MLX5_ASSERT(i < MLX5_DBR_BITMAP_SIZE);
-	j = rte_bsf64(~page->dbr_bitmap[i]);
-	page->dbr_bitmap[i] |= (UINT64_C(1) << j);
-	page->dbr_count++;
-	*dbr_page = page;
-	return (i * CHAR_BIT * sizeof(uint64_t) + j) * MLX5_DBR_SIZE;
-}
-
-/**
- * Release a door-bell record.
- *
- * @param [in] head
- *   Pointer to the head of dbr pages list.
- * @param [in] umem_id
- *   UMEM ID of page containing the door-bell record to release.
- * @param [in] offset
- *   Offset of door-bell record in page.
- *
- * @return
- *   0 on success, a negative error value otherwise.
- */
-int32_t
-mlx5_release_dbr(struct mlx5_dbr_page_list *head, uint32_t umem_id,
-		 uint64_t offset)
-{
-	struct mlx5_devx_dbr_page *page = NULL;
-	int ret = 0;
-
-	LIST_FOREACH(page, head, next)
-		/* Find the page this address belongs to. */
-		if (mlx5_os_get_umem_id(page->umem) == umem_id)
-			break;
-	if (!page)
-		return -EINVAL;
-	page->dbr_count--;
-	if (!page->dbr_count) {
-		/* Page not used, free it and remove from list. */
-		LIST_REMOVE(page, next);
-		if (page->umem)
-			ret = -mlx5_glue->devx_umem_dereg(page->umem);
-		mlx5_free(page);
-	} else {
-		/* Mark in bitmap that this door-bell is not in use. */
-		offset /= MLX5_DBR_SIZE;
-		int i = offset / 64;
-		int j = offset % 64;
-
-		page->dbr_bitmap[i] &= ~(UINT64_C(1) << j);
-	}
-	return ret;
-}
-
-/**
  * Allocate the User Access Region with DevX on specified device.
  *
  * @param [in] ctx
diff --git a/drivers/common/mlx5/mlx5_common.h b/drivers/common/mlx5/mlx5_common.h
index a484b74..e35188d 100644
--- a/drivers/common/mlx5/mlx5_common.h
+++ b/drivers/common/mlx5/mlx5_common.h
@@ -220,21 +220,6 @@ enum mlx5_class {
 };
 
 #define MLX5_DBR_SIZE RTE_CACHE_LINE_SIZE
-#define MLX5_DBR_PER_PAGE 64
-/* Must be >= CHAR_BIT * sizeof(uint64_t) */
-#define MLX5_DBR_PAGE_SIZE (MLX5_DBR_PER_PAGE * MLX5_DBR_SIZE)
-/* Page size must be >= 512. */
-#define MLX5_DBR_BITMAP_SIZE (MLX5_DBR_PER_PAGE / (CHAR_BIT * sizeof(uint64_t)))
-
-struct mlx5_devx_dbr_page {
-	/* Door-bell records, must be first member in structure. */
-	uint8_t dbrs[MLX5_DBR_PAGE_SIZE];
-	LIST_ENTRY(mlx5_devx_dbr_page) next; /* Pointer to the next element. */
-	void *umem;
-	uint32_t dbr_count; /* Number of door-bell records in use. */
-	/* 1 bit marks matching door-bell is in use. */
-	uint64_t dbr_bitmap[MLX5_DBR_BITMAP_SIZE];
-};
 
 /* devX creation object */
 struct mlx5_devx_obj {
@@ -249,19 +234,11 @@ struct mlx5_klm {
 	uint64_t address;
 };
 
-LIST_HEAD(mlx5_dbr_page_list, mlx5_devx_dbr_page);
-
 __rte_internal
 void mlx5_translate_port_name(const char *port_name_in,
 			      struct mlx5_switch_info *port_info_out);
 void mlx5_glue_constructor(void);
 __rte_internal
-int64_t mlx5_get_dbr(void *ctx,  struct mlx5_dbr_page_list *head,
-		     struct mlx5_devx_dbr_page **dbr_page);
-__rte_internal
-int32_t mlx5_release_dbr(struct mlx5_dbr_page_list *head, uint32_t umem_id,
-			 uint64_t offset);
-__rte_internal
 void *mlx5_devx_alloc_uar(void *ctx, int mapping);
 extern uint8_t haswell_broadwell_cpu;
 
-- 
1.8.3.1


^ permalink raw reply	[flat|nested] 57+ messages in thread

* [dpdk-dev] [PATCH v2 00/17] common/mlx5: share DevX resources creations
  2020-12-17 11:44 ` [dpdk-dev] [PATCH 01/17] net/mlx5: fix ASO SQ creation error flow Michael Baum
@ 2020-12-29  8:52   ` Michael Baum
  2020-12-29  8:52     ` [dpdk-dev] [PATCH v2 01/17] net/mlx5: fix ASO SQ creation error flow Michael Baum
                       ` (16 more replies)
  0 siblings, 17 replies; 57+ messages in thread
From: Michael Baum @ 2020-12-29  8:52 UTC (permalink / raw)
  To: dev; +Cc: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko

Since CQ, SQ and RQ objects are created on DevX in many places across the
drivers, their creation is moved to common code.

v1: Initial release
v2: Bug fix (sending wrong umem id to FW).
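
For context: in v1 the RQ creation filled the WQ umem id from the
not-yet-assigned rq_obj->umem_obj instead of the freshly registered
local umem_obj, so a stale id reached the FW. The v2 fix presumably
amounts to:

	-	attr->wq_attr.wq_umem_id = mlx5_os_get_umem_id(rq_obj->umem_obj);
	+	attr->wq_attr.wq_umem_id = mlx5_os_get_umem_id(umem_obj);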

Michael Baum (17):
  net/mlx5: fix ASO SQ creation error flow
  common/mlx5: share DevX CQ creation
  regex/mlx5: move DevX CQ creation to common
  vdpa/mlx5: move DevX CQ creation to common
  net/mlx5: move rearm and clock queue CQ creation to common
  net/mlx5: move ASO CQ creation to common
  net/mlx5: move Tx CQ creation to common
  net/mlx5: move Rx CQ creation to common
  common/mlx5: enhance page size configuration
  common/mlx5: share DevX SQ creation
  regex/mlx5: move DevX SQ creation to common
  net/mlx5: move rearm and clock queue SQ creation to common
  net/mlx5: move Tx SQ creation to common
  net/mlx5: move ASO SQ creation to common
  common/mlx5: share DevX RQ creation
  net/mlx5: move Rx RQ creation to common
  common/mlx5: remove doorbell allocation API

 drivers/common/mlx5/meson.build          |   1 +
 drivers/common/mlx5/mlx5_common.c        | 122 ------
 drivers/common/mlx5/mlx5_common.h        |  23 --
 drivers/common/mlx5/mlx5_common_devx.c   | 395 +++++++++++++++++++
 drivers/common/mlx5/mlx5_common_devx.h   |  56 +++
 drivers/common/mlx5/mlx5_devx_cmds.c     |  53 ++-
 drivers/net/mlx5/mlx5.c                  |   8 -
 drivers/net/mlx5/mlx5.h                  |  54 +--
 drivers/net/mlx5/mlx5_devx.c             | 643 +++++++------------------------
 drivers/net/mlx5/mlx5_flow_age.c         | 172 +++------
 drivers/net/mlx5/mlx5_rxtx.c             |   2 +-
 drivers/net/mlx5/mlx5_rxtx.h             |   8 -
 drivers/net/mlx5/mlx5_txpp.c             | 294 ++++----------
 drivers/regex/mlx5/mlx5_regex.c          |   6 -
 drivers/regex/mlx5/mlx5_regex.h          |  17 +-
 drivers/regex/mlx5/mlx5_regex_control.c  | 242 +++---------
 drivers/regex/mlx5/mlx5_regex_fastpath.c |  18 +-
 drivers/vdpa/mlx5/mlx5_vdpa.h            |  10 +-
 drivers/vdpa/mlx5/mlx5_vdpa_event.c      |  81 ++--
 drivers/vdpa/mlx5/mlx5_vdpa_virtq.c      |   2 +-
 20 files changed, 839 insertions(+), 1368 deletions(-)
 create mode 100644 drivers/common/mlx5/mlx5_common_devx.c
 create mode 100644 drivers/common/mlx5/mlx5_common_devx.h

-- 
1.8.3.1


^ permalink raw reply	[flat|nested] 57+ messages in thread

* [dpdk-dev] [PATCH v2 01/17] net/mlx5: fix ASO SQ creation error flow
  2020-12-29  8:52   ` [dpdk-dev] [PATCH v2 00/17] common/mlx5: share DevX resources creations Michael Baum
@ 2020-12-29  8:52     ` Michael Baum
  2021-01-06  8:19       ` [dpdk-dev] [PATCH v3 00/19] common/mlx5: share DevX resources creations Michael Baum
  2020-12-29  8:52     ` [dpdk-dev] [PATCH v2 02/17] common/mlx5: share DevX CQ creation Michael Baum
                       ` (15 subsequent siblings)
  16 siblings, 1 reply; 57+ messages in thread
From: Michael Baum @ 2020-12-29  8:52 UTC (permalink / raw)
  To: dev; +Cc: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko, stable

In ASO SQ creation, the PMD allocates a umem buffer for the SQ.

When the umem buffer allocation fails, the MR and CQ memory are not
freed, which causes a memory leak.

Free it.

Fixes: f935ed4b645a ("net/mlx5: support flow hit action for aging")
Cc: stable@dpdk.org

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
 drivers/net/mlx5/mlx5_flow_age.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/net/mlx5/mlx5_flow_age.c b/drivers/net/mlx5/mlx5_flow_age.c
index cea2cf7..0ea61be 100644
--- a/drivers/net/mlx5/mlx5_flow_age.c
+++ b/drivers/net/mlx5/mlx5_flow_age.c
@@ -278,7 +278,8 @@
 				   sizeof(*sq->db_rec) * 2, 4096, socket);
 	if (!sq->umem_buf) {
 		DRV_LOG(ERR, "Can't allocate wqe buffer.");
-		return -ENOMEM;
+		rte_errno = ENOMEM;
+		goto error;
 	}
 	sq->wqe_umem = mlx5_glue->devx_umem_reg(ctx,
 						(void *)(uintptr_t)sq->umem_buf,
-- 
1.8.3.1


^ permalink raw reply	[flat|nested] 57+ messages in thread

* [dpdk-dev] [PATCH v2 02/17] common/mlx5: share DevX CQ creation
  2020-12-29  8:52   ` [dpdk-dev] [PATCH v2 00/17] common/mlx5: share DevX resources creations Michael Baum
  2020-12-29  8:52     ` [dpdk-dev] [PATCH v2 01/17] net/mlx5: fix ASO SQ creation error flow Michael Baum
@ 2020-12-29  8:52     ` Michael Baum
  2020-12-29  8:52     ` [dpdk-dev] [PATCH v2 03/17] regex/mlx5: move DevX CQ creation to common Michael Baum
                       ` (14 subsequent siblings)
  16 siblings, 0 replies; 57+ messages in thread
From: Michael Baum @ 2020-12-29  8:52 UTC (permalink / raw)
  To: dev; +Cc: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko

The CQ object in DevX is created in several places and in several
different drivers.
In all of these places almost all the details are the same, in
particular the allocation of the required resources.

Add a structure that contains all the resources, and provide creation
and release functions for it.
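
A rough usage sketch for a driver (the attribute values are
illustrative only):

	struct mlx5_devx_cq cq_obj;
	struct mlx5_devx_cq_attr attr = { .uar_page_id = uar_page_id, };

	/* Allocate CQE ring + doorbell, register the umem, create the CQ
	 * and mark all CQEs as invalid (HW ownership).
	 */
	ret = mlx5_devx_cq_create(ctx, &cq_obj, log_desc_n, &attr,
				  SOCKET_ID_ANY);
	if (ret)
		return ret; /* rte_errno is set. */
	/* ... poll cq_obj.cqes, ring cq_obj.db_rec ... */
	mlx5_devx_cq_destroy(&cq_obj);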

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
 drivers/common/mlx5/meson.build        |   1 +
 drivers/common/mlx5/mlx5_common_devx.c | 157 +++++++++++++++++++++++++++++++++
 drivers/common/mlx5/mlx5_common_devx.h |  28 ++++++
 3 files changed, 186 insertions(+)
 create mode 100644 drivers/common/mlx5/mlx5_common_devx.c
 create mode 100644 drivers/common/mlx5/mlx5_common_devx.h

diff --git a/drivers/common/mlx5/meson.build b/drivers/common/mlx5/meson.build
index 3dacc6f..26cee06 100644
--- a/drivers/common/mlx5/meson.build
+++ b/drivers/common/mlx5/meson.build
@@ -16,6 +16,7 @@ sources += files(
 	'mlx5_common_mr.c',
 	'mlx5_malloc.c',
 	'mlx5_common_pci.c',
+	'mlx5_common_devx.c',
 )
 
 cflags_options = [
diff --git a/drivers/common/mlx5/mlx5_common_devx.c b/drivers/common/mlx5/mlx5_common_devx.c
new file mode 100644
index 0000000..324c6ea
--- /dev/null
+++ b/drivers/common/mlx5/mlx5_common_devx.c
@@ -0,0 +1,157 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2020 Mellanox Technologies, Ltd
+ */
+#include <stdint.h>
+
+#include <rte_errno.h>
+#include <rte_common.h>
+#include <rte_eal_paging.h>
+
+#include <mlx5_glue.h>
+#include <mlx5_common_os.h>
+
+#include "mlx5_prm.h"
+#include "mlx5_devx_cmds.h"
+#include "mlx5_common_utils.h"
+#include "mlx5_malloc.h"
+#include "mlx5_common.h"
+#include "mlx5_common_devx.h"
+
+/**
+ * Destroy DevX Completion Queue.
+ *
+ * @param[in] cq
+ *   DevX CQ to destroy.
+ */
+void
+mlx5_devx_cq_destroy(struct mlx5_devx_cq *cq)
+{
+	if (cq->cq)
+		claim_zero(mlx5_devx_cmd_destroy(cq->cq));
+	if (cq->umem_obj)
+		claim_zero(mlx5_glue->devx_umem_dereg(cq->umem_obj));
+	if (cq->umem_buf)
+		mlx5_free((void *)(uintptr_t)cq->umem_buf);
+}
+
+/* Mark all CQEs initially as invalid. */
+static void
+mlx5_cq_init(struct mlx5_devx_cq *cq_obj, uint16_t cq_size)
+{
+	volatile struct mlx5_cqe *cqe = cq_obj->cqes;
+	uint16_t i;
+
+	for (i = 0; i < cq_size; i++, cqe++)
+		cqe->op_own = (MLX5_CQE_INVALID << 4) | MLX5_CQE_OWNER_MASK;
+}
+
+/**
+ * Create Completion Queue using DevX API.
+ *
+ * Gets a pointer to a partially initialized attributes structure, and
+ * updates the following fields:
+ *   q_umem_valid
+ *   q_umem_id
+ *   q_umem_offset
+ *   db_umem_valid
+ *   db_umem_id
+ *   db_umem_offset
+ *   eqn
+ *   log_cq_size
+ *   log_page_size
+ * All other fields are updated by caller.
+ *
+ * @param[in] ctx
+ *   Context returned from mlx5 open_device() glue function.
+ * @param[in/out] cq_obj
+ *   Pointer to CQ to create.
+ * @param[in] log_desc_n
+ *   Log of number of descriptors in queue.
+ * @param[in] attr
+ *   Pointer to CQ attributes structure.
+ * @param[in] socket
+ *   Socket to use for allocation.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+int
+mlx5_devx_cq_create(void *ctx, struct mlx5_devx_cq *cq_obj, uint16_t log_desc_n,
+		    struct mlx5_devx_cq_attr *attr, int socket)
+{
+	struct mlx5_devx_obj *cq = NULL;
+	struct mlx5dv_devx_umem *umem_obj = NULL;
+	void *umem_buf = NULL;
+	size_t page_size = rte_mem_page_size();
+	size_t alignment = MLX5_CQE_BUF_ALIGNMENT;
+	uint32_t umem_size, umem_dbrec;
+	uint32_t eqn;
+	uint16_t cq_size = 1 << log_desc_n;
+	int ret;
+
+	if (page_size == (size_t)-1 || alignment == (size_t)-1) {
+		DRV_LOG(ERR, "Failed to get page_size.");
+		rte_errno = ENOMEM;
+		return -rte_errno;
+	}
+	/* Query first EQN. */
+	ret = mlx5_glue->devx_query_eqn(ctx, 0, &eqn);
+	if (ret) {
+		rte_errno = errno;
+		DRV_LOG(ERR, "Failed to query event queue number.");
+		goto error;
+	}
+	/* Allocate memory buffer for CQEs and doorbell record. */
+	umem_size = sizeof(struct mlx5_cqe) * cq_size;
+	umem_dbrec = RTE_ALIGN(umem_size, MLX5_DBR_SIZE);
+	umem_size += MLX5_DBR_SIZE;
+	umem_buf = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO, umem_size,
+			       alignment, socket);
+	if (!umem_buf) {
+		DRV_LOG(ERR, "Failed to allocate memory for CQ.");
+		rte_errno = ENOMEM;
+		return -rte_errno;
+	}
+	/* Register allocated buffer in user space with DevX. */
+	umem_obj = mlx5_glue->devx_umem_reg(ctx, (void *)(uintptr_t)umem_buf,
+					    umem_size, IBV_ACCESS_LOCAL_WRITE);
+	if (!umem_obj) {
+		DRV_LOG(ERR, "Failed to register umem for CQ.");
+		rte_errno = errno;
+		goto error;
+	}
+	/* Fill attributes for CQ object creation. */
+	attr->q_umem_valid = 1;
+	attr->q_umem_id = mlx5_os_get_umem_id(umem_obj);
+	attr->q_umem_offset = 0;
+	attr->db_umem_valid = 1;
+	attr->db_umem_id = attr->q_umem_id;
+	attr->db_umem_offset = umem_dbrec;
+	attr->eqn = eqn;
+	attr->log_cq_size = log_desc_n;
+	attr->log_page_size = rte_log2_u32(page_size);
+	/* Create completion queue object with DevX. */
+	cq = mlx5_devx_cmd_create_cq(ctx, attr);
+	if (!cq) {
+		DRV_LOG(ERR, "Can't create DevX CQ object.");
+		rte_errno  = ENOMEM;
+		goto error;
+	}
+	cq_obj->umem_buf = umem_buf;
+	cq_obj->umem_obj = umem_obj;
+	cq_obj->cq = cq;
+	cq_obj->db_rec = RTE_PTR_ADD(cq_obj->umem_buf, umem_dbrec);
+	/* Mark all CQEs initially as invalid. */
+	mlx5_cq_init(cq_obj, cq_size);
+	return 0;
+error:
+	ret = rte_errno;
+	if (cq)
+		claim_zero(mlx5_devx_cmd_destroy(cq));
+	if (umem_obj)
+		claim_zero(mlx5_glue->devx_umem_dereg(umem_obj));
+	if (umem_buf)
+		mlx5_free((void *)(uintptr_t)umem_buf);
+	rte_errno = ret;
+	return -rte_errno;
+}
diff --git a/drivers/common/mlx5/mlx5_common_devx.h b/drivers/common/mlx5/mlx5_common_devx.h
new file mode 100644
index 0000000..31cb804
--- /dev/null
+++ b/drivers/common/mlx5/mlx5_common_devx.h
@@ -0,0 +1,28 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2020 Mellanox Technologies, Ltd
+ */
+
+#ifndef RTE_PMD_MLX5_COMMON_DEVX_H_
+#define RTE_PMD_MLX5_COMMON_DEVX_H_
+
+#include "mlx5_devx_cmds.h"
+
+/* DevX Completion Queue structure. */
+struct mlx5_devx_cq {
+	struct mlx5_devx_obj *cq; /* The CQ DevX object. */
+	struct mlx5dv_devx_umem *umem_obj; /* The CQ umem object. */
+	union {
+		volatile void *umem_buf;
+		volatile struct mlx5_cqe *cqes; /* The CQ ring buffer. */
+	};
+	volatile uint32_t *db_rec; /* The CQ doorbell record. */
+};
+
+/* mlx5_common_devx.c */
+
+void mlx5_devx_cq_destroy(struct mlx5_devx_cq *cq);
+int mlx5_devx_cq_create(void *ctx, struct mlx5_devx_cq *cq_obj,
+			uint16_t log_desc_n, struct mlx5_devx_cq_attr *attr,
+			int socket);
+
+#endif /* RTE_PMD_MLX5_COMMON_DEVX_H_ */
-- 
1.8.3.1


^ permalink raw reply	[flat|nested] 57+ messages in thread

* [dpdk-dev] [PATCH v2 03/17] regex/mlx5: move DevX CQ creation to common
  2020-12-29  8:52   ` [dpdk-dev] [PATCH v2 00/17] common/mlx5: share DevX resources creations Michael Baum
  2020-12-29  8:52     ` [dpdk-dev] [PATCH v2 01/17] net/mlx5: fix ASO SQ creation error flow Michael Baum
  2020-12-29  8:52     ` [dpdk-dev] [PATCH v2 02/17] common/mlx5: share DevX CQ creation Michael Baum
@ 2020-12-29  8:52     ` Michael Baum
  2020-12-29  8:52     ` [dpdk-dev] [PATCH v2 04/17] vdpa/mlx5: " Michael Baum
                       ` (13 subsequent siblings)
  16 siblings, 0 replies; 57+ messages in thread
From: Michael Baum @ 2020-12-29  8:52 UTC (permalink / raw)
  To: dev; +Cc: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko

Use the common function for DevX CQ creation.

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
 drivers/regex/mlx5/mlx5_regex.c          |  6 ---
 drivers/regex/mlx5/mlx5_regex.h          |  9 +---
 drivers/regex/mlx5/mlx5_regex_control.c  | 91 ++++++--------------------------
 drivers/regex/mlx5/mlx5_regex_fastpath.c |  4 +-
 4 files changed, 20 insertions(+), 90 deletions(-)

diff --git a/drivers/regex/mlx5/mlx5_regex.c b/drivers/regex/mlx5/mlx5_regex.c
index c91c444..c0d6331 100644
--- a/drivers/regex/mlx5/mlx5_regex.c
+++ b/drivers/regex/mlx5/mlx5_regex.c
@@ -170,12 +170,6 @@
 		rte_errno = rte_errno ? rte_errno : EINVAL;
 		goto error;
 	}
-	ret = mlx5_glue->devx_query_eqn(ctx, 0, &priv->eqn);
-	if (ret) {
-		DRV_LOG(ERR, "can't query event queue number.");
-		rte_errno = ENOMEM;
-		goto error;
-	}
 	/*
 	 * This PMD always claims the write memory barrier on UAR
 	 * registers writings, it is safe to allocate UAR with any
diff --git a/drivers/regex/mlx5/mlx5_regex.h b/drivers/regex/mlx5/mlx5_regex.h
index 2c4877c..9f7a388 100644
--- a/drivers/regex/mlx5/mlx5_regex.h
+++ b/drivers/regex/mlx5/mlx5_regex.h
@@ -12,6 +12,7 @@
 
 #include <mlx5_common.h>
 #include <mlx5_common_mr.h>
+#include <mlx5_common_devx.h>
 
 #include "mlx5_rxp.h"
 
@@ -30,13 +31,8 @@ struct mlx5_regex_sq {
 
 struct mlx5_regex_cq {
 	uint32_t log_nb_desc; /* Log 2 number of desc for this object. */
-	struct mlx5_devx_obj *obj; /* The CQ DevX object. */
-	int64_t dbr_offset; /* Door bell record offset. */
-	uint32_t dbr_umem; /* Door bell record umem id. */
-	volatile struct mlx5_cqe *cqe; /* The CQ ring buffer. */
-	struct mlx5dv_devx_umem *cqe_umem; /* CQ buffer umem. */
+	struct mlx5_devx_cq cq_obj; /* The CQ DevX object. */
 	size_t ci;
-	uint32_t *dbr;
 };
 
 struct mlx5_regex_qp {
@@ -75,7 +71,6 @@ struct mlx5_regex_priv {
 	struct mlx5_regex_db db[MLX5_RXP_MAX_ENGINES +
 				MLX5_RXP_EM_COUNT];
 	uint32_t nb_engines; /* Number of RegEx engines. */
-	uint32_t eqn; /* EQ number. */
 	struct mlx5dv_devx_uar *uar; /* UAR object. */
 	struct ibv_pd *pd;
 	struct mlx5_dbr_page_list dbrpgs; /* Door-bell pages. */
diff --git a/drivers/regex/mlx5/mlx5_regex_control.c b/drivers/regex/mlx5/mlx5_regex_control.c
index d6f452b..ca6c0f5 100644
--- a/drivers/regex/mlx5/mlx5_regex_control.c
+++ b/drivers/regex/mlx5/mlx5_regex_control.c
@@ -6,6 +6,7 @@
 
 #include <rte_log.h>
 #include <rte_errno.h>
+#include <rte_memory.h>
 #include <rte_malloc.h>
 #include <rte_regexdev.h>
 #include <rte_regexdev_core.h>
@@ -17,6 +18,7 @@
 #include <mlx5_devx_cmds.h>
 #include <mlx5_prm.h>
 #include <mlx5_common_os.h>
+#include <mlx5_common_devx.h>
 
 #include "mlx5_regex.h"
 #include "mlx5_regex_utils.h"
@@ -44,8 +46,6 @@
 /**
  * destroy CQ.
  *
- * @param priv
- *   Pointer to the priv object.
  * @param cp
  *   Pointer to the CQ to be destroyed.
  *
@@ -53,24 +53,10 @@
  *   0 on success, a negative errno value otherwise and rte_errno is set.
  */
 static int
-regex_ctrl_destroy_cq(struct mlx5_regex_priv *priv, struct mlx5_regex_cq *cq)
+regex_ctrl_destroy_cq(struct mlx5_regex_cq *cq)
 {
-	if (cq->cqe_umem) {
-		mlx5_glue->devx_umem_dereg(cq->cqe_umem);
-		cq->cqe_umem = NULL;
-	}
-	if (cq->cqe) {
-		rte_free((void *)(uintptr_t)cq->cqe);
-		cq->cqe = NULL;
-	}
-	if (cq->dbr_offset) {
-		mlx5_release_dbr(&priv->dbrpgs, cq->dbr_umem, cq->dbr_offset);
-		cq->dbr_offset = -1;
-	}
-	if (cq->obj) {
-		mlx5_devx_cmd_destroy(cq->obj);
-		cq->obj = NULL;
-	}
+	mlx5_devx_cq_destroy(&cq->cq_obj);
+	memset(cq, 0, sizeof(*cq));
 	return 0;
 }
 
@@ -89,65 +75,20 @@
 regex_ctrl_create_cq(struct mlx5_regex_priv *priv, struct mlx5_regex_cq *cq)
 {
 	struct mlx5_devx_cq_attr attr = {
-		.q_umem_valid = 1,
-		.db_umem_valid = 1,
-		.eqn = priv->eqn,
+		.uar_page_id = priv->uar->page_id,
 	};
-	struct mlx5_devx_dbr_page *dbr_page = NULL;
-	void *buf = NULL;
-	size_t pgsize = sysconf(_SC_PAGESIZE);
-	uint32_t cq_size = 1 << cq->log_nb_desc;
-	uint32_t i;
-
-	cq->dbr_offset = mlx5_get_dbr(priv->ctx, &priv->dbrpgs, &dbr_page);
-	if (cq->dbr_offset < 0) {
-		DRV_LOG(ERR, "Can't allocate cq door bell record.");
-		rte_errno  = ENOMEM;
-		goto error;
-	}
-	cq->dbr_umem = mlx5_os_get_umem_id(dbr_page->umem);
-	cq->dbr = (uint32_t *)((uintptr_t)dbr_page->dbrs +
-			       (uintptr_t)cq->dbr_offset);
+	int ret;
 
-	buf = rte_calloc(NULL, 1, sizeof(struct mlx5_cqe) * cq_size, 4096);
-	if (!buf) {
-		DRV_LOG(ERR, "Can't allocate cqe buffer.");
-		rte_errno  = ENOMEM;
-		goto error;
-	}
-	cq->cqe = buf;
-	for (i = 0; i < cq_size; i++)
-		cq->cqe[i].op_own = 0xff;
-	cq->cqe_umem = mlx5_glue->devx_umem_reg(priv->ctx, buf,
-						sizeof(struct mlx5_cqe) *
-						cq_size, 7);
 	cq->ci = 0;
-	if (!cq->cqe_umem) {
-		DRV_LOG(ERR, "Can't register cqe mem.");
-		rte_errno  = ENOMEM;
-		goto error;
-	}
-	attr.db_umem_offset = cq->dbr_offset;
-	attr.db_umem_id = cq->dbr_umem;
-	attr.q_umem_id = mlx5_os_get_umem_id(cq->cqe_umem);
-	attr.log_cq_size = cq->log_nb_desc;
-	attr.uar_page_id = priv->uar->page_id;
-	attr.log_page_size = rte_log2_u32(pgsize);
-	cq->obj = mlx5_devx_cmd_create_cq(priv->ctx, &attr);
-	if (!cq->obj) {
-		DRV_LOG(ERR, "Can't create cq object.");
-		rte_errno  = ENOMEM;
-		goto error;
+	ret = mlx5_devx_cq_create(priv->ctx, &cq->cq_obj, cq->log_nb_desc,
+				  &attr, SOCKET_ID_ANY);
+	if (ret) {
+		DRV_LOG(ERR, "Can't create CQ object.");
+		memset(cq, 0, sizeof(*cq));
+		rte_errno = ENOMEM;
+		return -rte_errno;
 	}
 	return 0;
-error:
-	if (cq->cqe_umem)
-		mlx5_glue->devx_umem_dereg(cq->cqe_umem);
-	if (buf)
-		rte_free(buf);
-	if (cq->dbr_offset)
-		mlx5_release_dbr(&priv->dbrpgs, cq->dbr_umem, cq->dbr_offset);
-	return -rte_errno;
 }
 
 #ifdef HAVE_IBV_FLOW_DV_SUPPORT
@@ -232,7 +173,7 @@
 	attr.tis_lst_sz = 0;
 	attr.tis_num = 0;
 	attr.user_index = q_ind;
-	attr.cqn = qp->cq.obj->id;
+	attr.cqn = qp->cq.cq_obj.cq->id;
 	wq_attr->uar_page = priv->uar->page_id;
 	regex_get_pdn(priv->pd, &pd_num);
 	wq_attr->pd = pd_num;
@@ -389,7 +330,7 @@
 err_btree:
 	for (i = 0; i < nb_sq_config; i++)
 		regex_ctrl_destroy_sq(priv, qp, i);
-	regex_ctrl_destroy_cq(priv, &qp->cq);
+	regex_ctrl_destroy_cq(&qp->cq);
 err_cq:
 	rte_free(qp->sqs);
 	return ret;
diff --git a/drivers/regex/mlx5/mlx5_regex_fastpath.c b/drivers/regex/mlx5/mlx5_regex_fastpath.c
index 5857617..255fd40 100644
--- a/drivers/regex/mlx5/mlx5_regex_fastpath.c
+++ b/drivers/regex/mlx5/mlx5_regex_fastpath.c
@@ -224,7 +224,7 @@ struct mlx5_regex_job {
 	size_t next_cqe_offset;
 
 	next_cqe_offset =  (cq->ci & (cq_size_get(cq) - 1));
-	cqe = (volatile struct mlx5_cqe *)(cq->cqe + next_cqe_offset);
+	cqe = (volatile struct mlx5_cqe *)(cq->cq_obj.cqes + next_cqe_offset);
 	rte_io_wmb();
 
 	int ret = check_cqe(cqe, cq_size_get(cq), cq->ci);
@@ -285,7 +285,7 @@ struct mlx5_regex_job {
 		}
 		cq->ci = (cq->ci + 1) & 0xffffff;
 		rte_wmb();
-		cq->dbr[0] = rte_cpu_to_be_32(cq->ci);
+		cq->cq_obj.db_rec[0] = rte_cpu_to_be_32(cq->ci);
 		queue->free_sqs |= (1 << sqid);
 	}
 
-- 
1.8.3.1


^ permalink raw reply	[flat|nested] 57+ messages in thread

* [dpdk-dev] [PATCH v2 04/17] vdpa/mlx5: move DevX CQ creation to common
  2020-12-29  8:52   ` [dpdk-dev] [PATCH v2 00/17] common/mlx5: share DevX resources creations Michael Baum
                       ` (2 preceding siblings ...)
  2020-12-29  8:52     ` [dpdk-dev] [PATCH v2 03/17] regex/mlx5: move DevX CQ creation to common Michael Baum
@ 2020-12-29  8:52     ` Michael Baum
  2020-12-29  8:52     ` [dpdk-dev] [PATCH v2 05/17] net/mlx5: move rearm and clock queue " Michael Baum
                       ` (12 subsequent siblings)
  16 siblings, 0 replies; 57+ messages in thread
From: Michael Baum @ 2020-12-29  8:52 UTC (permalink / raw)
  To: dev; +Cc: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko

Use the common function for DevX CQ creation.

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
 drivers/vdpa/mlx5/mlx5_vdpa.h       | 10 +----
 drivers/vdpa/mlx5/mlx5_vdpa_event.c | 81 +++++++++++--------------------------
 drivers/vdpa/mlx5/mlx5_vdpa_virtq.c |  2 +-
 3 files changed, 26 insertions(+), 67 deletions(-)

diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.h b/drivers/vdpa/mlx5/mlx5_vdpa.h
index d039ada..ddee9dc 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa.h
+++ b/drivers/vdpa/mlx5/mlx5_vdpa.h
@@ -22,6 +22,7 @@
 
 #include <mlx5_glue.h>
 #include <mlx5_devx_cmds.h>
+#include <mlx5_common_devx.h>
 #include <mlx5_prm.h>
 
 
@@ -46,13 +47,7 @@ struct mlx5_vdpa_cq {
 	uint32_t armed:1;
 	int callfd;
 	rte_spinlock_t sl;
-	struct mlx5_devx_obj *cq;
-	struct mlx5dv_devx_umem *umem_obj;
-	union {
-		volatile void *umem_buf;
-		volatile struct mlx5_cqe *cqes;
-	};
-	volatile uint32_t *db_rec;
+	struct mlx5_devx_cq cq_obj;
 	uint64_t errors;
 };
 
@@ -144,7 +139,6 @@ struct mlx5_vdpa_priv {
 	uint32_t gpa_mkey_index;
 	struct ibv_mr *null_mr;
 	struct rte_vhost_memory *vmem;
-	uint32_t eqn;
 	struct mlx5dv_devx_event_channel *eventc;
 	struct mlx5dv_devx_event_channel *err_chnl;
 	struct mlx5dv_devx_uar *uar;
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_event.c b/drivers/vdpa/mlx5/mlx5_vdpa_event.c
index 3aeaeb8..ef92338 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa_event.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa_event.c
@@ -7,6 +7,7 @@
 #include <sys/eventfd.h>
 
 #include <rte_malloc.h>
+#include <rte_memory.h>
 #include <rte_errno.h>
 #include <rte_lcore.h>
 #include <rte_atomic.h>
@@ -15,6 +16,7 @@
 #include <rte_alarm.h>
 
 #include <mlx5_common.h>
+#include <mlx5_common_devx.h>
 #include <mlx5_glue.h>
 
 #include "mlx5_vdpa_utils.h"
@@ -47,7 +49,6 @@
 		priv->eventc = NULL;
 	}
 #endif
-	priv->eqn = 0;
 }
 
 /* Prepare all the global resources for all the event objects.*/
@@ -58,11 +59,6 @@
 
 	if (priv->eventc)
 		return 0;
-	if (mlx5_glue->devx_query_eqn(priv->ctx, 0, &priv->eqn)) {
-		rte_errno = errno;
-		DRV_LOG(ERR, "Failed to query EQ number %d.", rte_errno);
-		return -1;
-	}
 	priv->eventc = mlx5_glue->devx_create_event_channel(priv->ctx,
 			   MLX5DV_DEVX_CREATE_EVENT_CHANNEL_FLAGS_OMIT_EV_DATA);
 	if (!priv->eventc) {
@@ -97,12 +93,7 @@
 static void
 mlx5_vdpa_cq_destroy(struct mlx5_vdpa_cq *cq)
 {
-	if (cq->cq)
-		claim_zero(mlx5_devx_cmd_destroy(cq->cq));
-	if (cq->umem_obj)
-		claim_zero(mlx5_glue->devx_umem_dereg(cq->umem_obj));
-	if (cq->umem_buf)
-		rte_free((void *)(uintptr_t)cq->umem_buf);
+	mlx5_devx_cq_destroy(&cq->cq_obj);
 	memset(cq, 0, sizeof(*cq));
 }
 
@@ -112,12 +103,12 @@
 	uint32_t arm_sn = cq->arm_sn << MLX5_CQ_SQN_OFFSET;
 	uint32_t cq_ci = cq->cq_ci & MLX5_CI_MASK;
 	uint32_t doorbell_hi = arm_sn | MLX5_CQ_DBR_CMD_ALL | cq_ci;
-	uint64_t doorbell = ((uint64_t)doorbell_hi << 32) | cq->cq->id;
+	uint64_t doorbell = ((uint64_t)doorbell_hi << 32) | cq->cq_obj.cq->id;
 	uint64_t db_be = rte_cpu_to_be_64(doorbell);
 	uint32_t *addr = RTE_PTR_ADD(priv->uar->base_addr, MLX5_CQ_DOORBELL);
 
 	rte_io_wmb();
-	cq->db_rec[MLX5_CQ_ARM_DB] = rte_cpu_to_be_32(doorbell_hi);
+	cq->cq_obj.db_rec[MLX5_CQ_ARM_DB] = rte_cpu_to_be_32(doorbell_hi);
 	rte_wmb();
 #ifdef RTE_ARCH_64
 	*(uint64_t *)addr = db_be;
@@ -134,49 +125,23 @@
 mlx5_vdpa_cq_create(struct mlx5_vdpa_priv *priv, uint16_t log_desc_n,
 		    int callfd, struct mlx5_vdpa_cq *cq)
 {
-	struct mlx5_devx_cq_attr attr = {0};
-	size_t pgsize = sysconf(_SC_PAGESIZE);
-	uint32_t umem_size;
+	struct mlx5_devx_cq_attr attr = {
+		.use_first_only = 1,
+		.uar_page_id = priv->uar->page_id,
+	};
 	uint16_t event_nums[1] = {0};
-	uint16_t cq_size = 1 << log_desc_n;
 	int ret;
 
-	cq->log_desc_n = log_desc_n;
-	umem_size = sizeof(struct mlx5_cqe) * cq_size + sizeof(*cq->db_rec) * 2;
-	cq->umem_buf = rte_zmalloc(__func__, umem_size, 4096);
-	if (!cq->umem_buf) {
-		DRV_LOG(ERR, "Failed to allocate memory for CQ.");
-		rte_errno = ENOMEM;
-		return -ENOMEM;
-	}
-	cq->umem_obj = mlx5_glue->devx_umem_reg(priv->ctx,
-						(void *)(uintptr_t)cq->umem_buf,
-						umem_size,
-						IBV_ACCESS_LOCAL_WRITE);
-	if (!cq->umem_obj) {
-		DRV_LOG(ERR, "Failed to register umem for CQ.");
-		goto error;
-	}
-	attr.q_umem_valid = 1;
-	attr.db_umem_valid = 1;
-	attr.use_first_only = 1;
-	attr.overrun_ignore = 0;
-	attr.uar_page_id = priv->uar->page_id;
-	attr.q_umem_id = cq->umem_obj->umem_id;
-	attr.q_umem_offset = 0;
-	attr.db_umem_id = cq->umem_obj->umem_id;
-	attr.db_umem_offset = sizeof(struct mlx5_cqe) * cq_size;
-	attr.eqn = priv->eqn;
-	attr.log_cq_size = log_desc_n;
-	attr.log_page_size = rte_log2_u32(pgsize);
-	cq->cq = mlx5_devx_cmd_create_cq(priv->ctx, &attr);
-	if (!cq->cq)
+	ret = mlx5_devx_cq_create(priv->ctx, &cq->cq_obj, log_desc_n, &attr,
+				  SOCKET_ID_ANY);
+	if (ret)
 		goto error;
-	cq->db_rec = RTE_PTR_ADD(cq->umem_buf, (uintptr_t)attr.db_umem_offset);
 	cq->cq_ci = 0;
+	cq->log_desc_n = log_desc_n;
 	rte_spinlock_init(&cq->sl);
 	/* Subscribe CQ event to the event channel controlled by the driver. */
-	ret = mlx5_glue->devx_subscribe_devx_event(priv->eventc, cq->cq->obj,
+	ret = mlx5_glue->devx_subscribe_devx_event(priv->eventc,
+						   cq->cq_obj.cq->obj,
 						   sizeof(event_nums),
 						   event_nums,
 						   (uint64_t)(uintptr_t)cq);
@@ -187,8 +152,8 @@
 	}
 	cq->callfd = callfd;
 	/* Init CQ to ones to be in HW owner in the start. */
-	cq->cqes[0].op_own = MLX5_CQE_OWNER_MASK;
-	cq->cqes[0].wqe_counter = rte_cpu_to_be_16(UINT16_MAX);
+	cq->cq_obj.cqes[0].op_own = MLX5_CQE_OWNER_MASK;
+	cq->cq_obj.cqes[0].wqe_counter = rte_cpu_to_be_16(UINT16_MAX);
 	/* First arming. */
 	mlx5_vdpa_cq_arm(priv, cq);
 	return 0;
@@ -215,7 +180,7 @@
 	uint16_t cur_wqe_counter;
 	uint16_t comp;
 
-	last_word.word = rte_read32(&cq->cqes[0].wqe_counter);
+	last_word.word = rte_read32(&cq->cq_obj.cqes[0].wqe_counter);
 	cur_wqe_counter = rte_be_to_cpu_16(last_word.wqe_counter);
 	comp = cur_wqe_counter + (uint16_t)1 - next_wqe_counter;
 	if (comp) {
@@ -229,7 +194,7 @@
 			cq->errors++;
 		rte_io_wmb();
 		/* Ring CQ doorbell record. */
-		cq->db_rec[0] = rte_cpu_to_be_32(cq->cq_ci);
+		cq->cq_obj.db_rec[0] = rte_cpu_to_be_32(cq->cq_ci);
 		rte_io_wmb();
 		/* Ring SW QP doorbell record. */
 		eqp->db_rec[0] = rte_cpu_to_be_32(cq->cq_ci + cq_size);
@@ -245,7 +210,7 @@
 
 	for (i = 0; i < priv->nr_virtqs; i++) {
 		cq = &priv->virtqs[i].eqp.cq;
-		if (cq->cq && !cq->armed)
+		if (cq->cq_obj.cq && !cq->armed)
 			mlx5_vdpa_cq_arm(priv, cq);
 	}
 }
@@ -290,7 +255,7 @@
 		pthread_mutex_lock(&priv->vq_config_lock);
 		for (i = 0; i < priv->nr_virtqs; i++) {
 			cq = &priv->virtqs[i].eqp.cq;
-			if (cq->cq && !cq->armed) {
+			if (cq->cq_obj.cq && !cq->armed) {
 				uint32_t comp = mlx5_vdpa_cq_poll(cq);
 
 				if (comp) {
@@ -369,7 +334,7 @@
 		DRV_LOG(DEBUG, "Device %s virtq %d cq %d event was captured."
 			" Timer is %s, cq ci is %u.\n",
 			priv->vdev->device->name,
-			(int)virtq->index, cq->cq->id,
+			(int)virtq->index, cq->cq_obj.cq->id,
 			priv->timer_on ? "on" : "off", cq->cq_ci);
 		cq->armed = 0;
 	}
@@ -679,7 +644,7 @@
 		goto error;
 	}
 	attr.uar_index = priv->uar->page_id;
-	attr.cqn = eqp->cq.cq->id;
+	attr.cqn = eqp->cq.cq_obj.cq->id;
 	attr.log_page_size = rte_log2_u32(sysconf(_SC_PAGESIZE));
 	attr.rq_size = 1 << log_desc_n;
 	attr.log_rq_stride = rte_log2_u32(MLX5_WSEG_SIZE);
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
index 3e882e4..cc77314 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
@@ -497,7 +497,7 @@
 		return -1;
 	if (vq.size != virtq->vq_size || vq.kickfd != virtq->intr_handle.fd)
 		return 1;
-	if (virtq->eqp.cq.cq) {
+	if (virtq->eqp.cq.cq_obj.cq) {
 		if (vq.callfd != virtq->eqp.cq.callfd)
 			return 1;
 	} else if (vq.callfd != -1) {
-- 
1.8.3.1


^ permalink raw reply	[flat|nested] 57+ messages in thread

* [dpdk-dev] [PATCH v2 05/17] net/mlx5: move rearm and clock queue CQ creation to common
  2020-12-29  8:52   ` [dpdk-dev] [PATCH v2 00/17] common/mlx5: share DevX resources creations Michael Baum
                       ` (3 preceding siblings ...)
  2020-12-29  8:52     ` [dpdk-dev] [PATCH v2 04/17] vdpa/mlx5: " Michael Baum
@ 2020-12-29  8:52     ` Michael Baum
  2020-12-29  8:52     ` [dpdk-dev] [PATCH v2 06/17] net/mlx5: move ASO " Michael Baum
                       ` (11 subsequent siblings)
  16 siblings, 0 replies; 57+ messages in thread
From: Michael Baum @ 2020-12-29  8:52 UTC (permalink / raw)
  To: dev; +Cc: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko

Use the common function for CQ creation at the rearm queue and the clock
queue.

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
 drivers/net/mlx5/mlx5.h      |   9 +--
 drivers/net/mlx5/mlx5_rxtx.c |   2 +-
 drivers/net/mlx5/mlx5_txpp.c | 147 +++++++++++--------------------------------
 3 files changed, 40 insertions(+), 118 deletions(-)
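
At both call sites the open-coded buffer allocation, umem registration and
doorbell handling collapse into an attribute structure and a single call.
A sketch of the new flow (clock queue flavor, names as in the diff below):

    struct mlx5_devx_cq_attr cq_attr = {
        .cqe_size = (sizeof(struct mlx5_cqe) == 128) ?
                    MLX5_CQE_SIZE_128B : MLX5_CQE_SIZE_64B,
        .use_first_only = 1,  /* HW reports only into CQE[0]. */
        .overrun_ignore = 1,
        .uar_page_id = mlx5_os_get_devx_uar_page_id(sh->tx_uar),
    };

    /* Allocates the CQEs and doorbell record, registers the umem, creates
     * the CQ and marks all CQEs invalid, which is why the explicit
     * mlx5_txpp_fill_cqe_rearm_queue() initialization can be dropped.
     */
    ret = mlx5_devx_cq_create(sh->ctx, &wq->cq_obj,
                              log2above(MLX5_TXPP_CLKQ_SIZE), &cq_attr,
                              sh->numa_node);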

diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 121d726..00ccaee 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -26,6 +26,7 @@
 #include <mlx5_prm.h>
 #include <mlx5_common_mp.h>
 #include <mlx5_common_mr.h>
+#include <mlx5_common_devx.h>
 
 #include "mlx5_defs.h"
 #include "mlx5_utils.h"
@@ -612,13 +613,7 @@ struct mlx5_flow_id_pool {
 /* Tx pacing queue structure - for Clock and Rearm queues. */
 struct mlx5_txpp_wq {
 	/* Completion Queue related data.*/
-	struct mlx5_devx_obj *cq;
-	void *cq_umem;
-	union {
-		volatile void *cq_buf;
-		volatile struct mlx5_cqe *cqes;
-	};
-	volatile uint32_t *cq_dbrec;
+	struct mlx5_devx_cq cq_obj;
 	uint32_t cq_ci:24;
 	uint32_t arm_sn:2;
 	/* Send Queue related data.*/
diff --git a/drivers/net/mlx5/mlx5_rxtx.c b/drivers/net/mlx5/mlx5_rxtx.c
index d12d746..dad24a3 100644
--- a/drivers/net/mlx5/mlx5_rxtx.c
+++ b/drivers/net/mlx5/mlx5_rxtx.c
@@ -2277,7 +2277,7 @@ enum mlx5_txcmp_code {
 
 	qs = RTE_PTR_ADD(wqe, MLX5_WSEG_SIZE);
 	qs->max_index = rte_cpu_to_be_32(wci);
-	qs->qpn_cqn = rte_cpu_to_be_32(txq->sh->txpp.clock_queue.cq->id);
+	qs->qpn_cqn = rte_cpu_to_be_32(txq->sh->txpp.clock_queue.cq_obj.cq->id);
 	qs->reserved0 = RTE_BE32(0);
 	qs->reserved1 = RTE_BE32(0);
 }
diff --git a/drivers/net/mlx5/mlx5_txpp.c b/drivers/net/mlx5/mlx5_txpp.c
index 2438bf1..54ea572 100644
--- a/drivers/net/mlx5/mlx5_txpp.c
+++ b/drivers/net/mlx5/mlx5_txpp.c
@@ -13,6 +13,7 @@
 #include <rte_eal_paging.h>
 
 #include <mlx5_malloc.h>
+#include <mlx5_common_devx.h>
 
 #include "mlx5.h"
 #include "mlx5_rxtx.h"
@@ -126,12 +127,7 @@
 		claim_zero(mlx5_glue->devx_umem_dereg(wq->sq_umem));
 	if (wq->sq_buf)
 		mlx5_free((void *)(uintptr_t)wq->sq_buf);
-	if (wq->cq)
-		claim_zero(mlx5_devx_cmd_destroy(wq->cq));
-	if (wq->cq_umem)
-		claim_zero(mlx5_glue->devx_umem_dereg(wq->cq_umem));
-	if (wq->cq_buf)
-		mlx5_free((void *)(uintptr_t)wq->cq_buf);
+	mlx5_devx_cq_destroy(&wq->cq_obj);
 	memset(wq, 0, sizeof(*wq));
 }
 
@@ -181,19 +177,6 @@
 }
 
 static void
-mlx5_txpp_fill_cqe_rearm_queue(struct mlx5_dev_ctx_shared *sh)
-{
-	struct mlx5_txpp_wq *wq = &sh->txpp.rearm_queue;
-	struct mlx5_cqe *cqe = (struct mlx5_cqe *)(uintptr_t)wq->cqes;
-	uint32_t i;
-
-	for (i = 0; i < MLX5_TXPP_REARM_CQ_SIZE; i++) {
-		cqe->op_own = (MLX5_CQE_INVALID << 4) | MLX5_CQE_OWNER_MASK;
-		++cqe;
-	}
-}
-
-static void
 mlx5_txpp_fill_wqe_rearm_queue(struct mlx5_dev_ctx_shared *sh)
 {
 	struct mlx5_txpp_wq *wq = &sh->txpp.rearm_queue;
@@ -228,7 +211,8 @@
 		index = (i * MLX5_TXPP_REARM / 2 + MLX5_TXPP_REARM / 2) &
 			((1 << MLX5_CQ_INDEX_WIDTH) - 1);
 		qs->max_index = rte_cpu_to_be_32(index);
-		qs->qpn_cqn = rte_cpu_to_be_32(sh->txpp.clock_queue.cq->id);
+		qs->qpn_cqn =
+			   rte_cpu_to_be_32(sh->txpp.clock_queue.cq_obj.cq->id);
 	}
 }
 
@@ -238,7 +222,11 @@
 {
 	struct mlx5_devx_create_sq_attr sq_attr = { 0 };
 	struct mlx5_devx_modify_sq_attr msq_attr = { 0 };
-	struct mlx5_devx_cq_attr cq_attr = { 0 };
+	struct mlx5_devx_cq_attr cq_attr = {
+		.cqe_size = (sizeof(struct mlx5_cqe) == 128) ?
+					 MLX5_CQE_SIZE_128B : MLX5_CQE_SIZE_64B,
+		.uar_page_id = mlx5_os_get_devx_uar_page_id(sh->tx_uar),
+	};
 	struct mlx5_txpp_wq *wq = &sh->txpp.rearm_queue;
 	size_t page_size;
 	uint32_t umem_size, umem_dbrec;
@@ -249,50 +237,16 @@
 		DRV_LOG(ERR, "Failed to get mem page size");
 		return -ENOMEM;
 	}
-	/* Allocate memory buffer for CQEs and doorbell record. */
-	umem_size = sizeof(struct mlx5_cqe) * MLX5_TXPP_REARM_CQ_SIZE;
-	umem_dbrec = RTE_ALIGN(umem_size, MLX5_DBR_SIZE);
-	umem_size += MLX5_DBR_SIZE;
-	wq->cq_buf = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO, umem_size,
-				 page_size, sh->numa_node);
-	if (!wq->cq_buf) {
-		DRV_LOG(ERR, "Failed to allocate memory for Rearm Queue.");
-		return -ENOMEM;
-	}
-	/* Register allocated buffer in user space with DevX. */
-	wq->cq_umem = mlx5_glue->devx_umem_reg(sh->ctx,
-					       (void *)(uintptr_t)wq->cq_buf,
-					       umem_size,
-					       IBV_ACCESS_LOCAL_WRITE);
-	if (!wq->cq_umem) {
-		rte_errno = errno;
-		DRV_LOG(ERR, "Failed to register umem for Rearm Queue.");
-		goto error;
-	}
 	/* Create completion queue object for Rearm Queue. */
-	cq_attr.cqe_size = (sizeof(struct mlx5_cqe) == 128) ?
-			    MLX5_CQE_SIZE_128B : MLX5_CQE_SIZE_64B;
-	cq_attr.uar_page_id = mlx5_os_get_devx_uar_page_id(sh->tx_uar);
-	cq_attr.eqn = sh->eqn;
-	cq_attr.q_umem_valid = 1;
-	cq_attr.q_umem_offset = 0;
-	cq_attr.q_umem_id = mlx5_os_get_umem_id(wq->cq_umem);
-	cq_attr.db_umem_valid = 1;
-	cq_attr.db_umem_offset = umem_dbrec;
-	cq_attr.db_umem_id = mlx5_os_get_umem_id(wq->cq_umem);
-	cq_attr.log_cq_size = rte_log2_u32(MLX5_TXPP_REARM_CQ_SIZE);
-	cq_attr.log_page_size = rte_log2_u32(page_size);
-	wq->cq = mlx5_devx_cmd_create_cq(sh->ctx, &cq_attr);
-	if (!wq->cq) {
-		rte_errno = errno;
+	ret = mlx5_devx_cq_create(sh->ctx, &wq->cq_obj,
+				  log2above(MLX5_TXPP_REARM_CQ_SIZE), &cq_attr,
+				  sh->numa_node);
+	if (ret) {
 		DRV_LOG(ERR, "Failed to create CQ for Rearm Queue.");
-		goto error;
+		return ret;
 	}
-	wq->cq_dbrec = RTE_PTR_ADD(wq->cq_buf, umem_dbrec);
 	wq->cq_ci = 0;
 	wq->arm_sn = 0;
-	/* Mark all CQEs initially as invalid. */
-	mlx5_txpp_fill_cqe_rearm_queue(sh);
 	/*
 	 * Allocate memory buffer for Send Queue WQEs.
 	 * There should be no WQE leftovers in the cyclic queue.
@@ -323,7 +277,7 @@
 	sq_attr.state = MLX5_SQC_STATE_RST;
 	sq_attr.tis_lst_sz = 1;
 	sq_attr.tis_num = sh->tis->id;
-	sq_attr.cqn = wq->cq->id;
+	sq_attr.cqn = wq->cq_obj.cq->id;
 	sq_attr.cd_master = 1;
 	sq_attr.wq_attr.uar_page = mlx5_os_get_devx_uar_page_id(sh->tx_uar);
 	sq_attr.wq_attr.wq_type = MLX5_WQ_TYPE_CYCLIC;
@@ -466,7 +420,13 @@
 {
 	struct mlx5_devx_create_sq_attr sq_attr = { 0 };
 	struct mlx5_devx_modify_sq_attr msq_attr = { 0 };
-	struct mlx5_devx_cq_attr cq_attr = { 0 };
+	struct mlx5_devx_cq_attr cq_attr = {
+		.cqe_size = (sizeof(struct mlx5_cqe) == 128) ?
+					 MLX5_CQE_SIZE_128B : MLX5_CQE_SIZE_64B,
+		.use_first_only = 1,
+		.overrun_ignore = 1,
+		.uar_page_id = mlx5_os_get_devx_uar_page_id(sh->tx_uar),
+	};
 	struct mlx5_txpp_wq *wq = &sh->txpp.clock_queue;
 	size_t page_size;
 	uint32_t umem_size, umem_dbrec;
@@ -487,48 +447,14 @@
 	}
 	sh->txpp.ts_p = 0;
 	sh->txpp.ts_n = 0;
-	/* Allocate memory buffer for CQEs and doorbell record. */
-	umem_size = sizeof(struct mlx5_cqe) * MLX5_TXPP_CLKQ_SIZE;
-	umem_dbrec = RTE_ALIGN(umem_size, MLX5_DBR_SIZE);
-	umem_size += MLX5_DBR_SIZE;
-	wq->cq_buf = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO, umem_size,
-					page_size, sh->numa_node);
-	if (!wq->cq_buf) {
-		DRV_LOG(ERR, "Failed to allocate memory for Clock Queue.");
-		return -ENOMEM;
-	}
-	/* Register allocated buffer in user space with DevX. */
-	wq->cq_umem = mlx5_glue->devx_umem_reg(sh->ctx,
-					       (void *)(uintptr_t)wq->cq_buf,
-					       umem_size,
-					       IBV_ACCESS_LOCAL_WRITE);
-	if (!wq->cq_umem) {
-		rte_errno = errno;
-		DRV_LOG(ERR, "Failed to register umem for Clock Queue.");
-		goto error;
-	}
 	/* Create completion queue object for Clock Queue. */
-	cq_attr.cqe_size = (sizeof(struct mlx5_cqe) == 128) ?
-			    MLX5_CQE_SIZE_128B : MLX5_CQE_SIZE_64B;
-	cq_attr.use_first_only = 1;
-	cq_attr.overrun_ignore = 1;
-	cq_attr.uar_page_id = mlx5_os_get_devx_uar_page_id(sh->tx_uar);
-	cq_attr.eqn = sh->eqn;
-	cq_attr.q_umem_valid = 1;
-	cq_attr.q_umem_offset = 0;
-	cq_attr.q_umem_id = mlx5_os_get_umem_id(wq->cq_umem);
-	cq_attr.db_umem_valid = 1;
-	cq_attr.db_umem_offset = umem_dbrec;
-	cq_attr.db_umem_id = mlx5_os_get_umem_id(wq->cq_umem);
-	cq_attr.log_cq_size = rte_log2_u32(MLX5_TXPP_CLKQ_SIZE);
-	cq_attr.log_page_size = rte_log2_u32(page_size);
-	wq->cq = mlx5_devx_cmd_create_cq(sh->ctx, &cq_attr);
-	if (!wq->cq) {
-		rte_errno = errno;
+	ret = mlx5_devx_cq_create(sh->ctx, &wq->cq_obj,
+				  log2above(MLX5_TXPP_CLKQ_SIZE), &cq_attr,
+				  sh->numa_node);
+	if (ret) {
 		DRV_LOG(ERR, "Failed to create CQ for Clock Queue.");
 		goto error;
 	}
-	wq->cq_dbrec = RTE_PTR_ADD(wq->cq_buf, umem_dbrec);
 	wq->cq_ci = 0;
 	/* Allocate memory buffer for Send Queue WQEs. */
 	if (sh->txpp.test) {
@@ -574,7 +500,7 @@
 		sq_attr.static_sq_wq = 1;
 	}
 	sq_attr.state = MLX5_SQC_STATE_RST;
-	sq_attr.cqn = wq->cq->id;
+	sq_attr.cqn = wq->cq_obj.cq->id;
 	sq_attr.packet_pacing_rate_limit_index = sh->txpp.pp_id;
 	sq_attr.wq_attr.cd_slave = 1;
 	sq_attr.wq_attr.uar_page = mlx5_os_get_devx_uar_page_id(sh->tx_uar);
@@ -625,12 +551,13 @@
 	struct mlx5_txpp_wq *aq = &sh->txpp.rearm_queue;
 	uint32_t arm_sn = aq->arm_sn << MLX5_CQ_SQN_OFFSET;
 	uint32_t db_hi = arm_sn | MLX5_CQ_DBR_CMD_ALL | aq->cq_ci;
-	uint64_t db_be = rte_cpu_to_be_64(((uint64_t)db_hi << 32) | aq->cq->id);
+	uint64_t db_be =
+		rte_cpu_to_be_64(((uint64_t)db_hi << 32) | aq->cq_obj.cq->id);
 	base_addr = mlx5_os_get_devx_uar_base_addr(sh->tx_uar);
 	uint32_t *addr = RTE_PTR_ADD(base_addr, MLX5_CQ_DOORBELL);
 
 	rte_compiler_barrier();
-	aq->cq_dbrec[MLX5_CQ_ARM_DB] = rte_cpu_to_be_32(db_hi);
+	aq->cq_obj.db_rec[MLX5_CQ_ARM_DB] = rte_cpu_to_be_32(db_hi);
 	rte_wmb();
 #ifdef RTE_ARCH_64
 	*(uint64_t *)addr = db_be;
@@ -728,7 +655,7 @@
 mlx5_txpp_update_timestamp(struct mlx5_dev_ctx_shared *sh)
 {
 	struct mlx5_txpp_wq *wq = &sh->txpp.clock_queue;
-	struct mlx5_cqe *cqe = (struct mlx5_cqe *)(uintptr_t)wq->cqes;
+	struct mlx5_cqe *cqe = (struct mlx5_cqe *)(uintptr_t)wq->cq_obj.cqes;
 	union {
 		rte_int128_t u128;
 		struct mlx5_cqe_ts cts;
@@ -809,7 +736,7 @@
 	do {
 		volatile struct mlx5_cqe *cqe;
 
-		cqe = &wq->cqes[cq_ci & (MLX5_TXPP_REARM_CQ_SIZE - 1)];
+		cqe = &wq->cq_obj.cqes[cq_ci & (MLX5_TXPP_REARM_CQ_SIZE - 1)];
 		ret = check_cqe(cqe, MLX5_TXPP_REARM_CQ_SIZE, cq_ci);
 		switch (ret) {
 		case MLX5_CQE_STATUS_ERR:
@@ -841,7 +768,7 @@
 		}
 		/* Update doorbell record to notify hardware. */
 		rte_compiler_barrier();
-		*wq->cq_dbrec = rte_cpu_to_be_32(cq_ci);
+		*wq->cq_obj.db_rec = rte_cpu_to_be_32(cq_ci);
 		rte_wmb();
 		wq->cq_ci = cq_ci;
 		/* Fire new requests to Rearm Queue. */
@@ -936,9 +863,8 @@
 	}
 	/* Subscribe CQ event to the event channel controlled by the driver. */
 	ret = mlx5_glue->devx_subscribe_devx_event(sh->txpp.echan,
-						   sh->txpp.rearm_queue.cq->obj,
-						   sizeof(event_nums),
-						   event_nums, 0);
+					    sh->txpp.rearm_queue.cq_obj.cq->obj,
+					     sizeof(event_nums), event_nums, 0);
 	if (ret) {
 		DRV_LOG(ERR, "Failed to subscribe CQE event.");
 		rte_errno = errno;
@@ -1140,7 +1066,8 @@
 
 	if (sh->txpp.refcnt) {
 		struct mlx5_txpp_wq *wq = &sh->txpp.clock_queue;
-		struct mlx5_cqe *cqe = (struct mlx5_cqe *)(uintptr_t)wq->cqes;
+		struct mlx5_cqe *cqe =
+				(struct mlx5_cqe *)(uintptr_t)wq->cq_obj.cqes;
 		union {
 			rte_int128_t u128;
 			struct mlx5_cqe_ts cts;
-- 
1.8.3.1


^ permalink raw reply	[flat|nested] 57+ messages in thread

* [dpdk-dev] [PATCH v2 06/17] net/mlx5: move ASO CQ creation to common
  2020-12-29  8:52   ` [dpdk-dev] [PATCH v2 00/17] common/mlx5: share DevX resources creations Michael Baum
                       ` (4 preceding siblings ...)
  2020-12-29  8:52     ` [dpdk-dev] [PATCH v2 05/17] net/mlx5: move rearm and clock queue " Michael Baum
@ 2020-12-29  8:52     ` Michael Baum
  2020-12-29  8:52     ` [dpdk-dev] [PATCH v2 07/17] net/mlx5: move Tx " Michael Baum
                       ` (10 subsequent siblings)
  16 siblings, 0 replies; 57+ messages in thread
From: Michael Baum @ 2020-12-29  8:52 UTC (permalink / raw)
  To: dev; +Cc: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko

Use the common function for ASO CQ creation.

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
 drivers/net/mlx5/mlx5.h          |  8 +---
 drivers/net/mlx5/mlx5_flow_age.c | 81 +++++++++-------------------------------
 2 files changed, 19 insertions(+), 70 deletions(-)
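
On the datapath side only the dereferences change: CQEs and the doorbell
record are now reached through the embedded common object. An illustrative
polling tail, mirroring mlx5_aso_completion_handle() in the diff below:

    uint32_t idx = cq->cq_ci & ((1 << cq->log_desc_n) - 1);
    volatile struct mlx5_cqe *cqe = &cq->cq_obj.cqes[idx];

    /* ... check_cqe() and completion processing go here ... */
    rte_io_wmb();
    /* Ring the CQ doorbell record owned by the common object. */
    cq->cq_obj.db_rec[0] = rte_cpu_to_be_32(cq->cq_ci);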

diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 00ccaee..e02faed 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -463,13 +463,7 @@ struct mlx5_flow_counter_mng {
 struct mlx5_aso_cq {
 	uint16_t log_desc_n;
 	uint32_t cq_ci:24;
-	struct mlx5_devx_obj *cq;
-	struct mlx5dv_devx_umem *umem_obj;
-	union {
-		volatile void *umem_buf;
-		volatile struct mlx5_cqe *cqes;
-	};
-	volatile uint32_t *db_rec;
+	struct mlx5_devx_cq cq_obj;
 	uint64_t errors;
 };
 
diff --git a/drivers/net/mlx5/mlx5_flow_age.c b/drivers/net/mlx5/mlx5_flow_age.c
index 0ea61be..60a8d2a 100644
--- a/drivers/net/mlx5/mlx5_flow_age.c
+++ b/drivers/net/mlx5/mlx5_flow_age.c
@@ -7,10 +7,12 @@
 
 #include <mlx5_malloc.h>
 #include <mlx5_common_os.h>
+#include <mlx5_common_devx.h>
 
 #include "mlx5.h"
 #include "mlx5_flow.h"
 
+
 /**
  * Destroy Completion Queue used for ASO access.
  *
@@ -20,12 +22,8 @@
 static void
 mlx5_aso_cq_destroy(struct mlx5_aso_cq *cq)
 {
-	if (cq->cq)
-		claim_zero(mlx5_devx_cmd_destroy(cq->cq));
-	if (cq->umem_obj)
-		claim_zero(mlx5_glue->devx_umem_dereg(cq->umem_obj));
-	if (cq->umem_buf)
-		mlx5_free((void *)(uintptr_t)cq->umem_buf);
+	if (cq->cq_obj.cq)
+		mlx5_devx_cq_destroy(&cq->cq_obj);
 	memset(cq, 0, sizeof(*cq));
 }
 
@@ -42,60 +40,21 @@
  *   Socket to use for allocation.
  * @param[in] uar_page_id
  *   UAR page ID to use.
- * @param[in] eqn
- *   EQ number.
  *
  * @return
  *   0 on success, a negative errno value otherwise and rte_errno is set.
  */
 static int
 mlx5_aso_cq_create(void *ctx, struct mlx5_aso_cq *cq, uint16_t log_desc_n,
-		   int socket, int uar_page_id, uint32_t eqn)
+		   int socket, int uar_page_id)
 {
-	struct mlx5_devx_cq_attr attr = { 0 };
-	size_t pgsize = sysconf(_SC_PAGESIZE);
-	uint32_t umem_size;
-	uint16_t cq_size = 1 << log_desc_n;
+	struct mlx5_devx_cq_attr attr = {
+		.uar_page_id = uar_page_id,
+	};
 
 	cq->log_desc_n = log_desc_n;
-	umem_size = sizeof(struct mlx5_cqe) * cq_size + sizeof(*cq->db_rec) * 2;
-	cq->umem_buf = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO, umem_size,
-				   4096, socket);
-	if (!cq->umem_buf) {
-		DRV_LOG(ERR, "Failed to allocate memory for CQ.");
-		rte_errno = ENOMEM;
-		return -ENOMEM;
-	}
-	cq->umem_obj = mlx5_glue->devx_umem_reg(ctx,
-						(void *)(uintptr_t)cq->umem_buf,
-						umem_size,
-						IBV_ACCESS_LOCAL_WRITE);
-	if (!cq->umem_obj) {
-		DRV_LOG(ERR, "Failed to register umem for aso CQ.");
-		goto error;
-	}
-	attr.q_umem_valid = 1;
-	attr.db_umem_valid = 1;
-	attr.use_first_only = 0;
-	attr.overrun_ignore = 0;
-	attr.uar_page_id = uar_page_id;
-	attr.q_umem_id = mlx5_os_get_umem_id(cq->umem_obj);
-	attr.q_umem_offset = 0;
-	attr.db_umem_id = attr.q_umem_id;
-	attr.db_umem_offset = sizeof(struct mlx5_cqe) * cq_size;
-	attr.eqn = eqn;
-	attr.log_cq_size = log_desc_n;
-	attr.log_page_size = rte_log2_u32(pgsize);
-	cq->cq = mlx5_devx_cmd_create_cq(ctx, &attr);
-	if (!cq->cq)
-		goto error;
-	cq->db_rec = RTE_PTR_ADD(cq->umem_buf, (uintptr_t)attr.db_umem_offset);
 	cq->cq_ci = 0;
-	memset((void *)(uintptr_t)cq->umem_buf, 0xFF, attr.db_umem_offset);
-	return 0;
-error:
-	mlx5_aso_cq_destroy(cq);
-	return -1;
+	return mlx5_devx_cq_create(ctx, &cq->cq_obj, log_desc_n, &attr, socket);
 }
 
 /**
@@ -194,8 +153,7 @@
 		mlx5_devx_cmd_destroy(sq->sq);
 		sq->sq = NULL;
 	}
-	if (sq->cq.cq)
-		mlx5_aso_cq_destroy(&sq->cq);
+	mlx5_aso_cq_destroy(&sq->cq);
 	mlx5_aso_devx_dereg_mr(&sq->mr);
 	memset(sq, 0, sizeof(*sq));
 }
@@ -246,8 +204,6 @@
  *   User Access Region object.
  * @param[in] pdn
  *   Protection Domain number to use.
- * @param[in] eqn
- *   EQ number.
  * @param[in] log_desc_n
  *   Log of number of descriptors in queue.
  *
@@ -257,7 +213,7 @@
 static int
 mlx5_aso_sq_create(void *ctx, struct mlx5_aso_sq *sq, int socket,
 		   struct mlx5dv_devx_uar *uar, uint32_t pdn,
-		   uint32_t eqn,  uint16_t log_desc_n)
+		   uint16_t log_desc_n)
 {
 	struct mlx5_devx_create_sq_attr attr = { 0 };
 	struct mlx5_devx_modify_sq_attr modify_attr = { 0 };
@@ -271,7 +227,7 @@
 				 sq_desc_n, &sq->mr, socket, pdn))
 		return -1;
 	if (mlx5_aso_cq_create(ctx, &sq->cq, log_desc_n, socket,
-				mlx5_os_get_devx_uar_page_id(uar), eqn))
+			       mlx5_os_get_devx_uar_page_id(uar)))
 		goto error;
 	sq->log_desc_n = log_desc_n;
 	sq->umem_buf = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO, wq_size +
@@ -295,7 +251,7 @@
 	attr.tis_lst_sz = 0;
 	attr.tis_num = 0;
 	attr.user_index = 0xFFFF;
-	attr.cqn = sq->cq.cq->id;
+	attr.cqn = sq->cq.cq_obj.cq->id;
 	wq_attr->uar_page = mlx5_os_get_devx_uar_page_id(uar);
 	wq_attr->pd = pdn;
 	wq_attr->wq_type = MLX5_WQ_TYPE_CYCLIC;
@@ -347,8 +303,7 @@
 mlx5_aso_queue_init(struct mlx5_dev_ctx_shared *sh)
 {
 	return mlx5_aso_sq_create(sh->ctx, &sh->aso_age_mng->aso_sq, 0,
-				  sh->tx_uar, sh->pdn, sh->eqn,
-				  MLX5_ASO_QUEUE_LOG_DESC);
+				  sh->tx_uar, sh->pdn, MLX5_ASO_QUEUE_LOG_DESC);
 }
 
 /**
@@ -458,7 +413,7 @@
 	struct mlx5_aso_cq *cq = &sq->cq;
 	uint32_t idx = cq->cq_ci & ((1 << cq->log_desc_n) - 1);
 	volatile struct mlx5_err_cqe *cqe =
-				(volatile struct mlx5_err_cqe *)&cq->cqes[idx];
+			(volatile struct mlx5_err_cqe *)&cq->cq_obj.cqes[idx];
 
 	cq->errors++;
 	idx = rte_be_to_cpu_16(cqe->wqe_counter) & (1u << sq->log_desc_n);
@@ -571,8 +526,8 @@
 	do {
 		idx = next_idx;
 		next_idx = (cq->cq_ci + 1) & mask;
-		rte_prefetch0(&cq->cqes[next_idx]);
-		cqe = &cq->cqes[idx];
+		rte_prefetch0(&cq->cq_obj.cqes[next_idx]);
+		cqe = &cq->cq_obj.cqes[idx];
 		ret = check_cqe(cqe, cq_size, cq->cq_ci);
 		/*
 		 * Be sure owner read is done before any other cookie field or
@@ -592,7 +547,7 @@
 		mlx5_aso_age_action_update(sh, i);
 		sq->tail += i;
 		rte_io_wmb();
-		cq->db_rec[0] = rte_cpu_to_be_32(cq->cq_ci);
+		cq->cq_obj.db_rec[0] = rte_cpu_to_be_32(cq->cq_ci);
 	}
 	return i;
 }
-- 
1.8.3.1


^ permalink raw reply	[flat|nested] 57+ messages in thread

* [dpdk-dev] [PATCH v2 07/17] net/mlx5: move Tx CQ creation to common
  2020-12-29  8:52   ` [dpdk-dev] [PATCH v2 00/17] common/mlx5: share DevX resources creations Michael Baum
                       ` (5 preceding siblings ...)
  2020-12-29  8:52     ` [dpdk-dev] [PATCH v2 06/17] net/mlx5: move ASO " Michael Baum
@ 2020-12-29  8:52     ` Michael Baum
  2020-12-29  8:52     ` [dpdk-dev] [PATCH v2 08/17] net/mlx5: move Rx " Michael Baum
                       ` (9 subsequent siblings)
  16 siblings, 0 replies; 57+ messages in thread
From: Michael Baum @ 2020-12-29  8:52 UTC (permalink / raw)
  To: dev; +Cc: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko

Use the common function for Tx CQ creation.

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
 drivers/net/mlx5/mlx5.h      |   6 +-
 drivers/net/mlx5/mlx5_devx.c | 182 +++++++------------------------------------
 2 files changed, 31 insertions(+), 157 deletions(-)
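
The CQ sizing rule stays in the driver since it is Tx specific; only the
resource plumbing moves to common. A sketch of the sizing (one completion
per threshold batch plus inline slack, bounded by the 16-bit CQ index):

    cqe_n = (1UL << txq_data->elts_n) / MLX5_TX_COMP_THRESH +
            1 + MLX5_TX_COMP_THRESH_INLINE_DIV;
    log_desc_n = log2above(cqe_n);       /* Round up to a power of two. */
    if ((1UL << log_desc_n) > UINT16_MAX)
        return -EINVAL;                  /* Illustrative error handling. */
    ret = mlx5_devx_cq_create(sh->ctx, &txq_obj->cq_obj, log_desc_n,
                              &cq_attr, sh->numa_node);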

diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index e02faed..2e75498 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -843,11 +843,7 @@ struct mlx5_txq_obj {
 		};
 		struct {
 			struct rte_eth_dev *dev;
-			struct mlx5_devx_obj *cq_devx;
-			void *cq_umem;
-			void *cq_buf;
-			int64_t cq_dbrec_offset;
-			struct mlx5_devx_dbr_page *cq_dbrec_page;
+			struct mlx5_devx_cq cq_obj;
 			struct mlx5_devx_obj *sq_devx;
 			void *sq_umem;
 			void *sq_buf;
diff --git a/drivers/net/mlx5/mlx5_devx.c b/drivers/net/mlx5/mlx5_devx.c
index de9b204..9560f2b 100644
--- a/drivers/net/mlx5/mlx5_devx.c
+++ b/drivers/net/mlx5/mlx5_devx.c
@@ -15,6 +15,7 @@
 
 #include <mlx5_glue.h>
 #include <mlx5_devx_cmds.h>
+#include <mlx5_common_devx.h>
 #include <mlx5_malloc.h>
 
 #include "mlx5.h"
@@ -1144,28 +1145,6 @@
 }
 
 /**
- * Release DevX Tx CQ resources.
- *
- * @param txq_obj
- *   DevX Tx queue object.
- */
-static void
-mlx5_txq_release_devx_cq_resources(struct mlx5_txq_obj *txq_obj)
-{
-	if (txq_obj->cq_devx)
-		claim_zero(mlx5_devx_cmd_destroy(txq_obj->cq_devx));
-	if (txq_obj->cq_umem)
-		claim_zero(mlx5_glue->devx_umem_dereg(txq_obj->cq_umem));
-	if (txq_obj->cq_buf)
-		mlx5_free(txq_obj->cq_buf);
-	if (txq_obj->cq_dbrec_page)
-		claim_zero(mlx5_release_dbr(&txq_obj->txq_ctrl->priv->dbrpgs,
-					    mlx5_os_get_umem_id
-						 (txq_obj->cq_dbrec_page->umem),
-					    txq_obj->cq_dbrec_offset));
-}
-
-/**
  * Destroy the Tx queue DevX object.
  *
  * @param txq_obj
@@ -1175,126 +1154,8 @@
 mlx5_txq_release_devx_resources(struct mlx5_txq_obj *txq_obj)
 {
 	mlx5_txq_release_devx_sq_resources(txq_obj);
-	mlx5_txq_release_devx_cq_resources(txq_obj);
-}
-
-/**
- * Create a DevX CQ object and its resources for an Tx queue.
- *
- * @param dev
- *   Pointer to Ethernet device.
- * @param idx
- *   Queue index in DPDK Tx queue array.
- *
- * @return
- *   Number of CQEs in CQ, 0 otherwise and rte_errno is set.
- */
-static uint32_t
-mlx5_txq_create_devx_cq_resources(struct rte_eth_dev *dev, uint16_t idx)
-{
-	struct mlx5_priv *priv = dev->data->dev_private;
-	struct mlx5_txq_data *txq_data = (*priv->txqs)[idx];
-	struct mlx5_txq_ctrl *txq_ctrl =
-			container_of(txq_data, struct mlx5_txq_ctrl, txq);
-	struct mlx5_txq_obj *txq_obj = txq_ctrl->obj;
-	struct mlx5_devx_cq_attr cq_attr = { 0 };
-	struct mlx5_cqe *cqe;
-	size_t page_size;
-	size_t alignment;
-	uint32_t cqe_n;
-	uint32_t i;
-	int ret;
-
-	MLX5_ASSERT(txq_data);
-	MLX5_ASSERT(txq_obj);
-	page_size = rte_mem_page_size();
-	if (page_size == (size_t)-1) {
-		DRV_LOG(ERR, "Failed to get mem page size.");
-		rte_errno = ENOMEM;
-		return 0;
-	}
-	/* Allocate memory buffer for CQEs. */
-	alignment = MLX5_CQE_BUF_ALIGNMENT;
-	if (alignment == (size_t)-1) {
-		DRV_LOG(ERR, "Failed to get CQE buf alignment.");
-		rte_errno = ENOMEM;
-		return 0;
-	}
-	/* Create the Completion Queue. */
-	cqe_n = (1UL << txq_data->elts_n) / MLX5_TX_COMP_THRESH +
-		1 + MLX5_TX_COMP_THRESH_INLINE_DIV;
-	cqe_n = 1UL << log2above(cqe_n);
-	if (cqe_n > UINT16_MAX) {
-		DRV_LOG(ERR,
-			"Port %u Tx queue %u requests to many CQEs %u.",
-			dev->data->port_id, txq_data->idx, cqe_n);
-		rte_errno = EINVAL;
-		return 0;
-	}
-	txq_obj->cq_buf = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO,
-				      cqe_n * sizeof(struct mlx5_cqe),
-				      alignment,
-				      priv->sh->numa_node);
-	if (!txq_obj->cq_buf) {
-		DRV_LOG(ERR,
-			"Port %u Tx queue %u cannot allocate memory (CQ).",
-			dev->data->port_id, txq_data->idx);
-		rte_errno = ENOMEM;
-		return 0;
-	}
-	/* Register allocated buffer in user space with DevX. */
-	txq_obj->cq_umem = mlx5_glue->devx_umem_reg(priv->sh->ctx,
-						(void *)txq_obj->cq_buf,
-						cqe_n * sizeof(struct mlx5_cqe),
-						IBV_ACCESS_LOCAL_WRITE);
-	if (!txq_obj->cq_umem) {
-		rte_errno = errno;
-		DRV_LOG(ERR,
-			"Port %u Tx queue %u cannot register memory (CQ).",
-			dev->data->port_id, txq_data->idx);
-		goto error;
-	}
-	/* Allocate doorbell record for completion queue. */
-	txq_obj->cq_dbrec_offset = mlx5_get_dbr(priv->sh->ctx,
-						&priv->dbrpgs,
-						&txq_obj->cq_dbrec_page);
-	if (txq_obj->cq_dbrec_offset < 0) {
-		rte_errno = errno;
-		DRV_LOG(ERR, "Failed to allocate CQ door-bell.");
-		goto error;
-	}
-	cq_attr.cqe_size = (sizeof(struct mlx5_cqe) == 128) ?
-			    MLX5_CQE_SIZE_128B : MLX5_CQE_SIZE_64B;
-	cq_attr.uar_page_id = mlx5_os_get_devx_uar_page_id(priv->sh->tx_uar);
-	cq_attr.eqn = priv->sh->eqn;
-	cq_attr.q_umem_valid = 1;
-	cq_attr.q_umem_offset = (uintptr_t)txq_obj->cq_buf % page_size;
-	cq_attr.q_umem_id = mlx5_os_get_umem_id(txq_obj->cq_umem);
-	cq_attr.db_umem_valid = 1;
-	cq_attr.db_umem_offset = txq_obj->cq_dbrec_offset;
-	cq_attr.db_umem_id = mlx5_os_get_umem_id(txq_obj->cq_dbrec_page->umem);
-	cq_attr.log_cq_size = rte_log2_u32(cqe_n);
-	cq_attr.log_page_size = rte_log2_u32(page_size);
-	/* Create completion queue object with DevX. */
-	txq_obj->cq_devx = mlx5_devx_cmd_create_cq(priv->sh->ctx, &cq_attr);
-	if (!txq_obj->cq_devx) {
-		rte_errno = errno;
-		DRV_LOG(ERR, "Port %u Tx queue %u CQ creation failure.",
-			dev->data->port_id, idx);
-		goto error;
-	}
-	/* Initial fill CQ buffer with invalid CQE opcode. */
-	cqe = (struct mlx5_cqe *)txq_obj->cq_buf;
-	for (i = 0; i < cqe_n; i++) {
-		cqe->op_own = (MLX5_CQE_INVALID << 4) | MLX5_CQE_OWNER_MASK;
-		++cqe;
-	}
-	return cqe_n;
-error:
-	ret = rte_errno;
-	mlx5_txq_release_devx_cq_resources(txq_obj);
-	rte_errno = ret;
-	return 0;
+	mlx5_devx_cq_destroy(&txq_obj->cq_obj);
+	memset(&txq_obj->cq_obj, 0, sizeof(txq_obj->cq_obj));
 }
 
 /**
@@ -1366,7 +1227,7 @@
 	sq_attr.tis_lst_sz = 1;
 	sq_attr.tis_num = priv->sh->tis->id;
 	sq_attr.state = MLX5_SQC_STATE_RST;
-	sq_attr.cqn = txq_obj->cq_devx->id;
+	sq_attr.cqn = txq_obj->cq_obj.cq->id;
 	sq_attr.flush_in_error_en = 1;
 	sq_attr.allow_multi_pkt_send_wqe = !!priv->config.mps;
 	sq_attr.allow_swp = !!priv->config.swp;
@@ -1430,8 +1291,13 @@
 #else
 	struct mlx5_dev_ctx_shared *sh = priv->sh;
 	struct mlx5_txq_obj *txq_obj = txq_ctrl->obj;
+	struct mlx5_devx_cq_attr cq_attr = {
+		.uar_page_id = mlx5_os_get_devx_uar_page_id(sh->tx_uar),
+		.cqe_size = (sizeof(struct mlx5_cqe) == 128) ?
+					MLX5_CQE_SIZE_128B : MLX5_CQE_SIZE_64B,
+	};
 	void *reg_addr;
-	uint32_t cqe_n;
+	uint32_t cqe_n, log_desc_n;
 	uint32_t wqe_n;
 	int ret = 0;
 
@@ -1439,19 +1305,31 @@
 	MLX5_ASSERT(txq_obj);
 	txq_obj->txq_ctrl = txq_ctrl;
 	txq_obj->dev = dev;
-	cqe_n = mlx5_txq_create_devx_cq_resources(dev, idx);
-	if (!cqe_n) {
-		rte_errno = errno;
+	cqe_n = (1UL << txq_data->elts_n) / MLX5_TX_COMP_THRESH +
+		1 + MLX5_TX_COMP_THRESH_INLINE_DIV;
+	log_desc_n = log2above(cqe_n);
+	cqe_n = 1UL << log_desc_n;
+	if (cqe_n > UINT16_MAX) {
+		DRV_LOG(ERR, "Port %u Tx queue %u requests to many CQEs %u.",
+			dev->data->port_id, txq_data->idx, cqe_n);
+		rte_errno = EINVAL;
+		return 0;
+	}
+	/* Create completion queue object with DevX. */
+	ret = mlx5_devx_cq_create(sh->ctx, &txq_obj->cq_obj, log_desc_n,
+				  &cq_attr, priv->sh->numa_node);
+	if (ret) {
+		DRV_LOG(ERR, "Port %u Tx queue %u CQ creation failure.",
+			dev->data->port_id, idx);
 		goto error;
 	}
-	txq_data->cqe_n = log2above(cqe_n);
-	txq_data->cqe_s = 1 << txq_data->cqe_n;
+	txq_data->cqe_n = log_desc_n;
+	txq_data->cqe_s = cqe_n;
 	txq_data->cqe_m = txq_data->cqe_s - 1;
-	txq_data->cqes = (volatile struct mlx5_cqe *)txq_obj->cq_buf;
+	txq_data->cqes = txq_obj->cq_obj.cqes;
 	txq_data->cq_ci = 0;
 	txq_data->cq_pi = 0;
-	txq_data->cq_db = (volatile uint32_t *)(txq_obj->cq_dbrec_page->dbrs +
-						txq_obj->cq_dbrec_offset);
+	txq_data->cq_db = txq_obj->cq_obj.db_rec;
 	*txq_data->cq_db = 0;
 	/* Create Send Queue object with DevX. */
 	wqe_n = mlx5_txq_create_devx_sq_resources(dev, idx);
-- 
1.8.3.1


^ permalink raw reply	[flat|nested] 57+ messages in thread

* [dpdk-dev] [PATCH v2 08/17] net/mlx5: move Rx CQ creation to common
  2020-12-29  8:52   ` [dpdk-dev] [PATCH v2 00/17] common/mlx5: share DevX resources creations Michael Baum
                       ` (6 preceding siblings ...)
  2020-12-29  8:52     ` [dpdk-dev] [PATCH v2 07/17] net/mlx5: move Tx " Michael Baum
@ 2020-12-29  8:52     ` Michael Baum
  2020-12-29  8:52     ` [dpdk-dev] [PATCH v2 09/17] common/mlx5: enhance page size configuration Michael Baum
                       ` (8 subsequent siblings)
  16 siblings, 0 replies; 57+ messages in thread
From: Michael Baum @ 2020-12-29  8:52 UTC (permalink / raw)
  To: dev; +Cc: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko

Use the common function for Rx CQ creation.

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
 drivers/net/mlx5/mlx5.c      |   8 ---
 drivers/net/mlx5/mlx5.h      |   3 +-
 drivers/net/mlx5/mlx5_devx.c | 142 +++++++++++++------------------------------
 drivers/net/mlx5/mlx5_rxtx.h |   4 --
 4 files changed, 42 insertions(+), 115 deletions(-)
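
With the common object the event cookie becomes the embedded DevX CQ, so
the subscription and the interrupt handler must agree on it. A condensed
pairing, as in the hunks below (error handling trimmed):

    /* Subscribe: pass the DevX CQ inside the common object as cookie. */
    ret = mlx5_glue->devx_subscribe_devx_event(rxq_obj->devx_channel,
                                               rxq_obj->cq_obj.cq->obj,
                                               sizeof(event_nums), event_nums,
                                    (uint64_t)(uintptr_t)rxq_obj->cq_obj.cq);

    /* Handle: match the received event against the same cookie. */
    if (out.event_resp.cookie != (uint64_t)(uintptr_t)rxq_obj->cq_obj.cq) {
        rte_errno = EINVAL;
        return -rte_errno;
    }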

diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index 52a8a25..3c7e5d2 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -938,14 +938,6 @@ struct mlx5_dev_ctx_shared *
 		goto error;
 	}
 	if (sh->devx) {
-		/* Query the EQN for this core. */
-		err = mlx5_glue->devx_query_eqn(sh->ctx, 0, &sh->eqn);
-		if (err) {
-			rte_errno = errno;
-			DRV_LOG(ERR, "Failed to query event queue number %d.",
-				rte_errno);
-			goto error;
-		}
 		err = mlx5_os_get_pdn(sh->pd, &sh->pdn);
 		if (err) {
 			DRV_LOG(ERR, "Fail to extract pdn from PD");
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 2e75498..9a59c26 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -679,7 +679,6 @@ struct mlx5_dev_ctx_shared {
 	uint16_t bond_dev; /* Bond primary device id. */
 	uint32_t devx:1; /* Opened with DV. */
 	uint32_t flow_hit_aso_en:1; /* Flow Hit ASO is supported. */
-	uint32_t eqn; /* Event Queue number. */
 	uint32_t max_port; /* Maximal IB device port index. */
 	void *ctx; /* Verbs/DV/DevX context. */
 	void *pd; /* Protection Domain. */
@@ -787,7 +786,7 @@ struct mlx5_rxq_obj {
 		};
 		struct {
 			struct mlx5_devx_obj *rq; /* DevX Rx Queue object. */
-			struct mlx5_devx_obj *devx_cq; /* DevX CQ object. */
+			struct mlx5_devx_cq cq_obj; /* DevX CQ object. */
 			void *devx_channel;
 		};
 	};
diff --git a/drivers/net/mlx5/mlx5_devx.c b/drivers/net/mlx5/mlx5_devx.c
index 9560f2b..6ad70f2 100644
--- a/drivers/net/mlx5/mlx5_devx.c
+++ b/drivers/net/mlx5/mlx5_devx.c
@@ -172,30 +172,17 @@
 }
 
 /**
- * Release the resources allocated for the Rx CQ DevX object.
+ * Destroy the Rx queue DevX object.
  *
- * @param rxq_ctrl
- *   DevX Rx queue object.
+ * @param rxq_obj
+ *   Rxq object to destroy.
  */
 static void
-mlx5_rxq_release_devx_cq_resources(struct mlx5_rxq_ctrl *rxq_ctrl)
+mlx5_rxq_release_devx_resources(struct mlx5_rxq_ctrl *rxq_ctrl)
 {
-	struct mlx5_devx_dbr_page *dbr_page = rxq_ctrl->cq_dbrec_page;
-
-	if (rxq_ctrl->cq_umem) {
-		mlx5_glue->devx_umem_dereg(rxq_ctrl->cq_umem);
-		rxq_ctrl->cq_umem = NULL;
-	}
-	if (rxq_ctrl->rxq.cqes) {
-		rte_free((void *)(uintptr_t)rxq_ctrl->rxq.cqes);
-		rxq_ctrl->rxq.cqes = NULL;
-	}
-	if (dbr_page) {
-		claim_zero(mlx5_release_dbr(&rxq_ctrl->priv->dbrpgs,
-					    mlx5_os_get_umem_id(dbr_page->umem),
-					    rxq_ctrl->cq_dbr_offset));
-		rxq_ctrl->cq_dbrec_page = NULL;
-	}
+	mlx5_rxq_release_devx_rq_resources(rxq_ctrl);
+	mlx5_devx_cq_destroy(&rxq_ctrl->obj->cq_obj);
+	memset(&rxq_ctrl->obj->cq_obj, 0, sizeof(rxq_ctrl->obj->cq_obj));
 }
 
 /**
@@ -213,14 +200,12 @@
 		mlx5_devx_modify_rq(rxq_obj, MLX5_RXQ_MOD_RDY2RST);
 		claim_zero(mlx5_devx_cmd_destroy(rxq_obj->rq));
 	} else {
-		MLX5_ASSERT(rxq_obj->devx_cq);
+		MLX5_ASSERT(rxq_obj->cq_obj);
 		claim_zero(mlx5_devx_cmd_destroy(rxq_obj->rq));
-		claim_zero(mlx5_devx_cmd_destroy(rxq_obj->devx_cq));
 		if (rxq_obj->devx_channel)
 			mlx5_glue->devx_destroy_event_channel
 							(rxq_obj->devx_channel);
-		mlx5_rxq_release_devx_rq_resources(rxq_obj->rxq_ctrl);
-		mlx5_rxq_release_devx_cq_resources(rxq_obj->rxq_ctrl);
+		mlx5_rxq_release_devx_resources(rxq_obj->rxq_ctrl);
 	}
 }
 
@@ -249,7 +234,7 @@
 		rte_errno = errno;
 		return -rte_errno;
 	}
-	if (out.event_resp.cookie != (uint64_t)(uintptr_t)rxq_obj->devx_cq) {
+	if (out.event_resp.cookie != (uint64_t)(uintptr_t)rxq_obj->cq_obj.cq) {
 		rte_errno = EINVAL;
 		return -rte_errno;
 	}
@@ -327,7 +312,7 @@
 		container_of(rxq_data, struct mlx5_rxq_ctrl, rxq);
 	struct mlx5_devx_create_rq_attr rq_attr = { 0 };
 	uint32_t wqe_n = 1 << (rxq_data->elts_n - rxq_data->sges_n);
-	uint32_t cqn = rxq_ctrl->obj->devx_cq->id;
+	uint32_t cqn = rxq_ctrl->obj->cq_obj.cq->id;
 	struct mlx5_devx_dbr_page *dbr_page;
 	int64_t dbr_offset;
 	uint32_t wq_size = 0;
@@ -410,31 +395,23 @@
  *   Queue index in DPDK Rx queue array.
  *
  * @return
- *   The DevX CQ object initialized, NULL otherwise and rte_errno is set.
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
  */
-static struct mlx5_devx_obj *
+static int
 mlx5_rxq_create_devx_cq_resources(struct rte_eth_dev *dev, uint16_t idx)
 {
-	struct mlx5_devx_obj *cq_obj = 0;
+	struct mlx5_devx_cq *cq_obj = 0;
 	struct mlx5_devx_cq_attr cq_attr = { 0 };
 	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5_dev_ctx_shared *sh = priv->sh;
 	struct mlx5_rxq_data *rxq_data = (*priv->rxqs)[idx];
 	struct mlx5_rxq_ctrl *rxq_ctrl =
 		container_of(rxq_data, struct mlx5_rxq_ctrl, rxq);
-	size_t page_size = rte_mem_page_size();
 	unsigned int cqe_n = mlx5_rxq_cqe_num(rxq_data);
-	struct mlx5_devx_dbr_page *dbr_page;
-	int64_t dbr_offset;
-	void *buf = NULL;
-	uint16_t event_nums[1] = {0};
 	uint32_t log_cqe_n;
-	uint32_t cq_size;
+	uint16_t event_nums[1] = { 0 };
 	int ret = 0;
 
-	if (page_size == (size_t)-1) {
-		DRV_LOG(ERR, "Failed to get page_size.");
-		goto error;
-	}
 	if (priv->config.cqe_comp && !rxq_data->hw_timestamp &&
 	    !rxq_data->lro) {
 		cq_attr.cqe_comp_en = 1u;
@@ -489,71 +466,37 @@
 	}
 	if (priv->config.cqe_pad)
 		cq_attr.cqe_size = MLX5_CQE_SIZE_128B;
+	cq_attr.uar_page_id = mlx5_os_get_devx_uar_page_id(sh->devx_rx_uar);
 	log_cqe_n = log2above(cqe_n);
-	cq_size = sizeof(struct mlx5_cqe) * (1 << log_cqe_n);
-	buf = rte_calloc_socket(__func__, 1, cq_size, page_size,
-				rxq_ctrl->socket);
-	if (!buf) {
-		DRV_LOG(ERR, "Failed to allocate memory for CQ.");
-		goto error;
-	}
-	rxq_data->cqes = (volatile struct mlx5_cqe (*)[])(uintptr_t)buf;
-	rxq_ctrl->cq_umem = mlx5_glue->devx_umem_reg(priv->sh->ctx, buf,
-						     cq_size,
-						     IBV_ACCESS_LOCAL_WRITE);
-	if (!rxq_ctrl->cq_umem) {
-		DRV_LOG(ERR, "Failed to register umem for CQ.");
-		goto error;
-	}
-	/* Allocate CQ door-bell. */
-	dbr_offset = mlx5_get_dbr(priv->sh->ctx, &priv->dbrpgs, &dbr_page);
-	if (dbr_offset < 0) {
-		DRV_LOG(ERR, "Failed to allocate CQ door-bell.");
-		goto error;
-	}
-	rxq_ctrl->cq_dbr_offset = dbr_offset;
-	rxq_ctrl->cq_dbrec_page = dbr_page;
-	rxq_data->cq_db = (uint32_t *)((uintptr_t)dbr_page->dbrs +
-			  (uintptr_t)rxq_ctrl->cq_dbr_offset);
-	rxq_data->cq_uar =
-			mlx5_os_get_devx_uar_base_addr(priv->sh->devx_rx_uar);
 	/* Create CQ using DevX API. */
-	cq_attr.eqn = priv->sh->eqn;
-	cq_attr.uar_page_id =
-			mlx5_os_get_devx_uar_page_id(priv->sh->devx_rx_uar);
-	cq_attr.q_umem_id = mlx5_os_get_umem_id(rxq_ctrl->cq_umem);
-	cq_attr.q_umem_valid = 1;
-	cq_attr.log_cq_size = log_cqe_n;
-	cq_attr.log_page_size = rte_log2_u32(page_size);
-	cq_attr.db_umem_offset = rxq_ctrl->cq_dbr_offset;
-	cq_attr.db_umem_id = mlx5_os_get_umem_id(dbr_page->umem);
-	cq_attr.db_umem_valid = 1;
-	cq_obj = mlx5_devx_cmd_create_cq(priv->sh->ctx, &cq_attr);
-	if (!cq_obj)
-		goto error;
+	ret = mlx5_devx_cq_create(sh->ctx, &rxq_ctrl->obj->cq_obj, log_cqe_n,
+				  &cq_attr, sh->numa_node);
+	if (ret)
+		return ret;
+	cq_obj = &rxq_ctrl->obj->cq_obj;
+	rxq_data->cqes = (volatile struct mlx5_cqe (*)[])
+							(uintptr_t)cq_obj->cqes;
+	rxq_data->cq_db = cq_obj->db_rec;
+	rxq_data->cq_uar = mlx5_os_get_devx_uar_base_addr(sh->devx_rx_uar);
 	rxq_data->cqe_n = log_cqe_n;
-	rxq_data->cqn = cq_obj->id;
+	rxq_data->cqn = cq_obj->cq->id;
 	if (rxq_ctrl->obj->devx_channel) {
 		ret = mlx5_glue->devx_subscribe_devx_event
-						(rxq_ctrl->obj->devx_channel,
-						 cq_obj->obj,
-						 sizeof(event_nums),
-						 event_nums,
-						 (uint64_t)(uintptr_t)cq_obj);
+					      (rxq_ctrl->obj->devx_channel,
+					       cq_obj->cq->obj,
+					       sizeof(event_nums),
+					       event_nums,
+					       (uint64_t)(uintptr_t)cq_obj->cq);
 		if (ret) {
 			DRV_LOG(ERR, "Fail to subscribe CQ to event channel.");
-			rte_errno = errno;
-			goto error;
+			ret = errno;
+			mlx5_devx_cq_destroy(cq_obj);
+			memset(cq_obj, 0, sizeof(*cq_obj));
+			rte_errno = ret;
+			return -ret;
 		}
 	}
-	/* Initialise CQ to 1's to mark HW ownership for all CQEs. */
-	memset((void *)(uintptr_t)rxq_data->cqes, 0xFF, cq_size);
-	return cq_obj;
-error:
-	if (cq_obj)
-		mlx5_devx_cmd_destroy(cq_obj);
-	mlx5_rxq_release_devx_cq_resources(rxq_ctrl);
-	return NULL;
+	return 0;
 }
 
 /**
@@ -657,8 +600,8 @@
 		tmpl->fd = mlx5_os_get_devx_channel_fd(tmpl->devx_channel);
 	}
 	/* Create CQ using DevX API. */
-	tmpl->devx_cq = mlx5_rxq_create_devx_cq_resources(dev, idx);
-	if (!tmpl->devx_cq) {
+	ret = mlx5_rxq_create_devx_cq_resources(dev, idx);
+	if (ret) {
 		DRV_LOG(ERR, "Failed to create CQ.");
 		goto error;
 	}
@@ -684,12 +627,9 @@
 	ret = rte_errno; /* Save rte_errno before cleanup. */
 	if (tmpl->rq)
 		claim_zero(mlx5_devx_cmd_destroy(tmpl->rq));
-	if (tmpl->devx_cq)
-		claim_zero(mlx5_devx_cmd_destroy(tmpl->devx_cq));
 	if (tmpl->devx_channel)
 		mlx5_glue->devx_destroy_event_channel(tmpl->devx_channel);
-	mlx5_rxq_release_devx_rq_resources(rxq_ctrl);
-	mlx5_rxq_release_devx_cq_resources(rxq_ctrl);
+	mlx5_rxq_release_devx_resources(rxq_ctrl);
 	rte_errno = ret; /* Restore rte_errno. */
 	return -rte_errno;
 }
diff --git a/drivers/net/mlx5/mlx5_rxtx.h b/drivers/net/mlx5/mlx5_rxtx.h
index 7989a50..6a71791 100644
--- a/drivers/net/mlx5/mlx5_rxtx.h
+++ b/drivers/net/mlx5/mlx5_rxtx.h
@@ -196,11 +196,7 @@ struct mlx5_rxq_ctrl {
 	struct mlx5_devx_dbr_page *rq_dbrec_page;
 	uint64_t rq_dbr_offset;
 	/* Storing RQ door-bell information, needed when freeing door-bell. */
-	struct mlx5_devx_dbr_page *cq_dbrec_page;
-	uint64_t cq_dbr_offset;
-	/* Storing CQ door-bell information, needed when freeing door-bell. */
 	void *wq_umem; /* WQ buffer registration info. */
-	void *cq_umem; /* CQ buffer registration info. */
 	struct rte_eth_hairpin_conf hairpin_conf; /* Hairpin configuration. */
 	uint32_t hairpin_status; /* Hairpin binding status. */
 };
-- 
1.8.3.1


^ permalink raw reply	[flat|nested] 57+ messages in thread

* [dpdk-dev] [PATCH v2 09/17] common/mlx5: enhance page size configuration
  2020-12-29  8:52   ` [dpdk-dev] [PATCH v2 00/17] common/mlx5: share DevX resources creations Michael Baum
                       ` (7 preceding siblings ...)
  2020-12-29  8:52     ` [dpdk-dev] [PATCH v2 08/17] net/mlx5: move Rx " Michael Baum
@ 2020-12-29  8:52     ` Michael Baum
  2020-12-29  8:52     ` [dpdk-dev] [PATCH v2 10/17] common/mlx5: share DevX SQ creation Michael Baum
                       ` (7 subsequent siblings)
  16 siblings, 0 replies; 57+ messages in thread
From: Michael Baum @ 2020-12-29  8:52 UTC (permalink / raw)
  To: dev; +Cc: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko

The PRM expresses page sizes in 4KB units, so the log_wq_pg_sz
attribute must be reduced by the 4KB page shift before it is written
to the device.

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
 drivers/common/mlx5/mlx5_devx_cmds.c | 53 ++++++++++++++++--------------------
 drivers/net/mlx5/mlx5_devx.c         | 13 +++++----
 2 files changed, 30 insertions(+), 36 deletions(-)
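
A worked sketch of the rebasing: the PRM log page size fields count in
4KB units (MLX5_ADAPTER_PAGE_SHIFT, assumed 12 for the usual 4KB adapter
page):

    /* Host page 4KB:  log_page_size = 12 -> PRM value 12 - 12 = 0.
     * Host page 64KB: log_page_size = 16 -> PRM value 16 - 12 = 4.
     * A zero (unset) attribute must not underflow, hence the guard:
     */
    if (attr->log_page_size > MLX5_ADAPTER_PAGE_SHIFT)
        MLX5_SET(cqc, cqctx, log_page_size,
                 attr->log_page_size - MLX5_ADAPTER_PAGE_SHIFT);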

diff --git a/drivers/common/mlx5/mlx5_devx_cmds.c b/drivers/common/mlx5/mlx5_devx_cmds.c
index 9c1d188..09e204b 100644
--- a/drivers/common/mlx5/mlx5_devx_cmds.c
+++ b/drivers/common/mlx5/mlx5_devx_cmds.c
@@ -268,9 +268,8 @@ struct mlx5_devx_obj *
 	MLX5_SET(mkc, mkc, mkey_7_0, attr->umem_id & 0xFF);
 	MLX5_SET(mkc, mkc, translations_octword_size, translation_size);
 	MLX5_SET(mkc, mkc, relaxed_ordering_write,
-		attr->relaxed_ordering_write);
-	MLX5_SET(mkc, mkc, relaxed_ordering_read,
-		attr->relaxed_ordering_read);
+		 attr->relaxed_ordering_write);
+	MLX5_SET(mkc, mkc, relaxed_ordering_read, attr->relaxed_ordering_read);
 	MLX5_SET64(mkc, mkc, start_addr, attr->addr);
 	MLX5_SET64(mkc, mkc, len, attr->size);
 	mkey->obj = mlx5_glue->devx_obj_create(ctx, in, in_size_dw * 4, out,
@@ -308,7 +307,7 @@ struct mlx5_devx_obj *
 	if (status) {
 		int syndrome = MLX5_GET(query_flow_counter_out, out, syndrome);
 
-		DRV_LOG(ERR, "Bad devX status %x, syndrome = %x", status,
+		DRV_LOG(ERR, "Bad DevX status %x, syndrome = %x", status,
 			syndrome);
 	}
 	return status;
@@ -374,8 +373,7 @@ struct mlx5_devx_obj *
 	syndrome = MLX5_GET(query_nic_vport_context_out, out, syndrome);
 	if (status) {
 		DRV_LOG(DEBUG, "Failed to query NIC vport context, "
-			"status %x, syndrome = %x",
-			status, syndrome);
+			"status %x, syndrome = %x", status, syndrome);
 		return -1;
 	}
 	vctx = MLX5_ADDR_OF(query_nic_vport_context_out, out,
@@ -662,8 +660,7 @@ struct mlx5_devx_obj *
 	syndrome = MLX5_GET(query_hca_cap_out, out, syndrome);
 	if (status) {
 		DRV_LOG(DEBUG, "Failed to query devx HCA capabilities, "
-			"status %x, syndrome = %x",
-			status, syndrome);
+			"status %x, syndrome = %x", status, syndrome);
 		return -1;
 	}
 	hcattr = MLX5_ADDR_OF(query_hca_cap_out, out, capability);
@@ -683,11 +680,11 @@ struct mlx5_devx_obj *
 		(cmd_hca_cap, hcattr, log_min_hairpin_wq_data_sz);
 	attr->vhca_id = MLX5_GET(cmd_hca_cap, hcattr, vhca_id);
 	attr->relaxed_ordering_write = MLX5_GET(cmd_hca_cap, hcattr,
-			relaxed_ordering_write);
+						relaxed_ordering_write);
 	attr->relaxed_ordering_read = MLX5_GET(cmd_hca_cap, hcattr,
-			relaxed_ordering_read);
+					       relaxed_ordering_read);
 	attr->access_register_user = MLX5_GET(cmd_hca_cap, hcattr,
-			access_register_user);
+					      access_register_user);
 	attr->eth_net_offloads = MLX5_GET(cmd_hca_cap, hcattr,
 					  eth_net_offloads);
 	attr->eth_virt = MLX5_GET(cmd_hca_cap, hcattr, eth_virt);
@@ -730,8 +727,7 @@ struct mlx5_devx_obj *
 			goto error;
 		if (status) {
 			DRV_LOG(DEBUG, "Failed to query devx QOS capabilities,"
-				" status %x, syndrome = %x",
-				status, syndrome);
+				" status %x, syndrome = %x", status, syndrome);
 			return -1;
 		}
 		hcattr = MLX5_ADDR_OF(query_hca_cap_out, out, capability);
@@ -761,17 +757,14 @@ struct mlx5_devx_obj *
 		 MLX5_GET_HCA_CAP_OP_MOD_NIC_FLOW_TABLE |
 		 MLX5_HCA_CAP_OPMOD_GET_CUR);
 
-	rc = mlx5_glue->devx_general_cmd(ctx,
-					 in, sizeof(in),
-					 out, sizeof(out));
+	rc = mlx5_glue->devx_general_cmd(ctx, in, sizeof(in), out, sizeof(out));
 	if (rc)
 		goto error;
 	status = MLX5_GET(query_hca_cap_out, out, status);
 	syndrome = MLX5_GET(query_hca_cap_out, out, syndrome);
 	if (status) {
 		DRV_LOG(DEBUG, "Failed to query devx HCA capabilities, "
-			"status %x, syndrome = %x",
-			status, syndrome);
+			"status %x, syndrome = %x", status, syndrome);
 		attr->log_max_ft_sampler_num = 0;
 		return -1;
 	}
@@ -788,9 +781,7 @@ struct mlx5_devx_obj *
 		 MLX5_GET_HCA_CAP_OP_MOD_ETHERNET_OFFLOAD_CAPS |
 		 MLX5_HCA_CAP_OPMOD_GET_CUR);
 
-	rc = mlx5_glue->devx_general_cmd(ctx,
-					 in, sizeof(in),
-					 out, sizeof(out));
+	rc = mlx5_glue->devx_general_cmd(ctx, in, sizeof(in), out, sizeof(out));
 	if (rc) {
 		attr->eth_net_offloads = 0;
 		goto error;
@@ -799,8 +790,7 @@ struct mlx5_devx_obj *
 	syndrome = MLX5_GET(query_hca_cap_out, out, syndrome);
 	if (status) {
 		DRV_LOG(DEBUG, "Failed to query devx HCA capabilities, "
-			"status %x, syndrome = %x",
-			status, syndrome);
+			"status %x, syndrome = %x", status, syndrome);
 		attr->eth_net_offloads = 0;
 		return -1;
 	}
@@ -916,7 +906,9 @@ struct mlx5_devx_obj *
 	MLX5_SET(wq, wq_ctx, hw_counter, wq_attr->hw_counter);
 	MLX5_SET(wq, wq_ctx, sw_counter, wq_attr->sw_counter);
 	MLX5_SET(wq, wq_ctx, log_wq_stride, wq_attr->log_wq_stride);
-	MLX5_SET(wq, wq_ctx, log_wq_pg_sz, wq_attr->log_wq_pg_sz);
+	if (wq_attr->log_wq_pg_sz > MLX5_ADAPTER_PAGE_SHIFT)
+		MLX5_SET(wq, wq_ctx, log_wq_pg_sz,
+			 wq_attr->log_wq_pg_sz - MLX5_ADAPTER_PAGE_SHIFT);
 	MLX5_SET(wq, wq_ctx, log_wq_sz, wq_attr->log_wq_sz);
 	MLX5_SET(wq, wq_ctx, dbr_umem_valid, wq_attr->dbr_umem_valid);
 	MLX5_SET(wq, wq_ctx, wq_umem_valid, wq_attr->wq_umem_valid);
@@ -1562,13 +1554,13 @@ struct mlx5_devx_obj *
 	MLX5_SET(cqc, cqctx, cc, attr->use_first_only);
 	MLX5_SET(cqc, cqctx, oi, attr->overrun_ignore);
 	MLX5_SET(cqc, cqctx, log_cq_size, attr->log_cq_size);
-	MLX5_SET(cqc, cqctx, log_page_size, attr->log_page_size -
-		 MLX5_ADAPTER_PAGE_SHIFT);
+	if (attr->log_page_size > MLX5_ADAPTER_PAGE_SHIFT)
+		MLX5_SET(cqc, cqctx, log_page_size,
+			 attr->log_page_size - MLX5_ADAPTER_PAGE_SHIFT);
 	MLX5_SET(cqc, cqctx, c_eqn, attr->eqn);
 	MLX5_SET(cqc, cqctx, uar_page, attr->uar_page_id);
 	MLX5_SET(cqc, cqctx, cqe_comp_en, !!attr->cqe_comp_en);
-	MLX5_SET(cqc, cqctx, mini_cqe_res_format,
-		 attr->mini_cqe_res_format);
+	MLX5_SET(cqc, cqctx, mini_cqe_res_format, attr->mini_cqe_res_format);
 	MLX5_SET(cqc, cqctx, mini_cqe_res_format_ext,
 		 attr->mini_cqe_res_format_ext);
 	MLX5_SET(cqc, cqctx, cqe_sz, attr->cqe_size);
@@ -1798,8 +1790,9 @@ struct mlx5_devx_obj *
 	if (attr->uar_index) {
 		MLX5_SET(qpc, qpc, pm_state, MLX5_QP_PM_MIGRATED);
 		MLX5_SET(qpc, qpc, uar_page, attr->uar_index);
-		MLX5_SET(qpc, qpc, log_page_size, attr->log_page_size -
-			 MLX5_ADAPTER_PAGE_SHIFT);
+		if (attr->log_page_size > MLX5_ADAPTER_PAGE_SHIFT)
+			MLX5_SET(qpc, qpc, log_page_size,
+				 attr->log_page_size - MLX5_ADAPTER_PAGE_SHIFT);
 		if (attr->sq_size) {
 			MLX5_ASSERT(RTE_IS_POWER_OF_2(attr->sq_size));
 			MLX5_SET(qpc, qpc, cqn_snd, attr->cqn);
diff --git a/drivers/net/mlx5/mlx5_devx.c b/drivers/net/mlx5/mlx5_devx.c
index 6ad70f2..fe103a7 100644
--- a/drivers/net/mlx5/mlx5_devx.c
+++ b/drivers/net/mlx5/mlx5_devx.c
@@ -320,7 +320,13 @@
 	uint32_t log_wqe_size = 0;
 	void *buf = NULL;
 	struct mlx5_devx_obj *rq;
+	size_t alignment = MLX5_WQE_BUF_ALIGNMENT;
 
+	if (alignment == (size_t)-1) {
+		DRV_LOG(ERR, "Failed to get mem page size");
+		rte_errno = ENOMEM;
+		return NULL;
+	}
 	/* Fill RQ attributes. */
 	rq_attr.mem_rq_type = MLX5_RQC_MEM_RQ_TYPE_MEMORY_RQ_INLINE;
 	rq_attr.flush_in_error_en = 1;
@@ -347,15 +353,10 @@
 	log_wqe_size = log2above(wqe_size) + rxq_data->sges_n;
 	rq_attr.wq_attr.log_wq_stride = log_wqe_size;
 	rq_attr.wq_attr.log_wq_sz = rxq_data->elts_n - rxq_data->sges_n;
+	rq_attr.wq_attr.log_wq_pg_sz = log2above(alignment);
 	/* Calculate and allocate WQ memory space. */
 	wqe_size = 1 << log_wqe_size; /* round up power of two.*/
 	wq_size = wqe_n * wqe_size;
-	size_t alignment = MLX5_WQE_BUF_ALIGNMENT;
-	if (alignment == (size_t)-1) {
-		DRV_LOG(ERR, "Failed to get mem page size");
-		rte_errno = ENOMEM;
-		return NULL;
-	}
 	buf = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO, wq_size,
 			  alignment, rxq_ctrl->socket);
 	if (!buf)
-- 
1.8.3.1


^ permalink raw reply	[flat|nested] 57+ messages in thread

* [dpdk-dev] [PATCH v2 10/17] common/mlx5: share DevX SQ creation
  2020-12-29  8:52   ` [dpdk-dev] [PATCH v2 00/17] common/mlx5: share DevX resources creations Michael Baum
                       ` (8 preceding siblings ...)
  2020-12-29  8:52     ` [dpdk-dev] [PATCH v2 09/17] common/mlx5: enhance page size configuration Michael Baum
@ 2020-12-29  8:52     ` Michael Baum
  2020-12-29  8:52     ` [dpdk-dev] [PATCH v2 11/17] regex/mlx5: move DevX SQ creation to common Michael Baum
                       ` (6 subsequent siblings)
  16 siblings, 0 replies; 57+ messages in thread
From: Michael Baum @ 2020-12-29  8:52 UTC (permalink / raw)
  To: dev; +Cc: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko

The DevX SQ object is created in several places across several
different drivers.
In all of them almost all the details are the same, in particular the
allocation of the required resources.

Add a structure that contains all the resources, and provide creation
and release functions for it.

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
 drivers/common/mlx5/mlx5_common_devx.c | 122 +++++++++++++++++++++++++++++++++
 drivers/common/mlx5/mlx5_common_devx.h |  20 +++++-
 2 files changed, 140 insertions(+), 2 deletions(-)
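
A minimal usage sketch of the new helper (illustrative caller; ctx,
uar_page_id, pdn, cqn and socket stand for values the caller already owns):

    static int
    example_sq_create(void *ctx, struct mlx5_devx_sq *sq_obj,
                      uint16_t log_wqbb_n, uint32_t cqn,
                      uint32_t uar_page_id, uint32_t pdn, int socket)
    {
        struct mlx5_devx_create_sq_attr attr = {
            .state = MLX5_SQC_STATE_RST,
            .cqn = cqn,
            .wq_attr = (struct mlx5_devx_wq_attr){
                .uar_page = uar_page_id,
                .pd = pdn,
            },
        };
        struct mlx5_devx_modify_sq_attr msq_attr = {
            .state = MLX5_SQC_STATE_RDY,
        };

        /* The helper fills all umem, doorbell, stride and size fields. */
        if (mlx5_devx_sq_create(ctx, sq_obj, log_wqbb_n, &attr, socket))
            return -rte_errno;
        /* The SQ comes back in RST state; the caller moves it to RDY. */
        if (mlx5_devx_cmd_modify_sq(sq_obj->sq, &msq_attr)) {
            mlx5_devx_sq_destroy(sq_obj);
            return -rte_errno;
        }
        return 0;
    }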

diff --git a/drivers/common/mlx5/mlx5_common_devx.c b/drivers/common/mlx5/mlx5_common_devx.c
index 324c6ea..652bc9a 100644
--- a/drivers/common/mlx5/mlx5_common_devx.c
+++ b/drivers/common/mlx5/mlx5_common_devx.c
@@ -155,3 +155,125 @@
 	rte_errno = ret;
 	return -rte_errno;
 }
+
+/**
+ * Destroy DevX Send Queue.
+ *
+ * @param[in] sq
+ *   DevX SQ to destroy.
+ */
+void
+mlx5_devx_sq_destroy(struct mlx5_devx_sq *sq)
+{
+	if (sq->sq)
+		claim_zero(mlx5_devx_cmd_destroy(sq->sq));
+	if (sq->umem_obj)
+		claim_zero(mlx5_glue->devx_umem_dereg(sq->umem_obj));
+	if (sq->umem_buf)
+		mlx5_free((void *)(uintptr_t)sq->umem_buf);
+}
+
+/**
+ * Create Send Queue using DevX API.
+ *
+ * Get a pointer to partially initialized attributes structure, and updates the
+ * following fields:
+ *   wq_type
+ *   wq_umem_valid
+ *   wq_umem_id
+ *   wq_umem_offset
+ *   dbr_umem_valid
+ *   dbr_umem_id
+ *   dbr_addr
+ *   log_wq_stride
+ *   log_wq_sz
+ *   log_wq_pg_sz
+ * All other fields are updated by caller.
+ *
+ * @param[in] ctx
+ *   Context returned from mlx5 open_device() glue function.
+ * @param[in/out] sq_obj
+ *   Pointer to SQ to create.
+ * @param[in] log_wqbb_n
+ *   Log of number of WQBBs in queue.
+ * @param[in] attr
+ *   Pointer to SQ attributes structure.
+ * @param[in] socket
+ *   Socket to use for allocation.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+int
+mlx5_devx_sq_create(void *ctx, struct mlx5_devx_sq *sq_obj, uint16_t log_wqbb_n,
+		    struct mlx5_devx_create_sq_attr *attr, int socket)
+{
+	struct mlx5_devx_obj *sq = NULL;
+	struct mlx5dv_devx_umem *umem_obj = NULL;
+	void *umem_buf = NULL;
+	size_t page_size = rte_mem_page_size();
+	size_t alignment = MLX5_WQE_BUF_ALIGNMENT;
+	uint32_t umem_size, umem_dbrec;
+	uint16_t sq_size = 1 << log_wqbb_n;
+	int ret;
+
+	if (page_size == (size_t)-1 || alignment == (size_t)-1) {
+		DRV_LOG(ERR, "Failed to get page_size.");
+		rte_errno = ENOMEM;
+		return -rte_errno;
+	}
+	/* Allocate memory buffer for WQEs and doorbell record. */
+	umem_size = MLX5_WQE_SIZE * sq_size;
+	umem_dbrec = RTE_ALIGN(umem_size, MLX5_DBR_SIZE);
+	umem_size += MLX5_DBR_SIZE;
+	umem_buf = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO, umem_size,
+			       alignment, socket);
+	if (!umem_buf) {
+		DRV_LOG(ERR, "Failed to allocate memory for SQ.");
+		rte_errno = ENOMEM;
+		return -rte_errno;
+	}
+	/* Register allocated buffer in user space with DevX. */
+	umem_obj = mlx5_glue->devx_umem_reg(ctx, (void *)(uintptr_t)umem_buf,
+					    umem_size, IBV_ACCESS_LOCAL_WRITE);
+	if (!umem_obj) {
+		DRV_LOG(ERR, "Failed to register umem for SQ.");
+		rte_errno = errno;
+		goto error;
+	}
+	/* Fill attributes for SQ object creation. */
+	attr->wq_attr.wq_type = MLX5_WQ_TYPE_CYCLIC;
+	attr->wq_attr.wq_umem_valid = 1;
+	attr->wq_attr.wq_umem_id = mlx5_os_get_umem_id(umem_obj);
+	attr->wq_attr.wq_umem_offset = 0;
+	attr->wq_attr.dbr_umem_valid = 1;
+	attr->wq_attr.dbr_umem_id = attr->wq_attr.wq_umem_id;
+	attr->wq_attr.dbr_addr = umem_dbrec;
+	attr->wq_attr.log_wq_stride = rte_log2_u32(MLX5_WQE_SIZE);
+	attr->wq_attr.log_wq_sz = log_wqbb_n;
+	attr->wq_attr.log_wq_pg_sz = rte_log2_u32(page_size);
+	/* Create send queue object with DevX. */
+	sq = mlx5_devx_cmd_create_sq(ctx, attr);
+	if (!sq) {
+		DRV_LOG(ERR, "Can't create DevX SQ object.");
+		rte_errno = ENOMEM;
+		goto error;
+	}
+	sq_obj->umem_buf = umem_buf;
+	sq_obj->umem_obj = umem_obj;
+	sq_obj->sq = sq;
+	sq_obj->db_rec = RTE_PTR_ADD(sq_obj->umem_buf, umem_dbrec);
+	return 0;
+error:
+	ret = rte_errno;
+	if (sq)
+		claim_zero(mlx5_devx_cmd_destroy(sq));
+	if (umem_obj)
+		claim_zero(mlx5_glue->devx_umem_dereg(umem_obj));
+	if (umem_buf)
+		mlx5_free((void *)(uintptr_t)umem_buf);
+	rte_errno = ret;
+	return -rte_errno;
+}
+
+
diff --git a/drivers/common/mlx5/mlx5_common_devx.h b/drivers/common/mlx5/mlx5_common_devx.h
index 31cb804..88d520b 100644
--- a/drivers/common/mlx5/mlx5_common_devx.h
+++ b/drivers/common/mlx5/mlx5_common_devx.h
@@ -18,11 +18,27 @@ struct mlx5_devx_cq {
 	volatile uint32_t *db_rec; /* The CQ doorbell record. */
 };
 
+/* DevX Send Queue structure. */
+struct mlx5_devx_sq {
+	struct mlx5_devx_obj *sq; /* The SQ DevX object. */
+	struct mlx5dv_devx_umem *umem_obj; /* The SQ umem object. */
+	union {
+		volatile void *umem_buf;
+		volatile struct mlx5_wqe *wqes; /* The SQ ring buffer. */
+	};
+	volatile uint32_t *db_rec; /* The SQ doorbell record. */
+};
+
+
 /* mlx5_common_devx.c */
 
 void mlx5_devx_cq_destroy(struct mlx5_devx_cq *cq);
 int mlx5_devx_cq_create(void *ctx, struct mlx5_devx_cq *cq_obj,
-			uint16_t log_desc_n, struct mlx5_devx_cq_attr *attr,
-			int socket);
+			uint16_t log_desc_n,
+			struct mlx5_devx_cq_attr *attr, int socket);
+void mlx5_devx_sq_destroy(struct mlx5_devx_sq *sq);
+int mlx5_devx_sq_create(void *ctx, struct mlx5_devx_sq *sq_obj,
+			uint16_t log_wqbb_n,
+			struct mlx5_devx_create_sq_attr *attr, int socket);
 
 #endif /* RTE_PMD_MLX5_COMMON_DEVX_H_ */
-- 
1.8.3.1


^ permalink raw reply	[flat|nested] 57+ messages in thread

* [dpdk-dev] [PATCH v2 11/17] regex/mlx5: move DevX SQ creation to common
  2020-12-29  8:52   ` [dpdk-dev] [PATCH v2 00/17] common/mlx5: share DevX resources creations Michael Baum
                       ` (9 preceding siblings ...)
  2020-12-29  8:52     ` [dpdk-dev] [PATCH v2 10/17] common/mlx5: share DevX SQ creation Michael Baum
@ 2020-12-29  8:52     ` Michael Baum
  2020-12-29  8:52     ` [dpdk-dev] [PATCH v2 12/17] net/mlx5: move rearm and clock queue " Michael Baum
                       ` (5 subsequent siblings)
  16 siblings, 0 replies; 57+ messages in thread
From: Michael Baum @ 2020-12-29  8:52 UTC (permalink / raw)
  To: dev; +Cc: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko

Use the common function for DevX SQ creation.

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
 drivers/regex/mlx5/mlx5_regex.h          |   8 +-
 drivers/regex/mlx5/mlx5_regex_control.c  | 153 ++++++++++---------------------
 drivers/regex/mlx5/mlx5_regex_fastpath.c |  14 +--
 3 files changed, 55 insertions(+), 120 deletions(-)
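
After the conversion the per-queue doorbell and umem bookkeeping disappears
from struct mlx5_regex_sq; the fastpath reaches every handle through the
embedded object. Illustrative accesses (the MLX5_SND_DBR doorbell index is
an assumption based on the common send doorbell layout):

    volatile struct mlx5_wqe *wqes = sq->sq_obj.wqes;  /* WQE ring buffer */
    uint32_t sqn = sq->sq_obj.sq->id;                  /* SQ number */

    /* Ring the send doorbell record owned by the common object. */
    sq->sq_obj.db_rec[MLX5_SND_DBR] = rte_cpu_to_be_32((uint32_t)sq->db_pi);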

diff --git a/drivers/regex/mlx5/mlx5_regex.h b/drivers/regex/mlx5/mlx5_regex.h
index 9f7a388..7e1b2a9 100644
--- a/drivers/regex/mlx5/mlx5_regex.h
+++ b/drivers/regex/mlx5/mlx5_regex.h
@@ -18,15 +18,10 @@
 
 struct mlx5_regex_sq {
 	uint16_t log_nb_desc; /* Log 2 number of desc for this object. */
-	struct mlx5_devx_obj *obj; /* The SQ DevX object. */
-	int64_t dbr_offset; /* Door bell record offset. */
-	uint32_t dbr_umem; /* Door bell record umem id. */
-	uint8_t *wqe; /* The SQ ring buffer. */
-	struct mlx5dv_devx_umem *wqe_umem; /* SQ buffer umem. */
+	struct mlx5_devx_sq sq_obj; /* The SQ DevX object. */
 	size_t pi, db_pi;
 	size_t ci;
 	uint32_t sqn;
-	uint32_t *dbr;
 };
 
 struct mlx5_regex_cq {
@@ -73,7 +68,6 @@ struct mlx5_regex_priv {
 	uint32_t nb_engines; /* Number of RegEx engines. */
 	struct mlx5dv_devx_uar *uar; /* UAR object. */
 	struct ibv_pd *pd;
-	struct mlx5_dbr_page_list dbrpgs; /* Door-bell pages. */
 	struct mlx5_mr_share_cache mr_scache; /* Global shared MR cache. */
 };
 
diff --git a/drivers/regex/mlx5/mlx5_regex_control.c b/drivers/regex/mlx5/mlx5_regex_control.c
index ca6c0f5..df57fad 100644
--- a/drivers/regex/mlx5/mlx5_regex_control.c
+++ b/drivers/regex/mlx5/mlx5_regex_control.c
@@ -112,6 +112,27 @@
 #endif
 
 /**
+ * Destroy the SQ object.
+ *
+ * @param qp
+ *   Pointer to the QP element
+ * @param q_ind
+ *   The index of the queue.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+static int
+regex_ctrl_destroy_sq(struct mlx5_regex_qp *qp, uint16_t q_ind)
+{
+	struct mlx5_regex_sq *sq = &qp->sqs[q_ind];
+
+	mlx5_devx_sq_destroy(&sq->sq_obj);
+	memset(sq, 0, sizeof(*sq));
+	return 0;
+}
+
+/**
  * create the SQ object.
  *
  * @param priv
@@ -131,84 +152,42 @@
 		     uint16_t q_ind, uint16_t log_nb_desc)
 {
 #ifdef HAVE_IBV_FLOW_DV_SUPPORT
-	struct mlx5_devx_create_sq_attr attr = { 0 };
-	struct mlx5_devx_modify_sq_attr modify_attr = { 0 };
-	struct mlx5_devx_wq_attr *wq_attr = &attr.wq_attr;
-	struct mlx5_devx_dbr_page *dbr_page = NULL;
+	struct mlx5_devx_create_sq_attr attr = {
+		.user_index = q_ind,
+		.cqn = qp->cq.cq_obj.cq->id,
+		.wq_attr = (struct mlx5_devx_wq_attr){
+			.uar_page = priv->uar->page_id,
+		},
+	};
+	struct mlx5_devx_modify_sq_attr modify_attr = {
+		.state = MLX5_SQC_STATE_RDY,
+	};
 	struct mlx5_regex_sq *sq = &qp->sqs[q_ind];
-	void *buf = NULL;
-	uint32_t sq_size;
 	uint32_t pd_num = 0;
 	int ret;
 
 	sq->log_nb_desc = log_nb_desc;
-	sq_size = 1 << sq->log_nb_desc;
-	sq->dbr_offset = mlx5_get_dbr(priv->ctx, &priv->dbrpgs, &dbr_page);
-	if (sq->dbr_offset < 0) {
-		DRV_LOG(ERR, "Can't allocate sq door bell record.");
-		rte_errno  = ENOMEM;
-		goto error;
-	}
-	sq->dbr_umem = mlx5_os_get_umem_id(dbr_page->umem);
-	sq->dbr = (uint32_t *)((uintptr_t)dbr_page->dbrs +
-			       (uintptr_t)sq->dbr_offset);
-
-	buf = rte_calloc(NULL, 1, 64 * sq_size, 4096);
-	if (!buf) {
-		DRV_LOG(ERR, "Can't allocate wqe buffer.");
-		rte_errno  = ENOMEM;
-		goto error;
-	}
-	sq->wqe = buf;
-	sq->wqe_umem = mlx5_glue->devx_umem_reg(priv->ctx, buf, 64 * sq_size,
-						7);
 	sq->ci = 0;
 	sq->pi = 0;
-	if (!sq->wqe_umem) {
-		DRV_LOG(ERR, "Can't register wqe mem.");
-		rte_errno  = ENOMEM;
-		goto error;
-	}
-	attr.state = MLX5_SQC_STATE_RST;
-	attr.tis_lst_sz = 0;
-	attr.tis_num = 0;
-	attr.user_index = q_ind;
-	attr.cqn = qp->cq.cq_obj.cq->id;
-	wq_attr->uar_page = priv->uar->page_id;
-	regex_get_pdn(priv->pd, &pd_num);
-	wq_attr->pd = pd_num;
-	wq_attr->wq_type = MLX5_WQ_TYPE_CYCLIC;
-	wq_attr->dbr_umem_id = sq->dbr_umem;
-	wq_attr->dbr_addr = sq->dbr_offset;
-	wq_attr->dbr_umem_valid = 1;
-	wq_attr->wq_umem_id = mlx5_os_get_umem_id(sq->wqe_umem);
-	wq_attr->wq_umem_offset = 0;
-	wq_attr->wq_umem_valid = 1;
-	wq_attr->log_wq_stride = 6;
-	wq_attr->log_wq_sz = sq->log_nb_desc;
-	sq->obj = mlx5_devx_cmd_create_sq(priv->ctx, &attr);
-	if (!sq->obj) {
-		DRV_LOG(ERR, "Can't create sq object.");
-		rte_errno  = ENOMEM;
-		goto error;
+	ret = regex_get_pdn(priv->pd, &pd_num);
+	if (ret)
+		return ret;
+	attr.wq_attr.pd = pd_num;
+	ret = mlx5_devx_sq_create(priv->ctx, &sq->sq_obj, log_nb_desc, &attr,
+				  SOCKET_ID_ANY);
+	if (ret) {
+		DRV_LOG(ERR, "Can't create SQ object.");
+		rte_errno = ENOMEM;
+		return -rte_errno;
 	}
-	modify_attr.state = MLX5_SQC_STATE_RDY;
-	ret = mlx5_devx_cmd_modify_sq(sq->obj, &modify_attr);
+	ret = mlx5_devx_cmd_modify_sq(sq->sq_obj.sq, &modify_attr);
 	if (ret) {
-		DRV_LOG(ERR, "Can't change sq state to ready.");
-		rte_errno  = ENOMEM;
-		goto error;
+		DRV_LOG(ERR, "Can't change SQ state to ready.");
+		regex_ctrl_destroy_sq(qp, q_ind);
+		rte_errno = ENOMEM;
+		return -rte_errno;
 	}
-
 	return 0;
-error:
-	if (sq->wqe_umem)
-		mlx5_glue->devx_umem_dereg(sq->wqe_umem);
-	if (buf)
-		rte_free(buf);
-	if (sq->dbr_offset)
-		mlx5_release_dbr(&priv->dbrpgs, sq->dbr_umem, sq->dbr_offset);
-	return -rte_errno;
 #else
 	(void)priv;
 	(void)qp;
@@ -220,44 +199,6 @@
 }
 
 /**
- * Destroy the SQ object.
- *
- * @param priv
- *   Pointer to the priv object.
- * @param qp
- *   Pointer to the QP element
- * @param q_ind
- *   The index of the queue.
- *
- * @return
- *   0 on success, a negative errno value otherwise and rte_errno is set.
- */
-static int
-regex_ctrl_destroy_sq(struct mlx5_regex_priv *priv, struct mlx5_regex_qp *qp,
-		      uint16_t q_ind)
-{
-	struct mlx5_regex_sq *sq = &qp->sqs[q_ind];
-
-	if (sq->wqe_umem) {
-		mlx5_glue->devx_umem_dereg(sq->wqe_umem);
-		sq->wqe_umem = NULL;
-	}
-	if (sq->wqe) {
-		rte_free((void *)(uintptr_t)sq->wqe);
-		sq->wqe = NULL;
-	}
-	if (sq->dbr_offset) {
-		mlx5_release_dbr(&priv->dbrpgs, sq->dbr_umem, sq->dbr_offset);
-		sq->dbr_offset = -1;
-	}
-	if (sq->obj) {
-		mlx5_devx_cmd_destroy(sq->obj);
-		sq->obj = NULL;
-	}
-	return 0;
-}
-
-/**
  * Setup the qp.
  *
  * @param dev
@@ -329,7 +270,7 @@
 	mlx5_mr_btree_free(&qp->mr_ctrl.cache_bh);
 err_btree:
 	for (i = 0; i < nb_sq_config; i++)
-		regex_ctrl_destroy_sq(priv, qp, i);
+		regex_ctrl_destroy_sq(qp, i);
 	regex_ctrl_destroy_cq(&qp->cq);
 err_cq:
 	rte_free(qp->sqs);
diff --git a/drivers/regex/mlx5/mlx5_regex_fastpath.c b/drivers/regex/mlx5/mlx5_regex_fastpath.c
index 255fd40..cd0f9bd 100644
--- a/drivers/regex/mlx5/mlx5_regex_fastpath.c
+++ b/drivers/regex/mlx5/mlx5_regex_fastpath.c
@@ -110,12 +110,12 @@ struct mlx5_regex_job {
 				  &priv->mr_scache, &qp->mr_ctrl,
 				  rte_pktmbuf_mtod(op->mbuf, uintptr_t),
 				  !!(op->mbuf->ol_flags & EXT_ATTACHED_MBUF));
-	uint8_t *wqe = (uint8_t *)sq->wqe + wqe_offset;
+	uint8_t *wqe = (uint8_t *)(uintptr_t)sq->sq_obj.wqes + wqe_offset;
 	int ds = 4; /*  ctrl + meta + input + output */
 
 	set_wqe_ctrl_seg((struct mlx5_wqe_ctrl_seg *)wqe, sq->pi,
-			 MLX5_OPCODE_MMO, MLX5_OPC_MOD_MMO_REGEX, sq->obj->id,
-			 0, ds, 0, 0);
+			 MLX5_OPCODE_MMO, MLX5_OPC_MOD_MMO_REGEX,
+			 sq->sq_obj.sq->id, 0, ds, 0, 0);
 	set_regex_ctrl_seg(wqe + 12, 0, op->group_id0, op->group_id1,
 			   op->group_id2,
 			   op->group_id3, 0);
@@ -137,12 +137,12 @@ struct mlx5_regex_job {
 {
 	size_t wqe_offset = (sq->db_pi & (sq_size_get(sq) - 1)) *
 		MLX5_SEND_WQE_BB;
-	uint8_t *wqe = (uint8_t *)sq->wqe + wqe_offset;
+	uint8_t *wqe = (uint8_t *)(uintptr_t)sq->sq_obj.wqes + wqe_offset;
 	((struct mlx5_wqe_ctrl_seg *)wqe)->fm_ce_se = MLX5_WQE_CTRL_CQ_UPDATE;
 	uint64_t *doorbell_addr =
 		(uint64_t *)((uint8_t *)uar->base_addr + 0x800);
 	rte_io_wmb();
-	sq->dbr[MLX5_SND_DBR] = rte_cpu_to_be_32((sq->db_pi + 1) &
+	sq->sq_obj.db_rec[MLX5_SND_DBR] = rte_cpu_to_be_32((sq->db_pi + 1) &
 						 MLX5_REGEX_MAX_WQE_INDEX);
 	rte_wmb();
 	*doorbell_addr = *(volatile uint64_t *)wqe;
@@ -301,7 +301,7 @@ struct mlx5_regex_job {
 	uint32_t job_id;
 	for (sqid = 0; sqid < queue->nb_obj; sqid++) {
 		struct mlx5_regex_sq *sq = &queue->sqs[sqid];
-		uint8_t *wqe = (uint8_t *)sq->wqe;
+		uint8_t *wqe = (uint8_t *)(uintptr_t)sq->sq_obj.wqes;
 		for (entry = 0 ; entry < sq_size_get(sq); entry++) {
 			job_id = sqid * sq_size_get(sq) + entry;
 			struct mlx5_regex_job *job = &queue->jobs[job_id];
@@ -334,7 +334,7 @@ struct mlx5_regex_job {
 		return -ENOMEM;
 
 	qp->metadata = mlx5_glue->reg_mr(pd, ptr,
-					 MLX5_REGEX_METADATA_SIZE*qp->nb_desc,
+					 MLX5_REGEX_METADATA_SIZE * qp->nb_desc,
 					 IBV_ACCESS_LOCAL_WRITE);
 	if (!qp->metadata) {
 		DRV_LOG(ERR, "Failed to register metadata");
-- 
1.8.3.1


^ permalink raw reply	[flat|nested] 57+ messages in thread

* [dpdk-dev] [PATCH v2 12/17] net/mlx5: move rearm and clock queue SQ creation to common
  2020-12-29  8:52   ` [dpdk-dev] [PATCH v2 00/17] common/mlx5: share DevX resources creations Michael Baum
                       ` (10 preceding siblings ...)
  2020-12-29  8:52     ` [dpdk-dev] [PATCH v2 11/17] regex/mlx5: move DevX SQ creation to common Michael Baum
@ 2020-12-29  8:52     ` Michael Baum
  2020-12-29  8:52     ` [dpdk-dev] [PATCH v2 13/17] net/mlx5: move Tx " Michael Baum
                       ` (4 subsequent siblings)
  16 siblings, 0 replies; 57+ messages in thread
From: Michael Baum @ 2020-12-29  8:52 UTC (permalink / raw)
  To: dev; +Cc: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko

Use the common function for DevX SQ creation for the rearm and clock queues.
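
The attribute setup also moves to designated initializers; only cqn, which
depends on the CQ created just before the SQ, is filled in afterwards. A
minimal sketch of the pattern (tis_id, pdn and uar_page are assumed to be
values the caller already holds):

	struct mlx5_devx_create_sq_attr sq_attr = {
		.cd_master = 1,
		.state = MLX5_SQC_STATE_RST,
		.tis_lst_sz = 1,
		.tis_num = tis_id,
		.wq_attr = (struct mlx5_devx_wq_attr){
			.pd = pdn,
			.uar_page = uar_page,
		},
	};

	/* The CQ is created first; its id completes the SQ attributes. */
	sq_attr.cqn = wq->cq_obj.cq->id;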

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
 drivers/net/mlx5/mlx5.h      |   8 +--
 drivers/net/mlx5/mlx5_txpp.c | 147 +++++++++++--------------------------------
 2 files changed, 36 insertions(+), 119 deletions(-)

diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 9a59c26..192a5a7 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -611,15 +611,9 @@ struct mlx5_txpp_wq {
 	uint32_t cq_ci:24;
 	uint32_t arm_sn:2;
 	/* Send Queue related data.*/
-	struct mlx5_devx_obj *sq;
-	void *sq_umem;
-	union {
-		volatile void *sq_buf;
-		volatile struct mlx5_wqe *wqes;
-	};
+	struct mlx5_devx_sq sq_obj;
 	uint16_t sq_size; /* Number of WQEs in the queue. */
 	uint16_t sq_ci; /* Next WQE to execute. */
-	volatile uint32_t *sq_dbrec;
 };
 
 /* Tx packet pacing internal timestamp. */
diff --git a/drivers/net/mlx5/mlx5_txpp.c b/drivers/net/mlx5/mlx5_txpp.c
index 54ea572..b6ff7e0 100644
--- a/drivers/net/mlx5/mlx5_txpp.c
+++ b/drivers/net/mlx5/mlx5_txpp.c
@@ -121,12 +121,7 @@
 static void
 mlx5_txpp_destroy_send_queue(struct mlx5_txpp_wq *wq)
 {
-	if (wq->sq)
-		claim_zero(mlx5_devx_cmd_destroy(wq->sq));
-	if (wq->sq_umem)
-		claim_zero(mlx5_glue->devx_umem_dereg(wq->sq_umem));
-	if (wq->sq_buf)
-		mlx5_free((void *)(uintptr_t)wq->sq_buf);
+	mlx5_devx_sq_destroy(&wq->sq_obj);
 	mlx5_devx_cq_destroy(&wq->cq_obj);
 	memset(wq, 0, sizeof(*wq));
 }
@@ -155,6 +150,7 @@
 mlx5_txpp_doorbell_rearm_queue(struct mlx5_dev_ctx_shared *sh, uint16_t ci)
 {
 	struct mlx5_txpp_wq *wq = &sh->txpp.rearm_queue;
+	struct mlx5_wqe *wqe = (struct mlx5_wqe *)(uintptr_t)wq->sq_obj.wqes;
 	union {
 		uint32_t w32[2];
 		uint64_t w64;
@@ -163,11 +159,11 @@
 
 	wq->sq_ci = ci + 1;
 	cs.w32[0] = rte_cpu_to_be_32(rte_be_to_cpu_32
-		   (wq->wqes[ci & (wq->sq_size - 1)].ctrl[0]) | (ci - 1) << 8);
-	cs.w32[1] = wq->wqes[ci & (wq->sq_size - 1)].ctrl[1];
+			(wqe[ci & (wq->sq_size - 1)].ctrl[0]) | (ci - 1) << 8);
+	cs.w32[1] = wqe[ci & (wq->sq_size - 1)].ctrl[1];
 	/* Update SQ doorbell record with new SQ ci. */
 	rte_compiler_barrier();
-	*wq->sq_dbrec = rte_cpu_to_be_32(wq->sq_ci);
+	*wq->sq_obj.db_rec = rte_cpu_to_be_32(wq->sq_ci);
 	/* Make sure the doorbell record is updated. */
 	rte_wmb();
 	/* Write to doorbell register to start processing. */
@@ -180,7 +176,7 @@
 mlx5_txpp_fill_wqe_rearm_queue(struct mlx5_dev_ctx_shared *sh)
 {
 	struct mlx5_txpp_wq *wq = &sh->txpp.rearm_queue;
-	struct mlx5_wqe *wqe = (struct mlx5_wqe *)(uintptr_t)wq->wqes;
+	struct mlx5_wqe *wqe = (struct mlx5_wqe *)(uintptr_t)wq->sq_obj.wqes;
 	uint32_t i;
 
 	for (i = 0; i < wq->sq_size; i += 2) {
@@ -191,7 +187,7 @@
 		/* Build SEND_EN request with slave WQE index. */
 		cs = &wqe[i + 0].cseg;
 		cs->opcode = RTE_BE32(MLX5_OPCODE_SEND_EN | 0);
-		cs->sq_ds = rte_cpu_to_be_32((wq->sq->id << 8) | 2);
+		cs->sq_ds = rte_cpu_to_be_32((wq->sq_obj.sq->id << 8) | 2);
 		cs->flags = RTE_BE32(MLX5_COMP_ALWAYS <<
 				     MLX5_COMP_MODE_OFFSET);
 		cs->misc = RTE_BE32(0);
@@ -199,11 +195,12 @@
 		index = (i * MLX5_TXPP_REARM / 2 + MLX5_TXPP_REARM) &
 			((1 << MLX5_WQ_INDEX_WIDTH) - 1);
 		qs->max_index = rte_cpu_to_be_32(index);
-		qs->qpn_cqn = rte_cpu_to_be_32(sh->txpp.clock_queue.sq->id);
+		qs->qpn_cqn =
+			   rte_cpu_to_be_32(sh->txpp.clock_queue.sq_obj.sq->id);
 		/* Build WAIT request with slave CQE index. */
 		cs = &wqe[i + 1].cseg;
 		cs->opcode = RTE_BE32(MLX5_OPCODE_WAIT | 0);
-		cs->sq_ds = rte_cpu_to_be_32((wq->sq->id << 8) | 2);
+		cs->sq_ds = rte_cpu_to_be_32((wq->sq_obj.sq->id << 8) | 2);
 		cs->flags = RTE_BE32(MLX5_COMP_ONLY_ERR <<
 				     MLX5_COMP_MODE_OFFSET);
 		cs->misc = RTE_BE32(0);
@@ -220,7 +217,16 @@
 static int
 mlx5_txpp_create_rearm_queue(struct mlx5_dev_ctx_shared *sh)
 {
-	struct mlx5_devx_create_sq_attr sq_attr = { 0 };
+	struct mlx5_devx_create_sq_attr sq_attr = {
+		.cd_master = 1,
+		.state = MLX5_SQC_STATE_RST,
+		.tis_lst_sz = 1,
+		.tis_num = sh->tis->id,
+		.wq_attr = (struct mlx5_devx_wq_attr){
+			.pd = sh->pdn,
+			.uar_page = mlx5_os_get_devx_uar_page_id(sh->tx_uar),
+		},
+	};
 	struct mlx5_devx_modify_sq_attr msq_attr = { 0 };
 	struct mlx5_devx_cq_attr cq_attr = {
 		.cqe_size = (sizeof(struct mlx5_cqe) == 128) ?
@@ -228,15 +234,8 @@
 		.uar_page_id = mlx5_os_get_devx_uar_page_id(sh->tx_uar),
 	};
 	struct mlx5_txpp_wq *wq = &sh->txpp.rearm_queue;
-	size_t page_size;
-	uint32_t umem_size, umem_dbrec;
 	int ret;
 
-	page_size = rte_mem_page_size();
-	if (page_size == (size_t)-1) {
-		DRV_LOG(ERR, "Failed to get mem page size");
-		return -ENOMEM;
-	}
 	/* Create completion queue object for Rearm Queue. */
 	ret = mlx5_devx_cq_create(sh->ctx, &wq->cq_obj,
 				  log2above(MLX5_TXPP_REARM_CQ_SIZE), &cq_attr,
@@ -247,63 +246,25 @@
 	}
 	wq->cq_ci = 0;
 	wq->arm_sn = 0;
-	/*
-	 * Allocate memory buffer for Send Queue WQEs.
-	 * There should be no WQE leftovers in the cyclic queue.
-	 */
 	wq->sq_size = MLX5_TXPP_REARM_SQ_SIZE;
 	MLX5_ASSERT(wq->sq_size == (1 << log2above(wq->sq_size)));
-	umem_size =  MLX5_WQE_SIZE * wq->sq_size;
-	umem_dbrec = RTE_ALIGN(umem_size, MLX5_DBR_SIZE);
-	umem_size += MLX5_DBR_SIZE;
-	wq->sq_buf = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO, umem_size,
-				 page_size, sh->numa_node);
-	if (!wq->sq_buf) {
-		DRV_LOG(ERR, "Failed to allocate memory for Rearm Queue.");
-		rte_errno = ENOMEM;
-		goto error;
-	}
-	/* Register allocated buffer in user space with DevX. */
-	wq->sq_umem = mlx5_glue->devx_umem_reg(sh->ctx,
-					       (void *)(uintptr_t)wq->sq_buf,
-					       umem_size,
-					       IBV_ACCESS_LOCAL_WRITE);
-	if (!wq->sq_umem) {
-		rte_errno = errno;
-		DRV_LOG(ERR, "Failed to register umem for Rearm Queue.");
-		goto error;
-	}
 	/* Create send queue object for Rearm Queue. */
-	sq_attr.state = MLX5_SQC_STATE_RST;
-	sq_attr.tis_lst_sz = 1;
-	sq_attr.tis_num = sh->tis->id;
 	sq_attr.cqn = wq->cq_obj.cq->id;
-	sq_attr.cd_master = 1;
-	sq_attr.wq_attr.uar_page = mlx5_os_get_devx_uar_page_id(sh->tx_uar);
-	sq_attr.wq_attr.wq_type = MLX5_WQ_TYPE_CYCLIC;
-	sq_attr.wq_attr.pd = sh->pdn;
-	sq_attr.wq_attr.log_wq_stride = rte_log2_u32(MLX5_WQE_SIZE);
-	sq_attr.wq_attr.log_wq_sz = rte_log2_u32(wq->sq_size);
-	sq_attr.wq_attr.dbr_umem_valid = 1;
-	sq_attr.wq_attr.dbr_addr = umem_dbrec;
-	sq_attr.wq_attr.dbr_umem_id = mlx5_os_get_umem_id(wq->sq_umem);
-	sq_attr.wq_attr.wq_umem_valid = 1;
-	sq_attr.wq_attr.wq_umem_id = mlx5_os_get_umem_id(wq->sq_umem);
-	sq_attr.wq_attr.wq_umem_offset = 0;
-	wq->sq = mlx5_devx_cmd_create_sq(sh->ctx, &sq_attr);
-	if (!wq->sq) {
+	/* There should be no WQE leftovers in the cyclic queue. */
+	ret = mlx5_devx_sq_create(sh->ctx, &wq->sq_obj,
+				  log2above(MLX5_TXPP_REARM_SQ_SIZE), &sq_attr,
+				  sh->numa_node);
+	if (ret) {
 		rte_errno = errno;
 		DRV_LOG(ERR, "Failed to create SQ for Rearm Queue.");
 		goto error;
 	}
-	wq->sq_dbrec = RTE_PTR_ADD(wq->sq_buf, umem_dbrec +
-				   MLX5_SND_DBR * sizeof(uint32_t));
 	/* Build the WQEs in the Send Queue before goto Ready state. */
 	mlx5_txpp_fill_wqe_rearm_queue(sh);
 	/* Change queue state to ready. */
 	msq_attr.sq_state = MLX5_SQC_STATE_RST;
 	msq_attr.state = MLX5_SQC_STATE_RDY;
-	ret = mlx5_devx_cmd_modify_sq(wq->sq, &msq_attr);
+	ret = mlx5_devx_cmd_modify_sq(wq->sq_obj.sq, &msq_attr);
 	if (ret) {
 		DRV_LOG(ERR, "Failed to set SQ ready state Rearm Queue.");
 		goto error;
@@ -320,7 +281,7 @@
 mlx5_txpp_fill_wqe_clock_queue(struct mlx5_dev_ctx_shared *sh)
 {
 	struct mlx5_txpp_wq *wq = &sh->txpp.clock_queue;
-	struct mlx5_wqe *wqe = (struct mlx5_wqe *)(uintptr_t)wq->wqes;
+	struct mlx5_wqe *wqe = (struct mlx5_wqe *)(uintptr_t)wq->sq_obj.wqes;
 	struct mlx5_wqe_cseg *cs = &wqe->cseg;
 	uint32_t wqe_size, opcode, i;
 	uint8_t *dst;
@@ -338,7 +299,7 @@
 		opcode = MLX5_OPCODE_NOP;
 	}
 	cs->opcode = rte_cpu_to_be_32(opcode | 0); /* Index is ignored. */
-	cs->sq_ds = rte_cpu_to_be_32((wq->sq->id << 8) |
+	cs->sq_ds = rte_cpu_to_be_32((wq->sq_obj.sq->id << 8) |
 				     (wqe_size / MLX5_WSEG_SIZE));
 	cs->flags = RTE_BE32(MLX5_COMP_ALWAYS << MLX5_COMP_MODE_OFFSET);
 	cs->misc = RTE_BE32(0);
@@ -407,10 +368,11 @@
 	}
 wcopy:
 	/* Duplicate the pattern to the next WQEs. */
-	dst = (uint8_t *)(uintptr_t)wq->sq_buf;
+	dst = (uint8_t *)(uintptr_t)wq->sq_obj.umem_buf;
 	for (i = 1; i < MLX5_TXPP_CLKQ_SIZE; i++) {
 		dst += wqe_size;
-		rte_memcpy(dst, (void *)(uintptr_t)wq->sq_buf, wqe_size);
+		rte_memcpy(dst, (void *)(uintptr_t)wq->sq_obj.umem_buf,
+			   wqe_size);
 	}
 }
 
@@ -428,15 +390,8 @@
 		.uar_page_id = mlx5_os_get_devx_uar_page_id(sh->tx_uar),
 	};
 	struct mlx5_txpp_wq *wq = &sh->txpp.clock_queue;
-	size_t page_size;
-	uint32_t umem_size, umem_dbrec;
 	int ret;
 
-	page_size = rte_mem_page_size();
-	if (page_size == (size_t)-1) {
-		DRV_LOG(ERR, "Failed to get mem page size");
-		return -ENOMEM;
-	}
 	sh->txpp.tsa = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO,
 				   MLX5_TXPP_REARM_SQ_SIZE *
 				   sizeof(struct mlx5_txpp_ts),
@@ -469,26 +424,6 @@
 	}
 	/* There should not be WQE leftovers in the cyclic queue. */
 	MLX5_ASSERT(wq->sq_size == (1 << log2above(wq->sq_size)));
-	umem_size =  MLX5_WQE_SIZE * wq->sq_size;
-	umem_dbrec = RTE_ALIGN(umem_size, MLX5_DBR_SIZE);
-	umem_size += MLX5_DBR_SIZE;
-	wq->sq_buf = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO, umem_size,
-				 page_size, sh->numa_node);
-	if (!wq->sq_buf) {
-		DRV_LOG(ERR, "Failed to allocate memory for Clock Queue.");
-		rte_errno = ENOMEM;
-		goto error;
-	}
-	/* Register allocated buffer in user space with DevX. */
-	wq->sq_umem = mlx5_glue->devx_umem_reg(sh->ctx,
-					       (void *)(uintptr_t)wq->sq_buf,
-					       umem_size,
-					       IBV_ACCESS_LOCAL_WRITE);
-	if (!wq->sq_umem) {
-		rte_errno = errno;
-		DRV_LOG(ERR, "Failed to register umem for Clock Queue.");
-		goto error;
-	}
 	/* Create send queue object for Clock Queue. */
 	if (sh->txpp.test) {
 		sq_attr.tis_lst_sz = 1;
@@ -499,37 +434,25 @@
 		sq_attr.non_wire = 1;
 		sq_attr.static_sq_wq = 1;
 	}
-	sq_attr.state = MLX5_SQC_STATE_RST;
 	sq_attr.cqn = wq->cq_obj.cq->id;
 	sq_attr.packet_pacing_rate_limit_index = sh->txpp.pp_id;
 	sq_attr.wq_attr.cd_slave = 1;
 	sq_attr.wq_attr.uar_page = mlx5_os_get_devx_uar_page_id(sh->tx_uar);
-	sq_attr.wq_attr.wq_type = MLX5_WQ_TYPE_CYCLIC;
 	sq_attr.wq_attr.pd = sh->pdn;
-	sq_attr.wq_attr.log_wq_stride = rte_log2_u32(MLX5_WQE_SIZE);
-	sq_attr.wq_attr.log_wq_sz = rte_log2_u32(wq->sq_size);
-	sq_attr.wq_attr.dbr_umem_valid = 1;
-	sq_attr.wq_attr.dbr_addr = umem_dbrec;
-	sq_attr.wq_attr.dbr_umem_id = mlx5_os_get_umem_id(wq->sq_umem);
-	sq_attr.wq_attr.wq_umem_valid = 1;
-	sq_attr.wq_attr.wq_umem_id = mlx5_os_get_umem_id(wq->sq_umem);
-	/* umem_offset must be zero for static_sq_wq queue. */
-	sq_attr.wq_attr.wq_umem_offset = 0;
-	wq->sq = mlx5_devx_cmd_create_sq(sh->ctx, &sq_attr);
-	if (!wq->sq) {
+	ret = mlx5_devx_sq_create(sh->ctx, &wq->sq_obj, log2above(wq->sq_size),
+				  &sq_attr, sh->numa_node);
+	if (ret) {
 		rte_errno = errno;
 		DRV_LOG(ERR, "Failed to create SQ for Clock Queue.");
 		goto error;
 	}
-	wq->sq_dbrec = RTE_PTR_ADD(wq->sq_buf, umem_dbrec +
-				   MLX5_SND_DBR * sizeof(uint32_t));
 	/* Build the WQEs in the Send Queue before goto Ready state. */
 	mlx5_txpp_fill_wqe_clock_queue(sh);
 	/* Change queue state to ready. */
 	msq_attr.sq_state = MLX5_SQC_STATE_RST;
 	msq_attr.state = MLX5_SQC_STATE_RDY;
 	wq->sq_ci = 0;
-	ret = mlx5_devx_cmd_modify_sq(wq->sq, &msq_attr);
+	ret = mlx5_devx_cmd_modify_sq(wq->sq_obj.sq, &msq_attr);
 	if (ret) {
 		DRV_LOG(ERR, "Failed to set SQ ready state Clock Queue.");
 		goto error;
-- 
1.8.3.1


^ permalink raw reply	[flat|nested] 57+ messages in thread

* [dpdk-dev] [PATCH v2 13/17] net/mlx5: move Tx SQ creation to common
  2020-12-29  8:52   ` [dpdk-dev] [PATCH v2 00/17] common/mlx5: share DevX resources creations Michael Baum
                       ` (11 preceding siblings ...)
  2020-12-29  8:52     ` [dpdk-dev] [PATCH v2 12/17] net/mlx5: move rearm and clock queue " Michael Baum
@ 2020-12-29  8:52     ` Michael Baum
  2020-12-29  8:52     ` [dpdk-dev] [PATCH v2 14/17] net/mlx5: move ASO " Michael Baum
                       ` (3 subsequent siblings)
  16 siblings, 0 replies; 57+ messages in thread
From: Michael Baum @ 2020-12-29  8:52 UTC (permalink / raw)
  To: dev; +Cc: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko

Use the common function for Tx SQ creation.
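
The WQE count is now computed by the caller and passed to the resource helper
in log form, clamped to the device limit. A worked example with hypothetical
values elts_n = 10 and max_qp_wr = 512:

	/* 1 << 10 = 1024 requested WQEs, capped by the device at 512. */
	uint32_t wqe_n = RTE_MIN(1UL << txq_data->elts_n,
				 (uint32_t)priv->sh->device_attr.max_qp_wr);
	uint16_t log_desc_n = log2above(wqe_n);	/* -> 9, i.e. 512 WQEs. */

	ret = mlx5_txq_create_devx_sq_resources(dev, idx, log_desc_n);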

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
 drivers/net/mlx5/mlx5.h      |   8 +--
 drivers/net/mlx5/mlx5_devx.c | 160 ++++++++++---------------------------------
 2 files changed, 40 insertions(+), 128 deletions(-)

diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 192a5a7..6977eac 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -837,11 +837,9 @@ struct mlx5_txq_obj {
 		struct {
 			struct rte_eth_dev *dev;
 			struct mlx5_devx_cq cq_obj;
-			struct mlx5_devx_obj *sq_devx;
-			void *sq_umem;
-			void *sq_buf;
-			int64_t sq_dbrec_offset;
-			struct mlx5_devx_dbr_page *sq_dbrec_page;
+			/* DevX CQ object and its resources. */
+			struct mlx5_devx_sq sq_obj;
+			/* DevX SQ object and its resources. */
 		};
 	};
 };
diff --git a/drivers/net/mlx5/mlx5_devx.c b/drivers/net/mlx5/mlx5_devx.c
index fe103a7..4154c52 100644
--- a/drivers/net/mlx5/mlx5_devx.c
+++ b/drivers/net/mlx5/mlx5_devx.c
@@ -115,7 +115,7 @@
 		else
 			msq_attr.sq_state = MLX5_SQC_STATE_RDY;
 		msq_attr.state = MLX5_SQC_STATE_RST;
-		ret = mlx5_devx_cmd_modify_sq(obj->sq_devx, &msq_attr);
+		ret = mlx5_devx_cmd_modify_sq(obj->sq_obj.sq, &msq_attr);
 		if (ret) {
 			DRV_LOG(ERR, "Cannot change the Tx SQ state to RESET"
 				" %s", strerror(errno));
@@ -127,7 +127,7 @@
 		/* Change queue state to ready. */
 		msq_attr.sq_state = MLX5_SQC_STATE_RST;
 		msq_attr.state = MLX5_SQC_STATE_RDY;
-		ret = mlx5_devx_cmd_modify_sq(obj->sq_devx, &msq_attr);
+		ret = mlx5_devx_cmd_modify_sq(obj->sq_obj.sq, &msq_attr);
 		if (ret) {
 			DRV_LOG(ERR, "Cannot change the Tx SQ state to READY"
 				" %s", strerror(errno));
@@ -1056,36 +1056,6 @@
 
 #ifdef HAVE_MLX5DV_DEVX_UAR_OFFSET
 /**
- * Release DevX SQ resources.
- *
- * @param txq_obj
- *   DevX Tx queue object.
- */
-static void
-mlx5_txq_release_devx_sq_resources(struct mlx5_txq_obj *txq_obj)
-{
-	if (txq_obj->sq_devx) {
-		claim_zero(mlx5_devx_cmd_destroy(txq_obj->sq_devx));
-		txq_obj->sq_devx = NULL;
-	}
-	if (txq_obj->sq_umem) {
-		claim_zero(mlx5_glue->devx_umem_dereg(txq_obj->sq_umem));
-		txq_obj->sq_umem = NULL;
-	}
-	if (txq_obj->sq_buf) {
-		mlx5_free(txq_obj->sq_buf);
-		txq_obj->sq_buf = NULL;
-	}
-	if (txq_obj->sq_dbrec_page) {
-		claim_zero(mlx5_release_dbr(&txq_obj->txq_ctrl->priv->dbrpgs,
-					    mlx5_os_get_umem_id
-						 (txq_obj->sq_dbrec_page->umem),
-					    txq_obj->sq_dbrec_offset));
-		txq_obj->sq_dbrec_page = NULL;
-	}
-}
-
-/**
  * Destroy the Tx queue DevX object.
  *
  * @param txq_obj
@@ -1094,7 +1064,8 @@
 static void
 mlx5_txq_release_devx_resources(struct mlx5_txq_obj *txq_obj)
 {
-	mlx5_txq_release_devx_sq_resources(txq_obj);
+	mlx5_devx_sq_destroy(&txq_obj->sq_obj);
+	memset(&txq_obj->sq_obj, 0, sizeof(txq_obj->sq_obj));
 	mlx5_devx_cq_destroy(&txq_obj->cq_obj);
 	memset(&txq_obj->cq_obj, 0, sizeof(txq_obj->cq_obj));
 }
@@ -1106,100 +1077,41 @@
  *   Pointer to Ethernet device.
  * @param idx
  *   Queue index in DPDK Tx queue array.
+ * @param[in] log_desc_n
+ *   Log of number of descriptors in queue.
  *
  * @return
- *   Number of WQEs in SQ, 0 otherwise and rte_errno is set.
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
  */
-static uint32_t
-mlx5_txq_create_devx_sq_resources(struct rte_eth_dev *dev, uint16_t idx)
+static int
+mlx5_txq_create_devx_sq_resources(struct rte_eth_dev *dev, uint16_t idx,
+				  uint16_t log_desc_n)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
 	struct mlx5_txq_data *txq_data = (*priv->txqs)[idx];
 	struct mlx5_txq_ctrl *txq_ctrl =
 			container_of(txq_data, struct mlx5_txq_ctrl, txq);
 	struct mlx5_txq_obj *txq_obj = txq_ctrl->obj;
-	struct mlx5_devx_create_sq_attr sq_attr = { 0 };
-	size_t page_size;
-	uint32_t wqe_n;
-	int ret;
+	struct mlx5_devx_create_sq_attr sq_attr = {
+		.flush_in_error_en = 1,
+		.allow_multi_pkt_send_wqe = !!priv->config.mps,
+		.min_wqe_inline_mode = priv->config.hca_attr.vport_inline_mode,
+		.allow_swp = !!priv->config.swp,
+		.cqn = txq_obj->cq_obj.cq->id,
+		.tis_lst_sz = 1,
+		.tis_num = priv->sh->tis->id,
+		.wq_attr = (struct mlx5_devx_wq_attr){
+			.pd = priv->sh->pdn,
+			.uar_page =
+				 mlx5_os_get_devx_uar_page_id(priv->sh->tx_uar),
+		},
+	};
 
 	MLX5_ASSERT(txq_data);
 	MLX5_ASSERT(txq_obj);
-	page_size = rte_mem_page_size();
-	if (page_size == (size_t)-1) {
-		DRV_LOG(ERR, "Failed to get mem page size.");
-		rte_errno = ENOMEM;
-		return 0;
-	}
-	wqe_n = RTE_MIN(1UL << txq_data->elts_n,
-			(uint32_t)priv->sh->device_attr.max_qp_wr);
-	txq_obj->sq_buf = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO,
-				      wqe_n * sizeof(struct mlx5_wqe),
-				      page_size, priv->sh->numa_node);
-	if (!txq_obj->sq_buf) {
-		DRV_LOG(ERR,
-			"Port %u Tx queue %u cannot allocate memory (SQ).",
-			dev->data->port_id, txq_data->idx);
-		rte_errno = ENOMEM;
-		goto error;
-	}
-	/* Register allocated buffer in user space with DevX. */
-	txq_obj->sq_umem = mlx5_glue->devx_umem_reg
-					(priv->sh->ctx,
-					 (void *)txq_obj->sq_buf,
-					 wqe_n * sizeof(struct mlx5_wqe),
-					 IBV_ACCESS_LOCAL_WRITE);
-	if (!txq_obj->sq_umem) {
-		rte_errno = errno;
-		DRV_LOG(ERR,
-			"Port %u Tx queue %u cannot register memory (SQ).",
-			dev->data->port_id, txq_data->idx);
-		goto error;
-	}
-	/* Allocate doorbell record for send queue. */
-	txq_obj->sq_dbrec_offset = mlx5_get_dbr(priv->sh->ctx,
-						&priv->dbrpgs,
-						&txq_obj->sq_dbrec_page);
-	if (txq_obj->sq_dbrec_offset < 0) {
-		rte_errno = errno;
-		DRV_LOG(ERR, "Failed to allocate SQ door-bell.");
-		goto error;
-	}
-	sq_attr.tis_lst_sz = 1;
-	sq_attr.tis_num = priv->sh->tis->id;
-	sq_attr.state = MLX5_SQC_STATE_RST;
-	sq_attr.cqn = txq_obj->cq_obj.cq->id;
-	sq_attr.flush_in_error_en = 1;
-	sq_attr.allow_multi_pkt_send_wqe = !!priv->config.mps;
-	sq_attr.allow_swp = !!priv->config.swp;
-	sq_attr.min_wqe_inline_mode = priv->config.hca_attr.vport_inline_mode;
-	sq_attr.wq_attr.uar_page =
-				mlx5_os_get_devx_uar_page_id(priv->sh->tx_uar);
-	sq_attr.wq_attr.wq_type = MLX5_WQ_TYPE_CYCLIC;
-	sq_attr.wq_attr.pd = priv->sh->pdn;
-	sq_attr.wq_attr.log_wq_stride = rte_log2_u32(MLX5_WQE_SIZE);
-	sq_attr.wq_attr.log_wq_sz = log2above(wqe_n);
-	sq_attr.wq_attr.dbr_umem_valid = 1;
-	sq_attr.wq_attr.dbr_addr = txq_obj->sq_dbrec_offset;
-	sq_attr.wq_attr.dbr_umem_id =
-			mlx5_os_get_umem_id(txq_obj->sq_dbrec_page->umem);
-	sq_attr.wq_attr.wq_umem_valid = 1;
-	sq_attr.wq_attr.wq_umem_id = mlx5_os_get_umem_id(txq_obj->sq_umem);
-	sq_attr.wq_attr.wq_umem_offset = (uintptr_t)txq_obj->sq_buf % page_size;
 	/* Create Send Queue object with DevX. */
-	txq_obj->sq_devx = mlx5_devx_cmd_create_sq(priv->sh->ctx, &sq_attr);
-	if (!txq_obj->sq_devx) {
-		rte_errno = errno;
-		DRV_LOG(ERR, "Port %u Tx queue %u SQ creation failure.",
-			dev->data->port_id, idx);
-		goto error;
-	}
-	return wqe_n;
-error:
-	ret = rte_errno;
-	mlx5_txq_release_devx_sq_resources(txq_obj);
-	rte_errno = ret;
-	return 0;
+	return mlx5_devx_sq_create(priv->sh->ctx, &txq_obj->sq_obj, log_desc_n,
+				   &sq_attr, priv->sh->numa_node);
 }
 #endif
 
@@ -1273,27 +1185,29 @@
 	txq_data->cq_db = txq_obj->cq_obj.db_rec;
 	*txq_data->cq_db = 0;
 	/* Create Send Queue object with DevX. */
-	wqe_n = mlx5_txq_create_devx_sq_resources(dev, idx);
-	if (!wqe_n) {
+	wqe_n = RTE_MIN(1UL << txq_data->elts_n,
+			(uint32_t)priv->sh->device_attr.max_qp_wr);
+	log_desc_n = log2above(wqe_n);
+	ret = mlx5_txq_create_devx_sq_resources(dev, idx, log_desc_n);
+	if (ret) {
+		DRV_LOG(ERR, "Port %u Tx queue %u SQ creation failure.",
+			dev->data->port_id, idx);
 		rte_errno = errno;
 		goto error;
 	}
 	/* Create the Work Queue. */
-	txq_data->wqe_n = log2above(wqe_n);
+	txq_data->wqe_n = log_desc_n;
 	txq_data->wqe_s = 1 << txq_data->wqe_n;
 	txq_data->wqe_m = txq_data->wqe_s - 1;
-	txq_data->wqes = (struct mlx5_wqe *)txq_obj->sq_buf;
+	txq_data->wqes = (struct mlx5_wqe *)(uintptr_t)txq_obj->sq_obj.wqes;
 	txq_data->wqes_end = txq_data->wqes + txq_data->wqe_s;
 	txq_data->wqe_ci = 0;
 	txq_data->wqe_pi = 0;
 	txq_data->wqe_comp = 0;
 	txq_data->wqe_thres = txq_data->wqe_s / MLX5_TX_COMP_THRESH_INLINE_DIV;
-	txq_data->qp_db = (volatile uint32_t *)
-					(txq_obj->sq_dbrec_page->dbrs +
-					 txq_obj->sq_dbrec_offset +
-					 MLX5_SND_DBR * sizeof(uint32_t));
+	txq_data->qp_db = txq_obj->sq_obj.db_rec;
 	*txq_data->qp_db = 0;
-	txq_data->qp_num_8s = txq_obj->sq_devx->id << 8;
+	txq_data->qp_num_8s = txq_obj->sq_obj.sq->id << 8;
 	/* Change Send Queue state to Ready-to-Send. */
 	ret = mlx5_devx_modify_sq(txq_obj, MLX5_TXQ_MOD_RST2RDY, 0);
 	if (ret) {
-- 
1.8.3.1


^ permalink raw reply	[flat|nested] 57+ messages in thread

* [dpdk-dev] [PATCH v2 14/17] net/mlx5: move ASO SQ creation to common
  2020-12-29  8:52   ` [dpdk-dev] [PATCH v2 00/17] common/mlx5: share DevX resources creations Michael Baum
                       ` (12 preceding siblings ...)
  2020-12-29  8:52     ` [dpdk-dev] [PATCH v2 13/17] net/mlx5: move Tx " Michael Baum
@ 2020-12-29  8:52     ` Michael Baum
  2020-12-29  8:52     ` [dpdk-dev] [PATCH v2 15/17] common/mlx5: share DevX RQ creation Michael Baum
                       ` (2 subsequent siblings)
  16 siblings, 0 replies; 57+ messages in thread
From: Michael Baum @ 2020-12-29  8:52 UTC (permalink / raw)
  To: dev; +Cc: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko

Use the common function for ASO SQ creation.
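
The one subtlety is the ring sizing: the common helper counts WQEBBs (64-byte
basic blocks), while an ASO WQE occupies two of them, so the log size passed
to the helper is bumped by one. A sketch mirroring the conversion:

	/* One mlx5_aso_wqe == 2 WQEBBs, so 2^log_desc_n descriptors
	 * need 2^(log_desc_n + 1) WQEBBs in the ring buffer.
	 */
	uint16_t log_wqbb_n = log_desc_n + 1;

	ret = mlx5_devx_sq_create(ctx, &sq->sq_obj, log_wqbb_n, &attr, socket);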

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
 drivers/common/mlx5/mlx5_common_devx.h |  1 +
 drivers/net/mlx5/mlx5.h                |  8 +--
 drivers/net/mlx5/mlx5_flow_age.c       | 94 ++++++++++------------------------
 3 files changed, 30 insertions(+), 73 deletions(-)

diff --git a/drivers/common/mlx5/mlx5_common_devx.h b/drivers/common/mlx5/mlx5_common_devx.h
index 88d520b..8377d34 100644
--- a/drivers/common/mlx5/mlx5_common_devx.h
+++ b/drivers/common/mlx5/mlx5_common_devx.h
@@ -25,6 +25,7 @@ struct mlx5_devx_sq {
 	union {
 		volatile void *umem_buf;
 		volatile struct mlx5_wqe *wqes; /* The SQ ring buffer. */
+		volatile struct mlx5_aso_wqe *aso_wqes;
 	};
 	volatile uint32_t *db_rec; /* The SQ doorbell record. */
 };
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 6977eac..86ada23 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -483,13 +483,7 @@ struct mlx5_aso_sq_elem {
 struct mlx5_aso_sq {
 	uint16_t log_desc_n;
 	struct mlx5_aso_cq cq;
-	struct mlx5_devx_obj *sq;
-	struct mlx5dv_devx_umem *wqe_umem; /* SQ buffer umem. */
-	union {
-		volatile void *umem_buf;
-		volatile struct mlx5_aso_wqe *wqes;
-	};
-	volatile uint32_t *db_rec;
+	struct mlx5_devx_sq sq_obj;
 	volatile uint64_t *uar_addr;
 	struct mlx5_aso_devx_mr mr;
 	uint16_t pi;
diff --git a/drivers/net/mlx5/mlx5_flow_age.c b/drivers/net/mlx5/mlx5_flow_age.c
index 60a8d2a..9681cbf 100644
--- a/drivers/net/mlx5/mlx5_flow_age.c
+++ b/drivers/net/mlx5/mlx5_flow_age.c
@@ -141,18 +141,7 @@
 static void
 mlx5_aso_destroy_sq(struct mlx5_aso_sq *sq)
 {
-	if (sq->wqe_umem) {
-		mlx5_glue->devx_umem_dereg(sq->wqe_umem);
-		sq->wqe_umem = NULL;
-	}
-	if (sq->umem_buf) {
-		mlx5_free((void *)(uintptr_t)sq->umem_buf);
-		sq->umem_buf = NULL;
-	}
-	if (sq->sq) {
-		mlx5_devx_cmd_destroy(sq->sq);
-		sq->sq = NULL;
-	}
+	mlx5_devx_sq_destroy(&sq->sq_obj);
 	mlx5_aso_cq_destroy(&sq->cq);
 	mlx5_aso_devx_dereg_mr(&sq->mr);
 	memset(sq, 0, sizeof(*sq));
@@ -173,7 +162,7 @@
 	uint64_t addr;
 
 	/* All the next fields state should stay constant. */
-	for (i = 0, wqe = &sq->wqes[0]; i < size; ++i, ++wqe) {
+	for (i = 0, wqe = &sq->sq_obj.aso_wqes[0]; i < size; ++i, ++wqe) {
 		wqe->general_cseg.sq_ds = rte_cpu_to_be_32((sq->sqn << 8) |
 							  (sizeof(*wqe) >> 4));
 		wqe->aso_cseg.lkey = rte_cpu_to_be_32(sq->mr.mkey->id);
@@ -215,12 +204,18 @@
 		   struct mlx5dv_devx_uar *uar, uint32_t pdn,
 		   uint16_t log_desc_n)
 {
-	struct mlx5_devx_create_sq_attr attr = { 0 };
-	struct mlx5_devx_modify_sq_attr modify_attr = { 0 };
-	size_t pgsize = sysconf(_SC_PAGESIZE);
-	struct mlx5_devx_wq_attr *wq_attr = &attr.wq_attr;
+	struct mlx5_devx_create_sq_attr attr = {
+		.user_index = 0xFFFF,
+		.wq_attr = (struct mlx5_devx_wq_attr){
+			.pd = pdn,
+			.uar_page = mlx5_os_get_devx_uar_page_id(uar),
+		},
+	};
+	struct mlx5_devx_modify_sq_attr modify_attr = {
+		.state = MLX5_SQC_STATE_RDY,
+	};
 	uint32_t sq_desc_n = 1 << log_desc_n;
-	uint32_t wq_size = sizeof(struct mlx5_aso_wqe) * sq_desc_n;
+	uint16_t log_wqbb_n;
 	int ret;
 
 	if (mlx5_aso_devx_reg_mr(ctx, (MLX5_ASO_AGE_ACTIONS_PER_POOL / 8) *
@@ -230,58 +225,25 @@
 			       mlx5_os_get_devx_uar_page_id(uar)))
 		goto error;
 	sq->log_desc_n = log_desc_n;
-	sq->umem_buf = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO, wq_size +
-				   sizeof(*sq->db_rec) * 2, 4096, socket);
-	if (!sq->umem_buf) {
-		DRV_LOG(ERR, "Can't allocate wqe buffer.");
-		rte_errno = ENOMEM;
-		goto error;
-	}
-	sq->wqe_umem = mlx5_glue->devx_umem_reg(ctx,
-						(void *)(uintptr_t)sq->umem_buf,
-						wq_size +
-						sizeof(*sq->db_rec) * 2,
-						IBV_ACCESS_LOCAL_WRITE);
-	if (!sq->wqe_umem) {
-		DRV_LOG(ERR, "Failed to register umem for SQ.");
-		rte_errno = ENOMEM;
-		goto error;
-	}
-	attr.state = MLX5_SQC_STATE_RST;
-	attr.tis_lst_sz = 0;
-	attr.tis_num = 0;
-	attr.user_index = 0xFFFF;
 	attr.cqn = sq->cq.cq_obj.cq->id;
-	wq_attr->uar_page = mlx5_os_get_devx_uar_page_id(uar);
-	wq_attr->pd = pdn;
-	wq_attr->wq_type = MLX5_WQ_TYPE_CYCLIC;
-	wq_attr->log_wq_pg_sz = rte_log2_u32(pgsize);
-	wq_attr->wq_umem_id = mlx5_os_get_umem_id(sq->wqe_umem);
-	wq_attr->wq_umem_offset = 0;
-	wq_attr->wq_umem_valid = 1;
-	wq_attr->log_wq_stride = 6;
-	wq_attr->log_wq_sz = rte_log2_u32(wq_size) - 6;
-	wq_attr->dbr_umem_id = wq_attr->wq_umem_id;
-	wq_attr->dbr_addr = wq_size;
-	wq_attr->dbr_umem_valid = 1;
-	sq->sq = mlx5_devx_cmd_create_sq(ctx, &attr);
-	if (!sq->sq) {
-		DRV_LOG(ERR, "Can't create sq object.");
-		rte_errno  = ENOMEM;
+	/* An mlx5_aso_wqe is twice the size of an mlx5_wqe. */
+	log_wqbb_n = log_desc_n + 1;
+	ret = mlx5_devx_sq_create(ctx, &sq->sq_obj, log_wqbb_n, &attr, socket);
+	if (ret) {
+		DRV_LOG(ERR, "Can't create SQ object.");
+		rte_errno = ENOMEM;
 		goto error;
 	}
-	modify_attr.state = MLX5_SQC_STATE_RDY;
-	ret = mlx5_devx_cmd_modify_sq(sq->sq, &modify_attr);
+	ret = mlx5_devx_cmd_modify_sq(sq->sq_obj.sq, &modify_attr);
 	if (ret) {
-		DRV_LOG(ERR, "Can't change sq state to ready.");
-		rte_errno  = ENOMEM;
+		DRV_LOG(ERR, "Can't change SQ state to ready.");
+		rte_errno = ENOMEM;
 		goto error;
 	}
 	sq->pi = 0;
 	sq->head = 0;
 	sq->tail = 0;
-	sq->sqn = sq->sq->id;
-	sq->db_rec = RTE_PTR_ADD(sq->umem_buf, (uintptr_t)(wq_attr->dbr_addr));
+	sq->sqn = sq->sq_obj.sq->id;
 	sq->uar_addr = (volatile uint64_t *)((uint8_t *)uar->base_addr + 0x800);
 	mlx5_aso_init_sq(sq);
 	return 0;
@@ -345,8 +307,8 @@
 		return 0;
 	sq->elts[start_head & mask].burst_size = max;
 	do {
-		wqe = &sq->wqes[sq->head & mask];
-		rte_prefetch0(&sq->wqes[(sq->head + 1) & mask]);
+		wqe = &sq->sq_obj.aso_wqes[sq->head & mask];
+		rte_prefetch0(&sq->sq_obj.aso_wqes[(sq->head + 1) & mask]);
 		/* Fill next WQE. */
 		rte_spinlock_lock(&mng->resize_sl);
 		pool = mng->pools[sq->next];
@@ -371,7 +333,7 @@
 	wqe->general_cseg.flags = RTE_BE32(MLX5_COMP_ALWAYS <<
 							 MLX5_COMP_MODE_OFFSET);
 	rte_io_wmb();
-	sq->db_rec[MLX5_SND_DBR] = rte_cpu_to_be_32(sq->pi);
+	sq->sq_obj.db_rec[MLX5_SND_DBR] = rte_cpu_to_be_32(sq->pi);
 	rte_wmb();
 	*sq->uar_addr = *(volatile uint64_t *)wqe; /* Assume 64 bit ARCH.*/
 	rte_wmb();
@@ -418,7 +380,7 @@
 	cq->errors++;
 	idx = rte_be_to_cpu_16(cqe->wqe_counter) & (1u << sq->log_desc_n);
 	mlx5_aso_dump_err_objs((volatile uint32_t *)cqe,
-				 (volatile uint32_t *)&sq->wqes[idx]);
+			       (volatile uint32_t *)&sq->sq_obj.aso_wqes[idx]);
 }
 
 /**
@@ -613,7 +575,7 @@
 {
 	int retries = 1024;
 
-	if (!sh->aso_age_mng->aso_sq.sq)
+	if (!sh->aso_age_mng->aso_sq.sq_obj.sq)
 		return -EINVAL;
 	rte_errno = 0;
 	while (--retries) {
-- 
1.8.3.1


^ permalink raw reply	[flat|nested] 57+ messages in thread

* [dpdk-dev] [PATCH v2 15/17] common/mlx5: share DevX RQ creation
  2020-12-29  8:52   ` [dpdk-dev] [PATCH v2 00/17] common/mlx5: share DevX resources creations Michael Baum
                       ` (13 preceding siblings ...)
  2020-12-29  8:52     ` [dpdk-dev] [PATCH v2 14/17] net/mlx5: move ASO " Michael Baum
@ 2020-12-29  8:52     ` Michael Baum
  2020-12-29  8:52     ` [dpdk-dev] [PATCH v2 16/17] net/mlx5: move Rx RQ creation to common Michael Baum
  2020-12-29  8:52     ` [dpdk-dev] [PATCH v2 17/17] common/mlx5: remove doorbell allocation API Michael Baum
  16 siblings, 0 replies; 57+ messages in thread
From: Michael Baum @ 2020-12-29  8:52 UTC (permalink / raw)
  To: dev; +Cc: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko

The DevX RQ object is currently used only in the net driver, but it is
shared for use by other drivers in the future.

Add a structure that contains all the resources, and provide creation
and release functions for it.
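
The helper lays out a single umem holding the WQE ring followed by the
doorbell record. As a worked example with hypothetical values wqe_size = 16
bytes and log_wqbb_n = 8:

	uint32_t umem_size  = 16 * (1u << 8);		/* 4096B of WQEs. */
	uint32_t umem_dbrec = RTE_ALIGN(umem_size, MLX5_DBR_SIZE);
							/* dbr at 4096. */
	umem_size += MLX5_DBR_SIZE;			/* 4160B in total. */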

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
 drivers/common/mlx5/mlx5_common_devx.c | 116 +++++++++++++++++++++++++++++++++
 drivers/common/mlx5/mlx5_common_devx.h |  11 ++++
 2 files changed, 127 insertions(+)

diff --git a/drivers/common/mlx5/mlx5_common_devx.c b/drivers/common/mlx5/mlx5_common_devx.c
index 652bc9a..a66b838 100644
--- a/drivers/common/mlx5/mlx5_common_devx.c
+++ b/drivers/common/mlx5/mlx5_common_devx.c
@@ -276,4 +276,120 @@
 	return -rte_errno;
 }
 
+/**
+ * Destroy DevX Receive Queue.
+ *
+ * @param[in] rq
+ *   DevX RQ to destroy.
+ */
+void
+mlx5_devx_rq_destroy(struct mlx5_devx_rq *rq)
+{
+	if (rq->rq)
+		claim_zero(mlx5_devx_cmd_destroy(rq->rq));
+	if (rq->umem_obj)
+		claim_zero(mlx5_glue->devx_umem_dereg(rq->umem_obj));
+	if (rq->umem_buf)
+		mlx5_free((void *)(uintptr_t)rq->umem_buf);
+}
+
+/**
+ * Create Receive Queue using DevX API.
+ *
+ * Gets a pointer to a partially initialized attributes structure and updates
+ * the following fields:
+ *   wq_umem_valid
+ *   wq_umem_id
+ *   wq_umem_offset
+ *   dbr_umem_valid
+ *   dbr_umem_id
+ *   dbr_addr
+ *   log_wq_pg_sz
+ * All other fields are set by the caller.
+ *
+ * @param[in] ctx
+ *   Context returned from mlx5 open_device() glue function.
+ * @param[in/out] rq_obj
+ *   Pointer to RQ to create.
+ * @param[in] wqe_size
+ *   Size of WQE structure.
+ * @param[in] log_wqbb_n
+ *   Log of number of WQBBs in queue.
+ * @param[in] attr
+ *   Pointer to RQ attributes structure.
+ * @param[in] socket
+ *   Socket to use for allocation.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+int
+mlx5_devx_rq_create(void *ctx, struct mlx5_devx_rq *rq_obj, uint32_t wqe_size,
+		    uint16_t log_wqbb_n,
+		    struct mlx5_devx_create_rq_attr *attr, int socket)
+{
+	struct mlx5_devx_obj *rq = NULL;
+	struct mlx5dv_devx_umem *umem_obj = NULL;
+	void *umem_buf = NULL;
+	size_t page_size = rte_mem_page_size();
+	size_t alignment = MLX5_WQE_BUF_ALIGNMENT;
+	uint32_t umem_size, umem_dbrec;
+	uint16_t rq_size = 1 << log_wqbb_n;
+	int ret;
+
+	if (page_size == (size_t)-1 || alignment == (size_t)-1) {
+		DRV_LOG(ERR, "Failed to get page_size.");
+		rte_errno = ENOMEM;
+		return -rte_errno;
+	}
+	/* Allocate memory buffer for WQEs and doorbell record. */
+	umem_size = wqe_size * rq_size;
+	umem_dbrec = RTE_ALIGN(umem_size, MLX5_DBR_SIZE);
+	umem_size += MLX5_DBR_SIZE;
+	umem_buf = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO, umem_size,
+			       alignment, socket);
+	if (!umem_buf) {
+		DRV_LOG(ERR, "Failed to allocate memory for RQ.");
+		rte_errno = ENOMEM;
+		return -rte_errno;
+	}
+	/* Register allocated buffer in user space with DevX. */
+	umem_obj = mlx5_glue->devx_umem_reg(ctx, (void *)(uintptr_t)umem_buf,
+					    umem_size, 0);
+	if (!umem_obj) {
+		DRV_LOG(ERR, "Failed to register umem for RQ.");
+		rte_errno = errno;
+		goto error;
+	}
+	/* Fill attributes for RQ object creation. */
+	attr->wq_attr.wq_umem_valid = 1;
+	attr->wq_attr.wq_umem_id = mlx5_os_get_umem_id(umem_obj);
+	attr->wq_attr.wq_umem_offset = 0;
+	attr->wq_attr.dbr_umem_valid = 1;
+	attr->wq_attr.dbr_umem_id = attr->wq_attr.wq_umem_id;
+	attr->wq_attr.dbr_addr = umem_dbrec;
+	attr->wq_attr.log_wq_pg_sz = rte_log2_u32(page_size);
+	/* Create receive queue object with DevX. */
+	rq = mlx5_devx_cmd_create_rq(ctx, attr, socket);
+	if (!rq) {
+		DRV_LOG(ERR, "Can't create DevX RQ object.");
+		rte_errno = ENOMEM;
+		goto error;
+	}
+	rq_obj->umem_buf = umem_buf;
+	rq_obj->umem_obj = umem_obj;
+	rq_obj->rq = rq;
+	rq_obj->db_rec = RTE_PTR_ADD(rq_obj->umem_buf, umem_dbrec);
+	return 0;
+error:
+	ret = rte_errno;
+	if (rq)
+		claim_zero(mlx5_devx_cmd_destroy(rq));
+	if (umem_obj)
+		claim_zero(mlx5_glue->devx_umem_dereg(umem_obj));
+	if (umem_buf)
+		mlx5_free((void *)(uintptr_t)umem_buf);
+	rte_errno = ret;
+	return -rte_errno;
+}
 
diff --git a/drivers/common/mlx5/mlx5_common_devx.h b/drivers/common/mlx5/mlx5_common_devx.h
index 8377d34..1dafbf5 100644
--- a/drivers/common/mlx5/mlx5_common_devx.h
+++ b/drivers/common/mlx5/mlx5_common_devx.h
@@ -30,6 +30,13 @@ struct mlx5_devx_sq {
 	volatile uint32_t *db_rec; /* The SQ doorbell record. */
 };
 
+/* DevX Receive Queue structure. */
+struct mlx5_devx_rq {
+	struct mlx5_devx_obj *rq; /* The RQ DevX object. */
+	struct mlx5dv_devx_umem *umem_obj; /* The RQ umem object. */
+	volatile void *umem_buf;
+	volatile uint32_t *db_rec; /* The RQ doorbell record. */
+};
 
 /* mlx5_common_devx.c */
 
@@ -41,5 +48,9 @@ int mlx5_devx_cq_create(void *ctx, struct mlx5_devx_cq *cq_obj,
 int mlx5_devx_sq_create(void *ctx, struct mlx5_devx_sq *sq_obj,
 			uint16_t log_wqbb_n,
 			struct mlx5_devx_create_sq_attr *attr, int socket);
+void mlx5_devx_rq_destroy(struct mlx5_devx_rq *rq);
+int mlx5_devx_rq_create(void *ctx, struct mlx5_devx_rq *rq_obj,
+			uint32_t wqe_size, uint16_t log_wqbb_n,
+			struct mlx5_devx_create_rq_attr *attr, int socket);
 
 #endif /* RTE_PMD_MLX5_COMMON_DEVX_H_ */
-- 
1.8.3.1


^ permalink raw reply	[flat|nested] 57+ messages in thread

* [dpdk-dev] [PATCH v2 16/17] net/mlx5: move Rx RQ creation to common
  2020-12-29  8:52   ` [dpdk-dev] [PATCH v2 00/17] common/mlx5: share DevX resources creations Michael Baum
                       ` (14 preceding siblings ...)
  2020-12-29  8:52     ` [dpdk-dev] [PATCH v2 15/17] common/mlx5: share DevX RQ creation Michael Baum
@ 2020-12-29  8:52     ` Michael Baum
  2020-12-29  8:52     ` [dpdk-dev] [PATCH v2 17/17] common/mlx5: remove doorbell allocation API Michael Baum
  16 siblings, 0 replies; 57+ messages in thread
From: Michael Baum @ 2020-12-29  8:52 UTC (permalink / raw)
  To: dev; +Cc: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko

Use the common function for Rx RQ creation.
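
The RQ stride and depth are still derived in the net driver before calling
the common helper. For a hypothetical single-segment queue (sges_n = 0) with
16-byte data segments and elts_n = 10:

	uint32_t wqe_size = sizeof(struct mlx5_wqe_data_seg);	/* 16B. */
	uint32_t log_wqe_size = log2above(wqe_size) + rxq_data->sges_n;
								/* -> 4. */
	uint16_t log_desc_n = rxq_data->elts_n - rxq_data->sges_n;
							/* -> 10, 1024 WQEs. */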

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
 drivers/net/mlx5/mlx5.h      |   4 +-
 drivers/net/mlx5/mlx5_devx.c | 178 +++++++++----------------------------------
 drivers/net/mlx5/mlx5_rxtx.h |   4 -
 3 files changed, 37 insertions(+), 149 deletions(-)

diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 86ada23..5bf6886 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -772,8 +772,9 @@ struct mlx5_rxq_obj {
 			void *ibv_cq; /* Completion Queue. */
 			void *ibv_channel;
 		};
+		struct mlx5_devx_obj *rq; /* DevX RQ object for hairpin. */
 		struct {
-			struct mlx5_devx_obj *rq; /* DevX Rx Queue object. */
+			struct mlx5_devx_rq rq_obj; /* DevX RQ object. */
 			struct mlx5_devx_cq cq_obj; /* DevX CQ object. */
 			void *devx_channel;
 		};
@@ -944,7 +945,6 @@ struct mlx5_priv {
 	/* Context for Verbs allocator. */
 	int nl_socket_rdma; /* Netlink socket (NETLINK_RDMA). */
 	int nl_socket_route; /* Netlink socket (NETLINK_ROUTE). */
-	struct mlx5_dbr_page_list dbrpgs; /* Door-bell pages. */
 	struct mlx5_nl_vlan_vmwa_context *vmwa_context; /* VLAN WA context. */
 	struct mlx5_hlist *mreg_cp_tbl;
 	/* Hash table of Rx metadata register copy table. */
diff --git a/drivers/net/mlx5/mlx5_devx.c b/drivers/net/mlx5/mlx5_devx.c
index 4154c52..9e825ce 100644
--- a/drivers/net/mlx5/mlx5_devx.c
+++ b/drivers/net/mlx5/mlx5_devx.c
@@ -45,7 +45,7 @@
 	rq_attr.state = MLX5_RQC_STATE_RDY;
 	rq_attr.vsd = (on ? 0 : 1);
 	rq_attr.modify_bitmask = MLX5_MODIFY_RQ_IN_MODIFY_BITMASK_VSD;
-	return mlx5_devx_cmd_modify_rq(rxq_obj->rq, &rq_attr);
+	return mlx5_devx_cmd_modify_rq(rxq_obj->rq_obj.rq, &rq_attr);
 }
 
 /**
@@ -85,7 +85,7 @@
 	default:
 		break;
 	}
-	return mlx5_devx_cmd_modify_rq(rxq_obj->rq, &rq_attr);
+	return mlx5_devx_cmd_modify_rq(rxq_obj->rq_obj.rq, &rq_attr);
 }
 
 /**
@@ -145,44 +145,18 @@
 }
 
 /**
- * Release the resources allocated for an RQ DevX object.
- *
- * @param rxq_ctrl
- *   DevX Rx queue object.
- */
-static void
-mlx5_rxq_release_devx_rq_resources(struct mlx5_rxq_ctrl *rxq_ctrl)
-{
-	struct mlx5_devx_dbr_page *dbr_page = rxq_ctrl->rq_dbrec_page;
-
-	if (rxq_ctrl->wq_umem) {
-		mlx5_glue->devx_umem_dereg(rxq_ctrl->wq_umem);
-		rxq_ctrl->wq_umem = NULL;
-	}
-	if (rxq_ctrl->rxq.wqes) {
-		mlx5_free((void *)(uintptr_t)rxq_ctrl->rxq.wqes);
-		rxq_ctrl->rxq.wqes = NULL;
-	}
-	if (dbr_page) {
-		claim_zero(mlx5_release_dbr(&rxq_ctrl->priv->dbrpgs,
-					    mlx5_os_get_umem_id(dbr_page->umem),
-					    rxq_ctrl->rq_dbr_offset));
-		rxq_ctrl->rq_dbrec_page = NULL;
-	}
-}
-
-/**
  * Destroy the Rx queue DevX object.
  *
  * @param rxq_obj
  *   Rxq object to destroy.
  */
 static void
-mlx5_rxq_release_devx_resources(struct mlx5_rxq_ctrl *rxq_ctrl)
+mlx5_rxq_release_devx_resources(struct mlx5_rxq_obj *rxq_obj)
 {
-	mlx5_rxq_release_devx_rq_resources(rxq_ctrl);
-	mlx5_devx_cq_destroy(&rxq_ctrl->obj->cq_obj);
-	memset(&rxq_ctrl->obj->cq_obj, 0, sizeof(rxq_ctrl->obj->cq_obj));
+	mlx5_devx_rq_destroy(&rxq_obj->rq_obj);
+	memset(&rxq_obj->rq_obj, 0, sizeof(rxq_obj->rq_obj));
+	mlx5_devx_cq_destroy(&rxq_obj->cq_obj);
+	memset(&rxq_obj->cq_obj, 0, sizeof(rxq_obj->cq_obj));
 }
 
 /**
@@ -195,17 +169,17 @@
 mlx5_rxq_devx_obj_release(struct mlx5_rxq_obj *rxq_obj)
 {
 	MLX5_ASSERT(rxq_obj);
-	MLX5_ASSERT(rxq_obj->rq);
 	if (rxq_obj->rxq_ctrl->type == MLX5_RXQ_TYPE_HAIRPIN) {
+		MLX5_ASSERT(rxq_obj->rq);
 		mlx5_devx_modify_rq(rxq_obj, MLX5_RXQ_MOD_RDY2RST);
 		claim_zero(mlx5_devx_cmd_destroy(rxq_obj->rq));
 	} else {
-		MLX5_ASSERT(rxq_obj->cq_obj);
-		claim_zero(mlx5_devx_cmd_destroy(rxq_obj->rq));
+		MLX5_ASSERT(rxq_obj->cq_obj.cq);
+		MLX5_ASSERT(rxq_obj->rq_obj.rq);
+		mlx5_rxq_release_devx_resources(rxq_obj);
 		if (rxq_obj->devx_channel)
 			mlx5_glue->devx_destroy_event_channel
 							(rxq_obj->devx_channel);
-		mlx5_rxq_release_devx_resources(rxq_obj->rxq_ctrl);
 	}
 }
 
@@ -247,52 +221,6 @@
 }
 
 /**
- * Fill common fields of create RQ attributes structure.
- *
- * @param rxq_data
- *   Pointer to Rx queue data.
- * @param cqn
- *   CQ number to use with this RQ.
- * @param rq_attr
- *   RQ attributes structure to fill..
- */
-static void
-mlx5_devx_create_rq_attr_fill(struct mlx5_rxq_data *rxq_data, uint32_t cqn,
-			      struct mlx5_devx_create_rq_attr *rq_attr)
-{
-	rq_attr->state = MLX5_RQC_STATE_RST;
-	rq_attr->vsd = (rxq_data->vlan_strip) ? 0 : 1;
-	rq_attr->cqn = cqn;
-	rq_attr->scatter_fcs = (rxq_data->crc_present) ? 1 : 0;
-}
-
-/**
- * Fill common fields of DevX WQ attributes structure.
- *
- * @param priv
- *   Pointer to device private data.
- * @param rxq_ctrl
- *   Pointer to Rx queue control structure.
- * @param wq_attr
- *   WQ attributes structure to fill..
- */
-static void
-mlx5_devx_wq_attr_fill(struct mlx5_priv *priv, struct mlx5_rxq_ctrl *rxq_ctrl,
-		       struct mlx5_devx_wq_attr *wq_attr)
-{
-	wq_attr->end_padding_mode = priv->config.hw_padding ?
-					MLX5_WQ_END_PAD_MODE_ALIGN :
-					MLX5_WQ_END_PAD_MODE_NONE;
-	wq_attr->pd = priv->sh->pdn;
-	wq_attr->dbr_addr = rxq_ctrl->rq_dbr_offset;
-	wq_attr->dbr_umem_id =
-			mlx5_os_get_umem_id(rxq_ctrl->rq_dbrec_page->umem);
-	wq_attr->dbr_umem_valid = 1;
-	wq_attr->wq_umem_id = mlx5_os_get_umem_id(rxq_ctrl->wq_umem);
-	wq_attr->wq_umem_valid = 1;
-}
-
-/**
  * Create a RQ object using DevX.
  *
  * @param dev
@@ -301,9 +229,9 @@
  *   Queue index in DPDK Rx queue array.
  *
  * @return
- *   The DevX RQ object initialized, NULL otherwise and rte_errno is set.
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
  */
-static struct mlx5_devx_obj *
+static int
 mlx5_rxq_create_devx_rq_resources(struct rte_eth_dev *dev, uint16_t idx)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
@@ -311,26 +239,15 @@
 	struct mlx5_rxq_ctrl *rxq_ctrl =
 		container_of(rxq_data, struct mlx5_rxq_ctrl, rxq);
 	struct mlx5_devx_create_rq_attr rq_attr = { 0 };
-	uint32_t wqe_n = 1 << (rxq_data->elts_n - rxq_data->sges_n);
-	uint32_t cqn = rxq_ctrl->obj->cq_obj.cq->id;
-	struct mlx5_devx_dbr_page *dbr_page;
-	int64_t dbr_offset;
-	uint32_t wq_size = 0;
-	uint32_t wqe_size = 0;
-	uint32_t log_wqe_size = 0;
-	void *buf = NULL;
-	struct mlx5_devx_obj *rq;
-	size_t alignment = MLX5_WQE_BUF_ALIGNMENT;
+	uint16_t log_desc_n = rxq_data->elts_n - rxq_data->sges_n;
+	uint32_t wqe_size, log_wqe_size;
 
-	if (alignment == (size_t)-1) {
-		DRV_LOG(ERR, "Failed to get mem page size");
-		rte_errno = ENOMEM;
-		return NULL;
-	}
 	/* Fill RQ attributes. */
 	rq_attr.mem_rq_type = MLX5_RQC_MEM_RQ_TYPE_MEMORY_RQ_INLINE;
 	rq_attr.flush_in_error_en = 1;
-	mlx5_devx_create_rq_attr_fill(rxq_data, cqn, &rq_attr);
+	rq_attr.vsd = (rxq_data->vlan_strip) ? 0 : 1;
+	rq_attr.cqn = rxq_ctrl->obj->cq_obj.cq->id;
+	rq_attr.scatter_fcs = (rxq_data->crc_present) ? 1 : 0;
 	/* Fill WQ attributes for this RQ. */
 	if (mlx5_rxq_mprq_enabled(rxq_data)) {
 		rq_attr.wq_attr.wq_type = MLX5_WQ_TYPE_CYCLIC_STRIDING_RQ;
@@ -351,40 +268,17 @@
 		wqe_size = sizeof(struct mlx5_wqe_data_seg);
 	}
 	log_wqe_size = log2above(wqe_size) + rxq_data->sges_n;
-	rq_attr.wq_attr.log_wq_stride = log_wqe_size;
-	rq_attr.wq_attr.log_wq_sz = rxq_data->elts_n - rxq_data->sges_n;
-	rq_attr.wq_attr.log_wq_pg_sz = log2above(alignment);
-	/* Calculate and allocate WQ memory space. */
 	wqe_size = 1 << log_wqe_size; /* round up power of two.*/
-	wq_size = wqe_n * wqe_size;
-	buf = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO, wq_size,
-			  alignment, rxq_ctrl->socket);
-	if (!buf)
-		return NULL;
-	rxq_data->wqes = buf;
-	rxq_ctrl->wq_umem = mlx5_glue->devx_umem_reg(priv->sh->ctx,
-						     buf, wq_size, 0);
-	if (!rxq_ctrl->wq_umem)
-		goto error;
-	/* Allocate RQ door-bell. */
-	dbr_offset = mlx5_get_dbr(priv->sh->ctx, &priv->dbrpgs, &dbr_page);
-	if (dbr_offset < 0) {
-		DRV_LOG(ERR, "Failed to allocate RQ door-bell.");
-		goto error;
-	}
-	rxq_ctrl->rq_dbr_offset = dbr_offset;
-	rxq_ctrl->rq_dbrec_page = dbr_page;
-	rxq_data->rq_db = (uint32_t *)((uintptr_t)dbr_page->dbrs +
-			  (uintptr_t)rxq_ctrl->rq_dbr_offset);
+	rq_attr.wq_attr.log_wq_stride = log_wqe_size;
+	rq_attr.wq_attr.log_wq_sz = log_desc_n;
+	rq_attr.wq_attr.end_padding_mode = priv->config.hw_padding ?
+						MLX5_WQ_END_PAD_MODE_ALIGN :
+						MLX5_WQ_END_PAD_MODE_NONE;
+	rq_attr.wq_attr.pd = priv->sh->pdn;
 	/* Create RQ using DevX API. */
-	mlx5_devx_wq_attr_fill(priv, rxq_ctrl, &rq_attr.wq_attr);
-	rq = mlx5_devx_cmd_create_rq(priv->sh->ctx, &rq_attr, rxq_ctrl->socket);
-	if (!rq)
-		goto error;
-	return rq;
-error:
-	mlx5_rxq_release_devx_rq_resources(rxq_ctrl);
-	return NULL;
+	return mlx5_devx_rq_create(priv->sh->ctx, &rxq_ctrl->obj->rq_obj,
+				   wqe_size, log_desc_n, &rq_attr,
+				   rxq_ctrl->socket);
 }
 
 /**
@@ -607,8 +501,8 @@
 		goto error;
 	}
 	/* Create RQ using DevX API. */
-	tmpl->rq = mlx5_rxq_create_devx_rq_resources(dev, idx);
-	if (!tmpl->rq) {
+	ret = mlx5_rxq_create_devx_rq_resources(dev, idx);
+	if (ret) {
 		DRV_LOG(ERR, "Port %u Rx queue %u RQ creation failure.",
 			dev->data->port_id, idx);
 		rte_errno = ENOMEM;
@@ -618,19 +512,17 @@
 	ret = mlx5_devx_modify_rq(tmpl, MLX5_RXQ_MOD_RST2RDY);
 	if (ret)
 		goto error;
+	rxq_data->wqes = (void *)(uintptr_t)tmpl->rq_obj.umem_buf;
+	rxq_data->rq_db = (uint32_t *)(uintptr_t)tmpl->rq_obj.db_rec;
 	rxq_data->cq_arm_sn = 0;
-	mlx5_rxq_initialize(rxq_data);
 	rxq_data->cq_ci = 0;
+	mlx5_rxq_initialize(rxq_data);
 	dev->data->rx_queue_state[idx] = RTE_ETH_QUEUE_STATE_STARTED;
-	rxq_ctrl->wqn = tmpl->rq->id;
+	rxq_ctrl->wqn = tmpl->rq_obj.rq->id;
 	return 0;
 error:
 	ret = rte_errno; /* Save rte_errno before cleanup. */
-	if (tmpl->rq)
-		claim_zero(mlx5_devx_cmd_destroy(tmpl->rq));
-	if (tmpl->devx_channel)
-		mlx5_glue->devx_destroy_event_channel(tmpl->devx_channel);
-	mlx5_rxq_release_devx_resources(rxq_ctrl);
+	mlx5_rxq_devx_obj_release(tmpl);
 	rte_errno = ret; /* Restore rte_errno. */
 	return -rte_errno;
 }
@@ -674,7 +566,7 @@
 		struct mlx5_rxq_ctrl *rxq_ctrl =
 				container_of(rxq, struct mlx5_rxq_ctrl, rxq);
 
-		rqt_attr->rq_list[i] = rxq_ctrl->obj->rq->id;
+		rqt_attr->rq_list[i] = rxq_ctrl->obj->rq_obj.rq->id;
 	}
 	MLX5_ASSERT(i > 0);
 	for (j = 0; i != rqt_n; ++j, ++i)
diff --git a/drivers/net/mlx5/mlx5_rxtx.h b/drivers/net/mlx5/mlx5_rxtx.h
index 6a71791..b0041c9 100644
--- a/drivers/net/mlx5/mlx5_rxtx.h
+++ b/drivers/net/mlx5/mlx5_rxtx.h
@@ -193,10 +193,6 @@ struct mlx5_rxq_ctrl {
 	uint32_t flow_tunnels_n[MLX5_FLOW_TUNNEL]; /* Tunnels counters. */
 	uint32_t wqn; /* WQ number. */
 	uint16_t dump_file_n; /* Number of dump files. */
-	struct mlx5_devx_dbr_page *rq_dbrec_page;
-	uint64_t rq_dbr_offset;
-	/* Storing RQ door-bell information, needed when freeing door-bell. */
-	void *wq_umem; /* WQ buffer registration info. */
 	struct rte_eth_hairpin_conf hairpin_conf; /* Hairpin configuration. */
 	uint32_t hairpin_status; /* Hairpin binding status. */
 };
-- 
1.8.3.1


^ permalink raw reply	[flat|nested] 57+ messages in thread

* [dpdk-dev] [PATCH v2 17/17] common/mlx5: remove doorbell allocation API
  2020-12-29  8:52   ` [dpdk-dev] [PATCH v2 00/17] common/mlx5: share DevX resources creations Michael Baum
                       ` (15 preceding siblings ...)
  2020-12-29  8:52     ` [dpdk-dev] [PATCH v2 16/17] net/mlx5: move Rx RQ creation to common Michael Baum
@ 2020-12-29  8:52     ` Michael Baum
  16 siblings, 0 replies; 57+ messages in thread
From: Michael Baum @ 2020-12-29  8:52 UTC (permalink / raw)
  To: dev; +Cc: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko

The mlx5_devx_dbr_page structure was used to allocate and release the
umem of the doorbell records.
Since the doorbell record and the queue buffer now share the same umem,
this structure is no longer needed.
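
With the shared creation helpers, the doorbell record is simply the tail of
each queue's own umem, so no page-list allocator or bitmap is needed anymore.
A one-line sketch mirroring the common SQ/RQ create functions:

	/* The doorbell record follows the WQE ring inside the same umem. */
	sq_obj->db_rec = RTE_PTR_ADD(sq_obj->umem_buf, umem_dbrec);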

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
 drivers/common/mlx5/mlx5_common.c | 122 --------------------------------------
 drivers/common/mlx5/mlx5_common.h |  23 -------
 2 files changed, 145 deletions(-)

diff --git a/drivers/common/mlx5/mlx5_common.c b/drivers/common/mlx5/mlx5_common.c
index 0445132..c26a2cf 100644
--- a/drivers/common/mlx5/mlx5_common.c
+++ b/drivers/common/mlx5/mlx5_common.c
@@ -8,12 +8,10 @@
 
 #include <rte_errno.h>
 #include <rte_mempool.h>
-#include <rte_malloc.h>
 
 #include "mlx5_common.h"
 #include "mlx5_common_os.h"
 #include "mlx5_common_utils.h"
-#include "mlx5_malloc.h"
 #include "mlx5_common_pci.h"
 
 int mlx5_common_logtype;
@@ -126,126 +124,6 @@ static inline void mlx5_cpu_id(unsigned int level,
 }
 
 /**
- * Allocate page of door-bells and register it using DevX API.
- *
- * @param [in] ctx
- *   Pointer to the device context.
- *
- * @return
- *   Pointer to new page on success, NULL otherwise.
- */
-static struct mlx5_devx_dbr_page *
-mlx5_alloc_dbr_page(void *ctx)
-{
-	struct mlx5_devx_dbr_page *page;
-
-	/* Allocate space for door-bell page and management data. */
-	page = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO,
-			   sizeof(struct mlx5_devx_dbr_page),
-			   RTE_CACHE_LINE_SIZE, SOCKET_ID_ANY);
-	if (!page) {
-		DRV_LOG(ERR, "cannot allocate dbr page");
-		return NULL;
-	}
-	/* Register allocated memory. */
-	page->umem = mlx5_glue->devx_umem_reg(ctx, page->dbrs,
-					      MLX5_DBR_PAGE_SIZE, 0);
-	if (!page->umem) {
-		DRV_LOG(ERR, "cannot umem reg dbr page");
-		mlx5_free(page);
-		return NULL;
-	}
-	return page;
-}
-
-/**
- * Find the next available door-bell, allocate new page if needed.
- *
- * @param [in] ctx
- *   Pointer to device context.
- * @param [in] head
- *   Pointer to the head of dbr pages list.
- * @param [out] dbr_page
- *   Door-bell page containing the page data.
- *
- * @return
- *   Door-bell address offset on success, a negative error value otherwise.
- */
-int64_t
-mlx5_get_dbr(void *ctx,  struct mlx5_dbr_page_list *head,
-	     struct mlx5_devx_dbr_page **dbr_page)
-{
-	struct mlx5_devx_dbr_page *page = NULL;
-	uint32_t i, j;
-
-	LIST_FOREACH(page, head, next)
-		if (page->dbr_count < MLX5_DBR_PER_PAGE)
-			break;
-	if (!page) { /* No page with free door-bell exists. */
-		page = mlx5_alloc_dbr_page(ctx);
-		if (!page) /* Failed to allocate new page. */
-			return (-1);
-		LIST_INSERT_HEAD(head, page, next);
-	}
-	/* Loop to find bitmap part with clear bit. */
-	for (i = 0;
-	     i < MLX5_DBR_BITMAP_SIZE && page->dbr_bitmap[i] == UINT64_MAX;
-	     i++)
-		; /* Empty. */
-	/* Find the first clear bit. */
-	MLX5_ASSERT(i < MLX5_DBR_BITMAP_SIZE);
-	j = rte_bsf64(~page->dbr_bitmap[i]);
-	page->dbr_bitmap[i] |= (UINT64_C(1) << j);
-	page->dbr_count++;
-	*dbr_page = page;
-	return (i * CHAR_BIT * sizeof(uint64_t) + j) * MLX5_DBR_SIZE;
-}
-
-/**
- * Release a door-bell record.
- *
- * @param [in] head
- *   Pointer to the head of dbr pages list.
- * @param [in] umem_id
- *   UMEM ID of page containing the door-bell record to release.
- * @param [in] offset
- *   Offset of door-bell record in page.
- *
- * @return
- *   0 on success, a negative error value otherwise.
- */
-int32_t
-mlx5_release_dbr(struct mlx5_dbr_page_list *head, uint32_t umem_id,
-		 uint64_t offset)
-{
-	struct mlx5_devx_dbr_page *page = NULL;
-	int ret = 0;
-
-	LIST_FOREACH(page, head, next)
-		/* Find the page this address belongs to. */
-		if (mlx5_os_get_umem_id(page->umem) == umem_id)
-			break;
-	if (!page)
-		return -EINVAL;
-	page->dbr_count--;
-	if (!page->dbr_count) {
-		/* Page not used, free it and remove from list. */
-		LIST_REMOVE(page, next);
-		if (page->umem)
-			ret = -mlx5_glue->devx_umem_dereg(page->umem);
-		mlx5_free(page);
-	} else {
-		/* Mark in bitmap that this door-bell is not in use. */
-		offset /= MLX5_DBR_SIZE;
-		int i = offset / 64;
-		int j = offset % 64;
-
-		page->dbr_bitmap[i] &= ~(UINT64_C(1) << j);
-	}
-	return ret;
-}
-
-/**
  * Allocate the User Access Region with DevX on specified device.
  *
  * @param [in] ctx
diff --git a/drivers/common/mlx5/mlx5_common.h b/drivers/common/mlx5/mlx5_common.h
index a484b74..e35188d 100644
--- a/drivers/common/mlx5/mlx5_common.h
+++ b/drivers/common/mlx5/mlx5_common.h
@@ -220,21 +220,6 @@ enum mlx5_class {
 };
 
 #define MLX5_DBR_SIZE RTE_CACHE_LINE_SIZE
-#define MLX5_DBR_PER_PAGE 64
-/* Must be >= CHAR_BIT * sizeof(uint64_t) */
-#define MLX5_DBR_PAGE_SIZE (MLX5_DBR_PER_PAGE * MLX5_DBR_SIZE)
-/* Page size must be >= 512. */
-#define MLX5_DBR_BITMAP_SIZE (MLX5_DBR_PER_PAGE / (CHAR_BIT * sizeof(uint64_t)))
-
-struct mlx5_devx_dbr_page {
-	/* Door-bell records, must be first member in structure. */
-	uint8_t dbrs[MLX5_DBR_PAGE_SIZE];
-	LIST_ENTRY(mlx5_devx_dbr_page) next; /* Pointer to the next element. */
-	void *umem;
-	uint32_t dbr_count; /* Number of door-bell records in use. */
-	/* 1 bit marks matching door-bell is in use. */
-	uint64_t dbr_bitmap[MLX5_DBR_BITMAP_SIZE];
-};
 
 /* devX creation object */
 struct mlx5_devx_obj {
@@ -249,19 +234,11 @@ struct mlx5_klm {
 	uint64_t address;
 };
 
-LIST_HEAD(mlx5_dbr_page_list, mlx5_devx_dbr_page);
-
 __rte_internal
 void mlx5_translate_port_name(const char *port_name_in,
 			      struct mlx5_switch_info *port_info_out);
 void mlx5_glue_constructor(void);
 __rte_internal
-int64_t mlx5_get_dbr(void *ctx,  struct mlx5_dbr_page_list *head,
-		     struct mlx5_devx_dbr_page **dbr_page);
-__rte_internal
-int32_t mlx5_release_dbr(struct mlx5_dbr_page_list *head, uint32_t umem_id,
-			 uint64_t offset);
-__rte_internal
 void *mlx5_devx_alloc_uar(void *ctx, int mapping);
 extern uint8_t haswell_broadwell_cpu;
 
-- 
1.8.3.1


^ permalink raw reply	[flat|nested] 57+ messages in thread

* [dpdk-dev] [PATCH v3 00/19] common/mlx5: share DevX resources creations
  2020-12-29  8:52     ` [dpdk-dev] [PATCH v2 01/17] net/mlx5: fix ASO SQ creation error flow Michael Baum
@ 2021-01-06  8:19       ` Michael Baum
  2021-01-06  8:19         ` [dpdk-dev] [PATCH v3 01/19] common/mlx5: fix completion queue entry size configuration Michael Baum
                           ` (19 more replies)
  0 siblings, 20 replies; 57+ messages in thread
From: Michael Baum @ 2021-01-06  8:19 UTC (permalink / raw)
  To: dev; +Cc: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko

Due to the many instances of CQ, SQ and RQ creation on DevX, move these
creations to the common library.

v1: Initial release.
v2: Bug fix (sending wrong umem id to HW).
v3: Rebase + Bug fix (sending wrong CQE size to HW).

Michael Baum (19):
  common/mlx5: fix completion queue entry size configuration
  net/mlx5: remove CQE padding device argument
  net/mlx5: fix ASO SQ creation error flow
  common/mlx5: share DevX CQ creation
  regex/mlx5: move DevX CQ creation to common
  vdpa/mlx5: move DevX CQ creation to common
  net/mlx5: move rearm and clock queue CQ creation to common
  net/mlx5: move ASO CQ creation to common
  net/mlx5: move Tx CQ creation to common
  net/mlx5: move Rx CQ creation to common
  common/mlx5: enhance page size configuration
  common/mlx5: share DevX SQ creation
  regex/mlx5: move DevX SQ creation to common
  net/mlx5: move rearm and clock queue SQ creation to common
  net/mlx5: move Tx SQ creation to common
  net/mlx5: move ASO SQ creation to common
  common/mlx5: share DevX RQ creation
  net/mlx5: move Rx RQ creation to common
  common/mlx5: remove doorbell allocation API

 doc/guides/nics/mlx5.rst                        |  18 -
 drivers/common/mlx5/meson.build                 |   1 +
 drivers/common/mlx5/mlx5_common.c               | 122 -----
 drivers/common/mlx5/mlx5_common.h               |  23 -
 drivers/common/mlx5/mlx5_common_devx.c          | 387 ++++++++++++++
 drivers/common/mlx5/mlx5_common_devx.h          |  70 +++
 drivers/common/mlx5/mlx5_devx_cmds.c            |  57 +--
 drivers/common/mlx5/mlx5_devx_cmds.h            |   1 -
 drivers/common/mlx5/rte_common_mlx5_exports.def |  10 +-
 drivers/common/mlx5/version.map                 |  10 +-
 drivers/common/mlx5/windows/mlx5_win_ext.h      |   1 +
 drivers/net/mlx5/linux/mlx5_os.c                |  12 -
 drivers/net/mlx5/linux/mlx5_verbs.c             |   2 +-
 drivers/net/mlx5/mlx5.c                         |  14 -
 drivers/net/mlx5/mlx5.h                         |  55 +-
 drivers/net/mlx5/mlx5_devx.c                    | 645 +++++-------------------
 drivers/net/mlx5/mlx5_flow_age.c                | 173 ++-----
 drivers/net/mlx5/mlx5_rxtx.c                    |   2 +-
 drivers/net/mlx5/mlx5_rxtx.h                    |   8 -
 drivers/net/mlx5/mlx5_txpp.c                    | 290 +++--------
 drivers/net/mlx5/windows/mlx5_os.c              |   7 -
 drivers/regex/mlx5/mlx5_regex.c                 |   6 -
 drivers/regex/mlx5/mlx5_regex.h                 |  17 +-
 drivers/regex/mlx5/mlx5_regex_control.c         | 242 +++------
 drivers/regex/mlx5/mlx5_regex_fastpath.c        |  18 +-
 drivers/vdpa/mlx5/mlx5_vdpa.h                   |  10 +-
 drivers/vdpa/mlx5/mlx5_vdpa_event.c             |  86 +---
 drivers/vdpa/mlx5/mlx5_vdpa_virtq.c             |   2 +-
 28 files changed, 859 insertions(+), 1430 deletions(-)
 create mode 100644 drivers/common/mlx5/mlx5_common_devx.c
 create mode 100644 drivers/common/mlx5/mlx5_common_devx.h

-- 
1.8.3.1


^ permalink raw reply	[flat|nested] 57+ messages in thread

* [dpdk-dev] [PATCH v3 01/19] common/mlx5: fix completion queue entry size configuration
  2021-01-06  8:19       ` [dpdk-dev] [PATCH v3 00/19] common/mlx5: share DevX resources creations Michael Baum
@ 2021-01-06  8:19         ` Michael Baum
  2021-01-06  8:19         ` [dpdk-dev] [PATCH v3 02/19] net/mlx5: remove CQE padding device argument Michael Baum
                           ` (18 subsequent siblings)
  19 siblings, 0 replies; 57+ messages in thread
From: Michael Baum @ 2021-01-06  8:19 UTC (permalink / raw)
  To: dev; +Cc: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko, stable

According to the current data-path implementation in the PMD, the CQE
size must follow the cache-line size.
So, the configuration of the CQE size should depend on
RTE_CACHE_LINE_SIZE.

Wrongly, some of the CQE creations did not follow this rule, which
caused an incompatibility between HW and SW in the data-path on systems
with a 128B cache-line size.

Apply the rule to every CQE creation.
Remove the cqe_size attribute from the DevX CQ creation command and set
it inside the command translation according to the cache-line size.
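
The invariant the data-path relies on can be stated as a compile-time
check (a sketch, assuming struct mlx5_cqe in mlx5_prm.h is padded to
the cache-line size, which is what makes the single translation rule
below sufficient):

    #include <rte_common.h>
    #include <mlx5_prm.h>

    /* SW walks the CQ ring in cache-line strides, hence a CQE must be
     * exactly one cache line; the DevX translation now programs cqe_sz
     * from the same constant, so HW and SW cannot diverge.
     */
    static inline void
    cqe_size_invariant_check(void)
    {
            RTE_BUILD_BUG_ON(sizeof(struct mlx5_cqe) !=
                             RTE_CACHE_LINE_SIZE);
    }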

Fixes: 79a7e409a2f6 ("common/mlx5: prepare support of packet pacing")
Fixes: 5cd0a83f413e ("common/mlx5: support more fields in DevX CQ create")
Cc: stable@dpdk.org

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
 drivers/common/mlx5/mlx5_devx_cmds.c | 4 ++--
 drivers/common/mlx5/mlx5_devx_cmds.h | 1 -
 drivers/net/mlx5/mlx5_devx.c         | 4 ----
 drivers/net/mlx5/mlx5_txpp.c         | 4 ----
 4 files changed, 2 insertions(+), 11 deletions(-)

diff --git a/drivers/common/mlx5/mlx5_devx_cmds.c b/drivers/common/mlx5/mlx5_devx_cmds.c
index 12f51a9..59f0bcc 100644
--- a/drivers/common/mlx5/mlx5_devx_cmds.c
+++ b/drivers/common/mlx5/mlx5_devx_cmds.c
@@ -1569,7 +1569,8 @@ struct mlx5_devx_obj *
 	} else {
 		MLX5_SET64(cqc, cqctx, dbr_addr, attr->db_addr);
 	}
-	MLX5_SET(cqc, cqctx, cqe_sz, attr->cqe_size);
+	MLX5_SET(cqc, cqctx, cqe_sz, (RTE_CACHE_LINE_SIZE == 128) ?
+				     MLX5_CQE_SIZE_128B : MLX5_CQE_SIZE_64B);
 	MLX5_SET(cqc, cqctx, cc, attr->use_first_only);
 	MLX5_SET(cqc, cqctx, oi, attr->overrun_ignore);
 	MLX5_SET(cqc, cqctx, log_cq_size, attr->log_cq_size);
@@ -1582,7 +1583,6 @@ struct mlx5_devx_obj *
 		 attr->mini_cqe_res_format);
 	MLX5_SET(cqc, cqctx, mini_cqe_res_format_ext,
 		 attr->mini_cqe_res_format_ext);
-	MLX5_SET(cqc, cqctx, cqe_sz, attr->cqe_size);
 	if (attr->q_umem_valid) {
 		MLX5_SET(create_cq_in, in, cq_umem_valid, attr->q_umem_valid);
 		MLX5_SET(create_cq_in, in, cq_umem_id, attr->q_umem_id);
diff --git a/drivers/common/mlx5/mlx5_devx_cmds.h b/drivers/common/mlx5/mlx5_devx_cmds.h
index b335b7c..a14f3bf 100644
--- a/drivers/common/mlx5/mlx5_devx_cmds.h
+++ b/drivers/common/mlx5/mlx5_devx_cmds.h
@@ -277,7 +277,6 @@ struct mlx5_devx_cq_attr {
 	uint32_t cqe_comp_en:1;
 	uint32_t mini_cqe_res_format:2;
 	uint32_t mini_cqe_res_format_ext:2;
-	uint32_t cqe_size:3;
 	uint32_t log_cq_size:5;
 	uint32_t log_page_size:5;
 	uint32_t uar_page_id;
diff --git a/drivers/net/mlx5/mlx5_devx.c b/drivers/net/mlx5/mlx5_devx.c
index da3bb78..5c5bea6 100644
--- a/drivers/net/mlx5/mlx5_devx.c
+++ b/drivers/net/mlx5/mlx5_devx.c
@@ -486,8 +486,6 @@
 			"Port %u Rx CQE compression is disabled for LRO.",
 			dev->data->port_id);
 	}
-	if (priv->config.cqe_pad)
-		cq_attr.cqe_size = MLX5_CQE_SIZE_128B;
 	log_cqe_n = log2above(cqe_n);
 	cq_size = sizeof(struct mlx5_cqe) * (1 << log_cqe_n);
 	buf = rte_calloc_socket(__func__, 1, cq_size, page_size,
@@ -1262,8 +1260,6 @@
 		DRV_LOG(ERR, "Failed to allocate CQ door-bell.");
 		goto error;
 	}
-	cq_attr.cqe_size = (sizeof(struct mlx5_cqe) == 128) ?
-			    MLX5_CQE_SIZE_128B : MLX5_CQE_SIZE_64B;
 	cq_attr.uar_page_id = mlx5_os_get_devx_uar_page_id(priv->sh->tx_uar);
 	cq_attr.eqn = priv->sh->eqn;
 	cq_attr.q_umem_valid = 1;
diff --git a/drivers/net/mlx5/mlx5_txpp.c b/drivers/net/mlx5/mlx5_txpp.c
index 726bdc6a..e998de3 100644
--- a/drivers/net/mlx5/mlx5_txpp.c
+++ b/drivers/net/mlx5/mlx5_txpp.c
@@ -278,8 +278,6 @@
 		goto error;
 	}
 	/* Create completion queue object for Rearm Queue. */
-	cq_attr.cqe_size = (sizeof(struct mlx5_cqe) == 128) ?
-			    MLX5_CQE_SIZE_128B : MLX5_CQE_SIZE_64B;
 	cq_attr.uar_page_id = mlx5_os_get_devx_uar_page_id(sh->tx_uar);
 	cq_attr.eqn = sh->eqn;
 	cq_attr.q_umem_valid = 1;
@@ -516,8 +514,6 @@
 		goto error;
 	}
 	/* Create completion queue object for Clock Queue. */
-	cq_attr.cqe_size = (sizeof(struct mlx5_cqe) == 128) ?
-			    MLX5_CQE_SIZE_128B : MLX5_CQE_SIZE_64B;
 	cq_attr.use_first_only = 1;
 	cq_attr.overrun_ignore = 1;
 	cq_attr.uar_page_id = mlx5_os_get_devx_uar_page_id(sh->tx_uar);
-- 
1.8.3.1


^ permalink raw reply	[flat|nested] 57+ messages in thread

* [dpdk-dev] [PATCH v3 02/19] net/mlx5: remove CQE padding device argument
  2021-01-06  8:19       ` [dpdk-dev] [PATCH v3 00/19] common/mlx5: share DevX resources creations Michael Baum
  2021-01-06  8:19         ` [dpdk-dev] [PATCH v3 01/19] common/mlx5: fix completion queue entry size configuration Michael Baum
@ 2021-01-06  8:19         ` Michael Baum
  2021-01-06  8:19         ` [dpdk-dev] [PATCH v3 03/19] net/mlx5: fix ASO SQ creation error flow Michael Baum
                           ` (17 subsequent siblings)
  19 siblings, 0 replies; 57+ messages in thread
From: Michael Baum @ 2021-01-06  8:19 UTC (permalink / raw)
  To: dev; +Cc: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko, stable

The data-path code does not take the 'rxq_cqe_pad_en' device argument
into account and uses a padded CQE whenever the system cache-line size
is 128B.

This makes the argument redundant.

Remove it.

Fixes: bc91e8db12cd ("net/mlx5: add 128B padding of Rx completion entry")
Cc: stable@dpdk.org

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
 doc/guides/nics/mlx5.rst            | 18 ------------------
 drivers/net/mlx5/linux/mlx5_os.c    | 12 ------------
 drivers/net/mlx5/linux/mlx5_verbs.c |  2 +-
 drivers/net/mlx5/mlx5.c             |  6 ------
 drivers/net/mlx5/mlx5.h             |  1 -
 drivers/net/mlx5/windows/mlx5_os.c  |  7 -------
 6 files changed, 1 insertion(+), 45 deletions(-)

diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index 3bda0f8..6950cc1 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -448,24 +448,6 @@ Driver options
   - POWER9 and ARMv8 with ConnectX-4 Lx, ConnectX-5, ConnectX-6, ConnectX-6 Dx,
     ConnectX-6 Lx, BlueField and BlueField-2.
 
-- ``rxq_cqe_pad_en`` parameter [int]
-
-  A nonzero value enables 128B padding of CQE on RX side. The size of CQE
-  is aligned with the size of a cacheline of the core. If cacheline size is
-  128B, the CQE size is configured to be 128B even though the device writes
-  only 64B data on the cacheline. This is to avoid unnecessary cache
-  invalidation by device's two consecutive writes on to one cacheline.
-  However in some architecture, it is more beneficial to update entire
-  cacheline with padding the rest 64B rather than striding because
-  read-modify-write could drop performance a lot. On the other hand,
-  writing extra data will consume more PCIe bandwidth and could also drop
-  the maximum throughput. It is recommended to empirically set this
-  parameter. Disabled by default.
-
-  Supported on:
-
-  - CPU having 128B cacheline with ConnectX-5 and BlueField.
-
 - ``rxq_pkt_pad_en`` parameter [int]
 
   A nonzero value enables padding Rx packet to the size of cacheline on PCI
diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c
index 6812a1f..9ac1d46 100644
--- a/drivers/net/mlx5/linux/mlx5_os.c
+++ b/drivers/net/mlx5/linux/mlx5_os.c
@@ -677,7 +677,6 @@
 	unsigned int hw_padding = 0;
 	unsigned int mps;
 	unsigned int cqe_comp;
-	unsigned int cqe_pad = 0;
 	unsigned int tunnel_en = 0;
 	unsigned int mpls_en = 0;
 	unsigned int swp = 0;
@@ -875,11 +874,6 @@
 	else
 		cqe_comp = 1;
 	config->cqe_comp = cqe_comp;
-#ifdef HAVE_IBV_MLX5_MOD_CQE_128B_PAD
-	/* Whether device supports 128B Rx CQE padding. */
-	cqe_pad = RTE_CACHE_LINE_SIZE == 128 &&
-		  (dv_attr.flags & MLX5DV_CONTEXT_FLAGS_CQE_128B_PAD);
-#endif
 #ifdef HAVE_IBV_DEVICE_TUNNEL_SUPPORT
 	if (dv_attr.comp_mask & MLX5DV_CONTEXT_MASK_TUNNEL_OFFLOADS) {
 		tunnel_en = ((dv_attr.tunnel_offloads_caps &
@@ -1116,12 +1110,6 @@
 		DRV_LOG(WARNING, "Rx CQE compression isn't supported");
 		config->cqe_comp = 0;
 	}
-	if (config->cqe_pad && !cqe_pad) {
-		DRV_LOG(WARNING, "Rx CQE padding isn't supported");
-		config->cqe_pad = 0;
-	} else if (config->cqe_pad) {
-		DRV_LOG(INFO, "Rx CQE padding is enabled");
-	}
 	if (config->devx) {
 		err = mlx5_devx_cmd_query_hca_attr(sh->ctx, &config->hca_attr);
 		if (err) {
diff --git a/drivers/net/mlx5/linux/mlx5_verbs.c b/drivers/net/mlx5/linux/mlx5_verbs.c
index b52ae2e..318e39b 100644
--- a/drivers/net/mlx5/linux/mlx5_verbs.c
+++ b/drivers/net/mlx5/linux/mlx5_verbs.c
@@ -234,7 +234,7 @@
 			dev->data->port_id);
 	}
 #ifdef HAVE_IBV_MLX5_MOD_CQE_128B_PAD
-	if (priv->config.cqe_pad) {
+	if (RTE_CACHE_LINE_SIZE == 128) {
 		cq_attr.mlx5.comp_mask |= MLX5DV_CQ_INIT_ATTR_MASK_FLAGS;
 		cq_attr.mlx5.flags |= MLX5DV_CQ_INIT_ATTR_FLAGS_CQE_PAD;
 	}
diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index 023ef50..91492c5 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -44,9 +44,6 @@
 /* Device parameter to enable RX completion queue compression. */
 #define MLX5_RXQ_CQE_COMP_EN "rxq_cqe_comp_en"
 
-/* Device parameter to enable RX completion entry padding to 128B. */
-#define MLX5_RXQ_CQE_PAD_EN "rxq_cqe_pad_en"
-
 /* Device parameter to enable padding Rx packet to cacheline size. */
 #define MLX5_RXQ_PKT_PAD_EN "rxq_pkt_pad_en"
 
@@ -1625,8 +1622,6 @@ struct mlx5_dev_ctx_shared *
 		}
 		config->cqe_comp = !!tmp;
 		config->cqe_comp_fmt = tmp;
-	} else if (strcmp(MLX5_RXQ_CQE_PAD_EN, key) == 0) {
-		config->cqe_pad = !!tmp;
 	} else if (strcmp(MLX5_RXQ_PKT_PAD_EN, key) == 0) {
 		config->hw_padding = !!tmp;
 	} else if (strcmp(MLX5_RX_MPRQ_EN, key) == 0) {
@@ -1755,7 +1750,6 @@ struct mlx5_dev_ctx_shared *
 {
 	const char **params = (const char *[]){
 		MLX5_RXQ_CQE_COMP_EN,
-		MLX5_RXQ_CQE_PAD_EN,
 		MLX5_RXQ_PKT_PAD_EN,
 		MLX5_RX_MPRQ_EN,
 		MLX5_RX_MPRQ_LOG_STRIDE_NUM,
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 41034f5..92a5d04 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -212,7 +212,6 @@ struct mlx5_dev_config {
 	unsigned int mpls_en:1; /* MPLS over GRE/UDP is enabled. */
 	unsigned int cqe_comp:1; /* CQE compression is enabled. */
 	unsigned int cqe_comp_fmt:3; /* CQE compression format. */
-	unsigned int cqe_pad:1; /* CQE padding is enabled. */
 	unsigned int tso:1; /* Whether TSO is supported. */
 	unsigned int rx_vec_en:1; /* Rx vector is enabled. */
 	unsigned int mr_ext_memseg_en:1;
diff --git a/drivers/net/mlx5/windows/mlx5_os.c b/drivers/net/mlx5/windows/mlx5_os.c
index fdd69fd..b036432 100644
--- a/drivers/net/mlx5/windows/mlx5_os.c
+++ b/drivers/net/mlx5/windows/mlx5_os.c
@@ -313,7 +313,6 @@
 	struct mlx5_priv *priv = NULL;
 	int err = 0;
 	unsigned int cqe_comp;
-	unsigned int cqe_pad = 0;
 	struct rte_ether_addr mac;
 	char name[RTE_ETH_NAME_MAX_LEN];
 	int own_domain_id = 0;
@@ -461,12 +460,6 @@
 		DRV_LOG(WARNING, "Rx CQE compression isn't supported.");
 		config->cqe_comp = 0;
 	}
-	if (config->cqe_pad && !cqe_pad) {
-		DRV_LOG(WARNING, "Rx CQE padding isn't supported.");
-		config->cqe_pad = 0;
-	} else if (config->cqe_pad) {
-		DRV_LOG(INFO, "Rx CQE padding is enabled.");
-	}
 	if (config->devx) {
 		err = mlx5_devx_cmd_query_hca_attr(sh->ctx, &config->hca_attr);
 		if (err) {
-- 
1.8.3.1


^ permalink raw reply	[flat|nested] 57+ messages in thread

* [dpdk-dev] [PATCH v3 03/19] net/mlx5: fix ASO SQ creation error flow
  2021-01-06  8:19       ` [dpdk-dev] [PATCH v3 00/19] common/mlx5: share DevX resources creations Michael Baum
  2021-01-06  8:19         ` [dpdk-dev] [PATCH v3 01/19] common/mlx5: fix completion queue entry size configuration Michael Baum
  2021-01-06  8:19         ` [dpdk-dev] [PATCH v3 02/19] net/mlx5: remove CQE padding device argument Michael Baum
@ 2021-01-06  8:19         ` Michael Baum
  2021-01-06  8:19         ` [dpdk-dev] [PATCH v3 04/19] common/mlx5: share DevX CQ creation Michael Baum
                           ` (16 subsequent siblings)
  19 siblings, 0 replies; 57+ messages in thread
From: Michael Baum @ 2021-01-06  8:19 UTC (permalink / raw)
  To: dev; +Cc: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko, stable

In ASO SQ creation, the PMD allocates a umem buffer for the SQ.

When the umem buffer allocation fails, the MR and CQ memory are not
freed, which causes a memory leak.

Free them in the error flow.

Fixes: f935ed4b645a ("net/mlx5: support flow hit action for aging")
Cc: stable@dpdk.org

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
 drivers/net/mlx5/mlx5_flow_age.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/net/mlx5/mlx5_flow_age.c b/drivers/net/mlx5/mlx5_flow_age.c
index 1f15f19..e867607 100644
--- a/drivers/net/mlx5/mlx5_flow_age.c
+++ b/drivers/net/mlx5/mlx5_flow_age.c
@@ -279,7 +279,8 @@
 				   sizeof(*sq->db_rec) * 2, 4096, socket);
 	if (!sq->umem_buf) {
 		DRV_LOG(ERR, "Can't allocate wqe buffer.");
-		return -ENOMEM;
+		rte_errno = ENOMEM;
+		goto error;
 	}
 	sq->wqe_umem = mlx5_os_umem_reg(ctx,
 						(void *)(uintptr_t)sq->umem_buf,
-- 
1.8.3.1


^ permalink raw reply	[flat|nested] 57+ messages in thread

* [dpdk-dev] [PATCH v3 04/19] common/mlx5: share DevX CQ creation
  2021-01-06  8:19       ` [dpdk-dev] [PATCH v3 00/19] common/mlx5: share DevX resources creations Michael Baum
                           ` (2 preceding siblings ...)
  2021-01-06  8:19         ` [dpdk-dev] [PATCH v3 03/19] net/mlx5: fix ASO SQ creation error flow Michael Baum
@ 2021-01-06  8:19         ` Michael Baum
  2021-01-06  8:19         ` [dpdk-dev] [PATCH v3 05/19] regex/mlx5: move DevX CQ creation to common Michael Baum
                           ` (15 subsequent siblings)
  19 siblings, 0 replies; 57+ messages in thread
From: Michael Baum @ 2021-01-06  8:19 UTC (permalink / raw)
  To: dev; +Cc: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko

The CQ object in DevX is created in several places across several
different drivers.
In all of them almost all the details are the same, in particular the
allocation of the required resources.

Add a structure that contains all the resources, and provide creation
and release functions for it.
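
A minimal usage sketch of the new helpers (the wrapper function below
is illustrative; the attribute fields and entry points are the ones
this patch introduces):

    #include <rte_errno.h>
    #include <rte_memory.h>      /* SOCKET_ID_ANY */
    #include <mlx5_common_devx.h>

    static int
    create_and_drop_cq(void *ctx, uint32_t uar_page_id, uint16_t log_desc_n)
    {
            struct mlx5_devx_cq cq;
            struct mlx5_devx_cq_attr attr = {
                    .uar_page_id = uar_page_id,
            };

            /* Allocates the CQE ring plus doorbell in one umem, queries
             * the EQN and creates the CQ object in a single call.
             */
            if (mlx5_devx_cq_create(ctx, &cq, log_desc_n, &attr,
                                    SOCKET_ID_ANY))
                    return -rte_errno;
            /* Data-path accesses go through cq.cqes, cq.db_rec and
             * cq.cq->id; release everything with one call.
             */
            mlx5_devx_cq_destroy(&cq);
            return 0;
    }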

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
 drivers/common/mlx5/meson.build                 |   1 +
 drivers/common/mlx5/mlx5_common_devx.c          | 155 ++++++++++++++++++++++++
 drivers/common/mlx5/mlx5_common_devx.h          |  31 +++++
 drivers/common/mlx5/rte_common_mlx5_exports.def |   3 +
 drivers/common/mlx5/version.map                 |   3 +
 drivers/common/mlx5/windows/mlx5_win_ext.h      |   1 +
 6 files changed, 194 insertions(+)
 create mode 100644 drivers/common/mlx5/mlx5_common_devx.c
 create mode 100644 drivers/common/mlx5/mlx5_common_devx.h

diff --git a/drivers/common/mlx5/meson.build b/drivers/common/mlx5/meson.build
index 3dacc6f..26cee06 100644
--- a/drivers/common/mlx5/meson.build
+++ b/drivers/common/mlx5/meson.build
@@ -16,6 +16,7 @@ sources += files(
 	'mlx5_common_mr.c',
 	'mlx5_malloc.c',
 	'mlx5_common_pci.c',
+	'mlx5_common_devx.c',
 )
 
 cflags_options = [
diff --git a/drivers/common/mlx5/mlx5_common_devx.c b/drivers/common/mlx5/mlx5_common_devx.c
new file mode 100644
index 0000000..3ec0dd5
--- /dev/null
+++ b/drivers/common/mlx5/mlx5_common_devx.c
@@ -0,0 +1,155 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2020 Mellanox Technologies, Ltd
+ */
+#include <stdint.h>
+
+#include <rte_errno.h>
+#include <rte_common.h>
+#include <rte_eal_paging.h>
+
+#include <mlx5_glue.h>
+#include <mlx5_common_os.h>
+
+#include "mlx5_prm.h"
+#include "mlx5_devx_cmds.h"
+#include "mlx5_common_utils.h"
+#include "mlx5_malloc.h"
+#include "mlx5_common.h"
+#include "mlx5_common_devx.h"
+
+/**
+ * Destroy DevX Completion Queue.
+ *
+ * @param[in] cq
+ *   DevX CQ to destroy.
+ */
+void
+mlx5_devx_cq_destroy(struct mlx5_devx_cq *cq)
+{
+	if (cq->cq)
+		claim_zero(mlx5_devx_cmd_destroy(cq->cq));
+	if (cq->umem_obj)
+		claim_zero(mlx5_os_umem_dereg(cq->umem_obj));
+	if (cq->umem_buf)
+		mlx5_free((void *)(uintptr_t)cq->umem_buf);
+}
+
+/* Mark all CQEs initially as invalid. */
+static void
+mlx5_cq_init(struct mlx5_devx_cq *cq_obj, uint16_t cq_size)
+{
+	volatile struct mlx5_cqe *cqe = cq_obj->cqes;
+	uint16_t i;
+
+	for (i = 0; i < cq_size; i++, cqe++)
+		cqe->op_own = (MLX5_CQE_INVALID << 4) | MLX5_CQE_OWNER_MASK;
+}
+
+/**
+ * Create Completion Queue using DevX API.
+ *
+ * Get a pointer to partially initialized attributes structure, and updates the
+ * following fields:
+ *   q_umem_valid
+ *   q_umem_id
+ *   q_umem_offset
+ *   db_umem_valid
+ *   db_umem_id
+ *   db_umem_offset
+ *   eqn
+ *   log_cq_size
+ *   log_page_size
+ * All other fields are updated by caller.
+ *
+ * @param[in] ctx
+ *   Context returned from mlx5 open_device() glue function.
+ * @param[in/out] cq_obj
+ *   Pointer to CQ to create.
+ * @param[in] log_desc_n
+ *   Log of number of descriptors in queue.
+ * @param[in] attr
+ *   Pointer to CQ attributes structure.
+ * @param[in] socket
+ *   Socket to use for allocation.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+int
+mlx5_devx_cq_create(void *ctx, struct mlx5_devx_cq *cq_obj, uint16_t log_desc_n,
+		    struct mlx5_devx_cq_attr *attr, int socket)
+{
+	struct mlx5_devx_obj *cq = NULL;
+	struct mlx5dv_devx_umem *umem_obj = NULL;
+	void *umem_buf = NULL;
+	size_t page_size = rte_mem_page_size();
+	size_t alignment = MLX5_CQE_BUF_ALIGNMENT;
+	uint32_t umem_size, umem_dbrec;
+	uint32_t eqn;
+	uint16_t cq_size = 1 << log_desc_n;
+	int ret;
+
+	if (page_size == (size_t)-1 || alignment == (size_t)-1) {
+		DRV_LOG(ERR, "Failed to get page_size.");
+		rte_errno = ENOMEM;
+		return -rte_errno;
+	}
+	/* Query first EQN. */
+	ret = mlx5_glue->devx_query_eqn(ctx, 0, &eqn);
+	if (ret) {
+		rte_errno = errno;
+		DRV_LOG(ERR, "Failed to query event queue number.");
+		return -rte_errno;
+	}
+	/* Allocate memory buffer for CQEs and doorbell record. */
+	umem_size = sizeof(struct mlx5_cqe) * cq_size;
+	umem_dbrec = RTE_ALIGN(umem_size, MLX5_DBR_SIZE);
+	umem_size += MLX5_DBR_SIZE;
+	umem_buf = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO, umem_size,
+			       alignment, socket);
+	if (!umem_buf) {
+		DRV_LOG(ERR, "Failed to allocate memory for CQ.");
+		rte_errno = ENOMEM;
+		return -rte_errno;
+	}
+	/* Register allocated buffer in user space with DevX. */
+	umem_obj = mlx5_os_umem_reg(ctx, (void *)(uintptr_t)umem_buf, umem_size,
+				    IBV_ACCESS_LOCAL_WRITE);
+	if (!umem_obj) {
+		DRV_LOG(ERR, "Failed to register umem for CQ.");
+		rte_errno = errno;
+		goto error;
+	}
+	/* Fill attributes for CQ object creation. */
+	attr->q_umem_valid = 1;
+	attr->q_umem_id = mlx5_os_get_umem_id(umem_obj);
+	attr->q_umem_offset = 0;
+	attr->db_umem_valid = 1;
+	attr->db_umem_id = attr->q_umem_id;
+	attr->db_umem_offset = umem_dbrec;
+	attr->eqn = eqn;
+	attr->log_cq_size = log_desc_n;
+	attr->log_page_size = rte_log2_u32(page_size);
+	/* Create completion queue object with DevX. */
+	cq = mlx5_devx_cmd_create_cq(ctx, attr);
+	if (!cq) {
+		DRV_LOG(ERR, "Can't create DevX CQ object.");
+		rte_errno  = ENOMEM;
+		goto error;
+	}
+	cq_obj->umem_buf = umem_buf;
+	cq_obj->umem_obj = umem_obj;
+	cq_obj->cq = cq;
+	cq_obj->db_rec = RTE_PTR_ADD(cq_obj->umem_buf, umem_dbrec);
+	/* Mark all CQEs initially as invalid. */
+	mlx5_cq_init(cq_obj, cq_size);
+	return 0;
+error:
+	ret = rte_errno;
+	if (umem_obj)
+		claim_zero(mlx5_os_umem_dereg(umem_obj));
+	if (umem_buf)
+		mlx5_free((void *)(uintptr_t)umem_buf);
+	rte_errno = ret;
+	return -rte_errno;
+}
diff --git a/drivers/common/mlx5/mlx5_common_devx.h b/drivers/common/mlx5/mlx5_common_devx.h
new file mode 100644
index 0000000..20d5da0
--- /dev/null
+++ b/drivers/common/mlx5/mlx5_common_devx.h
@@ -0,0 +1,31 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2020 Mellanox Technologies, Ltd
+ */
+
+#ifndef RTE_PMD_MLX5_COMMON_DEVX_H_
+#define RTE_PMD_MLX5_COMMON_DEVX_H_
+
+#include "mlx5_devx_cmds.h"
+
+/* DevX Completion Queue structure. */
+struct mlx5_devx_cq {
+	struct mlx5_devx_obj *cq; /* The CQ DevX object. */
+	void *umem_obj; /* The CQ umem object. */
+	union {
+		volatile void *umem_buf;
+		volatile struct mlx5_cqe *cqes; /* The CQ ring buffer. */
+	};
+	volatile uint32_t *db_rec; /* The CQ doorbell record. */
+};
+
+/* mlx5_common_devx.c */
+
+__rte_internal
+void mlx5_devx_cq_destroy(struct mlx5_devx_cq *cq);
+
+__rte_internal
+int mlx5_devx_cq_create(void *ctx, struct mlx5_devx_cq *cq_obj,
+			uint16_t log_desc_n, struct mlx5_devx_cq_attr *attr,
+			int socket);
+
+#endif /* RTE_PMD_MLX5_COMMON_DEVX_H_ */
diff --git a/drivers/common/mlx5/rte_common_mlx5_exports.def b/drivers/common/mlx5/rte_common_mlx5_exports.def
index 65ae47a..d10db40 100644
--- a/drivers/common/mlx5/rte_common_mlx5_exports.def
+++ b/drivers/common/mlx5/rte_common_mlx5_exports.def
@@ -35,6 +35,9 @@ EXPORTS
 	mlx5_devx_get_out_command_status
 	mlx5_devx_cmd_create_flow_hit_aso_obj
 
+    mlx5_devx_cq_create
+    mlx5_devx_cq_destroy
+
 	mlx5_get_dbr
 	mlx5_glue
 
diff --git a/drivers/common/mlx5/version.map b/drivers/common/mlx5/version.map
index fb3952b..850202c 100644
--- a/drivers/common/mlx5/version.map
+++ b/drivers/common/mlx5/version.map
@@ -43,6 +43,9 @@ INTERNAL {
 	mlx5_devx_get_out_command_status;
 	mlx5_devx_alloc_uar;
 
+    mlx5_devx_cq_create;
+    mlx5_devx_cq_destroy;
+
 	mlx5_get_ifname_sysfs;
 	mlx5_get_dbr;
 
diff --git a/drivers/common/mlx5/windows/mlx5_win_ext.h b/drivers/common/mlx5/windows/mlx5_win_ext.h
index 111af2e..60610df 100644
--- a/drivers/common/mlx5/windows/mlx5_win_ext.h
+++ b/drivers/common/mlx5/windows/mlx5_win_ext.h
@@ -9,6 +9,7 @@
 extern "C" {
 #endif
 
+#include "mlx5_prm.h"
 #include "mlx5devx.h"
 
 typedef struct mlx5_context {
-- 
1.8.3.1


^ permalink raw reply	[flat|nested] 57+ messages in thread

* [dpdk-dev] [PATCH v3 05/19] regex/mlx5: move DevX CQ creation to common
  2021-01-06  8:19       ` [dpdk-dev] [PATCH v3 00/19] common/mlx5: share DevX resources creations Michael Baum
                           ` (3 preceding siblings ...)
  2021-01-06  8:19         ` [dpdk-dev] [PATCH v3 04/19] common/mlx5: share DevX CQ creation Michael Baum
@ 2021-01-06  8:19         ` Michael Baum
  2021-01-06  8:19         ` [dpdk-dev] [PATCH v3 06/19] vdpa/mlx5: " Michael Baum
                           ` (14 subsequent siblings)
  19 siblings, 0 replies; 57+ messages in thread
From: Michael Baum @ 2021-01-06  8:19 UTC (permalink / raw)
  To: dev; +Cc: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko

Use the common function for DevX CQ creation.

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
 drivers/regex/mlx5/mlx5_regex.c          |  6 ---
 drivers/regex/mlx5/mlx5_regex.h          |  9 +---
 drivers/regex/mlx5/mlx5_regex_control.c  | 91 ++++++--------------------------
 drivers/regex/mlx5/mlx5_regex_fastpath.c |  4 +-
 4 files changed, 20 insertions(+), 90 deletions(-)

diff --git a/drivers/regex/mlx5/mlx5_regex.c b/drivers/regex/mlx5/mlx5_regex.c
index c91c444..c0d6331 100644
--- a/drivers/regex/mlx5/mlx5_regex.c
+++ b/drivers/regex/mlx5/mlx5_regex.c
@@ -170,12 +170,6 @@
 		rte_errno = rte_errno ? rte_errno : EINVAL;
 		goto error;
 	}
-	ret = mlx5_glue->devx_query_eqn(ctx, 0, &priv->eqn);
-	if (ret) {
-		DRV_LOG(ERR, "can't query event queue number.");
-		rte_errno = ENOMEM;
-		goto error;
-	}
 	/*
 	 * This PMD always claims the write memory barrier on UAR
 	 * registers writings, it is safe to allocate UAR with any
diff --git a/drivers/regex/mlx5/mlx5_regex.h b/drivers/regex/mlx5/mlx5_regex.h
index 2c4877c..9f7a388 100644
--- a/drivers/regex/mlx5/mlx5_regex.h
+++ b/drivers/regex/mlx5/mlx5_regex.h
@@ -12,6 +12,7 @@
 
 #include <mlx5_common.h>
 #include <mlx5_common_mr.h>
+#include <mlx5_common_devx.h>
 
 #include "mlx5_rxp.h"
 
@@ -30,13 +31,8 @@ struct mlx5_regex_sq {
 
 struct mlx5_regex_cq {
 	uint32_t log_nb_desc; /* Log 2 number of desc for this object. */
-	struct mlx5_devx_obj *obj; /* The CQ DevX object. */
-	int64_t dbr_offset; /* Door bell record offset. */
-	uint32_t dbr_umem; /* Door bell record umem id. */
-	volatile struct mlx5_cqe *cqe; /* The CQ ring buffer. */
-	struct mlx5dv_devx_umem *cqe_umem; /* CQ buffer umem. */
+	struct mlx5_devx_cq cq_obj; /* The CQ DevX object. */
 	size_t ci;
-	uint32_t *dbr;
 };
 
 struct mlx5_regex_qp {
@@ -75,7 +71,6 @@ struct mlx5_regex_priv {
 	struct mlx5_regex_db db[MLX5_RXP_MAX_ENGINES +
 				MLX5_RXP_EM_COUNT];
 	uint32_t nb_engines; /* Number of RegEx engines. */
-	uint32_t eqn; /* EQ number. */
 	struct mlx5dv_devx_uar *uar; /* UAR object. */
 	struct ibv_pd *pd;
 	struct mlx5_dbr_page_list dbrpgs; /* Door-bell pages. */
diff --git a/drivers/regex/mlx5/mlx5_regex_control.c b/drivers/regex/mlx5/mlx5_regex_control.c
index d6f452b..ca6c0f5 100644
--- a/drivers/regex/mlx5/mlx5_regex_control.c
+++ b/drivers/regex/mlx5/mlx5_regex_control.c
@@ -6,6 +6,7 @@
 
 #include <rte_log.h>
 #include <rte_errno.h>
+#include <rte_memory.h>
 #include <rte_malloc.h>
 #include <rte_regexdev.h>
 #include <rte_regexdev_core.h>
@@ -17,6 +18,7 @@
 #include <mlx5_devx_cmds.h>
 #include <mlx5_prm.h>
 #include <mlx5_common_os.h>
+#include <mlx5_common_devx.h>
 
 #include "mlx5_regex.h"
 #include "mlx5_regex_utils.h"
@@ -44,8 +46,6 @@
 /**
  * destroy CQ.
  *
- * @param priv
- *   Pointer to the priv object.
  * @param cp
  *   Pointer to the CQ to be destroyed.
  *
@@ -53,24 +53,10 @@
  *   0 on success, a negative errno value otherwise and rte_errno is set.
  */
 static int
-regex_ctrl_destroy_cq(struct mlx5_regex_priv *priv, struct mlx5_regex_cq *cq)
+regex_ctrl_destroy_cq(struct mlx5_regex_cq *cq)
 {
-	if (cq->cqe_umem) {
-		mlx5_glue->devx_umem_dereg(cq->cqe_umem);
-		cq->cqe_umem = NULL;
-	}
-	if (cq->cqe) {
-		rte_free((void *)(uintptr_t)cq->cqe);
-		cq->cqe = NULL;
-	}
-	if (cq->dbr_offset) {
-		mlx5_release_dbr(&priv->dbrpgs, cq->dbr_umem, cq->dbr_offset);
-		cq->dbr_offset = -1;
-	}
-	if (cq->obj) {
-		mlx5_devx_cmd_destroy(cq->obj);
-		cq->obj = NULL;
-	}
+	mlx5_devx_cq_destroy(&cq->cq_obj);
+	memset(cq, 0, sizeof(*cq));
 	return 0;
 }
 
@@ -89,65 +75,20 @@
 regex_ctrl_create_cq(struct mlx5_regex_priv *priv, struct mlx5_regex_cq *cq)
 {
 	struct mlx5_devx_cq_attr attr = {
-		.q_umem_valid = 1,
-		.db_umem_valid = 1,
-		.eqn = priv->eqn,
+		.uar_page_id = priv->uar->page_id,
 	};
-	struct mlx5_devx_dbr_page *dbr_page = NULL;
-	void *buf = NULL;
-	size_t pgsize = sysconf(_SC_PAGESIZE);
-	uint32_t cq_size = 1 << cq->log_nb_desc;
-	uint32_t i;
-
-	cq->dbr_offset = mlx5_get_dbr(priv->ctx, &priv->dbrpgs, &dbr_page);
-	if (cq->dbr_offset < 0) {
-		DRV_LOG(ERR, "Can't allocate cq door bell record.");
-		rte_errno  = ENOMEM;
-		goto error;
-	}
-	cq->dbr_umem = mlx5_os_get_umem_id(dbr_page->umem);
-	cq->dbr = (uint32_t *)((uintptr_t)dbr_page->dbrs +
-			       (uintptr_t)cq->dbr_offset);
+	int ret;
 
-	buf = rte_calloc(NULL, 1, sizeof(struct mlx5_cqe) * cq_size, 4096);
-	if (!buf) {
-		DRV_LOG(ERR, "Can't allocate cqe buffer.");
-		rte_errno  = ENOMEM;
-		goto error;
-	}
-	cq->cqe = buf;
-	for (i = 0; i < cq_size; i++)
-		cq->cqe[i].op_own = 0xff;
-	cq->cqe_umem = mlx5_glue->devx_umem_reg(priv->ctx, buf,
-						sizeof(struct mlx5_cqe) *
-						cq_size, 7);
 	cq->ci = 0;
-	if (!cq->cqe_umem) {
-		DRV_LOG(ERR, "Can't register cqe mem.");
-		rte_errno  = ENOMEM;
-		goto error;
-	}
-	attr.db_umem_offset = cq->dbr_offset;
-	attr.db_umem_id = cq->dbr_umem;
-	attr.q_umem_id = mlx5_os_get_umem_id(cq->cqe_umem);
-	attr.log_cq_size = cq->log_nb_desc;
-	attr.uar_page_id = priv->uar->page_id;
-	attr.log_page_size = rte_log2_u32(pgsize);
-	cq->obj = mlx5_devx_cmd_create_cq(priv->ctx, &attr);
-	if (!cq->obj) {
-		DRV_LOG(ERR, "Can't create cq object.");
-		rte_errno  = ENOMEM;
-		goto error;
+	ret = mlx5_devx_cq_create(priv->ctx, &cq->cq_obj, cq->log_nb_desc,
+				  &attr, SOCKET_ID_ANY);
+	if (ret) {
+		DRV_LOG(ERR, "Can't create CQ object.");
+		memset(cq, 0, sizeof(*cq));
+		rte_errno = ENOMEM;
+		return -rte_errno;
 	}
 	return 0;
-error:
-	if (cq->cqe_umem)
-		mlx5_glue->devx_umem_dereg(cq->cqe_umem);
-	if (buf)
-		rte_free(buf);
-	if (cq->dbr_offset)
-		mlx5_release_dbr(&priv->dbrpgs, cq->dbr_umem, cq->dbr_offset);
-	return -rte_errno;
 }
 
 #ifdef HAVE_IBV_FLOW_DV_SUPPORT
@@ -232,7 +173,7 @@
 	attr.tis_lst_sz = 0;
 	attr.tis_num = 0;
 	attr.user_index = q_ind;
-	attr.cqn = qp->cq.obj->id;
+	attr.cqn = qp->cq.cq_obj.cq->id;
 	wq_attr->uar_page = priv->uar->page_id;
 	regex_get_pdn(priv->pd, &pd_num);
 	wq_attr->pd = pd_num;
@@ -389,7 +330,7 @@
 err_btree:
 	for (i = 0; i < nb_sq_config; i++)
 		regex_ctrl_destroy_sq(priv, qp, i);
-	regex_ctrl_destroy_cq(priv, &qp->cq);
+	regex_ctrl_destroy_cq(&qp->cq);
 err_cq:
 	rte_free(qp->sqs);
 	return ret;
diff --git a/drivers/regex/mlx5/mlx5_regex_fastpath.c b/drivers/regex/mlx5/mlx5_regex_fastpath.c
index 5857617..255fd40 100644
--- a/drivers/regex/mlx5/mlx5_regex_fastpath.c
+++ b/drivers/regex/mlx5/mlx5_regex_fastpath.c
@@ -224,7 +224,7 @@ struct mlx5_regex_job {
 	size_t next_cqe_offset;
 
 	next_cqe_offset =  (cq->ci & (cq_size_get(cq) - 1));
-	cqe = (volatile struct mlx5_cqe *)(cq->cqe + next_cqe_offset);
+	cqe = (volatile struct mlx5_cqe *)(cq->cq_obj.cqes + next_cqe_offset);
 	rte_io_wmb();
 
 	int ret = check_cqe(cqe, cq_size_get(cq), cq->ci);
@@ -285,7 +285,7 @@ struct mlx5_regex_job {
 		}
 		cq->ci = (cq->ci + 1) & 0xffffff;
 		rte_wmb();
-		cq->dbr[0] = rte_cpu_to_be_32(cq->ci);
+		cq->cq_obj.db_rec[0] = rte_cpu_to_be_32(cq->ci);
 		queue->free_sqs |= (1 << sqid);
 	}
 
-- 
1.8.3.1


^ permalink raw reply	[flat|nested] 57+ messages in thread

* [dpdk-dev] [PATCH v3 06/19] vdpa/mlx5: move DevX CQ creation to common
  2021-01-06  8:19       ` [dpdk-dev] [PATCH v3 00/19] common/mlx5: share DevX resources creations Michael Baum
                           ` (4 preceding siblings ...)
  2021-01-06  8:19         ` [dpdk-dev] [PATCH v3 05/19] regex/mlx5: move DevX CQ creation to common Michael Baum
@ 2021-01-06  8:19         ` Michael Baum
  2021-01-06  8:19         ` [dpdk-dev] [PATCH v3 07/19] net/mlx5: move rearm and clock queue " Michael Baum
                           ` (13 subsequent siblings)
  19 siblings, 0 replies; 57+ messages in thread
From: Michael Baum @ 2021-01-06  8:19 UTC (permalink / raw)
  To: dev; +Cc: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko

Use the common function for DevX CQ creation.

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
 drivers/vdpa/mlx5/mlx5_vdpa.h       | 10 +----
 drivers/vdpa/mlx5/mlx5_vdpa_event.c | 86 +++++++++++--------------------------
 drivers/vdpa/mlx5/mlx5_vdpa_virtq.c |  2 +-
 3 files changed, 28 insertions(+), 70 deletions(-)

diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.h b/drivers/vdpa/mlx5/mlx5_vdpa.h
index d039ada..ddee9dc 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa.h
+++ b/drivers/vdpa/mlx5/mlx5_vdpa.h
@@ -22,6 +22,7 @@
 
 #include <mlx5_glue.h>
 #include <mlx5_devx_cmds.h>
+#include <mlx5_common_devx.h>
 #include <mlx5_prm.h>
 
 
@@ -46,13 +47,7 @@ struct mlx5_vdpa_cq {
 	uint32_t armed:1;
 	int callfd;
 	rte_spinlock_t sl;
-	struct mlx5_devx_obj *cq;
-	struct mlx5dv_devx_umem *umem_obj;
-	union {
-		volatile void *umem_buf;
-		volatile struct mlx5_cqe *cqes;
-	};
-	volatile uint32_t *db_rec;
+	struct mlx5_devx_cq cq_obj;
 	uint64_t errors;
 };
 
@@ -144,7 +139,6 @@ struct mlx5_vdpa_priv {
 	uint32_t gpa_mkey_index;
 	struct ibv_mr *null_mr;
 	struct rte_vhost_memory *vmem;
-	uint32_t eqn;
 	struct mlx5dv_devx_event_channel *eventc;
 	struct mlx5dv_devx_event_channel *err_chnl;
 	struct mlx5dv_devx_uar *uar;
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_event.c b/drivers/vdpa/mlx5/mlx5_vdpa_event.c
index fd61509..eba783a 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa_event.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa_event.c
@@ -7,6 +7,7 @@
 #include <sys/eventfd.h>
 
 #include <rte_malloc.h>
+#include <rte_memory.h>
 #include <rte_errno.h>
 #include <rte_lcore.h>
 #include <rte_atomic.h>
@@ -16,6 +17,7 @@
 
 #include <mlx5_common.h>
 #include <mlx5_common_os.h>
+#include <mlx5_common_devx.h>
 #include <mlx5_glue.h>
 
 #include "mlx5_vdpa_utils.h"
@@ -48,7 +50,6 @@
 		priv->eventc = NULL;
 	}
 #endif
-	priv->eqn = 0;
 }
 
 /* Prepare all the global resources for all the event objects.*/
@@ -59,11 +60,6 @@
 
 	if (priv->eventc)
 		return 0;
-	if (mlx5_glue->devx_query_eqn(priv->ctx, 0, &priv->eqn)) {
-		rte_errno = errno;
-		DRV_LOG(ERR, "Failed to query EQ number %d.", rte_errno);
-		return -1;
-	}
 	priv->eventc = mlx5_os_devx_create_event_channel(priv->ctx,
 			   MLX5DV_DEVX_CREATE_EVENT_CHANNEL_FLAGS_OMIT_EV_DATA);
 	if (!priv->eventc) {
@@ -98,12 +94,7 @@
 static void
 mlx5_vdpa_cq_destroy(struct mlx5_vdpa_cq *cq)
 {
-	if (cq->cq)
-		claim_zero(mlx5_devx_cmd_destroy(cq->cq));
-	if (cq->umem_obj)
-		claim_zero(mlx5_glue->devx_umem_dereg(cq->umem_obj));
-	if (cq->umem_buf)
-		rte_free((void *)(uintptr_t)cq->umem_buf);
+	mlx5_devx_cq_destroy(&cq->cq_obj);
 	memset(cq, 0, sizeof(*cq));
 }
 
@@ -113,12 +104,12 @@
 	uint32_t arm_sn = cq->arm_sn << MLX5_CQ_SQN_OFFSET;
 	uint32_t cq_ci = cq->cq_ci & MLX5_CI_MASK;
 	uint32_t doorbell_hi = arm_sn | MLX5_CQ_DBR_CMD_ALL | cq_ci;
-	uint64_t doorbell = ((uint64_t)doorbell_hi << 32) | cq->cq->id;
+	uint64_t doorbell = ((uint64_t)doorbell_hi << 32) | cq->cq_obj.cq->id;
 	uint64_t db_be = rte_cpu_to_be_64(doorbell);
 	uint32_t *addr = RTE_PTR_ADD(priv->uar->base_addr, MLX5_CQ_DOORBELL);
 
 	rte_io_wmb();
-	cq->db_rec[MLX5_CQ_ARM_DB] = rte_cpu_to_be_32(doorbell_hi);
+	cq->cq_obj.db_rec[MLX5_CQ_ARM_DB] = rte_cpu_to_be_32(doorbell_hi);
 	rte_wmb();
 #ifdef RTE_ARCH_64
 	*(uint64_t *)addr = db_be;
@@ -135,52 +126,25 @@
 mlx5_vdpa_cq_create(struct mlx5_vdpa_priv *priv, uint16_t log_desc_n,
 		    int callfd, struct mlx5_vdpa_cq *cq)
 {
-	struct mlx5_devx_cq_attr attr = {0};
-	size_t pgsize = sysconf(_SC_PAGESIZE);
-	uint32_t umem_size;
+	struct mlx5_devx_cq_attr attr = {
+		.use_first_only = 1,
+		.uar_page_id = priv->uar->page_id,
+	};
 	uint16_t event_nums[1] = {0};
-	uint16_t cq_size = 1 << log_desc_n;
 	int ret;
 
-	cq->log_desc_n = log_desc_n;
-	umem_size = sizeof(struct mlx5_cqe) * cq_size + sizeof(*cq->db_rec) * 2;
-	cq->umem_buf = rte_zmalloc(__func__, umem_size, 4096);
-	if (!cq->umem_buf) {
-		DRV_LOG(ERR, "Failed to allocate memory for CQ.");
-		rte_errno = ENOMEM;
-		return -ENOMEM;
-	}
-	cq->umem_obj = mlx5_glue->devx_umem_reg(priv->ctx,
-						(void *)(uintptr_t)cq->umem_buf,
-						umem_size,
-						IBV_ACCESS_LOCAL_WRITE);
-	if (!cq->umem_obj) {
-		DRV_LOG(ERR, "Failed to register umem for CQ.");
-		goto error;
-	}
-	attr.q_umem_valid = 1;
-	attr.db_umem_valid = 1;
-	attr.use_first_only = 1;
-	attr.overrun_ignore = 0;
-	attr.uar_page_id = priv->uar->page_id;
-	attr.q_umem_id = cq->umem_obj->umem_id;
-	attr.q_umem_offset = 0;
-	attr.db_umem_id = cq->umem_obj->umem_id;
-	attr.db_umem_offset = sizeof(struct mlx5_cqe) * cq_size;
-	attr.eqn = priv->eqn;
-	attr.log_cq_size = log_desc_n;
-	attr.log_page_size = rte_log2_u32(pgsize);
-	cq->cq = mlx5_devx_cmd_create_cq(priv->ctx, &attr);
-	if (!cq->cq)
+	ret = mlx5_devx_cq_create(priv->ctx, &cq->cq_obj, log_desc_n, &attr,
+				  SOCKET_ID_ANY);
+	if (ret)
 		goto error;
-	cq->db_rec = RTE_PTR_ADD(cq->umem_buf, (uintptr_t)attr.db_umem_offset);
 	cq->cq_ci = 0;
+	cq->log_desc_n = log_desc_n;
 	rte_spinlock_init(&cq->sl);
 	/* Subscribe CQ event to the event channel controlled by the driver. */
-	ret = mlx5_os_devx_subscribe_devx_event(priv->eventc, cq->cq->obj,
-						   sizeof(event_nums),
-						   event_nums,
-						   (uint64_t)(uintptr_t)cq);
+	ret = mlx5_os_devx_subscribe_devx_event(priv->eventc,
+						cq->cq_obj.cq->obj,
+						sizeof(event_nums), event_nums,
+						(uint64_t)(uintptr_t)cq);
 	if (ret) {
 		DRV_LOG(ERR, "Failed to subscribe CQE event.");
 		rte_errno = errno;
@@ -188,8 +152,8 @@
 	}
 	cq->callfd = callfd;
 	/* Init CQ to ones to be in HW owner in the start. */
-	cq->cqes[0].op_own = MLX5_CQE_OWNER_MASK;
-	cq->cqes[0].wqe_counter = rte_cpu_to_be_16(UINT16_MAX);
+	cq->cq_obj.cqes[0].op_own = MLX5_CQE_OWNER_MASK;
+	cq->cq_obj.cqes[0].wqe_counter = rte_cpu_to_be_16(UINT16_MAX);
 	/* First arming. */
 	mlx5_vdpa_cq_arm(priv, cq);
 	return 0;
@@ -216,7 +180,7 @@
 	uint16_t cur_wqe_counter;
 	uint16_t comp;
 
-	last_word.word = rte_read32(&cq->cqes[0].wqe_counter);
+	last_word.word = rte_read32(&cq->cq_obj.cqes[0].wqe_counter);
 	cur_wqe_counter = rte_be_to_cpu_16(last_word.wqe_counter);
 	comp = cur_wqe_counter + (uint16_t)1 - next_wqe_counter;
 	if (comp) {
@@ -230,7 +194,7 @@
 			cq->errors++;
 		rte_io_wmb();
 		/* Ring CQ doorbell record. */
-		cq->db_rec[0] = rte_cpu_to_be_32(cq->cq_ci);
+		cq->cq_obj.db_rec[0] = rte_cpu_to_be_32(cq->cq_ci);
 		rte_io_wmb();
 		/* Ring SW QP doorbell record. */
 		eqp->db_rec[0] = rte_cpu_to_be_32(cq->cq_ci + cq_size);
@@ -246,7 +210,7 @@
 
 	for (i = 0; i < priv->nr_virtqs; i++) {
 		cq = &priv->virtqs[i].eqp.cq;
-		if (cq->cq && !cq->armed)
+		if (cq->cq_obj.cq && !cq->armed)
 			mlx5_vdpa_cq_arm(priv, cq);
 	}
 }
@@ -291,7 +255,7 @@
 		pthread_mutex_lock(&priv->vq_config_lock);
 		for (i = 0; i < priv->nr_virtqs; i++) {
 			cq = &priv->virtqs[i].eqp.cq;
-			if (cq->cq && !cq->armed) {
+			if (cq->cq_obj.cq && !cq->armed) {
 				uint32_t comp = mlx5_vdpa_cq_poll(cq);
 
 				if (comp) {
@@ -370,7 +334,7 @@
 		DRV_LOG(DEBUG, "Device %s virtq %d cq %d event was captured."
 			" Timer is %s, cq ci is %u.\n",
 			priv->vdev->device->name,
-			(int)virtq->index, cq->cq->id,
+			(int)virtq->index, cq->cq_obj.cq->id,
 			priv->timer_on ? "on" : "off", cq->cq_ci);
 		cq->armed = 0;
 	}
@@ -680,7 +644,7 @@
 		goto error;
 	}
 	attr.uar_index = priv->uar->page_id;
-	attr.cqn = eqp->cq.cq->id;
+	attr.cqn = eqp->cq.cq_obj.cq->id;
 	attr.log_page_size = rte_log2_u32(sysconf(_SC_PAGESIZE));
 	attr.rq_size = 1 << log_desc_n;
 	attr.log_rq_stride = rte_log2_u32(MLX5_WSEG_SIZE);
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
index 3e882e4..cc77314 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
@@ -497,7 +497,7 @@
 		return -1;
 	if (vq.size != virtq->vq_size || vq.kickfd != virtq->intr_handle.fd)
 		return 1;
-	if (virtq->eqp.cq.cq) {
+	if (virtq->eqp.cq.cq_obj.cq) {
 		if (vq.callfd != virtq->eqp.cq.callfd)
 			return 1;
 	} else if (vq.callfd != -1) {
-- 
1.8.3.1


^ permalink raw reply	[flat|nested] 57+ messages in thread

* [dpdk-dev] [PATCH v3 07/19] net/mlx5: move rearm and clock queue CQ creation to common
  2021-01-06  8:19       ` [dpdk-dev] [PATCH v3 00/19] common/mlx5: share DevX resources creations Michael Baum
                           ` (5 preceding siblings ...)
  2021-01-06  8:19         ` [dpdk-dev] [PATCH v3 06/19] vdpa/mlx5: " Michael Baum
@ 2021-01-06  8:19         ` Michael Baum
  2021-01-06  8:19         ` [dpdk-dev] [PATCH v3 08/19] net/mlx5: move ASO " Michael Baum
                           ` (12 subsequent siblings)
  19 siblings, 0 replies; 57+ messages in thread
From: Michael Baum @ 2021-01-06  8:19 UTC (permalink / raw)
  To: dev; +Cc: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko

Use the common function for CQ creation of the rearm and clock queues.

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
 drivers/net/mlx5/mlx5.h      |   9 +--
 drivers/net/mlx5/mlx5_rxtx.c |   2 +-
 drivers/net/mlx5/mlx5_txpp.c | 139 ++++++++++---------------------------------
 3 files changed, 36 insertions(+), 114 deletions(-)

diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 92a5d04..ba2a8c4 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -26,6 +26,7 @@
 #include <mlx5_prm.h>
 #include <mlx5_common_mp.h>
 #include <mlx5_common_mr.h>
+#include <mlx5_common_devx.h>
 
 #include "mlx5_defs.h"
 #include "mlx5_utils.h"
@@ -616,13 +617,7 @@ struct mlx5_flow_id_pool {
 /* Tx pacing queue structure - for Clock and Rearm queues. */
 struct mlx5_txpp_wq {
 	/* Completion Queue related data.*/
-	struct mlx5_devx_obj *cq;
-	void *cq_umem;
-	union {
-		volatile void *cq_buf;
-		volatile struct mlx5_cqe *cqes;
-	};
-	volatile uint32_t *cq_dbrec;
+	struct mlx5_devx_cq cq_obj;
 	uint32_t cq_ci:24;
 	uint32_t arm_sn:2;
 	/* Send Queue related data.*/
diff --git a/drivers/net/mlx5/mlx5_rxtx.c b/drivers/net/mlx5/mlx5_rxtx.c
index 65a1f99..3497765 100644
--- a/drivers/net/mlx5/mlx5_rxtx.c
+++ b/drivers/net/mlx5/mlx5_rxtx.c
@@ -2325,7 +2325,7 @@ enum mlx5_txcmp_code {
 
 	qs = RTE_PTR_ADD(wqe, MLX5_WSEG_SIZE);
 	qs->max_index = rte_cpu_to_be_32(wci);
-	qs->qpn_cqn = rte_cpu_to_be_32(txq->sh->txpp.clock_queue.cq->id);
+	qs->qpn_cqn = rte_cpu_to_be_32(txq->sh->txpp.clock_queue.cq_obj.cq->id);
 	qs->reserved0 = RTE_BE32(0);
 	qs->reserved1 = RTE_BE32(0);
 }
diff --git a/drivers/net/mlx5/mlx5_txpp.c b/drivers/net/mlx5/mlx5_txpp.c
index e998de3..bd679c2 100644
--- a/drivers/net/mlx5/mlx5_txpp.c
+++ b/drivers/net/mlx5/mlx5_txpp.c
@@ -13,6 +13,7 @@
 #include <rte_eal_paging.h>
 
 #include <mlx5_malloc.h>
+#include <mlx5_common_devx.h>
 
 #include "mlx5.h"
 #include "mlx5_rxtx.h"
@@ -134,12 +135,7 @@
 		claim_zero(mlx5_os_umem_dereg(wq->sq_umem));
 	if (wq->sq_buf)
 		mlx5_free((void *)(uintptr_t)wq->sq_buf);
-	if (wq->cq)
-		claim_zero(mlx5_devx_cmd_destroy(wq->cq));
-	if (wq->cq_umem)
-		claim_zero(mlx5_os_umem_dereg(wq->cq_umem));
-	if (wq->cq_buf)
-		mlx5_free((void *)(uintptr_t)wq->cq_buf);
+	mlx5_devx_cq_destroy(&wq->cq_obj);
 	memset(wq, 0, sizeof(*wq));
 }
 
@@ -189,19 +185,6 @@
 }
 
 static void
-mlx5_txpp_fill_cqe_rearm_queue(struct mlx5_dev_ctx_shared *sh)
-{
-	struct mlx5_txpp_wq *wq = &sh->txpp.rearm_queue;
-	struct mlx5_cqe *cqe = (struct mlx5_cqe *)(uintptr_t)wq->cqes;
-	uint32_t i;
-
-	for (i = 0; i < MLX5_TXPP_REARM_CQ_SIZE; i++) {
-		cqe->op_own = (MLX5_CQE_INVALID << 4) | MLX5_CQE_OWNER_MASK;
-		++cqe;
-	}
-}
-
-static void
 mlx5_txpp_fill_wqe_rearm_queue(struct mlx5_dev_ctx_shared *sh)
 {
 	struct mlx5_txpp_wq *wq = &sh->txpp.rearm_queue;
@@ -236,7 +219,8 @@
 		index = (i * MLX5_TXPP_REARM / 2 + MLX5_TXPP_REARM / 2) &
 			((1 << MLX5_CQ_INDEX_WIDTH) - 1);
 		qs->max_index = rte_cpu_to_be_32(index);
-		qs->qpn_cqn = rte_cpu_to_be_32(sh->txpp.clock_queue.cq->id);
+		qs->qpn_cqn =
+			   rte_cpu_to_be_32(sh->txpp.clock_queue.cq_obj.cq->id);
 	}
 }
 
@@ -246,7 +230,9 @@
 {
 	struct mlx5_devx_create_sq_attr sq_attr = { 0 };
 	struct mlx5_devx_modify_sq_attr msq_attr = { 0 };
-	struct mlx5_devx_cq_attr cq_attr = { 0 };
+	struct mlx5_devx_cq_attr cq_attr = {
+		.uar_page_id = mlx5_os_get_devx_uar_page_id(sh->tx_uar),
+	};
 	struct mlx5_txpp_wq *wq = &sh->txpp.rearm_queue;
 	size_t page_size;
 	uint32_t umem_size, umem_dbrec;
@@ -257,48 +243,16 @@
 		DRV_LOG(ERR, "Failed to get mem page size");
 		return -ENOMEM;
 	}
-	/* Allocate memory buffer for CQEs and doorbell record. */
-	umem_size = sizeof(struct mlx5_cqe) * MLX5_TXPP_REARM_CQ_SIZE;
-	umem_dbrec = RTE_ALIGN(umem_size, MLX5_DBR_SIZE);
-	umem_size += MLX5_DBR_SIZE;
-	wq->cq_buf = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO, umem_size,
-				 page_size, sh->numa_node);
-	if (!wq->cq_buf) {
-		DRV_LOG(ERR, "Failed to allocate memory for Rearm Queue.");
-		return -ENOMEM;
-	}
-	/* Register allocated buffer in user space with DevX. */
-	wq->cq_umem = mlx5_os_umem_reg(sh->ctx,
-					       (void *)(uintptr_t)wq->cq_buf,
-					       umem_size,
-					       IBV_ACCESS_LOCAL_WRITE);
-	if (!wq->cq_umem) {
-		rte_errno = errno;
-		DRV_LOG(ERR, "Failed to register umem for Rearm Queue.");
-		goto error;
-	}
 	/* Create completion queue object for Rearm Queue. */
-	cq_attr.uar_page_id = mlx5_os_get_devx_uar_page_id(sh->tx_uar);
-	cq_attr.eqn = sh->eqn;
-	cq_attr.q_umem_valid = 1;
-	cq_attr.q_umem_offset = 0;
-	cq_attr.q_umem_id = mlx5_os_get_umem_id(wq->cq_umem);
-	cq_attr.db_umem_valid = 1;
-	cq_attr.db_umem_offset = umem_dbrec;
-	cq_attr.db_umem_id = mlx5_os_get_umem_id(wq->cq_umem);
-	cq_attr.log_cq_size = rte_log2_u32(MLX5_TXPP_REARM_CQ_SIZE);
-	cq_attr.log_page_size = rte_log2_u32(page_size);
-	wq->cq = mlx5_devx_cmd_create_cq(sh->ctx, &cq_attr);
-	if (!wq->cq) {
-		rte_errno = errno;
+	ret = mlx5_devx_cq_create(sh->ctx, &wq->cq_obj,
+				  log2above(MLX5_TXPP_REARM_CQ_SIZE), &cq_attr,
+				  sh->numa_node);
+	if (ret) {
 		DRV_LOG(ERR, "Failed to create CQ for Rearm Queue.");
-		goto error;
+		return ret;
 	}
-	wq->cq_dbrec = RTE_PTR_ADD(wq->cq_buf, umem_dbrec);
 	wq->cq_ci = 0;
 	wq->arm_sn = 0;
-	/* Mark all CQEs initially as invalid. */
-	mlx5_txpp_fill_cqe_rearm_queue(sh);
 	/*
 	 * Allocate memory buffer for Send Queue WQEs.
 	 * There should be no WQE leftovers in the cyclic queue.
@@ -329,7 +283,7 @@
 	sq_attr.state = MLX5_SQC_STATE_RST;
 	sq_attr.tis_lst_sz = 1;
 	sq_attr.tis_num = sh->tis->id;
-	sq_attr.cqn = wq->cq->id;
+	sq_attr.cqn = wq->cq_obj.cq->id;
 	sq_attr.cd_master = 1;
 	sq_attr.wq_attr.uar_page = mlx5_os_get_devx_uar_page_id(sh->tx_uar);
 	sq_attr.wq_attr.wq_type = MLX5_WQ_TYPE_CYCLIC;
@@ -472,7 +426,11 @@
 {
 	struct mlx5_devx_create_sq_attr sq_attr = { 0 };
 	struct mlx5_devx_modify_sq_attr msq_attr = { 0 };
-	struct mlx5_devx_cq_attr cq_attr = { 0 };
+	struct mlx5_devx_cq_attr cq_attr = {
+		.use_first_only = 1,
+		.overrun_ignore = 1,
+		.uar_page_id = mlx5_os_get_devx_uar_page_id(sh->tx_uar),
+	};
 	struct mlx5_txpp_wq *wq = &sh->txpp.clock_queue;
 	size_t page_size;
 	uint32_t umem_size, umem_dbrec;
@@ -493,46 +451,14 @@
 	}
 	sh->txpp.ts_p = 0;
 	sh->txpp.ts_n = 0;
-	/* Allocate memory buffer for CQEs and doorbell record. */
-	umem_size = sizeof(struct mlx5_cqe) * MLX5_TXPP_CLKQ_SIZE;
-	umem_dbrec = RTE_ALIGN(umem_size, MLX5_DBR_SIZE);
-	umem_size += MLX5_DBR_SIZE;
-	wq->cq_buf = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO, umem_size,
-					page_size, sh->numa_node);
-	if (!wq->cq_buf) {
-		DRV_LOG(ERR, "Failed to allocate memory for Clock Queue.");
-		return -ENOMEM;
-	}
-	/* Register allocated buffer in user space with DevX. */
-	wq->cq_umem = mlx5_os_umem_reg(sh->ctx,
-					       (void *)(uintptr_t)wq->cq_buf,
-					       umem_size,
-					       IBV_ACCESS_LOCAL_WRITE);
-	if (!wq->cq_umem) {
-		rte_errno = errno;
-		DRV_LOG(ERR, "Failed to register umem for Clock Queue.");
-		goto error;
-	}
 	/* Create completion queue object for Clock Queue. */
-	cq_attr.use_first_only = 1;
-	cq_attr.overrun_ignore = 1;
-	cq_attr.uar_page_id = mlx5_os_get_devx_uar_page_id(sh->tx_uar);
-	cq_attr.eqn = sh->eqn;
-	cq_attr.q_umem_valid = 1;
-	cq_attr.q_umem_offset = 0;
-	cq_attr.q_umem_id = mlx5_os_get_umem_id(wq->cq_umem);
-	cq_attr.db_umem_valid = 1;
-	cq_attr.db_umem_offset = umem_dbrec;
-	cq_attr.db_umem_id = mlx5_os_get_umem_id(wq->cq_umem);
-	cq_attr.log_cq_size = rte_log2_u32(MLX5_TXPP_CLKQ_SIZE);
-	cq_attr.log_page_size = rte_log2_u32(page_size);
-	wq->cq = mlx5_devx_cmd_create_cq(sh->ctx, &cq_attr);
-	if (!wq->cq) {
-		rte_errno = errno;
+	ret = mlx5_devx_cq_create(sh->ctx, &wq->cq_obj,
+				  log2above(MLX5_TXPP_CLKQ_SIZE), &cq_attr,
+				  sh->numa_node);
+	if (ret) {
 		DRV_LOG(ERR, "Failed to create CQ for Clock Queue.");
 		goto error;
 	}
-	wq->cq_dbrec = RTE_PTR_ADD(wq->cq_buf, umem_dbrec);
 	wq->cq_ci = 0;
 	/* Allocate memory buffer for Send Queue WQEs. */
 	if (sh->txpp.test) {
@@ -578,7 +504,7 @@
 		sq_attr.static_sq_wq = 1;
 	}
 	sq_attr.state = MLX5_SQC_STATE_RST;
-	sq_attr.cqn = wq->cq->id;
+	sq_attr.cqn = wq->cq_obj.cq->id;
 	sq_attr.packet_pacing_rate_limit_index = sh->txpp.pp_id;
 	sq_attr.wq_attr.cd_slave = 1;
 	sq_attr.wq_attr.uar_page = mlx5_os_get_devx_uar_page_id(sh->tx_uar);
@@ -629,12 +555,13 @@
 	struct mlx5_txpp_wq *aq = &sh->txpp.rearm_queue;
 	uint32_t arm_sn = aq->arm_sn << MLX5_CQ_SQN_OFFSET;
 	uint32_t db_hi = arm_sn | MLX5_CQ_DBR_CMD_ALL | aq->cq_ci;
-	uint64_t db_be = rte_cpu_to_be_64(((uint64_t)db_hi << 32) | aq->cq->id);
+	uint64_t db_be =
+		rte_cpu_to_be_64(((uint64_t)db_hi << 32) | aq->cq_obj.cq->id);
 	base_addr = mlx5_os_get_devx_uar_base_addr(sh->tx_uar);
 	uint32_t *addr = RTE_PTR_ADD(base_addr, MLX5_CQ_DOORBELL);
 
 	rte_compiler_barrier();
-	aq->cq_dbrec[MLX5_CQ_ARM_DB] = rte_cpu_to_be_32(db_hi);
+	aq->cq_obj.db_rec[MLX5_CQ_ARM_DB] = rte_cpu_to_be_32(db_hi);
 	rte_wmb();
 #ifdef RTE_ARCH_64
 	*(uint64_t *)addr = db_be;
@@ -732,7 +659,7 @@
 mlx5_txpp_update_timestamp(struct mlx5_dev_ctx_shared *sh)
 {
 	struct mlx5_txpp_wq *wq = &sh->txpp.clock_queue;
-	struct mlx5_cqe *cqe = (struct mlx5_cqe *)(uintptr_t)wq->cqes;
+	struct mlx5_cqe *cqe = (struct mlx5_cqe *)(uintptr_t)wq->cq_obj.cqes;
 	union {
 		rte_int128_t u128;
 		struct mlx5_cqe_ts cts;
@@ -807,7 +734,7 @@
 	do {
 		volatile struct mlx5_cqe *cqe;
 
-		cqe = &wq->cqes[cq_ci & (MLX5_TXPP_REARM_CQ_SIZE - 1)];
+		cqe = &wq->cq_obj.cqes[cq_ci & (MLX5_TXPP_REARM_CQ_SIZE - 1)];
 		ret = check_cqe(cqe, MLX5_TXPP_REARM_CQ_SIZE, cq_ci);
 		switch (ret) {
 		case MLX5_CQE_STATUS_ERR:
@@ -839,7 +766,7 @@
 		}
 		/* Update doorbell record to notify hardware. */
 		rte_compiler_barrier();
-		*wq->cq_dbrec = rte_cpu_to_be_32(cq_ci);
+		*wq->cq_obj.db_rec = rte_cpu_to_be_32(cq_ci);
 		rte_wmb();
 		wq->cq_ci = cq_ci;
 		/* Fire new requests to Rearm Queue. */
@@ -934,9 +861,8 @@
 	}
 	/* Subscribe CQ event to the event channel controlled by the driver. */
 	ret = mlx5_os_devx_subscribe_devx_event(sh->txpp.echan,
-						   sh->txpp.rearm_queue.cq->obj,
-						   sizeof(event_nums),
-						   event_nums, 0);
+					    sh->txpp.rearm_queue.cq_obj.cq->obj,
+					     sizeof(event_nums), event_nums, 0);
 	if (ret) {
 		DRV_LOG(ERR, "Failed to subscribe CQE event.");
 		rte_errno = errno;
@@ -1138,7 +1064,8 @@
 
 	if (sh->txpp.refcnt) {
 		struct mlx5_txpp_wq *wq = &sh->txpp.clock_queue;
-		struct mlx5_cqe *cqe = (struct mlx5_cqe *)(uintptr_t)wq->cqes;
+		struct mlx5_cqe *cqe =
+				(struct mlx5_cqe *)(uintptr_t)wq->cq_obj.cqes;
 		union {
 			rte_int128_t u128;
 			struct mlx5_cqe_ts cts;
-- 
1.8.3.1


^ permalink raw reply	[flat|nested] 57+ messages in thread

* [dpdk-dev] [PATCH v3 08/19] net/mlx5: move ASO CQ creation to common
  2021-01-06  8:19       ` [dpdk-dev] [PATCH v3 00/19] common/mlx5: share DevX resources creations Michael Baum
                           ` (6 preceding siblings ...)
  2021-01-06  8:19         ` [dpdk-dev] [PATCH v3 07/19] net/mlx5: move rearm and clock queue " Michael Baum
@ 2021-01-06  8:19         ` Michael Baum
  2021-01-06  8:19         ` [dpdk-dev] [PATCH v3 09/19] net/mlx5: move Tx " Michael Baum
                           ` (11 subsequent siblings)
  19 siblings, 0 replies; 57+ messages in thread
From: Michael Baum @ 2021-01-06  8:19 UTC (permalink / raw)
  To: dev; +Cc: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko

Use the common function for ASO CQ creation.

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
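For reference, the consolidated call pattern boils down to the sketch
below (an illustrative fragment, assuming a DevX context ctx, an opened
UAR uar and a CQE-count log log_desc_n, as used in this driver):

	struct mlx5_devx_cq_attr attr = {
		.uar_page_id = mlx5_os_get_devx_uar_page_id(uar),
	};
	struct mlx5_devx_cq cq_obj;

	/* One call allocates the CQE buffer, registers the umem,
	 * carves out the doorbell record and creates the CQ object.
	 */
	if (mlx5_devx_cq_create(ctx, &cq_obj, log_desc_n, &attr,
				SOCKET_ID_ANY))
		return -rte_errno;
	/* Use cq_obj.cqes, cq_obj.db_rec and cq_obj.cq->id, then: */
	mlx5_devx_cq_destroy(&cq_obj);
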
 drivers/net/mlx5/mlx5.h          |  8 +---
 drivers/net/mlx5/mlx5_flow_age.c | 82 +++++++++-------------------------------
 2 files changed, 19 insertions(+), 71 deletions(-)

diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index ba2a8c4..f889180 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -467,13 +467,7 @@ struct mlx5_flow_counter_mng {
 struct mlx5_aso_cq {
 	uint16_t log_desc_n;
 	uint32_t cq_ci:24;
-	struct mlx5_devx_obj *cq;
-	struct mlx5dv_devx_umem *umem_obj;
-	union {
-		volatile void *umem_buf;
-		volatile struct mlx5_cqe *cqes;
-	};
-	volatile uint32_t *db_rec;
+	struct mlx5_devx_cq cq_obj;
 	uint64_t errors;
 };
 
diff --git a/drivers/net/mlx5/mlx5_flow_age.c b/drivers/net/mlx5/mlx5_flow_age.c
index e867607..a75adc8 100644
--- a/drivers/net/mlx5/mlx5_flow_age.c
+++ b/drivers/net/mlx5/mlx5_flow_age.c
@@ -8,10 +8,12 @@
 
 #include <mlx5_malloc.h>
 #include <mlx5_common_os.h>
+#include <mlx5_common_devx.h>
 
 #include "mlx5.h"
 #include "mlx5_flow.h"
 
+
 /**
  * Destroy Completion Queue used for ASO access.
  *
@@ -21,12 +23,8 @@
 static void
 mlx5_aso_cq_destroy(struct mlx5_aso_cq *cq)
 {
-	if (cq->cq)
-		claim_zero(mlx5_devx_cmd_destroy(cq->cq));
-	if (cq->umem_obj)
-		claim_zero(mlx5_glue->devx_umem_dereg(cq->umem_obj));
-	if (cq->umem_buf)
-		mlx5_free((void *)(uintptr_t)cq->umem_buf);
+	if (cq->cq_obj.cq)
+		mlx5_devx_cq_destroy(&cq->cq_obj);
 	memset(cq, 0, sizeof(*cq));
 }
 
@@ -43,60 +41,21 @@
  *   Socket to use for allocation.
  * @param[in] uar_page_id
  *   UAR page ID to use.
- * @param[in] eqn
- *   EQ number.
  *
  * @return
  *   0 on success, a negative errno value otherwise and rte_errno is set.
  */
 static int
 mlx5_aso_cq_create(void *ctx, struct mlx5_aso_cq *cq, uint16_t log_desc_n,
-		   int socket, int uar_page_id, uint32_t eqn)
+		   int socket, int uar_page_id)
 {
-	struct mlx5_devx_cq_attr attr = { 0 };
-	size_t pgsize = rte_mem_page_size();
-	uint32_t umem_size;
-	uint16_t cq_size = 1 << log_desc_n;
+	struct mlx5_devx_cq_attr attr = {
+		.uar_page_id = uar_page_id,
+	};
 
 	cq->log_desc_n = log_desc_n;
-	umem_size = sizeof(struct mlx5_cqe) * cq_size + sizeof(*cq->db_rec) * 2;
-	cq->umem_buf = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO, umem_size,
-				   4096, socket);
-	if (!cq->umem_buf) {
-		DRV_LOG(ERR, "Failed to allocate memory for CQ.");
-		rte_errno = ENOMEM;
-		return -ENOMEM;
-	}
-	cq->umem_obj = mlx5_os_umem_reg(ctx,
-						(void *)(uintptr_t)cq->umem_buf,
-						umem_size,
-						IBV_ACCESS_LOCAL_WRITE);
-	if (!cq->umem_obj) {
-		DRV_LOG(ERR, "Failed to register umem for aso CQ.");
-		goto error;
-	}
-	attr.q_umem_valid = 1;
-	attr.db_umem_valid = 1;
-	attr.use_first_only = 0;
-	attr.overrun_ignore = 0;
-	attr.uar_page_id = uar_page_id;
-	attr.q_umem_id = mlx5_os_get_umem_id(cq->umem_obj);
-	attr.q_umem_offset = 0;
-	attr.db_umem_id = attr.q_umem_id;
-	attr.db_umem_offset = sizeof(struct mlx5_cqe) * cq_size;
-	attr.eqn = eqn;
-	attr.log_cq_size = log_desc_n;
-	attr.log_page_size = rte_log2_u32(pgsize);
-	cq->cq = mlx5_devx_cmd_create_cq(ctx, &attr);
-	if (!cq->cq)
-		goto error;
-	cq->db_rec = RTE_PTR_ADD(cq->umem_buf, (uintptr_t)attr.db_umem_offset);
 	cq->cq_ci = 0;
-	memset((void *)(uintptr_t)cq->umem_buf, 0xFF, attr.db_umem_offset);
-	return 0;
-error:
-	mlx5_aso_cq_destroy(cq);
-	return -1;
+	return mlx5_devx_cq_create(ctx, &cq->cq_obj, log_desc_n, &attr, socket);
 }
 
 /**
@@ -195,8 +154,7 @@
 		mlx5_devx_cmd_destroy(sq->sq);
 		sq->sq = NULL;
 	}
-	if (sq->cq.cq)
-		mlx5_aso_cq_destroy(&sq->cq);
+	mlx5_aso_cq_destroy(&sq->cq);
 	mlx5_aso_devx_dereg_mr(&sq->mr);
 	memset(sq, 0, sizeof(*sq));
 }
@@ -247,8 +205,6 @@
  *   User Access Region object.
  * @param[in] pdn
  *   Protection Domain number to use.
- * @param[in] eqn
- *   EQ number.
  * @param[in] log_desc_n
  *   Log of number of descriptors in queue.
  *
@@ -257,8 +213,7 @@
  */
 static int
 mlx5_aso_sq_create(void *ctx, struct mlx5_aso_sq *sq, int socket,
-		   void *uar, uint32_t pdn,
-		   uint32_t eqn,  uint16_t log_desc_n)
+		   void *uar, uint32_t pdn,  uint16_t log_desc_n)
 {
 	struct mlx5_devx_create_sq_attr attr = { 0 };
 	struct mlx5_devx_modify_sq_attr modify_attr = { 0 };
@@ -272,7 +227,7 @@
 				 sq_desc_n, &sq->mr, socket, pdn))
 		return -1;
 	if (mlx5_aso_cq_create(ctx, &sq->cq, log_desc_n, socket,
-				mlx5_os_get_devx_uar_page_id(uar), eqn))
+			       mlx5_os_get_devx_uar_page_id(uar)))
 		goto error;
 	sq->log_desc_n = log_desc_n;
 	sq->umem_buf = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO, wq_size +
@@ -296,7 +251,7 @@
 	attr.tis_lst_sz = 0;
 	attr.tis_num = 0;
 	attr.user_index = 0xFFFF;
-	attr.cqn = sq->cq.cq->id;
+	attr.cqn = sq->cq.cq_obj.cq->id;
 	wq_attr->uar_page = mlx5_os_get_devx_uar_page_id(uar);
 	wq_attr->pd = pdn;
 	wq_attr->wq_type = MLX5_WQ_TYPE_CYCLIC;
@@ -348,8 +303,7 @@
 mlx5_aso_queue_init(struct mlx5_dev_ctx_shared *sh)
 {
 	return mlx5_aso_sq_create(sh->ctx, &sh->aso_age_mng->aso_sq, 0,
-				  sh->tx_uar, sh->pdn, sh->eqn,
-				  MLX5_ASO_QUEUE_LOG_DESC);
+				  sh->tx_uar, sh->pdn, MLX5_ASO_QUEUE_LOG_DESC);
 }
 
 /**
@@ -459,7 +413,7 @@
 	struct mlx5_aso_cq *cq = &sq->cq;
 	uint32_t idx = cq->cq_ci & ((1 << cq->log_desc_n) - 1);
 	volatile struct mlx5_err_cqe *cqe =
-				(volatile struct mlx5_err_cqe *)&cq->cqes[idx];
+			(volatile struct mlx5_err_cqe *)&cq->cq_obj.cqes[idx];
 
 	cq->errors++;
 	idx = rte_be_to_cpu_16(cqe->wqe_counter) & (1u << sq->log_desc_n);
@@ -572,8 +526,8 @@
 	do {
 		idx = next_idx;
 		next_idx = (cq->cq_ci + 1) & mask;
-		rte_prefetch0(&cq->cqes[next_idx]);
-		cqe = &cq->cqes[idx];
+		rte_prefetch0(&cq->cq_obj.cqes[next_idx]);
+		cqe = &cq->cq_obj.cqes[idx];
 		ret = check_cqe(cqe, cq_size, cq->cq_ci);
 		/*
 		 * Be sure owner read is done before any other cookie field or
@@ -593,7 +547,7 @@
 		mlx5_aso_age_action_update(sh, i);
 		sq->tail += i;
 		rte_io_wmb();
-		cq->db_rec[0] = rte_cpu_to_be_32(cq->cq_ci);
+		cq->cq_obj.db_rec[0] = rte_cpu_to_be_32(cq->cq_ci);
 	}
 	return i;
 }
-- 
1.8.3.1


^ permalink raw reply	[flat|nested] 57+ messages in thread

* [dpdk-dev] [PATCH v3 09/19] net/mlx5: move Tx CQ creation to common
  2021-01-06  8:19       ` [dpdk-dev] [PATCH v3 00/19] common/mlx5: share DevX resources creations Michael Baum
                           ` (7 preceding siblings ...)
  2021-01-06  8:19         ` [dpdk-dev] [PATCH v3 08/19] net/mlx5: move ASO " Michael Baum
@ 2021-01-06  8:19         ` Michael Baum
  2021-01-06  8:19         ` [dpdk-dev] [PATCH v3 10/19] net/mlx5: move Rx " Michael Baum
                           ` (10 subsequent siblings)
  19 siblings, 0 replies; 57+ messages in thread
From: Michael Baum @ 2021-01-06  8:19 UTC (permalink / raw)
  To: dev; +Cc: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko

Use the common function for Tx CQ creation.

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
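The CQ sizing now happens inline in the Tx queue object creation before
the common helper is called; roughly (a sketch with txq_data and the
constants as defined in this driver):

	uint32_t cqe_n, log_desc_n;

	/* One CQE per completion batch, plus room for inline partial
	 * completions, rounded up to a power of two.
	 */
	cqe_n = (1UL << txq_data->elts_n) / MLX5_TX_COMP_THRESH +
		1 + MLX5_TX_COMP_THRESH_INLINE_DIV;
	log_desc_n = log2above(cqe_n);
	cqe_n = 1UL << log_desc_n;
	/* The CQ size field is 16 bits wide, larger requests fail. */
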
 drivers/net/mlx5/mlx5.h      |   6 +-
 drivers/net/mlx5/mlx5_devx.c | 178 +++++++------------------------------------
 2 files changed, 29 insertions(+), 155 deletions(-)

diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index f889180..e61154b 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -847,11 +847,7 @@ struct mlx5_txq_obj {
 		};
 		struct {
 			struct rte_eth_dev *dev;
-			struct mlx5_devx_obj *cq_devx;
-			void *cq_umem;
-			void *cq_buf;
-			int64_t cq_dbrec_offset;
-			struct mlx5_devx_dbr_page *cq_dbrec_page;
+			struct mlx5_devx_cq cq_obj;
 			struct mlx5_devx_obj *sq_devx;
 			void *sq_umem;
 			void *sq_buf;
diff --git a/drivers/net/mlx5/mlx5_devx.c b/drivers/net/mlx5/mlx5_devx.c
index 5c5bea6..af0383c 100644
--- a/drivers/net/mlx5/mlx5_devx.c
+++ b/drivers/net/mlx5/mlx5_devx.c
@@ -15,6 +15,7 @@
 
 #include <mlx5_glue.h>
 #include <mlx5_devx_cmds.h>
+#include <mlx5_common_devx.h>
 #include <mlx5_malloc.h>
 
 #include "mlx5.h"
@@ -1141,28 +1142,6 @@
 }
 
 /**
- * Release DevX Tx CQ resources.
- *
- * @param txq_obj
- *   DevX Tx queue object.
- */
-static void
-mlx5_txq_release_devx_cq_resources(struct mlx5_txq_obj *txq_obj)
-{
-	if (txq_obj->cq_devx)
-		claim_zero(mlx5_devx_cmd_destroy(txq_obj->cq_devx));
-	if (txq_obj->cq_umem)
-		claim_zero(mlx5_os_umem_dereg(txq_obj->cq_umem));
-	if (txq_obj->cq_buf)
-		mlx5_free(txq_obj->cq_buf);
-	if (txq_obj->cq_dbrec_page)
-		claim_zero(mlx5_release_dbr(&txq_obj->txq_ctrl->priv->dbrpgs,
-					    mlx5_os_get_umem_id
-						 (txq_obj->cq_dbrec_page->umem),
-					    txq_obj->cq_dbrec_offset));
-}
-
-/**
  * Destroy the Tx queue DevX object.
  *
  * @param txq_obj
@@ -1172,124 +1151,8 @@
 mlx5_txq_release_devx_resources(struct mlx5_txq_obj *txq_obj)
 {
 	mlx5_txq_release_devx_sq_resources(txq_obj);
-	mlx5_txq_release_devx_cq_resources(txq_obj);
-}
-
-/**
- * Create a DevX CQ object and its resources for an Tx queue.
- *
- * @param dev
- *   Pointer to Ethernet device.
- * @param idx
- *   Queue index in DPDK Tx queue array.
- *
- * @return
- *   Number of CQEs in CQ, 0 otherwise and rte_errno is set.
- */
-static uint32_t
-mlx5_txq_create_devx_cq_resources(struct rte_eth_dev *dev, uint16_t idx)
-{
-	struct mlx5_priv *priv = dev->data->dev_private;
-	struct mlx5_txq_data *txq_data = (*priv->txqs)[idx];
-	struct mlx5_txq_ctrl *txq_ctrl =
-			container_of(txq_data, struct mlx5_txq_ctrl, txq);
-	struct mlx5_txq_obj *txq_obj = txq_ctrl->obj;
-	struct mlx5_devx_cq_attr cq_attr = { 0 };
-	struct mlx5_cqe *cqe;
-	size_t page_size;
-	size_t alignment;
-	uint32_t cqe_n;
-	uint32_t i;
-	int ret;
-
-	MLX5_ASSERT(txq_data);
-	MLX5_ASSERT(txq_obj);
-	page_size = rte_mem_page_size();
-	if (page_size == (size_t)-1) {
-		DRV_LOG(ERR, "Failed to get mem page size.");
-		rte_errno = ENOMEM;
-		return 0;
-	}
-	/* Allocate memory buffer for CQEs. */
-	alignment = MLX5_CQE_BUF_ALIGNMENT;
-	if (alignment == (size_t)-1) {
-		DRV_LOG(ERR, "Failed to get CQE buf alignment.");
-		rte_errno = ENOMEM;
-		return 0;
-	}
-	/* Create the Completion Queue. */
-	cqe_n = (1UL << txq_data->elts_n) / MLX5_TX_COMP_THRESH +
-		1 + MLX5_TX_COMP_THRESH_INLINE_DIV;
-	cqe_n = 1UL << log2above(cqe_n);
-	if (cqe_n > UINT16_MAX) {
-		DRV_LOG(ERR,
-			"Port %u Tx queue %u requests to many CQEs %u.",
-			dev->data->port_id, txq_data->idx, cqe_n);
-		rte_errno = EINVAL;
-		return 0;
-	}
-	txq_obj->cq_buf = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO,
-				      cqe_n * sizeof(struct mlx5_cqe),
-				      alignment,
-				      priv->sh->numa_node);
-	if (!txq_obj->cq_buf) {
-		DRV_LOG(ERR,
-			"Port %u Tx queue %u cannot allocate memory (CQ).",
-			dev->data->port_id, txq_data->idx);
-		rte_errno = ENOMEM;
-		return 0;
-	}
-	/* Register allocated buffer in user space with DevX. */
-	txq_obj->cq_umem = mlx5_os_umem_reg(priv->sh->ctx,
-						(void *)txq_obj->cq_buf,
-						cqe_n * sizeof(struct mlx5_cqe),
-						IBV_ACCESS_LOCAL_WRITE);
-	if (!txq_obj->cq_umem) {
-		rte_errno = errno;
-		DRV_LOG(ERR,
-			"Port %u Tx queue %u cannot register memory (CQ).",
-			dev->data->port_id, txq_data->idx);
-		goto error;
-	}
-	/* Allocate doorbell record for completion queue. */
-	txq_obj->cq_dbrec_offset = mlx5_get_dbr(priv->sh->ctx,
-						&priv->dbrpgs,
-						&txq_obj->cq_dbrec_page);
-	if (txq_obj->cq_dbrec_offset < 0) {
-		rte_errno = errno;
-		DRV_LOG(ERR, "Failed to allocate CQ door-bell.");
-		goto error;
-	}
-	cq_attr.uar_page_id = mlx5_os_get_devx_uar_page_id(priv->sh->tx_uar);
-	cq_attr.eqn = priv->sh->eqn;
-	cq_attr.q_umem_valid = 1;
-	cq_attr.q_umem_offset = (uintptr_t)txq_obj->cq_buf % page_size;
-	cq_attr.q_umem_id = mlx5_os_get_umem_id(txq_obj->cq_umem);
-	cq_attr.db_umem_valid = 1;
-	cq_attr.db_umem_offset = txq_obj->cq_dbrec_offset;
-	cq_attr.db_umem_id = mlx5_os_get_umem_id(txq_obj->cq_dbrec_page->umem);
-	cq_attr.log_cq_size = rte_log2_u32(cqe_n);
-	cq_attr.log_page_size = rte_log2_u32(page_size);
-	/* Create completion queue object with DevX. */
-	txq_obj->cq_devx = mlx5_devx_cmd_create_cq(priv->sh->ctx, &cq_attr);
-	if (!txq_obj->cq_devx) {
-		rte_errno = errno;
-		DRV_LOG(ERR, "Port %u Tx queue %u CQ creation failure.",
-			dev->data->port_id, idx);
-		goto error;
-	}
-	/* Initial fill CQ buffer with invalid CQE opcode. */
-	cqe = (struct mlx5_cqe *)txq_obj->cq_buf;
-	for (i = 0; i < cqe_n; i++) {
-		cqe->op_own = (MLX5_CQE_INVALID << 4) | MLX5_CQE_OWNER_MASK;
-		++cqe;
-	}
-	return cqe_n;
-error:
-	ret = rte_errno;
-	mlx5_txq_release_devx_cq_resources(txq_obj);
-	rte_errno = ret;
-	return 0;
+	mlx5_devx_cq_destroy(&txq_obj->cq_obj);
+	memset(&txq_obj->cq_obj, 0, sizeof(txq_obj->cq_obj));
 }
 
 /**
@@ -1361,7 +1224,7 @@
 	sq_attr.tis_lst_sz = 1;
 	sq_attr.tis_num = priv->sh->tis->id;
 	sq_attr.state = MLX5_SQC_STATE_RST;
-	sq_attr.cqn = txq_obj->cq_devx->id;
+	sq_attr.cqn = txq_obj->cq_obj.cq->id;
 	sq_attr.flush_in_error_en = 1;
 	sq_attr.allow_multi_pkt_send_wqe = !!priv->config.mps;
 	sq_attr.allow_swp = !!priv->config.swp;
@@ -1425,8 +1288,11 @@
 #else
 	struct mlx5_dev_ctx_shared *sh = priv->sh;
 	struct mlx5_txq_obj *txq_obj = txq_ctrl->obj;
+	struct mlx5_devx_cq_attr cq_attr = {
+		.uar_page_id = mlx5_os_get_devx_uar_page_id(sh->tx_uar),
+	};
 	void *reg_addr;
-	uint32_t cqe_n;
+	uint32_t cqe_n, log_desc_n;
 	uint32_t wqe_n;
 	int ret = 0;
 
@@ -1434,19 +1300,31 @@
 	MLX5_ASSERT(txq_obj);
 	txq_obj->txq_ctrl = txq_ctrl;
 	txq_obj->dev = dev;
-	cqe_n = mlx5_txq_create_devx_cq_resources(dev, idx);
-	if (!cqe_n) {
-		rte_errno = errno;
+	cqe_n = (1UL << txq_data->elts_n) / MLX5_TX_COMP_THRESH +
+		1 + MLX5_TX_COMP_THRESH_INLINE_DIV;
+	log_desc_n = log2above(cqe_n);
+	cqe_n = 1UL << log_desc_n;
+	if (cqe_n > UINT16_MAX) {
+		DRV_LOG(ERR, "Port %u Tx queue %u requests to many CQEs %u.",
+			dev->data->port_id, txq_data->idx, cqe_n);
+		rte_errno = EINVAL;
+		return 0;
+	}
+	/* Create completion queue object with DevX. */
+	ret = mlx5_devx_cq_create(sh->ctx, &txq_obj->cq_obj, log_desc_n,
+				  &cq_attr, priv->sh->numa_node);
+	if (ret) {
+		DRV_LOG(ERR, "Port %u Tx queue %u CQ creation failure.",
+			dev->data->port_id, idx);
 		goto error;
 	}
-	txq_data->cqe_n = log2above(cqe_n);
-	txq_data->cqe_s = 1 << txq_data->cqe_n;
+	txq_data->cqe_n = log_desc_n;
+	txq_data->cqe_s = cqe_n;
 	txq_data->cqe_m = txq_data->cqe_s - 1;
-	txq_data->cqes = (volatile struct mlx5_cqe *)txq_obj->cq_buf;
+	txq_data->cqes = txq_obj->cq_obj.cqes;
 	txq_data->cq_ci = 0;
 	txq_data->cq_pi = 0;
-	txq_data->cq_db = (volatile uint32_t *)(txq_obj->cq_dbrec_page->dbrs +
-						txq_obj->cq_dbrec_offset);
+	txq_data->cq_db = txq_obj->cq_obj.db_rec;
 	*txq_data->cq_db = 0;
 	/* Create Send Queue object with DevX. */
 	wqe_n = mlx5_txq_create_devx_sq_resources(dev, idx);
-- 
1.8.3.1


^ permalink raw reply	[flat|nested] 57+ messages in thread

* [dpdk-dev] [PATCH v3 10/19] net/mlx5: move Rx CQ creation to common
  2021-01-06  8:19       ` [dpdk-dev] [PATCH v3 00/19] common/mlx5: share DevX resources creations Michael Baum
                           ` (8 preceding siblings ...)
  2021-01-06  8:19         ` [dpdk-dev] [PATCH v3 09/19] net/mlx5: move Tx " Michael Baum
@ 2021-01-06  8:19         ` Michael Baum
  2021-01-06  8:19         ` [dpdk-dev] [PATCH v3 11/19] common/mlx5: enhance page size configuration Michael Baum
                           ` (9 subsequent siblings)
  19 siblings, 0 replies; 57+ messages in thread
From: Michael Baum @ 2021-01-06  8:19 UTC (permalink / raw)
  To: dev; +Cc: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko

Use the common function for Rx CQ creation.

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
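When an event channel is in use, the Rx path now subscribes the common
CQ object and passes its DevX object pointer as the event cookie, e.g.
(fragment taken from the hunk below):

	ret = mlx5_os_devx_subscribe_devx_event
					(rxq_ctrl->obj->devx_channel,
					 cq_obj->cq->obj, sizeof(event_nums),
					 event_nums,
					 (uint64_t)(uintptr_t)cq_obj->cq);

The interrupt handler then matches out.event_resp.cookie against the
same cq_obj.cq pointer.
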
 drivers/net/mlx5/mlx5.c      |   8 ---
 drivers/net/mlx5/mlx5.h      |   3 +-
 drivers/net/mlx5/mlx5_devx.c | 142 +++++++++++++------------------------------
 drivers/net/mlx5/mlx5_rxtx.h |   4 --
 4 files changed, 42 insertions(+), 115 deletions(-)

diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index 91492c5..c0a36d6 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -936,14 +936,6 @@ struct mlx5_dev_ctx_shared *
 		goto error;
 	}
 	if (sh->devx) {
-		/* Query the EQN for this core. */
-		err = mlx5_glue->devx_query_eqn(sh->ctx, 0, &sh->eqn);
-		if (err) {
-			rte_errno = errno;
-			DRV_LOG(ERR, "Failed to query event queue number %d.",
-				rte_errno);
-			goto error;
-		}
 		err = mlx5_os_get_pdn(sh->pd, &sh->pdn);
 		if (err) {
 			DRV_LOG(ERR, "Fail to extract pdn from PD");
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index e61154b..079cbca 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -683,7 +683,6 @@ struct mlx5_dev_ctx_shared {
 	uint16_t bond_dev; /* Bond primary device id. */
 	uint32_t devx:1; /* Opened with DV. */
 	uint32_t flow_hit_aso_en:1; /* Flow Hit ASO is supported. */
-	uint32_t eqn; /* Event Queue number. */
 	uint32_t max_port; /* Maximal IB device port index. */
 	void *ctx; /* Verbs/DV/DevX context. */
 	void *pd; /* Protection Domain. */
@@ -791,7 +790,7 @@ struct mlx5_rxq_obj {
 		};
 		struct {
 			struct mlx5_devx_obj *rq; /* DevX Rx Queue object. */
-			struct mlx5_devx_obj *devx_cq; /* DevX CQ object. */
+			struct mlx5_devx_cq cq_obj; /* DevX CQ object. */
 			void *devx_channel;
 		};
 	};
diff --git a/drivers/net/mlx5/mlx5_devx.c b/drivers/net/mlx5/mlx5_devx.c
index af0383c..913f169 100644
--- a/drivers/net/mlx5/mlx5_devx.c
+++ b/drivers/net/mlx5/mlx5_devx.c
@@ -172,30 +172,17 @@
 }
 
 /**
- * Release the resources allocated for the Rx CQ DevX object.
+ * Destroy the Rx queue DevX object.
  *
- * @param rxq_ctrl
- *   DevX Rx queue object.
+ * @param rxq_obj
+ *   Rxq object to destroy.
  */
 static void
-mlx5_rxq_release_devx_cq_resources(struct mlx5_rxq_ctrl *rxq_ctrl)
+mlx5_rxq_release_devx_resources(struct mlx5_rxq_ctrl *rxq_ctrl)
 {
-	struct mlx5_devx_dbr_page *dbr_page = rxq_ctrl->cq_dbrec_page;
-
-	if (rxq_ctrl->cq_umem) {
-		mlx5_os_umem_dereg(rxq_ctrl->cq_umem);
-		rxq_ctrl->cq_umem = NULL;
-	}
-	if (rxq_ctrl->rxq.cqes) {
-		rte_free((void *)(uintptr_t)rxq_ctrl->rxq.cqes);
-		rxq_ctrl->rxq.cqes = NULL;
-	}
-	if (dbr_page) {
-		claim_zero(mlx5_release_dbr(&rxq_ctrl->priv->dbrpgs,
-					    mlx5_os_get_umem_id(dbr_page->umem),
-					    rxq_ctrl->cq_dbr_offset));
-		rxq_ctrl->cq_dbrec_page = NULL;
-	}
+	mlx5_rxq_release_devx_rq_resources(rxq_ctrl);
+	mlx5_devx_cq_destroy(&rxq_ctrl->obj->cq_obj);
+	memset(&rxq_ctrl->obj->cq_obj, 0, sizeof(rxq_ctrl->obj->cq_obj));
 }
 
 /**
@@ -213,14 +200,12 @@
 		mlx5_devx_modify_rq(rxq_obj, MLX5_RXQ_MOD_RDY2RST);
 		claim_zero(mlx5_devx_cmd_destroy(rxq_obj->rq));
 	} else {
-		MLX5_ASSERT(rxq_obj->devx_cq);
+		MLX5_ASSERT(rxq_obj->cq_obj);
 		claim_zero(mlx5_devx_cmd_destroy(rxq_obj->rq));
-		claim_zero(mlx5_devx_cmd_destroy(rxq_obj->devx_cq));
 		if (rxq_obj->devx_channel)
 			mlx5_os_devx_destroy_event_channel
 							(rxq_obj->devx_channel);
-		mlx5_rxq_release_devx_rq_resources(rxq_obj->rxq_ctrl);
-		mlx5_rxq_release_devx_cq_resources(rxq_obj->rxq_ctrl);
+		mlx5_rxq_release_devx_resources(rxq_obj->rxq_ctrl);
 	}
 }
 
@@ -249,7 +234,7 @@
 		rte_errno = errno;
 		return -rte_errno;
 	}
-	if (out.event_resp.cookie != (uint64_t)(uintptr_t)rxq_obj->devx_cq) {
+	if (out.event_resp.cookie != (uint64_t)(uintptr_t)rxq_obj->cq_obj.cq) {
 		rte_errno = EINVAL;
 		return -rte_errno;
 	}
@@ -327,7 +312,7 @@
 		container_of(rxq_data, struct mlx5_rxq_ctrl, rxq);
 	struct mlx5_devx_create_rq_attr rq_attr = { 0 };
 	uint32_t wqe_n = 1 << (rxq_data->elts_n - rxq_data->sges_n);
-	uint32_t cqn = rxq_ctrl->obj->devx_cq->id;
+	uint32_t cqn = rxq_ctrl->obj->cq_obj.cq->id;
 	struct mlx5_devx_dbr_page *dbr_page;
 	int64_t dbr_offset;
 	uint32_t wq_size = 0;
@@ -410,31 +395,23 @@
  *   Queue index in DPDK Rx queue array.
  *
  * @return
- *   The DevX CQ object initialized, NULL otherwise and rte_errno is set.
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
  */
-static struct mlx5_devx_obj *
+static int
 mlx5_rxq_create_devx_cq_resources(struct rte_eth_dev *dev, uint16_t idx)
 {
-	struct mlx5_devx_obj *cq_obj = 0;
+	struct mlx5_devx_cq *cq_obj = 0;
 	struct mlx5_devx_cq_attr cq_attr = { 0 };
 	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5_dev_ctx_shared *sh = priv->sh;
 	struct mlx5_rxq_data *rxq_data = (*priv->rxqs)[idx];
 	struct mlx5_rxq_ctrl *rxq_ctrl =
 		container_of(rxq_data, struct mlx5_rxq_ctrl, rxq);
-	size_t page_size = rte_mem_page_size();
 	unsigned int cqe_n = mlx5_rxq_cqe_num(rxq_data);
-	struct mlx5_devx_dbr_page *dbr_page;
-	int64_t dbr_offset;
-	void *buf = NULL;
-	uint16_t event_nums[1] = {0};
 	uint32_t log_cqe_n;
-	uint32_t cq_size;
+	uint16_t event_nums[1] = { 0 };
 	int ret = 0;
 
-	if (page_size == (size_t)-1) {
-		DRV_LOG(ERR, "Failed to get page_size.");
-		goto error;
-	}
 	if (priv->config.cqe_comp && !rxq_data->hw_timestamp &&
 	    !rxq_data->lro) {
 		cq_attr.cqe_comp_en = 1u;
@@ -487,71 +464,37 @@
 			"Port %u Rx CQE compression is disabled for LRO.",
 			dev->data->port_id);
 	}
+	cq_attr.uar_page_id = mlx5_os_get_devx_uar_page_id(sh->devx_rx_uar);
 	log_cqe_n = log2above(cqe_n);
-	cq_size = sizeof(struct mlx5_cqe) * (1 << log_cqe_n);
-	buf = rte_calloc_socket(__func__, 1, cq_size, page_size,
-				rxq_ctrl->socket);
-	if (!buf) {
-		DRV_LOG(ERR, "Failed to allocate memory for CQ.");
-		goto error;
-	}
-	rxq_data->cqes = (volatile struct mlx5_cqe (*)[])(uintptr_t)buf;
-	rxq_ctrl->cq_umem = mlx5_os_umem_reg(priv->sh->ctx, buf,
-						     cq_size,
-						     IBV_ACCESS_LOCAL_WRITE);
-	if (!rxq_ctrl->cq_umem) {
-		DRV_LOG(ERR, "Failed to register umem for CQ.");
-		goto error;
-	}
-	/* Allocate CQ door-bell. */
-	dbr_offset = mlx5_get_dbr(priv->sh->ctx, &priv->dbrpgs, &dbr_page);
-	if (dbr_offset < 0) {
-		DRV_LOG(ERR, "Failed to allocate CQ door-bell.");
-		goto error;
-	}
-	rxq_ctrl->cq_dbr_offset = dbr_offset;
-	rxq_ctrl->cq_dbrec_page = dbr_page;
-	rxq_data->cq_db = (uint32_t *)((uintptr_t)dbr_page->dbrs +
-			  (uintptr_t)rxq_ctrl->cq_dbr_offset);
-	rxq_data->cq_uar =
-			mlx5_os_get_devx_uar_base_addr(priv->sh->devx_rx_uar);
 	/* Create CQ using DevX API. */
-	cq_attr.eqn = priv->sh->eqn;
-	cq_attr.uar_page_id =
-			mlx5_os_get_devx_uar_page_id(priv->sh->devx_rx_uar);
-	cq_attr.q_umem_id = mlx5_os_get_umem_id(rxq_ctrl->cq_umem);
-	cq_attr.q_umem_valid = 1;
-	cq_attr.log_cq_size = log_cqe_n;
-	cq_attr.log_page_size = rte_log2_u32(page_size);
-	cq_attr.db_umem_offset = rxq_ctrl->cq_dbr_offset;
-	cq_attr.db_umem_id = mlx5_os_get_umem_id(dbr_page->umem);
-	cq_attr.db_umem_valid = 1;
-	cq_obj = mlx5_devx_cmd_create_cq(priv->sh->ctx, &cq_attr);
-	if (!cq_obj)
-		goto error;
+	ret = mlx5_devx_cq_create(sh->ctx, &rxq_ctrl->obj->cq_obj, log_cqe_n,
+				  &cq_attr, sh->numa_node);
+	if (ret)
+		return ret;
+	cq_obj = &rxq_ctrl->obj->cq_obj;
+	rxq_data->cqes = (volatile struct mlx5_cqe (*)[])
+							(uintptr_t)cq_obj->cqes;
+	rxq_data->cq_db = cq_obj->db_rec;
+	rxq_data->cq_uar = mlx5_os_get_devx_uar_base_addr(sh->devx_rx_uar);
 	rxq_data->cqe_n = log_cqe_n;
-	rxq_data->cqn = cq_obj->id;
+	rxq_data->cqn = cq_obj->cq->id;
 	if (rxq_ctrl->obj->devx_channel) {
 		ret = mlx5_os_devx_subscribe_devx_event
-						(rxq_ctrl->obj->devx_channel,
-						 cq_obj->obj,
-						 sizeof(event_nums),
-						 event_nums,
-						 (uint64_t)(uintptr_t)cq_obj);
+					      (rxq_ctrl->obj->devx_channel,
+					       cq_obj->cq->obj,
+					       sizeof(event_nums),
+					       event_nums,
+					       (uint64_t)(uintptr_t)cq_obj->cq);
 		if (ret) {
 			DRV_LOG(ERR, "Fail to subscribe CQ to event channel.");
-			rte_errno = errno;
-			goto error;
+			ret = errno;
+			mlx5_devx_cq_destroy(cq_obj);
+			memset(cq_obj, 0, sizeof(*cq_obj));
+			rte_errno = ret;
+			return -ret;
 		}
 	}
-	/* Initialise CQ to 1's to mark HW ownership for all CQEs. */
-	memset((void *)(uintptr_t)rxq_data->cqes, 0xFF, cq_size);
-	return cq_obj;
-error:
-	if (cq_obj)
-		mlx5_devx_cmd_destroy(cq_obj);
-	mlx5_rxq_release_devx_cq_resources(rxq_ctrl);
-	return NULL;
+	return 0;
 }
 
 /**
@@ -655,8 +598,8 @@
 		tmpl->fd = mlx5_os_get_devx_channel_fd(tmpl->devx_channel);
 	}
 	/* Create CQ using DevX API. */
-	tmpl->devx_cq = mlx5_rxq_create_devx_cq_resources(dev, idx);
-	if (!tmpl->devx_cq) {
+	ret = mlx5_rxq_create_devx_cq_resources(dev, idx);
+	if (ret) {
 		DRV_LOG(ERR, "Failed to create CQ.");
 		goto error;
 	}
@@ -682,12 +625,9 @@
 	ret = rte_errno; /* Save rte_errno before cleanup. */
 	if (tmpl->rq)
 		claim_zero(mlx5_devx_cmd_destroy(tmpl->rq));
-	if (tmpl->devx_cq)
-		claim_zero(mlx5_devx_cmd_destroy(tmpl->devx_cq));
 	if (tmpl->devx_channel)
 		mlx5_os_devx_destroy_event_channel(tmpl->devx_channel);
-	mlx5_rxq_release_devx_rq_resources(rxq_ctrl);
-	mlx5_rxq_release_devx_cq_resources(rxq_ctrl);
+	mlx5_rxq_release_devx_resources(rxq_ctrl);
 	rte_errno = ret; /* Restore rte_errno. */
 	return -rte_errno;
 }
diff --git a/drivers/net/mlx5/mlx5_rxtx.h b/drivers/net/mlx5/mlx5_rxtx.h
index 1e9345a..aba9541 100644
--- a/drivers/net/mlx5/mlx5_rxtx.h
+++ b/drivers/net/mlx5/mlx5_rxtx.h
@@ -196,11 +196,7 @@ struct mlx5_rxq_ctrl {
 	struct mlx5_devx_dbr_page *rq_dbrec_page;
 	uint64_t rq_dbr_offset;
 	/* Storing RQ door-bell information, needed when freeing door-bell. */
-	struct mlx5_devx_dbr_page *cq_dbrec_page;
-	uint64_t cq_dbr_offset;
-	/* Storing CQ door-bell information, needed when freeing door-bell. */
 	void *wq_umem; /* WQ buffer registration info. */
-	void *cq_umem; /* CQ buffer registration info. */
 	struct rte_eth_hairpin_conf hairpin_conf; /* Hairpin configuration. */
 	uint32_t hairpin_status; /* Hairpin binding status. */
 };
-- 
1.8.3.1


^ permalink raw reply	[flat|nested] 57+ messages in thread

* [dpdk-dev] [PATCH v3 11/19] common/mlx5: enhance page size configuration
  2021-01-06  8:19       ` [dpdk-dev] [PATCH v3 00/19] common/mlx5: share DevX resources creations Michael Baum
                           ` (9 preceding siblings ...)
  2021-01-06  8:19         ` [dpdk-dev] [PATCH v3 10/19] net/mlx5: move Rx " Michael Baum
@ 2021-01-06  8:19         ` Michael Baum
  2021-01-06  8:19         ` [dpdk-dev] [PATCH v3 12/19] common/mlx5: share DevX SQ creation Michael Baum
                           ` (8 subsequent siblings)
  19 siblings, 0 replies; 57+ messages in thread
From: Michael Baum @ 2021-01-06  8:19 UTC (permalink / raw)
  To: dev; +Cc: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko

The PRM counts page sizes in 4KB units, so the log_wq_pg_sz attribute
must be reduced by the 4KB page shift before it is written to hardware.

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
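The conversion is a plain change of units: the attribute carries log2
of the page size in bytes, while the PRM field expects log2 of the size
in 4KB pages (MLX5_ADAPTER_PAGE_SHIFT, i.e. 12). For example:

	/* 4KB pages:  log_page_size = 12 -> PRM field 12 - 12 = 0. */
	/* 64KB pages: log_page_size = 16 -> PRM field 16 - 12 = 4. */
	if (attr->log_page_size > MLX5_ADAPTER_PAGE_SHIFT)
		MLX5_SET(cqc, cqctx, log_page_size,
			 attr->log_page_size - MLX5_ADAPTER_PAGE_SHIFT);

A smaller attribute leaves the field at its default of 0, i.e. 4KB.
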
 drivers/common/mlx5/mlx5_devx_cmds.c | 53 ++++++++++++++++--------------------
 1 file changed, 23 insertions(+), 30 deletions(-)

diff --git a/drivers/common/mlx5/mlx5_devx_cmds.c b/drivers/common/mlx5/mlx5_devx_cmds.c
index 59f0bcc..790a701 100644
--- a/drivers/common/mlx5/mlx5_devx_cmds.c
+++ b/drivers/common/mlx5/mlx5_devx_cmds.c
@@ -268,9 +268,8 @@ struct mlx5_devx_obj *
 	MLX5_SET(mkc, mkc, mkey_7_0, attr->umem_id & 0xFF);
 	MLX5_SET(mkc, mkc, translations_octword_size, translation_size);
 	MLX5_SET(mkc, mkc, relaxed_ordering_write,
-		attr->relaxed_ordering_write);
-	MLX5_SET(mkc, mkc, relaxed_ordering_read,
-		attr->relaxed_ordering_read);
+		 attr->relaxed_ordering_write);
+	MLX5_SET(mkc, mkc, relaxed_ordering_read, attr->relaxed_ordering_read);
 	MLX5_SET64(mkc, mkc, start_addr, attr->addr);
 	MLX5_SET64(mkc, mkc, len, attr->size);
 	mkey->obj = mlx5_glue->devx_obj_create(ctx, in, in_size_dw * 4, out,
@@ -308,7 +307,7 @@ struct mlx5_devx_obj *
 	if (status) {
 		int syndrome = MLX5_GET(query_flow_counter_out, out, syndrome);
 
-		DRV_LOG(ERR, "Bad devX status %x, syndrome = %x", status,
+		DRV_LOG(ERR, "Bad DevX status %x, syndrome = %x", status,
 			syndrome);
 	}
 	return status;
@@ -374,8 +373,7 @@ struct mlx5_devx_obj *
 	syndrome = MLX5_GET(query_nic_vport_context_out, out, syndrome);
 	if (status) {
 		DRV_LOG(DEBUG, "Failed to query NIC vport context, "
-			"status %x, syndrome = %x",
-			status, syndrome);
+			"status %x, syndrome = %x", status, syndrome);
 		return -1;
 	}
 	vctx = MLX5_ADDR_OF(query_nic_vport_context_out, out,
@@ -662,8 +660,7 @@ struct mlx5_devx_obj *
 	syndrome = MLX5_GET(query_hca_cap_out, out, syndrome);
 	if (status) {
 		DRV_LOG(DEBUG, "Failed to query devx HCA capabilities, "
-			"status %x, syndrome = %x",
-			status, syndrome);
+			"status %x, syndrome = %x", status, syndrome);
 		return -1;
 	}
 	hcattr = MLX5_ADDR_OF(query_hca_cap_out, out, capability);
@@ -683,11 +680,11 @@ struct mlx5_devx_obj *
 		(cmd_hca_cap, hcattr, log_min_hairpin_wq_data_sz);
 	attr->vhca_id = MLX5_GET(cmd_hca_cap, hcattr, vhca_id);
 	attr->relaxed_ordering_write = MLX5_GET(cmd_hca_cap, hcattr,
-			relaxed_ordering_write);
+						relaxed_ordering_write);
 	attr->relaxed_ordering_read = MLX5_GET(cmd_hca_cap, hcattr,
-			relaxed_ordering_read);
+					       relaxed_ordering_read);
 	attr->access_register_user = MLX5_GET(cmd_hca_cap, hcattr,
-			access_register_user);
+					      access_register_user);
 	attr->eth_net_offloads = MLX5_GET(cmd_hca_cap, hcattr,
 					  eth_net_offloads);
 	attr->eth_virt = MLX5_GET(cmd_hca_cap, hcattr, eth_virt);
@@ -738,8 +735,7 @@ struct mlx5_devx_obj *
 			goto error;
 		if (status) {
 			DRV_LOG(DEBUG, "Failed to query devx QOS capabilities,"
-				" status %x, syndrome = %x",
-				status, syndrome);
+				" status %x, syndrome = %x", status, syndrome);
 			return -1;
 		}
 		hcattr = MLX5_ADDR_OF(query_hca_cap_out, out, capability);
@@ -769,17 +765,14 @@ struct mlx5_devx_obj *
 		 MLX5_GET_HCA_CAP_OP_MOD_NIC_FLOW_TABLE |
 		 MLX5_HCA_CAP_OPMOD_GET_CUR);
 
-	rc = mlx5_glue->devx_general_cmd(ctx,
-					 in, sizeof(in),
-					 out, sizeof(out));
+	rc = mlx5_glue->devx_general_cmd(ctx, in, sizeof(in), out, sizeof(out));
 	if (rc)
 		goto error;
 	status = MLX5_GET(query_hca_cap_out, out, status);
 	syndrome = MLX5_GET(query_hca_cap_out, out, syndrome);
 	if (status) {
 		DRV_LOG(DEBUG, "Failed to query devx HCA capabilities, "
-			"status %x, syndrome = %x",
-			status, syndrome);
+			"status %x, syndrome = %x", status, syndrome);
 		attr->log_max_ft_sampler_num = 0;
 		return -1;
 	}
@@ -796,9 +789,7 @@ struct mlx5_devx_obj *
 		 MLX5_GET_HCA_CAP_OP_MOD_ETHERNET_OFFLOAD_CAPS |
 		 MLX5_HCA_CAP_OPMOD_GET_CUR);
 
-	rc = mlx5_glue->devx_general_cmd(ctx,
-					 in, sizeof(in),
-					 out, sizeof(out));
+	rc = mlx5_glue->devx_general_cmd(ctx, in, sizeof(in), out, sizeof(out));
 	if (rc) {
 		attr->eth_net_offloads = 0;
 		goto error;
@@ -807,8 +798,7 @@ struct mlx5_devx_obj *
 	syndrome = MLX5_GET(query_hca_cap_out, out, syndrome);
 	if (status) {
 		DRV_LOG(DEBUG, "Failed to query devx HCA capabilities, "
-			"status %x, syndrome = %x",
-			status, syndrome);
+			"status %x, syndrome = %x", status, syndrome);
 		attr->eth_net_offloads = 0;
 		return -1;
 	}
@@ -927,7 +917,9 @@ struct mlx5_devx_obj *
 	MLX5_SET(wq, wq_ctx, hw_counter, wq_attr->hw_counter);
 	MLX5_SET(wq, wq_ctx, sw_counter, wq_attr->sw_counter);
 	MLX5_SET(wq, wq_ctx, log_wq_stride, wq_attr->log_wq_stride);
-	MLX5_SET(wq, wq_ctx, log_wq_pg_sz, wq_attr->log_wq_pg_sz);
+	if (wq_attr->log_wq_pg_sz > MLX5_ADAPTER_PAGE_SHIFT)
+		MLX5_SET(wq, wq_ctx, log_wq_pg_sz,
+			 wq_attr->log_wq_pg_sz - MLX5_ADAPTER_PAGE_SHIFT);
 	MLX5_SET(wq, wq_ctx, log_wq_sz, wq_attr->log_wq_sz);
 	MLX5_SET(wq, wq_ctx, dbr_umem_valid, wq_attr->dbr_umem_valid);
 	MLX5_SET(wq, wq_ctx, wq_umem_valid, wq_attr->wq_umem_valid);
@@ -1574,13 +1566,13 @@ struct mlx5_devx_obj *
 	MLX5_SET(cqc, cqctx, cc, attr->use_first_only);
 	MLX5_SET(cqc, cqctx, oi, attr->overrun_ignore);
 	MLX5_SET(cqc, cqctx, log_cq_size, attr->log_cq_size);
-	MLX5_SET(cqc, cqctx, log_page_size, attr->log_page_size -
-		 MLX5_ADAPTER_PAGE_SHIFT);
+	if (attr->log_page_size > MLX5_ADAPTER_PAGE_SHIFT)
+		MLX5_SET(cqc, cqctx, log_page_size,
+			 attr->log_page_size - MLX5_ADAPTER_PAGE_SHIFT);
 	MLX5_SET(cqc, cqctx, c_eqn, attr->eqn);
 	MLX5_SET(cqc, cqctx, uar_page, attr->uar_page_id);
 	MLX5_SET(cqc, cqctx, cqe_comp_en, !!attr->cqe_comp_en);
-	MLX5_SET(cqc, cqctx, mini_cqe_res_format,
-		 attr->mini_cqe_res_format);
+	MLX5_SET(cqc, cqctx, mini_cqe_res_format, attr->mini_cqe_res_format);
 	MLX5_SET(cqc, cqctx, mini_cqe_res_format_ext,
 		 attr->mini_cqe_res_format_ext);
 	if (attr->q_umem_valid) {
@@ -1809,8 +1801,9 @@ struct mlx5_devx_obj *
 	if (attr->uar_index) {
 		MLX5_SET(qpc, qpc, pm_state, MLX5_QP_PM_MIGRATED);
 		MLX5_SET(qpc, qpc, uar_page, attr->uar_index);
-		MLX5_SET(qpc, qpc, log_page_size, attr->log_page_size -
-			 MLX5_ADAPTER_PAGE_SHIFT);
+		if (attr->log_page_size > MLX5_ADAPTER_PAGE_SHIFT)
+			MLX5_SET(qpc, qpc, log_page_size,
+				 attr->log_page_size - MLX5_ADAPTER_PAGE_SHIFT);
 		if (attr->sq_size) {
 			MLX5_ASSERT(RTE_IS_POWER_OF_2(attr->sq_size));
 			MLX5_SET(qpc, qpc, cqn_snd, attr->cqn);
-- 
1.8.3.1


^ permalink raw reply	[flat|nested] 57+ messages in thread

* [dpdk-dev] [PATCH v3 12/19] common/mlx5: share DevX SQ creation
  2021-01-06  8:19       ` [dpdk-dev] [PATCH v3 00/19] common/mlx5: share DevX resources creations Michael Baum
                           ` (10 preceding siblings ...)
  2021-01-06  8:19         ` [dpdk-dev] [PATCH v3 11/19] common/mlx5: enhance page size configuration Michael Baum
@ 2021-01-06  8:19         ` Michael Baum
  2021-01-06  8:19         ` [dpdk-dev] [PATCH v3 13/19] regex/mlx5: move DevX SQ creation to common Michael Baum
                           ` (7 subsequent siblings)
  19 siblings, 0 replies; 57+ messages in thread
From: Michael Baum @ 2021-01-06  8:19 UTC (permalink / raw)
  To: dev; +Cc: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko

The SQ object in DevX is created in several places across several
different drivers.
In all of them the details are almost identical, in particular the
allocation of the required resources.

Add a structure that contains all the resources, and provide creation
and release functions for it.

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
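The resulting API mirrors the CQ helper; a sketch of a caller (the
non-WQ attributes remain the caller's responsibility):

	struct mlx5_devx_create_sq_attr attr = {
		.cqn = cq_obj.cq->id,
		/* state, tis_num, uar_page etc. filled by the caller. */
	};
	struct mlx5_devx_sq sq_obj;

	if (mlx5_devx_sq_create(ctx, &sq_obj, log_wqbb_n, &attr,
				SOCKET_ID_ANY))
		return -rte_errno;
	/* WQEs at sq_obj.wqes, doorbell record at sq_obj.db_rec. */
	mlx5_devx_sq_destroy(&sq_obj);
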
 drivers/common/mlx5/mlx5_common_devx.c          | 119 ++++++++++++++++++++++++
 drivers/common/mlx5/mlx5_common_devx.h          |  27 +++++-
 drivers/common/mlx5/rte_common_mlx5_exports.def |   2 +
 drivers/common/mlx5/version.map                 |   2 +
 4 files changed, 148 insertions(+), 2 deletions(-)

diff --git a/drivers/common/mlx5/mlx5_common_devx.c b/drivers/common/mlx5/mlx5_common_devx.c
index 3ec0dd5..58b75c3 100644
--- a/drivers/common/mlx5/mlx5_common_devx.c
+++ b/drivers/common/mlx5/mlx5_common_devx.c
@@ -153,3 +153,122 @@
 	rte_errno = ret;
 	return -rte_errno;
 }
+
+/**
+ * Destroy DevX Send Queue.
+ *
+ * @param[in] sq
+ *   DevX SQ to destroy.
+ */
+void
+mlx5_devx_sq_destroy(struct mlx5_devx_sq *sq)
+{
+	if (sq->sq)
+		claim_zero(mlx5_devx_cmd_destroy(sq->sq));
+	if (sq->umem_obj)
+		claim_zero(mlx5_os_umem_dereg(sq->umem_obj));
+	if (sq->umem_buf)
+		mlx5_free((void *)(uintptr_t)sq->umem_buf);
+}
+
+/**
+ * Create Send Queue using DevX API.
+ *
+ * Get a pointer to partially initialized attributes structure, and updates the
+ * following fields:
+ *   wq_type
+ *   wq_umem_valid
+ *   wq_umem_id
+ *   wq_umem_offset
+ *   dbr_umem_valid
+ *   dbr_umem_id
+ *   dbr_addr
+ *   log_wq_stride
+ *   log_wq_sz
+ *   log_wq_pg_sz
+ * All other fields are updated by caller.
+ *
+ * @param[in] ctx
+ *   Context returned from mlx5 open_device() glue function.
+ * @param[in/out] sq_obj
+ *   Pointer to SQ to create.
+ * @param[in] log_wqbb_n
+ *   Log of number of WQBBs in queue.
+ * @param[in] attr
+ *   Pointer to SQ attributes structure.
+ * @param[in] socket
+ *   Socket to use for allocation.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+int
+mlx5_devx_sq_create(void *ctx, struct mlx5_devx_sq *sq_obj, uint16_t log_wqbb_n,
+		    struct mlx5_devx_create_sq_attr *attr, int socket)
+{
+	struct mlx5_devx_obj *sq = NULL;
+	struct mlx5dv_devx_umem *umem_obj = NULL;
+	void *umem_buf = NULL;
+	size_t alignment = MLX5_WQE_BUF_ALIGNMENT;
+	uint32_t umem_size, umem_dbrec;
+	uint16_t sq_size = 1 << log_wqbb_n;
+	int ret;
+
+	if (alignment == (size_t)-1) {
+		DRV_LOG(ERR, "Failed to get WQE buf alignment.");
+		rte_errno = ENOMEM;
+		return -rte_errno;
+	}
+	/* Allocate memory buffer for WQEs and doorbell record. */
+	umem_size = MLX5_WQE_SIZE * sq_size;
+	umem_dbrec = RTE_ALIGN(umem_size, MLX5_DBR_SIZE);
+	umem_size += MLX5_DBR_SIZE;
+	umem_buf = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO, umem_size,
+			       alignment, socket);
+	if (!umem_buf) {
+		DRV_LOG(ERR, "Failed to allocate memory for SQ.");
+		rte_errno = ENOMEM;
+		return -rte_errno;
+	}
+	/* Register allocated buffer in user space with DevX. */
+	umem_obj = mlx5_os_umem_reg(ctx, (void *)(uintptr_t)umem_buf, umem_size,
+				    IBV_ACCESS_LOCAL_WRITE);
+	if (!umem_obj) {
+		DRV_LOG(ERR, "Failed to register umem for SQ.");
+		rte_errno = errno;
+		goto error;
+	}
+	/* Fill attributes for SQ object creation. */
+	attr->wq_attr.wq_type = MLX5_WQ_TYPE_CYCLIC;
+	attr->wq_attr.wq_umem_valid = 1;
+	attr->wq_attr.wq_umem_id = mlx5_os_get_umem_id(umem_obj);
+	attr->wq_attr.wq_umem_offset = 0;
+	attr->wq_attr.dbr_umem_valid = 1;
+	attr->wq_attr.dbr_umem_id = attr->wq_attr.wq_umem_id;
+	attr->wq_attr.dbr_addr = umem_dbrec;
+	attr->wq_attr.log_wq_stride = rte_log2_u32(MLX5_WQE_SIZE);
+	attr->wq_attr.log_wq_sz = log_wqbb_n;
+	attr->wq_attr.log_wq_pg_sz = MLX5_LOG_PAGE_SIZE;
+	/* Create send queue object with DevX. */
+	sq = mlx5_devx_cmd_create_sq(ctx, attr);
+	if (!sq) {
+		DRV_LOG(ERR, "Can't create DevX SQ object.");
+		rte_errno = ENOMEM;
+		goto error;
+	}
+	sq_obj->umem_buf = umem_buf;
+	sq_obj->umem_obj = umem_obj;
+	sq_obj->sq = sq;
+	sq_obj->db_rec = RTE_PTR_ADD(sq_obj->umem_buf, umem_dbrec);
+	return 0;
+error:
+	ret = rte_errno;
+	if (umem_obj)
+		claim_zero(mlx5_os_umem_dereg(umem_obj));
+	if (umem_buf)
+		mlx5_free((void *)(uintptr_t)umem_buf);
+	rte_errno = ret;
+	return -rte_errno;
+}
+
+
diff --git a/drivers/common/mlx5/mlx5_common_devx.h b/drivers/common/mlx5/mlx5_common_devx.h
index 20d5da0..6b078b2 100644
--- a/drivers/common/mlx5/mlx5_common_devx.h
+++ b/drivers/common/mlx5/mlx5_common_devx.h
@@ -7,6 +7,9 @@
 
 #include "mlx5_devx_cmds.h"
 
+/* The standard page size */
+#define MLX5_LOG_PAGE_SIZE 12
+
 /* DevX Completion Queue structure. */
 struct mlx5_devx_cq {
 	struct mlx5_devx_obj *cq; /* The CQ DevX object. */
@@ -18,6 +21,18 @@ struct mlx5_devx_cq {
 	volatile uint32_t *db_rec; /* The CQ doorbell record. */
 };
 
+/* DevX Send Queue structure. */
+struct mlx5_devx_sq {
+	struct mlx5_devx_obj *sq; /* The SQ DevX object. */
+	void *umem_obj; /* The SQ umem object. */
+	union {
+		volatile void *umem_buf;
+		volatile struct mlx5_wqe *wqes; /* The SQ ring buffer. */
+	};
+	volatile uint32_t *db_rec; /* The SQ doorbell record. */
+};
+
+
 /* mlx5_common_devx.c */
 
 __rte_internal
@@ -25,7 +40,15 @@ struct mlx5_devx_cq {
 
 __rte_internal
 int mlx5_devx_cq_create(void *ctx, struct mlx5_devx_cq *cq_obj,
-			uint16_t log_desc_n, struct mlx5_devx_cq_attr *attr,
-			int socket);
+			uint16_t log_desc_n,
+			struct mlx5_devx_cq_attr *attr, int socket);
+
+__rte_internal
+void mlx5_devx_sq_destroy(struct mlx5_devx_sq *sq);
+
+__rte_internal
+int mlx5_devx_sq_create(void *ctx, struct mlx5_devx_sq *sq_obj,
+			uint16_t log_wqbb_n,
+			struct mlx5_devx_create_sq_attr *attr, int socket);
 
 #endif /* RTE_PMD_MLX5_COMMON_DEVX_H_ */
diff --git a/drivers/common/mlx5/rte_common_mlx5_exports.def b/drivers/common/mlx5/rte_common_mlx5_exports.def
index d10db40..cfee96e 100644
--- a/drivers/common/mlx5/rte_common_mlx5_exports.def
+++ b/drivers/common/mlx5/rte_common_mlx5_exports.def
@@ -37,6 +37,8 @@ EXPORTS
 
     mlx5_devx_cq_create
     mlx5_devx_cq_destroy
+    mlx5_devx_sq_create
+    mlx5_devx_sq_destroy
 
 	mlx5_get_dbr
 	mlx5_glue
diff --git a/drivers/common/mlx5/version.map b/drivers/common/mlx5/version.map
index 850202c..3414588 100644
--- a/drivers/common/mlx5/version.map
+++ b/drivers/common/mlx5/version.map
@@ -45,6 +45,8 @@ INTERNAL {
 
     mlx5_devx_cq_create;
     mlx5_devx_cq_destroy;
+    mlx5_devx_sq_create;
+    mlx5_devx_sq_destroy;
 
 	mlx5_get_ifname_sysfs;
 	mlx5_get_dbr;
-- 
1.8.3.1


^ permalink raw reply	[flat|nested] 57+ messages in thread

* [dpdk-dev] [PATCH v3 13/19] regex/mlx5: move DevX SQ creation to common
  2021-01-06  8:19       ` [dpdk-dev] [PATCH v3 00/19] common/mlx5: share DevX resources creations Michael Baum
                           ` (11 preceding siblings ...)
  2021-01-06  8:19         ` [dpdk-dev] [PATCH v3 12/19] common/mlx5: share DevX SQ creation Michael Baum
@ 2021-01-06  8:19         ` Michael Baum
  2021-01-06  8:19         ` [dpdk-dev] [PATCH v3 14/19] net/mlx5: move rearm and clock queue " Michael Baum
                           ` (6 subsequent siblings)
  19 siblings, 0 replies; 57+ messages in thread
From: Michael Baum @ 2021-01-06  8:19 UTC (permalink / raw)
  To: dev; +Cc: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko

Use the common function for DevX SQ creation.

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
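The regex queue setup thus shrinks to creating the SQ through the
helper and moving it to the ready state, roughly (fragment from the
hunk below, with attr prepared as shown there):

	struct mlx5_devx_modify_sq_attr modify_attr = {
		.state = MLX5_SQC_STATE_RDY,
	};

	ret = mlx5_devx_sq_create(priv->ctx, &sq->sq_obj, log_nb_desc,
				  &attr, SOCKET_ID_ANY);
	if (!ret)
		ret = mlx5_devx_cmd_modify_sq(sq->sq_obj.sq, &modify_attr);
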
 drivers/regex/mlx5/mlx5_regex.h          |   8 +-
 drivers/regex/mlx5/mlx5_regex_control.c  | 153 ++++++++++---------------------
 drivers/regex/mlx5/mlx5_regex_fastpath.c |  14 +--
 3 files changed, 55 insertions(+), 120 deletions(-)

diff --git a/drivers/regex/mlx5/mlx5_regex.h b/drivers/regex/mlx5/mlx5_regex.h
index 9f7a388..7e1b2a9 100644
--- a/drivers/regex/mlx5/mlx5_regex.h
+++ b/drivers/regex/mlx5/mlx5_regex.h
@@ -18,15 +18,10 @@
 
 struct mlx5_regex_sq {
 	uint16_t log_nb_desc; /* Log 2 number of desc for this object. */
-	struct mlx5_devx_obj *obj; /* The SQ DevX object. */
-	int64_t dbr_offset; /* Door bell record offset. */
-	uint32_t dbr_umem; /* Door bell record umem id. */
-	uint8_t *wqe; /* The SQ ring buffer. */
-	struct mlx5dv_devx_umem *wqe_umem; /* SQ buffer umem. */
+	struct mlx5_devx_sq sq_obj; /* The SQ DevX object. */
 	size_t pi, db_pi;
 	size_t ci;
 	uint32_t sqn;
-	uint32_t *dbr;
 };
 
 struct mlx5_regex_cq {
@@ -73,7 +68,6 @@ struct mlx5_regex_priv {
 	uint32_t nb_engines; /* Number of RegEx engines. */
 	struct mlx5dv_devx_uar *uar; /* UAR object. */
 	struct ibv_pd *pd;
-	struct mlx5_dbr_page_list dbrpgs; /* Door-bell pages. */
 	struct mlx5_mr_share_cache mr_scache; /* Global shared MR cache. */
 };
 
diff --git a/drivers/regex/mlx5/mlx5_regex_control.c b/drivers/regex/mlx5/mlx5_regex_control.c
index ca6c0f5..df57fad 100644
--- a/drivers/regex/mlx5/mlx5_regex_control.c
+++ b/drivers/regex/mlx5/mlx5_regex_control.c
@@ -112,6 +112,27 @@
 #endif
 
 /**
+ * Destroy the SQ object.
+ *
+ * @param qp
+ *   Pointer to the QP element
+ * @param q_ind
+ *   The index of the queue.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+static int
+regex_ctrl_destroy_sq(struct mlx5_regex_qp *qp, uint16_t q_ind)
+{
+	struct mlx5_regex_sq *sq = &qp->sqs[q_ind];
+
+	mlx5_devx_sq_destroy(&sq->sq_obj);
+	memset(sq, 0, sizeof(*sq));
+	return 0;
+}
+
+/**
  * create the SQ object.
  *
  * @param priv
@@ -131,84 +152,42 @@
 		     uint16_t q_ind, uint16_t log_nb_desc)
 {
 #ifdef HAVE_IBV_FLOW_DV_SUPPORT
-	struct mlx5_devx_create_sq_attr attr = { 0 };
-	struct mlx5_devx_modify_sq_attr modify_attr = { 0 };
-	struct mlx5_devx_wq_attr *wq_attr = &attr.wq_attr;
-	struct mlx5_devx_dbr_page *dbr_page = NULL;
+	struct mlx5_devx_create_sq_attr attr = {
+		.user_index = q_ind,
+		.cqn = qp->cq.cq_obj.cq->id,
+		.wq_attr = (struct mlx5_devx_wq_attr){
+			.uar_page = priv->uar->page_id,
+		},
+	};
+	struct mlx5_devx_modify_sq_attr modify_attr = {
+		.state = MLX5_SQC_STATE_RDY,
+	};
 	struct mlx5_regex_sq *sq = &qp->sqs[q_ind];
-	void *buf = NULL;
-	uint32_t sq_size;
 	uint32_t pd_num = 0;
 	int ret;
 
 	sq->log_nb_desc = log_nb_desc;
-	sq_size = 1 << sq->log_nb_desc;
-	sq->dbr_offset = mlx5_get_dbr(priv->ctx, &priv->dbrpgs, &dbr_page);
-	if (sq->dbr_offset < 0) {
-		DRV_LOG(ERR, "Can't allocate sq door bell record.");
-		rte_errno  = ENOMEM;
-		goto error;
-	}
-	sq->dbr_umem = mlx5_os_get_umem_id(dbr_page->umem);
-	sq->dbr = (uint32_t *)((uintptr_t)dbr_page->dbrs +
-			       (uintptr_t)sq->dbr_offset);
-
-	buf = rte_calloc(NULL, 1, 64 * sq_size, 4096);
-	if (!buf) {
-		DRV_LOG(ERR, "Can't allocate wqe buffer.");
-		rte_errno  = ENOMEM;
-		goto error;
-	}
-	sq->wqe = buf;
-	sq->wqe_umem = mlx5_glue->devx_umem_reg(priv->ctx, buf, 64 * sq_size,
-						7);
 	sq->ci = 0;
 	sq->pi = 0;
-	if (!sq->wqe_umem) {
-		DRV_LOG(ERR, "Can't register wqe mem.");
-		rte_errno  = ENOMEM;
-		goto error;
-	}
-	attr.state = MLX5_SQC_STATE_RST;
-	attr.tis_lst_sz = 0;
-	attr.tis_num = 0;
-	attr.user_index = q_ind;
-	attr.cqn = qp->cq.cq_obj.cq->id;
-	wq_attr->uar_page = priv->uar->page_id;
-	regex_get_pdn(priv->pd, &pd_num);
-	wq_attr->pd = pd_num;
-	wq_attr->wq_type = MLX5_WQ_TYPE_CYCLIC;
-	wq_attr->dbr_umem_id = sq->dbr_umem;
-	wq_attr->dbr_addr = sq->dbr_offset;
-	wq_attr->dbr_umem_valid = 1;
-	wq_attr->wq_umem_id = mlx5_os_get_umem_id(sq->wqe_umem);
-	wq_attr->wq_umem_offset = 0;
-	wq_attr->wq_umem_valid = 1;
-	wq_attr->log_wq_stride = 6;
-	wq_attr->log_wq_sz = sq->log_nb_desc;
-	sq->obj = mlx5_devx_cmd_create_sq(priv->ctx, &attr);
-	if (!sq->obj) {
-		DRV_LOG(ERR, "Can't create sq object.");
-		rte_errno  = ENOMEM;
-		goto error;
+	ret = regex_get_pdn(priv->pd, &pd_num);
+	if (ret)
+		return ret;
+	attr.wq_attr.pd = pd_num;
+	ret = mlx5_devx_sq_create(priv->ctx, &sq->sq_obj, log_nb_desc, &attr,
+				  SOCKET_ID_ANY);
+	if (ret) {
+		DRV_LOG(ERR, "Can't create SQ object.");
+		rte_errno = ENOMEM;
+		return -rte_errno;
 	}
-	modify_attr.state = MLX5_SQC_STATE_RDY;
-	ret = mlx5_devx_cmd_modify_sq(sq->obj, &modify_attr);
+	ret = mlx5_devx_cmd_modify_sq(sq->sq_obj.sq, &modify_attr);
 	if (ret) {
-		DRV_LOG(ERR, "Can't change sq state to ready.");
-		rte_errno  = ENOMEM;
-		goto error;
+		DRV_LOG(ERR, "Can't change SQ state to ready.");
+		regex_ctrl_destroy_sq(qp, q_ind);
+		rte_errno = ENOMEM;
+		return -rte_errno;
 	}
-
 	return 0;
-error:
-	if (sq->wqe_umem)
-		mlx5_glue->devx_umem_dereg(sq->wqe_umem);
-	if (buf)
-		rte_free(buf);
-	if (sq->dbr_offset)
-		mlx5_release_dbr(&priv->dbrpgs, sq->dbr_umem, sq->dbr_offset);
-	return -rte_errno;
 #else
 	(void)priv;
 	(void)qp;
@@ -220,44 +199,6 @@
 }
 
 /**
- * Destroy the SQ object.
- *
- * @param priv
- *   Pointer to the priv object.
- * @param qp
- *   Pointer to the QP element
- * @param q_ind
- *   The index of the queue.
- *
- * @return
- *   0 on success, a negative errno value otherwise and rte_errno is set.
- */
-static int
-regex_ctrl_destroy_sq(struct mlx5_regex_priv *priv, struct mlx5_regex_qp *qp,
-		      uint16_t q_ind)
-{
-	struct mlx5_regex_sq *sq = &qp->sqs[q_ind];
-
-	if (sq->wqe_umem) {
-		mlx5_glue->devx_umem_dereg(sq->wqe_umem);
-		sq->wqe_umem = NULL;
-	}
-	if (sq->wqe) {
-		rte_free((void *)(uintptr_t)sq->wqe);
-		sq->wqe = NULL;
-	}
-	if (sq->dbr_offset) {
-		mlx5_release_dbr(&priv->dbrpgs, sq->dbr_umem, sq->dbr_offset);
-		sq->dbr_offset = -1;
-	}
-	if (sq->obj) {
-		mlx5_devx_cmd_destroy(sq->obj);
-		sq->obj = NULL;
-	}
-	return 0;
-}
-
-/**
  * Setup the qp.
  *
  * @param dev
@@ -329,7 +270,7 @@
 	mlx5_mr_btree_free(&qp->mr_ctrl.cache_bh);
 err_btree:
 	for (i = 0; i < nb_sq_config; i++)
-		regex_ctrl_destroy_sq(priv, qp, i);
+		regex_ctrl_destroy_sq(qp, i);
 	regex_ctrl_destroy_cq(&qp->cq);
 err_cq:
 	rte_free(qp->sqs);
diff --git a/drivers/regex/mlx5/mlx5_regex_fastpath.c b/drivers/regex/mlx5/mlx5_regex_fastpath.c
index 255fd40..cd0f9bd 100644
--- a/drivers/regex/mlx5/mlx5_regex_fastpath.c
+++ b/drivers/regex/mlx5/mlx5_regex_fastpath.c
@@ -110,12 +110,12 @@ struct mlx5_regex_job {
 				  &priv->mr_scache, &qp->mr_ctrl,
 				  rte_pktmbuf_mtod(op->mbuf, uintptr_t),
 				  !!(op->mbuf->ol_flags & EXT_ATTACHED_MBUF));
-	uint8_t *wqe = (uint8_t *)sq->wqe + wqe_offset;
+	uint8_t *wqe = (uint8_t *)(uintptr_t)sq->sq_obj.wqes + wqe_offset;
 	int ds = 4; /*  ctrl + meta + input + output */
 
 	set_wqe_ctrl_seg((struct mlx5_wqe_ctrl_seg *)wqe, sq->pi,
-			 MLX5_OPCODE_MMO, MLX5_OPC_MOD_MMO_REGEX, sq->obj->id,
-			 0, ds, 0, 0);
+			 MLX5_OPCODE_MMO, MLX5_OPC_MOD_MMO_REGEX,
+			 sq->sq_obj.sq->id, 0, ds, 0, 0);
 	set_regex_ctrl_seg(wqe + 12, 0, op->group_id0, op->group_id1,
 			   op->group_id2,
 			   op->group_id3, 0);
@@ -137,12 +137,12 @@ struct mlx5_regex_job {
 {
 	size_t wqe_offset = (sq->db_pi & (sq_size_get(sq) - 1)) *
 		MLX5_SEND_WQE_BB;
-	uint8_t *wqe = (uint8_t *)sq->wqe + wqe_offset;
+	uint8_t *wqe = (uint8_t *)(uintptr_t)sq->sq_obj.wqes + wqe_offset;
 	((struct mlx5_wqe_ctrl_seg *)wqe)->fm_ce_se = MLX5_WQE_CTRL_CQ_UPDATE;
 	uint64_t *doorbell_addr =
 		(uint64_t *)((uint8_t *)uar->base_addr + 0x800);
 	rte_io_wmb();
-	sq->dbr[MLX5_SND_DBR] = rte_cpu_to_be_32((sq->db_pi + 1) &
+	sq->sq_obj.db_rec[MLX5_SND_DBR] = rte_cpu_to_be_32((sq->db_pi + 1) &
 						 MLX5_REGEX_MAX_WQE_INDEX);
 	rte_wmb();
 	*doorbell_addr = *(volatile uint64_t *)wqe;
@@ -301,7 +301,7 @@ struct mlx5_regex_job {
 	uint32_t job_id;
 	for (sqid = 0; sqid < queue->nb_obj; sqid++) {
 		struct mlx5_regex_sq *sq = &queue->sqs[sqid];
-		uint8_t *wqe = (uint8_t *)sq->wqe;
+		uint8_t *wqe = (uint8_t *)(uintptr_t)sq->sq_obj.wqes;
 		for (entry = 0 ; entry < sq_size_get(sq); entry++) {
 			job_id = sqid * sq_size_get(sq) + entry;
 			struct mlx5_regex_job *job = &queue->jobs[job_id];
@@ -334,7 +334,7 @@ struct mlx5_regex_job {
 		return -ENOMEM;
 
 	qp->metadata = mlx5_glue->reg_mr(pd, ptr,
-					 MLX5_REGEX_METADATA_SIZE*qp->nb_desc,
+					 MLX5_REGEX_METADATA_SIZE * qp->nb_desc,
 					 IBV_ACCESS_LOCAL_WRITE);
 	if (!qp->metadata) {
 		DRV_LOG(ERR, "Failed to register metadata");
-- 
1.8.3.1


^ permalink raw reply	[flat|nested] 57+ messages in thread

* [dpdk-dev] [PATCH v3 14/19] net/mlx5: move rearm and clock queue SQ creation to common
  2021-01-06  8:19       ` [dpdk-dev] [PATCH v3 00/19] common/mlx5: share DevX resources creations Michael Baum
                           ` (12 preceding siblings ...)
  2021-01-06  8:19         ` [dpdk-dev] [PATCH v3 13/19] regex/mlx5: move DevX SQ creation to common Michael Baum
@ 2021-01-06  8:19         ` Michael Baum
  2021-01-06  8:19         ` [dpdk-dev] [PATCH v3 15/19] net/mlx5: move Tx " Michael Baum
                           ` (5 subsequent siblings)
  19 siblings, 0 replies; 57+ messages in thread
From: Michael Baum @ 2021-01-06  8:19 UTC (permalink / raw)
  To: dev; +Cc: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko

Use the common function for DevX SQ creation for the rearm and clock
queues.

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
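With the shared object, both queues ring their doorbells through
sq_obj, e.g. the rearm queue producer update:

	/* Publish the new producer index in the doorbell record. */
	rte_compiler_barrier();
	*wq->sq_obj.db_rec = rte_cpu_to_be_32(wq->sq_ci);
	/* Make sure the doorbell record is updated before ringing. */
	rte_wmb();
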
 drivers/net/mlx5/mlx5.h      |   8 +--
 drivers/net/mlx5/mlx5_txpp.c | 147 +++++++++++--------------------------------
 2 files changed, 36 insertions(+), 119 deletions(-)

diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 079cbca..0a0a943 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -615,15 +615,9 @@ struct mlx5_txpp_wq {
 	uint32_t cq_ci:24;
 	uint32_t arm_sn:2;
 	/* Send Queue related data.*/
-	struct mlx5_devx_obj *sq;
-	void *sq_umem;
-	union {
-		volatile void *sq_buf;
-		volatile struct mlx5_wqe *wqes;
-	};
+	struct mlx5_devx_sq sq_obj;
 	uint16_t sq_size; /* Number of WQEs in the queue. */
 	uint16_t sq_ci; /* Next WQE to execute. */
-	volatile uint32_t *sq_dbrec;
 };
 
 /* Tx packet pacing internal timestamp. */
diff --git a/drivers/net/mlx5/mlx5_txpp.c b/drivers/net/mlx5/mlx5_txpp.c
index bd679c2..b38482d 100644
--- a/drivers/net/mlx5/mlx5_txpp.c
+++ b/drivers/net/mlx5/mlx5_txpp.c
@@ -129,12 +129,7 @@
 static void
 mlx5_txpp_destroy_send_queue(struct mlx5_txpp_wq *wq)
 {
-	if (wq->sq)
-		claim_zero(mlx5_devx_cmd_destroy(wq->sq));
-	if (wq->sq_umem)
-		claim_zero(mlx5_os_umem_dereg(wq->sq_umem));
-	if (wq->sq_buf)
-		mlx5_free((void *)(uintptr_t)wq->sq_buf);
+	mlx5_devx_sq_destroy(&wq->sq_obj);
 	mlx5_devx_cq_destroy(&wq->cq_obj);
 	memset(wq, 0, sizeof(*wq));
 }
@@ -163,6 +158,7 @@
 mlx5_txpp_doorbell_rearm_queue(struct mlx5_dev_ctx_shared *sh, uint16_t ci)
 {
 	struct mlx5_txpp_wq *wq = &sh->txpp.rearm_queue;
+	struct mlx5_wqe *wqe = (struct mlx5_wqe *)(uintptr_t)wq->sq_obj.wqes;
 	union {
 		uint32_t w32[2];
 		uint64_t w64;
@@ -171,11 +167,11 @@
 
 	wq->sq_ci = ci + 1;
 	cs.w32[0] = rte_cpu_to_be_32(rte_be_to_cpu_32
-		   (wq->wqes[ci & (wq->sq_size - 1)].ctrl[0]) | (ci - 1) << 8);
-	cs.w32[1] = wq->wqes[ci & (wq->sq_size - 1)].ctrl[1];
+			(wqe[ci & (wq->sq_size - 1)].ctrl[0]) | (ci - 1) << 8);
+	cs.w32[1] = wqe[ci & (wq->sq_size - 1)].ctrl[1];
 	/* Update SQ doorbell record with new SQ ci. */
 	rte_compiler_barrier();
-	*wq->sq_dbrec = rte_cpu_to_be_32(wq->sq_ci);
+	*wq->sq_obj.db_rec = rte_cpu_to_be_32(wq->sq_ci);
 	/* Make sure the doorbell record is updated. */
 	rte_wmb();
 	/* Write to doorbel register to start processing. */
@@ -188,7 +184,7 @@
 mlx5_txpp_fill_wqe_rearm_queue(struct mlx5_dev_ctx_shared *sh)
 {
 	struct mlx5_txpp_wq *wq = &sh->txpp.rearm_queue;
-	struct mlx5_wqe *wqe = (struct mlx5_wqe *)(uintptr_t)wq->wqes;
+	struct mlx5_wqe *wqe = (struct mlx5_wqe *)(uintptr_t)wq->sq_obj.wqes;
 	uint32_t i;
 
 	for (i = 0; i < wq->sq_size; i += 2) {
@@ -199,7 +195,7 @@
 		/* Build SEND_EN request with slave WQE index. */
 		cs = &wqe[i + 0].cseg;
 		cs->opcode = RTE_BE32(MLX5_OPCODE_SEND_EN | 0);
-		cs->sq_ds = rte_cpu_to_be_32((wq->sq->id << 8) | 2);
+		cs->sq_ds = rte_cpu_to_be_32((wq->sq_obj.sq->id << 8) | 2);
 		cs->flags = RTE_BE32(MLX5_COMP_ALWAYS <<
 				     MLX5_COMP_MODE_OFFSET);
 		cs->misc = RTE_BE32(0);
@@ -207,11 +203,12 @@
 		index = (i * MLX5_TXPP_REARM / 2 + MLX5_TXPP_REARM) &
 			((1 << MLX5_WQ_INDEX_WIDTH) - 1);
 		qs->max_index = rte_cpu_to_be_32(index);
-		qs->qpn_cqn = rte_cpu_to_be_32(sh->txpp.clock_queue.sq->id);
+		qs->qpn_cqn =
+			   rte_cpu_to_be_32(sh->txpp.clock_queue.sq_obj.sq->id);
 		/* Build WAIT request with slave CQE index. */
 		cs = &wqe[i + 1].cseg;
 		cs->opcode = RTE_BE32(MLX5_OPCODE_WAIT | 0);
-		cs->sq_ds = rte_cpu_to_be_32((wq->sq->id << 8) | 2);
+		cs->sq_ds = rte_cpu_to_be_32((wq->sq_obj.sq->id << 8) | 2);
 		cs->flags = RTE_BE32(MLX5_COMP_ONLY_ERR <<
 				     MLX5_COMP_MODE_OFFSET);
 		cs->misc = RTE_BE32(0);
@@ -228,21 +225,23 @@
 static int
 mlx5_txpp_create_rearm_queue(struct mlx5_dev_ctx_shared *sh)
 {
-	struct mlx5_devx_create_sq_attr sq_attr = { 0 };
+	struct mlx5_devx_create_sq_attr sq_attr = {
+		.cd_master = 1,
+		.state = MLX5_SQC_STATE_RST,
+		.tis_lst_sz = 1,
+		.tis_num = sh->tis->id,
+		.wq_attr = (struct mlx5_devx_wq_attr){
+			.pd = sh->pdn,
+			.uar_page = mlx5_os_get_devx_uar_page_id(sh->tx_uar),
+		},
+	};
 	struct mlx5_devx_modify_sq_attr msq_attr = { 0 };
 	struct mlx5_devx_cq_attr cq_attr = {
 		.uar_page_id = mlx5_os_get_devx_uar_page_id(sh->tx_uar),
 	};
 	struct mlx5_txpp_wq *wq = &sh->txpp.rearm_queue;
-	size_t page_size;
-	uint32_t umem_size, umem_dbrec;
 	int ret;
 
-	page_size = rte_mem_page_size();
-	if (page_size == (size_t)-1) {
-		DRV_LOG(ERR, "Failed to get mem page size");
-		return -ENOMEM;
-	}
 	/* Create completion queue object for Rearm Queue. */
 	ret = mlx5_devx_cq_create(sh->ctx, &wq->cq_obj,
 				  log2above(MLX5_TXPP_REARM_CQ_SIZE), &cq_attr,
@@ -253,63 +252,25 @@
 	}
 	wq->cq_ci = 0;
 	wq->arm_sn = 0;
-	/*
-	 * Allocate memory buffer for Send Queue WQEs.
-	 * There should be no WQE leftovers in the cyclic queue.
-	 */
 	wq->sq_size = MLX5_TXPP_REARM_SQ_SIZE;
 	MLX5_ASSERT(wq->sq_size == (1 << log2above(wq->sq_size)));
-	umem_size =  MLX5_WQE_SIZE * wq->sq_size;
-	umem_dbrec = RTE_ALIGN(umem_size, MLX5_DBR_SIZE);
-	umem_size += MLX5_DBR_SIZE;
-	wq->sq_buf = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO, umem_size,
-				 page_size, sh->numa_node);
-	if (!wq->sq_buf) {
-		DRV_LOG(ERR, "Failed to allocate memory for Rearm Queue.");
-		rte_errno = ENOMEM;
-		goto error;
-	}
-	/* Register allocated buffer in user space with DevX. */
-	wq->sq_umem = mlx5_os_umem_reg(sh->ctx,
-					       (void *)(uintptr_t)wq->sq_buf,
-					       umem_size,
-					       IBV_ACCESS_LOCAL_WRITE);
-	if (!wq->sq_umem) {
-		rte_errno = errno;
-		DRV_LOG(ERR, "Failed to register umem for Rearm Queue.");
-		goto error;
-	}
 	/* Create send queue object for Rearm Queue. */
-	sq_attr.state = MLX5_SQC_STATE_RST;
-	sq_attr.tis_lst_sz = 1;
-	sq_attr.tis_num = sh->tis->id;
 	sq_attr.cqn = wq->cq_obj.cq->id;
-	sq_attr.cd_master = 1;
-	sq_attr.wq_attr.uar_page = mlx5_os_get_devx_uar_page_id(sh->tx_uar);
-	sq_attr.wq_attr.wq_type = MLX5_WQ_TYPE_CYCLIC;
-	sq_attr.wq_attr.pd = sh->pdn;
-	sq_attr.wq_attr.log_wq_stride = rte_log2_u32(MLX5_WQE_SIZE);
-	sq_attr.wq_attr.log_wq_sz = rte_log2_u32(wq->sq_size);
-	sq_attr.wq_attr.dbr_umem_valid = 1;
-	sq_attr.wq_attr.dbr_addr = umem_dbrec;
-	sq_attr.wq_attr.dbr_umem_id = mlx5_os_get_umem_id(wq->sq_umem);
-	sq_attr.wq_attr.wq_umem_valid = 1;
-	sq_attr.wq_attr.wq_umem_id = mlx5_os_get_umem_id(wq->sq_umem);
-	sq_attr.wq_attr.wq_umem_offset = 0;
-	wq->sq = mlx5_devx_cmd_create_sq(sh->ctx, &sq_attr);
-	if (!wq->sq) {
+	/* There should be no WQE leftovers in the cyclic queue. */
+	ret = mlx5_devx_sq_create(sh->ctx, &wq->sq_obj,
+				  log2above(MLX5_TXPP_REARM_SQ_SIZE), &sq_attr,
+				  sh->numa_node);
+	if (ret) {
 		rte_errno = errno;
 		DRV_LOG(ERR, "Failed to create SQ for Rearm Queue.");
 		goto error;
 	}
-	wq->sq_dbrec = RTE_PTR_ADD(wq->sq_buf, umem_dbrec +
-				   MLX5_SND_DBR * sizeof(uint32_t));
 	/* Build the WQEs in the Send Queue before goto Ready state. */
 	mlx5_txpp_fill_wqe_rearm_queue(sh);
 	/* Change queue state to ready. */
 	msq_attr.sq_state = MLX5_SQC_STATE_RST;
 	msq_attr.state = MLX5_SQC_STATE_RDY;
-	ret = mlx5_devx_cmd_modify_sq(wq->sq, &msq_attr);
+	ret = mlx5_devx_cmd_modify_sq(wq->sq_obj.sq, &msq_attr);
 	if (ret) {
 		DRV_LOG(ERR, "Failed to set SQ ready state Rearm Queue.");
 		goto error;
@@ -326,7 +287,7 @@
 mlx5_txpp_fill_wqe_clock_queue(struct mlx5_dev_ctx_shared *sh)
 {
 	struct mlx5_txpp_wq *wq = &sh->txpp.clock_queue;
-	struct mlx5_wqe *wqe = (struct mlx5_wqe *)(uintptr_t)wq->wqes;
+	struct mlx5_wqe *wqe = (struct mlx5_wqe *)(uintptr_t)wq->sq_obj.wqes;
 	struct mlx5_wqe_cseg *cs = &wqe->cseg;
 	uint32_t wqe_size, opcode, i;
 	uint8_t *dst;
@@ -344,7 +305,7 @@
 		opcode = MLX5_OPCODE_NOP;
 	}
 	cs->opcode = rte_cpu_to_be_32(opcode | 0); /* Index is ignored. */
-	cs->sq_ds = rte_cpu_to_be_32((wq->sq->id << 8) |
+	cs->sq_ds = rte_cpu_to_be_32((wq->sq_obj.sq->id << 8) |
 				     (wqe_size / MLX5_WSEG_SIZE));
 	cs->flags = RTE_BE32(MLX5_COMP_ALWAYS << MLX5_COMP_MODE_OFFSET);
 	cs->misc = RTE_BE32(0);
@@ -413,10 +374,11 @@
 	}
 wcopy:
 	/* Duplicate the pattern to the next WQEs. */
-	dst = (uint8_t *)(uintptr_t)wq->sq_buf;
+	dst = (uint8_t *)(uintptr_t)wq->sq_obj.umem_buf;
 	for (i = 1; i < MLX5_TXPP_CLKQ_SIZE; i++) {
 		dst += wqe_size;
-		rte_memcpy(dst, (void *)(uintptr_t)wq->sq_buf, wqe_size);
+		rte_memcpy(dst, (void *)(uintptr_t)wq->sq_obj.umem_buf,
+			   wqe_size);
 	}
 }
 
@@ -432,15 +394,8 @@
 		.uar_page_id = mlx5_os_get_devx_uar_page_id(sh->tx_uar),
 	};
 	struct mlx5_txpp_wq *wq = &sh->txpp.clock_queue;
-	size_t page_size;
-	uint32_t umem_size, umem_dbrec;
 	int ret;
 
-	page_size = rte_mem_page_size();
-	if (page_size == (size_t)-1) {
-		DRV_LOG(ERR, "Failed to get mem page size");
-		return -ENOMEM;
-	}
 	sh->txpp.tsa = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO,
 				   MLX5_TXPP_REARM_SQ_SIZE *
 				   sizeof(struct mlx5_txpp_ts),
@@ -473,26 +428,6 @@
 	}
 	/* There should not be WQE leftovers in the cyclic queue. */
 	MLX5_ASSERT(wq->sq_size == (1 << log2above(wq->sq_size)));
-	umem_size =  MLX5_WQE_SIZE * wq->sq_size;
-	umem_dbrec = RTE_ALIGN(umem_size, MLX5_DBR_SIZE);
-	umem_size += MLX5_DBR_SIZE;
-	wq->sq_buf = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO, umem_size,
-				 page_size, sh->numa_node);
-	if (!wq->sq_buf) {
-		DRV_LOG(ERR, "Failed to allocate memory for Clock Queue.");
-		rte_errno = ENOMEM;
-		goto error;
-	}
-	/* Register allocated buffer in user space with DevX. */
-	wq->sq_umem = mlx5_os_umem_reg(sh->ctx,
-					       (void *)(uintptr_t)wq->sq_buf,
-					       umem_size,
-					       IBV_ACCESS_LOCAL_WRITE);
-	if (!wq->sq_umem) {
-		rte_errno = errno;
-		DRV_LOG(ERR, "Failed to register umem for Clock Queue.");
-		goto error;
-	}
 	/* Create send queue object for Clock Queue. */
 	if (sh->txpp.test) {
 		sq_attr.tis_lst_sz = 1;
@@ -503,37 +438,25 @@
 		sq_attr.non_wire = 1;
 		sq_attr.static_sq_wq = 1;
 	}
-	sq_attr.state = MLX5_SQC_STATE_RST;
 	sq_attr.cqn = wq->cq_obj.cq->id;
 	sq_attr.packet_pacing_rate_limit_index = sh->txpp.pp_id;
 	sq_attr.wq_attr.cd_slave = 1;
 	sq_attr.wq_attr.uar_page = mlx5_os_get_devx_uar_page_id(sh->tx_uar);
-	sq_attr.wq_attr.wq_type = MLX5_WQ_TYPE_CYCLIC;
 	sq_attr.wq_attr.pd = sh->pdn;
-	sq_attr.wq_attr.log_wq_stride = rte_log2_u32(MLX5_WQE_SIZE);
-	sq_attr.wq_attr.log_wq_sz = rte_log2_u32(wq->sq_size);
-	sq_attr.wq_attr.dbr_umem_valid = 1;
-	sq_attr.wq_attr.dbr_addr = umem_dbrec;
-	sq_attr.wq_attr.dbr_umem_id = mlx5_os_get_umem_id(wq->sq_umem);
-	sq_attr.wq_attr.wq_umem_valid = 1;
-	sq_attr.wq_attr.wq_umem_id = mlx5_os_get_umem_id(wq->sq_umem);
-	/* umem_offset must be zero for static_sq_wq queue. */
-	sq_attr.wq_attr.wq_umem_offset = 0;
-	wq->sq = mlx5_devx_cmd_create_sq(sh->ctx, &sq_attr);
-	if (!wq->sq) {
+	ret = mlx5_devx_sq_create(sh->ctx, &wq->sq_obj, log2above(wq->sq_size),
+				  &sq_attr, sh->numa_node);
+	if (ret) {
 		rte_errno = errno;
 		DRV_LOG(ERR, "Failed to create SQ for Clock Queue.");
 		goto error;
 	}
-	wq->sq_dbrec = RTE_PTR_ADD(wq->sq_buf, umem_dbrec +
-				   MLX5_SND_DBR * sizeof(uint32_t));
 	/* Build the WQEs in the Send Queue before goto Ready state. */
 	mlx5_txpp_fill_wqe_clock_queue(sh);
 	/* Change queue state to ready. */
 	msq_attr.sq_state = MLX5_SQC_STATE_RST;
 	msq_attr.state = MLX5_SQC_STATE_RDY;
 	wq->sq_ci = 0;
-	ret = mlx5_devx_cmd_modify_sq(wq->sq, &msq_attr);
+	ret = mlx5_devx_cmd_modify_sq(wq->sq_obj.sq, &msq_attr);
 	if (ret) {
 		DRV_LOG(ERR, "Failed to set SQ ready state Clock Queue.");
 		goto error;
-- 
1.8.3.1
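
To put the conversion in one place: after this patch both the rearm and
the clock queue go through a single create/destroy pair instead of the
removed malloc/umem/doorbell bookkeeping. A minimal calling sketch,
assuming a valid DevX context; ctx, pdn, uar_page_id, log_wqbb_n and
socket stand for caller state and are not identifiers from the patch:

	struct mlx5_devx_sq sq_obj; /* Bundles SQ object, umem, dbrec. */
	struct mlx5_devx_create_sq_attr attr = {
		.state = MLX5_SQC_STATE_RST,
		.wq_attr = (struct mlx5_devx_wq_attr){
			.pd = pdn,
			.uar_page = uar_page_id,
		},
	};

	/* One call allocates the WQE ring, registers the umem, reserves
	 * the doorbell record behind the WQEs and creates the SQ object.
	 */
	if (mlx5_devx_sq_create(ctx, &sq_obj, log_wqbb_n, &attr, socket))
		return -rte_errno;
	/* Ring, doorbell and SQ number are reachable from the bundle. */
	volatile struct mlx5_wqe *ring = sq_obj.wqes;
	uint32_t sqn = sq_obj.sq->id;
	*sq_obj.db_rec = 0;
	/* ... fill WQEs in ring[], move the SQ to ready, use sqn in
	 * the WQE control segments ... */
	mlx5_devx_sq_destroy(&sq_obj); /* Releases all three resources. */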


^ permalink raw reply	[flat|nested] 57+ messages in thread

* [dpdk-dev] [PATCH v3 15/19] net/mlx5: move Tx SQ creation to common
  2021-01-06  8:19       ` [dpdk-dev] [PATCH v3 00/19] common/mlx5: share DevX resources creations Michael Baum
                           ` (13 preceding siblings ...)
  2021-01-06  8:19         ` [dpdk-dev] [PATCH v3 14/19] net/mlx5: move rearm and clock queue " Michael Baum
@ 2021-01-06  8:19         ` Michael Baum
  2021-01-06  8:19         ` [dpdk-dev] [PATCH v3 16/19] net/mlx5: move ASO " Michael Baum
                           ` (4 subsequent siblings)
  19 siblings, 0 replies; 57+ messages in thread
From: Michael Baum @ 2021-01-06  8:19 UTC (permalink / raw)
  To: dev; +Cc: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko

Use the common function for Tx SQ creation.

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
 drivers/net/mlx5/mlx5.h      |   8 +--
 drivers/net/mlx5/mlx5_devx.c | 162 ++++++++++---------------------------------
 2 files changed, 40 insertions(+), 130 deletions(-)

diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 0a0a943..62d7c89 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -841,11 +841,9 @@ struct mlx5_txq_obj {
 		struct {
 			struct rte_eth_dev *dev;
 			struct mlx5_devx_cq cq_obj;
-			struct mlx5_devx_obj *sq_devx;
-			void *sq_umem;
-			void *sq_buf;
-			int64_t sq_dbrec_offset;
-			struct mlx5_devx_dbr_page *sq_dbrec_page;
+			/* DevX CQ object and its resources. */
+			struct mlx5_devx_sq sq_obj;
+			/* DevX SQ object and its resources. */
 		};
 	};
 };
diff --git a/drivers/net/mlx5/mlx5_devx.c b/drivers/net/mlx5/mlx5_devx.c
index 913f169..96c44d4 100644
--- a/drivers/net/mlx5/mlx5_devx.c
+++ b/drivers/net/mlx5/mlx5_devx.c
@@ -115,7 +115,7 @@
 		else
 			msq_attr.sq_state = MLX5_SQC_STATE_RDY;
 		msq_attr.state = MLX5_SQC_STATE_RST;
-		ret = mlx5_devx_cmd_modify_sq(obj->sq_devx, &msq_attr);
+		ret = mlx5_devx_cmd_modify_sq(obj->sq_obj.sq, &msq_attr);
 		if (ret) {
 			DRV_LOG(ERR, "Cannot change the Tx SQ state to RESET"
 				" %s", strerror(errno));
@@ -127,7 +127,7 @@
 		/* Change queue state to ready. */
 		msq_attr.sq_state = MLX5_SQC_STATE_RST;
 		msq_attr.state = MLX5_SQC_STATE_RDY;
-		ret = mlx5_devx_cmd_modify_sq(obj->sq_devx, &msq_attr);
+		ret = mlx5_devx_cmd_modify_sq(obj->sq_obj.sq, &msq_attr);
 		if (ret) {
 			DRV_LOG(ERR, "Cannot change the Tx SQ state to READY"
 				" %s", strerror(errno));
@@ -1052,36 +1052,6 @@
 
 #if defined(HAVE_MLX5DV_DEVX_UAR_OFFSET) || !defined(HAVE_INFINIBAND_VERBS_H)
 /**
- * Release DevX SQ resources.
- *
- * @param txq_obj
- *   DevX Tx queue object.
- */
-static void
-mlx5_txq_release_devx_sq_resources(struct mlx5_txq_obj *txq_obj)
-{
-	if (txq_obj->sq_devx) {
-		claim_zero(mlx5_devx_cmd_destroy(txq_obj->sq_devx));
-		txq_obj->sq_devx = NULL;
-	}
-	if (txq_obj->sq_umem) {
-		claim_zero(mlx5_os_umem_dereg(txq_obj->sq_umem));
-		txq_obj->sq_umem = NULL;
-	}
-	if (txq_obj->sq_buf) {
-		mlx5_free(txq_obj->sq_buf);
-		txq_obj->sq_buf = NULL;
-	}
-	if (txq_obj->sq_dbrec_page) {
-		claim_zero(mlx5_release_dbr(&txq_obj->txq_ctrl->priv->dbrpgs,
-					    mlx5_os_get_umem_id
-						 (txq_obj->sq_dbrec_page->umem),
-					    txq_obj->sq_dbrec_offset));
-		txq_obj->sq_dbrec_page = NULL;
-	}
-}
-
-/**
  * Destroy the Tx queue DevX object.
  *
  * @param txq_obj
@@ -1090,7 +1060,8 @@
 static void
 mlx5_txq_release_devx_resources(struct mlx5_txq_obj *txq_obj)
 {
-	mlx5_txq_release_devx_sq_resources(txq_obj);
+	mlx5_devx_sq_destroy(&txq_obj->sq_obj);
+	memset(&txq_obj->sq_obj, 0, sizeof(txq_obj->sq_obj));
 	mlx5_devx_cq_destroy(&txq_obj->cq_obj);
 	memset(&txq_obj->cq_obj, 0, sizeof(txq_obj->cq_obj));
 }
@@ -1102,100 +1073,39 @@
  *   Pointer to Ethernet device.
  * @param idx
  *   Queue index in DPDK Tx queue array.
+ * @param[in] log_desc_n
+ *   Log of number of descriptors in queue.
  *
  * @return
- *   Number of WQEs in SQ, 0 otherwise and rte_errno is set.
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
  */
-static uint32_t
-mlx5_txq_create_devx_sq_resources(struct rte_eth_dev *dev, uint16_t idx)
+static int
+mlx5_txq_create_devx_sq_resources(struct rte_eth_dev *dev, uint16_t idx,
+				  uint16_t log_desc_n)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
 	struct mlx5_txq_data *txq_data = (*priv->txqs)[idx];
 	struct mlx5_txq_ctrl *txq_ctrl =
 			container_of(txq_data, struct mlx5_txq_ctrl, txq);
 	struct mlx5_txq_obj *txq_obj = txq_ctrl->obj;
-	struct mlx5_devx_create_sq_attr sq_attr = { 0 };
-	size_t page_size;
-	uint32_t wqe_n;
-	int ret;
+	struct mlx5_devx_create_sq_attr sq_attr = {
+		.flush_in_error_en = 1,
+		.allow_multi_pkt_send_wqe = !!priv->config.mps,
+		.min_wqe_inline_mode = priv->config.hca_attr.vport_inline_mode,
+		.allow_swp = !!priv->config.swp,
+		.cqn = txq_obj->cq_obj.cq->id,
+		.tis_lst_sz = 1,
+		.tis_num = priv->sh->tis->id,
+		.wq_attr = (struct mlx5_devx_wq_attr){
+			.pd = priv->sh->pdn,
+			.uar_page =
+				 mlx5_os_get_devx_uar_page_id(priv->sh->tx_uar),
+		},
+	};
 
-	MLX5_ASSERT(txq_data);
-	MLX5_ASSERT(txq_obj);
-	page_size = rte_mem_page_size();
-	if (page_size == (size_t)-1) {
-		DRV_LOG(ERR, "Failed to get mem page size.");
-		rte_errno = ENOMEM;
-		return 0;
-	}
-	wqe_n = RTE_MIN(1UL << txq_data->elts_n,
-			(uint32_t)priv->sh->device_attr.max_qp_wr);
-	txq_obj->sq_buf = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO,
-				      wqe_n * sizeof(struct mlx5_wqe),
-				      page_size, priv->sh->numa_node);
-	if (!txq_obj->sq_buf) {
-		DRV_LOG(ERR,
-			"Port %u Tx queue %u cannot allocate memory (SQ).",
-			dev->data->port_id, txq_data->idx);
-		rte_errno = ENOMEM;
-		goto error;
-	}
-	/* Register allocated buffer in user space with DevX. */
-	txq_obj->sq_umem = mlx5_os_umem_reg
-					(priv->sh->ctx,
-					 (void *)txq_obj->sq_buf,
-					 wqe_n * sizeof(struct mlx5_wqe),
-					 IBV_ACCESS_LOCAL_WRITE);
-	if (!txq_obj->sq_umem) {
-		rte_errno = errno;
-		DRV_LOG(ERR,
-			"Port %u Tx queue %u cannot register memory (SQ).",
-			dev->data->port_id, txq_data->idx);
-		goto error;
-	}
-	/* Allocate doorbell record for send queue. */
-	txq_obj->sq_dbrec_offset = mlx5_get_dbr(priv->sh->ctx,
-						&priv->dbrpgs,
-						&txq_obj->sq_dbrec_page);
-	if (txq_obj->sq_dbrec_offset < 0) {
-		rte_errno = errno;
-		DRV_LOG(ERR, "Failed to allocate SQ door-bell.");
-		goto error;
-	}
-	sq_attr.tis_lst_sz = 1;
-	sq_attr.tis_num = priv->sh->tis->id;
-	sq_attr.state = MLX5_SQC_STATE_RST;
-	sq_attr.cqn = txq_obj->cq_obj.cq->id;
-	sq_attr.flush_in_error_en = 1;
-	sq_attr.allow_multi_pkt_send_wqe = !!priv->config.mps;
-	sq_attr.allow_swp = !!priv->config.swp;
-	sq_attr.min_wqe_inline_mode = priv->config.hca_attr.vport_inline_mode;
-	sq_attr.wq_attr.uar_page =
-				mlx5_os_get_devx_uar_page_id(priv->sh->tx_uar);
-	sq_attr.wq_attr.wq_type = MLX5_WQ_TYPE_CYCLIC;
-	sq_attr.wq_attr.pd = priv->sh->pdn;
-	sq_attr.wq_attr.log_wq_stride = rte_log2_u32(MLX5_WQE_SIZE);
-	sq_attr.wq_attr.log_wq_sz = log2above(wqe_n);
-	sq_attr.wq_attr.dbr_umem_valid = 1;
-	sq_attr.wq_attr.dbr_addr = txq_obj->sq_dbrec_offset;
-	sq_attr.wq_attr.dbr_umem_id =
-			mlx5_os_get_umem_id(txq_obj->sq_dbrec_page->umem);
-	sq_attr.wq_attr.wq_umem_valid = 1;
-	sq_attr.wq_attr.wq_umem_id = mlx5_os_get_umem_id(txq_obj->sq_umem);
-	sq_attr.wq_attr.wq_umem_offset = (uintptr_t)txq_obj->sq_buf % page_size;
 	/* Create Send Queue object with DevX. */
-	txq_obj->sq_devx = mlx5_devx_cmd_create_sq(priv->sh->ctx, &sq_attr);
-	if (!txq_obj->sq_devx) {
-		rte_errno = errno;
-		DRV_LOG(ERR, "Port %u Tx queue %u SQ creation failure.",
-			dev->data->port_id, idx);
-		goto error;
-	}
-	return wqe_n;
-error:
-	ret = rte_errno;
-	mlx5_txq_release_devx_sq_resources(txq_obj);
-	rte_errno = ret;
-	return 0;
+	return mlx5_devx_sq_create(priv->sh->ctx, &txq_obj->sq_obj, log_desc_n,
+				   &sq_attr, priv->sh->numa_node);
 }
 #endif
 
@@ -1267,27 +1177,29 @@
 	txq_data->cq_db = txq_obj->cq_obj.db_rec;
 	*txq_data->cq_db = 0;
 	/* Create Send Queue object with DevX. */
-	wqe_n = mlx5_txq_create_devx_sq_resources(dev, idx);
-	if (!wqe_n) {
+	wqe_n = RTE_MIN(1UL << txq_data->elts_n,
+			(uint32_t)priv->sh->device_attr.max_qp_wr);
+	log_desc_n = log2above(wqe_n);
+	ret = mlx5_txq_create_devx_sq_resources(dev, idx, log_desc_n);
+	if (ret) {
+		DRV_LOG(ERR, "Port %u Tx queue %u SQ creation failure.",
+			dev->data->port_id, idx);
 		rte_errno = errno;
 		goto error;
 	}
 	/* Create the Work Queue. */
-	txq_data->wqe_n = log2above(wqe_n);
+	txq_data->wqe_n = log_desc_n;
 	txq_data->wqe_s = 1 << txq_data->wqe_n;
 	txq_data->wqe_m = txq_data->wqe_s - 1;
-	txq_data->wqes = (struct mlx5_wqe *)txq_obj->sq_buf;
+	txq_data->wqes = (struct mlx5_wqe *)(uintptr_t)txq_obj->sq_obj.wqes;
 	txq_data->wqes_end = txq_data->wqes + txq_data->wqe_s;
 	txq_data->wqe_ci = 0;
 	txq_data->wqe_pi = 0;
 	txq_data->wqe_comp = 0;
 	txq_data->wqe_thres = txq_data->wqe_s / MLX5_TX_COMP_THRESH_INLINE_DIV;
-	txq_data->qp_db = (volatile uint32_t *)
-					(txq_obj->sq_dbrec_page->dbrs +
-					 txq_obj->sq_dbrec_offset +
-					 MLX5_SND_DBR * sizeof(uint32_t));
+	txq_data->qp_db = txq_obj->sq_obj.db_rec;
 	*txq_data->qp_db = 0;
-	txq_data->qp_num_8s = txq_obj->sq_devx->id << 8;
+	txq_data->qp_num_8s = txq_obj->sq_obj.sq->id << 8;
 	/* Change Send Queue state to Ready-to-Send. */
 	ret = mlx5_devx_modify_sq(txq_obj, MLX5_TXQ_MOD_RST2RDY, 0);
 	if (ret) {
-- 
1.8.3.1
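
The ring sizing moved into the caller above is worth a concrete pass:
the configured ring is clamped by the device limit and the helper takes
its log2. The numbers below are illustrative only, not from the patch:

	uint32_t wqe_n, log_desc_n;

	/* elts_n = 11 would request 2048 WQEs, but the device caps
	 * work requests per QP at max_qp_wr = 1024 in this example. */
	wqe_n = RTE_MIN(1UL << 11, (uint32_t)1024);	/* -> 1024 */
	log_desc_n = log2above(wqe_n);			/* -> 10 */
	/* log_desc_n is what mlx5_txq_create_devx_sq_resources() now
	 * forwards to mlx5_devx_sq_create(). */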


^ permalink raw reply	[flat|nested] 57+ messages in thread

* [dpdk-dev] [PATCH v3 16/19] net/mlx5: move ASO SQ creation to common
  2021-01-06  8:19       ` [dpdk-dev] [PATCH v3 00/19] common/mlx5: share DevX resources creations Michael Baum
                           ` (14 preceding siblings ...)
  2021-01-06  8:19         ` [dpdk-dev] [PATCH v3 15/19] net/mlx5: move Tx " Michael Baum
@ 2021-01-06  8:19         ` Michael Baum
  2021-01-06  8:19         ` [dpdk-dev] [PATCH v3 17/19] common/mlx5: share DevX RQ creation Michael Baum
                           ` (3 subsequent siblings)
  19 siblings, 0 replies; 57+ messages in thread
From: Michael Baum @ 2021-01-06  8:19 UTC (permalink / raw)
  To: dev; +Cc: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko

Use the common function for ASO SQ creation.

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
 drivers/common/mlx5/mlx5_common_devx.h |  1 +
 drivers/net/mlx5/mlx5.h                |  8 +--
 drivers/net/mlx5/mlx5_flow_age.c       | 94 ++++++++++------------------------
 3 files changed, 30 insertions(+), 73 deletions(-)

diff --git a/drivers/common/mlx5/mlx5_common_devx.h b/drivers/common/mlx5/mlx5_common_devx.h
index 6b078b2..992ad8f 100644
--- a/drivers/common/mlx5/mlx5_common_devx.h
+++ b/drivers/common/mlx5/mlx5_common_devx.h
@@ -28,6 +28,7 @@ struct mlx5_devx_sq {
 	union {
 		volatile void *umem_buf;
 		volatile struct mlx5_wqe *wqes; /* The SQ ring buffer. */
+		volatile struct mlx5_aso_wqe *aso_wqes;
 	};
 	volatile uint32_t *db_rec; /* The SQ doorbell record. */
 };
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 62d7c89..a3fd8d5 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -487,13 +487,7 @@ struct mlx5_aso_sq_elem {
 struct mlx5_aso_sq {
 	uint16_t log_desc_n;
 	struct mlx5_aso_cq cq;
-	struct mlx5_devx_obj *sq;
-	struct mlx5dv_devx_umem *wqe_umem; /* SQ buffer umem. */
-	union {
-		volatile void *umem_buf;
-		volatile struct mlx5_aso_wqe *wqes;
-	};
-	volatile uint32_t *db_rec;
+	struct mlx5_devx_sq sq_obj;
 	volatile uint64_t *uar_addr;
 	struct mlx5_aso_devx_mr mr;
 	uint16_t pi;
diff --git a/drivers/net/mlx5/mlx5_flow_age.c b/drivers/net/mlx5/mlx5_flow_age.c
index a75adc8..3005afd 100644
--- a/drivers/net/mlx5/mlx5_flow_age.c
+++ b/drivers/net/mlx5/mlx5_flow_age.c
@@ -142,18 +142,7 @@
 static void
 mlx5_aso_destroy_sq(struct mlx5_aso_sq *sq)
 {
-	if (sq->wqe_umem) {
-		mlx5_glue->devx_umem_dereg(sq->wqe_umem);
-		sq->wqe_umem = NULL;
-	}
-	if (sq->umem_buf) {
-		mlx5_free((void *)(uintptr_t)sq->umem_buf);
-		sq->umem_buf = NULL;
-	}
-	if (sq->sq) {
-		mlx5_devx_cmd_destroy(sq->sq);
-		sq->sq = NULL;
-	}
+	mlx5_devx_sq_destroy(&sq->sq_obj);
 	mlx5_aso_cq_destroy(&sq->cq);
 	mlx5_aso_devx_dereg_mr(&sq->mr);
 	memset(sq, 0, sizeof(*sq));
@@ -174,7 +163,7 @@
 	uint64_t addr;
 
 	/* All the next fields state should stay constant. */
-	for (i = 0, wqe = &sq->wqes[0]; i < size; ++i, ++wqe) {
+	for (i = 0, wqe = &sq->sq_obj.aso_wqes[0]; i < size; ++i, ++wqe) {
 		wqe->general_cseg.sq_ds = rte_cpu_to_be_32((sq->sqn << 8) |
 							  (sizeof(*wqe) >> 4));
 		wqe->aso_cseg.lkey = rte_cpu_to_be_32(sq->mr.mkey->id);
@@ -215,12 +204,18 @@
 mlx5_aso_sq_create(void *ctx, struct mlx5_aso_sq *sq, int socket,
 		   void *uar, uint32_t pdn,  uint16_t log_desc_n)
 {
-	struct mlx5_devx_create_sq_attr attr = { 0 };
-	struct mlx5_devx_modify_sq_attr modify_attr = { 0 };
-	size_t pgsize = rte_mem_page_size();
-	struct mlx5_devx_wq_attr *wq_attr = &attr.wq_attr;
+	struct mlx5_devx_create_sq_attr attr = {
+		.user_index = 0xFFFF,
+		.wq_attr = (struct mlx5_devx_wq_attr){
+			.pd = pdn,
+			.uar_page = mlx5_os_get_devx_uar_page_id(uar),
+		},
+	};
+	struct mlx5_devx_modify_sq_attr modify_attr = {
+		.state = MLX5_SQC_STATE_RDY,
+	};
 	uint32_t sq_desc_n = 1 << log_desc_n;
-	uint32_t wq_size = sizeof(struct mlx5_aso_wqe) * sq_desc_n;
+	uint16_t log_wqbb_n;
 	int ret;
 
 	if (mlx5_aso_devx_reg_mr(ctx, (MLX5_ASO_AGE_ACTIONS_PER_POOL / 8) *
@@ -230,58 +225,25 @@
 			       mlx5_os_get_devx_uar_page_id(uar)))
 		goto error;
 	sq->log_desc_n = log_desc_n;
-	sq->umem_buf = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO, wq_size +
-				   sizeof(*sq->db_rec) * 2, 4096, socket);
-	if (!sq->umem_buf) {
-		DRV_LOG(ERR, "Can't allocate wqe buffer.");
-		rte_errno = ENOMEM;
-		goto error;
-	}
-	sq->wqe_umem = mlx5_os_umem_reg(ctx,
-						(void *)(uintptr_t)sq->umem_buf,
-						wq_size +
-						sizeof(*sq->db_rec) * 2,
-						IBV_ACCESS_LOCAL_WRITE);
-	if (!sq->wqe_umem) {
-		DRV_LOG(ERR, "Failed to register umem for SQ.");
-		rte_errno = ENOMEM;
-		goto error;
-	}
-	attr.state = MLX5_SQC_STATE_RST;
-	attr.tis_lst_sz = 0;
-	attr.tis_num = 0;
-	attr.user_index = 0xFFFF;
 	attr.cqn = sq->cq.cq_obj.cq->id;
-	wq_attr->uar_page = mlx5_os_get_devx_uar_page_id(uar);
-	wq_attr->pd = pdn;
-	wq_attr->wq_type = MLX5_WQ_TYPE_CYCLIC;
-	wq_attr->log_wq_pg_sz = rte_log2_u32(pgsize);
-	wq_attr->wq_umem_id = mlx5_os_get_umem_id(sq->wqe_umem);
-	wq_attr->wq_umem_offset = 0;
-	wq_attr->wq_umem_valid = 1;
-	wq_attr->log_wq_stride = 6;
-	wq_attr->log_wq_sz = rte_log2_u32(wq_size) - 6;
-	wq_attr->dbr_umem_id = wq_attr->wq_umem_id;
-	wq_attr->dbr_addr = wq_size;
-	wq_attr->dbr_umem_valid = 1;
-	sq->sq = mlx5_devx_cmd_create_sq(ctx, &attr);
-	if (!sq->sq) {
-		DRV_LOG(ERR, "Can't create sq object.");
-		rte_errno  = ENOMEM;
+	/* For mlx5_aso_wqe, which is twice the size of mlx5_wqe. */
+	log_wqbb_n = log_desc_n + 1;
+	ret = mlx5_devx_sq_create(ctx, &sq->sq_obj, log_wqbb_n, &attr, socket);
+	if (ret) {
+		DRV_LOG(ERR, "Can't create SQ object.");
+		rte_errno = ENOMEM;
 		goto error;
 	}
-	modify_attr.state = MLX5_SQC_STATE_RDY;
-	ret = mlx5_devx_cmd_modify_sq(sq->sq, &modify_attr);
+	ret = mlx5_devx_cmd_modify_sq(sq->sq_obj.sq, &modify_attr);
 	if (ret) {
-		DRV_LOG(ERR, "Can't change sq state to ready.");
-		rte_errno  = ENOMEM;
+		DRV_LOG(ERR, "Can't change SQ state to ready.");
+		rte_errno = ENOMEM;
 		goto error;
 	}
 	sq->pi = 0;
 	sq->head = 0;
 	sq->tail = 0;
-	sq->sqn = sq->sq->id;
-	sq->db_rec = RTE_PTR_ADD(sq->umem_buf, (uintptr_t)(wq_attr->dbr_addr));
+	sq->sqn = sq->sq_obj.sq->id;
 	sq->uar_addr = mlx5_os_get_devx_uar_reg_addr(uar);
 	mlx5_aso_init_sq(sq);
 	return 0;
@@ -345,8 +307,8 @@
 		return 0;
 	sq->elts[start_head & mask].burst_size = max;
 	do {
-		wqe = &sq->wqes[sq->head & mask];
-		rte_prefetch0(&sq->wqes[(sq->head + 1) & mask]);
+		wqe = &sq->sq_obj.aso_wqes[sq->head & mask];
+		rte_prefetch0(&sq->sq_obj.aso_wqes[(sq->head + 1) & mask]);
 		/* Fill next WQE. */
 		rte_spinlock_lock(&mng->resize_sl);
 		pool = mng->pools[sq->next];
@@ -371,7 +333,7 @@
 	wqe->general_cseg.flags = RTE_BE32(MLX5_COMP_ALWAYS <<
 							 MLX5_COMP_MODE_OFFSET);
 	rte_io_wmb();
-	sq->db_rec[MLX5_SND_DBR] = rte_cpu_to_be_32(sq->pi);
+	sq->sq_obj.db_rec[MLX5_SND_DBR] = rte_cpu_to_be_32(sq->pi);
 	rte_wmb();
 	*sq->uar_addr = *(volatile uint64_t *)wqe; /* Assume 64 bit ARCH.*/
 	rte_wmb();
@@ -418,7 +380,7 @@
 	cq->errors++;
 	idx = rte_be_to_cpu_16(cqe->wqe_counter) & (1u << sq->log_desc_n);
 	mlx5_aso_dump_err_objs((volatile uint32_t *)cqe,
-				 (volatile uint32_t *)&sq->wqes[idx]);
+			       (volatile uint32_t *)&sq->sq_obj.aso_wqes[idx]);
 }
 
 /**
@@ -613,7 +575,7 @@
 {
 	int retries = 1024;
 
-	if (!sh->aso_age_mng->aso_sq.sq)
+	if (!sh->aso_age_mng->aso_sq.sq_obj.sq)
 		return -EINVAL;
 	rte_errno = 0;
 	while (--retries) {
-- 
1.8.3.1
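
The one non-mechanical line in this conversion is the WQBB count. The
common helper sizes the ring in basic WQE units, while the ASO queue is
described in mlx5_aso_wqe units, each twice that size, hence the +1. A
worked example with an illustrative descriptor count:

	uint16_t log_desc_n = 6;		/* 64 ASO WQEs. */
	/* Each mlx5_aso_wqe occupies two basic WQE slots, so the umem
	 * must hold 128 of them: log_wqbb_n = log_desc_n + 1 = 7. */
	uint16_t log_wqbb_n = log_desc_n + 1;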


^ permalink raw reply	[flat|nested] 57+ messages in thread

* [dpdk-dev] [PATCH v3 17/19] common/mlx5: share DevX RQ creation
  2021-01-06  8:19       ` [dpdk-dev] [PATCH v3 00/19] common/mlx5: share DevX resources creations Michael Baum
                           ` (15 preceding siblings ...)
  2021-01-06  8:19         ` [dpdk-dev] [PATCH v3 16/19] net/mlx5: move ASO " Michael Baum
@ 2021-01-06  8:19         ` Michael Baum
  2021-01-06  8:19         ` [dpdk-dev] [PATCH v3 18/19] net/mlx5: move Rx RQ creation to common Michael Baum
                           ` (2 subsequent siblings)
  19 siblings, 0 replies; 57+ messages in thread
From: Michael Baum @ 2021-01-06  8:19 UTC (permalink / raw)
  To: dev; +Cc: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko

The RQ object in DevX is currently used only in the net driver, but it
is shared so that other drivers can use it in the future.

Add a structure that contains all the resources, and provide creation
and release functions for it.

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
 drivers/common/mlx5/mlx5_common_devx.c          | 113 ++++++++++++++++++++++++
 drivers/common/mlx5/mlx5_common_devx.h          |  15 ++++
 drivers/common/mlx5/rte_common_mlx5_exports.def |   2 +
 drivers/common/mlx5/version.map                 |   2 +
 4 files changed, 132 insertions(+)

diff --git a/drivers/common/mlx5/mlx5_common_devx.c b/drivers/common/mlx5/mlx5_common_devx.c
index 58b75c3..d19be12 100644
--- a/drivers/common/mlx5/mlx5_common_devx.c
+++ b/drivers/common/mlx5/mlx5_common_devx.c
@@ -271,4 +271,117 @@
 	return -rte_errno;
 }
 
+/**
+ * Destroy DevX Receive Queue.
+ *
+ * @param[in] rq
+ *   DevX RQ to destroy.
+ */
+void
+mlx5_devx_rq_destroy(struct mlx5_devx_rq *rq)
+{
+	if (rq->rq)
+		claim_zero(mlx5_devx_cmd_destroy(rq->rq));
+	if (rq->umem_obj)
+		claim_zero(mlx5_os_umem_dereg(rq->umem_obj));
+	if (rq->umem_buf)
+		mlx5_free((void *)(uintptr_t)rq->umem_buf);
+}
+
+/**
+ * Create Receive Queue using DevX API.
+ *
+ * Get a pointer to a partially initialized attributes structure, and update
+ * the following fields:
+ *   wq_umem_valid
+ *   wq_umem_id
+ *   wq_umem_offset
+ *   dbr_umem_valid
+ *   dbr_umem_id
+ *   dbr_addr
+ *   log_wq_pg_sz
+ * All other fields are updated by the caller.
+ *
+ * @param[in] ctx
+ *   Context returned from mlx5 open_device() glue function.
+ * @param[in/out] rq_obj
+ *   Pointer to RQ to create.
+ * @param[in] wqe_size
+ *   Size of WQE structure.
+ * @param[in] log_wqbb_n
+ *   Log of number of WQBBs in queue.
+ * @param[in] attr
+ *   Pointer to RQ attributes structure.
+ * @param[in] socket
+ *   Socket to use for allocation.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+int
+mlx5_devx_rq_create(void *ctx, struct mlx5_devx_rq *rq_obj, uint32_t wqe_size,
+		    uint16_t log_wqbb_n,
+		    struct mlx5_devx_create_rq_attr *attr, int socket)
+{
+	struct mlx5_devx_obj *rq = NULL;
+	struct mlx5dv_devx_umem *umem_obj = NULL;
+	void *umem_buf = NULL;
+	size_t alignment = MLX5_WQE_BUF_ALIGNMENT;
+	uint32_t umem_size, umem_dbrec;
+	uint16_t rq_size = 1 << log_wqbb_n;
+	int ret;
+
+	if (alignment == (size_t)-1) {
+		DRV_LOG(ERR, "Failed to get WQE buf alignment.");
+		rte_errno = ENOMEM;
+		return -rte_errno;
+	}
+	/* Allocate memory buffer for WQEs and doorbell record. */
+	umem_size = wqe_size * rq_size;
+	umem_dbrec = RTE_ALIGN(umem_size, MLX5_DBR_SIZE);
+	umem_size += MLX5_DBR_SIZE;
+	umem_buf = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO, umem_size,
+			       alignment, socket);
+	if (!umem_buf) {
+		DRV_LOG(ERR, "Failed to allocate memory for RQ.");
+		rte_errno = ENOMEM;
+		return -rte_errno;
+	}
+	/* Register allocated buffer in user space with DevX. */
+	umem_obj = mlx5_os_umem_reg(ctx, (void *)(uintptr_t)umem_buf,
+				    umem_size, 0);
+	if (!umem_obj) {
+		DRV_LOG(ERR, "Failed to register umem for RQ.");
+		rte_errno = errno;
+		goto error;
+	}
+	/* Fill attributes for RQ object creation. */
+	attr->wq_attr.wq_umem_valid = 1;
+	attr->wq_attr.wq_umem_id = mlx5_os_get_umem_id(umem_obj);
+	attr->wq_attr.wq_umem_offset = 0;
+	attr->wq_attr.dbr_umem_valid = 1;
+	attr->wq_attr.dbr_umem_id = attr->wq_attr.wq_umem_id;
+	attr->wq_attr.dbr_addr = umem_dbrec;
+	attr->wq_attr.log_wq_pg_sz = MLX5_LOG_PAGE_SIZE;
+	/* Create receive queue object with DevX. */
+	rq = mlx5_devx_cmd_create_rq(ctx, attr, socket);
+	if (!rq) {
+		DRV_LOG(ERR, "Can't create DevX RQ object.");
+		rte_errno = ENOMEM;
+		goto error;
+	}
+	rq_obj->umem_buf = umem_buf;
+	rq_obj->umem_obj = umem_obj;
+	rq_obj->rq = rq;
+	rq_obj->db_rec = RTE_PTR_ADD(rq_obj->umem_buf, umem_dbrec);
+	return 0;
+error:
+	ret = rte_errno;
+	if (umem_obj)
+		claim_zero(mlx5_os_umem_dereg(umem_obj));
+	if (umem_buf)
+		mlx5_free((void *)(uintptr_t)umem_buf);
+	rte_errno = ret;
+	return -rte_errno;
+}
 
diff --git a/drivers/common/mlx5/mlx5_common_devx.h b/drivers/common/mlx5/mlx5_common_devx.h
index 992ad8f..aad0184 100644
--- a/drivers/common/mlx5/mlx5_common_devx.h
+++ b/drivers/common/mlx5/mlx5_common_devx.h
@@ -33,6 +33,13 @@ struct mlx5_devx_sq {
 	volatile uint32_t *db_rec; /* The SQ doorbell record. */
 };
 
+/* DevX Receive Queue structure. */
+struct mlx5_devx_rq {
+	struct mlx5_devx_obj *rq; /* The RQ DevX object. */
+	void *umem_obj; /* The RQ umem object. */
+	volatile void *umem_buf;
+	volatile uint32_t *db_rec; /* The RQ doorbell record. */
+};
 
 /* mlx5_common_devx.c */
 
@@ -52,4 +59,12 @@ int mlx5_devx_sq_create(void *ctx, struct mlx5_devx_sq *sq_obj,
 			uint16_t log_wqbb_n,
 			struct mlx5_devx_create_sq_attr *attr, int socket);
 
+__rte_internal
+void mlx5_devx_rq_destroy(struct mlx5_devx_rq *rq);
+
+__rte_internal
+int mlx5_devx_rq_create(void *ctx, struct mlx5_devx_rq *rq_obj,
+			uint32_t wqe_size, uint16_t log_wqbb_n,
+			struct mlx5_devx_create_rq_attr *attr, int socket);
+
 #endif /* RTE_PMD_MLX5_COMMON_DEVX_H_ */
diff --git a/drivers/common/mlx5/rte_common_mlx5_exports.def b/drivers/common/mlx5/rte_common_mlx5_exports.def
index cfee96e..6e1ff50 100644
--- a/drivers/common/mlx5/rte_common_mlx5_exports.def
+++ b/drivers/common/mlx5/rte_common_mlx5_exports.def
@@ -37,6 +37,8 @@ EXPORTS
 
     mlx5_devx_cq_create
     mlx5_devx_cq_destroy
+    mlx5_devx_rq_create
+    mlx5_devx_rq_destroy
     mlx5_devx_sq_create
     mlx5_devx_sq_destroy
 
diff --git a/drivers/common/mlx5/version.map b/drivers/common/mlx5/version.map
index 3414588..dac9411 100644
--- a/drivers/common/mlx5/version.map
+++ b/drivers/common/mlx5/version.map
@@ -45,6 +45,8 @@ INTERNAL {
 
     mlx5_devx_cq_create;
     mlx5_devx_cq_destroy;
+    mlx5_devx_rq_create;
+    mlx5_devx_rq_destroy;
     mlx5_devx_sq_create;
     mlx5_devx_sq_destroy;
 
-- 
1.8.3.1
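
A minimal usage sketch of the new pair. The helper only fills the umem
and doorbell fields of the attributes, so everything else (state, cqn,
wq_type and the rest) is assumed to be set by the caller beforehand;
ctx, attr and socket stand for caller state:

	struct mlx5_devx_rq rq_obj;
	uint32_t wqe_size = sizeof(struct mlx5_wqe_data_seg);
	uint16_t log_wqbb_n = 10;	/* 1024 WQEs, illustrative. */

	if (mlx5_devx_rq_create(ctx, &rq_obj, wqe_size, log_wqbb_n,
				&attr, socket))
		return -rte_errno;
	/* WQE ring (rq_obj.umem_buf) and doorbell (rq_obj.db_rec) live
	 * in the same umem; the dbrec sits right behind the WQEs. */
	*rq_obj.db_rec = 0;
	/* ... post receive WQEs, modify the RQ to ready ... */
	mlx5_devx_rq_destroy(&rq_obj);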


^ permalink raw reply	[flat|nested] 57+ messages in thread

* [dpdk-dev] [PATCH v3 18/19] net/mlx5: move Rx RQ creation to common
  2021-01-06  8:19       ` [dpdk-dev] [PATCH v3 00/19] common/mlx5: share DevX resources creations Michael Baum
                           ` (16 preceding siblings ...)
  2021-01-06  8:19         ` [dpdk-dev] [PATCH v3 17/19] common/mlx5: share DevX RQ creation Michael Baum
@ 2021-01-06  8:19         ` Michael Baum
  2021-01-06  8:19         ` [dpdk-dev] [PATCH v3 19/19] common/mlx5: remove doorbell allocation API Michael Baum
  2021-01-12 21:39         ` [dpdk-dev] [PATCH v3 00/19] common/mlx5: share DevX resources creations Thomas Monjalon
  19 siblings, 0 replies; 57+ messages in thread
From: Michael Baum @ 2021-01-06  8:19 UTC (permalink / raw)
  To: dev; +Cc: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko

Use the common function for Rx RQ creation.

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
 drivers/net/mlx5/mlx5.h      |   4 +-
 drivers/net/mlx5/mlx5_devx.c | 177 +++++++++----------------------------------
 drivers/net/mlx5/mlx5_rxtx.h |   4 -
 3 files changed, 37 insertions(+), 148 deletions(-)

diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index a3fd8d5..3836a96 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -776,8 +776,9 @@ struct mlx5_rxq_obj {
 			void *ibv_cq; /* Completion Queue. */
 			void *ibv_channel;
 		};
+		struct mlx5_devx_obj *rq; /* DevX RQ object for hairpin. */
 		struct {
-			struct mlx5_devx_obj *rq; /* DevX Rx Queue object. */
+			struct mlx5_devx_rq rq_obj; /* DevX RQ object. */
 			struct mlx5_devx_cq cq_obj; /* DevX CQ object. */
 			void *devx_channel;
 		};
@@ -954,7 +955,6 @@ struct mlx5_priv {
 	/* Context for Verbs allocator. */
 	int nl_socket_rdma; /* Netlink socket (NETLINK_RDMA). */
 	int nl_socket_route; /* Netlink socket (NETLINK_ROUTE). */
-	struct mlx5_dbr_page_list dbrpgs; /* Door-bell pages. */
 	struct mlx5_nl_vlan_vmwa_context *vmwa_context; /* VLAN WA context. */
 	struct mlx5_hlist *mreg_cp_tbl;
 	/* Hash table of Rx metadata register copy table. */
diff --git a/drivers/net/mlx5/mlx5_devx.c b/drivers/net/mlx5/mlx5_devx.c
index 96c44d4..935cbd0 100644
--- a/drivers/net/mlx5/mlx5_devx.c
+++ b/drivers/net/mlx5/mlx5_devx.c
@@ -45,7 +45,7 @@
 	rq_attr.state = MLX5_RQC_STATE_RDY;
 	rq_attr.vsd = (on ? 0 : 1);
 	rq_attr.modify_bitmask = MLX5_MODIFY_RQ_IN_MODIFY_BITMASK_VSD;
-	return mlx5_devx_cmd_modify_rq(rxq_obj->rq, &rq_attr);
+	return mlx5_devx_cmd_modify_rq(rxq_obj->rq_obj.rq, &rq_attr);
 }
 
 /**
@@ -85,7 +85,7 @@
 	default:
 		break;
 	}
-	return mlx5_devx_cmd_modify_rq(rxq_obj->rq, &rq_attr);
+	return mlx5_devx_cmd_modify_rq(rxq_obj->rq_obj.rq, &rq_attr);
 }
 
 /**
@@ -145,44 +145,18 @@
 }
 
 /**
- * Release the resources allocated for an RQ DevX object.
- *
- * @param rxq_ctrl
- *   DevX Rx queue object.
- */
-static void
-mlx5_rxq_release_devx_rq_resources(struct mlx5_rxq_ctrl *rxq_ctrl)
-{
-	struct mlx5_devx_dbr_page *dbr_page = rxq_ctrl->rq_dbrec_page;
-
-	if (rxq_ctrl->wq_umem) {
-		mlx5_os_umem_dereg(rxq_ctrl->wq_umem);
-		rxq_ctrl->wq_umem = NULL;
-	}
-	if (rxq_ctrl->rxq.wqes) {
-		mlx5_free((void *)(uintptr_t)rxq_ctrl->rxq.wqes);
-		rxq_ctrl->rxq.wqes = NULL;
-	}
-	if (dbr_page) {
-		claim_zero(mlx5_release_dbr(&rxq_ctrl->priv->dbrpgs,
-					    mlx5_os_get_umem_id(dbr_page->umem),
-					    rxq_ctrl->rq_dbr_offset));
-		rxq_ctrl->rq_dbrec_page = NULL;
-	}
-}
-
-/**
  * Destroy the Rx queue DevX object.
  *
  * @param rxq_obj
  *   Rxq object to destroy.
  */
 static void
-mlx5_rxq_release_devx_resources(struct mlx5_rxq_ctrl *rxq_ctrl)
+mlx5_rxq_release_devx_resources(struct mlx5_rxq_obj *rxq_obj)
 {
-	mlx5_rxq_release_devx_rq_resources(rxq_ctrl);
-	mlx5_devx_cq_destroy(&rxq_ctrl->obj->cq_obj);
-	memset(&rxq_ctrl->obj->cq_obj, 0, sizeof(rxq_ctrl->obj->cq_obj));
+	mlx5_devx_rq_destroy(&rxq_obj->rq_obj);
+	memset(&rxq_obj->rq_obj, 0, sizeof(rxq_obj->rq_obj));
+	mlx5_devx_cq_destroy(&rxq_obj->cq_obj);
+	memset(&rxq_obj->cq_obj, 0, sizeof(rxq_obj->cq_obj));
 }
 
 /**
@@ -195,17 +169,17 @@
 mlx5_rxq_devx_obj_release(struct mlx5_rxq_obj *rxq_obj)
 {
 	MLX5_ASSERT(rxq_obj);
-	MLX5_ASSERT(rxq_obj->rq);
 	if (rxq_obj->rxq_ctrl->type == MLX5_RXQ_TYPE_HAIRPIN) {
+		MLX5_ASSERT(rxq_obj->rq);
 		mlx5_devx_modify_rq(rxq_obj, MLX5_RXQ_MOD_RDY2RST);
 		claim_zero(mlx5_devx_cmd_destroy(rxq_obj->rq));
 	} else {
-		MLX5_ASSERT(rxq_obj->cq_obj);
-		claim_zero(mlx5_devx_cmd_destroy(rxq_obj->rq));
+		MLX5_ASSERT(rxq_obj->cq_obj.cq);
+		MLX5_ASSERT(rxq_obj->rq_obj.rq);
+		mlx5_rxq_release_devx_resources(rxq_obj);
 		if (rxq_obj->devx_channel)
 			mlx5_os_devx_destroy_event_channel
 							(rxq_obj->devx_channel);
-		mlx5_rxq_release_devx_resources(rxq_obj->rxq_ctrl);
 	}
 }
 
@@ -247,52 +221,6 @@
 }
 
 /**
- * Fill common fields of create RQ attributes structure.
- *
- * @param rxq_data
- *   Pointer to Rx queue data.
- * @param cqn
- *   CQ number to use with this RQ.
- * @param rq_attr
- *   RQ attributes structure to fill..
- */
-static void
-mlx5_devx_create_rq_attr_fill(struct mlx5_rxq_data *rxq_data, uint32_t cqn,
-			      struct mlx5_devx_create_rq_attr *rq_attr)
-{
-	rq_attr->state = MLX5_RQC_STATE_RST;
-	rq_attr->vsd = (rxq_data->vlan_strip) ? 0 : 1;
-	rq_attr->cqn = cqn;
-	rq_attr->scatter_fcs = (rxq_data->crc_present) ? 1 : 0;
-}
-
-/**
- * Fill common fields of DevX WQ attributes structure.
- *
- * @param priv
- *   Pointer to device private data.
- * @param rxq_ctrl
- *   Pointer to Rx queue control structure.
- * @param wq_attr
- *   WQ attributes structure to fill..
- */
-static void
-mlx5_devx_wq_attr_fill(struct mlx5_priv *priv, struct mlx5_rxq_ctrl *rxq_ctrl,
-		       struct mlx5_devx_wq_attr *wq_attr)
-{
-	wq_attr->end_padding_mode = priv->config.hw_padding ?
-					MLX5_WQ_END_PAD_MODE_ALIGN :
-					MLX5_WQ_END_PAD_MODE_NONE;
-	wq_attr->pd = priv->sh->pdn;
-	wq_attr->dbr_addr = rxq_ctrl->rq_dbr_offset;
-	wq_attr->dbr_umem_id =
-			mlx5_os_get_umem_id(rxq_ctrl->rq_dbrec_page->umem);
-	wq_attr->dbr_umem_valid = 1;
-	wq_attr->wq_umem_id = mlx5_os_get_umem_id(rxq_ctrl->wq_umem);
-	wq_attr->wq_umem_valid = 1;
-}
-
-/**
  * Create a RQ object using DevX.
  *
  * @param dev
@@ -301,9 +229,9 @@
  *   Queue index in DPDK Rx queue array.
  *
  * @return
- *   The DevX RQ object initialized, NULL otherwise and rte_errno is set.
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
  */
-static struct mlx5_devx_obj *
+static int
 mlx5_rxq_create_devx_rq_resources(struct rte_eth_dev *dev, uint16_t idx)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
@@ -311,20 +239,15 @@
 	struct mlx5_rxq_ctrl *rxq_ctrl =
 		container_of(rxq_data, struct mlx5_rxq_ctrl, rxq);
 	struct mlx5_devx_create_rq_attr rq_attr = { 0 };
-	uint32_t wqe_n = 1 << (rxq_data->elts_n - rxq_data->sges_n);
-	uint32_t cqn = rxq_ctrl->obj->cq_obj.cq->id;
-	struct mlx5_devx_dbr_page *dbr_page;
-	int64_t dbr_offset;
-	uint32_t wq_size = 0;
-	uint32_t wqe_size = 0;
-	uint32_t log_wqe_size = 0;
-	void *buf = NULL;
-	struct mlx5_devx_obj *rq;
+	uint16_t log_desc_n = rxq_data->elts_n - rxq_data->sges_n;
+	uint32_t wqe_size, log_wqe_size;
 
 	/* Fill RQ attributes. */
 	rq_attr.mem_rq_type = MLX5_RQC_MEM_RQ_TYPE_MEMORY_RQ_INLINE;
 	rq_attr.flush_in_error_en = 1;
-	mlx5_devx_create_rq_attr_fill(rxq_data, cqn, &rq_attr);
+	rq_attr.vsd = (rxq_data->vlan_strip) ? 0 : 1;
+	rq_attr.cqn = rxq_ctrl->obj->cq_obj.cq->id;
+	rq_attr.scatter_fcs = (rxq_data->crc_present) ? 1 : 0;
 	/* Fill WQ attributes for this RQ. */
 	if (mlx5_rxq_mprq_enabled(rxq_data)) {
 		rq_attr.wq_attr.wq_type = MLX5_WQ_TYPE_CYCLIC_STRIDING_RQ;
@@ -345,45 +268,17 @@
 		wqe_size = sizeof(struct mlx5_wqe_data_seg);
 	}
 	log_wqe_size = log2above(wqe_size) + rxq_data->sges_n;
-	rq_attr.wq_attr.log_wq_stride = log_wqe_size;
-	rq_attr.wq_attr.log_wq_sz = rxq_data->elts_n - rxq_data->sges_n;
-	/* Calculate and allocate WQ memory space. */
 	wqe_size = 1 << log_wqe_size; /* round up power of two.*/
-	wq_size = wqe_n * wqe_size;
-	size_t alignment = MLX5_WQE_BUF_ALIGNMENT;
-	if (alignment == (size_t)-1) {
-		DRV_LOG(ERR, "Failed to get mem page size");
-		rte_errno = ENOMEM;
-		return NULL;
-	}
-	buf = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO, wq_size,
-			  alignment, rxq_ctrl->socket);
-	if (!buf)
-		return NULL;
-	rxq_data->wqes = buf;
-	rxq_ctrl->wq_umem = mlx5_os_umem_reg(priv->sh->ctx,
-						     buf, wq_size, 0);
-	if (!rxq_ctrl->wq_umem)
-		goto error;
-	/* Allocate RQ door-bell. */
-	dbr_offset = mlx5_get_dbr(priv->sh->ctx, &priv->dbrpgs, &dbr_page);
-	if (dbr_offset < 0) {
-		DRV_LOG(ERR, "Failed to allocate RQ door-bell.");
-		goto error;
-	}
-	rxq_ctrl->rq_dbr_offset = dbr_offset;
-	rxq_ctrl->rq_dbrec_page = dbr_page;
-	rxq_data->rq_db = (uint32_t *)((uintptr_t)dbr_page->dbrs +
-			  (uintptr_t)rxq_ctrl->rq_dbr_offset);
+	rq_attr.wq_attr.log_wq_stride = log_wqe_size;
+	rq_attr.wq_attr.log_wq_sz = log_desc_n;
+	rq_attr.wq_attr.end_padding_mode = priv->config.hw_padding ?
+						MLX5_WQ_END_PAD_MODE_ALIGN :
+						MLX5_WQ_END_PAD_MODE_NONE;
+	rq_attr.wq_attr.pd = priv->sh->pdn;
 	/* Create RQ using DevX API. */
-	mlx5_devx_wq_attr_fill(priv, rxq_ctrl, &rq_attr.wq_attr);
-	rq = mlx5_devx_cmd_create_rq(priv->sh->ctx, &rq_attr, rxq_ctrl->socket);
-	if (!rq)
-		goto error;
-	return rq;
-error:
-	mlx5_rxq_release_devx_rq_resources(rxq_ctrl);
-	return NULL;
+	return mlx5_devx_rq_create(priv->sh->ctx, &rxq_ctrl->obj->rq_obj,
+				   wqe_size, log_desc_n, &rq_attr,
+				   rxq_ctrl->socket);
 }
 
 /**
@@ -604,8 +499,8 @@
 		goto error;
 	}
 	/* Create RQ using DevX API. */
-	tmpl->rq = mlx5_rxq_create_devx_rq_resources(dev, idx);
-	if (!tmpl->rq) {
+	ret = mlx5_rxq_create_devx_rq_resources(dev, idx);
+	if (ret) {
 		DRV_LOG(ERR, "Port %u Rx queue %u RQ creation failure.",
 			dev->data->port_id, idx);
 		rte_errno = ENOMEM;
@@ -615,19 +510,17 @@
 	ret = mlx5_devx_modify_rq(tmpl, MLX5_RXQ_MOD_RST2RDY);
 	if (ret)
 		goto error;
+	rxq_data->wqes = (void *)(uintptr_t)tmpl->rq_obj.umem_buf;
+	rxq_data->rq_db = (uint32_t *)(uintptr_t)tmpl->rq_obj.db_rec;
 	rxq_data->cq_arm_sn = 0;
-	mlx5_rxq_initialize(rxq_data);
 	rxq_data->cq_ci = 0;
+	mlx5_rxq_initialize(rxq_data);
 	dev->data->rx_queue_state[idx] = RTE_ETH_QUEUE_STATE_STARTED;
-	rxq_ctrl->wqn = tmpl->rq->id;
+	rxq_ctrl->wqn = tmpl->rq_obj.rq->id;
 	return 0;
 error:
 	ret = rte_errno; /* Save rte_errno before cleanup. */
-	if (tmpl->rq)
-		claim_zero(mlx5_devx_cmd_destroy(tmpl->rq));
-	if (tmpl->devx_channel)
-		mlx5_os_devx_destroy_event_channel(tmpl->devx_channel);
-	mlx5_rxq_release_devx_resources(rxq_ctrl);
+	mlx5_rxq_devx_obj_release(tmpl);
 	rte_errno = ret; /* Restore rte_errno. */
 	return -rte_errno;
 }
@@ -671,7 +564,7 @@
 		struct mlx5_rxq_ctrl *rxq_ctrl =
 				container_of(rxq, struct mlx5_rxq_ctrl, rxq);
 
-		rqt_attr->rq_list[i] = rxq_ctrl->obj->rq->id;
+		rqt_attr->rq_list[i] = rxq_ctrl->obj->rq_obj.rq->id;
 	}
 	MLX5_ASSERT(i > 0);
 	for (j = 0; i != rqt_n; ++j, ++i)
diff --git a/drivers/net/mlx5/mlx5_rxtx.h b/drivers/net/mlx5/mlx5_rxtx.h
index aba9541..7756ed3 100644
--- a/drivers/net/mlx5/mlx5_rxtx.h
+++ b/drivers/net/mlx5/mlx5_rxtx.h
@@ -193,10 +193,6 @@ struct mlx5_rxq_ctrl {
 	uint32_t flow_tunnels_n[MLX5_FLOW_TUNNEL]; /* Tunnels counters. */
 	uint32_t wqn; /* WQ number. */
 	uint16_t dump_file_n; /* Number of dump files. */
-	struct mlx5_devx_dbr_page *rq_dbrec_page;
-	uint64_t rq_dbr_offset;
-	/* Storing RQ door-bell information, needed when freeing door-bell. */
-	void *wq_umem; /* WQ buffer registration info. */
 	struct rte_eth_hairpin_conf hairpin_conf; /* Hairpin configuration. */
 	uint32_t hairpin_status; /* Hairpin binding status. */
 };
-- 
1.8.3.1
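
The stride computation kept from the removed helpers is easier to check
on concrete numbers. Assuming a 16-byte mlx5_wqe_data_seg and a
non-MPRQ queue with sges_n = 2 (both values illustrative):

	uint32_t wqe_size = sizeof(struct mlx5_wqe_data_seg);	/* 16 */
	uint32_t log_wqe_size = log2above(wqe_size) + 2;	/* 4 + 2 */

	wqe_size = 1 << log_wqe_size;	/* 64B stride, power of two:
					 * 1 << sges_n = 4 segments. */
	/* log_desc_n = elts_n - sges_n keeps the element count, e.g.
	 * elts_n = 12, sges_n = 2 -> 1024 WQEs of 4 segments each. */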


^ permalink raw reply	[flat|nested] 57+ messages in thread

* [dpdk-dev] [PATCH v3 19/19] common/mlx5: remove doorbell allocation API
  2021-01-06  8:19       ` [dpdk-dev] [PATCH v3 00/19] common/mlx5: share DevX resources creations Michael Baum
                           ` (17 preceding siblings ...)
  2021-01-06  8:19         ` [dpdk-dev] [PATCH v3 18/19] net/mlx5: move Rx RQ creation to common Michael Baum
@ 2021-01-06  8:19         ` Michael Baum
  2021-01-12 21:39         ` [dpdk-dev] [PATCH v3 00/19] common/mlx5: share DevX resources creations Thomas Monjalon
  19 siblings, 0 replies; 57+ messages in thread
From: Michael Baum @ 2021-01-06  8:19 UTC (permalink / raw)
  To: dev; +Cc: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko

The mlx5_devx_dbr_page structure was used to allocate and release the
umem of the doorbells.
Since the doorbell record and the queue buffer now share the same umem,
this structure is useless.

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
 drivers/common/mlx5/mlx5_common.c               | 122 ------------------------
 drivers/common/mlx5/mlx5_common.h               |  23 -----
 drivers/common/mlx5/rte_common_mlx5_exports.def |   3 -
 drivers/common/mlx5/version.map                 |   3 -
 4 files changed, 151 deletions(-)

diff --git a/drivers/common/mlx5/mlx5_common.c b/drivers/common/mlx5/mlx5_common.c
index a00ffcb..c26a2cf 100644
--- a/drivers/common/mlx5/mlx5_common.c
+++ b/drivers/common/mlx5/mlx5_common.c
@@ -8,12 +8,10 @@
 
 #include <rte_errno.h>
 #include <rte_mempool.h>
-#include <rte_malloc.h>
 
 #include "mlx5_common.h"
 #include "mlx5_common_os.h"
 #include "mlx5_common_utils.h"
-#include "mlx5_malloc.h"
 #include "mlx5_common_pci.h"
 
 int mlx5_common_logtype;
@@ -126,126 +124,6 @@ static inline void mlx5_cpu_id(unsigned int level,
 }
 
 /**
- * Allocate page of door-bells and register it using DevX API.
- *
- * @param [in] ctx
- *   Pointer to the device context.
- *
- * @return
- *   Pointer to new page on success, NULL otherwise.
- */
-static struct mlx5_devx_dbr_page *
-mlx5_alloc_dbr_page(void *ctx)
-{
-	struct mlx5_devx_dbr_page *page;
-
-	/* Allocate space for door-bell page and management data. */
-	page = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO,
-			   sizeof(struct mlx5_devx_dbr_page),
-			   RTE_CACHE_LINE_SIZE, SOCKET_ID_ANY);
-	if (!page) {
-		DRV_LOG(ERR, "cannot allocate dbr page");
-		return NULL;
-	}
-	/* Register allocated memory. */
-	page->umem = mlx5_os_umem_reg(ctx, page->dbrs,
-					      MLX5_DBR_PAGE_SIZE, 0);
-	if (!page->umem) {
-		DRV_LOG(ERR, "cannot umem reg dbr page");
-		mlx5_free(page);
-		return NULL;
-	}
-	return page;
-}
-
-/**
- * Find the next available door-bell, allocate new page if needed.
- *
- * @param [in] ctx
- *   Pointer to device context.
- * @param [in] head
- *   Pointer to the head of dbr pages list.
- * @param [out] dbr_page
- *   Door-bell page containing the page data.
- *
- * @return
- *   Door-bell address offset on success, a negative error value otherwise.
- */
-int64_t
-mlx5_get_dbr(void *ctx,  struct mlx5_dbr_page_list *head,
-	     struct mlx5_devx_dbr_page **dbr_page)
-{
-	struct mlx5_devx_dbr_page *page = NULL;
-	uint32_t i, j;
-
-	LIST_FOREACH(page, head, next)
-		if (page->dbr_count < MLX5_DBR_PER_PAGE)
-			break;
-	if (!page) { /* No page with free door-bell exists. */
-		page = mlx5_alloc_dbr_page(ctx);
-		if (!page) /* Failed to allocate new page. */
-			return (-1);
-		LIST_INSERT_HEAD(head, page, next);
-	}
-	/* Loop to find bitmap part with clear bit. */
-	for (i = 0;
-	     i < MLX5_DBR_BITMAP_SIZE && page->dbr_bitmap[i] == UINT64_MAX;
-	     i++)
-		; /* Empty. */
-	/* Find the first clear bit. */
-	MLX5_ASSERT(i < MLX5_DBR_BITMAP_SIZE);
-	j = rte_bsf64(~page->dbr_bitmap[i]);
-	page->dbr_bitmap[i] |= (UINT64_C(1) << j);
-	page->dbr_count++;
-	*dbr_page = page;
-	return (i * CHAR_BIT * sizeof(uint64_t) + j) * MLX5_DBR_SIZE;
-}
-
-/**
- * Release a door-bell record.
- *
- * @param [in] head
- *   Pointer to the head of dbr pages list.
- * @param [in] umem_id
- *   UMEM ID of page containing the door-bell record to release.
- * @param [in] offset
- *   Offset of door-bell record in page.
- *
- * @return
- *   0 on success, a negative error value otherwise.
- */
-int32_t
-mlx5_release_dbr(struct mlx5_dbr_page_list *head, uint32_t umem_id,
-		 uint64_t offset)
-{
-	struct mlx5_devx_dbr_page *page = NULL;
-	int ret = 0;
-
-	LIST_FOREACH(page, head, next)
-		/* Find the page this address belongs to. */
-		if (mlx5_os_get_umem_id(page->umem) == umem_id)
-			break;
-	if (!page)
-		return -EINVAL;
-	page->dbr_count--;
-	if (!page->dbr_count) {
-		/* Page not used, free it and remove from list. */
-		LIST_REMOVE(page, next);
-		if (page->umem)
-			ret = -mlx5_os_umem_dereg(page->umem);
-		mlx5_free(page);
-	} else {
-		/* Mark in bitmap that this door-bell is not in use. */
-		offset /= MLX5_DBR_SIZE;
-		int i = offset / 64;
-		int j = offset % 64;
-
-		page->dbr_bitmap[i] &= ~(UINT64_C(1) << j);
-	}
-	return ret;
-}
-
-/**
  * Allocate the User Access Region with DevX on specified device.
  *
  * @param [in] ctx
diff --git a/drivers/common/mlx5/mlx5_common.h b/drivers/common/mlx5/mlx5_common.h
index a484b74..e35188d 100644
--- a/drivers/common/mlx5/mlx5_common.h
+++ b/drivers/common/mlx5/mlx5_common.h
@@ -220,21 +220,6 @@ enum mlx5_class {
 };
 
 #define MLX5_DBR_SIZE RTE_CACHE_LINE_SIZE
-#define MLX5_DBR_PER_PAGE 64
-/* Must be >= CHAR_BIT * sizeof(uint64_t) */
-#define MLX5_DBR_PAGE_SIZE (MLX5_DBR_PER_PAGE * MLX5_DBR_SIZE)
-/* Page size must be >= 512. */
-#define MLX5_DBR_BITMAP_SIZE (MLX5_DBR_PER_PAGE / (CHAR_BIT * sizeof(uint64_t)))
-
-struct mlx5_devx_dbr_page {
-	/* Door-bell records, must be first member in structure. */
-	uint8_t dbrs[MLX5_DBR_PAGE_SIZE];
-	LIST_ENTRY(mlx5_devx_dbr_page) next; /* Pointer to the next element. */
-	void *umem;
-	uint32_t dbr_count; /* Number of door-bell records in use. */
-	/* 1 bit marks matching door-bell is in use. */
-	uint64_t dbr_bitmap[MLX5_DBR_BITMAP_SIZE];
-};
 
 /* devX creation object */
 struct mlx5_devx_obj {
@@ -249,19 +234,11 @@ struct mlx5_klm {
 	uint64_t address;
 };
 
-LIST_HEAD(mlx5_dbr_page_list, mlx5_devx_dbr_page);
-
 __rte_internal
 void mlx5_translate_port_name(const char *port_name_in,
 			      struct mlx5_switch_info *port_info_out);
 void mlx5_glue_constructor(void);
 __rte_internal
-int64_t mlx5_get_dbr(void *ctx,  struct mlx5_dbr_page_list *head,
-		     struct mlx5_devx_dbr_page **dbr_page);
-__rte_internal
-int32_t mlx5_release_dbr(struct mlx5_dbr_page_list *head, uint32_t umem_id,
-			 uint64_t offset);
-__rte_internal
 void *mlx5_devx_alloc_uar(void *ctx, int mapping);
 extern uint8_t haswell_broadwell_cpu;
 
diff --git a/drivers/common/mlx5/rte_common_mlx5_exports.def b/drivers/common/mlx5/rte_common_mlx5_exports.def
index 6e1ff50..93bf4c4 100644
--- a/drivers/common/mlx5/rte_common_mlx5_exports.def
+++ b/drivers/common/mlx5/rte_common_mlx5_exports.def
@@ -42,7 +42,6 @@ EXPORTS
     mlx5_devx_sq_create
     mlx5_devx_sq_destroy
 
-	mlx5_get_dbr
 	mlx5_glue
 
 	mlx5_malloc_mem_select
@@ -63,8 +62,6 @@ EXPORTS
 
 	mlx5_pci_driver_register
 
-	mlx5_release_dbr
-
 	mlx5_malloc
 	mlx5_realloc
 	mlx5_free
diff --git a/drivers/common/mlx5/version.map b/drivers/common/mlx5/version.map
index dac9411..c666fdf 100644
--- a/drivers/common/mlx5/version.map
+++ b/drivers/common/mlx5/version.map
@@ -51,7 +51,6 @@ INTERNAL {
     mlx5_devx_sq_destroy;
 
 	mlx5_get_ifname_sysfs;
-	mlx5_get_dbr;
 
 	mlx5_mp_init_primary;
 	mlx5_mp_uninit_primary;
@@ -93,8 +92,6 @@ INTERNAL {
 	mlx5_nl_vlan_vmwa_create;
 	mlx5_nl_vlan_vmwa_delete;
 
-	mlx5_release_dbr;
-
 	mlx5_translate_port_name;
 
 	mlx5_malloc_mem_select;
-- 
1.8.3.1
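
The replacement for the removed page allocator is already in the common
queue creation shown earlier in this series: each queue carves its
doorbell record out of its own WQE umem. A sketch of that layout math,
reusing the expressions from mlx5_devx_rq_create():

	uint32_t umem_size = wqe_size * (1U << log_wqbb_n);
	/* The doorbell record goes right after the WQEs, aligned to
	 * MLX5_DBR_SIZE (one cache line). */
	uint32_t umem_dbrec = RTE_ALIGN(umem_size, MLX5_DBR_SIZE);

	umem_size += MLX5_DBR_SIZE;	/* One buffer, one umem reg. */
	/* db_rec = RTE_PTR_ADD(umem_buf, umem_dbrec); there is no
	 * shared dbr page, no bitmap and no cross-queue tracking. */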


^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [dpdk-dev] [PATCH v3 00/19] common/mlx5: share DevX resources creations
  2021-01-06  8:19       ` [dpdk-dev] [PATCH v3 00/19] common/mlx5: share DevX resources creations Michael Baum
                           ` (18 preceding siblings ...)
  2021-01-06  8:19         ` [dpdk-dev] [PATCH v3 19/19] common/mlx5: remove doorbell allocation API Michael Baum
@ 2021-01-12 21:39         ` Thomas Monjalon
  19 siblings, 0 replies; 57+ messages in thread
From: Thomas Monjalon @ 2021-01-12 21:39 UTC (permalink / raw)
  To: Michael Baum; +Cc: dev, Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko

06/01/2021 09:19, Michael Baum:
> Michael Baum (19):
>   common/mlx5: fix completion queue entry size configuration
>   net/mlx5: remove CQE padding device argument
>   net/mlx5: fix ASO SQ creation error flow
>   common/mlx5: share DevX CQ creation
>   regex/mlx5: move DevX CQ creation to common
>   vdpa/mlx5: move DevX CQ creation to common
>   net/mlx5: move rearm and clock queue CQ creation to common
>   net/mlx5: move ASO CQ creation to common
>   net/mlx5: move Tx CQ creation to common
>   net/mlx5: move Rx CQ creation to common
>   common/mlx5: enhance page size configuration
>   common/mlx5: share DevX SQ creation
>   regex/mlx5: move DevX SQ creation to common
>   net/mlx5: move rearm and clock queue SQ creation to common
>   net/mlx5: move Tx SQ creation to common
>   net/mlx5: move ASO SQ creation to common
>   common/mlx5: share DevX RQ creation
>   net/mlx5: move Rx RQ creation to common
>   common/mlx5: remove doorbell allocation API

Applied to next-net-mlx with indent fixed in symbol files (.map & .def), thanks.



^ permalink raw reply	[flat|nested] 57+ messages in thread

end of thread, other threads:[~2021-01-12 21:39 UTC | newest]

Thread overview: 57+ messages (download: mbox.gz / follow: Atom feed)
2020-12-17 11:44 [dpdk-dev] [PATCH 00/17] common/mlx5: share DevX resources creations Michael Baum
2020-12-17 11:44 ` [dpdk-dev] [PATCH 01/17] net/mlx5: fix ASO SQ creation error flow Michael Baum
2020-12-29  8:52   ` [dpdk-dev] [PATCH v2 00/17] common/mlx5: share DevX resources creations Michael Baum
2020-12-29  8:52     ` [dpdk-dev] [PATCH v2 01/17] net/mlx5: fix ASO SQ creation error flow Michael Baum
2021-01-06  8:19       ` [dpdk-dev] [PATCH v3 00/19] common/mlx5: share DevX resources creations Michael Baum
2021-01-06  8:19         ` [dpdk-dev] [PATCH v3 01/19] common/mlx5: fix completion queue entry size configuration Michael Baum
2021-01-06  8:19         ` [dpdk-dev] [PATCH v3 02/19] net/mlx5: remove CQE padding device argument Michael Baum
2021-01-06  8:19         ` [dpdk-dev] [PATCH v3 03/19] net/mlx5: fix ASO SQ creation error flow Michael Baum
2021-01-06  8:19         ` [dpdk-dev] [PATCH v3 04/19] common/mlx5: share DevX CQ creation Michael Baum
2021-01-06  8:19         ` [dpdk-dev] [PATCH v3 05/19] regex/mlx5: move DevX CQ creation to common Michael Baum
2021-01-06  8:19         ` [dpdk-dev] [PATCH v3 06/19] vdpa/mlx5: " Michael Baum
2021-01-06  8:19         ` [dpdk-dev] [PATCH v3 07/19] net/mlx5: move rearm and clock queue " Michael Baum
2021-01-06  8:19         ` [dpdk-dev] [PATCH v3 08/19] net/mlx5: move ASO " Michael Baum
2021-01-06  8:19         ` [dpdk-dev] [PATCH v3 09/19] net/mlx5: move Tx " Michael Baum
2021-01-06  8:19         ` [dpdk-dev] [PATCH v3 10/19] net/mlx5: move Rx " Michael Baum
2021-01-06  8:19         ` [dpdk-dev] [PATCH v3 11/19] common/mlx5: enhance page size configuration Michael Baum
2021-01-06  8:19         ` [dpdk-dev] [PATCH v3 12/19] common/mlx5: share DevX SQ creation Michael Baum
2021-01-06  8:19         ` [dpdk-dev] [PATCH v3 13/19] regex/mlx5: move DevX SQ creation to common Michael Baum
2021-01-06  8:19         ` [dpdk-dev] [PATCH v3 14/19] net/mlx5: move rearm and clock queue " Michael Baum
2021-01-06  8:19         ` [dpdk-dev] [PATCH v3 15/19] net/mlx5: move Tx " Michael Baum
2021-01-06  8:19         ` [dpdk-dev] [PATCH v3 16/19] net/mlx5: move ASO " Michael Baum
2021-01-06  8:19         ` [dpdk-dev] [PATCH v3 17/19] common/mlx5: share DevX RQ creation Michael Baum
2021-01-06  8:19         ` [dpdk-dev] [PATCH v3 18/19] net/mlx5: move Rx RQ creation to common Michael Baum
2021-01-06  8:19         ` [dpdk-dev] [PATCH v3 19/19] common/mlx5: remove doorbell allocation API Michael Baum
2021-01-12 21:39         ` [dpdk-dev] [PATCH v3 00/19] common/mlx5: share DevX resources creations Thomas Monjalon
2020-12-29  8:52     ` [dpdk-dev] [PATCH v2 02/17] common/mlx5: share DevX CQ creation Michael Baum
2020-12-29  8:52     ` [dpdk-dev] [PATCH v2 03/17] regex/mlx5: move DevX CQ creation to common Michael Baum
2020-12-29  8:52     ` [dpdk-dev] [PATCH v2 04/17] vdpa/mlx5: " Michael Baum
2020-12-29  8:52     ` [dpdk-dev] [PATCH v2 05/17] net/mlx5: move rearm and clock queue " Michael Baum
2020-12-29  8:52     ` [dpdk-dev] [PATCH v2 06/17] net/mlx5: move ASO " Michael Baum
2020-12-29  8:52     ` [dpdk-dev] [PATCH v2 07/17] net/mlx5: move Tx " Michael Baum
2020-12-29  8:52     ` [dpdk-dev] [PATCH v2 08/17] net/mlx5: move Rx " Michael Baum
2020-12-29  8:52     ` [dpdk-dev] [PATCH v2 09/17] common/mlx5: enhance page size configuration Michael Baum
2020-12-29  8:52     ` [dpdk-dev] [PATCH v2 10/17] common/mlx5: share DevX SQ creation Michael Baum
2020-12-29  8:52     ` [dpdk-dev] [PATCH v2 11/17] regex/mlx5: move DevX SQ creation to common Michael Baum
2020-12-29  8:52     ` [dpdk-dev] [PATCH v2 12/17] net/mlx5: move rearm and clock queue " Michael Baum
2020-12-29  8:52     ` [dpdk-dev] [PATCH v2 13/17] net/mlx5: move Tx " Michael Baum
2020-12-29  8:52     ` [dpdk-dev] [PATCH v2 14/17] net/mlx5: move ASO " Michael Baum
2020-12-29  8:52     ` [dpdk-dev] [PATCH v2 15/17] common/mlx5: share DevX RQ creation Michael Baum
2020-12-29  8:52     ` [dpdk-dev] [PATCH v2 16/17] net/mlx5: move Rx RQ creation to common Michael Baum
2020-12-29  8:52     ` [dpdk-dev] [PATCH v2 17/17] common/mlx5: remove doorbell allocation API Michael Baum
2020-12-17 11:44 ` [dpdk-dev] [PATCH 02/17] common/mlx5: share DevX CQ creation Michael Baum
2020-12-17 11:44 ` [dpdk-dev] [PATCH 03/17] regex/mlx5: move DevX CQ creation to common Michael Baum
2020-12-17 11:44 ` [dpdk-dev] [PATCH 04/17] vdpa/mlx5: " Michael Baum
2020-12-17 11:44 ` [dpdk-dev] [PATCH 05/17] net/mlx5: move rearm and clock queue " Michael Baum
2020-12-17 11:44 ` [dpdk-dev] [PATCH 06/17] net/mlx5: move ASO " Michael Baum
2020-12-17 11:44 ` [dpdk-dev] [PATCH 07/17] net/mlx5: move Tx " Michael Baum
2020-12-17 11:44 ` [dpdk-dev] [PATCH 08/17] net/mlx5: move Rx " Michael Baum
2020-12-17 11:44 ` [dpdk-dev] [PATCH 09/17] common/mlx5: enhance page size configuration Michael Baum
2020-12-17 11:44 ` [dpdk-dev] [PATCH 10/17] common/mlx5: share DevX SQ creation Michael Baum
2020-12-17 11:44 ` [dpdk-dev] [PATCH 11/17] regex/mlx5: move DevX SQ creation to common Michael Baum
2020-12-17 11:44 ` [dpdk-dev] [PATCH 12/17] net/mlx5: move rearm and clock queue " Michael Baum
2020-12-17 11:44 ` [dpdk-dev] [PATCH 13/17] net/mlx5: move Tx " Michael Baum
2020-12-17 11:44 ` [dpdk-dev] [PATCH 14/17] net/mlx5: move ASO " Michael Baum
2020-12-17 11:44 ` [dpdk-dev] [PATCH 15/17] common/mlx5: share DevX RQ creation Michael Baum
2020-12-17 11:44 ` [dpdk-dev] [PATCH 16/17] net/mlx5: move Rx RQ creation to common Michael Baum
2020-12-17 11:44 ` [dpdk-dev] [PATCH 17/17] common/mlx5: remove doorbell allocation API Michael Baum
