* [dpdk-dev] [PATCH v1 01/18] net/mlx5: fix Rx hash queue creation error flow
From: Michael Baum @ 2020-09-03 10:13 UTC (permalink / raw)
To: dev; +Cc: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko, stable
The mlx5_hrxq_new function allocates several resources, and if one of the
allocations fails, the function jumps to an error label where it
releases all the previously allocated resources.
When the TIR action creation fails, the hrxq memory is not released,
which can cause a resource leak.
Add an appropriate release to the hrxq pointer in the error flow.
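For illustration, the intended error flow follows the usual goto-cleanup
pattern, roughly as sketched below (simplified, not the full function; the
allocation and TIR creation calls are recalled from the driver's API and may
not match this version exactly):

    struct mlx5_hrxq *hrxq = NULL;  /* NULL until allocated, so the   */
    uint32_t hrxq_idx = 0;          /* error path can test it safely. */

    hrxq = mlx5_ipool_zmalloc(priv->sh->ipool[MLX5_IPOOL_HRXQ], &hrxq_idx);
    if (!hrxq)
        goto error;
    tir = mlx5_devx_cmd_create_tir(priv->sh->ctx, &tir_attr);
    if (!tir)                       /* May fail after hrxq already exists. */
        goto error;
    /* ... remaining setup ... */
    return hrxq_idx;
error:
    err = rte_errno;                /* Save rte_errno before cleanup. */
    if (qp)
        claim_zero(mlx5_glue->destroy_qp(qp));
    else if (tir)
        claim_zero(mlx5_devx_cmd_destroy(tir));
    if (hrxq)                       /* The previously missing release. */
        mlx5_ipool_free(priv->sh->ipool[MLX5_IPOOL_HRXQ], hrxq_idx);
    rte_errno = err;                /* Restore rte_errno. */
    return 0;
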
Fixes: 772dc0eb83d3 ("net/mlx5: convert hrxq to indexed")
Fixes: dc9ceff73c99 ("net/mlx5: create advanced RxQ via DevX")
Cc: stable@dpdk.org
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
drivers/net/mlx5/mlx5_rxq.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index 946f745..0d16592 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -2873,7 +2873,7 @@ enum mlx5_rxq_type
int tunnel __rte_unused)
{
struct mlx5_priv *priv = dev->data->dev_private;
- struct mlx5_hrxq *hrxq;
+ struct mlx5_hrxq *hrxq = NULL;
uint32_t hrxq_idx = 0;
struct ibv_qp *qp = NULL;
struct mlx5_ind_table_obj *ind_tbl;
@@ -3074,6 +3074,8 @@ enum mlx5_rxq_type
claim_zero(mlx5_glue->destroy_qp(qp));
else if (tir)
claim_zero(mlx5_devx_cmd_destroy(tir));
+ if (hrxq)
+ mlx5_ipool_free(priv->sh->ipool[MLX5_IPOOL_HRXQ], hrxq_idx);
rte_errno = err; /* Restore rte_errno. */
return 0;
}
--
1.8.3.1
* [dpdk-dev] [PATCH v1 02/18] net/mlx5: fix Rx queue state update
From: Michael Baum @ 2020-09-03 10:13 UTC (permalink / raw)
To: dev; +Cc: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko, stable
In order to support DevX Rx queue stop and start operations, the state
of the queue should be updated in FW.
The state update PRM command requires setting both the current state and
the new requested state.
The settings of the current state and the new requested state fields were
wrongly swapped.
Swap them back to the correct assignment.
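For clarity, the corrected assignments for both directions are sketched
below (rq_attr is the mlx5_devx_modify_rq_attr used in the hunks that
follow; rq_state carries the current state, state the requested one):

    /* Queue stop: the RQ is currently ready, request reset. */
    rq_attr.rq_state = MLX5_RQC_STATE_RDY;  /* current state */
    rq_attr.state = MLX5_RQC_STATE_RST;     /* requested state */
    ret = mlx5_devx_cmd_modify_rq(rxq_ctrl->obj->rq, &rq_attr);

    /* Queue start: the RQ is currently in reset, request ready. */
    rq_attr.rq_state = MLX5_RQC_STATE_RST;  /* current state */
    rq_attr.state = MLX5_RQC_STATE_RDY;     /* requested state */
    ret = mlx5_devx_cmd_modify_rq(rxq_ctrl->obj->rq, &rq_attr);
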
Fixes: 161d103b231c ("net/mlx5: add queue start and stop")
Cc: stable@dpdk.org
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
drivers/net/mlx5/mlx5_rxq.c | 10 +++++-----
1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index 0d16592..2e6cbd4 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -505,8 +505,8 @@
struct mlx5_devx_modify_rq_attr rq_attr;
memset(&rq_attr, 0, sizeof(rq_attr));
- rq_attr.rq_state = MLX5_RQC_STATE_RST;
- rq_attr.state = MLX5_RQC_STATE_RDY;
+ rq_attr.rq_state = MLX5_RQC_STATE_RDY;
+ rq_attr.state = MLX5_RQC_STATE_RST;
ret = mlx5_devx_cmd_modify_rq(rxq_ctrl->obj->rq, &rq_attr);
}
if (ret) {
@@ -604,7 +604,7 @@
rte_cio_wmb();
*rxq->cq_db = rte_cpu_to_be_32(rxq->cq_ci);
rte_cio_wmb();
- /* Reset RQ consumer before moving queue ro READY state. */
+ /* Reset RQ consumer before moving queue to READY state. */
*rxq->rq_db = rte_cpu_to_be_32(0);
rte_cio_wmb();
if (rxq_ctrl->obj->type == MLX5_RXQ_OBJ_TYPE_IBV) {
@@ -618,8 +618,8 @@
struct mlx5_devx_modify_rq_attr rq_attr;
memset(&rq_attr, 0, sizeof(rq_attr));
- rq_attr.rq_state = MLX5_RQC_STATE_RDY;
- rq_attr.state = MLX5_RQC_STATE_RST;
+ rq_attr.rq_state = MLX5_RQC_STATE_RST;
+ rq_attr.state = MLX5_RQC_STATE_RDY;
ret = mlx5_devx_cmd_modify_rq(rxq_ctrl->obj->rq, &rq_attr);
}
if (ret) {
--
1.8.3.1
* [dpdk-dev] [PATCH v1 03/18] net/mlx5: fix types differentiation in Rxq create
From: Michael Baum @ 2020-09-03 10:13 UTC (permalink / raw)
To: dev; +Cc: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko, stable
Rx HW objects can be created by both Verbs and DevX operations.
The management of the two types of operations is done directly in the
main flow of the object's creation.
Some arrangements and validations were wrongly applied to the irrelevant
type:
1. LRO related validations were done for the Verbs type, where LRO is not
supported at all.
2. Verbs allocation arrangements were done for DevX operations, where they
are not needed.
3. Doorbell destruction was considered for the Verbs type, where it is
irrelevant.
Adjust the aforementioned points only for the relevant types.
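Conceptually, each arrangement now lives only in the branch of the type it
belongs to, roughly as in this sketch (based on the hunks below, with the
creation steps abbreviated):

    if (tmpl->type == MLX5_RXQ_OBJ_TYPE_IBV) {
        /* Verbs allocation context is meaningful only here. */
        priv->verbs_alloc_ctx.type = MLX5_VERBS_ALLOC_TYPE_RX_QUEUE;
        priv->verbs_alloc_ctx.obj = rxq_ctrl;
        /* ... Verbs CQ/WQ creation ... */
        priv->verbs_alloc_ctx.type = MLX5_VERBS_ALLOC_TYPE_NONE;
    } else if (tmpl->type == MLX5_RXQ_OBJ_TYPE_DEVX_RQ) {
        /* Door-bell allocation (and later release) only here. */
        /* ... DevX CQ/RQ creation ... */
    }
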
Fixes: e79c9be91515 ("net/mlx5: support Rx hairpin queues")
Fixes: 08d1838f645a ("net/mlx5: implement CQ for Rx using DevX API")
Fixes: 17ed314c6c0b ("net/mlx5: allow LRO per Rx queue")
Fixes: dc9ceff73c99 ("net/mlx5: create advanced RxQ via DevX")
Cc: stable@dpdk.org
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
drivers/net/mlx5/mlx5_rxq.c | 78 +++++++++++++++++++++++---------------------
drivers/net/mlx5/mlx5_rxtx.h | 2 --
2 files changed, 41 insertions(+), 39 deletions(-)
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index 2e6cbd4..776c7f6 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -927,13 +927,16 @@
static int
mlx5_rxq_obj_release(struct mlx5_rxq_obj *rxq_obj)
{
+ struct mlx5_priv *priv = rxq_obj->rxq_ctrl->priv;
+ struct mlx5_rxq_ctrl *rxq_ctrl = rxq_obj->rxq_ctrl;
+
MLX5_ASSERT(rxq_obj);
if (rte_atomic32_dec_and_test(&rxq_obj->refcnt)) {
switch (rxq_obj->type) {
case MLX5_RXQ_OBJ_TYPE_IBV:
MLX5_ASSERT(rxq_obj->wq);
MLX5_ASSERT(rxq_obj->ibv_cq);
- rxq_free_elts(rxq_obj->rxq_ctrl);
+ rxq_free_elts(rxq_ctrl);
claim_zero(mlx5_glue->destroy_wq(rxq_obj->wq));
claim_zero(mlx5_glue->destroy_cq(rxq_obj->ibv_cq));
if (rxq_obj->ibv_channel)
@@ -943,14 +946,20 @@
case MLX5_RXQ_OBJ_TYPE_DEVX_RQ:
MLX5_ASSERT(rxq_obj->rq);
MLX5_ASSERT(rxq_obj->devx_cq);
- rxq_free_elts(rxq_obj->rxq_ctrl);
+ rxq_free_elts(rxq_ctrl);
claim_zero(mlx5_devx_cmd_destroy(rxq_obj->rq));
claim_zero(mlx5_devx_cmd_destroy(rxq_obj->devx_cq));
+ claim_zero(mlx5_release_dbr(&priv->dbrpgs,
+ rxq_ctrl->rq_dbr_umem_id,
+ rxq_ctrl->rq_dbr_offset));
+ claim_zero(mlx5_release_dbr(&priv->dbrpgs,
+ rxq_ctrl->cq_dbr_umem_id,
+ rxq_ctrl->cq_dbr_offset));
if (rxq_obj->devx_channel)
mlx5_glue->devx_destroy_event_channel
(rxq_obj->devx_channel);
- rxq_release_devx_rq_resources(rxq_obj->rxq_ctrl);
- rxq_release_devx_cq_resources(rxq_obj->rxq_ctrl);
+ rxq_release_devx_rq_resources(rxq_ctrl);
+ rxq_release_devx_cq_resources(rxq_ctrl);
break;
case MLX5_RXQ_OBJ_TYPE_DEVX_HAIRPIN:
MLX5_ASSERT(rxq_obj->rq);
@@ -1264,8 +1273,7 @@
cq_attr.mlx5 = (struct mlx5dv_cq_init_attr){
.comp_mask = 0,
};
- if (priv->config.cqe_comp && !rxq_data->hw_timestamp &&
- !rxq_data->lro) {
+ if (priv->config.cqe_comp && !rxq_data->hw_timestamp) {
cq_attr.mlx5.comp_mask |=
MLX5DV_CQ_INIT_ATTR_MASK_COMPRESSED_CQE;
#ifdef HAVE_IBV_DEVICE_STRIDING_RQ_SUPPORT
@@ -1287,10 +1295,6 @@
"port %u Rx CQE compression is disabled for HW"
" timestamp",
dev->data->port_id);
- } else if (priv->config.cqe_comp && rxq_data->lro) {
- DRV_LOG(DEBUG,
- "port %u Rx CQE compression is disabled for LRO",
- dev->data->port_id);
}
#ifdef HAVE_IBV_MLX5_MOD_CQE_128B_PAD
if (priv->config.cqe_pad) {
@@ -1628,7 +1632,7 @@
cq_attr.log_page_size = rte_log2_u32(page_size);
cq_attr.db_umem_offset = rxq_ctrl->cq_dbr_offset;
cq_attr.db_umem_id = rxq_ctrl->cq_dbr_umem_id;
- cq_attr.db_umem_valid = rxq_ctrl->cq_dbr_umem_id_valid;
+ cq_attr.db_umem_valid = 1;
cq_obj = mlx5_devx_cmd_create_cq(priv->sh->ctx, &cq_attr);
if (!cq_obj)
goto error;
@@ -1684,8 +1688,7 @@
tmpl = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO, sizeof(*tmpl), 0,
rxq_ctrl->socket);
if (!tmpl) {
- DRV_LOG(ERR,
- "port %u Rx queue %u cannot allocate verbs resources",
+ DRV_LOG(ERR, "port %u Rx queue %u cannot allocate resources",
dev->data->port_id, rxq_data->idx);
rte_errno = ENOMEM;
return NULL;
@@ -1728,7 +1731,6 @@
idx, (void *)&tmpl);
rte_atomic32_inc(&tmpl->refcnt);
LIST_INSERT_HEAD(&priv->rxqsobj, tmpl, next);
- priv->verbs_alloc_ctx.type = MLX5_VERBS_ALLOC_TYPE_NONE;
dev->data->rx_queue_state[idx] = RTE_ETH_QUEUE_STATE_HAIRPIN;
return tmpl;
}
@@ -1758,6 +1760,8 @@ struct mlx5_rxq_obj *
unsigned int cqe_n;
unsigned int wqe_n = 1 << rxq_data->elts_n;
struct mlx5_rxq_obj *tmpl = NULL;
+ struct mlx5_devx_dbr_page *cq_dbr_page = NULL;
+ struct mlx5_devx_dbr_page *rq_dbr_page = NULL;
struct mlx5dv_cq cq_info;
struct mlx5dv_rwq rwq;
int ret = 0;
@@ -1767,13 +1771,10 @@ struct mlx5_rxq_obj *
MLX5_ASSERT(!rxq_ctrl->obj);
if (type == MLX5_RXQ_OBJ_TYPE_DEVX_HAIRPIN)
return mlx5_rxq_obj_hairpin_new(dev, idx);
- priv->verbs_alloc_ctx.type = MLX5_VERBS_ALLOC_TYPE_RX_QUEUE;
- priv->verbs_alloc_ctx.obj = rxq_ctrl;
tmpl = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO, sizeof(*tmpl), 0,
rxq_ctrl->socket);
if (!tmpl) {
- DRV_LOG(ERR,
- "port %u Rx queue %u cannot allocate resources",
+ DRV_LOG(ERR, "port %u Rx queue %u cannot allocate resources",
dev->data->port_id, rxq_data->idx);
rte_errno = ENOMEM;
goto error;
@@ -1820,6 +1821,8 @@ struct mlx5_rxq_obj *
DRV_LOG(DEBUG, "port %u device_attr.max_sge is %d",
dev->data->port_id, priv->sh->device_attr.max_sge);
if (tmpl->type == MLX5_RXQ_OBJ_TYPE_IBV) {
+ priv->verbs_alloc_ctx.type = MLX5_VERBS_ALLOC_TYPE_RX_QUEUE;
+ priv->verbs_alloc_ctx.obj = rxq_ctrl;
/* Create CQ using Verbs API. */
tmpl->ibv_cq = mlx5_ibv_cq_new(dev, priv, rxq_data, cqe_n,
tmpl);
@@ -1882,23 +1885,23 @@ struct mlx5_rxq_obj *
}
rxq_data->wqes = rwq.buf;
rxq_data->rq_db = rwq.dbrec;
+ priv->verbs_alloc_ctx.type = MLX5_VERBS_ALLOC_TYPE_NONE;
} else if (tmpl->type == MLX5_RXQ_OBJ_TYPE_DEVX_RQ) {
struct mlx5_devx_modify_rq_attr rq_attr = { 0 };
- struct mlx5_devx_dbr_page *dbr_page;
int64_t dbr_offset;
/* Allocate CQ door-bell. */
dbr_offset = mlx5_get_dbr(priv->sh->ctx, &priv->dbrpgs,
- &dbr_page);
+ &cq_dbr_page);
if (dbr_offset < 0) {
DRV_LOG(ERR, "Failed to allocate CQ door-bell.");
goto error;
}
rxq_ctrl->cq_dbr_offset = dbr_offset;
- rxq_ctrl->cq_dbr_umem_id = mlx5_os_get_umem_id(dbr_page->umem);
- rxq_ctrl->cq_dbr_umem_id_valid = 1;
+ rxq_ctrl->cq_dbr_umem_id =
+ mlx5_os_get_umem_id(cq_dbr_page->umem);
rxq_data->cq_db =
- (uint32_t *)((uintptr_t)dbr_page->dbrs +
+ (uint32_t *)((uintptr_t)cq_dbr_page->dbrs +
(uintptr_t)rxq_ctrl->cq_dbr_offset);
rxq_data->cq_uar =
mlx5_os_get_devx_uar_base_addr(priv->sh->devx_rx_uar);
@@ -1910,16 +1913,16 @@ struct mlx5_rxq_obj *
}
/* Allocate RQ door-bell. */
dbr_offset = mlx5_get_dbr(priv->sh->ctx, &priv->dbrpgs,
- &dbr_page);
+ &rq_dbr_page);
if (dbr_offset < 0) {
DRV_LOG(ERR, "Failed to allocate RQ door-bell.");
goto error;
}
rxq_ctrl->rq_dbr_offset = dbr_offset;
- rxq_ctrl->rq_dbr_umem_id = mlx5_os_get_umem_id(dbr_page->umem);
- rxq_ctrl->rq_dbr_umem_id_valid = 1;
+ rxq_ctrl->rq_dbr_umem_id =
+ mlx5_os_get_umem_id(rq_dbr_page->umem);
rxq_data->rq_db =
- (uint32_t *)((uintptr_t)dbr_page->dbrs +
+ (uint32_t *)((uintptr_t)rq_dbr_page->dbrs +
(uintptr_t)rxq_ctrl->rq_dbr_offset);
/* Create RQ using DevX API. */
tmpl->rq = mlx5_devx_rq_new(dev, idx, tmpl->devx_cq->id);
@@ -1943,7 +1946,6 @@ struct mlx5_rxq_obj *
idx, (void *)&tmpl);
rte_atomic32_inc(&tmpl->refcnt);
LIST_INSERT_HEAD(&priv->rxqsobj, tmpl, next);
- priv->verbs_alloc_ctx.type = MLX5_VERBS_ALLOC_TYPE_NONE;
dev->data->rx_queue_state[idx] = RTE_ETH_QUEUE_STATE_STARTED;
return tmpl;
error:
@@ -1957,6 +1959,7 @@ struct mlx5_rxq_obj *
if (tmpl->ibv_channel)
claim_zero(mlx5_glue->destroy_comp_channel
(tmpl->ibv_channel));
+ priv->verbs_alloc_ctx.type = MLX5_VERBS_ALLOC_TYPE_NONE;
} else if (tmpl->type == MLX5_RXQ_OBJ_TYPE_DEVX_RQ) {
if (tmpl->rq)
claim_zero(mlx5_devx_cmd_destroy(tmpl->rq));
@@ -1966,6 +1969,16 @@ struct mlx5_rxq_obj *
if (tmpl->devx_channel)
mlx5_glue->devx_destroy_event_channel
(tmpl->devx_channel);
+ if (rq_dbr_page)
+ claim_zero(mlx5_release_dbr
+ (&priv->dbrpgs,
+ rxq_ctrl->rq_dbr_umem_id,
+ rxq_ctrl->rq_dbr_offset));
+ if (cq_dbr_page)
+ claim_zero(mlx5_release_dbr
+ (&priv->dbrpgs,
+ rxq_ctrl->cq_dbr_umem_id,
+ rxq_ctrl->cq_dbr_offset));
}
mlx5_free(tmpl);
rte_errno = ret; /* Restore rte_errno. */
@@ -1974,7 +1987,6 @@ struct mlx5_rxq_obj *
rxq_release_devx_rq_resources(rxq_ctrl);
rxq_release_devx_cq_resources(rxq_ctrl);
}
- priv->verbs_alloc_ctx.type = MLX5_VERBS_ALLOC_TYPE_NONE;
return NULL;
}
@@ -2570,14 +2582,6 @@ struct mlx5_rxq_ctrl *
if (rxq_ctrl->obj && !mlx5_rxq_obj_release(rxq_ctrl->obj))
rxq_ctrl->obj = NULL;
if (rte_atomic32_dec_and_test(&rxq_ctrl->refcnt)) {
- if (rxq_ctrl->rq_dbr_umem_id_valid)
- claim_zero(mlx5_release_dbr(&priv->dbrpgs,
- rxq_ctrl->rq_dbr_umem_id,
- rxq_ctrl->rq_dbr_offset));
- if (rxq_ctrl->cq_dbr_umem_id_valid)
- claim_zero(mlx5_release_dbr(&priv->dbrpgs,
- rxq_ctrl->cq_dbr_umem_id,
- rxq_ctrl->cq_dbr_offset));
if (rxq_ctrl->type == MLX5_RXQ_TYPE_STANDARD)
mlx5_mr_btree_free(&rxq_ctrl->rxq.mr_ctrl.cache_bh);
LIST_REMOVE(rxq_ctrl, next);
diff --git a/drivers/net/mlx5/mlx5_rxtx.h b/drivers/net/mlx5/mlx5_rxtx.h
index f3fe2e1..a161d4e 100644
--- a/drivers/net/mlx5/mlx5_rxtx.h
+++ b/drivers/net/mlx5/mlx5_rxtx.h
@@ -200,8 +200,6 @@ struct mlx5_rxq_ctrl {
enum mlx5_rxq_type type; /* Rxq type. */
unsigned int socket; /* CPU socket ID for allocations. */
unsigned int irq:1; /* Whether IRQ is enabled. */
- unsigned int rq_dbr_umem_id_valid:1;
- unsigned int cq_dbr_umem_id_valid:1;
uint32_t flow_mark_n; /* Number of Mark/Flag flows using this Queue. */
uint32_t flow_tunnels_n[MLX5_FLOW_TUNNEL]; /* Tunnels counters. */
uint32_t wqn; /* WQ number. */
--
1.8.3.1
* [dpdk-dev] [PATCH v1 04/18] net/mlx5: mitigate Rx queue reference counters
From: Michael Baum @ 2020-09-03 10:13 UTC (permalink / raw)
To: dev; +Cc: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko
The Rx queue structures manage two different reference counters per queue:
the rxq_ctrl reference counter and the rxq_obj reference counter.
There is no real need to use two different counters; they just complicate
the release functions.
Remove the rxq_obj counter and use only the rxq_ctrl counter.
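With a single counter the release path reduces to roughly the following
(a sketch matching the rewritten mlx5_rxq_release() in the hunks below):

    if (!rte_atomic32_dec_and_test(&rxq_ctrl->refcnt))
        return 1;               /* Queue is still referenced. */
    if (rxq_ctrl->obj) {        /* Last reference: destroy the HW object. */
        mlx5_rxq_obj_release(rxq_ctrl->obj);
        rxq_ctrl->obj = NULL;
    }
    LIST_REMOVE(rxq_ctrl, next);    /* And free the control structure. */
    mlx5_free(rxq_ctrl);
    (*priv->rxqs)[idx] = NULL;
    return 0;
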
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
drivers/net/mlx5/mlx5_rxq.c | 208 ++++++++++++++++---------------------------
drivers/net/mlx5/mlx5_rxtx.h | 1 -
2 files changed, 79 insertions(+), 130 deletions(-)
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index 776c7f6..506c4d3 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -832,34 +832,6 @@
}
/**
- * Get an Rx queue Verbs/DevX object.
- *
- * @param dev
- * Pointer to Ethernet device.
- * @param idx
- * Queue index in DPDK Rx queue array
- *
- * @return
- * The Verbs/DevX object if it exists.
- */
-static struct mlx5_rxq_obj *
-mlx5_rxq_obj_get(struct rte_eth_dev *dev, uint16_t idx)
-{
- struct mlx5_priv *priv = dev->data->dev_private;
- struct mlx5_rxq_data *rxq_data = (*priv->rxqs)[idx];
- struct mlx5_rxq_ctrl *rxq_ctrl;
-
- if (idx >= priv->rxqs_n)
- return NULL;
- if (!rxq_data)
- return NULL;
- rxq_ctrl = container_of(rxq_data, struct mlx5_rxq_ctrl, rxq);
- if (rxq_ctrl->obj)
- rte_atomic32_inc(&rxq_ctrl->obj->refcnt);
- return rxq_ctrl->obj;
-}
-
-/**
* Release the resources allocated for an RQ DevX object.
*
* @param rxq_ctrl
@@ -920,57 +892,50 @@
*
* @param rxq_obj
* Verbs/DevX Rx queue object.
- *
- * @return
- * 1 while a reference on it exists, 0 when freed.
*/
-static int
+static void
mlx5_rxq_obj_release(struct mlx5_rxq_obj *rxq_obj)
{
struct mlx5_priv *priv = rxq_obj->rxq_ctrl->priv;
struct mlx5_rxq_ctrl *rxq_ctrl = rxq_obj->rxq_ctrl;
MLX5_ASSERT(rxq_obj);
- if (rte_atomic32_dec_and_test(&rxq_obj->refcnt)) {
- switch (rxq_obj->type) {
- case MLX5_RXQ_OBJ_TYPE_IBV:
- MLX5_ASSERT(rxq_obj->wq);
- MLX5_ASSERT(rxq_obj->ibv_cq);
- rxq_free_elts(rxq_ctrl);
- claim_zero(mlx5_glue->destroy_wq(rxq_obj->wq));
- claim_zero(mlx5_glue->destroy_cq(rxq_obj->ibv_cq));
- if (rxq_obj->ibv_channel)
- claim_zero(mlx5_glue->destroy_comp_channel
- (rxq_obj->ibv_channel));
- break;
- case MLX5_RXQ_OBJ_TYPE_DEVX_RQ:
- MLX5_ASSERT(rxq_obj->rq);
- MLX5_ASSERT(rxq_obj->devx_cq);
- rxq_free_elts(rxq_ctrl);
- claim_zero(mlx5_devx_cmd_destroy(rxq_obj->rq));
- claim_zero(mlx5_devx_cmd_destroy(rxq_obj->devx_cq));
- claim_zero(mlx5_release_dbr(&priv->dbrpgs,
- rxq_ctrl->rq_dbr_umem_id,
- rxq_ctrl->rq_dbr_offset));
- claim_zero(mlx5_release_dbr(&priv->dbrpgs,
- rxq_ctrl->cq_dbr_umem_id,
- rxq_ctrl->cq_dbr_offset));
- if (rxq_obj->devx_channel)
- mlx5_glue->devx_destroy_event_channel
+ switch (rxq_obj->type) {
+ case MLX5_RXQ_OBJ_TYPE_IBV:
+ MLX5_ASSERT(rxq_obj->wq);
+ MLX5_ASSERT(rxq_obj->ibv_cq);
+ rxq_free_elts(rxq_ctrl);
+ claim_zero(mlx5_glue->destroy_wq(rxq_obj->wq));
+ claim_zero(mlx5_glue->destroy_cq(rxq_obj->ibv_cq));
+ if (rxq_obj->ibv_channel)
+ claim_zero(mlx5_glue->destroy_comp_channel
+ (rxq_obj->ibv_channel));
+ break;
+ case MLX5_RXQ_OBJ_TYPE_DEVX_RQ:
+ MLX5_ASSERT(rxq_obj->rq);
+ MLX5_ASSERT(rxq_obj->devx_cq);
+ rxq_free_elts(rxq_ctrl);
+ claim_zero(mlx5_devx_cmd_destroy(rxq_obj->rq));
+ claim_zero(mlx5_devx_cmd_destroy(rxq_obj->devx_cq));
+ claim_zero(mlx5_release_dbr(&priv->dbrpgs,
+ rxq_ctrl->rq_dbr_umem_id,
+ rxq_ctrl->rq_dbr_offset));
+ claim_zero(mlx5_release_dbr(&priv->dbrpgs,
+ rxq_ctrl->cq_dbr_umem_id,
+ rxq_ctrl->cq_dbr_offset));
+ if (rxq_obj->devx_channel)
+ mlx5_glue->devx_destroy_event_channel
(rxq_obj->devx_channel);
- rxq_release_devx_rq_resources(rxq_ctrl);
- rxq_release_devx_cq_resources(rxq_ctrl);
- break;
- case MLX5_RXQ_OBJ_TYPE_DEVX_HAIRPIN:
- MLX5_ASSERT(rxq_obj->rq);
- rxq_obj_hairpin_release(rxq_obj);
- break;
- }
- LIST_REMOVE(rxq_obj, next);
- mlx5_free(rxq_obj);
- return 0;
+ rxq_release_devx_rq_resources(rxq_ctrl);
+ rxq_release_devx_cq_resources(rxq_ctrl);
+ break;
+ case MLX5_RXQ_OBJ_TYPE_DEVX_HAIRPIN:
+ MLX5_ASSERT(rxq_obj->rq);
+ rxq_obj_hairpin_release(rxq_obj);
+ break;
}
- return 1;
+ LIST_REMOVE(rxq_obj, next);
+ mlx5_free(rxq_obj);
}
/**
@@ -1009,7 +974,8 @@
intr_handle->type = RTE_INTR_HANDLE_EXT;
for (i = 0; i != n; ++i) {
/* This rxq obj must not be released in this function. */
- struct mlx5_rxq_obj *rxq_obj = mlx5_rxq_obj_get(dev, i);
+ struct mlx5_rxq_ctrl *rxq_ctrl = mlx5_rxq_get(dev, i);
+ struct mlx5_rxq_obj *rxq_obj = rxq_ctrl ? rxq_ctrl->obj : NULL;
int rc;
/* Skip queues that cannot request interrupts. */
@@ -1019,6 +985,9 @@
intr_handle->intr_vec[i] =
RTE_INTR_VEC_RXTX_OFFSET +
RTE_MAX_RXTX_INTR_VEC_ID;
+ /* Decrease the rxq_ctrl's refcnt */
+ if (rxq_ctrl)
+ mlx5_rxq_release(dev, i);
continue;
}
if (count >= RTE_MAX_RXTX_INTR_VEC_ID) {
@@ -1073,9 +1042,6 @@
if (!intr_handle->intr_vec)
goto free;
for (i = 0; i != n; ++i) {
- struct mlx5_rxq_ctrl *rxq_ctrl;
- struct mlx5_rxq_data *rxq_data;
-
if (intr_handle->intr_vec[i] == RTE_INTR_VEC_RXTX_OFFSET +
RTE_MAX_RXTX_INTR_VEC_ID)
continue;
@@ -1083,10 +1049,7 @@
* Need to access directly the queue to release the reference
* kept in mlx5_rx_intr_vec_enable().
*/
- rxq_data = (*priv->rxqs)[i];
- rxq_ctrl = container_of(rxq_data, struct mlx5_rxq_ctrl, rxq);
- if (rxq_ctrl->obj)
- mlx5_rxq_obj_release(rxq_ctrl->obj);
+ mlx5_rxq_release(dev, i);
}
free:
rte_intr_free_epoll_fd(intr_handle);
@@ -1135,28 +1098,23 @@
int
mlx5_rx_intr_enable(struct rte_eth_dev *dev, uint16_t rx_queue_id)
{
- struct mlx5_priv *priv = dev->data->dev_private;
- struct mlx5_rxq_data *rxq_data;
struct mlx5_rxq_ctrl *rxq_ctrl;
- rxq_data = (*priv->rxqs)[rx_queue_id];
- if (!rxq_data) {
- rte_errno = EINVAL;
- return -rte_errno;
- }
- rxq_ctrl = container_of(rxq_data, struct mlx5_rxq_ctrl, rxq);
+ rxq_ctrl = mlx5_rxq_get(dev, rx_queue_id);
+ if (!rxq_ctrl)
+ goto error;
if (rxq_ctrl->irq) {
- struct mlx5_rxq_obj *rxq_obj;
-
- rxq_obj = mlx5_rxq_obj_get(dev, rx_queue_id);
- if (!rxq_obj) {
- rte_errno = EINVAL;
- return -rte_errno;
+ if (!rxq_ctrl->obj) {
+ mlx5_rxq_release(dev, rx_queue_id);
+ goto error;
}
- mlx5_arm_cq(rxq_data, rxq_data->cq_arm_sn);
- mlx5_rxq_obj_release(rxq_obj);
+ mlx5_arm_cq(&rxq_ctrl->rxq, rxq_ctrl->rxq.cq_arm_sn);
}
+ mlx5_rxq_release(dev, rx_queue_id);
return 0;
+error:
+ rte_errno = EINVAL;
+ return -rte_errno;
}
/**
@@ -1173,32 +1131,29 @@
int
mlx5_rx_intr_disable(struct rte_eth_dev *dev, uint16_t rx_queue_id)
{
- struct mlx5_priv *priv = dev->data->dev_private;
- struct mlx5_rxq_data *rxq_data;
struct mlx5_rxq_ctrl *rxq_ctrl;
struct mlx5_rxq_obj *rxq_obj = NULL;
struct ibv_cq *ev_cq;
void *ev_ctx;
- int ret;
+ int ret = 0;
- rxq_data = (*priv->rxqs)[rx_queue_id];
- if (!rxq_data) {
+ rxq_ctrl = mlx5_rxq_get(dev, rx_queue_id);
+ if (!rxq_ctrl) {
rte_errno = EINVAL;
return -rte_errno;
}
- rxq_ctrl = container_of(rxq_data, struct mlx5_rxq_ctrl, rxq);
- if (!rxq_ctrl->irq)
+ if (!rxq_ctrl->irq) {
+ mlx5_rxq_release(dev, rx_queue_id);
return 0;
- rxq_obj = mlx5_rxq_obj_get(dev, rx_queue_id);
- if (!rxq_obj) {
- rte_errno = EINVAL;
- return -rte_errno;
}
+ rxq_obj = rxq_ctrl->obj;
+ if (!rxq_obj)
+ goto error;
if (rxq_obj->type == MLX5_RXQ_OBJ_TYPE_IBV) {
ret = mlx5_glue->get_cq_event(rxq_obj->ibv_channel, &ev_cq,
&ev_ctx);
if (ret < 0 || ev_cq != rxq_obj->ibv_cq)
- goto exit;
+ goto error;
mlx5_glue->ack_cq_events(rxq_obj->ibv_cq, 1);
} else if (rxq_obj->type == MLX5_RXQ_OBJ_TYPE_DEVX_RQ) {
#ifdef HAVE_IBV_DEVX_EVENT
@@ -1213,13 +1168,13 @@
sizeof(out.buf));
if (ret < 0 || out.event_resp.cookie !=
(uint64_t)(uintptr_t)rxq_obj->devx_cq)
- goto exit;
+ goto error;
#endif /* HAVE_IBV_DEVX_EVENT */
}
- rxq_data->cq_arm_sn++;
- mlx5_rxq_obj_release(rxq_obj);
+ rxq_ctrl->rxq.cq_arm_sn++;
+ mlx5_rxq_release(dev, rx_queue_id);
return 0;
-exit:
+error:
/**
* For ret < 0 save the errno (may be EAGAIN which means the get_event
* function was called before receiving one).
@@ -1229,8 +1184,7 @@
else
rte_errno = EINVAL;
ret = rte_errno; /* Save rte_errno before cleanup. */
- if (rxq_obj)
- mlx5_rxq_obj_release(rxq_obj);
+ mlx5_rxq_release(dev, rx_queue_id);
if (ret != EAGAIN)
DRV_LOG(WARNING, "port %u unable to disable interrupt on Rx queue %d",
dev->data->port_id, rx_queue_id);
@@ -1729,7 +1683,6 @@
}
DRV_LOG(DEBUG, "port %u rxq %u updated with %p", dev->data->port_id,
idx, (void *)&tmpl);
- rte_atomic32_inc(&tmpl->refcnt);
LIST_INSERT_HEAD(&priv->rxqsobj, tmpl, next);
dev->data->rx_queue_state[idx] = RTE_ETH_QUEUE_STATE_HAIRPIN;
return tmpl;
@@ -1944,7 +1897,6 @@ struct mlx5_rxq_obj *
rxq_data->cq_ci = 0;
DRV_LOG(DEBUG, "port %u rxq %u updated with %p", dev->data->port_id,
idx, (void *)&tmpl);
- rte_atomic32_inc(&tmpl->refcnt);
LIST_INSERT_HEAD(&priv->rxqsobj, tmpl, next);
dev->data->rx_queue_state[idx] = RTE_ETH_QUEUE_STATE_STARTED;
return tmpl;
@@ -2546,13 +2498,11 @@ struct mlx5_rxq_ctrl *
mlx5_rxq_get(struct rte_eth_dev *dev, uint16_t idx)
{
struct mlx5_priv *priv = dev->data->dev_private;
+ struct mlx5_rxq_data *rxq_data = (*priv->rxqs)[idx];
struct mlx5_rxq_ctrl *rxq_ctrl = NULL;
- if ((*priv->rxqs)[idx]) {
- rxq_ctrl = container_of((*priv->rxqs)[idx],
- struct mlx5_rxq_ctrl,
- rxq);
- mlx5_rxq_obj_get(dev, idx);
+ if (rxq_data) {
+ rxq_ctrl = container_of(rxq_data, struct mlx5_rxq_ctrl, rxq);
rte_atomic32_inc(&rxq_ctrl->refcnt);
}
return rxq_ctrl;
@@ -2578,18 +2528,18 @@ struct mlx5_rxq_ctrl *
if (!(*priv->rxqs)[idx])
return 0;
rxq_ctrl = container_of((*priv->rxqs)[idx], struct mlx5_rxq_ctrl, rxq);
- MLX5_ASSERT(rxq_ctrl->priv);
- if (rxq_ctrl->obj && !mlx5_rxq_obj_release(rxq_ctrl->obj))
+ if (!rte_atomic32_dec_and_test(&rxq_ctrl->refcnt))
+ return 1;
+ if (rxq_ctrl->obj) {
+ mlx5_rxq_obj_release(rxq_ctrl->obj);
rxq_ctrl->obj = NULL;
- if (rte_atomic32_dec_and_test(&rxq_ctrl->refcnt)) {
- if (rxq_ctrl->type == MLX5_RXQ_TYPE_STANDARD)
- mlx5_mr_btree_free(&rxq_ctrl->rxq.mr_ctrl.cache_bh);
- LIST_REMOVE(rxq_ctrl, next);
- mlx5_free(rxq_ctrl);
- (*priv->rxqs)[idx] = NULL;
- return 0;
}
- return 1;
+ if (rxq_ctrl->type == MLX5_RXQ_TYPE_STANDARD)
+ mlx5_mr_btree_free(&rxq_ctrl->rxq.mr_ctrl.cache_bh);
+ LIST_REMOVE(rxq_ctrl, next);
+ mlx5_free(rxq_ctrl);
+ (*priv->rxqs)[idx] = NULL;
+ return 0;
}
/**
diff --git a/drivers/net/mlx5/mlx5_rxtx.h b/drivers/net/mlx5/mlx5_rxtx.h
index a161d4e..b092e43 100644
--- a/drivers/net/mlx5/mlx5_rxtx.h
+++ b/drivers/net/mlx5/mlx5_rxtx.h
@@ -171,7 +171,6 @@ enum mlx5_rxq_type {
/* Verbs/DevX Rx queue elements. */
struct mlx5_rxq_obj {
LIST_ENTRY(mlx5_rxq_obj) next; /* Pointer to the next element. */
- rte_atomic32_t refcnt; /* Reference counter. */
struct mlx5_rxq_ctrl *rxq_ctrl; /* Back pointer to parent. */
enum mlx5_rxq_obj_type type;
int fd; /* File descriptor for event channel */
--
1.8.3.1
* [dpdk-dev] [PATCH v1 05/18] net/mlx5: separate Rx queue object creations
From: Michael Baum @ 2020-09-03 10:13 UTC (permalink / raw)
To: dev; +Cc: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko
In preparation for Windows OS support, the Verbs operations should be
separated into another file.
This way, the build can easily exclude the unsupported Verbs APIs from the
compilation process.
Define an operation structure and a DevX module in addition to the existing
Linux Verbs module.
Separate Rx object creation into the Verbs/DevX modules and update the
operation structure according to the OS support and the user
configuration.
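A sketch of the resulting dispatch (the structure and the selection are
taken from the hunks below; the call through the table is an illustrative
assumption, not a hunk from this patch):

    /* HW objects operations structure (mlx5.h). */
    struct mlx5_obj_ops {
        int (*rxq_obj_modify_vlan_strip)(struct mlx5_rxq_obj *rxq_obj,
                                         int on);
        struct mlx5_rxq_obj *(*rxq_obj_new)(struct rte_eth_dev *dev,
                                            uint16_t idx);
        void (*rxq_obj_release)(struct mlx5_rxq_obj *rxq_obj);
    };

    /* Selected once at device spawn (linux/mlx5_os.c). */
    if (config->devx && config->dv_flow_en)
        priv->obj_ops = &devx_obj_ops;  /* DevX implementation. */
    else
        priv->obj_ops = &ibv_obj_ops;   /* Verbs implementation. */

    /* Callers then go through the table, e.g.: */
    rxq_ctrl->obj = priv->obj_ops->rxq_obj_new(dev, idx);
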
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
drivers/net/mlx5/Makefile | 1 +
drivers/net/mlx5/linux/mlx5_os.c | 5 +
drivers/net/mlx5/linux/mlx5_verbs.c | 344 ++++++++++++++
drivers/net/mlx5/linux/mlx5_verbs.h | 4 +
drivers/net/mlx5/meson.build | 1 +
drivers/net/mlx5/mlx5.h | 31 ++
drivers/net/mlx5/mlx5_devx.c | 562 +++++++++++++++++++++++
drivers/net/mlx5/mlx5_rxq.c | 861 +-----------------------------------
drivers/net/mlx5/mlx5_rxtx.h | 31 +-
drivers/net/mlx5/mlx5_trigger.c | 45 +-
10 files changed, 963 insertions(+), 922 deletions(-)
diff --git a/drivers/net/mlx5/Makefile b/drivers/net/mlx5/Makefile
index 6097688..50ad3bc 100644
--- a/drivers/net/mlx5/Makefile
+++ b/drivers/net/mlx5/Makefile
@@ -31,6 +31,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_MLX5_PMD) += mlx5_flow_meter.c
SRCS-$(CONFIG_RTE_LIBRTE_MLX5_PMD) += mlx5_flow_dv.c
SRCS-$(CONFIG_RTE_LIBRTE_MLX5_PMD) += mlx5_flow_verbs.c
SRCS-$(CONFIG_RTE_LIBRTE_MLX5_PMD) += mlx5_utils.c
+SRCS-$(CONFIG_RTE_LIBRTE_MLX5_PMD) += mlx5_devx.c
SRCS-$(CONFIG_RTE_LIBRTE_MLX5_PMD) += linux/mlx5_socket.c
SRCS-$(CONFIG_RTE_LIBRTE_MLX5_PMD) += linux/mlx5_os.c
SRCS-$(CONFIG_RTE_LIBRTE_MLX5_PMD) += linux/mlx5_ethdev_os.c
diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c
index bf1f82b..694fbd3 100644
--- a/drivers/net/mlx5/linux/mlx5_os.c
+++ b/drivers/net/mlx5/linux/mlx5_os.c
@@ -46,6 +46,7 @@
#include "rte_pmd_mlx5.h"
#include "mlx5_verbs.h"
#include "mlx5_nl.h"
+#include "mlx5_devx.h"
#define MLX5_TAGS_HLIST_ARRAY_SIZE 8192
@@ -1322,6 +1323,10 @@
goto error;
}
}
+ if (config->devx && config->dv_flow_en)
+ priv->obj_ops = &devx_obj_ops;
+ else
+ priv->obj_ops = &ibv_obj_ops;
return eth_dev;
error:
if (priv) {
diff --git a/drivers/net/mlx5/linux/mlx5_verbs.c b/drivers/net/mlx5/linux/mlx5_verbs.c
index 6271f0f..ff71513 100644
--- a/drivers/net/mlx5/linux/mlx5_verbs.c
+++ b/drivers/net/mlx5/linux/mlx5_verbs.c
@@ -8,6 +8,7 @@
#include <stdint.h>
#include <unistd.h>
#include <inttypes.h>
+#include <sys/queue.h>
#include "mlx5_autoconf.h"
@@ -21,6 +22,10 @@
#include <mlx5_common_mr.h>
#include <mlx5_rxtx.h>
#include <mlx5_verbs.h>
+#include <mlx5_rxtx.h>
+#include <mlx5_utils.h>
+#include <mlx5_malloc.h>
+
/**
* Register mr. Given protection domain pointer, pointer to addr and length
* register the memory region.
@@ -86,6 +91,345 @@
return mlx5_glue->modify_wq(rxq_obj->wq, &mod);
}
+/**
+ * Create a CQ Verbs object.
+ *
+ * @param dev
+ * Pointer to Ethernet device.
+ * @param priv
+ * Pointer to device private data.
+ * @param rxq_data
+ * Pointer to Rx queue data.
+ * @param cqe_n
+ * Number of CQEs in CQ.
+ * @param rxq_obj
+ * Pointer to Rx queue object data.
+ *
+ * @return
+ * The Verbs object initialized, NULL otherwise and rte_errno is set.
+ */
+static struct ibv_cq *
+mlx5_ibv_cq_new(struct rte_eth_dev *dev, struct mlx5_priv *priv,
+ struct mlx5_rxq_data *rxq_data,
+ unsigned int cqe_n, struct mlx5_rxq_obj *rxq_obj)
+{
+ struct {
+ struct ibv_cq_init_attr_ex ibv;
+ struct mlx5dv_cq_init_attr mlx5;
+ } cq_attr;
+
+ cq_attr.ibv = (struct ibv_cq_init_attr_ex){
+ .cqe = cqe_n,
+ .channel = rxq_obj->ibv_channel,
+ .comp_mask = 0,
+ };
+ cq_attr.mlx5 = (struct mlx5dv_cq_init_attr){
+ .comp_mask = 0,
+ };
+ if (priv->config.cqe_comp && !rxq_data->hw_timestamp) {
+ cq_attr.mlx5.comp_mask |=
+ MLX5DV_CQ_INIT_ATTR_MASK_COMPRESSED_CQE;
+#ifdef HAVE_IBV_DEVICE_STRIDING_RQ_SUPPORT
+ cq_attr.mlx5.cqe_comp_res_format =
+ mlx5_rxq_mprq_enabled(rxq_data) ?
+ MLX5DV_CQE_RES_FORMAT_CSUM_STRIDX :
+ MLX5DV_CQE_RES_FORMAT_HASH;
+#else
+ cq_attr.mlx5.cqe_comp_res_format = MLX5DV_CQE_RES_FORMAT_HASH;
+#endif
+ /*
+ * For vectorized Rx, it must not be doubled in order to
+ * make cq_ci and rq_ci aligned.
+ */
+ if (mlx5_rxq_check_vec_support(rxq_data) < 0)
+ cq_attr.ibv.cqe *= 2;
+ } else if (priv->config.cqe_comp && rxq_data->hw_timestamp) {
+ DRV_LOG(DEBUG,
+ "Port %u Rx CQE compression is disabled for HW"
+ " timestamp.",
+ dev->data->port_id);
+ }
+#ifdef HAVE_IBV_MLX5_MOD_CQE_128B_PAD
+ if (priv->config.cqe_pad) {
+ cq_attr.mlx5.comp_mask |= MLX5DV_CQ_INIT_ATTR_MASK_FLAGS;
+ cq_attr.mlx5.flags |= MLX5DV_CQ_INIT_ATTR_FLAGS_CQE_PAD;
+ }
+#endif
+ return mlx5_glue->cq_ex_to_cq(mlx5_glue->dv_create_cq(priv->sh->ctx,
+ &cq_attr.ibv,
+ &cq_attr.mlx5));
+}
+
+/**
+ * Create a WQ Verbs object.
+ *
+ * @param dev
+ * Pointer to Ethernet device.
+ * @param priv
+ * Pointer to device private data.
+ * @param rxq_data
+ * Pointer to Rx queue data.
+ * @param idx
+ * Queue index in DPDK Rx queue array.
+ * @param wqe_n
+ * Number of WQEs in WQ.
+ * @param rxq_obj
+ * Pointer to Rx queue object data.
+ *
+ * @return
+ * The Verbs object initialized, NULL otherwise and rte_errno is set.
+ */
+static struct ibv_wq *
+mlx5_ibv_wq_new(struct rte_eth_dev *dev, struct mlx5_priv *priv,
+ struct mlx5_rxq_data *rxq_data, uint16_t idx,
+ unsigned int wqe_n, struct mlx5_rxq_obj *rxq_obj)
+{
+ struct {
+ struct ibv_wq_init_attr ibv;
+#ifdef HAVE_IBV_DEVICE_STRIDING_RQ_SUPPORT
+ struct mlx5dv_wq_init_attr mlx5;
+#endif
+ } wq_attr;
+
+ wq_attr.ibv = (struct ibv_wq_init_attr){
+ .wq_context = NULL, /* Could be useful in the future. */
+ .wq_type = IBV_WQT_RQ,
+ /* Max number of outstanding WRs. */
+ .max_wr = wqe_n >> rxq_data->sges_n,
+ /* Max number of scatter/gather elements in a WR. */
+ .max_sge = 1 << rxq_data->sges_n,
+ .pd = priv->sh->pd,
+ .cq = rxq_obj->ibv_cq,
+ .comp_mask = IBV_WQ_FLAGS_CVLAN_STRIPPING | 0,
+ .create_flags = (rxq_data->vlan_strip ?
+ IBV_WQ_FLAGS_CVLAN_STRIPPING : 0),
+ };
+ /* By default, FCS (CRC) is stripped by hardware. */
+ if (rxq_data->crc_present) {
+ wq_attr.ibv.create_flags |= IBV_WQ_FLAGS_SCATTER_FCS;
+ wq_attr.ibv.comp_mask |= IBV_WQ_INIT_ATTR_FLAGS;
+ }
+ if (priv->config.hw_padding) {
+#if defined(HAVE_IBV_WQ_FLAG_RX_END_PADDING)
+ wq_attr.ibv.create_flags |= IBV_WQ_FLAG_RX_END_PADDING;
+ wq_attr.ibv.comp_mask |= IBV_WQ_INIT_ATTR_FLAGS;
+#elif defined(HAVE_IBV_WQ_FLAGS_PCI_WRITE_END_PADDING)
+ wq_attr.ibv.create_flags |= IBV_WQ_FLAGS_PCI_WRITE_END_PADDING;
+ wq_attr.ibv.comp_mask |= IBV_WQ_INIT_ATTR_FLAGS;
+#endif
+ }
+#ifdef HAVE_IBV_DEVICE_STRIDING_RQ_SUPPORT
+ wq_attr.mlx5 = (struct mlx5dv_wq_init_attr){
+ .comp_mask = 0,
+ };
+ if (mlx5_rxq_mprq_enabled(rxq_data)) {
+ struct mlx5dv_striding_rq_init_attr *mprq_attr =
+ &wq_attr.mlx5.striding_rq_attrs;
+
+ wq_attr.mlx5.comp_mask |= MLX5DV_WQ_INIT_ATTR_MASK_STRIDING_RQ;
+ *mprq_attr = (struct mlx5dv_striding_rq_init_attr){
+ .single_stride_log_num_of_bytes = rxq_data->strd_sz_n,
+ .single_wqe_log_num_of_strides = rxq_data->strd_num_n,
+ .two_byte_shift_en = MLX5_MPRQ_TWO_BYTE_SHIFT,
+ };
+ }
+ rxq_obj->wq = mlx5_glue->dv_create_wq(priv->sh->ctx, &wq_attr.ibv,
+ &wq_attr.mlx5);
+#else
+ rxq_obj->wq = mlx5_glue->create_wq(priv->sh->ctx, &wq_attr.ibv);
+#endif
+ if (rxq_obj->wq) {
+ /*
+ * Make sure number of WRs*SGEs match expectations since a queue
+ * cannot allocate more than "desc" buffers.
+ */
+ if (wq_attr.ibv.max_wr != (wqe_n >> rxq_data->sges_n) ||
+ wq_attr.ibv.max_sge != (1u << rxq_data->sges_n)) {
+ DRV_LOG(ERR,
+ "Port %u Rx queue %u requested %u*%u but got"
+ " %u*%u WRs*SGEs.",
+ dev->data->port_id, idx,
+ wqe_n >> rxq_data->sges_n,
+ (1 << rxq_data->sges_n),
+ wq_attr.ibv.max_wr, wq_attr.ibv.max_sge);
+ claim_zero(mlx5_glue->destroy_wq(rxq_obj->wq));
+ rxq_obj->wq = NULL;
+ rte_errno = EINVAL;
+ }
+ }
+ return rxq_obj->wq;
+}
+
+/**
+ * Create the Rx queue Verbs object.
+ *
+ * @param dev
+ * Pointer to Ethernet device.
+ * @param idx
+ * Queue index in DPDK Rx queue array.
+ *
+ * @return
+ * The Verbs object initialized, NULL otherwise and rte_errno is set.
+ */
+static struct mlx5_rxq_obj *
+mlx5_rxq_ibv_obj_new(struct rte_eth_dev *dev, uint16_t idx)
+{
+ struct mlx5_priv *priv = dev->data->dev_private;
+ struct mlx5_rxq_data *rxq_data = (*priv->rxqs)[idx];
+ struct mlx5_rxq_ctrl *rxq_ctrl =
+ container_of(rxq_data, struct mlx5_rxq_ctrl, rxq);
+ struct ibv_wq_attr mod;
+ unsigned int cqe_n;
+ unsigned int wqe_n = 1 << rxq_data->elts_n;
+ struct mlx5_rxq_obj *tmpl = NULL;
+ struct mlx5dv_cq cq_info;
+ struct mlx5dv_rwq rwq;
+ int ret = 0;
+ struct mlx5dv_obj obj;
+
+ MLX5_ASSERT(rxq_data);
+ MLX5_ASSERT(!rxq_ctrl->obj);
+ priv->verbs_alloc_ctx.type = MLX5_VERBS_ALLOC_TYPE_RX_QUEUE;
+ priv->verbs_alloc_ctx.obj = rxq_ctrl;
+ tmpl = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO, sizeof(*tmpl), 0,
+ rxq_ctrl->socket);
+ if (!tmpl) {
+ DRV_LOG(ERR, "port %u Rx queue %u cannot allocate resources",
+ dev->data->port_id, rxq_data->idx);
+ rte_errno = ENOMEM;
+ goto error;
+ }
+ tmpl->type = MLX5_RXQ_OBJ_TYPE_IBV;
+ tmpl->rxq_ctrl = rxq_ctrl;
+ if (rxq_ctrl->irq) {
+ tmpl->ibv_channel =
+ mlx5_glue->create_comp_channel(priv->sh->ctx);
+ if (!tmpl->ibv_channel) {
+ DRV_LOG(ERR, "Port %u: comp channel creation failure.",
+ dev->data->port_id);
+ rte_errno = ENOMEM;
+ goto error;
+ }
+ tmpl->fd = ((struct ibv_comp_channel *)(tmpl->ibv_channel))->fd;
+ }
+ if (mlx5_rxq_mprq_enabled(rxq_data))
+ cqe_n = wqe_n * (1 << rxq_data->strd_num_n) - 1;
+ else
+ cqe_n = wqe_n - 1;
+ DRV_LOG(DEBUG, "port %u device_attr.max_qp_wr is %d",
+ dev->data->port_id, priv->sh->device_attr.max_qp_wr);
+ DRV_LOG(DEBUG, "port %u device_attr.max_sge is %d",
+ dev->data->port_id, priv->sh->device_attr.max_sge);
+ /* Create CQ using Verbs API. */
+ tmpl->ibv_cq = mlx5_ibv_cq_new(dev, priv, rxq_data, cqe_n, tmpl);
+ if (!tmpl->ibv_cq) {
+ DRV_LOG(ERR, "Port %u Rx queue %u CQ creation failure.",
+ dev->data->port_id, idx);
+ rte_errno = ENOMEM;
+ goto error;
+ }
+ obj.cq.in = tmpl->ibv_cq;
+ obj.cq.out = &cq_info;
+ ret = mlx5_glue->dv_init_obj(&obj, MLX5DV_OBJ_CQ);
+ if (ret) {
+ rte_errno = ret;
+ goto error;
+ }
+ if (cq_info.cqe_size != RTE_CACHE_LINE_SIZE) {
+ DRV_LOG(ERR,
+ "Port %u wrong MLX5_CQE_SIZE environment "
+ "variable value: it should be set to %u.",
+ dev->data->port_id, RTE_CACHE_LINE_SIZE);
+ rte_errno = EINVAL;
+ goto error;
+ }
+ /* Fill the rings. */
+ rxq_data->cqe_n = log2above(cq_info.cqe_cnt);
+ rxq_data->cq_db = cq_info.dbrec;
+ rxq_data->cqes = (volatile struct mlx5_cqe (*)[])(uintptr_t)cq_info.buf;
+ rxq_data->cq_uar = cq_info.cq_uar;
+ rxq_data->cqn = cq_info.cqn;
+ /* Create WQ (RQ) using Verbs API. */
+ tmpl->wq = mlx5_ibv_wq_new(dev, priv, rxq_data, idx, wqe_n, tmpl);
+ if (!tmpl->wq) {
+ DRV_LOG(ERR, "Port %u Rx queue %u WQ creation failure.",
+ dev->data->port_id, idx);
+ rte_errno = ENOMEM;
+ goto error;
+ }
+ /* Change queue state to ready. */
+ mod = (struct ibv_wq_attr){
+ .attr_mask = IBV_WQ_ATTR_STATE,
+ .wq_state = IBV_WQS_RDY,
+ };
+ ret = mlx5_glue->modify_wq(tmpl->wq, &mod);
+ if (ret) {
+ DRV_LOG(ERR,
+ "Port %u Rx queue %u WQ state to IBV_WQS_RDY failed.",
+ dev->data->port_id, idx);
+ rte_errno = ret;
+ goto error;
+ }
+ obj.rwq.in = tmpl->wq;
+ obj.rwq.out = &rwq;
+ ret = mlx5_glue->dv_init_obj(&obj, MLX5DV_OBJ_RWQ);
+ if (ret) {
+ rte_errno = ret;
+ goto error;
+ }
+ rxq_data->wqes = rwq.buf;
+ rxq_data->rq_db = rwq.dbrec;
+ rxq_data->cq_arm_sn = 0;
+ mlx5_rxq_initialize(rxq_data);
+ rxq_data->cq_ci = 0;
+ DRV_LOG(DEBUG, "port %u rxq %u updated with %p", dev->data->port_id,
+ idx, (void *)&tmpl);
+ LIST_INSERT_HEAD(&priv->rxqsobj, tmpl, next);
+ priv->verbs_alloc_ctx.type = MLX5_VERBS_ALLOC_TYPE_NONE;
+ dev->data->rx_queue_state[idx] = RTE_ETH_QUEUE_STATE_STARTED;
+ rxq_ctrl->wqn = ((struct ibv_wq *)(tmpl->wq))->wq_num;
+ return tmpl;
+error:
+ if (tmpl) {
+ ret = rte_errno; /* Save rte_errno before cleanup. */
+ if (tmpl->wq)
+ claim_zero(mlx5_glue->destroy_wq(tmpl->wq));
+ if (tmpl->ibv_cq)
+ claim_zero(mlx5_glue->destroy_cq(tmpl->ibv_cq));
+ if (tmpl->ibv_channel)
+ claim_zero(mlx5_glue->destroy_comp_channel
+ (tmpl->ibv_channel));
+ mlx5_free(tmpl);
+ rte_errno = ret; /* Restore rte_errno. */
+ }
+ priv->verbs_alloc_ctx.type = MLX5_VERBS_ALLOC_TYPE_NONE;
+ return NULL;
+}
+
+/**
+ * Release an Rx verbs queue object.
+ *
+ * @param rxq_obj
+ * Verbs Rx queue object.
+ */
+static void
+mlx5_rxq_ibv_obj_release(struct mlx5_rxq_obj *rxq_obj)
+{
+ MLX5_ASSERT(rxq_obj);
+ MLX5_ASSERT(rxq_obj->wq);
+ MLX5_ASSERT(rxq_obj->ibv_cq);
+ rxq_free_elts(rxq_obj->rxq_ctrl);
+ claim_zero(mlx5_glue->destroy_wq(rxq_obj->wq));
+ claim_zero(mlx5_glue->destroy_cq(rxq_obj->ibv_cq));
+ if (rxq_obj->ibv_channel)
+ claim_zero(mlx5_glue->destroy_comp_channel
+ (rxq_obj->ibv_channel));
+ LIST_REMOVE(rxq_obj, next);
+ mlx5_free(rxq_obj);
+}
+
struct mlx5_obj_ops ibv_obj_ops = {
.rxq_obj_modify_vlan_strip = mlx5_rxq_obj_modify_wq_vlan_strip,
+ .rxq_obj_new = mlx5_rxq_ibv_obj_new,
+ .rxq_obj_release = mlx5_rxq_ibv_obj_release,
};
diff --git a/drivers/net/mlx5/linux/mlx5_verbs.h b/drivers/net/mlx5/linux/mlx5_verbs.h
index 4f0b637..2e69c0f 100644
--- a/drivers/net/mlx5/linux/mlx5_verbs.h
+++ b/drivers/net/mlx5/linux/mlx5_verbs.h
@@ -5,6 +5,8 @@
#ifndef RTE_PMD_MLX5_VERBS_H_
#define RTE_PMD_MLX5_VERBS_H_
+#include "mlx5.h"
+
struct mlx5_verbs_ops {
mlx5_reg_mr_t reg_mr;
mlx5_dereg_mr_t dereg_mr;
@@ -12,4 +14,6 @@ struct mlx5_verbs_ops {
/* Verbs ops struct */
extern const struct mlx5_verbs_ops mlx5_verbs_ops;
+extern struct mlx5_obj_ops ibv_obj_ops;
+
#endif /* RTE_PMD_MLX5_VERBS_H_ */
diff --git a/drivers/net/mlx5/meson.build b/drivers/net/mlx5/meson.build
index 23462d1..9a97bb9 100644
--- a/drivers/net/mlx5/meson.build
+++ b/drivers/net/mlx5/meson.build
@@ -28,6 +28,7 @@ sources = files(
'mlx5_txpp.c',
'mlx5_vlan.c',
'mlx5_utils.c',
+ 'mlx5_devx.c',
)
if (dpdk_conf.has('RTE_ARCH_X86_64')
or dpdk_conf.has('RTE_ARCH_ARM64')
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index f29a12c..eba5df9 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -676,9 +676,40 @@ struct mlx5_proc_priv {
#define MLX5_PROC_PRIV(port_id) \
((struct mlx5_proc_priv *)rte_eth_devices[port_id].process_private)
+enum mlx5_rxq_obj_type {
+ MLX5_RXQ_OBJ_TYPE_IBV, /* mlx5_rxq_obj with ibv_wq. */
+ MLX5_RXQ_OBJ_TYPE_DEVX_RQ, /* mlx5_rxq_obj with mlx5_devx_rq. */
+ MLX5_RXQ_OBJ_TYPE_DEVX_HAIRPIN,
+ /* mlx5_rxq_obj with mlx5_devx_rq and hairpin support. */
+};
+
+/* Verbs/DevX Rx queue elements. */
+struct mlx5_rxq_obj {
+ LIST_ENTRY(mlx5_rxq_obj) next; /* Pointer to the next element. */
+ struct mlx5_rxq_ctrl *rxq_ctrl; /* Back pointer to parent. */
+ enum mlx5_rxq_obj_type type;
+ int fd; /* File descriptor for event channel */
+ RTE_STD_C11
+ union {
+ struct {
+ void *wq; /* Work Queue. */
+ void *ibv_cq; /* Completion Queue. */
+ void *ibv_channel;
+ };
+ struct {
+ struct mlx5_devx_obj *rq; /* DevX Rx Queue object. */
+ struct mlx5_devx_obj *devx_cq; /* DevX CQ object. */
+ void *devx_channel;
+ };
+ };
+};
+
/* HW objects operations structure. */
struct mlx5_obj_ops {
int (*rxq_obj_modify_vlan_strip)(struct mlx5_rxq_obj *rxq_obj, int on);
+ struct mlx5_rxq_obj *(*rxq_obj_new)(struct rte_eth_dev *dev,
+ uint16_t idx);
+ void (*rxq_obj_release)(struct mlx5_rxq_obj *rxq_obj);
};
struct mlx5_priv {
diff --git a/drivers/net/mlx5/mlx5_devx.c b/drivers/net/mlx5/mlx5_devx.c
index 7340412..191b3c2 100644
--- a/drivers/net/mlx5/mlx5_devx.c
+++ b/drivers/net/mlx5/mlx5_devx.c
@@ -43,6 +43,568 @@
return mlx5_devx_cmd_modify_rq(rxq_obj->rq, &rq_attr);
}
+/**
+ * Release the resources allocated for an RQ DevX object.
+ *
+ * @param rxq_ctrl
+ * DevX Rx queue object.
+ */
+static void
+rxq_release_devx_rq_resources(struct mlx5_rxq_ctrl *rxq_ctrl)
+{
+ if (rxq_ctrl->rxq.wqes) {
+ mlx5_free((void *)(uintptr_t)rxq_ctrl->rxq.wqes);
+ rxq_ctrl->rxq.wqes = NULL;
+ }
+ if (rxq_ctrl->wq_umem) {
+ mlx5_glue->devx_umem_dereg(rxq_ctrl->wq_umem);
+ rxq_ctrl->wq_umem = NULL;
+ }
+}
+
+/**
+ * Release the resources allocated for the Rx CQ DevX object.
+ *
+ * @param rxq_ctrl
+ * DevX Rx queue object.
+ */
+static void
+rxq_release_devx_cq_resources(struct mlx5_rxq_ctrl *rxq_ctrl)
+{
+ if (rxq_ctrl->rxq.cqes) {
+ rte_free((void *)(uintptr_t)rxq_ctrl->rxq.cqes);
+ rxq_ctrl->rxq.cqes = NULL;
+ }
+ if (rxq_ctrl->cq_umem) {
+ mlx5_glue->devx_umem_dereg(rxq_ctrl->cq_umem);
+ rxq_ctrl->cq_umem = NULL;
+ }
+}
+
+/**
+ * Release an Rx hairpin related resources.
+ *
+ * @param rxq_obj
+ * Hairpin Rx queue object.
+ */
+static void
+mlx5_rxq_obj_hairpin_release(struct mlx5_rxq_obj *rxq_obj)
+{
+ struct mlx5_devx_modify_rq_attr rq_attr = { 0 };
+
+ MLX5_ASSERT(rxq_obj);
+ rq_attr.state = MLX5_RQC_STATE_RST;
+ rq_attr.rq_state = MLX5_RQC_STATE_RDY;
+ mlx5_devx_cmd_modify_rq(rxq_obj->rq, &rq_attr);
+ claim_zero(mlx5_devx_cmd_destroy(rxq_obj->rq));
+}
+
+/**
+ * Release an Rx DevX queue object.
+ *
+ * @param rxq_obj
+ * DevX Rx queue object.
+ */
+static void
+mlx5_rxq_devx_obj_release(struct mlx5_rxq_obj *rxq_obj)
+{
+ struct mlx5_priv *priv = rxq_obj->rxq_ctrl->priv;
+
+ MLX5_ASSERT(rxq_obj);
+ MLX5_ASSERT(rxq_obj->rq);
+ if (rxq_obj->type == MLX5_RXQ_OBJ_TYPE_DEVX_HAIRPIN) {
+ mlx5_rxq_obj_hairpin_release(rxq_obj);
+ } else {
+ MLX5_ASSERT(rxq_obj->devx_cq);
+ rxq_free_elts(rxq_obj->rxq_ctrl);
+ claim_zero(mlx5_devx_cmd_destroy(rxq_obj->rq));
+ claim_zero(mlx5_devx_cmd_destroy(rxq_obj->devx_cq));
+ claim_zero(mlx5_release_dbr(&priv->dbrpgs,
+ rxq_obj->rxq_ctrl->rq_dbr_umem_id,
+ rxq_obj->rxq_ctrl->rq_dbr_offset));
+ claim_zero(mlx5_release_dbr(&priv->dbrpgs,
+ rxq_obj->rxq_ctrl->cq_dbr_umem_id,
+ rxq_obj->rxq_ctrl->cq_dbr_offset));
+ if (rxq_obj->devx_channel)
+ mlx5_glue->devx_destroy_event_channel
+ (rxq_obj->devx_channel);
+ rxq_release_devx_rq_resources(rxq_obj->rxq_ctrl);
+ rxq_release_devx_cq_resources(rxq_obj->rxq_ctrl);
+ }
+ LIST_REMOVE(rxq_obj, next);
+ mlx5_free(rxq_obj);
+}
+
+/**
+ * Fill common fields of create RQ attributes structure.
+ *
+ * @param rxq_data
+ * Pointer to Rx queue data.
+ * @param cqn
+ * CQ number to use with this RQ.
+ * @param rq_attr
+ * RQ attributes structure to fill..
+ */
+static void
+mlx5_devx_create_rq_attr_fill(struct mlx5_rxq_data *rxq_data, uint32_t cqn,
+ struct mlx5_devx_create_rq_attr *rq_attr)
+{
+ rq_attr->state = MLX5_RQC_STATE_RST;
+ rq_attr->vsd = (rxq_data->vlan_strip) ? 0 : 1;
+ rq_attr->cqn = cqn;
+ rq_attr->scatter_fcs = (rxq_data->crc_present) ? 1 : 0;
+}
+
+/**
+ * Fill common fields of DevX WQ attributes structure.
+ *
+ * @param priv
+ * Pointer to device private data.
+ * @param rxq_ctrl
+ * Pointer to Rx queue control structure.
+ * @param wq_attr
+ * WQ attributes structure to fill..
+ */
+static void
+mlx5_devx_wq_attr_fill(struct mlx5_priv *priv, struct mlx5_rxq_ctrl *rxq_ctrl,
+ struct mlx5_devx_wq_attr *wq_attr)
+{
+ wq_attr->end_padding_mode = priv->config.cqe_pad ?
+ MLX5_WQ_END_PAD_MODE_ALIGN :
+ MLX5_WQ_END_PAD_MODE_NONE;
+ wq_attr->pd = priv->sh->pdn;
+ wq_attr->dbr_addr = rxq_ctrl->rq_dbr_offset;
+ wq_attr->dbr_umem_id = rxq_ctrl->rq_dbr_umem_id;
+ wq_attr->dbr_umem_valid = 1;
+ wq_attr->wq_umem_id = mlx5_os_get_umem_id(rxq_ctrl->wq_umem);
+ wq_attr->wq_umem_valid = 1;
+}
+
+/**
+ * Create a RQ object using DevX.
+ *
+ * @param dev
+ * Pointer to Ethernet device.
+ * @param idx
+ * Queue index in DPDK Rx queue array.
+ * @param cqn
+ * CQ number to use with this RQ.
+ *
+ * @return
+ * The DevX object initialized, NULL otherwise and rte_errno is set.
+ */
+static struct mlx5_devx_obj *
+mlx5_devx_rq_new(struct rte_eth_dev *dev, uint16_t idx, uint32_t cqn)
+{
+ struct mlx5_priv *priv = dev->data->dev_private;
+ struct mlx5_rxq_data *rxq_data = (*priv->rxqs)[idx];
+ struct mlx5_rxq_ctrl *rxq_ctrl =
+ container_of(rxq_data, struct mlx5_rxq_ctrl, rxq);
+ struct mlx5_devx_create_rq_attr rq_attr = { 0 };
+ uint32_t wqe_n = 1 << (rxq_data->elts_n - rxq_data->sges_n);
+ uint32_t wq_size = 0;
+ uint32_t wqe_size = 0;
+ uint32_t log_wqe_size = 0;
+ void *buf = NULL;
+ struct mlx5_devx_obj *rq;
+
+ /* Fill RQ attributes. */
+ rq_attr.mem_rq_type = MLX5_RQC_MEM_RQ_TYPE_MEMORY_RQ_INLINE;
+ rq_attr.flush_in_error_en = 1;
+ mlx5_devx_create_rq_attr_fill(rxq_data, cqn, &rq_attr);
+ /* Fill WQ attributes for this RQ. */
+ if (mlx5_rxq_mprq_enabled(rxq_data)) {
+ rq_attr.wq_attr.wq_type = MLX5_WQ_TYPE_CYCLIC_STRIDING_RQ;
+ /*
+ * Number of strides in each WQE:
+ * 512*2^single_wqe_log_num_of_strides.
+ */
+ rq_attr.wq_attr.single_wqe_log_num_of_strides =
+ rxq_data->strd_num_n -
+ MLX5_MIN_SINGLE_WQE_LOG_NUM_STRIDES;
+ /* Stride size = (2^single_stride_log_num_of_bytes)*64B. */
+ rq_attr.wq_attr.single_stride_log_num_of_bytes =
+ rxq_data->strd_sz_n -
+ MLX5_MIN_SINGLE_STRIDE_LOG_NUM_BYTES;
+ wqe_size = sizeof(struct mlx5_wqe_mprq);
+ } else {
+ rq_attr.wq_attr.wq_type = MLX5_WQ_TYPE_CYCLIC;
+ wqe_size = sizeof(struct mlx5_wqe_data_seg);
+ }
+ log_wqe_size = log2above(wqe_size) + rxq_data->sges_n;
+ rq_attr.wq_attr.log_wq_stride = log_wqe_size;
+ rq_attr.wq_attr.log_wq_sz = rxq_data->elts_n - rxq_data->sges_n;
+ /* Calculate and allocate WQ memory space. */
+ wqe_size = 1 << log_wqe_size; /* round up power of two.*/
+ wq_size = wqe_n * wqe_size;
+ size_t alignment = MLX5_WQE_BUF_ALIGNMENT;
+ if (alignment == (size_t)-1) {
+ DRV_LOG(ERR, "Failed to get mem page size");
+ rte_errno = ENOMEM;
+ return NULL;
+ }
+ buf = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO, wq_size,
+ alignment, rxq_ctrl->socket);
+ if (!buf)
+ return NULL;
+ rxq_data->wqes = buf;
+ rxq_ctrl->wq_umem = mlx5_glue->devx_umem_reg(priv->sh->ctx,
+ buf, wq_size, 0);
+ if (!rxq_ctrl->wq_umem) {
+ mlx5_free(buf);
+ return NULL;
+ }
+ mlx5_devx_wq_attr_fill(priv, rxq_ctrl, &rq_attr.wq_attr);
+ rq = mlx5_devx_cmd_create_rq(priv->sh->ctx, &rq_attr, rxq_ctrl->socket);
+ if (!rq)
+ rxq_release_devx_rq_resources(rxq_ctrl);
+ return rq;
+}
+
+/**
+ * Create a DevX CQ object for an Rx queue.
+ *
+ * @param dev
+ * Pointer to Ethernet device.
+ * @param cqe_n
+ * Number of CQEs in CQ.
+ * @param idx
+ * Queue index in DPDK Rx queue array.
+ * @param rxq_obj
+ * Pointer to Rx queue object data.
+ *
+ * @return
+ * The DevX object initialized, NULL otherwise and rte_errno is set.
+ */
+static struct mlx5_devx_obj *
+mlx5_devx_cq_new(struct rte_eth_dev *dev, unsigned int cqe_n, uint16_t idx,
+ struct mlx5_rxq_obj *rxq_obj)
+{
+ struct mlx5_devx_obj *cq_obj = 0;
+ struct mlx5_devx_cq_attr cq_attr = { 0 };
+ struct mlx5_priv *priv = dev->data->dev_private;
+ struct mlx5_rxq_data *rxq_data = (*priv->rxqs)[idx];
+ struct mlx5_rxq_ctrl *rxq_ctrl =
+ container_of(rxq_data, struct mlx5_rxq_ctrl, rxq);
+ size_t page_size = rte_mem_page_size();
+ uint32_t lcore = (uint32_t)rte_lcore_to_cpu_id(-1);
+ uint32_t eqn = 0;
+ void *buf = NULL;
+ uint16_t event_nums[1] = {0};
+ uint32_t log_cqe_n;
+ uint32_t cq_size;
+ int ret = 0;
+
+ if (page_size == (size_t)-1) {
+ DRV_LOG(ERR, "Failed to get page_size.");
+ goto error;
+ }
+ if (priv->config.cqe_comp && !rxq_data->hw_timestamp &&
+ !rxq_data->lro) {
+ cq_attr.cqe_comp_en = MLX5DV_CQ_INIT_ATTR_MASK_COMPRESSED_CQE;
+#ifdef HAVE_IBV_DEVICE_STRIDING_RQ_SUPPORT
+ cq_attr.mini_cqe_res_format =
+ mlx5_rxq_mprq_enabled(rxq_data) ?
+ MLX5DV_CQE_RES_FORMAT_CSUM_STRIDX :
+ MLX5DV_CQE_RES_FORMAT_HASH;
+#else
+ cq_attr.mini_cqe_res_format = MLX5DV_CQE_RES_FORMAT_HASH;
+#endif
+ /*
+ * For vectorized Rx, it must not be doubled in order to
+ * make cq_ci and rq_ci aligned.
+ */
+ if (mlx5_rxq_check_vec_support(rxq_data) < 0)
+ cqe_n *= 2;
+ } else if (priv->config.cqe_comp && rxq_data->hw_timestamp) {
+ DRV_LOG(DEBUG,
+ "Port %u Rx CQE compression is disabled for HW"
+ " timestamp.",
+ dev->data->port_id);
+ } else if (priv->config.cqe_comp && rxq_data->lro) {
+ DRV_LOG(DEBUG,
+ "Port %u Rx CQE compression is disabled for LRO.",
+ dev->data->port_id);
+ }
+#ifdef HAVE_IBV_MLX5_MOD_CQE_128B_PAD
+ if (priv->config.cqe_pad)
+ cq_attr.cqe_size = MLX5DV_CQ_INIT_ATTR_FLAGS_CQE_PAD;
+#endif
+ log_cqe_n = log2above(cqe_n);
+ cq_size = sizeof(struct mlx5_cqe) * (1 << log_cqe_n);
+ /* Query the EQN for this core. */
+ if (mlx5_glue->devx_query_eqn(priv->sh->ctx, lcore, &eqn)) {
+ DRV_LOG(ERR, "Failed to query EQN for CQ.");
+ goto error;
+ }
+ cq_attr.eqn = eqn;
+ buf = rte_calloc_socket(__func__, 1, cq_size, page_size,
+ rxq_ctrl->socket);
+ if (!buf) {
+ DRV_LOG(ERR, "Failed to allocate memory for CQ.");
+ goto error;
+ }
+ rxq_data->cqes = (volatile struct mlx5_cqe (*)[])(uintptr_t)buf;
+ rxq_ctrl->cq_umem = mlx5_glue->devx_umem_reg(priv->sh->ctx, buf,
+ cq_size,
+ IBV_ACCESS_LOCAL_WRITE);
+ if (!rxq_ctrl->cq_umem) {
+ DRV_LOG(ERR, "Failed to register umem for CQ.");
+ goto error;
+ }
+ cq_attr.uar_page_id =
+ mlx5_os_get_devx_uar_page_id(priv->sh->devx_rx_uar);
+ cq_attr.q_umem_id = mlx5_os_get_umem_id(rxq_ctrl->cq_umem);
+ cq_attr.q_umem_valid = 1;
+ cq_attr.log_cq_size = log_cqe_n;
+ cq_attr.log_page_size = rte_log2_u32(page_size);
+ cq_attr.db_umem_offset = rxq_ctrl->cq_dbr_offset;
+ cq_attr.db_umem_id = rxq_ctrl->cq_dbr_umem_id;
+ cq_attr.db_umem_valid = 1;
+ cq_obj = mlx5_devx_cmd_create_cq(priv->sh->ctx, &cq_attr);
+ if (!cq_obj)
+ goto error;
+ rxq_data->cqe_n = log_cqe_n;
+ rxq_data->cqn = cq_obj->id;
+ if (rxq_obj->devx_channel) {
+ ret = mlx5_glue->devx_subscribe_devx_event
+ (rxq_obj->devx_channel,
+ cq_obj->obj,
+ sizeof(event_nums),
+ event_nums,
+ (uint64_t)(uintptr_t)cq_obj);
+ if (ret) {
+ DRV_LOG(ERR, "Fail to subscribe CQ to event channel.");
+ rte_errno = errno;
+ goto error;
+ }
+ }
+ /* Initialise CQ to 1's to mark HW ownership for all CQEs. */
+ memset((void *)(uintptr_t)rxq_data->cqes, 0xFF, cq_size);
+ return cq_obj;
+error:
+ if (cq_obj)
+ mlx5_devx_cmd_destroy(cq_obj);
+ rxq_release_devx_cq_resources(rxq_ctrl);
+ return NULL;
+}
+
+/**
+ * Create the Rx hairpin queue object.
+ *
+ * @param dev
+ * Pointer to Ethernet device.
+ * @param idx
+ * Queue index in DPDK Rx queue array.
+ *
+ * @return
+ * The hairpin DevX object initialized, NULL otherwise and rte_errno is set.
+ */
+static struct mlx5_rxq_obj *
+mlx5_rxq_obj_hairpin_new(struct rte_eth_dev *dev, uint16_t idx)
+{
+ struct mlx5_priv *priv = dev->data->dev_private;
+ struct mlx5_rxq_data *rxq_data = (*priv->rxqs)[idx];
+ struct mlx5_rxq_ctrl *rxq_ctrl =
+ container_of(rxq_data, struct mlx5_rxq_ctrl, rxq);
+ struct mlx5_devx_create_rq_attr attr = { 0 };
+ struct mlx5_rxq_obj *tmpl = NULL;
+ uint32_t max_wq_data;
+
+ MLX5_ASSERT(rxq_data);
+ MLX5_ASSERT(!rxq_ctrl->obj);
+ tmpl = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO, sizeof(*tmpl), 0,
+ rxq_ctrl->socket);
+ if (!tmpl) {
+ DRV_LOG(ERR, "port %u Rx queue %u cannot allocate resources",
+ dev->data->port_id, rxq_data->idx);
+ rte_errno = ENOMEM;
+ return NULL;
+ }
+ tmpl->type = MLX5_RXQ_OBJ_TYPE_DEVX_HAIRPIN;
+ tmpl->rxq_ctrl = rxq_ctrl;
+ attr.hairpin = 1;
+ max_wq_data = priv->config.hca_attr.log_max_hairpin_wq_data_sz;
+ /* Jumbo frames > 9KB should be supported, and more packets. */
+ if (priv->config.log_hp_size != (uint32_t)MLX5_ARG_UNSET) {
+ if (priv->config.log_hp_size > max_wq_data) {
+ DRV_LOG(ERR, "Total data size %u power of 2 is "
+ "too large for hairpin.",
+ priv->config.log_hp_size);
+ mlx5_free(tmpl);
+ rte_errno = ERANGE;
+ return NULL;
+ }
+ attr.wq_attr.log_hairpin_data_sz = priv->config.log_hp_size;
+ } else {
+ attr.wq_attr.log_hairpin_data_sz =
+ (max_wq_data < MLX5_HAIRPIN_JUMBO_LOG_SIZE) ?
+ max_wq_data : MLX5_HAIRPIN_JUMBO_LOG_SIZE;
+ }
+ /* Set the packets number to the maximum value for performance. */
+ attr.wq_attr.log_hairpin_num_packets =
+ attr.wq_attr.log_hairpin_data_sz -
+ MLX5_HAIRPIN_QUEUE_STRIDE;
+ tmpl->rq = mlx5_devx_cmd_create_rq(priv->sh->ctx, &attr,
+ rxq_ctrl->socket);
+ if (!tmpl->rq) {
+ DRV_LOG(ERR,
+ "Port %u Rx hairpin queue %u can't create rq object.",
+ dev->data->port_id, idx);
+ mlx5_free(tmpl);
+ rte_errno = errno;
+ return NULL;
+ }
+ DRV_LOG(DEBUG, "port %u rxq %u updated with %p", dev->data->port_id,
+ idx, (void *)&tmpl);
+ LIST_INSERT_HEAD(&priv->rxqsobj, tmpl, next);
+ dev->data->rx_queue_state[idx] = RTE_ETH_QUEUE_STATE_HAIRPIN;
+ return tmpl;
+}
+
+/**
+ * Create the Rx queue DevX object.
+ *
+ * @param dev
+ * Pointer to Ethernet device.
+ * @param idx
+ * Queue index in DPDK Rx queue array.
+ *
+ * @return
+ * The DevX object initialized, NULL otherwise and rte_errno is set.
+ */
+static struct mlx5_rxq_obj *
+mlx5_rxq_devx_obj_new(struct rte_eth_dev *dev, uint16_t idx)
+{
+ struct mlx5_priv *priv = dev->data->dev_private;
+ struct mlx5_rxq_data *rxq_data = (*priv->rxqs)[idx];
+ struct mlx5_rxq_ctrl *rxq_ctrl =
+ container_of(rxq_data, struct mlx5_rxq_ctrl, rxq);
+ unsigned int cqe_n;
+ unsigned int wqe_n = 1 << rxq_data->elts_n;
+ struct mlx5_rxq_obj *tmpl = NULL;
+ struct mlx5_devx_modify_rq_attr rq_attr = { 0 };
+ struct mlx5_devx_dbr_page *cq_dbr_page = NULL;
+ struct mlx5_devx_dbr_page *rq_dbr_page = NULL;
+ int64_t dbr_offset;
+ int ret = 0;
+
+ MLX5_ASSERT(rxq_data);
+ MLX5_ASSERT(!rxq_ctrl->obj);
+ if (rxq_ctrl->type == MLX5_RXQ_TYPE_HAIRPIN)
+ return mlx5_rxq_obj_hairpin_new(dev, idx);
+ tmpl = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO, sizeof(*tmpl), 0,
+ rxq_ctrl->socket);
+ if (!tmpl) {
+ DRV_LOG(ERR, "port %u Rx queue %u cannot allocate resources",
+ dev->data->port_id, rxq_data->idx);
+ rte_errno = ENOMEM;
+ goto error;
+ }
+ tmpl->type = MLX5_RXQ_OBJ_TYPE_DEVX_RQ;
+ tmpl->rxq_ctrl = rxq_ctrl;
+ if (rxq_ctrl->irq) {
+ int devx_ev_flag =
+ MLX5DV_DEVX_CREATE_EVENT_CHANNEL_FLAGS_OMIT_EV_DATA;
+
+ tmpl->devx_channel = mlx5_glue->devx_create_event_channel
+ (priv->sh->ctx,
+ devx_ev_flag);
+ if (!tmpl->devx_channel) {
+ rte_errno = errno;
+ DRV_LOG(ERR, "Failed to create event channel %d.",
+ rte_errno);
+ goto error;
+ }
+ tmpl->fd = mlx5_os_get_devx_channel_fd(tmpl->devx_channel);
+ }
+ if (mlx5_rxq_mprq_enabled(rxq_data))
+ cqe_n = wqe_n * (1 << rxq_data->strd_num_n) - 1;
+ else
+ cqe_n = wqe_n - 1;
+ DRV_LOG(DEBUG, "port %u device_attr.max_qp_wr is %d",
+ dev->data->port_id, priv->sh->device_attr.max_qp_wr);
+ DRV_LOG(DEBUG, "port %u device_attr.max_sge is %d",
+ dev->data->port_id, priv->sh->device_attr.max_sge);
+ /* Allocate CQ door-bell. */
+ dbr_offset = mlx5_get_dbr(priv->sh->ctx, &priv->dbrpgs, &cq_dbr_page);
+ if (dbr_offset < 0) {
+ DRV_LOG(ERR, "Failed to allocate CQ door-bell.");
+ goto error;
+ }
+ rxq_ctrl->cq_dbr_offset = dbr_offset;
+ rxq_ctrl->cq_dbr_umem_id = mlx5_os_get_umem_id(cq_dbr_page->umem);
+ rxq_data->cq_db = (uint32_t *)((uintptr_t)cq_dbr_page->dbrs +
+ (uintptr_t)rxq_ctrl->cq_dbr_offset);
+ rxq_data->cq_uar =
+ mlx5_os_get_devx_uar_base_addr(priv->sh->devx_rx_uar);
+ /* Create CQ using DevX API. */
+ tmpl->devx_cq = mlx5_devx_cq_new(dev, cqe_n, idx, tmpl);
+ if (!tmpl->devx_cq) {
+ DRV_LOG(ERR, "Failed to create CQ.");
+ goto error;
+ }
+ /* Allocate RQ door-bell. */
+ dbr_offset = mlx5_get_dbr(priv->sh->ctx, &priv->dbrpgs, &rq_dbr_page);
+ if (dbr_offset < 0) {
+ DRV_LOG(ERR, "Failed to allocate RQ door-bell.");
+ goto error;
+ }
+ rxq_ctrl->rq_dbr_offset = dbr_offset;
+ rxq_ctrl->rq_dbr_umem_id = mlx5_os_get_umem_id(rq_dbr_page->umem);
+ rxq_data->rq_db = (uint32_t *)((uintptr_t)rq_dbr_page->dbrs +
+ (uintptr_t)rxq_ctrl->rq_dbr_offset);
+ /* Create RQ using DevX API. */
+ tmpl->rq = mlx5_devx_rq_new(dev, idx, tmpl->devx_cq->id);
+ if (!tmpl->rq) {
+ DRV_LOG(ERR, "Port %u Rx queue %u RQ creation failure.",
+ dev->data->port_id, idx);
+ rte_errno = ENOMEM;
+ goto error;
+ }
+ /* Change queue state to ready. */
+ rq_attr.rq_state = MLX5_RQC_STATE_RST;
+ rq_attr.state = MLX5_RQC_STATE_RDY;
+ ret = mlx5_devx_cmd_modify_rq(tmpl->rq, &rq_attr);
+ if (ret)
+ goto error;
+ rxq_data->cq_arm_sn = 0;
+ mlx5_rxq_initialize(rxq_data);
+ rxq_data->cq_ci = 0;
+ DRV_LOG(DEBUG, "port %u rxq %u updated with %p", dev->data->port_id,
+ idx, (void *)&tmpl);
+ LIST_INSERT_HEAD(&priv->rxqsobj, tmpl, next);
+ dev->data->rx_queue_state[idx] = RTE_ETH_QUEUE_STATE_STARTED;
+ rxq_ctrl->wqn = tmpl->rq->id;
+ return tmpl;
+error:
+ if (tmpl) {
+ ret = rte_errno; /* Save rte_errno before cleanup. */
+ if (tmpl->rq)
+ claim_zero(mlx5_devx_cmd_destroy(tmpl->rq));
+ if (tmpl->devx_cq)
+ claim_zero(mlx5_devx_cmd_destroy(tmpl->devx_cq));
+ if (tmpl->devx_channel)
+ mlx5_glue->devx_destroy_event_channel
+ (tmpl->devx_channel);
+ mlx5_free(tmpl);
+ rte_errno = ret; /* Restore rte_errno. */
+ }
+ if (rq_dbr_page)
+ claim_zero(mlx5_release_dbr(&priv->dbrpgs,
+ rxq_ctrl->rq_dbr_umem_id,
+ rxq_ctrl->rq_dbr_offset));
+ if (cq_dbr_page)
+ claim_zero(mlx5_release_dbr(&priv->dbrpgs,
+ rxq_ctrl->cq_dbr_umem_id,
+ rxq_ctrl->cq_dbr_offset));
+ rxq_release_devx_rq_resources(rxq_ctrl);
+ rxq_release_devx_cq_resources(rxq_ctrl);
+ return NULL;
+}
+
struct mlx5_obj_ops devx_obj_ops = {
.rxq_obj_modify_vlan_strip = mlx5_rxq_obj_modify_rq_vlan_strip,
+ .rxq_obj_new = mlx5_rxq_devx_obj_new,
+ .rxq_obj_release = mlx5_rxq_devx_obj_release,
};
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index 506c4d3..daa92b6 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -347,7 +347,7 @@
* @param rxq_ctrl
* Pointer to RX queue structure.
*/
-static void
+void
rxq_free_elts(struct mlx5_rxq_ctrl *rxq_ctrl)
{
if (mlx5_rxq_mprq_enabled(&rxq_ctrl->rxq))
@@ -832,113 +832,6 @@
}
/**
- * Release the resources allocated for an RQ DevX object.
- *
- * @param rxq_ctrl
- * DevX Rx queue object.
- */
-static void
-rxq_release_devx_rq_resources(struct mlx5_rxq_ctrl *rxq_ctrl)
-{
- if (rxq_ctrl->rxq.wqes) {
- mlx5_free((void *)(uintptr_t)rxq_ctrl->rxq.wqes);
- rxq_ctrl->rxq.wqes = NULL;
- }
- if (rxq_ctrl->wq_umem) {
- mlx5_glue->devx_umem_dereg(rxq_ctrl->wq_umem);
- rxq_ctrl->wq_umem = NULL;
- }
-}
-
-/**
- * Release the resources allocated for the Rx CQ DevX object.
- *
- * @param rxq_ctrl
- * DevX Rx queue object.
- */
-static void
-rxq_release_devx_cq_resources(struct mlx5_rxq_ctrl *rxq_ctrl)
-{
- if (rxq_ctrl->rxq.cqes) {
- rte_free((void *)(uintptr_t)rxq_ctrl->rxq.cqes);
- rxq_ctrl->rxq.cqes = NULL;
- }
- if (rxq_ctrl->cq_umem) {
- mlx5_glue->devx_umem_dereg(rxq_ctrl->cq_umem);
- rxq_ctrl->cq_umem = NULL;
- }
-}
-
-/**
- * Release an Rx hairpin related resources.
- *
- * @param rxq_obj
- * Hairpin Rx queue object.
- */
-static void
-rxq_obj_hairpin_release(struct mlx5_rxq_obj *rxq_obj)
-{
- struct mlx5_devx_modify_rq_attr rq_attr = { 0 };
-
- MLX5_ASSERT(rxq_obj);
- rq_attr.state = MLX5_RQC_STATE_RST;
- rq_attr.rq_state = MLX5_RQC_STATE_RDY;
- mlx5_devx_cmd_modify_rq(rxq_obj->rq, &rq_attr);
- claim_zero(mlx5_devx_cmd_destroy(rxq_obj->rq));
-}
-
-/**
- * Release an Rx verbs/DevX queue object.
- *
- * @param rxq_obj
- * Verbs/DevX Rx queue object.
- */
-static void
-mlx5_rxq_obj_release(struct mlx5_rxq_obj *rxq_obj)
-{
- struct mlx5_priv *priv = rxq_obj->rxq_ctrl->priv;
- struct mlx5_rxq_ctrl *rxq_ctrl = rxq_obj->rxq_ctrl;
-
- MLX5_ASSERT(rxq_obj);
- switch (rxq_obj->type) {
- case MLX5_RXQ_OBJ_TYPE_IBV:
- MLX5_ASSERT(rxq_obj->wq);
- MLX5_ASSERT(rxq_obj->ibv_cq);
- rxq_free_elts(rxq_ctrl);
- claim_zero(mlx5_glue->destroy_wq(rxq_obj->wq));
- claim_zero(mlx5_glue->destroy_cq(rxq_obj->ibv_cq));
- if (rxq_obj->ibv_channel)
- claim_zero(mlx5_glue->destroy_comp_channel
- (rxq_obj->ibv_channel));
- break;
- case MLX5_RXQ_OBJ_TYPE_DEVX_RQ:
- MLX5_ASSERT(rxq_obj->rq);
- MLX5_ASSERT(rxq_obj->devx_cq);
- rxq_free_elts(rxq_ctrl);
- claim_zero(mlx5_devx_cmd_destroy(rxq_obj->rq));
- claim_zero(mlx5_devx_cmd_destroy(rxq_obj->devx_cq));
- claim_zero(mlx5_release_dbr(&priv->dbrpgs,
- rxq_ctrl->rq_dbr_umem_id,
- rxq_ctrl->rq_dbr_offset));
- claim_zero(mlx5_release_dbr(&priv->dbrpgs,
- rxq_ctrl->cq_dbr_umem_id,
- rxq_ctrl->cq_dbr_offset));
- if (rxq_obj->devx_channel)
- mlx5_glue->devx_destroy_event_channel
- (rxq_obj->devx_channel);
- rxq_release_devx_rq_resources(rxq_ctrl);
- rxq_release_devx_cq_resources(rxq_ctrl);
- break;
- case MLX5_RXQ_OBJ_TYPE_DEVX_HAIRPIN:
- MLX5_ASSERT(rxq_obj->rq);
- rxq_obj_hairpin_release(rxq_obj);
- break;
- }
- LIST_REMOVE(rxq_obj, next);
- mlx5_free(rxq_obj);
-}
-
-/**
* Allocate queue vector and fill epoll fd list for Rx interrupts.
*
* @param dev
@@ -1193,756 +1086,6 @@
}
/**
- * Create a CQ Verbs object.
- *
- * @param dev
- * Pointer to Ethernet device.
- * @param priv
- * Pointer to device private data.
- * @param rxq_data
- * Pointer to Rx queue data.
- * @param cqe_n
- * Number of CQEs in CQ.
- * @param rxq_obj
- * Pointer to Rx queue object data.
- *
- * @return
- * The Verbs object initialised, NULL otherwise and rte_errno is set.
- */
-static struct ibv_cq *
-mlx5_ibv_cq_new(struct rte_eth_dev *dev, struct mlx5_priv *priv,
- struct mlx5_rxq_data *rxq_data,
- unsigned int cqe_n, struct mlx5_rxq_obj *rxq_obj)
-{
- struct {
- struct ibv_cq_init_attr_ex ibv;
- struct mlx5dv_cq_init_attr mlx5;
- } cq_attr;
-
- cq_attr.ibv = (struct ibv_cq_init_attr_ex){
- .cqe = cqe_n,
- .channel = rxq_obj->ibv_channel,
- .comp_mask = 0,
- };
- cq_attr.mlx5 = (struct mlx5dv_cq_init_attr){
- .comp_mask = 0,
- };
- if (priv->config.cqe_comp && !rxq_data->hw_timestamp) {
- cq_attr.mlx5.comp_mask |=
- MLX5DV_CQ_INIT_ATTR_MASK_COMPRESSED_CQE;
-#ifdef HAVE_IBV_DEVICE_STRIDING_RQ_SUPPORT
- cq_attr.mlx5.cqe_comp_res_format =
- mlx5_rxq_mprq_enabled(rxq_data) ?
- MLX5DV_CQE_RES_FORMAT_CSUM_STRIDX :
- MLX5DV_CQE_RES_FORMAT_HASH;
-#else
- cq_attr.mlx5.cqe_comp_res_format = MLX5DV_CQE_RES_FORMAT_HASH;
-#endif
- /*
- * For vectorized Rx, it must not be doubled in order to
- * make cq_ci and rq_ci aligned.
- */
- if (mlx5_rxq_check_vec_support(rxq_data) < 0)
- cq_attr.ibv.cqe *= 2;
- } else if (priv->config.cqe_comp && rxq_data->hw_timestamp) {
- DRV_LOG(DEBUG,
- "port %u Rx CQE compression is disabled for HW"
- " timestamp",
- dev->data->port_id);
- }
-#ifdef HAVE_IBV_MLX5_MOD_CQE_128B_PAD
- if (priv->config.cqe_pad) {
- cq_attr.mlx5.comp_mask |= MLX5DV_CQ_INIT_ATTR_MASK_FLAGS;
- cq_attr.mlx5.flags |= MLX5DV_CQ_INIT_ATTR_FLAGS_CQE_PAD;
- }
-#endif
- return mlx5_glue->cq_ex_to_cq(mlx5_glue->dv_create_cq(priv->sh->ctx,
- &cq_attr.ibv,
- &cq_attr.mlx5));
-}
-
-/**
- * Create a WQ Verbs object.
- *
- * @param dev
- * Pointer to Ethernet device.
- * @param priv
- * Pointer to device private data.
- * @param rxq_data
- * Pointer to Rx queue data.
- * @param idx
- * Queue index in DPDK Rx queue array
- * @param wqe_n
- * Number of WQEs in WQ.
- * @param rxq_obj
- * Pointer to Rx queue object data.
- *
- * @return
- * The Verbs object initialised, NULL otherwise and rte_errno is set.
- */
-static struct ibv_wq *
-mlx5_ibv_wq_new(struct rte_eth_dev *dev, struct mlx5_priv *priv,
- struct mlx5_rxq_data *rxq_data, uint16_t idx,
- unsigned int wqe_n, struct mlx5_rxq_obj *rxq_obj)
-{
- struct {
- struct ibv_wq_init_attr ibv;
-#ifdef HAVE_IBV_DEVICE_STRIDING_RQ_SUPPORT
- struct mlx5dv_wq_init_attr mlx5;
-#endif
- } wq_attr;
-
- wq_attr.ibv = (struct ibv_wq_init_attr){
- .wq_context = NULL, /* Could be useful in the future. */
- .wq_type = IBV_WQT_RQ,
- /* Max number of outstanding WRs. */
- .max_wr = wqe_n >> rxq_data->sges_n,
- /* Max number of scatter/gather elements in a WR. */
- .max_sge = 1 << rxq_data->sges_n,
- .pd = priv->sh->pd,
- .cq = rxq_obj->ibv_cq,
- .comp_mask = IBV_WQ_FLAGS_CVLAN_STRIPPING | 0,
- .create_flags = (rxq_data->vlan_strip ?
- IBV_WQ_FLAGS_CVLAN_STRIPPING : 0),
- };
- /* By default, FCS (CRC) is stripped by hardware. */
- if (rxq_data->crc_present) {
- wq_attr.ibv.create_flags |= IBV_WQ_FLAGS_SCATTER_FCS;
- wq_attr.ibv.comp_mask |= IBV_WQ_INIT_ATTR_FLAGS;
- }
- if (priv->config.hw_padding) {
-#if defined(HAVE_IBV_WQ_FLAG_RX_END_PADDING)
- wq_attr.ibv.create_flags |= IBV_WQ_FLAG_RX_END_PADDING;
- wq_attr.ibv.comp_mask |= IBV_WQ_INIT_ATTR_FLAGS;
-#elif defined(HAVE_IBV_WQ_FLAGS_PCI_WRITE_END_PADDING)
- wq_attr.ibv.create_flags |= IBV_WQ_FLAGS_PCI_WRITE_END_PADDING;
- wq_attr.ibv.comp_mask |= IBV_WQ_INIT_ATTR_FLAGS;
-#endif
- }
-#ifdef HAVE_IBV_DEVICE_STRIDING_RQ_SUPPORT
- wq_attr.mlx5 = (struct mlx5dv_wq_init_attr){
- .comp_mask = 0,
- };
- if (mlx5_rxq_mprq_enabled(rxq_data)) {
- struct mlx5dv_striding_rq_init_attr *mprq_attr =
- &wq_attr.mlx5.striding_rq_attrs;
-
- wq_attr.mlx5.comp_mask |= MLX5DV_WQ_INIT_ATTR_MASK_STRIDING_RQ;
- *mprq_attr = (struct mlx5dv_striding_rq_init_attr){
- .single_stride_log_num_of_bytes = rxq_data->strd_sz_n,
- .single_wqe_log_num_of_strides = rxq_data->strd_num_n,
- .two_byte_shift_en = MLX5_MPRQ_TWO_BYTE_SHIFT,
- };
- }
- rxq_obj->wq = mlx5_glue->dv_create_wq(priv->sh->ctx, &wq_attr.ibv,
- &wq_attr.mlx5);
-#else
- rxq_obj->wq = mlx5_glue->create_wq(priv->sh->ctx, &wq_attr.ibv);
-#endif
- if (rxq_obj->wq) {
- /*
- * Make sure number of WRs*SGEs match expectations since a queue
- * cannot allocate more than "desc" buffers.
- */
- if (wq_attr.ibv.max_wr != (wqe_n >> rxq_data->sges_n) ||
- wq_attr.ibv.max_sge != (1u << rxq_data->sges_n)) {
- DRV_LOG(ERR,
- "port %u Rx queue %u requested %u*%u but got"
- " %u*%u WRs*SGEs",
- dev->data->port_id, idx,
- wqe_n >> rxq_data->sges_n,
- (1 << rxq_data->sges_n),
- wq_attr.ibv.max_wr, wq_attr.ibv.max_sge);
- claim_zero(mlx5_glue->destroy_wq(rxq_obj->wq));
- rxq_obj->wq = NULL;
- rte_errno = EINVAL;
- }
- }
- return rxq_obj->wq;
-}
-
-/**
- * Fill common fields of create RQ attributes structure.
- *
- * @param rxq_data
- * Pointer to Rx queue data.
- * @param cqn
- * CQ number to use with this RQ.
- * @param rq_attr
- * RQ attributes structure to fill..
- */
-static void
-mlx5_devx_create_rq_attr_fill(struct mlx5_rxq_data *rxq_data, uint32_t cqn,
- struct mlx5_devx_create_rq_attr *rq_attr)
-{
- rq_attr->state = MLX5_RQC_STATE_RST;
- rq_attr->vsd = (rxq_data->vlan_strip) ? 0 : 1;
- rq_attr->cqn = cqn;
- rq_attr->scatter_fcs = (rxq_data->crc_present) ? 1 : 0;
-}
-
-/**
- * Fill common fields of DevX WQ attributes structure.
- *
- * @param priv
- * Pointer to device private data.
- * @param rxq_ctrl
- * Pointer to Rx queue control structure.
- * @param wq_attr
- * WQ attributes structure to fill..
- */
-static void
-mlx5_devx_wq_attr_fill(struct mlx5_priv *priv, struct mlx5_rxq_ctrl *rxq_ctrl,
- struct mlx5_devx_wq_attr *wq_attr)
-{
- wq_attr->end_padding_mode = priv->config.cqe_pad ?
- MLX5_WQ_END_PAD_MODE_ALIGN :
- MLX5_WQ_END_PAD_MODE_NONE;
- wq_attr->pd = priv->sh->pdn;
- wq_attr->dbr_addr = rxq_ctrl->rq_dbr_offset;
- wq_attr->dbr_umem_id = rxq_ctrl->rq_dbr_umem_id;
- wq_attr->dbr_umem_valid = 1;
- wq_attr->wq_umem_id = mlx5_os_get_umem_id(rxq_ctrl->wq_umem);
- wq_attr->wq_umem_valid = 1;
-}
-
-/**
- * Create a RQ object using DevX.
- *
- * @param dev
- * Pointer to Ethernet device.
- * @param idx
- * Queue index in DPDK Rx queue array
- * @param cqn
- * CQ number to use with this RQ.
- *
- * @return
- * The DevX object initialised, NULL otherwise and rte_errno is set.
- */
-static struct mlx5_devx_obj *
-mlx5_devx_rq_new(struct rte_eth_dev *dev, uint16_t idx, uint32_t cqn)
-{
- struct mlx5_priv *priv = dev->data->dev_private;
- struct mlx5_rxq_data *rxq_data = (*priv->rxqs)[idx];
- struct mlx5_rxq_ctrl *rxq_ctrl =
- container_of(rxq_data, struct mlx5_rxq_ctrl, rxq);
- struct mlx5_devx_create_rq_attr rq_attr = { 0 };
- uint32_t wqe_n = 1 << (rxq_data->elts_n - rxq_data->sges_n);
- uint32_t wq_size = 0;
- uint32_t wqe_size = 0;
- uint32_t log_wqe_size = 0;
- void *buf = NULL;
- struct mlx5_devx_obj *rq;
-
- /* Fill RQ attributes. */
- rq_attr.mem_rq_type = MLX5_RQC_MEM_RQ_TYPE_MEMORY_RQ_INLINE;
- rq_attr.flush_in_error_en = 1;
- mlx5_devx_create_rq_attr_fill(rxq_data, cqn, &rq_attr);
- /* Fill WQ attributes for this RQ. */
- if (mlx5_rxq_mprq_enabled(rxq_data)) {
- rq_attr.wq_attr.wq_type = MLX5_WQ_TYPE_CYCLIC_STRIDING_RQ;
- /*
- * Number of strides in each WQE:
- * 512*2^single_wqe_log_num_of_strides.
- */
- rq_attr.wq_attr.single_wqe_log_num_of_strides =
- rxq_data->strd_num_n -
- MLX5_MIN_SINGLE_WQE_LOG_NUM_STRIDES;
- /* Stride size = (2^single_stride_log_num_of_bytes)*64B. */
- rq_attr.wq_attr.single_stride_log_num_of_bytes =
- rxq_data->strd_sz_n -
- MLX5_MIN_SINGLE_STRIDE_LOG_NUM_BYTES;
- wqe_size = sizeof(struct mlx5_wqe_mprq);
- } else {
- rq_attr.wq_attr.wq_type = MLX5_WQ_TYPE_CYCLIC;
- wqe_size = sizeof(struct mlx5_wqe_data_seg);
- }
- log_wqe_size = log2above(wqe_size) + rxq_data->sges_n;
- rq_attr.wq_attr.log_wq_stride = log_wqe_size;
- rq_attr.wq_attr.log_wq_sz = rxq_data->elts_n - rxq_data->sges_n;
- /* Calculate and allocate WQ memory space. */
- wqe_size = 1 << log_wqe_size; /* round up power of two.*/
- wq_size = wqe_n * wqe_size;
- size_t alignment = MLX5_WQE_BUF_ALIGNMENT;
- if (alignment == (size_t)-1) {
- DRV_LOG(ERR, "Failed to get mem page size");
- rte_errno = ENOMEM;
- return NULL;
- }
- buf = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO, wq_size,
- alignment, rxq_ctrl->socket);
- if (!buf)
- return NULL;
- rxq_data->wqes = buf;
- rxq_ctrl->wq_umem = mlx5_glue->devx_umem_reg(priv->sh->ctx,
- buf, wq_size, 0);
- if (!rxq_ctrl->wq_umem) {
- mlx5_free(buf);
- return NULL;
- }
- mlx5_devx_wq_attr_fill(priv, rxq_ctrl, &rq_attr.wq_attr);
- rq = mlx5_devx_cmd_create_rq(priv->sh->ctx, &rq_attr, rxq_ctrl->socket);
- if (!rq)
- rxq_release_devx_rq_resources(rxq_ctrl);
- return rq;
-}
-
-/**
- * Create a DevX CQ object for an Rx queue.
- *
- * @param dev
- * Pointer to Ethernet device.
- * @param cqe_n
- * Number of CQEs in CQ.
- * @param idx
- * Queue index in DPDK Rx queue array
- * @param rxq_obj
- * Pointer to Rx queue object data.
- *
- * @return
- * The DevX object initialised, NULL otherwise and rte_errno is set.
- */
-static struct mlx5_devx_obj *
-mlx5_devx_cq_new(struct rte_eth_dev *dev, unsigned int cqe_n, uint16_t idx,
- struct mlx5_rxq_obj *rxq_obj)
-{
- struct mlx5_devx_obj *cq_obj = 0;
- struct mlx5_devx_cq_attr cq_attr = { 0 };
- struct mlx5_priv *priv = dev->data->dev_private;
- struct mlx5_rxq_data *rxq_data = (*priv->rxqs)[idx];
- struct mlx5_rxq_ctrl *rxq_ctrl =
- container_of(rxq_data, struct mlx5_rxq_ctrl, rxq);
- size_t page_size = rte_mem_page_size();
- uint32_t lcore = (uint32_t)rte_lcore_to_cpu_id(-1);
- uint32_t eqn = 0;
- void *buf = NULL;
- uint16_t event_nums[1] = {0};
- uint32_t log_cqe_n;
- uint32_t cq_size;
- int ret = 0;
-
- if (page_size == (size_t)-1) {
- DRV_LOG(ERR, "Failed to get page_size.");
- goto error;
- }
- if (priv->config.cqe_comp && !rxq_data->hw_timestamp &&
- !rxq_data->lro) {
- cq_attr.cqe_comp_en = MLX5DV_CQ_INIT_ATTR_MASK_COMPRESSED_CQE;
-#ifdef HAVE_IBV_DEVICE_STRIDING_RQ_SUPPORT
- cq_attr.mini_cqe_res_format =
- mlx5_rxq_mprq_enabled(rxq_data) ?
- MLX5DV_CQE_RES_FORMAT_CSUM_STRIDX :
- MLX5DV_CQE_RES_FORMAT_HASH;
-#else
- cq_attr.mini_cqe_res_format = MLX5DV_CQE_RES_FORMAT_HASH;
-#endif
- /*
- * For vectorized Rx, it must not be doubled in order to
- * make cq_ci and rq_ci aligned.
- */
- if (mlx5_rxq_check_vec_support(rxq_data) < 0)
- cqe_n *= 2;
- } else if (priv->config.cqe_comp && rxq_data->hw_timestamp) {
- DRV_LOG(DEBUG,
- "port %u Rx CQE compression is disabled for HW"
- " timestamp",
- dev->data->port_id);
- } else if (priv->config.cqe_comp && rxq_data->lro) {
- DRV_LOG(DEBUG,
- "port %u Rx CQE compression is disabled for LRO",
- dev->data->port_id);
- }
-#ifdef HAVE_IBV_MLX5_MOD_CQE_128B_PAD
- if (priv->config.cqe_pad)
- cq_attr.cqe_size = MLX5DV_CQ_INIT_ATTR_FLAGS_CQE_PAD;
-#endif
- log_cqe_n = log2above(cqe_n);
- cq_size = sizeof(struct mlx5_cqe) * (1 << log_cqe_n);
- /* Query the EQN for this core. */
- if (mlx5_glue->devx_query_eqn(priv->sh->ctx, lcore, &eqn)) {
- DRV_LOG(ERR, "Failed to query EQN for CQ.");
- goto error;
- }
- cq_attr.eqn = eqn;
- buf = rte_calloc_socket(__func__, 1, cq_size, page_size,
- rxq_ctrl->socket);
- if (!buf) {
- DRV_LOG(ERR, "Failed to allocate memory for CQ.");
- goto error;
- }
- rxq_data->cqes = (volatile struct mlx5_cqe (*)[])(uintptr_t)buf;
- rxq_ctrl->cq_umem = mlx5_glue->devx_umem_reg(priv->sh->ctx, buf,
- cq_size,
- IBV_ACCESS_LOCAL_WRITE);
- if (!rxq_ctrl->cq_umem) {
- DRV_LOG(ERR, "Failed to register umem for CQ.");
- goto error;
- }
- cq_attr.uar_page_id =
- mlx5_os_get_devx_uar_page_id(priv->sh->devx_rx_uar);
- cq_attr.q_umem_id = mlx5_os_get_umem_id(rxq_ctrl->cq_umem);
- cq_attr.q_umem_valid = 1;
- cq_attr.log_cq_size = log_cqe_n;
- cq_attr.log_page_size = rte_log2_u32(page_size);
- cq_attr.db_umem_offset = rxq_ctrl->cq_dbr_offset;
- cq_attr.db_umem_id = rxq_ctrl->cq_dbr_umem_id;
- cq_attr.db_umem_valid = 1;
- cq_obj = mlx5_devx_cmd_create_cq(priv->sh->ctx, &cq_attr);
- if (!cq_obj)
- goto error;
- rxq_data->cqe_n = log_cqe_n;
- rxq_data->cqn = cq_obj->id;
- if (rxq_obj->devx_channel) {
- ret = mlx5_glue->devx_subscribe_devx_event
- (rxq_obj->devx_channel,
- cq_obj->obj,
- sizeof(event_nums),
- event_nums,
- (uint64_t)(uintptr_t)cq_obj);
- if (ret) {
- DRV_LOG(ERR, "Fail to subscribe CQ to event channel.");
- rte_errno = errno;
- goto error;
- }
- }
- /* Initialise CQ to 1's to mark HW ownership for all CQEs. */
- memset((void *)(uintptr_t)rxq_data->cqes, 0xFF, cq_size);
- return cq_obj;
-error:
- if (cq_obj)
- mlx5_devx_cmd_destroy(cq_obj);
- rxq_release_devx_cq_resources(rxq_ctrl);
- return NULL;
-}
-
-/**
- * Create the Rx hairpin queue object.
- *
- * @param dev
- * Pointer to Ethernet device.
- * @param idx
- * Queue index in DPDK Rx queue array
- *
- * @return
- * The hairpin DevX object initialised, NULL otherwise and rte_errno is set.
- */
-static struct mlx5_rxq_obj *
-mlx5_rxq_obj_hairpin_new(struct rte_eth_dev *dev, uint16_t idx)
-{
- struct mlx5_priv *priv = dev->data->dev_private;
- struct mlx5_rxq_data *rxq_data = (*priv->rxqs)[idx];
- struct mlx5_rxq_ctrl *rxq_ctrl =
- container_of(rxq_data, struct mlx5_rxq_ctrl, rxq);
- struct mlx5_devx_create_rq_attr attr = { 0 };
- struct mlx5_rxq_obj *tmpl = NULL;
- uint32_t max_wq_data;
-
- MLX5_ASSERT(rxq_data);
- MLX5_ASSERT(!rxq_ctrl->obj);
- tmpl = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO, sizeof(*tmpl), 0,
- rxq_ctrl->socket);
- if (!tmpl) {
- DRV_LOG(ERR, "port %u Rx queue %u cannot allocate resources",
- dev->data->port_id, rxq_data->idx);
- rte_errno = ENOMEM;
- return NULL;
- }
- tmpl->type = MLX5_RXQ_OBJ_TYPE_DEVX_HAIRPIN;
- tmpl->rxq_ctrl = rxq_ctrl;
- attr.hairpin = 1;
- max_wq_data = priv->config.hca_attr.log_max_hairpin_wq_data_sz;
- /* Jumbo frames > 9KB should be supported, and more packets. */
- if (priv->config.log_hp_size != (uint32_t)MLX5_ARG_UNSET) {
- if (priv->config.log_hp_size > max_wq_data) {
- DRV_LOG(ERR, "total data size %u power of 2 is "
- "too large for hairpin",
- priv->config.log_hp_size);
- mlx5_free(tmpl);
- rte_errno = ERANGE;
- return NULL;
- }
- attr.wq_attr.log_hairpin_data_sz = priv->config.log_hp_size;
- } else {
- attr.wq_attr.log_hairpin_data_sz =
- (max_wq_data < MLX5_HAIRPIN_JUMBO_LOG_SIZE) ?
- max_wq_data : MLX5_HAIRPIN_JUMBO_LOG_SIZE;
- }
- /* Set the packets number to the maximum value for performance. */
- attr.wq_attr.log_hairpin_num_packets =
- attr.wq_attr.log_hairpin_data_sz -
- MLX5_HAIRPIN_QUEUE_STRIDE;
- tmpl->rq = mlx5_devx_cmd_create_rq(priv->sh->ctx, &attr,
- rxq_ctrl->socket);
- if (!tmpl->rq) {
- DRV_LOG(ERR,
- "port %u Rx hairpin queue %u can't create rq object",
- dev->data->port_id, idx);
- mlx5_free(tmpl);
- rte_errno = errno;
- return NULL;
- }
- DRV_LOG(DEBUG, "port %u rxq %u updated with %p", dev->data->port_id,
- idx, (void *)&tmpl);
- LIST_INSERT_HEAD(&priv->rxqsobj, tmpl, next);
- dev->data->rx_queue_state[idx] = RTE_ETH_QUEUE_STATE_HAIRPIN;
- return tmpl;
-}
-
-/**
- * Create the Rx queue Verbs/DevX object.
- *
- * @param dev
- * Pointer to Ethernet device.
- * @param idx
- * Queue index in DPDK Rx queue array
- * @param type
- * Type of Rx queue object to create.
- *
- * @return
- * The Verbs/DevX object initialised, NULL otherwise and rte_errno is set.
- */
-struct mlx5_rxq_obj *
-mlx5_rxq_obj_new(struct rte_eth_dev *dev, uint16_t idx,
- enum mlx5_rxq_obj_type type)
-{
- struct mlx5_priv *priv = dev->data->dev_private;
- struct mlx5_rxq_data *rxq_data = (*priv->rxqs)[idx];
- struct mlx5_rxq_ctrl *rxq_ctrl =
- container_of(rxq_data, struct mlx5_rxq_ctrl, rxq);
- struct ibv_wq_attr mod;
- unsigned int cqe_n;
- unsigned int wqe_n = 1 << rxq_data->elts_n;
- struct mlx5_rxq_obj *tmpl = NULL;
- struct mlx5_devx_dbr_page *cq_dbr_page = NULL;
- struct mlx5_devx_dbr_page *rq_dbr_page = NULL;
- struct mlx5dv_cq cq_info;
- struct mlx5dv_rwq rwq;
- int ret = 0;
- struct mlx5dv_obj obj;
-
- MLX5_ASSERT(rxq_data);
- MLX5_ASSERT(!rxq_ctrl->obj);
- if (type == MLX5_RXQ_OBJ_TYPE_DEVX_HAIRPIN)
- return mlx5_rxq_obj_hairpin_new(dev, idx);
- tmpl = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO, sizeof(*tmpl), 0,
- rxq_ctrl->socket);
- if (!tmpl) {
- DRV_LOG(ERR, "port %u Rx queue %u cannot allocate resources",
- dev->data->port_id, rxq_data->idx);
- rte_errno = ENOMEM;
- goto error;
- }
- tmpl->type = type;
- tmpl->rxq_ctrl = rxq_ctrl;
- if (rxq_ctrl->irq) {
- if (tmpl->type == MLX5_RXQ_OBJ_TYPE_IBV) {
- tmpl->ibv_channel =
- mlx5_glue->create_comp_channel(priv->sh->ctx);
- if (!tmpl->ibv_channel) {
- DRV_LOG(ERR, "port %u: comp channel creation "
- "failure", dev->data->port_id);
- rte_errno = ENOMEM;
- goto error;
- }
- tmpl->fd = ((struct ibv_comp_channel *)
- (tmpl->ibv_channel))->fd;
- } else if (tmpl->type == MLX5_RXQ_OBJ_TYPE_DEVX_RQ) {
- int devx_ev_flag =
- MLX5DV_DEVX_CREATE_EVENT_CHANNEL_FLAGS_OMIT_EV_DATA;
-
- tmpl->devx_channel =
- mlx5_glue->devx_create_event_channel
- (priv->sh->ctx,
- devx_ev_flag);
- if (!tmpl->devx_channel) {
- rte_errno = errno;
- DRV_LOG(ERR,
- "Failed to create event channel %d.",
- rte_errno);
- goto error;
- }
- tmpl->fd =
- mlx5_os_get_devx_channel_fd(tmpl->devx_channel);
- }
- }
- if (mlx5_rxq_mprq_enabled(rxq_data))
- cqe_n = wqe_n * (1 << rxq_data->strd_num_n) - 1;
- else
- cqe_n = wqe_n - 1;
- DRV_LOG(DEBUG, "port %u device_attr.max_qp_wr is %d",
- dev->data->port_id, priv->sh->device_attr.max_qp_wr);
- DRV_LOG(DEBUG, "port %u device_attr.max_sge is %d",
- dev->data->port_id, priv->sh->device_attr.max_sge);
- if (tmpl->type == MLX5_RXQ_OBJ_TYPE_IBV) {
- priv->verbs_alloc_ctx.type = MLX5_VERBS_ALLOC_TYPE_RX_QUEUE;
- priv->verbs_alloc_ctx.obj = rxq_ctrl;
- /* Create CQ using Verbs API. */
- tmpl->ibv_cq = mlx5_ibv_cq_new(dev, priv, rxq_data, cqe_n,
- tmpl);
- if (!tmpl->ibv_cq) {
- DRV_LOG(ERR, "port %u Rx queue %u CQ creation failure",
- dev->data->port_id, idx);
- rte_errno = ENOMEM;
- goto error;
- }
- obj.cq.in = tmpl->ibv_cq;
- obj.cq.out = &cq_info;
- ret = mlx5_glue->dv_init_obj(&obj, MLX5DV_OBJ_CQ);
- if (ret) {
- rte_errno = ret;
- goto error;
- }
- if (cq_info.cqe_size != RTE_CACHE_LINE_SIZE) {
- DRV_LOG(ERR,
- "port %u wrong MLX5_CQE_SIZE environment "
- "variable value: it should be set to %u",
- dev->data->port_id, RTE_CACHE_LINE_SIZE);
- rte_errno = EINVAL;
- goto error;
- }
- /* Fill the rings. */
- rxq_data->cqe_n = log2above(cq_info.cqe_cnt);
- rxq_data->cq_db = cq_info.dbrec;
- rxq_data->cqes =
- (volatile struct mlx5_cqe (*)[])(uintptr_t)cq_info.buf;
- rxq_data->cq_uar = cq_info.cq_uar;
- rxq_data->cqn = cq_info.cqn;
- /* Create WQ (RQ) using Verbs API. */
- tmpl->wq = mlx5_ibv_wq_new(dev, priv, rxq_data, idx, wqe_n,
- tmpl);
- if (!tmpl->wq) {
- DRV_LOG(ERR, "port %u Rx queue %u WQ creation failure",
- dev->data->port_id, idx);
- rte_errno = ENOMEM;
- goto error;
- }
- /* Change queue state to ready. */
- mod = (struct ibv_wq_attr){
- .attr_mask = IBV_WQ_ATTR_STATE,
- .wq_state = IBV_WQS_RDY,
- };
- ret = mlx5_glue->modify_wq(tmpl->wq, &mod);
- if (ret) {
- DRV_LOG(ERR,
- "port %u Rx queue %u WQ state to IBV_WQS_RDY"
- " failed", dev->data->port_id, idx);
- rte_errno = ret;
- goto error;
- }
- obj.rwq.in = tmpl->wq;
- obj.rwq.out = &rwq;
- ret = mlx5_glue->dv_init_obj(&obj, MLX5DV_OBJ_RWQ);
- if (ret) {
- rte_errno = ret;
- goto error;
- }
- rxq_data->wqes = rwq.buf;
- rxq_data->rq_db = rwq.dbrec;
- priv->verbs_alloc_ctx.type = MLX5_VERBS_ALLOC_TYPE_NONE;
- } else if (tmpl->type == MLX5_RXQ_OBJ_TYPE_DEVX_RQ) {
- struct mlx5_devx_modify_rq_attr rq_attr = { 0 };
- int64_t dbr_offset;
-
- /* Allocate CQ door-bell. */
- dbr_offset = mlx5_get_dbr(priv->sh->ctx, &priv->dbrpgs,
- &cq_dbr_page);
- if (dbr_offset < 0) {
- DRV_LOG(ERR, "Failed to allocate CQ door-bell.");
- goto error;
- }
- rxq_ctrl->cq_dbr_offset = dbr_offset;
- rxq_ctrl->cq_dbr_umem_id =
- mlx5_os_get_umem_id(cq_dbr_page->umem);
- rxq_data->cq_db =
- (uint32_t *)((uintptr_t)cq_dbr_page->dbrs +
- (uintptr_t)rxq_ctrl->cq_dbr_offset);
- rxq_data->cq_uar =
- mlx5_os_get_devx_uar_base_addr(priv->sh->devx_rx_uar);
- /* Create CQ using DevX API. */
- tmpl->devx_cq = mlx5_devx_cq_new(dev, cqe_n, idx, tmpl);
- if (!tmpl->devx_cq) {
- DRV_LOG(ERR, "Failed to create CQ.");
- goto error;
- }
- /* Allocate RQ door-bell. */
- dbr_offset = mlx5_get_dbr(priv->sh->ctx, &priv->dbrpgs,
- &rq_dbr_page);
- if (dbr_offset < 0) {
- DRV_LOG(ERR, "Failed to allocate RQ door-bell.");
- goto error;
- }
- rxq_ctrl->rq_dbr_offset = dbr_offset;
- rxq_ctrl->rq_dbr_umem_id =
- mlx5_os_get_umem_id(rq_dbr_page->umem);
- rxq_data->rq_db =
- (uint32_t *)((uintptr_t)rq_dbr_page->dbrs +
- (uintptr_t)rxq_ctrl->rq_dbr_offset);
- /* Create RQ using DevX API. */
- tmpl->rq = mlx5_devx_rq_new(dev, idx, tmpl->devx_cq->id);
- if (!tmpl->rq) {
- DRV_LOG(ERR, "port %u Rx queue %u RQ creation failure",
- dev->data->port_id, idx);
- rte_errno = ENOMEM;
- goto error;
- }
- /* Change queue state to ready. */
- rq_attr.rq_state = MLX5_RQC_STATE_RST;
- rq_attr.state = MLX5_RQC_STATE_RDY;
- ret = mlx5_devx_cmd_modify_rq(tmpl->rq, &rq_attr);
- if (ret)
- goto error;
- }
- rxq_data->cq_arm_sn = 0;
- mlx5_rxq_initialize(rxq_data);
- rxq_data->cq_ci = 0;
- DRV_LOG(DEBUG, "port %u rxq %u updated with %p", dev->data->port_id,
- idx, (void *)&tmpl);
- LIST_INSERT_HEAD(&priv->rxqsobj, tmpl, next);
- dev->data->rx_queue_state[idx] = RTE_ETH_QUEUE_STATE_STARTED;
- return tmpl;
-error:
- if (tmpl) {
- ret = rte_errno; /* Save rte_errno before cleanup. */
- if (tmpl->type == MLX5_RXQ_OBJ_TYPE_IBV) {
- if (tmpl->wq)
- claim_zero(mlx5_glue->destroy_wq(tmpl->wq));
- if (tmpl->ibv_cq)
- claim_zero(mlx5_glue->destroy_cq(tmpl->ibv_cq));
- if (tmpl->ibv_channel)
- claim_zero(mlx5_glue->destroy_comp_channel
- (tmpl->ibv_channel));
- priv->verbs_alloc_ctx.type = MLX5_VERBS_ALLOC_TYPE_NONE;
- } else if (tmpl->type == MLX5_RXQ_OBJ_TYPE_DEVX_RQ) {
- if (tmpl->rq)
- claim_zero(mlx5_devx_cmd_destroy(tmpl->rq));
- if (tmpl->devx_cq)
- claim_zero(mlx5_devx_cmd_destroy
- (tmpl->devx_cq));
- if (tmpl->devx_channel)
- mlx5_glue->devx_destroy_event_channel
- (tmpl->devx_channel);
- if (rq_dbr_page)
- claim_zero(mlx5_release_dbr
- (&priv->dbrpgs,
- rxq_ctrl->rq_dbr_umem_id,
- rxq_ctrl->rq_dbr_offset));
- if (cq_dbr_page)
- claim_zero(mlx5_release_dbr
- (&priv->dbrpgs,
- rxq_ctrl->cq_dbr_umem_id,
- rxq_ctrl->cq_dbr_offset));
- }
- mlx5_free(tmpl);
- rte_errno = ret; /* Restore rte_errno. */
- }
- if (type == MLX5_RXQ_OBJ_TYPE_DEVX_RQ) {
- rxq_release_devx_rq_resources(rxq_ctrl);
- rxq_release_devx_cq_resources(rxq_ctrl);
- }
- return NULL;
-}
-
-/**
* Verify the Rx queue objects list is empty
*
* @param dev
@@ -2531,7 +1674,7 @@ struct mlx5_rxq_ctrl *
if (!rte_atomic32_dec_and_test(&rxq_ctrl->refcnt))
return 1;
if (rxq_ctrl->obj) {
- mlx5_rxq_obj_release(rxq_ctrl->obj);
+ priv->obj_ops->rxq_obj_release(rxq_ctrl->obj);
rxq_ctrl->obj = NULL;
}
if (rxq_ctrl->type == MLX5_RXQ_TYPE_STANDARD)
diff --git a/drivers/net/mlx5/mlx5_rxtx.h b/drivers/net/mlx5/mlx5_rxtx.h
index b092e43..4baf5b9 100644
--- a/drivers/net/mlx5/mlx5_rxtx.h
+++ b/drivers/net/mlx5/mlx5_rxtx.h
@@ -155,40 +155,12 @@ struct mlx5_rxq_data {
int32_t flow_meta_offset;
} __rte_cache_aligned;
-enum mlx5_rxq_obj_type {
- MLX5_RXQ_OBJ_TYPE_IBV, /* mlx5_rxq_obj with ibv_wq. */
- MLX5_RXQ_OBJ_TYPE_DEVX_RQ, /* mlx5_rxq_obj with mlx5_devx_rq. */
- MLX5_RXQ_OBJ_TYPE_DEVX_HAIRPIN,
- /* mlx5_rxq_obj with mlx5_devx_rq and hairpin support. */
-};
-
enum mlx5_rxq_type {
MLX5_RXQ_TYPE_STANDARD, /* Standard Rx queue. */
MLX5_RXQ_TYPE_HAIRPIN, /* Hairpin Rx queue. */
MLX5_RXQ_TYPE_UNDEFINED,
};
-/* Verbs/DevX Rx queue elements. */
-struct mlx5_rxq_obj {
- LIST_ENTRY(mlx5_rxq_obj) next; /* Pointer to the next element. */
- struct mlx5_rxq_ctrl *rxq_ctrl; /* Back pointer to parent. */
- enum mlx5_rxq_obj_type type;
- int fd; /* File descriptor for event channel */
- RTE_STD_C11
- union {
- struct {
- void *wq; /* Work Queue. */
- void *ibv_cq; /* Completion Queue. */
- void *ibv_channel;
- };
- struct {
- struct mlx5_devx_obj *rq; /* DevX Rx Queue object. */
- struct mlx5_devx_obj *devx_cq; /* DevX CQ object. */
- void *devx_channel;
- };
- };
-};
-
/* RX queue control descriptor. */
struct mlx5_rxq_ctrl {
struct mlx5_rxq_data rxq; /* Data path structure. */
@@ -416,8 +388,6 @@ int mlx5_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
void mlx5_rx_intr_vec_disable(struct rte_eth_dev *dev);
int mlx5_rx_intr_enable(struct rte_eth_dev *dev, uint16_t rx_queue_id);
int mlx5_rx_intr_disable(struct rte_eth_dev *dev, uint16_t rx_queue_id);
-struct mlx5_rxq_obj *mlx5_rxq_obj_new(struct rte_eth_dev *dev, uint16_t idx,
- enum mlx5_rxq_obj_type type);
int mlx5_rxq_obj_verify(struct rte_eth_dev *dev);
struct mlx5_rxq_ctrl *mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx,
uint16_t desc, unsigned int socket,
@@ -430,6 +400,7 @@ struct mlx5_rxq_ctrl *mlx5_rxq_hairpin_new
int mlx5_rxq_release(struct rte_eth_dev *dev, uint16_t idx);
int mlx5_rxq_verify(struct rte_eth_dev *dev);
int rxq_alloc_elts(struct mlx5_rxq_ctrl *rxq_ctrl);
+void rxq_free_elts(struct mlx5_rxq_ctrl *rxq_ctrl);
int mlx5_ind_table_obj_verify(struct rte_eth_dev *dev);
uint32_t mlx5_hrxq_new(struct rte_eth_dev *dev,
const uint8_t *rss_key, uint32_t rss_key_len,
diff --git a/drivers/net/mlx5/mlx5_trigger.c b/drivers/net/mlx5/mlx5_trigger.c
index 549af35..6376719 100644
--- a/drivers/net/mlx5/mlx5_trigger.c
+++ b/drivers/net/mlx5/mlx5_trigger.c
@@ -109,16 +109,7 @@
struct mlx5_priv *priv = dev->data->dev_private;
unsigned int i;
int ret = 0;
- enum mlx5_rxq_obj_type obj_type = MLX5_RXQ_OBJ_TYPE_IBV;
- struct mlx5_rxq_data *rxq = NULL;
-
- for (i = 0; i < priv->rxqs_n; ++i) {
- rxq = (*priv->rxqs)[i];
- if (rxq && rxq->lro) {
- obj_type = MLX5_RXQ_OBJ_TYPE_DEVX_RQ;
- break;
- }
- }
+
/* Allocate/reuse/resize mempool for Multi-Packet RQ. */
if (mlx5_mprq_alloc_mp(dev)) {
/* Should not release Rx queues but return immediately. */
@@ -130,33 +121,21 @@
if (!rxq_ctrl)
continue;
- if (rxq_ctrl->type == MLX5_RXQ_TYPE_HAIRPIN) {
- rxq_ctrl->obj = mlx5_rxq_obj_new
- (dev, i, MLX5_RXQ_OBJ_TYPE_DEVX_HAIRPIN);
- if (!rxq_ctrl->obj)
+ if (rxq_ctrl->type == MLX5_RXQ_TYPE_STANDARD) {
+ /* Pre-register Rx mempool. */
+ mp = mlx5_rxq_mprq_enabled(&rxq_ctrl->rxq) ?
+ rxq_ctrl->rxq.mprq_mp : rxq_ctrl->rxq.mp;
+ DRV_LOG(DEBUG, "port %u Rx queue %u registering mp %s"
+ " having %u chunks", dev->data->port_id,
+ rxq_ctrl->rxq.idx, mp->name, mp->nb_mem_chunks);
+ mlx5_mr_update_mp(dev, &rxq_ctrl->rxq.mr_ctrl, mp);
+ ret = rxq_alloc_elts(rxq_ctrl);
+ if (ret)
goto error;
- continue;
}
- /* Pre-register Rx mempool. */
- mp = mlx5_rxq_mprq_enabled(&rxq_ctrl->rxq) ?
- rxq_ctrl->rxq.mprq_mp : rxq_ctrl->rxq.mp;
- DRV_LOG(DEBUG,
- "port %u Rx queue %u registering"
- " mp %s having %u chunks",
- dev->data->port_id, rxq_ctrl->rxq.idx,
- mp->name, mp->nb_mem_chunks);
- mlx5_mr_update_mp(dev, &rxq_ctrl->rxq.mr_ctrl, mp);
- ret = rxq_alloc_elts(rxq_ctrl);
- if (ret)
- goto error;
- rxq_ctrl->obj = mlx5_rxq_obj_new(dev, i, obj_type);
+ rxq_ctrl->obj = priv->obj_ops->rxq_obj_new(dev, i);
if (!rxq_ctrl->obj)
goto error;
- if (obj_type == MLX5_RXQ_OBJ_TYPE_IBV)
- rxq_ctrl->wqn =
- ((struct ibv_wq *)(rxq_ctrl->obj->wq))->wq_num;
- else if (obj_type == MLX5_RXQ_OBJ_TYPE_DEVX_RQ)
- rxq_ctrl->wqn = rxq_ctrl->obj->rq->id;
}
return 0;
error:
--
1.8.3.1
^ permalink raw reply [flat|nested] 28+ messages in thread
* [dpdk-dev] [PATCH v1 06/18] net/mlx5: separate Rx interrupt handling
2020-09-03 10:13 [dpdk-dev] [PATCH v1 00/18] mlx5 Rx DevX/Verbs separation Michael Baum
` (4 preceding siblings ...)
2020-09-03 10:13 ` [dpdk-dev] [PATCH v1 05/18] net/mlx5: separate Rx queue object creations Michael Baum
@ 2020-09-03 10:13 ` Michael Baum
2020-09-03 10:13 ` [dpdk-dev] [PATCH v1 07/18] net/mlx5: share Rx control code Michael Baum
` (13 subsequent siblings)
19 siblings, 0 replies; 28+ messages in thread
From: Michael Baum @ 2020-09-03 10:13 UTC (permalink / raw)
To: dev; +Cc: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko
Separate the Rx interrupt event handler into the Verbs and DevX modules.
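For illustration only (not part of the patch): a minimal sketch of how the
common Rx interrupt path can dispatch to the backend-specific event-get
callback after this split, assuming the mlx5_obj_ops layout shown in the
diff below. The function name rxq_intr_ack_sketch is hypothetical.

static int
rxq_intr_ack_sketch(struct rte_eth_dev *dev, struct mlx5_rxq_ctrl *rxq_ctrl)
{
	struct mlx5_priv *priv = dev->data->dev_private;

	if (!rxq_ctrl->irq)
		return 0;
	/* Either backend callback runs here: Verbs acks the CQ event,
	 * DevX reads the event channel. The caller stays generic.
	 */
	if (priv->obj_ops->rxq_event_get(rxq_ctrl->obj) < 0)
		return -rte_errno;
	rxq_ctrl->rxq.cq_arm_sn++;
	return 0;
}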
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
drivers/net/mlx5/linux/mlx5_verbs.c | 30 ++++++++++++++++++++++++++++
drivers/net/mlx5/mlx5.h | 1 +
drivers/net/mlx5/mlx5_devx.c | 38 +++++++++++++++++++++++++++++++++++
drivers/net/mlx5/mlx5_rxq.c | 40 ++++++++-----------------------------
4 files changed, 77 insertions(+), 32 deletions(-)
diff --git a/drivers/net/mlx5/linux/mlx5_verbs.c b/drivers/net/mlx5/linux/mlx5_verbs.c
index ff71513..3af09db 100644
--- a/drivers/net/mlx5/linux/mlx5_verbs.c
+++ b/drivers/net/mlx5/linux/mlx5_verbs.c
@@ -428,8 +428,38 @@
mlx5_free(rxq_obj);
}
+/**
+ * Get event for an Rx verbs queue object.
+ *
+ * @param rxq_obj
+ * Verbs Rx queue object.
+ *
+ * @return
+ * 0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+static int
+mlx5_rx_ibv_get_event(struct mlx5_rxq_obj *rxq_obj)
+{
+ struct ibv_cq *ev_cq;
+ void *ev_ctx;
+ int ret = mlx5_glue->get_cq_event(rxq_obj->ibv_channel,
+ &ev_cq, &ev_ctx);
+
+ if (ret < 0 || ev_cq != rxq_obj->ibv_cq)
+ goto exit;
+ mlx5_glue->ack_cq_events(rxq_obj->ibv_cq, 1);
+ return 0;
+exit:
+ if (ret < 0)
+ rte_errno = errno;
+ else
+ rte_errno = EINVAL;
+ return -rte_errno;
+}
+
struct mlx5_obj_ops ibv_obj_ops = {
.rxq_obj_modify_vlan_strip = mlx5_rxq_obj_modify_wq_vlan_strip,
.rxq_obj_new = mlx5_rxq_ibv_obj_new,
+ .rxq_event_get = mlx5_rx_ibv_get_event,
.rxq_obj_release = mlx5_rxq_ibv_obj_release,
};
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index eba5df9..f0e2929 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -709,6 +709,7 @@ struct mlx5_obj_ops {
int (*rxq_obj_modify_vlan_strip)(struct mlx5_rxq_obj *rxq_obj, int on);
struct mlx5_rxq_obj *(*rxq_obj_new)(struct rte_eth_dev *dev,
uint16_t idx);
+ int (*rxq_event_get)(struct mlx5_rxq_obj *rxq_obj);
void (*rxq_obj_release)(struct mlx5_rxq_obj *rxq_obj);
};
diff --git a/drivers/net/mlx5/mlx5_devx.c b/drivers/net/mlx5/mlx5_devx.c
index 191b3c2..39e2ad5 100644
--- a/drivers/net/mlx5/mlx5_devx.c
+++ b/drivers/net/mlx5/mlx5_devx.c
@@ -136,6 +136,43 @@
}
/**
+ * Get event for an Rx DevX queue object.
+ *
+ * @param rxq_obj
+ * DevX Rx queue object.
+ *
+ * @return
+ * 0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+static int
+mlx5_rx_devx_get_event(struct mlx5_rxq_obj *rxq_obj)
+{
+#ifdef HAVE_IBV_DEVX_EVENT
+ union {
+ struct mlx5dv_devx_async_event_hdr event_resp;
+ uint8_t buf[sizeof(struct mlx5dv_devx_async_event_hdr) + 128];
+ } out;
+ int ret = mlx5_glue->devx_get_event(rxq_obj->devx_channel,
+ &out.event_resp,
+ sizeof(out.buf));
+
+ if (ret < 0) {
+ rte_errno = errno;
+ return -rte_errno;
+ }
+ if (out.event_resp.cookie != (uint64_t)(uintptr_t)rxq_obj->devx_cq) {
+ rte_errno = EINVAL;
+ return -rte_errno;
+ }
+ return 0;
+#else
+ (void)rxq_obj;
+ rte_errno = ENOTSUP;
+ return -rte_errno;
+#endif /* HAVE_IBV_DEVX_EVENT */
+}
+
+/**
* Fill common fields of create RQ attributes structure.
*
* @param rxq_data
@@ -606,5 +643,6 @@
struct mlx5_obj_ops devx_obj_ops = {
.rxq_obj_modify_vlan_strip = mlx5_rxq_obj_modify_rq_vlan_strip,
.rxq_obj_new = mlx5_rxq_devx_obj_new,
+ .rxq_event_get = mlx5_rx_devx_get_event,
.rxq_obj_release = mlx5_rxq_devx_obj_release,
};
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index daa92b6..46d5f6c 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -1024,10 +1024,8 @@
int
mlx5_rx_intr_disable(struct rte_eth_dev *dev, uint16_t rx_queue_id)
{
+ struct mlx5_priv *priv = dev->data->dev_private;
struct mlx5_rxq_ctrl *rxq_ctrl;
- struct mlx5_rxq_obj *rxq_obj = NULL;
- struct ibv_cq *ev_cq;
- void *ev_ctx;
int ret = 0;
rxq_ctrl = mlx5_rxq_get(dev, rx_queue_id);
@@ -1035,42 +1033,20 @@
rte_errno = EINVAL;
return -rte_errno;
}
- if (!rxq_ctrl->irq) {
- mlx5_rxq_release(dev, rx_queue_id);
- return 0;
- }
- rxq_obj = rxq_ctrl->obj;
- if (!rxq_obj)
+ if (!rxq_ctrl->obj)
goto error;
- if (rxq_obj->type == MLX5_RXQ_OBJ_TYPE_IBV) {
- ret = mlx5_glue->get_cq_event(rxq_obj->ibv_channel, &ev_cq,
- &ev_ctx);
- if (ret < 0 || ev_cq != rxq_obj->ibv_cq)
- goto error;
- mlx5_glue->ack_cq_events(rxq_obj->ibv_cq, 1);
- } else if (rxq_obj->type == MLX5_RXQ_OBJ_TYPE_DEVX_RQ) {
-#ifdef HAVE_IBV_DEVX_EVENT
- union {
- struct mlx5dv_devx_async_event_hdr event_resp;
- uint8_t buf[sizeof(struct mlx5dv_devx_async_event_hdr)
- + 128];
- } out;
-
- ret = mlx5_glue->devx_get_event
- (rxq_obj->devx_channel, &out.event_resp,
- sizeof(out.buf));
- if (ret < 0 || out.event_resp.cookie !=
- (uint64_t)(uintptr_t)rxq_obj->devx_cq)
+ if (rxq_ctrl->irq) {
+ ret = priv->obj_ops->rxq_event_get(rxq_ctrl->obj);
+ if (ret < 0)
goto error;
-#endif /* HAVE_IBV_DEVX_EVENT */
+ rxq_ctrl->rxq.cq_arm_sn++;
}
- rxq_ctrl->rxq.cq_arm_sn++;
mlx5_rxq_release(dev, rx_queue_id);
return 0;
error:
/**
- * For ret < 0 save the errno (may be EAGAIN which means the get_event
- * function was called before receiving one).
+ * The ret variable may be EAGAIN which means the get_event function was
+ * called before receiving one.
*/
if (ret < 0)
rte_errno = errno;
--
1.8.3.1
^ permalink raw reply [flat|nested] 28+ messages in thread
* [dpdk-dev] [PATCH v1 07/18] net/mlx5: share Rx control code
2020-09-03 10:13 [dpdk-dev] [PATCH v1 00/18] mlx5 Rx DevX/Verbs separation Michael Baum
` (5 preceding siblings ...)
2020-09-03 10:13 ` [dpdk-dev] [PATCH v1 06/18] net/mlx5: separate Rx interrupt handling Michael Baum
@ 2020-09-03 10:13 ` Michael Baum
2020-09-03 10:13 ` [dpdk-dev] [PATCH v1 08/18] net/mlx5: rearrange the creation of RQ and CQ resources Michael Baum
` (12 subsequent siblings)
19 siblings, 0 replies; 28+ messages in thread
From: Michael Baum @ 2020-09-03 10:13 UTC (permalink / raw)
To: dev; +Cc: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko
Move the Rx queue object resource allocations and debug logs that are
common to the DevX and Verbs modules into a shared location.
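A rough sketch (not part of the patch) of the shared flow this moves into
the Rx start path: the common code allocates the rxq object container and
lets the backend callback fill it, instead of each backend allocating its
own. The helper name rxq_obj_setup_sketch is hypothetical; the fields and
ops it touches are the ones shown in the diff below.

static int
rxq_obj_setup_sketch(struct rte_eth_dev *dev, uint16_t idx,
		     struct mlx5_rxq_ctrl *rxq_ctrl)
{
	struct mlx5_priv *priv = dev->data->dev_private;

	/* Shared allocation of the rxq object container. */
	rxq_ctrl->obj = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO,
				    sizeof(*rxq_ctrl->obj), 0,
				    rxq_ctrl->socket);
	if (!rxq_ctrl->obj) {
		rte_errno = ENOMEM;
		return -rte_errno;
	}
	/* The backend callback (Verbs or DevX) only creates HW objects. */
	if (priv->obj_ops->rxq_obj_new(dev, idx)) {
		mlx5_free(rxq_ctrl->obj);
		rxq_ctrl->obj = NULL;
		return -rte_errno;
	}
	LIST_INSERT_HEAD(&priv->rxqsobj, rxq_ctrl->obj, next);
	return 0;
}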
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
drivers/net/mlx5/linux/mlx5_verbs.c | 50 +++++++-----------------
drivers/net/mlx5/mlx5.h | 3 +-
drivers/net/mlx5/mlx5_devx.c | 77 ++++++++++---------------------------
drivers/net/mlx5/mlx5_rxq.c | 8 +++-
drivers/net/mlx5/mlx5_rxtx.h | 1 -
drivers/net/mlx5/mlx5_trigger.c | 30 +++++++++++++--
6 files changed, 68 insertions(+), 101 deletions(-)
diff --git a/drivers/net/mlx5/linux/mlx5_verbs.c b/drivers/net/mlx5/linux/mlx5_verbs.c
index 3af09db..16e5900 100644
--- a/drivers/net/mlx5/linux/mlx5_verbs.c
+++ b/drivers/net/mlx5/linux/mlx5_verbs.c
@@ -269,9 +269,9 @@
* Queue index in DPDK Rx queue array.
*
* @return
- * The Verbs object initialized, NULL otherwise and rte_errno is set.
+ * 0 on success, a negative errno value otherwise and rte_errno is set.
*/
-static struct mlx5_rxq_obj *
+static int
mlx5_rxq_ibv_obj_new(struct rte_eth_dev *dev, uint16_t idx)
{
struct mlx5_priv *priv = dev->data->dev_private;
@@ -281,24 +281,16 @@
struct ibv_wq_attr mod;
unsigned int cqe_n;
unsigned int wqe_n = 1 << rxq_data->elts_n;
- struct mlx5_rxq_obj *tmpl = NULL;
+ struct mlx5_rxq_obj *tmpl = rxq_ctrl->obj;
struct mlx5dv_cq cq_info;
struct mlx5dv_rwq rwq;
int ret = 0;
struct mlx5dv_obj obj;
MLX5_ASSERT(rxq_data);
- MLX5_ASSERT(!rxq_ctrl->obj);
+ MLX5_ASSERT(tmpl);
priv->verbs_alloc_ctx.type = MLX5_VERBS_ALLOC_TYPE_RX_QUEUE;
priv->verbs_alloc_ctx.obj = rxq_ctrl;
- tmpl = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO, sizeof(*tmpl), 0,
- rxq_ctrl->socket);
- if (!tmpl) {
- DRV_LOG(ERR, "port %u Rx queue %u cannot allocate resources",
- dev->data->port_id, rxq_data->idx);
- rte_errno = ENOMEM;
- goto error;
- }
tmpl->type = MLX5_RXQ_OBJ_TYPE_IBV;
tmpl->rxq_ctrl = rxq_ctrl;
if (rxq_ctrl->irq) {
@@ -316,10 +308,6 @@
cqe_n = wqe_n * (1 << rxq_data->strd_num_n) - 1;
else
cqe_n = wqe_n - 1;
- DRV_LOG(DEBUG, "port %u device_attr.max_qp_wr is %d",
- dev->data->port_id, priv->sh->device_attr.max_qp_wr);
- DRV_LOG(DEBUG, "port %u device_attr.max_sge is %d",
- dev->data->port_id, priv->sh->device_attr.max_sge);
/* Create CQ using Verbs API. */
tmpl->ibv_cq = mlx5_ibv_cq_new(dev, priv, rxq_data, cqe_n, tmpl);
if (!tmpl->ibv_cq) {
@@ -382,28 +370,21 @@
rxq_data->cq_arm_sn = 0;
mlx5_rxq_initialize(rxq_data);
rxq_data->cq_ci = 0;
- DRV_LOG(DEBUG, "port %u rxq %u updated with %p", dev->data->port_id,
- idx, (void *)&tmpl);
- LIST_INSERT_HEAD(&priv->rxqsobj, tmpl, next);
priv->verbs_alloc_ctx.type = MLX5_VERBS_ALLOC_TYPE_NONE;
dev->data->rx_queue_state[idx] = RTE_ETH_QUEUE_STATE_STARTED;
rxq_ctrl->wqn = ((struct ibv_wq *)(tmpl->wq))->wq_num;
- return tmpl;
+ return 0;
error:
- if (tmpl) {
- ret = rte_errno; /* Save rte_errno before cleanup. */
- if (tmpl->wq)
- claim_zero(mlx5_glue->destroy_wq(tmpl->wq));
- if (tmpl->ibv_cq)
- claim_zero(mlx5_glue->destroy_cq(tmpl->ibv_cq));
- if (tmpl->ibv_channel)
- claim_zero(mlx5_glue->destroy_comp_channel
- (tmpl->ibv_channel));
- mlx5_free(tmpl);
- rte_errno = ret; /* Restore rte_errno. */
- }
+ ret = rte_errno; /* Save rte_errno before cleanup. */
+ if (tmpl->wq)
+ claim_zero(mlx5_glue->destroy_wq(tmpl->wq));
+ if (tmpl->ibv_cq)
+ claim_zero(mlx5_glue->destroy_cq(tmpl->ibv_cq));
+ if (tmpl->ibv_channel)
+ claim_zero(mlx5_glue->destroy_comp_channel(tmpl->ibv_channel));
+ rte_errno = ret; /* Restore rte_errno. */
priv->verbs_alloc_ctx.type = MLX5_VERBS_ALLOC_TYPE_NONE;
- return NULL;
+ return -rte_errno;
}
/**
@@ -418,14 +399,11 @@
MLX5_ASSERT(rxq_obj);
MLX5_ASSERT(rxq_obj->wq);
MLX5_ASSERT(rxq_obj->ibv_cq);
- rxq_free_elts(rxq_obj->rxq_ctrl);
claim_zero(mlx5_glue->destroy_wq(rxq_obj->wq));
claim_zero(mlx5_glue->destroy_cq(rxq_obj->ibv_cq));
if (rxq_obj->ibv_channel)
claim_zero(mlx5_glue->destroy_comp_channel
(rxq_obj->ibv_channel));
- LIST_REMOVE(rxq_obj, next);
- mlx5_free(rxq_obj);
}
/**
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index f0e2929..5131a47 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -707,8 +707,7 @@ struct mlx5_rxq_obj {
/* HW objects operations structure. */
struct mlx5_obj_ops {
int (*rxq_obj_modify_vlan_strip)(struct mlx5_rxq_obj *rxq_obj, int on);
- struct mlx5_rxq_obj *(*rxq_obj_new)(struct rte_eth_dev *dev,
- uint16_t idx);
+ int (*rxq_obj_new)(struct rte_eth_dev *dev, uint16_t idx);
int (*rxq_event_get)(struct mlx5_rxq_obj *rxq_obj);
void (*rxq_obj_release)(struct mlx5_rxq_obj *rxq_obj);
};
diff --git a/drivers/net/mlx5/mlx5_devx.c b/drivers/net/mlx5/mlx5_devx.c
index 39e2ad5..5b1cf14 100644
--- a/drivers/net/mlx5/mlx5_devx.c
+++ b/drivers/net/mlx5/mlx5_devx.c
@@ -116,7 +116,6 @@
mlx5_rxq_obj_hairpin_release(rxq_obj);
} else {
MLX5_ASSERT(rxq_obj->devx_cq);
- rxq_free_elts(rxq_obj->rxq_ctrl);
claim_zero(mlx5_devx_cmd_destroy(rxq_obj->rq));
claim_zero(mlx5_devx_cmd_destroy(rxq_obj->devx_cq));
claim_zero(mlx5_release_dbr(&priv->dbrpgs,
@@ -131,8 +130,6 @@
rxq_release_devx_rq_resources(rxq_obj->rxq_ctrl);
rxq_release_devx_cq_resources(rxq_obj->rxq_ctrl);
}
- LIST_REMOVE(rxq_obj, next);
- mlx5_free(rxq_obj);
}
/**
@@ -435,9 +432,9 @@
* Queue index in DPDK Rx queue array.
*
* @return
- * The hairpin DevX object initialized, NULL otherwise and rte_errno is set.
+ * 0 on success, a negative errno value otherwise and rte_errno is set.
*/
-static struct mlx5_rxq_obj *
+static int
mlx5_rxq_obj_hairpin_new(struct rte_eth_dev *dev, uint16_t idx)
{
struct mlx5_priv *priv = dev->data->dev_private;
@@ -445,19 +442,11 @@
struct mlx5_rxq_ctrl *rxq_ctrl =
container_of(rxq_data, struct mlx5_rxq_ctrl, rxq);
struct mlx5_devx_create_rq_attr attr = { 0 };
- struct mlx5_rxq_obj *tmpl = NULL;
+ struct mlx5_rxq_obj *tmpl = rxq_ctrl->obj;
uint32_t max_wq_data;
MLX5_ASSERT(rxq_data);
- MLX5_ASSERT(!rxq_ctrl->obj);
- tmpl = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO, sizeof(*tmpl), 0,
- rxq_ctrl->socket);
- if (!tmpl) {
- DRV_LOG(ERR, "port %u Rx queue %u cannot allocate resources",
- dev->data->port_id, rxq_data->idx);
- rte_errno = ENOMEM;
- return NULL;
- }
+ MLX5_ASSERT(tmpl);
tmpl->type = MLX5_RXQ_OBJ_TYPE_DEVX_HAIRPIN;
tmpl->rxq_ctrl = rxq_ctrl;
attr.hairpin = 1;
@@ -468,9 +457,8 @@
DRV_LOG(ERR, "Total data size %u power of 2 is "
"too large for hairpin.",
priv->config.log_hp_size);
- mlx5_free(tmpl);
rte_errno = ERANGE;
- return NULL;
+ return -rte_errno;
}
attr.wq_attr.log_hairpin_data_sz = priv->config.log_hp_size;
} else {
@@ -488,15 +476,11 @@
DRV_LOG(ERR,
"Port %u Rx hairpin queue %u can't create rq object.",
dev->data->port_id, idx);
- mlx5_free(tmpl);
rte_errno = errno;
- return NULL;
+ return -rte_errno;
}
- DRV_LOG(DEBUG, "port %u rxq %u updated with %p", dev->data->port_id,
- idx, (void *)&tmpl);
- LIST_INSERT_HEAD(&priv->rxqsobj, tmpl, next);
dev->data->rx_queue_state[idx] = RTE_ETH_QUEUE_STATE_HAIRPIN;
- return tmpl;
+ return 0;
}
/**
@@ -508,9 +492,9 @@
* Queue index in DPDK Rx queue array.
*
* @return
- * The DevX object initialized, NULL otherwise and rte_errno is set.
+ * 0 on success, a negative errno value otherwise and rte_errno is set.
*/
-static struct mlx5_rxq_obj *
+static int
mlx5_rxq_devx_obj_new(struct rte_eth_dev *dev, uint16_t idx)
{
struct mlx5_priv *priv = dev->data->dev_private;
@@ -519,7 +503,7 @@
container_of(rxq_data, struct mlx5_rxq_ctrl, rxq);
unsigned int cqe_n;
unsigned int wqe_n = 1 << rxq_data->elts_n;
- struct mlx5_rxq_obj *tmpl = NULL;
+ struct mlx5_rxq_obj *tmpl = rxq_ctrl->obj;
struct mlx5_devx_modify_rq_attr rq_attr = { 0 };
struct mlx5_devx_dbr_page *cq_dbr_page = NULL;
struct mlx5_devx_dbr_page *rq_dbr_page = NULL;
@@ -527,17 +511,9 @@
int ret = 0;
MLX5_ASSERT(rxq_data);
- MLX5_ASSERT(!rxq_ctrl->obj);
+ MLX5_ASSERT(tmpl);
if (rxq_ctrl->type == MLX5_RXQ_TYPE_HAIRPIN)
return mlx5_rxq_obj_hairpin_new(dev, idx);
- tmpl = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO, sizeof(*tmpl), 0,
- rxq_ctrl->socket);
- if (!tmpl) {
- DRV_LOG(ERR, "port %u Rx queue %u cannot allocate resources",
- dev->data->port_id, rxq_data->idx);
- rte_errno = ENOMEM;
- goto error;
- }
tmpl->type = MLX5_RXQ_OBJ_TYPE_DEVX_RQ;
tmpl->rxq_ctrl = rxq_ctrl;
if (rxq_ctrl->irq) {
@@ -559,10 +535,6 @@
cqe_n = wqe_n * (1 << rxq_data->strd_num_n) - 1;
else
cqe_n = wqe_n - 1;
- DRV_LOG(DEBUG, "port %u device_attr.max_qp_wr is %d",
- dev->data->port_id, priv->sh->device_attr.max_qp_wr);
- DRV_LOG(DEBUG, "port %u device_attr.max_sge is %d",
- dev->data->port_id, priv->sh->device_attr.max_sge);
/* Allocate CQ door-bell. */
dbr_offset = mlx5_get_dbr(priv->sh->ctx, &priv->dbrpgs, &cq_dbr_page);
if (dbr_offset < 0) {
@@ -608,25 +580,17 @@
rxq_data->cq_arm_sn = 0;
mlx5_rxq_initialize(rxq_data);
rxq_data->cq_ci = 0;
- DRV_LOG(DEBUG, "port %u rxq %u updated with %p", dev->data->port_id,
- idx, (void *)&tmpl);
- LIST_INSERT_HEAD(&priv->rxqsobj, tmpl, next);
dev->data->rx_queue_state[idx] = RTE_ETH_QUEUE_STATE_STARTED;
rxq_ctrl->wqn = tmpl->rq->id;
- return tmpl;
+ return 0;
error:
- if (tmpl) {
- ret = rte_errno; /* Save rte_errno before cleanup. */
- if (tmpl->rq)
- claim_zero(mlx5_devx_cmd_destroy(tmpl->rq));
- if (tmpl->devx_cq)
- claim_zero(mlx5_devx_cmd_destroy(tmpl->devx_cq));
- if (tmpl->devx_channel)
- mlx5_glue->devx_destroy_event_channel
- (tmpl->devx_channel);
- mlx5_free(tmpl);
- rte_errno = ret; /* Restore rte_errno. */
- }
+ ret = rte_errno; /* Save rte_errno before cleanup. */
+ if (tmpl->rq)
+ claim_zero(mlx5_devx_cmd_destroy(tmpl->rq));
+ if (tmpl->devx_cq)
+ claim_zero(mlx5_devx_cmd_destroy(tmpl->devx_cq));
+ if (tmpl->devx_channel)
+ mlx5_glue->devx_destroy_event_channel(tmpl->devx_channel);
if (rq_dbr_page)
claim_zero(mlx5_release_dbr(&priv->dbrpgs,
rxq_ctrl->rq_dbr_umem_id,
@@ -637,7 +601,8 @@
rxq_ctrl->cq_dbr_offset));
rxq_release_devx_rq_resources(rxq_ctrl);
rxq_release_devx_cq_resources(rxq_ctrl);
- return NULL;
+ rte_errno = ret; /* Restore rte_errno. */
+ return -rte_errno;
}
struct mlx5_obj_ops devx_obj_ops = {
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index 46d5f6c..00ef230 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -347,7 +347,7 @@
* @param rxq_ctrl
* Pointer to RX queue structure.
*/
-void
+static void
rxq_free_elts(struct mlx5_rxq_ctrl *rxq_ctrl)
{
if (mlx5_rxq_mprq_enabled(&rxq_ctrl->rxq))
@@ -1651,10 +1651,14 @@ struct mlx5_rxq_ctrl *
return 1;
if (rxq_ctrl->obj) {
priv->obj_ops->rxq_obj_release(rxq_ctrl->obj);
+ LIST_REMOVE(rxq_ctrl->obj, next);
+ mlx5_free(rxq_ctrl->obj);
rxq_ctrl->obj = NULL;
}
- if (rxq_ctrl->type == MLX5_RXQ_TYPE_STANDARD)
+ if (rxq_ctrl->type == MLX5_RXQ_TYPE_STANDARD) {
mlx5_mr_btree_free(&rxq_ctrl->rxq.mr_ctrl.cache_bh);
+ rxq_free_elts(rxq_ctrl);
+ }
LIST_REMOVE(rxq_ctrl, next);
mlx5_free(rxq_ctrl);
(*priv->rxqs)[idx] = NULL;
diff --git a/drivers/net/mlx5/mlx5_rxtx.h b/drivers/net/mlx5/mlx5_rxtx.h
index 4baf5b9..d4a6c50 100644
--- a/drivers/net/mlx5/mlx5_rxtx.h
+++ b/drivers/net/mlx5/mlx5_rxtx.h
@@ -400,7 +400,6 @@ struct mlx5_rxq_ctrl *mlx5_rxq_hairpin_new
int mlx5_rxq_release(struct rte_eth_dev *dev, uint16_t idx);
int mlx5_rxq_verify(struct rte_eth_dev *dev);
int rxq_alloc_elts(struct mlx5_rxq_ctrl *rxq_ctrl);
-void rxq_free_elts(struct mlx5_rxq_ctrl *rxq_ctrl);
int mlx5_ind_table_obj_verify(struct rte_eth_dev *dev);
uint32_t mlx5_hrxq_new(struct rte_eth_dev *dev,
const uint8_t *rss_key, uint32_t rss_key_len,
diff --git a/drivers/net/mlx5/mlx5_trigger.c b/drivers/net/mlx5/mlx5_trigger.c
index 6376719..43eff93 100644
--- a/drivers/net/mlx5/mlx5_trigger.c
+++ b/drivers/net/mlx5/mlx5_trigger.c
@@ -10,6 +10,8 @@
#include <rte_interrupts.h>
#include <rte_alarm.h>
+#include <mlx5_malloc.h>
+
#include "mlx5.h"
#include "mlx5_mr.h"
#include "mlx5_rxtx.h"
@@ -115,6 +117,10 @@
/* Should not release Rx queues but return immediately. */
return -rte_errno;
}
+ DRV_LOG(DEBUG, "Port %u device_attr.max_qp_wr is %d.",
+ dev->data->port_id, priv->sh->device_attr.max_qp_wr);
+ DRV_LOG(DEBUG, "Port %u device_attr.max_sge is %d.",
+ dev->data->port_id, priv->sh->device_attr.max_sge);
for (i = 0; i != priv->rxqs_n; ++i) {
struct mlx5_rxq_ctrl *rxq_ctrl = mlx5_rxq_get(dev, i);
struct rte_mempool *mp;
@@ -125,17 +131,33 @@
/* Pre-register Rx mempool. */
mp = mlx5_rxq_mprq_enabled(&rxq_ctrl->rxq) ?
rxq_ctrl->rxq.mprq_mp : rxq_ctrl->rxq.mp;
- DRV_LOG(DEBUG, "port %u Rx queue %u registering mp %s"
- " having %u chunks", dev->data->port_id,
+ DRV_LOG(DEBUG, "Port %u Rx queue %u registering mp %s"
+ " having %u chunks.", dev->data->port_id,
rxq_ctrl->rxq.idx, mp->name, mp->nb_mem_chunks);
mlx5_mr_update_mp(dev, &rxq_ctrl->rxq.mr_ctrl, mp);
ret = rxq_alloc_elts(rxq_ctrl);
if (ret)
goto error;
}
- rxq_ctrl->obj = priv->obj_ops->rxq_obj_new(dev, i);
- if (!rxq_ctrl->obj)
+ MLX5_ASSERT(!rxq_ctrl->obj);
+ rxq_ctrl->obj = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO,
+ sizeof(*rxq_ctrl->obj), 0,
+ rxq_ctrl->socket);
+ if (!rxq_ctrl->obj) {
+ DRV_LOG(ERR,
+ "Port %u Rx queue %u can't allocate resources.",
+ dev->data->port_id, (*priv->rxqs)[i]->idx);
+ rte_errno = ENOMEM;
+ goto error;
+ }
+ ret = priv->obj_ops->rxq_obj_new(dev, i);
+ if (ret) {
+ mlx5_free(rxq_ctrl->obj);
goto error;
+ }
+ DRV_LOG(DEBUG, "Port %u rxq %u updated with %p.",
+ dev->data->port_id, i, (void *)&rxq_ctrl->obj);
+ LIST_INSERT_HEAD(&priv->rxqsobj, rxq_ctrl->obj, next);
}
return 0;
error:
--
1.8.3.1
^ permalink raw reply [flat|nested] 28+ messages in thread
* [dpdk-dev] [PATCH v1 08/18] net/mlx5: rearrange the creation of RQ and CQ resources
2020-09-03 10:13 [dpdk-dev] [PATCH v1 00/18] mlx5 Rx DevX/Verbs separation Michael Baum
` (6 preceding siblings ...)
2020-09-03 10:13 ` [dpdk-dev] [PATCH v1 07/18] net/mlx5: share Rx control code Michael Baum
@ 2020-09-03 10:13 ` Michael Baum
2020-09-03 10:13 ` [dpdk-dev] [PATCH v1 09/18] net/mlx5: rearrange the creation of WQ and CQ object Michael Baum
` (11 subsequent siblings)
19 siblings, 0 replies; 28+ messages in thread
From: Michael Baum @ 2020-09-03 10:13 UTC (permalink / raw)
To: dev; +Cc: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko
Rearrange the RQ and CQ resource handling for the DevX Rx queue (see the
sketch after this list):
1. Rename the allocation functions so that it is clear that they allocate
all the resources, not just the CQ or RQ.
2. Move the allocation and release of the doorbell into the creation and
release functions.
3. Reduce the number of arguments that the creation functions receive.
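Not part of the patch - a minimal sketch of the intent of item 2, assuming
the field and helper names shown in the diff below: the RQ doorbell is now
taken inside the DevX resource creation helper and remembered in the
control structure so the matching release helper can free it without extra
arguments. The function name rq_dbr_alloc_sketch is hypothetical.

static int
rq_dbr_alloc_sketch(struct mlx5_rxq_ctrl *rxq_ctrl)
{
	struct mlx5_priv *priv = rxq_ctrl->priv;
	struct mlx5_devx_dbr_page *dbr_page;
	int64_t dbr_offset;

	dbr_offset = mlx5_get_dbr(priv->sh->ctx, &priv->dbrpgs, &dbr_page);
	if (dbr_offset < 0)
		return -1; /* rte_errno handling omitted in this sketch. */
	/* Remember the page in the control structure for the release path. */
	rxq_ctrl->rq_dbr_offset = dbr_offset;
	rxq_ctrl->rq_dbrec_page = dbr_page;
	rxq_ctrl->rxq.rq_db = (uint32_t *)((uintptr_t)dbr_page->dbrs +
					   (uintptr_t)dbr_offset);
	return 0;
}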
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
drivers/net/mlx5/mlx5_devx.c | 156 ++++++++++++++++++++++++-------------------
drivers/net/mlx5/mlx5_rxtx.h | 4 +-
2 files changed, 89 insertions(+), 71 deletions(-)
diff --git a/drivers/net/mlx5/mlx5_devx.c b/drivers/net/mlx5/mlx5_devx.c
index 5b1cf14..5a3ac49 100644
--- a/drivers/net/mlx5/mlx5_devx.c
+++ b/drivers/net/mlx5/mlx5_devx.c
@@ -22,13 +22,37 @@
#include "mlx5_utils.h"
#include "mlx5_devx.h"
+
+/**
+ * Calculate the number of CQEs in CQ for the Rx queue.
+ *
+ * @param rxq_data
+ * Pointer to receive queue structure.
+ *
+ * @return
+ * Number of CQEs in CQ.
+ */
+static unsigned int
+mlx5_rxq_cqe_num(struct mlx5_rxq_data *rxq_data)
+{
+ unsigned int cqe_n;
+ unsigned int wqe_n = 1 << rxq_data->elts_n;
+
+ if (mlx5_rxq_mprq_enabled(rxq_data))
+ cqe_n = wqe_n * (1 << rxq_data->strd_num_n) - 1;
+ else
+ cqe_n = wqe_n - 1;
+ return cqe_n;
+}
+
/**
* Modify RQ vlan stripping offload
*
* @param rxq_obj
* Rx queue object.
*
- * @return 0 on success, non-0 otherwise
+ * @return
+ * 0 on success, non-0 otherwise
*/
static int
mlx5_rxq_obj_modify_rq_vlan_strip(struct mlx5_rxq_obj *rxq_obj, int on)
@@ -52,6 +76,8 @@
static void
rxq_release_devx_rq_resources(struct mlx5_rxq_ctrl *rxq_ctrl)
{
+ struct mlx5_devx_dbr_page *dbr_page = rxq_ctrl->rq_dbrec_page;
+
if (rxq_ctrl->rxq.wqes) {
mlx5_free((void *)(uintptr_t)rxq_ctrl->rxq.wqes);
rxq_ctrl->rxq.wqes = NULL;
@@ -60,6 +86,12 @@
mlx5_glue->devx_umem_dereg(rxq_ctrl->wq_umem);
rxq_ctrl->wq_umem = NULL;
}
+ if (dbr_page) {
+ claim_zero(mlx5_release_dbr(&rxq_ctrl->priv->dbrpgs,
+ mlx5_os_get_umem_id(dbr_page->umem),
+ rxq_ctrl->rq_dbr_offset));
+ rxq_ctrl->rq_dbrec_page = NULL;
+ }
}
/**
@@ -71,6 +103,8 @@
static void
rxq_release_devx_cq_resources(struct mlx5_rxq_ctrl *rxq_ctrl)
{
+ struct mlx5_devx_dbr_page *dbr_page = rxq_ctrl->cq_dbrec_page;
+
if (rxq_ctrl->rxq.cqes) {
rte_free((void *)(uintptr_t)rxq_ctrl->rxq.cqes);
rxq_ctrl->rxq.cqes = NULL;
@@ -79,6 +113,12 @@
mlx5_glue->devx_umem_dereg(rxq_ctrl->cq_umem);
rxq_ctrl->cq_umem = NULL;
}
+ if (dbr_page) {
+ claim_zero(mlx5_release_dbr(&rxq_ctrl->priv->dbrpgs,
+ mlx5_os_get_umem_id(dbr_page->umem),
+ rxq_ctrl->cq_dbr_offset));
+ rxq_ctrl->cq_dbrec_page = NULL;
+ }
}
/**
@@ -108,8 +148,6 @@
static void
mlx5_rxq_devx_obj_release(struct mlx5_rxq_obj *rxq_obj)
{
- struct mlx5_priv *priv = rxq_obj->rxq_ctrl->priv;
-
MLX5_ASSERT(rxq_obj);
MLX5_ASSERT(rxq_obj->rq);
if (rxq_obj->type == MLX5_RXQ_OBJ_TYPE_DEVX_HAIRPIN) {
@@ -118,12 +156,6 @@
MLX5_ASSERT(rxq_obj->devx_cq);
claim_zero(mlx5_devx_cmd_destroy(rxq_obj->rq));
claim_zero(mlx5_devx_cmd_destroy(rxq_obj->devx_cq));
- claim_zero(mlx5_release_dbr(&priv->dbrpgs,
- rxq_obj->rxq_ctrl->rq_dbr_umem_id,
- rxq_obj->rxq_ctrl->rq_dbr_offset));
- claim_zero(mlx5_release_dbr(&priv->dbrpgs,
- rxq_obj->rxq_ctrl->cq_dbr_umem_id,
- rxq_obj->rxq_ctrl->cq_dbr_offset));
if (rxq_obj->devx_channel)
mlx5_glue->devx_destroy_event_channel
(rxq_obj->devx_channel);
@@ -208,7 +240,8 @@
MLX5_WQ_END_PAD_MODE_NONE;
wq_attr->pd = priv->sh->pdn;
wq_attr->dbr_addr = rxq_ctrl->rq_dbr_offset;
- wq_attr->dbr_umem_id = rxq_ctrl->rq_dbr_umem_id;
+ wq_attr->dbr_umem_id =
+ mlx5_os_get_umem_id(rxq_ctrl->rq_dbrec_page->umem);
wq_attr->dbr_umem_valid = 1;
wq_attr->wq_umem_id = mlx5_os_get_umem_id(rxq_ctrl->wq_umem);
wq_attr->wq_umem_valid = 1;
@@ -221,14 +254,12 @@
* Pointer to Ethernet device.
* @param idx
* Queue index in DPDK Rx queue array.
- * @param cqn
- * CQ number to use with this RQ.
*
* @return
- * The DevX object initialized, NULL otherwise and rte_errno is set.
+ * The DevX RQ object initialized, NULL otherwise and rte_errno is set.
*/
static struct mlx5_devx_obj *
-mlx5_devx_rq_new(struct rte_eth_dev *dev, uint16_t idx, uint32_t cqn)
+rxq_create_devx_rq_resources(struct rte_eth_dev *dev, uint16_t idx)
{
struct mlx5_priv *priv = dev->data->dev_private;
struct mlx5_rxq_data *rxq_data = (*priv->rxqs)[idx];
@@ -236,6 +267,9 @@
container_of(rxq_data, struct mlx5_rxq_ctrl, rxq);
struct mlx5_devx_create_rq_attr rq_attr = { 0 };
uint32_t wqe_n = 1 << (rxq_data->elts_n - rxq_data->sges_n);
+ uint32_t cqn = rxq_ctrl->obj->devx_cq->id;
+ struct mlx5_devx_dbr_page *dbr_page;
+ int64_t dbr_offset;
uint32_t wq_size = 0;
uint32_t wqe_size = 0;
uint32_t log_wqe_size = 0;
@@ -284,15 +318,27 @@
rxq_data->wqes = buf;
rxq_ctrl->wq_umem = mlx5_glue->devx_umem_reg(priv->sh->ctx,
buf, wq_size, 0);
- if (!rxq_ctrl->wq_umem) {
- mlx5_free(buf);
- return NULL;
+ if (!rxq_ctrl->wq_umem)
+ goto error;
+ /* Allocate RQ door-bell. */
+ dbr_offset = mlx5_get_dbr(priv->sh->ctx, &priv->dbrpgs, &dbr_page);
+ if (dbr_offset < 0) {
+ DRV_LOG(ERR, "Failed to allocate RQ door-bell.");
+ goto error;
}
+ rxq_ctrl->rq_dbr_offset = dbr_offset;
+ rxq_ctrl->rq_dbrec_page = dbr_page;
+ rxq_data->rq_db = (uint32_t *)((uintptr_t)dbr_page->dbrs +
+ (uintptr_t)rxq_ctrl->rq_dbr_offset);
+ /* Create RQ using DevX API. */
mlx5_devx_wq_attr_fill(priv, rxq_ctrl, &rq_attr.wq_attr);
rq = mlx5_devx_cmd_create_rq(priv->sh->ctx, &rq_attr, rxq_ctrl->socket);
if (!rq)
- rxq_release_devx_rq_resources(rxq_ctrl);
+ goto error;
return rq;
+error:
+ rxq_release_devx_rq_resources(rxq_ctrl);
+ return NULL;
}
/**
@@ -300,19 +346,14 @@
*
* @param dev
* Pointer to Ethernet device.
- * @param cqe_n
- * Number of CQEs in CQ.
* @param idx
* Queue index in DPDK Rx queue array.
- * @param rxq_obj
- * Pointer to Rx queue object data.
*
* @return
- * The DevX object initialized, NULL otherwise and rte_errno is set.
+ * The DevX CQ object initialized, NULL otherwise and rte_errno is set.
*/
static struct mlx5_devx_obj *
-mlx5_devx_cq_new(struct rte_eth_dev *dev, unsigned int cqe_n, uint16_t idx,
- struct mlx5_rxq_obj *rxq_obj)
+rxq_create_devx_cq_resources(struct rte_eth_dev *dev, uint16_t idx)
{
struct mlx5_devx_obj *cq_obj = 0;
struct mlx5_devx_cq_attr cq_attr = { 0 };
@@ -322,6 +363,9 @@
container_of(rxq_data, struct mlx5_rxq_ctrl, rxq);
size_t page_size = rte_mem_page_size();
uint32_t lcore = (uint32_t)rte_lcore_to_cpu_id(-1);
+ unsigned int cqe_n = mlx5_rxq_cqe_num(rxq_data);
+ struct mlx5_devx_dbr_page *dbr_page;
+ int64_t dbr_offset;
uint32_t eqn = 0;
void *buf = NULL;
uint16_t event_nums[1] = {0};
@@ -386,6 +430,19 @@
DRV_LOG(ERR, "Failed to register umem for CQ.");
goto error;
}
+ /* Allocate CQ door-bell. */
+ dbr_offset = mlx5_get_dbr(priv->sh->ctx, &priv->dbrpgs, &dbr_page);
+ if (dbr_offset < 0) {
+ DRV_LOG(ERR, "Failed to allocate CQ door-bell.");
+ goto error;
+ }
+ rxq_ctrl->cq_dbr_offset = dbr_offset;
+ rxq_ctrl->cq_dbrec_page = dbr_page;
+ rxq_data->cq_db = (uint32_t *)((uintptr_t)dbr_page->dbrs +
+ (uintptr_t)rxq_ctrl->cq_dbr_offset);
+ rxq_data->cq_uar =
+ mlx5_os_get_devx_uar_base_addr(priv->sh->devx_rx_uar);
+ /* Create CQ using DevX API. */
cq_attr.uar_page_id =
mlx5_os_get_devx_uar_page_id(priv->sh->devx_rx_uar);
cq_attr.q_umem_id = mlx5_os_get_umem_id(rxq_ctrl->cq_umem);
@@ -393,16 +450,16 @@
cq_attr.log_cq_size = log_cqe_n;
cq_attr.log_page_size = rte_log2_u32(page_size);
cq_attr.db_umem_offset = rxq_ctrl->cq_dbr_offset;
- cq_attr.db_umem_id = rxq_ctrl->cq_dbr_umem_id;
+ cq_attr.db_umem_id = mlx5_os_get_umem_id(dbr_page->umem);
cq_attr.db_umem_valid = 1;
cq_obj = mlx5_devx_cmd_create_cq(priv->sh->ctx, &cq_attr);
if (!cq_obj)
goto error;
rxq_data->cqe_n = log_cqe_n;
rxq_data->cqn = cq_obj->id;
- if (rxq_obj->devx_channel) {
+ if (rxq_ctrl->obj->devx_channel) {
ret = mlx5_glue->devx_subscribe_devx_event
- (rxq_obj->devx_channel,
+ (rxq_ctrl->obj->devx_channel,
cq_obj->obj,
sizeof(event_nums),
event_nums,
@@ -501,13 +558,8 @@
struct mlx5_rxq_data *rxq_data = (*priv->rxqs)[idx];
struct mlx5_rxq_ctrl *rxq_ctrl =
container_of(rxq_data, struct mlx5_rxq_ctrl, rxq);
- unsigned int cqe_n;
- unsigned int wqe_n = 1 << rxq_data->elts_n;
struct mlx5_rxq_obj *tmpl = rxq_ctrl->obj;
struct mlx5_devx_modify_rq_attr rq_attr = { 0 };
- struct mlx5_devx_dbr_page *cq_dbr_page = NULL;
- struct mlx5_devx_dbr_page *rq_dbr_page = NULL;
- int64_t dbr_offset;
int ret = 0;
MLX5_ASSERT(rxq_data);
@@ -531,40 +583,14 @@
}
tmpl->fd = mlx5_os_get_devx_channel_fd(tmpl->devx_channel);
}
- if (mlx5_rxq_mprq_enabled(rxq_data))
- cqe_n = wqe_n * (1 << rxq_data->strd_num_n) - 1;
- else
- cqe_n = wqe_n - 1;
- /* Allocate CQ door-bell. */
- dbr_offset = mlx5_get_dbr(priv->sh->ctx, &priv->dbrpgs, &cq_dbr_page);
- if (dbr_offset < 0) {
- DRV_LOG(ERR, "Failed to allocate CQ door-bell.");
- goto error;
- }
- rxq_ctrl->cq_dbr_offset = dbr_offset;
- rxq_ctrl->cq_dbr_umem_id = mlx5_os_get_umem_id(cq_dbr_page->umem);
- rxq_data->cq_db = (uint32_t *)((uintptr_t)cq_dbr_page->dbrs +
- (uintptr_t)rxq_ctrl->cq_dbr_offset);
- rxq_data->cq_uar =
- mlx5_os_get_devx_uar_base_addr(priv->sh->devx_rx_uar);
/* Create CQ using DevX API. */
- tmpl->devx_cq = mlx5_devx_cq_new(dev, cqe_n, idx, tmpl);
+ tmpl->devx_cq = rxq_create_devx_cq_resources(dev, idx);
if (!tmpl->devx_cq) {
DRV_LOG(ERR, "Failed to create CQ.");
goto error;
}
- /* Allocate RQ door-bell. */
- dbr_offset = mlx5_get_dbr(priv->sh->ctx, &priv->dbrpgs, &rq_dbr_page);
- if (dbr_offset < 0) {
- DRV_LOG(ERR, "Failed to allocate RQ door-bell.");
- goto error;
- }
- rxq_ctrl->rq_dbr_offset = dbr_offset;
- rxq_ctrl->rq_dbr_umem_id = mlx5_os_get_umem_id(rq_dbr_page->umem);
- rxq_data->rq_db = (uint32_t *)((uintptr_t)rq_dbr_page->dbrs +
- (uintptr_t)rxq_ctrl->rq_dbr_offset);
/* Create RQ using DevX API. */
- tmpl->rq = mlx5_devx_rq_new(dev, idx, tmpl->devx_cq->id);
+ tmpl->rq = rxq_create_devx_rq_resources(dev, idx);
if (!tmpl->rq) {
DRV_LOG(ERR, "Port %u Rx queue %u RQ creation failure.",
dev->data->port_id, idx);
@@ -591,14 +617,6 @@
claim_zero(mlx5_devx_cmd_destroy(tmpl->devx_cq));
if (tmpl->devx_channel)
mlx5_glue->devx_destroy_event_channel(tmpl->devx_channel);
- if (rq_dbr_page)
- claim_zero(mlx5_release_dbr(&priv->dbrpgs,
- rxq_ctrl->rq_dbr_umem_id,
- rxq_ctrl->rq_dbr_offset));
- if (cq_dbr_page)
- claim_zero(mlx5_release_dbr(&priv->dbrpgs,
- rxq_ctrl->cq_dbr_umem_id,
- rxq_ctrl->cq_dbr_offset));
rxq_release_devx_rq_resources(rxq_ctrl);
rxq_release_devx_cq_resources(rxq_ctrl);
rte_errno = ret; /* Restore rte_errno. */
diff --git a/drivers/net/mlx5/mlx5_rxtx.h b/drivers/net/mlx5/mlx5_rxtx.h
index d4a6c50..6d135dd 100644
--- a/drivers/net/mlx5/mlx5_rxtx.h
+++ b/drivers/net/mlx5/mlx5_rxtx.h
@@ -175,10 +175,10 @@ struct mlx5_rxq_ctrl {
uint32_t flow_tunnels_n[MLX5_FLOW_TUNNEL]; /* Tunnels counters. */
uint32_t wqn; /* WQ number. */
uint16_t dump_file_n; /* Number of dump files. */
- uint32_t rq_dbr_umem_id;
+ struct mlx5_devx_dbr_page *rq_dbrec_page;
uint64_t rq_dbr_offset;
/* Storing RQ door-bell information, needed when freeing door-bell. */
- uint32_t cq_dbr_umem_id;
+ struct mlx5_devx_dbr_page *cq_dbrec_page;
uint64_t cq_dbr_offset;
/* Storing CQ door-bell information, needed when freeing door-bell. */
void *wq_umem; /* WQ buffer registration info. */
--
1.8.3.1
^ permalink raw reply [flat|nested] 28+ messages in thread
* [dpdk-dev] [PATCH v1 09/18] net/mlx5: rearrange the creation of WQ and CQ object
2020-09-03 10:13 [dpdk-dev] [PATCH v1 00/18] mlx5 Rx DevX/Verbs separation Michael Baum
` (7 preceding siblings ...)
2020-09-03 10:13 ` [dpdk-dev] [PATCH v1 08/18] net/mlx5: rearrange the creation of RQ and CQ resources Michael Baum
@ 2020-09-03 10:13 ` Michael Baum
2020-09-03 10:13 ` [dpdk-dev] [PATCH v1 10/18] net/mlx5: separate Rx queue object modification Michael Baum
` (10 subsequent siblings)
19 siblings, 0 replies; 28+ messages in thread
From: Michael Baum @ 2020-09-03 10:13 UTC (permalink / raw)
To: dev; +Cc: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko
Rearrangement of WQ and CQ creation for the Verbs Rx queue:
1. Rename the allocation functions.
2. Reduce the number of arguments that the creation functions receive;
they now derive the queue structures internally from the device and
queue index (see the container_of sketch below).
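As an aside, the argument reduction relies on recovering the queue control
structure from the embedded data structure. Below is a minimal,
self-contained illustration of that container_of pattern; the structure
names are made up for the example and are not the mlx5 definitions.

#include <stddef.h>
#include <stdio.h>

#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

struct rxq_data { unsigned int elts_n; };
struct rxq_ctrl { int socket; struct rxq_data rxq; }; /* rxq embedded */

int main(void)
{
	struct rxq_ctrl ctrl = { .socket = 0, .rxq = { .elts_n = 10 } };
	struct rxq_data *data = &ctrl.rxq;
	/* Recover the control structure from the data pointer alone. */
	struct rxq_ctrl *back = container_of(data, struct rxq_ctrl, rxq);

	printf("%s\n", back == &ctrl ? "same object" : "bug");
	return 0;
}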
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
drivers/net/mlx5/linux/mlx5_verbs.c | 52 ++++++++++++++-----------------------
drivers/net/mlx5/mlx5_devx.c | 22 ----------------
drivers/net/mlx5/mlx5_rxq.c | 22 ++++++++++++++++
drivers/net/mlx5/mlx5_rxtx.h | 1 +
4 files changed, 43 insertions(+), 54 deletions(-)
diff --git a/drivers/net/mlx5/linux/mlx5_verbs.c b/drivers/net/mlx5/linux/mlx5_verbs.c
index 16e5900..d9cf911 100644
--- a/drivers/net/mlx5/linux/mlx5_verbs.c
+++ b/drivers/net/mlx5/linux/mlx5_verbs.c
@@ -96,23 +96,21 @@
*
* @param dev
* Pointer to Ethernet device.
- * @param priv
- * Pointer to device private data.
- * @param rxq_data
- * Pointer to Rx queue data.
- * @param cqe_n
- * Number of CQEs in CQ.
- * @param rxq_obj
- * Pointer to Rx queue object data.
+ * @param idx
+ * Queue index in DPDK Rx queue array.
*
* @return
- * The Verbs object initialized, NULL otherwise and rte_errno is set.
+ * The Verbs CQ object initialized, NULL otherwise and rte_errno is set.
*/
static struct ibv_cq *
-mlx5_ibv_cq_new(struct rte_eth_dev *dev, struct mlx5_priv *priv,
- struct mlx5_rxq_data *rxq_data,
- unsigned int cqe_n, struct mlx5_rxq_obj *rxq_obj)
+mlx5_rxq_ibv_cq_create(struct rte_eth_dev *dev, uint16_t idx)
{
+ struct mlx5_priv *priv = dev->data->dev_private;
+ struct mlx5_rxq_data *rxq_data = (*priv->rxqs)[idx];
+ struct mlx5_rxq_ctrl *rxq_ctrl =
+ container_of(rxq_data, struct mlx5_rxq_ctrl, rxq);
+ struct mlx5_rxq_obj *rxq_obj = rxq_ctrl->obj;
+ unsigned int cqe_n = mlx5_rxq_cqe_num(rxq_data);
struct {
struct ibv_cq_init_attr_ex ibv;
struct mlx5dv_cq_init_attr mlx5;
@@ -165,25 +163,21 @@
*
* @param dev
* Pointer to Ethernet device.
- * @param priv
- * Pointer to device private data.
- * @param rxq_data
- * Pointer to Rx queue data.
* @param idx
* Queue index in DPDK Rx queue array.
- * @param wqe_n
- * Number of WQEs in WQ.
- * @param rxq_obj
- * Pointer to Rx queue object data.
*
* @return
- * The Verbs object initialized, NULL otherwise and rte_errno is set.
+ * The Verbs WQ object initialized, NULL otherwise and rte_errno is set.
*/
static struct ibv_wq *
-mlx5_ibv_wq_new(struct rte_eth_dev *dev, struct mlx5_priv *priv,
- struct mlx5_rxq_data *rxq_data, uint16_t idx,
- unsigned int wqe_n, struct mlx5_rxq_obj *rxq_obj)
+mlx5_rxq_ibv_wq_create(struct rte_eth_dev *dev, uint16_t idx)
{
+ struct mlx5_priv *priv = dev->data->dev_private;
+ struct mlx5_rxq_data *rxq_data = (*priv->rxqs)[idx];
+ struct mlx5_rxq_ctrl *rxq_ctrl =
+ container_of(rxq_data, struct mlx5_rxq_ctrl, rxq);
+ struct mlx5_rxq_obj *rxq_obj = rxq_ctrl->obj;
+ unsigned int wqe_n = 1 << rxq_data->elts_n;
struct {
struct ibv_wq_init_attr ibv;
#ifdef HAVE_IBV_DEVICE_STRIDING_RQ_SUPPORT
@@ -279,8 +273,6 @@
struct mlx5_rxq_ctrl *rxq_ctrl =
container_of(rxq_data, struct mlx5_rxq_ctrl, rxq);
struct ibv_wq_attr mod;
- unsigned int cqe_n;
- unsigned int wqe_n = 1 << rxq_data->elts_n;
struct mlx5_rxq_obj *tmpl = rxq_ctrl->obj;
struct mlx5dv_cq cq_info;
struct mlx5dv_rwq rwq;
@@ -304,12 +296,8 @@
}
tmpl->fd = ((struct ibv_comp_channel *)(tmpl->ibv_channel))->fd;
}
- if (mlx5_rxq_mprq_enabled(rxq_data))
- cqe_n = wqe_n * (1 << rxq_data->strd_num_n) - 1;
- else
- cqe_n = wqe_n - 1;
/* Create CQ using Verbs API. */
- tmpl->ibv_cq = mlx5_ibv_cq_new(dev, priv, rxq_data, cqe_n, tmpl);
+ tmpl->ibv_cq = mlx5_rxq_ibv_cq_create(dev, idx);
if (!tmpl->ibv_cq) {
DRV_LOG(ERR, "Port %u Rx queue %u CQ creation failure.",
dev->data->port_id, idx);
@@ -338,7 +326,7 @@
rxq_data->cq_uar = cq_info.cq_uar;
rxq_data->cqn = cq_info.cqn;
/* Create WQ (RQ) using Verbs API. */
- tmpl->wq = mlx5_ibv_wq_new(dev, priv, rxq_data, idx, wqe_n, tmpl);
+ tmpl->wq = mlx5_rxq_ibv_wq_create(dev, idx);
if (!tmpl->wq) {
DRV_LOG(ERR, "Port %u Rx queue %u WQ creation failure.",
dev->data->port_id, idx);
diff --git a/drivers/net/mlx5/mlx5_devx.c b/drivers/net/mlx5/mlx5_devx.c
index 5a3ac49..8bbc664 100644
--- a/drivers/net/mlx5/mlx5_devx.c
+++ b/drivers/net/mlx5/mlx5_devx.c
@@ -24,28 +24,6 @@
/**
- * Calculate the number of CQEs in CQ for the Rx queue.
- *
- * @param rxq_data
- * Pointer to receive queue structure.
- *
- * @return
- * Number of CQEs in CQ.
- */
-static unsigned int
-mlx5_rxq_cqe_num(struct mlx5_rxq_data *rxq_data)
-{
- unsigned int cqe_n;
- unsigned int wqe_n = 1 << rxq_data->elts_n;
-
- if (mlx5_rxq_mprq_enabled(rxq_data))
- cqe_n = wqe_n * (1 << rxq_data->strd_num_n) - 1;
- else
- cqe_n = wqe_n - 1;
- return cqe_n;
-}
-
-/**
* Modify RQ vlan stripping offload
*
* @param rxq_obj
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index 00ef230..3115f5a 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -123,6 +123,28 @@
}
/**
+ * Calculate the number of CQEs in CQ for the Rx queue.
+ *
+ * @param rxq_data
+ * Pointer to receive queue structure.
+ *
+ * @return
+ * Number of CQEs in CQ.
+ */
+unsigned int
+mlx5_rxq_cqe_num(struct mlx5_rxq_data *rxq_data)
+{
+ unsigned int cqe_n;
+ unsigned int wqe_n = 1 << rxq_data->elts_n;
+
+ if (mlx5_rxq_mprq_enabled(rxq_data))
+ cqe_n = wqe_n * (1 << rxq_data->strd_num_n) - 1;
+ else
+ cqe_n = wqe_n - 1;
+ return cqe_n;
+}
+
+/**
* Allocate RX queue elements for Multi-Packet RQ.
*
* @param rxq_ctrl
diff --git a/drivers/net/mlx5/mlx5_rxtx.h b/drivers/net/mlx5/mlx5_rxtx.h
index 6d135dd..75eedff 100644
--- a/drivers/net/mlx5/mlx5_rxtx.h
+++ b/drivers/net/mlx5/mlx5_rxtx.h
@@ -371,6 +371,7 @@ struct mlx5_txq_ctrl {
int mlx5_check_mprq_support(struct rte_eth_dev *dev);
int mlx5_rxq_mprq_enabled(struct mlx5_rxq_data *rxq);
int mlx5_mprq_enabled(struct rte_eth_dev *dev);
+unsigned int mlx5_rxq_cqe_num(struct mlx5_rxq_data *rxq_data);
int mlx5_mprq_free_mp(struct rte_eth_dev *dev);
int mlx5_mprq_alloc_mp(struct rte_eth_dev *dev);
int mlx5_rx_queue_start(struct rte_eth_dev *dev, uint16_t queue_id);
--
1.8.3.1
^ permalink raw reply [flat|nested] 28+ messages in thread
* [dpdk-dev] [PATCH v1 10/18] net/mlx5: separate Rx queue object modification
2020-09-03 10:13 [dpdk-dev] [PATCH v1 00/18] mlx5 Rx DevX/Verbs separation Michael Baum
` (8 preceding siblings ...)
2020-09-03 10:13 ` [dpdk-dev] [PATCH v1 09/18] net/mlx5: rearrange the creation of WQ and CQ object Michael Baum
@ 2020-09-03 10:13 ` Michael Baum
2020-09-03 10:13 ` [dpdk-dev] [PATCH v1 11/18] net/mlx5: share " Michael Baum
` (9 subsequent siblings)
19 siblings, 0 replies; 28+ messages in thread
From: Michael Baum @ 2020-09-03 10:13 UTC (permalink / raw)
To: dev; +Cc: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko
Separate the Rx queue object modification into the Verbs and DevX modules,
exposed through a new modify callback in the operations structure (a
generic dispatch sketch follows).
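Purely as an illustration of the dispatch style, and not the driver's
actual definitions, here is a self-contained sketch of an operations table
with a modify callback taking an is_start flag; the names and state
strings are placeholders.

#include <stdbool.h>
#include <stdio.h>

struct rxq_obj; /* opaque per-backend queue object */

struct obj_ops {
	int (*rxq_obj_modify)(struct rxq_obj *rxq_obj, bool is_start);
};

static int
verbs_modify(struct rxq_obj *rxq_obj, bool is_start)
{
	(void)rxq_obj;
	printf("Verbs: WQ state -> %s\n", is_start ? "RDY" : "RESET");
	return 0;
}

static int
devx_modify(struct rxq_obj *rxq_obj, bool is_start)
{
	(void)rxq_obj;
	printf("DevX: RQ state -> %s\n", is_start ? "RDY" : "RST");
	return 0;
}

int main(void)
{
	const struct obj_ops ibv_ops  = { .rxq_obj_modify = verbs_modify };
	const struct obj_ops devx_ops = { .rxq_obj_modify = devx_modify };
	/* The backend selected at probe time decides which table is used;
	 * callers no longer branch on the object type. */
	const struct obj_ops *ops = &devx_ops;

	ops->rxq_obj_modify(NULL, true);   /* queue start */
	ops->rxq_obj_modify(NULL, false);  /* queue stop */
	ops = &ibv_ops;
	ops->rxq_obj_modify(NULL, true);
	return 0;
}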
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
drivers/net/mlx5/linux/mlx5_verbs.c | 22 ++++++++++++++++++++++
drivers/net/mlx5/mlx5.h | 1 +
drivers/net/mlx5/mlx5_devx.c | 27 +++++++++++++++++++++++++++
drivers/net/mlx5/mlx5_rxq.c | 32 ++------------------------------
4 files changed, 52 insertions(+), 30 deletions(-)
diff --git a/drivers/net/mlx5/linux/mlx5_verbs.c b/drivers/net/mlx5/linux/mlx5_verbs.c
index d9cf911..7d623c8 100644
--- a/drivers/net/mlx5/linux/mlx5_verbs.c
+++ b/drivers/net/mlx5/linux/mlx5_verbs.c
@@ -4,6 +4,7 @@
#include <stddef.h>
#include <errno.h>
+#include <stdbool.h>
#include <string.h>
#include <stdint.h>
#include <unistd.h>
@@ -423,9 +424,30 @@
return -rte_errno;
}
+/**
+ * Modifies the attributes for the specified WQ.
+ *
+ * @param rxq_obj
+ * Verbs Rx queue object.
+ *
+ * @return
+ * 0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+static int
+mlx5_ibv_modify_wq(struct mlx5_rxq_obj *rxq_obj, bool is_start)
+{
+ struct ibv_wq_attr mod = {
+ .attr_mask = IBV_WQ_ATTR_STATE,
+ .wq_state = is_start ? IBV_WQS_RDY : IBV_WQS_RESET,
+ };
+
+ return mlx5_glue->modify_wq(rxq_obj->wq, &mod);
+}
+
struct mlx5_obj_ops ibv_obj_ops = {
.rxq_obj_modify_vlan_strip = mlx5_rxq_obj_modify_wq_vlan_strip,
.rxq_obj_new = mlx5_rxq_ibv_obj_new,
.rxq_event_get = mlx5_rx_ibv_get_event,
+ .rxq_obj_modify = mlx5_ibv_modify_wq,
.rxq_obj_release = mlx5_rxq_ibv_obj_release,
};
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 5131a47..a51c88f 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -709,6 +709,7 @@ struct mlx5_obj_ops {
int (*rxq_obj_modify_vlan_strip)(struct mlx5_rxq_obj *rxq_obj, int on);
int (*rxq_obj_new)(struct rte_eth_dev *dev, uint16_t idx);
int (*rxq_event_get)(struct mlx5_rxq_obj *rxq_obj);
+ int (*rxq_obj_modify)(struct mlx5_rxq_obj *rxq_obj, bool is_start);
void (*rxq_obj_release)(struct mlx5_rxq_obj *rxq_obj);
};
diff --git a/drivers/net/mlx5/mlx5_devx.c b/drivers/net/mlx5/mlx5_devx.c
index 8bbc664..e577e38 100644
--- a/drivers/net/mlx5/mlx5_devx.c
+++ b/drivers/net/mlx5/mlx5_devx.c
@@ -4,6 +4,7 @@
#include <stddef.h>
#include <errno.h>
+#include <stdbool.h>
#include <string.h>
#include <stdint.h>
#include <sys/queue.h>
@@ -143,6 +144,31 @@
}
/**
+ * Modify RQ using DevX API.
+ *
+ * @param rxq_obj
+ * DevX Rx queue object.
+ *
+ * @return
+ * 0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+static int
+mlx5_devx_modify_rq(struct mlx5_rxq_obj *rxq_obj, bool is_start)
+{
+ struct mlx5_devx_modify_rq_attr rq_attr;
+
+ memset(&rq_attr, 0, sizeof(rq_attr));
+ if (is_start) {
+ rq_attr.rq_state = MLX5_RQC_STATE_RST;
+ rq_attr.state = MLX5_RQC_STATE_RDY;
+ } else {
+ rq_attr.rq_state = MLX5_RQC_STATE_RDY;
+ rq_attr.state = MLX5_RQC_STATE_RST;
+ }
+ return mlx5_devx_cmd_modify_rq(rxq_obj->rq, &rq_attr);
+}
+
+/**
* Get event for an Rx DevX queue object.
*
* @param rxq_obj
@@ -605,5 +631,6 @@ struct mlx5_obj_ops devx_obj_ops = {
.rxq_obj_modify_vlan_strip = mlx5_rxq_obj_modify_rq_vlan_strip,
.rxq_obj_new = mlx5_rxq_devx_obj_new,
.rxq_event_get = mlx5_rx_devx_get_event,
+ .rxq_obj_modify = mlx5_devx_modify_rq,
.rxq_obj_release = mlx5_rxq_devx_obj_release,
};
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index 3115f5a..c18610d 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -516,21 +516,7 @@
int ret;
MLX5_ASSERT(rte_eal_process_type() == RTE_PROC_PRIMARY);
- if (rxq_ctrl->obj->type == MLX5_RXQ_OBJ_TYPE_IBV) {
- struct ibv_wq_attr mod = {
- .attr_mask = IBV_WQ_ATTR_STATE,
- .wq_state = IBV_WQS_RESET,
- };
-
- ret = mlx5_glue->modify_wq(rxq_ctrl->obj->wq, &mod);
- } else { /* rxq_ctrl->obj->type == MLX5_RXQ_OBJ_TYPE_DEVX_RQ. */
- struct mlx5_devx_modify_rq_attr rq_attr;
-
- memset(&rq_attr, 0, sizeof(rq_attr));
- rq_attr.rq_state = MLX5_RQC_STATE_RDY;
- rq_attr.state = MLX5_RQC_STATE_RST;
- ret = mlx5_devx_cmd_modify_rq(rxq_ctrl->obj->rq, &rq_attr);
- }
+ ret = priv->obj_ops->rxq_obj_modify(rxq_ctrl->obj, false);
if (ret) {
DRV_LOG(ERR, "Cannot change Rx WQ state to RESET: %s",
strerror(errno));
@@ -629,21 +615,7 @@
/* Reset RQ consumer before moving queue to READY state. */
*rxq->rq_db = rte_cpu_to_be_32(0);
rte_cio_wmb();
- if (rxq_ctrl->obj->type == MLX5_RXQ_OBJ_TYPE_IBV) {
- struct ibv_wq_attr mod = {
- .attr_mask = IBV_WQ_ATTR_STATE,
- .wq_state = IBV_WQS_RDY,
- };
-
- ret = mlx5_glue->modify_wq(rxq_ctrl->obj->wq, &mod);
- } else { /* rxq_ctrl->obj->type == MLX5_RXQ_OBJ_TYPE_DEVX_RQ. */
- struct mlx5_devx_modify_rq_attr rq_attr;
-
- memset(&rq_attr, 0, sizeof(rq_attr));
- rq_attr.rq_state = MLX5_RQC_STATE_RST;
- rq_attr.state = MLX5_RQC_STATE_RDY;
- ret = mlx5_devx_cmd_modify_rq(rxq_ctrl->obj->rq, &rq_attr);
- }
+ ret = priv->obj_ops->rxq_obj_modify(rxq_ctrl->obj, true);
if (ret) {
DRV_LOG(ERR, "Cannot change Rx WQ state to READY: %s",
strerror(errno));
--
1.8.3.1
^ permalink raw reply [flat|nested] 28+ messages in thread
* [dpdk-dev] [PATCH v1 11/18] net/mlx5: share Rx queue object modification
2020-09-03 10:13 [dpdk-dev] [PATCH v1 00/18] mlx5 Rx DevX/Verbs separation Michael Baum
` (9 preceding siblings ...)
2020-09-03 10:13 ` [dpdk-dev] [PATCH v1 10/18] net/mlx5: separate Rx queue object modification Michael Baum
@ 2020-09-03 10:13 ` Michael Baum
2020-09-03 10:13 ` [dpdk-dev] [PATCH v1 12/18] net/mlx5: separate Rx indirection table object creation Michael Baum
` (8 subsequent siblings)
19 siblings, 0 replies; 28+ messages in thread
From: Michael Baum @ 2020-09-03 10:13 UTC (permalink / raw)
To: dev; +Cc: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko
Use the new modify functions for Rx queue object creation and release in
the DevX and Verbs modules.
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
drivers/net/mlx5/linux/mlx5_verbs.c | 48 +++++++++++------------
drivers/net/mlx5/mlx5_devx.c | 76 ++++++++++++++-----------------------
2 files changed, 50 insertions(+), 74 deletions(-)
diff --git a/drivers/net/mlx5/linux/mlx5_verbs.c b/drivers/net/mlx5/linux/mlx5_verbs.c
index 7d623c8..5eb556e 100644
--- a/drivers/net/mlx5/linux/mlx5_verbs.c
+++ b/drivers/net/mlx5/linux/mlx5_verbs.c
@@ -89,6 +89,27 @@
.flags_mask = IBV_WQ_FLAGS_CVLAN_STRIPPING,
.flags = vlan_offloads,
};
+
+ return mlx5_glue->modify_wq(rxq_obj->wq, &mod);
+}
+
+/**
+ * Modifies the attributes for the specified WQ.
+ *
+ * @param rxq_obj
+ * Verbs Rx queue object.
+ *
+ * @return
+ * 0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+static int
+mlx5_ibv_modify_wq(struct mlx5_rxq_obj *rxq_obj, bool is_start)
+{
+ struct ibv_wq_attr mod = {
+ .attr_mask = IBV_WQ_ATTR_STATE,
+ .wq_state = is_start ? IBV_WQS_RDY : IBV_WQS_RESET,
+ };
+
return mlx5_glue->modify_wq(rxq_obj->wq, &mod);
}
@@ -273,7 +294,6 @@
struct mlx5_rxq_data *rxq_data = (*priv->rxqs)[idx];
struct mlx5_rxq_ctrl *rxq_ctrl =
container_of(rxq_data, struct mlx5_rxq_ctrl, rxq);
- struct ibv_wq_attr mod;
struct mlx5_rxq_obj *tmpl = rxq_ctrl->obj;
struct mlx5dv_cq cq_info;
struct mlx5dv_rwq rwq;
@@ -335,11 +355,7 @@
goto error;
}
/* Change queue state to ready. */
- mod = (struct ibv_wq_attr){
- .attr_mask = IBV_WQ_ATTR_STATE,
- .wq_state = IBV_WQS_RDY,
- };
- ret = mlx5_glue->modify_wq(tmpl->wq, &mod);
+ ret = mlx5_ibv_modify_wq(tmpl, true);
if (ret) {
DRV_LOG(ERR,
"Port %u Rx queue %u WQ state to IBV_WQS_RDY failed.",
@@ -424,26 +440,6 @@
return -rte_errno;
}
-/**
- * Modifies the attributes for the specified WQ.
- *
- * @param rxq_obj
- * Verbs Rx queue object.
- *
- * @return
- * 0 on success, a negative errno value otherwise and rte_errno is set.
- */
-static int
-mlx5_ibv_modify_wq(struct mlx5_rxq_obj *rxq_obj, bool is_start)
-{
- struct ibv_wq_attr mod = {
- .attr_mask = IBV_WQ_ATTR_STATE,
- .wq_state = is_start ? IBV_WQS_RDY : IBV_WQS_RESET,
- };
-
- return mlx5_glue->modify_wq(rxq_obj->wq, &mod);
-}
-
struct mlx5_obj_ops ibv_obj_ops = {
.rxq_obj_modify_vlan_strip = mlx5_rxq_obj_modify_wq_vlan_strip,
.rxq_obj_new = mlx5_rxq_ibv_obj_new,
diff --git a/drivers/net/mlx5/mlx5_devx.c b/drivers/net/mlx5/mlx5_devx.c
index e577e38..07922c2 100644
--- a/drivers/net/mlx5/mlx5_devx.c
+++ b/drivers/net/mlx5/mlx5_devx.c
@@ -47,6 +47,31 @@
}
/**
+ * Modify RQ using DevX API.
+ *
+ * @param rxq_obj
+ * DevX Rx queue object.
+ *
+ * @return
+ * 0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+static int
+mlx5_devx_modify_rq(struct mlx5_rxq_obj *rxq_obj, bool is_start)
+{
+ struct mlx5_devx_modify_rq_attr rq_attr;
+
+ memset(&rq_attr, 0, sizeof(rq_attr));
+ if (is_start) {
+ rq_attr.rq_state = MLX5_RQC_STATE_RST;
+ rq_attr.state = MLX5_RQC_STATE_RDY;
+ } else {
+ rq_attr.rq_state = MLX5_RQC_STATE_RDY;
+ rq_attr.state = MLX5_RQC_STATE_RST;
+ }
+ return mlx5_devx_cmd_modify_rq(rxq_obj->rq, &rq_attr);
+}
+
+/**
* Release the resources allocated for an RQ DevX object.
*
* @param rxq_ctrl
@@ -101,24 +126,6 @@
}
/**
- * Release an Rx hairpin related resources.
- *
- * @param rxq_obj
- * Hairpin Rx queue object.
- */
-static void
-mlx5_rxq_obj_hairpin_release(struct mlx5_rxq_obj *rxq_obj)
-{
- struct mlx5_devx_modify_rq_attr rq_attr = { 0 };
-
- MLX5_ASSERT(rxq_obj);
- rq_attr.state = MLX5_RQC_STATE_RST;
- rq_attr.rq_state = MLX5_RQC_STATE_RDY;
- mlx5_devx_cmd_modify_rq(rxq_obj->rq, &rq_attr);
- claim_zero(mlx5_devx_cmd_destroy(rxq_obj->rq));
-}
-
-/**
* Release an Rx DevX queue object.
*
* @param rxq_obj
@@ -130,7 +137,8 @@
MLX5_ASSERT(rxq_obj);
MLX5_ASSERT(rxq_obj->rq);
if (rxq_obj->type == MLX5_RXQ_OBJ_TYPE_DEVX_HAIRPIN) {
- mlx5_rxq_obj_hairpin_release(rxq_obj);
+ mlx5_devx_modify_rq(rxq_obj, false);
+ claim_zero(mlx5_devx_cmd_destroy(rxq_obj->rq));
} else {
MLX5_ASSERT(rxq_obj->devx_cq);
claim_zero(mlx5_devx_cmd_destroy(rxq_obj->rq));
@@ -144,31 +152,6 @@
}
/**
- * Modify RQ using DevX API.
- *
- * @param rxq_obj
- * DevX Rx queue object.
- *
- * @return
- * 0 on success, a negative errno value otherwise and rte_errno is set.
- */
-static int
-mlx5_devx_modify_rq(struct mlx5_rxq_obj *rxq_obj, bool is_start)
-{
- struct mlx5_devx_modify_rq_attr rq_attr;
-
- memset(&rq_attr, 0, sizeof(rq_attr));
- if (is_start) {
- rq_attr.rq_state = MLX5_RQC_STATE_RST;
- rq_attr.state = MLX5_RQC_STATE_RDY;
- } else {
- rq_attr.rq_state = MLX5_RQC_STATE_RDY;
- rq_attr.state = MLX5_RQC_STATE_RST;
- }
- return mlx5_devx_cmd_modify_rq(rxq_obj->rq, &rq_attr);
-}
-
-/**
* Get event for an Rx DevX queue object.
*
* @param rxq_obj
@@ -563,7 +546,6 @@
struct mlx5_rxq_ctrl *rxq_ctrl =
container_of(rxq_data, struct mlx5_rxq_ctrl, rxq);
struct mlx5_rxq_obj *tmpl = rxq_ctrl->obj;
- struct mlx5_devx_modify_rq_attr rq_attr = { 0 };
int ret = 0;
MLX5_ASSERT(rxq_data);
@@ -602,9 +584,7 @@
goto error;
}
/* Change queue state to ready. */
- rq_attr.rq_state = MLX5_RQC_STATE_RST;
- rq_attr.state = MLX5_RQC_STATE_RDY;
- ret = mlx5_devx_cmd_modify_rq(tmpl->rq, &rq_attr);
+ ret = mlx5_devx_modify_rq(tmpl, true);
if (ret)
goto error;
rxq_data->cq_arm_sn = 0;
--
1.8.3.1
^ permalink raw reply [flat|nested] 28+ messages in thread
* [dpdk-dev] [PATCH v1 12/18] net/mlx5: separate Rx indirection table object creation
2020-09-03 10:13 [dpdk-dev] [PATCH v1 00/18] mlx5 Rx DevX/Verbs separation Michael Baum
` (10 preceding siblings ...)
2020-09-03 10:13 ` [dpdk-dev] [PATCH v1 11/18] net/mlx5: share " Michael Baum
@ 2020-09-03 10:13 ` Michael Baum
2020-09-09 11:29 ` Ferruh Yigit
2020-09-03 10:13 ` [dpdk-dev] [PATCH v1 13/18] net/mlx5: separate Rx hash queue creation Michael Baum
` (7 subsequent siblings)
19 siblings, 1 reply; 28+ messages in thread
From: Michael Baum @ 2020-09-03 10:13 UTC (permalink / raw)
To: dev; +Cc: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko
Separate Rx indirection table object creation into both Verbs and DevX
modules.
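One detail both new backends share (visible in the hunks below) is that
the queue list is replicated to fill a power-of-two sized table before the
RWQ indirection table or DevX RQT is created. The following tiny,
self-contained illustration shows only that wrap-around fill; the table
size of 4 is an arbitrary choice for the example, not the driver's sizing
rule.

#include <stdio.h>

int main(void)
{
	const unsigned int queues[] = { 3, 5, 6 };
	const unsigned int queues_n = 3;
	const unsigned int tbl_n = 4;      /* power-of-two table size */
	unsigned int tbl[4];
	unsigned int i, j, k;

	for (i = 0; i != queues_n; ++i)
		tbl[i] = queues[i];
	k = i;                     /* retain i for the wrap-around below */
	for (j = 0; k != tbl_n; ++k, ++j)
		tbl[k] = tbl[j];   /* reuse entries from the beginning */
	for (i = 0; i != tbl_n; ++i)
		printf("%u ", tbl[i]);  /* prints: 3 5 6 3 */
	printf("\n");
	return 0;
}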
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
drivers/net/mlx5/linux/mlx5_verbs.c | 79 ++++++++++++++++++++++
drivers/net/mlx5/mlx5.h | 23 +++++++
drivers/net/mlx5/mlx5_devx.c | 89 ++++++++++++++++++++++++
drivers/net/mlx5/mlx5_flow_verbs.c | 8 +--
drivers/net/mlx5/mlx5_rxq.c | 131 ++----------------------------------
drivers/net/mlx5/mlx5_rxtx.h | 19 ------
6 files changed, 201 insertions(+), 148 deletions(-)
diff --git a/drivers/net/mlx5/linux/mlx5_verbs.c b/drivers/net/mlx5/linux/mlx5_verbs.c
index 5eb556e..d36d915 100644
--- a/drivers/net/mlx5/linux/mlx5_verbs.c
+++ b/drivers/net/mlx5/linux/mlx5_verbs.c
@@ -440,10 +440,89 @@
return -rte_errno;
}
+/**
+ * Create an indirection table.
+ *
+ * @param dev
+ * Pointer to Ethernet device.
+ * @param queues
+ * Queues entering in the indirection table.
+ * @param queues_n
+ * Number of queues in the array.
+ *
+ * @return
+ * The Verbs object initialized, NULL otherwise and rte_errno is set.
+ */
+static struct mlx5_ind_table_obj *
+mlx5_ibv_ind_table_obj_new(struct rte_eth_dev *dev, const uint16_t *queues,
+ uint32_t queues_n)
+{
+ struct mlx5_priv *priv = dev->data->dev_private;
+ struct mlx5_ind_table_obj *ind_tbl;
+ const unsigned int wq_n = rte_is_power_of_2(queues_n) ?
+ log2above(queues_n) :
+ log2above(priv->config.ind_table_max_size);
+ struct ibv_wq *wq[1 << wq_n];
+ unsigned int i = 0, j = 0, k = 0;
+
+ ind_tbl = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*ind_tbl) +
+ queues_n * sizeof(uint16_t), 0, SOCKET_ID_ANY);
+ if (!ind_tbl) {
+ rte_errno = ENOMEM;
+ return NULL;
+ }
+ ind_tbl->type = MLX5_IND_TBL_TYPE_IBV;
+ for (i = 0; i != queues_n; ++i) {
+ struct mlx5_rxq_ctrl *rxq = mlx5_rxq_get(dev, queues[i]);
+ if (!rxq)
+ goto error;
+ wq[i] = rxq->obj->wq;
+ ind_tbl->queues[i] = queues[i];
+ }
+ ind_tbl->queues_n = queues_n;
+ /* Finalise indirection table. */
+ k = i; /* Retain value of i for use in error case. */
+ for (j = 0; k != (unsigned int)(1 << wq_n); ++k, ++j)
+ wq[k] = wq[j];
+ ind_tbl->ind_table = mlx5_glue->create_rwq_ind_table(priv->sh->ctx,
+ &(struct ibv_rwq_ind_table_init_attr){
+ .log_ind_tbl_size = wq_n,
+ .ind_tbl = wq,
+ .comp_mask = 0,
+ });
+ if (!ind_tbl->ind_table) {
+ rte_errno = errno;
+ goto error;
+ }
+ rte_atomic32_inc(&ind_tbl->refcnt);
+ LIST_INSERT_HEAD(&priv->ind_tbls, ind_tbl, next);
+ return ind_tbl;
+error:
+ for (j = 0; j < i; j++)
+ mlx5_rxq_release(dev, ind_tbl->queues[j]);
+ mlx5_free(ind_tbl);
+ DEBUG("Port %u cannot create indirection table.", dev->data->port_id);
+ return NULL;
+}
+
+/**
+ * Destroys the specified Indirection Table.
+ *
+ * @param ind_table
+ * Indirection table to release.
+ */
+static void
+mlx5_ibv_ind_table_obj_destroy(struct mlx5_ind_table_obj *ind_tbl)
+{
+ claim_zero(mlx5_glue->destroy_rwq_ind_table(ind_tbl->ind_table));
+}
+
struct mlx5_obj_ops ibv_obj_ops = {
.rxq_obj_modify_vlan_strip = mlx5_rxq_obj_modify_wq_vlan_strip,
.rxq_obj_new = mlx5_rxq_ibv_obj_new,
.rxq_event_get = mlx5_rx_ibv_get_event,
.rxq_obj_modify = mlx5_ibv_modify_wq,
.rxq_obj_release = mlx5_rxq_ibv_obj_release,
+ .ind_table_obj_new = mlx5_ibv_ind_table_obj_new,
+ .ind_table_obj_destroy = mlx5_ibv_ind_table_obj_destroy,
};
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index a51c88f..c151e64 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -704,6 +704,25 @@ struct mlx5_rxq_obj {
};
};
+enum mlx5_ind_tbl_type {
+ MLX5_IND_TBL_TYPE_IBV,
+ MLX5_IND_TBL_TYPE_DEVX,
+};
+
+/* Indirection table. */
+struct mlx5_ind_table_obj {
+ LIST_ENTRY(mlx5_ind_table_obj) next; /* Pointer to the next element. */
+ rte_atomic32_t refcnt; /* Reference counter. */
+ enum mlx5_ind_tbl_type type;
+ RTE_STD_C11
+ union {
+ void *ind_table; /**< Indirection table. */
+ struct mlx5_devx_obj *rqt; /* DevX RQT object. */
+ };
+ uint32_t queues_n; /**< Number of queues in the list. */
+ uint16_t queues[]; /**< Queue list. */
+};
+
/* HW objects operations structure. */
struct mlx5_obj_ops {
int (*rxq_obj_modify_vlan_strip)(struct mlx5_rxq_obj *rxq_obj, int on);
@@ -711,6 +730,10 @@ struct mlx5_obj_ops {
int (*rxq_event_get)(struct mlx5_rxq_obj *rxq_obj);
int (*rxq_obj_modify)(struct mlx5_rxq_obj *rxq_obj, bool is_start);
void (*rxq_obj_release)(struct mlx5_rxq_obj *rxq_obj);
+ struct mlx5_ind_table_obj *(*ind_table_obj_new)(struct rte_eth_dev *dev,
+ const uint16_t *queues,
+ uint32_t queues_n);
+ void (*ind_table_obj_destroy)(struct mlx5_ind_table_obj *ind_tbl);
};
struct mlx5_priv {
diff --git a/drivers/net/mlx5/mlx5_devx.c b/drivers/net/mlx5/mlx5_devx.c
index 07922c2..aab5e50 100644
--- a/drivers/net/mlx5/mlx5_devx.c
+++ b/drivers/net/mlx5/mlx5_devx.c
@@ -22,6 +22,7 @@
#include "mlx5_rxtx.h"
#include "mlx5_utils.h"
#include "mlx5_devx.h"
+#include "mlx5_flow.h"
/**
@@ -607,10 +608,98 @@
return -rte_errno;
}
+/**
+ * Create an indirection table.
+ *
+ * @param dev
+ * Pointer to Ethernet device.
+ * @param queues
+ * Queues entering in the indirection table.
+ * @param queues_n
+ * Number of queues in the array.
+ *
+ * @return
+ * The DevX object initialized, NULL otherwise and rte_errno is set.
+ */
+static struct mlx5_ind_table_obj *
+mlx5_devx_ind_table_obj_new(struct rte_eth_dev *dev, const uint16_t *queues,
+ uint32_t queues_n)
+{
+ struct mlx5_priv *priv = dev->data->dev_private;
+ struct mlx5_ind_table_obj *ind_tbl;
+ struct mlx5_devx_rqt_attr *rqt_attr = NULL;
+ const unsigned int rqt_n = 1 << (rte_is_power_of_2(queues_n) ?
+ log2above(queues_n) :
+ log2above(priv->config.ind_table_max_size));
+ unsigned int i = 0, j = 0, k = 0;
+
+ ind_tbl = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*ind_tbl) +
+ queues_n * sizeof(uint16_t), 0, SOCKET_ID_ANY);
+ if (!ind_tbl) {
+ rte_errno = ENOMEM;
+ return NULL;
+ }
+ ind_tbl->type = MLX5_IND_TBL_TYPE_DEVX;
+ rqt_attr = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*rqt_attr) +
+ rqt_n * sizeof(uint32_t), 0, SOCKET_ID_ANY);
+ if (!rqt_attr) {
+ DRV_LOG(ERR, "Port %u cannot allocate RQT resources.",
+ dev->data->port_id);
+ rte_errno = ENOMEM;
+ goto error;
+ }
+ rqt_attr->rqt_max_size = priv->config.ind_table_max_size;
+ rqt_attr->rqt_actual_size = rqt_n;
+ for (i = 0; i != queues_n; ++i) {
+ struct mlx5_rxq_ctrl *rxq = mlx5_rxq_get(dev, queues[i]);
+ if (!rxq) {
+ mlx5_free(rqt_attr);
+ goto error;
+ }
+ rqt_attr->rq_list[i] = rxq->obj->rq->id;
+ ind_tbl->queues[i] = queues[i];
+ }
+ k = i; /* Retain value of i for use in error case. */
+ for (j = 0; k != rqt_n; ++k, ++j)
+ rqt_attr->rq_list[k] = rqt_attr->rq_list[j];
+ ind_tbl->rqt = mlx5_devx_cmd_create_rqt(priv->sh->ctx, rqt_attr);
+ mlx5_free(rqt_attr);
+ if (!ind_tbl->rqt) {
+ DRV_LOG(ERR, "Port %u cannot create DevX RQT.",
+ dev->data->port_id);
+ rte_errno = errno;
+ goto error;
+ }
+ ind_tbl->queues_n = queues_n;
+ rte_atomic32_inc(&ind_tbl->refcnt);
+ LIST_INSERT_HEAD(&priv->ind_tbls, ind_tbl, next);
+ return ind_tbl;
+error:
+ for (j = 0; j < i; j++)
+ mlx5_rxq_release(dev, ind_tbl->queues[j]);
+ mlx5_free(ind_tbl);
+ DEBUG("Port %u cannot create indirection table.", dev->data->port_id);
+ return NULL;
+}
+
+/**
+ * Destroy the DevX RQT object.
+ *
+ * @param ind_table
+ * Indirection table to release.
+ */
+static void
+mlx5_devx_ind_table_obj_destroy(struct mlx5_ind_table_obj *ind_tbl)
+{
+ claim_zero(mlx5_devx_cmd_destroy(ind_tbl->rqt));
+}
+
struct mlx5_obj_ops devx_obj_ops = {
.rxq_obj_modify_vlan_strip = mlx5_rxq_obj_modify_rq_vlan_strip,
.rxq_obj_new = mlx5_rxq_devx_obj_new,
.rxq_event_get = mlx5_rx_devx_get_event,
.rxq_obj_modify = mlx5_devx_modify_rq,
.rxq_obj_release = mlx5_rxq_devx_obj_release,
+ .ind_table_obj_new = mlx5_devx_ind_table_obj_new,
+ .ind_table_obj_destroy = mlx5_devx_ind_table_obj_destroy,
};
diff --git a/drivers/net/mlx5/mlx5_flow_verbs.c b/drivers/net/mlx5/mlx5_flow_verbs.c
index 334e19b..80c549a 100644
--- a/drivers/net/mlx5/mlx5_flow_verbs.c
+++ b/drivers/net/mlx5/mlx5_flow_verbs.c
@@ -1981,10 +1981,10 @@
MLX5_ASSERT(rss_desc->queue_num);
hrxq_idx = mlx5_hrxq_get(dev, rss_desc->key,
- MLX5_RSS_HASH_KEY_LEN,
- dev_flow->hash_fields,
- rss_desc->queue,
- rss_desc->queue_num);
+ MLX5_RSS_HASH_KEY_LEN,
+ dev_flow->hash_fields,
+ rss_desc->queue,
+ rss_desc->queue_num);
if (!hrxq_idx)
hrxq_idx = mlx5_hrxq_new(dev, rss_desc->key,
MLX5_RSS_HASH_KEY_LEN,
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index c18610d..aa39892 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -25,7 +25,6 @@
#include "mlx5_defs.h"
#include "mlx5.h"
-#include "mlx5_common_os.h"
#include "mlx5_rxtx.h"
#include "mlx5_utils.h"
#include "mlx5_autoconf.h"
@@ -1710,115 +1709,6 @@ enum mlx5_rxq_type
}
/**
- * Create an indirection table.
- *
- * @param dev
- * Pointer to Ethernet device.
- * @param queues
- * Queues entering in the indirection table.
- * @param queues_n
- * Number of queues in the array.
- *
- * @return
- * The Verbs/DevX object initialised, NULL otherwise and rte_errno is set.
- */
-static struct mlx5_ind_table_obj *
-mlx5_ind_table_obj_new(struct rte_eth_dev *dev, const uint16_t *queues,
- uint32_t queues_n, enum mlx5_ind_tbl_type type)
-{
- struct mlx5_priv *priv = dev->data->dev_private;
- struct mlx5_ind_table_obj *ind_tbl;
- unsigned int i = 0, j = 0, k = 0;
-
- ind_tbl = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*ind_tbl) +
- queues_n * sizeof(uint16_t), 0, SOCKET_ID_ANY);
- if (!ind_tbl) {
- rte_errno = ENOMEM;
- return NULL;
- }
- ind_tbl->type = type;
- if (ind_tbl->type == MLX5_IND_TBL_TYPE_IBV) {
- const unsigned int wq_n = rte_is_power_of_2(queues_n) ?
- log2above(queues_n) :
- log2above(priv->config.ind_table_max_size);
- struct ibv_wq *wq[1 << wq_n];
-
- for (i = 0; i != queues_n; ++i) {
- struct mlx5_rxq_ctrl *rxq = mlx5_rxq_get(dev,
- queues[i]);
- if (!rxq)
- goto error;
- wq[i] = rxq->obj->wq;
- ind_tbl->queues[i] = queues[i];
- }
- ind_tbl->queues_n = queues_n;
- /* Finalise indirection table. */
- k = i; /* Retain value of i for use in error case. */
- for (j = 0; k != (unsigned int)(1 << wq_n); ++k, ++j)
- wq[k] = wq[j];
- ind_tbl->ind_table = mlx5_glue->create_rwq_ind_table
- (priv->sh->ctx,
- &(struct ibv_rwq_ind_table_init_attr){
- .log_ind_tbl_size = wq_n,
- .ind_tbl = wq,
- .comp_mask = 0,
- });
- if (!ind_tbl->ind_table) {
- rte_errno = errno;
- goto error;
- }
- } else { /* ind_tbl->type == MLX5_IND_TBL_TYPE_DEVX */
- struct mlx5_devx_rqt_attr *rqt_attr = NULL;
- const unsigned int rqt_n =
- 1 << (rte_is_power_of_2(queues_n) ?
- log2above(queues_n) :
- log2above(priv->config.ind_table_max_size));
-
- rqt_attr = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*rqt_attr) +
- rqt_n * sizeof(uint32_t), 0,
- SOCKET_ID_ANY);
- if (!rqt_attr) {
- DRV_LOG(ERR, "port %u cannot allocate RQT resources",
- dev->data->port_id);
- rte_errno = ENOMEM;
- goto error;
- }
- rqt_attr->rqt_max_size = priv->config.ind_table_max_size;
- rqt_attr->rqt_actual_size = rqt_n;
- for (i = 0; i != queues_n; ++i) {
- struct mlx5_rxq_ctrl *rxq = mlx5_rxq_get(dev,
- queues[i]);
- if (!rxq)
- goto error;
- rqt_attr->rq_list[i] = rxq->obj->rq->id;
- ind_tbl->queues[i] = queues[i];
- }
- k = i; /* Retain value of i for use in error case. */
- for (j = 0; k != rqt_n; ++k, ++j)
- rqt_attr->rq_list[k] = rqt_attr->rq_list[j];
- ind_tbl->rqt = mlx5_devx_cmd_create_rqt(priv->sh->ctx,
- rqt_attr);
- mlx5_free(rqt_attr);
- if (!ind_tbl->rqt) {
- DRV_LOG(ERR, "port %u cannot create DevX RQT",
- dev->data->port_id);
- rte_errno = errno;
- goto error;
- }
- ind_tbl->queues_n = queues_n;
- }
- rte_atomic32_inc(&ind_tbl->refcnt);
- LIST_INSERT_HEAD(&priv->ind_tbls, ind_tbl, next);
- return ind_tbl;
-error:
- for (j = 0; j < i; j++)
- mlx5_rxq_release(dev, ind_tbl->queues[j]);
- mlx5_free(ind_tbl);
- DEBUG("port %u cannot create indirection table", dev->data->port_id);
- return NULL;
-}
-
-/**
* Get an indirection table.
*
* @param dev
@@ -1870,15 +1760,11 @@ enum mlx5_rxq_type
mlx5_ind_table_obj_release(struct rte_eth_dev *dev,
struct mlx5_ind_table_obj *ind_tbl)
{
+ struct mlx5_priv *priv = dev->data->dev_private;
unsigned int i;
- if (rte_atomic32_dec_and_test(&ind_tbl->refcnt)) {
- if (ind_tbl->type == MLX5_IND_TBL_TYPE_IBV)
- claim_zero(mlx5_glue->destroy_rwq_ind_table
- (ind_tbl->ind_table));
- else if (ind_tbl->type == MLX5_IND_TBL_TYPE_DEVX)
- claim_zero(mlx5_devx_cmd_destroy(ind_tbl->rqt));
- }
+ if (rte_atomic32_dec_and_test(&ind_tbl->refcnt))
+ priv->obj_ops->ind_table_obj_destroy(ind_tbl);
for (i = 0; i != ind_tbl->queues_n; ++i)
claim_nonzero(mlx5_rxq_release(dev, ind_tbl->queues[i]));
if (!rte_atomic32_read(&ind_tbl->refcnt)) {
@@ -1956,13 +1842,9 @@ enum mlx5_rxq_type
queues_n = hash_fields ? queues_n : 1;
ind_tbl = mlx5_ind_table_obj_get(dev, queues, queues_n);
- if (!ind_tbl) {
- enum mlx5_ind_tbl_type type;
-
- type = rxq_ctrl->obj->type == MLX5_RXQ_OBJ_TYPE_IBV ?
- MLX5_IND_TBL_TYPE_IBV : MLX5_IND_TBL_TYPE_DEVX;
- ind_tbl = mlx5_ind_table_obj_new(dev, queues, queues_n, type);
- }
+ if (!ind_tbl)
+ ind_tbl = priv->obj_ops->ind_table_obj_new(dev, queues,
+ queues_n);
if (!ind_tbl) {
rte_errno = ENOMEM;
return 0;
@@ -2062,7 +1944,6 @@ enum mlx5_rxq_type
struct mlx5_rx_hash_field_select *rx_hash_field_select =
&tir_attr.rx_hash_field_selector_outer;
#endif
-
/* 1 bit: 0: IPv4, 1: IPv6. */
rx_hash_field_select->l3_prot_type =
!!(hash_fields & MLX5_IPV6_IBV_RX_HASH);
diff --git a/drivers/net/mlx5/mlx5_rxtx.h b/drivers/net/mlx5/mlx5_rxtx.h
index 75eedff..7878c81 100644
--- a/drivers/net/mlx5/mlx5_rxtx.h
+++ b/drivers/net/mlx5/mlx5_rxtx.h
@@ -186,25 +186,6 @@ struct mlx5_rxq_ctrl {
struct rte_eth_hairpin_conf hairpin_conf; /* Hairpin configuration. */
};
-enum mlx5_ind_tbl_type {
- MLX5_IND_TBL_TYPE_IBV,
- MLX5_IND_TBL_TYPE_DEVX,
-};
-
-/* Indirection table. */
-struct mlx5_ind_table_obj {
- LIST_ENTRY(mlx5_ind_table_obj) next; /* Pointer to the next element. */
- rte_atomic32_t refcnt; /* Reference counter. */
- enum mlx5_ind_tbl_type type;
- RTE_STD_C11
- union {
- void *ind_table; /**< Indirection table. */
- struct mlx5_devx_obj *rqt; /* DevX RQT object. */
- };
- uint32_t queues_n; /**< Number of queues in the list. */
- uint16_t queues[]; /**< Queue list. */
-};
-
/* Hash Rx queue. */
struct mlx5_hrxq {
ILIST_ENTRY(uint32_t)next; /* Index to the next element. */
--
1.8.3.1
^ permalink raw reply [flat|nested] 28+ messages in thread
* Re: [dpdk-dev] [PATCH v1 12/18] net/mlx5: separate Rx indirection table object creation
2020-09-03 10:13 ` [dpdk-dev] [PATCH v1 12/18] net/mlx5: separate Rx indirection table object creation Michael Baum
@ 2020-09-09 11:29 ` Ferruh Yigit
2020-09-09 14:37 ` Matan Azrad
0 siblings, 1 reply; 28+ messages in thread
From: Ferruh Yigit @ 2020-09-09 11:29 UTC (permalink / raw)
To: Michael Baum, dev
Cc: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko,
Honnappa Nagarahalli
On 9/3/2020 11:13 AM, Michael Baum wrote:
> Separate Rx indirection table object creation into both Verbs and DevX
> modules.
>
> Signed-off-by: Michael Baum <michaelba@nvidia.com>
> Acked-by: Matan Azrad <matan@nvidia.com>
<...>
> + if (!ind_tbl->ind_table) {
> + rte_errno = errno;
> + goto error;
> + }
> + rte_atomic32_inc(&ind_tbl->refcnt);
>
We are switching to C11 atomics and there is a checkpatch warning to highlight this
[1]. Can you please update the rte_atomic... usage to __atomic_... usage?
There are multiple usages in other patches too.
[1]
http://mails.dpdk.org/archives/test-report/2020-September/150684.html
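For readers unfamiliar with the requested change, a minimal sketch of what
such a conversion typically looks like with the GCC/Clang __atomic
built-ins. The reference counter here is a made-up local variable, and
__ATOMIC_RELAXED is only an example ordering; the right memory order is a
per-call-site decision.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint32_t refcnt = 0; /* plain integer replaces rte_atomic32_t */

	/* rte_atomic32_inc(&refcnt) roughly becomes: */
	__atomic_fetch_add(&refcnt, 1, __ATOMIC_RELAXED);
	/* rte_atomic32_dec_and_test(&refcnt) roughly becomes: */
	if (__atomic_sub_fetch(&refcnt, 1, __ATOMIC_RELAXED) == 0)
		printf("last reference dropped\n");
	return 0;
}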
^ permalink raw reply [flat|nested] 28+ messages in thread
* Re: [dpdk-dev] [PATCH v1 12/18] net/mlx5: separate Rx indirection table object creation
2020-09-09 11:29 ` Ferruh Yigit
@ 2020-09-09 14:37 ` Matan Azrad
2020-09-09 16:28 ` Ferruh Yigit
0 siblings, 1 reply; 28+ messages in thread
From: Matan Azrad @ 2020-09-09 14:37 UTC (permalink / raw)
To: Ferruh Yigit, Michael Baum, dev
Cc: Raslan Darawsheh, Slava Ovsiienko, Honnappa Nagarahalli
Hi Ferruh
From: Ferruh Yigit
> On 9/3/2020 11:13 AM, Michael Baum wrote:
> > Separate Rx indirection table object creation into both Verbs and DevX
> > modules.
> >
> > Signed-off-by: Michael Baum <michaelba@nvidia.com>
> > Acked-by: Matan Azrad <matan@nvidia.com>
>
> <...>
>
> > + if (!ind_tbl->ind_table) {
> > + rte_errno = errno;
> > + goto error;
> > + }
> > + rte_atomic32_inc(&ind_tbl->refcnt);
> >
>
> We are switching to C11 atomics and there is a checkpatch warning to highlight this
> [1]. Can you please update the rte_atomic... usage to __atomic_... usage?
> There are multiple usages in other patches too.
>
Yes, we saw the warning.
This code doesn't add a new atomic, it just moves code from one location to another.
We have a plan to move all atomics to C11 atomics later in this release, as a different task.
> [1]
> http://mails.dpdk.org/archives/test-report/2020-September/150684.html
^ permalink raw reply [flat|nested] 28+ messages in thread
* Re: [dpdk-dev] [PATCH v1 12/18] net/mlx5: separate Rx indirection table object creation
2020-09-09 14:37 ` Matan Azrad
@ 2020-09-09 16:28 ` Ferruh Yigit
0 siblings, 0 replies; 28+ messages in thread
From: Ferruh Yigit @ 2020-09-09 16:28 UTC (permalink / raw)
To: Matan Azrad, Michael Baum, dev
Cc: Raslan Darawsheh, Slava Ovsiienko, Honnappa Nagarahalli
On 9/9/2020 3:37 PM, Matan Azrad wrote:
>
> Hi Ferruh
>
> From: Ferruh Yigit
>> On 9/3/2020 11:13 AM, Michael Baum wrote:
>>> Separate Rx indirection table object creation into both Verbs and DevX
>>> modules.
>>>
>>> Signed-off-by: Michael Baum <michaelba@nvidia.com>
>>> Acked-by: Matan Azrad <matan@nvidia.com>
>>
>> <...>
>>
>>> + if (!ind_tbl->ind_table) {
>>> + rte_errno = errno;
>>> + goto error;
>>> + }
>>> + rte_atomic32_inc(&ind_tbl->refcnt);
>>>
>>
>> We are switching to C11 atomics and there is a checkpatch warning to highlight this
>> [1]. Can you please update the rte_atomic... usage to __atomic_... usage?
>> There are multiple usages in other patches too.
>>
>
> Yes, we saw the warning.
> This code doesn't add a new atomic, it just moves code from one location to another.
> We have a plan to move all atomics to C11 atomics later in this release, as a different task.
I see, OK then.
>
>> [1]
>> http://mails.dpdk.org/archives/test-report/2020-September/150684.html
^ permalink raw reply [flat|nested] 28+ messages in thread
* [dpdk-dev] [PATCH v1 13/18] net/mlx5: separate Rx hash queue creation
2020-09-03 10:13 [dpdk-dev] [PATCH v1 00/18] mlx5 Rx DevX/Verbs separation Michael Baum
` (11 preceding siblings ...)
2020-09-03 10:13 ` [dpdk-dev] [PATCH v1 12/18] net/mlx5: separate Rx indirection table object creation Michael Baum
@ 2020-09-03 10:13 ` Michael Baum
2020-09-03 10:13 ` [dpdk-dev] [PATCH v1 14/18] net/mlx5: remove indirection table type field Michael Baum
` (6 subsequent siblings)
19 siblings, 0 replies; 28+ messages in thread
From: Michael Baum @ 2020-09-03 10:13 UTC (permalink / raw)
To: dev; +Cc: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko
Separate Rx hash queue creation into both Verbs and DevX modules.
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
drivers/net/mlx5/linux/mlx5_verbs.c | 152 ++++++++++++++++++++++
drivers/net/mlx5/mlx5.h | 23 ++++
drivers/net/mlx5/mlx5_devx.c | 155 +++++++++++++++++++++++
drivers/net/mlx5/mlx5_flow_dv.c | 14 +--
drivers/net/mlx5/mlx5_flow_verbs.c | 15 +--
drivers/net/mlx5/mlx5_rxq.c | 243 +-----------------------------------
drivers/net/mlx5/mlx5_rxtx.h | 28 +----
7 files changed, 353 insertions(+), 277 deletions(-)
diff --git a/drivers/net/mlx5/linux/mlx5_verbs.c b/drivers/net/mlx5/linux/mlx5_verbs.c
index d36d915..d92dd48 100644
--- a/drivers/net/mlx5/linux/mlx5_verbs.c
+++ b/drivers/net/mlx5/linux/mlx5_verbs.c
@@ -517,6 +517,156 @@
claim_zero(mlx5_glue->destroy_rwq_ind_table(ind_tbl->ind_table));
}
+/**
+ * Create an Rx Hash queue.
+ *
+ * @param dev
+ * Pointer to Ethernet device.
+ * @param rss_key
+ * RSS key for the Rx hash queue.
+ * @param rss_key_len
+ * RSS key length.
+ * @param hash_fields
+ * Verbs protocol hash field to make the RSS on.
+ * @param queues
+ * Queues entering in hash queue. In case of empty hash_fields only the
+ * first queue index will be taken for the indirection table.
+ * @param queues_n
+ * Number of queues.
+ * @param tunnel
+ * Tunnel type.
+ *
+ * @return
+ * The Verbs object initialized index, 0 otherwise and rte_errno is set.
+ */
+static uint32_t
+mlx5_ibv_hrxq_new(struct rte_eth_dev *dev,
+ const uint8_t *rss_key, uint32_t rss_key_len,
+ uint64_t hash_fields,
+ const uint16_t *queues, uint32_t queues_n,
+ int tunnel __rte_unused)
+{
+ struct mlx5_priv *priv = dev->data->dev_private;
+ struct mlx5_hrxq *hrxq = NULL;
+ uint32_t hrxq_idx = 0;
+ struct ibv_qp *qp = NULL;
+ struct mlx5_ind_table_obj *ind_tbl;
+ int err;
+
+ queues_n = hash_fields ? queues_n : 1;
+ ind_tbl = mlx5_ind_table_obj_get(dev, queues, queues_n);
+ if (!ind_tbl)
+ ind_tbl = priv->obj_ops->ind_table_obj_new(dev, queues,
+ queues_n);
+ if (!ind_tbl) {
+ rte_errno = ENOMEM;
+ return 0;
+ }
+#ifdef HAVE_IBV_DEVICE_TUNNEL_SUPPORT
+ struct mlx5dv_qp_init_attr qp_init_attr;
+
+ memset(&qp_init_attr, 0, sizeof(qp_init_attr));
+ if (tunnel) {
+ qp_init_attr.comp_mask =
+ MLX5DV_QP_INIT_ATTR_MASK_QP_CREATE_FLAGS;
+ qp_init_attr.create_flags = MLX5DV_QP_CREATE_TUNNEL_OFFLOADS;
+ }
+#ifdef HAVE_IBV_FLOW_DV_SUPPORT
+ if (dev->data->dev_conf.lpbk_mode) {
+ /* Allow packet sent from NIC loop back w/o source MAC check. */
+ qp_init_attr.comp_mask |=
+ MLX5DV_QP_INIT_ATTR_MASK_QP_CREATE_FLAGS;
+ qp_init_attr.create_flags |=
+ MLX5DV_QP_CREATE_TIR_ALLOW_SELF_LOOPBACK_UC;
+ }
+#endif
+ qp = mlx5_glue->dv_create_qp
+ (priv->sh->ctx,
+ &(struct ibv_qp_init_attr_ex){
+ .qp_type = IBV_QPT_RAW_PACKET,
+ .comp_mask =
+ IBV_QP_INIT_ATTR_PD |
+ IBV_QP_INIT_ATTR_IND_TABLE |
+ IBV_QP_INIT_ATTR_RX_HASH,
+ .rx_hash_conf = (struct ibv_rx_hash_conf){
+ .rx_hash_function =
+ IBV_RX_HASH_FUNC_TOEPLITZ,
+ .rx_hash_key_len = rss_key_len,
+ .rx_hash_key =
+ (void *)(uintptr_t)rss_key,
+ .rx_hash_fields_mask = hash_fields,
+ },
+ .rwq_ind_tbl = ind_tbl->ind_table,
+ .pd = priv->sh->pd,
+ },
+ &qp_init_attr);
+#else
+ qp = mlx5_glue->create_qp_ex
+ (priv->sh->ctx,
+ &(struct ibv_qp_init_attr_ex){
+ .qp_type = IBV_QPT_RAW_PACKET,
+ .comp_mask =
+ IBV_QP_INIT_ATTR_PD |
+ IBV_QP_INIT_ATTR_IND_TABLE |
+ IBV_QP_INIT_ATTR_RX_HASH,
+ .rx_hash_conf = (struct ibv_rx_hash_conf){
+ .rx_hash_function =
+ IBV_RX_HASH_FUNC_TOEPLITZ,
+ .rx_hash_key_len = rss_key_len,
+ .rx_hash_key =
+ (void *)(uintptr_t)rss_key,
+ .rx_hash_fields_mask = hash_fields,
+ },
+ .rwq_ind_tbl = ind_tbl->ind_table,
+ .pd = priv->sh->pd,
+ });
+#endif
+ if (!qp) {
+ rte_errno = errno;
+ goto error;
+ }
+ hrxq = mlx5_ipool_zmalloc(priv->sh->ipool[MLX5_IPOOL_HRXQ], &hrxq_idx);
+ if (!hrxq)
+ goto error;
+ hrxq->ind_table = ind_tbl;
+ hrxq->qp = qp;
+#ifdef HAVE_IBV_FLOW_DV_SUPPORT
+ hrxq->action = mlx5_glue->dv_create_flow_action_dest_ibv_qp(hrxq->qp);
+ if (!hrxq->action) {
+ rte_errno = errno;
+ goto error;
+ }
+#endif
+ hrxq->rss_key_len = rss_key_len;
+ hrxq->hash_fields = hash_fields;
+ memcpy(hrxq->rss_key, rss_key, rss_key_len);
+ rte_atomic32_inc(&hrxq->refcnt);
+ ILIST_INSERT(priv->sh->ipool[MLX5_IPOOL_HRXQ], &priv->hrxqs, hrxq_idx,
+ hrxq, next);
+ return hrxq_idx;
+error:
+ err = rte_errno; /* Save rte_errno before cleanup. */
+ mlx5_ind_table_obj_release(dev, ind_tbl);
+ if (qp)
+ claim_zero(mlx5_glue->destroy_qp(qp));
+ if (hrxq)
+ mlx5_ipool_free(priv->sh->ipool[MLX5_IPOOL_HRXQ], hrxq_idx);
+ rte_errno = err; /* Restore rte_errno. */
+ return 0;
+}
+
+/**
+ * Destroy a Verbs queue pair.
+ *
+ * @param hrxq
+ * Hash Rx queue to release its qp.
+ */
+static void
+mlx5_ibv_qp_destroy(struct mlx5_hrxq *hrxq)
+{
+ claim_zero(mlx5_glue->destroy_qp(hrxq->qp));
+}
+
struct mlx5_obj_ops ibv_obj_ops = {
.rxq_obj_modify_vlan_strip = mlx5_rxq_obj_modify_wq_vlan_strip,
.rxq_obj_new = mlx5_rxq_ibv_obj_new,
@@ -525,4 +675,6 @@ struct mlx5_obj_ops ibv_obj_ops = {
.rxq_obj_release = mlx5_rxq_ibv_obj_release,
.ind_table_obj_new = mlx5_ibv_ind_table_obj_new,
.ind_table_obj_destroy = mlx5_ibv_ind_table_obj_destroy,
+ .hrxq_new = mlx5_ibv_hrxq_new,
+ .hrxq_destroy = mlx5_ibv_qp_destroy,
};
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index c151e64..9fc4639 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -723,6 +723,24 @@ struct mlx5_ind_table_obj {
uint16_t queues[]; /**< Queue list. */
};
+/* Hash Rx queue. */
+struct mlx5_hrxq {
+ ILIST_ENTRY(uint32_t)next; /* Index to the next element. */
+ rte_atomic32_t refcnt; /* Reference counter. */
+ struct mlx5_ind_table_obj *ind_table; /* Indirection table. */
+ RTE_STD_C11
+ union {
+ void *qp; /* Verbs queue pair. */
+ struct mlx5_devx_obj *tir; /* DevX TIR object. */
+ };
+#ifdef HAVE_IBV_FLOW_DV_SUPPORT
+ void *action; /* DV QP action pointer. */
+#endif
+ uint64_t hash_fields; /* Verbs Hash fields. */
+ uint32_t rss_key_len; /* Hash key length in bytes. */
+ uint8_t rss_key[]; /* Hash key. */
+};
+
/* HW objects operations structure. */
struct mlx5_obj_ops {
int (*rxq_obj_modify_vlan_strip)(struct mlx5_rxq_obj *rxq_obj, int on);
@@ -734,6 +752,11 @@ struct mlx5_obj_ops {
const uint16_t *queues,
uint32_t queues_n);
void (*ind_table_obj_destroy)(struct mlx5_ind_table_obj *ind_tbl);
+ uint32_t (*hrxq_new)(struct rte_eth_dev *dev, const uint8_t *rss_key,
+ uint32_t rss_key_len, uint64_t hash_fields,
+ const uint16_t *queues, uint32_t queues_n,
+ int tunnel __rte_unused);
+ void (*hrxq_destroy)(struct mlx5_hrxq *hrxq);
};
struct mlx5_priv {
diff --git a/drivers/net/mlx5/mlx5_devx.c b/drivers/net/mlx5/mlx5_devx.c
index aab5e50..b1b3037 100644
--- a/drivers/net/mlx5/mlx5_devx.c
+++ b/drivers/net/mlx5/mlx5_devx.c
@@ -694,6 +694,159 @@
claim_zero(mlx5_devx_cmd_destroy(ind_tbl->rqt));
}
+/**
+ * Create an Rx Hash queue.
+ *
+ * @param dev
+ * Pointer to Ethernet device.
+ * @param rss_key
+ * RSS key for the Rx hash queue.
+ * @param rss_key_len
+ * RSS key length.
+ * @param hash_fields
+ * Verbs protocol hash field to make the RSS on.
+ * @param queues
+ * Queues entering in hash queue. In case of empty hash_fields only the
+ * first queue index will be taken for the indirection table.
+ * @param queues_n
+ * Number of queues.
+ * @param tunnel
+ * Tunnel type.
+ *
+ * @return
+ * The DevX object initialized index, 0 otherwise and rte_errno is set.
+ */
+static uint32_t
+mlx5_devx_hrxq_new(struct rte_eth_dev *dev,
+ const uint8_t *rss_key, uint32_t rss_key_len,
+ uint64_t hash_fields,
+ const uint16_t *queues, uint32_t queues_n,
+ int tunnel __rte_unused)
+{
+ struct mlx5_priv *priv = dev->data->dev_private;
+ struct mlx5_hrxq *hrxq = NULL;
+ uint32_t hrxq_idx = 0;
+ struct mlx5_ind_table_obj *ind_tbl;
+ struct mlx5_devx_obj *tir = NULL;
+ struct mlx5_rxq_data *rxq_data = (*priv->rxqs)[queues[0]];
+ struct mlx5_rxq_ctrl *rxq_ctrl =
+ container_of(rxq_data, struct mlx5_rxq_ctrl, rxq);
+ struct mlx5_devx_tir_attr tir_attr;
+ int err;
+ uint32_t i;
+ bool lro = true;
+
+ queues_n = hash_fields ? queues_n : 1;
+ ind_tbl = mlx5_ind_table_obj_get(dev, queues, queues_n);
+ if (!ind_tbl)
+ ind_tbl = priv->obj_ops->ind_table_obj_new(dev, queues,
+ queues_n);
+ if (!ind_tbl) {
+ rte_errno = ENOMEM;
+ return 0;
+ }
+ /* Enable TIR LRO only if all the queues were configured for. */
+ for (i = 0; i < queues_n; ++i) {
+ if (!(*priv->rxqs)[queues[i]]->lro) {
+ lro = false;
+ break;
+ }
+ }
+ memset(&tir_attr, 0, sizeof(tir_attr));
+ tir_attr.disp_type = MLX5_TIRC_DISP_TYPE_INDIRECT;
+ tir_attr.rx_hash_fn = MLX5_RX_HASH_FN_TOEPLITZ;
+ tir_attr.tunneled_offload_en = !!tunnel;
+ /* If needed, translate hash_fields bitmap to PRM format. */
+ if (hash_fields) {
+ struct mlx5_rx_hash_field_select *rx_hash_field_select = NULL;
+#ifdef HAVE_IBV_DEVICE_TUNNEL_SUPPORT
+ rx_hash_field_select = hash_fields & IBV_RX_HASH_INNER ?
+ &tir_attr.rx_hash_field_selector_inner :
+ &tir_attr.rx_hash_field_selector_outer;
+#else
+ rx_hash_field_select = &tir_attr.rx_hash_field_selector_outer;
+#endif
+ /* 1 bit: 0: IPv4, 1: IPv6. */
+ rx_hash_field_select->l3_prot_type =
+ !!(hash_fields & MLX5_IPV6_IBV_RX_HASH);
+ /* 1 bit: 0: TCP, 1: UDP. */
+ rx_hash_field_select->l4_prot_type =
+ !!(hash_fields & MLX5_UDP_IBV_RX_HASH);
+ /* Bitmask which sets which fields to use in RX Hash. */
+ rx_hash_field_select->selected_fields =
+ ((!!(hash_fields & MLX5_L3_SRC_IBV_RX_HASH)) <<
+ MLX5_RX_HASH_FIELD_SELECT_SELECTED_FIELDS_SRC_IP) |
+ (!!(hash_fields & MLX5_L3_DST_IBV_RX_HASH)) <<
+ MLX5_RX_HASH_FIELD_SELECT_SELECTED_FIELDS_DST_IP |
+ (!!(hash_fields & MLX5_L4_SRC_IBV_RX_HASH)) <<
+ MLX5_RX_HASH_FIELD_SELECT_SELECTED_FIELDS_L4_SPORT |
+ (!!(hash_fields & MLX5_L4_DST_IBV_RX_HASH)) <<
+ MLX5_RX_HASH_FIELD_SELECT_SELECTED_FIELDS_L4_DPORT;
+ }
+ if (rxq_ctrl->type == MLX5_RXQ_TYPE_HAIRPIN)
+ tir_attr.transport_domain = priv->sh->td->id;
+ else
+ tir_attr.transport_domain = priv->sh->tdn;
+ memcpy(tir_attr.rx_hash_toeplitz_key, rss_key, MLX5_RSS_HASH_KEY_LEN);
+ tir_attr.indirect_table = ind_tbl->rqt->id;
+ if (dev->data->dev_conf.lpbk_mode)
+ tir_attr.self_lb_block = MLX5_TIRC_SELF_LB_BLOCK_BLOCK_UNICAST;
+ if (lro) {
+ tir_attr.lro_timeout_period_usecs = priv->config.lro.timeout;
+ tir_attr.lro_max_msg_sz = priv->max_lro_msg_size;
+ tir_attr.lro_enable_mask = MLX5_TIRC_LRO_ENABLE_MASK_IPV4_LRO |
+ MLX5_TIRC_LRO_ENABLE_MASK_IPV6_LRO;
+ }
+ tir = mlx5_devx_cmd_create_tir(priv->sh->ctx, &tir_attr);
+ if (!tir) {
+ DRV_LOG(ERR, "Port %u cannot create DevX TIR.",
+ dev->data->port_id);
+ rte_errno = errno;
+ goto error;
+ }
+ hrxq = mlx5_ipool_zmalloc(priv->sh->ipool[MLX5_IPOOL_HRXQ], &hrxq_idx);
+ if (!hrxq)
+ goto error;
+ hrxq->ind_table = ind_tbl;
+ hrxq->tir = tir;
+#ifdef HAVE_IBV_FLOW_DV_SUPPORT
+ hrxq->action = mlx5_glue->dv_create_flow_action_dest_devx_tir
+ (hrxq->tir->obj);
+ if (!hrxq->action) {
+ rte_errno = errno;
+ goto error;
+ }
+#endif
+ hrxq->rss_key_len = rss_key_len;
+ hrxq->hash_fields = hash_fields;
+ memcpy(hrxq->rss_key, rss_key, rss_key_len);
+ rte_atomic32_inc(&hrxq->refcnt);
+ ILIST_INSERT(priv->sh->ipool[MLX5_IPOOL_HRXQ], &priv->hrxqs, hrxq_idx,
+ hrxq, next);
+ return hrxq_idx;
+error:
+ err = rte_errno; /* Save rte_errno before cleanup. */
+ mlx5_ind_table_obj_release(dev, ind_tbl);
+ if (tir)
+ claim_zero(mlx5_devx_cmd_destroy(tir));
+ if (hrxq)
+ mlx5_ipool_free(priv->sh->ipool[MLX5_IPOOL_HRXQ], hrxq_idx);
+ rte_errno = err; /* Restore rte_errno. */
+ return 0;
+}
+
+/**
+ * Destroy a DevX TIR object.
+ *
+ * @param hrxq
+ * Hash Rx queue to release its tir.
+ */
+static void
+mlx5_devx_tir_destroy(struct mlx5_hrxq *hrxq)
+{
+ claim_zero(mlx5_devx_cmd_destroy(hrxq->tir));
+}
+
struct mlx5_obj_ops devx_obj_ops = {
.rxq_obj_modify_vlan_strip = mlx5_rxq_obj_modify_rq_vlan_strip,
.rxq_obj_new = mlx5_rxq_devx_obj_new,
@@ -702,4 +855,6 @@ struct mlx5_obj_ops devx_obj_ops = {
.rxq_obj_release = mlx5_rxq_devx_obj_release,
.ind_table_obj_new = mlx5_devx_ind_table_obj_new,
.ind_table_obj_destroy = mlx5_devx_ind_table_obj_destroy,
+ .hrxq_new = mlx5_devx_hrxq_new,
+ .hrxq_destroy = mlx5_devx_tir_destroy,
};
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 58358ce..fa41486 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -8949,14 +8949,14 @@ struct field_modify_info modify_tcp[] = {
rss_desc->queue,
rss_desc->queue_num);
if (!hrxq_idx) {
- hrxq_idx = mlx5_hrxq_new
+ hrxq_idx = priv->obj_ops->hrxq_new
(dev, rss_desc->key,
- MLX5_RSS_HASH_KEY_LEN,
- dev_flow->hash_fields,
- rss_desc->queue,
- rss_desc->queue_num,
- !!(dh->layers &
- MLX5_FLOW_LAYER_TUNNEL));
+ MLX5_RSS_HASH_KEY_LEN,
+ dev_flow->hash_fields,
+ rss_desc->queue,
+ rss_desc->queue_num,
+ !!(dh->layers &
+ MLX5_FLOW_LAYER_TUNNEL));
}
hrxq = mlx5_ipool_get(priv->sh->ipool[MLX5_IPOOL_HRXQ],
hrxq_idx);
diff --git a/drivers/net/mlx5/mlx5_flow_verbs.c b/drivers/net/mlx5/mlx5_flow_verbs.c
index 80c549a..f8edae1 100644
--- a/drivers/net/mlx5/mlx5_flow_verbs.c
+++ b/drivers/net/mlx5/mlx5_flow_verbs.c
@@ -1986,13 +1986,14 @@
rss_desc->queue,
rss_desc->queue_num);
if (!hrxq_idx)
- hrxq_idx = mlx5_hrxq_new(dev, rss_desc->key,
- MLX5_RSS_HASH_KEY_LEN,
- dev_flow->hash_fields,
- rss_desc->queue,
- rss_desc->queue_num,
- !!(handle->layers &
- MLX5_FLOW_LAYER_TUNNEL));
+ hrxq_idx = priv->obj_ops->hrxq_new
+ (dev, rss_desc->key,
+ MLX5_RSS_HASH_KEY_LEN,
+ dev_flow->hash_fields,
+ rss_desc->queue,
+ rss_desc->queue_num,
+ !!(handle->layers &
+ MLX5_FLOW_LAYER_TUNNEL));
hrxq = mlx5_ipool_get(priv->sh->ipool[MLX5_IPOOL_HRXQ],
hrxq_idx);
if (!hrxq) {
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index aa39892..d84dfe1 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -20,7 +20,6 @@
#include <rte_eal_paging.h>
#include <mlx5_glue.h>
-#include <mlx5_devx_cmds.h>
#include <mlx5_malloc.h>
#include "mlx5_defs.h"
@@ -28,7 +27,6 @@
#include "mlx5_rxtx.h"
#include "mlx5_utils.h"
#include "mlx5_autoconf.h"
-#include "mlx5_flow.h"
/* Default RSS hash key also used for ConnectX-3. */
@@ -1721,7 +1719,7 @@ enum mlx5_rxq_type
* @return
* An indirection table if found.
*/
-static struct mlx5_ind_table_obj *
+struct mlx5_ind_table_obj *
mlx5_ind_table_obj_get(struct rte_eth_dev *dev, const uint16_t *queues,
uint32_t queues_n)
{
@@ -1756,7 +1754,7 @@ enum mlx5_rxq_type
* @return
* 1 while a reference on it exists, 0 when freed.
*/
-static int
+int
mlx5_ind_table_obj_release(struct rte_eth_dev *dev,
struct mlx5_ind_table_obj *ind_tbl)
{
@@ -1801,238 +1799,6 @@ enum mlx5_rxq_type
}
/**
- * Create an Rx Hash queue.
- *
- * @param dev
- * Pointer to Ethernet device.
- * @param rss_key
- * RSS key for the Rx hash queue.
- * @param rss_key_len
- * RSS key length.
- * @param hash_fields
- * Verbs protocol hash field to make the RSS on.
- * @param queues
- * Queues entering in hash queue. In case of empty hash_fields only the
- * first queue index will be taken for the indirection table.
- * @param queues_n
- * Number of queues.
- * @param tunnel
- * Tunnel type.
- *
- * @return
- * The Verbs/DevX object initialised index, 0 otherwise and rte_errno is set.
- */
-uint32_t
-mlx5_hrxq_new(struct rte_eth_dev *dev,
- const uint8_t *rss_key, uint32_t rss_key_len,
- uint64_t hash_fields,
- const uint16_t *queues, uint32_t queues_n,
- int tunnel __rte_unused)
-{
- struct mlx5_priv *priv = dev->data->dev_private;
- struct mlx5_hrxq *hrxq = NULL;
- uint32_t hrxq_idx = 0;
- struct ibv_qp *qp = NULL;
- struct mlx5_ind_table_obj *ind_tbl;
- int err;
- struct mlx5_devx_obj *tir = NULL;
- struct mlx5_rxq_data *rxq_data = (*priv->rxqs)[queues[0]];
- struct mlx5_rxq_ctrl *rxq_ctrl =
- container_of(rxq_data, struct mlx5_rxq_ctrl, rxq);
-
- queues_n = hash_fields ? queues_n : 1;
- ind_tbl = mlx5_ind_table_obj_get(dev, queues, queues_n);
- if (!ind_tbl)
- ind_tbl = priv->obj_ops->ind_table_obj_new(dev, queues,
- queues_n);
- if (!ind_tbl) {
- rte_errno = ENOMEM;
- return 0;
- }
- if (ind_tbl->type == MLX5_IND_TBL_TYPE_IBV) {
-#ifdef HAVE_IBV_DEVICE_TUNNEL_SUPPORT
- struct mlx5dv_qp_init_attr qp_init_attr;
-
- memset(&qp_init_attr, 0, sizeof(qp_init_attr));
- if (tunnel) {
- qp_init_attr.comp_mask =
- MLX5DV_QP_INIT_ATTR_MASK_QP_CREATE_FLAGS;
- qp_init_attr.create_flags =
- MLX5DV_QP_CREATE_TUNNEL_OFFLOADS;
- }
-#ifdef HAVE_IBV_FLOW_DV_SUPPORT
- if (dev->data->dev_conf.lpbk_mode) {
- /*
- * Allow packet sent from NIC loop back
- * w/o source MAC check.
- */
- qp_init_attr.comp_mask |=
- MLX5DV_QP_INIT_ATTR_MASK_QP_CREATE_FLAGS;
- qp_init_attr.create_flags |=
- MLX5DV_QP_CREATE_TIR_ALLOW_SELF_LOOPBACK_UC;
- }
-#endif
- qp = mlx5_glue->dv_create_qp
- (priv->sh->ctx,
- &(struct ibv_qp_init_attr_ex){
- .qp_type = IBV_QPT_RAW_PACKET,
- .comp_mask =
- IBV_QP_INIT_ATTR_PD |
- IBV_QP_INIT_ATTR_IND_TABLE |
- IBV_QP_INIT_ATTR_RX_HASH,
- .rx_hash_conf = (struct ibv_rx_hash_conf){
- .rx_hash_function =
- IBV_RX_HASH_FUNC_TOEPLITZ,
- .rx_hash_key_len = rss_key_len,
- .rx_hash_key =
- (void *)(uintptr_t)rss_key,
- .rx_hash_fields_mask = hash_fields,
- },
- .rwq_ind_tbl = ind_tbl->ind_table,
- .pd = priv->sh->pd,
- },
- &qp_init_attr);
-#else
- qp = mlx5_glue->create_qp_ex
- (priv->sh->ctx,
- &(struct ibv_qp_init_attr_ex){
- .qp_type = IBV_QPT_RAW_PACKET,
- .comp_mask =
- IBV_QP_INIT_ATTR_PD |
- IBV_QP_INIT_ATTR_IND_TABLE |
- IBV_QP_INIT_ATTR_RX_HASH,
- .rx_hash_conf = (struct ibv_rx_hash_conf){
- .rx_hash_function =
- IBV_RX_HASH_FUNC_TOEPLITZ,
- .rx_hash_key_len = rss_key_len,
- .rx_hash_key =
- (void *)(uintptr_t)rss_key,
- .rx_hash_fields_mask = hash_fields,
- },
- .rwq_ind_tbl = ind_tbl->ind_table,
- .pd = priv->sh->pd,
- });
-#endif
- if (!qp) {
- rte_errno = errno;
- goto error;
- }
- } else { /* ind_tbl->type == MLX5_IND_TBL_TYPE_DEVX */
- struct mlx5_devx_tir_attr tir_attr;
- uint32_t i;
- uint32_t lro = 1;
-
- /* Enable TIR LRO only if all the queues were configured for. */
- for (i = 0; i < queues_n; ++i) {
- if (!(*priv->rxqs)[queues[i]]->lro) {
- lro = 0;
- break;
- }
- }
- memset(&tir_attr, 0, sizeof(tir_attr));
- tir_attr.disp_type = MLX5_TIRC_DISP_TYPE_INDIRECT;
- tir_attr.rx_hash_fn = MLX5_RX_HASH_FN_TOEPLITZ;
- tir_attr.tunneled_offload_en = !!tunnel;
- /* If needed, translate hash_fields bitmap to PRM format. */
- if (hash_fields) {
-#ifdef HAVE_IBV_DEVICE_TUNNEL_SUPPORT
- struct mlx5_rx_hash_field_select *rx_hash_field_select =
- hash_fields & IBV_RX_HASH_INNER ?
- &tir_attr.rx_hash_field_selector_inner :
- &tir_attr.rx_hash_field_selector_outer;
-#else
- struct mlx5_rx_hash_field_select *rx_hash_field_select =
- &tir_attr.rx_hash_field_selector_outer;
-#endif
- /* 1 bit: 0: IPv4, 1: IPv6. */
- rx_hash_field_select->l3_prot_type =
- !!(hash_fields & MLX5_IPV6_IBV_RX_HASH);
- /* 1 bit: 0: TCP, 1: UDP. */
- rx_hash_field_select->l4_prot_type =
- !!(hash_fields & MLX5_UDP_IBV_RX_HASH);
- /* Bitmask which sets which fields to use in RX Hash. */
- rx_hash_field_select->selected_fields =
- ((!!(hash_fields & MLX5_L3_SRC_IBV_RX_HASH)) <<
- MLX5_RX_HASH_FIELD_SELECT_SELECTED_FIELDS_SRC_IP) |
- (!!(hash_fields & MLX5_L3_DST_IBV_RX_HASH)) <<
- MLX5_RX_HASH_FIELD_SELECT_SELECTED_FIELDS_DST_IP |
- (!!(hash_fields & MLX5_L4_SRC_IBV_RX_HASH)) <<
- MLX5_RX_HASH_FIELD_SELECT_SELECTED_FIELDS_L4_SPORT |
- (!!(hash_fields & MLX5_L4_DST_IBV_RX_HASH)) <<
- MLX5_RX_HASH_FIELD_SELECT_SELECTED_FIELDS_L4_DPORT;
- }
- if (rxq_ctrl->obj->type == MLX5_RXQ_OBJ_TYPE_DEVX_HAIRPIN)
- tir_attr.transport_domain = priv->sh->td->id;
- else
- tir_attr.transport_domain = priv->sh->tdn;
- memcpy(tir_attr.rx_hash_toeplitz_key, rss_key,
- MLX5_RSS_HASH_KEY_LEN);
- tir_attr.indirect_table = ind_tbl->rqt->id;
- if (dev->data->dev_conf.lpbk_mode)
- tir_attr.self_lb_block =
- MLX5_TIRC_SELF_LB_BLOCK_BLOCK_UNICAST;
- if (lro) {
- tir_attr.lro_timeout_period_usecs =
- priv->config.lro.timeout;
- tir_attr.lro_max_msg_sz = priv->max_lro_msg_size;
- tir_attr.lro_enable_mask =
- MLX5_TIRC_LRO_ENABLE_MASK_IPV4_LRO |
- MLX5_TIRC_LRO_ENABLE_MASK_IPV6_LRO;
- }
- tir = mlx5_devx_cmd_create_tir(priv->sh->ctx, &tir_attr);
- if (!tir) {
- DRV_LOG(ERR, "port %u cannot create DevX TIR",
- dev->data->port_id);
- rte_errno = errno;
- goto error;
- }
- }
- hrxq = mlx5_ipool_zmalloc(priv->sh->ipool[MLX5_IPOOL_HRXQ], &hrxq_idx);
- if (!hrxq)
- goto error;
- hrxq->ind_table = ind_tbl;
- if (ind_tbl->type == MLX5_IND_TBL_TYPE_IBV) {
- hrxq->qp = qp;
-#ifdef HAVE_IBV_FLOW_DV_SUPPORT
- hrxq->action =
- mlx5_glue->dv_create_flow_action_dest_ibv_qp(hrxq->qp);
- if (!hrxq->action) {
- rte_errno = errno;
- goto error;
- }
-#endif
- } else { /* ind_tbl->type == MLX5_IND_TBL_TYPE_DEVX */
- hrxq->tir = tir;
-#ifdef HAVE_IBV_FLOW_DV_SUPPORT
- hrxq->action = mlx5_glue->dv_create_flow_action_dest_devx_tir
- (hrxq->tir->obj);
- if (!hrxq->action) {
- rte_errno = errno;
- goto error;
- }
-#endif
- }
- hrxq->rss_key_len = rss_key_len;
- hrxq->hash_fields = hash_fields;
- memcpy(hrxq->rss_key, rss_key, rss_key_len);
- rte_atomic32_inc(&hrxq->refcnt);
- ILIST_INSERT(priv->sh->ipool[MLX5_IPOOL_HRXQ], &priv->hrxqs, hrxq_idx,
- hrxq, next);
- return hrxq_idx;
-error:
- err = rte_errno; /* Save rte_errno before cleanup. */
- mlx5_ind_table_obj_release(dev, ind_tbl);
- if (qp)
- claim_zero(mlx5_glue->destroy_qp(qp));
- else if (tir)
- claim_zero(mlx5_devx_cmd_destroy(tir));
- if (hrxq)
- mlx5_ipool_free(priv->sh->ipool[MLX5_IPOOL_HRXQ], hrxq_idx);
- rte_errno = err; /* Restore rte_errno. */
- return 0;
-}
-
-/**
* Get an Rx Hash queue.
*
* @param dev
@@ -2106,10 +1872,7 @@ enum mlx5_rxq_type
#ifdef HAVE_IBV_FLOW_DV_SUPPORT
mlx5_glue->destroy_flow_action(hrxq->action);
#endif
- if (hrxq->ind_table->type == MLX5_IND_TBL_TYPE_IBV)
- claim_zero(mlx5_glue->destroy_qp(hrxq->qp));
- else /* hrxq->ind_table->type == MLX5_IND_TBL_TYPE_DEVX */
- claim_zero(mlx5_devx_cmd_destroy(hrxq->tir));
+ priv->obj_ops->hrxq_destroy(hrxq);
mlx5_ind_table_obj_release(dev, hrxq->ind_table);
ILIST_REMOVE(priv->sh->ipool[MLX5_IPOOL_HRXQ], &priv->hrxqs,
hrxq_idx, hrxq, next);
diff --git a/drivers/net/mlx5/mlx5_rxtx.h b/drivers/net/mlx5/mlx5_rxtx.h
index 7878c81..14a3535 100644
--- a/drivers/net/mlx5/mlx5_rxtx.h
+++ b/drivers/net/mlx5/mlx5_rxtx.h
@@ -186,24 +186,6 @@ struct mlx5_rxq_ctrl {
struct rte_eth_hairpin_conf hairpin_conf; /* Hairpin configuration. */
};
-/* Hash Rx queue. */
-struct mlx5_hrxq {
- ILIST_ENTRY(uint32_t)next; /* Index to the next element. */
- rte_atomic32_t refcnt; /* Reference counter. */
- struct mlx5_ind_table_obj *ind_table; /* Indirection table. */
- RTE_STD_C11
- union {
- void *qp; /* Verbs queue pair. */
- struct mlx5_devx_obj *tir; /* DevX TIR object. */
- };
-#ifdef HAVE_IBV_FLOW_DV_SUPPORT
- void *action; /* DV QP action pointer. */
-#endif
- uint64_t hash_fields; /* Verbs Hash fields. */
- uint32_t rss_key_len; /* Hash key length in bytes. */
- uint8_t rss_key[]; /* Hash key. */
-};
-
/* TX queue send local data. */
__extension__
struct mlx5_txq_local {
@@ -383,11 +365,11 @@ struct mlx5_rxq_ctrl *mlx5_rxq_hairpin_new
int mlx5_rxq_verify(struct rte_eth_dev *dev);
int rxq_alloc_elts(struct mlx5_rxq_ctrl *rxq_ctrl);
int mlx5_ind_table_obj_verify(struct rte_eth_dev *dev);
-uint32_t mlx5_hrxq_new(struct rte_eth_dev *dev,
- const uint8_t *rss_key, uint32_t rss_key_len,
- uint64_t hash_fields,
- const uint16_t *queues, uint32_t queues_n,
- int tunnel __rte_unused);
+struct mlx5_ind_table_obj *mlx5_ind_table_obj_get(struct rte_eth_dev *dev,
+ const uint16_t *queues,
+ uint32_t queues_n);
+int mlx5_ind_table_obj_release(struct rte_eth_dev *dev,
+ struct mlx5_ind_table_obj *ind_tbl);
uint32_t mlx5_hrxq_get(struct rte_eth_dev *dev,
const uint8_t *rss_key, uint32_t rss_key_len,
uint64_t hash_fields,
--
1.8.3.1
^ permalink raw reply [flat|nested] 28+ messages in thread
* [dpdk-dev] [PATCH v1 14/18] net/mlx5: remove indirection table type field
2020-09-03 10:13 [dpdk-dev] [PATCH v1 00/18] mlx5 Rx DevX/Verbs separation Michael Baum
` (12 preceding siblings ...)
2020-09-03 10:13 ` [dpdk-dev] [PATCH v1 13/18] net/mlx5: separate Rx hash queue creation Michael Baum
@ 2020-09-03 10:13 ` Michael Baum
2020-09-03 10:13 ` [dpdk-dev] [PATCH v1 15/18] net/mlx5: share Rx queue indirection table code Michael Baum
` (5 subsequent siblings)
19 siblings, 0 replies; 28+ messages in thread
From: Michael Baum @ 2020-09-03 10:13 UTC (permalink / raw)
To: dev; +Cc: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko
Once the separation between Verbs and DevX is done using function
pointers, the type field of the indirection table structure becomes
redundant and is no longer used by any code.
Remove the unnecessary field from the structure.
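To illustrate the idea, a minimal stand-alone sketch (plain C, invented names, not mlx5 code): once each backend installs its callbacks in an ops table chosen at init time, callers dispatch through the table and a per-object type field has nothing left to select.

#include <stdio.h>

struct ind_table;	/* opaque; backend-specific contents */

struct obj_ops {
	void (*ind_table_destroy)(struct ind_table *tbl);
};

static void verbs_ind_table_destroy(struct ind_table *tbl)
{
	(void)tbl;
	puts("destroy via Verbs");
}

static void devx_ind_table_destroy(struct ind_table *tbl)
{
	(void)tbl;
	puts("destroy via DevX");
}

static const struct obj_ops verbs_ops = {
	.ind_table_destroy = verbs_ind_table_destroy,
};
static const struct obj_ops devx_ops = {
	.ind_table_destroy = devx_ind_table_destroy,
};

int main(void)
{
	int have_devx = 1;	/* decided once, e.g. from device capabilities */
	const struct obj_ops *ops = have_devx ? &devx_ops : &verbs_ops;

	/* No switch on a type field; the table already encodes the choice. */
	ops->ind_table_destroy(NULL);
	return 0;
}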
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
drivers/net/mlx5/linux/mlx5_verbs.c | 1 -
drivers/net/mlx5/mlx5.h | 6 ------
drivers/net/mlx5/mlx5_devx.c | 1 -
3 files changed, 8 deletions(-)
diff --git a/drivers/net/mlx5/linux/mlx5_verbs.c b/drivers/net/mlx5/linux/mlx5_verbs.c
index d92dd48..6eef85e 100644
--- a/drivers/net/mlx5/linux/mlx5_verbs.c
+++ b/drivers/net/mlx5/linux/mlx5_verbs.c
@@ -471,7 +471,6 @@
rte_errno = ENOMEM;
return NULL;
}
- ind_tbl->type = MLX5_IND_TBL_TYPE_IBV;
for (i = 0; i != queues_n; ++i) {
struct mlx5_rxq_ctrl *rxq = mlx5_rxq_get(dev, queues[i]);
if (!rxq)
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 9fc4639..9594856 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -704,16 +704,10 @@ struct mlx5_rxq_obj {
};
};
-enum mlx5_ind_tbl_type {
- MLX5_IND_TBL_TYPE_IBV,
- MLX5_IND_TBL_TYPE_DEVX,
-};
-
/* Indirection table. */
struct mlx5_ind_table_obj {
LIST_ENTRY(mlx5_ind_table_obj) next; /* Pointer to the next element. */
rte_atomic32_t refcnt; /* Reference counter. */
- enum mlx5_ind_tbl_type type;
RTE_STD_C11
union {
void *ind_table; /**< Indirection table. */
diff --git a/drivers/net/mlx5/mlx5_devx.c b/drivers/net/mlx5/mlx5_devx.c
index b1b3037..5fa41f1 100644
--- a/drivers/net/mlx5/mlx5_devx.c
+++ b/drivers/net/mlx5/mlx5_devx.c
@@ -639,7 +639,6 @@
rte_errno = ENOMEM;
return NULL;
}
- ind_tbl->type = MLX5_IND_TBL_TYPE_DEVX;
rqt_attr = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*rqt_attr) +
rqt_n * sizeof(uint32_t), 0, SOCKET_ID_ANY);
if (!rqt_attr) {
--
1.8.3.1
^ permalink raw reply [flat|nested] 28+ messages in thread
* [dpdk-dev] [PATCH v1 15/18] net/mlx5: share Rx queue indirection table code
2020-09-03 10:13 [dpdk-dev] [PATCH v1 00/18] mlx5 Rx DevX/Verbs separation Michael Baum
` (13 preceding siblings ...)
2020-09-03 10:13 ` [dpdk-dev] [PATCH v1 14/18] net/mlx5: remove indirection table type field Michael Baum
@ 2020-09-03 10:13 ` Michael Baum
2020-09-03 10:13 ` [dpdk-dev] [PATCH v1 16/18] net/mlx5: share Rx hash queue code Michael Baum
` (4 subsequent siblings)
19 siblings, 0 replies; 28+ messages in thread
From: Michael Baum @ 2020-09-03 10:13 UTC (permalink / raw)
To: dev; +Cc: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko
Move the Rx indirection table object's common resource allocation code
from the DevX and Verbs modules to a shared location.
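The resulting split can be sketched as follows (stand-alone C with invented names, trimmed of reference counting and list handling; the real code is in the diff below): the shared layer allocates the generic table and records the queues, and only the HW-object creation remains backend-specific.

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

struct ind_table {
	uint32_t queues_n;
	uint16_t queues[8];
	void *hw_obj;	/* DevX RQT or Verbs ibv_rwq_ind_table */
};

/* Backend-specific step: create the HW object, 0 on success, -1 on failure. */
static int devx_like_ind_table_new(struct ind_table *tbl)
{
	tbl->hw_obj = malloc(1);	/* stands in for the RQT creation */
	return tbl->hw_obj ? 0 : -1;
}

/* Shared wrapper: everything that used to be duplicated per backend. */
static struct ind_table *
ind_table_new(int (*backend_new)(struct ind_table *),
	      const uint16_t *queues, uint32_t queues_n)
{
	struct ind_table *tbl = calloc(1, sizeof(*tbl));
	uint32_t i;

	if (!tbl)
		return NULL;
	tbl->queues_n = queues_n;
	for (i = 0; i < queues_n; ++i)
		tbl->queues[i] = queues[i];
	if (backend_new(tbl) < 0) {	/* only this call differs per backend */
		free(tbl);
		return NULL;
	}
	return tbl;
}

int main(void)
{
	uint16_t queues[] = {0, 1, 2, 3};
	struct ind_table *tbl = ind_table_new(devx_like_ind_table_new,
					      queues, 4);

	printf("indirection table %screated\n", tbl ? "" : "not ");
	if (tbl) {
		free(tbl->hw_obj);
		free(tbl);
	}
	return 0;
}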
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
drivers/net/mlx5/linux/mlx5_verbs.c | 75 ++++++++++++++----------------------
drivers/net/mlx5/mlx5.h | 7 ++--
drivers/net/mlx5/mlx5_devx.c | 76 ++++++++++++++-----------------------
drivers/net/mlx5/mlx5_rxq.c | 56 ++++++++++++++++++++++++++-
drivers/net/mlx5/mlx5_rxtx.h | 3 ++
5 files changed, 117 insertions(+), 100 deletions(-)
diff --git a/drivers/net/mlx5/linux/mlx5_verbs.c b/drivers/net/mlx5/linux/mlx5_verbs.c
index 6eef85e..be810b1 100644
--- a/drivers/net/mlx5/linux/mlx5_verbs.c
+++ b/drivers/net/mlx5/linux/mlx5_verbs.c
@@ -441,67 +441,49 @@
}
/**
- * Create an indirection table.
+ * Create a receive work queue array as a field of the indirection table.
*
* @param dev
* Pointer to Ethernet device.
- * @param queues
- * Queues entering in the indirection table.
- * @param queues_n
- * Number of queues in the array.
+ * @param log_n
+ * Log of number of queues in the array.
+ * @param ind_tbl
+ * Verbs indirection table object.
*
* @return
- * The Verbs object initialized, NULL otherwise and rte_errno is set.
+ * 0 on success, a negative errno value otherwise and rte_errno is set.
*/
-static struct mlx5_ind_table_obj *
-mlx5_ibv_ind_table_obj_new(struct rte_eth_dev *dev, const uint16_t *queues,
- uint32_t queues_n)
+static int
+mlx5_ibv_ind_table_new(struct rte_eth_dev *dev, const unsigned int log_n,
+ struct mlx5_ind_table_obj *ind_tbl)
{
struct mlx5_priv *priv = dev->data->dev_private;
- struct mlx5_ind_table_obj *ind_tbl;
- const unsigned int wq_n = rte_is_power_of_2(queues_n) ?
- log2above(queues_n) :
- log2above(priv->config.ind_table_max_size);
- struct ibv_wq *wq[1 << wq_n];
- unsigned int i = 0, j = 0, k = 0;
+ struct ibv_wq *wq[1 << log_n];
+ unsigned int i, j;
- ind_tbl = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*ind_tbl) +
- queues_n * sizeof(uint16_t), 0, SOCKET_ID_ANY);
- if (!ind_tbl) {
- rte_errno = ENOMEM;
- return NULL;
- }
- for (i = 0; i != queues_n; ++i) {
- struct mlx5_rxq_ctrl *rxq = mlx5_rxq_get(dev, queues[i]);
- if (!rxq)
- goto error;
- wq[i] = rxq->obj->wq;
- ind_tbl->queues[i] = queues[i];
+ MLX5_ASSERT(ind_tbl);
+ for (i = 0; i != ind_tbl->queues_n; ++i) {
+ struct mlx5_rxq_data *rxq = (*priv->rxqs)[ind_tbl->queues[i]];
+ struct mlx5_rxq_ctrl *rxq_ctrl =
+ container_of(rxq, struct mlx5_rxq_ctrl, rxq);
+
+ wq[i] = rxq_ctrl->obj->wq;
}
- ind_tbl->queues_n = queues_n;
+ MLX5_ASSERT(i > 0);
/* Finalise indirection table. */
- k = i; /* Retain value of i for use in error case. */
- for (j = 0; k != (unsigned int)(1 << wq_n); ++k, ++j)
- wq[k] = wq[j];
+ for (j = 0; i != (unsigned int)(1 << log_n); ++j, ++i)
+ wq[i] = wq[j];
ind_tbl->ind_table = mlx5_glue->create_rwq_ind_table(priv->sh->ctx,
&(struct ibv_rwq_ind_table_init_attr){
- .log_ind_tbl_size = wq_n,
+ .log_ind_tbl_size = log_n,
.ind_tbl = wq,
.comp_mask = 0,
});
if (!ind_tbl->ind_table) {
rte_errno = errno;
- goto error;
+ return -rte_errno;
}
- rte_atomic32_inc(&ind_tbl->refcnt);
- LIST_INSERT_HEAD(&priv->ind_tbls, ind_tbl, next);
- return ind_tbl;
-error:
- for (j = 0; j < i; j++)
- mlx5_rxq_release(dev, ind_tbl->queues[j]);
- mlx5_free(ind_tbl);
- DEBUG("Port %u cannot create indirection table.", dev->data->port_id);
- return NULL;
+ return 0;
}
/**
@@ -511,7 +493,7 @@
* Indirection table to release.
*/
static void
-mlx5_ibv_ind_table_obj_destroy(struct mlx5_ind_table_obj *ind_tbl)
+mlx5_ibv_ind_table_destroy(struct mlx5_ind_table_obj *ind_tbl)
{
claim_zero(mlx5_glue->destroy_rwq_ind_table(ind_tbl->ind_table));
}
@@ -555,8 +537,7 @@
queues_n = hash_fields ? queues_n : 1;
ind_tbl = mlx5_ind_table_obj_get(dev, queues, queues_n);
if (!ind_tbl)
- ind_tbl = priv->obj_ops->ind_table_obj_new(dev, queues,
- queues_n);
+ ind_tbl = mlx5_ind_table_obj_new(dev, queues, queues_n);
if (!ind_tbl) {
rte_errno = ENOMEM;
return 0;
@@ -672,8 +653,8 @@ struct mlx5_obj_ops ibv_obj_ops = {
.rxq_event_get = mlx5_rx_ibv_get_event,
.rxq_obj_modify = mlx5_ibv_modify_wq,
.rxq_obj_release = mlx5_rxq_ibv_obj_release,
- .ind_table_obj_new = mlx5_ibv_ind_table_obj_new,
- .ind_table_obj_destroy = mlx5_ibv_ind_table_obj_destroy,
+ .ind_table_new = mlx5_ibv_ind_table_new,
+ .ind_table_destroy = mlx5_ibv_ind_table_destroy,
.hrxq_new = mlx5_ibv_hrxq_new,
.hrxq_destroy = mlx5_ibv_qp_destroy,
};
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 9594856..12017e8 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -742,10 +742,9 @@ struct mlx5_obj_ops {
int (*rxq_event_get)(struct mlx5_rxq_obj *rxq_obj);
int (*rxq_obj_modify)(struct mlx5_rxq_obj *rxq_obj, bool is_start);
void (*rxq_obj_release)(struct mlx5_rxq_obj *rxq_obj);
- struct mlx5_ind_table_obj *(*ind_table_obj_new)(struct rte_eth_dev *dev,
- const uint16_t *queues,
- uint32_t queues_n);
- void (*ind_table_obj_destroy)(struct mlx5_ind_table_obj *ind_tbl);
+ int (*ind_table_new)(struct rte_eth_dev *dev, const unsigned int log_n,
+ struct mlx5_ind_table_obj *ind_tbl);
+ void (*ind_table_destroy)(struct mlx5_ind_table_obj *ind_tbl);
uint32_t (*hrxq_new)(struct rte_eth_dev *dev, const uint8_t *rss_key,
uint32_t rss_key_len, uint64_t hash_fields,
const uint16_t *queues, uint32_t queues_n,
diff --git a/drivers/net/mlx5/mlx5_devx.c b/drivers/net/mlx5/mlx5_devx.c
index 5fa41f1..ebc3929 100644
--- a/drivers/net/mlx5/mlx5_devx.c
+++ b/drivers/net/mlx5/mlx5_devx.c
@@ -609,76 +609,57 @@
}
/**
- * Create an indirection table.
+ * Create an RQT using the DevX API as a field of the indirection table.
*
* @param dev
* Pointer to Ethernet device.
- * @param queues
- * Queues entering in the indirection table.
- * @param queues_n
- * Number of queues in the array.
+ * @param log_n
+ * Log of number of queues in the array.
+ * @param ind_tbl
+ * DevX indirection table object.
*
* @return
- * The DevX object initialized, NULL otherwise and rte_errno is set.
+ * 0 on success, a negative errno value otherwise and rte_errno is set.
*/
-static struct mlx5_ind_table_obj *
-mlx5_devx_ind_table_obj_new(struct rte_eth_dev *dev, const uint16_t *queues,
- uint32_t queues_n)
+static int
+mlx5_devx_ind_table_new(struct rte_eth_dev *dev, const unsigned int log_n,
+ struct mlx5_ind_table_obj *ind_tbl)
{
struct mlx5_priv *priv = dev->data->dev_private;
- struct mlx5_ind_table_obj *ind_tbl;
struct mlx5_devx_rqt_attr *rqt_attr = NULL;
- const unsigned int rqt_n = 1 << (rte_is_power_of_2(queues_n) ?
- log2above(queues_n) :
- log2above(priv->config.ind_table_max_size));
- unsigned int i = 0, j = 0, k = 0;
+ const unsigned int rqt_n = 1 << log_n;
+ unsigned int i, j;
- ind_tbl = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*ind_tbl) +
- queues_n * sizeof(uint16_t), 0, SOCKET_ID_ANY);
- if (!ind_tbl) {
- rte_errno = ENOMEM;
- return NULL;
- }
+ MLX5_ASSERT(ind_tbl);
rqt_attr = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*rqt_attr) +
rqt_n * sizeof(uint32_t), 0, SOCKET_ID_ANY);
if (!rqt_attr) {
DRV_LOG(ERR, "Port %u cannot allocate RQT resources.",
dev->data->port_id);
rte_errno = ENOMEM;
- goto error;
+ return -rte_errno;
}
rqt_attr->rqt_max_size = priv->config.ind_table_max_size;
rqt_attr->rqt_actual_size = rqt_n;
- for (i = 0; i != queues_n; ++i) {
- struct mlx5_rxq_ctrl *rxq = mlx5_rxq_get(dev, queues[i]);
- if (!rxq) {
- mlx5_free(rqt_attr);
- goto error;
- }
- rqt_attr->rq_list[i] = rxq->obj->rq->id;
- ind_tbl->queues[i] = queues[i];
+ for (i = 0; i != ind_tbl->queues_n; ++i) {
+ struct mlx5_rxq_data *rxq = (*priv->rxqs)[ind_tbl->queues[i]];
+ struct mlx5_rxq_ctrl *rxq_ctrl =
+ container_of(rxq, struct mlx5_rxq_ctrl, rxq);
+
+ rqt_attr->rq_list[i] = rxq_ctrl->obj->rq->id;
}
- k = i; /* Retain value of i for use in error case. */
- for (j = 0; k != rqt_n; ++k, ++j)
- rqt_attr->rq_list[k] = rqt_attr->rq_list[j];
+ MLX5_ASSERT(i > 0);
+ for (j = 0; i != rqt_n; ++j, ++i)
+ rqt_attr->rq_list[i] = rqt_attr->rq_list[j];
ind_tbl->rqt = mlx5_devx_cmd_create_rqt(priv->sh->ctx, rqt_attr);
mlx5_free(rqt_attr);
if (!ind_tbl->rqt) {
DRV_LOG(ERR, "Port %u cannot create DevX RQT.",
dev->data->port_id);
rte_errno = errno;
- goto error;
+ return -rte_errno;
}
- ind_tbl->queues_n = queues_n;
- rte_atomic32_inc(&ind_tbl->refcnt);
- LIST_INSERT_HEAD(&priv->ind_tbls, ind_tbl, next);
- return ind_tbl;
-error:
- for (j = 0; j < i; j++)
- mlx5_rxq_release(dev, ind_tbl->queues[j]);
- mlx5_free(ind_tbl);
- DEBUG("Port %u cannot create indirection table.", dev->data->port_id);
- return NULL;
+ return 0;
}
/**
@@ -688,7 +669,7 @@
* Indirection table to release.
*/
static void
-mlx5_devx_ind_table_obj_destroy(struct mlx5_ind_table_obj *ind_tbl)
+mlx5_devx_ind_table_destroy(struct mlx5_ind_table_obj *ind_tbl)
{
claim_zero(mlx5_devx_cmd_destroy(ind_tbl->rqt));
}
@@ -738,8 +719,7 @@
queues_n = hash_fields ? queues_n : 1;
ind_tbl = mlx5_ind_table_obj_get(dev, queues, queues_n);
if (!ind_tbl)
- ind_tbl = priv->obj_ops->ind_table_obj_new(dev, queues,
- queues_n);
+ ind_tbl = mlx5_ind_table_obj_new(dev, queues, queues_n);
if (!ind_tbl) {
rte_errno = ENOMEM;
return 0;
@@ -852,8 +832,8 @@ struct mlx5_obj_ops devx_obj_ops = {
.rxq_event_get = mlx5_rx_devx_get_event,
.rxq_obj_modify = mlx5_devx_modify_rq,
.rxq_obj_release = mlx5_rxq_devx_obj_release,
- .ind_table_obj_new = mlx5_devx_ind_table_obj_new,
- .ind_table_obj_destroy = mlx5_devx_ind_table_obj_destroy,
+ .ind_table_new = mlx5_devx_ind_table_new,
+ .ind_table_destroy = mlx5_devx_ind_table_destroy,
.hrxq_new = mlx5_devx_hrxq_new,
.hrxq_destroy = mlx5_devx_tir_destroy,
};
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index d84dfe1..c353139 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -1762,7 +1762,7 @@ struct mlx5_ind_table_obj *
unsigned int i;
if (rte_atomic32_dec_and_test(&ind_tbl->refcnt))
- priv->obj_ops->ind_table_obj_destroy(ind_tbl);
+ priv->obj_ops->ind_table_destroy(ind_tbl);
for (i = 0; i != ind_tbl->queues_n; ++i)
claim_nonzero(mlx5_rxq_release(dev, ind_tbl->queues[i]));
if (!rte_atomic32_read(&ind_tbl->refcnt)) {
@@ -1799,6 +1799,60 @@ struct mlx5_ind_table_obj *
}
/**
+ * Create an indirection table.
+ *
+ * @param dev
+ * Pointer to Ethernet device.
+ * @param queues
+ * Queues entering in the indirection table.
+ * @param queues_n
+ * Number of queues in the array.
+ *
+ * @return
+ * The Verbs/DevX object initialized, NULL otherwise and rte_errno is set.
+ */
+struct mlx5_ind_table_obj *
+mlx5_ind_table_obj_new(struct rte_eth_dev *dev, const uint16_t *queues,
+ uint32_t queues_n)
+{
+ struct mlx5_priv *priv = dev->data->dev_private;
+ struct mlx5_ind_table_obj *ind_tbl;
+ const unsigned int n = rte_is_power_of_2(queues_n) ?
+ log2above(queues_n) :
+ log2above(priv->config.ind_table_max_size);
+ unsigned int i, j;
+ int ret;
+
+ ind_tbl = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*ind_tbl) +
+ queues_n * sizeof(uint16_t), 0, SOCKET_ID_ANY);
+ if (!ind_tbl) {
+ rte_errno = ENOMEM;
+ return NULL;
+ }
+ ind_tbl->queues_n = queues_n;
+ for (i = 0; i != queues_n; ++i) {
+ struct mlx5_rxq_ctrl *rxq = mlx5_rxq_get(dev, queues[i]);
+ if (!rxq)
+ goto error;
+ ind_tbl->queues[i] = queues[i];
+ }
+ ret = priv->obj_ops->ind_table_new(dev, n, ind_tbl);
+ if (ret < 0)
+ goto error;
+ rte_atomic32_inc(&ind_tbl->refcnt);
+ LIST_INSERT_HEAD(&priv->ind_tbls, ind_tbl, next);
+ return ind_tbl;
+error:
+ ret = rte_errno;
+ for (j = 0; j < i; j++)
+ mlx5_rxq_release(dev, ind_tbl->queues[j]);
+ rte_errno = ret;
+ mlx5_free(ind_tbl);
+ DEBUG("Port %u cannot create indirection table.", dev->data->port_id);
+ return NULL;
+}
+
+/**
* Get an Rx Hash queue.
*
* @param dev
diff --git a/drivers/net/mlx5/mlx5_rxtx.h b/drivers/net/mlx5/mlx5_rxtx.h
index 14a3535..237344f 100644
--- a/drivers/net/mlx5/mlx5_rxtx.h
+++ b/drivers/net/mlx5/mlx5_rxtx.h
@@ -365,6 +365,9 @@ struct mlx5_rxq_ctrl *mlx5_rxq_hairpin_new
int mlx5_rxq_verify(struct rte_eth_dev *dev);
int rxq_alloc_elts(struct mlx5_rxq_ctrl *rxq_ctrl);
int mlx5_ind_table_obj_verify(struct rte_eth_dev *dev);
+struct mlx5_ind_table_obj *mlx5_ind_table_obj_new(struct rte_eth_dev *dev,
+ const uint16_t *queues,
+ uint32_t queues_n);
struct mlx5_ind_table_obj *mlx5_ind_table_obj_get(struct rte_eth_dev *dev,
const uint16_t *queues,
uint32_t queues_n);
--
1.8.3.1
^ permalink raw reply [flat|nested] 28+ messages in thread
* [dpdk-dev] [PATCH v1 16/18] net/mlx5: share Rx hash queue code
2020-09-03 10:13 [dpdk-dev] [PATCH v1 00/18] mlx5 Rx DevX/Verbs separation Michael Baum
` (14 preceding siblings ...)
2020-09-03 10:13 ` [dpdk-dev] [PATCH v1 15/18] net/mlx5: share Rx queue indirection table code Michael Baum
@ 2020-09-03 10:13 ` Michael Baum
2020-09-03 10:13 ` [dpdk-dev] [PATCH v1 17/18] net/mlx5: separate Rx queue drop Michael Baum
` (3 subsequent siblings)
19 siblings, 0 replies; 28+ messages in thread
From: Michael Baum @ 2020-09-03 10:13 UTC (permalink / raw)
To: dev; +Cc: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko
Move the Rx hash queue object's common resource allocation code from the
DevX and Verbs modules to a shared location.
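The callback contract changes accordingly (a compact sketch with invented names, not mlx5 code): instead of allocating the hash queue and returning its index, the backend now receives a pre-allocated object from the shared code, attaches the HW object, and returns 0 or a negative errno, as the diff below does.

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>

struct hrxq {
	void *hw_obj;	/* Verbs QP or DevX TIR */
};

/* New-style backend callback: attach the HW object only. */
static int backend_hrxq_new(struct hrxq *hrxq, int tunnel)
{
	(void)tunnel;
	hrxq->hw_obj = malloc(1);	/* stands in for QP/TIR creation */
	return hrxq->hw_obj ? 0 : -ENOMEM;
}

int main(void)
{
	/* Shared code owns allocation, generic fields and cleanup. */
	struct hrxq *hrxq = calloc(1, sizeof(*hrxq));

	if (hrxq && backend_hrxq_new(hrxq, 0) < 0) {
		free(hrxq);
		hrxq = NULL;
	}
	printf("hash queue %screated\n", hrxq ? "" : "not ");
	if (hrxq) {
		free(hrxq->hw_obj);
		free(hrxq);
	}
	return 0;
}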
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
drivers/net/mlx5/linux/mlx5_verbs.c | 58 ++++++-----------------------
drivers/net/mlx5/mlx5.h | 6 +--
drivers/net/mlx5/mlx5_devx.c | 73 ++++++++++---------------------------
drivers/net/mlx5/mlx5_flow_dv.c | 2 +-
drivers/net/mlx5/mlx5_flow_verbs.c | 4 +-
drivers/net/mlx5/mlx5_rxq.c | 70 ++++++++++++++++++++++++++++++++++-
drivers/net/mlx5/mlx5_rxtx.h | 8 ++--
7 files changed, 110 insertions(+), 111 deletions(-)
diff --git a/drivers/net/mlx5/linux/mlx5_verbs.c b/drivers/net/mlx5/linux/mlx5_verbs.c
index be810b1..0745da9 100644
--- a/drivers/net/mlx5/linux/mlx5_verbs.c
+++ b/drivers/net/mlx5/linux/mlx5_verbs.c
@@ -503,45 +503,24 @@
*
* @param dev
* Pointer to Ethernet device.
- * @param rss_key
- * RSS key for the Rx hash queue.
- * @param rss_key_len
- * RSS key length.
- * @param hash_fields
- * Verbs protocol hash field to make the RSS on.
- * @param queues
- * Queues entering in hash queue. In case of empty hash_fields only the
- * first queue index will be taken for the indirection table.
- * @param queues_n
- * Number of queues.
+ * @param hrxq
+ * Pointer to Rx Hash queue.
* @param tunnel
* Tunnel type.
*
* @return
- * The Verbs object initialized index, 0 otherwise and rte_errno is set.
+ * 0 on success, a negative errno value otherwise and rte_errno is set.
*/
-static uint32_t
-mlx5_ibv_hrxq_new(struct rte_eth_dev *dev,
- const uint8_t *rss_key, uint32_t rss_key_len,
- uint64_t hash_fields,
- const uint16_t *queues, uint32_t queues_n,
+static int
+mlx5_ibv_hrxq_new(struct rte_eth_dev *dev, struct mlx5_hrxq *hrxq,
int tunnel __rte_unused)
{
struct mlx5_priv *priv = dev->data->dev_private;
- struct mlx5_hrxq *hrxq = NULL;
- uint32_t hrxq_idx = 0;
struct ibv_qp *qp = NULL;
- struct mlx5_ind_table_obj *ind_tbl;
+ struct mlx5_ind_table_obj *ind_tbl = hrxq->ind_table;
+ const uint8_t *rss_key = hrxq->rss_key;
+ uint64_t hash_fields = hrxq->hash_fields;
int err;
-
- queues_n = hash_fields ? queues_n : 1;
- ind_tbl = mlx5_ind_table_obj_get(dev, queues, queues_n);
- if (!ind_tbl)
- ind_tbl = mlx5_ind_table_obj_new(dev, queues, queues_n);
- if (!ind_tbl) {
- rte_errno = ENOMEM;
- return 0;
- }
#ifdef HAVE_IBV_DEVICE_TUNNEL_SUPPORT
struct mlx5dv_qp_init_attr qp_init_attr;
@@ -571,7 +550,7 @@
.rx_hash_conf = (struct ibv_rx_hash_conf){
.rx_hash_function =
IBV_RX_HASH_FUNC_TOEPLITZ,
- .rx_hash_key_len = rss_key_len,
+ .rx_hash_key_len = hrxq->rss_key_len,
.rx_hash_key =
(void *)(uintptr_t)rss_key,
.rx_hash_fields_mask = hash_fields,
@@ -592,7 +571,7 @@
.rx_hash_conf = (struct ibv_rx_hash_conf){
.rx_hash_function =
IBV_RX_HASH_FUNC_TOEPLITZ,
- .rx_hash_key_len = rss_key_len,
+ .rx_hash_key_len = hrxq->rss_key_len,
.rx_hash_key =
(void *)(uintptr_t)rss_key,
.rx_hash_fields_mask = hash_fields,
@@ -605,10 +584,6 @@
rte_errno = errno;
goto error;
}
- hrxq = mlx5_ipool_zmalloc(priv->sh->ipool[MLX5_IPOOL_HRXQ], &hrxq_idx);
- if (!hrxq)
- goto error;
- hrxq->ind_table = ind_tbl;
hrxq->qp = qp;
#ifdef HAVE_IBV_FLOW_DV_SUPPORT
hrxq->action = mlx5_glue->dv_create_flow_action_dest_ibv_qp(hrxq->qp);
@@ -617,22 +592,13 @@
goto error;
}
#endif
- hrxq->rss_key_len = rss_key_len;
- hrxq->hash_fields = hash_fields;
- memcpy(hrxq->rss_key, rss_key, rss_key_len);
- rte_atomic32_inc(&hrxq->refcnt);
- ILIST_INSERT(priv->sh->ipool[MLX5_IPOOL_HRXQ], &priv->hrxqs, hrxq_idx,
- hrxq, next);
- return hrxq_idx;
+ return 0;
error:
err = rte_errno; /* Save rte_errno before cleanup. */
- mlx5_ind_table_obj_release(dev, ind_tbl);
if (qp)
claim_zero(mlx5_glue->destroy_qp(qp));
- if (hrxq)
- mlx5_ipool_free(priv->sh->ipool[MLX5_IPOOL_HRXQ], hrxq_idx);
rte_errno = err; /* Restore rte_errno. */
- return 0;
+ return -rte_errno;
}
/**
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 12017e8..579c961 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -745,10 +745,8 @@ struct mlx5_obj_ops {
int (*ind_table_new)(struct rte_eth_dev *dev, const unsigned int log_n,
struct mlx5_ind_table_obj *ind_tbl);
void (*ind_table_destroy)(struct mlx5_ind_table_obj *ind_tbl);
- uint32_t (*hrxq_new)(struct rte_eth_dev *dev, const uint8_t *rss_key,
- uint32_t rss_key_len, uint64_t hash_fields,
- const uint16_t *queues, uint32_t queues_n,
- int tunnel __rte_unused);
+ int (*hrxq_new)(struct rte_eth_dev *dev, struct mlx5_hrxq *hrxq,
+ int tunnel __rte_unused);
void (*hrxq_destroy)(struct mlx5_hrxq *hrxq);
};
diff --git a/drivers/net/mlx5/mlx5_devx.c b/drivers/net/mlx5/mlx5_devx.c
index ebc3929..cfb9264 100644
--- a/drivers/net/mlx5/mlx5_devx.c
+++ b/drivers/net/mlx5/mlx5_devx.c
@@ -679,54 +679,33 @@
*
* @param dev
* Pointer to Ethernet device.
- * @param rss_key
- * RSS key for the Rx hash queue.
- * @param rss_key_len
- * RSS key length.
- * @param hash_fields
- * Verbs protocol hash field to make the RSS on.
- * @param queues
- * Queues entering in hash queue. In case of empty hash_fields only the
- * first queue index will be taken for the indirection table.
- * @param queues_n
- * Number of queues.
+ * @param hrxq
+ * Pointer to Rx Hash queue.
* @param tunnel
* Tunnel type.
*
* @return
- * The DevX object initialized index, 0 otherwise and rte_errno is set.
+ * 0 on success, a negative errno value otherwise and rte_errno is set.
*/
-static uint32_t
-mlx5_devx_hrxq_new(struct rte_eth_dev *dev,
- const uint8_t *rss_key, uint32_t rss_key_len,
- uint64_t hash_fields,
- const uint16_t *queues, uint32_t queues_n,
+static int
+mlx5_devx_hrxq_new(struct rte_eth_dev *dev, struct mlx5_hrxq *hrxq,
int tunnel __rte_unused)
{
struct mlx5_priv *priv = dev->data->dev_private;
- struct mlx5_hrxq *hrxq = NULL;
- uint32_t hrxq_idx = 0;
- struct mlx5_ind_table_obj *ind_tbl;
- struct mlx5_devx_obj *tir = NULL;
- struct mlx5_rxq_data *rxq_data = (*priv->rxqs)[queues[0]];
+ struct mlx5_ind_table_obj *ind_tbl = hrxq->ind_table;
+ struct mlx5_rxq_data *rxq_data = (*priv->rxqs)[ind_tbl->queues[0]];
struct mlx5_rxq_ctrl *rxq_ctrl =
container_of(rxq_data, struct mlx5_rxq_ctrl, rxq);
struct mlx5_devx_tir_attr tir_attr;
- int err;
- uint32_t i;
+ const uint8_t *rss_key = hrxq->rss_key;
+ uint64_t hash_fields = hrxq->hash_fields;
bool lro = true;
+ uint32_t i;
+ int err;
- queues_n = hash_fields ? queues_n : 1;
- ind_tbl = mlx5_ind_table_obj_get(dev, queues, queues_n);
- if (!ind_tbl)
- ind_tbl = mlx5_ind_table_obj_new(dev, queues, queues_n);
- if (!ind_tbl) {
- rte_errno = ENOMEM;
- return 0;
- }
/* Enable TIR LRO only if all the queues were configured for. */
- for (i = 0; i < queues_n; ++i) {
- if (!(*priv->rxqs)[queues[i]]->lro) {
+ for (i = 0; i < ind_tbl->queues_n; ++i) {
+ if (!(*priv->rxqs)[ind_tbl->queues[i]]->lro) {
lro = false;
break;
}
@@ -776,18 +755,13 @@
tir_attr.lro_enable_mask = MLX5_TIRC_LRO_ENABLE_MASK_IPV4_LRO |
MLX5_TIRC_LRO_ENABLE_MASK_IPV6_LRO;
}
- tir = mlx5_devx_cmd_create_tir(priv->sh->ctx, &tir_attr);
- if (!tir) {
+ hrxq->tir = mlx5_devx_cmd_create_tir(priv->sh->ctx, &tir_attr);
+ if (!hrxq->tir) {
DRV_LOG(ERR, "Port %u cannot create DevX TIR.",
dev->data->port_id);
rte_errno = errno;
goto error;
}
- hrxq = mlx5_ipool_zmalloc(priv->sh->ipool[MLX5_IPOOL_HRXQ], &hrxq_idx);
- if (!hrxq)
- goto error;
- hrxq->ind_table = ind_tbl;
- hrxq->tir = tir;
#ifdef HAVE_IBV_FLOW_DV_SUPPORT
hrxq->action = mlx5_glue->dv_create_flow_action_dest_devx_tir
(hrxq->tir->obj);
@@ -796,22 +770,13 @@
goto error;
}
#endif
- hrxq->rss_key_len = rss_key_len;
- hrxq->hash_fields = hash_fields;
- memcpy(hrxq->rss_key, rss_key, rss_key_len);
- rte_atomic32_inc(&hrxq->refcnt);
- ILIST_INSERT(priv->sh->ipool[MLX5_IPOOL_HRXQ], &priv->hrxqs, hrxq_idx,
- hrxq, next);
- return hrxq_idx;
+ return 0;
error:
err = rte_errno; /* Save rte_errno before cleanup. */
- mlx5_ind_table_obj_release(dev, ind_tbl);
- if (tir)
- claim_zero(mlx5_devx_cmd_destroy(tir));
- if (hrxq)
- mlx5_ipool_free(priv->sh->ipool[MLX5_IPOOL_HRXQ], hrxq_idx);
+ if (hrxq->tir)
+ claim_zero(mlx5_devx_cmd_destroy(hrxq->tir));
rte_errno = err; /* Restore rte_errno. */
- return 0;
+ return -rte_errno;
}
/**
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index fa41486..d636c57 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -8949,7 +8949,7 @@ struct field_modify_info modify_tcp[] = {
rss_desc->queue,
rss_desc->queue_num);
if (!hrxq_idx) {
- hrxq_idx = priv->obj_ops->hrxq_new
+ hrxq_idx = mlx5_hrxq_new
(dev, rss_desc->key,
MLX5_RSS_HASH_KEY_LEN,
dev_flow->hash_fields,
diff --git a/drivers/net/mlx5/mlx5_flow_verbs.c b/drivers/net/mlx5/mlx5_flow_verbs.c
index f8edae1..2ce91f7 100644
--- a/drivers/net/mlx5/mlx5_flow_verbs.c
+++ b/drivers/net/mlx5/mlx5_flow_verbs.c
@@ -1986,7 +1986,7 @@
rss_desc->queue,
rss_desc->queue_num);
if (!hrxq_idx)
- hrxq_idx = priv->obj_ops->hrxq_new
+ hrxq_idx = mlx5_hrxq_new
(dev, rss_desc->key,
MLX5_RSS_HASH_KEY_LEN,
dev_flow->hash_fields,
@@ -1995,7 +1995,7 @@
!!(handle->layers &
MLX5_FLOW_LAYER_TUNNEL));
hrxq = mlx5_ipool_get(priv->sh->ipool[MLX5_IPOOL_HRXQ],
- hrxq_idx);
+ hrxq_idx);
if (!hrxq) {
rte_flow_error_set
(error, rte_errno,
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index c353139..234ee28 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -1811,7 +1811,7 @@ struct mlx5_ind_table_obj *
* @return
* The Verbs/DevX object initialized, NULL otherwise and rte_errno is set.
*/
-struct mlx5_ind_table_obj *
+static struct mlx5_ind_table_obj *
mlx5_ind_table_obj_new(struct rte_eth_dev *dev, const uint16_t *queues,
uint32_t queues_n)
{
@@ -1938,6 +1938,74 @@ struct mlx5_ind_table_obj *
}
/**
+ * Create an Rx Hash queue.
+ *
+ * @param dev
+ * Pointer to Ethernet device.
+ * @param rss_key
+ * RSS key for the Rx hash queue.
+ * @param rss_key_len
+ * RSS key length.
+ * @param hash_fields
+ * Verbs protocol hash field to make the RSS on.
+ * @param queues
+ * Queues entering in hash queue. In case of empty hash_fields only the
+ * first queue index will be taken for the indirection table.
+ * @param queues_n
+ * Number of queues.
+ * @param tunnel
+ * Tunnel type.
+ *
+ * @return
+ * The DevX object initialized index, 0 otherwise and rte_errno is set.
+ */
+uint32_t
+mlx5_hrxq_new(struct rte_eth_dev *dev,
+ const uint8_t *rss_key, uint32_t rss_key_len,
+ uint64_t hash_fields,
+ const uint16_t *queues, uint32_t queues_n,
+ int tunnel __rte_unused)
+{
+ struct mlx5_priv *priv = dev->data->dev_private;
+ struct mlx5_hrxq *hrxq = NULL;
+ uint32_t hrxq_idx = 0;
+ struct mlx5_ind_table_obj *ind_tbl;
+ int ret;
+
+ queues_n = hash_fields ? queues_n : 1;
+ ind_tbl = mlx5_ind_table_obj_get(dev, queues, queues_n);
+ if (!ind_tbl)
+ ind_tbl = mlx5_ind_table_obj_new(dev, queues, queues_n);
+ if (!ind_tbl) {
+ rte_errno = ENOMEM;
+ return 0;
+ }
+ hrxq = mlx5_ipool_zmalloc(priv->sh->ipool[MLX5_IPOOL_HRXQ], &hrxq_idx);
+ if (!hrxq)
+ goto error;
+ hrxq->ind_table = ind_tbl;
+ hrxq->rss_key_len = rss_key_len;
+ hrxq->hash_fields = hash_fields;
+ memcpy(hrxq->rss_key, rss_key, rss_key_len);
+ ret = priv->obj_ops->hrxq_new(dev, hrxq, tunnel);
+ if (ret < 0) {
+ rte_errno = errno;
+ goto error;
+ }
+ rte_atomic32_inc(&hrxq->refcnt);
+ ILIST_INSERT(priv->sh->ipool[MLX5_IPOOL_HRXQ], &priv->hrxqs, hrxq_idx,
+ hrxq, next);
+ return hrxq_idx;
+error:
+ ret = rte_errno; /* Save rte_errno before cleanup. */
+ mlx5_ind_table_obj_release(dev, ind_tbl);
+ if (hrxq)
+ mlx5_ipool_free(priv->sh->ipool[MLX5_IPOOL_HRXQ], hrxq_idx);
+ rte_errno = ret; /* Restore rte_errno. */
+ return 0;
+}
+
+/**
* Verify the Rx Queue list is empty
*
* @param dev
diff --git a/drivers/net/mlx5/mlx5_rxtx.h b/drivers/net/mlx5/mlx5_rxtx.h
index 237344f..164f36b 100644
--- a/drivers/net/mlx5/mlx5_rxtx.h
+++ b/drivers/net/mlx5/mlx5_rxtx.h
@@ -365,14 +365,16 @@ struct mlx5_rxq_ctrl *mlx5_rxq_hairpin_new
int mlx5_rxq_verify(struct rte_eth_dev *dev);
int rxq_alloc_elts(struct mlx5_rxq_ctrl *rxq_ctrl);
int mlx5_ind_table_obj_verify(struct rte_eth_dev *dev);
-struct mlx5_ind_table_obj *mlx5_ind_table_obj_new(struct rte_eth_dev *dev,
- const uint16_t *queues,
- uint32_t queues_n);
struct mlx5_ind_table_obj *mlx5_ind_table_obj_get(struct rte_eth_dev *dev,
const uint16_t *queues,
uint32_t queues_n);
int mlx5_ind_table_obj_release(struct rte_eth_dev *dev,
struct mlx5_ind_table_obj *ind_tbl);
+uint32_t mlx5_hrxq_new(struct rte_eth_dev *dev,
+ const uint8_t *rss_key, uint32_t rss_key_len,
+ uint64_t hash_fields,
+ const uint16_t *queues, uint32_t queues_n,
+ int tunnel __rte_unused);
uint32_t mlx5_hrxq_get(struct rte_eth_dev *dev,
const uint8_t *rss_key, uint32_t rss_key_len,
uint64_t hash_fields,
--
1.8.3.1
^ permalink raw reply [flat|nested] 28+ messages in thread
* [dpdk-dev] [PATCH v1 17/18] net/mlx5: separate Rx queue drop
2020-09-03 10:13 [dpdk-dev] [PATCH v1 00/18] mlx5 Rx DevX/Verbs separation Michael Baum
` (15 preceding siblings ...)
2020-09-03 10:13 ` [dpdk-dev] [PATCH v1 16/18] net/mlx5: share Rx hash queue code Michael Baum
@ 2020-09-03 10:13 ` Michael Baum
2020-09-03 10:13 ` [dpdk-dev] [PATCH v1 18/18] net/mlx5: share Rx queue drop action code Michael Baum
` (2 subsequent siblings)
19 siblings, 0 replies; 28+ messages in thread
From: Michael Baum @ 2020-09-03 10:13 UTC (permalink / raw)
To: dev; +Cc: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko
Separate the drop Rx queue creation into both the Verbs and DevX modules.
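Since DevX has no drop-queue support yet, the per-port ops structure becomes a by-value copy so that the DevX table can borrow the two Verbs drop callbacks, as the mlx5_os.c hunk below does. A stand-alone sketch of that selection (invented names, not mlx5 code):

#include <stdio.h>

struct obj_ops {
	const char *(*hrxq_drop_new)(void);
};

static const char *ibv_hrxq_drop_new(void)
{
	return "drop queue via Verbs";
}

static const char *devx_hrxq_drop_new(void)
{
	return "drop queue via DevX (not supported yet)";
}

static const struct obj_ops ibv_obj_ops = {
	.hrxq_drop_new = ibv_hrxq_drop_new,
};
static const struct obj_ops devx_obj_ops = {
	.hrxq_drop_new = devx_hrxq_drop_new,
};

int main(void)
{
	int devx_and_dv = 1;	/* stands for config->devx && config->dv_flow_en */
	struct obj_ops ops;	/* per-port copy, so single fields can be overridden */

	if (devx_and_dv) {
		ops = devx_obj_ops;
		ops.hrxq_drop_new = ibv_obj_ops.hrxq_drop_new;	/* borrow Verbs */
	} else {
		ops = ibv_obj_ops;
	}
	puts(ops.hrxq_drop_new());
	return 0;
}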
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
drivers/net/mlx5/linux/mlx5_os.c | 11 +-
drivers/net/mlx5/linux/mlx5_verbs.c | 252 +++++++++++++++++++++++++++++++++
drivers/net/mlx5/mlx5.h | 4 +-
drivers/net/mlx5/mlx5_devx.c | 34 +++++
drivers/net/mlx5/mlx5_flow_dv.c | 8 +-
drivers/net/mlx5/mlx5_flow_verbs.c | 10 +-
drivers/net/mlx5/mlx5_rxq.c | 271 ++----------------------------------
drivers/net/mlx5/mlx5_trigger.c | 2 +-
drivers/net/mlx5/mlx5_vlan.c | 2 +-
9 files changed, 316 insertions(+), 278 deletions(-)
diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c
index 694fbd3..505e7d9 100644
--- a/drivers/net/mlx5/linux/mlx5_os.c
+++ b/drivers/net/mlx5/linux/mlx5_os.c
@@ -1267,6 +1267,13 @@
goto error;
}
}
+ if (config->devx && config->dv_flow_en) {
+ priv->obj_ops = devx_obj_ops;
+ priv->obj_ops.hrxq_drop_new = ibv_obj_ops.hrxq_drop_new;
+ priv->obj_ops.hrxq_drop_release = ibv_obj_ops.hrxq_drop_release;
+ } else {
+ priv->obj_ops = ibv_obj_ops;
+ }
/* Supported Verbs flow priority number detection. */
err = mlx5_flow_discover_priorities(eth_dev);
if (err < 0) {
@@ -1323,10 +1330,6 @@
goto error;
}
}
- if (config->devx && config->dv_flow_en)
- priv->obj_ops = &devx_obj_ops;
- else
- priv->obj_ops = &ibv_obj_ops;
return eth_dev;
error:
if (priv) {
diff --git a/drivers/net/mlx5/linux/mlx5_verbs.c b/drivers/net/mlx5/linux/mlx5_verbs.c
index 0745da9..0a8ae65 100644
--- a/drivers/net/mlx5/linux/mlx5_verbs.c
+++ b/drivers/net/mlx5/linux/mlx5_verbs.c
@@ -613,6 +613,256 @@
claim_zero(mlx5_glue->destroy_qp(hrxq->qp));
}
+/**
+ * Create a drop Rx queue Verbs object.
+ *
+ * @param dev
+ * Pointer to Ethernet device.
+ *
+ * @return
+ * The Verbs object initialized, NULL otherwise and rte_errno is set.
+ */
+static struct mlx5_rxq_obj *
+mlx5_rxq_obj_drop_new(struct rte_eth_dev *dev)
+{
+ struct mlx5_priv *priv = dev->data->dev_private;
+ struct ibv_context *ctx = priv->sh->ctx;
+ struct ibv_cq *cq;
+ struct ibv_wq *wq = NULL;
+ struct mlx5_rxq_obj *rxq;
+
+ if (priv->drop_queue.rxq)
+ return priv->drop_queue.rxq;
+ cq = mlx5_glue->create_cq(ctx, 1, NULL, NULL, 0);
+ if (!cq) {
+ DEBUG("Port %u cannot allocate CQ for drop queue.",
+ dev->data->port_id);
+ rte_errno = errno;
+ goto error;
+ }
+ wq = mlx5_glue->create_wq(ctx,
+ &(struct ibv_wq_init_attr){
+ .wq_type = IBV_WQT_RQ,
+ .max_wr = 1,
+ .max_sge = 1,
+ .pd = priv->sh->pd,
+ .cq = cq,
+ });
+ if (!wq) {
+ DEBUG("Port %u cannot allocate WQ for drop queue.",
+ dev->data->port_id);
+ rte_errno = errno;
+ goto error;
+ }
+ rxq = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*rxq), 0, SOCKET_ID_ANY);
+ if (!rxq) {
+ DEBUG("Port %u cannot allocate drop Rx queue memory.",
+ dev->data->port_id);
+ rte_errno = ENOMEM;
+ goto error;
+ }
+ rxq->ibv_cq = cq;
+ rxq->wq = wq;
+ priv->drop_queue.rxq = rxq;
+ return rxq;
+error:
+ if (wq)
+ claim_zero(mlx5_glue->destroy_wq(wq));
+ if (cq)
+ claim_zero(mlx5_glue->destroy_cq(cq));
+ return NULL;
+}
+
+/**
+ * Release a drop Rx queue Verbs object.
+ *
+ * @param dev
+ * Pointer to Ethernet device.
+ */
+static void
+mlx5_rxq_obj_drop_release(struct rte_eth_dev *dev)
+{
+ struct mlx5_priv *priv = dev->data->dev_private;
+ struct mlx5_rxq_obj *rxq = priv->drop_queue.rxq;
+
+ if (rxq->wq)
+ claim_zero(mlx5_glue->destroy_wq(rxq->wq));
+ if (rxq->ibv_cq)
+ claim_zero(mlx5_glue->destroy_cq(rxq->ibv_cq));
+ mlx5_free(rxq);
+ priv->drop_queue.rxq = NULL;
+}
+
+/**
+ * Create a drop indirection table.
+ *
+ * @param dev
+ * Pointer to Ethernet device.
+ *
+ * @return
+ * The Verbs object initialized, NULL otherwise and rte_errno is set.
+ */
+static struct mlx5_ind_table_obj *
+mlx5_ind_table_obj_drop_new(struct rte_eth_dev *dev)
+{
+ struct mlx5_priv *priv = dev->data->dev_private;
+ struct mlx5_ind_table_obj *ind_tbl;
+ struct mlx5_rxq_obj *rxq;
+ struct mlx5_ind_table_obj tmpl;
+
+ rxq = mlx5_rxq_obj_drop_new(dev);
+ if (!rxq)
+ return NULL;
+ tmpl.ind_table = mlx5_glue->create_rwq_ind_table
+ (priv->sh->ctx,
+ &(struct ibv_rwq_ind_table_init_attr){
+ .log_ind_tbl_size = 0,
+ .ind_tbl = (struct ibv_wq **)&rxq->wq,
+ .comp_mask = 0,
+ });
+ if (!tmpl.ind_table) {
+ DEBUG("Port %u cannot allocate indirection table for drop"
+ " queue.", dev->data->port_id);
+ rte_errno = errno;
+ goto error;
+ }
+ ind_tbl = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*ind_tbl), 0,
+ SOCKET_ID_ANY);
+ if (!ind_tbl) {
+ rte_errno = ENOMEM;
+ goto error;
+ }
+ ind_tbl->ind_table = tmpl.ind_table;
+ return ind_tbl;
+error:
+ mlx5_rxq_obj_drop_release(dev);
+ return NULL;
+}
+
+/**
+ * Release a drop indirection table.
+ *
+ * @param dev
+ * Pointer to Ethernet device.
+ */
+static void
+mlx5_ind_table_obj_drop_release(struct rte_eth_dev *dev)
+{
+ struct mlx5_priv *priv = dev->data->dev_private;
+ struct mlx5_ind_table_obj *ind_tbl = priv->drop_queue.hrxq->ind_table;
+
+ claim_zero(mlx5_glue->destroy_rwq_ind_table(ind_tbl->ind_table));
+ mlx5_rxq_obj_drop_release(dev);
+ mlx5_free(ind_tbl);
+ priv->drop_queue.hrxq->ind_table = NULL;
+}
+
+/**
+ * Create a drop Rx Hash queue.
+ *
+ * @param dev
+ * Pointer to Ethernet device.
+ *
+ * @return
+ * The Verbs object initialized, NULL otherwise and rte_errno is set.
+ */
+static struct mlx5_hrxq *
+mlx5_ibv_hrxq_drop_new(struct rte_eth_dev *dev)
+{
+ struct mlx5_priv *priv = dev->data->dev_private;
+ struct mlx5_ind_table_obj *ind_tbl = NULL;
+ struct ibv_qp *qp = NULL;
+ struct mlx5_hrxq *hrxq = NULL;
+
+ if (priv->drop_queue.hrxq) {
+ rte_atomic32_inc(&priv->drop_queue.hrxq->refcnt);
+ return priv->drop_queue.hrxq;
+ }
+ hrxq = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*hrxq), 0, SOCKET_ID_ANY);
+ if (!hrxq) {
+ DRV_LOG(WARNING,
+ "Port %u cannot allocate memory for drop queue.",
+ dev->data->port_id);
+ rte_errno = ENOMEM;
+ goto error;
+ }
+ priv->drop_queue.hrxq = hrxq;
+ ind_tbl = mlx5_ind_table_obj_drop_new(dev);
+ if (!ind_tbl)
+ goto error;
+ hrxq->ind_table = ind_tbl;
+ qp = mlx5_glue->create_qp_ex(priv->sh->ctx,
+ &(struct ibv_qp_init_attr_ex){
+ .qp_type = IBV_QPT_RAW_PACKET,
+ .comp_mask =
+ IBV_QP_INIT_ATTR_PD |
+ IBV_QP_INIT_ATTR_IND_TABLE |
+ IBV_QP_INIT_ATTR_RX_HASH,
+ .rx_hash_conf = (struct ibv_rx_hash_conf){
+ .rx_hash_function =
+ IBV_RX_HASH_FUNC_TOEPLITZ,
+ .rx_hash_key_len = MLX5_RSS_HASH_KEY_LEN,
+ .rx_hash_key = rss_hash_default_key,
+ .rx_hash_fields_mask = 0,
+ },
+ .rwq_ind_tbl = ind_tbl->ind_table,
+ .pd = priv->sh->pd
+ });
+ if (!qp) {
+ DEBUG("Port %u cannot allocate QP for drop queue.",
+ dev->data->port_id);
+ rte_errno = errno;
+ goto error;
+ }
+ hrxq->qp = qp;
+#ifdef HAVE_IBV_FLOW_DV_SUPPORT
+ hrxq->action = mlx5_glue->dv_create_flow_action_dest_ibv_qp(hrxq->qp);
+ if (!hrxq->action) {
+ rte_errno = errno;
+ goto error;
+ }
+#endif
+ rte_atomic32_set(&hrxq->refcnt, 1);
+ return hrxq;
+error:
+#ifdef HAVE_IBV_FLOW_DV_SUPPORT
+ if (hrxq && hrxq->action)
+ mlx5_glue->destroy_flow_action(hrxq->action);
+#endif
+ if (qp)
+ claim_zero(mlx5_glue->destroy_qp(hrxq->qp));
+ if (ind_tbl)
+ mlx5_ind_table_obj_drop_release(dev);
+ if (hrxq) {
+ priv->drop_queue.hrxq = NULL;
+ mlx5_free(hrxq);
+ }
+ return NULL;
+}
+
+/**
+ * Release a drop hash Rx queue.
+ *
+ * @param dev
+ * Pointer to Ethernet device.
+ */
+static void
+mlx5_ibv_hrxq_drop_release(struct rte_eth_dev *dev)
+{
+ struct mlx5_priv *priv = dev->data->dev_private;
+ struct mlx5_hrxq *hrxq = priv->drop_queue.hrxq;
+
+ if (rte_atomic32_dec_and_test(&hrxq->refcnt)) {
+#ifdef HAVE_IBV_FLOW_DV_SUPPORT
+ mlx5_glue->destroy_flow_action(hrxq->action);
+#endif
+ claim_zero(mlx5_glue->destroy_qp(hrxq->qp));
+ mlx5_ind_table_obj_drop_release(dev);
+ mlx5_free(hrxq);
+ priv->drop_queue.hrxq = NULL;
+ }
+}
+
struct mlx5_obj_ops ibv_obj_ops = {
.rxq_obj_modify_vlan_strip = mlx5_rxq_obj_modify_wq_vlan_strip,
.rxq_obj_new = mlx5_rxq_ibv_obj_new,
@@ -623,4 +873,6 @@ struct mlx5_obj_ops ibv_obj_ops = {
.ind_table_destroy = mlx5_ibv_ind_table_destroy,
.hrxq_new = mlx5_ibv_hrxq_new,
.hrxq_destroy = mlx5_ibv_qp_destroy,
+ .hrxq_drop_new = mlx5_ibv_hrxq_drop_new,
+ .hrxq_drop_release = mlx5_ibv_hrxq_drop_release,
};
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 579c961..8cef097 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -748,6 +748,8 @@ struct mlx5_obj_ops {
int (*hrxq_new)(struct rte_eth_dev *dev, struct mlx5_hrxq *hrxq,
int tunnel __rte_unused);
void (*hrxq_destroy)(struct mlx5_hrxq *hrxq);
+ struct mlx5_hrxq *(*hrxq_drop_new)(struct rte_eth_dev *dev);
+ void (*hrxq_drop_release)(struct rte_eth_dev *dev);
};
struct mlx5_priv {
@@ -793,7 +795,7 @@ struct mlx5_priv {
void *rss_desc; /* Intermediate rss description resources. */
int flow_idx; /* Intermediate device flow index. */
int flow_nested_idx; /* Intermediate device flow index, nested. */
- struct mlx5_obj_ops *obj_ops; /* HW objects operations. */
+ struct mlx5_obj_ops obj_ops; /* HW objects operations. */
LIST_HEAD(rxq, mlx5_rxq_ctrl) rxqsctrl; /* DPDK Rx queues. */
LIST_HEAD(rxqobj, mlx5_rxq_obj) rxqsobj; /* Verbs/DevX Rx queues. */
uint32_t hrxqs; /* Verbs Hash Rx queues. */
diff --git a/drivers/net/mlx5/mlx5_devx.c b/drivers/net/mlx5/mlx5_devx.c
index cfb9264..ddaab83 100644
--- a/drivers/net/mlx5/mlx5_devx.c
+++ b/drivers/net/mlx5/mlx5_devx.c
@@ -791,6 +791,38 @@
claim_zero(mlx5_devx_cmd_destroy(hrxq->tir));
}
+/**
+ * Create a drop Rx Hash queue.
+ *
+ * @param dev
+ * Pointer to Ethernet device.
+ *
+ * @return
+ * The DevX object initialized, NULL otherwise and rte_errno is set.
+ */
+static struct mlx5_hrxq *
+mlx5_devx_hrxq_drop_new(struct rte_eth_dev *dev)
+{
+ (void)dev;
+ DRV_LOG(ERR, "DevX drop action is not supported yet");
+ rte_errno = ENOTSUP;
+ return NULL;
+}
+
+/**
+ * Release a drop hash Rx queue.
+ *
+ * @param dev
+ * Pointer to Ethernet device.
+ */
+static void
+mlx5_devx_hrxq_drop_release(struct rte_eth_dev *dev)
+{
+ (void)dev;
+ DRV_LOG(ERR, "DevX drop action is not supported yet");
+ rte_errno = ENOTSUP;
+}
+
struct mlx5_obj_ops devx_obj_ops = {
.rxq_obj_modify_vlan_strip = mlx5_rxq_obj_modify_rq_vlan_strip,
.rxq_obj_new = mlx5_rxq_devx_obj_new,
@@ -801,4 +833,6 @@ struct mlx5_obj_ops devx_obj_ops = {
.ind_table_destroy = mlx5_devx_ind_table_destroy,
.hrxq_new = mlx5_devx_hrxq_new,
.hrxq_destroy = mlx5_devx_tir_destroy,
+ .hrxq_drop_new = mlx5_devx_hrxq_drop_new,
+ .hrxq_drop_release = mlx5_devx_hrxq_drop_release,
};
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index d636c57..f953a2d 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -8917,7 +8917,7 @@ struct field_modify_info modify_tcp[] = {
dv->actions[n++] = priv->sh->esw_drop_action;
} else {
struct mlx5_hrxq *drop_hrxq;
- drop_hrxq = mlx5_hrxq_drop_new(dev);
+ drop_hrxq = priv->obj_ops.hrxq_drop_new(dev);
if (!drop_hrxq) {
rte_flow_error_set
(error, errno,
@@ -9013,7 +9013,7 @@ struct field_modify_info modify_tcp[] = {
/* hrxq is union, don't clear it if the flag is not set. */
if (dh->rix_hrxq) {
if (dh->fate_action == MLX5_FLOW_FATE_DROP) {
- mlx5_hrxq_drop_release(dev);
+ priv->obj_ops.hrxq_drop_release(dev);
dh->rix_hrxq = 0;
} else if (dh->fate_action == MLX5_FLOW_FATE_QUEUE) {
mlx5_hrxq_release(dev, dh->rix_hrxq);
@@ -9303,11 +9303,13 @@ struct field_modify_info modify_tcp[] = {
flow_dv_fate_resource_release(struct rte_eth_dev *dev,
struct mlx5_flow_handle *handle)
{
+ struct mlx5_priv *priv = dev->data->dev_private;
+
if (!handle->rix_fate)
return;
switch (handle->fate_action) {
case MLX5_FLOW_FATE_DROP:
- mlx5_hrxq_drop_release(dev);
+ priv->obj_ops.hrxq_drop_release(dev);
break;
case MLX5_FLOW_FATE_QUEUE:
mlx5_hrxq_release(dev, handle->rix_hrxq);
diff --git a/drivers/net/mlx5/mlx5_flow_verbs.c b/drivers/net/mlx5/mlx5_flow_verbs.c
index 2ce91f7..e5fc278 100644
--- a/drivers/net/mlx5/mlx5_flow_verbs.c
+++ b/drivers/net/mlx5/mlx5_flow_verbs.c
@@ -72,7 +72,7 @@
},
};
struct ibv_flow *flow;
- struct mlx5_hrxq *drop = mlx5_hrxq_drop_new(dev);
+ struct mlx5_hrxq *drop = priv->obj_ops.hrxq_drop_new(dev);
uint16_t vprio[] = { 8, 16 };
int i;
int priority = 0;
@@ -89,7 +89,7 @@
claim_zero(mlx5_glue->destroy_flow(flow));
priority = vprio[i];
}
- mlx5_hrxq_drop_release(dev);
+ priv->obj_ops.hrxq_drop_release(dev);
switch (priority) {
case 8:
priority = RTE_DIM(priority_map_3);
@@ -1889,7 +1889,7 @@
/* hrxq is union, don't touch it only the flag is set. */
if (handle->rix_hrxq) {
if (handle->fate_action == MLX5_FLOW_FATE_DROP) {
- mlx5_hrxq_drop_release(dev);
+ priv->obj_ops.hrxq_drop_release(dev);
handle->rix_hrxq = 0;
} else if (handle->fate_action ==
MLX5_FLOW_FATE_QUEUE) {
@@ -1965,7 +1965,7 @@
dev_flow = &((struct mlx5_flow *)priv->inter_flows)[idx];
handle = dev_flow->handle;
if (handle->fate_action == MLX5_FLOW_FATE_DROP) {
- hrxq = mlx5_hrxq_drop_new(dev);
+ hrxq = priv->obj_ops.hrxq_drop_new(dev);
if (!hrxq) {
rte_flow_error_set
(error, errno,
@@ -2034,7 +2034,7 @@
/* hrxq is union, don't touch it only the flag is set. */
if (handle->rix_hrxq) {
if (handle->fate_action == MLX5_FLOW_FATE_DROP) {
- mlx5_hrxq_drop_release(dev);
+ priv->obj_ops.hrxq_drop_release(dev);
handle->rix_hrxq = 0;
} else if (handle->fate_action ==
MLX5_FLOW_FATE_QUEUE) {
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index 234ee28..99b32f6 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -513,7 +513,7 @@
int ret;
MLX5_ASSERT(rte_eal_process_type() == RTE_PROC_PRIMARY);
- ret = priv->obj_ops->rxq_obj_modify(rxq_ctrl->obj, false);
+ ret = priv->obj_ops.rxq_obj_modify(rxq_ctrl->obj, false);
if (ret) {
DRV_LOG(ERR, "Cannot change Rx WQ state to RESET: %s",
strerror(errno));
@@ -612,7 +612,7 @@
/* Reset RQ consumer before moving queue to READY state. */
*rxq->rq_db = rte_cpu_to_be_32(0);
rte_cio_wmb();
- ret = priv->obj_ops->rxq_obj_modify(rxq_ctrl->obj, true);
+ ret = priv->obj_ops.rxq_obj_modify(rxq_ctrl->obj, true);
if (ret) {
DRV_LOG(ERR, "Cannot change Rx WQ state to READY: %s",
strerror(errno));
@@ -1027,7 +1027,7 @@
if (!rxq_ctrl->obj)
goto error;
if (rxq_ctrl->irq) {
- ret = priv->obj_ops->rxq_event_get(rxq_ctrl->obj);
+ ret = priv->obj_ops.rxq_event_get(rxq_ctrl->obj);
if (ret < 0)
goto error;
rxq_ctrl->rxq.cq_arm_sn++;
@@ -1641,7 +1641,7 @@ struct mlx5_rxq_ctrl *
if (!rte_atomic32_dec_and_test(&rxq_ctrl->refcnt))
return 1;
if (rxq_ctrl->obj) {
- priv->obj_ops->rxq_obj_release(rxq_ctrl->obj);
+ priv->obj_ops.rxq_obj_release(rxq_ctrl->obj);
LIST_REMOVE(rxq_ctrl->obj, next);
mlx5_free(rxq_ctrl->obj);
rxq_ctrl->obj = NULL;
@@ -1762,7 +1762,7 @@ struct mlx5_ind_table_obj *
unsigned int i;
if (rte_atomic32_dec_and_test(&ind_tbl->refcnt))
- priv->obj_ops->ind_table_destroy(ind_tbl);
+ priv->obj_ops.ind_table_destroy(ind_tbl);
for (i = 0; i != ind_tbl->queues_n; ++i)
claim_nonzero(mlx5_rxq_release(dev, ind_tbl->queues[i]));
if (!rte_atomic32_read(&ind_tbl->refcnt)) {
@@ -1836,7 +1836,7 @@ struct mlx5_ind_table_obj *
goto error;
ind_tbl->queues[i] = queues[i];
}
- ret = priv->obj_ops->ind_table_new(dev, n, ind_tbl);
+ ret = priv->obj_ops.ind_table_new(dev, n, ind_tbl);
if (ret < 0)
goto error;
rte_atomic32_inc(&ind_tbl->refcnt);
@@ -1926,7 +1926,7 @@ struct mlx5_ind_table_obj *
#ifdef HAVE_IBV_FLOW_DV_SUPPORT
mlx5_glue->destroy_flow_action(hrxq->action);
#endif
- priv->obj_ops->hrxq_destroy(hrxq);
+ priv->obj_ops.hrxq_destroy(hrxq);
mlx5_ind_table_obj_release(dev, hrxq->ind_table);
ILIST_REMOVE(priv->sh->ipool[MLX5_IPOOL_HRXQ], &priv->hrxqs,
hrxq_idx, hrxq, next);
@@ -1987,7 +1987,7 @@ struct mlx5_ind_table_obj *
hrxq->rss_key_len = rss_key_len;
hrxq->hash_fields = hash_fields;
memcpy(hrxq->rss_key, rss_key, rss_key_len);
- ret = priv->obj_ops->hrxq_new(dev, hrxq, tunnel);
+ ret = priv->obj_ops.hrxq_new(dev, hrxq, tunnel);
if (ret < 0) {
rte_errno = errno;
goto error;
@@ -2033,261 +2033,6 @@ struct mlx5_ind_table_obj *
}
/**
- * Create a drop Rx queue Verbs/DevX object.
- *
- * @param dev
- * Pointer to Ethernet device.
- *
- * @return
- * The Verbs/DevX object initialised, NULL otherwise and rte_errno is set.
- */
-static struct mlx5_rxq_obj *
-mlx5_rxq_obj_drop_new(struct rte_eth_dev *dev)
-{
- struct mlx5_priv *priv = dev->data->dev_private;
- struct ibv_context *ctx = priv->sh->ctx;
- struct ibv_cq *cq;
- struct ibv_wq *wq = NULL;
- struct mlx5_rxq_obj *rxq;
-
- if (priv->drop_queue.rxq)
- return priv->drop_queue.rxq;
- cq = mlx5_glue->create_cq(ctx, 1, NULL, NULL, 0);
- if (!cq) {
- DEBUG("port %u cannot allocate CQ for drop queue",
- dev->data->port_id);
- rte_errno = errno;
- goto error;
- }
- wq = mlx5_glue->create_wq(ctx,
- &(struct ibv_wq_init_attr){
- .wq_type = IBV_WQT_RQ,
- .max_wr = 1,
- .max_sge = 1,
- .pd = priv->sh->pd,
- .cq = cq,
- });
- if (!wq) {
- DEBUG("port %u cannot allocate WQ for drop queue",
- dev->data->port_id);
- rte_errno = errno;
- goto error;
- }
- rxq = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*rxq), 0, SOCKET_ID_ANY);
- if (!rxq) {
- DEBUG("port %u cannot allocate drop Rx queue memory",
- dev->data->port_id);
- rte_errno = ENOMEM;
- goto error;
- }
- rxq->ibv_cq = cq;
- rxq->wq = wq;
- priv->drop_queue.rxq = rxq;
- return rxq;
-error:
- if (wq)
- claim_zero(mlx5_glue->destroy_wq(wq));
- if (cq)
- claim_zero(mlx5_glue->destroy_cq(cq));
- return NULL;
-}
-
-/**
- * Release a drop Rx queue Verbs/DevX object.
- *
- * @param dev
- * Pointer to Ethernet device.
- *
- * @return
- * The Verbs/DevX object initialised, NULL otherwise and rte_errno is set.
- */
-static void
-mlx5_rxq_obj_drop_release(struct rte_eth_dev *dev)
-{
- struct mlx5_priv *priv = dev->data->dev_private;
- struct mlx5_rxq_obj *rxq = priv->drop_queue.rxq;
-
- if (rxq->wq)
- claim_zero(mlx5_glue->destroy_wq(rxq->wq));
- if (rxq->ibv_cq)
- claim_zero(mlx5_glue->destroy_cq(rxq->ibv_cq));
- mlx5_free(rxq);
- priv->drop_queue.rxq = NULL;
-}
-
-/**
- * Create a drop indirection table.
- *
- * @param dev
- * Pointer to Ethernet device.
- *
- * @return
- * The Verbs/DevX object initialised, NULL otherwise and rte_errno is set.
- */
-static struct mlx5_ind_table_obj *
-mlx5_ind_table_obj_drop_new(struct rte_eth_dev *dev)
-{
- struct mlx5_priv *priv = dev->data->dev_private;
- struct mlx5_ind_table_obj *ind_tbl;
- struct mlx5_rxq_obj *rxq;
- struct mlx5_ind_table_obj tmpl;
-
- rxq = mlx5_rxq_obj_drop_new(dev);
- if (!rxq)
- return NULL;
- tmpl.ind_table = mlx5_glue->create_rwq_ind_table
- (priv->sh->ctx,
- &(struct ibv_rwq_ind_table_init_attr){
- .log_ind_tbl_size = 0,
- .ind_tbl = (struct ibv_wq **)&rxq->wq,
- .comp_mask = 0,
- });
- if (!tmpl.ind_table) {
- DEBUG("port %u cannot allocate indirection table for drop"
- " queue",
- dev->data->port_id);
- rte_errno = errno;
- goto error;
- }
- ind_tbl = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*ind_tbl), 0,
- SOCKET_ID_ANY);
- if (!ind_tbl) {
- rte_errno = ENOMEM;
- goto error;
- }
- ind_tbl->ind_table = tmpl.ind_table;
- return ind_tbl;
-error:
- mlx5_rxq_obj_drop_release(dev);
- return NULL;
-}
-
-/**
- * Release a drop indirection table.
- *
- * @param dev
- * Pointer to Ethernet device.
- */
-static void
-mlx5_ind_table_obj_drop_release(struct rte_eth_dev *dev)
-{
- struct mlx5_priv *priv = dev->data->dev_private;
- struct mlx5_ind_table_obj *ind_tbl = priv->drop_queue.hrxq->ind_table;
-
- claim_zero(mlx5_glue->destroy_rwq_ind_table(ind_tbl->ind_table));
- mlx5_rxq_obj_drop_release(dev);
- mlx5_free(ind_tbl);
- priv->drop_queue.hrxq->ind_table = NULL;
-}
-
-/**
- * Create a drop Rx Hash queue.
- *
- * @param dev
- * Pointer to Ethernet device.
- *
- * @return
- * The Verbs/DevX object initialised, NULL otherwise and rte_errno is set.
- */
-struct mlx5_hrxq *
-mlx5_hrxq_drop_new(struct rte_eth_dev *dev)
-{
- struct mlx5_priv *priv = dev->data->dev_private;
- struct mlx5_ind_table_obj *ind_tbl = NULL;
- struct ibv_qp *qp = NULL;
- struct mlx5_hrxq *hrxq = NULL;
-
- if (priv->drop_queue.hrxq) {
- rte_atomic32_inc(&priv->drop_queue.hrxq->refcnt);
- return priv->drop_queue.hrxq;
- }
- hrxq = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*hrxq), 0, SOCKET_ID_ANY);
- if (!hrxq) {
- DRV_LOG(WARNING,
- "port %u cannot allocate memory for drop queue",
- dev->data->port_id);
- rte_errno = ENOMEM;
- goto error;
- }
- priv->drop_queue.hrxq = hrxq;
- ind_tbl = mlx5_ind_table_obj_drop_new(dev);
- if (!ind_tbl)
- goto error;
- hrxq->ind_table = ind_tbl;
- qp = mlx5_glue->create_qp_ex(priv->sh->ctx,
- &(struct ibv_qp_init_attr_ex){
- .qp_type = IBV_QPT_RAW_PACKET,
- .comp_mask =
- IBV_QP_INIT_ATTR_PD |
- IBV_QP_INIT_ATTR_IND_TABLE |
- IBV_QP_INIT_ATTR_RX_HASH,
- .rx_hash_conf = (struct ibv_rx_hash_conf){
- .rx_hash_function =
- IBV_RX_HASH_FUNC_TOEPLITZ,
- .rx_hash_key_len = MLX5_RSS_HASH_KEY_LEN,
- .rx_hash_key = rss_hash_default_key,
- .rx_hash_fields_mask = 0,
- },
- .rwq_ind_tbl = ind_tbl->ind_table,
- .pd = priv->sh->pd
- });
- if (!qp) {
- DEBUG("port %u cannot allocate QP for drop queue",
- dev->data->port_id);
- rte_errno = errno;
- goto error;
- }
- hrxq->qp = qp;
-#ifdef HAVE_IBV_FLOW_DV_SUPPORT
- hrxq->action = mlx5_glue->dv_create_flow_action_dest_ibv_qp(hrxq->qp);
- if (!hrxq->action) {
- rte_errno = errno;
- goto error;
- }
-#endif
- rte_atomic32_set(&hrxq->refcnt, 1);
- return hrxq;
-error:
-#ifdef HAVE_IBV_FLOW_DV_SUPPORT
- if (hrxq && hrxq->action)
- mlx5_glue->destroy_flow_action(hrxq->action);
-#endif
- if (qp)
- claim_zero(mlx5_glue->destroy_qp(hrxq->qp));
- if (ind_tbl)
- mlx5_ind_table_obj_drop_release(dev);
- if (hrxq) {
- priv->drop_queue.hrxq = NULL;
- mlx5_free(hrxq);
- }
- return NULL;
-}
-
-/**
- * Release a drop hash Rx queue.
- *
- * @param dev
- * Pointer to Ethernet device.
- */
-void
-mlx5_hrxq_drop_release(struct rte_eth_dev *dev)
-{
- struct mlx5_priv *priv = dev->data->dev_private;
- struct mlx5_hrxq *hrxq = priv->drop_queue.hrxq;
-
- if (rte_atomic32_dec_and_test(&hrxq->refcnt)) {
-#ifdef HAVE_IBV_FLOW_DV_SUPPORT
- mlx5_glue->destroy_flow_action(hrxq->action);
-#endif
- claim_zero(mlx5_glue->destroy_qp(hrxq->qp));
- mlx5_ind_table_obj_drop_release(dev);
- mlx5_free(hrxq);
- priv->drop_queue.hrxq = NULL;
- }
-}
-
-
-/**
* Set the Rx queue timestamp conversion parameters
*
* @param[in] dev
diff --git a/drivers/net/mlx5/mlx5_trigger.c b/drivers/net/mlx5/mlx5_trigger.c
index 43eff93..0f4d031 100644
--- a/drivers/net/mlx5/mlx5_trigger.c
+++ b/drivers/net/mlx5/mlx5_trigger.c
@@ -150,7 +150,7 @@
rte_errno = ENOMEM;
goto error;
}
- ret = priv->obj_ops->rxq_obj_new(dev, i);
+ ret = priv->obj_ops.rxq_obj_new(dev, i);
if (ret) {
mlx5_free(rxq_ctrl->obj);
goto error;
diff --git a/drivers/net/mlx5/mlx5_vlan.c b/drivers/net/mlx5/mlx5_vlan.c
index 4bcd3e2..290503a 100644
--- a/drivers/net/mlx5/mlx5_vlan.c
+++ b/drivers/net/mlx5/mlx5_vlan.c
@@ -114,7 +114,7 @@
rxq->vlan_strip = !!on;
return;
}
- ret = priv->obj_ops->rxq_obj_modify_vlan_strip(rxq_ctrl->obj, on);
+ ret = priv->obj_ops.rxq_obj_modify_vlan_strip(rxq_ctrl->obj, on);
if (ret) {
DRV_LOG(ERR, "port %u failed to modify object %d stripping "
"mode: %s", dev->data->port_id,
--
1.8.3.1
^ permalink raw reply [flat|nested] 28+ messages in thread
* [dpdk-dev] [PATCH v1 18/18] net/mlx5: share Rx queue drop action code
2020-09-03 10:13 [dpdk-dev] [PATCH v1 00/18] mlx5 Rx DevX/Verbs separation Michael Baum
` (16 preceding siblings ...)
2020-09-03 10:13 ` [dpdk-dev] [PATCH v1 17/18] net/mlx5: separate Rx queue drop Michael Baum
@ 2020-09-03 10:13 ` Michael Baum
2020-09-03 14:34 ` [dpdk-dev] [PATCH v1 00/18] mlx5 Rx DevX/Verbs separation Tom Barbette
2020-09-08 11:46 ` Raslan Darawsheh
19 siblings, 0 replies; 28+ messages in thread
From: Michael Baum @ 2020-09-03 10:13 UTC (permalink / raw)
To: dev; +Cc: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko
Move the Rx queue drop action resource allocations that are common to
Verbs and DevX from the Verbs module to a shared location.
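The resulting shared entry point, condensed from the mlx5_rxq.c hunk below
(DRV_LOG and rte_errno handling trimmed, so read it as a sketch rather than
the exact code):

	struct mlx5_hrxq *
	mlx5_drop_action_create(struct rte_eth_dev *dev)
	{
		struct mlx5_priv *priv = dev->data->dev_private;
		struct mlx5_hrxq *hrxq = priv->drop_queue.hrxq;

		if (hrxq) {
			/* Already created - just take another reference. */
			rte_atomic32_inc(&hrxq->refcnt);
			return hrxq;
		}
		/* Allocate the engine-agnostic containers. */
		hrxq = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*hrxq), 0,
				   SOCKET_ID_ANY);
		if (!hrxq)
			return NULL;
		priv->drop_queue.hrxq = hrxq;
		hrxq->ind_table = mlx5_malloc(MLX5_MEM_ZERO,
					      sizeof(*hrxq->ind_table), 0,
					      SOCKET_ID_ANY);
		/* Delegate the Verbs/DevX specific objects to the callback. */
		if (!hrxq->ind_table ||
		    priv->obj_ops.drop_action_create(dev) < 0) {
			mlx5_free(hrxq->ind_table);
			mlx5_free(hrxq);
			priv->drop_queue.hrxq = NULL;
			return NULL;
		}
		rte_atomic32_set(&hrxq->refcnt, 1);
		return hrxq;
	}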
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
drivers/net/mlx5/linux/mlx5_os.c | 6 +-
drivers/net/mlx5/linux/mlx5_verbs.c | 246 ++++++++++++------------------------
drivers/net/mlx5/mlx5.h | 4 +-
drivers/net/mlx5/mlx5_devx.c | 16 +--
drivers/net/mlx5/mlx5_flow_dv.c | 10 +-
drivers/net/mlx5/mlx5_flow_verbs.c | 10 +-
drivers/net/mlx5/mlx5_rxq.c | 72 +++++++++++
drivers/net/mlx5/mlx5_rxtx.h | 4 +-
8 files changed, 180 insertions(+), 188 deletions(-)
diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c
index 505e7d9..ae871fe 100644
--- a/drivers/net/mlx5/linux/mlx5_os.c
+++ b/drivers/net/mlx5/linux/mlx5_os.c
@@ -1269,8 +1269,10 @@
}
if (config->devx && config->dv_flow_en) {
priv->obj_ops = devx_obj_ops;
- priv->obj_ops.hrxq_drop_new = ibv_obj_ops.hrxq_drop_new;
- priv->obj_ops.hrxq_drop_release = ibv_obj_ops.hrxq_drop_release;
+ priv->obj_ops.drop_action_create =
+ ibv_obj_ops.drop_action_create;
+ priv->obj_ops.drop_action_destroy =
+ ibv_obj_ops.drop_action_destroy;
} else {
priv->obj_ops = ibv_obj_ops;
}
diff --git a/drivers/net/mlx5/linux/mlx5_verbs.c b/drivers/net/mlx5/linux/mlx5_verbs.c
index 0a8ae65..d6e670f 100644
--- a/drivers/net/mlx5/linux/mlx5_verbs.c
+++ b/drivers/net/mlx5/linux/mlx5_verbs.c
@@ -614,73 +614,13 @@
}
/**
- * Create a drop Rx queue Verbs object.
- *
- * @param dev
- * Pointer to Ethernet device.
- *
- * @return
- * The Verbs object initialized, NULL otherwise and rte_errno is set.
- */
-static struct mlx5_rxq_obj *
-mlx5_rxq_obj_drop_new(struct rte_eth_dev *dev)
-{
- struct mlx5_priv *priv = dev->data->dev_private;
- struct ibv_context *ctx = priv->sh->ctx;
- struct ibv_cq *cq;
- struct ibv_wq *wq = NULL;
- struct mlx5_rxq_obj *rxq;
-
- if (priv->drop_queue.rxq)
- return priv->drop_queue.rxq;
- cq = mlx5_glue->create_cq(ctx, 1, NULL, NULL, 0);
- if (!cq) {
- DEBUG("Port %u cannot allocate CQ for drop queue.",
- dev->data->port_id);
- rte_errno = errno;
- goto error;
- }
- wq = mlx5_glue->create_wq(ctx,
- &(struct ibv_wq_init_attr){
- .wq_type = IBV_WQT_RQ,
- .max_wr = 1,
- .max_sge = 1,
- .pd = priv->sh->pd,
- .cq = cq,
- });
- if (!wq) {
- DEBUG("Port %u cannot allocate WQ for drop queue.",
- dev->data->port_id);
- rte_errno = errno;
- goto error;
- }
- rxq = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*rxq), 0, SOCKET_ID_ANY);
- if (!rxq) {
- DEBUG("Port %u cannot allocate drop Rx queue memory.",
- dev->data->port_id);
- rte_errno = ENOMEM;
- goto error;
- }
- rxq->ibv_cq = cq;
- rxq->wq = wq;
- priv->drop_queue.rxq = rxq;
- return rxq;
-error:
- if (wq)
- claim_zero(mlx5_glue->destroy_wq(wq));
- if (cq)
- claim_zero(mlx5_glue->destroy_cq(cq));
- return NULL;
-}
-
-/**
* Release a drop Rx queue Verbs object.
*
* @param dev
* Pointer to Ethernet device.
*/
static void
-mlx5_rxq_obj_drop_release(struct rte_eth_dev *dev)
+mlx5_rxq_ibv_obj_drop_release(struct rte_eth_dev *dev)
{
struct mlx5_priv *priv = dev->data->dev_private;
struct mlx5_rxq_obj *rxq = priv->drop_queue.rxq;
@@ -694,127 +634,115 @@
}
/**
- * Create a drop indirection table.
+ * Create a drop Rx queue Verbs object.
*
* @param dev
* Pointer to Ethernet device.
*
* @return
- * The Verbs object initialized, NULL otherwise and rte_errno is set.
+ * 0 on success, a negative errno value otherwise and rte_errno is set.
*/
-static struct mlx5_ind_table_obj *
-mlx5_ind_table_obj_drop_new(struct rte_eth_dev *dev)
+static int
+mlx5_rxq_ibv_obj_drop_create(struct rte_eth_dev *dev)
{
struct mlx5_priv *priv = dev->data->dev_private;
- struct mlx5_ind_table_obj *ind_tbl;
- struct mlx5_rxq_obj *rxq;
- struct mlx5_ind_table_obj tmpl;
-
- rxq = mlx5_rxq_obj_drop_new(dev);
- if (!rxq)
- return NULL;
- tmpl.ind_table = mlx5_glue->create_rwq_ind_table
- (priv->sh->ctx,
- &(struct ibv_rwq_ind_table_init_attr){
- .log_ind_tbl_size = 0,
- .ind_tbl = (struct ibv_wq **)&rxq->wq,
- .comp_mask = 0,
- });
- if (!tmpl.ind_table) {
- DEBUG("Port %u cannot allocate indirection table for drop"
- " queue.", dev->data->port_id);
+ struct ibv_context *ctx = priv->sh->ctx;
+ struct mlx5_rxq_obj *rxq = priv->drop_queue.rxq;
+
+ if (rxq)
+ return 0;
+ rxq = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*rxq), 0, SOCKET_ID_ANY);
+ if (!rxq) {
+ DEBUG("Port %u cannot allocate drop Rx queue memory.",
+ dev->data->port_id);
+ rte_errno = ENOMEM;
+ return -rte_errno;
+ }
+ priv->drop_queue.rxq = rxq;
+ rxq->ibv_cq = mlx5_glue->create_cq(ctx, 1, NULL, NULL, 0);
+ if (!rxq->ibv_cq) {
+ DEBUG("Port %u cannot allocate CQ for drop queue.",
+ dev->data->port_id);
rte_errno = errno;
goto error;
}
- ind_tbl = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*ind_tbl), 0,
- SOCKET_ID_ANY);
- if (!ind_tbl) {
- rte_errno = ENOMEM;
+ rxq->wq = mlx5_glue->create_wq(ctx, &(struct ibv_wq_init_attr){
+ .wq_type = IBV_WQT_RQ,
+ .max_wr = 1,
+ .max_sge = 1,
+ .pd = priv->sh->pd,
+ .cq = rxq->ibv_cq,
+ });
+ if (!rxq->wq) {
+ DEBUG("Port %u cannot allocate WQ for drop queue.",
+ dev->data->port_id);
+ rte_errno = errno;
goto error;
}
- ind_tbl->ind_table = tmpl.ind_table;
- return ind_tbl;
+ priv->drop_queue.rxq = rxq;
+ return 0;
error:
- mlx5_rxq_obj_drop_release(dev);
- return NULL;
-}
-
-/**
- * Release a drop indirection table.
- *
- * @param dev
- * Pointer to Ethernet device.
- */
-static void
-mlx5_ind_table_obj_drop_release(struct rte_eth_dev *dev)
-{
- struct mlx5_priv *priv = dev->data->dev_private;
- struct mlx5_ind_table_obj *ind_tbl = priv->drop_queue.hrxq->ind_table;
-
- claim_zero(mlx5_glue->destroy_rwq_ind_table(ind_tbl->ind_table));
- mlx5_rxq_obj_drop_release(dev);
- mlx5_free(ind_tbl);
- priv->drop_queue.hrxq->ind_table = NULL;
+ mlx5_rxq_ibv_obj_drop_release(dev);
+ return -rte_errno;
}
/**
- * Create a drop Rx Hash queue.
+ * Create a Verbs drop action for Rx Hash queue.
*
* @param dev
* Pointer to Ethernet device.
*
* @return
- * The Verbs object initialized, NULL otherwise and rte_errno is set.
+ * 0 on success, a negative errno value otherwise and rte_errno is set.
*/
-static struct mlx5_hrxq *
-mlx5_ibv_hrxq_drop_new(struct rte_eth_dev *dev)
+static int
+mlx5_ibv_drop_action_create(struct rte_eth_dev *dev)
{
struct mlx5_priv *priv = dev->data->dev_private;
- struct mlx5_ind_table_obj *ind_tbl = NULL;
- struct ibv_qp *qp = NULL;
- struct mlx5_hrxq *hrxq = NULL;
+ struct mlx5_hrxq *hrxq = priv->drop_queue.hrxq;
+ struct ibv_rwq_ind_table *ind_tbl = NULL;
+ struct mlx5_rxq_obj *rxq;
+ int ret;
- if (priv->drop_queue.hrxq) {
- rte_atomic32_inc(&priv->drop_queue.hrxq->refcnt);
- return priv->drop_queue.hrxq;
- }
- hrxq = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*hrxq), 0, SOCKET_ID_ANY);
- if (!hrxq) {
- DRV_LOG(WARNING,
- "Port %u cannot allocate memory for drop queue.",
- dev->data->port_id);
- rte_errno = ENOMEM;
+ MLX5_ASSERT(hrxq && hrxq->ind_table);
+ ret = mlx5_rxq_ibv_obj_drop_create(dev);
+ if (ret < 0)
goto error;
- }
- priv->drop_queue.hrxq = hrxq;
- ind_tbl = mlx5_ind_table_obj_drop_new(dev);
- if (!ind_tbl)
+ rxq = priv->drop_queue.rxq;
+ ind_tbl = mlx5_glue->create_rwq_ind_table
+ (priv->sh->ctx,
+ &(struct ibv_rwq_ind_table_init_attr){
+ .log_ind_tbl_size = 0,
+ .ind_tbl = (struct ibv_wq **)&rxq->wq,
+ .comp_mask = 0,
+ });
+ if (!ind_tbl) {
+ DEBUG("Port %u cannot allocate indirection table for drop"
+ " queue.", dev->data->port_id);
+ rte_errno = errno;
goto error;
- hrxq->ind_table = ind_tbl;
- qp = mlx5_glue->create_qp_ex(priv->sh->ctx,
+ }
+ hrxq->qp = mlx5_glue->create_qp_ex(priv->sh->ctx,
&(struct ibv_qp_init_attr_ex){
.qp_type = IBV_QPT_RAW_PACKET,
- .comp_mask =
- IBV_QP_INIT_ATTR_PD |
- IBV_QP_INIT_ATTR_IND_TABLE |
- IBV_QP_INIT_ATTR_RX_HASH,
+ .comp_mask = IBV_QP_INIT_ATTR_PD |
+ IBV_QP_INIT_ATTR_IND_TABLE |
+ IBV_QP_INIT_ATTR_RX_HASH,
.rx_hash_conf = (struct ibv_rx_hash_conf){
- .rx_hash_function =
- IBV_RX_HASH_FUNC_TOEPLITZ,
+ .rx_hash_function = IBV_RX_HASH_FUNC_TOEPLITZ,
.rx_hash_key_len = MLX5_RSS_HASH_KEY_LEN,
.rx_hash_key = rss_hash_default_key,
.rx_hash_fields_mask = 0,
},
- .rwq_ind_tbl = ind_tbl->ind_table,
+ .rwq_ind_tbl = ind_tbl,
.pd = priv->sh->pd
});
- if (!qp) {
+ if (!hrxq->qp) {
DEBUG("Port %u cannot allocate QP for drop queue.",
dev->data->port_id);
rte_errno = errno;
goto error;
}
- hrxq->qp = qp;
#ifdef HAVE_IBV_FLOW_DV_SUPPORT
hrxq->action = mlx5_glue->dv_create_flow_action_dest_ibv_qp(hrxq->qp);
if (!hrxq->action) {
@@ -822,22 +750,16 @@
goto error;
}
#endif
- rte_atomic32_set(&hrxq->refcnt, 1);
- return hrxq;
+ hrxq->ind_table->ind_table = ind_tbl;
+ return 0;
error:
-#ifdef HAVE_IBV_FLOW_DV_SUPPORT
- if (hrxq && hrxq->action)
- mlx5_glue->destroy_flow_action(hrxq->action);
-#endif
- if (qp)
+ if (hrxq->qp)
claim_zero(mlx5_glue->destroy_qp(hrxq->qp));
if (ind_tbl)
- mlx5_ind_table_obj_drop_release(dev);
- if (hrxq) {
- priv->drop_queue.hrxq = NULL;
- mlx5_free(hrxq);
- }
- return NULL;
+ claim_zero(mlx5_glue->destroy_rwq_ind_table(ind_tbl));
+ if (priv->drop_queue.rxq)
+ mlx5_rxq_ibv_obj_drop_release(dev);
+ return -rte_errno;
}
/**
@@ -847,20 +769,18 @@
* Pointer to Ethernet device.
*/
static void
-mlx5_ibv_hrxq_drop_release(struct rte_eth_dev *dev)
+mlx5_ibv_drop_action_destroy(struct rte_eth_dev *dev)
{
struct mlx5_priv *priv = dev->data->dev_private;
struct mlx5_hrxq *hrxq = priv->drop_queue.hrxq;
+ struct ibv_rwq_ind_table *ind_tbl = hrxq->ind_table->ind_table;
- if (rte_atomic32_dec_and_test(&hrxq->refcnt)) {
#ifdef HAVE_IBV_FLOW_DV_SUPPORT
- mlx5_glue->destroy_flow_action(hrxq->action);
+ claim_zero(mlx5_glue->destroy_flow_action(hrxq->action));
#endif
- claim_zero(mlx5_glue->destroy_qp(hrxq->qp));
- mlx5_ind_table_obj_drop_release(dev);
- mlx5_free(hrxq);
- priv->drop_queue.hrxq = NULL;
- }
+ claim_zero(mlx5_glue->destroy_qp(hrxq->qp));
+ claim_zero(mlx5_glue->destroy_rwq_ind_table(ind_tbl));
+ mlx5_rxq_ibv_obj_drop_release(dev);
}
struct mlx5_obj_ops ibv_obj_ops = {
@@ -873,6 +793,6 @@ struct mlx5_obj_ops ibv_obj_ops = {
.ind_table_destroy = mlx5_ibv_ind_table_destroy,
.hrxq_new = mlx5_ibv_hrxq_new,
.hrxq_destroy = mlx5_ibv_qp_destroy,
- .hrxq_drop_new = mlx5_ibv_hrxq_drop_new,
- .hrxq_drop_release = mlx5_ibv_hrxq_drop_release,
+ .drop_action_create = mlx5_ibv_drop_action_create,
+ .drop_action_destroy = mlx5_ibv_drop_action_destroy,
};
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 8cef097..865e72d 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -748,8 +748,8 @@ struct mlx5_obj_ops {
int (*hrxq_new)(struct rte_eth_dev *dev, struct mlx5_hrxq *hrxq,
int tunnel __rte_unused);
void (*hrxq_destroy)(struct mlx5_hrxq *hrxq);
- struct mlx5_hrxq *(*hrxq_drop_new)(struct rte_eth_dev *dev);
- void (*hrxq_drop_release)(struct rte_eth_dev *dev);
+ int (*drop_action_create)(struct rte_eth_dev *dev);
+ void (*drop_action_destroy)(struct rte_eth_dev *dev);
};
struct mlx5_priv {
diff --git a/drivers/net/mlx5/mlx5_devx.c b/drivers/net/mlx5/mlx5_devx.c
index ddaab83..3e81fcc 100644
--- a/drivers/net/mlx5/mlx5_devx.c
+++ b/drivers/net/mlx5/mlx5_devx.c
@@ -792,21 +792,21 @@
}
/**
- * Create a drop Rx Hash queue.
+ * Create a DevX drop action for Rx Hash queue.
*
* @param dev
* Pointer to Ethernet device.
*
* @return
- * The DevX object initialized, NULL otherwise and rte_errno is set.
+ * 0 on success, a negative errno value otherwise and rte_errno is set.
*/
-static struct mlx5_hrxq *
-mlx5_devx_hrxq_drop_new(struct rte_eth_dev *dev)
+static int
+mlx5_devx_drop_action_create(struct rte_eth_dev *dev)
{
(void)dev;
DRV_LOG(ERR, "DevX drop action is not supported yet");
rte_errno = ENOTSUP;
- return NULL;
+ return -rte_errno;
}
/**
@@ -816,7 +816,7 @@
* Pointer to Ethernet device.
*/
static void
-mlx5_devx_hrxq_drop_release(struct rte_eth_dev *dev)
+mlx5_devx_drop_action_destroy(struct rte_eth_dev *dev)
{
(void)dev;
DRV_LOG(ERR, "DevX drop action is not supported yet");
@@ -833,6 +833,6 @@ struct mlx5_obj_ops devx_obj_ops = {
.ind_table_destroy = mlx5_devx_ind_table_destroy,
.hrxq_new = mlx5_devx_hrxq_new,
.hrxq_destroy = mlx5_devx_tir_destroy,
- .hrxq_drop_new = mlx5_devx_hrxq_drop_new,
- .hrxq_drop_release = mlx5_devx_hrxq_drop_release,
+ .drop_action_create = mlx5_devx_drop_action_create,
+ .drop_action_destroy = mlx5_devx_drop_action_destroy,
};
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index f953a2d..56529c8 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -8917,7 +8917,7 @@ struct field_modify_info modify_tcp[] = {
dv->actions[n++] = priv->sh->esw_drop_action;
} else {
struct mlx5_hrxq *drop_hrxq;
- drop_hrxq = priv->obj_ops.hrxq_drop_new(dev);
+ drop_hrxq = mlx5_drop_action_create(dev);
if (!drop_hrxq) {
rte_flow_error_set
(error, errno,
@@ -8928,7 +8928,7 @@ struct field_modify_info modify_tcp[] = {
}
/*
* Drop queues will be released by the specify
- * mlx5_hrxq_drop_release() function. Assign
+ * mlx5_drop_action_destroy() function. Assign
* the special index to hrxq to mark the queue
* has been allocated.
*/
@@ -9013,7 +9013,7 @@ struct field_modify_info modify_tcp[] = {
/* hrxq is union, don't clear it if the flag is not set. */
if (dh->rix_hrxq) {
if (dh->fate_action == MLX5_FLOW_FATE_DROP) {
- priv->obj_ops.hrxq_drop_release(dev);
+ mlx5_drop_action_destroy(dev);
dh->rix_hrxq = 0;
} else if (dh->fate_action == MLX5_FLOW_FATE_QUEUE) {
mlx5_hrxq_release(dev, dh->rix_hrxq);
@@ -9303,13 +9303,11 @@ struct field_modify_info modify_tcp[] = {
flow_dv_fate_resource_release(struct rte_eth_dev *dev,
struct mlx5_flow_handle *handle)
{
- struct mlx5_priv *priv = dev->data->dev_private;
-
if (!handle->rix_fate)
return;
switch (handle->fate_action) {
case MLX5_FLOW_FATE_DROP:
- priv->obj_ops.hrxq_drop_release(dev);
+ mlx5_drop_action_destroy(dev);
break;
case MLX5_FLOW_FATE_QUEUE:
mlx5_hrxq_release(dev, handle->rix_hrxq);
diff --git a/drivers/net/mlx5/mlx5_flow_verbs.c b/drivers/net/mlx5/mlx5_flow_verbs.c
index e5fc278..62c18b8 100644
--- a/drivers/net/mlx5/mlx5_flow_verbs.c
+++ b/drivers/net/mlx5/mlx5_flow_verbs.c
@@ -72,7 +72,7 @@
},
};
struct ibv_flow *flow;
- struct mlx5_hrxq *drop = priv->obj_ops.hrxq_drop_new(dev);
+ struct mlx5_hrxq *drop = mlx5_drop_action_create(dev);
uint16_t vprio[] = { 8, 16 };
int i;
int priority = 0;
@@ -89,7 +89,7 @@
claim_zero(mlx5_glue->destroy_flow(flow));
priority = vprio[i];
}
- priv->obj_ops.hrxq_drop_release(dev);
+ mlx5_drop_action_destroy(dev);
switch (priority) {
case 8:
priority = RTE_DIM(priority_map_3);
@@ -1889,7 +1889,7 @@
/* hrxq is union, don't touch it only the flag is set. */
if (handle->rix_hrxq) {
if (handle->fate_action == MLX5_FLOW_FATE_DROP) {
- priv->obj_ops.hrxq_drop_release(dev);
+ mlx5_drop_action_destroy(dev);
handle->rix_hrxq = 0;
} else if (handle->fate_action ==
MLX5_FLOW_FATE_QUEUE) {
@@ -1965,7 +1965,7 @@
dev_flow = &((struct mlx5_flow *)priv->inter_flows)[idx];
handle = dev_flow->handle;
if (handle->fate_action == MLX5_FLOW_FATE_DROP) {
- hrxq = priv->obj_ops.hrxq_drop_new(dev);
+ hrxq = mlx5_drop_action_create(dev);
if (!hrxq) {
rte_flow_error_set
(error, errno,
@@ -2034,7 +2034,7 @@
/* hrxq is union, don't touch it only the flag is set. */
if (handle->rix_hrxq) {
if (handle->fate_action == MLX5_FLOW_FATE_DROP) {
- priv->obj_ops.hrxq_drop_release(dev);
+ mlx5_drop_action_destroy(dev);
handle->rix_hrxq = 0;
} else if (handle->fate_action ==
MLX5_FLOW_FATE_QUEUE) {
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index 99b32f6..0b3e813 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -2006,6 +2006,78 @@ struct mlx5_ind_table_obj *
}
/**
+ * Create a drop Rx Hash queue.
+ *
+ * @param dev
+ * Pointer to Ethernet device.
+ *
+ * @return
+ * The Verbs/DevX object initialized, NULL otherwise and rte_errno is set.
+ */
+struct mlx5_hrxq *
+mlx5_drop_action_create(struct rte_eth_dev *dev)
+{
+ struct mlx5_priv *priv = dev->data->dev_private;
+ struct mlx5_hrxq *hrxq = NULL;
+ int ret;
+
+ if (priv->drop_queue.hrxq) {
+ rte_atomic32_inc(&priv->drop_queue.hrxq->refcnt);
+ return priv->drop_queue.hrxq;
+ }
+ hrxq = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*hrxq), 0, SOCKET_ID_ANY);
+ if (!hrxq) {
+ DRV_LOG(WARNING,
+ "Port %u cannot allocate memory for drop queue.",
+ dev->data->port_id);
+ rte_errno = ENOMEM;
+ goto error;
+ }
+ priv->drop_queue.hrxq = hrxq;
+ hrxq->ind_table = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*hrxq->ind_table),
+ 0, SOCKET_ID_ANY);
+ if (!hrxq->ind_table) {
+ rte_errno = ENOMEM;
+ goto error;
+ }
+ ret = priv->obj_ops.drop_action_create(dev);
+ if (ret < 0)
+ goto error;
+ rte_atomic32_set(&hrxq->refcnt, 1);
+ return hrxq;
+error:
+ if (hrxq) {
+ if (hrxq->ind_table)
+ mlx5_free(hrxq->ind_table);
+ priv->drop_queue.hrxq = NULL;
+ mlx5_free(hrxq);
+ }
+ return NULL;
+}
+
+/**
+ * Release a drop hash Rx queue.
+ *
+ * @param dev
+ * Pointer to Ethernet device.
+ */
+void
+mlx5_drop_action_destroy(struct rte_eth_dev *dev)
+{
+ struct mlx5_priv *priv = dev->data->dev_private;
+ struct mlx5_hrxq *hrxq = priv->drop_queue.hrxq;
+
+ if (rte_atomic32_dec_and_test(&hrxq->refcnt)) {
+ priv->obj_ops.drop_action_destroy(dev);
+ mlx5_free(priv->drop_queue.rxq);
+ mlx5_free(hrxq->ind_table);
+ mlx5_free(hrxq);
+ priv->drop_queue.rxq = NULL;
+ priv->drop_queue.hrxq = NULL;
+ }
+}
+
+/**
* Verify the Rx Queue list is empty
*
* @param dev
diff --git a/drivers/net/mlx5/mlx5_rxtx.h b/drivers/net/mlx5/mlx5_rxtx.h
index 164f36b..a8e6837 100644
--- a/drivers/net/mlx5/mlx5_rxtx.h
+++ b/drivers/net/mlx5/mlx5_rxtx.h
@@ -382,8 +382,8 @@ uint32_t mlx5_hrxq_get(struct rte_eth_dev *dev,
int mlx5_hrxq_release(struct rte_eth_dev *dev, uint32_t hxrq_idx);
int mlx5_hrxq_verify(struct rte_eth_dev *dev);
enum mlx5_rxq_type mlx5_rxq_get_type(struct rte_eth_dev *dev, uint16_t idx);
-struct mlx5_hrxq *mlx5_hrxq_drop_new(struct rte_eth_dev *dev);
-void mlx5_hrxq_drop_release(struct rte_eth_dev *dev);
+struct mlx5_hrxq *mlx5_drop_action_create(struct rte_eth_dev *dev);
+void mlx5_drop_action_destroy(struct rte_eth_dev *dev);
uint64_t mlx5_get_rx_port_offloads(void);
uint64_t mlx5_get_rx_queue_offloads(struct rte_eth_dev *dev);
void mlx5_rxq_timestamp_set(struct rte_eth_dev *dev);
--
1.8.3.1
^ permalink raw reply [flat|nested] 28+ messages in thread
* Re: [dpdk-dev] [PATCH v1 00/18] mlx5 Rx DevX/Verbs separation
2020-09-03 10:13 [dpdk-dev] [PATCH v1 00/18] mlx5 Rx DevX/Verbs separation Michael Baum
` (17 preceding siblings ...)
2020-09-03 10:13 ` [dpdk-dev] [PATCH v1 18/18] net/mlx5: share Rx queue drop action code Michael Baum
@ 2020-09-03 14:34 ` Tom Barbette
2020-09-03 20:59 ` Michael Baum
2020-09-08 11:46 ` Raslan Darawsheh
19 siblings, 1 reply; 28+ messages in thread
From: Tom Barbette @ 2020-09-03 14:34 UTC (permalink / raw)
To: Michael Baum, dev; +Cc: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko
Could you provide a cover letter?
Thanks,
Tom
On 03/09/2020 12:13, Michael Baum wrote:
> v1:
> Initial version
>
> Michael Baum (18):
> net/mlx5: fix Rx hash queue creation error flow
> net/mlx5: fix Rx queue state update
> net/mlx5: fix types differentiation in Rxq create
> net/mlx5: mitigate Rx queue reference counters
> net/mlx5: separate Rx queue object creations
> net/mlx5: separate Rx interrupt handling
> net/mlx5: share Rx control code
> net/mlx5: rearrange the creation of RQ and CQ resources
> net/mlx5: rearrange the creation of WQ and CQ object
> net/mlx5: separate Rx queue object modification
> net/mlx5: share Rx queue object modification
> net/mlx5: separate Rx indirection table object creation
> net/mlx5: separate Rx hash queue creation
> net/mlx5: remove indirection table type field
> net/mlx5: share Rx queue indirection table code
> net/mlx5: share Rx hash queue code
> net/mlx5: separate Rx queue drop
> net/mlx5: share Rx queue drop action code
>
> drivers/net/mlx5/Makefile | 1 +
> drivers/net/mlx5/linux/mlx5_os.c | 10 +
> drivers/net/mlx5/linux/mlx5_verbs.c | 707 +++++++++++++
> drivers/net/mlx5/linux/mlx5_verbs.h | 4 +
> drivers/net/mlx5/meson.build | 1 +
> drivers/net/mlx5/mlx5.h | 73 +-
> drivers/net/mlx5/mlx5_devx.c | 792 +++++++++++++-
> drivers/net/mlx5/mlx5_flow_dv.c | 20 +-
> drivers/net/mlx5/mlx5_flow_verbs.c | 35 +-
> drivers/net/mlx5/mlx5_rxq.c | 1934 ++++++-----------------------------
> drivers/net/mlx5/mlx5_rxtx.h | 84 +-
> drivers/net/mlx5/mlx5_trigger.c | 67 +-
> drivers/net/mlx5/mlx5_vlan.c | 2 +-
> 13 files changed, 1954 insertions(+), 1776 deletions(-)
>
^ permalink raw reply [flat|nested] 28+ messages in thread
* Re: [dpdk-dev] [PATCH v1 00/18] mlx5 Rx DevX/Verbs separation
2020-09-03 14:34 ` [dpdk-dev] [PATCH v1 00/18] mlx5 Rx DevX/Verbs separation Tom Barbette
@ 2020-09-03 20:59 ` Michael Baum
2020-09-04 7:30 ` David Marchand
2020-09-04 7:47 ` Thomas Monjalon
0 siblings, 2 replies; 28+ messages in thread
From: Michael Baum @ 2020-09-03 20:59 UTC (permalink / raw)
To: Tom Barbette, dev; +Cc: Matan Azrad, Raslan Darawsheh, Slava Ovsiienko
I think the names of the patches describe the changes well enough and there is no need to add a description in the cover letter.
> -----Original Message-----
> From: Tom Barbette <barbette@kth.se>
> Sent: Thursday, September 3, 2020 5:34 PM
> To: Michael Baum <michaelba@nvidia.com>; dev@dpdk.org
> Cc: Matan Azrad <matan@nvidia.com>; Raslan Darawsheh
> <rasland@nvidia.com>; Slava Ovsiienko <viacheslavo@nvidia.com>
> Subject: Re: [dpdk-dev] [PATCH v1 00/18] mlx5 Rx DevX/Verbs separation
>
> Could you provide a cover letter?
>
> Thanks,
>
> Tom
>
> On 03/09/2020 12:13, Michael Baum wrote:
> > v1:
> > Initial version
> >
> > Michael Baum (18):
> > net/mlx5: fix Rx hash queue creation error flow
> > net/mlx5: fix Rx queue state update
> > net/mlx5: fix types differentiation in Rxq create
> > net/mlx5: mitigate Rx queue reference counters
> > net/mlx5: separate Rx queue object creations
> > net/mlx5: separate Rx interrupt handling
> > net/mlx5: share Rx control code
> > net/mlx5: rearrange the creation of RQ and CQ resources
> > net/mlx5: rearrange the creation of WQ and CQ object
> > net/mlx5: separate Rx queue object modification
> > net/mlx5: share Rx queue object modification
> > net/mlx5: separate Rx indirection table object creation
> > net/mlx5: separate Rx hash queue creation
> > net/mlx5: remove indirection table type field
> > net/mlx5: share Rx queue indirection table code
> > net/mlx5: share Rx hash queue code
> > net/mlx5: separate Rx queue drop
> > net/mlx5: share Rx queue drop action code
> >
> > drivers/net/mlx5/Makefile | 1 +
> > drivers/net/mlx5/linux/mlx5_os.c | 10 +
> > drivers/net/mlx5/linux/mlx5_verbs.c | 707 +++++++++++++
> > drivers/net/mlx5/linux/mlx5_verbs.h | 4 +
> > drivers/net/mlx5/meson.build | 1 +
> > drivers/net/mlx5/mlx5.h | 73 +-
> > drivers/net/mlx5/mlx5_devx.c | 792 +++++++++++++-
> > drivers/net/mlx5/mlx5_flow_dv.c | 20 +-
> > drivers/net/mlx5/mlx5_flow_verbs.c | 35 +-
> > drivers/net/mlx5/mlx5_rxq.c | 1934 ++++++-----------------------------
> > drivers/net/mlx5/mlx5_rxtx.h | 84 +-
> > drivers/net/mlx5/mlx5_trigger.c | 67 +-
> > drivers/net/mlx5/mlx5_vlan.c | 2 +-
> > 13 files changed, 1954 insertions(+), 1776 deletions(-)
> >
^ permalink raw reply [flat|nested] 28+ messages in thread
* Re: [dpdk-dev] [PATCH v1 00/18] mlx5 Rx DevX/Verbs separation
2020-09-03 20:59 ` Michael Baum
@ 2020-09-04 7:30 ` David Marchand
2020-09-04 7:47 ` Thomas Monjalon
1 sibling, 0 replies; 28+ messages in thread
From: David Marchand @ 2020-09-04 7:30 UTC (permalink / raw)
To: Michael Baum
Cc: Tom Barbette, dev, Matan Azrad, Raslan Darawsheh, Slava Ovsiienko
On Thu, Sep 3, 2020 at 11:00 PM Michael Baum <michaelba@nvidia.com> wrote:
>
> I think the names of the patches describe the changes well enough and there is no need to add a description in the cover letter.
It gives no hint at the purpose, the impacts... why should we care
about this separation?
--
David Marchand
^ permalink raw reply [flat|nested] 28+ messages in thread
* Re: [dpdk-dev] [PATCH v1 00/18] mlx5 Rx DevX/Verbs separation
2020-09-03 20:59 ` Michael Baum
2020-09-04 7:30 ` David Marchand
@ 2020-09-04 7:47 ` Thomas Monjalon
2020-09-06 7:32 ` Michael Baum
1 sibling, 1 reply; 28+ messages in thread
From: Thomas Monjalon @ 2020-09-04 7:47 UTC (permalink / raw)
To: Michael Baum
Cc: Tom Barbette, dev, Matan Azrad, Raslan Darawsheh, Slava Ovsiienko
03/09/2020 22:59, Michael Baum:
> I think the names of the patches describe the changes well enough
> and there is no need to add a description in the cover letter.
An introduction giving the general idea,
explaining the reason for writing these changes,
is always appreciated.
PS: Please do not top-post.
> From: Tom Barbette <barbette@kth.se>
> > Could you provide a cover letter?
> >
> > Thanks,
> >
> > Tom
> >
> > On 03/09/2020 12:13, Michael Baum wrote:
> > > v1:
> > > Initial version
> > >
> > > Michael Baum (18):
> > > net/mlx5: fix Rx hash queue creation error flow
> > > net/mlx5: fix Rx queue state update
> > > net/mlx5: fix types differentiation in Rxq create
> > > net/mlx5: mitigate Rx queue reference counters
> > > net/mlx5: separate Rx queue object creations
> > > net/mlx5: separate Rx interrupt handling
> > > net/mlx5: share Rx control code
> > > net/mlx5: rearrange the creation of RQ and CQ resources
> > > net/mlx5: rearrange the creation of WQ and CQ object
> > > net/mlx5: separate Rx queue object modification
> > > net/mlx5: share Rx queue object modification
> > > net/mlx5: separate Rx indirection table object creation
> > > net/mlx5: separate Rx hash queue creation
> > > net/mlx5: remove indirection table type field
> > > net/mlx5: share Rx queue indirection table code
> > > net/mlx5: share Rx hash queue code
> > > net/mlx5: separate Rx queue drop
> > > net/mlx5: share Rx queue drop action code
^ permalink raw reply [flat|nested] 28+ messages in thread
* Re: [dpdk-dev] [PATCH v1 00/18] mlx5 Rx DevX/Verbs separation
2020-09-04 7:47 ` Thomas Monjalon
@ 2020-09-06 7:32 ` Michael Baum
0 siblings, 0 replies; 28+ messages in thread
From: Michael Baum @ 2020-09-06 7:32 UTC (permalink / raw)
To: NBU-Contact-Thomas Monjalon
Cc: Tom Barbette, dev, Matan Azrad, Raslan Darawsheh,
Slava Ovsiienko, David Marchand
From: Thomas Monjalon:
> 03/09/2020 22:59, Michael Baum:
> > I think the names of the patches describe the changes well enough and
> > there is no need to add a description in the cover letter.
>
> An introduction giving the general idea, explaining the reason for writing
> these changes, is always appreciated.
>
> PS: Please do not top-post.
Ok
>
> > From: Tom Barbette <barbette@kth.se>
> > > Could you provide a cover letter?
Yes, the series is a step toward multi-OS support in the net/mlx5 driver: it eases code management for an OS that supports, or does not support, the DevX/Verbs operations.
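To make the intent concrete, here is a condensed view (adapted from the linux/mlx5_os.c hunk in patch 18, not a verbatim quote) of how the engine-specific callbacks are picked once at device spawn, after which the rest of the Rx code only goes through priv->obj_ops:

	/* mlx5_dev_spawn(), simplified: use the DevX ops when available,
	 * but keep borrowing the Verbs drop action until a DevX one exists.
	 */
	if (config->devx && config->dv_flow_en) {
		priv->obj_ops = devx_obj_ops;
		priv->obj_ops.drop_action_create =
						ibv_obj_ops.drop_action_create;
		priv->obj_ops.drop_action_destroy =
						ibv_obj_ops.drop_action_destroy;
	} else {
		priv->obj_ops = ibv_obj_ops;
	}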
> > >
> > > Thanks,
> > >
> > > Tom
> > >
> > > On 03/09/2020 12:13, Michael Baum wrote:
> > > > v1:
> > > > Initial version
> > > >
> > > > Michael Baum (18):
> > > > net/mlx5: fix Rx hash queue creation error flow
> > > > net/mlx5: fix Rx queue state update
> > > > net/mlx5: fix types differentiation in Rxq create
> > > > net/mlx5: mitigate Rx queue reference counters
> > > > net/mlx5: separate Rx queue object creations
> > > > net/mlx5: separate Rx interrupt handling
> > > > net/mlx5: share Rx control code
> > > > net/mlx5: rearrange the creation of RQ and CQ resources
> > > > net/mlx5: rearrange the creation of WQ and CQ object
> > > > net/mlx5: separate Rx queue object modification
> > > > net/mlx5: share Rx queue object modification
> > > > net/mlx5: separate Rx indirection table object creation
> > > > net/mlx5: separate Rx hash queue creation
> > > > net/mlx5: remove indirection table type field
> > > > net/mlx5: share Rx queue indirection table code
> > > > net/mlx5: share Rx hash queue code
> > > > net/mlx5: separate Rx queue drop
> > > > net/mlx5: share Rx queue drop action code
>
>
^ permalink raw reply [flat|nested] 28+ messages in thread
* Re: [dpdk-dev] [PATCH v1 00/18] mlx5 Rx DevX/Verbs separation
2020-09-03 10:13 [dpdk-dev] [PATCH v1 00/18] mlx5 Rx DevX/Verbs separation Michael Baum
` (18 preceding siblings ...)
2020-09-03 14:34 ` [dpdk-dev] [PATCH v1 00/18] mlx5 Rx DevX/Verbs separation Tom Barbette
@ 2020-09-08 11:46 ` Raslan Darawsheh
19 siblings, 0 replies; 28+ messages in thread
From: Raslan Darawsheh @ 2020-09-08 11:46 UTC (permalink / raw)
To: Michael Baum, dev; +Cc: Matan Azrad, Slava Ovsiienko
Hi,
> -----Original Message-----
> From: Michael Baum <michaelba@nvidia.com>
> Sent: Thursday, September 3, 2020 1:14 PM
> To: dev@dpdk.org
> Cc: Matan Azrad <matan@nvidia.com>; Raslan Darawsheh
> <rasland@nvidia.com>; Slava Ovsiienko <viacheslavo@nvidia.com>
> Subject: [PATCH v1 00/18] mlx5 Rx DevX/Verbs separation
>
> v1:
> Initial version
>
> Michael Baum (18):
> net/mlx5: fix Rx hash queue creation error flow
> net/mlx5: fix Rx queue state update
> net/mlx5: fix types differentiation in Rxq create
> net/mlx5: mitigate Rx queue reference counters
> net/mlx5: separate Rx queue object creations
> net/mlx5: separate Rx interrupt handling
> net/mlx5: share Rx control code
> net/mlx5: rearrange the creation of RQ and CQ resources
> net/mlx5: rearrange the creation of WQ and CQ object
> net/mlx5: separate Rx queue object modification
> net/mlx5: share Rx queue object modification
> net/mlx5: separate Rx indirection table object creation
> net/mlx5: separate Rx hash queue creation
> net/mlx5: remove indirection table type field
> net/mlx5: share Rx queue indirection table code
> net/mlx5: share Rx hash queue code
> net/mlx5: separate Rx queue drop
> net/mlx5: share Rx queue drop action code
>
> drivers/net/mlx5/Makefile | 1 +
> drivers/net/mlx5/linux/mlx5_os.c | 10 +
> drivers/net/mlx5/linux/mlx5_verbs.c | 707 +++++++++++++
> drivers/net/mlx5/linux/mlx5_verbs.h | 4 +
> drivers/net/mlx5/meson.build | 1 +
> drivers/net/mlx5/mlx5.h | 73 +-
> drivers/net/mlx5/mlx5_devx.c | 792 +++++++++++++-
> drivers/net/mlx5/mlx5_flow_dv.c | 20 +-
> drivers/net/mlx5/mlx5_flow_verbs.c | 35 +-
> drivers/net/mlx5/mlx5_rxq.c | 1934 ++++++-----------------------------
> drivers/net/mlx5/mlx5_rxtx.h | 84 +-
> drivers/net/mlx5/mlx5_trigger.c | 67 +-
> drivers/net/mlx5/mlx5_vlan.c | 2 +-
> 13 files changed, 1954 insertions(+), 1776 deletions(-)
>
> --
> 1.8.3.1
Series applied to next-net-mlx,
Kindest regards,
Raslan Darawsheh
^ permalink raw reply [flat|nested] 28+ messages in thread