* [dpdk-dev] [PATCH 0/5] mlx5: workaround MR issues in FW\kernel
@ 2021-11-07 15:29 Matan Azrad
2021-11-07 15:29 ` [dpdk-dev] [PATCH 1/5] common/mlx5: glue MR registration with IOVA Matan Azrad
` (5 more replies)
0 siblings, 6 replies; 21+ messages in thread
From: Matan Azrad @ 2021-11-07 15:29 UTC (permalink / raw)
To: Viacheslav Ovsiienko; +Cc: dev, Thomas Monjalon
As a workaround to kernel driver/FW issues of the mlx5 devices, it is needed to use an MR that is an indirect mkey pointing to a direct mkey created by the kernel, for any DevX command that uses an MR.
Fix any direct mkey usage to be configured by the ibv_reg_mr API.
If the direct mkey is for DevX command usage, wrap it with an indirect mkey
to work around the issues.
Matan Azrad (2):
common/mlx5: add wrapped MR create API
vdpa/mlx5: workaround dirty bitmap MR creation
Michael Baum (3):
common/mlx5: glue MR registration with IOVA
vdpa/mlx5: workaround guest MR registrations
net/mlx5: workaround counter memory region creation
drivers/common/mlx5/linux/meson.build | 2 +
drivers/common/mlx5/linux/mlx5_common_os.c | 56 ++++++++++++++++++++++
drivers/common/mlx5/linux/mlx5_glue.c | 18 +++++++
drivers/common/mlx5/linux/mlx5_glue.h | 3 ++
drivers/common/mlx5/mlx5_common.h | 18 +++++++
drivers/common/mlx5/version.map | 3 ++
drivers/net/mlx5/mlx5.c | 8 +---
drivers/net/mlx5/mlx5.h | 5 +-
drivers/net/mlx5/mlx5_flow.c | 25 +++-------
drivers/vdpa/mlx5/mlx5_vdpa.h | 9 ++--
drivers/vdpa/mlx5/mlx5_vdpa_lm.c | 37 +++-----------
drivers/vdpa/mlx5/mlx5_vdpa_mem.c | 43 ++++++-----------
12 files changed, 134 insertions(+), 93 deletions(-)
--
2.25.1
^ permalink raw reply [flat|nested] 21+ messages in thread
* [dpdk-dev] [PATCH 1/5] common/mlx5: glue MR registration with IOVA
2021-11-07 15:29 [dpdk-dev] [PATCH 0/5] mlx5: workaround MR issues in FW\kernel Matan Azrad
@ 2021-11-07 15:29 ` Matan Azrad
2021-11-07 15:29 ` [dpdk-dev] [PATCH 2/5] common/mlx5: add wrapped MR create API Matan Azrad
` (4 subsequent siblings)
5 siblings, 0 replies; 21+ messages in thread
From: Matan Azrad @ 2021-11-07 15:29 UTC (permalink / raw)
To: Viacheslav Ovsiienko; +Cc: dev, Thomas Monjalon, Michael Baum, stable
From: Michael Baum <michaelba@oss.nvidia.com>
Add support for the rdma-core API to register an IOVA MR.
The API gets the process VA, size, and IOVA, and returns a memory region
whose space is addressed by the specified IOVA.
So any access to this MR should use an address relative to the IOVA
specified in the API.
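For illustration only (not part of this patch), a minimal sketch of the
addressing semantics, assuming <infiniband/verbs.h> is included, "pd" is
a protection domain from ibv_alloc_pd(), and the IOVA value is arbitrary:

    /* Register a process buffer so the device addresses it by a chosen IOVA. */
    static uint8_t buf[4096];
    uint64_t iova = 0x100000; /* device-visible base address (example value) */
    struct ibv_mr *mr;

    mr = ibv_reg_mr_iova(pd, buf, sizeof(buf), iova, IBV_ACCESS_LOCAL_WRITE);
    if (mr == NULL)
        return -errno; /* e.g. ENOTSUP when rdma-core lacks ibv_reg_mr_iova */
    /* Any DMA through mr->lkey must use iova + offset, not (uintptr_t)buf. */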
Fixes: cc07a42da250 ("vdpa/mlx5: prepare memory regions")
Cc: stable@dpdk.org
Signed-off-by: Michael Baum <michaelba@oss.nvidia.com>
Signed-off-by: Matan Azrad <matan@nvidia.com>
---
drivers/common/mlx5/linux/meson.build | 2 ++
drivers/common/mlx5/linux/mlx5_glue.c | 18 ++++++++++++++++++
drivers/common/mlx5/linux/mlx5_glue.h | 3 +++
3 files changed, 23 insertions(+)
diff --git a/drivers/common/mlx5/linux/meson.build b/drivers/common/mlx5/linux/meson.build
index 2dcd27b778..7909f23e21 100644
--- a/drivers/common/mlx5/linux/meson.build
+++ b/drivers/common/mlx5/linux/meson.build
@@ -200,6 +200,8 @@ has_sym_args = [
'MLX5DV_DR_ACTION_FLAGS_ASO_CT_DIRECTION_INITIATOR' ],
[ 'HAVE_MLX5_DR_ALLOW_DUPLICATE', 'infiniband/mlx5dv.h',
'mlx5dv_dr_domain_allow_duplicate_rules' ],
+ [ 'HAVE_MLX5_IBV_REG_MR_IOVA', 'infiniband/verbs.h',
+ 'ibv_reg_mr_iova' ],
]
config = configuration_data()
foreach arg:has_sym_args
diff --git a/drivers/common/mlx5/linux/mlx5_glue.c b/drivers/common/mlx5/linux/mlx5_glue.c
index 037ca961a0..bc6622053f 100644
--- a/drivers/common/mlx5/linux/mlx5_glue.c
+++ b/drivers/common/mlx5/linux/mlx5_glue.c
@@ -224,6 +224,23 @@ mlx5_glue_reg_mr(struct ibv_pd *pd, void *addr, size_t length, int access)
return ibv_reg_mr(pd, addr, length, access);
}
+static struct ibv_mr *
+mlx5_glue_reg_mr_iova(struct ibv_pd *pd, void *addr, size_t length,
+ uint64_t iova, int access)
+{
+#ifdef HAVE_MLX5_IBV_REG_MR_IOVA
+ return ibv_reg_mr_iova(pd, addr, length, iova, access);
+#else
+ (void)pd;
+ (void)addr;
+ (void)length;
+ (void)iova;
+ (void)access;
+ errno = ENOTSUP;
+ return NULL;
+#endif
+}
+
static struct ibv_mr *
mlx5_glue_alloc_null_mr(struct ibv_pd *pd)
{
@@ -1412,6 +1429,7 @@ const struct mlx5_glue *mlx5_glue = &(const struct mlx5_glue) {
.destroy_qp = mlx5_glue_destroy_qp,
.modify_qp = mlx5_glue_modify_qp,
.reg_mr = mlx5_glue_reg_mr,
+ .reg_mr_iova = mlx5_glue_reg_mr_iova,
.alloc_null_mr = mlx5_glue_alloc_null_mr,
.dereg_mr = mlx5_glue_dereg_mr,
.create_counter_set = mlx5_glue_create_counter_set,
diff --git a/drivers/common/mlx5/linux/mlx5_glue.h b/drivers/common/mlx5/linux/mlx5_glue.h
index f39ef2dac7..4e6d31f263 100644
--- a/drivers/common/mlx5/linux/mlx5_glue.h
+++ b/drivers/common/mlx5/linux/mlx5_glue.h
@@ -197,6 +197,9 @@ struct mlx5_glue {
int attr_mask);
struct ibv_mr *(*reg_mr)(struct ibv_pd *pd, void *addr,
size_t length, int access);
+ struct ibv_mr *(*reg_mr_iova)(struct ibv_pd *pd, void *addr,
+ size_t length, uint64_t iova,
+ int access);
struct ibv_mr *(*alloc_null_mr)(struct ibv_pd *pd);
int (*dereg_mr)(struct ibv_mr *mr);
struct ibv_counter_set *(*create_counter_set)
--
2.25.1
^ permalink raw reply [flat|nested] 21+ messages in thread
* [dpdk-dev] [PATCH 2/5] common/mlx5: add wrapped MR create API
2021-11-07 15:29 [dpdk-dev] [PATCH 0/5] mlx5: workaround MR issues in FW\kernel Matan Azrad
2021-11-07 15:29 ` [dpdk-dev] [PATCH 1/5] common/mlx5: glue MR registration with IOVA Matan Azrad
@ 2021-11-07 15:29 ` Matan Azrad
2021-11-07 15:29 ` [dpdk-dev] [PATCH 3/5] vdpa/mlx5: workaround dirty bitmap MR creation Matan Azrad
` (3 subsequent siblings)
5 siblings, 0 replies; 21+ messages in thread
From: Matan Azrad @ 2021-11-07 15:29 UTC (permalink / raw)
To: Viacheslav Ovsiienko; +Cc: dev, Thomas Monjalon, stable, Michael Baum
As a workaround to kernel driver/FW issues of the mlx5 devices, it is
needed to use an MR that is an indirect mkey pointing to a direct mkey
created by the kernel, for any DevX command that uses an MR.
Add an API to create and destroy this wrapped MR.
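A minimal usage sketch (not part of the patch; "cdev" stands for the
common device object the PMDs already hold, and "buf"/"size" are a
caller-owned buffer):

    struct mlx5_pmd_wrapped_mr wmr;
    int ret;

    /* Create the kernel direct mkey and wrap it with a DevX indirect mkey;
     * wmr.lkey is the indirect mkey ID accepted by FW DevX commands.
     */
    ret = mlx5_os_wrapped_mkey_create(cdev->ctx, cdev->pd, cdev->pdn,
                                      buf, size, &wmr);
    if (ret != 0)
        return ret; /* negative value on failure */
    /* ... pass wmr.lkey as the mkey parameter of a DevX command ... */
    mlx5_os_wrapped_mkey_destroy(&wmr); /* releases both mkeys */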
Fixes: 5382d28c2110 ("net/mlx5: accelerate DV flow counter transactions")
Fixes: 9d39e57f21ac ("vdpa/mlx5: support live migration")
Cc: stable@dpdk.org
Signed-off-by: Michael Baum <michaelba@oss.nvidia.com>
Signed-off-by: Matan Azrad <matan@nvidia.com>
---
drivers/common/mlx5/linux/mlx5_common_os.c | 56 ++++++++++++++++++++++
drivers/common/mlx5/mlx5_common.h | 18 +++++++
drivers/common/mlx5/version.map | 3 ++
3 files changed, 77 insertions(+)
diff --git a/drivers/common/mlx5/linux/mlx5_common_os.c b/drivers/common/mlx5/linux/mlx5_common_os.c
index b516564b79..0d3e24e04e 100644
--- a/drivers/common/mlx5/linux/mlx5_common_os.c
+++ b/drivers/common/mlx5/linux/mlx5_common_os.c
@@ -744,3 +744,59 @@ mlx5_get_device_guid(const struct rte_pci_addr *dev, uint8_t *guid, size_t len)
fclose(id_file);
return ret;
}
+
+/*
+ * Create direct mkey using the kernel ibv_reg_mr API and wrap it with a new
+ * indirect mkey created by the DevX API.
+ * This mkey should be used for DevX commands requesting mkey as a parameter.
+ */
+int
+mlx5_os_wrapped_mkey_create(void *ctx, void *pd, uint32_t pdn, void *addr,
+ size_t length, struct mlx5_pmd_wrapped_mr *pmd_mr)
+{
+ struct mlx5_klm klm = {
+ .byte_count = length,
+ .address = (uintptr_t)addr,
+ };
+ struct mlx5_devx_mkey_attr mkey_attr = {
+ .pd = pdn,
+ .klm_array = &klm,
+ .klm_num = 1,
+ };
+ struct mlx5_devx_obj *mkey;
+ struct ibv_mr *ibv_mr = mlx5_glue->reg_mr(pd, addr, length,
+ IBV_ACCESS_LOCAL_WRITE |
+ (haswell_broadwell_cpu ? 0 :
+ IBV_ACCESS_RELAXED_ORDERING));
+
+ if (!ibv_mr) {
+ rte_errno = errno;
+ return -rte_errno;
+ }
+ klm.mkey = ibv_mr->lkey;
+ mkey_attr.addr = (uintptr_t)addr;
+ mkey_attr.size = length;
+ mkey = mlx5_devx_cmd_mkey_create(ctx, &mkey_attr);
+ if (!mkey) {
+ claim_zero(mlx5_glue->dereg_mr(ibv_mr));
+ return -rte_errno;
+ }
+ pmd_mr->addr = addr;
+ pmd_mr->len = length;
+ pmd_mr->obj = (void *)ibv_mr;
+ pmd_mr->imkey = mkey;
+ pmd_mr->lkey = mkey->id;
+ return 0;
+}
+
+void
+mlx5_os_wrapped_mkey_destroy(struct mlx5_pmd_wrapped_mr *pmd_mr)
+{
+ if (!pmd_mr)
+ return;
+ if (pmd_mr->imkey)
+ claim_zero(mlx5_devx_cmd_destroy(pmd_mr->imkey));
+ if (pmd_mr->obj)
+ claim_zero(mlx5_glue->dereg_mr(pmd_mr->obj));
+ memset(pmd_mr, 0, sizeof(*pmd_mr));
+}
diff --git a/drivers/common/mlx5/mlx5_common.h b/drivers/common/mlx5/mlx5_common.h
index 744c6a72b3..62109a671a 100644
--- a/drivers/common/mlx5/mlx5_common.h
+++ b/drivers/common/mlx5/mlx5_common.h
@@ -429,4 +429,22 @@ mlx5_mr_mb2mr(struct mlx5_common_device *cdev, struct mlx5_mp_id *mp_id,
int mlx5_os_open_device(struct mlx5_common_device *cdev, uint32_t classes);
int mlx5_os_pd_create(struct mlx5_common_device *cdev);
+/* mlx5 PMD wrapped MR struct. */
+struct mlx5_pmd_wrapped_mr {
+ uint32_t lkey;
+ void *addr;
+ size_t len;
+ void *obj; /* verbs mr object or devx umem object. */
+ void *imkey; /* DevX indirect mkey object. */
+};
+
+__rte_internal
+int
+mlx5_os_wrapped_mkey_create(void *ctx, void *pd, uint32_t pdn, void *addr,
+ size_t length, struct mlx5_pmd_wrapped_mr *pmd_mr);
+
+__rte_internal
+void
+mlx5_os_wrapped_mkey_destroy(struct mlx5_pmd_wrapped_mr *pmd_mr);
+
#endif /* RTE_PMD_MLX5_COMMON_H_ */
diff --git a/drivers/common/mlx5/version.map b/drivers/common/mlx5/version.map
index 6e17a7b8b8..b6c045c110 100644
--- a/drivers/common/mlx5/version.map
+++ b/drivers/common/mlx5/version.map
@@ -133,6 +133,9 @@ INTERNAL {
mlx5_os_umem_dereg;
mlx5_os_umem_reg;
+ mlx5_os_wrapped_mkey_create; # WINDOWS_NO_EXPORT
+ mlx5_os_wrapped_mkey_destroy; # WINDOWS_NO_EXPORT
+
mlx5_realloc;
mlx5_translate_port_name; # WINDOWS_NO_EXPORT
--
2.25.1
^ permalink raw reply [flat|nested] 21+ messages in thread
* [dpdk-dev] [PATCH 3/5] vdpa/mlx5: workaround dirty bitmap MR creation
2021-11-07 15:29 [dpdk-dev] [PATCH 0/5] mlx5: workaround MR issues in FW\kernel Matan Azrad
2021-11-07 15:29 ` [dpdk-dev] [PATCH 1/5] common/mlx5: glue MR registration with IOVA Matan Azrad
2021-11-07 15:29 ` [dpdk-dev] [PATCH 2/5] common/mlx5: add wrapped MR create API Matan Azrad
@ 2021-11-07 15:29 ` Matan Azrad
2021-11-07 15:29 ` [dpdk-dev] [PATCH 4/5] vdpa/mlx5: workaround guest MR registrations Matan Azrad
` (2 subsequent siblings)
5 siblings, 0 replies; 21+ messages in thread
From: Matan Azrad @ 2021-11-07 15:29 UTC (permalink / raw)
To: Viacheslav Ovsiienko; +Cc: dev, Thomas Monjalon, stable, Michael Baum
Due to kernel driver/FW issues in direct MKEY creation using the DevX
API, this patch changes the dirty bitmap MR creation to use a wrapped
mkey instead.
Fixes: 9d39e57f21ac ("vdpa/mlx5: support live migration")
Cc: stable@dpdk.org
Signed-off-by: Michael Baum <michaelba@oss.nvidia.com>
Signed-off-by: Matan Azrad <matan@nvidia.com>
---
drivers/vdpa/mlx5/mlx5_vdpa.h | 1 +
drivers/vdpa/mlx5/mlx5_vdpa_lm.c | 37 ++++++-------------------------
drivers/vdpa/mlx5/mlx5_vdpa_mem.c | 2 ++
3 files changed, 10 insertions(+), 30 deletions(-)
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.h b/drivers/vdpa/mlx5/mlx5_vdpa.h
index a6c9404cb0..3a7cf088b8 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa.h
+++ b/drivers/vdpa/mlx5/mlx5_vdpa.h
@@ -147,6 +147,7 @@ struct mlx5_vdpa_priv {
struct mlx5_vdpa_steer steer;
struct mlx5dv_var *var;
void *virtq_db_addr;
+ struct mlx5_pmd_wrapped_mr lm_mr;
SLIST_HEAD(mr_list, mlx5_vdpa_query_mr) mr_list;
struct mlx5_vdpa_virtq virtqs[];
};
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_lm.c b/drivers/vdpa/mlx5/mlx5_vdpa_lm.c
index 3e8d9eb9a2..45a968bb6a 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa_lm.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa_lm.c
@@ -36,38 +36,21 @@ int
mlx5_vdpa_dirty_bitmap_set(struct mlx5_vdpa_priv *priv, uint64_t log_base,
uint64_t log_size)
{
- struct mlx5_devx_mkey_attr mkey_attr = {
- .addr = (uintptr_t)log_base,
- .size = log_size,
- .pd = priv->cdev->pdn,
- .pg_access = 1,
- };
struct mlx5_devx_virtq_attr attr = {
.type = MLX5_VIRTQ_MODIFY_TYPE_DIRTY_BITMAP_PARAMS,
.dirty_bitmap_addr = log_base,
.dirty_bitmap_size = log_size,
};
- struct mlx5_vdpa_query_mr *mr = rte_malloc(__func__, sizeof(*mr), 0);
int i;
+ int ret = mlx5_os_wrapped_mkey_create(priv->cdev->ctx, priv->cdev->pd,
+ priv->cdev->pdn, (void *)log_base,
+ log_size, &priv->lm_mr);
- if (!mr) {
- DRV_LOG(ERR, "Failed to allocate mem for lm mr.");
+ if (!ret) {
+ DRV_LOG(ERR, "Failed to allocate wrapped MR for lm.");
return -1;
}
- mr->umem = mlx5_glue->devx_umem_reg(priv->cdev->ctx,
- (void *)(uintptr_t)log_base,
- log_size, IBV_ACCESS_LOCAL_WRITE);
- if (!mr->umem) {
- DRV_LOG(ERR, "Failed to register umem for lm mr.");
- goto err;
- }
- mkey_attr.umem_id = mr->umem->umem_id;
- mr->mkey = mlx5_devx_cmd_mkey_create(priv->cdev->ctx, &mkey_attr);
- if (!mr->mkey) {
- DRV_LOG(ERR, "Failed to create Mkey for lm.");
- goto err;
- }
- attr.dirty_bitmap_mkey = mr->mkey->id;
+ attr.dirty_bitmap_mkey = priv->lm_mr.lkey;
for (i = 0; i < priv->nr_virtqs; ++i) {
attr.queue_index = i;
if (!priv->virtqs[i].virtq) {
@@ -78,15 +61,9 @@ mlx5_vdpa_dirty_bitmap_set(struct mlx5_vdpa_priv *priv, uint64_t log_base,
goto err;
}
}
- mr->is_indirect = 0;
- SLIST_INSERT_HEAD(&priv->mr_list, mr, next);
return 0;
err:
- if (mr->mkey)
- mlx5_devx_cmd_destroy(mr->mkey);
- if (mr->umem)
- mlx5_glue->devx_umem_dereg(mr->umem);
- rte_free(mr);
+ mlx5_os_wrapped_mkey_destroy(&priv->lm_mr);
return -1;
}
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_mem.c b/drivers/vdpa/mlx5/mlx5_vdpa_mem.c
index f551a094cd..d7707bbd91 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa_mem.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa_mem.c
@@ -31,6 +31,8 @@ mlx5_vdpa_mem_dereg(struct mlx5_vdpa_priv *priv)
entry = next;
}
SLIST_INIT(&priv->mr_list);
+ if (priv->lm_mr.addr)
+ mlx5_os_wrapped_mkey_destroy(&priv->lm_mr);
if (priv->null_mr) {
claim_zero(mlx5_glue->dereg_mr(priv->null_mr));
priv->null_mr = NULL;
--
2.25.1
^ permalink raw reply [flat|nested] 21+ messages in thread
* [dpdk-dev] [PATCH 4/5] vdpa/mlx5: workaround guest MR registrations
2021-11-07 15:29 [dpdk-dev] [PATCH 0/5] mlx5: workaround MR issues in FW\kernel Matan Azrad
` (2 preceding siblings ...)
2021-11-07 15:29 ` [dpdk-dev] [PATCH 3/5] vdpa/mlx5: workaround dirty bitmap MR creation Matan Azrad
@ 2021-11-07 15:29 ` Matan Azrad
2021-11-07 15:29 ` [dpdk-dev] [PATCH 5/5] net/mlx5: workaround counter memory region creation Matan Azrad
2021-11-08 17:21 ` [dpdk-dev] [PATCH v2 0/5] mlx5: workaround MR issues Matan Azrad
5 siblings, 0 replies; 21+ messages in thread
From: Matan Azrad @ 2021-11-07 15:29 UTC (permalink / raw)
To: Viacheslav Ovsiienko; +Cc: dev, Thomas Monjalon, Michael Baum, stable
From: Michael Baum <michaelba@oss.nvidia.com>
Due to a kernel issue in direct MKEY creation using the DevX API, this
patch changes the virtio MR creation to use the Verbs API.
Fixes: cc07a42da250 ("vdpa/mlx5: prepare memory regions")
Cc: stable@dpdk.org
Signed-off-by: Michael Baum <michaelba@oss.nvidia.com>
Signed-off-by: Matan Azrad <matan@nvidia.com>
---
drivers/vdpa/mlx5/mlx5_vdpa.h | 8 +++---
drivers/vdpa/mlx5/mlx5_vdpa_mem.c | 41 +++++++++----------------------
2 files changed, 16 insertions(+), 33 deletions(-)
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.h b/drivers/vdpa/mlx5/mlx5_vdpa.h
index 3a7cf088b8..f290fb4895 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa.h
+++ b/drivers/vdpa/mlx5/mlx5_vdpa.h
@@ -59,10 +59,10 @@ struct mlx5_vdpa_event_qp {
struct mlx5_vdpa_query_mr {
SLIST_ENTRY(mlx5_vdpa_query_mr) next;
- void *addr;
- uint64_t length;
- struct mlx5dv_devx_umem *umem;
- struct mlx5_devx_obj *mkey;
+ union {
+ struct ibv_mr *mr;
+ struct mlx5_devx_obj *mkey;
+ };
int is_indirect;
};
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_mem.c b/drivers/vdpa/mlx5/mlx5_vdpa_mem.c
index d7707bbd91..b1b9053bff 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa_mem.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa_mem.c
@@ -23,9 +23,10 @@ mlx5_vdpa_mem_dereg(struct mlx5_vdpa_priv *priv)
entry = SLIST_FIRST(&priv->mr_list);
while (entry) {
next = SLIST_NEXT(entry, next);
- claim_zero(mlx5_devx_cmd_destroy(entry->mkey));
- if (!entry->is_indirect)
- claim_zero(mlx5_glue->devx_umem_dereg(entry->umem));
+ if (entry->is_indirect)
+ claim_zero(mlx5_devx_cmd_destroy(entry->mkey));
+ else
+ claim_zero(mlx5_glue->dereg_mr(entry->mr));
SLIST_REMOVE(&priv->mr_list, entry, mlx5_vdpa_query_mr, next);
rte_free(entry);
entry = next;
@@ -202,7 +203,6 @@ mlx5_vdpa_mem_register(struct mlx5_vdpa_priv *priv)
goto error;
}
DRV_LOG(DEBUG, "Dump fill Mkey = %u.", priv->null_mr->lkey);
- memset(&mkey_attr, 0, sizeof(mkey_attr));
for (i = 0; i < mem->nregions; i++) {
reg = &mem->regions[i];
entry = rte_zmalloc(__func__, sizeof(*entry), 0);
@@ -211,28 +211,15 @@ mlx5_vdpa_mem_register(struct mlx5_vdpa_priv *priv)
DRV_LOG(ERR, "Failed to allocate mem entry memory.");
goto error;
}
- entry->umem = mlx5_glue->devx_umem_reg(priv->cdev->ctx,
- (void *)(uintptr_t)reg->host_user_addr,
- reg->size, IBV_ACCESS_LOCAL_WRITE);
- if (!entry->umem) {
- DRV_LOG(ERR, "Failed to register Umem by Devx.");
- ret = -errno;
- goto error;
- }
- mkey_attr.addr = (uintptr_t)(reg->guest_phys_addr);
- mkey_attr.size = reg->size;
- mkey_attr.umem_id = entry->umem->umem_id;
- mkey_attr.pd = priv->cdev->pdn;
- mkey_attr.pg_access = 1;
- entry->mkey = mlx5_devx_cmd_mkey_create(priv->cdev->ctx,
- &mkey_attr);
- if (!entry->mkey) {
+ entry->mr = mlx5_glue->reg_mr_iova(priv->cdev->pd,
+ (void *)(uintptr_t)(reg->host_user_addr),
+ reg->size, reg->guest_phys_addr,
+ IBV_ACCESS_LOCAL_WRITE);
+ if (!entry->mr) {
DRV_LOG(ERR, "Failed to create direct Mkey.");
ret = -rte_errno;
goto error;
}
- entry->addr = (void *)(uintptr_t)(reg->host_user_addr);
- entry->length = reg->size;
entry->is_indirect = 0;
if (i > 0) {
uint64_t sadd;
@@ -262,12 +249,13 @@ mlx5_vdpa_mem_register(struct mlx5_vdpa_priv *priv)
for (k = 0; k < reg->size; k += klm_size) {
klm_array[klm_index].byte_count = k + klm_size >
reg->size ? reg->size - k : klm_size;
- klm_array[klm_index].mkey = entry->mkey->id;
+ klm_array[klm_index].mkey = entry->mr->lkey;
klm_array[klm_index].address = reg->guest_phys_addr + k;
klm_index++;
}
SLIST_INSERT_HEAD(&priv->mr_list, entry, next);
}
+ memset(&mkey_attr, 0, sizeof(mkey_attr));
mkey_attr.addr = (uintptr_t)(mem->regions[0].guest_phys_addr);
mkey_attr.size = mem_size;
mkey_attr.pd = priv->cdev->pdn;
@@ -295,13 +283,8 @@ mlx5_vdpa_mem_register(struct mlx5_vdpa_priv *priv)
priv->gpa_mkey_index = entry->mkey->id;
return 0;
error:
- if (entry) {
- if (entry->mkey)
- mlx5_devx_cmd_destroy(entry->mkey);
- if (entry->umem)
- mlx5_glue->devx_umem_dereg(entry->umem);
+ if (entry)
rte_free(entry);
- }
mlx5_vdpa_mem_dereg(priv);
rte_errno = -ret;
return ret;
--
2.25.1
^ permalink raw reply [flat|nested] 21+ messages in thread
* [dpdk-dev] [PATCH 5/5] net/mlx5: workaround counter memory region creation
2021-11-07 15:29 [dpdk-dev] [PATCH 0/5] mlx5: workaround MR issues in FW\kernel Matan Azrad
` (3 preceding siblings ...)
2021-11-07 15:29 ` [dpdk-dev] [PATCH 4/5] vdpa/mlx5: workaround guest MR registrations Matan Azrad
@ 2021-11-07 15:29 ` Matan Azrad
2021-11-08 17:21 ` [dpdk-dev] [PATCH v2 0/5] mlx5: workaround MR issues Matan Azrad
5 siblings, 0 replies; 21+ messages in thread
From: Matan Azrad @ 2021-11-07 15:29 UTC (permalink / raw)
To: Viacheslav Ovsiienko; +Cc: dev, Thomas Monjalon, Michael Baum, stable
From: Michael Baum <michaelba@nvidia.com>
Due to kernel driver/FW issues in direct MKEY creation using the DevX
API, this patch changes the counter MR creation to use the wrapped mkey
API.
Fixes: 5382d28c2110 ("net/mlx5: accelerate DV flow counter transactions")
Cc: stable@dpdk.org
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Signed-off-by: Matan Azrad <matan@nvidia.com>
---
drivers/net/mlx5/mlx5.c | 8 +-------
drivers/net/mlx5/mlx5.h | 5 +----
drivers/net/mlx5/mlx5_flow.c | 25 ++++++-------------------
3 files changed, 8 insertions(+), 30 deletions(-)
diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index 9c8d1cc76f..da21a30390 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -521,7 +521,6 @@ mlx5_flow_aging_init(struct mlx5_dev_ctx_shared *sh)
static void
mlx5_flow_counters_mng_init(struct mlx5_dev_ctx_shared *sh)
{
- struct mlx5_hca_attr *attr = &sh->cdev->config.hca_attr;
int i;
memset(&sh->cmng, 0, sizeof(sh->cmng));
@@ -534,10 +533,6 @@ mlx5_flow_counters_mng_init(struct mlx5_dev_ctx_shared *sh)
TAILQ_INIT(&sh->cmng.counters[i]);
rte_spinlock_init(&sh->cmng.csl[i]);
}
- if (sh->devx && !haswell_broadwell_cpu) {
- sh->cmng.relaxed_ordering_write = attr->relaxed_ordering_write;
- sh->cmng.relaxed_ordering_read = attr->relaxed_ordering_read;
- }
}
/**
@@ -552,8 +547,7 @@ mlx5_flow_destroy_counter_stat_mem_mng(struct mlx5_counter_stats_mem_mng *mng)
uint8_t *mem = (uint8_t *)(uintptr_t)mng->raws[0].data;
LIST_REMOVE(mng, next);
- claim_zero(mlx5_devx_cmd_destroy(mng->dm));
- claim_zero(mlx5_os_umem_dereg(mng->umem));
+ mlx5_os_wrapped_mkey_destroy(&mng->wm);
mlx5_free(mem);
}
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 9307a4f95b..05f2618aed 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -462,8 +462,7 @@ struct mlx5_flow_counter_pool {
struct mlx5_counter_stats_mem_mng {
LIST_ENTRY(mlx5_counter_stats_mem_mng) next;
struct mlx5_counter_stats_raw *raws;
- struct mlx5_devx_obj *dm;
- void *umem;
+ struct mlx5_pmd_wrapped_mr wm;
};
/* Raw memory structure for the counter statistics values of a pool. */
@@ -494,8 +493,6 @@ struct mlx5_flow_counter_mng {
uint8_t pending_queries;
uint16_t pool_index;
uint8_t query_thread_on;
- bool relaxed_ordering_read;
- bool relaxed_ordering_write;
bool counter_fallback; /* Use counter fallback management. */
LIST_HEAD(mem_mngs, mlx5_counter_stats_mem_mng) mem_mngs;
LIST_HEAD(stat_raws, mlx5_counter_stats_raw) free_stat_raws;
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index 2f30a35525..40625688b0 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -7775,7 +7775,6 @@ mlx5_counter_query(struct rte_eth_dev *dev, uint32_t cnt,
static int
mlx5_flow_create_counter_stat_mem_mng(struct mlx5_dev_ctx_shared *sh)
{
- struct mlx5_devx_mkey_attr mkey_attr;
struct mlx5_counter_stats_mem_mng *mem_mng;
volatile struct flow_counter_stats *raw_data;
int raws_n = MLX5_CNT_CONTAINER_RESIZE + MLX5_MAX_PENDING_QUERIES;
@@ -7785,6 +7784,7 @@ mlx5_flow_create_counter_stat_mem_mng(struct mlx5_dev_ctx_shared *sh)
sizeof(struct mlx5_counter_stats_mem_mng);
size_t pgsize = rte_mem_page_size();
uint8_t *mem;
+ int ret;
int i;
if (pgsize == (size_t)-1) {
@@ -7799,23 +7799,10 @@ mlx5_flow_create_counter_stat_mem_mng(struct mlx5_dev_ctx_shared *sh)
}
mem_mng = (struct mlx5_counter_stats_mem_mng *)(mem + size) - 1;
size = sizeof(*raw_data) * MLX5_COUNTERS_PER_POOL * raws_n;
- mem_mng->umem = mlx5_os_umem_reg(sh->cdev->ctx, mem, size,
- IBV_ACCESS_LOCAL_WRITE);
- if (!mem_mng->umem) {
- rte_errno = errno;
- mlx5_free(mem);
- return -rte_errno;
- }
- memset(&mkey_attr, 0, sizeof(mkey_attr));
- mkey_attr.addr = (uintptr_t)mem;
- mkey_attr.size = size;
- mkey_attr.umem_id = mlx5_os_get_umem_id(mem_mng->umem);
- mkey_attr.pd = sh->cdev->pdn;
- mkey_attr.relaxed_ordering_write = sh->cmng.relaxed_ordering_write;
- mkey_attr.relaxed_ordering_read = sh->cmng.relaxed_ordering_read;
- mem_mng->dm = mlx5_devx_cmd_mkey_create(sh->cdev->ctx, &mkey_attr);
- if (!mem_mng->dm) {
- mlx5_os_umem_dereg(mem_mng->umem);
+ ret = mlx5_os_wrapped_mkey_create(sh->cdev->ctx, sh->cdev->pd,
+ sh->cdev->pdn, mem, size,
+ &mem_mng->wm);
+ if (ret) {
rte_errno = errno;
mlx5_free(mem);
return -rte_errno;
@@ -7934,7 +7921,7 @@ mlx5_flow_query_alarm(void *arg)
ret = mlx5_devx_cmd_flow_counter_query(pool->min_dcs, 0,
MLX5_COUNTERS_PER_POOL,
NULL, NULL,
- pool->raw_hw->mem_mng->dm->id,
+ pool->raw_hw->mem_mng->wm.lkey,
(void *)(uintptr_t)
pool->raw_hw->data,
sh->devx_comp,
--
2.25.1
^ permalink raw reply [flat|nested] 21+ messages in thread
* [dpdk-dev] [PATCH v2 0/5] mlx5: workaround MR issues
2021-11-07 15:29 [dpdk-dev] [PATCH 0/5] mlx5: workaround MR issues in FW\kernel Matan Azrad
` (4 preceding siblings ...)
2021-11-07 15:29 ` [dpdk-dev] [PATCH 5/5] net/mlx5: workaround counter memory region creation Matan Azrad
@ 2021-11-08 17:21 ` Matan Azrad
2021-11-08 17:21 ` [dpdk-dev] [PATCH v2 1/5] common/mlx5: glue MR registration with IOVA Matan Azrad
` (6 more replies)
5 siblings, 7 replies; 21+ messages in thread
From: Matan Azrad @ 2021-11-08 17:21 UTC (permalink / raw)
To: Viacheslav Ovsiienko; +Cc: dev, Thomas Monjalon
The mlx5 PMD uses the kernel mlx5 driver to map physical
memory to the HW.
Using the Verbs API ibv_reg_mr, a mkey can be created for that.
In this case, the mkey is signed on the user ID of the kernel driver.
Using the DevX API, a mkey can also be created, but it should point to an
umem object (representing the specific buffer mapping).
In this case, the mkey is signed on the user ID of the process DevX
context.
In FW DevX control commands which get mkey as a parameter, there is
a security check on the user ID and Verbs mkeys are rejected.
Unfortunately, also when using DevX mkey, there is an error in the FW
command on umem validation because the umem is not designed to be used
for any mkey parameters.
As a workaround to the kernel driver/FW issue, it is needed to use a
wrapped MR, which is an indirect mkey (created by the DevX API) pointing
to a direct mkey created by the kernel, for any DevX command that uses an MR.
Add an API to create and destroy this wrapped MR and use it for any
control DevX command.
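To make the two registration paths above concrete, a rough sketch (not
taken from the patches; error handling omitted, names as used elsewhere
in the mlx5 common code):

    /* Path 1: kernel-owned (Verbs) mkey - rejected by the FW user ID check. */
    struct ibv_mr *mr = mlx5_glue->reg_mr(pd, addr, len,
                                          IBV_ACCESS_LOCAL_WRITE);

    /* Path 2: process-owned (DevX) mkey over a umem - fails umem validation. */
    struct mlx5dv_devx_umem *umem = mlx5_glue->devx_umem_reg(ctx, addr, len,
                                                     IBV_ACCESS_LOCAL_WRITE);
    struct mlx5_devx_mkey_attr attr = {
        .addr = (uintptr_t)addr,
        .size = len,
        .pd = pdn,
        .umem_id = umem->umem_id,
        .pg_access = 1,
    };
    struct mlx5_devx_obj *mkey = mlx5_devx_cmd_mkey_create(ctx, &attr);

    /* Neither mkey alone is usable as a DevX command parameter, hence the
     * wrapped MR (DevX indirect mkey over the Verbs mkey) added by this
     * series.
     */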
v2:
- fix compilation issue on Windows.
- improve logs.
Matan Azrad (2):
common/mlx5: add wrapped MR create API
vdpa/mlx5: workaround dirty bitmap MR creation
Michael Baum (3):
common/mlx5: glue MR registration with IOVA
vdpa/mlx5: workaround guest MR registrations
net/mlx5: workaround MR creation for flow counter
drivers/common/mlx5/linux/meson.build | 2 +
drivers/common/mlx5/linux/mlx5_common_os.c | 56 ++++++++++++++++++++
drivers/common/mlx5/linux/mlx5_glue.c | 18 +++++++
drivers/common/mlx5/linux/mlx5_glue.h | 3 ++
drivers/common/mlx5/mlx5_common.h | 18 +++++++
drivers/common/mlx5/version.map | 3 ++
drivers/common/mlx5/windows/mlx5_common_os.c | 40 ++++++++++++++
drivers/net/mlx5/mlx5.c | 8 +--
drivers/net/mlx5/mlx5.h | 5 +-
drivers/net/mlx5/mlx5_flow.c | 25 +++------
drivers/vdpa/mlx5/mlx5_vdpa.h | 9 ++--
drivers/vdpa/mlx5/mlx5_vdpa_lm.c | 37 +++----------
drivers/vdpa/mlx5/mlx5_vdpa_mem.c | 43 +++++----------
13 files changed, 174 insertions(+), 93 deletions(-)
--
2.25.1
^ permalink raw reply [flat|nested] 21+ messages in thread
* [dpdk-dev] [PATCH v2 1/5] common/mlx5: glue MR registration with IOVA
2021-11-08 17:21 ` [dpdk-dev] [PATCH v2 0/5] mlx5: workaround MR issues Matan Azrad
@ 2021-11-08 17:21 ` Matan Azrad
2021-11-08 17:21 ` [dpdk-dev] [PATCH v2 2/5] common/mlx5: add wrapped MR create API Matan Azrad
` (5 subsequent siblings)
6 siblings, 0 replies; 21+ messages in thread
From: Matan Azrad @ 2021-11-08 17:21 UTC (permalink / raw)
To: Viacheslav Ovsiienko; +Cc: dev, Thomas Monjalon, Michael Baum, stable
From: Michael Baum <michaelba@oss.nvidia.com>
Add support for the rdma-core API to register an IOVA MR.
The API gets the process VA, size, and IOVA, and returns a memory region
whose space is addressed by the specified IOVA.
So any access to this MR should use an address relative to the IOVA
specified in the API.
Fixes: cc07a42da250 ("vdpa/mlx5: prepare memory regions")
Cc: stable@dpdk.org
Signed-off-by: Michael Baum <michaelba@oss.nvidia.com>
Signed-off-by: Matan Azrad <matan@nvidia.com>
---
drivers/common/mlx5/linux/meson.build | 2 ++
drivers/common/mlx5/linux/mlx5_glue.c | 18 ++++++++++++++++++
drivers/common/mlx5/linux/mlx5_glue.h | 3 +++
3 files changed, 23 insertions(+)
diff --git a/drivers/common/mlx5/linux/meson.build b/drivers/common/mlx5/linux/meson.build
index 2dcd27b778..7909f23e21 100644
--- a/drivers/common/mlx5/linux/meson.build
+++ b/drivers/common/mlx5/linux/meson.build
@@ -200,6 +200,8 @@ has_sym_args = [
'MLX5DV_DR_ACTION_FLAGS_ASO_CT_DIRECTION_INITIATOR' ],
[ 'HAVE_MLX5_DR_ALLOW_DUPLICATE', 'infiniband/mlx5dv.h',
'mlx5dv_dr_domain_allow_duplicate_rules' ],
+ [ 'HAVE_MLX5_IBV_REG_MR_IOVA', 'infiniband/verbs.h',
+ 'ibv_reg_mr_iova' ],
]
config = configuration_data()
foreach arg:has_sym_args
diff --git a/drivers/common/mlx5/linux/mlx5_glue.c b/drivers/common/mlx5/linux/mlx5_glue.c
index 037ca961a0..bc6622053f 100644
--- a/drivers/common/mlx5/linux/mlx5_glue.c
+++ b/drivers/common/mlx5/linux/mlx5_glue.c
@@ -224,6 +224,23 @@ mlx5_glue_reg_mr(struct ibv_pd *pd, void *addr, size_t length, int access)
return ibv_reg_mr(pd, addr, length, access);
}
+static struct ibv_mr *
+mlx5_glue_reg_mr_iova(struct ibv_pd *pd, void *addr, size_t length,
+ uint64_t iova, int access)
+{
+#ifdef HAVE_MLX5_IBV_REG_MR_IOVA
+ return ibv_reg_mr_iova(pd, addr, length, iova, access);
+#else
+ (void)pd;
+ (void)addr;
+ (void)length;
+ (void)iova;
+ (void)access;
+ errno = ENOTSUP;
+ return NULL;
+#endif
+}
+
static struct ibv_mr *
mlx5_glue_alloc_null_mr(struct ibv_pd *pd)
{
@@ -1412,6 +1429,7 @@ const struct mlx5_glue *mlx5_glue = &(const struct mlx5_glue) {
.destroy_qp = mlx5_glue_destroy_qp,
.modify_qp = mlx5_glue_modify_qp,
.reg_mr = mlx5_glue_reg_mr,
+ .reg_mr_iova = mlx5_glue_reg_mr_iova,
.alloc_null_mr = mlx5_glue_alloc_null_mr,
.dereg_mr = mlx5_glue_dereg_mr,
.create_counter_set = mlx5_glue_create_counter_set,
diff --git a/drivers/common/mlx5/linux/mlx5_glue.h b/drivers/common/mlx5/linux/mlx5_glue.h
index f39ef2dac7..4e6d31f263 100644
--- a/drivers/common/mlx5/linux/mlx5_glue.h
+++ b/drivers/common/mlx5/linux/mlx5_glue.h
@@ -197,6 +197,9 @@ struct mlx5_glue {
int attr_mask);
struct ibv_mr *(*reg_mr)(struct ibv_pd *pd, void *addr,
size_t length, int access);
+ struct ibv_mr *(*reg_mr_iova)(struct ibv_pd *pd, void *addr,
+ size_t length, uint64_t iova,
+ int access);
struct ibv_mr *(*alloc_null_mr)(struct ibv_pd *pd);
int (*dereg_mr)(struct ibv_mr *mr);
struct ibv_counter_set *(*create_counter_set)
--
2.25.1
^ permalink raw reply [flat|nested] 21+ messages in thread
* [dpdk-dev] [PATCH v2 2/5] common/mlx5: add wrapped MR create API
2021-11-08 17:21 ` [dpdk-dev] [PATCH v2 0/5] mlx5: workaround MR issues Matan Azrad
2021-11-08 17:21 ` [dpdk-dev] [PATCH v2 1/5] common/mlx5: glue MR registration with IOVA Matan Azrad
@ 2021-11-08 17:21 ` Matan Azrad
2021-11-08 17:21 ` [dpdk-dev] [PATCH v2 3/5] vdpa/mlx5: workaround dirty bitmap MR creation Matan Azrad
` (4 subsequent siblings)
6 siblings, 0 replies; 21+ messages in thread
From: Matan Azrad @ 2021-11-08 17:21 UTC (permalink / raw)
To: Viacheslav Ovsiienko; +Cc: dev, Thomas Monjalon, stable, Michael Baum
The mlx5 PMD uses the kernel mlx5 driver to map physical memory to the
HW.
Using the Verbs API ibv_reg_mr, a mkey can be created for that.
In this case, the mkey is signed on the user ID of the kernel driver.
Using the DevX API, a mkey can also be created, but it should point to an
umem object (representing the specific buffer mapping) created by the
kernel. In this case, the mkey is signed on the user ID of the process
DevX context.
In FW DevX control commands which get mkey as a parameter, there is
a security check on the user ID and Verbs mkeys are rejected.
Unfortunately, also when using DevX mkey, there is an error in the FW
command on umem validation because the umem is not designed to be used
for any mkey parameters.
As a workaround to the kernel driver/FW issue, it is needed to use a
wrapped MR, which is an indirect mkey (created by the DevX API) pointing
to a direct mkey created by the kernel, for any DevX command that uses an MR.
Add an API to create and destroy this wrapped MR.
Fixes: 5382d28c2110 ("net/mlx5: accelerate DV flow counter transactions")
Fixes: 9d39e57f21ac ("vdpa/mlx5: support live migration")
Cc: stable@dpdk.org
Signed-off-by: Michael Baum <michaelba@oss.nvidia.com>
Signed-off-by: Matan Azrad <matan@nvidia.com>
---
drivers/common/mlx5/linux/mlx5_common_os.c | 56 ++++++++++++++++++++
drivers/common/mlx5/mlx5_common.h | 18 +++++++
drivers/common/mlx5/version.map | 3 ++
drivers/common/mlx5/windows/mlx5_common_os.c | 40 ++++++++++++++
4 files changed, 117 insertions(+)
diff --git a/drivers/common/mlx5/linux/mlx5_common_os.c b/drivers/common/mlx5/linux/mlx5_common_os.c
index b516564b79..0d3e24e04e 100644
--- a/drivers/common/mlx5/linux/mlx5_common_os.c
+++ b/drivers/common/mlx5/linux/mlx5_common_os.c
@@ -744,3 +744,59 @@ mlx5_get_device_guid(const struct rte_pci_addr *dev, uint8_t *guid, size_t len)
fclose(id_file);
return ret;
}
+
+/*
+ * Create direct mkey using the kernel ibv_reg_mr API and wrap it with a new
+ * indirect mkey created by the DevX API.
+ * This mkey should be used for DevX commands requesting mkey as a parameter.
+ */
+int
+mlx5_os_wrapped_mkey_create(void *ctx, void *pd, uint32_t pdn, void *addr,
+ size_t length, struct mlx5_pmd_wrapped_mr *pmd_mr)
+{
+ struct mlx5_klm klm = {
+ .byte_count = length,
+ .address = (uintptr_t)addr,
+ };
+ struct mlx5_devx_mkey_attr mkey_attr = {
+ .pd = pdn,
+ .klm_array = &klm,
+ .klm_num = 1,
+ };
+ struct mlx5_devx_obj *mkey;
+ struct ibv_mr *ibv_mr = mlx5_glue->reg_mr(pd, addr, length,
+ IBV_ACCESS_LOCAL_WRITE |
+ (haswell_broadwell_cpu ? 0 :
+ IBV_ACCESS_RELAXED_ORDERING));
+
+ if (!ibv_mr) {
+ rte_errno = errno;
+ return -rte_errno;
+ }
+ klm.mkey = ibv_mr->lkey;
+ mkey_attr.addr = (uintptr_t)addr;
+ mkey_attr.size = length;
+ mkey = mlx5_devx_cmd_mkey_create(ctx, &mkey_attr);
+ if (!mkey) {
+ claim_zero(mlx5_glue->dereg_mr(ibv_mr));
+ return -rte_errno;
+ }
+ pmd_mr->addr = addr;
+ pmd_mr->len = length;
+ pmd_mr->obj = (void *)ibv_mr;
+ pmd_mr->imkey = mkey;
+ pmd_mr->lkey = mkey->id;
+ return 0;
+}
+
+void
+mlx5_os_wrapped_mkey_destroy(struct mlx5_pmd_wrapped_mr *pmd_mr)
+{
+ if (!pmd_mr)
+ return;
+ if (pmd_mr->imkey)
+ claim_zero(mlx5_devx_cmd_destroy(pmd_mr->imkey));
+ if (pmd_mr->obj)
+ claim_zero(mlx5_glue->dereg_mr(pmd_mr->obj));
+ memset(pmd_mr, 0, sizeof(*pmd_mr));
+}
diff --git a/drivers/common/mlx5/mlx5_common.h b/drivers/common/mlx5/mlx5_common.h
index 744c6a72b3..62109a671a 100644
--- a/drivers/common/mlx5/mlx5_common.h
+++ b/drivers/common/mlx5/mlx5_common.h
@@ -429,4 +429,22 @@ mlx5_mr_mb2mr(struct mlx5_common_device *cdev, struct mlx5_mp_id *mp_id,
int mlx5_os_open_device(struct mlx5_common_device *cdev, uint32_t classes);
int mlx5_os_pd_create(struct mlx5_common_device *cdev);
+/* mlx5 PMD wrapped MR struct. */
+struct mlx5_pmd_wrapped_mr {
+ uint32_t lkey;
+ void *addr;
+ size_t len;
+ void *obj; /* verbs mr object or devx umem object. */
+ void *imkey; /* DevX indirect mkey object. */
+};
+
+__rte_internal
+int
+mlx5_os_wrapped_mkey_create(void *ctx, void *pd, uint32_t pdn, void *addr,
+ size_t length, struct mlx5_pmd_wrapped_mr *pmd_mr);
+
+__rte_internal
+void
+mlx5_os_wrapped_mkey_destroy(struct mlx5_pmd_wrapped_mr *pmd_mr);
+
#endif /* RTE_PMD_MLX5_COMMON_H_ */
diff --git a/drivers/common/mlx5/version.map b/drivers/common/mlx5/version.map
index 6e17a7b8b8..28d685a7c6 100644
--- a/drivers/common/mlx5/version.map
+++ b/drivers/common/mlx5/version.map
@@ -133,6 +133,9 @@ INTERNAL {
mlx5_os_umem_dereg;
mlx5_os_umem_reg;
+ mlx5_os_wrapped_mkey_create;
+ mlx5_os_wrapped_mkey_destroy;
+
mlx5_realloc;
mlx5_translate_port_name; # WINDOWS_NO_EXPORT
diff --git a/drivers/common/mlx5/windows/mlx5_common_os.c b/drivers/common/mlx5/windows/mlx5_common_os.c
index ea478d7395..0d03344343 100644
--- a/drivers/common/mlx5/windows/mlx5_common_os.c
+++ b/drivers/common/mlx5/windows/mlx5_common_os.c
@@ -390,3 +390,43 @@ mlx5_os_set_reg_mr_cb(mlx5_reg_mr_t *reg_mr_cb, mlx5_dereg_mr_t *dereg_mr_cb)
*reg_mr_cb = mlx5_os_reg_mr;
*dereg_mr_cb = mlx5_os_dereg_mr;
}
+
+/*
+ * In Windows, no need to wrap the MR, no known issue for it in kernel.
+ * Use the regular function to create direct MR.
+ */
+int
+mlx5_os_wrapped_mkey_create(void *ctx, void *pd, uint32_t pdn, void *addr,
+ size_t length, struct mlx5_pmd_wrapped_mr *wpmd_mr)
+{
+ struct mlx5_pmd_mr pmd_mr = {0};
+ int ret = mlx5_os_reg_mr(pd, addr, length, &pmd_mr);
+
+ (void)pdn;
+ (void)ctx;
+ if (ret != 0)
+ return -1;
+ wpmd_mr->addr = addr;
+ wpmd_mr->len = length;
+ wpmd_mr->obj = pmd_mr.obj;
+ wpmd_mr->imkey = pmd_mr.mkey;
+ wpmd_mr->lkey = pmd_mr.mkey->id;
+ return 0;
+}
+
+void
+mlx5_os_wrapped_mkey_destroy(struct mlx5_pmd_wrapped_mr *wpmd_mr)
+{
+ struct mlx5_pmd_mr pmd_mr;
+
+ if (!wpmd_mr)
+ return;
+ pmd_mr.addr = wpmd_mr->addr;
+ pmd_mr.len = wpmd_mr->len;
+ pmd_mr.obj = wpmd_mr->obj;
+ pmd_mr.mkey = wpmd_mr->imkey;
+ pmd_mr.lkey = wpmd_mr->lkey;
+ mlx5_os_dereg_mr(&pmd_mr);
+ memset(wpmd_mr, 0, sizeof(*wpmd_mr));
+}
+
--
2.25.1
^ permalink raw reply [flat|nested] 21+ messages in thread
* [dpdk-dev] [PATCH v2 3/5] vdpa/mlx5: workaround dirty bitmap MR creation
2021-11-08 17:21 ` [dpdk-dev] [PATCH v2 0/5] mlx5: workaround MR issues Matan Azrad
2021-11-08 17:21 ` [dpdk-dev] [PATCH v2 1/5] common/mlx5: glue MR registration with IOVA Matan Azrad
2021-11-08 17:21 ` [dpdk-dev] [PATCH v2 2/5] common/mlx5: add wrapped MR create API Matan Azrad
@ 2021-11-08 17:21 ` Matan Azrad
2021-11-08 19:38 ` [dpdk-dev] [dpdk-stable] " Thomas Monjalon
2021-11-08 17:21 ` [dpdk-dev] [PATCH v2 4/5] vdpa/mlx5: workaround guest MR registrations Matan Azrad
` (3 subsequent siblings)
6 siblings, 1 reply; 21+ messages in thread
From: Matan Azrad @ 2021-11-08 17:21 UTC (permalink / raw)
To: Viacheslav Ovsiienko; +Cc: dev, Thomas Monjalon, stable, Michael Baum
Due to kernel driver/FW issues in direct MKEY creation using the DevX
API, this patch changes the dirty bitmap MR creation to use a wrapped
mkey instead.
Fixes: 9d39e57f21ac ("vdpa/mlx5: support live migration")
Cc: stable@dpdk.org
Signed-off-by: Michael Baum <michaelba@oss.nvidia.com>
Signed-off-by: Matan Azrad <matan@nvidia.com>
---
drivers/vdpa/mlx5/mlx5_vdpa.h | 1 +
drivers/vdpa/mlx5/mlx5_vdpa_lm.c | 37 ++++++-------------------------
drivers/vdpa/mlx5/mlx5_vdpa_mem.c | 2 ++
3 files changed, 10 insertions(+), 30 deletions(-)
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.h b/drivers/vdpa/mlx5/mlx5_vdpa.h
index a6c9404cb0..3a7cf088b8 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa.h
+++ b/drivers/vdpa/mlx5/mlx5_vdpa.h
@@ -147,6 +147,7 @@ struct mlx5_vdpa_priv {
struct mlx5_vdpa_steer steer;
struct mlx5dv_var *var;
void *virtq_db_addr;
+ struct mlx5_pmd_wrapped_mr lm_mr;
SLIST_HEAD(mr_list, mlx5_vdpa_query_mr) mr_list;
struct mlx5_vdpa_virtq virtqs[];
};
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_lm.c b/drivers/vdpa/mlx5/mlx5_vdpa_lm.c
index 3e8d9eb9a2..45a968bb6a 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa_lm.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa_lm.c
@@ -36,38 +36,21 @@ int
mlx5_vdpa_dirty_bitmap_set(struct mlx5_vdpa_priv *priv, uint64_t log_base,
uint64_t log_size)
{
- struct mlx5_devx_mkey_attr mkey_attr = {
- .addr = (uintptr_t)log_base,
- .size = log_size,
- .pd = priv->cdev->pdn,
- .pg_access = 1,
- };
struct mlx5_devx_virtq_attr attr = {
.type = MLX5_VIRTQ_MODIFY_TYPE_DIRTY_BITMAP_PARAMS,
.dirty_bitmap_addr = log_base,
.dirty_bitmap_size = log_size,
};
- struct mlx5_vdpa_query_mr *mr = rte_malloc(__func__, sizeof(*mr), 0);
int i;
+ int ret = mlx5_os_wrapped_mkey_create(priv->cdev->ctx, priv->cdev->pd,
+ priv->cdev->pdn, (void *)log_base,
+ log_size, &priv->lm_mr);
- if (!mr) {
- DRV_LOG(ERR, "Failed to allocate mem for lm mr.");
+ if (!ret) {
+ DRV_LOG(ERR, "Failed to allocate wrapped MR for lm.");
return -1;
}
- mr->umem = mlx5_glue->devx_umem_reg(priv->cdev->ctx,
- (void *)(uintptr_t)log_base,
- log_size, IBV_ACCESS_LOCAL_WRITE);
- if (!mr->umem) {
- DRV_LOG(ERR, "Failed to register umem for lm mr.");
- goto err;
- }
- mkey_attr.umem_id = mr->umem->umem_id;
- mr->mkey = mlx5_devx_cmd_mkey_create(priv->cdev->ctx, &mkey_attr);
- if (!mr->mkey) {
- DRV_LOG(ERR, "Failed to create Mkey for lm.");
- goto err;
- }
- attr.dirty_bitmap_mkey = mr->mkey->id;
+ attr.dirty_bitmap_mkey = priv->lm_mr.lkey;
for (i = 0; i < priv->nr_virtqs; ++i) {
attr.queue_index = i;
if (!priv->virtqs[i].virtq) {
@@ -78,15 +61,9 @@ mlx5_vdpa_dirty_bitmap_set(struct mlx5_vdpa_priv *priv, uint64_t log_base,
goto err;
}
}
- mr->is_indirect = 0;
- SLIST_INSERT_HEAD(&priv->mr_list, mr, next);
return 0;
err:
- if (mr->mkey)
- mlx5_devx_cmd_destroy(mr->mkey);
- if (mr->umem)
- mlx5_glue->devx_umem_dereg(mr->umem);
- rte_free(mr);
+ mlx5_os_wrapped_mkey_destroy(&priv->lm_mr);
return -1;
}
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_mem.c b/drivers/vdpa/mlx5/mlx5_vdpa_mem.c
index f551a094cd..d7707bbd91 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa_mem.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa_mem.c
@@ -31,6 +31,8 @@ mlx5_vdpa_mem_dereg(struct mlx5_vdpa_priv *priv)
entry = next;
}
SLIST_INIT(&priv->mr_list);
+ if (priv->lm_mr.addr)
+ mlx5_os_wrapped_mkey_destroy(&priv->lm_mr);
if (priv->null_mr) {
claim_zero(mlx5_glue->dereg_mr(priv->null_mr));
priv->null_mr = NULL;
--
2.25.1
^ permalink raw reply [flat|nested] 21+ messages in thread
* [dpdk-dev] [PATCH v2 4/5] vdpa/mlx5: workaround guest MR registrations
2021-11-08 17:21 ` [dpdk-dev] [PATCH v2 0/5] mlx5: workaround MR issues Matan Azrad
` (2 preceding siblings ...)
2021-11-08 17:21 ` [dpdk-dev] [PATCH v2 3/5] vdpa/mlx5: workaround dirty bitmap MR creation Matan Azrad
@ 2021-11-08 17:21 ` Matan Azrad
2021-11-08 17:21 ` [dpdk-dev] [PATCH v2 5/5] net/mlx5: workaround MR creation for flow counter Matan Azrad
` (2 subsequent siblings)
6 siblings, 0 replies; 21+ messages in thread
From: Matan Azrad @ 2021-11-08 17:21 UTC (permalink / raw)
To: Viacheslav Ovsiienko; +Cc: dev, Thomas Monjalon, Michael Baum, stable
From: Michael Baum <michaelba@oss.nvidia.com>
Due to a kernel issue in direct MKEY creation using the DevX API, this
patch changes the virtio MR creation to use the Verbs API.
Fixes: cc07a42da250 ("vdpa/mlx5: prepare memory regions")
Cc: stable@dpdk.org
Signed-off-by: Michael Baum <michaelba@oss.nvidia.com>
Signed-off-by: Matan Azrad <matan@nvidia.com>
---
drivers/vdpa/mlx5/mlx5_vdpa.h | 8 +++---
drivers/vdpa/mlx5/mlx5_vdpa_mem.c | 41 +++++++++----------------------
2 files changed, 16 insertions(+), 33 deletions(-)
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.h b/drivers/vdpa/mlx5/mlx5_vdpa.h
index 3a7cf088b8..f290fb4895 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa.h
+++ b/drivers/vdpa/mlx5/mlx5_vdpa.h
@@ -59,10 +59,10 @@ struct mlx5_vdpa_event_qp {
struct mlx5_vdpa_query_mr {
SLIST_ENTRY(mlx5_vdpa_query_mr) next;
- void *addr;
- uint64_t length;
- struct mlx5dv_devx_umem *umem;
- struct mlx5_devx_obj *mkey;
+ union {
+ struct ibv_mr *mr;
+ struct mlx5_devx_obj *mkey;
+ };
int is_indirect;
};
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_mem.c b/drivers/vdpa/mlx5/mlx5_vdpa_mem.c
index d7707bbd91..b1b9053bff 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa_mem.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa_mem.c
@@ -23,9 +23,10 @@ mlx5_vdpa_mem_dereg(struct mlx5_vdpa_priv *priv)
entry = SLIST_FIRST(&priv->mr_list);
while (entry) {
next = SLIST_NEXT(entry, next);
- claim_zero(mlx5_devx_cmd_destroy(entry->mkey));
- if (!entry->is_indirect)
- claim_zero(mlx5_glue->devx_umem_dereg(entry->umem));
+ if (entry->is_indirect)
+ claim_zero(mlx5_devx_cmd_destroy(entry->mkey));
+ else
+ claim_zero(mlx5_glue->dereg_mr(entry->mr));
SLIST_REMOVE(&priv->mr_list, entry, mlx5_vdpa_query_mr, next);
rte_free(entry);
entry = next;
@@ -202,7 +203,6 @@ mlx5_vdpa_mem_register(struct mlx5_vdpa_priv *priv)
goto error;
}
DRV_LOG(DEBUG, "Dump fill Mkey = %u.", priv->null_mr->lkey);
- memset(&mkey_attr, 0, sizeof(mkey_attr));
for (i = 0; i < mem->nregions; i++) {
reg = &mem->regions[i];
entry = rte_zmalloc(__func__, sizeof(*entry), 0);
@@ -211,28 +211,15 @@ mlx5_vdpa_mem_register(struct mlx5_vdpa_priv *priv)
DRV_LOG(ERR, "Failed to allocate mem entry memory.");
goto error;
}
- entry->umem = mlx5_glue->devx_umem_reg(priv->cdev->ctx,
- (void *)(uintptr_t)reg->host_user_addr,
- reg->size, IBV_ACCESS_LOCAL_WRITE);
- if (!entry->umem) {
- DRV_LOG(ERR, "Failed to register Umem by Devx.");
- ret = -errno;
- goto error;
- }
- mkey_attr.addr = (uintptr_t)(reg->guest_phys_addr);
- mkey_attr.size = reg->size;
- mkey_attr.umem_id = entry->umem->umem_id;
- mkey_attr.pd = priv->cdev->pdn;
- mkey_attr.pg_access = 1;
- entry->mkey = mlx5_devx_cmd_mkey_create(priv->cdev->ctx,
- &mkey_attr);
- if (!entry->mkey) {
+ entry->mr = mlx5_glue->reg_mr_iova(priv->cdev->pd,
+ (void *)(uintptr_t)(reg->host_user_addr),
+ reg->size, reg->guest_phys_addr,
+ IBV_ACCESS_LOCAL_WRITE);
+ if (!entry->mr) {
DRV_LOG(ERR, "Failed to create direct Mkey.");
ret = -rte_errno;
goto error;
}
- entry->addr = (void *)(uintptr_t)(reg->host_user_addr);
- entry->length = reg->size;
entry->is_indirect = 0;
if (i > 0) {
uint64_t sadd;
@@ -262,12 +249,13 @@ mlx5_vdpa_mem_register(struct mlx5_vdpa_priv *priv)
for (k = 0; k < reg->size; k += klm_size) {
klm_array[klm_index].byte_count = k + klm_size >
reg->size ? reg->size - k : klm_size;
- klm_array[klm_index].mkey = entry->mkey->id;
+ klm_array[klm_index].mkey = entry->mr->lkey;
klm_array[klm_index].address = reg->guest_phys_addr + k;
klm_index++;
}
SLIST_INSERT_HEAD(&priv->mr_list, entry, next);
}
+ memset(&mkey_attr, 0, sizeof(mkey_attr));
mkey_attr.addr = (uintptr_t)(mem->regions[0].guest_phys_addr);
mkey_attr.size = mem_size;
mkey_attr.pd = priv->cdev->pdn;
@@ -295,13 +283,8 @@ mlx5_vdpa_mem_register(struct mlx5_vdpa_priv *priv)
priv->gpa_mkey_index = entry->mkey->id;
return 0;
error:
- if (entry) {
- if (entry->mkey)
- mlx5_devx_cmd_destroy(entry->mkey);
- if (entry->umem)
- mlx5_glue->devx_umem_dereg(entry->umem);
+ if (entry)
rte_free(entry);
- }
mlx5_vdpa_mem_dereg(priv);
rte_errno = -ret;
return ret;
--
2.25.1
^ permalink raw reply [flat|nested] 21+ messages in thread
* [dpdk-dev] [PATCH v2 5/5] net/mlx5: workaround MR creation for flow counter
2021-11-08 17:21 ` [dpdk-dev] [PATCH v2 0/5] mlx5: workaround MR issues Matan Azrad
` (3 preceding siblings ...)
2021-11-08 17:21 ` [dpdk-dev] [PATCH v2 4/5] vdpa/mlx5: workaround guest MR registrations Matan Azrad
@ 2021-11-08 17:21 ` Matan Azrad
2021-11-09 12:23 ` [dpdk-dev] [PATCH v3 0/5] mlx5: workaround MR issues Matan Azrad
2021-11-09 12:36 ` Matan Azrad
6 siblings, 0 replies; 21+ messages in thread
From: Matan Azrad @ 2021-11-08 17:21 UTC (permalink / raw)
To: Viacheslav Ovsiienko; +Cc: dev, Thomas Monjalon, Michael Baum, stable
From: Michael Baum <michaelba@nvidia.com>
Due to kernel driver/FW issues in direct MKEY creation using the DevX
API, this patch changes the counter MR creation to use the wrapped mkey
API.
Fixes: 5382d28c2110 ("net/mlx5: accelerate DV flow counter transactions")
Cc: stable@dpdk.org
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Signed-off-by: Matan Azrad <matan@nvidia.com>
---
drivers/net/mlx5/mlx5.c | 8 +-------
drivers/net/mlx5/mlx5.h | 5 +----
drivers/net/mlx5/mlx5_flow.c | 25 ++++++-------------------
3 files changed, 8 insertions(+), 30 deletions(-)
diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index 9c8d1cc76f..da21a30390 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -521,7 +521,6 @@ mlx5_flow_aging_init(struct mlx5_dev_ctx_shared *sh)
static void
mlx5_flow_counters_mng_init(struct mlx5_dev_ctx_shared *sh)
{
- struct mlx5_hca_attr *attr = &sh->cdev->config.hca_attr;
int i;
memset(&sh->cmng, 0, sizeof(sh->cmng));
@@ -534,10 +533,6 @@ mlx5_flow_counters_mng_init(struct mlx5_dev_ctx_shared *sh)
TAILQ_INIT(&sh->cmng.counters[i]);
rte_spinlock_init(&sh->cmng.csl[i]);
}
- if (sh->devx && !haswell_broadwell_cpu) {
- sh->cmng.relaxed_ordering_write = attr->relaxed_ordering_write;
- sh->cmng.relaxed_ordering_read = attr->relaxed_ordering_read;
- }
}
/**
@@ -552,8 +547,7 @@ mlx5_flow_destroy_counter_stat_mem_mng(struct mlx5_counter_stats_mem_mng *mng)
uint8_t *mem = (uint8_t *)(uintptr_t)mng->raws[0].data;
LIST_REMOVE(mng, next);
- claim_zero(mlx5_devx_cmd_destroy(mng->dm));
- claim_zero(mlx5_os_umem_dereg(mng->umem));
+ mlx5_os_wrapped_mkey_destroy(&mng->wm);
mlx5_free(mem);
}
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 9307a4f95b..05f2618aed 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -462,8 +462,7 @@ struct mlx5_flow_counter_pool {
struct mlx5_counter_stats_mem_mng {
LIST_ENTRY(mlx5_counter_stats_mem_mng) next;
struct mlx5_counter_stats_raw *raws;
- struct mlx5_devx_obj *dm;
- void *umem;
+ struct mlx5_pmd_wrapped_mr wm;
};
/* Raw memory structure for the counter statistics values of a pool. */
@@ -494,8 +493,6 @@ struct mlx5_flow_counter_mng {
uint8_t pending_queries;
uint16_t pool_index;
uint8_t query_thread_on;
- bool relaxed_ordering_read;
- bool relaxed_ordering_write;
bool counter_fallback; /* Use counter fallback management. */
LIST_HEAD(mem_mngs, mlx5_counter_stats_mem_mng) mem_mngs;
LIST_HEAD(stat_raws, mlx5_counter_stats_raw) free_stat_raws;
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index 2f30a35525..40625688b0 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -7775,7 +7775,6 @@ mlx5_counter_query(struct rte_eth_dev *dev, uint32_t cnt,
static int
mlx5_flow_create_counter_stat_mem_mng(struct mlx5_dev_ctx_shared *sh)
{
- struct mlx5_devx_mkey_attr mkey_attr;
struct mlx5_counter_stats_mem_mng *mem_mng;
volatile struct flow_counter_stats *raw_data;
int raws_n = MLX5_CNT_CONTAINER_RESIZE + MLX5_MAX_PENDING_QUERIES;
@@ -7785,6 +7784,7 @@ mlx5_flow_create_counter_stat_mem_mng(struct mlx5_dev_ctx_shared *sh)
sizeof(struct mlx5_counter_stats_mem_mng);
size_t pgsize = rte_mem_page_size();
uint8_t *mem;
+ int ret;
int i;
if (pgsize == (size_t)-1) {
@@ -7799,23 +7799,10 @@ mlx5_flow_create_counter_stat_mem_mng(struct mlx5_dev_ctx_shared *sh)
}
mem_mng = (struct mlx5_counter_stats_mem_mng *)(mem + size) - 1;
size = sizeof(*raw_data) * MLX5_COUNTERS_PER_POOL * raws_n;
- mem_mng->umem = mlx5_os_umem_reg(sh->cdev->ctx, mem, size,
- IBV_ACCESS_LOCAL_WRITE);
- if (!mem_mng->umem) {
- rte_errno = errno;
- mlx5_free(mem);
- return -rte_errno;
- }
- memset(&mkey_attr, 0, sizeof(mkey_attr));
- mkey_attr.addr = (uintptr_t)mem;
- mkey_attr.size = size;
- mkey_attr.umem_id = mlx5_os_get_umem_id(mem_mng->umem);
- mkey_attr.pd = sh->cdev->pdn;
- mkey_attr.relaxed_ordering_write = sh->cmng.relaxed_ordering_write;
- mkey_attr.relaxed_ordering_read = sh->cmng.relaxed_ordering_read;
- mem_mng->dm = mlx5_devx_cmd_mkey_create(sh->cdev->ctx, &mkey_attr);
- if (!mem_mng->dm) {
- mlx5_os_umem_dereg(mem_mng->umem);
+ ret = mlx5_os_wrapped_mkey_create(sh->cdev->ctx, sh->cdev->pd,
+ sh->cdev->pdn, mem, size,
+ &mem_mng->wm);
+ if (ret) {
rte_errno = errno;
mlx5_free(mem);
return -rte_errno;
@@ -7934,7 +7921,7 @@ mlx5_flow_query_alarm(void *arg)
ret = mlx5_devx_cmd_flow_counter_query(pool->min_dcs, 0,
MLX5_COUNTERS_PER_POOL,
NULL, NULL,
- pool->raw_hw->mem_mng->dm->id,
+ pool->raw_hw->mem_mng->wm.lkey,
(void *)(uintptr_t)
pool->raw_hw->data,
sh->devx_comp,
--
2.25.1
^ permalink raw reply [flat|nested] 21+ messages in thread
* Re: [dpdk-dev] [dpdk-stable] [PATCH v2 3/5] vdpa/mlx5: workaround dirty bitmap MR creation
2021-11-08 17:21 ` [dpdk-dev] [PATCH v2 3/5] vdpa/mlx5: workaround dirty bitmap MR creation Matan Azrad
@ 2021-11-08 19:38 ` Thomas Monjalon
0 siblings, 0 replies; 21+ messages in thread
From: Thomas Monjalon @ 2021-11-08 19:38 UTC (permalink / raw)
To: Matan Azrad; +Cc: Viacheslav Ovsiienko, dev, stable, Michael Baum
08/11/2021 18:21, Matan Azrad:
> Due to kernel driver/FW issues in direct MKEY creation using the DevX
> API, this patch replaces the dirty bitmap MR creation to use wrapped
> mkey instead.
>
> Fixes: 9d39e57f21ac ("vdpa/mlx5: support live migration")
> Cc: stable@dpdk.org
>
> Signed-off-by: Michael Baum <michaelba@oss.nvidia.com>
> Signed-off-by: Matan Azrad <matan@nvidia.com>
32-bit compilation is broken:
drivers/vdpa/mlx5/mlx5_vdpa_lm.c:46:64: error:
cast to pointer from integer of different size [-Werror=int-to-pointer-cast]
46 | priv->cdev->pdn, (void *)log_base,
| ^
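(For reference, the fix applied in v3 is presumably the usual portable
double cast through uintptr_t, e.g.:

    ret = mlx5_os_wrapped_mkey_create(priv->cdev->ctx, priv->cdev->pd,
                                      priv->cdev->pdn,
                                      (void *)(uintptr_t)log_base,
                                      log_size, &priv->lm_mr);

so that the 64-bit log_base is narrowed explicitly on 32-bit targets.)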
^ permalink raw reply [flat|nested] 21+ messages in thread
* [dpdk-dev] [PATCH v3 0/5] mlx5: workaround MR issues
2021-11-08 17:21 ` [dpdk-dev] [PATCH v2 0/5] mlx5: workaround MR issues Matan Azrad
` (4 preceding siblings ...)
2021-11-08 17:21 ` [dpdk-dev] [PATCH v2 5/5] net/mlx5: workaround MR creation for flow counter Matan Azrad
@ 2021-11-09 12:23 ` Matan Azrad
2021-11-09 12:36 ` Matan Azrad
6 siblings, 0 replies; 21+ messages in thread
From: Matan Azrad @ 2021-11-09 12:23 UTC (permalink / raw)
To: Viacheslav Ovsiienko; +Cc: dev, Thomas Monjalon
The mlx5 PMD uses the kernel mlx5 driver to map physical memory to the HW.
Using the Verbs API ibv_reg_mr, a mkey can be created for that.
In this case, the mkey is signed on the user ID of the kernel driver.
Using the DevX API, a mkey can also be created, but it should point to an umem object (representing the specific buffer mapping) created by the kernel.
In this case, the mkey is signed on the user ID of the process DevX context.
In FW DevX control commands which get mkey as a parameter, there is a security check on the user ID and Verbs mkeys are rejected.
Unfortunately, also when using DevX mkey, there is an error in the FW
command on umem validation because the umem is not designed to be used
for any mkey parameter.
As a workaround to the kernel driver/FW issue, it is needed to use a wrapped MR, which is an indirect mkey (created by the DevX API) pointing to a direct mkey created by the kernel, for any DevX command that uses an MR.
Add an API to create and destroy this wrapped MR.
Use this logic for counters and LM management.
V3:
Fix 32-bit compilation issue.
V2:
Fix Windows compilation issue.
Improve logs.
Matan Azrad (2):
common/mlx5: add wrapped MR create API
vdpa/mlx5: workaround dirty bitmap MR creation
Michael Baum (3):
common/mlx5: glue MR registration with IOVA
vdpa/mlx5: workaround guest MR registrations
net/mlx5: workaround MR creation for flow counter
drivers/common/mlx5/linux/meson.build | 2 +
drivers/common/mlx5/linux/mlx5_common_os.c | 56 ++++++++++++++++++++
drivers/common/mlx5/linux/mlx5_glue.c | 18 +++++++
drivers/common/mlx5/linux/mlx5_glue.h | 3 ++
drivers/common/mlx5/mlx5_common.h | 18 +++++++
drivers/common/mlx5/version.map | 3 ++
drivers/common/mlx5/windows/mlx5_common_os.c | 40 ++++++++++++++
drivers/net/mlx5/mlx5.c | 8 +--
drivers/net/mlx5/mlx5.h | 5 +-
drivers/net/mlx5/mlx5_flow.c | 25 +++------
drivers/vdpa/mlx5/mlx5_vdpa.h | 9 ++--
drivers/vdpa/mlx5/mlx5_vdpa_lm.c | 38 +++----------
drivers/vdpa/mlx5/mlx5_vdpa_mem.c | 43 +++++----------
13 files changed, 175 insertions(+), 93 deletions(-)
--
2.25.1
^ permalink raw reply [flat|nested] 21+ messages in thread
* [dpdk-dev] [PATCH v3 0/5] mlx5: workaround MR issues
2021-11-08 17:21 ` [dpdk-dev] [PATCH v2 0/5] mlx5: workaround MR issues Matan Azrad
` (5 preceding siblings ...)
2021-11-09 12:23 ` [dpdk-dev] [PATCH v3 0/5] mlx5: workaround MR issues Matan Azrad
@ 2021-11-09 12:36 ` Matan Azrad
2021-11-09 12:36 ` [dpdk-dev] [PATCH v3 1/5] common/mlx5: glue MR registration with IOVA Matan Azrad
` (5 more replies)
6 siblings, 6 replies; 21+ messages in thread
From: Matan Azrad @ 2021-11-09 12:36 UTC (permalink / raw)
To: Viacheslav Ovsiienko; +Cc: dev, Thomas Monjalon
The mlx5 PMD uses the kernel mlx5 driver to map physical memory to the
HW.
Using the Verbs API ibv_reg_mr, a mkey can be created for that.
In this case, the mkey is signed on the user ID of the kernel driver.
Using the DevX API, a mkey can also be created, but it should point to an
umem object (representing the specific buffer mapping) created by the
kernel. In this case, the mkey is signed on the user ID of the process
DevX context.
In FW DevX control commands which get mkey as a parameter, there is
a security check on the user ID and Verbs mkeys are rejected.
Unfortunately, also when using DevX mkey, there is an error in the FW
command on umem validation because the umem is not designed to be used
for any mkey parameters.
As a workaround to the kernel driver/FW issue, it is needed to use a
wrapped MR, which is an indirect mkey (created by the DevX API) pointing
to a direct mkey created by the kernel, for any DevX command that uses an MR.
Add an API to create and destroy this wrapped MR.
Use this logic in flow counter query management and in LM.
V3:
Fix 32-bit compilation issue.
V2:
Fix missing implementation for Windows.
Improve logs.
Matan Azrad (2):
common/mlx5: add wrapped MR create API
vdpa/mlx5: workaround dirty bitmap MR creation
Michael Baum (3):
common/mlx5: glue MR registration with IOVA
vdpa/mlx5: workaround guest MR registrations
net/mlx5: workaround MR creation for flow counter
drivers/common/mlx5/linux/meson.build | 2 +
drivers/common/mlx5/linux/mlx5_common_os.c | 56 ++++++++++++++++++++
drivers/common/mlx5/linux/mlx5_glue.c | 18 +++++++
drivers/common/mlx5/linux/mlx5_glue.h | 3 ++
drivers/common/mlx5/mlx5_common.h | 18 +++++++
drivers/common/mlx5/version.map | 3 ++
drivers/common/mlx5/windows/mlx5_common_os.c | 40 ++++++++++++++
drivers/net/mlx5/mlx5.c | 8 +--
drivers/net/mlx5/mlx5.h | 5 +-
drivers/net/mlx5/mlx5_flow.c | 25 +++------
drivers/vdpa/mlx5/mlx5_vdpa.h | 9 ++--
drivers/vdpa/mlx5/mlx5_vdpa_lm.c | 38 +++----------
drivers/vdpa/mlx5/mlx5_vdpa_mem.c | 43 +++++----------
13 files changed, 175 insertions(+), 93 deletions(-)
--
2.25.1
^ permalink raw reply [flat|nested] 21+ messages in thread
* [dpdk-dev] [PATCH v3 1/5] common/mlx5: glue MR registration with IOVA
2021-11-09 12:36 ` Matan Azrad
@ 2021-11-09 12:36 ` Matan Azrad
2021-11-09 12:36 ` [dpdk-dev] [PATCH v3 2/5] common/mlx5: add wrapped MR create API Matan Azrad
` (4 subsequent siblings)
5 siblings, 0 replies; 21+ messages in thread
From: Matan Azrad @ 2021-11-09 12:36 UTC (permalink / raw)
To: Viacheslav Ovsiienko
Cc: dev, Thomas Monjalon, Michael Baum, stable, Michael Baum
From: Michael Baum <michaelba@oss.nvidia.com>
Add support for the rdma-core API to register an MR with a specified
IOVA.
The API takes the process VA, size, and IOVA, and returns a memory
region whose space is addressed by that IOVA.
So any access through this MR must use an address relative to the IOVA
specified in the API.
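As a hedged illustration only (not part of the patch; pd, host_va,
size and guest_pa are placeholder variables), the addressing model is:

	/* Register a process buffer so that the HW addresses it through
	 * a caller-chosen IOVA (e.g. a guest physical address) instead
	 * of the local VA.
	 */
	struct ibv_mr *mr = mlx5_glue->reg_mr_iova(pd, host_va, size,
						   guest_pa,
						   IBV_ACCESS_LOCAL_WRITE);
	/* Any DMA address used with mr->lkey must then be relative to
	 * guest_pa (e.g. guest_pa + offset), not to host_va.
	 */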
Fixes: cc07a42da250 ("vdpa/mlx5: prepare memory regions")
Cc: stable@dpdk.org
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Signed-off-by: Matan Azrad <matan@nvidia.com>
---
drivers/common/mlx5/linux/meson.build | 2 ++
drivers/common/mlx5/linux/mlx5_glue.c | 18 ++++++++++++++++++
drivers/common/mlx5/linux/mlx5_glue.h | 3 +++
3 files changed, 23 insertions(+)
diff --git a/drivers/common/mlx5/linux/meson.build b/drivers/common/mlx5/linux/meson.build
index 2dcd27b778..7909f23e21 100644
--- a/drivers/common/mlx5/linux/meson.build
+++ b/drivers/common/mlx5/linux/meson.build
@@ -200,6 +200,8 @@ has_sym_args = [
'MLX5DV_DR_ACTION_FLAGS_ASO_CT_DIRECTION_INITIATOR' ],
[ 'HAVE_MLX5_DR_ALLOW_DUPLICATE', 'infiniband/mlx5dv.h',
'mlx5dv_dr_domain_allow_duplicate_rules' ],
+ [ 'HAVE_MLX5_IBV_REG_MR_IOVA', 'infiniband/verbs.h',
+ 'ibv_reg_mr_iova' ],
]
config = configuration_data()
foreach arg:has_sym_args
diff --git a/drivers/common/mlx5/linux/mlx5_glue.c b/drivers/common/mlx5/linux/mlx5_glue.c
index 037ca961a0..bc6622053f 100644
--- a/drivers/common/mlx5/linux/mlx5_glue.c
+++ b/drivers/common/mlx5/linux/mlx5_glue.c
@@ -224,6 +224,23 @@ mlx5_glue_reg_mr(struct ibv_pd *pd, void *addr, size_t length, int access)
return ibv_reg_mr(pd, addr, length, access);
}
+static struct ibv_mr *
+mlx5_glue_reg_mr_iova(struct ibv_pd *pd, void *addr, size_t length,
+ uint64_t iova, int access)
+{
+#ifdef HAVE_MLX5_IBV_REG_MR_IOVA
+ return ibv_reg_mr_iova(pd, addr, length, iova, access);
+#else
+ (void)pd;
+ (void)addr;
+ (void)length;
+ (void)iova;
+ (void)access;
+ errno = ENOTSUP;
+ return NULL;
+#endif
+}
+
static struct ibv_mr *
mlx5_glue_alloc_null_mr(struct ibv_pd *pd)
{
@@ -1412,6 +1429,7 @@ const struct mlx5_glue *mlx5_glue = &(const struct mlx5_glue) {
.destroy_qp = mlx5_glue_destroy_qp,
.modify_qp = mlx5_glue_modify_qp,
.reg_mr = mlx5_glue_reg_mr,
+ .reg_mr_iova = mlx5_glue_reg_mr_iova,
.alloc_null_mr = mlx5_glue_alloc_null_mr,
.dereg_mr = mlx5_glue_dereg_mr,
.create_counter_set = mlx5_glue_create_counter_set,
diff --git a/drivers/common/mlx5/linux/mlx5_glue.h b/drivers/common/mlx5/linux/mlx5_glue.h
index f39ef2dac7..4e6d31f263 100644
--- a/drivers/common/mlx5/linux/mlx5_glue.h
+++ b/drivers/common/mlx5/linux/mlx5_glue.h
@@ -197,6 +197,9 @@ struct mlx5_glue {
int attr_mask);
struct ibv_mr *(*reg_mr)(struct ibv_pd *pd, void *addr,
size_t length, int access);
+ struct ibv_mr *(*reg_mr_iova)(struct ibv_pd *pd, void *addr,
+ size_t length, uint64_t iova,
+ int access);
struct ibv_mr *(*alloc_null_mr)(struct ibv_pd *pd);
int (*dereg_mr)(struct ibv_mr *mr);
struct ibv_counter_set *(*create_counter_set)
--
2.25.1
^ permalink raw reply [flat|nested] 21+ messages in thread
* [dpdk-dev] [PATCH v3 2/5] common/mlx5: add wrapped MR create API
2021-11-09 12:36 ` Matan Azrad
2021-11-09 12:36 ` [dpdk-dev] [PATCH v3 1/5] common/mlx5: glue MR registration with IOVA Matan Azrad
@ 2021-11-09 12:36 ` Matan Azrad
2021-11-09 12:36 ` [dpdk-dev] [PATCH v3 3/5] vdpa/mlx5: workaround dirty bitmap MR creation Matan Azrad
` (3 subsequent siblings)
5 siblings, 0 replies; 21+ messages in thread
From: Matan Azrad @ 2021-11-09 12:36 UTC (permalink / raw)
To: Viacheslav Ovsiienko; +Cc: dev, Thomas Monjalon, stable, Michael Baum
The mlx5 PMD uses the kernel mlx5 driver to map physical memory for the
HW.
Using the Verbs API ibv_reg_mr, an mkey can be created for that; in
this case, the mkey is signed with the user ID of the kernel driver.
Using the DevX API, an mkey can also be created, but it must point to
an umem object (representing the specific buffer mapping) created by
the kernel; in this case, the mkey is signed with the user ID of the
process DevX context.
FW DevX control commands that take an mkey as a parameter perform a
security check on the user ID, so Verbs mkeys are rejected.
Unfortunately, even when a DevX mkey is used, the FW command fails on
umem validation, because the umem is not designed to be used for such
mkey parameters.
As a workaround to this kernel driver/FW issue, any DevX command that
uses an MR must use a wrapped MR: an indirect mkey (created by the
DevX API) pointing to a direct mkey created by the kernel.
Add an API to create and destroy this wrapped MR.
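A hedged caller-side sketch (not part of the patch; the helper
use_lkey_in_devx_cmd() is a placeholder for any DevX command taking an
mkey, and the real users come in the next patches of the series):

	static int
	example_wrapped_mr(struct mlx5_common_device *cdev, void *buf,
			   size_t buf_size)
	{
		struct mlx5_pmd_wrapped_mr wm = {0};
		int ret;

		/* Direct Verbs mkey wrapped by an indirect DevX mkey. */
		ret = mlx5_os_wrapped_mkey_create(cdev->ctx, cdev->pd,
						  cdev->pdn, buf, buf_size,
						  &wm);
		if (ret)
			return ret;
		/* wm.lkey is an indirect mkey ID that FW DevX commands
		 * accept, e.g. as a dirty bitmap mkey or a counter
		 * statistics mkey.
		 */
		use_lkey_in_devx_cmd(wm.lkey);
		mlx5_os_wrapped_mkey_destroy(&wm);
		return 0;
	}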
Fixes: 5382d28c2110 ("net/mlx5: accelerate DV flow counter transactions")
Fixes: 9d39e57f21ac ("vdpa/mlx5: support live migration")
Cc: stable@dpdk.org
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Signed-off-by: Matan Azrad <matan@nvidia.com>
---
drivers/common/mlx5/linux/mlx5_common_os.c | 56 ++++++++++++++++++++
drivers/common/mlx5/mlx5_common.h | 18 +++++++
drivers/common/mlx5/version.map | 3 ++
drivers/common/mlx5/windows/mlx5_common_os.c | 40 ++++++++++++++
4 files changed, 117 insertions(+)
diff --git a/drivers/common/mlx5/linux/mlx5_common_os.c b/drivers/common/mlx5/linux/mlx5_common_os.c
index b516564b79..0d3e24e04e 100644
--- a/drivers/common/mlx5/linux/mlx5_common_os.c
+++ b/drivers/common/mlx5/linux/mlx5_common_os.c
@@ -744,3 +744,59 @@ mlx5_get_device_guid(const struct rte_pci_addr *dev, uint8_t *guid, size_t len)
fclose(id_file);
return ret;
}
+
+/*
+ * Create direct mkey using the kernel ibv_reg_mr API and wrap it with a new
+ * indirect mkey created by the DevX API.
+ * This mkey should be used for DevX commands requesting mkey as a parameter.
+ */
+int
+mlx5_os_wrapped_mkey_create(void *ctx, void *pd, uint32_t pdn, void *addr,
+ size_t length, struct mlx5_pmd_wrapped_mr *pmd_mr)
+{
+ struct mlx5_klm klm = {
+ .byte_count = length,
+ .address = (uintptr_t)addr,
+ };
+ struct mlx5_devx_mkey_attr mkey_attr = {
+ .pd = pdn,
+ .klm_array = &klm,
+ .klm_num = 1,
+ };
+ struct mlx5_devx_obj *mkey;
+ struct ibv_mr *ibv_mr = mlx5_glue->reg_mr(pd, addr, length,
+ IBV_ACCESS_LOCAL_WRITE |
+ (haswell_broadwell_cpu ? 0 :
+ IBV_ACCESS_RELAXED_ORDERING));
+
+ if (!ibv_mr) {
+ rte_errno = errno;
+ return -rte_errno;
+ }
+ klm.mkey = ibv_mr->lkey;
+ mkey_attr.addr = (uintptr_t)addr;
+ mkey_attr.size = length;
+ mkey = mlx5_devx_cmd_mkey_create(ctx, &mkey_attr);
+ if (!mkey) {
+ claim_zero(mlx5_glue->dereg_mr(ibv_mr));
+ return -rte_errno;
+ }
+ pmd_mr->addr = addr;
+ pmd_mr->len = length;
+ pmd_mr->obj = (void *)ibv_mr;
+ pmd_mr->imkey = mkey;
+ pmd_mr->lkey = mkey->id;
+ return 0;
+}
+
+void
+mlx5_os_wrapped_mkey_destroy(struct mlx5_pmd_wrapped_mr *pmd_mr)
+{
+ if (!pmd_mr)
+ return;
+ if (pmd_mr->imkey)
+ claim_zero(mlx5_devx_cmd_destroy(pmd_mr->imkey));
+ if (pmd_mr->obj)
+ claim_zero(mlx5_glue->dereg_mr(pmd_mr->obj));
+ memset(pmd_mr, 0, sizeof(*pmd_mr));
+}
diff --git a/drivers/common/mlx5/mlx5_common.h b/drivers/common/mlx5/mlx5_common.h
index 661d3ab235..e8809844af 100644
--- a/drivers/common/mlx5/mlx5_common.h
+++ b/drivers/common/mlx5/mlx5_common.h
@@ -509,4 +509,22 @@ mlx5_devx_uar_release(struct mlx5_uar *uar);
int mlx5_os_open_device(struct mlx5_common_device *cdev, uint32_t classes);
int mlx5_os_pd_create(struct mlx5_common_device *cdev);
+/* mlx5 PMD wrapped MR struct. */
+struct mlx5_pmd_wrapped_mr {
+ uint32_t lkey;
+ void *addr;
+ size_t len;
+ void *obj; /* verbs mr object or devx umem object. */
+ void *imkey; /* DevX indirect mkey object. */
+};
+
+__rte_internal
+int
+mlx5_os_wrapped_mkey_create(void *ctx, void *pd, uint32_t pdn, void *addr,
+ size_t length, struct mlx5_pmd_wrapped_mr *pmd_mr);
+
+__rte_internal
+void
+mlx5_os_wrapped_mkey_destroy(struct mlx5_pmd_wrapped_mr *pmd_mr);
+
#endif /* RTE_PMD_MLX5_COMMON_H_ */
diff --git a/drivers/common/mlx5/version.map b/drivers/common/mlx5/version.map
index 2335edf39d..8a62dc2782 100644
--- a/drivers/common/mlx5/version.map
+++ b/drivers/common/mlx5/version.map
@@ -134,6 +134,9 @@ INTERNAL {
mlx5_os_umem_dereg;
mlx5_os_umem_reg;
+ mlx5_os_wrapped_mkey_create;
+ mlx5_os_wrapped_mkey_destroy;
+
mlx5_realloc;
mlx5_translate_port_name; # WINDOWS_NO_EXPORT
diff --git a/drivers/common/mlx5/windows/mlx5_common_os.c b/drivers/common/mlx5/windows/mlx5_common_os.c
index ea478d7395..0d03344343 100644
--- a/drivers/common/mlx5/windows/mlx5_common_os.c
+++ b/drivers/common/mlx5/windows/mlx5_common_os.c
@@ -390,3 +390,43 @@ mlx5_os_set_reg_mr_cb(mlx5_reg_mr_t *reg_mr_cb, mlx5_dereg_mr_t *dereg_mr_cb)
*reg_mr_cb = mlx5_os_reg_mr;
*dereg_mr_cb = mlx5_os_dereg_mr;
}
+
+/*
+ * In Windows, no need to wrap the MR, no known issue for it in kernel.
+ * Use the regular function to create direct MR.
+ */
+int
+mlx5_os_wrapped_mkey_create(void *ctx, void *pd, uint32_t pdn, void *addr,
+ size_t length, struct mlx5_pmd_wrapped_mr *wpmd_mr)
+{
+ struct mlx5_pmd_mr pmd_mr = {0};
+ int ret = mlx5_os_reg_mr(pd, addr, length, &pmd_mr);
+
+ (void)pdn;
+ (void)ctx;
+ if (ret != 0)
+ return -1;
+ wpmd_mr->addr = addr;
+ wpmd_mr->len = length;
+ wpmd_mr->obj = pmd_mr.obj;
+ wpmd_mr->imkey = pmd_mr.mkey;
+ wpmd_mr->lkey = pmd_mr.mkey->id;
+ return 0;
+}
+
+void
+mlx5_os_wrapped_mkey_destroy(struct mlx5_pmd_wrapped_mr *wpmd_mr)
+{
+ struct mlx5_pmd_mr pmd_mr;
+
+ if (!wpmd_mr)
+ return;
+ pmd_mr.addr = wpmd_mr->addr;
+ pmd_mr.len = wpmd_mr->len;
+ pmd_mr.obj = wpmd_mr->obj;
+ pmd_mr.mkey = wpmd_mr->imkey;
+ pmd_mr.lkey = wpmd_mr->lkey;
+ mlx5_os_dereg_mr(&pmd_mr);
+ memset(wpmd_mr, 0, sizeof(*wpmd_mr));
+}
+
--
2.25.1
^ permalink raw reply [flat|nested] 21+ messages in thread
* [dpdk-dev] [PATCH v3 3/5] vdpa/mlx5: workaround dirty bitmap MR creation
2021-11-09 12:36 ` Matan Azrad
2021-11-09 12:36 ` [dpdk-dev] [PATCH v3 1/5] common/mlx5: glue MR registration with IOVA Matan Azrad
2021-11-09 12:36 ` [dpdk-dev] [PATCH v3 2/5] common/mlx5: add wrapped MR create API Matan Azrad
@ 2021-11-09 12:36 ` Matan Azrad
2021-11-09 12:36 ` [dpdk-dev] [PATCH v3 4/5] vdpa/mlx5: workaround guest MR registrations Matan Azrad
` (2 subsequent siblings)
5 siblings, 0 replies; 21+ messages in thread
From: Matan Azrad @ 2021-11-09 12:36 UTC (permalink / raw)
To: Viacheslav Ovsiienko; +Cc: dev, Thomas Monjalon, stable, Michael Baum
Due to kernel driver/FW issues in direct MKEY creation using the DevX
API, this patch changes the dirty bitmap MR creation to use a wrapped
mkey instead.
Fixes: 9d39e57f21ac ("vdpa/mlx5: support live migration")
Cc: stable@dpdk.org
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Signed-off-by: Matan Azrad <matan@nvidia.com>
---
drivers/vdpa/mlx5/mlx5_vdpa.h | 1 +
drivers/vdpa/mlx5/mlx5_vdpa_lm.c | 38 +++++++------------------------
drivers/vdpa/mlx5/mlx5_vdpa_mem.c | 2 ++
3 files changed, 11 insertions(+), 30 deletions(-)
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.h b/drivers/vdpa/mlx5/mlx5_vdpa.h
index 62498f87fd..15212a2b30 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa.h
+++ b/drivers/vdpa/mlx5/mlx5_vdpa.h
@@ -147,6 +147,7 @@ struct mlx5_vdpa_priv {
struct mlx5_vdpa_steer steer;
struct mlx5dv_var *var;
void *virtq_db_addr;
+ struct mlx5_pmd_wrapped_mr lm_mr;
SLIST_HEAD(mr_list, mlx5_vdpa_query_mr) mr_list;
struct mlx5_vdpa_virtq virtqs[];
};
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_lm.c b/drivers/vdpa/mlx5/mlx5_vdpa_lm.c
index 3e8d9eb9a2..e65e4faa47 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa_lm.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa_lm.c
@@ -36,38 +36,22 @@ int
mlx5_vdpa_dirty_bitmap_set(struct mlx5_vdpa_priv *priv, uint64_t log_base,
uint64_t log_size)
{
- struct mlx5_devx_mkey_attr mkey_attr = {
- .addr = (uintptr_t)log_base,
- .size = log_size,
- .pd = priv->cdev->pdn,
- .pg_access = 1,
- };
struct mlx5_devx_virtq_attr attr = {
.type = MLX5_VIRTQ_MODIFY_TYPE_DIRTY_BITMAP_PARAMS,
.dirty_bitmap_addr = log_base,
.dirty_bitmap_size = log_size,
};
- struct mlx5_vdpa_query_mr *mr = rte_malloc(__func__, sizeof(*mr), 0);
int i;
+ int ret = mlx5_os_wrapped_mkey_create(priv->cdev->ctx, priv->cdev->pd,
+ priv->cdev->pdn,
+ (void *)(uintptr_t)log_base,
+ log_size, &priv->lm_mr);
- if (!mr) {
- DRV_LOG(ERR, "Failed to allocate mem for lm mr.");
+ if (!ret) {
+ DRV_LOG(ERR, "Failed to allocate wrapped MR for lm.");
return -1;
}
- mr->umem = mlx5_glue->devx_umem_reg(priv->cdev->ctx,
- (void *)(uintptr_t)log_base,
- log_size, IBV_ACCESS_LOCAL_WRITE);
- if (!mr->umem) {
- DRV_LOG(ERR, "Failed to register umem for lm mr.");
- goto err;
- }
- mkey_attr.umem_id = mr->umem->umem_id;
- mr->mkey = mlx5_devx_cmd_mkey_create(priv->cdev->ctx, &mkey_attr);
- if (!mr->mkey) {
- DRV_LOG(ERR, "Failed to create Mkey for lm.");
- goto err;
- }
- attr.dirty_bitmap_mkey = mr->mkey->id;
+ attr.dirty_bitmap_mkey = priv->lm_mr.lkey;
for (i = 0; i < priv->nr_virtqs; ++i) {
attr.queue_index = i;
if (!priv->virtqs[i].virtq) {
@@ -78,15 +62,9 @@ mlx5_vdpa_dirty_bitmap_set(struct mlx5_vdpa_priv *priv, uint64_t log_base,
goto err;
}
}
- mr->is_indirect = 0;
- SLIST_INSERT_HEAD(&priv->mr_list, mr, next);
return 0;
err:
- if (mr->mkey)
- mlx5_devx_cmd_destroy(mr->mkey);
- if (mr->umem)
- mlx5_glue->devx_umem_dereg(mr->umem);
- rte_free(mr);
+ mlx5_os_wrapped_mkey_destroy(&priv->lm_mr);
return -1;
}
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_mem.c b/drivers/vdpa/mlx5/mlx5_vdpa_mem.c
index f551a094cd..d7707bbd91 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa_mem.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa_mem.c
@@ -31,6 +31,8 @@ mlx5_vdpa_mem_dereg(struct mlx5_vdpa_priv *priv)
entry = next;
}
SLIST_INIT(&priv->mr_list);
+ if (priv->lm_mr.addr)
+ mlx5_os_wrapped_mkey_destroy(&priv->lm_mr);
if (priv->null_mr) {
claim_zero(mlx5_glue->dereg_mr(priv->null_mr));
priv->null_mr = NULL;
--
2.25.1
^ permalink raw reply [flat|nested] 21+ messages in thread
* [dpdk-dev] [PATCH v3 4/5] vdpa/mlx5: workaround guest MR registrations
2021-11-09 12:36 ` Matan Azrad
` (2 preceding siblings ...)
2021-11-09 12:36 ` [dpdk-dev] [PATCH v3 3/5] vdpa/mlx5: workaround dirty bitmap MR creation Matan Azrad
@ 2021-11-09 12:36 ` Matan Azrad
2021-11-09 12:36 ` [dpdk-dev] [PATCH v3 5/5] net/mlx5: workaround MR creation for flow counter Matan Azrad
2021-11-10 14:55 ` [dpdk-dev] [PATCH v3 0/5] mlx5: workaround MR issues Thomas Monjalon
5 siblings, 0 replies; 21+ messages in thread
From: Matan Azrad @ 2021-11-09 12:36 UTC (permalink / raw)
To: Viacheslav Ovsiienko
Cc: dev, Thomas Monjalon, Michael Baum, stable, Michael Baum
From: Michael Baum <michaelba@oss.nvidia.com>
Due to a kernel issue in direct MKEY creation using the DevX API, this
patch changes the virtio MR creation to use the Verbs API.
Fixes: cc07a42da250 ("vdpa/mlx5: prepare memory regions")
Cc: stable@dpdk.org
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Signed-off-by: Matan Azrad <matan@nvidia.com>
---
drivers/vdpa/mlx5/mlx5_vdpa.h | 8 +++---
drivers/vdpa/mlx5/mlx5_vdpa_mem.c | 41 +++++++++----------------------
2 files changed, 16 insertions(+), 33 deletions(-)
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.h b/drivers/vdpa/mlx5/mlx5_vdpa.h
index 15212a2b30..22617924ea 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa.h
+++ b/drivers/vdpa/mlx5/mlx5_vdpa.h
@@ -59,10 +59,10 @@ struct mlx5_vdpa_event_qp {
struct mlx5_vdpa_query_mr {
SLIST_ENTRY(mlx5_vdpa_query_mr) next;
- void *addr;
- uint64_t length;
- struct mlx5dv_devx_umem *umem;
- struct mlx5_devx_obj *mkey;
+ union {
+ struct ibv_mr *mr;
+ struct mlx5_devx_obj *mkey;
+ };
int is_indirect;
};
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_mem.c b/drivers/vdpa/mlx5/mlx5_vdpa_mem.c
index d7707bbd91..b1b9053bff 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa_mem.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa_mem.c
@@ -23,9 +23,10 @@ mlx5_vdpa_mem_dereg(struct mlx5_vdpa_priv *priv)
entry = SLIST_FIRST(&priv->mr_list);
while (entry) {
next = SLIST_NEXT(entry, next);
- claim_zero(mlx5_devx_cmd_destroy(entry->mkey));
- if (!entry->is_indirect)
- claim_zero(mlx5_glue->devx_umem_dereg(entry->umem));
+ if (entry->is_indirect)
+ claim_zero(mlx5_devx_cmd_destroy(entry->mkey));
+ else
+ claim_zero(mlx5_glue->dereg_mr(entry->mr));
SLIST_REMOVE(&priv->mr_list, entry, mlx5_vdpa_query_mr, next);
rte_free(entry);
entry = next;
@@ -202,7 +203,6 @@ mlx5_vdpa_mem_register(struct mlx5_vdpa_priv *priv)
goto error;
}
DRV_LOG(DEBUG, "Dump fill Mkey = %u.", priv->null_mr->lkey);
- memset(&mkey_attr, 0, sizeof(mkey_attr));
for (i = 0; i < mem->nregions; i++) {
reg = &mem->regions[i];
entry = rte_zmalloc(__func__, sizeof(*entry), 0);
@@ -211,28 +211,15 @@ mlx5_vdpa_mem_register(struct mlx5_vdpa_priv *priv)
DRV_LOG(ERR, "Failed to allocate mem entry memory.");
goto error;
}
- entry->umem = mlx5_glue->devx_umem_reg(priv->cdev->ctx,
- (void *)(uintptr_t)reg->host_user_addr,
- reg->size, IBV_ACCESS_LOCAL_WRITE);
- if (!entry->umem) {
- DRV_LOG(ERR, "Failed to register Umem by Devx.");
- ret = -errno;
- goto error;
- }
- mkey_attr.addr = (uintptr_t)(reg->guest_phys_addr);
- mkey_attr.size = reg->size;
- mkey_attr.umem_id = entry->umem->umem_id;
- mkey_attr.pd = priv->cdev->pdn;
- mkey_attr.pg_access = 1;
- entry->mkey = mlx5_devx_cmd_mkey_create(priv->cdev->ctx,
- &mkey_attr);
- if (!entry->mkey) {
+ entry->mr = mlx5_glue->reg_mr_iova(priv->cdev->pd,
+ (void *)(uintptr_t)(reg->host_user_addr),
+ reg->size, reg->guest_phys_addr,
+ IBV_ACCESS_LOCAL_WRITE);
+ if (!entry->mr) {
DRV_LOG(ERR, "Failed to create direct Mkey.");
ret = -rte_errno;
goto error;
}
- entry->addr = (void *)(uintptr_t)(reg->host_user_addr);
- entry->length = reg->size;
entry->is_indirect = 0;
if (i > 0) {
uint64_t sadd;
@@ -262,12 +249,13 @@ mlx5_vdpa_mem_register(struct mlx5_vdpa_priv *priv)
for (k = 0; k < reg->size; k += klm_size) {
klm_array[klm_index].byte_count = k + klm_size >
reg->size ? reg->size - k : klm_size;
- klm_array[klm_index].mkey = entry->mkey->id;
+ klm_array[klm_index].mkey = entry->mr->lkey;
klm_array[klm_index].address = reg->guest_phys_addr + k;
klm_index++;
}
SLIST_INSERT_HEAD(&priv->mr_list, entry, next);
}
+ memset(&mkey_attr, 0, sizeof(mkey_attr));
mkey_attr.addr = (uintptr_t)(mem->regions[0].guest_phys_addr);
mkey_attr.size = mem_size;
mkey_attr.pd = priv->cdev->pdn;
@@ -295,13 +283,8 @@ mlx5_vdpa_mem_register(struct mlx5_vdpa_priv *priv)
priv->gpa_mkey_index = entry->mkey->id;
return 0;
error:
- if (entry) {
- if (entry->mkey)
- mlx5_devx_cmd_destroy(entry->mkey);
- if (entry->umem)
- mlx5_glue->devx_umem_dereg(entry->umem);
+ if (entry)
rte_free(entry);
- }
mlx5_vdpa_mem_dereg(priv);
rte_errno = -ret;
return ret;
--
2.25.1
^ permalink raw reply [flat|nested] 21+ messages in thread
* [dpdk-dev] [PATCH v3 5/5] net/mlx5: workaround MR creation for flow counter
2021-11-09 12:36 ` Matan Azrad
` (3 preceding siblings ...)
2021-11-09 12:36 ` [dpdk-dev] [PATCH v3 4/5] vdpa/mlx5: workaround guest MR registrations Matan Azrad
@ 2021-11-09 12:36 ` Matan Azrad
2021-11-10 14:55 ` [dpdk-dev] [PATCH v3 0/5] mlx5: workaround MR issues Thomas Monjalon
5 siblings, 0 replies; 21+ messages in thread
From: Matan Azrad @ 2021-11-09 12:36 UTC (permalink / raw)
To: Viacheslav Ovsiienko; +Cc: dev, Thomas Monjalon, Michael Baum, stable
From: Michael Baum <michaelba@nvidia.com>
Due to kernel driver/FW issues in direct MKEY creation using the DevX
API, this patch changes the counter MR creation to use the wrapped
mkey API.
Fixes: 5382d28c2110 ("net/mlx5: accelerate DV flow counter transactions")
Cc: stable@dpdk.org
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Signed-off-by: Matan Azrad <matan@nvidia.com>
---
drivers/net/mlx5/mlx5.c | 8 +-------
drivers/net/mlx5/mlx5.h | 5 +----
drivers/net/mlx5/mlx5_flow.c | 25 ++++++-------------------
3 files changed, 8 insertions(+), 30 deletions(-)
diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index f5990dd757..2a3efb3588 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -522,7 +522,6 @@ mlx5_flow_aging_init(struct mlx5_dev_ctx_shared *sh)
static void
mlx5_flow_counters_mng_init(struct mlx5_dev_ctx_shared *sh)
{
- struct mlx5_hca_attr *attr = &sh->cdev->config.hca_attr;
int i;
memset(&sh->cmng, 0, sizeof(sh->cmng));
@@ -535,10 +534,6 @@ mlx5_flow_counters_mng_init(struct mlx5_dev_ctx_shared *sh)
TAILQ_INIT(&sh->cmng.counters[i]);
rte_spinlock_init(&sh->cmng.csl[i]);
}
- if (sh->devx && !haswell_broadwell_cpu) {
- sh->cmng.relaxed_ordering_write = attr->relaxed_ordering_write;
- sh->cmng.relaxed_ordering_read = attr->relaxed_ordering_read;
- }
}
/**
@@ -553,8 +548,7 @@ mlx5_flow_destroy_counter_stat_mem_mng(struct mlx5_counter_stats_mem_mng *mng)
uint8_t *mem = (uint8_t *)(uintptr_t)mng->raws[0].data;
LIST_REMOVE(mng, next);
- claim_zero(mlx5_devx_cmd_destroy(mng->dm));
- claim_zero(mlx5_os_umem_dereg(mng->umem));
+ mlx5_os_wrapped_mkey_destroy(&mng->wm);
mlx5_free(mem);
}
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index c2a13b6de4..bdadd6e024 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -462,8 +462,7 @@ struct mlx5_flow_counter_pool {
struct mlx5_counter_stats_mem_mng {
LIST_ENTRY(mlx5_counter_stats_mem_mng) next;
struct mlx5_counter_stats_raw *raws;
- struct mlx5_devx_obj *dm;
- void *umem;
+ struct mlx5_pmd_wrapped_mr wm;
};
/* Raw memory structure for the counter statistics values of a pool. */
@@ -494,8 +493,6 @@ struct mlx5_flow_counter_mng {
uint8_t pending_queries;
uint16_t pool_index;
uint8_t query_thread_on;
- bool relaxed_ordering_read;
- bool relaxed_ordering_write;
bool counter_fallback; /* Use counter fallback management. */
LIST_HEAD(mem_mngs, mlx5_counter_stats_mem_mng) mem_mngs;
LIST_HEAD(stat_raws, mlx5_counter_stats_raw) free_stat_raws;
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index 2f30a35525..40625688b0 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -7775,7 +7775,6 @@ mlx5_counter_query(struct rte_eth_dev *dev, uint32_t cnt,
static int
mlx5_flow_create_counter_stat_mem_mng(struct mlx5_dev_ctx_shared *sh)
{
- struct mlx5_devx_mkey_attr mkey_attr;
struct mlx5_counter_stats_mem_mng *mem_mng;
volatile struct flow_counter_stats *raw_data;
int raws_n = MLX5_CNT_CONTAINER_RESIZE + MLX5_MAX_PENDING_QUERIES;
@@ -7785,6 +7784,7 @@ mlx5_flow_create_counter_stat_mem_mng(struct mlx5_dev_ctx_shared *sh)
sizeof(struct mlx5_counter_stats_mem_mng);
size_t pgsize = rte_mem_page_size();
uint8_t *mem;
+ int ret;
int i;
if (pgsize == (size_t)-1) {
@@ -7799,23 +7799,10 @@ mlx5_flow_create_counter_stat_mem_mng(struct mlx5_dev_ctx_shared *sh)
}
mem_mng = (struct mlx5_counter_stats_mem_mng *)(mem + size) - 1;
size = sizeof(*raw_data) * MLX5_COUNTERS_PER_POOL * raws_n;
- mem_mng->umem = mlx5_os_umem_reg(sh->cdev->ctx, mem, size,
- IBV_ACCESS_LOCAL_WRITE);
- if (!mem_mng->umem) {
- rte_errno = errno;
- mlx5_free(mem);
- return -rte_errno;
- }
- memset(&mkey_attr, 0, sizeof(mkey_attr));
- mkey_attr.addr = (uintptr_t)mem;
- mkey_attr.size = size;
- mkey_attr.umem_id = mlx5_os_get_umem_id(mem_mng->umem);
- mkey_attr.pd = sh->cdev->pdn;
- mkey_attr.relaxed_ordering_write = sh->cmng.relaxed_ordering_write;
- mkey_attr.relaxed_ordering_read = sh->cmng.relaxed_ordering_read;
- mem_mng->dm = mlx5_devx_cmd_mkey_create(sh->cdev->ctx, &mkey_attr);
- if (!mem_mng->dm) {
- mlx5_os_umem_dereg(mem_mng->umem);
+ ret = mlx5_os_wrapped_mkey_create(sh->cdev->ctx, sh->cdev->pd,
+ sh->cdev->pdn, mem, size,
+ &mem_mng->wm);
+ if (ret) {
rte_errno = errno;
mlx5_free(mem);
return -rte_errno;
@@ -7934,7 +7921,7 @@ mlx5_flow_query_alarm(void *arg)
ret = mlx5_devx_cmd_flow_counter_query(pool->min_dcs, 0,
MLX5_COUNTERS_PER_POOL,
NULL, NULL,
- pool->raw_hw->mem_mng->dm->id,
+ pool->raw_hw->mem_mng->wm.lkey,
(void *)(uintptr_t)
pool->raw_hw->data,
sh->devx_comp,
--
2.25.1
^ permalink raw reply [flat|nested] 21+ messages in thread
* Re: [dpdk-dev] [PATCH v3 0/5] mlx5: workaround MR issues
2021-11-09 12:36 ` Matan Azrad
` (4 preceding siblings ...)
2021-11-09 12:36 ` [dpdk-dev] [PATCH v3 5/5] net/mlx5: workaround MR creation for flow counter Matan Azrad
@ 2021-11-10 14:55 ` Thomas Monjalon
5 siblings, 0 replies; 21+ messages in thread
From: Thomas Monjalon @ 2021-11-10 14:55 UTC (permalink / raw)
To: Matan Azrad; +Cc: Viacheslav Ovsiienko, dev, michaelba
> Matan Azrad (2):
> common/mlx5: add wrapped MR create API
> vdpa/mlx5: workaround dirty bitmap MR creation
>
> Michael Baum (3):
> common/mlx5: glue MR registration with IOVA
> vdpa/mlx5: workaround guest MR registrations
> net/mlx5: workaround MR creation for flow counter
Applied, thanks.
^ permalink raw reply [flat|nested] 21+ messages in thread
end of thread, other threads:[~2021-11-10 14:55 UTC | newest]
Thread overview: 21+ messages
2021-11-07 15:29 [dpdk-dev] [PATCH 0/5] mlx5: workaround MR issues in FW\kernel Matan Azrad
2021-11-07 15:29 ` [dpdk-dev] [PATCH 1/5] common/mlx5: glue MR registration with IOVA Matan Azrad
2021-11-07 15:29 ` [dpdk-dev] [PATCH 2/5] common/mlx5: add wrapped MR create API Matan Azrad
2021-11-07 15:29 ` [dpdk-dev] [PATCH 3/5] vdpa/mlx5: workaround dirty bitmap MR creation Matan Azrad
2021-11-07 15:29 ` [dpdk-dev] [PATCH 4/5] vdpa/mlx5: workaround guest MR registrations Matan Azrad
2021-11-07 15:29 ` [dpdk-dev] [PATCH 5/5] net/mlx5: workaround counter memory region creation Matan Azrad
2021-11-08 17:21 ` [dpdk-dev] [PATCH v2 0/5] mlx5: workaround MR issues Matan Azrad
2021-11-08 17:21 ` [dpdk-dev] [PATCH v2 1/5] common/mlx5: glue MR registration with IOVA Matan Azrad
2021-11-08 17:21 ` [dpdk-dev] [PATCH v2 2/5] common/mlx5: add wrapped MR create API Matan Azrad
2021-11-08 17:21 ` [dpdk-dev] [PATCH v2 3/5] vdpa/mlx5: workaround dirty bitmap MR creation Matan Azrad
2021-11-08 19:38 ` [dpdk-dev] [dpdk-stable] " Thomas Monjalon
2021-11-08 17:21 ` [dpdk-dev] [PATCH v2 4/5] vdpa/mlx5: workaround guest MR registrations Matan Azrad
2021-11-08 17:21 ` [dpdk-dev] [PATCH v2 5/5] net/mlx5: workaround MR creation for flow counter Matan Azrad
2021-11-09 12:23 ` [dpdk-dev] [PATCH v3 0/5] mlx5: workaround MR issues Matan Azrad
2021-11-09 12:36 ` Matan Azrad
2021-11-09 12:36 ` [dpdk-dev] [PATCH v3 1/5] common/mlx5: glue MR registration with IOVA Matan Azrad
2021-11-09 12:36 ` [dpdk-dev] [PATCH v3 2/5] common/mlx5: add wrapped MR create API Matan Azrad
2021-11-09 12:36 ` [dpdk-dev] [PATCH v3 3/5] vdpa/mlx5: workaround dirty bitmap MR creation Matan Azrad
2021-11-09 12:36 ` [dpdk-dev] [PATCH v3 4/5] vdpa/mlx5: workaround guest MR registrations Matan Azrad
2021-11-09 12:36 ` [dpdk-dev] [PATCH v3 5/5] net/mlx5: workaround MR creation for flow counter Matan Azrad
2021-11-10 14:55 ` [dpdk-dev] [PATCH v3 0/5] mlx5: workaround MR issues Thomas Monjalon