From: Yajun Wu <yajunw@nvidia.com>
To: <orika@nvidia.com>, <viacheslavo@nvidia.com>, <matan@nvidia.com>,
<shahafs@nvidia.com>,
Maxime Coquelin <maxime.coquelin@redhat.com>
Cc: <dev@dpdk.org>, <thomas@monjalon.net>, <rasland@nvidia.com>,
<roniba@nvidia.com>, <stable@dpdk.org>
Subject: [PATCH] vdpa/mlx5: workaround var offset within page
Date: Mon, 14 Mar 2022 03:44:12 +0200
Message-ID: <20220314014414.251869-1-yajunw@nvidia.com>

The vDPA driver first asks the kernel driver to allocate a doorbell
(VAR) area for each device. It then uses var->mmap_off and var->length
to mmap the uverbs device file and obtain the doorbell's userspace
virtual address.
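
For illustration only, a minimal sketch of that mapping step; uverbs_fd
and var are placeholders, not the exact driver code:

    /* Needs <sys/mman.h> and <errno.h>; uverbs_fd/var are placeholders. */
    void *db_base = mmap(NULL, var->length, PROT_READ | PROT_WRITE,
                         MAP_SHARED, uverbs_fd, var->mmap_off);
    if (db_base == MAP_FAILED)
        return -errno;
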
The current kernel driver reports var->mmap_off as the page-aligned
start of the VAR. That is fine on x86 servers with 4K pages, because
the VAR physical address is 4K aligned and therefore sits exactly at
the start of a 4K page.

On aarch64 servers with 64K pages, however, the actual VAR physical
address has an offset within the page (it does not sit at the start of
a 64K page). The vDPA driver therefore has to add this within-page
offset (caps.doorbell_bar_offset) to the mapped address to get the
correct VAR virtual address.
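
As a worked sketch of the arithmetic (db_base, var and caps as in the
snippet above; the value is only an example): if doorbell_bar_offset
were 0x1000, a 4K page mask keeps 0 (VAR at the page start), while a
64K page mask keeps 0x1000, which must be added to db_base.

    /* Keep only the within-page part of the doorbell BAR offset. */
    uint64_t off = caps.doorbell_bar_offset & (rte_mem_page_size() - 1);
    void *db_addr = (char *)db_base + off;   /* doorbell virtual address */
    /* On release, unmap from the page-aligned base, not from db_addr. */
    munmap((void *)((uintptr_t)db_addr & ~(rte_mem_page_size() - 1ULL)),
           var->length);
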
Fixes: 62c813706e4 ("vdpa/mlx5: map doorbell")
Cc: stable@dpdk.org
Signed-off-by: Yajun Wu <yajunw@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
drivers/vdpa/mlx5/mlx5_vdpa_virtq.c | 10 +++++++++-
1 file changed, 9 insertions(+), 1 deletion(-)
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
index 3416797d28..0748710a76 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
@@ -9,6 +9,7 @@
#include <rte_malloc.h>
#include <rte_errno.h>
#include <rte_io.h>
+#include <rte_eal_paging.h>
#include <mlx5_common.h>
@@ -123,7 +124,10 @@ mlx5_vdpa_virtqs_release(struct mlx5_vdpa_priv *priv)
priv->td = NULL;
}
if (priv->virtq_db_addr) {
- claim_zero(munmap(priv->virtq_db_addr, priv->var->length));
+ /* Mask out the within-page offset for munmap. */
+ claim_zero(munmap((void *)((uint64_t)priv->virtq_db_addr &
+ ~(rte_mem_page_size() - 1ULL)),
+ priv->var->length));
priv->virtq_db_addr = NULL;
}
priv->features = 0;
@@ -486,6 +490,10 @@ mlx5_vdpa_virtqs_prepare(struct mlx5_vdpa_priv *priv)
priv->virtq_db_addr = NULL;
goto error;
} else {
+ /* Add the within-page offset for 64K page systems. */
+ priv->virtq_db_addr = (char *)priv->virtq_db_addr +
+ ((rte_mem_page_size() - 1) &
+ priv->caps.doorbell_bar_offset);
DRV_LOG(DEBUG, "VAR address of doorbell mapping is %p.",
priv->virtq_db_addr);
}
--
2.27.0