From: Yajun Wu <yajunw@nvidia.com>
To: <stable@dpdk.org>
Cc: Matan Azrad <matan@nvidia.com>,
Maxime Coquelin <maxime.coquelin@redhat.com>
Subject: [PATCH 20.11] vdpa/mlx5: workaround var offset within page
Date: Thu, 21 Jul 2022 08:42:39 +0300
Message-ID: <20220721054239.378884-1-yajunw@nvidia.com>
[ upstream commit 95af59b7ad9f6a465de2ead9ef709429678e750d ]
The vDPA driver first uses the kernel driver to allocate a doorbell
(VAR) area for each device, then uses var->mmap_off and var->length to
mmap the uverbs device file and obtain the doorbell userspace virtual
address.

The current kernel driver provides var->mmap_off pointing at the page
start of the VAR. This is fine on an x86 server with 4K pages, because
the VAR physical address is 4K aligned and therefore sits exactly at a
4K page start.

On an aarch64 server with 64K pages, however, the actual VAR physical
address has an offset within the page (it does not sit at a 64K page
start). The vDPA driver therefore needs to add this within-page offset
(caps.doorbell_bar_offset) to get the right VAR virtual address.
Fixes: 62c813706e4 ("vdpa/mlx5: map doorbell")
Signed-off-by: Yajun Wu <yajunw@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
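Note for reviewers: the address arithmetic used by this patch is
sketched below. This is an illustration only, not part of the patch;
the map_doorbell/unmap_doorbell helpers and their parameters are
hypothetical placeholders that mirror the mmap/munmap handling in the
diff, assuming the page size is a power of two.

#include <stdint.h>
#include <stddef.h>
#include <sys/types.h>
#include <sys/mman.h>

/*
 * Map the doorbell (VAR) area and return the doorbell virtual address.
 * mmap() always returns a page-aligned pointer, so on a 64K-page system
 * the within-page offset of the doorbell (doorbell_bar_offset) has to
 * be added back by the caller.
 */
static void *
map_doorbell(int uverbs_fd, off_t mmap_off, size_t length,
	     uint64_t doorbell_bar_offset, size_t page_size)
{
	void *base = mmap(NULL, length, PROT_READ | PROT_WRITE,
			  MAP_SHARED, uverbs_fd, mmap_off);

	if (base == MAP_FAILED)
		return NULL;
	/* Keep only the offset within one page and add it to the mapping. */
	return (char *)base + (doorbell_bar_offset & (page_size - 1));
}

/*
 * Undo the mapping: mask the within-page offset back out so munmap()
 * receives the page-aligned address that mmap() originally returned.
 */
static int
unmap_doorbell(void *db_addr, size_t length, size_t page_size)
{
	void *base = (void *)((uintptr_t)db_addr & ~(page_size - 1));

	return munmap(base, length);
}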
drivers/vdpa/mlx5/mlx5_vdpa_virtq.c | 9 ++++++++-
1 file changed, 8 insertions(+), 1 deletion(-)
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
index 0ef7ed0e4a..952e641425 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
@@ -9,6 +9,7 @@
#include <rte_malloc.h>
#include <rte_errno.h>
#include <rte_io.h>
+#include <rte_eal_paging.h>
#include <mlx5_common.h>
@@ -122,7 +123,9 @@ mlx5_vdpa_virtqs_release(struct mlx5_vdpa_priv *priv)
priv->td = NULL;
}
if (priv->virtq_db_addr) {
- claim_zero(munmap(priv->virtq_db_addr, priv->var->length));
+ /* Mask out the within page offset for munmap. */
+ claim_zero(munmap((void *)((uintptr_t)priv->virtq_db_addr &
+ ~(rte_mem_page_size() - 1)), priv->var->length));
priv->virtq_db_addr = NULL;
}
priv->features = 0;
@@ -485,6 +488,10 @@ mlx5_vdpa_virtqs_prepare(struct mlx5_vdpa_priv *priv)
priv->virtq_db_addr = NULL;
goto error;
} else {
+ /* Add within page offset for 64K page system. */
+ priv->virtq_db_addr = (char *)priv->virtq_db_addr +
+ ((rte_mem_page_size() - 1) &
+ priv->caps.doorbell_bar_offset);
DRV_LOG(DEBUG, "VAR address of doorbell mapping is %p.",
priv->virtq_db_addr);
}
--
2.27.0