patches for DPDK stable branches
From: Yunjian Wang <wangyunjian@huawei.com>
To: <dev@dpdk.org>
Cc: <anatoly.burakov@intel.com>, <xiawei40@huawei.com>,
	<luyicai@huawei.com>,  Lipei Liang <lianglipei@huawei.com>,
	<stable@dpdk.org>
Subject: [PATCH] vfio: check if IOVA is already mapped before DMA map
Date: Tue, 10 Sep 2024 11:28:54 +0800	[thread overview]
Message-ID: <1725938934-48952-1-git-send-email-wangyunjian@huawei.com> (raw)

From: Lipei Liang <lianglipei@huawei.com>

If we map two contiguous memory areas A and B, the current implementation
merges the two segments into a single entry, area C. But if areas A and B
are mapped again, the mem maps array contains A, C and B after sorting;
because A and B are separated by C, these segments cannot be merged. In
other words, when adjacent segments A and B are mapped twice, the address
range of A (or B) ends up covered by two map entries.

So when the adjacent areas A and B are later unmapped, the merged entry C
is left behind as a residual entry in the mem maps. If another memory area
D, whose size differs from A's but which lies within area C, is then
mapped, find_user_mem_maps() mistakenly matches area C when area D is
unmapped. Since areas D and C have different chunk sizes, unmapping area D
fails (see the sketch below).
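
For illustration, a rough sketch of the failing sequence before this fix,
using the public container API. The container fd, addresses and sizes are
hypothetical placeholders, not a verified reproducer:

/* Rough sketch of the failing sequence (hypothetical values); assumes
 * cfd is a valid VFIO container fd and (va, iova) refer to an
 * IOVA-contiguous, page-aligned region of at least two pages. */
#include <stdint.h>
#include <rte_vfio.h>

#define PG 0x1000ULL

static void
repro_sketch(int cfd, uint64_t va, uint64_t iova)
{
	/* map adjacent areas A and B; they get merged into one entry C */
	rte_vfio_container_dma_map(cfd, va, iova, PG);           /* A */
	rte_vfio_container_dma_map(cfd, va + PG, iova + PG, PG); /* B */

	/* map A and B again; after sorting, entries A, C, B coexist */
	rte_vfio_container_dma_map(cfd, va, iova, PG);
	rte_vfio_container_dma_map(cfd, va + PG, iova + PG, PG);

	/* unmap A and B; the merged entry C is left behind as residue */
	rte_vfio_container_dma_unmap(cfd, va, iova, PG);
	rte_vfio_container_dma_unmap(cfd, va + PG, iova + PG, PG);

	/* map D (a different size than A, inside C) and try to unmap it:
	 * find_user_mem_maps() matches the stale entry C, whose chunk size
	 * differs from D's, and the unmap fails before this fix */
	rte_vfio_container_dma_map(cfd, va, iova, 2 * PG);
	rte_vfio_container_dma_unmap(cfd, va, iova, 2 * PG);
}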

Fix this by checking whether the IOVA range is already mapped before doing
the DMA map: if the range is entirely covered by an existing entry, return
success without mapping it in VFIO again; if it only partially overlaps an
existing entry, fail the map with rte_errno set to ENOTSUP. The check is
performed only when the IOMMU type does not support partial unmapping.
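
As a sketch of the intended behavior with this patch applied (same
hypothetical setup as above, and only on IOMMU types without partial
unmap support):

/* Behavior sketch after this patch (hypothetical cfd/va/iova/PG as in
 * the sketch above, IOMMU type without partial unmap support). */
static void
behavior_sketch(int cfd, uint64_t va, uint64_t iova)
{
	rte_vfio_container_dma_map(cfd, va, iova, PG);     /* maps A */
	rte_vfio_container_dma_map(cfd, va, iova, PG);     /* 0: already mapped, no second VFIO map */
	rte_vfio_container_dma_map(cfd, va, iova, 2 * PG); /* -1, rte_errno == ENOTSUP: overlaps A */
}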

Fixes: 56259f7fc010 ("vfio: allow partially unmapping adjacent memory")
Cc: stable@dpdk.org

Signed-off-by: Lipei Liang <lianglipei@huawei.com>
---
 lib/eal/linux/eal_vfio.c | 52 ++++++++++++++++++++++++++++++++++++++--
 1 file changed, 50 insertions(+), 2 deletions(-)

diff --git a/lib/eal/linux/eal_vfio.c b/lib/eal/linux/eal_vfio.c
index 4e69e72e3b..cd32284fc6 100644
--- a/lib/eal/linux/eal_vfio.c
+++ b/lib/eal/linux/eal_vfio.c
@@ -216,6 +216,39 @@ copy_maps(struct user_mem_maps *user_mem_maps, struct user_mem_map *add_maps,
 	}
 }
 
+/**
+ * Check if an IOVA area is already mapped or overlaps an existing mapping.
+ * @return
+ *        0 if the IOVA area is not mapped at all
+ *        1 if the IOVA area is already mapped
+ *       -1 if the IOVA area overlaps an existing mapping
+ */
+static int
+check_iova_in_map(struct user_mem_maps *user_mem_maps, uint64_t iova, uint64_t len)
+{
+	int i;
+	uint64_t iova_end = iova + len;
+	uint64_t map_iova_end;
+	uint64_t map_iova_off;
+	uint64_t map_chunk;
+
+	for (i = 0; i < user_mem_maps->n_maps; i++) {
+		map_iova_off = iova - user_mem_maps->maps[i].iova;
+		map_iova_end = user_mem_maps->maps[i].iova + user_mem_maps->maps[i].len;
+		map_chunk = user_mem_maps->maps[i].chunk;
+
+		if ((user_mem_maps->maps[i].iova >= iova_end) || (iova >= map_iova_end))
+			continue;
+
+		if ((user_mem_maps->maps[i].iova <= iova) && (iova_end <= map_iova_end) &&
+			(len == map_chunk) && ((map_iova_off % map_chunk) == 0))
+			return 1;
+
+		return -1;
+	}
+	return 0;
+}
+
 /* try merging two maps into one, return 1 if succeeded */
 static int
 merge_map(struct user_mem_map *left, struct user_mem_map *right)
@@ -1873,6 +1906,7 @@ container_dma_map(struct vfio_config *vfio_cfg, uint64_t vaddr, uint64_t iova,
 	struct user_mem_maps *user_mem_maps;
 	bool has_partial_unmap;
 	int ret = 0;
+	int iova_check = 0;
 
 	user_mem_maps = &vfio_cfg->mem_maps;
 	rte_spinlock_recursive_lock(&user_mem_maps->lock);
@@ -1882,6 +1916,22 @@ container_dma_map(struct vfio_config *vfio_cfg, uint64_t vaddr, uint64_t iova,
 		ret = -1;
 		goto out;
 	}
+
+	/* do we have partial unmap support? */
+	has_partial_unmap = vfio_cfg->vfio_iommu_type->partial_unmap;
+	/* check if we can map this region */
+	if (!has_partial_unmap) {
+		iova_check = check_iova_in_map(user_mem_maps, iova, len);
+		if (iova_check == 1) {
+			goto out;
+		} else if (iova_check < 0) {
+			EAL_LOG(ERR, "Overlapping DMA regions not allowed");
+			rte_errno = ENOTSUP;
+			ret = -1;
+			goto out;
+		}
+	}
+
 	/* map the entry */
 	if (vfio_dma_mem_map(vfio_cfg, vaddr, iova, len, 1)) {
 		/* technically, this will fail if there are currently no devices
@@ -1895,8 +1945,6 @@ container_dma_map(struct vfio_config *vfio_cfg, uint64_t vaddr, uint64_t iova,
 		ret = -1;
 		goto out;
 	}
-	/* do we have partial unmap support? */
-	has_partial_unmap = vfio_cfg->vfio_iommu_type->partial_unmap;
 
 	/* create new user mem map entry */
 	new_map = &user_mem_maps->maps[user_mem_maps->n_maps++];
-- 
2.33.0

