patches for DPDK stable branches
* [PATCH v2] vfio: check iova if already mapped before do map
@ 2024-09-10 13:18 Yunjian Wang
  2024-10-08  9:04 ` David Marchand
  0 siblings, 1 reply; 2+ messages in thread
From: Yunjian Wang @ 2024-09-10 13:18 UTC (permalink / raw)
  To: dev; +Cc: anatoly.burakov, xiawei40, luyicai, Lipei Liang, stable

From: Lipei Liang <lianglipei@huawei.com>

When mapping two adjacent memory areas A and B, the current implementation
merges them into one segment, area C. If A and B are then mapped again,
the memory maps end up with separate entries for A, C, and B, as C sits
between A and B; mapping A and B twice therefore leaves two map entries
for each. When A and B are later partially unmapped, the entry for C
remains in the memory maps. If we then map an area D that has a different
size than A but falls within C, find_user_mem_maps() may mistakenly
select C when attempting to unmap D; because A and C have different chunk
sizes, the unmap of D fails.

To fix this, check the iova before performing the DMA mapping. If the
iova is already mapped, skip the VFIO mapping. If the iova overlaps an
existing entry in the memory maps, return -1 with rte_errno set to
ENOTSUP. Only if the iova does not overlap any entry do we proceed with
the DMA mapping.

Fixes: 56259f7fc010 ("vfio: allow partially unmapping adjacent memory")
Cc: stable@dpdk.org

Signed-off-by: Lipei Liang <lianglipei@huawei.com>
---
v2: update commit log
---
 lib/eal/linux/eal_vfio.c | 52 ++++++++++++++++++++++++++++++++++++++--
 1 file changed, 50 insertions(+), 2 deletions(-)

diff --git a/lib/eal/linux/eal_vfio.c b/lib/eal/linux/eal_vfio.c
index 4e69e72e3b..cd32284fc6 100644
--- a/lib/eal/linux/eal_vfio.c
+++ b/lib/eal/linux/eal_vfio.c
@@ -216,6 +216,39 @@ copy_maps(struct user_mem_maps *user_mem_maps, struct user_mem_map *add_maps,
 	}
 }
 
+/**
+ * Check whether an iova range is already mapped or overlaps an existing map.
+ * @return
+ *        0 if the iova range does not overlap any existing map
+ *        1 if the iova range is already mapped
+ *       -1 if the iova range overlaps an existing map without matching it
+ */
+static int
+check_iova_in_map(struct user_mem_maps *user_mem_maps, uint64_t iova, uint64_t len)
+{
+	int i;
+	uint64_t iova_end = iova + len;
+	uint64_t map_iova_end;
+	uint64_t map_iova_off;
+	uint64_t map_chunk;
+
+	for (i = 0; i < user_mem_maps->n_maps; i++) {
+		map_iova_off = iova - user_mem_maps->maps[i].iova;
+		map_iova_end = user_mem_maps->maps[i].iova + user_mem_maps->maps[i].len;
+		map_chunk = user_mem_maps->maps[i].chunk;
+
+		if ((user_mem_maps->maps[i].iova >= iova_end) || (iova >= map_iova_end))
+			continue;
+
+		if ((user_mem_maps->maps[i].iova <= iova) && (iova_end <= map_iova_end) &&
+			(len == map_chunk) && ((map_iova_off % map_chunk) == 0))
+			return 1;
+
+		return -1;
+	}
+	return 0;
+}
+
 /* try merging two maps into one, return 1 if succeeded */
 static int
 merge_map(struct user_mem_map *left, struct user_mem_map *right)
@@ -1873,6 +1906,7 @@ container_dma_map(struct vfio_config *vfio_cfg, uint64_t vaddr, uint64_t iova,
 	struct user_mem_maps *user_mem_maps;
 	bool has_partial_unmap;
 	int ret = 0;
+	int iova_check = 0;
 
 	user_mem_maps = &vfio_cfg->mem_maps;
 	rte_spinlock_recursive_lock(&user_mem_maps->lock);
@@ -1882,6 +1916,22 @@ container_dma_map(struct vfio_config *vfio_cfg, uint64_t vaddr, uint64_t iova,
 		ret = -1;
 		goto out;
 	}
+
+	/* do we have partial unmap support? */
+	has_partial_unmap = vfio_cfg->vfio_iommu_type->partial_unmap;
+	/* check if we can map this region */
+	if (!has_partial_unmap) {
+		iova_check = check_iova_in_map(user_mem_maps, iova, len);
+		if (iova_check == 1) {
+			goto out;
+		} else if (iova_check < 0) {
+			EAL_LOG(ERR, "Overlapping DMA regions not allowed");
+			rte_errno = ENOTSUP;
+			ret = -1;
+			goto out;
+		}
+	}
+
 	/* map the entry */
 	if (vfio_dma_mem_map(vfio_cfg, vaddr, iova, len, 1)) {
 		/* technically, this will fail if there are currently no devices
@@ -1895,8 +1945,6 @@ container_dma_map(struct vfio_config *vfio_cfg, uint64_t vaddr, uint64_t iova,
 		ret = -1;
 		goto out;
 	}
-	/* do we have partial unmap support? */
-	has_partial_unmap = vfio_cfg->vfio_iommu_type->partial_unmap;
 
 	/* create new user mem map entry */
 	new_map = &user_mem_maps->maps[user_mem_maps->n_maps++];
-- 
2.33.0



* Re: [PATCH v2] vfio: check iova if already mapped before do map
  2024-09-10 13:18 [PATCH v2] vfio: check iova if already mapped before do map Yunjian Wang
@ 2024-10-08  9:04 ` David Marchand
  0 siblings, 0 replies; 2+ messages in thread
From: David Marchand @ 2024-10-08  9:04 UTC (permalink / raw)
  To: anatoly.burakov; +Cc: dev, Yunjian Wang, xiawei40, luyicai, Lipei Liang, stable

Hello Anatoly,

On Tue, Sep 10, 2024 at 3:19 PM Yunjian Wang <wangyunjian@huawei.com> wrote:
>
> From: Lipei Liang <lianglipei@huawei.com>
>
> When mapping two adjacent memory areas A and B, the current implementation
> merges them into one segment, area C. If A and B are then mapped again,
> the memory maps end up with separate entries for A, C, and B, as C sits
> between A and B; mapping A and B twice therefore leaves two map entries
> for each. When A and B are later partially unmapped, the entry for C
> remains in the memory maps. If we then map an area D that has a different
> size than A but falls within C, find_user_mem_maps() may mistakenly
> select C when attempting to unmap D; because A and C have different chunk
> sizes, the unmap of D fails.
>
> To fix this, check the iova before performing the DMA mapping. If the
> iova is already mapped, skip the VFIO mapping. If the iova overlaps an
> existing entry in the memory maps, return -1 with rte_errno set to
> ENOTSUP. Only if the iova does not overlap any entry do we proceed with
> the DMA mapping.
>
> Fixes: 56259f7fc010 ("vfio: allow partially unmapping adjacent memory")
> Cc: stable@dpdk.org
>
> Signed-off-by: Lipei Liang <lianglipei@huawei.com>

Review, please.


-- 
David Marchand


