From mboxrd@z Thu Jan 1 00:00:00 1970
From: Yunjian Wang
CC: Lipei Liang
Subject: [PATCH v2] vfio: check if IOVA is already mapped before doing DMA map
Date: Tue, 10 Sep 2024 21:18:58 +0800
Message-ID: <1725974338-56108-1-git-send-email-wangyunjian@huawei.com>

From: Lipei Liang

When two adjacent memory areas A and B are mapped, the current
implementation merges them into a single segment, referred to here as
area C. If A and B are then mapped again, the memory maps end up with
separate entries for A, C, and B, because C sits between A and B; in
other words, mapping A and B twice leaves two map entries covering each
of them. When A and B are later partially unmapped, the entry for C
remains in the memory maps. If we then map an area D that has a
different size than A but falls within area C, find_user_mem_maps() may
mistakenly select the entry for C when D is unmapped. Because A and C
have different chunk sizes, the unmap of D fails.

Fix this by checking the IOVA before performing the DMA mapping. If the
IOVA is already mapped, skip the VFIO mapping. If the IOVA overlaps an
existing entry in the memory maps, return -1 with rte_errno set to
ENOTSUP. Only if the IOVA does not overlap any existing entry do we
proceed with the DMA mapping.
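
For reference, the sequence described above can be driven through the
public container DMA API (rte_vfio_container_dma_map() /
rte_vfio_container_dma_unmap()). Below is a minimal, hypothetical sketch,
not a verbatim reproducer: the addresses, lengths and container fd are
placeholders, and the buffers are assumed to be registered with EAL.

#include <stdint.h>
#include <stdio.h>

#include <rte_errno.h>
#include <rte_vfio.h>

/*
 * Hypothetical placeholder values: on a real system the vaddr/iova ranges
 * must refer to memory registered with EAL for these calls to succeed.
 */
#define AREA_LEN    0x200000ULL                /* 2 MB per area */
#define AREA_A_VA   0x100000000ULL
#define AREA_A_IOVA 0x100000000ULL
#define AREA_B_VA   (AREA_A_VA + AREA_LEN)     /* B is adjacent to A */
#define AREA_B_IOVA (AREA_A_IOVA + AREA_LEN)

void
map_unmap_sequence(int container_fd)
{
	/* map A and B back to back; adjacent user mem map entries get
	 * merged into one ("area C" in the commit log above) */
	rte_vfio_container_dma_map(container_fd, AREA_A_VA, AREA_A_IOVA, AREA_LEN);
	rte_vfio_container_dma_map(container_fd, AREA_B_VA, AREA_B_IOVA, AREA_LEN);

	/* partially unmap the merged region by removing A only */
	rte_vfio_container_dma_unmap(container_fd, AREA_A_VA, AREA_A_IOVA, AREA_LEN);

	/* with this patch, when the IOMMU type has no partial-unmap support,
	 * check_iova_in_map() runs before the VFIO map: an already-mapped
	 * range is skipped, an overlapping one fails with ENOTSUP */
	if (rte_vfio_container_dma_map(container_fd, AREA_B_VA, AREA_B_IOVA, AREA_LEN) < 0)
		printf("map failed: %s\n", rte_strerror(rte_errno));
}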
Fixes: 56259f7fc010 ("vfio: allow partially unmapping adjacent memory")
Cc: stable@dpdk.org

Signed-off-by: Lipei Liang
---
v2: update commit log
---
 lib/eal/linux/eal_vfio.c | 52 ++++++++++++++++++++++++++++++++++++++--
 1 file changed, 50 insertions(+), 2 deletions(-)

diff --git a/lib/eal/linux/eal_vfio.c b/lib/eal/linux/eal_vfio.c
index 4e69e72e3b..cd32284fc6 100644
--- a/lib/eal/linux/eal_vfio.c
+++ b/lib/eal/linux/eal_vfio.c
@@ -216,6 +216,39 @@ copy_maps(struct user_mem_maps *user_mem_maps, struct user_mem_map *add_maps,
 	}
 }
 
+/**
+ * Check if an IOVA area is already mapped or overlaps with an existing mapping.
+ * @return
+ *	0 if the IOVA area is not present in the maps
+ *	1 if the IOVA area is already mapped
+ *	-1 if the IOVA area overlaps with an existing mapping
+ */
+static int
+check_iova_in_map(struct user_mem_maps *user_mem_maps, uint64_t iova, uint64_t len)
+{
+	int i;
+	uint64_t iova_end = iova + len;
+	uint64_t map_iova_end;
+	uint64_t map_iova_off;
+	uint64_t map_chunk;
+
+	for (i = 0; i < user_mem_maps->n_maps; i++) {
+		map_iova_off = iova - user_mem_maps->maps[i].iova;
+		map_iova_end = user_mem_maps->maps[i].iova + user_mem_maps->maps[i].len;
+		map_chunk = user_mem_maps->maps[i].chunk;
+
+		if ((user_mem_maps->maps[i].iova >= iova_end) || (iova >= map_iova_end))
+			continue;
+
+		if ((user_mem_maps->maps[i].iova <= iova) && (iova_end <= map_iova_end) &&
+		    (len == map_chunk) && ((map_iova_off % map_chunk) == 0))
+			return 1;
+
+		return -1;
+	}
+	return 0;
+}
+
 /* try merging two maps into one, return 1 if succeeded */
 static int
 merge_map(struct user_mem_map *left, struct user_mem_map *right)
@@ -1873,6 +1906,7 @@ container_dma_map(struct vfio_config *vfio_cfg, uint64_t vaddr, uint64_t iova,
 	struct user_mem_maps *user_mem_maps;
 	bool has_partial_unmap;
 	int ret = 0;
+	int iova_check = 0;
 
 	user_mem_maps = &vfio_cfg->mem_maps;
 	rte_spinlock_recursive_lock(&user_mem_maps->lock);
@@ -1882,6 +1916,22 @@ container_dma_map(struct vfio_config *vfio_cfg, uint64_t vaddr, uint64_t iova,
 		ret = -1;
 		goto out;
 	}
+
+	/* do we have partial unmap support? */
+	has_partial_unmap = vfio_cfg->vfio_iommu_type->partial_unmap;
+	/* check if we can map this region */
+	if (!has_partial_unmap) {
+		iova_check = check_iova_in_map(user_mem_maps, iova, len);
+		if (iova_check == 1) {
+			goto out;
+		} else if (iova_check < 0) {
+			EAL_LOG(ERR, "Overlapping DMA regions not allowed");
+			rte_errno = ENOTSUP;
+			ret = -1;
+			goto out;
+		}
+	}
+
 	/* map the entry */
 	if (vfio_dma_mem_map(vfio_cfg, vaddr, iova, len, 1)) {
 		/* technically, this will fail if there are currently no devices
@@ -1895,8 +1945,6 @@ container_dma_map(struct vfio_config *vfio_cfg, uint64_t vaddr, uint64_t iova,
 		ret = -1;
 		goto out;
 	}
-	/* do we have partial unmap support? */
-	has_partial_unmap = vfio_cfg->vfio_iommu_type->partial_unmap;
 
 	/* create new user mem map entry */
 	new_map = &user_mem_maps->maps[user_mem_maps->n_maps++];
-- 
2.33.0
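
To make the helper's return-value convention easier to follow, here is a
small standalone sketch of the same containment/overlap test against a
simplified, made-up map table; the real code walks user_mem_maps->maps[]
under the recursive spinlock in eal_vfio.c.

#include <stdint.h>
#include <stdio.h>

/* simplified stand-in for one entry of user_mem_maps->maps[] */
struct mem_map {
	uint64_t iova;
	uint64_t len;
	uint64_t chunk;
};

/* same decision as check_iova_in_map(): 0 = not present, 1 = already
 * mapped (chunk-sized and chunk-aligned inside an entry), -1 = overlap */
static int
check_iova(const struct mem_map *maps, int n_maps, uint64_t iova, uint64_t len)
{
	uint64_t iova_end = iova + len;
	int i;

	for (i = 0; i < n_maps; i++) {
		uint64_t map_end = maps[i].iova + maps[i].len;

		/* no overlap with this entry at all */
		if (maps[i].iova >= iova_end || iova >= map_end)
			continue;

		/* fully contained, chunk-sized and chunk-aligned */
		if (maps[i].iova <= iova && iova_end <= map_end &&
				len == maps[i].chunk &&
				(iova - maps[i].iova) % maps[i].chunk == 0)
			return 1;

		/* any other overlap is rejected */
		return -1;
	}
	return 0;
}

int
main(void)
{
	/* one existing entry covering IOVA [0x1000, 0x3000) in 0x1000 chunks */
	struct mem_map maps[] = { { 0x1000, 0x2000, 0x1000 } };

	printf("%d\n", check_iova(maps, 1, 0x4000, 0x1000)); /* 0: untouched */
	printf("%d\n", check_iova(maps, 1, 0x2000, 0x1000)); /* 1: already mapped */
	printf("%d\n", check_iova(maps, 1, 0x2800, 0x1000)); /* -1: overlaps */
	return 0;
}

Compiled on its own, the three calls print 0, 1 and -1, matching the "not
present", "already mapped" and "overlapping" cases handled in
container_dma_map().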