From: Chaitanya Babu Talluri
To: dev@dpdk.org
Cc: reshma.pattan@intel.com, jananeex.m.parthasarathy@intel.com,
 anatoly.burakov@intel.com, Chaitanya Babu Talluri, stable@dpdk.org
Date: Wed, 21 Aug 2019 14:02:53 +0100
Message-Id: <1566392575-7965-2-git-send-email-tallurix.chaitanya.babu@intel.com>
X-Mailer: git-send-email 1.7.0.7
In-Reply-To: <1566392575-7965-1-git-send-email-tallurix.chaitanya.babu@intel.com>
References: <1566392575-7965-1-git-send-email-tallurix.chaitanya.babu@intel.com>
Subject: [dpdk-dev] [PATCH 1/3] lib/eal: fix vfio unmap that fails unexpectedly

Unmap of multiple pages fails after a sequence of partial map/unmaps.
The scenario is that multiple maps are created in user_mem_maps after
multiple map/unmap/remap sequences.

For example:
1. Map 3 pages together
2. Unmap page 1
3. Remap page 1
4. Unmap page 2
5. Remap page 2
6. Unmap page 3
7. Remap page 3
8. Unmap all pages

The unmap fails when there are duplicate entries in user_mem_maps.
The fix is to check whether the input VA and IOVA already exist in
user_mem_maps before creating a new map.
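The overlap check described above reduces to a standard half-open
interval test on both the VA and the IOVA range. A minimal standalone
sketch of that test (the names range_overlaps/main are illustrative
only and do not appear in the patch):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/*
 * Sketch of the overlap test, assuming half-open [start, start + len)
 * ranges: two ranges overlap unless one ends at or before the other
 * starts.
 */
static bool
range_overlaps(uint64_t a_start, uint64_t a_len,
		uint64_t b_start, uint64_t b_len)
{
	uint64_t a_end = a_start + a_len;
	uint64_t b_end = b_start + b_len;

	return !(a_end <= b_start || b_end <= a_start);
}

int
main(void)
{
	/* existing map: three pages at 0x0000..0x3000 */
	printf("%d\n", range_overlaps(0x0000, 0x1000, 0x0000, 0x3000)); /* 1: overlaps */
	printf("%d\n", range_overlaps(0x3000, 0x1000, 0x0000, 0x3000)); /* 0: adjacent */
	return 0;
}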
Fixes: 73a63908 ("vfio: allow to map other memory regions")
Cc: stable@dpdk.org

Signed-off-by: Chaitanya Babu Talluri
---
 lib/librte_eal/linux/eal/eal_vfio.c | 46 +++++++++++++++++++++++++++++
 1 file changed, 46 insertions(+)

diff --git a/lib/librte_eal/linux/eal/eal_vfio.c b/lib/librte_eal/linux/eal/eal_vfio.c
index 501c74f23..104912077 100644
--- a/lib/librte_eal/linux/eal/eal_vfio.c
+++ b/lib/librte_eal/linux/eal/eal_vfio.c
@@ -212,6 +212,41 @@ find_user_mem_map(struct user_mem_maps *user_mem_maps, uint64_t addr,
 	return NULL;
 }
 
+static int
+find_user_mem_map_overlap(struct user_mem_maps *user_mem_maps, uint64_t addr,
+		uint64_t iova, uint64_t len)
+{
+	uint64_t va_end = addr + len;
+	uint64_t iova_end = iova + len;
+	int i;
+
+	for (i = 0; i < user_mem_maps->n_maps; i++) {
+		struct user_mem_map *map = &user_mem_maps->maps[i];
+		uint64_t map_va_end = map->addr + map->len;
+		uint64_t map_iova_end = map->iova + map->len;
+
+		bool no_lo_va_overlap = addr < map->addr && va_end <= map->addr;
+		bool no_hi_va_overlap = addr >= map_va_end &&
+				va_end > map_va_end;
+		bool no_lo_iova_overlap = iova < map->iova &&
+				iova_end <= map->iova;
+		bool no_hi_iova_overlap = iova >= map_iova_end &&
+				iova_end > map_iova_end;
+
+		/* check that the input VA and IOVA are not within an
+		 * existing map's range
+		 */
+		if ((no_lo_va_overlap || no_hi_va_overlap) &&
+			(no_lo_iova_overlap || no_hi_iova_overlap))
+			continue;
+		else
+			/* map overlaps */
+			return 1;
+	}
+	/* map doesn't overlap */
+	return 0;
+}
+
 /* this will sort all user maps, and merge/compact any adjacent maps */
 static void
 compact_user_maps(struct user_mem_maps *user_mem_maps)
@@ -1732,6 +1767,17 @@ container_dma_map(struct vfio_config *vfio_cfg, uint64_t vaddr, uint64_t iova,
 		ret = -1;
 		goto out;
 	}
+
+	/* check whether vaddr and iova already exist in user_mem_maps */
+	ret = find_user_mem_map_overlap(user_mem_maps, vaddr, iova, len);
+	if (ret) {
+		RTE_LOG(ERR, EAL, "Mapping overlaps with a previously "
+				"existing mapping\n");
+		rte_errno = EEXIST;
+		ret = -1;
+		goto out;
+	}
+
 	/* map the entry */
 	if (vfio_dma_mem_map(vfio_cfg, vaddr, iova, len, 1)) {
 		/* technically, this will fail if there are currently no devices
-- 
2.17.2
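For illustration, a minimal usage sketch of the behaviour this patch
introduces, assuming an initialized EAL, a valid VFIO container fd and
a DMA-able buffer (the helper name map_range_twice is hypothetical):
with the overlap check in place, the second, overlapping call is
expected to fail with rte_errno set to EEXIST.

#include <stdint.h>
#include <stdio.h>

#include <rte_errno.h>
#include <rte_vfio.h>

/* map the same VA/IOVA range twice; the second call should be rejected */
static void
map_range_twice(int container_fd, uint64_t vaddr, uint64_t iova, uint64_t len)
{
	if (rte_vfio_container_dma_map(container_fd, vaddr, iova, len) < 0) {
		printf("first map failed: rte_errno=%d\n", rte_errno);
		return;
	}

	if (rte_vfio_container_dma_map(container_fd, vaddr, iova, len) < 0)
		printf("overlapping map rejected: rte_errno=%d\n", rte_errno);
}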