From: Nithin Dabilpuram
To: David Christensen
CC: Nithin Dabilpuram
Date: Fri, 15 Jan 2021 13:02:42 +0530
Message-ID: <20210115073243.7025-3-ndabilpuram@marvell.com>
X-Mailer: git-send-email 2.8.4
In-Reply-To: <20210115073243.7025-1-ndabilpuram@marvell.com>
References: <20201012081106.10610-1-ndabilpuram@marvell.com>
 <20210115073243.7025-1-ndabilpuram@marvell.com>
Subject: [dpdk-stable] [PATCH v8 2/3] vfio: fix DMA mapping granularity for
 type1 IOVA as VA

Partial unmapping is not supported for VFIO IOMMU type1 by the
kernel. Even though the ioctl returns zero, the unmapped size it
reports will not be the same as requested. So check the reported
unmap size and return an error if it differs.

For IOVA as PA mode, DMA mapping is already done at memseg size
granularity. Do the same for IOVA as VA mode: for DMA map/unmap
triggered by heap allocations, maintain memseg page size granularity
so that heap expansion and contraction do not run into this issue.
For user-requested DMA map/unmap, disallow partial unmapping for
VFIO type1.

Fixes: 73a639085938 ("vfio: allow to map other memory regions")
Cc: anatoly.burakov@intel.com
Cc: stable@dpdk.org

Signed-off-by: Nithin Dabilpuram
Acked-by: Anatoly Burakov
Acked-by: David Christensen
---
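As a reference, here is a minimal standalone sketch of the check this
patch adds in vfio_type1_dma_mem_map(): the type1 driver reports the
number of bytes it actually unmapped in dma_unmap.size, so a caller
expecting a full unmap has to compare that against the requested
length. The function unmap_dma_exact() and the container_fd parameter
below are illustrative names, not DPDK or kernel APIs.

/* Sketch only: verify that VFIO_IOMMU_UNMAP_DMA removed the whole
 * requested range. Assumes container_fd is an open VFIO container fd
 * backed by the type1 IOMMU; the ioctl and struct come from
 * <linux/vfio.h>.
 */
#include <errno.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/vfio.h>

int
unmap_dma_exact(int container_fd, uint64_t iova, uint64_t len)
{
	struct vfio_iommu_type1_dma_unmap dma_unmap;

	memset(&dma_unmap, 0, sizeof(dma_unmap));
	dma_unmap.argsz = sizeof(dma_unmap);
	dma_unmap.iova = iova;
	dma_unmap.size = len;

	if (ioctl(container_fd, VFIO_IOMMU_UNMAP_DMA, &dma_unmap) != 0) {
		fprintf(stderr, "unmap failed: %s\n", strerror(errno));
		return -1;
	}

	/* The ioctl can return 0 while unmapping less than requested,
	 * e.g. when asked to split an existing mapping; the bytes
	 * actually unmapped come back in dma_unmap.size.
	 */
	if (dma_unmap.size != len) {
		fprintf(stderr, "partial unmap: %llu of %llu bytes\n",
			(unsigned long long)dma_unmap.size,
			(unsigned long long)len);
		return -1;
	}
	return 0;
}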
 lib/librte_eal/linux/eal_vfio.c | 34 ++++++++++++++++++++++++++++------
 lib/librte_eal/linux/eal_vfio.h |  1 +
 2 files changed, 29 insertions(+), 6 deletions(-)

diff --git a/lib/librte_eal/linux/eal_vfio.c b/lib/librte_eal/linux/eal_vfio.c
index 64b134d..b15b758 100644
--- a/lib/librte_eal/linux/eal_vfio.c
+++ b/lib/librte_eal/linux/eal_vfio.c
@@ -70,6 +70,7 @@ static const struct vfio_iommu_type iommu_types[] = {
 	{
 		.type_id = RTE_VFIO_TYPE1,
 		.name = "Type 1",
+		.partial_unmap = false,
 		.dma_map_func = &vfio_type1_dma_map,
 		.dma_user_map_func = &vfio_type1_dma_mem_map
 	},
@@ -77,6 +78,7 @@ static const struct vfio_iommu_type iommu_types[] = {
 	{
 		.type_id = RTE_VFIO_SPAPR,
 		.name = "sPAPR",
+		.partial_unmap = true,
 		.dma_map_func = &vfio_spapr_dma_map,
 		.dma_user_map_func = &vfio_spapr_dma_mem_map
 	},
@@ -84,6 +86,7 @@ static const struct vfio_iommu_type iommu_types[] = {
 	{
 		.type_id = RTE_VFIO_NOIOMMU,
 		.name = "No-IOMMU",
+		.partial_unmap = true,
 		.dma_map_func = &vfio_noiommu_dma_map,
 		.dma_user_map_func = &vfio_noiommu_dma_mem_map
 	},
@@ -526,12 +529,19 @@ vfio_mem_event_callback(enum rte_mem_event type, const void *addr, size_t len,
 	/* for IOVA as VA mode, no need to care for IOVA addresses */
 	if (rte_eal_iova_mode() == RTE_IOVA_VA && msl->external == 0) {
 		uint64_t vfio_va = (uint64_t)(uintptr_t)addr;
-		if (type == RTE_MEM_EVENT_ALLOC)
-			vfio_dma_mem_map(default_vfio_cfg, vfio_va, vfio_va,
-					len, 1);
-		else
-			vfio_dma_mem_map(default_vfio_cfg, vfio_va, vfio_va,
-					len, 0);
+		uint64_t page_sz = msl->page_sz;
+
+		/* Maintain granularity of DMA map/unmap to memseg size */
+		for (; cur_len < len; cur_len += page_sz) {
+			if (type == RTE_MEM_EVENT_ALLOC)
+				vfio_dma_mem_map(default_vfio_cfg, vfio_va,
+						 vfio_va, page_sz, 1);
+			else
+				vfio_dma_mem_map(default_vfio_cfg, vfio_va,
+						 vfio_va, page_sz, 0);
+
+			vfio_va += page_sz;
+		}
+
 		return;
 	}
 
@@ -1348,6 +1358,12 @@ vfio_type1_dma_mem_map(int vfio_container_fd, uint64_t vaddr, uint64_t iova,
 			RTE_LOG(ERR, EAL, "  cannot clear DMA remapping, error %i (%s)\n",
 					errno, strerror(errno));
 			return -1;
+		} else if (dma_unmap.size != len) {
+			RTE_LOG(ERR, EAL, "  unexpected size %"PRIu64" of DMA "
+				"remapping cleared instead of %"PRIu64"\n",
+				(uint64_t)dma_unmap.size, len);
+			rte_errno = EIO;
+			return -1;
 		}
 	}
 
@@ -1823,6 +1839,12 @@ container_dma_unmap(struct vfio_config *vfio_cfg, uint64_t vaddr, uint64_t iova,
 		/* we're partially unmapping a previously mapped region, so we
 		 * need to split entry into two.
 		 */
+		if (!vfio_cfg->vfio_iommu_type->partial_unmap) {
+			RTE_LOG(DEBUG, EAL, "DMA partial unmap unsupported\n");
+			rte_errno = ENOTSUP;
+			ret = -1;
+			goto out;
+		}
 		if (user_mem_maps->n_maps == VFIO_MAX_USER_MEM_MAPS) {
 			RTE_LOG(ERR, EAL, "Not enough space to store partial mapping\n");
 			rte_errno = ENOMEM;
diff --git a/lib/librte_eal/linux/eal_vfio.h b/lib/librte_eal/linux/eal_vfio.h
index cb2d35f..6ebaca6 100644
--- a/lib/librte_eal/linux/eal_vfio.h
+++ b/lib/librte_eal/linux/eal_vfio.h
@@ -113,6 +113,7 @@ typedef int (*vfio_dma_user_func_t)(int fd, uint64_t vaddr, uint64_t iova,
 struct vfio_iommu_type {
 	int type_id;
 	const char *name;
+	bool partial_unmap;
 	vfio_dma_user_func_t dma_user_map_func;
 	vfio_dma_func_t dma_map_func;
 };
-- 
2.8.4