From mboxrd@z Thu Jan  1 00:00:00 1970
From: Nithin Dabilpuram <ndabilpuram@marvell.com>
To: dev@dpdk.org
Cc: anatoly.burakov@intel.com, stable@dpdk.org
Date: Mon, 12 Oct 2020 13:41:06 +0530
Message-ID: <20201012081106.10610-3-ndabilpuram@marvell.com>
X-Mailer: git-send-email 2.8.4
In-Reply-To: <20201012081106.10610-1-ndabilpuram@marvell.com>
References: <20201012081106.10610-1-ndabilpuram@marvell.com>
MIME-Version: 1.0
Content-Type: text/plain
Subject: [dpdk-dev] [PATCH 2/2] vfio: fix partial DMA unmapping for VFIO type1

Partial unmapping is not supported by the kernel for VFIO IOMMU
type1. Although the unmap ioctl returns zero, the unmapped size the
kernel reports back will not match the requested size. So check the
unmap size returned by the kernel and return an error when it differs
from the requested length.

For DMA map/unmap triggered by heap allocations, maintain the
granularity of the memseg page size so that heap expansion and
contraction do not run into this issue.

For user-requested DMA map/unmap, disallow partial unmapping for
VFIO type1.
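To illustrate the kernel behaviour the new check guards against (a
minimal sketch, not part of the patch; it assumes an already
configured VFIO type1 container fd, and unmap_check() is a
hypothetical helper name):

	#include <errno.h>
	#include <stdint.h>
	#include <string.h>
	#include <sys/ioctl.h>
	#include <linux/vfio.h>

	/* Ask the kernel to unmap [iova, iova + len) and verify that it
	 * really unmapped the whole range. For type1, unmapping part of
	 * an original mapping "succeeds" but reports a smaller size.
	 */
	static int
	unmap_check(int container_fd, uint64_t iova, uint64_t len)
	{
		struct vfio_iommu_type1_dma_unmap dma_unmap;

		memset(&dma_unmap, 0, sizeof(dma_unmap));
		dma_unmap.argsz = sizeof(dma_unmap);
		dma_unmap.iova = iova;
		dma_unmap.size = len;

		if (ioctl(container_fd, VFIO_IOMMU_UNMAP_DMA, &dma_unmap) != 0)
			return -errno; /* explicit failure */

		/* The ioctl returned zero, but the kernel writes the size
		 * it actually unmapped back into dma_unmap.size; a partial
		 * request against type1 comes back short.
		 */
		if (dma_unmap.size != len)
			return -EIO;

		return 0;
	}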
Fixes: 73a639085938 ("vfio: allow to map other memory regions")
Cc: anatoly.burakov@intel.com
Cc: stable@dpdk.org

Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
---
 lib/librte_eal/linux/eal_vfio.c | 34 ++++++++++++++++++++++++++++------
 lib/librte_eal/linux/eal_vfio.h |  1 +
 2 files changed, 29 insertions(+), 6 deletions(-)

diff --git a/lib/librte_eal/linux/eal_vfio.c b/lib/librte_eal/linux/eal_vfio.c
index d26e164..ef95259 100644
--- a/lib/librte_eal/linux/eal_vfio.c
+++ b/lib/librte_eal/linux/eal_vfio.c
@@ -69,6 +69,7 @@ static const struct vfio_iommu_type iommu_types[] = {
 	{
 		.type_id = RTE_VFIO_TYPE1,
 		.name = "Type 1",
+		.partial_unmap = false,
 		.dma_map_func = &vfio_type1_dma_map,
 		.dma_user_map_func = &vfio_type1_dma_mem_map
 	},
@@ -76,6 +77,7 @@ static const struct vfio_iommu_type iommu_types[] = {
 	{
 		.type_id = RTE_VFIO_SPAPR,
 		.name = "sPAPR",
+		.partial_unmap = true,
 		.dma_map_func = &vfio_spapr_dma_map,
 		.dma_user_map_func = &vfio_spapr_dma_mem_map
 	},
@@ -83,6 +85,7 @@ static const struct vfio_iommu_type iommu_types[] = {
 	{
 		.type_id = RTE_VFIO_NOIOMMU,
 		.name = "No-IOMMU",
+		.partial_unmap = true,
 		.dma_map_func = &vfio_noiommu_dma_map,
 		.dma_user_map_func = &vfio_noiommu_dma_mem_map
 	},
@@ -525,12 +528,19 @@ vfio_mem_event_callback(enum rte_mem_event type, const void *addr, size_t len,
 	/* for IOVA as VA mode, no need to care for IOVA addresses */
 	if (rte_eal_iova_mode() == RTE_IOVA_VA && msl->external == 0) {
 		uint64_t vfio_va = (uint64_t)(uintptr_t)addr;
-		if (type == RTE_MEM_EVENT_ALLOC)
-			vfio_dma_mem_map(default_vfio_cfg, vfio_va, vfio_va,
-					len, 1);
-		else
-			vfio_dma_mem_map(default_vfio_cfg, vfio_va, vfio_va,
-					len, 0);
+		uint64_t page_sz = msl->page_sz;
+
+		/* Maintain granularity of DMA map/unmap to memseg size */
+		for (; cur_len < len; cur_len += page_sz) {
+			if (type == RTE_MEM_EVENT_ALLOC)
+				vfio_dma_mem_map(default_vfio_cfg, vfio_va,
+						 vfio_va, page_sz, 1);
+			else
+				vfio_dma_mem_map(default_vfio_cfg, vfio_va,
+						 vfio_va, page_sz, 0);
+			vfio_va += page_sz;
+		}
+
 		return;
 	}
 
@@ -1383,6 +1393,12 @@ vfio_type1_dma_mem_map(int vfio_container_fd, uint64_t vaddr, uint64_t iova,
 			RTE_LOG(ERR, EAL, "  cannot clear DMA remapping, error %i (%s)\n",
 					errno, strerror(errno));
 			return -1;
+		} else if (dma_unmap.size != len) {
+			RTE_LOG(ERR, EAL, "  unexpected size %"PRIu64" of DMA "
+				"remapping cleared instead of %"PRIu64"\n",
+				(uint64_t)dma_unmap.size, len);
+			rte_errno = EIO;
+			return -1;
 		}
 	}
 
@@ -1853,6 +1869,12 @@ container_dma_unmap(struct vfio_config *vfio_cfg, uint64_t vaddr, uint64_t iova,
 		/* we're partially unmapping a previously mapped region, so we
 		 * need to split entry into two.
 		 */
+		if (!vfio_cfg->vfio_iommu_type->partial_unmap) {
+			RTE_LOG(DEBUG, EAL, "DMA partial unmap unsupported\n");
+			rte_errno = ENOTSUP;
+			ret = -1;
+			goto out;
+		}
 		if (user_mem_maps->n_maps == VFIO_MAX_USER_MEM_MAPS) {
 			RTE_LOG(ERR, EAL, "Not enough space to store partial mapping\n");
 			rte_errno = ENOMEM;
diff --git a/lib/librte_eal/linux/eal_vfio.h b/lib/librte_eal/linux/eal_vfio.h
index cb2d35f..6ebaca6 100644
--- a/lib/librte_eal/linux/eal_vfio.h
+++ b/lib/librte_eal/linux/eal_vfio.h
@@ -113,6 +113,7 @@ typedef int (*vfio_dma_user_func_t)(int fd, uint64_t vaddr, uint64_t iova,
 struct vfio_iommu_type {
 	int type_id;
 	const char *name;
+	bool partial_unmap;
 	vfio_dma_user_func_t dma_user_map_func;
 	vfio_dma_func_t dma_map_func;
 };
--
2.8.4
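
For reference, the caller-visible effect of the container_dma_unmap()
change can be sketched as below (hypothetical usage, not part of the
patch; it assumes vaddr/iova/len describe a region previously mapped
by a single rte_vfio_container_dma_map() call, and PAGE_2M stands in
for that mapping's page size):

	#include <errno.h>
	#include <stdint.h>
	#include <rte_errno.h>
	#include <rte_vfio.h>

	#define PAGE_2M (2ULL * 1024 * 1024)

	static void
	partial_unmap_demo(uint64_t vaddr, uint64_t iova, uint64_t len)
	{
		int ret;

		/* try to unmap only the first page of the region */
		ret = rte_vfio_container_dma_unmap(RTE_VFIO_DEFAULT_CONTAINER_FD,
						   vaddr, iova, PAGE_2M);

		/* with a type1 IOMMU this now fails up front with ENOTSUP
		 * instead of silently leaving the other pages mapped
		 */
		if (ret < 0 && rte_errno == ENOTSUP) {
			/* type1 cannot split mappings: unmap the whole region */
			rte_vfio_container_dma_unmap(RTE_VFIO_DEFAULT_CONTAINER_FD,
						     vaddr, iova, len);
		}
	}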