From mboxrd@z Thu Jan  1 00:00:00 1970
From: Shahaf Shuler <shahafs@mellanox.com>
To: anatoly.burakov@intel.com, yskoh@mellanox.com, thomas@monjalon.net,
	ferruh.yigit@intel.com, nhorman@tuxdriver.com, gaetan.rivet@6wind.com
Cc: dev@dpdk.org
Date: Tue, 5 Mar 2019 15:59:42 +0200
Message-Id: <1e8400f68a2fb1ceb07127c72f0874bb881e5d80.1551793527.git.shahafs@mellanox.com>
X-Mailer: git-send-email 2.12.0
Subject: [dpdk-dev] [PATCH v3 2/6] vfio: don't fail to DMA map if memory is already mapped

Currently the vfio DMA map function fails when the same memory segment
is mapped twice. This is too strict, as mapping the same memory twice
is not an error.

Instead, use the kernel return value to detect this state and have the
DMA map function return successfully. For type1 mapping the kernel
driver returns EEXIST. For spapr mapping EBUSY is returned since
kernel 4.10.

Signed-off-by: Shahaf Shuler <shahafs@mellanox.com>
Acked-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
 lib/librte_eal/linuxapp/eal/eal_vfio.c | 32 ++++++++++++++++++++++++++++----
 1 file changed, 28 insertions(+), 4 deletions(-)

diff --git a/lib/librte_eal/linuxapp/eal/eal_vfio.c b/lib/librte_eal/linuxapp/eal/eal_vfio.c
index 9adbda8bb7..d0a0f9c16f 100644
--- a/lib/librte_eal/linuxapp/eal/eal_vfio.c
+++ b/lib/librte_eal/linuxapp/eal/eal_vfio.c
@@ -1264,9 +1264,21 @@ vfio_type1_dma_mem_map(int vfio_container_fd, uint64_t vaddr, uint64_t iova,
 
 		ret = ioctl(vfio_container_fd, VFIO_IOMMU_MAP_DMA, &dma_map);
 		if (ret) {
-			RTE_LOG(ERR, EAL, "  cannot set up DMA remapping, error %i (%s)\n",
-				errno, strerror(errno));
+			/**
+			 * In case the mapping was already done EEXIST will be
+			 * returned from the kernel.
+			 */
+			if (errno == EEXIST) {
+				RTE_LOG(DEBUG, EAL,
+					" Memory segment is already mapped,"
+					" skipping\n");
+			} else {
+				RTE_LOG(ERR, EAL,
+					"  cannot set up DMA remapping,"
+					" error %i (%s)\n",
+					errno, strerror(errno));
 				return -1;
+			}
 		}
 	} else {
 		memset(&dma_unmap, 0, sizeof(dma_unmap));
@@ -1325,9 +1337,21 @@ vfio_spapr_dma_do_map(int vfio_container_fd, uint64_t vaddr, uint64_t iova,
 
 		ret = ioctl(vfio_container_fd, VFIO_IOMMU_MAP_DMA, &dma_map);
 		if (ret) {
-			RTE_LOG(ERR, EAL, "  cannot set up DMA remapping, error %i (%s)\n",
-				errno, strerror(errno));
+			/**
+			 * In case the mapping was already done EBUSY will be
+			 * returned from the kernel.
+			 */
+			if (errno == EBUSY) {
+				RTE_LOG(DEBUG, EAL,
+					" Memory segment is already mapped,"
+					" skipping\n");
+			} else {
+				RTE_LOG(ERR, EAL,
+					"  cannot set up DMA remapping,"
+					" error %i (%s)\n", errno,
+					strerror(errno));
 				return -1;
+			}
 		}
 
 	} else {
-- 
2.12.0
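
For reviewers less familiar with the VFIO interface, the behaviour the
patch relies on boils down to the standalone sketch below (not part of
the patch; map_dma_once() is an illustrative name, not a DPDK or kernel
API): issue VFIO_IOMMU_MAP_DMA once and treat an "already mapped" errno
as success.

	/*
	 * Minimal sketch: map a range into a VFIO container and treat an
	 * "already mapped" reply from the kernel as success.
	 */
	#include <errno.h>
	#include <stdint.h>
	#include <stdio.h>
	#include <string.h>
	#include <sys/ioctl.h>
	#include <linux/vfio.h>

	static int
	map_dma_once(int container_fd, uint64_t vaddr, uint64_t iova, uint64_t len)
	{
		struct vfio_iommu_type1_dma_map dma_map;

		memset(&dma_map, 0, sizeof(dma_map));
		dma_map.argsz = sizeof(dma_map);
		dma_map.vaddr = vaddr;
		dma_map.iova = iova;
		dma_map.size = len;
		dma_map.flags = VFIO_DMA_MAP_FLAG_READ | VFIO_DMA_MAP_FLAG_WRITE;

		if (ioctl(container_fd, VFIO_IOMMU_MAP_DMA, &dma_map) == 0)
			return 0;

		/* type1 reports EEXIST, spapr (kernel >= 4.10) reports EBUSY */
		if (errno == EEXIST || errno == EBUSY)
			return 0; /* already mapped: not an error */

		fprintf(stderr, "cannot set up DMA remapping: %s\n", strerror(errno));
		return -1;
	}

With this handling, calling map_dma_once() twice for the same range is
expected to return 0 both times on a type1 IOMMU and, on kernels 4.10
and newer, on spapr as well.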