From: Maxime Coquelin <maxime.coquelin@redhat.com>
To: "Hu, Jiayu", "Ding, Xuan", dev@dpdk.org, "Burakov, Anatoly", "Xia, Chenbo"
Cc: "Jiang, Cheng1", "Richardson, Bruce", "Pai G, Sunil", "Wang, Yinan", "Yang, YvonneX"
Date: Thu, 23 Sep 2021 16:56:27 +0200
Subject: Re: [dpdk-dev] [PATCH v2 2/2] vhost: enable IOMMU for async vhost
In-Reply-To: <917d6068d3cc484b95e9e884cc9a4f3b@intel.com>
List-Id: DPDK patches and discussions

On 9/23/21 16:39, Hu, Jiayu wrote:
> Hi Xuan,
>
>> -----Original Message-----
>> From: Ding, Xuan
>> Sent: Friday, September 17, 2021 1:26 PM
>> To: dev@dpdk.org; Burakov, Anatoly; maxime.coquelin@redhat.com; Xia, Chenbo
>> Cc: Hu, Jiayu; Jiang, Cheng1; Richardson, Bruce; Pai G, Sunil;
>> Wang, Yinan; Yang, YvonneX; Ding, Xuan
>> Subject: [PATCH v2 2/2] vhost: enable IOMMU for async vhost
>>
>> Using an IOMMU has many advantages, such as isolation and address
>> translation. This patch extends the capability of the DMA engine to
>> use the IOMMU if the DMA engine is bound to vfio.
>>
>> When the memory table is set, the guest memory will be mapped into
>> the default container of DPDK.
>>
>> Signed-off-by: Xuan Ding
>> ---
>>  lib/vhost/rte_vhost.h  |  1 +
>>  lib/vhost/vhost_user.c | 57 +++++++++++++++++++++++++++++++++++++++++-
>>  2 files changed, 57 insertions(+), 1 deletion(-)
>>
>> diff --git a/lib/vhost/rte_vhost.h b/lib/vhost/rte_vhost.h
>> index 8d875e9322..e0537249f3 100644
>> --- a/lib/vhost/rte_vhost.h
>> +++ b/lib/vhost/rte_vhost.h
>> @@ -127,6 +127,7 @@ struct rte_vhost_mem_region {
>>  	void *mmap_addr;
>>  	uint64_t mmap_size;
>>  	int fd;
>> +	uint64_t dma_map_success;
>
> How about using bool for dma_map_success?

The bigger problem here is that you are breaking the ABI.

>>  };
>>
>>  /**
>> diff --git a/lib/vhost/vhost_user.c b/lib/vhost/vhost_user.c
>> index 29a4c9af60..7d1d592b86 100644
>> --- a/lib/vhost/vhost_user.c
>> +++ b/lib/vhost/vhost_user.c
>> @@ -45,6 +45,8 @@
>>  #include
>>  #include
>>  #include
>> +#include
>> +#include
>>
>>  #include "iotlb.h"
>>  #include "vhost.h"
>> @@ -141,6 +143,46 @@ get_blk_size(int fd)
>>  	return ret == -1 ? (uint64_t)-1 : (uint64_t)stat.st_blksize;
>>  }
>>
>> +static int
>> +async_dma_map(struct rte_vhost_mem_region *region, bool do_map)
>> +{
>> +	int ret = 0;
>> +	uint64_t host_iova;
>> +	host_iova = rte_mem_virt2iova((void *)(uintptr_t)region->host_user_addr);
>> +	if (do_map) {
>> +		/* Add mapped region into the default container of DPDK. */
>> +		ret = rte_vfio_container_dma_map(RTE_VFIO_DEFAULT_CONTAINER_FD,
>> +						 region->host_user_addr,
>> +						 host_iova,
>> +						 region->size);
>> +		region->dma_map_success = ret == 0;
>> +		if (ret) {
>> +			if (rte_errno != ENODEV && rte_errno != ENOTSUP) {
>> +				VHOST_LOG_CONFIG(ERR, "DMA engine map failed\n");
>> +				return ret;
>> +			}
>> +			return 0;

> Why return 0, if ret is -1 here?
>
> Thanks,
> Jiayu

>> +		}
>> +		return ret;
>> +	} else {
>> +		/* No need to do vfio unmap if the map failed. */
>> +		if (!region->dma_map_success)
>> +			return 0;
>> +
>> +		/* Remove mapped region from the default container of DPDK. */
>> +		ret = rte_vfio_container_dma_unmap(RTE_VFIO_DEFAULT_CONTAINER_FD,
>> +						   region->host_user_addr,
>> +						   host_iova,
>> +						   region->size);
>> +		if (ret) {
>> +			VHOST_LOG_CONFIG(ERR, "DMA engine unmap failed\n");
>> +			return ret;
>> +		}
>> +		region->dma_map_success = 0;
>> +	}
>> +	return ret;
>> +}
>> +
>>  static void
>>  free_mem_region(struct virtio_net *dev)
>>  {
>> @@ -153,6 +195,9 @@ free_mem_region(struct virtio_net *dev)
>>  	for (i = 0; i < dev->mem->nregions; i++) {
>>  		reg = &dev->mem->regions[i];
>>  		if (reg->host_user_addr) {
>> +			if (dev->async_copy && rte_vfio_is_enabled("vfio"))
>> +				async_dma_map(reg, false);
>> +
>>  			munmap(reg->mmap_addr, reg->mmap_size);
>>  			close(reg->fd);
>>  		}
>> @@ -1157,6 +1202,7 @@ vhost_user_mmap_region(struct virtio_net *dev,
>>  	uint64_t mmap_size;
>>  	uint64_t alignment;
>>  	int populate;
>> +	int ret;
>>
>>  	/* Check for memory_size + mmap_offset overflow */
>>  	if (mmap_offset >= -region->size) {
>> @@ -1210,13 +1256,22 @@ vhost_user_mmap_region(struct virtio_net *dev,
>>  	region->mmap_size = mmap_size;
>>  	region->host_user_addr = (uint64_t)(uintptr_t)mmap_addr + mmap_offset;
>>
>> -	if (dev->async_copy)
>> +	if (dev->async_copy) {
>>  		if (add_guest_pages(dev, region, alignment) < 0) {
>>  			VHOST_LOG_CONFIG(ERR,
>>  				"adding guest pages to region failed.\n");
>>  			return -1;
>>  		}
>>
>> +		if (rte_vfio_is_enabled("vfio")) {
>> +			ret = async_dma_map(region, true);
>> +			if (ret < 0) {
>> +				VHOST_LOG_CONFIG(ERR, "Configure IOMMU for DMA engine failed\n");
>> +				return -1;
>> +			}
>> +		}
>> +	}
>> +
>>  	VHOST_LOG_CONFIG(INFO,
>>  		"guest memory region size: 0x%" PRIx64 "\n"
>>  		"\t guest physical addr: 0x%" PRIx64 "\n"
>> --
>> 2.17.1
>