From: Maxime Coquelin <maxime.coquelin@redhat.com>
To: Tiwei Bie
Cc: zhihong.wang@intel.com, jfreimann@redhat.com, dev@dpdk.org, stable@dpdk.org
Subject: Re: [dpdk-dev] [PATCH v2] vhost: flush IOTLB cache on new mem table handling
Date: Fri, 3 Aug 2018 09:54:21 +0200
In-Reply-To: <20180803023014.GA28910@debian>
References: <20180802172122.25923-1-maxime.coquelin@redhat.com> <20180803023014.GA28910@debian>

On 08/03/2018 04:30 AM, Tiwei Bie wrote:
> On Thu, Aug 02, 2018 at 07:21:22PM +0200, Maxime Coquelin wrote:
>> IOTLB entries contain the host virtual address of the guest
>> pages. When receiving a new VHOST_USER_SET_MEM_TABLE request,
>> the previous regions get unmapped, so the IOTLB entries, if any,
>> will be invalid. This can cause the vhost-user process to
>> segfault.
>>
>> This patch introduces a new function to flush the IOTLB cache,
>> and calls it as soon as the backend handles a
>> VHOST_USER_SET_MEM_TABLE request.
>>
>> Fixes: 69c90e98f483 ("vhost: enable IOMMU support")
>> Cc: stable@dpdk.org
>>
>> Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
>> ---
>> Changes since v1:
>> - Fix indentation (Stephen)
>> - Fix double iotlb-lock lock
>>
>>  lib/librte_vhost/iotlb.c      | 10 ++++++++--
>>  lib/librte_vhost/iotlb.h      |  2 +-
>>  lib/librte_vhost/vhost_user.c |  5 +++++
>>  3 files changed, 14 insertions(+), 3 deletions(-)
>>
>> diff --git a/lib/librte_vhost/iotlb.c b/lib/librte_vhost/iotlb.c
>> index c11ebcaac..c6354fef7 100644
>> --- a/lib/librte_vhost/iotlb.c
>> +++ b/lib/librte_vhost/iotlb.c
>> @@ -303,6 +303,13 @@ vhost_user_iotlb_cache_find(struct vhost_virtqueue *vq, uint64_t iova,
>>  	return vva;
>>  }
>>
>> +void
>> +vhost_user_iotlb_flush_all(struct vhost_virtqueue *vq)
>> +{
>> +	vhost_user_iotlb_cache_remove_all(vq);
>> +	vhost_user_iotlb_pending_remove_all(vq);
>> +}
>> +
>>  int
>>  vhost_user_iotlb_init(struct virtio_net *dev, int vq_index)
>>  {
>> @@ -315,8 +322,7 @@ vhost_user_iotlb_init(struct virtio_net *dev, int vq_index)
>>  		 * The cache has already been initialized,
>>  		 * just drop all cached and pending entries.
>>  		 */
>> -		vhost_user_iotlb_cache_remove_all(vq);
>> -		vhost_user_iotlb_pending_remove_all(vq);
>> +		vhost_user_iotlb_flush_all(vq);
>>  	}
>>
>>  #ifdef RTE_LIBRTE_VHOST_NUMA
>> diff --git a/lib/librte_vhost/iotlb.h b/lib/librte_vhost/iotlb.h
>> index e7083e37b..60b9e4c57 100644
>> --- a/lib/librte_vhost/iotlb.h
>> +++ b/lib/librte_vhost/iotlb.h
>> @@ -73,7 +73,7 @@ void vhost_user_iotlb_pending_insert(struct vhost_virtqueue *vq, uint64_t iova,
>>  					uint8_t perm);
>>  void vhost_user_iotlb_pending_remove(struct vhost_virtqueue *vq, uint64_t iova,
>>  					uint64_t size, uint8_t perm);
>> -
>> +void vhost_user_iotlb_flush_all(struct vhost_virtqueue *vq);
>>  int vhost_user_iotlb_init(struct virtio_net *dev, int vq_index);
>>
>>  #endif /* _VHOST_IOTLB_H_ */
>> diff --git a/lib/librte_vhost/vhost_user.c b/lib/librte_vhost/vhost_user.c
>> index dc53ff712..a2d4c9ffc 100644
>> --- a/lib/librte_vhost/vhost_user.c
>> +++ b/lib/librte_vhost/vhost_user.c
>> @@ -813,6 +813,11 @@ vhost_user_set_mem_table(struct virtio_net **pdev, struct VhostUserMsg *pmsg)
>>  		dev->mem = NULL;
>>  	}
>>
>> +	/* Flush IOTLB cache as previous HVAs are now invalid */
>> +	if (dev->features & (1ULL << VIRTIO_F_IOMMU_PLATFORM))
>> +		for (i = 0; i < dev->nr_vring; i++)
>> +			vhost_user_iotlb_flush_all(dev->virtqueue[i]);
>
> Why is the pending list also flushed?

As the miss requests are handled asynchronously, I think it is better to
flush the pending list too.

For example, if the backend requests a translation just before the guest
removes the driver, the requested IOVA might not be valid anymore, so no
reply will be sent by QEMU. The request would then remain in the pending
list forever.

I don't think doing that is mandatory, but it does not hurt IMHO.

Maxime

> Thanks,
> Tiwei
>
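To make the failure mode discussed above concrete, here is a minimal,
self-contained sketch. It is not the librte_vhost implementation: the
names (struct entry, iotlb_miss, iotlb_flush_all, handle_set_mem_table)
are simplified stand-ins chosen only to show why a miss that never gets
a reply would stay on the pending list unless the SET_MEM_TABLE path
flushes that list along with the resolved cache.

/*
 * Toy model: a resolved-translation cache and a pending-miss list.
 * A miss is parked on the pending list until the frontend answers
 * with an IOTLB update; if the memory table is replaced before the
 * answer arrives, no answer ever comes, so the entry only goes away
 * if the pending list is flushed together with the cache.
 */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

struct entry {
	uint64_t iova;		/* guest I/O virtual address */
	uint64_t uaddr;		/* host virtual address (cache only) */
	struct entry *next;
};

static struct entry *cache_head;	/* resolved translations */
static struct entry *pending_head;	/* misses waiting for a reply */

static void
list_push(struct entry **head, uint64_t iova, uint64_t uaddr)
{
	struct entry *e = malloc(sizeof(*e));

	if (e == NULL)
		return;
	e->iova = iova;
	e->uaddr = uaddr;
	e->next = *head;
	*head = e;
}

static void
list_free(struct entry **head)
{
	while (*head) {
		struct entry *e = *head;

		*head = e->next;
		free(e);
	}
}

/* Backend side: record a miss, then wait for an IOTLB update message. */
static void
iotlb_miss(uint64_t iova)
{
	list_push(&pending_head, iova, 0);
	printf("miss sent for iova 0x%" PRIx64 ", waiting for reply\n", iova);
}

/* What the patch adds: drop both resolved and in-flight entries. */
static void
iotlb_flush_all(void)
{
	list_free(&cache_head);
	list_free(&pending_head);
}

/* New memory table: every previously learned HVA is now meaningless. */
static void
handle_set_mem_table(void)
{
	iotlb_flush_all();
}

int
main(void)
{
	/* A translation resolved earlier. */
	list_push(&cache_head, 0x1000, 0x7f0000001000);

	/* A miss sent just before the guest tears down the device:
	 * the frontend will never answer it. */
	iotlb_miss(0x2000);

	/* SET_MEM_TABLE arrives: without the flush, the 0x2000 entry
	 * would sit on the pending list forever. */
	handle_set_mem_table();

	printf("cache %s, pending %s\n",
	       cache_head ? "non-empty" : "empty",
	       pending_head ? "non-empty" : "empty");
	return 0;
}

Running it prints that both lists are empty after the simulated
SET_MEM_TABLE; dropping the list_free(&pending_head) call from
iotlb_flush_all() would leave the unanswered miss queued indefinitely.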