From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <9a34fcad-bd05-bcf2-c7e3-5246a51ef400@redhat.com>
Date: Wed, 3 Nov 2021 09:46:31 +0100
From: Maxime Coquelin <maxime.coquelin@redhat.com>
To: "Xia, Chenbo", "Xueming(Steven) Li", "dev@dpdk.org"
References: <20211019113956.2254537-1-xuemingl@nvidia.com>
 <2bfda127-f6de-d3fa-0903-566b8ec69be4@redhat.com>
 <04f60f8da1819b8a0196cfca20127107bd0b835c.camel@nvidia.com>
Subject: Re: [dpdk-dev] [PATCH] vhost: add vDPA resource cleanup callback
List-Id: DPDK patches and discussions

On 11/3/21 09:41, Xia, Chenbo wrote:
> Hi Xueming,
>
>> -----Original Message-----
>> From: Xueming(Steven) Li
>> Sent: Thursday, October 21, 2021 8:36 PM
>> To: maxime.coquelin@redhat.com; dev@dpdk.org
>> Cc: Xia, Chenbo
>> Subject: Re: [PATCH] vhost: add vDPA resource cleanup callback
>>
>> On Thu, 2021-10-21 at 14:00 +0200, Maxime Coquelin wrote:
>>> Hi Xueming,
>>>
>>> On 10/19/21 13:39, Xueming Li wrote:
>>>> This patch adds a vDPA device cleanup callback to release resources on
>>>> vhost-user connection close.
>>>>
>>>> Signed-off-by: Xueming Li
>>>> ---
>>>>  lib/vhost/rte_vdpa_dev.h | 3 +++
>>>>  lib/vhost/vhost_user.c   | 6 ++++++
>>>>  2 files changed, 9 insertions(+)
>>>>
>>>> diff --git a/lib/vhost/rte_vdpa_dev.h b/lib/vhost/rte_vdpa_dev.h
>>>> index b0f494815fa..2711004fe05 100644
>>>> --- a/lib/vhost/rte_vdpa_dev.h
>>>> +++ b/lib/vhost/rte_vdpa_dev.h
>>>> @@ -32,6 +32,9 @@ struct rte_vdpa_dev_ops {
>>>>  	/** Driver close the device (Mandatory) */
>>>>  	int (*dev_close)(int vid);
>>>>
>>>> +	/** Connection closed, clean up resources */
>>>> +	int (*dev_cleanup)(int vid);
>>>> +
>>>>  	/** Enable/disable this vring (Mandatory) */
>>>>  	int (*set_vring_state)(int vid, int vring, int state);
>>>>
>>>> diff --git a/lib/vhost/vhost_user.c b/lib/vhost/vhost_user.c
>>>> index 5a894ca0cc7..032b621c86c 100644
>>>> --- a/lib/vhost/vhost_user.c
>>>> +++ b/lib/vhost/vhost_user.c
>>>> @@ -162,6 +162,12 @@ free_mem_region(struct virtio_net *dev)
>>>>  void
>>>>  vhost_backend_cleanup(struct virtio_net *dev)
>>>>  {
>>>> +	struct rte_vdpa_device *vdpa_dev;
>>>> +
>>>> +	vdpa_dev = dev->vdpa_dev;
>>>> +	if (vdpa_dev && vdpa_dev->ops->dev_cleanup != NULL)
>>>> +		vdpa_dev->ops->dev_cleanup(dev->vid);
>>>> +
>>>>  	if (dev->mem) {
>>>>  		free_mem_region(dev);
>>>>  		rte_free(dev->mem);
>>>>
>>>
>>> What will be done there that cannot be done in .dev_close()?
>>
>> .dev_close() mainly handles VM suspend and driver reset. If we release
>> everything inside dev_close(), suspend and resume take longer when the
>> number of VQs is huge. Customers want to upgrade the VM configuration
>> using suspend and resume; pausing the customer VM for too long is not
>> acceptable.
>
> By saying 'upgrade VM configuration', do you mean VM memory hotplug? Or
> something more?
>
> Is this patch a next-step improvement of this commit?
>
> commit 127f9c6f7b78a47b73b3e1c39e021cc81a30b4c9
> Author: Matan Azrad
> Date:   Mon Jun 29 14:08:19 2020 +0000
>
>     vhost: handle memory hotplug with vDPA devices
>
>     Some vDPA drivers' basic configurations should be updated when the
>     guest memory is hotplugged.
>
>     Close vDPA device before hotplug operation and recreate it after the
>     hotplug operation is done.
>
>     Signed-off-by: Matan Azrad
>     Reviewed-by: Maxime Coquelin
>     Reviewed-by: Chenbo Xia
>
>> So the idea is to cache and reuse resources between dev_close() and
>> dev_conf(). Actually, the two functions look more like dev_stop() and
>> dev_start().
>>
>> dev_cleanup hooks into the vhost backend cleanup, which is called when
>> the socket is closed for both client and server mode, a safe point to
>> clean up all cached resources.
>>
>>> Having the mlx5 implementation of this callback alongside this patch may
>>> help to understand.
>>
>> The mlx5 implementation is still a prototype, pending internal review.
>> So I just posted the vhost part to get suggestions/comments. Let me know
>> if the ugly code helps :)
>
> I would prefer to see the mlx implementation with this patch in the same
> patchset to understand the problem. A new callback is fine if the problem
> itself makes sense :)

FYI, I'm about to apply a patch that marks the vDPA driver API as
internal; when you submit a new version, please apply on top of it.

Thanks,
Maxime

> Thanks,
> Chenbo
>
>>
>>>
>>> Thanks,
>>> Maxime
>>>
>
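[Editor's note: the caching split Xueming describes above — dev_close() parks per-VQ resources for reuse by a later dev_conf(), and dev_cleanup() frees them for real once the vhost-user socket closes — can be sketched as below. This is an illustrative standalone model, not the pending mlx5 code; every name in it (vq_cache, sketch_dev_close, sketch_dev_conf_vq, sketch_dev_cleanup) is hypothetical.]

```c
#include <stdlib.h>

/* Hypothetical driver-side sketch of the close/conf/cleanup split.
 * A real driver would cache DMA memory, doorbells, HW objects, etc.;
 * here a malloc'd buffer stands in for those resources. */

#define MAX_VQS 256

struct vq_resource {
	void *umem;   /* stands in for per-VQ HW/DMA resources */
	int cached;   /* nonzero: parked by dev_close(), reusable */
};

static struct vq_resource vq_cache[MAX_VQS];

/* dev_close(): park resources instead of freeing them, so that
 * suspend/resume does not pay the full teardown cost per VQ. */
static int
sketch_dev_close(int vid)
{
	(void)vid;
	for (int i = 0; i < MAX_VQS; i++)
		if (vq_cache[i].umem != NULL)
			vq_cache[i].cached = 1;
	return 0;
}

/* dev_conf() path for one VQ: reuse a cached resource when present,
 * allocate only on first use. */
static void *
sketch_dev_conf_vq(int i)
{
	if (vq_cache[i].cached) {
		vq_cache[i].cached = 0;
		return vq_cache[i].umem;  /* reused, no reallocation */
	}
	vq_cache[i].umem = malloc(4096);
	return vq_cache[i].umem;
}

/* dev_cleanup(): the socket is closed, so nothing will be reused;
 * release everything for real. */
static int
sketch_dev_cleanup(int vid)
{
	(void)vid;
	for (int i = 0; i < MAX_VQS; i++) {
		free(vq_cache[i].umem);
		vq_cache[i].umem = NULL;
		vq_cache[i].cached = 0;
	}
	return 0;
}
```

The point of the extra callback in the sketch: dev_close() alone never knows whether a dev_conf() will follow (suspend/resume) or the connection is gone for good, so only the socket-close path can safely drop the cache.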