DPDK patches and discussions
From: "Xueming(Steven) Li" <xuemingl@nvidia.com>
To: "chenbo.xia@intel.com" <chenbo.xia@intel.com>,
	"maxime.coquelin@redhat.com" <maxime.coquelin@redhat.com>,
	"dev@dpdk.org" <dev@dpdk.org>
Subject: Re: [PATCH] vhost: add vDPA resource cleanup callback
Date: Thu, 27 Jan 2022 08:48:13 +0000	[thread overview]
Message-ID: <a21ab9092f8037ce6c360dcb992b719a6ddad2e0.camel@nvidia.com> (raw)
In-Reply-To: <93e501e6-9f92-91a1-3875-7d4ec3a70366@redhat.com>

On Wed, 2022-01-26 at 11:03 +0100, Maxime Coquelin wrote:
> Hi Xueming,
> 
> On 11/3/21 14:49, Maxime Coquelin wrote:
> > 
> > 
> > On 11/3/21 14:45, Xueming(Steven) Li wrote:
> > > On Wed, 2021-11-03 at 09:46 +0100, Maxime Coquelin wrote:
> > > > 
> > > > On 11/3/21 09:41, Xia, Chenbo wrote:
> > > > > Hi Xueming,
> > > > > 
> > > > > > -----Original Message-----
> > > > > > From: Xueming(Steven) Li <xuemingl@nvidia.com>
> > > > > > Sent: Thursday, October 21, 2021 8:36 PM
> > > > > > To: maxime.coquelin@redhat.com; dev@dpdk.org
> > > > > > Cc: Xia, Chenbo <chenbo.xia@intel.com>
> > > > > > Subject: Re: [PATCH] vhost: add vDPA resource cleanup callback
> > > > > > 
> > > > > > On Thu, 2021-10-21 at 14:00 +0200, Maxime Coquelin wrote:
> > > > > > > Hi Xueming,
> > > > > > > 
> > > > > > > On 10/19/21 13:39, Xueming Li wrote:
> > > > > > > > This patch adds vDPA device cleanup callback to release resources on
> > > > > > > > vhost user connection close.
> > > > > > > > 
> > > > > > > > Signed-off-by: Xueming Li <xuemingl@nvidia.com>
> > > > > > > > ---
> > > > > > > >     lib/vhost/rte_vdpa_dev.h | 3 +++
> > > > > > > >     lib/vhost/vhost_user.c   | 6 ++++++
> > > > > > > >     2 files changed, 9 insertions(+)
> > > > > > > > 
> > > > > > > > diff --git a/lib/vhost/rte_vdpa_dev.h b/lib/vhost/rte_vdpa_dev.h
> > > > > > > > index b0f494815fa..2711004fe05 100644
> > > > > > > > --- a/lib/vhost/rte_vdpa_dev.h
> > > > > > > > +++ b/lib/vhost/rte_vdpa_dev.h
> > > > > > > > @@ -32,6 +32,9 @@ struct rte_vdpa_dev_ops {
> > > > > > > >         /** Driver close the device (Mandatory) */
> > > > > > > >         int (*dev_close)(int vid);
> > > > > > > > 
> > > > > > > > +    /** Connection closed, clean up resources */
> > > > > > > > +    int (*dev_cleanup)(int vid);
> > > > > > > > +
> > > > > > > >         /** Enable/disable this vring (Mandatory) */
> > > > > > > >         int (*set_vring_state)(int vid, int vring, int state);
> > > > > > > > 
> > > > > > > > diff --git a/lib/vhost/vhost_user.c b/lib/vhost/vhost_user.c
> > > > > > > > index 5a894ca0cc7..032b621c86c 100644
> > > > > > > > --- a/lib/vhost/vhost_user.c
> > > > > > > > +++ b/lib/vhost/vhost_user.c
> > > > > > > > @@ -162,6 +162,12 @@ free_mem_region(struct virtio_net *dev)
> > > > > > > >     void
> > > > > > > >     vhost_backend_cleanup(struct virtio_net *dev)
> > > > > > > >     {
> > > > > > > > +    struct rte_vdpa_device *vdpa_dev;
> > > > > > > > +
> > > > > > > > +    vdpa_dev = dev->vdpa_dev;
> > > > > > > > +    if (vdpa_dev && vdpa_dev->ops->dev_cleanup != NULL)
> > > > > > > > +        vdpa_dev->ops->dev_cleanup(dev->vid);
> > > > > > > > +
> > > > > > > >         if (dev->mem) {
> > > > > > > >             free_mem_region(dev);
> > > > > > > >             rte_free(dev->mem);
> > > > > > > > 
> > > > > > > 
> > > > > > > What will be done there that cannot be done in .dev_close()?
> > > > > > 
> > > > > > .dev_close() mainly handles VM suspend and driver reset. If
> > > > > > everything is released inside dev_close(), suspend and resume
> > > > > > take longer when the number of VQs is huge. Customers want to
> > > > > > upgrade VM configuration using suspend and resume; pausing the
> > > > > > customer VM for too long is not acceptable.
> > > > > 
> > > > > By saying 'upgrade VM configuration', do you mean VM memory
> > > > > hotplug? Or something more?
> > > > > 
> > > > > Is this patch a next-step improvement of this commit?
> > > > > 
> > > > > commit 127f9c6f7b78a47b73b3e1c39e021cc81a30b4c9
> > > > > Author: Matan Azrad <matan@mellanox.com>
> > > > > Date:   Mon Jun 29 14:08:19 2020 +0000
> > > > > 
> > > > >       vhost: handle memory hotplug with vDPA devices
> > > > > 
> > > > >       Some vDPA drivers' basic configurations should be updated
> > > > >       when the guest memory is hotplugged.
> > > > > 
> > > > >       Close vDPA device before hotplug operation and recreate it
> > > > >       after the hotplug operation is done.
> > > > > 
> > > > >       Signed-off-by: Matan Azrad <matan@mellanox.com>
> > > > >       Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
> > > > >       Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
> > > > > 
> > > > > > So the idea is to cache and reuse resources between dev_close()
> > > > > > and dev_conf(). Actually, the two functions look more like
> > > > > > dev_stop() and dev_start().
> > > > > > 
> > > > > > dev_cleanup hooks into vhost backend cleanup, which is called
> > > > > > when the socket is closed for both client and server modes, a
> > > > > > safe point to clean up all cached resources.
> > > > > > 
> > > > > > > Having the mlx5 implementation of this callback alongside
> > > > > > > this patch may help understanding.
> > > > > > 
> > > > > > The mlx5 implementation is still a prototype, pending internal
> > > > > > review. So I just posted the vhost part to get suggestions and
> > > > > > comments. Let me know if the ugly code helps :)
> > > > > 
> > > > > I would prefer to see the mlx5 implementation with this patch in
> > > > > the same patchset to understand the problem. A new callback is
> > > > > fine if the problem itself makes sense :)
> > > > 
> > > > FYI, I'm about to apply a patch that marks the vDPA driver API as
> > > > internal, when you will submit a new version please apply on top of it.
> > > 
> > > Haven't checked your patch yet, but it sounds good from the subject :)
> > 
> > The branch is now ready, you can rebase your patch on top of it:
> > git://dpdk.org/next/dpdk-next-virtio main
> 
> Could you please rebase your patch if you want it in v22.03?

Thanks for the reminder, new version sent.
The mlx5 PMD is still WIP.
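For context, the split discussed above could look roughly like this in a
driver. This is a hypothetical sketch with stub types standing in for the
real DPDK structures (it is not the mlx5 code, which is still under
internal review): dev_close() only marks per-VQ resources as cached so
suspend/resume stays fast, while dev_cleanup(), invoked from
vhost_backend_cleanup() when the vhost-user socket closes, does the full
release.

```c
#include <stdio.h>
#include <stdlib.h>

/* Stand-in for per-virtqueue hardware resources (hypothetical). */
struct vq_resource {
	void *hw_obj;
};

/* Hypothetical driver-private state kept across close/config cycles. */
struct drv_priv {
	struct vq_resource *vqs;
	int nr_vqs;
	int cached;	/* resources parked for reuse by the next dev_conf() */
};

static struct drv_priv priv;

/* .dev_close(): VM suspend / driver reset path. Stop I/O but keep the
 * per-VQ resources cached so the next dev_conf() can reuse them instead
 * of recreating everything, keeping the guest pause short. */
static int
dev_close(int vid)
{
	(void)vid;
	priv.cached = 1;	/* park resources, free nothing */
	printf("dev_close: %d VQs cached for reuse\n", priv.nr_vqs);
	return 0;
}

/* .dev_cleanup(): called from vhost_backend_cleanup() once the vhost-user
 * connection is gone, so nothing can be reused; release everything. */
static int
dev_cleanup(int vid)
{
	int i;

	(void)vid;
	for (i = 0; i < priv.nr_vqs; i++)
		free(priv.vqs[i].hw_obj);
	free(priv.vqs);
	priv.vqs = NULL;
	priv.nr_vqs = 0;
	priv.cached = 0;
	printf("dev_cleanup: all cached resources released\n");
	return 0;
}
```

A driver would wire these into its rte_vdpa_dev_ops table; the point of
the sketch is only the asymmetry: dev_close() is the cheap, frequent path
and dev_cleanup() the final one.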

> 
> Thanks!
> Maxime
> 
> > Maxime
> > 
> > > > 
> > > > Thanks,
> > > > Maxime
> > > > 
> > > > > Thanks,
> > > > > Chenbo
> > > > > 
> > > > > > 
> > > > > > > 
> > > > > > > Thanks,
> > > > > > > Maxime
> > > > > > > 
> > > > > 
> > > > 
> > > 
> 



Thread overview: 13+ messages
2021-10-19 11:39 [dpdk-dev] " Xueming Li
2021-10-21 12:00 ` Maxime Coquelin
2021-10-21 12:35   ` Xueming(Steven) Li
2021-11-03  8:41     ` Xia, Chenbo
2021-11-03  8:46       ` Maxime Coquelin
2021-11-03 13:45         ` Xueming(Steven) Li
2021-11-03 13:49           ` Maxime Coquelin
2022-01-26 10:03             ` Maxime Coquelin
2022-01-27  8:48               ` Xueming(Steven) Li [this message]
2021-11-03 13:42       ` [dpdk-dev] " Xueming(Steven) Li
2022-01-27  8:45 ` [PATCH v2] " Xueming Li
2022-02-03 13:47   ` Maxime Coquelin
2022-02-10 15:13   ` Maxime Coquelin
