DPDK patches and discussions
From: Maxime Coquelin <maxime.coquelin@redhat.com>
To: Xueming Li <xuemingl@nvidia.com>, dev@dpdk.org
Cc: Matan Azrad <matan@nvidia.com>,
	Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Subject: Re: [PATCH v2 6/7] vdpa/mlx5: support device cleanup callback
Date: Thu, 21 Apr 2022 10:19:55 +0200	[thread overview]
Message-ID: <b7f17f1a-2e5b-ac40-1c74-10c232da639a@redhat.com> (raw)
In-Reply-To: <20220224155101.1991626-7-xuemingl@nvidia.com>



On 2/24/22 16:51, Xueming Li wrote:
> This patch supports device cleanup callback API which called when device
> disconnected with VM.

"This patch supports device cleanup callback API which is called when
the device is disconnected from the VM."

> Cached resources like VM MR and VQ memory are
> released.
> 
> Signed-off-by: Xueming Li <xuemingl@nvidia.com>
> ---
>   drivers/vdpa/mlx5/mlx5_vdpa.c | 23 +++++++++++++++++++++++
>   drivers/vdpa/mlx5/mlx5_vdpa.h |  1 +
>   2 files changed, 24 insertions(+)
> 
> diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.c b/drivers/vdpa/mlx5/mlx5_vdpa.c
> index 38ed45f95f7..47874c9b1ff 100644
> --- a/drivers/vdpa/mlx5/mlx5_vdpa.c
> +++ b/drivers/vdpa/mlx5/mlx5_vdpa.c
> @@ -270,6 +270,8 @@ mlx5_vdpa_dev_close(int vid)
>   	if (priv->lm_mr.addr)
>   		mlx5_os_wrapped_mkey_destroy(&priv->lm_mr);
>   	priv->state = MLX5_VDPA_STATE_PROBED;
> +	if (!priv->connected)
> +		mlx5_vdpa_dev_cache_clean(priv);
>   	priv->vid = 0;
>   	/* The mutex may stay locked after event thread cancel - initiate it. */
>   	pthread_mutex_init(&priv->vq_config_lock, NULL);
> @@ -294,6 +296,7 @@ mlx5_vdpa_dev_config(int vid)
>   		return -1;
>   	}
>   	priv->vid = vid;
> +	priv->connected = true;
>   	if (mlx5_vdpa_mtu_set(priv))
>   		DRV_LOG(WARNING, "MTU cannot be set on device %s.",
>   				vdev->device->name);
> @@ -431,12 +434,32 @@ mlx5_vdpa_reset_stats(struct rte_vdpa_device *vdev, int qid)
>   	return mlx5_vdpa_virtq_stats_reset(priv, qid);
>   }
>   
> +static int
> +mlx5_vdpa_dev_clean(int vid)

mlx5_vdpa_dev_cleanup

> +{
> +	struct rte_vdpa_device *vdev = rte_vhost_get_vdpa_device(vid);
> +	struct mlx5_vdpa_priv *priv;
> +
> +	if (vdev == NULL)
> +		return -1;
> +	priv = mlx5_vdpa_find_priv_resource_by_vdev(vdev);
> +	if (priv == NULL) {
> +		DRV_LOG(ERR, "Invalid vDPA device: %s.", vdev->device->name);
> +		return -1;
> +	}
> +	if (priv->state == MLX5_VDPA_STATE_PROBED)
> +		mlx5_vdpa_dev_cache_clean(priv);
> +	priv->connected = false;
> +	return 0;
> +}
> +
>   static struct rte_vdpa_dev_ops mlx5_vdpa_ops = {
>   	.get_queue_num = mlx5_vdpa_get_queue_num,
>   	.get_features = mlx5_vdpa_get_vdpa_features,
>   	.get_protocol_features = mlx5_vdpa_get_protocol_features,
>   	.dev_conf = mlx5_vdpa_dev_config,
>   	.dev_close = mlx5_vdpa_dev_close,
> +	.dev_cleanup = mlx5_vdpa_dev_clean,
>   	.set_vring_state = mlx5_vdpa_set_vring_state,
>   	.set_features = mlx5_vdpa_features_set,
>   	.migration_done = NULL,
> diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.h b/drivers/vdpa/mlx5/mlx5_vdpa.h
> index 540bf87a352..24bafe85b44 100644
> --- a/drivers/vdpa/mlx5/mlx5_vdpa.h
> +++ b/drivers/vdpa/mlx5/mlx5_vdpa.h
> @@ -121,6 +121,7 @@ enum mlx5_dev_state {
>   
>   struct mlx5_vdpa_priv {
>   	TAILQ_ENTRY(mlx5_vdpa_priv) next;
> +	bool connected;
>   	enum mlx5_dev_state state;
>   	pthread_mutex_t vq_config_lock;
>   	uint64_t no_traffic_counter;
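
For context, here is a minimal, illustrative sketch of the callback
ordering the patch relies on. The helper and its arguments below are
hypothetical (this is not the actual vhost library call site); only
the rte_vdpa_dev_ops fields shown in the hunk above are assumed:

#include <stdbool.h>
/* Driver-side vDPA header; named vdpa_driver.h in this DPDK era. */
#include <vdpa_driver.h>

/*
 * Hypothetical teardown flow, for illustration only.  dev_close()
 * runs on every guest disconnect; with this patch the mlx5 driver
 * keeps its MR/VQ caches there while priv->connected is set.
 * dev_cleanup() runs once the vhost connection itself goes away,
 * and that is where the cached resources are finally released.
 */
static void
example_connection_teardown(struct rte_vdpa_device *vdev, int vid,
			    bool socket_removed)
{
	/* Guest disconnect: caches survive for a later reconnect. */
	vdev->ops->dev_close(vid);
	/* Connection removed: let the driver free its cached state. */
	if (socket_removed && vdev->ops->dev_cleanup)
		vdev->ops->dev_cleanup(vid);
}

The point of the split is that a guest reset triggers dev_close() but
not dev_cleanup(), so a reconnecting guest can reuse the cached MR and
VQ resources instead of paying the full setup cost again.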


Other than that:

Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>

Thanks,
Maxime



Thread overview: 43+ messages
2022-02-24 13:28 [PATCH 0/7] vdpa/mlx5: improve device shutdown time Xueming Li
2022-02-24 13:28 ` [PATCH 1/7] vdpa/mlx5: fix interrupt trash that leads to segment fault Xueming Li
2022-02-24 13:28 ` [PATCH 2/7] vdpa/mlx5: fix dead loop when process interrupted Xueming Li
2022-02-24 13:28 ` [PATCH 3/7] vdpa/mlx5: no kick handling during shutdown Xueming Li
2022-02-24 13:28 ` [PATCH 4/7] vdpa/mlx5: reuse resources in reconfiguration Xueming Li
2022-02-24 13:28 ` [PATCH 5/7] vdpa/mlx5: cache and reuse hardware resources Xueming Li
2022-02-24 13:28 ` [PATCH 6/7] vdpa/mlx5: support device cleanup callback Xueming Li
2022-02-24 13:28 ` [PATCH 7/7] vdpa/mlx5: make statistics counter persistent Xueming Li
2022-02-24 14:38 ` [PATCH v1 0/7] vdpa/mlx5: improve device shutdown time Xueming Li
2022-02-24 14:38   ` [PATCH v1 1/7] vdpa/mlx5: fix interrupt trash that leads to segment fault Xueming Li
2022-02-24 14:38   ` [PATCH v1 2/7] vdpa/mlx5: fix dead loop when process interrupted Xueming Li
2022-02-24 14:38   ` [PATCH v1 3/7] vdpa/mlx5: no kick handling during shutdown Xueming Li
2022-02-24 14:38   ` [PATCH v1 4/7] vdpa/mlx5: reuse resources in reconfiguration Xueming Li
2022-02-24 14:38   ` [PATCH v1 5/7] vdpa/mlx5: cache and reuse hardware resources Xueming Li
2022-02-24 14:38   ` [PATCH v1 6/7] vdpa/mlx5: support device cleanup callback Xueming Li
2022-02-24 14:38   ` [PATCH v1 7/7] vdpa/mlx5: make statistics counter persistent Xueming Li
2022-02-24 15:50 ` [PATCH v2 0/7] vdpa/mlx5: improve device shutdown time Xueming Li
2022-02-24 15:50   ` [PATCH v2 1/7] vdpa/mlx5: fix interrupt trash that leads to segment fault Xueming Li
2022-04-20 10:39     ` Maxime Coquelin
2022-02-24 15:50   ` [PATCH v2 2/7] vdpa/mlx5: fix dead loop when process interrupted Xueming Li
2022-04-20 10:33     ` Maxime Coquelin
2022-02-24 15:50   ` [PATCH v2 3/7] vdpa/mlx5: no kick handling during shutdown Xueming Li
2022-04-20 12:37     ` Maxime Coquelin
2022-04-20 13:23       ` Xueming(Steven) Li
2022-02-24 15:50   ` [PATCH v2 4/7] vdpa/mlx5: reuse resources in reconfiguration Xueming Li
2022-04-20 14:49     ` Maxime Coquelin
2022-02-24 15:50   ` [PATCH v2 5/7] vdpa/mlx5: cache and reuse hardware resources Xueming Li
2022-04-20 15:03     ` Maxime Coquelin
2022-04-25 13:28       ` Xueming(Steven) Li
2022-05-05 20:01         ` Maxime Coquelin
2022-02-24 15:51   ` [PATCH v2 6/7] vdpa/mlx5: support device cleanup callback Xueming Li
2022-04-21  8:19     ` Maxime Coquelin [this message]
2022-02-24 15:51   ` [PATCH v2 7/7] vdpa/mlx5: make statistics counter persistent Xueming Li
2022-04-21  8:22     ` Maxime Coquelin
2022-05-08 14:25 ` [PATCH v3 0/7] vdpa/mlx5: improve device shutdown time Xueming Li
2022-05-08 14:25   ` [PATCH v3 1/7] vdpa/mlx5: fix interrupt trash that leads to segment fault Xueming Li
2022-05-08 14:25   ` [PATCH v3 2/7] vdpa/mlx5: fix dead loop when process interrupted Xueming Li
2022-05-08 14:25   ` [PATCH v3 3/7] vdpa/mlx5: no kick handling during shutdown Xueming Li
2022-05-08 14:25   ` [PATCH v3 4/7] vdpa/mlx5: reuse resources in reconfiguration Xueming Li
2022-05-08 14:25   ` [PATCH v3 5/7] vdpa/mlx5: cache and reuse hardware resources Xueming Li
2022-05-08 14:25   ` [PATCH v3 6/7] vdpa/mlx5: support device cleanup callback Xueming Li
2022-05-08 14:25   ` [PATCH v3 7/7] vdpa/mlx5: make statistics counter persistent Xueming Li
2022-05-09 19:38   ` [PATCH v3 0/7] vdpa/mlx5: improve device shutdown time Maxime Coquelin
