From: Maxime Coquelin <maxime.coquelin@redhat.com>
To: Xueming Li <xuemingl@nvidia.com>, dev@dpdk.org
Cc: stable@dpdk.org, Matan Azrad <matan@nvidia.com>,
Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Subject: Re: [dpdk-stable] [PATCH v2 2/2] vdpa/mlx5: retry VAR allocation during vDPA restart
Date: Thu, 21 Oct 2021 11:40:21 +0200 [thread overview]
Message-ID: <94fb26ff-3d1e-38c0-303d-b13b97dc4146@redhat.com> (raw)
In-Reply-To: <20211015150545.1673312-2-xuemingl@nvidia.com>
On 10/15/21 17:05, Xueming Li wrote:
> VAR is the device memory space for the virtio queue doorbells. QEMU
> can mmap it directly to speed up doorbell pushes.
>
> On a busy system, QEMU takes time to release VAR resources during
> driver shutdown. If vDPA is restarted quickly, the VAR allocation
> fails with errno 28 (ENOSPC) since the VAR is a singleton resource
> per device.
>
> This patch adds a retry mechanism with exponential backoff for VAR
> allocation.
>
> Fixes: 4cae722c1b06 ("vdpa/mlx5: move virtual doorbell alloc to probe")
> Cc: stable@dpdk.org
>
> Signed-off-by: Xueming Li <xuemingl@nvidia.com>
> Reviewed-by: Matan Azrad <matan@nvidia.com>
> ---
> drivers/vdpa/mlx5/mlx5_vdpa.c | 9 ++++++++-
> 1 file changed, 8 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.c b/drivers/vdpa/mlx5/mlx5_vdpa.c
> index 6d17d7a6f3e..991739e9840 100644
> --- a/drivers/vdpa/mlx5/mlx5_vdpa.c
> +++ b/drivers/vdpa/mlx5/mlx5_vdpa.c
> @@ -693,7 +693,14 @@ mlx5_vdpa_dev_probe(struct rte_device *dev)
> if (attr.num_lag_ports == 0)
> priv->num_lag_ports = 1;
> priv->ctx = ctx;
> - priv->var = mlx5_glue->dv_alloc_var(ctx, 0);
> + for (retry = 0; retry < 7; retry++) {
> + priv->var = mlx5_glue->dv_alloc_var(ctx, 0);
> + if (priv->var != NULL)
> + break;
> + DRV_LOG(WARNING, "Failed to allocate VAR, retry %d.\n", retry);
> + /* Wait Qemu release VAR during vdpa restart, 0.1 sec based. */
> + usleep(100000U << retry);
> + }
> if (!priv->var) {
> DRV_LOG(ERR, "Failed to allocate VAR %u.", errno);
> goto error;
>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Thanks,
Maxime
Thread overview: 11+ messages
[not found] <20210923081758.178745-1-xuemingl@nvidia.com>
2021-10-15 13:43 ` [dpdk-stable] [PATCH v1 1/2] vdpa/mlx5: workaround FW first completion in start Xueming Li
2021-10-15 13:43 ` [dpdk-stable] [PATCH v1 2/2] vdpa/mlx5: retry VAR allocation during vDPA restart Xueming Li
2021-10-15 13:57 ` [dpdk-stable] [PATCH v1 1/2] vdpa/mlx5: workaround FW first completion in start Maxime Coquelin
2021-10-15 14:51 ` Xueming(Steven) Li
2021-10-15 15:05 ` [dpdk-stable] [PATCH v2 " Xueming Li
2021-10-15 15:05 ` [dpdk-stable] [PATCH v2 2/2] vdpa/mlx5: retry VAR allocation during vDPA restart Xueming Li
2021-10-21 9:40 ` Maxime Coquelin [this message]
2021-10-21 12:27 ` Maxime Coquelin
2021-10-21 9:40 ` [dpdk-stable] [PATCH v2 1/2] vdpa/mlx5: workaround FW first completion in start Maxime Coquelin
2021-10-21 12:27 ` Maxime Coquelin
2021-10-21 12:36 ` Xueming(Steven) Li