patches for DPDK stable branches
From: "Xueming(Steven) Li" <xuemingl@nvidia.com>
To: Li Zhang <lizh@nvidia.com>, Ori Kam <orika@nvidia.com>,
	Slava Ovsiienko <viacheslavo@nvidia.com>,
	Matan Azrad <matan@nvidia.com>,
	Shahaf Shuler <shahafs@nvidia.com>,
	Maxime Coquelin <maxime.coquelin@redhat.com>
Cc: "stable@dpdk.org" <stable@dpdk.org>,
	"NBU-Contact-Thomas Monjalon (EXTERNAL)" <thomas@monjalon.net>,
	Raslan Darawsheh <rasland@nvidia.com>,
	Roni Bar Yanai <roniba@nvidia.com>
Subject: RE: [PATCH 20.11] vdpa/mlx5: fix maximum number of virtqs
Date: Tue, 9 Aug 2022 14:57:39 +0000
Message-ID: <DM4PR12MB5373FF56683DAF70B96BB3ECA1629@DM4PR12MB5373.namprd12.prod.outlook.com>
In-Reply-To: <20220720104838.2815387-1-lizh@nvidia.com>

Thanks, applied.

> -----Original Message-----
> From: Li Zhang <lizh@nvidia.com>
> Sent: Wednesday, July 20, 2022 6:49 PM
> To: Ori Kam <orika@nvidia.com>; Slava Ovsiienko <viacheslavo@nvidia.com>; Matan Azrad <matan@nvidia.com>; Shahaf Shuler
> <shahafs@nvidia.com>; Maxime Coquelin <maxime.coquelin@redhat.com>
> Cc: stable@dpdk.org; NBU-Contact-Thomas Monjalon (EXTERNAL) <thomas@monjalon.net>; Raslan Darawsheh <rasland@nvidia.com>;
> Roni Bar Yanai <roniba@nvidia.com>
> Subject: [PATCH 20.11] vdpa/mlx5: fix maximum number of virtqs
> 
> [ upstream commit 6f065d1539bed56602e3c6159c99cccb3bca38e4 ]
> 
> The driver wrongly took the capability value as the number of virtq
> pairs instead of the number of virtqs.
> 
> Adjust all usages of it to be the number of virtqs.
> 
> Fixes: c2eb33aaf967 ("vdpa/mlx5: manage virtqs by array")
> Cc: stable@dpdk.org
> 
> Signed-off-by: Li Zhang <lizh@nvidia.com>
> Acked-by: Matan Azrad <matan@nvidia.com>
> Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
> ---
>  drivers/vdpa/mlx5/mlx5_vdpa.c       | 6 +++---
>  drivers/vdpa/mlx5/mlx5_vdpa_virtq.c | 4 ++--
>  2 files changed, 5 insertions(+), 5 deletions(-)
> 
> diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.c b/drivers/vdpa/mlx5/mlx5_vdpa.c
> index 6519b9c9a..65a1edc33 100644
> --- a/drivers/vdpa/mlx5/mlx5_vdpa.c
> +++ b/drivers/vdpa/mlx5/mlx5_vdpa.c
> @@ -82,7 +82,7 @@ mlx5_vdpa_get_queue_num(struct rte_vdpa_device *vdev, uint32_t *queue_num)
>  		DRV_LOG(ERR, "Invalid vDPA device: %s.", vdev->device->name);
>  		return -1;
>  	}
> -	*queue_num = priv->caps.max_num_virtio_queues;
> +	*queue_num = priv->caps.max_num_virtio_queues / 2;
>  	return 0;
>  }
> 
> @@ -139,7 +139,7 @@ mlx5_vdpa_set_vring_state(int vid, int vring, int state)
>  		DRV_LOG(ERR, "Invalid vDPA device: %s.", vdev->device->name);
>  		return -EINVAL;
>  	}
> -	if (vring >= (int)priv->caps.max_num_virtio_queues * 2) {
> +	if (vring >= (int)priv->caps.max_num_virtio_queues) {
>  		DRV_LOG(ERR, "Too big vring id: %d.", vring);
>  		return -E2BIG;
>  	}
> @@ -726,7 +726,7 @@ mlx5_vdpa_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
>  		DRV_LOG(DEBUG, "No capability to support virtq statistics.");
>  	priv = rte_zmalloc("mlx5 vDPA device private", sizeof(*priv) +
>  			   sizeof(struct mlx5_vdpa_virtq) *
> -			   attr.vdpa.max_num_virtio_queues * 2,
> +			   attr.vdpa.max_num_virtio_queues,
>  			   RTE_CACHE_LINE_SIZE);
>  	if (!priv) {
>  		DRV_LOG(ERR, "Failed to allocate private memory.");
> diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
> index 0d31e1d95..a1ae02292 100644
> --- a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
> +++ b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
> @@ -450,9 +450,9 @@ mlx5_vdpa_virtqs_prepare(struct mlx5_vdpa_priv *priv)
>  		DRV_LOG(INFO, "TSO is enabled without CSUM, force CSUM.");
>  		priv->features |= (1ULL << VIRTIO_NET_F_CSUM);
>  	}
> -	if (nr_vring > priv->caps.max_num_virtio_queues * 2) {
> +	if (nr_vring > priv->caps.max_num_virtio_queues) {
>  		DRV_LOG(ERR, "Do not support more than %d virtqs(%d).",
> -			(int)priv->caps.max_num_virtio_queues * 2,
> +			(int)priv->caps.max_num_virtio_queues,
>  			(int)nr_vring);
>  		return -1;
>  	}
> --
> 2.30.2
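
For context when reading the diff above: each virtio queue pair is one RX
virtq plus one TX virtq, so a device capability that counts virtqs supports
half as many queue pairs. Below is a minimal standalone C sketch of that
arithmetic (illustrative only -- the capability value is made up and none of
this is driver code):

	#include <stdint.h>
	#include <stdio.h>

	int main(void)
	{
		/* Example value; in the driver this comes from
		 * priv->caps.max_num_virtio_queues and counts virtqs. */
		uint32_t max_num_virtio_queues = 16;

		/* What the get-queue-num callback must report: queue
		 * pairs, i.e. half the virtq count. */
		uint32_t queue_pairs = max_num_virtio_queues / 2;

		/* Valid vring ids are 0 .. max_num_virtio_queues - 1;
		 * the old code wrongly allowed ids up to twice the
		 * capability. */
		int vring = 15;

		if (vring >= (int)max_num_virtio_queues)
			printf("Too big vring id: %d\n", vring);
		else
			printf("vring %d valid, %u queue pairs supported\n",
			       vring, queue_pairs);
		return 0;
	}

The same reasoning removes the "* 2" from the private-structure allocation:
the virtq array needs one entry per virtq, and the capability already counts
virtqs, so multiplying by two over-allocated.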

