DPDK patches and discussions
From: Slava Ovsiienko <viacheslavo@nvidia.com>
To: Igor Gutorov <igootorov@gmail.com>,
	Dariusz Sosnowski <dsosnowski@nvidia.com>,
	Ori Kam <orika@nvidia.com>, Suanming Mou <suanmingm@nvidia.com>,
	Matan Azrad <matan@nvidia.com>
Cc: "dev@dpdk.org" <dev@dpdk.org>
Subject: RE: [PATCH 1/1] net/mlx5: show rx/tx descriptor ring limitations in rte_eth_dev_info
Date: Mon, 17 Jun 2024 07:18:58 +0000	[thread overview]
Message-ID: <IA1PR12MB8078B175BF86C1B607921F83DFCD2@IA1PR12MB8078.namprd12.prod.outlook.com> (raw)
In-Reply-To: <20240616173803.424025-2-igootorov@gmail.com>

Hi, Igor

Thank you for the patch.

1. The absolute maximum descriptor number supported by ConnectX hardware is 32768.
2. The actual maximum descriptor number supported by the port (and its related representors)
    is reported in log_max_wq_sz in HCA.caps. This value should be queried and saved in the mlx5_devx_cmd_query_hca_attr() routine.
3. mlx5_rx_queue_pre_setup() should check the requested descriptor number and reject it if it exceeds 2^log_max_wq_sz.
4. Please format your patch according to the "fix" template.
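
The check in item 3 could look roughly like this (a hypothetical standalone sketch, not actual driver code; the helper name and the assumption that log_max_wq_sz has already been queried and saved are mine):

```c
#include <stdint.h>
#include <errno.h>

/* Sketch of the descriptor-count validation: the hardware limit is
 * 2^log_max_wq_sz (at most 2^15 = 32768 on ConnectX), so any request
 * above that should be rejected instead of silently truncated. */
static int
rxq_check_desc_num(uint16_t desc, uint8_t log_max_wq_sz)
{
	if ((uint32_t)desc > (1U << log_max_wq_sz))
		return -EINVAL; /* reject oversized ring request */
	return 0;
}
```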

With best regards,
Slava

> -----Original Message-----
> From: Igor Gutorov <igootorov@gmail.com>
> Sent: Sunday, June 16, 2024 8:38 PM
> To: Dariusz Sosnowski <dsosnowski@nvidia.com>; Slava Ovsiienko
> <viacheslavo@nvidia.com>; Ori Kam <orika@nvidia.com>; Suanming Mou
> <suanmingm@nvidia.com>; Matan Azrad <matan@nvidia.com>
> Cc: dev@dpdk.org; Igor Gutorov <igootorov@gmail.com>
> Subject: [PATCH 1/1] net/mlx5: show rx/tx descriptor ring limitations in
> rte_eth_dev_info
> 
> Currently, rte_eth_dev_info.rx_desc_lim.nb_max shows 65535 as a limit, which
> results in a few problems:
> 
> * It is an incorrect value
> * Allocating an RX queue and passing `rx_desc_lim.nb_max` results in an
>   integer overflow and 0 ring size:
> 
> ```
> rte_eth_rx_queue_setup(0, 0, rx_desc_lim.nb_max, 0, NULL, mb_pool);
> ```
> 
> This overflows the ring size and generates the following log:
> ```
> mlx5_net: port 0 increased number of descriptors in Rx queue 0 to the next
> power of two (0)
> ```
> 
> This patch fixes these issues.
> 
> Signed-off-by: Igor Gutorov <igootorov@gmail.com>
> ---
>  drivers/net/mlx5/mlx5_defs.h   | 3 +++
>  drivers/net/mlx5/mlx5_ethdev.c | 5 ++++-
>  2 files changed, 7 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/net/mlx5/mlx5_defs.h b/drivers/net/mlx5/mlx5_defs.h
> index dc5216cb24..df608f0921 100644
> --- a/drivers/net/mlx5/mlx5_defs.h
> +++ b/drivers/net/mlx5/mlx5_defs.h
> @@ -84,6 +84,9 @@
>  #define MLX5_RX_DEFAULT_BURST 64U
>  #define MLX5_TX_DEFAULT_BURST 64U
> 
> +/* Maximum number of descriptors in an RX/TX ring */
> +#define MLX5_MAX_RING_DESC 8192
> +
>  /* Number of packets vectorized Rx can simultaneously process in a loop. */
>  #define MLX5_VPMD_DESCS_PER_LOOP      4
> 
> diff --git a/drivers/net/mlx5/mlx5_ethdev.c b/drivers/net/mlx5/mlx5_ethdev.c
> index aea799341c..d5be1ff1aa 100644
> --- a/drivers/net/mlx5/mlx5_ethdev.c
> +++ b/drivers/net/mlx5/mlx5_ethdev.c
> @@ -22,6 +22,7 @@
> 
>  #include <mlx5_malloc.h>
> 
> +#include "mlx5_defs.h"
>  #include "mlx5_rxtx.h"
>  #include "mlx5_rx.h"
>  #include "mlx5_tx.h"
> @@ -345,6 +346,8 @@ mlx5_dev_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *info)
>  	info->flow_type_rss_offloads = ~MLX5_RSS_HF_MASK;
>  	mlx5_set_default_params(dev, info);
>  	mlx5_set_txlimit_params(dev, info);
> +	info->rx_desc_lim.nb_max = MLX5_MAX_RING_DESC;
> +	info->tx_desc_lim.nb_max = MLX5_MAX_RING_DESC;
>  	if (priv->sh->cdev->config.hca_attr.mem_rq_rmp &&
>  	    priv->obj_ops.rxq_obj_new == devx_obj_ops.rxq_obj_new)
>  	info->dev_capa |= RTE_ETH_DEV_CAPA_RXQ_SHARE;
> @@ -774,7 +777,7 @@ mlx5_hairpin_cap_get(struct rte_eth_dev *dev, struct rte_eth_hairpin_cap *cap)
>  	cap->max_nb_queues = UINT16_MAX;
>  	cap->max_rx_2_tx = 1;
>  	cap->max_tx_2_rx = 1;
> -	cap->max_nb_desc = 8192;
> +	cap->max_nb_desc = MLX5_MAX_RING_DESC;
>  	hca_attr = &priv->sh->cdev->config.hca_attr;
>  	cap->rx_cap.locked_device_memory = hca_attr->hairpin_data_buffer_locked;
>  	cap->rx_cap.rte_memory = 0;
> --
> 2.45.2
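
For reference, the wrap-around described in the commit message above can be reproduced in miniature (standalone sketch, not driver code; the helper name is mine):

```c
#include <stdint.h>

/* Rounding 65535 up to the next power of two gives 65536, which does not
 * fit in 16 bits and truncates to 0 -- the "(0)" seen in the driver log. */
static uint16_t
round_up_pow2_u16(uint16_t n)
{
	uint32_t p = 1;

	while (p < n)
		p <<= 1;
	return (uint16_t)p;
}
```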


  reply	other threads:[~2024-06-17  7:19 UTC|newest]

Thread overview: 6+ messages
2024-06-16 17:38 [PATCH 0/1] " Igor Gutorov
2024-06-16 17:38 ` [PATCH 1/1] " Igor Gutorov
2024-06-17  7:18   ` Slava Ovsiienko [this message]
2024-06-18 22:56     ` [PATCH v2 0/1] net/mlx5: fix incorrect rx/tx descriptor " Igor Gutorov
2024-06-18 22:56       ` [PATCH v2] " Igor Gutorov
2024-06-23 11:34       ` [PATCH v2 0/1] " Slava Ovsiienko
