From: Igor Gutorov <igootorov@gmail.com>
To: dsosnowski@nvidia.com, viacheslavo@nvidia.com, orika@nvidia.com,
suanmingm@nvidia.com, matan@nvidia.com
Cc: dev@dpdk.org, Igor Gutorov <igootorov@gmail.com>
Subject: [PATCH 1/1] net/mlx5: show rx/tx descriptor ring limitations in rte_eth_dev_info
Date: Sun, 16 Jun 2024 20:38:03 +0300 [thread overview]
Message-ID: <20240616173803.424025-2-igootorov@gmail.com> (raw)
In-Reply-To: <20240616173803.424025-1-igootorov@gmail.com>
Currently, rte_eth_dev_info.rx_desc_lim.nb_max reports 65535 as the limit,
which causes two problems:
* The value is incorrect: the actual device limit is 8192 descriptors
* Allocating an Rx queue with `rx_desc_lim.nb_max` as the ring size
results in an integer overflow and a ring size of 0:
```
rte_eth_rx_queue_setup(0, 0, rx_desc_lim.nb_max, 0, NULL, mb_pool);
```
The requested size is rounded up to the next power of two (65536), which
wraps the 16-bit ring size to zero and produces the following log:
```
mlx5_net: port 0 increased number of descriptors in Rx queue 0 to the
next power of two (0)
```
This patch fixes these issues.
Signed-off-by: Igor Gutorov <igootorov@gmail.com>
---
drivers/net/mlx5/mlx5_defs.h | 3 +++
drivers/net/mlx5/mlx5_ethdev.c | 5 ++++-
2 files changed, 7 insertions(+), 1 deletion(-)
diff --git a/drivers/net/mlx5/mlx5_defs.h b/drivers/net/mlx5/mlx5_defs.h
index dc5216cb24..df608f0921 100644
--- a/drivers/net/mlx5/mlx5_defs.h
+++ b/drivers/net/mlx5/mlx5_defs.h
@@ -84,6 +84,9 @@
#define MLX5_RX_DEFAULT_BURST 64U
#define MLX5_TX_DEFAULT_BURST 64U
+/* Maximum number of descriptors in an RX/TX ring */
+#define MLX5_MAX_RING_DESC 8192
+
/* Number of packets vectorized Rx can simultaneously process in a loop. */
#define MLX5_VPMD_DESCS_PER_LOOP 4
diff --git a/drivers/net/mlx5/mlx5_ethdev.c b/drivers/net/mlx5/mlx5_ethdev.c
index aea799341c..d5be1ff1aa 100644
--- a/drivers/net/mlx5/mlx5_ethdev.c
+++ b/drivers/net/mlx5/mlx5_ethdev.c
@@ -22,6 +22,7 @@
#include <mlx5_malloc.h>
+#include "mlx5_defs.h"
#include "mlx5_rxtx.h"
#include "mlx5_rx.h"
#include "mlx5_tx.h"
@@ -345,6 +346,8 @@ mlx5_dev_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *info)
info->flow_type_rss_offloads = ~MLX5_RSS_HF_MASK;
mlx5_set_default_params(dev, info);
mlx5_set_txlimit_params(dev, info);
+ info->rx_desc_lim.nb_max = MLX5_MAX_RING_DESC;
+ info->tx_desc_lim.nb_max = MLX5_MAX_RING_DESC;
if (priv->sh->cdev->config.hca_attr.mem_rq_rmp &&
priv->obj_ops.rxq_obj_new == devx_obj_ops.rxq_obj_new)
info->dev_capa |= RTE_ETH_DEV_CAPA_RXQ_SHARE;
@@ -774,7 +777,7 @@ mlx5_hairpin_cap_get(struct rte_eth_dev *dev, struct rte_eth_hairpin_cap *cap)
cap->max_nb_queues = UINT16_MAX;
cap->max_rx_2_tx = 1;
cap->max_tx_2_rx = 1;
- cap->max_nb_desc = 8192;
+ cap->max_nb_desc = MLX5_MAX_RING_DESC;
hca_attr = &priv->sh->cdev->config.hca_attr;
cap->rx_cap.locked_device_memory = hca_attr->hairpin_data_buffer_locked;
cap->rx_cap.rte_memory = 0;
--
2.45.2
Thread overview: 6+ messages
2024-06-16 17:38 [PATCH 0/1] " Igor Gutorov
2024-06-16 17:38 ` Igor Gutorov [this message]
2024-06-17 7:18 ` [PATCH 1/1] " Slava Ovsiienko
2024-06-18 22:56 ` [PATCH v2 0/1] net/mlx5: fix incorrect rx/tx descriptor " Igor Gutorov
2024-06-18 22:56 ` [PATCH v2] " Igor Gutorov
2024-06-23 11:34 ` [PATCH v2 0/1] " Slava Ovsiienko