DPDK patches and discussions
From: Bing Zhao <bingz@nvidia.com>
To: <viacheslavo@nvidia.com>, <matan@nvidia.com>
Cc: <dev@dpdk.org>, <thomas@monjalon.net>, <dsosnowski@nvidia.com>,
	<suanmingm@nvidia.com>, <rasland@nvidia.com>, <stable@dpdk.org>
Subject: [PATCH] net/mlx5: fix the WQE size calculation for Tx queue
Date: Fri, 27 Jun 2025 11:25:02 +0300
Message-ID: <20250627082502.26088-1-bingz@nvidia.com>
In-Reply-To: <20250623173524.128125-1-bingz@nvidia.com>

The calculation in txq_calc_wqebb_cnt() should be consistent with the
one in mlx5_txq_devx_obj_new(). Otherwise, when the input descriptor
number is 512 and max_inline_data is zero, the WQE size is wrongly
computed as 30 bytes in some cases. The total number of WQEBBs is then
calculated as 256 instead of 512, which is incorrect.

Clamping the WQE size to a minimum of one WQEBB (64B) fixes the
miscalculation whenever the computed WQE size is less than 64B.

Fixes: 38b4b397a57d ("net/mlx5: add Tx configuration and setup")
Cc: viacheslavo@nvidia.com
Cc: stable@dpdk.org

Signed-off-by: Bing Zhao <bingz@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
---
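For reference, a minimal standalone sketch of the arithmetic above
(illustration only, not the driver code). The constant values are
assumptions mirroring the PRM segment sizes (16B control, Ethernet and
data segments, 18B of inline data already carried in the ESEG, 64B per
WQEBB), and align32pow2() stands in for rte_align32pow2():

#include <stdint.h>
#include <stdio.h>

#define WQE_CSEG_SIZE        16u /* control segment */
#define WQE_ESEG_SIZE        16u /* Ethernet segment */
#define WSEG_SIZE            16u /* single data segment */
#define ESEG_MIN_INLINE_SIZE 18u /* inline bytes held in the ESEG */
#define WQE_SIZE             64u /* one WQEBB */

/* Round up to the next power of two, like rte_align32pow2(). */
static uint32_t align32pow2(uint32_t x)
{
	x--;
	x |= x >> 1;
	x |= x >> 2;
	x |= x >> 4;
	x |= x >> 8;
	x |= x >> 16;
	return x + 1;
}

int main(void)
{
	uint32_t desc = 512, max_inline_data = 0;
	uint32_t wqe_size = WQE_CSEG_SIZE + WQE_ESEG_SIZE + WSEG_SIZE -
			    ESEG_MIN_INLINE_SIZE + max_inline_data; /* 30B */

	/* Old behavior: align32pow2(30 * 512) / 64 = 16384 / 64 = 256. */
	printf("without clamp: %u WQEBBs\n",
	       align32pow2(wqe_size * desc) / WQE_SIZE);
	/* Fixed behavior: clamp to one WQEBB, so 32768 / 64 = 512. */
	if (wqe_size < WQE_SIZE)
		wqe_size = WQE_SIZE;
	printf("with clamp:    %u WQEBBs\n",
	       align32pow2(wqe_size * desc) / WQE_SIZE);
	return 0;
}

Built as-is, this should print 256 without the clamp and 512 with it,
the latter matching the requested descriptor count.
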
 drivers/net/mlx5/mlx5_txq.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/net/mlx5/mlx5_txq.c b/drivers/net/mlx5/mlx5_txq.c
index 5fee5bc4e8..62b3fc4f25 100644
--- a/drivers/net/mlx5/mlx5_txq.c
+++ b/drivers/net/mlx5/mlx5_txq.c
@@ -709,6 +709,7 @@ txq_calc_wqebb_cnt(struct mlx5_txq_ctrl *txq_ctrl)
 		   MLX5_WSEG_SIZE -
 		   MLX5_ESEG_MIN_INLINE_SIZE +
 		   txq_ctrl->max_inline_data;
+	wqe_size = RTE_MAX(wqe_size, MLX5_WQE_SIZE);
 	return rte_align32pow2(wqe_size * desc) / MLX5_WQE_SIZE;
 }
 
-- 
2.34.1


Thread overview: 16+ messages
     [not found] <20250623173524.128125-1-bingz@nvidia.com>
2025-06-23 18:34 ` [PATCH v2 0/3] Use consecutive Tx queues' memory Bing Zhao
2025-06-23 18:34   ` [PATCH v2 1/3] net/mlx5: fix the WQE size calculation for Tx queue Bing Zhao
2025-06-23 18:34   ` [PATCH v2 2/3] net/mlx5: add new devarg for Tx queue consecutive memory Bing Zhao
2025-06-24 12:01     ` Stephen Hemminger
2025-06-26 13:18       ` Bing Zhao
2025-06-26 14:29         ` Stephen Hemminger
2025-06-26 15:21           ` Thomas Monjalon
2025-06-23 18:34   ` [PATCH v2 3/3] net/mlx5: use consecutive memory for all Tx queues Bing Zhao
2025-06-27 16:37   ` [PATCH v3 0/5] Use consecutive Tx queues' memory Bing Zhao
2025-06-27 16:37     ` [PATCH v3 1/5] net/mlx5: add new devarg for Tx queue consecutive memory Bing Zhao
2025-06-27 16:37     ` [PATCH v3 2/5] net/mlx5: calculate the memory length for all Tx queues Bing Zhao
2025-06-27 16:37     ` [PATCH v3 3/5] net/mlx5: allocate and release unique resources for " Bing Zhao
2025-06-27 16:37     ` [PATCH v3 4/5] net/mlx5: pass the information in Tx queue start Bing Zhao
2025-06-27 16:37     ` [PATCH v3 5/5] net/mlx5: use consecutive memory for Tx queue creation Bing Zhao
2025-06-27  8:25 ` Bing Zhao [this message]
2025-06-27  9:27   ` [PATCH] net/mlx5: fix the WQE size calculation for Tx queue Thomas Monjalon
