From mboxrd@z Thu Jan  1 00:00:00 1970
From: Kevin Traynor
To: Viacheslav Ovsiienko
Cc: Edwin Brossette, Dariusz Sosnowski, dpdk stable
Subject: patch 'net/mlx5: fix maximal queue size query' has been queued to stable release 24.11.3
Date: Fri, 18 Jul 2025 20:31:36 +0100
Message-ID: <20250718193247.1008129-162-ktraynor@redhat.com>
In-Reply-To: <20250718193247.1008129-1-ktraynor@redhat.com>
References: <20250718193247.1008129-1-ktraynor@redhat.com>
MIME-Version: 1.0
List-Id: patches for DPDK stable branches
Errors-To: stable-bounces@dpdk.org

Hi,

FYI, your patch has been queued to stable release 24.11.3

Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 07/23/25. So please
shout if anyone has objections.

Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.

Queued patches are on a temporary branch at:
https://github.com/kevintraynor/dpdk-stable

This queued commit can be viewed at:
https://github.com/kevintraynor/dpdk-stable/commit/355a5224b5a3467646e36950b4cd5844a05d5060

Thanks.
Kevin

---
>From 355a5224b5a3467646e36950b4cd5844a05d5060 Mon Sep 17 00:00:00 2001
From: Viacheslav Ovsiienko
Date: Wed, 14 May 2025 10:55:30 +0300
Subject: [PATCH] net/mlx5: fix maximal queue size query

[ upstream commit 9de8acd30d5adfc5b9703d15a3e1babc7d4ddacc ]

The mlx5 PMD manages the device using two modes: the Verbs API
and the DevX API. Each API offers its own method for querying
the maximum work queue size (in descriptors).

The corrected patch enhanced the rte_eth_dev_info_get() API
support in mlx5 PMD to return the true maximum number of
descriptors. It also implemented a limit check during queue
creation, but this was applied only to "DevX mode".
Consequently, the "Verbs mode" was overlooked, leading to
malfunction on legacy NICs that do not support DevX.

This patch adds support for Verbs mode, and all limit checks
are updated accordingly.

Fixes: 4c3d7961d900 ("net/mlx5: fix reported Rx/Tx descriptor limits")

Reported-by: Edwin Brossette
Signed-off-by: Viacheslav Ovsiienko
Acked-by: Dariusz Sosnowski
---
 drivers/common/mlx5/mlx5_prm.h  |  1 +
 drivers/net/mlx5/mlx5.h         |  1 +
 drivers/net/mlx5/mlx5_devx.c    |  2 +-
 drivers/net/mlx5/mlx5_ethdev.c  | 39 +++++++++++++++++++++++++++++----
 drivers/net/mlx5/mlx5_rxq.c     |  2 +-
 drivers/net/mlx5/mlx5_trigger.c |  4 ++--
 drivers/net/mlx5/mlx5_txq.c     | 12 +++++-----
 7 files changed, 47 insertions(+), 14 deletions(-)

diff --git a/drivers/common/mlx5/mlx5_prm.h b/drivers/common/mlx5/mlx5_prm.h
index 2d82807bc2..d24fd197ba 100644
--- a/drivers/common/mlx5/mlx5_prm.h
+++ b/drivers/common/mlx5/mlx5_prm.h
@@ -42,4 +42,5 @@
 #define MLX5_CQ_INDEX_WIDTH 24
 #define MLX5_WQ_INDEX_WIDTH 16
+#define MLX5_WQ_INDEX_MAX (1u << (MLX5_WQ_INDEX_WIDTH - 1))
 
 /* WQE Segment sizes in bytes. */
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 856d432c69..8849334755 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -2273,4 +2273,5 @@ int mlx5_representor_info_get(struct rte_eth_dev *dev,
 uint16_t mlx5_representor_id_encode(const struct mlx5_switch_info *info,
 				    enum rte_eth_representor_type hpf_type);
+uint16_t mlx5_dev_get_max_wq_size(struct mlx5_dev_ctx_shared *sh);
 int mlx5_dev_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *info);
 int mlx5_fw_version_get(struct rte_eth_dev *dev, char *fw_ver, size_t fw_size);
diff --git a/drivers/net/mlx5/mlx5_devx.c b/drivers/net/mlx5/mlx5_devx.c
index f9081b0e30..7ca95e81c6 100644
--- a/drivers/net/mlx5/mlx5_devx.c
+++ b/drivers/net/mlx5/mlx5_devx.c
@@ -1627,5 +1627,5 @@ mlx5_txq_devx_obj_new(struct rte_eth_dev *dev, uint16_t idx)
 	/* Create Send Queue object with DevX. */
 	wqe_n = RTE_MIN((1UL << txq_data->elts_n) * wqe_size,
-			(uint32_t)priv->sh->dev_cap.max_qp_wr);
+			(uint32_t)mlx5_dev_get_max_wq_size(priv->sh));
 	log_desc_n = log2above(wqe_n);
 	ret = mlx5_txq_create_devx_sq_resources(dev, idx, log_desc_n);
diff --git a/drivers/net/mlx5/mlx5_ethdev.c b/drivers/net/mlx5/mlx5_ethdev.c
index ddfe968a99..68d1c1bfa7 100644
--- a/drivers/net/mlx5/mlx5_ethdev.c
+++ b/drivers/net/mlx5/mlx5_ethdev.c
@@ -315,4 +315,35 @@ mlx5_set_txlimit_params(struct rte_eth_dev *dev, struct rte_eth_dev_info *info)
 }
 
+/**
+ * Get maximal work queue size in WQEs
+ *
+ * @param sh
+ *   Pointer to the device shared context.
+ * @return
+ *   Maximal number of WQEs in queue
+ */
+uint16_t
+mlx5_dev_get_max_wq_size(struct mlx5_dev_ctx_shared *sh)
+{
+	uint16_t max_wqe = MLX5_WQ_INDEX_MAX;
+
+	if (sh->cdev->config.devx) {
+		/* use HCA properties for DevX config */
+		MLX5_ASSERT(sh->cdev->config.hca_attr.log_max_wq_sz != 0);
+		MLX5_ASSERT(sh->cdev->config.hca_attr.log_max_wq_sz < MLX5_WQ_INDEX_WIDTH);
+		if (sh->cdev->config.hca_attr.log_max_wq_sz != 0 &&
+		    sh->cdev->config.hca_attr.log_max_wq_sz < MLX5_WQ_INDEX_WIDTH)
+			max_wqe = 1u << sh->cdev->config.hca_attr.log_max_wq_sz;
+	} else {
+		/* use IB device capabilities */
+		MLX5_ASSERT(sh->dev_cap.max_qp_wr > 0);
+		MLX5_ASSERT((unsigned int)sh->dev_cap.max_qp_wr <= MLX5_WQ_INDEX_MAX);
+		if (sh->dev_cap.max_qp_wr > 0 &&
+		    (uint32_t)sh->dev_cap.max_qp_wr <= MLX5_WQ_INDEX_MAX)
+			max_wqe = (uint16_t)sh->dev_cap.max_qp_wr;
+	}
+	return max_wqe;
+}
+
 /**
  * DPDK callback to get information about the device.
@@ -328,4 +359,5 @@ mlx5_dev_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *info)
 	struct mlx5_priv *priv = dev->data->dev_private;
 	unsigned int max;
+	uint16_t max_wqe;
 
 	/* FIXME: we should ask the device for these values. */
@@ -360,8 +392,7 @@ mlx5_dev_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *info)
 	mlx5_set_default_params(dev, info);
 	mlx5_set_txlimit_params(dev, info);
-	info->rx_desc_lim.nb_max =
-		1 << priv->sh->cdev->config.hca_attr.log_max_wq_sz;
-	info->tx_desc_lim.nb_max =
-		1 << priv->sh->cdev->config.hca_attr.log_max_wq_sz;
+	max_wqe = mlx5_dev_get_max_wq_size(priv->sh);
+	info->rx_desc_lim.nb_max = max_wqe;
+	info->tx_desc_lim.nb_max = max_wqe;
 	if (priv->sh->cdev->config.hca_attr.mem_rq_rmp &&
 	    priv->obj_ops.rxq_obj_new == devx_obj_ops.rxq_obj_new)
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index 75733339e4..508d27d318 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -662,5 +662,5 @@ mlx5_rx_queue_pre_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t *desc,
 	bool empty;
 
-	if (*desc > 1 << priv->sh->cdev->config.hca_attr.log_max_wq_sz) {
+	if (*desc > mlx5_dev_get_max_wq_size(priv->sh)) {
 		DRV_LOG(ERR,
 			"port %u number of descriptors requested for Rx queue"
diff --git a/drivers/net/mlx5/mlx5_trigger.c b/drivers/net/mlx5/mlx5_trigger.c
index 2f679a30cf..485984f9b0 100644
--- a/drivers/net/mlx5/mlx5_trigger.c
+++ b/drivers/net/mlx5/mlx5_trigger.c
@@ -218,6 +218,6 @@ mlx5_rxq_start(struct rte_eth_dev *dev)
 			return -rte_errno;
 	}
-	DRV_LOG(DEBUG, "Port %u dev_cap.max_qp_wr is %d.",
-		dev->data->port_id, priv->sh->dev_cap.max_qp_wr);
+	DRV_LOG(DEBUG, "Port %u max work queue size is %d.",
+		dev->data->port_id, mlx5_dev_get_max_wq_size(priv->sh));
 	DRV_LOG(DEBUG, "Port %u dev_cap.max_sge is %d.",
 		dev->data->port_id, priv->sh->dev_cap.max_sge);
diff --git a/drivers/net/mlx5/mlx5_txq.c b/drivers/net/mlx5/mlx5_txq.c
index d0b9576b09..f74af5471e 100644
--- a/drivers/net/mlx5/mlx5_txq.c
+++ b/drivers/net/mlx5/mlx5_txq.c
@@ -334,5 +334,5 @@ mlx5_tx_queue_pre_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t *desc)
 	struct mlx5_priv *priv = dev->data->dev_private;
 
-	if (*desc > 1 << priv->sh->cdev->config.hca_attr.log_max_wq_sz) {
+	if (*desc > mlx5_dev_get_max_wq_size(priv->sh)) {
 		DRV_LOG(ERR,
 			"port %u number of descriptors requested for Tx queue"
@@ -729,5 +729,5 @@ txq_calc_inline_max(struct mlx5_txq_ctrl *txq_ctrl)
 	unsigned int wqe_size;
 
-	wqe_size = priv->sh->dev_cap.max_qp_wr / desc;
+	wqe_size = mlx5_dev_get_max_wq_size(priv->sh) / desc;
 	if (!wqe_size)
 		return 0;
@@ -1084,4 +1084,5 @@ mlx5_txq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 	struct mlx5_priv *priv = dev->data->dev_private;
 	struct mlx5_txq_ctrl *tmpl;
+	uint16_t max_wqe;
 
 	tmpl = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO, sizeof(*tmpl) +
@@ -1109,11 +1110,10 @@ mlx5_txq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 	if (txq_adjust_params(tmpl))
 		goto error;
-	if (txq_calc_wqebb_cnt(tmpl) >
-	    priv->sh->dev_cap.max_qp_wr) {
+	max_wqe = mlx5_dev_get_max_wq_size(priv->sh);
+	if (txq_calc_wqebb_cnt(tmpl) > max_wqe) {
 		DRV_LOG(ERR,
 			"port %u Tx WQEBB count (%d) exceeds the limit (%d),"
 			" try smaller queue size",
-			dev->data->port_id, txq_calc_wqebb_cnt(tmpl),
-			priv->sh->dev_cap.max_qp_wr);
+			dev->data->port_id, txq_calc_wqebb_cnt(tmpl), max_wqe);
 		rte_errno = ENOMEM;
 		goto error;
-- 
2.50.0

---
  Diff of the applied patch vs upstream commit (please double-check if non-empty:
--- -	2025-07-18 20:29:16.648964724 +0100
+++ 0162-net-mlx5-fix-maximal-queue-size-query.patch	2025-07-18 20:29:11.171908069 +0100
@@ -1 +1 @@
-From 9de8acd30d5adfc5b9703d15a3e1babc7d4ddacc Mon Sep 17 00:00:00 2001
+From 355a5224b5a3467646e36950b4cd5844a05d5060 Mon Sep 17 00:00:00 2001
@@ -5,0 +6,2 @@
+[ upstream commit 9de8acd30d5adfc5b9703d15a3e1babc7d4ddacc ]
+
@@ -21 +22,0 @@
-Cc: stable@dpdk.org
@@ -37 +38 @@
-index 742c274a85..7accdeab87 100644
+index 2d82807bc2..d24fd197ba 100644
@@ -47 +48 @@
-index 36f11b9c51..5695d0f54a 100644
+index 856d432c69..8849334755 100644
@@ -50 +51 @@
-@@ -2304,4 +2304,5 @@ int mlx5_representor_info_get(struct rte_eth_dev *dev,
+@@ -2273,4 +2273,5 @@ int mlx5_representor_info_get(struct rte_eth_dev *dev,
@@ -57 +58 @@
-index a12891a983..9711746edb 100644
+index f9081b0e30..7ca95e81c6 100644
@@ -60 +61 @@
-@@ -1594,5 +1594,5 @@ mlx5_txq_devx_obj_new(struct rte_eth_dev *dev, uint16_t idx)
+@@ -1627,5 +1627,5 @@ mlx5_txq_devx_obj_new(struct rte_eth_dev *dev, uint16_t idx)
@@ -68 +69 @@
-index 7708a0b808..a50320075c 100644
+index ddfe968a99..68d1c1bfa7 100644
@@ -126 +127 @@
-index ab29b43875..b676e5394b 100644
+index 75733339e4..508d27d318 100644
@@ -129 +130 @@
-@@ -657,5 +657,5 @@ mlx5_rx_queue_pre_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t *desc,
+@@ -662,5 +662,5 @@ mlx5_rx_queue_pre_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t *desc,
@@ -137 +138 @@
-index 4ee44e9165..8145ad4233 100644
+index 2f679a30cf..485984f9b0 100644
@@ -150 +151 @@
-index ddd3a66282..5fee5bc4e8 100644
+index d0b9576b09..f74af5471e 100644
@@ -153 +154 @@
-@@ -335,5 +335,5 @@ mlx5_tx_queue_pre_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t *desc)
+@@ -334,5 +334,5 @@ mlx5_tx_queue_pre_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t *desc)
@@ -167 +168 @@
-@@ -1055,4 +1055,5 @@ mlx5_txq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
+@@ -1084,4 +1084,5 @@ mlx5_txq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
@@ -173,3 +174,3 @@
-@@ -1079,11 +1080,10 @@ mlx5_txq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
- 	txq_set_params(tmpl);
- 	txq_adjust_params(tmpl);
+@@ -1109,11 +1110,10 @@ mlx5_txq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
+ 	if (txq_adjust_params(tmpl))
+ 		goto error;