From mboxrd@z Thu Jan  1 00:00:00 1970
From: luca.boccassi@gmail.com
To: Viacheslav Ovsiienko
Cc: dpdk stable
Subject: patch 'net/mlx5: fix check for orphan wait descriptor' has been queued to stable release 20.11.7
Date: Thu, 3 Nov 2022 09:27:02 +0000
Message-Id: <20221103092758.1099402-44-luca.boccassi@gmail.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20221103092758.1099402-1-luca.boccassi@gmail.com>
References: <20221103092758.1099402-1-luca.boccassi@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
List-Id: patches for DPDK stable branches

Hi,

FYI, your patch has been queued to stable release 20.11.7.

Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/05/22. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double-check that the rebase was
correctly done.

Queued patches are on a temporary branch at:
https://github.com/kevintraynor/dpdk-stable

This queued commit can be viewed at:
https://github.com/kevintraynor/dpdk-stable/commit/3cf97b1cd1c14f2cbf338aa479be32e7c7e5f13e

Thanks.

Luca Boccassi

---
>From 3cf97b1cd1c14f2cbf338aa479be32e7c7e5f13e Mon Sep 17 00:00:00 2001
From: Viacheslav Ovsiienko
Date: Thu, 11 Aug 2022 08:50:58 +0300
Subject: [PATCH] net/mlx5: fix check for orphan wait descriptor

[ upstream commit 37d6fc30c1ad03485ef707140b67623b95498d0d ]

The mlx5 PMD supports the send scheduling feature, which allows sending
packets at a specified moment of time. To do that, the PMD pushes a
special wait descriptor (WQE) to the hardware queue and then pushes the
descriptor for the packet data as usual. If the queue is close to full,
or there are not enough elts buffers to store the mbufs being sent, the
data descriptors might not be pushed, and an orphan wait WQE (not
followed by data) might remain in the queue when the tx_burst routine
exits.

To avoid orphan wait WQEs, there was a check for enough free space in
the queue WQE buffer and for a sufficient number of free elts in the
queue mbuf storage. This check was incomplete and did not cover all the
cases for Enhanced Multi-Packet Write descriptors.

Fixes: 2f827f5ea6e1 ("net/mlx5: support scheduling on send routine template")

Signed-off-by: Viacheslav Ovsiienko
---
 drivers/net/mlx5/mlx5_rxtx.c | 78 ++++++++++++++++++++----------------
 1 file changed, 43 insertions(+), 35 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_rxtx.c b/drivers/net/mlx5/mlx5_rxtx.c
index baacd7587a..b73ab06367 100644
--- a/drivers/net/mlx5/mlx5_rxtx.c
+++ b/drivers/net/mlx5/mlx5_rxtx.c
@@ -3152,6 +3152,9 @@ dseg_done:
  *   Pointer to TX queue structure.
  * @param loc
  *   Pointer to burst routine local context.
+ * @param elts
+ *   Number of free elements in elts buffer to be checked, for zero
+ *   value the check is optimized out by compiler.
  * @param olx
  *   Configured Tx offloads mask. It is fully defined at
  *   compile time and may be used for optimization.
@@ -3165,6 +3168,7 @@ dseg_done:
 static __rte_always_inline enum mlx5_txcmp_code
 mlx5_tx_schedule_send(struct mlx5_txq_data *restrict txq,
 		      struct mlx5_txq_local *restrict loc,
+		      uint16_t elts,
 		      unsigned int olx)
 {
 	if (MLX5_TXOFF_CONFIG(TXPP) &&
@@ -3179,7 +3183,7 @@ mlx5_tx_schedule_send(struct mlx5_txq_data *restrict txq,
 		 * to the queue and we won't get the orphan WAIT WQE.
 		 */
 		if (loc->wqe_free <= MLX5_WQE_SIZE_MAX / MLX5_WQE_SIZE ||
-		    loc->elts_free < NB_SEGS(loc->mbuf))
+		    loc->elts_free < elts)
 			return MLX5_TXCMP_CODE_EXIT;
 		/* Convert the timestamp into completion to wait. */
 		ts = *RTE_MBUF_DYNFIELD(loc->mbuf, txq->ts_offset, uint64_t *);
@@ -3226,11 +3230,12 @@ mlx5_tx_packet_multi_tso(struct mlx5_txq_data *__rte_restrict txq,
 	struct mlx5_wqe *__rte_restrict wqe;
 	unsigned int ds, dlen, inlen, ntcp, vlan = 0;
 
+	MLX5_ASSERT(loc->elts_free >= NB_SEGS(loc->mbuf));
 	if (MLX5_TXOFF_CONFIG(TXPP)) {
 		enum mlx5_txcmp_code wret;
 
 		/* Generate WAIT for scheduling if requested. */
-		wret = mlx5_tx_schedule_send(txq, loc, olx);
+		wret = mlx5_tx_schedule_send(txq, loc, 0, olx);
 		if (wret == MLX5_TXCMP_CODE_EXIT)
 			return MLX5_TXCMP_CODE_EXIT;
 		if (wret == MLX5_TXCMP_CODE_ERROR)
@@ -3326,11 +3331,12 @@ mlx5_tx_packet_multi_send(struct mlx5_txq_data *__rte_restrict txq,
 	unsigned int ds, nseg;
 
 	MLX5_ASSERT(NB_SEGS(loc->mbuf) > 1);
+	MLX5_ASSERT(loc->elts_free >= NB_SEGS(loc->mbuf));
 	if (MLX5_TXOFF_CONFIG(TXPP)) {
 		enum mlx5_txcmp_code wret;
 
 		/* Generate WAIT for scheduling if requested. */
-		wret = mlx5_tx_schedule_send(txq, loc, olx);
+		wret = mlx5_tx_schedule_send(txq, loc, 0, olx);
 		if (wret == MLX5_TXCMP_CODE_EXIT)
 			return MLX5_TXCMP_CODE_EXIT;
 		if (wret == MLX5_TXCMP_CODE_ERROR)
@@ -3444,16 +3450,7 @@ mlx5_tx_packet_multi_inline(struct mlx5_txq_data *__rte_restrict txq,
 
 	MLX5_ASSERT(MLX5_TXOFF_CONFIG(INLINE));
 	MLX5_ASSERT(NB_SEGS(loc->mbuf) > 1);
-	if (MLX5_TXOFF_CONFIG(TXPP)) {
-		enum mlx5_txcmp_code wret;
-
-		/* Generate WAIT for scheduling if requested. */
-		wret = mlx5_tx_schedule_send(txq, loc, olx);
-		if (wret == MLX5_TXCMP_CODE_EXIT)
-			return MLX5_TXCMP_CODE_EXIT;
-		if (wret == MLX5_TXCMP_CODE_ERROR)
-			return MLX5_TXCMP_CODE_ERROR;
-	}
+	MLX5_ASSERT(loc->elts_free >= NB_SEGS(loc->mbuf));
 	/*
 	 * First calculate data length to be inlined
 	 * to estimate the required space for WQE.
 	 */
@@ -3560,6 +3557,16 @@ do_align:
 	 * supposing no any mbufs is being freed during inlining.
 	 */
 do_build:
+	if (MLX5_TXOFF_CONFIG(TXPP)) {
+		enum mlx5_txcmp_code wret;
+
+		/* Generate WAIT for scheduling if requested. */
+		wret = mlx5_tx_schedule_send(txq, loc, 0, olx);
+		if (wret == MLX5_TXCMP_CODE_EXIT)
+			return MLX5_TXCMP_CODE_EXIT;
+		if (wret == MLX5_TXCMP_CODE_ERROR)
+			return MLX5_TXCMP_CODE_ERROR;
+	}
 	MLX5_ASSERT(inlen <= txq->inlen_send);
 	ds = NB_SEGS(loc->mbuf) + 2 +
 	     (inlen - MLX5_ESEG_MIN_INLINE_SIZE +
@@ -3723,7 +3730,7 @@ mlx5_tx_burst_tso(struct mlx5_txq_data *__rte_restrict txq,
 			enum mlx5_txcmp_code wret;
 
 			/* Generate WAIT for scheduling if requested. */
-			wret = mlx5_tx_schedule_send(txq, loc, olx);
+			wret = mlx5_tx_schedule_send(txq, loc, 1, olx);
 			if (wret == MLX5_TXCMP_CODE_EXIT)
 				return MLX5_TXCMP_CODE_EXIT;
 			if (wret == MLX5_TXCMP_CODE_ERROR)
@@ -4109,16 +4116,6 @@ mlx5_tx_burst_empw_simple(struct mlx5_txq_data *__rte_restrict txq,
 next_empw:
 		MLX5_ASSERT(NB_SEGS(loc->mbuf) == 1);
-		if (MLX5_TXOFF_CONFIG(TXPP)) {
-			enum mlx5_txcmp_code wret;
-
-			/* Generate WAIT for scheduling if requested. */
-			wret = mlx5_tx_schedule_send(txq, loc, olx);
-			if (wret == MLX5_TXCMP_CODE_EXIT)
-				return MLX5_TXCMP_CODE_EXIT;
-			if (wret == MLX5_TXCMP_CODE_ERROR)
-				return MLX5_TXCMP_CODE_ERROR;
-		}
 		part = RTE_MIN(pkts_n, MLX5_TXOFF_CONFIG(MPW) ?
			       MLX5_MPW_MAX_PACKETS :
			       MLX5_EMPW_MAX_PACKETS);
@@ -4129,6 +4126,16 @@ next_empw:
 			/* But we still able to send at least minimal eMPW. */
 			part = loc->elts_free;
 		}
+		if (MLX5_TXOFF_CONFIG(TXPP)) {
+			enum mlx5_txcmp_code wret;
+
+			/* Generate WAIT for scheduling if requested. */
+			wret = mlx5_tx_schedule_send(txq, loc, 0, olx);
+			if (wret == MLX5_TXCMP_CODE_EXIT)
+				return MLX5_TXCMP_CODE_EXIT;
+			if (wret == MLX5_TXCMP_CODE_ERROR)
+				return MLX5_TXCMP_CODE_ERROR;
+		}
 		/* Check whether we have enough WQEs */
 		if (unlikely(loc->wqe_free < ((2 + part + 3) / 4))) {
 			if (unlikely(loc->wqe_free <
@@ -4285,16 +4292,6 @@ mlx5_tx_burst_empw_inline(struct mlx5_txq_data *__rte_restrict txq,
 		unsigned int slen = 0;
 
 		MLX5_ASSERT(NB_SEGS(loc->mbuf) == 1);
-		if (MLX5_TXOFF_CONFIG(TXPP)) {
-			enum mlx5_txcmp_code wret;
-
-			/* Generate WAIT for scheduling if requested. */
-			wret = mlx5_tx_schedule_send(txq, loc, olx);
-			if (wret == MLX5_TXCMP_CODE_EXIT)
-				return MLX5_TXCMP_CODE_EXIT;
-			if (wret == MLX5_TXCMP_CODE_ERROR)
-				return MLX5_TXCMP_CODE_ERROR;
-		}
 		/*
 		 * Limits the amount of packets in one WQE
 		 * to improve CQE latency generation.
 		 */
@@ -4302,6 +4299,16 @@ mlx5_tx_burst_empw_inline(struct mlx5_txq_data *__rte_restrict txq,
 		nlim = RTE_MIN(pkts_n, MLX5_TXOFF_CONFIG(MPW) ?
			       MLX5_MPW_INLINE_MAX_PACKETS :
			       MLX5_EMPW_MAX_PACKETS);
+		if (MLX5_TXOFF_CONFIG(TXPP)) {
+			enum mlx5_txcmp_code wret;
+
+			/* Generate WAIT for scheduling if requested. */
+			wret = mlx5_tx_schedule_send(txq, loc, nlim, olx);
+			if (wret == MLX5_TXCMP_CODE_EXIT)
+				return MLX5_TXCMP_CODE_EXIT;
+			if (wret == MLX5_TXCMP_CODE_ERROR)
+				return MLX5_TXCMP_CODE_ERROR;
+		}
 		/* Check whether we have minimal amount WQEs */
 		if (unlikely(loc->wqe_free <
			    ((2 + MLX5_EMPW_MIN_PACKETS + 3) / 4)))
@@ -4570,11 +4577,12 @@ mlx5_tx_burst_single_send(struct mlx5_txq_data *__rte_restrict txq,
 		enum mlx5_txcmp_code ret;
 
 		MLX5_ASSERT(NB_SEGS(loc->mbuf) == 1);
+		MLX5_ASSERT(loc->elts_free);
 		if (MLX5_TXOFF_CONFIG(TXPP)) {
 			enum mlx5_txcmp_code wret;
 
 			/* Generate WAIT for scheduling if requested. */
-			wret = mlx5_tx_schedule_send(txq, loc, olx);
+			wret = mlx5_tx_schedule_send(txq, loc, 0, olx);
 			if (wret == MLX5_TXCMP_CODE_EXIT)
 				return MLX5_TXCMP_CODE_EXIT;
 			if (wret == MLX5_TXCMP_CODE_ERROR)
--
2.34.1

---
  Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- -	2022-11-03 09:27:28.100378300 +0000
+++ 0044-net-mlx5-fix-check-for-orphan-wait-descriptor.patch	2022-11-03 09:27:25.421423370 +0000
@@ -1 +1 @@
-From 37d6fc30c1ad03485ef707140b67623b95498d0d Mon Sep 17 00:00:00 2001
+From 3cf97b1cd1c14f2cbf338aa479be32e7c7e5f13e Mon Sep 17 00:00:00 2001
@@ -5,0 +6,2 @@
+[ upstream commit 37d6fc30c1ad03485ef707140b67623b95498d0d ]
+
@@ -22 +23,0 @@
-Cc: stable@dpdk.org
@@ -26 +27 @@
- drivers/net/mlx5/mlx5_tx.h | 78 +++++++++++++++++++++-----------------
+ drivers/net/mlx5/mlx5_rxtx.c | 78 ++++++++++++++++++++----------------
@@ -29,5 +30,5 @@
-diff --git a/drivers/net/mlx5/mlx5_tx.h b/drivers/net/mlx5/mlx5_tx.h
-index 20776919c2..f081921ffc 100644
---- a/drivers/net/mlx5/mlx5_tx.h
-+++ b/drivers/net/mlx5/mlx5_tx.h
-@@ -1642,6 +1642,9 @@ dseg_done:
+diff --git a/drivers/net/mlx5/mlx5_rxtx.c b/drivers/net/mlx5/mlx5_rxtx.c
+index baacd7587a..b73ab06367 100644
+--- a/drivers/net/mlx5/mlx5_rxtx.c
++++ b/drivers/net/mlx5/mlx5_rxtx.c
+@@ -3152,6 +3152,9 @@ dseg_done:
@@ -43 +44 @@
-@@ -1655,6 +1658,7 @@ dseg_done:
+@@ -3165,6 +3168,7 @@ dseg_done:
@@ -51 +52 @@
-@@ -1669,7 +1673,7 @@ mlx5_tx_schedule_send(struct mlx5_txq_data *restrict txq,
+@@ -3179,7 +3183,7 @@ mlx5_tx_schedule_send(struct mlx5_txq_data *restrict txq,
@@ -60 +61 @@
-@@ -1735,11 +1739,12 @@ mlx5_tx_packet_multi_tso(struct mlx5_txq_data *__rte_restrict txq,
+@@ -3226,11 +3230,12 @@ mlx5_tx_packet_multi_tso(struct mlx5_txq_data *__rte_restrict txq,
@@ -74 +75 @@
-@@ -1833,11 +1838,12 @@ mlx5_tx_packet_multi_send(struct mlx5_txq_data *__rte_restrict txq,
+@@ -3326,11 +3331,12 @@ mlx5_tx_packet_multi_send(struct mlx5_txq_data *__rte_restrict txq,
@@ -88 +89 @@
-@@ -1948,16 +1954,7 @@ mlx5_tx_packet_multi_inline(struct mlx5_txq_data *__rte_restrict txq,
+@@ -3444,16 +3450,7 @@ mlx5_tx_packet_multi_inline(struct mlx5_txq_data *__rte_restrict txq,
@@ -106 +107 @@
-@@ -2063,6 +2060,16 @@ do_align:
+@@ -3560,6 +3557,16 @@ do_align:
@@ -123 +124 @@
-@@ -2223,7 +2230,7 @@ mlx5_tx_burst_tso(struct mlx5_txq_data *__rte_restrict txq,
+@@ -3723,7 +3730,7 @@ mlx5_tx_burst_tso(struct mlx5_txq_data *__rte_restrict txq,
@@ -132 +133 @@
-@@ -2601,16 +2608,6 @@ mlx5_tx_burst_empw_simple(struct mlx5_txq_data *__rte_restrict txq,
+@@ -4109,16 +4116,6 @@ mlx5_tx_burst_empw_simple(struct mlx5_txq_data *__rte_restrict txq,
@@ -149 +150 @@
-@@ -2621,6 +2618,16 @@ next_empw:
+@@ -4129,6 +4126,16 @@ next_empw:
@@ -166 +167 @@
-@@ -2775,16 +2782,6 @@ mlx5_tx_burst_empw_inline(struct mlx5_txq_data *__rte_restrict txq,
+@@ -4285,16 +4292,6 @@ mlx5_tx_burst_empw_inline(struct mlx5_txq_data *__rte_restrict txq,
@@ -183 +184 @@
-@@ -2792,6 +2789,16 @@ mlx5_tx_burst_empw_inline(struct mlx5_txq_data *__rte_restrict txq,
+@@ -4302,6 +4299,16 @@ mlx5_tx_burst_empw_inline(struct mlx5_txq_data *__rte_restrict txq,
@@ -200 +201 @@
-@@ -3074,11 +3081,12 @@ mlx5_tx_burst_single_send(struct mlx5_txq_data *__rte_restrict txq,
+@@ -4570,11 +4577,12 @@ mlx5_tx_burst_single_send(struct mlx5_txq_data *__rte_restrict txq,
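
For anyone reviewing the backport, the guard pattern this patch generalizes
can be summarized with a minimal sketch. This is not the mlx5 datapath code:
the queue model and every identifier below (tx_queue, schedule_wait,
data_wqes) are hypothetical, and the real logic lives in
mlx5_tx_schedule_send() in the diff above.

#include <stdint.h>

/* Toy Tx queue: free descriptor slots and free mbuf-tracking entries. */
struct tx_queue {
	uint16_t wqe_free;  /* free WQE slots in the hardware queue */
	uint16_t elts_free; /* free entries in the mbuf storage ring */
};

enum tx_code { TX_OK, TX_EXIT };

/*
 * Emit a WAIT descriptor only when the data descriptor that must follow
 * it is guaranteed to fit as well, so the WAIT WQE can never be left
 * orphaned. "elts" mirrors the parameter added by the patch: the caller
 * states how many mbuf slots the following data part will consume; a
 * caller that has already validated its elts budget passes 0 and the
 * check is compiled away.
 */
static enum tx_code
schedule_wait(struct tx_queue *q, uint16_t data_wqes, uint16_t elts)
{
	/* 1 slot for the WAIT WQE plus worst-case slots for the data WQE. */
	if (q->wqe_free < 1 + data_wqes || q->elts_free < elts)
		return TX_EXIT; /* bail out before pushing anything */
	q->wqe_free -= 1;       /* push the WAIT WQE; data follows */
	return TX_OK;
}

The fix applies the same idea in two ways: mlx5_tx_schedule_send() now takes
an explicit elts count instead of assuming NB_SEGS(loc->mbuf), and in the
eMPW routines the WAIT generation was moved so it runs only after the number
of packets to be packed (part/nlim) is known.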