From mboxrd@z Thu Jan  1 00:00:00 1970
From: luca.boccassi@gmail.com
To: Alexander Kozyrev
Cc: Matan Azrad, dpdk stable
Subject: patch 'net/mlx5: fix error CQE dumping for vectorized Rx' has been
 queued to stable release 20.11.8
Date: Thu, 23 Feb 2023 09:36:56 +0000
Message-Id: <20230223093715.3926893-52-luca.boccassi@gmail.com>
X-Mailer: git-send-email 2.39.1
In-Reply-To: <20230223093715.3926893-1-luca.boccassi@gmail.com>
References: <20230223093715.3926893-1-luca.boccassi@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
List-Id: patches for DPDK stable branches

Hi,

FYI, your patch has been queued to stable release 20.11.8

Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 02/25/23. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.

Queued patches are on a temporary branch at:
https://github.com/bluca/dpdk-stable

This queued commit can be viewed at:
https://github.com/bluca/dpdk-stable/commit/b151f703cdf97f40950ca20d03b91cf33af342d6

Thanks.

Luca Boccassi

---
>From b151f703cdf97f40950ca20d03b91cf33af342d6 Mon Sep 17 00:00:00 2001
From: Alexander Kozyrev
Date: Fri, 27 Jan 2023 05:22:11 +0200
Subject: [PATCH] net/mlx5: fix error CQE dumping for vectorized Rx

[ upstream commit 633684e0d0defdd7649132797cc14329f71f678c ]

There is a dump file with debug information created for an error CQE
to help with troubleshooting later. It starts with the last CQE, which,
presumably, is the error CQE. But this is only true for the scalar Rx
burst routine, since we handle CQEs there one by one and detect the
error immediately. For vectorized Rx bursts, we may have already moved
to another CQE when we detect the error, since we handle CQEs in
batches there. Go back to the error CQE in this case to dump the
proper CQE.

Fixes: 88c0733535 ("net/mlx5: extend Rx completion with error handling")

Signed-off-by: Alexander Kozyrev
Acked-by: Matan Azrad
---
 drivers/net/mlx5/mlx5_rxtx.c     | 16 +++++++++++-----
 drivers/net/mlx5/mlx5_rxtx.h     |  3 ++-
 drivers/net/mlx5/mlx5_rxtx_vec.c | 12 +++++++-----
 3 files changed, 20 insertions(+), 11 deletions(-)
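The crux of the change is a backward scan over the completions handled in
the current burst: instead of always dumping the CQE at the current consumer
index, the handler now steps back over up to err_n CQEs (pkts_n for the
vectorized path, 1 for the scalar path) until it finds the one carrying the
error opcode. A minimal standalone sketch of that scan follows; it is not
part of the patch, and the types, constants and helper names used here
(struct cqe, OPCODE_RESP_ERR, dump_cqe) are simplified stand-ins for the
mlx5 definitions, not the driver's actual API.

#include <stdint.h>
#include <stdio.h>

/* Simplified stand-in for the mlx5 CQE layout (hypothetical, not the PRM definition). */
struct cqe {
	uint8_t op_own; /* opcode in the high nibble, ownership in the low bits */
};

#define CQE_OPCODE(op_own) ((op_own) >> 4)
#define OPCODE_RESP_ERR 0xd /* hypothetical error opcode value */

static void dump_cqe(const struct cqe *cqe, uint16_t idx)
{
	printf("error CQE found at ring index %u (op_own=0x%02x)\n", idx, cqe->op_own);
}

/*
 * Walk backwards from the consumer index over up to err_n completions and
 * dump the first CQE that carries the error opcode.  'vec' is 1 for the
 * vectorized path, where cq_ci has already advanced past the batch.
 */
static void dump_error_cqe(const struct cqe *ring, uint16_t cqe_n,
			   uint16_t cq_ci, uint8_t vec, uint16_t err_n)
{
	const uint16_t mask = cqe_n - 1; /* cqe_n is a power of two */
	uint16_t idx = (uint16_t)(cq_ci - vec) & mask;

	for (uint16_t i = 0; i < err_n; i++) {
		idx = (uint16_t)(cq_ci - vec - i) & mask;
		if (CQE_OPCODE(ring[idx].op_own) == OPCODE_RESP_ERR)
			break; /* found the error CQE inside the batch */
	}
	dump_cqe(&ring[idx], idx);
}

int main(void)
{
	struct cqe ring[8] = {0};
	uint16_t cq_ci = 6; /* consumer index after a batch of 4 completions */

	ring[3].op_own = OPCODE_RESP_ERR << 4; /* error sits in the middle of the batch */
	dump_error_cqe(ring, 8, cq_ci, 1, 4);  /* scalar path would pass vec=0, err_n=1 */
	return 0;
}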
diff --git a/drivers/net/mlx5/mlx5_rxtx.c b/drivers/net/mlx5/mlx5_rxtx.c
index 12a1eff681..1422961d62 100644
--- a/drivers/net/mlx5/mlx5_rxtx.c
+++ b/drivers/net/mlx5/mlx5_rxtx.c
@@ -1000,12 +1000,14 @@ mlx5_queue_state_modify(struct rte_eth_dev *dev,
  * @param[in] vec
  *   1 when called from vectorized Rx burst, need to prepare mbufs for the RQ.
  *   0 when called from non-vectorized Rx burst.
+ * @param[in] err_n
+ *   Number of CQEs to check for an error.
  *
  * @return
  *   MLX5_RECOVERY_ERROR_RET in case of recovery error, otherwise the CQE status.
  */
 int
-mlx5_rx_err_handle(struct mlx5_rxq_data *rxq, uint8_t vec)
+mlx5_rx_err_handle(struct mlx5_rxq_data *rxq, uint8_t vec, uint16_t err_n)
 {
 	const uint16_t cqe_n = 1 << rxq->cqe_n;
 	const uint16_t cqe_mask = cqe_n - 1;
@@ -1017,13 +1019,18 @@ mlx5_rx_err_handle(struct mlx5_rxq_data *rxq, uint8_t vec)
 		volatile struct mlx5_cqe *cqe;
 		volatile struct mlx5_err_cqe *err_cqe;
 	} u = {
-		.cqe = &(*rxq->cqes)[rxq->cq_ci & cqe_mask],
+		.cqe = &(*rxq->cqes)[(rxq->cq_ci - vec) & cqe_mask],
 	};
 	struct mlx5_mp_arg_queue_state_modify sm;
-	int ret;
+	int ret, i;
 
 	switch (rxq->err_state) {
 	case MLX5_RXQ_ERR_STATE_NO_ERROR:
+		for (i = 0; i < (int)err_n; i++) {
+			u.cqe = &(*rxq->cqes)[(rxq->cq_ci - vec - i) & cqe_mask];
+			if (MLX5_CQE_OPCODE(u.cqe->op_own) == MLX5_CQE_RESP_ERR)
+				break;
+		}
 		rxq->err_state = MLX5_RXQ_ERR_STATE_NEED_RESET;
 		/* Fall-through */
 	case MLX5_RXQ_ERR_STATE_NEED_RESET:
@@ -1083,7 +1090,6 @@ mlx5_rx_err_handle(struct mlx5_rxq_data *rxq, uint8_t vec)
 					rxq->elts_ci : rxq->rq_ci;
 		uint32_t elt_idx;
 		struct rte_mbuf **elt;
-		int i;
 		unsigned int n = elts_n - (elts_ci -
 					   rxq->rq_pi);
 
@@ -1204,7 +1210,7 @@ mlx5_rx_poll_len(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cqe,
 		if (unlikely(ret != MLX5_CQE_STATUS_SW_OWN)) {
 			if (unlikely(ret == MLX5_CQE_STATUS_ERR ||
 				     rxq->err_state)) {
-				ret = mlx5_rx_err_handle(rxq, 0);
+				ret = mlx5_rx_err_handle(rxq, 0, 1);
 				if (ret == MLX5_CQE_STATUS_HW_OWN ||
 				    ret == MLX5_RECOVERY_ERROR_RET)
 					return MLX5_ERROR_CQE_RET;
diff --git a/drivers/net/mlx5/mlx5_rxtx.h b/drivers/net/mlx5/mlx5_rxtx.h
index 964ebaaaad..7fa471a651 100644
--- a/drivers/net/mlx5/mlx5_rxtx.h
+++ b/drivers/net/mlx5/mlx5_rxtx.h
@@ -427,7 +427,8 @@ void mlx5_set_cksum_table(void);
 void mlx5_set_swp_types_table(void);
 uint16_t mlx5_rx_burst(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n);
 void mlx5_rxq_initialize(struct mlx5_rxq_data *rxq);
-__rte_noinline int mlx5_rx_err_handle(struct mlx5_rxq_data *rxq, uint8_t vec);
+__rte_noinline int mlx5_rx_err_handle(struct mlx5_rxq_data *rxq,
+				      uint8_t vec, uint16_t err_n);
 void mlx5_mprq_buf_free_cb(void *addr, void *opaque);
 void mlx5_mprq_buf_free(struct mlx5_mprq_buf *buf);
 uint16_t mlx5_rx_burst_mprq(void *dpdk_rxq, struct rte_mbuf **pkts,
diff --git a/drivers/net/mlx5/mlx5_rxtx_vec.c b/drivers/net/mlx5/mlx5_rxtx_vec.c
index d156de4ec1..ca0f585863 100644
--- a/drivers/net/mlx5/mlx5_rxtx_vec.c
+++ b/drivers/net/mlx5/mlx5_rxtx_vec.c
@@ -73,7 +73,7 @@ rxq_handle_pending_error(struct mlx5_rxq_data *rxq, struct rte_mbuf **pkts,
 	rxq->stats.ipackets -= (pkts_n - n);
 	rxq->stats.ibytes -= err_bytes;
 #endif
-	mlx5_rx_err_handle(rxq, 1);
+	mlx5_rx_err_handle(rxq, 1, pkts_n);
 	return n;
 }
 
@@ -247,8 +247,6 @@ rxq_copy_mprq_mbuf_v(struct mlx5_rxq_data *rxq,
 	}
 	rxq->rq_pi += i;
 	rxq->cq_ci += i;
-	rte_io_wmb();
-	*rxq->cq_db = rte_cpu_to_be_32(rxq->cq_ci);
 	if (rq_ci != rxq->rq_ci) {
 		rxq->rq_ci = rq_ci;
 		rte_io_wmb();
@@ -355,8 +353,6 @@ rxq_burst_v(struct mlx5_rxq_data *rxq, struct rte_mbuf **pkts,
 			rxq->decompressed -= n;
 		}
 	}
-	rte_io_wmb();
-	*rxq->cq_db = rte_cpu_to_be_32(rxq->cq_ci);
 	*no_cq = !rcvd_pkt;
 	return rcvd_pkt;
 }
@@ -384,6 +380,7 @@ mlx5_rx_burst_vec(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n)
 	bool no_cq = false;
 
 	do {
+		err = 0;
 		nb_rx = rxq_burst_v(rxq, pkts + tn, pkts_n - tn,
 				    &err, &no_cq);
 		if (unlikely(err | rxq->err_state))
@@ -391,6 +388,8 @@ mlx5_rx_burst_vec(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n)
 		tn += nb_rx;
 		if (unlikely(no_cq))
 			break;
+		rte_io_wmb();
+		*rxq->cq_db = rte_cpu_to_be_32(rxq->cq_ci);
 	} while (tn != pkts_n);
 	return tn;
 }
 
@@ -518,6 +517,7 @@ mlx5_rx_burst_mprq_vec(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n)
 	bool no_cq = false;
 
 	do {
+		err = 0;
 		nb_rx = rxq_burst_mprq_v(rxq, pkts + tn, pkts_n - tn,
 					 &err, &no_cq);
 		if (unlikely(err | rxq->err_state))
@@ -525,6 +525,8 @@ mlx5_rx_burst_mprq_vec(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n)
 		tn += nb_rx;
 		if (unlikely(no_cq))
 			break;
+		rte_io_wmb();
+		*rxq->cq_db = rte_cpu_to_be_32(rxq->cq_ci);
 	} while (tn != pkts_n);
 	return tn;
 }
-- 
2.39.1

---
  Diff of the applied patch vs upstream commit (please double-check if non-empty:
---
--- -	2023-02-23 09:36:30.314715380 +0000
+++ 0052-net-mlx5-fix-error-CQE-dumping-for-vectorized-Rx.patch	2023-02-23 09:36:28.310171487 +0000
@@ -1 +1 @@
-From 633684e0d0defdd7649132797cc14329f71f678c Mon Sep 17 00:00:00 2001
+From b151f703cdf97f40950ca20d03b91cf33af342d6 Mon Sep 17 00:00:00 2001
@@ -5,0 +6,2 @@
+[ upstream commit 633684e0d0defdd7649132797cc14329f71f678c ]
+
@@ -16 +17,0 @@
-Cc: stable@dpdk.org
@@ -21,2 +22,2 @@
- drivers/net/mlx5/mlx5_rx.c | 16 +++++++++++-----
- drivers/net/mlx5/mlx5_rx.h | 3 ++-
+ drivers/net/mlx5/mlx5_rxtx.c | 16 +++++++++++-----
+ drivers/net/mlx5/mlx5_rxtx.h | 3 ++-
@@ -26,5 +27,5 @@
-diff --git a/drivers/net/mlx5/mlx5_rx.c b/drivers/net/mlx5/mlx5_rx.c
-index 917c517b83..7612d15f01 100644
---- a/drivers/net/mlx5/mlx5_rx.c
-+++ b/drivers/net/mlx5/mlx5_rx.c
-@@ -425,12 +425,14 @@ mlx5_rxq_initialize(struct mlx5_rxq_data *rxq)
+diff --git a/drivers/net/mlx5/mlx5_rxtx.c b/drivers/net/mlx5/mlx5_rxtx.c
+index 12a1eff681..1422961d62 100644
+--- a/drivers/net/mlx5/mlx5_rxtx.c
++++ b/drivers/net/mlx5/mlx5_rxtx.c
+@@ -1000,12 +1000,14 @@ mlx5_queue_state_modify(struct rte_eth_dev *dev,
@@ -46 +47 @@
-@@ -442,13 +444,18 @@ mlx5_rx_err_handle(struct mlx5_rxq_data *rxq, uint8_t vec)
+@@ -1017,13 +1019,18 @@ mlx5_rx_err_handle(struct mlx5_rxq_data *rxq, uint8_t vec)
@@ -67 +68 @@
-@@ -507,7 +514,6 @@ mlx5_rx_err_handle(struct mlx5_rxq_data *rxq, uint8_t vec)
+@@ -1083,7 +1090,6 @@ mlx5_rx_err_handle(struct mlx5_rxq_data *rxq, uint8_t vec)
@@ -75 +76 @@
-@@ -628,7 +634,7 @@ mlx5_rx_poll_len(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cqe,
+@@ -1204,7 +1210,7 @@ mlx5_rx_poll_len(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cqe,
@@ -84,6 +85,6 @@
-diff --git a/drivers/net/mlx5/mlx5_rx.h b/drivers/net/mlx5/mlx5_rx.h
-index e078aaf3dc..4ba53ebc48 100644
---- a/drivers/net/mlx5/mlx5_rx.h
-+++ b/drivers/net/mlx5/mlx5_rx.h
-@@ -286,7 +286,8 @@ int mlx5_hrxq_modify(struct rte_eth_dev *dev, uint32_t hxrq_idx,
-
+diff --git a/drivers/net/mlx5/mlx5_rxtx.h b/drivers/net/mlx5/mlx5_rxtx.h
+index 964ebaaaad..7fa471a651 100644
+--- a/drivers/net/mlx5/mlx5_rxtx.h
++++ b/drivers/net/mlx5/mlx5_rxtx.h
+@@ -427,7 +427,8 @@ void mlx5_set_cksum_table(void);
+ void mlx5_set_swp_types_table(void);
@@ -94,0 +96 @@
+ void mlx5_mprq_buf_free_cb(void *addr, void *opaque);
@@ -97 +98,0 @@
-			    uint16_t pkts_n);
@@ -99 +100 @@
-index 0e2eab068a..c6be2be763 100644
+index d156de4ec1..ca0f585863 100644
@@ -102 +103 @@
-@@ -74,7 +74,7 @@ rxq_handle_pending_error(struct mlx5_rxq_data *rxq, struct rte_mbuf **pkts,
+@@ -73,7 +73,7 @@ rxq_handle_pending_error(struct mlx5_rxq_data *rxq, struct rte_mbuf **pkts,
@@ -111 +112 @@
-@@ -253,8 +253,6 @@ rxq_copy_mprq_mbuf_v(struct mlx5_rxq_data *rxq,
+@@ -247,8 +247,6 @@ rxq_copy_mprq_mbuf_v(struct mlx5_rxq_data *rxq,
@@ -120 +121 @@
-@@ -361,8 +359,6 @@ rxq_burst_v(struct mlx5_rxq_data *rxq, struct rte_mbuf **pkts,
+@@ -355,8 +353,6 @@ rxq_burst_v(struct mlx5_rxq_data *rxq, struct rte_mbuf **pkts,
@@ -129 +130 @@
-@@ -390,6 +386,7 @@ mlx5_rx_burst_vec(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n)
+@@ -384,6 +380,7 @@ mlx5_rx_burst_vec(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n)
@@ -137 +138 @@
-@@ -397,6 +394,8 @@ mlx5_rx_burst_vec(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n)
+@@ -391,6 +388,8 @@ mlx5_rx_burst_vec(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n)
@@ -146 +147 @@
-@@ -524,6 +523,7 @@ mlx5_rx_burst_mprq_vec(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n)
+@@ -518,6 +517,7 @@ mlx5_rx_burst_mprq_vec(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n)
@@ -154 +155 @@
-@@ -531,6 +531,8 @@ mlx5_rx_burst_mprq_vec(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n)
+@@ -525,6 +525,8 @@ mlx5_rx_burst_mprq_vec(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n)