From: luca.boccassi@gmail.com
To: Alexander Kozyrev
Cc: Matan Azrad, dpdk stable
Subject: patch 'net/mlx5: ignore non-critical syndromes for Rx queue' has been queued to stable release 20.11.8
Date: Thu, 23 Feb 2023 09:36:57 +0000
Message-Id: <20230223093715.3926893-53-luca.boccassi@gmail.com>
In-Reply-To: <20230223093715.3926893-1-luca.boccassi@gmail.com>
References: <20230223093715.3926893-1-luca.boccassi@gmail.com>
List-Id: patches for DPDK stable branches

Hi,

FYI, your patch has been queued to stable release 20.11.8

Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 02/25/23. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.

Queued patches are on a temporary branch at:
https://github.com/bluca/dpdk-stable

This queued commit can be viewed at:
https://github.com/bluca/dpdk-stable/commit/aad5672767479652695f199cdb26dcdae7e5f5e6

Thanks.

Luca Boccassi

---
From aad5672767479652695f199cdb26dcdae7e5f5e6 Mon Sep 17 00:00:00 2001
From: Alexander Kozyrev
Date: Fri, 27 Jan 2023 05:22:43 +0200
Subject: [PATCH] net/mlx5: ignore non-critical syndromes for Rx queue

[ upstream commit aa67ed3084588e6ca12e9709a6cab021f0ffeba7 ]

For non-fatal syndromes like LOCAL_LENGTH_ERR, the Rx queue reset
shouldn't be triggered. Rx queue could continue with the next packets
without any recovery. Only three syndromes warrant Rx queue reset:
LOCAL_QP_OP_ERR, LOCAL_PROT_ERR and WR_FLUSH_ERR.
Do not initiate a Rx queue reset in any other cases.
Skip all non-critical error CQEs and continue with packet processing.

Fixes: 88c0733535 ("net/mlx5: extend Rx completion with error handling")

Signed-off-by: Alexander Kozyrev
Acked-by: Matan Azrad
---
 drivers/net/mlx5/mlx5_rxtx.c     | 123 ++++++++++++++++++++++++-------
 drivers/net/mlx5/mlx5_rxtx.h     |   5 +-
 drivers/net/mlx5/mlx5_rxtx_vec.c |   3 +-
 3 files changed, 102 insertions(+), 29 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_rxtx.c b/drivers/net/mlx5/mlx5_rxtx.c
index 1422961d62..f7c8b8c076 100644
--- a/drivers/net/mlx5/mlx5_rxtx.c
+++ b/drivers/net/mlx5/mlx5_rxtx.c
@@ -86,7 +86,8 @@ rxq_cq_to_pkt_type(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cqe,
 
 static __rte_always_inline int
 mlx5_rx_poll_len(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cqe,
-		 uint16_t cqe_cnt, volatile struct mlx5_mini_cqe8 **mcqe);
+		 uint16_t cqe_cnt, volatile struct mlx5_mini_cqe8 **mcqe,
+		 uint16_t *skip_cnt, bool mprq);
 
 static __rte_always_inline uint32_t
 rxq_cq_to_ol_flags(volatile struct mlx5_cqe *cqe);
@@ -983,10 +984,14 @@ mlx5_queue_state_modify(struct rte_eth_dev *dev,
 	return ret;
 }
 
+#define MLX5_ERROR_CQE_MASK 0x40000000
 /* Must be negative. */
-#define MLX5_ERROR_CQE_RET (-1)
+#define MLX5_REGULAR_ERROR_CQE_RET (-5)
+#define MLX5_CRITICAL_ERROR_CQE_RET (-4)
 /* Must not be negative. */
 #define MLX5_RECOVERY_ERROR_RET 0
+#define MLX5_RECOVERY_IGNORE_RET 1
+#define MLX5_RECOVERY_COMPLETED_RET 2
 
 /**
  * Handle a Rx error.
@@ -1004,10 +1009,14 @@ mlx5_queue_state_modify(struct rte_eth_dev *dev,
  *   Number of CQEs to check for an error.
  *
  * @return
- *   MLX5_RECOVERY_ERROR_RET in case of recovery error, otherwise the CQE status.
+ *   MLX5_RECOVERY_ERROR_RET in case of recovery error,
+ *   MLX5_RECOVERY_IGNORE_RET in case of non-critical error syndrome,
+ *   MLX5_RECOVERY_COMPLETED_RET in case of recovery is completed,
+ *   otherwise the CQE status after ignored error syndrome or queue reset.
 */
 int
-mlx5_rx_err_handle(struct mlx5_rxq_data *rxq, uint8_t vec, uint16_t err_n)
+mlx5_rx_err_handle(struct mlx5_rxq_data *rxq, uint8_t vec,
+		   uint16_t err_n, uint16_t *skip_cnt)
 {
 	const uint16_t cqe_n = 1 << rxq->cqe_n;
 	const uint16_t cqe_mask = cqe_n - 1;
@@ -1022,14 +1031,35 @@ mlx5_rx_err_handle(struct mlx5_rxq_data *rxq, uint8_t vec, uint16_t err_n)
 		.cqe = &(*rxq->cqes)[(rxq->cq_ci - vec) & cqe_mask],
 	};
 	struct mlx5_mp_arg_queue_state_modify sm;
+	bool critical_syndrome = false;
 	int ret, i;
 
 	switch (rxq->err_state) {
+	case MLX5_RXQ_ERR_STATE_IGNORE:
+		ret = check_cqe(u.cqe, cqe_n, rxq->cq_ci - vec);
+		if (ret != MLX5_CQE_STATUS_ERR) {
+			rxq->err_state = MLX5_RXQ_ERR_STATE_NO_ERROR;
+			return ret;
+		}
+		/* Fall-through */
 	case MLX5_RXQ_ERR_STATE_NO_ERROR:
 		for (i = 0; i < (int)err_n; i++) {
 			u.cqe = &(*rxq->cqes)[(rxq->cq_ci - vec - i) & cqe_mask];
-			if (MLX5_CQE_OPCODE(u.cqe->op_own) == MLX5_CQE_RESP_ERR)
+			if (MLX5_CQE_OPCODE(u.cqe->op_own) == MLX5_CQE_RESP_ERR) {
+				if (u.err_cqe->syndrome == MLX5_CQE_SYNDROME_LOCAL_QP_OP_ERR ||
+				    u.err_cqe->syndrome == MLX5_CQE_SYNDROME_LOCAL_PROT_ERR ||
+				    u.err_cqe->syndrome == MLX5_CQE_SYNDROME_WR_FLUSH_ERR)
+					critical_syndrome = true;
 				break;
+			}
+		}
+		if (!critical_syndrome) {
+			if (rxq->err_state == MLX5_RXQ_ERR_STATE_NO_ERROR) {
+				*skip_cnt = 0;
+				if (i == err_n)
+					rxq->err_state = MLX5_RXQ_ERR_STATE_IGNORE;
+			}
+			return MLX5_RECOVERY_IGNORE_RET;
 		}
 		rxq->err_state = MLX5_RXQ_ERR_STATE_NEED_RESET;
 		/* Fall-through */
@@ -1122,6 +1152,7 @@ mlx5_rx_err_handle(struct mlx5_rxq_data *rxq, uint8_t vec, uint16_t err_n)
 			}
 			mlx5_rxq_initialize(rxq);
 			rxq->err_state = MLX5_RXQ_ERR_STATE_NO_ERROR;
+			return MLX5_RECOVERY_COMPLETED_RET;
 		}
 		return ret;
 	default:
@@ -1141,19 +1172,24 @@ mlx5_rx_err_handle(struct mlx5_rxq_data *rxq, uint8_t vec, uint16_t err_n)
  * @param[out] mcqe
  *   Store pointer to mini-CQE if compressed. Otherwise, the pointer is not
  *   written.
- *
+ * @param[out] skip_cnt
+ *   Number of packets skipped due to recoverable errors.
+ * @param mprq
+ *   Indication if it is called from MPRQ.
  * @return
- *   0 in case of empty CQE, MLX5_ERROR_CQE_RET in case of error CQE,
- *   otherwise the packet size in regular RxQ, and striding byte
- *   count format in mprq case.
+ *   0 in case of empty CQE, MLX5_REGULAR_ERROR_CQE_RET in case of error CQE,
+ *   MLX5_CRITICAL_ERROR_CQE_RET in case of error CQE lead to Rx queue reset,
+ *   otherwise the packet size in regular RxQ,
+ *   and striding byte count format in mprq case.
  */
 static inline int
 mlx5_rx_poll_len(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cqe,
-		 uint16_t cqe_cnt, volatile struct mlx5_mini_cqe8 **mcqe)
+		 uint16_t cqe_cnt, volatile struct mlx5_mini_cqe8 **mcqe,
+		 uint16_t *skip_cnt, bool mprq)
 {
 	struct rxq_zip *zip = &rxq->zip;
 	uint16_t cqe_n = cqe_cnt + 1;
-	int len;
+	int len = 0, ret = 0;
 	uint16_t idx, end;
 
 	do {
@@ -1202,7 +1238,6 @@ mlx5_rx_poll_len(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cqe,
 		 * compressed.
 		 */
 	} else {
-		int ret;
 		int8_t op_own;
 		uint32_t cq_ci;
 
@@ -1210,10 +1245,12 @@ mlx5_rx_poll_len(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cqe,
 		if (unlikely(ret != MLX5_CQE_STATUS_SW_OWN)) {
 			if (unlikely(ret == MLX5_CQE_STATUS_ERR ||
 				     rxq->err_state)) {
-				ret = mlx5_rx_err_handle(rxq, 0, 1);
-				if (ret == MLX5_CQE_STATUS_HW_OWN ||
-				    ret == MLX5_RECOVERY_ERROR_RET)
-					return MLX5_ERROR_CQE_RET;
+				ret = mlx5_rx_err_handle(rxq, 0, 1, skip_cnt);
+				if (ret == MLX5_CQE_STATUS_HW_OWN)
+					return MLX5_ERROR_CQE_MASK;
+				if (ret == MLX5_RECOVERY_ERROR_RET ||
+				    ret == MLX5_RECOVERY_COMPLETED_RET)
+					return MLX5_CRITICAL_ERROR_CQE_RET;
 			} else {
 				return 0;
 			}
@@ -1266,8 +1303,15 @@ mlx5_rx_poll_len(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cqe,
 		}
 	}
 	if (unlikely(rxq->err_state)) {
+		if (rxq->err_state == MLX5_RXQ_ERR_STATE_IGNORE &&
+		    ret == MLX5_CQE_STATUS_SW_OWN) {
+			rxq->err_state = MLX5_RXQ_ERR_STATE_NO_ERROR;
+			return len & MLX5_ERROR_CQE_MASK;
+		}
 		cqe = &(*rxq->cqes)[rxq->cq_ci & cqe_cnt];
 		++rxq->stats.idropped;
+		(*skip_cnt) += mprq ? (len & MLX5_MPRQ_STRIDE_NUM_MASK) >>
+			MLX5_MPRQ_STRIDE_NUM_SHIFT : 1;
 	} else {
 		return len;
 	}
@@ -1418,6 +1462,7 @@ mlx5_rx_burst(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n)
 	int len = 0; /* keep its value across iterations. */
 
 	while (pkts_n) {
+		uint16_t skip_cnt;
 		unsigned int idx = rq_ci & wqe_cnt;
 		volatile struct mlx5_wqe_data_seg *wqe =
 			&((volatile struct mlx5_wqe_data_seg *)rxq->wqes)[idx];
@@ -1456,11 +1501,24 @@ mlx5_rx_burst(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n)
 		}
 		if (!pkt) {
 			cqe = &(*rxq->cqes)[rxq->cq_ci & cqe_cnt];
-			len = mlx5_rx_poll_len(rxq, cqe, cqe_cnt, &mcqe);
-			if (len <= 0) {
-				rte_mbuf_raw_free(rep);
-				if (unlikely(len == MLX5_ERROR_CQE_RET))
+			len = mlx5_rx_poll_len(rxq, cqe, cqe_cnt, &mcqe, &skip_cnt, false);
+			if (unlikely(len & MLX5_ERROR_CQE_MASK)) {
+				if (len == MLX5_CRITICAL_ERROR_CQE_RET) {
+					rte_mbuf_raw_free(rep);
 					rq_ci = rxq->rq_ci << sges_n;
+					break;
+				}
+				rq_ci >>= sges_n;
+				rq_ci += skip_cnt;
+				rq_ci <<= sges_n;
+				idx = rq_ci & wqe_cnt;
+				wqe = &((volatile struct mlx5_wqe_data_seg *)rxq->wqes)[idx];
+				seg = (*rxq->elts)[idx];
+				cqe = &(*rxq->cqes)[rxq->cq_ci & cqe_cnt];
+				len = len & ~MLX5_ERROR_CQE_MASK;
+			}
+			if (len == 0) {
+				rte_mbuf_raw_free(rep);
 				break;
 			}
 			pkt = seg;
@@ -1684,6 +1742,7 @@ mlx5_rx_burst_mprq(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n)
 		uint16_t strd_cnt;
 		uint16_t strd_idx;
 		uint32_t byte_cnt;
+		uint16_t skip_cnt;
 		volatile struct mlx5_mini_cqe8 *mcqe = NULL;
 		enum mlx5_rqx_code rxq_code;
 
@@ -1696,14 +1755,26 @@ mlx5_rx_burst_mprq(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n)
 			buf = (*rxq->mprq_bufs)[rq_ci & wq_mask];
 		}
 		cqe = &(*rxq->cqes)[rxq->cq_ci & cq_mask];
-		ret = mlx5_rx_poll_len(rxq, cqe, cq_mask, &mcqe);
+		ret = mlx5_rx_poll_len(rxq, cqe, cq_mask, &mcqe, &skip_cnt, true);
+		if (unlikely(ret & MLX5_ERROR_CQE_MASK)) {
+			if (ret == MLX5_CRITICAL_ERROR_CQE_RET) {
+				rq_ci = rxq->rq_ci;
+				consumed_strd = rxq->consumed_strd;
+				break;
+			}
+			consumed_strd += skip_cnt;
+			while (consumed_strd >= strd_n) {
+				/* Replace WQE if the buffer is still in use. */
+				mprq_buf_replace(rxq, rq_ci & wq_mask);
+				/* Advance to the next WQE. */
+				consumed_strd -= strd_n;
+				++rq_ci;
+				buf = (*rxq->mprq_bufs)[rq_ci & wq_mask];
+			}
+			cqe = &(*rxq->cqes)[rxq->cq_ci & cq_mask];
+		}
 		if (ret == 0)
 			break;
-		if (unlikely(ret == MLX5_ERROR_CQE_RET)) {
-			rq_ci = rxq->rq_ci;
-			consumed_strd = rxq->consumed_strd;
-			break;
-		}
 		byte_cnt = ret;
 		len = (byte_cnt & MLX5_MPRQ_LEN_MASK) >> MLX5_MPRQ_LEN_SHIFT;
 		MLX5_ASSERT((int)len >= (rxq->crc_present << 2));
diff --git a/drivers/net/mlx5/mlx5_rxtx.h b/drivers/net/mlx5/mlx5_rxtx.h
index 7fa471a651..b5fe330a71 100644
--- a/drivers/net/mlx5/mlx5_rxtx.h
+++ b/drivers/net/mlx5/mlx5_rxtx.h
@@ -91,6 +91,7 @@ enum mlx5_rxq_err_state {
 	MLX5_RXQ_ERR_STATE_NO_ERROR = 0,
 	MLX5_RXQ_ERR_STATE_NEED_RESET,
 	MLX5_RXQ_ERR_STATE_NEED_READY,
+	MLX5_RXQ_ERR_STATE_IGNORE,
 };
 
 enum mlx5_rqx_code {
@@ -427,8 +428,8 @@ void mlx5_set_cksum_table(void);
 void mlx5_set_swp_types_table(void);
 uint16_t mlx5_rx_burst(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n);
 void mlx5_rxq_initialize(struct mlx5_rxq_data *rxq);
-__rte_noinline int mlx5_rx_err_handle(struct mlx5_rxq_data *rxq,
-				      uint8_t vec, uint16_t err_n);
+__rte_noinline int mlx5_rx_err_handle(struct mlx5_rxq_data *rxq, uint8_t vec,
+				      uint16_t err_n, uint16_t *skip_cnt);
 void mlx5_mprq_buf_free_cb(void *addr, void *opaque);
 void mlx5_mprq_buf_free(struct mlx5_mprq_buf *buf);
 uint16_t mlx5_rx_burst_mprq(void *dpdk_rxq, struct rte_mbuf **pkts,
diff --git a/drivers/net/mlx5/mlx5_rxtx_vec.c b/drivers/net/mlx5/mlx5_rxtx_vec.c
index ca0f585863..4a1dc3c3e1 100644
--- a/drivers/net/mlx5/mlx5_rxtx_vec.c
+++ b/drivers/net/mlx5/mlx5_rxtx_vec.c
@@ -50,6 +50,7 @@ rxq_handle_pending_error(struct mlx5_rxq_data *rxq, struct rte_mbuf **pkts,
 			 uint16_t pkts_n)
 {
 	uint16_t n = 0;
+	uint16_t skip_cnt;
 	unsigned int i;
 #ifdef MLX5_PMD_SOFT_COUNTERS
 	uint32_t err_bytes = 0;
@@ -73,7 +74,7 @@ rxq_handle_pending_error(struct mlx5_rxq_data *rxq, struct rte_mbuf **pkts,
 		rxq->stats.ipackets -= (pkts_n - n);
 		rxq->stats.ibytes -= err_bytes;
 #endif
-	mlx5_rx_err_handle(rxq, 1, pkts_n);
+	mlx5_rx_err_handle(rxq, 1, pkts_n, &skip_cnt);
 	return n;
 }
-- 
2.39.1

---
  Diff of the applied patch vs upstream commit (please double-check if non-empty:
---
--- -	2023-02-23 09:36:30.351277669 +0000
+++ 0053-net-mlx5-ignore-non-critical-syndromes-for-Rx-queue.patch	2023-02-23 09:36:28.318171634 +0000
@@ -1 +1 @@
-From aa67ed3084588e6ca12e9709a6cab021f0ffeba7 Mon Sep 17 00:00:00 2001
+From aad5672767479652695f199cdb26dcdae7e5f5e6 Mon Sep 17 00:00:00 2001
@@ -5,0 +6,2 @@
+[ upstream commit aa67ed3084588e6ca12e9709a6cab021f0ffeba7 ]
+
@@ -14 +15,0 @@
-Cc: stable@dpdk.org
@@ -19,2 +20,2 @@
- drivers/net/mlx5/mlx5_rx.c | 123 ++++++++++++++++++++++++-------
- drivers/net/mlx5/mlx5_rx.h | 5 +-
+ drivers/net/mlx5/mlx5_rxtx.c | 123 ++++++++++++++++++++++++-------
+ drivers/net/mlx5/mlx5_rxtx.h | 5 +-
@@ -24,5 +25,5 @@
-diff --git a/drivers/net/mlx5/mlx5_rx.c b/drivers/net/mlx5/mlx5_rx.c
-index 7612d15f01..99a08ef5f1 100644
---- a/drivers/net/mlx5/mlx5_rx.c
-+++ b/drivers/net/mlx5/mlx5_rx.c
-@@ -39,7 +39,8 @@ rxq_cq_to_pkt_type(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cqe,
+diff --git a/drivers/net/mlx5/mlx5_rxtx.c b/drivers/net/mlx5/mlx5_rxtx.c
+index 1422961d62..f7c8b8c076 100644
+--- a/drivers/net/mlx5/mlx5_rxtx.c
++++ b/drivers/net/mlx5/mlx5_rxtx.c
+@@ -86,7 +86,8 @@ rxq_cq_to_pkt_type(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cqe,
@@ -38,2 +39,2 @@
-@@ -408,10 +409,14 @@ mlx5_rxq_initialize(struct mlx5_rxq_data *rxq)
- 	*rxq->rq_db = rte_cpu_to_be_32(rxq->rq_ci);
+@@ -983,10 +984,14 @@ mlx5_queue_state_modify(struct rte_eth_dev *dev,
+ 	return ret;
@@ -54 +55 @@
-@@ -429,10 +434,14 @@ mlx5_rxq_initialize(struct mlx5_rxq_data *rxq)
+@@ -1004,10 +1009,14 @@ mlx5_queue_state_modify(struct rte_eth_dev *dev,
@@ -71 +72 @@
-@@ -447,14 +456,35 @@ mlx5_rx_err_handle(struct mlx5_rxq_data *rxq, uint8_t vec, uint16_t err_n)
+@@ -1022,14 +1031,35 @@ mlx5_rx_err_handle(struct mlx5_rxq_data *rxq, uint8_t vec, uint16_t err_n)
@@ -108 +109 @@
-@@ -546,6 +576,7 @@ mlx5_rx_err_handle(struct mlx5_rxq_data *rxq, uint8_t vec, uint16_t err_n)
+@@ -1122,6 +1152,7 @@ mlx5_rx_err_handle(struct mlx5_rxq_data *rxq, uint8_t vec, uint16_t err_n)
@@ -116 +117 @@
-@@ -565,19 +596,24 @@ mlx5_rx_err_handle(struct mlx5_rxq_data *rxq, uint8_t vec, uint16_t err_n)
+@@ -1141,19 +1172,24 @@ mlx5_rx_err_handle(struct mlx5_rxq_data *rxq, uint8_t vec, uint16_t err_n)
@@ -147 +148 @@
-@@ -626,7 +662,6 @@ mlx5_rx_poll_len(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cqe,
+@@ -1202,7 +1238,6 @@ mlx5_rx_poll_len(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cqe,
@@ -155 +156 @@
-@@ -634,10 +669,12 @@ mlx5_rx_poll_len(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cqe,
+@@ -1210,10 +1245,12 @@ mlx5_rx_poll_len(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cqe,
@@ -172 +173 @@
-@@ -690,8 +727,15 @@ mlx5_rx_poll_len(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cqe,
+@@ -1266,8 +1303,15 @@ mlx5_rx_poll_len(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cqe,
@@ -188 +189 @@
-@@ -843,6 +887,7 @@ mlx5_rx_burst(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n)
+@@ -1418,6 +1462,7 @@ mlx5_rx_burst(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n)
@@ -196 +197 @@
-@@ -881,11 +926,24 @@ mlx5_rx_burst(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n)
+@@ -1456,11 +1501,24 @@ mlx5_rx_burst(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n)
@@ -225 +226 @@
-@@ -1095,6 +1153,7 @@ mlx5_rx_burst_mprq(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n)
+@@ -1684,6 +1742,7 @@ mlx5_rx_burst_mprq(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n)
@@ -233 +234 @@
-@@ -1107,14 +1166,26 @@ mlx5_rx_burst_mprq(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n)
+@@ -1696,14 +1755,26 @@ mlx5_rx_burst_mprq(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n)
@@ -266,5 +267,5 @@
-diff --git a/drivers/net/mlx5/mlx5_rx.h b/drivers/net/mlx5/mlx5_rx.h
-index 4ba53ebc48..6b42e27c89 100644
---- a/drivers/net/mlx5/mlx5_rx.h
-+++ b/drivers/net/mlx5/mlx5_rx.h
-@@ -62,6 +62,7 @@ enum mlx5_rxq_err_state {
+diff --git a/drivers/net/mlx5/mlx5_rxtx.h b/drivers/net/mlx5/mlx5_rxtx.h
+index 7fa471a651..b5fe330a71 100644
+--- a/drivers/net/mlx5/mlx5_rxtx.h
++++ b/drivers/net/mlx5/mlx5_rxtx.h
+@@ -91,6 +91,7 @@ enum mlx5_rxq_err_state {
@@ -278,2 +279,2 @@
-@@ -286,8 +287,8 @@ int mlx5_hrxq_modify(struct rte_eth_dev *dev, uint32_t hxrq_idx,
-
+@@ -427,8 +428,8 @@ void mlx5_set_cksum_table(void);
+ void mlx5_set_swp_types_table(void);
@@ -285,0 +287 @@
+ void mlx5_mprq_buf_free_cb(void *addr, void *opaque);
@@ -288 +289,0 @@
- 			    uint16_t pkts_n);
@@ -290 +291 @@
-index c6be2be763..667475a93e 100644
+index ca0f585863..4a1dc3c3e1 100644
@@ -293 +294 @@
-@@ -51,6 +51,7 @@ rxq_handle_pending_error(struct mlx5_rxq_data *rxq, struct rte_mbuf **pkts,
+@@ -50,6 +50,7 @@ rxq_handle_pending_error(struct mlx5_rxq_data *rxq, struct rte_mbuf **pkts,
@@ -301 +302 @@
-@@ -74,7 +75,7 @@ rxq_handle_pending_error(struct mlx5_rxq_data *rxq, struct rte_mbuf **pkts,
+@@ -73,7 +74,7 @@ rxq_handle_pending_error(struct mlx5_rxq_data *rxq, struct rte_mbuf **pkts,
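
As an aside for reviewers: the heart of the queued change is the syndrome
triage added to mlx5_rx_err_handle(). A minimal standalone sketch of that
predicate follows; the MLX5_CQE_SYNDROME_* names match the driver, but the
enum values here are illustrative placeholders and rx_syndrome_is_critical()
is a hypothetical helper for explanation only, not driver code.

#include <stdbool.h>
#include <stdint.h>

/* Illustrative values; the real definitions live in the mlx5 PRM header. */
enum mlx5_cqe_syndrome {
	MLX5_CQE_SYNDROME_LOCAL_QP_OP_ERR = 0x02,
	MLX5_CQE_SYNDROME_LOCAL_PROT_ERR = 0x04,
	MLX5_CQE_SYNDROME_WR_FLUSH_ERR = 0x05,
};

/*
 * Mirrors the check the patch adds to mlx5_rx_err_handle(): only these
 * three syndromes trigger a Rx queue reset; any other error CQE (e.g.
 * LOCAL_LENGTH_ERR) is counted via skip_cnt and Rx resumes with the
 * next packet instead of resetting the queue.
 */
static bool
rx_syndrome_is_critical(uint8_t syndrome)
{
	return syndrome == MLX5_CQE_SYNDROME_LOCAL_QP_OP_ERR ||
	       syndrome == MLX5_CQE_SYNDROME_LOCAL_PROT_ERR ||
	       syndrome == MLX5_CQE_SYNDROME_WR_FLUSH_ERR;
}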