From mboxrd@z Thu Jan  1 00:00:00 1970
From: luca.boccassi@gmail.com
To: Alexander Kozyrev
Cc: Viacheslav Ovsiienko, Matan Azrad, dpdk stable
Date: Tue, 19 May 2020 14:03:41 +0100
Message-Id: <20200519130549.112823-86-luca.boccassi@gmail.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200519130549.112823-1-luca.boccassi@gmail.com>
References: <20200519125804.104349-1-luca.boccassi@gmail.com> <20200519130549.112823-1-luca.boccassi@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Subject: [dpdk-stable] patch 'net/mlx5: enable MPRQ multi-stride operations' has been queued to stable release 19.11.3
List-Id: patches for DPDK stable branches
Errors-To: stable-bounces@dpdk.org
Sender: "stable"

Hi,

FYI, your patch has been queued to stable release 19.11.3.

Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 05/21/20. So please
shout if anyone has objections.

Also note that after the patch there's a diff of the upstream commit vs
the patch applied to the branch. This will indicate if there was any
rebasing needed to apply to the stable branch. If there were code changes
for rebasing (i.e. not only metadata diffs), please double check that the
rebase was correctly done.

Thanks.
Luca Boccassi

---
>From bbdd1a9c79b0fcaa80b6111c1ac4fe63f8f30d27 Mon Sep 17 00:00:00 2001
From: Alexander Kozyrev
Date: Thu, 9 Apr 2020 22:23:52 +0000
Subject: [PATCH] net/mlx5: enable MPRQ multi-stride operations

[ upstream commit bd0d5930bf567b41c634b5a7ef0fe76c167ef3b6 ]

MPRQ feature should be updated to allow a packet to be received into
multiple strides in order to support the MTU exceeding 8KB. Special
care is needed to prevent the headroom corruption in the multi-stride
mode since the headroom space is borrowed by the PMD from the tail of
the preceding stride. Copy the whole packet into a separate mbuf in
this case or just the overlapping data if the Rx scattering is
supported by an application.

Signed-off-by: Alexander Kozyrev
Acked-by: Viacheslav Ovsiienko
Acked-by: Matan Azrad
---
 drivers/net/mlx5/mlx5_rxq.c  | 28 ++++-----------
 drivers/net/mlx5/mlx5_rxtx.c | 68 +++++++++++++++---------------------
 drivers/net/mlx5/mlx5_rxtx.h |  2 +-
 3 files changed, 36 insertions(+), 62 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index a3d62bdd81..a4071f891e 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -1772,7 +1772,6 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 	unsigned int mprq_stride_size;
 	unsigned int mprq_stride_cap;
 	struct mlx5_dev_config *config = &priv->config;
-	unsigned int strd_headroom_en;
 	/*
 	 * Always allocate extra slots, even if eventually
 	 * the vector Rx will not be used.
@@ -1818,26 +1817,11 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 	tmpl->socket = socket;
 	if (dev->data->dev_conf.intr_conf.rxq)
 		tmpl->irq = 1;
-	/*
-	 * LRO packet may consume all the stride memory, hence we cannot
-	 * guaranty head-room near the packet memory in the stride.
-	 * In this case scatter is, for sure, enabled and an empty mbuf may be
-	 * added in the start for the head-room.
-	 */
-	if (lro_on_queue && RTE_PKTMBUF_HEADROOM > 0 &&
-	    non_scatter_min_mbuf_size > mb_len) {
-		strd_headroom_en = 0;
-		mprq_stride_size = RTE_MIN(max_rx_pkt_len,
-					   1u << config->mprq.max_stride_size_n);
-	} else {
-		strd_headroom_en = 1;
-		mprq_stride_size = non_scatter_min_mbuf_size;
-	}
 	mprq_stride_nums = config->mprq.stride_num_n ?
 		config->mprq.stride_num_n : MLX5_MPRQ_STRIDE_NUM_N;
-	mprq_stride_size = (mprq_stride_size <=
-			(1U << config->mprq.max_stride_size_n)) ?
-		log2above(mprq_stride_size) : MLX5_MPRQ_STRIDE_SIZE_N;
+	mprq_stride_size = non_scatter_min_mbuf_size <=
+		(1U << config->mprq.max_stride_size_n) ?
+		log2above(non_scatter_min_mbuf_size) : MLX5_MPRQ_STRIDE_SIZE_N;
 	mprq_stride_cap = (config->mprq.stride_num_n ?
 		(1U << config->mprq.stride_num_n) : (1U << mprq_stride_nums)) *
 			(config->mprq.stride_size_n ?
@@ -1854,8 +1838,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 	 * Otherwise, enable Rx scatter if necessary.
 	 */
 	if (mprq_en && desc > (1U << mprq_stride_nums) &&
-	    (non_scatter_min_mbuf_size -
-	     (lro_on_queue ? RTE_PKTMBUF_HEADROOM : 0) <=
+	    (non_scatter_min_mbuf_size <=
 	      (1U << config->mprq.max_stride_size_n) ||
 	      (config->mprq.stride_size_n &&
 	       non_scatter_min_mbuf_size <= mprq_stride_cap))) {
@@ -1868,7 +1851,8 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 		tmpl->rxq.strd_sz_n = config->mprq.stride_size_n ?
 				config->mprq.stride_size_n : mprq_stride_size;
 		tmpl->rxq.strd_shift_en = MLX5_MPRQ_TWO_BYTE_SHIFT;
-		tmpl->rxq.strd_headroom_en = strd_headroom_en;
+		tmpl->rxq.strd_scatter_en =
+				!!(offloads & DEV_RX_OFFLOAD_SCATTER);
 		tmpl->rxq.mprq_max_memcpy_len = RTE_MIN(first_mb_free_size,
 				config->mprq.max_memcpy_len);
 		max_lro_size = RTE_MIN(max_rx_pkt_len,
diff --git a/drivers/net/mlx5/mlx5_rxtx.c b/drivers/net/mlx5/mlx5_rxtx.c
index 905a84d4dc..c2007282f6 100644
--- a/drivers/net/mlx5/mlx5_rxtx.c
+++ b/drivers/net/mlx5/mlx5_rxtx.c
@@ -1570,21 +1570,20 @@ mlx5_rx_burst_mprq(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n)
 	unsigned int i = 0;
 	uint32_t rq_ci = rxq->rq_ci;
 	uint16_t consumed_strd = rxq->consumed_strd;
-	uint16_t headroom_sz = rxq->strd_headroom_en * RTE_PKTMBUF_HEADROOM;
 	struct mlx5_mprq_buf *buf = (*rxq->mprq_bufs)[rq_ci & wq_mask];
 
 	while (i < pkts_n) {
 		struct rte_mbuf *pkt;
 		void *addr;
 		int ret;
-		unsigned int len;
+		uint32_t len;
 		uint16_t strd_cnt;
 		uint16_t strd_idx;
 		uint32_t offset;
 		uint32_t byte_cnt;
+		int32_t hdrm_overlap;
 		volatile struct mlx5_mini_cqe8 *mcqe = NULL;
 		uint32_t rss_hash_res = 0;
-		uint8_t lro_num_seg;
 
 		if (consumed_strd == strd_n) {
 			/* Replace WQE only if the buffer is still in use. */
@@ -1630,18 +1629,6 @@ mlx5_rx_burst_mprq(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n)
 		}
 		assert(strd_idx < strd_n);
 		assert(!((rte_be_to_cpu_16(cqe->wqe_id) ^ rq_ci) & wq_mask));
-		lro_num_seg = cqe->lro_num_seg;
-		/*
-		 * Currently configured to receive a packet per a stride. But if
-		 * MTU is adjusted through kernel interface, device could
-		 * consume multiple strides without raising an error. In this
-		 * case, the packet should be dropped because it is bigger than
-		 * the max_rx_pkt_len.
-		 */
-		if (unlikely(!lro_num_seg && strd_cnt > 1)) {
-			++rxq->stats.idropped;
-			continue;
-		}
 		pkt = rte_pktmbuf_alloc(rxq->mp);
 		if (unlikely(pkt == NULL)) {
 			++rxq->stats.rx_nombuf;
@@ -1653,12 +1640,16 @@ mlx5_rx_burst_mprq(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n)
 			len -= RTE_ETHER_CRC_LEN;
 		offset = strd_idx * strd_sz + strd_shift;
 		addr = RTE_PTR_ADD(mlx5_mprq_buf_addr(buf, strd_n), offset);
+		hdrm_overlap = len + RTE_PKTMBUF_HEADROOM - strd_cnt * strd_sz;
 		/*
 		 * Memcpy packets to the target mbuf if:
 		 * - The size of packet is smaller than mprq_max_memcpy_len.
 		 * - Out of buffer in the Mempool for Multi-Packet RQ.
+		 * - There is no space for a headroom and scatter is disabled.
 		 */
-		if (len <= rxq->mprq_max_memcpy_len || rxq->mprq_repl == NULL) {
+		if (len <= rxq->mprq_max_memcpy_len ||
+		    rxq->mprq_repl == NULL ||
+		    (hdrm_overlap > 0 && !rxq->strd_scatter_en)) {
 			/*
 			 * When memcpy'ing packet due to out-of-buffer, the
 			 * packet must be smaller than the target mbuf.
@@ -1680,7 +1671,7 @@ mlx5_rx_burst_mprq(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n)
 				rte_atomic16_add_return(&buf->refcnt, 1);
 			assert((uint16_t)rte_atomic16_read(&buf->refcnt) <=
 				strd_n + 1);
-			buf_addr = RTE_PTR_SUB(addr, headroom_sz);
+			buf_addr = RTE_PTR_SUB(addr, RTE_PKTMBUF_HEADROOM);
 			/*
 			 * MLX5 device doesn't use iova but it is necessary in a
 			 * case where the Rx packet is transmitted via a
@@ -1699,43 +1690,42 @@ mlx5_rx_burst_mprq(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n)
 			rte_pktmbuf_attach_extbuf(pkt, buf_addr, buf_iova,
 						  buf_len, shinfo);
 			/* Set mbuf head-room. */
-			pkt->data_off = headroom_sz;
+			SET_DATA_OFF(pkt, RTE_PKTMBUF_HEADROOM);
 			assert(pkt->ol_flags == EXT_ATTACHED_MBUF);
-			/*
-			 * Prevent potential overflow due to MTU change through
-			 * kernel interface.
-			 */
-			if (unlikely(rte_pktmbuf_tailroom(pkt) < len)) {
-				rte_pktmbuf_free_seg(pkt);
-				++rxq->stats.idropped;
-				continue;
-			}
+			assert(rte_pktmbuf_tailroom(pkt) <
+				len - (hdrm_overlap > 0 ? hdrm_overlap : 0));
 			DATA_LEN(pkt) = len;
 			/*
-			 * LRO packet may consume all the stride memory, in this
-			 * case packet head-room space is not guaranteed so must
-			 * to add an empty mbuf for the head-room.
+			 * Copy the last fragment of a packet (up to headroom
+			 * size bytes) in case there is a stride overlap with
+			 * a next packet's headroom. Allocate a separate mbuf
+			 * to store this fragment and link it. Scatter is on.
 			 */
-			if (!rxq->strd_headroom_en) {
-				struct rte_mbuf *headroom_mbuf =
-						rte_pktmbuf_alloc(rxq->mp);
+			if (hdrm_overlap > 0) {
+				assert(rxq->strd_scatter_en);
+				struct rte_mbuf *seg =
+					rte_pktmbuf_alloc(rxq->mp);
 
-				if (unlikely(headroom_mbuf == NULL)) {
+				if (unlikely(seg == NULL)) {
 					rte_pktmbuf_free_seg(pkt);
 					++rxq->stats.rx_nombuf;
 					break;
 				}
-				PORT(pkt) = rxq->port_id;
-				NEXT(headroom_mbuf) = pkt;
-				pkt = headroom_mbuf;
+				SET_DATA_OFF(seg, 0);
+				rte_memcpy(rte_pktmbuf_mtod(seg, void *),
+					RTE_PTR_ADD(addr, len - hdrm_overlap),
+					hdrm_overlap);
+				DATA_LEN(seg) = hdrm_overlap;
+				DATA_LEN(pkt) = len - hdrm_overlap;
+				NEXT(pkt) = seg;
 				NB_SEGS(pkt) = 2;
 			}
 		}
 		rxq_cq_to_mbuf(rxq, pkt, cqe, rss_hash_res);
-		if (lro_num_seg > 1) {
+		if (cqe->lro_num_seg > 1) {
 			mlx5_lro_update_hdr(addr, cqe, len);
 			pkt->ol_flags |= PKT_RX_LRO;
-			pkt->tso_segsz = strd_sz;
+			pkt->tso_segsz = len / cqe->lro_num_seg;
 		}
 		PKT_LEN(pkt) = len;
 		PORT(pkt) = rxq->port_id;
diff --git a/drivers/net/mlx5/mlx5_rxtx.h b/drivers/net/mlx5/mlx5_rxtx.h
index e362b4afe0..aa6fabbd3d 100644
--- a/drivers/net/mlx5/mlx5_rxtx.h
+++ b/drivers/net/mlx5/mlx5_rxtx.h
@@ -114,7 +114,7 @@ struct mlx5_rxq_data {
 	unsigned int strd_sz_n:4; /* Log 2 of stride size. */
 	unsigned int strd_shift_en:1; /* Enable 2bytes shift on a stride. */
 	unsigned int err_state:2; /* enum mlx5_rxq_err_state. */
-	unsigned int strd_headroom_en:1; /* Enable mbuf headroom in MPRQ. */
+	unsigned int strd_scatter_en:1; /* Scattered packets from a stride. */
 	unsigned int lro:1; /* Enable LRO. */
 	unsigned int :1; /* Remaining bits. */
 	volatile uint32_t *rq_db;
-- 
2.20.1

---
  Diff of the applied patch vs upstream commit (please double-check if non-empty:
---
--- -	2020-05-19 14:04:47.950748412 +0100
+++ 0086-net-mlx5-enable-MPRQ-multi-stride-operations.patch	2020-05-19 14:04:44.268649315 +0100
@@ -1,8 +1,10 @@
-From bd0d5930bf567b41c634b5a7ef0fe76c167ef3b6 Mon Sep 17 00:00:00 2001
+From bbdd1a9c79b0fcaa80b6111c1ac4fe63f8f30d27 Mon Sep 17 00:00:00 2001
 From: Alexander Kozyrev
 Date: Thu, 9 Apr 2020 22:23:52 +0000
 Subject: [PATCH] net/mlx5: enable MPRQ multi-stride operations
 
+[ upstream commit bd0d5930bf567b41c634b5a7ef0fe76c167ef3b6 ]
+
 MPRQ feature should be updated to allow a packet to be received into
 multiple strides in order to support the MTU exceeding 8KB. Special
 care is needed to prevent the headroom corruption in the
@@ -11,8 +13,6 @@
 a separate mbuf in this case or just the overlapping data if the Rx
 scattering is supported by an application.
 
-Cc: stable@dpdk.org
-
 Signed-off-by: Alexander Kozyrev
 Acked-by: Viacheslav Ovsiienko
 Acked-by: Matan Azrad
@@ -23,10 +23,10 @@
  3 files changed, 36 insertions(+), 62 deletions(-)
 
 diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
-index 1b57f00cb2..1cc9f1dba8 100644
+index a3d62bdd81..a4071f891e 100644
 --- a/drivers/net/mlx5/mlx5_rxq.c
 +++ b/drivers/net/mlx5/mlx5_rxq.c
-@@ -1797,7 +1797,6 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
+@@ -1772,7 +1772,6 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
  	unsigned int mprq_stride_size;
  	unsigned int mprq_stride_cap;
  	struct mlx5_dev_config *config = &priv->config;
@@ -34,7 +34,7 @@
  	/*
  	 * Always allocate extra slots, even if eventually
  	 * the vector Rx will not be used.
-@@ -1843,26 +1842,11 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
+@@ -1818,26 +1817,11 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
  	tmpl->socket = socket;
  	if (dev->data->dev_conf.intr_conf.rxq)
  		tmpl->irq = 1;
@@ -64,7 +64,7 @@
  	mprq_stride_cap = (config->mprq.stride_num_n ?
  		(1U << config->mprq.stride_num_n) : (1U << mprq_stride_nums)) *
  			(config->mprq.stride_size_n ?
-@@ -1879,8 +1863,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
+@@ -1854,8 +1838,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
  	 * Otherwise, enable Rx scatter if necessary.
  	 */
  	if (mprq_en && desc > (1U << mprq_stride_nums) &&
@@ -74,7 +74,7 @@
  	      (1U << config->mprq.max_stride_size_n) ||
  	      (config->mprq.stride_size_n &&
  	       non_scatter_min_mbuf_size <= mprq_stride_cap))) {
-@@ -1893,7 +1876,8 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
+@@ -1868,7 +1851,8 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
  		tmpl->rxq.strd_sz_n = config->mprq.stride_size_n ?
  				config->mprq.stride_size_n : mprq_stride_size;
  		tmpl->rxq.strd_shift_en = MLX5_MPRQ_TWO_BYTE_SHIFT;
@@ -85,10 +85,10 @@
  				config->mprq.max_memcpy_len);
  		max_lro_size = RTE_MIN(max_rx_pkt_len,
 diff --git a/drivers/net/mlx5/mlx5_rxtx.c b/drivers/net/mlx5/mlx5_rxtx.c
-index f3bf763769..4c279520d1 100644
+index 905a84d4dc..c2007282f6 100644
 --- a/drivers/net/mlx5/mlx5_rxtx.c
 +++ b/drivers/net/mlx5/mlx5_rxtx.c
-@@ -1658,21 +1658,20 @@ mlx5_rx_burst_mprq(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n)
+@@ -1570,21 +1570,20 @@ mlx5_rx_burst_mprq(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n)
  	unsigned int i = 0;
  	uint32_t rq_ci = rxq->rq_ci;
  	uint16_t consumed_strd = rxq->consumed_strd;
@@ -112,10 +112,10 @@
  		if (consumed_strd == strd_n) {
  			/* Replace WQE only if the buffer is still in use. */
-@@ -1719,18 +1718,6 @@ mlx5_rx_burst_mprq(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n)
- 		MLX5_ASSERT(strd_idx < strd_n);
- 		MLX5_ASSERT(!((rte_be_to_cpu_16(cqe->wqe_id) ^ rq_ci) &
- 			    wq_mask));
+@@ -1630,18 +1629,6 @@ mlx5_rx_burst_mprq(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n)
+ 		}
+ 		assert(strd_idx < strd_n);
+ 		assert(!((rte_be_to_cpu_16(cqe->wqe_id) ^ rq_ci) & wq_mask));
 -		lro_num_seg = cqe->lro_num_seg;
 -		/*
 -		 * Currently configured to receive a packet per a stride. But if
 -		 * MTU is adjusted through kernel interface, device could
 -		 * consume multiple strides without raising an error. In this
 -		 * case, the packet should be dropped because it is bigger than
 -		 * the max_rx_pkt_len.
 -		 */
 -		if (unlikely(!lro_num_seg && strd_cnt > 1)) {
 -			++rxq->stats.idropped;
 -			continue;
 -		}
@@ -131,7 +131,7 @@
 		pkt = rte_pktmbuf_alloc(rxq->mp);
 		if (unlikely(pkt == NULL)) {
 			++rxq->stats.rx_nombuf;
-@@ -1742,12 +1729,16 @@ mlx5_rx_burst_mprq(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n)
+@@ -1653,12 +1640,16 @@ mlx5_rx_burst_mprq(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n)
 			len -= RTE_ETHER_CRC_LEN;
 		offset = strd_idx * strd_sz + strd_shift;
 		addr = RTE_PTR_ADD(mlx5_mprq_buf_addr(buf, strd_n), offset);
@@ -149,22 +149,22 @@
 +		hdrm_overlap = len + RTE_PKTMBUF_HEADROOM - strd_cnt * strd_sz;
 		/*
 		 * Memcpy packets to the target mbuf if:
 		 * - The size of packet is smaller than mprq_max_memcpy_len.
 		 * - Out of buffer in the Mempool for Multi-Packet RQ.
 +		 * - There is no space for a headroom and scatter is disabled.
 		 */
 -		if (len <= rxq->mprq_max_memcpy_len || rxq->mprq_repl == NULL) {
 +		if (len <= rxq->mprq_max_memcpy_len ||
 +		    rxq->mprq_repl == NULL ||
 +		    (hdrm_overlap > 0 && !rxq->strd_scatter_en)) {
 			/*
 			 * When memcpy'ing packet due to out-of-buffer, the
 			 * packet must be smaller than the target mbuf.
-@@ -1769,7 +1760,7 @@ mlx5_rx_burst_mprq(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n)
+@@ -1680,7 +1671,7 @@ mlx5_rx_burst_mprq(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n)
 				rte_atomic16_add_return(&buf->refcnt, 1);
- 			MLX5_ASSERT((uint16_t)rte_atomic16_read(&buf->refcnt) <=
- 				    strd_n + 1);
+ 			assert((uint16_t)rte_atomic16_read(&buf->refcnt) <=
+ 				strd_n + 1);
 -			buf_addr = RTE_PTR_SUB(addr, headroom_sz);
 +			buf_addr = RTE_PTR_SUB(addr, RTE_PKTMBUF_HEADROOM);
 			/*
 			 * MLX5 device doesn't use iova but it is necessary in a
 			 * case where the Rx packet is transmitted via a
-@@ -1788,43 +1779,42 @@ mlx5_rx_burst_mprq(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n)
+@@ -1699,43 +1690,42 @@ mlx5_rx_burst_mprq(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n)
 			rte_pktmbuf_attach_extbuf(pkt, buf_addr, buf_iova,
 						  buf_len, shinfo);
 			/* Set mbuf head-room. */
 -			pkt->data_off = headroom_sz;
 +			SET_DATA_OFF(pkt, RTE_PKTMBUF_HEADROOM);
- 			MLX5_ASSERT(pkt->ol_flags == EXT_ATTACHED_MBUF);
+ 			assert(pkt->ol_flags == EXT_ATTACHED_MBUF);
 -			/*
 -			 * Prevent potential overflow due to MTU change through
 -			 * kernel interface.
 -			 */
 -			if (unlikely(rte_pktmbuf_tailroom(pkt) < len)) {
 -				rte_pktmbuf_free_seg(pkt);
 -				++rxq->stats.idropped;
 -				continue;
 -			}
-+			MLX5_ASSERT(rte_pktmbuf_tailroom(pkt) <
++			assert(rte_pktmbuf_tailroom(pkt) <
 +				len - (hdrm_overlap > 0 ? hdrm_overlap : 0));
 			DATA_LEN(pkt) = len;
 			/*
 -			 * LRO packet may consume all the stride memory, in this
 -			 * case packet head-room space is not guaranteed so must
 -			 * to add an empty mbuf for the head-room.
 +			 * Copy the last fragment of a packet (up to headroom
 +			 * size bytes) in case there is a stride overlap with
 +			 * a next packet's headroom. Allocate a separate mbuf
 +			 * to store this fragment and link it. Scatter is on.
 			 */
 -			if (!rxq->strd_headroom_en) {
 -				struct rte_mbuf *headroom_mbuf =
 -						rte_pktmbuf_alloc(rxq->mp);
 +			if (hdrm_overlap > 0) {
-+				MLX5_ASSERT(rxq->strd_scatter_en);
++				assert(rxq->strd_scatter_en);
 +				struct rte_mbuf *seg =
 +					rte_pktmbuf_alloc(rxq->mp);
 
 -				if (unlikely(headroom_mbuf == NULL)) {
 +				if (unlikely(seg == NULL)) {
 					rte_pktmbuf_free_seg(pkt);
 					++rxq->stats.rx_nombuf;
 					break;
 				}
 -				PORT(pkt) = rxq->port_id;
 -				NEXT(headroom_mbuf) = pkt;
 -				pkt = headroom_mbuf;
 +				SET_DATA_OFF(seg, 0);
 +				rte_memcpy(rte_pktmbuf_mtod(seg, void *),
 +					RTE_PTR_ADD(addr, len - hdrm_overlap),
 +					hdrm_overlap);
 +				DATA_LEN(seg) = hdrm_overlap;
 +				DATA_LEN(pkt) = len - hdrm_overlap;
 +				NEXT(pkt) = seg;
 				NB_SEGS(pkt) = 2;
 			}
 		}
 		rxq_cq_to_mbuf(rxq, pkt, cqe, rss_hash_res);
 -		if (lro_num_seg > 1) {
 +		if (cqe->lro_num_seg > 1) {
 			mlx5_lro_update_hdr(addr, cqe, len);
 			pkt->ol_flags |= PKT_RX_LRO;
 -			pkt->tso_segsz = strd_sz;
 +			pkt->tso_segsz = len / cqe->lro_num_seg;
 		}
@@ -224,10 +224,10 @@
 		PKT_LEN(pkt) = len;
 		PORT(pkt) = rxq->port_id;
 diff --git a/drivers/net/mlx5/mlx5_rxtx.h b/drivers/net/mlx5/mlx5_rxtx.h
-index 939778aa55..d155c241eb 100644
+index e362b4afe0..aa6fabbd3d 100644
 --- a/drivers/net/mlx5/mlx5_rxtx.h
 +++ b/drivers/net/mlx5/mlx5_rxtx.h
-@@ -119,7 +119,7 @@ struct mlx5_rxq_data {
+@@ -114,7 +114,7 @@ struct mlx5_rxq_data {
  	unsigned int strd_sz_n:4; /* Log 2 of stride size. */
  	unsigned int strd_shift_en:1; /* Enable 2bytes shift on a stride. */
  	unsigned int err_state:2; /* enum mlx5_rxq_err_state. */