From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: <dev-bounces@dpdk.org>
Received: from dpdk.org (dpdk.org [92.243.14.124])
	by inbox.dpdk.org (Postfix) with ESMTP id C4657A0562;
	Thu,  2 Apr 2020 20:12:23 +0200 (CEST)
Received: from [92.243.14.124] (localhost [127.0.0.1])
	by dpdk.org (Postfix) with ESMTP id EB7E81BECC;
	Thu,  2 Apr 2020 20:11:56 +0200 (CEST)
Received: from mellanox.co.il (mail-il-dmz.mellanox.com [193.47.165.129])
 by dpdk.org (Postfix) with ESMTP id C58A01BE99
 for <dev@dpdk.org>; Thu,  2 Apr 2020 20:11:53 +0200 (CEST)
Received: from Internal Mail-Server by MTLPINE1 (envelope-from
 akozyrev@mellanox.com)
 with ESMTPS (AES256-SHA encrypted); 2 Apr 2020 21:11:49 +0300
Received: from pegasus02.mtr.labs.mlnx. (pegasus02.mtr.labs.mlnx
 [10.210.16.122])
 by labmailer.mlnx (8.13.8/8.13.8) with ESMTP id 032IBnQu028687;
 Thu, 2 Apr 2020 21:11:49 +0300
From: Alexander Kozyrev <akozyrev@mellanox.com>
To: dev@dpdk.org
Cc: rasland@mellanox.com, matan@mellanox.com, viacheslavo@mellanox.com
Date: Thu,  2 Apr 2020 18:11:48 +0000
Message-Id: <1585851108-485-4-git-send-email-akozyrev@mellanox.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1585851108-485-1-git-send-email-akozyrev@mellanox.com>
References: <1585691559-17409-1-git-send-email-akozyrev@mellanox.com>
 <1585851108-485-1-git-send-email-akozyrev@mellanox.com>
Subject: [dpdk-dev] [PATCH 3/3] net/mlx5: add multi-segment packets in MPRQ
	mode
X-BeenThere: dev@dpdk.org
X-Mailman-Version: 2.1.15
Precedence: list
List-Id: DPDK patches and discussions <dev.dpdk.org>
List-Unsubscribe: <https://mails.dpdk.org/options/dev>,
 <mailto:dev-request@dpdk.org?subject=unsubscribe>
List-Archive: <http://mails.dpdk.org/archives/dev/>
List-Post: <mailto:dev@dpdk.org>
List-Help: <mailto:dev-request@dpdk.org?subject=help>
List-Subscribe: <https://mails.dpdk.org/listinfo/dev>,
 <mailto:dev-request@dpdk.org?subject=subscribe>
Errors-To: dev-bounces@dpdk.org
Sender: "dev" <dev-bounces@dpdk.org>

The multi-stride operations now allow reducing the stride size
while still supporting jumbo frames. That means mbufs may be
configured with a size smaller than the whole received packet.
This is not an issue during normal MPRQ operation, since external
buffers are attached to the mbufs instead of copying the data into
them. But it is not the case in the "emergency mode", when every
packet has to be copied because no more external mbufs are
available. Assemble a multi-segment packet to overcome this issue
if scatter mode is enabled, and drop the packet otherwise.

Signed-off-by: Alexander Kozyrev <akozyrev@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
---
 drivers/net/mlx5/mlx5_rxtx.c | 47 ++++++++++++++++++++++++++++++++++++--------
 1 file changed, 39 insertions(+), 8 deletions(-)
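
Note for reviewers: below is a minimal, self-contained sketch of the
copy fallback described in the commit message, shown outside the diff
context. The helper name copy_stride_to_mbuf_chain() and its signature
are made up for illustration only; the real code sits in the MPRQ
receive burst loop and uses the driver's DATA_LEN()/NEXT()/
SET_DATA_OFF()/NB_SEGS() accessor macros, with PKT_LEN() updated later
in the same loop.

#include <rte_common.h>
#include <rte_mbuf.h>
#include <rte_memcpy.h>

/*
 * Copy "len" bytes from a stride at "addr" into "pkt", chaining extra
 * mbufs allocated from "mp" when the data does not fit into a single
 * segment. Returns 0 on success, -1 on mbuf allocation failure (the
 * caller is expected to account the packet as rx_nombuf and skip it).
 */
static int
copy_stride_to_mbuf_chain(struct rte_mbuf *pkt, const void *addr,
			  uint32_t len, struct rte_mempool *mp)
{
	struct rte_mbuf *prev = pkt;
	uint32_t seg_len = RTE_MIN(rte_pktmbuf_tailroom(pkt), len);
	uint32_t rem_len = len - seg_len;

	/* Fill the first segment up to its tailroom. */
	rte_memcpy(rte_pktmbuf_mtod(pkt, void *), addr, seg_len);
	pkt->data_len = seg_len;
	while (rem_len > 0) {
		struct rte_mbuf *next = rte_pktmbuf_alloc(mp);

		if (next == NULL) {
			/* Frees every segment chained so far. */
			rte_pktmbuf_free(pkt);
			return -1;
		}
		prev->next = next;
		/* No headroom needed in continuation segments. */
		next->data_off = 0;
		addr = RTE_PTR_ADD(addr, seg_len);
		seg_len = RTE_MIN(rte_pktmbuf_tailroom(next), rem_len);
		rte_memcpy(rte_pktmbuf_mtod(next, void *), addr, seg_len);
		next->data_len = seg_len;
		rem_len -= seg_len;
		prev = next;
		pkt->nb_segs++;
	}
	pkt->pkt_len = len;
	return 0;
}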

diff --git a/drivers/net/mlx5/mlx5_rxtx.c b/drivers/net/mlx5/mlx5_rxtx.c
index 4c27952..7ce3732 100644
--- a/drivers/net/mlx5/mlx5_rxtx.c
+++ b/drivers/net/mlx5/mlx5_rxtx.c
@@ -1734,22 +1734,52 @@ enum mlx5_txcmp_code {
 		 * Memcpy packets to the target mbuf if:
 		 * - The size of packet is smaller than mprq_max_memcpy_len.
 		 * - Out of buffer in the Mempool for Multi-Packet RQ.
-		 * - There is no space for a headroom and scatter is disabled.
+		 * - The packet's stride overlaps a headroom and scatter is off.
 		 */
 		if (len <= rxq->mprq_max_memcpy_len ||
 		    rxq->mprq_repl == NULL ||
 		    (hdrm_overlap > 0 && !rxq->strd_scatter_en)) {
-			/*
-			 * When memcpy'ing packet due to out-of-buffer, the
-			 * packet must be smaller than the target mbuf.
-			 */
-			if (unlikely(rte_pktmbuf_tailroom(pkt) < len)) {
+			if (likely(rte_pktmbuf_tailroom(pkt) >= len)) {
+				rte_memcpy(rte_pktmbuf_mtod(pkt, void *),
+					   addr, len);
+				DATA_LEN(pkt) = len;
+			} else if (rxq->strd_scatter_en) {
+				struct rte_mbuf *prev = pkt;
+				uint32_t seg_len =
+					RTE_MIN(rte_pktmbuf_tailroom(pkt), len);
+				uint32_t rem_len = len - seg_len;
+
+				rte_memcpy(rte_pktmbuf_mtod(pkt, void *),
+					   addr, seg_len);
+				DATA_LEN(pkt) = seg_len;
+				while (rem_len) {
+					struct rte_mbuf *next =
+						rte_pktmbuf_alloc(rxq->mp);
+
+					if (unlikely(next == NULL)) {
+						rte_pktmbuf_free(pkt);
+						++rxq->stats.rx_nombuf;
+						goto out;
+					}
+					NEXT(prev) = next;
+					SET_DATA_OFF(next, 0);
+					addr = RTE_PTR_ADD(addr, seg_len);
+					seg_len = RTE_MIN
+						(rte_pktmbuf_tailroom(next),
+						 rem_len);
+					rte_memcpy
+						(rte_pktmbuf_mtod(next, void *),
+						 addr, seg_len);
+					DATA_LEN(next) = seg_len;
+					rem_len -= seg_len;
+					prev = next;
+					++NB_SEGS(pkt);
+				}
+			} else {
 				rte_pktmbuf_free_seg(pkt);
 				++rxq->stats.idropped;
 				continue;
 			}
-			rte_memcpy(rte_pktmbuf_mtod(pkt, void *), addr, len);
-			DATA_LEN(pkt) = len;
 		} else {
 			rte_iova_t buf_iova;
 			struct rte_mbuf_ext_shared_info *shinfo;
@@ -1826,6 +1856,7 @@ enum mlx5_txcmp_code {
 		*(pkts++) = pkt;
 		++i;
 	}
+out:
 	/* Update the consumer indexes. */
 	rxq->consumed_strd = consumed_strd;
 	rte_cio_wmb();
-- 
1.8.3.1