From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: <dev-bounces@dpdk.org>
Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124])
	by inbox.dpdk.org (Postfix) with ESMTP id 72C75A04FD;
	Thu, 10 Nov 2022 11:18:05 +0100 (CET)
Received: from mails.dpdk.org (localhost [127.0.0.1])
	by mails.dpdk.org (Postfix) with ESMTP id 1AABC40150;
	Thu, 10 Nov 2022 11:18:05 +0100 (CET)
Received: from mx0b-0016f401.pphosted.com (mx0a-0016f401.pphosted.com
 [67.231.148.174])
 by mails.dpdk.org (Postfix) with ESMTP id 74B01400EF
 for <dev@dpdk.org>; Thu, 10 Nov 2022 11:18:03 +0100 (CET)
Received: from pps.filterd (m0045849.ppops.net [127.0.0.1])
 by mx0a-0016f401.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id
 2AA6refF020874; Thu, 10 Nov 2022 02:18:02 -0800
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com;
 h=from : to : cc :
 subject : date : message-id : in-reply-to : references : mime-version :
 content-transfer-encoding : content-type; s=pfpt0220;
 bh=N4C3X9fdN7DLo7OKEdbPUolyVva9mkjgvjcwySQ+pKU=;
 b=gMdOUqtfQnrmCwXOF6znUE1ndddii7AqaNtz8z3wnJ+zQ+sOIPVdcLZvFuxYlGVnrR6k
 F0bWmrKsjdJPDZvVpTEiEIVPKoVhSP7F0oPWVhWmcbP0dF1FP5Teftz+WPzRdsNpGQr/
 FFmfGDtbrrTyKfGy49USnmTk3SO9kgzhv+DCqFPJaPIMbnLydN3H3q97kRTFn+DoEA7p
 pTPLibhBrHqY9yMq/kUJtAAehKsoSgke5r4l4xaZhQlB+zOw+DIMQKgSmu+k9bx/a2T3
 ABTsIv5CoQxXndrwy68FUh0sNZBhO86MgN8HViQ0n1e9sm8skYAz0rVl0lXsCCnWOlhh Qg== 
Received: from dc5-exch02.marvell.com ([199.233.59.182])
 by mx0a-0016f401.pphosted.com (PPS) with ESMTPS id 3krvecgpu9-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT);
 Thu, 10 Nov 2022 02:18:02 -0800
Received: from DC5-EXCH01.marvell.com (10.69.176.38) by DC5-EXCH02.marvell.com
 (10.69.176.39) with Microsoft SMTP Server (TLS) id 15.0.1497.18;
 Thu, 10 Nov 2022 02:18:00 -0800
Received: from maili.marvell.com (10.69.176.80) by DC5-EXCH01.marvell.com
 (10.69.176.38) with Microsoft SMTP Server id 15.0.1497.2 via Frontend
 Transport; Thu, 10 Nov 2022 02:18:00 -0800
Received: from localhost.localdomain (unknown [10.28.36.155])
 by maili.marvell.com (Postfix) with ESMTP id C4C653F704C;
 Thu, 10 Nov 2022 02:17:57 -0800 (PST)
From: Hanumanth Pothula <hpothula@marvell.com>
To: Aman Singh <aman.deep.singh@intel.com>, Yuying Zhang
 <yuying.zhang@intel.com>
CC: <dev@dpdk.org>, <andrew.rybchenko@oktetlabs.ru>, <thomas@monjalon.net>,
 <jerinj@marvell.com>, <ndabilpuram@marvell.com>, <hpothula@marvell.com>
Subject: [PATCH v14 1/1] app/testpmd: support multiple mbuf pools per Rx queue
Date: Thu, 10 Nov 2022 15:46:31 +0530
Message-ID: <20221110101631.2451791-1-hpothula@marvell.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20221110081719.2404059-1-hpothula@marvell.com>
References: <20221110081719.2404059-1-hpothula@marvell.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-Proofpoint-ORIG-GUID: xRveO3MwBSvkm4L9yumGUXXoGjHaMdMX
X-Proofpoint-GUID: xRveO3MwBSvkm4L9yumGUXXoGjHaMdMX
X-Proofpoint-Virus-Version: vendor=baseguard
 engine=ICAP:2.0.219,Aquarius:18.0.895,Hydra:6.0.545,FMLib:17.11.122.1
 definitions=2022-11-10_07,2022-11-09_01,2022-06-22_01
X-BeenThere: dev@dpdk.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: DPDK patches and discussions <dev.dpdk.org>
List-Unsubscribe: <https://mails.dpdk.org/options/dev>,
 <mailto:dev-request@dpdk.org?subject=unsubscribe>
List-Archive: <http://mails.dpdk.org/archives/dev/>
List-Post: <mailto:dev@dpdk.org>
List-Help: <mailto:dev-request@dpdk.org?subject=help>
List-Subscribe: <https://mails.dpdk.org/listinfo/dev>,
 <mailto:dev-request@dpdk.org?subject=subscribe>
Errors-To: dev-bounces@dpdk.org

Some hardware has support for choosing a memory pool based on the
packet's size. This capability allows the PMD/NIC to pick a memory
pool whose buffers match the packet's length.

When multiple mempool support is enabled, populate the mempool array
accordingly. Also, print the name of the pool on which each packet is
received.

Signed-off-by: Hanumanth Pothula <hpothula@marvell.com>

v14:
 - Rebased on tip of next-net/main
v13:
 - Make sure the protocol-based header split feature is not broken
   by updating the changes against the latest code base.
v12:
 - Process the multi-segment configuration when the number of segments
   (rx_pkt_nb_segs) is greater than 1 or the buffer split offload
   flag (RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) is set.
v11:
 - Resolve compilation errors and warnings.
v10:
 - Populate multi-mempool array based on mbuf_data_size_n instead
   of rx_pkt_nb_segs.
---
 app/test-pmd/testpmd.c | 70 +++++++++++++++++++++++++++---------------
 app/test-pmd/testpmd.h |  3 ++
 app/test-pmd/util.c    |  4 +--
 3 files changed, 51 insertions(+), 26 deletions(-)
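
Note for reviewers (not part of the patch): below is a minimal sketch of
how an application could hand several mempools to one Rx queue through
the rte_eth_rxconf::rx_mempools / rx_nmempool fields used in the change
above. Function name, pool names, pool sizes and counts are made up for
illustration only.

/* Illustrative sketch only -- not part of this patch. Pool names and
 * sizes are arbitrary; error handling is reduced to early returns.
 */
#include <rte_ethdev.h>
#include <rte_mbuf.h>

static int
setup_rxq_with_two_pools(uint16_t port_id, uint16_t queue_id,
			 uint16_t nb_desc, int socket_id)
{
	struct rte_mempool *pools[2];
	struct rte_eth_dev_info dev_info;
	struct rte_eth_rxconf rxconf;
	int ret;

	/* Two pools with different data-room sizes; the PMD picks a
	 * pool per packet based on the packet's length.
	 */
	pools[0] = rte_pktmbuf_pool_create("rx_small", 8192, 256, 0,
					   2048, socket_id);
	pools[1] = rte_pktmbuf_pool_create("rx_large", 4096, 256, 0,
					   9216, socket_id);
	if (pools[0] == NULL || pools[1] == NULL)
		return -1;

	ret = rte_eth_dev_info_get(port_id, &dev_info);
	if (ret != 0)
		return ret;

	rxconf = dev_info.default_rxconf;
	rxconf.rx_mempools = pools;	/* candidate pools for this queue */
	rxconf.rx_nmempool = 2;		/* number of entries in the array */

	/* The mp argument is NULL when rx_mempools is used, exactly as
	 * in the rx_queue_setup() change above.
	 */
	return rte_eth_rx_queue_setup(port_id, queue_id, nb_desc,
				      socket_id, &rxconf, NULL);
}

In testpmd itself the multi-pool branch is taken when more than one size
is passed to --mbuf-size (for example --mbuf-size=2048,4096), which is
what makes mbuf_data_size_n greater than 1.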

diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index d494870e59..ef281ccd20 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -2653,12 +2653,20 @@ rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
 	       struct rte_eth_rxconf *rx_conf, struct rte_mempool *mp)
 {
 	union rte_eth_rxseg rx_useg[MAX_SEGS_BUFFER_SPLIT] = {};
+	struct rte_mempool *rx_mempool[MAX_MEMPOOL] = {};
+	struct rte_mempool *mpx;
 	unsigned int i, mp_n;
 	uint32_t prev_hdrs = 0;
 	int ret;
 
-	if (rx_pkt_nb_segs <= 1 ||
-	    (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) == 0) {
+	/* Verify Rx queue configuration is single pool and segment or
+	 * multiple pool/segment.
+	 * @see rte_eth_rxconf::rx_mempools
+	 * @see rte_eth_rxconf::rx_seg
+	 */
+	if (!(mbuf_data_size_n > 1) && !(rx_pkt_nb_segs > 1 ||
+	    ((rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) != 0))) {
+		/* Single pool/segment configuration */
 		rx_conf->rx_seg = NULL;
 		rx_conf->rx_nseg = 0;
 		ret = rte_eth_rx_queue_setup(port_id, rx_queue_id,
@@ -2666,34 +2674,48 @@ rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
 					     rx_conf, mp);
 		goto exit;
 	}
-	for (i = 0; i < rx_pkt_nb_segs; i++) {
-		struct rte_eth_rxseg_split *rx_seg = &rx_useg[i].split;
-		struct rte_mempool *mpx;
-		/*
-		 * Use last valid pool for the segments with number
-		 * exceeding the pool index.
-		 */
-		mp_n = (i >= mbuf_data_size_n) ? mbuf_data_size_n - 1 : i;
-		mpx = mbuf_pool_find(socket_id, mp_n);
-		/* Handle zero as mbuf data buffer size. */
-		rx_seg->offset = i < rx_pkt_nb_offs ?
-				   rx_pkt_seg_offsets[i] : 0;
-		rx_seg->mp = mpx ? mpx : mp;
-		if (rx_pkt_hdr_protos[i] != 0 && rx_pkt_seg_lengths[i] == 0) {
-			rx_seg->proto_hdr = rx_pkt_hdr_protos[i] & ~prev_hdrs;
-			prev_hdrs |= rx_seg->proto_hdr;
-		} else {
-			rx_seg->length = rx_pkt_seg_lengths[i] ?
-					rx_pkt_seg_lengths[i] :
-					mbuf_data_size[mp_n];
+
+	if (rx_pkt_nb_segs > 1 ||
+	    rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) {
+		/* multi-segment configuration */
+		for (i = 0; i < rx_pkt_nb_segs; i++) {
+			struct rte_eth_rxseg_split *rx_seg = &rx_useg[i].split;
+			/*
+			 * Use last valid pool for the segments with number
+			 * exceeding the pool index.
+			 */
+			mp_n = (i >= mbuf_data_size_n) ? mbuf_data_size_n - 1 : i;
+			mpx = mbuf_pool_find(socket_id, mp_n);
+			/* Handle zero as mbuf data buffer size. */
+			rx_seg->offset = i < rx_pkt_nb_offs ?
+					   rx_pkt_seg_offsets[i] : 0;
+			rx_seg->mp = mpx ? mpx : mp;
+			if (rx_pkt_hdr_protos[i] != 0 && rx_pkt_seg_lengths[i] == 0) {
+				rx_seg->proto_hdr = rx_pkt_hdr_protos[i] & ~prev_hdrs;
+				prev_hdrs |= rx_seg->proto_hdr;
+			} else {
+				rx_seg->length = rx_pkt_seg_lengths[i] ?
+						rx_pkt_seg_lengths[i] :
+						mbuf_data_size[mp_n];
+			}
+		}
+		rx_conf->rx_nseg = rx_pkt_nb_segs;
+		rx_conf->rx_seg = rx_useg;
+	} else {
+		/* multi-pool configuration */
+		for (i = 0; i < mbuf_data_size_n; i++) {
+			mpx = mbuf_pool_find(socket_id, i);
+			rx_mempool[i] = mpx ? mpx : mp;
 		}
+		rx_conf->rx_mempools = rx_mempool;
+		rx_conf->rx_nmempool = mbuf_data_size_n;
 	}
-	rx_conf->rx_nseg = rx_pkt_nb_segs;
-	rx_conf->rx_seg = rx_useg;
 	ret = rte_eth_rx_queue_setup(port_id, rx_queue_id, nb_rx_desc,
 				    socket_id, rx_conf, NULL);
 	rx_conf->rx_seg = NULL;
 	rx_conf->rx_nseg = 0;
+	rx_conf->rx_mempools = NULL;
+	rx_conf->rx_nmempool = 0;
 exit:
 	ports[port_id].rxq[rx_queue_id].state = rx_conf->rx_deferred_start ?
 						RTE_ETH_QUEUE_STATE_STOPPED :
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index 6aa85e74ee..05ca8628cf 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -80,6 +80,9 @@ extern uint8_t cl_quit;
 
 #define MIN_TOTAL_NUM_MBUFS 1024
 
+/* Maximum number of pools supported per Rx queue */
+#define MAX_MEMPOOL 8
+
 typedef uint8_t  lcoreid_t;
 typedef uint16_t portid_t;
 typedef uint16_t queueid_t;
diff --git a/app/test-pmd/util.c b/app/test-pmd/util.c
index fd98e8b51d..f9df5f69ef 100644
--- a/app/test-pmd/util.c
+++ b/app/test-pmd/util.c
@@ -150,8 +150,8 @@ dump_pkt_burst(uint16_t port_id, uint16_t queue, struct rte_mbuf *pkts[],
 		print_ether_addr(" - dst=", &eth_hdr->dst_addr,
 				 print_buf, buf_size, &cur_len);
 		MKDUMPSTR(print_buf, buf_size, cur_len,
-			  " - type=0x%04x - length=%u - nb_segs=%d",
-			  eth_type, (unsigned int) mb->pkt_len,
+			  " - pool=%s - type=0x%04x - length=%u - nb_segs=%d",
+			  mb->pool->name, eth_type, (unsigned int) mb->pkt_len,
 			  (int)mb->nb_segs);
 		ol_flags = mb->ol_flags;
 		if (ol_flags & RTE_MBUF_F_RX_RSS_HASH) {
-- 
2.25.1