From: Hanumanth Pothula <hpothula@marvell.com>
To: Aman Singh, Yuying Zhang
Subject: [PATCH v13 1/1] app/testpmd: support multiple mbuf pools per Rx queue
Date: Thu, 10 Nov 2022 13:47:19 +0530
Message-ID: <20221110081719.2404059-1-hpothula@marvell.com>
In-Reply-To: <20221107053153.2275618-1-hpothula@marvell.com>
References: <20221107053153.2275618-1-hpothula@marvell.com>

Some hardware can choose a memory pool for an incoming packet based on
its size; this capability allows the PMD/NIC to pick the Rx mempool that
matches the packet's length. When multiple mempool support is enabled,
populate the mempool array accordingly. Also, print the name of the
mempool from which each received packet's mbuf was allocated. (A usage
sketch of the ethdev configuration this relies on follows the patch
below.)

Signed-off-by: Hanumanth Pothula <hpothula@marvell.com>

v13:
- Make sure the protocol-based header split feature is not broken by
  rebasing the changes on the latest code base.
v12:
- Process multi-segment configuration when the number of segments
  (rx_pkt_nb_segs) is greater than 1 or the buffer split offload flag
  (RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) is set.
v11:
- Resolve compilation errors and warnings.
v10:
- Populate the multi-mempool array based on mbuf_data_size_n instead of
  rx_pkt_nb_segs.
---
 app/test-pmd/testpmd.c | 65 ++++++++++++++++++++++++++++--------------
 app/test-pmd/testpmd.h |  3 ++
 app/test-pmd/util.c    |  4 +--
 3 files changed, 48 insertions(+), 24 deletions(-)

diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index 5b0f0838dc..78ea19fcbb 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -2647,11 +2647,19 @@ rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
 	       struct rte_eth_rxconf *rx_conf, struct rte_mempool *mp)
 {
 	union rte_eth_rxseg rx_useg[MAX_SEGS_BUFFER_SPLIT] = {};
+	struct rte_mempool *rx_mempool[MAX_MEMPOOL] = {};
+	struct rte_mempool *mpx;
 	unsigned int i, mp_n;
 	int ret;
 
-	if (rx_pkt_nb_segs <= 1 ||
-	    (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) == 0) {
+	/* Verify Rx queue configuration is single pool and segment or
+	 * multiple pool/segment.
+	 * @see rte_eth_rxconf::rx_mempools
+	 * @see rte_eth_rxconf::rx_seg
+	 */
+	if (!(mbuf_data_size_n > 1) && !(rx_pkt_nb_segs > 1 ||
+	    ((rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) != 0))) {
+		/* Single pool/segment configuration */
 		rx_conf->rx_seg = NULL;
 		rx_conf->rx_nseg = 0;
 		ret = rte_eth_rx_queue_setup(port_id, rx_queue_id,
@@ -2659,33 +2667,46 @@ rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
 					     rx_conf, mp);
 		goto exit;
 	}
-	for (i = 0; i < rx_pkt_nb_segs; i++) {
-		struct rte_eth_rxseg_split *rx_seg = &rx_useg[i].split;
-		struct rte_mempool *mpx;
-		/*
-		 * Use last valid pool for the segments with number
-		 * exceeding the pool index.
-		 */
-		mp_n = (i >= mbuf_data_size_n) ? mbuf_data_size_n - 1 : i;
-		mpx = mbuf_pool_find(socket_id, mp_n);
-		/* Handle zero as mbuf data buffer size. */
-		rx_seg->offset = i < rx_pkt_nb_offs ?
-				 rx_pkt_seg_offsets[i] : 0;
-		rx_seg->mp = mpx ? mpx : mp;
-		if (rx_pkt_hdr_protos[i] != 0 && rx_pkt_seg_lengths[i] == 0) {
-			rx_seg->proto_hdr = rx_pkt_hdr_protos[i];
-		} else {
-			rx_seg->length = rx_pkt_seg_lengths[i] ?
-					 rx_pkt_seg_lengths[i] :
-					 mbuf_data_size[mp_n];
+
+	if (rx_pkt_nb_segs > 1 ||
+	    rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) {
+		for (i = 0; i < rx_pkt_nb_segs; i++) {
+			struct rte_eth_rxseg_split *rx_seg = &rx_useg[i].split;
+			/*
+			 * Use last valid pool for the segments with number
+			 * exceeding the pool index.
+			 */
+			mp_n = (i >= mbuf_data_size_n) ? mbuf_data_size_n - 1 : i;
+			mpx = mbuf_pool_find(socket_id, mp_n);
+			/* Handle zero as mbuf data buffer size. */
+			rx_seg->offset = i < rx_pkt_nb_offs ?
+					 rx_pkt_seg_offsets[i] : 0;
+			rx_seg->mp = mpx ? mpx : mp;
+			if (rx_pkt_hdr_protos[i] != 0 && rx_pkt_seg_lengths[i] == 0) {
+				rx_seg->proto_hdr = rx_pkt_hdr_protos[i];
+			} else {
+				rx_seg->length = rx_pkt_seg_lengths[i] ?
+						 rx_pkt_seg_lengths[i] :
+						 mbuf_data_size[mp_n];
+			}
 		}
-	}
 	rx_conf->rx_nseg = rx_pkt_nb_segs;
 	rx_conf->rx_seg = rx_useg;
+	} else {
+		/* multi-pool configuration */
+		for (i = 0; i < mbuf_data_size_n; i++) {
+			mpx = mbuf_pool_find(socket_id, i);
+			rx_mempool[i] = mpx ? mpx : mp;
+		}
+		rx_conf->rx_mempools = rx_mempool;
+		rx_conf->rx_nmempool = mbuf_data_size_n;
+	}
 	ret = rte_eth_rx_queue_setup(port_id, rx_queue_id,
 				     nb_rx_desc, socket_id, rx_conf, NULL);
 	rx_conf->rx_seg = NULL;
 	rx_conf->rx_nseg = 0;
+	rx_conf->rx_mempools = NULL;
+	rx_conf->rx_nmempool = 0;
 exit:
 	ports[port_id].rxq[rx_queue_id].state = rx_conf->rx_deferred_start ?
 						RTE_ETH_QUEUE_STATE_STOPPED :
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index e65be323b8..14be10dcef 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -80,6 +80,9 @@ extern uint8_t cl_quit;
 
 #define MIN_TOTAL_NUM_MBUFS 1024
 
+/* Maximum number of pools supported per Rx queue */
+#define MAX_MEMPOOL 8
+
 typedef uint8_t lcoreid_t;
 typedef uint16_t portid_t;
 typedef uint16_t queueid_t;
diff --git a/app/test-pmd/util.c b/app/test-pmd/util.c
index fd98e8b51d..f9df5f69ef 100644
--- a/app/test-pmd/util.c
+++ b/app/test-pmd/util.c
@@ -150,8 +150,8 @@ dump_pkt_burst(uint16_t port_id, uint16_t queue, struct rte_mbuf *pkts[],
 		print_ether_addr(" - dst=", &eth_hdr->dst_addr,
 				 print_buf, buf_size, &cur_len);
 		MKDUMPSTR(print_buf, buf_size, cur_len,
-			  " - type=0x%04x - length=%u - nb_segs=%d",
-			  eth_type, (unsigned int) mb->pkt_len,
+			  " - pool=%s - type=0x%04x - length=%u - nb_segs=%d",
+			  mb->pool->name, eth_type, (unsigned int) mb->pkt_len,
 			  (int)mb->nb_segs);
 		ol_flags = mb->ol_flags;
 		if (ol_flags & RTE_MBUF_F_RX_RSS_HASH) {
-- 
2.25.1
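
Usage sketch (editor's addition, not part of the patch): the multi-pool
branch above fills rte_eth_rxconf::rx_mempools / rx_nmempool and passes a
NULL mb_pool to rte_eth_rx_queue_setup(). The following minimal,
hypothetical application-side example shows that same configuration. The
helper name, pool names, sizes, cache and descriptor counts are
illustrative assumptions; it also assumes the PMD advertises the
capability through rte_eth_dev_info::max_rx_mempools.

#include <errno.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>
#include <rte_mempool.h>

/* Hypothetical helper: back one Rx queue with two mempools of different
 * buffer sizes, mirroring what testpmd now configures when several
 * --mbuf-size values are supplied.
 */
static int
setup_multi_pool_rxq(uint16_t port_id, uint16_t queue_id, int socket_id)
{
	struct rte_eth_dev_info dev_info;
	struct rte_eth_rxconf rxconf;
	struct rte_mempool *pools[2];
	int ret;

	ret = rte_eth_dev_info_get(port_id, &dev_info);
	if (ret != 0)
		return ret;
	/* The PMD reports how many Rx mempools a single queue may use. */
	if (dev_info.max_rx_mempools < 2)
		return -ENOTSUP;

	/* Small-buffer pool for short packets, larger pool for the rest. */
	pools[0] = rte_pktmbuf_pool_create("rx_pool_small", 4096, 256, 0,
					   512 + RTE_PKTMBUF_HEADROOM,
					   socket_id);
	pools[1] = rte_pktmbuf_pool_create("rx_pool_large", 4096, 256, 0,
					   2048 + RTE_PKTMBUF_HEADROOM,
					   socket_id);
	if (pools[0] == NULL || pools[1] == NULL)
		return -ENOMEM;

	rxconf = dev_info.default_rxconf;
	rxconf.rx_mempools = pools;	/* candidate pools for this queue */
	rxconf.rx_nmempool = 2;		/* number of entries in the array */

	/* With rx_mempools set, the single mb_pool argument is NULL. */
	return rte_eth_rx_queue_setup(port_id, queue_id, 1024, socket_id,
				      &rxconf, NULL);
}

With such a configuration, a PMD implementing the capability can place
short packets into the small-buffer pool and longer ones into the
large-buffer pool; the new pool= field printed by dump_pkt_burst() makes
that placement visible when received packets are dumped.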