From mboxrd@z Thu Jan  1 00:00:00 1970
From: Hanumanth Pothula <hpothula@marvell.com>
To: Aman Singh, Yuying Zhang
Cc: dev@dpdk.org, Hanumanth Pothula
Subject: [PATCH v4 2/3] app/testpmd: add support for multiple mbuf pools per Rx queue
Date: Thu, 15 Sep 2022 12:37:31 +0530
Message-ID: <20220915070732.182542-2-hpothula@marvell.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20220915070732.182542-1-hpothula@marvell.com>
References: <20220902070047.2812906-1-hpothula@marvell.com>
 <20220915070732.182542-1-hpothula@marvell.com>
List-Id: DPDK patches and discussions <dev.dpdk.org>

This patch adds support for multiple mempools per Rx queue. Some hardware
can choose a memory pool based on the packet's size; this capability
allows the PMD to pick, for each received packet, the memory pool whose
buffer length best fits that packet. When multiple mempool support is
enabled, populate the mempool array and also print the name of the pool
from which each received packet's mbuf was allocated.
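For illustration, below is a minimal sketch of how an application could
configure an Rx queue with two mempools through the API this series
proposes. Note that struct rte_eth_rx_mempool, the rx_mempool/rx_npool
fields of struct rte_eth_rxconf and the RTE_ETH_RX_OFFLOAD_MUL_MEMPOOL
offload flag come from patch 1/3 of this series, not the released ethdev
API; the pool roles, buffer sizes and descriptor count are assumptions.

    /*
     * Sketch only: rte_eth_rx_mempool, rx_mempool/rx_npool and
     * RTE_ETH_RX_OFFLOAD_MUL_MEMPOOL are introduced by patch 1/3 of
     * this series; pool roles and the descriptor count are illustrative.
     */
    #include <rte_common.h>
    #include <rte_ethdev.h>

    static int
    setup_multi_mempool_rxq(uint16_t port_id, uint16_t queue_id,
                            struct rte_mempool *small_pool,
                            struct rte_mempool *large_pool)
    {
            struct rte_eth_rx_mempool pools[] = {
                    { .mp = small_pool },   /* e.g. 2KB buffers, short packets */
                    { .mp = large_pool },   /* e.g. 8KB buffers, large packets */
            };
            struct rte_eth_rxconf rxconf = {
                    .offloads = RTE_ETH_RX_OFFLOAD_MUL_MEMPOOL,
                    .rx_mempool = pools,
                    .rx_npool = RTE_DIM(pools),
            };

            /*
             * The single-mempool argument is NULL: buffers come from the
             * pools[] array and the PMD picks a pool per packet based on
             * the received packet's length.
             */
            return rte_eth_rx_queue_setup(port_id, queue_id, 512,
                                          rte_eth_dev_socket_id(port_id),
                                          &rxconf, NULL);
    }

This mirrors what the testpmd change below does: when the offload is set,
rx_queue_setup() fills the mempool array and passes NULL as the single
mempool argument.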
Signed-off-by: Hanumanth Pothula <hpothula@marvell.com>
---
 app/test-pmd/testpmd.c | 41 +++++++++++++++++++++++++++++------------
 app/test-pmd/testpmd.h |  3 +++
 app/test-pmd/util.c    |  4 ++--
 3 files changed, 34 insertions(+), 14 deletions(-)

diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index 77741fc41f..d16a552e6d 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -2624,11 +2624,13 @@ rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
 	       struct rte_eth_rxconf *rx_conf, struct rte_mempool *mp)
 {
 	union rte_eth_rxseg rx_useg[MAX_SEGS_BUFFER_SPLIT] = {};
+	struct rte_eth_rx_mempool rx_mempool[MAX_MEMPOOL] = {};
 	unsigned int i, mp_n;
 	int ret;
 
 	if (rx_pkt_nb_segs <= 1 ||
-	    (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) == 0) {
+	    (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT ||
+	     rx_conf->offloads & RTE_ETH_RX_OFFLOAD_MUL_MEMPOOL) == 0) {
 		rx_conf->rx_seg = NULL;
 		rx_conf->rx_nseg = 0;
 		ret = rte_eth_rx_queue_setup(port_id, rx_queue_id,
@@ -2637,7 +2639,8 @@ rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
 		goto exit;
 	}
 	for (i = 0; i < rx_pkt_nb_segs; i++) {
-		struct rte_eth_rxseg_split *rx_seg = &rx_useg[i].split;
+		struct rte_eth_rxseg_split *rx_split = &rx_useg[i].split;
+		struct rte_eth_rx_mempool *mempool = &rx_mempool[i];
 		struct rte_mempool *mpx;
 		/*
 		 * Use last valid pool for the segments with number
@@ -2645,16 +2648,30 @@ rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
 		 */
 		mp_n = (i >= mbuf_data_size_n) ? mbuf_data_size_n - 1 : i;
 		mpx = mbuf_pool_find(socket_id, mp_n);
-		/* Handle zero as mbuf data buffer size. */
-		rx_seg->length = rx_pkt_seg_lengths[i] ?
-				 rx_pkt_seg_lengths[i] :
-				 mbuf_data_size[mp_n];
-		rx_seg->offset = i < rx_pkt_nb_offs ?
-				 rx_pkt_seg_offsets[i] : 0;
-		rx_seg->mp = mpx ? mpx : mp;
-	}
-	rx_conf->rx_nseg = rx_pkt_nb_segs;
-	rx_conf->rx_seg = rx_useg;
+		if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) {
+			/*
+			 * On zero segment length, set the length to
+			 * buffer size minus headroom size, to make sure
+			 * enough space is accommodated for the header.
+			 */
+			rx_split->length = rx_pkt_seg_lengths[i] ?
+					   rx_pkt_seg_lengths[i] :
+					   mbuf_data_size[mp_n] - RTE_PKTMBUF_HEADROOM;
+			rx_split->offset = i < rx_pkt_nb_offs ?
+					   rx_pkt_seg_offsets[i] : 0;
+			rx_split->mp = mpx ? mpx : mp;
+		}
+		if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_MUL_MEMPOOL)
+			mempool->mp = mpx ? mpx : mp;
+	}
+	if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) {
+		rx_conf->rx_nseg = rx_pkt_nb_segs;
+		rx_conf->rx_seg = rx_useg;
+	}
+	if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_MUL_MEMPOOL) {
+		rx_conf->rx_mempool = rx_mempool;
+		rx_conf->rx_npool = rx_pkt_nb_segs;
+	}
 	ret = rte_eth_rx_queue_setup(port_id, rx_queue_id,
 			nb_rx_desc, socket_id, rx_conf, NULL);
 	rx_conf->rx_seg = NULL;
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index ddf5e21849..15a26171e2 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -82,6 +82,9 @@ extern uint8_t cl_quit;
 
 #define MIN_TOTAL_NUM_MBUFS 1024
 
+/* Maximum number of mempools supported per Rx queue */
+#define MAX_MEMPOOL 8
+
 typedef uint8_t lcoreid_t;
 typedef uint16_t portid_t;
 typedef uint16_t queueid_t;
diff --git a/app/test-pmd/util.c b/app/test-pmd/util.c
index fd98e8b51d..f9df5f69ef 100644
--- a/app/test-pmd/util.c
+++ b/app/test-pmd/util.c
@@ -150,8 +150,8 @@ dump_pkt_burst(uint16_t port_id, uint16_t queue, struct rte_mbuf *pkts[],
 		print_ether_addr(" - dst=", &eth_hdr->dst_addr,
 				 print_buf, buf_size, &cur_len);
 		MKDUMPSTR(print_buf, buf_size, cur_len,
-			  " - type=0x%04x - length=%u - nb_segs=%d",
-			  eth_type, (unsigned int) mb->pkt_len,
+			  " - pool=%s - type=0x%04x - length=%u - nb_segs=%d",
+			  mb->pool->name, eth_type, (unsigned int) mb->pkt_len,
 			  (int)mb->nb_segs);
 		ol_flags = mb->ol_flags;
 		if (ol_flags & RTE_MBUF_F_RX_RSS_HASH) {
-- 
2.25.1
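
As a usage note: with the util.c change above, testpmd's verbose Rx dump
(enabled with "set verbose 1") also reports which pool each mbuf came
from. A hypothetical dump line, constructed from the format string in the
diff (the addresses, pool name and field values are invented for
illustration), would look like:

  src=98:03:9B:00:00:01 - dst=98:03:9B:00:00:02 - pool=mb_pool_0 - type=0x0800 - length=1024 - nb_segs=1

This makes it easy to verify on the console that the PMD really selected
the expected pool for a given packet size.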