From: Hanumanth Pothula <hpothula@marvell.com>
To: Aman Singh, Yuying Zhang
Cc: dev@dpdk.org, Hanumanth Pothula <hpothula@marvell.com>
Subject: [PATCH v6 3/3] app/testpmd: support multiple mbuf pools per Rx queue
Date: Thu, 6 Oct 2022 23:23:55 +0530
Message-ID: <20221006175355.1327673-3-hpothula@marvell.com>
In-Reply-To: <20221006175355.1327673-1-hpothula@marvell.com>
References: <20221006170126.1322852-1-hpothula@marvell.com>
 <20221006175355.1327673-1-hpothula@marvell.com>

This patch adds support for multiple mempools per Rx queue.

Some hardware can choose a memory pool for an incoming packet based
on the packet's size; the multiple mempool capability lets the PMD
pick a memory pool that matches the packet's length. When multiple
mempool support is enabled, populate the mempool array accordingly,
and print the name of the mempool from which each received packet's
mbuf was allocated.
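As context for reviewers, here is a minimal sketch of how an application
could use this outside of testpmd, relying on the rx_mempools/rx_npool
fields and the RTE_ETH_RX_OFFLOAD_MUL_MEMPOOL flag added earlier in this
series; the helper name, pool names, and pool sizes below are hypothetical
and not part of this patch:

#include <errno.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>

/* Hypothetical helper: set up one Rx queue backed by two mempools, so
 * the HW can pick the pool that best fits each packet's length.
 */
static int
setup_rxq_multi_pool(uint16_t port_id, uint16_t queue_id,
		     unsigned int socket_id)
{
	struct rte_mempool *pools[2];
	struct rte_eth_dev_info dev_info;
	struct rte_eth_rxconf rx_conf;
	int ret;

	ret = rte_eth_dev_info_get(port_id, &dev_info);
	if (ret != 0)
		return ret;

	/* One pool for small packets, one for large (sizes are examples). */
	pools[0] = rte_pktmbuf_pool_create("pool_small", 4096, 256, 0,
					   512 + RTE_PKTMBUF_HEADROOM,
					   socket_id);
	pools[1] = rte_pktmbuf_pool_create("pool_large", 4096, 256, 0,
					   RTE_MBUF_DEFAULT_BUF_SIZE,
					   socket_id);
	if (pools[0] == NULL || pools[1] == NULL)
		return -ENOMEM;

	rx_conf = dev_info.default_rxconf;
	rx_conf.offloads |= RTE_ETH_RX_OFFLOAD_MUL_MEMPOOL;
	rx_conf.rx_mempools = pools;	/* per this series' ethdev patch */
	rx_conf.rx_npool = RTE_DIM(pools);

	/* mp argument is NULL: mbufs come from rx_mempools instead. */
	return rte_eth_rx_queue_setup(port_id, queue_id, 1024, socket_id,
				      &rx_conf, NULL);
}
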
Signed-off-by: Hanumanth Pothula <hpothula@marvell.com>
---
 app/test-pmd/testpmd.c | 44 ++++++++++++++++++++++++++++++------------
 app/test-pmd/testpmd.h |  3 +++
 app/test-pmd/util.c    |  4 ++--
 3 files changed, 37 insertions(+), 14 deletions(-)

diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index 77741fc41f..1dbddf7b43 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -2624,11 +2624,13 @@ rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
 	       struct rte_eth_rxconf *rx_conf, struct rte_mempool *mp)
 {
 	union rte_eth_rxseg rx_useg[MAX_SEGS_BUFFER_SPLIT] = {};
+	struct rte_mempool *rx_mempool[MAX_MEMPOOL] = {};
 	unsigned int i, mp_n;
 	int ret;
 
 	if (rx_pkt_nb_segs <= 1 ||
-	    (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) == 0) {
+	    (rx_conf->offloads & (RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT |
+				  RTE_ETH_RX_OFFLOAD_MUL_MEMPOOL)) == 0) {
 		rx_conf->rx_seg = NULL;
 		rx_conf->rx_nseg = 0;
 		ret = rte_eth_rx_queue_setup(port_id, rx_queue_id,
@@ -2637,7 +2639,9 @@ rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
 		goto exit;
 	}
 	for (i = 0; i < rx_pkt_nb_segs; i++) {
-		struct rte_eth_rxseg_split *rx_seg = &rx_useg[i].split;
+		struct rte_eth_rxseg_split *rx_split = &rx_useg[i].split;
+		struct rte_mempool *mempool;
+		struct rte_mempool *mpx;
 		/*
 		 * Use last valid pool for the segments with number
@@ -2645,16 +2649,32 @@ rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
 		 */
 		mp_n = (i >= mbuf_data_size_n) ? mbuf_data_size_n - 1 : i;
 		mpx = mbuf_pool_find(socket_id, mp_n);
-		/* Handle zero as mbuf data buffer size. */
-		rx_seg->length = rx_pkt_seg_lengths[i] ?
-				   rx_pkt_seg_lengths[i] :
-				   mbuf_data_size[mp_n];
-		rx_seg->offset = i < rx_pkt_nb_offs ?
-				   rx_pkt_seg_offsets[i] : 0;
-		rx_seg->mp = mpx ? mpx : mp;
-	}
-	rx_conf->rx_nseg = rx_pkt_nb_segs;
-	rx_conf->rx_seg = rx_useg;
+		if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) {
+			/*
+			 * On segment length zero, update the length as
+			 * buffer size minus headroom size, to make sure
+			 * enough space is accommodated for the header.
+			 */
+			rx_split->length = rx_pkt_seg_lengths[i] ?
+					   rx_pkt_seg_lengths[i] :
+					   mbuf_data_size[mp_n] - RTE_PKTMBUF_HEADROOM;
+			rx_split->offset = i < rx_pkt_nb_offs ?
+					   rx_pkt_seg_offsets[i] : 0;
+			rx_split->mp = mpx ? mpx : mp;
+		}
+		if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_MUL_MEMPOOL) {
+			mempool = mpx ? mpx : mp;
+			rx_mempool[i] = mempool;
+		}
+	}
+	if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) {
+		rx_conf->rx_nseg = rx_pkt_nb_segs;
+		rx_conf->rx_seg = rx_useg;
+	}
+	if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_MUL_MEMPOOL) {
+		rx_conf->rx_mempools = rx_mempool;
+		rx_conf->rx_npool = rx_pkt_nb_segs;
+	}
 	ret = rte_eth_rx_queue_setup(port_id, rx_queue_id,
 				     nb_rx_desc, socket_id, rx_conf, NULL);
 	rx_conf->rx_seg = NULL;
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index ddf5e21849..15a26171e2 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -82,6 +82,9 @@ extern uint8_t cl_quit;
 
 #define MIN_TOTAL_NUM_MBUFS 1024
 
+/* Maximum number of pools supported per Rx queue */
+#define MAX_MEMPOOL 8
+
 typedef uint8_t lcoreid_t;
 typedef uint16_t portid_t;
 typedef uint16_t queueid_t;
diff --git a/app/test-pmd/util.c b/app/test-pmd/util.c
index fd98e8b51d..f9df5f69ef 100644
--- a/app/test-pmd/util.c
+++ b/app/test-pmd/util.c
@@ -150,8 +150,8 @@ dump_pkt_burst(uint16_t port_id, uint16_t queue, struct rte_mbuf *pkts[],
 		print_ether_addr(" - dst=", &eth_hdr->dst_addr,
 				 print_buf, buf_size, &cur_len);
 		MKDUMPSTR(print_buf, buf_size, cur_len,
-			  " - type=0x%04x - length=%u - nb_segs=%d",
-			  eth_type, (unsigned int) mb->pkt_len,
+			  " - pool=%s - type=0x%04x - length=%u - nb_segs=%d",
+			  mb->pool->name, eth_type, (unsigned int) mb->pkt_len,
 			  (int)mb->nb_segs);
 		ol_flags = mb->ol_flags;
 		if (ol_flags & RTE_MBUF_F_RX_RSS_HASH) {
-- 
2.25.1