From mboxrd@z Thu Jan  1 00:00:00 1970
From: Hanumanth Pothula
To: Aman Singh, Yuying Zhang
CC: , , , <-yux.jiang@intel.com>, , ,
Subject: [PATCH v2 1/1] app/testpmd: add valid check to verify multi mempool feature
Date: Thu, 17 Nov 2022 18:25:42 +0530
Message-ID: <20221117125542.3091224-1-hpothula@marvell.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20221117113047.3088461-1-hpothula@marvell.com>
References: <20221117113047.3088461-1-hpothula@marvell.com>
List-Id: DPDK patches and discussions

Validate the ethdev parameter 'max_rx_mempools' to know whether the
device supports the multi-mempool feature or not.

Bugzilla ID: 1128

Signed-off-by: Hanumanth Pothula

v2:
 - Rebased on tip of next-net/main
---
 app/test-pmd/testpmd.c | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index 4e25f77c6a..fd634bd5e6 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -2655,16 +2655,22 @@ rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
 	union rte_eth_rxseg rx_useg[MAX_SEGS_BUFFER_SPLIT] = {};
 	struct rte_mempool *rx_mempool[MAX_MEMPOOL] = {};
 	struct rte_mempool *mpx;
+	struct rte_eth_dev_info dev_info;
 	unsigned int i, mp_n;
 	uint32_t prev_hdrs = 0;
 	int ret;
 
+	ret = rte_eth_dev_info_get(port_id, &dev_info);
+	if (ret != 0)
+		return ret;
+
 	/* Verify Rx queue configuration is single pool and segment or
 	 * multiple pool/segment.
+	 * @see rte_eth_dev_info::max_rx_mempools
 	 * @see rte_eth_rxconf::rx_mempools
 	 * @see rte_eth_rxconf::rx_seg
 	 */
-	if (!(mbuf_data_size_n > 1) && !(rx_pkt_nb_segs > 1 ||
+	if (!(dev_info.max_rx_mempools != 0) && !(rx_pkt_nb_segs > 1 ||
 	    ((rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) != 0))) {
 		/* Single pool/segment configuration */
 		rx_conf->rx_seg = NULL;
-- 
2.25.1