From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: <dev-bounces@dpdk.org>
Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124])
	by inbox.dpdk.org (Postfix) with ESMTP id A93BC457BB;
	Wed, 14 Aug 2024 09:50:39 +0200 (CEST)
Received: from mails.dpdk.org (localhost [127.0.0.1])
	by mails.dpdk.org (Postfix) with ESMTP id E9D9F427AA;
	Wed, 14 Aug 2024 09:49:05 +0200 (CEST)
Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.16])
 by mails.dpdk.org (Postfix) with ESMTP id 03F6E4026C
 for <dev@dpdk.org>; Tue, 13 Aug 2024 18:00:41 +0200 (CEST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple;
 d=intel.com; i=@intel.com; q=dns/txt; s=Intel;
 t=1723564842; x=1755100842;
 h=from:to:cc:subject:date:message-id:in-reply-to:
 references:mime-version:content-transfer-encoding;
 bh=BYZjZsOQso2TsOSzgi+tAHc9iGs26FKv0IVwN407a/Y=;
 b=P4yfuyfSzaS/AiZsd8mC94Cjm+f42UM8oBZ9oXYZCyOZtQyAmsRDPlLu
 ESN+2V1e45F9Sn4cgHIhO5OruYx6sRo6tN+7jVRrl3ra1LDAFRTSRi1Z3
 jsWehMkM6d7VnOumQBgOqSoSrAysT0RIUjCWdcnZRPqJYci9yNITBg5sz
 hU7uMYuJhxylrRcNvqVrRZ44BXV+Z0/5+Rxn0EJsNCjvfqZh1dg8XvtaF
 mNvojwCPd4P7ZfYYnblXsuulY8YZVIpj35FC4u7l5EU4PYLlw43tTvzXn
 imsNPYS0Jn+4VJGCQ9QQSuT13V0oTO0Pp6m/qdior2Rsl29BMflW7ZEg1 g==;
X-CSE-ConnectionGUID: ZqU4DNDdRGS3otWBs5WQMg==
X-CSE-MsgGUID: mu1QseGgTd2mrXh/JFGDxQ==
X-IronPort-AV: E=McAfee;i="6700,10204,11163"; a="12987881"
X-IronPort-AV: E=Sophos;i="6.09,286,1716274800"; d="scan'208";a="12987881"
Received: from fmviesa007.fm.intel.com ([10.60.135.147])
 by fmvoesa110.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 13 Aug 2024 09:00:42 -0700
X-CSE-ConnectionGUID: OCEAY4YXSj6st8JqZnZj0Q==
X-CSE-MsgGUID: EgJiMR38RpqxXbz4F6gTnA==
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="6.09,286,1716274800"; d="scan'208";a="58406610"
Received: from unknown (HELO silpixa00401385.ir.intel.com) ([10.237.214.25])
 by fmviesa007.fm.intel.com with ESMTP; 13 Aug 2024 09:00:40 -0700
From: Bruce Richardson <bruce.richards@intel.com>
To: dev@dpdk.org
Cc: ferruh.yigit@amd.com, thomas@monjalon.net, mb@smartsharesystems.com,
 Bruce Richardson <bruce.richards@intel.com>,
 Bruce Richardson <bruce.richardson@intel.com>
Subject: [RFC PATCH v2 20/26] app/test-pmd: use separate Rx and Tx queue limits
Date: Tue, 13 Aug 2024 16:59:57 +0100
Message-ID: <20240813160003.423935-21-bruce.richards@intel.com>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20240813160003.423935-1-bruce.richards@intel.com>
References: <20240812132910.162252-1-bruce.richardson@intel.com>
 <20240813160003.423935-1-bruce.richards@intel.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Mailman-Approved-At: Wed, 14 Aug 2024 09:48:35 +0200
X-BeenThere: dev@dpdk.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: DPDK patches and discussions <dev.dpdk.org>
List-Unsubscribe: <https://mails.dpdk.org/options/dev>,
 <mailto:dev-request@dpdk.org?subject=unsubscribe>
List-Archive: <http://mails.dpdk.org/archives/dev/>
List-Post: <mailto:dev@dpdk.org>
List-Help: <mailto:dev-request@dpdk.org?subject=help>
List-Subscribe: <https://mails.dpdk.org/listinfo/dev>,
 <mailto:dev-request@dpdk.org?subject=subscribe>
Errors-To: dev-bounces@dpdk.org

Update the app to use the new defines RTE_MAX_ETHPORT_RX_QUEUES and
RTE_MAX_ETHPORT_TX_QUEUES rather than the old combined define
RTE_MAX_QUEUES_PER_PORT. Since a hairpin queue pairs an Rx queue with
a Tx queue, the allowed hairpin queue count becomes the minimum of the
two new limits.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
 app/test-pmd/testpmd.c |  7 ++++---
 app/test-pmd/testpmd.h | 16 ++++++++--------
 2 files changed, 12 insertions(+), 11 deletions(-)

diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index b1401136e4..84da9a80f2 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -1305,7 +1305,7 @@ check_socket_id(const unsigned int socket_id)
 queueid_t
 get_allowed_max_nb_rxq(portid_t *pid)
 {
-	queueid_t allowed_max_rxq = RTE_MAX_QUEUES_PER_PORT;
+	queueid_t allowed_max_rxq = RTE_MAX_ETHPORT_RX_QUEUES;
 	bool max_rxq_valid = false;
 	portid_t pi;
 	struct rte_eth_dev_info dev_info;
@@ -1353,7 +1353,7 @@ check_nb_rxq(queueid_t rxq)
 queueid_t
 get_allowed_max_nb_txq(portid_t *pid)
 {
-	queueid_t allowed_max_txq = RTE_MAX_QUEUES_PER_PORT;
+	queueid_t allowed_max_txq = RTE_MAX_ETHPORT_TX_QUEUES;
 	bool max_txq_valid = false;
 	portid_t pi;
 	struct rte_eth_dev_info dev_info;
@@ -1564,7 +1564,8 @@ check_nb_txd(queueid_t txd)
 queueid_t
 get_allowed_max_nb_hairpinq(portid_t *pid)
 {
-	queueid_t allowed_max_hairpinq = RTE_MAX_QUEUES_PER_PORT;
+	queueid_t allowed_max_hairpinq = RTE_MIN(RTE_MAX_ETHPORT_RX_QUEUES,
+			RTE_MAX_ETHPORT_TX_QUEUES);
 	portid_t pi;
 	struct rte_eth_hairpin_cap cap;
 
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index 9facd7f281..5e405775b1 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -332,10 +332,10 @@ struct rte_port {
 	uint8_t                 need_reconfig_queues; /**< need reconfiguring queues or not */
 	uint8_t                 rss_flag;   /**< enable rss or not */
 	uint8_t                 dcb_flag;   /**< enable dcb */
-	uint16_t                nb_rx_desc[RTE_MAX_QUEUES_PER_PORT+1]; /**< per queue rx desc number */
-	uint16_t                nb_tx_desc[RTE_MAX_QUEUES_PER_PORT+1]; /**< per queue tx desc number */
-	struct port_rxqueue     rxq[RTE_MAX_QUEUES_PER_PORT+1]; /**< per queue Rx config and state */
-	struct port_txqueue     txq[RTE_MAX_QUEUES_PER_PORT+1]; /**< per queue Tx config and state */
+	uint16_t                nb_rx_desc[RTE_MAX_ETHPORT_RX_QUEUES+1]; /**< per queue rx desc number */
+	uint16_t                nb_tx_desc[RTE_MAX_ETHPORT_TX_QUEUES+1]; /**< per queue tx desc number */
+	struct port_rxqueue     rxq[RTE_MAX_ETHPORT_RX_QUEUES+1]; /**< per queue Rx config and state */
+	struct port_txqueue     txq[RTE_MAX_ETHPORT_TX_QUEUES+1]; /**< per queue Tx config and state */
 	struct rte_ether_addr   *mc_addr_pool; /**< pool of multicast addrs */
 	uint32_t                mc_addr_nb; /**< nb. of addr. in mc_addr_pool */
 	queueid_t               queue_nb; /**< nb. of queues for flow rules */
@@ -351,14 +351,14 @@ struct rte_port {
 	struct port_indirect_action *actions_list;
 	/**< Associated indirect actions. */
 	LIST_HEAD(, port_flow_tunnel) flow_tunnel_list;
-	const struct rte_eth_rxtx_callback *rx_dump_cb[RTE_MAX_QUEUES_PER_PORT+1];
-	const struct rte_eth_rxtx_callback *tx_dump_cb[RTE_MAX_QUEUES_PER_PORT+1];
+	const struct rte_eth_rxtx_callback *rx_dump_cb[RTE_MAX_ETHPORT_RX_QUEUES+1];
+	const struct rte_eth_rxtx_callback *tx_dump_cb[RTE_MAX_ETHPORT_TX_QUEUES+1];
 	/**< metadata value to insert in Tx packets. */
 	uint32_t		tx_metadata;
-	const struct rte_eth_rxtx_callback *tx_set_md_cb[RTE_MAX_QUEUES_PER_PORT+1];
+	const struct rte_eth_rxtx_callback *tx_set_md_cb[RTE_MAX_ETHPORT_TX_QUEUES+1];
 	/**< dynamic flags. */
 	uint64_t		mbuf_dynf;
-	const struct rte_eth_rxtx_callback *tx_set_dynf_cb[RTE_MAX_QUEUES_PER_PORT+1];
+	const struct rte_eth_rxtx_callback *tx_set_dynf_cb[RTE_MAX_ETHPORT_TX_QUEUES+1];
 	struct xstat_display_info xstats_info;
 };
 
-- 
2.43.0