DPDK patches and discussions
From: Jie Hai <haijie1@huawei.com>
To: Aman Singh <aman.deep.singh@intel.com>,
	Yuying Zhang <yuying.zhang@intel.com>,
	Ferruh Yigit <ferruh.yigit@amd.com>,
	Shiyang He <shiyangx.he@intel.com>
Cc: <dev@dpdk.org>, <liudongdong3@huawei.com>, <alialnu@nvidia.com>
Subject: [PATCH] app/testpmd: fix invalid queue ID when start port
Date: Mon, 3 Jul 2023 19:02:31 +0800	[thread overview]
Message-ID: <20230703110232.28494-1-haijie1@huawei.com> (raw)

Function update_queue_state() updates the queue state of all queues
of all ports, using the queue counts nb_rxq/nb_txq cached locally
by testpmd. An invalid queue ID error occurs if testpmd is started
with two ports, one of them is detached and re-attached, and the
other port is started first: the re-attached port has zero queues,
which differs from nb_rxq/nb_txq. A similar error happens in
multi-process scenarios when a secondary process attaches a port
and starts it.

This patch updates the queue state according to the number of
queues reported by the driver instead of the counts cached by testpmd.

Fixes: 141a520b35f7 ("app/testpmd: fix primary process not polling all queues")
Fixes: 5028f207a4fa ("app/testpmd: fix secondary process packet forwarding")
Cc: stable@dpdk.org

Signed-off-by: Jie Hai <haijie1@huawei.com>
---
 app/test-pmd/testpmd.c | 13 +++++++++++--
 1 file changed, 11 insertions(+), 2 deletions(-)

diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index 1fc70650e0a4..c8ce67d0de9f 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -2479,13 +2479,22 @@ update_tx_queue_state(uint16_t port_id, uint16_t queue_id)
 static void
 update_queue_state(void)
 {
+	struct rte_port *port;
+	uint16_t nb_rx_queues;
+	uint16_t nb_tx_queues;
 	portid_t pi;
 	queueid_t qi;
 
 	RTE_ETH_FOREACH_DEV(pi) {
-		for (qi = 0; qi < nb_rxq; qi++)
+		port = &ports[pi];
+		if (eth_dev_info_get_print_err(pi, &port->dev_info) != 0)
+			continue;
+
+		nb_rx_queues = RTE_MIN(nb_rxq, port->dev_info.nb_rx_queues);
+		nb_tx_queues = RTE_MIN(nb_txq, port->dev_info.nb_tx_queues);
+		for (qi = 0; qi < nb_rx_queues; qi++)
 			update_rx_queue_state(pi, qi);
-		for (qi = 0; qi < nb_txq; qi++)
+		for (qi = 0; qi < nb_tx_queues; qi++)
 			update_tx_queue_state(pi, qi);
 	}
 }
-- 
2.33.0


Thread overview: 14+ messages
2023-07-03 11:02 Jie Hai [this message]
2023-07-03 12:33 ` Ali Alnubani
2023-07-04  2:01   ` Jie Hai
2023-07-04  2:22 ` lihuisong (C)
2023-07-04  8:45 ` [PATCH v2] " Jie Hai
2023-07-04  9:16   ` Ali Alnubani
2023-07-04  9:42   ` fengchengwen
2023-07-04 10:59   ` Ferruh Yigit
2023-07-05  3:16     ` lihuisong (C)
2023-07-05  8:02       ` Ferruh Yigit
2023-07-05  8:07         ` Ferruh Yigit
2023-07-05  9:40         ` lihuisong (C)
2023-07-05 11:41           ` Ferruh Yigit
2023-07-06  2:48             ` lihuisong (C)
