DPDK patches and discussions
* [dpdk-dev] [RFC V2 0/2] fix queue stats mapping
@ 2020-10-20  8:26 Min Hu (Connor)
  2020-10-20  8:26 ` [dpdk-dev] [RFC V2 1/2] app/testpmd: fix queue stats mapping configuration Min Hu (Connor)
  2020-10-20  8:26 ` [dpdk-dev] [RFC V2 2/2] app/testpmd: fix starting failed with queue-stats-mapping Min Hu (Connor)
  0 siblings, 2 replies; 15+ messages in thread
From: Min Hu (Connor) @ 2020-10-20  8:26 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, bruce.richardson, thomas.monjalon, lihuisong

From: Huisong Li <lihuisong@huawei.com>

This patch set adjusts unreasonable configurations and fixes runtime
errors in the rx/tx queue stats mapping.

---
V1 -> V2
* adjust commit log description.
---

Huisong Li (2):
  app/testpmd: fix queue stats mapping configuration
  app/testpmd: fix starting failed with queue-stats-mapping

 app/test-pmd/config.c     | 180 ++++++++++++++++++++++++++++-----------------
 app/test-pmd/parameters.c |  63 ++++++++--------
 app/test-pmd/testpmd.c    | 182 ++++++++++++++++++++++++++++++++--------------
 app/test-pmd/testpmd.h    |  41 +++++------
 4 files changed, 293 insertions(+), 173 deletions(-)

-- 
2.7.4


^ permalink raw reply	[flat|nested] 15+ messages in thread

* [dpdk-dev] [RFC V2 1/2] app/testpmd: fix queue stats mapping configuration
  2020-10-20  8:26 [dpdk-dev] [RFC V2 0/2] fix queue stats mapping Min Hu (Connor)
@ 2020-10-20  8:26 ` Min Hu (Connor)
  2020-10-30 20:54   ` Ferruh Yigit
  2020-10-20  8:26 ` [dpdk-dev] [RFC V2 2/2] app/testpmd: fix starting failed with queue-stats-mapping Min Hu (Connor)
  1 sibling, 1 reply; 15+ messages in thread
From: Min Hu (Connor) @ 2020-10-20  8:26 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, bruce.richardson, thomas.monjalon, lihuisong

From: Huisong Li <lihuisong@huawei.com>

Currently, the queue stats mapping has the following problems:
1) Many PMD drivers don't support queue stats mapping. But there is no
failure message after executing the command "set stat_qmap rx 0 2 2".
2) Once queue mapping is set, unrelated and unmapped queues are also
displayed.
3) There is no need to keep cache line alignment for
'struct queue_stats_mappings'.
4) The mapping arrays, 'tx_queue_stats_mappings_array' &
'rx_queue_stats_mappings_array' are global and their sizes are based on
fixed max port and queue size assumptions.
5) The configuration result does not take effect or can not be queried
in real time.

Therefore, we have made the following adjustments:
1) If the PMD supports queue stats mapping, configure it to the driver
in real time after executing the command "set stat_qmap rx/tx ...".
If not, the command is rejected.
2) Only display queues whose mapping is done, by adding a new 'active'
field in the queue_stats_mappings struct.
3) Remove cache alignment for 'struct queue_stats_mappings'.
4) Add a new port_stats_mappings struct in rte_port.
The struct contains the number of rx/tx queue stats mappings, the rx/tx
queue_stats_mapping_enabled flags, and the rx/tx queue_stats_mapping
arrays. The size of each queue_stats_mapping array is set to
"RTE_ETHDEV_QUEUE_STAT_CNTRS" to ensure that the same number of queues
can be set for each port.
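Adjustment 1) can be sketched as follows. This is a minimal, compilable
mock rather than the actual testpmd code: 'mock_set_rx_queue_stats_mapping()'
and 'pmd_supports_qmap' are hypothetical stand-ins for
rte_eth_dev_set_rx_queue_stats_mapping() and the PMD's capability.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical stand-in for rte_eth_dev_set_rx_queue_stats_mapping():
 * returns 0 on success, a negative errno when the PMD lacks support. */
static bool pmd_supports_qmap = true;

static int
mock_set_rx_queue_stats_mapping(uint16_t port_id, uint16_t queue_id,
				uint8_t stat_idx)
{
	(void)port_id; (void)queue_id; (void)stat_idx;
	return pmd_supports_qmap ? 0 : -95; /* -ENOTSUP */
}

/*
 * Sketch of adjustment 1): push the mapping to the driver immediately
 * when "set stat_qmap rx ..." is issued; reject the command on failure.
 * Only on success is the mapping marked 'active' for display.
 */
static int
set_rx_qmap_sketch(uint16_t port_id, uint16_t queue_id, uint8_t map_value,
		   bool *active)
{
	int ret;

	ret = mock_set_rx_queue_stats_mapping(port_id, queue_id, map_value);
	if (ret != 0) {
		printf("failed to set rx queue stats mapping.\n");
		return ret;	/* command not accepted */
	}
	*active = true;
	return 0;
}
```

With the mock driver claiming support, the command succeeds and the
queue becomes 'active'; flipping 'pmd_supports_qmap' makes the same
command fail and leaves the mapping untouched.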

Fixes: 4dccdc789bf4b ("app/testpmd: simplify handling of stats mappings error")
Fixes: 013af9b6b64f6 ("app/testpmd: various updates")
Fixes: ed30d9b691b21 ("app/testpmd: add stats per queue")

Signed-off-by: Huisong Li <lihuisong@huawei.com>
---
 app/test-pmd/config.c     | 180 +++++++++++++++++++++++++++++-----------------
 app/test-pmd/parameters.c |  63 ++++++++--------
 app/test-pmd/testpmd.c    | 180 ++++++++++++++++++++++++++++++++--------------
 app/test-pmd/testpmd.h    |  41 ++++++-----
 4 files changed, 292 insertions(+), 172 deletions(-)

diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index ba17f3b..325c42f 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -177,13 +177,13 @@ nic_stats_display(portid_t port_id)
 	static uint64_t prev_bytes_rx[RTE_MAX_ETHPORTS];
 	static uint64_t prev_bytes_tx[RTE_MAX_ETHPORTS];
 	static uint64_t prev_ns[RTE_MAX_ETHPORTS];
+	struct port_stats_mappings *p_stats_map;
 	struct timespec cur_time;
 	uint64_t diff_pkts_rx, diff_pkts_tx, diff_bytes_rx, diff_bytes_tx,
 								diff_ns;
 	uint64_t mpps_rx, mpps_tx, mbps_rx, mbps_tx;
 	struct rte_eth_stats stats;
-	struct rte_port *port = &ports[port_id];
-	uint8_t i;
+	struct rte_port *port;
 
 	static const char *nic_stats_border = "########################";
 
@@ -195,7 +195,10 @@ nic_stats_display(portid_t port_id)
 	printf("\n  %s NIC statistics for port %-2d %s\n",
 	       nic_stats_border, port_id, nic_stats_border);
 
-	if ((!port->rx_queue_stats_mapping_enabled) && (!port->tx_queue_stats_mapping_enabled)) {
+	port = &ports[port_id];
+	p_stats_map = &port->p_stats_map;
+	if ((!p_stats_map->rx_queue_stats_mapping_enabled) &&
+		(!p_stats_map->tx_queue_stats_mapping_enabled)) {
 		printf("  RX-packets: %-10"PRIu64" RX-missed: %-10"PRIu64" RX-bytes:  "
 		       "%-"PRIu64"\n",
 		       stats.ipackets, stats.imissed, stats.ibytes);
@@ -205,36 +208,20 @@ nic_stats_display(portid_t port_id)
 		printf("  TX-packets: %-10"PRIu64" TX-errors: %-10"PRIu64" TX-bytes:  "
 		       "%-"PRIu64"\n",
 		       stats.opackets, stats.oerrors, stats.obytes);
-	}
-	else {
-		printf("  RX-packets:              %10"PRIu64"    RX-errors: %10"PRIu64
-		       "    RX-bytes: %10"PRIu64"\n",
-		       stats.ipackets, stats.ierrors, stats.ibytes);
-		printf("  RX-errors:  %10"PRIu64"\n", stats.ierrors);
-		printf("  RX-nombuf:               %10"PRIu64"\n",
+	} else {
+		printf("  RX-packets:             %14"PRIu64"    RX-missed: "
+		       "%14"PRIu64"    RX-bytes: %14"PRIu64"\n",
+		       stats.ipackets, stats.imissed, stats.ibytes);
+		printf("  RX-errors:              %14"PRIu64"\n",
+		       stats.ierrors);
+		printf("  RX-nombuf:              %14"PRIu64"\n",
 		       stats.rx_nombuf);
-		printf("  TX-packets:              %10"PRIu64"    TX-errors: %10"PRIu64
-		       "    TX-bytes: %10"PRIu64"\n",
+		printf("  TX-packets:             %14"PRIu64"    TX-errors: "
+		       "%14"PRIu64"    TX-bytes: %14"PRIu64"\n",
 		       stats.opackets, stats.oerrors, stats.obytes);
 	}
 
-	if (port->rx_queue_stats_mapping_enabled) {
-		printf("\n");
-		for (i = 0; i < RTE_ETHDEV_QUEUE_STAT_CNTRS; i++) {
-			printf("  Stats reg %2d RX-packets: %10"PRIu64
-			       "    RX-errors: %10"PRIu64
-			       "    RX-bytes: %10"PRIu64"\n",
-			       i, stats.q_ipackets[i], stats.q_errors[i], stats.q_ibytes[i]);
-		}
-	}
-	if (port->tx_queue_stats_mapping_enabled) {
-		printf("\n");
-		for (i = 0; i < RTE_ETHDEV_QUEUE_STAT_CNTRS; i++) {
-			printf("  Stats reg %2d TX-packets: %10"PRIu64
-			       "                             TX-bytes: %10"PRIu64"\n",
-			       i, stats.q_opackets[i], stats.q_obytes[i]);
-		}
-	}
+	port_stats_mapping_display(port_id, &stats);
 
 	diff_ns = 0;
 	if (clock_gettime(CLOCK_TYPE_ID, &cur_time) == 0) {
@@ -400,7 +387,9 @@ nic_xstats_clear(portid_t port_id)
 void
 nic_stats_mapping_display(portid_t port_id)
 {
-	struct rte_port *port = &ports[port_id];
+	struct port_stats_mappings *p_stats_map;
+	struct queue_stats_mappings *q_stats_map;
+	struct rte_port *port;
 	uint16_t i;
 
 	static const char *nic_stats_mapping_border = "########################";
@@ -410,7 +399,10 @@ nic_stats_mapping_display(portid_t port_id)
 		return;
 	}
 
-	if ((!port->rx_queue_stats_mapping_enabled) && (!port->tx_queue_stats_mapping_enabled)) {
+	port = &ports[port_id];
+	p_stats_map = &port->p_stats_map;
+	if ((!p_stats_map->rx_queue_stats_mapping_enabled) &&
+		(!p_stats_map->tx_queue_stats_mapping_enabled)) {
 		printf("Port id %d - either does not support queue statistic mapping or"
 		       " no queue statistic mapping set\n", port_id);
 		return;
@@ -419,24 +411,26 @@ nic_stats_mapping_display(portid_t port_id)
 	printf("\n  %s NIC statistics mapping for port %-2d %s\n",
 	       nic_stats_mapping_border, port_id, nic_stats_mapping_border);
 
-	if (port->rx_queue_stats_mapping_enabled) {
-		for (i = 0; i < nb_rx_queue_stats_mappings; i++) {
-			if (rx_queue_stats_mappings[i].port_id == port_id) {
+	if (p_stats_map->rx_queue_stats_mapping_enabled) {
+		for (i = 0; i < p_stats_map->nb_rxq_stats_mappings; i++) {
+			q_stats_map = &p_stats_map->rxq_map_array[i];
+			if (q_stats_map->active) {
 				printf("  RX-queue %2d mapped to Stats Reg %2d\n",
-				       rx_queue_stats_mappings[i].queue_id,
-				       rx_queue_stats_mappings[i].stats_counter_id);
+				       q_stats_map->queue_id,
+				       q_stats_map->stats_counter_id);
 			}
 		}
 		printf("\n");
 	}
 
 
-	if (port->tx_queue_stats_mapping_enabled) {
-		for (i = 0; i < nb_tx_queue_stats_mappings; i++) {
-			if (tx_queue_stats_mappings[i].port_id == port_id) {
+	if (p_stats_map->tx_queue_stats_mapping_enabled) {
+		for (i = 0; i < p_stats_map->nb_txq_stats_mappings; i++) {
+			q_stats_map = &p_stats_map->txq_map_array[i];
+			if (q_stats_map->active) {
 				printf("  TX-queue %2d mapped to Stats Reg %2d\n",
-				       tx_queue_stats_mappings[i].queue_id,
-				       tx_queue_stats_mappings[i].stats_counter_id);
+				       q_stats_map->queue_id,
+				       q_stats_map->stats_counter_id);
 			}
 		}
 	}
@@ -4546,8 +4540,13 @@ tx_vlan_pvid_set(portid_t port_id, uint16_t vlan_id, int on)
 void
 set_qmap(portid_t port_id, uint8_t is_rx, uint16_t queue_id, uint8_t map_value)
 {
+	struct port_stats_mappings *p_stats_map;
+	struct queue_stats_mappings *q_stats_map;
+	bool existing_mapping_found = false;
+	struct rte_port *port;
+	uint16_t cur_map_idx;
 	uint16_t i;
-	uint8_t existing_mapping_found = 0;
+	int ret;
 
 	if (port_id_is_invalid(port_id, ENABLED_WARN))
 		return;
@@ -4561,37 +4560,88 @@ set_qmap(portid_t port_id, uint8_t is_rx, uint16_t queue_id, uint8_t map_value)
 		return;
 	}
 
-	if (!is_rx) { /*then tx*/
-		for (i = 0; i < nb_tx_queue_stats_mappings; i++) {
-			if ((tx_queue_stats_mappings[i].port_id == port_id) &&
-			    (tx_queue_stats_mappings[i].queue_id == queue_id)) {
-				tx_queue_stats_mappings[i].stats_counter_id = map_value;
-				existing_mapping_found = 1;
+	port = &ports[port_id];
+	p_stats_map = &port->p_stats_map;
+	if (!is_rx) { /* tx */
+		for (i = 0; i < p_stats_map->nb_txq_stats_mappings; i++) {
+			q_stats_map = &p_stats_map->txq_map_array[i];
+			if (q_stats_map->queue_id == queue_id) {
+				ret =
+				rte_eth_dev_set_tx_queue_stats_mapping(port_id,
+							queue_id, map_value);
+				if (ret) {
+					printf("failed to set tx queue stats "
+						"mapping.\n");
+					return;
+				}
+
+				q_stats_map->stats_counter_id = map_value;
+				q_stats_map->active = true;
+				existing_mapping_found = true;
 				break;
 			}
 		}
-		if (!existing_mapping_found) { /* A new additional mapping... */
-			tx_queue_stats_mappings[nb_tx_queue_stats_mappings].port_id = port_id;
-			tx_queue_stats_mappings[nb_tx_queue_stats_mappings].queue_id = queue_id;
-			tx_queue_stats_mappings[nb_tx_queue_stats_mappings].stats_counter_id = map_value;
-			nb_tx_queue_stats_mappings++;
+
+		/* A new additional mapping... */
+		if (!existing_mapping_found) {
+			ret = rte_eth_dev_set_tx_queue_stats_mapping(port_id,
+								     queue_id,
+								     map_value);
+			if (ret) {
+				printf("failed to set tx queue stats "
+					"mapping.\n");
+				return;
+			}
+
+			cur_map_idx = p_stats_map->nb_txq_stats_mappings;
+			q_stats_map = &p_stats_map->txq_map_array[cur_map_idx];
+			q_stats_map->queue_id = queue_id;
+			q_stats_map->stats_counter_id = map_value;
+			q_stats_map->active = true;
+			p_stats_map->nb_txq_stats_mappings++;
 		}
-	}
-	else { /*rx*/
-		for (i = 0; i < nb_rx_queue_stats_mappings; i++) {
-			if ((rx_queue_stats_mappings[i].port_id == port_id) &&
-			    (rx_queue_stats_mappings[i].queue_id == queue_id)) {
-				rx_queue_stats_mappings[i].stats_counter_id = map_value;
-				existing_mapping_found = 1;
+
+		p_stats_map->tx_queue_stats_mapping_enabled = true;
+	} else { /* rx */
+		for (i = 0; i < p_stats_map->nb_rxq_stats_mappings; i++) {
+			q_stats_map = &p_stats_map->rxq_map_array[i];
+			if (q_stats_map->queue_id == queue_id) {
+				ret =
+				rte_eth_dev_set_rx_queue_stats_mapping(port_id,
+							queue_id, map_value);
+				if (ret) {
+					printf("failed to set rx queue stats "
+						"mapping.\n");
+					return;
+				}
+
+				q_stats_map->stats_counter_id = map_value;
+				q_stats_map->active = true;
+				existing_mapping_found = true;
 				break;
 			}
 		}
-		if (!existing_mapping_found) { /* A new additional mapping... */
-			rx_queue_stats_mappings[nb_rx_queue_stats_mappings].port_id = port_id;
-			rx_queue_stats_mappings[nb_rx_queue_stats_mappings].queue_id = queue_id;
-			rx_queue_stats_mappings[nb_rx_queue_stats_mappings].stats_counter_id = map_value;
-			nb_rx_queue_stats_mappings++;
+
+		/* A new additional mapping... */
+		if (!existing_mapping_found) {
+			ret = rte_eth_dev_set_rx_queue_stats_mapping(port_id,
+								     queue_id,
+								     map_value);
+			if (ret) {
+				printf("failed to set rx queue stats "
+					"mapping.\n");
+				return;
+			}
+
+			cur_map_idx = p_stats_map->nb_rxq_stats_mappings;
+			q_stats_map = &p_stats_map->rxq_map_array[cur_map_idx];
+			q_stats_map->queue_id = queue_id;
+			q_stats_map->stats_counter_id = map_value;
+			q_stats_map->active = true;
+			p_stats_map->nb_rxq_stats_mappings++;
 		}
+
+		p_stats_map->rx_queue_stats_mapping_enabled = true;
 	}
 }
 
diff --git a/app/test-pmd/parameters.c b/app/test-pmd/parameters.c
index 5ae0cb6..ee2501b 100644
--- a/app/test-pmd/parameters.c
+++ b/app/test-pmd/parameters.c
@@ -300,7 +300,6 @@ parse_fwd_portmask(const char *portmask)
 		set_fwd_ports_mask((uint64_t) pm);
 }
 
-
 static int
 parse_queue_stats_mapping_config(const char *q_arg, int is_rx)
 {
@@ -315,11 +314,13 @@ parse_queue_stats_mapping_config(const char *q_arg, int is_rx)
 	};
 	unsigned long int_fld[_NUM_FLD];
 	char *str_fld[_NUM_FLD];
+	struct rte_port *port;
 	int i;
 	unsigned size;
-
-	/* reset from value set at definition */
-	is_rx ? (nb_rx_queue_stats_mappings = 0) : (nb_tx_queue_stats_mappings = 0);
+	int port_id;
+	struct queue_stats_mappings *q_stats_map;
+	struct port_stats_mappings *p_stats_map;
+	uint16_t q_map_idx;
 
 	while ((p = strchr(p0,'(')) != NULL) {
 		++p;
@@ -346,44 +347,40 @@ parse_queue_stats_mapping_config(const char *q_arg, int is_rx)
 			return -1;
 		}
 
+		port_id = (uint8_t)int_fld[FLD_PORT];
+		port = &ports[port_id];
+		p_stats_map = &port->p_stats_map;
 		if (!is_rx) {
-			if ((nb_tx_queue_stats_mappings >=
-						MAX_TX_QUEUE_STATS_MAPPINGS)) {
+			q_map_idx = p_stats_map->nb_txq_stats_mappings;
+			if (q_map_idx >= RTE_ETHDEV_QUEUE_STAT_CNTRS) {
 				printf("exceeded max number of TX queue "
-						"statistics mappings: %hu\n",
-						nb_tx_queue_stats_mappings);
+					"statistics mappings: %hu\n",
+					p_stats_map->nb_txq_stats_mappings);
 				return -1;
 			}
-			tx_queue_stats_mappings_array[nb_tx_queue_stats_mappings].port_id =
-				(uint8_t)int_fld[FLD_PORT];
-			tx_queue_stats_mappings_array[nb_tx_queue_stats_mappings].queue_id =
-				(uint8_t)int_fld[FLD_QUEUE];
-			tx_queue_stats_mappings_array[nb_tx_queue_stats_mappings].stats_counter_id =
-				(uint8_t)int_fld[FLD_STATS_COUNTER];
-			++nb_tx_queue_stats_mappings;
-		}
-		else {
-			if ((nb_rx_queue_stats_mappings >=
-						MAX_RX_QUEUE_STATS_MAPPINGS)) {
+			q_stats_map =
+				&p_stats_map->txq_map_array[q_map_idx];
+			q_stats_map->queue_id = int_fld[FLD_QUEUE];
+			q_stats_map->stats_counter_id =
+						int_fld[FLD_STATS_COUNTER];
+			++p_stats_map->nb_txq_stats_mappings;
+		} else {
+			q_map_idx = p_stats_map->nb_rxq_stats_mappings;
+			if (q_map_idx >= RTE_ETHDEV_QUEUE_STAT_CNTRS) {
 				printf("exceeded max number of RX queue "
-						"statistics mappings: %hu\n",
-						nb_rx_queue_stats_mappings);
+					"statistics mappings: %hu\n",
+					p_stats_map->nb_rxq_stats_mappings);
 				return -1;
 			}
-			rx_queue_stats_mappings_array[nb_rx_queue_stats_mappings].port_id =
-				(uint8_t)int_fld[FLD_PORT];
-			rx_queue_stats_mappings_array[nb_rx_queue_stats_mappings].queue_id =
-				(uint8_t)int_fld[FLD_QUEUE];
-			rx_queue_stats_mappings_array[nb_rx_queue_stats_mappings].stats_counter_id =
-				(uint8_t)int_fld[FLD_STATS_COUNTER];
-			++nb_rx_queue_stats_mappings;
+			q_stats_map =
+				&p_stats_map->rxq_map_array[q_map_idx];
+			q_stats_map->queue_id = int_fld[FLD_QUEUE];
+			q_stats_map->stats_counter_id =
+						int_fld[FLD_STATS_COUNTER];
+			++p_stats_map->nb_rxq_stats_mappings;
 		}
-
 	}
-/* Reassign the rx/tx_queue_stats_mappings pointer to point to this newly populated array rather */
-/* than to the default array (that was set at its definition) */
-	is_rx ? (rx_queue_stats_mappings = rx_queue_stats_mappings_array) :
-		(tx_queue_stats_mappings = tx_queue_stats_mappings_array);
+
 	return 0;
 }
 
diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index 94e3688..86e3271 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -476,15 +476,6 @@ struct rte_fdir_conf fdir_conf = {
 
 volatile int test_done = 1; /* stop packet forwarding when set to 1. */
 
-struct queue_stats_mappings tx_queue_stats_mappings_array[MAX_TX_QUEUE_STATS_MAPPINGS];
-struct queue_stats_mappings rx_queue_stats_mappings_array[MAX_RX_QUEUE_STATS_MAPPINGS];
-
-struct queue_stats_mappings *tx_queue_stats_mappings = tx_queue_stats_mappings_array;
-struct queue_stats_mappings *rx_queue_stats_mappings = rx_queue_stats_mappings_array;
-
-uint16_t nb_tx_queue_stats_mappings = 0;
-uint16_t nb_rx_queue_stats_mappings = 0;
-
 /*
  * Display zero values by default for xstats
  */
@@ -1809,10 +1800,84 @@ fwd_stream_stats_display(streamid_t stream_id)
 }
 
 void
+port_stats_mapping_display(portid_t pt_id, struct rte_eth_stats *stats)
+{
+	struct port_stats_mappings *p_stats_map;
+	struct queue_stats_mappings *q_stats_map;
+	bool txq_stats_map_found = false;
+	bool rxq_stats_map_found = false;
+	uint16_t nb_txq_stats_map;
+	uint16_t nb_rxq_stats_map;
+	struct rte_port *port;
+	uint16_t i, j;
+
+	if (stats == NULL) {
+		printf("input stats address is null pointer.\n");
+		return;
+	}
+
+	if (port_id_is_invalid(pt_id, ENABLED_WARN)) {
+		print_valid_ports();
+		return;
+	}
+
+	port = &ports[pt_id];
+	p_stats_map = &port->p_stats_map;
+	if (p_stats_map->rx_queue_stats_mapping_enabled) {
+		printf("\n");
+		nb_rxq_stats_map = p_stats_map->nb_rxq_stats_mappings;
+		for (j = 0; j < RTE_ETHDEV_QUEUE_STAT_CNTRS; j++) {
+			for (i = 0; i < nb_rxq_stats_map; i++) {
+				q_stats_map = &p_stats_map->rxq_map_array[i];
+				if (q_stats_map->stats_counter_id == j &&
+					q_stats_map->active) {
+					rxq_stats_map_found = true;
+					break;
+				}
+			}
+
+			if (rxq_stats_map_found) {
+				printf("  Stats reg %2d RX-packets:%14"PRIu64
+				       "    RX-errors: %14"PRIu64
+				       "    RX-bytes:%14"PRIu64"\n",
+				       j, stats->q_ipackets[j],
+				       stats->q_errors[j],
+				       stats->q_ibytes[j]);
+				rxq_stats_map_found = false;
+			}
+		}
+	}
+	if (p_stats_map->tx_queue_stats_mapping_enabled) {
+		printf("\n");
+		nb_txq_stats_map = p_stats_map->nb_txq_stats_mappings;
+		for (j = 0; j < RTE_ETHDEV_QUEUE_STAT_CNTRS; j++) {
+			for (i = 0; i < nb_txq_stats_map; i++) {
+				q_stats_map = &p_stats_map->txq_map_array[i];
+				if (q_stats_map->stats_counter_id == j &&
+					q_stats_map->active) {
+					txq_stats_map_found = true;
+					break;
+				}
+			}
+
+			if (txq_stats_map_found) {
+				printf("  Stats reg %2d TX-packets:%14"PRIu64
+				       "				 TX-bytes:%14"
+				       PRIu64"\n",
+				       j, stats->q_opackets[j],
+				       stats->q_obytes[j]);
+				txq_stats_map_found = false;
+			}
+		}
+	}
+}
+
+void
 fwd_stats_display(void)
 {
 	static const char *fwd_stats_border = "----------------------";
 	static const char *acc_stats_border = "+++++++++++++++";
+	struct port_stats_mappings *p_stats_map;
 	struct {
 		struct fwd_stream *rx_stream;
 		struct fwd_stream *tx_stream;
@@ -1857,8 +1922,6 @@ fwd_stats_display(void)
 			fwd_cycles += fs->core_cycles;
 	}
 	for (i = 0; i < cur_fwd_config.nb_fwd_ports; i++) {
-		uint8_t j;
-
 		pt_id = fwd_ports_ids[i];
 		port = &ports[pt_id];
 
@@ -1881,8 +1944,9 @@ fwd_stats_display(void)
 		printf("\n  %s Forward statistics for port %-2d %s\n",
 		       fwd_stats_border, pt_id, fwd_stats_border);
 
-		if (!port->rx_queue_stats_mapping_enabled &&
-		    !port->tx_queue_stats_mapping_enabled) {
+		p_stats_map = &port->p_stats_map;
+		if (!p_stats_map->rx_queue_stats_mapping_enabled &&
+		    !p_stats_map->tx_queue_stats_mapping_enabled) {
 			printf("  RX-packets: %-14"PRIu64
 			       " RX-dropped: %-14"PRIu64
 			       "RX-total: %-"PRIu64"\n",
@@ -1944,26 +2008,7 @@ fwd_stats_display(void)
 					&ports_stats[pt_id].tx_stream->tx_burst_stats);
 		}
 
-		if (port->rx_queue_stats_mapping_enabled) {
-			printf("\n");
-			for (j = 0; j < RTE_ETHDEV_QUEUE_STAT_CNTRS; j++) {
-				printf("  Stats reg %2d RX-packets:%14"PRIu64
-				       "     RX-errors:%14"PRIu64
-				       "    RX-bytes:%14"PRIu64"\n",
-				       j, stats.q_ipackets[j],
-				       stats.q_errors[j], stats.q_ibytes[j]);
-			}
-			printf("\n");
-		}
-		if (port->tx_queue_stats_mapping_enabled) {
-			for (j = 0; j < RTE_ETHDEV_QUEUE_STAT_CNTRS; j++) {
-				printf("  Stats reg %2d TX-packets:%14"PRIu64
-				       "                                 TX-bytes:%14"
-				       PRIu64"\n",
-				       j, stats.q_opackets[j],
-				       stats.q_obytes[j]);
-			}
-		}
+		port_stats_mapping_display(pt_id, &stats);
 
 		printf("  %s--------------------------------%s\n",
 		       fwd_stats_border, fwd_stats_border);
@@ -3355,59 +3400,84 @@ dev_event_callback(const char *device_name, enum rte_dev_event_type type,
 static int
 set_tx_queue_stats_mapping_registers(portid_t port_id, struct rte_port *port)
 {
+	struct port_stats_mappings *p_stats_map;
+	struct queue_stats_mappings *q_stats_map;
+	bool mapping_found = false;
 	uint16_t i;
 	int diag;
-	uint8_t mapping_found = 0;
 
-	for (i = 0; i < nb_tx_queue_stats_mappings; i++) {
-		if ((tx_queue_stats_mappings[i].port_id == port_id) &&
-				(tx_queue_stats_mappings[i].queue_id < nb_txq )) {
+	p_stats_map = &port->p_stats_map;
+	for (i = 0; i < p_stats_map->nb_txq_stats_mappings; i++) {
+		q_stats_map = &p_stats_map->txq_map_array[i];
+		if (q_stats_map->active) {
+			mapping_found = true;
+			continue;
+		}
+
+		if (q_stats_map->queue_id < nb_txq) {
 			diag = rte_eth_dev_set_tx_queue_stats_mapping(port_id,
-					tx_queue_stats_mappings[i].queue_id,
-					tx_queue_stats_mappings[i].stats_counter_id);
+					q_stats_map->queue_id,
+					q_stats_map->stats_counter_id);
 			if (diag != 0)
 				return diag;
-			mapping_found = 1;
+			q_stats_map->active = true;
+			mapping_found = true;
 		}
 	}
 	if (mapping_found)
-		port->tx_queue_stats_mapping_enabled = 1;
+		p_stats_map->tx_queue_stats_mapping_enabled = true;
+
 	return 0;
 }
 
 static int
 set_rx_queue_stats_mapping_registers(portid_t port_id, struct rte_port *port)
 {
+	struct port_stats_mappings *p_stats_map;
+	struct queue_stats_mappings *q_stats_map;
+	bool mapping_found = false;
 	uint16_t i;
 	int diag;
-	uint8_t mapping_found = 0;
 
-	for (i = 0; i < nb_rx_queue_stats_mappings; i++) {
-		if ((rx_queue_stats_mappings[i].port_id == port_id) &&
-				(rx_queue_stats_mappings[i].queue_id < nb_rxq )) {
+	p_stats_map = &port->p_stats_map;
+	for (i = 0; i < p_stats_map->nb_rxq_stats_mappings; i++) {
+		q_stats_map = &p_stats_map->rxq_map_array[i];
+		if (q_stats_map->active) {
+			mapping_found = true;
+			continue;
+		}
+
+		if (q_stats_map->queue_id < nb_rxq) {
 			diag = rte_eth_dev_set_rx_queue_stats_mapping(port_id,
-					rx_queue_stats_mappings[i].queue_id,
-					rx_queue_stats_mappings[i].stats_counter_id);
+					q_stats_map->queue_id,
+					q_stats_map->stats_counter_id);
 			if (diag != 0)
 				return diag;
-			mapping_found = 1;
+			q_stats_map->active = true;
+			mapping_found = true;
 		}
 	}
 	if (mapping_found)
-		port->rx_queue_stats_mapping_enabled = 1;
+		p_stats_map->rx_queue_stats_mapping_enabled = true;
+
 	return 0;
 }
 
 static void
 map_port_queue_stats_mapping_registers(portid_t pi, struct rte_port *port)
 {
+	struct port_stats_mappings *p_stats_map = &port->p_stats_map;
 	int diag = 0;
 
 	diag = set_tx_queue_stats_mapping_registers(pi, port);
 	if (diag != 0) {
 		if (diag == -ENOTSUP) {
-			port->tx_queue_stats_mapping_enabled = 0;
-			printf("TX queue stats mapping not supported port id=%d\n", pi);
+			memset(p_stats_map->txq_map_array, 0,
+				sizeof(p_stats_map->txq_map_array));
+			p_stats_map->nb_txq_stats_mappings = 0;
+			p_stats_map->tx_queue_stats_mapping_enabled = false;
+			printf("TX queue stats mapping not supported "
+				"port id=%d\n", pi);
 		}
 		else
 			rte_exit(EXIT_FAILURE,
@@ -3419,8 +3489,12 @@ map_port_queue_stats_mapping_registers(portid_t pi, struct rte_port *port)
 	diag = set_rx_queue_stats_mapping_registers(pi, port);
 	if (diag != 0) {
 		if (diag == -ENOTSUP) {
-			port->rx_queue_stats_mapping_enabled = 0;
-			printf("RX queue stats mapping not supported port id=%d\n", pi);
+			memset(p_stats_map->rxq_map_array, 0,
+				sizeof(p_stats_map->rxq_map_array));
+			p_stats_map->nb_rxq_stats_mappings = 0;
+			p_stats_map->rx_queue_stats_mapping_enabled = false;
+			printf("RX queue stats mapping not supported "
+				"port id=%d\n", pi);
 		}
 		else
 			rte_exit(EXIT_FAILURE,
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index 833ca14..0397d6d 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -181,6 +181,24 @@ struct tunnel_ops {
 	uint32_t items:1;
 };
 
+struct queue_stats_mappings {
+	uint16_t queue_id;
+	uint8_t stats_counter_id;
+	bool active;
+};
+
+/**
+ * The data of queue stats mapping on this port.
+ */
+struct port_stats_mappings {
+	struct queue_stats_mappings rxq_map_array[RTE_ETHDEV_QUEUE_STAT_CNTRS];
+	struct queue_stats_mappings txq_map_array[RTE_ETHDEV_QUEUE_STAT_CNTRS];
+	uint16_t nb_rxq_stats_mappings;
+	uint16_t nb_txq_stats_mappings;
+	bool rx_queue_stats_mapping_enabled;
+	bool tx_queue_stats_mapping_enabled;
+};
+
 /**
  * The data structure associated with each port.
  */
@@ -195,8 +213,7 @@ struct rte_port {
 	uint16_t                tunnel_tso_segsz; /**< Segmentation offload MSS for tunneled pkts. */
 	uint16_t                tx_vlan_id;/**< The tag ID */
 	uint16_t                tx_vlan_id_outer;/**< The outer tag ID */
-	uint8_t                 tx_queue_stats_mapping_enabled;
-	uint8_t                 rx_queue_stats_mapping_enabled;
+	struct port_stats_mappings p_stats_map;
 	volatile uint16_t        port_status;    /**< port started or not */
 	uint8_t                 need_setup;     /**< port just attached */
 	uint8_t                 need_reconfig;  /**< need reconfiguring port or not */
@@ -315,25 +332,6 @@ enum dcb_mode_enable
 	DCB_ENABLED
 };
 
-#define MAX_TX_QUEUE_STATS_MAPPINGS 1024 /* MAX_PORT of 32 @ 32 tx_queues/port */
-#define MAX_RX_QUEUE_STATS_MAPPINGS 4096 /* MAX_PORT of 32 @ 128 rx_queues/port */
-
-struct queue_stats_mappings {
-	portid_t port_id;
-	uint16_t queue_id;
-	uint8_t stats_counter_id;
-} __rte_cache_aligned;
-
-extern struct queue_stats_mappings tx_queue_stats_mappings_array[];
-extern struct queue_stats_mappings rx_queue_stats_mappings_array[];
-
-/* Assign both tx and rx queue stats mappings to the same default values */
-extern struct queue_stats_mappings *tx_queue_stats_mappings;
-extern struct queue_stats_mappings *rx_queue_stats_mappings;
-
-extern uint16_t nb_tx_queue_stats_mappings;
-extern uint16_t nb_rx_queue_stats_mappings;
-
 extern uint8_t xstats_hide_zero; /**< Hide zero values for xstats display */
 
 /* globals used for configuration */
@@ -780,6 +778,7 @@ void nic_stats_clear(portid_t port_id);
 void nic_xstats_display(portid_t port_id);
 void nic_xstats_clear(portid_t port_id);
 void nic_stats_mapping_display(portid_t port_id);
+void port_stats_mapping_display(portid_t pt_id, struct rte_eth_stats *stats);
 void device_infos_display(const char *identifier);
 void port_infos_display(portid_t port_id);
 void port_summary_display(portid_t port_id);
-- 
2.7.4



* [dpdk-dev] [RFC V2 2/2] app/testpmd: fix starting failed with queue-stats-mapping
  2020-10-20  8:26 [dpdk-dev] [RFC V2 0/2] fix queue stats mapping Min Hu (Connor)
  2020-10-20  8:26 ` [dpdk-dev] [RFC V2 1/2] app/testpmd: fix queue stats mapping configuration Min Hu (Connor)
@ 2020-10-20  8:26 ` Min Hu (Connor)
  1 sibling, 0 replies; 15+ messages in thread
From: Min Hu (Connor) @ 2020-10-20  8:26 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, bruce.richardson, thomas.monjalon, lihuisong

From: Huisong Li <lihuisong@huawei.com>

testpmd fails to start with "--rx-queue-stats-mapping" and
"--tx-queue-stats-mapping", which is caused by the failure and exit of
the 'map_port_queue_stats_mapping_registers' function. By default, the
queue statistics mapping configuration during initialization is done in
the 'init_port_config' function. However, at that point the
dev_configure interface has not yet been called to configure the NIC,
and dev->data->nb_rx/tx_queues is zero. As a result, the
'rte_eth_dev_set_tx/rx_queue_stats_mapping' function fails to validate
the queue_id. Therefore, it is necessary to move
'map_port_queue_stats_mapping_registers' from 'init_port_config'
to the 'start_port' function.
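The root cause -- validating queue_id against a queue count that is
still zero before dev_configure runs -- can be reproduced with a small
self-contained mock. The 'mock_*' names below are illustrative, not the
real ethdev structures:

```c
#include <stdint.h>

/* Illustrative mock of the ethdev-side check that makes
 * rte_eth_dev_set_rx_queue_stats_mapping() fail here: queue_id must be
 * below the configured queue count, which stays 0 until dev_configure
 * has run. */
struct mock_dev_data {
	uint16_t nb_rx_queues;	/* 0 before dev_configure */
};

static int
mock_set_rx_queue_stats_mapping(const struct mock_dev_data *data,
				uint16_t queue_id)
{
	if (queue_id >= data->nb_rx_queues)
		return -22;	/* -EINVAL: queue_id validation fails */
	return 0;		/* mapping accepted */
}
```

Calling the mapping mock with nb_rx_queues still 0 (as in
'init_port_config') fails even for queue 0, while the same call after
the queue count is set (as in 'start_port') succeeds.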

Fixes: 013af9b6b64f6e7 ("app/testpmd: various updates")

Signed-off-by: Huisong Li <lihuisong@huawei.com>
---
 app/test-pmd/testpmd.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index 86e3271..e4fe2a6 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -2742,6 +2742,7 @@ start_port(portid_t pid)
 		need_check_link_status = 1;
 
 		pl[cfg_pi++] = pi;
+		map_port_queue_stats_mapping_registers(pi, port);
 	}
 
 	if (need_check_link_status == 1 && !no_link_check)
@@ -3600,7 +3601,6 @@ init_port_config(void)
 		if (ret != 0)
 			return;
 
-		map_port_queue_stats_mapping_registers(pid, port);
 #if defined RTE_LIBRTE_IXGBE_PMD && defined RTE_LIBRTE_IXGBE_BYPASS
 		rte_pmd_ixgbe_bypass_init(pid);
 #endif
-- 
2.7.4



* Re: [dpdk-dev] [RFC V2 1/2] app/testpmd: fix queue stats mapping configuration
  2020-10-20  8:26 ` [dpdk-dev] [RFC V2 1/2] app/testpmd: fix queue stats mapping configuration Min Hu (Connor)
@ 2020-10-30 20:54   ` Ferruh Yigit
  2020-11-03  6:30     ` Min Hu (Connor)
  0 siblings, 1 reply; 15+ messages in thread
From: Ferruh Yigit @ 2020-10-30 20:54 UTC (permalink / raw)
  To: Min Hu (Connor), dev; +Cc: bruce.richardson, thomas.monjalon, lihuisong

On 10/20/2020 9:26 AM, Min Hu (Connor) wrote:
> From: Huisong Li <lihuisong@huawei.com>
> 
> Currently, the queue stats mapping has the following problems:
> 1) Many PMD drivers don't support queue stats mapping. But there is no
> failure message after executing the command "set stat_qmap rx 0 2 2".
> 2) Once queue mapping is set, unrelated and unmapped queues are also
> displayed.
> 3) There is no need to keep cache line alignment for
> 'struct queue_stats_mappings'.
> 4) The mapping arrays, 'tx_queue_stats_mappings_array' &
> 'rx_queue_stats_mappings_array' are global and their sizes are based on
> fixed max port and queue size assumptions.
> 5) The configuration result does not take effect or can not be queried
> in real time.
> 
> Therefore, we have made the following adjustments:
> 1) If the PMD supports queue stats mapping, configure it to the driver
> in real time after executing the command "set stat_qmap rx/tx ...".
> If not, the command is rejected.
> 2) Only display queues whose mapping is done, by adding a new 'active'
> field in the queue_stats_mappings struct.
> 3) Remove cache alignment for 'struct queue_stats_mappings'.
> 4) Add a new port_stats_mappings struct in rte_port.
> The struct contains the number of rx/tx queue stats mappings, the rx/tx
> queue_stats_mapping_enabled flags, and the rx/tx queue_stats_mapping
> arrays. The size of each queue_stats_mapping array is set to
> "RTE_ETHDEV_QUEUE_STAT_CNTRS" to ensure that the same number of queues
> can be set for each port.
> 

Hi Connor,

I think the above adjustments are good, but after the decision to use xstats for
the queue stats, what do you think about some further simplification?

1)
What testpmd currently does is record the queue stats mapping commands and
register them later on port start & forwarding start.
What happens if recording and registering are completely removed?
When "set stat_qmap .." is issued, it just calls the ethdev APIs to do the
mapping in the device.
This lets us remove the record structures ("struct port_stats_mappings p_stats_map"),
and also 'map_port_queue_stats_mapping_registers()' and its sub-functions.

2)
Let's also remove the "tx-queue-stats-mapping" & "rx-queue-stats-mapping"
parameters, which enables removing the 'parse_queue_stats_mapping_config()'
function too.

3)
Another problem is displaying the queue stats: in 'fwd_stats_display()' &
'nic_stats_display()' there is a check for whether queue stats mapping is
enabled ('rx_queue_stats_mapping_enabled' & 'tx_queue_stats_mapping_enabled').
I think displaying queue stats and queue stats mapping should be separate, so
why not drop the checks for queue stats mapping and display queue stats for
'nb_rxq' & 'nb_txq' queues?

Does above make sense?


The majority of drivers don't require queue stats mapping to get the queue
stats; let's not pollute the main usage with this requirement.


> Fixes: 4dccdc789bf4b ("app/testpmd: simplify handling of stats mappings error")
> Fixes: 013af9b6b64f6 ("app/testpmd: various updates")
> Fixes: ed30d9b691b21 ("app/testpmd: add stats per queue")
> 
> Signed-off-by: Huisong Li <lihuisong@huawei.com>

<...>



* Re: [dpdk-dev] [RFC V2 1/2] app/testpmd: fix queue stats mapping configuration
  2020-10-30 20:54   ` Ferruh Yigit
@ 2020-11-03  6:30     ` Min Hu (Connor)
  2020-11-12  2:28       ` Min Hu (Connor)
  0 siblings, 1 reply; 15+ messages in thread
From: Min Hu (Connor) @ 2020-11-03  6:30 UTC (permalink / raw)
  To: Ferruh Yigit, dev; +Cc: bruce.richardson, thomas.monjalon, lihuisong

Hi Ferruh,

I agree with your proposal. But if we remove the record structures, we will
not be able to query the current queue stats mapping configuration.
Alternatively, we could provide a query API for PMD drivers that implement
the set_queue_stats_mapping API, with the driver recording this mapping
information from the user.

What do you think?
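
The driver-side record behind such a query API could look roughly like the
sketch below. Everything here is hypothetical (no such ethdev query API exists
at this point in the thread); the structure and function names are illustrative
only:

```c
#include <stdint.h>
#include <string.h>

#define MAX_QUEUES 16

/* Hypothetical per-port record a driver could keep when it accepts a
 * set_queue_stats_mapping call; -1 means "queue not mapped". */
struct qmap_record {
	int8_t rx_counter[MAX_QUEUES];
	int8_t tx_counter[MAX_QUEUES];
};

static void qmap_record_init(struct qmap_record *r)
{
	memset(r->rx_counter, -1, sizeof(r->rx_counter));
	memset(r->tx_counter, -1, sizeof(r->tx_counter));
}

/* The driver stores the mapping when the set call succeeds... */
static void qmap_record_set(struct qmap_record *r, int is_rx,
			    uint16_t q, int8_t ctr)
{
	(is_rx ? r->rx_counter : r->tx_counter)[q] = ctr;
}

/* ...and a query API could read it back for the application. */
static int8_t qmap_record_get(const struct qmap_record *r, int is_rx,
			      uint16_t q)
{
	return (is_rx ? r->rx_counter : r->tx_counter)[q];
}
```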


On 2020/10/31 4:54, Ferruh Yigit wrote:
> On 10/20/2020 9:26 AM, Min Hu (Connor) wrote:
>> From: Huisong Li <lihuisong@huawei.com>
>>
>> Currently, the queue stats mapping has the following problems:
>> 1) Many PMD drivers don't support queue stats mapping. But there is no
>> failure message after executing the command "set stat_qmap rx 0 2 2".
>> 2) Once queue mapping is set, unrelated and unmapped queues are also
>> displayed.
>> 3) There is no need to keep cache line alignment for
>> 'struct queue_stats_mappings'.
>> 4) The mapping arrays, 'tx_queue_stats_mappings_array' &
>> 'rx_queue_stats_mappings_array' are global and their sizes are based on
>> fixed max port and queue size assumptions.
>> 5) The configuration result does not take effect or can not be queried
>> in real time.
>>
>> Therefore, we have made the following adjustments:
>> 1) If PMD supports queue stats mapping, configure to driver in real time
>> after executing the command "set stat_qmap rx/tx ...". If not,
>> the command can not be accepted.
>> 2) Only display queues that mapping done by adding a new 'active' field
>>   in queue_stats_mappings struct.
>> 3) Remove cache alignment for 'struct queue_stats_mappings'.
>> 4) Add a new port_stats_mappings struct in rte_port.
>> The struct contains number of rx/txq stats mapping, rx/tx
>> queue_stats_mapping_enabled flag, and rx/tx queue_stats_mapping array.
>> Size of queue_stats_mapping_array is set to "RTE_ETHDEV_QUEUE_STAT_CNTRS"
>>   to ensure that the same number of queues can be set for each port.
>>
> 
> Hi Connor,
> 
> I think above adjustment are good, but after the decision to use xstats 
> for the queue stats, what do you think about more simplification,
> 
> 1)
> What testpmd does is, records the queue stats mapping commands and 
> registers them later on port start & forwarding start.
> What happens if recording and registering completely removed?
> When "set stat_qmap .." issued, it just call the ethdev APIs to do the 
> mapping in device.
> This lets us removing record structures, "struct port_stats_mappings 
> p_stats_map"
> Also can remove 'map_port_queue_stats_mapping_registers()' and its sub 
> functions.
> 
> 2)
> Also lets remove "tx-queue-stats-mapping" & "rx-queue-stats-mapping" 
> parameters, which enables removing 'parse_queue_stats_mapping_config()' 
> function too
> 
> 3)
> Another problem is to display the queue stats, in 'fwd_stats_display()' 
> & 'nic_stats_display()', there is a check if the queue stats mapping 
> enable or not ('rx_queue_stats_mapping_enabled' & 
> 'tx_queue_stats_mapping_enabled'),
> I think displaying queue stats and queue stat mapping should be 
> separate, why not drop checks for queue stats mapping and display queue 
> stats for 'nb_rxq' & 'nb_txq' queues?
> 
> Does above make sense?
> 
> 
> Majority of the drivers doesn't require queue stat mapping to get the 
> queue stats, lets don't pollute main usage with this requirement.
> 
> 
>> Fixes: 4dccdc789bf4b ("app/testpmd: simplify handling of stats 
>> mappings error")
>> Fixes: 013af9b6b64f6 ("app/testpmd: various updates")
>> Fixes: ed30d9b691b21 ("app/testpmd: add stats per queue")
>>
>> Signed-off-by: Huisong Li <lihuisong@huawei.com>
> 
> <...>
> 
> .


* Re: [dpdk-dev] [RFC V2 1/2] app/testpmd: fix queue stats mapping configuration
  2020-11-03  6:30     ` Min Hu (Connor)
@ 2020-11-12  2:28       ` Min Hu (Connor)
  2020-11-12  9:52         ` Ferruh Yigit
  0 siblings, 1 reply; 15+ messages in thread
From: Min Hu (Connor) @ 2020-11-12  2:28 UTC (permalink / raw)
  To: Ferruh Yigit, dev; +Cc: bruce.richardson, thomas.monjalon, lihuisong

Hi Ferruh, any suggestions?


On 2020/11/3 14:30, Min Hu (Connor) wrote:
> Hi Ferruh,
> 
> I agree with your proposal. But if we remove record structures, we will
> not be able to query the current queue stats mapping configuration. Or
> we can provide a query API for the PMD driver that uses the
> set_queue_stats_mapping API, and driver records these mapping
> information from user.
> 
> What do you think?
> 
> 
> On 2020/10/31 4:54, Ferruh Yigit wrote:
>> On 10/20/2020 9:26 AM, Min Hu (Connor) wrote:
>>> From: Huisong Li <lihuisong@huawei.com>
>>>
>>> Currently, the queue stats mapping has the following problems:
>>> 1) Many PMD drivers don't support queue stats mapping. But there is no
>>> failure message after executing the command "set stat_qmap rx 0 2 2".
>>> 2) Once queue mapping is set, unrelated and unmapped queues are also
>>> displayed.
>>> 3) There is no need to keep cache line alignment for
>>> 'struct queue_stats_mappings'.
>>> 4) The mapping arrays, 'tx_queue_stats_mappings_array' &
>>> 'rx_queue_stats_mappings_array' are global and their sizes are based on
>>> fixed max port and queue size assumptions.
>>> 5) The configuration result does not take effect or can not be queried
>>> in real time.
>>>
>>> Therefore, we have made the following adjustments:
>>> 1) If PMD supports queue stats mapping, configure to driver in real time
>>> after executing the command "set stat_qmap rx/tx ...". If not,
>>> the command can not be accepted.
>>> 2) Only display queues that mapping done by adding a new 'active' field
>>>   in queue_stats_mappings struct.
>>> 3) Remove cache alignment for 'struct queue_stats_mappings'.
>>> 4) Add a new port_stats_mappings struct in rte_port.
>>> The struct contains number of rx/txq stats mapping, rx/tx
>>> queue_stats_mapping_enabled flag, and rx/tx queue_stats_mapping array.
>>> Size of queue_stats_mapping_array is set to 
>>> "RTE_ETHDEV_QUEUE_STAT_CNTRS"
>>>   to ensure that the same number of queues can be set for each port.
>>>
>>
>> Hi Connor,
>>
>> I think above adjustment are good, but after the decision to use 
>> xstats for the queue stats, what do you think about more simplification,
>>
>> 1)
>> What testpmd does is, records the queue stats mapping commands and 
>> registers them later on port start & forwarding start.
>> What happens if recording and registering completely removed?
>> When "set stat_qmap .." issued, it just call the ethdev APIs to do the 
>> mapping in device.
>> This lets us removing record structures, "struct port_stats_mappings 
>> p_stats_map"
>> Also can remove 'map_port_queue_stats_mapping_registers()' and its sub 
>> functions.
>>
>> 2)
>> Also lets remove "tx-queue-stats-mapping" & "rx-queue-stats-mapping" 
>> parameters, which enables removing 
>> 'parse_queue_stats_mapping_config()' function too
>>
>> 3)
>> Another problem is to display the queue stats, in 
>> 'fwd_stats_display()' & 'nic_stats_display()', there is a check if the 
>> queue stats mapping enable or not ('rx_queue_stats_mapping_enabled' & 
>> 'tx_queue_stats_mapping_enabled'),
>> I think displaying queue stats and queue stat mapping should be 
>> separate, why not drop checks for queue stats mapping and display 
>> queue stats for 'nb_rxq' & 'nb_txq' queues?
>>
>> Does above make sense?
>>
>>
>> Majority of the drivers doesn't require queue stat mapping to get the 
>> queue stats, lets don't pollute main usage with this requirement.
>>
>>
>>> Fixes: 4dccdc789bf4b ("app/testpmd: simplify handling of stats 
>>> mappings error")
>>> Fixes: 013af9b6b64f6 ("app/testpmd: various updates")
>>> Fixes: ed30d9b691b21 ("app/testpmd: add stats per queue")
>>>
>>> Signed-off-by: Huisong Li <lihuisong@huawei.com>
>>
>> <...>
>>
>> .


* Re: [dpdk-dev] [RFC V2 1/2] app/testpmd: fix queue stats mapping configuration
  2020-11-12  2:28       ` Min Hu (Connor)
@ 2020-11-12  9:52         ` Ferruh Yigit
  2020-11-18  3:39           ` Min Hu (Connor)
  2020-11-20 11:50           ` [dpdk-dev] [RFC V4] " Min Hu (Connor)
  0 siblings, 2 replies; 15+ messages in thread
From: Ferruh Yigit @ 2020-11-12  9:52 UTC (permalink / raw)
  To: Min Hu (Connor), dev; +Cc: bruce.richardson, thomas.monjalon, lihuisong

On 11/12/2020 2:28 AM, Min Hu (Connor) wrote:
> Hi Ferruh, any suggestions?
> 
> 
> On 2020/11/3 14:30, Min Hu (Connor) wrote:
>> Hi Ferruh,
>>
>> I agree with your proposal. But if we remove record structures, we will
>> not be able to query the current queue stats mapping configuration. Or
>> we can provide a query API for the PMD driver that uses the
>> set_queue_stats_mapping API, and driver records these mapping
>> information from user.
>>
>> What do you think?
>>

Sorry for delay,

Yes, that information will be lost, but since queue stats mapping is not
commonly used, I think it is OK to lose it.

As you said, another option is to add a new ethdev API to get the queue stats
mapping, but for the same reason I am not sure about adding it. We can add it
later if there is a request for it.
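
Since the thread settles on xstats as the way to get queue statistics, here is
a minimal self-contained sketch of picking the per-queue counters out of the
xstats by name (e.g. "rx_q0_packets"). The name/value array is filled by hand
here; real code would obtain it via rte_eth_xstats_get_names() and
rte_eth_xstats_get():

```c
#include <inttypes.h>
#include <stdio.h>
#include <string.h>

struct xstat {
	char name[64];
	uint64_t value;
};

/* Print only the xstats whose names are per-queue counters
 * ("rx_q<N>_..." / "tx_q<N>_..."); returns how many were shown.
 * This mirrors displaying queue stats from xstats alone, without
 * any stats-register mapping. */
static int
print_queue_xstats(const struct xstat *xs, int n)
{
	int shown = 0;
	for (int i = 0; i < n; i++) {
		if (strncmp(xs[i].name, "rx_q", 4) == 0 ||
		    strncmp(xs[i].name, "tx_q", 4) == 0) {
			printf("%s: %" PRIu64 "\n", xs[i].name, xs[i].value);
			shown++;
		}
	}
	return shown;
}
```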

>>
>> On 2020/10/31 4:54, Ferruh Yigit wrote:
>>> On 10/20/2020 9:26 AM, Min Hu (Connor) wrote:
>>>> From: Huisong Li <lihuisong@huawei.com>
>>>>
>>>> Currently, the queue stats mapping has the following problems:
>>>> 1) Many PMD drivers don't support queue stats mapping. But there is no
>>>> failure message after executing the command "set stat_qmap rx 0 2 2".
>>>> 2) Once queue mapping is set, unrelated and unmapped queues are also
>>>> displayed.
>>>> 3) There is no need to keep cache line alignment for
>>>> 'struct queue_stats_mappings'.
>>>> 4) The mapping arrays, 'tx_queue_stats_mappings_array' &
>>>> 'rx_queue_stats_mappings_array' are global and their sizes are based on
>>>> fixed max port and queue size assumptions.
>>>> 5) The configuration result does not take effect or can not be queried
>>>> in real time.
>>>>
>>>> Therefore, we have made the following adjustments:
>>>> 1) If PMD supports queue stats mapping, configure to driver in real time
>>>> after executing the command "set stat_qmap rx/tx ...". If not,
>>>> the command can not be accepted.
>>>> 2) Only display queues that mapping done by adding a new 'active' field
>>>>   in queue_stats_mappings struct.
>>>> 3) Remove cache alignment for 'struct queue_stats_mappings'.
>>>> 4) Add a new port_stats_mappings struct in rte_port.
>>>> The struct contains number of rx/txq stats mapping, rx/tx
>>>> queue_stats_mapping_enabled flag, and rx/tx queue_stats_mapping array.
>>>> Size of queue_stats_mapping_array is set to "RTE_ETHDEV_QUEUE_STAT_CNTRS"
>>>>   to ensure that the same number of queues can be set for each port.
>>>>
>>>
>>> Hi Connor,
>>>
>>> I think above adjustment are good, but after the decision to use xstats for 
>>> the queue stats, what do you think about more simplification,
>>>
>>> 1)
>>> What testpmd does is, records the queue stats mapping commands and registers 
>>> them later on port start & forwarding start.
>>> What happens if recording and registering completely removed?
>>> When "set stat_qmap .." issued, it just call the ethdev APIs to do the 
>>> mapping in device.
>>> This lets us removing record structures, "struct port_stats_mappings 
>>> p_stats_map"
>>> Also can remove 'map_port_queue_stats_mapping_registers()' and its sub 
>>> functions.
>>>
>>> 2)
>>> Also lets remove "tx-queue-stats-mapping" & "rx-queue-stats-mapping" 
>>> parameters, which enables removing 'parse_queue_stats_mapping_config()' 
>>> function too
>>>
>>> 3)
>>> Another problem is to display the queue stats, in 'fwd_stats_display()' & 
>>> 'nic_stats_display()', there is a check if the queue stats mapping enable or 
>>> not ('rx_queue_stats_mapping_enabled' & 'tx_queue_stats_mapping_enabled'),
>>> I think displaying queue stats and queue stat mapping should be separate, why 
>>> not drop checks for queue stats mapping and display queue stats for 'nb_rxq' 
>>> & 'nb_txq' queues?
>>>
>>> Does above make sense?
>>>
>>>
>>> Majority of the drivers doesn't require queue stat mapping to get the queue 
>>> stats, lets don't pollute main usage with this requirement.
>>>
>>>
>>>> Fixes: 4dccdc789bf4b ("app/testpmd: simplify handling of stats mappings error")
>>>> Fixes: 013af9b6b64f6 ("app/testpmd: various updates")
>>>> Fixes: ed30d9b691b21 ("app/testpmd: add stats per queue")
>>>>
>>>> Signed-off-by: Huisong Li <lihuisong@huawei.com>
>>>
>>> <...>
>>>
>>> .



* Re: [dpdk-dev] [RFC V2 1/2] app/testpmd: fix queue stats mapping configuration
  2020-11-12  9:52         ` Ferruh Yigit
@ 2020-11-18  3:39           ` Min Hu (Connor)
  2020-11-20 11:50           ` [dpdk-dev] [RFC V4] " Min Hu (Connor)
  1 sibling, 0 replies; 15+ messages in thread
From: Min Hu (Connor) @ 2020-11-18  3:39 UTC (permalink / raw)
  To: Ferruh Yigit, dev; +Cc: bruce.richardson, thomas.monjalon, lihuisong

Hi Ferruh,
	OK, I will send V3 patches. Thanks.

On 2020/11/12 17:52, Ferruh Yigit wrote:
> On 11/12/2020 2:28 AM, Min Hu (Connor) wrote:
>> Hi Ferruh, any suggestions?
>>
>>
>> On 2020/11/3 14:30, Min Hu (Connor) wrote:
>>> Hi Ferruh,
>>>
>>> I agree with your proposal. But if we remove record structures, we will
>>> not be able to query the current queue stats mapping configuration. Or
>>> we can provide a query API for the PMD driver that uses the
>>> set_queue_stats_mapping API, and driver records these mapping
>>> information from user.
>>>
>>> What do you think?
>>>
> 
> Sorry for delay,
> 
> Yes that information will be lost, but since the queue stats mapping is 
> not commonly used, I think it can be OK to lose it.
> 
> As you said another option is to add a new ethdev API to get queue stats 
> mapping, but because of same reason not sure about adding it. We can add 
> it later if there is a request for it.
> 
>>>
>>> On 2020/10/31 4:54, Ferruh Yigit wrote:
>>>> On 10/20/2020 9:26 AM, Min Hu (Connor) wrote:
>>>>> From: Huisong Li <lihuisong@huawei.com>
>>>>>
>>>>> Currently, the queue stats mapping has the following problems:
>>>>> 1) Many PMD drivers don't support queue stats mapping. But there is no
>>>>> failure message after executing the command "set stat_qmap rx 0 2 2".
>>>>> 2) Once queue mapping is set, unrelated and unmapped queues are also
>>>>> displayed.
>>>>> 3) There is no need to keep cache line alignment for
>>>>> 'struct queue_stats_mappings'.
>>>>> 4) The mapping arrays, 'tx_queue_stats_mappings_array' &
>>>>> 'rx_queue_stats_mappings_array' are global and their sizes are 
>>>>> based on
>>>>> fixed max port and queue size assumptions.
>>>>> 5) The configuration result does not take effect or can not be queried
>>>>> in real time.
>>>>>
>>>>> Therefore, we have made the following adjustments:
>>>>> 1) If PMD supports queue stats mapping, configure to driver in real 
>>>>> time
>>>>> after executing the command "set stat_qmap rx/tx ...". If not,
>>>>> the command can not be accepted.
>>>>> 2) Only display queues that mapping done by adding a new 'active' 
>>>>> field
>>>>>   in queue_stats_mappings struct.
>>>>> 3) Remove cache alignment for 'struct queue_stats_mappings'.
>>>>> 4) Add a new port_stats_mappings struct in rte_port.
>>>>> The struct contains number of rx/txq stats mapping, rx/tx
>>>>> queue_stats_mapping_enabled flag, and rx/tx queue_stats_mapping array.
>>>>> Size of queue_stats_mapping_array is set to 
>>>>> "RTE_ETHDEV_QUEUE_STAT_CNTRS"
>>>>>   to ensure that the same number of queues can be set for each port.
>>>>>
>>>>
>>>> Hi Connor,
>>>>
>>>> I think above adjustment are good, but after the decision to use 
>>>> xstats for the queue stats, what do you think about more 
>>>> simplification,
>>>>
>>>> 1)
>>>> What testpmd does is, records the queue stats mapping commands and 
>>>> registers them later on port start & forwarding start.
>>>> What happens if recording and registering completely removed?
>>>> When "set stat_qmap .." issued, it just call the ethdev APIs to do 
>>>> the mapping in device.
>>>> This lets us removing record structures, "struct port_stats_mappings 
>>>> p_stats_map"
>>>> Also can remove 'map_port_queue_stats_mapping_registers()' and its 
>>>> sub functions.
>>>>
>>>> 2)
>>>> Also lets remove "tx-queue-stats-mapping" & "rx-queue-stats-mapping" 
>>>> parameters, which enables removing 
>>>> 'parse_queue_stats_mapping_config()' function too
>>>>
>>>> 3)
>>>> Another problem is to display the queue stats, in 
>>>> 'fwd_stats_display()' & 'nic_stats_display()', there is a check if 
>>>> the queue stats mapping enable or not 
>>>> ('rx_queue_stats_mapping_enabled' & 'tx_queue_stats_mapping_enabled'),
>>>> I think displaying queue stats and queue stat mapping should be 
>>>> separate, why not drop checks for queue stats mapping and display 
>>>> queue stats for 'nb_rxq' & 'nb_txq' queues?
>>>>
>>>> Does above make sense?
>>>>
>>>>
>>>> Majority of the drivers doesn't require queue stat mapping to get 
>>>> the queue stats, lets don't pollute main usage with this requirement.
>>>>
>>>>
>>>>> Fixes: 4dccdc789bf4b ("app/testpmd: simplify handling of stats 
>>>>> mappings error")
>>>>> Fixes: 013af9b6b64f6 ("app/testpmd: various updates")
>>>>> Fixes: ed30d9b691b21 ("app/testpmd: add stats per queue")
>>>>>
>>>>> Signed-off-by: Huisong Li <lihuisong@huawei.com>
>>>>
>>>> <...>
>>>>
>>>> .
> 
> .


* [dpdk-dev] [RFC V4] app/testpmd: fix queue stats mapping configuration
  2020-11-12  9:52         ` Ferruh Yigit
  2020-11-18  3:39           ` Min Hu (Connor)
@ 2020-11-20 11:50           ` " Min Hu (Connor)
  2020-11-20 17:26             ` Ferruh Yigit
  1 sibling, 1 reply; 15+ messages in thread
From: Min Hu (Connor) @ 2020-11-20 11:50 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, bruce.richardson, thomas.monjalon, lihuisong

From: Huisong Li <lihuisong@huawei.com>

Currently, the queue stats mapping has the following problems:
1) Many PMD drivers don't support queue stats mapping. But there is no
 failure message after executing the command "set stat_qmap rx 0 2 2".
2) Once queue mapping is set, unrelated and unmapped queues are also
displayed.
3) The configuration result does not take effect or cannot be queried
in real time.
4) The mapping arrays, "tx_queue_stats_mappings_array" &
"rx_queue_stats_mappings_array" are global and their sizes are based on
fixed max port and queue size assumptions.
5) The record structures, 'map_port_queue_stats_mapping_registers()'
 and its sub-functions are redundant for the majority of drivers.
6) The display of the queue stats and queue stats mapping is mixed
 together.

Since xstats is used to obtain queue statistics, we have made the following
 simplifications and adjustments:
1) If the PMD requires and supports queue stats mapping, configure it to the
 driver in real time by calling the ethdev API after executing the command
 "set stat_qmap rx/tx ...". If not, the command cannot be accepted.
2) Based on the above adjustments, the record structures,
 'map_port_queue_stats_mapping_registers()' and its sub-functions can be
 removed. The "tx-queue-stats-mapping" & "rx-queue-stats-mapping" parameters
 and 'parse_queue_stats_mapping_config()' can be removed too.
3) Remove the display of queue stats mapping in 'fwd_stats_display()' &
 'nic_stats_display()', and obtain queue stats via xstats.
 Since the record structures are removed, 'nic_stats_mapping_display()'
 can be deleted.

Fixes: 4dccdc789bf4b ("app/testpmd: simplify handling of stats mappings error")
Fixes: 013af9b6b64f6 ("app/testpmd: various updates")
Fixes: ed30d9b691b21 ("app/testpmd: add stats per queue")

Signed-off-by: Huisong Li <lihuisong@huawei.com>
---
V3 -> V4
* remove unnecessary spaces.

V2 -> V3
* remove record structures, "rx/tx-queue-stats-mapping" parameters
  and related function.

V1 -> V2
* adjust commit log description.

---
 app/test-pmd/cmdline.c    |  17 ++---
 app/test-pmd/config.c     | 144 +++++-----------------------------
 app/test-pmd/parameters.c | 107 --------------------------
 app/test-pmd/testpmd.c    | 191 +++++-----------------------------------------
 app/test-pmd/testpmd.h    |  22 ------
 5 files changed, 46 insertions(+), 435 deletions(-)

diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index 5e2881e..4c1ee0d 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -163,7 +163,7 @@ static void cmd_help_long_parsed(void *parsed_result,
 			"Display:\n"
 			"--------\n\n"
 
-			"show port (info|stats|summary|xstats|fdir|stat_qmap|dcb_tc|cap) (port_id|all)\n"
+			"show port (info|stats|summary|xstats|fdir|dcb_tc|cap) (port_id|all)\n"
 			"    Display information for port_id, or all.\n\n"
 
 			"show port port_id (module_eeprom|eeprom)\n"
@@ -177,7 +177,7 @@ static void cmd_help_long_parsed(void *parsed_result,
 			"show port (port_id) rss-hash [key]\n"
 			"    Display the RSS hash functions and RSS hash key of port\n\n"
 
-			"clear port (info|stats|xstats|fdir|stat_qmap) (port_id|all)\n"
+			"clear port (info|stats|xstats|fdir) (port_id|all)\n"
 			"    Clear information for port_id, or all.\n\n"
 
 			"show (rxq|txq) info (port_id) (queue_id)\n"
@@ -7552,9 +7552,6 @@ static void cmd_showportall_parsed(void *parsed_result,
 		RTE_ETH_FOREACH_DEV(i)
 			fdir_get_infos(i);
 #endif
-	else if (!strcmp(res->what, "stat_qmap"))
-		RTE_ETH_FOREACH_DEV(i)
-			nic_stats_mapping_display(i);
 	else if (!strcmp(res->what, "dcb_tc"))
 		RTE_ETH_FOREACH_DEV(i)
 			port_dcb_info_display(i);
@@ -7570,14 +7567,14 @@ cmdline_parse_token_string_t cmd_showportall_port =
 	TOKEN_STRING_INITIALIZER(struct cmd_showportall_result, port, "port");
 cmdline_parse_token_string_t cmd_showportall_what =
 	TOKEN_STRING_INITIALIZER(struct cmd_showportall_result, what,
-				 "info#summary#stats#xstats#fdir#stat_qmap#dcb_tc#cap");
+				 "info#summary#stats#xstats#fdir#dcb_tc#cap");
 cmdline_parse_token_string_t cmd_showportall_all =
 	TOKEN_STRING_INITIALIZER(struct cmd_showportall_result, all, "all");
 cmdline_parse_inst_t cmd_showportall = {
 	.f = cmd_showportall_parsed,
 	.data = NULL,
 	.help_str = "show|clear port "
-		"info|summary|stats|xstats|fdir|stat_qmap|dcb_tc|cap all",
+		"info|summary|stats|xstats|fdir|dcb_tc|cap all",
 	.tokens = {
 		(void *)&cmd_showportall_show,
 		(void *)&cmd_showportall_port,
@@ -7619,8 +7616,6 @@ static void cmd_showport_parsed(void *parsed_result,
 	else if (!strcmp(res->what, "fdir"))
 		 fdir_get_infos(res->portnum);
 #endif
-	else if (!strcmp(res->what, "stat_qmap"))
-		nic_stats_mapping_display(res->portnum);
 	else if (!strcmp(res->what, "dcb_tc"))
 		port_dcb_info_display(res->portnum);
 	else if (!strcmp(res->what, "cap"))
@@ -7634,7 +7629,7 @@ cmdline_parse_token_string_t cmd_showport_port =
 	TOKEN_STRING_INITIALIZER(struct cmd_showport_result, port, "port");
 cmdline_parse_token_string_t cmd_showport_what =
 	TOKEN_STRING_INITIALIZER(struct cmd_showport_result, what,
-				 "info#summary#stats#xstats#fdir#stat_qmap#dcb_tc#cap");
+				 "info#summary#stats#xstats#fdir#dcb_tc#cap");
 cmdline_parse_token_num_t cmd_showport_portnum =
 	TOKEN_NUM_INITIALIZER(struct cmd_showport_result, portnum, RTE_UINT16);
 
@@ -7642,7 +7637,7 @@ cmdline_parse_inst_t cmd_showport = {
 	.f = cmd_showport_parsed,
 	.data = NULL,
 	.help_str = "show|clear port "
-		"info|summary|stats|xstats|fdir|stat_qmap|dcb_tc|cap "
+		"info|summary|stats|xstats|fdir|dcb_tc|cap "
 		"<port_id>",
 	.tokens = {
 		(void *)&cmd_showport_show,
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index 91e7542..196fa80 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -182,8 +182,6 @@ nic_stats_display(portid_t port_id)
 								diff_ns;
 	uint64_t mpps_rx, mpps_tx, mbps_rx, mbps_tx;
 	struct rte_eth_stats stats;
-	struct rte_port *port = &ports[port_id];
-	uint8_t i;
 
 	static const char *nic_stats_border = "########################";
 
@@ -195,46 +193,12 @@ nic_stats_display(portid_t port_id)
 	printf("\n  %s NIC statistics for port %-2d %s\n",
 	       nic_stats_border, port_id, nic_stats_border);
 
-	if ((!port->rx_queue_stats_mapping_enabled) && (!port->tx_queue_stats_mapping_enabled)) {
-		printf("  RX-packets: %-10"PRIu64" RX-missed: %-10"PRIu64" RX-bytes:  "
-		       "%-"PRIu64"\n",
-		       stats.ipackets, stats.imissed, stats.ibytes);
-		printf("  RX-errors: %-"PRIu64"\n", stats.ierrors);
-		printf("  RX-nombuf:  %-10"PRIu64"\n",
-		       stats.rx_nombuf);
-		printf("  TX-packets: %-10"PRIu64" TX-errors: %-10"PRIu64" TX-bytes:  "
-		       "%-"PRIu64"\n",
-		       stats.opackets, stats.oerrors, stats.obytes);
-	}
-	else {
-		printf("  RX-packets:              %10"PRIu64"    RX-errors: %10"PRIu64
-		       "    RX-bytes: %10"PRIu64"\n",
-		       stats.ipackets, stats.ierrors, stats.ibytes);
-		printf("  RX-errors:  %10"PRIu64"\n", stats.ierrors);
-		printf("  RX-nombuf:               %10"PRIu64"\n",
-		       stats.rx_nombuf);
-		printf("  TX-packets:              %10"PRIu64"    TX-errors: %10"PRIu64
-		       "    TX-bytes: %10"PRIu64"\n",
-		       stats.opackets, stats.oerrors, stats.obytes);
-	}
-
-	if (port->rx_queue_stats_mapping_enabled) {
-		printf("\n");
-		for (i = 0; i < RTE_ETHDEV_QUEUE_STAT_CNTRS; i++) {
-			printf("  Stats reg %2d RX-packets: %10"PRIu64
-			       "    RX-errors: %10"PRIu64
-			       "    RX-bytes: %10"PRIu64"\n",
-			       i, stats.q_ipackets[i], stats.q_errors[i], stats.q_ibytes[i]);
-		}
-	}
-	if (port->tx_queue_stats_mapping_enabled) {
-		printf("\n");
-		for (i = 0; i < RTE_ETHDEV_QUEUE_STAT_CNTRS; i++) {
-			printf("  Stats reg %2d TX-packets: %10"PRIu64
-			       "                             TX-bytes: %10"PRIu64"\n",
-			       i, stats.q_opackets[i], stats.q_obytes[i]);
-		}
-	}
+	printf("  RX-packets: %-10"PRIu64" RX-missed: %-10"PRIu64" RX-bytes:  "
+	       "%-"PRIu64"\n", stats.ipackets, stats.imissed, stats.ibytes);
+	printf("  RX-errors: %-"PRIu64"\n", stats.ierrors);
+	printf("  RX-nombuf:  %-10"PRIu64"\n", stats.rx_nombuf);
+	printf("  TX-packets: %-10"PRIu64" TX-errors: %-10"PRIu64" TX-bytes:  "
+	       "%-"PRIu64"\n", stats.opackets, stats.oerrors, stats.obytes);
 
 	diff_ns = 0;
 	if (clock_gettime(CLOCK_TYPE_ID, &cur_time) == 0) {
@@ -398,54 +362,6 @@ nic_xstats_clear(portid_t port_id)
 }
 
 void
-nic_stats_mapping_display(portid_t port_id)
-{
-	struct rte_port *port = &ports[port_id];
-	uint16_t i;
-
-	static const char *nic_stats_mapping_border = "########################";
-
-	if (port_id_is_invalid(port_id, ENABLED_WARN)) {
-		print_valid_ports();
-		return;
-	}
-
-	if ((!port->rx_queue_stats_mapping_enabled) && (!port->tx_queue_stats_mapping_enabled)) {
-		printf("Port id %d - either does not support queue statistic mapping or"
-		       " no queue statistic mapping set\n", port_id);
-		return;
-	}
-
-	printf("\n  %s NIC statistics mapping for port %-2d %s\n",
-	       nic_stats_mapping_border, port_id, nic_stats_mapping_border);
-
-	if (port->rx_queue_stats_mapping_enabled) {
-		for (i = 0; i < nb_rx_queue_stats_mappings; i++) {
-			if (rx_queue_stats_mappings[i].port_id == port_id) {
-				printf("  RX-queue %2d mapped to Stats Reg %2d\n",
-				       rx_queue_stats_mappings[i].queue_id,
-				       rx_queue_stats_mappings[i].stats_counter_id);
-			}
-		}
-		printf("\n");
-	}
-
-
-	if (port->tx_queue_stats_mapping_enabled) {
-		for (i = 0; i < nb_tx_queue_stats_mappings; i++) {
-			if (tx_queue_stats_mappings[i].port_id == port_id) {
-				printf("  TX-queue %2d mapped to Stats Reg %2d\n",
-				       tx_queue_stats_mappings[i].queue_id,
-				       tx_queue_stats_mappings[i].stats_counter_id);
-			}
-		}
-	}
-
-	printf("  %s####################################%s\n",
-	       nic_stats_mapping_border, nic_stats_mapping_border);
-}
-
-void
 rx_queue_infos_display(portid_t port_id, uint16_t queue_id)
 {
 	struct rte_eth_burst_mode mode;
@@ -2572,7 +2488,7 @@ tx_queue_id_is_invalid(queueid_t txq_id)
 {
 	if (txq_id < nb_txq)
 		return 0;
-	printf("Invalid TX queue %d (must be < nb_rxq=%d)\n", txq_id, nb_txq);
+	printf("Invalid TX queue %d (must be < nb_txq=%d)\n", txq_id, nb_txq);
 	return 1;
 }
 
@@ -4527,8 +4443,7 @@ tx_vlan_pvid_set(portid_t port_id, uint16_t vlan_id, int on)
 void
 set_qmap(portid_t port_id, uint8_t is_rx, uint16_t queue_id, uint8_t map_value)
 {
-	uint16_t i;
-	uint8_t existing_mapping_found = 0;
+	int ret;
 
 	if (port_id_is_invalid(port_id, ENABLED_WARN))
 		return;
@@ -4538,40 +4453,23 @@ set_qmap(portid_t port_id, uint8_t is_rx, uint16_t queue_id, uint8_t map_value)
 
 	if (map_value >= RTE_ETHDEV_QUEUE_STAT_CNTRS) {
 		printf("map_value not in required range 0..%d\n",
-				RTE_ETHDEV_QUEUE_STAT_CNTRS - 1);
+		       RTE_ETHDEV_QUEUE_STAT_CNTRS - 1);
 		return;
 	}
 
-	if (!is_rx) { /*then tx*/
-		for (i = 0; i < nb_tx_queue_stats_mappings; i++) {
-			if ((tx_queue_stats_mappings[i].port_id == port_id) &&
-			    (tx_queue_stats_mappings[i].queue_id == queue_id)) {
-				tx_queue_stats_mappings[i].stats_counter_id = map_value;
-				existing_mapping_found = 1;
-				break;
-			}
-		}
-		if (!existing_mapping_found) { /* A new additional mapping... */
-			tx_queue_stats_mappings[nb_tx_queue_stats_mappings].port_id = port_id;
-			tx_queue_stats_mappings[nb_tx_queue_stats_mappings].queue_id = queue_id;
-			tx_queue_stats_mappings[nb_tx_queue_stats_mappings].stats_counter_id = map_value;
-			nb_tx_queue_stats_mappings++;
-		}
-	}
-	else { /*rx*/
-		for (i = 0; i < nb_rx_queue_stats_mappings; i++) {
-			if ((rx_queue_stats_mappings[i].port_id == port_id) &&
-			    (rx_queue_stats_mappings[i].queue_id == queue_id)) {
-				rx_queue_stats_mappings[i].stats_counter_id = map_value;
-				existing_mapping_found = 1;
-				break;
-			}
+	if (!is_rx) { /* tx */
+		ret = rte_eth_dev_set_tx_queue_stats_mapping(port_id, queue_id,
+							     map_value);
+		if (ret) {
+			printf("failed to set tx queue stats mapping.\n");
+			return;
 		}
-		if (!existing_mapping_found) { /* A new additional mapping... */
-			rx_queue_stats_mappings[nb_rx_queue_stats_mappings].port_id = port_id;
-			rx_queue_stats_mappings[nb_rx_queue_stats_mappings].queue_id = queue_id;
-			rx_queue_stats_mappings[nb_rx_queue_stats_mappings].stats_counter_id = map_value;
-			nb_rx_queue_stats_mappings++;
+	} else { /* rx */
+		ret = rte_eth_dev_set_rx_queue_stats_mapping(port_id, queue_id,
+							     map_value);
+		if (ret) {
+			printf("failed to set rx queue stats mapping.\n");
+			return;
 		}
 	}
 }
diff --git a/app/test-pmd/parameters.c b/app/test-pmd/parameters.c
index bbb68a5..414a006 100644
--- a/app/test-pmd/parameters.c
+++ b/app/test-pmd/parameters.c
@@ -176,12 +176,6 @@ usage(char* progname)
 	       "(0 <= N <= value of txd).\n");
 	printf("  --txrst=N: set the transmit RS bit threshold of TX rings to N "
 	       "(0 <= N <= value of txd).\n");
-	printf("  --tx-queue-stats-mapping=(port,queue,mapping)[,(port,queue,mapping]: "
-	       "tx queues statistics counters mapping "
-	       "(0 <= mapping <= %d).\n", RTE_ETHDEV_QUEUE_STAT_CNTRS - 1);
-	printf("  --rx-queue-stats-mapping=(port,queue,mapping)[,(port,queue,mapping]: "
-	       "rx queues statistics counters mapping "
-	       "(0 <= mapping <= %d).\n", RTE_ETHDEV_QUEUE_STAT_CNTRS - 1);
 	printf("  --no-flush-rx: Don't flush RX streams before forwarding."
 	       " Used mainly with PCAP drivers.\n");
 	printf("  --rxoffs=X[,Y]*: set RX segment offsets for split.\n");
@@ -300,93 +294,6 @@ parse_fwd_portmask(const char *portmask)
 		set_fwd_ports_mask((uint64_t) pm);
 }
 
-
-static int
-parse_queue_stats_mapping_config(const char *q_arg, int is_rx)
-{
-	char s[256];
-	const char *p, *p0 = q_arg;
-	char *end;
-	enum fieldnames {
-		FLD_PORT = 0,
-		FLD_QUEUE,
-		FLD_STATS_COUNTER,
-		_NUM_FLD
-	};
-	unsigned long int_fld[_NUM_FLD];
-	char *str_fld[_NUM_FLD];
-	int i;
-	unsigned size;
-
-	/* reset from value set at definition */
-	is_rx ? (nb_rx_queue_stats_mappings = 0) : (nb_tx_queue_stats_mappings = 0);
-
-	while ((p = strchr(p0,'(')) != NULL) {
-		++p;
-		if((p0 = strchr(p,')')) == NULL)
-			return -1;
-
-		size = p0 - p;
-		if(size >= sizeof(s))
-			return -1;
-
-		snprintf(s, sizeof(s), "%.*s", size, p);
-		if (rte_strsplit(s, sizeof(s), str_fld, _NUM_FLD, ',') != _NUM_FLD)
-			return -1;
-		for (i = 0; i < _NUM_FLD; i++){
-			errno = 0;
-			int_fld[i] = strtoul(str_fld[i], &end, 0);
-			if (errno != 0 || end == str_fld[i] || int_fld[i] > 255)
-				return -1;
-		}
-		/* Check mapping field is in correct range (0..RTE_ETHDEV_QUEUE_STAT_CNTRS-1) */
-		if (int_fld[FLD_STATS_COUNTER] >= RTE_ETHDEV_QUEUE_STAT_CNTRS) {
-			printf("Stats counter not in the correct range 0..%d\n",
-					RTE_ETHDEV_QUEUE_STAT_CNTRS - 1);
-			return -1;
-		}
-
-		if (!is_rx) {
-			if ((nb_tx_queue_stats_mappings >=
-						MAX_TX_QUEUE_STATS_MAPPINGS)) {
-				printf("exceeded max number of TX queue "
-						"statistics mappings: %hu\n",
-						nb_tx_queue_stats_mappings);
-				return -1;
-			}
-			tx_queue_stats_mappings_array[nb_tx_queue_stats_mappings].port_id =
-				(uint8_t)int_fld[FLD_PORT];
-			tx_queue_stats_mappings_array[nb_tx_queue_stats_mappings].queue_id =
-				(uint8_t)int_fld[FLD_QUEUE];
-			tx_queue_stats_mappings_array[nb_tx_queue_stats_mappings].stats_counter_id =
-				(uint8_t)int_fld[FLD_STATS_COUNTER];
-			++nb_tx_queue_stats_mappings;
-		}
-		else {
-			if ((nb_rx_queue_stats_mappings >=
-						MAX_RX_QUEUE_STATS_MAPPINGS)) {
-				printf("exceeded max number of RX queue "
-						"statistics mappings: %hu\n",
-						nb_rx_queue_stats_mappings);
-				return -1;
-			}
-			rx_queue_stats_mappings_array[nb_rx_queue_stats_mappings].port_id =
-				(uint8_t)int_fld[FLD_PORT];
-			rx_queue_stats_mappings_array[nb_rx_queue_stats_mappings].queue_id =
-				(uint8_t)int_fld[FLD_QUEUE];
-			rx_queue_stats_mappings_array[nb_rx_queue_stats_mappings].stats_counter_id =
-				(uint8_t)int_fld[FLD_STATS_COUNTER];
-			++nb_rx_queue_stats_mappings;
-		}
-
-	}
-/* Reassign the rx/tx_queue_stats_mappings pointer to point to this newly populated array rather */
-/* than to the default array (that was set at its definition) */
-	is_rx ? (rx_queue_stats_mappings = rx_queue_stats_mappings_array) :
-		(tx_queue_stats_mappings = tx_queue_stats_mappings_array);
-	return 0;
-}
-
 static void
 print_invalid_socket_id_error(void)
 {
@@ -664,8 +571,6 @@ launch_args_parse(int argc, char** argv)
 		{ "rxht",			1, 0, 0 },
 		{ "rxwt",			1, 0, 0 },
 		{ "rxfreet",                    1, 0, 0 },
-		{ "tx-queue-stats-mapping",	1, 0, 0 },
-		{ "rx-queue-stats-mapping",	1, 0, 0 },
 		{ "no-flush-rx",	0, 0, 0 },
 		{ "flow-isolate-all",	        0, 0, 0 },
 		{ "rxoffs",			1, 0, 0 },
@@ -1279,18 +1184,6 @@ launch_args_parse(int argc, char** argv)
 				else
 					rte_exit(EXIT_FAILURE, "rxfreet must be >= 0\n");
 			}
-			if (!strcmp(lgopts[opt_idx].name, "tx-queue-stats-mapping")) {
-				if (parse_queue_stats_mapping_config(optarg, TX)) {
-					rte_exit(EXIT_FAILURE,
-						 "invalid TX queue statistics mapping config entered\n");
-				}
-			}
-			if (!strcmp(lgopts[opt_idx].name, "rx-queue-stats-mapping")) {
-				if (parse_queue_stats_mapping_config(optarg, RX)) {
-					rte_exit(EXIT_FAILURE,
-						 "invalid RX queue statistics mapping config entered\n");
-				}
-			}
 			if (!strcmp(lgopts[opt_idx].name, "rxoffs")) {
 				unsigned int seg_off[MAX_SEGS_BUFFER_SPLIT];
 				unsigned int nb_offs;
diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index 48e9647..e2b138d 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -476,15 +476,6 @@ struct rte_fdir_conf fdir_conf = {
 
 volatile int test_done = 1; /* stop packet forwarding when set to 1. */
 
-struct queue_stats_mappings tx_queue_stats_mappings_array[MAX_TX_QUEUE_STATS_MAPPINGS];
-struct queue_stats_mappings rx_queue_stats_mappings_array[MAX_RX_QUEUE_STATS_MAPPINGS];
-
-struct queue_stats_mappings *tx_queue_stats_mappings = tx_queue_stats_mappings_array;
-struct queue_stats_mappings *rx_queue_stats_mappings = rx_queue_stats_mappings_array;
-
-uint16_t nb_tx_queue_stats_mappings = 0;
-uint16_t nb_rx_queue_stats_mappings = 0;
-
 /*
  * Display zero values by default for xstats
  */
@@ -520,8 +511,6 @@ enum rte_eth_rx_mq_mode rx_mq_mode = ETH_MQ_RX_VMDQ_DCB_RSS;
 
 /* Forward function declarations */
 static void setup_attached_port(portid_t pi);
-static void map_port_queue_stats_mapping_registers(portid_t pi,
-						   struct rte_port *port);
 static void check_all_ports_link_status(uint32_t port_mask);
 static int eth_event_callback(portid_t port_id,
 			      enum rte_eth_event_type type,
@@ -1857,8 +1846,6 @@ fwd_stats_display(void)
 			fwd_cycles += fs->core_cycles;
 	}
 	for (i = 0; i < cur_fwd_config.nb_fwd_ports; i++) {
-		uint8_t j;
-
 		pt_id = fwd_ports_ids[i];
 		port = &ports[pt_id];
 
@@ -1881,88 +1868,34 @@ fwd_stats_display(void)
 		printf("\n  %s Forward statistics for port %-2d %s\n",
 		       fwd_stats_border, pt_id, fwd_stats_border);
 
-		if (!port->rx_queue_stats_mapping_enabled &&
-		    !port->tx_queue_stats_mapping_enabled) {
-			printf("  RX-packets: %-14"PRIu64
-			       " RX-dropped: %-14"PRIu64
-			       "RX-total: %-"PRIu64"\n",
-			       stats.ipackets, stats.imissed,
-			       stats.ipackets + stats.imissed);
-
-			if (cur_fwd_eng == &csum_fwd_engine)
-				printf("  Bad-ipcsum: %-14"PRIu64
-				       " Bad-l4csum: %-14"PRIu64
-				       "Bad-outer-l4csum: %-14"PRIu64"\n",
-				       ports_stats[pt_id].rx_bad_ip_csum,
-				       ports_stats[pt_id].rx_bad_l4_csum,
-				       ports_stats[pt_id].rx_bad_outer_l4_csum);
-			if (stats.ierrors + stats.rx_nombuf > 0) {
-				printf("  RX-error: %-"PRIu64"\n",
-				       stats.ierrors);
-				printf("  RX-nombufs: %-14"PRIu64"\n",
-				       stats.rx_nombuf);
-			}
+		printf("  RX-packets: %-14"PRIu64" RX-dropped: %-14"PRIu64
+		       "RX-total: %-"PRIu64"\n", stats.ipackets, stats.imissed,
+		       stats.ipackets + stats.imissed);
 
-			printf("  TX-packets: %-14"PRIu64
-			       " TX-dropped: %-14"PRIu64
-			       "TX-total: %-"PRIu64"\n",
-			       stats.opackets, ports_stats[pt_id].tx_dropped,
-			       stats.opackets + ports_stats[pt_id].tx_dropped);
-		} else {
-			printf("  RX-packets:             %14"PRIu64
-			       "    RX-dropped:%14"PRIu64
-			       "    RX-total:%14"PRIu64"\n",
-			       stats.ipackets, stats.imissed,
-			       stats.ipackets + stats.imissed);
-
-			if (cur_fwd_eng == &csum_fwd_engine)
-				printf("  Bad-ipcsum:%14"PRIu64
-				       "    Bad-l4csum:%14"PRIu64
-				       "    Bad-outer-l4csum: %-14"PRIu64"\n",
-				       ports_stats[pt_id].rx_bad_ip_csum,
-				       ports_stats[pt_id].rx_bad_l4_csum,
-				       ports_stats[pt_id].rx_bad_outer_l4_csum);
-			if ((stats.ierrors + stats.rx_nombuf) > 0) {
-				printf("  RX-error:%"PRIu64"\n", stats.ierrors);
-				printf("  RX-nombufs:             %14"PRIu64"\n",
-				       stats.rx_nombuf);
-			}
-
-			printf("  TX-packets:             %14"PRIu64
-			       "    TX-dropped:%14"PRIu64
-			       "    TX-total:%14"PRIu64"\n",
-			       stats.opackets, ports_stats[pt_id].tx_dropped,
-			       stats.opackets + ports_stats[pt_id].tx_dropped);
+		if (cur_fwd_eng == &csum_fwd_engine)
+			printf("  Bad-ipcsum: %-14"PRIu64
+			       " Bad-l4csum: %-14"PRIu64
+			       "Bad-outer-l4csum: %-14"PRIu64"\n",
+			       ports_stats[pt_id].rx_bad_ip_csum,
+			       ports_stats[pt_id].rx_bad_l4_csum,
+			       ports_stats[pt_id].rx_bad_outer_l4_csum);
+		if (stats.ierrors + stats.rx_nombuf > 0) {
+			printf("  RX-error: %-"PRIu64"\n", stats.ierrors);
+			printf("  RX-nombufs: %-14"PRIu64"\n", stats.rx_nombuf);
 		}
 
+		printf("  TX-packets: %-14"PRIu64" TX-dropped: %-14"PRIu64
+		       "TX-total: %-"PRIu64"\n",
+		       stats.opackets, ports_stats[pt_id].tx_dropped,
+		       stats.opackets + ports_stats[pt_id].tx_dropped);
+
 		if (record_burst_stats) {
 			if (ports_stats[pt_id].rx_stream)
 				pkt_burst_stats_display("RX",
 					&ports_stats[pt_id].rx_stream->rx_burst_stats);
 			if (ports_stats[pt_id].tx_stream)
 				pkt_burst_stats_display("TX",
-					&ports_stats[pt_id].tx_stream->tx_burst_stats);
-		}
-
-		if (port->rx_queue_stats_mapping_enabled) {
-			printf("\n");
-			for (j = 0; j < RTE_ETHDEV_QUEUE_STAT_CNTRS; j++) {
-				printf("  Stats reg %2d RX-packets:%14"PRIu64
-				       "     RX-errors:%14"PRIu64
-				       "    RX-bytes:%14"PRIu64"\n",
-				       j, stats.q_ipackets[j],
-				       stats.q_errors[j], stats.q_ibytes[j]);
-			}
-			printf("\n");
-		}
-		if (port->tx_queue_stats_mapping_enabled) {
-			for (j = 0; j < RTE_ETHDEV_QUEUE_STAT_CNTRS; j++) {
-				printf("  Stats reg %2d TX-packets:%14"PRIu64
-				       "                                 TX-bytes:%14"
-				       PRIu64"\n",
-				       j, stats.q_opackets[j],
-				       stats.q_obytes[j]);
-			}
+				&ports_stats[pt_id].tx_stream->tx_burst_stats);
 		}
 
 		printf("  %s--------------------------------%s\n",
@@ -2236,11 +2169,6 @@ start_packet_forwarding(int with_tx_first)
 	rxtx_config_display();
 
 	fwd_stats_reset();
-	for (i = 0; i < cur_fwd_config.nb_fwd_ports; i++) {
-		pt_id = fwd_ports_ids[i];
-		port = &ports[pt_id];
-		map_port_queue_stats_mapping_registers(pt_id, port);
-	}
 	if (with_tx_first) {
 		port_fwd_begin = tx_only_engine.port_fwd_begin;
 		if (port_fwd_begin != NULL) {
@@ -3371,84 +3299,6 @@ dev_event_callback(const char *device_name, enum rte_dev_event_type type,
 	}
 }
 
-static int
-set_tx_queue_stats_mapping_registers(portid_t port_id, struct rte_port *port)
-{
-	uint16_t i;
-	int diag;
-	uint8_t mapping_found = 0;
-
-	for (i = 0; i < nb_tx_queue_stats_mappings; i++) {
-		if ((tx_queue_stats_mappings[i].port_id == port_id) &&
-				(tx_queue_stats_mappings[i].queue_id < nb_txq )) {
-			diag = rte_eth_dev_set_tx_queue_stats_mapping(port_id,
-					tx_queue_stats_mappings[i].queue_id,
-					tx_queue_stats_mappings[i].stats_counter_id);
-			if (diag != 0)
-				return diag;
-			mapping_found = 1;
-		}
-	}
-	if (mapping_found)
-		port->tx_queue_stats_mapping_enabled = 1;
-	return 0;
-}
-
-static int
-set_rx_queue_stats_mapping_registers(portid_t port_id, struct rte_port *port)
-{
-	uint16_t i;
-	int diag;
-	uint8_t mapping_found = 0;
-
-	for (i = 0; i < nb_rx_queue_stats_mappings; i++) {
-		if ((rx_queue_stats_mappings[i].port_id == port_id) &&
-				(rx_queue_stats_mappings[i].queue_id < nb_rxq )) {
-			diag = rte_eth_dev_set_rx_queue_stats_mapping(port_id,
-					rx_queue_stats_mappings[i].queue_id,
-					rx_queue_stats_mappings[i].stats_counter_id);
-			if (diag != 0)
-				return diag;
-			mapping_found = 1;
-		}
-	}
-	if (mapping_found)
-		port->rx_queue_stats_mapping_enabled = 1;
-	return 0;
-}
-
-static void
-map_port_queue_stats_mapping_registers(portid_t pi, struct rte_port *port)
-{
-	int diag = 0;
-
-	diag = set_tx_queue_stats_mapping_registers(pi, port);
-	if (diag != 0) {
-		if (diag == -ENOTSUP) {
-			port->tx_queue_stats_mapping_enabled = 0;
-			printf("TX queue stats mapping not supported port id=%d\n", pi);
-		}
-		else
-			rte_exit(EXIT_FAILURE,
-					"set_tx_queue_stats_mapping_registers "
-					"failed for port id=%d diag=%d\n",
-					pi, diag);
-	}
-
-	diag = set_rx_queue_stats_mapping_registers(pi, port);
-	if (diag != 0) {
-		if (diag == -ENOTSUP) {
-			port->rx_queue_stats_mapping_enabled = 0;
-			printf("RX queue stats mapping not supported port id=%d\n", pi);
-		}
-		else
-			rte_exit(EXIT_FAILURE,
-					"set_rx_queue_stats_mapping_registers "
-					"failed for port id=%d diag=%d\n",
-					pi, diag);
-	}
-}
-
 static void
 rxtx_port_config(struct rte_port *port)
 {
@@ -3545,7 +3395,6 @@ init_port_config(void)
 		if (ret != 0)
 			return;
 
-		map_port_queue_stats_mapping_registers(pid, port);
 #if defined RTE_NET_IXGBE && defined RTE_LIBRTE_IXGBE_BYPASS
 		rte_pmd_ixgbe_bypass_init(pid);
 #endif
@@ -3756,8 +3605,6 @@ init_port_dcb_config(portid_t pid,
 	if (retval != 0)
 		return retval;
 
-	map_port_queue_stats_mapping_registers(pid, rte_port);
-
 	rte_port->dcb_flag = 1;
 
 	return 0;
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index 6b901a8..5f23162 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -206,8 +206,6 @@ struct rte_port {
 	uint16_t                tunnel_tso_segsz; /**< Segmentation offload MSS for tunneled pkts. */
 	uint16_t                tx_vlan_id;/**< The tag ID */
 	uint16_t                tx_vlan_id_outer;/**< The outer tag ID */
-	uint8_t                 tx_queue_stats_mapping_enabled;
-	uint8_t                 rx_queue_stats_mapping_enabled;
 	volatile uint16_t        port_status;    /**< port started or not */
 	uint8_t                 need_setup;     /**< port just attached */
 	uint8_t                 need_reconfig;  /**< need reconfiguring port or not */
@@ -326,25 +324,6 @@ enum dcb_mode_enable
 	DCB_ENABLED
 };
 
-#define MAX_TX_QUEUE_STATS_MAPPINGS 1024 /* MAX_PORT of 32 @ 32 tx_queues/port */
-#define MAX_RX_QUEUE_STATS_MAPPINGS 4096 /* MAX_PORT of 32 @ 128 rx_queues/port */
-
-struct queue_stats_mappings {
-	portid_t port_id;
-	uint16_t queue_id;
-	uint8_t stats_counter_id;
-} __rte_cache_aligned;
-
-extern struct queue_stats_mappings tx_queue_stats_mappings_array[];
-extern struct queue_stats_mappings rx_queue_stats_mappings_array[];
-
-/* Assign both tx and rx queue stats mappings to the same default values */
-extern struct queue_stats_mappings *tx_queue_stats_mappings;
-extern struct queue_stats_mappings *rx_queue_stats_mappings;
-
-extern uint16_t nb_tx_queue_stats_mappings;
-extern uint16_t nb_rx_queue_stats_mappings;
-
 extern uint8_t xstats_hide_zero; /**< Hide zero values for xstats display */
 
 /* globals used for configuration */
@@ -790,7 +769,6 @@ void nic_stats_display(portid_t port_id);
 void nic_stats_clear(portid_t port_id);
 void nic_xstats_display(portid_t port_id);
 void nic_xstats_clear(portid_t port_id);
-void nic_stats_mapping_display(portid_t port_id);
 void device_infos_display(const char *identifier);
 void port_infos_display(portid_t port_id);
 void port_summary_display(portid_t port_id);
-- 
2.7.4


^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [dpdk-dev] [RFC V4] app/testpmd: fix queue stats mapping configuration
  2020-11-20 11:50           ` [dpdk-dev] [RFC V4] " Min Hu (Connor)
@ 2020-11-20 17:26             ` Ferruh Yigit
  2020-11-20 23:21               ` Stephen Hemminger
  0 siblings, 1 reply; 15+ messages in thread
From: Ferruh Yigit @ 2020-11-20 17:26 UTC (permalink / raw)
  To: Min Hu (Connor), dev; +Cc: bruce.richardson, thomas.monjalon, lihuisong

On 11/20/2020 11:50 AM, Min Hu (Connor) wrote:
> From: Huisong Li <lihuisong@huawei.com>
> 
> Currently, the queue stats mapping has the following problems:
> 1) Many PMD drivers don't support queue stats mapping. But there is no
>   failure message after executing the command "set stat_qmap rx 0 2 2".
> 2) Once queue mapping is set, unrelated and unmapped queues are also
> displayed.
> 3) The configuration result does not take effect or can not be queried
> in real time.
> 4) The mapping arrays, "tx_queue_stats_mappings_array" &
> "rx_queue_stats_mappings_array" are global and their sizes are based on
> fixed max port and queue size assumptions.
> 5) These record structures, 'map_port_queue_stats_mapping_registers()'
>   and its sub functions are redundant for majority of drivers.
> 6) The display of the queue stats and queue stats mapping is mixed
>   together.
> 
> Since xstats is used to obtain queue statistics, we have made the following
>   simplifications and adjustments:
> 1) If PMD requires and supports queue stats mapping, configure to driver in
>   real time by calling ethdev API after executing the command
>   "set stat_qmap rx/tx ...". If not, the command can not be accepted.
> 2) Based on the above adjustments, these record structures,
> 'map_port_queue_stats_mapping_registers()' and its sub functions can be
>   removed. "tx-queue-stats-mapping" & "rx-queue-stats-mapping" parameters,
>   and 'parse_queue_stats_mapping_config()' can be removed too.
> 3) remove display of queue stats mapping in 'fwd_stats_display()' &
>   'nic_stats_display()', and obtain queue stats by xstats.
>   Since the record structures are removed, 'nic_stats_mapping_display()'
>   can be deleted.
> 
> Fixes: 4dccdc789bf4b ("app/testpmd: simplify handling of stats mappings error")
> Fixes: 013af9b6b64f6 ("app/testpmd: various updates")
> Fixes: ed30d9b691b21 ("app/testpmd: add stats per queue")
> 
> Signed-off-by: Huisong Li <lihuisong@huawei.com>

Overall	looks good to me.
I did a quick test and didn't see anything unexpected. 'xstats' or the
'dpdk-proc-info' app can still be used to get queue stats, and
"set stat_qmap ..." works as expected.

But it is a little late for this release cycle; would you be OK with getting
this in at the beginning of the next release?

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [dpdk-dev] [RFC V4] app/testpmd: fix queue stats mapping configuration
  2020-11-20 17:26             ` Ferruh Yigit
@ 2020-11-20 23:21               ` Stephen Hemminger
  2020-11-20 23:33                 ` Ferruh Yigit
  0 siblings, 1 reply; 15+ messages in thread
From: Stephen Hemminger @ 2020-11-20 23:21 UTC (permalink / raw)
  To: Ferruh Yigit
  Cc: Min Hu (Connor), dev, bruce.richardson, thomas.monjalon, lihuisong

On Fri, 20 Nov 2020 17:26:55 +0000
Ferruh Yigit <ferruh.yigit@intel.com> wrote:

> On 11/20/2020 11:50 AM, Min Hu (Connor) wrote:
> > From: Huisong Li <lihuisong@huawei.com>
> > 
> > Currently, the queue stats mapping has the following problems:
> > 1) Many PMD drivers don't support queue stats mapping. But there is no
> >   failure message after executing the command "set stat_qmap rx 0 2 2".
> > 2) Once queue mapping is set, unrelated and unmapped queues are also
> > displayed.
> > 3) The configuration result does not take effect or can not be queried
> > in real time.
> > 4) The mapping arrays, "tx_queue_stats_mappings_array" &
> > "rx_queue_stats_mappings_array" are global and their sizes are based on
> > fixed max port and queue size assumptions.
> > 5) These record structures, 'map_port_queue_stats_mapping_registers()'
> >   and its sub functions are redundant for majority of drivers.
> > 6) The display of the queue stats and queue stats mapping is mixed
> >   together.
> > 
> > Since xstats is used to obtain queue statistics, we have made the following
> >   simplifications and adjustments:
> > 1) If PMD requires and supports queue stats mapping, configure to driver in
> >   real time by calling ethdev API after executing the command
> >   "set stat_qmap rx/tx ...". If not, the command can not be accepted.
> > 2) Based on the above adjustments, these record structures,
> > 'map_port_queue_stats_mapping_registers()' and its sub functions can be
> >   removed. "tx-queue-stats-mapping" & "rx-queue-stats-mapping" parameters,
> >   and 'parse_queue_stats_mapping_config()' can be removed too.
> > 3) remove display of queue stats mapping in 'fwd_stats_display()' &
> >   'nic_stats_display()', and obtain queue stats by xstats.
> >   Since the record structures are removed, 'nic_stats_mapping_display()'
> >   can be deleted.
> > 
> > Fixes: 4dccdc789bf4b ("app/testpmd: simplify handling of stats mappings error")
> > Fixes: 013af9b6b64f6 ("app/testpmd: various updates")
> > Fixes: ed30d9b691b21 ("app/testpmd: add stats per queue")
> > 
> > Signed-off-by: Huisong Li <lihuisong@huawei.com>  
> 
> Overall	looks good to me.
> I did a quick test didn't see anything unexpected. 'xstats' or 'dpdk-proc-info' 
> app still can be used to get queue stats and "set stat_qmap ..." is working as 
> expected.
> 
> But it is a little late for this release cycle, would you be OK to get this at 
> the beginning of next release?

Could we plan to deprecate queue stats mapping in future when xstats work is done?

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [dpdk-dev] [RFC V4] app/testpmd: fix queue stats mapping configuration
  2020-11-20 23:21               ` Stephen Hemminger
@ 2020-11-20 23:33                 ` Ferruh Yigit
  2020-11-21  4:29                   ` Stephen Hemminger
  0 siblings, 1 reply; 15+ messages in thread
From: Ferruh Yigit @ 2020-11-20 23:33 UTC (permalink / raw)
  To: Stephen Hemminger
  Cc: Min Hu (Connor), dev, bruce.richardson, thomas.monjalon, lihuisong

On 11/20/2020 11:21 PM, Stephen Hemminger wrote:
> On Fri, 20 Nov 2020 17:26:55 +0000
> Ferruh Yigit <ferruh.yigit@intel.com> wrote:
> 
>> On 11/20/2020 11:50 AM, Min Hu (Connor) wrote:
>>> From: Huisong Li <lihuisong@huawei.com>
>>>
>>> Currently, the queue stats mapping has the following problems:
>>> 1) Many PMD drivers don't support queue stats mapping. But there is no
>>>    failure message after executing the command "set stat_qmap rx 0 2 2".
>>> 2) Once queue mapping is set, unrelated and unmapped queues are also
>>> displayed.
>>> 3) The configuration result does not take effect or can not be queried
>>> in real time.
>>> 4) The mapping arrays, "tx_queue_stats_mappings_array" &
>>> "rx_queue_stats_mappings_array" are global and their sizes are based on
>>> fixed max port and queue size assumptions.
>>> 5) These record structures, 'map_port_queue_stats_mapping_registers()'
>>>    and its sub functions are redundant for majority of drivers.
>>> 6) The display of the queue stats and queue stats mapping is mixed
>>>    together.
>>>
>>> Since xstats is used to obtain queue statistics, we have made the following
>>>    simplifications and adjustments:
>>> 1) If PMD requires and supports queue stats mapping, configure to driver in
>>>    real time by calling ethdev API after executing the command
>>>    "set stat_qmap rx/tx ...". If not, the command can not be accepted.
>>> 2) Based on the above adjustments, these record structures,
>>> 'map_port_queue_stats_mapping_registers()' and its sub functions can be
>>>    removed. "tx-queue-stats-mapping" & "rx-queue-stats-mapping" parameters,
>>>    and 'parse_queue_stats_mapping_config()' can be removed too.
>>> 3) remove display of queue stats mapping in 'fwd_stats_display()' &
>>>    'nic_stats_display()', and obtain queue stats by xstats.
>>>    Since the record structures are removed, 'nic_stats_mapping_display()'
>>>    can be deleted.
>>>
>>> Fixes: 4dccdc789bf4b ("app/testpmd: simplify handling of stats mappings error")
>>> Fixes: 013af9b6b64f6 ("app/testpmd: various updates")
>>> Fixes: ed30d9b691b21 ("app/testpmd: add stats per queue")
>>>
>>> Signed-off-by: Huisong Li <lihuisong@huawei.com>
>>
>> Overall	looks good to me.
>> I did a quick test didn't see anything unexpected. 'xstats' or 'dpdk-proc-info'
>> app still can be used to get queue stats and "set stat_qmap ..." is working as
>> expected.
>>
>> But it is a little late for this release cycle, would you be OK to get this at
>> the beginning of next release?
> 
> Could we plan to deprecate queue stats mapping in future when xstats work is done?
> 

Even with queue stats moved to xstats, a few PMDs still need this configuration and API.
And this patch already cleans the queue stats mapping noise from testpmd.

What is the benefit/motivation to deprecate the queue stats mapping API?

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [dpdk-dev] [RFC V4] app/testpmd: fix queue stats mapping configuration
  2020-11-20 23:33                 ` Ferruh Yigit
@ 2020-11-21  4:29                   ` Stephen Hemminger
  2020-11-23  7:22                     ` Min Hu (Connor)
  2020-11-23  9:51                     ` Ferruh Yigit
  0 siblings, 2 replies; 15+ messages in thread
From: Stephen Hemminger @ 2020-11-21  4:29 UTC (permalink / raw)
  To: Ferruh Yigit
  Cc: Min Hu (Connor), dev, bruce.richardson, thomas.monjalon, lihuisong

On Fri, 20 Nov 2020 23:33:40 +0000
Ferruh Yigit <ferruh.yigit@intel.com> wrote:

> On 11/20/2020 11:21 PM, Stephen Hemminger wrote:
> > On Fri, 20 Nov 2020 17:26:55 +0000
> > Ferruh Yigit <ferruh.yigit@intel.com> wrote:
> >   
> >> On 11/20/2020 11:50 AM, Min Hu (Connor) wrote:  
> >>> From: Huisong Li <lihuisong@huawei.com>
> >>>
> >>> Currently, the queue stats mapping has the following problems:
> >>> 1) Many PMD drivers don't support queue stats mapping. But there is no
> >>>    failure message after executing the command "set stat_qmap rx 0 2 2".
> >>> 2) Once queue mapping is set, unrelated and unmapped queues are also
> >>> displayed.
> >>> 3) The configuration result does not take effect or can not be queried
> >>> in real time.
> >>> 4) The mapping arrays, "tx_queue_stats_mappings_array" &
> >>> "rx_queue_stats_mappings_array" are global and their sizes are based on
> >>> fixed max port and queue size assumptions.
> >>> 5) These record structures, 'map_port_queue_stats_mapping_registers()'
> >>>    and its sub functions are redundant for majority of drivers.
> >>> 6) The display of the queue stats and queue stats mapping is mixed
> >>>    together.
> >>>
> >>> Since xstats is used to obtain queue statistics, we have made the following
> >>>    simplifications and adjustments:
> >>> 1) If PMD requires and supports queue stats mapping, configure to driver in
> >>>    real time by calling ethdev API after executing the command
> >>>    "set stat_qmap rx/tx ...". If not, the command can not be accepted.
> >>> 2) Based on the above adjustments, these record structures,
> >>> 'map_port_queue_stats_mapping_registers()' and its sub functions can be
> >>>    removed. "tx-queue-stats-mapping" & "rx-queue-stats-mapping" parameters,
> >>>    and 'parse_queue_stats_mapping_config()' can be removed too.
> >>> 3) remove display of queue stats mapping in 'fwd_stats_display()' &
> >>>    'nic_stats_display()', and obtain queue stats by xstats.
> >>>    Since the record structures are removed, 'nic_stats_mapping_display()'
> >>>    can be deleted.
> >>>
> >>> Fixes: 4dccdc789bf4b ("app/testpmd: simplify handling of stats mappings error")
> >>> Fixes: 013af9b6b64f6 ("app/testpmd: various updates")
> >>> Fixes: ed30d9b691b21 ("app/testpmd: add stats per queue")
> >>>
> >>> Signed-off-by: Huisong Li <lihuisong@huawei.com>  
> >>
> >> Overall	looks good to me.
> >> I did a quick test didn't see anything unexpected. 'xstats' or 'dpdk-proc-info'
> >> app still can be used to get queue stats and "set stat_qmap ..." is working as
> >> expected.
> >>
> >> But it is a little late for this release cycle, would you be OK to get this at
> >> the beginning of next release?  
> > 
> > Could we plan to deprecate queue stats mapping in future when xstats work is done?
> >   
> 
> Even queue stats moved to xstats, a few PMDs still need this configuration and API.
> And this patch already cleans the queue stats mapping noise from testpmd.
> 
> What is the benefit/motivation to deprecate the queue stats mapping API?

Mostly because so few drivers implement it that any application using it
is going to be non-portable.

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [dpdk-dev] [RFC V4] app/testpmd: fix queue stats mapping configuration
  2020-11-21  4:29                   ` Stephen Hemminger
@ 2020-11-23  7:22                     ` Min Hu (Connor)
  2020-11-23  9:51                     ` Ferruh Yigit
  1 sibling, 0 replies; 15+ messages in thread
From: Min Hu (Connor) @ 2020-11-23  7:22 UTC (permalink / raw)
  To: Stephen Hemminger, Ferruh Yigit
  Cc: dev, bruce.richardson, thomas.monjalon, lihuisong



On 2020/11/21 12:29, Stephen Hemminger wrote:
> On Fri, 20 Nov 2020 23:33:40 +0000
> Ferruh Yigit <ferruh.yigit@intel.com> wrote:
> 
>> On 11/20/2020 11:21 PM, Stephen Hemminger wrote:
>>> On Fri, 20 Nov 2020 17:26:55 +0000
>>> Ferruh Yigit <ferruh.yigit@intel.com> wrote:
>>>    
>>>> On 11/20/2020 11:50 AM, Min Hu (Connor) wrote:
>>>>> From: Huisong Li <lihuisong@huawei.com>
>>>>>
>>>>> Currently, the queue stats mapping has the following problems:
>>>>> 1) Many PMD drivers don't support queue stats mapping. But there is no
>>>>>     failure message after executing the command "set stat_qmap rx 0 2 2".
>>>>> 2) Once queue mapping is set, unrelated and unmapped queues are also
>>>>> displayed.
>>>>> 3) The configuration result does not take effect or can not be queried
>>>>> in real time.
>>>>> 4) The mapping arrays, "tx_queue_stats_mappings_array" &
>>>>> "rx_queue_stats_mappings_array" are global and their sizes are based on
>>>>> fixed max port and queue size assumptions.
>>>>> 5) These record structures, 'map_port_queue_stats_mapping_registers()'
>>>>>     and its sub functions are redundant for majority of drivers.
>>>>> 6) The display of the queue stats and queue stats mapping is mixed
>>>>>     together.
>>>>>
>>>>> Since xstats is used to obtain queue statistics, we have made the following
>>>>>     simplifications and adjustments:
>>>>> 1) If PMD requires and supports queue stats mapping, configure to driver in
>>>>>     real time by calling ethdev API after executing the command
>>>>>     "set stat_qmap rx/tx ...". If not, the command can not be accepted.
>>>>> 2) Based on the above adjustments, these record structures,
>>>>> 'map_port_queue_stats_mapping_registers()' and its sub functions can be
>>>>>     removed. "tx-queue-stats-mapping" & "rx-queue-stats-mapping" parameters,
>>>>>     and 'parse_queue_stats_mapping_config()' can be removed too.
>>>>> 3) remove display of queue stats mapping in 'fwd_stats_display()' &
>>>>>     'nic_stats_display()', and obtain queue stats by xstats.
>>>>>     Since the record structures are removed, 'nic_stats_mapping_display()'
>>>>>     can be deleted.
>>>>>
>>>>> Fixes: 4dccdc789bf4b ("app/testpmd: simplify handling of stats mappings error")
>>>>> Fixes: 013af9b6b64f6 ("app/testpmd: various updates")
>>>>> Fixes: ed30d9b691b21 ("app/testpmd: add stats per queue")
>>>>>
>>>>> Signed-off-by: Huisong Li <lihuisong@huawei.com>
>>>>
>>>> Overall looks good to me.
>>>> I did a quick test and didn't see anything unexpected. 'xstats' or the
>>>> 'dpdk-proc-info' app can still be used to get queue stats, and
>>>> "set stat_qmap ..." is working as expected.
>>>>
>>>> But it is a little late for this release cycle, would you be OK to get this at
>>>> the beginning of next release?
>>>
>>> Could we plan to deprecate queue stats mapping in future when xstats work is done?
>>>    
>>
>> Even queue stats moved to xstats, a few PMDs still need this configuration and API.
>> And this patch already cleans the queue stats mapping noise from testpmd.
>>
>> What is the benefit/motivation to deprecate the queue stats mapping API?
> 
> Mostly because so few drivers implement it that any application using it
> is going to be non-portable.

Hi, Stephen,
For PMDs with few hardware queue statistics resources, such as ixgbe,
all queue stats are recorded in statistics register 0 by default. If
users want detailed per-queue stats, they have to call the queue stats
mapping API. From this point of view, the API is necessary for these
PMDs.
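
As a concrete illustration of that flow (a sketch of testpmd usage; the
port, queue, and stat-index values below are arbitrary examples, following
the "set stat_qmap rx 0 2 2" syntax discussed in this thread):

	testpmd> set stat_qmap rx 0 0 0
	testpmd> set stat_qmap rx 0 1 1
	testpmd> show port xstats 0

On a PMD that does not implement the mapping API, the "set stat_qmap"
command now fails with an error instead of being silently accepted.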




* Re: [dpdk-dev] [RFC V4] app/testpmd: fix queue stats mapping configuration
  2020-11-21  4:29                   ` Stephen Hemminger
  2020-11-23  7:22                     ` Min Hu (Connor)
@ 2020-11-23  9:51                     ` Ferruh Yigit
  1 sibling, 0 replies; 15+ messages in thread
From: Ferruh Yigit @ 2020-11-23  9:51 UTC (permalink / raw)
  To: Stephen Hemminger
  Cc: Min Hu (Connor), dev, bruce.richardson, thomas.monjalon, lihuisong

On 11/21/2020 4:29 AM, Stephen Hemminger wrote:
> On Fri, 20 Nov 2020 23:33:40 +0000
> Ferruh Yigit <ferruh.yigit@intel.com> wrote:
> 
>> On 11/20/2020 11:21 PM, Stephen Hemminger wrote:
>>> On Fri, 20 Nov 2020 17:26:55 +0000
>>> Ferruh Yigit <ferruh.yigit@intel.com> wrote:
>>>    
>>>> On 11/20/2020 11:50 AM, Min Hu (Connor) wrote:
>>>>> From: Huisong Li <lihuisong@huawei.com>
>>>>>
>>>>> Currently, the queue stats mapping has the following problems:
>>>>> 1) Many PMD drivers don't support queue stats mapping. But there is no
>>>>>     failure message after executing the command "set stat_qmap rx 0 2 2".
>>>>> 2) Once queue mapping is set, unrelated and unmapped queues are also
>>>>> displayed.
>>>>> 3) The configuration result does not take effect or cannot be queried
>>>>> in real time.
>>>>> 4) The mapping arrays, "tx_queue_stats_mappings_array" &
>>>>> "rx_queue_stats_mappings_array" are global and their sizes are based on
>>>>> fixed max port and queue size assumptions.
>>>>> 5) These record structures, 'map_port_queue_stats_mapping_registers()'
>>>>>     and its sub functions are redundant for the majority of drivers.
>>>>> 6) The display of the queue stats and queue stats mapping is mixed
>>>>>     together.
>>>>>
>>>>> Since xstats is used to obtain queue statistics, we have made the following
>>>>>     simplifications and adjustments:
>>>>> 1) If the PMD requires and supports queue stats mapping, configure it in
>>>>>     the driver in real time by calling the ethdev API when the command
>>>>>     "set stat_qmap rx/tx ..." is executed. If not, the command is rejected.
>>>>> 2) Based on the above adjustments, these record structures,
>>>>> 'map_port_queue_stats_mapping_registers()' and its sub functions can be
>>>>>     removed. "tx-queue-stats-mapping" & "rx-queue-stats-mapping" parameters,
>>>>>     and 'parse_queue_stats_mapping_config()' can be removed too.
>>>>> 3) Remove the display of queue stats mapping in 'fwd_stats_display()' &
>>>>>     'nic_stats_display()', and obtain queue stats by xstats.
>>>>>     Since the record structures are removed, 'nic_stats_mapping_display()'
>>>>>     can be deleted.
>>>>>
>>>>> Fixes: 4dccdc789bf4b ("app/testpmd: simplify handling of stats mappings error")
>>>>> Fixes: 013af9b6b64f6 ("app/testpmd: various updates")
>>>>> Fixes: ed30d9b691b21 ("app/testpmd: add stats per queue")
>>>>>
>>>>> Signed-off-by: Huisong Li <lihuisong@huawei.com>
>>>>
>>>> Overall looks good to me.
>>>> I did a quick test and didn't see anything unexpected. 'xstats' or the
>>>> 'dpdk-proc-info' app can still be used to get queue stats, and
>>>> "set stat_qmap ..." is working as expected.
>>>>
>>>> But it is a little late for this release cycle, would you be OK to get this at
>>>> the beginning of next release?
>>>
>>> Could we plan to deprecate queue stats mapping in future when xstats work is done?
>>>    
>>
>> Even queue stats moved to xstats, a few PMDs still need this configuration and API.
>> And this patch already cleans the queue stats mapping noise from testpmd.
>>
>> What is the benefit/motivation to deprecate the queue stats mapping API?
> 
> Mostly because so few drivers implement it that any application using it
> is going to be non-portable.
> 

'rte_eth_dev_set_.*_queue_stats_mapping()' APIs return '-ENOTSUP' if the 
underlying PMD doesn't support it; applications can check for this to keep 
their code portable.


end of thread, back to index

Thread overview: 15+ messages
-- links below jump to the message on this page --
2020-10-20  8:26 [dpdk-dev] [RFC V2 0/2] fix queue stats mapping Min Hu (Connor)
2020-10-20  8:26 ` [dpdk-dev] [RFC V2 1/2] app/testpmd: fix queue stats mapping configuration Min Hu (Connor)
2020-10-30 20:54   ` Ferruh Yigit
2020-11-03  6:30     ` Min Hu (Connor)
2020-11-12  2:28       ` Min Hu (Connor)
2020-11-12  9:52         ` Ferruh Yigit
2020-11-18  3:39           ` Min Hu (Connor)
2020-11-20 11:50           ` [dpdk-dev] [RFC V4] " Min Hu (Connor)
2020-11-20 17:26             ` Ferruh Yigit
2020-11-20 23:21               ` Stephen Hemminger
2020-11-20 23:33                 ` Ferruh Yigit
2020-11-21  4:29                   ` Stephen Hemminger
2020-11-23  7:22                     ` Min Hu (Connor)
2020-11-23  9:51                     ` Ferruh Yigit
2020-10-20  8:26 ` [dpdk-dev] [RFC V2 2/2] app/testpmd: fix starting failed with queue-stats-mapping Min Hu (Connor)
