DPDK patches and discussions
* [PATCH 0/4] small fixes and improvements for qos_sched example
@ 2023-02-03 10:05 Bruce Richardson
  2023-02-03 10:05 ` [PATCH 1/4] examples/qos_sched: fix errors when TX port not up Bruce Richardson
                   ` (4 more replies)
  0 siblings, 5 replies; 10+ messages in thread
From: Bruce Richardson @ 2023-02-03 10:05 UTC (permalink / raw)
  To: dev; +Cc: jasvinder.singh, Bruce Richardson

This patchset contains a set of fixes and improvements for the qos_sched
application. After this patchset the code is shorter, and also seems a
little faster in my performance tests.

Bruce Richardson (4):
  examples/qos_sched: fix errors when TX port not up
  examples/qos_sched: remove TX buffering
  examples/qos_sched: use bigger bursts on dequeue
  examples/qos_sched: remove limit on core ids

 doc/guides/sample_app_ug/qos_scheduler.rst |  2 +-
 examples/qos_sched/app_thread.c            | 94 +++-------------------
 examples/qos_sched/args.c                  | 72 +----------------
 examples/qos_sched/init.c                  | 10 +++
 examples/qos_sched/main.c                  | 12 ---
 examples/qos_sched/main.h                  | 18 +----
 6 files changed, 25 insertions(+), 183 deletions(-)

--
2.37.2



* [PATCH 1/4] examples/qos_sched: fix errors when TX port not up
  2023-02-03 10:05 [PATCH 0/4] small fixes and improvements for qos_sched example Bruce Richardson
@ 2023-02-03 10:05 ` Bruce Richardson
  2023-02-17 16:19   ` Dumitrescu, Cristian
  2023-02-03 10:05 ` [PATCH 2/4] examples/qos_sched: remove TX buffering Bruce Richardson
                   ` (3 subsequent siblings)
  4 siblings, 1 reply; 10+ messages in thread
From: Bruce Richardson @ 2023-02-03 10:05 UTC (permalink / raw)
  To: dev; +Cc: jasvinder.singh, Bruce Richardson, stable, Cristian Dumitrescu

The TX port config will fail if the port is not up, so wait up to
10 seconds at startup for the link to come up.
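
The retry logic amounts to the small helper sketched below; the function
name and the 100 ms x 100 retry split are illustrative only, not lifted
verbatim from the app:

#include <rte_ethdev.h>
#include <rte_cycles.h>

/* Wait up to 10 seconds (100 retries x 100 ms) for the port's link to come
 * up. Returns 0 if the link came up, -1 if the timeout expired. */
static int
wait_for_link_up(uint16_t port_id)
{
	struct rte_eth_link link = {0};
	int retries = 100;

	rte_eth_link_get(port_id, &link);
	while (link.link_status == RTE_ETH_LINK_DOWN && retries-- > 0) {
		rte_delay_ms(100);
		rte_eth_link_get(port_id, &link);
	}
	return link.link_status == RTE_ETH_LINK_UP ? 0 : -1;
}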

Fixes: de3cfa2c9823 ("sched: initial import")
Cc: stable@dpdk.org

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
 examples/qos_sched/init.c | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/examples/qos_sched/init.c b/examples/qos_sched/init.c
index 0709aec10c..6020367705 100644
--- a/examples/qos_sched/init.c
+++ b/examples/qos_sched/init.c
@@ -326,6 +326,8 @@ int app_init(void)
 	for(i = 0; i < nb_pfc; i++) {
 		uint32_t socket = rte_lcore_to_socket_id(qos_conf[i].rx_core);
 		struct rte_ring *ring;
+		struct rte_eth_link link = {0};
+		int retry_count = 100, retry_delay = 100; /* try every 100ms for 10 sec */
 
 		snprintf(ring_name, MAX_NAME_LEN, "ring-%u-%u", i, qos_conf[i].rx_core);
 		ring = rte_ring_lookup(ring_name);
@@ -356,6 +358,14 @@ int app_init(void)
 		app_init_port(qos_conf[i].rx_port, qos_conf[i].mbuf_pool);
 		app_init_port(qos_conf[i].tx_port, qos_conf[i].mbuf_pool);
 
+		rte_eth_link_get(qos_conf[i].tx_port, &link);
+		if (link.link_status == 0)
+			printf("Waiting for link on port %u\n", qos_conf[i].tx_port);
+		while (link.link_status == 0 && retry_count--) {
+			rte_delay_ms(retry_delay);
+			rte_eth_link_get(qos_conf[i].tx_port, &link);
+		}
+
 		qos_conf[i].sched_port = app_init_sched_port(qos_conf[i].tx_port, socket);
 	}
 
-- 
2.37.2



* [PATCH 2/4] examples/qos_sched: remove TX buffering
  2023-02-03 10:05 [PATCH 0/4] small fixes and improvements for qos_sched example Bruce Richardson
  2023-02-03 10:05 ` [PATCH 1/4] examples/qos_sched: fix errors when TX port not up Bruce Richardson
@ 2023-02-03 10:05 ` Bruce Richardson
  2023-02-17 16:19   ` Dumitrescu, Cristian
  2023-02-03 10:05 ` [PATCH 3/4] examples/qos_sched: use bigger bursts on dequeue Bruce Richardson
                   ` (2 subsequent siblings)
  4 siblings, 1 reply; 10+ messages in thread
From: Bruce Richardson @ 2023-02-03 10:05 UTC (permalink / raw)
  To: dev; +Cc: jasvinder.singh, Bruce Richardson, Cristian Dumitrescu

Since the qos_sched app already dequeues packets from the QoS block in
batches, there is little point in buffering them further in the app -
just transmit the full burst of packets received from the QoS block.
With modern CPUs and write-combining doorbells, the worst-case cost of
smaller TX bursts is reduced anyway.
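
The resulting TX path is essentially the pattern sketched below (generic
names, not the app's actual variables): transmit the burst as-is and free
whatever the NIC did not accept:

#include <rte_ethdev.h>
#include <rte_mbuf.h>

/* Transmit a burst without buffering; drop (free) any packets the NIC
 * did not accept instead of retrying them later. */
static inline void
send_burst_no_buffering(uint16_t port_id, uint16_t queue_id,
		struct rte_mbuf **pkts, uint16_t nb_pkts)
{
	uint16_t nb_tx = rte_eth_tx_burst(port_id, queue_id, pkts, nb_pkts);

	if (nb_tx < nb_pkts)
		rte_pktmbuf_free_bulk(&pkts[nb_tx], nb_pkts - nb_tx);
}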

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
 examples/qos_sched/app_thread.c | 94 ++++-----------------------------
 examples/qos_sched/main.c       | 12 -----
 examples/qos_sched/main.h       |  6 ---
 3 files changed, 9 insertions(+), 103 deletions(-)

diff --git a/examples/qos_sched/app_thread.c b/examples/qos_sched/app_thread.c
index dbc878b553..1ea732aa91 100644
--- a/examples/qos_sched/app_thread.c
+++ b/examples/qos_sched/app_thread.c
@@ -104,82 +104,21 @@ app_rx_thread(struct thread_conf **confs)
 	}
 }
 
-
-
-/* Send the packet to an output interface
- * For performance reason function returns number of packets dropped, not sent,
- * so 0 means that all packets were sent successfully
- */
-
-static inline void
-app_send_burst(struct thread_conf *qconf)
-{
-	struct rte_mbuf **mbufs;
-	uint32_t n, ret;
-
-	mbufs = (struct rte_mbuf **)qconf->m_table;
-	n = qconf->n_mbufs;
-
-	do {
-		ret = rte_eth_tx_burst(qconf->tx_port, qconf->tx_queue, mbufs, (uint16_t)n);
-		/* we cannot drop the packets, so re-send */
-		/* update number of packets to be sent */
-		n -= ret;
-		mbufs = (struct rte_mbuf **)&mbufs[ret];
-	} while (n);
-}
-
-
-/* Send the packet to an output interface */
-static void
-app_send_packets(struct thread_conf *qconf, struct rte_mbuf **mbufs, uint32_t nb_pkt)
-{
-	uint32_t i, len;
-
-	len = qconf->n_mbufs;
-	for(i = 0; i < nb_pkt; i++) {
-		qconf->m_table[len] = mbufs[i];
-		len++;
-		/* enough pkts to be sent */
-		if (unlikely(len == burst_conf.tx_burst)) {
-			qconf->n_mbufs = len;
-			app_send_burst(qconf);
-			len = 0;
-		}
-	}
-
-	qconf->n_mbufs = len;
-}
-
 void
 app_tx_thread(struct thread_conf **confs)
 {
 	struct rte_mbuf *mbufs[burst_conf.qos_dequeue];
 	struct thread_conf *conf;
 	int conf_idx = 0;
-	int retval;
-	const uint64_t drain_tsc = (rte_get_tsc_hz() + US_PER_S - 1) / US_PER_S * BURST_TX_DRAIN_US;
+	int nb_pkts;
 
 	while ((conf = confs[conf_idx])) {
-		retval = rte_ring_sc_dequeue_bulk(conf->tx_ring, (void **)mbufs,
+		nb_pkts = rte_ring_sc_dequeue_burst(conf->tx_ring, (void **)mbufs,
 					burst_conf.qos_dequeue, NULL);
-		if (likely(retval != 0)) {
-			app_send_packets(conf, mbufs, burst_conf.qos_dequeue);
-
-			conf->counter = 0; /* reset empty read loop counter */
-		}
-
-		conf->counter++;
-
-		/* drain ring and TX queues */
-		if (unlikely(conf->counter > drain_tsc)) {
-			/* now check is there any packets left to be transmitted */
-			if (conf->n_mbufs != 0) {
-				app_send_burst(conf);
-
-				conf->n_mbufs = 0;
-			}
-			conf->counter = 0;
+		if (likely(nb_pkts != 0)) {
+			uint16_t nb_tx = rte_eth_tx_burst(conf->tx_port, 0, mbufs, nb_pkts);
+			if (nb_pkts != nb_tx)
+				rte_pktmbuf_free_bulk(&mbufs[nb_tx], nb_pkts - nb_tx);
 		}
 
 		conf_idx++;
@@ -230,7 +169,6 @@ app_mixed_thread(struct thread_conf **confs)
 	struct rte_mbuf *mbufs[burst_conf.ring_burst];
 	struct thread_conf *conf;
 	int conf_idx = 0;
-	const uint64_t drain_tsc = (rte_get_tsc_hz() + US_PER_S - 1) / US_PER_S * BURST_TX_DRAIN_US;
 
 	while ((conf = confs[conf_idx])) {
 		uint32_t nb_pkt;
@@ -250,23 +188,9 @@ app_mixed_thread(struct thread_conf **confs)
 		nb_pkt = rte_sched_port_dequeue(conf->sched_port, mbufs,
 					burst_conf.qos_dequeue);
 		if (likely(nb_pkt > 0)) {
-			app_send_packets(conf, mbufs, nb_pkt);
-
-			conf->counter = 0; /* reset empty read loop counter */
-		}
-
-		conf->counter++;
-
-		/* drain ring and TX queues */
-		if (unlikely(conf->counter > drain_tsc)) {
-
-			/* now check is there any packets left to be transmitted */
-			if (conf->n_mbufs != 0) {
-				app_send_burst(conf);
-
-				conf->n_mbufs = 0;
-			}
-			conf->counter = 0;
+			uint16_t nb_tx = rte_eth_tx_burst(conf->tx_port, 0, mbufs, nb_pkt);
+			if (nb_tx != nb_pkt)
+				rte_pktmbuf_free_bulk(&mbufs[nb_tx], nb_pkt - nb_tx);
 		}
 
 		conf_idx++;
diff --git a/examples/qos_sched/main.c b/examples/qos_sched/main.c
index dc6a17a646..b3c2c9ef23 100644
--- a/examples/qos_sched/main.c
+++ b/examples/qos_sched/main.c
@@ -105,12 +105,6 @@ app_main_loop(__rte_unused void *dummy)
 	}
 	else if (mode == (APP_TX_MODE | APP_WT_MODE)) {
 		for (i = 0; i < wt_idx; i++) {
-			wt_confs[i]->m_table = rte_malloc("table_wt", sizeof(struct rte_mbuf *)
-					* burst_conf.tx_burst, RTE_CACHE_LINE_SIZE);
-
-			if (wt_confs[i]->m_table == NULL)
-				rte_panic("flow %u unable to allocate memory buffer\n", i);
-
 			RTE_LOG(INFO, APP,
 				"flow %u lcoreid %u sched+write port %u\n",
 					i, lcore_id, wt_confs[i]->tx_port);
@@ -120,12 +114,6 @@ app_main_loop(__rte_unused void *dummy)
 	}
 	else if (mode == APP_TX_MODE) {
 		for (i = 0; i < tx_idx; i++) {
-			tx_confs[i]->m_table = rte_malloc("table_tx", sizeof(struct rte_mbuf *)
-					* burst_conf.tx_burst, RTE_CACHE_LINE_SIZE);
-
-			if (tx_confs[i]->m_table == NULL)
-				rte_panic("flow %u unable to allocate memory buffer\n", i);
-
 			RTE_LOG(INFO, APP, "flow%u lcoreid%u write port%u\n",
 					i, lcore_id, tx_confs[i]->tx_port);
 		}
diff --git a/examples/qos_sched/main.h b/examples/qos_sched/main.h
index 76a68f585f..b9c301483a 100644
--- a/examples/qos_sched/main.h
+++ b/examples/qos_sched/main.h
@@ -37,8 +37,6 @@ extern "C" {
 #define TX_HTHRESH 0  /**< Default values of TX host threshold reg. */
 #define TX_WTHRESH 0  /**< Default values of TX write-back threshold reg. */
 
-#define BURST_TX_DRAIN_US 100
-
 #ifndef APP_MAX_LCORE
 #if (RTE_MAX_LCORE > 64)
 #define APP_MAX_LCORE 64
@@ -75,10 +73,6 @@ struct thread_stat
 
 struct thread_conf
 {
-	uint32_t counter;
-	uint32_t n_mbufs;
-	struct rte_mbuf **m_table;
-
 	uint16_t rx_port;
 	uint16_t tx_port;
 	uint16_t rx_queue;
-- 
2.37.2



* [PATCH 3/4] examples/qos_sched: use bigger bursts on dequeue
  2023-02-03 10:05 [PATCH 0/4] small fixes and improvements for qos_sched example Bruce Richardson
  2023-02-03 10:05 ` [PATCH 1/4] examples/qos_sched: fix errors when TX port not up Bruce Richardson
  2023-02-03 10:05 ` [PATCH 2/4] examples/qos_sched: remove TX buffering Bruce Richardson
@ 2023-02-03 10:05 ` Bruce Richardson
  2023-02-17 16:20   ` Dumitrescu, Cristian
  2023-02-03 10:05 ` [PATCH 4/4] examples/qos_sched: remove limit on core ids Bruce Richardson
  2023-02-20 15:41 ` [PATCH 0/4] small fixes and improvements for qos_sched example Thomas Monjalon
  4 siblings, 1 reply; 10+ messages in thread
From: Bruce Richardson @ 2023-02-03 10:05 UTC (permalink / raw)
  To: dev; +Cc: jasvinder.singh, Bruce Richardson, Cristian Dumitrescu

While performance of the QoS block drops sharply if the dequeue size is
greater than or equal to the enqueue size, increasing the dequeue size
to just under the enqueue one gives improved performance when the
scheduler is not keeping up with the line rate.
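
For illustration only, a simplified worker iteration showing the sizing
relationship; the names and structure are a sketch rather than the app's
exact code:

#include <rte_ring.h>
#include <rte_sched.h>
#include <rte_mbuf.h>
#include <rte_ethdev.h>

#define QOS_ENQUEUE 64
#define QOS_DEQUEUE 63	/* keep the dequeue burst just under the enqueue one */

static void
qos_worker_iteration(struct rte_ring *rx_ring, struct rte_sched_port *port,
		uint16_t tx_port_id)
{
	struct rte_mbuf *mbufs[QOS_ENQUEUE];
	uint32_t nb;

	/* pull up to a full enqueue burst from the RX ring into the scheduler */
	nb = rte_ring_sc_dequeue_burst(rx_ring, (void **)mbufs, QOS_ENQUEUE, NULL);
	if (nb > 0)
		rte_sched_port_enqueue(port, mbufs, nb);

	/* dequeue slightly fewer packets per call and send them out */
	nb = rte_sched_port_dequeue(port, mbufs, QOS_DEQUEUE);
	if (nb > 0) {
		uint16_t nb_tx = rte_eth_tx_burst(tx_port_id, 0, mbufs, (uint16_t)nb);
		if (nb_tx < nb)
			rte_pktmbuf_free_bulk(&mbufs[nb_tx], nb - nb_tx);
	}
}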

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
 doc/guides/sample_app_ug/qos_scheduler.rst | 2 +-
 examples/qos_sched/main.h                  | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/doc/guides/sample_app_ug/qos_scheduler.rst b/doc/guides/sample_app_ug/qos_scheduler.rst
index f376554dd9..9936b99172 100644
--- a/doc/guides/sample_app_ug/qos_scheduler.rst
+++ b/doc/guides/sample_app_ug/qos_scheduler.rst
@@ -91,7 +91,7 @@ Optional application parameters include:
 *   B = I/O RX lcore write burst size to the output software rings,
     worker lcore read burst size from input software rings,QoS enqueue size (the default value is 64)

-*   C = QoS dequeue size (the default value is 32)
+*   C = QoS dequeue size (the default value is 63)

 *   D = Worker lcore write burst size to the NIC TX (the default value is 64)

diff --git a/examples/qos_sched/main.h b/examples/qos_sched/main.h
index b9c301483a..d8f3e32c83 100644
--- a/examples/qos_sched/main.h
+++ b/examples/qos_sched/main.h
@@ -26,7 +26,7 @@ extern "C" {

 #define MAX_PKT_RX_BURST 64
 #define PKT_ENQUEUE 64
-#define PKT_DEQUEUE 32
+#define PKT_DEQUEUE 63
 #define MAX_PKT_TX_BURST 64

 #define RX_PTHRESH 8 /**< Default values of RX prefetch threshold reg. */
--
2.37.2



* [PATCH 4/4] examples/qos_sched: remove limit on core ids
  2023-02-03 10:05 [PATCH 0/4] small fixes and improvements for qos_sched example Bruce Richardson
                   ` (2 preceding siblings ...)
  2023-02-03 10:05 ` [PATCH 3/4] examples/qos_sched: use bigger bursts on dequeue Bruce Richardson
@ 2023-02-03 10:05 ` Bruce Richardson
  2023-02-17 16:20   ` Dumitrescu, Cristian
  2023-02-20 15:41 ` [PATCH 0/4] small fixes and improvements for qos_sched example Thomas Monjalon
  4 siblings, 1 reply; 10+ messages in thread
From: Bruce Richardson @ 2023-02-03 10:05 UTC (permalink / raw)
  To: dev; +Cc: jasvinder.singh, Bruce Richardson, Cristian Dumitrescu

The qos_sched app was limited to using lcore ids 0 to 63 only, even if
RTE_MAX_LCORE was set to a higher value (as it is by default). Remove
some of the checks on the lcore ids in order to support running with
core ids of 64 and above.
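
The root of the limit is that a uint64_t core mask can only describe lcore
ids 0-63; checking each id against RTE_MAX_LCORE instead removes that
ceiling. A minimal sketch of the idea (helper name is illustrative):

#include <rte_lcore.h>

/* A 64-bit mask such as (1ULL << lcore_id) silently breaks for ids >= 64;
 * checking the id directly against RTE_MAX_LCORE does not. */
static int
lcore_id_is_usable(uint32_t lcore_id)
{
	if (lcore_id >= RTE_MAX_LCORE)
		return 0;
	return rte_lcore_is_enabled(lcore_id);
}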

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
 examples/qos_sched/args.c | 72 ++-------------------------------------
 examples/qos_sched/main.h | 10 +-----
 2 files changed, 4 insertions(+), 78 deletions(-)

diff --git a/examples/qos_sched/args.c b/examples/qos_sched/args.c
index b2959499ae..e97273152a 100644
--- a/examples/qos_sched/args.c
+++ b/examples/qos_sched/args.c
@@ -24,7 +24,6 @@
 
 static uint32_t app_main_core = 1;
 static uint32_t app_numa_mask;
-static uint64_t app_used_core_mask = 0;
 static uint64_t app_used_port_mask = 0;
 static uint64_t app_used_rx_port_mask = 0;
 static uint64_t app_used_tx_port_mask = 0;
@@ -82,43 +81,6 @@ app_usage(const char *prgname)
 }
 
 
-/* returns core mask used by DPDK */
-static uint64_t
-app_eal_core_mask(void)
-{
-	uint64_t cm = 0;
-	uint32_t i;
-
-	for (i = 0; i < APP_MAX_LCORE; i++) {
-		if (rte_lcore_has_role(i, ROLE_RTE))
-			cm |= (1ULL << i);
-	}
-
-	cm |= (1ULL << rte_get_main_lcore());
-
-	return cm;
-}
-
-
-/* returns total number of cores presented in a system */
-static uint32_t
-app_cpu_core_count(void)
-{
-	int i, len;
-	char path[PATH_MAX];
-	uint32_t ncores = 0;
-
-	for (i = 0; i < APP_MAX_LCORE; i++) {
-		len = snprintf(path, sizeof(path), SYS_CPU_DIR, i);
-		if (len <= 0 || (unsigned)len >= sizeof(path))
-			continue;
-
-		if (access(path, F_OK) == 0)
-			ncores++;
-	}
-
-	return ncores;
-}
 
 /* returns:
 	 number of values parsed
@@ -261,15 +223,6 @@ app_parse_flow_conf(const char *conf_str)
 	app_used_tx_port_mask |= mask;
 	app_used_port_mask |= mask;
 
-	mask = 1lu << pconf->rx_core;
-	app_used_core_mask |= mask;
-
-	mask = 1lu << pconf->wt_core;
-	app_used_core_mask |= mask;
-
-	mask = 1lu << pconf->tx_core;
-	app_used_core_mask |= mask;
-
 	nb_pfc++;
 
 	return 0;
@@ -322,7 +275,7 @@ app_parse_args(int argc, char **argv)
 	int opt, ret;
 	int option_index;
 	char *prgname = argv[0];
-	uint32_t i, nb_lcores;
+	uint32_t i;
 
 	static struct option lgopts[] = {
 		{OPT_PFC, 1, NULL, OPT_PFC_NUM},
@@ -425,23 +378,6 @@ app_parse_args(int argc, char **argv)
 			}
 	}
 
-	/* check main core index validity */
-	for (i = 0; i <= app_main_core; i++) {
-		if (app_used_core_mask & RTE_BIT64(app_main_core)) {
-			RTE_LOG(ERR, APP, "Main core index is not configured properly\n");
-			app_usage(prgname);
-			return -1;
-		}
-	}
-	app_used_core_mask |= RTE_BIT64(app_main_core);
-
-	if ((app_used_core_mask != app_eal_core_mask()) ||
-			(app_main_core != rte_get_main_lcore())) {
-		RTE_LOG(ERR, APP, "EAL core mask not configured properly, must be %" PRIx64
-				" instead of %" PRIx64 "\n" , app_used_core_mask, app_eal_core_mask());
-		return -1;
-	}
-
 	if (nb_pfc == 0) {
 		RTE_LOG(ERR, APP, "Packet flow not configured!\n");
 		app_usage(prgname);
@@ -449,15 +385,13 @@ app_parse_args(int argc, char **argv)
 	}
 
 	/* sanity check for cores assignment */
-	nb_lcores = app_cpu_core_count();
-
 	for(i = 0; i < nb_pfc; i++) {
-		if (qos_conf[i].rx_core >= nb_lcores) {
+		if (qos_conf[i].rx_core >= RTE_MAX_LCORE) {
 			RTE_LOG(ERR, APP, "pfc %u: invalid RX lcore index %u\n", i + 1,
 					qos_conf[i].rx_core);
 			return -1;
 		}
-		if (qos_conf[i].wt_core >= nb_lcores) {
+		if (qos_conf[i].wt_core >= RTE_MAX_LCORE) {
 			RTE_LOG(ERR, APP, "pfc %u: invalid WT lcore index %u\n", i + 1,
 					qos_conf[i].wt_core);
 			return -1;
diff --git a/examples/qos_sched/main.h b/examples/qos_sched/main.h
index d8f3e32c83..bc647ec595 100644
--- a/examples/qos_sched/main.h
+++ b/examples/qos_sched/main.h
@@ -37,15 +37,7 @@ extern "C" {
 #define TX_HTHRESH 0  /**< Default values of TX host threshold reg. */
 #define TX_WTHRESH 0  /**< Default values of TX write-back threshold reg. */
 
-#ifndef APP_MAX_LCORE
-#if (RTE_MAX_LCORE > 64)
-#define APP_MAX_LCORE 64
-#else
-#define APP_MAX_LCORE RTE_MAX_LCORE
-#endif
-#endif
-
-#define MAX_DATA_STREAMS (APP_MAX_LCORE/2)
+#define MAX_DATA_STREAMS (RTE_MAX_LCORE/2)
 #define MAX_SCHED_SUBPORTS		8
 #define MAX_SCHED_PIPES		4096
 #define MAX_SCHED_PIPE_PROFILES		256
-- 
2.37.2



* RE: [PATCH 1/4] examples/qos_sched: fix errors when TX port not up
  2023-02-03 10:05 ` [PATCH 1/4] examples/qos_sched: fix errors when TX port not up Bruce Richardson
@ 2023-02-17 16:19   ` Dumitrescu, Cristian
  0 siblings, 0 replies; 10+ messages in thread
From: Dumitrescu, Cristian @ 2023-02-17 16:19 UTC (permalink / raw)
  To: Richardson, Bruce, dev; +Cc: Singh, Jasvinder, stable



> -----Original Message-----
> From: Richardson, Bruce <bruce.richardson@intel.com>
> Sent: Friday, February 3, 2023 10:06 AM
> To: dev@dpdk.org
> Cc: Singh, Jasvinder <jasvinder.singh@intel.com>; Richardson, Bruce
> <bruce.richardson@intel.com>; stable@dpdk.org; Dumitrescu, Cristian
> <cristian.dumitrescu@intel.com>
> Subject: [PATCH 1/4] examples/qos_sched: fix errors when TX port not up
> 
> The TX port config will fail if the port is not up, so wait up to
> 10 seconds at startup for the link to come up.
> 
> Fixes: de3cfa2c9823 ("sched: initial import")
> Cc: stable@dpdk.org
> 
> Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
> ---
>  examples/qos_sched/init.c | 10 ++++++++++
>  1 file changed, 10 insertions(+)
> 

Acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>



* RE: [PATCH 2/4] examples/qos_sched: remove TX buffering
  2023-02-03 10:05 ` [PATCH 2/4] examples/qos_sched: remove TX buffering Bruce Richardson
@ 2023-02-17 16:19   ` Dumitrescu, Cristian
  0 siblings, 0 replies; 10+ messages in thread
From: Dumitrescu, Cristian @ 2023-02-17 16:19 UTC (permalink / raw)
  To: Richardson, Bruce, dev; +Cc: Singh, Jasvinder



> -----Original Message-----
> From: Richardson, Bruce <bruce.richardson@intel.com>
> Sent: Friday, February 3, 2023 10:06 AM
> To: dev@dpdk.org
> Cc: Singh, Jasvinder <jasvinder.singh@intel.com>; Richardson, Bruce
> <bruce.richardson@intel.com>; Dumitrescu, Cristian
> <cristian.dumitrescu@intel.com>
> Subject: [PATCH 2/4] examples/qos_sched: remove TX buffering
> 
> Since the qos_sched app already dequeues packets from the QoS block in
> batches, there is little point in buffering them further in the app -
> just transmit the full burst of packets received from the QoS block.
> With modern CPUs and write-combining doorbells, the worst-case cost of
> smaller TX bursts is reduced anyway.
> 
> Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
> ---
>  examples/qos_sched/app_thread.c | 94 ++++-----------------------------
Acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>


* RE: [PATCH 3/4] examples/qos_sched: use bigger bursts on dequeue
  2023-02-03 10:05 ` [PATCH 3/4] examples/qos_sched: use bigger bursts on dequeue Bruce Richardson
@ 2023-02-17 16:20   ` Dumitrescu, Cristian
  0 siblings, 0 replies; 10+ messages in thread
From: Dumitrescu, Cristian @ 2023-02-17 16:20 UTC (permalink / raw)
  To: Richardson, Bruce, dev; +Cc: Singh, Jasvinder



> -----Original Message-----
> From: Richardson, Bruce <bruce.richardson@intel.com>
> Sent: Friday, February 3, 2023 10:06 AM
> To: dev@dpdk.org
> Cc: Singh, Jasvinder <jasvinder.singh@intel.com>; Richardson, Bruce
> <bruce.richardson@intel.com>; Dumitrescu, Cristian
> <cristian.dumitrescu@intel.com>
> Subject: [PATCH 3/4] examples/qos_sched: use bigger bursts on dequeue
> 
> While performance of the QoS block drops sharply if the dequeue size is
> greater than or equal to the enqueue size, increasing the dequeue size
> to just under the enqueue one gives improved performance when the
> scheduler is not keeping up with the line rate.
> 
> Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
> ---
>  doc/guides/sample_app_ug/qos_scheduler.rst | 2 +-
>  examples/qos_sched/main.h                  | 2 +-
>  2 files changed, 2 insertions(+), 2 deletions(-)
> 
Acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>


* RE: [PATCH 4/4] examples/qos_sched: remove limit on core ids
  2023-02-03 10:05 ` [PATCH 4/4] examples/qos_sched: remove limit on core ids Bruce Richardson
@ 2023-02-17 16:20   ` Dumitrescu, Cristian
  0 siblings, 0 replies; 10+ messages in thread
From: Dumitrescu, Cristian @ 2023-02-17 16:20 UTC (permalink / raw)
  To: Richardson, Bruce, dev; +Cc: Singh, Jasvinder



> -----Original Message-----
> From: Richardson, Bruce <bruce.richardson@intel.com>
> Sent: Friday, February 3, 2023 10:06 AM
> To: dev@dpdk.org
> Cc: Singh, Jasvinder <jasvinder.singh@intel.com>; Richardson, Bruce
> <bruce.richardson@intel.com>; Dumitrescu, Cristian
> <cristian.dumitrescu@intel.com>
> Subject: [PATCH 4/4] examples/qos_sched: remove limit on core ids
> 
> The qos_sched app was limited to using lcore ids 0 to 63 only, even if
> RTE_MAX_LCORE was set to a higher value (as it is by default). Remove
> some of the checks on the lcore ids in order to support running with
> core ids of 64 and above.
> 
> Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
> ---

Acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>


* Re: [PATCH 0/4] small fixes and improvements for qos_sched example
  2023-02-03 10:05 [PATCH 0/4] small fixes and improvements for qos_sched example Bruce Richardson
                   ` (3 preceding siblings ...)
  2023-02-03 10:05 ` [PATCH 4/4] examples/qos_sched: remove limit on core ids Bruce Richardson
@ 2023-02-20 15:41 ` Thomas Monjalon
  4 siblings, 0 replies; 10+ messages in thread
From: Thomas Monjalon @ 2023-02-20 15:41 UTC (permalink / raw)
  To: Bruce Richardson; +Cc: dev, jasvinder.singh

03/02/2023 11:05, Bruce Richardson:
> This patchset contains a set of fixes and improvements for the qos_sched
> application. After this patchset the code is shorter, and also seems a
> little faster in my performance tests.
> 
> Bruce Richardson (4):
>   examples/qos_sched: fix errors when TX port not up
>   examples/qos_sched: remove TX buffering
>   examples/qos_sched: use bigger bursts on dequeue
>   examples/qos_sched: remove limit on core ids

Applied, thanks.




