DPDK patches and discussions
* [dpdk-dev] [PATCH v3 0/6] rte_sched: cleanups and API changes
@ 2015-05-11 17:07 Stephen Hemminger
  2015-05-11 17:07 ` [dpdk-dev] [PATCH 1/6] rte_sched: make RED optional at runtime Stephen Hemminger
                   ` (6 more replies)
  0 siblings, 7 replies; 13+ messages in thread
From: Stephen Hemminger @ 2015-05-11 17:07 UTC (permalink / raw)
  To: dev

This is the 3rd revision of the DPDK changes to the QoS scheduler.
The changes since the last revision fix the example and whitespace.

Stephen Hemminger (6):
  rte_sched: make RED optional at runtime
  rte_sched: expand scheduler hierarchy for more VLAN's
  rte_sched: keep track of RED drops
  rte_sched: allow reading without clearing
  rte_sched: don't put tabs in log messages
  rte_sched: use correct log level

 app/test/test_sched.c        |   4 +-
 examples/qos_sched/stats.c   |  22 ++++++---
 lib/librte_mbuf/rte_mbuf.h   |   5 +-
 lib/librte_sched/rte_sched.c | 113 ++++++++++++++++++++++++++++---------------
 lib/librte_sched/rte_sched.h |  62 +++++++++++++++---------
 5 files changed, 134 insertions(+), 72 deletions(-)

-- 
2.1.4


* [dpdk-dev] [PATCH 1/6] rte_sched: make RED optional at runtime
  2015-05-11 17:07 [dpdk-dev] [PATCH v3 0/6] rte_sched: cleanups and API changes Stephen Hemminger
@ 2015-05-11 17:07 ` Stephen Hemminger
  2015-05-11 17:07 ` [dpdk-dev] [PATCH 2/6] rte_sched: expand scheduler hierarchy for more VLAN's Stephen Hemminger
                   ` (5 subsequent siblings)
  6 siblings, 0 replies; 13+ messages in thread
From: Stephen Hemminger @ 2015-05-11 17:07 UTC (permalink / raw)
  To: dev; +Cc: Stephen Hemminger

From: Stephen Hemminger <shemming@brocade.com>

We want to be able to build with RTE_SCHED_RED enabled while still
allowing RED to be disabled on a per-queue basis at runtime.

RED is disabled unless the min/max thresholds are set.

Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
 lib/librte_sched/rte_sched.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/lib/librte_sched/rte_sched.c b/lib/librte_sched/rte_sched.c
index 95dee27..3b5acd1 100644
--- a/lib/librte_sched/rte_sched.c
+++ b/lib/librte_sched/rte_sched.c
@@ -636,6 +636,12 @@ rte_sched_port_config(struct rte_sched_port_params *params)
 		uint32_t j;
 
 		for (j = 0; j < e_RTE_METER_COLORS; j++) {
+			/* if min/max are both zero, then RED is disabled */
+			if ((params->red_params[i][j].min_th |
+			     params->red_params[i][j].max_th) == 0) {
+				continue;
+			}
+
 			if (rte_red_config_init(&port->red_config[i][j],
 				params->red_params[i][j].wq_log2,
 				params->red_params[i][j].min_th,
@@ -1069,6 +1075,9 @@ rte_sched_port_red_drop(struct rte_sched_port *port, struct rte_mbuf *pkt, uint3
 	color = rte_sched_port_pkt_read_color(pkt);
 	red_cfg = &port->red_config[tc_index][color];
 
+	if ((red_cfg->min_th | red_cfg->max_th) == 0)
+		return 0;
+
 	qe = port->queue_extra + qindex;
 	red = &qe->red;
 
-- 
2.1.4
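
For illustration only (this snippet is not part of the patch): under this
scheme a caller can leave RED disabled for a given traffic class and color
simply by keeping its thresholds at zero in the port parameters. The helper
name and the threshold values below are assumptions, and the red_params
field only exists when RTE_SCHED_RED is defined at build time.

#include <string.h>
#include <rte_red.h>
#include <rte_sched.h>

/* Hypothetical helper: enable RED everywhere except traffic class 3,
 * which stays disabled because its min_th and max_th remain zero. */
static void
setup_red_params(struct rte_sched_port_params *params)
{
	uint32_t tc, c;

	memset(params->red_params, 0, sizeof(params->red_params));

	for (tc = 0; tc < RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE; tc++) {
		if (tc == 3)
			continue; /* RED stays off for this traffic class */

		for (c = 0; c < e_RTE_METER_COLORS; c++) {
			params->red_params[tc][c].min_th = 32;
			params->red_params[tc][c].max_th = 64;
			params->red_params[tc][c].maxp_inv = 10;
			params->red_params[tc][c].wq_log2 = 9;
		}
	}
}

With the patch, rte_sched_port_config() skips rte_red_config_init() for the
zeroed entries and rte_sched_port_red_drop() never drops packets for them.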


* [dpdk-dev] [PATCH 2/6] rte_sched: expand scheduler hierarchy for more VLAN's
  2015-05-11 17:07 [dpdk-dev] [PATCH v3 0/6] rte_sched: cleanups and API changes Stephen Hemminger
  2015-05-11 17:07 ` [dpdk-dev] [PATCH 1/6] rte_sched: make RED optional at runtime Stephen Hemminger
@ 2015-05-11 17:07 ` Stephen Hemminger
  2015-05-11 17:20   ` Neil Horman
       [not found]   ` <8edea4c81f624728bb5f0476b680c410@BRMWP-EXMB11.corp.brocade.com>
  2015-05-11 17:07 ` [dpdk-dev] [PATCH 3/6] rte_sched: keep track of RED drops Stephen Hemminger
                   ` (4 subsequent siblings)
  6 siblings, 2 replies; 13+ messages in thread
From: Stephen Hemminger @ 2015-05-11 17:07 UTC (permalink / raw)
  To: dev; +Cc: Stephen Hemminger

From: Stephen Hemminger <shemming@brocade.com>

The QoS subport field is limited to 8 bits in the original code,
but customers demanded the ability to support the full number of
VLANs (4096), so use the full tag field of the mbuf instead.

Resize the pipe field as well to allow for more pipes in the future
and to avoid expensive bitfield access.

Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
 lib/librte_mbuf/rte_mbuf.h   |  5 ++++-
 lib/librte_sched/rte_sched.h | 38 ++++++++++++++++++++++++--------------
 2 files changed, 28 insertions(+), 15 deletions(-)

diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
index ab6de67..cc0658d 100644
--- a/lib/librte_mbuf/rte_mbuf.h
+++ b/lib/librte_mbuf/rte_mbuf.h
@@ -295,7 +295,10 @@ struct rte_mbuf {
 			/**< First 4 flexible bytes or FD ID, dependent on
 			     PKT_RX_FDIR_* flag in ol_flags. */
 		} fdir;           /**< Filter identifier if FDIR enabled */
-		uint32_t sched;   /**< Hierarchical scheduler */
+		struct {
+			uint32_t lo;
+			uint32_t hi;
+		} sched;          /**< Hierarchical scheduler */
 		uint32_t usr;	  /**< User defined tags. See rte_distributor_process() */
 	} hash;                   /**< hash information */
 
diff --git a/lib/librte_sched/rte_sched.h b/lib/librte_sched/rte_sched.h
index e6bba22..bf5ef8d 100644
--- a/lib/librte_sched/rte_sched.h
+++ b/lib/librte_sched/rte_sched.h
@@ -195,16 +195,20 @@ struct rte_sched_port_params {
 #endif
 };
 
-/** Path through the scheduler hierarchy used by the scheduler enqueue operation to
-identify the destination queue for the current packet. Stored in the field hash.sched
-of struct rte_mbuf of each packet, typically written by the classification stage and read by
-scheduler enqueue.*/
+/*
+ * Path through the scheduler hierarchy used by the scheduler enqueue
+ * operation to identify the destination queue for the current
+ * packet. Stored in the field pkt.hash.sched of struct rte_mbuf of
+ * each packet, typically written by the classification stage and read
+ * by scheduler enqueue.
+ */
 struct rte_sched_port_hierarchy {
-	uint32_t queue:2;                /**< Queue ID (0 .. 3) */
-	uint32_t traffic_class:2;        /**< Traffic class ID (0 .. 3)*/
-	uint32_t pipe:20;                /**< Pipe ID */
-	uint32_t subport:6;              /**< Subport ID */
-	uint32_t color:2;                /**< Color */
+	uint16_t queue:2;		 /**< Queue ID (0 .. 3) */
+	uint16_t traffic_class:2;	 /**< Traffic class ID (0 .. 3)*/
+	uint16_t color:2;		 /**< Color */
+	uint16_t unused:10;
+	uint16_t subport;		 /**< Subport ID */
+	uint32_t pipe;			 /**< Pipe ID */
 };
 
 /*
@@ -350,12 +354,15 @@ rte_sched_queue_read_stats(struct rte_sched_port *port,
  */
 static inline void
 rte_sched_port_pkt_write(struct rte_mbuf *pkt,
-	uint32_t subport, uint32_t pipe, uint32_t traffic_class, uint32_t queue, enum rte_meter_color color)
+			 uint32_t subport, uint32_t pipe,
+			 uint32_t traffic_class,
+			 uint32_t queue, enum rte_meter_color color)
 {
-	struct rte_sched_port_hierarchy *sched = (struct rte_sched_port_hierarchy *) &pkt->hash.sched;
+	struct rte_sched_port_hierarchy *sched
+		= (struct rte_sched_port_hierarchy *) &pkt->hash.sched;
 
-	sched->color = (uint32_t) color;
 	sched->subport = subport;
+	sched->color = (uint32_t) color;
 	sched->pipe = pipe;
 	sched->traffic_class = traffic_class;
 	sched->queue = queue;
@@ -379,9 +386,12 @@ rte_sched_port_pkt_write(struct rte_mbuf *pkt,
  *
  */
 static inline void
-rte_sched_port_pkt_read_tree_path(struct rte_mbuf *pkt, uint32_t *subport, uint32_t *pipe, uint32_t *traffic_class, uint32_t *queue)
+rte_sched_port_pkt_read_tree_path(struct rte_mbuf *pkt, uint32_t *subport,
+				  uint32_t *pipe, uint32_t *traffic_class,
+				  uint32_t *queue)
 {
-	struct rte_sched_port_hierarchy *sched = (struct rte_sched_port_hierarchy *) &pkt->hash.sched;
+	struct rte_sched_port_hierarchy *sched
+		= (struct rte_sched_port_hierarchy *) &pkt->hash.sched;
 
 	*subport = sched->subport;
 	*pipe = sched->pipe;
-- 
2.1.4
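
For illustration only (not part of the patch): with the subport field
widened to 16 bits, a classification stage can use the 12-bit VLAN ID
directly as the subport index. The function below is a hypothetical sketch;
the pipe, traffic class, queue and color values are placeholders.

#include <rte_byteorder.h>
#include <rte_ether.h>
#include <rte_mbuf.h>
#include <rte_sched.h>

/* Hypothetical classifier: map the outer VLAN ID to a subport. */
static void
classify_by_vlan(struct rte_mbuf *m)
{
	struct ether_hdr *eth = rte_pktmbuf_mtod(m, struct ether_hdr *);
	uint32_t subport = 0;

	if (eth->ether_type == rte_cpu_to_be_16(ETHER_TYPE_VLAN)) {
		const struct vlan_hdr *vh = (const struct vlan_hdr *)(eth + 1);

		subport = rte_be_to_cpu_16(vh->vlan_tci) & 0xfff;
	}

	rte_sched_port_pkt_write(m, subport, 0 /* pipe */,
				 0 /* traffic class */, 0 /* queue */,
				 e_RTE_METER_GREEN);
}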


* [dpdk-dev] [PATCH 3/6] rte_sched: keep track of RED drops
  2015-05-11 17:07 [dpdk-dev] [PATCH v3 0/6] rte_sched: cleanups and API changes Stephen Hemminger
  2015-05-11 17:07 ` [dpdk-dev] [PATCH 1/6] rte_sched: make RED optional at runtime Stephen Hemminger
  2015-05-11 17:07 ` [dpdk-dev] [PATCH 2/6] rte_sched: expand scheduler hierarchy for more VLAN's Stephen Hemminger
@ 2015-05-11 17:07 ` Stephen Hemminger
  2015-05-11 17:07 ` [dpdk-dev] [PATCH 4/6] rte_sched: allow reading without clearing Stephen Hemminger
                   ` (3 subsequent siblings)
  6 siblings, 0 replies; 13+ messages in thread
From: Stephen Hemminger @ 2015-05-11 17:07 UTC (permalink / raw)
  To: dev; +Cc: Stephen Hemminger

From: Stephen Hemminger <shemming@brocade.com>

Add a new statistic to keep track of drops due to RED.

Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
 lib/librte_sched/rte_sched.c | 31 ++++++++++++++++++++++++++-----
 lib/librte_sched/rte_sched.h |  6 ++++++
 2 files changed, 32 insertions(+), 5 deletions(-)

diff --git a/lib/librte_sched/rte_sched.c b/lib/librte_sched/rte_sched.c
index 3b5acd1..c044c09 100644
--- a/lib/librte_sched/rte_sched.c
+++ b/lib/librte_sched/rte_sched.c
@@ -1028,7 +1028,9 @@ rte_sched_port_update_subport_stats(struct rte_sched_port *port, uint32_t qindex
 }
 
 static inline void
-rte_sched_port_update_subport_stats_on_drop(struct rte_sched_port *port, uint32_t qindex, struct rte_mbuf *pkt)
+rte_sched_port_update_subport_stats_on_drop(struct rte_sched_port *port,
+					    uint32_t qindex,
+					    struct rte_mbuf *pkt, uint32_t red)
 {
 	struct rte_sched_subport *s = port->subport + (qindex / rte_sched_port_queues_per_subport(port));
 	uint32_t tc_index = (qindex >> 2) & 0x3;
@@ -1036,6 +1038,9 @@ rte_sched_port_update_subport_stats_on_drop(struct rte_sched_port *port, uint32_
 
 	s->stats.n_pkts_tc_dropped[tc_index] += 1;
 	s->stats.n_bytes_tc_dropped[tc_index] += pkt_len;
+#ifdef RTE_SCHED_RED
+	s->stats.n_pkts_red_dropped[tc_index] += red;
+#endif
 }
 
 static inline void
@@ -1049,13 +1054,18 @@ rte_sched_port_update_queue_stats(struct rte_sched_port *port, uint32_t qindex,
 }
 
 static inline void
-rte_sched_port_update_queue_stats_on_drop(struct rte_sched_port *port, uint32_t qindex, struct rte_mbuf *pkt)
+rte_sched_port_update_queue_stats_on_drop(struct rte_sched_port *port,
+					  uint32_t qindex,
+					  struct rte_mbuf *pkt, uint32_t red)
 {
 	struct rte_sched_queue_extra *qe = port->queue_extra + qindex;
 	uint32_t pkt_len = pkt->pkt_len;
 
 	qe->stats.n_pkts_dropped += 1;
 	qe->stats.n_bytes_dropped += pkt_len;
+#ifdef RTE_SCHED_RED
+	qe->stats.n_pkts_red_dropped += red;
+#endif
 }
 
 #endif /* RTE_SCHED_COLLECT_STATS */
@@ -1206,12 +1216,23 @@ rte_sched_port_enqueue_qwa(struct rte_sched_port *port, uint32_t qindex, struct
 	qlen = q->qw - q->qr;
 
 	/* Drop the packet (and update drop stats) when queue is full */
-	if (unlikely(rte_sched_port_red_drop(port, pkt, qindex, qlen) || (qlen >= qsize))) {
+	if (unlikely(rte_sched_port_red_drop(port, pkt, qindex, qlen))) {
+#ifdef RTE_SCHED_COLLECT_STATS
+		rte_sched_port_update_subport_stats_on_drop(port, qindex,
+							    pkt, 1);
+		rte_sched_port_update_queue_stats_on_drop(port, qindex, pkt, 1);
+#endif
 		rte_pktmbuf_free(pkt);
+		return 0;
+	}
+
+	if (qlen >= qsize) {
 #ifdef RTE_SCHED_COLLECT_STATS
-		rte_sched_port_update_subport_stats_on_drop(port, qindex, pkt);
-		rte_sched_port_update_queue_stats_on_drop(port, qindex, pkt);
+		rte_sched_port_update_subport_stats_on_drop(port, qindex,
+							    pkt, 0);
+		rte_sched_port_update_queue_stats_on_drop(port, qindex, pkt, 0);
 #endif
+		rte_pktmbuf_free(pkt);
 		return 0;
 	}
 
diff --git a/lib/librte_sched/rte_sched.h b/lib/librte_sched/rte_sched.h
index bf5ef8d..3fd1fe1 100644
--- a/lib/librte_sched/rte_sched.h
+++ b/lib/librte_sched/rte_sched.h
@@ -140,6 +140,9 @@ struct rte_sched_subport_stats {
 	                                      subport for each traffic class*/
 	uint32_t n_bytes_tc_dropped[RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE]; /**< Number of bytes dropped by the current
                                           subport for each traffic class due to subport queues being full or congested */
+#ifdef RTE_SCHED_RED
+	uint32_t n_pkts_red_dropped[RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE]; /**< Number of packets dropped by red */
+#endif
 };
 
 /** Pipe configuration parameters. The period and credits_per_period parameters are measured
@@ -168,6 +171,9 @@ struct rte_sched_queue_stats {
 	/* Packets */
 	uint32_t n_pkts;                 /**< Number of packets successfully written to current queue */
 	uint32_t n_pkts_dropped;         /**< Number of packets dropped due to current queue being full or congested */
+#ifdef RTE_SCHED_RED
+	uint32_t n_pkts_red_dropped;
+#endif
 
 	/* Bytes */
 	uint32_t n_bytes;                /**< Number of bytes successfully written to current queue */
-- 
2.1.4
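
For illustration only (not part of the patch): a sketch of reading the new
RED drop counter. It uses the read-stats signature as it stands at this
point in the series (patch 4 below adds a clear argument); the function
name is an assumption, and the counters are only updated when the library
is built with RTE_SCHED_COLLECT_STATS.

#include <inttypes.h>
#include <stdio.h>
#include <rte_sched.h>

/* Print drop counters for one queue; the RED drop counter only exists
 * when the library is built with RTE_SCHED_RED. */
static void
print_queue_drops(struct rte_sched_port *port, uint32_t queue_id)
{
	struct rte_sched_queue_stats stats;
	uint16_t qlen;

	if (rte_sched_queue_read_stats(port, queue_id, &stats, &qlen) != 0)
		return;

	printf("queue %" PRIu32 ": %" PRIu32 " pkts dropped",
	       queue_id, stats.n_pkts_dropped);
#ifdef RTE_SCHED_RED
	printf(" (%" PRIu32 " by RED)", stats.n_pkts_red_dropped);
#endif
	printf(", qlen %" PRIu16 "\n", qlen);
}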


* [dpdk-dev] [PATCH 4/6] rte_sched: allow reading without clearing
  2015-05-11 17:07 [dpdk-dev] [PATCH v3 0/6] rte_sched: cleanups and API changes Stephen Hemminger
                   ` (2 preceding siblings ...)
  2015-05-11 17:07 ` [dpdk-dev] [PATCH 3/6] rte_sched: keep track of RED drops Stephen Hemminger
@ 2015-05-11 17:07 ` Stephen Hemminger
  2015-05-11 17:07 ` [dpdk-dev] [PATCH 5/6] rte_sched: don't put tabs in log messages Stephen Hemminger
                   ` (2 subsequent siblings)
  6 siblings, 0 replies; 13+ messages in thread
From: Stephen Hemminger @ 2015-05-11 17:07 UTC (permalink / raw)
  To: dev

The rte_sched statistics API should allow reading statistics without
clearing. Make auto-clear optional.

Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
 app/test/test_sched.c        |  4 ++--
 examples/qos_sched/stats.c   | 22 +++++++++++++++-------
 lib/librte_sched/rte_sched.c | 44 ++++++++++++++++++++++----------------------
 lib/librte_sched/rte_sched.h | 18 ++++++++++--------
 4 files changed, 49 insertions(+), 39 deletions(-)

diff --git a/app/test/test_sched.c b/app/test/test_sched.c
index c7239f8..1526ad7 100644
--- a/app/test/test_sched.c
+++ b/app/test/test_sched.c
@@ -198,13 +198,13 @@ test_sched(void)
 
 	struct rte_sched_subport_stats subport_stats;
 	uint32_t tc_ov;
-	rte_sched_subport_read_stats(port, SUBPORT, &subport_stats, &tc_ov);
+	rte_sched_subport_read_stats(port, SUBPORT, &subport_stats, &tc_ov, 1);
 #if 0
 	TEST_ASSERT_EQUAL(subport_stats.n_pkts_tc[TC-1], 10, "Wrong subport stats\n");
 #endif
 	struct rte_sched_queue_stats queue_stats;
 	uint16_t qlen;
-	rte_sched_queue_read_stats(port, QUEUE, &queue_stats, &qlen);
+	rte_sched_queue_read_stats(port, QUEUE, &queue_stats, &qlen, 1);
 #if 0
 	TEST_ASSERT_EQUAL(queue_stats.n_pkts, 10, "Wrong queue stats\n");
 #endif
diff --git a/examples/qos_sched/stats.c b/examples/qos_sched/stats.c
index b4db7b5..a6d05ab 100644
--- a/examples/qos_sched/stats.c
+++ b/examples/qos_sched/stats.c
@@ -61,7 +61,7 @@ qavg_q(uint8_t port_id, uint32_t subport_id, uint32_t pipe_id, uint8_t tc, uint8
         average = 0;
 
         for (count = 0; count < qavg_ntimes; count++) {
-                rte_sched_queue_read_stats(port, queue_id, &stats, &qlen);
+                rte_sched_queue_read_stats(port, queue_id, &stats, &qlen, 1);
                 average += qlen;
                 usleep(qavg_period);
         }
@@ -99,7 +99,9 @@ qavg_tcpipe(uint8_t port_id, uint32_t subport_id, uint32_t pipe_id, uint8_t tc)
         for (count = 0; count < qavg_ntimes; count++) {
                 part_average = 0;
                 for (i = 0; i < RTE_SCHED_QUEUES_PER_TRAFFIC_CLASS; i++) {
-                        rte_sched_queue_read_stats(port, queue_id + (tc * RTE_SCHED_QUEUES_PER_TRAFFIC_CLASS + i), &stats, &qlen);
+                        rte_sched_queue_read_stats(port,
+				   queue_id + (tc * RTE_SCHED_QUEUES_PER_TRAFFIC_CLASS + i),
+				   &stats, &qlen, 1);
                         part_average += qlen;
                 }
                 average += part_average / RTE_SCHED_QUEUES_PER_TRAFFIC_CLASS;
@@ -138,7 +140,8 @@ qavg_pipe(uint8_t port_id, uint32_t subport_id, uint32_t pipe_id)
         for (count = 0; count < qavg_ntimes; count++) {
                 part_average = 0;
                 for (i = 0; i < RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE * RTE_SCHED_QUEUES_PER_TRAFFIC_CLASS; i++) {
-                        rte_sched_queue_read_stats(port, queue_id + i, &stats, &qlen);
+                        rte_sched_queue_read_stats(port, queue_id + i,
+						   &stats, &qlen, 1);
                         part_average += qlen;
                 }
                 average += part_average / (RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE * RTE_SCHED_QUEUES_PER_TRAFFIC_CLASS);
@@ -178,7 +181,9 @@ qavg_tcsubport(uint8_t port_id, uint32_t subport_id, uint8_t tc)
                         queue_id = RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE * RTE_SCHED_QUEUES_PER_TRAFFIC_CLASS * (subport_id * port_params.n_pipes_per_subport + i);
 
                         for (j = 0; j < RTE_SCHED_QUEUES_PER_TRAFFIC_CLASS; j++) {
-                                rte_sched_queue_read_stats(port, queue_id + (tc * RTE_SCHED_QUEUES_PER_TRAFFIC_CLASS + j), &stats, &qlen);
+                                rte_sched_queue_read_stats(port,
+							   queue_id + (tc * RTE_SCHED_QUEUES_PER_TRAFFIC_CLASS + j),
+							   &stats, &qlen, 1);
                                 part_average += qlen;
                         }
                 }
@@ -220,7 +225,8 @@ qavg_subport(uint8_t port_id, uint32_t subport_id)
                         queue_id = RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE * RTE_SCHED_QUEUES_PER_TRAFFIC_CLASS * (subport_id * port_params.n_pipes_per_subport + i);
 
                         for (j = 0; j < RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE * RTE_SCHED_QUEUES_PER_TRAFFIC_CLASS; j++) {
-                                rte_sched_queue_read_stats(port, queue_id + j, &stats, &qlen);
+                                rte_sched_queue_read_stats(port, queue_id + j,
+							   &stats, &qlen, 1);
                                 part_average += qlen;
                         }
                 }
@@ -254,7 +260,7 @@ subport_stat(uint8_t port_id, uint32_t subport_id)
         port = qos_conf[i].sched_port;
 	memset (tc_ov, 0, sizeof(tc_ov));
 
-        rte_sched_subport_read_stats(port, subport_id, &stats, tc_ov);
+	rte_sched_subport_read_stats(port, subport_id, &stats, tc_ov, 0);
 
         printf("\n");
         printf("+----+-------------+-------------+-------------+-------------+-------------+\n");
@@ -300,7 +306,9 @@ pipe_stat(uint8_t port_id, uint32_t subport_id, uint32_t pipe_id)
         for (i = 0; i < RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE; i++) {
                 for (j = 0; j < RTE_SCHED_QUEUES_PER_TRAFFIC_CLASS; j++) {
 
-                        rte_sched_queue_read_stats(port, queue_id + (i * RTE_SCHED_QUEUES_PER_TRAFFIC_CLASS + j), &stats, &qlen);
+			rte_sched_queue_read_stats(port,
+						   queue_id + (i * RTE_SCHED_QUEUES_PER_TRAFFIC_CLASS + j),
+						   &stats, &qlen, 0);
 
                         printf("|  %d |   %d   | %11" PRIu32 " | %11" PRIu32 " | %11" PRIu32 " | %11" PRIu32 " | %11i |\n", i, j,
                                         stats.n_pkts, stats.n_pkts_dropped, stats.n_bytes, stats.n_bytes_dropped, qlen);
diff --git a/lib/librte_sched/rte_sched.c b/lib/librte_sched/rte_sched.c
index c044c09..74b3111 100644
--- a/lib/librte_sched/rte_sched.c
+++ b/lib/librte_sched/rte_sched.c
@@ -909,56 +909,56 @@ rte_sched_pipe_config(struct rte_sched_port *port,
 
 int
 rte_sched_subport_read_stats(struct rte_sched_port *port,
-	uint32_t subport_id,
-	struct rte_sched_subport_stats *stats,
-	uint32_t *tc_ov)
+			     uint32_t subport_id,
+			     struct rte_sched_subport_stats *stats,
+			     uint32_t *tc_ov, int clear)
 {
 	struct rte_sched_subport *s;
 
 	/* Check user parameters */
-	if ((port == NULL) ||
-	    (subport_id >= port->n_subports_per_port) ||
-		(stats == NULL) ||
-		(tc_ov == NULL)) {
+	if (port == NULL || subport_id >= port->n_subports_per_port)
 		return -1;
-	}
+
 	s = port->subport + subport_id;
 
 	/* Copy subport stats and clear */
-	memcpy(stats, &s->stats, sizeof(struct rte_sched_subport_stats));
-	memset(&s->stats, 0, sizeof(struct rte_sched_subport_stats));
+	if (stats)
+		*stats = s->stats;
+	if (clear)
+		memset(&s->stats, 0, sizeof(struct rte_sched_subport_stats));
 
 	/* Subport TC ovesubscription status */
-	*tc_ov = s->tc_ov;
+	if (tc_ov)
+		*tc_ov = s->tc_ov;
 
 	return 0;
 }
 
 int
 rte_sched_queue_read_stats(struct rte_sched_port *port,
-	uint32_t queue_id,
-	struct rte_sched_queue_stats *stats,
-	uint16_t *qlen)
+			   uint32_t queue_id,
+			   struct rte_sched_queue_stats *stats,
+			   uint16_t *qlen, int clear)
 {
 	struct rte_sched_queue *q;
 	struct rte_sched_queue_extra *qe;
 
 	/* Check user parameters */
-	if ((port == NULL) ||
-	    (queue_id >= rte_sched_port_queues_per_port(port)) ||
-		(stats == NULL) ||
-		(qlen == NULL)) {
+	if (port == NULL || queue_id >= rte_sched_port_queues_per_port(port))
 		return -1;
-	}
+
 	q = port->queue + queue_id;
 	qe = port->queue_extra + queue_id;
 
 	/* Copy queue stats and clear */
-	memcpy(stats, &qe->stats, sizeof(struct rte_sched_queue_stats));
-	memset(&qe->stats, 0, sizeof(struct rte_sched_queue_stats));
+	if (stats)
+		*stats = qe->stats;
+	if (clear)
+		memset(&qe->stats, 0, sizeof(struct rte_sched_queue_stats));
 
 	/* Queue length */
-	*qlen = q->qw - q->qr;
+	if (qlen)
+		*qlen = q->qw - q->qr;
 
 	return 0;
 }
diff --git a/lib/librte_sched/rte_sched.h b/lib/librte_sched/rte_sched.h
index 3fd1fe1..dff7f85 100644
--- a/lib/librte_sched/rte_sched.h
+++ b/lib/librte_sched/rte_sched.h
@@ -308,14 +308,15 @@ rte_sched_port_get_memory_footprint(struct rte_sched_port_params *params);
  * @param tc_ov
  *   Pointer to pre-allocated 4-entry array where the oversubscription status for
  *   each of the 4 subport traffic classes should be stored.
+ * @param clear
+ *   Reset statistics after read
  * @return
  *   0 upon success, error code otherwise
  */
 int
-rte_sched_subport_read_stats(struct rte_sched_port *port,
-	uint32_t subport_id,
-	struct rte_sched_subport_stats *stats,
-	uint32_t *tc_ov);
+rte_sched_subport_read_stats(struct rte_sched_port *port, uint32_t subport_id,
+			     struct rte_sched_subport_stats *stats,
+			     uint32_t *tc_ov, int clear);
 
 /**
  * Hierarchical scheduler queue statistics read
@@ -329,14 +330,15 @@ rte_sched_subport_read_stats(struct rte_sched_port *port,
  *   counters should be stored
  * @param qlen
  *   Pointer to pre-allocated variable where the current queue length should be stored.
+ * @param clear
+ *   Reset statistics after read
  * @return
  *   0 upon success, error code otherwise
  */
 int
-rte_sched_queue_read_stats(struct rte_sched_port *port,
-	uint32_t queue_id,
-	struct rte_sched_queue_stats *stats,
-	uint16_t *qlen);
+rte_sched_queue_read_stats(struct rte_sched_port *port, uint32_t queue_id,
+			   struct rte_sched_queue_stats *stats,
+			   uint16_t *qlen, int clear);
 
 /*
  * Run-time
-- 
2.1.4
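
For illustration only (not part of the patch): with the clear flag, a
monitoring path can poll queue depth without resetting the counters, while
a periodic reporter still reads and clears in one call. The function names
are assumptions; passing NULL for outputs that are not needed is allowed by
the relaxed parameter checks above.

#include <stddef.h>
#include <rte_sched.h>

/* Sample the queue length only, leaving all counters untouched. */
static uint16_t
poll_queue_length(struct rte_sched_port *port, uint32_t queue_id)
{
	uint16_t qlen = 0;

	rte_sched_queue_read_stats(port, queue_id, NULL, &qlen, 0);
	return qlen;
}

/* Periodic report: copy the counters out and reset them in one call. */
static int
report_and_clear(struct rte_sched_port *port, uint32_t queue_id,
		 struct rte_sched_queue_stats *out)
{
	uint16_t qlen;

	return rte_sched_queue_read_stats(port, queue_id, out, &qlen, 1);
}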


* [dpdk-dev] [PATCH 5/6] rte_sched: don't put tabs in log messages
  2015-05-11 17:07 [dpdk-dev] [PATCH v3 0/6] rte_sched: cleanups and API changes Stephen Hemminger
                   ` (3 preceding siblings ...)
  2015-05-11 17:07 ` [dpdk-dev] [PATCH 4/6] rte_sched: allow reading without clearing Stephen Hemminger
@ 2015-05-11 17:07 ` Stephen Hemminger
  2015-05-11 17:07 ` [dpdk-dev] [PATCH 6/6] rte_sched: use correct log level Stephen Hemminger
  2015-05-12 13:20 ` [dpdk-dev] [PATCH v3 0/6] rte_sched: cleanups and API changes Thomas Monjalon
  6 siblings, 0 replies; 13+ messages in thread
From: Stephen Hemminger @ 2015-05-11 17:07 UTC (permalink / raw)
  To: dev; +Cc: Stephen Hemminger

From: Stephen Hemminger <shemming@brocade.com>

syslog does not like tabs in log messages; a tab gets translated to #011.

Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
 lib/librte_sched/rte_sched.c | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/lib/librte_sched/rte_sched.c b/lib/librte_sched/rte_sched.c
index 74b3111..b8d036a 100644
--- a/lib/librte_sched/rte_sched.c
+++ b/lib/librte_sched/rte_sched.c
@@ -495,10 +495,10 @@ rte_sched_port_log_pipe_profile(struct rte_sched_port *port, uint32_t i)
 	struct rte_sched_pipe_profile *p = port->pipe_profiles + i;
 
 	RTE_LOG(INFO, SCHED, "Low level config for pipe profile %u:\n"
-		"\tToken bucket: period = %u, credits per period = %u, size = %u\n"
-		"\tTraffic classes: period = %u, credits per period = [%u, %u, %u, %u]\n"
-		"\tTraffic class 3 oversubscription: weight = %hhu\n"
-		"\tWRR cost: [%hhu, %hhu, %hhu, %hhu], [%hhu, %hhu, %hhu, %hhu], [%hhu, %hhu, %hhu, %hhu], [%hhu, %hhu, %hhu, %hhu]\n",
+		"    Token bucket: period = %u, credits per period = %u, size = %u\n"
+		"    Traffic classes: period = %u, credits per period = [%u, %u, %u, %u]\n"
+		"    Traffic class 3 oversubscription: weight = %hhu\n"
+		"    WRR cost: [%hhu, %hhu, %hhu, %hhu], [%hhu, %hhu, %hhu, %hhu], [%hhu, %hhu, %hhu, %hhu], [%hhu, %hhu, %hhu, %hhu]\n",
 		i,
 
 		/* Token bucket */
@@ -716,9 +716,9 @@ rte_sched_port_log_subport_config(struct rte_sched_port *port, uint32_t i)
 	struct rte_sched_subport *s = port->subport + i;
 
 	RTE_LOG(INFO, SCHED, "Low level config for subport %u:\n"
-		"\tToken bucket: period = %u, credits per period = %u, size = %u\n"
-		"\tTraffic classes: period = %u, credits per period = [%u, %u, %u, %u]\n"
-		"\tTraffic class 3 oversubscription: wm min = %u, wm max = %u\n",
+		"    Token bucket: period = %u, credits per period = %u, size = %u\n"
+		"    Traffic classes: period = %u, credits per period = [%u, %u, %u, %u]\n"
+		"    Traffic class 3 oversubscription: wm min = %u, wm max = %u\n",
 		i,
 
 		/* Token bucket */
-- 
2.1.4


* [dpdk-dev] [PATCH 6/6] rte_sched: use correct log level
  2015-05-11 17:07 [dpdk-dev] [PATCH v3 0/6] rte_sched: cleanups and API changes Stephen Hemminger
                   ` (4 preceding siblings ...)
  2015-05-11 17:07 ` [dpdk-dev] [PATCH 5/6] rte_sched: don't put tabs in log messages Stephen Hemminger
@ 2015-05-11 17:07 ` Stephen Hemminger
  2015-05-12 13:20 ` [dpdk-dev] [PATCH v3 0/6] rte_sched: cleanups and API changes Thomas Monjalon
  6 siblings, 0 replies; 13+ messages in thread
From: Stephen Hemminger @ 2015-05-11 17:07 UTC (permalink / raw)
  To: dev

The setup messages should be at DEBUG level since they are not
important for the normal operation of the system. Messages about
problems should be at NOTICE or ERR level.

Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
 lib/librte_sched/rte_sched.c | 15 +++++++++------
 1 file changed, 9 insertions(+), 6 deletions(-)

diff --git a/lib/librte_sched/rte_sched.c b/lib/librte_sched/rte_sched.c
index b8d036a..ec55f67 100644
--- a/lib/librte_sched/rte_sched.c
+++ b/lib/librte_sched/rte_sched.c
@@ -448,7 +448,8 @@ rte_sched_port_get_memory_footprint(struct rte_sched_port_params *params)
 
 	status = rte_sched_port_check_params(params);
 	if (status != 0) {
-		RTE_LOG(INFO, SCHED, "Port scheduler params check failed (%d)\n", status);
+		RTE_LOG(NOTICE, SCHED,
+			"Port scheduler params check failed (%d)\n", status);
 
 		return 0;
 	}
@@ -494,7 +495,7 @@ rte_sched_port_log_pipe_profile(struct rte_sched_port *port, uint32_t i)
 {
 	struct rte_sched_pipe_profile *p = port->pipe_profiles + i;
 
-	RTE_LOG(INFO, SCHED, "Low level config for pipe profile %u:\n"
+	RTE_LOG(DEBUG, SCHED, "Low level config for pipe profile %u:\n"
 		"    Token bucket: period = %u, credits per period = %u, size = %u\n"
 		"    Traffic classes: period = %u, credits per period = [%u, %u, %u, %u]\n"
 		"    Traffic class 3 oversubscription: weight = %hhu\n"
@@ -688,7 +689,7 @@ rte_sched_port_config(struct rte_sched_port_params *params)
 	bmp_mem_size = rte_bitmap_get_memory_footprint(n_queues_per_port);
 	port->bmp = rte_bitmap_init(n_queues_per_port, port->bmp_array, bmp_mem_size);
 	if (port->bmp == NULL) {
-		RTE_LOG(INFO, SCHED, "Bitmap init error\n");
+		RTE_LOG(ERR, SCHED, "Bitmap init error\n");
 		return NULL;
 	}
 	for (i = 0; i < RTE_SCHED_PORT_N_GRINDERS; i ++) {
@@ -715,7 +716,7 @@ rte_sched_port_log_subport_config(struct rte_sched_port *port, uint32_t i)
 {
 	struct rte_sched_subport *s = port->subport + i;
 
-	RTE_LOG(INFO, SCHED, "Low level config for subport %u:\n"
+	RTE_LOG(DEBUG, SCHED, "Low level config for subport %u:\n"
 		"    Token bucket: period = %u, credits per period = %u, size = %u\n"
 		"    Traffic classes: period = %u, credits per period = [%u, %u, %u, %u]\n"
 		"    Traffic class 3 oversubscription: wm min = %u, wm max = %u\n",
@@ -857,7 +858,8 @@ rte_sched_pipe_config(struct rte_sched_port *port,
 		s->tc_ov = s->tc_ov_rate > subport_tc3_rate;
 
 		if (s->tc_ov != tc3_ov) {
-			RTE_LOG(INFO, SCHED, "Subport %u TC3 oversubscription is OFF (%.4lf >= %.4lf)\n",
+			RTE_LOG(DEBUG, SCHED,
+				"Subport %u TC3 oversubscription is OFF (%.4lf >= %.4lf)\n",
 				subport_id, subport_tc3_rate, s->tc_ov_rate);
 		}
 #endif
@@ -896,7 +898,8 @@ rte_sched_pipe_config(struct rte_sched_port *port,
 		s->tc_ov = s->tc_ov_rate > subport_tc3_rate;
 
 		if (s->tc_ov != tc3_ov) {
-			RTE_LOG(INFO, SCHED, "Subport %u TC3 oversubscription is ON (%.4lf < %.4lf)\n",
+			RTE_LOG(DEBUG, SCHED,
+				"Subport %u TC3 oversubscription is ON (%.4lf < %.4lf)\n",
 				subport_id, subport_tc3_rate, s->tc_ov_rate);
 		}
 		p->tc_ov_period_id = s->tc_ov_period_id;
-- 
2.1.4
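
For illustration only (not part of the patch): once the configuration dumps
are at DEBUG level, an application can hide them while keeping NOTICE and
ERR messages by raising the global log threshold. This is a sketch using
the EAL log API as it existed in this DPDK era.

#include <rte_log.h>

/* Suppress DEBUG (and INFO) output from the scheduler setup path. */
static void
quiet_sched_setup_logs(void)
{
	rte_set_log_level(RTE_LOG_NOTICE);
}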


* Re: [dpdk-dev] [PATCH 2/6] rte_sched: expand scheduler hierarchy for more VLAN's
  2015-05-11 17:07 ` [dpdk-dev] [PATCH 2/6] rte_sched: expand scheduler hierarchy for more VLAN's Stephen Hemminger
@ 2015-05-11 17:20   ` Neil Horman
       [not found]   ` <8edea4c81f624728bb5f0476b680c410@BRMWP-EXMB11.corp.brocade.com>
  1 sibling, 0 replies; 13+ messages in thread
From: Neil Horman @ 2015-05-11 17:20 UTC (permalink / raw)
  To: Stephen Hemminger; +Cc: dev, Stephen Hemminger

On Mon, May 11, 2015 at 10:07:47AM -0700, Stephen Hemminger wrote:
> From: Stephen Hemminger <shemming@brocade.com>
> 
> The QoS subport field is limited to 8 bits in the original code,
> but customers demanded the ability to support the full number of
> VLANs (4096), so use the full tag field of the mbuf instead.
> 
> Resize the pipe field as well to allow for more pipes in the future
> and to avoid expensive bitfield access.
> 
> Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
> ---
>  lib/librte_mbuf/rte_mbuf.h   |  5 ++++-
>  lib/librte_sched/rte_sched.h | 38 ++++++++++++++++++++++++--------------
>  2 files changed, 28 insertions(+), 15 deletions(-)
> 
> diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
> index ab6de67..cc0658d 100644
> --- a/lib/librte_mbuf/rte_mbuf.h
> +++ b/lib/librte_mbuf/rte_mbuf.h
> @@ -295,7 +295,10 @@ struct rte_mbuf {
>  			/**< First 4 flexible bytes or FD ID, dependent on
>  			     PKT_RX_FDIR_* flag in ol_flags. */
>  		} fdir;           /**< Filter identifier if FDIR enabled */
> -		uint32_t sched;   /**< Hierarchical scheduler */
> +		struct {
> +			uint32_t lo;
> +			uint32_t hi;
> +		} sched;          /**< Hierarchical scheduler */
>  		uint32_t usr;	  /**< User defined tags. See rte_distributor_process() */
>  	} hash;                   /**< hash information */
>  
> diff --git a/lib/librte_sched/rte_sched.h b/lib/librte_sched/rte_sched.h
> index e6bba22..bf5ef8d 100644
> --- a/lib/librte_sched/rte_sched.h
> +++ b/lib/librte_sched/rte_sched.h
> @@ -195,16 +195,20 @@ struct rte_sched_port_params {
>  #endif
>  };
>  
> -/** Path through the scheduler hierarchy used by the scheduler enqueue operation to
> -identify the destination queue for the current packet. Stored in the field hash.sched
> -of struct rte_mbuf of each packet, typically written by the classification stage and read by
> -scheduler enqueue.*/
> +/*
> + * Path through the scheduler hierarchy used by the scheduler enqueue
> + * operation to identify the destination queue for the current
> + * packet. Stored in the field pkt.hash.sched of struct rte_mbuf of
> + * each packet, typically written by the classification stage and read
> + * by scheduler enqueue.
> + */
>  struct rte_sched_port_hierarchy {
> -	uint32_t queue:2;                /**< Queue ID (0 .. 3) */
> -	uint32_t traffic_class:2;        /**< Traffic class ID (0 .. 3)*/
> -	uint32_t pipe:20;                /**< Pipe ID */
> -	uint32_t subport:6;              /**< Subport ID */
> -	uint32_t color:2;                /**< Color */
> +	uint16_t queue:2;		 /**< Queue ID (0 .. 3) */
> +	uint16_t traffic_class:2;	 /**< Traffic class ID (0 .. 3)*/
> +	uint16_t color:2;		 /**< Color */
> +	uint16_t unused:10;
> +	uint16_t subport;		 /**< Subport ID */
> +	uint32_t pipe;			 /**< Pipe ID */
>  };
Have you run this through the ABI checker?  Seems like this would alter lots of
pointer offsets.
Neil

>  
>  /*
> @@ -350,12 +354,15 @@ rte_sched_queue_read_stats(struct rte_sched_port *port,
>   */
>  static inline void
>  rte_sched_port_pkt_write(struct rte_mbuf *pkt,
> -	uint32_t subport, uint32_t pipe, uint32_t traffic_class, uint32_t queue, enum rte_meter_color color)
> +			 uint32_t subport, uint32_t pipe,
> +			 uint32_t traffic_class,
> +			 uint32_t queue, enum rte_meter_color color)
>  {
> -	struct rte_sched_port_hierarchy *sched = (struct rte_sched_port_hierarchy *) &pkt->hash.sched;
> +	struct rte_sched_port_hierarchy *sched
> +		= (struct rte_sched_port_hierarchy *) &pkt->hash.sched;
>  
> -	sched->color = (uint32_t) color;
>  	sched->subport = subport;
> +	sched->color = (uint32_t) color;
>  	sched->pipe = pipe;
>  	sched->traffic_class = traffic_class;
>  	sched->queue = queue;
> @@ -379,9 +386,12 @@ rte_sched_port_pkt_write(struct rte_mbuf *pkt,
>   *
>   */
>  static inline void
> -rte_sched_port_pkt_read_tree_path(struct rte_mbuf *pkt, uint32_t *subport, uint32_t *pipe, uint32_t *traffic_class, uint32_t *queue)
> +rte_sched_port_pkt_read_tree_path(struct rte_mbuf *pkt, uint32_t *subport,
> +				  uint32_t *pipe, uint32_t *traffic_class,
> +				  uint32_t *queue)
>  {
> -	struct rte_sched_port_hierarchy *sched = (struct rte_sched_port_hierarchy *) &pkt->hash.sched;
> +	struct rte_sched_port_hierarchy *sched
> +		= (struct rte_sched_port_hierarchy *) &pkt->hash.sched;
>  
>  	*subport = sched->subport;
>  	*pipe = sched->pipe;
> -- 
> 2.1.4
> 
> 


* Re: [dpdk-dev] [PATCH 2/6] rte_sched: expand scheduler hierarchy for more VLAN's
       [not found]   ` <8edea4c81f624728bb5f0476b680c410@BRMWP-EXMB11.corp.brocade.com>
@ 2015-05-11 17:32     ` Stephen Hemminger
  2015-05-11 17:43       ` Neil Horman
  0 siblings, 1 reply; 13+ messages in thread
From: Stephen Hemminger @ 2015-05-11 17:32 UTC (permalink / raw)
  To: Neil Horman; +Cc: dev, Stephen Hemminger

On Mon, 11 May 2015 17:20:07 +0000
Neil Horman <nhorman@tuxdriver.com> wrote:

> Have you run this through the ABI checker?  Seems like this would alter lots of
> pointer offsets.
> Neil

No, I have not run it through ABI checker.
It would change the ABI for applications using qos_sched but will not
change layout of mbuf.

But my assumption was that as part of release process the ABI version
would change rather than doing for each patch that gets merged.


* Re: [dpdk-dev] [PATCH 2/6] rte_sched: expand scheduler hierarchy for more VLAN's
  2015-05-11 17:32     ` Stephen Hemminger
@ 2015-05-11 17:43       ` Neil Horman
  0 siblings, 0 replies; 13+ messages in thread
From: Neil Horman @ 2015-05-11 17:43 UTC (permalink / raw)
  To: Stephen Hemminger; +Cc: dev, Stephen Hemminger

On Mon, May 11, 2015 at 10:32:59AM -0700, Stephen Hemminger wrote:
> On Mon, 11 May 2015 17:20:07 +0000
> Neil Horman <nhorman@tuxdriver.com> wrote:
> 
> > Have you run this through the ABI checker?  Seems like this would alter lots of
> > pointer offsets.
> > Neil
> 
> No, I have not run it through ABI checker.
> It would change the ABI for applications using qos_sched but will not
> change layout of mbuf.
> 
> But my assumption was that as part of release process the ABI version
> would change rather than doing for each patch that gets merged.
> 

You're correct that the ABI version can change, but the process is to make an
update to doc/guides/rel_notes/abi.rst documenting the proposed change, wait
for that to be published in an official release, then make the change for the
following release.  That way downstream adopters have some lead time to prep for
upstream changes.

Neil


* Re: [dpdk-dev] [PATCH v3 0/6] rte_sched: cleanups and API changes
  2015-05-11 17:07 [dpdk-dev] [PATCH v3 0/6] rte_sched: cleanups and API changes Stephen Hemminger
                   ` (5 preceding siblings ...)
  2015-05-11 17:07 ` [dpdk-dev] [PATCH 6/6] rte_sched: use correct log level Stephen Hemminger
@ 2015-05-12 13:20 ` Thomas Monjalon
  6 siblings, 0 replies; 13+ messages in thread
From: Thomas Monjalon @ 2015-05-12 13:20 UTC (permalink / raw)
  To: Stephen Hemminger; +Cc: dev

Hi Stephen,

2015-05-11 10:07, Stephen Hemminger:
> This is the 3rd revision of the DPDK changes to the QoS scheduler.
> The changes since the last revision fix the example and whitespace.
> 
> Stephen Hemminger (6):
>   rte_sched: make RED optional at runtime
>   rte_sched: expand scheduler hierarchy for more VLAN's
>   rte_sched: keep track of RED drops
>   rte_sched: allow reading without clearing
>   rte_sched: don't put tabs in log messages
>   rte_sched: use correct log level

You didn't include the acknowledgement from Cristian.
I think it should be kept at least for patches which didn't change.

And please try to add the v3 suffix to every patch of the series.

Thanks


* Re: [dpdk-dev] [PATCH 2/6] rte_sched: expand scheduler hierarchy for more VLAN's
  2015-04-29 17:04 ` [dpdk-dev] [PATCH 2/6] rte_sched: expand scheduler hierarchy for more VLAN's Stephen Hemminger
@ 2015-05-11 12:46   ` Thomas Monjalon
  0 siblings, 0 replies; 13+ messages in thread
From: Thomas Monjalon @ 2015-05-11 12:46 UTC (permalink / raw)
  To: Stephen Hemminger; +Cc: dev, Stephen Hemminger

2015-04-29 10:04, Stephen Hemminger:
> --- a/lib/librte_mbuf/rte_mbuf.h
> +++ b/lib/librte_mbuf/rte_mbuf.h
> @@ -285,7 +285,10 @@ struct rte_mbuf {
>  			/**< First 4 flexible bytes or FD ID, dependent on
>  			     PKT_RX_FDIR_* flag in ol_flags. */
>  		} fdir;           /**< Filter identifier if FDIR enabled */
> -		uint32_t sched;   /**< Hierarchical scheduler */
> +		struct {
> +			uint32_t lo;
> +			uint32_t hi;
> +		} sched;     	  /**< Hierarchical scheduler */

Please don't use tabs to align a comment.


* [dpdk-dev] [PATCH 2/6] rte_sched: expand scheduler hierarchy for more VLAN's
  2015-04-29 17:04 [dpdk-dev] [PATCH 0/6] rte_sched: patches against 2.0 Stephen Hemminger
@ 2015-04-29 17:04 ` Stephen Hemminger
  2015-05-11 12:46   ` Thomas Monjalon
  0 siblings, 1 reply; 13+ messages in thread
From: Stephen Hemminger @ 2015-04-29 17:04 UTC (permalink / raw)
  To: cristian.dumitrescu; +Cc: dev, Stephen Hemminger

From: Stephen Hemminger <shemming@brocade.com>

The QoS subport field is limited to 8 bits in the original code,
but customers demanded the ability to support the full number of
VLANs (4096), so use the full tag field of the mbuf instead.

Resize the pipe field as well to allow for more pipes in the future
and to avoid expensive bitfield access.

Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
 lib/librte_mbuf/rte_mbuf.h   |  5 ++++-
 lib/librte_sched/rte_sched.h | 38 ++++++++++++++++++++++++--------------
 2 files changed, 28 insertions(+), 15 deletions(-)

diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
index 70b0987..3dde695 100644
--- a/lib/librte_mbuf/rte_mbuf.h
+++ b/lib/librte_mbuf/rte_mbuf.h
@@ -285,7 +285,10 @@ struct rte_mbuf {
 			/**< First 4 flexible bytes or FD ID, dependent on
 			     PKT_RX_FDIR_* flag in ol_flags. */
 		} fdir;           /**< Filter identifier if FDIR enabled */
-		uint32_t sched;   /**< Hierarchical scheduler */
+		struct {
+			uint32_t lo;
+			uint32_t hi;
+		} sched;     	  /**< Hierarchical scheduler */
 		uint32_t usr;	  /**< User defined tags. See rte_distributor_process() */
 	} hash;                   /**< hash information */
 
diff --git a/lib/librte_sched/rte_sched.h b/lib/librte_sched/rte_sched.h
index e6bba22..bf5ef8d 100644
--- a/lib/librte_sched/rte_sched.h
+++ b/lib/librte_sched/rte_sched.h
@@ -195,16 +195,20 @@ struct rte_sched_port_params {
 #endif
 };
 
-/** Path through the scheduler hierarchy used by the scheduler enqueue operation to
-identify the destination queue for the current packet. Stored in the field hash.sched
-of struct rte_mbuf of each packet, typically written by the classification stage and read by
-scheduler enqueue.*/
+/*
+ * Path through the scheduler hierarchy used by the scheduler enqueue
+ * operation to identify the destination queue for the current
+ * packet. Stored in the field pkt.hash.sched of struct rte_mbuf of
+ * each packet, typically written by the classification stage and read
+ * by scheduler enqueue.
+ */
 struct rte_sched_port_hierarchy {
-	uint32_t queue:2;                /**< Queue ID (0 .. 3) */
-	uint32_t traffic_class:2;        /**< Traffic class ID (0 .. 3)*/
-	uint32_t pipe:20;                /**< Pipe ID */
-	uint32_t subport:6;              /**< Subport ID */
-	uint32_t color:2;                /**< Color */
+	uint16_t queue:2;		 /**< Queue ID (0 .. 3) */
+	uint16_t traffic_class:2;	 /**< Traffic class ID (0 .. 3)*/
+	uint16_t color:2;		 /**< Color */
+	uint16_t unused:10;
+	uint16_t subport;		 /**< Subport ID */
+	uint32_t pipe;			 /**< Pipe ID */
 };
 
 /*
@@ -350,12 +354,15 @@ rte_sched_queue_read_stats(struct rte_sched_port *port,
  */
 static inline void
 rte_sched_port_pkt_write(struct rte_mbuf *pkt,
-	uint32_t subport, uint32_t pipe, uint32_t traffic_class, uint32_t queue, enum rte_meter_color color)
+			 uint32_t subport, uint32_t pipe,
+			 uint32_t traffic_class,
+			 uint32_t queue, enum rte_meter_color color)
 {
-	struct rte_sched_port_hierarchy *sched = (struct rte_sched_port_hierarchy *) &pkt->hash.sched;
+	struct rte_sched_port_hierarchy *sched
+		= (struct rte_sched_port_hierarchy *) &pkt->hash.sched;
 
-	sched->color = (uint32_t) color;
 	sched->subport = subport;
+	sched->color = (uint32_t) color;
 	sched->pipe = pipe;
 	sched->traffic_class = traffic_class;
 	sched->queue = queue;
@@ -379,9 +386,12 @@ rte_sched_port_pkt_write(struct rte_mbuf *pkt,
  *
  */
 static inline void
-rte_sched_port_pkt_read_tree_path(struct rte_mbuf *pkt, uint32_t *subport, uint32_t *pipe, uint32_t *traffic_class, uint32_t *queue)
+rte_sched_port_pkt_read_tree_path(struct rte_mbuf *pkt, uint32_t *subport,
+				  uint32_t *pipe, uint32_t *traffic_class,
+				  uint32_t *queue)
 {
-	struct rte_sched_port_hierarchy *sched = (struct rte_sched_port_hierarchy *) &pkt->hash.sched;
+	struct rte_sched_port_hierarchy *sched
+		= (struct rte_sched_port_hierarchy *) &pkt->hash.sched;
 
 	*subport = sched->subport;
 	*pipe = sched->pipe;
-- 
2.1.4


