* [dpdk-dev] [PATCH 1/5] rte_sched: make RED optional at runtime
2015-05-27 18:10 [dpdk-dev] [PATCH v4 0/5] rte_sched: cleanup and API enhancements Stephen Hemminger
@ 2015-05-27 18:10 ` Stephen Hemminger
2015-05-27 18:10 ` [dpdk-dev] [PATCH 2/5] rte_sched: don't put tabs in log messages Stephen Hemminger
` (3 subsequent siblings)
4 siblings, 0 replies; 8+ messages in thread
From: Stephen Hemminger @ 2015-05-27 18:10 UTC (permalink / raw)
To: cristian.dumitrescu; +Cc: dev, Stephen Hemminger
From: Stephen Hemminger <shemming@brocade.com>
Make it possible to build with RTE_SCHED_RED enabled while still
allowing RED to be disabled on a per-queue basis at runtime.
RED is disabled unless the min/max thresholds are set.
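For example (an illustrative sketch, not part of the patch; the red_params
field only exists when RTE_SCHED_RED is defined and the threshold values
below are made up), a port can keep RED active for one traffic class while
leaving it disabled for another:

#include <string.h>
#include <rte_sched.h>

static void
example_red_setup(struct rte_sched_port_params *params)
{
	/* RED enabled for traffic class 0, green packets */
	params->red_params[0][e_RTE_METER_GREEN].min_th = 32;
	params->red_params[0][e_RTE_METER_GREEN].max_th = 128;
	params->red_params[0][e_RTE_METER_GREEN].maxp_inv = 10;
	params->red_params[0][e_RTE_METER_GREEN].wq_log2 = 9;

	/* Traffic class 3 left all-zero: with this patch
	 * rte_sched_port_config() skips rte_red_config_init() and
	 * rte_sched_port_red_drop() never drops for that class. */
	memset(params->red_params[3], 0, sizeof(params->red_params[3]));
}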
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
lib/librte_sched/rte_sched.c | 9 +++++++++
1 file changed, 9 insertions(+)
diff --git a/lib/librte_sched/rte_sched.c b/lib/librte_sched/rte_sched.c
index 95dee27..3b5acd1 100644
--- a/lib/librte_sched/rte_sched.c
+++ b/lib/librte_sched/rte_sched.c
@@ -636,6 +636,12 @@ rte_sched_port_config(struct rte_sched_port_params *params)
uint32_t j;
for (j = 0; j < e_RTE_METER_COLORS; j++) {
+ /* if min/max are both zero, then RED is disabled */
+ if ((params->red_params[i][j].min_th |
+ params->red_params[i][j].max_th) == 0) {
+ continue;
+ }
+
if (rte_red_config_init(&port->red_config[i][j],
params->red_params[i][j].wq_log2,
params->red_params[i][j].min_th,
@@ -1069,6 +1075,9 @@ rte_sched_port_red_drop(struct rte_sched_port *port, struct rte_mbuf *pkt, uint3
color = rte_sched_port_pkt_read_color(pkt);
red_cfg = &port->red_config[tc_index][color];
+ if ((red_cfg->min_th | red_cfg->max_th) == 0)
+ return 0;
+
qe = port->queue_extra + qindex;
red = &qe->red;
--
2.1.4
* [dpdk-dev] [PATCH 2/5] rte_sched: don't put tabs in log messages
2015-05-27 18:10 [dpdk-dev] [PATCH v4 0/5] rte_sched: cleanup and API enhancements Stephen Hemminger
2015-05-27 18:10 ` [dpdk-dev] [PATCH 1/5] rte_sched: make RED optional at runtime Stephen Hemminger
@ 2015-05-27 18:10 ` Stephen Hemminger
2015-05-27 18:10 ` [dpdk-dev] [PATCH 3/5] rte_sched: use correct log level Stephen Hemminger
` (2 subsequent siblings)
4 siblings, 0 replies; 8+ messages in thread
From: Stephen Hemminger @ 2015-05-27 18:10 UTC (permalink / raw)
To: cristian.dumitrescu; +Cc: dev, Stephen Hemminger
From: Stephen Hemminger <shemming@brocade.com>
syslog does not like tabs in log messages; a tab gets translated to "#011".
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
lib/librte_sched/rte_sched.c | 14 +++++++-------
1 file changed, 7 insertions(+), 7 deletions(-)
diff --git a/lib/librte_sched/rte_sched.c b/lib/librte_sched/rte_sched.c
index 3b5acd1..91e16f8 100644
--- a/lib/librte_sched/rte_sched.c
+++ b/lib/librte_sched/rte_sched.c
@@ -495,10 +495,10 @@ rte_sched_port_log_pipe_profile(struct rte_sched_port *port, uint32_t i)
struct rte_sched_pipe_profile *p = port->pipe_profiles + i;
RTE_LOG(INFO, SCHED, "Low level config for pipe profile %u:\n"
- "\tToken bucket: period = %u, credits per period = %u, size = %u\n"
- "\tTraffic classes: period = %u, credits per period = [%u, %u, %u, %u]\n"
- "\tTraffic class 3 oversubscription: weight = %hhu\n"
- "\tWRR cost: [%hhu, %hhu, %hhu, %hhu], [%hhu, %hhu, %hhu, %hhu], [%hhu, %hhu, %hhu, %hhu], [%hhu, %hhu, %hhu, %hhu]\n",
+ " Token bucket: period = %u, credits per period = %u, size = %u\n"
+ " Traffic classes: period = %u, credits per period = [%u, %u, %u, %u]\n"
+ " Traffic class 3 oversubscription: weight = %hhu\n"
+ " WRR cost: [%hhu, %hhu, %hhu, %hhu], [%hhu, %hhu, %hhu, %hhu], [%hhu, %hhu, %hhu, %hhu], [%hhu, %hhu, %hhu, %hhu]\n",
i,
/* Token bucket */
@@ -716,9 +716,9 @@ rte_sched_port_log_subport_config(struct rte_sched_port *port, uint32_t i)
struct rte_sched_subport *s = port->subport + i;
RTE_LOG(INFO, SCHED, "Low level config for subport %u:\n"
- "\tToken bucket: period = %u, credits per period = %u, size = %u\n"
- "\tTraffic classes: period = %u, credits per period = [%u, %u, %u, %u]\n"
- "\tTraffic class 3 oversubscription: wm min = %u, wm max = %u\n",
+ " Token bucket: period = %u, credits per period = %u, size = %u\n"
+ " Traffic classes: period = %u, credits per period = [%u, %u, %u, %u]\n"
+ " Traffic class 3 oversubscription: wm min = %u, wm max = %u\n",
i,
/* Token bucket */
--
2.1.4
* [dpdk-dev] [PATCH 3/5] rte_sched: use correct log level
2015-05-27 18:10 [dpdk-dev] [PATCH v4 0/5] rte_sched: cleanup and API enhancements Stephen Hemminger
2015-05-27 18:10 ` [dpdk-dev] [PATCH 1/5] rte_sched: make RED optional at runtime Stephen Hemminger
2015-05-27 18:10 ` [dpdk-dev] [PATCH 2/5] rte_sched: don't put tabs in log messages Stephen Hemminger
@ 2015-05-27 18:10 ` Stephen Hemminger
2015-05-27 18:10 ` [dpdk-dev] [PATCH 4/5] rte_sched: hide structure of port hierarchy Stephen Hemminger
2015-05-27 18:10 ` [dpdk-dev] [PATCH 5/5] rte_sched: allow reading without clearing Stephen Hemminger
4 siblings, 0 replies; 8+ messages in thread
From: Stephen Hemminger @ 2015-05-27 18:10 UTC (permalink / raw)
To: cristian.dumitrescu; +Cc: dev
The setup messages should be at DEBUG level since they are not
important for normal operation of the system. Messages about
problems should be at NOTICE or ERR level.
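For example, an application that still wants the low-level config dumps
during bring-up can raise the log level at init time (sketch only; assumes
the rte_set_log_level()/rte_set_log_type() API of this DPDK release):

#include <rte_log.h>

static void
enable_sched_debug_logs(void)
{
	rte_set_log_level(RTE_LOG_DEBUG);        /* show DEBUG messages */
	rte_set_log_type(RTE_LOGTYPE_SCHED, 1);  /* keep SCHED log type on */
}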
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
lib/librte_sched/rte_sched.c | 15 +++++++++------
1 file changed, 9 insertions(+), 6 deletions(-)
diff --git a/lib/librte_sched/rte_sched.c b/lib/librte_sched/rte_sched.c
index 91e16f8..b1655b4 100644
--- a/lib/librte_sched/rte_sched.c
+++ b/lib/librte_sched/rte_sched.c
@@ -448,7 +448,8 @@ rte_sched_port_get_memory_footprint(struct rte_sched_port_params *params)
status = rte_sched_port_check_params(params);
if (status != 0) {
- RTE_LOG(INFO, SCHED, "Port scheduler params check failed (%d)\n", status);
+ RTE_LOG(NOTICE, SCHED,
+ "Port scheduler params check failed (%d)\n", status);
return 0;
}
@@ -494,7 +495,7 @@ rte_sched_port_log_pipe_profile(struct rte_sched_port *port, uint32_t i)
{
struct rte_sched_pipe_profile *p = port->pipe_profiles + i;
- RTE_LOG(INFO, SCHED, "Low level config for pipe profile %u:\n"
+ RTE_LOG(DEBUG, SCHED, "Low level config for pipe profile %u:\n"
" Token bucket: period = %u, credits per period = %u, size = %u\n"
" Traffic classes: period = %u, credits per period = [%u, %u, %u, %u]\n"
" Traffic class 3 oversubscription: weight = %hhu\n"
@@ -688,7 +689,7 @@ rte_sched_port_config(struct rte_sched_port_params *params)
bmp_mem_size = rte_bitmap_get_memory_footprint(n_queues_per_port);
port->bmp = rte_bitmap_init(n_queues_per_port, port->bmp_array, bmp_mem_size);
if (port->bmp == NULL) {
- RTE_LOG(INFO, SCHED, "Bitmap init error\n");
+ RTE_LOG(ERR, SCHED, "Bitmap init error\n");
return NULL;
}
for (i = 0; i < RTE_SCHED_PORT_N_GRINDERS; i ++) {
@@ -715,7 +716,7 @@ rte_sched_port_log_subport_config(struct rte_sched_port *port, uint32_t i)
{
struct rte_sched_subport *s = port->subport + i;
- RTE_LOG(INFO, SCHED, "Low level config for subport %u:\n"
+ RTE_LOG(DEBUG, SCHED, "Low level config for subport %u:\n"
" Token bucket: period = %u, credits per period = %u, size = %u\n"
" Traffic classes: period = %u, credits per period = [%u, %u, %u, %u]\n"
" Traffic class 3 oversubscription: wm min = %u, wm max = %u\n",
@@ -857,7 +858,8 @@ rte_sched_pipe_config(struct rte_sched_port *port,
s->tc_ov = s->tc_ov_rate > subport_tc3_rate;
if (s->tc_ov != tc3_ov) {
- RTE_LOG(INFO, SCHED, "Subport %u TC3 oversubscription is OFF (%.4lf >= %.4lf)\n",
+ RTE_LOG(DEBUG, SCHED,
+ "Subport %u TC3 oversubscription is OFF (%.4lf >= %.4lf)\n",
subport_id, subport_tc3_rate, s->tc_ov_rate);
}
#endif
@@ -896,7 +898,8 @@ rte_sched_pipe_config(struct rte_sched_port *port,
s->tc_ov = s->tc_ov_rate > subport_tc3_rate;
if (s->tc_ov != tc3_ov) {
- RTE_LOG(INFO, SCHED, "Subport %u TC3 oversubscription is ON (%.4lf < %.4lf)\n",
+ RTE_LOG(DEBUG, SCHED,
+ "Subport %u TC3 oversubscription is ON (%.4lf < %.4lf)\n",
subport_id, subport_tc3_rate, s->tc_ov_rate);
}
p->tc_ov_period_id = s->tc_ov_period_id;
--
2.1.4
* [dpdk-dev] [PATCH 4/5] rte_sched: hide structure of port hierarchy
2015-05-27 18:10 [dpdk-dev] [PATCH v4 0/5] rte_sched: cleanup and API enhancements Stephen Hemminger
` (2 preceding siblings ...)
2015-05-27 18:10 ` [dpdk-dev] [PATCH 3/5] rte_sched: use correct log level Stephen Hemminger
@ 2015-05-27 18:10 ` Stephen Hemminger
2015-05-27 18:10 ` [dpdk-dev] [PATCH 5/5] rte_sched: allow reading without clearing Stephen Hemminger
4 siblings, 0 replies; 8+ messages in thread
From: Stephen Hemminger @ 2015-05-27 18:10 UTC (permalink / raw)
To: cristian.dumitrescu; +Cc: dev
Right now the scheduler hierarchy is encoded as a bitfield
that is visible as part of the ABI. This creates a barrier
limiting future expansion of the hierarchy.
As a transitional step, hide the actual layout of the hierarchy
and mark the exposed structure as deprecated. This will allow for
expansion in a later release.
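For example, a classification stage would now set and read the path only
through the exported accessors instead of casting pkt->hash.sched to the
bitfield struct (sketch only; the subport/pipe/tc/queue values are
illustrative):

#include <rte_mbuf.h>
#include <rte_sched.h>

static void
classify(struct rte_mbuf *pkt)
{
	/* Write the hierarchy path without knowing its binary layout */
	rte_sched_port_pkt_write(pkt, /* subport */ 0, /* pipe */ 3,
				 /* tc */ 1, /* queue */ 2,
				 e_RTE_METER_GREEN);
}

static void
inspect(const struct rte_mbuf *pkt)
{
	uint32_t subport, pipe, tc, queue;
	enum rte_meter_color color;

	rte_sched_port_pkt_read_tree_path(pkt, &subport, &pipe, &tc, &queue);
	color = rte_sched_port_pkt_read_color(pkt);
	(void)color;
}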
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
lib/librte_sched/rte_sched.c | 54 ++++++++++++++++++++++++++++++++++
lib/librte_sched/rte_sched.h | 54 ++++++++++------------------------
lib/librte_sched/rte_sched_version.map | 3 ++
3 files changed, 73 insertions(+), 38 deletions(-)
diff --git a/lib/librte_sched/rte_sched.c b/lib/librte_sched/rte_sched.c
index b1655b4..9c9419d 100644
--- a/lib/librte_sched/rte_sched.c
+++ b/lib/librte_sched/rte_sched.c
@@ -184,6 +184,21 @@ enum grinder_state {
e_GRINDER_READ_MBUF
};
+/*
+ * Path through the scheduler hierarchy used by the scheduler enqueue
+ * operation to identify the destination queue for the current
+ * packet. Stored in the field pkt.hash.sched of struct rte_mbuf of
+ * each packet, typically written by the classification stage and read
+ * by scheduler enqueue.
+ */
+struct __rte_sched_port_hierarchy {
+ uint32_t queue:2; /**< Queue ID (0 .. 3) */
+ uint32_t traffic_class:2; /**< Traffic class ID (0 .. 3)*/
+ uint32_t pipe:20; /**< Pipe ID */
+ uint32_t subport:6; /**< Subport ID */
+ uint32_t color:2; /**< Color */
+};
+
struct rte_sched_grinder {
/* Pipe cache */
uint16_t pcache_qmask[RTE_SCHED_GRINDER_PCACHE_SIZE];
@@ -910,6 +925,45 @@ rte_sched_pipe_config(struct rte_sched_port *port,
return 0;
}
+void
+rte_sched_port_pkt_write(struct rte_mbuf *pkt,
+ uint32_t subport, uint32_t pipe, uint32_t traffic_class,
+ uint32_t queue, enum rte_meter_color color)
+{
+ struct __rte_sched_port_hierarchy *sched
+ = (struct __rte_sched_port_hierarchy *) &pkt->hash.sched;
+
+ sched->color = (uint32_t) color;
+ sched->subport = subport;
+ sched->pipe = pipe;
+ sched->traffic_class = traffic_class;
+ sched->queue = queue;
+}
+
+void
+rte_sched_port_pkt_read_tree_path(const struct rte_mbuf *pkt,
+ uint32_t *subport, uint32_t *pipe,
+ uint32_t *traffic_class, uint32_t *queue)
+{
+ const struct __rte_sched_port_hierarchy *sched
+ = (const struct __rte_sched_port_hierarchy *) &pkt->hash.sched;
+
+ *subport = sched->subport;
+ *pipe = sched->pipe;
+ *traffic_class = sched->traffic_class;
+ *queue = sched->queue;
+}
+
+
+enum rte_meter_color
+rte_sched_port_pkt_read_color(const struct rte_mbuf *pkt)
+{
+ const struct __rte_sched_port_hierarchy *sched
+ = (const struct __rte_sched_port_hierarchy *) &pkt->hash.sched;
+
+ return (enum rte_meter_color) sched->color;
+}
+
int
rte_sched_subport_read_stats(struct rte_sched_port *port,
uint32_t subport_id,
diff --git a/lib/librte_sched/rte_sched.h b/lib/librte_sched/rte_sched.h
index e6bba22..f7c0b8e 100644
--- a/lib/librte_sched/rte_sched.h
+++ b/lib/librte_sched/rte_sched.h
@@ -195,17 +195,19 @@ struct rte_sched_port_params {
#endif
};
-/** Path through the scheduler hierarchy used by the scheduler enqueue operation to
-identify the destination queue for the current packet. Stored in the field hash.sched
-of struct rte_mbuf of each packet, typically written by the classification stage and read by
-scheduler enqueue.*/
+/*
+ * Path through scheduler hierarchy
+ *
+ * Note: direct access to internal bitfields is deprecated to allow for future expansion.
+ * Use rte_sched_port_pkt_read/write API instead
+ */
struct rte_sched_port_hierarchy {
uint32_t queue:2; /**< Queue ID (0 .. 3) */
uint32_t traffic_class:2; /**< Traffic class ID (0 .. 3)*/
uint32_t pipe:20; /**< Pipe ID */
uint32_t subport:6; /**< Subport ID */
uint32_t color:2; /**< Color */
-};
+} __attribute__ ((deprecated));
/*
* Configuration
@@ -328,11 +330,6 @@ rte_sched_queue_read_stats(struct rte_sched_port *port,
struct rte_sched_queue_stats *stats,
uint16_t *qlen);
-/*
- * Run-time
- *
- ***/
-
/**
* Scheduler hierarchy path write to packet descriptor. Typically called by the
* packet classification stage.
@@ -348,18 +345,10 @@ rte_sched_queue_read_stats(struct rte_sched_port *port,
* @param queue
* Queue ID within pipe traffic class (0 .. 3)
*/
-static inline void
+void
rte_sched_port_pkt_write(struct rte_mbuf *pkt,
- uint32_t subport, uint32_t pipe, uint32_t traffic_class, uint32_t queue, enum rte_meter_color color)
-{
- struct rte_sched_port_hierarchy *sched = (struct rte_sched_port_hierarchy *) &pkt->hash.sched;
-
- sched->color = (uint32_t) color;
- sched->subport = subport;
- sched->pipe = pipe;
- sched->traffic_class = traffic_class;
- sched->queue = queue;
-}
+ uint32_t subport, uint32_t pipe, uint32_t traffic_class,
+ uint32_t queue, enum rte_meter_color color);
/**
* Scheduler hierarchy path read from packet descriptor (struct rte_mbuf). Typically
@@ -378,24 +367,13 @@ rte_sched_port_pkt_write(struct rte_mbuf *pkt,
* Queue ID within pipe traffic class (0 .. 3)
*
*/
-static inline void
-rte_sched_port_pkt_read_tree_path(struct rte_mbuf *pkt, uint32_t *subport, uint32_t *pipe, uint32_t *traffic_class, uint32_t *queue)
-{
- struct rte_sched_port_hierarchy *sched = (struct rte_sched_port_hierarchy *) &pkt->hash.sched;
-
- *subport = sched->subport;
- *pipe = sched->pipe;
- *traffic_class = sched->traffic_class;
- *queue = sched->queue;
-}
-
-static inline enum rte_meter_color
-rte_sched_port_pkt_read_color(struct rte_mbuf *pkt)
-{
- struct rte_sched_port_hierarchy *sched = (struct rte_sched_port_hierarchy *) &pkt->hash.sched;
+void
+rte_sched_port_pkt_read_tree_path(const struct rte_mbuf *pkt,
+ uint32_t *subport, uint32_t *pipe,
+ uint32_t *traffic_class, uint32_t *queue);
- return (enum rte_meter_color) sched->color;
-}
+enum rte_meter_color
+rte_sched_port_pkt_read_color(const struct rte_mbuf *pkt);
/**
* Hierarchical scheduler port enqueue. Writes up to n_pkts to port scheduler and
diff --git a/lib/librte_sched/rte_sched_version.map b/lib/librte_sched/rte_sched_version.map
index 9f74e8b..6626a74 100644
--- a/lib/librte_sched/rte_sched_version.map
+++ b/lib/librte_sched/rte_sched_version.map
@@ -17,6 +17,9 @@ DPDK_2.0 {
rte_sched_queue_read_stats;
rte_sched_subport_config;
rte_sched_subport_read_stats;
+ rte_sched_port_pkt_write;
+ rte_sched_port_pkt_read_tree_path;
+ rte_sched_port_pkt_read_color;
local: *;
};
--
2.1.4
* [dpdk-dev] [PATCH 5/5] rte_sched: allow reading without clearing
2015-05-27 18:10 [dpdk-dev] [PATCH v4 0/5] rte_sched: cleanup and API enhancements Stephen Hemminger
` (3 preceding siblings ...)
2015-05-27 18:10 ` [dpdk-dev] [PATCH 4/5] rte_sched: hide structure of port hierarchy Stephen Hemminger
@ 2015-05-27 18:10 ` Stephen Hemminger
2015-06-04 17:48 ` Dumitrescu, Cristian
4 siblings, 1 reply; 8+ messages in thread
From: Stephen Hemminger @ 2015-05-27 18:10 UTC (permalink / raw)
To: cristian.dumitrescu; +Cc: dev
The rte_sched statistics API should allow reading statistics without
clearing. Make auto-clear optional. In this version, this is handled
by deprecating the old API and adding a new one.
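For example, a monitoring thread can now poll the counters without
resetting them, while clear=1 keeps the old read-and-clear semantics
(sketch only; queue_id is illustrative):

#include <rte_sched.h>

static void
poll_queue(struct rte_sched_port *port, uint32_t queue_id)
{
	struct rte_sched_queue_stats qs;
	uint16_t qlen;

	/* Non-destructive read: counters keep accumulating */
	rte_sched_queue_stats(port, queue_id, &qs, &qlen, 0);

	/* Same behaviour as the deprecated rte_sched_queue_read_stats() */
	rte_sched_queue_stats(port, queue_id, &qs, &qlen, 1);
}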
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
app/test/test_sched.c | 4 +--
lib/librte_sched/rte_sched.c | 65 +++++++++++++++++++++-------------
lib/librte_sched/rte_sched.h | 37 ++++++++++++++-----
lib/librte_sched/rte_sched_version.map | 2 ++
4 files changed, 74 insertions(+), 34 deletions(-)
diff --git a/app/test/test_sched.c b/app/test/test_sched.c
index c7239f8..03f89b4 100644
--- a/app/test/test_sched.c
+++ b/app/test/test_sched.c
@@ -198,13 +198,13 @@ test_sched(void)
struct rte_sched_subport_stats subport_stats;
uint32_t tc_ov;
- rte_sched_subport_read_stats(port, SUBPORT, &subport_stats, &tc_ov);
+ rte_sched_subport_stats(port, SUBPORT, &subport_stats, &tc_ov, 1);
#if 0
TEST_ASSERT_EQUAL(subport_stats.n_pkts_tc[TC-1], 10, "Wrong subport stats\n");
#endif
struct rte_sched_queue_stats queue_stats;
uint16_t qlen;
- rte_sched_queue_read_stats(port, QUEUE, &queue_stats, &qlen);
+ rte_sched_queue_stats(port, QUEUE, &queue_stats, &qlen, 1);
#if 0
TEST_ASSERT_EQUAL(queue_stats.n_pkts, 10, "Wrong queue stats\n");
#endif
diff --git a/lib/librte_sched/rte_sched.c b/lib/librte_sched/rte_sched.c
index 9c9419d..b4d7edd 100644
--- a/lib/librte_sched/rte_sched.c
+++ b/lib/librte_sched/rte_sched.c
@@ -965,61 +965,78 @@ rte_sched_port_pkt_read_color(const struct rte_mbuf *pkt)
}
int
-rte_sched_subport_read_stats(struct rte_sched_port *port,
- uint32_t subport_id,
- struct rte_sched_subport_stats *stats,
- uint32_t *tc_ov)
+rte_sched_subport_stats(struct rte_sched_port *port, uint32_t subport_id,
+ struct rte_sched_subport_stats *stats,
+ uint32_t *tc_ov, int clear)
{
struct rte_sched_subport *s;
/* Check user parameters */
- if ((port == NULL) ||
- (subport_id >= port->n_subports_per_port) ||
- (stats == NULL) ||
- (tc_ov == NULL)) {
+ if (port == NULL || subport_id >= port->n_subports_per_port)
return -1;
- }
+
s = port->subport + subport_id;
/* Copy subport stats and clear */
- memcpy(stats, &s->stats, sizeof(struct rte_sched_subport_stats));
- memset(&s->stats, 0, sizeof(struct rte_sched_subport_stats));
+ if (stats)
+ *stats = s->stats;
+ if (clear)
+ memset(&s->stats, 0, sizeof(struct rte_sched_subport_stats));
/* Subport TC ovesubscription status */
- *tc_ov = s->tc_ov;
+ if (tc_ov)
+ *tc_ov = s->tc_ov;
return 0;
}
+/* Deprecated API, always clears */
int
-rte_sched_queue_read_stats(struct rte_sched_port *port,
- uint32_t queue_id,
- struct rte_sched_queue_stats *stats,
- uint16_t *qlen)
+rte_sched_subport_read_stats(struct rte_sched_port *port, uint32_t subport_id,
+ struct rte_sched_subport_stats *stats,
+ uint32_t *tc_ov)
+{
+ return rte_sched_subport_stats(port, subport_id, stats, tc_ov, 1);
+}
+
+int
+rte_sched_queue_stats(struct rte_sched_port *port, uint32_t queue_id,
+ struct rte_sched_queue_stats *stats,
+ uint16_t *qlen, int clear)
{
struct rte_sched_queue *q;
struct rte_sched_queue_extra *qe;
/* Check user parameters */
- if ((port == NULL) ||
- (queue_id >= rte_sched_port_queues_per_port(port)) ||
- (stats == NULL) ||
- (qlen == NULL)) {
+ if (port == NULL || queue_id >= rte_sched_port_queues_per_port(port))
return -1;
- }
+
q = port->queue + queue_id;
qe = port->queue_extra + queue_id;
/* Copy queue stats and clear */
- memcpy(stats, &qe->stats, sizeof(struct rte_sched_queue_stats));
- memset(&qe->stats, 0, sizeof(struct rte_sched_queue_stats));
+ if (stats)
+ *stats = qe->stats;
+ if (clear)
+ memset(&qe->stats, 0, sizeof(struct rte_sched_queue_stats));
/* Queue length */
- *qlen = q->qw - q->qr;
+ if (qlen)
+ *qlen = q->qw - q->qr;
return 0;
}
+/* Deprecated API, always clears */
+int
+rte_sched_queue_read_stats(struct rte_sched_port *port,
+ uint32_t queue_id,
+ struct rte_sched_queue_stats *stats,
+ uint16_t *qlen)
+{
+ return rte_sched_queue_stats(port, queue_id, stats, qlen, 1);
+}
+
static inline uint32_t
rte_sched_port_qindex(struct rte_sched_port *port, uint32_t subport, uint32_t pipe, uint32_t traffic_class, uint32_t queue)
{
diff --git a/lib/librte_sched/rte_sched.h b/lib/librte_sched/rte_sched.h
index f7c0b8e..053454c 100644
--- a/lib/librte_sched/rte_sched.h
+++ b/lib/librte_sched/rte_sched.h
@@ -300,14 +300,24 @@ rte_sched_port_get_memory_footprint(struct rte_sched_port_params *params);
* @param tc_ov
* Pointer to pre-allocated 4-entry array where the oversubscription status for
* each of the 4 subport traffic classes should be stored.
+ * @param clear
+ * Reset statistics after read
+ *
* @return
* 0 upon success, error code otherwise
*/
int
-rte_sched_subport_read_stats(struct rte_sched_port *port,
- uint32_t subport_id,
- struct rte_sched_subport_stats *stats,
- uint32_t *tc_ov);
+rte_sched_subport_stats(struct rte_sched_port *port, uint32_t subport_id,
+ struct rte_sched_subport_stats *stats,
+ uint32_t *tc_ov, int clear);
+
+/* Note: use rte_sched_subport_stats() instead,
+ * which allows separate read and clear.
+ */
+int
+rte_sched_subport_read_stats(struct rte_sched_port *port, uint32_t subport_id,
+ struct rte_sched_subport_stats *stats,
+ uint32_t *tc_ov) __attribute__((deprecated));
/**
* Hierarchical scheduler queue statistics read
@@ -321,14 +331,25 @@ rte_sched_subport_read_stats(struct rte_sched_port *port,
* counters should be stored
* @param qlen
* Pointer to pre-allocated variable where the current queue length should be stored.
+ * @param clear
+ * Reset statistics after read
* @return
* 0 upon success, error code otherwise
*/
int
-rte_sched_queue_read_stats(struct rte_sched_port *port,
- uint32_t queue_id,
- struct rte_sched_queue_stats *stats,
- uint16_t *qlen);
+rte_sched_queue_stats(struct rte_sched_port *port,
+ uint32_t queue_id,
+ struct rte_sched_queue_stats *stats,
+ uint16_t *qlen, int clear);
+
+
+/* Note: use rte_sched_queue_stats() instead
+ * which allows separate read and clear.
+ */
+int
+rte_sched_queue_read_stats(struct rte_sched_port *port, uint32_t queue_id,
+ struct rte_sched_queue_stats *stats,
+ uint16_t *qlen) __attribute__((deprecated));
/**
* Scheduler hierarchy path write to packet descriptor. Typically called by the
diff --git a/lib/librte_sched/rte_sched_version.map b/lib/librte_sched/rte_sched_version.map
index 6626a74..cdbaeb7 100644
--- a/lib/librte_sched/rte_sched_version.map
+++ b/lib/librte_sched/rte_sched_version.map
@@ -14,8 +14,10 @@ DPDK_2.0 {
rte_sched_port_enqueue;
rte_sched_port_free;
rte_sched_port_get_memory_footprint;
+ rte_sched_queue_stats;
rte_sched_queue_read_stats;
rte_sched_subport_config;
+ rte_sched_subport_stats;
rte_sched_subport_read_stats;
rte_sched_port_pkt_write;
rte_sched_port_pkt_read_tree_path;
--
2.1.4
* Re: [dpdk-dev] [PATCH 5/5] rte_sched: allow reading without clearing
2015-05-27 18:10 ` [dpdk-dev] [PATCH 5/5] rte_sched: allow reading without clearing Stephen Hemminger
@ 2015-06-04 17:48 ` Dumitrescu, Cristian
2015-07-09 22:51 ` Thomas Monjalon
0 siblings, 1 reply; 8+ messages in thread
From: Dumitrescu, Cristian @ 2015-06-04 17:48 UTC (permalink / raw)
To: Stephen Hemminger; +Cc: dev
> -----Original Message-----
> From: Stephen Hemminger [mailto:stephen@networkplumber.org]
> Sent: Wednesday, May 27, 2015 7:10 PM
> To: Dumitrescu, Cristian
> Cc: dev@dpdk.org; Stephen Hemminger
> Subject: [PATCH 5/5] rte_sched: allow reading without clearing
>
> The rte_sched statistics API should allow reading statistics without
> clearing. Make auto-clear optional. In this version, this is handled
> by deprecating the old API and adding a new one.
>
> Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
> ---
> app/test/test_sched.c | 4 +--
> lib/librte_sched/rte_sched.c | 65 +++++++++++++++++++++-------------
> lib/librte_sched/rte_sched.h | 37 ++++++++++++++-----
> lib/librte_sched/rte_sched_version.map | 2 ++
> 4 files changed, 74 insertions(+), 34 deletions(-)
>
> diff --git a/app/test/test_sched.c b/app/test/test_sched.c
> index c7239f8..03f89b4 100644
> --- a/app/test/test_sched.c
> +++ b/app/test/test_sched.c
> @@ -198,13 +198,13 @@ test_sched(void)
>
> struct rte_sched_subport_stats subport_stats;
> uint32_t tc_ov;
> - rte_sched_subport_read_stats(port, SUBPORT, &subport_stats,
> &tc_ov);
> + rte_sched_subport_stats(port, SUBPORT, &subport_stats, &tc_ov,
> 1);
> #if 0
> TEST_ASSERT_EQUAL(subport_stats.n_pkts_tc[TC-1], 10, "Wrong
> subport stats\n");
> #endif
> struct rte_sched_queue_stats queue_stats;
> uint16_t qlen;
> - rte_sched_queue_read_stats(port, QUEUE, &queue_stats, &qlen);
> + rte_sched_queue_stats(port, QUEUE, &queue_stats, &qlen, 1);
> #if 0
> TEST_ASSERT_EQUAL(queue_stats.n_pkts, 10, "Wrong queue
> stats\n");
> #endif
> diff --git a/lib/librte_sched/rte_sched.c b/lib/librte_sched/rte_sched.c
> index 9c9419d..b4d7edd 100644
> --- a/lib/librte_sched/rte_sched.c
> +++ b/lib/librte_sched/rte_sched.c
> @@ -965,61 +965,78 @@ rte_sched_port_pkt_read_color(const struct
> rte_mbuf *pkt)
> }
>
> int
> -rte_sched_subport_read_stats(struct rte_sched_port *port,
> - uint32_t subport_id,
> - struct rte_sched_subport_stats *stats,
> - uint32_t *tc_ov)
> +rte_sched_subport_stats(struct rte_sched_port *port, uint32_t
> subport_id,
> + struct rte_sched_subport_stats *stats,
> + uint32_t *tc_ov, int clear)
> {
> struct rte_sched_subport *s;
>
> /* Check user parameters */
> - if ((port == NULL) ||
> - (subport_id >= port->n_subports_per_port) ||
> - (stats == NULL) ||
> - (tc_ov == NULL)) {
> + if (port == NULL || subport_id >= port->n_subports_per_port)
> return -1;
> - }
> +
> s = port->subport + subport_id;
>
> /* Copy subport stats and clear */
> - memcpy(stats, &s->stats, sizeof(struct rte_sched_subport_stats));
> - memset(&s->stats, 0, sizeof(struct rte_sched_subport_stats));
> + if (stats)
> + *stats = s->stats;
> + if (clear)
> + memset(&s->stats, 0, sizeof(struct
> rte_sched_subport_stats));
>
> /* Subport TC ovesubscription status */
> - *tc_ov = s->tc_ov;
> + if (tc_ov)
> + *tc_ov = s->tc_ov;
>
> return 0;
> }
>
> +/* Deprecated API, always clears */
> int
> -rte_sched_queue_read_stats(struct rte_sched_port *port,
> - uint32_t queue_id,
> - struct rte_sched_queue_stats *stats,
> - uint16_t *qlen)
> +rte_sched_subport_read_stats(struct rte_sched_port *port, uint32_t
> subport_id,
> + struct rte_sched_subport_stats *stats,
> + uint32_t *tc_ov)
> +{
> + return rte_sched_subport_stats(port, subport_id, stats, tc_ov, 1);
> +}
> +
> +int
> +rte_sched_queue_stats(struct rte_sched_port *port, uint32_t queue_id,
> + struct rte_sched_queue_stats *stats,
> + uint16_t *qlen, int clear)
> {
> struct rte_sched_queue *q;
> struct rte_sched_queue_extra *qe;
>
> /* Check user parameters */
> - if ((port == NULL) ||
> - (queue_id >= rte_sched_port_queues_per_port(port)) ||
> - (stats == NULL) ||
> - (qlen == NULL)) {
> + if (port == NULL || queue_id >=
> rte_sched_port_queues_per_port(port))
> return -1;
> - }
> +
> q = port->queue + queue_id;
> qe = port->queue_extra + queue_id;
>
> /* Copy queue stats and clear */
> - memcpy(stats, &qe->stats, sizeof(struct rte_sched_queue_stats));
> - memset(&qe->stats, 0, sizeof(struct rte_sched_queue_stats));
> + if (stats)
> + *stats = qe->stats;
> + if (clear)
> + memset(&qe->stats, 0, sizeof(struct
> rte_sched_queue_stats));
>
> /* Queue length */
> - *qlen = q->qw - q->qr;
> + if (qlen)
> + *qlen = q->qw - q->qr;
>
> return 0;
> }
>
> +/* Deprecated API, always clears */
> +int
> +rte_sched_queue_read_stats(struct rte_sched_port *port,
> + uint32_t queue_id,
> + struct rte_sched_queue_stats *stats,
> + uint16_t *qlen)
> +{
> + return rte_sched_queue_stats(port, queue_id, stats, qlen, 1);
> +}
> +
> static inline uint32_t
> rte_sched_port_qindex(struct rte_sched_port *port, uint32_t subport,
> uint32_t pipe, uint32_t traffic_class, uint32_t queue)
> {
> diff --git a/lib/librte_sched/rte_sched.h b/lib/librte_sched/rte_sched.h
> index f7c0b8e..053454c 100644
> --- a/lib/librte_sched/rte_sched.h
> +++ b/lib/librte_sched/rte_sched.h
> @@ -300,14 +300,24 @@ rte_sched_port_get_memory_footprint(struct
> rte_sched_port_params *params);
> * @param tc_ov
> * Pointer to pre-allocated 4-entry array where the oversubscription status
> for
> * each of the 4 subport traffic classes should be stored.
> + * @param clear
> + * Reset statistics after read
> + *
> * @return
> * 0 upon success, error code otherwise
> */
> int
> -rte_sched_subport_read_stats(struct rte_sched_port *port,
> - uint32_t subport_id,
> - struct rte_sched_subport_stats *stats,
> - uint32_t *tc_ov);
> +rte_sched_subport_stats(struct rte_sched_port *port, uint32_t
> subport_id,
> + struct rte_sched_subport_stats *stats,
> + uint32_t *tc_ov, int clear);
> +
> +/* Note: use rte_sched_subport_stats() instead,
> + * which allows separate read and clear.
> + */
> +int
> +rte_sched_subport_read_stats(struct rte_sched_port *port, uint32_t
> subport_id,
> + struct rte_sched_subport_stats *stats,
> + uint32_t *tc_ov) __attribute__((deprecated));
>
> /**
> * Hierarchical scheduler queue statistics read
> @@ -321,14 +331,25 @@ rte_sched_subport_read_stats(struct
> rte_sched_port *port,
> * counters should be stored
> * @param qlen
> * Pointer to pre-allocated variable where the current queue length should
> be stored.
> + * @param clear
> + * Reset statistics after read
> * @return
> * 0 upon success, error code otherwise
> */
> int
> -rte_sched_queue_read_stats(struct rte_sched_port *port,
> - uint32_t queue_id,
> - struct rte_sched_queue_stats *stats,
> - uint16_t *qlen);
> +rte_sched_queue_stats(struct rte_sched_port *port,
> + uint32_t queue_id,
> + struct rte_sched_queue_stats *stats,
> + uint16_t *qlen, int clear);
> +
> +
> +/* Note: use rte_sched_queue_stats() instead
> + * which allows separate read and clear.
> + */
> +int
> +rte_sched_queue_read_stats(struct rte_sched_port *port, uint32_t
> queue_id,
> + struct rte_sched_queue_stats *stats,
> + uint16_t *qlen) __attribute__((deprecated));
>
> /**
> * Scheduler hierarchy path write to packet descriptor. Typically called by the
> diff --git a/lib/librte_sched/rte_sched_version.map
> b/lib/librte_sched/rte_sched_version.map
> index 6626a74..cdbaeb7 100644
> --- a/lib/librte_sched/rte_sched_version.map
> +++ b/lib/librte_sched/rte_sched_version.map
> @@ -14,8 +14,10 @@ DPDK_2.0 {
> rte_sched_port_enqueue;
> rte_sched_port_free;
> rte_sched_port_get_memory_footprint;
> + rte_sched_queue_stats;
> rte_sched_queue_read_stats;
> rte_sched_subport_config;
> + rte_sched_subport_stats;
> rte_sched_subport_read_stats;
> rte_sched_port_pkt_write;
> rte_sched_port_pkt_read_tree_path;
> --
> 2.1.4
If we plan to obsolete the API function rte_sched_queue_read_stats() and have people use the new API function rte_sched_queue_stats(), we should replace every usage of the obsolete function with a call to the new function. We need to do this in examples/qos_sched and app/test.
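The call-site change being asked for would look roughly like this (sketch only; the real code in examples/qos_sched and app/test may pass different variables):

	/* before */
	rte_sched_queue_read_stats(port, queue_id, &stats, &qlen);

	/* after: clear=1 preserves the existing read-and-clear behaviour */
	rte_sched_queue_stats(port, queue_id, &stats, &qlen, 1);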
* Re: [dpdk-dev] [PATCH 5/5] rte_sched: allow reading without clearing
2015-06-04 17:48 ` Dumitrescu, Cristian
@ 2015-07-09 22:51 ` Thomas Monjalon
0 siblings, 0 replies; 8+ messages in thread
From: Thomas Monjalon @ 2015-07-09 22:51 UTC (permalink / raw)
To: Stephen Hemminger; +Cc: dev
2015-06-04 17:48, Dumitrescu, Cristian:
> From: Stephen Hemminger [mailto:stephen@networkplumber.org]
> > The rte_sched statistics API should allow reading statistics without
> > clearing. Make auto-clear optional. In this version, this is handled
> > by deprecating the old API and adding a new one.
> >
> > Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
[...]
> > --- a/lib/librte_sched/rte_sched_version.map
> > +++ b/lib/librte_sched/rte_sched_version.map
> > @@ -14,8 +14,10 @@ DPDK_2.0 {
> > rte_sched_port_enqueue;
> > rte_sched_port_free;
> > rte_sched_port_get_memory_footprint;
> > + rte_sched_queue_stats;
> > rte_sched_queue_read_stats;
> > rte_sched_subport_config;
> > + rte_sched_subport_stats;
> > rte_sched_subport_read_stats;
> > rte_sched_port_pkt_write;
> > rte_sched_port_pkt_read_tree_path;
>
> If we plan to obsolete the API function rte_sched_queue_read_stats() and have people use the new API function rte_sched_queue_stats(), we should replace every usage of the obsolete function with a call to the new function. We need to do this in examples/qos_sched and app/test.
Patchwork status is now "Changes requested".