From: Megha Ajmera <megha.ajmera@intel.com>
To: dev@dpdk.org, jasvinder.singh@intel.com,
cristian.dumitrescu@intel.com, thomas@monjalon.net,
david.marchand@redhat.com, sham.singh.thakur@intel.com
Subject: [PATCH v3 3/4] sched: enable statistics unconditionally
Date: Tue, 22 Feb 2022 12:57:44 +0000 [thread overview]
Message-ID: <20220222125745.2944462-4-megha.ajmera@intel.com> (raw)
In-Reply-To: <20220222125745.2944462-1-megha.ajmera@intel.com>
Remove the RTE_SCHED_COLLECT_STATS flag from rte_config.h.
Statistics collection is now always enabled, so the code paths
previously guarded by this flag become unconditional.
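The change boils down to deleting the #ifdef guards around the stats
bookkeeping so the counter updates always run. A self-contained
illustration of the before/after pattern (this is not DPDK code; the
struct and function names are invented for the sketch):

```c
#include <stdint.h>

/* Invented stand-in for the per-queue statistics the scheduler keeps. */
struct queue_stats {
	uint64_t n_pkts;         /* packets successfully enqueued      */
	uint64_t n_pkts_dropped; /* packets dropped because queue full */
};

/*
 * Before this patch, the two counter updates below would have sat
 * inside #ifdef RTE_SCHED_COLLECT_STATS / #endif and compiled away
 * by default. After it, they are always executed.
 */
static int
enqueue(struct queue_stats *st, uint32_t qlen, uint32_t qsize)
{
	if (qlen >= qsize) {
		st->n_pkts_dropped++;	/* unconditional drop accounting */
		return 0;
	}
	st->n_pkts++;			/* unconditional success accounting */
	return 1;
}
```

The trade-off is a few counter increments on the hot path in exchange
for statistics that are always available at run time, instead of only
in builds that defined the flag.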
Signed-off-by: Megha Ajmera <megha.ajmera@intel.com>
---
config/rte_config.h | 1 -
doc/guides/sample_app_ug/qos_scheduler.rst | 6 ------
lib/sched/rte_sched.c | 12 ------------
3 files changed, 19 deletions(-)
diff --git a/config/rte_config.h b/config/rte_config.h
index d449af4810..de6fea5b67 100644
--- a/config/rte_config.h
+++ b/config/rte_config.h
@@ -90,7 +90,6 @@
/* rte_sched defines */
#undef RTE_SCHED_CMAN
-#undef RTE_SCHED_COLLECT_STATS
#undef RTE_SCHED_SUBPORT_TC_OV
/* rte_graph defines */
diff --git a/doc/guides/sample_app_ug/qos_scheduler.rst b/doc/guides/sample_app_ug/qos_scheduler.rst
index 0782e41ee7..34b662b230 100644
--- a/doc/guides/sample_app_ug/qos_scheduler.rst
+++ b/doc/guides/sample_app_ug/qos_scheduler.rst
@@ -39,12 +39,6 @@ The application is located in the ``qos_sched`` sub-directory.
This application is intended as a linux only.
-.. note::
-
- To get statistics on the sample app using the command line interface as described in the next section,
- DPDK must be compiled defining *RTE_SCHED_COLLECT_STATS*, which can be done by changing the relevant
- entry in the ``config/rte_config.h`` file.
-
.. note::
Number of grinders is currently set to 8. This can be modified by specifying RTE_SCHED_PORT_N_GRINDERS=N in
diff --git a/lib/sched/rte_sched.c b/lib/sched/rte_sched.c
index 9c85edb4cc..8a051049de 100644
--- a/lib/sched/rte_sched.c
+++ b/lib/sched/rte_sched.c
@@ -1779,8 +1779,6 @@ rte_sched_port_queue_is_empty(struct rte_sched_subport *subport,
#endif /* RTE_SCHED_DEBUG */
-#ifdef RTE_SCHED_COLLECT_STATS
-
static inline void
rte_sched_port_update_subport_stats(struct rte_sched_port *port,
struct rte_sched_subport *subport,
@@ -1838,8 +1836,6 @@ rte_sched_port_update_queue_stats_on_drop(struct rte_sched_subport *subport,
#endif
}
-#endif /* RTE_SCHED_COLLECT_STATS */
-
#ifdef RTE_SCHED_CMAN
static inline int
@@ -1978,18 +1974,14 @@ rte_sched_port_enqueue_qptrs_prefetch0(struct rte_sched_subport *subport,
struct rte_mbuf *pkt, uint32_t subport_qmask)
{
struct rte_sched_queue *q;
-#ifdef RTE_SCHED_COLLECT_STATS
struct rte_sched_queue_extra *qe;
-#endif
uint32_t qindex = rte_mbuf_sched_queue_get(pkt);
uint32_t subport_queue_id = subport_qmask & qindex;
q = subport->queue + subport_queue_id;
rte_prefetch0(q);
-#ifdef RTE_SCHED_COLLECT_STATS
qe = subport->queue_extra + subport_queue_id;
rte_prefetch0(qe);
-#endif
return subport_queue_id;
}
@@ -2031,12 +2023,10 @@ rte_sched_port_enqueue_qwa(struct rte_sched_port *port,
if (unlikely(rte_sched_port_cman_drop(port, subport, pkt, qindex, qlen) ||
(qlen >= qsize))) {
rte_pktmbuf_free(pkt);
-#ifdef RTE_SCHED_COLLECT_STATS
rte_sched_port_update_subport_stats_on_drop(port, subport,
qindex, pkt, qlen < qsize);
rte_sched_port_update_queue_stats_on_drop(subport, qindex, pkt,
qlen < qsize);
-#endif
return 0;
}
@@ -2048,10 +2038,8 @@ rte_sched_port_enqueue_qwa(struct rte_sched_port *port,
rte_bitmap_set(subport->bmp, qindex);
/* Statistics */
-#ifdef RTE_SCHED_COLLECT_STATS
rte_sched_port_update_subport_stats(port, subport, qindex, pkt);
rte_sched_port_update_queue_stats(subport, qindex, pkt);
-#endif
return 1;
}
--
2.25.1
Thread overview: 21+ messages
2022-02-18 9:36 [PATCH v2 0/4] sched: HQoS Library cleanup Megha Ajmera
2022-02-18 9:36 ` [PATCH v2 1/4] sched: Cleanup qos scheduler defines from rte_config Megha Ajmera
2022-02-18 10:52 ` Thomas Monjalon
2022-02-18 11:14 ` Dumitrescu, Cristian
2022-02-18 11:17 ` Dumitrescu, Cristian
2022-02-18 11:04 ` Dumitrescu, Cristian
2022-02-18 9:36 ` [PATCH v2 2/4] sched: Always enable stats in HQoS library Megha Ajmera
2022-02-18 11:01 ` Dumitrescu, Cristian
2022-02-18 9:36 ` [PATCH v2 3/4] sched: Always enable best effort TC oversubscription " Megha Ajmera
2022-02-18 11:02 ` Dumitrescu, Cristian
2022-02-18 9:36 ` [PATCH v2 4/4] sched: Removed code defined under VECTOR Defines Megha Ajmera
2022-02-18 11:03 ` Dumitrescu, Cristian
2022-02-18 10:58 ` [PATCH v2 0/4] sched: HQoS Library cleanup Dumitrescu, Cristian
2022-02-18 11:49 ` Thomas Monjalon
2022-02-22 12:57 ` [PATCH v3 0/4] sched: cleanup of sched library Megha Ajmera
2022-02-22 12:57 ` [PATCH v3 1/4] sched: remove code no longer needed Megha Ajmera
2022-02-22 12:57 ` [PATCH v3 2/4] sched: move grinder configuration flag Megha Ajmera
2022-02-22 12:57 ` Megha Ajmera [this message]
2022-02-22 12:57 ` [PATCH v3 4/4] sched: enable traffic class oversubscription unconditionally Megha Ajmera
2022-02-22 15:27 ` [PATCH v3 0/4] sched: cleanup of sched library Dumitrescu, Cristian
2022-02-24 22:44 ` Thomas Monjalon