DPDK patches and discussions
From: Neil Horman <nhorman@tuxdriver.com>
To: Stephen Hemminger <stephen@networkplumber.org>
Cc: dev@dpdk.org, Stephen Hemminger <shemming@brocade.com>
Subject: Re: [dpdk-dev] [PATCH 4/7] rte_sched: don't clear statistics when read
Date: Sun, 1 Feb 2015 09:25:53 -0500
Message-ID: <20150201142553.GB3141@localhost.localdomain>
In-Reply-To: <1422785031-11494-4-git-send-email-stephen@networkplumber.org>

On Sun, Feb 01, 2015 at 10:03:48AM +0000, Stephen Hemminger wrote:
> From: Stephen Hemminger <shemming@brocade.com>
> 
> Make rte_sched statistics API work like the ethernet statistics API.
> Don't auto-clear statistics.
> 
> Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
> ---
>  lib/librte_sched/rte_sched.c | 30 ++++++++++++++++++++++++++++++
>  lib/librte_sched/rte_sched.h | 29 +++++++++++++++++++++++++++++
>  2 files changed, 59 insertions(+)
> 
> diff --git a/lib/librte_sched/rte_sched.c b/lib/librte_sched/rte_sched.c
> index 8cb8bf1..d891e50 100644
> --- a/lib/librte_sched/rte_sched.c
> +++ b/lib/librte_sched/rte_sched.c
> @@ -935,6 +935,21 @@ rte_sched_subport_read_stats(struct rte_sched_port *port,
>  }
>  
>  int
> +rte_sched_subport_stats_reset(struct rte_sched_port *port,
> +			      uint32_t subport_id)
> +{
> +	struct rte_sched_subport *s;
> +
> +	/* Check user parameters */
> +	if (port == NULL || subport_id >= port->n_subports_per_port)
> +		return -1;
> +
> +	s = port->subport + subport_id;
> +	memset(&s->stats, 0, sizeof(struct rte_sched_subport_stats));
It's like this in the current implementation as well, but isn't this a
bit racy?  If we're clearing the stats while another thread is polling
the interface, an update can slip in between the read and the memset
and get lost.

Would it be worth implementing a toggle mechanism, whereby a reset does
an atomic cmpxchg on the stats pointer between two stats copies, then
zeroes the exchanged copy?
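Something along these lines would do it.  A rough, untested sketch --
the two-copy layout and the field names below are hypothetical, not
the current rte_sched structures:

#include <string.h>

/* Hypothetical double-buffered stats; the datapath increments
 * counters only through 'active'. */
struct subport_stats { unsigned long n_pkts; unsigned long n_bytes; };

struct subport {
	struct subport_stats stats[2];
	struct subport_stats *active;
};

static int
subport_stats_reset(struct subport *s)
{
	struct subport_stats *old = s->active;
	struct subport_stats *spare =
		(old == &s->stats[0]) ? &s->stats[1] : &s->stats[0];

	/* Flip the datapath onto the pre-zeroed spare copy... */
	if (!__sync_bool_compare_and_swap(&s->active, old, spare))
		return -1;	/* lost a race with a concurrent reset */

	/* ...then zero the retired copy.  A writer that loaded 'active'
	 * just before the swap can still bump 'old' afterwards and lose
	 * that update, so this narrows the window rather than closing
	 * it completely. */
	memset(old, 0, sizeof(*old));
	return 0;
}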
Neil

> +	return 0;
> +}
> +
> +int
>  rte_sched_queue_read_stats(struct rte_sched_port *port,
>  	uint32_t queue_id,
>  	struct rte_sched_queue_stats *stats,
> @@ -963,6 +978,21 @@ rte_sched_queue_read_stats(struct rte_sched_port *port,
>  	return 0;
>  }
>  
> +int
> +rte_sched_queue_stats_reset(struct rte_sched_port *port,
> +			    uint32_t queue_id)
> +{
> +	struct rte_sched_queue_extra *qe;
> +
> +	/* Check user parameters */
> +	if (port == NULL || queue_id >= rte_sched_port_queues_per_port(port))
> +		return -1;
> +
> +	qe = port->queue_extra + queue_id;
> +	memset(&qe->stats, 0, sizeof(struct rte_sched_queue_stats));
> +	return 0;
> +}
> +
>  static inline uint32_t
>  rte_sched_port_qindex(struct rte_sched_port *port, uint32_t subport, uint32_t pipe, uint32_t traffic_class, uint32_t queue)
>  {
> diff --git a/lib/librte_sched/rte_sched.h b/lib/librte_sched/rte_sched.h
> index d5a1d5b..64b4dd6 100644
> --- a/lib/librte_sched/rte_sched.h
> +++ b/lib/librte_sched/rte_sched.h
> @@ -316,6 +316,21 @@ rte_sched_subport_read_stats(struct rte_sched_port *port,
>  	struct rte_sched_subport_stats *stats,
>  	uint32_t *tc_ov);
>  
> +
> +/**
> + * Hierarchical scheduler subport statistics reset
> + *
> + * @param port
> + *   Handle to port scheduler instance
> + * @param subport_id
> + *   Subport ID
> + * @return
> + *   0 upon success, error code otherwise
> + */
> +int
> +rte_sched_subport_stats_reset(struct rte_sched_port *port,
> +			      uint32_t subport_id);
> +
>  /**
>   * Hierarchical scheduler queue statistics read
>   *
> @@ -337,6 +352,20 @@ rte_sched_queue_read_stats(struct rte_sched_port *port,
>  	struct rte_sched_queue_stats *stats,
>  	uint16_t *qlen);
>  
> +/**
> + * Hierarchical scheduler queue statistics reset
> + *
> + * @param port
> + *   Handle to port scheduler instance
> + * @param queue_id
> + *   Queue ID within port scheduler
> + * @return
> + *   0 upon success, error code otherwise
> + */
> +int
> +rte_sched_queue_stats_reset(struct rte_sched_port *port,
> +			    uint32_t queue_id);
> +
>  /*
>   * Run-time
>   *
> -- 
> 2.1.4
> 
> 
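
With this change, reading and clearing become two explicit steps, as
in the ethdev stats API.  A minimal usage sketch, assuming 'port' and
'subport_id' are already set up by the caller:

	struct rte_sched_subport_stats stats;
	uint32_t tc_ov;

	/* Reading no longer clears the counters... */
	if (rte_sched_subport_read_stats(port, subport_id,
					 &stats, &tc_ov) != 0)
		return -1;

	/* ...clearing is now a separate, explicit call. */
	if (rte_sched_subport_stats_reset(port, subport_id) != 0)
		return -1;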

Thread overview: 10+ messages
2015-02-01 10:03 [dpdk-dev] [PATCH 1/7] rte_sched: make RED optional at runtime Stephen Hemminger
2015-02-01 10:03 ` [dpdk-dev] [PATCH 2/7] rte_sched: use reserved field to allow more VLAN's Stephen Hemminger
     [not found]   ` <2601191342CEEE43887BDE71AB977258213E2822@irsmsx105.ger.corp.intel.com>
2015-02-02 22:31     ` Stephen Hemminger
2015-02-03  0:07       ` Ananyev, Konstantin
2015-02-01 10:03 ` [dpdk-dev] [PATCH 3/7] rte_sched: keep track of RED drops Stephen Hemminger
2015-02-01 10:03 ` [dpdk-dev] [PATCH 4/7] rte_sched: don't clear statistics when read Stephen Hemminger
2015-02-01 14:25   ` Neil Horman [this message]
2015-02-01 10:03 ` [dpdk-dev] [PATCH 5/7] rte_sched: don't put tabs in log messages Stephen Hemminger
2015-02-01 10:03 ` [dpdk-dev] [PATCH 6/7] rte_sched: eliminate floating point in calculating byte clock Stephen Hemminger
2015-02-01 10:03 ` [dpdk-dev] [PATCH 7/7] rte_sched: rearrange data structures Stephen Hemminger
