DPDK patches and discussions
From: Honnappa Nagarahalli <Honnappa.Nagarahalli@arm.com>
To: "Van Haaren, Harry" <harry.van.haaren@intel.com>,
	"Mattias Rönnblom" <hofors@lysator.liu.se>,
	"mattias.ronnblom" <mattias.ronnblom@ericsson.com>,
	"Morten Brørup" <mb@smartsharesystems.com>,
	"dev@dpdk.org" <dev@dpdk.org>
Cc: nd <nd@arm.com>
Subject: RE: Service core statistics MT safety
Date: Fri, 1 Jul 2022 18:37:43 +0000	[thread overview]
Message-ID: <DBAPR08MB58141ABAC2D6284CF99A874698BD9@DBAPR08MB5814.eurprd08.prod.outlook.com> (raw)
In-Reply-To: <BN0PR11MB5712D429524F7B39E435A71AD7BA9@BN0PR11MB5712.namprd11.prod.outlook.com>

<snip>
> 
> <big snip of previous discussions>
> 
> > > At the time of the read operation (in the global counter solution),
> > > there may well be cycles consumed or calls having been made, but not
> > > yet posted. The window between call having been made, and global
> > > counter having been incremented (and thus made globally visible) is small,
> > > but non-zero.
> > Agree. The read value is the atomic state of the system at a given
> > instant (when the read was executed), though that instant happened a few
> > cycles back.
> > (Just to be clear, I am fine with per-core counters)
> 
> Option 1: "Per core counters"
> 
> > Agree we need atomic operations. I am not sure whether __atomic_fetch_add
> > or __atomic_store_n would make a large difference. __atomic_fetch_add
> > would result in fewer instructions. I am fine with either.
> 
> Option 2: "Use atomics for counter increments".
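
A rough sketch of the two increment flavours being weighed here (a stand-alone
example with a stand-in counter, not the actual rte_service.c code):

  #include <stdint.h>

  static uint64_t calls; /* stand-in for s->calls */

  /* Option A: one atomic read-modify-write; correct even if several
   * threads increment the counter concurrently. */
  static inline void count_call_fetch_add(void)
  {
          __atomic_fetch_add(&calls, 1, __ATOMIC_RELAXED);
  }

  /* Option B: atomic load, plain add, atomic store (no RMW); only correct
   * when a single thread ever writes the counter, but readers still see a
   * whole (non-torn) 64-bit value. */
  static inline void count_call_store(void)
  {
          uint64_t tmp = __atomic_load_n(&calls, __ATOMIC_RELAXED);
          __atomic_store_n(&calls, tmp + 1, __ATOMIC_RELAXED);
  }
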
> 
> > > >> I was fortunate to get some data from a real-world application,
> > > >> and enabling service core stats resulted in a 7% degradation of
> > > >> overall system capacity. I'm guessing atomic instructions would
> > > >> not make things better.
> 
> Agree, using atomics is likely to reduce performance, but correctness
> is worth more than performance.
> 
> <snip>
> 
> In my mind, any LTS/backports get the simplest/highest-confidence bugfix: using
> atomics.
> The atomics are behind the "service stats" feature enable, so impact is only
> when those are enabled.
> 
> If there is still a performance hit and there are *no* MT services registered, we
> could check a static global flag and fall back to the normal adds. Thoughts on
> such a solution, to limit the atomic perf impact to apps with MT services?
The choice is not between atomic stats and non-atomic stats. The stats need to be incremented atomically in both cases (MT or no-MT services), as the reader could be another thread. We need to ensure that the stores and loads are not split. Hence, we need atomic operations, and we cannot avoid the performance hit here.
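
To make the reader side concrete, a minimal sketch (it reuses the s->calls
field from the patch quoted below; that the stats read happens via
rte_service_attr_get() on another thread is my assumption here):

  /* Writer (service lcore): */
  __atomic_fetch_add(&s->calls, 1, __ATOMIC_RELAXED);

  /* Reader (stats thread): a relaxed atomic load guarantees the 64-bit
   * value is observed in one piece, which a plain load does not guarantee
   * on every target (e.g. 32-bit builds, or if the compiler splits the
   * access). */
  uint64_t calls_snapshot = __atomic_load_n(&s->calls, __ATOMIC_RELAXED);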

I think the choice is between per-core stats and global stats. IMO, we should go with per-core stats to be aligned with the norm.
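
As a rough illustration of the per-core approach (the struct and array names
below are made up for the example, not the actual rte_service.c definitions):

  #include <stdint.h>
  #include <rte_lcore.h>

  #define NUM_SERVICES 64 /* stand-in for the max service count */

  struct core_service_stats {
          uint64_t calls;
          uint64_t cycles;
  };

  /* one slot per lcore per service, so writers never contend */
  static struct core_service_stats stats[RTE_MAX_LCORE][NUM_SERVICES];

  /* writer: the service lcore only touches its own slot */
  static inline void stats_add_call(uint32_t service_idx)
  {
          __atomic_fetch_add(&stats[rte_lcore_id()][service_idx].calls, 1,
                          __ATOMIC_RELAXED);
  }

  /* reader: sum the per-lcore slots with relaxed atomic loads */
  static inline uint64_t stats_total_calls(uint32_t service_idx)
  {
          uint64_t total = 0;
          unsigned int i;

          for (i = 0; i < RTE_MAX_LCORE; i++)
                  total += __atomic_load_n(&stats[i][service_idx].calls,
                                  __ATOMIC_RELAXED);
          return total;
  }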

> 
> The code changes themselves are OK. I can send a patch with the fix if there's
> agreement on the approach?
> 
> 
> diff --git a/lib/eal/common/rte_service.c b/lib/eal/common/rte_service.c
> index ef31b1f63c..a07c8fc2d7 100644
> --- a/lib/eal/common/rte_service.c
> +++ b/lib/eal/common/rte_service.c
> @@ -363,9 +363,9 @@ service_runner_do_callback(struct rte_service_spec_impl *s,
>                 uint64_t start = rte_rdtsc();
>                 s->spec.callback(userdata);
>                 uint64_t end = rte_rdtsc();
> -               s->cycles_spent += end - start;
> +               __atomic_fetch_add(&s->cycles_spent, (end-start), __ATOMIC_RELAXED);
> +               __atomic_fetch_add(&s->calls, 1, __ATOMIC_RELAXED);
>                 cs->calls_per_service[service_idx]++;
> -               s->calls++;
>         } else
>                 s->spec.callback(userdata);
> }


Thread overview: 19+ messages
2022-06-27 11:05 Mattias Rönnblom
2022-06-27 12:31 ` Morten Brørup
2022-06-27 17:39   ` Honnappa Nagarahalli
2022-06-27 17:47     ` Mattias Rönnblom
2022-06-27 18:19       ` Honnappa Nagarahalli
2022-06-27 20:00         ` Mattias Rönnblom
2022-06-28  0:14           ` Honnappa Nagarahalli
2022-06-28  6:52             ` Mattias Rönnblom
2022-06-28 15:24               ` Honnappa Nagarahalli
2022-06-28 18:30                 ` Mattias Rönnblom
2022-06-28 19:15                   ` Honnappa Nagarahalli
2022-06-29  6:34                     ` Mattias Rönnblom
2022-06-29 20:07                       ` Honnappa Nagarahalli
2022-06-30  8:46                         ` Van Haaren, Harry
2022-07-01 18:37                           ` Honnappa Nagarahalli [this message]
2022-07-01 20:51                             ` Morten Brørup
2022-07-05 17:09                               ` Van Haaren, Harry
2022-07-07 17:26                                 ` Honnappa Nagarahalli
2022-07-08 12:58                                   ` Van Haaren, Harry
