From: "Van Haaren, Harry" <harry.van.haaren@intel.com>
To: "Mattias Rönnblom" <hofors@lysator.liu.se>,
"Morten Brørup" <mb@smartsharesystems.com>,
"Mattias Rönnblom" <mattias.ronnblom@ericsson.com>,
"dev@dpdk.org" <dev@dpdk.org>
Cc: Stefan Sundkvist <stefan.sundkvist@ericsson.com>
Subject: RE: [RFC] service: extend service function call statistics
Date: Mon, 29 Jan 2024 12:50:03 +0000 [thread overview]
Message-ID: <PH8PR11MB6803496326DA2824F2A29E22D77E2@PH8PR11MB6803.namprd11.prod.outlook.com> (raw)
In-Reply-To: <31280b22-4ada-421d-930a-662e494b34e0@lysator.liu.se>
> -----Original Message-----
> From: Mattias Rönnblom <hofors@lysator.liu.se>
> Sent: Saturday, January 27, 2024 7:32 PM
> To: Morten Brørup <mb@smartsharesystems.com>; Mattias Rönnblom
> <mattias.ronnblom@ericsson.com>; dev@dpdk.org
> Cc: Van Haaren, Harry <harry.van.haaren@intel.com>; Stefan Sundkvist
> <stefan.sundkvist@ericsson.com>
> Subject: Re: [RFC] service: extend service function call statistics
Hi Mattias,
Thanks for the patch, looks good to me! Some responses to discussion below;
Acked-by: Harry van Haaren <harry.van.haaren@intel.com>
> On 2024-01-26 11:07, Morten Brørup wrote:
> >> From: Mattias Rönnblom [mailto:hofors@lysator.liu.se]
> >> Sent: Friday, 26 January 2024 09.28
> >>
> >> On 2024-01-26 00:19, Morten Brørup wrote:
> >>>> From: Mattias Rönnblom [mailto:mattias.ronnblom@ericsson.com]
> >>>> Sent: Thursday, 25 January 2024 20.15
> >>>>
> >>>> Add two new per-service counters.
> >>>>
> >>>> RTE_SERVICE_ATTR_IDLE_CALL_COUNT tracks the number of service
> >>>> function invocations where no work was performed.
> >>>>
> >>>> RTE_SERVICE_ATTR_ERROR_CALL_COUNT tracks the number of invocations
> >>>> resulting in an error.
> >>>>
> >>>> The semantics of RTE_SERVICE_ATTR_CALL_COUNT remain the same (i.e.,
> >>>> counting all invocations, regardless of return value).
> >>>>
> >>>> The new statistics may be useful for both debugging and profiling
> >>>> (e.g., calculating the average per-call processing latency for
> >>>> non-idle service calls).
> >>>>
> >>>> Service core tests are extended to cover the new counters, and
> >>>> coverage for RTE_SERVICE_ATTR_CALL_COUNT is improved.
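(As an aside: for anyone wanting to consume these counters from an
application, below is a rough, untested sketch of the profiling use-case
mentioned above. It assumes the attribute names land exactly as in this
RFC, and elides error handling of the attr_get() calls for brevity.)

#include <inttypes.h>
#include <stdio.h>
#include <rte_service.h>

/* Print per-service stats, including the average cycle cost of the
 * calls that actually did work. Per the post-22.11 semantics, the
 * CYCLES attribute only accounts cycles spent in non-idle calls. */
static void
print_service_profile(uint32_t id)
{
	uint64_t calls = 0, idle = 0, errors = 0, cycles = 0;

	rte_service_attr_get(id, RTE_SERVICE_ATTR_CALL_COUNT, &calls);
	rte_service_attr_get(id, RTE_SERVICE_ATTR_IDLE_CALL_COUNT, &idle);
	rte_service_attr_get(id, RTE_SERVICE_ATTR_ERROR_CALL_COUNT, &errors);
	rte_service_attr_get(id, RTE_SERVICE_ATTR_CYCLES, &cycles);

	uint64_t busy = calls - idle;
	printf("service %u: %" PRIu64 " calls, %" PRIu64 " idle, %" PRIu64
	       " errors\n", id, calls, idle, errors);
	if (busy > 0)
		printf("  avg cycles per non-idle call: %" PRIu64 "\n",
		       cycles / busy);
}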
> >>>
> >>> OK to all of the above. Good stuff.
> >>>
> >>>>
> >>>> The documentation for the CYCLES attributes is updated to reflect
> >>>> their actual semantics.
> >>>
> >>> If this is intended behavior, then updating the documentation seems
> >>> appropriate - I would even go so far as to consider it a bug fix.
> >>>
> >>> However, quite a few cycles may be consumed by a service before it
> >>> can conclude that it had no work to do. Shouldn't that be considered
> >>> time spent by the service? I.e., should the code be fixed instead of
> >>> the documentation?
> >>>
> >>
> >> Generally, polling for new work in the service is cheap, in my
> >> experience. But there's nothing in principle that prevents the
> >> situation you describe from occurring. You could add an "IDLE_CYCLES"
> >> counter in case that would ever be a real-world problem.
> >>
> >> That wouldn't be a fix, but rather just a return to the old, subtly
> >> broken (pre-22.11?) semantics.
> >>
> >> Have a look at 809bd24 to see the rationale for the change. There's an
> >> example in 4689c57.
> >>
> >> The cause of this ambiguity is that the platform/OS (i.e., DPDK)
> >> doesn't know which service to "wake up" (which in turn is the result
> >> of the platform not being able to track input sources, like NIC RX
> >> queues or timer wheels) and thus must ask every service to check if
> >> it has something to do.
> >
> > OK. Makes good sense.
> > So definitely fix the documentation, not the code. :-)
Agreed.
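(To make the "idle call" semantics concrete, a minimal sketch of a
service callback is below; it assumes -EAGAIN is the return value used
to signal "no work done", and process_burst() is a hypothetical
application helper.)

#include <errno.h>
#include <stdint.h>
#include <rte_ring.h>

#define BURST_SIZE 32

void process_burst(void **objs, unsigned int n); /* hypothetical */

static int32_t
my_service_cb(void *args)
{
	struct rte_ring *ring = args;
	void *burst[BURST_SIZE];
	unsigned int n;

	/* Polling an empty ring is cheap; this is the common idle path. */
	n = rte_ring_dequeue_burst(ring, burst, BURST_SIZE, NULL);
	if (n == 0)
		return -EAGAIN; /* counted as an idle call */

	process_burst(burst, n);
	return 0; /* counted as a regular, non-idle call */
}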
> >>
> >>> Alternatively, keep the behavior (for backwards compatibility) and
> >>> fix the documentation, as this patch does, and add an IDLE_CYCLES
> >>> counter for time spent in idle calls.
> >>>
> >>> PS: We're not using DPDK service cores in our applications, so I'm
> >>> not familiar with the details. We are using something somewhat
> >>> similar (but homegrown), also for profiling and power management
> >>> purposes, and my feedback is based on my experience with our own
> >>> variant of service cores.
> >>>
> >>
> >> When are you making the switch to service cores? :)
> >
> > Our own "service cores" implementation has some slightly different
> > properties, which we are still experimenting with.
> >
> > E.g., in addition to the special return value "idle (no work to do)",
> > we also have a special return value for "incomplete (more work
> > urgently pending)", used when a service processed a full burst and
> > still has more work pending in its input queue.
> >
> > We are also considering returning a value to indicate when the service
> > needs to be called again. This concept is only an idea, and we haven't
> > started experimenting with it yet.
> >
> >
> > From a high-level perspective, the service cores library is much like
> > an operating system's CPU scheduler, although based on voluntary time
> > sharing. Many algorithms and many parameters can be considered. It can
> > also tie into power management and prioritization of different tasks.
This was brought up as a concern when initially adding it to DPDK; the scope of service-cores
could quite easily creep from "a way to run a dataplane function on a core, abstracting away
the different environments that DPDK runs in" to "a userspace scheduler with bells-and-whistles".
The reason service-cores was built was to allow applications to run with HW & SW eventdev PMDs
without having to handle the different threading requirements at the application level. This goal
is achieved by the current service-cores infrastructure.
> Service cores in their current form are more like a primitive kernel
> bottom half implementation.
>
> The primary work scheduler in DPDK is Eventdev, but even with Eventdev
> you want some way to fit in non-event kinds of processing as well, and
> here service cores serve a role.
>
> rte_service.c is a good place for power management. I agree that some
> means for the services to convey what latency (e.g., sleep time) is
> acceptable is needed. Returning a time per service function invocation
> would be a flexible way to do so. It could also just be a per-lcore or
> global configuration, set by the application.
While I agree it is potentially a place to implement power management, sleeping, etc., doing so
expands the scope of service-cores significantly; not something we should do without
considerable thought.
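(For the archives, the kind of API surface being discussed would look
roughly like the below. Purely hypothetical: none of this is existing or
proposed DPDK API, and no_work_available()/do_work() are made-up helpers.)

#include <stdint.h>

int no_work_available(void *args); /* hypothetical */
void do_work(void *args);          /* hypothetical */

/* Hypothetical convention: a positive return value hints, in
 * microseconds, how long the lcore may sleep before calling this
 * service again; 0 means "call again as soon as possible". */
static int32_t
my_service_cb(void *args)
{
	if (no_work_available(args))
		return 100; /* no need to call back for ~100 us */

	do_work(args);
	return 0;
}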