DPDK patches and discussions
From: Jerin Jacob <jerin.jacob@caviumnetworks.com>
To: "Van Haaren, Harry" <harry.van.haaren@intel.com>
Cc: "Richardson, Bruce" <bruce.richardson@intel.com>,
	"dev@dpdk.org" <dev@dpdk.org>,
	"thomas@monjalon.net" <thomas@monjalon.net>,
	"Wiles, Keith" <keith.wiles@intel.com>
Subject: Re: [dpdk-dev] Service lcores and Application lcores
Date: Fri, 30 Jun 2017 18:21:49 +0530	[thread overview]
Message-ID: <20170630125147.GA4578@jerin> (raw)
In-Reply-To: <E923DB57A917B54B9182A2E928D00FA640C344F6@IRSMSX102.ger.corp.intel.com>

-----Original Message-----
> Date: Fri, 30 Jun 2017 10:00:18 +0000
> From: "Van Haaren, Harry" <harry.van.haaren@intel.com>
> To: Jerin Jacob <jerin.jacob@caviumnetworks.com>, "Richardson, Bruce"
>  <bruce.richardson@intel.com>
> CC: "dev@dpdk.org" <dev@dpdk.org>, "thomas@monjalon.net"
>  <thomas@monjalon.net>, "Wiles, Keith" <keith.wiles@intel.com>
> Subject: RE: Service lcores and Application lcores
> 
> > From: Jerin Jacob [mailto:jerin.jacob@caviumnetworks.com]
> > Sent: Friday, June 30, 2017 5:45 AM
> > To: Richardson, Bruce <bruce.richardson@intel.com>
> > Cc: Van Haaren, Harry <harry.van.haaren@intel.com>; dev@dpdk.org; thomas@monjalon.net;
> > Wiles, Keith <keith.wiles@intel.com>
> > Subject: Re: Service lcores and Application lcores
> > 
> > -----Original Message-----
> > > Date: Thu, 29 Jun 2017 16:57:08 +0100
> > > From: Bruce Richardson <bruce.richardson@intel.com>
> > > To: "Van Haaren, Harry" <harry.van.haaren@intel.com>
> > > CC: "dev@dpdk.org" <dev@dpdk.org>, 'Jerin Jacob'
> > >  <jerin.jacob@caviumnetworks.com>, "thomas@monjalon.net"
> > >  <thomas@monjalon.net>, "Wiles, Keith" <keith.wiles@intel.com>
> > > Subject: Re: Service lcores and Application lcores
> > > User-Agent: Mutt/1.8.1 (2017-04-11)
> > >
> > > On Thu, Jun 29, 2017 at 03:36:04PM +0100, Van Haaren, Harry wrote:
> > > > Hi All,
> 
> <snip>
> 
> > > > A proposal for Eventdev, to ensure Service lcores and Application lcores play nice;
> > > >
> > > > 1) Application lcores must not directly call rte_eventdev_schedule()
> > > > 2A) Service cores are the proper method to run services
> > > > 2B) If an application insists on running a service "manually" on an app lcore, we
> > provide a function for that:
> > > >      rte_service_run_from_app_lcore(struct service *srv);
> > > >
> > > > The above function would allow a pesky app to run services on its own (non-service
> > core) lcores, but
> > > > does so through the service-core framework, allowing the service-library atomic to
> > keep access serialized as required for non-multi-thread-safe services.
> > > >
> > > > The above solution maintains the option of running the eventdev PMD as now (single-
> > core dedicated to a single service), while providing correct serialization by using the
> > rte_service_run_from_app_lcore() function. Given the atomic is only used when required
> > (multiple cores mapped to the service) there should be no performance delta.
> > > >
> > > > Given that the application should not invoke rte_eventdev_schedule(), we could even
> > consider removing it from the Eventdev API. A PMD that requires cycles registers a
> > service, and an application can use a service core or the run_from_app_lcore() function if
> > it wishes to invoke that service on an application owned lcore.
> > > >
> > > >
> > > > Opinions?
> > >
> > > I would be in favour of this proposal, except for the proposed name for
> > > the new function. It would be useful for an app to be able to "adopt" a
> > > service into it's main loop if so desired. If we do this, I think I'd
> > 
> > +1
> > 
> > Agree with Harry and Bruce here.
> > 
> > I think the adapter function should take "struct service *" and return
> > lcore_function_t so that it can run using the existing rte_eal_remote_launch()
> 
> 
> I don't think providing a remote-launch API is actually beneficial. Remote-launching a single service
> is equivalent to adding that lcore as a service-core, and mapping it to just that single service.
> The advantage of adding it as a service core, is future-proofing for if more services need to be added
> to that core in future, and statistics of the service core infrastructure. A convenience API could be
> provided to perform the core_add(), service_start(), enable_on_service() and core_start() APIs in one.
> 
> Also, the remote_launch API doesn't solve the original problem - what if an application lcore wishes
> to run one iteration of a service "manually". The remote_launch style API does not solve this problem.

Agree with the problem statement. But remote_launch() operates on lcores,
which are not necessarily 1:1 mapped to physical cores.

By introducing "rte_service_iterate", we are creating a parallel infrastructure
to run the service on non-DPDK-service lcores, aka normal lcores.
Is this really required? Is there any real advantage for the application in
not using the built-in service lcore infrastructure, rather than calling
"rte_service_iterate" and running on normal lcores? If we really want to mux
a physical core to N lcores, EAL already provides that in the form of threads.

I think providing too many parallel options for the same use case may be
overkill.

Just my 2c.
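[Editor's note: for illustration only, a minimal sketch of the iterate-style
idea under discussion. The struct fields and function names below are
hypothetical simplifications, not the actual DPDK service-core implementation;
the point is only how an atomic flag can serialize a non-multi-thread-safe
service while leaving the multi-thread-safe path untouched.]

```c
#include <stdatomic.h>
#include <stddef.h>

/* Hypothetical, simplified model of a registered service.
 * Not the real DPDK structures. */
struct service {
	void (*cb)(void *);       /* service callback */
	void *cb_userdata;        /* argument passed to the callback */
	int multithread_safe;     /* nonzero if cb may run concurrently */
	atomic_flag execute_lock; /* serializes non-MT-safe services */
};

/* Run one iteration of *srv* on the calling lcore.
 * Non-MT-safe services take the atomic flag first, so a service core
 * and an application lcore can never run the callback in parallel.
 * Returns 0 on success, -1 if another lcore currently holds the
 * service (caller may retry or skip this iteration). */
static int service_iterate(struct service *srv)
{
	if (!srv->multithread_safe) {
		if (atomic_flag_test_and_set(&srv->execute_lock))
			return -1; /* busy: another lcore is running it */
		srv->cb(srv->cb_userdata);
		atomic_flag_clear(&srv->execute_lock);
		return 0;
	}
	srv->cb(srv->cb_userdata); /* MT-safe: no serialization needed */
	return 0;
}
```

A real version would also check that the service is start()-ed and feed the
service-core statistics; the design point above is that the atomic is touched
only on the non-MT-safe path, so a single core dedicated to a single service
pays no extra cost, as noted earlier in the thread.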

> 
> 
> Here a much simpler API to run a service... as a counter-proposal :)
> 
> /** Runs one iteration of *service* on the calling lcore */
> int rte_service_iterate(struct rte_service_spec *service);
> 
> 
> The iterate() function can check that the service is start()-ed, check the number of mapped-lcores and utilize the atomic to prevent concurrent access to multi-thread unsafe services. By exposing the function-pointer/userdata directly, we lose that.
> 
> Thinking about it, a function like rte_service_iterate() is the only functionally correct approach. (Exposing the callback directly brings us back to the "application thread without atomic check" problem.)
> 
> Thoughts?
> 
> 
> > > also support the removal of a dedicated schedule call from the eventdev
> > > API, or alternatively, if it is needed by other PMDs, leave it as a
> > > no-op in the sw PMD in favour of the service-cores managed function.
> > 
> > I would be in favor of removing eventdev schedule and
> > RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED capability so that it is completely
> > transparent to application whether scheduler runs on HW or SW or "combination
> > of both"
> 
> 
> Yep this bit sounds good!

Thread overview: 19+ messages
2017-06-29 14:36 Van Haaren, Harry
2017-06-29 15:16 ` Thomas Monjalon
2017-06-29 16:35   ` Van Haaren, Harry
2017-06-29 20:18     ` Thomas Monjalon
2017-06-30  8:52       ` Van Haaren, Harry
2017-06-30  9:29         ` Thomas Monjalon
2017-06-30 10:18           ` Van Haaren, Harry
2017-06-30 10:38             ` Thomas Monjalon
2017-06-30 11:14               ` Van Haaren, Harry
2017-06-30 13:04                 ` Jerin Jacob
2017-06-30 13:16                   ` Van Haaren, Harry
2017-06-29 15:57 ` Bruce Richardson
2017-06-30  4:45   ` Jerin Jacob
2017-06-30 10:00     ` Van Haaren, Harry
2017-06-30 12:51       ` Jerin Jacob [this message]
2017-06-30 13:08         ` Van Haaren, Harry
2017-06-30 13:20           ` Jerin Jacob
2017-06-30 13:24             ` Van Haaren, Harry
2017-06-30 13:51               ` Thomas Monjalon
