DPDK patches and discussions
From: Thomas Monjalon <thomas@monjalon.net>
To: "Van Haaren, Harry" <harry.van.haaren@intel.com>
Cc: Jerin Jacob <jerin.jacob@caviumnetworks.com>,
	"Richardson, Bruce" <bruce.richardson@intel.com>,
	dev@dpdk.org, "Wiles, Keith" <keith.wiles@intel.com>
Subject: Re: [dpdk-dev] Service lcores and Application lcores
Date: Fri, 30 Jun 2017 15:51:25 +0200	[thread overview]
Message-ID: <13779354.Liqf8ceSdn@xps> (raw)
In-Reply-To: <E923DB57A917B54B9182A2E928D00FA640C34846@IRSMSX102.ger.corp.intel.com>

30/06/2017 15:24, Van Haaren, Harry:
> From: Jerin Jacob [mailto:jerin.jacob@caviumnetworks.com]
> > From: "Van Haaren, Harry" <harry.van.haaren@intel.com>
> > > From: Jerin Jacob [mailto:jerin.jacob@caviumnetworks.com]
> > > > From: "Van Haaren, Harry" <harry.van.haaren@intel.com>
> > > <snip previous non-related items>
> > >
> > > > > I don't think providing a remote-launch API is actually
> > > > > beneficial. Remote-launching a single service is equivalent
> > > > > to adding that lcore as a service core and mapping it to just
> > > > > that single service. The advantage of adding it as a service
> > > > > core is future-proofing for when more services need to be
> > > > > added to that core, plus the statistics of the service-core
> > > > > infrastructure. A convenience API could be provided to perform
> > > > > the core_add(), service_start(), enable_on_service() and
> > > > > core_start() APIs in one.
> > > > >
> > > > > Also, the remote_launch API doesn't solve the original
> > > > > problem: what if an application lcore wishes to run one
> > > > > iteration of a service "manually"? A remote_launch-style API
> > > > > does not solve this problem.
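
For illustration, a minimal sketch of such a combined convenience call,
assuming the service-core API names proposed in this patchset
(rte_service_lcore_add(), rte_service_enable_on_lcore(),
rte_service_start(), rte_service_lcore_start()); the wrapper name is
hypothetical and error handling is simplified:

    #include <rte_service.h>

    /* Hypothetical convenience wrapper: dedicate one lcore to a single
     * service in one call, combining the four steps named above. */
    static int
    service_run_on_dedicated_lcore(struct rte_service_spec *service,
                                   uint32_t lcore)
    {
            int ret;

            ret = rte_service_lcore_add(lcore);        /* core_add() */
            if (ret < 0)
                    return ret;
            ret = rte_service_enable_on_lcore(service, lcore);
            if (ret < 0)
                    return ret;
            ret = rte_service_start(service);          /* service_start() */
            if (ret < 0)
                    return ret;
            return rte_service_lcore_start(lcore);     /* core_start() */
    }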
> > > >
> > > > Agree with the problem statement. But remote_launch() operates
> > > > on lcores, which are not necessarily 1:1 mapped to physical
> > > > cores.
> > > >
> > > > By introducing "rte_service_iterate", we are creating a parallel
> > > > infrastructure to run services on non-service lcores, aka normal
> > > > lcores. Is this really required? Is there any real advantage for
> > > > an application in not using the built-in service lcore
> > > > infrastructure, rather than calling "rte_service_iterate" on
> > > > normal lcores? If we really want to mux a physical core into N
> > > > lcores, EAL already provides that in the form of threads.
> > > >
> > > > I think providing too many parallel options for the same use
> > > > case may be overkill.
> > > >
> > > > Just my 2c.
> > >
> > >
> > > The use-case that rte_service_iterate() caters for is one where
> > > the application wishes to run a service on an "ordinary app
> > > lcore", together with an application workload.
> > >
> > > For example, the eventdev scheduler and one worker can run on the
> > > same lcore. If the thread running schedule() *must* be a service
> > > lcore, we could not also use that lcore as an application worker
> > > core.
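
For context, the proposed usage on an ordinary app lcore would have
looked roughly like the sketch below; rte_service_iterate() was dropped
in v3, so its exact signature is an assumption, and done/do_work() are
placeholders for application state and logic:

    /* Hypothetical worker loop: run one scheduler iteration "manually",
     * then do regular application work on the same lcore. */
    static volatile int done;

    static int
    worker_loop(void *arg)
    {
            struct rte_service_spec *sched = arg; /* eventdev scheduler */

            while (!done) {
                    rte_service_iterate(sched); /* one service iteration */
                    do_work();                  /* application workload */
            }
            return 0;
    }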
> > >
> > > That was my motivation for adding this API. I do agree with you
> > > above that it is a second "parallel" method to run a service, but
> > > I think there's enough value in enabling the use-case above to
> > > justify adding it.
> > >
> > >
> > > Do you see enough value in the use-case above to add the API?
> > 
> > The above use case can be realized like --lcores='(0-1)@1' (two
> > lcores on one physical core). I believe application writers never
> > want to write code based on the specific number of cores available
> > in the system. If they do, they will be stuck when running in
> > another environment, with too many combinations to address.
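
For reference, such a mapping could be requested at EAL init: two
lcores muxed onto physical core 1, with lcore 1 handed to the
service-core infrastructure via the service coremask (the -s flag
introduced by this patchset, assuming it merges as proposed):

    ./app --lcores='(0-1)@1' -s 0x2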
> 
> Good point.
> 
> > For me it complicates service lcore usage. But if someone thinks it
> > will be useful, then I don't have a strong objection.
> 
> We can easily add APIs later, and removing them isn't so easy. +1 from
> me to leave it out for now; we can see about adding it for 17.11 if
> the need arises.
> 
> Thanks for your input, I'll spin a v3 without the rte_service_iterate() function, and that should be it then!

I agree to leave it and keep things simple.


Thread overview: 19+ messages
2017-06-29 14:36 Van Haaren, Harry
2017-06-29 15:16 ` Thomas Monjalon
2017-06-29 16:35   ` Van Haaren, Harry
2017-06-29 20:18     ` Thomas Monjalon
2017-06-30  8:52       ` Van Haaren, Harry
2017-06-30  9:29         ` Thomas Monjalon
2017-06-30 10:18           ` Van Haaren, Harry
2017-06-30 10:38             ` Thomas Monjalon
2017-06-30 11:14               ` Van Haaren, Harry
2017-06-30 13:04                 ` Jerin Jacob
2017-06-30 13:16                   ` Van Haaren, Harry
2017-06-29 15:57 ` Bruce Richardson
2017-06-30  4:45   ` Jerin Jacob
2017-06-30 10:00     ` Van Haaren, Harry
2017-06-30 12:51       ` Jerin Jacob
2017-06-30 13:08         ` Van Haaren, Harry
2017-06-30 13:20           ` Jerin Jacob
2017-06-30 13:24             ` Van Haaren, Harry
2017-06-30 13:51               ` Thomas Monjalon [this message]
