DPDK patches and discussions
From: Jerin Jacob <jerin.jacob@caviumnetworks.com>
To: "Eads, Gage" <gage.eads@intel.com>
Cc: "dev@dpdk.org" <dev@dpdk.org>,
	"Richardson, Bruce" <bruce.richardson@intel.com>,
	"Van Haaren, Harry" <harry.van.haaren@intel.com>,
	"hemant.agrawal@nxp.com" <hemant.agrawal@nxp.com>,
	"nipun.gupta@nxp.com" <nipun.gupta@nxp.com>,
	"Vangati, Narender" <narender.vangati@intel.com>
Subject: Re: [dpdk-dev] [RFC] [PATCH] eventdev: abstract ethdev HW capability to inject packets to eventdev
Date: Tue, 2 May 2017 21:30:43 +0530	[thread overview]
Message-ID: <20170502155920.GA2664@jerin> (raw)
In-Reply-To: <9184057F7FC11744A2107296B6B8EB1E01EA280D@FMSMSX108.amr.corp.intel.com>

-----Original Message-----
> Date: Fri, 21 Apr 2017 22:31:52 +0000
> From: "Eads, Gage" <gage.eads@intel.com>
> To: Jerin Jacob <jerin.jacob@caviumnetworks.com>, "dev@dpdk.org"
>  <dev@dpdk.org>
> CC: "Richardson, Bruce" <bruce.richardson@intel.com>, "Van Haaren, Harry"
>  <harry.van.haaren@intel.com>, "hemant.agrawal@nxp.com"
>  <hemant.agrawal@nxp.com>, "nipun.gupta@nxp.com" <nipun.gupta@nxp.com>,
>  "Vangati, Narender" <narender.vangati@intel.com>
> Subject: RE: [RFC] [dpdk-dev] [PATCH] eventdev: abstract ethdev HW
>  capability to inject packets to eventdev
> 
> Hi Jerin,

Hi Gage,

> 
> Thanks for getting this ball rolling, and I agree that we need a solution that covers the three cases you described.

OK. Half the problem is solved if we agree on the problem statement :-)

> We've also been thinking about an environment where devices (NIC Rx (or even Tx), crypto, or a timer "device" that uses librte_timer to inject events) can plug in eventdev -- whether through a direct connection to the event scheduler (case #3) or using software to bridge the gap -- such that application software can have a consistent view of device interfacing on different platforms.

Makes sense. Yes, the NPUs can produce events from NIC Rx, NIC Tx, crypto,
and timer device sources without SW service functions.

> 
> Some initial thoughts on your proposal:
> 
> 1. I imagine that deploying these service functions at the granularity of a core can be excessive on devices with few (<= 8) cores. For example, if the crypto traffic rate is low then a cryptodev service function could be co-scheduled with other service functions and/or application work. I think we'll need a more flexible deployment of these service functions.

I agree.

> 
> 2. Knowing which device type a service function is for would be useful -- without it, it's not possible to assign the function to the NUMA node on which the device is located.

I guess we can use rte_eth_dev_socket_id() on the requested port to get the
NUMA node id.

> 
> 3. Placing the service core logic in the PMDs is nice in terms of application ease-of-use, but it forces PMD to write one-size-fits-all service core functions, where, for example, the application's control of the NIC Rx functionality is limited to the options that struct rte_event_queue_producer_conf exports. An application may want customized service core behavior such as: prioritized polling of Rx queues, using Rx queue interrupts for low traffic rate queues, or (for "closed system" eventdevs) control over whether/when a service core drops events (and a way to notify applications of event drops). For such cases, I think the appropriate solution is allow applications to plug in their own service core functions (when hardware support isn't present).

I agree. I think we can have reusable producer code as static inline
functions in librte_eventdev with multiple event-producing strategies, and
let the application call the respective one if HW support is not present
or is not adequate.

I will work towards this theme in RFC v2.
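To make the idea concrete, here is a minimal sketch of what such selectable
producer strategies could look like. All names here (struct fake_event,
produce_poll_all, produce_prioritized) are hypothetical illustrations, not
part of any DPDK API; the application would pick a strategy when HW event
production is absent:

```c
/* Hypothetical sketch of pluggable event-producing strategies.
 * None of these names exist in DPDK; they only illustrate the shape. */
#include <stddef.h>
#include <stdint.h>

struct fake_event {
	uint32_t flow_id;
};

/* A producing strategy polls some source and emits up to `max` events. */
typedef size_t (*produce_fn)(struct fake_event *out, size_t max);

/* Strategy 1: drain every source round-robin, filling the whole burst. */
static inline size_t
produce_poll_all(struct fake_event *out, size_t max)
{
	for (size_t i = 0; i < max; i++)
		out[i].flow_id = (uint32_t)i;
	return max;
}

/* Strategy 2: service only the high-priority source, emitting a
 * smaller burst so low-priority traffic cannot starve it. */
static inline size_t
produce_prioritized(struct fake_event *out, size_t max)
{
	size_t n = max / 2;

	for (size_t i = 0; i < n; i++)
		out[i].flow_id = 0; /* all from the priority flow */
	return n;
}
```

An application could then hold a produce_fn pointer chosen at setup time,
so the hot loop is identical whether events come from HW or from one of
these SW strategies.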

> 
> Some of these thoughts are reflected in the eventdev_pipeline app[1] that Harry submitted earlier today, like flexible service function deployment. In that app, the user supplies a device coremask that can pin a service function to a core, multiplex multiple functions on the core, or even affinitize the service function to multiple cores (using cmpset-based exclusion to ensure it's executed by one lcore at a time).

Thanks for the sample application. I could make it work with NIC + HW
eventdev with some tweaking. I will send review comments on that email
thread.
One thing I noticed with the cmpset-based scheme is that, at any given
point in time, it can produce at most the number of events one lcore can
support. That may not be well suited for low-end cores. I think we need
multiple event producer strategies as common code.
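For reference, the cmpset-based exclusion being discussed can be sketched
with C11 atomics as below. This is an illustrative model, not the
eventdev_pipeline code itself; svc_try_run() and svc_owner are invented
names. It also shows the limitation above: whichever lcore wins the
compare-and-set does all the producing, so throughput is capped at one
lcore's rate:

```c
/* Illustrative model of cmpset-based service exclusion: a service
 * function may be affinitized to several lcores, but a compare-and-set
 * on an owner word ensures only one lcore executes it at a time. */
#include <stdatomic.h>
#include <stdbool.h>

static atomic_int svc_owner = -1; /* -1 == no lcore owns the service */

static bool
svc_try_run(int lcore_id)
{
	int expected = -1;

	/* Try to claim the service; fail fast if another lcore holds it. */
	if (!atomic_compare_exchange_strong(&svc_owner, &expected, lcore_id))
		return false;

	/* ... produce a burst of events here, on this lcore only ... */

	atomic_store(&svc_owner, -1); /* release for the next lcore */
	return true;
}
```

Losing lcores simply move on to other work, which is the multiplexing
behaviour described above, but the winning lcore is the sole producer for
that service while it holds the owner word.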


> In thinking about this, Narender and I have envisioned something like a framework for eventdev applications in which these service functions can be registered and (in a similar manner to eventdev_pipeline's service functions) executed.

That will be useful. I think it will not be restricted to just eventdev
applications; I guess the new traffic manager's SW implementation or any
future offloads will also need a framework for service function
registration and invocation.
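A minimal sketch of what such a registration framework could look like
(service_register, service_run_iter, and the fixed-size table are all
hypothetical, assumed here only to illustrate the idea): services such as
NIC Rx bridging, crypto completion polling, or a SW traffic manager
register a callback, and the framework invokes them from whichever cores
the user assigned.

```c
/* Hypothetical service registration framework; no DPDK API implied. */
#include <stddef.h>

#define MAX_SERVICES 8

typedef void (*service_fn)(void *args);

struct service {
	service_fn fn;
	void *args;
};

static struct service services[MAX_SERVICES];
static size_t nb_services;

/* Register a service callback; returns its id, or -1 if the table is full. */
static int
service_register(service_fn fn, void *args)
{
	if (nb_services >= MAX_SERVICES)
		return -1;
	services[nb_services].fn = fn;
	services[nb_services].args = args;
	return (int)nb_services++;
}

/* One iteration of a service core's loop: multiplex every registered
 * service on this core. A real framework would honour a per-service
 * coremask instead of running all of them everywhere. */
static void
service_run_iter(void)
{
	for (size_t i = 0; i < nb_services; i++)
		services[i].fn(services[i].args);
}
```

This is essentially the flexible-deployment point from earlier in the
thread: whether a service runs alone on a core or is multiplexed with
others becomes a deployment decision, not something baked into each PMD.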


> 
> Thanks,
> Gage
> 
> [1] http://dpdk.org/ml/archives/dev/2017-April/064511.html


Thread overview: 4+ messages
2017-04-18 13:23 Jerin Jacob
2017-04-21 22:31 ` Eads, Gage
2017-05-02 16:00   ` Jerin Jacob [this message]
2017-05-02 16:24     ` Van Haaren, Harry
