DPDK patches and discussions
From: Jerin Jacob <jerin.jacob@caviumnetworks.com>
To: Hemant Agrawal <hemant.agrawal@nxp.com>
Cc: "Vangati, Narender" <narender.vangati@intel.com>,
	"dev@dpdk.org" <dev@dpdk.org>
Subject: Re: [dpdk-dev] [RFC] libeventdev: event driven programming model framework for DPDK
Date: Sun, 9 Oct 2016 13:57:05 +0530
Message-ID: <20161009082703.GA7752@localhost.localdomain>
In-Reply-To: <DB5PR04MB1605A2C3BCE440C50C1D647D89C60@DB5PR04MB1605.eurprd04.prod.outlook.com>

On Fri, Oct 07, 2016 at 10:40:03AM +0000, Hemant Agrawal wrote:
> Hi Jerin/Narender,

Hi Hemant,

Thanks for the review.

> 
> 	Thanks for the proposal and discussions. 

> 
> 	I agree with many of the comments made by Narender. Here are some additional comments.
> 
> 1. rte_event_schedule - should support an option for bulk dequeue. The bulk size should be a property of the device, i.e. how much depth it can support.

OK. Will fix it in v2.
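
For example, v2 could expose something along these lines (a sketch
only; the name and signature are not final):

/* Hypothetical prototype: dequeue up to nb_events in one call and
 * return the number actually dequeued (0..nb_events). The maximum
 * supported burst depth would be reported as a device property so
 * the application can size ev[] accordingly. */
uint16_t
rte_event_dequeue_burst(uint8_t dev_id, uint8_t port_id,
			struct rte_event ev[], uint16_t nb_events);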

> 
> 2. The event schedule call should also support an option to specify the amount of time it can wait. The implementation may support only a global setting (dequeue_wait_ns) for the wait time, and may treat any non-zero wait value as a request to wait.

OK. Will fix it in v2.
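
One possible shape, assuming the wait is a single device-level
setting (struct and field names are illustrative):

/* Hypothetical sketch: a global dequeue wait configured once for the
 * whole device. An implementation that supports only this global
 * setting can treat any non-zero per-call wait value as "use
 * dequeue_wait_ns". */
struct rte_event_dev_config {
	uint32_t dequeue_wait_ns; /* 0 = return immediately */
	/* ... other device-level parameters ... */
};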

> 
> 3. rte_event_schedule_from_group - there should be one model. Push and Pull may not work well together; at the least, a simultaneous mixed configuration will not work on the NXP hardware scheduler.

OK. Will remove the Cavium-specific "rte_event_schedule_from_group" API in v2.

> 
> 4. Priority of queues within the scheduling group? - Please keep in mind that some hardware supports intra-scheduler priority while some supports only intra-flow_queue priority within a scheduler instance. Events with the same flow id should have the same priority.

Will try to address this in v2 with a capability-based solution.
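
As a sketch of what capability-based handling could look like (flag
and struct names are hypothetical):

/* An application queries the capability bitmap and picks the
 * priority model the device actually supports. */
#define RTE_EVENT_DEV_CAP_QUEUE_PRIORITY (1ULL << 0) /* intra-scheduler  */
#define RTE_EVENT_DEV_CAP_FLOW_PRIORITY  (1ULL << 1) /* intra-flow_queue */

struct rte_event_dev_info {
	uint64_t event_dev_cap; /* bitmap of RTE_EVENT_DEV_CAP_* flags */
	/* ... */
};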

> 
> 5. W.r.t. flow_queue numbers in log2: I would prefer an absolute number. Not all systems have a large number of queues, so the design should take into account systems with a smaller number of queues.

OK. Will fix it in v2.
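
i.e. a plain count rather than a log2 exponent, along the lines of
(struct and field names are hypothetical):

struct rte_event_dev_config {
	uint32_t nb_flow_queues; /* absolute count, not a log2 value */
	/* ... */
};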

> 
> Regards,
> Hemant
> 
> > -----Original Message-----
> > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Jerin Jacob
> > Sent: Wednesday, October 05, 2016 12:55 PM
> > On Tue, Oct 04, 2016 at 09:49:52PM +0000, Vangati, Narender wrote:
> > > Hi Jerin,
> > 
> > Hi Narender,
> > 
> > Thanks for the comments. I agree with the proposed changes; I will
> > address these comments in v2.
> > 
> > /Jerin
> > 
> > >
> > > Here are some comments on the libeventdev RFC.
> > >
> > > These are collated thoughts after discussions with you & others to
> > > understand the concepts and rationale for the current proposal.
> > >
> > > 1. Concept of flow queues. This is better abstracted as flow ids,
> > > not as flow queues, which would imply a queueing structure per flow.
> > > A s/w implementation can do atomic load balancing on multiple flow
> > > ids more efficiently than maintaining each event in a specific flow
> > > queue.
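
For illustration, with plain flow ids an s/w scheduler can load
balance atomically by hashing the id, with no per-flow queueing
structure allocated (sketch only; the helper name is hypothetical):

/* The same flow id always maps to the same worker, which is what
 * gives atomic (per-flow exclusive) scheduling its semantics. */
static inline uint32_t
pick_worker(uint32_t flow_id, uint32_t nb_workers)
{
	return flow_id % nb_workers;
}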
> > >
> > > 2. Scheduling group. A scheduling group is more a stream of
> > > events, so an event queue might be a better abstraction.
> > >
> > > 3. An event queue should support the concept of max active atomic
> > > flows (the maximum number of active flows this queue can track at
> > > any given time) and max active ordered sequences (the maximum number
> > > of outstanding events waiting to be egress-reordered by this queue).
> > > This allows a scheduler implementation to dimension/partition its
> > > resources among event queues.
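
A sketch of how such per-queue dimensioning knobs could look (struct
and field names are hypothetical):

struct rte_event_queue_conf {
	uint32_t nb_atomic_flows;           /* max flows tracked at once   */
	uint32_t nb_atomic_order_sequences; /* max events awaiting reorder */
	/* ... */
};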
> > >
> > > 4. An event queue should support the concept of a single consumer.
> > > In an application, a stream of events may need to be brought
> > > together to a single core for some stages of processing, e.g. for TX
> > > at the end of the pipeline to avoid NIC reordering of the packets.
> > > Having a 'single consumer' event queue for that stage allows the
> > > intensive scheduling logic to be short-circuited and can improve
> > > throughput for s/w implementations.
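
This could be expressed as a queue configuration flag, e.g. (name
hypothetical):

/* Only one port may be linked to a queue created with this flag; the
 * implementation can then bypass the scheduling logic for that
 * stage, e.g. TX at the end of the pipeline. */
#define RTE_EVENT_QUEUE_CFG_SINGLE_CONSUMER (1U << 0)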
> > >
> > > 5. Instead of tying eventdev access to an lcore, a higher level of
> > > abstraction called an event port is needed; it is the application
> > > i/f to the eventdev. Event ports are connected to event queues and
> > > are the objects the application uses to dequeue and enqueue events.
> > > There can be more than one event port per lcore, allowing multiple
> > > lightweight threads to have their own i/f into the eventdev, if the
> > > implementation supports it. An event port abstraction also
> > > encapsulates dequeue depth and enqueue depth for scheduler
> > > implementations which can schedule multiple events at a time and
> > > buffer output events.
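
A sketch of the port abstraction (names and fields are illustrative):

struct rte_event_port_conf {
	uint16_t dequeue_depth; /* max events returned by one dequeue */
	uint16_t enqueue_depth; /* max events buffered by one enqueue */
};

/* Ports are set up independently of lcores; an implementation that
 * supports it can give several lightweight threads on one lcore a
 * port each. */
int
rte_event_port_setup(uint8_t dev_id, uint8_t port_id,
		     const struct rte_event_port_conf *conf);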
> > >
> > > 6. An event should support priority. Per-event priority is useful
> > > for segregating high-priority traffic (control messages) from
> > > low-priority traffic within the same flow. This needs to be part of
> > > the event definition for implementations which support it.
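
i.e. priority carried in the event itself and honoured by
implementations that advertise the capability (sketch only):

struct rte_event {
	uint32_t flow_id;  /* see point 1 above */
	uint8_t  priority; /* e.g. 0 = highest; encoding illustrative */
	/* ... event type, scheduling hints, payload pointer ... */
};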
> > >
> > > 7. Event port to event queue servicing priority. This allows two
> > > event ports to connect to the same event queue with different
> > > priorities. For implementations which support it, this allows a
> > > worker core to participate in two different workflows with
> > > different priorities (workflow 1 needing 3.5 cores, workflow 2
> > > needing 2.5 cores, and so on).
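
A sketch of a link call carrying a servicing priority per
(port, queue) pair (signature hypothetical):

/* The same queue may be linked to two ports with different servicing
 * priorities, letting a worker split its cycles unevenly between two
 * workflows. */
int
rte_event_port_link(uint8_t dev_id, uint8_t port_id,
		    uint8_t queue_id, uint8_t priority);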
> > >
> > > 8. Define the workflow as schedule/dequeue/enqueue. An
> > > implementation is free to define schedule as a NOOP. A distributed
> > > s/w scheduler can use this to schedule events; likewise, a
> > > centralized s/w scheduler can make it a NOOP on non-scheduler
> > > cores.
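
A sketch of the resulting worker loop (names are illustrative and
error handling is omitted):

#define BURST_SIZE 32 /* illustrative burst depth */

uint16_t i, n;
struct rte_event evs[BURST_SIZE];

while (!done) { /* done: app-defined exit flag */
	/* NOOP on h/w schedulers, and on non-scheduler cores of a
	 * centralized s/w scheduler. */
	rte_event_schedule(dev_id);
	n = rte_event_dequeue_burst(dev_id, port_id, evs, BURST_SIZE);
	for (i = 0; i < n; i++)
		process_event(&evs[i]); /* app-defined stage */
	rte_event_enqueue_burst(dev_id, port_id, evs, n);
}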
> > >
> > > 9. The schedule_from_group API does not fit the workflow.
> > >
> > > 10. The ctxt_update/ctxt_wait breaks the normal workflow. If the
> > > normal workflow is dequeue -> do work based on event type ->
> > > enqueue, a pin_event argument to enqueue (where the pinned event is
> > > returned through the normal dequeue) allows the application workflow
> > > to remain the same whether or not an implementation supports
> > > pinning.
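
i.e. something like (signature hypothetical):

/* If pin_event is non-NULL and the implementation supports pinning,
 * the pinned event is handed back through the normal dequeue on this
 * port; otherwise it is scheduled as usual. The application's
 * dequeue -> work -> enqueue loop is identical in both cases. */
int
rte_event_enqueue(uint8_t dev_id, uint8_t port_id,
		  const struct rte_event *ev,
		  const struct rte_event *pin_event);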
> > >
> > > 11. Burst dequeue/enqueue APIs are needed.
> > >
> > > 12. Definition of a closed/open system - an open system is memory
> > > backed, while a closed-system eventdev has limited capacity. In such
> > > systems it is also useful to denote, per event port, how many
> > > packets can be active in the system. This can serve as a threshold
> > > for ethdev-like devices so they don't overwhelm core-to-core
> > > events.
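
A sketch of such a per-port threshold (field name hypothetical):

struct rte_event_port_conf {
	uint32_t new_event_threshold; /* stop injecting new events into a
				       * closed system beyond this count */
	/* ... dequeue_depth, enqueue_depth ... */
};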
> > >
> > > 13. There should be some sort of device capabilities definition to
> > > address the different implementations.
> > >
> > > vnr
> > > ---
> > >


Thread overview: 43+ messages
2016-10-04 21:49 Vangati, Narender
2016-10-05  7:24 ` Jerin Jacob
2016-10-07 10:40   ` Hemant Agrawal
2016-10-09  8:27     ` Jerin Jacob [this message]
2016-10-11 19:30   ` [dpdk-dev] [RFC] [PATCH v2] " Jerin Jacob
2016-10-14  4:14     ` Bill Fischofer
2016-10-14  9:26       ` Jerin Jacob
2016-10-14 10:30         ` Hemant Agrawal
2016-10-14 12:52           ` Jerin Jacob
2016-10-14 15:00     ` Eads, Gage
2016-10-17  4:18       ` Jerin Jacob
2016-10-17 20:26         ` Eads, Gage
2016-10-18 11:19           ` Jerin Jacob
2016-10-14 16:02     ` Bruce Richardson
2016-10-17  5:10       ` Jerin Jacob
2016-10-25 17:49     ` Jerin Jacob
2016-10-26 12:11       ` Van Haaren, Harry
2016-10-26 12:24         ` Jerin Jacob
2016-10-26 12:54           ` Bruce Richardson
2016-10-28  3:01             ` Jerin Jacob
2016-10-28  8:36               ` Bruce Richardson
2016-10-28  9:06                 ` Jerin Jacob
2016-11-02 11:25                   ` Jerin Jacob
2016-11-02 11:35                     ` Bruce Richardson
2016-11-02 13:09                       ` Jerin Jacob
2016-11-02 13:56                         ` Bruce Richardson
2016-11-02 14:54                           ` Jerin Jacob
2016-10-26 18:37         ` Vincent Jardin
2016-10-28 13:10           ` Van Haaren, Harry
2016-11-02 10:47         ` Jerin Jacob
2016-11-02 11:45           ` Bruce Richardson
2016-11-02 12:34             ` Jerin Jacob
2016-10-26 12:43       ` Bruce Richardson
2016-10-26 17:30         ` Jerin Jacob
2016-10-28 13:48       ` Van Haaren, Harry
2016-10-28 14:16         ` Bruce Richardson
2016-11-02  8:59           ` Jerin Jacob
2016-11-02  8:06         ` Jerin Jacob
2016-11-02 11:48           ` Bruce Richardson
2016-11-02 12:57             ` Jerin Jacob
  -- strict thread matches above, loose matches on Subject: below --
2016-08-09  1:01 [dpdk-dev] [RFC] " Jerin Jacob
2016-08-09  8:48 ` Bruce Richardson
2016-08-09 18:46   ` Jerin Jacob
