DPDK patches and discussions
From: Jerin Jacob <jerin.jacob@caviumnetworks.com>
To: "Van Haaren, Harry" <harry.van.haaren@intel.com>
Cc: "Vangati, Narender" <narender.vangati@intel.com>,
	"dev@dpdk.org" <dev@dpdk.org>, "Eads, Gage" <gage.eads@intel.com>
Subject: Re: [dpdk-dev] [RFC] [PATCH v2] libeventdev: event driven programming model framework for DPDK
Date: Wed, 2 Nov 2016 16:17:04 +0530
Message-ID: <20161102104702.GA30658@localhost.localdomain>
In-Reply-To: <E923DB57A917B54B9182A2E928D00FA6129AD56F@IRSMSX102.ger.corp.intel.com>

On Wed, Oct 26, 2016 at 12:11:03PM +0000, Van Haaren, Harry wrote:
> > -----Original Message-----
> > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Jerin Jacob
> > 
> > So far, I have received constructive feedback from Intel, NXP and Linaro folks.
> > Let me know if anyone else is interested in contributing to the definition of eventdev.
> > 
> > If there are no major issues in the proposed spec, then Cavium would like to work on
> > implementing and upstreaming the common code (lib/librte_eventdev/) and
> > an associated HW driver. (The requested minor changes of v2 will be addressed
> > in the next version.)
>

Hi All,

Two queries:

1) In the SW implementation, is there any connection between "struct
rte_event_port_conf"'s dequeue_queue_depth and enqueue_queue_depth?
i.e. it should be enqueue_queue_depth >= dequeue_queue_depth, right?
I thought of adding such common checks in the common layer, as
sketched below.
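
A minimal sketch of such a check, assuming the v2 RFC field names
(dequeue_queue_depth and enqueue_queue_depth in struct
rte_event_port_conf) and a hypothetical common-layer helper:

#include <errno.h>

/* Hypothetical common-layer validation; it only holds if the SW
 * implementation confirms the relation questioned above. */
static int
event_port_conf_check(const struct rte_event_port_conf *conf)
{
	/* If the enqueue depth were smaller than the dequeue depth,
	 * a port could dequeue a burst of events that it is not
	 * permitted to enqueue back after forwarding them. */
	if (conf->enqueue_queue_depth < conf->dequeue_queue_depth)
		return -EINVAL;

	return 0;
}

Such a check could live once in the common rte_event_port_setup()
path so that individual drivers need not repeat it.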

2) Any comments on the following item (the section between the dashed
lines), which needs improvement?
-------------------------------------------------------------------------------
Abstract the differences in event QoS management across the different
priority schemes available in various HW or SW implementations, while
keeping the application workflow portable.

Based on the feedback, there are three different kinds of QoS support
available across the HW or SW implementations:
1) Priority associated with the event queue
2) Priority associated with each event enqueue
(the same flow can have two different priorities on two separate enqueues)
3) Priority associated with the flow (each flow has a unique priority)

In v2, the differences are abstracted based on device capability
(RTE_EVENT_DEV_CAP_QUEUE_QOS for the first scheme,
RTE_EVENT_DEV_CAP_EVENT_QOS for the second and third schemes).
This scheme would call for different application workflows in
nontrivial QoS-enabled applications, as the sketch after this section
illustrates.
-------------------------------------------------------------------------------
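
To illustrate the workflow divergence, a rough sketch (the capability
flags follow v2; dev_info, queue_conf, ev, highest_prio and the
setup/enqueue call shapes are assumptions for illustration):

	if (dev_info.event_dev_cap & RTE_EVENT_DEV_CAP_QUEUE_QOS) {
		/* Scheme 1: priority is a queue attribute, set once
		 * at configuration time; the fast path is untouched. */
		queue_conf.priority = highest_prio;
		rte_event_queue_setup(dev_id, queue_id, &queue_conf);
	} else if (dev_info.event_dev_cap & RTE_EVENT_DEV_CAP_EVENT_QOS) {
		/* Schemes 2/3: priority must instead travel with
		 * every event in the fast path. */
		ev.priority = highest_prio;
	}

i.e. the application's enqueue path itself changes with the
capability, which is the portability concern above.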
After thinking a while, I think RTE_EVENT_DEV_CAP_EVENT_QOS is a
super-set. If so, the subset RTE_EVENT_DEV_CAP_QUEUE_QOS can be
implemented with RTE_EVENT_DEV_CAP_EVENT_QOS, i.e. we may not need
two flags; the single flag RTE_EVENT_DEV_CAP_EVENT_QOS is enough to
fix the portability issue for basic QoS-enabled applications.

i.e. introduce RTE_EVENT_DEV_CAP_EVENT_QOS as a config option at the
device configure stage if the application needs fine-grained QoS per
event enqueue. For trivial applications, the configured
rte_event_queue_conf->priority can be used as the per-event priority
(struct rte_event.priority) at rte_event_enqueue() time, as sketched
below.
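
A rough sketch of that trivial-application case (names follow the v2
RFC where possible; the enqueue call shape and the queue_conf,
queue_id and flow_id variables are assumptions):

	struct rte_event ev = {
		.queue_id = queue_id,
		.flow_id  = flow_id,
		/* Emulate the QUEUE_QOS scheme on an EVENT_QOS-only
		 * device: stamp every event on this queue with the
		 * priority given in rte_event_queue_conf at
		 * configure time. */
		.priority = queue_conf.priority,
	};

	rte_event_enqueue(dev_id, port_id, &ev);

With that, one capability flag covers both the per-queue and the
per-event priority schemes.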

Thoughts?

/Jerin

Thread overview: 41+ messages
2016-10-04 21:49 [dpdk-dev] [RFC] " Vangati, Narender
2016-10-05  7:24 ` Jerin Jacob
2016-10-07 10:40   ` Hemant Agrawal
2016-10-09  8:27     ` Jerin Jacob
2016-10-11 19:30   ` [dpdk-dev] [RFC] [PATCH v2] " Jerin Jacob
2016-10-14  4:14     ` Bill Fischofer
2016-10-14  9:26       ` Jerin Jacob
2016-10-14 10:30         ` Hemant Agrawal
2016-10-14 12:52           ` Jerin Jacob
2016-10-14 15:00     ` Eads, Gage
2016-10-17  4:18       ` Jerin Jacob
2016-10-17 20:26         ` Eads, Gage
2016-10-18 11:19           ` Jerin Jacob
2016-10-14 16:02     ` Bruce Richardson
2016-10-17  5:10       ` Jerin Jacob
2016-10-25 17:49     ` Jerin Jacob
2016-10-26 12:11       ` Van Haaren, Harry
2016-10-26 12:24         ` Jerin Jacob
2016-10-26 12:54           ` Bruce Richardson
2016-10-28  3:01             ` Jerin Jacob
2016-10-28  8:36               ` Bruce Richardson
2016-10-28  9:06                 ` Jerin Jacob
2016-11-02 11:25                   ` Jerin Jacob
2016-11-02 11:35                     ` Bruce Richardson
2016-11-02 13:09                       ` Jerin Jacob
2016-11-02 13:56                         ` Bruce Richardson
2016-11-02 14:54                           ` Jerin Jacob
2016-10-26 18:37         ` Vincent Jardin
2016-10-28 13:10           ` Van Haaren, Harry
2016-11-02 10:47         ` Jerin Jacob [this message]
2016-11-02 11:45           ` Bruce Richardson
2016-11-02 12:34             ` Jerin Jacob
2016-10-26 12:43       ` Bruce Richardson
2016-10-26 17:30         ` Jerin Jacob
2016-10-28 13:48       ` Van Haaren, Harry
2016-10-28 14:16         ` Bruce Richardson
2016-11-02  8:59           ` Jerin Jacob
2016-11-02  8:06         ` Jerin Jacob
2016-11-02 11:48           ` Bruce Richardson
2016-11-02 12:57             ` Jerin Jacob
2016-10-14 15:00 Francois Ozog
