From: "Van Haaren, Harry" <harry.van.haaren@intel.com>
To: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Cc: "dev@dpdk.org" <dev@dpdk.org>,
	"Richardson, Bruce" <bruce.richardson@intel.com>
Subject: Re: [dpdk-dev] [PATCH v5 06/20] event/sw: add support for event queues
Date: Wed, 29 Mar 2017 08:28:12 +0000
Message-ID: <E923DB57A917B54B9182A2E928D00FA612A20E64@IRSMSX102.ger.corp.intel.com>
In-Reply-To: <20170328173610.3hi6wyqvdpx2lo7e@localhost.localdomain>

> From: Jerin Jacob [mailto:jerin.jacob@caviumnetworks.com]
> Sent: Tuesday, March 28, 2017 6:36 PM
> To: Van Haaren, Harry <harry.van.haaren@intel.com>
> Cc: dev@dpdk.org; Richardson, Bruce <bruce.richardson@intel.com>
> Subject: Re: [PATCH v5 06/20] event/sw: add support for event queues
> 

<snip IQ priority question>


> > > A few questions for everyone's benefit:
> > >
> > > 1) Does RTE_EVENT_QUEUE_CFG_SINGLE_LINK have any meaning other than an
> > > event queue linked to only a single port? Based on the discussions, it was
> > > added to the header file so that the SW PMD can know upfront that only a
> > > single port will be linked to the given event queue. It is added as an
> > > optimization for the SW PMD. Does it have any functional expectation?
> >
> > In the context of the SW PMD, SINGLE_LINK means that a specific queue and port have a unique
> > relationship in that there is only one connection. This allows bypassing of the Atomic,
> > Ordered and Load-Balancing code. The result is a good performance increase, particularly if
> > the worker port dequeue depth is large, as then large bursts of packets can be dequeued with
> > little overhead.
> >
> > As a result, (ATOMIC | SINGLE_LINK) is not a supported combination for the SW PMD queue
> > types.
> > To be more precise, a SINGLE_LINK is its own queue type, and cannot be OR-ed with any other
> > type.
> >
> >
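To make the queue-type point concrete, here is a rough sketch of the two setups
(illustrative only: dev_id is assumed to be an already-configured eventdev, and
error handling is omitted):

    struct rte_event_queue_conf conf = {
            .nb_atomic_flows = 1024,
            .nb_atomic_order_sequences = 1024,
            .priority = RTE_EVENT_DEV_PRIORITY_NORMAL,
    };

    /* qid0: a load-balanced atomic queue */
    conf.event_queue_cfg = RTE_EVENT_QUEUE_CFG_ATOMIC_ONLY;
    rte_event_queue_setup(dev_id, 0, &conf);

    /* qid1: SINGLE_LINK - its own type, not OR-ed with ATOMIC/ORDERED */
    conf.event_queue_cfg = RTE_EVENT_QUEUE_CFG_SINGLE_LINK;
    rte_event_queue_setup(dev_id, 1, &conf);
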
> > > 2) Based on the following topology given in the documentation patch for
> > > queue-based event pipelining,
> > >
> > >   rx_port    w1_port
> > > 	 \     /         \
> > > 	  qid0 - w2_port - qid1
> > > 	       \         /     \
> > > 		    w3_port        tx_port
> > >
> > > a) I understand rx_port is feeding events to qid0.
> > > b) But do you see any issue with the following model? IMO, it scales
> > > linearly with the number of cores available to work (since it is ATOMIC to
> > > ATOMIC). There is nothing wrong with qid1 connecting only to tx_port; I am
> > > just trying to understand the rationale behind it.
> > >
> > >   rx_port   w1_port         w1_port
> > > 	 \     /         \     /
> > > 	  qid0 - w2_port - qid1- w2_port
> > > 	       \         /     \
> > > 		   w3_port         w3_port
> >
> >
> > This is also a valid model with the SW eventdev.
> 
> OK. If I understand it correctly, in the above topology, even though you
> make qid1 ATOMIC, the SW PMD will not maintain ingress order when events come
> out of qid1 on different workers.


If qid0 is ORDERED and qid1 is ATOMIC, then the following happens:
- after qid0, the packets are sprayed across cores,
- they are returned out of order by the worker cores,
- *at the start* of qid1, packets are re-ordered back into ingress order (maintaining 100% of ordering),
- on dequeue from qid1, the atomic flow distribution keeps order per flow.
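
In worker-loop terms, a minimal sketch (worker_port is a placeholder, and
process() stands in for whatever work the application does):

    struct rte_event ev;

    /* the dequeue holds the ORDERED context from qid0 on this port */
    if (rte_event_dequeue_burst(dev_id, worker_port, &ev, 1, 0)) {
            process(ev.mbuf);            /* work may complete out of order */
            ev.queue_id = 1;             /* destination: qid1 (ATOMIC) */
            ev.op = RTE_EVENT_OP_FORWARD;
            /* on enqueue, the eventdev restores ingress order before qid1 */
            rte_event_enqueue_burst(dev_id, worker_port, &ev, 1);
    }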


> So a SINGLE_LINK queue with one port attached is required at the end of the
> pipeline, or wherever ordering has to be maintained. Is my understanding
> correct?


Not quite; the SINGLE_LINK is not required at the end - we just see it as useful for common use cases.
If it is not useful, there is no reason (as far as the SW PMD is concerned) for an application to create a SINGLE_LINK to finish the pipeline.
If you have three cores that wish to TX, the above pipeline is 100% valid in the SW PMD case.


> > The value of using a SINGLE_LINK at the end of a pipeline is:
> > A) all traffic can be TXed on a single core (using a single queue)
> > B) re-ordering of traffic from the previous stage is possible
> >
> > To illustrate (B), here is a very simple pipeline:
> >
> >  RX port -> QID #1 (Ordered) -> workers (eg 4 ports) -> QID #2 (SINGLE_LINK to tx) -> TX port
> >
> > Here, QID #1 is allowed to send the packets out of order to the 4 worker ports - because they
> > are later passed back to the eventdev for re-ordering before they get to the SINGLE_LINK stage,
> > and are then TXed in the correct order.
> >
> >
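As a rough sketch of how that pipeline could be wired up (queue/port ids are
illustrative; passing NULL for priorities requests the default):

    /* QID #1: ORDERED, feeding the 4 worker ports */
    conf.event_queue_cfg = RTE_EVENT_QUEUE_CFG_ORDERED_ONLY;
    rte_event_queue_setup(dev_id, 1, &conf);

    /* QID #2: SINGLE_LINK, feeding only the TX port */
    conf.event_queue_cfg = RTE_EVENT_QUEUE_CFG_SINGLE_LINK;
    rte_event_queue_setup(dev_id, 2, &conf);

    uint8_t q1 = 1, q2 = 2;
    for (i = 0; i < 4; i++)
            rte_event_port_link(dev_id, worker_ports[i], &q1, NULL, 1);
    rte_event_port_link(dev_id, tx_port, &q2, NULL, 1);
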
> > > 3)
> > > > Does anybody have a need for a queue to be both Atomic *and* Single-link? I understand
> > > > the current API doesn't prohibit it, but I don't see the actual use-case in which that
> > > > may be useful. Atomic implies load-balancing is occurring, single link implies there is
> > > > only one consuming core. Those seem like opposites to me?
> > >
> > > I can think of the following use case:
> > >
> > > topology:
> > >
> > >   rx_port    w1_port
> > > 	 \     /         \
> > > 	  qid0 - w2_port - qid1
> > > 	       \         /     \
> > > 		    w3_port        tx_port
> > >
> > > Use case:
> > >
> > > Queue-based event pipelining:
> > > ORDERED (Stage 1) to ATOMIC (Stage 2) pipeline:
> > > - for ingress order maintenance
> > > - for executing Stage 1 in parallel for better scaling,
> > > i.e. a fat flow can be sprayed over N cores while maintaining the ingress
> > > order when it is sent out on the wire (after consuming from tx_port)
> > >
> > > I am not sure how the SW PMD works in the use case of ingress order maintenance.
> >
> > I think my illustration of (B) above is the same use-case as you have here. Instead of using
> > an ATOMIC stage 2, the SW PMD benefits from using the SINGLE_LINK port/queue, and the
> > SINGLE_LINK queue ensures ingress order is also egress order to the TX port.
> >
> >
> > > But the HW and the header file expect this form:
> > > Snippet from header file:
> > > --
> > >  * The source flow ordering from an event queue is maintained when events are
> > >  * enqueued to their destination queue within the same ordered flow context.
> > >  *
> > >  * Events from the source queue appear in their original order when dequeued
> > >  * from a destination queue.
> > > --
> > > Here qid0 is a source queue with ORDERED sched_type and qid1 is a destination
> > > queue with ATOMIC sched_type. qid1 can be linked to only one port (tx_port).
> > >
> > > Are we on the same page? If not, let me know the differences and we will try
> > > to accommodate them in the header file.
> >
> > Yes, I think we are saying the same thing, using slightly different words.
> >
> > To summarize:
> > - SW PMD sees SINGLE_LINK as its own queue type, which does not support load-balanced
> > (Atomic, Ordered, Parallel) queue functionality.
> > - SW PMD would use a SINGLE_LINK queue/port for the final stage of a pipeline
> >    A) to allow re-ordering to happen if required
> >    B) to merge traffic from multiple ports into a single stream for TX
> >
> > A possible solution:
> > 1) The application creates a SINGLE_LINK for the purpose of ensuring re-ordering is taking
> > place as expected, and links only one port for TX.
> 
> The only issue is that in the low-end-cores case it won't scale: the TX core will become a
> bottleneck, and we would need different pipelines based on the amount of traffic (40G or 10G)
> a core can handle.


See above - the SINGLE_LINK isn't required to maintain ordering. Using multiple TX cores is also valid in the SW PMD.


> > 2) SW PMDs can create a SINGLE_LINK queue type, and benefit from the optimization
> 
> Yes.
> 
> > 3) HW PMDs can ignore the "SINGLE_LINK" aspect and use an ATOMIC queue instead (as per your
> > example in 3) above)
> 
> But the topology will be fixed for both HW and SW. An extra port and an
> extra core need to be wasted on the ordering business in the HW case. Right?


Nope, no wasting cores, see above :) The SINGLE_LINK is just an easy way to "fan in" traffic from lots of cores to one core (in a performant way in SW) to allow a single core to do TX. A typical use-case might be putting RX and TX on the same core - TX is just a dequeue from a port with a SINGLE_LINK queue, and an enqueue to the NIC.
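
In code terms, that TX core's loop might look like this rough sketch (BURST,
eth_port and tx_port are placeholders; no error handling):

    #define BURST 32
    struct rte_event ev[BURST];
    struct rte_mbuf *pkts[BURST];
    uint16_t i, n;

    /* tx_port is the only port linked to the SINGLE_LINK queue */
    n = rte_event_dequeue_burst(dev_id, tx_port, ev, BURST, 0);
    for (i = 0; i < n; i++)
            pkts[i] = ev[i].mbuf;
    if (n)
            rte_eth_tx_burst(eth_port, 0, pkts, n);  /* enqueue to NIC */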


Summary from the SW PMD point-of-view:
- SINGLE_LINK is its own queue type
- a SINGLE_LINK queue can NOT schedule according to the (Atomic, Ordered or Parallel) rules

Is that acceptable from an API and HW point of view? 

If so, I will send a new patch for the API to specify more clearly what SINGLE_LINK is.
If not, I'm open to using a capability flag to solve the problem, but my understanding right now is that there is no need.



> I think we can roll out something based on capability.

Yes, if required that would be a good solution.
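
To be explicit, a capability-based approach could look something like the
sketch below; note that RTE_EVENT_DEV_CAP_SINGLE_LINK is hypothetical (it does
not exist in the header today) and is only there to show the shape of the idea:

    struct rte_event_dev_info info;
    rte_event_dev_info_get(dev_id, &info);

    /* hypothetical capability flag - not in the current header */
    if (info.event_dev_cap & RTE_EVENT_DEV_CAP_SINGLE_LINK)
            conf.event_queue_cfg = RTE_EVENT_QUEUE_CFG_SINGLE_LINK;
    else
            conf.event_queue_cfg = RTE_EVENT_QUEUE_CFG_ATOMIC_ONLY;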


> > The application doesn't have to change anything, and just configures its pipeline. The PMD is
> > able to optimize if it makes sense (SW) or just use another queue type to provide the same
> > functionality to the application (HW).
> >
> > Thoughts? -Harry
