From: "Van Haaren, Harry"
To: Jerin Jacob
Cc: "dev@dpdk.org", "Richardson, Bruce"
Date: Tue, 28 Mar 2017 12:42:27 +0000
Subject: Re: [dpdk-dev] [PATCH v5 06/20] event/sw: add support for event queues

> From: Jerin Jacob [mailto:jerin.jacob@caviumnetworks.com]
> Sent: Tuesday, March 28, 2017 11:43 AM
> To: Van Haaren, Harry
> Cc: dev@dpdk.org; Richardson, Bruce
> Subject: Re: [PATCH v5 06/20] event/sw: add support for event queues
>
> On Mon, Mar 27, 2017 at 03:17:48PM +0000, Van Haaren, Harry wrote:
> > > From: Jerin Jacob [mailto:jerin.jacob@caviumnetworks.com]
> > > Sent: Monday, March 27, 2017 8:45 AM
> > > To: Van Haaren, Harry
> > > Cc: dev@dpdk.org; Richardson, Bruce
> > > Subject: Re: [PATCH v5 06/20] event/sw: add support for event queues
> > >
> > > Just for my understanding, are 4 (SW_IQS_MAX) IQ rings created to address a different priority for each enqueue operation? What is the significance of 4 (SW_IQS_MAX) here?
> >
> > Yes, each IQ represents a priority level. There is a compile-time define (SW_IQS_MAX) which allows setting the number of internal queues at each queue stage. The default number of priorities is currently 4.
>
> OK. The reason I asked is that, if I understood it correctly, PRIO_TO_IQ is not normalizing the priority correctly when SW_IQS_MAX == 4.
>
> I thought the following mapping would be the correct normalization if SW_IQS_MAX == 4.
>
> What do you think?

Good catch - agreed, will fix.
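
(For illustration, one way such a normalization could look, assuming an 8-bit event priority where 0 is highest and 255 is lowest, and SW_IQS_MAX == 4. This is only a sketch of the idea being discussed, not the mapping from the mail above nor the actual driver code.)

    #include <stdint.h>

    /* Sketch only: spread the 8-bit priority range evenly across the
     * SW_IQS_MAX internal queues. With SW_IQS_MAX == 4 this reduces to
     * (prio >> 6): priorities 0..63 map to IQ 0, 64..127 to IQ 1, and so on.
     */
    #define SW_IQS_MAX 4
    #define PRIO_TO_IQ(prio) (((uint32_t)(prio) * SW_IQS_MAX) >> 8)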
> > > > +static int
> > > > +sw_queue_setup(struct rte_eventdev *dev, uint8_t queue_id,
> > > > +                const struct rte_event_queue_conf *conf)
> > > > +{
> > > > +        int type;
> > > > +
> > > > +        switch (conf->event_queue_cfg) {
> > > > +        case RTE_EVENT_QUEUE_CFG_SINGLE_LINK:
> > > > +                type = SW_SCHED_TYPE_DIRECT;
> > > > +                break;
> > >
> > > event_queue_cfg is a bitmap. It is valid to have RTE_EVENT_QUEUE_CFG_SINGLE_LINK | RTE_EVENT_QUEUE_CFG_ATOMIC_ONLY, i.e. an atomic schedule type queue that has only one port linked to dequeue the events. So in the above context, the switch case is not correct, i.e. it goes to the default condition. Right? Is this intentional?
> > >
> > > If I understand it correctly, based on the use case (group-based event pipelining) you have shared in the documentation patch, RTE_EVENT_QUEUE_CFG_SINGLE_LINK is used for the last stage (last queue). One option, if the SW PMD cannot support RTE_EVENT_QUEUE_CFG_SINGLE_LINK | RTE_EVENT_QUEUE_CFG_ATOMIC_ONLY mode, is that even though the application sets RTE_EVENT_QUEUE_CFG_SINGLE_LINK | RTE_EVENT_QUEUE_CFG_ATOMIC_ONLY, the driver can ignore RTE_EVENT_QUEUE_CFG_ATOMIC_ONLY. But I am not sure about the case where the application sets RTE_EVENT_QUEUE_CFG_SINGLE_LINK in the middle of the pipeline.
> > >
> > > Thoughts?
> >
> > I don't like the idea of the SW PMD ignoring flags for queues - the PMD has no idea whether the queue is the final or a middle stage of the pipeline, as it is the application's usage which defines that.
> >
> > Does anybody have a need for a queue to be both Atomic *and* Single-link? I understand the current API doesn't prohibit it, but I don't see the actual use-case in which that may be useful. Atomic implies load-balancing is occurring, single link implies there is only one consuming core. Those seem like opposites to me?
> >
> > Unless anybody sees value in queues having both, I suggest we update the documentation to specify that a queue is either load-balanced or single-link, and that setting both flags will result in -ENOTSUP being returned. (This check can be added to the eventdev layer if it is consistent for all PMDs.)
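
(For illustration, roughly how such a check could look. This is a sketch only; it treats the two flags as independent bits, as this thread does, and whether such a check belongs in the PMD or in the common eventdev layer is exactly the open question above.)

    #include <errno.h>
    #include <stdint.h>
    #include <rte_eventdev.h>

    /* Sketch only: refuse a queue configured as both load-balanced (atomic)
     * and SINGLE_LINK, returning -ENOTSUP as suggested above.
     */
    static int
    check_queue_cfg_flags(const struct rte_event_queue_conf *conf)
    {
            const uint64_t both = RTE_EVENT_QUEUE_CFG_SINGLE_LINK |
                                  RTE_EVENT_QUEUE_CFG_ATOMIC_ONLY;

            if ((conf->event_queue_cfg & both) == both)
                    return -ENOTSUP;

            return 0;
    }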
>
> If I understand it correctly (based on the previous discussions), HW implementations (Cavium or NXP) do not need to use the RTE_EVENT_QUEUE_CFG_* flags for their operations (the sched type will be derived from event.sched_type on enqueue). So that means we are free to tailor the header file based on the SW PMD requirement here. But semantically it has to be in line with the rest of the header file. We can work together to make it happen.

OK :)

> A few questions for everyone's benefit:
>
> 1) Does RTE_EVENT_QUEUE_CFG_SINGLE_LINK have any meaning other than an event queue linked to only a single port? Based on the discussions, it was added in the header file so that the SW PMD can know upfront that only a single port will be linked to the given event queue. It is added as an optimization for the SW PMD. Does it have any functional expectation?

In the context of the SW PMD, SINGLE_LINK means that a specific queue and port have a unique relationship in that there is only one connection. This allows bypassing of the Atomic, Ordering and Load-Balancing code. The result is a good performance increase, particularly if the worker port dequeue depth is large, as then large bursts of packets can be dequeued with little overhead.

As a result, (ATOMIC | SINGLE_LINK) is not a supported combination for the SW PMD queue types.

To be more precise, SINGLE_LINK is its own queue type, and can not be OR-ed with any other type.

> 2) Based on the following topology given in the documentation patch for queue-based event pipelining,
>
>     rx_port        w1_port
>            \      /       \
>             qid0 - w2_port - qid1
>                  \         /     \
>                    w3_port        tx_port
>
> a) I understand, rx_port is feeding events to qid0.
> b) But do you see any issue with the following model? IMO, it scales well linearly based on the number of cores available to work (since it is ATOMIC to ATOMIC). Nothing is wrong with qid1 connecting just to tx_port; I am only trying to understand the rationale behind it.
>
>     rx_port        w1_port         w1_port
>            \      /       \       /
>             qid0 - w2_port - qid1 - w2_port
>                  \         /      \
>                    w3_port          w3_port

This is also a valid model from the SW eventdev.

The value of using a SINGLE_LINK at the end of a pipeline is:
A) it can TX all traffic on a single core (using a single queue)
B) re-ordering of traffic from the previous stage is possible

To illustrate (B), a very simple pipeline here:

  RX port -> QID #1 (Ordered) -> workers (eg 4 ports) -> QID #2 (SINGLE_LINK to TX) -> TX port

Here, QID #1 is allowed to send the packets out of order to the 4 worker ports - because they are later passed back to the eventdev for re-ordering before they get to the SINGLE_LINK stage, and then TX in the correct order.
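
(As a rough sketch of how an application might express that pipeline with the eventdev API. The queue ids and sizing values below are illustrative, error handling is omitted, and the ORDERED_ONLY flag is assumed to sit alongside the ATOMIC_ONLY/SINGLE_LINK flags discussed in this thread.)

    #include <rte_eventdev.h>

    /* Sketch only: stage 1 is an ordered, load-balanced queue that the
     * worker ports dequeue from; stage 2 is a SINGLE_LINK queue feeding
     * the single TX port.
     */
    static void
    setup_pipeline_queues(uint8_t dev_id)
    {
            struct rte_event_queue_conf qconf = {
                    .nb_atomic_flows = 1024,            /* illustrative sizing */
                    .nb_atomic_order_sequences = 1024,
                    .priority = RTE_EVENT_DEV_PRIORITY_NORMAL,
            };

            qconf.event_queue_cfg = RTE_EVENT_QUEUE_CFG_ORDERED_ONLY;
            rte_event_queue_setup(dev_id, 1 /* QID #1 */, &qconf);

            qconf.event_queue_cfg = RTE_EVENT_QUEUE_CFG_SINGLE_LINK;
            rte_event_queue_setup(dev_id, 2 /* QID #2 */, &qconf);
    }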
> 3)
> > Does anybody have a need for a queue to be both Atomic *and* Single-link? I understand the current API doesn't prohibit it, but I don't see the actual use-case in which that may be useful. Atomic implies load-balancing is occurring, single link implies there is only one consuming core. Those seem like opposites to me?
>
> I can think of the following use case.
>
> Topology:
>
>     rx_port        w1_port
>            \      /       \
>             qid0 - w2_port - qid1
>                  \         /     \
>                    w3_port        tx_port
>
> Use case:
>
> Queue-based event pipelining, ORDERED (Stage 1) to ATOMIC (Stage 2) pipeline:
> - for ingress order maintenance
> - for executing Stage 1 in parallel for better scaling, i.e. a fat flow can spray over N cores while maintaining the ingress order when it is sent out on the wire (after consuming from tx_port)
>
> I am not sure how the SW PMD works in the use case of ingress order maintenance.

I think my illustration of (B) above is the same use-case as you have here. Instead of using an ATOMIC stage 2, the SW PMD benefits from using the SINGLE_LINK port/queue, and the SINGLE_LINK queue ensures ingress order is also egress order to the TX port.

> But the HW and the header file expect this form.
> Snippet from the header file:
> --
>  * The source flow ordering from an event queue is maintained when events are
>  * enqueued to their destination queue within the same ordered flow context.
>  *
>  * Events from the source queue appear in their original order when dequeued
>  * from a destination queue.
> --
> Here qid0 is the source queue with ORDERED sched_type and qid1 is the destination queue with ATOMIC sched_type. qid1 can be linked to only one port (tx_port).
>
> Are we on the same page? If not, let me know the differences and we will try to accommodate them in the header file.

Yes, I think we are saying the same thing, using slightly different words.

To summarize:
- The SW PMD sees SINGLE_LINK as its own queue type, which does not provide the load-balanced (Atomic, Ordered, Parallel) queue functionality.
- The SW PMD would use a SINGLE_LINK queue/port for the final stage of a pipeline
  A) to allow re-ordering to happen if required
  B) to merge traffic from multiple ports into a single stream for TX

A possible solution:
1) The application creates a SINGLE_LINK queue for the purpose of ensuring re-ordering takes place as expected, and links only one port to it for TX.
2) SW PMDs can create a SINGLE_LINK queue type, and benefit from the optimization.
3) HW PMDs can ignore the "SINGLE_LINK" aspect and use an ATOMIC queue instead (as per your example in 3) above).

The application doesn't have to change anything; it just configures its pipeline. The PMD is able to optimize if it makes sense (SW) or just use another queue type to provide the same functionality to the application (HW).

Thoughts? -Harry
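
(To make point 1) of the possible solution above concrete, a hedged sketch of the application-side setup. The helper name and the ids passed to it are hypothetical, and error handling is reduced to simple returns.)

    #include <rte_eventdev.h>

    /* Sketch only: configure the final-stage queue as SINGLE_LINK and link
     * exactly one TX port to it. A SW PMD can treat this as an optimization
     * hint, while a HW PMD is free to service the queue like any other.
     */
    static int
    setup_tx_stage(uint8_t dev_id, uint8_t tx_qid, uint8_t tx_port_id)
    {
            struct rte_event_queue_conf qconf;
            int ret;

            ret = rte_event_queue_default_conf_get(dev_id, tx_qid, &qconf);
            if (ret < 0)
                    return ret;

            qconf.event_queue_cfg = RTE_EVENT_QUEUE_CFG_SINGLE_LINK;
            ret = rte_event_queue_setup(dev_id, tx_qid, &qconf);
            if (ret < 0)
                    return ret;

            /* NULL priorities requests normal service priority for the link. */
            ret = rte_event_port_link(dev_id, tx_port_id, &tx_qid, NULL, 1);
            return (ret == 1) ? 0 : -1;
    }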