From: "Eads, Gage"
To: Jerin Jacob
CC: "dev@dpdk.org", "Richardson, Bruce", "Van Haaren, Harry", "hemant.agrawal@nxp.com"
Date: Tue, 29 Nov 2016 05:46:08 +0000
Message-ID: <9184057F7FC11744A2107296B6B8EB1E01E3443D@FMSMSX108.amr.corp.intel.com>
In-Reply-To: <20161129034304.GB9930@svelivela-lt.caveonetworks.com>
Subject: Re: [dpdk-dev] [PATCH 2/4] eventdev: implement the northbound APIs

> -----Original Message-----
> From: Jerin Jacob [mailto:jerin.jacob@caviumnetworks.com]
> Sent: Monday, November 28, 2016 9:43 PM
> To: Eads, Gage
> Cc: dev@dpdk.org; Richardson, Bruce; Van Haaren, Harry; hemant.agrawal@nxp.com
> Subject: Re: [dpdk-dev] [PATCH 2/4] eventdev: implement the northbound APIs
>
> On Mon, Nov 28, 2016 at 03:53:08PM +0000, Eads, Gage wrote:
> > (Bruce's advice heeded :))
> >
> > > -----Original Message-----
> > > From: Jerin Jacob [mailto:jerin.jacob@caviumnetworks.com]
> > > Sent: Tuesday, November 22, 2016 5:44 PM
> > > To: Eads, Gage
> > > Cc: dev@dpdk.org; Richardson, Bruce; Van Haaren, Harry; hemant.agrawal@nxp.com
> > > Subject: Re: [dpdk-dev] [PATCH 2/4] eventdev: implement the northbound APIs
> > >
> > > On Tue, Nov 22, 2016 at 10:48:32PM +0000, Eads, Gage wrote:
> > > >
> > > > > -----Original Message-----
> > > > > From: Jerin Jacob [mailto:jerin.jacob@caviumnetworks.com]
> > > > > Sent: Tuesday, November 22, 2016 2:00 PM
> > > > > To: Eads, Gage
> > > > > Cc: dev@dpdk.org; Richardson, Bruce; Van Haaren, Harry; hemant.agrawal@nxp.com
> > > > > Subject: Re: [dpdk-dev] [PATCH 2/4] eventdev: implement the northbound APIs
> > > > >
> > > > > On Tue, Nov 22, 2016 at 07:43:03PM +0000, Eads, Gage wrote:
> > > > > > > >
> > > > > > > > One open issue I noticed is that the "typical workflow" description
> > > > > > > > starting in rte_eventdev.h:204 conflicts with the centralized software
> > > > > > > > PMD that Harry posted last week. Specifically, that PMD expects a single
> > > > > > > > core to call the schedule function. We could extend the documentation to
> > > > > > > > account for this alternative style of scheduler invocation, or discuss
> > > > > > > > ways to make the software PMD work with the documented workflow. I prefer
> > > > > > > > the former, but either way I think we ought to expose the scheduler's
> > > > > > > > expected usage to the user -- perhaps through an RTE_EVENT_DEV_CAP flag?
> > > > > > >
> > > > > > > I prefer the former too; you can propose the documentation change required
> > > > > > > for the software PMD.
> > > > > >
> > > > > > Sure, proposal follows. The "typical workflow" isn't the most optimal by
> > > > > > having a conditional in the fast path, of course, but it demonstrates the
> > > > > > idea simply.
> > > > > >
> > > > > > (line 204)
> > > > > > * An event driven based application has following typical workflow on fastpath:
> > > > > > * \code{.c}
> > > > > > *	while (1) {
> > > > > > *
> > > > > > *		if (dev_info.event_dev_cap &
> > > > > > *				RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED)
> > > > > > *			rte_event_schedule(dev_id);
> > > > >
> > > > > Yes, I like the idea of RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED.
> > > > > It can be input to the application/subsystem to launch separate core(s) for
> > > > > the schedule function. But, I think, the "dev_info.event_dev_cap &
> > > > > RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED" check can be moved inside the
> > > > > implementation (to make better decisions and avoid consuming cycles on
> > > > > HW-based schedulers).
> > > >
> > > > How would this check work? Wouldn't it prevent any core from running the
> > > > software scheduler in the centralized case?
> > > > >
> > > > > I guess you may not need RTE_EVENT_DEV_CAP here, instead need a flag for
> > > > > device configure here
> > > > >
> > > > > #define RTE_EVENT_DEV_CFG_DISTRIBUTED_SCHED (1ULL << 1)
> > > > >
> > > > > struct rte_event_dev_config config;
> > > > > config.event_dev_cfg = RTE_EVENT_DEV_CFG_DISTRIBUTED_SCHED;
> > > > > rte_event_dev_configure(.., &config);
> > > > >
> > > > > on the driver side on configure,
> > > > > if (config.event_dev_cfg & RTE_EVENT_DEV_CFG_DISTRIBUTED_SCHED)
> > > > > 	eventdev->schedule = NULL;
> > > > > else // centralized case
> > > > > 	eventdev->schedule = your_centralized_schedule_function;
> > > > >
> > > > > Does that work?
> > > >
> > > > Hm, I fear the API would give users the impression that they can select the
> > > > scheduling behavior of a given eventdev, when a software scheduler is more
> > > > likely to be either distributed or centralized -- not both.
> > >
> > > Even if it is a capability flag, it is still per "device". Right?
> > > A capability flag is more of read only too. Am I missing something here?
> >
> > Correct, the capability flag I'm envisioning is per-device and read-only.
> >
> > > >
> > > > What if we use the capability flag, and define rte_event_schedule() as the
> > > > scheduling function for centralized schedulers and rte_event_dequeue() as the
> > > > scheduling function for distributed schedulers? That way, the datapath could
> > > > be the simple dequeue -> process -> enqueue. Applications would check the
> > > > capability flag at configuration time to decide whether or not to launch an
> > > > lcore that calls rte_event_schedule().
> > >
> > > I am all for simple "dequeue -> process -> enqueue".
> > > rte_event_schedule() was added for the SW scheduler only, so it may not make
> > > sense to add one more check on top of "rte_event_schedule()" to see whether it
> > > is really needed or not in the fastpath?
> >
> > Yes, the additional check shouldn't be needed. In terms of the 'typical
> > workflow' description, this is what I have in mind:
> >
> > *
> > * An event driven based application has following typical workflow on fastpath:
> > * \code{.c}
> > *	while (1) {
> > *
> > *		rte_event_dequeue(...);
> > *
> > *		(event processing)
> > *
> > *		rte_event_enqueue(...);
> > *	}
> > * \endcode
> > *
> > * The events are injected into the event device through the *enqueue* operation
> > * by event producers in the system. The typical event producers are the ethdev
> > * subsystem for generating packet events, cores (SW) for generating events based
> > * on different stages of application processing, cryptodev for generating crypto
> > * work completion notifications, etc.
> > *
> > * The *dequeue* operation gets one or more events from the event ports.
> > * The application processes the events and sends them to a downstream event
> > * queue through rte_event_enqueue() if it is an intermediate stage of event
> > * processing; on the final stage, the application may send to a different
> > * subsystem like ethdev to transmit the packet/event on the wire using the
> > * ethdev rte_eth_tx_burst() API.
> > *
> > * The point at which events are scheduled to ports depends on the device. For
> > * hardware devices, scheduling occurs asynchronously. Software schedulers can
> > * either be distributed (each worker thread schedules events to its own port)
> > * or centralized (a dedicated thread schedules to all ports).
> > * Distributed software schedulers perform the scheduling in rte_event_dequeue(),
> > * whereas centralized scheduler logic is located in rte_event_schedule(). The
> > * RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED capability flag indicates whether a
> > * device is centralized and thus needs a dedicated scheduling thread that
> > * calls rte_event_schedule().
> > *
>
> Since we are starting a dedicated thread in the centralized case, how about naming
> the flag RTE_EVENT_DEV_CAP_CENTRALIZED_SCHED instead of
> RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED?
> No strong opinion here. Just a thought.

Fine with me.
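
For illustration, here is a rough sketch of the application-side decision the text
above implies. It is only a sketch against the names used in this patch set
(rte_event_schedule(), rte_event_dev_info_get(), dev_info.event_dev_cap,
RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED); the helper names are placeholders, and the
final flag spelling may change per the renaming discussed above.

#include <rte_common.h>
#include <rte_eal.h>
#include <rte_lcore.h>
#include <rte_eventdev.h>

static uint8_t sched_dev_id;

/* Scheduling loop for a centralized software scheduler. Distributed software
 * schedulers and HW devices schedule from within rte_event_dequeue(), so they
 * never launch this. */
static int
dedicated_scheduler(void *arg __rte_unused)
{
	while (1)
		rte_event_schedule(sched_dev_id);
	return 0;
}

/* At configuration time, launch the scheduling lcore only when the capability
 * flag says the device is *not* distributed. */
static void
launch_scheduler_if_needed(uint8_t dev_id, unsigned int sched_lcore)
{
	struct rte_event_dev_info dev_info;

	rte_event_dev_info_get(dev_id, &dev_info);

	if (!(dev_info.event_dev_cap & RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED)) {
		sched_dev_id = dev_id;
		rte_eal_remote_launch(dedicated_scheduler, NULL, sched_lcore);
	}
}

Either way, the worker lcores stay on the plain dequeue -> process -> enqueue loop
from the proposed documentation above.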