From: "Eads, Gage"
To: Jerin Jacob
Cc: "dev@dpdk.org", "Richardson, Bruce", "Van Haaren, Harry", "hemant.agrawal@nxp.com"
Date: Mon, 28 Nov 2016 15:53:08 +0000
Subject: Re: [dpdk-dev] [PATCH 2/4] eventdev: implement the northbound APIs

(Bruce's advice heeded :))

> -----Original Message-----
> From: Jerin Jacob [mailto:jerin.jacob@caviumnetworks.com]
> Sent: Tuesday, November 22, 2016 5:44 PM
> To: Eads, Gage
> Cc: dev@dpdk.org; Richardson, Bruce; Van Haaren, Harry; hemant.agrawal@nxp.com
> Subject: Re: [dpdk-dev] [PATCH 2/4] eventdev: implement the northbound APIs
>
> On Tue, Nov 22, 2016 at 10:48:32PM +0000, Eads, Gage wrote:
> >
> > > -----Original Message-----
> > > From: Jerin Jacob [mailto:jerin.jacob@caviumnetworks.com]
> > > Sent: Tuesday, November 22, 2016 2:00 PM
> > > To: Eads, Gage
> > > Cc: dev@dpdk.org; Richardson, Bruce; Van Haaren, Harry; hemant.agrawal@nxp.com
> > > Subject: Re: [dpdk-dev] [PATCH 2/4] eventdev: implement the northbound APIs
> > >
> > > On Tue, Nov 22, 2016 at 07:43:03PM +0000, Eads, Gage wrote:
> > > >
> > > > > > > > One open issue I noticed is that the "typical workflow"
> > > > > > > > description starting in rte_eventdev.h:204 conflicts with the
> > > > > > > > centralized software PMD that Harry posted last week.
> > > > > > > > Specifically, that PMD expects a single core to call the
> > > > > > > > schedule function. We could extend the documentation to account
> > > > > > > > for this alternative style of scheduler invocation, or discuss
> > > > > > > > ways to make the software PMD work with the documented
> > > > > > > > workflow. I prefer the former, but either way I think we ought
> > > > > > > > to expose the scheduler's expected usage to the user --
> > > > > > > > perhaps through an RTE_EVENT_DEV_CAP flag?
> > > > > > >
> > > > > > > I prefer the former too; you can propose the documentation
> > > > > > > change required for the software PMD.
> > > > > >
> > > > > > Sure, proposal follows. The "typical workflow" isn't the most
> > > > > > optimal by having a conditional in the fast-path, of course, but
> > > > > > it demonstrates the idea simply.
> > > > > >
> > > > > > (line 204)
> > > > > >  * An event driven based application has following typical workflow on fastpath:
> > > > > >  * \code{.c}
> > > > > >  *	while (1) {
> > > > > >  *
> > > > > >  *		if (dev_info.event_dev_cap &
> > > > > >  *			RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED)
> > > > > >  *			rte_event_schedule(dev_id);
> > > > >
> > > > > Yes, I like the idea of RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED.
> > > > > It can be an input to the application/subsystem to launch separate
> > > > > core(s) for the schedule functions. But, I think, the
> > > > > "dev_info.event_dev_cap & RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED"
> > > > > check can be moved inside the implementation (to make better
> > > > > decisions and avoid consuming cycles on HW-based schedulers).
> > > >
> > > > How would this check work? Wouldn't it prevent any core from running
> > > > the software scheduler in the centralized case?
> > >
> > > I guess you may not need RTE_EVENT_DEV_CAP here; instead you need a
> > > flag for device configure:
> > >
> > > #define RTE_EVENT_DEV_CFG_DISTRIBUTED_SCHED (1ULL << 1)
> > >
> > > struct rte_event_dev_config config;
> > > config.event_dev_cfg = RTE_EVENT_DEV_CFG_DISTRIBUTED_SCHED;
> > > rte_event_dev_configure(.., &config);
> > >
> > > Then on the driver side, on configure:
> > >
> > > if (config.event_dev_cfg & RTE_EVENT_DEV_CFG_DISTRIBUTED_SCHED)
> > > 	eventdev->schedule = NULL;
> > > else // centralized case
> > > 	eventdev->schedule = your_centralized_schedule_function;
> > >
> > > Does that work?
> >
> > Hm, I fear the API would give users the impression that they can select
> > the scheduling behavior of a given eventdev, when a software scheduler
> > is more likely to be either distributed or centralized -- not both.
>
> Even if it is a capability flag, it is still per "device". Right?
> A capability flag is also more of a read-only thing. Am I missing
> something here?
>

Correct, the capability flag I'm envisioning is per-device and read-only.

> > What if we use the capability flag, and define rte_event_schedule() as
> > the scheduling function for centralized schedulers and
> > rte_event_dequeue() as the scheduling function for distributed
> > schedulers? That way, the datapath could be the simple
> > dequeue -> process -> enqueue. Applications would check the capability
> > flag at configuration time to decide whether or not to launch an lcore
> > that calls rte_event_schedule().
>
> I am all for a simple "dequeue -> process -> enqueue".
> rte_event_schedule() was added for the SW scheduler only; now it may not
> make sense to add one more check on top of rte_event_schedule() to see
> whether it is really needed or not in the fastpath?

Yes, the additional check shouldn't be needed. In terms of the "typical
workflow" description, this is what I have in mind:

 *
 * An event-driven application has the following typical workflow on the
 * fastpath:
 * \code{.c}
 *	while (1) {
 *
 *		rte_event_dequeue(...);
 *
 *		(event processing)
 *
 *		rte_event_enqueue(...);
 *	}
 * \endcode
 *
 * Events are injected into the event device through the *enqueue* operation
 * by event producers in the system. Typical event producers are the ethdev
 * subsystem (for generating packet events), cores/SW (for generating events
 * based on different stages of application processing), cryptodev (for
 * generating crypto work completion notifications), etc.
 *
 * The *dequeue* operation gets one or more events from the event ports.
 * The application processes the events and, if it is an intermediate stage
 * of event processing, sends them to a downstream event queue through
 * rte_event_enqueue(). At the final stage, the application may send the
 * packet/event to a different subsystem, such as ethdev, to transmit it on
 * the wire using the ethdev rte_eth_tx_burst() API.
 *
 * The point at which events are scheduled to ports depends on the device.
 * For hardware devices, scheduling occurs asynchronously. Software
 * schedulers can either be distributed (each worker thread schedules events
 * to its own port) or centralized (a dedicated thread schedules to all
 * ports). Distributed software schedulers perform the scheduling in
 * rte_event_dequeue(), whereas centralized scheduler logic is located in
 * rte_event_schedule(). The RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED capability
 * flag indicates whether a device's scheduling is distributed; if the flag
 * is not set, the device is centralized and thus needs a dedicated
 * scheduling thread that repeatedly calls rte_event_schedule().
 *
 */
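
For what it's worth, here's a rough sketch of the application-side logic
that last paragraph implies. Purely illustrative, not part of the patch: it
assumes the rte_event_dev_info_get() and rte_event_schedule() APIs from this
series, and schedule_loop(), launch_scheduler_if_needed(), sched_done, and
sched_lcore_id are names I've invented for the example.

#include <stdbool.h>
#include <stdint.h>
#include <rte_eal.h>
#include <rte_lcore.h>
#include <rte_eventdev.h>

static volatile bool sched_done;

/* Dedicated scheduling loop for a centralized software scheduler */
static int
schedule_loop(void *arg)
{
	uint8_t dev_id = (uintptr_t)arg;

	while (!sched_done)
		rte_event_schedule(dev_id);

	return 0;
}

static int
launch_scheduler_if_needed(uint8_t dev_id, unsigned int sched_lcore_id)
{
	struct rte_event_dev_info dev_info;

	rte_event_dev_info_get(dev_id, &dev_info);

	/* A distributed scheduler does its scheduling work inside
	 * rte_event_dequeue(), so no extra lcore is required. */
	if (dev_info.event_dev_cap & RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED)
		return 0;

	/* Centralized scheduler: dedicate an lcore to calling
	 * rte_event_schedule() until the application shuts down. */
	return rte_eal_remote_launch(schedule_loop,
			(void *)(uintptr_t)dev_id, sched_lcore_id);
}

Worker lcores then run the plain dequeue -> process -> enqueue loop from the
header comment above, regardless of which scheduler type the device reports.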