From mboxrd@z Thu Jan 1 00:00:00 1970
From: "Van Haaren, Harry"
To: Jerin Jacob, "Eads, Gage"
Cc: dev@dpdk.org, "Richardson, Bruce", hemant.agrawal@nxp.com, nipun.gupta@nxp.com, "Vangati, Narender", "Rao, Nikhil"
Subject: Re: [dpdk-dev] [PATCH] eventdev: add producer enqueue hint
Date: Tue, 27 Jun 2017 08:44:34 +0000
In-Reply-To: <20170627080820.GA14276@jerin>
References: <20170612114627.18893-1-jerin.jacob@caviumnetworks.com> <9184057F7FC11744A2107296B6B8EB1E01ED7263@FMSMSX108.amr.corp.intel.com> <20170627080820.GA14276@jerin>
> From: Jerin Jacob [mailto:jerin.jacob@caviumnetworks.com]
> Sent: Tuesday, June 27, 2017 9:08 AM
> To: Eads, Gage
> Cc: dev@dpdk.org; Richardson, Bruce; Van Haaren, Harry;
> hemant.agrawal@nxp.com; nipun.gupta@nxp.com; Vangati, Narender; Rao, Nikhil
> Subject: Re: [dpdk-dev] [PATCH] eventdev: add producer enqueue hint
>
> > > void
> > > diff --git a/lib/librte_eventdev/rte_eventdev.h
> > > b/lib/librte_eventdev/rte_eventdev.h
> > > index a248fe90e..1c1a46593 100644
> > > --- a/lib/librte_eventdev/rte_eventdev.h
> > > +++ b/lib/librte_eventdev/rte_eventdev.h
> > > @@ -933,7 +933,15 @@ struct rte_event {
> > >  * and is undefined on dequeue.
> > >  * @see RTE_EVENT_OP_NEW, (RTE_EVENT_OP_*)
> > >  */
> > > -	uint8_t rsvd:4;
> > > +	uint8_t all_op_new:1;
> > > +	/**< Valid only with event enqueue operation - This hint
> > > +	 * indicates that the enqueue request has only the
> > > +	 * events with op == RTE_EVENT_OP_NEW.
> > > +	 * The event producer typically uses this pattern to
> > > +	 * inject events into the eventdev.
> > > +	 * @see RTE_EVENT_OP_NEW rte_event_enqueue_burst()
> > > +	 */
> > > +	uint8_t rsvd:3;
> > >	/**< Reserved for future use */
> > >	uint8_t sched_type:2;
> > >	/**< Scheduler synchronization type (RTE_SCHED_TYPE_*)
> > > --
> > > 2.13.1
> >
> > I slightly prefer the parallel enqueue API -- I can see folks making the
> > mistake of setting all_op_new without setting the op to RTE_EVENT_OP_NEW,
> > and later adding a "forward-only" enqueue API could be interesting for the
> > sw PMD -- but this looks fine to me. Curious if others have any thoughts.
>
> If the forward-only parallel enqueue API is interesting for the SW PMD then
> I can drop this one and introduce a forward-only API. Let me know if others
> have any thoughts?

To make sure I understand correctly, the "parallel API" idea is to add a new function pointer per PMD, dedicated to enqueueing a burst of events that all have the same OP? So the end result would be functions in the public API like this:

rte_event_enqueue_burst_new(port, new_events, n_events);
rte_event_enqueue_burst_forward(port, fwd_events, n_events);

Given that these are a "specialization" of the generic enqueue_burst() function, the PMD is not obliged to implement them. If they are NULL, the eventdev.c infrastructure can just point burst_new() and burst_forward() to the generic enqueue, without any performance delta?

The cost is some added code in the public header and infrastructure. The gain is that we don't overload the current API with new behavior.

Assuming my description of the parallel proposal above is correct, +1 for the parallel function approach. I like APIs that "do what they say on the tin" :)