Date: Wed, 26 Oct 2016 13:54:14 +0100
From: Bruce Richardson
To: Jerin Jacob
Message-ID: <20161026125414.GB33288@bricha3-MOBL3.ger.corp.intel.com>
References: <20161005072451.GA2358@localhost.localdomain> <1476214216-31982-1-git-send-email-jerin.jacob@caviumnetworks.com> <20161025174904.GA18333@localhost.localdomain> <20161026122416.GA21509@localhost.localdomain>
In-Reply-To: <20161026122416.GA21509@localhost.localdomain>
Organization: Intel Research and Development Ireland Ltd.
Cc: "Vangati, Narender", "dev@dpdk.org", "Eads, Gage", "thomas.monjalon@6wind.com"
Subject: Re: [dpdk-dev] [RFC] [PATCH v2] libeventdev: event driven programming model framework for DPDK

On Wed, Oct 26, 2016 at 05:54:17PM +0530, Jerin Jacob wrote:
> On Wed, Oct 26, 2016 at 12:11:03PM +0000, Van Haaren, Harry wrote:
> > > -----Original Message-----
> > > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Jerin Jacob
> > >
> > > So far, I have received constructive feedback from Intel, NXP and Linaro folks.
> > > Let me know if anyone else is interested in contributing to the definition of eventdev.
> > >
> > > If there are no major issues in the proposed spec, then Cavium would like to work on
> > > implementing and upstreaming the common code (lib/librte_eventdev/) and
> > > an associated HW driver. (Requested minor changes of v2 will be addressed
> > > in the next version.)
> >
> > Hi All,
> >
> > I will propose a minor change to the rte_event struct, allowing some bits to be implementation specific. Currently the rte_event struct has no space for an implementation to store any metadata about the event. For software performance it would be really helpful if some bits were available for the implementation to keep flags about each event.
>
> OK.
>
> > I suggest reworking the struct as below, which opens up 6 bits that were otherwise wasted, and defines them as implementation specific. By "implementation specific" it is understood that the implementation can overwrite any information stored in those bits, and the application must not expect the data to remain after the event is scheduled.
> >
> > OLD:
> > struct rte_event {
> >         uint32_t flow_id:24;
> >         uint32_t queue_id:8;
> >         uint8_t sched_type; /* Note only 2 bits of 8 are required */
> >
> > NEW:
> > struct rte_event {
> >         uint32_t flow_id:24;
> >         uint32_t sched_type:2; /* reduced size: 2 bits is enough for the enqueue types Ordered, Atomic, Parallel */
> >         uint32_t implementation:6; /* available for implementation-specific metadata */
> >         uint8_t queue_id; /* still 8 bits as before */
> >
> > Thoughts? -Harry
>
> Looks good to me. I will add it in v3.
>
Thanks.

One other suggestion is that it might be useful to support typed queues explicitly in the API. Right now, when you create a queue, the queue_conf structure takes as parameters how many atomic flows are needed for the queue, or how many reorder slots need to be reserved for it. This implicitly hints at the type of traffic which will be sent to the queue, but I'm wondering if it's better to make it explicit. There are certain optimisations that can be looked at if we know that a queue only handles packets of a particular type. [Not having to handle reordering when pulling events from a core can be a big win for software!]

How about adding "allowed_event_types" as a field to rte_event_queue_conf, with possible values:
* atomic
* ordered
* parallel
* mixed - allowing all 3 types

I think allowing 2 of the 3 types might make things too complicated.

An open question would then be how to behave when the queue type and the requested event type conflict. We can either throw an error, or just ignore the event type and always treat enqueued events as being of the queue type. I prefer the latter, because it's faster not having to error-check, and it pushes the responsibility onto the app to know what it's doing.

/Bruce