From: Jerin Jacob
To: "McDaniel, Timothy"
Cc: Ray Kinsella, Neil Horman, Jerin Jacob, Mattias Rönnblom, dpdk-dev,
 "Eads, Gage", "Van Haaren, Harry"
Date: Wed, 1 Jul 2020 10:20:11 +0530
Subject: Re: [dpdk-dev] [PATCH 01/27] eventdev: dlb upstream prerequisites

On Wed, Jul 1, 2020 at 12:57 AM McDaniel, Timothy wrote:
>
> >-----Original Message-----
> >From: Jerin Jacob
> >Sent: Tuesday, June 30, 2020 10:58 AM
> >To: McDaniel, Timothy
> >Cc: Ray Kinsella; Neil Horman; Jerin Jacob; Mattias Rönnblom; dpdk-dev;
> >Eads, Gage; Van Haaren, Harry
> >Subject: Re: [dpdk-dev] [PATCH 01/27] eventdev: dlb upstream prerequisites
> >
> >On Tue, Jun 30, 2020 at 9:12 PM McDaniel, Timothy wrote:
> >>
> >> >-----Original Message-----
> >> >From: Jerin Jacob
> >> >Sent: Monday, June 29, 2020 11:21 PM
> >> >To: McDaniel, Timothy
> >> >Cc: Ray Kinsella; Neil Horman; Jerin Jacob; Mattias Rönnblom;
> >> >dpdk-dev; Eads, Gage; Van Haaren, Harry
> >> >Subject: Re: [dpdk-dev] [PATCH 01/27] eventdev: dlb upstream prerequisites
> >> >
> >> >On Tue, Jun 30, 2020 at 1:01 AM McDaniel, Timothy wrote:
> >> >>
> >> >> -----Original Message-----
> >> >> From: Jerin Jacob
> >> >> Sent: Saturday, June 27, 2020 2:45 AM
> >> >> To: McDaniel, Timothy; Ray Kinsella; Neil Horman
> >> >> Cc: Jerin Jacob; Mattias Rönnblom; dpdk-dev; Eads, Gage; Van Haaren, Harry
> >> >> Subject: Re: [dpdk-dev] [PATCH 01/27] eventdev: dlb upstream prerequisites
> >> >>
> >> >> > +
> >> >> > +/** Event port configuration structure */
> >> >> > +struct rte_event_port_conf_v20 {
> >> >> > +	int32_t new_event_threshold;
> >> >> > +	/**< A backpressure threshold for new event enqueues on this port.
> >> >> > +	 * Use for *closed system* event dev where event capacity is limited,
> >> >> > +	 * and cannot exceed the capacity of the event dev.
> >> >> > +	 * Configuring ports with different thresholds can make higher priority
> >> >> > +	 * traffic less likely to be backpressured.
> >> >> > +	 * For example, a port used to inject NIC Rx packets into the event dev
> >> >> > +	 * can have a lower threshold so as not to overwhelm the device,
> >> >> > +	 * while ports used for worker pools can have a higher threshold.
> >> >> > +	 * This value cannot exceed the *nb_events_limit*
> >> >> > +	 * which was previously supplied to rte_event_dev_configure().
> >> >> > +	 * This should be set to '-1' for *open system*.
> >> >> > +	 */
> >> >> > +	uint16_t dequeue_depth;
> >> >> > +	/**< Configure number of bulk dequeues for this event port.
> >> >> > +	 * This value cannot exceed the *nb_event_port_dequeue_depth*
> >> >> > +	 * which was previously supplied to rte_event_dev_configure().
> >> >> > +	 * Ignored when device is not RTE_EVENT_DEV_CAP_BURST_MODE capable.
> >> >> > +	 */
> >> >> > +	uint16_t enqueue_depth;
> >> >> > +	/**< Configure number of bulk enqueues for this event port.
> >> >> > +	 * This value cannot exceed the *nb_event_port_enqueue_depth*
> >> >> > +	 * which was previously supplied to rte_event_dev_configure().
> >> >> > +	 * Ignored when device is not RTE_EVENT_DEV_CAP_BURST_MODE capable.
> >> >> > +	 */
> >> >> > 	uint8_t disable_implicit_release;
> >> >> > 	/**< Configure the port not to release outstanding events in
> >> >> > 	 * rte_event_dev_dequeue_burst(). If true, all events received through
> >> >> > @@ -733,6 +911,14 @@ struct rte_event_port_conf {
> >> >> >  rte_event_port_default_conf_get(uint8_t dev_id, uint8_t port_id,
> >> >> > 				struct rte_event_port_conf *port_conf);
> >> >> >
> >> >> > +int
> >> >> > +rte_event_port_default_conf_get_v20(uint8_t dev_id, uint8_t port_id,
> >> >> > +				struct rte_event_port_conf_v20 *port_conf);
> >> >> > +
> >> >> > +int
> >> >> > +rte_event_port_default_conf_get_v21(uint8_t dev_id, uint8_t port_id,
> >> >> > +				struct rte_event_port_conf *port_conf);
> >> >>
> >> >> Hi Timothy,
> >> >>
> >> >> + ABI Maintainers (Ray, Neil)
> >> >>
> >> >> # As per my understanding, the structures can not be versioned, only
> >> >> functions can be versioned,
> >> >> i.e. we can not make any change to "struct rte_event_port_conf".
> >> >>
> >> >> # We have a similar case with ethdev and it was deferred to the next
> >> >> release, v20.11:
> >> >> http://patches.dpdk.org/patch/69113/
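[Illustration, not part of the quoted patch or the thread: roughly how the
_v20/_v21 prototypes above would typically be bound using DPDK's
function-versioning macros from rte_function_versioning.h. The version
nodes (20.0, 21) and the stub bodies are assumptions for this sketch only;
the matching version.map entries and any __vsym annotations are omitted.]

    #include <rte_eventdev.h>              /* rte_event_port_conf */
    #include <rte_function_versioning.h>   /* VERSION_SYMBOL & friends */
    /* struct rte_event_port_conf_v20 comes from the proposed patch. */

    int
    rte_event_port_default_conf_get_v20(uint8_t dev_id, uint8_t port_id,
                            struct rte_event_port_conf_v20 *port_conf)
    {
            /* ... fill in the old (v20) layout ... */
            return 0;
    }
    /* Binaries built against the old ABI keep resolving to the v20 symbol. */
    VERSION_SYMBOL(rte_event_port_default_conf_get, _v20, 20.0);

    int
    rte_event_port_default_conf_get_v21(uint8_t dev_id, uint8_t port_id,
                            struct rte_event_port_conf *port_conf)
    {
            /* ... fill in the current layout ... */
            return 0;
    }
    /* Newly linked binaries resolve to the v21 symbol by default. */
    BIND_DEFAULT_SYMBOL(rte_event_port_default_conf_get, _v21, 21);
    MAP_STATIC_SYMBOL(int rte_event_port_default_conf_get(uint8_t dev_id,
                      uint8_t port_id, struct rte_event_port_conf *port_conf),
                      rte_event_port_default_conf_get_v21);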
> >> >>
> >> >> Regarding the API changes:
> >> >> # The slow path changes generally look good to me. I will review the
> >> >> next level in the coming days.
> >> >> # The following fast path changes bother me. Could you share more
> >> >> details on the change below?
> >> >>
> >> >> diff --git a/app/test-eventdev/test_order_atq.c
> >> >> b/app/test-eventdev/test_order_atq.c
> >> >> index 3366cfc..8246b96 100644
> >> >> --- a/app/test-eventdev/test_order_atq.c
> >> >> +++ b/app/test-eventdev/test_order_atq.c
> >> >> @@ -34,6 +34,8 @@
> >> >> 			continue;
> >> >> 		}
> >> >>
> >> >> +		ev.flow_id = ev.mbuf->udata64;
> >> >> +
> >> >>
> >> >> # Since RC1 is near, I am not sure how to accommodate the API changes
> >> >> now and sort out the ABI stuff.
> >> >> # The other concern is that the eventdev spec gets bloated with
> >> >> versioning files just for ONE release, as 20.11 will be OK to change the ABI.
> >> >> # While we discuss the API change, please send a deprecation notice for
> >> >> the ABI change for 20.11, so that there is no ambiguity about this patch
> >> >> for the 20.11 release.
> >> >>
> >> >> Hello Jerin,
> >> >>
> >> >> Thank you for the review comments.
> >> >>
> >> >> With regard to your comments on the fast path flow_id change: the Intel
> >> >> DLB hardware is not capable of transferring the flow_id as part of the
> >> >> event itself. We therefore require a mechanism to accomplish this. What
> >> >> we have done to work around this is to require the application to embed
> >> >> the flow_id within the data payload. The new flag, #define
> >> >> RTE_EVENT_DEV_CAP_CARRY_FLOW_ID (1ULL << 9), can be used by applications
> >> >> to determine if they need to embed the flow_id, or if it is automatically
> >> >> propagated and present in the received event.
> >> >>
> >> >> What we should have done is to wrap the assignment in a conditional:
> >> >>
> >> >> if (!(device_capability_flags & RTE_EVENT_DEV_CAP_CARRY_FLOW_ID))
> >> >> 	ev.flow_id = ev.mbuf->udata64;
> >> >
> >> >Two problems with this approach:
> >> >1) we are assuming the mbuf udata64 field is available for the DLB driver
> >> >2) it won't work with another adapter; eventdev has no dependency on mbuf
> >> >
> >>
> >> This snippet is not intended to suggest that udata64 always be used to
> >> store the flow ID, but is an example of how an application could do it.
> >> Some applications won't need to carry the flow ID through; others can
> >> select an unused field in the event data (e.g. hash.rss or udata64 if
> >> using mbufs), or (worst case) re-generate the flow ID in pipeline stages
> >> that require it.
> >
> >OK.
> >
> >> >Question:
> >> >1) In the case of DLB hardware, on dequeue(), what does the HW return? Is
> >> >it only the event pointer, without any other metadata like schedule_type,
> >> >etc.?
> >> >
> >>
> >> The DLB device provides a 16B "queue entry" that consists of:
> >>
> >>     * 8B event data
> >>     * Queue ID
> >>     * Priority
> >>     * Scheduling type
> >>     * 19 bits of carried-through data
> >>     * Assorted error/debug/reserved bits that are set by the device
> >>       (not carried-through)
> >>
> >> For the carried-through 19b, we use 12b for event_type and sub_event_type.
> >
> >I can only think of TWO options to help:
> >1) Since the event pointer is always cache aligned, you could grab the
> >LSB 6 bits (2^6 = 64B) plus the 7 bits left over in the (19b - 12b)
> >carried-through structure.
> >2) Have a separate mempool driver built on the existing drivers, i.e.
> >"event pointer" plus or minus some offset can hold any amount of custom data.
> >
>
> We can't guarantee that the event will contain a pointer -- it's possible
> that the 8B is inline data (i.e. struct rte_event's u64 field).
>
> It's really an application decision -- for example, an app could allocate
> space in the 'mbuf private data' to store the flow ID, if the event device
> lacks the carry-flow-ID capability and the other mbuf fields can't be used
> for whatever reason.
> We modified the tests and sample apps to show how this might be done, not
> necessarily how it must be done.

Yeah. If the HW has a limitation, we can't do much; it is OK to change the
eventdev spec to support new HW limitations. That is, adding
RTE_EVENT_DEV_CAP_CARRY_FLOW_ID is OK.

Please also update the existing drivers to advertise the
RTE_EVENT_DEV_CAP_CARRY_FLOW_ID capability; I believe that is missing in
this patch.
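[Illustration, not part of the thread or the patch: a minimal sketch of how
an application might gate the flow-ID restore on the proposed capability
bit. The helper names are assumptions, and mbuf->udata64 is only one
possible carrier field, as discussed above.]

    #include <rte_eventdev.h>
    #include <rte_mbuf.h>

    /* Set once at init: non-zero when the device cannot carry flow_id. */
    static int restore_flow_id;

    static void
    check_carry_flow_id(uint8_t dev_id)
    {
            struct rte_event_dev_info info;

            rte_event_dev_info_get(dev_id, &info);
            restore_flow_id =
                    !(info.event_dev_cap & RTE_EVENT_DEV_CAP_CARRY_FLOW_ID);
    }

    static void
    worker_poll(uint8_t dev_id, uint8_t port_id)
    {
            struct rte_event ev;

            if (!rte_event_dequeue_burst(dev_id, port_id, &ev, 1, 0))
                    return;
            if (restore_flow_id)
                    ev.flow_id = ev.mbuf->udata64; /* app-chosen carrier */
            /* ... stage processing ... */
    }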
> >
> >> >>
> >> >> This would minimize/eliminate any performance impact due to the
> >> >> processor's branch prediction logic.
> >
> >I think, if we need to change the common fastpath, we should make it a
> >template that creates the code at compile time, to have absolutely zero
> >overhead, and select it at runtime.
> >See app/test-eventdev/test_order_atq.c, function worker_wrapper():
> >it _creates_ the worker at compile time and picks it based on the runtime
> >capability.
> >
>
> Yes, that would be perfect. Thanks for the example!

Wherever you are making a fastpath change, please follow this scheme and
send the next version.

In order to have clean and reusable code, you could have a template
function with an "if" that can be opted out at _compile_ time, i.e.:

noinline generic_worker(..., const uint64_t flags)
{
	...
	if (!(flags & CAP_CARRY_FLOW_ID))
		...
}

worker_with_out_carry_flow_id()
{
	generic_worker(.., CAP_CARRY_FLOW_ID)
}

normal_worker()
{
	generic_worker(.., 0)
}

No other controversial top-level comments on this patch series.
Once we have sorted out the ABI issues, I can review and merge.

>
> >> >> The assignment then becomes in essence a NOOP for all event devices
> >> >> that are capable of carrying the flow_id as part of the event payload
> >> >> itself.
> >> >>
> >> >> Thanks,
> >> >> Tim
> >> >>
> >> >> Thanks,
> >> >> Tim
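[Illustration, not the actual test-eventdev code: one way the template-worker
idea sketched above could be made concrete. The worker and flag names and
the worker_args context are assumptions. The design point is that
generic_worker() is always inlined with a literal 'flags' value, so the
flow-ID branch is folded away at compile time, and the specialized variant
is selected once at runtime based on the device capability, mirroring the
worker_wrapper() pattern.]

    #include <rte_eventdev.h>

    struct worker_args {                    /* assumed per-worker context */
            uint8_t dev_id;
            uint8_t port_id;
            volatile int *stop;
    };

    #define WKR_RESTORE_FLOW_ID (1ULL << 0) /* assumed worker flag */

    static __rte_always_inline int
    generic_worker(struct worker_args *w, const uint64_t flags)
    {
            struct rte_event ev;

            while (!*w->stop) {
                    if (!rte_event_dequeue_burst(w->dev_id, w->port_id,
                                                 &ev, 1, 0))
                            continue;
                    /* Folded to a no-op when 'flags' is a literal 0. */
                    if (flags & WKR_RESTORE_FLOW_ID)
                            ev.flow_id = ev.mbuf->udata64;
                    /* ... stage processing / enqueue ... */
            }
            return 0;
    }

    static int
    worker_restore_flow_id(void *arg)
    {
            return generic_worker(arg, WKR_RESTORE_FLOW_ID);
    }

    static int
    worker_plain(void *arg)
    {
            return generic_worker(arg, 0);
    }

    /* Runtime selection based on the device capability, once per launch. */
    static int
    launch_worker(void *arg, uint32_t event_dev_cap)
    {
            if (event_dev_cap & RTE_EVENT_DEV_CAP_CARRY_FLOW_ID)
                    return worker_plain(arg);
            return worker_restore_flow_id(arg);
    }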