From: Jerin Jacob
Date: Tue, 30 Jun 2020 21:27:30 +0530
To: "McDaniel, Timothy"
Cc: Ray Kinsella, Neil Horman, Jerin Jacob, Mattias Rönnblom, dpdk-dev,
 "Eads, Gage", "Van Haaren, Harry"
Subject: Re: [dpdk-dev] [PATCH 01/27] eventdev: dlb upstream prerequisites
References: <1593232671-5690-1-git-send-email-timothy.mcdaniel@intel.com>
 <1593232671-5690-2-git-send-email-timothy.mcdaniel@intel.com>
List-Id: DPDK patches and discussions

On Tue, Jun 30, 2020 at 9:12 PM McDaniel, Timothy wrote:
>
> >-----Original Message-----
> >From: Jerin Jacob
> >Sent: Monday, June 29, 2020 11:21 PM
> >To: McDaniel, Timothy
> >Cc: Ray Kinsella; Neil Horman; Jerin Jacob; Mattias Rönnblom; dpdk-dev;
> >Eads, Gage; Van Haaren, Harry
> >Subject: Re: [dpdk-dev] [PATCH 01/27] eventdev: dlb upstream prerequisites
> >
> >On Tue, Jun 30, 2020 at 1:01 AM McDaniel, Timothy wrote:
> >>
> >> -----Original Message-----
> >> From: Jerin Jacob
> >> Sent: Saturday, June 27, 2020 2:45 AM
> >> To: McDaniel, Timothy; Ray Kinsella; Neil Horman
> >> Cc: Jerin Jacob; Mattias Rönnblom; dpdk-dev; Eads, Gage; Van Haaren, Harry
> >> Subject: Re: [dpdk-dev] [PATCH 01/27] eventdev: dlb upstream prerequisites
> >>
> >> > +
> >> > +/** Event port configuration structure */
> >> > +struct rte_event_port_conf_v20 {
> >> > +       int32_t new_event_threshold;
> >> > +       /**< A backpressure threshold for new event enqueues on this port.
> >> > +        * Use for *closed system* event dev where event capacity is limited,
> >> > +        * and cannot exceed the capacity of the event dev.
> >> > +        * Configuring ports with different thresholds can make higher priority
> >> > +        * traffic less likely to be backpressured.
> >> > +        * For example, a port used to inject NIC Rx packets into the event dev
> >> > +        * can have a lower threshold so as not to overwhelm the device,
> >> > +        * while ports used for worker pools can have a higher threshold.
> >> > +        * This value cannot exceed the *nb_events_limit*
> >> > +        * which was previously supplied to rte_event_dev_configure().
> >> > +        * This should be set to '-1' for *open system*.
> >> > +        */
> >> > +       uint16_t dequeue_depth;
> >> > +       /**< Configure number of bulk dequeues for this event port.
> >> > +        * This value cannot exceed the *nb_event_port_dequeue_depth*
> >> > +        * which was previously supplied to rte_event_dev_configure().
> >> > +        * Ignored when device is not RTE_EVENT_DEV_CAP_BURST_MODE capable.
> >> > +        */
> >> > +       uint16_t enqueue_depth;
> >> > +       /**< Configure number of bulk enqueues for this event port.
> >> > +        * This value cannot exceed the *nb_event_port_enqueue_depth*
> >> > +        * which was previously supplied to rte_event_dev_configure().
> >> > +        * Ignored when device is not RTE_EVENT_DEV_CAP_BURST_MODE capable.
> >> > +        */
> >> >         uint8_t disable_implicit_release;
> >> >         /**< Configure the port not to release outstanding events in
> >> >          * rte_event_dev_dequeue_burst().
> >> >          * If true, all events received through
> >> >
> >> > @@ -733,6 +911,14 @@ struct rte_event_port_conf {
> >> >  rte_event_port_default_conf_get(uint8_t dev_id, uint8_t port_id,
> >> >                                  struct rte_event_port_conf *port_conf);
> >> >
> >> > +int
> >> > +rte_event_port_default_conf_get_v20(uint8_t dev_id, uint8_t port_id,
> >> > +                               struct rte_event_port_conf_v20 *port_conf);
> >> > +
> >> > +int
> >> > +rte_event_port_default_conf_get_v21(uint8_t dev_id, uint8_t port_id,
> >> > +                               struct rte_event_port_conf *port_conf);
> >>
> >> Hi Timothy,
> >>
> >> + ABI Maintainers (Ray, Neil)
> >>
> >> # As per my understanding, structures cannot be versioned, only
> >> functions can be versioned.
> >> i.e. we cannot make any change to "struct rte_event_port_conf".
> >>
> >> # We have a similar case with ethdev, and it was deferred to the next
> >> release, v20.11:
> >> http://patches.dpdk.org/patch/69113/
> >>
> >> Regarding the API changes:
> >> # The slow path changes generally look good to me. I will review the
> >> next level in the coming days.
> >> # The following fast path change bothers me. Could you share more
> >> details on the change below?
> >>
> >> diff --git a/app/test-eventdev/test_order_atq.c
> >> b/app/test-eventdev/test_order_atq.c
> >> index 3366cfc..8246b96 100644
> >> --- a/app/test-eventdev/test_order_atq.c
> >> +++ b/app/test-eventdev/test_order_atq.c
> >> @@ -34,6 +34,8 @@
> >>                         continue;
> >>                 }
> >>
> >> +               ev.flow_id = ev.mbuf->udata64;
> >> +
> >> # Since RC1 is near, I am not sure how to accommodate the API changes
> >> now and sort out the ABI issues.
> >> # The other concern is that the eventdev spec gets bloated with
> >> versioning files just for ONE release, as 20.11 will be OK to change
> >> the ABI.
> >> # While we discuss the API change, please send a deprecation notice
> >> for the ABI change for 20.11,
> >> so that there is no ambiguity about this patch for the 20.11 release.
> >>
> >> Hello Jerin,
> >>
> >> Thank you for the review comments.
> >>
> >> With regard to your comments on the fast path flow_id change, the Intel
> >> DLB hardware is not capable of transferring the flow_id as part of the
> >> event itself. We therefore require a mechanism to accomplish this. What
> >> we have done to work around this is to require the application to embed
> >> the flow_id within the data payload. The new flag, #define
> >> RTE_EVENT_DEV_CAP_CARRY_FLOW_ID (1ULL << 9), can be used by applications
> >> to determine whether they need to embed the flow_id, or whether it is
> >> automatically propagated and present in the received event.
> >>
> >> What we should have done is wrap the assignment in a conditional:
> >>
> >> if (!(device_capability_flags & RTE_EVENT_DEV_CAP_CARRY_FLOW_ID))
> >>         ev.flow_id = ev.mbuf->udata64;
>
> >Two problems with this approach:
> >1) we are assuming the mbuf udata64 field is available to the DLB driver
> >2) It won't work with another adapter; eventdev has no dependency on mbuf
> >
>
> This snippet is not intended to suggest that udata64 always be used to
> store the flow ID, but is an example of how an application could do it.
> Some applications won't need to carry the flow ID through; others can
> select an unused field in the event data (e.g. hash.rss or udata64 if
> using mbufs), or (worst-case) re-generate the flow ID in pipeline stages
> that require it.

OK.

> >Question:
> >1) In the case of DLB hardware, on dequeue(), what does the HW return?
> >Is it only the event pointer, without any other metadata like
> >schedule_type, etc.?
> >
>
> The DLB device provides a 16B "queue entry" that consists of:
>
>         * 8B event data
>         * Queue ID
>         * Priority
>         * Scheduling type
>         * 19 bits of carried-through data
>         * Assorted error/debug/reserved bits that are set by the device
>           (not carried-through)
>
> For the carried-through 19b, we use 12b for event_type and sub_event_type.
I can only think of TWO options to help:

1) Since the event pointer is always cache aligned, you could grab the
LSB 6 bits (2^6 = 64B) and 7 bits from the (19b - 12b) carried-through
structure.
2) Have a separate mempool driver using the existing drivers, i.e.
"event pointer" plus or minus some offset can hold any amount of custom
data.

>
> >>
> >> This would minimize/eliminate any performance impact due to the
> >> processor's branch prediction logic.

I think, if we need to change the common fast path, we had better make it
a template that creates the code at compile time, to have absolutely zero
overhead, and use the runtime capability to select the right worker.
See app/test-eventdev/test_order_atq.c: the function worker_wrapper()
_creates_ the worker at compile time based on the runtime capability.

> >> The assignment then becomes in essence a NOOP for all event devices
> >> that are capable of carrying the flow_id as part of the event payload
> >> itself.
> >>
> >> Thanks,
> >> Tim