From: Jerin Jacob
Date: Tue, 30 Jun 2020 09:51:08 +0530
To: "McDaniel, Timothy"
Cc: Ray Kinsella, Neil Horman, Jerin Jacob, Mattias Rönnblom, dpdk-dev,
 "Eads, Gage", "Van Haaren, Harry"
Subject: Re: [dpdk-dev] [PATCH 01/27] eventdev: dlb upstream prerequisites

On Tue, Jun 30, 2020 at 1:01 AM McDaniel, Timothy wrote:
>
> -----Original Message-----
> From: Jerin Jacob
> Sent: Saturday, June 27, 2020 2:45 AM
> To: McDaniel, Timothy; Ray Kinsella; Neil Horman
> Cc: Jerin Jacob; Mattias Rönnblom; dpdk-dev; Eads, Gage; Van Haaren, Harry
> Subject: Re: [dpdk-dev] [PATCH 01/27] eventdev: dlb upstream prerequisites
>
> > +
> > +/** Event port configuration structure */
> > +struct rte_event_port_conf_v20 {
> > +	int32_t new_event_threshold;
> > +	/**< A backpressure threshold for new event enqueues on this port.
> > +	 * Use for *closed system* event dev where event capacity is limited,
> > +	 * and cannot exceed the capacity of the event dev.
> > +	 * Configuring ports with different thresholds can make higher priority
> > +	 * traffic less likely to be backpressured.
> > +	 * For example, a port used to inject NIC Rx packets into the event dev
> > +	 * can have a lower threshold so as not to overwhelm the device,
> > +	 * while ports used for worker pools can have a higher threshold.
> > +	 * This value cannot exceed the *nb_events_limit*
> > +	 * which was previously supplied to rte_event_dev_configure().
> > +	 * This should be set to '-1' for *open system*.
> > +	 */
> > +	uint16_t dequeue_depth;
> > +	/**< Configure number of bulk dequeues for this event port.
> > +	 * This value cannot exceed the *nb_event_port_dequeue_depth*
> > +	 * which previously supplied to rte_event_dev_configure().
> > +	 * Ignored when device is not RTE_EVENT_DEV_CAP_BURST_MODE capable.
> > +	 */
> > +	uint16_t enqueue_depth;
> > +	/**< Configure number of bulk enqueues for this event port.
> > +	 * This value cannot exceed the *nb_event_port_enqueue_depth*
> > +	 * which previously supplied to rte_event_dev_configure().
> > +	 * Ignored when device is not RTE_EVENT_DEV_CAP_BURST_MODE capable.
> > +	 */
> > 	uint8_t disable_implicit_release;
> > 	/**< Configure the port not to release outstanding events in
> > 	 * rte_event_dev_dequeue_burst(). If true, all events received through
> > @@ -733,6 +911,14 @@ struct rte_event_port_conf {
> >  rte_event_port_default_conf_get(uint8_t dev_id, uint8_t port_id,
> > 				struct rte_event_port_conf *port_conf);
> >
> > +int
> > +rte_event_port_default_conf_get_v20(uint8_t dev_id, uint8_t port_id,
> > +				struct rte_event_port_conf_v20 *port_conf);
> > +
> > +int
> > +rte_event_port_default_conf_get_v21(uint8_t dev_id, uint8_t port_id,
> > +				struct rte_event_port_conf *port_conf);
>
> Hi Timothy,
>
> + ABI Maintainers (Ray, Neil)
>
> # As per my understanding, structures cannot be versioned, only
> functions can be versioned,
> i.e. we cannot make any change to "struct rte_event_port_conf".
>
> # We had a similar case with ethdev and it was deferred to the next release, v20.11:
> http://patches.dpdk.org/patch/69113/
>
> Regarding the API changes:
> # The slow path changes look good to me in general. I will review the
> next level of detail in the coming days.
> # The following fast path change bothers me. Could you share more
> details on the change below?
>
> diff --git a/app/test-eventdev/test_order_atq.c
> b/app/test-eventdev/test_order_atq.c
> index 3366cfc..8246b96 100644
> --- a/app/test-eventdev/test_order_atq.c
> +++ b/app/test-eventdev/test_order_atq.c
> @@ -34,6 +34,8 @@
> 			continue;
> 		}
>
> +		ev.flow_id = ev.mbuf->udata64;
> +
> # Since RC1 is near, I am not sure how to accommodate the API changes
> now and sort out the ABI issues.
> # The other concern is that the eventdev spec gets bloated with versioning files
> just for ONE release, as 20.11 will be OK to change the ABI.
> # While we discuss the API change, please send a deprecation notice for the
> ABI change for 20.11,
> so that there is no ambiguity about this patch for the 20.11 release.
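(A note for context on the versioning point raised above: DPDK versions
individual function symbols with the helpers in rte_function_versioning.h,
which is why the patch adds a _v20/_v21 pair for
rte_event_port_default_conf_get while the structure itself cannot be
versioned. The following is only a minimal illustrative sketch of that
pattern; the function bodies and the version-node arguments are
placeholders, not the contents of this patch.)

#include <rte_function_versioning.h>

/* Old ABI: callers linked against the previous release keep the old,
 * smaller structure layout (struct rte_event_port_conf_v20).
 */
int __vsym
rte_event_port_default_conf_get_v20(uint8_t dev_id, uint8_t port_id,
				struct rte_event_port_conf_v20 *port_conf)
{
	/* ... fill in the old structure ... */
	return 0;
}
VERSION_SYMBOL(rte_event_port_default_conf_get, _v20, 20.0);

/* New ABI: bound by default for newly built applications. */
int __vsym
rte_event_port_default_conf_get_v21(uint8_t dev_id, uint8_t port_id,
				struct rte_event_port_conf *port_conf)
{
	/* ... fill in the new structure ... */
	return 0;
}
BIND_DEFAULT_SYMBOL(rte_event_port_default_conf_get, _v21, 21);
MAP_STATIC_SYMBOL(int rte_event_port_default_conf_get(uint8_t dev_id,
		uint8_t port_id, struct rte_event_port_conf *port_conf),
		rte_event_port_default_conf_get_v21);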
> Hello Jerin,
>
> Thank you for the review comments.
>
> With regard to your comments on the fast path flow_id change: the Intel DLB
> hardware is not capable of transferring the flow_id as part of the event
> itself, so we require a mechanism to accomplish this. What we have done to
> work around this is to require the application to embed the flow_id within
> the data payload. The new flag, #define RTE_EVENT_DEV_CAP_CARRY_FLOW_ID
> (1ULL << 9), can be used by applications to determine whether they need to
> embed the flow_id, or whether it is automatically propagated and present in
> the received event.
>
> What we should have done is wrap the assignment in a conditional:
>
> if (!(device_capability_flags & RTE_EVENT_DEV_CAP_CARRY_FLOW_ID))
> 	ev.flow_id = ev.mbuf->udata64;

Two problems with this approach:
1) We are assuming the mbuf udata64 field is available for the DLB driver.
2) It won't work with other adapters; eventdev has no dependency on mbuf.

Question:
1) In the case of DLB hardware, what does the HW return on dequeue()? Is it
only the event pointer, without any other metadata such as schedule_type,
etc.?

>
> This would minimize/eliminate any performance impact due to the processor's
> branch prediction logic. The assignment then becomes, in essence, a NOOP for
> all event devices that are capable of carrying the flow_id as part of the
> event payload itself.
>
> Thanks,
> Tim
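(For illustration, the capability-gated handling discussed above would look
roughly like the application-side sketch below. It assumes the
RTE_EVENT_DEV_CAP_CARRY_FLOW_ID flag proposed in this series and the
pre-20.11 mbuf udata64 field; worker_init() and worker_loop() are
hypothetical helper names, not part of the patch.)

#include <rte_eventdev.h>
#include <rte_mbuf.h>

static uint64_t dev_caps;	/* cached rte_event_dev_info.event_dev_cap */

static void
worker_init(uint8_t dev_id)
{
	struct rte_event_dev_info info;

	rte_event_dev_info_get(dev_id, &info);
	dev_caps = info.event_dev_cap;
}

static void
worker_loop(uint8_t dev_id, uint8_t port_id)
{
	struct rte_event ev;

	while (rte_event_dequeue_burst(dev_id, port_id, &ev, 1, 0) != 0) {
		/* Restore the flow_id from the payload only when the device
		 * cannot carry it in the event itself (e.g. the DLB case);
		 * on devices that do carry it, this branch is never taken.
		 */
		if (!(dev_caps & RTE_EVENT_DEV_CAP_CARRY_FLOW_ID))
			ev.flow_id = ev.mbuf->udata64;

		/* ... process and forward/enqueue the event ... */
	}
}

Caching the capability once keeps the per-dequeue cost to a single
well-predicted branch, which is the branch-prediction argument Tim makes
above.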