From mboxrd@z Thu Jan 1 00:00:00 1970
From: Jerin Jacob
Date: Fri, 9 Feb 2024 14:44:04 +0530
Subject: Re: [PATCH v3 10/11] eventdev: clarify docs on event object fields and op types
To: Bruce Richardson
Cc: dev@dpdk.org, jerinj@marvell.com, mattias.ronnblom@ericsson.com, abdullah.sevincer@intel.com, sachin.saxena@oss.nxp.com, hemant.agrawal@nxp.com, pbhagavatula@marvell.com, pravin.pathak@intel.com
References: <20240119174346.108905-1-bruce.richardson@intel.com> <20240202123953.77166-1-bruce.richardson@intel.com> <20240202123953.77166-11-bruce.richardson@intel.com>
In-Reply-To: <20240202123953.77166-11-bruce.richardson@intel.com>
List-Id: DPDK patches and discussions

On Fri, Feb 2, 2024 at 6:11 PM Bruce Richardson wrote:
>
> Clarify the meaning of the NEW, FORWARD and RELEASE event types.
> For the fields in "rte_event" struct, enhance the comments on each to
> clarify the field's use, and whether it is preserved between enqueue and
> dequeue, and its role, if any, in scheduling.
>
> Signed-off-by: Bruce Richardson
> ---
> V3: updates following review
> ---
>  lib/eventdev/rte_eventdev.h | 161 +++++++++++++++++++++++++-----------
>  1 file changed, 111 insertions(+), 50 deletions(-)
>
> diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
> index 8d72765ae7..58219e027e 100644
> --- a/lib/eventdev/rte_eventdev.h
> +++ b/lib/eventdev/rte_eventdev.h
> @@ -1463,47 +1463,54 @@ struct rte_event_vector {
>
>  /* Event enqueue operations */
>  #define RTE_EVENT_OP_NEW 0
> -/**< The event producers use this operation to inject a new event to the
> - * event device.
> +/**< The @ref rte_event.op field must be set to this operation type to inject a new event,
> + * i.e. one not previously dequeued, into the event device, to be scheduled
> + * for processing.
>   */
>  #define RTE_EVENT_OP_FORWARD 1
> -/**< The CPU use this operation to forward the event to different event queue or
> - * change to new application specific flow or schedule type to enable
> - * pipelining.
> +/**< The application must set the @ref rte_event.op field to this operation type to return a
> + * previously dequeued event to the event device to be scheduled for further processing.
>   *
> - * This operation must only be enqueued to the same port that the
> + * This event *must* be enqueued to the same port that the
>   * event to be forwarded was dequeued from.
> + *
> + * The event's fields, including (but not limited to) flow_id, scheduling type,
> + * destination queue, and event payload e.g. mbuf pointer, may all be updated as
> + * desired by the application, but the @ref rte_event.impl_opaque field must
> + * be kept to the same value as was present when the event was dequeued.
>   */
>  #define RTE_EVENT_OP_RELEASE 2
>  /**< Release the flow context associated with the schedule type.
>   *
> - * If current flow's scheduler type method is *RTE_SCHED_TYPE_ATOMIC*
> - * then this function hints the scheduler that the user has completed critical
> - * section processing in the current atomic context.
> - * The scheduler is now allowed to schedule events from the same flow from
> - * an event queue to another port. However, the context may be still held
> - * until the next rte_event_dequeue_burst() call, this call allows but does not
> - * force the scheduler to release the context early.
> - *
> - * Early atomic context release may increase parallelism and thus system
> + * If current flow's scheduler type method is @ref RTE_SCHED_TYPE_ATOMIC
> + * then this operation type hints the scheduler that the user has completed critical
> + * section processing for this event in the current atomic context, and that the
> + * scheduler may unlock any atomic locks held for this event.
> + * If this is the last event from an atomic flow, i.e. all flow locks are released,

Similar comment as in the other email.

[Jerin] When there are multiple atomic events dequeued from @ref
rte_event_dequeue_burst() for the same event queue, and they have the same
flow id, then the lock is ....

[Mattias] Yes, or maybe describing the whole lock/unlock state.

"The conceptual per-queue-per-flow lock is in a locked state as long
(and only as long) as one or more events pertaining to that flow were
scheduled to the port in question, but are not yet released."

Maybe it needs to be more meaty, describing what released means. I don't
have the full context of the documentation in my head when I'm writing
this.

> + * the scheduler is now allowed to schedule events from that flow from to another port.
> + * However, the atomic locks may be still held until the next rte_event_dequeue_burst()
> + * call; enqueuing an event with opt type @ref RTE_EVENT_OP_RELEASE allows,

Is the ";" intended?

> + * but does not force, the scheduler to release the atomic locks early.
On the "does not force" wording: instead of "not force", we could use the
term _hint_ for the driver, and reword accordingly.

> + *
> + * Early atomic lock release may increase parallelism and thus system
>   * performance, but the user needs to design carefully the split into critical
>   * vs non-critical sections.
>   *
> - * If current flow's scheduler type method is *RTE_SCHED_TYPE_ORDERED*
> - * then this function hints the scheduler that the user has done all that need
> - * to maintain event order in the current ordered context.
> - * The scheduler is allowed to release the ordered context of this port and
> - * avoid reordering any following enqueues.
> - *
> - * Early ordered context release may increase parallelism and thus system
> - * performance.
> + * If current flow's scheduler type method is @ref RTE_SCHED_TYPE_ORDERED
> + * then this operation type informs the scheduler that the current event has
> + * completed processing and will not be returned to the scheduler, i.e.
> + * it has been dropped, and so the reordering context for that event
> + * should be considered filled.
>   *
> - * If current flow's scheduler type method is *RTE_SCHED_TYPE_PARALLEL*
> - * or no scheduling context is held then this function may be an NOOP,
> - * depending on the implementation.
> + * Events with this operation type must only be enqueued to the same port that the
> + * event to be released was dequeued from. The @ref rte_event.impl_opaque
> + * field in the release event must have the same value as that in the original dequeued event.
>   *
> - * This operation must only be enqueued to the same port that the
> - * event to be released was dequeued from.
> + * If a dequeued event is re-enqueued with operation type of @ref RTE_EVENT_OP_RELEASE,
> + * then any subsequent enqueue of that event - or a copy of it - must be done as event of type
> + * @ref RTE_EVENT_OP_NEW, not @ref RTE_EVENT_OP_FORWARD. This is because any context for
> + * the originally dequeued event, i.e.
> + * atomic locks, or reorder buffer entries, will have
> + * been removed or invalidated by the release operation.
>   */
>
>  /**
> @@ -1517,56 +1524,110 @@ struct rte_event {
>          /** Event attributes for dequeue or enqueue operation */
>          struct {
>                  uint32_t flow_id:20;
> -                /**< Targeted flow identifier for the enqueue and
> -                 * dequeue operation.
> -                 * The value must be in the range of
> -                 * [0, nb_event_queue_flows - 1] which
> -                 * previously supplied to rte_event_dev_configure().
> +                /**< Target flow identifier for the enqueue and dequeue operation.
> +                 *
> +                 * For @ref RTE_SCHED_TYPE_ATOMIC, this field is used to identify a
> +                 * flow for atomicity within a queue & priority level, such that events
> +                 * from each individual flow will only be scheduled to one port at a time.
> +                 *
> +                 * This field is preserved between enqueue and dequeue when
> +                 * a device reports the @ref RTE_EVENT_DEV_CAP_CARRY_FLOW_ID
> +                 * capability. Otherwise the value is implementation dependent
> +                 * on dequeue.
>                   */
>                  uint32_t sub_event_type:8;
>                  /**< Sub-event types based on the event source.
> +                 *
> +                 * This field is preserved between enqueue and dequeue.
> +                 * This field is for application or event adapter use,
> +                 * and is not considered in scheduling decisions.

The cnxk driver does consider this field in scheduling decisions, to
differentiate between producers, i.e. the event adapters. If other drivers
do not, then we can change the language to say it is implementation
defined.

> +                 *
>                   * @see RTE_EVENT_TYPE_CPU
>                   */
>                  uint32_t event_type:4;
> -                /**< Event type to classify the event source.
> -                 * @see RTE_EVENT_TYPE_ETHDEV, (RTE_EVENT_TYPE_*)
> +                /**< Event type to classify the event source. (RTE_EVENT_TYPE_*)
> +                 *
> +                 * This field is preserved between enqueue and dequeue
> +                 * This field is for application or event adapter use,
> +                 * and is not considered in scheduling decisions.
Same comment here as for sub_event_type above: the cnxk driver does
consider this field in scheduling decisions, to differentiate between
producers, i.e. the event adapters. If other drivers do not, then we can
change the language to say it is implementation defined.

>                   */
>                  uint8_t op:2;
> -                /**< The type of event enqueue operation - new/forward/
> -                 * etc.This field is not preserved across an instance
> -                 * and is undefined on dequeue.
> -                 * @see RTE_EVENT_OP_NEW, (RTE_EVENT_OP_*)
> +                /**< The type of event enqueue operation - new/forward/ etc.
> +                 *
> +                 * This field is *not* preserved across an instance
> +                 * and is implementation dependent on dequeue.
> +                 *
> +                 * @see RTE_EVENT_OP_NEW
> +                 * @see RTE_EVENT_OP_FORWARD
> +                 * @see RTE_EVENT_OP_RELEASE
>                   */
>                  uint8_t rsvd:4;
> -                /**< Reserved for future use */
> +                /**< Reserved for future use.
> +                 *
> +                 * Should be set to zero on enqueue.

I am worried that some applications will start explicitly setting this to
zero on every enqueue. Instead, can we say the application should not touch
the field? Since every eventdev operation starts with dequeue(), the driver
can fill in the correct value.

> +                 */
>                  uint8_t sched_type:2;
>                  /**< Scheduler synchronization type (RTE_SCHED_TYPE_*)
>                   * associated with flow id on a given event queue
>                   * for the enqueue and dequeue operation.
> +                 *
> +                 * This field is used to determine the scheduling type
> +                 * for events sent to queues where @ref RTE_EVENT_QUEUE_CFG_ALL_TYPES
> +                 * is configured.
> +                 * For queues where only a single scheduling type is available,
> +                 * this field must be set to match the configured scheduling type.
> +                 *
> +                 * This field is preserved between enqueue and dequeue.
> +                 *
> +                 * @see RTE_SCHED_TYPE_ORDERED
> +                 * @see RTE_SCHED_TYPE_ATOMIC
> +                 * @see RTE_SCHED_TYPE_PARALLEL
>                   */
>                  uint8_t queue_id;
>                  /**< Targeted event queue identifier for the enqueue or
>                   * dequeue operation.
> -                 * The value must be in the range of
> -                 * [0, nb_event_queues - 1] which previously supplied to
> -                 * rte_event_dev_configure().
> +                 * The value must be less than @ref rte_event_dev_config.nb_event_queues
> +                 * which was previously supplied to rte_event_dev_configure().

For some reason, similar range text got removed for flow_id. Please add the
same there too.

> +                 *
> +                 * This field is preserved between enqueue on dequeue.
>                   */
>                  uint8_t priority;
>                  /**< Event priority relative to other events in the
>                   * event queue. The requested priority should in the
> -                 * range of [RTE_EVENT_DEV_PRIORITY_HIGHEST,
> -                 * RTE_EVENT_DEV_PRIORITY_LOWEST].
> +                 * range of [@ref RTE_EVENT_DEV_PRIORITY_HIGHEST,
> +                 * @ref RTE_EVENT_DEV_PRIORITY_LOWEST].
> +                 *
>                   * The implementation shall normalize the requested
>                   * priority to supported priority value.
> -                 * Valid when the device has
> -                 * RTE_EVENT_DEV_CAP_EVENT_QOS capability.
> +                 * [For devices with where the supported priority range is a power-of-2, the
> +                 * normalization will be done via bit-shifting, so only the highest
> +                 * log2(num_priorities) bits will be used by the event device]
> +                 *
> +                 * Valid when the device has @ref RTE_EVENT_DEV_CAP_EVENT_QOS capability
> +                 * and this field is preserved between enqueue and dequeue,
> +                 * though with possible loss of precision due to normalization and
> +                 * subsequent de-normalization. (For example, if a device only supports 8
> +                 * priority levels, only the high 3 bits of this field will be
> +                 * used by that device, and hence only the value of those 3 bits are
> +                 * guaranteed to be preserved between enqueue and dequeue.)
> +                 *
> +                 * Ignored when device does not support @ref RTE_EVENT_DEV_CAP_EVENT_QOS
> +                 * capability, and it is implementation dependent if this field is preserved
> +                 * between enqueue and dequeue.
>                   */
>                  uint8_t impl_opaque;
> -                /**< Implementation specific opaque value.
> -                 * An implementation may use this field to hold
> +                /**< Opaque field for event device use.
> +                 *
> +                 * An event driver implementation may use this field to hold an
>                   * implementation specific value to share between
>                   * dequeue and enqueue operation.
> -                 * The application should not modify this field.
> +                 *
> +                 * The application most not modify this field.

most -> must

> +                 * Its value is implementation dependent on dequeue,
> +                 * and must be returned unmodified on enqueue when
> +                 * op type is @ref RTE_EVENT_OP_FORWARD or @ref RTE_EVENT_OP_RELEASE.
> +                 * This field is ignored on events with op type
> +                 * @ref RTE_EVENT_OP_NEW.
>                   */
>          };
>  };
> --
> 2.40.1
>