Subject: Re: [PATCH v3 09/11] eventdev: improve comments on scheduling types
Date: Thu, 8 Feb 2024 11:04:03 +0100
Message-ID: <0a94b2e5-1c66-4f89-8d28-123ce26217f1@lysator.liu.se>
To: Jerin Jacob, Bruce Richardson
Cc: dev@dpdk.org, jerinj@marvell.com, mattias.ronnblom@ericsson.com, abdullah.sevincer@intel.com, sachin.saxena@oss.nxp.com, hemant.agrawal@nxp.com, pbhagavatula@marvell.com, pravin.pathak@intel.com
References: <20240119174346.108905-1-bruce.richardson@intel.com> <20240202123953.77166-1-bruce.richardson@intel.com>
    <20240202123953.77166-10-bruce.richardson@intel.com>
From: Mattias Rönnblom
List-Id: DPDK patches and discussions <dev@dpdk.org>

On 2024-02-08 10:18, Jerin Jacob wrote:
> On Fri, Feb 2, 2024 at 6:11 PM Bruce Richardson wrote:
>>
>> The description of ordered and atomic scheduling given in the eventdev
>> doxygen documentation was not always clear. Try and simplify this so
>> that it is clearer for the end-user of the application
>>
>> Signed-off-by: Bruce Richardson
>>
>> ---
>> V3: extensive rework following feedback. Please re-review!
>> ---
>>  lib/eventdev/rte_eventdev.h | 73 +++++++++++++++++++++++--------------
>>  1 file changed, 45 insertions(+), 28 deletions(-)
>>
>> diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
>> index a7d8c28015..8d72765ae7 100644
>> --- a/lib/eventdev/rte_eventdev.h
>> +++ b/lib/eventdev/rte_eventdev.h
>> @@ -1347,25 +1347,35 @@ struct rte_event_vector {
>>  /**< Ordered scheduling
>>   *
>>   * Events from an ordered flow of an event queue can be scheduled to multiple
>> - * ports for concurrent processing while maintaining the original event order.
>> + * ports for concurrent processing while maintaining the original event order,
>> + * i.e. the order in which they were first enqueued to that queue.
>>   * This scheme enables the user to achieve high single flow throughput by
>> - * avoiding SW synchronization for ordering between ports which bound to cores.
>> - *
>> - * The source flow ordering from an event queue is maintained when events are
>> - * enqueued to their destination queue within the same ordered flow context.
>> - * An event port holds the context until application call
>> - * rte_event_dequeue_burst() from the same port, which implicitly releases
>> - * the context.
>> - * User may allow the scheduler to release the context earlier than that
>> - * by invoking rte_event_enqueue_burst() with RTE_EVENT_OP_RELEASE operation.
>> - *
>> - * Events from the source queue appear in their original order when dequeued
>> - * from a destination queue.
>> - * Event ordering is based on the received event(s), but also other
>> - * (newly allocated or stored) events are ordered when enqueued within the same
>> - * ordered context. Events not enqueued (e.g. released or stored) within the
>> - * context are considered missing from reordering and are skipped at this time
>> - * (but can be ordered again within another context).
>> + * avoiding SW synchronization for ordering between ports which are polled
>> + * by different cores.
>
> I prefer the following version, to remove "polled" and to be more explicit:
>
> avoiding SW synchronization for ordering between ports which are
> dequeuing events using @ref rte_event_dequeue_burst() across different
> cores.
>

"This scheme allows events pertaining to the same, potentially large,
flow to be processed in parallel on multiple cores without incurring any
application-level order restoration logic overhead."

>> + *
>> + * After events are dequeued from a set of ports, as those events are re-enqueued
>> + * to another queue (with the op field set to @ref RTE_EVENT_OP_FORWARD), the event
>> + * device restores the original event order - including events returned from all
>> + * ports in the set - before the events arrive on the destination queue.
>
> _arrive_ is a bit vague, since we have an enqueue operation. How about
> "before the events are actually deposited on the destination queue"?
>
>> + *
>> + * Any events not forwarded i.e. dropped explicitly via RELEASE or implicitly
>> + * released by the next dequeue operation on a port, are skipped by the reordering
>> + * stage and do not affect the reordering of other returned events.
>> + *
>> + * Any NEW events sent on a port are not ordered with respect to FORWARD events sent
>> + * on the same port, since they have no original event order. They also are not
>> + * ordered with respect to NEW events enqueued on other ports.
>> + * However, NEW events to the same destination queue from the same port are guaranteed
>> + * to be enqueued in the order they were submitted via rte_event_enqueue_burst().
>> + *
>> + * NOTE:
>> + * In restoring event order of forwarded events, the eventdev API guarantees that
>> + * all events from the same flow (i.e. same @ref rte_event.flow_id,
>> + * @ref rte_event.priority and @ref rte_event.queue_id) will be put in the original
>> + * order before being forwarded to the destination queue.
>> + * Some eventdevs may implement stricter ordering to achieve this aim,
>> + * for example, restoring the order across *all* flows dequeued from the same ORDERED
>> + * queue.
>>   *
>>   * @see rte_event_queue_setup(), rte_event_dequeue_burst(), RTE_EVENT_OP_RELEASE
>>   */
>> @@ -1373,18 +1383,25 @@ struct rte_event_vector {
>>  #define RTE_SCHED_TYPE_ATOMIC 1
>>  /**< Atomic scheduling
>>   *
>> - * Events from an atomic flow of an event queue can be scheduled only to a
>> + * Events from an atomic flow, identified by a combination of @ref rte_event.flow_id,
>> + * @ref rte_event.queue_id and @ref rte_event.priority, can be scheduled only to a
>>   * single port at a time. The port is guaranteed to have exclusive (atomic)
>>   * access to the associated flow context, which enables the user to avoid SW
>> - * synchronization. Atomic flows also help to maintain event ordering
>> - * since only one port at a time can process events from a flow of an
>> - * event queue.
>> - *
>> - * The atomic queue synchronization context is dedicated to the port until
>> - * application call rte_event_dequeue_burst() from the same port,
>> - * which implicitly releases the context. User may allow the scheduler to
>> - * release the context earlier than that by invoking rte_event_enqueue_burst()
>> - * with RTE_EVENT_OP_RELEASE operation.
>> + * synchronization. Atomic flows also maintain event ordering
>> + * since only one port at a time can process events from each flow of an
>> + * event queue, and events within a flow are not reordered within the scheduler.
>> + *
>> + * An atomic flow is locked to a port when events from that flow are first
>> + * scheduled to that port. That lock remains in place until the
>> + * application calls rte_event_dequeue_burst() from the same port,
>> + * which implicitly releases the lock (if @ref RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL flag is not set).
>> + * User may allow the scheduler to release the lock earlier than that by invoking
>> + * rte_event_enqueue_burst() with RTE_EVENT_OP_RELEASE operation for each event from that flow.
>> + *
>> + * NOTE: The lock is only released once the last event from the flow, outstanding on the port,
>
> I think the Note can start with something like below:
>
> "When multiple atomic events are dequeued via @ref rte_event_dequeue_burst()
> from the same event queue with the same flow id, then the lock is ..."
>

Yes, or maybe by describing the whole lock/unlock state:

"The conceptual per-queue-per-flow lock is in a locked state as long (and
only as long) as one or more events pertaining to that flow were scheduled
to the port in question, but are not yet released."

Maybe it needs to be more meaty, describing what released means. I don't
have the full context of the documentation in my head when I'm writing
this.
So long as there is one event from an atomic flow scheduled to >> + * a port/core (including any events in the port's dequeue queue, not yet read >> + * by the application), that port will hold the synchronization lock for that flow. >> * >> * @see rte_event_queue_setup(), rte_event_dequeue_burst(), RTE_EVENT_OP_RELEASE >> */ >> -- >> 2.40.1 >>