From: Jerin Jacob
Date: Thu, 8 Feb 2024 14:48:03 +0530
Subject: Re: [PATCH v3 09/11] eventdev: improve comments on scheduling types
To: Bruce Richardson
Cc: dev@dpdk.org, jerinj@marvell.com, mattias.ronnblom@ericsson.com, abdullah.sevincer@intel.com, sachin.saxena@oss.nxp.com, hemant.agrawal@nxp.com, pbhagavatula@marvell.com, pravin.pathak@intel.com
In-Reply-To: <20240202123953.77166-10-bruce.richardson@intel.com>

On Fri, Feb 2, 2024 at 6:11 PM Bruce Richardson wrote:
>
> The description of ordered and atomic scheduling given in the eventdev
> doxygen documentation was not always clear. Try and simplify this so
> that it is clearer for the end-user of the application
>
> Signed-off-by: Bruce Richardson
>
> ---
> V3: extensive rework following feedback. Please re-review!
> ---
>  lib/eventdev/rte_eventdev.h | 73 +++++++++++++++++++++++--------------
>  1 file changed, 45 insertions(+), 28 deletions(-)
>
> diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
> index a7d8c28015..8d72765ae7 100644
> --- a/lib/eventdev/rte_eventdev.h
> +++ b/lib/eventdev/rte_eventdev.h
> @@ -1347,25 +1347,35 @@ struct rte_event_vector {
>  /**< Ordered scheduling
>   *
>   * Events from an ordered flow of an event queue can be scheduled to multiple
> - * ports for concurrent processing while maintaining the original event order.
> + * ports for concurrent processing while maintaining the original event order,
> + * i.e. the order in which they were first enqueued to that queue.
>   * This scheme enables the user to achieve high single flow throughput by
> - * avoiding SW synchronization for ordering between ports which bound to cores.
> - *
> - * The source flow ordering from an event queue is maintained when events are
> - * enqueued to their destination queue within the same ordered flow context.
> - * An event port holds the context until application call
> - * rte_event_dequeue_burst() from the same port, which implicitly releases
> - * the context.
> - * User may allow the scheduler to release the context earlier than that
> - * by invoking rte_event_enqueue_burst() with RTE_EVENT_OP_RELEASE operation.
> - *
> - * Events from the source queue appear in their original order when dequeued
> - * from a destination queue.
> - * Event ordering is based on the received event(s), but also other
> - * (newly allocated or stored) events are ordered when enqueued within the same
> - * ordered context. Events not enqueued (e.g. released or stored) within the
> - * context are considered missing from reordering and are skipped at this time
> - * (but can be ordered again within another context).
> + * avoiding SW synchronization for ordering between ports which are polled
> + * by different cores.
I prefer the following version, to remove "polled" and to be more explicit:

avoiding SW synchronization for ordering between ports which are dequeuing
events using @ref rte_event_dequeue_burst() across different cores.

> + *
> + * After events are dequeued from a set of ports, as those events are re-enqueued
> + * to another queue (with the op field set to @ref RTE_EVENT_OP_FORWARD), the event
> + * device restores the original event order - including events returned from all
> + * ports in the set - before the events arrive on the destination queue.

_arrive_ is a bit vague, since we have an enqueue operation. How about:
"before the events are actually deposited on the destination queue."

> + *
> + * Any events not forwarded i.e. dropped explicitly via RELEASE or implicitly
> + * released by the next dequeue operation on a port, are skipped by the reordering
> + * stage and do not affect the reordering of other returned events.
> + *
> + * Any NEW events sent on a port are not ordered with respect to FORWARD events sent
> + * on the same port, since they have no original event order. They also are not
> + * ordered with respect to NEW events enqueued on other ports.
> + * However, NEW events to the same destination queue from the same port are guaranteed
> + * to be enqueued in the order they were submitted via rte_event_enqueue_burst().
> + *
> + * NOTE:
> + * In restoring event order of forwarded events, the eventdev API guarantees that
> + * all events from the same flow (i.e. same @ref rte_event.flow_id,
> + * @ref rte_event.priority and @ref rte_event.queue_id) will be put in the original
> + * order before being forwarded to the destination queue.
> + * Some eventdevs may implement stricter ordering to achieve this aim,
> + * for example, restoring the order across *all* flows dequeued from the same ORDERED
> + * queue.
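As an aside, to make sure we agree on the reordering semantics being documented
(forwarded events deposited on the destination queue in original dequeue order,
RELEASEd events skipped without blocking the rest), here is a quick toy model in
plain C. This is not DPDK code; `reorder_buf` and the `rob_*` helpers are
invented names for illustration only.

```c
#include <assert.h>

/* Toy reorder stage: each event gets a sequence number at dequeue time.
 * Workers later mark it FORWARDED (re-enqueued) or RELEASED (dropped).
 * Only a contiguous prefix of completed events is retired, in original
 * order; released slots are skipped and hold up nothing behind them. */

#define ROB_SIZE 16

enum slot_state { SLOT_PENDING, SLOT_FORWARDED, SLOT_RELEASED };

struct reorder_buf {
	enum slot_state state[ROB_SIZE];
	int head; /* oldest sequence number not yet retired */
	int tail; /* next sequence number to hand out */
};

static void rob_init(struct reorder_buf *r)
{
	r->head = r->tail = 0;
}

/* Assign a sequence number when an event is scheduled to some port. */
static int rob_dequeue(struct reorder_buf *r)
{
	int seq = r->tail++;
	r->state[seq % ROB_SIZE] = SLOT_PENDING;
	return seq;
}

/* Worker finished with the event: forwarded (1) or released (0). */
static void rob_complete(struct reorder_buf *r, int seq, int forwarded)
{
	r->state[seq % ROB_SIZE] = forwarded ? SLOT_FORWARDED : SLOT_RELEASED;
}

/* Deposit forwarded events on the destination queue in original order.
 * Writes retired sequence numbers into out[]; returns how many. */
static int rob_retire(struct reorder_buf *r, int *out)
{
	int n = 0;

	while (r->head < r->tail &&
	       r->state[r->head % ROB_SIZE] != SLOT_PENDING) {
		if (r->state[r->head % ROB_SIZE] == SLOT_FORWARDED)
			out[n++] = r->head;
		r->head++; /* released events are skipped, not reordered */
	}
	return n;
}
```

Note how an out-of-order completion (a later sequence number finishing first)
retires nothing until the older pending event completes — which is exactly the
"restore original order before deposit" behaviour the new text describes.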
>   *
>   * @see rte_event_queue_setup(), rte_event_dequeue_burst(), RTE_EVENT_OP_RELEASE
>   */
> @@ -1373,18 +1383,25 @@ struct rte_event_vector {
>  #define RTE_SCHED_TYPE_ATOMIC 1
>  /**< Atomic scheduling
>   *
> - * Events from an atomic flow of an event queue can be scheduled only to a
> + * Events from an atomic flow, identified by a combination of @ref rte_event.flow_id,
> + * @ref rte_event.queue_id and @ref rte_event.priority, can be scheduled only to a
>   * single port at a time. The port is guaranteed to have exclusive (atomic)
>   * access to the associated flow context, which enables the user to avoid SW
> - * synchronization. Atomic flows also help to maintain event ordering
> - * since only one port at a time can process events from a flow of an
> - * event queue.
> - *
> - * The atomic queue synchronization context is dedicated to the port until
> - * application call rte_event_dequeue_burst() from the same port,
> - * which implicitly releases the context. User may allow the scheduler to
> - * release the context earlier than that by invoking rte_event_enqueue_burst()
> - * with RTE_EVENT_OP_RELEASE operation.
> + * synchronization. Atomic flows also maintain event ordering
> + * since only one port at a time can process events from each flow of an
> + * event queue, and events within a flow are not reordered within the scheduler.
> + *
> + * An atomic flow is locked to a port when events from that flow are first
> + * scheduled to that port. That lock remains in place until the
> + * application calls rte_event_dequeue_burst() from the same port,
> + * which implicitly releases the lock (if @ref RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL flag is not set).
> + * User may allow the scheduler to release the lock earlier than that by invoking
> + * rte_event_enqueue_burst() with RTE_EVENT_OP_RELEASE operation for each event from that flow.
> + *
> + * NOTE: The lock is only released once the last event from the flow, outstanding on the port,

I think the note could start with something like the following:

When multiple atomic events dequeued via @ref rte_event_dequeue_burst()
from the same event queue have the same flow id, then the lock is ....

> + * is released. So long as there is one event from an atomic flow scheduled to
> + * a port/core (including any events in the port's dequeue queue, not yet read
> + * by the application), that port will hold the synchronization lock for that flow.
>   *
>   * @see rte_event_queue_setup(), rte_event_dequeue_burst(), RTE_EVENT_OP_RELEASE
>   */
> --
> 2.40.1
>
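For the record, the lock semantics in that NOTE (the lock is held while *any*
event of the flow is outstanding on the port, and drops only when the last one
is released) can be modelled in a few lines of plain C. Again, this is an
illustrative sketch and not DPDK code; `atomic_sched` and the helper names are
made up:

```c
#include <assert.h>

/* Toy model of atomic-flow locking: a flow is locked to the first port
 * it is scheduled to; the lock drops only when the last outstanding
 * event of that flow on that port is released (explicitly via RELEASE,
 * or implicitly on the port's next dequeue). */

#define MAX_FLOWS 8

struct atomic_sched {
	int owner[MAX_FLOWS];       /* port holding the lock, -1 if unlocked */
	int outstanding[MAX_FLOWS]; /* events dequeued but not yet released */
};

static void sched_init(struct atomic_sched *s)
{
	for (int f = 0; f < MAX_FLOWS; f++) {
		s->owner[f] = -1;
		s->outstanding[f] = 0;
	}
}

/* Try to schedule one event of `flow` to `port`; returns 1 on success.
 * Mirrors the rule that an atomic flow goes to a single port at a time. */
static int sched_dequeue(struct atomic_sched *s, int flow, int port)
{
	if (s->owner[flow] != -1 && s->owner[flow] != port)
		return 0; /* flow is locked to another port */
	s->owner[flow] = port;
	s->outstanding[flow]++;
	return 1;
}

/* Release one event of `flow`; the lock drops only at the last event. */
static void sched_release(struct atomic_sched *s, int flow)
{
	if (--s->outstanding[flow] == 0)
		s->owner[flow] = -1;
}
```

The key point the toy model captures: releasing one of several outstanding
events of a flow does not unlock it for other ports — only releasing the last
one does, which matches the "last event from the flow, outstanding on the port"
wording in the patch.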