DPDK patches and discussions
* [PATCH v1 0/7] improve eventdev API specification/documentation
@ 2024-01-18 13:45 Bruce Richardson
  2024-01-18 13:45 ` [PATCH v1 1/7] eventdev: improve doxygen introduction text Bruce Richardson
                   ` (7 more replies)
  0 siblings, 8 replies; 123+ messages in thread
From: Bruce Richardson @ 2024-01-18 13:45 UTC (permalink / raw)
  To: dev; +Cc: Bruce Richardson

This patchset makes small rewording improvements to the eventdev doxygen
documentation to try to ensure that it is as clear as possible,
describes the implementation as accurately as possible, and is
consistent within itself. Most changes are minor rewordings, along
with many changes converting plain-text references into doxygen
links/cross-references.

For now I am approx 1/4 of the way through reviewing the rte_eventdev.h
file, but am sending v1 now to get reviews started.

Bruce Richardson (7):
  eventdev: improve doxygen introduction text
  eventdev: move text on driver internals to proper section
  eventdev: update documentation on device capability flags
  eventdev: cleanup doxygen comments on info structure
  eventdev: improve function documentation for query fns
  eventdev: improve doxygen comments on configure struct
  eventdev: fix documentation for counting single-link ports

 lib/eventdev/rte_eventdev.c |   2 +-
 lib/eventdev/rte_eventdev.h | 391 +++++++++++++++++++++++-------------
 2 files changed, 247 insertions(+), 146 deletions(-)

--
2.40.1


^ permalink raw reply	[flat|nested] 123+ messages in thread

* [PATCH v1 1/7] eventdev: improve doxygen introduction text
  2024-01-18 13:45 [PATCH v1 0/7] improve eventdev API specification/documentation Bruce Richardson
@ 2024-01-18 13:45 ` Bruce Richardson
  2024-01-18 13:45 ` [PATCH v1 2/7] eventdev: move text on driver internals to proper section Bruce Richardson
                   ` (6 subsequent siblings)
  7 siblings, 0 replies; 123+ messages in thread
From: Bruce Richardson @ 2024-01-18 13:45 UTC (permalink / raw)
  To: dev; +Cc: Bruce Richardson, Jerin Jacob

Make some textual improvements to the introduction to eventdev and event
devices in the eventdev header file. This text appears in the doxygen
output for the header file, and introduces the key concepts, for
example: events, event devices, queues, ports and scheduling.

This patch makes the following improvements:
* small textual fixups, e.g. correcting use of singular/plural
* rewrites of some sentences to improve clarity
* using doxygen markdown to split the whole large block up into
  sections, thereby making it easier to read.

No large-scale changes are made, and blocks are not reordered.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
 lib/eventdev/rte_eventdev.h | 112 +++++++++++++++++++++---------------
 1 file changed, 66 insertions(+), 46 deletions(-)

diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
index ec9b02455d..a36c89c7a4 100644
--- a/lib/eventdev/rte_eventdev.h
+++ b/lib/eventdev/rte_eventdev.h
@@ -12,12 +12,13 @@
  * @file
  *
  * RTE Event Device API
+ * ====================
  *
  * In a polling model, lcores poll ethdev ports and associated rx queues
- * directly to look for packet. In an event driven model, by contrast, lcores
- * call the scheduler that selects packets for them based on programmer
- * specified criteria. Eventdev library adds support for event driven
- * programming model, which offer applications automatic multicore scaling,
+ * directly to look for packets. In an event driven model, in contrast, lcores
+ * call a scheduler that selects packets for them based on programmer
+ * specified criteria. The eventdev library adds support for the event driven
+ * programming model, which offers applications automatic multicore scaling,
  * dynamic load balancing, pipelining, packet ingress order maintenance and
  * synchronization services to simplify application packet processing.
  *
@@ -25,12 +26,15 @@
  *
  * - The application-oriented Event API that includes functions to setup
  *   an event device (configure it, setup its queues, ports and start it), to
- *   establish the link between queues to port and to receive events, and so on.
+ *   establish the links between queues and ports to receive events, and so on.
  *
  * - The driver-oriented Event API that exports a function allowing
- *   an event poll Mode Driver (PMD) to simultaneously register itself as
+ *   an event poll Mode Driver (PMD) to register itself as
  *   an event device driver.
  *
+ * Application-oriented Event API
+ * ------------------------------
+ *
  * Event device components:
  *
  *                     +-----------------+
@@ -75,27 +79,33 @@
  *            |                                                           |
  *            +-----------------------------------------------------------+
  *
- * Event device: A hardware or software-based event scheduler.
+ * **Event device**: A hardware or software-based event scheduler.
  *
- * Event: A unit of scheduling that encapsulates a packet or other datatype
- * like SW generated event from the CPU, Crypto work completion notification,
- * Timer expiry event notification etc as well as metadata.
- * The metadata includes flow ID, scheduling type, event priority, event_type,
+ * **Event**: A unit of scheduling that encapsulates a packet or other datatype,
+ * such as a SW-generated event from the CPU, a crypto work completion notification,
+ * a timer expiry event notification, etc., as well as metadata about the packet or data.
+ * The metadata includes a flow ID (if any), scheduling type, event priority, event_type,
  * sub_event_type etc.
  *
- * Event queue: A queue containing events that are scheduled by the event dev.
+ * **Event queue**: A queue containing events that are scheduled by the event device.
  * An event queue contains events of different flows associated with scheduling
  * types, such as atomic, ordered, or parallel.
+ * Each event given to an eventdev must have a valid event queue id field in the metadata,
+ * to specify on which event queue in the device the event must be placed,
+ * for later scheduling to a core.
  *
- * Event port: An application's interface into the event dev for enqueue and
+ * **Event port**: An application's interface into the event dev for enqueue and
  * dequeue operations. Each event port can be linked with one or more
  * event queues for dequeue operations.
- *
- * By default, all the functions of the Event Device API exported by a PMD
- * are lock-free functions which assume to not be invoked in parallel on
- * different logical cores to work on the same target object. For instance,
- * the dequeue function of a PMD cannot be invoked in parallel on two logical
- * cores to operates on same  event port. Of course, this function
+ * Each port should be associated with a single core (enqueue and dequeue are not thread-safe).
+ * To schedule events to a core, the event device will schedule them to the event port(s)
+ * being polled by that core.
+ *
+ * *NOTE*: By default, all the functions of the Event Device API exported by a PMD
+ * are lock-free functions, which must not be invoked on the same object in parallel on
+ * different logical cores.
+ * For instance, the dequeue function of a PMD cannot be invoked in parallel on two logical
+ * cores to operate on the same event port. Of course, this function
  * can be invoked in parallel by different logical cores on different ports.
  * It is the responsibility of the upper level application to enforce this rule.
  *
@@ -107,22 +117,19 @@
  *
  * Event devices are dynamically registered during the PCI/SoC device probing
  * phase performed at EAL initialization time.
- * When an Event device is being probed, a *rte_event_dev* structure and
- * a new device identifier are allocated for that device. Then, the
- * event_dev_init() function supplied by the Event driver matching the probed
- * device is invoked to properly initialize the device.
+ * When an Event device is being probed, an *rte_event_dev* structure is allocated
+ * for it and the event_dev_init() function supplied by the Event driver
+ * is invoked to properly initialize the device.
  *
- * The role of the device init function consists of resetting the hardware or
- * software event driver implementations.
+ * The role of the device init function is to reset the device hardware or
+ * to initialize the software event driver implementation.
  *
- * If the device init operation is successful, the correspondence between
- * the device identifier assigned to the new device and its associated
- * *rte_event_dev* structure is effectively registered.
- * Otherwise, both the *rte_event_dev* structure and the device identifier are
- * freed.
+ * If the device init operation is successful, the device is assigned a device
+ * id (dev_id) for application use.
+ * Otherwise, the *rte_event_dev* structure is freed.
  *
  * The functions exported by the application Event API to setup a device
- * designated by its device identifier must be invoked in the following order:
+ * must be invoked in the following order:
  *     - rte_event_dev_configure()
  *     - rte_event_queue_setup()
  *     - rte_event_port_setup()
@@ -130,10 +137,15 @@
  *     - rte_event_dev_start()
  *
  * Then, the application can invoke, in any order, the functions
- * exported by the Event API to schedule events, dequeue events, enqueue events,
- * change event queue(s) to event port [un]link establishment and so on.
- *
- * Application may use rte_event_[queue/port]_default_conf_get() to get the
+ * exported by the Event API to dequeue events, enqueue events,
+ * and link and unlink event queue(s) to event ports.
+ *
+ * Before configuring a device, an application should call rte_event_dev_info_get()
+ * to determine the capabilities of the event device, and any queue or port
+ * limits of that device. The parameters set in the various device configuration
+ * structures may need to be adjusted based on the max values provided in the
+ * device information structure returned from the info_get API.
+ * An application may use rte_event_[queue/port]_default_conf_get() to get the
  * default configuration to set up an event queue or event port by
  * overriding few default values.
  *
@@ -145,7 +157,11 @@
  * when the device is stopped.
  *
  * Finally, an application can close an Event device by invoking the
- * rte_event_dev_close() function.
+ * rte_event_dev_close() function. Once closed, a device cannot be
+ * reconfigured or restarted.
+ *
+ * Driver-Oriented Event API
+ * -------------------------
  *
  * Each function of the application Event API invokes a specific function
  * of the PMD that controls the target device designated by its device
@@ -164,10 +180,13 @@
  * supplied in the *event_dev_ops* structure of the *rte_event_dev* structure.
  *
  * For performance reasons, the address of the fast-path functions of the
- * Event driver is not contained in the *event_dev_ops* structure.
+ * Event driver are not contained in the *event_dev_ops* structure.
  * Instead, they are directly stored at the beginning of the *rte_event_dev*
  * structure to avoid an extra indirect memory access during their invocation.
  *
+ * Event Enqueue, Dequeue and Scheduling
+ * -------------------------------------
+ *
  * RTE event device drivers do not use interrupts for enqueue or dequeue
  * operation. Instead, Event drivers export Poll-Mode enqueue and dequeue
  * functions to applications.
@@ -179,21 +198,22 @@
  * crypto work completion notification etc
  *
  * The *dequeue* operation gets one or more events from the event ports.
- * The application process the events and send to downstream event queue through
- * rte_event_enqueue_burst() if it is an intermediate stage of event processing,
- * on the final stage, the application may use Tx adapter API for maintaining
- * the ingress order and then send the packet/event on the wire.
+ * The application processes the events and sends them to a downstream event queue through
+ * rte_event_enqueue_burst(), if it is an intermediate stage of event processing.
+ * On the final stage of processing, the application may use the Tx adapter API for maintaining
+ * the event ingress order while sending the packet/event on the wire via NIC Tx.
  *
  * The point at which events are scheduled to ports depends on the device.
  * For hardware devices, scheduling occurs asynchronously without any software
  * intervention. Software schedulers can either be distributed
  * (each worker thread schedules events to its own port) or centralized
  * (a dedicated thread schedules to all ports). Distributed software schedulers
- * perform the scheduling in rte_event_dequeue_burst(), whereas centralized
- * scheduler logic need a dedicated service core for scheduling.
- * The RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED capability flag is not set
- * indicates the device is centralized and thus needs a dedicated scheduling
- * thread that repeatedly calls software specific scheduling function.
+ * perform the scheduling inside the enqueue or dequeue functions, whereas centralized
+ * software schedulers need a dedicated service core for scheduling.
+ * The absence of the RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED capability flag
+ * indicates that the device is centralized and thus needs a dedicated scheduling
+ * thread, generally a service core,
+ * that repeatedly calls the software specific scheduling function.
  *
  * An event driven worker thread has following typical workflow on fastpath:
  * \code{.c}
-- 
2.40.1


^ permalink raw reply	[flat|nested] 123+ messages in thread
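The configuration call order documented in this patch (configure, then queue and port setup, then linking, then start), followed by the dequeue/process/enqueue worker flow, can be sketched roughly as below. This is an illustrative fragment against the public eventdev API, not code from the patch: the queue/port counts are arbitrary, error handling is abbreviated, and a real application would obtain dev_id and per-core port assignments from its EAL setup.

```c
#include <rte_eventdev.h>
#include <rte_common.h>

/* Illustrative eventdev bring-up following the documented call order:
 * info_get -> configure -> queue_setup -> port_setup -> link -> start. */
static int
setup_eventdev(uint8_t dev_id)
{
	struct rte_event_dev_info info;
	struct rte_event_dev_config conf = {0};
	uint8_t q, p;

	rte_event_dev_info_get(dev_id, &info);

	/* Clamp requested resources to the maxima the device reports. */
	conf.nb_event_queues = RTE_MIN(2, info.max_event_queues);
	conf.nb_event_ports = RTE_MIN(2, info.max_event_ports);
	conf.nb_events_limit = info.max_num_events; /* -1 for open systems */
	conf.nb_event_queue_flows = info.max_event_queue_flows;
	conf.nb_event_port_dequeue_depth = info.max_event_port_dequeue_depth;
	conf.nb_event_port_enqueue_depth = info.max_event_port_enqueue_depth;
	conf.dequeue_timeout_ns = info.min_dequeue_timeout_ns;

	if (rte_event_dev_configure(dev_id, &conf) < 0)
		return -1;

	for (q = 0; q < conf.nb_event_queues; q++)
		if (rte_event_queue_setup(dev_id, q, NULL) < 0) /* NULL: defaults */
			return -1;
	for (p = 0; p < conf.nb_event_ports; p++)
		if (rte_event_port_setup(dev_id, p, NULL) < 0)
			return -1;

	/* Link all queues to each port (NULL queue list means "all queues"). */
	for (p = 0; p < conf.nb_event_ports; p++)
		if (rte_event_port_link(dev_id, p, NULL, NULL, 0) < 0)
			return -1;

	return rte_event_dev_start(dev_id);
}

/* Typical worker: dequeue on this core's port, process, forward downstream. */
static void
worker_loop(uint8_t dev_id, uint8_t port_id)
{
	struct rte_event ev[32];
	uint16_t i, n;

	for (;;) {
		n = rte_event_dequeue_burst(dev_id, port_id, ev, 32, 0);
		for (i = 0; i < n; i++) {
			/* ... application processing of ev[i] ... */
			ev[i].queue_id++;               /* next pipeline stage */
			ev[i].op = RTE_EVENT_OP_FORWARD;
		}
		rte_event_enqueue_burst(dev_id, port_id, ev, n);
	}
}
```

Note that each port is polled by exactly one core, matching the thread-safety rule restated in this patch.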

* [PATCH v1 2/7] eventdev: move text on driver internals to proper section
  2024-01-18 13:45 [PATCH v1 0/7] improve eventdev API specification/documentation Bruce Richardson
  2024-01-18 13:45 ` [PATCH v1 1/7] eventdev: improve doxygen introduction text Bruce Richardson
@ 2024-01-18 13:45 ` Bruce Richardson
  2024-01-18 13:45 ` [PATCH v1 3/7] eventdev: update documentation on device capability flags Bruce Richardson
                   ` (5 subsequent siblings)
  7 siblings, 0 replies; 123+ messages in thread
From: Bruce Richardson @ 2024-01-18 13:45 UTC (permalink / raw)
  To: dev; +Cc: Bruce Richardson, Jerin Jacob

Inside the doxygen introduction text, some internal details of how
eventdev works were mixed in with application-relevant details. Move
these details on probing etc. to the driver-relevant section.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
 lib/eventdev/rte_eventdev.h | 32 ++++++++++++++++----------------
 1 file changed, 16 insertions(+), 16 deletions(-)

diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
index a36c89c7a4..949e957f1b 100644
--- a/lib/eventdev/rte_eventdev.h
+++ b/lib/eventdev/rte_eventdev.h
@@ -112,22 +112,6 @@
  * In all functions of the Event API, the Event device is
  * designated by an integer >= 0 named the device identifier *dev_id*
  *
- * At the Event driver level, Event devices are represented by a generic
- * data structure of type *rte_event_dev*.
- *
- * Event devices are dynamically registered during the PCI/SoC device probing
- * phase performed at EAL initialization time.
- * When an Event device is being probed, an *rte_event_dev* structure is allocated
- * for it and the event_dev_init() function supplied by the Event driver
- * is invoked to properly initialize the device.
- *
- * The role of the device init function is to reset the device hardware or
- * to initialize the software event driver implementation.
- *
- * If the device init operation is successful, the device is assigned a device
- * id (dev_id) for application use.
- * Otherwise, the *rte_event_dev* structure is freed.
- *
  * The functions exported by the application Event API to setup a device
  * must be invoked in the following order:
  *     - rte_event_dev_configure()
@@ -163,6 +147,22 @@
  * Driver-Oriented Event API
  * -------------------------
  *
+ * At the Event driver level, Event devices are represented by a generic
+ * data structure of type *rte_event_dev*.
+ *
+ * Event devices are dynamically registered during the PCI/SoC device probing
+ * phase performed at EAL initialization time.
+ * When an Event device is being probed, an *rte_event_dev* structure is allocated
+ * for it and the event_dev_init() function supplied by the Event driver
+ * is invoked to properly initialize the device.
+ *
+ * The role of the device init function is to reset the device hardware or
+ * to initialize the software event driver implementation.
+ *
+ * If the device init operation is successful, the device is assigned a device
+ * id (dev_id) for application use.
+ * Otherwise, the *rte_event_dev* structure is freed.
+ *
  * Each function of the application Event API invokes a specific function
  * of the PMD that controls the target device designated by its device
  * identifier.
-- 
2.40.1


^ permalink raw reply	[flat|nested] 123+ messages in thread

* [PATCH v1 3/7] eventdev: update documentation on device capability flags
  2024-01-18 13:45 [PATCH v1 0/7] improve eventdev API specification/documentation Bruce Richardson
  2024-01-18 13:45 ` [PATCH v1 1/7] eventdev: improve doxygen introduction text Bruce Richardson
  2024-01-18 13:45 ` [PATCH v1 2/7] eventdev: move text on driver internals to proper section Bruce Richardson
@ 2024-01-18 13:45 ` Bruce Richardson
  2024-01-18 13:45 ` [PATCH v1 4/7] eventdev: cleanup doxygen comments on info structure Bruce Richardson
                   ` (4 subsequent siblings)
  7 siblings, 0 replies; 123+ messages in thread
From: Bruce Richardson @ 2024-01-18 13:45 UTC (permalink / raw)
  To: dev; +Cc: Bruce Richardson, Jerin Jacob

Update the device capability docs, to:

* include more cross-references
* split longer text into paragraphs, in most cases with each flag having
  a single-line summary at the start of the doc block
* general comment rewording and clarification as appropriate

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
 lib/eventdev/rte_eventdev.h | 130 ++++++++++++++++++++++++++----------
 1 file changed, 93 insertions(+), 37 deletions(-)

diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
index 949e957f1b..57a2791946 100644
--- a/lib/eventdev/rte_eventdev.h
+++ b/lib/eventdev/rte_eventdev.h
@@ -243,143 +243,199 @@ struct rte_event;
 /* Event device capability bitmap flags */
 #define RTE_EVENT_DEV_CAP_QUEUE_QOS           (1ULL << 0)
 /**< Event scheduling prioritization is based on the priority and weight
- * associated with each event queue. Events from a queue with highest priority
- * is scheduled first. If the queues are of same priority, weight of the queues
+ * associated with each event queue.
+ *
+ * Events from a queue with highest priority
+ * are scheduled first. If the queues are of same priority, weight of the queues
  * are considered to select a queue in a weighted round robin fashion.
  * Subsequent dequeue calls from an event port could see events from the same
  * event queue, if the queue is configured with an affinity count. Affinity
  * count is the number of subsequent dequeue calls, in which an event port
  * should use the same event queue if the queue is non-empty
  *
+ * NOTE: A device may use both queue prioritization and event prioritization
+ * (@ref RTE_EVENT_DEV_CAP_EVENT_QOS capability) when making packet scheduling decisions.
+ *
  *  @see rte_event_queue_setup(), rte_event_queue_attr_set()
  */
 #define RTE_EVENT_DEV_CAP_EVENT_QOS           (1ULL << 1)
 /**< Event scheduling prioritization is based on the priority associated with
- *  each event. Priority of each event is supplied in *rte_event* structure
+ *  each event.
+ *
+ *  Priority of each event is supplied in *rte_event* structure
  *  on each enqueue operation.
+ *  If this capability is not set, the priority field of the event structure
+ *  is ignored for each event.
  *
+ * NOTE: A device may use both queue prioritization (@ref RTE_EVENT_DEV_CAP_QUEUE_QOS capability)
+ * and event prioritization when making packet scheduling decisions.
+ *
  *  @see rte_event_enqueue_burst()
  */
 #define RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED   (1ULL << 2)
 /**< Event device operates in distributed scheduling mode.
+ *
  * In distributed scheduling mode, event scheduling happens in HW or
- * rte_event_dequeue_burst() or the combination of these two.
+ * rte_event_dequeue_burst() / rte_event_enqueue_burst() or the combination of these two.
  * If the flag is not set then eventdev is centralized and thus needs a
  * dedicated service core that acts as a scheduling thread .
  *
- * @see rte_event_dequeue_burst()
+ * @see rte_event_dev_service_id_get
  */
 #define RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES     (1ULL << 3)
 /**< Event device is capable of enqueuing events of any type to any queue.
+ *
  * If this capability is not set, the queue only supports events of the
- *  *RTE_SCHED_TYPE_* type that it was created with.
+ * *RTE_SCHED_TYPE_* type that it was created with.
+ * Any events of other types scheduled to the queue will be handled in an
+ * implementation-dependent manner. They may be dropped by the
+ * event device, or enqueued with the scheduling type adjusted to the
+ * correct/supported value.
  *
- * @see RTE_SCHED_TYPE_* values
+ * @see rte_event_enqueue_burst
+ * @see RTE_SCHED_TYPE_ATOMIC RTE_SCHED_TYPE_ORDERED RTE_SCHED_TYPE_PARALLEL
  */
 #define RTE_EVENT_DEV_CAP_BURST_MODE          (1ULL << 4)
 /**< Event device is capable of operating in burst mode for enqueue(forward,
- * release) and dequeue operation. If this capability is not set, application
- * still uses the rte_event_dequeue_burst() and rte_event_enqueue_burst() but
- * PMD accepts only one event at a time.
+ * release) and dequeue operation.
+ *
+ * If this capability is not set, the application
+ * can still use rte_event_dequeue_burst() and rte_event_enqueue_burst(), but the
+ * PMD accepts or returns only one event at a time.
  *
  * @see rte_event_dequeue_burst() rte_event_enqueue_burst()
  */
 #define RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE    (1ULL << 5)
 /**< Event device ports support disabling the implicit release feature, in
  * which the port will release all unreleased events in its dequeue operation.
+ *
  * If this capability is set and the port is configured with implicit release
  * disabled, the application is responsible for explicitly releasing events
- * using either the RTE_EVENT_OP_FORWARD or the RTE_EVENT_OP_RELEASE event
+ * using either the @ref RTE_EVENT_OP_FORWARD or the @ref RTE_EVENT_OP_RELEASE event
  * enqueue operations.
  *
  * @see rte_event_dequeue_burst() rte_event_enqueue_burst()
  */
 
 #define RTE_EVENT_DEV_CAP_NONSEQ_MODE         (1ULL << 6)
-/**< Event device is capable of operating in none sequential mode. The path
- * of the event is not necessary to be sequential. Application can change
- * the path of event at runtime. If the flag is not set, then event each event
- * will follow a path from queue 0 to queue 1 to queue 2 etc. If the flag is
- * set, events may be sent to queues in any order. If the flag is not set, the
- * eventdev will return an error when the application enqueues an event for a
+/**< Event device is capable of operating in non-sequential mode.
+ *
+ * The path of an event need not be sequential. The application can change
+ * the path of an event at runtime, and events may be sent to queues in any order.
+ *
+ * If the flag is not set, then each event will follow a path from queue 0
+ * to queue 1 to queue 2 etc.
+ * The eventdev will return an error when the application enqueues an event for a
  * qid which is not the next in the sequence.
  */
 
 #define RTE_EVENT_DEV_CAP_RUNTIME_PORT_LINK   (1ULL << 7)
-/**< Event device is capable of configuring the queue/port link at runtime.
+/**< Event device is capable of reconfiguring the queue/port link at runtime.
+ *
  * If the flag is not set, the eventdev queue/port link is only can be
- * configured during  initialization.
+ * configured during initialization, or by stopping the device and
+ * then later restarting it after reconfiguration.
+ *
+ * @see rte_event_port_link rte_event_port_unlink
  */
 
 #define RTE_EVENT_DEV_CAP_MULTIPLE_QUEUE_PORT (1ULL << 8)
-/**< Event device is capable of setting up the link between multiple queue
- * with single port. If the flag is not set, the eventdev can only map a
- * single queue to each port or map a single queue to many port.
+/**< Event device is capable of setting up links between multiple queues and a single port.
+ *
+ * If the flag is not set, each port may only be linked to a single queue, and
+ * so can only receive events from that queue.
+ * However, each queue may be linked to multiple ports.
+ *
+ * @see rte_event_port_link
  */
 
 #define RTE_EVENT_DEV_CAP_CARRY_FLOW_ID (1ULL << 9)
-/**< Event device preserves the flow ID from the enqueued
- * event to the dequeued event if the flag is set. Otherwise,
- * the content of this field is implementation dependent.
+/**< Event device preserves the flow ID from the enqueued event to the dequeued event.
+ *
+ * If this flag is not set,
+ * the content of the flow-id field in dequeued events is implementation dependent.
+ *
+ * @see rte_event_dequeue_burst
  */
 
 #define RTE_EVENT_DEV_CAP_MAINTENANCE_FREE (1ULL << 10)
 /**< Event device *does not* require calls to rte_event_maintain().
+ *
  * An event device that does not set this flag requires calls to
  * rte_event_maintain() during periods when neither
  * rte_event_dequeue_burst() nor rte_event_enqueue_burst() are called
  * on a port. This will allow the event device to perform internal
  * processing, such as flushing buffered events, return credits to a
  * global pool, or process signaling related to load balancing.
+ *
+ * @see rte_event_maintain
  */
 
 #define RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR (1ULL << 11)
 /**< Event device is capable of changing the queue attributes at runtime i.e
- * after rte_event_queue_setup() or rte_event_start() call sequence. If this
- * flag is not set, eventdev queue attributes can only be configured during
+ * after rte_event_queue_setup() or rte_event_dev_start() call sequence.
+ *
+ * If this flag is not set, eventdev queue attributes can only be configured during
  * rte_event_queue_setup().
+ *
+ * @see rte_event_queue_setup
  */
 
 #define RTE_EVENT_DEV_CAP_PROFILE_LINK (1ULL << 12)
-/**< Event device is capable of supporting multiple link profiles per event port
- * i.e., the value of `rte_event_dev_info::max_profiles_per_port` is greater
- * than one.
+/**< Event device is capable of supporting multiple link profiles per event port.
+ *
+ *
+ * When set, the value of `rte_event_dev_info::max_profiles_per_port` is greater
+ * than one, and multiple profiles may be configured and then switched at runtime.
+ * If not set, only a single profile may be configured, which may itself be
+ * runtime adjustable (if @ref RTE_EVENT_DEV_CAP_RUNTIME_PORT_LINK is set).
+ *
+ * @see rte_event_port_profile_links_set rte_event_port_profile_links_get
+ * @see rte_event_port_profile_switch
+ * @see RTE_EVENT_DEV_CAP_RUNTIME_PORT_LINK
  */
 
 /* Event device priority levels */
 #define RTE_EVENT_DEV_PRIORITY_HIGHEST   0
-/**< Highest priority expressed across eventdev subsystem
+/**< Highest priority expressed across eventdev subsystem.
+ *
  * @see rte_event_queue_setup(), rte_event_enqueue_burst()
  * @see rte_event_port_link()
  */
 #define RTE_EVENT_DEV_PRIORITY_NORMAL    128
-/**< Normal priority expressed across eventdev subsystem
+/**< Normal priority expressed across eventdev subsystem.
+ *
  * @see rte_event_queue_setup(), rte_event_enqueue_burst()
  * @see rte_event_port_link()
  */
 #define RTE_EVENT_DEV_PRIORITY_LOWEST    255
-/**< Lowest priority expressed across eventdev subsystem
+/**< Lowest priority expressed across eventdev subsystem.
+ *
  * @see rte_event_queue_setup(), rte_event_enqueue_burst()
  * @see rte_event_port_link()
  */
 
 /* Event queue scheduling weights */
 #define RTE_EVENT_QUEUE_WEIGHT_HIGHEST 255
-/**< Highest weight of an event queue
+/**< Highest weight of an event queue.
+ *
  * @see rte_event_queue_attr_get(), rte_event_queue_attr_set()
  */
 #define RTE_EVENT_QUEUE_WEIGHT_LOWEST 0
-/**< Lowest weight of an event queue
+/**< Lowest weight of an event queue.
+ *
  * @see rte_event_queue_attr_get(), rte_event_queue_attr_set()
  */
 
 /* Event queue scheduling affinity */
 #define RTE_EVENT_QUEUE_AFFINITY_HIGHEST 255
-/**< Highest scheduling affinity of an event queue
+/**< Highest scheduling affinity of an event queue.
+ *
  * @see rte_event_queue_attr_get(), rte_event_queue_attr_set()
  */
 #define RTE_EVENT_QUEUE_AFFINITY_LOWEST 0
-/**< Lowest scheduling affinity of an event queue
+/**< Lowest scheduling affinity of an event queue.
+ *
  * @see rte_event_queue_attr_get(), rte_event_queue_attr_set()
  */
 
-- 
2.40.1


^ permalink raw reply	[flat|nested] 123+ messages in thread

* [PATCH v1 4/7] eventdev: cleanup doxygen comments on info structure
  2024-01-18 13:45 [PATCH v1 0/7] improve eventdev API specification/documentation Bruce Richardson
                   ` (2 preceding siblings ...)
  2024-01-18 13:45 ` [PATCH v1 3/7] eventdev: update documentation on device capability flags Bruce Richardson
@ 2024-01-18 13:45 ` Bruce Richardson
  2024-01-18 13:49   ` Bruce Richardson
  2024-01-18 13:45 ` [PATCH v1 5/7] eventdev: improve function documentation for query fns Bruce Richardson
                   ` (3 subsequent siblings)
  7 siblings, 1 reply; 123+ messages in thread
From: Bruce Richardson @ 2024-01-18 13:45 UTC (permalink / raw)
  To: dev; +Cc: Bruce Richardson, Jerin Jacob

Some small rewording changes to the doxygen comments on struct
rte_event_dev_info.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
 lib/eventdev/rte_eventdev.c |  2 +-
 lib/eventdev/rte_eventdev.h | 46 ++++++++++++++++++++-----------------
 2 files changed, 26 insertions(+), 22 deletions(-)

diff --git a/lib/eventdev/rte_eventdev.c b/lib/eventdev/rte_eventdev.c
index 94628a66ef..9bf7c7be89 100644
--- a/lib/eventdev/rte_eventdev.c
+++ b/lib/eventdev/rte_eventdev.c
@@ -83,7 +83,7 @@ rte_event_dev_socket_id(uint8_t dev_id)
 
 	rte_eventdev_trace_socket_id(dev_id, dev, dev->data->socket_id);
 
-	return dev->data->socket_id;
+	return dev->data->socket_id < 0 ? 0 : dev->data->socket_id;
 }
 
 int
diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
index 57a2791946..872f241df2 100644
--- a/lib/eventdev/rte_eventdev.h
+++ b/lib/eventdev/rte_eventdev.h
@@ -482,54 +482,58 @@ struct rte_event_dev_info {
 	const char *driver_name;	/**< Event driver name */
 	struct rte_device *dev;	/**< Device information */
 	uint32_t min_dequeue_timeout_ns;
-	/**< Minimum supported global dequeue timeout(ns) by this device */
+	/**< Minimum global dequeue timeout(ns) supported by this device */
 	uint32_t max_dequeue_timeout_ns;
-	/**< Maximum supported global dequeue timeout(ns) by this device */
+	/**< Maximum global dequeue timeout(ns) supported by this device */
 	uint32_t dequeue_timeout_ns;
 	/**< Configured global dequeue timeout(ns) for this device */
 	uint8_t max_event_queues;
-	/**< Maximum event_queues supported by this device */
+	/**< Maximum event queues supported by this device */
 	uint32_t max_event_queue_flows;
-	/**< Maximum supported flows in an event queue by this device*/
+	/**< Maximum number of flows within an event queue supported by this device. */
 	uint8_t max_event_queue_priority_levels;
 	/**< Maximum number of event queue priority levels by this device.
-	 * Valid when the device has RTE_EVENT_DEV_CAP_QUEUE_QOS capability
+	 * Valid when the device has RTE_EVENT_DEV_CAP_QUEUE_QOS capability.
+	 * The priority levels are evenly distributed between
+	 * @ref RTE_EVENT_DEV_PRIORITY_HIGHEST and @ref RTE_EVENT_DEV_PRIORITY_LOWEST.
 	 */
 	uint8_t max_event_priority_levels;
 	/**< Maximum number of event priority levels by this device.
 	 * Valid when the device has RTE_EVENT_DEV_CAP_EVENT_QOS capability
+	 * The priority levels are evenly distributed between
+	 * @ref RTE_EVENT_DEV_PRIORITY_HIGHEST and @ref RTE_EVENT_DEV_PRIORITY_LOWEST.
 	 */
 	uint8_t max_event_ports;
 	/**< Maximum number of event ports supported by this device */
 	uint8_t max_event_port_dequeue_depth;
-	/**< Maximum number of events can be dequeued at a time from an
-	 * event port by this device.
-	 * A device that does not support bulk dequeue will set this as 1.
+	/**< Maximum number of events that can be dequeued at a time from an event port
+	 * on this device.
+	 * A device that does not support bulk dequeue will set this to 1.
 	 */
 	uint32_t max_event_port_enqueue_depth;
-	/**< Maximum number of events can be enqueued at a time from an
-	 * event port by this device.
-	 * A device that does not support bulk enqueue will set this as 1.
+	/**< Maximum number of events that can be enqueued at a time to an event port
+	 * on this device.
+	 * A device that does not support bulk enqueue will set this to 1.
 	 */
 	uint8_t max_event_port_links;
-	/**< Maximum number of queues that can be linked to a single event
-	 * port by this device.
+	/**< Maximum number of queues that can be linked to a single event port on this device.
 	 */
 	int32_t max_num_events;
 	/**< A *closed system* event dev has a limit on the number of events it
-	 * can manage at a time. An *open system* event dev does not have a
-	 * limit and will specify this as -1.
+	 * can manage at a time.
+	 * Once the number of events tracked by an eventdev exceeds this number,
+	 * any enqueues of NEW events will fail.
+	 * An *open system* event dev does not have a limit and will specify this as -1.
 	 */
 	uint32_t event_dev_cap;
-	/**< Event device capabilities(RTE_EVENT_DEV_CAP_)*/
+	/**< Event device capabilities flags (RTE_EVENT_DEV_CAP_*) */
 	uint8_t max_single_link_event_port_queue_pairs;
-	/**< Maximum number of event ports and queues that are optimized for
-	 * (and only capable of) single-link configurations supported by this
-	 * device. These ports and queues are not accounted for in
-	 * max_event_ports or max_event_queues.
+	/**< Maximum number of event ports and queues, supported by this device,
+	 * that are optimized for (and only capable of) single-link configurations.
+	 * These ports and queues are not accounted for in max_event_ports or max_event_queues.
 	 */
 	uint8_t max_profiles_per_port;
-	/**< Maximum number of event queue profiles per event port.
+	/**< Maximum number of event queue link profiles per event port.
 	 * A device that doesn't support multiple profiles will set this as 1.
 	 */
 };
-- 
2.40.1


^ permalink raw reply	[flat|nested] 123+ messages in thread
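[Editorial note: the "evenly distributed" priority wording added in this patch can be illustrated with a small sketch. This is illustrative C, not DPDK API; the function name and constants are stand-ins that mirror RTE_EVENT_DEV_PRIORITY_HIGHEST (0) and RTE_EVENT_DEV_PRIORITY_LOWEST (255).]

```c
#include <stdint.h>
#include <assert.h>

#define EV_PRIORITY_HIGHEST 0   /* mirrors RTE_EVENT_DEV_PRIORITY_HIGHEST */
#define EV_PRIORITY_LOWEST  255 /* mirrors RTE_EVENT_DEV_PRIORITY_LOWEST */

/* Map an application priority index [0, max_levels-1] (0 = most urgent)
 * onto the normalized 8-bit range, spreading the device's supported
 * levels evenly between HIGHEST and LOWEST as the doc text describes. */
static inline uint8_t
app_level_to_priority(uint8_t level, uint8_t max_levels)
{
	if (max_levels <= 1)
		return EV_PRIORITY_HIGHEST;
	return (uint8_t)((level * EV_PRIORITY_LOWEST) / (max_levels - 1));
}
```

With four device priority levels, for example, the levels land at 0, 85, 170 and 255.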

* [PATCH v1 5/7] eventdev: improve function documentation for query fns
  2024-01-18 13:45 [PATCH v1 0/7] improve eventdev API specification/documentation Bruce Richardson
                   ` (3 preceding siblings ...)
  2024-01-18 13:45 ` [PATCH v1 4/7] eventdev: cleanup doxygen comments on info structure Bruce Richardson
@ 2024-01-18 13:45 ` Bruce Richardson
  2024-01-18 13:45 ` [PATCH v1 6/7] eventdev: improve doxygen comments on configure struct Bruce Richardson
                   ` (2 subsequent siblings)
  7 siblings, 0 replies; 123+ messages in thread
From: Bruce Richardson @ 2024-01-18 13:45 UTC (permalink / raw)
  To: dev; +Cc: Bruce Richardson, Jerin Jacob

General improvements to the doxygen docs for eventdev functions for
querying basic information:
* number of devices
* id for a particular device
* socket id of device
* capability information for a device

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
 lib/eventdev/rte_eventdev.h | 22 +++++++++++++---------
 1 file changed, 13 insertions(+), 9 deletions(-)

diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
index 872f241df2..c57c93a22e 100644
--- a/lib/eventdev/rte_eventdev.h
+++ b/lib/eventdev/rte_eventdev.h
@@ -440,8 +440,7 @@ struct rte_event;
  */
 
 /**
- * Get the total number of event devices that have been successfully
- * initialised.
+ * Get the total number of event devices available for application use.
  *
  * @return
  *   The total number of usable event devices.
@@ -456,8 +455,10 @@ rte_event_dev_count(void);
  *   Event device name to select the event device identifier.
  *
  * @return
- *   Returns event device identifier on success.
- *   - <0: Failure to find named event device.
+ *   Event device identifier (dev_id >= 0) on success.
+ *   Negative error code on failure:
+ *   - -EINVAL - input name parameter is invalid
+ *   - -ENODEV - no event device found with that name
  */
 int
 rte_event_dev_get_dev_id(const char *name);
@@ -470,7 +471,8 @@ rte_event_dev_get_dev_id(const char *name);
  * @return
  *   The NUMA socket id to which the device is connected or
  *   a default of zero if the socket could not be determined.
- *   -(-EINVAL)  dev_id value is out of range.
+ *   -EINVAL on error, where the given dev_id value does not
+ *   correspond to any event device.
  */
 int
 rte_event_dev_socket_id(uint8_t dev_id);
@@ -539,18 +541,20 @@ struct rte_event_dev_info {
 };
 
 /**
- * Retrieve the contextual information of an event device.
+ * Retrieve details of an event device's capabilities and configuration limits.
  *
  * @param dev_id
  *   The identifier of the device.
  *
  * @param[out] dev_info
  *   A pointer to a structure of type *rte_event_dev_info* to be filled with the
- *   contextual information of the device.
+ *   information about the device's capabilities.
  *
  * @return
- *   - 0: Success, driver updates the contextual information of the event device
- *   - <0: Error code returned by the driver info get function.
+ *   - 0: Success, information about the event device is present in dev_info.
+ *   - <0: Failure, error code returned by the function.
+ *     - -EINVAL - invalid input parameters, e.g. incorrect device id
+ *     - -ENOTSUP - device does not support returning capabilities information
  */
 int
 rte_event_dev_info_get(uint8_t dev_id, struct rte_event_dev_info *dev_info);
-- 
2.40.1


^ permalink raw reply	[flat|nested] 123+ messages in thread
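[Editorial note: the return-value contract documented in this patch for rte_event_dev_get_dev_id() can be sketched as below. This is illustrative C with a stand-in device table, not the DPDK implementation; the function name is hypothetical.]

```c
#include <errno.h>
#include <string.h>
#include <assert.h>

static const char *dev_names[] = { "event_sw0", "event_dlb2_0" };
#define NUM_DEVS 2

/* Mirrors the documented contract: dev_id >= 0 on success,
 * -EINVAL for an invalid name parameter, -ENODEV when no
 * event device with that name is found. */
static int
mock_event_dev_get_dev_id(const char *name)
{
	if (name == NULL)
		return -EINVAL;
	for (int i = 0; i < NUM_DEVS; i++)
		if (strcmp(dev_names[i], name) == 0)
			return i;
	return -ENODEV;
}
```

A caller should therefore test for a negative return before using the value as a dev_id.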

* [PATCH v1 6/7] eventdev: improve doxygen comments on configure struct
  2024-01-18 13:45 [PATCH v1 0/7] improve eventdev API specification/documentation Bruce Richardson
                   ` (4 preceding siblings ...)
  2024-01-18 13:45 ` [PATCH v1 5/7] eventdev: improve function documentation for query fns Bruce Richardson
@ 2024-01-18 13:45 ` Bruce Richardson
  2024-01-18 13:45 ` [PATCH v1 7/7] eventdev: fix documentation for counting single-link ports Bruce Richardson
  2024-01-19 17:43 ` [PATCH v2 00/11] improve eventdev API specification/documentation Bruce Richardson
  7 siblings, 0 replies; 123+ messages in thread
From: Bruce Richardson @ 2024-01-18 13:45 UTC (permalink / raw)
  To: dev; +Cc: Bruce Richardson, Jerin Jacob

General rewording and cleanup on the rte_event_dev_config structure.
Improved the wording of some sentences and created linked
cross-references out of the existing references to the dev_info
structure.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
 lib/eventdev/rte_eventdev.h | 47 +++++++++++++++++++------------------
 1 file changed, 24 insertions(+), 23 deletions(-)

diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
index c57c93a22e..4139ccb982 100644
--- a/lib/eventdev/rte_eventdev.h
+++ b/lib/eventdev/rte_eventdev.h
@@ -599,9 +599,9 @@ rte_event_dev_attr_get(uint8_t dev_id, uint32_t attr_id,
 struct rte_event_dev_config {
 	uint32_t dequeue_timeout_ns;
 	/**< rte_event_dequeue_burst() timeout on this device.
-	 * This value should be in the range of *min_dequeue_timeout_ns* and
-	 * *max_dequeue_timeout_ns* which previously provided in
-	 * rte_event_dev_info_get()
+	 * This value should be in the range of @ref rte_event_dev_info.min_dequeue_timeout_ns and
+	 * @ref rte_event_dev_info.max_dequeue_timeout_ns returned by
+	 * @ref rte_event_dev_info_get()
 	 * The value 0 is allowed, in which case, default dequeue timeout used.
 	 * @see RTE_EVENT_DEV_CFG_PER_DEQUEUE_TIMEOUT
 	 */
@@ -609,40 +609,41 @@ struct rte_event_dev_config {
 	/**< In a *closed system* this field is the limit on maximum number of
 	 * events that can be inflight in the eventdev at a given time. The
 	 * limit is required to ensure that the finite space in a closed system
-	 * is not overwhelmed. The value cannot exceed the *max_num_events*
-	 * as provided by rte_event_dev_info_get().
+	 * is not overwhelmed.
+	 * Once the limit has been reached, any enqueues of NEW events to the
+	 * system will fail.
+	 * The value cannot exceed @ref rte_event_dev_info.max_num_events
+	 * returned by rte_event_dev_info_get().
 	 * This value should be set to -1 for *open system*.
 	 */
 	uint8_t nb_event_queues;
 	/**< Number of event queues to configure on this device.
-	 * This value cannot exceed the *max_event_queues* which previously
-	 * provided in rte_event_dev_info_get()
+	 * This value cannot exceed @ref rte_event_dev_info.max_event_queues
+	 * returned by rte_event_dev_info_get()
 	 */
 	uint8_t nb_event_ports;
 	/**< Number of event ports to configure on this device.
-	 * This value cannot exceed the *max_event_ports* which previously
-	 * provided in rte_event_dev_info_get()
+	 * This value cannot exceed @ref rte_event_dev_info.max_event_ports
+	 * returned by rte_event_dev_info_get()
 	 */
 	uint32_t nb_event_queue_flows;
-	/**< Number of flows for any event queue on this device.
-	 * This value cannot exceed the *max_event_queue_flows* which previously
-	 * provided in rte_event_dev_info_get()
+	/**< Max number of flows needed for a single event queue on this device.
+	 * This value cannot exceed @ref rte_event_dev_info.max_event_queue_flows
+	 * returned by rte_event_dev_info_get()
 	 */
 	uint32_t nb_event_port_dequeue_depth;
-	/**< Maximum number of events can be dequeued at a time from an
-	 * event port by this device.
-	 * This value cannot exceed the *max_event_port_dequeue_depth*
-	 * which previously provided in rte_event_dev_info_get().
+	/**< Max number of events that can be dequeued at a time from an event port on this device.
+	 * This value cannot exceed @ref rte_event_dev_info.max_event_port_dequeue_depth
+	 * returned by rte_event_dev_info_get().
 	 * Ignored when device is not RTE_EVENT_DEV_CAP_BURST_MODE capable.
-	 * @see rte_event_port_setup()
+	 * @see rte_event_port_setup() rte_event_dequeue_burst()
 	 */
 	uint32_t nb_event_port_enqueue_depth;
-	/**< Maximum number of events can be enqueued at a time from an
-	 * event port by this device.
-	 * This value cannot exceed the *max_event_port_enqueue_depth*
-	 * which previously provided in rte_event_dev_info_get().
+	/**< Maximum number of events that can be enqueued at a time to an event port on this device.
+	 * This value cannot exceed @ref rte_event_dev_info.max_event_port_enqueue_depth
+	 * returned by rte_event_dev_info_get().
 	 * Ignored when device is not RTE_EVENT_DEV_CAP_BURST_MODE capable.
-	 * @see rte_event_port_setup()
+	 * @see rte_event_port_setup() rte_event_enqueue_burst()
 	 */
 	uint32_t event_dev_cfg;
 	/**< Event device config flags(RTE_EVENT_DEV_CFG_)*/
@@ -652,7 +653,7 @@ struct rte_event_dev_config {
 	 * queues; this value cannot exceed *nb_event_ports* or
 	 * *nb_event_queues*. If the device has ports and queues that are
 	 * optimized for single-link usage, this field is a hint for how many
-	 * to allocate; otherwise, regular event ports and queues can be used.
+	 * to allocate; otherwise, regular event ports and queues will be used.
 	 */
 };
 
-- 
2.40.1


^ permalink raw reply	[flat|nested] 123+ messages in thread
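[Editorial note: the pattern this patch documents, that each rte_event_dev_config field is bounded by the corresponding rte_event_dev_info limit returned by rte_event_dev_info_get(), can be sketched as a validity check. Illustrative C only; the struct and function names are stand-ins holding a subset of the real fields.]

```c
#include <stdint.h>
#include <stdbool.h>
#include <assert.h>

struct mock_info {          /* subset of rte_event_dev_info */
	uint8_t max_event_queues;
	uint8_t max_event_ports;
	int32_t max_num_events; /* -1 for an open system */
};

struct mock_config {        /* subset of rte_event_dev_config */
	uint8_t nb_event_queues;
	uint8_t nb_event_ports;
	int32_t nb_events_limit;
};

/* True if every requested value fits within the device's limits. */
static bool
config_within_limits(const struct mock_info *info, const struct mock_config *cfg)
{
	if (cfg->nb_event_queues > info->max_event_queues)
		return false;
	if (cfg->nb_event_ports > info->max_event_ports)
		return false;
	/* an open system (max_num_events == -1) imposes no event limit */
	if (info->max_num_events >= 0 && cfg->nb_events_limit > info->max_num_events)
		return false;
	return true;
}
```

An application would run such a check (or clamp its requests) between rte_event_dev_info_get() and rte_event_dev_configure().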

* [PATCH v1 7/7] eventdev: fix documentation for counting single-link ports
  2024-01-18 13:45 [PATCH v1 0/7] improve eventdev API specification/documentation Bruce Richardson
                   ` (5 preceding siblings ...)
  2024-01-18 13:45 ` [PATCH v1 6/7] eventdev: improve doxygen comments on configure struct Bruce Richardson
@ 2024-01-18 13:45 ` Bruce Richardson
  2024-01-19 17:43 ` [PATCH v2 00/11] improve eventdev API specification/documentation Bruce Richardson
  7 siblings, 0 replies; 123+ messages in thread
From: Bruce Richardson @ 2024-01-18 13:45 UTC (permalink / raw)
  To: dev
  Cc: Bruce Richardson, stable, Jerin Jacob, Harry van Haaren,
	Pavan Nikhilesh, Timothy McDaniel

The documentation of how single-link port-queue pairs were counted in
the rte_event_dev_config structure did not match the actual
implementation and, if following the documentation, certain valid
port/queue configurations would have been impossible to configure. Fix
this by changing the documentation to match the implementation - however
confusing that implementation ends up being.

Bugzilla ID: 1368
Fixes: 75d113136f38 ("eventdev: express DLB/DLB2 PMD constraints")
Cc: stable@dpdk.org

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
 lib/eventdev/rte_eventdev.h | 28 ++++++++++++++++++++++------
 1 file changed, 22 insertions(+), 6 deletions(-)

diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
index 4139ccb982..3b8f5b8101 100644
--- a/lib/eventdev/rte_eventdev.h
+++ b/lib/eventdev/rte_eventdev.h
@@ -490,7 +490,10 @@ struct rte_event_dev_info {
 	uint32_t dequeue_timeout_ns;
 	/**< Configured global dequeue timeout(ns) for this device */
 	uint8_t max_event_queues;
-	/**< Maximum event queues supported by this device */
+	/**< Maximum event queues supported by this device.
+	 * This excludes any queue-port pairs covered by the
+	 * *max_single_link_event_port_queue_pairs* value in this structure.
+	 */
 	uint32_t max_event_queue_flows;
 	/**< Maximum number of flows within an event queue supported by this device*/
 	uint8_t max_event_queue_priority_levels;
@@ -506,7 +509,10 @@ struct rte_event_dev_info {
 	 * @ref RTE_EVENT_DEV_PRIORITY_HIGHEST and @ref RTE_EVENT_DEV_PRIORITY_LOWEST.
 	 */
 	uint8_t max_event_ports;
-	/**< Maximum number of event ports supported by this device */
+	/**< Maximum number of event ports supported by this device.
+	 * This excludes any queue-port pairs covered by the
+	 * *max_single_link_event_port_queue_pairs* value in this structure.
+	 */
 	uint8_t max_event_port_dequeue_depth;
 	/**< Maximum number of events that can be dequeued at a time from an event port
 	 * on this device.
@@ -618,13 +624,23 @@ struct rte_event_dev_config {
 	 */
 	uint8_t nb_event_queues;
 	/**< Number of event queues to configure on this device.
-	 * This value cannot exceed @ref rte_event_dev_info.max_event_queues
-	 * returned by rte_event_dev_info_get()
+	 * This value *includes* any single-link queue-port pairs to be used.
+	 * This value cannot exceed @ref rte_event_dev_info.max_event_queues +
+	 * @ref rte_event_dev_info.max_single_link_event_port_queue_pairs
+	 * returned by rte_event_dev_info_get().
+	 * The number of non-single-link queues, i.e. this value less
+	 * *nb_single_link_event_port_queues* in this struct, cannot exceed
+	 * @ref rte_event_dev_info.max_event_queues
 	 */
 	uint8_t nb_event_ports;
 	/**< Number of event ports to configure on this device.
-	 * This value cannot exceed @ref rte_event_dev_info.max_event_ports
-	 * returned by rte_event_dev_info_get()
+	 * This value *includes* any single-link queue-port pairs to be used.
+	 * This value cannot exceed @ref rte_event_dev_info.max_event_ports +
+	 * @ref rte_event_dev_info.max_single_link_event_port_queue_pairs
+	 * returned by rte_event_dev_info_get().
+	 * The number of non-single-link ports, i.e. this value less
+	 * *nb_single_link_event_port_queues* in this struct, cannot exceed
+	 * @ref rte_event_dev_info.max_event_ports
 	 */
 	uint32_t nb_event_queue_flows;
 	/**< Max number of flows needed for a single event queue on this device.
--
2.40.1


^ permalink raw reply	[flat|nested] 123+ messages in thread
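[Editorial note: the corrected counting rule in this patch can be restated as two inequalities and checked mechanically. The sketch below is illustrative C, not DPDK API; it encodes only what the patch text states: nb_event_ports *includes* single-link ports, may not exceed max_event_ports plus max_single_link_event_port_queue_pairs, and the non-single-link remainder may not exceed max_event_ports. Queues follow the same rule symmetrically.]

```c
#include <stdint.h>
#include <stdbool.h>
#include <assert.h>

/* Check the documented port-count constraints for a requested
 * configuration against a device's reported limits. */
static bool
ports_count_valid(uint8_t max_event_ports, uint8_t max_single_link_pairs,
		uint8_t nb_event_ports, uint8_t nb_single_link)
{
	if (nb_single_link > nb_event_ports)
		return false;
	/* total, including single-link ports, within the combined limit */
	if (nb_event_ports > max_event_ports + max_single_link_pairs)
		return false;
	/* non-single-link remainder within max_event_ports */
	if (nb_event_ports - nb_single_link > max_event_ports)
		return false;
	return true;
}
```

For a device reporting max_event_ports = 4 and 2 single-link pairs, requesting 6 ports of which 2 are single-link is valid, while 6 ports with only 1 single-link is not.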

* Re: [PATCH v1 4/7] eventdev: cleanup doxygen comments on info structure
  2024-01-18 13:45 ` [PATCH v1 4/7] eventdev: cleanup doxygen comments on info structure Bruce Richardson
@ 2024-01-18 13:49   ` Bruce Richardson
  0 siblings, 0 replies; 123+ messages in thread
From: Bruce Richardson @ 2024-01-18 13:49 UTC (permalink / raw)
  To: dev; +Cc: Jerin Jacob

On Thu, Jan 18, 2024 at 01:45:54PM +0000, Bruce Richardson wrote:
> Some small rewording changes to the doxygen comments on struct
> rte_event_dev_info.
> 
> Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
> ---
>  lib/eventdev/rte_eventdev.c |  2 +-
>  lib/eventdev/rte_eventdev.h | 46 ++++++++++++++++++++-----------------
>  2 files changed, 26 insertions(+), 22 deletions(-)
> 
> diff --git a/lib/eventdev/rte_eventdev.c b/lib/eventdev/rte_eventdev.c
> index 94628a66ef..9bf7c7be89 100644
> --- a/lib/eventdev/rte_eventdev.c
> +++ b/lib/eventdev/rte_eventdev.c
> @@ -83,7 +83,7 @@ rte_event_dev_socket_id(uint8_t dev_id)
>  
>  	rte_eventdev_trace_socket_id(dev_id, dev, dev->data->socket_id);
>  
> -	return dev->data->socket_id;
> +	return dev->data->socket_id < 0 ? 0 : dev->data->socket_id;
>  }

Apologies, this is a stray change that I thought I had rolled back, but
somehow made it into the commit! Please ignore when reviewing.
>  
<snip>

^ permalink raw reply	[flat|nested] 123+ messages in thread

* [PATCH v2 00/11] improve eventdev API specification/documentation
  2024-01-18 13:45 [PATCH v1 0/7] improve eventdev API specification/documentation Bruce Richardson
                   ` (6 preceding siblings ...)
  2024-01-18 13:45 ` [PATCH v1 7/7] eventdev: fix documentation for counting single-link ports Bruce Richardson
@ 2024-01-19 17:43 ` Bruce Richardson
  2024-01-19 17:43   ` [PATCH v2 01/11] eventdev: improve doxygen introduction text Bruce Richardson
                     ` (12 more replies)
  7 siblings, 13 replies; 123+ messages in thread
From: Bruce Richardson @ 2024-01-19 17:43 UTC (permalink / raw)
  To: dev
  Cc: jerinj, mattias.ronnblom, abdullah.sevincer, sachin.saxena,
	hemant.agrawal, pbhagavatula, pravin.pathak, Bruce Richardson

This patchset makes rewording improvements to the eventdev doxygen
documentation to try and ensure that it is as clear as possible,
describes the implementation as accurately as possible, and is
consistent within itself.

Most changes are just minor rewordings, along with plenty of changes to
change references into doxygen links/cross-references.

However, the final two patches are attempting to clarify what, to me
anyway, is unclear wording in some key definitions. As such, it probably
requires careful review by eventdev PMD maintainers, as different
implementers may have different understandings of what the text meant,
and in some cases in "clarifying" I may have changed the meaning in a
way that breaks things.

V2:
* additional cleanup and changes
* remove "escaped" accidental change to .c file

Bruce Richardson (11):
  eventdev: improve doxygen introduction text
  eventdev: move text on driver internals to proper section
  eventdev: update documentation on device capability flags
  eventdev: cleanup doxygen comments on info structure
  eventdev: improve function documentation for query fns
  eventdev: improve doxygen comments on configure struct
  eventdev: fix documentation for counting single-link ports
  eventdev: improve doxygen comments on config fns
  eventdev: improve doxygen comments for control APIs
  eventdev: RFC clarify comments on scheduling types
  eventdev: RFC, clarify docs on event object fields

 lib/eventdev/rte_eventdev.h | 773 +++++++++++++++++++++++-------------
 1 file changed, 501 insertions(+), 272 deletions(-)

--
2.40.1


^ permalink raw reply	[flat|nested] 123+ messages in thread

* [PATCH v2 01/11] eventdev: improve doxygen introduction text
  2024-01-19 17:43 ` [PATCH v2 00/11] improve eventdev API specification/documentation Bruce Richardson
@ 2024-01-19 17:43   ` Bruce Richardson
  2024-01-23  8:57     ` Mattias Rönnblom
  2024-01-19 17:43   ` [PATCH v2 02/11] eventdev: move text on driver internals to proper section Bruce Richardson
                     ` (11 subsequent siblings)
  12 siblings, 1 reply; 123+ messages in thread
From: Bruce Richardson @ 2024-01-19 17:43 UTC (permalink / raw)
  To: dev
  Cc: jerinj, mattias.ronnblom, abdullah.sevincer, sachin.saxena,
	hemant.agrawal, pbhagavatula, pravin.pathak, Bruce Richardson

Make some textual improvements to the introduction to eventdev and event
devices in the eventdev header file. This text appears in the doxygen
output for the header file, and introduces the key concepts, for
example: events, event devices, queues, ports and scheduling.

This patch makes the following improvements:
* small textual fixups, e.g. correcting use of singular/plural
* rewrites of some sentences to improve clarity
* using doxygen markdown to split the whole large block up into
  sections, thereby making it easier to read.

No large-scale changes are made, and blocks are not reordered.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
 lib/eventdev/rte_eventdev.h | 112 +++++++++++++++++++++---------------
 1 file changed, 66 insertions(+), 46 deletions(-)

diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
index ec9b02455d..a36c89c7a4 100644
--- a/lib/eventdev/rte_eventdev.h
+++ b/lib/eventdev/rte_eventdev.h
@@ -12,12 +12,13 @@
  * @file
  *
  * RTE Event Device API
+ * ====================
  *
  * In a polling model, lcores poll ethdev ports and associated rx queues
- * directly to look for packet. In an event driven model, by contrast, lcores
- * call the scheduler that selects packets for them based on programmer
- * specified criteria. Eventdev library adds support for event driven
- * programming model, which offer applications automatic multicore scaling,
+ * directly to look for packets. In an event driven model, in contrast, lcores
+ * call a scheduler that selects packets for them based on programmer
+ * specified criteria. The eventdev library adds support for the event driven
+ * programming model, which offers applications automatic multicore scaling,
  * dynamic load balancing, pipelining, packet ingress order maintenance and
  * synchronization services to simplify application packet processing.
  *
@@ -25,12 +26,15 @@
  *
  * - The application-oriented Event API that includes functions to setup
  *   an event device (configure it, setup its queues, ports and start it), to
- *   establish the link between queues to port and to receive events, and so on.
+ *   establish the links between queues and ports to receive events, and so on.
  *
  * - The driver-oriented Event API that exports a function allowing
- *   an event poll Mode Driver (PMD) to simultaneously register itself as
+ *   an event poll Mode Driver (PMD) to register itself as
  *   an event device driver.
  *
+ * Application-oriented Event API
+ * ------------------------------
+ *
  * Event device components:
  *
  *                     +-----------------+
@@ -75,27 +79,33 @@
  *            |                                                           |
  *            +-----------------------------------------------------------+
  *
- * Event device: A hardware or software-based event scheduler.
+ * **Event device**: A hardware or software-based event scheduler.
  *
- * Event: A unit of scheduling that encapsulates a packet or other datatype
- * like SW generated event from the CPU, Crypto work completion notification,
- * Timer expiry event notification etc as well as metadata.
- * The metadata includes flow ID, scheduling type, event priority, event_type,
+ * **Event**: A unit of scheduling that encapsulates a packet or other datatype,
+ * such as: SW generated event from the CPU, crypto work completion notification,
+ * timer expiry event notification etc., as well as metadata about the packet or data.
+ * The metadata includes a flow ID (if any), scheduling type, event priority, event_type,
  * sub_event_type etc.
  *
- * Event queue: A queue containing events that are scheduled by the event dev.
+ * **Event queue**: A queue containing events that are scheduled by the event device.
  * An event queue contains events of different flows associated with scheduling
  * types, such as atomic, ordered, or parallel.
+ * Each event given to an eventdev must have a valid event queue id field in the metadata,
+ * to specify on which event queue in the device the event must be placed,
+ * for later scheduling to a core.
  *
- * Event port: An application's interface into the event dev for enqueue and
+ * **Event port**: An application's interface into the event dev for enqueue and
  * dequeue operations. Each event port can be linked with one or more
  * event queues for dequeue operations.
- *
- * By default, all the functions of the Event Device API exported by a PMD
- * are lock-free functions which assume to not be invoked in parallel on
- * different logical cores to work on the same target object. For instance,
- * the dequeue function of a PMD cannot be invoked in parallel on two logical
- * cores to operates on same  event port. Of course, this function
+ * Each port should be associated with a single core (enqueue and dequeue are not thread-safe).
+ * To schedule events to a core, the event device will schedule them to the event port(s)
+ * being polled by that core.
+ *
+ * *NOTE*: By default, all the functions of the Event Device API exported by a PMD
+ * are lock-free functions, which must not be invoked on the same object in parallel on
+ * different logical cores.
+ * For instance, the dequeue function of a PMD cannot be invoked in parallel on two logical
+ * cores to operate on the same event port. Of course, this function
  * can be invoked in parallel by different logical cores on different ports.
  * It is the responsibility of the upper level application to enforce this rule.
  *
@@ -107,22 +117,19 @@
  *
  * Event devices are dynamically registered during the PCI/SoC device probing
  * phase performed at EAL initialization time.
- * When an Event device is being probed, a *rte_event_dev* structure and
- * a new device identifier are allocated for that device. Then, the
- * event_dev_init() function supplied by the Event driver matching the probed
- * device is invoked to properly initialize the device.
+ * When an Event device is being probed, an *rte_event_dev* structure is allocated
+ * for it and the event_dev_init() function supplied by the Event driver
+ * is invoked to properly initialize the device.
  *
- * The role of the device init function consists of resetting the hardware or
- * software event driver implementations.
+ * The role of the device init function is to reset the device hardware or
+ * to initialize the software event driver implementation.
  *
- * If the device init operation is successful, the correspondence between
- * the device identifier assigned to the new device and its associated
- * *rte_event_dev* structure is effectively registered.
- * Otherwise, both the *rte_event_dev* structure and the device identifier are
- * freed.
+ * If the device init operation is successful, the device is assigned a device
+ * id (dev_id) for application use.
+ * Otherwise, the *rte_event_dev* structure is freed.
  *
  * The functions exported by the application Event API to setup a device
- * designated by its device identifier must be invoked in the following order:
+ * must be invoked in the following order:
  *     - rte_event_dev_configure()
  *     - rte_event_queue_setup()
  *     - rte_event_port_setup()
@@ -130,10 +137,15 @@
  *     - rte_event_dev_start()
  *
  * Then, the application can invoke, in any order, the functions
- * exported by the Event API to schedule events, dequeue events, enqueue events,
- * change event queue(s) to event port [un]link establishment and so on.
- *
- * Application may use rte_event_[queue/port]_default_conf_get() to get the
+ * exported by the Event API to dequeue events, enqueue events,
+ * and link and unlink event queue(s) to event ports.
+ *
+ * Before configuring a device, an application should call rte_event_dev_info_get()
+ * to determine the capabilities of the event device, and any queue or port
+ * limits of that device. The parameters set in the various device configuration
+ * structures may need to be adjusted based on the max values provided in the
+ * device information structure returned from the info_get API.
+ * An application may use rte_event_[queue/port]_default_conf_get() to get the
  * default configuration to set up an event queue or event port by
  * overriding few default values.
  *
@@ -145,7 +157,11 @@
  * when the device is stopped.
  *
  * Finally, an application can close an Event device by invoking the
- * rte_event_dev_close() function.
+ * rte_event_dev_close() function. Once closed, a device cannot be
+ * reconfigured or restarted.
+ *
+ * Driver-Oriented Event API
+ * -------------------------
  *
  * Each function of the application Event API invokes a specific function
  * of the PMD that controls the target device designated by its device
@@ -164,10 +180,13 @@
  * supplied in the *event_dev_ops* structure of the *rte_event_dev* structure.
  *
  * For performance reasons, the address of the fast-path functions of the
- * Event driver is not contained in the *event_dev_ops* structure.
+ * Event driver are not contained in the *event_dev_ops* structure.
  * Instead, they are directly stored at the beginning of the *rte_event_dev*
  * structure to avoid an extra indirect memory access during their invocation.
  *
+ * Event Enqueue, Dequeue and Scheduling
+ * -------------------------------------
+ *
  * RTE event device drivers do not use interrupts for enqueue or dequeue
  * operation. Instead, Event drivers export Poll-Mode enqueue and dequeue
  * functions to applications.
@@ -179,21 +198,22 @@
  * crypto work completion notification etc
  *
  * The *dequeue* operation gets one or more events from the event ports.
- * The application process the events and send to downstream event queue through
- * rte_event_enqueue_burst() if it is an intermediate stage of event processing,
- * on the final stage, the application may use Tx adapter API for maintaining
- * the ingress order and then send the packet/event on the wire.
+ * The application processes the events and sends them to a downstream event queue through
+ * rte_event_enqueue_burst(), if it is an intermediate stage of event processing.
+ * On the final stage of processing, the application may use the Tx adapter API for maintaining
+ * the event ingress order while sending the packet/event on the wire via NIC Tx.
  *
  * The point at which events are scheduled to ports depends on the device.
  * For hardware devices, scheduling occurs asynchronously without any software
  * intervention. Software schedulers can either be distributed
  * (each worker thread schedules events to its own port) or centralized
  * (a dedicated thread schedules to all ports). Distributed software schedulers
- * perform the scheduling in rte_event_dequeue_burst(), whereas centralized
- * scheduler logic need a dedicated service core for scheduling.
- * The RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED capability flag is not set
- * indicates the device is centralized and thus needs a dedicated scheduling
- * thread that repeatedly calls software specific scheduling function.
+ * perform the scheduling inside the enqueue or dequeue functions, whereas centralized
+ * software schedulers need a dedicated service core for scheduling.
+ * The absence of the RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED capability flag
+ * indicates that the device is centralized and thus needs a dedicated scheduling
+ * thread, generally a service core,
+ * that repeatedly calls the software specific scheduling function.
  *
  * An event driven worker thread has following typical workflow on fastpath:
  * \code{.c}
-- 
2.40.1


^ permalink raw reply	[flat|nested] 123+ messages in thread
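[Editorial note: the dequeue/process/enqueue workflow described in the reworded doxygen text above can be sketched as a minimal worker loop. This is an illustrative sketch, not part of the patch; the `dev_id`, `port_id` and `next_stage_qid` parameters and the stage logic are assumptions, set up elsewhere via rte_event_dev_configure()/rte_event_port_setup().]

```c
#include <rte_eventdev.h>

/* Illustrative sketch of the dequeue/process/enqueue fastpath workflow
 * described above; not part of the patch. */
static void
worker_loop(uint8_t dev_id, uint8_t port_id, uint8_t next_stage_qid)
{
	struct rte_event ev;

	for (;;) {
		/* Dequeue one event from this worker's port. With a
		 * distributed software scheduler, scheduling work also
		 * happens inside this call. */
		if (rte_event_dequeue_burst(dev_id, port_id, &ev, 1, 0) == 0)
			continue;

		/* ... application processing of ev.mbuf / ev.u64 ... */

		/* Intermediate stage: forward the event to the downstream
		 * queue, preserving scheduling context via OP_FORWARD. */
		ev.queue_id = next_stage_qid;
		ev.op = RTE_EVENT_OP_FORWARD;
		while (rte_event_enqueue_burst(dev_id, port_id, &ev, 1) != 1)
			; /* retry on backpressure */
	}
}
```

On the final stage, the loop would instead hand the event to the Tx adapter (or release it) rather than forwarding to another queue.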

* [PATCH v2 02/11] eventdev: move text on driver internals to proper section
  2024-01-19 17:43 ` [PATCH v2 00/11] improve eventdev API specification/documentation Bruce Richardson
  2024-01-19 17:43   ` [PATCH v2 01/11] eventdev: improve doxygen introduction text Bruce Richardson
@ 2024-01-19 17:43   ` Bruce Richardson
  2024-01-19 17:43   ` [PATCH v2 03/11] eventdev: update documentation on device capability flags Bruce Richardson
                     ` (10 subsequent siblings)
  12 siblings, 0 replies; 123+ messages in thread
From: Bruce Richardson @ 2024-01-19 17:43 UTC (permalink / raw)
  To: dev
  Cc: jerinj, mattias.ronnblom, abdullah.sevincer, sachin.saxena,
	hemant.agrawal, pbhagavatula, pravin.pathak, Bruce Richardson

Inside the doxygen introduction text, some internal details of how
eventdev works were mixed in with application-relevant details. Move
these details on probing etc. to the driver-relevant section.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
 lib/eventdev/rte_eventdev.h | 32 ++++++++++++++++----------------
 1 file changed, 16 insertions(+), 16 deletions(-)

diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
index a36c89c7a4..949e957f1b 100644
--- a/lib/eventdev/rte_eventdev.h
+++ b/lib/eventdev/rte_eventdev.h
@@ -112,22 +112,6 @@
  * In all functions of the Event API, the Event device is
  * designated by an integer >= 0 named the device identifier *dev_id*
  *
- * At the Event driver level, Event devices are represented by a generic
- * data structure of type *rte_event_dev*.
- *
- * Event devices are dynamically registered during the PCI/SoC device probing
- * phase performed at EAL initialization time.
- * When an Event device is being probed, an *rte_event_dev* structure is allocated
- * for it and the event_dev_init() function supplied by the Event driver
- * is invoked to properly initialize the device.
- *
- * The role of the device init function is to reset the device hardware or
- * to initialize the software event driver implementation.
- *
- * If the device init operation is successful, the device is assigned a device
- * id (dev_id) for application use.
- * Otherwise, the *rte_event_dev* structure is freed.
- *
  * The functions exported by the application Event API to setup a device
  * must be invoked in the following order:
  *     - rte_event_dev_configure()
@@ -163,6 +147,22 @@
  * Driver-Oriented Event API
  * -------------------------
  *
+ * At the Event driver level, Event devices are represented by a generic
+ * data structure of type *rte_event_dev*.
+ *
+ * Event devices are dynamically registered during the PCI/SoC device probing
+ * phase performed at EAL initialization time.
+ * When an Event device is being probed, an *rte_event_dev* structure is allocated
+ * for it and the event_dev_init() function supplied by the Event driver
+ * is invoked to properly initialize the device.
+ *
+ * The role of the device init function is to reset the device hardware or
+ * to initialize the software event driver implementation.
+ *
+ * If the device init operation is successful, the device is assigned a device
+ * id (dev_id) for application use.
+ * Otherwise, the *rte_event_dev* structure is freed.
+ *
  * Each function of the application Event API invokes a specific function
  * of the PMD that controls the target device designated by its device
  * identifier.
-- 
2.40.1


^ permalink raw reply	[flat|nested] 123+ messages in thread

* [PATCH v2 03/11] eventdev: update documentation on device capability flags
  2024-01-19 17:43 ` [PATCH v2 00/11] improve eventdev API specification/documentation Bruce Richardson
  2024-01-19 17:43   ` [PATCH v2 01/11] eventdev: improve doxygen introduction text Bruce Richardson
  2024-01-19 17:43   ` [PATCH v2 02/11] eventdev: move text on driver internals to proper section Bruce Richardson
@ 2024-01-19 17:43   ` Bruce Richardson
  2024-01-23  9:18     ` Mattias Rönnblom
  2024-01-19 17:43   ` [PATCH v2 04/11] eventdev: cleanup doxygen comments on info structure Bruce Richardson
                     ` (9 subsequent siblings)
  12 siblings, 1 reply; 123+ messages in thread
From: Bruce Richardson @ 2024-01-19 17:43 UTC (permalink / raw)
  To: dev
  Cc: jerinj, mattias.ronnblom, abdullah.sevincer, sachin.saxena,
	hemant.agrawal, pbhagavatula, pravin.pathak, Bruce Richardson

Update the device capability docs, to:

* include more cross-references
* split longer text into paragraphs, in most cases with each flag having
  a single-line summary at the start of the doc block
* general comment rewording and clarification as appropriate

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
 lib/eventdev/rte_eventdev.h | 130 ++++++++++++++++++++++++++----------
 1 file changed, 93 insertions(+), 37 deletions(-)

diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
index 949e957f1b..57a2791946 100644
--- a/lib/eventdev/rte_eventdev.h
+++ b/lib/eventdev/rte_eventdev.h
@@ -243,143 +243,199 @@ struct rte_event;
 /* Event device capability bitmap flags */
 #define RTE_EVENT_DEV_CAP_QUEUE_QOS           (1ULL << 0)
 /**< Event scheduling prioritization is based on the priority and weight
- * associated with each event queue. Events from a queue with highest priority
- * is scheduled first. If the queues are of same priority, weight of the queues
+ * associated with each event queue.
+ *
+ * Events from a queue with highest priority
+ * are scheduled first. If the queues are of same priority, weight of the queues
  * are considered to select a queue in a weighted round robin fashion.
  * Subsequent dequeue calls from an event port could see events from the same
  * event queue, if the queue is configured with an affinity count. Affinity
  * count is the number of subsequent dequeue calls, in which an event port
  * should use the same event queue if the queue is non-empty
  *
+ * NOTE: A device may use both queue prioritization and event prioritization
+ * (@ref RTE_EVENT_DEV_CAP_EVENT_QOS capability) when making packet scheduling decisions.
+ *
  *  @see rte_event_queue_setup(), rte_event_queue_attr_set()
  */
 #define RTE_EVENT_DEV_CAP_EVENT_QOS           (1ULL << 1)
 /**< Event scheduling prioritization is based on the priority associated with
- *  each event. Priority of each event is supplied in *rte_event* structure
+ *  each event.
+ *
+ *  Priority of each event is supplied in *rte_event* structure
  *  on each enqueue operation.
+ *  If this capability is not set, the priority field of the event structure
+ *  is ignored for each event.
  *
+ * NOTE: A device may use both queue prioritization (@ref RTE_EVENT_DEV_CAP_QUEUE_QOS capability)
+ * and event prioritization when making packet scheduling decisions.
+ *
  *  @see rte_event_enqueue_burst()
  */
 #define RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED   (1ULL << 2)
 /**< Event device operates in distributed scheduling mode.
+ *
  * In distributed scheduling mode, event scheduling happens in HW or
- * rte_event_dequeue_burst() or the combination of these two.
+ * rte_event_dequeue_burst() / rte_event_enqueue_burst() or the combination of these two.
  * If the flag is not set then eventdev is centralized and thus needs a
  * dedicated service core that acts as a scheduling thread .
  *
- * @see rte_event_dequeue_burst()
+ * @see rte_event_dev_service_id_get
  */
 #define RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES     (1ULL << 3)
 /**< Event device is capable of enqueuing events of any type to any queue.
+ *
  * If this capability is not set, the queue only supports events of the
- *  *RTE_SCHED_TYPE_* type that it was created with.
+ * *RTE_SCHED_TYPE_* type that it was created with.
+ * Any events of other types scheduled to the queue will be handled in an
+ * implementation-dependent manner. They may be dropped by the
+ * event device, or enqueued with the scheduling type adjusted to the
+ * correct/supported value.
  *
- * @see RTE_SCHED_TYPE_* values
+ * @see rte_event_enqueue_burst
+ * @see RTE_SCHED_TYPE_ATOMIC RTE_SCHED_TYPE_ORDERED RTE_SCHED_TYPE_PARALLEL
  */
 #define RTE_EVENT_DEV_CAP_BURST_MODE          (1ULL << 4)
 /**< Event device is capable of operating in burst mode for enqueue(forward,
- * release) and dequeue operation. If this capability is not set, application
- * still uses the rte_event_dequeue_burst() and rte_event_enqueue_burst() but
- * PMD accepts only one event at a time.
+ * release) and dequeue operation.
+ *
+ * If this capability is not set, the application
+ * can still use rte_event_dequeue_burst() and rte_event_enqueue_burst(), but
+ * the PMD accepts or returns only one event at a time.
  *
  * @see rte_event_dequeue_burst() rte_event_enqueue_burst()
  */
 #define RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE    (1ULL << 5)
 /**< Event device ports support disabling the implicit release feature, in
  * which the port will release all unreleased events in its dequeue operation.
+ *
  * If this capability is set and the port is configured with implicit release
  * disabled, the application is responsible for explicitly releasing events
- * using either the RTE_EVENT_OP_FORWARD or the RTE_EVENT_OP_RELEASE event
+ * using either the @ref RTE_EVENT_OP_FORWARD or the @ref RTE_EVENT_OP_RELEASE event
  * enqueue operations.
  *
  * @see rte_event_dequeue_burst() rte_event_enqueue_burst()
  */
 
 #define RTE_EVENT_DEV_CAP_NONSEQ_MODE         (1ULL << 6)
-/**< Event device is capable of operating in none sequential mode. The path
- * of the event is not necessary to be sequential. Application can change
- * the path of event at runtime. If the flag is not set, then event each event
- * will follow a path from queue 0 to queue 1 to queue 2 etc. If the flag is
- * set, events may be sent to queues in any order. If the flag is not set, the
- * eventdev will return an error when the application enqueues an event for a
+/**< Event device is capable of operating in non-sequential mode.
+ *
+ * The path of the event need not be sequential. The application can change
+ * the path of an event at runtime, and events may be sent to queues in any order.
+ *
+ * If the flag is not set, then each event will follow a path from queue 0
+ * to queue 1 to queue 2 etc.
+ * The eventdev will return an error when the application enqueues an event for a
  * qid which is not the next in the sequence.
  */
 
 #define RTE_EVENT_DEV_CAP_RUNTIME_PORT_LINK   (1ULL << 7)
-/**< Event device is capable of configuring the queue/port link at runtime.
+/**< Event device is capable of reconfiguring the queue/port link at runtime.
+ *
  * If the flag is not set, the eventdev queue/port link is only can be
- * configured during  initialization.
+ * configured during initialization, or by stopping the device and
+ * then later restarting it after reconfiguration.
+ *
+ * @see rte_event_port_link rte_event_port_unlink
  */
 
 #define RTE_EVENT_DEV_CAP_MULTIPLE_QUEUE_PORT (1ULL << 8)
-/**< Event device is capable of setting up the link between multiple queue
- * with single port. If the flag is not set, the eventdev can only map a
- * single queue to each port or map a single queue to many port.
+/**< Event device is capable of setting up links between multiple queues and a single port.
+ *
+ * If the flag is not set, each port may only be linked to a single queue, and
+ * so can only receive events from that queue.
+ * However, each queue may be linked to multiple ports.
+ *
+ * @see rte_event_port_link
  */
 
 #define RTE_EVENT_DEV_CAP_CARRY_FLOW_ID (1ULL << 9)
-/**< Event device preserves the flow ID from the enqueued
- * event to the dequeued event if the flag is set. Otherwise,
- * the content of this field is implementation dependent.
+/**< Event device preserves the flow ID from the enqueued event to the dequeued event.
+ *
+ * If this flag is not set,
+ * the content of the flow-id field in dequeued events is implementation dependent.
+ *
+ * @see rte_event_dequeue_burst
  */
 
 #define RTE_EVENT_DEV_CAP_MAINTENANCE_FREE (1ULL << 10)
 /**< Event device *does not* require calls to rte_event_maintain().
+ *
  * An event device that does not set this flag requires calls to
  * rte_event_maintain() during periods when neither
  * rte_event_dequeue_burst() nor rte_event_enqueue_burst() are called
  * on a port. This will allow the event device to perform internal
  * processing, such as flushing buffered events, return credits to a
  * global pool, or process signaling related to load balancing.
+ *
+ * @see rte_event_maintain
  */
 
 #define RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR (1ULL << 11)
 /**< Event device is capable of changing the queue attributes at runtime i.e
- * after rte_event_queue_setup() or rte_event_start() call sequence. If this
- * flag is not set, eventdev queue attributes can only be configured during
+ * after rte_event_queue_setup() or rte_event_dev_start() call sequence.
+ *
+ * If this flag is not set, eventdev queue attributes can only be configured during
  * rte_event_queue_setup().
+ *
+ * @see rte_event_queue_setup
  */
 
 #define RTE_EVENT_DEV_CAP_PROFILE_LINK (1ULL << 12)
-/**< Event device is capable of supporting multiple link profiles per event port
- * i.e., the value of `rte_event_dev_info::max_profiles_per_port` is greater
- * than one.
+/**< Event device is capable of supporting multiple link profiles per event port.
+ *
+ * When set, the value of `rte_event_dev_info::max_profiles_per_port` is greater
+ * than one, and multiple profiles may be configured and then switched at runtime.
+ * If not set, only a single profile may be configured, which may itself be
+ * runtime adjustable (if @ref RTE_EVENT_DEV_CAP_RUNTIME_PORT_LINK is set).
+ *
+ * @see rte_event_port_profile_links_set rte_event_port_profile_links_get
+ * @see rte_event_port_profile_switch
+ * @see RTE_EVENT_DEV_CAP_RUNTIME_PORT_LINK
  */
 
 /* Event device priority levels */
 #define RTE_EVENT_DEV_PRIORITY_HIGHEST   0
-/**< Highest priority expressed across eventdev subsystem
+/**< Highest priority expressed across eventdev subsystem.
+ *
  * @see rte_event_queue_setup(), rte_event_enqueue_burst()
  * @see rte_event_port_link()
  */
 #define RTE_EVENT_DEV_PRIORITY_NORMAL    128
-/**< Normal priority expressed across eventdev subsystem
+/**< Normal priority expressed across eventdev subsystem.
+ *
  * @see rte_event_queue_setup(), rte_event_enqueue_burst()
  * @see rte_event_port_link()
  */
 #define RTE_EVENT_DEV_PRIORITY_LOWEST    255
-/**< Lowest priority expressed across eventdev subsystem
+/**< Lowest priority expressed across eventdev subsystem.
+ *
  * @see rte_event_queue_setup(), rte_event_enqueue_burst()
  * @see rte_event_port_link()
  */
 
 /* Event queue scheduling weights */
 #define RTE_EVENT_QUEUE_WEIGHT_HIGHEST 255
-/**< Highest weight of an event queue
+/**< Highest weight of an event queue.
+ *
  * @see rte_event_queue_attr_get(), rte_event_queue_attr_set()
  */
 #define RTE_EVENT_QUEUE_WEIGHT_LOWEST 0
-/**< Lowest weight of an event queue
+/**< Lowest weight of an event queue.
+ *
  * @see rte_event_queue_attr_get(), rte_event_queue_attr_set()
  */
 
 /* Event queue scheduling affinity */
 #define RTE_EVENT_QUEUE_AFFINITY_HIGHEST 255
-/**< Highest scheduling affinity of an event queue
+/**< Highest scheduling affinity of an event queue.
+ *
  * @see rte_event_queue_attr_get(), rte_event_queue_attr_set()
  */
 #define RTE_EVENT_QUEUE_AFFINITY_LOWEST 0
-/**< Lowest scheduling affinity of an event queue
+/**< Lowest scheduling affinity of an event queue.
+ *
  * @see rte_event_queue_attr_get(), rte_event_queue_attr_set()
  */
 
-- 
2.40.1


^ permalink raw reply	[flat|nested] 123+ messages in thread
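[Editorial note: a short sketch of how an application might act on the capability flags documented in the patch above. This is illustrative only, not part of the patch; it assumes service-core mapping for the looked-up service id is done elsewhere.]

```c
#include <rte_eventdev.h>
#include <rte_service.h>

/* Sketch: if RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED is absent, the device has
 * a centralized scheduler whose service must be run on a service core. */
static int
setup_scheduling(uint8_t dev_id)
{
	struct rte_event_dev_info info;
	uint32_t service_id;
	int ret;

	ret = rte_event_dev_info_get(dev_id, &info);
	if (ret < 0)
		return ret;

	if (info.event_dev_cap & RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED)
		return 0; /* scheduling happens inside enqueue/dequeue calls */

	/* Centralized scheduler: look up its service id and enable it;
	 * mapping the service to a service core is assumed done elsewhere. */
	ret = rte_event_dev_service_id_get(dev_id, &service_id);
	if (ret < 0)
		return ret;
	return rte_service_runstate_set(service_id, 1);
}
```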

* [PATCH v2 04/11] eventdev: cleanup doxygen comments on info structure
  2024-01-19 17:43 ` [PATCH v2 00/11] improve eventdev API specification/documentation Bruce Richardson
                     ` (2 preceding siblings ...)
  2024-01-19 17:43   ` [PATCH v2 03/11] eventdev: update documentation on device capability flags Bruce Richardson
@ 2024-01-19 17:43   ` Bruce Richardson
  2024-01-23  9:35     ` Mattias Rönnblom
  2024-01-19 17:43   ` [PATCH v2 05/11] eventdev: improve function documentation for query fns Bruce Richardson
                     ` (8 subsequent siblings)
  12 siblings, 1 reply; 123+ messages in thread
From: Bruce Richardson @ 2024-01-19 17:43 UTC (permalink / raw)
  To: dev
  Cc: jerinj, mattias.ronnblom, abdullah.sevincer, sachin.saxena,
	hemant.agrawal, pbhagavatula, pravin.pathak, Bruce Richardson

Some small rewording changes to the doxygen comments on struct
rte_event_dev_info.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
 lib/eventdev/rte_eventdev.h | 46 ++++++++++++++++++++-----------------
 1 file changed, 25 insertions(+), 21 deletions(-)

diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
index 57a2791946..872f241df2 100644
--- a/lib/eventdev/rte_eventdev.h
+++ b/lib/eventdev/rte_eventdev.h
@@ -482,54 +482,58 @@ struct rte_event_dev_info {
 	const char *driver_name;	/**< Event driver name */
 	struct rte_device *dev;	/**< Device information */
 	uint32_t min_dequeue_timeout_ns;
-	/**< Minimum supported global dequeue timeout(ns) by this device */
+	/**< Minimum global dequeue timeout(ns) supported by this device */
 	uint32_t max_dequeue_timeout_ns;
-	/**< Maximum supported global dequeue timeout(ns) by this device */
+	/**< Maximum global dequeue timeout(ns) supported by this device */
 	uint32_t dequeue_timeout_ns;
 	/**< Configured global dequeue timeout(ns) for this device */
 	uint8_t max_event_queues;
-	/**< Maximum event_queues supported by this device */
+	/**< Maximum event queues supported by this device */
 	uint32_t max_event_queue_flows;
-	/**< Maximum supported flows in an event queue by this device*/
+	/**< Maximum number of flows within an event queue supported by this device */
 	uint8_t max_event_queue_priority_levels;
 	/**< Maximum number of event queue priority levels by this device.
-	 * Valid when the device has RTE_EVENT_DEV_CAP_QUEUE_QOS capability
+	 * Valid when the device has RTE_EVENT_DEV_CAP_QUEUE_QOS capability.
+	 * The priority levels are evenly distributed between
+	 * @ref RTE_EVENT_DEV_PRIORITY_HIGHEST and @ref RTE_EVENT_DEV_PRIORITY_LOWEST.
 	 */
 	uint8_t max_event_priority_levels;
 	/**< Maximum number of event priority levels by this device.
 	 * Valid when the device has RTE_EVENT_DEV_CAP_EVENT_QOS capability
+	 * The priority levels are evenly distributed between
+	 * @ref RTE_EVENT_DEV_PRIORITY_HIGHEST and @ref RTE_EVENT_DEV_PRIORITY_LOWEST.
 	 */
 	uint8_t max_event_ports;
 	/**< Maximum number of event ports supported by this device */
 	uint8_t max_event_port_dequeue_depth;
-	/**< Maximum number of events can be dequeued at a time from an
-	 * event port by this device.
-	 * A device that does not support bulk dequeue will set this as 1.
+	/**< Maximum number of events that can be dequeued at a time from an event port
+	 * on this device.
+	 * A device that does not support bulk dequeue will set this to 1.
 	 */
 	uint32_t max_event_port_enqueue_depth;
-	/**< Maximum number of events can be enqueued at a time from an
-	 * event port by this device.
-	 * A device that does not support bulk enqueue will set this as 1.
+	/**< Maximum number of events that can be enqueued at a time to an event port
+	 * on this device.
+	 * A device that does not support bulk enqueue will set this to 1.
 	 */
 	uint8_t max_event_port_links;
-	/**< Maximum number of queues that can be linked to a single event
-	 * port by this device.
+	/**< Maximum number of queues that can be linked to a single event port on this device.
 	 */
 	int32_t max_num_events;
 	/**< A *closed system* event dev has a limit on the number of events it
-	 * can manage at a time. An *open system* event dev does not have a
-	 * limit and will specify this as -1.
+	 * can manage at a time.
+	 * Once the number of events tracked by an eventdev exceeds this number,
+	 * any enqueues of NEW events will fail.
+	 * An *open system* event dev does not have a limit and will specify this as -1.
 	 */
 	uint32_t event_dev_cap;
-	/**< Event device capabilities(RTE_EVENT_DEV_CAP_)*/
+	/**< Event device capabilities flags (RTE_EVENT_DEV_CAP_*) */
 	uint8_t max_single_link_event_port_queue_pairs;
-	/**< Maximum number of event ports and queues that are optimized for
-	 * (and only capable of) single-link configurations supported by this
-	 * device. These ports and queues are not accounted for in
-	 * max_event_ports or max_event_queues.
+	/**< Maximum number of event ports and queues supported by this device
+	 * that are optimized for (and only capable of) single-link configurations.
+	 * These ports and queues are not accounted for in max_event_ports or max_event_queues.
 	 */
 	uint8_t max_profiles_per_port;
-	/**< Maximum number of event queue profiles per event port.
+	/**< Maximum number of event queue link profiles per event port.
 	 * A device that doesn't support multiple profiles will set this as 1.
 	 */
 };
-- 
2.40.1


^ permalink raw reply	[flat|nested] 123+ messages in thread

* [PATCH v2 05/11] eventdev: improve function documentation for query fns
  2024-01-19 17:43 ` [PATCH v2 00/11] improve eventdev API specification/documentation Bruce Richardson
                     ` (3 preceding siblings ...)
  2024-01-19 17:43   ` [PATCH v2 04/11] eventdev: cleanup doxygen comments on info structure Bruce Richardson
@ 2024-01-19 17:43   ` Bruce Richardson
  2024-01-23  9:40     ` Mattias Rönnblom
  2024-01-19 17:43   ` [PATCH v2 06/11] eventdev: improve doxygen comments on configure struct Bruce Richardson
                     ` (7 subsequent siblings)
  12 siblings, 1 reply; 123+ messages in thread
From: Bruce Richardson @ 2024-01-19 17:43 UTC (permalink / raw)
  To: dev
  Cc: jerinj, mattias.ronnblom, abdullah.sevincer, sachin.saxena,
	hemant.agrawal, pbhagavatula, pravin.pathak, Bruce Richardson

General improvements to the doxygen docs for eventdev functions for
querying basic information:
* number of devices
* id for a particular device
* socket id of device
* capability information for a device

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
 lib/eventdev/rte_eventdev.h | 22 +++++++++++++---------
 1 file changed, 13 insertions(+), 9 deletions(-)

diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
index 872f241df2..c57c93a22e 100644
--- a/lib/eventdev/rte_eventdev.h
+++ b/lib/eventdev/rte_eventdev.h
@@ -440,8 +440,7 @@ struct rte_event;
  */
 
 /**
- * Get the total number of event devices that have been successfully
- * initialised.
+ * Get the total number of event devices available for application use.
  *
  * @return
  *   The total number of usable event devices.
@@ -456,8 +455,10 @@ rte_event_dev_count(void);
  *   Event device name to select the event device identifier.
  *
  * @return
- *   Returns event device identifier on success.
- *   - <0: Failure to find named event device.
+ *   Event device identifier (dev_id >= 0) on success.
+ *   Negative error code on failure:
+ *   - -EINVAL - input name parameter is invalid
+ *   - -ENODEV - no event device found with that name
  */
 int
 rte_event_dev_get_dev_id(const char *name);
@@ -470,7 +471,8 @@ rte_event_dev_get_dev_id(const char *name);
  * @return
  *   The NUMA socket id to which the device is connected or
  *   a default of zero if the socket could not be determined.
- *   -(-EINVAL)  dev_id value is out of range.
+ *   -EINVAL on error, where the given dev_id value does not
+ *   correspond to any event device.
  */
 int
 rte_event_dev_socket_id(uint8_t dev_id);
@@ -539,18 +541,20 @@ struct rte_event_dev_info {
 };
 
 /**
- * Retrieve the contextual information of an event device.
+ * Retrieve details of an event device's capabilities and configuration limits.
  *
  * @param dev_id
  *   The identifier of the device.
  *
  * @param[out] dev_info
  *   A pointer to a structure of type *rte_event_dev_info* to be filled with the
- *   contextual information of the device.
+ *   information about the device's capabilities.
  *
  * @return
- *   - 0: Success, driver updates the contextual information of the event device
- *   - <0: Error code returned by the driver info get function.
+ *   - 0: Success, information about the event device is present in dev_info.
+ *   - <0: Failure, error code returned by the function.
+ *     - -EINVAL - invalid input parameters, e.g. incorrect device id
+ *     - -ENOTSUP - device does not support returning capabilities information
  */
 int
 rte_event_dev_info_get(uint8_t dev_id, struct rte_event_dev_info *dev_info);
-- 
2.40.1


^ permalink raw reply	[flat|nested] 123+ messages in thread
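[Editorial note: the query functions documented in the patch above compose into a short lookup sequence. The sketch below is illustrative, not part of the patch; `dev_name` is a placeholder and error handling is minimal.]

```c
#include <stdio.h>
#include <rte_eventdev.h>

/* Sketch: enumerate devices, resolve one by name, and query its
 * socket and capabilities using the functions documented above. */
static int
find_and_query(const char *dev_name)
{
	struct rte_event_dev_info info;
	int dev_id;

	printf("%d event devices available\n", (int)rte_event_dev_count());

	/* Returns -EINVAL or -ENODEV on failure, per the updated docs. */
	dev_id = rte_event_dev_get_dev_id(dev_name);
	if (dev_id < 0)
		return dev_id;

	printf("device %d on NUMA socket %d\n", dev_id,
	       rte_event_dev_socket_id(dev_id));

	return rte_event_dev_info_get(dev_id, &info);
}
```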

* [PATCH v2 06/11] eventdev: improve doxygen comments on configure struct
  2024-01-19 17:43 ` [PATCH v2 00/11] improve eventdev API specification/documentation Bruce Richardson
                     ` (4 preceding siblings ...)
  2024-01-19 17:43   ` [PATCH v2 05/11] eventdev: improve function documentation for query fns Bruce Richardson
@ 2024-01-19 17:43   ` Bruce Richardson
  2024-01-23  9:46     ` Mattias Rönnblom
  2024-01-19 17:43   ` [PATCH v2 07/11] eventdev: fix documentation for counting single-link ports Bruce Richardson
                     ` (6 subsequent siblings)
  12 siblings, 1 reply; 123+ messages in thread
From: Bruce Richardson @ 2024-01-19 17:43 UTC (permalink / raw)
  To: dev
  Cc: jerinj, mattias.ronnblom, abdullah.sevincer, sachin.saxena,
	hemant.agrawal, pbhagavatula, pravin.pathak, Bruce Richardson

General rewording and cleanup on the rte_event_dev_config structure.
Improved the wording of some sentences and created linked
cross-references out of the existing references to the dev_info
structure.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
 lib/eventdev/rte_eventdev.h | 47 +++++++++++++++++++------------------
 1 file changed, 24 insertions(+), 23 deletions(-)

diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
index c57c93a22e..4139ccb982 100644
--- a/lib/eventdev/rte_eventdev.h
+++ b/lib/eventdev/rte_eventdev.h
@@ -599,9 +599,9 @@ rte_event_dev_attr_get(uint8_t dev_id, uint32_t attr_id,
 struct rte_event_dev_config {
 	uint32_t dequeue_timeout_ns;
 	/**< rte_event_dequeue_burst() timeout on this device.
-	 * This value should be in the range of *min_dequeue_timeout_ns* and
-	 * *max_dequeue_timeout_ns* which previously provided in
-	 * rte_event_dev_info_get()
+	 * This value should be in the range of @ref rte_event_dev_info.min_dequeue_timeout_ns and
+	 * @ref rte_event_dev_info.max_dequeue_timeout_ns returned by
+	 * @ref rte_event_dev_info_get()
 	 * The value 0 is allowed, in which case, default dequeue timeout used.
 	 * @see RTE_EVENT_DEV_CFG_PER_DEQUEUE_TIMEOUT
 	 */
@@ -609,40 +609,41 @@ struct rte_event_dev_config {
 	/**< In a *closed system* this field is the limit on maximum number of
 	 * events that can be inflight in the eventdev at a given time. The
 	 * limit is required to ensure that the finite space in a closed system
-	 * is not overwhelmed. The value cannot exceed the *max_num_events*
-	 * as provided by rte_event_dev_info_get().
+	 * is not overwhelmed.
+	 * Once the limit has been reached, any enqueues of NEW events to the
+	 * system will fail.
+	 * The value cannot exceed @ref rte_event_dev_info.max_num_events
+	 * returned by rte_event_dev_info_get().
 	 * This value should be set to -1 for *open system*.
 	 */
 	uint8_t nb_event_queues;
 	/**< Number of event queues to configure on this device.
-	 * This value cannot exceed the *max_event_queues* which previously
-	 * provided in rte_event_dev_info_get()
+	 * This value cannot exceed @ref rte_event_dev_info.max_event_queues
+	 * returned by rte_event_dev_info_get()
 	 */
 	uint8_t nb_event_ports;
 	/**< Number of event ports to configure on this device.
-	 * This value cannot exceed the *max_event_ports* which previously
-	 * provided in rte_event_dev_info_get()
+	 * This value cannot exceed @ref rte_event_dev_info.max_event_ports
+	 * returned by rte_event_dev_info_get()
 	 */
 	uint32_t nb_event_queue_flows;
-	/**< Number of flows for any event queue on this device.
-	 * This value cannot exceed the *max_event_queue_flows* which previously
-	 * provided in rte_event_dev_info_get()
+	/**< Max number of flows needed for a single event queue on this device.
+	 * This value cannot exceed @ref rte_event_dev_info.max_event_queue_flows
+	 * returned by rte_event_dev_info_get()
 	 */
 	uint32_t nb_event_port_dequeue_depth;
-	/**< Maximum number of events can be dequeued at a time from an
-	 * event port by this device.
-	 * This value cannot exceed the *max_event_port_dequeue_depth*
-	 * which previously provided in rte_event_dev_info_get().
+	/**< Max number of events that can be dequeued at a time from an event port on this device.
+	 * This value cannot exceed @ref rte_event_dev_info.max_event_port_dequeue_depth
+	 * returned by rte_event_dev_info_get().
 	 * Ignored when device is not RTE_EVENT_DEV_CAP_BURST_MODE capable.
-	 * @see rte_event_port_setup()
+	 * @see rte_event_port_setup() rte_event_dequeue_burst()
 	 */
 	uint32_t nb_event_port_enqueue_depth;
-	/**< Maximum number of events can be enqueued at a time from an
-	 * event port by this device.
-	 * This value cannot exceed the *max_event_port_enqueue_depth*
-	 * which previously provided in rte_event_dev_info_get().
+	/**< Maximum number of events that can be enqueued at a time to an event port on this device.
+	 * This value cannot exceed @ref rte_event_dev_info.max_event_port_enqueue_depth
+	 * returned by rte_event_dev_info_get().
 	 * Ignored when device is not RTE_EVENT_DEV_CAP_BURST_MODE capable.
-	 * @see rte_event_port_setup()
+	 * @see rte_event_port_setup() rte_event_enqueue_burst()
 	 */
 	uint32_t event_dev_cfg;
 	/**< Event device config flags(RTE_EVENT_DEV_CFG_)*/
@@ -652,7 +653,7 @@ struct rte_event_dev_config {
 	 * queues; this value cannot exceed *nb_event_ports* or
 	 * *nb_event_queues*. If the device has ports and queues that are
 	 * optimized for single-link usage, this field is a hint for how many
-	 * to allocate; otherwise, regular event ports and queues can be used.
+	 * to allocate; otherwise, regular event ports and queues will be used.
 	 */
 };
 
-- 
2.40.1


^ permalink raw reply	[flat|nested] 123+ messages in thread
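[Editorial note: the config-struct fields reworded in the patch above are each documented against a limit in rte_event_dev_info. The sketch below shows that pairing; it is illustrative only, not part of the patch, and simply configures the device at its advertised maximums.]

```c
#include <string.h>
#include <rte_eventdev.h>

/* Sketch: derive each rte_event_dev_config field from the
 * corresponding rte_event_dev_info limit it must not exceed. */
static int
configure_from_info(uint8_t dev_id)
{
	struct rte_event_dev_info info;
	struct rte_event_dev_config cfg;
	int ret;

	ret = rte_event_dev_info_get(dev_id, &info);
	if (ret < 0)
		return ret;

	memset(&cfg, 0, sizeof(cfg));
	cfg.dequeue_timeout_ns = info.min_dequeue_timeout_ns;
	cfg.nb_events_limit = info.max_num_events;   /* -1 for open systems */
	cfg.nb_event_queues = info.max_event_queues;
	cfg.nb_event_ports = info.max_event_ports;
	cfg.nb_event_queue_flows = info.max_event_queue_flows;
	cfg.nb_event_port_dequeue_depth = info.max_event_port_dequeue_depth;
	cfg.nb_event_port_enqueue_depth = info.max_event_port_enqueue_depth;

	return rte_event_dev_configure(dev_id, &cfg);
}
```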

* [PATCH v2 07/11] eventdev: fix documentation for counting single-link ports
  2024-01-19 17:43 ` [PATCH v2 00/11] improve eventdev API specification/documentation Bruce Richardson
                     ` (5 preceding siblings ...)
  2024-01-19 17:43   ` [PATCH v2 06/11] eventdev: improve doxygen comments on configure struct Bruce Richardson
@ 2024-01-19 17:43   ` Bruce Richardson
  2024-01-23  9:48     ` Mattias Rönnblom
  2024-01-19 17:43   ` [PATCH v2 08/11] eventdev: improve doxygen comments on config fns Bruce Richardson
                     ` (5 subsequent siblings)
  12 siblings, 1 reply; 123+ messages in thread
From: Bruce Richardson @ 2024-01-19 17:43 UTC (permalink / raw)
  To: dev
  Cc: jerinj, mattias.ronnblom, abdullah.sevincer, sachin.saxena,
	hemant.agrawal, pbhagavatula, pravin.pathak, Bruce Richardson,
	stable

The documentation of how single-link port-queue pairs were counted in
the rte_event_dev_config structure did not match the actual
implementation and, if following the documentation, certain valid
port/queue configurations would have been impossible to configure. Fix
this by changing the documentation to match the implementation - however
confusing that implementation ends up being.

Bugzilla ID:  1368
Fixes: 75d113136f38 ("eventdev: express DLB/DLB2 PMD constraints")
Cc: stable@dpdk.org

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
 lib/eventdev/rte_eventdev.h | 28 ++++++++++++++++++++++------
 1 file changed, 22 insertions(+), 6 deletions(-)

diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
index 4139ccb982..3b8f5b8101 100644
--- a/lib/eventdev/rte_eventdev.h
+++ b/lib/eventdev/rte_eventdev.h
@@ -490,7 +490,10 @@ struct rte_event_dev_info {
 	uint32_t dequeue_timeout_ns;
 	/**< Configured global dequeue timeout(ns) for this device */
 	uint8_t max_event_queues;
-	/**< Maximum event queues supported by this device */
+	/**< Maximum event queues supported by this device.
+	 * This excludes any queue-port pairs covered by the
+	 * *max_single_link_event_port_queue_pairs* value in this structure.
+	 */
 	uint32_t max_event_queue_flows;
 	/**< Maximum number of flows within an event queue supported by this device*/
 	uint8_t max_event_queue_priority_levels;
@@ -506,7 +509,10 @@ struct rte_event_dev_info {
 	 * @ref RTE_EVENT_DEV_PRIORITY_HIGHEST and @ref RTE_EVENT_DEV_PRIORITY_LOWEST.
 	 */
 	uint8_t max_event_ports;
-	/**< Maximum number of event ports supported by this device */
+	/**< Maximum number of event ports supported by this device.
+	 * This excludes any queue-port pairs covered by the
+	 * *max_single_link_event_port_queue_pairs* value in this structure.
+	 */
 	uint8_t max_event_port_dequeue_depth;
 	/**< Maximum number of events that can be dequeued at a time from an event port
 	 * on this device.
@@ -618,13 +624,23 @@ struct rte_event_dev_config {
 	 */
 	uint8_t nb_event_queues;
 	/**< Number of event queues to configure on this device.
-	 * This value cannot exceed @ref rte_event_dev_info.max_event_queues
-	 * returned by rte_event_dev_info_get()
+	 * This value *includes* any single-link queue-port pairs to be used.
+	 * This value cannot exceed @ref rte_event_dev_info.max_event_queues +
+	 * @ref rte_event_dev_info.max_single_link_event_port_queue_pairs
+	 * returned by rte_event_dev_info_get().
+	 * The number of non-single-link queues, i.e. this value less
+	 * *nb_single_link_event_port_queues* in this struct, cannot exceed
+	 * @ref rte_event_dev_info.max_event_queues.
 	 */
 	uint8_t nb_event_ports;
 	/**< Number of event ports to configure on this device.
-	 * This value cannot exceed @ref rte_event_dev_info.max_event_ports
-	 * returned by rte_event_dev_info_get()
+	 * This value *includes* any single-link queue-port pairs to be used.
+	 * This value cannot exceed @ref rte_event_dev_info.max_event_ports +
+	 * @ref rte_event_dev_info.max_single_link_event_port_queue_pairs
+	 * returned by rte_event_dev_info_get().
+	 * The number of non-single-link ports, i.e. this value less
+	 * *nb_single_link_event_port_queues* in this struct, cannot exceed
+	 * @ref rte_event_dev_info.max_event_ports.
 	 */
 	uint32_t nb_event_queue_flows;
 	/**< Max number of flows needed for a single event queue on this device.
-- 
2.40.1


^ permalink raw reply	[flat|nested] 123+ messages in thread
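[Editor's note: the counting rules this patch documents can be illustrated with a small self-contained C sketch. The structures below are simplified stand-ins for `rte_event_dev_info` and `rte_event_dev_config` — only the fields relevant to single-link counting, with illustrative names — not the real DPDK definitions.]

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Simplified stand-ins for the DPDK info/config structures;
 * only the single-link-relevant fields are reproduced here. */
struct dev_info {
	uint8_t max_event_queues;
	uint8_t max_event_ports;
	uint8_t max_single_link_event_port_queue_pairs;
};

struct dev_config {
	uint8_t nb_event_queues;
	uint8_t nb_event_ports;
	uint8_t nb_single_link_event_port_queues;
};

/* Check the queue/port counts against the documented limits: the
 * configured totals *include* single-link pairs and may use the
 * combined headroom, while the non-single-link remainder must still
 * fit within the regular maxima. */
static bool
config_counts_valid(const struct dev_info *info, const struct dev_config *cfg)
{
	if (cfg->nb_event_queues >
	    info->max_event_queues + info->max_single_link_event_port_queue_pairs)
		return false;
	if (cfg->nb_event_ports >
	    info->max_event_ports + info->max_single_link_event_port_queue_pairs)
		return false;
	if (cfg->nb_event_queues - cfg->nb_single_link_event_port_queues >
	    info->max_event_queues)
		return false;
	if (cfg->nb_event_ports - cfg->nb_single_link_event_port_queues >
	    info->max_event_ports)
		return false;
	return true;
}
```

For example, a device reporting 8 regular queues plus 4 single-link pairs can accept `nb_event_queues = 10` only if at least 2 of those are declared single-link.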

* [PATCH v2 08/11] eventdev: improve doxygen comments on config fns
  2024-01-19 17:43 ` [PATCH v2 00/11] improve eventdev API specification/documentation Bruce Richardson
                     ` (6 preceding siblings ...)
  2024-01-19 17:43   ` [PATCH v2 07/11] eventdev: fix documentation for counting single-link ports Bruce Richardson
@ 2024-01-19 17:43   ` Bruce Richardson
  2024-01-23 10:00     ` Mattias Rönnblom
  2024-01-19 17:43   ` [PATCH v2 09/11] eventdev: improve doxygen comments for control APIs Bruce Richardson
                     ` (4 subsequent siblings)
  12 siblings, 1 reply; 123+ messages in thread
From: Bruce Richardson @ 2024-01-19 17:43 UTC (permalink / raw)
  To: dev
  Cc: jerinj, mattias.ronnblom, abdullah.sevincer, sachin.saxena,
	hemant.agrawal, pbhagavatula, pravin.pathak, Bruce Richardson

Improve the documentation text for the configuration functions and
structures for configuring an eventdev, as well as ports and queues.
Clarify text where possible, and ensure references come through as links
in the html output.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
 lib/eventdev/rte_eventdev.h | 196 ++++++++++++++++++++++++------------
 1 file changed, 130 insertions(+), 66 deletions(-)

diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
index 3b8f5b8101..1fda8a5a13 100644
--- a/lib/eventdev/rte_eventdev.h
+++ b/lib/eventdev/rte_eventdev.h
@@ -676,12 +676,14 @@ struct rte_event_dev_config {
 /**
  * Configure an event device.
  *
- * This function must be invoked first before any other function in the
- * API. This function can also be re-invoked when a device is in the
- * stopped state.
+ * This function must be invoked before any other configuration function in the
+ * API, when preparing an event device for application use.
+ * This function can also be re-invoked when a device is in the stopped state.
  *
- * The caller may use rte_event_dev_info_get() to get the capability of each
- * resources available for this event device.
+ * The caller should use rte_event_dev_info_get() to get the capabilities and
+ * resource limits for this event device before calling this API.
+ * Many values in the dev_conf input parameter are subject to limits given
+ * in the device information returned from rte_event_dev_info_get().
  *
  * @param dev_id
  *   The identifier of the device to configure.
@@ -691,6 +693,9 @@ struct rte_event_dev_config {
  * @return
  *   - 0: Success, device configured.
  *   - <0: Error code returned by the driver configuration function.
+ *     - -ENOTSUP - device does not support configuration
+ *     - -EINVAL  - invalid input parameter
+ *     - -EBUSY   - device has already been started
  */
 int
 rte_event_dev_configure(uint8_t dev_id,
@@ -700,14 +705,33 @@ rte_event_dev_configure(uint8_t dev_id,
 
 /* Event queue configuration bitmap flags */
 #define RTE_EVENT_QUEUE_CFG_ALL_TYPES          (1ULL << 0)
-/**< Allow ATOMIC,ORDERED,PARALLEL schedule type enqueue
+/**< Allow events with schedule types ATOMIC, ORDERED, and PARALLEL to be enqueued to this queue.
+ * The scheduling type to be used is that specified in each individual event.
+ * This flag can only be set when configuring queues on devices reporting the
+ * @ref RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES capability.
  *
+ * Without this flag, only events with the specific scheduling type configured at queue setup
+ * can be sent to the queue.
+ *
+ * @see RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES
  * @see RTE_SCHED_TYPE_ORDERED, RTE_SCHED_TYPE_ATOMIC, RTE_SCHED_TYPE_PARALLEL
  * @see rte_event_enqueue_burst()
  */
 #define RTE_EVENT_QUEUE_CFG_SINGLE_LINK        (1ULL << 1)
 /**< This event queue links only to a single event port.
- *
+ * No load-balancing of events is performed, as all events
+ * sent to this queue end up at the same event port.
+ * The number of queues on which this flag is to be set must be
+ * configured at device configuration time, by setting the
+ * @ref rte_event_dev_config.nb_single_link_event_port_queues
+ * parameter appropriately.
+ *
+ * This flag serves as a hint only; any devices without specific
+ * support for single-link queues can fall back automatically to
+ * using regular queues with a single destination port.
+ *
+ *  @see rte_event_dev_info.max_single_link_event_port_queue_pairs
+ *  @see rte_event_dev_config.nb_single_link_event_port_queues
  *  @see rte_event_port_setup(), rte_event_port_link()
  */
 
@@ -715,56 +739,75 @@ rte_event_dev_configure(uint8_t dev_id,
 struct rte_event_queue_conf {
 	uint32_t nb_atomic_flows;
 	/**< The maximum number of active flows this queue can track at any
-	 * given time. If the queue is configured for atomic scheduling (by
-	 * applying the RTE_EVENT_QUEUE_CFG_ALL_TYPES flag to event_queue_cfg
-	 * or RTE_SCHED_TYPE_ATOMIC flag to schedule_type), then the
-	 * value must be in the range of [1, nb_event_queue_flows], which was
-	 * previously provided in rte_event_dev_configure().
+	 * given time.
+	 *
+	 * If the queue is configured for atomic scheduling (by
+	 * applying the @ref RTE_EVENT_QUEUE_CFG_ALL_TYPES flag to
+	 * @ref rte_event_queue_conf.event_queue_cfg
+	 * or @ref RTE_SCHED_TYPE_ATOMIC flag to @ref rte_event_queue_conf.schedule_type), then the
+	 * value must be in the range of [1, @ref rte_event_dev_config.nb_event_queue_flows],
+	 * which was previously provided in rte_event_dev_configure().
+	 *
+	 * If the queue is not configured for atomic scheduling this value is ignored.
 	 */
 	uint32_t nb_atomic_order_sequences;
 	/**< The maximum number of outstanding events waiting to be
 	 * reordered by this queue. In other words, the number of entries in
 	 * this queue’s reorder buffer.When the number of events in the
 	 * reorder buffer reaches to *nb_atomic_order_sequences* then the
-	 * scheduler cannot schedule the events from this queue and invalid
-	 * event will be returned from dequeue until one or more entries are
+	 * scheduler cannot schedule the events from this queue and no
+	 * events will be returned from dequeue until one or more entries are
 	 * freed up/released.
+	 *
 	 * If the queue is configured for ordered scheduling (by applying the
-	 * RTE_EVENT_QUEUE_CFG_ALL_TYPES flag to event_queue_cfg or
-	 * RTE_SCHED_TYPE_ORDERED flag to schedule_type), then the value must
-	 * be in the range of [1, nb_event_queue_flows], which was
+	 * @ref RTE_EVENT_QUEUE_CFG_ALL_TYPES flag to @ref rte_event_queue_conf.event_queue_cfg or
+	 * @ref RTE_SCHED_TYPE_ORDERED flag to @ref rte_event_queue_conf.schedule_type),
+	 * then the value must be in the range of
+	 * [1, @ref rte_event_dev_config.nb_event_queue_flows], which was
 	 * previously supplied to rte_event_dev_configure().
+	 *
+	 * If the queue is not configured for ordered scheduling, then this value is ignored.
 	 */
 	uint32_t event_queue_cfg;
 	/**< Queue cfg flags(EVENT_QUEUE_CFG_) */
 	uint8_t schedule_type;
 	/**< Queue schedule type(RTE_SCHED_TYPE_*).
-	 * Valid when RTE_EVENT_QUEUE_CFG_ALL_TYPES bit is not set in
-	 * event_queue_cfg.
+	 * Valid when @ref RTE_EVENT_QUEUE_CFG_ALL_TYPES flag is not set in
+	 * @ref rte_event_queue_conf.event_queue_cfg.
+	 *
+	 * If the @ref RTE_EVENT_QUEUE_CFG_ALL_TYPES flag is set, then this field is ignored.
+	 *
+	 * @see RTE_SCHED_TYPE_ORDERED, RTE_SCHED_TYPE_ATOMIC, RTE_SCHED_TYPE_PARALLEL
 	 */
 	uint8_t priority;
 	/**< Priority for this event queue relative to other event queues.
 	 * The requested priority should in the range of
-	 * [RTE_EVENT_DEV_PRIORITY_HIGHEST, RTE_EVENT_DEV_PRIORITY_LOWEST].
+	 * [@ref RTE_EVENT_DEV_PRIORITY_HIGHEST, @ref RTE_EVENT_DEV_PRIORITY_LOWEST].
 	 * The implementation shall normalize the requested priority to
 	 * event device supported priority value.
-	 * Valid when the device has RTE_EVENT_DEV_CAP_QUEUE_QOS capability
+	 *
+	 * Valid when the device has @ref RTE_EVENT_DEV_CAP_QUEUE_QOS capability,
+	 * ignored otherwise.
 	 */
 	uint8_t weight;
 	/**< Weight of the event queue relative to other event queues.
 	 * The requested weight should be in the range of
-	 * [RTE_EVENT_DEV_WEIGHT_HIGHEST, RTE_EVENT_DEV_WEIGHT_LOWEST].
+	 * [@ref RTE_EVENT_QUEUE_WEIGHT_HIGHEST, @ref RTE_EVENT_QUEUE_WEIGHT_LOWEST].
 	 * The implementation shall normalize the requested weight to event
 	 * device supported weight value.
-	 * Valid when the device has RTE_EVENT_DEV_CAP_QUEUE_QOS capability.
+	 *
+	 * Valid when the device has @ref RTE_EVENT_DEV_CAP_QUEUE_QOS capability,
+	 * ignored otherwise.
 	 */
 	uint8_t affinity;
 	/**< Affinity of the event queue relative to other event queues.
 	 * The requested affinity should be in the range of
-	 * [RTE_EVENT_DEV_AFFINITY_HIGHEST, RTE_EVENT_DEV_AFFINITY_LOWEST].
+	 * [@ref RTE_EVENT_QUEUE_AFFINITY_HIGHEST, @ref RTE_EVENT_QUEUE_AFFINITY_LOWEST].
 	 * The implementation shall normalize the requested affinity to event
 	 * device supported affinity value.
-	 * Valid when the device has RTE_EVENT_DEV_CAP_QUEUE_QOS capability.
+	 *
+	 * Valid when the device has @ref RTE_EVENT_DEV_CAP_QUEUE_QOS capability,
+	 * ignored otherwise.
 	 */
 };
 
@@ -779,7 +822,7 @@ struct rte_event_queue_conf {
  *   The identifier of the device.
  * @param queue_id
  *   The index of the event queue to get the configuration information.
- *   The value must be in the range [0, nb_event_queues - 1]
+ *   The value must be in the range [0, @ref rte_event_dev_config.nb_event_queues - 1]
  *   previously supplied to rte_event_dev_configure().
  * @param[out] queue_conf
  *   The pointer to the default event queue configuration data.
@@ -800,7 +843,8 @@ rte_event_queue_default_conf_get(uint8_t dev_id, uint8_t queue_id,
  *   The identifier of the device.
  * @param queue_id
  *   The index of the event queue to setup. The value must be in the range
- *   [0, nb_event_queues - 1] previously supplied to rte_event_dev_configure().
+ *   [0, @ref rte_event_dev_config.nb_event_queues - 1] previously supplied to
+ *   rte_event_dev_configure().
  * @param queue_conf
  *   The pointer to the configuration data to be used for the event queue.
  *   NULL value is allowed, in which case default configuration	used.
@@ -816,43 +860,44 @@ rte_event_queue_setup(uint8_t dev_id, uint8_t queue_id,
 		      const struct rte_event_queue_conf *queue_conf);
 
 /**
- * The priority of the queue.
+ * Queue attribute id for the priority of the queue.
  */
 #define RTE_EVENT_QUEUE_ATTR_PRIORITY 0
 /**
- * The number of atomic flows configured for the queue.
+ * Queue attribute id for the number of atomic flows configured for the queue.
  */
 #define RTE_EVENT_QUEUE_ATTR_NB_ATOMIC_FLOWS 1
 /**
- * The number of atomic order sequences configured for the queue.
+ * Queue attribute id for the number of atomic order sequences configured for the queue.
  */
 #define RTE_EVENT_QUEUE_ATTR_NB_ATOMIC_ORDER_SEQUENCES 2
 /**
- * The cfg flags for the queue.
+ * Queue attribute id for the cfg flags for the queue.
  */
 #define RTE_EVENT_QUEUE_ATTR_EVENT_QUEUE_CFG 3
 /**
- * The schedule type of the queue.
+ * Queue attribute id for the schedule type of the queue.
  */
 #define RTE_EVENT_QUEUE_ATTR_SCHEDULE_TYPE 4
 /**
- * The weight of the queue.
+ * Queue attribute id for the weight of the queue.
  */
 #define RTE_EVENT_QUEUE_ATTR_WEIGHT 5
 /**
- * Affinity of the queue.
+ * Queue attribute id for the affinity of the queue.
  */
 #define RTE_EVENT_QUEUE_ATTR_AFFINITY 6
 
 /**
- * Get an attribute from a queue.
+ * Get an attribute or property of an event queue.
  *
  * @param dev_id
- *   Eventdev id
+ *   The identifier of the device.
  * @param queue_id
- *   Eventdev queue id
+ *   The index of the event queue to query. The value must be in the range
+ *   [0, nb_event_queues - 1] previously supplied to rte_event_dev_configure().
  * @param attr_id
- *   The attribute ID to retrieve
+ *   The attribute ID to retrieve (RTE_EVENT_QUEUE_ATTR_*)
  * @param[out] attr_value
  *   A pointer that will be filled in with the attribute value if successful
  *
@@ -861,8 +906,8 @@ rte_event_queue_setup(uint8_t dev_id, uint8_t queue_id,
  *   - -EINVAL: invalid device, queue or attr_id provided, or attr_value was
  *		NULL
  *   - -EOVERFLOW: returned when attr_id is set to
- *   RTE_EVENT_QUEUE_ATTR_SCHEDULE_TYPE and event_queue_cfg is set to
- *   RTE_EVENT_QUEUE_CFG_ALL_TYPES
+ *   @ref RTE_EVENT_QUEUE_ATTR_SCHEDULE_TYPE and @ref RTE_EVENT_QUEUE_CFG_ALL_TYPES is
+ *   set in the queue configuration flags.
  */
 int
 rte_event_queue_attr_get(uint8_t dev_id, uint8_t queue_id, uint32_t attr_id,
@@ -872,11 +917,13 @@ rte_event_queue_attr_get(uint8_t dev_id, uint8_t queue_id, uint32_t attr_id,
  * Set an event queue attribute.
  *
  * @param dev_id
- *   Eventdev id
+ *   The identifier of the device.
  * @param queue_id
- *   Eventdev queue id
+ *   The index of the event queue to configure. The value must be in the range
+ *   [0, @ref rte_event_dev_config.nb_event_queues - 1] previously
+ *   supplied to rte_event_dev_configure().
  * @param attr_id
- *   The attribute ID to set
+ *   The attribute ID to set (RTE_EVENT_QUEUE_ATTR_*)
  * @param attr_value
  *   The attribute value to set
  *
@@ -902,7 +949,10 @@ rte_event_queue_attr_set(uint8_t dev_id, uint8_t queue_id, uint32_t attr_id,
  */
 #define RTE_EVENT_PORT_CFG_SINGLE_LINK         (1ULL << 1)
 /**< This event port links only to a single event queue.
+ * The queue it links with should be similarly configured with the
+ * @ref RTE_EVENT_QUEUE_CFG_SINGLE_LINK flag.
  *
+ *  @see RTE_EVENT_QUEUE_CFG_SINGLE_LINK
  *  @see rte_event_port_setup(), rte_event_port_link()
  */
 #define RTE_EVENT_PORT_CFG_HINT_PRODUCER       (1ULL << 2)
@@ -918,7 +968,7 @@ rte_event_queue_attr_set(uint8_t dev_id, uint8_t queue_id, uint32_t attr_id,
 #define RTE_EVENT_PORT_CFG_HINT_CONSUMER       (1ULL << 3)
 /**< Hint that this event port will primarily dequeue events from the system.
  * A PMD can optimize its internal workings by assuming that this port is
- * primarily going to consume events, and not enqueue FORWARD or RELEASE
+ * primarily going to consume events, and not enqueue NEW or FORWARD
  * events.
  *
  * Note that this flag is only a hint, so PMDs must operate under the
@@ -944,48 +994,55 @@ struct rte_event_port_conf {
 	/**< A backpressure threshold for new event enqueues on this port.
 	 * Use for *closed system* event dev where event capacity is limited,
 	 * and cannot exceed the capacity of the event dev.
+	 *
 	 * Configuring ports with different thresholds can make higher priority
 	 * traffic less likely to  be backpressured.
 	 * For example, a port used to inject NIC Rx packets into the event dev
 	 * can have a lower threshold so as not to overwhelm the device,
 	 * while ports used for worker pools can have a higher threshold.
-	 * This value cannot exceed the *nb_events_limit*
+	 * This value cannot exceed the @ref rte_event_dev_config.nb_events_limit value
 	 * which was previously supplied to rte_event_dev_configure().
-	 * This should be set to '-1' for *open system*.
+	 *
+	 * This should be set to '-1' for *open system*, i.e. when
+	 * @ref rte_event_dev_info.max_num_events == -1.
 	 */
 	uint16_t dequeue_depth;
-	/**< Configure number of bulk dequeues for this event port.
-	 * This value cannot exceed the *nb_event_port_dequeue_depth*
-	 * which previously supplied to rte_event_dev_configure().
-	 * Ignored when device is not RTE_EVENT_DEV_CAP_BURST_MODE capable.
+	/**< Configure the maximum size of burst dequeues for this event port.
+	 * This value cannot exceed the @ref rte_event_dev_config.nb_event_port_dequeue_depth value
+	 * which was previously supplied to rte_event_dev_configure().
+	 *
+	 * Ignored when device does not support the @ref RTE_EVENT_DEV_CAP_BURST_MODE capability.
 	 */
 	uint16_t enqueue_depth;
-	/**< Configure number of bulk enqueues for this event port.
-	 * This value cannot exceed the *nb_event_port_enqueue_depth*
-	 * which previously supplied to rte_event_dev_configure().
-	 * Ignored when device is not RTE_EVENT_DEV_CAP_BURST_MODE capable.
+	/**< Configure the maximum size of burst enqueues to this event port.
+	 * This value cannot exceed the @ref rte_event_dev_config.nb_event_port_enqueue_depth value
+	 * which was previously supplied to rte_event_dev_configure().
+	 *
+	 * Ignored when device does not support the @ref RTE_EVENT_DEV_CAP_BURST_MODE capability.
 	 */
-	uint32_t event_port_cfg; /**< Port cfg flags(EVENT_PORT_CFG_) */
+	uint32_t event_port_cfg; /**< Port configuration flags(EVENT_PORT_CFG_) */
 };
 
 /**
  * Retrieve the default configuration information of an event port designated
  * by its *port_id* from the event driver for an event device.
  *
- * This function intended to be used in conjunction with rte_event_port_setup()
- * where caller needs to set up the port by overriding few default values.
+ * This function is intended to be used in conjunction with rte_event_port_setup()
+ * where the caller can set up the port by overriding just a few default values.
  *
  * @param dev_id
  *   The identifier of the device.
  * @param port_id
  *   The index of the event port to get the configuration information.
- *   The value must be in the range [0, nb_event_ports - 1]
+ *   The value must be in the range [0, @ref rte_event_dev_config.nb_event_ports - 1]
  *   previously supplied to rte_event_dev_configure().
  * @param[out] port_conf
- *   The pointer to the default event port configuration data
+ *   The pointer to a structure to store the default event port configuration data.
  * @return
  *   - 0: Success, driver updates the default event port configuration data.
  *   - <0: Error code returned by the driver info get function.
+ *      - -EINVAL - invalid input parameter
+ *      - -ENOTSUP - function is not supported for this device
  *
  * @see rte_event_port_setup()
  */
@@ -1000,18 +1057,24 @@ rte_event_port_default_conf_get(uint8_t dev_id, uint8_t port_id,
  *   The identifier of the device.
  * @param port_id
  *   The index of the event port to setup. The value must be in the range
- *   [0, nb_event_ports - 1] previously supplied to rte_event_dev_configure().
+ *   [0, @ref rte_event_dev_config.nb_event_ports - 1] previously supplied to
+ *   rte_event_dev_configure().
  * @param port_conf
- *   The pointer to the configuration data to be used for the queue.
- *   NULL value is allowed, in which case default configuration	used.
+ *   The pointer to the configuration data to be used for the port.
+ *   NULL value is allowed, in which case the default configuration is used.
  *
  * @see rte_event_port_default_conf_get()
  *
  * @return
  *   - 0: Success, event port correctly set up.
  *   - <0: Port configuration failed
- *   - (-EDQUOT) Quota exceeded(Application tried to link the queue configured
- *   with RTE_EVENT_QUEUE_CFG_SINGLE_LINK to more than one event ports)
+ *     - -EINVAL - Invalid input parameter
+ *     - -EBUSY - Port already started
+ *     - -ENOTSUP - Function not supported on this device, or a NULL pointer passed
+ *        as the port_conf parameter, and no default configuration function available
+ *        for this device.
+ *     - -EDQUOT - Application tried to link a queue configured
+ *      with @ref RTE_EVENT_QUEUE_CFG_SINGLE_LINK to more than one event port.
  */
 int
 rte_event_port_setup(uint8_t dev_id, uint8_t port_id,
@@ -1041,8 +1104,9 @@ typedef void (*rte_eventdev_port_flush_t)(uint8_t dev_id,
  * @param dev_id
  *   The identifier of the device.
  * @param port_id
- *   The index of the event port to setup. The value must be in the range
- *   [0, nb_event_ports - 1] previously supplied to rte_event_dev_configure().
+ *   The index of the event port to quiesce. The value must be in the range
+ *   [0, @ref rte_event_dev_config.nb_event_ports - 1]
+ *   previously supplied to rte_event_dev_configure().
  * @param release_cb
  *   Callback function invoked once per flushed event.
  * @param args
-- 
2.40.1


^ permalink raw reply	[flat|nested] 123+ messages in thread
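[Editor's note: one of the rules documented in this patch — choosing `new_event_threshold` for open vs. closed systems — can be sketched as a self-contained helper. The function name and the clamping policy are illustrative assumptions for this note, not DPDK API.]

```c
#include <assert.h>
#include <stdint.h>

/* Pick a new_event_threshold per the documented rules: for an
 * *open system* (device reports max_num_events == -1) the threshold
 * must be -1; for a *closed system* it may not exceed the
 * nb_events_limit previously passed to rte_event_dev_configure(),
 * so an over-large request is clamped here (an illustrative choice;
 * an application could equally treat it as an error). */
static int32_t
pick_new_event_threshold(int32_t max_num_events, int32_t nb_events_limit,
			 int32_t requested)
{
	if (max_num_events == -1)	/* open system */
		return -1;
	if (requested > nb_events_limit) /* clamp to device capacity */
		return nb_events_limit;
	return requested;
}
```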

* [PATCH v2 09/11] eventdev: improve doxygen comments for control APIs
  2024-01-19 17:43 ` [PATCH v2 00/11] improve eventdev API specification/documentation Bruce Richardson
                     ` (7 preceding siblings ...)
  2024-01-19 17:43   ` [PATCH v2 08/11] eventdev: improve doxygen comments on config fns Bruce Richardson
@ 2024-01-19 17:43   ` Bruce Richardson
  2024-01-23 10:10     ` Mattias Rönnblom
  2024-01-19 17:43   ` [PATCH v2 10/11] eventdev: RFC clarify comments on scheduling types Bruce Richardson
                     ` (3 subsequent siblings)
  12 siblings, 1 reply; 123+ messages in thread
From: Bruce Richardson @ 2024-01-19 17:43 UTC (permalink / raw)
  To: dev
  Cc: jerinj, mattias.ronnblom, abdullah.sevincer, sachin.saxena,
	hemant.agrawal, pbhagavatula, pravin.pathak, Bruce Richardson

The doxygen comments for the port attributes, start and stop (and
related functions) are improved.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
 lib/eventdev/rte_eventdev.h | 34 +++++++++++++++++++++++-----------
 1 file changed, 23 insertions(+), 11 deletions(-)

diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
index 1fda8a5a13..2c6576e921 100644
--- a/lib/eventdev/rte_eventdev.h
+++ b/lib/eventdev/rte_eventdev.h
@@ -1117,19 +1117,21 @@ rte_event_port_quiesce(uint8_t dev_id, uint8_t port_id,
 		       rte_eventdev_port_flush_t release_cb, void *args);
 
 /**
- * The queue depth of the port on the enqueue side
+ * Port attribute id for the maximum size of a burst enqueue operation supported on a port
  */
 #define RTE_EVENT_PORT_ATTR_ENQ_DEPTH 0
 /**
- * The queue depth of the port on the dequeue side
+ * Port attribute id for the maximum size of a dequeue burst which can be returned from a port
  */
 #define RTE_EVENT_PORT_ATTR_DEQ_DEPTH 1
 /**
- * The new event threshold of the port
+ * Port attribute id for the new event threshold of the port.
+ * Once the number of events in the system exceeds this threshold, the enqueue of NEW-type
+ * events will fail.
  */
 #define RTE_EVENT_PORT_ATTR_NEW_EVENT_THRESHOLD 2
 /**
- * The implicit release disable attribute of the port
+ * Port attribute id for the implicit release disable attribute of the port
  */
 #define RTE_EVENT_PORT_ATTR_IMPLICIT_RELEASE_DISABLE 3
 
@@ -1137,11 +1139,13 @@ rte_event_port_quiesce(uint8_t dev_id, uint8_t port_id,
  * Get an attribute from a port.
  *
  * @param dev_id
- *   Eventdev id
+ *   The identifier of the device.
  * @param port_id
- *   Eventdev port id
+ *   The index of the event port to query. The value must be in the range
+ *   [0, @ref rte_event_dev_config.nb_event_ports - 1]
+ *   previously supplied to rte_event_dev_configure().
  * @param attr_id
- *   The attribute ID to retrieve
+ *   The attribute ID to retrieve (RTE_EVENT_PORT_ATTR_*)
  * @param[out] attr_value
  *   A pointer that will be filled in with the attribute value if successful
  *
@@ -1156,8 +1160,8 @@ rte_event_port_attr_get(uint8_t dev_id, uint8_t port_id, uint32_t attr_id,
 /**
  * Start an event device.
  *
- * The device start step is the last one and consists of setting the event
- * queues to start accepting the events and schedules to event ports.
+ * The device start step is the last one in device setup, and enables the event
+ * ports and queues to start accepting events and scheduling them to event ports.
  *
  * On success, all basic functions exported by the API (event enqueue,
  * event dequeue and so on) can be invoked.
@@ -1166,6 +1170,8 @@ rte_event_port_attr_get(uint8_t dev_id, uint8_t port_id, uint32_t attr_id,
  *   Event device identifier
  * @return
  *   - 0: Success, device started.
+ *   - -EINVAL:  Invalid device id provided
+ *   - -ENOTSUP: Device does not support this operation.
  *   - -ESTALE : Not all ports of the device are configured
  *   - -ENOLINK: Not all queues are linked, which could lead to deadlock.
  */
@@ -1208,12 +1214,16 @@ typedef void (*rte_eventdev_stop_flush_t)(uint8_t dev_id,
  * callback function must be registered in every process that can call
  * rte_event_dev_stop().
  *
+ * Only one callback function may be registered. Each new call replaces
+ * the existing registered callback function with the new function passed in.
+ *
  * To unregister a callback, call this function with a NULL callback pointer.
  *
  * @param dev_id
  *   The identifier of the device.
  * @param callback
- *   Callback function invoked once per flushed event.
+ *   Callback function to be invoked once per flushed event.
+ *   Pass NULL to unset any previously-registered callback function.
  * @param userdata
  *   Argument supplied to callback.
  *
@@ -1235,7 +1245,9 @@ int rte_event_dev_stop_flush_callback_register(uint8_t dev_id,
  * @return
  *  - 0 on successfully closing device
  *  - <0 on failure to close device
- *  - (-EAGAIN) if device is busy
+ *    - -EINVAL - invalid device id
+ *    - -ENOTSUP - operation not supported for this device
+ *    - -EAGAIN - device is busy
  */
 int
 rte_event_dev_close(uint8_t dev_id);
-- 
2.40.1


^ permalink raw reply	[flat|nested] 123+ messages in thread
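[Editor's note: the single-slot registration semantics documented for the stop-flush callback — each call replaces the previous registration, NULL unregisters — can be modelled in a few lines. All names below are illustrative stand-ins, not the DPDK symbols.]

```c
#include <assert.h>
#include <stddef.h>

/* Callback shape loosely mirroring a per-event flush callback. */
typedef void (*stop_flush_cb)(int dev_id, void *ev, void *userdata);

static stop_flush_cb g_cb;	/* single registration slot */
static void *g_userdata;

/* Each call replaces any existing registration; NULL unregisters. */
static void
register_stop_flush(stop_flush_cb cb, void *userdata)
{
	g_cb = cb;
	g_userdata = userdata;
}

/* On device stop, invoke the callback once per flushed event, if any
 * callback is currently registered; returns the number of invocations. */
static int
flush_events(void *events[], int nb)
{
	int i, called = 0;

	if (g_cb == NULL)
		return 0;
	for (i = 0; i < nb; i++) {
		g_cb(0, events[i], g_userdata);
		called++;
	}
	return called;
}

/* Example callback counting how many events were flushed. */
static int g_count;
static void
counting_cb(int dev_id, void *ev, void *userdata)
{
	(void)dev_id; (void)ev; (void)userdata;
	g_count++;
}
```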

* [PATCH v2 10/11] eventdev: RFC clarify comments on scheduling types
  2024-01-19 17:43 ` [PATCH v2 00/11] improve eventdev API specification/documentation Bruce Richardson
                     ` (8 preceding siblings ...)
  2024-01-19 17:43   ` [PATCH v2 09/11] eventdev: improve doxygen comments for control APIs Bruce Richardson
@ 2024-01-19 17:43   ` Bruce Richardson
  2024-01-23 16:19     ` Mattias Rönnblom
  2024-01-19 17:43   ` [PATCH v2 11/11] eventdev: RFC clarify docs on event object fields Bruce Richardson
                     ` (2 subsequent siblings)
  12 siblings, 1 reply; 123+ messages in thread
From: Bruce Richardson @ 2024-01-19 17:43 UTC (permalink / raw)
  To: dev
  Cc: jerinj, mattias.ronnblom, abdullah.sevincer, sachin.saxena,
	hemant.agrawal, pbhagavatula, pravin.pathak, Bruce Richardson

The description of ordered and atomic scheduling given in the eventdev
doxygen documentation was not always clear. Try to simplify this so
that it is clearer for the end-user of the application.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---

NOTE TO REVIEWERS:
I've updated this based on my understanding of what these scheduling
types are meant to do. It matches my understanding of the support
offered by our Intel DLB2 driver, as well as the SW eventdev, and I
believe the DSW eventdev too. If it does not match the behaviour of
other eventdevs, let's have a discussion to see if we can reach a good
definition of the behaviour that is common.
---
 lib/eventdev/rte_eventdev.h | 47 ++++++++++++++++++++-----------------
 1 file changed, 25 insertions(+), 22 deletions(-)

diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
index 2c6576e921..cb13602ffb 100644
--- a/lib/eventdev/rte_eventdev.h
+++ b/lib/eventdev/rte_eventdev.h
@@ -1313,26 +1313,24 @@ struct rte_event_vector {
 #define RTE_SCHED_TYPE_ORDERED          0
 /**< Ordered scheduling
  *
- * Events from an ordered flow of an event queue can be scheduled to multiple
+ * Events from an ordered event queue can be scheduled to multiple
  * ports for concurrent processing while maintaining the original event order.
  * This scheme enables the user to achieve high single flow throughput by
- * avoiding SW synchronization for ordering between ports which bound to cores.
- *
- * The source flow ordering from an event queue is maintained when events are
- * enqueued to their destination queue within the same ordered flow context.
- * An event port holds the context until application call
- * rte_event_dequeue_burst() from the same port, which implicitly releases
- * the context.
- * User may allow the scheduler to release the context earlier than that
- * by invoking rte_event_enqueue_burst() with RTE_EVENT_OP_RELEASE operation.
- *
- * Events from the source queue appear in their original order when dequeued
- * from a destination queue.
- * Event ordering is based on the received event(s), but also other
- * (newly allocated or stored) events are ordered when enqueued within the same
- * ordered context. Events not enqueued (e.g. released or stored) within the
- * context are  considered missing from reordering and are skipped at this time
- * (but can be ordered again within another context).
+ * avoiding SW synchronization for ordering between ports which are polled by
+ * different cores.
+ *
+ * As events are scheduled to ports/cores, the original event order from the
+ * source event queue is recorded internally in the scheduler. As events are
+ * returned (via FORWARD type enqueue) to the scheduler, the original event
+ * order is restored before the events are enqueued into their new destination
+ * queue.
+ *
+ * Any events not forwarded, i.e. dropped explicitly via RELEASE or implicitly
+ * released by the next dequeue from a port, are skipped by the reordering
+ * stage and do not affect the reordering of returned events.
+ *
+ * The ordering behaviour of NEW events with respect to FORWARD events is
+ * undefined and implementation dependent.
  *
  * @see rte_event_queue_setup(), rte_event_dequeue_burst(), RTE_EVENT_OP_RELEASE
  */
@@ -1340,18 +1338,23 @@ struct rte_event_vector {
 #define RTE_SCHED_TYPE_ATOMIC           1
 /**< Atomic scheduling
  *
- * Events from an atomic flow of an event queue can be scheduled only to a
+ * Events from an atomic flow, identified by @ref rte_event.flow_id,
+ * of an event queue can be scheduled only to a
  * single port at a time. The port is guaranteed to have exclusive (atomic)
  * access to the associated flow context, which enables the user to avoid SW
  * synchronization. Atomic flows also help to maintain event ordering
- * since only one port at a time can process events from a flow of an
+ * since only one port at a time can process events from each flow of an
  * event queue.
  *
- * The atomic queue synchronization context is dedicated to the port until
+ * The atomic queue synchronization context for a flow is dedicated to the port until
  * application call rte_event_dequeue_burst() from the same port,
  * which implicitly releases the context. User may allow the scheduler to
  * release the context earlier than that by invoking rte_event_enqueue_burst()
- * with RTE_EVENT_OP_RELEASE operation.
+ * with RTE_EVENT_OP_RELEASE operation for each event from that flow. The context
+ * is only released once the last event from the flow, outstanding on the port,
+ * is released. So long as there is one event from an atomic flow scheduled to
+ * a port/core (including any events in the port's dequeue queue, not yet read
+ * by the application), that port will hold the synchronization context.
  *
  * @see rte_event_queue_setup(), rte_event_dequeue_burst(), RTE_EVENT_OP_RELEASE
  */
-- 
2.40.1


^ permalink raw reply	[flat|nested] 123+ messages in thread

* [PATCH v2 11/11] eventdev: RFC clarify docs on event object fields
  2024-01-19 17:43 ` [PATCH v2 00/11] improve eventdev API specification/documentation Bruce Richardson
                     ` (9 preceding siblings ...)
  2024-01-19 17:43   ` [PATCH v2 10/11] eventdev: RFC clarify comments on scheduling types Bruce Richardson
@ 2024-01-19 17:43   ` Bruce Richardson
  2024-01-24 11:34     ` Mattias Rönnblom
  2024-02-01  9:35     ` Bruce Richardson
  2024-02-02 12:39   ` [PATCH v3 00/11] improve eventdev API specification/documentation Bruce Richardson
  2024-02-21 10:32   ` [PATCH v4 00/12] improve eventdev API specification/documentation Bruce Richardson
  12 siblings, 2 replies; 123+ messages in thread
From: Bruce Richardson @ 2024-01-19 17:43 UTC (permalink / raw)
  To: dev
  Cc: jerinj, mattias.ronnblom, abdullah.sevincer, sachin.saxena,
	hemant.agrawal, pbhagavatula, pravin.pathak, Bruce Richardson

Clarify the meaning of the NEW, FORWARD and RELEASE event types.
For the fields in "rte_event" struct, enhance the comments on each to
clarify the field's use, and whether it is preserved between enqueue and
dequeue, and its role, if any, in scheduling.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---

As with the previous patch, please review this patch to ensure that the
expected semantics of the various event types and event fields have not
changed in an unexpected way.
---
 lib/eventdev/rte_eventdev.h | 105 ++++++++++++++++++++++++++----------
 1 file changed, 77 insertions(+), 28 deletions(-)

diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
index cb13602ffb..4eff1c4958 100644
--- a/lib/eventdev/rte_eventdev.h
+++ b/lib/eventdev/rte_eventdev.h
@@ -1416,21 +1416,25 @@ struct rte_event_vector {

 /* Event enqueue operations */
 #define RTE_EVENT_OP_NEW                0
-/**< The event producers use this operation to inject a new event to the
+/**< The @ref rte_event.op field should be set to this type to inject a new event to the
  * event device.
  */
 #define RTE_EVENT_OP_FORWARD            1
-/**< The CPU use this operation to forward the event to different event queue or
- * change to new application specific flow or schedule type to enable
- * pipelining.
+/**< SW should set the @ref rte_event.op field to this type to return a
+ * previously dequeued event to the event device for further processing.
  *
- * This operation must only be enqueued to the same port that the
+ * This event *must* be enqueued to the same port that the
  * event to be forwarded was dequeued from.
+ *
+ * The event's fields, including (but not limited to) flow_id, scheduling type,
+ * destination queue, and event payload e.g. mbuf pointer, may all be updated as
+ * desired by software, but the @ref rte_event.impl_opaque field must
+ * be kept to the same value as was present when the event was dequeued.
  */
 #define RTE_EVENT_OP_RELEASE            2
 /**< Release the flow context associated with the schedule type.
  *
- * If current flow's scheduler type method is *RTE_SCHED_TYPE_ATOMIC*
+ * If current flow's scheduler type method is @ref RTE_SCHED_TYPE_ATOMIC
  * then this function hints the scheduler that the user has completed critical
  * section processing in the current atomic context.
  * The scheduler is now allowed to schedule events from the same flow from
@@ -1442,21 +1446,19 @@ struct rte_event_vector {
  * performance, but the user needs to design carefully the split into critical
  * vs non-critical sections.
  *
- * If current flow's scheduler type method is *RTE_SCHED_TYPE_ORDERED*
- * then this function hints the scheduler that the user has done all that need
- * to maintain event order in the current ordered context.
- * The scheduler is allowed to release the ordered context of this port and
- * avoid reordering any following enqueues.
- *
- * Early ordered context release may increase parallelism and thus system
- * performance.
+ * If current flow's scheduler type method is @ref RTE_SCHED_TYPE_ORDERED
+ * then this function informs the scheduler that the current event has
+ * completed processing and will not be returned to the scheduler, i.e.
+ * it has been dropped, and so the reordering context for that event
+ * should be considered filled.
  *
- * If current flow's scheduler type method is *RTE_SCHED_TYPE_PARALLEL*
+ * If current flow's scheduler type method is @ref RTE_SCHED_TYPE_PARALLEL
  * or no scheduling context is held then this function may be an NOOP,
  * depending on the implementation.
  *
  * This operation must only be enqueued to the same port that the
- * event to be released was dequeued from.
+ * event to be released was dequeued from. The @ref rte_event.impl_opaque
+ * field in the release event must match that in the original dequeued event.
  */

 /**
@@ -1473,53 +1475,100 @@ struct rte_event {
 			/**< Targeted flow identifier for the enqueue and
 			 * dequeue operation.
 			 * The value must be in the range of
-			 * [0, nb_event_queue_flows - 1] which
+			 * [0, @ref rte_event_dev_config.nb_event_queue_flows - 1] which
 			 * previously supplied to rte_event_dev_configure().
+			 *
+			 * For @ref RTE_SCHED_TYPE_ATOMIC, this field is used to identify a
+			 * flow context for atomicity, such that events from each individual flow
+			 * will only be scheduled to one port at a time.
+			 *
+			 * This field is preserved between enqueue and dequeue when
+			 * a device reports the @ref RTE_EVENT_DEV_CAP_CARRY_FLOW_ID
+			 * capability. Otherwise the value is implementation dependent
+			 * on dequeue.
 			 */
 			uint32_t sub_event_type:8;
 			/**< Sub-event types based on the event source.
+			 *
+			 * This field is preserved between enqueue and dequeue.
+			 * This field is for SW or event adapter use,
+			 * and is unused in scheduling decisions.
+			 *
 			 * @see RTE_EVENT_TYPE_CPU
 			 */
 			uint32_t event_type:4;
-			/**< Event type to classify the event source.
-			 * @see RTE_EVENT_TYPE_ETHDEV, (RTE_EVENT_TYPE_*)
+			/**< Event type to classify the event source. (RTE_EVENT_TYPE_*)
+			 *
+			 * This field is preserved between enqueue and dequeue.
+			 * This field is for SW or event adapter use,
+			 * and is unused in scheduling decisions.
 			 */
 			uint8_t op:2;
-			/**< The type of event enqueue operation - new/forward/
-			 * etc.This field is not preserved across an instance
+			/**< The type of event enqueue operation - new/forward/ etc.
+			 *
+			 * This field is *not* preserved across an instance
 			 * and is undefined on dequeue.
-			 * @see RTE_EVENT_OP_NEW, (RTE_EVENT_OP_*)
+			 *
+			 * @see RTE_EVENT_OP_NEW
+			 * @see RTE_EVENT_OP_FORWARD
+			 * @see RTE_EVENT_OP_RELEASE
 			 */
 			uint8_t rsvd:4;
-			/**< Reserved for future use */
+			/**< Reserved for future use.
+			 *
+			 * Should be set to zero on enqueue. Zero on dequeue.
+			 */
 			uint8_t sched_type:2;
 			/**< Scheduler synchronization type (RTE_SCHED_TYPE_*)
 			 * associated with flow id on a given event queue
 			 * for the enqueue and dequeue operation.
+			 *
+			 * This field is used to determine the scheduling type
+			 * for events sent to queues where @ref RTE_EVENT_QUEUE_CFG_ALL_TYPES
+			 * is supported.
+			 * For queues where only a single scheduling type is available,
+			 * this field must be set to match the configured scheduling type.
+			 *
+			 * This field is preserved between enqueue and dequeue.
+			 *
+			 * @see RTE_SCHED_TYPE_ORDERED
+			 * @see RTE_SCHED_TYPE_ATOMIC
+			 * @see RTE_SCHED_TYPE_PARALLEL
 			 */
 			uint8_t queue_id;
 			/**< Targeted event queue identifier for the enqueue or
 			 * dequeue operation.
 			 * The value must be in the range of
-			 * [0, nb_event_queues - 1] which previously supplied to
-			 * rte_event_dev_configure().
+			 * [0, @ref rte_event_dev_config.nb_event_queues - 1] which was
+			 * previously supplied to rte_event_dev_configure().
+			 *
+			 * This field is preserved between enqueue and dequeue.
 			 */
 			uint8_t priority;
 			/**< Event priority relative to other events in the
 			 * event queue. The requested priority should in the
-			 * range of  [RTE_EVENT_DEV_PRIORITY_HIGHEST,
-			 * RTE_EVENT_DEV_PRIORITY_LOWEST].
+			 * range of  [@ref RTE_EVENT_DEV_PRIORITY_HIGHEST,
+			 * @ref RTE_EVENT_DEV_PRIORITY_LOWEST].
 			 * The implementation shall normalize the requested
 			 * priority to supported priority value.
+			 *
 			 * Valid when the device has
-			 * RTE_EVENT_DEV_CAP_EVENT_QOS capability.
+			 * @ref RTE_EVENT_DEV_CAP_EVENT_QOS capability.
+			 * Ignored otherwise.
+			 *
+			 * This field is preserved between enqueue and dequeue.
 			 */
 			uint8_t impl_opaque;
 			/**< Implementation specific opaque value.
+			 *
 			 * An implementation may use this field to hold
 			 * implementation specific value to share between
 			 * dequeue and enqueue operation.
+			 *
 			 * The application should not modify this field.
+			 * Its value is implementation dependent on dequeue,
+			 * and must be returned unmodified on enqueue when
+			 * op type is @ref RTE_EVENT_OP_FORWARD or @ref RTE_EVENT_OP_RELEASE
 			 */
 		};
 	};
--
2.40.1


^ permalink raw reply	[flat|nested] 123+ messages in thread

* Re: [PATCH v2 01/11] eventdev: improve doxygen introduction text
  2024-01-19 17:43   ` [PATCH v2 01/11] eventdev: improve doxygen introduction text Bruce Richardson
@ 2024-01-23  8:57     ` Mattias Rönnblom
  2024-01-23  9:06       ` Bruce Richardson
  2024-01-31 13:45       ` Bruce Richardson
  0 siblings, 2 replies; 123+ messages in thread
From: Mattias Rönnblom @ 2024-01-23  8:57 UTC (permalink / raw)
  To: Bruce Richardson, dev
  Cc: jerinj, mattias.ronnblom, abdullah.sevincer, sachin.saxena,
	hemant.agrawal, pbhagavatula, pravin.pathak

On 2024-01-19 18:43, Bruce Richardson wrote:
> Make some textual improvements to the introduction to eventdev and event
> devices in the eventdev header file. This text appears in the doxygen
> output for the header file, and introduces the key concepts, for
> example: events, event devices, queues, ports and scheduling.
> 

Great stuff, Bruce.

> This patch makes the following improvements:
> * small textual fixups, e.g. correcting use of singular/plural
> * rewrites of some sentences to improve clarity
> * using doxygen markdown to split the whole large block up into
>    sections, thereby making it easier to read.
> 
> No large-scale changes are made, and blocks are not reordered
> 
> Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
> ---
>   lib/eventdev/rte_eventdev.h | 112 +++++++++++++++++++++---------------
>   1 file changed, 66 insertions(+), 46 deletions(-)
> 
> diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
> index ec9b02455d..a36c89c7a4 100644
> --- a/lib/eventdev/rte_eventdev.h
> +++ b/lib/eventdev/rte_eventdev.h
> @@ -12,12 +12,13 @@
>    * @file
>    *
>    * RTE Event Device API
> + * ====================
>    *
>    * In a polling model, lcores poll ethdev ports and associated rx queues

"In a polling model, lcores pick up packets from Ethdev ports and 
associated RX queues, run the processing to completion, and enqueue 
the completed packets to a TX queue. NIC-level receive-side scaling 
(RSS) may be used to balance the load across multiple CPU cores."

I thought it might be worth being a little more verbose about what 
reference model Eventdev is compared to. Maybe you can add "traditional" 
or "archetypal", or "simple" as a prefix to the "polling model". (I 
think I would call this a "simple run-to-completion model" rather than 
"polling model".)

"By contrast, in Eventdev, ingressing* packets are fed into an event 
device, which schedules packets across available lcores, in accordance 
to its configuration. This event-driven programming model offers 
applications automatic multicore scaling, dynamic load balancing, 
pipelining, packet order maintenance, synchronization, and quality of 
service."

* Is this a word?

> - * directly to look for packet. In an event driven model, by contrast, lcores
> - * call the scheduler that selects packets for them based on programmer
> - * specified criteria. Eventdev library adds support for event driven
> - * programming model, which offer applications automatic multicore scaling,
> + * directly to look for packets. In an event driven model, in contrast, lcores
> + * call a scheduler that selects packets for them based on programmer
> + * specified criteria. The eventdev library adds support for the event driven
> + * programming model, which offers applications automatic multicore scaling,
>    * dynamic load balancing, pipelining, packet ingress order maintenance and
>    * synchronization services to simplify application packet processing.
>    *
> @@ -25,12 +26,15 @@
>    *
>    * - The application-oriented Event API that includes functions to setup
>    *   an event device (configure it, setup its queues, ports and start it), to
> - *   establish the link between queues to port and to receive events, and so on.
> + *   establish the links between queues and ports to receive events, and so on.
>    *
>    * - The driver-oriented Event API that exports a function allowing
> - *   an event poll Mode Driver (PMD) to simultaneously register itself as
> + *   an event poll Mode Driver (PMD) to register itself as
>    *   an event device driver.
>    *
> + * Application-oriented Event API
> + * ------------------------------
> + *
>    * Event device components:
>    *
>    *                     +-----------------+
> @@ -75,27 +79,33 @@
>    *            |                                                           |
>    *            +-----------------------------------------------------------+
>    *
> - * Event device: A hardware or software-based event scheduler.
> + * **Event device**: A hardware or software-based event scheduler.
>    *
> - * Event: A unit of scheduling that encapsulates a packet or other datatype
> - * like SW generated event from the CPU, Crypto work completion notification,
> - * Timer expiry event notification etc as well as metadata.
> - * The metadata includes flow ID, scheduling type, event priority, event_type,
> + * **Event**: A unit of scheduling that encapsulates a packet or other datatype,

"Event: Represents an item of work and is the smallest unit of 
scheduling. An event carries metadata, such as queue ID, scheduling 
type, and event priority, and data such as one or more packets or other 
kinds of buffers. Examples of events are a software-generated item of 
work originating from a lcore carrying a packet to be processed, a 
crypto work completion notification and a timer expiry notification."

I've found "work scheduler" a helpful term for describing what role an 
event device serves in the system, and thus an event represents an item of 
work. "Event" and "Event device" are also good names, but lead some 
people to think libevent or event loop, which is not exactly right.

> + * such as: SW generated event from the CPU, crypto work completion notification,
> + * timer expiry event notification etc., as well as metadata about the packet or data.
> + * The metadata includes a flow ID (if any), scheduling type, event priority, event_type,
>    * sub_event_type etc.
>    *
> - * Event queue: A queue containing events that are scheduled by the event dev.
> + * **Event queue**: A queue containing events that are scheduled by the event device.
>    * An event queue contains events of different flows associated with scheduling
>    * types, such as atomic, ordered, or parallel.
> + * Each event given to an eventdev must have a valid event queue id field in the metadata,
"eventdev" -> "event device"

> + * to specify on which event queue in the device the event must be placed,
> + * for later scheduling to a core.

Events aren't necessarily scheduled to cores, so remove the last part.

>    *
> - * Event port: An application's interface into the event dev for enqueue and
> + * **Event port**: An application's interface into the event dev for enqueue and
>    * dequeue operations. Each event port can be linked with one or more
>    * event queues for dequeue operations.
> - *
> - * By default, all the functions of the Event Device API exported by a PMD
> - * are lock-free functions which assume to not be invoked in parallel on
> - * different logical cores to work on the same target object. For instance,
> - * the dequeue function of a PMD cannot be invoked in parallel on two logical
> - * cores to operates on same  event port. Of course, this function
> + * Each port should be associated with a single core (enqueue and dequeue is not thread-safe).

Should, or must?

Either it's a MT safety issue, and any lcore can access the port with 
the proper serialization, or it's something where the lcore id used to 
store state between invocations, or some other mechanism that prevents a 
port from being used by multiple threads (lcore or not).

> + * To schedule events to a core, the event device will schedule them to the event port(s)
> + * being polled by that core.

"core" -> "lcore" ?

> + *
> + * *NOTE*: By default, all the functions of the Event Device API exported by a PMD
> + * are lock-free functions, which must not be invoked on the same object in parallel on
> + * different logical cores.

This is a one-sentence contradiction. The term "lock free" implies a 
data structure which is MT safe, achieving this goal without the use of 
locks. A lock-free object thus *may* be called from different threads, 
including different lcore threads.

Ports are not MT safe, and thus one port should not be acted upon by 
more than one thread (either in parallel, or throughout the lifetime of 
the event device/port; see above).

The event device is MT safe, provided the different parallel callers use 
different ports.

A more subtle question, and one with a less obvious answer, is whether the 
caller also *must* be an EAL thread, or if a registered non-EAL 
thread or even an unregistered non-EAL thread may call the "fast path" 
functions (enqueue, dequeue etc).

For EAL threads, the event device implementation may safely use 
non-preemption safe constructs (like the default ring variant and spin 
locks).

If the caller is a registered non-EAL thread or an EAL thread, the lcore 
id may be used to index various data structures.

If "lcore id"-less threads may call the fast path APIs, what are the MT 
safety guarantees in that case? Like rte_random.h, or something else.

> + * For instance, the dequeue function of a PMD cannot be invoked in parallel on two logical
> + * cores to operate on same  event port. Of course, this function
>    * can be invoked in parallel by different logical cores on different ports.
>    * It is the responsibility of the upper level application to enforce this rule.
>    *
> @@ -107,22 +117,19 @@
>    *
>    * Event devices are dynamically registered during the PCI/SoC device probing
>    * phase performed at EAL initialization time.
> - * When an Event device is being probed, a *rte_event_dev* structure and
> - * a new device identifier are allocated for that device. Then, the
> - * event_dev_init() function supplied by the Event driver matching the probed
> - * device is invoked to properly initialize the device.
> + * When an Event device is being probed, an *rte_event_dev* structure is allocated
> + * for it and the event_dev_init() function supplied by the Event driver
> + * is invoked to properly initialize the device.
>    *
> - * The role of the device init function consists of resetting the hardware or
> - * software event driver implementations.
> + * The role of the device init function is to reset the device hardware or
> + * to initialize the software event driver implementation.
>    *
> - * If the device init operation is successful, the correspondence between
> - * the device identifier assigned to the new device and its associated
> - * *rte_event_dev* structure is effectively registered.
> - * Otherwise, both the *rte_event_dev* structure and the device identifier are
> - * freed.
> + * If the device init operation is successful, the device is assigned a device
> + * id (dev_id) for application use.
> + * Otherwise, the *rte_event_dev* structure is freed.
>    *
>    * The functions exported by the application Event API to setup a device
> - * designated by its device identifier must be invoked in the following order:
> + * must be invoked in the following order:
>    *     - rte_event_dev_configure()
>    *     - rte_event_queue_setup()
>    *     - rte_event_port_setup()
> @@ -130,10 +137,15 @@
>    *     - rte_event_dev_start()
>    *
>    * Then, the application can invoke, in any order, the functions
> - * exported by the Event API to schedule events, dequeue events, enqueue events,
> - * change event queue(s) to event port [un]link establishment and so on.
> - *
> - * Application may use rte_event_[queue/port]_default_conf_get() to get the
> + * exported by the Event API to dequeue events, enqueue events,
> + * and link and unlink event queue(s) to event ports.
> + *
> + * Before configuring a device, an application should call rte_event_dev_info_get()
> + * to determine the capabilities of the event device, and any queue or port
> + * limits of that device. The parameters set in the various device configuration
> + * structures may need to be adjusted based on the max values provided in the
> + * device information structure returned from the info_get API.
> + * An application may use rte_event_[queue/port]_default_conf_get() to get the
>    * default configuration to set up an event queue or event port by
>    * overriding few default values.
>    *
> @@ -145,7 +157,11 @@
>    * when the device is stopped.
>    *
>    * Finally, an application can close an Event device by invoking the
> - * rte_event_dev_close() function.
> + * rte_event_dev_close() function. Once closed, a device cannot be
> + * reconfigured or restarted.
> + *
> + * Driver-Oriented Event API
> + * -------------------------
>    *
>    * Each function of the application Event API invokes a specific function
>    * of the PMD that controls the target device designated by its device
> @@ -164,10 +180,13 @@
>    * supplied in the *event_dev_ops* structure of the *rte_event_dev* structure.
>    *
>    * For performance reasons, the address of the fast-path functions of the
> - * Event driver is not contained in the *event_dev_ops* structure.
> + * Event driver are not contained in the *event_dev_ops* structure.

It's one address, so it should remain "is"?

>    * Instead, they are directly stored at the beginning of the *rte_event_dev*
>    * structure to avoid an extra indirect memory access during their invocation.
>    *
> + * Event Enqueue, Dequeue and Scheduling
> + * -------------------------------------
> + *
>    * RTE event device drivers do not use interrupts for enqueue or dequeue
>    * operation. Instead, Event drivers export Poll-Mode enqueue and dequeue
>    * functions to applications.
> @@ -179,21 +198,22 @@
>    * crypto work completion notification etc
>    *
>    * The *dequeue* operation gets one or more events from the event ports.
> - * The application process the events and send to downstream event queue through
> - * rte_event_enqueue_burst() if it is an intermediate stage of event processing,
> - * on the final stage, the application may use Tx adapter API for maintaining
> - * the ingress order and then send the packet/event on the wire.
> + * The application processes the events and sends them to a downstream event queue through
> + * rte_event_enqueue_burst(), if it is an intermediate stage of event processing.
> + * On the final stage of processing, the application may use the Tx adapter API for maintaining
> + * the event ingress order while sending the packet/event on the wire via NIC Tx.
>    *
>    * The point at which events are scheduled to ports depends on the device.
>    * For hardware devices, scheduling occurs asynchronously without any software
>    * intervention. Software schedulers can either be distributed
>    * (each worker thread schedules events to its own port) or centralized
>    * (a dedicated thread schedules to all ports). Distributed software schedulers
> - * perform the scheduling in rte_event_dequeue_burst(), whereas centralized
> - * scheduler logic need a dedicated service core for scheduling.
> - * The RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED capability flag is not set
> - * indicates the device is centralized and thus needs a dedicated scheduling
> - * thread that repeatedly calls software specific scheduling function.
> + * perform the scheduling inside the enqueue or dequeue functions, whereas centralized
> + * software schedulers need a dedicated service core for scheduling.
> + * The absence of the RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED capability flag
> + * indicates that the device is centralized and thus needs a dedicated scheduling
> + * thread, generally a service core,
> + * that repeatedly calls the software specific scheduling function.

In the SW case, what you have is a service that needs to be mapped to a 
service lcore.

"generally a RTE service that should be mapped to one or more service 
lcores"

>    *
>    * An event driven worker thread has following typical workflow on fastpath:
>    * \code{.c}

^ permalink raw reply	[flat|nested] 123+ messages in thread
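The "typical workflow on fastpath" \code block referenced at the end of the quoted header text (truncated in the quote above) has, in rough outline, the following shape. This is only a sketch using public eventdev API names; `do_stage_processing`, `next_stage_queue_id`, `BURST` and the surrounding loop variables are hypothetical, and the fragment is not a complete compilable program:

```c
while (!done) {
	/* get a burst of events scheduled to this port */
	uint16_t nb = rte_event_dequeue_burst(dev_id, port_id, ev,
					      BURST, timeout);

	for (uint16_t i = 0; i < nb; i++) {
		do_stage_processing(&ev[i]);          /* application work */
		ev[i].queue_id = next_stage_queue_id; /* next pipeline stage */
		ev[i].op = RTE_EVENT_OP_FORWARD;      /* return to scheduler */
	}

	/* must enqueue on the same port the events were dequeued from */
	rte_event_enqueue_burst(dev_id, port_id, ev, nb);
}
```

With a distributed software scheduler, the scheduling work itself happens inside these dequeue/enqueue calls; with a centralized one, a separate service core runs it, as discussed in the hunk above.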

* Re: [PATCH v2 01/11] eventdev: improve doxygen introduction text
  2024-01-23  8:57     ` Mattias Rönnblom
@ 2024-01-23  9:06       ` Bruce Richardson
  2024-01-24 11:37         ` Mattias Rönnblom
  2024-01-31 13:45       ` Bruce Richardson
  1 sibling, 1 reply; 123+ messages in thread
From: Bruce Richardson @ 2024-01-23  9:06 UTC (permalink / raw)
  To: Mattias Rönnblom; +Cc: dev

On Tue, Jan 23, 2024 at 09:57:58AM +0100, Mattias Rönnblom wrote:
> On 2024-01-19 18:43, Bruce Richardson wrote:
> > Make some textual improvements to the introduction to eventdev and event
> > devices in the eventdev header file. This text appears in the doxygen
> > output for the header file, and introduces the key concepts, for
> > example: events, event devices, queues, ports and scheduling.
> > 
> 
> Great stuff, Bruce.
> 
Thanks, good feedback here. I'll take that into account and do a v3 later
when all feedback on this v2 is in.

/Bruce

^ permalink raw reply	[flat|nested] 123+ messages in thread

* Re: [PATCH v2 03/11] eventdev: update documentation on device capability flags
  2024-01-19 17:43   ` [PATCH v2 03/11] eventdev: update documentation on device capability flags Bruce Richardson
@ 2024-01-23  9:18     ` Mattias Rönnblom
  2024-01-23  9:34       ` Bruce Richardson
  2024-01-31 14:09       ` Bruce Richardson
  0 siblings, 2 replies; 123+ messages in thread
From: Mattias Rönnblom @ 2024-01-23  9:18 UTC (permalink / raw)
  To: Bruce Richardson, dev
  Cc: jerinj, mattias.ronnblom, abdullah.sevincer, sachin.saxena,
	hemant.agrawal, pbhagavatula, pravin.pathak

On 2024-01-19 18:43, Bruce Richardson wrote:
> Update the device capability docs, to:
> 
> * include more cross-references
> * split longer text into paragraphs, in most cases with each flag having
>    a single-line summary at the start of the doc block
> * general comment rewording and clarification as appropriate
> 
> Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
> ---
>   lib/eventdev/rte_eventdev.h | 130 ++++++++++++++++++++++++++----------
>   1 file changed, 93 insertions(+), 37 deletions(-)
> 
> diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
> index 949e957f1b..57a2791946 100644
> --- a/lib/eventdev/rte_eventdev.h
> +++ b/lib/eventdev/rte_eventdev.h
> @@ -243,143 +243,199 @@ struct rte_event;
>   /* Event device capability bitmap flags */
>   #define RTE_EVENT_DEV_CAP_QUEUE_QOS           (1ULL << 0)
>   /**< Event scheduling prioritization is based on the priority and weight
> - * associated with each event queue. Events from a queue with highest priority
> - * is scheduled first. If the queues are of same priority, weight of the queues
> + * associated with each event queue.
> + *
> + * Events from a queue with highest priority
> + * are scheduled first. If the queues are of same priority, weight of the queues
>    * are considered to select a queue in a weighted round robin fashion.
>    * Subsequent dequeue calls from an event port could see events from the same
>    * event queue, if the queue is configured with an affinity count. Affinity
>    * count is the number of subsequent dequeue calls, in which an event port
>    * should use the same event queue if the queue is non-empty
>    *

Maybe the subject for a future documentation patch: but what happens to 
order maintenance for different-priority events. I've always assumed 
events on atomic/ordered queues were only ordered per flow_id within 
the same priority, not flow_id alone.

> + * NOTE: A device may use both queue prioritization and event prioritization
> + * (@ref RTE_EVENT_DEV_CAP_EVENT_QOS capability) when making packet scheduling decisions.
> + *
>    *  @see rte_event_queue_setup(), rte_event_queue_attr_set()
>    */
>   #define RTE_EVENT_DEV_CAP_EVENT_QOS           (1ULL << 1)
>   /**< Event scheduling prioritization is based on the priority associated with
> - *  each event. Priority of each event is supplied in *rte_event* structure
> + *  each event.
> + *
> + *  Priority of each event is supplied in *rte_event* structure
>    *  on each enqueue operation.
> + *  If this capability is not set, the priority field of the event structure
> + *  is ignored for each event.
>    *
> + * NOTE: A device may use both queue prioritization (@ref RTE_EVENT_DEV_CAP_QUEUE_QOS capability)
> + * and event prioritization when making packet scheduling decisions.
> +
>    *  @see rte_event_enqueue_burst()
>    */
>   #define RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED   (1ULL << 2)
>   /**< Event device operates in distributed scheduling mode.
> + *
>    * In distributed scheduling mode, event scheduling happens in HW or
> - * rte_event_dequeue_burst() or the combination of these two.
> + * rte_event_dequeue_burst() / rte_event_enqueue_burst() or the combination of these two.
>    * If the flag is not set then eventdev is centralized and thus needs a
>    * dedicated service core that acts as a scheduling thread .
>    *
> - * @see rte_event_dequeue_burst()
> + * @see rte_event_dev_service_id_get
>    */
>   #define RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES     (1ULL << 3)
>   /**< Event device is capable of enqueuing events of any type to any queue.
> + *
>    * If this capability is not set, the queue only supports events of the
> - *  *RTE_SCHED_TYPE_* type that it was created with.
> + * *RTE_SCHED_TYPE_* type that it was created with.
> + * Any events of other types scheduled to the queue will handled in an
> + * implementation-dependent manner. They may be dropped by the
> + * event device, or enqueued with the scheduling type adjusted to the
> + * correct/supported value.

Having the application set sched_type when it was already set at the 
level of the queue never made sense to me.

I can't see any reason why this field shouldn't be ignored by the event 
device on non-RTE_EVENT_QUEUE_CFG_ALL_TYPES queues.

If the behavior is indeed undefined, I think it's better to just say 
"undefined" rather than the above speculation.

>    *
> - * @see RTE_SCHED_TYPE_* values
> + * @see rte_event_enqueue_burst
> + * @see RTE_SCHED_TYPE_ATOMIC RTE_SCHED_TYPE_ORDERED RTE_SCHED_TYPE_PARALLEL
>    */
>   #define RTE_EVENT_DEV_CAP_BURST_MODE          (1ULL << 4)
>   /**< Event device is capable of operating in burst mode for enqueue(forward,
> - * release) and dequeue operation. If this capability is not set, application
> - * still uses the rte_event_dequeue_burst() and rte_event_enqueue_burst() but
> - * PMD accepts only one event at a time.
> + * release) and dequeue operation.
> + *
> + * If this capability is not set, application
> + * can still use the rte_event_dequeue_burst() and rte_event_enqueue_burst() but
> + * PMD accepts or returns only one event at a time.
>    *
>    * @see rte_event_dequeue_burst() rte_event_enqueue_burst()
>    */
>   #define RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE    (1ULL << 5)
>   /**< Event device ports support disabling the implicit release feature, in
>    * which the port will release all unreleased events in its dequeue operation.
> + *
>    * If this capability is set and the port is configured with implicit release
>    * disabled, the application is responsible for explicitly releasing events
> - * using either the RTE_EVENT_OP_FORWARD or the RTE_EVENT_OP_RELEASE event
> + * using either the @ref RTE_EVENT_OP_FORWARD or the @ref RTE_EVENT_OP_RELEASE event
>    * enqueue operations.
>    *
>    * @see rte_event_dequeue_burst() rte_event_enqueue_burst()
>    */
>   
>   #define RTE_EVENT_DEV_CAP_NONSEQ_MODE         (1ULL << 6)
> -/**< Event device is capable of operating in none sequential mode. The path
> - * of the event is not necessary to be sequential. Application can change
> - * the path of event at runtime. If the flag is not set, then event each event
> - * will follow a path from queue 0 to queue 1 to queue 2 etc. If the flag is
> - * set, events may be sent to queues in any order. If the flag is not set, the
> - * eventdev will return an error when the application enqueues an event for a
> +/**< Event device is capable of operating in non-sequential mode.
> + *
> + * The path of the event is not necessary to be sequential. Application can change
> + * the path of event at runtime and events may be sent to queues in any order.
> + *
> + * If the flag is not set, then event each event will follow a path from queue 0
> + * to queue 1 to queue 2 etc.
> + * The eventdev will return an error when the application enqueues an event for a
>    * qid which is not the next in the sequence.
>    */
>   
>   #define RTE_EVENT_DEV_CAP_RUNTIME_PORT_LINK   (1ULL << 7)
> -/**< Event device is capable of configuring the queue/port link at runtime.
> +/**< Event device is capable of reconfiguring the queue/port link at runtime.
> + *
>    * If the flag is not set, the eventdev queue/port link is only can be
> - * configured during  initialization.
> + * configured during  initialization, or by stopping the device and
> + * then later restarting it after reconfiguration.
> + *
> + * @see rte_event_port_link rte_event_port_unlink
>    */
>   
>   #define RTE_EVENT_DEV_CAP_MULTIPLE_QUEUE_PORT (1ULL << 8)
> -/**< Event device is capable of setting up the link between multiple queue
> - * with single port. If the flag is not set, the eventdev can only map a
> - * single queue to each port or map a single queue to many port.
> +/**< Event device is capable of setting up links between multiple queues and a single port.
> + *
> + * If the flag is not set, each port may only be linked to a single queue, and
> + * so can only receive events from that queue.
> + * However, each queue may be linked to multiple ports.
> + *
> + * @see rte_event_port_link
>    */
>   
>   #define RTE_EVENT_DEV_CAP_CARRY_FLOW_ID (1ULL << 9)
> -/**< Event device preserves the flow ID from the enqueued
> - * event to the dequeued event if the flag is set. Otherwise,
> - * the content of this field is implementation dependent.
> +/**< Event device preserves the flow ID from the enqueued event to the dequeued event.
> + *
> + * If this flag is not set,
> + * the content of the flow-id field in dequeued events is implementation dependent.
> + *
> + * @see rte_event_dequeue_burst
>    */
>   
>   #define RTE_EVENT_DEV_CAP_MAINTENANCE_FREE (1ULL << 10)
>   /**< Event device *does not* require calls to rte_event_maintain().
> + *
>    * An event device that does not set this flag requires calls to
>    * rte_event_maintain() during periods when neither
>    * rte_event_dequeue_burst() nor rte_event_enqueue_burst() are called
>    * on a port. This will allow the event device to perform internal
>    * processing, such as flushing buffered events, return credits to a
>    * global pool, or process signaling related to load balancing.
> + *
> + * @see rte_event_maintain
>    */
>   
>   #define RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR (1ULL << 11)
>   /**< Event device is capable of changing the queue attributes at runtime i.e
> - * after rte_event_queue_setup() or rte_event_start() call sequence. If this
> - * flag is not set, eventdev queue attributes can only be configured during
> + * after rte_event_queue_setup() or rte_event_dev_start() call sequence.
> + *
> + * If this flag is not set, eventdev queue attributes can only be configured during
>    * rte_event_queue_setup().

"event queue" or just "queue".

> + *
> + * @see rte_event_queue_setup
>    */
>   
>   #define RTE_EVENT_DEV_CAP_PROFILE_LINK (1ULL << 12)
> -/**< Event device is capable of supporting multiple link profiles per event port
> - * i.e., the value of `rte_event_dev_info::max_profiles_per_port` is greater
> - * than one.
> +/**< Event device is capable of supporting multiple link profiles per event port.
> + *
> + *
> + * When set, the value of `rte_event_dev_info::max_profiles_per_port` is greater
> + * than one, and multiple profiles may be configured and then switched at runtime.
> + * If not set, only a single profile may be configured, which may itself be
> + * runtime adjustable (if @ref RTE_EVENT_DEV_CAP_RUNTIME_PORT_LINK is set).
> + *
> + * @see rte_event_port_profile_links_set rte_event_port_profile_links_get
> + * @see rte_event_port_profile_switch
> + * @see RTE_EVENT_DEV_CAP_RUNTIME_PORT_LINK
>    */
>   
>   /* Event device priority levels */
>   #define RTE_EVENT_DEV_PRIORITY_HIGHEST   0
> -/**< Highest priority expressed across eventdev subsystem
> +/**< Highest priority expressed across eventdev subsystem.

"The highest priority an event device may support."
or
"The highest priority any event device may support."

Maybe this is a further improvement, beyond punctuation? "across 
eventdev subsystem" sounds awkward.

> + *
>    * @see rte_event_queue_setup(), rte_event_enqueue_burst()
>    * @see rte_event_port_link()
>    */
>   #define RTE_EVENT_DEV_PRIORITY_NORMAL    128
> -/**< Normal priority expressed across eventdev subsystem
> +/**< Normal priority expressed across eventdev subsystem.
> + *
>    * @see rte_event_queue_setup(), rte_event_enqueue_burst()
>    * @see rte_event_port_link()
>    */
>   #define RTE_EVENT_DEV_PRIORITY_LOWEST    255
> -/**< Lowest priority expressed across eventdev subsystem
> +/**< Lowest priority expressed across eventdev subsystem.
> + *
>    * @see rte_event_queue_setup(), rte_event_enqueue_burst()
>    * @see rte_event_port_link()
>    */
>   
>   /* Event queue scheduling weights */
>   #define RTE_EVENT_QUEUE_WEIGHT_HIGHEST 255
> -/**< Highest weight of an event queue
> +/**< Highest weight of an event queue.
> + *
>    * @see rte_event_queue_attr_get(), rte_event_queue_attr_set()
>    */
>   #define RTE_EVENT_QUEUE_WEIGHT_LOWEST 0
> -/**< Lowest weight of an event queue
> +/**< Lowest weight of an event queue.
> + *
>    * @see rte_event_queue_attr_get(), rte_event_queue_attr_set()
>    */
>   
>   /* Event queue scheduling affinity */
>   #define RTE_EVENT_QUEUE_AFFINITY_HIGHEST 255
> -/**< Highest scheduling affinity of an event queue
> +/**< Highest scheduling affinity of an event queue.
> + *
>    * @see rte_event_queue_attr_get(), rte_event_queue_attr_set()
>    */
>   #define RTE_EVENT_QUEUE_AFFINITY_LOWEST 0
> -/**< Lowest scheduling affinity of an event queue
> +/**< Lowest scheduling affinity of an event queue.
> + *
>    * @see rte_event_queue_attr_get(), rte_event_queue_attr_set()
>    */
>   

^ permalink raw reply	[flat|nested] 123+ messages in thread

* Re: [PATCH v2 03/11] eventdev: update documentation on device capability flags
  2024-01-23  9:18     ` Mattias Rönnblom
@ 2024-01-23  9:34       ` Bruce Richardson
  2024-01-31 14:09       ` Bruce Richardson
  1 sibling, 0 replies; 123+ messages in thread
From: Bruce Richardson @ 2024-01-23  9:34 UTC (permalink / raw)
  To: Mattias Rönnblom
  Cc: dev, jerinj, mattias.ronnblom, abdullah.sevincer, sachin.saxena,
	hemant.agrawal, pbhagavatula, pravin.pathak

On Tue, Jan 23, 2024 at 10:18:53AM +0100, Mattias Rönnblom wrote:
> On 2024-01-19 18:43, Bruce Richardson wrote:
> > Update the device capability docs, to:
> > 
> > * include more cross-references
> > * split longer text into paragraphs, in most cases with each flag having
> >    a single-line summary at the start of the doc block
> > * general comment rewording and clarification as appropriate
> > 
> > Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
> > ---
> >   lib/eventdev/rte_eventdev.h | 130 ++++++++++++++++++++++++++----------
> >   1 file changed, 93 insertions(+), 37 deletions(-)
> > 
> > diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
> > index 949e957f1b..57a2791946 100644
> > --- a/lib/eventdev/rte_eventdev.h
> > +++ b/lib/eventdev/rte_eventdev.h
> > @@ -243,143 +243,199 @@ struct rte_event;
> >   /* Event device capability bitmap flags */
> >   #define RTE_EVENT_DEV_CAP_QUEUE_QOS           (1ULL << 0)
> >   /**< Event scheduling prioritization is based on the priority and weight
> > - * associated with each event queue. Events from a queue with highest priority
> > - * is scheduled first. If the queues are of same priority, weight of the queues
> > + * associated with each event queue.
> > + *
> > + * Events from a queue with highest priority
> > + * are scheduled first. If the queues are of same priority, weight of the queues
> >    * are considered to select a queue in a weighted round robin fashion.
> >    * Subsequent dequeue calls from an event port could see events from the same
> >    * event queue, if the queue is configured with an affinity count. Affinity
> >    * count is the number of subsequent dequeue calls, in which an event port
> >    * should use the same event queue if the queue is non-empty
> >    *
> 
> Maybe the subject for a future documentation patch: but what happens to
> order maintenance for different-priority events? I've always assumed events
> on atomic/ordered queues were only ordered per flow_id within the same
> priority, not by flow_id alone.
> 

Agree with this. If events with the same flow_id are spread across two
priority levels, they are not the same flow. I'll try and clarify this in
v3.
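In other words, the unit of ordering would effectively be the (queue, priority, flow_id) tuple rather than flow_id alone. A sketch of what that means for an application tracking flow identity (this `flow_key` helper is hypothetical, not part of the eventdev API):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical flow key: ordering guarantees on atomic/ordered queues would
 * apply only between events that match on all three fields. */
struct flow_key {
	uint8_t queue_id;
	uint8_t priority;
	uint32_t flow_id;
};

static bool
same_ordered_flow(const struct flow_key *a, const struct flow_key *b)
{
	return a->queue_id == b->queue_id &&
	       a->priority == b->priority &&
	       a->flow_id == b->flow_id;
}
```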

> > + * NOTE: A device may use both queue prioritization and event prioritization
> > + * (@ref RTE_EVENT_DEV_CAP_EVENT_QOS capability) when making packet scheduling decisions.
> > + *
> >    *  @see rte_event_queue_setup(), rte_event_queue_attr_set()
> >    */
> >   #define RTE_EVENT_DEV_CAP_EVENT_QOS           (1ULL << 1)
> >   /**< Event scheduling prioritization is based on the priority associated with
> > - *  each event. Priority of each event is supplied in *rte_event* structure
> > + *  each event.
> > + *
> > + *  Priority of each event is supplied in *rte_event* structure
> >    *  on each enqueue operation.
> > + *  If this capability is not set, the priority field of the event structure
> > + *  is ignored for each event.
> >    *
> > + * NOTE: A device may use both queue prioritization (@ref RTE_EVENT_DEV_CAP_QUEUE_QOS capability)
> > + * and event prioritization when making packet scheduling decisions.
> > +
> >    *  @see rte_event_enqueue_burst()
> >    */
> >   #define RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED   (1ULL << 2)
> >   /**< Event device operates in distributed scheduling mode.
> > + *
> >    * In distributed scheduling mode, event scheduling happens in HW or
> > - * rte_event_dequeue_burst() or the combination of these two.
> > + * rte_event_dequeue_burst() / rte_event_enqueue_burst() or the combination of these two.
> >    * If the flag is not set then eventdev is centralized and thus needs a
> >    * dedicated service core that acts as a scheduling thread .
> >    *
> > - * @see rte_event_dequeue_burst()
> > + * @see rte_event_dev_service_id_get
> >    */
> >   #define RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES     (1ULL << 3)
> >   /**< Event device is capable of enqueuing events of any type to any queue.
> > + *
> >    * If this capability is not set, the queue only supports events of the
> > - *  *RTE_SCHED_TYPE_* type that it was created with.
> > + * *RTE_SCHED_TYPE_* type that it was created with.
> > + * Any events of other types scheduled to the queue will handled in an
> > + * implementation-dependent manner. They may be dropped by the
> > + * event device, or enqueued with the scheduling type adjusted to the
> > + * correct/supported value.
> 
> Having the application set sched_type when it was already set at the
> level of the queue never made sense to me.
> 
> I can't see any reason why this field shouldn't be ignored by the event
> device on non-RTE_EVENT_QUEUE_CFG_ALL_TYPES queues.
> 
> If the behavior is indeed undefined, I think it's better to just say
> "undefined" rather than the above speculation.
> 

+1, I completely agree with ignoring for fixed-type queues. Saves drivers
checking.

The reason I didn't put that in was a desire to minimise possible
semantic changes, but I think later in the patchset my desire to avoid such
changes waned and I have included more "severe" changes than I originally
would have liked. [The changes to "release" events on ordered queues being the
big one I'm aware of, which I should really have held back to a separate
dedicated patch/patchset.]

Unless someone objects, I'll update that in a v3. However, many of these
subtle changes may mean updates to drivers, so how we go about clarifying
things and getting drivers compatible is something we need to think about.
We should probably target 24.11 as the point at which we should have all
behaviour clarified, and drivers updated if possible. There are so many
points of ambiguity - especially in error cases - I expect we may have some
work to do to get all aligned.

/Bruce


* Re: [PATCH v2 04/11] eventdev: cleanup doxygen comments on info structure
  2024-01-19 17:43   ` [PATCH v2 04/11] eventdev: cleanup doxygen comments on info structure Bruce Richardson
@ 2024-01-23  9:35     ` Mattias Rönnblom
  2024-01-23  9:43       ` Bruce Richardson
  0 siblings, 1 reply; 123+ messages in thread
From: Mattias Rönnblom @ 2024-01-23  9:35 UTC (permalink / raw)
  To: Bruce Richardson, dev
  Cc: jerinj, mattias.ronnblom, abdullah.sevincer, sachin.saxena,
	hemant.agrawal, pbhagavatula, pravin.pathak

On 2024-01-19 18:43, Bruce Richardson wrote:
> Some small rewording changes to the doxygen comments on struct
> rte_event_dev_info.
> 
> Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
> ---
>   lib/eventdev/rte_eventdev.h | 46 ++++++++++++++++++++-----------------
>   1 file changed, 25 insertions(+), 21 deletions(-)
> 
> diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
> index 57a2791946..872f241df2 100644
> --- a/lib/eventdev/rte_eventdev.h
> +++ b/lib/eventdev/rte_eventdev.h
> @@ -482,54 +482,58 @@ struct rte_event_dev_info {
>   	const char *driver_name;	/**< Event driver name */
>   	struct rte_device *dev;	/**< Device information */
>   	uint32_t min_dequeue_timeout_ns;
> -	/**< Minimum supported global dequeue timeout(ns) by this device */
> +	/**< Minimum global dequeue timeout(ns) supported by this device */

Are we missing a bunch of "." here and in the other fields?

>   	uint32_t max_dequeue_timeout_ns;
> -	/**< Maximum supported global dequeue timeout(ns) by this device */
> +	/**< Maximum global dequeue timeout(ns) supported by this device */
>   	uint32_t dequeue_timeout_ns;
>   	/**< Configured global dequeue timeout(ns) for this device */
>   	uint8_t max_event_queues;
> -	/**< Maximum event_queues supported by this device */
> +	/**< Maximum event queues supported by this device */
>   	uint32_t max_event_queue_flows;
> -	/**< Maximum supported flows in an event queue by this device*/
> +	/**< Maximum number of flows within an event queue supported by this device*/
>   	uint8_t max_event_queue_priority_levels;
>   	/**< Maximum number of event queue priority levels by this device.
> -	 * Valid when the device has RTE_EVENT_DEV_CAP_QUEUE_QOS capability
> +	 * Valid when the device has RTE_EVENT_DEV_CAP_QUEUE_QOS capability.
> +	 * The priority levels are evenly distributed between
> +	 * @ref RTE_EVENT_DEV_PRIORITY_HIGHEST and @ref RTE_EVENT_DEV_PRIORITY_LOWEST.

This is a change of the API, in the sense it's defining something 
previously left undefined?

If you need 6 different priority levels in an app, how do you go about 
making sure you find the correct (distinct) Eventdev levels on any event 
device supporting >= 6 levels?

#define NUM_MY_LEVELS 6

#define MY_LEVEL_TO_EVENTDEV_LEVEL(my_level) (((my_level) * 
(RTE_EVENT_DEV_PRIORITY_HIGHEST - RTE_EVENT_DEV_PRIORITY_LOWEST)) / 
NUM_MY_LEVELS)

This way? One would worry a bit exactly what "evenly" means, in terms of 
rounding errors. If you have an event device with 255 priority levels of 
(say) 256 levels available in the API, which two levels are the same 
priority?
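To make the rounding question concrete, here is one way an application wanting N distinct levels could map them, assuming the documented 0..255 range with lower numeric values meaning higher priority (the HIGHEST/LOWEST values are the ones in the header; the mapping itself is an assumption, not something the spec mandates):

```c
#include <assert.h>
#include <stdint.h>

/* Values as defined in rte_eventdev.h. */
#define RTE_EVENT_DEV_PRIORITY_HIGHEST 0
#define RTE_EVENT_DEV_PRIORITY_LOWEST  255

/* Map app level 0 (most urgent) .. num_levels-1 (least urgent) onto the
 * eventdev range, spacing values as evenly as integer math allows.
 * Requires num_levels >= 2; for 6 levels this yields 0,51,102,153,204,255. */
static inline uint8_t
app_level_to_eventdev_priority(unsigned int level, unsigned int num_levels)
{
	return RTE_EVENT_DEV_PRIORITY_HIGHEST +
	       (level * (RTE_EVENT_DEV_PRIORITY_LOWEST -
			 RTE_EVENT_DEV_PRIORITY_HIGHEST)) / (num_levels - 1);
}
```

Whether a device with fewer native levels maps two of these to the same internal priority is exactly the ambiguity raised above.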

>   	 */
>   	uint8_t max_event_priority_levels;
>   	/**< Maximum number of event priority levels by this device.
>   	 * Valid when the device has RTE_EVENT_DEV_CAP_EVENT_QOS capability
> +	 * The priority levels are evenly distributed between
> +	 * @ref RTE_EVENT_DEV_PRIORITY_HIGHEST and @ref RTE_EVENT_DEV_PRIORITY_LOWEST.
>   	 */
>   	uint8_t max_event_ports;
>   	/**< Maximum number of event ports supported by this device */
>   	uint8_t max_event_port_dequeue_depth;
> -	/**< Maximum number of events can be dequeued at a time from an
> -	 * event port by this device.
> -	 * A device that does not support bulk dequeue will set this as 1.
> +	/**< Maximum number of events that can be dequeued at a time from an event port
> +	 * on this device.
> +	 * A device that does not support bulk dequeue will set this to 1.
>   	 */
>   	uint32_t max_event_port_enqueue_depth;
> -	/**< Maximum number of events can be enqueued at a time from an
> -	 * event port by this device.
> -	 * A device that does not support bulk enqueue will set this as 1.
> +	/**< Maximum number of events that can be enqueued at a time to an event port
> +	 * on this device.
> +	 * A device that does not support bulk enqueue will set this to 1.
>   	 */
>   	uint8_t max_event_port_links;
> -	/**< Maximum number of queues that can be linked to a single event
> -	 * port by this device.
> +	/**< Maximum number of queues that can be linked to a single event port on this device.
>   	 */
>   	int32_t max_num_events;
>   	/**< A *closed system* event dev has a limit on the number of events it
> -	 * can manage at a time. An *open system* event dev does not have a
> -	 * limit and will specify this as -1.
> +	 * can manage at a time.
> +	 * Once the number of events tracked by an eventdev exceeds this number,
> +	 * any enqueues of NEW events will fail.
> +	 * An *open system* event dev does not have a limit and will specify this as -1.
>   	 */
>   	uint32_t event_dev_cap;
> -	/**< Event device capabilities(RTE_EVENT_DEV_CAP_)*/
> +	/**< Event device capabilities flags (RTE_EVENT_DEV_CAP_*) */
>   	uint8_t max_single_link_event_port_queue_pairs;
> -	/**< Maximum number of event ports and queues that are optimized for
> -	 * (and only capable of) single-link configurations supported by this
> -	 * device. These ports and queues are not accounted for in
> -	 * max_event_ports or max_event_queues.
> +	/**< Maximum number of event ports and queues,  supported by this device,
> +	 * that are optimized for (and only capable of) single-link configurations.
> +	 * These ports and queues are not accounted for in max_event_ports or max_event_queues.
>   	 */
>   	uint8_t max_profiles_per_port;
> -	/**< Maximum number of event queue profiles per event port.
> +	/**< Maximum number of event queue link profiles per event port.
>   	 * A device that doesn't support multiple profiles will set this as 1.
>   	 */
>   };


* Re: [PATCH v2 05/11] eventdev: improve function documentation for query fns
  2024-01-19 17:43   ` [PATCH v2 05/11] eventdev: improve function documentation for query fns Bruce Richardson
@ 2024-01-23  9:40     ` Mattias Rönnblom
  0 siblings, 0 replies; 123+ messages in thread
From: Mattias Rönnblom @ 2024-01-23  9:40 UTC (permalink / raw)
  To: Bruce Richardson, dev
  Cc: jerinj, mattias.ronnblom, abdullah.sevincer, sachin.saxena,
	hemant.agrawal, pbhagavatula, pravin.pathak

On 2024-01-19 18:43, Bruce Richardson wrote:
> General improvements to the doxygen docs for eventdev functions for
> querying basic information:
> * number of devices
> * id for a particular device
> * socket id of device
> * capability information for a device
> 
> Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
> ---
>   lib/eventdev/rte_eventdev.h | 22 +++++++++++++---------
>   1 file changed, 13 insertions(+), 9 deletions(-)
> 
> diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
> index 872f241df2..c57c93a22e 100644
> --- a/lib/eventdev/rte_eventdev.h
> +++ b/lib/eventdev/rte_eventdev.h
> @@ -440,8 +440,7 @@ struct rte_event;
>    */
>   
>   /**
> - * Get the total number of event devices that have been successfully
> - * initialised.
> + * Get the total number of event devices available for application use.

Does "for application use" add information? If they aren't for 
application use, I would argue they are "unavailable".

"Get the total number of available event devices" or just "Get the total 
number of event devices".

>    *
>    * @return
>    *   The total number of usable event devices.
> @@ -456,8 +455,10 @@ rte_event_dev_count(void);
>    *   Event device name to select the event device identifier.
>    *
>    * @return
> - *   Returns event device identifier on success.
> - *   - <0: Failure to find named event device.
> + *   Event device identifier (dev_id >= 0) on success.
> + *   Negative error code on failure:
> + *   - -EINVAL - input name parameter is invalid
> + *   - -ENODEV - no event device found with that name

"."?

>    */
>   int
>   rte_event_dev_get_dev_id(const char *name);
> @@ -470,7 +471,8 @@ rte_event_dev_get_dev_id(const char *name);
>    * @return
>    *   The NUMA socket id to which the device is connected or
>    *   a default of zero if the socket could not be determined.
> - *   -(-EINVAL)  dev_id value is out of range.
> + *   -EINVAL on error, where the given dev_id value does not
> + *   correspond to any event device.
>    */
>   int
>   rte_event_dev_socket_id(uint8_t dev_id);
> @@ -539,18 +541,20 @@ struct rte_event_dev_info {
>   };
>   
>   /**
> - * Retrieve the contextual information of an event device.
> + * Retrieve details of an event device's capabilities and configuration limits.
>    *
>    * @param dev_id
>    *   The identifier of the device.
>    *
>    * @param[out] dev_info
>    *   A pointer to a structure of type *rte_event_dev_info* to be filled with the
> - *   contextual information of the device.
> + *   information about the device's capabilities.
>    *
>    * @return
> - *   - 0: Success, driver updates the contextual information of the event device
> - *   - <0: Error code returned by the driver info get function.
> + *   - 0: Success, information about the event device is present in dev_info.
> + *   - <0: Failure, error code returned by the function.
> + *     - -EINVAL - invalid input parameters, e.g. incorrect device id
> + *     - -ENOTSUP - device does not support returning capabilities information
>    */
>   int
>   rte_event_dev_info_get(uint8_t dev_id, struct rte_event_dev_info *dev_info);


* Re: [PATCH v2 04/11] eventdev: cleanup doxygen comments on info structure
  2024-01-23  9:35     ` Mattias Rönnblom
@ 2024-01-23  9:43       ` Bruce Richardson
  2024-01-24 11:51         ` Mattias Rönnblom
  0 siblings, 1 reply; 123+ messages in thread
From: Bruce Richardson @ 2024-01-23  9:43 UTC (permalink / raw)
  To: Mattias Rönnblom
  Cc: dev, jerinj, mattias.ronnblom, abdullah.sevincer, sachin.saxena,
	hemant.agrawal, pbhagavatula, pravin.pathak

On Tue, Jan 23, 2024 at 10:35:02AM +0100, Mattias Rönnblom wrote:
> On 2024-01-19 18:43, Bruce Richardson wrote:
> > Some small rewording changes to the doxygen comments on struct
> > rte_event_dev_info.
> > 
> > Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
> > ---
> >   lib/eventdev/rte_eventdev.h | 46 ++++++++++++++++++++-----------------
> >   1 file changed, 25 insertions(+), 21 deletions(-)
> > 
> > diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
> > index 57a2791946..872f241df2 100644
> > --- a/lib/eventdev/rte_eventdev.h
> > +++ b/lib/eventdev/rte_eventdev.h
> > @@ -482,54 +482,58 @@ struct rte_event_dev_info {
> >   	const char *driver_name;	/**< Event driver name */
> >   	struct rte_device *dev;	/**< Device information */
> >   	uint32_t min_dequeue_timeout_ns;
> > -	/**< Minimum supported global dequeue timeout(ns) by this device */
> > +	/**< Minimum global dequeue timeout(ns) supported by this device */
> 
> Are we missing a bunch of "." here and in the other fields?
> 
> >   	uint32_t max_dequeue_timeout_ns;
> > -	/**< Maximum supported global dequeue timeout(ns) by this device */
> > +	/**< Maximum global dequeue timeout(ns) supported by this device */
> >   	uint32_t dequeue_timeout_ns;
> >   	/**< Configured global dequeue timeout(ns) for this device */
> >   	uint8_t max_event_queues;
> > -	/**< Maximum event_queues supported by this device */
> > +	/**< Maximum event queues supported by this device */
> >   	uint32_t max_event_queue_flows;
> > -	/**< Maximum supported flows in an event queue by this device*/
> > +	/**< Maximum number of flows within an event queue supported by this device*/
> >   	uint8_t max_event_queue_priority_levels;
> >   	/**< Maximum number of event queue priority levels by this device.
> > -	 * Valid when the device has RTE_EVENT_DEV_CAP_QUEUE_QOS capability
> > +	 * Valid when the device has RTE_EVENT_DEV_CAP_QUEUE_QOS capability.
> > +	 * The priority levels are evenly distributed between
> > +	 * @ref RTE_EVENT_DEV_PRIORITY_HIGHEST and @ref RTE_EVENT_DEV_PRIORITY_LOWEST.
> 
> This is a change of the API, in the sense it's defining something previously
> left undefined?
> 

Well, undefined is pretty useless for app writers, no?
However, agreed that the range between HIGHEST and LOWEST is an assumption
on my part, chosen because it matches what happens to the event priorities
which are documented in event struct as "The implementation shall normalize
 the requested priority to supported priority value" - which, while better
than nothing, does technically leave the details of how normalization
occurs up to the implementation.

> If you need 6 different priority levels in an app, how do you go about
> making sure you find the correct (distinct) Eventdev levels on any event
> device supporting >= 6 levels?
> 
> #define NUM_MY_LEVELS 6
> 
> #define MY_LEVEL_TO_EVENTDEV_LEVEL(my_level) (((my_level) *
> (RTE_EVENT_DEV_PRIORITY_HIGHEST - RTE_EVENT_DEV_PRIORITY_LOWEST)) /
> NUM_MY_LEVELS)
> 
> This way? One would worry a bit exactly what "evenly" means, in terms of
> rounding errors. If you have an event device with 255 priority levels of
> (say) 256 levels available in the API, which two levels are the same
> priority?

Yes, rounding etc. will be an issue in cases of non-powers-of-2.
However, I think we do need to clarify this behaviour, so I'm open to
alternative suggestions as to how update this.

/Bruce


* Re: [PATCH v2 06/11] eventdev: improve doxygen comments on configure struct
  2024-01-19 17:43   ` [PATCH v2 06/11] eventdev: improve doxygen comments on configure struct Bruce Richardson
@ 2024-01-23  9:46     ` Mattias Rönnblom
  2024-01-31 16:15       ` Bruce Richardson
  0 siblings, 1 reply; 123+ messages in thread
From: Mattias Rönnblom @ 2024-01-23  9:46 UTC (permalink / raw)
  To: Bruce Richardson, dev
  Cc: jerinj, mattias.ronnblom, abdullah.sevincer, sachin.saxena,
	hemant.agrawal, pbhagavatula, pravin.pathak

On 2024-01-19 18:43, Bruce Richardson wrote:
> General rewording and cleanup on the rte_event_dev_config structure.
> Improved the wording of some sentences and created linked
> cross-references out of the existing references to the dev_info
> structure.
> 
> Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
> ---
>   lib/eventdev/rte_eventdev.h | 47 +++++++++++++++++++------------------
>   1 file changed, 24 insertions(+), 23 deletions(-)
> 
> diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
> index c57c93a22e..4139ccb982 100644
> --- a/lib/eventdev/rte_eventdev.h
> +++ b/lib/eventdev/rte_eventdev.h
> @@ -599,9 +599,9 @@ rte_event_dev_attr_get(uint8_t dev_id, uint32_t attr_id,
>   struct rte_event_dev_config {
>   	uint32_t dequeue_timeout_ns;
>   	/**< rte_event_dequeue_burst() timeout on this device.
> -	 * This value should be in the range of *min_dequeue_timeout_ns* and
> -	 * *max_dequeue_timeout_ns* which previously provided in
> -	 * rte_event_dev_info_get()
> +	 * This value should be in the range of @ref rte_event_dev_info.min_dequeue_timeout_ns and
> +	 * @ref rte_event_dev_info.max_dequeue_timeout_ns returned by
> +	 * @ref rte_event_dev_info_get()
>   	 * The value 0 is allowed, in which case, default dequeue timeout used.
>   	 * @see RTE_EVENT_DEV_CFG_PER_DEQUEUE_TIMEOUT
>   	 */
> @@ -609,40 +609,41 @@ struct rte_event_dev_config {
>   	/**< In a *closed system* this field is the limit on maximum number of
>   	 * events that can be inflight in the eventdev at a given time. The
>   	 * limit is required to ensure that the finite space in a closed system
> -	 * is not overwhelmed. The value cannot exceed the *max_num_events*
> -	 * as provided by rte_event_dev_info_get().
> +	 * is not overwhelmed.

"overwhelmed" -> "exhausted"

> +	 * Once the limit has been reached, any enqueues of NEW events to the
> +	 * system will fail.

While this is true, it's also a bit misleading. Back-pressure on 
RTE_EVENT_OP_NEW events is controlled by new_event_threshold on the level of 
the port.

> +	 * The value cannot exceed @ref rte_event_dev_info.max_num_events
> +	 * returned by rte_event_dev_info_get().
>   	 * This value should be set to -1 for *open system*.
>   	 */
>   	uint8_t nb_event_queues;
>   	/**< Number of event queues to configure on this device.
> -	 * This value cannot exceed the *max_event_queues* which previously
> -	 * provided in rte_event_dev_info_get()
> +	 * This value cannot exceed @ref rte_event_dev_info.max_event_queues
> +	 * returned by rte_event_dev_info_get()
>   	 */
>   	uint8_t nb_event_ports;
>   	/**< Number of event ports to configure on this device.
> -	 * This value cannot exceed the *max_event_ports* which previously
> -	 * provided in rte_event_dev_info_get()
> +	 * This value cannot exceed @ref rte_event_dev_info.max_event_ports
> +	 * returned by rte_event_dev_info_get()
>   	 */
>   	uint32_t nb_event_queue_flows;
> -	/**< Number of flows for any event queue on this device.
> -	 * This value cannot exceed the *max_event_queue_flows* which previously
> -	 * provided in rte_event_dev_info_get()
> +	/**< Max number of flows needed for a single event queue on this device.
> +	 * This value cannot exceed @ref rte_event_dev_info.max_event_queue_flows
> +	 * returned by rte_event_dev_info_get()
>   	 */
>   	uint32_t nb_event_port_dequeue_depth;
> -	/**< Maximum number of events can be dequeued at a time from an
> -	 * event port by this device.
> -	 * This value cannot exceed the *max_event_port_dequeue_depth*
> -	 * which previously provided in rte_event_dev_info_get().
> +	/**< Max number of events that can be dequeued at a time from an event port on this device.
> +	 * This value cannot exceed @ref rte_event_dev_info.max_event_port_dequeue_depth
> +	 * returned by rte_event_dev_info_get().
>   	 * Ignored when device is not RTE_EVENT_DEV_CAP_BURST_MODE capable.
> -	 * @see rte_event_port_setup()
> +	 * @see rte_event_port_setup() rte_event_dequeue_burst()
>   	 */
>   	uint32_t nb_event_port_enqueue_depth;
> -	/**< Maximum number of events can be enqueued at a time from an
> -	 * event port by this device.
> -	 * This value cannot exceed the *max_event_port_enqueue_depth*
> -	 * which previously provided in rte_event_dev_info_get().
> +	/**< Maximum number of events can be enqueued at a time to an event port on this device.
> +	 * This value cannot exceed @ref rte_event_dev_info.max_event_port_enqueue_depth
> +	 * returned by rte_event_dev_info_get().
>   	 * Ignored when device is not RTE_EVENT_DEV_CAP_BURST_MODE capable.
> -	 * @see rte_event_port_setup()
> +	 * @see rte_event_port_setup() rte_event_enqueue_burst()
>   	 */
>   	uint32_t event_dev_cfg;
>   	/**< Event device config flags(RTE_EVENT_DEV_CFG_)*/
> @@ -652,7 +653,7 @@ struct rte_event_dev_config {
>   	 * queues; this value cannot exceed *nb_event_ports* or
>   	 * *nb_event_queues*. If the device has ports and queues that are
>   	 * optimized for single-link usage, this field is a hint for how many
> -	 * to allocate; otherwise, regular event ports and queues can be used.
> +	 * to allocate; otherwise, regular event ports and queues will be used.
>   	 */
>   };
>   

^ permalink raw reply	[flat|nested] 123+ messages in thread

* Re: [PATCH v2 07/11] eventdev: fix documentation for counting single-link ports
  2024-01-19 17:43   ` [PATCH v2 07/11] eventdev: fix documentation for counting single-link ports Bruce Richardson
@ 2024-01-23  9:48     ` Mattias Rönnblom
  2024-01-23  9:56       ` Bruce Richardson
  0 siblings, 1 reply; 123+ messages in thread
From: Mattias Rönnblom @ 2024-01-23  9:48 UTC (permalink / raw)
  To: Bruce Richardson, dev
  Cc: jerinj, mattias.ronnblom, abdullah.sevincer, sachin.saxena,
	hemant.agrawal, pbhagavatula, pravin.pathak, stable

On 2024-01-19 18:43, Bruce Richardson wrote:
> The documentation of how single-link port-queue pairs were counted in
> the rte_event_dev_config structure did not match the actual
> implementation and, if following the documentation, certain valid

What "documentation" and what "implementation" are you talking about here?

I'm confused. A DLB2 fix in the form of an Eventdev API documentation update.

> port/queue configurations would have been impossible to configure. Fix
> this by changing the documentation to match the implementation - however
> confusing that implementation ends up being.
> 
> Bugzilla ID:  1368
> Fixes: 75d113136f38 ("eventdev: express DLB/DLB2 PMD constraints")
> Cc: stable@dpdk.org
> 
> Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
> ---
>   lib/eventdev/rte_eventdev.h | 28 ++++++++++++++++++++++------
>   1 file changed, 22 insertions(+), 6 deletions(-)
> 
> diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
> index 4139ccb982..3b8f5b8101 100644
> --- a/lib/eventdev/rte_eventdev.h
> +++ b/lib/eventdev/rte_eventdev.h
> @@ -490,7 +490,10 @@ struct rte_event_dev_info {
>   	uint32_t dequeue_timeout_ns;
>   	/**< Configured global dequeue timeout(ns) for this device */
>   	uint8_t max_event_queues;
> -	/**< Maximum event queues supported by this device */
> +	/**< Maximum event queues supported by this device.
> +	 * This excludes any queue-port pairs covered by the
> +	 * *max_single_link_event_port_queue_pairs* value in this structure.
> +	 */
>   	uint32_t max_event_queue_flows;
>   	/**< Maximum number of flows within an event queue supported by this device*/
>   	uint8_t max_event_queue_priority_levels;
> @@ -506,7 +509,10 @@ struct rte_event_dev_info {
>   	 * @ref RTE_EVENT_DEV_PRIORITY_HIGHEST and @ref RTE_EVENT_DEV_PRIORITY_LOWEST.
>   	 */
>   	uint8_t max_event_ports;
> -	/**< Maximum number of event ports supported by this device */
> +	/**< Maximum number of event ports supported by this device
> +	 * This excludes any queue-port pairs covered by the
> +	 * *max_single_link_event_port_queue_pairs* value in this structure.
> +	 */
>   	uint8_t max_event_port_dequeue_depth;
>   	/**< Maximum number of events that can be dequeued at a time from an event port
>   	 * on this device.
> @@ -618,13 +624,23 @@ struct rte_event_dev_config {
>   	 */
>   	uint8_t nb_event_queues;
>   	/**< Number of event queues to configure on this device.
> -	 * This value cannot exceed @ref rte_event_dev_info.max_event_queues
> -	 * returned by rte_event_dev_info_get()
> +	 * This value *includes* any single-link queue-port pairs to be used.
> +	 * This value cannot exceed @ref rte_event_dev_info.max_event_queues +
> +	 * @ref rte_event_dev_info.max_single_link_event_port_queue_pairs
> +	 * returned by rte_event_dev_info_get().
> +	 * The number of non-single-link queues i.e. this value less
> +	 * *nb_single_link_event_port_queues* in this struct, cannot exceed
> +	 * @ref rte_event_dev_info.max_event_queues
>   	 */
>   	uint8_t nb_event_ports;
>   	/**< Number of event ports to configure on this device.
> -	 * This value cannot exceed @ref rte_event_dev_info.max_event_ports
> -	 * returned by rte_event_dev_info_get()
> +	 * This value *includes* any single-link queue-port pairs to be used.
> +	 * This value cannot exceed @ref rte_event_dev_info.max_event_ports +
> +	 * @ref rte_event_dev_info.max_single_link_event_port_queue_pairs
> +	 * returned by rte_event_dev_info_get().
> +	 * The number of non-single-link ports i.e. this value less
> +	 * *nb_single_link_event_port_queues* in this struct, cannot exceed
> +	 * @ref rte_event_dev_info.max_event_ports
>   	 */
>   	uint32_t nb_event_queue_flows;
>   	/**< Max number of flows needed for a single event queue on this device.

^ permalink raw reply	[flat|nested] 123+ messages in thread

* Re: [PATCH v2 07/11] eventdev: fix documentation for counting single-link ports
  2024-01-23  9:48     ` Mattias Rönnblom
@ 2024-01-23  9:56       ` Bruce Richardson
  2024-01-31 16:18         ` Bruce Richardson
  0 siblings, 1 reply; 123+ messages in thread
From: Bruce Richardson @ 2024-01-23  9:56 UTC (permalink / raw)
  To: Mattias Rönnblom
  Cc: dev, jerinj, mattias.ronnblom, abdullah.sevincer, sachin.saxena,
	hemant.agrawal, pbhagavatula, pravin.pathak, stable

On Tue, Jan 23, 2024 at 10:48:47AM +0100, Mattias Rönnblom wrote:
> On 2024-01-19 18:43, Bruce Richardson wrote:
> > The documentation of how single-link port-queue pairs were counted in
> > the rte_event_dev_config structure did not match the actual
> > implementation and, if following the documentation, certain valid
> 
> What "documentation" and what "implementation" are you talking about here?
> 
> I'm confused. A DLB2 fix in the form of an Eventdev API documentation update.
> 

The documentation in the header file did not match the implementation in
the rte_eventdev.c file.

The current documentation states[1] that "This value cannot exceed the
max_event_queues which previously provided in rte_event_dev_info_get()",
but if you check the implementation in the C file[2], it actually checks
the passed value against 
"info.max_event_queues + info.max_single_link_event_port_queue_pairs".


[1] https://doc.dpdk.org/api/structrte__event__dev__config.html#a703c026d74436b05fc656652324101e4
[2] https://git.dpdk.org/dpdk/tree/lib/eventdev/rte_eventdev.c#n402


^ permalink raw reply	[flat|nested] 123+ messages in thread

* Re: [PATCH v2 08/11] eventdev: improve doxygen comments on config fns
  2024-01-19 17:43   ` [PATCH v2 08/11] eventdev: improve doxygen comments on config fns Bruce Richardson
@ 2024-01-23 10:00     ` Mattias Rönnblom
  2024-01-23 10:07       ` Bruce Richardson
  0 siblings, 1 reply; 123+ messages in thread
From: Mattias Rönnblom @ 2024-01-23 10:00 UTC (permalink / raw)
  To: Bruce Richardson, dev
  Cc: jerinj, mattias.ronnblom, abdullah.sevincer, sachin.saxena,
	hemant.agrawal, pbhagavatula, pravin.pathak

On 2024-01-19 18:43, Bruce Richardson wrote:
> Improve the documentation text for the configuration functions and
> structures for configuring an eventdev, as well as ports and queues.
> Clarify text where possible, and ensure references come through as links
> in the html output.
> 
> Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
> ---
>   lib/eventdev/rte_eventdev.h | 196 ++++++++++++++++++++++++------------
>   1 file changed, 130 insertions(+), 66 deletions(-)
> 
> diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
> index 3b8f5b8101..1fda8a5a13 100644
> --- a/lib/eventdev/rte_eventdev.h
> +++ b/lib/eventdev/rte_eventdev.h
> @@ -676,12 +676,14 @@ struct rte_event_dev_config {
>   /**
>    * Configure an event device.
>    *
> - * This function must be invoked first before any other function in the
> - * API. This function can also be re-invoked when a device is in the
> - * stopped state.
> + * This function must be invoked before any other configuration function in the
> + * API, when preparing an event device for application use.
> + * This function can also be re-invoked when a device is in the stopped state.
>    *
> - * The caller may use rte_event_dev_info_get() to get the capability of each
> - * resources available for this event device.
> + * The caller should use rte_event_dev_info_get() to get the capabilities and
> + * resource limits for this event device before calling this API.

"should" -> "may". If you know the limitations by other means, that's 
fine too.

> + * Many values in the dev_conf input parameter are subject to limits given
> + * in the device information returned from rte_event_dev_info_get().
>    *
>    * @param dev_id
>    *   The identifier of the device to configure.
> @@ -691,6 +693,9 @@ struct rte_event_dev_config {
>    * @return
>    *   - 0: Success, device configured.
>    *   - <0: Error code returned by the driver configuration function.
> + *     - -ENOTSUP - device does not support configuration
> + *     - -EINVAL  - invalid input parameter
> + *     - -EBUSY   - device has already been started
>    */
>   int
>   rte_event_dev_configure(uint8_t dev_id,
> @@ -700,14 +705,33 @@ rte_event_dev_configure(uint8_t dev_id,
>   
>   /* Event queue configuration bitmap flags */
>   #define RTE_EVENT_QUEUE_CFG_ALL_TYPES          (1ULL << 0)
> -/**< Allow ATOMIC,ORDERED,PARALLEL schedule type enqueue
> +/**< Allow events with schedule types ATOMIC, ORDERED, and PARALLEL to be enqueued to this queue.
> + * The scheduling type to be used is that specified in each individual event.
> + * This flag can only be set when configuring queues on devices reporting the
> + * @ref RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES capability.
>    *
> + * Without this flag, only events with the specific scheduling type configured at queue setup
> + * can be sent to the queue.
> + *
> + * @see RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES
>    * @see RTE_SCHED_TYPE_ORDERED, RTE_SCHED_TYPE_ATOMIC, RTE_SCHED_TYPE_PARALLEL
>    * @see rte_event_enqueue_burst()
>    */
>   #define RTE_EVENT_QUEUE_CFG_SINGLE_LINK        (1ULL << 1)
>   /**< This event queue links only to a single event port.
> - *
> + * No load-balancing of events is performed, as all events
> + * sent to this queue end up at the same event port.
> + * The number of queues on which this flag is to be set must be
> + * configured at device configuration time, by setting
> + * @ref rte_event_dev_config.nb_single_link_event_port_queues
> + * parameter appropriately.
> + *
> + * This flag serves as a hint only, any devices without specific
> + * support for single-link queues can fall-back automatically to
> + * using regular queues with a single destination port.
> + *
> + *  @see rte_event_dev_info.max_single_link_event_port_queue_pairs
> + *  @see rte_event_dev_config.nb_single_link_event_port_queues
>    *  @see rte_event_port_setup(), rte_event_port_link()
>    */
>   
> @@ -715,56 +739,75 @@ rte_event_dev_configure(uint8_t dev_id,
>   struct rte_event_queue_conf {
>   	uint32_t nb_atomic_flows;
>   	/**< The maximum number of active flows this queue can track at any
> -	 * given time. If the queue is configured for atomic scheduling (by
> -	 * applying the RTE_EVENT_QUEUE_CFG_ALL_TYPES flag to event_queue_cfg
> -	 * or RTE_SCHED_TYPE_ATOMIC flag to schedule_type), then the
> -	 * value must be in the range of [1, nb_event_queue_flows], which was
> -	 * previously provided in rte_event_dev_configure().
> +	 * given time.
> +	 *
> +	 * If the queue is configured for atomic scheduling (by
> +	 * applying the @ref RTE_EVENT_QUEUE_CFG_ALL_TYPES flag to
> +	 * @ref rte_event_queue_conf.event_queue_cfg
> +	 * or @ref RTE_SCHED_TYPE_ATOMIC flag to @ref rte_event_queue_conf.schedule_type), then the
> +	 * value must be in the range of [1, @ref rte_event_dev_config.nb_event_queue_flows],
> +	 * which was previously provided in rte_event_dev_configure().
> +	 *
> +	 * If the queue is not configured for atomic scheduling this value is ignored.
>   	 */
>   	uint32_t nb_atomic_order_sequences;
>   	/**< The maximum number of outstanding events waiting to be
>   	 * reordered by this queue. In other words, the number of entries in
>   	 * this queue’s reorder buffer.When the number of events in the
>   	 * reorder buffer reaches to *nb_atomic_order_sequences* then the
> -	 * scheduler cannot schedule the events from this queue and invalid
> -	 * event will be returned from dequeue until one or more entries are
> +	 * scheduler cannot schedule the events from this queue and no
> +	 * events will be returned from dequeue until one or more entries are
>   	 * freed up/released.
> +	 *
>   	 * If the queue is configured for ordered scheduling (by applying the
> -	 * RTE_EVENT_QUEUE_CFG_ALL_TYPES flag to event_queue_cfg or
> -	 * RTE_SCHED_TYPE_ORDERED flag to schedule_type), then the value must
> -	 * be in the range of [1, nb_event_queue_flows], which was
> +	 * @ref RTE_EVENT_QUEUE_CFG_ALL_TYPES flag to @ref rte_event_queue_conf.event_queue_cfg or
> +	 * @ref RTE_SCHED_TYPE_ORDERED flag to @ref rte_event_queue_conf.schedule_type),
> +	 * then the value must be in the range of
> +	 * [1, @ref rte_event_dev_config.nb_event_queue_flows], which was
>   	 * previously supplied to rte_event_dev_configure().
> +	 *
> +	 * If the queue is not configured for ordered scheduling, then this value is ignored
>   	 */
>   	uint32_t event_queue_cfg;
>   	/**< Queue cfg flags(EVENT_QUEUE_CFG_) */
>   	uint8_t schedule_type;
>   	/**< Queue schedule type(RTE_SCHED_TYPE_*).
> -	 * Valid when RTE_EVENT_QUEUE_CFG_ALL_TYPES bit is not set in
> -	 * event_queue_cfg.
> +	 * Valid when @ref RTE_EVENT_QUEUE_CFG_ALL_TYPES flag is not set in
> +	 * @ref rte_event_queue_conf.event_queue_cfg.
> +	 *
> +	 * If the @ref RTE_EVENT_QUEUE_CFG_ALL_TYPES flag is set, then this field is ignored.
> +	 *
> +	 * @see RTE_SCHED_TYPE_ORDERED, RTE_SCHED_TYPE_ATOMIC, RTE_SCHED_TYPE_PARALLEL
>   	 */
>   	uint8_t priority;
>   	/**< Priority for this event queue relative to other event queues.
>   	 * The requested priority should in the range of
> -	 * [RTE_EVENT_DEV_PRIORITY_HIGHEST, RTE_EVENT_DEV_PRIORITY_LOWEST].
> +	 * [@ref RTE_EVENT_DEV_PRIORITY_HIGHEST, @ref RTE_EVENT_DEV_PRIORITY_LOWEST].
>   	 * The implementation shall normalize the requested priority to
>   	 * event device supported priority value.
> -	 * Valid when the device has RTE_EVENT_DEV_CAP_QUEUE_QOS capability
> +	 *
> +	 * Valid when the device has @ref RTE_EVENT_DEV_CAP_QUEUE_QOS capability,
> +	 * ignored otherwise
>   	 */
>   	uint8_t weight;
>   	/**< Weight of the event queue relative to other event queues.
>   	 * The requested weight should be in the range of
> -	 * [RTE_EVENT_DEV_WEIGHT_HIGHEST, RTE_EVENT_DEV_WEIGHT_LOWEST].
> +	 * [@ref RTE_EVENT_QUEUE_WEIGHT_HIGHEST, @ref RTE_EVENT_QUEUE_WEIGHT_LOWEST].
>   	 * The implementation shall normalize the requested weight to event
>   	 * device supported weight value.
> -	 * Valid when the device has RTE_EVENT_DEV_CAP_QUEUE_QOS capability.
> +	 *
> +	 * Valid when the device has @ref RTE_EVENT_DEV_CAP_QUEUE_QOS capability,
> +	 * ignored otherwise.
>   	 */
>   	uint8_t affinity;
>   	/**< Affinity of the event queue relative to other event queues.
>   	 * The requested affinity should be in the range of
> -	 * [RTE_EVENT_DEV_AFFINITY_HIGHEST, RTE_EVENT_DEV_AFFINITY_LOWEST].
> +	 * [@ref RTE_EVENT_QUEUE_AFFINITY_HIGHEST, @ref RTE_EVENT_QUEUE_AFFINITY_LOWEST].
>   	 * The implementation shall normalize the requested affinity to event
>   	 * device supported affinity value.
> -	 * Valid when the device has RTE_EVENT_DEV_CAP_QUEUE_QOS capability.
> +	 *
> +	 * Valid when the device has @ref RTE_EVENT_DEV_CAP_QUEUE_QOS capability,
> +	 * ignored otherwise.
>   	 */
>   };
>   
> @@ -779,7 +822,7 @@ struct rte_event_queue_conf {
>    *   The identifier of the device.
>    * @param queue_id
>    *   The index of the event queue to get the configuration information.
> - *   The value must be in the range [0, nb_event_queues - 1]
> + *   The value must be in the range [0, @ref rte_event_dev_config.nb_event_queues - 1]

The value must be < @ref rte_event_dev_config.nb_event_queues.

It's an unsigned type, so no need to specify a lower bound.

>    *   previously supplied to rte_event_dev_configure().
>    * @param[out] queue_conf
>    *   The pointer to the default event queue configuration data.
> @@ -800,7 +843,8 @@ rte_event_queue_default_conf_get(uint8_t dev_id, uint8_t queue_id,
>    *   The identifier of the device.
>    * @param queue_id
>    *   The index of the event queue to setup. The value must be in the range
> - *   [0, nb_event_queues - 1] previously supplied to rte_event_dev_configure().
> + *   [0, @ref rte_event_dev_config.nb_event_queues - 1] previously supplied to
> + *   rte_event_dev_configure().
>    * @param queue_conf
>    *   The pointer to the configuration data to be used for the event queue.
>    *   NULL value is allowed, in which case default configuration	used.
> @@ -816,43 +860,44 @@ rte_event_queue_setup(uint8_t dev_id, uint8_t queue_id,
>   		      const struct rte_event_queue_conf *queue_conf);
>   
>   /**
> - * The priority of the queue.
> + * Queue attribute id for the priority of the queue.
>    */
>   #define RTE_EVENT_QUEUE_ATTR_PRIORITY 0
>   /**
> - * The number of atomic flows configured for the queue.
> + * Queue attribute id for the number of atomic flows configured for the queue.
>    */
>   #define RTE_EVENT_QUEUE_ATTR_NB_ATOMIC_FLOWS 1
>   /**
> - * The number of atomic order sequences configured for the queue.
> + * Queue attribute id for the number of atomic order sequences configured for the queue.
>    */
>   #define RTE_EVENT_QUEUE_ATTR_NB_ATOMIC_ORDER_SEQUENCES 2
>   /**
> - * The cfg flags for the queue.
> + * Queue attribute id for the cfg flags for the queue.

"cfg" -> "configuration"?

>    */
>   #define RTE_EVENT_QUEUE_ATTR_EVENT_QUEUE_CFG 3
>   /**
> - * The schedule type of the queue.
> + * Queue attribute id for the schedule type of the queue.
>    */
>   #define RTE_EVENT_QUEUE_ATTR_SCHEDULE_TYPE 4
>   /**
> - * The weight of the queue.
> + * Queue attribute id for the weight of the queue.
>    */
>   #define RTE_EVENT_QUEUE_ATTR_WEIGHT 5
>   /**
> - * Affinity of the queue.
> + * Queue attribute id for the affinity of the queue.
>    */
>   #define RTE_EVENT_QUEUE_ATTR_AFFINITY 6
>   
>   /**
> - * Get an attribute from a queue.
> + * Get an attribute or property of an event queue.

What is the difference between property and attribute here?

>    *
>    * @param dev_id
> - *   Eventdev id
> + *   The identifier of the device.
>    * @param queue_id
> - *   Eventdev queue id
> + *   The index of the event queue to query. The value must be in the range
> + *   [0, nb_event_queues - 1] previously supplied to rte_event_dev_configure().
>    * @param attr_id
> - *   The attribute ID to retrieve
> + *   The attribute ID to retrieve (RTE_EVENT_QUEUE_ATTR_*)
>    * @param[out] attr_value
>    *   A pointer that will be filled in with the attribute value if successful
>    *
> @@ -861,8 +906,8 @@ rte_event_queue_setup(uint8_t dev_id, uint8_t queue_id,
>    *   - -EINVAL: invalid device, queue or attr_id provided, or attr_value was
>    *		NULL
>    *   - -EOVERFLOW: returned when attr_id is set to
> - *   RTE_EVENT_QUEUE_ATTR_SCHEDULE_TYPE and event_queue_cfg is set to
> - *   RTE_EVENT_QUEUE_CFG_ALL_TYPES
> + *   @ref RTE_EVENT_QUEUE_ATTR_SCHEDULE_TYPE and @ref RTE_EVENT_QUEUE_CFG_ALL_TYPES is
> + *   set in the queue configuration flags.
>    */
>   int
>   rte_event_queue_attr_get(uint8_t dev_id, uint8_t queue_id, uint32_t attr_id,
> @@ -872,11 +917,13 @@ rte_event_queue_attr_get(uint8_t dev_id, uint8_t queue_id, uint32_t attr_id,
>    * Set an event queue attribute.
>    *
>    * @param dev_id
> - *   Eventdev id
> + *   The identifier of the device.
>    * @param queue_id
> - *   Eventdev queue id
> + *   The index of the event queue to configure. The value must be in the range
> + *   [0, @ref rte_event_dev_config.nb_event_queues - 1] previously
> + *   supplied to rte_event_dev_configure().
>    * @param attr_id
> - *   The attribute ID to set
> + *   The attribute ID to set (RTE_EVENT_QUEUE_ATTR_*)
>    * @param attr_value
>    *   The attribute value to set
>    *
> @@ -902,7 +949,10 @@ rte_event_queue_attr_set(uint8_t dev_id, uint8_t queue_id, uint32_t attr_id,
>    */
>   #define RTE_EVENT_PORT_CFG_SINGLE_LINK         (1ULL << 1)
>   /**< This event port links only to a single event queue.
> + * The queue it links with should be similarly configured with the
> + * @ref RTE_EVENT_QUEUE_CFG_SINGLE_LINK flag.
>    *
> + *  @see RTE_EVENT_QUEUE_CFG_SINGLE_LINK
>    *  @see rte_event_port_setup(), rte_event_port_link()
>    */
>   #define RTE_EVENT_PORT_CFG_HINT_PRODUCER       (1ULL << 2)
> @@ -918,7 +968,7 @@ rte_event_queue_attr_set(uint8_t dev_id, uint8_t queue_id, uint32_t attr_id,
>   #define RTE_EVENT_PORT_CFG_HINT_CONSUMER       (1ULL << 3)
>   /**< Hint that this event port will primarily dequeue events from the system.
>    * A PMD can optimize its internal workings by assuming that this port is
> - * primarily going to consume events, and not enqueue FORWARD or RELEASE
> + * primarily going to consume events, and not enqueue NEW or FORWARD
>    * events.
>    *
>    * Note that this flag is only a hint, so PMDs must operate under the
> @@ -944,48 +994,55 @@ struct rte_event_port_conf {
>   	/**< A backpressure threshold for new event enqueues on this port.
>   	 * Use for *closed system* event dev where event capacity is limited,
>   	 * and cannot exceed the capacity of the event dev.
> +	 *
>   	 * Configuring ports with different thresholds can make higher priority
>   	 * traffic less likely to  be backpressured.
>   	 * For example, a port used to inject NIC Rx packets into the event dev
>   	 * can have a lower threshold so as not to overwhelm the device,
>   	 * while ports used for worker pools can have a higher threshold.
> -	 * This value cannot exceed the *nb_events_limit*
> +	 * This value cannot exceed the @ref rte_event_dev_config.nb_events_limit value
>   	 * which was previously supplied to rte_event_dev_configure().
> -	 * This should be set to '-1' for *open system*.
> +	 *
> +	 * This should be set to '-1' for *open system*, i.e when
> +	 * @ref rte_event_dev_info.max_num_events == -1.
>   	 */
>   	uint16_t dequeue_depth;
> -	/**< Configure number of bulk dequeues for this event port.
> -	 * This value cannot exceed the *nb_event_port_dequeue_depth*
> -	 * which previously supplied to rte_event_dev_configure().
> -	 * Ignored when device is not RTE_EVENT_DEV_CAP_BURST_MODE capable.
> +	/**< Configure the maximum size of burst dequeues for this event port.
> +	 * This value cannot exceed the @ref rte_event_dev_config.nb_event_port_dequeue_depth value
> +	 * which was previously supplied to rte_event_dev_configure().
> +	 *
> +	 * Ignored when device does not support the @ref RTE_EVENT_DEV_CAP_BURST_MODE capability.
>   	 */
>   	uint16_t enqueue_depth;
> -	/**< Configure number of bulk enqueues for this event port.
> -	 * This value cannot exceed the *nb_event_port_enqueue_depth*
> -	 * which previously supplied to rte_event_dev_configure().
> -	 * Ignored when device is not RTE_EVENT_DEV_CAP_BURST_MODE capable.
> +	/**< Configure the maximum size of burst enqueues to this event port.
> +	 * This value cannot exceed the @ref rte_event_dev_config.nb_event_port_enqueue_depth value
> +	 * which was previously supplied to rte_event_dev_configure().
> +	 *
> +	 * Ignored when device does not support the @ref RTE_EVENT_DEV_CAP_BURST_MODE capability.
>   	 */
> -	uint32_t event_port_cfg; /**< Port cfg flags(EVENT_PORT_CFG_) */
> +	uint32_t event_port_cfg; /**< Port configuration flags(EVENT_PORT_CFG_) */
>   };
>   
>   /**
>    * Retrieve the default configuration information of an event port designated
>    * by its *port_id* from the event driver for an event device.
>    *
> - * This function intended to be used in conjunction with rte_event_port_setup()
> - * where caller needs to set up the port by overriding few default values.
> + * This function is intended to be used in conjunction with rte_event_port_setup()
> + * where the caller can set up the port by just overriding few default values.
>    *
>    * @param dev_id
>    *   The identifier of the device.
>    * @param port_id
>    *   The index of the event port to get the configuration information.
> - *   The value must be in the range [0, nb_event_ports - 1]
> + *   The value must be in the range [0, @ref rte_event_dev_config.nb_event_ports - 1]
>    *   previously supplied to rte_event_dev_configure().
>    * @param[out] port_conf
> - *   The pointer to the default event port configuration data
> + *   The pointer to a structure to store the default event port configuration data.
>    * @return
>    *   - 0: Success, driver updates the default event port configuration data.
>    *   - <0: Error code returned by the driver info get function.
> + *      - -EINVAL - invalid input parameter
> + *      - -ENOTSUP - function is not supported for this device
>    *
>    * @see rte_event_port_setup()
>    */
> @@ -1000,18 +1057,24 @@ rte_event_port_default_conf_get(uint8_t dev_id, uint8_t port_id,
>    *   The identifier of the device.
>    * @param port_id
>    *   The index of the event port to setup. The value must be in the range
> - *   [0, nb_event_ports - 1] previously supplied to rte_event_dev_configure().
> + *   [0, @ref rte_event_dev_config.nb_event_ports - 1] previously supplied to
> + *   rte_event_dev_configure().
>    * @param port_conf
> - *   The pointer to the configuration data to be used for the queue.
> - *   NULL value is allowed, in which case default configuration	used.
> + *   The pointer to the configuration data to be used for the port.
> + *   NULL value is allowed, in which case the default configuration is used.
>    *
>    * @see rte_event_port_default_conf_get()
>    *
>    * @return
>    *   - 0: Success, event port correctly set up.
>    *   - <0: Port configuration failed
> - *   - (-EDQUOT) Quota exceeded(Application tried to link the queue configured
> - *   with RTE_EVENT_QUEUE_CFG_SINGLE_LINK to more than one event ports)
> + *     - -EINVAL - Invalid input parameter
> + *     - -EBUSY - Port already started
> + *     - -ENOTSUP - Function not supported on this device, or a NULL pointer passed
> + *        as the port_conf parameter, and no default configuration function available
> + *        for this device.
> + *     - -EDQUOT - Application tried to link a queue configured

"." for each bullet?

> + *      with @ref RTE_EVENT_QUEUE_CFG_SINGLE_LINK to more than one event port.
>    */
>   int
>   rte_event_port_setup(uint8_t dev_id, uint8_t port_id,
> @@ -1041,8 +1104,9 @@ typedef void (*rte_eventdev_port_flush_t)(uint8_t dev_id,
>    * @param dev_id
>    *   The identifier of the device.
>    * @param port_id
> - *   The index of the event port to setup. The value must be in the range
> - *   [0, nb_event_ports - 1] previously supplied to rte_event_dev_configure().
> + *   The index of the event port to quiesce. The value must be in the range
> + *   [0, @ref rte_event_dev_config.nb_event_ports - 1]
> + *   previously supplied to rte_event_dev_configure().

Ranges can be simplified here as well.

"The index is always < @ref rte_event_dev_config.nb_event_ports"

>    * @param release_cb
>    *   Callback function invoked once per flushed event.
>    * @param args

^ permalink raw reply	[flat|nested] 123+ messages in thread

* Re: [PATCH v2 08/11] eventdev: improve doxygen comments on config fns
  2024-01-23 10:00     ` Mattias Rönnblom
@ 2024-01-23 10:07       ` Bruce Richardson
  0 siblings, 0 replies; 123+ messages in thread
From: Bruce Richardson @ 2024-01-23 10:07 UTC (permalink / raw)
  To: Mattias Rönnblom
  Cc: dev, jerinj, mattias.ronnblom, abdullah.sevincer, sachin.saxena,
	hemant.agrawal, pbhagavatula, pravin.pathak

On Tue, Jan 23, 2024 at 11:00:50AM +0100, Mattias Rönnblom wrote:
> On 2024-01-19 18:43, Bruce Richardson wrote:
> > Improve the documentation text for the configuration functions and
> > structures for configuring an eventdev, as well as ports and queues.
> > Clarify text where possible, and ensure references come through as links
> > in the html output.
> > 
> > Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
> > ---
> >   lib/eventdev/rte_eventdev.h | 196 ++++++++++++++++++++++++------------
> >   1 file changed, 130 insertions(+), 66 deletions(-)
> > 
> > diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
> > index 3b8f5b8101..1fda8a5a13 100644
> > --- a/lib/eventdev/rte_eventdev.h
> > +++ b/lib/eventdev/rte_eventdev.h
> > @@ -676,12 +676,14 @@ struct rte_event_dev_config {
> >   /**
> >    * Configure an event device.
> >    *
> > - * This function must be invoked first before any other function in the
> > - * API. This function can also be re-invoked when a device is in the
> > - * stopped state.
> > + * This function must be invoked before any other configuration function in the
> > + * API, when preparing an event device for application use.
> > + * This function can also be re-invoked when a device is in the stopped state.
> >    *
> > - * The caller may use rte_event_dev_info_get() to get the capability of each
> > - * resources available for this event device.
> > + * The caller should use rte_event_dev_info_get() to get the capabilities and
> > + * resource limits for this event device before calling this API.
> 
> "should" -> "may". If you know the limitations by other means, that's fine
> too.
> 

I think I'll keep it as "should", since it's strongly recommended. "Must"
would be incorrect, since it's not mandatory, but I think "may" is too
weak.
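For context, the query-then-configure flow being recommended looks roughly
like the sketch below (illustrative only: error handling is trimmed, and a
real application would size queues/ports to its needs rather than simply
taking the reported maxima):

```c
#include <rte_eventdev.h>

/* Illustrative sketch: query the device limits first, then configure
 * within them, as the doxygen text recommends. */
static int
configure_eventdev(uint8_t dev_id)
{
	struct rte_event_dev_info info;
	struct rte_event_dev_config cfg = {0};
	int ret;

	ret = rte_event_dev_info_get(dev_id, &info);
	if (ret < 0)
		return ret;

	/* Each dev_conf value is bounded by the corresponding info limit. */
	cfg.nb_event_queues = info.max_event_queues;
	cfg.nb_event_ports = info.max_event_ports;
	cfg.nb_events_limit = info.max_num_events;
	cfg.nb_event_queue_flows = info.max_event_queue_flows;
	cfg.nb_event_port_dequeue_depth = info.max_event_port_dequeue_depth;
	cfg.nb_event_port_enqueue_depth = info.max_event_port_enqueue_depth;
	cfg.dequeue_timeout_ns = info.min_dequeue_timeout_ns;

	return rte_event_dev_configure(dev_id, &cfg);
}
```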

> > + * Many values in the dev_conf input parameter are subject to limits given
> > + * in the device information returned from rte_event_dev_info_get().
> >    *
> >    * @param dev_id
> >    *   The identifier of the device to configure.
> > @@ -691,6 +693,9 @@ struct rte_event_dev_config {
> >    * @return
> >    *   - 0: Success, device configured.
> >    *   - <0: Error code returned by the driver configuration function.
> > + *     - -ENOTSUP - device does not support configuration
> > + *     - -EINVAL  - invalid input parameter
> > + *     - -EBUSY   - device has already been started
> >    */
> >   int
> >   rte_event_dev_configure(uint8_t dev_id,
> > @@ -700,14 +705,33 @@ rte_event_dev_configure(uint8_t dev_id,
> >   /* Event queue configuration bitmap flags */
> >   #define RTE_EVENT_QUEUE_CFG_ALL_TYPES          (1ULL << 0)
> > -/**< Allow ATOMIC,ORDERED,PARALLEL schedule type enqueue
> > +/**< Allow events with schedule types ATOMIC, ORDERED, and PARALLEL to be enqueued to this queue.
> > + * The scheduling type to be used is that specified in each individual event.
> > + * This flag can only be set when configuring queues on devices reporting the
> > + * @ref RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES capability.
> >    *
> > + * Without this flag, only events with the specific scheduling type configured at queue setup
> > + * can be sent to the queue.
> > + *
> > + * @see RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES
> >    * @see RTE_SCHED_TYPE_ORDERED, RTE_SCHED_TYPE_ATOMIC, RTE_SCHED_TYPE_PARALLEL
> >    * @see rte_event_enqueue_burst()
> >    */
> >   #define RTE_EVENT_QUEUE_CFG_SINGLE_LINK        (1ULL << 1)
> >   /**< This event queue links only to a single event port.
> > - *
> > + * No load-balancing of events is performed, as all events
> > + * sent to this queue end up at the same event port.
> > + * The number of queues on which this flag is to be set must be
> > + * configured at device configuration time, by setting
> > + * @ref rte_event_dev_config.nb_single_link_event_port_queues
> > + * parameter appropriately.
> > + *
> > + * This flag serves as a hint only, any devices without specific
> > + * support for single-link queues can fall-back automatically to
> > + * using regular queues with a single destination port.
> > + *
> > + *  @see rte_event_dev_info.max_single_link_event_port_queue_pairs
> > + *  @see rte_event_dev_config.nb_single_link_event_port_queues
> >    *  @see rte_event_port_setup(), rte_event_port_link()
> >    */
> > @@ -715,56 +739,75 @@ rte_event_dev_configure(uint8_t dev_id,
> >   struct rte_event_queue_conf {
> >   	uint32_t nb_atomic_flows;
> >   	/**< The maximum number of active flows this queue can track at any
> > -	 * given time. If the queue is configured for atomic scheduling (by
> > -	 * applying the RTE_EVENT_QUEUE_CFG_ALL_TYPES flag to event_queue_cfg
> > -	 * or RTE_SCHED_TYPE_ATOMIC flag to schedule_type), then the
> > -	 * value must be in the range of [1, nb_event_queue_flows], which was
> > -	 * previously provided in rte_event_dev_configure().
> > +	 * given time.
> > +	 *
> > +	 * If the queue is configured for atomic scheduling (by
> > +	 * applying the @ref RTE_EVENT_QUEUE_CFG_ALL_TYPES flag to
> > +	 * @ref rte_event_queue_conf.event_queue_cfg
> > +	 * or @ref RTE_SCHED_TYPE_ATOMIC flag to @ref rte_event_queue_conf.schedule_type), then the
> > +	 * value must be in the range of [1, @ref rte_event_dev_config.nb_event_queue_flows],
> > +	 * which was previously provided in rte_event_dev_configure().
> > +	 *
> > +	 * If the queue is not configured for atomic scheduling this value is ignored.
> >   	 */
> >   	uint32_t nb_atomic_order_sequences;
> >   	/**< The maximum number of outstanding events waiting to be
> >   	 * reordered by this queue. In other words, the number of entries in
> >   	 * this queue’s reorder buffer.When the number of events in the
> >   	 * reorder buffer reaches to *nb_atomic_order_sequences* then the
> > -	 * scheduler cannot schedule the events from this queue and invalid
> > -	 * event will be returned from dequeue until one or more entries are
> > +	 * scheduler cannot schedule the events from this queue and no
> > +	 * events will be returned from dequeue until one or more entries are
> >   	 * freed up/released.
> > +	 *
> >   	 * If the queue is configured for ordered scheduling (by applying the
> > -	 * RTE_EVENT_QUEUE_CFG_ALL_TYPES flag to event_queue_cfg or
> > -	 * RTE_SCHED_TYPE_ORDERED flag to schedule_type), then the value must
> > -	 * be in the range of [1, nb_event_queue_flows], which was
> > +	 * @ref RTE_EVENT_QUEUE_CFG_ALL_TYPES flag to @ref rte_event_queue_conf.event_queue_cfg or
> > +	 * @ref RTE_SCHED_TYPE_ORDERED flag to @ref rte_event_queue_conf.schedule_type),
> > +	 * then the value must be in the range of
> > +	 * [1, @ref rte_event_dev_config.nb_event_queue_flows], which was
> >   	 * previously supplied to rte_event_dev_configure().
> > +	 *
> > +	 * If the queue is not configured for ordered scheduling, then this value is ignored
> >   	 */
> >   	uint32_t event_queue_cfg;
> >   	/**< Queue cfg flags(EVENT_QUEUE_CFG_) */
> >   	uint8_t schedule_type;
> >   	/**< Queue schedule type(RTE_SCHED_TYPE_*).
> > -	 * Valid when RTE_EVENT_QUEUE_CFG_ALL_TYPES bit is not set in
> > -	 * event_queue_cfg.
> > +	 * Valid when @ref RTE_EVENT_QUEUE_CFG_ALL_TYPES flag is not set in
> > +	 * @ref rte_event_queue_conf.event_queue_cfg.
> > +	 *
> > +	 * If the @ref RTE_EVENT_QUEUE_CFG_ALL_TYPES flag is set, then this field is ignored.
> > +	 *
> > +	 * @see RTE_SCHED_TYPE_ORDERED, RTE_SCHED_TYPE_ATOMIC, RTE_SCHED_TYPE_PARALLEL
> >   	 */
> >   	uint8_t priority;
> >   	/**< Priority for this event queue relative to other event queues.
> >   	 * The requested priority should in the range of
> > -	 * [RTE_EVENT_DEV_PRIORITY_HIGHEST, RTE_EVENT_DEV_PRIORITY_LOWEST].
> > +	 * [@ref RTE_EVENT_DEV_PRIORITY_HIGHEST, @ref RTE_EVENT_DEV_PRIORITY_LOWEST].
> >   	 * The implementation shall normalize the requested priority to
> >   	 * event device supported priority value.
> > -	 * Valid when the device has RTE_EVENT_DEV_CAP_QUEUE_QOS capability
> > +	 *
> > +	 * Valid when the device has @ref RTE_EVENT_DEV_CAP_QUEUE_QOS capability,
> > +	 * ignored otherwise
> >   	 */
> >   	uint8_t weight;
> >   	/**< Weight of the event queue relative to other event queues.
> >   	 * The requested weight should be in the range of
> > -	 * [RTE_EVENT_DEV_WEIGHT_HIGHEST, RTE_EVENT_DEV_WEIGHT_LOWEST].
> > +	 * [@ref RTE_EVENT_QUEUE_WEIGHT_HIGHEST, @ref RTE_EVENT_QUEUE_WEIGHT_LOWEST].
> >   	 * The implementation shall normalize the requested weight to event
> >   	 * device supported weight value.
> > -	 * Valid when the device has RTE_EVENT_DEV_CAP_QUEUE_QOS capability.
> > +	 *
> > +	 * Valid when the device has @ref RTE_EVENT_DEV_CAP_QUEUE_QOS capability,
> > +	 * ignored otherwise.
> >   	 */
> >   	uint8_t affinity;
> >   	/**< Affinity of the event queue relative to other event queues.
> >   	 * The requested affinity should be in the range of
> > -	 * [RTE_EVENT_DEV_AFFINITY_HIGHEST, RTE_EVENT_DEV_AFFINITY_LOWEST].
> > +	 * [@ref RTE_EVENT_QUEUE_AFFINITY_HIGHEST, @ref RTE_EVENT_QUEUE_AFFINITY_LOWEST].
> >   	 * The implementation shall normalize the requested affinity to event
> >   	 * device supported affinity value.
> > -	 * Valid when the device has RTE_EVENT_DEV_CAP_QUEUE_QOS capability.
> > +	 *
> > +	 * Valid when the device has @ref RTE_EVENT_DEV_CAP_QUEUE_QOS capability,
> > +	 * ignored otherwise.
> >   	 */
> >   };
> > @@ -779,7 +822,7 @@ struct rte_event_queue_conf {
> >    *   The identifier of the device.
> >    * @param queue_id
> >    *   The index of the event queue to get the configuration information.
> > - *   The value must be in the range [0, nb_event_queues - 1]
> > + *   The value must be in the range [0, @ref rte_event_dev_config.nb_event_queues - 1]
> 
> The value must be < @ref rte_event_dev_config.nb_event_queues.
> 
> It's an unsigned type, so no need to specify a lower bound.
> 

Ok. This probably applies in many places throughout the whole header file.

> >    *   previously supplied to rte_event_dev_configure().
> >    * @param[out] queue_conf
> >    *   The pointer to the default event queue configuration data.
> > @@ -800,7 +843,8 @@ rte_event_queue_default_conf_get(uint8_t dev_id, uint8_t queue_id,
> >    *   The identifier of the device.
> >    * @param queue_id
> >    *   The index of the event queue to setup. The value must be in the range
> > - *   [0, nb_event_queues - 1] previously supplied to rte_event_dev_configure().
> > + *   [0, @ref rte_event_dev_config.nb_event_queues - 1] previously supplied to
> > + *   rte_event_dev_configure().
> >    * @param queue_conf
> >    *   The pointer to the configuration data to be used for the event queue.
> >    *   NULL value is allowed, in which case default configuration	used.
> > @@ -816,43 +860,44 @@ rte_event_queue_setup(uint8_t dev_id, uint8_t queue_id,
> >   		      const struct rte_event_queue_conf *queue_conf);
> >   /**
> > - * The priority of the queue.
> > + * Queue attribute id for the priority of the queue.
> >    */
> >   #define RTE_EVENT_QUEUE_ATTR_PRIORITY 0
> >   /**
> > - * The number of atomic flows configured for the queue.
> > + * Queue attribute id for the number of atomic flows configured for the queue.
> >    */
> >   #define RTE_EVENT_QUEUE_ATTR_NB_ATOMIC_FLOWS 1
> >   /**
> > - * The number of atomic order sequences configured for the queue.
> > + * Queue attribute id for the number of atomic order sequences configured for the queue.
> >    */
> >   #define RTE_EVENT_QUEUE_ATTR_NB_ATOMIC_ORDER_SEQUENCES 2
> >   /**
> > - * The cfg flags for the queue.
> > + * Queue attribute id for the cfg flags for the queue.
> 
> "cfg" -> "configuration"?
> 
> >    */
> >   #define RTE_EVENT_QUEUE_ATTR_EVENT_QUEUE_CFG 3
> >   /**
> > - * The schedule type of the queue.
> > + * Queue attribute id for the schedule type of the queue.
> >    */
> >   #define RTE_EVENT_QUEUE_ATTR_SCHEDULE_TYPE 4
> >   /**
> > - * The weight of the queue.
> > + * Queue attribute id for the weight of the queue.
> >    */
> >   #define RTE_EVENT_QUEUE_ATTR_WEIGHT 5
> >   /**
> > - * Affinity of the queue.
> > + * Queue attribute id for the affinity of the queue.
> >    */
> >   #define RTE_EVENT_QUEUE_ATTR_AFFINITY 6
> >   /**
> > - * Get an attribute from a queue.
> > + * Get an attribute or property of an event queue.
> 
> What is the difference between property and attribute here?
> 

Good question. Not sure what I had in mind here. I'll revert in v3, I
think.
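As a reference point for the -EOVERFLOW wording above, typical use of the
attribute query might look like this sketch (illustrative only, error
handling simplified):

```c
#include <stdio.h>
#include <errno.h>
#include <rte_eventdev.h>

/* Sketch: read back a queue's schedule type. -EOVERFLOW indicates the
 * queue was set up with RTE_EVENT_QUEUE_CFG_ALL_TYPES, so no single
 * schedule type applies - it is carried per event instead. */
static void
print_queue_sched_type(uint8_t dev_id, uint8_t queue_id)
{
	uint32_t sched;
	int ret = rte_event_queue_attr_get(dev_id, queue_id,
			RTE_EVENT_QUEUE_ATTR_SCHEDULE_TYPE, &sched);

	if (ret == -EOVERFLOW)
		printf("queue %u: per-event schedule types (ALL_TYPES)\n",
				queue_id);
	else if (ret == 0)
		printf("queue %u: schedule type %u\n", queue_id, sched);
}
```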

> >    *
> >    * @param dev_id
> > - *   Eventdev id
> > + *   The identifier of the device.
> >    * @param queue_id
> > - *   Eventdev queue id
> > + *   The index of the event queue to query. The value must be in the range
> > + *   [0, nb_event_queues - 1] previously supplied to rte_event_dev_configure().
> >    * @param attr_id
> > - *   The attribute ID to retrieve
> > + *   The attribute ID to retrieve (RTE_EVENT_QUEUE_ATTR_*)
> >    * @param[out] attr_value
> >    *   A pointer that will be filled in with the attribute value if successful
> >    *
> > @@ -861,8 +906,8 @@ rte_event_queue_setup(uint8_t dev_id, uint8_t queue_id,
> >    *   - -EINVAL: invalid device, queue or attr_id provided, or attr_value was
> >    *		NULL
> >    *   - -EOVERFLOW: returned when attr_id is set to
> > - *   RTE_EVENT_QUEUE_ATTR_SCHEDULE_TYPE and event_queue_cfg is set to
> > - *   RTE_EVENT_QUEUE_CFG_ALL_TYPES
> > + *   @ref RTE_EVENT_QUEUE_ATTR_SCHEDULE_TYPE and @ref RTE_EVENT_QUEUE_CFG_ALL_TYPES is
> > + *   set in the queue configuration flags.
> >    */
> >   int
> >   rte_event_queue_attr_get(uint8_t dev_id, uint8_t queue_id, uint32_t attr_id,
> > @@ -872,11 +917,13 @@ rte_event_queue_attr_get(uint8_t dev_id, uint8_t queue_id, uint32_t attr_id,
> >    * Set an event queue attribute.
> >    *
> >    * @param dev_id
> > - *   Eventdev id
> > + *   The identifier of the device.
> >    * @param queue_id
> > - *   Eventdev queue id
> > + *   The index of the event queue to configure. The value must be in the range
> > + *   [0, @ref rte_event_dev_config.nb_event_queues - 1] previously
> > + *   supplied to rte_event_dev_configure().
> >    * @param attr_id
> > - *   The attribute ID to set
> > + *   The attribute ID to set (RTE_EVENT_QUEUE_ATTR_*)
> >    * @param attr_value
> >    *   The attribute value to set
> >    *
> > @@ -902,7 +949,10 @@ rte_event_queue_attr_set(uint8_t dev_id, uint8_t queue_id, uint32_t attr_id,
> >    */
> >   #define RTE_EVENT_PORT_CFG_SINGLE_LINK         (1ULL << 1)
> >   /**< This event port links only to a single event queue.
> > + * The queue it links with should be similarly configured with the
> > + * @ref RTE_EVENT_QUEUE_CFG_SINGLE_LINK flag.
> >    *
> > + *  @see RTE_EVENT_QUEUE_CFG_SINGLE_LINK
> >    *  @see rte_event_port_setup(), rte_event_port_link()
> >    */
> >   #define RTE_EVENT_PORT_CFG_HINT_PRODUCER       (1ULL << 2)
> > @@ -918,7 +968,7 @@ rte_event_queue_attr_set(uint8_t dev_id, uint8_t queue_id, uint32_t attr_id,
> >   #define RTE_EVENT_PORT_CFG_HINT_CONSUMER       (1ULL << 3)
> >   /**< Hint that this event port will primarily dequeue events from the system.
> >    * A PMD can optimize its internal workings by assuming that this port is
> > - * primarily going to consume events, and not enqueue FORWARD or RELEASE
> > + * primarily going to consume events, and not enqueue NEW or FORWARD
> >    * events.
> >    *
> >    * Note that this flag is only a hint, so PMDs must operate under the
> > @@ -944,48 +994,55 @@ struct rte_event_port_conf {
> >   	/**< A backpressure threshold for new event enqueues on this port.
> >   	 * Use for *closed system* event dev where event capacity is limited,
> >   	 * and cannot exceed the capacity of the event dev.
> > +	 *
> >   	 * Configuring ports with different thresholds can make higher priority
> >   	 * traffic less likely to  be backpressured.
> >   	 * For example, a port used to inject NIC Rx packets into the event dev
> >   	 * can have a lower threshold so as not to overwhelm the device,
> >   	 * while ports used for worker pools can have a higher threshold.
> > -	 * This value cannot exceed the *nb_events_limit*
> > +	 * This value cannot exceed the @ref rte_event_dev_config.nb_events_limit value
> >   	 * which was previously supplied to rte_event_dev_configure().
> > -	 * This should be set to '-1' for *open system*.
> > +	 *
> > +	 * This should be set to '-1' for *open system*, i.e when
> > +	 * @ref rte_event_dev_info.max_num_events == -1.
> >   	 */
> >   	uint16_t dequeue_depth;
> > -	/**< Configure number of bulk dequeues for this event port.
> > -	 * This value cannot exceed the *nb_event_port_dequeue_depth*
> > -	 * which previously supplied to rte_event_dev_configure().
> > -	 * Ignored when device is not RTE_EVENT_DEV_CAP_BURST_MODE capable.
> > +	/**< Configure the maximum size of burst dequeues for this event port.
> > +	 * This value cannot exceed the @ref rte_event_dev_config.nb_event_port_dequeue_depth value
> > +	 * which was previously supplied to rte_event_dev_configure().
> > +	 *
> > +	 * Ignored when device does not support the @ref RTE_EVENT_DEV_CAP_BURST_MODE capability.
> >   	 */
> >   	uint16_t enqueue_depth;
> > -	/**< Configure number of bulk enqueues for this event port.
> > -	 * This value cannot exceed the *nb_event_port_enqueue_depth*
> > -	 * which previously supplied to rte_event_dev_configure().
> > -	 * Ignored when device is not RTE_EVENT_DEV_CAP_BURST_MODE capable.
> > +	/**< Configure the maximum size of burst enqueues to this event port.
> > +	 * This value cannot exceed the @ref rte_event_dev_config.nb_event_port_enqueue_depth value
> > +	 * which was previously supplied to rte_event_dev_configure().
> > +	 *
> > +	 * Ignored when device does not support the @ref RTE_EVENT_DEV_CAP_BURST_MODE capability.
> >   	 */
> > -	uint32_t event_port_cfg; /**< Port cfg flags(EVENT_PORT_CFG_) */
> > +	uint32_t event_port_cfg; /**< Port configuration flags(EVENT_PORT_CFG_) */
> >   };
> >   /**
> >    * Retrieve the default configuration information of an event port designated
> >    * by its *port_id* from the event driver for an event device.
> >    *
> > - * This function intended to be used in conjunction with rte_event_port_setup()
> > - * where caller needs to set up the port by overriding few default values.
> > + * This function is intended to be used in conjunction with rte_event_port_setup()
> > + * where the caller can set up the port by just overriding few default values.
> >    *
> >    * @param dev_id
> >    *   The identifier of the device.
> >    * @param port_id
> >    *   The index of the event port to get the configuration information.
> > - *   The value must be in the range [0, nb_event_ports - 1]
> > + *   The value must be in the range [0, @ref rte_event_dev_config.nb_event_ports - 1]
> >    *   previously supplied to rte_event_dev_configure().
> >    * @param[out] port_conf
> > - *   The pointer to the default event port configuration data
> > + *   The pointer to a structure to store the default event port configuration data.
> >    * @return
> >    *   - 0: Success, driver updates the default event port configuration data.
> >    *   - <0: Error code returned by the driver info get function.
> > + *      - -EINVAL - invalid input parameter
> > + *      - -ENOTSUP - function is not supported for this device
> >    *
> >    * @see rte_event_port_setup()
> >    */
> > @@ -1000,18 +1057,24 @@ rte_event_port_default_conf_get(uint8_t dev_id, uint8_t port_id,
> >    *   The identifier of the device.
> >    * @param port_id
> >    *   The index of the event port to setup. The value must be in the range
> > - *   [0, nb_event_ports - 1] previously supplied to rte_event_dev_configure().
> > + *   [0, @ref rte_event_dev_config.nb_event_ports - 1] previously supplied to
> > + *   rte_event_dev_configure().
> >    * @param port_conf
> > - *   The pointer to the configuration data to be used for the queue.
> > - *   NULL value is allowed, in which case default configuration	used.
> > + *   The pointer to the configuration data to be used for the port.
> > + *   NULL value is allowed, in which case the default configuration is used.
> >    *
> >    * @see rte_event_port_default_conf_get()
> >    *
> >    * @return
> >    *   - 0: Success, event port correctly set up.
> >    *   - <0: Port configuration failed
> > - *   - (-EDQUOT) Quota exceeded(Application tried to link the queue configured
> > - *   with RTE_EVENT_QUEUE_CFG_SINGLE_LINK to more than one event ports)
> > + *     - -EINVAL - Invalid input parameter
> > + *     - -EBUSY - Port already started
> > + *     - -ENOTSUP - Function not supported on this device, or a NULL pointer passed
> > + *        as the port_conf parameter, and no default configuration function available
> > + *        for this device.
> > + *     - -EDQUOT - Application tried to link a queue configured
> 
> "." for each bullet?
> 
Ideally, yes. I'll try and keep this consistent as I make changes, but it's
not going to be top of my worry-about list!

thanks again for the reviews.

/Bruce


* Re: [PATCH v2 09/11] eventdev: improve doxygen comments for control APIs
  2024-01-19 17:43   ` [PATCH v2 09/11] eventdev: improve doxygen comments for control APIs Bruce Richardson
@ 2024-01-23 10:10     ` Mattias Rönnblom
  0 siblings, 0 replies; 123+ messages in thread
From: Mattias Rönnblom @ 2024-01-23 10:10 UTC (permalink / raw)
  To: Bruce Richardson, dev
  Cc: jerinj, mattias.ronnblom, abdullah.sevincer, sachin.saxena,
	hemant.agrawal, pbhagavatula, pravin.pathak

On 2024-01-19 18:43, Bruce Richardson wrote:
> The doxygen comments for the port attributes, start and stop (and
> related functions) are improved.
> 
> Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
> ---
>   lib/eventdev/rte_eventdev.h | 34 +++++++++++++++++++++++-----------
>   1 file changed, 23 insertions(+), 11 deletions(-)
> 
> diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
> index 1fda8a5a13..2c6576e921 100644
> --- a/lib/eventdev/rte_eventdev.h
> +++ b/lib/eventdev/rte_eventdev.h
> @@ -1117,19 +1117,21 @@ rte_event_port_quiesce(uint8_t dev_id, uint8_t port_id,
>   		       rte_eventdev_port_flush_t release_cb, void *args);
>   
>   /**
> - * The queue depth of the port on the enqueue side
> + * Port attribute id for the maximum size of a burst enqueue operation supported on a port

"." missing.

>    */
>   #define RTE_EVENT_PORT_ATTR_ENQ_DEPTH 0
>   /**
> - * The queue depth of the port on the dequeue side
> + * Port attribute id for the maximum size of a dequeue burst which can be returned from a port
>    */
>   #define RTE_EVENT_PORT_ATTR_DEQ_DEPTH 1
>   /**
> - * The new event threshold of the port
> + * Port attribute id for the new event threshold of the port.
> + * Once the number of events in the system exceeds this threshold, the enqueue of NEW-type
> + * events will fail.
>    */
>   #define RTE_EVENT_PORT_ATTR_NEW_EVENT_THRESHOLD 2
>   /**
> - * The implicit release disable attribute of the port
> + * Port attribute id for the implicit release disable attribute of the port
>    */
>   #define RTE_EVENT_PORT_ATTR_IMPLICIT_RELEASE_DISABLE 3
>   
> @@ -1137,11 +1139,13 @@ rte_event_port_quiesce(uint8_t dev_id, uint8_t port_id,
>    * Get an attribute from a port.
>    *
>    * @param dev_id
> - *   Eventdev id
> + *   The identifier of the device.
>    * @param port_id
> - *   Eventdev port id
> + *   The index of the event port to query. The value must be in the range
> + *   [0, @ref rte_event_dev_config.nb_event_ports - 1]
> + *   previously supplied to rte_event_dev_configure().

Does the range need to be mentioned everywhere? Seems like it should be 
pretty opaque to the app.

>    * @param attr_id
> - *   The attribute ID to retrieve
> + *   The attribute ID to retrieve (RTE_EVENT_PORT_ATTR_*)
>    * @param[out] attr_value
>    *   A pointer that will be filled in with the attribute value if successful
>    *
> @@ -1156,8 +1160,8 @@ rte_event_port_attr_get(uint8_t dev_id, uint8_t port_id, uint32_t attr_id,
>   /**
>    * Start an event device.
>    *
> - * The device start step is the last one and consists of setting the event
> - * queues to start accepting the events and schedules to event ports.
> + * The device start step is the last one in device setup, and enables the event
> + * ports and queues to start accepting events and scheduling them to event ports.
>    *
>    * On success, all basic functions exported by the API (event enqueue,
>    * event dequeue and so on) can be invoked.
> @@ -1166,6 +1170,8 @@ rte_event_port_attr_get(uint8_t dev_id, uint8_t port_id, uint32_t attr_id,
>    *   Event device identifier
>    * @return
>    *   - 0: Success, device started.
> + *   - -EINVAL:  Invalid device id provided
> + *   - -ENOTSUP: Device does not support this operation.
>    *   - -ESTALE : Not all ports of the device are configured
>    *   - -ENOLINK: Not all queues are linked, which could lead to deadlock.
>    */
> @@ -1208,12 +1214,16 @@ typedef void (*rte_eventdev_stop_flush_t)(uint8_t dev_id,
>    * callback function must be registered in every process that can call
>    * rte_event_dev_stop().
>    *
> + * Only one callback function may be registered. Each new call replaces
> + * the existing registered callback function with the new function passed in.
> + *
>    * To unregister a callback, call this function with a NULL callback pointer.
>    *
>    * @param dev_id
>    *   The identifier of the device.
>    * @param callback
> - *   Callback function invoked once per flushed event.
> + *   Callback function to be invoked once per flushed event.
> + *   Pass NULL to unset any previously-registered callback function.
>    * @param userdata
>    *   Argument supplied to callback.
>    *
> @@ -1235,7 +1245,9 @@ int rte_event_dev_stop_flush_callback_register(uint8_t dev_id,
>    * @return
>    *  - 0 on successfully closing device
>    *  - <0 on failure to close device
> - *  - (-EAGAIN) if device is busy
> + *    - -EINVAL - invalid device id
> + *    - -ENOTSUP - operation not supported for this device
> + *    - -EAGAIN - device is busy

3 x "."

>    */
>   int
>   rte_event_dev_close(uint8_t dev_id);


* Re: [PATCH v2 10/11] eventdev: RFC clarify comments on scheduling types
  2024-01-19 17:43   ` [PATCH v2 10/11] eventdev: RFC clarify comments on scheduling types Bruce Richardson
@ 2024-01-23 16:19     ` Mattias Rönnblom
  2024-01-24 11:21       ` Bruce Richardson
  2024-01-31 17:54       ` Bruce Richardson
  0 siblings, 2 replies; 123+ messages in thread
From: Mattias Rönnblom @ 2024-01-23 16:19 UTC (permalink / raw)
  To: Bruce Richardson, dev
  Cc: jerinj, mattias.ronnblom, abdullah.sevincer, sachin.saxena,
	hemant.agrawal, pbhagavatula, pravin.pathak

On 2024-01-19 18:43, Bruce Richardson wrote:
> The description of ordered and atomic scheduling given in the eventdev
> doxygen documentation was not always clear. Try and simplify this so
> that it is clearer for the end-user of the application
> 
> Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
> ---
> 
> NOTE TO REVIEWERS:
> I've updated this based on my understanding of what these scheduling
> types are meant to do. It matches my understanding of the support
> offered by our Intel DLB2 driver, as well as the SW eventdev, and I
> believe the DSW eventdev too. If it does not match the behaviour of
> other eventdevs, let's have a discussion to see if we can reach a good
> definition of the behaviour that is common.
> ---
>   lib/eventdev/rte_eventdev.h | 47 ++++++++++++++++++++-----------------
>   1 file changed, 25 insertions(+), 22 deletions(-)
> 
> diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
> index 2c6576e921..cb13602ffb 100644
> --- a/lib/eventdev/rte_eventdev.h
> +++ b/lib/eventdev/rte_eventdev.h
> @@ -1313,26 +1313,24 @@ struct rte_event_vector {
>   #define RTE_SCHED_TYPE_ORDERED          0
>   /**< Ordered scheduling
>    *
> - * Events from an ordered flow of an event queue can be scheduled to multiple
> + * Events from an ordered event queue can be scheduled to multiple

What is the rationale for this change?

An implementation that imposes a total order on all events on a 
particular ordered queue will still adhere to the current, more relaxed, 
per-flow ordering semantics.

An application wanting a total order would just set the flow id to 0 on 
all events destined that queue, and it would work on all event devices.
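i.e., something like this sketch (queue and function names are
illustrative):

```c
#include <rte_eventdev.h>

/* Sketch: an application wanting total order on an ORDERED queue can
 * put every event in a single flow by using one flow id throughout. */
static void
make_total_order_event(struct rte_event *ev, uint8_t ordered_queue_id)
{
	ev->op = RTE_EVENT_OP_NEW;
	ev->queue_id = ordered_queue_id;
	ev->sched_type = RTE_SCHED_TYPE_ORDERED;
	ev->flow_id = 0;	/* one flow id for all events => total order */
}
```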

Why don't you just put a note in the DLB driver saying "btw it's total 
order", so any application where per-flow ordering is crucial for 
performance (i.e., where the potentially needless head-of-line blocking 
is an issue) can use multiple queues when running with the DLB.

In the API as-written, the app is free to express more relaxed ordering 
requirements (i.e., to have multiple flows) and it's up to the event 
device to figure out if it's in a position where it can translate this 
to lower latency.

>    * ports for concurrent processing while maintaining the original event order.

Maybe it's worth mentioning what the original event order is, "(i.e., 
the order in which the events were enqueued to the queue)". Especially 
since one would like to specify what ordering guarantees hold for events 
enqueued to the same queue on different ports and by different lcores.

I don't know where that information should go though, since it's 
relevant for both atomic and ordered-type queues.

>    * This scheme enables the user to achieve high single flow throughput by
> - * avoiding SW synchronization for ordering between ports which bound to cores.
> - *
> - * The source flow ordering from an event queue is maintained when events are
> - * enqueued to their destination queue within the same ordered flow context.
> - * An event port holds the context until application call
> - * rte_event_dequeue_burst() from the same port, which implicitly releases
> - * the context.
> - * User may allow the scheduler to release the context earlier than that
> - * by invoking rte_event_enqueue_burst() with RTE_EVENT_OP_RELEASE operation.
> - *
> - * Events from the source queue appear in their original order when dequeued
> - * from a destination queue.
> - * Event ordering is based on the received event(s), but also other
> - * (newly allocated or stored) events are ordered when enqueued within the same
> - * ordered context. Events not enqueued (e.g. released or stored) within the
> - * context are  considered missing from reordering and are skipped at this time
> - * (but can be ordered again within another context).
> + * avoiding SW synchronization for ordering between ports which are polled by
> + * different cores.
> + *
> + * As events are scheduled to ports/cores, the original event order from the
> + * source event queue is recorded internally in the scheduler. As events are
> + * returned (via FORWARD type enqueue) to the scheduler, the original event
> + * order is restored before the events are enqueued into their new destination
> + * queue.

Delete the first sentence on implementation.

"As events are re-enqueued to the next queue (with the op field set to 
RTE_EVENT_OP_FORWARD), the event device restores the original event 
order before the events arrive on the destination queue."

> + *
> + * Any events not forwarded, ie. dropped explicitly via RELEASE or implicitly
> + * released by the next dequeue from a port, are skipped by the reordering
> + * stage and do not affect the reordering of returned events.
> + *
> + * The ordering behaviour of NEW events with respect to FORWARD events is
> + * undefined and implementation dependent.

For some reason I find this a little vague. "NEW and FORWARD events 
enqueued to a queue are not ordered in relation to each other (even if 
the flow id is the same)."

I think I agree that NEW shouldn't be ordered vis-à-vis FORWARD, but 
maybe one should say that an event device should avoid excessive 
reordering of NEW and FORWARD events.

I think it would also be helpful to address port-to-port ordering 
guarantees (or a lack thereof).

"Events enqueued on one port are not ordered in relation to events 
enqueued on some other port."

Or are they? Not in DSW, at least, and I'm not sure I see a use case for 
such a guarantee, but it's a little counter-intuitive to have them 
potentially re-shuffled.

(This is also relevant for atomic queues.)

>    *
>    * @see rte_event_queue_setup(), rte_event_dequeue_burst(), RTE_EVENT_OP_RELEASE
>    */
> @@ -1340,18 +1338,23 @@ struct rte_event_vector {
>   #define RTE_SCHED_TYPE_ATOMIC           1
>   /**< Atomic scheduling
>    *
> - * Events from an atomic flow of an event queue can be scheduled only to a
> + * Events from an atomic flow, identified by @ref rte_event.flow_id,

A flow is identified by the combination of queue_id and flow_id, so if 
you reference one you should also reference the other.

> + * of an event queue can be scheduled only to a
>    * single port at a time. The port is guaranteed to have exclusive (atomic)
>    * access to the associated flow context, which enables the user to avoid SW
>    * synchronization. Atomic flows also help to maintain event ordering

"help" here needs to go, I think. It sounds like a best-effort affair. 
The atomic queue ordering guarantees (or the lack thereof) should be 
spelled out.

"Event order in an atomic flow is maintained."

> - * since only one port at a time can process events from a flow of an
> + * since only one port at a time can process events from each flow of an
>    * event queue.

Yes, and *but also since* the event device is not reshuffling events 
enqueued to an atomic queue. And that's more complicated than just 
something that falls out of atomicity, especially if you assume that 
FORWARD type enqueues are not ordered with other FORWARD type enqueues 
on a different port.

>    *
> - * The atomic queue synchronization context is dedicated to the port until
> + * The atomic queue synchronization context for a flow is dedicated to the port until

What is an "atomic queue synchronization context" (except for something 
that makes for long sentences)?

How about:
"The atomic flow is locked to the port until /../"

You could also use the word "bound" instead of "locked".

>    * application call rte_event_dequeue_burst() from the same port,
>    * which implicitly releases the context. User may allow the scheduler to
>    * release the context earlier than that by invoking rte_event_enqueue_burst()
> - * with RTE_EVENT_OP_RELEASE operation.
> + * with RTE_EVENT_OP_RELEASE operation for each event from that flow. The context
> + * is only released once the last event from the flow, outstanding on the port,
> + * is released. So long as there is one event from an atomic flow scheduled to
> + * a port/core (including any events in the port's dequeue queue, not yet read
> + * by the application), that port will hold the synchronization context.

In case you like the "atomic flow locked/bound to port", this part would 
also need updating.

Maybe here is a good place to add a note on memory ordering and event 
ordering.

"Any memory stores done as a part of event processing will be globally 
visible before the next event in the same atomic flow is dequeued on a 
different lcore."

I.e., enqueue includes a write barrier before the event can be seen.

One should probably mention an rmb in dequeue as well.

>    *
>    * @see rte_event_queue_setup(), rte_event_dequeue_burst(), RTE_EVENT_OP_RELEASE
>    */


* Re: [PATCH v2 10/11] eventdev: RFC clarify comments on scheduling types
  2024-01-23 16:19     ` Mattias Rönnblom
@ 2024-01-24 11:21       ` Bruce Richardson
  2024-01-31 17:54       ` Bruce Richardson
  1 sibling, 0 replies; 123+ messages in thread
From: Bruce Richardson @ 2024-01-24 11:21 UTC (permalink / raw)
  To: Mattias Rönnblom
  Cc: dev, jerinj, mattias.ronnblom, abdullah.sevincer, sachin.saxena,
	hemant.agrawal, pbhagavatula, pravin.pathak

On Tue, Jan 23, 2024 at 05:19:18PM +0100, Mattias Rönnblom wrote:
> On 2024-01-19 18:43, Bruce Richardson wrote:
> > The description of ordered and atomic scheduling given in the eventdev
> > doxygen documentation was not always clear. Try and simplify this so
> > that it is clearer for the end-user of the application
> > 
> > Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
> > ---
> > 
> > NOTE TO REVIEWERS:
> > I've updated this based on my understanding of what these scheduling
> > types are meant to do. It matches my understanding of the support
> > offered by our Intel DLB2 driver, as well as the SW eventdev, and I
> > believe the DSW eventdev too. If it does not match the behaviour of
> > other eventdevs, let's have a discussion to see if we can reach a good
> > definition of the behaviour that is common.
> > ---
> >   lib/eventdev/rte_eventdev.h | 47 ++++++++++++++++++++-----------------
> >   1 file changed, 25 insertions(+), 22 deletions(-)
> > 
> > diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
> > index 2c6576e921..cb13602ffb 100644
> > --- a/lib/eventdev/rte_eventdev.h
> > +++ b/lib/eventdev/rte_eventdev.h
> > @@ -1313,26 +1313,24 @@ struct rte_event_vector {
> >   #define RTE_SCHED_TYPE_ORDERED          0
> >   /**< Ordered scheduling
> >    *
> > - * Events from an ordered flow of an event queue can be scheduled to multiple
> > + * Events from an ordered event queue can be scheduled to multiple
> 
> What is the rationale for this change?
> 
> An implementation that impose a total order on all events on a particular
> ordered queue will still adhere to the current, more relaxed, per-flow
> ordering semantics.
> 
> An application wanting a total order would just set the flow id to 0 on all
> events destined that queue, and it would work on all event devices.
> 
> Why don't you just put a note in the DLB driver saying "btw it's total
> order", so any application where per-flow ordering is crucial for
> performance (i.e., where the potentially needless head-of-line blocking is
> an issue) can use multiple queues when running with the DLB.
> 
> In the API as-written, the app is free to express more relaxed ordering
> requirements (i.e., to have multiple flows) and it's up to the event device
> to figure out if it's in a position where it can translate this to lower
> latency.
> 

Yes, you are right. I'll roll back or rework this change in V3. Keep it
documented that flow-ordering is guaranteed, but note that some
implementations may use total ordering to achieve that.

> >    * ports for concurrent processing while maintaining the original event order.
> 
> Maybe it's worth mentioning what is the original event order. "(i.e., the
> order in which the events were enqueued to the queue)". Especially since one
> like to specify what ordering guarantees one have of events enqueued to the
> same queue on different ports and by different lcores).
> 
> I don't know where that information should go though, since it's relevant
> for both atomic and ordered-type queues.
> 

It's probably more relevant for ordered, but I'll try and see where it's
best to go.

> >    * This scheme enables the user to achieve high single flow throughput by
> > - * avoiding SW synchronization for ordering between ports which bound to cores.
> > - *
> > - * The source flow ordering from an event queue is maintained when events are
> > - * enqueued to their destination queue within the same ordered flow context.
> > - * An event port holds the context until application call
> > - * rte_event_dequeue_burst() from the same port, which implicitly releases
> > - * the context.
> > - * User may allow the scheduler to release the context earlier than that
> > - * by invoking rte_event_enqueue_burst() with RTE_EVENT_OP_RELEASE operation.
> > - *
> > - * Events from the source queue appear in their original order when dequeued
> > - * from a destination queue.
> > - * Event ordering is based on the received event(s), but also other
> > - * (newly allocated or stored) events are ordered when enqueued within the same
> > - * ordered context. Events not enqueued (e.g. released or stored) within the
> > - * context are  considered missing from reordering and are skipped at this time
> > - * (but can be ordered again within another context).
> > + * avoiding SW synchronization for ordering between ports which are polled by
> > + * different cores.
> > + *
> > + * As events are scheduled to ports/cores, the original event order from the
> > + * source event queue is recorded internally in the scheduler. As events are
> > + * returned (via FORWARD type enqueue) to the scheduler, the original event
> > + * order is restored before the events are enqueued into their new destination
> > + * queue.
> 
> Delete the first sentence on implementation.
> 
> "As events are re-enqueued to the next queue (with the op field set to
> RTE_EVENT_OP_FORWARD), the event device restores the original event order
> before the events arrive on the destination queue."
> 
> > + *
> > + * Any events not forwarded, ie. dropped explicitly via RELEASE or implicitly
> > + * released by the next dequeue from a port, are skipped by the reordering
> > + * stage and do not affect the reordering of returned events.
> > + *
> > + * The ordering behaviour of NEW events with respect to FORWARD events is
> > + * undefined and implementation dependent.
> 
> For some reason I find this a little vague. "NEW and FORWARD events enqueued
> to a queue are not ordered in relation to each other (even if the flow id is
> the same)."
> 
> I think I agree that NEW shouldn't be ordered vis-a-vi FORWARD, but maybe
> one should say that an event device should avoid excessive reordering NEW
> and FORWARD events.
> 
> I think it would also be helpful to address port-to-port ordering guarantees
> (or a lack thereof).
> 
> "Events enqueued on one port are not ordered in relation to events enqueued
> on some other port."
> 
> Or are they? Not in DSW, at least, and I'm not sure I see a use case for
> such a guarantee, but it's a little counter-intuitive to have them
> potentially re-shuffled.
> 
> (This is also relevant for atomic queues.)
> 

Ack.

> >    *
> >    * @see rte_event_queue_setup(), rte_event_dequeue_burst(), RTE_EVENT_OP_RELEASE
> >    */
> > @@ -1340,18 +1338,23 @@ struct rte_event_vector {
> >   #define RTE_SCHED_TYPE_ATOMIC           1
> >   /**< Atomic scheduling
> >    *
> > - * Events from an atomic flow of an event queue can be scheduled only to a
> > + * Events from an atomic flow, identified by @ref rte_event.flow_id,
> 
> A flow is identified by the combination of queue_id and flow_id, so if you
> reference one you should also reference the other.
> 

Yes, this is probably one to be reflected globally. Also on your previous
comment about priority, I believe that a flow for ordering guarantees
should be a combination of queue_id, flow_id and priority. Two packets with
different priorities should be expected to be reordered, since that tends to be
what priority implies.

> > + * of an event queue can be scheduled only to a
> >    * single port at a time. The port is guaranteed to have exclusive (atomic)
> >    * access to the associated flow context, which enables the user to avoid SW
> >    * synchronization. Atomic flows also help to maintain event ordering
> 
> "help" here needs to go, I think. It sounds like a best-effort affair. The
> atomic queue ordering guarantees (or the lack thereof) should be spelled
> out.
> 
> "Event order in an atomic flow is maintained."

Ack.

> 
> > - * since only one port at a time can process events from a flow of an
> > + * since only one port at a time can process events from each flow of an
> >    * event queue.
> 
> Yes, and *but also since* the event device is not reshuffling events
> enqueued to an atomic queue. And that's more complicated than just something
> that falls out of atomicity, especially if you assume that FORWARD type
> enqueues are not ordered with other FORWARD type enqueues on a different
> port.
> 

Ack.

> >    *
> > - * The atomic queue synchronization context is dedicated to the port until
> > + * The atomic queue synchronization context for a flow is dedicated to the port until
> 
> What is an "atomic queue synchronization context" (except for something that
> makes for long sentences).
> 

Yes, it's rather wordy. I like the idea of using the lock terminology you
suggest. The use of the word "contexts" in relation to atomic/ordered I
find confusing myself too.

> How about:
> "The atomic flow is locked to the port until /../"
> 
> You could also used the word "bound" instead of "locked".
> 
> >    * application call rte_event_dequeue_burst() from the same port,
> >    * which implicitly releases the context. User may allow the scheduler to
> >    * release the context earlier than that by invoking rte_event_enqueue_burst()
> > - * with RTE_EVENT_OP_RELEASE operation.
> > + * with RTE_EVENT_OP_RELEASE operation for each event from that flow. The context
> > + * is only released once the last event from the flow, outstanding on the port,
> > + * is released. So long as there is one event from an atomic flow scheduled to
> > + * a port/core (including any events in the port's dequeue queue, not yet read
> > + * by the application), that port will hold the synchronization context.
> 
> In case you like the "atomic flow locked/bound to port", this part would
> also need updating.
> 
> Maybe here is a good place to add a note on memory ordering and event
> ordering.
> 
> "Any memory stores done as a part of event processing will be globally
> visible before the next event in the same atomic flow is dequeued on a
> different lcore."
> 
> I.e., enqueue includes write barrier before the event can be seen.
> 
> One should probably mentioned a rmb in dequeue as well.
> 

Do we think that that is necessary? I can add it, but I would have thought
that - as with rings - it could be assumed.

/Bruce


* Re: [PATCH v2 11/11] eventdev: RFC clarify docs on event object fields
  2024-01-19 17:43   ` [PATCH v2 11/11] eventdev: RFC clarify docs on event object fields Bruce Richardson
@ 2024-01-24 11:34     ` Mattias Rönnblom
  2024-02-01 16:59       ` Bruce Richardson
  2024-02-01 17:02       ` Bruce Richardson
  2024-02-01  9:35     ` Bruce Richardson
  1 sibling, 2 replies; 123+ messages in thread
From: Mattias Rönnblom @ 2024-01-24 11:34 UTC (permalink / raw)
  To: Bruce Richardson, dev
  Cc: jerinj, mattias.ronnblom, abdullah.sevincer, sachin.saxena,
	hemant.agrawal, pbhagavatula, pravin.pathak

On 2024-01-19 18:43, Bruce Richardson wrote:
> Clarify the meaning of the NEW, FORWARD and RELEASE event types.
> For the fields in "rte_event" struct, enhance the comments on each to
> clarify the field's use, and whether it is preserved between enqueue and
> dequeue, and it's role, if any, in scheduling.
> 
> Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
> ---
> 
> As with the previous patch, please review this patch to ensure that the
> expected semantics of the various event types and event fields have not
> changed in an unexpected way.
> ---
>   lib/eventdev/rte_eventdev.h | 105 ++++++++++++++++++++++++++----------
>   1 file changed, 77 insertions(+), 28 deletions(-)
> 
> diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
> index cb13602ffb..4eff1c4958 100644
> --- a/lib/eventdev/rte_eventdev.h
> +++ b/lib/eventdev/rte_eventdev.h
> @@ -1416,21 +1416,25 @@ struct rte_event_vector {
> 
>   /* Event enqueue operations */
>   #define RTE_EVENT_OP_NEW                0
> -/**< The event producers use this operation to inject a new event to the
> +/**< The @ref rte_event.op field should be set to this type to inject a new event to the
>    * event device.
>    */

"type" -> "value"

"to" -> "into"?

You could also say "to mark the event as new".

What is new? Maybe "new (as opposed to a forwarded) event." or "new 
(i.e., not previously dequeued).".

"The application sets the @ref rte_event.op field of an enqueued event 
to this value to mark the event as new (i.e., not previously dequeued)."

>   #define RTE_EVENT_OP_FORWARD            1
> -/**< The CPU use this operation to forward the event to different event queue or
> - * change to new application specific flow or schedule type to enable
> - * pipelining.
> +/**< SW should set the @ref rte_event.op filed to this type to return a
> + * previously dequeued event to the event device for further processing.

"filed" -> "field"

"SW" -> "The application"

"to be scheduled for further processing (or transmission)"

The wording should otherwise be the same as NEW, whatever you choose there.

>    *
> - * This operation must only be enqueued to the same port that the
> + * This event *must* be enqueued to the same port that the
>    * event to be forwarded was dequeued from.

OK, so you "should" mark a new event RTE_EVENT_OP_FORWARD but you 
"*must*" enqueue it to the same port.

I think you "must" do both.

> + *
> + * The event's fields, including (but not limited to) flow_id, scheduling type,
> + * destination queue, and event payload e.g. mbuf pointer, may all be updated as
> + * desired by software, but the @ref rte_event.impl_opaque field must

"software" -> "application"

> + * be kept to the same value as was present when the event was dequeued.
>    */
>   #define RTE_EVENT_OP_RELEASE            2
>   /**< Release the flow context associated with the schedule type.
>    *
> - * If current flow's scheduler type method is *RTE_SCHED_TYPE_ATOMIC*
> + * If current flow's scheduler type method is @ref RTE_SCHED_TYPE_ATOMIC
>    * then this function hints the scheduler that the user has completed critical
>    * section processing in the current atomic context.
>    * The scheduler is now allowed to schedule events from the same flow from
> @@ -1442,21 +1446,19 @@ struct rte_event_vector {
>    * performance, but the user needs to design carefully the split into critical
>    * vs non-critical sections.
>    *
> - * If current flow's scheduler type method is *RTE_SCHED_TYPE_ORDERED*
> - * then this function hints the scheduler that the user has done all that need
> - * to maintain event order in the current ordered context.
> - * The scheduler is allowed to release the ordered context of this port and
> - * avoid reordering any following enqueues.
> - *
> - * Early ordered context release may increase parallelism and thus system
> - * performance.
> + * If current flow's scheduler type method is @ref RTE_SCHED_TYPE_ORDERED

Isn't a missing "or @ref RTE_SCHED_TYPE_ATOMIC" just an oversight (in 
the original API wording)?

> + * then this function informs the scheduler that the current event has
> + * completed processing and will not be returned to the scheduler, i.e.
> + * it has been dropped, and so the reordering context for that event
> + * should be considered filled.
>    *
> - * If current flow's scheduler type method is *RTE_SCHED_TYPE_PARALLEL*
> + * If current flow's scheduler type method is @ref RTE_SCHED_TYPE_PARALLEL
>    * or no scheduling context is held then this function may be an NOOP,
>    * depending on the implementation.

Maybe you can also fix this "function" -> "operation". I suggest you 
delete that sentence, because it makes no sense.

What it says in a somewhat vague manner is that you tread into the realm 
of undefined behavior if you release PARALLEL events.

>    *
>    * This operation must only be enqueued to the same port that the
> - * event to be released was dequeued from.
> + * event to be released was dequeued from. The @ref rte_event.impl_opaque
> + * field in the release event must match that in the original dequeued event.

I would say "same value" rather than "match".

"The @ref rte_event.impl_opaque field in the release event must have the 
same value as in the original dequeued event."

>    */
> 
>   /**
> @@ -1473,53 +1475,100 @@ struct rte_event {
>   			/**< Targeted flow identifier for the enqueue and
>   			 * dequeue operation.
>   			 * The value must be in the range of
> -			 * [0, nb_event_queue_flows - 1] which
> +			 * [0, @ref rte_event_dev_config.nb_event_queue_flows - 1] which

The same comment as I had before about ranges for unsigned types.

>   			 * previously supplied to rte_event_dev_configure().
> +			 *
> +			 * For @ref RTE_SCHED_TYPE_ATOMIC, this field is used to identify a
> +			 * flow context for atomicity, such that events from each individual flow
> +			 * will only be scheduled to one port at a time.

flow_id alone doesn't identify an atomic flow. It's queue_id + flow_id. 
I'm not sure I think "flow context" adds much, unless it's defined 
somewhere. Sounds like some assumed implementation detail.

> +			 *
> +			 * This field is preserved between enqueue and dequeue when
> +			 * a device reports the @ref RTE_EVENT_DEV_CAP_CARRY_FLOW_ID
> +			 * capability. Otherwise the value is implementation dependent
> +			 * on dequeue.
> +			 */
>   			uint32_t sub_event_type:8;
>   			/**< Sub-event types based on the event source.
> +			 *
> +			 * This field is preserved between enqueue and dequeue.
> +			 * This field is for SW or event adapter use,

"SW" -> "application"

> +			 * and is unused in scheduling decisions.
> +			 *
>   			 * @see RTE_EVENT_TYPE_CPU
>   			 */
>   			uint32_t event_type:4;
> -			/**< Event type to classify the event source.
> -			 * @see RTE_EVENT_TYPE_ETHDEV, (RTE_EVENT_TYPE_*)
> +			/**< Event type to classify the event source. (RTE_EVENT_TYPE_*)
> +			 *
> +			 * This field is preserved between enqueue and dequeue
> +			 * This field is for SW or event adapter use,
> +			 * and is unused in scheduling decisions.

"unused" -> "is not considered"?

>   			 */
>   			uint8_t op:2;
> -			/**< The type of event enqueue operation - new/forward/
> -			 * etc.This field is not preserved across an instance
> +			/**< The type of event enqueue operation - new/forward/ etc.
> +			 *
> +			 * This field is *not* preserved across an instance
>   			 * and is undefined on dequeue.

Maybe you should use "undefined" rather than "implementation dependent", 
or change this instance of undefined to implementation dependent. Now 
two different terms are used for the same thing.

> -			 * @see RTE_EVENT_OP_NEW, (RTE_EVENT_OP_*)
> +			 *
> +			 * @see RTE_EVENT_OP_NEW
> +			 * @see RTE_EVENT_OP_FORWARD
> +			 * @see RTE_EVENT_OP_RELEASE
>   			 */
>   			uint8_t rsvd:4;
> -			/**< Reserved for future use */
> +			/**< Reserved for future use.
> +			 *
> +			 * Should be set to zero on enqueue. Zero on dequeue.
> +			 */

Why say they should be zero on dequeue? Doesn't this defeat the purpose 
of having reserved bits, for future use? With your suggested wording, you 
can't use these bits without breaking the ABI.

>   			uint8_t sched_type:2;
>   			/**< Scheduler synchronization type (RTE_SCHED_TYPE_*)
>   			 * associated with flow id on a given event queue
>   			 * for the enqueue and dequeue operation.
> +			 *
> +			 * This field is used to determine the scheduling type
> +			 * for events sent to queues where @ref RTE_EVENT_QUEUE_CFG_ALL_TYPES
> +			 * is supported.

"supported" -> "configured"

> +			 * For queues where only a single scheduling type is available,
> +			 * this field must be set to match the configured scheduling type.
> +			 *

Why is the API/event device asking this of the application?

> +			 * This field is preserved between enqueue and dequeue.
> +			 *
> +			 * @see RTE_SCHED_TYPE_ORDERED
> +			 * @see RTE_SCHED_TYPE_ATOMIC
> +			 * @see RTE_SCHED_TYPE_PARALLEL
>   			 */
>   			uint8_t queue_id;
>   			/**< Targeted event queue identifier for the enqueue or
>   			 * dequeue operation.
>   			 * The value must be in the range of
> -			 * [0, nb_event_queues - 1] which previously supplied to
> -			 * rte_event_dev_configure().
> +			 * [0, @ref rte_event_dev_config.nb_event_queues - 1] which was
> +			 * previously supplied to rte_event_dev_configure().
> +			 *
> +			 * This field is preserved between enqueue on dequeue.
>   			 */
>   			uint8_t priority;
>   			/**< Event priority relative to other events in the
>   			 * event queue. The requested priority should in the
> -			 * range of  [RTE_EVENT_DEV_PRIORITY_HIGHEST,
> -			 * RTE_EVENT_DEV_PRIORITY_LOWEST].
> +			 * range of  [@ref RTE_EVENT_DEV_PRIORITY_HIGHEST,
> +			 * @ref RTE_EVENT_DEV_PRIORITY_LOWEST].
>   			 * The implementation shall normalize the requested
>   			 * priority to supported priority value.
> +			 *
>   			 * Valid when the device has
> -			 * RTE_EVENT_DEV_CAP_EVENT_QOS capability.
> +			 * @ref RTE_EVENT_DEV_CAP_EVENT_QOS capability.
> +			 * Ignored otherwise.
> +			 *
> +			 * This field is preserved between enqueue and dequeue.

Is it the normalized or the unnormalized value that is preserved?

>   			 */
>   			uint8_t impl_opaque;
>   			/**< Implementation specific opaque value.

Maybe you can also fix "implementation" here to be something more 
specific. Implementation, of what?

"Event device implementation" or just "event device".

> +			 *
>   			 * An implementation may use this field to hold
>   			 * implementation specific value to share between
>   			 * dequeue and enqueue operation.
> +			 *
>   			 * The application should not modify this field.
> +			 * Its value is implementation dependent on dequeue,
> +			 * and must be returned unmodified on enqueue when
> +			 * op type is @ref RTE_EVENT_OP_FORWARD or @ref RTE_EVENT_OP_RELEASE

Should it be mentioned that impl_opaque is ignored by the event device 
for NEW events?

>   			 */
>   		};
>   	};
> --
> 2.40.1
> 


* Re: [PATCH v2 01/11] eventdev: improve doxygen introduction text
  2024-01-23  9:06       ` Bruce Richardson
@ 2024-01-24 11:37         ` Mattias Rönnblom
  0 siblings, 0 replies; 123+ messages in thread
From: Mattias Rönnblom @ 2024-01-24 11:37 UTC (permalink / raw)
  To: Bruce Richardson; +Cc: dev

On 2024-01-23 10:06, Bruce Richardson wrote:
> On Tue, Jan 23, 2024 at 09:57:58AM +0100, Mattias Rönnblom wrote:
>> On 2024-01-19 18:43, Bruce Richardson wrote:
>>> Make some textual improvements to the introduction to eventdev and event
>>> devices in the eventdev header file. This text appears in the doxygen
>>> output for the header file, and introduces the key concepts, for
>>> example: events, event devices, queues, ports and scheduling.
>>>
>>
>> Great stuff, Bruce.
>>
> Thanks, good feedback here. I'll take that into account and do a v3 later
> when all feedback on this v2 is in.
> 
> /Bruce

Sorry for such a piecemeal review. I didn't have time to do it all in 
one go.


* Re: [PATCH v2 04/11] eventdev: cleanup doxygen comments on info structure
  2024-01-23  9:43       ` Bruce Richardson
@ 2024-01-24 11:51         ` Mattias Rönnblom
  2024-01-31 14:37           ` Bruce Richardson
  0 siblings, 1 reply; 123+ messages in thread
From: Mattias Rönnblom @ 2024-01-24 11:51 UTC (permalink / raw)
  To: Bruce Richardson
  Cc: dev, jerinj, mattias.ronnblom, abdullah.sevincer, sachin.saxena,
	hemant.agrawal, pbhagavatula, pravin.pathak

On 2024-01-23 10:43, Bruce Richardson wrote:
> On Tue, Jan 23, 2024 at 10:35:02AM +0100, Mattias Rönnblom wrote:
>> On 2024-01-19 18:43, Bruce Richardson wrote:
>>> Some small rewording changes to the doxygen comments on struct
>>> rte_event_dev_info.
>>>
>>> Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
>>> ---
>>>    lib/eventdev/rte_eventdev.h | 46 ++++++++++++++++++++-----------------
>>>    1 file changed, 25 insertions(+), 21 deletions(-)
>>>
>>> diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
>>> index 57a2791946..872f241df2 100644
>>> --- a/lib/eventdev/rte_eventdev.h
>>> +++ b/lib/eventdev/rte_eventdev.h
>>> @@ -482,54 +482,58 @@ struct rte_event_dev_info {
>>>    	const char *driver_name;	/**< Event driver name */
>>>    	struct rte_device *dev;	/**< Device information */
>>>    	uint32_t min_dequeue_timeout_ns;
>>> -	/**< Minimum supported global dequeue timeout(ns) by this device */
>>> +	/**< Minimum global dequeue timeout(ns) supported by this device */
>>
>> Are we missing a bunch of "." here and in the other fields?
>>
>>>    	uint32_t max_dequeue_timeout_ns;
>>> -	/**< Maximum supported global dequeue timeout(ns) by this device */
>>> +	/**< Maximum global dequeue timeout(ns) supported by this device */
>>>    	uint32_t dequeue_timeout_ns;
>>>    	/**< Configured global dequeue timeout(ns) for this device */
>>>    	uint8_t max_event_queues;
>>> -	/**< Maximum event_queues supported by this device */
>>> +	/**< Maximum event queues supported by this device */
>>>    	uint32_t max_event_queue_flows;
>>> -	/**< Maximum supported flows in an event queue by this device*/
>>> +	/**< Maximum number of flows within an event queue supported by this device*/
>>>    	uint8_t max_event_queue_priority_levels;
>>>    	/**< Maximum number of event queue priority levels by this device.
>>> -	 * Valid when the device has RTE_EVENT_DEV_CAP_QUEUE_QOS capability
>>> +	 * Valid when the device has RTE_EVENT_DEV_CAP_QUEUE_QOS capability.
>>> +	 * The priority levels are evenly distributed between
>>> +	 * @ref RTE_EVENT_DEV_PRIORITY_HIGHEST and @ref RTE_EVENT_DEV_PRIORITY_LOWEST.
>>
>> This is a change of the API, in the sense it's defining something previously
>> left undefined?
>>
> 
> Well, undefined is pretty useless for app writers, no?
> However, agreed that the range between HIGHEST and LOWEST is an assumption
> on my part, chosen because it matches what happens to the event priorities
> which are documented in event struct as "The implementation shall normalize
>   the requested priority to supported priority value" - which, while better
> than nothing, does technically leave the details of how normalization
> occurs up to the implementation.
> 
>> If you need 6 different priority levels in an app, how do you go about
>> making sure you find the correct (distinct) Eventdev levels on any event
>> device supporting >= 6 levels?
>>
>> #define NUM_MY_LEVELS 6
>>
>> #define MY_LEVEL_TO_EVENTDEV_LEVEL(my_level) (((my_level) *
>> (RTE_EVENT_DEV_PRIORITY_HIGHEST-RTE_EVENT_DEV_PRIORTY_LOWEST) /
>> NUM_MY_LEVELS)
>>
>> This way? One would worry a bit exactly what "evenly" means, in terms of
>> rounding errors. If you have an event device with 255 priority levels of
>> (say) 256 levels available in the API, which two levels are the same
>> priority?
> 
> Yes, round etc. will be an issue in cases of non-powers-of 2.
> However, I think we do need to clarify this behaviour, so I'm open to
> alternative suggestions as to how update this.
> 

In retrospect, maybe it would have been better to just express the 
number of priority levels an event device supported, only allow [0, 
max_levels - 1] in the prio field, and leave it to the app to do the 
conversion/normalization.

Maybe a new <rte_eventdev.h> helper macro would at least suggest to the 
PMD driver implementer and the application designer how this 
normalization should work. Something like the above, but where 
NUM_MY_LEVELS is an input parameter. Would result in an integer division 
though, so shouldn't be used in the fast path.


* Re: [PATCH v2 01/11] eventdev: improve doxygen introduction text
  2024-01-23  8:57     ` Mattias Rönnblom
  2024-01-23  9:06       ` Bruce Richardson
@ 2024-01-31 13:45       ` Bruce Richardson
  1 sibling, 0 replies; 123+ messages in thread
From: Bruce Richardson @ 2024-01-31 13:45 UTC (permalink / raw)
  To: Mattias Rönnblom
  Cc: dev, jerinj, mattias.ronnblom, abdullah.sevincer, sachin.saxena,
	hemant.agrawal, pbhagavatula, pravin.pathak

On Tue, Jan 23, 2024 at 09:57:58AM +0100, Mattias Rönnblom wrote:
> On 2024-01-19 18:43, Bruce Richardson wrote:
> > Make some textual improvements to the introduction to eventdev and event
> > devices in the eventdev header file. This text appears in the doxygen
> > output for the header file, and introduces the key concepts, for
> > example: events, event devices, queues, ports and scheduling.
> > 
> 
> Great stuff, Bruce.
> 
> > This patch makes the following improvements:
> > * small textual fixups, e.g. correcting use of singular/plural
> > * rewrites of some sentences to improve clarity
> > * using doxygen markdown to split the whole large block up into
> >    sections, thereby making it easier to read.
> > 
> > No large-scale changes are made, and blocks are not reordered
> > 
> > Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
> > ---
> >   lib/eventdev/rte_eventdev.h | 112 +++++++++++++++++++++---------------
> >   1 file changed, 66 insertions(+), 46 deletions(-)
> > 
> > diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
> > index ec9b02455d..a36c89c7a4 100644
> > --- a/lib/eventdev/rte_eventdev.h
> > +++ b/lib/eventdev/rte_eventdev.h
> > @@ -12,12 +12,13 @@
> >    * @file
> >    *
> >    * RTE Event Device API
> > + * ====================
> >    *
> >    * In a polling model, lcores poll ethdev ports and associated rx queues
> 
> "In a polling model, lcores pick up packets from Ethdev ports and associated
> RX queues, runs the processing to completion, and enqueues the completed
> packets to a TX queue. NIC-level receive-side scaling (RSS) may be used to
> balance the load across multiple CPU cores."
> 
> I thought it might be worth to be a little more verbose on what is the
> reference model Eventdev is compared to. Maybe you can add "traditional" or
> "archetypal", or "simple" as a prefix to the "polling model". (I think I
> would call this a "simple run-to-completion model" rather than "polling
> model".)
> 
> "By contrast, in Eventdev, ingressing* packets are fed into an event device,
> which schedules packets across available lcores, in accordance to its
> configuration. This event-driven programming model offers applications
> automatic multicore scaling, dynamic load balancing, pipelining, packet
> order maintenance, synchronization, and quality of service."
> 
> * Is this a word?
> 
Ack, taking these suggestions with minor tweaks. Changed "ingressing" to
"incoming", which should be clear enough and is definitely a word.

> > - * directly to look for packet. In an event driven model, by contrast, lcores
> > - * call the scheduler that selects packets for them based on programmer
> > - * specified criteria. Eventdev library adds support for event driven
> > - * programming model, which offer applications automatic multicore scaling,
> > + * directly to look for packets. In an event driven model, in contrast, lcores
> > + * call a scheduler that selects packets for them based on programmer
> > + * specified criteria. The eventdev library adds support for the event driven
> > + * programming model, which offers applications automatic multicore scaling,
> >    * dynamic load balancing, pipelining, packet ingress order maintenance and
> >    * synchronization services to simplify application packet processing.
> >    *
> > @@ -25,12 +26,15 @@
> >    *
> >    * - The application-oriented Event API that includes functions to setup
> >    *   an event device (configure it, setup its queues, ports and start it), to
> > - *   establish the link between queues to port and to receive events, and so on.
> > + *   establish the links between queues and ports to receive events, and so on.
> >    *
> >    * - The driver-oriented Event API that exports a function allowing
> > - *   an event poll Mode Driver (PMD) to simultaneously register itself as
> > + *   an event poll Mode Driver (PMD) to register itself as
> >    *   an event device driver.
> >    *
> > + * Application-oriented Event API
> > + * ------------------------------
> > + *
> >    * Event device components:
> >    *
> >    *                     +-----------------+
> > @@ -75,27 +79,33 @@
> >    *            |                                                           |
> >    *            +-----------------------------------------------------------+
> >    *
> > - * Event device: A hardware or software-based event scheduler.
> > + * **Event device**: A hardware or software-based event scheduler.
> >    *
> > - * Event: A unit of scheduling that encapsulates a packet or other datatype
> > - * like SW generated event from the CPU, Crypto work completion notification,
> > - * Timer expiry event notification etc as well as metadata.
> > - * The metadata includes flow ID, scheduling type, event priority, event_type,
> > + * **Event**: A unit of scheduling that encapsulates a packet or other datatype,
> 
> "Event: Represents an item of work and is the smallest unit of scheduling.
> An event carries metadata, such as queue ID, scheduling type, and event
> priority, and data such as one or more packets or other kinds of buffers.
> Examples of events are a software-generated item of work originating from a
> lcore carrying a packet to be processed, a crypto work completion
> notification and a timer expiry notification."
> 
> I've found "work scheduler" as helpful term describing what role an event
> device serve in the system, and thus an event represent an item of work.
> "Event" and "Event device" are also good names, but lead some people to
> think libevent or event loop, which is not exactly right.
> 

Ack.

> > + * such as: SW generated event from the CPU, crypto work completion notification,
> > + * timer expiry event notification etc., as well as metadata about the packet or data.
> > + * The metadata includes a flow ID (if any), scheduling type, event priority, event_type,
> >    * sub_event_type etc.
> >    *
> > - * Event queue: A queue containing events that are scheduled by the event dev.
> > + * **Event queue**: A queue containing events that are scheduled by the event device.
> >    * An event queue contains events of different flows associated with scheduling
> >    * types, such as atomic, ordered, or parallel.
> > + * Each event given to an eventdev must have a valid event queue id field in the metadata,
> "eventdev" -> "event device"
> 
> > + * to specify on which event queue in the device the event must be placed,
> > + * for later scheduling to a core.
> 
> Events aren't nessarily scheduled to cores, so remove the last part.
> 
> >    *
> > - * Event port: An application's interface into the event dev for enqueue and
> > + * **Event port**: An application's interface into the event dev for enqueue and
> >    * dequeue operations. Each event port can be linked with one or more
> >    * event queues for dequeue operations.
> > - *
> > - * By default, all the functions of the Event Device API exported by a PMD
> > - * are lock-free functions which assume to not be invoked in parallel on
> > - * different logical cores to work on the same target object. For instance,
> > - * the dequeue function of a PMD cannot be invoked in parallel on two logical
> > - * cores to operates on same  event port. Of course, this function
> > + * Each port should be associated with a single core (enqueue and dequeue is not thread-safe).
> 
> Should, or must?
> 
> Either it's a MT safety issue, and any lcore can access the port with the
> proper serialization, or it's something where the lcore id used to store
> state between invocations, or some other mechanism that prevents a port from
> being used by multiple threads (lcore or not).
> 

Rewording this to start with the fact that enqueue and dequeue functions are
not "thread-safe", and then stating that the expected configuration is that
each port is assigned to an lcore, otherwise sync mechanisms are needed.
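
That expected configuration can be sketched roughly as below (a sketch only, not part of the patch: the `stop` flag, the 1:1 lcore-to-port mapping, and the burst size are application conventions I am assuming, not anything the eventdev API mandates):

```c
#include <rte_eventdev.h>
#include <rte_common.h>
#include <stdbool.h>

static volatile bool stop;

/* Sketch: each worker lcore owns exactly one event port, so the
 * non-thread-safe enqueue/dequeue functions need no extra locking. */
static int
worker(void *arg)
{
	const uint8_t dev_id = 0;
	/* port id handed to this lcore at setup time by the application */
	const uint8_t port_id = *(const uint8_t *)arg;
	struct rte_event ev[32];

	while (!stop) {
		uint16_t n = rte_event_dequeue_burst(dev_id, port_id,
				ev, RTE_DIM(ev), 0 /* timeout ticks */);
		for (uint16_t i = 0; i < n; i++) {
			/* ... process ev[i] ... */
			ev[i].op = RTE_EVENT_OP_FORWARD;
		}
		if (n > 0)
			rte_event_enqueue_burst(dev_id, port_id, ev, n);
	}
	return 0;
}
```

If two threads must share one port, the application has to serialize access itself, e.g. with a lock around the enqueue/dequeue calls.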

> > + * To schedule events to a core, the event device will schedule them to the event port(s)
> > + * being polled by that core.
> 
> "core" -> "lcore" ?
> 
> > + *
> > + * *NOTE*: By default, all the functions of the Event Device API exported by a PMD
> > + * are lock-free functions, which must not be invoked on the same object in parallel on
> > + * different logical cores.
> 
> This is a one-sentence contradiction. The term "lock free" implies a data
> structure which is MT safe, achieving this goal without the use of locks. A
> lock-free object thus *may* be called from different threads, including
> different lcore threads.
> 

Changed lock-free to non-thread-safe.

> Ports are not MT safe, and thus one port should not be acted upon by more
> than one thread (either in parallel, or throughout the lifetime of the event
> device/port; see above).
> 
> The event device is MT safe, provided the different parallel callers use
> different ports.
> 
> A more subtle question and one with a less obvious answer is if the caller
> of also *must* be an EAL thread, or if a registered non-EAL thread or even
> an unregistered non-EAL thread may call the "fast path" functions (enqueue,
> dequeue etc).
> 
> For EAL threads, the event device implementation may safely use
> non-preemption safe constructs (like the default ring variant and spin
> locks).
> 
> If the caller is a registered non-EAL thread or an EAL thread, the lcore id
> may be used to index various data structures.
> 
> If "lcore id"-less threads may call the fast path APIs, what are the MT
> safety guarantees in that case? Like rte_random.h, or something else.
> 

I don't know the answer to this. I believe right now that most/all eventdev
functions are callable on non-EAL threads, but I'm not sure we want to
guarantee that - e.g. some drivers may require registered threads. I think
we need to resolve and document this, but I'm not going to do so in this
patch(set).

> > + * For instance, the dequeue function of a PMD cannot be invoked in parallel on two logical
> > + * cores to operate on same  event port. Of course, this function
> >    * can be invoked in parallel by different logical cores on different ports.
> >    * It is the responsibility of the upper level application to enforce this rule.
> >    *
> > @@ -107,22 +117,19 @@
> >    *
> >    * Event devices are dynamically registered during the PCI/SoC device probing
> >    * phase performed at EAL initialization time.
> > - * When an Event device is being probed, a *rte_event_dev* structure and
> > - * a new device identifier are allocated for that device. Then, the
> > - * event_dev_init() function supplied by the Event driver matching the probed
> > - * device is invoked to properly initialize the device.
> > + * When an Event device is being probed, an *rte_event_dev* structure is allocated
> > + * for it and the event_dev_init() function supplied by the Event driver
> > + * is invoked to properly initialize the device.
> >    *
> > - * The role of the device init function consists of resetting the hardware or
> > - * software event driver implementations.
> > + * The role of the device init function is to reset the device hardware or
> > + * to initialize the software event driver implementation.
> >    *
> > - * If the device init operation is successful, the correspondence between
> > - * the device identifier assigned to the new device and its associated
> > - * *rte_event_dev* structure is effectively registered.
> > - * Otherwise, both the *rte_event_dev* structure and the device identifier are
> > - * freed.
> > + * If the device init operation is successful, the device is assigned a device
> > + * id (dev_id) for application use.
> > + * Otherwise, the *rte_event_dev* structure is freed.
> >    *
> >    * The functions exported by the application Event API to setup a device
> > - * designated by its device identifier must be invoked in the following order:
> > + * must be invoked in the following order:
> >    *     - rte_event_dev_configure()
> >    *     - rte_event_queue_setup()
> >    *     - rte_event_port_setup()
> > @@ -130,10 +137,15 @@
> >    *     - rte_event_dev_start()
> >    *
> >    * Then, the application can invoke, in any order, the functions
> > - * exported by the Event API to schedule events, dequeue events, enqueue events,
> > - * change event queue(s) to event port [un]link establishment and so on.
> > - *
> > - * Application may use rte_event_[queue/port]_default_conf_get() to get the
> > + * exported by the Event API to dequeue events, enqueue events,
> > + * and link and unlink event queue(s) to event ports.
> > + *
> > + * Before configuring a device, an application should call rte_event_dev_info_get()
> > + * to determine the capabilities of the event device, and any queue or port
> > + * limits of that device. The parameters set in the various device configuration
> > + * structures may need to be adjusted based on the max values provided in the
> > + * device information structure returned from the info_get API.
> > + * An application may use rte_event_[queue/port]_default_conf_get() to get the
> >    * default configuration to set up an event queue or event port by
> >    * overriding few default values.
> >    *
> > @@ -145,7 +157,11 @@
> >    * when the device is stopped.
> >    *
> >    * Finally, an application can close an Event device by invoking the
> > - * rte_event_dev_close() function.
> > + * rte_event_dev_close() function. Once closed, a device cannot be
> > + * reconfigured or restarted.
> > + *
> > + * Driver-Oriented Event API
> > + * -------------------------
> >    *
> >    * Each function of the application Event API invokes a specific function
> >    * of the PMD that controls the target device designated by its device
> > @@ -164,10 +180,13 @@
> >    * supplied in the *event_dev_ops* structure of the *rte_event_dev* structure.
> >    *
> >    * For performance reasons, the address of the fast-path functions of the
> > - * Event driver is not contained in the *event_dev_ops* structure.
> > + * Event driver are not contained in the *event_dev_ops* structure.
> 
> It's one address, so it should remain "is"?

I think it should be "addresses of the functions", so adjusting that and
keeping it as "are". Next sentence already uses "they" in the plural too,
so then everything aligns nicely.

> 
> >    * Instead, they are directly stored at the beginning of the *rte_event_dev*
> >    * structure to avoid an extra indirect memory access during their invocation.
> >    *
> > + * Event Enqueue, Dequeue and Scheduling
> > + * -------------------------------------
> > + *
> >    * RTE event device drivers do not use interrupts for enqueue or dequeue
> >    * operation. Instead, Event drivers export Poll-Mode enqueue and dequeue
> >    * functions to applications.
> > @@ -179,21 +198,22 @@
> >    * crypto work completion notification etc
> >    *
> >    * The *dequeue* operation gets one or more events from the event ports.
> > - * The application process the events and send to downstream event queue through
> > - * rte_event_enqueue_burst() if it is an intermediate stage of event processing,
> > - * on the final stage, the application may use Tx adapter API for maintaining
> > - * the ingress order and then send the packet/event on the wire.
> > + * The application processes the events and sends them to a downstream event queue through
> > + * rte_event_enqueue_burst(), if it is an intermediate stage of event processing.
> > + * On the final stage of processing, the application may use the Tx adapter API for maintaining
> > + * the event ingress order while sending the packet/event on the wire via NIC Tx.
> >    *
> >    * The point at which events are scheduled to ports depends on the device.
> >    * For hardware devices, scheduling occurs asynchronously without any software
> >    * intervention. Software schedulers can either be distributed
> >    * (each worker thread schedules events to its own port) or centralized
> >    * (a dedicated thread schedules to all ports). Distributed software schedulers
> > - * perform the scheduling in rte_event_dequeue_burst(), whereas centralized
> > - * scheduler logic need a dedicated service core for scheduling.
> > - * The RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED capability flag is not set
> > - * indicates the device is centralized and thus needs a dedicated scheduling
> > - * thread that repeatedly calls software specific scheduling function.
> > + * perform the scheduling inside the enqueue or dequeue functions, whereas centralized
> > + * software schedulers need a dedicated service core for scheduling.
> > + * The absence of the RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED capability flag
> > + * indicates that the device is centralized and thus needs a dedicated scheduling
> > + * thread, generally a service core,
> > + * that repeatedly calls the software specific scheduling function.
> 
> In the SW case, what you have is a service that needs to be mapped to a
> service lcore.
> 
> "generally a RTE service that should be mapped to one or more service
> lcores"
> 
Ack, will use that rewording.
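
For the centralized-scheduler case, that service/lcore mapping might look like the following (a sketch; error handling omitted, and `SCHED_LCORE` is an illustrative placeholder for whichever lcore the application dedicates to scheduling):

```c
uint32_t service_id;

/* Query the service implementing the device's scheduling function;
 * returns 0 only if the device actually needs a scheduling service
 * (i.e. RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED is not set). */
if (rte_event_dev_service_id_get(dev_id, &service_id) == 0) {
	/* dedicate a service lcore and map the scheduler service to it */
	rte_service_lcore_add(SCHED_LCORE);
	rte_service_map_lcore_set(service_id, SCHED_LCORE, 1);
	rte_service_runstate_set(service_id, 1);
	rte_service_lcore_start(SCHED_LCORE);
}
```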


* Re: [PATCH v2 03/11] eventdev: update documentation on device capability flags
  2024-01-23  9:18     ` Mattias Rönnblom
  2024-01-23  9:34       ` Bruce Richardson
@ 2024-01-31 14:09       ` Bruce Richardson
  2024-02-02  8:58         ` Mattias Rönnblom
  1 sibling, 1 reply; 123+ messages in thread
From: Bruce Richardson @ 2024-01-31 14:09 UTC (permalink / raw)
  To: Mattias Rönnblom
  Cc: dev, jerinj, mattias.ronnblom, abdullah.sevincer, sachin.saxena,
	hemant.agrawal, pbhagavatula, pravin.pathak

On Tue, Jan 23, 2024 at 10:18:53AM +0100, Mattias Rönnblom wrote:
> On 2024-01-19 18:43, Bruce Richardson wrote:
> > Update the device capability docs, to:
> > 
> > * include more cross-references
> > * split longer text into paragraphs, in most cases with each flag having
> >    a single-line summary at the start of the doc block
> > * general comment rewording and clarification as appropriate
> > 
> > Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
> > ---
> >   lib/eventdev/rte_eventdev.h | 130 ++++++++++++++++++++++++++----------
> >   1 file changed, 93 insertions(+), 37 deletions(-)
> > 
<snip>
> >    * If this capability is not set, the queue only supports events of the
> > - *  *RTE_SCHED_TYPE_* type that it was created with.
> > + * *RTE_SCHED_TYPE_* type that it was created with.
> > + * Any events of other types scheduled to the queue will handled in an
> > + * implementation-dependent manner. They may be dropped by the
> > + * event device, or enqueued with the scheduling type adjusted to the
> > + * correct/supported value.
> 
> Having the application setting sched_type when it was already set on a the
> level of the queue never made sense to me.
> 
> I can't see any reasons why this field shouldn't be ignored by the event
> device on non-RTE_EVENT_QUEUE_CFG_ALL_TYPES queues.
> 
> If the behavior is indeed undefined, I think it's better to just say
> "undefined" rather than the above speculation.
> 

Updating in v3 to just say it's undefined.

> >    *
> > - * @see RTE_SCHED_TYPE_* values
<snip>
> >   #define RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR (1ULL << 11)
> >   /**< Event device is capable of changing the queue attributes at runtime i.e
> > - * after rte_event_queue_setup() or rte_event_start() call sequence. If this
> > - * flag is not set, eventdev queue attributes can only be configured during
> > + * after rte_event_queue_setup() or rte_event_dev_start() call sequence.
> > + *
> > + * If this flag is not set, eventdev queue attributes can only be configured during
> >    * rte_event_queue_setup().
> 
> "event queue" or just "queue".
> 
Ack.

> > + *
> > + * @see rte_event_queue_setup
> >    */
> >   #define RTE_EVENT_DEV_CAP_PROFILE_LINK (1ULL << 12)
> > -/**< Event device is capable of supporting multiple link profiles per event port
> > - * i.e., the value of `rte_event_dev_info::max_profiles_per_port` is greater
> > - * than one.
> > +/**< Event device is capable of supporting multiple link profiles per event port.
> > + *
> > + *
> > + * When set, the value of `rte_event_dev_info::max_profiles_per_port` is greater
> > + * than one, and multiple profiles may be configured and then switched at runtime.
> > + * If not set, only a single profile may be configured, which may itself be
> > + * runtime adjustable (if @ref RTE_EVENT_DEV_CAP_RUNTIME_PORT_LINK is set).
> > + *
> > + * @see rte_event_port_profile_links_set rte_event_port_profile_links_get
> > + * @see rte_event_port_profile_switch
> > + * @see RTE_EVENT_DEV_CAP_RUNTIME_PORT_LINK
> >    */
> >   /* Event device priority levels */
> >   #define RTE_EVENT_DEV_PRIORITY_HIGHEST   0
> > -/**< Highest priority expressed across eventdev subsystem
> > +/**< Highest priority expressed across eventdev subsystem.
> 
> "The highest priority an event device may support."
> or
> "The highest priority any event device may support."
> 
> Maybe this is a further improvement, beyond punctuation? "across eventdev
> subsystem" sounds awkward.
> 

Still not very clear. Talking about device support implies that it's
possible some devices may not support it. How about:

"highest priority level for events and queues".



* Re: [PATCH v2 04/11] eventdev: cleanup doxygen comments on info structure
  2024-01-24 11:51         ` Mattias Rönnblom
@ 2024-01-31 14:37           ` Bruce Richardson
  2024-02-02  9:24             ` Mattias Rönnblom
  0 siblings, 1 reply; 123+ messages in thread
From: Bruce Richardson @ 2024-01-31 14:37 UTC (permalink / raw)
  To: Mattias Rönnblom
  Cc: dev, jerinj, mattias.ronnblom, abdullah.sevincer, sachin.saxena,
	hemant.agrawal, pbhagavatula, pravin.pathak

On Wed, Jan 24, 2024 at 12:51:03PM +0100, Mattias Rönnblom wrote:
> On 2024-01-23 10:43, Bruce Richardson wrote:
> > On Tue, Jan 23, 2024 at 10:35:02AM +0100, Mattias Rönnblom wrote:
> > > On 2024-01-19 18:43, Bruce Richardson wrote:
> > > > Some small rewording changes to the doxygen comments on struct
> > > > rte_event_dev_info.
> > > > 
> > > > Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
> > > > ---
> > > >    lib/eventdev/rte_eventdev.h | 46 ++++++++++++++++++++-----------------
> > > >    1 file changed, 25 insertions(+), 21 deletions(-)
> > > > 
> > > > diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
> > > > index 57a2791946..872f241df2 100644
> > > > --- a/lib/eventdev/rte_eventdev.h
> > > > +++ b/lib/eventdev/rte_eventdev.h
> > > > @@ -482,54 +482,58 @@ struct rte_event_dev_info {
> > > >    	const char *driver_name;	/**< Event driver name */
> > > >    	struct rte_device *dev;	/**< Device information */
> > > >    	uint32_t min_dequeue_timeout_ns;
> > > > -	/**< Minimum supported global dequeue timeout(ns) by this device */
> > > > +	/**< Minimum global dequeue timeout(ns) supported by this device */
> > > 
> > > Are we missing a bunch of "." here and in the other fields?
> > > 
> > > >    	uint32_t max_dequeue_timeout_ns;
> > > > -	/**< Maximum supported global dequeue timeout(ns) by this device */
> > > > +	/**< Maximum global dequeue timeout(ns) supported by this device */
> > > >    	uint32_t dequeue_timeout_ns;
> > > >    	/**< Configured global dequeue timeout(ns) for this device */
> > > >    	uint8_t max_event_queues;
> > > > -	/**< Maximum event_queues supported by this device */
> > > > +	/**< Maximum event queues supported by this device */
> > > >    	uint32_t max_event_queue_flows;
> > > > -	/**< Maximum supported flows in an event queue by this device*/
> > > > +	/**< Maximum number of flows within an event queue supported by this device*/
> > > >    	uint8_t max_event_queue_priority_levels;
> > > >    	/**< Maximum number of event queue priority levels by this device.
> > > > -	 * Valid when the device has RTE_EVENT_DEV_CAP_QUEUE_QOS capability
> > > > +	 * Valid when the device has RTE_EVENT_DEV_CAP_QUEUE_QOS capability.
> > > > +	 * The priority levels are evenly distributed between
> > > > +	 * @ref RTE_EVENT_DEV_PRIORITY_HIGHEST and @ref RTE_EVENT_DEV_PRIORITY_LOWEST.
> > > 
> > > This is a change of the API, in the sense it's defining something previously
> > > left undefined?
> > > 
> > 
> > Well, undefined is pretty useless for app writers, no?
> > However, agreed that the range between HIGHEST and LOWEST is an assumption
> > on my part, chosen because it matches what happens to the event priorities
> > which are documented in event struct as "The implementation shall normalize
> >   the requested priority to supported priority value" - which, while better
> > than nothing, does technically leave the details of how normalization
> > occurs up to the implementation.
> > 
> > > If you need 6 different priority levels in an app, how do you go about
> > > making sure you find the correct (distinct) Eventdev levels on any event
> > > device supporting >= 6 levels?
> > > 
> > > #define NUM_MY_LEVELS 6
> > > 
> > > #define MY_LEVEL_TO_EVENTDEV_LEVEL(my_level) (((my_level) *
> > > (RTE_EVENT_DEV_PRIORITY_HIGHEST-RTE_EVENT_DEV_PRIORTY_LOWEST) /
> > > NUM_MY_LEVELS)
> > > 
> > > This way? One would worry a bit exactly what "evenly" means, in terms of
> > > rounding errors. If you have an event device with 255 priority levels of
> > > (say) 256 levels available in the API, which two levels are the same
> > > priority?
> > 
> > Yes, round etc. will be an issue in cases of non-powers-of 2.
> > However, I think we do need to clarify this behaviour, so I'm open to
> > alternative suggestions as to how update this.
> > 
> 
> In retrospect, maybe it would have been better to just express the number of
> priority levels an event device supported, only allow [0, max_levels - 1] in
> the prio field, and leave it to the app to do the conversion/normalization.
>

Yes, in many ways that would be better.
 
> Maybe a new <rte_eventdev.h> helper macro would at least suggest to the PMD
> driver implementer and the application designer how this normalization
> should work. Something like the above, but where NUM_MY_LEVELS is an input
> parameter. Would result in an integer division though, so shouldn't be used
> in the fast path.

I think it's actually ok now, having the drivers do the work, since each
driver can choose the optimal method. For those having a power-of-2 number of
priorities, just a shift op works best.

The key thing for the documentation here, to my mind, is to make it clear
that though the number of priorities is N, you still specify the relative
priorities in the range of 0-255. This is documented in the queue
configuration, so, while we could leave it unmentioned here, I think for
clarity it should be called out. I'm going to reword slightly as:

 * The implementation shall normalize priority values specified between
 * @ref RTE_EVENT_DEV_PRIORITY_HIGHEST and @ref RTE_EVENT_DEV_PRIORITY_LOWEST
 * to map them internally to this range of priorities.
 *
 * @see rte_event_queue_conf.priority

This way the wording in the two places is similar.
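
As a sketch of how a driver might do that normalization internally (the helper names and the 8-level example are hypothetical, not eventdev symbols; only the 0..255 priority range and the "0 is highest" convention come from the API):

```c
#include <stdint.h>

/* Priorities as in rte_eventdev.h: 0 is highest, 255 is lowest. */
#define PRIO_LOWEST 255

/* General case: map a 0..255 priority onto 0..levels-1.
 * Uses an integer division, so best kept out of the fast path. */
static inline uint8_t
normalize_prio(uint8_t prio, unsigned int levels)
{
	return (uint8_t)(((unsigned int)prio * levels) / (PRIO_LOWEST + 1));
}

/* Power-of-two special case: a plain shift, cheap enough for the
 * fast path. Example for 8 internal levels: 256/8 = 32 = 1 << 5. */
static inline uint8_t
normalize_prio_pow2_8(uint8_t prio)
{
	return prio >> 5;
}
```

Note the rounding behaviour for non-power-of-2 level counts: adjacent input priorities can land on the same internal level, which is exactly the ambiguity discussed above.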

/Bruce


* Re: [PATCH v2 06/11] eventdev: improve doxygen comments on configure struct
  2024-01-23  9:46     ` Mattias Rönnblom
@ 2024-01-31 16:15       ` Bruce Richardson
  0 siblings, 0 replies; 123+ messages in thread
From: Bruce Richardson @ 2024-01-31 16:15 UTC (permalink / raw)
  To: Mattias Rönnblom
  Cc: dev, jerinj, mattias.ronnblom, abdullah.sevincer, sachin.saxena,
	hemant.agrawal, pbhagavatula, pravin.pathak

On Tue, Jan 23, 2024 at 10:46:00AM +0100, Mattias Rönnblom wrote:
> On 2024-01-19 18:43, Bruce Richardson wrote:
> > General rewording and cleanup on the rte_event_dev_config structure.
> > Improved the wording of some sentences and created linked
> > cross-references out of the existing references to the dev_info
> > structure.
> > 
> > Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
> > ---
> >   lib/eventdev/rte_eventdev.h | 47 +++++++++++++++++++------------------
> >   1 file changed, 24 insertions(+), 23 deletions(-)
> > 
> > diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
> > index c57c93a22e..4139ccb982 100644
> > --- a/lib/eventdev/rte_eventdev.h
> > +++ b/lib/eventdev/rte_eventdev.h
> > @@ -599,9 +599,9 @@ rte_event_dev_attr_get(uint8_t dev_id, uint32_t attr_id,
> >   struct rte_event_dev_config {
> >   	uint32_t dequeue_timeout_ns;
> >   	/**< rte_event_dequeue_burst() timeout on this device.
> > -	 * This value should be in the range of *min_dequeue_timeout_ns* and
> > -	 * *max_dequeue_timeout_ns* which previously provided in
> > -	 * rte_event_dev_info_get()
> > +	 * This value should be in the range of @ref rte_event_dev_info.min_dequeue_timeout_ns and
> > +	 * @ref rte_event_dev_info.max_dequeue_timeout_ns returned by
> > +	 * @ref rte_event_dev_info_get()
> >   	 * The value 0 is allowed, in which case, default dequeue timeout used.
> >   	 * @see RTE_EVENT_DEV_CFG_PER_DEQUEUE_TIMEOUT
> >   	 */
> > @@ -609,40 +609,41 @@ struct rte_event_dev_config {
> >   	/**< In a *closed system* this field is the limit on maximum number of
> >   	 * events that can be inflight in the eventdev at a given time. The
> >   	 * limit is required to ensure that the finite space in a closed system
> > -	 * is not overwhelmed. The value cannot exceed the *max_num_events*
> > -	 * as provided by rte_event_dev_info_get().
> > +	 * is not overwhelmed.
> 
> "overwhelmed" -> "exhausted"
> 
> > +	 * Once the limit has been reached, any enqueues of NEW events to the
> > +	 * system will fail.
> 
> While this is true, it's also a bit misleading. RTE_EVENT_OP_NEW events
> being backpressured is controlled by new_event_threshold on the level of the
> port.
> 

Right. Will remove this statement, and instead add a cross-reference to the
new_event_threshold setting at the end of the comment.
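
The relationship between the two limits can be sketched as a configuration fragment (the numeric values are purely illustrative):

```c
/* Device-wide cap on in-flight events in a closed system;
 * must not exceed rte_event_dev_info.max_num_events. */
struct rte_event_dev_config dev_cfg = {
	.nb_events_limit = 4096,
	/* ... other fields ... */
};

/* Per-port back-pressure point: enqueues of RTE_EVENT_OP_NEW events
 * via this port fail once in-flight events reach this threshold. */
struct rte_event_port_conf port_cfg = {
	.new_event_threshold = 1024,
	/* ... other fields ... */
};
```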

/Bruce


* Re: [PATCH v2 07/11] eventdev: fix documentation for counting single-link ports
  2024-01-23  9:56       ` Bruce Richardson
@ 2024-01-31 16:18         ` Bruce Richardson
  0 siblings, 0 replies; 123+ messages in thread
From: Bruce Richardson @ 2024-01-31 16:18 UTC (permalink / raw)
  To: Mattias Rönnblom
  Cc: dev, jerinj, mattias.ronnblom, abdullah.sevincer, sachin.saxena,
	hemant.agrawal, pbhagavatula, pravin.pathak, stable

On Tue, Jan 23, 2024 at 09:56:23AM +0000, Bruce Richardson wrote:
> On Tue, Jan 23, 2024 at 10:48:47AM +0100, Mattias Rönnblom wrote:
> > On 2024-01-19 18:43, Bruce Richardson wrote:
> > > The documentation of how single-link port-queue pairs were counted in
> > > the rte_event_dev_config structure did not match the actual
> > > implementation and, if following the documentation, certain valid
> > 
> > What "documentation" and what "implementation" are you talking about here?
> > 
> > I'm confused. An DLB2 fix in the form of Eventdev API documentation update.
> > 
> 
> The documentation in the header file did not match the implementation in
> the rte_eventdev.c file.
> 
> The current documentation states[1] that "This value cannot exceed the
> max_event_queues which previously provided in rte_event_dev_info_get()",
> but if you check the implementation in the C file[2], it actually checks
> the passed value against 
> "info.max_event_queues + info.max_single_link_event_port_queue_pairs".
> 
> 
> [1] https://doc.dpdk.org/api/structrte__event__dev__config.html#a703c026d74436b05fc656652324101e4
> [2] https://git.dpdk.org/dpdk/tree/lib/eventdev/rte_eventdev.c#n402
>

Dropping this as a separate patch for v3, and just including the necessary
doc corrections in the previous patches for the info and config structs.
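
For reference, the check in the implementation at [2] can be sketched in plain C as follows (struct and function names here are illustrative stand-ins, not the real internal symbols):

```c
#include <stdint.h>

/* Illustrative subset of rte_event_dev_info */
struct info_limits {
	uint8_t max_event_queues;
	uint8_t max_single_link_event_port_queue_pairs;
};

/* nb_event_queues may count single-link queue-port pairs on top of
 * max_event_queues -- matching the implementation rather than the
 * pre-fix documentation. */
static int
nb_event_queues_ok(const struct info_limits *limits,
		uint16_t nb_event_queues)
{
	uint16_t max = (uint16_t)limits->max_event_queues +
			limits->max_single_link_event_port_queue_pairs;
	return nb_event_queues > 0 && nb_event_queues <= max;
}
```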

/Bruce


* Re: [PATCH v2 10/11] eventdev: RFC clarify comments on scheduling types
  2024-01-23 16:19     ` Mattias Rönnblom
  2024-01-24 11:21       ` Bruce Richardson
@ 2024-01-31 17:54       ` Bruce Richardson
  1 sibling, 0 replies; 123+ messages in thread
From: Bruce Richardson @ 2024-01-31 17:54 UTC (permalink / raw)
  To: Mattias Rönnblom
  Cc: dev, jerinj, mattias.ronnblom, abdullah.sevincer, sachin.saxena,
	hemant.agrawal, pbhagavatula, pravin.pathak

On Tue, Jan 23, 2024 at 05:19:18PM +0100, Mattias Rönnblom wrote:
> On 2024-01-19 18:43, Bruce Richardson wrote:
> > The description of ordered and atomic scheduling given in the eventdev
> > doxygen documentation was not always clear. Try and simplify this so
> > that it is clearer for the end-user of the application
> > 
> > Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
> > ---
> > 
> > NOTE TO REVIEWERS:
> > I've updated this based on my understanding of what these scheduling
> > types are meant to do. It matches my understanding of the support
> > offered by our Intel DLB2 driver, as well as the SW eventdev, and I
> > believe the DSW eventdev too. If it does not match the behaviour of
> > other eventdevs, let's have a discussion to see if we can reach a good
> > definition of the behaviour that is common.
> > ---
> >   lib/eventdev/rte_eventdev.h | 47 ++++++++++++++++++++-----------------
> >   1 file changed, 25 insertions(+), 22 deletions(-)
> > 
> > diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
> > index 2c6576e921..cb13602ffb 100644
> > --- a/lib/eventdev/rte_eventdev.h
> > +++ b/lib/eventdev/rte_eventdev.h
> > @@ -1313,26 +1313,24 @@ struct rte_event_vector {
> >   #define RTE_SCHED_TYPE_ORDERED          0
> >   /**< Ordered scheduling
> >    *
> > - * Events from an ordered flow of an event queue can be scheduled to multiple
> > + * Events from an ordered event queue can be scheduled to multiple
> 
> What is the rationale for this change?
> 
> An implementation that impose a total order on all events on a particular
> ordered queue will still adhere to the current, more relaxed, per-flow
> ordering semantics.
> 
> An application wanting a total order would just set the flow id to 0 on all
> events destined that queue, and it would work on all event devices.
> 
> Why don't you just put a note in the DLB driver saying "btw it's total
> order", so any application where per-flow ordering is crucial for
> performance (i.e., where the potentially needless head-of-line blocking is
> an issue) can use multiple queues when running with the DLB.
> 
> In the API as-written, the app is free to express more relaxed ordering
> requirements (i.e., to have multiple flows) and it's up to the event device
> to figure out if it's in a position where it can translate this to lower
> latency.
> 
> >    * ports for concurrent processing while maintaining the original event order.
> 
> Maybe it's worth mentioning what is the original event order. "(i.e., the
> order in which the events were enqueued to the queue)". Especially since one
> like to specify what ordering guarantees one have of events enqueued to the
> same queue on different ports and by different lcores).
> 
> I don't know where that information should go though, since it's relevant
> for both atomic and ordered-type queues.
> 
> >    * This scheme enables the user to achieve high single flow throughput by
> > - * avoiding SW synchronization for ordering between ports which bound to cores.
> > - *
> > - * The source flow ordering from an event queue is maintained when events are
> > - * enqueued to their destination queue within the same ordered flow context.
> > - * An event port holds the context until application call
> > - * rte_event_dequeue_burst() from the same port, which implicitly releases
> > - * the context.
> > - * User may allow the scheduler to release the context earlier than that
> > - * by invoking rte_event_enqueue_burst() with RTE_EVENT_OP_RELEASE operation.
> > - *
> > - * Events from the source queue appear in their original order when dequeued
> > - * from a destination queue.
> > - * Event ordering is based on the received event(s), but also other
> > - * (newly allocated or stored) events are ordered when enqueued within the same
> > - * ordered context. Events not enqueued (e.g. released or stored) within the
> > - * context are  considered missing from reordering and are skipped at this time
> > - * (but can be ordered again within another context).
> > + * avoiding SW synchronization for ordering between ports which are polled by
> > + * different cores.
> > + *
> > + * As events are scheduled to ports/cores, the original event order from the
> > + * source event queue is recorded internally in the scheduler. As events are
> > + * returned (via FORWARD type enqueue) to the scheduler, the original event
> > + * order is restored before the events are enqueued into their new destination
> > + * queue.
> 
> Delete the first sentence on implementation.
> 
> "As events are re-enqueued to the next queue (with the op field set to
> RTE_EVENT_OP_FORWARD), the event device restores the original event order
> before the events arrive on the destination queue."
> 

I'm reworking this whole section on ordered processing quite extensively
for v3, and hopefully I've taken all your comments into account. I'm
finding it really hard to explain it all simply and clearly, so please
re-review this part when I get the v3 finished and sent!

> > + *
> > + * Any events not forwarded, ie. dropped explicitly via RELEASE or implicitly
> > + * released by the next dequeue from a port, are skipped by the reordering
> > + * stage and do not affect the reordering of returned events.
> > + *
> > + * The ordering behaviour of NEW events with respect to FORWARD events is
> > + * undefined and implementation dependent.
> 
> For some reason I find this a little vague. "NEW and FORWARD events enqueued
> to a queue are not ordered in relation to each other (even if the flow id is
> the same)."
> 
> I think I agree that NEW shouldn't be ordered vis-a-vi FORWARD, but maybe
> one should say that an event device should avoid excessive reordering NEW
> and FORWARD events.
> 
> I think it would also be helpful to address port-to-port ordering guarantees
> (or a lack thereof).
> 
> "Events enqueued on one port are not ordered in relation to events enqueued
> on some other port."
> 
> Or are they? Not in DSW, at least, and I'm not sure I see a use case for
> such a guarantee, but it's a little counter-intuitive to have them
> potentially re-shuffled.
> 
> (This is also relevant for atomic queues.)
> 
> >    *
> >    * @see rte_event_queue_setup(), rte_event_dequeue_burst(), RTE_EVENT_OP_RELEASE
> >    */
> > @@ -1340,18 +1338,23 @@ struct rte_event_vector {
> >   #define RTE_SCHED_TYPE_ATOMIC           1
> >   /**< Atomic scheduling
> >    *
> > - * Events from an atomic flow of an event queue can be scheduled only to a
> > + * Events from an atomic flow, identified by @ref rte_event.flow_id,
> 
> A flow is identified by the combination of queue_id and flow_id, so if you
> reference one you should also reference the other.
> 

This is done in v3. I mention what defines a flow in the comments for
both ordered and atomic.

> > + * of an event queue can be scheduled only to a
> >    * single port at a time. The port is guaranteed to have exclusive (atomic)
> >    * access to the associated flow context, which enables the user to avoid SW
> >    * synchronization. Atomic flows also help to maintain event ordering
> 
> "help" here needs to go, I think. It sounds like a best-effort affair. The
> atomic queue ordering guarantees (or the lack thereof) should be spelled
> out.
> 
> "Event order in an atomic flow is maintained."
> 
> > - * since only one port at a time can process events from a flow of an
> > + * since only one port at a time can process events from each flow of an
> >    * event queue.
> 
> Yes, and *but also since* the event device is not reshuffling events
> enqueued to an atomic queue. And that's more complicated than just something
> that falls out of atomicity, especially if you assume that FORWARD type
> enqueues are not ordered with other FORWARD type enqueues on a different
> port.
> 
> >    *
> > - * The atomic queue synchronization context is dedicated to the port until
> > + * The atomic queue synchronization context for a flow is dedicated to the port until
> 
> What is an "atomic queue synchronization context" (except for something that
> makes for long sentences).
> 
> How about:
> "The atomic flow is locked to the port until /../"
> 
> You could also used the word "bound" instead of "locked".
> 

Going with the term "lock" for v3.

> >    * application call rte_event_dequeue_burst() from the same port,
> >    * which implicitly releases the context. User may allow the scheduler to
> >    * release the context earlier than that by invoking rte_event_enqueue_burst()
> > - * with RTE_EVENT_OP_RELEASE operation.
> > + * with RTE_EVENT_OP_RELEASE operation for each event from that flow. The context
> > + * is only released once the last event from the flow, outstanding on the port,
> > + * is released. So long as there is one event from an atomic flow scheduled to
> > + * a port/core (including any events in the port's dequeue queue, not yet read
> > + * by the application), that port will hold the synchronization context.
> 
> In case you like the "atomic flow locked/bound to port", this part would
> also need updating.
> 
> Maybe here is a good place to add a note on memory ordering and event
> ordering.
> 
> "Any memory stores done as a part of event processing will be globally
> visible before the next event in the same atomic flow is dequeued on a
> different lcore."
> 
> I.e., enqueue includes write barrier before the event can be seen.
> 
> One should probably mentioned a rmb in dequeue as well.
> 
Not adding memory ordering in v3. If necessary we can add it later in
another patch.

/Bruce

^ permalink raw reply	[flat|nested] 123+ messages in thread

* Re: [PATCH v2 11/11] eventdev: RFC clarify docs on event object fields
  2024-01-19 17:43   ` [PATCH v2 11/11] eventdev: RFC clarify docs on event object fields Bruce Richardson
  2024-01-24 11:34     ` Mattias Rönnblom
@ 2024-02-01  9:35     ` Bruce Richardson
  2024-02-01 15:00       ` Jerin Jacob
  1 sibling, 1 reply; 123+ messages in thread
From: Bruce Richardson @ 2024-02-01  9:35 UTC (permalink / raw)
  To: dev, jerinj
  Cc: jerinj, mattias.ronnblom, abdullah.sevincer, sachin.saxena,
	hemant.agrawal, pbhagavatula, pravin.pathak

On Fri, Jan 19, 2024 at 05:43:46PM +0000, Bruce Richardson wrote:
> Clarify the meaning of the NEW, FORWARD and RELEASE event types.
> For the fields in "rte_event" struct, enhance the comments on each to
> clarify the field's use, and whether it is preserved between enqueue and
> dequeue, and it's role, if any, in scheduling.
> 
> Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
> ---
> 
> As with the previous patch, please review this patch to ensure that the
> expected semantics of the various event types and event fields have not
> changed in an unexpected way.
> ---
>  lib/eventdev/rte_eventdev.h | 105 ++++++++++++++++++++++++++----------
>  1 file changed, 77 insertions(+), 28 deletions(-)
> 
<snip>

>  #define RTE_EVENT_OP_RELEASE            2
>  /**< Release the flow context associated with the schedule type.
>   *
> - * If current flow's scheduler type method is *RTE_SCHED_TYPE_ATOMIC*
> + * If current flow's scheduler type method is @ref RTE_SCHED_TYPE_ATOMIC
>   * then this function hints the scheduler that the user has completed critical
>   * section processing in the current atomic context.
>   * The scheduler is now allowed to schedule events from the same flow from
> @@ -1442,21 +1446,19 @@ struct rte_event_vector {
>   * performance, but the user needs to design carefully the split into critical
>   * vs non-critical sections.
>   *
> - * If current flow's scheduler type method is *RTE_SCHED_TYPE_ORDERED*
> - * then this function hints the scheduler that the user has done all that need
> - * to maintain event order in the current ordered context.
> - * The scheduler is allowed to release the ordered context of this port and
> - * avoid reordering any following enqueues.
> - *
> - * Early ordered context release may increase parallelism and thus system
> - * performance.

Before I do up a V3 of this patchset, I'd like to try and understand a bit
more what was meant by the original text for reordered here. The use of
"context" is very ambiguous, since it could refer to a number of different
things here.

For me, RELEASE for ordered queues should mean much the same as for atomic
queues - it means that the event being released is to be "dropped" from the
point of view of the eventdev scheduler - i.e. any atomic locks held for
that event should be released, and any reordering slots for it should be
skipped. However, the text above seems to imply that when we release one
event it means that we should stop reordering all subsequent events for
that port - which seems wrong to me. Especially in the case where
reordering may be done per flow, does one release mean that we need to go
through all flows and mark as skipped all reordered slots awaiting returned
events from that port? If this is what is intended, how is it better than
just doing another dequeue call from the port, which releases everything
automatically anyway?

Jerin, I believe you were the author of the original text, can you perhaps
clarify? Other PMD maintainers, can any of you chime in with current
supported behaviour when enqueuing a release of an ordered event?
Ideally, I'd like to see this simplified whereby release for ordered
behaves like that for atomic and refers to the current event only
(dropping any mention of contexts).

Thanks,
/Bruce

^ permalink raw reply	[flat|nested] 123+ messages in thread

* Re: [PATCH v2 11/11] eventdev: RFC clarify docs on event object fields
  2024-02-01  9:35     ` Bruce Richardson
@ 2024-02-01 15:00       ` Jerin Jacob
  2024-02-01 15:24         ` Bruce Richardson
  0 siblings, 1 reply; 123+ messages in thread
From: Jerin Jacob @ 2024-02-01 15:00 UTC (permalink / raw)
  To: Bruce Richardson
  Cc: dev, jerinj, mattias.ronnblom, abdullah.sevincer, sachin.saxena,
	hemant.agrawal, pbhagavatula, pravin.pathak

On Thu, Feb 1, 2024 at 3:05 PM Bruce Richardson
<bruce.richardson@intel.com> wrote:
>
> On Fri, Jan 19, 2024 at 05:43:46PM +0000, Bruce Richardson wrote:
> > Clarify the meaning of the NEW, FORWARD and RELEASE event types.
> > For the fields in "rte_event" struct, enhance the comments on each to
> > clarify the field's use, and whether it is preserved between enqueue and
> > dequeue, and it's role, if any, in scheduling.
> >
> > Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
> > ---
> >
> > As with the previous patch, please review this patch to ensure that the
> > expected semantics of the various event types and event fields have not
> > changed in an unexpected way.
> > ---
> >  lib/eventdev/rte_eventdev.h | 105 ++++++++++++++++++++++++++----------
> >  1 file changed, 77 insertions(+), 28 deletions(-)
> >
> <snip>
>
> >  #define RTE_EVENT_OP_RELEASE            2
> >  /**< Release the flow context associated with the schedule type.
> >   *
> > - * If current flow's scheduler type method is *RTE_SCHED_TYPE_ATOMIC*
> > + * If current flow's scheduler type method is @ref RTE_SCHED_TYPE_ATOMIC
> >   * then this function hints the scheduler that the user has completed critical
> >   * section processing in the current atomic context.
> >   * The scheduler is now allowed to schedule events from the same flow from
> > @@ -1442,21 +1446,19 @@ struct rte_event_vector {
> >   * performance, but the user needs to design carefully the split into critical
> >   * vs non-critical sections.
> >   *
> > - * If current flow's scheduler type method is *RTE_SCHED_TYPE_ORDERED*
> > - * then this function hints the scheduler that the user has done all that need
> > - * to maintain event order in the current ordered context.
> > - * The scheduler is allowed to release the ordered context of this port and
> > - * avoid reordering any following enqueues.
> > - *
> > - * Early ordered context release may increase parallelism and thus system
> > - * performance.
>
> Before I do up a V3 of this patchset, I'd like to try and understand a bit
> more what was meant by the original text for reordered here. The use of
> "context" is very ambiguous, since it could refer to a number of different
> things here.
>
> For me, RELEASE for ordered queues should mean much the same as for atomic
> queues - it means that the event being released is to be "dropped" from the
> point of view of the eventdev scheduler - i.e. any atomic locks held for
> that event should be released, and any reordering slots for it should be
> skipped. However, the text above seems to imply that when we release one
> event it means that we should stop reordering all subsequent events for
> that port - which seems wrong to me. Especially in the case where
> reordering may be done per flow, does one release mean that we need to go
> through all flows and mark as skipped all reordered slots awaiting returned
> events from that port? If this is what is intended, how is it better than
> just doing another dequeue call from the port, which releases everything
> automatically anyway?
>
> Jerin, I believe you were the author of the original text, can you perhaps
> clarify? Other PMD maintainers, can any of you chime in with current
> supported behaviour when enqueuing a release of an ordered event?

If N cores call rte_event_dequeue_burst() and receive events from the
same flow, scheduled as RTE_SCHED_TYPE_ORDERED, then irrespective of the
timing of the downstream rte_event_enqueue_burst() invocation on any
core, upon rte_event_enqueue_burst() completion the events will be
enqueued to the downstream queue in the ingress order.

Assume one of the cores issues RTE_EVENT_OP_RELEASE in between
dequeue and enqueue; then that event is no longer
eligible for ingress order maintenance.


> Ideally, I'd like to see this simplified whereby release for ordered
> behaves like that for atomic and refers to the current event only
> (dropping any mention of contexts).
>
> Thanks,
> /Bruce

^ permalink raw reply	[flat|nested] 123+ messages in thread

* Re: [PATCH v2 11/11] eventdev: RFC clarify docs on event object fields
  2024-02-01 15:00       ` Jerin Jacob
@ 2024-02-01 15:24         ` Bruce Richardson
  2024-02-01 16:20           ` Jerin Jacob
  0 siblings, 1 reply; 123+ messages in thread
From: Bruce Richardson @ 2024-02-01 15:24 UTC (permalink / raw)
  To: Jerin Jacob
  Cc: dev, jerinj, mattias.ronnblom, abdullah.sevincer, sachin.saxena,
	hemant.agrawal, pbhagavatula, pravin.pathak

On Thu, Feb 01, 2024 at 08:30:26PM +0530, Jerin Jacob wrote:
> On Thu, Feb 1, 2024 at 3:05 PM Bruce Richardson
> <bruce.richardson@intel.com> wrote:
> >
> > On Fri, Jan 19, 2024 at 05:43:46PM +0000, Bruce Richardson wrote:
> > > Clarify the meaning of the NEW, FORWARD and RELEASE event types.
> > > For the fields in "rte_event" struct, enhance the comments on each to
> > > clarify the field's use, and whether it is preserved between enqueue and
> > > dequeue, and it's role, if any, in scheduling.
> > >
> > > Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
> > > ---
> > >
> > > As with the previous patch, please review this patch to ensure that the
> > > expected semantics of the various event types and event fields have not
> > > changed in an unexpected way.
> > > ---
> > >  lib/eventdev/rte_eventdev.h | 105 ++++++++++++++++++++++++++----------
> > >  1 file changed, 77 insertions(+), 28 deletions(-)
> > >
> > <snip>
> >
> > >  #define RTE_EVENT_OP_RELEASE            2
> > >  /**< Release the flow context associated with the schedule type.
> > >   *
> > > - * If current flow's scheduler type method is *RTE_SCHED_TYPE_ATOMIC*
> > > + * If current flow's scheduler type method is @ref RTE_SCHED_TYPE_ATOMIC
> > >   * then this function hints the scheduler that the user has completed critical
> > >   * section processing in the current atomic context.
> > >   * The scheduler is now allowed to schedule events from the same flow from
> > > @@ -1442,21 +1446,19 @@ struct rte_event_vector {
> > >   * performance, but the user needs to design carefully the split into critical
> > >   * vs non-critical sections.
> > >   *
> > > - * If current flow's scheduler type method is *RTE_SCHED_TYPE_ORDERED*
> > > - * then this function hints the scheduler that the user has done all that need
> > > - * to maintain event order in the current ordered context.
> > > - * The scheduler is allowed to release the ordered context of this port and
> > > - * avoid reordering any following enqueues.
> > > - *
> > > - * Early ordered context release may increase parallelism and thus system
> > > - * performance.
> >
> > Before I do up a V3 of this patchset, I'd like to try and understand a bit
> > more what was meant by the original text for reordered here. The use of
> > "context" is very ambiguous, since it could refer to a number of different
> > things here.
> >
> > For me, RELEASE for ordered queues should mean much the same as for atomic
> > queues - it means that the event being released is to be "dropped" from the
> > point of view of the eventdev scheduler - i.e. any atomic locks held for
> > that event should be released, and any reordering slots for it should be
> > skipped. However, the text above seems to imply that when we release one
> > event it means that we should stop reordering all subsequent events for
> > that port - which seems wrong to me. Especially in the case where
> > reordering may be done per flow, does one release mean that we need to go
> > through all flows and mark as skipped all reordered slots awaiting returned
> > events from that port? If this is what is intended, how is it better than
> > just doing another dequeue call from the port, which releases everything
> > automatically anyway?
> >
> > Jerin, I believe you were the author of the original text, can you perhaps
> > clarify? Other PMD maintainers, can any of you chime in with current
> > supported behaviour when enqueuing a release of an ordered event?
> 
> If N cores call rte_event_dequeue_burst() and receive events from the
> same flow, scheduled as RTE_SCHED_TYPE_ORDERED, then irrespective of the
> timing of the downstream rte_event_enqueue_burst() invocation on any
> core, upon rte_event_enqueue_burst() completion the events will be
> enqueued to the downstream queue in the ingress order.
> 
> Assume one of the cores issues RTE_EVENT_OP_RELEASE in between
> dequeue and enqueue; then that event is no longer
> eligible for ingress order maintenance.
> 
Thanks for the reply. Just to confirm my understanding - the RELEASE
applies only to the event that is being skipped/dropped, so that in
burst-mode operation, i.e. when nb_dequeued > 1, the other events from
that burst may still be enqueued and reordered appropriately. Correct?

/Bruce

^ permalink raw reply	[flat|nested] 123+ messages in thread

* Re: [PATCH v2 11/11] eventdev: RFC clarify docs on event object fields
  2024-02-01 15:24         ` Bruce Richardson
@ 2024-02-01 16:20           ` Jerin Jacob
  0 siblings, 0 replies; 123+ messages in thread
From: Jerin Jacob @ 2024-02-01 16:20 UTC (permalink / raw)
  To: Bruce Richardson
  Cc: dev, jerinj, mattias.ronnblom, abdullah.sevincer, sachin.saxena,
	hemant.agrawal, pbhagavatula, pravin.pathak

On Thu, Feb 1, 2024 at 8:54 PM Bruce Richardson
<bruce.richardson@intel.com> wrote:
>
> On Thu, Feb 01, 2024 at 08:30:26PM +0530, Jerin Jacob wrote:
> > On Thu, Feb 1, 2024 at 3:05 PM Bruce Richardson
> > <bruce.richardson@intel.com> wrote:
> > >
> > > On Fri, Jan 19, 2024 at 05:43:46PM +0000, Bruce Richardson wrote:
> > > > Clarify the meaning of the NEW, FORWARD and RELEASE event types.
> > > > For the fields in "rte_event" struct, enhance the comments on each to
> > > > clarify the field's use, and whether it is preserved between enqueue and
> > > > dequeue, and it's role, if any, in scheduling.
> > > >
> > > > Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
> > > > ---
> > > >
> > > > As with the previous patch, please review this patch to ensure that the
> > > > expected semantics of the various event types and event fields have not
> > > > changed in an unexpected way.
> > > > ---
> > > >  lib/eventdev/rte_eventdev.h | 105 ++++++++++++++++++++++++++----------
> > > >  1 file changed, 77 insertions(+), 28 deletions(-)
> > > >
> > > <snip>
> > >
> > > >  #define RTE_EVENT_OP_RELEASE            2
> > > >  /**< Release the flow context associated with the schedule type.
> > > >   *
> > > > - * If current flow's scheduler type method is *RTE_SCHED_TYPE_ATOMIC*
> > > > + * If current flow's scheduler type method is @ref RTE_SCHED_TYPE_ATOMIC
> > > >   * then this function hints the scheduler that the user has completed critical
> > > >   * section processing in the current atomic context.
> > > >   * The scheduler is now allowed to schedule events from the same flow from
> > > > @@ -1442,21 +1446,19 @@ struct rte_event_vector {
> > > >   * performance, but the user needs to design carefully the split into critical
> > > >   * vs non-critical sections.
> > > >   *
> > > > - * If current flow's scheduler type method is *RTE_SCHED_TYPE_ORDERED*
> > > > - * then this function hints the scheduler that the user has done all that need
> > > > - * to maintain event order in the current ordered context.
> > > > - * The scheduler is allowed to release the ordered context of this port and
> > > > - * avoid reordering any following enqueues.
> > > > - *
> > > > - * Early ordered context release may increase parallelism and thus system
> > > > - * performance.
> > >
> > > Before I do up a V3 of this patchset, I'd like to try and understand a bit
> > > more what was meant by the original text for reordered here. The use of
> > > "context" is very ambiguous, since it could refer to a number of different
> > > things here.
> > >
> > > For me, RELEASE for ordered queues should mean much the same as for atomic
> > > queues - it means that the event being released is to be "dropped" from the
> > > point of view of the eventdev scheduler - i.e. any atomic locks held for
> > > that event should be released, and any reordering slots for it should be
> > > skipped. However, the text above seems to imply that when we release one
> > > event it means that we should stop reordering all subsequent events for
> > > that port - which seems wrong to me. Especially in the case where
> > > reordering may be done per flow, does one release mean that we need to go
> > > through all flows and mark as skipped all reordered slots awaiting returned
> > > events from that port? If this is what is intended, how is it better than
> > > just doing another dequeue call from the port, which releases everything
> > > automatically anyway?
> > >
> > > Jerin, I believe you were the author of the original text, can you perhaps
> > > clarify? Other PMD maintainers, can any of you chime in with current
> > > supported behaviour when enqueuing a release of an ordered event?
> >
> > If N cores call rte_event_dequeue_burst() and receive events from the
> > same flow, scheduled as RTE_SCHED_TYPE_ORDERED, then irrespective of the
> > timing of the downstream rte_event_enqueue_burst() invocation on any
> > core, upon rte_event_enqueue_burst() completion the events will be
> > enqueued to the downstream queue in the ingress order.
> >
> > Assume one of the cores issues RTE_EVENT_OP_RELEASE in between
> > dequeue and enqueue; then that event is no longer
> > eligible for ingress order maintenance.
> >
> Thanks for the reply. Just to confirm my understanding - the RELEASE
> applies only to the event that is being skipped/dropped, so that in
> burst-mode operation, i.e. when nb_dequeued > 1, the other events from
> that burst may still be enqueued and reordered appropriately. Correct?

Yes. That's my understanding too.

>
> /Bruce

^ permalink raw reply	[flat|nested] 123+ messages in thread

* Re: [PATCH v2 11/11] eventdev: RFC clarify docs on event object fields
  2024-01-24 11:34     ` Mattias Rönnblom
@ 2024-02-01 16:59       ` Bruce Richardson
  2024-02-02  9:38         ` Mattias Rönnblom
  2024-02-01 17:02       ` Bruce Richardson
  1 sibling, 1 reply; 123+ messages in thread
From: Bruce Richardson @ 2024-02-01 16:59 UTC (permalink / raw)
  To: Mattias Rönnblom
  Cc: dev, jerinj, mattias.ronnblom, abdullah.sevincer, sachin.saxena,
	hemant.agrawal, pbhagavatula, pravin.pathak

On Wed, Jan 24, 2024 at 12:34:50PM +0100, Mattias Rönnblom wrote:
> On 2024-01-19 18:43, Bruce Richardson wrote:
> > Clarify the meaning of the NEW, FORWARD and RELEASE event types.
> > For the fields in "rte_event" struct, enhance the comments on each to
> > clarify the field's use, and whether it is preserved between enqueue and
> > dequeue, and it's role, if any, in scheduling.
> > 
> > Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
> > ---
> > 
> > As with the previous patch, please review this patch to ensure that the
> > expected semantics of the various event types and event fields have not
> > changed in an unexpected way.
> > ---
> >   lib/eventdev/rte_eventdev.h | 105 ++++++++++++++++++++++++++----------
> >   1 file changed, 77 insertions(+), 28 deletions(-)
> > 
> > diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
> > index cb13602ffb..4eff1c4958 100644
> > --- a/lib/eventdev/rte_eventdev.h
> > +++ b/lib/eventdev/rte_eventdev.h
> > @@ -1416,21 +1416,25 @@ struct rte_event_vector {
> > 
> >   /* Event enqueue operations */
> >   #define RTE_EVENT_OP_NEW                0
> > -/**< The event producers use this operation to inject a new event to the
> > +/**< The @ref rte_event.op field should be set to this type to inject a new event to the
> >    * event device.
> >    */
> 
> "type" -> "value"
> 
> "to" -> "into"?
> 
> You could also say "to mark the event as new".
> 
> What is new? Maybe "new (as opposed to a forwarded) event." or "new (i.e.,
> not previously dequeued).".
> 

Using this latter suggested wording in V3.

> "The application sets the @ref rte_event.op field of an enqueued event to
> this value to mark the event as new (i.e., not previously dequeued)."
> 
> >   #define RTE_EVENT_OP_FORWARD            1
> > -/**< The CPU use this operation to forward the event to different event queue or
> > - * change to new application specific flow or schedule type to enable
> > - * pipelining.
> > +/**< SW should set the @ref rte_event.op filed to this type to return a
> > + * previously dequeued event to the event device for further processing.
> 
> "filed" -> "field"
> 
> "SW" -> "The application"
> 
> "to be scheduled for further processing (or transmission)"
> 
> The wording should otherwise be the same as NEW, whatever you choose there.
> 
Ack.

> >    *
> > - * This operation must only be enqueued to the same port that the
> > + * This event *must* be enqueued to the same port that the
> >    * event to be forwarded was dequeued from.
> 
> OK, so you "should" mark a new event RTE_EVENT_OP_FORWARD but you "*must*"
> enqueue it to the same port.
> 
> I think you "must" do both.
> 
Ack

> > + *
> > + * The event's fields, including (but not limited to) flow_id, scheduling type,
> > + * destination queue, and event payload e.g. mbuf pointer, may all be updated as
> > + * desired by software, but the @ref rte_event.impl_opaque field must
> 
> "software" -> "application"
>
Ack
 
> > + * be kept to the same value as was present when the event was dequeued.
> >    */
> >   #define RTE_EVENT_OP_RELEASE            2
> >   /**< Release the flow context associated with the schedule type.
> >    *
> > - * If current flow's scheduler type method is *RTE_SCHED_TYPE_ATOMIC*
> > + * If current flow's scheduler type method is @ref RTE_SCHED_TYPE_ATOMIC
> >    * then this function hints the scheduler that the user has completed critical
> >    * section processing in the current atomic context.
> >    * The scheduler is now allowed to schedule events from the same flow from
> > @@ -1442,21 +1446,19 @@ struct rte_event_vector {
> >    * performance, but the user needs to design carefully the split into critical
> >    * vs non-critical sections.
> >    *
> > - * If current flow's scheduler type method is *RTE_SCHED_TYPE_ORDERED*
> > - * then this function hints the scheduler that the user has done all that need
> > - * to maintain event order in the current ordered context.
> > - * The scheduler is allowed to release the ordered context of this port and
> > - * avoid reordering any following enqueues.
> > - *
> > - * Early ordered context release may increase parallelism and thus system
> > - * performance.
> > + * If current flow's scheduler type method is @ref RTE_SCHED_TYPE_ORDERED
> 
> Isn't a missing "or @ref RTE_SCHED_TYPE_ATOMIC" just an oversight (in the
> original API wording)?
> 

No, I don't think so, because ATOMIC is described above.

> > + * then this function informs the scheduler that the current event has
> > + * completed processing and will not be returned to the scheduler, i.e.
> > + * it has been dropped, and so the reordering context for that event
> > + * should be considered filled.
> >    *
> > - * If current flow's scheduler type method is *RTE_SCHED_TYPE_PARALLEL*
> > + * If current flow's scheduler type method is @ref RTE_SCHED_TYPE_PARALLEL
> >    * or no scheduling context is held then this function may be an NOOP,
> >    * depending on the implementation.
> 
> Maybe you can also fix this "function" -> "operation". I suggest you delete
> that sentence, because it makes no sense.
> 
> What it says in a somewhat vague manner is that you tread into the realm of
> undefined behavior if you release PARALLEL events.
> 

Agree. Just deleting.

> >    *
> >    * This operation must only be enqueued to the same port that the
> > - * event to be released was dequeued from.
> > + * event to be released was dequeued from. The @ref rte_event.impl_opaque
> > + * field in the release event must match that in the original dequeued event.
> 
> I would say "same value" rather than "match".
> 
> "The @ref rte_event.impl_opaque field in the release event have the same
> value as in the original dequeued event."
> 
Ack.

> >    */
> > 
> >   /**
> > @@ -1473,53 +1475,100 @@ struct rte_event {
> >   			/**< Targeted flow identifier for the enqueue and
> >   			 * dequeue operation.
> >   			 * The value must be in the range of
> > -			 * [0, nb_event_queue_flows - 1] which
> > +			 * [0, @ref rte_event_dev_config.nb_event_queue_flows - 1] which
> 
> The same comment as I had before about ranges for unsigned types.
> 
Ack.

> >   			 * previously supplied to rte_event_dev_configure().
> > +			 *
> > +			 * For @ref RTE_SCHED_TYPE_ATOMIC, this field is used to identify a
> > +			 * flow context for atomicity, such that events from each individual flow
> > +			 * will only be scheduled to one port at a time.
> 
> flow_id alone doesn't identify an atomic flow. It's queue_id + flow_id. I'm
> not sure I think "flow context" adds much, unless it's defined somewhere.
> Sounds like some assumed implementation detail.
> 
Removing the word context, and adding that it identifies a flow "within a
queue and priority level", to make it clear that it's not just the flow_id
involved here, as you say.

> > +			 *
> > +			 * This field is preserved between enqueue and dequeue when
> > +			 * a device reports the @ref RTE_EVENT_DEV_CAP_CARRY_FLOW_ID
> > +			 * capability. Otherwise the value is implementation dependent
> > +			 * on dequeue.
> >   			 */
> >   			uint32_t sub_event_type:8;
> >   			/**< Sub-event types based on the event source.
> > +			 *
> > +			 * This field is preserved between enqueue and dequeue.
> > +			 * This field is for SW or event adapter use,
> 
> "SW" -> "application"
> 
Ack.

> > +			 * and is unused in scheduling decisions.
> > +			 *
> >   			 * @see RTE_EVENT_TYPE_CPU
> >   			 */
> >   			uint32_t event_type:4;
> > -			/**< Event type to classify the event source.
> > -			 * @see RTE_EVENT_TYPE_ETHDEV, (RTE_EVENT_TYPE_*)
> > +			/**< Event type to classify the event source. (RTE_EVENT_TYPE_*)
> > +			 *
> > +			 * This field is preserved between enqueue and dequeue
> > +			 * This field is for SW or event adapter use,
> > +			 * and is unused in scheduling decisions.
> 
> "unused" -> "is not considered"?
> 
Ack.

> >   			 */
> >   			uint8_t op:2;
> > -			/**< The type of event enqueue operation - new/forward/
> > -			 * etc.This field is not preserved across an instance
> > +			/**< The type of event enqueue operation - new/forward/ etc.
> > +			 *
> > +			 * This field is *not* preserved across an instance
> >   			 * and is undefined on dequeue.
> 
> Maybe you should use "undefined" rather than "implementation dependent", or
> change this instance of undefined to implementation dependent. Now two
> different terms are used for the same thing.
> 

Using implementation dependent.
Ideally, I think we should update all drivers to set this to "FORWARD" by
default on dequeue, but for now it's "implementation dependent".

> > -			 * @see RTE_EVENT_OP_NEW, (RTE_EVENT_OP_*)
> > +			 *
> > +			 * @see RTE_EVENT_OP_NEW
> > +			 * @see RTE_EVENT_OP_FORWARD
> > +			 * @see RTE_EVENT_OP_RELEASE
> >   			 */
> >   			uint8_t rsvd:4;
> > -			/**< Reserved for future use */
> > +			/**< Reserved for future use.
> > +			 *
> > +			 * Should be set to zero on enqueue. Zero on dequeue.
> > +			 */
> 
> Why say they should be zero on dequeue? Doesn't this defeat the purpose of
> having reserved bits, for future use? With you suggested wording, you can't
> use these bits without breaking the ABI.

Good point. Removing the dequeue value bit.

> 
> >   			uint8_t sched_type:2;
> >   			/**< Scheduler synchronization type (RTE_SCHED_TYPE_*)
> >   			 * associated with flow id on a given event queue
> >   			 * for the enqueue and dequeue operation.
> > +			 *
> > +			 * This field is used to determine the scheduling type
> > +			 * for events sent to queues where @ref RTE_EVENT_QUEUE_CFG_ALL_TYPES
> > +			 * is supported.
> 
> "supported" -> "configured"
> 
Ack.

> > +			 * For queues where only a single scheduling type is available,
> > +			 * this field must be set to match the configured scheduling type.
> > +			 *
> 
> Why is the API/event device asking this of the application?
> 
Historical reasons. I agree that it shouldn't; this should just be marked
as ignored on fixed-type queues, but the spec up till now says it must
match and some drivers do check this. Once we update the drivers to stop
checking then we can change the spec without affecting apps.

> > +			 * This field is preserved between enqueue and dequeue.
> > +			 *
> > +			 * @see RTE_SCHED_TYPE_ORDERED
> > +			 * @see RTE_SCHED_TYPE_ATOMIC
> > +			 * @see RTE_SCHED_TYPE_PARALLEL
> >   			 */
> >   			uint8_t queue_id;
> >   			/**< Targeted event queue identifier for the enqueue or
> >   			 * dequeue operation.
> >   			 * The value must be in the range of
> > -			 * [0, nb_event_queues - 1] which previously supplied to
> > -			 * rte_event_dev_configure().
> > +			 * [0, @ref rte_event_dev_config.nb_event_queues - 1] which was
> > +			 * previously supplied to rte_event_dev_configure().
> > +			 *
> > +			 * This field is preserved between enqueue on dequeue.
> >   			 */
> >   			uint8_t priority;
> >   			/**< Event priority relative to other events in the
> >   			 * event queue. The requested priority should in the
> > -			 * range of  [RTE_EVENT_DEV_PRIORITY_HIGHEST,
> > -			 * RTE_EVENT_DEV_PRIORITY_LOWEST].
> > +			 * range of  [@ref RTE_EVENT_DEV_PRIORITY_HIGHEST,
> > +			 * @ref RTE_EVENT_DEV_PRIORITY_LOWEST].
> >   			 * The implementation shall normalize the requested
> >   			 * priority to supported priority value.
> > +			 *
> >   			 * Valid when the device has
> > -			 * RTE_EVENT_DEV_CAP_EVENT_QOS capability.
> > +			 * @ref RTE_EVENT_DEV_CAP_EVENT_QOS capability.
> > +			 * Ignored otherwise.
> > +			 *
> > +			 * This field is preserved between enqueue and dequeue.
> 
> Is it the normalized or the unnormalized value that is preserved?
> 
Very good point. It's the normalized & then denormalized version that is
guaranteed to be preserved, I suspect. SW eventdevs probably preserve
as-is, but HW eventdevs may lose precision. Rather than making this
"implementation defined" or "not preserved" which would be annoying for
apps, I think, I'm going to document this as "preserved, but with possible
loss of precision".

> >   			 */
> >   			uint8_t impl_opaque;
> >   			/**< Implementation specific opaque value.
> 
> Maybe you can also fix "implementation" here to be something more specific.
> Implementation, of what?
> 
> "Event device implementation" or just "event device".
> 
"Opaque field for event device use"

> > +			 *
> >   			 * An implementation may use this field to hold
> >   			 * implementation specific value to share between
> >   			 * dequeue and enqueue operation.
> > +			 *
> >   			 * The application should not modify this field.
> > +			 * Its value is implementation dependent on dequeue,
> > +			 * and must be returned unmodified on enqueue when
> > +			 * op type is @ref RTE_EVENT_OP_FORWARD or @ref RTE_EVENT_OP_RELEASE
> 
> Should it be mentioned that impl_opaque is ignored by the event device for
> NEW events?
> 
Added in V3.

> >   			 */
> >   		};
> >   	};
> > --
> > 2.40.1
> > 

^ permalink raw reply	[flat|nested] 123+ messages in thread

* Re: [PATCH v2 11/11] eventdev: RFC clarify docs on event object fields
  2024-01-24 11:34     ` Mattias Rönnblom
  2024-02-01 16:59       ` Bruce Richardson
@ 2024-02-01 17:02       ` Bruce Richardson
  2024-02-02  9:14         ` Bruce Richardson
                           ` (2 more replies)
  1 sibling, 3 replies; 123+ messages in thread
From: Bruce Richardson @ 2024-02-01 17:02 UTC (permalink / raw)
  To: Mattias Rönnblom, jerinj
  Cc: dev, jerinj, mattias.ronnblom, abdullah.sevincer, sachin.saxena,
	hemant.agrawal, pbhagavatula, pravin.pathak

On Wed, Jan 24, 2024 at 12:34:50PM +0100, Mattias Rönnblom wrote:
> On 2024-01-19 18:43, Bruce Richardson wrote:
> > Clarify the meaning of the NEW, FORWARD and RELEASE event types.
> > For the fields in "rte_event" struct, enhance the comments on each to
> > clarify the field's use, and whether it is preserved between enqueue and
> > dequeue, and its role, if any, in scheduling.
> > 
> > Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
> > ---
> > 
> > As with the previous patch, please review this patch to ensure that the
> > expected semantics of the various event types and event fields have not
> > changed in an unexpected way.
> > ---
> >   lib/eventdev/rte_eventdev.h | 105 ++++++++++++++++++++++++++----------
> >   1 file changed, 77 insertions(+), 28 deletions(-)
> > 
> > diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
> > index cb13602ffb..4eff1c4958 100644
> > --- a/lib/eventdev/rte_eventdev.h
> > +++ b/lib/eventdev/rte_eventdev.h
<snip>

> >   /**
> > @@ -1473,53 +1475,100 @@ struct rte_event {
> >   			/**< Targeted flow identifier for the enqueue and
> >   			 * dequeue operation.
> >   			 * The value must be in the range of
> > -			 * [0, nb_event_queue_flows - 1] which
> > +			 * [0, @ref rte_event_dev_config.nb_event_queue_flows - 1] which
> 
> The same comment as I had before about ranges for unsigned types.
> 
Actually, is this correct, does a range actually apply here? 

I thought that the number of queue flows supported was a guide as to how
internal HW resources were to be allocated, and that the flow_id was always
a 20-bit value, where it was up to the scheduler to work out how to map
that to internal atomic locks (when combined with queue ids etc.). It
should not be up to the app to have to do the range limiting itself!

/Bruce

^ permalink raw reply	[flat|nested] 123+ messages in thread

* Re: [PATCH v2 03/11] eventdev: update documentation on device capability flags
  2024-01-31 14:09       ` Bruce Richardson
@ 2024-02-02  8:58         ` Mattias Rönnblom
  2024-02-02 11:20           ` Bruce Richardson
  0 siblings, 1 reply; 123+ messages in thread
From: Mattias Rönnblom @ 2024-02-02  8:58 UTC (permalink / raw)
  To: Bruce Richardson
  Cc: dev, jerinj, mattias.ronnblom, abdullah.sevincer, sachin.saxena,
	hemant.agrawal, pbhagavatula, pravin.pathak

On 2024-01-31 15:09, Bruce Richardson wrote:
> On Tue, Jan 23, 2024 at 10:18:53AM +0100, Mattias Rönnblom wrote:
>> On 2024-01-19 18:43, Bruce Richardson wrote:
>>> Update the device capability docs, to:
>>>
>>> * include more cross-references
>>> * split longer text into paragraphs, in most cases with each flag having
>>>     a single-line summary at the start of the doc block
>>> * general comment rewording and clarification as appropriate
>>>
>>> Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
>>> ---
>>>    lib/eventdev/rte_eventdev.h | 130 ++++++++++++++++++++++++++----------
>>>    1 file changed, 93 insertions(+), 37 deletions(-)
>>>
> <snip>
>>>     * If this capability is not set, the queue only supports events of the
>>> - *  *RTE_SCHED_TYPE_* type that it was created with.
>>> + * *RTE_SCHED_TYPE_* type that it was created with.
>>> + * Any events of other types scheduled to the queue will handled in an
>>> + * implementation-dependent manner. They may be dropped by the
>>> + * event device, or enqueued with the scheduling type adjusted to the
>>> + * correct/supported value.
>>
>> Having the application setting sched_type when it was already set on a the
>> level of the queue never made sense to me.
>>
>> I can't see any reasons why this field shouldn't be ignored by the event
>> device on non-RTE_EVENT_QUEUE_CFG_ALL_TYPES queues.
>>
>> If the behavior is indeed undefined, I think it's better to just say
>> "undefined" rather than the above speculation.
>>
> 
> Updating in v3 to just say it's undefined.
> 
>>>     *
>>> - * @see RTE_SCHED_TYPE_* values
> <snip>
>>>    #define RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR (1ULL << 11)
>>>    /**< Event device is capable of changing the queue attributes at runtime i.e
>>> - * after rte_event_queue_setup() or rte_event_start() call sequence. If this
>>> - * flag is not set, eventdev queue attributes can only be configured during
>>> + * after rte_event_queue_setup() or rte_event_dev_start() call sequence.
>>> + *
>>> + * If this flag is not set, eventdev queue attributes can only be configured during
>>>     * rte_event_queue_setup().
>>
>> "event queue" or just "queue".
>>
> Ack.
> 
>>> + *
>>> + * @see rte_event_queue_setup
>>>     */
>>>    #define RTE_EVENT_DEV_CAP_PROFILE_LINK (1ULL << 12)
>>> -/**< Event device is capable of supporting multiple link profiles per event port
>>> - * i.e., the value of `rte_event_dev_info::max_profiles_per_port` is greater
>>> - * than one.
>>> +/**< Event device is capable of supporting multiple link profiles per event port.
>>> + *
>>> + *
>>> + * When set, the value of `rte_event_dev_info::max_profiles_per_port` is greater
>>> + * than one, and multiple profiles may be configured and then switched at runtime.
>>> + * If not set, only a single profile may be configured, which may itself be
>>> + * runtime adjustable (if @ref RTE_EVENT_DEV_CAP_RUNTIME_PORT_LINK is set).
>>> + *
>>> + * @see rte_event_port_profile_links_set rte_event_port_profile_links_get
>>> + * @see rte_event_port_profile_switch
>>> + * @see RTE_EVENT_DEV_CAP_RUNTIME_PORT_LINK
>>>     */
>>>    /* Event device priority levels */
>>>    #define RTE_EVENT_DEV_PRIORITY_HIGHEST   0
>>> -/**< Highest priority expressed across eventdev subsystem
>>> +/**< Highest priority expressed across eventdev subsystem.
>>
>> "The highest priority an event device may support."
>> or
>> "The highest priority any event device may support."
>>
>> Maybe this is a further improvement, beyond punctuation? "across eventdev
>> subsystem" sounds awkward.
>>
> 
> Still not very clear. Talking about device support implies that it's
> possible some devices may not support it. How about:
> "highest priority level for events and queues".
> 

Sounds good. I guess it's totally, 100% obvious highest means most urgent?

Otherwise, "highest (i.e., most urgent) priority level for events and queues"

^ permalink raw reply	[flat|nested] 123+ messages in thread

* Re: [PATCH v2 11/11] eventdev: RFC clarify docs on event object fields
  2024-02-01 17:02       ` Bruce Richardson
@ 2024-02-02  9:14         ` Bruce Richardson
  2024-02-02  9:22         ` Jerin Jacob
  2024-02-02  9:45         ` Mattias Rönnblom
  2 siblings, 0 replies; 123+ messages in thread
From: Bruce Richardson @ 2024-02-02  9:14 UTC (permalink / raw)
  To: Mattias Rönnblom, jerinj
  Cc: dev, mattias.ronnblom, abdullah.sevincer, sachin.saxena,
	hemant.agrawal, pbhagavatula, pravin.pathak

On Thu, Feb 01, 2024 at 05:02:44PM +0000, Bruce Richardson wrote:
> On Wed, Jan 24, 2024 at 12:34:50PM +0100, Mattias Rönnblom wrote:
> > On 2024-01-19 18:43, Bruce Richardson wrote:
> > > Clarify the meaning of the NEW, FORWARD and RELEASE event types.
> > > For the fields in "rte_event" struct, enhance the comments on each to
> > > clarify the field's use, and whether it is preserved between enqueue and
> > > dequeue, and its role, if any, in scheduling.
> > > 
> > > Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
> > > ---
> > > 
> > > As with the previous patch, please review this patch to ensure that the
> > > expected semantics of the various event types and event fields have not
> > > changed in an unexpected way.
> > > ---
> > >   lib/eventdev/rte_eventdev.h | 105 ++++++++++++++++++++++++++----------
> > >   1 file changed, 77 insertions(+), 28 deletions(-)
> > > 
> > > diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
> > > index cb13602ffb..4eff1c4958 100644
> > > --- a/lib/eventdev/rte_eventdev.h
> > > +++ b/lib/eventdev/rte_eventdev.h
> <snip>
> 
> > >   /**
> > > @@ -1473,53 +1475,100 @@ struct rte_event {
> > >   			/**< Targeted flow identifier for the enqueue and
> > >   			 * dequeue operation.
> > >   			 * The value must be in the range of
> > > -			 * [0, nb_event_queue_flows - 1] which
> > > +			 * [0, @ref rte_event_dev_config.nb_event_queue_flows - 1] which
> > 
> > The same comment as I had before about ranges for unsigned types.
> > 
> Actually, is this correct, does a range actually apply here? 
> 
> I thought that the number of queue flows supported was a guide as to how
> internal HW resources were to be allocated, and that the flow_id was always
> a 20-bit value, where it was up to the scheduler to work out how to map
> that to internal atomic locks (when combined with queue ids etc.). It
> should not be up to the app to have to do the range limiting itself!
> 
Looking at the RX adapter in eventdev, I don't see any obvious clamping of
the flow ids to the range of 0-nb_event_queue_flows, though I'm not that
familiar with that code, so I may have missed something. If I'm right,
it looks like this doc line may indeed be a mistake.

@Jerin, can you comment again here. Is flow_id really meant to be limited
to the specified range, or is it a full 20-bit value supplied in all cases?

Thanks,
/Bruce

^ permalink raw reply	[flat|nested] 123+ messages in thread

* Re: [PATCH v2 11/11] eventdev: RFC clarify docs on event object fields
  2024-02-01 17:02       ` Bruce Richardson
  2024-02-02  9:14         ` Bruce Richardson
@ 2024-02-02  9:22         ` Jerin Jacob
  2024-02-02  9:36           ` Bruce Richardson
  2024-02-02  9:45         ` Mattias Rönnblom
  2 siblings, 1 reply; 123+ messages in thread
From: Jerin Jacob @ 2024-02-02  9:22 UTC (permalink / raw)
  To: Bruce Richardson
  Cc: Mattias Rönnblom, jerinj, dev, mattias.ronnblom,
	abdullah.sevincer, sachin.saxena, hemant.agrawal, pbhagavatula,
	pravin.pathak

On Thu, Feb 1, 2024 at 10:33 PM Bruce Richardson
<bruce.richardson@intel.com> wrote:
>
> On Wed, Jan 24, 2024 at 12:34:50PM +0100, Mattias Rönnblom wrote:
> > On 2024-01-19 18:43, Bruce Richardson wrote:
> > > Clarify the meaning of the NEW, FORWARD and RELEASE event types.
> > > For the fields in "rte_event" struct, enhance the comments on each to
> > > clarify the field's use, and whether it is preserved between enqueue and
> > > dequeue, and its role, if any, in scheduling.
> > >
> > > Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
> > > ---
> > >
> > > As with the previous patch, please review this patch to ensure that the
> > > expected semantics of the various event types and event fields have not
> > > changed in an unexpected way.
> > > ---
> > >   lib/eventdev/rte_eventdev.h | 105 ++++++++++++++++++++++++++----------
> > >   1 file changed, 77 insertions(+), 28 deletions(-)
> > >
> > > diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
> > > index cb13602ffb..4eff1c4958 100644
> > > --- a/lib/eventdev/rte_eventdev.h
> > > +++ b/lib/eventdev/rte_eventdev.h
> <snip>
>
> > >   /**
> > > @@ -1473,53 +1475,100 @@ struct rte_event {
> > >                     /**< Targeted flow identifier for the enqueue and
> > >                      * dequeue operation.
> > >                      * The value must be in the range of
> > > -                    * [0, nb_event_queue_flows - 1] which
> > > +                    * [0, @ref rte_event_dev_config.nb_event_queue_flows - 1] which
> >
> > The same comment as I had before about ranges for unsigned types.
> >
> Actually, is this correct, does a range actually apply here?
>
> I thought that the number of queue flows supported was a guide as to how
> internal HW resources were to be allocated, and that the flow_id was always
> a 20-bit value, where it was up to the scheduler to work out how to map
> that to internal atomic locks (when combined with queue ids etc.). It
> should not be up to the app to have to do the range limiting itself!

On CNXK HW, it supports a 20-bit value. I am not sure about other HW.
That is the reason I added this configuration parameter, allowing the HW
to be configured if it is NOT free.

>
> /Bruce

^ permalink raw reply	[flat|nested] 123+ messages in thread

* Re: [PATCH v2 04/11] eventdev: cleanup doxygen comments on info structure
  2024-01-31 14:37           ` Bruce Richardson
@ 2024-02-02  9:24             ` Mattias Rönnblom
  2024-02-02 10:30               ` Bruce Richardson
  0 siblings, 1 reply; 123+ messages in thread
From: Mattias Rönnblom @ 2024-02-02  9:24 UTC (permalink / raw)
  To: Bruce Richardson
  Cc: dev, jerinj, mattias.ronnblom, abdullah.sevincer, sachin.saxena,
	hemant.agrawal, pbhagavatula, pravin.pathak

On 2024-01-31 15:37, Bruce Richardson wrote:
> On Wed, Jan 24, 2024 at 12:51:03PM +0100, Mattias Rönnblom wrote:
>> On 2024-01-23 10:43, Bruce Richardson wrote:
>>> On Tue, Jan 23, 2024 at 10:35:02AM +0100, Mattias Rönnblom wrote:
>>>> On 2024-01-19 18:43, Bruce Richardson wrote:
>>>>> Some small rewording changes to the doxygen comments on struct
>>>>> rte_event_dev_info.
>>>>>
>>>>> Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
>>>>> ---
>>>>>     lib/eventdev/rte_eventdev.h | 46 ++++++++++++++++++++-----------------
>>>>>     1 file changed, 25 insertions(+), 21 deletions(-)
>>>>>
>>>>> diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
>>>>> index 57a2791946..872f241df2 100644
>>>>> --- a/lib/eventdev/rte_eventdev.h
>>>>> +++ b/lib/eventdev/rte_eventdev.h
>>>>> @@ -482,54 +482,58 @@ struct rte_event_dev_info {
>>>>>     	const char *driver_name;	/**< Event driver name */
>>>>>     	struct rte_device *dev;	/**< Device information */
>>>>>     	uint32_t min_dequeue_timeout_ns;
>>>>> -	/**< Minimum supported global dequeue timeout(ns) by this device */
>>>>> +	/**< Minimum global dequeue timeout(ns) supported by this device */
>>>>
>>>> Are we missing a bunch of "." here and in the other fields?
>>>>
>>>>>     	uint32_t max_dequeue_timeout_ns;
>>>>> -	/**< Maximum supported global dequeue timeout(ns) by this device */
>>>>> +	/**< Maximum global dequeue timeout(ns) supported by this device */
>>>>>     	uint32_t dequeue_timeout_ns;
>>>>>     	/**< Configured global dequeue timeout(ns) for this device */
>>>>>     	uint8_t max_event_queues;
>>>>> -	/**< Maximum event_queues supported by this device */
>>>>> +	/**< Maximum event queues supported by this device */
>>>>>     	uint32_t max_event_queue_flows;
>>>>> -	/**< Maximum supported flows in an event queue by this device*/
>>>>> +	/**< Maximum number of flows within an event queue supported by this device*/
>>>>>     	uint8_t max_event_queue_priority_levels;
>>>>>     	/**< Maximum number of event queue priority levels by this device.
>>>>> -	 * Valid when the device has RTE_EVENT_DEV_CAP_QUEUE_QOS capability
>>>>> +	 * Valid when the device has RTE_EVENT_DEV_CAP_QUEUE_QOS capability.
>>>>> +	 * The priority levels are evenly distributed between
>>>>> +	 * @ref RTE_EVENT_DEV_PRIORITY_HIGHEST and @ref RTE_EVENT_DEV_PRIORITY_LOWEST.
>>>>
>>>> This is a change of the API, in the sense it's defining something previously
>>>> left undefined?
>>>>
>>>
>>> Well, undefined is pretty useless for app writers, no?
>>> However, agreed that the range between HIGHEST and LOWEST is an assumption
>>> on my part, chosen because it matches what happens to the event priorities
>>> which are documented in event struct as "The implementation shall normalize
>>>    the requested priority to supported priority value" - which, while better
>>> than nothing, does technically leave the details of how normalization
>>> occurs up to the implementation.
>>>
>>>> If you need 6 different priority levels in an app, how do you go about
>>>> making sure you find the correct (distinct) Eventdev levels on any event
>>>> device supporting >= 6 levels?
>>>>
>>>> #define NUM_MY_LEVELS 6
>>>>
>>>> #define MY_LEVEL_TO_EVENTDEV_LEVEL(my_level) (((my_level) *
>>>> (RTE_EVENT_DEV_PRIORITY_HIGHEST-RTE_EVENT_DEV_PRIORTY_LOWEST) /
>>>> NUM_MY_LEVELS)
>>>>
>>>> This way? One would worry a bit exactly what "evenly" means, in terms of
>>>> rounding errors. If you have an event device with 255 priority levels of
>>>> (say) 256 levels available in the API, which two levels are the same
>>>> priority?
>>>
>>> Yes, round etc. will be an issue in cases of non-powers-of 2.
>>> However, I think we do need to clarify this behaviour, so I'm open to
>>> alternative suggestions as to how update this.
>>>
>>
>> In retrospect, maybe it would have been better to just express the number of
>> priority levels an event device supported, only allow [0, max_levels - 1] in
>> the prio field, and leave it to the app to do the conversion/normalization.
>>
> 
> Yes, in many ways that would be better.
>   
>> Maybe a new <rte_eventdev.h> helper macro would at least suggest to the PMD
>> driver implementer and the application designer how this normalization
>> should work. Something like the above, but where NUM_MY_LEVELS is an input
>> parameter. Would result in an integer division though, so shouldn't be used
>> in the fast path.
> 
> I think it's actually ok now, having the drivers do the work, since each
> driver can choose optimal method. For those having power-of-2 number of
> priorities, just a shift op works best.
> 

I had an application-usable macro in mind, not a macro for PMDs. Showing 
how app-level priority levels should map to Eventdev API-level priority 
levels would, by implication, show how event device should do the 
Eventdev API priority -> PMD level priority compression.

The event device has exactly zero freedom in choosing how to translate 
Eventdev API-level priorities to its internal priorities, or risk not 
differentiating between app-level priority levels. If an event device 
supports 128 levels, is RTE_EVENT_DEV_PRIORITY_NORMAL and 
RTE_EVENT_DEV_PRIORITY_NORMAL-1 the same PMD-level priority or not? I 
would *guess* the same, but that just an assumption, and not something 
that follows from "normalize", I think.

Anyway, this is not a problem this patch set necessarily needs to solve.

> The key thing for the documentation here, to my mind, is to make it clear
> that though the number of priorities is N, you still specify the relative
> priorities in the range of 0-255. This is documented in the queue
> configuration, so, while we could leave it unmentionned here, I think for
> clarity it should be called out. I'm going to reword slightly as:
> 
>   * The implementation shall normalize priority values specified between
>   * @ref RTE_EVENT_DEV_PRIORITY_HIGHEST and @ref RTE_EVENT_DEV_PRIORITY_LOWEST
>   * to map them internally to this range of priorities.
>   *
>   * @see rte_event_queue_conf.priority
> 
> This way the wording in the two places is similar.
> 
> /Bruce

^ permalink raw reply	[flat|nested] 123+ messages in thread

* Re: [PATCH v2 11/11] eventdev: RFC clarify docs on event object fields
  2024-02-02  9:22         ` Jerin Jacob
@ 2024-02-02  9:36           ` Bruce Richardson
  0 siblings, 0 replies; 123+ messages in thread
From: Bruce Richardson @ 2024-02-02  9:36 UTC (permalink / raw)
  To: Jerin Jacob
  Cc: Mattias Rönnblom, jerinj, dev, mattias.ronnblom,
	abdullah.sevincer, sachin.saxena, hemant.agrawal, pbhagavatula,
	pravin.pathak

On Fri, Feb 02, 2024 at 02:52:05PM +0530, Jerin Jacob wrote:
> On Thu, Feb 1, 2024 at 10:33 PM Bruce Richardson
> <bruce.richardson@intel.com> wrote:
> >
> > On Wed, Jan 24, 2024 at 12:34:50PM +0100, Mattias Rönnblom wrote:
> > > On 2024-01-19 18:43, Bruce Richardson wrote:
> > > > Clarify the meaning of the NEW, FORWARD and RELEASE event types.
> > > > For the fields in "rte_event" struct, enhance the comments on each to
> > > > clarify the field's use, and whether it is preserved between enqueue and
> > > > dequeue, and it's role, if any, in scheduling.
> > > >
> > > > Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
> > > > ---
> > > >
> > > > As with the previous patch, please review this patch to ensure that the
> > > > expected semantics of the various event types and event fields have not
> > > > changed in an unexpected way.
> > > > ---
> > > >   lib/eventdev/rte_eventdev.h | 105 ++++++++++++++++++++++++++----------
> > > >   1 file changed, 77 insertions(+), 28 deletions(-)
> > > >
> > > > diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
> > > > index cb13602ffb..4eff1c4958 100644
> > > > --- a/lib/eventdev/rte_eventdev.h
> > > > +++ b/lib/eventdev/rte_eventdev.h
> > <snip>
> >
> > > >   /**
> > > > @@ -1473,53 +1475,100 @@ struct rte_event {
> > > >                     /**< Targeted flow identifier for the enqueue and
> > > >                      * dequeue operation.
> > > >                      * The value must be in the range of
> > > > -                    * [0, nb_event_queue_flows - 1] which
> > > > +                    * [0, @ref rte_event_dev_config.nb_event_queue_flows - 1] which
> > >
> > > The same comment as I had before about ranges for unsigned types.
> > >
> > Actually, is this correct, does a range actually apply here?
> >
> > I thought that the number of queue flows supported was a guide as to how
> > internal HW resources were to be allocated, and that the flow_id was always
> > a 20-bit value, where it was up to the scheduler to work out how to map
> > that to internal atomic locks (when combined with queue ids etc.). It
> > should not be up to the app to have to do the range limiting itself!
> 
> On CNXK HW, it supports 20bit value. I am not sure about other HW.
> That is the reason I add this configuration parameter by allowing HW
> to be configured if it is NOT free.
> 
Ok, but that is making the assumption that the number of flow slots is
directly related to the number of bits of flow_id which can be passed in. I
think it's the driver or device's job to hash down the bits if necessary
internally.

For v3 I'm going to remove this sentence, as the event RX adapter doesn't
seem to be limiting things, and nobody's reported an issue with it, and
also because the rte_event_dev_config struct itself doesn't mention the
config value having an impact on the flow-ids that can be passed.

/Bruce

^ permalink raw reply	[flat|nested] 123+ messages in thread

* Re: [PATCH v2 11/11] eventdev: RFC clarify docs on event object fields
  2024-02-01 16:59       ` Bruce Richardson
@ 2024-02-02  9:38         ` Mattias Rönnblom
  2024-02-02 11:33           ` Bruce Richardson
  0 siblings, 1 reply; 123+ messages in thread
From: Mattias Rönnblom @ 2024-02-02  9:38 UTC (permalink / raw)
  To: Bruce Richardson
  Cc: dev, jerinj, mattias.ronnblom, abdullah.sevincer, sachin.saxena,
	hemant.agrawal, pbhagavatula, pravin.pathak

On 2024-02-01 17:59, Bruce Richardson wrote:
> On Wed, Jan 24, 2024 at 12:34:50PM +0100, Mattias Rönnblom wrote:
>> On 2024-01-19 18:43, Bruce Richardson wrote:
>>> Clarify the meaning of the NEW, FORWARD and RELEASE event types.
>>> For the fields in "rte_event" struct, enhance the comments on each to
>>> clarify the field's use, and whether it is preserved between enqueue and
>>> dequeue, and its role, if any, in scheduling.
>>>
>>> Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
>>> ---
>>>
>>> As with the previous patch, please review this patch to ensure that the
>>> expected semantics of the various event types and event fields have not
>>> changed in an unexpected way.
>>> ---
>>>    lib/eventdev/rte_eventdev.h | 105 ++++++++++++++++++++++++++----------
>>>    1 file changed, 77 insertions(+), 28 deletions(-)
>>>
>>> diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
>>> index cb13602ffb..4eff1c4958 100644
>>> --- a/lib/eventdev/rte_eventdev.h
>>> +++ b/lib/eventdev/rte_eventdev.h
>>> @@ -1416,21 +1416,25 @@ struct rte_event_vector {
>>>
>>>    /* Event enqueue operations */
>>>    #define RTE_EVENT_OP_NEW                0
>>> -/**< The event producers use this operation to inject a new event to the
>>> +/**< The @ref rte_event.op field should be set to this type to inject a new event to the
>>>     * event device.
>>>     */
>>
>> "type" -> "value"
>>
>> "to" -> "into"?
>>
>> You could also say "to mark the event as new".
>>
>> What is new? Maybe "new (as opposed to a forwarded) event." or "new (i.e.,
>> not previously dequeued).".
>>
> 
> Using this latter suggested wording in V3.
> 
>> "The application sets the @ref rte_event.op field of an enqueued event to
>> this value to mark the event as new (i.e., not previously dequeued)."
>>
>>>    #define RTE_EVENT_OP_FORWARD            1
>>> -/**< The CPU use this operation to forward the event to different event queue or
>>> - * change to new application specific flow or schedule type to enable
>>> - * pipelining.
>>> +/**< SW should set the @ref rte_event.op filed to this type to return a
>>> + * previously dequeued event to the event device for further processing.
>>
>> "filed" -> "field"
>>
>> "SW" -> "The application"
>>
>> "to be scheduled for further processing (or transmission)"
>>
>> The wording should otherwise be the same as NEW, whatever you choose there.
>>
> Ack.
> 
>>>     *
>>> - * This operation must only be enqueued to the same port that the
>>> + * This event *must* be enqueued to the same port that the
>>>     * event to be forwarded was dequeued from.
>>
>> OK, so you "should" mark a new event RTE_EVENT_OP_FORWARD but you "*must*"
>> enqueue it to the same port.
>>
>> I think you "must" do both.
>>
> Ack
> 
>>> + *
>>> + * The event's fields, including (but not limited to) flow_id, scheduling type,
>>> + * destination queue, and event payload e.g. mbuf pointer, may all be updated as
>>> + * desired by software, but the @ref rte_event.impl_opaque field must
>>
>> "software" -> "application"
>>
> Ack
>   
>>> + * be kept to the same value as was present when the event was dequeued.
>>>     */
>>>    #define RTE_EVENT_OP_RELEASE            2
>>>    /**< Release the flow context associated with the schedule type.
>>>     *
>>> - * If current flow's scheduler type method is *RTE_SCHED_TYPE_ATOMIC*
>>> + * If current flow's scheduler type method is @ref RTE_SCHED_TYPE_ATOMIC
>>>     * then this function hints the scheduler that the user has completed critical
>>>     * section processing in the current atomic context.
>>>     * The scheduler is now allowed to schedule events from the same flow from
>>> @@ -1442,21 +1446,19 @@ struct rte_event_vector {
>>>     * performance, but the user needs to design carefully the split into critical
>>>     * vs non-critical sections.
>>>     *
>>> - * If current flow's scheduler type method is *RTE_SCHED_TYPE_ORDERED*
>>> - * then this function hints the scheduler that the user has done all that need
>>> - * to maintain event order in the current ordered context.
>>> - * The scheduler is allowed to release the ordered context of this port and
>>> - * avoid reordering any following enqueues.
>>> - *
>>> - * Early ordered context release may increase parallelism and thus system
>>> - * performance.
>>> + * If current flow's scheduler type method is @ref RTE_SCHED_TYPE_ORDERED
>>
>> Isn't a missing "or @ref RTE_SCHED_TYPE_ATOMIC" just an oversight (in the
>> original API wording)?
>>
> 
> No, I don't think so, because ATOMIC is described above.
> 
>>> + * then this function informs the scheduler that the current event has
>>> + * completed processing and will not be returned to the scheduler, i.e.
>>> + * it has been dropped, and so the reordering context for that event
>>> + * should be considered filled.
>>>     *
>>> - * If current flow's scheduler type method is *RTE_SCHED_TYPE_PARALLEL*
>>> + * If current flow's scheduler type method is @ref RTE_SCHED_TYPE_PARALLEL
>>>     * or no scheduling context is held then this function may be an NOOP,
>>>     * depending on the implementation.
>>
>> Maybe you can also fix this "function" -> "operation". I suggest you delete
>> that sentence, because it makes no sense.
>>
>> What is says in a somewhat vague manner is that you tread into the realm of
>> undefined behavior if you release PARALLEL events.
>>
> 
> Agree. Just deleting.
> 
>>>     *
>>>     * This operation must only be enqueued to the same port that the
>>> - * event to be released was dequeued from.
>>> + * event to be released was dequeued from. The @ref rte_event.impl_opaque
>>> + * field in the release event must match that in the original dequeued event.
>>
>> I would say "same value" rather than "match".
>>
>> "The @ref rte_event.impl_opaque field in the release event have the same
>> value as in the original dequeued event."
>>
> Ack.
> 
>>>     */
>>>
>>>    /**
>>> @@ -1473,53 +1475,100 @@ struct rte_event {
>>>    			/**< Targeted flow identifier for the enqueue and
>>>    			 * dequeue operation.
>>>    			 * The value must be in the range of
>>> -			 * [0, nb_event_queue_flows - 1] which
>>> +			 * [0, @ref rte_event_dev_config.nb_event_queue_flows - 1] which
>>
>> The same comment as I had before about ranges for unsigned types.
>>
> Ack.
> 
>>>    			 * previously supplied to rte_event_dev_configure().
>>> +			 *
>>> +			 * For @ref RTE_SCHED_TYPE_ATOMIC, this field is used to identify a
>>> +			 * flow context for atomicity, such that events from each individual flow
>>> +			 * will only be scheduled to one port at a time.
>>
>> flow_id alone doesn't identify an atomic flow. It's queue_id + flow_id. I'm
>> not sure I think "flow context" adds much, unless it's defined somewhere.
>> Sounds like some assumed implementation detail.
>>
> Removing the word context, and adding that it identifies a flow "within a
> queue and priority level", to make it clear that it's just not the flow_id
> involved here, as you say.
> 
>>> +			 *
>>> +			 * This field is preserved between enqueue and dequeue when
>>> +			 * a device reports the @ref RTE_EVENT_DEV_CAP_CARRY_FLOW_ID
>>> +			 * capability. Otherwise the value is implementation dependent
>>> +			 * on dequeue.
>>> +			 */
>>>    			uint32_t sub_event_type:8;
>>>    			/**< Sub-event types based on the event source.
>>> +			 *
>>> +			 * This field is preserved between enqueue and dequeue.
>>> +			 * This field is for SW or event adapter use,
>>
>> "SW" -> "application"
>>
> Ack.
> 
>>> +			 * and is unused in scheduling decisions.
>>> +			 *
>>>    			 * @see RTE_EVENT_TYPE_CPU
>>>    			 */
>>>    			uint32_t event_type:4;
>>> -			/**< Event type to classify the event source.
>>> -			 * @see RTE_EVENT_TYPE_ETHDEV, (RTE_EVENT_TYPE_*)
>>> +			/**< Event type to classify the event source. (RTE_EVENT_TYPE_*)
>>> +			 *
>>> +			 * This field is preserved between enqueue and dequeue
>>> +			 * This field is for SW or event adapter use,
>>> +			 * and is unused in scheduling decisions.
>>
>> "unused" -> "is not considered"?
>>
> Ack.
> 
>>>    			 */
>>>    			uint8_t op:2;
>>> -			/**< The type of event enqueue operation - new/forward/
>>> -			 * etc.This field is not preserved across an instance
>>> +			/**< The type of event enqueue operation - new/forward/ etc.
>>> +			 *
>>> +			 * This field is *not* preserved across an instance
>>>    			 * and is undefined on dequeue.
>>
>> Maybe you should use "undefined" rather than "implementation dependent", or
>> change this instance of undefined to implementation dependent. Now two
>> different terms are used for the same thing.
>>
> 
> Using implementation dependent.
> Ideally, I think we should update all drivers to set this to "FORWARD" by
> default on dequeue, but for now it's "implementation dependent".
> 

That would make a lot of sense.

>>> -			 * @see RTE_EVENT_OP_NEW, (RTE_EVENT_OP_*)
>>> +			 *
>>> +			 * @see RTE_EVENT_OP_NEW
>>> +			 * @see RTE_EVENT_OP_FORWARD
>>> +			 * @see RTE_EVENT_OP_RELEASE
>>>    			 */
>>>    			uint8_t rsvd:4;
>>> -			/**< Reserved for future use */
>>> +			/**< Reserved for future use.
>>> +			 *
>>> +			 * Should be set to zero on enqueue. Zero on dequeue.
>>> +			 */
>>
>> Why say they should be zero on dequeue? Doesn't this defeat the purpose of
>> having reserved bits, for future use? With you suggested wording, you can't
>> use these bits without breaking the ABI.
> 
> Good point. Removing the dequeue value bit.
> 
>>
>>>    			uint8_t sched_type:2;
>>>    			/**< Scheduler synchronization type (RTE_SCHED_TYPE_*)
>>>    			 * associated with flow id on a given event queue
>>>    			 * for the enqueue and dequeue operation.
>>> +			 *
>>> +			 * This field is used to determine the scheduling type
>>> +			 * for events sent to queues where @ref RTE_EVENT_QUEUE_CFG_ALL_TYPES
>>> +			 * is supported.
>>
>> "supported" -> "configured"
>>
> Ack.
> 
>>> +			 * For queues where only a single scheduling type is available,
>>> +			 * this field must be set to match the configured scheduling type.
>>> +			 *
>>
>> Why is the API/event device asking this of the application?
>>
> Historical reasons. I agree that it shouldn't, this should just be marked
> as ignored on fixed-type queues, but the spec up till now says it must
> match and some drivers do check this. Once we update the drivers to stop
> checking then we can change the spec without affecting apps.
> 
>>> +			 * This field is preserved between enqueue and dequeue.
>>> +			 *
>>> +			 * @see RTE_SCHED_TYPE_ORDERED
>>> +			 * @see RTE_SCHED_TYPE_ATOMIC
>>> +			 * @see RTE_SCHED_TYPE_PARALLEL
>>>    			 */
>>>    			uint8_t queue_id;
>>>    			/**< Targeted event queue identifier for the enqueue or
>>>    			 * dequeue operation.
>>>    			 * The value must be in the range of
>>> -			 * [0, nb_event_queues - 1] which previously supplied to
>>> -			 * rte_event_dev_configure().
>>> +			 * [0, @ref rte_event_dev_config.nb_event_queues - 1] which was
>>> +			 * previously supplied to rte_event_dev_configure().
>>> +			 *
>>> +			 * This field is preserved between enqueue on dequeue.
>>>    			 */
>>>    			uint8_t priority;
>>>    			/**< Event priority relative to other events in the
>>>    			 * event queue. The requested priority should in the
>>> -			 * range of  [RTE_EVENT_DEV_PRIORITY_HIGHEST,
>>> -			 * RTE_EVENT_DEV_PRIORITY_LOWEST].
>>> +			 * range of  [@ref RTE_EVENT_DEV_PRIORITY_HIGHEST,
>>> +			 * @ref RTE_EVENT_DEV_PRIORITY_LOWEST].
>>>    			 * The implementation shall normalize the requested
>>>    			 * priority to supported priority value.
>>> +			 *
>>>    			 * Valid when the device has
>>> -			 * RTE_EVENT_DEV_CAP_EVENT_QOS capability.
>>> +			 * @ref RTE_EVENT_DEV_CAP_EVENT_QOS capability.
>>> +			 * Ignored otherwise.
>>> +			 *
>>> +			 * This field is preserved between enqueue and dequeue.
>>
>> Is the normalized or unnormalized value that is preserved?
>>
> Very good point. It's the normalized & then denormalised version that is
> guaranteed to be preserved, I suspect. SW eventdevs probably preserve
> as-is, but HW eventdevs may lose precision. Rather than making this
> "implementation defined" or "not preserved" which would be annoying for
> apps, I think, I'm going to document this as "preserved, but with possible
> loss of precision".
> 

This makes me again think it may be worth noting that Eventdev -> API 
priority normalization is (event.priority * PMD_LEVELS) / 
EVENTDEV_LEVELS (rounded down) - assuming that's how it's supposed to be 
done - or something to that effect.

>>>    			 */
>>>    			uint8_t impl_opaque;
>>>    			/**< Implementation specific opaque value.
>>
>> Maybe you can also fix "implementation" here to be something more specific.
>> Implementation, of what?
>>
>> "Event device implementation" or just "event device".
>>
> "Opaque field for event device use"
> 
>>> +			 *
>>>    			 * An implementation may use this field to hold
>>>    			 * implementation specific value to share between
>>>    			 * dequeue and enqueue operation.
>>> +			 *
>>>    			 * The application should not modify this field.
>>> +			 * Its value is implementation dependent on dequeue,
>>> +			 * and must be returned unmodified on enqueue when
>>> +			 * op type is @ref RTE_EVENT_OP_FORWARD or @ref RTE_EVENT_OP_RELEASE
>>
>> Should it be mentioned that impl_opaque is ignored by the event device for
>> NEW events?
>>
> Added in V3.
> 
>>>    			 */
>>>    		};
>>>    	};
>>> --
>>> 2.40.1
>>>

^ permalink raw reply	[flat|nested] 123+ messages in thread

* Re: [PATCH v2 11/11] eventdev: RFC clarify docs on event object fields
  2024-02-01 17:02       ` Bruce Richardson
  2024-02-02  9:14         ` Bruce Richardson
  2024-02-02  9:22         ` Jerin Jacob
@ 2024-02-02  9:45         ` Mattias Rönnblom
  2024-02-02 10:32           ` Bruce Richardson
  2 siblings, 1 reply; 123+ messages in thread
From: Mattias Rönnblom @ 2024-02-02  9:45 UTC (permalink / raw)
  To: Bruce Richardson, jerinj
  Cc: dev, mattias.ronnblom, abdullah.sevincer, sachin.saxena,
	hemant.agrawal, pbhagavatula, pravin.pathak

On 2024-02-01 18:02, Bruce Richardson wrote:
> On Wed, Jan 24, 2024 at 12:34:50PM +0100, Mattias Rönnblom wrote:
>> On 2024-01-19 18:43, Bruce Richardson wrote:
>>> Clarify the meaning of the NEW, FORWARD and RELEASE event types.
>>> For the fields in "rte_event" struct, enhance the comments on each to
>>> clarify the field's use, and whether it is preserved between enqueue and
>>> dequeue, and it's role, if any, in scheduling.
>>>
>>> Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
>>> ---
>>>
>>> As with the previous patch, please review this patch to ensure that the
>>> expected semantics of the various event types and event fields have not
>>> changed in an unexpected way.
>>> ---
>>>    lib/eventdev/rte_eventdev.h | 105 ++++++++++++++++++++++++++----------
>>>    1 file changed, 77 insertions(+), 28 deletions(-)
>>>
>>> diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
>>> index cb13602ffb..4eff1c4958 100644
>>> --- a/lib/eventdev/rte_eventdev.h
>>> +++ b/lib/eventdev/rte_eventdev.h
> <snip>
> 
>>>    /**
>>> @@ -1473,53 +1475,100 @@ struct rte_event {
>>>    			/**< Targeted flow identifier for the enqueue and
>>>    			 * dequeue operation.
>>>    			 * The value must be in the range of
>>> -			 * [0, nb_event_queue_flows - 1] which
>>> +			 * [0, @ref rte_event_dev_config.nb_event_queue_flows - 1] which
>>
>> The same comment as I had before about ranges for unsigned types.
>>
> Actually, is this correct, does a range actually apply here?
> 
> I thought that the number of queue flows supported was a guide as to how
> internal HW resources were to be allocated, and that the flow_id was always
> a 20-bit value, where it was up to the scheduler to work out how to map
> that to internal atomic locks (when combined with queue ids etc.). It
> should not be up to the app to have to do the range limiting itself!
> 

Indeed, I also operated under this belief, which is reflected in DSW, 
which just takes the flow_id and (pseudo-)hash+mask it into the 
appropriate range.

Leaving it to the app allows app-level attempts to avoid collisions 
between large flows, I guess. Not sure I think apps will (or even 
should) really do this.

^ permalink raw reply	[flat|nested] 123+ messages in thread

* Re: [PATCH v2 04/11] eventdev: cleanup doxygen comments on info structure
  2024-02-02  9:24             ` Mattias Rönnblom
@ 2024-02-02 10:30               ` Bruce Richardson
  0 siblings, 0 replies; 123+ messages in thread
From: Bruce Richardson @ 2024-02-02 10:30 UTC (permalink / raw)
  To: Mattias Rönnblom
  Cc: dev, jerinj, mattias.ronnblom, abdullah.sevincer, sachin.saxena,
	hemant.agrawal, pbhagavatula, pravin.pathak

On Fri, Feb 02, 2024 at 10:24:54AM +0100, Mattias Rönnblom wrote:
> On 2024-01-31 15:37, Bruce Richardson wrote:
> > On Wed, Jan 24, 2024 at 12:51:03PM +0100, Mattias Rönnblom wrote:
> > > On 2024-01-23 10:43, Bruce Richardson wrote:
> > > > On Tue, Jan 23, 2024 at 10:35:02AM +0100, Mattias Rönnblom wrote:
> > > > > On 2024-01-19 18:43, Bruce Richardson wrote:
> > > > > > Some small rewording changes to the doxygen comments on struct
> > > > > > rte_event_dev_info.
> > > > > > 
> > > > > > Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
> > > > > > ---
> > > > > >     lib/eventdev/rte_eventdev.h | 46 ++++++++++++++++++++-----------------
> > > > > >     1 file changed, 25 insertions(+), 21 deletions(-)
> > > > > > 
> > > > > > diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
> > > > > > index 57a2791946..872f241df2 100644
> > > > > > --- a/lib/eventdev/rte_eventdev.h
> > > > > > +++ b/lib/eventdev/rte_eventdev.h
> > > > > > @@ -482,54 +482,58 @@ struct rte_event_dev_info {
> > > > > >     	const char *driver_name;	/**< Event driver name */
> > > > > >     	struct rte_device *dev;	/**< Device information */
> > > > > >     	uint32_t min_dequeue_timeout_ns;
> > > > > > -	/**< Minimum supported global dequeue timeout(ns) by this device */
> > > > > > +	/**< Minimum global dequeue timeout(ns) supported by this device */
> > > > > 
> > > > > Are we missing a bunch of "." here and in the other fields?
> > > > > 
> > > > > >     	uint32_t max_dequeue_timeout_ns;
> > > > > > -	/**< Maximum supported global dequeue timeout(ns) by this device */
> > > > > > +	/**< Maximum global dequeue timeout(ns) supported by this device */
> > > > > >     	uint32_t dequeue_timeout_ns;
> > > > > >     	/**< Configured global dequeue timeout(ns) for this device */
> > > > > >     	uint8_t max_event_queues;
> > > > > > -	/**< Maximum event_queues supported by this device */
> > > > > > +	/**< Maximum event queues supported by this device */
> > > > > >     	uint32_t max_event_queue_flows;
> > > > > > -	/**< Maximum supported flows in an event queue by this device*/
> > > > > > +	/**< Maximum number of flows within an event queue supported by this device*/
> > > > > >     	uint8_t max_event_queue_priority_levels;
> > > > > >     	/**< Maximum number of event queue priority levels by this device.
> > > > > > -	 * Valid when the device has RTE_EVENT_DEV_CAP_QUEUE_QOS capability
> > > > > > +	 * Valid when the device has RTE_EVENT_DEV_CAP_QUEUE_QOS capability.
> > > > > > +	 * The priority levels are evenly distributed between
> > > > > > +	 * @ref RTE_EVENT_DEV_PRIORITY_HIGHEST and @ref RTE_EVENT_DEV_PRIORITY_LOWEST.
> > > > > 
> > > > > This is a change of the API, in the sense it's defining something previously
> > > > > left undefined?
> > > > > 
> > > > 
> > > > Well, undefined is pretty useless for app writers, no?
> > > > However, agreed that the range between HIGHEST and LOWEST is an assumption
> > > > on my part, chosen because it matches what happens to the event priorities
> > > > which are documented in event struct as "The implementation shall normalize
> > > >    the requested priority to supported priority value" - which, while better
> > > > than nothing, does technically leave the details of how normalization
> > > > occurs up to the implementation.
> > > > 
> > > > > If you need 6 different priority levels in an app, how do you go about
> > > > > making sure you find the correct (distinct) Eventdev levels on any event
> > > > > device supporting >= 6 levels?
> > > > > 
> > > > > #define NUM_MY_LEVELS 6
> > > > > 
> > > > > #define MY_LEVEL_TO_EVENTDEV_LEVEL(my_level) (((my_level) *
> > > > > (RTE_EVENT_DEV_PRIORITY_HIGHEST-RTE_EVENT_DEV_PRIORTY_LOWEST) /
> > > > > NUM_MY_LEVELS)
> > > > > 
> > > > > This way? One would worry a bit exactly what "evenly" means, in terms of
> > > > > rounding errors. If you have an event device with 255 priority levels of
> > > > > (say) 256 levels available in the API, which two levels are the same
> > > > > priority?
> > > > 
> > > > Yes, round etc. will be an issue in cases of non-powers-of 2.
> > > > However, I think we do need to clarify this behaviour, so I'm open to
> > > > alternative suggestions as to how update this.
> > > > 
> > > 
> > > In retrospect, maybe it would have been better to just express the number of
> > > priority levels an event device supported, only allow [0, max_levels - 1] in
> > > the prio field, and leave it to the app to do the conversion/normalization.
> > > 
> > 
> > Yes, in many ways that would be better.
> > > Maybe a new <rte_eventdev.h> helper macro would at least suggest to the PMD
> > > driver implementer and the application designer how this normalization
> > > should work. Something like the above, but where NUM_MY_LEVELS is an input
> > > parameter. Would result in an integer division though, so shouldn't be used
> > > in the fast path.
> > 
> > I think it's actually ok now, having the drivers do the work, since each
> > driver can choose optimal method. For those having power-of-2 number of
> > priorities, just a shift op works best.
> > 
> 
> I had an application-usable macro in mind, not a macro for PMDs. Showing how
> app-level priority levels should map to Eventdev API-level priority levels
> would, by implication, show how event device should do the Eventdev API
> priority -> PMD level priority compression.
> 
> The event device has exactly zero freedom in choosing how to translate
> Eventdev API-level priorities to its internal priorities, or risk not
> differentiating between app-level priority levels. If an event device
> supports 128 levels, is RTE_EVENT_DEV_PRIORITY_NORMAL and
> RTE_EVENT_DEV_PRIORITY_NORMAL-1 the same PMD-level priority or not? I would
> *guess* the same, but that just an assumption, and not something that
> follows from "normalize", I think.
> 
> Anyway, this is not a problem this patch set necessarily needs to solve.
> 
Yep, a good point. Would a public macro be enough, or would it be better
for drivers to provide a function to allow the app to query the internal
priority level for an eventdev one directly?

Other alternatives:
* have an API break where we change the meaning of the priority field so
  that the priorities are given in the range of 0 - max_prios-1.
* Keep same API, but explicitly state that devices must have a power-of-2
  number of supported priorities, and hence that only the top N bits of the
  priority field will be valid (any devices with support for non-power-of-2
  nb-priorities??)
  - to simplify things this could be followed by an API change where we
    report instead of priority levels, number of priority bits valid
  - if changing API for this anyway, could reduce size of event priority
    field - 256 event priority levels seems a lot! Cutting the field down
    to 4 bits, or even 3, might make sense. [It would also allow us to
    potentially expand the impl_opaque field up to 12 bits, allowing more
    than 256 outstanding events on a port, if using it for sequence numbers,
    or more useful metadata possibilities for devices/drivers]

Not something for this patchset though, as you say.

/Bruce

^ permalink raw reply	[flat|nested] 123+ messages in thread

* Re: [PATCH v2 11/11] eventdev: RFC clarify docs on event object fields
  2024-02-02  9:45         ` Mattias Rönnblom
@ 2024-02-02 10:32           ` Bruce Richardson
  0 siblings, 0 replies; 123+ messages in thread
From: Bruce Richardson @ 2024-02-02 10:32 UTC (permalink / raw)
  To: Mattias Rönnblom
  Cc: jerinj, dev, mattias.ronnblom, abdullah.sevincer, sachin.saxena,
	hemant.agrawal, pbhagavatula, pravin.pathak

On Fri, Feb 02, 2024 at 10:45:34AM +0100, Mattias Rönnblom wrote:
> On 2024-02-01 18:02, Bruce Richardson wrote:
> > On Wed, Jan 24, 2024 at 12:34:50PM +0100, Mattias Rönnblom wrote:
> > > On 2024-01-19 18:43, Bruce Richardson wrote:
> > > > Clarify the meaning of the NEW, FORWARD and RELEASE event types.
> > > > For the fields in "rte_event" struct, enhance the comments on each to
> > > > clarify the field's use, and whether it is preserved between enqueue and
> > > > dequeue, and it's role, if any, in scheduling.
> > > > 
> > > > Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
> > > > ---
> > > > 
> > > > As with the previous patch, please review this patch to ensure that the
> > > > expected semantics of the various event types and event fields have not
> > > > changed in an unexpected way.
> > > > ---
> > > >    lib/eventdev/rte_eventdev.h | 105 ++++++++++++++++++++++++++----------
> > > >    1 file changed, 77 insertions(+), 28 deletions(-)
> > > > 
> > > > diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
> > > > index cb13602ffb..4eff1c4958 100644
> > > > --- a/lib/eventdev/rte_eventdev.h
> > > > +++ b/lib/eventdev/rte_eventdev.h
> > <snip>
> > 
> > > >    /**
> > > > @@ -1473,53 +1475,100 @@ struct rte_event {
> > > >    			/**< Targeted flow identifier for the enqueue and
> > > >    			 * dequeue operation.
> > > >    			 * The value must be in the range of
> > > > -			 * [0, nb_event_queue_flows - 1] which
> > > > +			 * [0, @ref rte_event_dev_config.nb_event_queue_flows - 1] which
> > > 
> > > The same comment as I had before about ranges for unsigned types.
> > > 
> > Actually, is this correct, does a range actually apply here?
> > 
> > I thought that the number of queue flows supported was a guide as to how
> > internal HW resources were to be allocated, and that the flow_id was always
> > a 20-bit value, where it was up to the scheduler to work out how to map
> > that to internal atomic locks (when combined with queue ids etc.). It
> > should not be up to the app to have to do the range limiting itself!
> > 
> 
> Indeed, I also operated under this belief, which is reflected in DSW, which
> just takes the flow_id and (pseudo-)hash+mask it into the appropriate range.
> 
> Leaving it to the app allows app-level attempts to avoid collisions between
> large flows, I guess. Not sure I think apps will (or even should) really do
> this.

I'm just going to drop this restriction from v3.

^ permalink raw reply	[flat|nested] 123+ messages in thread

* Re: [PATCH v2 03/11] eventdev: update documentation on device capability flags
  2024-02-02  8:58         ` Mattias Rönnblom
@ 2024-02-02 11:20           ` Bruce Richardson
  0 siblings, 0 replies; 123+ messages in thread
From: Bruce Richardson @ 2024-02-02 11:20 UTC (permalink / raw)
  To: Mattias Rönnblom
  Cc: dev, jerinj, mattias.ronnblom, abdullah.sevincer, sachin.saxena,
	hemant.agrawal, pbhagavatula, pravin.pathak

On Fri, Feb 02, 2024 at 09:58:25AM +0100, Mattias Rönnblom wrote:
> On 2024-01-31 15:09, Bruce Richardson wrote:
> > On Tue, Jan 23, 2024 at 10:18:53AM +0100, Mattias Rönnblom wrote:
> > > On 2024-01-19 18:43, Bruce Richardson wrote:
> > > > Update the device capability docs, to:
> > > > 
> > > > * include more cross-references
> > > > * split longer text into paragraphs, in most cases with each flag having
> > > >     a single-line summary at the start of the doc block
> > > > * general comment rewording and clarification as appropriate
> > > > 
> > > > Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
> > > > ---
> > > >    lib/eventdev/rte_eventdev.h | 130 ++++++++++++++++++++++++++----------
> > > >    1 file changed, 93 insertions(+), 37 deletions(-)
> > > > 
> > <snip>
> > > >     * If this capability is not set, the queue only supports events of the
> > > > - *  *RTE_SCHED_TYPE_* type that it was created with.
> > > > + * *RTE_SCHED_TYPE_* type that it was created with.
> > > > + * Any events of other types scheduled to the queue will handled in an
> > > > + * implementation-dependent manner. They may be dropped by the
> > > > + * event device, or enqueued with the scheduling type adjusted to the
> > > > + * correct/supported value.
> > > 
> > > Having the application setting sched_type when it was already set at the
> > > level of the queue never made sense to me.
> > > 
> > > I can't see any reasons why this field shouldn't be ignored by the event
> > > device on non-RTE_EVENT_QUEUE_CFG_ALL_TYPES queues.
> > > 
> > > If the behavior is indeed undefined, I think it's better to just say
> > > "undefined" rather than the above speculation.
> > > 
> > 
> > Updating in v3 to just say it's undefined.
> > 
> > > >     *
> > > > - * @see RTE_SCHED_TYPE_* values
> > <snip>
> > > >    #define RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR (1ULL << 11)
> > > >    /**< Event device is capable of changing the queue attributes at runtime i.e
> > > > - * after rte_event_queue_setup() or rte_event_start() call sequence. If this
> > > > - * flag is not set, eventdev queue attributes can only be configured during
> > > > + * after rte_event_queue_setup() or rte_event_dev_start() call sequence.
> > > > + *
> > > > + * If this flag is not set, eventdev queue attributes can only be configured during
> > > >     * rte_event_queue_setup().
> > > 
> > > "event queue" or just "queue".
> > > 
> > Ack.
> > 
> > > > + *
> > > > + * @see rte_event_queue_setup
> > > >     */
> > > >    #define RTE_EVENT_DEV_CAP_PROFILE_LINK (1ULL << 12)
> > > > -/**< Event device is capable of supporting multiple link profiles per event port
> > > > - * i.e., the value of `rte_event_dev_info::max_profiles_per_port` is greater
> > > > - * than one.
> > > > +/**< Event device is capable of supporting multiple link profiles per event port.
> > > > + *
> > > > + *
> > > > + * When set, the value of `rte_event_dev_info::max_profiles_per_port` is greater
> > > > + * than one, and multiple profiles may be configured and then switched at runtime.
> > > > + * If not set, only a single profile may be configured, which may itself be
> > > > + * runtime adjustable (if @ref RTE_EVENT_DEV_CAP_RUNTIME_PORT_LINK is set).
> > > > + *
> > > > + * @see rte_event_port_profile_links_set rte_event_port_profile_links_get
> > > > + * @see rte_event_port_profile_switch
> > > > + * @see RTE_EVENT_DEV_CAP_RUNTIME_PORT_LINK
> > > >     */
> > > >    /* Event device priority levels */
> > > >    #define RTE_EVENT_DEV_PRIORITY_HIGHEST   0
> > > > -/**< Highest priority expressed across eventdev subsystem
> > > > +/**< Highest priority expressed across eventdev subsystem.
> > > 
> > > "The highest priority an event device may support."
> > > or
> > > "The highest priority any event device may support."
> > > 
> > > Maybe this is a further improvement, beyond punctuation? "across eventdev
> > > subsystem" sounds awkward.
> > > 
> > 
> > Still not very clear. Talking about device support implies that its
> > possible some devices may not support it. How about:
> >
> > "highest priority level for events and queues".
> > 
> 
> Sounds good. I guess it's totally, 100% obvious highest means most urgent?
> 
> Otherwise, "highest (i.e., most urgent) priority level for events queues"

I think it's clear enough that highest priority is most urgent.

^ permalink raw reply	[flat|nested] 123+ messages in thread

* Re: [PATCH v2 11/11] eventdev: RFC clarify docs on event object fields
  2024-02-02  9:38         ` Mattias Rönnblom
@ 2024-02-02 11:33           ` Bruce Richardson
  2024-02-02 12:02             ` Bruce Richardson
  0 siblings, 1 reply; 123+ messages in thread
From: Bruce Richardson @ 2024-02-02 11:33 UTC (permalink / raw)
  To: Mattias Rönnblom
  Cc: dev, jerinj, mattias.ronnblom, abdullah.sevincer, sachin.saxena,
	hemant.agrawal, pbhagavatula, pravin.pathak

On Fri, Feb 02, 2024 at 10:38:10AM +0100, Mattias Rönnblom wrote:
> On 2024-02-01 17:59, Bruce Richardson wrote:
> > On Wed, Jan 24, 2024 at 12:34:50PM +0100, Mattias Rönnblom wrote:
> > > On 2024-01-19 18:43, Bruce Richardson wrote:
> > > > Clarify the meaning of the NEW, FORWARD and RELEASE event types.
> > > > For the fields in "rte_event" struct, enhance the comments on each to
> > > > clarify the field's use, and whether it is preserved between enqueue and
> > > > dequeue, and its role, if any, in scheduling.
> > > > 
> > > > Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
> > > > ---
> > > > 
<snip>
> > > Is it the normalized or unnormalized value that is preserved?
> > > 
> > Very good point. It's the normalized & then denormalized version that is
> > guaranteed to be preserved, I suspect. SW eventdevs probably preserve
> > as-is, but HW eventdevs may lose precision. Rather than making this
> > "implementation defined" or "not preserved" which would be annoying for
> > apps, I think, I'm going to document this as "preserved, but with possible
> > loss of precision".
> > 
> 
> This makes me again think it may be worth noting that Eventdev -> API
> priority normalization is (event.priority * PMD_LEVELS) / EVENTDEV_LEVELS
> (rounded down) - assuming that's how it's supposed to be done - or something
> to that effect.
> 
Following my comment on the thread on the other patch about looking at
numbers of bits of priority being valid, I did a quick check of the evdev PMDs
by using grep for "max_event_priority_levels" in each driver. According to
that (and resolving some #defines), I see:

0 - dpaa, dpaa2
1 - cnxk, dsw, octeontx, opdl
4 - sw
8 - dlb2, skeleton

So it looks like switching to a bit-scheme is workable, where we measure
supported event levels in powers-of-two only. [And we can cut down priority
fields if we like].

Question for confirmation. For cases where the eventdev does not support
per-event prioritization, I suppose we should say that the priority field
is not preserved, as well as being ignored?

/Bruce


* Re: [PATCH v2 11/11] eventdev: RFC clarify docs on event object fields
  2024-02-02 11:33           ` Bruce Richardson
@ 2024-02-02 12:02             ` Bruce Richardson
  0 siblings, 0 replies; 123+ messages in thread
From: Bruce Richardson @ 2024-02-02 12:02 UTC (permalink / raw)
  To: Mattias Rönnblom
  Cc: dev, jerinj, mattias.ronnblom, abdullah.sevincer, sachin.saxena,
	hemant.agrawal, pbhagavatula, pravin.pathak

On Fri, Feb 02, 2024 at 11:33:19AM +0000, Bruce Richardson wrote:
> On Fri, Feb 02, 2024 at 10:38:10AM +0100, Mattias Rönnblom wrote:
> > On 2024-02-01 17:59, Bruce Richardson wrote:
> > > On Wed, Jan 24, 2024 at 12:34:50PM +0100, Mattias Rönnblom wrote:
> > > > On 2024-01-19 18:43, Bruce Richardson wrote:
> > > > > Clarify the meaning of the NEW, FORWARD and RELEASE event types.
> > > > > For the fields in "rte_event" struct, enhance the comments on each to
> > > > > clarify the field's use, and whether it is preserved between enqueue and
> > > > > dequeue, and its role, if any, in scheduling.
> > > > > 
> > > > > Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
> > > > > ---
> > > > > 
> <snip>
> > > > Is it the normalized or unnormalized value that is preserved?
> > > > 
> > > Very good point. It's the normalized & then denormalized version that is
> > > guaranteed to be preserved, I suspect. SW eventdevs probably preserve
> > > as-is, but HW eventdevs may lose precision. Rather than making this
> > > "implementation defined" or "not preserved" which would be annoying for
> > > apps, I think, I'm going to document this as "preserved, but with possible
> > > loss of precision".
> > > 
> > 
> > This makes me again think it may be worth noting that Eventdev -> API
> > priority normalization is (event.priority * PMD_LEVELS) / EVENTDEV_LEVELS
> > (rounded down) - assuming that's how it's supposed to be done - or something
> > to that effect.
> > 
> Following my comment on the thread on the other patch about looking at
> numbers of bits of priority being valid, I did a quick check of the evdev PMDs
> by using grep for "max_event_priority_levels" in each driver. According to
> that (and resolving some #defines), I see:
> 
> 0 - dpaa, dpaa2
> 1 - cnxk, dsw, octeontx, opdl
> 4 - sw
> 8 - dlb2, skeleton
> 
> So it looks like switching to a bit-scheme is workable, where we measure
> supported event levels in powers-of-two only. [And we can cut down priority
> fields if we like].
> 
And just for reference, the advertised values for
max_event_queue_priority_levels are:

1 - dsw, opdl
8 - cnxk, dlb2, dpaa, dpaa2, octeontx, skeleton
255 - sw [though this should really be 256, it's an off-by-one error due to
          the range of uint8_t type. SW evdev just sorts queues by priority
          using the whole priority value specified.]

So I think we can treat queue priority similarly to event priority - giving
the number of bits which are valid. Also, if we decide to cut the event
priority level range to e.g. 0-15, I think we can do the same for the queue
priority levels, so that the ranges are similar, and then we can adjust the
min-max definitions to match.

/Bruce


* [PATCH v3 00/11] improve eventdev API specification/documentation
  2024-01-19 17:43 ` [PATCH v2 00/11] improve eventdev API specification/documentation Bruce Richardson
                     ` (10 preceding siblings ...)
  2024-01-19 17:43   ` [PATCH v2 11/11] eventdev: RFC clarify docs on event object fields Bruce Richardson
@ 2024-02-02 12:39   ` Bruce Richardson
  2024-02-02 12:39     ` [PATCH v3 01/11] eventdev: improve doxygen introduction text Bruce Richardson
                       ` (10 more replies)
  2024-02-21 10:32   ` [PATCH v4 00/12] improve eventdev API specification/documentation Bruce Richardson
  12 siblings, 11 replies; 123+ messages in thread
From: Bruce Richardson @ 2024-02-02 12:39 UTC (permalink / raw)
  To: dev, jerinj, mattias.ronnblom
  Cc: abdullah.sevincer, sachin.saxena, hemant.agrawal, pbhagavatula,
	pravin.pathak, Bruce Richardson

This patchset makes rewording improvements to the eventdev doxygen
documentation to try and ensure that it is as clear as possible,
describes the implementation as accurately as possible, and is
consistent within itself.

Most changes are just minor rewordings, along with plenty of changes to
change references into doxygen links/cross-references.

In tightening up the definitions, there may be subtle changes in meaning
which should be checked for carefully by reviewers. Where there was
ambiguity, the behaviour of existing code is documented so as to avoid
breaking existing apps.

V3:
* major cleanup following review by Mattias and on-list discussions
* old patch 7 split in two and merged with other changes in the same
  area rather than being standalone.
* new patch 11 added at end of series.

V2:
* additional cleanup and changes
* remove "escaped" accidental change to .c file

Bruce Richardson (11):
  eventdev: improve doxygen introduction text
  eventdev: move text on driver internals to proper section
  eventdev: update documentation on device capability flags
  eventdev: cleanup doxygen comments on info structure
  eventdev: improve function documentation for query fns
  eventdev: improve doxygen comments on configure struct
  eventdev: improve doxygen comments on config fns
  eventdev: improve doxygen comments for control APIs
  eventdev: improve comments on scheduling types
  eventdev: clarify docs on event object fields and op types
  eventdev: drop comment for anon union from doxygen

 lib/eventdev/rte_eventdev.h | 952 +++++++++++++++++++++++-------------
 1 file changed, 620 insertions(+), 332 deletions(-)

--
2.40.1



* [PATCH v3 01/11] eventdev: improve doxygen introduction text
  2024-02-02 12:39   ` [PATCH v3 00/11] improve eventdev API specification/documentation Bruce Richardson
@ 2024-02-02 12:39     ` Bruce Richardson
  2024-02-07 10:14       ` Jerin Jacob
  2024-02-02 12:39     ` [PATCH v3 02/11] eventdev: move text on driver internals to proper section Bruce Richardson
                       ` (9 subsequent siblings)
  10 siblings, 1 reply; 123+ messages in thread
From: Bruce Richardson @ 2024-02-02 12:39 UTC (permalink / raw)
  To: dev, jerinj, mattias.ronnblom
  Cc: abdullah.sevincer, sachin.saxena, hemant.agrawal, pbhagavatula,
	pravin.pathak, Bruce Richardson

Make some textual improvements to the introduction to eventdev and event
devices in the eventdev header file. This text appears in the doxygen
output for the header file, and introduces the key concepts, for
example: events, event devices, queues, ports and scheduling.

This patch makes the following improvements:
* small textual fixups, e.g. correcting use of singular/plural
* rewrites of some sentences to improve clarity
* using doxygen markdown to split the whole large block up into
  sections, thereby making it easier to read.

No large-scale changes are made, and blocks are not reordered

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>

---
V3: reworked following feedback from Mattias
---
 lib/eventdev/rte_eventdev.h | 132 ++++++++++++++++++++++--------------
 1 file changed, 81 insertions(+), 51 deletions(-)

diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
index ec9b02455d..a741832e8e 100644
--- a/lib/eventdev/rte_eventdev.h
+++ b/lib/eventdev/rte_eventdev.h
@@ -12,25 +12,33 @@
  * @file
  *
  * RTE Event Device API
+ * ====================
  *
- * In a polling model, lcores poll ethdev ports and associated rx queues
- * directly to look for packet. In an event driven model, by contrast, lcores
- * call the scheduler that selects packets for them based on programmer
- * specified criteria. Eventdev library adds support for event driven
- * programming model, which offer applications automatic multicore scaling,
- * dynamic load balancing, pipelining, packet ingress order maintenance and
- * synchronization services to simplify application packet processing.
+ * In a traditional run-to-completion application model, lcores pick up packets
+ * from Ethdev ports and associated RX queues, run the packet processing to completion,
+ * and enqueue the completed packets to a TX queue. NIC-level receive-side scaling (RSS)
+ * may be used to balance the load across multiple CPU cores.
+ *
+ * In contrast, in an event-driven model, as supported by this "eventdev" library,
+ * incoming packets are fed into an event device, which schedules those packets across
+ * the available lcores, in accordance with its configuration.
+ * This event-driven programming model offers applications automatic multicore scaling,
+ * dynamic load balancing, pipelining, packet order maintenance, synchronization,
+ * and prioritization/quality of service.
  *
  * The Event Device API is composed of two parts:
  *
  * - The application-oriented Event API that includes functions to setup
  *   an event device (configure it, setup its queues, ports and start it), to
- *   establish the link between queues to port and to receive events, and so on.
+ *   establish the links between queues and ports to receive events, and so on.
  *
  * - The driver-oriented Event API that exports a function allowing
- *   an event poll Mode Driver (PMD) to simultaneously register itself as
+ *   an event poll Mode Driver (PMD) to register itself as
  *   an event device driver.
  *
+ * Application-oriented Event API
+ * ------------------------------
+ *
  * Event device components:
  *
  *                     +-----------------+
@@ -75,27 +83,39 @@
  *            |                                                           |
  *            +-----------------------------------------------------------+
  *
- * Event device: A hardware or software-based event scheduler.
+ * **Event device**: A hardware or software-based event scheduler.
  *
- * Event: A unit of scheduling that encapsulates a packet or other datatype
- * like SW generated event from the CPU, Crypto work completion notification,
- * Timer expiry event notification etc as well as metadata.
- * The metadata includes flow ID, scheduling type, event priority, event_type,
- * sub_event_type etc.
+ * **Event**: Represents an item of work and is the smallest unit of scheduling.
+ * An event carries metadata, such as queue ID, scheduling type, and event priority,
+ * and data such as one or more packets or other kinds of buffers.
+ * Some examples of events are:
+ * - a software-generated item of work originating from a lcore,
+ *   perhaps carrying a packet to be processed,
+ * - a crypto work completion notification
+ * - a timer expiry notification.
  *
- * Event queue: A queue containing events that are scheduled by the event dev.
+ * **Event queue**: A queue containing events that are scheduled by the event device.
  * An event queue contains events of different flows associated with scheduling
  * types, such as atomic, ordered, or parallel.
+ * Each event given to an event device must have a valid event queue id field in the metadata,
+ * to specify on which event queue in the device the event must be placed,
+ * for later scheduling.
  *
- * Event port: An application's interface into the event dev for enqueue and
+ * **Event port**: An application's interface into the event dev for enqueue and
  * dequeue operations. Each event port can be linked with one or more
  * event queues for dequeue operations.
- *
- * By default, all the functions of the Event Device API exported by a PMD
- * are lock-free functions which assume to not be invoked in parallel on
- * different logical cores to work on the same target object. For instance,
- * the dequeue function of a PMD cannot be invoked in parallel on two logical
- * cores to operates on same  event port. Of course, this function
+ * Enqueue and dequeue from a port are not thread-safe, and the expected use-case is
+ * that each port is polled by only a single lcore. [If this is not the case,
+ * a suitable synchronization mechanism should be used to prevent simultaneous
+ * access from multiple lcores.]
+ * To schedule events to an lcore, the event device will schedule them to the event port(s)
+ * being polled by that lcore.
+ *
+ * *NOTE*: By default, all the functions of the Event Device API exported by a PMD
+ * are non-thread-safe functions, which must not be invoked on the same object in parallel on
+ * different logical cores.
+ * For instance, the dequeue function of a PMD cannot be invoked in parallel on two logical
+ * cores to operate on the same event port. Of course, this function
  * can be invoked in parallel by different logical cores on different ports.
  * It is the responsibility of the upper level application to enforce this rule.
  *
@@ -107,22 +127,19 @@
  *
  * Event devices are dynamically registered during the PCI/SoC device probing
  * phase performed at EAL initialization time.
- * When an Event device is being probed, a *rte_event_dev* structure and
- * a new device identifier are allocated for that device. Then, the
- * event_dev_init() function supplied by the Event driver matching the probed
- * device is invoked to properly initialize the device.
+ * When an Event device is being probed, an *rte_event_dev* structure is allocated
+ * for it and the event_dev_init() function supplied by the Event driver
+ * is invoked to properly initialize the device.
  *
- * The role of the device init function consists of resetting the hardware or
- * software event driver implementations.
+ * The role of the device init function is to reset the device hardware or
+ * to initialize the software event driver implementation.
  *
- * If the device init operation is successful, the correspondence between
- * the device identifier assigned to the new device and its associated
- * *rte_event_dev* structure is effectively registered.
- * Otherwise, both the *rte_event_dev* structure and the device identifier are
- * freed.
+ * If the device init operation is successful, the device is assigned a device
+ * id (dev_id) for application use.
+ * Otherwise, the *rte_event_dev* structure is freed.
  *
  * The functions exported by the application Event API to setup a device
- * designated by its device identifier must be invoked in the following order:
+ * must be invoked in the following order:
  *     - rte_event_dev_configure()
  *     - rte_event_queue_setup()
  *     - rte_event_port_setup()
@@ -130,10 +147,15 @@
  *     - rte_event_dev_start()
  *
  * Then, the application can invoke, in any order, the functions
- * exported by the Event API to schedule events, dequeue events, enqueue events,
- * change event queue(s) to event port [un]link establishment and so on.
- *
- * Application may use rte_event_[queue/port]_default_conf_get() to get the
+ * exported by the Event API to dequeue events, enqueue events,
+ * and link and unlink event queue(s) to event ports.
+ *
+ * Before configuring a device, an application should call rte_event_dev_info_get()
+ * to determine the capabilities of the event device, and any queue or port
+ * limits of that device. The parameters set in the various device configuration
+ * structures may need to be adjusted based on the max values provided in the
+ * device information structure returned from the info_get API.
+ * An application may use rte_event_[queue/port]_default_conf_get() to get the
  * default configuration to set up an event queue or event port by
  * overriding few default values.
  *
@@ -145,7 +167,11 @@
  * when the device is stopped.
  *
  * Finally, an application can close an Event device by invoking the
- * rte_event_dev_close() function.
+ * rte_event_dev_close() function. Once closed, a device cannot be
+ * reconfigured or restarted.
+ *
+ * Driver-Oriented Event API
+ * -------------------------
  *
  * Each function of the application Event API invokes a specific function
  * of the PMD that controls the target device designated by its device
@@ -163,11 +189,14 @@
  * performs an indirect invocation of the corresponding driver function
  * supplied in the *event_dev_ops* structure of the *rte_event_dev* structure.
  *
- * For performance reasons, the address of the fast-path functions of the
- * Event driver is not contained in the *event_dev_ops* structure.
+ * For performance reasons, the addresses of the fast-path functions of the
+ * event driver are not contained in the *event_dev_ops* structure.
  * Instead, they are directly stored at the beginning of the *rte_event_dev*
  * structure to avoid an extra indirect memory access during their invocation.
  *
+ * Event Enqueue, Dequeue and Scheduling
+ * -------------------------------------
+ *
  * RTE event device drivers do not use interrupts for enqueue or dequeue
  * operation. Instead, Event drivers export Poll-Mode enqueue and dequeue
  * functions to applications.
@@ -179,21 +208,22 @@
  * crypto work completion notification etc
  *
  * The *dequeue* operation gets one or more events from the event ports.
- * The application process the events and send to downstream event queue through
- * rte_event_enqueue_burst() if it is an intermediate stage of event processing,
- * on the final stage, the application may use Tx adapter API for maintaining
- * the ingress order and then send the packet/event on the wire.
+ * The application processes the events and sends them to a downstream event queue through
+ * rte_event_enqueue_burst(), if it is an intermediate stage of event processing.
+ * On the final stage of processing, the application may use the Tx adapter API for maintaining
+ * the event ingress order while sending the packet/event on the wire via NIC Tx.
  *
  * The point at which events are scheduled to ports depends on the device.
  * For hardware devices, scheduling occurs asynchronously without any software
  * intervention. Software schedulers can either be distributed
  * (each worker thread schedules events to its own port) or centralized
  * (a dedicated thread schedules to all ports). Distributed software schedulers
- * perform the scheduling in rte_event_dequeue_burst(), whereas centralized
- * scheduler logic need a dedicated service core for scheduling.
- * The RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED capability flag is not set
- * indicates the device is centralized and thus needs a dedicated scheduling
- * thread that repeatedly calls software specific scheduling function.
+ * perform the scheduling inside the enqueue or dequeue functions, whereas centralized
+ * software schedulers need a dedicated service core for scheduling.
+ * The absence of the RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED capability flag
+ * indicates that the device is centralized and thus needs a dedicated scheduling
+ * thread (generally an RTE service that should be mapped to one or more service cores)
+ * that repeatedly calls the software specific scheduling function.
  *
  * An event driven worker thread has following typical workflow on fastpath:
  * \code{.c}
-- 
2.40.1



* [PATCH v3 02/11] eventdev: move text on driver internals to proper section
  2024-02-02 12:39   ` [PATCH v3 00/11] improve eventdev API specification/documentation Bruce Richardson
  2024-02-02 12:39     ` [PATCH v3 01/11] eventdev: improve doxygen introduction text Bruce Richardson
@ 2024-02-02 12:39     ` Bruce Richardson
  2024-02-02 12:39     ` [PATCH v3 03/11] eventdev: update documentation on device capability flags Bruce Richardson
                       ` (8 subsequent siblings)
  10 siblings, 0 replies; 123+ messages in thread
From: Bruce Richardson @ 2024-02-02 12:39 UTC (permalink / raw)
  To: dev, jerinj, mattias.ronnblom
  Cc: abdullah.sevincer, sachin.saxena, hemant.agrawal, pbhagavatula,
	pravin.pathak, Bruce Richardson

Inside the doxygen introduction text, some internal details of how
eventdev works was mixed in with application-relevant details. Move
these details on probing etc. to the driver-relevant section.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
 lib/eventdev/rte_eventdev.h | 32 ++++++++++++++++----------------
 1 file changed, 16 insertions(+), 16 deletions(-)

diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
index a741832e8e..37493464f9 100644
--- a/lib/eventdev/rte_eventdev.h
+++ b/lib/eventdev/rte_eventdev.h
@@ -122,22 +122,6 @@
  * In all functions of the Event API, the Event device is
  * designated by an integer >= 0 named the device identifier *dev_id*
  *
- * At the Event driver level, Event devices are represented by a generic
- * data structure of type *rte_event_dev*.
- *
- * Event devices are dynamically registered during the PCI/SoC device probing
- * phase performed at EAL initialization time.
- * When an Event device is being probed, an *rte_event_dev* structure is allocated
- * for it and the event_dev_init() function supplied by the Event driver
- * is invoked to properly initialize the device.
- *
- * The role of the device init function is to reset the device hardware or
- * to initialize the software event driver implementation.
- *
- * If the device init operation is successful, the device is assigned a device
- * id (dev_id) for application use.
- * Otherwise, the *rte_event_dev* structure is freed.
- *
  * The functions exported by the application Event API to setup a device
  * must be invoked in the following order:
  *     - rte_event_dev_configure()
@@ -173,6 +157,22 @@
  * Driver-Oriented Event API
  * -------------------------
  *
+ * At the Event driver level, Event devices are represented by a generic
+ * data structure of type *rte_event_dev*.
+ *
+ * Event devices are dynamically registered during the PCI/SoC device probing
+ * phase performed at EAL initialization time.
+ * When an Event device is being probed, an *rte_event_dev* structure is allocated
+ * for it and the event_dev_init() function supplied by the Event driver
+ * is invoked to properly initialize the device.
+ *
+ * The role of the device init function is to reset the device hardware or
+ * to initialize the software event driver implementation.
+ *
+ * If the device init operation is successful, the device is assigned a device
+ * id (dev_id) for application use.
+ * Otherwise, the *rte_event_dev* structure is freed.
+ *
  * Each function of the application Event API invokes a specific function
  * of the PMD that controls the target device designated by its device
  * identifier.
-- 
2.40.1



* [PATCH v3 03/11] eventdev: update documentation on device capability flags
  2024-02-02 12:39   ` [PATCH v3 00/11] improve eventdev API specification/documentation Bruce Richardson
  2024-02-02 12:39     ` [PATCH v3 01/11] eventdev: improve doxygen introduction text Bruce Richardson
  2024-02-02 12:39     ` [PATCH v3 02/11] eventdev: move text on driver internals to proper section Bruce Richardson
@ 2024-02-02 12:39     ` Bruce Richardson
  2024-02-07 10:30       ` Jerin Jacob
  2024-02-02 12:39     ` [PATCH v3 04/11] eventdev: cleanup doxygen comments on info structure Bruce Richardson
                       ` (7 subsequent siblings)
  10 siblings, 1 reply; 123+ messages in thread
From: Bruce Richardson @ 2024-02-02 12:39 UTC (permalink / raw)
  To: dev, jerinj, mattias.ronnblom
  Cc: abdullah.sevincer, sachin.saxena, hemant.agrawal, pbhagavatula,
	pravin.pathak, Bruce Richardson

Update the device capability docs, to:

* include more cross-references
* split longer text into paragraphs, in most cases with each flag having
  a single-line summary at the start of the doc block
* general comment rewording and clarification as appropriate

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
V3: Updated following feedback from Mattias
---
 lib/eventdev/rte_eventdev.h | 130 +++++++++++++++++++++++++-----------
 1 file changed, 92 insertions(+), 38 deletions(-)

diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
index 37493464f9..a33024479d 100644
--- a/lib/eventdev/rte_eventdev.h
+++ b/lib/eventdev/rte_eventdev.h
@@ -253,143 +253,197 @@ struct rte_event;
 /* Event device capability bitmap flags */
 #define RTE_EVENT_DEV_CAP_QUEUE_QOS           (1ULL << 0)
 /**< Event scheduling prioritization is based on the priority and weight
- * associated with each event queue. Events from a queue with highest priority
- * is scheduled first. If the queues are of same priority, weight of the queues
+ * associated with each event queue.
+ *
+ * Events from a queue with highest priority
+ * are scheduled first. If the queues are of same priority, weight of the queues
  * are considered to select a queue in a weighted round robin fashion.
  * Subsequent dequeue calls from an event port could see events from the same
  * event queue, if the queue is configured with an affinity count. Affinity
  * count is the number of subsequent dequeue calls, in which an event port
  * should use the same event queue if the queue is non-empty
  *
+ * NOTE: A device may use both queue prioritization and event prioritization
+ * (@ref RTE_EVENT_DEV_CAP_EVENT_QOS capability) when making packet scheduling decisions.
+ *
  *  @see rte_event_queue_setup(), rte_event_queue_attr_set()
  */
 #define RTE_EVENT_DEV_CAP_EVENT_QOS           (1ULL << 1)
 /**< Event scheduling prioritization is based on the priority associated with
- *  each event. Priority of each event is supplied in *rte_event* structure
+ *  each event.
+ *
+ *  Priority of each event is supplied in *rte_event* structure
  *  on each enqueue operation.
+ *  If this capability is not set, the priority field of the event structure
+ *  is ignored for each event.
  *
+ * NOTE: A device may use both queue prioritization (@ref RTE_EVENT_DEV_CAP_QUEUE_QOS capability)
+ * and event prioritization when making packet scheduling decisions.
+ *
  *  @see rte_event_enqueue_burst()
  */
 #define RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED   (1ULL << 2)
 /**< Event device operates in distributed scheduling mode.
+ *
  * In distributed scheduling mode, event scheduling happens in HW or
- * rte_event_dequeue_burst() or the combination of these two.
+ * rte_event_dequeue_burst() / rte_event_enqueue_burst() or the combination of these two.
  * If the flag is not set then eventdev is centralized and thus needs a
  * dedicated service core that acts as a scheduling thread .
  *
- * @see rte_event_dequeue_burst()
+ * @see rte_event_dev_service_id_get
  */
 #define RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES     (1ULL << 3)
 /**< Event device is capable of enqueuing events of any type to any queue.
- * If this capability is not set, the queue only supports events of the
- *  *RTE_SCHED_TYPE_* type that it was created with.
  *
- * @see RTE_SCHED_TYPE_* values
+ * If this capability is not set, each queue only supports events of the
+ * *RTE_SCHED_TYPE_* type that it was created with.
+ * The behaviour when events of other scheduling types are sent to the queue is
+ * currently undefined.
+ *
+ * @see rte_event_enqueue_burst
+ * @see RTE_SCHED_TYPE_ATOMIC RTE_SCHED_TYPE_ORDERED RTE_SCHED_TYPE_PARALLEL
  */
 #define RTE_EVENT_DEV_CAP_BURST_MODE          (1ULL << 4)
 /**< Event device is capable of operating in burst mode for enqueue(forward,
- * release) and dequeue operation. If this capability is not set, application
- * still uses the rte_event_dequeue_burst() and rte_event_enqueue_burst() but
- * PMD accepts only one event at a time.
+ * release) and dequeue operation.
+ *
+ * If this capability is not set, application
+ * can still use the rte_event_dequeue_burst() and rte_event_enqueue_burst() but
+ * PMD accepts or returns only one event at a time.
  *
  * @see rte_event_dequeue_burst() rte_event_enqueue_burst()
  */
 #define RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE    (1ULL << 5)
 /**< Event device ports support disabling the implicit release feature, in
  * which the port will release all unreleased events in its dequeue operation.
+ *
  * If this capability is set and the port is configured with implicit release
  * disabled, the application is responsible for explicitly releasing events
- * using either the RTE_EVENT_OP_FORWARD or the RTE_EVENT_OP_RELEASE event
+ * using either the @ref RTE_EVENT_OP_FORWARD or the @ref RTE_EVENT_OP_RELEASE event
  * enqueue operations.
  *
  * @see rte_event_dequeue_burst() rte_event_enqueue_burst()
  */
 
 #define RTE_EVENT_DEV_CAP_NONSEQ_MODE         (1ULL << 6)
-/**< Event device is capable of operating in none sequential mode. The path
- * of the event is not necessary to be sequential. Application can change
- * the path of event at runtime. If the flag is not set, then event each event
- * will follow a path from queue 0 to queue 1 to queue 2 etc. If the flag is
- * set, events may be sent to queues in any order. If the flag is not set, the
- * eventdev will return an error when the application enqueues an event for a
+/**< Event device is capable of operating in non-sequential mode.
+ *
+ * The path of the event need not be sequential. The application can change
+ * the path of an event at runtime, and events may be sent to queues in any order.
+ *
+ * If the flag is not set, then each event will follow a path from queue 0
+ * to queue 1 to queue 2 etc.
+ * The eventdev will return an error when the application enqueues an event for a
  * qid which is not the next in the sequence.
  */
 
 #define RTE_EVENT_DEV_CAP_RUNTIME_PORT_LINK   (1ULL << 7)
-/**< Event device is capable of configuring the queue/port link at runtime.
+/**< Event device is capable of reconfiguring the queue/port link at runtime.
+ *
  * If the flag is not set, the eventdev queue/port link is only can be
- * configured during  initialization.
+ * configured during  initialization, or by stopping the device and
+ * then later restarting it after reconfiguration.
+ *
+ * @see rte_event_port_link rte_event_port_unlink
  */
 
 #define RTE_EVENT_DEV_CAP_MULTIPLE_QUEUE_PORT (1ULL << 8)
-/**< Event device is capable of setting up the link between multiple queue
- * with single port. If the flag is not set, the eventdev can only map a
- * single queue to each port or map a single queue to many port.
+/**< Event device is capable of setting up links between multiple queues and a single port.
+ *
+ * If the flag is not set, each port may only be linked to a single queue, and
+ * so can only receive events from that queue.
+ * However, each queue may be linked to multiple ports.
+ *
+ * @see rte_event_port_link()
  */
 
 #define RTE_EVENT_DEV_CAP_CARRY_FLOW_ID (1ULL << 9)
-/**< Event device preserves the flow ID from the enqueued
- * event to the dequeued event if the flag is set. Otherwise,
- * the content of this field is implementation dependent.
+/**< Event device preserves the flow ID from the enqueued event to the dequeued event.
+ *
+ * If this flag is not set,
+ * the content of the flow-id field in dequeued events is implementation dependent.
+ *
+ * @see rte_event_dequeue_burst()
  */
 
 #define RTE_EVENT_DEV_CAP_MAINTENANCE_FREE (1ULL << 10)
 /**< Event device *does not* require calls to rte_event_maintain().
+ *
  * An event device that does not set this flag requires calls to
  * rte_event_maintain() during periods when neither
  * rte_event_dequeue_burst() nor rte_event_enqueue_burst() are called
  * on a port. This will allow the event device to perform internal
  * processing, such as flushing buffered events, return credits to a
  * global pool, or process signaling related to load balancing.
+ *
+ * @see rte_event_maintain()
  */
 
 #define RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR (1ULL << 11)
 /**< Event device is capable of changing the queue attributes at runtime i.e
- * after rte_event_queue_setup() or rte_event_start() call sequence. If this
- * flag is not set, eventdev queue attributes can only be configured during
+ * after rte_event_queue_setup() or rte_event_dev_start() call sequence.
+ *
+ * If this flag is not set, event queue attributes can only be configured during
  * rte_event_queue_setup().
+ *
+ * @see rte_event_queue_setup()
  */
 
 #define RTE_EVENT_DEV_CAP_PROFILE_LINK (1ULL << 12)
-/**< Event device is capable of supporting multiple link profiles per event port
- * i.e., the value of `rte_event_dev_info::max_profiles_per_port` is greater
- * than one.
+/**< Event device is capable of supporting multiple link profiles per event port.
+ *
+ * When set, the value of `rte_event_dev_info::max_profiles_per_port` is greater
+ * than one, and multiple profiles may be configured and then switched at runtime.
+ * If not set, only a single profile may be configured, which may itself be
+ * runtime adjustable (if @ref RTE_EVENT_DEV_CAP_RUNTIME_PORT_LINK is set).
+ *
+ * @see rte_event_port_profile_links_set() rte_event_port_profile_links_get()
+ * @see rte_event_port_profile_switch()
+ * @see RTE_EVENT_DEV_CAP_RUNTIME_PORT_LINK
  */
 
 /* Event device priority levels */
 #define RTE_EVENT_DEV_PRIORITY_HIGHEST   0
-/**< Highest priority expressed across eventdev subsystem
+/**< Highest priority level for events and queues.
+ *
  * @see rte_event_queue_setup(), rte_event_enqueue_burst()
  * @see rte_event_port_link()
  */
 #define RTE_EVENT_DEV_PRIORITY_NORMAL    128
-/**< Normal priority expressed across eventdev subsystem
+/**< Normal priority level for events and queues.
+ *
  * @see rte_event_queue_setup(), rte_event_enqueue_burst()
  * @see rte_event_port_link()
  */
 #define RTE_EVENT_DEV_PRIORITY_LOWEST    255
-/**< Lowest priority expressed across eventdev subsystem
+/**< Lowest priority level for events and queues.
+ *
  * @see rte_event_queue_setup(), rte_event_enqueue_burst()
  * @see rte_event_port_link()
  */
 
 /* Event queue scheduling weights */
 #define RTE_EVENT_QUEUE_WEIGHT_HIGHEST 255
-/**< Highest weight of an event queue
+/**< Highest weight of an event queue.
+ *
  * @see rte_event_queue_attr_get(), rte_event_queue_attr_set()
  */
 #define RTE_EVENT_QUEUE_WEIGHT_LOWEST 0
-/**< Lowest weight of an event queue
+/**< Lowest weight of an event queue.
+ *
  * @see rte_event_queue_attr_get(), rte_event_queue_attr_set()
  */
 
 /* Event queue scheduling affinity */
 #define RTE_EVENT_QUEUE_AFFINITY_HIGHEST 255
-/**< Highest scheduling affinity of an event queue
+/**< Highest scheduling affinity of an event queue.
+ *
  * @see rte_event_queue_attr_get(), rte_event_queue_attr_set()
  */
 #define RTE_EVENT_QUEUE_AFFINITY_LOWEST 0
-/**< Lowest scheduling affinity of an event queue
+/**< Lowest scheduling affinity of an event queue.
+ *
  * @see rte_event_queue_attr_get(), rte_event_queue_attr_set()
  */
 
-- 
2.40.1


^ permalink raw reply	[flat|nested] 123+ messages in thread

* [PATCH v3 04/11] eventdev: cleanup doxygen comments on info structure
  2024-02-02 12:39   ` [PATCH v3 00/11] improve eventdev API specification/documentation Bruce Richardson
                       ` (2 preceding siblings ...)
  2024-02-02 12:39     ` [PATCH v3 03/11] eventdev: update documentation on device capability flags Bruce Richardson
@ 2024-02-02 12:39     ` Bruce Richardson
  2024-02-02 12:39     ` [PATCH v3 05/11] eventdev: improve function documentation for query fns Bruce Richardson
                       ` (6 subsequent siblings)
  10 siblings, 0 replies; 123+ messages in thread
From: Bruce Richardson @ 2024-02-02 12:39 UTC (permalink / raw)
  To: dev, jerinj, mattias.ronnblom
  Cc: abdullah.sevincer, sachin.saxena, hemant.agrawal, pbhagavatula,
	pravin.pathak, Bruce Richardson

Some small rewording changes to the doxygen comments on struct
rte_event_dev_info.

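As an aside for reviewers: the right-shift normalization rule documented below
for power-of-2 priority-level counts can be sketched in plain C. This is
illustrative only, not part of the patch; the helper name `normalize_priority`
is made up, and only the numeric values of RTE_EVENT_DEV_PRIORITY_HIGHEST (0)
and RTE_EVENT_DEV_PRIORITY_LOWEST (255) are taken from rte_eventdev.h.

```c
#include <stdint.h>

/* Illustrative sketch (not DPDK driver code): for a device supporting a
 * power-of-two number of priority levels, a priority in the range
 * [RTE_EVENT_DEV_PRIORITY_HIGHEST (0), RTE_EVENT_DEV_PRIORITY_LOWEST (255)]
 * is mapped to an internal level by a right shift, so only the top
 * log2(dev_levels) bits of the 8-bit priority field are used. */
static uint8_t
normalize_priority(uint8_t priority, uint8_t dev_levels)
{
	unsigned int shift = 8; /* width of the priority field in bits */

	/* compute 8 - log2(dev_levels) for power-of-two dev_levels */
	while (dev_levels > 1) {
		dev_levels >>= 1;
		shift--;
	}
	return priority >> shift;
}
```

For example, a device with 4 priority levels keeps only the top 2 bits, so
priorities 0, 128 and 255 map to internal levels 0, 2 and 3 respectively.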
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>

---
V3: reworked following feedback
- added closing "." on comments
- added more cross-reference links
- reworded priority level comments
---
 lib/eventdev/rte_eventdev.h | 85 +++++++++++++++++++++++++------------
 1 file changed, 58 insertions(+), 27 deletions(-)

diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
index a33024479d..da3f72d89e 100644
--- a/lib/eventdev/rte_eventdev.h
+++ b/lib/eventdev/rte_eventdev.h
@@ -487,57 +487,88 @@ rte_event_dev_socket_id(uint8_t dev_id);
  * Event device information
  */
 struct rte_event_dev_info {
-	const char *driver_name;	/**< Event driver name */
-	struct rte_device *dev;	/**< Device information */
+	const char *driver_name;	/**< Event driver name. */
+	struct rte_device *dev;	/**< Device information. */
 	uint32_t min_dequeue_timeout_ns;
-	/**< Minimum supported global dequeue timeout(ns) by this device */
+	/**< Minimum global dequeue timeout(ns) supported by this device. */
 	uint32_t max_dequeue_timeout_ns;
-	/**< Maximum supported global dequeue timeout(ns) by this device */
+	/**< Maximum global dequeue timeout(ns) supported by this device. */
 	uint32_t dequeue_timeout_ns;
-	/**< Configured global dequeue timeout(ns) for this device */
+	/**< Configured global dequeue timeout(ns) for this device. */
 	uint8_t max_event_queues;
-	/**< Maximum event_queues supported by this device */
+	/**< Maximum event queues supported by this device.
+	 *
+	 * This count excludes any queues covered by @ref max_single_link_event_port_queue_pairs.
+	 */
 	uint32_t max_event_queue_flows;
-	/**< Maximum supported flows in an event queue by this device*/
+	/**< Maximum number of flows within an event queue supported by this device. */
 	uint8_t max_event_queue_priority_levels;
-	/**< Maximum number of event queue priority levels by this device.
-	 * Valid when the device has RTE_EVENT_DEV_CAP_QUEUE_QOS capability
+	/**< Maximum number of event queue priority levels supported by this device.
+	 *
+	 * Valid when the device has @ref RTE_EVENT_DEV_CAP_QUEUE_QOS capability.
+	 *
+	 * The implementation shall normalize priority values specified between
+	 * @ref RTE_EVENT_DEV_PRIORITY_HIGHEST and @ref RTE_EVENT_DEV_PRIORITY_LOWEST
+	 * to map them internally to this range of priorities.
+	 * [For devices supporting a power-of-2 number of priority levels, this
+	 * normalization will be done via a right-shift operation, so only the top
+	 * log2(max_levels) bits will be used by the event device.]
+	 *
+	 * @see rte_event_queue_conf.priority
 	 */
 	uint8_t max_event_priority_levels;
 	/**< Maximum number of event priority levels by this device.
-	 * Valid when the device has RTE_EVENT_DEV_CAP_EVENT_QOS capability
+	 *
+	 * Valid when the device has @ref RTE_EVENT_DEV_CAP_EVENT_QOS capability.
+	 *
+	 * The implementation shall normalize priority values specified between
+	 * @ref RTE_EVENT_DEV_PRIORITY_HIGHEST and @ref RTE_EVENT_DEV_PRIORITY_LOWEST
+	 * to map them internally to this range of priorities.
+	 * [For devices supporting a power-of-2 number of priority levels, this
+	 * normalization will be done via a right-shift operation, so only the top
+	 * log2(max_levels) bits will be used by the event device.]
+	 *
+	 * @see rte_event.priority
 	 */
 	uint8_t max_event_ports;
-	/**< Maximum number of event ports supported by this device */
+	/**< Maximum number of event ports supported by this device.
+	 *
+	 * This count excludes any ports covered by @ref max_single_link_event_port_queue_pairs.
+	 */
 	uint8_t max_event_port_dequeue_depth;
-	/**< Maximum number of events can be dequeued at a time from an
-	 * event port by this device.
-	 * A device that does not support bulk dequeue will set this as 1.
+	/**< Maximum number of events that can be dequeued at a time from an event port
+	 * on this device.
+	 *
+	 * A device that does not support burst dequeue
+	 * (@ref RTE_EVENT_DEV_CAP_BURST_MODE) will set this to 1.
 	 */
 	uint32_t max_event_port_enqueue_depth;
-	/**< Maximum number of events can be enqueued at a time from an
-	 * event port by this device.
-	 * A device that does not support bulk enqueue will set this as 1.
+	/**< Maximum number of events that can be enqueued at a time to an event port
+	 * on this device.
+	 *
+	 * A device that does not support burst enqueue
+	 * (@ref RTE_EVENT_DEV_CAP_BURST_MODE) will set this to 1.
 	 */
 	uint8_t max_event_port_links;
-	/**< Maximum number of queues that can be linked to a single event
-	 * port by this device.
+	/**< Maximum number of queues that can be linked to a single event port on this device.
 	 */
 	int32_t max_num_events;
 	/**< A *closed system* event dev has a limit on the number of events it
-	 * can manage at a time. An *open system* event dev does not have a
-	 * limit and will specify this as -1.
+	 * can manage at a time.
+	 * Once the number of events tracked by an eventdev exceeds this number,
+	 * any enqueues of NEW events will fail.
+	 * An *open system* event dev does not have a limit and will specify this as -1.
 	 */
 	uint32_t event_dev_cap;
-	/**< Event device capabilities(RTE_EVENT_DEV_CAP_)*/
+	/**< Event device capabilities flags (RTE_EVENT_DEV_CAP_*). */
 	uint8_t max_single_link_event_port_queue_pairs;
-	/**< Maximum number of event ports and queues that are optimized for
-	 * (and only capable of) single-link configurations supported by this
-	 * device. These ports and queues are not accounted for in
-	 * max_event_ports or max_event_queues.
+	/**< Maximum number of event ports and queues, supported by this device,
+	 * that are optimized for (and only capable of) single-link configurations.
+	 * These ports and queues are not accounted for in @ref max_event_ports
+	 * or @ref max_event_queues.
 	 */
 	uint8_t max_profiles_per_port;
-	/**< Maximum number of event queue profiles per event port.
+	/**< Maximum number of event queue link profiles per event port.
 	 * A device that doesn't support multiple profiles will set this as 1.
 	 */
 };
-- 
2.40.1


^ permalink raw reply	[flat|nested] 123+ messages in thread

* [PATCH v3 05/11] eventdev: improve function documentation for query fns
  2024-02-02 12:39   ` [PATCH v3 00/11] improve eventdev API specification/documentation Bruce Richardson
                       ` (3 preceding siblings ...)
  2024-02-02 12:39     ` [PATCH v3 04/11] eventdev: cleanup doxygen comments on info structure Bruce Richardson
@ 2024-02-02 12:39     ` Bruce Richardson
  2024-02-02 12:39     ` [PATCH v3 06/11] eventdev: improve doxygen comments on configure struct Bruce Richardson
                       ` (5 subsequent siblings)
  10 siblings, 0 replies; 123+ messages in thread
From: Bruce Richardson @ 2024-02-02 12:39 UTC (permalink / raw)
  To: dev, jerinj, mattias.ronnblom
  Cc: abdullah.sevincer, sachin.saxena, hemant.agrawal, pbhagavatula,
	pravin.pathak, Bruce Richardson

General improvements to the doxygen docs for eventdev functions for
querying basic information:
* number of devices
* id for a particular device
* socket id of device
* capability information for a device

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>

---
V3: minor changes following review
---
 lib/eventdev/rte_eventdev.h | 22 +++++++++++++---------
 1 file changed, 13 insertions(+), 9 deletions(-)

diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
index da3f72d89e..3cba13e2c4 100644
--- a/lib/eventdev/rte_eventdev.h
+++ b/lib/eventdev/rte_eventdev.h
@@ -448,8 +448,7 @@ struct rte_event;
  */
 
 /**
- * Get the total number of event devices that have been successfully
- * initialised.
+ * Get the total number of event devices.
  *
  * @return
  *   The total number of usable event devices.
@@ -464,8 +463,10 @@ rte_event_dev_count(void);
  *   Event device name to select the event device identifier.
  *
  * @return
- *   Returns event device identifier on success.
- *   - <0: Failure to find named event device.
+ *   Event device identifier (dev_id >= 0) on success.
+ *   Negative error code on failure:
+ *   - -EINVAL - input name parameter is invalid.
+ *   - -ENODEV - no event device found with that name.
  */
 int
 rte_event_dev_get_dev_id(const char *name);
@@ -478,7 +479,8 @@ rte_event_dev_get_dev_id(const char *name);
  * @return
  *   The NUMA socket id to which the device is connected or
  *   a default of zero if the socket could not be determined.
- *   -(-EINVAL)  dev_id value is out of range.
+ *   -EINVAL on error, where the given dev_id value does not
+ *   correspond to any event device.
  */
 int
 rte_event_dev_socket_id(uint8_t dev_id);
@@ -574,18 +576,20 @@ struct rte_event_dev_info {
 };
 
 /**
- * Retrieve the contextual information of an event device.
+ * Retrieve details of an event device's capabilities and configuration limits.
  *
  * @param dev_id
  *   The identifier of the device.
  *
  * @param[out] dev_info
  *   A pointer to a structure of type *rte_event_dev_info* to be filled with the
- *   contextual information of the device.
+ *   information about the device's capabilities.
  *
  * @return
- *   - 0: Success, driver updates the contextual information of the event device
- *   - <0: Error code returned by the driver info get function.
+ *   - 0: Success, information about the event device is present in dev_info.
+ *   - <0: Failure, error code returned by the function.
+ *     - -EINVAL - invalid input parameters, e.g. incorrect device id.
+ *     - -ENOTSUP - device does not support returning capabilities information.
  */
 int
 rte_event_dev_info_get(uint8_t dev_id, struct rte_event_dev_info *dev_info);
-- 
2.40.1


^ permalink raw reply	[flat|nested] 123+ messages in thread

* [PATCH v3 06/11] eventdev: improve doxygen comments on configure struct
  2024-02-02 12:39   ` [PATCH v3 00/11] improve eventdev API specification/documentation Bruce Richardson
                       ` (4 preceding siblings ...)
  2024-02-02 12:39     ` [PATCH v3 05/11] eventdev: improve function documentation for query fns Bruce Richardson
@ 2024-02-02 12:39     ` Bruce Richardson
  2024-02-02 12:39     ` [PATCH v3 07/11] eventdev: improve doxygen comments on config fns Bruce Richardson
                       ` (4 subsequent siblings)
  10 siblings, 0 replies; 123+ messages in thread
From: Bruce Richardson @ 2024-02-02 12:39 UTC (permalink / raw)
  To: dev, jerinj, mattias.ronnblom
  Cc: abdullah.sevincer, sachin.saxena, hemant.agrawal, pbhagavatula,
	pravin.pathak, Bruce Richardson

General rewording and cleanup on the rte_event_dev_config structure.
Improved the wording of some sentences and created linked
cross-references out of the existing references to the dev_info
structure.

As part of the rework, fix issue with how single-link port-queue pairs
were counted in the rte_event_dev_config structure. This did not match
the actual implementation and, if following the documentation, certain
valid port/queue configurations would have been impossible to configure.
Fix this by changing the documentation to match the implementation.

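For reviewers, the corrected counting rule can be sketched as a standalone
check (illustrative only; the helper name `queue_count_ok` is made up, but the
parameter names follow the rte_event_dev_info / rte_event_dev_config fields):

```c
#include <stdbool.h>
#include <stdint.h>

/* Sketch of the documented rule: nb_event_queues *includes* single-link
 * queues, so it may go up to max_event_queues +
 * max_single_link_event_port_queue_pairs, provided the non-single-link
 * portion alone stays within max_event_queues. The same rule applies to
 * ports. */
static bool
queue_count_ok(uint8_t nb_event_queues, uint8_t nb_single_link,
	       uint8_t max_event_queues, uint8_t max_single_link_pairs)
{
	/* total must fit within regular + single-link capacity */
	if (nb_event_queues > (unsigned)max_event_queues + max_single_link_pairs)
		return false;
	/* non-single-link queues alone must fit within max_event_queues */
	if (nb_event_queues - nb_single_link > max_event_queues)
		return false;
	return true;
}
```

So with max_event_queues = 4 and 4 single-link pairs, requesting 8 queues of
which 4 are single-link is valid, but 8 regular queues is not.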
Bugzilla ID:  1368
Fixes: 75d113136f38 ("eventdev: express DLB/DLB2 PMD constraints")

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>

---
V3:
- minor tweaks following review
- merged in doc fix for bugzilla 1368 into this patch, since it fit with
  other clarifications to the config struct.
---
 lib/eventdev/rte_eventdev.h | 61 ++++++++++++++++++++++---------------
 1 file changed, 37 insertions(+), 24 deletions(-)

diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
index 3cba13e2c4..027f5936fb 100644
--- a/lib/eventdev/rte_eventdev.h
+++ b/lib/eventdev/rte_eventdev.h
@@ -634,9 +634,9 @@ rte_event_dev_attr_get(uint8_t dev_id, uint32_t attr_id,
 struct rte_event_dev_config {
 	uint32_t dequeue_timeout_ns;
 	/**< rte_event_dequeue_burst() timeout on this device.
-	 * This value should be in the range of *min_dequeue_timeout_ns* and
-	 * *max_dequeue_timeout_ns* which previously provided in
-	 * rte_event_dev_info_get()
+	 * This value should be in the range of @ref rte_event_dev_info.min_dequeue_timeout_ns and
+	 * @ref rte_event_dev_info.max_dequeue_timeout_ns returned by
+	 * @ref rte_event_dev_info_get().
 	 * The value 0 is allowed, in which case, default dequeue timeout used.
 	 * @see RTE_EVENT_DEV_CFG_PER_DEQUEUE_TIMEOUT
 	 */
@@ -644,40 +644,53 @@ struct rte_event_dev_config {
 	/**< In a *closed system* this field is the limit on maximum number of
 	 * events that can be inflight in the eventdev at a given time. The
 	 * limit is required to ensure that the finite space in a closed system
-	 * is not overwhelmed. The value cannot exceed the *max_num_events*
-	 * as provided by rte_event_dev_info_get().
-	 * This value should be set to -1 for *open system*.
+	 * is not exhausted.
+	 * The value cannot exceed @ref rte_event_dev_info.max_num_events
+	 * returned by rte_event_dev_info_get().
+	 *
+	 * This value should be set to -1 for *open systems*, that is,
+	 * those systems returning -1 in @ref rte_event_dev_info.max_num_events.
+	 *
+	 * @see rte_event_port_conf.new_event_threshold
 	 */
 	uint8_t nb_event_queues;
 	/**< Number of event queues to configure on this device.
-	 * This value cannot exceed the *max_event_queues* which previously
-	 * provided in rte_event_dev_info_get()
+	 * This value *includes* any single-link queue-port pairs to be used.
+	 * This value cannot exceed @ref rte_event_dev_info.max_event_queues +
+	 * @ref rte_event_dev_info.max_single_link_event_port_queue_pairs
+	 * returned by rte_event_dev_info_get().
+	 * The number of non-single-link queues, i.e. this value less
+	 * *nb_single_link_event_port_queues* in this struct, cannot exceed
+	 * @ref rte_event_dev_info.max_event_queues.
 	 */
 	uint8_t nb_event_ports;
 	/**< Number of event ports to configure on this device.
-	 * This value cannot exceed the *max_event_ports* which previously
-	 * provided in rte_event_dev_info_get()
+	 * This value *includes* any single-link queue-port pairs to be used.
+	 * This value cannot exceed @ref rte_event_dev_info.max_event_ports +
+	 * @ref rte_event_dev_info.max_single_link_event_port_queue_pairs
+	 * returned by rte_event_dev_info_get().
+	 * The number of non-single-link ports, i.e. this value less
+	 * *nb_single_link_event_port_queues* in this struct, cannot exceed
+	 * @ref rte_event_dev_info.max_event_ports.
 	 */
 	uint32_t nb_event_queue_flows;
-	/**< Number of flows for any event queue on this device.
-	 * This value cannot exceed the *max_event_queue_flows* which previously
-	 * provided in rte_event_dev_info_get()
+	/**< Max number of flows needed for a single event queue on this device.
+	 * This value cannot exceed @ref rte_event_dev_info.max_event_queue_flows
+	 * returned by rte_event_dev_info_get().
 	 */
 	uint32_t nb_event_port_dequeue_depth;
-	/**< Maximum number of events can be dequeued at a time from an
-	 * event port by this device.
-	 * This value cannot exceed the *max_event_port_dequeue_depth*
-	 * which previously provided in rte_event_dev_info_get().
+	/**< Max number of events that can be dequeued at a time from an event port on this device.
+	 * This value cannot exceed @ref rte_event_dev_info.max_event_port_dequeue_depth
+	 * returned by rte_event_dev_info_get().
 	 * Ignored when device is not RTE_EVENT_DEV_CAP_BURST_MODE capable.
-	 * @see rte_event_port_setup()
+	 * @see rte_event_port_setup() rte_event_dequeue_burst()
 	 */
 	uint32_t nb_event_port_enqueue_depth;
-	/**< Maximum number of events can be enqueued at a time from an
-	 * event port by this device.
-	 * This value cannot exceed the *max_event_port_enqueue_depth*
-	 * which previously provided in rte_event_dev_info_get().
+	/**< Maximum number of events that can be enqueued at a time to an event port on this device.
+	 * This value cannot exceed @ref rte_event_dev_info.max_event_port_enqueue_depth
+	 * returned by rte_event_dev_info_get().
 	 * Ignored when device is not RTE_EVENT_DEV_CAP_BURST_MODE capable.
-	 * @see rte_event_port_setup()
+	 * @see rte_event_port_setup() rte_event_enqueue_burst()
 	 */
 	uint32_t event_dev_cfg;
 	/**< Event device config flags(RTE_EVENT_DEV_CFG_)*/
@@ -687,7 +700,7 @@ struct rte_event_dev_config {
 	 * queues; this value cannot exceed *nb_event_ports* or
 	 * *nb_event_queues*. If the device has ports and queues that are
 	 * optimized for single-link usage, this field is a hint for how many
-	 * to allocate; otherwise, regular event ports and queues can be used.
+	 * to allocate; otherwise, regular event ports and queues will be used.
 	 */
 };
 
-- 
2.40.1


^ permalink raw reply	[flat|nested] 123+ messages in thread

* [PATCH v3 07/11] eventdev: improve doxygen comments on config fns
  2024-02-02 12:39   ` [PATCH v3 00/11] improve eventdev API specification/documentation Bruce Richardson
                       ` (5 preceding siblings ...)
  2024-02-02 12:39     ` [PATCH v3 06/11] eventdev: improve doxygen comments on configure struct Bruce Richardson
@ 2024-02-02 12:39     ` Bruce Richardson
  2024-02-02 12:39     ` [PATCH v3 08/11] eventdev: improve doxygen comments for control APIs Bruce Richardson
                       ` (3 subsequent siblings)
  10 siblings, 0 replies; 123+ messages in thread
From: Bruce Richardson @ 2024-02-02 12:39 UTC (permalink / raw)
  To: dev, jerinj, mattias.ronnblom
  Cc: abdullah.sevincer, sachin.saxena, hemant.agrawal, pbhagavatula,
	pravin.pathak, Bruce Richardson

Improve the documentation text for the configuration functions and
structures for configuring an eventdev, as well as ports and queues.
Clarify text where possible, and ensure references come through as links
in the html output.

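To illustrate one of the clarified rules for reviewers: the validity condition
for nb_atomic_flows can be expressed as a small check. This is a sketch only;
the helper name `atomic_flows_ok` is invented, only the RTE_EVENT_QUEUE_CFG_ALL_TYPES
flag value is copied from rte_eventdev.h, and the analogous
RTE_SCHED_TYPE_ATOMIC case is left out for brevity.

```c
#include <stdbool.h>
#include <stdint.h>

/* Value copied from rte_eventdev.h for this sketch */
#define RTE_EVENT_QUEUE_CFG_ALL_TYPES (1ULL << 0)

/* Sketch of the documented rule: nb_atomic_flows must lie in
 * [1, nb_event_queue_flows] when the queue is configured for atomic
 * scheduling (here, simplified to the ALL_TYPES-flag case only), and is
 * ignored otherwise. */
static bool
atomic_flows_ok(uint32_t event_queue_cfg, uint32_t nb_atomic_flows,
		uint32_t nb_event_queue_flows)
{
	if (!(event_queue_cfg & RTE_EVENT_QUEUE_CFG_ALL_TYPES))
		return true;	/* queue not atomic-capable: value ignored */
	return nb_atomic_flows >= 1 && nb_atomic_flows <= nb_event_queue_flows;
}
```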
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>

---
V3: Update following review, mainly:
 - change ranges starting with 0, to just say "less than"
 - put in "." at end of sentences & bullet points
---
 lib/eventdev/rte_eventdev.h | 221 +++++++++++++++++++++++-------------
 1 file changed, 144 insertions(+), 77 deletions(-)

diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
index 027f5936fb..e2923a69fb 100644
--- a/lib/eventdev/rte_eventdev.h
+++ b/lib/eventdev/rte_eventdev.h
@@ -707,12 +707,14 @@ struct rte_event_dev_config {
 /**
  * Configure an event device.
  *
- * This function must be invoked first before any other function in the
- * API. This function can also be re-invoked when a device is in the
- * stopped state.
+ * This function must be invoked before any other configuration function in the
+ * API, when preparing an event device for application use.
+ * This function can also be re-invoked when a device is in the stopped state.
  *
- * The caller may use rte_event_dev_info_get() to get the capability of each
- * resources available for this event device.
+ * The caller should use rte_event_dev_info_get() to get the capabilities and
+ * resource limits for this event device before calling this API.
+ * Many values in the dev_conf input parameter are subject to limits given
+ * in the device information returned from rte_event_dev_info_get().
  *
  * @param dev_id
  *   The identifier of the device to configure.
@@ -722,6 +724,9 @@ struct rte_event_dev_config {
  * @return
  *   - 0: Success, device configured.
  *   - <0: Error code returned by the driver configuration function.
+ *     - -ENOTSUP - device does not support configuration.
+ *     - -EINVAL  - invalid input parameter.
+ *     - -EBUSY   - device has already been started.
  */
 int
 rte_event_dev_configure(uint8_t dev_id,
@@ -731,14 +736,35 @@ rte_event_dev_configure(uint8_t dev_id,
 
 /* Event queue configuration bitmap flags */
 #define RTE_EVENT_QUEUE_CFG_ALL_TYPES          (1ULL << 0)
-/**< Allow ATOMIC,ORDERED,PARALLEL schedule type enqueue
+/**< Allow events with schedule types ATOMIC, ORDERED, and PARALLEL to be enqueued to this queue.
  *
+ * The scheduling type to be used is that specified in each individual event.
+ * This flag can only be set when configuring queues on devices reporting the
+ * @ref RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES capability.
+ *
+ * Without this flag, only events with the specific scheduling type configured at queue setup
+ * can be sent to the queue.
+ *
+ * @see RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES
  * @see RTE_SCHED_TYPE_ORDERED, RTE_SCHED_TYPE_ATOMIC, RTE_SCHED_TYPE_PARALLEL
  * @see rte_event_enqueue_burst()
  */
 #define RTE_EVENT_QUEUE_CFG_SINGLE_LINK        (1ULL << 1)
 /**< This event queue links only to a single event port.
  *
+ * No load-balancing of events is performed, as all events
+ * sent to this queue end up at the same event port.
+ * The number of queues on which this flag is to be set must be
+ * configured at device configuration time, by setting the
+ * @ref rte_event_dev_config.nb_single_link_event_port_queues
+ * parameter appropriately.
+ *
+ * This flag serves as a hint only, any devices without specific
+ * support for single-link queues can fall back automatically to
+ * using regular queues with a single destination port.
+ *
+ *  @see rte_event_dev_info.max_single_link_event_port_queue_pairs
+ *  @see rte_event_dev_config.nb_single_link_event_port_queues
  *  @see rte_event_port_setup(), rte_event_port_link()
  */
 
@@ -746,56 +772,79 @@ rte_event_dev_configure(uint8_t dev_id,
 struct rte_event_queue_conf {
 	uint32_t nb_atomic_flows;
 	/**< The maximum number of active flows this queue can track at any
-	 * given time. If the queue is configured for atomic scheduling (by
-	 * applying the RTE_EVENT_QUEUE_CFG_ALL_TYPES flag to event_queue_cfg
-	 * or RTE_SCHED_TYPE_ATOMIC flag to schedule_type), then the
-	 * value must be in the range of [1, nb_event_queue_flows], which was
-	 * previously provided in rte_event_dev_configure().
+	 * given time.
+	 *
+	 * If the queue is configured for atomic scheduling (by
+	 * applying the @ref RTE_EVENT_QUEUE_CFG_ALL_TYPES flag to
+	 * @ref rte_event_queue_conf.event_queue_cfg
+	 * or @ref RTE_SCHED_TYPE_ATOMIC flag to @ref rte_event_queue_conf.schedule_type), then the
+	 * value must be in the range of [1, @ref rte_event_dev_config.nb_event_queue_flows],
+	 * which was previously provided in rte_event_dev_configure().
+	 *
+	 * If the queue is not configured for atomic scheduling this value is ignored.
 	 */
 	uint32_t nb_atomic_order_sequences;
 	/**< The maximum number of outstanding events waiting to be
 	 * reordered by this queue. In other words, the number of entries in
-	 * this queue’s reorder buffer.When the number of events in the
+	 * this queue’s reorder buffer. When the number of events in the
 	 * reorder buffer reaches to *nb_atomic_order_sequences* then the
-	 * scheduler cannot schedule the events from this queue and invalid
-	 * event will be returned from dequeue until one or more entries are
+	 * scheduler cannot schedule the events from this queue and no
+	 * events will be returned from dequeue until one or more entries are
 	 * freed up/released.
+	 *
 	 * If the queue is configured for ordered scheduling (by applying the
-	 * RTE_EVENT_QUEUE_CFG_ALL_TYPES flag to event_queue_cfg or
-	 * RTE_SCHED_TYPE_ORDERED flag to schedule_type), then the value must
-	 * be in the range of [1, nb_event_queue_flows], which was
+	 * @ref RTE_EVENT_QUEUE_CFG_ALL_TYPES flag to @ref rte_event_queue_conf.event_queue_cfg or
+	 * @ref RTE_SCHED_TYPE_ORDERED flag to @ref rte_event_queue_conf.schedule_type),
+	 * then the value must be in the range of
+	 * [1, @ref rte_event_dev_config.nb_event_queue_flows], which was
 	 * previously supplied to rte_event_dev_configure().
+	 *
+	 * If the queue is not configured for ordered scheduling, then this value is ignored.
 	 */
 	uint32_t event_queue_cfg;
 	/**< Queue cfg flags(EVENT_QUEUE_CFG_) */
 	uint8_t schedule_type;
 	/**< Queue schedule type(RTE_SCHED_TYPE_*).
-	 * Valid when RTE_EVENT_QUEUE_CFG_ALL_TYPES bit is not set in
-	 * event_queue_cfg.
+	 *
+	 * Valid when @ref RTE_EVENT_QUEUE_CFG_ALL_TYPES flag is not set in
+	 * @ref rte_event_queue_conf.event_queue_cfg.
+	 *
+	 * If the @ref RTE_EVENT_QUEUE_CFG_ALL_TYPES flag is set, then this field is ignored.
+	 *
+	 * @see RTE_SCHED_TYPE_ORDERED, RTE_SCHED_TYPE_ATOMIC, RTE_SCHED_TYPE_PARALLEL
 	 */
 	uint8_t priority;
 	/**< Priority for this event queue relative to other event queues.
+	 *
 	 * The requested priority should in the range of
-	 * [RTE_EVENT_DEV_PRIORITY_HIGHEST, RTE_EVENT_DEV_PRIORITY_LOWEST].
+	 * [@ref RTE_EVENT_DEV_PRIORITY_HIGHEST, @ref RTE_EVENT_DEV_PRIORITY_LOWEST].
 	 * The implementation shall normalize the requested priority to
 	 * event device supported priority value.
-	 * Valid when the device has RTE_EVENT_DEV_CAP_QUEUE_QOS capability
+	 *
+	 * Valid when the device has @ref RTE_EVENT_DEV_CAP_QUEUE_QOS capability,
+	 * ignored otherwise.
 	 */
 	uint8_t weight;
 	/**< Weight of the event queue relative to other event queues.
+	 *
 	 * The requested weight should be in the range of
-	 * [RTE_EVENT_DEV_WEIGHT_HIGHEST, RTE_EVENT_DEV_WEIGHT_LOWEST].
+	 * [@ref RTE_EVENT_QUEUE_WEIGHT_HIGHEST, @ref RTE_EVENT_QUEUE_WEIGHT_LOWEST].
 	 * The implementation shall normalize the requested weight to event
 	 * device supported weight value.
-	 * Valid when the device has RTE_EVENT_DEV_CAP_QUEUE_QOS capability.
+	 *
+	 * Valid when the device has @ref RTE_EVENT_DEV_CAP_QUEUE_QOS capability,
+	 * ignored otherwise.
 	 */
 	uint8_t affinity;
 	/**< Affinity of the event queue relative to other event queues.
+	 *
 	 * The requested affinity should be in the range of
-	 * [RTE_EVENT_DEV_AFFINITY_HIGHEST, RTE_EVENT_DEV_AFFINITY_LOWEST].
+	 * [@ref RTE_EVENT_QUEUE_AFFINITY_HIGHEST, @ref RTE_EVENT_QUEUE_AFFINITY_LOWEST].
 	 * The implementation shall normalize the requested affinity to event
 	 * device supported affinity value.
-	 * Valid when the device has RTE_EVENT_DEV_CAP_QUEUE_QOS capability.
+	 *
+	 * Valid when the device has @ref RTE_EVENT_DEV_CAP_QUEUE_QOS capability,
+	 * ignored otherwise.
 	 */
 };
 
@@ -810,7 +859,7 @@ struct rte_event_queue_conf {
  *   The identifier of the device.
  * @param queue_id
  *   The index of the event queue to get the configuration information.
- *   The value must be in the range [0, nb_event_queues - 1]
+ *   The value must be less than @ref rte_event_dev_config.nb_event_queues
  *   previously supplied to rte_event_dev_configure().
  * @param[out] queue_conf
  *   The pointer to the default event queue configuration data.
@@ -830,8 +879,9 @@ rte_event_queue_default_conf_get(uint8_t dev_id, uint8_t queue_id,
  * @param dev_id
  *   The identifier of the device.
  * @param queue_id
- *   The index of the event queue to setup. The value must be in the range
- *   [0, nb_event_queues - 1] previously supplied to rte_event_dev_configure().
+ *   The index of the event queue to setup. The value must be
+ *   less than @ref rte_event_dev_config.nb_event_queues previously supplied to
+ *   rte_event_dev_configure().
  * @param queue_conf
  *   The pointer to the configuration data to be used for the event queue.
  *   NULL value is allowed, in which case default configuration	used.
@@ -840,60 +890,60 @@ rte_event_queue_default_conf_get(uint8_t dev_id, uint8_t queue_id,
  *
  * @return
  *   - 0: Success, event queue correctly set up.
- *   - <0: event queue configuration failed
+ *   - <0: event queue configuration failed.
  */
 int
 rte_event_queue_setup(uint8_t dev_id, uint8_t queue_id,
 		      const struct rte_event_queue_conf *queue_conf);
 
 /**
- * The priority of the queue.
+ * Queue attribute id for the priority of the queue.
  */
 #define RTE_EVENT_QUEUE_ATTR_PRIORITY 0
 /**
- * The number of atomic flows configured for the queue.
+ * Queue attribute id for the number of atomic flows configured for the queue.
  */
 #define RTE_EVENT_QUEUE_ATTR_NB_ATOMIC_FLOWS 1
 /**
- * The number of atomic order sequences configured for the queue.
+ * Queue attribute id for the number of atomic order sequences configured for the queue.
  */
 #define RTE_EVENT_QUEUE_ATTR_NB_ATOMIC_ORDER_SEQUENCES 2
 /**
- * The cfg flags for the queue.
+ * Queue attribute id for the configuration flags for the queue.
  */
 #define RTE_EVENT_QUEUE_ATTR_EVENT_QUEUE_CFG 3
 /**
- * The schedule type of the queue.
+ * Queue attribute id for the schedule type of the queue.
  */
 #define RTE_EVENT_QUEUE_ATTR_SCHEDULE_TYPE 4
 /**
- * The weight of the queue.
+ * Queue attribute id for the weight of the queue.
  */
 #define RTE_EVENT_QUEUE_ATTR_WEIGHT 5
 /**
- * Affinity of the queue.
+ * Queue attribute id for the affinity of the queue.
  */
 #define RTE_EVENT_QUEUE_ATTR_AFFINITY 6
 
 /**
- * Get an attribute from a queue.
+ * Get an attribute of an event queue.
  *
  * @param dev_id
- *   Eventdev id
+ *   The identifier of the device.
  * @param queue_id
- *   Eventdev queue id
+ *   The index of the event queue to query. The value must be less than
+ *   @ref rte_event_dev_config.nb_event_queues previously supplied to rte_event_dev_configure().
  * @param attr_id
- *   The attribute ID to retrieve
+ *   The attribute ID to retrieve (RTE_EVENT_QUEUE_ATTR_*).
  * @param[out] attr_value
- *   A pointer that will be filled in with the attribute value if successful
+ *   A pointer that will be filled in with the attribute value if successful.
  *
  * @return
  *   - 0: Successfully returned value
- *   - -EINVAL: invalid device, queue or attr_id provided, or attr_value was
- *		NULL
+ *   - -EINVAL: invalid device, queue or attr_id provided, or attr_value was NULL.
  *   - -EOVERFLOW: returned when attr_id is set to
- *   RTE_EVENT_QUEUE_ATTR_SCHEDULE_TYPE and event_queue_cfg is set to
- *   RTE_EVENT_QUEUE_CFG_ALL_TYPES
+ *   @ref RTE_EVENT_QUEUE_ATTR_SCHEDULE_TYPE and @ref RTE_EVENT_QUEUE_CFG_ALL_TYPES is
+ *   set in the queue configuration flags.
  */
 int
 rte_event_queue_attr_get(uint8_t dev_id, uint8_t queue_id, uint32_t attr_id,
@@ -903,19 +953,20 @@ rte_event_queue_attr_get(uint8_t dev_id, uint8_t queue_id, uint32_t attr_id,
  * Set an event queue attribute.
  *
  * @param dev_id
- *   Eventdev id
+ *   The identifier of the device.
  * @param queue_id
- *   Eventdev queue id
+ *   The index of the event queue to configure. The value must be less than
+ *   @ref rte_event_dev_config.nb_event_queues previously supplied to rte_event_dev_configure().
  * @param attr_id
- *   The attribute ID to set
+ *   The attribute ID to set (RTE_EVENT_QUEUE_ATTR_*).
  * @param attr_value
- *   The attribute value to set
+ *   The attribute value to set.
  *
  * @return
  *   - 0: Successfully set attribute.
- *   - -EINVAL: invalid device, queue or attr_id.
- *   - -ENOTSUP: device does not support setting the event attribute.
- *   - <0: failed to set event queue attribute
+ *   - <0: failed to set event queue attribute.
+ *   -   -EINVAL: invalid device, queue or attr_id.
+ *   -   -ENOTSUP: device does not support setting the event attribute.
  */
 int
 rte_event_queue_attr_set(uint8_t dev_id, uint8_t queue_id, uint32_t attr_id,
@@ -933,7 +984,10 @@ rte_event_queue_attr_set(uint8_t dev_id, uint8_t queue_id, uint32_t attr_id,
  */
 #define RTE_EVENT_PORT_CFG_SINGLE_LINK         (1ULL << 1)
 /**< This event port links only to a single event queue.
+ * The queue it links with should be similarly configured with the
+ * @ref RTE_EVENT_QUEUE_CFG_SINGLE_LINK flag.
  *
+ *  @see RTE_EVENT_QUEUE_CFG_SINGLE_LINK
  *  @see rte_event_port_setup(), rte_event_port_link()
  */
 #define RTE_EVENT_PORT_CFG_HINT_PRODUCER       (1ULL << 2)
@@ -949,7 +1003,7 @@ rte_event_queue_attr_set(uint8_t dev_id, uint8_t queue_id, uint32_t attr_id,
 #define RTE_EVENT_PORT_CFG_HINT_CONSUMER       (1ULL << 3)
 /**< Hint that this event port will primarily dequeue events from the system.
  * A PMD can optimize its internal workings by assuming that this port is
- * primarily going to consume events, and not enqueue FORWARD or RELEASE
+ * primarily going to consume events, and not enqueue NEW or FORWARD
  * events.
  *
  * Note that this flag is only a hint, so PMDs must operate under the
@@ -975,48 +1029,55 @@ struct rte_event_port_conf {
 	/**< A backpressure threshold for new event enqueues on this port.
 	 * Use for *closed system* event dev where event capacity is limited,
 	 * and cannot exceed the capacity of the event dev.
+	 *
 	 * Configuring ports with different thresholds can make higher priority
 	 * traffic less likely to  be backpressured.
 	 * For example, a port used to inject NIC Rx packets into the event dev
 	 * can have a lower threshold so as not to overwhelm the device,
 	 * while ports used for worker pools can have a higher threshold.
-	 * This value cannot exceed the *nb_events_limit*
+	 * This value cannot exceed the @ref rte_event_dev_config.nb_events_limit value
 	 * which was previously supplied to rte_event_dev_configure().
-	 * This should be set to '-1' for *open system*.
+	 *
+	 * This should be set to '-1' for *open system*, i.e when
+	 * @ref rte_event_dev_info.max_num_events == -1.
 	 */
 	uint16_t dequeue_depth;
-	/**< Configure number of bulk dequeues for this event port.
-	 * This value cannot exceed the *nb_event_port_dequeue_depth*
-	 * which previously supplied to rte_event_dev_configure().
-	 * Ignored when device is not RTE_EVENT_DEV_CAP_BURST_MODE capable.
+	/**< Configure the maximum size of burst dequeues for this event port.
+	 * This value cannot exceed the @ref rte_event_dev_config.nb_event_port_dequeue_depth value
+	 * which was previously supplied to rte_event_dev_configure().
+	 *
+	 * Ignored when device does not support the @ref RTE_EVENT_DEV_CAP_BURST_MODE capability.
 	 */
 	uint16_t enqueue_depth;
-	/**< Configure number of bulk enqueues for this event port.
-	 * This value cannot exceed the *nb_event_port_enqueue_depth*
-	 * which previously supplied to rte_event_dev_configure().
-	 * Ignored when device is not RTE_EVENT_DEV_CAP_BURST_MODE capable.
+	/**< Configure the maximum size of burst enqueues to this event port.
+	 * This value cannot exceed the @ref rte_event_dev_config.nb_event_port_enqueue_depth value
+	 * which was previously supplied to rte_event_dev_configure().
+	 *
+	 * Ignored when device does not support the @ref RTE_EVENT_DEV_CAP_BURST_MODE capability.
 	 */
-	uint32_t event_port_cfg; /**< Port cfg flags(EVENT_PORT_CFG_) */
-	uint32_t event_port_cfg; /**< Port cfg flags(EVENT_PORT_CFG_) */
+	uint32_t event_port_cfg; /**< Port configuration flags (EVENT_PORT_CFG_) */
 };
 
 /**
  * Retrieve the default configuration information of an event port designated
  * by its *port_id* from the event driver for an event device.
  *
- * This function intended to be used in conjunction with rte_event_port_setup()
- * where caller needs to set up the port by overriding few default values.
+ * This function is intended to be used in conjunction with rte_event_port_setup()
+ * where the caller can set up the port by just overriding a few default values.
  *
  * @param dev_id
  *   The identifier of the device.
  * @param port_id
  *   The index of the event port to get the configuration information.
- *   The value must be in the range [0, nb_event_ports - 1]
+ *   The value must be less than @ref rte_event_dev_config.nb_event_ports
  *   previously supplied to rte_event_dev_configure().
  * @param[out] port_conf
- *   The pointer to the default event port configuration data
+ *   The pointer to a structure to store the default event port configuration data.
  * @return
  *   - 0: Success, driver updates the default event port configuration data.
  *   - <0: Error code returned by the driver info get function.
+ *      - -EINVAL - invalid input parameter.
+ *      - -ENOTSUP - function is not supported for this device.
  *
  * @see rte_event_port_setup()
  */
@@ -1030,19 +1091,25 @@ rte_event_port_default_conf_get(uint8_t dev_id, uint8_t port_id,
  * @param dev_id
  *   The identifier of the device.
  * @param port_id
- *   The index of the event port to setup. The value must be in the range
- *   [0, nb_event_ports - 1] previously supplied to rte_event_dev_configure().
+ *   The index of the event port to setup. The value must be less than
+ *   @ref rte_event_dev_config.nb_event_ports previously supplied to
+ *   rte_event_dev_configure().
  * @param port_conf
- *   The pointer to the configuration data to be used for the queue.
- *   NULL value is allowed, in which case default configuration	used.
+ *   The pointer to the configuration data to be used for the port.
+ *   NULL value is allowed, in which case the default configuration is used.
  *
  * @see rte_event_port_default_conf_get()
  *
  * @return
  *   - 0: Success, event port correctly set up.
- *   - <0: Port configuration failed
- *   - (-EDQUOT) Quota exceeded(Application tried to link the queue configured
- *   with RTE_EVENT_QUEUE_CFG_SINGLE_LINK to more than one event ports)
+ *   - <0: Port configuration failed.
+ *     - -EINVAL - Invalid input parameter.
+ *     - -EBUSY - Port already started.
+ *     - -ENOTSUP - Function not supported on this device, or a NULL pointer passed
+ *        as the port_conf parameter, and no default configuration function available
+ *        for this device.
+ *     - -EDQUOT - Application tried to link a queue configured
+ *      with @ref RTE_EVENT_QUEUE_CFG_SINGLE_LINK to more than one event port.
  */
 int
 rte_event_port_setup(uint8_t dev_id, uint8_t port_id,
@@ -1072,8 +1139,8 @@ typedef void (*rte_eventdev_port_flush_t)(uint8_t dev_id,
  * @param dev_id
  *   The identifier of the device.
  * @param port_id
- *   The index of the event port to setup. The value must be in the range
- *   [0, nb_event_ports - 1] previously supplied to rte_event_dev_configure().
+ *   The index of the event port to quiesce. The value must be less than
+ *   @ref rte_event_dev_config.nb_event_ports previously supplied to rte_event_dev_configure().
  * @param release_cb
  *   Callback function invoked once per flushed event.
  * @param args
-- 
2.40.1


^ permalink raw reply	[flat|nested] 123+ messages in thread

* [PATCH v3 08/11] eventdev: improve doxygen comments for control APIs
  2024-02-02 12:39   ` [PATCH v3 00/11] improve eventdev API specification/documentation Bruce Richardson
                       ` (6 preceding siblings ...)
  2024-02-02 12:39     ` [PATCH v3 07/11] eventdev: improve doxygen comments on config fns Bruce Richardson
@ 2024-02-02 12:39     ` Bruce Richardson
  2024-02-02 12:39     ` [PATCH v3 09/11] eventdev: improve comments on scheduling types Bruce Richardson
                       ` (2 subsequent siblings)
  10 siblings, 0 replies; 123+ messages in thread
From: Bruce Richardson @ 2024-02-02 12:39 UTC (permalink / raw)
  To: dev, jerinj, mattias.ronnblom
  Cc: abdullah.sevincer, sachin.saxena, hemant.agrawal, pbhagavatula,
	pravin.pathak, Bruce Richardson

The doxygen comments for the port attributes, start and stop (and
related functions) are improved.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>

---
V3: add missing "." on end of sentences/lines.
---
 lib/eventdev/rte_eventdev.h | 47 +++++++++++++++++++++++--------------
 1 file changed, 29 insertions(+), 18 deletions(-)

diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
index e2923a69fb..a7d8c28015 100644
--- a/lib/eventdev/rte_eventdev.h
+++ b/lib/eventdev/rte_eventdev.h
@@ -1151,19 +1151,21 @@ rte_event_port_quiesce(uint8_t dev_id, uint8_t port_id,
 		       rte_eventdev_port_flush_t release_cb, void *args);
 
 /**
- * The queue depth of the port on the enqueue side
+ * Port attribute id for the maximum size of a burst enqueue operation supported on a port.
  */
 #define RTE_EVENT_PORT_ATTR_ENQ_DEPTH 0
 /**
- * The queue depth of the port on the dequeue side
+ * Port attribute id for the maximum size of a dequeue burst which can be returned from a port.
  */
 #define RTE_EVENT_PORT_ATTR_DEQ_DEPTH 1
 /**
- * The new event threshold of the port
+ * Port attribute id for the new event threshold of the port.
+ * Once the number of events in the system exceeds this threshold, the enqueue of NEW-type
+ * events will fail.
  */
 #define RTE_EVENT_PORT_ATTR_NEW_EVENT_THRESHOLD 2
 /**
- * The implicit release disable attribute of the port
+ * Port attribute id for the implicit release disable attribute of the port.
  */
 #define RTE_EVENT_PORT_ATTR_IMPLICIT_RELEASE_DISABLE 3
 
@@ -1171,17 +1173,18 @@ rte_event_port_quiesce(uint8_t dev_id, uint8_t port_id,
  * Get an attribute from a port.
  *
  * @param dev_id
- *   Eventdev id
+ *   The identifier of the device.
  * @param port_id
- *   Eventdev port id
+ *   The index of the event port to query. The value must be less than
+ *   @ref rte_event_dev_config.nb_event_ports previously supplied to rte_event_dev_configure().
  * @param attr_id
- *   The attribute ID to retrieve
+ *   The attribute ID to retrieve (RTE_EVENT_PORT_ATTR_*)
  * @param[out] attr_value
  *   A pointer that will be filled in with the attribute value if successful
  *
  * @return
- *   - 0: Successfully returned value
- *   - (-EINVAL) Invalid device, port or attr_id, or attr_value was NULL
+ *   - 0: Successfully returned value.
+ *   - (-EINVAL) Invalid device, port or attr_id, or attr_value was NULL.
  */
 int
 rte_event_port_attr_get(uint8_t dev_id, uint8_t port_id, uint32_t attr_id,
@@ -1190,17 +1193,19 @@ rte_event_port_attr_get(uint8_t dev_id, uint8_t port_id, uint32_t attr_id,
 /**
  * Start an event device.
  *
- * The device start step is the last one and consists of setting the event
- * queues to start accepting the events and schedules to event ports.
+ * The device start step is the last one in device setup, and enables the event
+ * ports and queues to start accepting events and scheduling them to event ports.
  *
  * On success, all basic functions exported by the API (event enqueue,
  * event dequeue and so on) can be invoked.
  *
  * @param dev_id
- *   Event device identifier
+ *   Event device identifier.
  * @return
  *   - 0: Success, device started.
- *   - -ESTALE : Not all ports of the device are configured
+ *   - -EINVAL:  Invalid device id provided.
+ *   - -ENOTSUP: Device does not support this operation.
+ *   - -ESTALE : Not all ports of the device are configured.
  *   - -ENOLINK: Not all queues are linked, which could lead to deadlock.
  */
 int
@@ -1242,18 +1247,22 @@ typedef void (*rte_eventdev_stop_flush_t)(uint8_t dev_id,
  * callback function must be registered in every process that can call
  * rte_event_dev_stop().
  *
+ * Only one callback function may be registered. Each new call replaces
+ * the existing registered callback function with the new function passed in.
+ *
  * To unregister a callback, call this function with a NULL callback pointer.
  *
  * @param dev_id
  *   The identifier of the device.
  * @param callback
- *   Callback function invoked once per flushed event.
+ *   Callback function to be invoked once per flushed event.
+ *   Pass NULL to unset any previously-registered callback function.
  * @param userdata
  *   Argument supplied to callback.
  *
  * @return
  *  - 0 on success.
- *  - -EINVAL if *dev_id* is invalid
+ *  - -EINVAL if *dev_id* is invalid.
  *
  * @see rte_event_dev_stop()
  */
@@ -1264,12 +1273,14 @@ int rte_event_dev_stop_flush_callback_register(uint8_t dev_id,
  * Close an event device. The device cannot be restarted!
  *
  * @param dev_id
- *   Event device identifier
+ *   Event device identifier.
  *
  * @return
  *  - 0 on successfully closing device
- *  - <0 on failure to close device
- *  - (-EAGAIN) if device is busy
+ *  - <0 on failure to close device.
+ *    - -EINVAL - invalid device id.
+ *    - -ENOTSUP - operation not supported for this device.
+ *    - -EAGAIN - device is busy.
  */
 int
 rte_event_dev_close(uint8_t dev_id);
-- 
2.40.1


^ permalink raw reply	[flat|nested] 123+ messages in thread

* [PATCH v3 09/11] eventdev: improve comments on scheduling types
  2024-02-02 12:39   ` [PATCH v3 00/11] improve eventdev API specification/documentation Bruce Richardson
                       ` (7 preceding siblings ...)
  2024-02-02 12:39     ` [PATCH v3 08/11] eventdev: improve doxygen comments for control APIs Bruce Richardson
@ 2024-02-02 12:39     ` Bruce Richardson
  2024-02-08  9:18       ` Jerin Jacob
  2024-02-02 12:39     ` [PATCH v3 10/11] eventdev: clarify docs on event object fields and op types Bruce Richardson
  2024-02-02 12:39     ` [PATCH v3 11/11] eventdev: drop comment for anon union from doxygen Bruce Richardson
  10 siblings, 1 reply; 123+ messages in thread
From: Bruce Richardson @ 2024-02-02 12:39 UTC (permalink / raw)
  To: dev, jerinj, mattias.ronnblom
  Cc: abdullah.sevincer, sachin.saxena, hemant.agrawal, pbhagavatula,
	pravin.pathak, Bruce Richardson

The description of ordered and atomic scheduling given in the eventdev
doxygen documentation was not always clear. Try to simplify this so
that it is clearer for the end user of the application.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>

---
V3: extensive rework following feedback. Please re-review!
---
 lib/eventdev/rte_eventdev.h | 73 +++++++++++++++++++++++--------------
 1 file changed, 45 insertions(+), 28 deletions(-)

diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
index a7d8c28015..8d72765ae7 100644
--- a/lib/eventdev/rte_eventdev.h
+++ b/lib/eventdev/rte_eventdev.h
@@ -1347,25 +1347,35 @@ struct rte_event_vector {
 /**< Ordered scheduling
  *
  * Events from an ordered flow of an event queue can be scheduled to multiple
- * ports for concurrent processing while maintaining the original event order.
+ * ports for concurrent processing while maintaining the original event order,
+ * i.e. the order in which they were first enqueued to that queue.
  * This scheme enables the user to achieve high single flow throughput by
- * avoiding SW synchronization for ordering between ports which bound to cores.
- *
- * The source flow ordering from an event queue is maintained when events are
- * enqueued to their destination queue within the same ordered flow context.
- * An event port holds the context until application call
- * rte_event_dequeue_burst() from the same port, which implicitly releases
- * the context.
- * User may allow the scheduler to release the context earlier than that
- * by invoking rte_event_enqueue_burst() with RTE_EVENT_OP_RELEASE operation.
- *
- * Events from the source queue appear in their original order when dequeued
- * from a destination queue.
- * Event ordering is based on the received event(s), but also other
- * (newly allocated or stored) events are ordered when enqueued within the same
- * ordered context. Events not enqueued (e.g. released or stored) within the
- * context are  considered missing from reordering and are skipped at this time
- * (but can be ordered again within another context).
+ * avoiding SW synchronization for ordering between ports which are polled
+ * by different cores.
+ *
+ * After events are dequeued from a set of ports, as those events are re-enqueued
+ * to another queue (with the op field set to @ref RTE_EVENT_OP_FORWARD), the event
+ * device restores the original event order - including events returned from all
+ * ports in the set - before the events arrive on the destination queue.
+ *
+ * Any events not forwarded, i.e. dropped explicitly via RELEASE or implicitly
+ * released by the next dequeue operation on a port, are skipped by the reordering
+ * stage and do not affect the reordering of other returned events.
+ *
+ * Any NEW events sent on a port are not ordered with respect to FORWARD events sent
+ * on the same port, since they have no original event order. They also are not
+ * ordered with respect to NEW events enqueued on other ports.
+ * However, NEW events to the same destination queue from the same port are guaranteed
+ * to be enqueued in the order they were submitted via rte_event_enqueue_burst().
+ *
+ * NOTE:
+ *   In restoring event order of forwarded events, the eventdev API guarantees that
+ *   all events from the same flow (i.e. same @ref rte_event.flow_id,
+ *   @ref rte_event.priority and @ref rte_event.queue_id) will be put in the original
+ *   order before being forwarded to the destination queue.
+ *   Some eventdevs may implement stricter ordering to achieve this aim,
+ *   for example, restoring the order across *all* flows dequeued from the same ORDERED
+ *   queue.
  *
  * @see rte_event_queue_setup(), rte_event_dequeue_burst(), RTE_EVENT_OP_RELEASE
  */
@@ -1373,18 +1383,25 @@ struct rte_event_vector {
 #define RTE_SCHED_TYPE_ATOMIC           1
 /**< Atomic scheduling
  *
- * Events from an atomic flow of an event queue can be scheduled only to a
+ * Events from an atomic flow, identified by a combination of @ref rte_event.flow_id,
+ * @ref rte_event.queue_id and @ref rte_event.priority, can be scheduled only to a
  * single port at a time. The port is guaranteed to have exclusive (atomic)
  * access to the associated flow context, which enables the user to avoid SW
- * synchronization. Atomic flows also help to maintain event ordering
- * since only one port at a time can process events from a flow of an
- * event queue.
- *
- * The atomic queue synchronization context is dedicated to the port until
- * application call rte_event_dequeue_burst() from the same port,
- * which implicitly releases the context. User may allow the scheduler to
- * release the context earlier than that by invoking rte_event_enqueue_burst()
- * with RTE_EVENT_OP_RELEASE operation.
+ * synchronization. Atomic flows also maintain event ordering
+ * since only one port at a time can process events from each flow of an
+ * event queue, and events within a flow are not reordered within the scheduler.
+ *
+ * An atomic flow is locked to a port when events from that flow are first
+ * scheduled to that port. That lock remains in place until the
+ * application calls rte_event_dequeue_burst() from the same port,
+ * which implicitly releases the lock (if @ref RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL flag is not set).
+ * User may allow the scheduler to release the lock earlier than that by invoking
+ * rte_event_enqueue_burst() with RTE_EVENT_OP_RELEASE operation for each event from that flow.
+ *
+ * NOTE: The lock is only released once the last event from the flow, outstanding on the port,
+ * is released. So long as there is one event from an atomic flow scheduled to
+ * a port/core (including any events in the port's dequeue queue, not yet read
+ * by the application), that port will hold the synchronization lock for that flow.
  *
  * @see rte_event_queue_setup(), rte_event_dequeue_burst(), RTE_EVENT_OP_RELEASE
  */
-- 
2.40.1


^ permalink raw reply	[flat|nested] 123+ messages in thread

* [PATCH v3 10/11] eventdev: clarify docs on event object fields and op types
  2024-02-02 12:39   ` [PATCH v3 00/11] improve eventdev API specification/documentation Bruce Richardson
                       ` (8 preceding siblings ...)
  2024-02-02 12:39     ` [PATCH v3 09/11] eventdev: improve comments on scheduling types Bruce Richardson
@ 2024-02-02 12:39     ` Bruce Richardson
  2024-02-09  9:14       ` Jerin Jacob
  2024-02-02 12:39     ` [PATCH v3 11/11] eventdev: drop comment for anon union from doxygen Bruce Richardson
  10 siblings, 1 reply; 123+ messages in thread
From: Bruce Richardson @ 2024-02-02 12:39 UTC (permalink / raw)
  To: dev, jerinj, mattias.ronnblom
  Cc: abdullah.sevincer, sachin.saxena, hemant.agrawal, pbhagavatula,
	pravin.pathak, Bruce Richardson

Clarify the meaning of the NEW, FORWARD and RELEASE event types.
For the fields in "rte_event" struct, enhance the comments on each to
clarify the field's use, and whether it is preserved between enqueue and
dequeue, and its role, if any, in scheduling.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
V3: updates following review
---
 lib/eventdev/rte_eventdev.h | 161 +++++++++++++++++++++++++-----------
 1 file changed, 111 insertions(+), 50 deletions(-)

diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
index 8d72765ae7..58219e027e 100644
--- a/lib/eventdev/rte_eventdev.h
+++ b/lib/eventdev/rte_eventdev.h
@@ -1463,47 +1463,54 @@ struct rte_event_vector {
 
 /* Event enqueue operations */
 #define RTE_EVENT_OP_NEW                0
-/**< The event producers use this operation to inject a new event to the
- * event device.
+/**< The @ref rte_event.op field must be set to this operation type to inject a new event,
+ * i.e. one not previously dequeued, into the event device, to be scheduled
+ * for processing.
  */
 #define RTE_EVENT_OP_FORWARD            1
-/**< The CPU use this operation to forward the event to different event queue or
- * change to new application specific flow or schedule type to enable
- * pipelining.
+/**< The application must set the @ref rte_event.op field to this operation type to return a
+ * previously dequeued event to the event device to be scheduled for further processing.
  *
- * This operation must only be enqueued to the same port that the
+ * This event *must* be enqueued to the same port that the
  * event to be forwarded was dequeued from.
+ *
+ * The event's fields, including (but not limited to) flow_id, scheduling type,
+ * destination queue, and event payload e.g. mbuf pointer, may all be updated as
+ * desired by the application, but the @ref rte_event.impl_opaque field must
+ * be kept to the same value as was present when the event was dequeued.
  */
 #define RTE_EVENT_OP_RELEASE            2
 /**< Release the flow context associated with the schedule type.
  *
- * If current flow's scheduler type method is *RTE_SCHED_TYPE_ATOMIC*
- * then this function hints the scheduler that the user has completed critical
- * section processing in the current atomic context.
- * The scheduler is now allowed to schedule events from the same flow from
- * an event queue to another port. However, the context may be still held
- * until the next rte_event_dequeue_burst() call, this call allows but does not
- * force the scheduler to release the context early.
- *
- * Early atomic context release may increase parallelism and thus system
+ * If current flow's scheduler type method is @ref RTE_SCHED_TYPE_ATOMIC
+ * then this operation type hints the scheduler that the user has completed critical
+ * section processing for this event in the current atomic context, and that the
+ * scheduler may unlock any atomic locks held for this event.
+ * If this is the last event from an atomic flow, i.e. all flow locks are released,
+ * the scheduler is now allowed to schedule events from that flow to another port.
+ * However, the atomic locks may be still held until the next rte_event_dequeue_burst()
+ * call; enqueuing an event with op type @ref RTE_EVENT_OP_RELEASE allows,
+ * but does not force, the scheduler to release the atomic locks early.
+ *
+ * Early atomic lock release may increase parallelism and thus system
  * performance, but the user needs to design carefully the split into critical
  * vs non-critical sections.
  *
- * If current flow's scheduler type method is *RTE_SCHED_TYPE_ORDERED*
- * then this function hints the scheduler that the user has done all that need
- * to maintain event order in the current ordered context.
- * The scheduler is allowed to release the ordered context of this port and
- * avoid reordering any following enqueues.
- *
- * Early ordered context release may increase parallelism and thus system
- * performance.
+ * If current flow's scheduler type method is @ref RTE_SCHED_TYPE_ORDERED
+ * then this operation type informs the scheduler that the current event has
+ * completed processing and will not be returned to the scheduler, i.e.
+ * it has been dropped, and so the reordering context for that event
+ * should be considered filled.
  *
- * If current flow's scheduler type method is *RTE_SCHED_TYPE_PARALLEL*
- * or no scheduling context is held then this function may be an NOOP,
- * depending on the implementation.
+ * Events with this operation type must only be enqueued to the same port that the
+ * event to be released was dequeued from. The @ref rte_event.impl_opaque
+ * field in the release event must have the same value as that in the original dequeued event.
  *
- * This operation must only be enqueued to the same port that the
- * event to be released was dequeued from.
+ * If a dequeued event is re-enqueued with operation type of @ref RTE_EVENT_OP_RELEASE,
+ * then any subsequent enqueue of that event - or a copy of it - must be done as an event of type
+ * @ref RTE_EVENT_OP_NEW, not @ref RTE_EVENT_OP_FORWARD. This is because any context for
+ * the originally dequeued event, i.e. atomic locks, or reorder buffer entries, will have
+ * been removed or invalidated by the release operation.
  */
 
 /**
@@ -1517,56 +1524,110 @@ struct rte_event {
 		/** Event attributes for dequeue or enqueue operation */
 		struct {
 			uint32_t flow_id:20;
-			/**< Targeted flow identifier for the enqueue and
-			 * dequeue operation.
-			 * The value must be in the range of
-			 * [0, nb_event_queue_flows - 1] which
-			 * previously supplied to rte_event_dev_configure().
+			/**< Target flow identifier for the enqueue and dequeue operation.
+			 *
+			 * For @ref RTE_SCHED_TYPE_ATOMIC, this field is used to identify a
+			 * flow for atomicity within a queue & priority level, such that events
+			 * from each individual flow will only be scheduled to one port at a time.
+			 *
+			 * This field is preserved between enqueue and dequeue when
+			 * a device reports the @ref RTE_EVENT_DEV_CAP_CARRY_FLOW_ID
+			 * capability. Otherwise the value is implementation dependent
+			 * on dequeue.
 			 */
 			uint32_t sub_event_type:8;
 			/**< Sub-event types based on the event source.
+			 *
+			 * This field is preserved between enqueue and dequeue.
+			 * This field is for application or event adapter use,
+			 * and is not considered in scheduling decisions.
+			 *
 			 * @see RTE_EVENT_TYPE_CPU
 			 */
 			uint32_t event_type:4;
-			/**< Event type to classify the event source.
-			 * @see RTE_EVENT_TYPE_ETHDEV, (RTE_EVENT_TYPE_*)
+			/**< Event type to classify the event source. (RTE_EVENT_TYPE_*)
+			 *
+			 * This field is preserved between enqueue and dequeue.
+			 * This field is for application or event adapter use,
+			 * and is not considered in scheduling decisions.
 			 */
 			uint8_t op:2;
-			/**< The type of event enqueue operation - new/forward/
-			 * etc.This field is not preserved across an instance
-			 * and is undefined on dequeue.
-			 * @see RTE_EVENT_OP_NEW, (RTE_EVENT_OP_*)
+			/**< The type of event enqueue operation - new/forward/etc.
+			 *
+			 * This field is *not* preserved across an instance
+			 * and is implementation dependent on dequeue.
+			 *
+			 * @see RTE_EVENT_OP_NEW
+			 * @see RTE_EVENT_OP_FORWARD
+			 * @see RTE_EVENT_OP_RELEASE
 			 */
 			uint8_t rsvd:4;
-			/**< Reserved for future use */
+			/**< Reserved for future use.
+			 *
+			 * Should be set to zero on enqueue.
+			 */
 			uint8_t sched_type:2;
 			/**< Scheduler synchronization type (RTE_SCHED_TYPE_*)
 			 * associated with flow id on a given event queue
 			 * for the enqueue and dequeue operation.
+			 *
+			 * This field is used to determine the scheduling type
+			 * for events sent to queues where @ref RTE_EVENT_QUEUE_CFG_ALL_TYPES
+			 * is configured.
+			 * For queues where only a single scheduling type is available,
+			 * this field must be set to match the configured scheduling type.
+			 *
+			 * This field is preserved between enqueue and dequeue.
+			 *
+			 * @see RTE_SCHED_TYPE_ORDERED
+			 * @see RTE_SCHED_TYPE_ATOMIC
+			 * @see RTE_SCHED_TYPE_PARALLEL
 			 */
 			uint8_t queue_id;
 			/**< Targeted event queue identifier for the enqueue or
 			 * dequeue operation.
-			 * The value must be in the range of
-			 * [0, nb_event_queues - 1] which previously supplied to
-			 * rte_event_dev_configure().
+			 * The value must be less than @ref rte_event_dev_config.nb_event_queues
+			 * which was previously supplied to rte_event_dev_configure().
+			 *
+			 * This field is preserved between enqueue and dequeue.
 			 */
 			uint8_t priority;
 			/**< Event priority relative to other events in the
 			 * event queue. The requested priority should in the
-			 * range of  [RTE_EVENT_DEV_PRIORITY_HIGHEST,
-			 * RTE_EVENT_DEV_PRIORITY_LOWEST].
+			 * range of  [@ref RTE_EVENT_DEV_PRIORITY_HIGHEST,
+			 * @ref RTE_EVENT_DEV_PRIORITY_LOWEST].
+			 *
 			 * The implementation shall normalize the requested
 			 * priority to supported priority value.
-			 * Valid when the device has
-			 * RTE_EVENT_DEV_CAP_EVENT_QOS capability.
+			 * [For devices where the supported priority range is a power-of-2, the
+			 * normalization will be done via bit-shifting, so only the highest
+			 * log2(num_priorities) bits will be used by the event device]
+			 *
+			 * Valid when the device has @ref RTE_EVENT_DEV_CAP_EVENT_QOS capability
+			 * and this field is preserved between enqueue and dequeue,
+			 * though with possible loss of precision due to normalization and
+			 * subsequent de-normalization. (For example, if a device only supports 8
+			 * priority levels, only the high 3 bits of this field will be
+			 * used by that device, and hence only the value of those 3 bits are
+			 * guaranteed to be preserved between enqueue and dequeue.)
+			 *
+			 * Ignored when device does not support @ref RTE_EVENT_DEV_CAP_EVENT_QOS
+			 * capability, and it is implementation dependent if this field is preserved
+			 * between enqueue and dequeue.
 			 */
 			uint8_t impl_opaque;
-			/**< Implementation specific opaque value.
-			 * An implementation may use this field to hold
+			/**< Opaque field for event device use.
+			 *
+			 * An event driver implementation may use this field to hold an
 			 * implementation specific value to share between
 			 * dequeue and enqueue operation.
-			 * The application should not modify this field.
+			 *
+			 * The application must not modify this field.
+			 * Its value is implementation dependent on dequeue,
+			 * and must be returned unmodified on enqueue when
+			 * op type is @ref RTE_EVENT_OP_FORWARD or @ref RTE_EVENT_OP_RELEASE.
+			 * This field is ignored on events with op type
+			 * @ref RTE_EVENT_OP_NEW.
 			 */
 		};
 	};
-- 
2.40.1


^ permalink raw reply	[flat|nested] 123+ messages in thread

* [PATCH v3 11/11] eventdev: drop comment for anon union from doxygen
  2024-02-02 12:39   ` [PATCH v3 00/11] improve eventdev API specification/documentation Bruce Richardson
                       ` (9 preceding siblings ...)
  2024-02-02 12:39     ` [PATCH v3 10/11] eventdev: clarify docs on event object fields and op types Bruce Richardson
@ 2024-02-02 12:39     ` Bruce Richardson
  10 siblings, 0 replies; 123+ messages in thread
From: Bruce Richardson @ 2024-02-02 12:39 UTC (permalink / raw)
  To: dev, jerinj, mattias.ronnblom
  Cc: abdullah.sevincer, sachin.saxena, hemant.agrawal, pbhagavatula,
	pravin.pathak, Bruce Richardson

Make the comments on the unnamed unions in the rte_event structure
regular comments rather than doxygen ones. The comments do not add
anything meaningful to the doxygen output.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
 lib/eventdev/rte_eventdev.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
index 58219e027e..e31c927905 100644
--- a/lib/eventdev/rte_eventdev.h
+++ b/lib/eventdev/rte_eventdev.h
@@ -1518,7 +1518,7 @@ struct rte_event_vector {
  * for dequeue and enqueue operation
  */
 struct rte_event {
-	/** WORD0 */
+	/* WORD0 */
 	union {
 		uint64_t event;
 		/** Event attributes for dequeue or enqueue operation */
@@ -1631,7 +1631,7 @@ struct rte_event {
 			 */
 		};
 	};
-	/** WORD1 */
+	/* WORD1 */
 	union {
 		uint64_t u64;
 		/**< Opaque 64-bit value */
-- 
2.40.1


^ permalink raw reply	[flat|nested] 123+ messages in thread

* Re: [PATCH v3 01/11] eventdev: improve doxygen introduction text
  2024-02-02 12:39     ` [PATCH v3 01/11] eventdev: improve doxygen introduction text Bruce Richardson
@ 2024-02-07 10:14       ` Jerin Jacob
  2024-02-08  9:50         ` Mattias Rönnblom
  2024-02-20 16:33         ` Bruce Richardson
  0 siblings, 2 replies; 123+ messages in thread
From: Jerin Jacob @ 2024-02-07 10:14 UTC (permalink / raw)
  To: Bruce Richardson
  Cc: dev, jerinj, mattias.ronnblom, abdullah.sevincer, sachin.saxena,
	hemant.agrawal, pbhagavatula, pravin.pathak

On Fri, Feb 2, 2024 at 7:29 PM Bruce Richardson
<bruce.richardson@intel.com> wrote:
>
> Make some textual improvements to the introduction to eventdev and event
> devices in the eventdev header file. This text appears in the doxygen
> output for the header file, and introduces the key concepts, for
> example: events, event devices, queues, ports and scheduling.
>
> This patch makes the following improvements:
> * small textual fixups, e.g. correcting use of singular/plural
> * rewrites of some sentences to improve clarity
> * using doxygen markdown to split the whole large block up into
>   sections, thereby making it easier to read.
>
> No large-scale changes are made, and blocks are not reordered
>
> Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>

Thanks Bruce, while you are cleaning up, please add the following or a
similar change to fix doxygen not properly parsing struct
rte_event_vector, i.e. its members currently come out as global
variables in the html files.

l[dpdk.org] $ git diff
diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
index e31c927905..ce4a195a8f 100644
--- a/lib/eventdev/rte_eventdev.h
+++ b/lib/eventdev/rte_eventdev.h
@@ -1309,9 +1309,9 @@ struct rte_event_vector {
                 */
                struct {
                        uint16_t port;
-                       /* Ethernet device port id. */
+                       /**< Ethernet device port id. */
                        uint16_t queue;
-                       /* Ethernet device queue id. */
+                       /**< Ethernet device queue id. */
                };
        };
        /**< Union to hold common attributes of the vector array. */
@@ -1340,7 +1340,11 @@ struct rte_event_vector {
         * vector array can be an array of mbufs or pointers or opaque u64
         * values.
         */
+#ifndef __DOXYGEN__
 } __rte_aligned(16);
+#else
+};
+#endif

 /* Scheduler type definitions */
 #define RTE_SCHED_TYPE_ORDERED          0

>
> ---
> V3: reworked following feedback from Mattias
> ---
>  lib/eventdev/rte_eventdev.h | 132 ++++++++++++++++++++++--------------
>  1 file changed, 81 insertions(+), 51 deletions(-)
>
> diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
> index ec9b02455d..a741832e8e 100644
> --- a/lib/eventdev/rte_eventdev.h
> +++ b/lib/eventdev/rte_eventdev.h
> @@ -12,25 +12,33 @@
>   * @file
>   *
>   * RTE Event Device API
> + * ====================
>   *
> - * In a polling model, lcores poll ethdev ports and associated rx queues
> - * directly to look for packet. In an event driven model, by contrast, lcores
> - * call the scheduler that selects packets for them based on programmer
> - * specified criteria. Eventdev library adds support for event driven
> - * programming model, which offer applications automatic multicore scaling,
> - * dynamic load balancing, pipelining, packet ingress order maintenance and
> - * synchronization services to simplify application packet processing.
> + * In a traditional run-to-completion application model, lcores pick up packets

Can we keep it as poll mode instead of run-to-completion, as event mode also
supports run to completion by having dequeue() and then Tx.

> + * from Ethdev ports and associated RX queues, run the packet processing to completion,
> + * and enqueue the completed packets to a TX queue. NIC-level receive-side scaling (RSS)
> + * may be used to balance the load across multiple CPU cores.
> + *
> + * In contrast, in an event-driven model, as supported by this "eventdev" library,
> + * incoming packets are fed into an event device, which schedules those packets across

packets -> events. We may need to bring in the Rx adapter if the event is a packet.

> + * the available lcores, in accordance with its configuration.
> + * This event-driven programming model offers applications automatic multicore scaling,
> + * dynamic load balancing, pipelining, packet order maintenance, synchronization,
> + * and prioritization/quality of service.
>   *
>   * The Event Device API is composed of two parts:
>   *
>   * - The application-oriented Event API that includes functions to setup
>   *   an event device (configure it, setup its queues, ports and start it), to
> - *   establish the link between queues to port and to receive events, and so on.
> + *   establish the links between queues and ports to receive events, and so on.
>   *
>   * - The driver-oriented Event API that exports a function allowing
> - *   an event poll Mode Driver (PMD) to simultaneously register itself as
> + *   an event poll Mode Driver (PMD) to register itself as
>   *   an event device driver.
>   *
> + * Application-oriented Event API
> + * ------------------------------
> + *
>   * Event device components:
>   *
>   *                     +-----------------+
> @@ -75,27 +83,39 @@
>   *            |                                                           |
>   *            +-----------------------------------------------------------+
>   *
> - * Event device: A hardware or software-based event scheduler.
> + * **Event device**: A hardware or software-based event scheduler.
>   *
> - * Event: A unit of scheduling that encapsulates a packet or other datatype
> - * like SW generated event from the CPU, Crypto work completion notification,
> - * Timer expiry event notification etc as well as metadata.
> - * The metadata includes flow ID, scheduling type, event priority, event_type,
> - * sub_event_type etc.
> + * **Event**: Represents an item of work and is the smallest unit of scheduling.
> + * An event carries metadata, such as queue ID, scheduling type, and event priority,
> + * and data such as one or more packets or other kinds of buffers.
> + * Some examples of events are:
> + * - a software-generated item of work originating from a lcore,

lcore.

> + *   perhaps carrying a packet to be processed,

processed.

> + * - a crypto work completion notification

notification.

> + * - a timer expiry notification.
>   *
> - * Event queue: A queue containing events that are scheduled by the event dev.
> + * **Event queue**: A queue containing events that are scheduled by the event device.

Shouldn't we add "to be" or so?
i.e
A queue containing events that are to be scheduled by the event device.

>   * An event queue contains events of different flows associated with scheduling
>   * types, such as atomic, ordered, or parallel.
> + * Each event given to an event device must have a valid event queue id field in the metadata,
> + * to specify on which event queue in the device the event must be placed,
> + * for later scheduling.
>   *
> - * Event port: An application's interface into the event dev for enqueue and
> + * **Event port**: An application's interface into the event dev for enqueue and
>   * dequeue operations. Each event port can be linked with one or more
>   * event queues for dequeue operations.
> - *
> - * By default, all the functions of the Event Device API exported by a PMD
> - * are lock-free functions which assume to not be invoked in parallel on
> - * different logical cores to work on the same target object. For instance,
> - * the dequeue function of a PMD cannot be invoked in parallel on two logical
> - * cores to operates on same  event port. Of course, this function
> + * Enqueue and dequeue from a port is not thread-safe, and the expected use-case is
> + * that each port is polled by only a single lcore. [If this is not the case,
> + * a suitable synchronization mechanism should be used to prevent simultaneous
> + * access from multiple lcores.]
> + * To schedule events to an lcore, the event device will schedule them to the event port(s)
> + * being polled by that lcore.
> + *
> + * *NOTE*: By default, all the functions of the Event Device API exported by a PMD
> + * are non-thread-safe functions, which must not be invoked on the same object in parallel on
> + * different logical cores.
> + * For instance, the dequeue function of a PMD cannot be invoked in parallel on two logical
> + * cores to operate on the same event port. Of course, this function
>   * can be invoked in parallel by different logical cores on different ports.
>   * It is the responsibility of the upper level application to enforce this rule.
>   *
> @@ -107,22 +127,19 @@
>   *
>   * Event devices are dynamically registered during the PCI/SoC device probing
>   * phase performed at EAL initialization time.
> - * When an Event device is being probed, a *rte_event_dev* structure and
> - * a new device identifier are allocated for that device. Then, the
> - * event_dev_init() function supplied by the Event driver matching the probed
> - * device is invoked to properly initialize the device.
> + * When an Event device is being probed, an *rte_event_dev* structure is allocated
> + * for it and the event_dev_init() function supplied by the Event driver
> + * is invoked to properly initialize the device.
>   *
> - * The role of the device init function consists of resetting the hardware or
> - * software event driver implementations.
> + * The role of the device init function is to reset the device hardware or
> + * to initialize the software event driver implementation.
>   *
> - * If the device init operation is successful, the correspondence between
> - * the device identifier assigned to the new device and its associated
> - * *rte_event_dev* structure is effectively registered.
> - * Otherwise, both the *rte_event_dev* structure and the device identifier are
> - * freed.
> + * If the device init operation is successful, the device is assigned a device
> + * id (dev_id) for application use.
> + * Otherwise, the *rte_event_dev* structure is freed.
>   *
>   * The functions exported by the application Event API to setup a device
> - * designated by its device identifier must be invoked in the following order:
> + * must be invoked in the following order:
>   *     - rte_event_dev_configure()
>   *     - rte_event_queue_setup()
>   *     - rte_event_port_setup()
> @@ -130,10 +147,15 @@
>   *     - rte_event_dev_start()
>   *
>   * Then, the application can invoke, in any order, the functions
> - * exported by the Event API to schedule events, dequeue events, enqueue events,
> - * change event queue(s) to event port [un]link establishment and so on.
> - *
> - * Application may use rte_event_[queue/port]_default_conf_get() to get the
> + * exported by the Event API to dequeue events, enqueue events,
> + * and link and unlink event queue(s) to event ports.
> + *
> + * Before configuring a device, an application should call rte_event_dev_info_get()
> + * to determine the capabilities of the event device, and any queue or port
> + * limits of that device. The parameters set in the various device configuration
> + * structures may need to be adjusted based on the max values provided in the
> + * device information structure returned from the info_get API.

Can we add the full name of info_get()?

^ permalink raw reply	[flat|nested] 123+ messages in thread

* Re: [PATCH v3 03/11] eventdev: update documentation on device capability flags
  2024-02-02 12:39     ` [PATCH v3 03/11] eventdev: update documentation on device capability flags Bruce Richardson
@ 2024-02-07 10:30       ` Jerin Jacob
  2024-02-20 16:42         ` Bruce Richardson
  0 siblings, 1 reply; 123+ messages in thread
From: Jerin Jacob @ 2024-02-07 10:30 UTC (permalink / raw)
  To: Bruce Richardson
  Cc: dev, jerinj, mattias.ronnblom, abdullah.sevincer, sachin.saxena,
	hemant.agrawal, pbhagavatula, pravin.pathak

On Sat, Feb 3, 2024 at 12:59 PM Bruce Richardson
<bruce.richardson@intel.com> wrote:
>
> Update the device capability docs, to:
>
> * include more cross-references
> * split longer text into paragraphs, in most cases with each flag having
>   a single-line summary at the start of the doc block
> * general comment rewording and clarification as appropriate
>
> Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
> ---
> V3: Updated following feedback from Mattias
> ---
>  lib/eventdev/rte_eventdev.h | 130 +++++++++++++++++++++++++-----------
>  1 file changed, 92 insertions(+), 38 deletions(-)

>   */
>  #define RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED   (1ULL << 2)
>  /**< Event device operates in distributed scheduling mode.
> + *
>   * In distributed scheduling mode, event scheduling happens in HW or
> - * rte_event_dequeue_burst() or the combination of these two.
> + * rte_event_dequeue_burst() / rte_event_enqueue_burst() or the combination of these two.
>   * If the flag is not set then eventdev is centralized and thus needs a
>   * dedicated service core that acts as a scheduling thread .

Please remove space between thread and . in the existing code.

>   *
> - * @see rte_event_dequeue_burst()
> + * @see rte_event_dev_service_id_get

Could you add () around all the functions so that it looks consistent across the series?


>   */
>  #define RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES     (1ULL << 3)
>  /**< Event device is capable of enqueuing events of any type to any queue.
> - * If this capability is not set, the queue only supports events of the
> - *  *RTE_SCHED_TYPE_* type that it was created with.
>   *
> - * @see RTE_SCHED_TYPE_* values
> + * If this capability is not set, each queue only supports events of the
> + * *RTE_SCHED_TYPE_* type that it was created with.
> + * The behaviour when events of other scheduling types are sent to the queue is
> + * currently undefined.

I think, in header file, we can remove "currently"


>   */
>
>  #define RTE_EVENT_DEV_CAP_PROFILE_LINK (1ULL << 12)
> -/**< Event device is capable of supporting multiple link profiles per event port
> - * i.e., the value of `rte_event_dev_info::max_profiles_per_port` is greater
> - * than one.
> +/**< Event device is capable of supporting multiple link profiles per event port.
> + *
> + *

The above line can be removed.

> + * When set, the value of `rte_event_dev_info::max_profiles_per_port` is greater
> + * than one, and multiple profiles may be configured and then switched at runtime.
> + * If not set, only a single profile may be configured, which may itself be
> + * runtime adjustable (if @ref RTE_EVENT_DEV_CAP_RUNTIME_PORT_LINK is set).
> + *
> + * @see rte_event_port_profile_links_set rte_event_port_profile_links_get
> + * @see rte_event_port_profile_switch
> + * @see RTE_EVENT_DEV_CAP_RUNTIME_PORT_LINK
>   */
>

^ permalink raw reply	[flat|nested] 123+ messages in thread

* Re: [PATCH v3 09/11] eventdev: improve comments on scheduling types
  2024-02-02 12:39     ` [PATCH v3 09/11] eventdev: improve comments on scheduling types Bruce Richardson
@ 2024-02-08  9:18       ` Jerin Jacob
  2024-02-08 10:04         ` Mattias Rönnblom
  0 siblings, 1 reply; 123+ messages in thread
From: Jerin Jacob @ 2024-02-08  9:18 UTC (permalink / raw)
  To: Bruce Richardson
  Cc: dev, jerinj, mattias.ronnblom, abdullah.sevincer, sachin.saxena,
	hemant.agrawal, pbhagavatula, pravin.pathak

On Fri, Feb 2, 2024 at 6:11 PM Bruce Richardson
<bruce.richardson@intel.com> wrote:
>
> The description of ordered and atomic scheduling given in the eventdev
> doxygen documentation was not always clear. Try and simplify this so
> that it is clearer for the end-user of the application
>
> Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
>
> ---
> V3: extensive rework following feedback. Please re-review!
> ---
>  lib/eventdev/rte_eventdev.h | 73 +++++++++++++++++++++++--------------
>  1 file changed, 45 insertions(+), 28 deletions(-)
>
> diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
> index a7d8c28015..8d72765ae7 100644
> --- a/lib/eventdev/rte_eventdev.h
> +++ b/lib/eventdev/rte_eventdev.h
> @@ -1347,25 +1347,35 @@ struct rte_event_vector {
>  /**< Ordered scheduling
>   *
>   * Events from an ordered flow of an event queue can be scheduled to multiple
> - * ports for concurrent processing while maintaining the original event order.
> + * ports for concurrent processing while maintaining the original event order,
> + * i.e. the order in which they were first enqueued to that queue.
>   * This scheme enables the user to achieve high single flow throughput by
> - * avoiding SW synchronization for ordering between ports which bound to cores.
> - *
> - * The source flow ordering from an event queue is maintained when events are
> - * enqueued to their destination queue within the same ordered flow context.
> - * An event port holds the context until application call
> - * rte_event_dequeue_burst() from the same port, which implicitly releases
> - * the context.
> - * User may allow the scheduler to release the context earlier than that
> - * by invoking rte_event_enqueue_burst() with RTE_EVENT_OP_RELEASE operation.
> - *
> - * Events from the source queue appear in their original order when dequeued
> - * from a destination queue.
> - * Event ordering is based on the received event(s), but also other
> - * (newly allocated or stored) events are ordered when enqueued within the same
> - * ordered context. Events not enqueued (e.g. released or stored) within the
> - * context are  considered missing from reordering and are skipped at this time
> - * (but can be ordered again within another context).
> + * avoiding SW synchronization for ordering between ports which are polled
> + * by different cores.

I prefer the following version to remove "polled" and to be more explicit.

avoiding SW synchronization for ordering between ports which are
dequeuing events
using @ref rte_event_dequeue_burst() across different cores.

> + *
> + * After events are dequeued from a set of ports, as those events are re-enqueued
> + * to another queue (with the op field set to @ref RTE_EVENT_OP_FORWARD), the event
> + * device restores the original event order - including events returned from all
> + * ports in the set - before the events arrive on the destination queue.

_arrive_ is a bit vague since we have an enqueue operation. How about,
"before the events are actually deposited on the destination queue."


> + *
> + * Any events not forwarded i.e. dropped explicitly via RELEASE or implicitly
> + * released by the next dequeue operation on a port, are skipped by the reordering
> + * stage and do not affect the reordering of other returned events.
> + *
> + * Any NEW events sent on a port are not ordered with respect to FORWARD events sent
> + * on the same port, since they have no original event order. They also are not
> + * ordered with respect to NEW events enqueued on other ports.
> + * However, NEW events to the same destination queue from the same port are guaranteed
> + * to be enqueued in the order they were submitted via rte_event_enqueue_burst().
> + *
> + * NOTE:
> + *   In restoring event order of forwarded events, the eventdev API guarantees that
> + *   all events from the same flow (i.e. same @ref rte_event.flow_id,
> + *   @ref rte_event.priority and @ref rte_event.queue_id) will be put in the original
> + *   order before being forwarded to the destination queue.
> + *   Some eventdevs may implement stricter ordering to achieve this aim,
> + *   for example, restoring the order across *all* flows dequeued from the same ORDERED
> + *   queue.
>   *
>   * @see rte_event_queue_setup(), rte_event_dequeue_burst(), RTE_EVENT_OP_RELEASE
>   */
> @@ -1373,18 +1383,25 @@ struct rte_event_vector {
>  #define RTE_SCHED_TYPE_ATOMIC           1
>  /**< Atomic scheduling
>   *
> - * Events from an atomic flow of an event queue can be scheduled only to a
> + * Events from an atomic flow, identified by a combination of @ref rte_event.flow_id,
> + * @ref rte_event.queue_id and @ref rte_event.priority, can be scheduled only to a
>   * single port at a time. The port is guaranteed to have exclusive (atomic)
>   * access to the associated flow context, which enables the user to avoid SW
> - * synchronization. Atomic flows also help to maintain event ordering
> - * since only one port at a time can process events from a flow of an
> - * event queue.
> - *
> - * The atomic queue synchronization context is dedicated to the port until
> - * application call rte_event_dequeue_burst() from the same port,
> - * which implicitly releases the context. User may allow the scheduler to
> - * release the context earlier than that by invoking rte_event_enqueue_burst()
> - * with RTE_EVENT_OP_RELEASE operation.
> + * synchronization. Atomic flows also maintain event ordering
> + * since only one port at a time can process events from each flow of an
> + * event queue, and events within a flow are not reordered within the scheduler.
> + *
> + * An atomic flow is locked to a port when events from that flow are first
> + * scheduled to that port. That lock remains in place until the
> + * application calls rte_event_dequeue_burst() from the same port,
> + * which implicitly releases the lock (if @ref RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL flag is not set).
> + * User may allow the scheduler to release the lock earlier than that by invoking
> + * rte_event_enqueue_burst() with RTE_EVENT_OP_RELEASE operation for each event from that flow.
> + *
> + * NOTE: The lock is only released once the last event from the flow, outstanding on the port,

I think the NOTE can start with something like the below,

When there are multiple atomic events dequeued from @ref
rte_event_dequeue_burst()
for the same event queue, and they have the same flow id, then the lock is ....

> + * is released. So long as there is one event from an atomic flow scheduled to
> + * a port/core (including any events in the port's dequeue queue, not yet read
> + * by the application), that port will hold the synchronization lock for that flow.
>   *
>   * @see rte_event_queue_setup(), rte_event_dequeue_burst(), RTE_EVENT_OP_RELEASE
>   */
> --
> 2.40.1
>

^ permalink raw reply	[flat|nested] 123+ messages in thread

* Re: [PATCH v3 01/11] eventdev: improve doxygen introduction text
  2024-02-07 10:14       ` Jerin Jacob
@ 2024-02-08  9:50         ` Mattias Rönnblom
  2024-02-09  8:43           ` Jerin Jacob
  2024-02-20 16:33         ` Bruce Richardson
  1 sibling, 1 reply; 123+ messages in thread
From: Mattias Rönnblom @ 2024-02-08  9:50 UTC (permalink / raw)
  To: Jerin Jacob, Bruce Richardson
  Cc: dev, jerinj, mattias.ronnblom, abdullah.sevincer, sachin.saxena,
	hemant.agrawal, pbhagavatula, pravin.pathak

On 2024-02-07 11:14, Jerin Jacob wrote:
> On Fri, Feb 2, 2024 at 7:29 PM Bruce Richardson
> <bruce.richardson@intel.com> wrote:
>>
>> Make some textual improvements to the introduction to eventdev and event
>> devices in the eventdev header file. This text appears in the doxygen
>> output for the header file, and introduces the key concepts, for
>> example: events, event devices, queues, ports and scheduling.
>>
>> This patch makes the following improvements:
>> * small textual fixups, e.g. correcting use of singular/plural
>> * rewrites of some sentences to improve clarity
>> * using doxygen markdown to split the whole large block up into
>>    sections, thereby making it easier to read.
>>
>> No large-scale changes are made, and blocks are not reordered
>>
>> Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
> 
> Thanks Bruce, while you are cleaning up, please add the following or a
> similar change to fix doxygen not properly parsing struct
> rte_event_vector, i.e. its members currently come out as global
> variables in the html files.
> 
> l[dpdk.org] $ git diff
> diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
> index e31c927905..ce4a195a8f 100644
> --- a/lib/eventdev/rte_eventdev.h
> +++ b/lib/eventdev/rte_eventdev.h
> @@ -1309,9 +1309,9 @@ struct rte_event_vector {
>                   */
>                  struct {
>                          uint16_t port;
> -                       /* Ethernet device port id. */
> +                       /**< Ethernet device port id. */
>                          uint16_t queue;
> -                       /* Ethernet device queue id. */
> +                       /**< Ethernet device queue id. */
>                  };
>          };
>          /**< Union to hold common attributes of the vector array. */
> @@ -1340,7 +1340,11 @@ struct rte_event_vector {
>           * vector array can be an array of mbufs or pointers or opaque u64
>           * values.
>           */
> +#ifndef __DOXYGEN__
>   } __rte_aligned(16);
> +#else
> +};
> +#endif
> 
>   /* Scheduler type definitions */
>   #define RTE_SCHED_TYPE_ORDERED          0
> 
>>
>> ---
>> V3: reworked following feedback from Mattias
>> ---
>>   lib/eventdev/rte_eventdev.h | 132 ++++++++++++++++++++++--------------
>>   1 file changed, 81 insertions(+), 51 deletions(-)
>>
>> diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
>> index ec9b02455d..a741832e8e 100644
>> --- a/lib/eventdev/rte_eventdev.h
>> +++ b/lib/eventdev/rte_eventdev.h
>> @@ -12,25 +12,33 @@
>>    * @file
>>    *
>>    * RTE Event Device API
>> + * ====================
>>    *
>> - * In a polling model, lcores poll ethdev ports and associated rx queues
>> - * directly to look for packet. In an event driven model, by contrast, lcores
>> - * call the scheduler that selects packets for them based on programmer
>> - * specified criteria. Eventdev library adds support for event driven
>> - * programming model, which offer applications automatic multicore scaling,
>> - * dynamic load balancing, pipelining, packet ingress order maintenance and
>> - * synchronization services to simplify application packet processing.
>> + * In a traditional run-to-completion application model, lcores pick up packets
> 
> Can we keep it as poll mode instead of run-to-completion, as event mode also
> supports run to completion by having dequeue() and then Tx.
> 

A "traditional" DPDK app is both polling and run-to-completion. You 
could always add "polling" somewhere, but "run-to-completion" in that 
context serves a purpose, imo.

A single-stage eventdev-based pipeline will also process packets in a 
run-to-completion fashion. In such a scenario, the difference between 
eventdev and the "tradition" lies in the (ingress-only) load balancing 
mechanism used (which the below note on the "traditional" use of RSS 
indicates).

>> + * from Ethdev ports and associated RX queues, run the packet processing to completion,
>> + * and enqueue the completed packets to a TX queue. NIC-level receive-side scaling (RSS)
>> + * may be used to balance the load across multiple CPU cores.
>> + *
>> + * In contrast, in an event-driven model, as supported by this "eventdev" library,
>> + * incoming packets are fed into an event device, which schedules those packets across
> 
> packets -> events. We may need to bring in the Rx adapter if the event is a packet.
> 
>> + * the available lcores, in accordance with its configuration.
>> + * This event-driven programming model offers applications automatic multicore scaling,
>> + * dynamic load balancing, pipelining, packet order maintenance, synchronization,
>> + * and prioritization/quality of service.
>>    *
>>    * The Event Device API is composed of two parts:
>>    *
>>    * - The application-oriented Event API that includes functions to setup
>>    *   an event device (configure it, setup its queues, ports and start it), to
>> - *   establish the link between queues to port and to receive events, and so on.
>> + *   establish the links between queues and ports to receive events, and so on.
>>    *
>>    * - The driver-oriented Event API that exports a function allowing
>> - *   an event poll Mode Driver (PMD) to simultaneously register itself as
>> + *   an event poll Mode Driver (PMD) to register itself as
>>    *   an event device driver.
>>    *
>> + * Application-oriented Event API
>> + * ------------------------------
>> + *
>>    * Event device components:
>>    *
>>    *                     +-----------------+
>> @@ -75,27 +83,39 @@
>>    *            |                                                           |
>>    *            +-----------------------------------------------------------+
>>    *
>> - * Event device: A hardware or software-based event scheduler.
>> + * **Event device**: A hardware or software-based event scheduler.
>>    *
>> - * Event: A unit of scheduling that encapsulates a packet or other datatype
>> - * like SW generated event from the CPU, Crypto work completion notification,
>> - * Timer expiry event notification etc as well as metadata.
>> - * The metadata includes flow ID, scheduling type, event priority, event_type,
>> - * sub_event_type etc.
>> + * **Event**: Represents an item of work and is the smallest unit of scheduling.
>> + * An event carries metadata, such as queue ID, scheduling type, and event priority,
>> + * and data such as one or more packets or other kinds of buffers.
>> + * Some examples of events are:
>> + * - a software-generated item of work originating from a lcore,
> 
> lcore.
> 
>> + *   perhaps carrying a packet to be processed,
> 
> processed.
> 
>> + * - a crypto work completion notification
> 
> notification.
> 
>> + * - a timer expiry notification.
>>    *
>> - * Event queue: A queue containing events that are scheduled by the event dev.
>> + * **Event queue**: A queue containing events that are scheduled by the event device.
> 
> Shouldn't we add "to be" or so?
> i.e
> A queue containing events that are to be scheduled by the event device.
> 
>>    * An event queue contains events of different flows associated with scheduling
>>    * types, such as atomic, ordered, or parallel.
>> + * Each event given to an event device must have a valid event queue id field in the metadata,
>> + * to specify on which event queue in the device the event must be placed,
>> + * for later scheduling.
>>    *
>> - * Event port: An application's interface into the event dev for enqueue and
>> + * **Event port**: An application's interface into the event dev for enqueue and
>>    * dequeue operations. Each event port can be linked with one or more
>>    * event queues for dequeue operations.
>> - *
>> - * By default, all the functions of the Event Device API exported by a PMD
>> - * are lock-free functions which assume to not be invoked in parallel on
>> - * different logical cores to work on the same target object. For instance,
>> - * the dequeue function of a PMD cannot be invoked in parallel on two logical
>> - * cores to operates on same  event port. Of course, this function
>> + * Enqueue and dequeue from a port is not thread-safe, and the expected use-case is
>> + * that each port is polled by only a single lcore. [If this is not the case,
>> + * a suitable synchronization mechanism should be used to prevent simultaneous
>> + * access from multiple lcores.]
>> + * To schedule events to an lcore, the event device will schedule them to the event port(s)
>> + * being polled by that lcore.
>> + *
>> + * *NOTE*: By default, all the functions of the Event Device API exported by a PMD
>> + * are non-thread-safe functions, which must not be invoked on the same object in parallel on
>> + * different logical cores.
>> + * For instance, the dequeue function of a PMD cannot be invoked in parallel on two logical
>> + * cores to operate on the same event port. Of course, this function
>>    * can be invoked in parallel by different logical cores on different ports.
>>    * It is the responsibility of the upper level application to enforce this rule.
>>    *
>> @@ -107,22 +127,19 @@
>>    *
>>    * Event devices are dynamically registered during the PCI/SoC device probing
>>    * phase performed at EAL initialization time.
>> - * When an Event device is being probed, a *rte_event_dev* structure and
>> - * a new device identifier are allocated for that device. Then, the
>> - * event_dev_init() function supplied by the Event driver matching the probed
>> - * device is invoked to properly initialize the device.
>> + * When an Event device is being probed, an *rte_event_dev* structure is allocated
>> + * for it and the event_dev_init() function supplied by the Event driver
>> + * is invoked to properly initialize the device.
>>    *
>> - * The role of the device init function consists of resetting the hardware or
>> - * software event driver implementations.
>> + * The role of the device init function is to reset the device hardware or
>> + * to initialize the software event driver implementation.
>>    *
>> - * If the device init operation is successful, the correspondence between
>> - * the device identifier assigned to the new device and its associated
>> - * *rte_event_dev* structure is effectively registered.
>> - * Otherwise, both the *rte_event_dev* structure and the device identifier are
>> - * freed.
>> + * If the device init operation is successful, the device is assigned a device
>> + * id (dev_id) for application use.
>> + * Otherwise, the *rte_event_dev* structure is freed.
>>    *
>>    * The functions exported by the application Event API to setup a device
>> - * designated by its device identifier must be invoked in the following order:
>> + * must be invoked in the following order:
>>    *     - rte_event_dev_configure()
>>    *     - rte_event_queue_setup()
>>    *     - rte_event_port_setup()
>> @@ -130,10 +147,15 @@
>>    *     - rte_event_dev_start()
>>    *
>>    * Then, the application can invoke, in any order, the functions
>> - * exported by the Event API to schedule events, dequeue events, enqueue events,
>> - * change event queue(s) to event port [un]link establishment and so on.
>> - *
>> - * Application may use rte_event_[queue/port]_default_conf_get() to get the
>> + * exported by the Event API to dequeue events, enqueue events,
>> + * and link and unlink event queue(s) to event ports.
>> + *
>> + * Before configuring a device, an application should call rte_event_dev_info_get()
>> + * to determine the capabilities of the event device, and any queue or port
>> + * limits of that device. The parameters set in the various device configuration
>> + * structures may need to be adjusted based on the max values provided in the
>> + * device information structure returned from the info_get API.
> 
> Can we add full name of info_get()?

^ permalink raw reply	[flat|nested] 123+ messages in thread

* Re: [PATCH v3 09/11] eventdev: improve comments on scheduling types
  2024-02-08  9:18       ` Jerin Jacob
@ 2024-02-08 10:04         ` Mattias Rönnblom
  2024-02-20 17:23           ` Bruce Richardson
  0 siblings, 1 reply; 123+ messages in thread
From: Mattias Rönnblom @ 2024-02-08 10:04 UTC (permalink / raw)
  To: Jerin Jacob, Bruce Richardson
  Cc: dev, jerinj, mattias.ronnblom, abdullah.sevincer, sachin.saxena,
	hemant.agrawal, pbhagavatula, pravin.pathak

On 2024-02-08 10:18, Jerin Jacob wrote:
> On Fri, Feb 2, 2024 at 6:11 PM Bruce Richardson
> <bruce.richardson@intel.com> wrote:
>>
>> The description of ordered and atomic scheduling given in the eventdev
>> doxygen documentation was not always clear. Try and simplify this so
>> that it is clearer for the end-user of the application
>>
>> Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
>>
>> ---
>> V3: extensive rework following feedback. Please re-review!
>> ---
>>   lib/eventdev/rte_eventdev.h | 73 +++++++++++++++++++++++--------------
>>   1 file changed, 45 insertions(+), 28 deletions(-)
>>
>> diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
>> index a7d8c28015..8d72765ae7 100644
>> --- a/lib/eventdev/rte_eventdev.h
>> +++ b/lib/eventdev/rte_eventdev.h
>> @@ -1347,25 +1347,35 @@ struct rte_event_vector {
>>   /**< Ordered scheduling
>>    *
>>    * Events from an ordered flow of an event queue can be scheduled to multiple
>> - * ports for concurrent processing while maintaining the original event order.
>> + * ports for concurrent processing while maintaining the original event order,
>> + * i.e. the order in which they were first enqueued to that queue.
>>    * This scheme enables the user to achieve high single flow throughput by
>> - * avoiding SW synchronization for ordering between ports which bound to cores.
>> - *
>> - * The source flow ordering from an event queue is maintained when events are
>> - * enqueued to their destination queue within the same ordered flow context.
>> - * An event port holds the context until application call
>> - * rte_event_dequeue_burst() from the same port, which implicitly releases
>> - * the context.
>> - * User may allow the scheduler to release the context earlier than that
>> - * by invoking rte_event_enqueue_burst() with RTE_EVENT_OP_RELEASE operation.
>> - *
>> - * Events from the source queue appear in their original order when dequeued
>> - * from a destination queue.
>> - * Event ordering is based on the received event(s), but also other
>> - * (newly allocated or stored) events are ordered when enqueued within the same
>> - * ordered context. Events not enqueued (e.g. released or stored) within the
>> - * context are  considered missing from reordering and are skipped at this time
>> - * (but can be ordered again within another context).
>> + * avoiding SW synchronization for ordering between ports which are polled
>> + * by different cores.
> 
> I prefer the following version to remove "polled" and to be more explicit.
> 
> avoiding SW synchronization for ordering between ports which are
> dequeuing events
> using @ref rte_event_dequeue_burst() across different cores.
> 

"This scheme allows events pertaining to the same, potentially large 
flow to be processed in parallel on multiple cores without incurring any 
application-level order restoration logic overhead."

>> + *
>> + * After events are dequeued from a set of ports, as those events are re-enqueued
>> + * to another queue (with the op field set to @ref RTE_EVENT_OP_FORWARD), the event
>> + * device restores the original event order - including events returned from all
>> + * ports in the set - before the events arrive on the destination queue.
> 
> _arrive_ is a bit vague since we have an enqueue operation. How about,
> "before the events are actually deposited on the destination queue."
> 
> 
>> + *
>> + * Any events not forwarded i.e. dropped explicitly via RELEASE or implicitly
>> + * released by the next dequeue operation on a port, are skipped by the reordering
>> + * stage and do not affect the reordering of other returned events.
>> + *
>> + * Any NEW events sent on a port are not ordered with respect to FORWARD events sent
>> + * on the same port, since they have no original event order. They also are not
>> + * ordered with respect to NEW events enqueued on other ports.
>> + * However, NEW events to the same destination queue from the same port are guaranteed
>> + * to be enqueued in the order they were submitted via rte_event_enqueue_burst().
>> + *
>> + * NOTE:
>> + *   In restoring event order of forwarded events, the eventdev API guarantees that
>> + *   all events from the same flow (i.e. same @ref rte_event.flow_id,
>> + *   @ref rte_event.priority and @ref rte_event.queue_id) will be put in the original
>> + *   order before being forwarded to the destination queue.
>> + *   Some eventdevs may implement stricter ordering to achieve this aim,
>> + *   for example, restoring the order across *all* flows dequeued from the same ORDERED
>> + *   queue.
>>    *
>>    * @see rte_event_queue_setup(), rte_event_dequeue_burst(), RTE_EVENT_OP_RELEASE
>>    */
>> @@ -1373,18 +1383,25 @@ struct rte_event_vector {
>>   #define RTE_SCHED_TYPE_ATOMIC           1
>>   /**< Atomic scheduling
>>    *
>> - * Events from an atomic flow of an event queue can be scheduled only to a
>> + * Events from an atomic flow, identified by a combination of @ref rte_event.flow_id,
>> + * @ref rte_event.queue_id and @ref rte_event.priority, can be scheduled only to a
>>    * single port at a time. The port is guaranteed to have exclusive (atomic)
>>    * access to the associated flow context, which enables the user to avoid SW
>> - * synchronization. Atomic flows also help to maintain event ordering
>> - * since only one port at a time can process events from a flow of an
>> - * event queue.
>> - *
>> - * The atomic queue synchronization context is dedicated to the port until
>> - * application call rte_event_dequeue_burst() from the same port,
>> - * which implicitly releases the context. User may allow the scheduler to
>> - * release the context earlier than that by invoking rte_event_enqueue_burst()
>> - * with RTE_EVENT_OP_RELEASE operation.
>> + * synchronization. Atomic flows also maintain event ordering
>> + * since only one port at a time can process events from each flow of an
>> + * event queue, and events within a flow are not reordered within the scheduler.
>> + *
>> + * An atomic flow is locked to a port when events from that flow are first
>> + * scheduled to that port. That lock remains in place until the
>> + * application calls rte_event_dequeue_burst() from the same port,
>> + * which implicitly releases the lock (if @ref RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL flag is not set).
>> + * User may allow the scheduler to release the lock earlier than that by invoking
>> + * rte_event_enqueue_burst() with RTE_EVENT_OP_RELEASE operation for each event from that flow.
>> + *
>> + * NOTE: The lock is only released once the last event from the flow, outstanding on the port,
> 
> I think, Note can start with something like below,
> 
> When there are multiple atomic events dequeued from @ref
> rte_event_dequeue_burst()
> for the same event queue, and they have the same flow id, then the lock is ....
> 

Yes, or maybe describing the whole lock/unlock state.

"The conceptual per-queue-per-flow lock is in a locked state as long 
(and only as long) as one or more events pertaining to that flow were 
scheduled to the port in question, but are not yet released."

Maybe it needs to be more meaty, describing what released means. I don't 
have the full context of the documentation in my head when I'm writing this.

>> + * is released. So long as there is one event from an atomic flow scheduled to
>> + * a port/core (including any events in the port's dequeue queue, not yet read
>> + * by the application), that port will hold the synchronization lock for that flow.
>>    *
>>    * @see rte_event_queue_setup(), rte_event_dequeue_burst(), RTE_EVENT_OP_RELEASE
>>    */
>> --
>> 2.40.1
>>

^ permalink raw reply	[flat|nested] 123+ messages in thread

* Re: [PATCH v3 01/11] eventdev: improve doxygen introduction text
  2024-02-08  9:50         ` Mattias Rönnblom
@ 2024-02-09  8:43           ` Jerin Jacob
  2024-02-10  7:24             ` Mattias Rönnblom
  0 siblings, 1 reply; 123+ messages in thread
From: Jerin Jacob @ 2024-02-09  8:43 UTC (permalink / raw)
  To: Mattias Rönnblom
  Cc: Bruce Richardson, dev, jerinj, mattias.ronnblom,
	abdullah.sevincer, sachin.saxena, hemant.agrawal, pbhagavatula,
	pravin.pathak

On Thu, Feb 8, 2024 at 3:20 PM Mattias Rönnblom <hofors@lysator.liu.se> wrote:
>
> On 2024-02-07 11:14, Jerin Jacob wrote:
> > On Fri, Feb 2, 2024 at 7:29 PM Bruce Richardson
> > <bruce.richardson@intel.com> wrote:
> >>
> >> Make some textual improvements to the introduction to eventdev and event
> >> devices in the eventdev header file. This text appears in the doxygen
> >> output for the header file, and introduces the key concepts, for
> >> example: events, event devices, queues, ports and scheduling.
> >>
> >> This patch makes the following improvements:
> >> * small textual fixups, e.g. correcting use of singular/plural
> >> * rewrites of some sentences to improve clarity
> >> * using doxygen markdown to split the whole large block up into
> >>    sections, thereby making it easier to read.
> >>
> >> No large-scale changes are made, and blocks are not reordered
> >>
> >> Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
> >
> > Thanks Bruce, While you are cleaning up, Please add following or
> > similar change to fix for not properly
> > parsing the struct rte_event_vector. i.e it is coming as global
> > variables in html files.
> >
> > l[dpdk.org] $ git diff
> > diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
> > index e31c927905..ce4a195a8f 100644
> > --- a/lib/eventdev/rte_eventdev.h
> > +++ b/lib/eventdev/rte_eventdev.h
> > @@ -1309,9 +1309,9 @@ struct rte_event_vector {
> >                   */
> >                  struct {
> >                          uint16_t port;
> > -                       /* Ethernet device port id. */
> > +                       /**< Ethernet device port id. */
> >                          uint16_t queue;
> > -                       /* Ethernet device queue id. */
> > +                       /**< Ethernet device queue id. */
> >                  };
> >          };
> >          /**< Union to hold common attributes of the vector array. */
> > @@ -1340,7 +1340,11 @@ struct rte_event_vector {
> >           * vector array can be an array of mbufs or pointers or opaque u64
> >           * values.
> >           */
> > +#ifndef __DOXYGEN__
> >   } __rte_aligned(16);
> > +#else
> > +};
> > +#endif
> >
> >   /* Scheduler type definitions */
> >   #define RTE_SCHED_TYPE_ORDERED          0
> >
> >>
> >> ---
> >> V3: reworked following feedback from Mattias
> >> ---
> >>   lib/eventdev/rte_eventdev.h | 132 ++++++++++++++++++++++--------------
> >>   1 file changed, 81 insertions(+), 51 deletions(-)
> >>
> >> diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
> >> index ec9b02455d..a741832e8e 100644
> >> --- a/lib/eventdev/rte_eventdev.h
> >> +++ b/lib/eventdev/rte_eventdev.h
> >> @@ -12,25 +12,33 @@
> >>    * @file
> >>    *
> >>    * RTE Event Device API
> >> + * ====================
> >>    *
> >> - * In a polling model, lcores poll ethdev ports and associated rx queues
> >> - * directly to look for packet. In an event driven model, by contrast, lcores
> >> - * call the scheduler that selects packets for them based on programmer
> >> - * specified criteria. Eventdev library adds support for event driven
> >> - * programming model, which offer applications automatic multicore scaling,
> >> - * dynamic load balancing, pipelining, packet ingress order maintenance and
> >> - * synchronization services to simplify application packet processing.
> >> + * In a traditional run-to-completion application model, lcores pick up packets
> >
> > Can we keep it is as poll mode instead of run-to-completion as event mode also
> > supports run to completion by having dequuee() and then Tx.
> >
>
> A "traditional" DPDK app is both polling and run-to-completion. You
> could always add "polling" somewhere, but "run-to-completion" in that
> context serves a purpose, imo.

Yeah. Some event devices can actually sleep to save power if packets are
not present (using WFE in the arm64 world).

I think we can be more specific then, like

In a traditional run-to-completion application model where packets are
dequeued from NIC RX queues, .......


>
> A single-stage eventdev-based pipeline will also process packets in a
> run-to-completion fashion. In such a scenario, the difference between
> eventdev and the "tradition" lies in the (ingress-only) load balancing
> mechanism used (which the below note on the "traditional" use of RSS
> indicates).

^ permalink raw reply	[flat|nested] 123+ messages in thread

* Re: [PATCH v3 10/11] eventdev: clarify docs on event object fields and op types
  2024-02-02 12:39     ` [PATCH v3 10/11] eventdev: clarify docs on event object fields and op types Bruce Richardson
@ 2024-02-09  9:14       ` Jerin Jacob
  2024-02-20 17:39         ` Bruce Richardson
                           ` (2 more replies)
  0 siblings, 3 replies; 123+ messages in thread
From: Jerin Jacob @ 2024-02-09  9:14 UTC (permalink / raw)
  To: Bruce Richardson
  Cc: dev, jerinj, mattias.ronnblom, abdullah.sevincer, sachin.saxena,
	hemant.agrawal, pbhagavatula, pravin.pathak

On Fri, Feb 2, 2024 at 6:11 PM Bruce Richardson
<bruce.richardson@intel.com> wrote:
>
> Clarify the meaning of the NEW, FORWARD and RELEASE event types.
> For the fields in "rte_event" struct, enhance the comments on each to
> clarify the field's use, and whether it is preserved between enqueue and
> dequeue, and its role, if any, in scheduling.
>
> Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
> ---
> V3: updates following review
> ---
>  lib/eventdev/rte_eventdev.h | 161 +++++++++++++++++++++++++-----------
>  1 file changed, 111 insertions(+), 50 deletions(-)
>
> diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
> index 8d72765ae7..58219e027e 100644
> --- a/lib/eventdev/rte_eventdev.h
> +++ b/lib/eventdev/rte_eventdev.h
> @@ -1463,47 +1463,54 @@ struct rte_event_vector {
>
>  /* Event enqueue operations */
>  #define RTE_EVENT_OP_NEW                0
> -/**< The event producers use this operation to inject a new event to the
> - * event device.
> +/**< The @ref rte_event.op field must be set to this operation type to inject a new event,
> + * i.e. one not previously dequeued, into the event device, to be scheduled
> + * for processing.
>   */
>  #define RTE_EVENT_OP_FORWARD            1
> -/**< The CPU use this operation to forward the event to different event queue or
> - * change to new application specific flow or schedule type to enable
> - * pipelining.
> +/**< The application must set the @ref rte_event.op field to this operation type to return a
> + * previously dequeued event to the event device to be scheduled for further processing.
>   *
> - * This operation must only be enqueued to the same port that the
> + * This event *must* be enqueued to the same port that the
>   * event to be forwarded was dequeued from.
> + *
> + * The event's fields, including (but not limited to) flow_id, scheduling type,
> + * destination queue, and event payload e.g. mbuf pointer, may all be updated as
> + * desired by the application, but the @ref rte_event.impl_opaque field must
> + * be kept to the same value as was present when the event was dequeued.
>   */
>  #define RTE_EVENT_OP_RELEASE            2
>  /**< Release the flow context associated with the schedule type.
>   *
> - * If current flow's scheduler type method is *RTE_SCHED_TYPE_ATOMIC*
> - * then this function hints the scheduler that the user has completed critical
> - * section processing in the current atomic context.
> - * The scheduler is now allowed to schedule events from the same flow from
> - * an event queue to another port. However, the context may be still held
> - * until the next rte_event_dequeue_burst() call, this call allows but does not
> - * force the scheduler to release the context early.
> - *
> - * Early atomic context release may increase parallelism and thus system
> + * If current flow's scheduler type method is @ref RTE_SCHED_TYPE_ATOMIC
> + * then this operation type hints the scheduler that the user has completed critical
> + * section processing for this event in the current atomic context, and that the
> + * scheduler may unlock any atomic locks held for this event.
> + * If this is the last event from an atomic flow, i.e. all flow locks are released,


Similar comment as other email
[Jerin] When there are multiple atomic events dequeued from @ref
rte_event_dequeue_burst()
for the same event queue, and they have the same flow id, then the lock is ....

[Mattias]
Yes, or maybe describing the whole lock/unlock state.

"The conceptual per-queue-per-flow lock is in a locked state as long
(and only as long) as one or more events pertaining to that flow were
scheduled to the port in question, but are not yet released."

Maybe it needs to be more meaty, describing what released means. I don't
have the full context of the documentation in my head when I'm writing this.

> + * the scheduler is now allowed to schedule events from that flow to another port.
> + * However, the atomic locks may be still held until the next rte_event_dequeue_burst()
> + * call; enqueuing an event with op type @ref RTE_EVENT_OP_RELEASE allows,

Is ";" intended?

> + * but does not force, the scheduler to release the atomic locks early.

Instead of "not force", we can use the term _hint_ to the driver and reword.

> + *
> + * Early atomic lock release may increase parallelism and thus system
>   * performance, but the user needs to design carefully the split into critical
>   * vs non-critical sections.
>   *
> - * If current flow's scheduler type method is *RTE_SCHED_TYPE_ORDERED*
> - * then this function hints the scheduler that the user has done all that need
> - * to maintain event order in the current ordered context.
> - * The scheduler is allowed to release the ordered context of this port and
> - * avoid reordering any following enqueues.
> - *
> - * Early ordered context release may increase parallelism and thus system
> - * performance.
> + * If current flow's scheduler type method is @ref RTE_SCHED_TYPE_ORDERED
> + * then this operation type informs the scheduler that the current event has
> + * completed processing and will not be returned to the scheduler, i.e.
> + * it has been dropped, and so the reordering context for that event
> + * should be considered filled.
>   *
> - * If current flow's scheduler type method is *RTE_SCHED_TYPE_PARALLEL*
> - * or no scheduling context is held then this function may be an NOOP,
> - * depending on the implementation.
> + * Events with this operation type must only be enqueued to the same port that the
> + * event to be released was dequeued from. The @ref rte_event.impl_opaque
> + * field in the release event must have the same value as that in the original dequeued event.
>   *
> - * This operation must only be enqueued to the same port that the
> - * event to be released was dequeued from.
> + * If a dequeued event is re-enqueued with operation type of @ref RTE_EVENT_OP_RELEASE,
> + * then any subsequent enqueue of that event - or a copy of it - must be done as event of type
> + * @ref RTE_EVENT_OP_NEW, not @ref RTE_EVENT_OP_FORWARD. This is because any context for
> + * the originally dequeued event, i.e. atomic locks, or reorder buffer entries, will have
> + * been removed or invalidated by the release operation.
>   */
>
>  /**
> @@ -1517,56 +1524,110 @@ struct rte_event {
>                 /** Event attributes for dequeue or enqueue operation */
>                 struct {
>                         uint32_t flow_id:20;
> -                       /**< Targeted flow identifier for the enqueue and
> -                        * dequeue operation.
> -                        * The value must be in the range of
> -                        * [0, nb_event_queue_flows - 1] which
> -                        * previously supplied to rte_event_dev_configure().
> +                       /**< Target flow identifier for the enqueue and dequeue operation.
> +                        *
> +                        * For @ref RTE_SCHED_TYPE_ATOMIC, this field is used to identify a
> +                        * flow for atomicity within a queue & priority level, such that events
> +                        * from each individual flow will only be scheduled to one port at a time.
> +                        *
> +                        * This field is preserved between enqueue and dequeue when
> +                        * a device reports the @ref RTE_EVENT_DEV_CAP_CARRY_FLOW_ID
> +                        * capability. Otherwise the value is implementation dependent
> +                        * on dequeue.
>                          */
>                         uint32_t sub_event_type:8;
>                         /**< Sub-event types based on the event source.
> +                        *
> +                        * This field is preserved between enqueue and dequeue.
> +                        * This field is for application or event adapter use,
> +                        * and is not considered in scheduling decisions.


The cnxk driver is considering this for scheduling decisions, to
differentiate the producers, i.e. event adapters.
If other drivers are not, then we can change the language to say it is
implementation defined.


> +                        *
>                          * @see RTE_EVENT_TYPE_CPU
>                          */
>                         uint32_t event_type:4;
> -                       /**< Event type to classify the event source.
> -                        * @see RTE_EVENT_TYPE_ETHDEV, (RTE_EVENT_TYPE_*)
> +                       /**< Event type to classify the event source. (RTE_EVENT_TYPE_*)
> +                        *
> +                        * This field is preserved between enqueue and dequeue
> +                        * This field is for application or event adapter use,
> +                        * and is not considered in scheduling decisions.


The cnxk driver is considering this for scheduling decisions, to
differentiate the producers, i.e. event adapters.
If other drivers are not, then we can change the language to say it is
implementation defined.

>                          */
>                         uint8_t op:2;
> -                       /**< The type of event enqueue operation - new/forward/
> -                        * etc.This field is not preserved across an instance
> -                        * and is undefined on dequeue.
> -                        * @see RTE_EVENT_OP_NEW, (RTE_EVENT_OP_*)
> +                       /**< The type of event enqueue operation - new/forward/ etc.
> +                        *
> +                        * This field is *not* preserved across an instance
> +                        * and is implementation dependent on dequeue.
> +                        *
> +                        * @see RTE_EVENT_OP_NEW
> +                        * @see RTE_EVENT_OP_FORWARD
> +                        * @see RTE_EVENT_OP_RELEASE
>                          */
>                         uint8_t rsvd:4;
> -                       /**< Reserved for future use */
> +                       /**< Reserved for future use.
> +                        *
> +                        * Should be set to zero on enqueue.

I am worried about some applications explicitly starting to set this to
zero on every enqueue.
Instead, can we say the application should not touch the field? Since every eventdev
operation starts with dequeue(), the driver can fill in the correct value.

> +                        */
>                         uint8_t sched_type:2;
>                         /**< Scheduler synchronization type (RTE_SCHED_TYPE_*)
>                          * associated with flow id on a given event queue
>                          * for the enqueue and dequeue operation.
> +                        *
> +                        * This field is used to determine the scheduling type
> +                        * for events sent to queues where @ref RTE_EVENT_QUEUE_CFG_ALL_TYPES
> +                        * is configured.
> +                        * For queues where only a single scheduling type is available,
> +                        * this field must be set to match the configured scheduling type.
> +                        *
> +                        * This field is preserved between enqueue and dequeue.
> +                        *
> +                        * @see RTE_SCHED_TYPE_ORDERED
> +                        * @see RTE_SCHED_TYPE_ATOMIC
> +                        * @see RTE_SCHED_TYPE_PARALLEL
>                          */
>                         uint8_t queue_id;
>                         /**< Targeted event queue identifier for the enqueue or
>                          * dequeue operation.
> -                        * The value must be in the range of
> -                        * [0, nb_event_queues - 1] which previously supplied to
> -                        * rte_event_dev_configure().
> +                        * The value must be less than @ref rte_event_dev_config.nb_event_queues
> +                        * which was previously supplied to rte_event_dev_configure().

For some reason, similar text got removed for flow_id. Please add the same there.


> +                        *
> +                        * This field is preserved between enqueue and dequeue.
>                          */
>                         uint8_t priority;
>                         /**< Event priority relative to other events in the
>                          * event queue. The requested priority should in the
> -                        * range of  [RTE_EVENT_DEV_PRIORITY_HIGHEST,
> -                        * RTE_EVENT_DEV_PRIORITY_LOWEST].
> +                        * range of  [@ref RTE_EVENT_DEV_PRIORITY_HIGHEST,
> +                        * @ref RTE_EVENT_DEV_PRIORITY_LOWEST].
> +                        *
>                          * The implementation shall normalize the requested
>                          * priority to supported priority value.
> -                        * Valid when the device has
> -                        * RTE_EVENT_DEV_CAP_EVENT_QOS capability.
> +                        * [For devices where the supported priority range is a power-of-2, the
> +                        * normalization will be done via bit-shifting, so only the highest
> +                        * log2(num_priorities) bits will be used by the event device]
> +                        *
> +                        * Valid when the device has @ref RTE_EVENT_DEV_CAP_EVENT_QOS capability
> +                        * and this field is preserved between enqueue and dequeue,
> +                        * though with possible loss of precision due to normalization and
> +                        * subsequent de-normalization. (For example, if a device only supports 8
> +                        * priority levels, only the high 3 bits of this field will be
> +                        * used by that device, and hence only the value of those 3 bits are
> +                        * guaranteed to be preserved between enqueue and dequeue.)
> +                        *
> +                        * Ignored when device does not support @ref RTE_EVENT_DEV_CAP_EVENT_QOS
> +                        * capability, and it is implementation dependent if this field is preserved
> +                        * between enqueue and dequeue.
>                          */
>                         uint8_t impl_opaque;
> -                       /**< Implementation specific opaque value.
> -                        * An implementation may use this field to hold
> +                       /**< Opaque field for event device use.
> +                        *
> +                        * An event driver implementation may use this field to hold an
>                          * implementation specific value to share between
>                          * dequeue and enqueue operation.
> -                        * The application should not modify this field.
> +                        *
> +                        * The application most not modify this field.

most -> must

> +                        * Its value is implementation dependent on dequeue,
> +                        * and must be returned unmodified on enqueue when
> +                        * op type is @ref RTE_EVENT_OP_FORWARD or @ref RTE_EVENT_OP_RELEASE.
> +                        * This field is ignored on events with op type
> +                        * @ref RTE_EVENT_OP_NEW.
>                          */
>                 };
>         };
> --
> 2.40.1
>

^ permalink raw reply	[flat|nested] 123+ messages in thread

* Re: [PATCH v3 01/11] eventdev: improve doxygen introduction text
  2024-02-09  8:43           ` Jerin Jacob
@ 2024-02-10  7:24             ` Mattias Rönnblom
  2024-02-20 16:28               ` Bruce Richardson
  0 siblings, 1 reply; 123+ messages in thread
From: Mattias Rönnblom @ 2024-02-10  7:24 UTC (permalink / raw)
  To: Jerin Jacob
  Cc: Bruce Richardson, dev, jerinj, mattias.ronnblom,
	abdullah.sevincer, sachin.saxena, hemant.agrawal, pbhagavatula,
	pravin.pathak

On 2024-02-09 09:43, Jerin Jacob wrote:
> On Thu, Feb 8, 2024 at 3:20 PM Mattias Rönnblom <hofors@lysator.liu.se> wrote:
>>
>> On 2024-02-07 11:14, Jerin Jacob wrote:
>>> On Fri, Feb 2, 2024 at 7:29 PM Bruce Richardson
>>> <bruce.richardson@intel.com> wrote:
>>>>
>>>> Make some textual improvements to the introduction to eventdev and event
>>>> devices in the eventdev header file. This text appears in the doxygen
>>>> output for the header file, and introduces the key concepts, for
>>>> example: events, event devices, queues, ports and scheduling.
>>>>
>>>> This patch makes the following improvements:
>>>> * small textual fixups, e.g. correcting use of singular/plural
>>>> * rewrites of some sentences to improve clarity
>>>> * using doxygen markdown to split the whole large block up into
>>>>     sections, thereby making it easier to read.
>>>>
>>>> No large-scale changes are made, and blocks are not reordered
>>>>
>>>> Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
>>>
>>> Thanks Bruce. While you are cleaning up, please add the following or a
>>> similar change to fix struct rte_event_vector not being
>>> parsed properly, i.e. its members are coming out as global
>>> variables in the HTML files.
>>>
>>> l[dpdk.org] $ git diff
>>> diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
>>> index e31c927905..ce4a195a8f 100644
>>> --- a/lib/eventdev/rte_eventdev.h
>>> +++ b/lib/eventdev/rte_eventdev.h
>>> @@ -1309,9 +1309,9 @@ struct rte_event_vector {
>>>                    */
>>>                   struct {
>>>                           uint16_t port;
>>> -                       /* Ethernet device port id. */
>>> +                       /**< Ethernet device port id. */
>>>                           uint16_t queue;
>>> -                       /* Ethernet device queue id. */
>>> +                       /**< Ethernet device queue id. */
>>>                   };
>>>           };
>>>           /**< Union to hold common attributes of the vector array. */
>>> @@ -1340,7 +1340,11 @@ struct rte_event_vector {
>>>            * vector array can be an array of mbufs or pointers or opaque u64
>>>            * values.
>>>            */
>>> +#ifndef __DOXYGEN__
>>>    } __rte_aligned(16);
>>> +#else
>>> +};
>>> +#endif
>>>
>>>    /* Scheduler type definitions */
>>>    #define RTE_SCHED_TYPE_ORDERED          0
>>>
>>>>
>>>> ---
>>>> V3: reworked following feedback from Mattias
>>>> ---
>>>>    lib/eventdev/rte_eventdev.h | 132 ++++++++++++++++++++++--------------
>>>>    1 file changed, 81 insertions(+), 51 deletions(-)
>>>>
>>>> diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
>>>> index ec9b02455d..a741832e8e 100644
>>>> --- a/lib/eventdev/rte_eventdev.h
>>>> +++ b/lib/eventdev/rte_eventdev.h
>>>> @@ -12,25 +12,33 @@
>>>>     * @file
>>>>     *
>>>>     * RTE Event Device API
>>>> + * ====================
>>>>     *
>>>> - * In a polling model, lcores poll ethdev ports and associated rx queues
>>>> - * directly to look for packet. In an event driven model, by contrast, lcores
>>>> - * call the scheduler that selects packets for them based on programmer
>>>> - * specified criteria. Eventdev library adds support for event driven
>>>> - * programming model, which offer applications automatic multicore scaling,
>>>> - * dynamic load balancing, pipelining, packet ingress order maintenance and
>>>> - * synchronization services to simplify application packet processing.
>>>> + * In a traditional run-to-completion application model, lcores pick up packets
>>>
>>> Can we keep it as poll mode instead of run-to-completion, as event mode also
>>> supports run-to-completion by having dequeue() and then Tx.
>>>
>>
>> A "traditional" DPDK app is both polling and run-to-completion. You
>> could always add "polling" somewhere, but "run-to-completion" in that
>> context serves a purpose, imo.
> 
> Yeah. Some event devices can actually sleep to save power if a packet is
> not present (using WFE in the arm64 world).
> 

Sure, and I believe you can do that with certain Ethdevs as well. Also, 
you can also use interrupts. So polling/energy-efficient polling 
(wfe/umwait)/interrupts aren't really a differentiator between Eventdev 
and "raw" Ethdev.

> I think, We can be more specific then, like
> 
> In a traditional run-to-completion application model where packets are
> dequeued from NIC RX queues, .......
> 

"In a traditional DPDK application model, the application polls Ethdev 
port RX queues to look for work, and processing is done in a 
run-to-completion manner, after which the packets are transmitted on a 
Ethdev TX queue. Load is distributed by statically assigning ports and 
queues to lcores, and NIC receive-side scaling (RSS, or similar) is 
employed to distribute network flows (and thus work) on the same port 
across multiple RX queues."

I don't know if that's too much.

> 
>>
>> A single-stage eventdev-based pipeline will also process packets in a
>> run-to-completion fashion. In such a scenario, the difference between
>> eventdev and the "tradition" lies in the (ingress-only) load balancing
>> mechanism used (which the below note on the "traditional" use of RSS
>> indicates).


^ permalink raw reply	[flat|nested] 123+ messages in thread

* Re: [PATCH v3 01/11] eventdev: improve doxygen introduction text
  2024-02-10  7:24             ` Mattias Rönnblom
@ 2024-02-20 16:28               ` Bruce Richardson
  0 siblings, 0 replies; 123+ messages in thread
From: Bruce Richardson @ 2024-02-20 16:28 UTC (permalink / raw)
  To: Mattias Rönnblom
  Cc: Jerin Jacob, dev, jerinj, mattias.ronnblom, abdullah.sevincer,
	sachin.saxena, hemant.agrawal, pbhagavatula, pravin.pathak

On Sat, Feb 10, 2024 at 08:24:29AM +0100, Mattias Rönnblom wrote:
> On 2024-02-09 09:43, Jerin Jacob wrote:
> > On Thu, Feb 8, 2024 at 3:20 PM Mattias Rönnblom <hofors@lysator.liu.se> wrote:
> > > 
> > > On 2024-02-07 11:14, Jerin Jacob wrote:
> > > > On Fri, Feb 2, 2024 at 7:29 PM Bruce Richardson
> > > > <bruce.richardson@intel.com> wrote:
> > > > > 
> > > > > Make some textual improvements to the introduction to eventdev and event
> > > > > devices in the eventdev header file. This text appears in the doxygen
> > > > > output for the header file, and introduces the key concepts, for
> > > > > example: events, event devices, queues, ports and scheduling.
> > > > > 
> > > > > This patch makes the following improvements:
> > > > > * small textual fixups, e.g. correcting use of singular/plural
> > > > > * rewrites of some sentences to improve clarity
> > > > > * using doxygen markdown to split the whole large block up into
> > > > >     sections, thereby making it easier to read.
> > > > > 
> > > > > No large-scale changes are made, and blocks are not reordered
> > > > > 
> > > > > Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
> > > > 
> > > > Thanks Bruce. While you are cleaning up, please add the following or a
> > > > similar change to fix struct rte_event_vector not being
> > > > parsed properly, i.e. its members are coming out as global
> > > > variables in the HTML files.
> > > > 
> > > > l[dpdk.org] $ git diff
> > > > diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
> > > > index e31c927905..ce4a195a8f 100644
> > > > --- a/lib/eventdev/rte_eventdev.h
> > > > +++ b/lib/eventdev/rte_eventdev.h
> > > > @@ -1309,9 +1309,9 @@ struct rte_event_vector {
> > > >                    */
> > > >                   struct {
> > > >                           uint16_t port;
> > > > -                       /* Ethernet device port id. */
> > > > +                       /**< Ethernet device port id. */
> > > >                           uint16_t queue;
> > > > -                       /* Ethernet device queue id. */
> > > > +                       /**< Ethernet device queue id. */
> > > >                   };
> > > >           };
> > > >           /**< Union to hold common attributes of the vector array. */
> > > > @@ -1340,7 +1340,11 @@ struct rte_event_vector {
> > > >            * vector array can be an array of mbufs or pointers or opaque u64
> > > >            * values.
> > > >            */
> > > > +#ifndef __DOXYGEN__
> > > >    } __rte_aligned(16);
> > > > +#else
> > > > +};
> > > > +#endif
> > > > 
> > > >    /* Scheduler type definitions */
> > > >    #define RTE_SCHED_TYPE_ORDERED          0
> > > > 
> > > > > 
> > > > > ---
> > > > > V3: reworked following feedback from Mattias
> > > > > ---
> > > > >    lib/eventdev/rte_eventdev.h | 132 ++++++++++++++++++++++--------------
> > > > >    1 file changed, 81 insertions(+), 51 deletions(-)
> > > > > 
> > > > > diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
> > > > > index ec9b02455d..a741832e8e 100644
> > > > > --- a/lib/eventdev/rte_eventdev.h
> > > > > +++ b/lib/eventdev/rte_eventdev.h
> > > > > @@ -12,25 +12,33 @@
> > > > >     * @file
> > > > >     *
> > > > >     * RTE Event Device API
> > > > > + * ====================
> > > > >     *
> > > > > - * In a polling model, lcores poll ethdev ports and associated rx queues
> > > > > - * directly to look for packet. In an event driven model, by contrast, lcores
> > > > > - * call the scheduler that selects packets for them based on programmer
> > > > > - * specified criteria. Eventdev library adds support for event driven
> > > > > - * programming model, which offer applications automatic multicore scaling,
> > > > > - * dynamic load balancing, pipelining, packet ingress order maintenance and
> > > > > - * synchronization services to simplify application packet processing.
> > > > > + * In a traditional run-to-completion application model, lcores pick up packets
> > > > 
> > > > Can we keep it as poll mode instead of run-to-completion, as event mode also
> > > > supports run-to-completion by having dequeue() and then Tx.
> > > > 
> > > 
> > > A "traditional" DPDK app is both polling and run-to-completion. You
> > > could always add "polling" somewhere, but "run-to-completion" in that
> > > context serves a purpose, imo.
> > 
> > Yeah. Some event devices can actually sleep to save power if a packet is
> > not present (using WFE in the arm64 world).
> > 
> 
> Sure, and I believe you can do that with certain Ethdevs as well. Also, you
> can also use interrupts. So polling/energy-efficient polling
> (wfe/umwait)/interrupts aren't really a differentiator between Eventdev and
> "raw" Ethdev.
> 
> > I think, We can be more specific then, like
> > 
> > In a traditional run-to-completion application model where packets are
> > dequeued from NIC RX queues, .......
> > 
> 
> "In a traditional DPDK application model, the application polls Ethdev port
> RX queues to look for work, and processing is done in a run-to-completion
> manner, after which the packets are transmitted on a Ethdev TX queue. Load
> is distributed by statically assigning ports and queues to lcores, and NIC
> receive-side scaling (RSS, or similar) is employed to distribute network
> flows (and thus work) on the same port across multiple RX queues."
> 
> I don't know if that's too much.
> 
Looks fine to me, I'll just use that text in V4.

^ permalink raw reply	[flat|nested] 123+ messages in thread

* Re: [PATCH v3 01/11] eventdev: improve doxygen introduction text
  2024-02-07 10:14       ` Jerin Jacob
  2024-02-08  9:50         ` Mattias Rönnblom
@ 2024-02-20 16:33         ` Bruce Richardson
  1 sibling, 0 replies; 123+ messages in thread
From: Bruce Richardson @ 2024-02-20 16:33 UTC (permalink / raw)
  To: Jerin Jacob
  Cc: dev, jerinj, mattias.ronnblom, abdullah.sevincer, sachin.saxena,
	hemant.agrawal, pbhagavatula, pravin.pathak

On Wed, Feb 07, 2024 at 03:44:37PM +0530, Jerin Jacob wrote:
> On Fri, Feb 2, 2024 at 7:29 PM Bruce Richardson
> <bruce.richardson@intel.com> wrote:
> >
> > Make some textual improvements to the introduction to eventdev and event
> > devices in the eventdev header file. This text appears in the doxygen
> > output for the header file, and introduces the key concepts, for
> > example: events, event devices, queues, ports and scheduling.
> >
> > This patch makes the following improvements:
> > * small textual fixups, e.g. correcting use of singular/plural
> > * rewrites of some sentences to improve clarity
> > * using doxygen markdown to split the whole large block up into
> >   sections, thereby making it easier to read.
> >
> > No large-scale changes are made, and blocks are not reordered
> >
> > Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
> 
> Thanks Bruce. While you are cleaning up, please add the following or a
> similar change to fix struct rte_event_vector not being
> parsed properly, i.e. its members are coming out as global
> variables in the HTML files.
> 
> l[dpdk.org] $ git diff
> diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
> index e31c927905..ce4a195a8f 100644
> --- a/lib/eventdev/rte_eventdev.h
> +++ b/lib/eventdev/rte_eventdev.h
> @@ -1309,9 +1309,9 @@ struct rte_event_vector {
>                  */
>                 struct {
>                         uint16_t port;
> -                       /* Ethernet device port id. */
> +                       /**< Ethernet device port id. */
>                         uint16_t queue;
> -                       /* Ethernet device queue id. */
> +                       /**< Ethernet device queue id. */
>                 };
>         };
>         /**< Union to hold common attributes of the vector array. */
> @@ -1340,7 +1340,11 @@ struct rte_event_vector {
>          * vector array can be an array of mbufs or pointers or opaque u64
>          * values.
>          */
> +#ifndef __DOXYGEN__
>  } __rte_aligned(16);
> +#else
> +};
> +#endif
> 

Yep, that's an easy enough extra patch to add to v4.

>  /* Scheduler type definitions */
>  #define RTE_SCHED_TYPE_ORDERED          0
> 
> >
> > ---
> > V3: reworked following feedback from Mattias
> > ---
> >  lib/eventdev/rte_eventdev.h | 132 ++++++++++++++++++++++--------------
> >  1 file changed, 81 insertions(+), 51 deletions(-)
> >
> > diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
> > index ec9b02455d..a741832e8e 100644
> > --- a/lib/eventdev/rte_eventdev.h
> > +++ b/lib/eventdev/rte_eventdev.h
> > @@ -12,25 +12,33 @@
> >   * @file
> >   *
> >   * RTE Event Device API
> > + * ====================
> >   *
> > - * In a polling model, lcores poll ethdev ports and associated rx queues
> > - * directly to look for packet. In an event driven model, by contrast, lcores
> > - * call the scheduler that selects packets for them based on programmer
> > - * specified criteria. Eventdev library adds support for event driven
> > - * programming model, which offer applications automatic multicore scaling,
> > - * dynamic load balancing, pipelining, packet ingress order maintenance and
> > - * synchronization services to simplify application packet processing.
> > + * In a traditional run-to-completion application model, lcores pick up packets
> 
> Can we keep it as poll mode instead of run-to-completion, as event mode also
> supports run-to-completion by having dequeue() and then Tx.
> 
> > + * from Ethdev ports and associated RX queues, run the packet processing to completion,
> > + * and enqueue the completed packets to a TX queue. NIC-level receive-side scaling (RSS)
> > + * may be used to balance the load across multiple CPU cores.
> > + *
> > + * In contrast, in an event-driven model, as supported by this "eventdev" library,
> > + * incoming packets are fed into an event device, which schedules those packets across
> 
> packets -> events. We may need to bring in the Rx adapter if the event is a packet.
> 

I think keeping it as packets is correct, rather than confusing things too
much. However, I will put "incoming packets (or other input events) ..." to
acknowledge other sources. We don't need to bring in input adapters at this
point since we want to keep it high-level.

> > + * the available lcores, in accordance with its configuration.
> > + * This event-driven programming model offers applications automatic multicore scaling,
> > + * dynamic load balancing, pipelining, packet order maintenance, synchronization,
> > + * and prioritization/quality of service.
> >   *
> >   * The Event Device API is composed of two parts:
> >   *
> >   * - The application-oriented Event API that includes functions to setup
> >   *   an event device (configure it, setup its queues, ports and start it), to
> > - *   establish the link between queues to port and to receive events, and so on.
> > + *   establish the links between queues and ports to receive events, and so on.
> >   *
> >   * - The driver-oriented Event API that exports a function allowing
> > - *   an event poll Mode Driver (PMD) to simultaneously register itself as
> > + *   an event poll Mode Driver (PMD) to register itself as
> >   *   an event device driver.
> >   *
> > + * Application-oriented Event API
> > + * ------------------------------
> > + *
> >   * Event device components:
> >   *
> >   *                     +-----------------+
> > @@ -75,27 +83,39 @@
> >   *            |                                                           |
> >   *            +-----------------------------------------------------------+
> >   *
> > - * Event device: A hardware or software-based event scheduler.
> > + * **Event device**: A hardware or software-based event scheduler.
> >   *
> > - * Event: A unit of scheduling that encapsulates a packet or other datatype
> > - * like SW generated event from the CPU, Crypto work completion notification,
> > - * Timer expiry event notification etc as well as metadata.
> > - * The metadata includes flow ID, scheduling type, event priority, event_type,
> > - * sub_event_type etc.
> > + * **Event**: Represents an item of work and is the smallest unit of scheduling.
> > + * An event carries metadata, such as queue ID, scheduling type, and event priority,
> > + * and data such as one or more packets or other kinds of buffers.
> > + * Some examples of events are:
> > + * - a software-generated item of work originating from a lcore,
> 
> lcore.
> 
Nak for this, since it's not the end of a sentence, but ack for the other
two below.

> > + *   perhaps carrying a packet to be processed,
> 
> processed.
> 
> > + * - a crypto work completion notification
> 
> notification.
> 
> > + * - a timer expiry notification.
> >   *
> > - * Event queue: A queue containing events that are scheduled by the event dev.
> > + * **Event queue**: A queue containing events that are scheduled by the event device.
> 
> Shouldn't we add "to be" or so?
> i.e
> A queue containing events that are to be scheduled by the event device.
> 

Sure, ack.

> >   * An event queue contains events of different flows associated with scheduling
> >   * types, such as atomic, ordered, or parallel.
> > + * Each event given to an event device must have a valid event queue id field in the metadata,
> > + * to specify on which event queue in the device the event must be placed,
> > + * for later scheduling.
> >   *
> > - * Event port: An application's interface into the event dev for enqueue and
> > + * **Event port**: An application's interface into the event dev for enqueue and
> >   * dequeue operations. Each event port can be linked with one or more
> >   * event queues for dequeue operations.
> > - *
> > - * By default, all the functions of the Event Device API exported by a PMD
> > - * are lock-free functions which assume to not be invoked in parallel on
> > - * different logical cores to work on the same target object. For instance,
> > - * the dequeue function of a PMD cannot be invoked in parallel on two logical
> > - * cores to operates on same  event port. Of course, this function
> > + * Enqueue and dequeue from a port is not thread-safe, and the expected use-case is
> > + * that each port is polled by only a single lcore. [If this is not the case,
> > + * a suitable synchronization mechanism should be used to prevent simultaneous
> > + * access from multiple lcores.]
> > + * To schedule events to an lcore, the event device will schedule them to the event port(s)
> > + * being polled by that lcore.
> > + *
> > + * *NOTE*: By default, all the functions of the Event Device API exported by a PMD
> > + * are non-thread-safe functions, which must not be invoked on the same object in parallel on
> > + * different logical cores.
> > + * For instance, the dequeue function of a PMD cannot be invoked in parallel on two logical
> > + * cores to operate on the same event port. Of course, this function
> >   * can be invoked in parallel by different logical cores on different ports.
> >   * It is the responsibility of the upper level application to enforce this rule.
> >   *
> > @@ -107,22 +127,19 @@
> >   *
> >   * Event devices are dynamically registered during the PCI/SoC device probing
> >   * phase performed at EAL initialization time.
> > - * When an Event device is being probed, a *rte_event_dev* structure and
> > - * a new device identifier are allocated for that device. Then, the
> > - * event_dev_init() function supplied by the Event driver matching the probed
> > - * device is invoked to properly initialize the device.
> > + * When an Event device is being probed, an *rte_event_dev* structure is allocated
> > + * for it and the event_dev_init() function supplied by the Event driver
> > + * is invoked to properly initialize the device.
> >   *
> > - * The role of the device init function consists of resetting the hardware or
> > - * software event driver implementations.
> > + * The role of the device init function is to reset the device hardware or
> > + * to initialize the software event driver implementation.
> >   *
> > - * If the device init operation is successful, the correspondence between
> > - * the device identifier assigned to the new device and its associated
> > - * *rte_event_dev* structure is effectively registered.
> > - * Otherwise, both the *rte_event_dev* structure and the device identifier are
> > - * freed.
> > + * If the device init operation is successful, the device is assigned a device
> > + * id (dev_id) for application use.
> > + * Otherwise, the *rte_event_dev* structure is freed.
> >   *
> >   * The functions exported by the application Event API to setup a device
> > - * designated by its device identifier must be invoked in the following order:
> > + * must be invoked in the following order:
> >   *     - rte_event_dev_configure()
> >   *     - rte_event_queue_setup()
> >   *     - rte_event_port_setup()
> > @@ -130,10 +147,15 @@
> >   *     - rte_event_dev_start()
> >   *
> >   * Then, the application can invoke, in any order, the functions
> > - * exported by the Event API to schedule events, dequeue events, enqueue events,
> > - * change event queue(s) to event port [un]link establishment and so on.
> > - *
> > - * Application may use rte_event_[queue/port]_default_conf_get() to get the
> > + * exported by the Event API to dequeue events, enqueue events,
> > + * and link and unlink event queue(s) to event ports.
> > + *
> > + * Before configuring a device, an application should call rte_event_dev_info_get()
> > + * to determine the capabilities of the event device, and any queue or port
> > + * limits of that device. The parameters set in the various device configuration
> > + * structures may need to be adjusted based on the max values provided in the
> > + * device information structure returned from the info_get API.
> 
> Can we add full name of info_get()?

Yep, that will turn it into a hyperlink, so will update in v4

^ permalink raw reply	[flat|nested] 123+ messages in thread

* Re: [PATCH v3 03/11] eventdev: update documentation on device capability flags
  2024-02-07 10:30       ` Jerin Jacob
@ 2024-02-20 16:42         ` Bruce Richardson
  0 siblings, 0 replies; 123+ messages in thread
From: Bruce Richardson @ 2024-02-20 16:42 UTC (permalink / raw)
  To: Jerin Jacob
  Cc: dev, jerinj, mattias.ronnblom, abdullah.sevincer, sachin.saxena,
	hemant.agrawal, pbhagavatula, pravin.pathak

On Wed, Feb 07, 2024 at 04:00:04PM +0530, Jerin Jacob wrote:
> On Sat, Feb 3, 2024 at 12:59 PM Bruce Richardson
> <bruce.richardson@intel.com> wrote:
> >
> > Update the device capability docs, to:
> >
> > * include more cross-references
> > * split longer text into paragraphs, in most cases with each flag having
> >   a single-line summary at the start of the doc block
> > * general comment rewording and clarification as appropriate
> >
> > Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
> > ---
> > V3: Updated following feedback from Mattias
> > ---
> >  lib/eventdev/rte_eventdev.h | 130 +++++++++++++++++++++++++-----------
> >  1 file changed, 92 insertions(+), 38 deletions(-)
> 
> >   */
> >  #define RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED   (1ULL << 2)
> >  /**< Event device operates in distributed scheduling mode.
> > + *
> >   * In distributed scheduling mode, event scheduling happens in HW or
> > - * rte_event_dequeue_burst() or the combination of these two.
> > + * rte_event_dequeue_burst() / rte_event_enqueue_burst() or the combination of these two.
> >   * If the flag is not set then eventdev is centralized and thus needs a
> >   * dedicated service core that acts as a scheduling thread .
> 
> Please remove space between thread and . in the existing code.
> 

ack

> >   *
> > - * @see rte_event_dequeue_burst()
> > + * @see rte_event_dev_service_id_get
> 
> Could you add () around all the functions so that looks good across the series?
> 

Yes. I'll also standardize them on one-per-line. Some had two per line but
put the third on a separate line because of code wrapping. Better to just
have everything on its own line for consistency.

> 
> >   */
> >  #define RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES     (1ULL << 3)
> >  /**< Event device is capable of enqueuing events of any type to any queue.
> > - * If this capability is not set, the queue only supports events of the
> > - *  *RTE_SCHED_TYPE_* type that it was created with.
> >   *
> > - * @see RTE_SCHED_TYPE_* values
> > + * If this capability is not set, each queue only supports events of the
> > + * *RTE_SCHED_TYPE_* type that it was created with.
> > + * The behaviour when events of other scheduling types are sent to the queue is
> > + * currently undefined.
> 
> I think, in header file, we can remove "currently"
> 

Ack.

> 
> p
> >   */
> >
> >  #define RTE_EVENT_DEV_CAP_PROFILE_LINK (1ULL << 12)
> > -/**< Event device is capable of supporting multiple link profiles per event port
> > - * i.e., the value of `rte_event_dev_info::max_profiles_per_port` is greater
> > - * than one.
> > +/**< Event device is capable of supporting multiple link profiles per event port.
> > + *
> > + *
> 
> The above line can be removed.

Ack.

> 
> > + * When set, the value of `rte_event_dev_info::max_profiles_per_port` is greater
> > + * than one, and multiple profiles may be configured and then switched at runtime.
> > + * If not set, only a single profile may be configured, which may itself be
> > + * runtime adjustable (if @ref RTE_EVENT_DEV_CAP_RUNTIME_PORT_LINK is set).
> > + *
> > + * @see rte_event_port_profile_links_set rte_event_port_profile_links_get
> > + * @see rte_event_port_profile_switch
> > + * @see RTE_EVENT_DEV_CAP_RUNTIME_PORT_LINK
> >   */
> >

^ permalink raw reply	[flat|nested] 123+ messages in thread

* Re: [PATCH v3 09/11] eventdev: improve comments on scheduling types
  2024-02-08 10:04         ` Mattias Rönnblom
@ 2024-02-20 17:23           ` Bruce Richardson
  0 siblings, 0 replies; 123+ messages in thread
From: Bruce Richardson @ 2024-02-20 17:23 UTC (permalink / raw)
  To: Mattias Rönnblom
  Cc: Jerin Jacob, dev, jerinj, mattias.ronnblom, abdullah.sevincer,
	sachin.saxena, hemant.agrawal, pbhagavatula, pravin.pathak

On Thu, Feb 08, 2024 at 11:04:03AM +0100, Mattias Rönnblom wrote:
> On 2024-02-08 10:18, Jerin Jacob wrote:
> > On Fri, Feb 2, 2024 at 6:11 PM Bruce Richardson
> > <bruce.richardson@intel.com> wrote:
> > > 
> > > The description of ordered and atomic scheduling given in the eventdev
> > > doxygen documentation was not always clear. Try and simplify this so
> > > that it is clearer for the end-user of the application
> > > 
> > > Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
> > > 
> > > ---
> > > V3: extensive rework following feedback. Please re-review!
> > > ---
> > >   lib/eventdev/rte_eventdev.h | 73 +++++++++++++++++++++++--------------
> > >   1 file changed, 45 insertions(+), 28 deletions(-)
> > > 
> > > diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
> > > index a7d8c28015..8d72765ae7 100644
> > > --- a/lib/eventdev/rte_eventdev.h
> > > +++ b/lib/eventdev/rte_eventdev.h
> > > @@ -1347,25 +1347,35 @@ struct rte_event_vector {
> > >   /**< Ordered scheduling
> > >    *
> > >    * Events from an ordered flow of an event queue can be scheduled to multiple
> > > - * ports for concurrent processing while maintaining the original event order.
> > > + * ports for concurrent processing while maintaining the original event order,
> > > + * i.e. the order in which they were first enqueued to that queue.
> > >    * This scheme enables the user to achieve high single flow throughput by
> > > - * avoiding SW synchronization for ordering between ports which bound to cores.
> > > - *
> > > - * The source flow ordering from an event queue is maintained when events are
> > > - * enqueued to their destination queue within the same ordered flow context.
> > > - * An event port holds the context until application call
> > > - * rte_event_dequeue_burst() from the same port, which implicitly releases
> > > - * the context.
> > > - * User may allow the scheduler to release the context earlier than that
> > > - * by invoking rte_event_enqueue_burst() with RTE_EVENT_OP_RELEASE operation.
> > > - *
> > > - * Events from the source queue appear in their original order when dequeued
> > > - * from a destination queue.
> > > - * Event ordering is based on the received event(s), but also other
> > > - * (newly allocated or stored) events are ordered when enqueued within the same
> > > - * ordered context. Events not enqueued (e.g. released or stored) within the
> > > - * context are  considered missing from reordering and are skipped at this time
> > > - * (but can be ordered again within another context).
> > > + * avoiding SW synchronization for ordering between ports which are polled
> > > + * by different cores.
> > 
> > I prefer the following version to remove "polled" and to be more explicit.
> > 
> > avoiding SW synchronization for ordering between ports which are
> > dequeuing events
> > using @ref rte_event_deque_burst() across different cores.
> > 
> 
> "This scheme allows events pertaining to the same, potentially large flow to
> be processed in parallel on multiple cores without incurring any
> application-level order restoration logic overhead."
> 

Ack.

> > > + *
> > > + * After events are dequeued from a set of ports, as those events are re-enqueued
> > > + * to another queue (with the op field set to @ref RTE_EVENT_OP_FORWARD), the event
> > > + * device restores the original event order - including events returned from all
> > > + * ports in the set - before the events arrive on the destination queue.
> > 
> > _arrive_ is a bit vague since we have an enqueue operation. How about,
> > "before the events are actually deposited on the destination queue."
> > 

I'll use the term "placed" rather than "deposited".

> > 
> > > + *
> > > + * Any events not forwarded i.e. dropped explicitly via RELEASE or implicitly
> > > + * released by the next dequeue operation on a port, are skipped by the reordering
> > > + * stage and do not affect the reordering of other returned events.
> > > + *
> > > + * Any NEW events sent on a port are not ordered with respect to FORWARD events sent
> > > + * on the same port, since they have no original event order. They also are not
> > > + * ordered with respect to NEW events enqueued on other ports.
> > > + * However, NEW events to the same destination queue from the same port are guaranteed
> > > + * to be enqueued in the order they were submitted via rte_event_enqueue_burst().
> > > + *
> > > + * NOTE:
> > > + *   In restoring event order of forwarded events, the eventdev API guarantees that
> > > + *   all events from the same flow (i.e. same @ref rte_event.flow_id,
> > > + *   @ref rte_event.priority and @ref rte_event.queue_id) will be put in the original
> > > + *   order before being forwarded to the destination queue.
> > > + *   Some eventdevs may implement stricter ordering to achieve this aim,
> > > + *   for example, restoring the order across *all* flows dequeued from the same ORDERED
> > > + *   queue.
> > >    *
> > >    * @see rte_event_queue_setup(), rte_event_dequeue_burst(), RTE_EVENT_OP_RELEASE
> > >    */
> > > @@ -1373,18 +1383,25 @@ struct rte_event_vector {
> > >   #define RTE_SCHED_TYPE_ATOMIC           1
> > >   /**< Atomic scheduling
> > >    *
> > > - * Events from an atomic flow of an event queue can be scheduled only to a
> > > + * Events from an atomic flow, identified by a combination of @ref rte_event.flow_id,
> > > + * @ref rte_event.queue_id and @ref rte_event.priority, can be scheduled only to a
> > >    * single port at a time. The port is guaranteed to have exclusive (atomic)
> > >    * access to the associated flow context, which enables the user to avoid SW
> > > - * synchronization. Atomic flows also help to maintain event ordering
> > > - * since only one port at a time can process events from a flow of an
> > > - * event queue.
> > > - *
> > > - * The atomic queue synchronization context is dedicated to the port until
> > > - * application call rte_event_dequeue_burst() from the same port,
> > > - * which implicitly releases the context. User may allow the scheduler to
> > > - * release the context earlier than that by invoking rte_event_enqueue_burst()
> > > - * with RTE_EVENT_OP_RELEASE operation.
> > > + * synchronization. Atomic flows also maintain event ordering
> > > + * since only one port at a time can process events from each flow of an
> > > + * event queue, and events within a flow are not reordered within the scheduler.
> > > + *
> > > + * An atomic flow is locked to a port when events from that flow are first
> > > + * scheduled to that port. That lock remains in place until the
> > > + * application calls rte_event_dequeue_burst() from the same port,
> > > + * which implicitly releases the lock (if @ref RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL flag is not set).
> > > + * User may allow the scheduler to release the lock earlier than that by invoking
> > > + * rte_event_enqueue_burst() with RTE_EVENT_OP_RELEASE operation for each event from that flow.
> > > + *
> > > + * NOTE: The lock is only released once the last event from the flow, outstanding on the port,
> > 
> > I think the note can start with something like below:
> > 
> > When there are multiple atomic events dequeued from @ref
> > rte_event_dequeue_burst()
> > for the same event queue, and they have the same flow id, then the lock is ....
> > 
> 
> Yes, or maybe describing the whole lock/unlock state.
> 
> "The conceptual per-queue-per-flow lock is in a locked state as long (and
> only as long) as one or more events pertaining to that flow were scheduled
> to the port in question, but are not yet released."
> 
> Maybe it needs to be more meaty, describing what released means. I don't
> have the full context of the documentation in my head when I'm writing this.
>

I'd rather not go into what "released" means, but I'll reword this a bit in
v4. As part of that, I'll also put in a reference to forwarding events also
releasing the lock.

/Bruce

^ permalink raw reply	[flat|nested] 123+ messages in thread

* Re: [PATCH v3 10/11] eventdev: clarify docs on event object fields and op types
  2024-02-09  9:14       ` Jerin Jacob
@ 2024-02-20 17:39         ` Bruce Richardson
  2024-02-21  9:31           ` Jerin Jacob
  2024-02-20 17:50         ` Bruce Richardson
  2024-02-20 18:03         ` Bruce Richardson
  2 siblings, 1 reply; 123+ messages in thread
From: Bruce Richardson @ 2024-02-20 17:39 UTC (permalink / raw)
  To: Jerin Jacob
  Cc: dev, jerinj, mattias.ronnblom, abdullah.sevincer, sachin.saxena,
	hemant.agrawal, pbhagavatula, pravin.pathak

On Fri, Feb 09, 2024 at 02:44:04PM +0530, Jerin Jacob wrote:
> On Fri, Feb 2, 2024 at 6:11 PM Bruce Richardson
> <bruce.richardson@intel.com> wrote:
> >
> > Clarify the meaning of the NEW, FORWARD and RELEASE event types.
> > For the fields in "rte_event" struct, enhance the comments on each to
> > clarify the field's use, and whether it is preserved between enqueue and
> > dequeue, and its role, if any, in scheduling.
> >
> > Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
> > ---
> > V3: updates following review
> > ---
> >  lib/eventdev/rte_eventdev.h | 161 +++++++++++++++++++++++++-----------
> >  1 file changed, 111 insertions(+), 50 deletions(-)
> >
> > diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
> > index 8d72765ae7..58219e027e 100644
> > --- a/lib/eventdev/rte_eventdev.h
> > +++ b/lib/eventdev/rte_eventdev.h
> > @@ -1463,47 +1463,54 @@ struct rte_event_vector {
> >
> >  /* Event enqueue operations */
> >  #define RTE_EVENT_OP_NEW                0
> > -/**< The event producers use this operation to inject a new event to the
> > - * event device.
> > +/**< The @ref rte_event.op field must be set to this operation type to inject a new event,
> > + * i.e. one not previously dequeued, into the event device, to be scheduled
> > + * for processing.
> >   */
> >  #define RTE_EVENT_OP_FORWARD            1
> > -/**< The CPU use this operation to forward the event to different event queue or
> > - * change to new application specific flow or schedule type to enable
> > - * pipelining.
> > +/**< The application must set the @ref rte_event.op field to this operation type to return a
> > + * previously dequeued event to the event device to be scheduled for further processing.
> >   *
> > - * This operation must only be enqueued to the same port that the
> > + * This event *must* be enqueued to the same port that the
> >   * event to be forwarded was dequeued from.
> > + *
> > + * The event's fields, including (but not limited to) flow_id, scheduling type,
> > + * destination queue, and event payload e.g. mbuf pointer, may all be updated as
> > + * desired by the application, but the @ref rte_event.impl_opaque field must
> > + * be kept to the same value as was present when the event was dequeued.
> >   */
> >  #define RTE_EVENT_OP_RELEASE            2
> >  /**< Release the flow context associated with the schedule type.
> >   *
> > - * If current flow's scheduler type method is *RTE_SCHED_TYPE_ATOMIC*
> > - * then this function hints the scheduler that the user has completed critical
> > - * section processing in the current atomic context.
> > - * The scheduler is now allowed to schedule events from the same flow from
> > - * an event queue to another port. However, the context may be still held
> > - * until the next rte_event_dequeue_burst() call, this call allows but does not
> > - * force the scheduler to release the context early.
> > - *
> > - * Early atomic context release may increase parallelism and thus system
> > + * If current flow's scheduler type method is @ref RTE_SCHED_TYPE_ATOMIC
> > + * then this operation type hints the scheduler that the user has completed critical
> > + * section processing for this event in the current atomic context, and that the
> > + * scheduler may unlock any atomic locks held for this event.
> > + * If this is the last event from an atomic flow, i.e. all flow locks are released,
> 
> 
> Similar comment as other email
> [Jerin] When there are multiple atomic events dequeued from @ref
> rte_event_dequeue_burst()
> for the same event queue, and they have the same flow id, then the lock is ....
> 
> [Mattias]
> Yes, or maybe describing the whole lock/unlock state.
> 
> "The conceptual per-queue-per-flow lock is in a locked state as long
> (and only as long) as one or more events pertaining to that flow were
> scheduled to the port in question, but are not yet released."
> 
> Maybe it needs to be more meaty, describing what released means. I don't
> have the full context of the documentation in my head when I'm writing this.
>

I will take a look and reword this a bit.
 
> 
> > + * the scheduler is now allowed to schedule events from that flow from to another port.
> > + * However, the atomic locks may be still held until the next rte_event_dequeue_burst()
> > + * call; enqueuing an event with opt type @ref RTE_EVENT_OP_RELEASE allows,
> 
> Is ";" intended?
> 
> > + * but does not force, the scheduler to release the atomic locks early.
> 
> Instead of "not force", we can use the term _hint_ to the driver and reword.

Ok.
> 
> > + *
> > + * Early atomic lock release may increase parallelism and thus system
> >   * performance, but the user needs to design carefully the split into critical
> >   * vs non-critical sections.
> >   *
> > - * If current flow's scheduler type method is *RTE_SCHED_TYPE_ORDERED*
> > - * then this function hints the scheduler that the user has done all that need
> > - * to maintain event order in the current ordered context.
> > - * The scheduler is allowed to release the ordered context of this port and
> > - * avoid reordering any following enqueues.
> > - *
> > - * Early ordered context release may increase parallelism and thus system
> > - * performance.
> > + * If current flow's scheduler type method is @ref RTE_SCHED_TYPE_ORDERED
> > + * then this operation type informs the scheduler that the current event has
> > + * completed processing and will not be returned to the scheduler, i.e.
> > + * it has been dropped, and so the reordering context for that event
> > + * should be considered filled.
> >   *
> > - * If current flow's scheduler type method is *RTE_SCHED_TYPE_PARALLEL*
> > - * or no scheduling context is held then this function may be an NOOP,
> > - * depending on the implementation.
> > + * Events with this operation type must only be enqueued to the same port that the
> > + * event to be released was dequeued from. The @ref rte_event.impl_opaque
> > + * field in the release event must have the same value as that in the original dequeued event.
> >   *
> > - * This operation must only be enqueued to the same port that the
> > - * event to be released was dequeued from.
> > + * If a dequeued event is re-enqueued with operation type of @ref RTE_EVENT_OP_RELEASE,
> > + * then any subsequent enqueue of that event - or a copy of it - must be done as event of type
> > + * @ref RTE_EVENT_OP_NEW, not @ref RTE_EVENT_OP_FORWARD. This is because any context for
> > + * the originally dequeued event, i.e. atomic locks, or reorder buffer entries, will have
> > + * been removed or invalidated by the release operation.
> >   */
> >
> >  /**
> > @@ -1517,56 +1524,110 @@ struct rte_event {
> >                 /** Event attributes for dequeue or enqueue operation */
> >                 struct {
> >                         uint32_t flow_id:20;
> > -                       /**< Targeted flow identifier for the enqueue and
> > -                        * dequeue operation.
> > -                        * The value must be in the range of
> > -                        * [0, nb_event_queue_flows - 1] which
> > -                        * previously supplied to rte_event_dev_configure().
> > +                       /**< Target flow identifier for the enqueue and dequeue operation.
> > +                        *
> > +                        * For @ref RTE_SCHED_TYPE_ATOMIC, this field is used to identify a
> > +                        * flow for atomicity within a queue & priority level, such that events
> > +                        * from each individual flow will only be scheduled to one port at a time.
> > +                        *
> > +                        * This field is preserved between enqueue and dequeue when
> > +                        * a device reports the @ref RTE_EVENT_DEV_CAP_CARRY_FLOW_ID
> > +                        * capability. Otherwise the value is implementation dependent
> > +                        * on dequeue.
> >                          */
> >                         uint32_t sub_event_type:8;
> >                         /**< Sub-event types based on the event source.
> > +                        *
> > +                        * This field is preserved between enqueue and dequeue.
> > +                        * This field is for application or event adapter use,
> > +                        * and is not considered in scheduling decisions.
> 
> 
> The cnxk driver does consider this field in scheduling decisions, to
> differentiate between producers, i.e. event adapters.
> If other drivers do not, then we can change the language to say it is
> implementation defined.
> 
How does the event type influence the scheduling decision? I can drop the
last line here, but it seems strange to me that the type of event could affect
things. I would have thought that with the eventdev API only the queue,
flow id, and priority would be factors in scheduling?

> 
> > +                        *
> >                          * @see RTE_EVENT_TYPE_CPU
> >                          */
> >                         uint32_t event_type:4;
> > -                       /**< Event type to classify the event source.
> > -                        * @see RTE_EVENT_TYPE_ETHDEV, (RTE_EVENT_TYPE_*)
> > +                       /**< Event type to classify the event source. (RTE_EVENT_TYPE_*)
> > +                        *
> > +                        * This field is preserved between enqueue and dequeue
> > +                        * This field is for application or event adapter use,
> > +                        * and is not considered in scheduling decisions.
> 
> 
> The cnxk driver does consider this field in scheduling decisions, to
> differentiate between producers, i.e. event adapters.
> If other drivers do not, then we can change the language to say it is
> implementation defined.
> 
> >                          */
> >                         uint8_t op:2;
> > -                       /**< The type of event enqueue operation - new/forward/
> > -                        * etc.This field is not preserved across an instance
> > -                        * and is undefined on dequeue.
> > -                        * @see RTE_EVENT_OP_NEW, (RTE_EVENT_OP_*)
> > +                       /**< The type of event enqueue operation - new/forward/ etc.
> > +                        *
> > +                        * This field is *not* preserved across an instance
> > +                        * and is implementation dependent on dequeue.
> > +                        *
> > +                        * @see RTE_EVENT_OP_NEW
> > +                        * @see RTE_EVENT_OP_FORWARD
> > +                        * @see RTE_EVENT_OP_RELEASE
> >                          */
> >                         uint8_t rsvd:4;
> > -                       /**< Reserved for future use */
> > +                       /**< Reserved for future use.
> > +                        *
> > +                        * Should be set to zero on enqueue.
> 
> I am worried about some applications explicitly starting to set this to
> zero on every enqueue.
> Instead, can we say the application should not touch the field? Since every
> eventdev operation starts with dequeue(), the driver can fill in the correct value.
> 

I'll document this as set to zero on "NEW", and left untouched on FORWARD/RELEASE.
If we don't state that it should be zeroed on NEW, or left untouched
otherwise, we cannot use the space in future without an ABI break.

> > +                        */
> >                         uint8_t sched_type:2;
> >                         /**< Scheduler synchronization type (RTE_SCHED_TYPE_*)
> >                          * associated with flow id on a given event queue
> >                          * for the enqueue and dequeue operation.
> > +                        *
> > +                        * This field is used to determine the scheduling type
> > +                        * for events sent to queues where @ref RTE_EVENT_QUEUE_CFG_ALL_TYPES
> > +                        * is configured.
> > +                        * For queues where only a single scheduling type is available,
> > +                        * this field must be set to match the configured scheduling type.
> > +                        *
> > +                        * This field is preserved between enqueue and dequeue.
> > +                        *
> > +                        * @see RTE_SCHED_TYPE_ORDERED
> > +                        * @see RTE_SCHED_TYPE_ATOMIC
> > +                        * @see RTE_SCHED_TYPE_PARALLEL
> >                          */
> >                         uint8_t queue_id;
> >                         /**< Targeted event queue identifier for the enqueue or
> >                          * dequeue operation.
> > -                        * The value must be in the range of
> > -                        * [0, nb_event_queues - 1] which previously supplied to
> > -                        * rte_event_dev_configure().
> > +                        * The value must be less than @ref rte_event_dev_config.nb_event_queues
> > +                        * which was previously supplied to rte_event_dev_configure().
> 
> Some reason, similar text got removed for flow_id. Please add the same.
> 

That was deliberate based on discussion on V2. See:

http://inbox.dpdk.org/dev/Zby3nb4NGs8T5odL@bricha3-MOBL.ger.corp.intel.com/

and wider thread discussion starting here:

http://inbox.dpdk.org/dev/ZbvOtAEpzja0gu7b@bricha3-MOBL.ger.corp.intel.com/

Basically, the comment is wrong based on what the code does now. No event
adapters or apps are limiting the flow-id, and nothing seems broken, so we
can remove the comment.


^ permalink raw reply	[flat|nested] 123+ messages in thread

* Re: [PATCH v3 10/11] eventdev: clarify docs on event object fields and op types
  2024-02-09  9:14       ` Jerin Jacob
  2024-02-20 17:39         ` Bruce Richardson
@ 2024-02-20 17:50         ` Bruce Richardson
  2024-02-20 18:03         ` Bruce Richardson
  2 siblings, 0 replies; 123+ messages in thread
From: Bruce Richardson @ 2024-02-20 17:50 UTC (permalink / raw)
  To: Jerin Jacob
  Cc: dev, jerinj, mattias.ronnblom, abdullah.sevincer, sachin.saxena,
	hemant.agrawal, pbhagavatula, pravin.pathak

On Fri, Feb 09, 2024 at 02:44:04PM +0530, Jerin Jacob wrote:
> On Fri, Feb 2, 2024 at 6:11 PM Bruce Richardson
> <bruce.richardson@intel.com> wrote:
> >
> > Clarify the meaning of the NEW, FORWARD and RELEASE event types.
> > For the fields in "rte_event" struct, enhance the comments on each to
> > clarify the field's use, and whether it is preserved between enqueue and
> > dequeue, and its role, if any, in scheduling.
> >
> > Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
> > ---
> > V3: updates following review
> > ---
> >  lib/eventdev/rte_eventdev.h | 161 +++++++++++++++++++++++++-----------
> >  1 file changed, 111 insertions(+), 50 deletions(-)
> >
> > diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h

<snip>

> > + * the scheduler is now allowed to schedule events from that flow from to another port.
> > + * However, the atomic locks may be still held until the next rte_event_dequeue_burst()
> > + * call; enqueuing an event with opt type @ref RTE_EVENT_OP_RELEASE allows,
> 
> Is ";" intended?

Yes, it was. :-)

/Bruce

^ permalink raw reply	[flat|nested] 123+ messages in thread

* Re: [PATCH v3 10/11] eventdev: clarify docs on event object fields and op types
  2024-02-09  9:14       ` Jerin Jacob
  2024-02-20 17:39         ` Bruce Richardson
  2024-02-20 17:50         ` Bruce Richardson
@ 2024-02-20 18:03         ` Bruce Richardson
  2 siblings, 0 replies; 123+ messages in thread
From: Bruce Richardson @ 2024-02-20 18:03 UTC (permalink / raw)
  To: Jerin Jacob
  Cc: dev, jerinj, mattias.ronnblom, abdullah.sevincer, sachin.saxena,
	hemant.agrawal, pbhagavatula, pravin.pathak

On Fri, Feb 09, 2024 at 02:44:04PM +0530, Jerin Jacob wrote:
> On Fri, Feb 2, 2024 at 6:11 PM Bruce Richardson
> <bruce.richardson@intel.com> wrote:
> >
> > Clarify the meaning of the NEW, FORWARD and RELEASE event types.
> > For the fields in "rte_event" struct, enhance the comments on each to
> > clarify the field's use, and whether it is preserved between enqueue and
> > dequeue, and its role, if any, in scheduling.
> >
> > Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
> > ---
> > V3: updates following review
> > ---
> >  lib/eventdev/rte_eventdev.h | 161 +++++++++++++++++++++++++-----------
> >  1 file changed, 111 insertions(+), 50 deletions(-)
> >
> > diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
> > index 8d72765ae7..58219e027e 100644
> > --- a/lib/eventdev/rte_eventdev.h
> > +++ b/lib/eventdev/rte_eventdev.h
> > @@ -1463,47 +1463,54 @@ struct rte_event_vector {
> >
> >  /* Event enqueue operations */
> >  #define RTE_EVENT_OP_NEW                0
> > -/**< The event producers use this operation to inject a new event to the
> > - * event device.
> > +/**< The @ref rte_event.op field must be set to this operation type to inject a new event,
> > + * i.e. one not previously dequeued, into the event device, to be scheduled
> > + * for processing.
> >   */
> >  #define RTE_EVENT_OP_FORWARD            1
> > -/**< The CPU use this operation to forward the event to different event queue or
> > - * change to new application specific flow or schedule type to enable
> > - * pipelining.
> > +/**< The application must set the @ref rte_event.op field to this operation type to return a
> > + * previously dequeued event to the event device to be scheduled for further processing.
> >   *
> > - * This operation must only be enqueued to the same port that the
> > + * This event *must* be enqueued to the same port that the
> >   * event to be forwarded was dequeued from.
> > + *
> > + * The event's fields, including (but not limited to) flow_id, scheduling type,
> > + * destination queue, and event payload e.g. mbuf pointer, may all be updated as
> > + * desired by the application, but the @ref rte_event.impl_opaque field must
> > + * be kept to the same value as was present when the event was dequeued.
> >   */
> >  #define RTE_EVENT_OP_RELEASE            2
> >  /**< Release the flow context associated with the schedule type.
> >   *
> > - * If current flow's scheduler type method is *RTE_SCHED_TYPE_ATOMIC*
> > - * then this function hints the scheduler that the user has completed critical
> > - * section processing in the current atomic context.
> > - * The scheduler is now allowed to schedule events from the same flow from
> > - * an event queue to another port. However, the context may be still held
> > - * until the next rte_event_dequeue_burst() call, this call allows but does not
> > - * force the scheduler to release the context early.
> > - *
> > - * Early atomic context release may increase parallelism and thus system
> > + * If current flow's scheduler type method is @ref RTE_SCHED_TYPE_ATOMIC
> > + * then this operation type hints the scheduler that the user has completed critical
> > + * section processing for this event in the current atomic context, and that the
> > + * scheduler may unlock any atomic locks held for this event.
> > + * If this is the last event from an atomic flow, i.e. all flow locks are released,
> 
> 
> Similar comment as other email
> [Jerin] When there are multiple atomic events dequeued from @ref
> rte_event_dequeue_burst()
> for the same event queue, and they have the same flow id, then the lock is ....
> 
> [Mattias]
> Yes, or maybe describing the whole lock/unlock state.
> 
> "The conceptual per-queue-per-flow lock is in a locked state as long
> (and only as long) as one or more events pertaining to that flow were
> scheduled to the port in question, but are not yet released."
> 
> Maybe it needs to be more meaty, describing what released means. I don't
> have the full context of the documentation in my head when I'm writing this.
> 
Rather than trying to explain all again, I'm just going to put inline here a
cross-reference to the text on RTE_EVENT_TYPE_ATOMIC.

/Bruce

^ permalink raw reply	[flat|nested] 123+ messages in thread

* Re: [PATCH v3 10/11] eventdev: clarify docs on event object fields and op types
  2024-02-20 17:39         ` Bruce Richardson
@ 2024-02-21  9:31           ` Jerin Jacob
  2024-02-21 10:28             ` Bruce Richardson
  0 siblings, 1 reply; 123+ messages in thread
From: Jerin Jacob @ 2024-02-21  9:31 UTC (permalink / raw)
  To: Bruce Richardson
  Cc: dev, jerinj, mattias.ronnblom, abdullah.sevincer, sachin.saxena,
	hemant.agrawal, pbhagavatula, pravin.pathak

On Tue, Feb 20, 2024 at 11:09 PM Bruce Richardson
<bruce.richardson@intel.com> wrote:
>
> On Fri, Feb 09, 2024 at 02:44:04PM +0530, Jerin Jacob wrote:
> > On Fri, Feb 2, 2024 at 6:11 PM Bruce Richardson
> > <bruce.richardson@intel.com> wrote:
> > >
> > > Clarify the meaning of the NEW, FORWARD and RELEASE event types.
> > > For the fields in "rte_event" struct, enhance the comments on each to
> > > clarify the field's use, and whether it is preserved between enqueue and
> > > dequeue, and its role, if any, in scheduling.
> > >
> > > Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
> > > ---
> > > V3: updates following review
> > > ---
> > >  lib/eventdev/rte_eventdev.h | 161 +++++++++++++++++++++++++-----------
> > >  1 file changed, 111 insertions(+), 50 deletions(-)
> > >
> > > diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
> > > index 8d72765ae7..58219e027e 100644
> > > --- a/lib/eventdev/rte_eventdev.h
> > > +++ b/lib/eventdev/rte_eventdev.h
> > > @@ -1463,47 +1463,54 @@ struct rte_event_vector {
> > >
> > >  /* Event enqueue operations */
> > >  #define RTE_EVENT_OP_NEW                0
> > > -/**< The event producers use this operation to inject a new event to the
> > > - * event device.
> > > +/**< The @ref rte_event.op field must be set to this operation type to inject a new event,
> > > + * i.e. one not previously dequeued, into the event device, to be scheduled
> > > + * for processing.
> > >   */
> > >  #define RTE_EVENT_OP_FORWARD            1
> > > -/**< The CPU use this operation to forward the event to different event queue or
> > > - * change to new application specific flow or schedule type to enable
> > > - * pipelining.
> > > +/**< The application must set the @ref rte_event.op field to this operation type to return a
> > > + * previously dequeued event to the event device to be scheduled for further processing.
> > >   *
> > > - * This operation must only be enqueued to the same port that the
> > > + * This event *must* be enqueued to the same port that the
> > >   * event to be forwarded was dequeued from.
> > > + *
> > > + * The event's fields, including (but not limited to) flow_id, scheduling type,
> > > + * destination queue, and event payload e.g. mbuf pointer, may all be updated as
> > > + * desired by the application, but the @ref rte_event.impl_opaque field must
> > > + * be kept to the same value as was present when the event was dequeued.
> > >   */
> > >  #define RTE_EVENT_OP_RELEASE            2
> > >  /**< Release the flow context associated with the schedule type.
> > >   *
> > > - * If current flow's scheduler type method is *RTE_SCHED_TYPE_ATOMIC*
> > > - * then this function hints the scheduler that the user has completed critical
> > > - * section processing in the current atomic context.
> > > - * The scheduler is now allowed to schedule events from the same flow from
> > > - * an event queue to another port. However, the context may be still held
> > > - * until the next rte_event_dequeue_burst() call, this call allows but does not
> > > - * force the scheduler to release the context early.
> > > - *
> > > - * Early atomic context release may increase parallelism and thus system
> > > + * If current flow's scheduler type method is @ref RTE_SCHED_TYPE_ATOMIC
> > > + * then this operation type hints the scheduler that the user has completed critical
> > > + * section processing for this event in the current atomic context, and that the
> > > + * scheduler may unlock any atomic locks held for this event.
> > > + * If this is the last event from an atomic flow, i.e. all flow locks are released,
> >
> >
> > Similar comment as other email
> > [Jerin] When there are multiple atomic events dequeue from @ref
> > rte_event_dequeue_burst()
> > for the same event queue, and it has same flow id then the lock is ....
> >
> > [Mattias]
> > Yes, or maybe describing the whole lock/unlock state.
> >
> > "The conceptual per-queue-per-flow lock is in a locked state as long
> > (and only as long) as one or more events pertaining to that flow were
> > scheduled to the port in question, but are not yet released."
> >
> > Maybe it needs to be more meaty, describing what released means. I don't
> > have the full context of the documentation in my head when I'm writing this.
> >
>
> Will take a look to reword a bit
>
> >
> > > + * the scheduler is now allowed to schedule events from that flow from to another port.
> > > + * However, the atomic locks may be still held until the next rte_event_dequeue_burst()
> > > + * call; enqueuing an event with opt type @ref RTE_EVENT_OP_RELEASE allows,
> >
> > Is ";" intended?
> >
> > > + * but does not force, the scheduler to release the atomic locks early.
> >
> > instead of "not force", can use the term _hint_ the driver and reword.
>
> Ok.
> >
> > > + *
> > > + * Early atomic lock release may increase parallelism and thus system
> > >   * performance, but the user needs to design carefully the split into critical
> > >   * vs non-critical sections.
> > >   *
> > > - * If current flow's scheduler type method is *RTE_SCHED_TYPE_ORDERED*
> > > - * then this function hints the scheduler that the user has done all that need
> > > - * to maintain event order in the current ordered context.
> > > - * The scheduler is allowed to release the ordered context of this port and
> > > - * avoid reordering any following enqueues.
> > > - *
> > > - * Early ordered context release may increase parallelism and thus system
> > > - * performance.
> > > + * If current flow's scheduler type method is @ref RTE_SCHED_TYPE_ORDERED
> > > + * then this operation type informs the scheduler that the current event has
> > > + * completed processing and will not be returned to the scheduler, i.e.
> > > + * it has been dropped, and so the reordering context for that event
> > > + * should be considered filled.
> > >   *
> > > - * If current flow's scheduler type method is *RTE_SCHED_TYPE_PARALLEL*
> > > - * or no scheduling context is held then this function may be an NOOP,
> > > - * depending on the implementation.
> > > + * Events with this operation type must only be enqueued to the same port that the
> > > + * event to be released was dequeued from. The @ref rte_event.impl_opaque
> > > + * field in the release event must have the same value as that in the original dequeued event.
> > >   *
> > > - * This operation must only be enqueued to the same port that the
> > > - * event to be released was dequeued from.
> > > + * If a dequeued event is re-enqueued with operation type of @ref RTE_EVENT_OP_RELEASE,
> > > + * then any subsequent enqueue of that event - or a copy of it - must be done as event of type
> > > + * @ref RTE_EVENT_OP_NEW, not @ref RTE_EVENT_OP_FORWARD. This is because any context for
> > > + * the originally dequeued event, i.e. atomic locks, or reorder buffer entries, will have
> > > + * been removed or invalidated by the release operation.
> > >   */
> > >
> > >  /**
> > > @@ -1517,56 +1524,110 @@ struct rte_event {
> > >                 /** Event attributes for dequeue or enqueue operation */
> > >                 struct {
> > >                         uint32_t flow_id:20;
> > > -                       /**< Targeted flow identifier for the enqueue and
> > > -                        * dequeue operation.
> > > -                        * The value must be in the range of
> > > -                        * [0, nb_event_queue_flows - 1] which
> > > -                        * previously supplied to rte_event_dev_configure().
> > > +                       /**< Target flow identifier for the enqueue and dequeue operation.
> > > +                        *
> > > +                        * For @ref RTE_SCHED_TYPE_ATOMIC, this field is used to identify a
> > > +                        * flow for atomicity within a queue & priority level, such that events
> > > +                        * from each individual flow will only be scheduled to one port at a time.
> > > +                        *
> > > +                        * This field is preserved between enqueue and dequeue when
> > > +                        * a device reports the @ref RTE_EVENT_DEV_CAP_CARRY_FLOW_ID
> > > +                        * capability. Otherwise the value is implementation dependent
> > > +                        * on dequeue.
> > >                          */
> > >                         uint32_t sub_event_type:8;
> > >                         /**< Sub-event types based on the event source.
> > > +                        *
> > > +                        * This field is preserved between enqueue and dequeue.
> > > +                        * This field is for application or event adapter use,
> > > +                        * and is not considered in scheduling decisions.
> >
> >
> > cnxk driver is considering this for scheduling decision to
> > differentiate the producer i.e event adapters.
> > If other drivers are not then we can change the language around it is
> > implementation defined.
> >
> How does the event type influence the scheduling decision? I can drop the

For cnxk, from a HW point of view, the flow ID is 32 bits, which is
divided between flow_id (20 bits), sub_event_type (8 bits) and
event_type (4 bits)

> last line here

Yes. Please


> but it seems strange to me that the type of event could affect
> things. I would have thought that with the eventdev API only the queue,
> flow id, and priority would be factors in scheduling?

>
> >
> > > +                        *
> > >                          * @see RTE_EVENT_TYPE_CPU
> > >                          */
> > >                         uint32_t event_type:4;
> > > -                       /**< Event type to classify the event source.
> > > -                        * @see RTE_EVENT_TYPE_ETHDEV, (RTE_EVENT_TYPE_*)
> > > +                       /**< Event type to classify the event source. (RTE_EVENT_TYPE_*)
> > > +                        *
> > > +                        * This field is preserved between enqueue and dequeue
> > > +                        * This field is for application or event adapter use,
> > > +                        * and is not considered in scheduling decisions.
> >
> >
> > cnxk driver is considering this for scheduling decision to
> > differentiate the producer i.e event adapters.
> > If other drivers are not then we can change the language around it is
> > implementation defined.
> >
> > >                          */
> > >                         uint8_t op:2;
> > > -                       /**< The type of event enqueue operation - new/forward/
> > > -                        * etc.This field is not preserved across an instance
> > > -                        * and is undefined on dequeue.
> > > -                        * @see RTE_EVENT_OP_NEW, (RTE_EVENT_OP_*)
> > > +                       /**< The type of event enqueue operation - new/forward/ etc.
> > > +                        *
> > > +                        * This field is *not* preserved across an instance
> > > +                        * and is implementation dependent on dequeue.
> > > +                        *
> > > +                        * @see RTE_EVENT_OP_NEW
> > > +                        * @see RTE_EVENT_OP_FORWARD
> > > +                        * @see RTE_EVENT_OP_RELEASE
> > >                          */
> > >                         uint8_t rsvd:4;
> > > -                       /**< Reserved for future use */
> > > +                       /**< Reserved for future use.
> > > +                        *
> > > +                        * Should be set to zero on enqueue.
> >
> > I am worried about some application explicitly start setting this to
> > zero on every enqueue.
> > Instead, can we say application should not touch the field, Since every eventdev
> > operations starts with dequeue() driver can fill to the correct value.
> >
>
> I'll set this to zero on "NEW", or untouched on FORWARD/RELEASE.

OK

> If we don't state that it should be zeroed on NEW or untouched
> otherwise we cannot use the space in future without ABI break.
>
> > > +                        */
> > >                         uint8_t sched_type:2;
> > >                         /**< Scheduler synchronization type (RTE_SCHED_TYPE_*)
> > >                          * associated with flow id on a given event queue
> > >                          * for the enqueue and dequeue operation.
> > > +                        *
> > > +                        * This field is used to determine the scheduling type
> > > +                        * for events sent to queues where @ref RTE_EVENT_QUEUE_CFG_ALL_TYPES
> > > +                        * is configured.
> > > +                        * For queues where only a single scheduling type is available,
> > > +                        * this field must be set to match the configured scheduling type.
> > > +                        *
> > > +                        * This field is preserved between enqueue and dequeue.
> > > +                        *
> > > +                        * @see RTE_SCHED_TYPE_ORDERED
> > > +                        * @see RTE_SCHED_TYPE_ATOMIC
> > > +                        * @see RTE_SCHED_TYPE_PARALLEL
> > >                          */
> > >                         uint8_t queue_id;
> > >                         /**< Targeted event queue identifier for the enqueue or
> > >                          * dequeue operation.
> > > -                        * The value must be in the range of
> > > -                        * [0, nb_event_queues - 1] which previously supplied to
> > > -                        * rte_event_dev_configure().
> > > +                        * The value must be less than @ref rte_event_dev_config.nb_event_queues
> > > +                        * which was previously supplied to rte_event_dev_configure().
> >
> > Some reason, similar text got removed for flow_id. Please add the same.
> >
>
> That was deliberate based on discussion on V2. See:
>
> http://inbox.dpdk.org/dev/Zby3nb4NGs8T5odL@bricha3-MOBL.ger.corp.intel.com/
>
> and wider thread discussion starting here:
>
> http://inbox.dpdk.org/dev/ZbvOtAEpzja0gu7b@bricha3-MOBL.ger.corp.intel.com/
>
> Basically, the comment is wrong based on what the code does now. No event
> adapters or apps are limiting the flow-id, and nothing seems broken, so we
> can remove the comment.

OK

>

^ permalink raw reply	[flat|nested] 123+ messages in thread

* Re: [PATCH v3 10/11] eventdev: clarify docs on event object fields and op types
  2024-02-21  9:31           ` Jerin Jacob
@ 2024-02-21 10:28             ` Bruce Richardson
  0 siblings, 0 replies; 123+ messages in thread
From: Bruce Richardson @ 2024-02-21 10:28 UTC (permalink / raw)
  To: Jerin Jacob
  Cc: dev, jerinj, mattias.ronnblom, abdullah.sevincer, sachin.saxena,
	hemant.agrawal, pbhagavatula, pravin.pathak

On Wed, Feb 21, 2024 at 03:01:06PM +0530, Jerin Jacob wrote:
> On Tue, Feb 20, 2024 at 11:09 PM Bruce Richardson
> <bruce.richardson@intel.com> wrote:
> >
> > On Fri, Feb 09, 2024 at 02:44:04PM +0530, Jerin Jacob wrote:
> > > On Fri, Feb 2, 2024 at 6:11 PM Bruce Richardson
> > > <bruce.richardson@intel.com> wrote:
> > > >
> > > > Clarify the meaning of the NEW, FORWARD and RELEASE event types.
> > > > For the fields in "rte_event" struct, enhance the comments on each to
> > > > clarify the field's use, and whether it is preserved between enqueue and
> > > > dequeue, and its role, if any, in scheduling.
> > > >
> > > > Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
> > > > ---
<snip>
> > > >                         uint32_t sub_event_type:8;
> > > >                         /**< Sub-event types based on the event source.
> > > > +                        *
> > > > +                        * This field is preserved between enqueue and dequeue.
> > > > +                        * This field is for application or event adapter use,
> > > > +                        * and is not considered in scheduling decisions.
> > >
> > >
> > > cnxk driver is considering this for scheduling decision to
> > > differentiate the producer i.e event adapters.
> > > If other drivers are not then we can change the language around it is
> > > implementation defined.
> > >
> > How does the event type influence the scheduling decision? I can drop the
> 
> For cnxk, From HW POV, the flow ID is 32 bit which is divided between
> flow_id(20 bit), sub event type(8bit) and
> event type(4bit)
> 
> > last line here
> 
> Yes. Please
> 
> 
Dropping last sentence in v4.

^ permalink raw reply	[flat|nested] 123+ messages in thread

* [PATCH v4 00/12] improve eventdev API specification/documentation
  2024-01-19 17:43 ` [PATCH v2 00/11] improve eventdev API specification/documentation Bruce Richardson
                     ` (11 preceding siblings ...)
  2024-02-02 12:39   ` [PATCH v3 00/11] improve eventdev API specification/documentation Bruce Richardson
@ 2024-02-21 10:32   ` Bruce Richardson
  2024-02-21 10:32     ` [PATCH v4 01/12] eventdev: improve doxygen introduction text Bruce Richardson
                       ` (12 more replies)
  12 siblings, 13 replies; 123+ messages in thread
From: Bruce Richardson @ 2024-02-21 10:32 UTC (permalink / raw)
  To: dev, jerinj, mattias.ronnblom; +Cc: Bruce Richardson

This patchset makes rewording improvements to the eventdev doxygen
documentation to try and ensure that it is as clear as possible,
describes the implementation as accurately as possible, and is
consistent within itself.

Most changes are just minor rewordings, along with plenty of changes to
change references into doxygen links/cross-references.

In tightening up the definitions, there may be subtle changes in meaning
which should be checked for carefully by reviewers. Where there was
ambiguity, the behaviour of existing code is documented so as to avoid
breaking existing apps.

V4:
* additional rework following comments from Jerin and on-list discussion
* extra 12th patch to clean up some doxygen issues

V3:
* major cleanup following review by Mattias and on-list discussions
* old patch 7 split in two and merged with other changes in the same
  area rather than being standalone.
* new patch 11 added at end of series.

V2:
* additional cleanup and changes
* remove "escaped" accidental change to .c file

Bruce Richardson (12):
  eventdev: improve doxygen introduction text
  eventdev: move text on driver internals to proper section
  eventdev: update documentation on device capability flags
  eventdev: cleanup doxygen comments on info structure
  eventdev: improve function documentation for query fns
  eventdev: improve doxygen comments on configure struct
  eventdev: improve doxygen comments on config fns
  eventdev: improve doxygen comments for control APIs
  eventdev: improve comments on scheduling types
  eventdev: clarify docs on event object fields and op types
  eventdev: drop comment for anon union from doxygen
  eventdev: fix doxygen processing of event vector struct

 lib/eventdev/rte_eventdev.h | 1016 +++++++++++++++++++++++------------
 1 file changed, 663 insertions(+), 353 deletions(-)

--
2.40.1


^ permalink raw reply	[flat|nested] 123+ messages in thread

* [PATCH v4 01/12] eventdev: improve doxygen introduction text
  2024-02-21 10:32   ` [PATCH v4 00/12] improve eventdev API specification/documentation Bruce Richardson
@ 2024-02-21 10:32     ` Bruce Richardson
  2024-02-26  4:51       ` [EXT] " Pavan Nikhilesh Bhagavatula
  2024-02-21 10:32     ` [PATCH v4 02/12] eventdev: move text on driver internals to proper section Bruce Richardson
                       ` (11 subsequent siblings)
  12 siblings, 1 reply; 123+ messages in thread
From: Bruce Richardson @ 2024-02-21 10:32 UTC (permalink / raw)
  To: dev, jerinj, mattias.ronnblom; +Cc: Bruce Richardson

Make some textual improvements to the introduction to eventdev and event
devices in the eventdev header file. This text appears in the doxygen
output for the header file, and introduces the key concepts, for
example: events, event devices, queues, ports and scheduling.

This patch makes the following improvements:
* small textual fixups, e.g. correcting use of singular/plural
* rewrites of some sentences to improve clarity
* using doxygen markdown to split the whole large block up into
  sections, thereby making it easier to read.

No large-scale changes are made, and blocks are not reordered.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>

---
V4: reworked following review by Jerin
V3: reworked following feedback from Mattias
---
 lib/eventdev/rte_eventdev.h | 140 ++++++++++++++++++++++--------------
 1 file changed, 86 insertions(+), 54 deletions(-)

diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
index 1f99e933c0..985286c616 100644
--- a/lib/eventdev/rte_eventdev.h
+++ b/lib/eventdev/rte_eventdev.h
@@ -12,25 +12,35 @@
  * @file
  *
  * RTE Event Device API
- *
- * In a polling model, lcores poll ethdev ports and associated rx queues
- * directly to look for packet. In an event driven model, by contrast, lcores
- * call the scheduler that selects packets for them based on programmer
- * specified criteria. Eventdev library adds support for event driven
- * programming model, which offer applications automatic multicore scaling,
- * dynamic load balancing, pipelining, packet ingress order maintenance and
- * synchronization services to simplify application packet processing.
+ * ====================
+ *
+ * In a traditional DPDK application model, the application polls Ethdev port RX
+ * queues to look for work, and processing is done in a run-to-completion manner,
+ * after which the packets are transmitted on an Ethdev TX queue. Load is
+ * distributed by statically assigning ports and queues to lcores, and NIC
+ * receive-side scaling (RSS), or similar, is employed to distribute network flows
+ * (and thus work) on the same port across multiple RX queues.
+ *
+ * In contrast, in an event-driven model, as supported by this "eventdev" library,
+ * incoming packets (or other input events) are fed into an event device, which
+ * schedules those packets across the available lcores, in accordance with its configuration.
+ * This event-driven programming model offers applications automatic multicore scaling,
+ * dynamic load balancing, pipelining, packet order maintenance, synchronization,
+ * and prioritization/quality of service.
  *
  * The Event Device API is composed of two parts:
  *
  * - The application-oriented Event API that includes functions to setup
  *   an event device (configure it, setup its queues, ports and start it), to
- *   establish the link between queues to port and to receive events, and so on.
+ *   establish the links between queues and ports to receive events, and so on.
  *
  * - The driver-oriented Event API that exports a function allowing
- *   an event poll Mode Driver (PMD) to simultaneously register itself as
+ *   an event poll Mode Driver (PMD) to register itself as
  *   an event device driver.
  *
+ * Application-oriented Event API
+ * ------------------------------
+ *
  * Event device components:
  *
  *                     +-----------------+
@@ -75,27 +85,39 @@
  *            |                                                           |
  *            +-----------------------------------------------------------+
  *
- * Event device: A hardware or software-based event scheduler.
+ * **Event device**: A hardware or software-based event scheduler.
  *
- * Event: A unit of scheduling that encapsulates a packet or other datatype
- * like SW generated event from the CPU, Crypto work completion notification,
- * Timer expiry event notification etc as well as metadata.
- * The metadata includes flow ID, scheduling type, event priority, event_type,
- * sub_event_type etc.
+ * **Event**: Represents an item of work and is the smallest unit of scheduling.
+ * An event carries metadata, such as queue ID, scheduling type, and event priority,
+ * and data such as one or more packets or other kinds of buffers.
+ * Some examples of events are:
+ * - a software-generated item of work originating from a lcore,
+ *   perhaps carrying a packet to be processed.
+ * - a crypto work completion notification.
+ * - a timer expiry notification.
  *
- * Event queue: A queue containing events that are scheduled by the event dev.
+ * **Event queue**: A queue containing events that are to be scheduled by the event device.
  * An event queue contains events of different flows associated with scheduling
  * types, such as atomic, ordered, or parallel.
+ * Each event given to an event device must have a valid event queue id field in the metadata,
+ * to specify on which event queue in the device the event must be placed,
+ * for later scheduling.
  *
- * Event port: An application's interface into the event dev for enqueue and
+ * **Event port**: An application's interface into the event dev for enqueue and
  * dequeue operations. Each event port can be linked with one or more
  * event queues for dequeue operations.
- *
- * By default, all the functions of the Event Device API exported by a PMD
- * are lock-free functions which assume to not be invoked in parallel on
- * different logical cores to work on the same target object. For instance,
- * the dequeue function of a PMD cannot be invoked in parallel on two logical
- * cores to operates on same  event port. Of course, this function
+ * Enqueue and dequeue from a port are not thread-safe, and the expected use-case is
+ * that each port is polled by only a single lcore. [If this is not the case,
+ * a suitable synchronization mechanism should be used to prevent simultaneous
+ * access from multiple lcores.]
+ * To schedule events to an lcore, the event device will schedule them to the event port(s)
+ * being polled by that lcore.
+ *
+ * *NOTE*: By default, all the functions of the Event Device API exported by a PMD
+ * are non-thread-safe functions, which must not be invoked on the same object in parallel on
+ * different logical cores.
+ * For instance, the dequeue function of a PMD cannot be invoked in parallel on two logical
+ * cores to operate on the same event port. Of course, this function
  * can be invoked in parallel by different logical cores on different ports.
  * It is the responsibility of the upper level application to enforce this rule.
  *
@@ -107,22 +129,19 @@
  *
  * Event devices are dynamically registered during the PCI/SoC device probing
  * phase performed at EAL initialization time.
- * When an Event device is being probed, a *rte_event_dev* structure and
- * a new device identifier are allocated for that device. Then, the
- * event_dev_init() function supplied by the Event driver matching the probed
- * device is invoked to properly initialize the device.
+ * When an Event device is being probed, an *rte_event_dev* structure is allocated
+ * for it and the event_dev_init() function supplied by the Event driver
+ * is invoked to properly initialize the device.
  *
- * The role of the device init function consists of resetting the hardware or
- * software event driver implementations.
+ * The role of the device init function is to reset the device hardware or
+ * to initialize the software event driver implementation.
  *
- * If the device init operation is successful, the correspondence between
- * the device identifier assigned to the new device and its associated
- * *rte_event_dev* structure is effectively registered.
- * Otherwise, both the *rte_event_dev* structure and the device identifier are
- * freed.
+ * If the device init operation is successful, the device is assigned a device
+ * id (dev_id) for application use.
+ * Otherwise, the *rte_event_dev* structure is freed.
  *
  * The functions exported by the application Event API to setup a device
- * designated by its device identifier must be invoked in the following order:
+ * must be invoked in the following order:
  *     - rte_event_dev_configure()
  *     - rte_event_queue_setup()
  *     - rte_event_port_setup()
@@ -130,12 +149,17 @@
  *     - rte_event_dev_start()
  *
  * Then, the application can invoke, in any order, the functions
- * exported by the Event API to schedule events, dequeue events, enqueue events,
- * change event queue(s) to event port [un]link establishment and so on.
- *
- * Application may use rte_event_[queue/port]_default_conf_get() to get the
- * default configuration to set up an event queue or event port by
- * overriding few default values.
+ * exported by the Event API to dequeue events, enqueue events,
+ * and link and unlink event queue(s) to event ports.
+ *
+ * Before configuring a device, an application should call rte_event_dev_info_get()
+ * to determine the capabilities of the event device, and any queue or port
+ * limits of that device. The parameters set in the various device configuration
+ * structures may need to be adjusted based on the max values provided in the
+ * device information structure returned from the rte_event_dev_info_get() API.
+ * An application may use rte_event_queue_default_conf_get() or
+ * rte_event_port_default_conf_get() to get the default configuration
+ * to set up an event queue or event port by overriding a few default values.
  *
  * If the application wants to change the configuration (i.e. call
  * rte_event_dev_configure(), rte_event_queue_setup(), or
@@ -145,7 +169,11 @@
  * when the device is stopped.
  *
  * Finally, an application can close an Event device by invoking the
- * rte_event_dev_close() function.
+ * rte_event_dev_close() function. Once closed, a device cannot be
+ * reconfigured or restarted.
+ *
+ * Driver-Oriented Event API
+ * -------------------------
  *
  * Each function of the application Event API invokes a specific function
  * of the PMD that controls the target device designated by its device
@@ -163,11 +191,14 @@
  * performs an indirect invocation of the corresponding driver function
  * supplied in the *event_dev_ops* structure of the *rte_event_dev* structure.
  *
- * For performance reasons, the address of the fast-path functions of the
- * Event driver is not contained in the *event_dev_ops* structure.
+ * For performance reasons, the addresses of the fast-path functions of the
+ * event driver are not contained in the *event_dev_ops* structure.
  * Instead, they are directly stored at the beginning of the *rte_event_dev*
  * structure to avoid an extra indirect memory access during their invocation.
  *
+ * Event Enqueue, Dequeue and Scheduling
+ * -------------------------------------
+ *
  * RTE event device drivers do not use interrupts for enqueue or dequeue
  * operation. Instead, Event drivers export Poll-Mode enqueue and dequeue
  * functions to applications.
@@ -179,21 +210,22 @@
  * crypto work completion notification etc
  *
  * The *dequeue* operation gets one or more events from the event ports.
- * The application process the events and send to downstream event queue through
- * rte_event_enqueue_burst() if it is an intermediate stage of event processing,
- * on the final stage, the application may use Tx adapter API for maintaining
- * the ingress order and then send the packet/event on the wire.
+ * The application processes the events and sends them to a downstream event queue through
+ * rte_event_enqueue_burst(), if it is an intermediate stage of event processing.
+ * On the final stage of processing, the application may use the Tx adapter API for maintaining
+ * the event ingress order while sending the packet/event on the wire via NIC Tx.
  *
  * The point at which events are scheduled to ports depends on the device.
  * For hardware devices, scheduling occurs asynchronously without any software
  * intervention. Software schedulers can either be distributed
  * (each worker thread schedules events to its own port) or centralized
  * (a dedicated thread schedules to all ports). Distributed software schedulers
- * perform the scheduling in rte_event_dequeue_burst(), whereas centralized
- * scheduler logic need a dedicated service core for scheduling.
- * The RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED capability flag is not set
- * indicates the device is centralized and thus needs a dedicated scheduling
- * thread that repeatedly calls software specific scheduling function.
+ * perform the scheduling inside the enqueue or dequeue functions, whereas centralized
+ * software schedulers need a dedicated service core for scheduling.
+ * The absence of the RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED capability flag
+ * indicates that the device is centralized and thus needs a dedicated scheduling
+ * thread (generally an RTE service that should be mapped to one or more service cores)
+ * that repeatedly calls the software specific scheduling function.
  *
  * An event driven worker thread has following typical workflow on fastpath:
  * \code{.c}
-- 
2.40.1


^ permalink raw reply	[flat|nested] 123+ messages in thread

* [PATCH v4 02/12] eventdev: move text on driver internals to proper section
  2024-02-21 10:32   ` [PATCH v4 00/12] improve eventdev API specification/documentation Bruce Richardson
  2024-02-21 10:32     ` [PATCH v4 01/12] eventdev: improve doxygen introduction text Bruce Richardson
@ 2024-02-21 10:32     ` Bruce Richardson
  2024-02-26  5:01       ` [EXT] " Pavan Nikhilesh Bhagavatula
  2024-02-21 10:32     ` [PATCH v4 03/12] eventdev: update documentation on device capability flags Bruce Richardson
                       ` (10 subsequent siblings)
  12 siblings, 1 reply; 123+ messages in thread
From: Bruce Richardson @ 2024-02-21 10:32 UTC (permalink / raw)
  To: dev, jerinj, mattias.ronnblom; +Cc: Bruce Richardson

Inside the doxygen introduction text, some internal details of how
eventdev works were mixed in with application-relevant details. Move
these details, on probing etc., to the driver-relevant section.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
 lib/eventdev/rte_eventdev.h | 32 ++++++++++++++++----------------
 1 file changed, 16 insertions(+), 16 deletions(-)

diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
index 985286c616..c2782b2e30 100644
--- a/lib/eventdev/rte_eventdev.h
+++ b/lib/eventdev/rte_eventdev.h
@@ -124,22 +124,6 @@
  * In all functions of the Event API, the Event device is
  * designated by an integer >= 0 named the device identifier *dev_id*
  *
- * At the Event driver level, Event devices are represented by a generic
- * data structure of type *rte_event_dev*.
- *
- * Event devices are dynamically registered during the PCI/SoC device probing
- * phase performed at EAL initialization time.
- * When an Event device is being probed, an *rte_event_dev* structure is allocated
- * for it and the event_dev_init() function supplied by the Event driver
- * is invoked to properly initialize the device.
- *
- * The role of the device init function is to reset the device hardware or
- * to initialize the software event driver implementation.
- *
- * If the device init operation is successful, the device is assigned a device
- * id (dev_id) for application use.
- * Otherwise, the *rte_event_dev* structure is freed.
- *
  * The functions exported by the application Event API to setup a device
  * must be invoked in the following order:
  *     - rte_event_dev_configure()
@@ -175,6 +159,22 @@
  * Driver-Oriented Event API
  * -------------------------
  *
+ * At the Event driver level, Event devices are represented by a generic
+ * data structure of type *rte_event_dev*.
+ *
+ * Event devices are dynamically registered during the PCI/SoC device probing
+ * phase performed at EAL initialization time.
+ * When an Event device is being probed, an *rte_event_dev* structure is allocated
+ * for it and the event_dev_init() function supplied by the Event driver
+ * is invoked to properly initialize the device.
+ *
+ * The role of the device init function is to reset the device hardware or
+ * to initialize the software event driver implementation.
+ *
+ * If the device init operation is successful, the device is assigned a device
+ * id (dev_id) for application use.
+ * Otherwise, the *rte_event_dev* structure is freed.
+ *
  * Each function of the application Event API invokes a specific function
  * of the PMD that controls the target device designated by its device
  * identifier.
-- 
2.40.1


^ permalink raw reply	[flat|nested] 123+ messages in thread

* [PATCH v4 03/12] eventdev: update documentation on device capability flags
  2024-02-21 10:32   ` [PATCH v4 00/12] improve eventdev API specification/documentation Bruce Richardson
  2024-02-21 10:32     ` [PATCH v4 01/12] eventdev: improve doxygen introduction text Bruce Richardson
  2024-02-21 10:32     ` [PATCH v4 02/12] eventdev: move text on driver internals to proper section Bruce Richardson
@ 2024-02-21 10:32     ` Bruce Richardson
  2024-02-26  5:07       ` [EXT] " Pavan Nikhilesh Bhagavatula
  2024-02-21 10:32     ` [PATCH v4 04/12] eventdev: cleanup doxygen comments on info structure Bruce Richardson
                       ` (9 subsequent siblings)
  12 siblings, 1 reply; 123+ messages in thread
From: Bruce Richardson @ 2024-02-21 10:32 UTC (permalink / raw)
  To: dev, jerinj, mattias.ronnblom; +Cc: Bruce Richardson

Update the device capability docs, to:

* include more cross-references
* split longer text into paragraphs, in most cases with each flag having
  a single-line summary at the start of the doc block
* general comment rewording and clarification as appropriate

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
V4: Rebased on latest main branch
    Updated function cross-references for consistency
    General changes following review by Jerin
V3: Updated following feedback from Mattias
---
 lib/eventdev/rte_eventdev.h | 172 +++++++++++++++++++++++++-----------
 1 file changed, 121 insertions(+), 51 deletions(-)

diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
index c2782b2e30..f7b98a6cfa 100644
--- a/lib/eventdev/rte_eventdev.h
+++ b/lib/eventdev/rte_eventdev.h
@@ -255,124 +255,178 @@ struct rte_event;
 /* Event device capability bitmap flags */
 #define RTE_EVENT_DEV_CAP_QUEUE_QOS           (1ULL << 0)
 /**< Event scheduling prioritization is based on the priority and weight
- * associated with each event queue. Events from a queue with highest priority
- * is scheduled first. If the queues are of same priority, weight of the queues
+ * associated with each event queue.
+ *
+ * Events from a queue with highest priority
+ * are scheduled first. If the queues are of same priority, weight of the queues
  * are considered to select a queue in a weighted round robin fashion.
  * Subsequent dequeue calls from an event port could see events from the same
  * event queue, if the queue is configured with an affinity count. Affinity
  * count is the number of subsequent dequeue calls, in which an event port
  * should use the same event queue if the queue is non-empty
  *
- *  @see rte_event_queue_setup(), rte_event_queue_attr_set()
+ * NOTE: A device may use both queue prioritization and event prioritization
+ * (@ref RTE_EVENT_DEV_CAP_EVENT_QOS capability) when making packet scheduling decisions.
+ *
+ *  @see rte_event_queue_setup()
+ *  @see rte_event_queue_attr_set()
  */
 #define RTE_EVENT_DEV_CAP_EVENT_QOS           (1ULL << 1)
 /**< Event scheduling prioritization is based on the priority associated with
- *  each event. Priority of each event is supplied in *rte_event* structure
+ *  each event.
+ *
+ *  Priority of each event is supplied in *rte_event* structure
  *  on each enqueue operation.
+ *  If this capability is not set, the priority field of the event structure
+ *  is ignored for each event.
  *
+ * NOTE: A device may use both queue prioritization (@ref RTE_EVENT_DEV_CAP_QUEUE_QOS capability)
+ * and event prioritization when making packet scheduling decisions.
+ *
  *  @see rte_event_enqueue_burst()
  */
 #define RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED   (1ULL << 2)
 /**< Event device operates in distributed scheduling mode.
+ *
  * In distributed scheduling mode, event scheduling happens in HW or
- * rte_event_dequeue_burst() or the combination of these two.
+ * rte_event_dequeue_burst() / rte_event_enqueue_burst() or the combination of these two.
  * If the flag is not set then eventdev is centralized and thus needs a
- * dedicated service core that acts as a scheduling thread .
+ * dedicated service core that acts as a scheduling thread.
  *
- * @see rte_event_dequeue_burst()
+ * @see rte_event_dev_service_id_get()
  */
 #define RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES     (1ULL << 3)
 /**< Event device is capable of accepting enqueued events, of any type
  * advertised as supported by the device, to all destination queues.
  *
- * When this capability is set, the "schedule_type" field of the
- * rte_event_queue_conf structure is ignored when a queue is being configured.
+ * When this capability is set, and @ref RTE_EVENT_QUEUE_CFG_ALL_TYPES flag is set
+ * in @ref rte_event_queue_conf.event_queue_cfg, the "schedule_type" field of the
+ * @ref rte_event_queue_conf structure is ignored when a queue is being configured.
  * Instead the "sched_type" field of each event enqueued is used to
  * select the scheduling to be performed on that event.
  *
- * If this capability is not set, the queue only supports events of the
- *  *RTE_SCHED_TYPE_* type specified in the rte_event_queue_conf structure
- *  at time of configuration.
+ * If this capability is not set, or the configuration flag is not set,
+ * the queue only supports events of the *RTE_SCHED_TYPE_* type specified
+ * in the @ref rte_event_queue_conf structure at time of configuration.
+ * The behaviour when events of other scheduling types are sent to the queue is
+ * undefined.
  *
+ * @see RTE_EVENT_QUEUE_CFG_ALL_TYPES
  * @see RTE_SCHED_TYPE_ATOMIC
  * @see RTE_SCHED_TYPE_ORDERED
  * @see RTE_SCHED_TYPE_PARALLEL
+ * @see rte_event_queue_conf.event_queue_cfg
  * @see rte_event_queue_conf.schedule_type
+ * @see rte_event_enqueue_burst()
  */
 #define RTE_EVENT_DEV_CAP_BURST_MODE          (1ULL << 4)
 /**< Event device is capable of operating in burst mode for enqueue(forward,
- * release) and dequeue operation. If this capability is not set, application
- * still uses the rte_event_dequeue_burst() and rte_event_enqueue_burst() but
- * PMD accepts only one event at a time.
+ * release) and dequeue operation.
+ *
+ * If this capability is not set, application
+ * can still use the rte_event_dequeue_burst() and rte_event_enqueue_burst() but
+ * PMD accepts or returns only one event at a time.
  *
- * @see rte_event_dequeue_burst() rte_event_enqueue_burst()
+ * @see rte_event_dequeue_burst()
+ * @see rte_event_enqueue_burst()
  */
 #define RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE    (1ULL << 5)
 /**< Event device ports support disabling the implicit release feature, in
  * which the port will release all unreleased events in its dequeue operation.
+ *
  * If this capability is set and the port is configured with implicit release
  * disabled, the application is responsible for explicitly releasing events
- * using either the RTE_EVENT_OP_FORWARD or the RTE_EVENT_OP_RELEASE event
+ * using either the @ref RTE_EVENT_OP_FORWARD or the @ref RTE_EVENT_OP_RELEASE event
  * enqueue operations.
  *
- * @see rte_event_dequeue_burst() rte_event_enqueue_burst()
+ * @see rte_event_dequeue_burst()
+ * @see rte_event_enqueue_burst()
  */
 
 #define RTE_EVENT_DEV_CAP_NONSEQ_MODE         (1ULL << 6)
-/**< Event device is capable of operating in none sequential mode. The path
- * of the event is not necessary to be sequential. Application can change
- * the path of event at runtime. If the flag is not set, then event each event
- * will follow a path from queue 0 to queue 1 to queue 2 etc. If the flag is
- * set, events may be sent to queues in any order. If the flag is not set, the
- * eventdev will return an error when the application enqueues an event for a
+/**< Event device is capable of operating in non-sequential mode.
+ *
+ * The path of the event is not necessary to be sequential. Application can change
+ * the path of event at runtime and events may be sent to queues in any order.
+ *
+ * If the flag is not set, then each event will follow a path from queue 0
+ * to queue 1 to queue 2 etc.
+ * The eventdev will return an error when the application enqueues an event for a
  * qid which is not the next in the sequence.
  */
 
 #define RTE_EVENT_DEV_CAP_RUNTIME_PORT_LINK   (1ULL << 7)
-/**< Event device is capable of configuring the queue/port link at runtime.
+/**< Event device is capable of reconfiguring the queue/port link at runtime.
+ *
 * If the flag is not set, the eventdev queue/port link can only be
- * configured during  initialization.
+ * configured during initialization, or by stopping the device and
+ * then later restarting it after reconfiguration.
+ *
+ * @see rte_event_port_link()
+ * @see rte_event_port_unlink()
  */
 
 #define RTE_EVENT_DEV_CAP_MULTIPLE_QUEUE_PORT (1ULL << 8)
-/**< Event device is capable of setting up the link between multiple queue
- * with single port. If the flag is not set, the eventdev can only map a
- * single queue to each port or map a single queue to many port.
+/**< Event device is capable of setting up links between multiple queues and a single port.
+ *
+ * If the flag is not set, each port may only be linked to a single queue, and
+ * so can only receive events from that queue.
+ * However, each queue may be linked to multiple ports.
+ *
+ * @see rte_event_port_link()
  */
 
 #define RTE_EVENT_DEV_CAP_CARRY_FLOW_ID (1ULL << 9)
-/**< Event device preserves the flow ID from the enqueued
- * event to the dequeued event if the flag is set. Otherwise,
- * the content of this field is implementation dependent.
+/**< Event device preserves the flow ID from the enqueued event to the dequeued event.
+ *
+ * If this flag is not set,
+ * the content of the flow-id field in dequeued events is implementation dependent.
+ *
+ * @see rte_event_dequeue_burst()
  */
 
 #define RTE_EVENT_DEV_CAP_MAINTENANCE_FREE (1ULL << 10)
 /**< Event device *does not* require calls to rte_event_maintain().
+ *
  * An event device that does not set this flag requires calls to
  * rte_event_maintain() during periods when neither
  * rte_event_dequeue_burst() nor rte_event_enqueue_burst() are called
  * on a port. This will allow the event device to perform internal
  * processing, such as flushing buffered events, return credits to a
  * global pool, or process signaling related to load balancing.
+ *
+ * @see rte_event_maintain()
  */
 
 #define RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR (1ULL << 11)
 /**< Event device is capable of changing the queue attributes at runtime i.e
- * after rte_event_queue_setup() or rte_event_start() call sequence. If this
- * flag is not set, eventdev queue attributes can only be configured during
+ * after rte_event_queue_setup() or rte_event_dev_start() call sequence.
+ *
+ * If this flag is not set, event queue attributes can only be configured during
  * rte_event_queue_setup().
+ *
+ * @see rte_event_queue_setup()
  */
 
 #define RTE_EVENT_DEV_CAP_PROFILE_LINK (1ULL << 12)
-/**< Event device is capable of supporting multiple link profiles per event port
- * i.e., the value of `rte_event_dev_info::max_profiles_per_port` is greater
- * than one.
+/**< Event device is capable of supporting multiple link profiles per event port.
+ *
+ * When set, the value of `rte_event_dev_info::max_profiles_per_port` is greater
+ * than one, and multiple profiles may be configured and then switched at runtime.
+ * If not set, only a single profile may be configured, which may itself be
+ * runtime adjustable (if @ref RTE_EVENT_DEV_CAP_RUNTIME_PORT_LINK is set).
+ *
+ * @see rte_event_port_profile_links_set()
+ * @see rte_event_port_profile_links_get()
+ * @see rte_event_port_profile_switch()
+ * @see RTE_EVENT_DEV_CAP_RUNTIME_PORT_LINK
  */
 
 #define RTE_EVENT_DEV_CAP_ATOMIC  (1ULL << 13)
 /**< Event device is capable of atomic scheduling.
  * When this flag is set, the application can configure queues with scheduling type
  * atomic on this event device.
+ *
  * @see RTE_SCHED_TYPE_ATOMIC
  */
 
@@ -380,6 +434,7 @@ struct rte_event;
 /**< Event device is capable of ordered scheduling.
  * When this flag is set, the application can configure queues with scheduling type
  * ordered on this event device.
+ *
  * @see RTE_SCHED_TYPE_ORDERED
  */
 
@@ -387,44 +442,59 @@ struct rte_event;
 /**< Event device is capable of parallel scheduling.
  * When this flag is set, the application can configure queues with scheduling type
  * parallel on this event device.
+ *
  * @see RTE_SCHED_TYPE_PARALLEL
  */
 
 /* Event device priority levels */
 #define RTE_EVENT_DEV_PRIORITY_HIGHEST   0
-/**< Highest priority expressed across eventdev subsystem
- * @see rte_event_queue_setup(), rte_event_enqueue_burst()
+/**< Highest priority level for events and queues.
+ *
+ * @see rte_event_queue_setup()
+ * @see rte_event_enqueue_burst()
  * @see rte_event_port_link()
  */
 #define RTE_EVENT_DEV_PRIORITY_NORMAL    128
-/**< Normal priority expressed across eventdev subsystem
- * @see rte_event_queue_setup(), rte_event_enqueue_burst()
+/**< Normal priority level for events and queues.
+ *
+ * @see rte_event_queue_setup()
+ * @see rte_event_enqueue_burst()
  * @see rte_event_port_link()
  */
 #define RTE_EVENT_DEV_PRIORITY_LOWEST    255
-/**< Lowest priority expressed across eventdev subsystem
- * @see rte_event_queue_setup(), rte_event_enqueue_burst()
+/**< Lowest priority level for events and queues.
+ *
+ * @see rte_event_queue_setup()
+ * @see rte_event_enqueue_burst()
  * @see rte_event_port_link()
  */
 
 /* Event queue scheduling weights */
 #define RTE_EVENT_QUEUE_WEIGHT_HIGHEST 255
-/**< Highest weight of an event queue
- * @see rte_event_queue_attr_get(), rte_event_queue_attr_set()
+/**< Highest weight of an event queue.
+ *
+ * @see rte_event_queue_attr_get()
+ * @see rte_event_queue_attr_set()
  */
 #define RTE_EVENT_QUEUE_WEIGHT_LOWEST 0
-/**< Lowest weight of an event queue
- * @see rte_event_queue_attr_get(), rte_event_queue_attr_set()
+/**< Lowest weight of an event queue.
+ *
+ * @see rte_event_queue_attr_get()
+ * @see rte_event_queue_attr_set()
  */
 
 /* Event queue scheduling affinity */
 #define RTE_EVENT_QUEUE_AFFINITY_HIGHEST 255
-/**< Highest scheduling affinity of an event queue
- * @see rte_event_queue_attr_get(), rte_event_queue_attr_set()
+/**< Highest scheduling affinity of an event queue.
+ *
+ * @see rte_event_queue_attr_get()
+ * @see rte_event_queue_attr_set()
  */
 #define RTE_EVENT_QUEUE_AFFINITY_LOWEST 0
-/**< Lowest scheduling affinity of an event queue
- * @see rte_event_queue_attr_get(), rte_event_queue_attr_set()
+/**< Lowest scheduling affinity of an event queue.
+ *
+ * @see rte_event_queue_attr_get()
+ * @see rte_event_queue_attr_set()
  */
 
 /**
-- 
2.40.1


^ permalink raw reply	[flat|nested] 123+ messages in thread

* [PATCH v4 04/12] eventdev: cleanup doxygen comments on info structure
  2024-02-21 10:32   ` [PATCH v4 00/12] improve eventdev API specification/documentation Bruce Richardson
                       ` (2 preceding siblings ...)
  2024-02-21 10:32     ` [PATCH v4 03/12] eventdev: update documentation on device capability flags Bruce Richardson
@ 2024-02-21 10:32     ` Bruce Richardson
  2024-02-26  5:18       ` [EXT] " Pavan Nikhilesh Bhagavatula
  2024-02-21 10:32     ` [PATCH v4 05/12] eventdev: improve function documentation for query fns Bruce Richardson
                       ` (8 subsequent siblings)
  12 siblings, 1 reply; 123+ messages in thread
From: Bruce Richardson @ 2024-02-21 10:32 UTC (permalink / raw)
  To: dev, jerinj, mattias.ronnblom; +Cc: Bruce Richardson

Some small rewording changes to the doxygen comments on struct
rte_event_dev_info.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>

---
V3: reworked following feedback
- added closing "." on comments
- added more cross-reference links
- reworded priority level comments
---
 lib/eventdev/rte_eventdev.h | 85 +++++++++++++++++++++++++------------
 1 file changed, 58 insertions(+), 27 deletions(-)

diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
index f7b98a6cfa..b9ec3fc45e 100644
--- a/lib/eventdev/rte_eventdev.h
+++ b/lib/eventdev/rte_eventdev.h
@@ -537,57 +537,88 @@ rte_event_dev_socket_id(uint8_t dev_id);
  * Event device information
  */
 struct rte_event_dev_info {
-	const char *driver_name;	/**< Event driver name */
-	struct rte_device *dev;	/**< Device information */
+	const char *driver_name;	/**< Event driver name. */
+	struct rte_device *dev;	/**< Device information. */
 	uint32_t min_dequeue_timeout_ns;
-	/**< Minimum supported global dequeue timeout(ns) by this device */
+	/**< Minimum global dequeue timeout(ns) supported by this device. */
 	uint32_t max_dequeue_timeout_ns;
-	/**< Maximum supported global dequeue timeout(ns) by this device */
+	/**< Maximum global dequeue timeout(ns) supported by this device. */
 	uint32_t dequeue_timeout_ns;
-	/**< Configured global dequeue timeout(ns) for this device */
+	/**< Configured global dequeue timeout(ns) for this device. */
 	uint8_t max_event_queues;
-	/**< Maximum event_queues supported by this device */
+	/**< Maximum event queues supported by this device.
+	 *
+	 * This count excludes any queues covered by @ref max_single_link_event_port_queue_pairs.
+	 */
 	uint32_t max_event_queue_flows;
-	/**< Maximum supported flows in an event queue by this device*/
+	/**< Maximum number of flows within an event queue supported by this device. */
 	uint8_t max_event_queue_priority_levels;
-	/**< Maximum number of event queue priority levels by this device.
-	 * Valid when the device has RTE_EVENT_DEV_CAP_QUEUE_QOS capability
+	/**< Maximum number of event queue priority levels supported by this device.
+	 *
+	 * Valid when the device has @ref RTE_EVENT_DEV_CAP_QUEUE_QOS capability.
+	 *
+	 * The implementation shall normalize priority values specified between
+	 * @ref RTE_EVENT_DEV_PRIORITY_HIGHEST and @ref RTE_EVENT_DEV_PRIORITY_LOWEST
+	 * to map them internally to this range of priorities.
+	 * [For devices supporting a power-of-2 number of priority levels, this
+	 * normalization will be done via a right-shift operation, so only the top
+	 * log2(max_levels) bits will be used by the event device.]
+	 *
+	 * @see rte_event_queue_conf.priority
 	 */
 	uint8_t max_event_priority_levels;
 	/**< Maximum number of event priority levels by this device.
-	 * Valid when the device has RTE_EVENT_DEV_CAP_EVENT_QOS capability
+	 *
+	 * Valid when the device has @ref RTE_EVENT_DEV_CAP_EVENT_QOS capability.
+	 *
+	 * The implementation shall normalize priority values specified between
+	 * @ref RTE_EVENT_DEV_PRIORITY_HIGHEST and @ref RTE_EVENT_DEV_PRIORITY_LOWEST
+	 * to map them internally to this range of priorities.
+	 * [For devices supporting a power-of-2 number of priority levels, this
+	 * normalization will be done via a right-shift operation, so only the top
+	 * log2(max_levels) bits will be used by the event device.]
+	 *
+	 * @see rte_event.priority
 	 */
 	uint8_t max_event_ports;
-	/**< Maximum number of event ports supported by this device */
+	/**< Maximum number of event ports supported by this device.
+	 *
+	 * This count excludes any ports covered by @ref max_single_link_event_port_queue_pairs.
+	 */
 	uint8_t max_event_port_dequeue_depth;
-	/**< Maximum number of events can be dequeued at a time from an
-	 * event port by this device.
-	 * A device that does not support bulk dequeue will set this as 1.
+	/**< Maximum number of events that can be dequeued at a time from an event port
+	 * on this device.
+	 *
+	 * A device that does not support burst dequeue
+	 * (@ref RTE_EVENT_DEV_CAP_BURST_MODE) will set this to 1.
 	 */
 	uint32_t max_event_port_enqueue_depth;
-	/**< Maximum number of events can be enqueued at a time from an
-	 * event port by this device.
-	 * A device that does not support bulk enqueue will set this as 1.
+	/**< Maximum number of events that can be enqueued at a time to an event port
+	 * on this device.
+	 *
+	 * A device that does not support burst enqueue
+	 * (@ref RTE_EVENT_DEV_CAP_BURST_MODE) will set this to 1.
 	 */
 	uint8_t max_event_port_links;
-	/**< Maximum number of queues that can be linked to a single event
-	 * port by this device.
+	/**< Maximum number of queues that can be linked to a single event port on this device.
 	 */
 	int32_t max_num_events;
 	/**< A *closed system* event dev has a limit on the number of events it
-	 * can manage at a time. An *open system* event dev does not have a
-	 * limit and will specify this as -1.
+	 * can manage at a time.
+	 * Once the number of events tracked by an eventdev exceeds this number,
+	 * any enqueues of NEW events will fail.
+	 * An *open system* event dev does not have a limit and will specify this as -1.
 	 */
 	uint32_t event_dev_cap;
-	/**< Event device capabilities(RTE_EVENT_DEV_CAP_)*/
+	/**< Event device capabilities flags (RTE_EVENT_DEV_CAP_*). */
 	uint8_t max_single_link_event_port_queue_pairs;
-	/**< Maximum number of event ports and queues that are optimized for
-	 * (and only capable of) single-link configurations supported by this
-	 * device. These ports and queues are not accounted for in
-	 * max_event_ports or max_event_queues.
+	/**< Maximum number of event ports and queues, supported by this device,
+	 * that are optimized for (and only capable of) single-link configurations.
+	 * These ports and queues are not accounted for in @ref max_event_ports
+	 * or @ref max_event_queues.
 	 */
 	uint8_t max_profiles_per_port;
-	/**< Maximum number of event queue profiles per event port.
+	/**< Maximum number of event queue link profiles per event port.
 	 * A device that doesn't support multiple profiles will set this as 1.
 	 */
 };
-- 
2.40.1


^ permalink raw reply	[flat|nested] 123+ messages in thread

* [PATCH v4 05/12] eventdev: improve function documentation for query fns
  2024-02-21 10:32   ` [PATCH v4 00/12] improve eventdev API specification/documentation Bruce Richardson
                       ` (3 preceding siblings ...)
  2024-02-21 10:32     ` [PATCH v4 04/12] eventdev: cleanup doxygen comments on info structure Bruce Richardson
@ 2024-02-21 10:32     ` Bruce Richardson
  2024-02-26  5:18       ` [EXT] " Pavan Nikhilesh Bhagavatula
  2024-02-21 10:32     ` [PATCH v4 06/12] eventdev: improve doxygen comments on configure struct Bruce Richardson
                       ` (7 subsequent siblings)
  12 siblings, 1 reply; 123+ messages in thread
From: Bruce Richardson @ 2024-02-21 10:32 UTC (permalink / raw)
  To: dev, jerinj, mattias.ronnblom; +Cc: Bruce Richardson

General improvements to the doxygen docs for eventdev functions for
querying basic information:
* number of devices
* id for a particular device
* socket id of device
* capability information for a device

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>

---
V3: minor changes following review
---
 lib/eventdev/rte_eventdev.h | 22 +++++++++++++---------
 1 file changed, 13 insertions(+), 9 deletions(-)

diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
index b9ec3fc45e..9d286168b1 100644
--- a/lib/eventdev/rte_eventdev.h
+++ b/lib/eventdev/rte_eventdev.h
@@ -498,8 +498,7 @@ struct rte_event;
  */
 
 /**
- * Get the total number of event devices that have been successfully
- * initialised.
+ * Get the total number of event devices.
  *
  * @return
  *   The total number of usable event devices.
@@ -514,8 +513,10 @@ rte_event_dev_count(void);
  *   Event device name to select the event device identifier.
  *
  * @return
- *   Returns event device identifier on success.
- *   - <0: Failure to find named event device.
+ *   Event device identifier (dev_id >= 0) on success.
+ *   Negative error code on failure:
+ *   - -EINVAL - input name parameter is invalid.
+ *   - -ENODEV - no event device found with that name.
  */
 int
 rte_event_dev_get_dev_id(const char *name);
@@ -528,7 +529,8 @@ rte_event_dev_get_dev_id(const char *name);
  * @return
  *   The NUMA socket id to which the device is connected or
  *   a default of zero if the socket could not be determined.
- *   -(-EINVAL)  dev_id value is out of range.
+ *   -EINVAL on error, where the given dev_id value does not
+ *   correspond to any event device.
  */
 int
 rte_event_dev_socket_id(uint8_t dev_id);
@@ -624,18 +626,20 @@ struct rte_event_dev_info {
 };
 
 /**
- * Retrieve the contextual information of an event device.
+ * Retrieve details of an event device's capabilities and configuration limits.
  *
  * @param dev_id
  *   The identifier of the device.
  *
  * @param[out] dev_info
  *   A pointer to a structure of type *rte_event_dev_info* to be filled with the
- *   contextual information of the device.
+ *   information about the device's capabilities.
  *
  * @return
- *   - 0: Success, driver updates the contextual information of the event device
- *   - <0: Error code returned by the driver info get function.
+ *   - 0: Success, information about the event device is present in dev_info.
+ *   - <0: Failure, error code returned by the function.
+ *     - -EINVAL - invalid input parameters, e.g. incorrect device id.
+ *     - -ENOTSUP - device does not support returning capabilities information.
  */
 int
 rte_event_dev_info_get(uint8_t dev_id, struct rte_event_dev_info *dev_info);
-- 
2.40.1


^ permalink raw reply	[flat|nested] 123+ messages in thread

* [PATCH v4 06/12] eventdev: improve doxygen comments on configure struct
  2024-02-21 10:32   ` [PATCH v4 00/12] improve eventdev API specification/documentation Bruce Richardson
                       ` (4 preceding siblings ...)
  2024-02-21 10:32     ` [PATCH v4 05/12] eventdev: improve function documentation for query fns Bruce Richardson
@ 2024-02-21 10:32     ` Bruce Richardson
  2024-02-26  6:36       ` [EXT] " Pavan Nikhilesh Bhagavatula
  2024-02-21 10:32     ` [PATCH v4 07/12] eventdev: improve doxygen comments on config fns Bruce Richardson
                       ` (6 subsequent siblings)
  12 siblings, 1 reply; 123+ messages in thread
From: Bruce Richardson @ 2024-02-21 10:32 UTC (permalink / raw)
  To: dev, jerinj, mattias.ronnblom; +Cc: Bruce Richardson

General rewording and cleanup on the rte_event_dev_config structure.
Improved the wording of some sentences and created linked
cross-references out of the existing references to the dev_info
structure.

As part of the rework, fix issue with how single-link port-queue pairs
were counted in the rte_event_dev_config structure. This did not match
the actual implementation and, if following the documentation, certain
valid port/queue configurations would have been impossible to configure.
Fix this by changing the documentation to match the implementation.

Bugzilla ID: 1368
Fixes: 75d113136f38 ("eventdev: express DLB/DLB2 PMD constraints")

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>

---
V3:
- minor tweaks following review
- merged in doc fix for bugzilla 1368 into this patch, since it fit with
  other clarifications to the config struct.
---
 lib/eventdev/rte_eventdev.h | 61 ++++++++++++++++++++++---------------
 1 file changed, 37 insertions(+), 24 deletions(-)

diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
index 9d286168b1..73cc6b6688 100644
--- a/lib/eventdev/rte_eventdev.h
+++ b/lib/eventdev/rte_eventdev.h
@@ -684,9 +684,9 @@ rte_event_dev_attr_get(uint8_t dev_id, uint32_t attr_id,
 struct rte_event_dev_config {
 	uint32_t dequeue_timeout_ns;
 	/**< rte_event_dequeue_burst() timeout on this device.
-	 * This value should be in the range of *min_dequeue_timeout_ns* and
-	 * *max_dequeue_timeout_ns* which previously provided in
-	 * rte_event_dev_info_get()
+	 * This value should be in the range of @ref rte_event_dev_info.min_dequeue_timeout_ns and
+	 * @ref rte_event_dev_info.max_dequeue_timeout_ns returned by
+	 * @ref rte_event_dev_info_get()
 	 * The value 0 is allowed, in which case, default dequeue timeout used.
 	 * @see RTE_EVENT_DEV_CFG_PER_DEQUEUE_TIMEOUT
 	 */
@@ -694,40 +694,53 @@ struct rte_event_dev_config {
 	/**< In a *closed system* this field is the limit on maximum number of
 	 * events that can be inflight in the eventdev at a given time. The
 	 * limit is required to ensure that the finite space in a closed system
-	 * is not overwhelmed. The value cannot exceed the *max_num_events*
-	 * as provided by rte_event_dev_info_get().
-	 * This value should be set to -1 for *open system*.
+	 * is not exhausted.
+	 * The value cannot exceed @ref rte_event_dev_info.max_num_events
+	 * returned by rte_event_dev_info_get().
+	 *
+	 * This value should be set to -1 for *open systems*, that is,
+	 * those systems returning -1 in @ref rte_event_dev_info.max_num_events.
+	 *
+	 * @see rte_event_port_conf.new_event_threshold
 	 */
 	uint8_t nb_event_queues;
 	/**< Number of event queues to configure on this device.
-	 * This value cannot exceed the *max_event_queues* which previously
-	 * provided in rte_event_dev_info_get()
+	 * This value *includes* any single-link queue-port pairs to be used.
+	 * This value cannot exceed @ref rte_event_dev_info.max_event_queues +
+	 * @ref rte_event_dev_info.max_single_link_event_port_queue_pairs
+	 * returned by rte_event_dev_info_get().
+	 * The number of non-single-link queues i.e. this value less
+	 * *nb_single_link_event_port_queues* in this struct, cannot exceed
+	 * @ref rte_event_dev_info.max_event_queues
 	 */
 	uint8_t nb_event_ports;
 	/**< Number of event ports to configure on this device.
-	 * This value cannot exceed the *max_event_ports* which previously
-	 * provided in rte_event_dev_info_get()
+	 * This value *includes* any single-link queue-port pairs to be used.
+	 * This value cannot exceed @ref rte_event_dev_info.max_event_ports +
+	 * @ref rte_event_dev_info.max_single_link_event_port_queue_pairs
+	 * returned by rte_event_dev_info_get().
+	 * The number of non-single-link ports, i.e. this value less
+	 * *nb_single_link_event_port_queues* in this struct, cannot exceed
+	 * @ref rte_event_dev_info.max_event_ports
 	 */
 	uint32_t nb_event_queue_flows;
-	/**< Number of flows for any event queue on this device.
-	 * This value cannot exceed the *max_event_queue_flows* which previously
-	 * provided in rte_event_dev_info_get()
+	/**< Max number of flows needed for a single event queue on this device.
+	 * This value cannot exceed @ref rte_event_dev_info.max_event_queue_flows
+	 * returned by rte_event_dev_info_get()
 	 */
 	uint32_t nb_event_port_dequeue_depth;
-	/**< Maximum number of events can be dequeued at a time from an
-	 * event port by this device.
-	 * This value cannot exceed the *max_event_port_dequeue_depth*
-	 * which previously provided in rte_event_dev_info_get().
+	/**< Max number of events that can be dequeued at a time from an event port on this device.
+	 * This value cannot exceed @ref rte_event_dev_info.max_event_port_dequeue_depth
+	 * returned by rte_event_dev_info_get().
 	 * Ignored when device is not RTE_EVENT_DEV_CAP_BURST_MODE capable.
-	 * @see rte_event_port_setup()
+	 * @see rte_event_port_setup() rte_event_dequeue_burst()
 	 */
 	uint32_t nb_event_port_enqueue_depth;
-	/**< Maximum number of events can be enqueued at a time from an
-	 * event port by this device.
-	 * This value cannot exceed the *max_event_port_enqueue_depth*
-	 * which previously provided in rte_event_dev_info_get().
+	/**< Maximum number of events that can be enqueued at a time to an event port on this device.
+	 * This value cannot exceed @ref rte_event_dev_info.max_event_port_enqueue_depth
+	 * returned by rte_event_dev_info_get().
 	 * Ignored when device is not RTE_EVENT_DEV_CAP_BURST_MODE capable.
-	 * @see rte_event_port_setup()
+	 * @see rte_event_port_setup() rte_event_enqueue_burst()
 	 */
 	uint32_t event_dev_cfg;
 	/**< Event device config flags(RTE_EVENT_DEV_CFG_)*/
@@ -737,7 +750,7 @@ struct rte_event_dev_config {
 	 * queues; this value cannot exceed *nb_event_ports* or
 	 * *nb_event_queues*. If the device has ports and queues that are
 	 * optimized for single-link usage, this field is a hint for how many
-	 * to allocate; otherwise, regular event ports and queues can be used.
+	 * to allocate; otherwise, regular event ports and queues will be used.
 	 */
 };
 
-- 
2.40.1


^ permalink raw reply	[flat|nested] 123+ messages in thread

* [PATCH v4 07/12] eventdev: improve doxygen comments on config fns
  2024-02-21 10:32   ` [PATCH v4 00/12] improve eventdev API specification/documentation Bruce Richardson
                       ` (5 preceding siblings ...)
  2024-02-21 10:32     ` [PATCH v4 06/12] eventdev: improve doxygen comments on configure struct Bruce Richardson
@ 2024-02-21 10:32     ` Bruce Richardson
  2024-02-26  6:43       ` [EXT] " Pavan Nikhilesh Bhagavatula
  2024-02-21 10:32     ` [PATCH v4 08/12] eventdev: improve doxygen comments for control APIs Bruce Richardson
                       ` (5 subsequent siblings)
  12 siblings, 1 reply; 123+ messages in thread
From: Bruce Richardson @ 2024-02-21 10:32 UTC (permalink / raw)
  To: dev, jerinj, mattias.ronnblom; +Cc: Bruce Richardson

Improve the documentation text for the configuration functions and
structures for configuring an eventdev, as well as ports and queues.
Clarify text where possible, and ensure references come through as links
in the html output.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>

---
V3: Update following review, mainly:
 - change ranges starting with 0, to just say "less than"
 - put in "." at end of sentences & bullet points
---
 lib/eventdev/rte_eventdev.h | 221 +++++++++++++++++++++++-------------
 1 file changed, 144 insertions(+), 77 deletions(-)

diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
index 73cc6b6688..e38354cedd 100644
--- a/lib/eventdev/rte_eventdev.h
+++ b/lib/eventdev/rte_eventdev.h
@@ -757,12 +757,14 @@ struct rte_event_dev_config {
 /**
  * Configure an event device.
  *
- * This function must be invoked first before any other function in the
- * API. This function can also be re-invoked when a device is in the
- * stopped state.
+ * This function must be invoked before any other configuration function in the
+ * API, when preparing an event device for application use.
+ * This function can also be re-invoked when a device is in the stopped state.
  *
- * The caller may use rte_event_dev_info_get() to get the capability of each
- * resources available for this event device.
+ * The caller should use rte_event_dev_info_get() to get the capabilities and
+ * resource limits for this event device before calling this API.
+ * Many values in the dev_conf input parameter are subject to limits given
+ * in the device information returned from rte_event_dev_info_get().
  *
  * @param dev_id
  *   The identifier of the device to configure.
@@ -772,6 +774,9 @@ struct rte_event_dev_config {
  * @return
  *   - 0: Success, device configured.
  *   - <0: Error code returned by the driver configuration function.
+ *     - -ENOTSUP - device does not support configuration.
+ *     - -EINVAL  - invalid input parameter.
+ *     - -EBUSY   - device has already been started.
  */
 int
 rte_event_dev_configure(uint8_t dev_id,
@@ -781,14 +786,35 @@ rte_event_dev_configure(uint8_t dev_id,
 
 /* Event queue configuration bitmap flags */
 #define RTE_EVENT_QUEUE_CFG_ALL_TYPES          (1ULL << 0)
-/**< Allow ATOMIC,ORDERED,PARALLEL schedule type enqueue
+/**< Allow events with schedule types ATOMIC, ORDERED, and PARALLEL to be enqueued to this queue.
  *
+ * The scheduling type to be used is that specified in each individual event.
+ * This flag can only be set when configuring queues on devices reporting the
+ * @ref RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES capability.
+ *
+ * Without this flag, only events with the specific scheduling type configured at queue setup
+ * can be sent to the queue.
+ *
+ * @see RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES
  * @see RTE_SCHED_TYPE_ORDERED, RTE_SCHED_TYPE_ATOMIC, RTE_SCHED_TYPE_PARALLEL
  * @see rte_event_enqueue_burst()
  */
 #define RTE_EVENT_QUEUE_CFG_SINGLE_LINK        (1ULL << 1)
 /**< This event queue links only to a single event port.
  *
+ * No load-balancing of events is performed, as all events
+ * sent to this queue end up at the same event port.
+ * The number of queues on which this flag is to be set must be
+ * configured at device configuration time, by setting the
+ * @ref rte_event_dev_config.nb_single_link_event_port_queues
+ * parameter appropriately.
+ *
+ * This flag serves as a hint only; any devices without specific
+ * support for single-link queues can fall back automatically to
+ * using regular queues with a single destination port.
+ *
+ *  @see rte_event_dev_info.max_single_link_event_port_queue_pairs
+ *  @see rte_event_dev_config.nb_single_link_event_port_queues
  *  @see rte_event_port_setup(), rte_event_port_link()
  */
 
@@ -796,56 +822,79 @@ rte_event_dev_configure(uint8_t dev_id,
 struct rte_event_queue_conf {
 	uint32_t nb_atomic_flows;
 	/**< The maximum number of active flows this queue can track at any
-	 * given time. If the queue is configured for atomic scheduling (by
-	 * applying the RTE_EVENT_QUEUE_CFG_ALL_TYPES flag to event_queue_cfg
-	 * or RTE_SCHED_TYPE_ATOMIC flag to schedule_type), then the
-	 * value must be in the range of [1, nb_event_queue_flows], which was
-	 * previously provided in rte_event_dev_configure().
+	 * given time.
+	 *
+	 * If the queue is configured for atomic scheduling (by
+	 * applying the @ref RTE_EVENT_QUEUE_CFG_ALL_TYPES flag to
+	 * @ref rte_event_queue_conf.event_queue_cfg
+	 * or @ref RTE_SCHED_TYPE_ATOMIC flag to @ref rte_event_queue_conf.schedule_type), then the
+	 * value must be in the range of [1, @ref rte_event_dev_config.nb_event_queue_flows],
+	 * which was previously provided in rte_event_dev_configure().
+	 *
+	 * If the queue is not configured for atomic scheduling this value is ignored.
 	 */
 	uint32_t nb_atomic_order_sequences;
 	/**< The maximum number of outstanding events waiting to be
 	 * reordered by this queue. In other words, the number of entries in
-	 * this queue’s reorder buffer.When the number of events in the
+	 * this queue’s reorder buffer. When the number of events in the
 	 * reorder buffer reaches to *nb_atomic_order_sequences* then the
-	 * scheduler cannot schedule the events from this queue and invalid
-	 * event will be returned from dequeue until one or more entries are
+	 * scheduler cannot schedule the events from this queue and no
+	 * events will be returned from dequeue until one or more entries are
 	 * freed up/released.
+	 *
 	 * If the queue is configured for ordered scheduling (by applying the
-	 * RTE_EVENT_QUEUE_CFG_ALL_TYPES flag to event_queue_cfg or
-	 * RTE_SCHED_TYPE_ORDERED flag to schedule_type), then the value must
-	 * be in the range of [1, nb_event_queue_flows], which was
+	 * @ref RTE_EVENT_QUEUE_CFG_ALL_TYPES flag to @ref rte_event_queue_conf.event_queue_cfg or
+	 * @ref RTE_SCHED_TYPE_ORDERED flag to @ref rte_event_queue_conf.schedule_type),
+	 * then the value must be in the range of
+	 * [1, @ref rte_event_dev_config.nb_event_queue_flows], which was
 	 * previously supplied to rte_event_dev_configure().
+	 *
+	 * If the queue is not configured for ordered scheduling, then this value is ignored.
 	 */
 	uint32_t event_queue_cfg;
 	/**< Queue cfg flags(EVENT_QUEUE_CFG_) */
 	uint8_t schedule_type;
 	/**< Queue schedule type(RTE_SCHED_TYPE_*).
-	 * Valid when RTE_EVENT_QUEUE_CFG_ALL_TYPES bit is not set in
-	 * event_queue_cfg.
+	 *
+	 * Valid when @ref RTE_EVENT_QUEUE_CFG_ALL_TYPES flag is not set in
+	 * @ref rte_event_queue_conf.event_queue_cfg.
+	 *
+	 * If the @ref RTE_EVENT_QUEUE_CFG_ALL_TYPES flag is set, then this field is ignored.
+	 *
+	 * @see RTE_SCHED_TYPE_ORDERED, RTE_SCHED_TYPE_ATOMIC, RTE_SCHED_TYPE_PARALLEL
 	 */
 	uint8_t priority;
 	/**< Priority for this event queue relative to other event queues.
+	 *
 	 * The requested priority should in the range of
-	 * [RTE_EVENT_DEV_PRIORITY_HIGHEST, RTE_EVENT_DEV_PRIORITY_LOWEST].
+	 * [@ref RTE_EVENT_DEV_PRIORITY_HIGHEST, @ref RTE_EVENT_DEV_PRIORITY_LOWEST].
 	 * The implementation shall normalize the requested priority to
 	 * event device supported priority value.
-	 * Valid when the device has RTE_EVENT_DEV_CAP_QUEUE_QOS capability
+	 *
+	 * Valid when the device has @ref RTE_EVENT_DEV_CAP_QUEUE_QOS capability,
+	 * ignored otherwise.
 	 */
 	uint8_t weight;
 	/**< Weight of the event queue relative to other event queues.
+	 *
 	 * The requested weight should be in the range of
-	 * [RTE_EVENT_DEV_WEIGHT_HIGHEST, RTE_EVENT_DEV_WEIGHT_LOWEST].
+	 * [@ref RTE_EVENT_QUEUE_WEIGHT_HIGHEST, @ref RTE_EVENT_QUEUE_WEIGHT_LOWEST].
 	 * The implementation shall normalize the requested weight to event
 	 * device supported weight value.
-	 * Valid when the device has RTE_EVENT_DEV_CAP_QUEUE_QOS capability.
+	 *
+	 * Valid when the device has @ref RTE_EVENT_DEV_CAP_QUEUE_QOS capability,
+	 * ignored otherwise.
 	 */
 	uint8_t affinity;
 	/**< Affinity of the event queue relative to other event queues.
+	 *
 	 * The requested affinity should be in the range of
-	 * [RTE_EVENT_DEV_AFFINITY_HIGHEST, RTE_EVENT_DEV_AFFINITY_LOWEST].
+	 * [@ref RTE_EVENT_QUEUE_AFFINITY_HIGHEST, @ref RTE_EVENT_QUEUE_AFFINITY_LOWEST].
 	 * The implementation shall normalize the requested affinity to event
 	 * device supported affinity value.
-	 * Valid when the device has RTE_EVENT_DEV_CAP_QUEUE_QOS capability.
+	 *
+	 * Valid when the device has @ref RTE_EVENT_DEV_CAP_QUEUE_QOS capability,
+	 * ignored otherwise.
 	 */
 };
 
@@ -860,7 +909,7 @@ struct rte_event_queue_conf {
  *   The identifier of the device.
  * @param queue_id
  *   The index of the event queue to get the configuration information.
- *   The value must be in the range [0, nb_event_queues - 1]
+ *   The value must be less than @ref rte_event_dev_config.nb_event_queues
  *   previously supplied to rte_event_dev_configure().
  * @param[out] queue_conf
  *   The pointer to the default event queue configuration data.
@@ -880,8 +929,9 @@ rte_event_queue_default_conf_get(uint8_t dev_id, uint8_t queue_id,
  * @param dev_id
  *   The identifier of the device.
  * @param queue_id
- *   The index of the event queue to setup. The value must be in the range
- *   [0, nb_event_queues - 1] previously supplied to rte_event_dev_configure().
+ *   The index of the event queue to setup. The value must be
+ *   less than @ref rte_event_dev_config.nb_event_queues previously supplied to
+ *   rte_event_dev_configure().
  * @param queue_conf
  *   The pointer to the configuration data to be used for the event queue.
  *   NULL value is allowed, in which case default configuration	used.
@@ -890,60 +940,60 @@ rte_event_queue_default_conf_get(uint8_t dev_id, uint8_t queue_id,
  *
  * @return
  *   - 0: Success, event queue correctly set up.
- *   - <0: event queue configuration failed
+ *   - <0: event queue configuration failed.
  */
 int
 rte_event_queue_setup(uint8_t dev_id, uint8_t queue_id,
 		      const struct rte_event_queue_conf *queue_conf);
 
 /**
- * The priority of the queue.
+ * Queue attribute id for the priority of the queue.
  */
 #define RTE_EVENT_QUEUE_ATTR_PRIORITY 0
 /**
- * The number of atomic flows configured for the queue.
+ * Queue attribute id for the number of atomic flows configured for the queue.
  */
 #define RTE_EVENT_QUEUE_ATTR_NB_ATOMIC_FLOWS 1
 /**
- * The number of atomic order sequences configured for the queue.
+ * Queue attribute id for the number of atomic order sequences configured for the queue.
  */
 #define RTE_EVENT_QUEUE_ATTR_NB_ATOMIC_ORDER_SEQUENCES 2
 /**
- * The cfg flags for the queue.
+ * Queue attribute id for the configuration flags for the queue.
  */
 #define RTE_EVENT_QUEUE_ATTR_EVENT_QUEUE_CFG 3
 /**
- * The schedule type of the queue.
+ * Queue attribute id for the schedule type of the queue.
  */
 #define RTE_EVENT_QUEUE_ATTR_SCHEDULE_TYPE 4
 /**
- * The weight of the queue.
+ * Queue attribute id for the weight of the queue.
  */
 #define RTE_EVENT_QUEUE_ATTR_WEIGHT 5
 /**
- * Affinity of the queue.
+ * Queue attribute id for the affinity of the queue.
  */
 #define RTE_EVENT_QUEUE_ATTR_AFFINITY 6
 
 /**
- * Get an attribute from a queue.
+ * Get an attribute of an event queue.
  *
  * @param dev_id
- *   Eventdev id
+ *   The identifier of the device.
  * @param queue_id
- *   Eventdev queue id
+ *   The index of the event queue to query. The value must be less than
+ *   @ref rte_event_dev_config.nb_event_queues previously supplied to rte_event_dev_configure().
  * @param attr_id
- *   The attribute ID to retrieve
+ *   The attribute ID to retrieve (RTE_EVENT_QUEUE_ATTR_*).
  * @param[out] attr_value
- *   A pointer that will be filled in with the attribute value if successful
+ *   A pointer that will be filled in with the attribute value if successful.
  *
  * @return
  *   - 0: Successfully returned value
- *   - -EINVAL: invalid device, queue or attr_id provided, or attr_value was
- *		NULL
+ *   - -EINVAL: invalid device, queue or attr_id provided, or attr_value was NULL.
  *   - -EOVERFLOW: returned when attr_id is set to
- *   RTE_EVENT_QUEUE_ATTR_SCHEDULE_TYPE and event_queue_cfg is set to
- *   RTE_EVENT_QUEUE_CFG_ALL_TYPES
+ *   @ref RTE_EVENT_QUEUE_ATTR_SCHEDULE_TYPE and @ref RTE_EVENT_QUEUE_CFG_ALL_TYPES is
+ *   set in the queue configuration flags.
  */
 int
 rte_event_queue_attr_get(uint8_t dev_id, uint8_t queue_id, uint32_t attr_id,
@@ -953,19 +1003,20 @@ rte_event_queue_attr_get(uint8_t dev_id, uint8_t queue_id, uint32_t attr_id,
  * Set an event queue attribute.
  *
  * @param dev_id
- *   Eventdev id
+ *   The identifier of the device.
  * @param queue_id
- *   Eventdev queue id
+ *   The index of the event queue to configure. The value must be less than
+ *   @ref rte_event_dev_config.nb_event_queues previously supplied to rte_event_dev_configure().
  * @param attr_id
- *   The attribute ID to set
+ *   The attribute ID to set (RTE_EVENT_QUEUE_ATTR_*).
  * @param attr_value
- *   The attribute value to set
+ *   The attribute value to set.
  *
  * @return
  *   - 0: Successfully set attribute.
- *   - -EINVAL: invalid device, queue or attr_id.
- *   - -ENOTSUP: device does not support setting the event attribute.
- *   - <0: failed to set event queue attribute
+ *   - <0: failed to set event queue attribute.
+ *   -   -EINVAL: invalid device, queue or attr_id.
+ *   -   -ENOTSUP: device does not support setting the event attribute.
  */
 int
 rte_event_queue_attr_set(uint8_t dev_id, uint8_t queue_id, uint32_t attr_id,
@@ -983,7 +1034,10 @@ rte_event_queue_attr_set(uint8_t dev_id, uint8_t queue_id, uint32_t attr_id,
  */
 #define RTE_EVENT_PORT_CFG_SINGLE_LINK         (1ULL << 1)
 /**< This event port links only to a single event queue.
+ * The queue it links with should be similarly configured with the
+ * @ref RTE_EVENT_QUEUE_CFG_SINGLE_LINK flag.
  *
+ *  @see RTE_EVENT_QUEUE_CFG_SINGLE_LINK
  *  @see rte_event_port_setup(), rte_event_port_link()
  */
 #define RTE_EVENT_PORT_CFG_HINT_PRODUCER       (1ULL << 2)
@@ -999,7 +1053,7 @@ rte_event_queue_attr_set(uint8_t dev_id, uint8_t queue_id, uint32_t attr_id,
 #define RTE_EVENT_PORT_CFG_HINT_CONSUMER       (1ULL << 3)
 /**< Hint that this event port will primarily dequeue events from the system.
  * A PMD can optimize its internal workings by assuming that this port is
- * primarily going to consume events, and not enqueue FORWARD or RELEASE
+ * primarily going to consume events, and not enqueue NEW or FORWARD
  * events.
  *
  * Note that this flag is only a hint, so PMDs must operate under the
@@ -1025,48 +1079,55 @@ struct rte_event_port_conf {
 	/**< A backpressure threshold for new event enqueues on this port.
 	 * Use for *closed system* event dev where event capacity is limited,
 	 * and cannot exceed the capacity of the event dev.
+	 *
 	 * Configuring ports with different thresholds can make higher priority
 	 * traffic less likely to  be backpressured.
 	 * For example, a port used to inject NIC Rx packets into the event dev
 	 * can have a lower threshold so as not to overwhelm the device,
 	 * while ports used for worker pools can have a higher threshold.
-	 * This value cannot exceed the *nb_events_limit*
+	 * This value cannot exceed the @ref rte_event_dev_config.nb_events_limit value
 	 * which was previously supplied to rte_event_dev_configure().
-	 * This should be set to '-1' for *open system*.
+	 *
+	 * This should be set to '-1' for *open system*, i.e. when
+	 * @ref rte_event_dev_info.max_num_events == -1.
 	 */
 	uint16_t dequeue_depth;
-	/**< Configure number of bulk dequeues for this event port.
-	 * This value cannot exceed the *nb_event_port_dequeue_depth*
-	 * which previously supplied to rte_event_dev_configure().
-	 * Ignored when device is not RTE_EVENT_DEV_CAP_BURST_MODE capable.
+	/**< Configure the maximum size of burst dequeues for this event port.
+	 * This value cannot exceed the @ref rte_event_dev_config.nb_event_port_dequeue_depth value
+	 * which was previously supplied to rte_event_dev_configure().
+	 *
+	 * Ignored when device does not support the @ref RTE_EVENT_DEV_CAP_BURST_MODE capability.
 	 */
 	uint16_t enqueue_depth;
-	/**< Configure number of bulk enqueues for this event port.
-	 * This value cannot exceed the *nb_event_port_enqueue_depth*
-	 * which previously supplied to rte_event_dev_configure().
-	 * Ignored when device is not RTE_EVENT_DEV_CAP_BURST_MODE capable.
+	/**< Configure the maximum size of burst enqueues to this event port.
+	 * This value cannot exceed the @ref rte_event_dev_config.nb_event_port_enqueue_depth value
+	 * which was previously supplied to rte_event_dev_configure().
+	 *
+	 * Ignored when device does not support the @ref RTE_EVENT_DEV_CAP_BURST_MODE capability.
 	 */
-	uint32_t event_port_cfg; /**< Port cfg flags(EVENT_PORT_CFG_) */
+	uint32_t event_port_cfg; /**< Port configuration flags(EVENT_PORT_CFG_) */
 };
 
 /**
  * Retrieve the default configuration information of an event port designated
  * by its *port_id* from the event driver for an event device.
  *
- * This function intended to be used in conjunction with rte_event_port_setup()
- * where caller needs to set up the port by overriding few default values.
+ * This function is intended to be used in conjunction with rte_event_port_setup()
+ * where the caller can set up the port by just overriding a few default values.
  *
  * @param dev_id
  *   The identifier of the device.
  * @param port_id
  *   The index of the event port to get the configuration information.
- *   The value must be in the range [0, nb_event_ports - 1]
+ *   The value must be less than @ref rte_event_dev_config.nb_event_ports
  *   previously supplied to rte_event_dev_configure().
  * @param[out] port_conf
- *   The pointer to the default event port configuration data
+ *   The pointer to a structure to store the default event port configuration data.
  * @return
  *   - 0: Success, driver updates the default event port configuration data.
  *   - <0: Error code returned by the driver info get function.
+ *      - -EINVAL - invalid input parameter.
+ *      - -ENOTSUP - function is not supported for this device.
  *
  * @see rte_event_port_setup()
  */
@@ -1080,19 +1141,25 @@ rte_event_port_default_conf_get(uint8_t dev_id, uint8_t port_id,
  * @param dev_id
  *   The identifier of the device.
  * @param port_id
- *   The index of the event port to setup. The value must be in the range
- *   [0, nb_event_ports - 1] previously supplied to rte_event_dev_configure().
+ *   The index of the event port to setup. The value must be less than
+ *   @ref rte_event_dev_config.nb_event_ports previously supplied to
+ *   rte_event_dev_configure().
  * @param port_conf
- *   The pointer to the configuration data to be used for the queue.
- *   NULL value is allowed, in which case default configuration	used.
+ *   The pointer to the configuration data to be used for the port.
+ *   NULL value is allowed, in which case the default configuration is used.
  *
  * @see rte_event_port_default_conf_get()
  *
  * @return
  *   - 0: Success, event port correctly set up.
- *   - <0: Port configuration failed
- *   - (-EDQUOT) Quota exceeded(Application tried to link the queue configured
- *   with RTE_EVENT_QUEUE_CFG_SINGLE_LINK to more than one event ports)
+ *   - <0: Port configuration failed.
+ *     - -EINVAL - Invalid input parameter.
+ *     - -EBUSY - Port already started.
+ *     - -ENOTSUP - Function not supported on this device, or a NULL pointer passed
+ *        as the port_conf parameter, and no default configuration function available
+ *        for this device.
+ *     - -EDQUOT - Application tried to link a queue configured
+ *      with @ref RTE_EVENT_QUEUE_CFG_SINGLE_LINK to more than one event port.
  */
 int
 rte_event_port_setup(uint8_t dev_id, uint8_t port_id,
@@ -1122,8 +1189,8 @@ typedef void (*rte_eventdev_port_flush_t)(uint8_t dev_id,
  * @param dev_id
  *   The identifier of the device.
  * @param port_id
- *   The index of the event port to setup. The value must be in the range
- *   [0, nb_event_ports - 1] previously supplied to rte_event_dev_configure().
+ *   The index of the event port to quiesce. The value must be less than
+ *   @ref rte_event_dev_config.nb_event_ports previously supplied to rte_event_dev_configure().
  * @param release_cb
  *   Callback function invoked once per flushed event.
  * @param args
-- 
2.40.1


^ permalink raw reply	[flat|nested] 123+ messages in thread

* [PATCH v4 08/12] eventdev: improve doxygen comments for control APIs
  2024-02-21 10:32   ` [PATCH v4 00/12] improve eventdev API specification/documentation Bruce Richardson
                       ` (6 preceding siblings ...)
  2024-02-21 10:32     ` [PATCH v4 07/12] eventdev: improve doxygen comments on config fns Bruce Richardson
@ 2024-02-21 10:32     ` Bruce Richardson
  2024-02-26  6:44       ` [EXT] " Pavan Nikhilesh Bhagavatula
  2024-02-21 10:32     ` [PATCH v4 09/12] eventdev: improve comments on scheduling types Bruce Richardson
                       ` (4 subsequent siblings)
  12 siblings, 1 reply; 123+ messages in thread
From: Bruce Richardson @ 2024-02-21 10:32 UTC (permalink / raw)
  To: dev, jerinj, mattias.ronnblom; +Cc: Bruce Richardson

The doxygen comments for the port attributes, start and stop (and
related functions) are improved.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>

---
V3: add missing "." on end of sentences/lines.
---
 lib/eventdev/rte_eventdev.h | 47 +++++++++++++++++++++++--------------
 1 file changed, 29 insertions(+), 18 deletions(-)

diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
index e38354cedd..72814719b2 100644
--- a/lib/eventdev/rte_eventdev.h
+++ b/lib/eventdev/rte_eventdev.h
@@ -1201,19 +1201,21 @@ rte_event_port_quiesce(uint8_t dev_id, uint8_t port_id,
 		       rte_eventdev_port_flush_t release_cb, void *args);
 
 /**
- * The queue depth of the port on the enqueue side
+ * Port attribute id for the maximum size of a burst enqueue operation supported on a port.
  */
 #define RTE_EVENT_PORT_ATTR_ENQ_DEPTH 0
 /**
- * The queue depth of the port on the dequeue side
+ * Port attribute id for the maximum size of a dequeue burst which can be returned from a port.
  */
 #define RTE_EVENT_PORT_ATTR_DEQ_DEPTH 1
 /**
- * The new event threshold of the port
+ * Port attribute id for the new event threshold of the port.
+ * Once the number of events in the system exceeds this threshold, the enqueue of NEW-type
+ * events will fail.
  */
 #define RTE_EVENT_PORT_ATTR_NEW_EVENT_THRESHOLD 2
 /**
- * The implicit release disable attribute of the port
+ * Port attribute id for the implicit release disable attribute of the port.
  */
 #define RTE_EVENT_PORT_ATTR_IMPLICIT_RELEASE_DISABLE 3
 
@@ -1221,17 +1223,18 @@ rte_event_port_quiesce(uint8_t dev_id, uint8_t port_id,
  * Get an attribute from a port.
  *
  * @param dev_id
- *   Eventdev id
+ *   The identifier of the device.
  * @param port_id
- *   Eventdev port id
+ *   The index of the event port to query. The value must be less than
+ *   @ref rte_event_dev_config.nb_event_ports previously supplied to rte_event_dev_configure().
  * @param attr_id
- *   The attribute ID to retrieve
+ *   The attribute ID to retrieve (RTE_EVENT_PORT_ATTR_*).
  * @param[out] attr_value
  *   A pointer that will be filled in with the attribute value if successful
  *
  * @return
- *   - 0: Successfully returned value
- *   - (-EINVAL) Invalid device, port or attr_id, or attr_value was NULL
+ *   - 0: Successfully returned value.
+ *   - (-EINVAL) Invalid device, port or attr_id, or attr_value was NULL.
  */
 int
 rte_event_port_attr_get(uint8_t dev_id, uint8_t port_id, uint32_t attr_id,
@@ -1240,17 +1243,19 @@ rte_event_port_attr_get(uint8_t dev_id, uint8_t port_id, uint32_t attr_id,
 /**
  * Start an event device.
  *
- * The device start step is the last one and consists of setting the event
- * queues to start accepting the events and schedules to event ports.
+ * The device start step is the last one in device setup, and enables the event
+ * ports and queues to start accepting events and scheduling them to event ports.
  *
  * On success, all basic functions exported by the API (event enqueue,
  * event dequeue and so on) can be invoked.
  *
  * @param dev_id
- *   Event device identifier
+ *   Event device identifier.
  * @return
  *   - 0: Success, device started.
- *   - -ESTALE : Not all ports of the device are configured
+ *   - -EINVAL:  Invalid device id provided.
+ *   - -ENOTSUP: Device does not support this operation.
+ *   - -ESTALE: Not all ports of the device are configured.
  *   - -ENOLINK: Not all queues are linked, which could lead to deadlock.
  */
 int
@@ -1292,18 +1297,22 @@ typedef void (*rte_eventdev_stop_flush_t)(uint8_t dev_id,
  * callback function must be registered in every process that can call
  * rte_event_dev_stop().
  *
+ * Only one callback function may be registered. Each new call replaces
+ * the existing registered callback function with the new function passed in.
+ *
  * To unregister a callback, call this function with a NULL callback pointer.
  *
  * @param dev_id
  *   The identifier of the device.
  * @param callback
- *   Callback function invoked once per flushed event.
+ *   Callback function to be invoked once per flushed event.
+ *   Pass NULL to unset any previously-registered callback function.
  * @param userdata
  *   Argument supplied to callback.
  *
  * @return
  *  - 0 on success.
- *  - -EINVAL if *dev_id* is invalid
+ *  - -EINVAL if *dev_id* is invalid.
  *
  * @see rte_event_dev_stop()
  */
@@ -1314,12 +1323,14 @@ int rte_event_dev_stop_flush_callback_register(uint8_t dev_id,
  * Close an event device. The device cannot be restarted!
  *
  * @param dev_id
- *   Event device identifier
+ *   Event device identifier.
  *
  * @return
  *  - 0 on successfully closing device
- *  - <0 on failure to close device
- *  - (-EAGAIN) if device is busy
+ *  - <0 on failure to close device.
+ *    - -EINVAL - invalid device id.
+ *    - -ENOTSUP - operation not supported for this device.
+ *    - -EAGAIN - device is busy.
  */
 int
 rte_event_dev_close(uint8_t dev_id);
-- 
2.40.1


^ permalink raw reply	[flat|nested] 123+ messages in thread

* [PATCH v4 09/12] eventdev: improve comments on scheduling types
  2024-02-21 10:32   ` [PATCH v4 00/12] improve eventdev API specification/documentation Bruce Richardson
                       ` (7 preceding siblings ...)
  2024-02-21 10:32     ` [PATCH v4 08/12] eventdev: improve doxygen comments for control APIs Bruce Richardson
@ 2024-02-21 10:32     ` Bruce Richardson
  2024-02-26  6:49       ` [EXT] " Pavan Nikhilesh Bhagavatula
  2024-02-21 10:32     ` [PATCH v4 10/12] eventdev: clarify docs on event object fields and op types Bruce Richardson
                       ` (3 subsequent siblings)
  12 siblings, 1 reply; 123+ messages in thread
From: Bruce Richardson @ 2024-02-21 10:32 UTC (permalink / raw)
  To: dev, jerinj, mattias.ronnblom; +Cc: Bruce Richardson

The description of ordered and atomic scheduling given in the eventdev
doxygen documentation was not always clear. Try and simplify this so
that it is clearer for the end-user of the application.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>

---
V4: reworked following review by Jerin
V3: extensive rework following feedback. Please re-review!
---
 lib/eventdev/rte_eventdev.h | 77 +++++++++++++++++++++++--------------
 1 file changed, 48 insertions(+), 29 deletions(-)

diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
index 72814719b2..6d881bd665 100644
--- a/lib/eventdev/rte_eventdev.h
+++ b/lib/eventdev/rte_eventdev.h
@@ -1397,25 +1397,36 @@ struct rte_event_vector {
 /**< Ordered scheduling
  *
  * Events from an ordered flow of an event queue can be scheduled to multiple
- * ports for concurrent processing while maintaining the original event order.
- * This scheme enables the user to achieve high single flow throughput by
- * avoiding SW synchronization for ordering between ports which bound to cores.
- *
- * The source flow ordering from an event queue is maintained when events are
- * enqueued to their destination queue within the same ordered flow context.
- * An event port holds the context until application call
- * rte_event_dequeue_burst() from the same port, which implicitly releases
- * the context.
- * User may allow the scheduler to release the context earlier than that
- * by invoking rte_event_enqueue_burst() with RTE_EVENT_OP_RELEASE operation.
- *
- * Events from the source queue appear in their original order when dequeued
- * from a destination queue.
- * Event ordering is based on the received event(s), but also other
- * (newly allocated or stored) events are ordered when enqueued within the same
- * ordered context. Events not enqueued (e.g. released or stored) within the
- * context are  considered missing from reordering and are skipped at this time
- * (but can be ordered again within another context).
+ * ports for concurrent processing while maintaining the original event order,
+ * i.e. the order in which they were first enqueued to that queue.
+ * This scheme allows events pertaining to the same, potentially large, flow to
+ * be processed in parallel on multiple cores without incurring any
+ * application-level order restoration logic overhead.
+ *
+ * After events are dequeued from a set of ports, as those events are re-enqueued
+ * to another queue (with the op field set to @ref RTE_EVENT_OP_FORWARD), the event
+ * device restores the original event order - including events returned from all
+ * ports in the set - before the events are placed on the destination queue,
+ * for subsequent scheduling to ports.
+ *
+ * Any events not forwarded i.e. dropped explicitly via RELEASE or implicitly
+ * released by the next dequeue operation on a port, are skipped by the reordering
+ * stage and do not affect the reordering of other returned events.
+ *
+ * Any NEW events sent on a port are not ordered with respect to FORWARD events sent
+ * on the same port, since they have no original event order. They also are not
+ * ordered with respect to NEW events enqueued on other ports.
+ * However, NEW events to the same destination queue from the same port are guaranteed
+ * to be enqueued in the order they were submitted via rte_event_enqueue_burst().
+ *
+ * NOTE:
+ *   In restoring event order of forwarded events, the eventdev API guarantees that
+ *   all events from the same flow (i.e. same @ref rte_event.flow_id,
+ *   @ref rte_event.priority and @ref rte_event.queue_id) will be put in the original
+ *   order before being forwarded to the destination queue.
+ *   Some eventdevs may implement stricter ordering to achieve this aim,
+ *   for example, restoring the order across *all* flows dequeued from the same ORDERED
+ *   queue.
  *
  * @see rte_event_queue_setup(), rte_event_dequeue_burst(), RTE_EVENT_OP_RELEASE
  */
@@ -1423,18 +1434,26 @@ struct rte_event_vector {
 #define RTE_SCHED_TYPE_ATOMIC           1
 /**< Atomic scheduling
  *
- * Events from an atomic flow of an event queue can be scheduled only to a
+ * Events from an atomic flow, identified by a combination of @ref rte_event.flow_id,
+ * @ref rte_event.queue_id and @ref rte_event.priority, can be scheduled only to a
  * single port at a time. The port is guaranteed to have exclusive (atomic)
  * access to the associated flow context, which enables the user to avoid SW
- * synchronization. Atomic flows also help to maintain event ordering
- * since only one port at a time can process events from a flow of an
- * event queue.
- *
- * The atomic queue synchronization context is dedicated to the port until
- * application call rte_event_dequeue_burst() from the same port,
- * which implicitly releases the context. User may allow the scheduler to
- * release the context earlier than that by invoking rte_event_enqueue_burst()
- * with RTE_EVENT_OP_RELEASE operation.
+ * synchronization. Atomic flows also maintain event ordering
+ * since only one port at a time can process events from each flow of an
+ * event queue, and events within a flow are not reordered within the scheduler.
+ *
+ * An atomic flow is locked to a port when events from that flow are first
+ * scheduled to that port. That lock remains in place until the
+ * application calls rte_event_dequeue_burst() from the same port,
+ * which implicitly releases the lock (if @ref RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL flag is not set).
+ * User may allow the scheduler to release the lock earlier than that by invoking
+ * rte_event_enqueue_burst() with RTE_EVENT_OP_RELEASE operation for each event from that flow.
+ *
+ * NOTE: Where multiple events from the same queue and atomic flow are scheduled to a port,
+ * the lock for that flow is only released once the last event from the flow is released,
+ * or forwarded to another queue. So long as there is at least one event from an atomic
+ * flow scheduled to a port/core (including any events in the port's dequeue queue, not yet read
+ * by the application), that port will hold the synchronization lock for that flow.
  *
  * @see rte_event_queue_setup(), rte_event_dequeue_burst(), RTE_EVENT_OP_RELEASE
  */
-- 
2.40.1


^ permalink raw reply	[flat|nested] 123+ messages in thread

* [PATCH v4 10/12] eventdev: clarify docs on event object fields and op types
  2024-02-21 10:32   ` [PATCH v4 00/12] improve eventdev API specification/documentation Bruce Richardson
                       ` (8 preceding siblings ...)
  2024-02-21 10:32     ` [PATCH v4 09/12] eventdev: improve comments on scheduling types Bruce Richardson
@ 2024-02-21 10:32     ` Bruce Richardson
  2024-02-26  6:52       ` [EXT] " Pavan Nikhilesh Bhagavatula
  2024-02-21 10:32     ` [PATCH v4 11/12] eventdev: drop comment for anon union from doxygen Bruce Richardson
                       ` (2 subsequent siblings)
  12 siblings, 1 reply; 123+ messages in thread
From: Bruce Richardson @ 2024-02-21 10:32 UTC (permalink / raw)
  To: dev, jerinj, mattias.ronnblom; +Cc: Bruce Richardson

Clarify the meaning of the NEW, FORWARD and RELEASE operation types.
For the fields in the "rte_event" struct, enhance the comments on each to
clarify the field's use, whether it is preserved between enqueue and
dequeue, and its role, if any, in scheduling.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
V4: reworked following review by Jerin
V3: updates following review
---
 lib/eventdev/rte_eventdev.h | 161 +++++++++++++++++++++++++-----------
 1 file changed, 111 insertions(+), 50 deletions(-)

diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
index 6d881bd665..7e7e275620 100644
--- a/lib/eventdev/rte_eventdev.h
+++ b/lib/eventdev/rte_eventdev.h
@@ -1515,47 +1515,55 @@ struct rte_event_vector {
 
 /* Event enqueue operations */
 #define RTE_EVENT_OP_NEW                0
-/**< The event producers use this operation to inject a new event to the
- * event device.
+/**< The @ref rte_event.op field must be set to this operation type to inject a new event,
+ * i.e. one not previously dequeued, into the event device, to be scheduled
+ * for processing.
  */
 #define RTE_EVENT_OP_FORWARD            1
-/**< The CPU use this operation to forward the event to different event queue or
- * change to new application specific flow or schedule type to enable
- * pipelining.
+/**< The application must set the @ref rte_event.op field to this operation type to return a
+ * previously dequeued event to the event device to be scheduled for further processing.
  *
- * This operation must only be enqueued to the same port that the
+ * This event *must* be enqueued to the same port that the
  * event to be forwarded was dequeued from.
+ *
+ * The event's fields, including (but not limited to) flow_id, scheduling type,
+ * destination queue, and event payload e.g. mbuf pointer, may all be updated as
+ * desired by the application, but the @ref rte_event.impl_opaque field must
+ * be kept to the same value as was present when the event was dequeued.
  */
 #define RTE_EVENT_OP_RELEASE            2
 /**< Release the flow context associated with the schedule type.
  *
- * If current flow's scheduler type method is *RTE_SCHED_TYPE_ATOMIC*
- * then this function hints the scheduler that the user has completed critical
- * section processing in the current atomic context.
- * The scheduler is now allowed to schedule events from the same flow from
- * an event queue to another port. However, the context may be still held
- * until the next rte_event_dequeue_burst() call, this call allows but does not
- * force the scheduler to release the context early.
- *
- * Early atomic context release may increase parallelism and thus system
+ * If current flow's scheduler type method is @ref RTE_SCHED_TYPE_ATOMIC
+ * then this operation type hints the scheduler that the user has completed critical
+ * section processing for this event in the current atomic context, and that the
+ * scheduler may unlock any atomic locks held for this event.
+ * If this is the last event from an atomic flow, i.e. all flow locks are released
+ * (see @ref RTE_SCHED_TYPE_ATOMIC for details), the scheduler is now allowed to
+ * schedule events from that flow to another port.
+ * However, the atomic locks may be still held until the next rte_event_dequeue_burst()
+ * call; enqueuing an event with op type @ref RTE_EVENT_OP_RELEASE is a hint only,
+ * allowing the scheduler to release the atomic locks early, but not requiring it to do so.
+ *
+ * Early atomic lock release may increase parallelism and thus system
  * performance, but the user needs to design carefully the split into critical
  * vs non-critical sections.
  *
- * If current flow's scheduler type method is *RTE_SCHED_TYPE_ORDERED*
- * then this function hints the scheduler that the user has done all that need
- * to maintain event order in the current ordered context.
- * The scheduler is allowed to release the ordered context of this port and
- * avoid reordering any following enqueues.
- *
- * Early ordered context release may increase parallelism and thus system
- * performance.
+ * If current flow's scheduler type method is @ref RTE_SCHED_TYPE_ORDERED
+ * then this operation type informs the scheduler that the current event has
+ * completed processing and will not be returned to the scheduler, i.e.
+ * it has been dropped, and so the reordering context for that event
+ * should be considered filled.
  *
- * If current flow's scheduler type method is *RTE_SCHED_TYPE_PARALLEL*
- * or no scheduling context is held then this function may be an NOOP,
- * depending on the implementation.
+ * Events with this operation type must only be enqueued to the same port that the
+ * event to be released was dequeued from. The @ref rte_event.impl_opaque
+ * field in the release event must have the same value as that in the original dequeued event.
  *
- * This operation must only be enqueued to the same port that the
- * event to be released was dequeued from.
+ * If a dequeued event is re-enqueued with operation type of @ref RTE_EVENT_OP_RELEASE,
+ * then any subsequent enqueue of that event - or a copy of it - must be done as an event of type
+ * @ref RTE_EVENT_OP_NEW, not @ref RTE_EVENT_OP_FORWARD. This is because any context for
+ * the originally dequeued event, i.e. atomic locks, or reorder buffer entries, will have
+ * been removed or invalidated by the release operation.
  */
 
 /**
@@ -1569,56 +1577,109 @@ struct rte_event {
 		/** Event attributes for dequeue or enqueue operation */
 		struct {
 			uint32_t flow_id:20;
-			/**< Targeted flow identifier for the enqueue and
-			 * dequeue operation.
-			 * The value must be in the range of
-			 * [0, nb_event_queue_flows - 1] which
-			 * previously supplied to rte_event_dev_configure().
+			/**< Target flow identifier for the enqueue and dequeue operation.
+			 *
+			 * For @ref RTE_SCHED_TYPE_ATOMIC, this field is used to identify a
+			 * flow for atomicity within a queue & priority level, such that events
+			 * from each individual flow will only be scheduled to one port at a time.
+			 *
+			 * This field is preserved between enqueue and dequeue when
+			 * a device reports the @ref RTE_EVENT_DEV_CAP_CARRY_FLOW_ID
+			 * capability. Otherwise the value is implementation dependent
+			 * on dequeue.
 			 */
 			uint32_t sub_event_type:8;
 			/**< Sub-event types based on the event source.
+			 *
+			 * This field is preserved between enqueue and dequeue.
+			 *
 			 * @see RTE_EVENT_TYPE_CPU
 			 */
 			uint32_t event_type:4;
-			/**< Event type to classify the event source.
-			 * @see RTE_EVENT_TYPE_ETHDEV, (RTE_EVENT_TYPE_*)
+			/**< Event type to classify the event source. (RTE_EVENT_TYPE_*)
+			 *
+			 * This field is preserved between enqueue and dequeue.
 			 */
 			uint8_t op:2;
-			/**< The type of event enqueue operation - new/forward/
-			 * etc.This field is not preserved across an instance
-			 * and is undefined on dequeue.
-			 * @see RTE_EVENT_OP_NEW, (RTE_EVENT_OP_*)
+			/**< The type of event enqueue operation - new/forward/ etc.
+			 *
+			 * This field is *not* preserved across an instance
+			 * and is implementation dependent on dequeue.
+			 *
+			 * @see RTE_EVENT_OP_NEW
+			 * @see RTE_EVENT_OP_FORWARD
+			 * @see RTE_EVENT_OP_RELEASE
 			 */
 			uint8_t rsvd:4;
-			/**< Reserved for future use */
+			/**< Reserved for future use.
+			 *
+			 * Should be set to zero when initializing event structures.
+			 *
+			 * When forwarding or releasing existing events dequeued from the scheduler,
+			 * this field can be ignored.
+			 */
 			uint8_t sched_type:2;
 			/**< Scheduler synchronization type (RTE_SCHED_TYPE_*)
 			 * associated with flow id on a given event queue
 			 * for the enqueue and dequeue operation.
+			 *
+			 * This field is used to determine the scheduling type
+			 * for events sent to queues where @ref RTE_EVENT_QUEUE_CFG_ALL_TYPES
+			 * is configured.
+			 * For queues where only a single scheduling type is available,
+			 * this field must be set to match the configured scheduling type.
+			 *
+			 * This field is preserved between enqueue and dequeue.
+			 *
+			 * @see RTE_SCHED_TYPE_ORDERED
+			 * @see RTE_SCHED_TYPE_ATOMIC
+			 * @see RTE_SCHED_TYPE_PARALLEL
 			 */
 			uint8_t queue_id;
 			/**< Targeted event queue identifier for the enqueue or
 			 * dequeue operation.
-			 * The value must be in the range of
-			 * [0, nb_event_queues - 1] which previously supplied to
-			 * rte_event_dev_configure().
+			 * The value must be less than @ref rte_event_dev_config.nb_event_queues
+			 * which was previously supplied to rte_event_dev_configure().
+			 *
+			 * This field is preserved between enqueue and dequeue.
 			 */
 			uint8_t priority;
 			/**< Event priority relative to other events in the
 			 * event queue. The requested priority should in the
-			 * range of  [RTE_EVENT_DEV_PRIORITY_HIGHEST,
-			 * RTE_EVENT_DEV_PRIORITY_LOWEST].
+			 * range of [@ref RTE_EVENT_DEV_PRIORITY_HIGHEST,
+			 * @ref RTE_EVENT_DEV_PRIORITY_LOWEST].
+			 *
 			 * The implementation shall normalize the requested
 			 * priority to supported priority value.
-			 * Valid when the device has
-			 * RTE_EVENT_DEV_CAP_EVENT_QOS capability.
+			 * [For devices where the supported priority range is a power-of-2, the
+			 * normalization will be done via bit-shifting, so only the highest
+			 * log2(num_priorities) bits will be used by the event device]
+			 *
+			 * Valid when the device has @ref RTE_EVENT_DEV_CAP_EVENT_QOS capability
+			 * and this field is preserved between enqueue and dequeue,
+			 * though with possible loss of precision due to normalization and
+			 * subsequent de-normalization. (For example, if a device only supports 8
+			 * priority levels, only the high 3 bits of this field will be
+			 * used by that device, and hence only the value of those 3 bits are
+			 * guaranteed to be preserved between enqueue and dequeue.)
+			 *
+			 * Ignored when device does not support @ref RTE_EVENT_DEV_CAP_EVENT_QOS
+			 * capability, and it is implementation dependent if this field is preserved
+			 * between enqueue and dequeue.
 			 */
 			uint8_t impl_opaque;
-			/**< Implementation specific opaque value.
-			 * An implementation may use this field to hold
+			/**< Opaque field for event device use.
+			 *
+			 * An event driver implementation may use this field to hold an
 			 * implementation specific value to share between
 			 * dequeue and enqueue operation.
-			 * The application should not modify this field.
+			 *
+			 * The application must not modify this field.
+			 * Its value is implementation dependent on dequeue,
+			 * and must be returned unmodified on enqueue when
+			 * op type is @ref RTE_EVENT_OP_FORWARD or @ref RTE_EVENT_OP_RELEASE.
+			 * This field is ignored on events with op type
+			 * @ref RTE_EVENT_OP_NEW.
 			 */
 		};
 	};
-- 
2.40.1


^ permalink raw reply	[flat|nested] 123+ messages in thread

* [PATCH v4 11/12] eventdev: drop comment for anon union from doxygen
  2024-02-21 10:32   ` [PATCH v4 00/12] improve eventdev API specification/documentation Bruce Richardson
                       ` (9 preceding siblings ...)
  2024-02-21 10:32     ` [PATCH v4 10/12] eventdev: clarify docs on event object fields and op types Bruce Richardson
@ 2024-02-21 10:32     ` Bruce Richardson
  2024-02-26  6:52       ` [EXT] " Pavan Nikhilesh Bhagavatula
  2024-02-21 10:32     ` [PATCH v4 12/12] eventdev: fix doxygen processing of event vector struct Bruce Richardson
  2024-02-23 12:36     ` [PATCH v4 00/12] improve eventdev API specification/documentation Jerin Jacob
  12 siblings, 1 reply; 123+ messages in thread
From: Bruce Richardson @ 2024-02-21 10:32 UTC (permalink / raw)
  To: dev, jerinj, mattias.ronnblom; +Cc: Bruce Richardson

Make the comments on the unnamed unions in the rte_event structure
regular comments rather than doxygen ones. The comments do not add
anything meaningful to the doxygen output.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
 lib/eventdev/rte_eventdev.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
index 7e7e275620..03748eb437 100644
--- a/lib/eventdev/rte_eventdev.h
+++ b/lib/eventdev/rte_eventdev.h
@@ -1571,7 +1571,7 @@ struct rte_event_vector {
  * for dequeue and enqueue operation
  */
 struct rte_event {
-	/** WORD0 */
+	/* WORD0 */
 	union {
 		uint64_t event;
 		/** Event attributes for dequeue or enqueue operation */
@@ -1683,7 +1683,7 @@ struct rte_event {
 			 */
 		};
 	};
-	/** WORD1 */
+	/* WORD1 */
 	union {
 		uint64_t u64;
 		/**< Opaque 64-bit value */
-- 
2.40.1


^ permalink raw reply	[flat|nested] 123+ messages in thread

* [PATCH v4 12/12] eventdev: fix doxygen processing of event vector struct
  2024-02-21 10:32   ` [PATCH v4 00/12] improve eventdev API specification/documentation Bruce Richardson
                       ` (10 preceding siblings ...)
  2024-02-21 10:32     ` [PATCH v4 11/12] eventdev: drop comment for anon union from doxygen Bruce Richardson
@ 2024-02-21 10:32     ` Bruce Richardson
  2024-02-26  6:53       ` [EXT] " Pavan Nikhilesh Bhagavatula
  2024-03-04 15:35       ` Thomas Monjalon
  2024-02-23 12:36     ` [PATCH v4 00/12] improve eventdev API specification/documentation Jerin Jacob
  12 siblings, 2 replies; 123+ messages in thread
From: Bruce Richardson @ 2024-02-21 10:32 UTC (permalink / raw)
  To: dev, jerinj, mattias.ronnblom; +Cc: Bruce Richardson, stable

The event vector struct was missing comments on two members, and also
was inadvertently creating a local variable called "__rte_aligned" in
the doxygen output.

Correct the comment markers to fix the former issue, and fix the latter
by putting "#ifndef __DOXYGEN__" around the alignment constraint.

Fixes: 1cc44d409271 ("eventdev: introduce event vector capability")
Fixes: 3c838062b91f ("eventdev: introduce event vector Rx capability")
Fixes: 699155f2d4e2 ("eventdev: fix clang C++ include")
Cc: stable@dpdk.org

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
 lib/eventdev/rte_eventdev.h | 10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
index 03748eb437..cf7d103a6c 100644
--- a/lib/eventdev/rte_eventdev.h
+++ b/lib/eventdev/rte_eventdev.h
@@ -1358,10 +1358,8 @@ struct rte_event_vector {
 		 * port and queue of the mbufs in the vector
 		 */
 		struct {
-			uint16_t port;
-			/* Ethernet device port id. */
-			uint16_t queue;
-			/* Ethernet device queue id. */
+			uint16_t port;   /**< Ethernet device port id. */
+			uint16_t queue;  /**< Ethernet device queue id. */
 		};
 	};
 	/**< Union to hold common attributes of the vector array. */
@@ -1390,7 +1388,11 @@ struct rte_event_vector {
 	 * vector array can be an array of mbufs or pointers or opaque u64
 	 * values.
 	 */
+#ifndef __DOXYGEN__
 } __rte_aligned(16);
+#else
+};
+#endif
 
 /* Scheduler type definitions */
 #define RTE_SCHED_TYPE_ORDERED          0
-- 
2.40.1


^ permalink raw reply	[flat|nested] 123+ messages in thread

* Re: [PATCH v4 00/12] improve eventdev API specification/documentation
  2024-02-21 10:32   ` [PATCH v4 00/12] improve eventdev API specification/documentation Bruce Richardson
                       ` (11 preceding siblings ...)
  2024-02-21 10:32     ` [PATCH v4 12/12] eventdev: fix doxygen processing of event vector struct Bruce Richardson
@ 2024-02-23 12:36     ` Jerin Jacob
  12 siblings, 0 replies; 123+ messages in thread
From: Jerin Jacob @ 2024-02-23 12:36 UTC (permalink / raw)
  To: Bruce Richardson; +Cc: dev, jerinj, mattias.ronnblom

On Wed, Feb 21, 2024 at 4:10 PM Bruce Richardson
<bruce.richardson@intel.com> wrote:
>
> This patchset makes rewording improvements to the eventdev doxygen
> documentation to try and ensure that it is as clear as possible,
> describes the implementation as accurately as possible, and is
> consistent within itself.
>
> Most changes are just minor rewordings, along with plenty of changes to
> change references into doxygen links/cross-references.
>
> In tightening up the definitions, there may be subtle changes in meaning
> which should be checked for carefully by reviewers. Where there was
> ambiguity, the behaviour of existing code is documented so as to avoid
> breaking existing apps.
>
> V4:
> * additional rework following comments from Jerin and on-list discussion
> * extra 12th patch to clean up some doxygen issues


@Mattias Rönnblom  I would like to merge this for rc2. It would be
great if you can review this version and Ack it.

^ permalink raw reply	[flat|nested] 123+ messages in thread

* RE: [EXT] [PATCH v4 01/12] eventdev: improve doxygen introduction text
  2024-02-21 10:32     ` [PATCH v4 01/12] eventdev: improve doxygen introduction text Bruce Richardson
@ 2024-02-26  4:51       ` Pavan Nikhilesh Bhagavatula
  2024-02-26  9:59         ` Bruce Richardson
  0 siblings, 1 reply; 123+ messages in thread
From: Pavan Nikhilesh Bhagavatula @ 2024-02-26  4:51 UTC (permalink / raw)
  To: Bruce Richardson, dev, Jerin Jacob, mattias.ronnblom

> Make some textual improvements to the introduction to eventdev and event
> devices in the eventdev header file. This text appears in the doxygen
> output for the header file, and introduces the key concepts, for
> example: events, event devices, queues, ports and scheduling.
> 
> This patch makes the following improvements:
> * small textual fixups, e.g. correcting use of singular/plural
> * rewrites of some sentences to improve clarity
> * using doxygen markdown to split the whole large block up into
>   sections, thereby making it easier to read.
> 
> No large-scale changes are made, and blocks are not reordered
> 
> Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
> 

Acked-by: Pavan Nikhilesh <pbhagavatula@marvell.com>

> ---
> V4: reworked following review by Jerin
> V3: reworked following feedback from Mattias
> ---
>  lib/eventdev/rte_eventdev.h | 140 ++++++++++++++++++++++--------------
>  1 file changed, 86 insertions(+), 54 deletions(-)
> 
> diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
> index 1f99e933c0..985286c616 100644
> --- a/lib/eventdev/rte_eventdev.h
> +++ b/lib/eventdev/rte_eventdev.h
> @@ -12,25 +12,35 @@
>   * @file
>   *
>   * RTE Event Device API
> - *
> - * In a polling model, lcores poll ethdev ports and associated rx queues
> - * directly to look for packet. In an event driven model, by contrast, lcores
> - * call the scheduler that selects packets for them based on programmer
> - * specified criteria. Eventdev library adds support for event driven
> - * programming model, which offer applications automatic multicore scaling,
> - * dynamic load balancing, pipelining, packet ingress order maintenance and
> - * synchronization services to simplify application packet processing.
> + * ====================
> + *
> + * In a traditional DPDK application model, the application polls Ethdev port
> RX
> + * queues to look for work, and processing is done in a run-to-completion
> manner,
> + * after which the packets are transmitted on a Ethdev TX queue. Load is
> + * distributed by statically assigning ports and queues to lcores, and NIC
> + * receive-side scaling (RSS), or similar, is employed to distribute network
> flows
> + * (and thus work) on the same port across multiple RX queues.
> + *
> + * In contrast, in an event-driver model, as supported by this "eventdev"

Should be event-driven model.

> library,
> + * incoming packets (or other input events) are fed into an event device,
> which
> + * schedules those packets across the available lcores, in accordance with its
> configuration.
> + * This event-driven programming model offers applications automatic
> multicore scaling,
> + * dynamic load balancing, pipelining, packet order maintenance,
> synchronization,

<snip>

^ permalink raw reply	[flat|nested] 123+ messages in thread

* RE: [EXT] [PATCH v4 02/12] eventdev: move text on driver internals to proper section
  2024-02-21 10:32     ` [PATCH v4 02/12] eventdev: move text on driver internals to proper section Bruce Richardson
@ 2024-02-26  5:01       ` Pavan Nikhilesh Bhagavatula
  0 siblings, 0 replies; 123+ messages in thread
From: Pavan Nikhilesh Bhagavatula @ 2024-02-26  5:01 UTC (permalink / raw)
  To: Bruce Richardson, dev, Jerin Jacob, mattias.ronnblom

> Inside the doxygen introduction text, some internal details of how
> eventdev works were mixed in with application-relevant details. Move
> these details on probing etc. to the driver-relevant section.
> 
> Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>

Acked-by: Pavan Nikhilesh <pbhagavatula@marvell.com>

> ---
>  lib/eventdev/rte_eventdev.h | 32 ++++++++++++++++----------------
>  1 file changed, 16 insertions(+), 16 deletions(-)
> 
> diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
> index 985286c616..c2782b2e30 100644
> --- a/lib/eventdev/rte_eventdev.h
> +++ b/lib/eventdev/rte_eventdev.h
> @@ -124,22 +124,6 @@
>   * In all functions of the Event API, the Event device is
>   * designated by an integer >= 0 named the device identifier *dev_id*
>   *
> - * At the Event driver level, Event devices are represented by a generic
> - * data structure of type *rte_event_dev*.
> - *
> - * Event devices are dynamically registered during the PCI/SoC device probing
> - * phase performed at EAL initialization time.
> - * When an Event device is being probed, an *rte_event_dev* structure is
> allocated
> - * for it and the event_dev_init() function supplied by the Event driver
> - * is invoked to properly initialize the device.
> - *
> - * The role of the device init function is to reset the device hardware or
> - * to initialize the software event driver implementation.
> - *
> - * If the device init operation is successful, the device is assigned a device
> - * id (dev_id) for application use.
> - * Otherwise, the *rte_event_dev* structure is freed.
> - *
>   * The functions exported by the application Event API to setup a device
>   * must be invoked in the following order:
>   *     - rte_event_dev_configure()
> @@ -175,6 +159,22 @@
>   * Driver-Oriented Event API
>   * -------------------------
>   *
> + * At the Event driver level, Event devices are represented by a generic
> + * data structure of type *rte_event_dev*.
> + *
> + * Event devices are dynamically registered during the PCI/SoC device probing
> + * phase performed at EAL initialization time.
> + * When an Event device is being probed, an *rte_event_dev* structure is
> allocated
> + * for it and the event_dev_init() function supplied by the Event driver
> + * is invoked to properly initialize the device.
> + *
> + * The role of the device init function is to reset the device hardware or
> + * to initialize the software event driver implementation.
> + *
> + * If the device init operation is successful, the device is assigned a device
> + * id (dev_id) for application use.
> + * Otherwise, the *rte_event_dev* structure is freed.
> + *
>   * Each function of the application Event API invokes a specific function
>   * of the PMD that controls the target device designated by its device
>   * identifier.
> --
> 2.40.1


^ permalink raw reply	[flat|nested] 123+ messages in thread

* RE: [EXT] [PATCH v4 03/12] eventdev: update documentation on device capability flags
  2024-02-21 10:32     ` [PATCH v4 03/12] eventdev: update documentation on device capability flags Bruce Richardson
@ 2024-02-26  5:07       ` Pavan Nikhilesh Bhagavatula
  0 siblings, 0 replies; 123+ messages in thread
From: Pavan Nikhilesh Bhagavatula @ 2024-02-26  5:07 UTC (permalink / raw)
  To: Bruce Richardson, dev, Jerin Jacob, mattias.ronnblom

> Update the device capability docs, to:
> 
> * include more cross-references
> * split longer text into paragraphs, in most cases with each flag having
>   a single-line summary at the start of the doc block
> * general comment rewording and clarification as appropriate
> 
> Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>

Acked-by: Pavan Nikhilesh <pbhagavatula@marvell.com>

> ---
> V4: Rebased on latest main branch
>     Updated function cross-references for consistency
>     General changes following review by Jerin
> V3: Updated following feedback from Mattias
> ---
>  lib/eventdev/rte_eventdev.h | 172 +++++++++++++++++++++++++-----------
>  1 file changed, 121 insertions(+), 51 deletions(-)


^ permalink raw reply	[flat|nested] 123+ messages in thread

* RE: [EXT] [PATCH v4 04/12] eventdev: cleanup doxygen comments on info structure
  2024-02-21 10:32     ` [PATCH v4 04/12] eventdev: cleanup doxygen comments on info structure Bruce Richardson
@ 2024-02-26  5:18       ` Pavan Nikhilesh Bhagavatula
  0 siblings, 0 replies; 123+ messages in thread
From: Pavan Nikhilesh Bhagavatula @ 2024-02-26  5:18 UTC (permalink / raw)
  To: Bruce Richardson, dev, Jerin Jacob, mattias.ronnblom

> Some small rewording changes to the doxygen comments on struct
> rte_event_dev_info.
> 
> Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>

Acked-by: Pavan Nikhilesh <pbhagavatula@marvell.com>

> 
> ---
> V3: reworked following feedback
> - added closing "." on comments
> - added more cross-reference links
> - reworded priority level comments
> ---
>  lib/eventdev/rte_eventdev.h | 85 +++++++++++++++++++++++++------------
>  1 file changed, 58 insertions(+), 27 deletions(-)
> 
> diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
> index f7b98a6cfa..b9ec3fc45e 100644
> --- a/lib/eventdev/rte_eventdev.h
> +++ b/lib/eventdev/rte_eventdev.h
> @@ -537,57 +537,88 @@ rte_event_dev_socket_id(uint8_t dev_id);
>   * Event device information
>   */
>  struct rte_event_dev_info {
> -	const char *driver_name;	/**< Event driver name */
> -	struct rte_device *dev;	/**< Device information */
> +	const char *driver_name;	/**< Event driver name. */
> +	struct rte_device *dev;	/**< Device information. */
>  	uint32_t min_dequeue_timeout_ns;
> -	/**< Minimum supported global dequeue timeout(ns) by this device */
> +	/**< Minimum global dequeue timeout(ns) supported by this device. */
>  	uint32_t max_dequeue_timeout_ns;
> -	/**< Maximum supported global dequeue timeout(ns) by this device */
> +	/**< Maximum global dequeue timeout(ns) supported by this device. */
>  	uint32_t dequeue_timeout_ns;
> -	/**< Configured global dequeue timeout(ns) for this device */
> +	/**< Configured global dequeue timeout(ns) for this device. */
>  	uint8_t max_event_queues;
> -	/**< Maximum event_queues supported by this device */
> +	/**< Maximum event queues supported by this device.
> +	 *
> +	 * This count excludes any queues covered by @ref max_single_link_event_port_queue_pairs.
> +	 */
>  	uint32_t max_event_queue_flows;
> -	/**< Maximum supported flows in an event queue by this device*/
> +	/**< Maximum number of flows within an event queue supported by this device. */
>  	uint8_t max_event_queue_priority_levels;
> -	/**< Maximum number of event queue priority levels by this device.
> -	 * Valid when the device has RTE_EVENT_DEV_CAP_QUEUE_QOS capability
> +	/**< Maximum number of event queue priority levels supported by this device.
> +	 *
> +	 * Valid when the device has @ref RTE_EVENT_DEV_CAP_QUEUE_QOS capability.
> +	 *
> +	 * The implementation shall normalize priority values specified between
> +	 * @ref RTE_EVENT_DEV_PRIORITY_HIGHEST and @ref RTE_EVENT_DEV_PRIORITY_LOWEST
> +	 * to map them internally to this range of priorities.
> +	 * [For devices supporting a power-of-2 number of priority levels, this
> +	 * normalization will be done via a right-shift operation, so only the top
> +	 * log2(max_levels) bits will be used by the event device.]
> +	 *
> +	 * @see rte_event_queue_conf.priority
>  	 */
>  	uint8_t max_event_priority_levels;
>  	/**< Maximum number of event priority levels by this device.
> -	 * Valid when the device has RTE_EVENT_DEV_CAP_EVENT_QOS capability
> +	 *
> +	 * Valid when the device has @ref RTE_EVENT_DEV_CAP_EVENT_QOS capability.
> +	 *
> +	 * The implementation shall normalize priority values specified between
> +	 * @ref RTE_EVENT_DEV_PRIORITY_HIGHEST and @ref RTE_EVENT_DEV_PRIORITY_LOWEST
> +	 * to map them internally to this range of priorities.
> +	 * [For devices supporting a power-of-2 number of priority levels, this
> +	 * normalization will be done via a right-shift operation, so only the top
> +	 * log2(max_levels) bits will be used by the event device.]
> +	 *
> +	 * @see rte_event.priority
>  	 */
>  	uint8_t max_event_ports;
> -	/**< Maximum number of event ports supported by this device */
> +	/**< Maximum number of event ports supported by this device.
> +	 *
> +	 * This count excludes any ports covered by @ref max_single_link_event_port_queue_pairs.
> +	 */
>  	uint8_t max_event_port_dequeue_depth;
> -	/**< Maximum number of events can be dequeued at a time from an
> -	 * event port by this device.
> -	 * A device that does not support bulk dequeue will set this as 1.
> +	/**< Maximum number of events that can be dequeued at a time from an event port
> +	 * on this device.
> +	 *
> +	 * A device that does not support burst dequeue
> +	 * (@ref RTE_EVENT_DEV_CAP_BURST_MODE) will set this to 1.
>  	 */
>  	uint32_t max_event_port_enqueue_depth;
> -	/**< Maximum number of events can be enqueued at a time from an
> -	 * event port by this device.
> -	 * A device that does not support bulk enqueue will set this as 1.
> +	/**< Maximum number of events that can be enqueued at a time to an event port
> +	 * on this device.
> +	 *
> +	 * A device that does not support burst enqueue
> +	 * (@ref RTE_EVENT_DEV_CAP_BURST_MODE) will set this to 1.
>  	 */
>  	uint8_t max_event_port_links;
> -	/**< Maximum number of queues that can be linked to a single event
> -	 * port by this device.
> +	/**< Maximum number of queues that can be linked to a single event port on this device.
>  	 */
>  	int32_t max_num_events;
>  	/**< A *closed system* event dev has a limit on the number of events it
> -	 * can manage at a time. An *open system* event dev does not have a
> -	 * limit and will specify this as -1.
> +	 * can manage at a time.
> +	 * Once the number of events tracked by an eventdev exceeds this number,
> +	 * any enqueues of NEW events will fail.
> +	 * An *open system* event dev does not have a limit and will specify this as -1.
>  	 */
>  	uint32_t event_dev_cap;
> -	/**< Event device capabilities(RTE_EVENT_DEV_CAP_)*/
> +	/**< Event device capabilities flags (RTE_EVENT_DEV_CAP_*). */
>  	uint8_t max_single_link_event_port_queue_pairs;
> -	/**< Maximum number of event ports and queues that are optimized for
> -	 * (and only capable of) single-link configurations supported by this
> -	 * device. These ports and queues are not accounted for in
> -	 * max_event_ports or max_event_queues.
> +	/**< Maximum number of event ports and queues, supported by this device,
> +	 * that are optimized for (and only capable of) single-link configurations.
> +	 * These ports and queues are not accounted for in @ref max_event_ports
> +	 * or @ref max_event_queues.
>  	 */
>  	uint8_t max_profiles_per_port;
> -	/**< Maximum number of event queue profiles per event port.
> +	/**< Maximum number of event queue link profiles per event port.
>  	 * A device that doesn't support multiple profiles will set this as 1.
>  	 */
>  };
> --
> 2.40.1
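The right-shift priority normalization described in the comments above can be sketched in plain C. This is a hypothetical illustration of the documented behaviour, not DPDK code: a device supporting a power-of-2 number of priority levels keeps only the top log2(levels) bits of the 8-bit priority range from RTE_EVENT_DEV_PRIORITY_HIGHEST (0) to RTE_EVENT_DEV_PRIORITY_LOWEST (255).

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical sketch (not DPDK code) of the normalization described in
 * the doxygen comment above: for a device with 2^levels_log2 priority
 * levels, only the top levels_log2 bits of the 8-bit priority are kept,
 * implemented as a right-shift.
 */
static uint8_t
normalize_priority(uint8_t prio, unsigned int levels_log2)
{
	/* keep only the top levels_log2 bits of the 8-bit priority value */
	return (uint8_t)(prio >> (8 - levels_log2));
}
```

With 8 priority levels (levels_log2 = 3), priority 0 maps to internal level 0 and 255 to level 7, so applications spreading priorities across the full 0-255 range keep their relative ordering.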


^ permalink raw reply	[flat|nested] 123+ messages in thread

* RE: [EXT] [PATCH v4 05/12] eventdev: improve function documentation for query fns
  2024-02-21 10:32     ` [PATCH v4 05/12] eventdev: improve function documentation for query fns Bruce Richardson
@ 2024-02-26  5:18       ` Pavan Nikhilesh Bhagavatula
  0 siblings, 0 replies; 123+ messages in thread
From: Pavan Nikhilesh Bhagavatula @ 2024-02-26  5:18 UTC (permalink / raw)
  To: Bruce Richardson, dev, Jerin Jacob, mattias.ronnblom

> General improvements to the doxygen docs for eventdev functions for
> querying basic information:
> * number of devices
> * id for a particular device
> * socket id of device
> * capability information for a device
> 
> Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
>

Acked-by: Pavan Nikhilesh <pbhagavatula@marvell.com>

 
> ---
> V3: minor changes following review
> ---
>  lib/eventdev/rte_eventdev.h | 22 +++++++++++++---------
>  1 file changed, 13 insertions(+), 9 deletions(-)
> 
> diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
> index b9ec3fc45e..9d286168b1 100644
> --- a/lib/eventdev/rte_eventdev.h
> +++ b/lib/eventdev/rte_eventdev.h
> @@ -498,8 +498,7 @@ struct rte_event;
>   */
> 
>  /**
> - * Get the total number of event devices that have been successfully
> - * initialised.
> + * Get the total number of event devices.
>   *
>   * @return
>   *   The total number of usable event devices.
> @@ -514,8 +513,10 @@ rte_event_dev_count(void);
>   *   Event device name to select the event device identifier.
>   *
>   * @return
> - *   Returns event device identifier on success.
> - *   - <0: Failure to find named event device.
> + *   Event device identifier (dev_id >= 0) on success.
> + *   Negative error code on failure:
> + *   - -EINVAL - input name parameter is invalid.
> + *   - -ENODEV - no event device found with that name.
>   */
>  int
>  rte_event_dev_get_dev_id(const char *name);
> @@ -528,7 +529,8 @@ rte_event_dev_get_dev_id(const char *name);
>   * @return
>   *   The NUMA socket id to which the device is connected or
>   *   a default of zero if the socket could not be determined.
> - *   -(-EINVAL)  dev_id value is out of range.
> + *   -EINVAL on error, where the given dev_id value does not
> + *   correspond to any event device.
>   */
>  int
>  rte_event_dev_socket_id(uint8_t dev_id);
> @@ -624,18 +626,20 @@ struct rte_event_dev_info {
>  };
> 
>  /**
> - * Retrieve the contextual information of an event device.
> + * Retrieve details of an event device's capabilities and configuration limits.
>   *
>   * @param dev_id
>   *   The identifier of the device.
>   *
>   * @param[out] dev_info
>   *   A pointer to a structure of type *rte_event_dev_info* to be filled with the
> - *   contextual information of the device.
> + *   information about the device's capabilities.
>   *
>   * @return
> - *   - 0: Success, driver updates the contextual information of the event device
> - *   - <0: Error code returned by the driver info get function.
> + *   - 0: Success, information about the event device is present in dev_info.
> + *   - <0: Failure, error code returned by the function.
> + *     - -EINVAL - invalid input parameters, e.g. incorrect device id.
> + *     - -ENOTSUP - device does not support returning capabilities information.
>   */
>  int
>  rte_event_dev_info_get(uint8_t dev_id, struct rte_event_dev_info *dev_info);
> --
> 2.40.1
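The negative-errno return convention documented above for rte_event_dev_get_dev_id() can be modelled with a small self-contained stub (hypothetical, not DPDK code; the device name "event_sw0" is an assumed example):

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>
#include <string.h>

/* Stub illustrating the documented return convention: a non-negative
 * device id on success, -EINVAL for an invalid name parameter, and
 * -ENODEV when no event device matches the name.
 */
static int
stub_get_dev_id(const char *name)
{
	if (name == NULL)
		return -EINVAL;
	if (strcmp(name, "event_sw0") != 0)
		return -ENODEV;
	return 0; /* the id of the matching device */
}
```

Note that callers should test for `ret < 0` rather than `ret != 0`, since 0 is a valid device identifier.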


^ permalink raw reply	[flat|nested] 123+ messages in thread

* RE: [EXT] [PATCH v4 06/12] eventdev: improve doxygen comments on configure struct
  2024-02-21 10:32     ` [PATCH v4 06/12] eventdev: improve doxygen comments on configure struct Bruce Richardson
@ 2024-02-26  6:36       ` Pavan Nikhilesh Bhagavatula
  0 siblings, 0 replies; 123+ messages in thread
From: Pavan Nikhilesh Bhagavatula @ 2024-02-26  6:36 UTC (permalink / raw)
  To: Bruce Richardson, dev, Jerin Jacob, mattias.ronnblom

> General rewording and cleanup on the rte_event_dev_config structure.
> Improved the wording of some sentences and created linked
> cross-references out of the existing references to the dev_info
> structure.
> 
> As part of the rework, fix issue with how single-link port-queue pairs
> were counted in the rte_event_dev_config structure. This did not match
> the actual implementation and, if following the documentation, certain
> valid port/queue configurations would have been impossible to configure.
> Fix this by changing the documentation to match the implementation
> 
> Bugzilla ID:  1368
> Fixes: 75d113136f38 ("eventdev: express DLB/DLB2 PMD constraints")
> 
> Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>

Acked-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
> 
> ---
> V3:
> - minor tweaks following review
> - merged in doc fix for bugzilla 1368 into this patch, since it fit with
>   other clarifications to the config struct.
> ---
>  lib/eventdev/rte_eventdev.h | 61 ++++++++++++++++++++++---------------
>  1 file changed, 37 insertions(+), 24 deletions(-)
> 
> diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
> index 9d286168b1..73cc6b6688 100644
> --- a/lib/eventdev/rte_eventdev.h
> +++ b/lib/eventdev/rte_eventdev.h
> @@ -684,9 +684,9 @@ rte_event_dev_attr_get(uint8_t dev_id, uint32_t attr_id,
>  struct rte_event_dev_config {
>  	uint32_t dequeue_timeout_ns;
>  	/**< rte_event_dequeue_burst() timeout on this device.
> -	 * This value should be in the range of *min_dequeue_timeout_ns* and
> -	 * *max_dequeue_timeout_ns* which previously provided in
> -	 * rte_event_dev_info_get()
> +	 * This value should be in the range of @ref rte_event_dev_info.min_dequeue_timeout_ns and
> +	 * @ref rte_event_dev_info.max_dequeue_timeout_ns returned by
> +	 * @ref rte_event_dev_info_get()
>  	 * The value 0 is allowed, in which case, default dequeue timeout used.
>  	 * @see RTE_EVENT_DEV_CFG_PER_DEQUEUE_TIMEOUT
>  	 */
> @@ -694,40 +694,53 @@ struct rte_event_dev_config {
>  	/**< In a *closed system* this field is the limit on maximum number of
>  	 * events that can be inflight in the eventdev at a given time. The
>  	 * limit is required to ensure that the finite space in a closed system
> -	 * is not overwhelmed. The value cannot exceed the *max_num_events*
> -	 * as provided by rte_event_dev_info_get().
> -	 * This value should be set to -1 for *open system*.
> +	 * is not exhausted.
> +	 * The value cannot exceed @ref rte_event_dev_info.max_num_events
> +	 * returned by rte_event_dev_info_get().
> +	 *
> +	 * This value should be set to -1 for *open systems*, that is,
> +	 * those systems returning -1 in @ref rte_event_dev_info.max_num_events.
> +	 *
> +	 * @see rte_event_port_conf.new_event_threshold
>  	 */
>  	uint8_t nb_event_queues;
>  	/**< Number of event queues to configure on this device.
> -	 * This value cannot exceed the *max_event_queues* which previously
> -	 * provided in rte_event_dev_info_get()
> +	 * This value *includes* any single-link queue-port pairs to be used.
> +	 * This value cannot exceed @ref rte_event_dev_info.max_event_queues +
> +	 * @ref rte_event_dev_info.max_single_link_event_port_queue_pairs
> +	 * returned by rte_event_dev_info_get().
> +	 * The number of non-single-link queues i.e. this value less
> +	 * *nb_single_link_event_port_queues* in this struct, cannot exceed
> +	 * @ref rte_event_dev_info.max_event_queues
>  	 */
>  	uint8_t nb_event_ports;
>  	/**< Number of event ports to configure on this device.
> -	 * This value cannot exceed the *max_event_ports* which previously
> -	 * provided in rte_event_dev_info_get()
> +	 * This value *includes* any single-link queue-port pairs to be used.
> +	 * This value cannot exceed @ref rte_event_dev_info.max_event_ports +
> +	 * @ref rte_event_dev_info.max_single_link_event_port_queue_pairs
> +	 * returned by rte_event_dev_info_get().
> +	 * The number of non-single-link ports i.e. this value less
> +	 * *nb_single_link_event_port_queues* in this struct, cannot exceed
> +	 * @ref rte_event_dev_info.max_event_ports
>  	 */
>  	uint32_t nb_event_queue_flows;
> -	/**< Number of flows for any event queue on this device.
> -	 * This value cannot exceed the *max_event_queue_flows* which previously
> -	 * provided in rte_event_dev_info_get()
> +	/**< Max number of flows needed for a single event queue on this device.
> +	 * This value cannot exceed @ref rte_event_dev_info.max_event_queue_flows
> +	 * returned by rte_event_dev_info_get()
>  	 */
>  	uint32_t nb_event_port_dequeue_depth;
> -	/**< Maximum number of events can be dequeued at a time from an
> -	 * event port by this device.
> -	 * This value cannot exceed the *max_event_port_dequeue_depth*
> -	 * which previously provided in rte_event_dev_info_get().
> +	/**< Max number of events that can be dequeued at a time from an event port on this device.
> +	 * This value cannot exceed @ref rte_event_dev_info.max_event_port_dequeue_depth
> +	 * returned by rte_event_dev_info_get().
>  	 * Ignored when device is not RTE_EVENT_DEV_CAP_BURST_MODE capable.
> -	 * @see rte_event_port_setup()
> +	 * @see rte_event_port_setup() rte_event_dequeue_burst()
>  	 */
>  	uint32_t nb_event_port_enqueue_depth;
> -	/**< Maximum number of events can be enqueued at a time from an
> -	 * event port by this device.
> -	 * This value cannot exceed the *max_event_port_enqueue_depth*
> -	 * which previously provided in rte_event_dev_info_get().
> +	/**< Maximum number of events can be enqueued at a time to an event port on this device.
> +	 * This value cannot exceed @ref rte_event_dev_info.max_event_port_enqueue_depth
> +	 * returned by rte_event_dev_info_get().
>  	 * Ignored when device is not RTE_EVENT_DEV_CAP_BURST_MODE capable.
> -	 * @see rte_event_port_setup()
> +	 * @see rte_event_port_setup() rte_event_enqueue_burst()
>  	 */
>  	uint32_t event_dev_cfg;
>  	/**< Event device config flags(RTE_EVENT_DEV_CFG_)*/
> @@ -737,7 +750,7 @@ struct rte_event_dev_config {
>  	 * queues; this value cannot exceed *nb_event_ports* or
>  	 * *nb_event_queues*. If the device has ports and queues that are
>  	 * optimized for single-link usage, this field is a hint for how many
> -	 * to allocate; otherwise, regular event ports and queues can be used.
> +	 * to allocate; otherwise, regular event ports and queues will be used.
>  	 */
>  };
> 
> --
> 2.40.1
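The corrected counting rule above — nb_event_queues *includes* single-link pairs, may exceed max_event_queues by up to max_single_link_event_port_queue_pairs, but the non-single-link portion must still fit within max_event_queues — can be captured as a small validation sketch (hypothetical, not the DPDK implementation):

```c
#include <assert.h>
#include <stdbool.h>

/* Sketch of the documented queue-count rule. The same check applies
 * symmetrically to ports (nb_event_ports vs. max_event_ports).
 */
static bool
queues_config_ok(unsigned int nb_queues, unsigned int nb_single_link,
		 unsigned int max_queues, unsigned int max_single_link)
{
	if (nb_single_link > nb_queues)
		return false;
	/* total, including single-link pairs, within the combined limit */
	if (nb_queues > max_queues + max_single_link)
		return false;
	/* queues beyond the single-link pairs must fit in max_queues */
	return (nb_queues - nb_single_link) <= max_queues;
}
```

For example, with max_event_queues = 8 and 4 single-link pairs, a configuration of 10 queues of which 4 are single-link is valid (6 regular queues fit in 8), while 10 queues with only 1 single-link pair is not (9 regular queues exceed 8).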


^ permalink raw reply	[flat|nested] 123+ messages in thread

* RE: [EXT] [PATCH v4 07/12] eventdev: improve doxygen comments on config fns
  2024-02-21 10:32     ` [PATCH v4 07/12] eventdev: improve doxygen comments on config fns Bruce Richardson
@ 2024-02-26  6:43       ` Pavan Nikhilesh Bhagavatula
  2024-02-26  6:44         ` Pavan Nikhilesh Bhagavatula
  0 siblings, 1 reply; 123+ messages in thread
From: Pavan Nikhilesh Bhagavatula @ 2024-02-26  6:43 UTC (permalink / raw)
  To: Bruce Richardson, dev, Jerin Jacob, mattias.ronnblom

> Improve the documentation text for the configuration functions and
> structures for configuring an eventdev, as well as ports and queues.
> Clarify text where possible, and ensure references come through as links
> in the html output.
> 
> Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>

Pavan Nikhilesh <pbhagavatula@marvell.com>

> 
> ---
> V3: Update following review, mainly:
>  - change ranges starting with 0, to just say "less than"
>  - put in "." at end of sentences & bullet points
> ---
>  lib/eventdev/rte_eventdev.h | 221 +++++++++++++++++++++++-------------
>  1 file changed, 144 insertions(+), 77 deletions(-)
> 


^ permalink raw reply	[flat|nested] 123+ messages in thread

* RE: [EXT] [PATCH v4 08/12] eventdev: improve doxygen comments for control APIs
  2024-02-21 10:32     ` [PATCH v4 08/12] eventdev: improve doxygen comments for control APIs Bruce Richardson
@ 2024-02-26  6:44       ` Pavan Nikhilesh Bhagavatula
  0 siblings, 0 replies; 123+ messages in thread
From: Pavan Nikhilesh Bhagavatula @ 2024-02-26  6:44 UTC (permalink / raw)
  To: Bruce Richardson, dev, Jerin Jacob, mattias.ronnblom

> The doxygen comments for the port attributes, start and stop (and
> related functions) are improved.
> 
> Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>

Acked-by: Pavan Nikhilesh <pbhagavatula@marvell.com>

> 
> ---
> V3: add missing "." on end of sentences/lines.
> ---
>  lib/eventdev/rte_eventdev.h | 47 +++++++++++++++++++++++--------------
>  1 file changed, 29 insertions(+), 18 deletions(-)
> 
> diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
> index e38354cedd..72814719b2 100644
> --- a/lib/eventdev/rte_eventdev.h
> +++ b/lib/eventdev/rte_eventdev.h
> @@ -1201,19 +1201,21 @@ rte_event_port_quiesce(uint8_t dev_id, uint8_t port_id,
>  		       rte_eventdev_port_flush_t release_cb, void *args);
> 
>  /**
> - * The queue depth of the port on the enqueue side
> + * Port attribute id for the maximum size of a burst enqueue operation supported on a port.
>   */
>  #define RTE_EVENT_PORT_ATTR_ENQ_DEPTH 0
>  /**
> - * The queue depth of the port on the dequeue side
> + * Port attribute id for the maximum size of a dequeue burst which can be returned from a port.
>   */
>  #define RTE_EVENT_PORT_ATTR_DEQ_DEPTH 1
>  /**
> - * The new event threshold of the port
> + * Port attribute id for the new event threshold of the port.
> + * Once the number of events in the system exceeds this threshold, the enqueue of NEW-type
> + * events will fail.
>   */
>  #define RTE_EVENT_PORT_ATTR_NEW_EVENT_THRESHOLD 2
>  /**
> - * The implicit release disable attribute of the port
> + * Port attribute id for the implicit release disable attribute of the port.
>   */
>  #define RTE_EVENT_PORT_ATTR_IMPLICIT_RELEASE_DISABLE 3
> 
> @@ -1221,17 +1223,18 @@ rte_event_port_quiesce(uint8_t dev_id, uint8_t port_id,
>   * Get an attribute from a port.
>   *
>   * @param dev_id
> - *   Eventdev id
> + *   The identifier of the device.
>   * @param port_id
> - *   Eventdev port id
> + *   The index of the event port to query. The value must be less than
> + *   @ref rte_event_dev_config.nb_event_ports previously supplied to rte_event_dev_configure().
>   * @param attr_id
> - *   The attribute ID to retrieve
> + *   The attribute ID to retrieve (RTE_EVENT_PORT_ATTR_*)
>   * @param[out] attr_value
>   *   A pointer that will be filled in with the attribute value if successful
>   *
>   * @return
> - *   - 0: Successfully returned value
> - *   - (-EINVAL) Invalid device, port or attr_id, or attr_value was NULL
> + *   - 0: Successfully returned value.
> + *   - (-EINVAL) Invalid device, port or attr_id, or attr_value was NULL.
>   */
>  int
>  rte_event_port_attr_get(uint8_t dev_id, uint8_t port_id, uint32_t attr_id,
> @@ -1240,17 +1243,19 @@ rte_event_port_attr_get(uint8_t dev_id, uint8_t port_id, uint32_t attr_id,
>  /**
>   * Start an event device.
>   *
> - * The device start step is the last one and consists of setting the event
> - * queues to start accepting the events and schedules to event ports.
> + * The device start step is the last one in device setup, and enables the event
> + * ports and queues to start accepting events and scheduling them to event ports.
>   *
>   * On success, all basic functions exported by the API (event enqueue,
>   * event dequeue and so on) can be invoked.
>   *
>   * @param dev_id
> - *   Event device identifier
> + *   Event device identifier.
>   * @return
>   *   - 0: Success, device started.
> - *   - -ESTALE : Not all ports of the device are configured
> + *   - -EINVAL:  Invalid device id provided.
> + *   - -ENOTSUP: Device does not support this operation.
> + *   - -ESTALE : Not all ports of the device are configured.
>   *   - -ENOLINK: Not all queues are linked, which could lead to deadlock.
>   */
>  int
> @@ -1292,18 +1297,22 @@ typedef void (*rte_eventdev_stop_flush_t)(uint8_t dev_id,
>   * callback function must be registered in every process that can call
>   * rte_event_dev_stop().
>   *
> + * Only one callback function may be registered. Each new call replaces
> + * the existing registered callback function with the new function passed in.
> + *
>   * To unregister a callback, call this function with a NULL callback pointer.
>   *
>   * @param dev_id
>   *   The identifier of the device.
>   * @param callback
> - *   Callback function invoked once per flushed event.
> + *   Callback function to be invoked once per flushed event.
> + *   Pass NULL to unset any previously-registered callback function.
>   * @param userdata
>   *   Argument supplied to callback.
>   *
>   * @return
>   *  - 0 on success.
> - *  - -EINVAL if *dev_id* is invalid
> + *  - -EINVAL if *dev_id* is invalid.
>   *
>   * @see rte_event_dev_stop()
>   */
> @@ -1314,12 +1323,14 @@ int rte_event_dev_stop_flush_callback_register(uint8_t dev_id,
>   * Close an event device. The device cannot be restarted!
>   *
>   * @param dev_id
> - *   Event device identifier
> + *   Event device identifier.
>   *
>   * @return
>   *  - 0 on successfully closing device
> - *  - <0 on failure to close device
> - *  - (-EAGAIN) if device is busy
> + *  - <0 on failure to close device.
> + *    - -EINVAL - invalid device id.
> + *    - -ENOTSUP - operation not supported for this device.
> + *    - -EAGAIN - device is busy.
>   */
>  int
>  rte_event_dev_close(uint8_t dev_id);
> --
> 2.40.1
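The single-slot registration semantics documented above — each call to rte_event_dev_stop_flush_callback_register() replaces the previously registered callback, and a NULL callback unregisters it — can be modelled with a toy sketch (hypothetical, not the DPDK implementation):

```c
#include <assert.h>
#include <stddef.h>

/* Toy model of single-slot callback registration: only one callback
 * may be registered at a time; each new call replaces the old one,
 * and passing NULL unsets it. */
typedef void (*flush_cb_t)(int dev_id, void *event, void *userdata);

static flush_cb_t registered_cb;
static void *registered_userdata;

static int
register_stop_flush_cb(flush_cb_t cb, void *userdata)
{
	registered_cb = cb; /* NULL here unregisters the callback */
	registered_userdata = userdata;
	return 0;
}

/* Two example callbacks standing in for application flush handlers. */
static void cb_a(int dev_id, void *event, void *ud)
{ (void)dev_id; (void)event; (void)ud; }
static void cb_b(int dev_id, void *event, void *ud)
{ (void)dev_id; (void)event; (void)ud; }
```

Because there is only one slot, multi-process applications must register the callback in every process that may call rte_event_dev_stop(), as the patch text notes.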


^ permalink raw reply	[flat|nested] 123+ messages in thread

* RE: [EXT] [PATCH v4 07/12] eventdev: improve doxygen comments on config fns
  2024-02-26  6:43       ` [EXT] " Pavan Nikhilesh Bhagavatula
@ 2024-02-26  6:44         ` Pavan Nikhilesh Bhagavatula
  0 siblings, 0 replies; 123+ messages in thread
From: Pavan Nikhilesh Bhagavatula @ 2024-02-26  6:44 UTC (permalink / raw)
  To: Bruce Richardson, dev, Jerin Jacob, mattias.ronnblom



> -----Original Message-----
> From: Pavan Nikhilesh Bhagavatula
> Sent: Monday, February 26, 2024 12:13 PM
> To: Bruce Richardson <bruce.richardson@intel.com>; dev@dpdk.org; Jerin
> Jacob <jerinj@marvell.com>; mattias.ronnblom@ericsson.com
> Subject: RE: [EXT] [PATCH v4 07/12] eventdev: improve doxygen comments
> on config fns
> 
> > Improve the documentation text for the configuration functions and
> > structures for configuring an eventdev, as well as ports and queues.
> > Clarify text where possible, and ensure references come through as links
> > in the html output.
> >
> > Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
> 
> Pavan Nikhilesh <pbhagavatula@marvell.com>
Acked-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
> 
> >
> > ---
> > V3: Update following review, mainly:
> >  - change ranges starting with 0, to just say "less than"
> >  - put in "." at end of sentences & bullet points
> > ---
> >  lib/eventdev/rte_eventdev.h | 221 +++++++++++++++++++++++------------
> -
> >  1 file changed, 144 insertions(+), 77 deletions(-)
> >


^ permalink raw reply	[flat|nested] 123+ messages in thread

* RE: [EXT] [PATCH v4 09/12] eventdev: improve comments on scheduling types
  2024-02-21 10:32     ` [PATCH v4 09/12] eventdev: improve comments on scheduling types Bruce Richardson
@ 2024-02-26  6:49       ` Pavan Nikhilesh Bhagavatula
  0 siblings, 0 replies; 123+ messages in thread
From: Pavan Nikhilesh Bhagavatula @ 2024-02-26  6:49 UTC (permalink / raw)
  To: Bruce Richardson, dev, Jerin Jacob, mattias.ronnblom

> The description of ordered and atomic scheduling given in the eventdev
> doxygen documentation was not always clear. Try and simplify this so
> that it is clearer for the end-user of the application.
> 
> Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
>

Acked-by: Pavan Nikhilesh <pbhagavatula@marvell.com>

 
> ---
> V4: reworked following review by Jerin
> V3: extensive rework following feedback. Please re-review!
> ---
>  lib/eventdev/rte_eventdev.h | 77 +++++++++++++++++++++++--------------
>  1 file changed, 48 insertions(+), 29 deletions(-)
> 
> diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
> index 72814719b2..6d881bd665 100644
> --- a/lib/eventdev/rte_eventdev.h
> +++ b/lib/eventdev/rte_eventdev.h
> @@ -1397,25 +1397,36 @@ struct rte_event_vector {
>  /**< Ordered scheduling
>   *
>  * Events from an ordered flow of an event queue can be scheduled to multiple
> - * ports for concurrent processing while maintaining the original event order.
> - * This scheme enables the user to achieve high single flow throughput by
> - * avoiding SW synchronization for ordering between ports which bound to cores.
> - *
> - * The source flow ordering from an event queue is maintained when events are
> - * enqueued to their destination queue within the same ordered flow context.
> - * An event port holds the context until application call
> - * rte_event_dequeue_burst() from the same port, which implicitly releases
> - * the context.
> - * User may allow the scheduler to release the context earlier than that
> - * by invoking rte_event_enqueue_burst() with RTE_EVENT_OP_RELEASE operation.
> - *
> - * Events from the source queue appear in their original order when dequeued
> - * from a destination queue.
> - * Event ordering is based on the received event(s), but also other
> - * (newly allocated or stored) events are ordered when enqueued within the same
> - * ordered context. Events not enqueued (e.g. released or stored) within the
> - * context are  considered missing from reordering and are skipped at this time
> - * (but can be ordered again within another context).
> + * ports for concurrent processing while maintaining the original event order,
> + * i.e. the order in which they were first enqueued to that queue.
> + * This scheme allows events pertaining to the same, potentially large, flow to
> + * be processed in parallel on multiple cores without incurring any
> + * application-level order restoration logic overhead.
> + *
> + * After events are dequeued from a set of ports, as those events are re-enqueued
> + * to another queue (with the op field set to @ref RTE_EVENT_OP_FORWARD), the event
> + * device restores the original event order - including events returned from all
> + * ports in the set - before the events are placed on the destination queue,
> + * for subsequent scheduling to ports.
> + *
> + * Any events not forwarded i.e. dropped explicitly via RELEASE or implicitly
> + * released by the next dequeue operation on a port, are skipped by the reordering
> + * stage and do not affect the reordering of other returned events.
> + *
> + * Any NEW events sent on a port are not ordered with respect to FORWARD events sent
> + * on the same port, since they have no original event order. They also are not
> + * ordered with respect to NEW events enqueued on other ports.
> + * However, NEW events to the same destination queue from the same port are guaranteed
> + * to be enqueued in the order they were submitted via rte_event_enqueue_burst().
> + *
> + * NOTE:
> + *   In restoring event order of forwarded events, the eventdev API guarantees that
> + *   all events from the same flow (i.e. same @ref rte_event.flow_id,
> + *   @ref rte_event.priority and @ref rte_event.queue_id) will be put in the original
> + *   order before being forwarded to the destination queue.
> + *   Some eventdevs may implement stricter ordering to achieve this aim,
> + *   for example, restoring the order across *all* flows dequeued from the same ORDERED
> + * queue.
>   *
>   * @see rte_event_queue_setup(), rte_event_dequeue_burst(), RTE_EVENT_OP_RELEASE
>   */
> @@ -1423,18 +1434,26 @@ struct rte_event_vector {
>  #define RTE_SCHED_TYPE_ATOMIC           1
>  /**< Atomic scheduling
>   *
> - * Events from an atomic flow of an event queue can be scheduled only to a
> + * Events from an atomic flow, identified by a combination of @ref rte_event.flow_id,
> + * @ref rte_event.queue_id and @ref rte_event.priority, can be scheduled only to a
>   * single port at a time. The port is guaranteed to have exclusive (atomic)
>   * access to the associated flow context, which enables the user to avoid SW
> - * synchronization. Atomic flows also help to maintain event ordering
> - * since only one port at a time can process events from a flow of an
> - * event queue.
> - *
> - * The atomic queue synchronization context is dedicated to the port until
> - * application call rte_event_dequeue_burst() from the same port,
> - * which implicitly releases the context. User may allow the scheduler to
> - * release the context earlier than that by invoking rte_event_enqueue_burst()
> - * with RTE_EVENT_OP_RELEASE operation.
> + * synchronization. Atomic flows also maintain event ordering
> + * since only one port at a time can process events from each flow of an
> + * event queue, and events within a flow are not reordered within the scheduler.
> + *
> + * An atomic flow is locked to a port when events from that flow are first
> + * scheduled to that port. That lock remains in place until the
> + * application calls rte_event_dequeue_burst() from the same port,
> + * which implicitly releases the lock (if @ref RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL flag is not set).
> + * User may allow the scheduler to release the lock earlier than that by invoking
> + * rte_event_enqueue_burst() with RTE_EVENT_OP_RELEASE operation for each event from that flow.
> + *
> + * NOTE: Where multiple events from the same queue and atomic flow are scheduled to a port,
> + * the lock for that flow is only released once the last event from the flow is released,
> + * or forwarded to another queue. So long as there is at least one event from an atomic
> + * flow scheduled to a port/core (including any events in the port's dequeue queue, not yet read
> + * by the application), that port will hold the synchronization lock for that flow.
>   *
>   * @see rte_event_queue_setup(), rte_event_dequeue_burst(), RTE_EVENT_OP_RELEASE
>   */
> --
> 2.40.1
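The atomic-flow locking rule clarified above — the port holds the flow's lock while any dequeued-but-unreleased events from that flow remain outstanding, and the lock drops only when the last such event is released or forwarded — can be modelled with a toy reference count (hypothetical illustration, not how any particular eventdev PMD is implemented):

```c
#include <assert.h>

/* Toy model of the documented atomic-flow lock semantics: a per-flow
 * count of in-flight events pinned to a single owning port. */
struct flow_lock {
	int owner_port;  /* -1 when the flow is unlocked */
	int outstanding; /* events dequeued but not yet released/forwarded */
};

static void
flow_on_dequeue(struct flow_lock *fl, int port)
{
	if (fl->outstanding == 0)
		fl->owner_port = port; /* first event locks the flow to the port */
	fl->outstanding++;
}

static void
flow_on_release(struct flow_lock *fl)
{
	/* models RELEASE/FORWARD of one event; last one drops the lock */
	if (--fl->outstanding == 0)
		fl->owner_port = -1;
}
```

This illustrates why releasing one of several events from the same flow does not unlock it: the lock only drops once the outstanding count reaches zero.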


^ permalink raw reply	[flat|nested] 123+ messages in thread

* RE: [EXT] [PATCH v4 10/12] eventdev: clarify docs on event object fields and op types
  2024-02-21 10:32     ` [PATCH v4 10/12] eventdev: clarify docs on event object fields and op types Bruce Richardson
@ 2024-02-26  6:52       ` Pavan Nikhilesh Bhagavatula
  0 siblings, 0 replies; 123+ messages in thread
From: Pavan Nikhilesh Bhagavatula @ 2024-02-26  6:52 UTC (permalink / raw)
  To: Bruce Richardson, dev, Jerin Jacob, mattias.ronnblom

> Clarify the meaning of the NEW, FORWARD and RELEASE event types.
> For the fields in "rte_event" struct, enhance the comments on each to
> clarify the field's use, and whether it is preserved between enqueue and
> dequeue, and its role, if any, in scheduling.
> 
> Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>

Acked-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
> ---
> V4: reworked following review by Jerin
> V3: updates following review
> ---
>  lib/eventdev/rte_eventdev.h | 161 +++++++++++++++++++++++++-----------
>  1 file changed, 111 insertions(+), 50 deletions(-)
> 
> diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
> index 6d881bd665..7e7e275620 100644
> --- a/lib/eventdev/rte_eventdev.h
> +++ b/lib/eventdev/rte_eventdev.h
> @@ -1515,47 +1515,55 @@ struct rte_event_vector {
> 
>  /* Event enqueue operations */
>  #define RTE_EVENT_OP_NEW                0
> -/**< The event producers use this operation to inject a new event to the
> - * event device.
> +/**< The @ref rte_event.op field must be set to this operation type to inject a new event,
> + * i.e. one not previously dequeued, into the event device, to be scheduled
> + * for processing.
>   */
>  #define RTE_EVENT_OP_FORWARD            1
> -/**< The CPU use this operation to forward the event to different event queue or
> - * change to new application specific flow or schedule type to enable
> - * pipelining.
> +/**< The application must set the @ref rte_event.op field to this operation type to return a
> + * previously dequeued event to the event device to be scheduled for further processing.
>   *
> - * This operation must only be enqueued to the same port that the
> + * This event *must* be enqueued to the same port that the
>   * event to be forwarded was dequeued from.
> + *
> + * The event's fields, including (but not limited to) flow_id, scheduling type,
> + * destination queue, and event payload e.g. mbuf pointer, may all be updated as
> + * desired by the application, but the @ref rte_event.impl_opaque field must
> + * be kept to the same value as was present when the event was dequeued.
>   */
>  #define RTE_EVENT_OP_RELEASE            2
>  /**< Release the flow context associated with the schedule type.
>   *
> - * If current flow's scheduler type method is *RTE_SCHED_TYPE_ATOMIC*
> - * then this function hints the scheduler that the user has completed critical
> - * section processing in the current atomic context.
> - * The scheduler is now allowed to schedule events from the same flow from
> - * an event queue to another port. However, the context may be still held
> - * until the next rte_event_dequeue_burst() call, this call allows but does not
> - * force the scheduler to release the context early.
> - *
> - * Early atomic context release may increase parallelism and thus system
> + * If current flow's scheduler type method is @ref RTE_SCHED_TYPE_ATOMIC
> + * then this operation type hints the scheduler that the user has completed critical
> + * section processing for this event in the current atomic context, and that the
> + * scheduler may unlock any atomic locks held for this event.
> + * If this is the last event from an atomic flow, i.e. all flow locks are released
> + * (see @ref RTE_SCHED_TYPE_ATOMIC for details), the scheduler is now allowed to
> + * schedule events from that flow to another port.
> + * However, the atomic locks may still be held until the next rte_event_dequeue_burst()
> + * call; enqueuing an event with op type @ref RTE_EVENT_OP_RELEASE is a hint only,
> + * allowing the scheduler to release the atomic locks early, but not requiring it to do so.
> + *
> + * Early atomic lock release may increase parallelism and thus system
>   * performance, but the user needs to design carefully the split into critical
>   * vs non-critical sections.
>   *
> - * If current flow's scheduler type method is *RTE_SCHED_TYPE_ORDERED*
> - * then this function hints the scheduler that the user has done all that need
> - * to maintain event order in the current ordered context.
> - * The scheduler is allowed to release the ordered context of this port and
> - * avoid reordering any following enqueues.
> - *
> - * Early ordered context release may increase parallelism and thus system
> - * performance.
> + * If current flow's scheduler type method is @ref RTE_SCHED_TYPE_ORDERED
> + * then this operation type informs the scheduler that the current event has
> + * completed processing and will not be returned to the scheduler, i.e.
> + * it has been dropped, and so the reordering context for that event
> + * should be considered filled.
>   *
> - * If current flow's scheduler type method is *RTE_SCHED_TYPE_PARALLEL*
> - * or no scheduling context is held then this function may be an NOOP,
> - * depending on the implementation.
> + * Events with this operation type must only be enqueued to the same port that the
> + * event to be released was dequeued from. The @ref rte_event.impl_opaque
> + * field in the release event must have the same value as that in the original dequeued event.
>   *
> - * This operation must only be enqueued to the same port that the
> - * event to be released was dequeued from.
> + * If a dequeued event is re-enqueued with operation type of @ref RTE_EVENT_OP_RELEASE,
> + * then any subsequent enqueue of that event - or a copy of it - must be done as an event of type
> + * @ref RTE_EVENT_OP_NEW, not @ref RTE_EVENT_OP_FORWARD. This is because any context for
> + * the originally dequeued event, i.e. atomic locks, or reorder buffer entries, will have
> + * been removed or invalidated by the release operation.
>   */
> 
>  /**
> @@ -1569,56 +1577,109 @@ struct rte_event {
>  		/** Event attributes for dequeue or enqueue operation */
>  		struct {
>  			uint32_t flow_id:20;
> -			/**< Targeted flow identifier for the enqueue and
> -			 * dequeue operation.
> -			 * The value must be in the range of
> -			 * [0, nb_event_queue_flows - 1] which
> -			 * previously supplied to rte_event_dev_configure().
> +			/**< Target flow identifier for the enqueue and dequeue operation.
> +			 *
> +			 * For @ref RTE_SCHED_TYPE_ATOMIC, this field is used to identify a
> +			 * flow for atomicity within a queue & priority level, such that events
> +			 * from each individual flow will only be scheduled to one port at a time.
> +			 *
> +			 * This field is preserved between enqueue and dequeue when
> +			 * a device reports the @ref RTE_EVENT_DEV_CAP_CARRY_FLOW_ID
> +			 * capability. Otherwise the value is implementation dependent
> +			 * on dequeue.
>  			 */
>  			uint32_t sub_event_type:8;
>  			/**< Sub-event types based on the event source.
> +			 *
> +			 * This field is preserved between enqueue and dequeue.
> +			 *
>  			 * @see RTE_EVENT_TYPE_CPU
>  			 */
>  			uint32_t event_type:4;
> -			/**< Event type to classify the event source.
> -			 * @see RTE_EVENT_TYPE_ETHDEV, (RTE_EVENT_TYPE_*)
> +			/**< Event type to classify the event source. (RTE_EVENT_TYPE_*)
> +			 *
> +			 * This field is preserved between enqueue and dequeue
>  			 */
>  			uint8_t op:2;
> -			/**< The type of event enqueue operation - new/forward/
> -			 * etc.This field is not preserved across an instance
> -			 * and is undefined on dequeue.
> -			 * @see RTE_EVENT_OP_NEW, (RTE_EVENT_OP_*)
> +			/**< The type of event enqueue operation - new/forward/ etc.
> +			 *
> +			 * This field is *not* preserved across an instance
> +			 * and is implementation dependent on dequeue.
> +			 *
> +			 * @see RTE_EVENT_OP_NEW
> +			 * @see RTE_EVENT_OP_FORWARD
> +			 * @see RTE_EVENT_OP_RELEASE
>  			 */
>  			uint8_t rsvd:4;
> -			/**< Reserved for future use */
> +			/**< Reserved for future use.
> +			 *
> +			 * Should be set to zero when initializing event structures.
> +			 *
> +			 * When forwarding or releasing existing events dequeued from the scheduler,
> +			 * this field can be ignored.
> +			 */
>  			uint8_t sched_type:2;
>  			/**< Scheduler synchronization type (RTE_SCHED_TYPE_*)
>  			 * associated with flow id on a given event queue
>  			 * for the enqueue and dequeue operation.
> +			 *
> +			 * This field is used to determine the scheduling type
> +			 * for events sent to queues where @ref RTE_EVENT_QUEUE_CFG_ALL_TYPES
> +			 * is configured.
> +			 * For queues where only a single scheduling type is available,
> +			 * this field must be set to match the configured scheduling type.
> +			 *
> +			 * This field is preserved between enqueue and dequeue.
> +			 *
> +			 * @see RTE_SCHED_TYPE_ORDERED
> +			 * @see RTE_SCHED_TYPE_ATOMIC
> +			 * @see RTE_SCHED_TYPE_PARALLEL
>  			 */
>  			uint8_t queue_id;
>  			/**< Targeted event queue identifier for the enqueue or
>  			 * dequeue operation.
> -			 * The value must be in the range of
> -			 * [0, nb_event_queues - 1] which previously supplied to
> -			 * rte_event_dev_configure().
> +			 * The value must be less than @ref rte_event_dev_config.nb_event_queues
> +			 * which was previously supplied to rte_event_dev_configure().
> +			 *
> +			 * This field is preserved between enqueue and dequeue.
>  			 */
>  			uint8_t priority;
>  			/**< Event priority relative to other events in the
>  			 * event queue. The requested priority should in the
> -			 * range of  [RTE_EVENT_DEV_PRIORITY_HIGHEST,
> -			 * RTE_EVENT_DEV_PRIORITY_LOWEST].
> +			 * range of  [@ref RTE_EVENT_DEV_PRIORITY_HIGHEST,
> +			 * @ref RTE_EVENT_DEV_PRIORITY_LOWEST].
> +			 *
>  			 * The implementation shall normalize the requested
>  			 * priority to supported priority value.
> -			 * Valid when the device has
> -			 * RTE_EVENT_DEV_CAP_EVENT_QOS capability.
> +			 * [For devices where the supported priority range is a power-of-2, the
> +			 * normalization will be done via bit-shifting, so only the highest
> +			 * log2(num_priorities) bits will be used by the event device]
> +			 *
> +			 * Valid when the device has @ref RTE_EVENT_DEV_CAP_EVENT_QOS capability
> +			 * and this field is preserved between enqueue and dequeue,
> +			 * though with possible loss of precision due to normalization and
> +			 * subsequent de-normalization. (For example, if a device only supports 8
> +			 * priority levels, only the high 3 bits of this field will be
> +			 * used by that device, and hence only the value of those 3 bits is
> +			 * guaranteed to be preserved between enqueue and dequeue.)
> +			 *
> +			 * Ignored when the device does not support @ref RTE_EVENT_DEV_CAP_EVENT_QOS
> +			 * capability, and it is implementation dependent if this field is preserved
> +			 * between enqueue and dequeue.
>  			 */
>  			uint8_t impl_opaque;
> -			/**< Implementation specific opaque value.
> -			 * An implementation may use this field to hold
> +			/**< Opaque field for event device use.
> +			 *
> +			 * An event driver implementation may use this field to hold an
>  			 * implementation specific value to share between
>  			 * dequeue and enqueue operation.
> -			 * The application should not modify this field.
> +			 *
> +