Subject: Re: [PATCH v2 06/11] eventdev: improve doxygen comments on configure struct
From: Mattias Rönnblom
To: Bruce Richardson, dev@dpdk.org
Cc: jerinj@marvell.com, mattias.ronnblom@ericsson.com, abdullah.sevincer@intel.com,
 sachin.saxena@oss.nxp.com, hemant.agrawal@nxp.com, pbhagavatula@marvell.com,
 pravin.pathak@intel.com
Date: Tue, 23 Jan 2024 10:46:00 +0100
In-Reply-To: <20240119174346.108905-7-bruce.richardson@intel.com>
References: <20240118134557.73172-1-bruce.richardson@intel.com>
 <20240119174346.108905-1-bruce.richardson@intel.com>
 <20240119174346.108905-7-bruce.richardson@intel.com>

On 2024-01-19 18:43, Bruce Richardson wrote:
> General rewording and cleanup on the rte_event_dev_config structure.
> Improved the wording of some sentences and created linked
> cross-references out of the existing references to the dev_info
> structure.
>
> Signed-off-by: Bruce Richardson
> ---
>  lib/eventdev/rte_eventdev.h | 47 +++++++++++++++++++------------------
>  1 file changed, 24 insertions(+), 23 deletions(-)
>
> diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
> index c57c93a22e..4139ccb982 100644
> --- a/lib/eventdev/rte_eventdev.h
> +++ b/lib/eventdev/rte_eventdev.h
> @@ -599,9 +599,9 @@ rte_event_dev_attr_get(uint8_t dev_id, uint32_t attr_id,
>  struct rte_event_dev_config {
>  	uint32_t dequeue_timeout_ns;
>  	/**< rte_event_dequeue_burst() timeout on this device.
> -	 * This value should be in the range of *min_dequeue_timeout_ns* and
> -	 * *max_dequeue_timeout_ns* which previously provided in
> -	 * rte_event_dev_info_get()
> +	 * This value should be in the range of @ref rte_event_dev_info.min_dequeue_timeout_ns and
> +	 * @ref rte_event_dev_info.max_dequeue_timeout_ns returned by
> +	 * @ref rte_event_dev_info_get()
>  	 * The value 0 is allowed, in which case, default dequeue timeout used.
>  	 * @see RTE_EVENT_DEV_CFG_PER_DEQUEUE_TIMEOUT
>  	 */
> @@ -609,40 +609,41 @@ struct rte_event_dev_config {
>  	/**< In a *closed system* this field is the limit on maximum number of
>  	 * events that can be inflight in the eventdev at a given time. The
>  	 * limit is required to ensure that the finite space in a closed system
> -	 * is not overwhelmed. The value cannot exceed the *max_num_events*
> -	 * as provided by rte_event_dev_info_get().
> +	 * is not overwhelmed.

"overwhelmed" -> "exhausted"

> +	 * Once the limit has been reached, any enqueues of NEW events to the
> +	 * system will fail.

While this is true, it's also a bit misleading. Whether RTE_EVENT_OP_NEW
events are backpressured is controlled at the port level, by
new_event_threshold.

> +	 * The value cannot exceed @ref rte_event_dev_info.max_num_events
> +	 * returned by rte_event_dev_info_get().
>  	 * This value should be set to -1 for *open system*.
>  	 */
>  	uint8_t nb_event_queues;
>  	/**< Number of event queues to configure on this device.
> -	 * This value cannot exceed the *max_event_queues* which previously
> -	 * provided in rte_event_dev_info_get()
> +	 * This value cannot exceed @ref rte_event_dev_info.max_event_queues
> +	 * returned by rte_event_dev_info_get()
>  	 */
>  	uint8_t nb_event_ports;
>  	/**< Number of event ports to configure on this device.
> -	 * This value cannot exceed the *max_event_ports* which previously
> -	 * provided in rte_event_dev_info_get()
> +	 * This value cannot exceed @ref rte_event_dev_info.max_event_ports
> +	 * returned by rte_event_dev_info_get()
>  	 */
>  	uint32_t nb_event_queue_flows;
> -	/**< Number of flows for any event queue on this device.
> -	 * This value cannot exceed the *max_event_queue_flows* which previously
> -	 * provided in rte_event_dev_info_get()
> +	/**< Max number of flows needed for a single event queue on this device.
> +	 * This value cannot exceed @ref rte_event_dev_info.max_event_queue_flows
> +	 * returned by rte_event_dev_info_get()
>  	 */
>  	uint32_t nb_event_port_dequeue_depth;
> -	/**< Maximum number of events can be dequeued at a time from an
> -	 * event port by this device.
> -	 * This value cannot exceed the *max_event_port_dequeue_depth*
> -	 * which previously provided in rte_event_dev_info_get().
> +	/**< Max number of events that can be dequeued at a time from an event port on this device.
> +	 * This value cannot exceed @ref rte_event_dev_info.max_event_port_dequeue_depth
> +	 * returned by rte_event_dev_info_get().
>  	 * Ignored when device is not RTE_EVENT_DEV_CAP_BURST_MODE capable.
> -	 * @see rte_event_port_setup()
> +	 * @see rte_event_port_setup() rte_event_dequeue_burst()
>  	 */
>  	uint32_t nb_event_port_enqueue_depth;
> -	/**< Maximum number of events can be enqueued at a time from an
> -	 * event port by this device.
> -	 * This value cannot exceed the *max_event_port_enqueue_depth*
> -	 * which previously provided in rte_event_dev_info_get().
> +	/**< Maximum number of events can be enqueued at a time to an event port on this device.
> +	 * This value cannot exceed @ref rte_event_dev_info.max_event_port_enqueue_depth
> +	 * returned by rte_event_dev_info_get().
>  	 * Ignored when device is not RTE_EVENT_DEV_CAP_BURST_MODE capable.
> -	 * @see rte_event_port_setup()
> +	 * @see rte_event_port_setup() rte_event_enqueue_burst()
>  	 */
>  	uint32_t event_dev_cfg;
>  	/**< Event device config flags(RTE_EVENT_DEV_CFG_)*/
> @@ -652,7 +653,7 @@ struct rte_event_dev_config {
>  	 * queues; this value cannot exceed *nb_event_ports* or
>  	 * *nb_event_queues*. If the device has ports and queues that are
>  	 * optimized for single-link usage, this field is a hint for how many
> -	 * to allocate; otherwise, regular event ports and queues can be used.
> +	 * to allocate; otherwise, regular event ports and queues will be used.
>  	 */
> };
>