From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <65d09512-f184-4ac4-a513-e5820754889e@lysator.liu.se>
Date: Tue, 23 Jan 2024 10:35:02 +0100
Subject: Re: [PATCH v2 04/11] eventdev: cleanup doxygen comments on info structure
To: Bruce Richardson, dev@dpdk.org
Cc: jerinj@marvell.com, mattias.ronnblom@ericsson.com,
 abdullah.sevincer@intel.com, sachin.saxena@oss.nxp.com,
 hemant.agrawal@nxp.com, pbhagavatula@marvell.com, pravin.pathak@intel.com
References: <20240118134557.73172-1-bruce.richardson@intel.com>
 <20240119174346.108905-1-bruce.richardson@intel.com>
 <20240119174346.108905-5-bruce.richardson@intel.com>
From: Mattias Rönnblom
In-Reply-To: <20240119174346.108905-5-bruce.richardson@intel.com>
Content-Type: text/plain; charset=UTF-8; format=flowed

On 2024-01-19 18:43, Bruce Richardson wrote:
> Some small rewording changes to the doxygen comments on struct
> rte_event_dev_info.
>
> Signed-off-by: Bruce Richardson
> ---
>  lib/eventdev/rte_eventdev.h | 46 ++++++++++++++++++++-----------------
>  1 file changed, 25 insertions(+), 21 deletions(-)
>
> diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
> index 57a2791946..872f241df2 100644
> --- a/lib/eventdev/rte_eventdev.h
> +++ b/lib/eventdev/rte_eventdev.h
> @@ -482,54 +482,58 @@ struct rte_event_dev_info {
>  const char *driver_name; /**< Event driver name */
>  struct rte_device *dev; /**< Device information */
>  uint32_t min_dequeue_timeout_ns;
> - /**< Minimum supported global dequeue timeout(ns) by this device */
> + /**< Minimum global dequeue timeout(ns) supported by this device */

Are we missing a bunch of "." here and in the other fields?
>  uint32_t max_dequeue_timeout_ns;
> - /**< Maximum supported global dequeue timeout(ns) by this device */
> + /**< Maximum global dequeue timeout(ns) supported by this device */
>  uint32_t dequeue_timeout_ns;
>  /**< Configured global dequeue timeout(ns) for this device */
>  uint8_t max_event_queues;
> - /**< Maximum event_queues supported by this device */
> + /**< Maximum event queues supported by this device */
>  uint32_t max_event_queue_flows;
> - /**< Maximum supported flows in an event queue by this device*/
> + /**< Maximum number of flows within an event queue supported by this device*/
>  uint8_t max_event_queue_priority_levels;
>  /**< Maximum number of event queue priority levels by this device.
> - * Valid when the device has RTE_EVENT_DEV_CAP_QUEUE_QOS capability
> + * Valid when the device has RTE_EVENT_DEV_CAP_QUEUE_QOS capability.
> + * The priority levels are evenly distributed between
> + * @ref RTE_EVENT_DEV_PRIORITY_HIGHEST and @ref RTE_EVENT_DEV_PRIORITY_LOWEST.

This is a change of the API, in the sense that it's defining something
previously left undefined?

If you need 6 different priority levels in an app, how do you go about
making sure you find the correct (distinct) Eventdev levels on any event
device supporting >= 6 levels?

#define NUM_MY_LEVELS 6

#define MY_LEVEL_TO_EVENTDEV_LEVEL(my_level) \
	(((my_level) * (RTE_EVENT_DEV_PRIORITY_HIGHEST - \
	  RTE_EVENT_DEV_PRIORITY_LOWEST)) / NUM_MY_LEVELS)

This way?

One would worry a bit about exactly what "evenly" means, in terms of
rounding errors. If you have an event device with 255 priority levels
out of (say) the 256 levels available in the API, which two levels are
the same priority?

> */
>  uint8_t max_event_priority_levels;
>  /**< Maximum number of event priority levels by this device.
>  * Valid when the device has RTE_EVENT_DEV_CAP_EVENT_QOS capability
> + * The priority levels are evenly distributed between
> + * @ref RTE_EVENT_DEV_PRIORITY_HIGHEST and @ref RTE_EVENT_DEV_PRIORITY_LOWEST.
> */
>  uint8_t max_event_ports;
>  /**< Maximum number of event ports supported by this device */
>  uint8_t max_event_port_dequeue_depth;
> - /**< Maximum number of events can be dequeued at a time from an
> - * event port by this device.
> - * A device that does not support bulk dequeue will set this as 1.
> + /**< Maximum number of events that can be dequeued at a time from an event port
> + * on this device.
> + * A device that does not support bulk dequeue will set this to 1.
> */
>  uint32_t max_event_port_enqueue_depth;
> - /**< Maximum number of events can be enqueued at a time from an
> - * event port by this device.
> - * A device that does not support bulk enqueue will set this as 1.
> + /**< Maximum number of events that can be enqueued at a time to an event port
> + * on this device.
> + * A device that does not support bulk enqueue will set this to 1.
> */
>  uint8_t max_event_port_links;
> - /**< Maximum number of queues that can be linked to a single event
> - * port by this device.
> + /**< Maximum number of queues that can be linked to a single event port on this device.
> */
>  int32_t max_num_events;
>  /**< A *closed system* event dev has a limit on the number of events it
> - * can manage at a time. An *open system* event dev does not have a
> - * limit and will specify this as -1.
> + * can manage at a time.
> + * Once the number of events tracked by an eventdev exceeds this number,
> + * any enqueues of NEW events will fail.
> + * An *open system* event dev does not have a limit and will specify this as -1.
> */
>  uint32_t event_dev_cap;
> - /**< Event device capabilities(RTE_EVENT_DEV_CAP_)*/
> + /**< Event device capabilities flags (RTE_EVENT_DEV_CAP_*) */
>  uint8_t max_single_link_event_port_queue_pairs;
> - /**< Maximum number of event ports and queues that are optimized for
> - * (and only capable of) single-link configurations supported by this
> - * device. These ports and queues are not accounted for in
> - * max_event_ports or max_event_queues.
> + /**< Maximum number of event ports and queues, supported by this device,
> + * that are optimized for (and only capable of) single-link configurations.
> + * These ports and queues are not accounted for in max_event_ports or max_event_queues.
> */
>  uint8_t max_profiles_per_port;
> - /**< Maximum number of event queue profiles per event port.
> + /**< Maximum number of event queue link profiles per event port.
> * A device that doesn't support multiple profiles will set this as 1.
> */
> };