* [Bug 1368] inconsistency in eventdev dev_info and config structs makes some valid configs impossible
@ 2024-01-18 11:18 bugzilla
2024-01-18 11:26 ` Bruce Richardson
2024-03-18 0:49 ` [DPDK/eventdev Bug " bugzilla
0 siblings, 2 replies; 4+ messages in thread
From: bugzilla @ 2024-01-18 11:18 UTC (permalink / raw)
To: dev
https://bugs.dpdk.org/show_bug.cgi?id=1368
Bug ID: 1368
Summary: inconsistency in eventdev dev_info and config structs
makes some valid configs impossible
Product: DPDK
Version: unspecified
Hardware: All
OS: All
Status: UNCONFIRMED
Severity: normal
Priority: Normal
Component: eventdev
Assignee: dev@dpdk.org
Reporter: bruce.richardson@intel.com
Target Milestone: ---
In the rte_event_dev_info struct[1], we have the max_event_queues[2] and
max_single_link_event_port_queue_pairs[3] members. The doxygen docs on the
latter state: "These ports and queues are not accounted for in
max_event_ports or max_event_queues."

This implies that a device which has 8 regular queues and an extra 8
single-link-only queues would report max_event_queues == 8 and
max_single_link_event_port_queue_pairs == 8 on return from the
rte_event_dev_info_get() function.
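For concreteness, a minimal sketch of reading those two fields (device id
0 and the EAL boilerplate are assumptions for illustration, not part of
the report):

    #include <stdio.h>
    #include <rte_eal.h>
    #include <rte_eventdev.h>

    int
    main(int argc, char **argv)
    {
        struct rte_event_dev_info info;

        if (rte_eal_init(argc, argv) < 0)
            return -1;

        /* Query the capabilities of event device 0. */
        if (rte_event_dev_info_get(0, &info) != 0)
            return -1;

        /* With the documented (non-inclusive) semantics, the example
         * device above would print 8 for both fields. */
        printf("max_event_queues = %u\n", info.max_event_queues);
        printf("max_single_link_event_port_queue_pairs = %u\n",
               info.max_single_link_event_port_queue_pairs);
        return 0;
    }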
The values returned from info_get are generally to be used to guide the
configuration passed to the rte_event_dev_configure() API, which takes the
rte_event_dev_config[4] struct. This has two similar fields,
nb_event_queues[5] and nb_single_link_event_port_queues[6]. However, a
problem arises: the documentation states that nb_event_queues cannot be
greater than the previously reported max_event_queues (which by itself
makes sense), but it also states that nb_single_link_event_port_queues is
a subset of the overall event ports and queues, and cannot be greater than
the nb_event_queues given in the same config structure.
To illustrate the issue, continuing the example above: suppose an app wants
to take that device with 8 regular queues and 8 single-link ones, and use 2
shared processing queues, e.g. for load-balancing packets/events among 8
cores, while also using the 8 single-link queues to send packets/events
directly to each core without load balancing. In this 2 + 8 scenario, there
are no valid dev_config struct settings that will work (see the sketch
after this list):

* Making the 8 single-link queues a subset of nb_event_queues means that
nb_event_queues is 10, which is greater than max_event_queues and so
invalid.
* Keeping them separate, so that nb_event_queues == 2 and
nb_single_link_event_port_queues == 8, violates the constraint that the
single-link value cannot exceed the nb_event_queues value.
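A sketch of those two attempts in code, written against the constraints as
documented (not as implemented; see later in the thread). The ancillary
config values are placeholders, and max_event_queues == 8 is assumed per
the example device:

    #include <rte_eventdev.h>

    static int
    configure_2_plus_8(uint8_t dev_id)
    {
        struct rte_event_dev_config cfg = {
            .nb_events_limit = 4096,
            .nb_event_queue_flows = 1024,
            .nb_event_port_dequeue_depth = 128,
            .nb_event_port_enqueue_depth = 128,
        };

        /* Attempt 1: fold the single-link queues into the total. */
        cfg.nb_event_queues = 10;  /* 2 shared + 8 single-link */
        cfg.nb_event_ports = 10;
        cfg.nb_single_link_event_port_queues = 8;
        /* Rejected: nb_event_queues (10) > max_event_queues (8). */

        /* Attempt 2: keep the two counts separate. */
        cfg.nb_event_queues = 2;   /* shared queues only */
        cfg.nb_event_ports = 2;
        cfg.nb_single_link_event_port_queues = 8;
        /* Rejected: single-link count (8) > nb_event_queues (2). */

        return rte_event_dev_configure(dev_id, &cfg);
    }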
We therefore need to adjust the constraints to make things work. We could
do so while keeping the single_link value *not included* in the total count
in dev_info but *included* in the config struct, but such a setup would be
very confusing for the user. Therefore, I think we instead need to correct
this by aligning the two structures: either the single_link queues are
included in the queue/port counts in both structs, or they aren't included
in either.
[1] https://doc.dpdk.org/api/structrte__event__dev__info.html
[2]
https://doc.dpdk.org/api/structrte__event__dev__info.html#a1cebb1d19943d6b8e3d6e51ffc72982a
[3]
https://doc.dpdk.org/api/structrte__event__dev__info.html#ae65bf9e4dba80ccb205f3c43f5907d5d
[4] https://doc.dpdk.org/api/structrte__event__dev__config.html
[5]
https://doc.dpdk.org/api/structrte__event__dev__config.html#a703c026d74436b05fc656652324101e4
[6]
https://doc.dpdk.org/api/structrte__event__dev__config.html#a39f29448dce5baf491f6685299faa0c9
* Re: [Bug 1368] inconsistency in eventdev dev_info and config structs makes some valid configs impossible
2024-01-18 11:18 [Bug 1368] inconsistency in eventdev dev_info and config structs makes some valid configs impossible bugzilla
@ 2024-01-18 11:26 ` Bruce Richardson
2024-01-18 13:21 ` Bruce Richardson
2024-03-18 0:49 ` [DPDK/eventdev Bug " bugzilla
1 sibling, 1 reply; 4+ messages in thread
From: Bruce Richardson @ 2024-01-18 11:26 UTC (permalink / raw)
To: dev; +Cc: jerinj
On Thu, Jan 18, 2024 at 11:18:58AM +0000, bugzilla@dpdk.org wrote:
> Bug ID [1]1368
> Summary inconsistency in eventdev dev_info and config structs makes
>         some valid configs impossible
>
> [... full bug report as above, trimmed ...]
>
Since I'm doing some clean-up work on rte_eventdev.h doxygen comments, I'm
happy enough to push a patch to help fix this, if we can agree on the
solution.
Of the two possibilities (make both counts include single-link queues, or
make both exclude them), I would suggest we go for having them included.
That would involve an internal DPDK change to increase the reported counts
in dev_info for any drivers supporting single-link queues, but should *not*
involve any changes to end applications, which would already be specifying
the values on the basis that nb_single_link is a subset of nb_event_queues.
On the other hand, changing the semantics of the config struct fields would
likely mean changes to end apps and so be an ABI/API break.
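To illustrate with the example device (the numbers are illustrative only,
assuming the "included" option were adopted; this is not current
behaviour):

    #include <rte_eventdev.h>

    /* dev_info, after an internal driver-side change: */
    struct rte_event_dev_info info = {
        .max_event_queues = 16,                      /* 8 + 8; was 8 */
        .max_single_link_event_port_queue_pairs = 8, /* unchanged */
    };

    /* dev_config, exactly as applications already fill it in today: */
    struct rte_event_dev_config cfg = {
        .nb_event_queues = 10,                 /* 2 shared + 8 single-link */
        .nb_single_link_event_port_queues = 8, /* subset of the 10 */
    };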
/Bruce
* Re: [Bug 1368] inconsistency in eventdev dev_info and config structs makes some valid configs impossible
2024-01-18 11:26 ` Bruce Richardson
@ 2024-01-18 13:21 ` Bruce Richardson
0 siblings, 0 replies; 4+ messages in thread
From: Bruce Richardson @ 2024-01-18 13:21 UTC (permalink / raw)
To: dev; +Cc: jerinj
On Thu, Jan 18, 2024 at 11:26:45AM +0000, Bruce Richardson wrote:
> On Thu, Jan 18, 2024 at 11:18:58AM +0000, bugzilla@dpdk.org wrote:
> > [... full bug report as above, trimmed ...]
>
> [... previous reply quoted in full as above, trimmed ...]
>
Checking the implementation in the eventdev.c file, I find that
(unsurprisingly) the implementation doesn't correspond to the
documentation. The problematic configuration described above is actually
possible to express, since the API checks nb_event_queues (and
nb_event_ports) against max_event_queues +
max_single_link_event_port_queue_pairs rather than against
max_event_queues alone.
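In other words, something like the following sketch should be accepted
today (dev_id is assumed valid and the ancillary values are placeholders):

    #include <rte_eventdev.h>

    /* The 2 + 8 configuration, which the implementation accepts because
     * it checks nb_event_queues against the sum of max_event_queues and
     * max_single_link_event_port_queue_pairs (8 + 8 = 16, so 10 fits). */
    static int
    configure_2_plus_8_working(uint8_t dev_id)
    {
        struct rte_event_dev_config cfg = {
            .nb_events_limit = 4096,
            .nb_event_queue_flows = 1024,
            .nb_event_port_dequeue_depth = 128,
            .nb_event_port_enqueue_depth = 128,
            .nb_event_queues = 10,                 /* 2 shared + 8 single-link */
            .nb_event_ports = 10,
            .nb_single_link_event_port_queues = 8,
        };

        return rte_event_dev_configure(dev_id, &cfg); /* expected to pass */
    }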
I will patch the documentation in the header to reflect this, but I still
think we should look to change this in future as it's rather inconsistent.
Regards,
/Bruce
* [DPDK/eventdev Bug 1368] inconsistency in eventdev dev_info and config structs makes some valid configs impossible
2024-01-18 11:18 [Bug 1368] inconsistency in eventdev dev_info and config structs makes some valid configs impossible bugzilla
2024-01-18 11:26 ` Bruce Richardson
@ 2024-03-18 0:49 ` bugzilla
1 sibling, 0 replies; 4+ messages in thread
From: bugzilla @ 2024-03-18 0:49 UTC (permalink / raw)
To: dev
https://bugs.dpdk.org/show_bug.cgi?id=1368
Thomas Monjalon (thomas@monjalon.net) changed:
What           |Removed      |Added
---------------+-------------+---------
Status         |UNCONFIRMED  |RESOLVED
Resolution     |---          |FIXED
--- Comment #3 from Thomas Monjalon (thomas@monjalon.net) ---
Resolved in http://git.dpdk.org/dpdk/commit/?id=1203462c5a