DPDK patches and discussions
From: Bruce Richardson <bruce.richardson@intel.com>
To: <dev@dpdk.org>
Cc: <jerinj@marvell.com>
Subject: Re: [Bug 1368] inconsistency in eventdev dev_info and config structs makes some valid configs impossible
Date: Thu, 18 Jan 2024 13:21:34 +0000	[thread overview]
Message-ID: <Zakl3kuc3MV0UqYI@bricha3-MOBL.ger.corp.intel.com> (raw)
In-Reply-To: <ZakK9QzJhndNLaIN@bricha3-MOBL.ger.corp.intel.com>

On Thu, Jan 18, 2024 at 11:26:45AM +0000, Bruce Richardson wrote:
> On Thu, Jan 18, 2024 at 11:18:58AM +0000, bugzilla@dpdk.org wrote:
> >    Bug ID: 1368
> >    Summary: inconsistency in eventdev dev_info and config structs
> >             makes some valid configs impossible
> >    Product: DPDK
> >    Version: unspecified
> >    Hardware: All
> >    OS: All
> >    Status: UNCONFIRMED
> >    Severity: normal
> >    Priority: Normal
> >    Component: eventdev
> >    Assignee: dev@dpdk.org
> >    Reporter: bruce.richardson@intel.com
> >    Target Milestone: ---
> > 
> > In the rte_event_dev_info struct, we have the max_event_queues and
> > max_single_link_event_port_queue_pairs members. The doxygen docs on
> > the latter state: "These ports and queues are not accounted for in
> > max_event_ports or max_event_queues."
> > 
> > This implies that a device which has 8 regular queues and an extra 8
> > single-link-only queues would report max_event_queues == 8 and
> > max_single_link_event_port_queue_pairs == 8 on return from the
> > rte_event_dev_info_get() function.
> > 
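For concreteness, a minimal sketch of querying those limits. It uses
only the public rte_event_dev_info_get() API; the helper name, device
id and the choice of fields printed are illustrative:

#include <stdio.h>
#include <rte_eventdev.h>

static void
show_queue_limits(uint8_t dev_id)
{
	struct rte_event_dev_info info;

	/* query the device's reported limits */
	if (rte_event_dev_info_get(dev_id, &info) != 0)
		return;

	/* per the current doxygen text, the example device above would
	 * report 8 and 8 here, with the single-link pairs *not* counted
	 * in max_event_queues */
	printf("max_event_queues: %u\n", info.max_event_queues);
	printf("max_single_link_event_port_queue_pairs: %u\n",
			info.max_single_link_event_port_queue_pairs);
}
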
> > The values returned from info_get are generally used to guide the
> > configuration passed to the rte_event_dev_configure() API, which
> > takes the rte_event_dev_config struct. This has two similar fields,
> > nb_event_queues and nb_single_link_event_port_queues. However, a
> > problem arises: the documentation states that nb_event_queues cannot
> > be greater than the previously reported max_event_queues (which by
> > itself makes sense), but it also states that
> > nb_single_link_event_port_queues is a subset of the overall event
> > ports and queues, and cannot be greater than the nb_event_queues
> > given in the same config structure.
> > 
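A minimal sketch of that configure path, setting only the two fields
in question (a real config must also fill nb_event_ports,
nb_events_limit, the dequeue/enqueue depths, etc.; the helper name is
made up for illustration):

#include <string.h>
#include <rte_eventdev.h>

static int
configure_example(uint8_t dev_id, uint8_t nb_queues,
		uint8_t nb_single_link)
{
	struct rte_event_dev_config cfg;

	memset(&cfg, 0, sizeof(cfg));
	cfg.nb_event_queues = nb_queues;
	cfg.nb_single_link_event_port_queues = nb_single_link;
	/* documented constraints:
	 *   nb_event_queues <= info.max_event_queues
	 *   nb_single_link_event_port_queues <= nb_event_queues
	 */
	return rte_event_dev_configure(dev_id, &cfg);
}
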
> > To illustrate the issue, continuing with the same example as above:
> > suppose an app wants to take that device with 8 regular queues and 8
> > single-link ones, and to use 2 shared processing queues, e.g. for
> > load-balancing packets/events among 8 cores, while also using the 8
> > single-link queues to send packets/events directly to each core
> > without load balancing. In this 2 + 8 scenario, there are no valid
> > dev_config struct settings that will work (see the sketch after
> > this list):
> > * making the 8 a subset of nb_event_queues means that nb_event_queues
> >   is 10, which is greater than max_event_queues and so invalid;
> > * keeping them separate, so that nb_event_queues == 2 and
> >   nb_single_link_event_port_queues == 8, violates the constraint
> >   that the single_link value cannot exceed nb_event_queues.
> > 
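Concretely, continuing the configure sketch above with
max_event_queues == 8 (all values illustrative):

/* attempt 1: count the 8 single-link queues within nb_event_queues */
cfg.nb_event_queues = 10;                 /* 2 shared + 8 single-link */
cfg.nb_single_link_event_port_queues = 8;
/* invalid: 10 > max_event_queues (8) */

/* attempt 2: keep the single-link queues separate */
cfg.nb_event_queues = 2;
cfg.nb_single_link_event_port_queues = 8;
/* invalid: 8 > nb_event_queues (2) */
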
> > We therefore need to adjust the constraints to make things work. We
> > could do so while keeping the single_link value *not included* in
> > the total count in dev_info but *included* in the config struct, but
> > such a setup would be very confusing for the user. Therefore, I
> > think we instead need to correct this by aligning the two
> > structures: either the single_link queues are included in the
> > queue/port counts in both structs, or they are included in neither.
> > 
> 
> Since I'm doing some clean-up work on rte_eventdev.h doxygen comments, I'm
> happy enough to push a patch to help fix this, if we can agree on the
> solution.
> 
> Of the two possibilities (include single-link in the counts in both
> structs, or exclude it from both), I would suggest we go for having
> it included. That choice would require an internal DPDK change to
> increase the counts reported in dev_info by any drivers supporting
> single-link queues, but should *not* require any changes to end
> applications, which are already specifying their values on the basis
> that nb_single_link is a subset of nb_event_queues. Changing the
> semantics of the config struct fields instead would likely mean
> changes to end-apps and so be an ABI/API break.
> 
Checking the implementation in the eventdev.c file, I find that
(unsurprisingly) the implementation doesn't correspond to the
documentation. The problematic configuration described above can
actually be set up, since the API only checks that nb_event_queues
(and nb_event_ports) is no greater than max_event_queues +
max_single_link_queues.
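
In sketch form, the check in rte_event_dev_configure() amounts to the
following; the variable names and the exact comparison are assumptions
based on the description above, not a verbatim quote of eventdev.c:

/* "info" from rte_event_dev_info_get(), "dev_conf" from the caller */
if (dev_conf->nb_event_queues >
		info.max_event_queues +
		info.max_single_link_event_port_queue_pairs)
	return -EINVAL; /* the sum, not max_event_queues alone, is the bound */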

I will patch the documentation in the header to reflect this, but I still
think we should look to change this in future as it's rather inconsistent.

Regards,
/Bruce

Thread overview: 4+ messages
2024-01-18 11:18 bugzilla
2024-01-18 11:26 ` Bruce Richardson
2024-01-18 13:21   ` Bruce Richardson [this message]
2024-03-18  0:49 ` [DPDK/eventdev Bug " bugzilla
