From: Thomas Monjalon <thomas@monjalon.net>
To: Spike Du <spiked@nvidia.com>,
Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Cc: Matan Azrad <matan@nvidia.com>,
Slava Ovsiienko <viacheslavo@nvidia.com>,
Ori Kam <orika@nvidia.com>, Wenzhuo Lu <wenzhuo.lu@intel.com>,
Beilei Xing <beilei.xing@intel.com>,
Bernard Iremonger <bernard.iremonger@intel.com>,
Ray Kinsella <mdr@ashroe.eu>,
dev@dpdk.org,
"stephen@networkplumber.org" <stephen@networkplumber.org>,
"mb@smartsharesystems.com" <mb@smartsharesystems.com>,
Raslan Darawsheh <rasland@nvidia.com>,
ferruh.yigit@amd.com
Subject: Re: [PATCH v4 3/7] ethdev: introduce Rx queue based fill threshold
Date: Mon, 06 Jun 2022 23:30:41 +0200 [thread overview]
Message-ID: <7144473.aeNJFYEL58@thomas> (raw)
In-Reply-To: <de510090-677d-397e-a1df-1c1115108ac6@oktetlabs.ru>
It seems we share a common understanding,
and we need to agree on good wording
for the most meaningful API.
Questions inline below:
06/06/2022 19:15, Andrew Rybchenko:
> On 6/6/22 16:16, Spike Du wrote:
> > Hi Andrew,
> > Please see below for the "fill threshold" concept; I'm OK with the other comments about the code.
> >
> > Regards,
> > Spike.
> >
> >
> > From: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> >> On 6/3/22 15:48, Spike Du wrote:
> >>> Fill threshold describes the fullness of an Rx queue. If the Rx queue
> >>> fullness is above the threshold, the device will trigger the event
> >>> RTE_ETH_EVENT_RX_FILL_THRESH.
> >>
> >> Sorry, I'm not sure that I understand. As far as I know, the process of adding
> >> more Rx buffers to an Rx queue is called 'refill' in many drivers. So the fill
> >> level is the number (or percentage) of free buffers in an Rx queue.
> >> If so, the fill threshold should be a minimum fill level, and below that level
> >> we should generate an event.
> >>
> >> However, reading the first paragraph of the description, it looks like you mean
> >> the opposite thing: the number (or percentage) of ready Rx buffers with received
> >> packets.
> >>
> >> I think the term "fill threshold" was suggested by me, but I did it with my own
> >> understanding of the added feature. Now I'm confused.
> >>
> >> Moreover, I don't understand how "fill threshold" could be in terms of ready
> >> Rx buffers. HW simply doesn't know when ready Rx buffers are
> >> processed by SW. So, HW can't say for sure how many ready Rx buffers are
> >> pending. It could be calculated as the Rx queue size minus the number of free Rx
> >> buffers, but it is imprecise. First of all, not all Rx descriptors could be used.
> >> Second, the HW ring size could differ from the queue size specified in SW.
> >> The queue size specified in SW could just limit the maximum number of free Rx
> >> buffers provided by the driver.
> >>
> >
> > Let me use other terms because "fill"/"refill" is also ambiguous to me.
> > In an Rx ring, there are Rx buffers holding received packets; you call them "ready Rx buffers". There is an RTE API, rte_eth_rx_queue_count(), to get their number;
> > they are also called "used descriptors" in the code.
> > There are also Rx buffers provided by SW so that HW can "fill in" received packets; we can call them "usable Rx buffers" (here "usable" means usable by HW).
>
> Maybe it is better to stick to Rx descriptor status terminology?
> Available - Rx descriptor available to HW to put a received packet into
> Done - Rx descriptor holding a received packet, reported to SW
> Unavailable - other (e.g. a gap which cannot be used, or a descriptor already
> processed as Done but not yet refilled, i.e. made available to HW again)
>
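Side note, to make the terminology concrete: these three states are what the
existing descriptor status API already reports. A rough, untested sketch (not
part of this patch) classifying the next n descriptors of an Rx queue:

/* Rough sketch: count Available/Done/Unavailable Rx descriptors.
 * offset 0 is the next descriptor the driver will process;
 * n is assumed to be <= the ring size.
 */
#include <stdio.h>
#include <rte_ethdev.h>

static void
count_rx_desc_states(uint16_t port_id, uint16_t queue_id, uint16_t n)
{
	uint16_t avail = 0, done = 0, unavail = 0;
	uint16_t off;

	for (off = 0; off < n; off++) {
		int st = rte_eth_rx_descriptor_status(port_id, queue_id, off);

		if (st == RTE_ETH_RX_DESC_AVAIL)
			avail++;   /* free for HW to write a packet into */
		else if (st == RTE_ETH_RX_DESC_DONE)
			done++;    /* holds a received packet, not yet read by SW */
		else if (st == RTE_ETH_RX_DESC_UNAVAIL)
			unavail++; /* hole or otherwise not usable */
		/* a negative value means unsupported or bad arguments */
	}
	printf("avail=%u done=%u unavail=%u\n", avail, done, unavail);
}
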
> > Let's define Rx queue "fullness":
> > Fullness = ready-Rx-buffers/Rxq-size
>
> i.e. number of DONE descriptors divided by RxQ size
>
> > Conversely, we have "emptiness":
> > Emptiness = usable-Rx-buffers/Rxq-size
>
> i.e. number of AVAIL descriptors divided by RxQ size
> Note that AVAIL != RxQ-size - DONE
>
> HW really knows the number of available descriptors by its nature.
> It is the space between the latest Done descriptor and the latest one
> provided on refill.
>
> HW does not know which descriptors are DONE, since some which
> were DONE earlier could already have been processed by SW, but not yet
> made available again.
>
>
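Side note to make the two ratios concrete: from the SW side they can be
estimated with existing ethdev calls, while the AVAIL side is the one HW/FW can
really watch, as explained above. A rough, untested sketch (not part of this
patch):

/* Rough sketch: estimate "fullness" (DONE / queue size) from SW.
 * "Emptiness" (AVAIL / queue size) is not simply 1 - fullness,
 * because of unavailable descriptors.
 */
#include <stdio.h>
#include <rte_ethdev.h>

static int
estimate_rx_fullness(uint16_t port_id, uint16_t queue_id)
{
	struct rte_eth_rxq_info info;
	int done;

	if (rte_eth_rx_queue_info_get(port_id, queue_id, &info) != 0)
		return -1;

	/* number of used (DONE) descriptors, or negative errno */
	done = rte_eth_rx_queue_count(port_id, queue_id);
	if (done < 0)
		return done;

	printf("fullness ~ %.0f%% (%d of %u descriptors)\n",
	       100.0 * done / info.nb_desc, done, info.nb_desc);
	return 0;
}
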
> > Here "fill threshold" describes "fullness", it's not "refill" described in you above words. Because in your words, "refill" is the opposite, it's filling "usable/free Rx buffers", or "emptiness".
> >
> > I can only briefly explain how mlx5 gets the LWM, because I'm not a firmware guy.
> > An mlx5 Rx queue is basically an RDMA queue. It has two indexes: a producer index which increases when HW fills in a packet, and a consumer index which increases when SW consumes a packet.
> > The queue size is known when it's created. The fullness is something like (producer_index - consumer_index) (I don't consider wrap-around here).
> > So mlx5 has a way to get the fullness or emptiness in HW or FW.
> > Another detail is that mlx5 uses the term "LWM" (limit watermark), which describes "emptiness". When usable-Rx-buffers falls below the LWM, we trigger an event.
> > But Thomas thinks "fullness" is easier to understand, so we use "fullness" in the rte APIs and we'll translate it to LWM in the mlx5 PMD.
I may be wrong :)
> HW simply does not know the fullness and thus cannot generate any events
> based on it. It is a problem on Rx when there are no available
> descriptors, i.e. emptiness.
So you think "empty_thresh" would be better?
Or "avail_thresh"?
> >>> Fill threshold is defined as a percentage of the Rx queue size, with a valid
> >>> value range of [0, 99].
> >>> Setting the fill threshold to 0 disables it, which is the default.
> >>> Add fill threshold configuration and query driver callbacks in eth_dev_ops.
> >>> Add command line options to support per-rxq fill_thresh configuration.
> >>> - Command syntax:
> >>> set port <port_id> rxq <rxq_id> fill_thresh <fill_thresh_num>
> >>>
> >>> - Example commands:
> >>> To configure fill_thresh as 30% of rxq size on port 1 rxq 0:
> >>> testpmd> set port 1 rxq 0 fill_thresh 30
> >>>
> >>> To disable fill_thresh on port 1 rxq 0:
> >>> testpmd> set port 1 rxq 0 fill_thresh 0
> >>>
> >>> Signed-off-by: Spike Du <spiked@nvidia.com>
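For the record, a rough sketch of how an application could consume this, based
on my reading of this v4. The setter name below is my guess at the v4 function
and will anyway change with the renaming discussed above; the event name is the
one from the commit message. With a 1024-descriptor queue, fill_thresh 30 would
arm the event at roughly 307 ready descriptors:

/* Rough sketch only: rte_eth_rx_fill_thresh_set() is assumed from this v4
 * and may be renamed; RTE_ETH_EVENT_RX_FILL_THRESH comes from the commit log.
 */
#include <stdio.h>
#include <rte_common.h>
#include <rte_ethdev.h>

static int
rx_thresh_event_cb(uint16_t port_id, enum rte_eth_event_type event,
		   void *cb_arg, void *ret_param)
{
	RTE_SET_USED(event);
	RTE_SET_USED(cb_arg);
	RTE_SET_USED(ret_param);
	/* e.g. wake up more lcores, or tighten the host shaper */
	printf("port %u: an Rx queue crossed its fill threshold\n", port_id);
	return 0;
}

static int
setup_rx_thresh(uint16_t port_id, uint16_t rxq_id)
{
	int ret;

	ret = rte_eth_dev_callback_register(port_id,
					    RTE_ETH_EVENT_RX_FILL_THRESH,
					    rx_thresh_event_cb, NULL);
	if (ret != 0)
		return ret;
	/* 30 = 30% of the Rx queue size; 0 would disable it (the default) */
	return rte_eth_rx_fill_thresh_set(port_id, rxq_id, 30);
}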