From: Matan Azrad <matan@nvidia.com>
To: Chengchang Tang <tangchengchang@huawei.com>,
"dev@dpdk.org" <dev@dpdk.org>
Cc: "maryam.tahhan@intel.com" <maryam.tahhan@intel.com>,
"linuxarm@huawei.com" <linuxarm@huawei.com>,
"ferruh.yigit@intel.com" <ferruh.yigit@intel.com>,
"wenzhuo.lu@intel.com" <wenzhuo.lu@intel.com>,
NBU-Contact-Thomas Monjalon <thomas@monjalon.net>,
"arybchenko@solarflare.com" <arybchenko@solarflare.com>
Subject: Re: [dpdk-dev] [PATCH v3 1/4] ethdev: add a field for rxq info structure
Date: Wed, 2 Sep 2020 10:30:59 +0000 [thread overview]
Message-ID: <MW2PR12MB2492305D632893AEA8D7ED31DF2F0@MW2PR12MB2492.namprd12.prod.outlook.com> (raw)
In-Reply-To: <1a4dc7d6-5596-34cb-9eb1-adcd2adef2fb@huawei.com>
Hi Chengchang
From: Chengchang Tang
> Hi, Matan
>
> On 2020/9/2 15:19, Matan Azrad wrote:
> >
> > Hi Chengchang
> >
> > From: Chengchang Tang
> >> Hi, Matan
> >>
> >> On 2020/9/1 23:33, Matan Azrad wrote:
> >>>
> >>> Hi Chengchang
> >>>
> >>> Please see some questions below.
> >>>
> >>> From: Chengchang Tang
> >>>> Add a field named rx_buf_size in rte_eth_rxq_info to indicate the
> >>>> buffer size used in receiving packets for HW.
> >>>>
> >>>> In this way, upper-layer users can get this information by calling
> >>>> rte_eth_rx_queue_info_get.
> >>>>
> >>>> Signed-off-by: Chengchang Tang <tangchengchang@huawei.com>
> >>>> Reviewed-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
> >>>> Acked-by: Andrew Rybchenko <arybchenko@solarflare.com>
> >>>> ---
> >>>> lib/librte_ethdev/rte_ethdev.h | 2 ++
> >>>> 1 file changed, 2 insertions(+)
> >>>>
> >>>> diff --git a/lib/librte_ethdev/rte_ethdev.h
> >>>> b/lib/librte_ethdev/rte_ethdev.h index 70295d7..9fed5cb 100644
> >>>> --- a/lib/librte_ethdev/rte_ethdev.h
> >>>> +++ b/lib/librte_ethdev/rte_ethdev.h
> >>>> @@ -1420,6 +1420,8 @@ struct rte_eth_rxq_info {
> >>>> struct rte_eth_rxconf conf; /**< queue config parameters. */
> >>>> uint8_t scattered_rx; /**< scattered packets RX supported. */
> >>>> uint16_t nb_desc; /**< configured number of RXDs. */
> >>>> + /**< buffer size used for hardware when receive packets. */
> >>>> + uint16_t rx_buf_size;
> >>>
> >>> Is it the maximum Rx buffer size supported by the HW?
> >>> If yes, maybe max_rx_buf_size is a better name?
> >>
> >> No, it is the Rx buffer size currently used by HW.
> > Isn't it defined by the user, using the Rx queue mempool's mbuf data room size?
> >
> > And it may be different per Rx queue....
>
> Yes, it is defined by the user via the Rx queue mempool's mbuf data room size.
> When different queues are bound to different mempools, the queues may have
> different values.
> >
> >> IMHO, the structure rte_eth_rxq_info and the associated query API are
> >> mainly used to query HW configurations at runtime, after the queue is
> >> configured/set up. Therefore, the content of this structure should be
> >> the current HW configuration.
> >
> > It looks to me more like capabilities...
> > The one who defines the current configuration is the user, via the
> > configuration APIs (after reading the capabilities).
>
> I prefer to consider the structure rte_eth_dev_info as the capabilities.
Yes.
> Because rxq_info and the associated APIs are not available until the queue is
> configured. And the max rx_buf_size already exists in dev_info.
> >
> > I don't think we have all the current configurations here, so what is
> > special about this one?
>
> I think the structure is used to store the queue-related configuration,
> especially the final HW configuration, which may differ from the user
> configuration, and some configurations that are not mandatory for the
> user (PMDs will use a default configuration), such as scattered_rx and
> rx_drop_en in rte_eth_rxconf; some PMDs will adjust them in some cases based
> on their HW constraints.
Ok, this struct (struct rte_eth_rxq_info) is new to me.
Thanks for the explanation.
> This configuration item meets the above criteria. The value range of
> rx_buf_size varies according to the HW. Some HW may require 1 KB alignment,
> while others may require one of several fixed values. So, the PMDs will
> configure it based on their HW constraints. This results in a difference
> between the user configuration and the actual configuration, and this value
> affects memory usage and performance.
> I think there's a need for a way to expose that information.
So the user can configure X and the driver will use Y!=X?
Should the application validate its own configurations after setting them successfully?
> >
> >>> Maybe document that 0 means - no limitation by HW?
> >>
> >> Yes, there is no need to fill this field for HW that has no restrictions on it.
> >> I'll add it in v4.
> >>
> >>> Must the application read it in order to know whether its datapath
> >>> should handle multi-segment buffers?
> >>
> >> I think it's more appropriate to use scattered_rx to determine whether
> >> multi-segment buffers should be handled.
> >>
> >>>
> >>> Maybe it will be good to force the application to configure scatter when
> >>> this field is valid and smaller than max_rx_pkt_len/max_lro..
> >>> (<= room size)...
> >
> > Can you explain more what is the issue you came to solve?
>
> This HW information may be useful when there are problems running an
> application. This structure and the related APIs can be used to expose it at
> run time.
> >
OK