From: "lihuisong (C)" <lihuisong@huawei.com>
To: Stephen Hemminger <stephen@networkplumber.org>
Cc: <dev@dpdk.org>, <thomas@monjalon.net>, <ferruh.yigit@amd.com>,
<andrew.rybchenko@oktetlabs.ru>, <liuyonglong@huawei.com>
Subject: Re: [PATCH v3 0/3] introduce maximum Rx buffer size
Date: Mon, 30 Oct 2023 09:25:34 +0800 [thread overview]
Message-ID: <c25f6966-04b7-ddbd-55ee-251ddc7466d3@huawei.com> (raw)
In-Reply-To: <20231029084838.122acb9e@hermes.local>
On 2023/10/29 23:48, Stephen Hemminger wrote:
> On Sat, 28 Oct 2023 09:48:44 +0800
> Huisong Li <lihuisong@huawei.com> wrote:
>
>> The "min_rx_bufsize" in struct rte_eth_dev_info stands for the minimum
>> Rx buffer size supported by hardware. Actually, some engines also have
>> the maximum Rx buffer specification, like, hns3.
>>
>> If the mbuf data room size in the mempool is greater than the maximum Rx
>> buffer size supported by HW, the data size the application can use in each
>> mbuf is only the maximum Rx buffer size supported by HW, not the whole
>> data room size.
>>
>> So introduce the maximum Rx buffer size, which is not enforced but only
>> reported to the user, to avoid wasting memory.
> I am not convinced this is really necessary.
> Your device will use up to 4K of buffer size, not sure why an application
> would want to use much larger than that because it would be wasting
> a lot of buffer space (most packets are smaller) anyway.
>
> The only case where it might be useful is if application is using jumbo
> frames (9K) and the application was not able to handle multi segment packets.
Yeah, it is useful if a user wants a large packet (like 6K) to fit in one
mbuf. But at the current layer, the user has no way to know the maximum
buffer size per descriptor supported by the HW.
> Not handling multi segment packets in SW is just programmer laziness.
Users decide their implementation based on the cases in their project.
One point in favor of this is that a user may not want to do a memcpy for
multi-segment packets and may prefer to use only the first mbuf's memory.
There is already a "min_rx_bufsize" reported in the ethdev layer.
Anyway, DPDK does lack a way to report the maximum Rx buffer size per HW
descriptor.
Thread overview: 57+ messages
2023-08-08 4:02 [RFC] ethdev: " Huisong Li
2023-08-11 12:07 ` Andrew Rybchenko
2023-08-15 8:16 ` lihuisong (C)
2023-08-15 11:10 ` [PATCH v1 0/3] " Huisong Li
2023-08-15 11:10 ` [PATCH v1 1/3] ethdev: " Huisong Li
2023-09-28 15:56 ` Ferruh Yigit
2023-10-24 12:21 ` lihuisong (C)
2023-08-15 11:10 ` [PATCH v1 2/3] app/testpmd: add maximum Rx buffer size display Huisong Li
2023-08-15 11:10 ` [PATCH v1 3/3] net/hns3: report maximum buffer size Huisong Li
2023-10-27 4:15 ` [PATCH v2 0/3] introduce maximum Rx " Huisong Li
2023-10-27 4:15 ` [PATCH v2 1/3] ethdev: " Huisong Li
2023-10-27 6:27 ` fengchengwen
2023-10-27 7:40 ` Morten Brørup
2023-10-28 1:23 ` lihuisong (C)
2023-10-27 4:15 ` [PATCH v2 2/3] app/testpmd: add maximum Rx buffer size display Huisong Li
2023-10-27 6:28 ` fengchengwen
2023-10-27 4:15 ` [PATCH v2 3/3] net/hns3: report maximum buffer size Huisong Li
2023-10-27 6:17 ` fengchengwen
2023-10-28 1:48 ` [PATCH v3 0/3] introduce maximum Rx " Huisong Li
2023-10-28 1:48 ` [PATCH v3 1/3] ethdev: " Huisong Li
2023-10-29 15:43 ` Stephen Hemminger
2023-10-30 3:08 ` lihuisong (C)
2023-10-28 1:48 ` [PATCH v3 2/3] app/testpmd: add maximum Rx buffer size display Huisong Li
2023-10-28 1:48 ` [PATCH v3 3/3] net/hns3: report maximum buffer size Huisong Li
2023-10-29 15:48 ` [PATCH v3 0/3] introduce maximum Rx " Stephen Hemminger
2023-10-30 1:25 ` lihuisong (C) [this message]
2023-10-30 18:48 ` Stephen Hemminger
2023-10-31 2:57 ` lihuisong (C)
2023-10-31 7:48 ` Morten Brørup
2023-10-31 15:40 ` Stephen Hemminger
2023-11-01 2:36 ` lihuisong (C)
2023-11-01 16:08 ` Stephen Hemminger
2023-11-02 1:59 ` lihuisong (C)
2023-11-02 16:23 ` Ferruh Yigit
2023-11-02 16:51 ` Morten Brørup
2023-11-02 17:05 ` Ferruh Yigit
2023-11-02 17:12 ` Morten Brørup
2023-11-02 17:35 ` Ferruh Yigit
2023-11-03 2:13 ` lihuisong (C)
2023-11-02 12:16 ` [PATCH v4 " Huisong Li
2023-11-02 12:16 ` [PATCH v4 1/3] ethdev: " Huisong Li
2023-11-02 16:35 ` Ferruh Yigit
2023-11-03 2:21 ` lihuisong (C)
2023-11-03 3:30 ` Ferruh Yigit
2023-11-03 6:27 ` lihuisong (C)
2023-11-02 12:16 ` [PATCH v4 2/3] app/testpmd: add maximum Rx buffer size display Huisong Li
2023-11-02 16:42 ` Ferruh Yigit
2023-11-03 2:39 ` lihuisong (C)
2023-11-03 3:53 ` Ferruh Yigit
2023-11-03 6:37 ` lihuisong (C)
2023-11-02 12:16 ` [PATCH v4 3/3] net/hns3: report maximum buffer size Huisong Li
2023-11-03 10:27 ` [PATCH v5 0/3] introduce maximum Rx " Huisong Li
2023-11-03 10:27 ` [PATCH v5 1/3] ethdev: " Huisong Li
2023-11-03 12:37 ` Ivan Malov
2023-11-03 10:27 ` [PATCH v5 2/3] app/testpmd: add maximum Rx buffer size display Huisong Li
2023-11-03 10:27 ` [PATCH v5 3/3] net/hns3: report maximum buffer size Huisong Li
2023-11-03 11:53 ` [PATCH v5 0/3] introduce maximum Rx " Ferruh Yigit