From: "lihuisong (C)" <lihuisong@huawei.com>
To: Stephen Hemminger
Date: Tue, 31 Oct 2023 10:57:45 +0800
Subject: Re: [PATCH v3 0/3] introduce maximum Rx buffer size
In-Reply-To: <20231030114850.799f2bce@fedora>
References: <20230808040234.12947-1-lihuisong@huawei.com> <20231028014847.27149-1-lihuisong@huawei.com> <20231029084838.122acb9e@hermes.local> <20231030114850.799f2bce@fedora>
List-Id: DPDK patches and discussions <dev@dpdk.org>

On 2023/10/31 2:48, Stephen Hemminger wrote:
> On Mon, 30 Oct 2023 09:25:34 +0800
> "lihuisong (C)" wrote:
>
>>>> The "min_rx_bufsize" in struct rte_eth_dev_info stands for the
>>>> minimum Rx buffer size supported by hardware. Actually, some
>>>> engines also have a maximum Rx buffer specification, like hns3.
>>>>
>>>> If the mbuf data room size in the mempool is greater than the
>>>> maximum Rx buffer size supported by HW, the data size the
>>>> application can use in each mbuf is only the maximum Rx buffer
>>>> size supported by HW, not the whole data room size.
>>>>
>>>> So introduce a maximum Rx buffer size which is not enforced, and
>>>> is only reported to the user, to avoid memory waste.
>>> I am not convinced this is really necessary.
>>> Your device will use up to 4K of buffer size; not sure why an
>>> application would want to use much larger than that, because it
>>> would be wasting a lot of buffer space (most packets are smaller)
>>> anyway.
>>>
>>> The only case where it might be useful is if the application is
>>> using jumbo frames (9K) and is not able to handle multi-segment
>>> packets.
>> Yeah, it is useful if the user wants a large packet (like 6K) to
>> fit in a single mbuf. But at the current layer, the user doesn't
>> know the maximum buffer size per descriptor supported by HW.
>>> Not handling multi-segment packets in SW is just programmer
>>> laziness.
>> Users decide their implementation based on their own use cases.
>> One such case is that a user doesn't want to do memcpy for
>> multi-segment packets and prefers to use only the first mbuf's
>> memory.
>>
>> There is already "min_rx_bufsize" reported at the ethdev layer.
>> Either way, DPDK currently lacks a way to report the maximum Rx
>> buffer size per HW descriptor.
> My concern is that you are creating a special case for one driver.
I understand your concern.
> And other drivers probably have similar upper bound.
Yes, they also have a similar upper bound. From the code, the maximum
buffer size of most PMDs is 16K, and bnxt's is 9600 bytes. Do we need
to report this size?
It's a common feature for all PMDs.
>
> Could the warning be better handled in the driver-specific configure
> routine rather than updating the ethdev API? Something like:
>
>     if (multi-segment-flag off) {
>         if (mtu > driver max buf size) {
>             return error;
>         } else {
>             if (mtu > driver max buf size &&
>                 mtu < mempool_buf_size(mp)) {
>                 warn that packet may be segmented ??
>             }
>         }
>     }