From mboxrd@z Thu Jan 1 00:00:00 1970
From: "lihuisong (C)"
To: Stephen Hemminger
Subject: Re: [PATCH v3 0/3] introduce maximum Rx buffer size
Date: Mon, 30 Oct 2023 09:25:34 +0800
In-Reply-To: <20231029084838.122acb9e@hermes.local>
References: <20230808040234.12947-1-lihuisong@huawei.com>
 <20231028014847.27149-1-lihuisong@huawei.com>
 <20231029084838.122acb9e@hermes.local>
List-Id: DPDK patches and discussions

On 2023/10/29 23:48, Stephen Hemminger wrote:
> On Sat, 28 Oct 2023 09:48:44 +0800
> Huisong Li wrote:
>
>> The "min_rx_bufsize" field in struct rte_eth_dev_info stands for the
>> minimum Rx buffer size supported by hardware. Actually, some engines
>> also have a maximum Rx buffer specification, e.g., hns3.
>>
>> If the mbuf data room size in the mempool is greater than the maximum
>> Rx buffer size supported by the HW, the data size the application can
>> use in each mbuf is only as much as the maximum Rx buffer size
>> supported by the HW, not the whole data room size.
>>
>> So introduce a maximum Rx buffer size, which is not enforced and is
>> only reported to the user, to avoid memory waste.
> I am not convinced this is really necessary.
> Your device will use up to 4K of buffer size; I am not sure why an
> application would want to use much larger than that, because it would
> be wasting a lot of buffer space (most packets are smaller) anyway.
>
> The only case where it might be useful is if the application is using
> jumbo frames (9K) and the application was not able to handle
> multi-segment packets.
Yes, it is useful if the user wants a large packet (e.g., 6K) to fit in
a single mbuf. But at the current layer, the user has no way to know
the maximum buffer size per descriptor supported by the HW.
> Not handling multi-segment packets in SW is just programmer laziness.
Users decide their implementation based on the use cases in their
projects. One argument for this is that a user may not want to do a
memcpy to linearize multi-segment packets and may prefer to receive the
whole packet in the first mbuf. And there is already "min_rx_bufsize"
reported in the ethdev layer.
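To make the memory-waste point concrete, here is a minimal sketch of
how an application could use such a report when sizing its Rx mempool.
The field name "max_rx_bufsize", the "0 means no HW limit" convention,
and the pool parameters are assumptions for illustration, not the final
API of this series:

    #include <rte_ethdev.h>
    #include <rte_lcore.h>
    #include <rte_mbuf.h>

    static struct rte_mempool *
    create_rx_pool(uint16_t port_id, uint32_t wanted_data_room)
    {
        struct rte_eth_dev_info info;
        uint32_t data_room = wanted_data_room;

        if (rte_eth_dev_info_get(port_id, &info) != 0)
            return NULL;

        /*
         * Assumed field from this series: any data room beyond
         * max_rx_bufsize (plus headroom) would never be filled by the
         * HW, so cap the data room there instead of wasting mempool
         * memory on bytes the NIC cannot use.
         */
        if (info.max_rx_bufsize != 0 &&
            data_room > info.max_rx_bufsize + RTE_PKTMBUF_HEADROOM)
            data_room = info.max_rx_bufsize + RTE_PKTMBUF_HEADROOM;

        return rte_pktmbuf_pool_create("rx_pool", 8192, 256, 0,
                                       (uint16_t)data_room,
                                       rte_socket_id());
    }

Without the reported maximum, the application has no portable way to
make this capping decision and may allocate, say, 9K data rooms of
which the HW only ever fills 4K.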
Anyway, DPDK indeed lacks a way to report the maximum Rx buffer size
per HW descriptor.
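For reference, the addition under discussion would presumably sit next
to the existing minimum in struct rte_eth_dev_info, along these lines
(a sketch only; the exact field name, placement, and doc comment are
assumptions based on the series subject):

    struct rte_eth_dev_info {
        /* ... */
        uint32_t min_rx_bufsize; /**< Minimum Rx buffer size per descriptor. */
        uint32_t max_rx_bufsize; /**< Proposed: maximum Rx buffer size per
                                  *   descriptor; not enforced, reported
                                  *   only so applications can avoid
                                  *   over-sizing their mbuf data room. */
        /* ... */
    };

A driver without such a HW limit would presumably report a sentinel
value (e.g., 0 or UINT32_MAX) so existing applications are unaffected.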