Subject: Re: [PATCH v3 0/3] introduce maximum Rx buffer size
From: "lihuisong (C)"
To: Stephen Hemminger
Cc: Morten Brørup
Date: Wed, 1 Nov 2023 10:36:07 +0800
List-Id: DPDK patches and discussions
In-Reply-To: <20231031084017.64b9f342@fedora>

Hi Stephen,

On 2023/10/31 23:40, Stephen Hemminger wrote:
> On Tue, 31 Oct 2023 10:57:45 +0800
> "lihuisong (C)" wrote:
>
>>>> Users decide on their implementation based on the use cases in their
>>>> project. One point in favor of this is that a user may not want to do
>>>> a memcpy for multi-segment packets and would rather use only the first
>>>> mbuf's memory.
>>>>
>>>> There is already "min_rx_bufsize" reported in the ethdev layer.
>>>> Either way, DPDK does lack a way to report the maximum Rx buffer
>>>> size per hardware descriptor.
>>> My concern is that you are creating a special case for one driver.
>> I understand your concern.
>>> And other drivers probably have similar upper bound.
>> Yes, they also have a similar upper bound.
>> From the code, the max buffer size of most PMDs is 16K, and bnxt's is
>> 9600 bytes.
>> Do we need to report this size? It's a common feature for all PMDs.
> It would make sense then to have max_rx_bufsize set to 16K by default
> in ethdev, and PMD could then raise/lower based on hardware.

It is not appropriate to set it to 16K by default in the ethdev layer,
because I don't see any check for an upper bound in some drivers, like
axgbe, enetc and so on, and I'm not sure whether they have one. Also,
some drivers' maximum buffer size is "16384 (16K) - 128", not exactly
16K. So it's better to set it to UINT32_MAX by default. What do you
think?
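To make the proposal concrete, the scheme discussed above can be sketched as follows. This is only an illustration, not the actual patch: `struct dev_info`, `ethdev_info_defaults()`, `pmd_infos_get()` and `clamp_rx_bufsize()` are hypothetical stand-ins for the real `struct rte_eth_dev_info` and driver ops. The idea is that the ethdev layer defaults the proposed `max_rx_bufsize` field to UINT32_MAX ("no known limit"), and a PMD with a hardware cap lowers it.

```c
#include <stdint.h>

/*
 * Simplified stand-in for struct rte_eth_dev_info; only the two fields
 * relevant to this discussion are modeled. The max_rx_bufsize field is
 * the one proposed by this patch series.
 */
struct dev_info {
	uint32_t min_rx_bufsize;  /* already reported by ethdev today */
	uint32_t max_rx_bufsize;  /* proposed: Rx buffer cap per HW descriptor */
};

/*
 * Ethdev layer: default to "no limit" so that drivers without a known
 * upper bound (e.g. axgbe, enetc) are not given a misleading 16K cap.
 */
static void ethdev_info_defaults(struct dev_info *info)
{
	info->min_rx_bufsize = 0;
	info->max_rx_bufsize = UINT32_MAX;
}

/*
 * A hypothetical PMD whose hardware caps the Rx buffer per descriptor.
 * 16384 - 128 mirrors the "16K minus 128" case mentioned in the thread.
 */
static void pmd_infos_get(struct dev_info *info)
{
	info->min_rx_bufsize = 256;
	info->max_rx_bufsize = 16384 - 128;
}

/* An application sizing its mbuf data room clamps to the reported cap. */
static uint32_t clamp_rx_bufsize(uint32_t wanted, const struct dev_info *info)
{
	return wanted < info->max_rx_bufsize ? wanted : info->max_rx_bufsize;
}
```

With this shape, an application that wants, say, a 32 KB data room would end up with 16256 bytes on the capped PMD, while on a driver that never touches the field it would get exactly what it asked for.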