Date: Mon, 30 Oct 2023 11:48:50 -0700
From: Stephen Hemminger
To: "lihuisong (C)"
Subject: Re: [PATCH v3 0/3] introduce maximum Rx buffer size
Message-ID: <20231030114850.799f2bce@fedora>
References: <20230808040234.12947-1-lihuisong@huawei.com>
 <20231028014847.27149-1-lihuisong@huawei.com>
 <20231029084838.122acb9e@hermes.local>
List-Id: DPDK patches and discussions

On Mon, 30 Oct 2023 09:25:34 +0800
"lihuisong (C)" wrote:

> >
> >> The "min_rx_bufsize" in struct rte_eth_dev_info stands for the
> >> minimum Rx buffer size
> >> supported by hardware. Actually, some engines also have a maximum
> >> Rx buffer specification, for example hns3.
> >>
> >> If the mbuf data room size in the mempool is greater than the
> >> maximum Rx buffer size supported by HW, the data size the
> >> application can use in each mbuf is only the maximum Rx buffer
> >> size supported by HW, not the whole data room size.
> >>
> >> So introduce a maximum Rx buffer size, which is not enforced and
> >> is only reported to the user, to avoid wasting memory.
> >
> > I am not convinced this is really necessary.
> > Your device will use up to 4K of buffer size; not sure why an
> > application would want to use much more than that, since it would
> > waste a lot of buffer space (most packets are smaller) anyway.
> >
> > The only case where it might be useful is if the application is
> > using jumbo frames (9K) and is not able to handle multi-segment
> > packets.
> Yeah, it is useful if the user wants a large packet (like 6K) to fit
> in one mbuf. But at the current layer, the user does not know the
> maximum buffer size per descriptor supported by the HW.
> > Not handling multi-segment packets in SW is just programmer
> > laziness.
> Users decide their implementation based on the cases in their
> project. One point in favor of this is that a user may not want to
> do a memcpy for multi-segment packets and may just use the first
> mbuf's memory.
>
> There is already "min_rx_bufsize" reported in the ethdev layer.
> Anyway, DPDK does indeed lack a way to report the maximum Rx buffer
> size per HW descriptor.

My concern is that you are creating a special case for one driver,
and other drivers probably have a similar upper bound.

Could the warning be better handled in the driver-specific configure
routine rather than by updating the ethdev API? Something like:

	if (multi-segment flag is off) {
		if (mtu > driver max buf size)
			return error;
	} else {
		if (mtu > driver max buf size && mtu < mempool_buf_size(mp))
			warn that packet may be segmented ??
	}
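
To make that concrete, below is a minimal sketch of how such a check
could look in a driver's Rx setup path (illustration only, not code
from any existing driver). MAX_HW_RX_BUF_SIZE, PMD_DRV_LOG and
check_rx_buf_size() are hypothetical stand-ins for driver-local
definitions; rte_pktmbuf_data_room_size(), RTE_PKTMBUF_HEADROOM,
RTE_ETH_RX_OFFLOAD_SCATTER and dev->data->mtu are existing DPDK APIs.

#include <errno.h>
#include <stdbool.h>
#include <stdio.h>

#include <rte_ethdev.h>
#include <rte_mbuf.h>

/* Assumed per-descriptor HW limit (4K is mentioned in this thread). */
#define MAX_HW_RX_BUF_SIZE	4096

/* Stand-in for the driver's own logging macro. */
#define PMD_DRV_LOG(level, fmt, ...) \
	printf(#level ": " fmt "\n", ##__VA_ARGS__)

static int
check_rx_buf_size(struct rte_eth_dev *dev, struct rte_mempool *mp)
{
	uint32_t buf_size = rte_pktmbuf_data_room_size(mp) -
			    RTE_PKTMBUF_HEADROOM;
	uint32_t frame_size = dev->data->mtu + RTE_ETHER_HDR_LEN +
			      RTE_ETHER_CRC_LEN;
	bool scatter = dev->data->dev_conf.rxmode.offloads &
		       RTE_ETH_RX_OFFLOAD_SCATTER;

	if (!scatter) {
		/* Without scattered Rx, a frame must fit in one HW buffer. */
		if (frame_size > MAX_HW_RX_BUF_SIZE)
			return -EINVAL;
	} else if (frame_size > MAX_HW_RX_BUF_SIZE && frame_size <= buf_size) {
		/*
		 * The mempool buffer could hold the whole frame, but the HW
		 * still splits it at its per-descriptor limit, so part of
		 * each data room is wasted and packets get segmented.
		 */
		PMD_DRV_LOG(WARNING,
			    "mbuf data room %u exceeds HW Rx buffer limit %u, packets may be segmented",
			    buf_size, MAX_HW_RX_BUF_SIZE);
	}
	return 0;
}

A check along these lines in each driver's Rx queue setup keeps the
behaviour driver-local and avoids adding a new ethdev field, which is
the direction suggested above.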