Subject: Re: [PATCH] net: increase the maximum of RX/TX descriptors
From: Lukáš Šišmiš <sismis@cesnet.cz>
To: Stephen Hemminger
Cc: Morten Brørup, anatoly.burakov@intel.com, ian.stokes@intel.com, dev@dpdk.org
Date: Wed, 30 Oct 2024 16:40:10 +0100
Message-ID: <75463f4f-4139-4a53-9e63-05fe4cccb74f@cesnet.cz>
In-Reply-To: <20241030082020.2fe8eadb@hermes.local>
References: <20241029124832.224112-1-sismis@cesnet.cz> <98CBD80474FA8B44BF855DF32C47DC35E9F845@smartserver.smartshare.dk> <20241030082020.2fe8eadb@hermes.local>

On 30. 10. 24 16:20, Stephen Hemminger wrote:
> On Wed, 30 Oct 2024 14:58:40 +0100
> Lukáš Šišmiš wrote:
>
>> On 29. 10. 24 15:37, Morten Brørup wrote:
>>>> From: Lukas Sismis [mailto:sismis@cesnet.cz]
>>>> Sent: Tuesday, 29 October 2024 13.49
>>>>
>>>> Intel PMDs are capped by default to only 4096 RX/TX descriptors.
>>>> This can be limiting for applications requiring bigger buffering
>>>> capabilities. The cap prevented applications from configuring
>>>> more descriptors. By buffering more packets with RX/TX
>>>> descriptors, applications can better handle processing peaks.
>>>>
>>>> Signed-off-by: Lukas Sismis <sismis@cesnet.cz>
>>>> ---
>>> Seems like a good idea.
>>>
>>> Has the max number of descriptors been checked against the datasheets
>>> for all the affected NIC chips?
>>>
>> I was hoping to get some feedback on this from the Intel folks.
>>
>> But it seems like I can change it only for ixgbe (82599) to 32k
>> (possibly to 64k - 8); the others, ice (E810) and i40e (X710), are
>> capped at 8k - 32.
>>
>> I have neither experience with the other drivers nor the hardware to
>> test them, so I will leave them unchanged in the follow-up version of
>> this patch.
>>
>> Lukas
>>
> Having a large number of descriptors, especially at lower speeds, will
> increase buffer bloat. For real-life applications, you do not want to
> increase latency by more than 1 ms.
>
> 10 Gbps has 7.62 Gbps of effective bandwidth due to overhead.
> The rate at 1500 MTU is 7.62 Gbps / (1500 * 8) = 635 Kpps (i.e. 1.5 us
> per packet).
> A ring of 4096 descriptors can take 6 ms to drain with full-size packets.
>
> Be careful: optimizing for 64-byte benchmarks can be a disaster in the
> real world.
>

Thanks for the info, Stephen. However, I am not trying to optimize for
64-byte benchmarks. The work was initiated by an IO problem with Intel
NICs: a Suricata IDS worker (1 core per queue) receives a burst of
packets and then processes them sequentially, one by one. It seems that
4k descriptors are not enough for this. NVIDIA NICs allow e.g. 32k
descriptors and work fine there, and in the end it also worked fine once
the ixgbe descriptor limit was increased.

I am not sure why AF_PACKET handles this so much better than DPDK.
AF_PACKET does not have a crazy high number of descriptors configured
(<= 4096), yet it works better. At the moment I assume there is internal
buffering in the kernel that absorbs the processing spikes.

To give more context, here is the forum discussion:
https://forum.suricata.io/t/high-packet-drop-rate-with-dpdk-compared-to-af-packet-in-suricata-7-0-7/4896
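
For reference, this is roughly how an application can discover and use
the per-PMD limit instead of hard-coding a ring size. It is a minimal
sketch only; the helper name and the printout are illustrative, but the
clamping relies on the existing rte_eth_dev_info_get() and
rte_eth_dev_adjust_nb_rx_tx_desc() APIs:

#include <stdio.h>
#include <rte_ethdev.h>

/* Hypothetical helper: clamp a requested ring size to what the PMD
 * reports. Assumes the port was already probed via rte_eal_init(). */
static int
setup_ring_sizes(uint16_t port_id, uint16_t *nb_rxd, uint16_t *nb_txd)
{
	struct rte_eth_dev_info dev_info;
	int ret = rte_eth_dev_info_get(port_id, &dev_info);

	if (ret != 0)
		return ret;

	/* With the current Intel PMDs this reports 4096. */
	printf("port %u: max RX desc %u, max TX desc %u\n", port_id,
	       dev_info.rx_desc_lim.nb_max, dev_info.tx_desc_lim.nb_max);

	/* Round the request to the PMD's min/max/alignment constraints. */
	return rte_eth_dev_adjust_nb_rx_tx_desc(port_id, nb_rxd, nb_txd);
}

The caller would then pass, e.g., nb_rxd = 32768 and hand the adjusted
values to rte_eth_rx_queue_setup() / rte_eth_tx_queue_setup(). With the
current cap the request simply gets clamped to 4096, which is why the
limit itself has to be raised in the PMDs.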