From: Thomas Monjalon
To: Matt
Cc: stable@dpdk.org, dev@dpdk.org, stephen@networkplumber.org, Ferruh Yigit
Subject: Re: [PATCH v3] kni: fix possible alloc_q starvation when mbufs are exhausted
Date: Sat, 11 Mar 2023 10:16:39 +0100
Message-ID: <6880737.18pcnM708K@thomas>
References: <20221109060434.2012064-1-zhouyates@gmail.com>
List-Id: DPDK patches and discussions

04/01/2023 15:34, Ferruh Yigit:
> On 1/4/2023 11:57 AM, Matt wrote:
> > Hi Ferruh,
> >
> > In my case, the traffic is not large, so I can't see the impact.
> > I also tested under high load (>2 Mpps with 2 DPDK cores and 2 kernel
> > threads) and found no significant difference in performance either.
> > I think the reason is that it will not
> > reach 'kni_fifo_count(kni->alloc_q) == 0' under high load.
>
> I agree, the additional check is most likely hit only at low bandwidth;
> thanks for checking the performance impact.
>
> > On Tue, Jan 3, 2023 at 8:47 PM Ferruh Yigit wrote:
> >
> > On 12/30/2022 4:23 AM, Yangchao Zhou wrote:
> > > In some scenarios, mbufs returned by rte_kni_rx_burst are not freed
> > > immediately, so kni_allocate_mbufs may fail without the application
> > > knowing.
> > >
> > > Even worse, when alloc_q is completely exhausted, kni_net_tx in
> > > rte_kni.ko will drop all tx packets, and kni_allocate_mbufs is never
> > > called again, even if the mbufs are eventually freed.
> > >
> > > In this patch, we try to allocate mbufs for alloc_q when it is empty.
> > >
> > > Historically, the performance bottleneck of KNI has been the
> > > usleep_range of the kni thread in rte_kni.ko. The kni_fifo_count
> > > check is trivial and its cost should be acceptable.
> >
> > Hi Yangchao,
> >
> > Are you observing any performance impact with this change in your
> > use case?
> >
> > > Fixes: 3e12a98fe397 ("kni: optimize Rx burst")
> > > Cc: stable@dpdk.org
> > >
> > > Signed-off-by: Yangchao Zhou
> >
> > Acked-by: Ferruh Yigit

Applied, thanks.