From: Thomas Monjalon
To: Jerin Jacob Kollanukkaran, dev@dpdk.org
Cc: stable@dpdk.org, dev@dpdk.org, david.marchand@redhat.com, ferruh.yigit@intel.com, andrew.rybchenko@oktetlabs.ru, ajit.khaparde@broadcom.com, Rakesh Kudurumalla
Subject: Re: [EXT] Re: [dpdk-stable] [PATCH v2] test: avoid hang if queues are full and Tx fails
Date: Wed, 25 May 2022 10:02:09 +0200
Message-ID: <1720176.4herOUoSWf@thomas>
References: <20210720124713.603674-1-rkudurumalla@marvell.com>

I'm pretty sure there is something wrong in the cnxk driver, but you keep insisting and nobody has complained, so I am accepting the patch for the test application.
Applied

24/05/2022 07:39, Rakesh Kudurumalla:
> ping
>
> > From: Rakesh Kudurumalla
> > Sent: Monday, May 9, 2022 3:32 PM
> >
> > Hi Thomas Monjalon,
> >
> > The same behavior is observed in the cnxk driver as well.
> > Can we please get this patch merged?
> >
> > > From: Rakesh Kudurumalla
> > > Sent: Monday, February 14, 2022 10:27 AM
> > >
> > > > The octeontx2 driver is removed. Can we close this patch?
> > >
> > > The same behavior is observed with the cnxk driver, so we still need this patch.
> > >
> > > > 01/02/2022 07:30, Rakesh Kudurumalla:
> > > > > ping
> > > > >
> > > > > > From: Rakesh Kudurumalla
> > > > > > Sent: Monday, January 10, 2022 2:35 PM
> > > > > >
> > > > > > ping
> > > > > >
> > > > > > > From: Rakesh Kudurumalla
> > > > > > > Sent: Monday, December 13, 2021 12:10 PM
> > > > > > >
> > > > > > > > From: Thomas Monjalon
> > > > > > > > Sent: Monday, November 29, 2021 2:44 PM
> > > > > > > > 29/11/2021 09:52, Rakesh Kudurumalla:
> > > > > > > > > From: Thomas Monjalon
> > > > > > > > > > 22/11/2021 08:59, Rakesh Kudurumalla:
> > > > > > > > > > > From: Thomas Monjalon
> > > > > > > > > > > > 20/07/2021 18:50, Rakesh Kudurumalla:
> > > > > > > > > > > > > Current pmd_perf_autotest() in continuous mode tries
> > > > > > > > > > > > > to enqueue MAX_TRAFFIC_BURST completely before
> > > > > > > > > > > > > starting the test. Some drivers cannot accept the
> > > > > > > > > > > > > complete MAX_TRAFFIC_BURST even though the rx+tx
> > > > > > > > > > > > > descriptor count can fit it.
> > > > > > > > > > > >
> > > > > > > > > > > > Which driver is failing to do so?
> > > > > > > > > > > > Why can it not enqueue 32 packets?
> > > > > > > > > > >
> > > > > > > > > > > The octeontx2 driver is failing to enqueue because the
> > > > > > > > > > > hardware buffers are full before the test.
> > > > > > > >
> > > > > > > > Aren't you stopping the support of octeontx2?
> > > > > > > > Why do you care now?
> > > > > > >
> > > > > > > Yes, we are not supporting octeontx2, but this issue is also
> > > > > > > observed in the cnxk driver; the current patch fixes the same.
> > > > > > >
> > > > > > > > > > Why are the hardware buffers full?
> > > > > > > > >
> > > > > > > > > The hardware buffers are full because the number of
> > > > > > > > > descriptors in continuous mode is less than
> > > > > > > > > MAX_TRAFFIC_BURST, so if the enqueue fails there is no way
> > > > > > > > > the hardware can drop the packets.
> > > > > > > > > pmd_perf_autotest evaluates performance after initially
> > > > > > > > > enqueueing the packets.
> > > > > > > > > > >
> > > > > > > > > > > pmd_perf_autotest() in continuous mode tries to enqueue
> > > > > > > > > > > MAX_TRAFFIC_BURST (2048) packets before starting the
> > > > > > > > > > > test.
> > > > > > > > > > > > >
> > > > > > > > > > > > > This patch changes the behaviour to stop enqueuing
> > > > > > > > > > > > > after a few retries.
> > > > > > > > > > > >
> > > > > > > > > > > > If there is a real limitation, there will be issues in
> > > > > > > > > > > > more places than this test program.
> > > > > > > > > > > > I feel it should be addressed either in the driver or
> > > > > > > > > > > > at the ethdev level.