From: Thomas Monjalon
To: Jerin Jacob, David Marchand, "Rong, Leyi"
Cc: dev@dpdk.org, "Zhang, Qi Z", bruce.richardson@intel.com, konstantin.ananyev@intel.com
Date: Wed, 24 Mar 2021 18:23:23 +0100
Message-ID: <5787678.zEJ4OYuhaz@thomas>
References: <20201104072810.105498-1-leyi.rong@intel.com>
Subject: Re: [dpdk-dev] [PATCH] examples/l3fwd: enable multiple Tx queues on a lcore
List-Id: DPDK patches and discussions

05/11/2020 10:24, Rong, Leyi:
> From: Jerin Jacob
> > On Wed, Nov 4, 2020 at 2:34 PM Rong, Leyi wrote:
> > > From: David Marchand
> > > > On Wed, Nov 4, 2020 at 9:34 AM Rong, Leyi wrote:
> > > > > As there is always a throughput limit per queue, on some
> > > > > performance test cases using l3fwd, the result will be limited
> > > > > by the per-queue throughput limit.
> > > > > With multiple Tx queues enabled, the per-queue throughput limit
> > > > > can be eliminated if the CPU core is not the bottleneck.
> > > >
> > > > Ah interesting.
> > > > Which NIC has such limitations?
> > > > How much of an improvement can be expected from this?
> > >
> > > The issue was initially found on an XXV710 25Gb NIC, but such an
> > > issue can presumably happen on more NICs, as the per-core
> > > performance boundary of a high-end CPU is higher than the per-queue
> > > performance boundary of many NICs (except 100Gb and above).
> > > The improvement can be about 1.8x with that case @1t2q.
> >
> > As far as I understand, the current l3fwd Tx queue creation is like
> > this: if the app has N cores and M ports, then l3fwd creates N x M Tx
> > queues in total. What will the new values be based on this patch?

Thank you Jerin for providing some info missing in the description of
the patch.

> Hi Jacob,
>
> The total queue number equals the number of queues per port multiplied
> by the number of ports.
> Just take #l3fwd -l 5,6 -n 6 -- -p 0x3 --config '(0,0,5),(0,1,5),(1,0,6),(1,1,6)'
> as an example: with this patch applied, 2x2=4 Tx queues in total can be
> polled, while only 1x2=2 Tx queues could be used before.

This does not answer the question above in terms of N x M.

> > Does this patch have any regression in case the NIC queues are able
> > to cope with the throughput limit from the CPU?
>
> Regression tests relevant to l3fwd passed with this patch, with no
> obvious result drop on other cases.

This does not answer the general question for all the drivers you did
not test.

As you probably noticed, this patch has been blocked for months because
it is not properly explained.
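The 2x2=4 queue arithmetic in Leyi's example can be sanity-checked with a small throwaway script (an illustration only, not DPDK or l3fwd code; the regex and helper name are mine): it extracts the (port, queue, lcore) triples from the `--config` string and counts the queues configured per port.

```python
import re

def parse_config(cfg):
    """Return (port, queue, lcore) tuples from an l3fwd-style --config string."""
    return [tuple(map(int, m.groups()))
            for m in re.finditer(r"\((\d+),(\d+),(\d+)\)", cfg)]

cfg = "(0,0,5),(0,1,5),(1,0,6),(1,1,6)"
triples = parse_config(cfg)

# Count distinct queues configured per port, then sum over ports.
ports = sorted({p for p, _, _ in triples})
queues_per_port = {p: len({q for pp, q, _ in triples if pp == p}) for p in ports}
total = sum(queues_per_port.values())

print(queues_per_port, total)  # {0: 2, 1: 2} 4 -- the "2x2=4" from the thread
```

This only reproduces the counting claim (2 queues per port x 2 ports = 4 pollable Tx queues with the patch, versus 1x2=2 before); it does not model the N x M total that Jerin asked about, which is exactly the gap Thomas points out.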