From mboxrd@z Thu Jan 1 00:00:00 1970
From: Thomas Monjalon
To: Pavan Nikhilesh Bhagavatula
Cc: dev@dpdk.org, Jerin Jacob Kollanukkaran,
	"arybchenko@solarflare.com", "ferruh.yigit@intel.com",
	"bernard.iremonger@intel.com"
Date: Tue, 26 Mar 2019 12:00:42 +0100
Message-ID: <4933257.X50ECdZ2iI@xps>
In-Reply-To: <20190301134700.8220-1-pbhagavatula@marvell.com>
References: <20190228194128.14236-1-pbhagavatula@marvell.com>
	<20190301134700.8220-1-pbhagavatula@marvell.com>
Subject: Re: [dpdk-dev] [PATCH v2] app/testpmd: add mempool bulk get for
	txonly mode

01/03/2019 14:47, Pavan Nikhilesh Bhagavatula:
> From: Pavan Nikhilesh
>
> Use mempool bulk get ops to alloc burst of packets and process them.
> If bulk get fails fallback to rte_mbuf_raw_alloc.
>
> Suggested-by: Andrew Rybchenko
> Signed-off-by: Pavan Nikhilesh
> ---
>
> v2 Changes:
> - Use bulk ops for fetching segments. (Andrew Rybchenko)
> - Fallback to rte_mbuf_raw_alloc if bulk get fails. (Andrew Rybchenko)
> - Fix mbufs not being freed when there is no more mbufs available for
>   segments. (Andrew Rybchenko)
>
>  app/test-pmd/txonly.c | 159 ++++++++++++++++++++++++------------------
>  1 file changed, 93 insertions(+), 66 deletions(-)

This is changing a lot of lines, so it is difficult to know exactly what
is changed. Please split it into a refactoring without any real change,
and introduce the real change in a later patch.
Then we will be able to examine it and check the performance.

We need more tests with more hardware in order to better understand
the performance improvement.
For info, a degradation is seen in the Mellanox lab.
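
The allocation pattern described in the commit message is roughly the
following. This is a minimal sketch only, assuming DPDK's
rte_pktmbuf_alloc_bulk() and rte_mbuf_raw_alloc() APIs; the helper name
fill_burst() is hypothetical and the code is not taken from the patch.

/*
 * Sketch of the bulk-get-with-fallback idea from the commit message.
 * rte_pktmbuf_alloc_bulk() dequeues and resets a whole burst in one call;
 * if that fails, fall back to per-mbuf rte_mbuf_raw_alloc(), which only
 * does a raw dequeue, so the relevant fields are reset by hand.
 * (Illustrative sketch, not the patch's actual implementation.)
 */
#include <rte_mbuf.h>
#include <rte_mempool.h>

static inline unsigned int
fill_burst(struct rte_mempool *mp, struct rte_mbuf **pkts, unsigned int n)
{
	unsigned int i;

	/* Fast path: one bulk dequeue for the whole burst. */
	if (rte_pktmbuf_alloc_bulk(mp, pkts, n) == 0)
		return n;

	/* Slow path: allocate one mbuf at a time until the pool runs dry. */
	for (i = 0; i < n; i++) {
		pkts[i] = rte_mbuf_raw_alloc(mp);
		if (pkts[i] == NULL)
			break;
		rte_pktmbuf_reset_headroom(pkts[i]);
		pkts[i]->data_len = 0;
		pkts[i]->nb_segs = 1;
		pkts[i]->next = NULL;
	}

	return i; /* number of mbufs actually obtained */
}

Per the v2 changelog above, the actual patch additionally frees any
partially built multi-segment packet when no more mbufs are available,
rather than returning a short count as this sketch does.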