From: Thomas Monjalon <thomas@monjalon.net>
To: Jerin Jacob Kollanukkaran <jerinj@marvell.com>
Cc: Pavan Nikhilesh Bhagavatula <pbhagavatula@marvell.com>,
 "shahafs@mellanox.com" <shahafs@mellanox.com>,
 "bernard.iremonger@intel.com" <bernard.iremonger@intel.com>,
 "dev@dpdk.org" <dev@dpdk.org>,
 "arybchenko@solarflare.com" <arybchenko@solarflare.com>,
 "ferruh.yigit@intel.com" <ferruh.yigit@intel.com>
Date: Tue, 02 Apr 2019 09:06:19 +0200
Message-ID: <1575251.HckvfjqYdM@xps>
In-Reply-To: <acd9b7164bd23224f343aed272303fce7ac3a3e4.camel@marvell.com>
References: <20190228194128.14236-1-pbhagavatula@marvell.com>
 <1732867.UvzobiCdsi@xps>
 <acd9b7164bd23224f343aed272303fce7ac3a3e4.camel@marvell.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="UTF-8"
Subject: Re: [dpdk-dev] [PATCH v5 1/2] app/testpmd: optimize testpmd txonly mode

02/04/2019 03:03, Jerin Jacob Kollanukkaran:
> On Mon, 2019-04-01 at 22:53 +0200, Thomas Monjalon wrote:
> > 01/04/2019 22:25, Ferruh Yigit:
> > > On 3/31/2019 2:14 PM, Pavan Nikhilesh Bhagavatula wrote:
> > > > From: Pavan Nikhilesh <pbhagavatula@marvell.com>
> > > > 
> > > > Optimize testpmd txonly mode by:
> > > > 1. Moving the per-packet Ethernet header copy above the loop.
> > > > 2. Using bulk ops for allocating segments instead of an inner
> > > > loop for every segment.
> > > > 
> > > > Also, move the packet prepare logic into a separate function so
> > > > that it can be reused later.
> > > > 
> > > > Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
> > > > ---
> > > >  v5 Changes:
> > > >  - Remove unnecessary change to struct rte_port *txp (movement). (Bernard)
> > > > 
> > > >  v4 Changes:
> > > >  - Fix packet len calculation.
> > > > 
> > > >  v3 Changes:
> > > >  - Split the patches for easier review. (Thomas)
> > > >  - Remove unnecessary assignments to 0. (Bernard)
> > > > 
> > > >  v2 Changes:
> > > >  - Use bulk ops for fetching segments. (Andrew Rybchenko)
> > > >  - Fall back to rte_mbuf_raw_alloc if the bulk get fails. (Andrew Rybchenko)
> > > >  - Fix mbufs not being freed when there are no more mbufs
> > > >    available for segments. (Andrew Rybchenko)
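
(As a rough illustration of the allocation pattern described above --
not the actual testpmd code: the helper name alloc_segments and its
exact layout are assumptions; only rte_pktmbuf_alloc_bulk(),
rte_mbuf_raw_alloc() and rte_mbuf_raw_free() are real DPDK APIs.)

#include <rte_mbuf.h>
#include <rte_mempool.h>

/* Sketch only: grab all segments of one packet with a single bulk
 * call; if the bulk get fails, fall back to per-segment raw alloc,
 * and release everything obtained so far on failure. */
static int
alloc_segments(struct rte_mempool *mp, struct rte_mbuf **segs,
	       unsigned int nb_segs)
{
	unsigned int i;

	/* One bulk operation instead of an inner loop per segment. */
	if (rte_pktmbuf_alloc_bulk(mp, segs, nb_segs) == 0)
		return 0;

	/* Fallback path: rte_mbuf_raw_alloc() skips the header reset,
	 * so the caller is expected to initialize each mbuf itself. */
	for (i = 0; i < nb_segs; i++) {
		segs[i] = rte_mbuf_raw_alloc(mp);
		if (segs[i] == NULL)
			goto fail;
	}
	return 0;

fail:
	/* Do not leak the segments already obtained. */
	while (i-- > 0)
		rte_mbuf_raw_free(segs[i]);
	return -1;
}

The per-packet Ethernet header copy can likewise be done once before
such a loop, since the header is identical for every packet in txonly
mode.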
> > > 
> > > Hi Thomas, Shahafs,
> > > 
> > > I guess there was a performance issue on Mellanox with this patch.
> > > I assume it is still valid, since this version only has some
> > > cosmetic changes, but can you please confirm?
> > 
> > We will check it.
> > 
> > > And what is the next step? Can you provide some information to
> > > Pavan to solve the issue, or perhaps, even better, a fix?
> > 
> > Looking at the first patch, there are still 3 changes merged
> > together. Why not split it even more?
> 
> Splitting it further is not an issue, but we should not then start a
> thread about squashing the patches later. What would be interesting to
> know is whether there is any performance degradation with the Mellanox
> NIC, and if so, why. Based on that, we can craft the patch as you need.

Regarding the Mellanox degradation, we need to check whether it comes from this patch.

Independent of any performance degradation, it is good practice
to split the logical changes of a rework.