From mboxrd@z Thu Jan 1 00:00:00 1970
From: "Iremonger, Bernard"
To: Thomas Monjalon, Pavan Nikhilesh Bhagavatula
CC: "dev@dpdk.org", Jerin Jacob Kollanukkaran, "arybchenko@solarflare.com", "Yigit, Ferruh"
Thread-Topic: [dpdk-dev] [PATCH v2] app/testpmd: add mempool bulk get for txonly mode
Date: Tue, 26 Mar 2019 11:50:05 +0000
Message-ID: <8CEF83825BEC744B83065625E567D7C260D7656E@IRSMSX108.ger.corp.intel.com>
References: <20190228194128.14236-1-pbhagavatula@marvell.com> <20190301134700.8220-1-pbhagavatula@marvell.com> <4933257.X50ECdZ2iI@xps>
In-Reply-To: <4933257.X50ECdZ2iI@xps>
Subject: Re: [dpdk-dev] [PATCH v2] app/testpmd: add mempool bulk get for txonly mode
List-Id: DPDK patches and discussions

Hi Pavan,

> Subject: Re: [dpdk-dev] [PATCH v2] app/testpmd: add mempool bulk get for
> txonly mode
>
> 01/03/2019 14:47, Pavan Nikhilesh Bhagavatula:
> > From: Pavan Nikhilesh
> >
> > Use mempool bulk get ops to alloc burst of packets and process them.
> > If bulk get fails fallback to rte_mbuf_raw_alloc.
> >
> > Suggested-by: Andrew Rybchenko
> > Signed-off-by: Pavan Nikhilesh
> > ---
> >
> > v2 Changes:
> > - Use bulk ops for fetching segments. (Andrew Rybchenko)
> > - Fallback to rte_mbuf_raw_alloc if bulk get fails. (Andrew
> >   Rybchenko)
> > - Fix mbufs not being freed when there is no more mbufs available for
> >   segments. (Andrew Rybchenko)
> >
> >  app/test-pmd/txonly.c | 159 ++++++++++++++++++++++++------------------
> >  1 file changed, 93 insertions(+), 66 deletions(-)
>
> This is changing a lot of lines so it is difficult to know what is changed exactly.
> Please split it with a refactoring without any real change, and introduce the
> real change later.
> Then we'll be able to examine it and check the performance.
>
> We need to have more tests with more hardware in order to better
> understand the performance improvement.
> For info, a degradation is seen in Mellanox lab.

+1

Not easy to review.
Btw, unnecessary change at lines 157 and 158 in txonly.c

Regards,
Bernard