From: "Ananyev, Konstantin"
To: "Richardson, Bruce", Olivier Matz
Cc: dev@dpdk.org, mb@smartsharesystems.com, "Chilikin, Andrey", jblunck@infradead.org, nelio.laranjeiro@6wind.com, arybchenko@solarflare.com
Date: Thu, 30 Mar 2017 16:45:18 +0000
Message-ID: <2601191342CEEE43887BDE71AB9772583FAE2A51@IRSMSX109.ger.corp.intel.com>
References: <1485271173-13408-1-git-send-email-olivier.matz@6wind.com> <1488966121-22853-1-git-send-email-olivier.matz@6wind.com> <20170329175629.68810924@platinum> <20170329200923.GA11516@bricha3-MOBL3.ger.corp.intel.com> <20170330093108.GA10652@bricha3-MOBL3.ger.corp.intel.com> <20170330140236.0d2ebac8@platinum> <20170330122305.GA14272@bricha3-MOBL3.ger.corp.intel.com>
In-Reply-To: <20170330122305.GA14272@bricha3-MOBL3.ger.corp.intel.com>
Subject: Re: [dpdk-dev] [PATCH 0/9] mbuf: structure reorganization

> -----Original Message-----
> From: Richardson, Bruce
> Sent: Thursday, March 30, 2017 1:23 PM
> To: Olivier Matz
> Cc: dev@dpdk.org; Ananyev, Konstantin; mb@smartsharesystems.com; Chilikin, Andrey; jblunck@infradead.org; nelio.laranjeiro@6wind.com; arybchenko@solarflare.com
> Subject: Re: [dpdk-dev] [PATCH 0/9] mbuf: structure reorganization
>
> On Thu, Mar 30, 2017 at 02:02:36PM +0200, Olivier Matz wrote:
> > On Thu, 30 Mar 2017 10:31:08 +0100, Bruce Richardson wrote:
> > > On Wed, Mar 29, 2017 at 09:09:23PM +0100, Bruce Richardson wrote:
> > > > On Wed, Mar 29, 2017 at 05:56:29PM +0200, Olivier Matz wrote:
> > > > > Hi,
> > > > >
> > > > > Does anyone have any other comment on this series?
> > > > > Can it be applied?
> > > > >
> > > > > Thanks,
> > > > > Olivier
> > > > >
> > > > I assume all driver maintainers have done performance analysis to check
> > > > for regressions. Perhaps they can confirm this is the case.
> > > >
> > > > /Bruce
> > > >
> > > In the absence of anyone else reporting performance numbers with this
> > > patchset, I ran a single-thread testpmd test using 2 x 40G ports (i40e
> > > driver). With RX & TX descriptor ring sizes of 512 or above, I'm seeing a
> > > fairly noticeable performance drop. I still need to dig in more, e.g. do
> > > an RFC2544 zero-loss test, and also bisect the patchset to see what
> > > parts may be causing the problem.
> > >
> > > Has anyone else tried any other drivers or systems to see what the perf
> > > impact of this set may be?
> >
> > I did, of course. I didn't see any noticeable performance drop on
> > ixgbe (4 NICs, one port per NIC, 1 core). I can replay the test with
> > the current version.
> >
> I had no doubt you did some perf testing! :-)
>
> Perhaps the regression I see is limited to the i40e driver. I've confirmed I
> still see it with that driver in zero-loss tests, so the next step is to try
> to localise which change in the patchset is causing it.
>
> Ideally, though, I think we should see acks or other comments from
> driver maintainers at least confirming that they have tested. You cannot
> be held responsible for testing every DPDK driver before you submit work
> like this.
>
Unfortunately I also see a regression.
I did a quick flood test on a 2.8 GHz IVB machine with 4 x 10Gb ports.
I observed a drop even with the default testpmd RXD/TXD numbers (128/512):
from 50.8 Mpps down to 47.8 Mpps.
From what I am seeing, the particular patch causing it is:
[dpdk-dev,3/9] mbuf: set mbuf fields while in pool

gcc version 5.3.1 20160406 (Red Hat 5.3.1-6) (GCC)

cmdline:
./dpdk.org-1705-mbuf1/x86_64-native-linuxapp-gcc/app/testpmd --lcores='7,8' -n 4 --socket-mem='1024,0' -w 04:00.1 -w 07:00.1 -w 0b:00.1 -w 0e:00.1 -- -i

Konstantin
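[Editor's note: for readers skimming the thread, the size of the regression reported above can be worked out directly from the two quoted throughput figures (50.8 Mpps before the series, 47.8 Mpps with patch 3/9 applied). A quick awk sketch, purely illustrative:]

```shell
# Relative throughput drop from the flood-test numbers quoted above:
# 50.8 Mpps (before the series) vs 47.8 Mpps (with patch 3/9 applied).
base=50.8
with_patch=47.8
awk -v b="$base" -v n="$with_patch" \
    'BEGIN { printf "drop: %.1f Mpps (%.1f%%)\n", b - n, (b - n) / b * 100 }'
# prints: drop: 3.0 Mpps (5.9%)
```

That is roughly a 6% drop on this setup, which is consistent with it being treated as a blocking regression in the discussion.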