From: "Ananyev, Konstantin"
To: "Richardson, Bruce", Olivier Matz
CC: dev@dpdk.org, mb@smartsharesystems.com, "Chilikin, Andrey", jblunck@infradead.org, nelio.laranjeiro@6wind.com, arybchenko@solarflare.com
Date: Fri, 31 Mar 2017 01:00:49 +0000
Message-ID: <2601191342CEEE43887BDE71AB9772583FAE2DD8@IRSMSX109.ger.corp.intel.com>
In-Reply-To: <2601191342CEEE43887BDE71AB9772583FAE2A6E@IRSMSX109.ger.corp.intel.com>
Subject: Re: [dpdk-dev] [PATCH 0/9] mbuf: structure reorganization
List-Id: DPDK patches and discussions

> > >
> > > -----Original Message-----
> > > From: Richardson, Bruce
> > > Sent: Thursday, March 30, 2017 1:23 PM
> > > To: Olivier Matz
> > > Cc: dev@dpdk.org; Ananyev, Konstantin; mb@smartsharesystems.com; Chilikin, Andrey; jblunck@infradead.org; nelio.laranjeiro@6wind.com; arybchenko@solarflare.com
> > > Subject: Re: [dpdk-dev] [PATCH 0/9] mbuf: structure reorganization
> > >
> > > On Thu, Mar 30, 2017 at 02:02:36PM +0200, Olivier Matz wrote:
> > > > On Thu, 30 Mar 2017 10:31:08 +0100, Bruce Richardson wrote:
> > > > > On Wed, Mar 29, 2017 at 09:09:23PM +0100, Bruce Richardson wrote:
> > > > > > On Wed, Mar 29, 2017 at 05:56:29PM +0200, Olivier Matz wrote:
> > > > > > > Hi,
> > > > > > >
> > > > > > > Does anyone have any other comment on this series?
> > > > > > > Can it be applied?
> > > > > > >
> > > > > > > Thanks,
> > > > > > > Olivier
> > > > > > >
> > > > > > I assume all driver maintainers have done performance analysis to check
> > > > > > for regressions. Perhaps they can confirm this is the case.
> > > > > >
> > > > > > /Bruce
> > > > > >
> > > > > In the absence of anyone else reporting performance numbers with this
> > > > > patchset, I ran a single-thread testpmd test using 2 x 40G ports (i40e
> > > > > driver). With RX & TX descriptor ring sizes of 512 or above, I'm seeing a
> > > > > fairly noticeable performance drop. I still need to dig in more, e.g. do
> > > > > an RFC2544 zero-loss test, and also bisect the patchset to see what
> > > > > parts may be causing the problem.
> > > > >
> > > > > Has anyone else tried any other drivers or systems to see what the perf
> > > > > impact of this set may be?
> > > >
> > > > I did, of course. I didn't see any noticeable performance drop on
> > > > ixgbe (4 NICs, one port per NIC, 1 core). I can replay the test with
> > > > the current version.
> > > >
> > > I had no doubt you did some perf testing! :-)
> > >
> > > Perhaps the regression I see is limited to the i40e driver. I've confirmed I
> > > still see it with that driver in zero-loss tests, so the next step is to try
> > > and localise what change in the patchset is causing it.
> > >
> > > Ideally, though, I think we should see acks or other comments from
> > > driver maintainers at least confirming that they have tested. You cannot
> > > be held responsible for testing every DPDK driver before you submit work
> > > like this.
> > >
> >
> > Unfortunately I also see a regression.
> > Did a quick flood test on 2.8 GHz IVB with 4x10Gb.
>
> Sorry, forgot to mention - it is on ixgbe.
> So it doesn't look i40e-specific.
>
> > Observed a drop even with default testpmd RXD/TXD numbers (128/512):
> > from 50.8 Mpps down to 47.8 Mpps.
> > From what I am seeing, the particular patch causing it is:
> > [dpdk-dev,3/9] mbuf: set mbuf fields while in pool
> >
> > cc version 5.3.1 20160406 (Red Hat 5.3.1-6) (GCC)
> > cmdline:
> > ./dpdk.org-1705-mbuf1/x86_64-native-linuxapp-gcc/app/testpmd --lcores='7,8' -n 4 --socket-mem='1024,0' -w 04:00.1 -w 07:00.1 -w 0b:00.1 -w 0e:00.1 -- -i
> >

After applying the patch below I got nearly the original numbers (though not quite) on my box:

dpdk.org mainline:    50.8 Mpps
with Olivier's patch: 47.8 Mpps
with patch below:     50.4 Mpps

What I tried to do in it is avoid unnecessary updates of the mbuf inside rte_pktmbuf_prefree_seg().
For one segment per packet it seems to help.
Though so far I didn't try it on i40e, and didn't do any testing for the multi-seg scenario.

Konstantin

$ cat patch.mod4
diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
index d7af852..558233f 100644
--- a/lib/librte_mbuf/rte_mbuf.h
+++ b/lib/librte_mbuf/rte_mbuf.h
@@ -1283,12 +1283,28 @@ rte_pktmbuf_prefree_seg(struct rte_mbuf *m)
 {
 	__rte_mbuf_sanity_check(m, 0);
 
-	if (likely(rte_mbuf_refcnt_update(m, -1) == 0)) {
+	if (likely(rte_mbuf_refcnt_read(m) == 1)) {
+
+		if (m->next != NULL) {
+			m->next = NULL;
+			m->nb_segs = 1;
+		}
+
+		if (RTE_MBUF_INDIRECT(m))
+			rte_pktmbuf_detach(m);
+
+		return m;
+
+	} else if (rte_atomic16_add_return(&m->refcnt_atomic, -1) == 0) {
+
 		if (RTE_MBUF_INDIRECT(m))
 			rte_pktmbuf_detach(m);
 
-		m->next = NULL;
-		m->nb_segs = 1;
+		if (m->next != NULL) {
+			m->next = NULL;
+			m->nb_segs = 1;
+		}
+
 		rte_mbuf_refcnt_set(m, 1);
 
 		return m;