From: "Richardson, Bruce"
To: Jerin Jacob
Cc: "dev@dpdk.org", "thomas.monjalon@6wind.com"
Subject: Re: [dpdk-dev] [PATCH v2] ethdev: make struct rte_eth_dev cache aligned
Date: Wed, 4 May 2016 13:53:39 +0000
Message-ID: <59AF69C657FD0841A61C55336867B5B035A4D9C7@IRSMSX103.ger.corp.intel.com>
In-Reply-To: <20160504134231.GA13071@localhost.localdomain>

> -----Original Message-----
> From: Jerin Jacob [mailto:jerin.jacob@caviumnetworks.com]
> Sent: Wednesday, May 4, 2016 2:43 PM
> To: Richardson, Bruce
> Cc: dev@dpdk.org; thomas.monjalon@6wind.com
> Subject: Re: [dpdk-dev] [PATCH v2] ethdev: make struct rte_eth_dev cache
> aligned
>
> On Wed, May 04, 2016 at 12:09:50PM +0100, Bruce Richardson wrote:
> > On Tue, May 03, 2016 at 06:12:07PM +0530, Jerin Jacob wrote:
> > > Elements of struct rte_eth_dev are used in the fast path.
> > > Make struct rte_eth_dev cache aligned to avoid cases where
> > > rte_eth_dev elements share the same cache line with other structures.
> > >
> > > Signed-off-by: Jerin Jacob
> > > ---
> > > v2:
> > > Remove __rte_cache_aligned from rte_eth_devices and keep it only at
> > > the struct rte_eth_dev definition, as suggested by Bruce
> > > http://dpdk.org/dev/patchwork/patch/12328/
> > > ---
> > >  lib/librte_ether/rte_ethdev.h | 2 +-
> > >  1 file changed, 1 insertion(+), 1 deletion(-)
> > >
> > > diff --git a/lib/librte_ether/rte_ethdev.h b/lib/librte_ether/rte_ethdev.h
> > > index 2757510..48f14d5 100644
> > > --- a/lib/librte_ether/rte_ethdev.h
> > > +++ b/lib/librte_ether/rte_ethdev.h
> > > @@ -1615,7 +1615,7 @@ struct rte_eth_dev {
> > >  	struct rte_eth_rxtx_callback *pre_tx_burst_cbs[RTE_MAX_QUEUES_PER_PORT];
> > >  	uint8_t attached; /**< Flag indicating the port is attached */
> > >  	enum rte_eth_dev_type dev_type; /**< Flag indicating the device type */
> > > -};
> > > +} __rte_cache_aligned;
> > >
> > >  struct rte_eth_dev_sriov {
> > >  	uint8_t active; /**< SRIOV is active with 16, 32 or 64 pools */
> > > --
> >
> > Hi Jerin,
>
> Hi Bruce,
>
> > have you seen a performance degradation due to ethdev elements sharing
> > a cache
>
> No. Not because of sharing the cache line.
>
> > line? I ask because, surprisingly for me, I actually see a performance
> > regression
>
> I see a performance degradation in the PMD in my setup, where independent
> changes are causing the performance issue in the PMD (~<100k). That's the
> reason I thought of making things cache-line aligned wherever it makes
> sense, so that independent changes shouldn't impact the PMD performance;
> this patch was an initiative for the same.
>
> > when I apply the above patch. It's not a big change - a perf reduction
> > of <1% - but still noticeable across multiple runs using testpmd. I'm
> > using two 1x40G NICs with the i40e driver, and I see ~100kpps less
> > traffic per port after applying the patch. [CPU: Intel(R) Xeon(R) CPU
> > E5-2699 v3 @ 2.30GHz]
>
> This particular patch does not have any performance degradation in my
> setup.
> CPU: ThunderX

Ok, so I take it that this patch is performance neutral on your setup, then?
If that's the case, can we hold off on merging it, on the basis that it's not
needed and does cause a slight regression?

Thanks,
/Bruce
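
For readers coming to this thread from the archive, a minimal sketch of the
mechanism under discussion may help. It is not taken from the patch or from
the DPDK headers: the macro name cache_aligned, the CACHE_LINE_SIZE value of
64 bytes, and the struct members are all assumptions for illustration. It
shows the usual GCC-style way a cache-line alignment attribute is applied to
a struct type, so that each instance starts on its own cache line and cannot
falsely share one with a neighbouring object.

/*
 * Illustrative sketch only (not from the thread): mimics what a
 * cache-line alignment attribute of this kind is generally understood
 * to do. CACHE_LINE_SIZE and the members are assumptions; some targets
 * use a 128-byte line instead of 64.
 */
#include <stdalign.h>
#include <stdio.h>

#define CACHE_LINE_SIZE 64
#define cache_aligned __attribute__((aligned(CACHE_LINE_SIZE)))

/* Hot, per-device state touched on every rx/tx burst. */
struct fast_path_state {
	void *rx_pkt_burst;
	void *tx_pkt_burst;
	unsigned long stats[4];
} cache_aligned;	/* size and start address rounded up to a full line */

struct unrelated_state {
	int counter;	/* frequently written by another core */
};

int main(void)
{
	static struct fast_path_state dev;
	static struct unrelated_state other;

	/*
	 * Because the type is aligned (and padded) to CACHE_LINE_SIZE,
	 * writes to 'other' can never dirty the line holding 'dev' --
	 * the false-sharing scenario the patch guards against.
	 */
	printf("alignof = %zu, sizeof = %zu\n",
	       alignof(struct fast_path_state),
	       sizeof(struct fast_path_state));
	printf("dev @ %p, other @ %p\n", (void *)&dev, (void *)&other);
	return 0;
}

The flip side, and one plausible reading of the small regression reported
above, is that padding every element to a full cache line makes arrays of the
struct larger, so data that previously sat on one line can end up spread over
more of them.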