From: "Ananyev, Konstantin"
To: Jerin Jacob
CC: "Richardson, Bruce", Stephen Hemminger, Yerden Zhumabekov, "Verkamp, Daniel", "dev@dpdk.org"
Subject: Re: [dpdk-dev] [PATCH v2] ring: use aligned memzone allocation
Date: Mon, 12 Jun 2017 12:51:26 +0000
Message-ID: <2601191342CEEE43887BDE71AB9772583FB083B9@IRSMSX109.ger.corp.intel.com>
In-Reply-To: <20170612124218.GA20971@jerin>
List-Id: DPDK patches and discussions
> -----Original Message-----
> From: Jerin Jacob [mailto:jerin.jacob@caviumnetworks.com]
> Sent: Monday, June 12, 2017 1:42 PM
>
> -----Original Message-----
> > Date: Mon, 12 Jun 2017 12:17:48 +0000
> > From: "Ananyev, Konstantin"
> >
> > > -----Original Message-----
> > > From: Jerin Jacob [mailto:jerin.jacob@caviumnetworks.com]
> > > Sent: Monday, June 12, 2017 12:41 PM
> > >
> > > -----Original Message-----
> > > > Date: Mon, 12 Jun 2017 12:09:07 +0100
> > > > From: Bruce Richardson
> > > >
> > > > On Mon, Jun 12, 2017 at 04:04:11PM +0530, Jerin Jacob wrote:
> > > > > -----Original Message-----
> > > > > > Date: Mon, 12 Jun 2017 10:18:39 +0000
> > > > > > From: "Ananyev, Konstantin"
> > > > > >
> > > > > > > -----Original Message-----
> > > > > > > From: Jerin Jacob [mailto:jerin.jacob@caviumnetworks.com]
> > > > > > > Sent: Monday, June 12, 2017 4:08 AM
> > > > > > >
> > > > > > > -----Original Message-----
> > > > > > > > Date: Sat, 10 Jun 2017 08:16:44 +0000
> > > > > > > > From: "Ananyev, Konstantin"
> > > > > > > >
> > > > > > > > > -----Original Message-----
> > > > > > > > > From: Jerin Jacob [mailto:jerin.jacob@caviumnetworks.com]
> > > > > > > > > Sent: Friday, June 9, 2017 6:29 PM
> > > > > > > > >
> > > > > > > > > -----Original Message-----
> > > > > > > > > > Date: Fri, 9 Jun 2017 10:16:25 -0700
> > > > > > > > > > From: Stephen Hemminger
> > > > > > > > > >
> > > > > > > > > > On Fri, 9 Jun 2017 18:47:43 +0600
> > > > > > > > > > Yerden Zhumabekov wrote:
> > > > > > > > > >
> > > > > > > > > > > On 06.06.2017 19:19, Ananyev, Konstantin wrote:
> > > > > > > > > > >
> > > > > > > > > > > >>>> Maybe there is some deeper reason for the >= 128-byte alignment logic in rte_ring.h?
> > > > > > > > > > > >>> Might be, would be good to hear the opinion of the author of that change.
> > > > > > > > > > > >> It gives improved performance for core-2-core transfer.
> > > > > > > > > > > > You mean empty cache-line(s) after prod/cons, correct?
> > > > > > > > > > > > That's ok, but why can't we keep them and the whole rte_ring aligned on cache-line boundaries?
> > > > > > > > > > > > Something like that:
> > > > > > > > > > > > struct rte_ring {
> > > > > > > > > > > >     ...
> > > > > > > > > > > >     struct rte_ring_headtail prod __rte_cache_aligned;
> > > > > > > > > > > >     EMPTY_CACHE_LINE __rte_cache_aligned;
> > > > > > > > > > > >     struct rte_ring_headtail cons __rte_cache_aligned;
> > > > > > > > > > > >     EMPTY_CACHE_LINE __rte_cache_aligned;
> > > > > > > > > > > > };
> > > > > > > > > > > >
> > > > > > > > > > > > Konstantin
> > > > > > > > > > >
> > > > > > > > > > > I'm curious, can anyone explain how it actually affects
> > > > > > > > > > > performance? Maybe we can utilize it in application code?
> > > > > > > > > >
> > > > > > > > > > I think it is because on Intel CPUs the CPU will speculatively fetch adjacent cache lines.
> > > > > > > > > > If these cache lines change, then it will create false sharing.
> > > > > > > > >
> > > > > > > > > I see. I think in such cases it is better to abstract it as conditional
> > > > > > > > > compilation.
> > > > > > > > > The above logic has the worst-case cache memory
> > > > > > > > > requirement if the CPU has a 128B CL and no speculative prefetch.
> > > > > >
> > > > > > I suppose we can keep exactly the same logic as we have now:
> > > > > > archs with a 64B cache-line would have an empty cache line,
> > > > > > for archs with a 128B cache-line - no.
> > > > > > Is that what you are looking for?
> > > > >
> > > > > It's valid for an arch with a 128B cache-line and speculative cache prefetch.
> > > > > (Cavium's recent SoCs come with this property)
> > > > > IMHO, instead of making 128B a NOOP, we can introduce a new conditional
> > > > > compilation flag (CONFIG_RTE_ARCH_SPECULATIVE_PREFETCH or something like
> > > > > that) to decide the empty line, and I think in future we can use
> > > > > the same config for similar use cases.
> > > > >
> > > > > Jerin
> > > >
> > > > I'd rather not make it that complicated, and I definitely don't like
> > > > adding in more build-time config options. Initially, I had the extra
> > > > padding always present, but it was felt that it made the resulting
> > > > structure too big. For those systems with 128B cachelines, is the extra
> > > > 256 bytes of space per ring really a problem, given modern systems have
> > > > RAM in the tens of gigabytes?
> > >
> > > I think RAM size does not matter here. I was referring more to L1 and L2
> > > cache size (which is very limited), i.e. if you fetch the unwanted
> > > lines then the CPU has to evict them fast, and that will have an effect on
> > > accommodating the lines of interest in the worker loop.
> >
> > Not sure I understand you here - as far as I know, we can't control HW speculative fetch.
>
> Yes. But we can know in advance whether a product family supports HW speculative fetch
> or not. Typically a product family defines this feature in the arm64 case, and
> we have different targets for each product family.
>
> > It would either happen or not, depending on the actual CPU.
> > The only thing we can control here is what exactly will be fetched:
> > either an empty cache line not used by anyone, or a cache line with some data.
>
> Yes. If we know in advance that a given CPU does not provide HW speculative fetch,
> then we don't need to give it the empty line.

I understand that part; what I don't understand is how not providing the empty
cache line (for specific HW) will improve L1/L2 cache usage.
Suppose you do have an empty line in all cases, and the HW doesn't fetch the
next cache line. Then that line should never enter the cache hierarchy anyway:
you never read/write it manually, and the HW doesn't speculatively fetch it.
Correct?

Konstantin