From: "Liang, Cunming"
To: "Ananyev, Konstantin", Stephen Hemminger, "Richardson, Bruce"
Date: Fri, 9 Jan 2015 09:40:54 +0000
Cc: "dev@dpdk.org"
Subject: Re: [dpdk-dev] [RFC PATCH 0/7] support multi-phtread per lcore

> -----Original Message-----
> From: Ananyev, Konstantin
> Sent: Friday, January 09, 2015 1:06 AM
> To: Liang, Cunming; Stephen Hemminger; Richardson, Bruce
> Cc: dev@dpdk.org
> Subject: RE: [dpdk-dev] [RFC PATCH 0/7] support multi-phtread per lcore
>
> Hi Steve,
>
> > -----Original Message-----
> > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Liang, Cunming
> > Sent: Tuesday, December 23, 2014 9:52 AM
> > To: Stephen Hemminger; Richardson, Bruce
> > Cc: dev@dpdk.org
> > Subject: Re: [dpdk-dev] [RFC PATCH 0/7] support multi-phtread per lcore
> >
> > > -----Original Message-----
> > > From: Stephen Hemminger [mailto:stephen@networkplumber.org]
> > > Sent: Tuesday, December 23, 2014 2:29 AM
> > > To: Richardson, Bruce
> > > Cc: Liang, Cunming; dev@dpdk.org
> > > Subject: Re: [dpdk-dev] [RFC PATCH 0/7] support multi-phtread per lcore
> > >
> > > On Mon, 22 Dec 2014 09:46:03 +0000
> > > Bruce Richardson wrote:
> > >
> > > > On Mon, Dec 22, 2014 at 01:51:27AM +0000, Liang, Cunming wrote:
> > > > > ...
> > > > > > I'm conflicted on this one. However, I think far more applications would
> > > > > > be broken to start having to use thread_id in place of an lcore_id than
> > > > > > would be broken by having the lcore_id no longer actually correspond to a core.
> > > > > > I'm actually struggling to come up with a large number of scenarios where
> > > > > > it's important to an app to determine the cpu it's running on, compared to
> > > > > > the large number of cases where you need to have a data-structure per thread.
> In > > > DPDK > > > > > > libs > > > > > > alone, you see this assumption that lcore_id =3D=3D thread_id a= large > number > > > of > > > > > > times. > > > > > > > > > > > > Despite the slight logical inconsistency, I think it's better t= o avoid > > > introducing > > > > > > a thread-id and continue having lcore_id representing a unique = thread. > > > > > > > > > > > > /Bruce > > > > > > > > > > Ok, I understand it. > > > > > I list the implicit meaning if using lcore_id representing the un= ique thread. > > > > > 1). When lcore_id less than RTE_MAX_LCORE, it still represents th= e logical > > > core id. > > > > > 2). When lcore_id large equal than RTE_MAX_LCORE, it represents a= n > unique > > > id for thread. > > > > > 3). Most of APIs(except rte_lcore_id()) in rte_lcore.h suggest to= be used > only > > > in CASE 1) > > > > > 4). rte_lcore_id() can be used in CASE 2), but the return value n= o matter > > > represent a logical core id. > > > > > > > > > > If most of us feel it's acceptable, I'll prepare for the RFC v2 b= ase on this > > > conclusion. > > > > > > > > > > /Cunming > > > > > > > > Sorry, I don't like that suggestion either, as having lcore_id valu= es greater > > > > than RTE_MAX_LCORE is terrible, as how will people know how to > dimension > > > arrays > > > > to be indexes by lcore id? Given the choice, if we are not going to= just use > > > > lcore_id as a generic thread id, which is always between 0 and > > > RTE_MAX_LCORE > > > > we can look to define a new thread_id variable to hold that. Howeve= r, it > should > > > > have a bounded range. > > > > From an ease-of-porting perspective, I still think that the simples= t option is > to > > > > use the existing lcore_id and accept the fact that it's now a threa= d id rather > > > > than an actual physical lcore. Question is, is would that cause us = lots of > issues > > > > in the future? > > > > > > > > /Bruce > > > > > > The current rte_lcore_id() has different meaning the thread. 
> > > Your proposal will break code that uses lcore_id to do per-cpu statistics and
> > > the lcore_config code in the samples.
> >
> > [Liang, Cunming] +1.
>
> Few more thoughts on that subject:
>
> Actually there is one more place in the lib where lcore_id is used (and it should
> be unique): rte_spinlock_recursive_lock() / rte_spinlock_recursive_trylock().
> So if we are going to replace lcore_id with thread_id as a unique thread index, then
> these functions have to be updated too.

[Liang, Cunming] You're right; if we decide to use thread_id, we have to check and
replace rte_lcore_id()/RTE_PER_LCORE(_lcore_id) in all the affected places.
Now I'm buying the proposal to keep using rte_lcore_id() to return the unique id.
Meanwhile, I think it's necessary to also have the real cpu id; it's helpful for NUMA
socket checking.
I will provide a new API rte_curr_cpu() to return the cpu the thread currently runs on,
no matter whether the thread runs on a coremasked or non-coremasked cpu.
So the socket info stored in lcore_config is still useful for choosing the local socket.

>
> About maintaining our own unique thread_id inside shared memory
> (_get_linear_tid()/_put_linear_tid()):
> there is one thing that worries me with that approach.
> In case of abnormal process termination, TIDs used by that process will remain
> 'reserved', and there is no way to know which TIDs were used by the terminated process.
> So there could be a situation with the DPDK multi-process model where, after a
> secondary process terminates abnormally, it wouldn't be possible to restart it -
> we just run out of 'free' TIDs.

[Liang, Cunming] That's a good point, I think. It applies not only to the thread id but
to all dynamically allocated resources (e.g. memzone, mempool).
We don't have garbage collection or a heartbeat to handle abnormal exit of a secondary process.

>
> Which makes me think there is probably no need to introduce a new globally
> unique 'thread_id'?
> Might be just lcore_id is enough?
> As Mirek and Bruce suggested, we can treat it as a sort of 'unique thread id'
> inside EAL.

[Liang, Cunming] I think we'd better have two: one for the 'unique thread id', one for
the real cpu id, no matter which of them is named lcore_id/thread_id/cpu_id etc.
For the cpu id, we need it to check/get the NUMA info.
A pthread may migrate from one core to another, so the thread's 'socket id' may change;
we have the per-cpu socket info in lcore_config.

> Or as a 'virtual' core id that can run on a set of physical cpus, and these subsets
> for different 'virtual' cores can intersect.
> Then basically we can keep the legacy behaviour with '-c <coremask>', where each
> lcore_id matches one to one with a physical cpu, and introduce a new one,
> something like:
> --lcores='(lcore_set1)=(cpu_set1),..,(lcore_setN)=(cpu_setN)'.
> So let's say --lcores='(0-7)=(0,2-4),(10)=(7),(8)=(all)' would mean:
> create 10 EAL threads, bind the threads with lcore_id=[0-7] to cpuset <0,2,3,4>,
> bind the thread with lcore_id=10 to cpu 7, and allow lcore_id=8 to run on any
> cpu in the system.
> Of course '-c' and '--lcores' would be mutually exclusive, and we would need to
> update rte_lcore_to_socket_id()
> and introduce rte_lcore_(set|get)_affinity().
>
> Does it make sense to you?

[Liang, Cunming] If lcore_id is assigned on the command line, the user has to handle
the conflict between '-c' and '--lcores'.
In this case, if lcore_id 0~10 is occupied, does the coremasked thread start from 11?
Also consider the case where the application creates a new pthread at runtime. As no
lcore id for the new thread is mentioned on the command line, it still falls back to
dynamic allocation.
I mean, on startup the user may have no idea how many pthreads they will run.
'rte_pthread_assign_lcore' does the same things as 'rte_lcore_(set|get)_affinity()'.
If we keep using lcore_id, I like the name you proposed.
I'll send my code update next Monday.
>
> BTW, one more thing while we are on it: it is probably a good time to do
> something about our interrupt thread?
> It is a bit strange that we can't use rte_pktmbuf_free() or
> rte_spinlock_recursive_lock() from our own interrupt/alarm handlers.
>
> Konstantin