From: "Ananyev, Konstantin"
To: Zoltan Kiss, "dev@dpdk.org"
Subject: Re: [dpdk-dev] [PATCH] mempool: limit cache_size
Date: Mon, 18 May 2015 14:13:22 +0000
Message-ID: <2601191342CEEE43887BDE71AB9772582142FCE9@irsmsx105.ger.corp.intel.com>
In-Reply-To: <5559E9A4.3020400@linaro.org>
List-Id: patches and discussions about DPDK

> -----Original Message-----
> From: Zoltan Kiss
> [mailto:zoltan.kiss@linaro.org]
> Sent: Monday, May 18, 2015 2:31 PM
> To: Ananyev, Konstantin; dev@dpdk.org
> Subject: Re: [dpdk-dev] [PATCH] mempool: limit cache_size
>
> On 18/05/15 14:14, Ananyev, Konstantin wrote:
> >
> >> -----Original Message-----
> >> From: Zoltan Kiss [mailto:zoltan.kiss@linaro.org]
> >> Sent: Monday, May 18, 2015 1:50 PM
> >> To: Ananyev, Konstantin; dev@dpdk.org
> >> Subject: Re: [dpdk-dev] [PATCH] mempool: limit cache_size
> >>
> >> On 18/05/15 13:41, Ananyev, Konstantin wrote:
> >>>
> >>>> -----Original Message-----
> >>>> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Zoltan Kiss
> >>>> Sent: Monday, May 18, 2015 1:28 PM
> >>>> To: dev@dpdk.org
> >>>> Subject: Re: [dpdk-dev] [PATCH] mempool: limit cache_size
> >>>>
> >>>> Hi,
> >>>>
> >>>> Any opinion on this patch?
> >>>>
> >>>> Regards,
> >>>>
> >>>> Zoltan
> >>>>
> >>>> On 13/05/15 19:59, Zoltan Kiss wrote:
> >>>>> Otherwise cache_flushthresh can be bigger than n, and
> >>>>> a consumer can starve others by keeping every element
> >>>>> either in use or in the cache.
> >>>>>
> >>>>> Signed-off-by: Zoltan Kiss
> >>>>> ---
> >>>>>  lib/librte_mempool/rte_mempool.c | 3 ++-
> >>>>>  lib/librte_mempool/rte_mempool.h | 2 +-
> >>>>>  2 files changed, 3 insertions(+), 2 deletions(-)
> >>>>>
> >>>>> diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
> >>>>> index cf7ed76..ca6cd9c 100644
> >>>>> --- a/lib/librte_mempool/rte_mempool.c
> >>>>> +++ b/lib/librte_mempool/rte_mempool.c
> >>>>> @@ -440,7 +440,8 @@ rte_mempool_xmem_create(const char *name, unsigned n, unsigned elt_size,
> >>>>>  	mempool_list = RTE_TAILQ_CAST(rte_mempool_tailq.head, rte_mempool_list);
> >>>>>
> >>>>>  	/* asked cache too big */
> >>>>> -	if (cache_size > RTE_MEMPOOL_CACHE_MAX_SIZE) {
> >>>>> +	if (cache_size > RTE_MEMPOOL_CACHE_MAX_SIZE ||
> >>>>> +	    (uint32_t) cache_size * CACHE_FLUSHTHRESH_MULTIPLIER > n) {
> >>>>>  		rte_errno = EINVAL;
> >>>>>  		return NULL;
> >>>>>  	}
> >>>
> >>> Why not just 'cache_size > n' then?
> >>
> >> The commit message says: "Otherwise cache_flushthresh can be bigger than
> >> n, and a consumer can starve others by keeping every element either in
> >> use or in the cache."
> >
> > Ah yes, you are right - your condition is more restrictive, which is better.
> > Though here you implicitly convert cache_size and n to floats and compare 2 floats:
> > (uint32_t) cache_size * CACHE_FLUSHTHRESH_MULTIPLIER > n
> > Shouldn't it be:
> > (uint32_t)(cache_size * CACHE_FLUSHTHRESH_MULTIPLIER) > n
> > so we do the conversion back to uint32_t and compare unsigned integers instead?
> > Same as below:
> > mp->cache_flushthresh = (uint32_t)
> > 	(cache_size * CACHE_FLUSHTHRESH_MULTIPLIER);
>
> To bring it further: how about ditching the whole cache_flushthresh
> member of the mempool structure, and use this:
>
> #define CACHE_FLUSHTHRESH(mp) (uint32_t)((mp)->cache_size * 1.5)

That's quite expensive and I think would slow down mempool_put() quite a lot.
So I'd suggest we keep cache_flushthresh as it is.
>
> Furthermore, do we want to expose the flush threshold multiplier through
> the config file?

Hmm, my opinion is no - so far no one has asked for that, and as a general
tendency we are trying to reduce the number of options in the config file.
Do you have any good justification for when the current value is not good enough?
Anyway, that could probably be the subject of another patch/discussion.
Konstantin

> > ?
> >
> > In fact, as we use it more than once, it probably makes sense to create a macro for it,
> > something like:
> > #define CALC_CACHE_FLUSHTHRESH(c) ((uint32_t)((c) * CACHE_FLUSHTHRESH_MULTIPLIER))
> >
> > Or even
> >
> > #define CALC_CACHE_FLUSHTHRESH(c) ((typeof (c))((c) * CACHE_FLUSHTHRESH_MULTIPLIER))
> >
> >
> > Konstantin
> >
> >>
> >>> Konstantin
> >>>
> >>>>> diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
> >>>>> index 9001312..a4a9610 100644
> >>>>> --- a/lib/librte_mempool/rte_mempool.h
> >>>>> +++ b/lib/librte_mempool/rte_mempool.h
> >>>>> @@ -468,7 +468,7 @@ typedef void (rte_mempool_ctor_t)(struct rte_mempool *, void *);
> >>>>>   * If cache_size is non-zero, the rte_mempool library will try to
> >>>>>   * limit the accesses to the common lockless pool, by maintaining a
> >>>>>   * per-lcore object cache. This argument must be lower or equal to
> >>>>> - * CONFIG_RTE_MEMPOOL_CACHE_MAX_SIZE. It is advised to choose
> >>>>> + * CONFIG_RTE_MEMPOOL_CACHE_MAX_SIZE and n / 1.5. It is advised to choose
> >>>>>   * cache_size to have "n modulo cache_size == 0": if this is
> >>>>>   * not the case, some elements will always stay in the pool and will
> >>>>>   * never be used. The access to the per-lcore table is of course