From mboxrd@z Thu Jan 1 00:00:00 1970
From: "Ananyev, Konstantin"
To: Zoltan Kiss, "dev@dpdk.org"
Thread-Topic: [dpdk-dev] [PATCH] mempool: limit cache_size
Date: Mon, 18 May 2015 13:14:50 +0000
Message-ID: <2601191342CEEE43887BDE71AB9772582142FA79@irsmsx105.ger.corp.intel.com>
References: <1431543554-442-1-git-send-email-zoltan.kiss@linaro.org> <5559DAC1.8050106@linaro.org> <2601191342CEEE43887BDE71AB9772582142FA3B@irsmsx105.ger.corp.intel.com> <5559E00C.3050708@linaro.org>
In-Reply-To: <5559E00C.3050708@linaro.org>
Subject: Re: [dpdk-dev] [PATCH] mempool: limit cache_size
List-Id: patches and discussions about DPDK

> -----Original Message-----
> From: Zoltan Kiss [mailto:zoltan.kiss@linaro.org]
> Sent: Monday, May 18, 2015 1:50 PM
> To: Ananyev, Konstantin; dev@dpdk.org
> Subject: Re: [dpdk-dev] [PATCH] mempool: limit cache_size
>
>
>
> On 18/05/15 13:41, Ananyev, Konstantin wrote:
> >
> >
> >> -----Original Message-----
> >> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Zoltan Kiss
> >> Sent: Monday, May 18, 2015 1:28 PM
> >> To: dev@dpdk.org
> >> Subject: Re: [dpdk-dev] [PATCH] mempool: limit cache_size
> >>
> >> Hi,
> >>
> >> Any opinion on this patch?
> >>
> >> Regards,
> >>
> >> Zoltan
> >>
> >> On 13/05/15 19:59, Zoltan Kiss wrote:
> >>> Otherwise cache_flushthresh can be bigger than n, and
> >>> a consumer can starve others by keeping every element
> >>> either in use or in the cache.
> >>>
> >>> Signed-off-by: Zoltan Kiss
> >>> ---
> >>>   lib/librte_mempool/rte_mempool.c | 3 ++-
> >>>   lib/librte_mempool/rte_mempool.h | 2 +-
> >>>   2 files changed, 3 insertions(+), 2 deletions(-)
> >>>
> >>> diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
> >>> index cf7ed76..ca6cd9c 100644
> >>> --- a/lib/librte_mempool/rte_mempool.c
> >>> +++ b/lib/librte_mempool/rte_mempool.c
> >>> @@ -440,7 +440,8 @@ rte_mempool_xmem_create(const char *name, unsigned n, unsigned elt_size,
> >>>   	mempool_list = RTE_TAILQ_CAST(rte_mempool_tailq.head, rte_mempool_list);
> >>>
> >>>   	/* asked cache too big */
> >>> -	if (cache_size > RTE_MEMPOOL_CACHE_MAX_SIZE) {
> >>> +	if (cache_size > RTE_MEMPOOL_CACHE_MAX_SIZE ||
> >>> +	    (uint32_t) cache_size * CACHE_FLUSHTHRESH_MULTIPLIER > n) {
> >>>   		rte_errno = EINVAL;
> >>>   		return NULL;
> >>>   	}
> >
> > Why not just 'cache_size > n' then?
>
> The commit message says: "Otherwise cache_flushthresh can be bigger than
> n, and a consumer can starve others by keeping every element either in
> use or in the cache."
Ah yes, you're right - your condition is more restrictive, which is better.

Though here you implicitly convert cache_size and n to floats and compare two floats:

	(uint32_t) cache_size * CACHE_FLUSHTHRESH_MULTIPLIER > n

Shouldn't it be:

	(uint32_t)(cache_size * CACHE_FLUSHTHRESH_MULTIPLIER) > n

so that we convert back to uint32_t and compare unsigned integers instead?

Same as below:

	mp->cache_flushthresh = (uint32_t)
		(cache_size * CACHE_FLUSHTHRESH_MULTIPLIER);

?

In fact, as we use it more than once, it probably makes sense to create a macro for it, something like:

#define CALC_CACHE_FLUSHTHRESH(c) ((uint32_t)((c) * CACHE_FLUSHTHRESH_MULTIPLIER))

Or even:

#define CALC_CACHE_FLUSHTHRESH(c) ((typeof(c))((c) * CACHE_FLUSHTHRESH_MULTIPLIER))

Konstantin

>
> > Konstantin
> >
> >>> diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
> >>> index 9001312..a4a9610 100644
> >>> --- a/lib/librte_mempool/rte_mempool.h
> >>> +++ b/lib/librte_mempool/rte_mempool.h
> >>> @@ -468,7 +468,7 @@ typedef void (rte_mempool_ctor_t)(struct rte_mempool *, void *);
> >>>    *   If cache_size is non-zero, the rte_mempool library will try to
> >>>    *   limit the accesses to the common lockless pool, by maintaining a
> >>>    *   per-lcore object cache. This argument must be lower or equal to
> >>> -  *   CONFIG_RTE_MEMPOOL_CACHE_MAX_SIZE. It is advised to choose
> >>> +  *   CONFIG_RTE_MEMPOOL_CACHE_MAX_SIZE and n / 1.5. It is advised to choose
> >>>    *   cache_size to have "n modulo cache_size == 0": if this is
> >>>    *   not the case, some elements will always stay in the pool and will
> >>>    *   never be used. The access to the per-lcore table is of course
> >>>