From: "Ananyev, Konstantin"
To: Zoltan Kiss, "dev@dpdk.org"
Date: Mon, 18 May 2015 12:41:23 +0000
Subject: Re: [dpdk-dev] [PATCH] mempool: limit cache_size
Message-ID: <2601191342CEEE43887BDE71AB9772582142FA3B@irsmsx105.ger.corp.intel.com>
In-Reply-To: <5559DAC1.8050106@linaro.org>
List-Id: patches and discussions about DPDK

> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Zoltan Kiss
> Sent: Monday, May 18, 2015 1:28 PM
> To: dev@dpdk.org
> Subject: Re: [dpdk-dev] [PATCH] mempool: limit cache_size
>
> Hi,
>
> Any opinion on this patch?
>
> Regards,
>
> Zoltan
>
> On 13/05/15 19:59, Zoltan Kiss wrote:
> > Otherwise cache_flushthresh can be bigger than n, and
> > a consumer can starve others by keeping every element
> > either in use or in the cache.
> >
> > Signed-off-by: Zoltan Kiss
> > ---
> >  lib/librte_mempool/rte_mempool.c | 3 ++-
> >  lib/librte_mempool/rte_mempool.h | 2 +-
> >  2 files changed, 3 insertions(+), 2 deletions(-)
> >
> > diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
> > index cf7ed76..ca6cd9c 100644
> > --- a/lib/librte_mempool/rte_mempool.c
> > +++ b/lib/librte_mempool/rte_mempool.c
> > @@ -440,7 +440,8 @@ rte_mempool_xmem_create(const char *name, unsigned n, unsigned elt_size,
> >  	mempool_list = RTE_TAILQ_CAST(rte_mempool_tailq.head, rte_mempool_list);
> >
> >  	/* asked cache too big */
> > -	if (cache_size > RTE_MEMPOOL_CACHE_MAX_SIZE) {
> > +	if (cache_size > RTE_MEMPOOL_CACHE_MAX_SIZE ||
> > +	    (uint32_t) cache_size * CACHE_FLUSHTHRESH_MULTIPLIER > n) {
> >  		rte_errno = EINVAL;
> >  		return NULL;
> >  	}

Why not just 'cache_size > n' then?
Konstantin

> > diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
> > index 9001312..a4a9610 100644
> > --- a/lib/librte_mempool/rte_mempool.h
> > +++ b/lib/librte_mempool/rte_mempool.h
> > @@ -468,7 +468,7 @@ typedef void (rte_mempool_ctor_t)(struct rte_mempool *, void *);
> >   * If cache_size is non-zero, the rte_mempool library will try to
> >   * limit the accesses to the common lockless pool, by maintaining a
> >   * per-lcore object cache. This argument must be lower or equal to
> > - * CONFIG_RTE_MEMPOOL_CACHE_MAX_SIZE. It is advised to choose
> > + * CONFIG_RTE_MEMPOOL_CACHE_MAX_SIZE and n / 1.5. It is advised to choose
> >   * cache_size to have "n modulo cache_size == 0": if this is
> >   * not the case, some elements will always stay in the pool and will
> >   * never be used. The access to the per-lcore table is of course
> >