From: "Ananyev, Konstantin"
To: Zoltan Kiss, "dev@dpdk.org"
Date: Mon, 18 May 2015 15:51:01 +0000
Message-ID: <2601191342CEEE43887BDE71AB9772582142FE14@irsmsx105.ger.corp.intel.com>
References: <1431963314-3701-1-git-send-email-zoltan.kiss@linaro.org>
In-Reply-To: <1431963314-3701-1-git-send-email-zoltan.kiss@linaro.org>
Subject: Re: [dpdk-dev] [PATCH v2] mempool: limit cache_size

> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Zoltan Kiss
> Sent: Monday, May 18, 2015 4:35 PM
> To: dev@dpdk.org
> Subject: [dpdk-dev] [PATCH v2] mempool: limit cache_size
>
> Otherwise cache_flushthresh can be bigger than n, and
> a consumer can starve others by keeping every element
> either in use or in the cache.
>
> Signed-off-by: Zoltan Kiss

Acked-by: Konstantin Ananyev

> ---
> v2: use macro for calculation, with proper casting
>
>  lib/librte_mempool/rte_mempool.c | 8 +++++---
>  lib/librte_mempool/rte_mempool.h | 2 +-
>  2 files changed, 6 insertions(+), 4 deletions(-)
>
> diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
> index cf7ed76..5cfb96b 100644
> --- a/lib/librte_mempool/rte_mempool.c
> +++ b/lib/librte_mempool/rte_mempool.c
> @@ -68,6 +68,8 @@ static struct rte_tailq_elem rte_mempool_tailq = {
>  EAL_REGISTER_TAILQ(rte_mempool_tailq)
>
>  #define CACHE_FLUSHTHRESH_MULTIPLIER 1.5
> +#define CALC_CACHE_FLUSHTHRESH(c)	\
> +	((typeof (c))((c) * CACHE_FLUSHTHRESH_MULTIPLIER))
>
>  /*
>   * return the greatest common divisor between a and b (fast algorithm)
> @@ -440,7 +442,8 @@ rte_mempool_xmem_create(const char *name, unsigned n, unsigned elt_size,
>  	mempool_list = RTE_TAILQ_CAST(rte_mempool_tailq.head, rte_mempool_list);
>
>  	/* asked cache too big */
> -	if (cache_size > RTE_MEMPOOL_CACHE_MAX_SIZE) {
> +	if (cache_size > RTE_MEMPOOL_CACHE_MAX_SIZE ||
> +	    CALC_CACHE_FLUSHTHRESH(cache_size) > n) {
>  		rte_errno = EINVAL;
>  		return NULL;
>  	}
> @@ -565,8 +568,7 @@ rte_mempool_xmem_create(const char *name, unsigned n, unsigned elt_size,
>  	mp->header_size = objsz.header_size;
>  	mp->trailer_size = objsz.trailer_size;
>  	mp->cache_size = cache_size;
> -	mp->cache_flushthresh = (uint32_t)
> -		(cache_size * CACHE_FLUSHTHRESH_MULTIPLIER);
> +	mp->cache_flushthresh = CALC_CACHE_FLUSHTHRESH(cache_size);
>  	mp->private_data_size = private_data_size;
>
>  	/* calculate address of the first element for continuous mempool. */
> diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
> index 9001312..a4a9610 100644
> --- a/lib/librte_mempool/rte_mempool.h
> +++ b/lib/librte_mempool/rte_mempool.h
> @@ -468,7 +468,7 @@ typedef void (rte_mempool_ctor_t)(struct rte_mempool *, void *);
>   * If cache_size is non-zero, the rte_mempool library will try to
>   * limit the accesses to the common lockless pool, by maintaining a
>   * per-lcore object cache. This argument must be lower or equal to
> - * CONFIG_RTE_MEMPOOL_CACHE_MAX_SIZE. It is advised to choose
> + * CONFIG_RTE_MEMPOOL_CACHE_MAX_SIZE and n / 1.5. It is advised to choose
>   * cache_size to have "n modulo cache_size == 0": if this is
>   * not the case, some elements will always stay in the pool and will
>   * never be used. The access to the per-lcore table is of course
> --
> 1.9.1
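
For anyone who wants to sanity-check the new limit locally, below is a minimal sketch, not part of the patch; the pool name, element count and sizes are made up for illustration. With n = 128 and cache_size = 100, the v2 macro gives a flush threshold of 150 > n, so with this patch applied the create call is expected to fail and set rte_errno to EINVAL:

#include <stdio.h>

#include <rte_eal.h>
#include <rte_errno.h>
#include <rte_memory.h>
#include <rte_mempool.h>

int
main(int argc, char **argv)
{
	struct rte_mempool *mp;

	if (rte_eal_init(argc, argv) < 0)
		return -1;

	/* 100 * CACHE_FLUSHTHRESH_MULTIPLIER (1.5) = 150 > n (128),
	 * so the new check in rte_mempool_xmem_create() should
	 * reject this pool. Name and sizes are arbitrary. */
	mp = rte_mempool_create("flushthresh_test",
			128,		/* n: number of elements */
			64,		/* elt_size */
			100,		/* cache_size */
			0,		/* private_data_size */
			NULL, NULL,	/* mp_init, mp_init_arg */
			NULL, NULL,	/* obj_init, obj_init_arg */
			SOCKET_ID_ANY, 0);

	if (mp == NULL && rte_errno == EINVAL)
		printf("rejected as expected: flush threshold exceeds n\n");
	else
		printf("unexpected result, pool=%p rte_errno=%d\n",
				(void *)mp, rte_errno);

	return 0;
}

Without the patch the same call succeeds, and a single lcore's cache can then hold more objects than exist in the pool, which is exactly the starvation case described in the commit message.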