Date: Mon, 18 May 2015 13:31:21 +0100
From: Bruce Richardson
To: Zoltan Kiss
Cc: dev@dpdk.org
Subject: Re: [dpdk-dev] [PATCH] mempool: limit cache_size
Message-ID: <20150518123121.GA3860@bricha3-MOBL3>
References: <1431543554-442-1-git-send-email-zoltan.kiss@linaro.org> <5559DAC1.8050106@linaro.org>
In-Reply-To: <5559DAC1.8050106@linaro.org>
Organization: Intel Shannon Ltd.
User-Agent: Mutt/1.5.23 (2014-03-12)
List-Id: patches and discussions about DPDK

On Mon, May 18, 2015 at 01:27:45PM +0100, Zoltan Kiss wrote:
> Hi,
>
> Any opinion on this patch?
>
> Regards,
>
> Zoltan
>
> On 13/05/15 19:59, Zoltan Kiss wrote:
> >Otherwise cache_flushthresh can be bigger than n, and
> >a consumer can starve others by keeping every element
> >either in use or in the cache.
> >
> >Signed-off-by: Zoltan Kiss

Seems reasonable enough to me.

Acked-by: Bruce Richardson

> >---
> > lib/librte_mempool/rte_mempool.c | 3 ++-
> > lib/librte_mempool/rte_mempool.h | 2 +-
> > 2 files changed, 3 insertions(+), 2 deletions(-)
> >
> >diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
> >index cf7ed76..ca6cd9c 100644
> >--- a/lib/librte_mempool/rte_mempool.c
> >+++ b/lib/librte_mempool/rte_mempool.c
> >@@ -440,7 +440,8 @@ rte_mempool_xmem_create(const char *name, unsigned n, unsigned elt_size,
> > 	mempool_list = RTE_TAILQ_CAST(rte_mempool_tailq.head, rte_mempool_list);
> >
> > 	/* asked cache too big */
> >-	if (cache_size > RTE_MEMPOOL_CACHE_MAX_SIZE) {
> >+	if (cache_size > RTE_MEMPOOL_CACHE_MAX_SIZE ||
> >+	    (uint32_t) cache_size * CACHE_FLUSHTHRESH_MULTIPLIER > n) {
> > 		rte_errno = EINVAL;
> > 		return NULL;
> > 	}
> >diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
> >index 9001312..a4a9610 100644
> >--- a/lib/librte_mempool/rte_mempool.h
> >+++ b/lib/librte_mempool/rte_mempool.h
> >@@ -468,7 +468,7 @@ typedef void (rte_mempool_ctor_t)(struct rte_mempool *, void *);
> >  * If cache_size is non-zero, the rte_mempool library will try to
> >  * limit the accesses to the common lockless pool, by maintaining a
> >  * per-lcore object cache. This argument must be lower or equal to
> >- * CONFIG_RTE_MEMPOOL_CACHE_MAX_SIZE. It is advised to choose
> >+ * CONFIG_RTE_MEMPOOL_CACHE_MAX_SIZE and n / 1.5. It is advised to choose
> >  * cache_size to have "n modulo cache_size == 0": if this is
> >  * not the case, some elements will always stay in the pool and will
> >  * never be used. The access to the per-lcore table is of course
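
A minimal worked sketch of the limit the patch enforces, for illustration only: the 1.5 factor corresponds to CACHE_FLUSHTHRESH_MULTIPLIER in rte_mempool.c, the 512 ceiling assumes the default CONFIG_RTE_MEMPOOL_CACHE_MAX_SIZE, and the helper pick_cache_size() and the pool size of 256 are hypothetical, not part of DPDK or of the patch above.

/* Illustration only: with the patch, mempool creation fails with
 * rte_errno = EINVAL unless both
 *   cache_size <= RTE_MEMPOOL_CACHE_MAX_SIZE
 *   cache_size * CACHE_FLUSHTHRESH_MULTIPLIER (1.5) <= n
 * hold. pick_cache_size() is a hypothetical helper, not a DPDK API.
 */
#include <stdio.h>

#define RTE_MEMPOOL_CACHE_MAX_SIZE 512       /* assumed default config value */
#define CACHE_FLUSHTHRESH_MULTIPLIER 1.5     /* value used in rte_mempool.c */

/* Largest per-lcore cache size that passes both limits for n elements. */
static unsigned
pick_cache_size(unsigned n, unsigned requested)
{
	unsigned max_by_n = (unsigned)(n / CACHE_FLUSHTHRESH_MULTIPLIER);
	unsigned cs = requested;

	if (cs > RTE_MEMPOOL_CACHE_MAX_SIZE)
		cs = RTE_MEMPOOL_CACHE_MAX_SIZE;
	if (cs > max_by_n)
		cs = max_by_n;
	return cs;
}

int
main(void)
{
	/* n = 256: 256 / 1.5 truncates to 170, so a requested cache of 250
	 * would now be rejected, while 170 still passes the new check
	 * (170 * 1.5 = 255 <= 256). */
	printf("cache_size = %u\n", pick_cache_size(256, 250));
	return 0;
}

In other words, for a pool of 256 elements the largest cache_size the new check accepts is 170; anything larger would let a single lcore's cache (flushed only at cache_size * 1.5 entries) hold every element and starve the other consumers, which is exactly the situation the commit message describes.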