Message-ID: <555C4683.7000702@6wind.com>
Date: Wed, 20 May 2015 10:32:03 +0200
From: Olivier MATZ
To: Zoltan Kiss, dev@dpdk.org
References: <1431963314-3701-1-git-send-email-zoltan.kiss@linaro.org>
In-Reply-To: <1431963314-3701-1-git-send-email-zoltan.kiss@linaro.org>
Subject: Re: [dpdk-dev] [PATCH v2] mempool: limit cache_size
List-Id: patches and discussions about DPDK

On 05/18/2015 05:35 PM, Zoltan Kiss wrote:
> Otherwise cache_flushthresh can be bigger than n, and
> a consumer can starve others by keeping every element
> either in use or in the cache.
>
> Signed-off-by: Zoltan Kiss

Acked-by: Olivier Matz

> ---
> v2: use macro for calculation, with proper casting
>
>  lib/librte_mempool/rte_mempool.c | 8 +++++---
>  lib/librte_mempool/rte_mempool.h | 2 +-
>  2 files changed, 6 insertions(+), 4 deletions(-)
>
> diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
> index cf7ed76..5cfb96b 100644
> --- a/lib/librte_mempool/rte_mempool.c
> +++ b/lib/librte_mempool/rte_mempool.c
> @@ -68,6 +68,8 @@ static struct rte_tailq_elem rte_mempool_tailq = {
>  EAL_REGISTER_TAILQ(rte_mempool_tailq)
>
>  #define CACHE_FLUSHTHRESH_MULTIPLIER 1.5
> +#define CALC_CACHE_FLUSHTHRESH(c) \
> +	((typeof (c))((c) * CACHE_FLUSHTHRESH_MULTIPLIER))
>
>  /*
>   * return the greatest common divisor between a and b (fast algorithm)
> @@ -440,7 +442,8 @@ rte_mempool_xmem_create(const char *name, unsigned n, unsigned elt_size,
>  	mempool_list = RTE_TAILQ_CAST(rte_mempool_tailq.head, rte_mempool_list);
>
>  	/* asked cache too big */
> -	if (cache_size > RTE_MEMPOOL_CACHE_MAX_SIZE) {
> +	if (cache_size > RTE_MEMPOOL_CACHE_MAX_SIZE ||
> +	    CALC_CACHE_FLUSHTHRESH(cache_size) > n) {
>  		rte_errno = EINVAL;
>  		return NULL;
>  	}
> @@ -565,8 +568,7 @@ rte_mempool_xmem_create(const char *name, unsigned n, unsigned elt_size,
>  	mp->header_size = objsz.header_size;
>  	mp->trailer_size = objsz.trailer_size;
>  	mp->cache_size = cache_size;
> -	mp->cache_flushthresh = (uint32_t)
> -		(cache_size * CACHE_FLUSHTHRESH_MULTIPLIER);
> +	mp->cache_flushthresh = CALC_CACHE_FLUSHTHRESH(cache_size);
>  	mp->private_data_size = private_data_size;
>
>  	/* calculate address of the first element for continuous mempool. */
> diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
> index 9001312..a4a9610 100644
> --- a/lib/librte_mempool/rte_mempool.h
> +++ b/lib/librte_mempool/rte_mempool.h
> @@ -468,7 +468,7 @@ typedef void (rte_mempool_ctor_t)(struct rte_mempool *, void *);
>   * If cache_size is non-zero, the rte_mempool library will try to
>   * limit the accesses to the common lockless pool, by maintaining a
>   * per-lcore object cache. This argument must be lower or equal to
> - * CONFIG_RTE_MEMPOOL_CACHE_MAX_SIZE. It is advised to choose
> + * CONFIG_RTE_MEMPOOL_CACHE_MAX_SIZE and n / 1.5. It is advised to choose
>   * cache_size to have "n modulo cache_size == 0": if this is
>   * not the case, some elements will always stay in the pool and will
>   * never be used. The access to the per-lcore table is of course
>
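
For reference, here is a small standalone sketch (not part of the patch) of how an
application could pick a cache_size that satisfies both limits the new check enforces,
plus the "n modulo cache_size == 0" advice from rte_mempool.h. The helper name
pick_cache_size() is made up, and the value 512 only mirrors the usual default of
CONFIG_RTE_MEMPOOL_CACHE_MAX_SIZE; both defines below are local stand-ins, not the
library's own symbols.

    #include <stdio.h>

    /* local stand-ins mirroring the DPDK values assumed above */
    #define RTE_MEMPOOL_CACHE_MAX_SIZE 512
    #define CACHE_FLUSHTHRESH_MULTIPLIER 1.5

    /*
     * Pick the largest cache_size such that
     *   cache_size <= RTE_MEMPOOL_CACHE_MAX_SIZE and
     *   cache_size * 1.5 <= n,
     * preferring a divisor of n so no elements stay stranded in
     * per-lcore caches. Falls back to 1 if n has no larger divisor
     * within the limit, or 0 (no cache) if n is too small.
     */
    static unsigned
    pick_cache_size(unsigned n)
    {
    	unsigned max = RTE_MEMPOOL_CACHE_MAX_SIZE;
    	unsigned cs;

    	if ((unsigned)(max * CACHE_FLUSHTHRESH_MULTIPLIER) > n)
    		max = (unsigned)(n / CACHE_FLUSHTHRESH_MULTIPLIER);

    	for (cs = max; cs > 0; cs--)
    		if (n % cs == 0)
    			return cs;

    	return 0; /* pool too small for a per-lcore cache */
    }

    int main(void)
    {
    	unsigned n = 4096; /* example pool size */

    	printf("n=%u -> cache_size=%u\n", n, pick_cache_size(n));
    	return 0;
    }

With n = 4096 this prints cache_size = 512; with a pool size whose divisors are all
larger than n / 1.5 it degrades to a small or zero cache, which is exactly the case
the patch now rejects at rte_mempool_xmem_create() time instead of letting one lcore
hoard the whole pool.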