From: Zoltan Kiss <zoltan.kiss@linaro.org>
To: dev@dpdk.org
Date: Mon, 18 May 2015 16:35:14 +0100
Message-Id: <1431963314-3701-1-git-send-email-zoltan.kiss@linaro.org>
X-Mailer: git-send-email 1.9.1
Subject: [dpdk-dev] [PATCH v2] mempool: limit cache_size

Otherwise cache_flushthresh can be greater than n, and a single consumer
can starve the others by keeping every element either in use or in its
per-lcore cache.

Signed-off-by: Zoltan Kiss <zoltan.kiss@linaro.org>
---
v2: use a macro for the calculation, with proper casting
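Not part of the patch, just to illustrate the failure mode this guards
against: a minimal sketch of a create call the new check rejects with
EINVAL. It assumes a DPDK 2.x environment with EAL already initialized;
the pool name and sizes are made up.

#include <stdio.h>
#include <rte_errno.h>
#include <rte_mempool.h>

static void
demo_cache_size_limit(void)
{
	/* 128 elements but a 100-element per-lcore cache: the flush
	 * threshold would be 100 * 1.5 = 150 > 128, so one lcore's
	 * cache could swallow the whole pool. */
	struct rte_mempool *mp = rte_mempool_create("demo_pool",
			128,		/* n: number of elements */
			64,		/* elt_size */
			100,		/* cache_size: too big for this n */
			0,		/* private_data_size */
			NULL, NULL,	/* mp_init, mp_init_arg */
			NULL, NULL,	/* obj_init, obj_init_arg */
			SOCKET_ID_ANY, 0);

	if (mp == NULL && rte_errno == EINVAL)
		printf("rejected: flush threshold would exceed n\n");
}
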
 lib/librte_mempool/rte_mempool.c | 8 +++++---
 lib/librte_mempool/rte_mempool.h | 2 +-
 2 files changed, 6 insertions(+), 4 deletions(-)

diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index cf7ed76..5cfb96b 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -68,6 +68,8 @@ static struct rte_tailq_elem rte_mempool_tailq = {
 EAL_REGISTER_TAILQ(rte_mempool_tailq)
 
 #define CACHE_FLUSHTHRESH_MULTIPLIER 1.5
+#define CALC_CACHE_FLUSHTHRESH(c) \
+	((typeof (c))((c) * CACHE_FLUSHTHRESH_MULTIPLIER))
 
 /*
  * return the greatest common divisor between a and b (fast algorithm)
@@ -440,7 +442,8 @@ rte_mempool_xmem_create(const char *name, unsigned n, unsigned elt_size,
 	mempool_list = RTE_TAILQ_CAST(rte_mempool_tailq.head, rte_mempool_list);
 
 	/* asked cache too big */
-	if (cache_size > RTE_MEMPOOL_CACHE_MAX_SIZE) {
+	if (cache_size > RTE_MEMPOOL_CACHE_MAX_SIZE ||
+	    CALC_CACHE_FLUSHTHRESH(cache_size) > n) {
 		rte_errno = EINVAL;
 		return NULL;
 	}
@@ -565,8 +568,7 @@ rte_mempool_xmem_create(const char *name, unsigned n, unsigned elt_size,
 	mp->header_size = objsz.header_size;
 	mp->trailer_size = objsz.trailer_size;
 	mp->cache_size = cache_size;
-	mp->cache_flushthresh = (uint32_t)
-		(cache_size * CACHE_FLUSHTHRESH_MULTIPLIER);
+	mp->cache_flushthresh = CALC_CACHE_FLUSHTHRESH(cache_size);
 	mp->private_data_size = private_data_size;
 
 	/* calculate address of the first element for continuous mempool. */
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index 9001312..a4a9610 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -468,7 +468,7 @@ typedef void (rte_mempool_ctor_t)(struct rte_mempool *, void *);
  *   If cache_size is non-zero, the rte_mempool library will try to
  *   limit the accesses to the common lockless pool, by maintaining a
  *   per-lcore object cache. This argument must be lower or equal to
- *   CONFIG_RTE_MEMPOOL_CACHE_MAX_SIZE. It is advised to choose
+ *   CONFIG_RTE_MEMPOOL_CACHE_MAX_SIZE and n / 1.5. It is advised to choose
  *   cache_size to have "n modulo cache_size == 0": if this is
  *   not the case, some elements will always stay in the pool and will
  *   never be used. The access to the per-lcore table is of course
-- 
1.9.1
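
As a footnote to the documentation change above: a rough sketch of how a
caller might pick a cache_size satisfying both documented constraints
(at most CONFIG_RTE_MEMPOOL_CACHE_MAX_SIZE and at most n / 1.5, ideally
with n % cache_size == 0). pick_cache_size() is not a DPDK API, just an
illustration; it uses n * 2 / 3 as a slightly conservative integer
stand-in for n / 1.5.

#include <rte_mempool.h>	/* for RTE_MEMPOOL_CACHE_MAX_SIZE */

static unsigned
pick_cache_size(unsigned n)
{
	unsigned max = n * 2 / 3;	/* conservative n / 1.5 */
	unsigned c;

	if (max > RTE_MEMPOOL_CACHE_MAX_SIZE)
		max = RTE_MEMPOOL_CACHE_MAX_SIZE;

	/* largest divisor of n that fits, so n % cache_size == 0 */
	for (c = max; c > 0; c--)
		if (n % c == 0)
			return c;

	return 0;	/* 0 disables the per-lcore cache */
}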