From mboxrd@z Thu Jan  1 00:00:00 1970
Message-ID: <5559E00C.3050708@linaro.org>
Date: Mon, 18 May 2015 13:50:20 +0100
From: Zoltan Kiss
To: "Ananyev, Konstantin", "dev@dpdk.org"
Subject: Re: [dpdk-dev] [PATCH] mempool: limit cache_size
In-Reply-To: <2601191342CEEE43887BDE71AB9772582142FA3B@irsmsx105.ger.corp.intel.com>
References: <1431543554-442-1-git-send-email-zoltan.kiss@linaro.org> <5559DAC1.8050106@linaro.org> <2601191342CEEE43887BDE71AB9772582142FA3B@irsmsx105.ger.corp.intel.com>
List-Id: patches and discussions about DPDK

On 18/05/15 13:41, Ananyev, Konstantin wrote:
>
>
>> -----Original Message-----
>> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Zoltan Kiss
>> Sent: Monday, May 18, 2015 1:28 PM
>> To: dev@dpdk.org
>> Subject: Re: [dpdk-dev] [PATCH] mempool: limit cache_size
>>
>> Hi,
>>
>> Any opinion on this patch?
>>
>> Regards,
>>
>> Zoltan
>>
>> On 13/05/15 19:59, Zoltan Kiss wrote:
>>> Otherwise cache_flushthresh can be bigger than n, and
>>> a consumer can starve others by keeping every element
>>> either in use or in the cache.
>>>
>>> Signed-off-by: Zoltan Kiss
>>> ---
>>>  lib/librte_mempool/rte_mempool.c | 3 ++-
>>>  lib/librte_mempool/rte_mempool.h | 2 +-
>>>  2 files changed, 3 insertions(+), 2 deletions(-)
>>>
>>> diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
>>> index cf7ed76..ca6cd9c 100644
>>> --- a/lib/librte_mempool/rte_mempool.c
>>> +++ b/lib/librte_mempool/rte_mempool.c
>>> @@ -440,7 +440,8 @@ rte_mempool_xmem_create(const char *name, unsigned n, unsigned elt_size,
>>>  	mempool_list = RTE_TAILQ_CAST(rte_mempool_tailq.head, rte_mempool_list);
>>>
>>>  	/* asked cache too big */
>>> -	if (cache_size > RTE_MEMPOOL_CACHE_MAX_SIZE) {
>>> +	if (cache_size > RTE_MEMPOOL_CACHE_MAX_SIZE ||
>>> +	    (uint32_t) cache_size * CACHE_FLUSHTHRESH_MULTIPLIER > n) {
>>>  		rte_errno = EINVAL;
>>>  		return NULL;
>>>  	}
>
> Why just not 'cache_size > n' then?

The commit message says: "Otherwise cache_flushthresh can be bigger than
n, and a consumer can starve others by keeping every element either in
use or in the cache."
> Konstantin
>
>>> diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
>>> index 9001312..a4a9610 100644
>>> --- a/lib/librte_mempool/rte_mempool.h
>>> +++ b/lib/librte_mempool/rte_mempool.h
>>> @@ -468,7 +468,7 @@ typedef void (rte_mempool_ctor_t)(struct rte_mempool *, void *);
>>>   * If cache_size is non-zero, the rte_mempool library will try to
>>>   * limit the accesses to the common lockless pool, by maintaining a
>>>   * per-lcore object cache. This argument must be lower or equal to
>>> - *   CONFIG_RTE_MEMPOOL_CACHE_MAX_SIZE. It is advised to choose
>>> + *   CONFIG_RTE_MEMPOOL_CACHE_MAX_SIZE and n / 1.5. It is advised to choose
>>>   * cache_size to have "n modulo cache_size == 0": if this is
>>>   * not the case, some elements will always stay in the pool and will
>>>   * never be used. The access to the per-lcore table is of course
>>>