From mboxrd@z Thu Jan  1 00:00:00 1970
Message-ID: <555A09B2.3070609@linaro.org>
Date: Mon, 18 May 2015 16:48:02 +0100
From: Zoltan Kiss
To: "Ananyev, Konstantin", "dev@dpdk.org"
References: <1431543554-442-1-git-send-email-zoltan.kiss@linaro.org>
 <5559DAC1.8050106@linaro.org>
 <2601191342CEEE43887BDE71AB9772582142FA3B@irsmsx105.ger.corp.intel.com>
 <5559E00C.3050708@linaro.org>
 <2601191342CEEE43887BDE71AB9772582142FA79@irsmsx105.ger.corp.intel.com>
 <5559E9A4.3020400@linaro.org>
 <2601191342CEEE43887BDE71AB9772582142FCE9@irsmsx105.ger.corp.intel.com>
In-Reply-To: <2601191342CEEE43887BDE71AB9772582142FCE9@irsmsx105.ger.corp.intel.com>
Subject: Re: [dpdk-dev] [PATCH] mempool: limit cache_size
List-Id: patches and discussions about DPDK

On 18/05/15 15:13, Ananyev, Konstantin wrote:
>
>
>> -----Original Message-----
>> From: Zoltan Kiss [mailto:zoltan.kiss@linaro.org]
>> Sent: Monday, May 18, 2015 2:31 PM
>> To: Ananyev, Konstantin; dev@dpdk.org
>> Subject: Re: [dpdk-dev] [PATCH] mempool: limit cache_size
>>
>>
>>
>> On 18/05/15 14:14, Ananyev, Konstantin wrote:
>>>
>>>
>>>> -----Original Message-----
>>>> From: Zoltan Kiss [mailto:zoltan.kiss@linaro.org]
>>>> Sent: Monday, May 18, 2015 1:50 PM
>>>> To: Ananyev, Konstantin; dev@dpdk.org
>>>> Subject: Re: [dpdk-dev] [PATCH] mempool: limit cache_size
>>>>
>>>>
>>>>
>>>> On 18/05/15 13:41, Ananyev, Konstantin wrote:
>>>>>
>>>>>
>>>>>> -----Original Message-----
>>>>>> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Zoltan Kiss
>>>>>> Sent: Monday, May 18, 2015 1:28 PM
>>>>>> To: dev@dpdk.org
>>>>>> Subject: Re: [dpdk-dev] [PATCH] mempool: limit cache_size
>>>>>>
>>>>>> Hi,
>>>>>>
>>>>>> Any opinion on this patch?
>>>>>>
>>>>>> Regards,
>>>>>>
>>>>>> Zoltan
>>>>>>
>>>>>> On 13/05/15 19:59, Zoltan Kiss wrote:
>>>>>>> Otherwise cache_flushthresh can be bigger than n, and
>>>>>>> a consumer can starve others by keeping every element
>>>>>>> either in use or in the cache.
>>>>>>>
>>>>>>> Signed-off-by: Zoltan Kiss
>>>>>>> ---
>>>>>>>  lib/librte_mempool/rte_mempool.c | 3 ++-
>>>>>>>  lib/librte_mempool/rte_mempool.h | 2 +-
>>>>>>>  2 files changed, 3 insertions(+), 2 deletions(-)
>>>>>>>
>>>>>>> diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
>>>>>>> index cf7ed76..ca6cd9c 100644
>>>>>>> --- a/lib/librte_mempool/rte_mempool.c
>>>>>>> +++ b/lib/librte_mempool/rte_mempool.c
>>>>>>> @@ -440,7 +440,8 @@ rte_mempool_xmem_create(const char *name, unsigned n, unsigned elt_size,
>>>>>>>  	mempool_list = RTE_TAILQ_CAST(rte_mempool_tailq.head, rte_mempool_list);
>>>>>>>
>>>>>>>  	/* asked cache too big */
>>>>>>> -	if (cache_size > RTE_MEMPOOL_CACHE_MAX_SIZE) {
>>>>>>> +	if (cache_size > RTE_MEMPOOL_CACHE_MAX_SIZE ||
>>>>>>> +	    (uint32_t) cache_size * CACHE_FLUSHTHRESH_MULTIPLIER > n) {
>>>>>>>  		rte_errno = EINVAL;
>>>>>>>  		return NULL;
>>>>>>>  	}
>>>>>
>>>>> Why not just 'cache_size > n' then?
>>>>
>>>> The commit message says: "Otherwise cache_flushthresh can be bigger than
>>>> n, and a consumer can starve others by keeping every element either in
>>>> use or in the cache."
>>>
>>> Ah yes, you're right - your condition is more restrictive, which is better.
>>> Though here you implicitly convert cache_size and n to doubles and compare two floating-point values:
>>> (uint32_t) cache_size * CACHE_FLUSHTHRESH_MULTIPLIER > n
>>> Shouldn't it be:
>>> (uint32_t)(cache_size * CACHE_FLUSHTHRESH_MULTIPLIER) > n
>>> so that we convert back to uint32_t and compare unsigned integers instead?
>>> Same as below:
>>> mp->cache_flushthresh = (uint32_t)
>>> 	(cache_size * CACHE_FLUSHTHRESH_MULTIPLIER);
>>
>> To take it further: how about ditching the whole cache_flushthresh
>> member of the mempool structure, and using this:
>>
>> #define CACHE_FLUSHTHRESH(mp) (uint32_t)((mp)->cache_size * 1.5)
>
> That's quite expensive and I think it would slow down mempool_put() quite a lot.
> So I'd suggest we keep cache_flushthresh as it is.
Ok, I have posted a v2 based on your suggestion.

>
>>
>> Furthermore, do we want to expose the flush threshold multiplier through
>> the config file?
>
> Hmm, my opinion is no - so far no one has asked for that,
> and as a general tendency we are trying to reduce the number of options in the config file.
> Do you have any good justification for why the current value is not good enough?

Nothing special, the arbitrary choice of value just seemed a bit odd.

> Anyway, that could probably be the subject of another patch/discussion.
> Konstantin
>
>>
>>> ?
>>>
>>> In fact, as we use it more than once, it probably makes sense to create a macro for it,
>>> something like:
>>> #define CALC_CACHE_FLUSHTHRESH(c) ((uint32_t)((c) * CACHE_FLUSHTHRESH_MULTIPLIER))
>>>
>>> Or even
>>>
>>> #define CALC_CACHE_FLUSHTHRESH(c) ((typeof(c))((c) * CACHE_FLUSHTHRESH_MULTIPLIER))
>>>
>>>
>>> Konstantin
>>>
>>>>
>>>>> Konstantin
>>>>>
>>>>>>> diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
>>>>>>> index 9001312..a4a9610 100644
>>>>>>> --- a/lib/librte_mempool/rte_mempool.h
>>>>>>> +++ b/lib/librte_mempool/rte_mempool.h
>>>>>>> @@ -468,7 +468,7 @@ typedef void (rte_mempool_ctor_t)(struct rte_mempool *, void *);
>>>>>>>   * If cache_size is non-zero, the rte_mempool library will try to
>>>>>>>   * limit the accesses to the common lockless pool, by maintaining a
>>>>>>>   * per-lcore object cache. This argument must be lower or equal to
>>>>>>> -  * CONFIG_RTE_MEMPOOL_CACHE_MAX_SIZE. It is advised to choose
>>>>>>> +  * CONFIG_RTE_MEMPOOL_CACHE_MAX_SIZE and n / 1.5. It is advised to choose
>>>>>>>   * cache_size to have "n modulo cache_size == 0": if this is
>>>>>>>   * not the case, some elements will always stay in the pool and will
>>>>>>>   * never be used. The access to the per-lcore table is of course
>>>>>>>