In-Reply-To: <1FFBBED4-00F5-4703-BDEC-961EB800C21B@intel.com>
References: <1457621082-22151-1-git-send-email-l@nofutznetworks.com>
 <56EFE781.4090809@6wind.com>
 <1FFBBED4-00F5-4703-BDEC-961EB800C21B@intel.com>
Date: Thu, 24 Mar 2016 16:35:47 +0200
From: Lazaros Koromilas
To: "Wiles, Keith"
Cc: Olivier Matz, "dev@dpdk.org"
Subject: Re: [dpdk-dev] [PATCH] mempool: allow for user-owned mempool caches

On Mon, Mar 21, 2016 at 3:49 PM, Wiles, Keith wrote:
>>Hi Lazaros,
>>
>>Thanks for this patch. To me, this is a valuable enhancement.
>>Please find some comments inline.
>>
>>On 03/10/2016 03:44 PM, Lazaros Koromilas wrote:
>>> The mempool cache is only available to EAL threads as a per-lcore
>>> resource. Change this so that the user can create and provide their own
>>> cache on mempool get and put operations. This also works with non-EAL
>>> threads. This commit introduces new API calls with the 'with_cache'
>>> suffix, while the current ones default to the per-lcore local cache.
>>>
>>> Signed-off-by: Lazaros Koromilas
>>> ---
>>>  lib/librte_mempool/rte_mempool.c |  65 +++++-
>>>  lib/librte_mempool/rte_mempool.h | 442 ++++++++++++++++++++++++++++++++++++---
>>>  2 files changed, 467 insertions(+), 40 deletions(-)
>>>
>>> diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
>>> index f8781e1..cebc2b7 100644
>>> --- a/lib/librte_mempool/rte_mempool.c
>>> +++ b/lib/librte_mempool/rte_mempool.c
>>> @@ -375,6 +375,43 @@ rte_mempool_xmem_usage(void *vaddr, uint32_t elt_num, size_t elt_sz,
>>>      return usz;
>>>  }
>>>
>>> +#if RTE_MEMPOOL_CACHE_MAX_SIZE > 0
>>
>>I wonder if this wouldn't cause a conflict with Keith's patch
>>that removes some #ifdefs on RTE_MEMPOOL_CACHE_MAX_SIZE.
>>See: http://www.dpdk.org/dev/patchwork/patch/10492/
>
> Hi Lazaros,
>
> The patch I submitted keeps the mempool cache structure (pointers and
> variables) and only allocates the cache if the caller specifies that a
> cache should be used. This means, to me, that the caller could fill in
> the cache pointer and values in the mempool structure to get a cache
> without a lot of extra code. If we added a set of APIs to fill in these
> structure variables, would that not give you the external cache
> support? I have not really looked at the patch to verify this will
> work, but it sure seems like it.
>
> So my suggestion is that the caller just creates a mempool without a
> cache and then calls a set of APIs to fill in his cache values; does
> that not work?
>
> If we can do this, it reduces the API and possibly the ABI changes to
> mempool, as the new cache create routines and APIs could be in a new
> file, I think, which just updates the mempool structure correctly.

Hi Keith,

The main benefit of having an external cache is to allow mempool users
(threads) to maintain a local cache even though they don't have a
valid lcore_id (non-EAL threads). The fact that cache access is done
by indexing with the lcore_id is what makes it difficult; a rough
usage sketch of what I have in mind is at the end of this mail.

One option would be to only have external caches, but that hurts the
common case where you want an automatic cache. Another would be a
cache registration mechanism (overkill?).

So, I'm going to work on the comments and send out a v2 asap. Thanks everyone!

Lazaros.

>
>>
>>As this patch is already acked for 16.07, I think that your v2
>>could be rebased on top of it to avoid conflicts when Thomas will apply
>>it.
>>
>>By the way, I also encourage you to have a look at other works in
>>progress in mempool:
>>http://www.dpdk.org/ml/archives/dev/2016-March/035107.html
>>http://www.dpdk.org/ml/archives/dev/2016-March/035201.html
>>
>>
>
> Regards,
> Keith
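
P.S. To make the idea concrete, here is a rough, purely illustrative
sketch of how a non-EAL thread could use a user-owned cache. The
'with_cache' function names and the cache create/flush/free helpers
below are placeholders for the direction being discussed, not the
exact API from the v1 patch, and may well change in v2:

    /* Illustrative only: the function names are placeholders. */
    #include <rte_mempool.h>

    #define BURST 32

    static void *
    worker(void *arg)
    {
            struct rte_mempool *mp = arg;
            void *objs[BURST];

            /*
             * A non-EAL thread has no valid lcore_id, so it cannot use
             * the per-lcore entries in mp->local_cache[].  Instead it
             * owns a cache object and passes it explicitly on every
             * get/put.
             */
            struct rte_mempool_cache *cache =
                    rte_mempool_cache_create(RTE_MEMPOOL_CACHE_MAX_SIZE,
                                             SOCKET_ID_ANY);
            if (cache == NULL)
                    return NULL;

            /*
             * Bulk operations fill from / spill to the user-owned cache
             * and only touch the shared ring on cache miss or overflow.
             */
            if (rte_mempool_get_bulk_with_cache(mp, objs, BURST, cache) == 0)
                    rte_mempool_put_bulk_with_cache(mp, objs, BURST, cache);

            /*
             * Return any objects still held in the cache to the pool
             * before freeing it.
             */
            rte_mempool_cache_flush(cache, mp);
            rte_mempool_cache_free(cache);
            return NULL;
    }

EAL threads would keep getting the per-lcore cache automatically; only
threads without a valid lcore_id would need to manage a cache like this.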