From: Lazaros Koromilas
To: "Ananyev, Konstantin"
Cc: Olivier Matz, "dev@dpdk.org"
Date: Thu, 24 Mar 2016 15:51:25 +0200
Subject: Re: [dpdk-dev] [PATCH] mempool: allow for user-owned mempool caches

Hi Konstantin,

On Mon, Mar 21, 2016 at 3:15 PM, Ananyev, Konstantin wrote:
>
> Hi lads,
>
>> Hi Lazaros,
>>
>> Thanks for this patch. To me, this is a valuable enhancement.
>> Please find some comments inline.
>
> Yep, patch seems interesting.
> One question - what would be the usage model for the get/put_with_cache
> functions for non-EAL threads?
> I meant that each non-EAL thread will need to maintain an array (or list
> or some other structure) of (mempool, cache) pairs to make sure that the
> cache always matches the mempool, correct?

Yes, the user would also have to keep track of the caches. As for the use
case, the original idea was to enable caches for threads that run on the
same core. There are cases where this can work, as described in the
programming guide in the section on multiple pthreads.
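To make the bookkeeping concrete, a non-EAL thread could look roughly like
the sketch below. Only rte_mempool_cache_create() appears in the patch
itself; the *_bulk_with_cache() names are placeholders for the proposed
'with_cache' calls, so the exact signatures may differ.

    #include <rte_mempool.h>

    /*
     * Sketch only: rte_mempool_cache_create() is taken from this patch,
     * while the *_bulk_with_cache() calls are assumed names for the new
     * 'with_cache' variants and may not match the final API.
     */
    static void *
    non_eal_worker(void *arg)
    {
            struct rte_mempool *mp = arg;     /* shared mempool */
            struct rte_mempool_cache *cache;  /* owned by this thread only */
            void *objs[32];

            cache = rte_mempool_cache_create(32);
            if (cache == NULL)
                    return NULL;

            /* Always pair this cache with the same mempool. */
            if (rte_mempool_get_bulk_with_cache(mp, objs, 32, cache) == 0) {
                    /* ... use the objects ... */
                    rte_mempool_put_bulk_with_cache(mp, objs, 32, cache);
            }

            /*
             * Before the thread terminates, objects still sitting in the
             * cache must go back to the pool -- that is the invalidate()
             * question discussed below.
             */
            return NULL;
    }

A thread that uses several mempools would simply keep one such pair per
mempool.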
Even with two EAL threads, we currently cannot have multiple caches on a
single core, unless we use the --lcores long option to separate the lcore
id (which really is a thread id here) from the actual core affinity, for
example something like --lcores='(0,1)@2' to run two EAL threads on core 2.
So the cache should be a member of the thread and not of the mempool, in
my opinion. It has to be used consistently with the same mempool, though.

> Again, for non-EAL threads don't we need some sort of invalidate_cache()
> that would put all mbufs in the cache back to the pool?
> For thread termination case or so?

I think EAL threads would also benefit from an invalidate() call. This
could be part of the free() that Olivier proposed (a rough sketch of what
such a flush might look like is at the end of this mail).

Thanks!
Lazaros.

>
>>
>> On 03/10/2016 03:44 PM, Lazaros Koromilas wrote:
>> > The mempool cache is only available to EAL threads as a per-lcore
>> > resource. Change this so that the user can create and provide their own
>> > cache on mempool get and put operations. This works with non-EAL threads
>> > too. This commit introduces new API calls with the 'with_cache' suffix,
>> > while the current ones default to the per-lcore local cache.
>> >
>> > Signed-off-by: Lazaros Koromilas
>> > ---
>> >  lib/librte_mempool/rte_mempool.c |  65 +++++-
>> >  lib/librte_mempool/rte_mempool.h | 442 ++++++++++++++++++++++++++++++++++++---
>> >  2 files changed, 467 insertions(+), 40 deletions(-)
>> >
>> > diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
>> > index f8781e1..cebc2b7 100644
>> > --- a/lib/librte_mempool/rte_mempool.c
>> > +++ b/lib/librte_mempool/rte_mempool.c
>> > @@ -375,6 +375,43 @@ rte_mempool_xmem_usage(void *vaddr, uint32_t elt_num, size_t elt_sz,
>> >  	return usz;
>> >  }
>> >
>> > +#if RTE_MEMPOOL_CACHE_MAX_SIZE > 0
>>
>> I wonder if this wouldn't cause a conflict with Keith's patch
>> that removes some RTE_MEMPOOL_CACHE_MAX_SIZE #ifdefs.
>> See: http://www.dpdk.org/dev/patchwork/patch/10492/
>>
>> As this patch is already acked for 16.07, I think that your v2
>> could be rebased on top of it to avoid conflicts when Thomas applies it.
>>
>> By the way, I also encourage you to have a look at other works in
>> progress in mempool:
>> http://www.dpdk.org/ml/archives/dev/2016-March/035107.html
>> http://www.dpdk.org/ml/archives/dev/2016-March/035201.html
>>
>>
>> > +static void
>> > +mempool_cache_init(struct rte_mempool_cache *cache, uint32_t size)
>> > +{
>> > +	cache->size = size;
>> > +	cache->flushthresh = CALC_CACHE_FLUSHTHRESH(size);
>> > +	cache->len = 0;
>> > +}
>> > +
>> > +/*
>> > + * Creates and initializes a cache for objects that are retrieved from and
>> > + * returned to an underlying mempool. This structure is identical to the
>> > + * structure included inside struct rte_mempool.
>> > + */
>>
>> On top of Keith's patch, this comment may need reworking, as the cache
>> structure is no longer included in the mempool structure.
>>
>> nit: I think the imperative form is preferred.
>>
>>
>> > +struct rte_mempool_cache *
>> > +rte_mempool_cache_create(uint32_t size)
>
> I suppose an extra socket_id parameter is needed, so you can call
> zmalloc_socket() beneath.
>
>> > +{
>> > +	struct rte_mempool_cache *cache;
>> > +
>> > +	if (size > RTE_MEMPOOL_CACHE_MAX_SIZE) {
>> > +		rte_errno = EINVAL;
>> > +		return NULL;
>> > +	}
>> > +
>> > +	cache = rte_zmalloc("MEMPOOL_CACHE", sizeof(*cache), RTE_CACHE_LINE_SIZE);
>> > +	if (cache == NULL) {
>> > +		RTE_LOG(ERR, MEMPOOL, "Cannot allocate mempool cache!\n");
>>
>> I would remove the '!'
>>
>> > @@ -587,10 +624,18 @@ rte_mempool_xmem_create(const char *name, unsigned n, unsigned elt_size,
>> >  	mp->elt_size = objsz.elt_size;
>> >  	mp->header_size = objsz.header_size;
>> >  	mp->trailer_size = objsz.trailer_size;
>> > -	mp->cache_size = cache_size;
>> > -	mp->cache_flushthresh = CALC_CACHE_FLUSHTHRESH(cache_size);
>> > +	mp->cache_size = cache_size; /* Keep this for backwards compat. */
>>
>> I'm wondering whether this should be kept for compatibility or whether it
>> makes sense to drop it. The question is: do we want the cache_size to be
>> a parameter of the mempool, or should it be a parameter of the cache?
>>
>> I think we could remove this field from the mempool structure.
>>
>>
>> > @@ -673,13 +718,17 @@ rte_mempool_dump_cache(FILE *f, const struct rte_mempool *mp)
>> >  #if RTE_MEMPOOL_CACHE_MAX_SIZE > 0
>> >  	unsigned lcore_id;
>> >  	unsigned count = 0;
>> > +	unsigned cache_size;
>> >  	unsigned cache_count;
>> >
>> >  	fprintf(f, "  cache infos:\n");
>> > -	fprintf(f, "    cache_size=%"PRIu32"\n", mp->cache_size);
>> >  	for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
>> > +		cache_size = mp->local_cache[lcore_id].size;
>> > +		fprintf(f, "    cache_size[%u]=%"PRIu32"\n",
>> > +			lcore_id, cache_size);
>> >  		cache_count = mp->local_cache[lcore_id].len;
>> > -		fprintf(f, "    cache_count[%u]=%u\n", lcore_id, cache_count);
>> > +		fprintf(f, "    cache_count[%u]=%"PRIu32"\n",
>> > +			lcore_id, cache_count);
>> >  		count += cache_count;
>>
>> Does it still make sense to dump the contents of the cache, given that
>> external caches may exist?
>>
>> If we want to list all caches, we could imagine a sort of cache
>> registration before using a cache structure. Example:
>>
>>   cache = rte_mempool_add_cache()
>>   rte_mempool_del_cache()
>>
>> All the caches could then be browsed internally using a list.
>>
>> Thoughts?
>
> Seems a bit excessive to me.
> People can still have and use an external cache without any registration.
> I suppose all we can do here is print the 'internal' caches.
> Konstantin
>
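As mentioned above, here is a rough sketch of the invalidate/flush step,
just to illustrate the idea. It is not part of the patch, and it assumes
the user-owned cache stores its objects in an objs[] array, as the
per-lcore cache does today:

    /*
     * Illustration only, not part of the patch: return whatever is left
     * in a user-owned cache to the pool, e.g. before thread termination
     * or as part of a future cache free() function.
     */
    static void
    mempool_user_cache_flush(struct rte_mempool *mp,
                             struct rte_mempool_cache *cache)
    {
            if (cache->len == 0)
                    return;

            /*
             * rte_mempool_put_bulk() is used here for simplicity; a real
             * implementation would probably push straight to the ring so
             * that the objects do not end up in the caller's per-lcore
             * cache instead.
             */
            rte_mempool_put_bulk(mp, &cache->objs[0], cache->len);
            cache->len = 0;
    }

The same helper could be called from both the thread-termination path and
the proposed free().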