From mboxrd@z Thu Jan 1 00:00:00 1970
From: "Ananyev, Konstantin"
To: Olivier Matz, Lazaros Koromilas, "dev@dpdk.org"
Date: Mon, 21 Mar 2016 13:15:47 +0000
Message-ID: <2601191342CEEE43887BDE71AB97725836B1F1F5@irsmsx105.ger.corp.intel.com>
References: <1457621082-22151-1-git-send-email-l@nofutznetworks.com> <56EFE781.4090809@6wind.com>
In-Reply-To: <56EFE781.4090809@6wind.com>
Subject: Re: [dpdk-dev] [PATCH] mempool: allow for user-owned mempool caches
Hi lads,

> Hi Lazaros,
>
> Thanks for this patch. To me, this is a valuable enhancement.
> Please find some comments inline.

Yep, patch seems interesting.

One question - what would be the usage model for get/put_with_cache
functions for non-EAL threads?
I meant for each non-EAL thread the user will need to maintain an array
(or list or some other structure) of <mempool, cache> pairs to make sure
that the cache always matches the mempool, correct?

Again, for non-EAL threads don't we need some sort of invalidate_cache()
that would put all mbufs in the cache back to the pool?
For the thread termination case or so?

> On 03/10/2016 03:44 PM, Lazaros Koromilas wrote:
> > The mempool cache is only available to EAL threads as a per-lcore
> > resource. Change this so that the user can create and provide their own
> > cache on mempool get and put operations. This works with non-EAL threads
> > too. This commit introduces new API calls with the 'with_cache' suffix,
> > while the current ones default to the per-lcore local cache.
> >
> > Signed-off-by: Lazaros Koromilas
> > ---
> >  lib/librte_mempool/rte_mempool.c |  65 +++++-
> >  lib/librte_mempool/rte_mempool.h | 442 +++++++++++++++++++++++++++++++++++---
> >  2 files changed, 467 insertions(+), 40 deletions(-)
> >
> > diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
> > index f8781e1..cebc2b7 100644
> > --- a/lib/librte_mempool/rte_mempool.c
> > +++ b/lib/librte_mempool/rte_mempool.c
> > @@ -375,6 +375,43 @@ rte_mempool_xmem_usage(void *vaddr, uint32_t elt_num, size_t elt_sz,
> >  	return usz;
> >  }
> >
> > +#if RTE_MEMPOOL_CACHE_MAX_SIZE > 0
>
> I wonder if this wouldn't cause a conflict with Keith's patch
> that removes some #ifdefs RTE_MEMPOOL_CACHE_MAX_SIZE.
> See: http://www.dpdk.org/dev/patchwork/patch/10492/
>
> As this patch is already acked for 16.07, I think that your v2
> could be rebased on top of it to avoid conflicts when Thomas will apply
> it.
>
> By the way, I also encourage you to have a look at other works in
> progress in mempool:
> http://www.dpdk.org/ml/archives/dev/2016-March/035107.html
> http://www.dpdk.org/ml/archives/dev/2016-March/035201.html
>
>
> > +static void
> > +mempool_cache_init(struct rte_mempool_cache *cache, uint32_t size)
> > +{
> > +	cache->size = size;
> > +	cache->flushthresh = CALC_CACHE_FLUSHTHRESH(size);
> > +	cache->len = 0;
> > +}
> > +
> > +/*
> > + * Creates and initializes a cache for objects that are retrieved from and
> > + * returned to an underlying mempool. This structure is identical to the
> > + * structure included inside struct rte_mempool.
> > + */
>
> On top of Keith's patch, this comment may be reworked as the cache
> structure is not included in the mempool structure anymore.
>
> nit: I think the imperative form is preferred
>
>
> > +struct rte_mempool_cache *
> > +rte_mempool_cache_create(uint32_t size)

I suppose an extra socket_id parameter is needed, so you can call
zmalloc_socket() beneath.

> > +{
> > +	struct rte_mempool_cache *cache;
> > +
> > +	if (size > RTE_MEMPOOL_CACHE_MAX_SIZE) {
> > +		rte_errno = EINVAL;
> > +		return NULL;
> > +	}
> > +
> > +	cache = rte_zmalloc("MEMPOOL_CACHE", sizeof(*cache), RTE_CACHE_LINE_SIZE);
> > +	if (cache == NULL) {
> > +		RTE_LOG(ERR, MEMPOOL, "Cannot allocate mempool cache!\n");
>
> I would remove the '!'
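To make the socket_id suggestion concrete, here is a rough, self-contained
sketch of how the constructor could take a socket argument. It is not the
DPDK code: the stand-in macros, the stub allocator and the assumed 1.5x
flush-threshold multiplier are there only so the idea compiles on its own;
a real version would call rte_zmalloc_socket() and set rte_errno.

```c
#include <assert.h>
#include <errno.h>
#include <stdint.h>
#include <stdlib.h>

/* Stand-in definitions so the sketch builds outside DPDK; the real
 * macros and types live in rte_mempool.h and rte_malloc.h. */
#define RTE_MEMPOOL_CACHE_MAX_SIZE 512
#define CALC_CACHE_FLUSHTHRESH(c)  (((c) * 3) / 2)  /* assumed 1.5x */
#define RTE_CACHE_LINE_SIZE        64

struct rte_mempool_cache {
	uint32_t size;
	uint32_t flushthresh;
	uint32_t len;
};

/* Stub for rte_zmalloc_socket(): a real implementation would return
 * zeroed, cache-aligned memory from the given NUMA socket. */
static void *
zmalloc_socket_stub(const char *type, size_t sz, unsigned align, int socket_id)
{
	(void)type; (void)align; (void)socket_id;
	return calloc(1, sz);
}

/* Hypothetical constructor with the extra socket_id parameter, so the
 * cache structure itself lands on the caller's NUMA node. */
static struct rte_mempool_cache *
mempool_cache_create(uint32_t size, int socket_id)
{
	struct rte_mempool_cache *cache;

	if (size > RTE_MEMPOOL_CACHE_MAX_SIZE) {
		errno = EINVAL;
		return NULL;
	}

	cache = zmalloc_socket_stub("MEMPOOL_CACHE", sizeof(*cache),
				    RTE_CACHE_LINE_SIZE, socket_id);
	if (cache == NULL)
		return NULL;

	cache->size = size;
	cache->flushthresh = CALC_CACHE_FLUSHTHRESH(size);
	cache->len = 0;
	return cache;
}
```

The only behavioral change versus the posted patch is threading socket_id
through to the allocator; validation and field setup stay the same.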
>
>
> > @@ -587,10 +624,18 @@ rte_mempool_xmem_create(const char *name, unsigned n, unsigned elt_size,
> >  	mp->elt_size = objsz.elt_size;
> >  	mp->header_size = objsz.header_size;
> >  	mp->trailer_size = objsz.trailer_size;
> > -	mp->cache_size = cache_size;
> > -	mp->cache_flushthresh = CALC_CACHE_FLUSHTHRESH(cache_size);
> > +	mp->cache_size = cache_size; /* Keep this for backwards compat. */
>
> I'm wondering if this should be kept for compat or if it makes sense
> to keep it. The question is: do we want the cache_size to be a parameter
> of the mempool or should it be a parameter of the cache?
>
> I think we could remove this field from the mempool structure.
>
>
> > @@ -673,13 +718,17 @@ rte_mempool_dump_cache(FILE *f, const struct rte_mempool *mp)
> >  #if RTE_MEMPOOL_CACHE_MAX_SIZE > 0
> >  	unsigned lcore_id;
> >  	unsigned count = 0;
> > +	unsigned cache_size;
> >  	unsigned cache_count;
> >
> >  	fprintf(f, "  cache infos:\n");
> > -	fprintf(f, "    cache_size=%"PRIu32"\n", mp->cache_size);
> >  	for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
> > +		cache_size = mp->local_cache[lcore_id].size;
> > +		fprintf(f, "    cache_size[%u]=%"PRIu32"\n",
> > +			lcore_id, cache_size);
> >  		cache_count = mp->local_cache[lcore_id].len;
> > -		fprintf(f, "    cache_count[%u]=%u\n", lcore_id, cache_count);
> > +		fprintf(f, "    cache_count[%u]=%"PRIu32"\n",
> > +			lcore_id, cache_count);
> >  		count += cache_count;
>
> Does it still make sense to dump the content of the cache as some
> external caches may exist?
>
> If we want to list all caches, we could imagine a sort of cache
> registration before using a cache structure. Example:
>
> cache = rte_mempool_add_cache()
> rte_mempool_del_cache()
>
> All the caches could be browsed internally using a list.
>
> Thoughts?

Seems a bit excessive to me.
People still can have and use an external cache without any registration.
I suppose all we can do here is print the 'internal' caches.

Konstantin
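On the invalidate_cache() question raised above, a minimal self-contained
sketch of the intended behavior: flush every object held in a user-owned
cache back to the underlying pool, e.g. before a non-EAL thread terminates.
The `fake_` types and names are hypothetical stand-ins, not the DPDK API;
a real version would bulk-enqueue onto the mempool's ring.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define CACHE_MAX 512

/* Stand-in for a mempool: a simple stack of free objects. */
struct fake_mempool {
	void *objs[1024];
	unsigned count;
};

/* Stand-in for a user-owned mempool cache. */
struct fake_mempool_cache {
	uint32_t len;
	void *objs[CACHE_MAX];
};

/* Bulk put: return n objects from the caller to the pool. */
static void
pool_put_bulk(struct fake_mempool *mp, void **objs, unsigned n)
{
	unsigned i;

	for (i = 0; i < n; i++)
		mp->objs[mp->count++] = objs[i];
}

/* Hypothetical invalidate: drain the cache into its pool so no
 * objects are leaked when the owning thread goes away. The cache
 * must only ever be invalidated against the pool it was used with. */
static void
mempool_cache_invalidate(struct fake_mempool *mp,
			 struct fake_mempool_cache *cache)
{
	pool_put_bulk(mp, cache->objs, cache->len);
	cache->len = 0;
}
```

Usage-wise, a non-EAL thread would call this once per <mempool, cache>
pair it maintains, just before freeing the cache structure itself.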