From: "Hunt, David"
To: Olivier MATZ, dev@dpdk.org
Date: Tue, 8 Mar 2016 10:04:31 +0000
Message-ID: <56DEA3AF.8040704@intel.com>
In-Reply-To: <56D94FD1.2050901@6wind.com>
References: <1453829155-1366-1-git-send-email-david.hunt@intel.com>
 <1453829155-1366-2-git-send-email-david.hunt@intel.com>
 <56B365A0.3080206@6wind.com> <56D599FE.90706@intel.com>
 <56D94FD1.2050901@6wind.com>
Subject: Re: [dpdk-dev] [PATCH 1/5] mempool: add external mempool manager support

Hi Olivier,

On 3/4/2016 9:05 AM, Olivier MATZ wrote:
> Hi David,
>
>>>> @@ -622,6 +607,10 @@ rte_mempool_xmem_create(const char *name,
>>>> unsigned n, unsigned elt_size,
>>>>
>>>>     mp->elt_va_end = mp->elt_va_start;
>>>>
>>>> +   /* Parameters are setup. Call the mempool handler alloc */
>>>> +   if ((rte_mempool_ext_alloc(mp, name, n, socket_id, flags)) == NULL)
>>>> +       goto exit;
>>>> +
>>> I think some memory needs to be freed here. At least 'te'.
>> Done in v2
> Please note that in the meanwhile, this fix has been pushed (as we need
> it for the next release):
> http://dpdk.org/browse/dpdk/commit/lib/librte_mempool/rte_mempool.c?id=86f36ff9578b5f3d697c8fcf6072dcb70e2b246f

v3 will be rebased on top of the latest head of the repo.

>>>> diff --git a/lib/librte_mempool/rte_mempool.h
>>>> b/lib/librte_mempool/rte_mempool.h
>>>> index 6e2390a..620cfb7 100644
>>>> --- a/lib/librte_mempool/rte_mempool.h
>>>> +++ b/lib/librte_mempool/rte_mempool.h
>>>> @@ -88,6 +88,8 @@ extern "C" {
>>>>  struct rte_mempool_debug_stats {
>>>>     uint64_t put_bulk;         /**< Number of puts. */
>>>>     uint64_t put_objs;         /**< Number of objects successfully put. */
>>>> +   uint64_t put_pool_bulk;    /**< Number of puts into pool. */
>>>> +   uint64_t put_pool_objs;    /**< Number of objects into pool. */
>>>>     uint64_t get_success_bulk; /**< Successful allocation number. */
>>>>     uint64_t get_success_objs; /**< Objects successfully allocated. */
>>>>     uint64_t get_fail_bulk;    /**< Failed allocation number. */
>>> I think the comment of put_pool_objs is not very clear.
>>> Shouldn't we have the same stats for get?
>>>
>> Not used, removed. Covered by put_bulk.
> I guess you mean it will be removed in v3? It is still there in the v2
> (the field, not the comment that has been fixed).
>
> Shouldn't we have the same stats for get?

I believe gets are covered by the existing get_success_bulk and
get_fail_bulk counters.
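As an aside, a quick sketch of how those counters are already bumped in
the get path today (from memory, based on the __MEMPOOL_STAT_ADD() macro
in rte_mempool.h, so the surrounding code may differ slightly):

    /* ret is the return value of the backing get/dequeue call;
     * the macro increments both the *_bulk and *_objs counters
     * for the current lcore */
    if (ret < 0)
        __MEMPOOL_STAT_ADD(mp, get_fail, n);
    else
        __MEMPOOL_STAT_ADD(mp, get_success, n);

So each get attempt is counted both as one bulk operation and as n
objects, which is why separate pool-level get stats didn't seem
necessary.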
>>>>  /**
>>>>   * The RTE mempool structure.
>>>>   */
>>>>  struct rte_mempool {
>>>>     char name[RTE_MEMPOOL_NAMESIZE]; /**< Name of mempool. */
>>>> -   struct rte_ring *ring;           /**< Ring to store objects. */
>>>>     phys_addr_t phys_addr;           /**< Phys. addr. of mempool struct. */
>>>>     int flags;                       /**< Flags of the mempool. */
>>>>     uint32_t size;                   /**< Size of the mempool. */
>>>> @@ -194,6 +270,11 @@ struct rte_mempool {
>>>>
>>>>     unsigned private_data_size;      /**< Size of private data. */
>>>>
>>>> +   /* Common pool data structure pointer */
>>>> +   void *rt_pool __rte_cache_aligned;
>>> What is the meaning of rt_pool?
>> I agree that it's probably not a very good name. Since it's basically
>> the pointer which is used by the handlers' callbacks, maybe we should
>> call it mempool_storage? That leaves it generic enough that it can point
>> at a ring, an array, or whatever else is needed for a particular handler.
> My question was more about the "rt_" prefix. Maybe I missed something
> obvious? I think "pool" or "pool_handler" is ok.

v3 will use "pool".

>>>> -
>>>> +
>>>> +   /* add new handler index */
>>>> +   handler_idx = mempool_handler_list.num_handlers++;
>>>> +
>>>> +   snprintf(mempool_handler_list.handler[handler_idx].name,
>>>> +       RTE_MEMPOOL_NAMESIZE, "%s", h->name);
>>>> +   mempool_handler_list.handler[handler_idx].alloc = h->alloc;
>>>> +   mempool_handler_list.handler[handler_idx].put = h->put;
>>>> +   mempool_handler_list.handler[handler_idx].get = h->get;
>>>> +   mempool_handler_list.handler[handler_idx].get_count = h->get_count;
>>>> +
>>>> +   rte_spinlock_unlock(&mempool_handler_list.sl);
>>>> +
>>>> +   return handler_idx;
>>>> +}
>>> Why not use a mechanism similar to what we have for PMDs?
>>>
>>> void rte_eal_driver_register(struct rte_driver *driver)
>>> {
>>>     TAILQ_INSERT_TAIL(&dev_driver_list, driver, next);
>>> }
>>>
>> Already discussed previously: an index is needed rather than a pointer
>> because of secondary processes.
> Could you add a comment stating this? It may help future readers
> to have this info.

v3: A comment has been added to the header file where we define
handler_idx, explaining the use of an index versus pointers.

>>> Last thing, I think this code should go in rte_mempool.c, not in
>>> rte_mempool_default.c.
>> I was trying to keep the default handlers together in their own file,
>> rather than having them in with the mempool framework. I think it's
>> better having them separate, and new handlers can go in their own files
>> also, no?
> OK for the following functions:
> common_ring_mp_put()
> common_ring_sp_put()
> common_ring_mc_get()
> common_ring_sc_get()
> common_ring_get_count()
> rte_mempool_common_ring_alloc() (note: only this one has
> a rte_mempool prefix, maybe it should be fixed)
>
> The other functions are part of the framework to add an external
> handler; I don't think they should go in rte_mempool_default.c.
> They could either go in rte_mempool.c or in another file,
> rte_mempool_handler.c.

v3: Agreed. The bulk of the v3 is simplification of the files. All of
the "common" callbacks are now in rte_mempool_handler.c and
rte_mempool_handler.h. I've renamed the 'alloc' function above to be in
line with the naming of the others. The 'custom' handler has been
banished to the autotest code, so as to keep the library as clean as
possible. What's interesting is that the autotest can have all the code
defining the custom mempool (including its registration), keeping the
library free of user code.
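For illustration, the custom handler in the autotest ends up looking
something like the sketch below. This is only a sketch: the struct
fields mirror the name/alloc/put/get/get_count callbacks in the
registration code quoted above, and the "custom_mempool_*" callback
names and the exact registration function name are illustrative rather
than final.

    /* Callback names below are hypothetical placeholders. */
    static struct rte_mempool_handler handler_custom = {
        .name = "custom_handler",
        .alloc = custom_mempool_alloc,
        .put = custom_mempool_put,
        .get = custom_mempool_get,
        .get_count = custom_mempool_get_count,
    };

    /* Registration returns an index rather than a pointer: function
     * pointers map to different addresses in secondary processes,
     * but an index into the handler table stays valid everywhere. */
    handler_idx = rte_mempool_handler_register(&handler_custom);

Since registration is just a call from the test code, none of this
needs to live in the library itself.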
>>>> --- /dev/null
>>>> +++ b/lib/librte_mempool/rte_mempool_internal.h
>>> Is it the proper name?
>>> We could imagine a mempool handler provided by a plugin, and
>>> in this case this code should go in rte_mempool.h.
>> I was trying to keep the public APIs in rte_mempool.h, and all the
>> private stuff in rte_mempool_internal.h. Maybe a better name would be
>> rte_mempool_private.h?
> Are these functions internal? I mean, is it possible for an application
> or an external PMD (.so) to provide its own handler? I think it would
> be really interesting to have this capability.
>
> Then, I would prefer to have this either in rte_mempool.h or in
> rte_mempool_handler.h (that would be coherent with the .c)

It is now rte_mempool_handler.h.

So, to synopsise:

rte_mempool.[ch]         - mempool create, populate, audit, dump, etc.
rte_mempool_handler.[ch] - handler registration, and functions to invoke
                           the handler callbacks
rte_mempool_default.c    - the default internal handlers (sp/sc, mp/mc,
                           etc.)

The custom handler has been moved out to app/test/test_ext_mempool.c.

Thanks,
Dave.