From: "Hunt, David" <david.hunt@intel.com>
To: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Cc: Shreyansh Jain, dev@dpdk.org, olivier.matz@6wind.com,
 viktorin@rehivetech.com
Date: Thu, 9 Jun 2016 12:06:28 +0100
Message-ID: <57594DB4.8080903@intel.com>
In-Reply-To: <20160609103126.GA27529@localhost.localdomain>
Subject: Re: [dpdk-dev] [PATCH v8 1/3] mempool: support external mempool
 operations

On 9/6/2016 11:31 AM, Jerin Jacob wrote:
> On Thu, Jun 09, 2016 at 10:39:46AM +0100, Hunt, David wrote:
>> Hi Shreyansh,
>>
>> On 8/6/2016 2:48 PM, Shreyansh Jain wrote:
>>> Hi David,
>>>
>>> Thanks for the explanation. I have some comments inline...
>>>
>>>> -----Original Message-----
>>>> From: Hunt, David [mailto:david.hunt@intel.com]
>>>> Sent: Tuesday, June 07, 2016 2:56 PM
>>>> To: Shreyansh Jain; dev@dpdk.org
>>>> Cc: olivier.matz@6wind.com; viktorin@rehivetech.com;
>>>> jerin.jacob@caviumnetworks.com
>>>> Subject: Re: [dpdk-dev] [PATCH v8 1/3] mempool: support external
>>>> mempool operations
>>>>
>>>> Hi Shreyansh,
>>>>
>>>> On 6/6/2016 3:38 PM, Shreyansh Jain wrote:
>>>>> Hi,
>>>>>
>>>>> (Apologies for the overly-eager email sent on this thread earlier.
>>>>> Will be more careful in future.)
>>>>> This is more of a question/clarification than a comment. (And I have
>>>>> taken only some snippets from the original mail to keep it cleaner.)
>>>>>
>>>>>> +MEMPOOL_REGISTER_OPS(ops_mp_mc);
>>>>>> +MEMPOOL_REGISTER_OPS(ops_sp_sc);
>>>>>> +MEMPOOL_REGISTER_OPS(ops_mp_sc);
>>>>>> +MEMPOOL_REGISTER_OPS(ops_sp_mc);
>>>>>
>>>>> From the above, what I understand is that multiple packet pool
>>>>> handlers can be created.
>>>>> I have a use-case where the application has multiple pools, but only
>>>>> the packet pool is hardware-backed. Using the hardware for general
>>>>> buffer requirements would prove costly.
>>>>> From what I understand from the patch, selection of the pool is based
>>>>> on the flags below.
>>>>
>>>> The flags are only used to select one of the default handlers, for
>>>> backward compatibility, through the rte_mempool_create call. If you
>>>> wish to use a mempool handler that is not one of the defaults (i.e. a
>>>> new hardware handler), you would use rte_mempool_create_empty followed
>>>> by the rte_mempool_set_ops_byname call.
>>>> So, for the external handlers, you create an empty mempool, then set
>>>> the operations (ops) for that particular mempool.
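To make that flow concrete, it would look something like this (untested
sketch; "my_hw_ops" is just a placeholder for whatever name the hardware
handler registers via MEMPOOL_REGISTER_OPS, and the pool name, element
count, sizes and socket id are whatever the app needs):

    struct rte_mempool *mp;

    mp = rte_mempool_create_empty("hw_pkt_pool", 8192, elt_size,
            cache_size, sizeof(struct rte_pktmbuf_pool_private),
            socket_id, 0);
    if (mp == NULL)
        return NULL;

    /* replace the default ring-based ops with the external handler */
    rte_mempool_set_ops_byname(mp, "my_hw_ops");

    /* then populate the pool as usual */
    rte_mempool_populate_default(mp);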
>>> I am concerned about the existing applications (for example, l3fwd).
>>> An explicit 'rte_mempool_create_empty -> rte_mempool_set_ops_byname'
>>> model would require modifications to these applications.
>>> Ideally, without any modifications, these applications should be able
>>> to use packet pools (backed by hardware) and buffer pools (backed by
>>> ring/others) - transparently.
>>>
>>> If I go by your suggestion, what I understand is that doing the above
>>> without modification to applications would be equivalent to:
>>>
>>> struct rte_mempool_ops custom_hw_allocator = {...}
>>>
>>> thereafter, in config/common_base:
>>>
>>> CONFIG_RTE_MBUF_DEFAULT_MEMPOOL_OPS="custom_hw_allocator"
>>>
>>> Calls to rte_pktmbuf_pool_create would then use the new allocator.
>> Yes, correct. But only for calls to rte_pktmbuf_pool_create(). Calls to
>> rte_mempool_create() will continue to use the default handlers
>> (ring-based).
>>> But another problem arises here.
>>>
>>> There are two distinct paths for allocation of a memory pool:
>>> 1. A 'pkt' pool:
>>>        rte_pktmbuf_pool_create
>>>          \- rte_mempool_create_empty
>>>          |      \- rte_mempool_set_ops_byname(..ring_mp_mc..)
>>>          |
>>>          `- rte_mempool_set_ops_byname
>>>                 (...RTE_MBUF_DEFAULT_MEMPOOL_OPS..)
>>>             /* Overrides the default 'ring_mp_mc' set by
>>>              * rte_mempool_create_empty */
>>>
>>> 2. Through the generic mempool create API:
>>>        rte_mempool_create
>>>          \- rte_mempool_create_empty
>>>               (passing pktmbuf and pool constructors)
>>>
>>> I found various instances in the example applications where
>>> rte_mempool_create() is called directly for packet pools, bypassing the
>>> semantically correct rte_pktmbuf_* calls for packet pools.
>>>
>>> In control path (2), RTE_MBUF_DEFAULT_MEMPOOL_OPS wouldn't be able to
>>> replace the handler operations for packet buffer allocations.
>>>
>>> From a performance point of view, applications should be able to select
>>> between packet pools and non-packet pools.
>> This is intended for backward compatibility and API consistency. Any
>> applications that use rte_mempool_create() directly will continue to use
>> the default mempool handlers. If they need to use a custom handler, they
>> will need to be modified to call the newer API,
>> rte_mempool_create_empty() and rte_mempool_set_ops_byname().
>>
>>>>>> +	/*
>>>>>> +	 * Since we have 4 combinations of SP/SC/MP/MC, examine the flags
>>>>>> +	 * to set the correct index into the table of ops structs.
>>>>>> +	 */
>>>>>> +	if ((flags & MEMPOOL_F_SP_PUT) && (flags & MEMPOOL_F_SC_GET))
>>>>>> +		rte_mempool_set_ops_byname(mp, "ring_sp_sc");
>>>>>> +	else if (flags & MEMPOOL_F_SP_PUT)
>>>>>> +		rte_mempool_set_ops_byname(mp, "ring_sp_mc");
>>>>>> +	else if (flags & MEMPOOL_F_SC_GET)
>>>>>> +		rte_mempool_set_ops_byname(mp, "ring_mp_sc");
>>>>>> +	else
>>>>>> +		rte_mempool_set_ops_byname(mp, "ring_mp_mc");
>>>>>> +
>>> My suggestion is to have an additional flag, 'MEMPOOL_F_PKT_ALLOC',
>>> which, if specified, would:
>>>
>>> ...
>>> #define MEMPOOL_F_SC_GET    0x0008
>>> #define MEMPOOL_F_PKT_ALLOC 0x0010
>>> ...
>>>
>>> in rte_mempool_create_empty:
>>> ... after checking the other MEMPOOL_F_* flags...
>>>
>>> if (flags & MEMPOOL_F_PKT_ALLOC)
>>>     rte_mempool_set_ops_byname(mp, RTE_MBUF_DEFAULT_MEMPOOL_OPS);
>>>
>>> and removing the now-redundant call to rte_mempool_set_ops_byname() in
>>> rte_pktmbuf_pool_create().
>>>
>>> Thereafter, rte_pktmbuf_pool_create can be changed to:
>>>
>>> ...
>>>  mp = rte_mempool_create_empty(name, n, elt_size, cache_size,
>>> -     sizeof(struct rte_pktmbuf_pool_private), socket_id, 0);
>>> +     sizeof(struct rte_pktmbuf_pool_private), socket_id,
>>> +     MEMPOOL_F_PKT_ALLOC);
>>>  if (mp == NULL)
>>>      return NULL;
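To spell out the knock-on effect: with that check inside
rte_mempool_create_empty(), an application calling the generic API
directly (control path 2 above) would only need the extra flag bit to
pick up the configured handler. Again an untested sketch, with
placeholder pool name, counts and sizes:

    mp = rte_mempool_create("mbuf_pool", 8192, MBUF_SIZE, 32,
            sizeof(struct rte_pktmbuf_pool_private),
            rte_pktmbuf_pool_init, NULL, rte_pktmbuf_init, NULL,
            socket_id, MEMPOOL_F_PKT_ALLOC);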
>> Yes, this would somewhat simplify the creation of a pktmbuf pool, in
>> that it replaces the rte_mempool_set_ops_byname() call with a flag bit.
>> However, I'm not sure we want to introduce a third method of creating a
>> mempool to developers. If we introduced this, we would then have:
>> 1. rte_pktmbuf_pool_create()
>> 2. rte_mempool_create_empty() with MEMPOOL_F_PKT_ALLOC set (which would
>> use the configured custom handler)
>> 3. rte_mempool_create_empty() with MEMPOOL_F_PKT_ALLOC __not__ set,
>> followed by a call to rte_mempool_set_ops_byname() (which would allow
>> several different custom handlers to be used in one application)
>>
>> Does anyone else have an opinion on this? Olivier, Jerin, Jan?
> As I mentioned earlier, my take is not to create separate APIs for
> external mempool handlers. In my view it's the same thing, just a
> separate mempool handler reached through function pointers.
>
> To keep backward compatibility, I think we can extend the flags in
> rte_mempool_create and have a single API for external/internal pool
> creation (this makes it easy for existing applications too: just add a
> mempool handler command line argument to an existing application to
> choose the handler).
>
> Jerin

Would a good compromise be what Shreyansh is suggesting: adding just one
bit to the flags to allow rte_mempool_create to use the mempool handler
defined in CONFIG_RTE_MBUF_DEFAULT_MEMPOOL_OPS? That way we avoid the
complexity of the previously discussed proposal of parsing sections of
the flag bits into bytes to get the requested handler index, and we
wouldn't have to query by name to get the index of the handler we're
interested in. It's just a simple "use the configured custom handler or
not" bit (rough sketch in the P.S. below).

This way, applications can set the bit to use the custom handler defined
in the config, but still have the flexibility of using the other new API
calls to run several handlers in one application.

Regards,
David.
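P.S. Roughly what I have in mind inside rte_mempool_create(), reusing
Shreyansh's proposed flag name. An untested sketch, with the new bit
checked ahead of the existing ring selection:

    /* one bit says "use the configured default ops" */
    if (flags & MEMPOOL_F_PKT_ALLOC)
        rte_mempool_set_ops_byname(mp, RTE_MBUF_DEFAULT_MEMPOOL_OPS);
    else if ((flags & MEMPOOL_F_SP_PUT) && (flags & MEMPOOL_F_SC_GET))
        rte_mempool_set_ops_byname(mp, "ring_sp_sc");
    else if (flags & MEMPOOL_F_SP_PUT)
        rte_mempool_set_ops_byname(mp, "ring_sp_mc");
    else if (flags & MEMPOOL_F_SC_GET)
        rte_mempool_set_ops_byname(mp, "ring_mp_sc");
    else
        rte_mempool_set_ops_byname(mp, "ring_mp_mc");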