From: "Hunt, David"
To: Jerin Jacob, Shreyansh Jain
Cc: dev@dpdk.org, olivier.matz@6wind.com, viktorin@rehivetech.com
Date: Thu, 9 Jun 2016 14:18:57 +0100
Message-ID: <57596CC1.8000509@intel.com>
In-Reply-To: <20160609123034.GA30968@localhost.localdomain>
Subject: Re: [dpdk-dev] [PATCH v8 1/3] mempool: support external mempool operations

On 9/6/2016 1:30 PM, Jerin Jacob wrote:
> On Thu, Jun 09, 2016 at 11:49:44AM +0000, Shreyansh Jain wrote:
>> Hi Jerin,
> Hi Shreyansh,
>
>>>> Yes, this would simplify somewhat the creation of a pktmbuf pool, in
>>>> that it replaces the rte_mempool_set_ops_byname() call with a flag bit.
>>>> However, I'm not sure we want to introduce a third method of creating
>>>> a mempool to developers. If we introduced this, we would then have:
>>>> 1. rte_pktmbuf_pool_create()
>>>> 2. rte_mempool_create_empty() with MEMPOOL_F_PKT_ALLOC set (which
>>>> would use the configured custom handler)
>>>> 3. rte_mempool_create_empty() with MEMPOOL_F_PKT_ALLOC __not__ set,
>>>> followed by a call to rte_mempool_set_ops_byname() (which would allow
>>>> several different custom handlers to be used in one application)
>>>>
>>>> Does anyone else have an opinion on this? Olivier, Jerin, Jan?
>>> As I mentioned earlier, my take is not to create separate APIs for
>>> external mempool handlers. In my view, it's the same thing, just a
>>> separate mempool handler reached through function pointers.
>>>
>>> To keep backward compatibility, I think we can extend the flags in
>>> rte_mempool_create() and have a single API for external/internal pool
>>> creation. That makes it easy for existing applications too: just add a
>>> mempool-handler command-line argument to choose the handler.
>> Maybe I am interpreting it wrong, but are you suggesting a single
>> mempool handler for all buffer/packet needs of an application (passed
>> as a command-line argument)?
>> That would be inefficient, especially where a pool is backed by
>> hardware. The application wouldn't want its generic buffers to consume
>> hardware resources that would be better used for packets.
> It may vary from platform to platform, or with the particular use case.
> For instance, a HW external pool manager for generic buffers may scale
> better than the SW multi-producer/multi-consumer implementation when
> the number of cores is greater than N, as no locking is involved in
> enqueue/dequeue (though again, that depends on the specific HW
> implementation).
>
> I thought there was no harm in selecting the external pool handler at
> the top level itself (rte_mempool_create()): by default it is SW MP/MC,
> and this is just an option to override that if the application wants to.
>
> Jerin

So, how about we go with the following, based on Shreyansh's suggestion:

1. Add a new flag (EMM standing for External Mempool Manager):

   #define MEMPOOL_F_EMM_ALLOC 0x0010

2. Check this bit in rte_mempool_create() just before the other bits are
checked. (By the way, the flags check had been moved into
rte_mempool_create() in an earlier patchset, but was inadvertently
reverted.)

   /*
    * First check to see if we use the config'd mempool handler.
    * Then examine the combinations of SP/SC/MP/MC flags to
    * set the correct index into the table of ops structs.
    */
   if (flags & MEMPOOL_F_EMM_ALLOC)
           rte_mempool_set_ops_byname(mp, RTE_MBUF_DEFAULT_MEMPOOL_OPS);
   else if ((flags & MEMPOOL_F_SP_PUT) && (flags & MEMPOOL_F_SC_GET))
           rte_mempool_set_ops_byname(mp, "ring_sp_sc");
   else if (flags & MEMPOOL_F_SP_PUT)
           rte_mempool_set_ops_byname(mp, "ring_sp_mc");
   else if (flags & MEMPOOL_F_SC_GET)
           rte_mempool_set_ops_byname(mp, "ring_mp_sc");
   else
           rte_mempool_set_ops_byname(mp, "ring_mp_mc");

3. Modify rte_pktmbuf_pool_create() to pass the bit to rte_mempool_create():

   -        sizeof(struct rte_pktmbuf_pool_private), socket_id, 0);
   +        sizeof(struct rte_pktmbuf_pool_private), socket_id,
   +        MEMPOOL_F_EMM_ALLOC);

This will allow legacy apps to use the one configured external handler (as
defined by RTE_MBUF_DEFAULT_MEMPOOL_OPS) simply by adding the
MEMPOOL_F_EMM_ALLOC bit to the flags in their call to rte_mempool_create().
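To illustrate, a legacy app's creation call would then look something like
this (a sketch only; the pool name, counts and sizes here are made up, and
MEMPOOL_F_EMM_ALLOC is the flag proposed above, not something in the tree):

   struct rte_mempool *mp;

   /* Same rte_mempool_create() call as today; the proposed
    * MEMPOOL_F_EMM_ALLOC flag requests the configured external
    * handler instead of the SP/SC/MP/MC ring variants. */
   mp = rte_mempool_create("pkt_pool", 8192, RTE_MBUF_DEFAULT_BUF_SIZE,
           256, sizeof(struct rte_pktmbuf_pool_private),
           rte_pktmbuf_pool_init, NULL, rte_pktmbuf_init, NULL,
           rte_socket_id(), MEMPOOL_F_EMM_ALLOC);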
Of course, if an app wants to use more than one external handler, it can
call rte_mempool_create_empty() and rte_mempool_set_ops_byname() for each
mempool it creates.

Regards,
Dave.
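P.S. For reference, that multi-handler path would look roughly like the
sketch below. This is only an illustration: "my_hw_handler" is a made-up
handler name, the counts and sizes are arbitrary, and it assumes the
two-argument rte_mempool_set_ops_byname() from this patchset.

   /* Create the pool with no objects yet, so the ops can still
    * be changed before any memory is added to the pool. */
   struct rte_mempool *mp = rte_mempool_create_empty("hw_pool", 8192,
           2048, 256, 0, rte_socket_id(), 0);
   if (mp == NULL)
           rte_exit(EXIT_FAILURE, "cannot create empty mempool\n");

   /* Pick the handler for this particular pool... */
   if (rte_mempool_set_ops_byname(mp, "my_hw_handler") != 0)
           rte_exit(EXIT_FAILURE, "handler not registered\n");

   /* ...and only then add the objects' backing memory. */
   if (rte_mempool_populate_default(mp) < 0)
           rte_exit(EXIT_FAILURE, "cannot populate mempool\n");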