From: "Hunt, David" <david.hunt@intel.com>
To: Hemant Agrawal, Olivier Matz <olivier.matz@6wind.com>, dev@dpdk.org
Cc: jerin.jacob@caviumnetworks.com
Date: Wed, 5 Oct 2016 14:15:38 +0100
Message-ID: <0d413088-b297-1cb1-06b7-28c17ed113da@intel.com>
Subject: Re: [dpdk-dev] [RFC 0/7] changing mbuf pool handler

On 5/10/2016 12:49 PM, Hemant Agrawal wrote:
> Hi Olivier,
>
>> -----Original Message-----
>> From: Hunt, David [mailto:david.hunt@intel.com]
>>
>> Hi Olivier,
>>
>> On 3/10/2016 4:49 PM, Olivier Matz wrote:
>>> Hi Hemant,
>>>
>>> Thank you for your feedback.
>>>
>>> On 09/22/2016 01:52 PM, Hemant Agrawal wrote:
>>>> Hi Olivier,
>>>>
>>>> On 9/19/2016 7:12 PM, Olivier Matz wrote:
>>>>> Hello,
>>>>>
>>>>> Following the discussion from [1] ("usages issue with external
>>>>> mempool"), this is an attempt to make the mempool_ops feature
>>>>> introduced by David Hunt [2] more widely used by applications.
>>>>>
>>>>> It applies on top of a minor fix in the mbuf lib [3].
>>>>>
>>>>> To summarize the needs (please comment if I did not get them right):
>>>>>
>>>>> - new hw-assisted mempool handlers will soon be introduced
>>>>> - to make use of them, the new mempool API [4]
>>>>>   (rte_mempool_create_empty, rte_mempool_populate, ...) has to be
>>>>>   used
>>>>> - the legacy mempool API (rte_mempool_create) does not allow
>>>>>   changing the mempool ops: the default is "ring_<sp|mp>_<sc|mc>",
>>>>>   depending on the flags
>>>>> - the mbuf helper (rte_pktmbuf_pool_create) does not allow changing
>>>>>   them either, and its default is RTE_MBUF_DEFAULT_MEMPOOL_OPS
>>>>>   ("ring_mp_mc")
>>>>> - today, most (if not all) applications and examples use either
>>>>>   rte_pktmbuf_pool_create or rte_mempool_create to create the mbuf
>>>>>   pool, making it difficult to take advantage of this feature with
>>>>>   existing apps
>>>>>
>>>>> My initial idea was to deprecate both rte_pktmbuf_pool_create() and
>>>>> rte_mempool_create(), forcing applications to use the new, more
>>>>> flexible API. But after digging a bit, it appeared that
>>>>> rte_mempool_create() is widely used, and not only for mbufs.
>>>>> Deprecating it would have a big impact on applications, and
>>>>> replacing it with the new API would be overkill in many use-cases.
>>>>
>>>> I agree with the proposal.
>>>>
>>>>> So I finally tried the following approach (inspired by a suggestion
>>>>> from Jerin [5]):
>>>>>
>>>>> - add a new mempool_ops parameter to rte_pktmbuf_pool_create().
>>>>>   This unfortunately breaks the API, but I implemented an ABI
>>>>>   compat layer. If the patch is accepted, we could discuss how to
>>>>>   announce/schedule the API change.
>>>>> - update the applications and documentation to prefer
>>>>>   rte_pktmbuf_pool_create() as much as possible
>>>>> - update the most-used examples (testpmd, l2fwd, l3fwd) to add a
>>>>>   new command line argument to select the mempool handler
>>>>>
>>>>> I hope external applications would then switch to
>>>>> rte_pktmbuf_pool_create(), since it supports most of the use-cases
>>>>> (even priv_size != 0, since we can call rte_mempool_obj_iter()
>>>>> afterwards).
>>>>>
>>>> I would still prefer if you can add "rte_mempool_obj_cb_t *obj_cb,
>>>> void *obj_cb_arg" into "rte_pktmbuf_pool_create". This single
>>>> consolidated wrapper will make it almost certain that applications
>>>> will not try to use rte_mempool_create for packet buffers.
>>>
>>> The patch changes the example applications. I'm not sure I understand
>>> why adding these arguments would force applications not to use
>>> rte_mempool_create() for packet buffers. Do you have an application
>>> in mind?
>>>
>>> As for the mempool_ops parameter, we must pass it at init because we
>>> need to know the mempool handler before populating the pool. Object
>>> initialization can be done afterwards, so I thought it was better to
>>> reduce the number of arguments and avoid falling into the
>>> mempool_create() syndrome :)
>>
>> I also agree with the proposal. Looks cleaner.
>>
>> I would lean to the side of keeping the parameters to a minimum, i.e.
>> not adding *obj_cb and *obj_cb_arg into rte_pktmbuf_pool_create.
>> Developers always have the option of going with rte_mempool_create if
>> they need more fine-grained control.
>
> [Hemant] The implementations with hw-offloaded mempools don't want
> developers using *rte_mempool_create* for packet buffer pools: that
> API does not work for hw-offloaded mempools.
>
> Also, *rte_mempool_create_empty* may not be convenient for many
> applications, as it requires calling 4+ APIs.
>
> Olivier is not in favor of deprecating *rte_mempool_create*, and I
> agree with his concerns.
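[Editor's aside, not part of the thread: to make the "4+ APIs" point
concrete, the call sequence the new mempool API implies for an mbuf pool
looks roughly like the sketch below. Error handling is omitted, and
"my_hw_handler", NB_MBUF, CACHE_SIZE, PRIV_SIZE and DATA_ROOM_SIZE are
placeholder names, not identifiers from the thread.]

```c
/* Sketch only: building an mbuf pool with the new mempool API instead
 * of rte_pktmbuf_pool_create(). Error handling omitted for brevity;
 * the handler name and size constants are placeholders.
 */
unsigned elt_size = sizeof(struct rte_mbuf) + PRIV_SIZE + DATA_ROOM_SIZE;
struct rte_mempool *mp;

mp = rte_mempool_create_empty("mbuf_pool", NB_MBUF, elt_size, CACHE_SIZE,
                              sizeof(struct rte_pktmbuf_pool_private),
                              SOCKET_ID_ANY, 0);
rte_mempool_set_ops_byname(mp, "my_hw_handler", NULL); /* 1: pick the handler */
rte_pktmbuf_pool_init(mp, NULL);                       /* 2: init pool private area
                                                        *    (NULL = defaults)   */
rte_mempool_populate_default(mp);                      /* 3: allocate the objects */
rte_mempool_obj_iter(mp, rte_pktmbuf_init, NULL);      /* 4: init each mbuf */
```

The key constraint discussed above is visible here: the handler must be
selected (step 1) before the pool is populated (step 3), which is why the
ops name has to be a creation-time parameter rather than something set
afterwards.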
>
> Essentially, I was suggesting to upgrade *rte_pktmbuf_pool_create* to
> be like *rte_mempool_create*, but exclusively for packet buffers.
>
> This will provide a clear segregation of API usage w.r.t. packet
> buffer pools vs all other types of mempools.

Yes, it does sound like we need those extra parameters on
rte_pktmbuf_pool_create.

Regards,
Dave.
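[Editor's aside, not part of the thread: combining the RFC's proposed
mempool_ops parameter with the obj_cb/obj_cb_arg arguments Hemant asked
for, the consolidated helper being discussed could look roughly as
follows. The existing six parameters are those of the current
rte_pktmbuf_pool_create(); the last three names are illustrative, not
taken from a merged patch.]

```c
/* Hypothetical consolidated signature, for illustration only.
 * The first six parameters match the existing helper; ops_name,
 * obj_cb and obj_cb_arg are the additions under discussion.
 */
struct rte_mempool *
rte_pktmbuf_pool_create(const char *name, unsigned n,
                        unsigned cache_size, uint16_t priv_size,
                        uint16_t data_room_size, int socket_id,
                        const char *ops_name,
                        rte_mempool_obj_cb_t *obj_cb, void *obj_cb_arg);
```

A NULL ops_name could presumably fall back to
RTE_MBUF_DEFAULT_MEMPOOL_OPS ("ring_mp_mc"), preserving today's
behaviour for unmodified applications.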