From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: <david.hunt@intel.com>
Received: from mga03.intel.com (mga03.intel.com [134.134.136.65])
 by dpdk.org (Postfix) with ESMTP id 60FE47F00
 for <dev@dpdk.org>; Wed,  5 Oct 2016 11:42:00 +0200 (CEST)
Received: from fmsmga005.fm.intel.com ([10.253.24.32])
 by orsmga103.jf.intel.com with ESMTP; 05 Oct 2016 02:41:59 -0700
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="5.31,448,1473145200"; d="scan'208";a="16391919"
Received: from dhunt5-mobl.ger.corp.intel.com (HELO [10.237.221.40])
 ([10.237.221.40])
 by fmsmga005.fm.intel.com with ESMTP; 05 Oct 2016 02:41:58 -0700
To: Olivier Matz <olivier.matz@6wind.com>,
 Hemant Agrawal <hemant.agrawal@nxp.com>, dev@dpdk.org
References: <1474292567-21912-1-git-send-email-olivier.matz@6wind.com>
 <d11f3842-d715-3afc-9d5d-9e02f2767c2d@nxp.com>
 <dc89a863-3f9e-c0e9-a851-04de2f6e08fe@6wind.com>
Cc: jerin.jacob@caviumnetworks.com
From: "Hunt, David" <david.hunt@intel.com>
Message-ID: <cde42672-b7f4-069c-4260-2ac7342ef71e@intel.com>
Date: Wed, 5 Oct 2016 10:41:57 +0100
User-Agent: Mozilla/5.0 (Windows NT 6.3; WOW64; rv:45.0) Gecko/20100101
 Thunderbird/45.3.0
MIME-Version: 1.0
In-Reply-To: <dc89a863-3f9e-c0e9-a851-04de2f6e08fe@6wind.com>
Content-Type: text/plain; charset=windows-1252; format=flowed
Content-Transfer-Encoding: 7bit
Subject: Re: [dpdk-dev] [RFC 0/7] changing mbuf pool handler
X-BeenThere: dev@dpdk.org
X-Mailman-Version: 2.1.15
Precedence: list
List-Id: patches and discussions about DPDK <dev.dpdk.org>
List-Unsubscribe: <http://dpdk.org/ml/options/dev>,
 <mailto:dev-request@dpdk.org?subject=unsubscribe>
List-Archive: <http://dpdk.org/ml/archives/dev/>
List-Post: <mailto:dev@dpdk.org>
List-Help: <mailto:dev-request@dpdk.org?subject=help>
List-Subscribe: <http://dpdk.org/ml/listinfo/dev>,
 <mailto:dev-request@dpdk.org?subject=subscribe>
X-List-Received-Date: Wed, 05 Oct 2016 09:42:01 -0000

Hi Olivier,


On 3/10/2016 4:49 PM, Olivier Matz wrote:
> Hi Hemant,
>
> Thank you for your feedback.
>
> On 09/22/2016 01:52 PM, Hemant Agrawal wrote:
>> Hi Olivier
>>
>> On 9/19/2016 7:12 PM, Olivier Matz wrote:
>>> Hello,
>>>
>>> Following discussion from [1] ("usages issue with external mempool").
>>>
>>> This is an attempt to make the mempool_ops feature introduced
>>> by David Hunt [2] more widely used by applications.
>>>
>>> It applies on top of a minor fix in mbuf lib [3].
>>>
>>> To summarize the needs (please comment if I did not get it right):
>>>
>>> - new hw-assisted mempool handlers will soon be introduced
>>> - to make use of it, the new mempool API [4] (rte_mempool_create_empty,
>>>    rte_mempool_populate, ...) has to be used
>>> - the legacy mempool API (rte_mempool_create) does not allow changing
>>>    the mempool ops. The default is "ring_<s|m>p_<s|m>c" depending on
>>>    flags.
>>> - the mbuf helper (rte_pktmbuf_pool_create) does not allow changing
>>>    them either, and the default is RTE_MBUF_DEFAULT_MEMPOOL_OPS
>>>    ("ring_mp_mc")
>>> - today, most (if not all) applications and examples use either
>>>    rte_pktmbuf_pool_create or rte_mempool_create to create the mbuf
>>>    pool, making it difficult to take advantage of this feature with
>>>    existing apps.
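
To make that last point concrete: as I read it, an application that wants
a non-default handler today has to open-code more or less what
rte_pktmbuf_pool_create() does internally, just with its own ops name. A
rough, untested sketch (the helper name here is made up; ops_name is
whatever handler the platform registers, e.g. "ring_mp_mc" or a
hw-assisted one):

    #include <rte_mbuf.h>
    #include <rte_mempool.h>

    /* Rough equivalent of rte_pktmbuf_pool_create(), but with a
     * caller-chosen mempool handler instead of the compile-time
     * RTE_MBUF_DEFAULT_MEMPOOL_OPS. */
    static struct rte_mempool *
    pktmbuf_pool_create_with_ops(const char *name, unsigned n,
        unsigned cache_size, uint16_t priv_size,
        uint16_t data_room_size, int socket_id, const char *ops_name)
    {
        struct rte_pktmbuf_pool_private mbp_priv;
        struct rte_mempool *mp;
        unsigned elt_size;

        elt_size = sizeof(struct rte_mbuf) + priv_size + data_room_size;
        mbp_priv.mbuf_data_room_size = data_room_size;
        mbp_priv.mbuf_priv_size = priv_size;

        mp = rte_mempool_create_empty(name, n, elt_size, cache_size,
            sizeof(struct rte_pktmbuf_pool_private), socket_id, 0);
        if (mp == NULL)
            return NULL;

        /* the handler must be chosen before the pool is populated */
        if (rte_mempool_set_ops_byname(mp, ops_name, NULL) != 0) {
            rte_mempool_free(mp);
            return NULL;
        }

        /* per-pool private data, memory allocation, per-object init */
        rte_pktmbuf_pool_init(mp, &mbp_priv);
        if (rte_mempool_populate_default(mp) < 0) {
            rte_mempool_free(mp);
            return NULL;
        }
        rte_mempool_obj_iter(mp, rte_pktmbuf_init, NULL);

        return mp;
    }

That is a fair amount of boilerplate to ask of every application, which is
why moving the ops selection into the helper makes sense to me.
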
>>>
>>> My initial idea was to deprecate both rte_pktmbuf_pool_create() and
>>> rte_mempool_create(), forcing the applications to use the new API, which
>>> is more flexible. But after digging a bit, it appeared that
>>> rte_mempool_create() is widely used, and not only for mbufs. Deprecating
>>> it would have a big impact on applications, and replacing it with the
>>> new API would be overkill in many use-cases.
>> I agree with the proposal.
>>
>>> So I finally tried the following approach (inspired by a suggestion
>>> from Jerin [5]):
>>>
>>> - add a new mempool_ops parameter to rte_pktmbuf_pool_create(). This
>>>    unfortunately breaks the API, but I implemented an ABI compat layer.
>>>    If the patch is accepted, we could discuss how to announce/schedule
>>>    the API change.
>>> - update the applications and documentation to prefer
>>>    rte_pktmbuf_pool_create() as much as possible
>>> - update most used examples (testpmd, l2fwd, l3fwd) to add a new command
>>>    line argument to select the mempool handler
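
I have not cross-checked the exact prototype in the patches, so the
signature below is only my guess at what the extended helper and the new
command-line hook could look like (names and parameter order may well
differ in the actual series):

    /* Assumed shape of the extended helper (not taken from the patches).
     * Passing NULL as ops_name would keep today's behaviour
     * (RTE_MBUF_DEFAULT_MEMPOOL_OPS). */
    struct rte_mempool *
    rte_pktmbuf_pool_create(const char *name, unsigned n,
        unsigned cache_size, uint16_t priv_size,
        uint16_t data_room_size, const char *ops_name, int socket_id);

    /* e.g. in testpmd/l2fwd/l3fwd, with mempool_ops_name filled in from
     * the new command-line option: */
    const char *mempool_ops_name = "ring_mp_mc"; /* or a hw handler */
    struct rte_mempool *mb_pool = rte_pktmbuf_pool_create("mbuf_pool",
        8192, 256, 0, RTE_MBUF_DEFAULT_BUF_SIZE,
        mempool_ops_name, rte_socket_id());
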
>>>
>>> I hope the external applications would then switch to
>>> rte_pktmbuf_pool_create(), since it supports most of the use-cases (even
>>> priv_size != 0, because we can call rte_mempool_obj_iter() afterwards).
>>>
>> I would still prefer that you add the "rte_mempool_obj_cb_t *obj_cb,
>> void *obj_cb_arg" arguments to "rte_pktmbuf_pool_create". This single
>> consolidated wrapper will make it almost certain that applications will
>> not try to use rte_mempool_create for packet buffers.
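
If I follow correctly, the consolidated wrapper being suggested would end
up with a signature along these lines (purely illustrative, not from any
patch):

    /* Hypothetical "everything in one call" wrapper; a NULL obj_cb
     * would presumably fall back to rte_pktmbuf_init. */
    struct rte_mempool *
    rte_pktmbuf_pool_create(const char *name, unsigned n,
        unsigned cache_size, uint16_t priv_size,
        uint16_t data_room_size, const char *ops_name,
        rte_mempool_obj_cb_t *obj_cb, void *obj_cb_arg,
        int socket_id);
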
> The patch changes the example applications. I'm not sure I understand
> why adding these arguments would force applications not to use
> rte_mempool_create() for packet buffers. Do you have an application in mind?
>
> For the mempool_ops parameter, we must pass it at init because we need
> to know the mempool handler before populating the pool. Object
> initialization, on the other hand, can be done afterwards, so I thought
> it was better to reduce the number of arguments and avoid falling into
> the mempool_create() syndrome :)
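
In other words (my paraphrase, with a made-up callback name), an
application that needs extra per-object setup can still do it once the
pool exists, without any new arguments to the helper:

    #include <string.h>
    #include <rte_mbuf.h>
    #include <rte_mempool.h>
    #include <rte_lcore.h>

    /* Hypothetical per-object callback; the memset is just a stand-in
     * for whatever application-specific setup of the private area
     * (which follows each mbuf when priv_size != 0) is wanted. */
    static void
    my_obj_init(struct rte_mempool *mp, void *arg, void *obj, unsigned idx)
    {
        struct rte_mbuf *m = obj;
        void *priv = (char *)m + sizeof(struct rte_mbuf);

        memset(priv, 0, rte_pktmbuf_priv_size(mp));
        (void)arg;
        (void)idx;
    }

    /* The handler is fixed at creation time; the extra init runs after. */
    static struct rte_mempool *
    create_pool_with_extra_init(void)
    {
        struct rte_mempool *mp;

        mp = rte_pktmbuf_pool_create("mbuf_pool", 8192, 256,
            64 /* priv_size */, RTE_MBUF_DEFAULT_BUF_SIZE,
            rte_socket_id());
        if (mp != NULL)
            rte_mempool_obj_iter(mp, my_obj_init, NULL);
        return mp;
    }
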

I also agree with the proposal. Looks cleaner.

I would lean towards keeping the parameters to a minimum, i.e.
not adding *obj_cb and *obj_cb_arg to rte_pktmbuf_pool_create.
Developers always have the option of going with rte_mempool_create if
they need more fine-grained control, as in the sketch below.
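
For completeness, the obj_cb/obj_cb_arg pair already has a natural home in
the legacy call (roughly the pattern the older examples used, if memory
serves), so nothing is lost by leaving it out of the mbuf helper:

    #include <rte_mbuf.h>
    #include <rte_mempool.h>
    #include <rte_lcore.h>

    /* Legacy-style mbuf pool creation; obj_init/obj_init_arg is where a
     * custom per-object callback plugs in (a custom one would need to
     * call rte_pktmbuf_init itself or do the equivalent work). */
    static struct rte_mempool *
    legacy_mbuf_pool(void)
    {
        return rte_mempool_create("mbuf_pool", 8192,
            sizeof(struct rte_mbuf) + RTE_MBUF_DEFAULT_BUF_SIZE,
            256 /* cache */, sizeof(struct rte_pktmbuf_pool_private),
            rte_pktmbuf_pool_init, NULL,   /* mp_init, mp_init_arg */
            rte_pktmbuf_init, NULL,        /* obj_init, obj_init_arg */
            rte_socket_id(), 0 /* flags */);
    }
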

Regards,
Dave.

> Any other opinions?
>
> Regards,
> Olivier