From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: <david.hunt@intel.com>
Received: from mga09.intel.com (mga09.intel.com [134.134.136.24])
 by dpdk.org (Postfix) with ESMTP id 6416EC67C
 for <dev@dpdk.org>; Fri, 29 Jan 2016 14:40:42 +0100 (CET)
Received: from fmsmga002.fm.intel.com ([10.253.24.26])
 by orsmga102.jf.intel.com with ESMTP; 29 Jan 2016 05:40:41 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="5.22,364,1449561600"; d="scan'208";a="903835585"
Received: from dhunt5x-mobl3.ger.corp.intel.com (HELO [10.237.221.4])
 ([10.237.221.4])
 by fmsmga002.fm.intel.com with ESMTP; 29 Jan 2016 05:40:40 -0800
To: Jerin Jacob <jerin.jacob@caviumnetworks.com>
References: <1453829155-1366-1-git-send-email-david.hunt@intel.com>
 <20160128172631.GA11992@localhost.localdomain>
From: "Hunt, David" <david.hunt@intel.com>
Message-ID: <56AB6BD8.9000403@intel.com>
Date: Fri, 29 Jan 2016 13:40:40 +0000
User-Agent: Mozilla/5.0 (Windows NT 6.3; WOW64; rv:38.0) Gecko/20100101
 Thunderbird/38.3.0
MIME-Version: 1.0
In-Reply-To: <20160128172631.GA11992@localhost.localdomain>
Content-Type: text/plain; charset=windows-1252; format=flowed
Content-Transfer-Encoding: 7bit
Cc: dev@dpdk.org
Subject: Re: [dpdk-dev] [PATCH 0/5] add external mempool manager
X-BeenThere: dev@dpdk.org
X-Mailman-Version: 2.1.15
Precedence: list
List-Id: patches and discussions about DPDK <dev.dpdk.org>
List-Unsubscribe: <http://dpdk.org/ml/options/dev>,
 <mailto:dev-request@dpdk.org?subject=unsubscribe>
List-Archive: <http://dpdk.org/ml/archives/dev/>
List-Post: <mailto:dev@dpdk.org>
List-Help: <mailto:dev-request@dpdk.org?subject=help>
List-Subscribe: <http://dpdk.org/ml/listinfo/dev>,
 <mailto:dev-request@dpdk.org?subject=subscribe>
X-List-Received-Date: Fri, 29 Jan 2016 13:40:42 -0000

On 28/01/2016 17:26, Jerin Jacob wrote:
> On Tue, Jan 26, 2016 at 05:25:50PM +0000, David Hunt wrote:
>> Hi all on the list.
>>
>> Here's a proposed patch for an external mempool manager.
>>
>> The External Mempool Manager is an extension to the mempool API that allows
>> users to add and use an external mempool manager, so that external memory
>> subsystems such as hardware memory management systems and software-based
>> memory allocators can be used with DPDK.
>
> I like this approach. It will be useful for external hardware memory
> pool managers.
>
> BTW, did you encounter any performance impact when changing to the
> function-pointer-based approach?

Jerin,
    Thanks for your comments.

The performance impact I've seen depends on whether I'm using an object 
cache for the mempool or not. Without an object cache, I see between 0-10% 
degradation. With an object cache, I see a slight performance gain of 
between 0-5%. But that will most likely vary from system to system.

>> The existing API to the internal DPDK mempool manager will remain unchanged
>> and will be backward compatible.
>>
>> There are two aspects to external mempool manager.
>>    1. Adding the code for your new mempool handler. This is achieved by adding a
>>       new mempool handler source file into the librte_mempool library, and
>>       using the REGISTER_MEMPOOL_HANDLER macro.
>>    2. Using the new API to call rte_mempool_create_ext to create a new mempool
>>       using the name parameter to identify which handler to use.
>>
>> New API calls added
>>   1. A new mempool 'create' function which accepts a mempool handler name.
>>   2. A new mempool 'rte_get_mempool_handler' function which accepts a mempool
>>      handler name, and returns the index of the relevant set of callbacks for
>>      that mempool handler.
>>
>> Several external mempool managers may be used in the same application. A new
>> mempool can then be created by using the new 'create' function, providing the
>> mempool handler name to point the mempool to the relevant mempool manager
>> callback structure.
>>
>> The old 'create' function can still be called by legacy programs, and will
>> internally work out the mempool handle based on the flags provided (single
>> producer, single consumer, etc.). By default, handles are created internally to
>> implement the built-in DPDK mempool manager and mempool types.
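
Just to make the flag handling concrete: the legacy create() path maps the
existing flags onto one of the built-in handlers, roughly along these lines.
This is a sketch only; the handler names below are illustrative rather than
the exact ones used in the patch:

#include <rte_mempool.h>

/*
 * Sketch: pick a built-in handler name from the legacy creation flags.
 * Handler names are illustrative.
 */
static const char *
default_handler_name(unsigned flags)
{
    if ((flags & MEMPOOL_F_SP_PUT) && (flags & MEMPOOL_F_SC_GET))
        return "ring_sp_sc";
    else if (flags & MEMPOOL_F_SP_PUT)
        return "ring_sp_mc";
    else if (flags & MEMPOOL_F_SC_GET)
        return "ring_mp_sc";
    else
        return "ring_mp_mc";
}
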
>>
>> The external mempool manager needs to provide the following functions.
>>   1. alloc     - allocates the mempool memory, and adds each object onto a ring
>>   2. put       - puts an object back into the mempool once an application has
>>                  finished with it
>>   3. get       - gets an object from the mempool for use by the application
>>   4. get_count - gets the number of available objects in the mempool
>>   5. free      - frees the mempool memory
>>
>> Every time a get/put/get_count is called from the application/PMD, the
>> callback for that mempool is called. These functions are in the fastpath,
>> and any unoptimised handlers may limit performance.
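
To give a feel for the handler side: writing a handler boils down to filling
in those five callbacks and registering them under a name. The snippet below
is only a sketch; the struct name, field layout, callback signatures and the
macro argument form are illustrative and may differ from what is in the patch:

#include <rte_mempool.h>

/*
 * Sketch of a custom handler. Struct name, fields and signatures are
 * illustrative; callback implementations are omitted.
 */
static void *my_alloc(struct rte_mempool *mp);
static int my_put(void *pool, void * const *obj_table, unsigned n);
static int my_get(void *pool, void **obj_table, unsigned n);
static unsigned my_get_count(void *pool);
static void my_free(struct rte_mempool *mp);

static struct rte_mempool_handler my_handler = {
    .name = "my_hw_pool",
    .alloc = my_alloc,
    .put = my_put,
    .get = my_get,
    .get_count = my_get_count,
    .free = my_free,
};

REGISTER_MEMPOOL_HANDLER(my_handler);
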
>>
>> The new APIs are as follows:
>>
>> 1. rte_mempool_create_ext
>>
>> struct rte_mempool *
>> rte_mempool_create_ext(const char *name, unsigned n,
>>          unsigned cache_size, unsigned private_data_size,
>>          int socket_id, unsigned flags,
>>          const char *handler_name);
>>
>> 2. rte_get_mempool_handler
>>
>> int16_t
>> rte_get_mempool_handler(const char *name);
>
> Do we need the above public API? In any case, we need the rte_mempool*
> pointer to operate on mempools (which has the index anyway).
>
> Maybe a similar API with a different name/return value would be better,
> to figure out whether a given "name" is registered or not, for an ethernet
> driver which has a dependency on a particular HW pool manager.

Good point. An earlier revision required getting the index first, then 
passing that to the create_ext call. Now that the call is by name, the 
'get' is mostly redundant. As you suggest, we may need an API for 
checking the existence of a particular manager/handler. Then again, we 
could always return an error from the create_ext API if it fails to find 
that handler. I'll remove the 'get' for the moment.
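
To make the application side concrete, usage would then look roughly like
the following. It follows the create_ext signature quoted above; the pool
sizes and the handler name "my_hw_pool" are purely illustrative:

#include <rte_mempool.h>
#include <rte_lcore.h>
#include <rte_debug.h>

/*
 * Sketch: create a mempool through a named handler and treat a NULL
 * return as covering the "handler not found" case as well.
 */
static struct rte_mempool *
create_app_pool(void)
{
    struct rte_mempool *mp;

    mp = rte_mempool_create_ext("app_pool", 8192, /* name, object count */
            256,                                  /* per-lcore cache size */
            0,                                    /* private data size */
            rte_socket_id(), 0,                   /* socket, flags */
            "my_hw_pool");                        /* handler name */
    if (mp == NULL)
        rte_panic("handler not found or pool creation failed\n");

    return mp;
}
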

Thanks,
David.