From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: <david.hunt@intel.com>
Received: from mga04.intel.com (mga04.intel.com [192.55.52.120])
 by dpdk.org (Postfix) with ESMTP id 39120378E
 for <dev@dpdk.org>; Tue, 31 May 2016 17:37:14 +0200 (CEST)
Received: from orsmga003.jf.intel.com ([10.7.209.27])
 by fmsmga104.fm.intel.com with ESMTP; 31 May 2016 08:37:05 -0700
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="5.26,396,1459839600"; d="scan'208";a="818457472"
Received: from dhunt5-mobl.ger.corp.intel.com (HELO [10.237.220.49])
 ([10.237.220.49])
 by orsmga003.jf.intel.com with ESMTP; 31 May 2016 08:37:03 -0700
To: Jerin Jacob <jerin.jacob@caviumnetworks.com>
References: <1460642270-8803-1-git-send-email-olivier.matz@6wind.com>
 <1463665501-18325-1-git-send-email-david.hunt@intel.com>
 <1463665501-18325-2-git-send-email-david.hunt@intel.com>
 <20160524153509.GA11249@localhost.localdomain> <574818EA.7010806@intel.com>
 <20160527103311.GA13577@localhost.localdomain> <57485D4F.9020302@intel.com>
 <20160530094129.GA7963@localhost.localdomain> <574C239E.8070705@intel.com>
 <20160531085258.GA8030@localhost.localdomain>
Cc: dev@dpdk.org, olivier.matz@6wind.com, yuanhan.liu@linux.intel.com,
 pmatilai@redhat.com
From: "Hunt, David" <david.hunt@intel.com>
Message-ID: <574DAF9E.7060404@intel.com>
Date: Tue, 31 May 2016 16:37:02 +0100
User-Agent: Mozilla/5.0 (Windows NT 6.3; WOW64; rv:38.0) Gecko/20100101
 Thunderbird/38.6.0
MIME-Version: 1.0
In-Reply-To: <20160531085258.GA8030@localhost.localdomain>
Content-Type: text/plain; charset=windows-1252; format=flowed
Content-Transfer-Encoding: 7bit
Subject: Re: [dpdk-dev] [PATCH v5 1/3] mempool: support external handler
X-BeenThere: dev@dpdk.org
X-Mailman-Version: 2.1.15
Precedence: list
List-Id: patches and discussions about DPDK <dev.dpdk.org>
List-Unsubscribe: <http://dpdk.org/ml/options/dev>,
 <mailto:dev-request@dpdk.org?subject=unsubscribe>
List-Archive: <http://dpdk.org/ml/archives/dev/>
List-Post: <mailto:dev@dpdk.org>
List-Help: <mailto:dev-request@dpdk.org?subject=help>
List-Subscribe: <http://dpdk.org/ml/listinfo/dev>,
 <mailto:dev-request@dpdk.org?subject=subscribe>
X-List-Received-Date: Tue, 31 May 2016 15:37:14 -0000



On 5/31/2016 9:53 AM, Jerin Jacob wrote:
> On Mon, May 30, 2016 at 12:27:26PM +0100, Hunt, David wrote:
>> New mempool handlers will use rte_mempool_create_empty(),
>> rte_mempool_set_handler(), then rte_mempool_populate_*(). These three
>> functions are new to this release, so no problem
> Having separate APIs for external pool-manager create is worrisome from an
> application perspective. Is it possible to have rte_mempool_[xmem]_create
> for both the external and existing SW pool managers, and make
> rte_mempool_create_empty and rte_mempool_populate_* internal functions?
>
> IMO, we can do that by selecting the specific rte_mempool_set_handler()
> based on a _flags_ encoding, something like below:
>
> bits 0 - 15   // generic bits used across all the pool managers
> bits 16 - 23  // pool-handler-specific flag bits
> bits 24 - 31  // to select the specific pool manager (up to 256 different
> flavors of pool manager). For backward compatibility, make '0' (in bits
> 24-31) select the existing SW pool manager.
>
> Applications can then choose the handler by setting the flag in
> rte_mempool_[xmem]_create. That way it will be easy for testpmd or any other
> application to choose the pool handler from the command line etc. in future.

There might be issues with the 8-bit handler number, as we'd have to add an
API call to first get the index of a given handler by name, then OR it into
the flags. That would also mean extra API calls for the non-default external
handlers. I do agree with the handler-specific bits, though.
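
For illustration, the application side of what you describe would end up
looking something like this (the rte_mempool_handler_get_index() call and
the handler name are hypothetical, just to show the extra lookup step):

    /* hypothetical: look up the handler's index by name first */
    int idx = rte_mempool_handler_get_index("my_custom_handler");
    if (idx < 0)
            rte_exit(EXIT_FAILURE, "handler not registered\n");

    /* OR the index into bits 24-31 alongside the other flag bits */
    unsigned flags = MEMPOOL_F_SP_PUT            /* generic bits 0-15 */
                   | (0x3u << 16)                /* handler-specific bits 16-23 */
                   | ((unsigned)idx << 24);      /* handler selection, bits 24-31 */

    struct rte_mempool *mp = rte_mempool_create("pkt_pool", 8192, 2048, 256, 0,
                    NULL, NULL, NULL, NULL, SOCKET_ID_ANY, flags);

That index lookup is the extra API call I mean.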

Having the _empty and _set_handler APIs seems OK to me for the moment.
Maybe Olivier could comment?

> and we can remove the "mbuf: get default mempool handler from configuration"
> change-set, OR, if RTE_MBUF_DEFAULT_MEMPOOL_HANDLER is defined, just set
> the same with rte_mempool_set_handler in rte_mempool_[xmem]_create.
>
> What do you think?

The "configuration" patch is to allow users to quickly change the 
mempool handler
by changing RTE_MBUF_DEFAULT_MEMPOOL_HANDLER to another string of a known
handler. It could just as easily be left out and use the 
rte_mempool_create.
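
(For reference, the alternative you describe above would boil down to
something like this in the pktmbuf pool creation path; just a sketch of the
idea, not code from the patch:)

    /* if a default handler is configured, select it; otherwise the
     * existing SW (ring-based) pool manager is used */
    #ifdef RTE_MBUF_DEFAULT_MEMPOOL_HANDLER
            rte_mempool_set_handler(mp, RTE_MBUF_DEFAULT_MEMPOOL_HANDLER);
    #endif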

>> to add a parameter to one of them for the config data. Also, since we're
>> adding some new elements to the mempool structure, how about we add a new
>> void pointer to a config data structure, as defined by the handler.
>>
>> So, new element in rte_mempool struct alongside the *pool
>>      void *pool;
>>      void *pool_config;
>>
>> Then add a param to the rte_mempool_set_handler function:
>> int
>> rte_mempool_set_handler(struct rte_mempool *mp, const char *name,
>>                         void *pool_config);
> IMO, maybe we need to have _set_ and _get_, so I think we can have
> two separate callbacks in the external pool manager for that if required.
> For now, we can live with the pool-manager-specific 8 bits (bits 16-23)
> for the configuration as mentioned above, and add the new callbacks for
> set and get when required?

OK, we'll keep the config to the 8 bits of the flags for now. That also means
I won't add the pool_config void pointer either (for the moment).
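
So for now the sequence for an app (or a helper like rte_pktmbuf_pool_create)
wanting a non-default handler stays as the three calls, roughly like below.
The handler name here is just a placeholder:

    struct rte_mempool *mp;

    mp = rte_mempool_create_empty("pkt_pool", 8192, 2048, 256, 0,
                    SOCKET_ID_ANY, 0);
    if (mp == NULL)
            rte_exit(EXIT_FAILURE, "cannot create empty mempool\n");

    /* "my_handler" stands in for whatever name the handler registered with */
    if (rte_mempool_set_handler(mp, "my_handler") < 0)
            rte_exit(EXIT_FAILURE, "cannot set mempool handler\n");

    if (rte_mempool_populate_default(mp) < 0)
            rte_exit(EXIT_FAILURE, "cannot populate mempool\n");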

>>> 2) IMO, it is better to change void *pool in struct rte_mempool to an
>>> anonymous union type, something like below, so that the mempool
>>> implementation can choose the best type.
>>> 	union {
>>> 		void *pool;
>>> 		uint64_t val;
>>> 	}
>> Could we do this by using the union for the *pool_config suggested above?
>> Would that give you what you need?
> It would be extra overhead for the external pool manager to _alloc_ memory,
> store the allocated pointer in the mempool struct (as *pool), and use pool to
> point to other data structures, as some implementations need only a
> few bytes to store the external-pool-manager-specific context.
>
> To fix this problem, we may classify fast-path and slow-path
> elements in struct rte_mempool, move all fast-path elements into the first
> cache line, and create an empty opaque space in the remaining bytes of the
> cache line, so that if the external pool manager needs only limited space
> it is not required to allocate separate memory to hold the
> per-core cache in the fast path.
>
> something like below,
> union {
> 	void *pool;
> 	uint64_t val;
> 	uint8_t extra_mem[16]; // available free bytes in fast-path cache line
> };

Something for the future, perhaps? Will the 8 bits in the flags suffice
for now?
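
Just to make sure I follow the idea: a handler with a small per-pool context
could then do something like this, with no separate allocation (purely
illustrative; the struct, the init helper and its signature are not from the
patch, and extra_mem is the field from your union sketch above):

    /* a per-pool context small enough to live in the spare bytes of the
     * fast-path cache line proposed above */
    struct my_handler_ctx {
            uint32_t head;
            uint32_t tail;
            uint64_t base;
    };

    static int
    my_handler_init(struct rte_mempool *mp)
    {
            struct my_handler_ctx *ctx = (struct my_handler_ctx *)mp->extra_mem;

            /* 16 bytes: fits in extra_mem[16], so no separate allocation */
            RTE_BUILD_BUG_ON(sizeof(struct my_handler_ctx) > 16);

            ctx->head = 0;
            ctx->tail = 0;
            ctx->base = 0;
            return 0;
    }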


> Other points,
>
> 1) Is it possible to remove the unused is_mp in the __mempool_put_bulk
> function, as it is just an internal function?

Fixed

> 2) Considering "get" and "put" are the fast-path callbacks for
> pool-manger, Is it possible to avoid the extra overhead of the following
> _load_ and additional cache line on each call,
> rte_mempool_handler_table.handler[handler_idx]
>
> I understand it is for multiprocess support but I am thing can we
> introduce something like ethernet API support for multiprocess and
> resolve "put" and "get" functions pointer on init and store in
> struct mempool. Some thinking like
>
> file: drivers/net/ixgbe/ixgbe_ethdev.c
> search for if (rte_eal_process_type() != RTE_PROC_PRIMARY) {

I'll look at this one before posting the next version of the patch 
(soon). :)
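
Just so I'm clear on what you're suggesting, it's roughly this: resolve the
callbacks once at set-handler time and cache them in the mempool itself. The
cached fields and the lookup helper below are illustrative only, not the
patch's actual code:

    int
    rte_mempool_set_handler(struct rte_mempool *mp, const char *name)
    {
            struct rte_mempool_handler *h;

            /* hypothetical: look the handler up by name in the handler table */
            h = rte_mempool_handler_lookup(name);
            if (h == NULL)
                    return -EINVAL;

            mp->handler_idx = h - &rte_mempool_handler_table.handler[0];

            /* illustrative: cache the fast-path callbacks in the mempool so
             * get/put avoid the handler-table load; this is the part that
             * needs the primary/secondary-process handling you mention */
            mp->put = h->put;
            mp->get = h->get;
            return 0;
    }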


> Jerin
>
Thanks for your input on this, much appreciated.
Dave.