From: "Hunt, David"
To: Jerin Jacob
Cc: dev@dpdk.org, olivier.matz@6wind.com, viktorin@rehivetech.com
Date: Fri, 3 Jun 2016 11:28:14 +0100
Message-ID: <57515BBE.4070205@intel.com>
Subject: Re: [dpdk-dev] [PATCH v7 1/5] mempool: support external mempool operations

On 6/3/2016 7:38 AM, Jerin Jacob wrote:
> On Thu, Jun 02, 2016 at 02:27:19PM +0100, David Hunt wrote:
>
> [snip]
>
>>	/* create the internal ring if not already done */
>>	if ((mp->flags & MEMPOOL_F_RING_CREATED) == 0) {
>
> |> 1) Maybe RING can be replaced with some other, higher-abstraction
> |> name for the internal MEMPOOL_F_RING_CREATED flag.
> |
> | Agreed. I'll change it to MEMPOOL_F_POOL_CREATED, since we're already
> | changing the *ring element of the mempool struct to *pool.
>
> Looks like you have missed the above review comment?

Ah, yes, I'll address it in the next patch. I have to remove some
'const's in the alloc functions anyway, which I introduced in the last
revision and shouldn't have. Future handlers (including the stack
handler) will need to set pool_data in the alloc.

>
> [snip]
>
>> static inline struct rte_mempool_ops *
>> rte_mempool_ops_get(int ops_index)
>> {
>>	return &rte_mempool_ops_table.ops[ops_index];
>> }
>
> |> 2) Considering "get" and "put" are the fast-path callbacks for the
> |> pool manager, is it possible to avoid the extra overhead of the
> |> following _load_ and the additional cache line on each call?
> |>	rte_mempool_handler_table.handler[handler_idx]
> |>
> |> I understand it is there for multiprocess support, but I am
> |> thinking: can we introduce something like the ethernet API's
> |> multiprocess support and resolve the "put" and "get" function
> |> pointers at init, storing them in struct mempool? Something like:
> |>
> |> file: drivers/net/ixgbe/ixgbe_ethdev.c
> |> search for: if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
> |
> | I'll look at this one before posting the next version of the patch
> | (soon). :)
>
> Have you checked the above comment? If it is difficult, then postpone
> it, but IMO it will save a few cycles in the fast path and reduce the
> cache usage there.
>
> [snip]

I looked at it, but I'd need to do some more digging to see how it can
be done properly.
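My rough understanding of what you're suggesting, as a sketch (names,
types and sizes made up for illustration -- this is not the actual
mempool API, and it's untested):

    #include <stdint.h>

    typedef int (*pool_put_t)(void *pool, void * const *objs,
            unsigned int n);

    struct pool_ops { pool_put_t put; /* ...free, get, get_count */ };
    struct pool_ops ops_table[16];      /* global, indexed table */

    struct pool {
        void *pool_data;
        int ops_index;      /* the index is valid across processes */
        pool_put_t put;     /* cached per process at init/attach */
    };

    /* Fast path today: load the table entry, then call through it. */
    static inline int
    put_indexed(struct pool *p, void * const *objs, unsigned int n)
    {
        return ops_table[p->ops_index].put(p->pool_data, objs, n);
    }

    /* With a cached pointer it's a single indirect call. A secondary
     * process would re-resolve p->put from ops_index when it attaches,
     * like the RTE_PROC_PRIMARY check in ixgbe_ethdev.c -- though the
     * cached pointer living in shared memory is exactly the part I
     * still need to dig into. */
    static inline int
    put_cached(struct pool *p, void * const *objs, unsigned int n)
    {
        return p->put(p->pool_data, objs, n);
    }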
I'm not seeing any performance drop at the moment, and it may be a good
way to improve things further down the line. Is it OK to postpone?

>> +
>> +/**
>> + * @internal wrapper for external mempool manager put callback.
>> + *
>> + * @param mp
>> + *   Pointer to the memory pool.
>> + * @param obj_table
>> + *   Pointer to a table of void * pointers (objects).
>> + * @param n
>> + *   Number of objects to put.
>> + * @return
>> + *   - 0: Success; n objects supplied.
>> + *   - <0: Error; code of put function.
>> + */
>> +static inline int
>> +rte_mempool_ops_enqueue_bulk(struct rte_mempool *mp, void * const *obj_table,
>> +		unsigned n)
>> +{
>> +	struct rte_mempool_ops *ops;
>> +
>> +	ops = rte_mempool_ops_get(mp->ops_index);
>> +	return ops->put(mp->pool_data, obj_table, n);
>
> Passing "pool_data" by value: on 32-bit systems, casting it back to a
> pool_id will be an issue, as void * is 4B on 32-bit. IMO, maybe we can
> pass a uint64_t by value and typecast it to void * to fix this.

OK, I see the problem. There are 4 callbacks that need to change: free,
get, put and get_count. So the callbacks will be:

typedef void *(*rte_mempool_alloc_t)(struct rte_mempool *mp);
typedef void (*rte_mempool_free_t)(uint64_t p);
typedef int (*rte_mempool_put_t)(uint64_t p, void * const *obj_table,
		unsigned int n);
typedef int (*rte_mempool_get_t)(uint64_t p, void **obj_table,
		unsigned int n);
typedef unsigned (*rte_mempool_get_count)(uint64_t p);

Regards,
Dave.
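P.S. Just to spell out the 32-bit truncation (an illustrative snippet,
not code from the patch):

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint64_t pool_id = 0x100000001ULL;  /* handler's 64-bit pool id */

        /* What happens if we squeeze it through a void * today: */
        void *pool_data = (void *)(uintptr_t)pool_id;
        uint64_t back = (uint64_t)(uintptr_t)pool_data;

        /* On a 32-bit build this prints 0x1 -- the upper word is lost.
         * With uint64_t callback parameters the id round-trips intact. */
        printf("0x%" PRIx64 "\n", back);
        return 0;
    }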