From: Olivier MATZ
To: Jerin Jacob, "Hunt, David"
Cc: dev@dpdk.org, yuanhan.liu@linux.intel.com, pmatilai@redhat.com, Jan Viktorin
Subject: Re: [dpdk-dev] [PATCH v5 1/3] mempool: support external handler
Date: Tue, 31 May 2016 22:41:00 +0200
Message-ID: <574DF6DC.6040306@6wind.com>
In-Reply-To: <20160531160334.GA21985@localhost.localdomain>

Hi,

On 05/31/2016 06:03 PM, Jerin Jacob wrote:
> On Tue, May 31, 2016 at 04:37:02PM +0100, Hunt, David wrote:
>> On 5/31/2016 9:53 AM, Jerin Jacob wrote:
>>> On Mon, May 30, 2016 at 12:27:26PM +0100, Hunt, David wrote:
>>>> New mempool handlers will use rte_mempool_create_empty(),
>>>> rte_mempool_set_handler(), then rte_mempool_populate_*(). These
>>>> three functions are new to this release, so no problem
>>>
>>> Having separate APIs for external pool-manager create is worrisome from an
>>> application perspective. Is it possible to have rte_mempool_[xmem]_create
>>> for both the external and the existing SW pool manager, and make
>>> rte_mempool_create_empty and rte_mempool_populate_* internal functions?
>>>
>>> IMO, we can do that by selecting the specific rte_mempool_set_handler()
>>> based on a _flags_ encoding, something like below:
>>>
>>> bit 0 - 15  // generic bits used across all the pool managers
>>> bit 16 - 23 // pool handler specific flag bits
>>> bit 24 - 31 // to select the specific pool manager (up to 256 different
>>>             // flavors of pool managers; for backward compatibility, '0'
>>>             // in bits 24-31 selects the existing SW pool manager)
>>>
>>> and applications can choose the handler by selecting the flag in
>>> rte_mempool_[xmem]_create. That way it will be easy in testpmd or any
>>> other application to choose the pool handler from the command line etc.
>>> in future.
>>
>> There might be issues with the 8-bit handler number, as we'd have to add
>> an API call to first get the index of a given handler by name, then OR it
>> into the flags. That would also mean extra API calls for the non-default
>> external handlers. I do agree with the handler-specific bits though.
>
> That would be an internal API (upper 8 bits to handler name), right?
> Seems to be OK for me.
>
>> Having the _empty and _set_handler APIs seems to me to be OK for the
>> moment. Maybe Olivier could comment?
>
> But we need 3 APIs, right? _empty, _set_handler and _populate? I believe
> it is better to reduce the public API in the spec wherever possible.
>
> Maybe Olivier could comment?
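
Just to be sure I understand the proposal above, restated as code. The
MEMPOOL_F_HANDLER_* macro names below are invented for the example, they
do not exist in the mempool API today:

	/* bits 0-15:  generic MEMPOOL_F_* flags, shared by all handlers */
	/* bits 16-23: handler-specific flags */
	/* bits 24-31: handler index, 0 = existing SW (ring) handler */
	#define MEMPOOL_F_HANDLER_FLAGS(f)  (((unsigned int)(f) & 0xff) << 16)
	#define MEMPOOL_F_HANDLER_INDEX(i)  (((unsigned int)(i) & 0xff) << 24)

	/* hw_pool_index would have to come from a new "get index by
	 * handler name" API, as David points out above */
	flags = MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET |
		MEMPOOL_F_HANDLER_FLAGS(0x1) |  /* meaning defined by the handler */
		MEMPOOL_F_HANDLER_INDEX(hw_pool_index);

	mp = rte_mempool_create(name, n, elt_size, cache_size, priv_size,
		NULL, NULL, NULL, NULL, socket_id, flags);
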
Well, I think having 3 different functions is not a problem if the API
is clearer. In my opinion, the following:

  rte_mempool_create_empty()
  rte_mempool_set_handler()
  rte_mempool_populate()

is clearer than:

  rte_mempool_create(15 args)

Splitting the flags into 3 groups, with one of them not being flags but
a pool handler number, looks overcomplicated from a user perspective.

>>> and we can remove the "mbuf: get default mempool handler from
>>> configuration" change-set, OR just add: if RTE_MBUF_DEFAULT_MEMPOOL_HANDLER
>>> is defined, then set the same with rte_mempool_set_handler in
>>> rte_mempool_[xmem]_create.
>>>
>>> What do you think?
>>
>> The "configuration" patch is to allow users to quickly change the mempool
>> handler by changing RTE_MBUF_DEFAULT_MEMPOOL_HANDLER to another string of
>> a known handler. It could just as easily be left out and use
>> rte_mempool_create.
>
> Yes, I understand, but I am trying to avoid a build-time constant. IMO, it
> would be better if RTE_MBUF_DEFAULT_MEMPOOL_HANDLER is not defined in the
> config by default, and for a quick change developers can introduce a build
> with RTE_MBUF_DEFAULT_MEMPOOL_HANDLER="specific handler".

My understanding of the compile-time configuration option was to allow a
specific architecture to define a specific hw-assisted handler by default.
Indeed, if there is no such need for now, we may remove it. But we need a
way to select another handler, at least in test-pmd (in command line
arguments?).

>>>> to add a parameter to one of them for the config data. Also, since we're
>>>> adding some new elements to the mempool structure, how about we add a new
>>>> pointer for a void pointer to a config data structure, as defined by the
>>>> handler.
>>>>
>>>> So, new element in the rte_mempool struct alongside the *pool:
>>>>
>>>>   void *pool;
>>>>   void *pool_config;
>>>>
>>>> Then add a param to the rte_mempool_set_handler function:
>>>>
>>>>   int
>>>>   rte_mempool_set_handler(struct rte_mempool *mp, const char *name,
>>>>       void *pool_config)
>>>
>>> IMO, maybe we need to have _set_ and _get_, so I think we can have two
>>> separate callbacks in the external pool-manager for that if required.
>>> IMO, for now, we can live with the pool manager specific 8 bits (bits
>>> 16-23) for the configuration as mentioned above, and add the new
>>> callbacks for set and get when required?
>>
>> OK, we'll keep the config to the 8 bits of the flags for now. That will
>> also mean I won't add the pool_config void pointer either (for the moment).
>
> OK to me.

I'm not sure I'm getting it. Does it mean having something like this?

  rte_mempool_set_handler(struct rte_mempool *mp, const char *name,
    unsigned int flags)

Or does it mean that some of the flags passed to rte_mempool_create*()
will be specific to some handlers?

Before adding handler-specific flags or config, can we ensure we will
need them? What kind of handler-specific configuration/flags do you
think we will need?

Just an idea: what about having a global configuration for all mempools
using a given handler?

>>>>> 2) IMO, it is better to change void *pool in struct rte_mempool to an
>>>>> anonymous union type, something like below, so that the mempool
>>>>> implementation can choose the best type.
>>>>>
>>>>>   union {
>>>>>      void *pool;
>>>>>      uint64_t val;
>>>>>   }
>>>>
>>>> Could we do this by using the union for the *pool_config suggested above?
>>>> Would that give you what you need?
>>> It would be an extra overhead for the external pool manager to _alloc_
>>> memory, store the allocated pointer in the mempool struct (as *pool),
>>> and use pool to point to other data structures, as some implementations
>>> need only a limited number of bytes to store the external pool manager
>>> specific context.
>>>
>>> In order to fix this problem, we may classify fast path and slow path
>>> elements in struct rte_mempool, move all fast path elements into the
>>> first cache line, and create an empty opaque space in the remaining
>>> bytes of the cache line, so that if the external pool manager needs
>>> only limited space it is not required to allocate separate memory to
>>> save the per-core cache in the fast path.
>>>
>>> Something like below:
>>>
>>>   union {
>>>      void *pool;
>>>      uint64_t val;
>>>      uint8_t extra_mem[16]; // available free bytes in fast path cache line
>>>   }
>>
>> Something for the future, perhaps? Will the 8 bits in the flags suffice
>> for now?
>
> OK. But a simple anonymous union for the same type should be OK to add
> now? Not much change, I believe. If it's difficult then postpone it.
>
>   union {
>      void *pool;
>      uint64_t val;
>   }

I'm ok with the simple union with (void *) and (uint64_t). Maybe "val"
should be replaced by something more appropriate. Is "pool_id" a better
name?

Thanks David for working on this, and thanks Jerin and Jan for the good
comments and suggestions!

Regards
Olivier
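
PS: to make the discussion a bit more concrete, here is roughly what the
3-call sequence looks like from an application, using the function names
of the v5 patch (just a sketch: the handler name "my_hw_pool" and the
sizes are made up, and error handling is kept minimal):

	#include <rte_mempool.h>

	static struct rte_mempool *
	create_my_pool(void)
	{
		struct rte_mempool *mp;

		/* 1) allocate an empty mempool (no objects yet) */
		mp = rte_mempool_create_empty("test_pool", 8192, 2048,
			256, 0, SOCKET_ID_ANY, 0);
		if (mp == NULL)
			return NULL;

		/* 2) select the handler by name: no flag encoding needed,
		 * "my_hw_pool" is a fictional external handler */
		if (rte_mempool_set_handler(mp, "my_hw_pool") < 0)
			goto fail;

		/* 3) allocate the memory and populate the pool with objects */
		if (rte_mempool_populate_default(mp) < 0)
			goto fail;

		return mp;
	fail:
		rte_mempool_free(mp);
		return NULL;
	}

I find this sequence quite readable, and testpmd could simply map a
command line option to the string passed to rte_mempool_set_handler().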