From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Wed, 1 Jun 2016 02:41:29 +0530
From: Jerin Jacob
To: Olivier MATZ
CC: "Hunt, David", Jan Viktorin
Message-ID: <20160531211122.GA2139@localhost.localdomain>
In-Reply-To: <574DF6DC.6040306@6wind.com>
References: <20160524153509.GA11249@localhost.localdomain>
 <574818EA.7010806@intel.com>
 <20160527103311.GA13577@localhost.localdomain>
 <57485D4F.9020302@intel.com>
 <20160530094129.GA7963@localhost.localdomain>
 <574C239E.8070705@intel.com>
 <20160531085258.GA8030@localhost.localdomain>
 <574DAF9E.7060404@intel.com>
 <20160531160334.GA21985@localhost.localdomain>
 <574DF6DC.6040306@6wind.com>
Subject: Re: [dpdk-dev] [PATCH v5 1/3] mempool: support external handler

On Tue, May 31, 2016 at 10:41:00PM +0200, Olivier MATZ wrote:
> Hi,
>
> On 05/31/2016 06:03 PM, Jerin Jacob wrote:
> > On Tue, May 31, 2016 at 04:37:02PM +0100, Hunt, David wrote:
> >>
> >>
> >> On 5/31/2016 9:53 AM, Jerin Jacob wrote:
> >>> On Mon, May 30, 2016 at 12:27:26PM +0100, Hunt, David wrote:
> >>>> New mempool handlers will use rte_mempool_create_empty(),
> >>>> rte_mempool_set_handler(),
> >>>> then rte_mempool_populate_*(). These three functions are new to this
> >>>> release, to no problem
> >>> Having separate APIs for external pool-manager create is worrisome in
> >>> application perspective. Is it possible to have rte_mempool_[xmem]_create
> >>> for the both external and existing SW pool manager and make
> >>> rte_mempool_create_empty and rte_mempool_populate_* internal functions.
> >>>
> >>> IMO, We can do that by selecting specific rte_mempool_set_handler()
> >>> based on _flags_ encoding, something like below
> >>>
> >>> bit 0 - 16 // generic bits uses across all the pool managers
> >>> bit 16 - 23 // pool handler specific flags bits
> >>> bit 24 - 31 // to select the specific pool manager(Up to 256 different flavors of
> >>> pool managers, For backward compatibility, make '0'(in 24-31) to select
> >>> existing SW pool manager.
> >>>
> >>> and applications can choose the handlers by selecting the flag in
> >>> rte_mempool_[xmem]_create, That way it will be easy in testpmd or any other
> >>> applications to choose the pool handler from command line etc in future.
> >>
> >> There might be issues with the 8-bit handler number, as we'd have to add an
> >> api call to
> >> first get the index of a given hander by name, then OR it into the flags.
> >> That would mean
> >> also extra API calls for the non-default external handlers. I do agree with
> >> the handler-specific
> >> bits though.
> >
> > That would be an internal API(upper 8 bits to handler name). Right ?
> > Seems to be OK for me.
> >
> >>
> >> Having the _empty and _set_handler APIs seems to me to be OK for the
> >> moment. Maybe Olivier could comment?
> >>
> >
> > But need 3 APIs. Right? _empty , _set_handler and _populate ? I believe
> > it is better reduce the public API in spec where ever possible ?
> >
> > Maybe Olivier could comment ?
>
> Well, I think having 3 different functions is not a problem if the API
> is clearer.
>
> In my opinion, the following:
> rte_mempool_create_empty()
> rte_mempool_set_handler()
> rte_mempool_populate()
>
> is clearer than:
> rte_mempool_create(15 args)

But the proposed scheme is not adding any new arguments to
rte_mempool_create; it just extends the existing flags.
rte_mempool_create(15 args) is still there as the API for internal pool
creation.

>
> Splitting the flags into 3 groups, with one not beeing flags but a
> pool handler number looks overcomplicated from a user perspective.

I am concerned about seamless integration with existing applications.
IMO, it is not worth having separate functions for external vs. internal
pool creation from the application's point of view (otherwise every
application has to add this logic everywhere for no good reason), just
my 2 cents.
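To make the flags-based selection concrete, a rough sketch of the
encoding I have in mind is below (the macro names are only illustrative,
nothing like this exists in DPDK today):

/* bits 0-15: generic flags shared by all pool handlers (the existing
 * MEMPOOL_F_* values stay here), bits 16-23: handler-specific flags,
 * bits 24-31: handler index, 0 = existing SW pool manager */
#define MEMPOOL_F_GENERIC_MASK   0x0000ffffu
#define MEMPOOL_F_PRIV_MASK      0x00ff0000u
#define MEMPOOL_F_HANDLER_MASK   0xff000000u

#define MEMPOOL_HANDLER_FLAGS(handler, priv, generic)                 \
        ((((unsigned int)(handler) << 24) & MEMPOOL_F_HANDLER_MASK) | \
         (((unsigned int)(priv) << 16) & MEMPOOL_F_PRIV_MASK) |       \
         ((unsigned int)(generic) & MEMPOOL_F_GENERIC_MASK))

With something like this, an application keeps calling
rte_mempool_create() exactly as today; leaving the upper byte at 0
selects the existing SW pool manager, so current code is unchanged.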
>
> >>> and we can remove "mbuf: get default mempool handler from configuration"
> >>> change-set OR just add if RTE_MBUF_DEFAULT_MEMPOOL_HANDLER is defined then set
> >>> the same with rte_mempool_set_handler in rte_mempool_[xmem]_create.
> >>>
> >>> What do you think?
> >>
> >> The "configuration" patch is to allow users to quickly change the mempool
> >> handler
> >> by changing RTE_MBUF_DEFAULT_MEMPOOL_HANDLER to another string of a known
> >> handler. It could just as easily be left out and use the rte_mempool_create.
> >>
> >
> > Yes, I understand, but I am trying to avoid build time constant. IMO, It
> > would be better by default RTE_MBUF_DEFAULT_MEMPOOL_HANDLER is not
> > defined in config. and for quick change developers can introduce the build
> > with RTE_MBUF_DEFAULT_MEMPOOL_HANDLER="specific handler"
>
> My understanding of the compile-time configuration option was
> to allow a specific architecture to define a specific hw-assisted
> handler by default.
>
> Indeed, if there is no such need for now, we may remove it. But
> we need a way to select another handler, at least in test-pmd
> (in command line arguments?).

Like txflags in testpmd, IMO, mempool flags will help to select the
handlers seamlessly, as suggested above.

If we are _not_ taking the flags-based selection scheme, then it makes
sense to keep RTE_MBUF_DEFAULT_MEMPOOL_HANDLER.

>
>
> >>>> to add a parameter to one of them for the config data. Also since we're
> >>>> adding some new
> >>>> elements to the mempool structure, how about we add a new pointer for a void
> >>>> pointer to a
> >>>> config data structure, as defined by the handler.
> >>>>
> >>>> So, new element in rte_mempool struct alongside the *pool
> >>>> void *pool;
> >>>> void *pool_config;
> >>>>
> >>>> Then add a param to the rte_mempool_set_handler function:
> >>>> int
> >>>> rte_mempool_set_handler(struct rte_mempool *mp, const char *name, void
> >>>> *pool_config)
> >>> IMO, Maybe we need to have _set_ and _get_.So I think we can have
> >>> two separate callback in external pool-manger for that if required.
> >>> IMO, For now, We can live with pool manager specific 8 bits(bit 16 -23)
> >>> for the configuration as mentioned above and add the new callbacks for
> >>> set and get when required?
> >>
> >> OK, We'll keep the config to the 8 bits of the flags for now. That will also
> >> mean I won't
> >> add the pool_config void pointer either (for the moment)
> >
> > OK to me.
>
> I'm not sure I'm getting it. Does it mean having something like
> this ?
>
> rte_mempool_set_handler(struct rte_mempool *mp, const char *name,
> unsigned int flags)
>
> Or does it mean some of the flags passed to rte_mempool_create*()
> will be specific to some handlers?
>
>
> Before adding handler-specific flags or config, can we ensure we
> will need them? What kind of handler-specific configuration/flags
> do you think we will need? Just an idea: what about having a global
> configuration for all mempools using a given handler?

We may need to configure the external pool manager, e.g. tell it not to
free the packets back to the pool after they have been sent out (just an
example of a valid external HW pool manager configuration).
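As a purely hypothetical illustration of such a handler-specific flag,
reusing the MEMPOOL_HANDLER_FLAGS() sketch from above (the MY_HW_* names
are invented here, they are not part of any existing handler):

#include <rte_mempool.h>
#include <rte_lcore.h>

/* invented example values: handler index 1, plus a private
 * "do not free the packet back to the pool after TX" flag carried
 * in the handler-specific bits 16-23 */
#define MY_HW_HANDLER_IDX        1
#define MY_HW_POOL_F_NO_FREE_TX  0x01

static struct rte_mempool *
create_hw_pkt_pool(void)
{
        return rte_mempool_create("hw_pkt_pool", 8192, 2048, 256, 0,
                                  NULL, NULL, NULL, NULL, rte_socket_id(),
                                  MEMPOOL_HANDLER_FLAGS(MY_HW_HANDLER_IDX,
                                                        MY_HW_POOL_F_NO_FREE_TX,
                                                        MEMPOOL_F_SP_PUT));
}

The handler would read its private bits out of mp->flags when it sets up
the pool; applications that do not use an external handler see no change.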
>
>
>
> >>>>> 2) IMO, It is better to change void *pool in struct rte_mempool to
> >>>>> anonymous union type, something like below, so that mempool
> >>>>> implementation can choose the best type.
> >>>>> union {
> >>>>> void *pool;
> >>>>> uint64_t val;
> >>>>> }
> >>>> Could we do this by using the union for the *pool_config suggested above,
> >>>> would that give
> >>>> you what you need?
> >>> It would be an extra overhead for external pool manager to _alloc_ memory
> >>> and store the allocated pointer in mempool struct(as *pool) and use pool for
> >>> pointing other data structures as some implementation need only
> >>> limited bytes to store the external pool manager specific context.
> >>>
> >>> In order to fix this problem, We may classify fast path and slow path
> >>> elements in struct rte_mempool and move all fast path elements in first
> >>> cache line and create an empty opaque space in the remaining bytes in the
> >>> cache line so that if the external pool manager needs only limited space
> >>> then it is not required to allocate the separate memory to save the
> >>> per core cache in fast-path
> >>>
> >>> something like below,
> >>> union {
> >>> void *pool;
> >>> uint64_t val;
> >>> uint8_t extra_mem[16] // available free bytes in fast path cache line
> >>>
> >>> }
> >>
> >> Something for the future, perhaps? Will the 8-bits in the flags suffice for
> >> now?
> >
> > OK. But simple anonymous union for same type should be OK add now? Not
> > much change I believe, If its difficult then postpone it
> >
> > union {
> > void *pool;
> > uint64_t val;
> > }
>
> I'm ok with the simple union with (void *) and (uint64_t).
> Maybe "val" should be replaced by something more appropriate.
> Is "pool_id" a better name?

How about "opaque"?

>
>
> Thanks David for working on this, and thanks Jerin and Jan for
> the good comments and suggestions!
>
> Regards
> Olivier
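P.S. For the first-cache-line idea above, a purely illustrative layout
(the field names and the 16-byte size are placeholders, not the actual
struct rte_mempool definition) would be:

#include <stdint.h>
#include <rte_memory.h>   /* __rte_cache_aligned */

struct rte_mempool {
        /* ... fast-path fields ... */
        union {
                void *pool;            /* pointer owned by the external handler */
                uint64_t val;          /* or a plain 64-bit id, e.g. a HW pool id */
                uint8_t extra_mem[16]; /* or a small handler context stored in
                                        * place, using the free bytes of the
                                        * first cache line */
        };
        /* ... slow-path fields in the following cache lines ... */
} __rte_cache_aligned;

That way a handler which only needs a few bytes of per-pool context does
not have to allocate anything separately.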