From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Tue, 31 May 2016 21:33:42 +0530
From: Jerin Jacob
To: "Hunt, David"
Message-ID: <20160531160334.GA21985@localhost.localdomain>
References: <1463665501-18325-1-git-send-email-david.hunt@intel.com> <1463665501-18325-2-git-send-email-david.hunt@intel.com> <20160524153509.GA11249@localhost.localdomain> <574818EA.7010806@intel.com> <20160527103311.GA13577@localhost.localdomain> <57485D4F.9020302@intel.com> <20160530094129.GA7963@localhost.localdomain> <574C239E.8070705@intel.com> <20160531085258.GA8030@localhost.localdomain> <574DAF9E.7060404@intel.com>
In-Reply-To: <574DAF9E.7060404@intel.com>
Subject: Re: [dpdk-dev] [PATCH v5 1/3] mempool: support external handler

On Tue, May 31, 2016 at 04:37:02PM +0100, Hunt, David wrote:
> 
> On 5/31/2016 9:53 AM, Jerin Jacob wrote:
> > On Mon, May 30, 2016 at 12:27:26PM +0100, Hunt, David wrote:
> > > New mempool handlers will use rte_mempool_create_empty(),
> > > rte_mempool_set_handler(), then rte_mempool_populate_*(). These
> > > three functions are new to this release, so no problem.
> > 
> > Having separate APIs for external pool-manager creation is worrisome
> > from an application perspective. Is it possible to have
> > rte_mempool_[xmem]_create cover both the external and the existing SW
> > pool managers, and make rte_mempool_create_empty and
> > rte_mempool_populate_* internal functions?
> > 
> > IMO, we can do that by selecting the specific rte_mempool_set_handler()
> > based on a _flags_ encoding, something like below:
> > 
> > bit 0 - 15  // generic bits used across all the pool managers
> > bit 16 - 23 // pool-handler-specific flag bits
> > bit 24 - 31 // selects the specific pool manager (up to 256 different
> >             // flavors of pool managers; for backward compatibility,
> >             // '0' in bits 24-31 selects the existing SW pool manager)
> > 
> > Applications can then choose the handler by setting the flag in
> > rte_mempool_[xmem]_create. That way it will be easy for testpmd or any
> > other application to choose the pool handler from the command line etc.
> > in the future.
> 
> There might be issues with the 8-bit handler number, as we'd have to add
> an API call to first get the index of a given handler by name, then OR it
> into the flags. That would also mean extra API calls for the non-default
> external handlers. I do agree with the handler-specific bits though.

That name-to-index lookup would be an internal API (mapping a handler name
to the upper 8 bits), right? Seems OK to me.
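To make the bit layout concrete, something like the sketch below is what I
have in mind. The MEMPOOL_F_HANDLER* macros are made up for illustration
and are not existing DPDK API; rte_mempool_create(), MEMPOOL_F_SP_PUT and
rte_socket_id() are the existing calls.

#include <rte_mempool.h>
#include <rte_lcore.h>

/* Hypothetical macros for the proposed bits 24-31 handler selection */
#define MEMPOOL_F_HANDLER_SHIFT 24
#define MEMPOOL_F_HANDLER(idx) \
	(((unsigned int)(idx) & 0xffu) << MEMPOOL_F_HANDLER_SHIFT)

static struct rte_mempool *
create_pool_with_handler(unsigned int handler_idx)
{
	/* handler_idx 0 keeps the existing SW ring handler, so legacy
	 * callers that pass only the current flags stay unchanged. */
	unsigned int flags = MEMPOOL_F_HANDLER(handler_idx) | MEMPOOL_F_SP_PUT;

	return rte_mempool_create("example_pool", 8192, 2048, 256, 0,
				  NULL, NULL, NULL, NULL,
				  rte_socket_id(), flags);
}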
> 
> Having the _empty and _set_handler APIs seems to me to be OK for the
> moment. Maybe Olivier could comment?
> 

But that needs 3 APIs, right? _empty, _set_handler and _populate? I believe
it is better to reduce the public API in the spec wherever possible.

Maybe Olivier could comment?

> > and we can remove the "mbuf: get default mempool handler from
> > configuration" change-set, OR just add: if
> > RTE_MBUF_DEFAULT_MEMPOOL_HANDLER is defined then set the same with
> > rte_mempool_set_handler in rte_mempool_[xmem]_create.
> > 
> > What do you think?
> 
> The "configuration" patch is to allow users to quickly change the mempool
> handler by changing RTE_MBUF_DEFAULT_MEMPOOL_HANDLER to another string of
> a known handler. It could just as easily be left out and use
> rte_mempool_create.
> 

Yes, I understand, but I am trying to avoid a build-time constant. IMO, it
would be better if RTE_MBUF_DEFAULT_MEMPOOL_HANDLER is not defined in the
config by default, and for a quick change developers can build with
RTE_MBUF_DEFAULT_MEMPOOL_HANDLER="specific handler".

> > > to add a parameter to one of them for the config data. Also since
> > > we're adding some new elements to the mempool structure, how about we
> > > add a new pointer for a void pointer to a config data structure, as
> > > defined by the handler.
> > > 
> > > So, new element in rte_mempool struct alongside the *pool
> > >      void *pool;
> > >      void *pool_config;
> > > 
> > > Then add a param to the rte_mempool_set_handler function:
> > > int
> > > rte_mempool_set_handler(struct rte_mempool *mp, const char *name,
> > >                         void *pool_config)
> > 
> > IMO, maybe we need to have _set_ and _get_, so I think we can have two
> > separate callbacks in the external pool manager for that if required.
> > For now, we can live with the pool-manager-specific 8 bits (bits 16-23)
> > for the configuration as mentioned above and add the new callbacks for
> > set and get when required.
> 
> OK, we'll keep the config to the 8 bits of the flags for now. That will
> also mean I won't add the pool_config void pointer either (for the
> moment).

OK to me.

> > > > 2) IMO, it is better to change void *pool in struct rte_mempool to
> > > > an anonymous union type, something like below, so that the mempool
> > > > implementation can choose the best type.
> > > >      union {
> > > >          void *pool;
> > > >          uint64_t val;
> > > >      }
> > > 
> > > Could we do this by using the union for the *pool_config suggested
> > > above, would that give you what you need?
> > 
> > It would be extra overhead for an external pool manager to _alloc_
> > memory, store the allocated pointer in the mempool struct (as *pool),
> > and use pool to point to other data structures, since some
> > implementations need only a limited number of bytes to store the
> > external pool manager specific context.
> > 
> > In order to fix this problem, we could classify fast-path and slow-path
> > elements in struct rte_mempool, move all fast-path elements into the
> > first cache line, and create an empty opaque space in the remaining
> > bytes of that cache line, so that if the external pool manager needs
> > only limited space it is not required to allocate separate memory for
> > the per-core cache in the fast path.
> > 
> > Something like below:
> >      union {
> >          void *pool;
> >          uint64_t val;
> >          uint8_t extra_mem[16]; // available free bytes in fast-path cache line
> >      }
> 
> Something for the future, perhaps? Will the 8 bits in the flags suffice
> for now?

OK. But a simple anonymous union of the same size should be OK to add now?
Not much change, I believe. If it's difficult then postpone it:

	union {
		void *pool;
		uint64_t val;
	}

> > Other points,
> > 
> > 1) Is it possible to remove the unused is_mp in the __mempool_put_bulk
> > function, as it is just an internal function?
> 
> Fixed __mempool_get_bulk too.
> 
> > 2) Considering "get" and "put" are the fast-path callbacks for the
> > pool manager, is it possible to avoid the extra overhead of the
> > following _load_ and the additional cache line touched on each call?
> > 
> >      rte_mempool_handler_table.handler[handler_idx]
> > 
> > I understand it is for multiprocess support, but I am thinking: can we
> > introduce something like the ethernet API's multiprocess support and
> > resolve the "put" and "get" function pointers at init and store them in
> > struct mempool? Something like:
> > 
> > file: drivers/net/ixgbe/ixgbe_ethdev.c
> > search for if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
> 
> I'll look at this one before posting the next version of the patch
> (soon). :)

OK.
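Roughly the kind of thing I mean is sketched below. It will not compile
against the current tree as-is: rte_mempool_handler_table comes from this
patch series, and the ops/handler_idx fields on struct rte_mempool are
hypothetical additions used only to illustrate resolving the callbacks once
at init. rte_eal_process_type() and RTE_PROC_PRIMARY are existing EAL API.

#include <rte_mempool.h>

/* Sketch: resolve the handler callbacks once at init and cache them in the
 * mempool, so the fast path does not reload
 * rte_mempool_handler_table.handler[handler_idx] on every get/put. */
static void
mempool_resolve_ops(struct rte_mempool *mp)
{
	const struct rte_mempool_handler *h =
		&rte_mempool_handler_table.handler[mp->handler_idx];

	/* Function pointers are only valid inside the process that resolved
	 * them, so a secondary process must redo this lookup at its own init
	 * instead of trusting pointers written by the primary -- the same
	 * pattern ixgbe_ethdev.c follows when rte_eal_process_type() is not
	 * RTE_PROC_PRIMARY. */
	mp->ops.put = h->put;
	mp->ops.get = h->get;
}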
> > 
> > Jerin
> 
> Thanks for your input on this, much appreciated.
> Dave.
> 