Date: Wed, 1 Jun 2016 16:48:01 +0530
From: Jerin Jacob
To: "Hunt, David"
CC: Olivier MATZ, Jan Viktorin
Message-ID: <20160601111756.GA2495@localhost.localdomain>
References: <20160527103311.GA13577@localhost.localdomain>
 <57485D4F.9020302@intel.com>
 <20160530094129.GA7963@localhost.localdomain>
 <574C239E.8070705@intel.com>
 <20160531085258.GA8030@localhost.localdomain>
 <574DAF9E.7060404@intel.com>
 <20160531160334.GA21985@localhost.localdomain>
 <574DF6DC.6040306@6wind.com>
 <20160531211122.GA2139@localhost.localdomain>
 <574EBCFC.5020802@intel.com>
In-Reply-To: <574EBCFC.5020802@intel.com>
Subject: Re: [dpdk-dev] [PATCH v5 1/3] mempool: support external handler

On Wed, Jun 01, 2016 at 11:46:20AM +0100, Hunt, David wrote:
>
>
> On 5/31/2016 10:11 PM, Jerin Jacob wrote:
> > On Tue, May 31, 2016 at 10:41:00PM +0200, Olivier MATZ wrote:
> > > Hi,
> > >
> > > On 05/31/2016 06:03 PM, Jerin Jacob wrote:
> > > > On Tue, May 31, 2016 at 04:37:02PM +0100, Hunt, David wrote:
> > > > >
> > > > > On 5/31/2016 9:53 AM, Jerin Jacob wrote:
> > > > > > On Mon, May 30, 2016 at 12:27:26PM +0100, Hunt, David wrote:
> > > > > > > New mempool handlers will use rte_mempool_create_empty(),
> > > > > > > rte_mempool_set_handler(),
> > > > > > > then rte_mempool_populate_*(). These three functions are new to this
> > > > > > > release, so no problem.
> > > > > > Having separate APIs for external pool-manager creation is worrisome from an
> > > > > > application perspective. Is it possible to have rte_mempool_[xmem]_create
> > > > > > for both the external and the existing SW pool manager, and make
> > > > > > rte_mempool_create_empty and rte_mempool_populate_* internal functions?
> > > > > >
> > > > > > IMO, we can do that by selecting the specific rte_mempool_set_handler()
> > > > > > based on a _flags_ encoding, something like below:
> > > > > >
> > > > > > bits 0-15  // generic bits used across all the pool managers
> > > > > > bits 16-23 // pool-handler-specific flag bits
> > > > > > bits 24-31 // to select the specific pool manager (up to 256 different
> > > > > > flavors of pool managers; for backward compatibility, make '0' (in 24-31)
> > > > > > select the existing SW pool manager)
> > > > > >
> > > > > > Applications can then choose the handler by setting the flag in
> > > > > > rte_mempool_[xmem]_create. That way it will be easy in testpmd or any other
> > > > > > application to choose the pool handler from the command line etc. in future.
> > > > > There might be issues with the 8-bit handler number, as we'd have to add an
> > > > > API call to first get the index of a given handler by name, then OR it into
> > > > > the flags. That would also mean extra API calls for the non-default external
> > > > > handlers. I do agree with the handler-specific bits, though.
> > > > That would be an internal API (upper 8 bits to handler name). Right?
> > > > Seems to be OK to me.
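To make the bit encoding above concrete, here is a minimal sketch of what I
had in mind (all macro and function names below are made up for
illustration; none of this is existing DPDK API):

/* Hypothetical flag layout for rte_mempool_[xmem]_create(). */
#include <stdint.h>

#define MEMPOOL_F_GENERIC_MASK   0x0000ffffu /* bits 0-15: generic flags */
#define MEMPOOL_F_HANDLER_MASK   0x00ff0000u /* bits 16-23: handler-specific */
#define MEMPOOL_F_MANAGER_MASK   0xff000000u /* bits 24-31: pool manager id */
#define MEMPOOL_F_MANAGER_SHIFT  24

/* Manager id 0 keeps the existing SW pool manager, for backward compat. */
#define MEMPOOL_MANAGER_SW   0u
#define MEMPOOL_MANAGER_HW1  1u /* made-up id for an external HW handler */

/* Application side: build create() flags that select a manager. */
static inline uint32_t
mempool_flags(uint32_t generic, uint32_t handler_bits, uint32_t manager)
{
	return (generic & MEMPOOL_F_GENERIC_MASK) |
	       (handler_bits & MEMPOOL_F_HANDLER_MASK) |
	       (manager << MEMPOOL_F_MANAGER_SHIFT);
}

/* Library side: create() decodes the manager id and binds the handler. */
static inline uint32_t
mempool_flags_to_manager(uint32_t flags)
{
	return (flags & MEMPOOL_F_MANAGER_MASK) >> MEMPOOL_F_MANAGER_SHIFT;
}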
> > > > > Having the _empty and _set_handler APIs seems to me to be OK for the
> > > > > moment. Maybe Olivier could comment?
> > > >
> > > > But we need 3 APIs then, right? _empty, _set_handler and _populate? I believe
> > > > it is better to reduce the public API in the spec wherever possible.
> > > >
> > > > Maybe Olivier could comment?
> > > Well, I think having 3 different functions is not a problem if the API
> > > is clearer.
> > >
> > > In my opinion, the following:
> > > rte_mempool_create_empty()
> > > rte_mempool_set_handler()
> > > rte_mempool_populate()
> > >
> > > is clearer than:
> > > rte_mempool_create(15 args)
> > But the proposed scheme does not add any new arguments to
> > rte_mempool_create. It just extends the existing flags.
> >
> > rte_mempool_create(15 args) is still there as the API for internal pool
> > creation.
> >
> > > Splitting the flags into 3 groups, with one not being flags but a
> > > pool handler number, looks overcomplicated from a user perspective.
> > I am concerned with seamless integration with existing applications.
> > IMO, it's not worth having separate functions for external vs internal
> > pool creation from the application's point of view (now every application
> > has to add this logic everywhere for no good reason), just my 2 cents.
>
> I think that there is always going to be some extra code in the applications
> that want to use an external mempool. The _set_handler approach does
> create, set_handler, populate. The flags method queries the handler list to
> get the index, sets the flags bits, then calls create. Both methods will
> work.

I was suggesting flags like TXQ in ethdev, where the application just
selects the mode. Not sure why the application has to get the index first.
Something like:

#define ETH_TXQ_FLAGS_NOMULTSEGS 0x0001 /**< nb_segs=1 for all mbufs */
#define ETH_TXQ_FLAGS_NOREFCOUNT 0x0002 /**< refcnt can be ignored */
#define ETH_TXQ_FLAGS_NOMULTMEMP 0x0004 /**< all bufs come from same mempool */

Anyway, it looks like no one else is much bothered about the external pool
manager creation API being different. So, I give up. No objections from my
side :-)

> But I think the _set_handler approach is more user friendly, therefore that
> is the method I would lean towards.
>
> > > > > > and we can remove the "mbuf: get default mempool handler from
> > > > > > configuration" change-set, OR just add: if RTE_MBUF_DEFAULT_MEMPOOL_HANDLER
> > > > > > is defined, then set the same with rte_mempool_set_handler in
> > > > > > rte_mempool_[xmem]_create.
> > > > > >
> > > > > > What do you think?
> > > > > The "configuration" patch is to allow users to quickly change the mempool
> > > > > handler by changing RTE_MBUF_DEFAULT_MEMPOOL_HANDLER to another string of
> > > > > a known handler. It could just as easily be left out and use
> > > > > rte_mempool_create.
> > > >
> > > > Yes, I understand, but I am trying to avoid a build-time constant. IMO, it
> > > > would be better if, by default, RTE_MBUF_DEFAULT_MEMPOOL_HANDLER is not
> > > > defined in the config, and for a quick change developers can do a build
> > > > with RTE_MBUF_DEFAULT_MEMPOOL_HANDLER="specific handler".
> > > My understanding of the compile-time configuration option was
> > > to allow a specific architecture to define a specific hw-assisted
> > > handler by default.
> > >
> > > Indeed, if there is no such need for now, we may remove it. But
> > > we need a way to select another handler, at least in test-pmd
> > > (in command line arguments?).
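For reference, application code with the three-call API proposed in this
patchset would look roughly like this (a sketch only: the handler name
"my_hw_handler" is invented, and the exact populate variant and signatures
may differ in the final patch):

#include <rte_mempool.h>
#include <rte_lcore.h>

static struct rte_mempool *
create_pool_with_external_handler(void)
{
	struct rte_mempool *mp;

	/* 1) create an empty pool: no memory populated yet, default handler */
	mp = rte_mempool_create_empty("ext_pool", 8192, 2048,
				      256,	/* per-lcore cache size */
				      0,	/* private data size */
				      rte_socket_id(), 0 /* flags */);
	if (mp == NULL)
		return NULL;

	/* 2) bind the external handler by name, before any populate call */
	if (rte_mempool_set_handler(mp, "my_hw_handler") < 0) {
		rte_mempool_free(mp);
		return NULL;
	}

	/* 3) populate: elements end up enqueued through the chosen handler */
	if (rte_mempool_populate_default(mp) < 0) {
		rte_mempool_free(mp);
		return NULL;
	}
	return mp;
}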
> > Like txflags in testpmd, IMO, mempool flags will help to select the handlers
> > seamlessly, as suggested above.
> >
> > If we are _not_ taking the flags-based selection scheme, then it makes sense
> > to keep RTE_MBUF_DEFAULT_MEMPOOL_HANDLER.
>
> see comment above

Try to add some means to select the external handler for existing
applications, so that we can test existing applications in different modes.

Thanks,
Jerin

> > > > > > > > 2) IMO, it is better to change void *pool in struct rte_mempool to
> > > > > > > > an anonymous union type, something like below, so that the mempool
> > > > > > > > implementation can choose the best type.
> > > > > > > > union {
> > > > > > > >     void *pool;
> > > > > > > >     uint64_t val;
> > > > > > > > }
> > > > > > > Could we do this by using the union for the *pool_config suggested
> > > > > > > above? Would that give you what you need?
> > > > > > It would be extra overhead for the external pool manager to _alloc_
> > > > > > memory and store the allocated pointer in the mempool struct (as *pool)
> > > > > > and use pool for pointing to other data structures, as some
> > > > > > implementations need only a limited number of bytes to store the
> > > > > > external pool manager specific context.
> > > > > >
> > > > > > In order to fix this problem, we may classify fast-path and slow-path
> > > > > > elements in struct rte_mempool, move all fast-path elements into the
> > > > > > first cache line, and create an empty opaque space in the remaining
> > > > > > bytes of the cache line, so that if the external pool manager needs
> > > > > > only limited space it is not required to allocate separate memory to
> > > > > > save the per-core cache in the fast path.
> > > > > >
> > > > > > Something like below:
> > > > > > union {
> > > > > >     void *pool;
> > > > > >     uint64_t val;
> > > > > >     uint8_t extra_mem[16]; // available free bytes in fast-path cache line
> > > > > > }
> > > > > Something for the future, perhaps? Will the 8 bits in the flags suffice
> > > > > for now?
> > > > OK. But a simple anonymous union of the same types should be OK to add now?
> > > > Not much change, I believe. If it's difficult, then postpone it.
> > > >
> > > > union {
> > > >     void *pool;
> > > >     uint64_t val;
> > > > }
> > > I'm OK with the simple union with (void *) and (uint64_t).
> > > Maybe "val" should be replaced by something more appropriate.
> > > Is "pool_id" a better name?
> > How about "opaque"?
>
> I think I would lean towards pool_id in this case.
>
> Regards,
> David.
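P.S. For clarity, here is roughly how the proposed union would sit in
struct rte_mempool (a sketch only; field placement and the final name for
"val"/"pool_id" are still open, and all other fields are elided):

#include <stdint.h>

struct rte_mempool {
	/* ... fast-path fields kept in the first cache line ... */
	union {
		void *pool;		/* handlers that keep a pointer to
					 * their own context */
		uint64_t pool_id;	/* handlers that only need a 64-bit
					 * id, e.g. a HW pool identifier */
		uint8_t extra_mem[16];	/* future idea: small in-place
					 * context, avoiding a separate
					 * allocation */
	};
	/* ... slow-path fields in the following cache lines ... */
};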