From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Thu, 28 Jan 2016 22:56:33 +0530
From: Jerin Jacob <Jerin.Jacob@caviumnetworks.com>
To: David Hunt
Message-ID: <20160128172631.GA11992@localhost.localdomain>
References: <1453829155-1366-1-git-send-email-david.hunt@intel.com>
In-Reply-To: <1453829155-1366-1-git-send-email-david.hunt@intel.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
User-Agent: Mutt/1.5.23 (2014-03-12)
Cc: dev@dpdk.org
Subject: Re: [dpdk-dev] [PATCH 0/5] add external mempool manager
List-Id: patches and discussions about DPDK

On Tue, Jan 26, 2016 at 05:25:50PM +0000, David Hunt wrote:
> Hi all on the list.
>
> Here's a proposed patch for an external mempool manager.
>
> The External Mempool Manager is an extension to the mempool API that
> allows users to add and use an external mempool manager, which allows
> external memory subsystems such as external hardware memory management
> systems and software-based memory allocators to be used with DPDK.

I like this approach. It will be useful for external hardware memory
pool managers.

BTW, did you encounter any performance impact when changing to the
function-pointer-based approach?

> The existing API to the internal DPDK mempool manager will remain
> unchanged and will be backward compatible.
>
> There are two aspects to the external mempool manager:
> 1. Adding the code for your new mempool handler. This is achieved by
>    adding a new mempool handler source file into the librte_mempool
>    library and using the REGISTER_MEMPOOL_HANDLER macro.
> 2. Using the new API call rte_mempool_create_ext to create a new
>    mempool, using the name parameter to identify which handler to use.
>
> New API calls added:
> 1. A new mempool 'create' function which accepts the mempool handler
>    name.
> 2.
> A new mempool 'rte_get_mempool_handler' function which accepts the
>    mempool handler name and returns the index of the relevant set of
>    callbacks for that mempool handler.
>
> Several external mempool managers may be used in the same application.
> A new mempool can then be created by using the new 'create' function,
> providing the mempool handler name to point the mempool to the
> relevant mempool manager callback structure.
>
> The old 'create' function can still be called by legacy programs, and
> will internally work out the mempool handler based on the flags
> provided (single producer, single consumer, etc.). By default,
> handlers are created internally to implement the built-in DPDK mempool
> manager and mempool types.
>
> The external mempool manager needs to provide the following functions:
> 1. alloc - allocates the mempool memory and adds each object onto a
>    ring
> 2. put - puts an object back into the mempool once an application has
>    finished with it
> 3. get - gets an object from the mempool for use by the application
> 4. get_count - gets the number of available objects in the mempool
> 5. free - frees the mempool memory
>
> Every time a get/put/get_count is called from the application/PMD, the
> callback for that mempool is called. These functions are in the fast
> path, and any unoptimised handlers may limit performance.
>
> The new APIs are as follows:
>
> 1. rte_mempool_create_ext
>
>    struct rte_mempool *
>    rte_mempool_create_ext(const char *name, unsigned n,
>            unsigned cache_size, unsigned private_data_size,
>            int socket_id, unsigned flags,
>            const char *handler_name);
>
> 2. rte_get_mempool_handler
>
>    int16_t
>    rte_get_mempool_handler(const char *name);

Do we need the above public API? In any case, we need the rte_mempool*
pointer to operate on mempools (which holds the index anyway).
Maybe a similar API with a different name and return value would be
better, so that an ethernet driver which depends on a particular HW
pool manager can find out whether a given "name" has been registered.

> Please see rte_mempool.h for further information on the parameters.
>
> The important thing to note is that the mempool handler is passed by
> name to rte_mempool_create_ext, which in turn calls
> rte_get_mempool_handler to get the handler index, which is stored in
> the rte_mempool structure. This allows multiple processes to use the
> same mempool, as the function pointers are accessed via the handler
> index.
>
> The mempool handler structure contains callbacks to the implementation
> of the handler, and is set up for registration as follows:
>
>    static struct rte_mempool_handler handler_sp_mc = {
>        .name = "ring_sp_mc",
>        .alloc = rte_mempool_common_ring_alloc,
>        .put = common_ring_sp_put,
>        .get = common_ring_mc_get,
>        .get_count = common_ring_get_count,
>        .free = common_ring_free,
>    };
>
> Then the following macro will register the handler in the array of
> handlers:
>
>    REGISTER_MEMPOOL_HANDLER(handler_mp_mc);
>
> For an example of a simple malloc-based mempool manager, see
> lib/librte_mempool/custom_mempool.c
>
> For an example of API usage, please see app/test/test_ext_mempool.c,
> which implements a rudimentary mempool manager using simple mallocs
> for each mempool object (custom_mempool.c).
> David Hunt (5):
>   mempool: add external mempool manager support
>   memool: add stack (lifo) based external mempool handler
>   mempool: add custom external mempool handler example
>   mempool: add autotest for external mempool custom example
>   mempool: allow rte_pktmbuf_pool_create switch between memool handlers
>
>  app/test/Makefile                         |   1 +
>  app/test/test_ext_mempool.c               | 470 ++++++++++++++++++++++++++++++
>  app/test/test_mempool_perf.c              |   2 -
>  lib/librte_mbuf/rte_mbuf.c                |  11 +
>  lib/librte_mempool/Makefile               |   3 +
>  lib/librte_mempool/custom_mempool.c       | 158 ++++++++++
>  lib/librte_mempool/rte_mempool.c          | 208 +++++++++----
>  lib/librte_mempool/rte_mempool.h          | 205 +++++++++++--
>  lib/librte_mempool/rte_mempool_default.c  | 229 +++++++++++++++
>  lib/librte_mempool/rte_mempool_internal.h |  70 +++++
>  lib/librte_mempool/rte_mempool_stack.c    | 162 ++++++++++
>  11 files changed, 1430 insertions(+), 89 deletions(-)
>  create mode 100644 app/test/test_ext_mempool.c
>  create mode 100644 lib/librte_mempool/custom_mempool.c
>  create mode 100644 lib/librte_mempool/rte_mempool_default.c
>  create mode 100644 lib/librte_mempool/rte_mempool_internal.h
>  create mode 100644 lib/librte_mempool/rte_mempool_stack.c
>
> --
> 1.9.3