From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Thu, 28 Jan 2016 23:22:18 +0530
From: Jerin Jacob <Jerin.Jacob@caviumnetworks.com>
To: David Hunt <david.hunt@intel.com>
Message-ID: <20160128175216.GB11992@localhost.localdomain>
References: <1453829155-1366-1-git-send-email-david.hunt@intel.com>
 <1453829155-1366-2-git-send-email-david.hunt@intel.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <1453829155-1366-2-git-send-email-david.hunt@intel.com>
User-Agent: Mutt/1.5.23 (2014-03-12)
Cc: dev@dpdk.org
Subject: Re: [dpdk-dev] [PATCH 1/5] mempool: add external mempool manager support
List-Id: patches and discussions about DPDK

On Tue, Jan 26, 2016 at 05:25:51PM +0000, David Hunt wrote:
> Adds the new rte_mempool_create_ext api and callback mechanism for
> external mempool handlers
>
> Modifies the existing rte_mempool_create to set up the handler_idx to
> the relevant mempool handler based on the handler name:
>     ring_sp_sc
>     ring_mp_mc
>     ring_sp_mc
>     ring_mp_sc
>
> Signed-off-by: David Hunt <david.hunt@intel.com>
> ---
>  app/test/test_mempool_perf.c              |   1 -
>  lib/librte_mempool/Makefile               |   1 +
>  lib/librte_mempool/rte_mempool.c          | 210 +++++++++++++++++++--------
>  lib/librte_mempool/rte_mempool.h          | 207 +++++++++++++++++++++++----
>  lib/librte_mempool/rte_mempool_default.c  | 229 ++++++++++++++++++++++++++++++
>  lib/librte_mempool/rte_mempool_internal.h |  74 ++++++++++
>  6 files changed, 634 insertions(+), 88 deletions(-)
>  create mode 100644 lib/librte_mempool/rte_mempool_default.c
>  create mode 100644 lib/librte_mempool/rte_mempool_internal.h
>
> diff --git a/app/test/test_mempool_perf.c b/app/test/test_mempool_perf.c
> index cdc02a0..091c1df 100644
> --- a/app/test/test_mempool_perf.c
> +++ b/app/test/test_mempool_perf.c
> @@ -161,7 +161,6 @@ per_lcore_mempool_test(__attribute__((unused)) void *arg)
>  					n_get_bulk);
>  			if (unlikely(ret < 0)) {
>  				rte_mempool_dump(stdout, mp);
> -				rte_ring_dump(stdout, mp->ring);
>  				/* in this case, objects are lost... */
>  				return -1;
>  			}
> diff --git a/lib/librte_mempool/Makefile b/lib/librte_mempool/Makefile
> index a6898ef..7c81ef6 100644
> --- a/lib/librte_mempool/Makefile
> +++ b/lib/librte_mempool/Makefile
> @@ -42,6 +42,7 @@ LIBABIVER := 1
>
>  # all source are stored in SRCS-y
>  SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) += rte_mempool.c
> +SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) += rte_mempool_default.c
>  ifeq ($(CONFIG_RTE_LIBRTE_XEN_DOM0),y)
>  SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) += rte_dom0_mempool.c
>  endif
> diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
> index aff5f6d..8c01838 100644
> --- a/lib/librte_mempool/rte_mempool.c
> +++ b/lib/librte_mempool/rte_mempool.c
> @@ -59,10 +59,11 @@
>  #include
>
>  #include "rte_mempool.h"
> +#include "rte_mempool_internal.h"
>
>  TAILQ_HEAD(rte_mempool_list, rte_tailq_entry);
>
> -static struct rte_tailq_elem rte_mempool_tailq = {
> +struct rte_tailq_elem rte_mempool_tailq = {
>  	.name = "RTE_MEMPOOL",
>  };
>  EAL_REGISTER_TAILQ(rte_mempool_tailq)
> @@ -149,7 +150,7 @@ mempool_add_elem(struct rte_mempool *mp, void *obj, uint32_t obj_idx,
>  	obj_init(mp, obj_init_arg, obj, obj_idx);
>
>  	/* enqueue in ring */
> -	rte_ring_sp_enqueue(mp->ring, obj);
> +	rte_mempool_ext_put_bulk(mp, &obj, 1);
>  }
>
>  uint32_t
> @@ -375,48 +376,28 @@ rte_mempool_xmem_usage(void *vaddr, uint32_t elt_num, size_t elt_sz,
>  	return usz;
>  }
>
> -#ifndef RTE_LIBRTE_XEN_DOM0
> -/* stub if DOM0 support not configured */
> -struct rte_mempool *
> -rte_dom0_mempool_create(const char *name __rte_unused,
> -	unsigned n __rte_unused,
> -	unsigned elt_size __rte_unused,
> -	unsigned cache_size __rte_unused,
> -	unsigned private_data_size __rte_unused,
> -	rte_mempool_ctor_t *mp_init __rte_unused,
> -	void *mp_init_arg __rte_unused,
> -	rte_mempool_obj_ctor_t *obj_init __rte_unused,
> -	void *obj_init_arg __rte_unused,
> -	int socket_id __rte_unused,
> -	unsigned flags __rte_unused)
> -{
> -	rte_errno = EINVAL;
> -	return NULL;
> -}
> -#endif
> -
>  /* create the mempool */
>  struct rte_mempool *
>  rte_mempool_create(const char *name, unsigned n, unsigned elt_size,
> -	unsigned cache_size, unsigned private_data_size,
> -	rte_mempool_ctor_t *mp_init, void *mp_init_arg,
> -	rte_mempool_obj_ctor_t *obj_init, void *obj_init_arg,
> -	int socket_id, unsigned flags)
> +	unsigned cache_size, unsigned private_data_size,
> +	rte_mempool_ctor_t *mp_init, void *mp_init_arg,
> +	rte_mempool_obj_ctor_t *obj_init, void *obj_init_arg,
> +	int socket_id, unsigned flags)
>  {
>  	if (rte_xen_dom0_supported())
>  		return rte_dom0_mempool_create(name, n, elt_size,
> -			cache_size, private_data_size,
> -			mp_init, mp_init_arg,
> -			obj_init, obj_init_arg,
> -			socket_id, flags);
> +			cache_size, private_data_size,
> +			mp_init, mp_init_arg,
> +			obj_init, obj_init_arg,
> +			socket_id, flags);
>  	else
>  		return rte_mempool_xmem_create(name, n, elt_size,
> -			cache_size, private_data_size,
> -			mp_init, mp_init_arg,
> -			obj_init, obj_init_arg,
> -			socket_id, flags,
> -			NULL, NULL, MEMPOOL_PG_NUM_DEFAULT,
> -			MEMPOOL_PG_SHIFT_MAX);
> +			cache_size, private_data_size,
> +			mp_init, mp_init_arg,
> +			obj_init, obj_init_arg,
> +			socket_id, flags,
> +			NULL, NULL,
> +			MEMPOOL_PG_NUM_DEFAULT, MEMPOOL_PG_SHIFT_MAX);
>  }
>
>  /*
> @@ -435,11 +416,9 @@ rte_mempool_xmem_create(const char *name, unsigned n, unsigned elt_size,
>  	const phys_addr_t paddr[], uint32_t pg_num, uint32_t pg_shift)
>  {
>  	char mz_name[RTE_MEMZONE_NAMESIZE];
> -	char rg_name[RTE_RING_NAMESIZE];
>  	struct rte_mempool_list *mempool_list;
>  	struct rte_mempool *mp = NULL;
>  	struct rte_tailq_entry *te;
> -	struct rte_ring *r;
>  	const struct rte_memzone *mz;
>  	size_t mempool_size;
>  	int mz_flags = RTE_MEMZONE_1GB|RTE_MEMZONE_SIZE_HINT_ONLY;
> @@ -469,7 +448,7 @@ rte_mempool_xmem_create(const char *name, unsigned n, unsigned elt_size,
>
>  	/* asked cache too big */
>  	if (cache_size > RTE_MEMPOOL_CACHE_MAX_SIZE ||
> -	    CALC_CACHE_FLUSHTHRESH(cache_size) > n) {
> +		CALC_CACHE_FLUSHTHRESH(cache_size) > n) {
>  		rte_errno = EINVAL;
>  		return NULL;
>  	}
> @@ -502,16 +481,8 @@ rte_mempool_xmem_create(const char *name, unsigned n, unsigned elt_size,
>  		return NULL;
>  	}
>
> -	rte_rwlock_write_lock(RTE_EAL_MEMPOOL_RWLOCK);
>
> -	/* allocate the ring that will be used to store objects */
> -	/* Ring functions will return appropriate errors if we are
> -	 * running as a secondary process etc., so no checks made
> -	 * in this function for that condition */
> -	snprintf(rg_name, sizeof(rg_name), RTE_MEMPOOL_MZ_FORMAT, name);
> -	r = rte_ring_create(rg_name, rte_align32pow2(n+1), socket_id, rg_flags);
> -	if (r == NULL)
> -		goto exit;
> +	rte_rwlock_write_lock(RTE_EAL_MEMPOOL_RWLOCK);
>
>  	/*
>  	 * reserve a memory zone for this mempool: private data is
> @@ -588,7 +559,6 @@ rte_mempool_xmem_create(const char *name, unsigned n, unsigned elt_size,
>  	memset(mp, 0, sizeof(*mp));
>  	snprintf(mp->name, sizeof(mp->name), "%s", name);
>  	mp->phys_addr = mz->phys_addr;
> -	mp->ring = r;
>  	mp->size = n;
>  	mp->flags = flags;
>  	mp->elt_size = objsz.elt_size;
> @@ -598,6 +568,22 @@ rte_mempool_xmem_create(const char *name, unsigned n, unsigned elt_size,
>  	mp->cache_flushthresh = CALC_CACHE_FLUSHTHRESH(cache_size);
>  	mp->private_data_size = private_data_size;
>
> +	/*
> +	 * Since we have 4 combinations of the SP/SC/MP/MC, and stack,
> +	 * examine the flags to set the correct index into the handler table.
> +	 */
> +	if (flags & MEMPOOL_F_USE_STACK)
> +		mp->handler_idx = rte_get_mempool_handler("stack");
> +	else if (flags & (MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET))
> +		mp->handler_idx = rte_get_mempool_handler("ring_sp_sc");
> +	else if (flags & MEMPOOL_F_SP_PUT)
> +		mp->handler_idx = rte_get_mempool_handler("ring_sp_mc");
> +	else if (flags & MEMPOOL_F_SC_GET)
> +		mp->handler_idx = rte_get_mempool_handler("ring_mp_sc");
> +	else
> +		mp->handler_idx = rte_get_mempool_handler("ring_mp_mc");
> +

Why still use flag-based selection here? Why not "name"-based?
See below for more description.
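Something along these lines (untested sketch, the helper name is mine; it
reuses rte_get_mempool_handler() from this patch) would keep a single
name-based path, and would also fix the ring_sp_sc case above, where
flags & (MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET) is true when *either* flag
is set, not only when both are:

/* Hypothetical helper: derive the handler name from the legacy flags,
 * then resolve it with one name-based lookup. Handler names assume the
 * defaults registered in rte_mempool_default.c.
 */
static const char *
mp_handler_name_from_flags(unsigned flags)
{
	if (flags & MEMPOOL_F_USE_STACK)
		return "stack";
	if ((flags & MEMPOOL_F_SP_PUT) && (flags & MEMPOOL_F_SC_GET))
		return "ring_sp_sc";
	if (flags & MEMPOOL_F_SP_PUT)
		return "ring_sp_mc";
	if (flags & MEMPOOL_F_SC_GET)
		return "ring_mp_sc";
	return "ring_mp_mc";
}

	/* ... then, in rte_mempool_xmem_create(): */
	mp->handler_idx =
		rte_get_mempool_handler(mp_handler_name_from_flags(flags));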
>  	/* calculate address of the first element for continuous mempool. */
>  	obj = (char *)mp + MEMPOOL_HEADER_SIZE(mp, pg_num) +
>  		private_data_size;
> @@ -613,7 +599,6 @@ rte_mempool_xmem_create(const char *name, unsigned n, unsigned elt_size,
>  		mp->elt_va_start = (uintptr_t)obj;
>  		mp->elt_pa[0] = mp->phys_addr +
>  			(mp->elt_va_start - (uintptr_t)mp);
> -
>  	/* mempool elements in a separate chunk of memory. */
>  	} else {
>  		mp->elt_va_start = (uintptr_t)vaddr;
> @@ -622,6 +607,10 @@ rte_mempool_xmem_create(const char *name, unsigned n, unsigned elt_size,
>
>  	mp->elt_va_end = mp->elt_va_start;
>
> +	/* Parameters are setup. Call the mempool handler alloc */
> +	if ((rte_mempool_ext_alloc(mp, name, n, socket_id, flags)) == NULL)
> +		goto exit;
> +
>  	/* call the initializer */
>  	if (mp_init)
>  		mp_init(mp, mp_init_arg);
> @@ -646,7 +635,7 @@ rte_mempool_count(const struct rte_mempool *mp)
>  {
>  	unsigned count;
>
> -	count = rte_ring_count(mp->ring);
> +	count = rte_mempool_ext_get_count(mp);
>
>  #if RTE_MEMPOOL_CACHE_MAX_SIZE > 0
>  	{
> @@ -681,7 +670,9 @@ rte_mempool_dump_cache(FILE *f, const struct rte_mempool *mp)
>  	fprintf(f, "  cache_size=%"PRIu32"\n", mp->cache_size);
>  	for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
>  		cache_count = mp->local_cache[lcore_id].len;
> -		fprintf(f, "    cache_count[%u]=%u\n", lcore_id, cache_count);
> +		if (cache_count > 0)
> +			fprintf(f, "    cache_count[%u]=%u\n",
> +				lcore_id, cache_count);
>  		count += cache_count;
>  	}
>  	fprintf(f, "  total_cache_count=%u\n", count);
> @@ -802,14 +793,13 @@ rte_mempool_dump(FILE *f, const struct rte_mempool *mp)
>
>  	fprintf(f, "mempool <%s>@%p\n", mp->name, mp);
>  	fprintf(f, "  flags=%x\n", mp->flags);
> -	fprintf(f, "  ring=<%s>@%p\n", mp->ring->name, mp->ring);
>  	fprintf(f, "  phys_addr=0x%" PRIx64 "\n", mp->phys_addr);
>  	fprintf(f, "  size=%"PRIu32"\n", mp->size);
>  	fprintf(f, "  header_size=%"PRIu32"\n", mp->header_size);
>  	fprintf(f, "  elt_size=%"PRIu32"\n", mp->elt_size);
>  	fprintf(f, "  trailer_size=%"PRIu32"\n", mp->trailer_size);
>  	fprintf(f, "  total_obj_size=%"PRIu32"\n",
> -	       mp->header_size + mp->elt_size + mp->trailer_size);
> +		mp->header_size + mp->elt_size + mp->trailer_size);
>
>  	fprintf(f, "  private_data_size=%"PRIu32"\n", mp->private_data_size);
>  	fprintf(f, "  pg_num=%"PRIu32"\n", mp->pg_num);
> @@ -825,7 +815,7 @@ rte_mempool_dump(FILE *f, const struct rte_mempool *mp)
>  		mp->size);
>
>  	cache_count = rte_mempool_dump_cache(f, mp);
> -	common_count = rte_ring_count(mp->ring);
> +	common_count = /* rte_ring_count(mp->ring)*/0;
>  	if ((cache_count + common_count) > mp->size)
>  		common_count = mp->size - cache_count;
>  	fprintf(f, "  common_pool_count=%u\n", common_count);
> @@ -904,7 +894,7 @@ rte_mempool_lookup(const char *name)
>  }
>
>  void rte_mempool_walk(void (*func)(const struct rte_mempool *, void *),
> -		      void *arg)
> +	void *arg)
>  {
>  	struct rte_tailq_entry *te = NULL;
>  	struct rte_mempool_list *mempool_list;
> @@ -919,3 +909,111 @@ void rte_mempool_walk(void (*func)(const struct rte_mempool *, void *),
>
>  	rte_rwlock_read_unlock(RTE_EAL_MEMPOOL_RWLOCK);
>  }
> +
> +
> +/* create the mempool using an external mempool manager */
> +struct rte_mempool *
> +rte_mempool_create_ext(const char *name, unsigned n, unsigned elt_size,
> +	unsigned cache_size, unsigned private_data_size,
> +	rte_mempool_ctor_t *mp_init, void *mp_init_arg,
> +	rte_mempool_obj_ctor_t *obj_init, void *obj_init_arg,
> +	int socket_id, unsigned flags,
> +	const char *handler_name)
> +{
> +	char mz_name[RTE_MEMZONE_NAMESIZE];
> +	struct rte_mempool_list *mempool_list;
> +	struct rte_mempool *mp = NULL;
> +	struct rte_tailq_entry *te;
> +	const struct rte_memzone *mz;
> +	size_t mempool_size;
> +	int mz_flags = RTE_MEMZONE_1GB|RTE_MEMZONE_SIZE_HINT_ONLY;
> +	int rg_flags = 0;
> +	int16_t handler_idx;
> +
> +	mempool_list = RTE_TAILQ_CAST(rte_mempool_tailq.head, rte_mempool_list);
> +
> +	/* asked cache too big */
> +	if (cache_size > RTE_MEMPOOL_CACHE_MAX_SIZE ||
> +		CALC_CACHE_FLUSHTHRESH(cache_size) > n) {
> +		rte_errno = EINVAL;
> +		return NULL;
> +	}
> +
> +	handler_idx = rte_get_mempool_handler(handler_name);
> +	if (handler_idx < 0) {
> +		RTE_LOG(ERR, MEMPOOL, "Cannot find mempool handler by name!\n");
> +		goto exit;
> +	}
> +
> +	/* ring flags */
> +	if (flags & MEMPOOL_F_SP_PUT)
> +		rg_flags |= RING_F_SP_ENQ;
> +	if (flags & MEMPOOL_F_SC_GET)
> +		rg_flags |= RING_F_SC_DEQ;

rg_flags is not used anywhere below this point.

> +	rte_rwlock_write_lock(RTE_EAL_MEMPOOL_RWLOCK);
> +
> +	/*
> +	 * reserve a memory zone for this mempool: private data is
> +	 * cache-aligned
> +	 */
> +	private_data_size = RTE_ALIGN_CEIL(private_data_size,
> +		RTE_MEMPOOL_ALIGN);
> +
> +	/* try to allocate tailq entry */
> +	te = rte_zmalloc("MEMPOOL_TAILQ_ENTRY", sizeof(*te), 0);
> +	if (te == NULL) {
> +		RTE_LOG(ERR, MEMPOOL, "Cannot allocate tailq entry!\n");
> +		goto exit;
> +	}
> +
> +	/*
> +	 * If user provided an external memory buffer, then use it to
> +	 * store mempool objects. Otherwise reserve a memzone that is large
> +	 * enough to hold mempool header and metadata plus mempool objects.
> +	 */
> +	mempool_size = sizeof(*mp) + private_data_size;
> +	mempool_size = RTE_ALIGN_CEIL(mempool_size, RTE_MEMPOOL_ALIGN);
> +
> +	snprintf(mz_name, sizeof(mz_name), RTE_MEMPOOL_MZ_FORMAT, name);
> +
> +	mz = rte_memzone_reserve(mz_name, mempool_size, socket_id, mz_flags);
> +
> +	/* no more memory */
> +	if (mz == NULL) {
> +		rte_free(te);
> +		goto exit;
> +	}
> +
> +	/* init the mempool structure */
> +	mp = mz->addr;
> +	memset(mp, 0, sizeof(*mp));
> +	snprintf(mp->name, sizeof(mp->name), "%s", name);
> +	mp->phys_addr = mz->phys_addr;
> +	mp->size = n;
> +	mp->flags = flags;
> +	mp->cache_size = cache_size;
> +	mp->cache_flushthresh = CALC_CACHE_FLUSHTHRESH(cache_size);
> +	mp->private_data_size = private_data_size;
> +	mp->handler_idx = handler_idx;
> +	mp->elt_size = elt_size;
> +	mp->rt_pool = rte_mempool_ext_alloc(mp, name, n, socket_id, flags);

IMO, we can avoid duplicating the above setup code between this function
and rte_mempool_create, i.e. rte_mempool_create ->
rte_mempool_create_ext(.., "ring_mp_mc")
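i.e. something like the below (rough sketch only; the dom0 branch and the
xmem arguments would need the same treatment):

/* Rough sketch: make the legacy API a thin wrapper so the setup code
 * lives only in rte_mempool_create_ext(). The flags could still be
 * honoured by picking the matching handler name as discussed above.
 */
struct rte_mempool *
rte_mempool_create(const char *name, unsigned n, unsigned elt_size,
	unsigned cache_size, unsigned private_data_size,
	rte_mempool_ctor_t *mp_init, void *mp_init_arg,
	rte_mempool_obj_ctor_t *obj_init, void *obj_init_arg,
	int socket_id, unsigned flags)
{
	return rte_mempool_create_ext(name, n, elt_size,
		cache_size, private_data_size,
		mp_init, mp_init_arg,
		obj_init, obj_init_arg,
		socket_id, flags,
		"ring_mp_mc");
}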
> +
> +	/* call the initializer */
> +	if (mp_init)
> +		mp_init(mp, mp_init_arg);
> +
> +	mempool_populate(mp, n, 1, obj_init, obj_init_arg);
> +
> +	te->data = (void *) mp;
> +
> +	rte_rwlock_write_lock(RTE_EAL_TAILQ_RWLOCK);
> +	TAILQ_INSERT_TAIL(mempool_list, te, next);
> +	rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);
> +
> +exit:
> +	rte_rwlock_write_unlock(RTE_EAL_MEMPOOL_RWLOCK);
> +
> +	return mp;
> +
> +}
> diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
> index 6e2390a..620cfb7 100644
> --- a/lib/librte_mempool/rte_mempool.h
> +++ b/lib/librte_mempool/rte_mempool.h
> @@ -88,6 +88,8 @@ extern "C" {
>  struct rte_mempool_debug_stats {
>  	uint64_t put_bulk;         /**< Number of puts. */
>  	uint64_t put_objs;         /**< Number of objects successfully put. */
> +	uint64_t put_pool_bulk;    /**< Number of puts into pool. */
> +	uint64_t put_pool_objs;    /**< Number of objects into pool. */
>  	uint64_t get_success_bulk; /**< Successful allocation number. */
>  	uint64_t get_success_objs; /**< Objects successfully allocated. */
>  	uint64_t get_fail_bulk;    /**< Failed allocation number. */
> @@ -123,6 +125,7 @@ struct rte_mempool_objsz {
>  #define RTE_MEMPOOL_NAMESIZE 32 /**< Maximum length of a memory pool. */
>  #define RTE_MEMPOOL_MZ_PREFIX "MP_"
>
> +
>  /* "MP_" */
>  #define RTE_MEMPOOL_MZ_FORMAT	RTE_MEMPOOL_MZ_PREFIX "%s"
>
> @@ -175,12 +178,85 @@ struct rte_mempool_objtlr {
>  #endif
>  };
>
> +/* Handler functions for external mempool support */
> +typedef void *(*rte_mempool_alloc_t)(struct rte_mempool *mp,
> +	const char *name, unsigned n, int socket_id, unsigned flags);
> +typedef int (*rte_mempool_put_t)(void *p,
> +	void * const *obj_table, unsigned n);
> +typedef int (*rte_mempool_get_t)(void *p, void **obj_table,
> +	unsigned n);
> +typedef unsigned (*rte_mempool_get_count)(void *p);
> +typedef int (*rte_mempool_free_t)(struct rte_mempool *mp);
> +
> +/**
> + * @internal wrapper for external mempool manager alloc callback.
> + *
> + * @param mp
> + *   Pointer to the memory pool.
> + * @param name
> + *   Name of the memory pool.
> + * @param n
> + *   Number of elements in the memory pool.
> + * @param socket_id
> + *   socket id on which to allocate.
> + * @param flags
> + *   general flags to allocate function
> + */
> +void *
> +rte_mempool_ext_alloc(struct rte_mempool *mp,
> +	const char *name, unsigned n, int socket_id, unsigned flags);
> +
> +/**
> + * @internal wrapper for external mempool manager get callback.
> + *
> + * @param mp
> + *   Pointer to the memory pool.
> + * @param obj_table
> + *   Pointer to a table of void * pointers (objects).
> + * @param n
> + *   Number of objects to get
> + */
> +int
> +rte_mempool_ext_get_bulk(struct rte_mempool *mp, void **obj_table,
> +	unsigned n);
> +
> +/**
> + * @internal wrapper for external mempool manager put callback.
> + *
> + * @param mp
> + *   Pointer to the memory pool.
> + * @param obj_table
> + *   Pointer to a table of void * pointers (objects).
> + * @param n
> + *   Number of objects to put
> + */
> +int
> +rte_mempool_ext_put_bulk(struct rte_mempool *mp, void * const *obj_table,
> +	unsigned n);
> +
> +/**
> + * @internal wrapper for external mempool manager get_count callback.
> + *
> + * @param mp
> + *   Pointer to the memory pool.
> + */
> +int
> +rte_mempool_ext_get_count(const struct rte_mempool *mp);
> +
> +/**
> + * @internal wrapper for external mempool manager free callback.
> + *
> + * @param mp
> + *   Pointer to the memory pool.
> + */
> +int
> +rte_mempool_ext_free(struct rte_mempool *mp);
> +
>  /**
>   * The RTE mempool structure.
>   */
>  struct rte_mempool {
>  	char name[RTE_MEMPOOL_NAMESIZE]; /**< Name of mempool. */
> -	struct rte_ring *ring;           /**< Ring to store objects. */
>  	phys_addr_t phys_addr;           /**< Phys. addr. of mempool struct. */
>  	int flags;                       /**< Flags of the mempool. */
>  	uint32_t size;                   /**< Size of the mempool. */
> @@ -194,6 +270,11 @@ struct rte_mempool {
>
>  	unsigned private_data_size;      /**< Size of private data. */
>
> +	/* Common pool data structure pointer */
> +	void *rt_pool __rte_cache_aligned;

Do we need to push rt_pool onto the next cache line? Variables such as
"cache_size" are used in the fast path, and with this change the fast
path will occupy one more cache line.
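For example (hypothetical layout sketch, remaining members elided), the
new fields could sit next to the existing hot members instead of forcing
a fresh cache line:

/* Layout sketch: keep the new fast-path members packed with the
 * existing hot fields rather than starting a new cache line with
 * __rte_cache_aligned.
 */
struct rte_mempool {
	char name[RTE_MEMPOOL_NAMESIZE]; /**< Name of mempool. */
	phys_addr_t phys_addr;  /**< Phys. addr. of mempool struct. */
	int flags;              /**< Flags of the mempool. */
	uint32_t size;          /**< Size of the mempool. */
	uint32_t cache_size;    /**< Fast path: per-lcore cache size. */
	void *rt_pool;          /**< Fast path: common pool pointer. */
	int16_t handler_idx;    /**< Fast path: handler table index. */
	/* ... remaining members unchanged ... */
};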
> +
> +	int16_t handler_idx;
> +
>  #if RTE_MEMPOOL_CACHE_MAX_SIZE > 0
>  	/** Per-lcore local cache. */
>  	struct rte_mempool_cache local_cache[RTE_MAX_LCORE];
> @@ -223,6 +304,10 @@ struct rte_mempool {
>  #define MEMPOOL_F_NO_CACHE_ALIGN 0x0002 /**< Do not align objs on cache lines.*/
>  #define MEMPOOL_F_SP_PUT         0x0004 /**< Default put is "single-producer".*/
>  #define MEMPOOL_F_SC_GET         0x0008 /**< Default get is "single-consumer".*/
> +#define MEMPOOL_F_USE_STACK      0x0010 /**< Use a stack for the common pool. */
> +#define MEMPOOL_F_USE_TM         0x0020
> +#define MEMPOOL_F_NO_SECONDARY   0x0040
> +
>
>  /**
>   * @internal When debug is enabled, store some statistics.
> @@ -728,7 +813,6 @@ rte_dom0_mempool_create(const char *name, unsigned n, unsigned elt_size,
>  	rte_mempool_obj_ctor_t *obj_init, void *obj_init_arg,
>  	int socket_id, unsigned flags);
>
> -
>  /**
>   * Dump the status of the mempool to the console.
>   *
> @@ -753,7 +837,7 @@ void rte_mempool_dump(FILE *f, const struct rte_mempool *mp);
>   */
>  static inline void __attribute__((always_inline))
>  __mempool_put_bulk(struct rte_mempool *mp, void * const *obj_table,
> -		   unsigned n, int is_mp)
> +	unsigned n, __attribute__((unused)) int is_mp)
>  {
>  #if RTE_MEMPOOL_CACHE_MAX_SIZE > 0
>  	struct rte_mempool_cache *cache;
> @@ -769,8 +853,7 @@ __mempool_put_bulk(struct rte_mempool *mp, void * const *obj_table,
>
>  #if RTE_MEMPOOL_CACHE_MAX_SIZE > 0
>  	/* cache is not enabled or single producer or non-EAL thread */
> -	if (unlikely(cache_size == 0 || is_mp == 0 ||
> -		     lcore_id >= RTE_MAX_LCORE))
> +	if (unlikely(cache_size == 0 || lcore_id >= RTE_MAX_LCORE))
>  		goto ring_enqueue;
>
>  	/* Go straight to ring if put would overflow mem allocated for cache */
> @@ -793,8 +876,8 @@ __mempool_put_bulk(struct rte_mempool *mp, void * const *obj_table,
>
>  	cache->len += n;
>
> -	if (cache->len >= flushthresh) {
> -		rte_ring_mp_enqueue_bulk(mp->ring, &cache->objs[cache_size],
> +	if (unlikely(cache->len >= flushthresh)) {
> +		rte_mempool_ext_put_bulk(mp, &cache->objs[cache_size],
>  			cache->len - cache_size);
>  		cache->len = cache_size;
>  	}
> @@ -804,22 +887,10 @@ __mempool_put_bulk(struct rte_mempool *mp, void * const *obj_table,
>  ring_enqueue:
>  #endif /* RTE_MEMPOOL_CACHE_MAX_SIZE > 0 */
>
> -	/* push remaining objects in ring */
> -#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
> -	if (is_mp) {
> -		if (rte_ring_mp_enqueue_bulk(mp->ring, obj_table, n) < 0)
> -			rte_panic("cannot put objects in mempool\n");
> -	}
> -	else {
> -		if (rte_ring_sp_enqueue_bulk(mp->ring, obj_table, n) < 0)
> -			rte_panic("cannot put objects in mempool\n");
> -	}
> -#else
> -	if (is_mp)
> -		rte_ring_mp_enqueue_bulk(mp->ring, obj_table, n);
> -	else
> -		rte_ring_sp_enqueue_bulk(mp->ring, obj_table, n);
> -#endif
> +	/* Increment stats counter to tell us how many pool puts happened */
> +	__MEMPOOL_STAT_ADD(mp, put_pool, n);
> +
> +	rte_mempool_ext_put_bulk(mp, obj_table, n);
>  }
>
>
> @@ -943,7 +1014,7 @@ rte_mempool_put(struct rte_mempool *mp, void *obj)
>   */
>  static inline int __attribute__((always_inline))
>  __mempool_get_bulk(struct rte_mempool *mp, void **obj_table,
> -		   unsigned n, int is_mc)
> +	unsigned n, __attribute__((unused))int is_mc)
>  {
>  	int ret;
>  #if RTE_MEMPOOL_CACHE_MAX_SIZE > 0
> @@ -954,8 +1025,8 @@ __mempool_get_bulk(struct rte_mempool *mp, void **obj_table,
>  	uint32_t cache_size = mp->cache_size;
>
>  	/* cache is not enabled or single consumer */
> -	if (unlikely(cache_size == 0 || is_mc == 0 ||
> -		     n >= cache_size || lcore_id >= RTE_MAX_LCORE))
> +	if (unlikely(cache_size == 0 || n >= cache_size ||
> +		lcore_id >= RTE_MAX_LCORE))
>  		goto ring_dequeue;
>
>  	cache = &mp->local_cache[lcore_id];
> @@ -967,7 +1038,8 @@ __mempool_get_bulk(struct rte_mempool *mp, void **obj_table,
>  		uint32_t req = n + (cache_size - cache->len);
>
>  		/* How many do we require i.e. number to fill the cache + the request */
> -		ret = rte_ring_mc_dequeue_bulk(mp->ring, &cache->objs[cache->len], req);
> +		ret = rte_mempool_ext_get_bulk(mp,
> +			&cache->objs[cache->len], req);
>  		if (unlikely(ret < 0)) {
>  			/*
>  			 * In the offchance that we are buffer constrained,
> @@ -995,10 +1067,7 @@ ring_dequeue:
>  #endif /* RTE_MEMPOOL_CACHE_MAX_SIZE > 0 */
>
>  	/* get remaining objects from ring */
> -	if (is_mc)
> -		ret = rte_ring_mc_dequeue_bulk(mp->ring, obj_table, n);
> -	else
> -		ret = rte_ring_sc_dequeue_bulk(mp->ring, obj_table, n);
> +	ret = rte_mempool_ext_get_bulk(mp, obj_table, n);
>
>  	if (ret < 0)
>  		__MEMPOOL_STAT_ADD(mp, get_fail, n);
> @@ -1401,6 +1470,82 @@ ssize_t rte_mempool_xmem_usage(void *vaddr, uint32_t elt_num, size_t elt_sz,
>  void rte_mempool_walk(void (*func)(const struct rte_mempool *, void *arg),
>  		      void *arg);
>
> +/**
> + * Function to get an index to an external mempool manager
> + *
> + * @param name
> + *   The name of the mempool handler to search for in the list of handlers
> + * @return
> + *   The index of the mempool handler in the list of registered mempool
> + *   handlers
> + */
> +int16_t
> +rte_get_mempool_handler(const char *name);
> +
> +
> +/**
> + * Create a new mempool named *name* in memory.
> + *
> + * This function uses an externally defined alloc callback to allocate memory.
> + * Its size is set to n elements.
> + * All elements of the mempool are allocated separately to the mempool header.
> + *
> + * @param name
> + *   The name of the mempool.
> + * @param n
> + *   The number of elements in the mempool. The optimum size (in terms of
> + *   memory usage) for a mempool is when n is a power of two minus one:
> + *   n = (2^q - 1).
> + * @param cache_size
> + *   If cache_size is non-zero, the rte_mempool library will try to
> + *   limit the accesses to the common lockless pool, by maintaining a
> + *   per-lcore object cache. This argument must be lower or equal to
> + *   CONFIG_RTE_MEMPOOL_CACHE_MAX_SIZE and n / 1.5. It is advised to choose
> + *   cache_size to have "n modulo cache_size == 0": if this is
> + *   not the case, some elements will always stay in the pool and will
> + *   never be used. The access to the per-lcore table is of course
> + *   faster than the multi-producer/consumer pool. The cache can be
> + *   disabled if the cache_size argument is set to 0; it can be useful to
> + *   avoid losing objects in cache. Note that even if not used, the
> + *   memory space for cache is always reserved in a mempool structure,
> + *   except if CONFIG_RTE_MEMPOOL_CACHE_MAX_SIZE is set to 0.
> + * @param private_data_size
> + *   The size of the private data appended after the mempool
> + *   structure. This is useful for storing some private data after the
> + *   mempool structure, as is done for rte_mbuf_pool for example.
> + * @param mp_init
> + *   A function pointer that is called for initialization of the pool,
> + *   before object initialization. The user can initialize the private
> + *   data in this function if needed. This parameter can be NULL if
> + *   not needed.
> + * @param mp_init_arg
> + *   An opaque pointer to data that can be used in the mempool
> + *   constructor function.
> + * @param obj_init
> + *   A function pointer that is called for each object at
> + *   initialization of the pool. The user can set some meta data in
> + *   objects if needed. This parameter can be NULL if not needed.
> + *   The obj_init() function takes the mempool pointer, the init_arg,
> + *   the object pointer and the object number as parameters.
> + * @param obj_init_arg
> + *   An opaque pointer to data that can be used as an argument for
> + *   each call to the object constructor function.
> + * @param socket_id
> + *   The *socket_id* argument is the socket identifier in the case of
> + *   NUMA. The value can be *SOCKET_ID_ANY* if there is no NUMA
> + *   constraint for the reserved zone.
> + * @param flags
> + * @return
> + *   The pointer to the new allocated mempool, on success. NULL on error
> + */
> +struct rte_mempool *
> +rte_mempool_create_ext(const char *name, unsigned n, unsigned elt_size,
> +	unsigned cache_size, unsigned private_data_size,
> +	rte_mempool_ctor_t *mp_init, void *mp_init_arg,
> +	rte_mempool_obj_ctor_t *obj_init, void *obj_init_arg,
> +	int socket_id, unsigned flags,
> +	const char *handler_name);
> +
>  #ifdef __cplusplus
>  }
>  #endif
> diff --git a/lib/librte_mempool/rte_mempool_default.c b/lib/librte_mempool/rte_mempool_default.c
> new file mode 100644
> index 0000000..2493dc1
> --- /dev/null
> +++ b/lib/librte_mempool/rte_mempool_default.c
> @@ -0,0 +1,229 @@
> +/*-
> + *   BSD LICENSE
> + *
> + *   Copyright(c) 2010-2016 Intel Corporation. All rights reserved.
> + *   All rights reserved.
> + *
> + *   Redistribution and use in source and binary forms, with or without
> + *   modification, are permitted provided that the following conditions
> + *   are met:
> + *
> + *     * Redistributions of source code must retain the above copyright
> + *       notice, this list of conditions and the following disclaimer.
> + *     * Redistributions in binary form must reproduce the above copyright
> + *       notice, this list of conditions and the following disclaimer in
> + *       the documentation and/or other materials provided with the
> + *       distribution.
> + *     * Neither the name of Intel Corporation nor the names of its
> + *       contributors may be used to endorse or promote products derived
> + *       from this software without specific prior written permission.
> + *
> + *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
> + *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
> + *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
> + *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
> + *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
> + *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
> + *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
> + *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
> + *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
> + *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
> + *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
> + */
> +
> +#include
> +#include
> +#include
> +#include
> +
> +#include "rte_mempool_internal.h"
> +
> +/*
> + * Indirect jump table to support external memory pools
> + */
> +struct rte_mempool_handler_list mempool_handler_list = {
> +	.sl = RTE_SPINLOCK_INITIALIZER,
> +	.num_handlers = 0
> +};
> +
> +/* TODO Convert to older mechanism of an array of structs */
> +int16_t
> +add_handler(struct rte_mempool_handler *h)
> +{
> +	int16_t handler_idx;
> +
> +	/* */
> +	rte_spinlock_lock(&mempool_handler_list.sl);
> +
> +	/* Check whether jump table has space */
> +	if (mempool_handler_list.num_handlers >= RTE_MEMPOOL_MAX_HANDLER_IDX) {
> +		rte_spinlock_unlock(&mempool_handler_list.sl);
> +		RTE_LOG(ERR, MEMPOOL,
> +			"Maximum number of mempool handlers exceeded\n");
> +		return -1;
> +	}
> +
> +	if ((h->put == NULL) || (h->get == NULL) ||
> +		(h->get_count == NULL)) {
> +		rte_spinlock_unlock(&mempool_handler_list.sl);
> +		RTE_LOG(ERR, MEMPOOL,
> +			"Missing callback while registering mempool handler\n");
> +		return -1;
> +	}
> +
> +	/* add new handler index */
> +	handler_idx = mempool_handler_list.num_handlers++;
> +
> +	snprintf(mempool_handler_list.handler[handler_idx].name,
> +		RTE_MEMPOOL_NAMESIZE, "%s", h->name);
> +	mempool_handler_list.handler[handler_idx].alloc = h->alloc;
> +	mempool_handler_list.handler[handler_idx].put = h->put;
> +	mempool_handler_list.handler[handler_idx].get = h->get;
> +	mempool_handler_list.handler[handler_idx].get_count = h->get_count;
> +
> +	rte_spinlock_unlock(&mempool_handler_list.sl);
> +
> +	return handler_idx;
> +}
> +
> +/* TODO Convert to older mechanism of an array of structs */
> +int16_t
> +rte_get_mempool_handler(const char *name)
> +{
> +	int16_t i;
> +
> +	for (i = 0; i < mempool_handler_list.num_handlers; i++) {
> +		if (!strcmp(name, mempool_handler_list.handler[i].name))
> +			return i;
> +	}
> +	return -1;
> +}
> +
> +static int
> +common_ring_mp_put(void *p, void * const *obj_table, unsigned n)
> +{
> +	return rte_ring_mp_enqueue_bulk((struct rte_ring *)p, obj_table, n);
> +}
> +
> +static int
> +common_ring_sp_put(void *p, void * const *obj_table, unsigned n)
> +{
> +	return rte_ring_sp_enqueue_bulk((struct rte_ring *)p, obj_table, n);
> +}
> +
> +static int
> +common_ring_mc_get(void *p, void **obj_table, unsigned n)
> +{
> +	return rte_ring_mc_dequeue_bulk((struct rte_ring *)p, obj_table, n);
> +}
> +
> +static int
> +common_ring_sc_get(void *p, void **obj_table, unsigned n)
> +{
> +	return rte_ring_sc_dequeue_bulk((struct rte_ring *)p, obj_table, n);
> +}
> +
> +static unsigned
> +common_ring_get_count(void *p)
> +{
> +	return rte_ring_count((struct rte_ring *)p);
> +}
> +
> +
> +static void *
> +rte_mempool_common_ring_alloc(struct rte_mempool *mp,
> +	const char *name, unsigned n, int socket_id, unsigned flags)
> +{
> +	struct rte_ring *r;
> +	char rg_name[RTE_RING_NAMESIZE];
> +	int rg_flags = 0;
> +
> +	if (flags & MEMPOOL_F_SP_PUT)
> +		rg_flags |= RING_F_SP_ENQ;
> +	if (flags & MEMPOOL_F_SC_GET)
> +		rg_flags |= RING_F_SC_DEQ;
> +
> +	/* allocate the ring that will be used to store objects */
> +	/* Ring functions will return appropriate errors if we are
> +	 * running as a secondary process etc., so no checks made
> +	 * in this function for that condition */
> +	snprintf(rg_name, sizeof(rg_name), "%s-ring", name);
> +	r = rte_ring_create(rg_name, rte_align32pow2(n+1), socket_id, rg_flags);
> +	if (r == NULL)
> +		return NULL;
> +
> +	mp->rt_pool = (void *)r;
> +
> +	return (void *) r;
> +}
> +
> +void *
> +rte_mempool_ext_alloc(struct rte_mempool *mp,
> +	const char *name, unsigned n, int socket_id, unsigned flags)
> +{
> +	if (mempool_handler_list.handler[mp->handler_idx].alloc) {
> +		return (mempool_handler_list.handler[mp->handler_idx].alloc)
> +			(mp, name, n, socket_id, flags);
> +	}
> +	return NULL;
> +}
> +
> +inline int __attribute__((always_inline))
> +rte_mempool_ext_get_bulk(struct rte_mempool *mp, void **obj_table, unsigned n)
> +{
> +	return (mempool_handler_list.handler[mp->handler_idx].get)
> +		(mp->rt_pool, obj_table, n);
> +}
> +
> +inline int __attribute__((always_inline))
> +rte_mempool_ext_put_bulk(struct rte_mempool *mp, void * const *obj_table,
> +	unsigned n)
> +{
> +	return (mempool_handler_list.handler[mp->handler_idx].put)
> +		(mp->rt_pool, obj_table, n);
> +}
> +
> +int
> +rte_mempool_ext_get_count(const struct rte_mempool *mp)
> +{
> +	return (mempool_handler_list.handler[mp->handler_idx].get_count)
> +		(mp->rt_pool);
> +}
> +
> +static struct rte_mempool_handler handler_mp_mc = {
> +	.name = "ring_mp_mc",
> +	.alloc = rte_mempool_common_ring_alloc,
> +	.put = common_ring_mp_put,
> +	.get = common_ring_mc_get,
> +	.get_count = common_ring_get_count,
> +	.free = NULL
> +};
> +static struct rte_mempool_handler handler_sp_sc = {
> +	.name = "ring_sp_sc",
> +	.alloc = rte_mempool_common_ring_alloc,
> +	.put = common_ring_sp_put,
> +	.get = common_ring_sc_get,
> +	.get_count = common_ring_get_count,
> +	.free = NULL
> +};
> +static struct rte_mempool_handler handler_mp_sc = {
> +	.name = "ring_mp_sc",
> +	.alloc = rte_mempool_common_ring_alloc,
> +	.put = common_ring_mp_put,
> +	.get = common_ring_sc_get,
> +	.get_count = common_ring_get_count,
> +	.free = NULL
> +};
> +static struct rte_mempool_handler handler_sp_mc = {
> +	.name = "ring_sp_mc",
> +	.alloc = rte_mempool_common_ring_alloc,
> +	.put = common_ring_sp_put,
> +	.get = common_ring_mc_get,
> +	.get_count = common_ring_get_count,
> +	.free = NULL
> +};
> +
> +REGISTER_MEMPOOL_HANDLER(handler_mp_mc);
> +REGISTER_MEMPOOL_HANDLER(handler_sp_sc);
> +REGISTER_MEMPOOL_HANDLER(handler_mp_sc);
> +REGISTER_MEMPOOL_HANDLER(handler_sp_mc);
> diff --git a/lib/librte_mempool/rte_mempool_internal.h b/lib/librte_mempool/rte_mempool_internal.h
> new file mode 100644
> index 0000000..92b7bde
> --- /dev/null
> +++ b/lib/librte_mempool/rte_mempool_internal.h
> @@ -0,0 +1,74 @@
> +/*-
> + *   BSD LICENSE
> + *
> + *   Copyright(c) 2010-2016 Intel Corporation. All rights reserved.
> + *   All rights reserved.
> + *
> + *   Redistribution and use in source and binary forms, with or without
> + *   modification, are permitted provided that the following conditions
> + *   are met:
> + *
> + *     * Redistributions of source code must retain the above copyright
> + *       notice, this list of conditions and the following disclaimer.
> + *     * Redistributions in binary form must reproduce the above copyright
> + *       notice, this list of conditions and the following disclaimer in
> + *       the documentation and/or other materials provided with the
> + *       distribution.
> + *     * Neither the name of Intel Corporation nor the names of its
> + *       contributors may be used to endorse or promote products derived
> + *       from this software without specific prior written permission.
> + *
> + *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
> + *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
> + *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
> + *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
> + *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
> + *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
> + *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
> + *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
> + *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
> + *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
> + *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
> + */
> +
> +#ifndef _RTE_MEMPOOL_INTERNAL_H_
> +#define _RTE_MEMPOOL_INTERNAL_H_
> +
> +#include
> +#include
> +
> +#define RTE_MEMPOOL_MAX_HANDLER_IDX 16
> +
> +struct rte_mempool_handler {
> +	char name[RTE_MEMPOOL_NAMESIZE]; /**< Name of mempool handler */
> +
> +	rte_mempool_alloc_t alloc;
> +
> +	rte_mempool_put_t put __rte_cache_aligned;
> +
> +	rte_mempool_get_t get __rte_cache_aligned;
> +
> +	rte_mempool_get_count get_count __rte_cache_aligned;
> +
> +	rte_mempool_free_t free __rte_cache_aligned;
> +};

IMO, the structure should be cache-aligned, not the individual elements,
as the elements are likely read-only in the fast path.
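i.e. something like this (sketch only); all the callbacks are only read
in the fast path, so they can share one cache line instead of each
being padded out to its own:

/* Sketch: align the whole structure once rather than every member. */
struct rte_mempool_handler {
	char name[RTE_MEMPOOL_NAMESIZE]; /**< Name of mempool handler */
	rte_mempool_alloc_t alloc;
	rte_mempool_put_t put;
	rte_mempool_get_t get;
	rte_mempool_get_count get_count;
	rte_mempool_free_t free;
} __rte_cache_aligned;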
> +
> +struct rte_mempool_handler_list {
> +	rte_spinlock_t sl;    /**< Spinlock for add/delete. */
> +
> +	int32_t num_handlers; /**< Number of handlers that are valid. */
> +
> +	/* storage for all possible handlers */
> +	struct rte_mempool_handler handler[RTE_MEMPOOL_MAX_HANDLER_IDX];
> +};
> +
> +int16_t add_handler(struct rte_mempool_handler *h);
> +
> +#define REGISTER_MEMPOOL_HANDLER(h) \
> +static int16_t __attribute__((used)) testfn_##h(void);\
> +int16_t __attribute__((constructor, used)) testfn_##h(void)\
> +{\
> +	return add_handler(&h);\
> +}
> +
> +#endif
> --
> 1.9.3
>