From mboxrd@z Thu Jan 1 00:00:00 1970
From: "Gavin Hu (Arm Technology China)"
To: Gage Eads, "dev@dpdk.org"
CC: "olivier.matz@6wind.com", "arybchenko@solarflare.com", "bruce.richardson@intel.com", "konstantin.ananyev@intel.com", Honnappa Nagarahalli, "Ruifeng Wang (Arm Technology China)", "Phil Yang (Arm Technology China)"
Thread-Topic: [dpdk-dev] [PATCH v2 2/2] mempool/nb_stack: add non-blocking stack mempool
Date: Thu, 17 Jan 2019 08:06:11 +0000
References: <20190110205538.24435-1-gage.eads@intel.com> <20190115223232.31866-1-gage.eads@intel.com> <20190115223232.31866-3-gage.eads@intel.com>
In-Reply-To: <20190115223232.31866-3-gage.eads@intel.com>
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
X-OriginatorOrg: arm.com
Subject: Re: [dpdk-dev] [PATCH v2 2/2] mempool/nb_stack: add non-blocking stack mempool
List-Id: DPDK patches and discussions
X-List-Received-Date: Thu, 17 Jan 2019 08:06:13 -0000

> -----Original Message-----
> From: dev On Behalf Of Gage Eads
> Sent: Wednesday, January 16, 2019 6:33 AM
> To: dev@dpdk.org
> Cc: olivier.matz@6wind.com; arybchenko@solarflare.com;
> bruce.richardson@intel.com; konstantin.ananyev@intel.com
> Subject: [dpdk-dev] [PATCH v2 2/2] mempool/nb_stack: add non-blocking
> stack mempool
>
> This commit adds support for a non-blocking (linked-list based) stack
> mempool handler. The stack uses a 128-bit compare-and-swap instruction,
> and thus is limited to x86_64. The 128-bit CAS atomically updates the
> stack top pointer and a modification counter, which protects against
> the ABA problem.
>
> In mempool_perf_autotest the lock-based stack outperforms the
> non-blocking handler*, however:
> - For applications with preemptible pthreads, a lock-based stack's
>   worst-case performance (i.e. one thread being preempted while
>   holding the spinlock) is much worse than the non-blocking stack's.
> - Using per-thread mempool caches will largely mitigate the performance
>   difference.
>
> *Test setup: x86_64 build with default config, dual-socket Xeon
> E5-2699 v4, running on isolcpus cores with a tickless scheduler. The
> lock-based stack's rate_persec was 1x-3.5x the non-blocking stack's.
>
> Signed-off-by: Gage Eads
> ---
>  MAINTAINERS | 4 +
>  config/common_base | 1 +
>  doc/guides/prog_guide/env_abstraction_layer.rst | 5 +
>  drivers/mempool/Makefile | 3 +
>  drivers/mempool/meson.build | 5 +
>  drivers/mempool/nb_stack/Makefile | 23 ++++
>  drivers/mempool/nb_stack/meson.build | 4 +
>  drivers/mempool/nb_stack/nb_lifo.h | 147 +++++++++++++++++++++
>  drivers/mempool/nb_stack/rte_mempool_nb_stack.c | 125 ++++++++++++++++++
>  .../nb_stack/rte_mempool_nb_stack_version.map | 4 +
>  mk/rte.app.mk | 7 +-
>  11 files changed, 326 insertions(+), 2 deletions(-)
>  create mode 100644 drivers/mempool/nb_stack/Makefile
>  create mode 100644 drivers/mempool/nb_stack/meson.build
>  create mode 100644 drivers/mempool/nb_stack/nb_lifo.h
>  create mode 100644 drivers/mempool/nb_stack/rte_mempool_nb_stack.c
>  create mode 100644 drivers/mempool/nb_stack/rte_mempool_nb_stack_version.map
>
> diff --git a/MAINTAINERS b/MAINTAINERS
> index 470f36b9c..5519d3323 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -416,6 +416,10 @@ M: Artem V. Andreev
>  M: Andrew Rybchenko
>  F: drivers/mempool/bucket/
>
> +Non-blocking stack memory pool
> +M: Gage Eads
> +F: drivers/mempool/nb_stack/
> +
>
>  Bus Drivers
>  -----------
> diff --git a/config/common_base b/config/common_base
> index 964a6956e..8a51f36b1 100644
> --- a/config/common_base
> +++ b/config/common_base
> @@ -726,6 +726,7 @@ CONFIG_RTE_LIBRTE_MEMPOOL_DEBUG=n
>  #
>  CONFIG_RTE_DRIVER_MEMPOOL_BUCKET=y
>  CONFIG_RTE_DRIVER_MEMPOOL_BUCKET_SIZE_KB=64
> +CONFIG_RTE_DRIVER_MEMPOOL_NB_STACK=y

NAK: as this applies to x86_64 only, it will break arm/ppc and even 32-bit i386 configurations.
>  CONFIG_RTE_DRIVER_MEMPOOL_RING=y
>  CONFIG_RTE_DRIVER_MEMPOOL_STACK=y
>
> diff --git a/doc/guides/prog_guide/env_abstraction_layer.rst b/doc/guides/prog_guide/env_abstraction_layer.rst
> index 929d76dba..9497b879c 100644
> --- a/doc/guides/prog_guide/env_abstraction_layer.rst
> +++ b/doc/guides/prog_guide/env_abstraction_layer.rst
> @@ -541,6 +541,11 @@ Known Issues
>
>    5. It MUST not be used by multi-producer/consumer pthreads, whose
>    scheduling policies are SCHED_FIFO or SCHED_RR.
>
> +  Alternatively, x86_64 applications can use the non-blocking stack
> +  mempool handler. When considering this handler, note that:
> +
> +  - it is limited to the x86_64 platform, because it uses an instruction
> +    (16-byte compare-and-swap) that is not available on other platforms.
> +  - it has worse average-case performance than the non-preemptive
> +    rte_ring, but software caching (e.g. the mempool cache) can mitigate
> +    this by reducing the number of handler operations.
> +
>  + rte_timer
>
>    Running ``rte_timer_manage()`` on a non-EAL pthread is not allowed.
>    However, resetting/stopping the timer from a non-EAL pthread is allowed.
> diff --git a/drivers/mempool/Makefile b/drivers/mempool/Makefile
> index 28c2e8360..895cf8a34 100644
> --- a/drivers/mempool/Makefile
> +++ b/drivers/mempool/Makefile
> @@ -10,6 +10,9 @@ endif
>  ifeq ($(CONFIG_RTE_EAL_VFIO)$(CONFIG_RTE_LIBRTE_FSLMC_BUS),yy)
>  DIRS-$(CONFIG_RTE_LIBRTE_DPAA2_MEMPOOL) += dpaa2
>  endif
> +ifeq ($(CONFIG_RTE_ARCH_X86_64),y)
> +DIRS-$(CONFIG_RTE_DRIVER_MEMPOOL_NB_STACK) += nb_stack
> +endif
>  DIRS-$(CONFIG_RTE_DRIVER_MEMPOOL_RING) += ring
>  DIRS-$(CONFIG_RTE_DRIVER_MEMPOOL_STACK) += stack
>  DIRS-$(CONFIG_RTE_LIBRTE_OCTEONTX_MEMPOOL) += octeontx
> diff --git a/drivers/mempool/meson.build b/drivers/mempool/meson.build
> index 4527d9806..01ee30fee 100644
> --- a/drivers/mempool/meson.build
> +++ b/drivers/mempool/meson.build
> @@ -2,6 +2,11 @@
>  # Copyright(c) 2017 Intel Corporation
>
>  drivers = ['bucket', 'dpaa', 'dpaa2', 'octeontx', 'ring', 'stack']
> +
> +if dpdk_conf.has('RTE_ARCH_X86_64')
> +	drivers += 'nb_stack'
> +endif
> +
>  std_deps = ['mempool']
>  config_flag_fmt = 'RTE_LIBRTE_@0@_MEMPOOL'
>  driver_name_fmt = 'rte_mempool_@0@'
> diff --git a/drivers/mempool/nb_stack/Makefile b/drivers/mempool/nb_stack/Makefile
> new file mode 100644
> index 000000000..318b18283
> --- /dev/null
> +++ b/drivers/mempool/nb_stack/Makefile
> @@ -0,0 +1,23 @@
> +# SPDX-License-Identifier: BSD-3-Clause
> +# Copyright(c) 2019 Intel Corporation
> +
> +include $(RTE_SDK)/mk/rte.vars.mk
> +
> +#
> +# library name
> +#
> +LIB = librte_mempool_nb_stack.a
> +
> +CFLAGS += -O3
> +CFLAGS += $(WERROR_FLAGS)
> +
> +# Headers
> +LDLIBS += -lrte_eal -lrte_mempool
> +
> +EXPORT_MAP := rte_mempool_nb_stack_version.map
> +
> +LIBABIVER := 1
> +
> +SRCS-$(CONFIG_RTE_DRIVER_MEMPOOL_NB_STACK) += rte_mempool_nb_stack.c
> +
> +include $(RTE_SDK)/mk/rte.lib.mk
> diff --git a/drivers/mempool/nb_stack/meson.build b/drivers/mempool/nb_stack/meson.build
> new file mode 100644
> index 000000000..66d64a9ba
> --- /dev/null
> +++ b/drivers/mempool/nb_stack/meson.build
> @@ -0,0 +1,4 @@
> +# SPDX-License-Identifier: BSD-3-Clause
> +# Copyright(c) 2019 Intel Corporation
> +
> +sources = files('rte_mempool_nb_stack.c')
> diff --git a/drivers/mempool/nb_stack/nb_lifo.h b/drivers/mempool/nb_stack/nb_lifo.h
> new file mode 100644
> index 000000000..2edae1c0f
> --- /dev/null
> +++ b/drivers/mempool/nb_stack/nb_lifo.h
> @@ -0,0 +1,147 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2019 Intel Corporation
> + */
> +
> +#ifndef _NB_LIFO_H_
> +#define _NB_LIFO_H_
> +
> +struct nb_lifo_elem {
> +	void *data;
> +	struct nb_lifo_elem *next;
> +};
> +
> +struct nb_lifo_head {
> +	struct nb_lifo_elem *top; /**< Stack top */
> +	uint64_t cnt; /**< Modification counter */
> +};
> +
> +struct nb_lifo {
> +	volatile struct nb_lifo_head head __rte_aligned(16);
> +	rte_atomic64_t len;
> +} __rte_cache_aligned;
> +
> +static __rte_always_inline void
> +nb_lifo_init(struct nb_lifo *lifo)
> +{
> +	memset(lifo, 0, sizeof(*lifo));
> +	rte_atomic64_set(&lifo->len, 0);
> +}
> +
> +static __rte_always_inline unsigned int
> +nb_lifo_len(struct nb_lifo *lifo)
> +{
> +	/* nb_lifo_push() and nb_lifo_pop() do not update the list's contents
> +	 * and lifo->len atomically, which can cause the list to appear shorter
> +	 * than it actually is if this function is called while other threads
> +	 * are modifying the list.
> +	 *
> +	 * However, given the inherently approximate nature of the get_count
> +	 * callback -- even if the list and its size were updated atomically,
> +	 * the size could change between when get_count executes and when the
> +	 * value is returned to the caller -- this is acceptable.
> +	 *
> +	 * The lifo->len updates are placed such that the list may appear to
> +	 * have fewer elements than it does, but will never appear to have more
> +	 * elements. If the mempool is near-empty to the point that this is a
> +	 * concern, the user should consider increasing the mempool size.
> +	 */
> +	return (unsigned int)rte_atomic64_read(&lifo->len);
> +}
> +
> +static __rte_always_inline void
> +nb_lifo_push(struct nb_lifo *lifo,
> +	     struct nb_lifo_elem *first,
> +	     struct nb_lifo_elem *last,
> +	     unsigned int num)
> +{
> +	while (1) {
> +		struct nb_lifo_head old_head, new_head;
> +
> +		old_head = lifo->head;
> +
> +		/* Swing the top pointer to the first element in the list and
> +		 * make the last element point to the old top.
> +		 */
> +		new_head.top = first;
> +		new_head.cnt = old_head.cnt + 1;
> +
> +		last->next = old_head.top;
> +
> +		if (rte_atomic128_cmpset((volatile uint64_t *)&lifo->head,
> +					 (uint64_t *)&old_head,
> +					 (uint64_t *)&new_head))
> +			break;
> +	}
> +
> +	rte_atomic64_add(&lifo->len, num);
> +}
> +
> +static __rte_always_inline void
> +nb_lifo_push_single(struct nb_lifo *lifo, struct nb_lifo_elem *elem)
> +{
> +	nb_lifo_push(lifo, elem, elem, 1);
> +}
> +
> +static __rte_always_inline struct nb_lifo_elem *
> +nb_lifo_pop(struct nb_lifo *lifo,
> +	    unsigned int num,
> +	    void **obj_table,
> +	    struct nb_lifo_elem **last)
> +{
> +	struct nb_lifo_head old_head;
> +
> +	/* Reserve num elements, if available */
> +	while (1) {
> +		uint64_t len = rte_atomic64_read(&lifo->len);
> +
> +		/* Does the list contain enough elements? */
> +		if (len < num)
> +			return NULL;
> +
> +		if (rte_atomic64_cmpset((volatile uint64_t *)&lifo->len,
> +					len, len - num))
> +			break;
> +	}
> +
> +	/* Pop num elements */
> +	while (1) {
> +		struct nb_lifo_head new_head;
> +		struct nb_lifo_elem *tmp;
> +		unsigned int i;
> +
> +		old_head = lifo->head;
> +
> +		tmp = old_head.top;
> +
> +		/* Traverse the list to find the new head. A next pointer will
> +		 * either point to another element or NULL; if a thread
> +		 * encounters a pointer that has already been popped, the CAS
> +		 * will fail.
> +		 */
> +		for (i = 0; i < num && tmp != NULL; i++) {
> +			if (obj_table)
> +				obj_table[i] = tmp->data;
> +			if (last)
> +				*last = tmp;
> +			tmp = tmp->next;
> +		}
> +
> +		/* If NULL was encountered, the list was modified while
> +		 * traversing it. Retry.
> +		 */
> +		if (i != num)
> +			continue;
> +
> +		new_head.top = tmp;
> +		new_head.cnt = old_head.cnt + 1;
> +
> +		if (rte_atomic128_cmpset((volatile uint64_t *)&lifo->head,
> +					 (uint64_t *)&old_head,
> +					 (uint64_t *)&new_head))
> +			break;
> +	}
> +
> +	return old_head.top;
> +}
> +
> +#endif /* _NB_LIFO_H_ */
> diff --git a/drivers/mempool/nb_stack/rte_mempool_nb_stack.c b/drivers/mempool/nb_stack/rte_mempool_nb_stack.c
> new file mode 100644
> index 000000000..1818a2cfa
> --- /dev/null
> +++ b/drivers/mempool/nb_stack/rte_mempool_nb_stack.c
> @@ -0,0 +1,125 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2019 Intel Corporation
> + */
> +
> +#include
> +#include
> +#include
> +
> +#include "nb_lifo.h"
> +
> +struct rte_mempool_nb_stack {
> +	uint64_t size;
> +	struct nb_lifo used_lifo; /**< LIFO containing mempool pointers */
> +	struct nb_lifo free_lifo; /**< LIFO containing unused LIFO elements */
> +};
> +
> +static int
> +nb_stack_alloc(struct rte_mempool *mp)
> +{
> +	struct rte_mempool_nb_stack *s;
> +	struct nb_lifo_elem *elems;
> +	unsigned int n = mp->size;
> +	unsigned int size, i;
> +
> +	size = sizeof(*s) + n * sizeof(struct nb_lifo_elem);
> +
> +	/* Allocate our local memory structure */
> +	s = rte_zmalloc_socket("mempool-nb_stack",
> +			       size,
> +			       RTE_CACHE_LINE_SIZE,
> +			       mp->socket_id);
> +	if (s == NULL) {
> +		RTE_LOG(ERR, MEMPOOL, "Cannot allocate nb_stack!\n");
> +		return -ENOMEM;
> +	}
> +
> +	s->size = n;
> +
> +	nb_lifo_init(&s->used_lifo);
> +	nb_lifo_init(&s->free_lifo);
> +
> +	elems = (struct nb_lifo_elem *)&s[1];
> +	for (i = 0; i < n; i++)
> +		nb_lifo_push_single(&s->free_lifo, &elems[i]);
> +
> +	mp->pool_data = s;
> +
> +	return 0;
> +}
> +
> +static int
> +nb_stack_enqueue(struct rte_mempool *mp, void * const *obj_table,
> +		 unsigned int n)
> +{
> +	struct rte_mempool_nb_stack *s = mp->pool_data;
> +	struct nb_lifo_elem *first, *last, *tmp;
> +	unsigned int i;
> +
> +	if (unlikely(n == 0))
> +		return 0;
> +
> +	/* Pop n free elements */
> +	first = nb_lifo_pop(&s->free_lifo, n, NULL, NULL);
> +	if (unlikely(first == NULL))
> +		return -ENOBUFS;
> +
> +	/* Prepare the list elements */
> +	tmp = first;
> +	for (i = 0; i < n; i++) {
> +		tmp->data = obj_table[i];
> +		last = tmp;
> +		tmp = tmp->next;
> +	}
> +
> +	/* Enqueue them to the used list */
> +	nb_lifo_push(&s->used_lifo, first, last, n);
> +
> +	return 0;
> +}
> +
> +static int
> +nb_stack_dequeue(struct rte_mempool *mp, void **obj_table,
> +		 unsigned int n)
> +{
> +	struct rte_mempool_nb_stack *s = mp->pool_data;
> +	struct nb_lifo_elem *first, *last;
> +
> +	if (unlikely(n == 0))
> +		return 0;
> +
> +	/* Pop n used elements */
> +	first = nb_lifo_pop(&s->used_lifo, n, obj_table, &last);
> +	if (unlikely(first == NULL))
> +		return -ENOENT;
> +
> +	/* Enqueue the list elements to the free list */
> +	nb_lifo_push(&s->free_lifo, first, last, n);
> +
> +	return 0;
> +}
> +
> +static unsigned
> +nb_stack_get_count(const struct rte_mempool *mp)
> +{
> +	struct rte_mempool_nb_stack *s = mp->pool_data;
> +
> +	return nb_lifo_len(&s->used_lifo);
> +}
> +
> +static void
> +nb_stack_free(struct rte_mempool *mp)
> +{
> +	rte_free(mp->pool_data);
> +}
> +
> +static struct rte_mempool_ops ops_nb_stack = {
> +	.name = "nb_stack",
> +	.alloc = nb_stack_alloc,
> +	.free = nb_stack_free,
> +	.enqueue = nb_stack_enqueue,
> +	.dequeue = nb_stack_dequeue,
> +	.get_count = nb_stack_get_count
> +};
> +
> +MEMPOOL_REGISTER_OPS(ops_nb_stack);
> diff --git a/drivers/mempool/nb_stack/rte_mempool_nb_stack_version.map b/drivers/mempool/nb_stack/rte_mempool_nb_stack_version.map
> new file mode 100644
> index 000000000..fc8c95e91
> --- /dev/null
> +++ b/drivers/mempool/nb_stack/rte_mempool_nb_stack_version.map
> @@ -0,0 +1,4 @@
> +DPDK_19.05 {
> +
> +	local: *;
> +};
> diff --git a/mk/rte.app.mk b/mk/rte.app.mk
> index 02e8b6f05..d4b4aaaf6 100644
> --- a/mk/rte.app.mk
> +++ b/mk/rte.app.mk
> @@ -131,8 +131,11 @@ endif
>  ifeq ($(CONFIG_RTE_BUILD_SHARED_LIB),n)
>  # plugins (link only if static libraries)
>
> -_LDLIBS-$(CONFIG_RTE_DRIVER_MEMPOOL_BUCKET) += -lrte_mempool_bucket
> -_LDLIBS-$(CONFIG_RTE_DRIVER_MEMPOOL_STACK) += -lrte_mempool_stack
> +_LDLIBS-$(CONFIG_RTE_DRIVER_MEMPOOL_BUCKET) += -lrte_mempool_bucket
> +ifeq ($(CONFIG_RTE_ARCH_X86_64),y)
> +_LDLIBS-$(CONFIG_RTE_DRIVER_MEMPOOL_NB_STACK) += -lrte_mempool_nb_stack
> +endif
> +_LDLIBS-$(CONFIG_RTE_DRIVER_MEMPOOL_STACK) += -lrte_mempool_stack
>  ifeq ($(CONFIG_RTE_LIBRTE_DPAA_BUS),y)
>  _LDLIBS-$(CONFIG_RTE_LIBRTE_DPAA_MEMPOOL) += -lrte_mempool_dpaa
>  endif
> --
> 2.13.6

IMPORTANT NOTICE: The contents of this email and any attachments are confidential and may also be privileged. If you are not the intended recipient, please notify the sender immediately and do not disclose the contents to any other person, use it for any purpose, or store or copy the information in any medium. Thank you.