DPDK patches and discussions
* [dpdk-dev] [PATCH 0/6] mempool: cleanup namespace
@ 2021-10-18 14:49 Andrew Rybchenko
  2021-10-18 14:49 ` [dpdk-dev] [PATCH 1/6] mempool: avoid flags documentation in the next line Andrew Rybchenko
                   ` (7 more replies)
  0 siblings, 8 replies; 53+ messages in thread
From: Andrew Rybchenko @ 2021-10-18 14:49 UTC (permalink / raw)
  To: Olivier Matz, David Marchand; +Cc: dev

Add an RTE_ prefix to the mempool API, including its internal parts.
The old public API is kept with deprecation markup.
The internal API is simply renamed.
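
As a quick illustration of the application-side change, a minimal sketch
(the pool name and sizes below are made up; only the flag names and the
rte_mempool_create() signature come from the tree):

  #include <rte_mempool.h>

  static struct rte_mempool *
  make_sp_sc_pool(void)
  {
          /*
           * The old MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET spelling still
           * compiles after this series, but patch 2 marks it deprecated.
           */
          return rte_mempool_create("example_pool", 4096, 128, 0, 0,
                                    NULL, NULL, NULL, NULL, SOCKET_ID_ANY,
                                    RTE_MEMPOOL_F_SP_PUT |
                                    RTE_MEMPOOL_F_SC_GET);
  }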

Andrew Rybchenko (6):
  mempool: avoid flags documentation in the next line
  mempool: add namespace prefix to flags
  mempool: add namespace to internal but still visible API
  mempool: make header size calculation internal
  mempool: add namespace to driver register macro
  mempool: deprecate unused defines

 app/proc-info/main.c                          |  15 +-
 app/test-pmd/parameters.c                     |   4 +-
 app/test/test_mempool.c                       |   8 +-
 doc/guides/contributing/documentation.rst     |   4 +-
 doc/guides/prog_guide/mempool_lib.rst         |   2 +-
 doc/guides/rel_notes/deprecation.rst          |  15 ++
 doc/guides/rel_notes/release_21_11.rst        |  12 ++
 drivers/event/cnxk/cnxk_tim_evdev.c           |   2 +-
 drivers/event/octeontx/ssovf_worker.h         |   2 +-
 drivers/event/octeontx/timvf_evdev.c          |   2 +-
 drivers/event/octeontx2/otx2_tim_evdev.c      |   2 +-
 drivers/mempool/bucket/rte_mempool_bucket.c   |  10 +-
 drivers/mempool/cnxk/cn10k_mempool_ops.c      |   2 +-
 drivers/mempool/cnxk/cn9k_mempool_ops.c       |   2 +-
 drivers/mempool/dpaa/dpaa_mempool.c           |   2 +-
 drivers/mempool/dpaa2/dpaa2_hw_mempool.c      |   2 +-
 .../mempool/octeontx/rte_mempool_octeontx.c   |   2 +-
 drivers/mempool/octeontx2/otx2_mempool_ops.c  |   2 +-
 drivers/mempool/ring/rte_mempool_ring.c       |  16 +-
 drivers/mempool/stack/rte_mempool_stack.c     |   4 +-
 drivers/net/cnxk/cn10k_rx.h                   |  12 +-
 drivers/net/cnxk/cn10k_tx.h                   |  30 ++--
 drivers/net/cnxk/cn9k_rx.h                    |  12 +-
 drivers/net/cnxk/cn9k_tx.h                    |  26 +--
 drivers/net/octeontx/octeontx_rxtx.h          |   4 +-
 drivers/net/octeontx2/otx2_ethdev.c           |   4 +-
 drivers/net/octeontx2/otx2_ethdev_sec_tx.h    |   2 +-
 drivers/net/octeontx2/otx2_rx.c               |   8 +-
 drivers/net/octeontx2/otx2_rx.h               |   4 +-
 drivers/net/octeontx2/otx2_tx.c               |  16 +-
 drivers/net/octeontx2/otx2_tx.h               |   4 +-
 drivers/net/thunderx/nicvf_ethdev.c           |   2 +-
 drivers/raw/octeontx2_ep/otx2_ep_test.c       |   3 +-
 lib/mempool/rte_mempool.c                     |  54 +++----
 lib/mempool/rte_mempool.h                     | 151 ++++++++++--------
 lib/mempool/rte_mempool_ops.c                 |   2 +-
 lib/pdump/rte_pdump.c                         |   3 +-
 lib/vhost/iotlb.c                             |   4 +-
 38 files changed, 254 insertions(+), 197 deletions(-)

-- 
2.30.2



* [dpdk-dev] [PATCH 1/6] mempool: avoid flags documentation in the next line
  2021-10-18 14:49 [dpdk-dev] [PATCH 0/6] mempool: cleanup namespace Andrew Rybchenko
@ 2021-10-18 14:49 ` Andrew Rybchenko
  2021-10-18 14:49 ` [dpdk-dev] [PATCH 2/6] mempool: add namespace prefix to flags Andrew Rybchenko
                   ` (6 subsequent siblings)
  7 siblings, 0 replies; 53+ messages in thread
From: Andrew Rybchenko @ 2021-10-18 14:49 UTC (permalink / raw)
  To: Olivier Matz, David Marchand; +Cc: dev

Move the documentation to a separate line just before each define.
This prepares for longer flag names due to the namespace prefix.

Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
---
 lib/mempool/rte_mempool.h | 17 +++++++++++------
 1 file changed, 11 insertions(+), 6 deletions(-)

diff --git a/lib/mempool/rte_mempool.h b/lib/mempool/rte_mempool.h
index 88bcbc51ef..8ef4c8ed1e 100644
--- a/lib/mempool/rte_mempool.h
+++ b/lib/mempool/rte_mempool.h
@@ -250,13 +250,18 @@ struct rte_mempool {
 #endif
 }  __rte_cache_aligned;
 
+/** Spreading among memory channels not required. */
 #define MEMPOOL_F_NO_SPREAD      0x0001
-		/**< Spreading among memory channels not required. */
-#define MEMPOOL_F_NO_CACHE_ALIGN 0x0002 /**< Do not align objs on cache lines.*/
-#define MEMPOOL_F_SP_PUT         0x0004 /**< Default put is "single-producer".*/
-#define MEMPOOL_F_SC_GET         0x0008 /**< Default get is "single-consumer".*/
-#define MEMPOOL_F_POOL_CREATED   0x0010 /**< Internal: pool is created. */
-#define MEMPOOL_F_NO_IOVA_CONTIG 0x0020 /**< Don't need IOVA contiguous objs. */
+/** Do not align objects on cache lines. */
+#define MEMPOOL_F_NO_CACHE_ALIGN 0x0002
+/** Default put is "single-producer". */
+#define MEMPOOL_F_SP_PUT         0x0004
+/** Default get is "single-consumer". */
+#define MEMPOOL_F_SC_GET         0x0008
+/** Internal: pool is created. */
+#define MEMPOOL_F_POOL_CREATED   0x0010
+/** Don't need IOVA contiguous objects. */
+#define MEMPOOL_F_NO_IOVA_CONTIG 0x0020
 
 /**
  * @internal When debug is enabled, store some statistics.
-- 
2.30.2



* [dpdk-dev] [PATCH 2/6] mempool: add namespace prefix to flags
  2021-10-18 14:49 [dpdk-dev] [PATCH 0/6] mempool: cleanup namespace Andrew Rybchenko
  2021-10-18 14:49 ` [dpdk-dev] [PATCH 1/6] mempool: avoid flags documentation in the next line Andrew Rybchenko
@ 2021-10-18 14:49 ` Andrew Rybchenko
  2021-10-19  8:52   ` David Marchand
  2021-10-18 14:49 ` [dpdk-dev] [PATCH 3/6] mempool: add namespace to internal but still visible API Andrew Rybchenko
                   ` (5 subsequent siblings)
  7 siblings, 1 reply; 53+ messages in thread
From: Andrew Rybchenko @ 2021-10-18 14:49 UTC (permalink / raw)
  To: Olivier Matz, David Marchand, Maryam Tahhan, Reshma Pattan,
	Xiaoyun Li, Ray Kinsella, Pavan Nikhilesh, Shijith Thotton,
	Jerin Jacob, Artem V. Andreev, Nithin Dabilpuram, Kiran Kumar K,
	Maciej Czekaj, Radha Mohan Chintakuntla, Veerasenareddy Burru,
	Maxime Coquelin, Chenbo Xia
  Cc: dev

Fix the mempool flags namespace by adding an RTE_ prefix to the names.
The old flags remain usable, but a deprecation warning is issued at
compile time.
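
As a hedged illustration (a hypothetical helper, not part of this patch),
code that tests pool flags simply switches to the RTE_-prefixed spelling;
the old one keeps compiling but now warns:

  static inline int
  pool_is_iova_contig(const struct rte_mempool *mp)
  {
          /*
           * Old spelling still builds but triggers the deprecation warning:
           *     return !(mp->flags & MEMPOOL_F_NO_IOVA_CONTIG);
           */
          return !(mp->flags & RTE_MEMPOOL_F_NO_IOVA_CONTIG);
  }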

Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
---
 app/proc-info/main.c                        | 15 ++++----
 app/test-pmd/parameters.c                   |  4 +--
 app/test/test_mempool.c                     |  6 ++--
 doc/guides/rel_notes/deprecation.rst        |  4 +++
 doc/guides/rel_notes/release_21_11.rst      |  3 ++
 drivers/event/cnxk/cnxk_tim_evdev.c         |  2 +-
 drivers/event/octeontx/timvf_evdev.c        |  2 +-
 drivers/event/octeontx2/otx2_tim_evdev.c    |  2 +-
 drivers/mempool/bucket/rte_mempool_bucket.c |  8 ++---
 drivers/mempool/ring/rte_mempool_ring.c     |  4 +--
 drivers/net/octeontx2/otx2_ethdev.c         |  4 +--
 drivers/net/thunderx/nicvf_ethdev.c         |  2 +-
 drivers/raw/octeontx2_ep/otx2_ep_test.c     |  3 +-
 lib/mempool/rte_mempool.c                   | 40 ++++++++++-----------
 lib/mempool/rte_mempool.h                   | 40 +++++++++++++--------
 lib/mempool/rte_mempool_ops.c               |  2 +-
 lib/pdump/rte_pdump.c                       |  3 +-
 lib/vhost/iotlb.c                           |  4 +--
 18 files changed, 85 insertions(+), 63 deletions(-)

diff --git a/app/proc-info/main.c b/app/proc-info/main.c
index a8e928fa9f..74d8fdc1db 100644
--- a/app/proc-info/main.c
+++ b/app/proc-info/main.c
@@ -1298,12 +1298,15 @@ show_mempool(char *name)
 				"\t  -- No IOVA config (%c)\n",
 				ptr->name,
 				ptr->socket_id,
-				(flags & MEMPOOL_F_NO_SPREAD) ? 'y' : 'n',
-				(flags & MEMPOOL_F_NO_CACHE_ALIGN) ? 'y' : 'n',
-				(flags & MEMPOOL_F_SP_PUT) ? 'y' : 'n',
-				(flags & MEMPOOL_F_SC_GET) ? 'y' : 'n',
-				(flags & MEMPOOL_F_POOL_CREATED) ? 'y' : 'n',
-				(flags & MEMPOOL_F_NO_IOVA_CONTIG) ? 'y' : 'n');
+				(flags & RTE_MEMPOOL_F_NO_SPREAD) ? 'y' : 'n',
+				(flags & RTE_MEMPOOL_F_NO_CACHE_ALIGN) ?
+					'y' : 'n',
+				(flags & RTE_MEMPOOL_F_SP_PUT) ? 'y' : 'n',
+				(flags & RTE_MEMPOOL_F_SC_GET) ? 'y' : 'n',
+				(flags & RTE_MEMPOOL_F_POOL_CREATED) ?
+					'y' : 'n',
+				(flags & RTE_MEMPOOL_F_NO_IOVA_CONTIG) ?
+					'y' : 'n');
 			printf("  - Size %u Cache %u element %u\n"
 				"  - header %u trailer %u\n"
 				"  - private data size %u\n",
diff --git a/app/test-pmd/parameters.c b/app/test-pmd/parameters.c
index b3217d6e5c..2e67723630 100644
--- a/app/test-pmd/parameters.c
+++ b/app/test-pmd/parameters.c
@@ -1477,7 +1477,7 @@ launch_args_parse(int argc, char** argv)
 						 "noisy-lkup-num-reads-writes must be >= 0\n");
 			}
 			if (!strcmp(lgopts[opt_idx].name, "no-iova-contig"))
-				mempool_flags = MEMPOOL_F_NO_IOVA_CONTIG;
+				mempool_flags = RTE_MEMPOOL_F_NO_IOVA_CONTIG;
 
 			if (!strcmp(lgopts[opt_idx].name, "rx-mq-mode")) {
 				char *end = NULL;
@@ -1521,7 +1521,7 @@ launch_args_parse(int argc, char** argv)
 	rx_mode.offloads = rx_offloads;
 	tx_mode.offloads = tx_offloads;
 
-	if (mempool_flags & MEMPOOL_F_NO_IOVA_CONTIG &&
+	if (mempool_flags & RTE_MEMPOOL_F_NO_IOVA_CONTIG &&
 	    mp_alloc_type != MP_ALLOC_ANON) {
 		TESTPMD_LOG(WARNING, "cannot use no-iova-contig without "
 				  "mp-alloc=anon. mempool no-iova-contig is "
diff --git a/app/test/test_mempool.c b/app/test/test_mempool.c
index 66bc8d86b7..ffe69e2d03 100644
--- a/app/test/test_mempool.c
+++ b/app/test/test_mempool.c
@@ -213,7 +213,7 @@ static int test_mempool_creation_with_unknown_flag(void)
 		MEMPOOL_ELT_SIZE, 0, 0,
 		NULL, NULL,
 		NULL, NULL,
-		SOCKET_ID_ANY, MEMPOOL_F_NO_IOVA_CONTIG << 1);
+		SOCKET_ID_ANY, RTE_MEMPOOL_F_NO_IOVA_CONTIG << 1);
 
 	if (mp_cov != NULL) {
 		rte_mempool_free(mp_cov);
@@ -336,8 +336,8 @@ test_mempool_sp_sc(void)
 			my_mp_init, NULL,
 			my_obj_init, NULL,
 			SOCKET_ID_ANY,
-			MEMPOOL_F_NO_CACHE_ALIGN | MEMPOOL_F_SP_PUT |
-			MEMPOOL_F_SC_GET);
+			RTE_MEMPOOL_F_NO_CACHE_ALIGN | RTE_MEMPOOL_F_SP_PUT |
+			RTE_MEMPOOL_F_SC_GET);
 		if (mp_spsc == NULL)
 			RET_ERR();
 	}
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index e656a293ca..83a453b9bc 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -39,6 +39,10 @@ Deprecation Notices
   ``__atomic_thread_fence`` must be used for patches that need to be merged in
   20.08 onwards. This change will not introduce any performance degradation.
 
+* mempool: The mempool flags ``MEMPOOL_F_*`` are deprecated and will be
+  removed in DPDK 22.11. Corresponding flags with ``RTE_MEMPOOL_F_*``
+  should be used instead.
+
 * mbuf: The mbuf offload flags ``PKT_*`` will be renamed as ``RTE_MBUF_F_*``.
   A compatibility layer will be kept until DPDK 22.11, except for the flags
   that are already deprecated (``PKT_RX_L4_CKSUM_BAD``, ``PKT_RX_IP_CKSUM_BAD``,
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 7696d4098d..84bcad0e4a 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -264,6 +264,9 @@ API Changes
   removed. Its usages have been replaced by a new function
   ``rte_kvargs_get_with_value()``.
 
+* mempool: The mempool flags ``MEMPOOL_F_*`` are deprecated.
+  Newly added flags with ``RTE_MEMPOOL_F_`` prefix should be used instead.
+
 * net: Renamed ``s_addr`` and ``d_addr`` fields of ``rte_ether_hdr`` structure
   to ``src_addr`` and ``dst_addr``, respectively.
 
diff --git a/drivers/event/cnxk/cnxk_tim_evdev.c b/drivers/event/cnxk/cnxk_tim_evdev.c
index 9d40e336d7..d325daed95 100644
--- a/drivers/event/cnxk/cnxk_tim_evdev.c
+++ b/drivers/event/cnxk/cnxk_tim_evdev.c
@@ -19,7 +19,7 @@ cnxk_tim_chnk_pool_create(struct cnxk_tim_ring *tim_ring,
 	cache_sz /= rte_lcore_count();
 	/* Create chunk pool. */
 	if (rcfg->flags & RTE_EVENT_TIMER_ADAPTER_F_SP_PUT) {
-		mp_flags = MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET;
+		mp_flags = RTE_MEMPOOL_F_SP_PUT | RTE_MEMPOOL_F_SC_GET;
 		plt_tim_dbg("Using single producer mode");
 		tim_ring->prod_type_sp = true;
 	}
diff --git a/drivers/event/octeontx/timvf_evdev.c b/drivers/event/octeontx/timvf_evdev.c
index 688e9daa66..06fc53cc5b 100644
--- a/drivers/event/octeontx/timvf_evdev.c
+++ b/drivers/event/octeontx/timvf_evdev.c
@@ -310,7 +310,7 @@ timvf_ring_create(struct rte_event_timer_adapter *adptr)
 	}
 
 	if (rcfg->flags & RTE_EVENT_TIMER_ADAPTER_F_SP_PUT) {
-		mp_flags = MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET;
+		mp_flags = RTE_MEMPOOL_F_SP_PUT | RTE_MEMPOOL_F_SC_GET;
 		timvf_log_info("Using single producer mode");
 	}
 
diff --git a/drivers/event/octeontx2/otx2_tim_evdev.c b/drivers/event/octeontx2/otx2_tim_evdev.c
index de50c4c76e..3cdc468140 100644
--- a/drivers/event/octeontx2/otx2_tim_evdev.c
+++ b/drivers/event/octeontx2/otx2_tim_evdev.c
@@ -81,7 +81,7 @@ tim_chnk_pool_create(struct otx2_tim_ring *tim_ring,
 	cache_sz /= rte_lcore_count();
 	/* Create chunk pool. */
 	if (rcfg->flags & RTE_EVENT_TIMER_ADAPTER_F_SP_PUT) {
-		mp_flags = MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET;
+		mp_flags = RTE_MEMPOOL_F_SP_PUT | RTE_MEMPOOL_F_SC_GET;
 		otx2_tim_dbg("Using single producer mode");
 		tim_ring->prod_type_sp = true;
 	}
diff --git a/drivers/mempool/bucket/rte_mempool_bucket.c b/drivers/mempool/bucket/rte_mempool_bucket.c
index 8b9daa9782..8ff9e53007 100644
--- a/drivers/mempool/bucket/rte_mempool_bucket.c
+++ b/drivers/mempool/bucket/rte_mempool_bucket.c
@@ -426,7 +426,7 @@ bucket_init_per_lcore(unsigned int lcore_id, void *arg)
 		goto error;
 
 	rg_flags = RING_F_SC_DEQ;
-	if (mp->flags & MEMPOOL_F_SP_PUT)
+	if (mp->flags & RTE_MEMPOOL_F_SP_PUT)
 		rg_flags |= RING_F_SP_ENQ;
 	bd->adoption_buffer_rings[lcore_id] = rte_ring_create(rg_name,
 		rte_align32pow2(mp->size + 1), mp->socket_id, rg_flags);
@@ -472,7 +472,7 @@ bucket_alloc(struct rte_mempool *mp)
 		goto no_mem_for_data;
 	}
 	bd->pool = mp;
-	if (mp->flags & MEMPOOL_F_NO_CACHE_ALIGN)
+	if (mp->flags & RTE_MEMPOOL_F_NO_CACHE_ALIGN)
 		bucket_header_size = sizeof(struct bucket_header);
 	else
 		bucket_header_size = RTE_CACHE_LINE_SIZE;
@@ -494,9 +494,9 @@ bucket_alloc(struct rte_mempool *mp)
 		goto no_mem_for_stacks;
 	}
 
-	if (mp->flags & MEMPOOL_F_SP_PUT)
+	if (mp->flags & RTE_MEMPOOL_F_SP_PUT)
 		rg_flags |= RING_F_SP_ENQ;
-	if (mp->flags & MEMPOOL_F_SC_GET)
+	if (mp->flags & RTE_MEMPOOL_F_SC_GET)
 		rg_flags |= RING_F_SC_DEQ;
 	rc = snprintf(rg_name, sizeof(rg_name),
 		      RTE_MEMPOOL_MZ_FORMAT ".0", mp->name);
diff --git a/drivers/mempool/ring/rte_mempool_ring.c b/drivers/mempool/ring/rte_mempool_ring.c
index b1f09ff28f..4b785971c4 100644
--- a/drivers/mempool/ring/rte_mempool_ring.c
+++ b/drivers/mempool/ring/rte_mempool_ring.c
@@ -110,9 +110,9 @@ common_ring_alloc(struct rte_mempool *mp)
 {
 	uint32_t rg_flags = 0;
 
-	if (mp->flags & MEMPOOL_F_SP_PUT)
+	if (mp->flags & RTE_MEMPOOL_F_SP_PUT)
 		rg_flags |= RING_F_SP_ENQ;
-	if (mp->flags & MEMPOOL_F_SC_GET)
+	if (mp->flags & RTE_MEMPOOL_F_SC_GET)
 		rg_flags |= RING_F_SC_DEQ;
 
 	return ring_alloc(mp, rg_flags);
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index e0eb2b0307..69266e6514 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -1124,7 +1124,7 @@ nix_alloc_sqb_pool(int port, struct otx2_eth_txq *txq, uint16_t nb_desc)
 
 	txq->sqb_pool = rte_mempool_create_empty(name, NIX_MAX_SQB, blk_sz,
 						 0, 0, dev->node,
-						 MEMPOOL_F_NO_SPREAD);
+						 RTE_MEMPOOL_F_NO_SPREAD);
 	txq->nb_sqb_bufs = nb_sqb_bufs;
 	txq->sqes_per_sqb_log2 = (uint16_t)rte_log2_u32(sqes_per_sqb);
 	txq->nb_sqb_bufs_adj = nb_sqb_bufs -
@@ -1150,7 +1150,7 @@ nix_alloc_sqb_pool(int port, struct otx2_eth_txq *txq, uint16_t nb_desc)
 		goto fail;
 	}
 
-	tmp = rte_mempool_calc_obj_size(blk_sz, MEMPOOL_F_NO_SPREAD, &sz);
+	tmp = rte_mempool_calc_obj_size(blk_sz, RTE_MEMPOOL_F_NO_SPREAD, &sz);
 	if (dev->sqb_size != sz.elt_size) {
 		otx2_err("sqe pool block size is not expected %d != %d",
 			 dev->sqb_size, tmp);
diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index 2103f96d5e..b08701bce7 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -1301,7 +1301,7 @@ nicvf_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t qidx,
 	}
 
 	/* Mempool memory must be physically contiguous */
-	if (mp->flags & MEMPOOL_F_NO_IOVA_CONTIG) {
+	if (mp->flags & RTE_MEMPOOL_F_NO_IOVA_CONTIG) {
 		PMD_INIT_LOG(ERR, "Mempool memory must be physically contiguous");
 		return -EINVAL;
 	}
diff --git a/drivers/raw/octeontx2_ep/otx2_ep_test.c b/drivers/raw/octeontx2_ep/otx2_ep_test.c
index b876275f7a..4183b73a13 100644
--- a/drivers/raw/octeontx2_ep/otx2_ep_test.c
+++ b/drivers/raw/octeontx2_ep/otx2_ep_test.c
@@ -71,7 +71,8 @@ sdp_ioq_mempool_create(void)
 				   NULL /*obj_init*/,
 				   NULL /*obj_init arg*/,
 				   rte_socket_id() /*socket id*/,
-				   (MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET));
+				   (RTE_MEMPOOL_F_SP_PUT |
+				    RTE_MEMPOOL_F_SC_GET));
 
 	return mpool;
 }
diff --git a/lib/mempool/rte_mempool.c b/lib/mempool/rte_mempool.c
index 607419ccaf..19210c702c 100644
--- a/lib/mempool/rte_mempool.c
+++ b/lib/mempool/rte_mempool.c
@@ -216,7 +216,7 @@ rte_mempool_calc_obj_size(uint32_t elt_size, uint32_t flags,
 	sz = (sz != NULL) ? sz : &lsz;
 
 	sz->header_size = sizeof(struct rte_mempool_objhdr);
-	if ((flags & MEMPOOL_F_NO_CACHE_ALIGN) == 0)
+	if ((flags & RTE_MEMPOOL_F_NO_CACHE_ALIGN) == 0)
 		sz->header_size = RTE_ALIGN_CEIL(sz->header_size,
 			RTE_MEMPOOL_ALIGN);
 
@@ -230,7 +230,7 @@ rte_mempool_calc_obj_size(uint32_t elt_size, uint32_t flags,
 	sz->elt_size = RTE_ALIGN_CEIL(elt_size, sizeof(uint64_t));
 
 	/* expand trailer to next cache line */
-	if ((flags & MEMPOOL_F_NO_CACHE_ALIGN) == 0) {
+	if ((flags & RTE_MEMPOOL_F_NO_CACHE_ALIGN) == 0) {
 		sz->total_size = sz->header_size + sz->elt_size +
 			sz->trailer_size;
 		sz->trailer_size += ((RTE_MEMPOOL_ALIGN -
@@ -242,7 +242,7 @@ rte_mempool_calc_obj_size(uint32_t elt_size, uint32_t flags,
 	 * increase trailer to add padding between objects in order to
 	 * spread them across memory channels/ranks
 	 */
-	if ((flags & MEMPOOL_F_NO_SPREAD) == 0) {
+	if ((flags & RTE_MEMPOOL_F_NO_SPREAD) == 0) {
 		unsigned new_size;
 		new_size = arch_mem_object_align
 			    (sz->header_size + sz->elt_size + sz->trailer_size);
@@ -294,11 +294,11 @@ mempool_ops_alloc_once(struct rte_mempool *mp)
 	int ret;
 
 	/* create the internal ring if not already done */
-	if ((mp->flags & MEMPOOL_F_POOL_CREATED) == 0) {
+	if ((mp->flags & RTE_MEMPOOL_F_POOL_CREATED) == 0) {
 		ret = rte_mempool_ops_alloc(mp);
 		if (ret != 0)
 			return ret;
-		mp->flags |= MEMPOOL_F_POOL_CREATED;
+		mp->flags |= RTE_MEMPOOL_F_POOL_CREATED;
 	}
 	return 0;
 }
@@ -336,7 +336,7 @@ rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
 	memhdr->free_cb = free_cb;
 	memhdr->opaque = opaque;
 
-	if (mp->flags & MEMPOOL_F_NO_CACHE_ALIGN)
+	if (mp->flags & RTE_MEMPOOL_F_NO_CACHE_ALIGN)
 		off = RTE_PTR_ALIGN_CEIL(vaddr, 8) - vaddr;
 	else
 		off = RTE_PTR_ALIGN_CEIL(vaddr, RTE_MEMPOOL_ALIGN) - vaddr;
@@ -393,7 +393,7 @@ rte_mempool_populate_virt(struct rte_mempool *mp, char *addr,
 	size_t off, phys_len;
 	int ret, cnt = 0;
 
-	if (mp->flags & MEMPOOL_F_NO_IOVA_CONTIG)
+	if (mp->flags & RTE_MEMPOOL_F_NO_IOVA_CONTIG)
 		return rte_mempool_populate_iova(mp, addr, RTE_BAD_IOVA,
 			len, free_cb, opaque);
 
@@ -450,7 +450,7 @@ rte_mempool_get_page_size(struct rte_mempool *mp, size_t *pg_sz)
 	if (ret < 0)
 		return -EINVAL;
 	alloc_in_ext_mem = (ret == 1);
-	need_iova_contig_obj = !(mp->flags & MEMPOOL_F_NO_IOVA_CONTIG);
+	need_iova_contig_obj = !(mp->flags & RTE_MEMPOOL_F_NO_IOVA_CONTIG);
 
 	if (!need_iova_contig_obj)
 		*pg_sz = 0;
@@ -527,7 +527,7 @@ rte_mempool_populate_default(struct rte_mempool *mp)
 	 * reserve space in smaller chunks.
 	 */
 
-	need_iova_contig_obj = !(mp->flags & MEMPOOL_F_NO_IOVA_CONTIG);
+	need_iova_contig_obj = !(mp->flags & RTE_MEMPOOL_F_NO_IOVA_CONTIG);
 	ret = rte_mempool_get_page_size(mp, &pg_sz);
 	if (ret < 0)
 		return ret;
@@ -777,12 +777,12 @@ rte_mempool_cache_free(struct rte_mempool_cache *cache)
 	rte_free(cache);
 }
 
-#define MEMPOOL_KNOWN_FLAGS (MEMPOOL_F_NO_SPREAD \
-	| MEMPOOL_F_NO_CACHE_ALIGN \
-	| MEMPOOL_F_SP_PUT \
-	| MEMPOOL_F_SC_GET \
-	| MEMPOOL_F_POOL_CREATED \
-	| MEMPOOL_F_NO_IOVA_CONTIG \
+#define MEMPOOL_KNOWN_FLAGS (RTE_MEMPOOL_F_NO_SPREAD \
+	| RTE_MEMPOOL_F_NO_CACHE_ALIGN \
+	| RTE_MEMPOOL_F_SP_PUT \
+	| RTE_MEMPOOL_F_SC_GET \
+	| RTE_MEMPOOL_F_POOL_CREATED \
+	| RTE_MEMPOOL_F_NO_IOVA_CONTIG \
 	)
 /* create an empty mempool */
 struct rte_mempool *
@@ -835,8 +835,8 @@ rte_mempool_create_empty(const char *name, unsigned n, unsigned elt_size,
 	}
 
 	/* "no cache align" imply "no spread" */
-	if (flags & MEMPOOL_F_NO_CACHE_ALIGN)
-		flags |= MEMPOOL_F_NO_SPREAD;
+	if (flags & RTE_MEMPOOL_F_NO_CACHE_ALIGN)
+		flags |= RTE_MEMPOOL_F_NO_SPREAD;
 
 	/* calculate mempool object sizes. */
 	if (!rte_mempool_calc_obj_size(elt_size, flags, &objsz)) {
@@ -948,11 +948,11 @@ rte_mempool_create(const char *name, unsigned n, unsigned elt_size,
 	 * Since we have 4 combinations of the SP/SC/MP/MC examine the flags to
 	 * set the correct index into the table of ops structs.
 	 */
-	if ((flags & MEMPOOL_F_SP_PUT) && (flags & MEMPOOL_F_SC_GET))
+	if ((flags & RTE_MEMPOOL_F_SP_PUT) && (flags & RTE_MEMPOOL_F_SC_GET))
 		ret = rte_mempool_set_ops_byname(mp, "ring_sp_sc", NULL);
-	else if (flags & MEMPOOL_F_SP_PUT)
+	else if (flags & RTE_MEMPOOL_F_SP_PUT)
 		ret = rte_mempool_set_ops_byname(mp, "ring_sp_mc", NULL);
-	else if (flags & MEMPOOL_F_SC_GET)
+	else if (flags & RTE_MEMPOOL_F_SC_GET)
 		ret = rte_mempool_set_ops_byname(mp, "ring_mp_sc", NULL);
 	else
 		ret = rte_mempool_set_ops_byname(mp, "ring_mp_mc", NULL);
diff --git a/lib/mempool/rte_mempool.h b/lib/mempool/rte_mempool.h
index 8ef4c8ed1e..4725a40abe 100644
--- a/lib/mempool/rte_mempool.h
+++ b/lib/mempool/rte_mempool.h
@@ -251,17 +251,27 @@ struct rte_mempool {
 }  __rte_cache_aligned;
 
 /** Spreading among memory channels not required. */
-#define MEMPOOL_F_NO_SPREAD      0x0001
+#define RTE_MEMPOOL_F_NO_SPREAD		0x0001
+#define MEMPOOL_F_NO_SPREAD \
+		RTE_DEPRECATED(RTE_MEMPOOL_F_NO_SPREAD)
 /** Do not align objects on cache lines. */
-#define MEMPOOL_F_NO_CACHE_ALIGN 0x0002
+#define RTE_MEMPOOL_F_NO_CACHE_ALIGN	0x0002
+#define MEMPOOL_F_NO_CACHE_ALIGN \
+		RTE_DEPRECATED(MEMPOOL_F_NO_CACHE_ALIGN)
 /** Default put is "single-producer". */
-#define MEMPOOL_F_SP_PUT         0x0004
+#define RTE_MEMPOOL_F_SP_PUT		0x0004
+#define MEMPOOL_F_SP_PUT \
+		RTE_DEPRECATED(RTE_MEMPOOL_F_SP_PUT)
 /** Default get is "single-consumer". */
-#define MEMPOOL_F_SC_GET         0x0008
+#define RTE_MEMPOOL_F_SC_GET		0x0008
+#define MEMPOOL_F_SC_GET \
+		RTE_DEPRECATED(RTE_MEMPOOL_F_SC_GET)
 /** Internal: pool is created. */
-#define MEMPOOL_F_POOL_CREATED   0x0010
+#define RTE_MEMPOOL_F_POOL_CREATED	0x0010
 /** Don't need IOVA contiguous objects. */
-#define MEMPOOL_F_NO_IOVA_CONTIG 0x0020
+#define RTE_MEMPOOL_F_NO_IOVA_CONTIG	0x0020
+#define MEMPOOL_F_NO_IOVA_CONTIG \
+		RTE_DEPRECATED(RTE_MEMPOOL_F_NO_IOVA_CONTIG)
 
 /**
  * @internal When debug is enabled, store some statistics.
@@ -424,9 +434,9 @@ typedef unsigned (*rte_mempool_get_count)(const struct rte_mempool *mp);
  * Calculate memory size required to store given number of objects.
  *
  * If mempool objects are not required to be IOVA-contiguous
- * (the flag MEMPOOL_F_NO_IOVA_CONTIG is set), min_chunk_size defines
+ * (the flag RTE_MEMPOOL_F_NO_IOVA_CONTIG is set), min_chunk_size defines
  * virtually contiguous chunk size. Otherwise, if mempool objects must
- * be IOVA-contiguous (the flag MEMPOOL_F_NO_IOVA_CONTIG is clear),
+ * be IOVA-contiguous (the flag RTE_MEMPOOL_F_NO_IOVA_CONTIG is clear),
  * min_chunk_size defines IOVA-contiguous chunk size.
  *
  * @param[in] mp
@@ -974,22 +984,22 @@ typedef void (rte_mempool_ctor_t)(struct rte_mempool *, void *);
  *   constraint for the reserved zone.
  * @param flags
  *   The *flags* arguments is an OR of following flags:
- *   - MEMPOOL_F_NO_SPREAD: By default, objects addresses are spread
+ *   - RTE_MEMPOOL_F_NO_SPREAD: By default, objects addresses are spread
  *     between channels in RAM: the pool allocator will add padding
  *     between objects depending on the hardware configuration. See
  *     Memory alignment constraints for details. If this flag is set,
  *     the allocator will just align them to a cache line.
- *   - MEMPOOL_F_NO_CACHE_ALIGN: By default, the returned objects are
+ *   - RTE_MEMPOOL_F_NO_CACHE_ALIGN: By default, the returned objects are
  *     cache-aligned. This flag removes this constraint, and no
  *     padding will be present between objects. This flag implies
- *     MEMPOOL_F_NO_SPREAD.
- *   - MEMPOOL_F_SP_PUT: If this flag is set, the default behavior
+ *     RTE_MEMPOOL_F_NO_SPREAD.
+ *   - RTE_MEMPOOL_F_SP_PUT: If this flag is set, the default behavior
  *     when using rte_mempool_put() or rte_mempool_put_bulk() is
  *     "single-producer". Otherwise, it is "multi-producers".
- *   - MEMPOOL_F_SC_GET: If this flag is set, the default behavior
+ *   - RTE_MEMPOOL_F_SC_GET: If this flag is set, the default behavior
  *     when using rte_mempool_get() or rte_mempool_get_bulk() is
  *     "single-consumer". Otherwise, it is "multi-consumers".
- *   - MEMPOOL_F_NO_IOVA_CONTIG: If set, allocated objects won't
+ *   - RTE_MEMPOOL_F_NO_IOVA_CONTIG: If set, allocated objects won't
  *     necessarily be contiguous in IO memory.
  * @return
  *   The pointer to the new allocated mempool, on success. NULL on error
@@ -1676,7 +1686,7 @@ rte_mempool_empty(const struct rte_mempool *mp)
  *   A pointer (virtual address) to the element of the pool.
  * @return
  *   The IO address of the elt element.
- *   If the mempool was created with MEMPOOL_F_NO_IOVA_CONTIG, the
+ *   If the mempool was created with RTE_MEMPOOL_F_NO_IOVA_CONTIG, the
  *   returned value is RTE_BAD_IOVA.
  */
 static inline rte_iova_t
diff --git a/lib/mempool/rte_mempool_ops.c b/lib/mempool/rte_mempool_ops.c
index 5e22667787..2d36dee8f0 100644
--- a/lib/mempool/rte_mempool_ops.c
+++ b/lib/mempool/rte_mempool_ops.c
@@ -168,7 +168,7 @@ rte_mempool_set_ops_byname(struct rte_mempool *mp, const char *name,
 	unsigned i;
 
 	/* too late, the mempool is already populated. */
-	if (mp->flags & MEMPOOL_F_POOL_CREATED)
+	if (mp->flags & RTE_MEMPOOL_F_POOL_CREATED)
 		return -EEXIST;
 
 	for (i = 0; i < rte_mempool_ops_table.num_ops; i++) {
diff --git a/lib/pdump/rte_pdump.c b/lib/pdump/rte_pdump.c
index 382217bc15..46a87e2339 100644
--- a/lib/pdump/rte_pdump.c
+++ b/lib/pdump/rte_pdump.c
@@ -371,7 +371,8 @@ pdump_validate_ring_mp(struct rte_ring *ring, struct rte_mempool *mp)
 		rte_errno = EINVAL;
 		return -1;
 	}
-	if (mp->flags & MEMPOOL_F_SP_PUT || mp->flags & MEMPOOL_F_SC_GET) {
+	if (mp->flags & RTE_MEMPOOL_F_SP_PUT ||
+	    mp->flags & RTE_MEMPOOL_F_SC_GET) {
 		PDUMP_LOG(ERR,
 			  "mempool with SP or SC set not valid for pdump,"
 			  "must have MP and MC set\n");
diff --git a/lib/vhost/iotlb.c b/lib/vhost/iotlb.c
index e4a445e709..82bdb84526 100644
--- a/lib/vhost/iotlb.c
+++ b/lib/vhost/iotlb.c
@@ -321,8 +321,8 @@ vhost_user_iotlb_init(struct virtio_net *dev, int vq_index)
 	vq->iotlb_pool = rte_mempool_create(pool_name,
 			IOTLB_CACHE_SIZE, sizeof(struct vhost_iotlb_entry), 0,
 			0, 0, NULL, NULL, NULL, socket,
-			MEMPOOL_F_NO_CACHE_ALIGN |
-			MEMPOOL_F_SP_PUT);
+			RTE_MEMPOOL_F_NO_CACHE_ALIGN |
+			RTE_MEMPOOL_F_SP_PUT);
 	if (!vq->iotlb_pool) {
 		VHOST_LOG_CONFIG(ERR,
 				"Failed to create IOTLB cache pool (%s)\n",
-- 
2.30.2



* [dpdk-dev] [PATCH 3/6] mempool: add namespace to internal but still visible API
  2021-10-18 14:49 [dpdk-dev] [PATCH 0/6] mempool: cleanup namespace Andrew Rybchenko
  2021-10-18 14:49 ` [dpdk-dev] [PATCH 1/6] mempool: avoid flags documentation in the next line Andrew Rybchenko
  2021-10-18 14:49 ` [dpdk-dev] [PATCH 2/6] mempool: add namespace prefix to flags Andrew Rybchenko
@ 2021-10-18 14:49 ` Andrew Rybchenko
  2021-10-19  8:47   ` David Marchand
  2021-10-18 14:49 ` [dpdk-dev] [PATCH 4/6] mempool: make header size calculation internal Andrew Rybchenko
                   ` (4 subsequent siblings)
  7 siblings, 1 reply; 53+ messages in thread
From: Andrew Rybchenko @ 2021-10-18 14:49 UTC (permalink / raw)
  To: Olivier Matz, David Marchand, Jerin Jacob, Nithin Dabilpuram,
	Kiran Kumar K, Sunil Kumar Kori, Satha Rao, Harman Kalra,
	Anoob Joseph
  Cc: dev

Add an RTE_ prefix to the internal API defined in the public header.
Use the prefix instead of a double underscore.
Use uppercase for macros where the lowercase name would conflict with an
existing function name.
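
For readers of the driver hunks below, a brief note on the renamed debug
macro (a sketch; the semantics are those of the existing
rte_mempool_check_cookies(), and the macro is a no-op unless
RTE_LIBRTE_MEMPOOL_DEBUG is defined):

  #include <rte_mbuf.h>
  #include <rte_mempool.h>

  /* Hypothetical helper showing the meaning of the last argument. */
  static void
  mark_mbuf_ownership(struct rte_mbuf *m, int to_hw)
  {
          if (to_hw)      /* 0: object is being put/freed (e.g. by NIX HW) */
                  RTE_MEMPOOL_CHECK_COOKIES(m->pool, (void **)&m, 1, 0);
          else            /* 1: object was just got/allocated (e.g. from HW) */
                  RTE_MEMPOOL_CHECK_COOKIES(m->pool, (void **)&m, 1, 1);
          /* 2 is used internally for audits that only validate the cookies. */
  }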

Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
---
 drivers/event/octeontx/ssovf_worker.h      |  2 +-
 drivers/net/cnxk/cn10k_rx.h                | 12 ++--
 drivers/net/cnxk/cn10k_tx.h                | 30 ++++----
 drivers/net/cnxk/cn9k_rx.h                 | 12 ++--
 drivers/net/cnxk/cn9k_tx.h                 | 26 +++----
 drivers/net/octeontx/octeontx_rxtx.h       |  4 +-
 drivers/net/octeontx2/otx2_ethdev_sec_tx.h |  2 +-
 drivers/net/octeontx2/otx2_rx.c            |  8 +--
 drivers/net/octeontx2/otx2_rx.h            |  4 +-
 drivers/net/octeontx2/otx2_tx.c            | 16 ++---
 drivers/net/octeontx2/otx2_tx.h            |  4 +-
 lib/mempool/rte_mempool.c                  |  8 +--
 lib/mempool/rte_mempool.h                  | 81 +++++++++++-----------
 13 files changed, 105 insertions(+), 104 deletions(-)

diff --git a/drivers/event/octeontx/ssovf_worker.h b/drivers/event/octeontx/ssovf_worker.h
index f609b296ed..ba9e1cd0fa 100644
--- a/drivers/event/octeontx/ssovf_worker.h
+++ b/drivers/event/octeontx/ssovf_worker.h
@@ -83,7 +83,7 @@ ssovf_octeontx_wqe_xtract_mseg(octtx_wqe_t *wqe,
 
 		mbuf->data_off = sizeof(octtx_pki_buflink_t);
 
-		__mempool_check_cookies(mbuf->pool, (void **)&mbuf, 1, 1);
+		RTE_MEMPOOL_CHECK_COOKIES(mbuf->pool, (void **)&mbuf, 1, 1);
 		if (nb_segs == 1)
 			mbuf->data_len = bytes_left;
 		else
diff --git a/drivers/net/cnxk/cn10k_rx.h b/drivers/net/cnxk/cn10k_rx.h
index fcc451aa36..6b40a9d0b5 100644
--- a/drivers/net/cnxk/cn10k_rx.h
+++ b/drivers/net/cnxk/cn10k_rx.h
@@ -276,7 +276,7 @@ nix_cqe_xtract_mseg(const union nix_rx_parse_u *rx, struct rte_mbuf *mbuf,
 		mbuf->next = ((struct rte_mbuf *)*iova_list) - 1;
 		mbuf = mbuf->next;
 
-		__mempool_check_cookies(mbuf->pool, (void **)&mbuf, 1, 1);
+		RTE_MEMPOOL_CHECK_COOKIES(mbuf->pool, (void **)&mbuf, 1, 1);
 
 		mbuf->data_len = sg & 0xFFFF;
 		sg = sg >> 16;
@@ -306,7 +306,7 @@ cn10k_nix_cqe_to_mbuf(const struct nix_cqe_hdr_s *cq, const uint32_t tag,
 	uint64_t ol_flags = 0;
 
 	/* Mark mempool obj as "get" as it is alloc'ed by NIX */
-	__mempool_check_cookies(mbuf->pool, (void **)&mbuf, 1, 1);
+	RTE_MEMPOOL_CHECK_COOKIES(mbuf->pool, (void **)&mbuf, 1, 1);
 
 	if (flag & NIX_RX_OFFLOAD_PTYPE_F)
 		mbuf->packet_type = nix_ptype_get(lookup_mem, w1);
@@ -905,10 +905,10 @@ cn10k_nix_recv_pkts_vector(void *args, struct rte_mbuf **mbufs, uint16_t pkts,
 		roc_prefetch_store_keep(mbuf3);
 
 		/* Mark mempool obj as "get" as it is alloc'ed by NIX */
-		__mempool_check_cookies(mbuf0->pool, (void **)&mbuf0, 1, 1);
-		__mempool_check_cookies(mbuf1->pool, (void **)&mbuf1, 1, 1);
-		__mempool_check_cookies(mbuf2->pool, (void **)&mbuf2, 1, 1);
-		__mempool_check_cookies(mbuf3->pool, (void **)&mbuf3, 1, 1);
+		RTE_MEMPOOL_CHECK_COOKIES(mbuf0->pool, (void **)&mbuf0, 1, 1);
+		RTE_MEMPOOL_CHECK_COOKIES(mbuf1->pool, (void **)&mbuf1, 1, 1);
+		RTE_MEMPOOL_CHECK_COOKIES(mbuf2->pool, (void **)&mbuf2, 1, 1);
+		RTE_MEMPOOL_CHECK_COOKIES(mbuf3->pool, (void **)&mbuf3, 1, 1);
 
 		packets += NIX_DESCS_PER_LOOP;
 
diff --git a/drivers/net/cnxk/cn10k_tx.h b/drivers/net/cnxk/cn10k_tx.h
index c6f349b352..0fd877f4ec 100644
--- a/drivers/net/cnxk/cn10k_tx.h
+++ b/drivers/net/cnxk/cn10k_tx.h
@@ -677,7 +677,7 @@ cn10k_nix_xmit_prepare(struct rte_mbuf *m, uint64_t *cmd, const uint16_t flags,
 		}
 		/* Mark mempool object as "put" since it is freed by NIX */
 		if (!send_hdr->w0.df)
-			__mempool_check_cookies(m->pool, (void **)&m, 1, 0);
+			RTE_MEMPOOL_CHECK_COOKIES(m->pool, (void **)&m, 1, 0);
 	} else {
 		sg->seg1_size = m->data_len;
 		*(rte_iova_t *)(sg + 1) = rte_mbuf_data_iova(m);
@@ -789,7 +789,7 @@ cn10k_nix_prepare_mseg(struct rte_mbuf *m, uint64_t *cmd, const uint16_t flags)
 		/* Mark mempool object as "put" since it is freed by NIX */
 #ifdef RTE_LIBRTE_MEMPOOL_DEBUG
 	if (!(sg_u & (1ULL << 55)))
-		__mempool_check_cookies(m->pool, (void **)&m, 1, 0);
+		RTE_MEMPOOL_CHECK_COOKIES(m->pool, (void **)&m, 1, 0);
 	rte_io_wmb();
 #endif
 	m = m_next;
@@ -808,7 +808,7 @@ cn10k_nix_prepare_mseg(struct rte_mbuf *m, uint64_t *cmd, const uint16_t flags)
 			 */
 #ifdef RTE_LIBRTE_MEMPOOL_DEBUG
 		if (!(sg_u & (1ULL << (i + 55))))
-			__mempool_check_cookies(m->pool, (void **)&m, 1, 0);
+			RTE_MEMPOOL_CHECK_COOKIES(m->pool, (void **)&m, 1, 0);
 #endif
 		slist++;
 		i++;
@@ -1177,7 +1177,7 @@ cn10k_nix_prepare_mseg_vec_list(struct rte_mbuf *m, uint64_t *cmd,
 		/* Mark mempool object as "put" since it is freed by NIX */
 #ifdef RTE_LIBRTE_MEMPOOL_DEBUG
 	if (!(sg_u & (1ULL << 55)))
-		__mempool_check_cookies(m->pool, (void **)&m, 1, 0);
+		RTE_MEMPOOL_CHECK_COOKIES(m->pool, (void **)&m, 1, 0);
 	rte_io_wmb();
 #endif
 
@@ -1194,7 +1194,7 @@ cn10k_nix_prepare_mseg_vec_list(struct rte_mbuf *m, uint64_t *cmd,
 			 */
 #ifdef RTE_LIBRTE_MEMPOOL_DEBUG
 		if (!(sg_u & (1ULL << (i + 55))))
-			__mempool_check_cookies(m->pool, (void **)&m, 1, 0);
+			RTE_MEMPOOL_CHECK_COOKIES(m->pool, (void **)&m, 1, 0);
 		rte_io_wmb();
 #endif
 		slist++;
@@ -1235,7 +1235,7 @@ cn10k_nix_prepare_mseg_vec(struct rte_mbuf *m, uint64_t *cmd, uint64x2_t *cmd0,
 #ifdef RTE_LIBRTE_MEMPOOL_DEBUG
 		sg.u = vgetq_lane_u64(cmd1[0], 0);
 		if (!(sg.u & (1ULL << 55)))
-			__mempool_check_cookies(m->pool, (void **)&m, 1, 0);
+			RTE_MEMPOOL_CHECK_COOKIES(m->pool, (void **)&m, 1, 0);
 		rte_io_wmb();
 #endif
 		return;
@@ -1425,7 +1425,7 @@ cn10k_nix_xmit_store(struct rte_mbuf *mbuf, uint8_t segdw, uintptr_t laddr,
 #ifdef RTE_LIBRTE_MEMPOOL_DEBUG
 		sg.u = vgetq_lane_u64(cmd1, 0);
 		if (!(sg.u & (1ULL << 55)))
-			__mempool_check_cookies(mbuf->pool, (void **)&mbuf, 1,
+			RTE_MEMPOOL_CHECK_COOKIES(mbuf->pool, (void **)&mbuf, 1,
 						0);
 		rte_io_wmb();
 #endif
@@ -2352,28 +2352,28 @@ cn10k_nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,
 			if (cnxk_nix_prefree_seg((struct rte_mbuf *)mbuf0))
 				vsetq_lane_u64(0x80000, xmask01, 0);
 			else
-				__mempool_check_cookies(
+				RTE_MEMPOOL_CHECK_COOKIES(
 					((struct rte_mbuf *)mbuf0)->pool,
 					(void **)&mbuf0, 1, 0);
 
 			if (cnxk_nix_prefree_seg((struct rte_mbuf *)mbuf1))
 				vsetq_lane_u64(0x80000, xmask01, 1);
 			else
-				__mempool_check_cookies(
+				RTE_MEMPOOL_CHECK_COOKIES(
 					((struct rte_mbuf *)mbuf1)->pool,
 					(void **)&mbuf1, 1, 0);
 
 			if (cnxk_nix_prefree_seg((struct rte_mbuf *)mbuf2))
 				vsetq_lane_u64(0x80000, xmask23, 0);
 			else
-				__mempool_check_cookies(
+				RTE_MEMPOOL_CHECK_COOKIES(
 					((struct rte_mbuf *)mbuf2)->pool,
 					(void **)&mbuf2, 1, 0);
 
 			if (cnxk_nix_prefree_seg((struct rte_mbuf *)mbuf3))
 				vsetq_lane_u64(0x80000, xmask23, 1);
 			else
-				__mempool_check_cookies(
+				RTE_MEMPOOL_CHECK_COOKIES(
 					((struct rte_mbuf *)mbuf3)->pool,
 					(void **)&mbuf3, 1, 0);
 			senddesc01_w0 = vorrq_u64(senddesc01_w0, xmask01);
@@ -2389,19 +2389,19 @@ cn10k_nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,
 			/* Mark mempool object as "put" since
 			 * it is freed by NIX
 			 */
-			__mempool_check_cookies(
+			RTE_MEMPOOL_CHECK_COOKIES(
 				((struct rte_mbuf *)mbuf0)->pool,
 				(void **)&mbuf0, 1, 0);
 
-			__mempool_check_cookies(
+			RTE_MEMPOOL_CHECK_COOKIES(
 				((struct rte_mbuf *)mbuf1)->pool,
 				(void **)&mbuf1, 1, 0);
 
-			__mempool_check_cookies(
+			RTE_MEMPOOL_CHECK_COOKIES(
 				((struct rte_mbuf *)mbuf2)->pool,
 				(void **)&mbuf2, 1, 0);
 
-			__mempool_check_cookies(
+			RTE_MEMPOOL_CHECK_COOKIES(
 				((struct rte_mbuf *)mbuf3)->pool,
 				(void **)&mbuf3, 1, 0);
 		}
diff --git a/drivers/net/cnxk/cn9k_rx.h b/drivers/net/cnxk/cn9k_rx.h
index 7ab415a194..ba3c3668f7 100644
--- a/drivers/net/cnxk/cn9k_rx.h
+++ b/drivers/net/cnxk/cn9k_rx.h
@@ -151,7 +151,7 @@ nix_cqe_xtract_mseg(const union nix_rx_parse_u *rx, struct rte_mbuf *mbuf,
 		mbuf->next = ((struct rte_mbuf *)*iova_list) - 1;
 		mbuf = mbuf->next;
 
-		__mempool_check_cookies(mbuf->pool, (void **)&mbuf, 1, 1);
+		RTE_MEMPOOL_CHECK_COOKIES(mbuf->pool, (void **)&mbuf, 1, 1);
 
 		mbuf->data_len = sg & 0xFFFF;
 		sg = sg >> 16;
@@ -288,7 +288,7 @@ cn9k_nix_cqe_to_mbuf(const struct nix_cqe_hdr_s *cq, const uint32_t tag,
 	uint64_t ol_flags = 0;
 
 	/* Mark mempool obj as "get" as it is alloc'ed by NIX */
-	__mempool_check_cookies(mbuf->pool, (void **)&mbuf, 1, 1);
+	RTE_MEMPOOL_CHECK_COOKIES(mbuf->pool, (void **)&mbuf, 1, 1);
 
 	if (flag & NIX_RX_OFFLOAD_PTYPE_F)
 		packet_type = nix_ptype_get(lookup_mem, w1);
@@ -757,10 +757,10 @@ cn9k_nix_recv_pkts_vector(void *rx_queue, struct rte_mbuf **rx_pkts,
 		roc_prefetch_store_keep(mbuf3);
 
 		/* Mark mempool obj as "get" as it is alloc'ed by NIX */
-		__mempool_check_cookies(mbuf0->pool, (void **)&mbuf0, 1, 1);
-		__mempool_check_cookies(mbuf1->pool, (void **)&mbuf1, 1, 1);
-		__mempool_check_cookies(mbuf2->pool, (void **)&mbuf2, 1, 1);
-		__mempool_check_cookies(mbuf3->pool, (void **)&mbuf3, 1, 1);
+		RTE_MEMPOOL_CHECK_COOKIES(mbuf0->pool, (void **)&mbuf0, 1, 1);
+		RTE_MEMPOOL_CHECK_COOKIES(mbuf1->pool, (void **)&mbuf1, 1, 1);
+		RTE_MEMPOOL_CHECK_COOKIES(mbuf2->pool, (void **)&mbuf2, 1, 1);
+		RTE_MEMPOOL_CHECK_COOKIES(mbuf3->pool, (void **)&mbuf3, 1, 1);
 
 		/* Advance head pointer and packets */
 		head += NIX_DESCS_PER_LOOP;
diff --git a/drivers/net/cnxk/cn9k_tx.h b/drivers/net/cnxk/cn9k_tx.h
index 44273eca90..83f4be84f1 100644
--- a/drivers/net/cnxk/cn9k_tx.h
+++ b/drivers/net/cnxk/cn9k_tx.h
@@ -285,7 +285,7 @@ cn9k_nix_xmit_prepare(struct rte_mbuf *m, uint64_t *cmd, const uint16_t flags,
 		}
 		/* Mark mempool object as "put" since it is freed by NIX */
 		if (!send_hdr->w0.df)
-			__mempool_check_cookies(m->pool, (void **)&m, 1, 0);
+			RTE_MEMPOOL_CHECK_COOKIES(m->pool, (void **)&m, 1, 0);
 	}
 }
 
@@ -397,7 +397,7 @@ cn9k_nix_prepare_mseg(struct rte_mbuf *m, uint64_t *cmd, const uint16_t flags)
 		/* Mark mempool object as "put" since it is freed by NIX */
 #ifdef RTE_LIBRTE_MEMPOOL_DEBUG
 		if (!(sg_u & (1ULL << (i + 55))))
-			__mempool_check_cookies(m->pool, (void **)&m, 1, 0);
+			RTE_MEMPOOL_CHECK_COOKIES(m->pool, (void **)&m, 1, 0);
 		rte_io_wmb();
 #endif
 		slist++;
@@ -611,7 +611,7 @@ cn9k_nix_prepare_mseg_vec_list(struct rte_mbuf *m, uint64_t *cmd,
 		/* Mark mempool object as "put" since it is freed by NIX */
 #ifdef RTE_LIBRTE_MEMPOOL_DEBUG
 	if (!(sg_u & (1ULL << 55)))
-		__mempool_check_cookies(m->pool, (void **)&m, 1, 0);
+		RTE_MEMPOOL_CHECK_COOKIES(m->pool, (void **)&m, 1, 0);
 	rte_io_wmb();
 #endif
 
@@ -628,7 +628,7 @@ cn9k_nix_prepare_mseg_vec_list(struct rte_mbuf *m, uint64_t *cmd,
 			 */
 #ifdef RTE_LIBRTE_MEMPOOL_DEBUG
 		if (!(sg_u & (1ULL << (i + 55))))
-			__mempool_check_cookies(m->pool, (void **)&m, 1, 0);
+			RTE_MEMPOOL_CHECK_COOKIES(m->pool, (void **)&m, 1, 0);
 		rte_io_wmb();
 #endif
 		slist++;
@@ -680,7 +680,7 @@ cn9k_nix_prepare_mseg_vec(struct rte_mbuf *m, uint64_t *cmd, uint64x2_t *cmd0,
 #ifdef RTE_LIBRTE_MEMPOOL_DEBUG
 		sg.u = vgetq_lane_u64(cmd1[0], 0);
 		if (!(sg.u & (1ULL << 55)))
-			__mempool_check_cookies(m->pool, (void **)&m, 1, 0);
+			RTE_MEMPOOL_CHECK_COOKIES(m->pool, (void **)&m, 1, 0);
 		rte_io_wmb();
 #endif
 		return 2 + !!(flags & NIX_TX_NEED_EXT_HDR) +
@@ -1627,28 +1627,28 @@ cn9k_nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,
 			if (cnxk_nix_prefree_seg((struct rte_mbuf *)mbuf0))
 				vsetq_lane_u64(0x80000, xmask01, 0);
 			else
-				__mempool_check_cookies(
+				RTE_MEMPOOL_CHECK_COOKIES(
 					((struct rte_mbuf *)mbuf0)->pool,
 					(void **)&mbuf0, 1, 0);
 
 			if (cnxk_nix_prefree_seg((struct rte_mbuf *)mbuf1))
 				vsetq_lane_u64(0x80000, xmask01, 1);
 			else
-				__mempool_check_cookies(
+				RTE_MEMPOOL_CHECK_COOKIES(
 					((struct rte_mbuf *)mbuf1)->pool,
 					(void **)&mbuf1, 1, 0);
 
 			if (cnxk_nix_prefree_seg((struct rte_mbuf *)mbuf2))
 				vsetq_lane_u64(0x80000, xmask23, 0);
 			else
-				__mempool_check_cookies(
+				RTE_MEMPOOL_CHECK_COOKIES(
 					((struct rte_mbuf *)mbuf2)->pool,
 					(void **)&mbuf2, 1, 0);
 
 			if (cnxk_nix_prefree_seg((struct rte_mbuf *)mbuf3))
 				vsetq_lane_u64(0x80000, xmask23, 1);
 			else
-				__mempool_check_cookies(
+				RTE_MEMPOOL_CHECK_COOKIES(
 					((struct rte_mbuf *)mbuf3)->pool,
 					(void **)&mbuf3, 1, 0);
 			senddesc01_w0 = vorrq_u64(senddesc01_w0, xmask01);
@@ -1667,19 +1667,19 @@ cn9k_nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,
 			/* Mark mempool object as "put" since
 			 * it is freed by NIX
 			 */
-			__mempool_check_cookies(
+			RTE_MEMPOOL_CHECK_COOKIES(
 				((struct rte_mbuf *)mbuf0)->pool,
 				(void **)&mbuf0, 1, 0);
 
-			__mempool_check_cookies(
+			RTE_MEMPOOL_CHECK_COOKIES(
 				((struct rte_mbuf *)mbuf1)->pool,
 				(void **)&mbuf1, 1, 0);
 
-			__mempool_check_cookies(
+			RTE_MEMPOOL_CHECK_COOKIES(
 				((struct rte_mbuf *)mbuf2)->pool,
 				(void **)&mbuf2, 1, 0);
 
-			__mempool_check_cookies(
+			RTE_MEMPOOL_CHECK_COOKIES(
 				((struct rte_mbuf *)mbuf3)->pool,
 				(void **)&mbuf3, 1, 0);
 #ifdef RTE_LIBRTE_MEMPOOL_DEBUG
diff --git a/drivers/net/octeontx/octeontx_rxtx.h b/drivers/net/octeontx/octeontx_rxtx.h
index e0723ac26a..9af797c36c 100644
--- a/drivers/net/octeontx/octeontx_rxtx.h
+++ b/drivers/net/octeontx/octeontx_rxtx.h
@@ -344,7 +344,7 @@ __octeontx_xmit_prepare(struct rte_mbuf *tx_pkt, uint64_t *cmd_buf,
 
 	/* Mark mempool object as "put" since it is freed by PKO */
 	if (!(cmd_buf[0] & (1ULL << 58)))
-		__mempool_check_cookies(m_tofree->pool, (void **)&m_tofree,
+		RTE_MEMPOOL_CHECK_COOKIES(m_tofree->pool, (void **)&m_tofree,
 					1, 0);
 	/* Get the gaura Id */
 	gaura_id =
@@ -417,7 +417,7 @@ __octeontx_xmit_mseg_prepare(struct rte_mbuf *tx_pkt, uint64_t *cmd_buf,
 		 */
 		if (!(cmd_buf[nb_desc] & (1ULL << 57))) {
 			tx_pkt->next = NULL;
-			__mempool_check_cookies(m_tofree->pool,
+			RTE_MEMPOOL_CHECK_COOKIES(m_tofree->pool,
 						(void **)&m_tofree, 1, 0);
 		}
 		nb_desc++;
diff --git a/drivers/net/octeontx2/otx2_ethdev_sec_tx.h b/drivers/net/octeontx2/otx2_ethdev_sec_tx.h
index 623a2a841e..65140b759c 100644
--- a/drivers/net/octeontx2/otx2_ethdev_sec_tx.h
+++ b/drivers/net/octeontx2/otx2_ethdev_sec_tx.h
@@ -146,7 +146,7 @@ otx2_sec_event_tx(uint64_t base, struct rte_event *ev, struct rte_mbuf *m,
 	sd->nix_iova.addr = rte_mbuf_data_iova(m);
 
 	/* Mark mempool object as "put" since it is freed by NIX */
-	__mempool_check_cookies(m->pool, (void **)&m, 1, 0);
+	RTE_MEMPOOL_CHECK_COOKIES(m->pool, (void **)&m, 1, 0);
 
 	if (!ev->sched_type)
 		otx2_ssogws_head_wait(base + SSOW_LF_GWS_TAG);
diff --git a/drivers/net/octeontx2/otx2_rx.c b/drivers/net/octeontx2/otx2_rx.c
index ffeade5952..0d85c898bf 100644
--- a/drivers/net/octeontx2/otx2_rx.c
+++ b/drivers/net/octeontx2/otx2_rx.c
@@ -296,10 +296,10 @@ nix_recv_pkts_vector(void *rx_queue, struct rte_mbuf **rx_pkts,
 		otx2_prefetch_store_keep(mbuf3);
 
 		/* Mark mempool obj as "get" as it is alloc'ed by NIX */
-		__mempool_check_cookies(mbuf0->pool, (void **)&mbuf0, 1, 1);
-		__mempool_check_cookies(mbuf1->pool, (void **)&mbuf1, 1, 1);
-		__mempool_check_cookies(mbuf2->pool, (void **)&mbuf2, 1, 1);
-		__mempool_check_cookies(mbuf3->pool, (void **)&mbuf3, 1, 1);
+		RTE_MEMPOOL_CHECK_COOKIES(mbuf0->pool, (void **)&mbuf0, 1, 1);
+		RTE_MEMPOOL_CHECK_COOKIES(mbuf1->pool, (void **)&mbuf1, 1, 1);
+		RTE_MEMPOOL_CHECK_COOKIES(mbuf2->pool, (void **)&mbuf2, 1, 1);
+		RTE_MEMPOOL_CHECK_COOKIES(mbuf3->pool, (void **)&mbuf3, 1, 1);
 
 		/* Advance head pointer and packets */
 		head += NIX_DESCS_PER_LOOP; head &= qmask;
diff --git a/drivers/net/octeontx2/otx2_rx.h b/drivers/net/octeontx2/otx2_rx.h
index ea29aec62f..3dcc563be1 100644
--- a/drivers/net/octeontx2/otx2_rx.h
+++ b/drivers/net/octeontx2/otx2_rx.h
@@ -199,7 +199,7 @@ nix_cqe_xtract_mseg(const struct nix_rx_parse_s *rx,
 		mbuf->next = ((struct rte_mbuf *)*iova_list) - 1;
 		mbuf = mbuf->next;
 
-		__mempool_check_cookies(mbuf->pool, (void **)&mbuf, 1, 1);
+		RTE_MEMPOOL_CHECK_COOKIES(mbuf->pool, (void **)&mbuf, 1, 1);
 
 		mbuf->data_len = sg & 0xFFFF;
 		sg = sg >> 16;
@@ -309,7 +309,7 @@ otx2_nix_cqe_to_mbuf(const struct nix_cqe_hdr_s *cq, const uint32_t tag,
 	uint64_t ol_flags = 0;
 
 	/* Mark mempool obj as "get" as it is alloc'ed by NIX */
-	__mempool_check_cookies(mbuf->pool, (void **)&mbuf, 1, 1);
+	RTE_MEMPOOL_CHECK_COOKIES(mbuf->pool, (void **)&mbuf, 1, 1);
 
 	if (flag & NIX_RX_OFFLOAD_PTYPE_F)
 		mbuf->packet_type = nix_ptype_get(lookup_mem, w1);
diff --git a/drivers/net/octeontx2/otx2_tx.c b/drivers/net/octeontx2/otx2_tx.c
index ff299f00b9..ad704d745b 100644
--- a/drivers/net/octeontx2/otx2_tx.c
+++ b/drivers/net/octeontx2/otx2_tx.c
@@ -202,7 +202,7 @@ nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,
 			if (otx2_nix_prefree_seg(mbuf))
 				vsetq_lane_u64(0x80000, xmask01, 0);
 			else
-				__mempool_check_cookies(mbuf->pool,
+				RTE_MEMPOOL_CHECK_COOKIES(mbuf->pool,
 							(void **)&mbuf,
 							1, 0);
 
@@ -211,7 +211,7 @@ nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,
 			if (otx2_nix_prefree_seg(mbuf))
 				vsetq_lane_u64(0x80000, xmask01, 1);
 			else
-				__mempool_check_cookies(mbuf->pool,
+				RTE_MEMPOOL_CHECK_COOKIES(mbuf->pool,
 							(void **)&mbuf,
 							1, 0);
 
@@ -220,7 +220,7 @@ nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,
 			if (otx2_nix_prefree_seg(mbuf))
 				vsetq_lane_u64(0x80000, xmask23, 0);
 			else
-				__mempool_check_cookies(mbuf->pool,
+				RTE_MEMPOOL_CHECK_COOKIES(mbuf->pool,
 							(void **)&mbuf,
 							1, 0);
 
@@ -229,7 +229,7 @@ nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,
 			if (otx2_nix_prefree_seg(mbuf))
 				vsetq_lane_u64(0x80000, xmask23, 1);
 			else
-				__mempool_check_cookies(mbuf->pool,
+				RTE_MEMPOOL_CHECK_COOKIES(mbuf->pool,
 							(void **)&mbuf,
 							1, 0);
 			senddesc01_w0 = vorrq_u64(senddesc01_w0, xmask01);
@@ -245,22 +245,22 @@ nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,
 			 */
 			mbuf = (struct rte_mbuf *)((uintptr_t)mbuf0 -
 				offsetof(struct rte_mbuf, buf_iova));
-			__mempool_check_cookies(mbuf->pool, (void **)&mbuf,
+			RTE_MEMPOOL_CHECK_COOKIES(mbuf->pool, (void **)&mbuf,
 						1, 0);
 
 			mbuf = (struct rte_mbuf *)((uintptr_t)mbuf1 -
 				offsetof(struct rte_mbuf, buf_iova));
-			__mempool_check_cookies(mbuf->pool, (void **)&mbuf,
+			RTE_MEMPOOL_CHECK_COOKIES(mbuf->pool, (void **)&mbuf,
 						1, 0);
 
 			mbuf = (struct rte_mbuf *)((uintptr_t)mbuf2 -
 				offsetof(struct rte_mbuf, buf_iova));
-			__mempool_check_cookies(mbuf->pool, (void **)&mbuf,
+			RTE_MEMPOOL_CHECK_COOKIES(mbuf->pool, (void **)&mbuf,
 						1, 0);
 
 			mbuf = (struct rte_mbuf *)((uintptr_t)mbuf3 -
 				offsetof(struct rte_mbuf, buf_iova));
-			__mempool_check_cookies(mbuf->pool, (void **)&mbuf,
+			RTE_MEMPOOL_CHECK_COOKIES(mbuf->pool, (void **)&mbuf,
 						1, 0);
 			RTE_SET_USED(mbuf);
 		}
diff --git a/drivers/net/octeontx2/otx2_tx.h b/drivers/net/octeontx2/otx2_tx.h
index 486248dff7..de1be0093c 100644
--- a/drivers/net/octeontx2/otx2_tx.h
+++ b/drivers/net/octeontx2/otx2_tx.h
@@ -372,7 +372,7 @@ otx2_nix_xmit_prepare(struct rte_mbuf *m, uint64_t *cmd, const uint16_t flags,
 		}
 		/* Mark mempool object as "put" since it is freed by NIX */
 		if (!send_hdr->w0.df)
-			__mempool_check_cookies(m->pool, (void **)&m, 1, 0);
+			RTE_MEMPOOL_CHECK_COOKIES(m->pool, (void **)&m, 1, 0);
 	}
 }
 
@@ -450,7 +450,7 @@ otx2_nix_prepare_mseg(struct rte_mbuf *m, uint64_t *cmd, const uint16_t flags)
 		/* Mark mempool object as "put" since it is freed by NIX */
 #ifdef RTE_LIBRTE_MEMPOOL_DEBUG
 		if (!(sg_u & (1ULL << (i + 55))))
-			__mempool_check_cookies(m->pool, (void **)&m, 1, 0);
+			RTE_MEMPOOL_CHECK_COOKIES(m->pool, (void **)&m, 1, 0);
 		rte_io_wmb();
 #endif
 		slist++;
diff --git a/lib/mempool/rte_mempool.c b/lib/mempool/rte_mempool.c
index 19210c702c..638eaa5fa2 100644
--- a/lib/mempool/rte_mempool.c
+++ b/lib/mempool/rte_mempool.c
@@ -167,7 +167,7 @@ mempool_add_elem(struct rte_mempool *mp, __rte_unused void *opaque,
 
 #ifdef RTE_LIBRTE_MEMPOOL_DEBUG
 	hdr->cookie = RTE_MEMPOOL_HEADER_COOKIE2;
-	tlr = __mempool_get_trailer(obj);
+	tlr = rte_mempool_get_trailer(obj);
 	tlr->cookie = RTE_MEMPOOL_TRAILER_COOKIE;
 #endif
 }
@@ -1064,7 +1064,7 @@ void rte_mempool_check_cookies(const struct rte_mempool *mp,
 			rte_panic("MEMPOOL: object is owned by another "
 				  "mempool\n");
 
-		hdr = __mempool_get_header(obj);
+		hdr = rte_mempool_get_header(obj);
 		cookie = hdr->cookie;
 
 		if (free == 0) {
@@ -1092,7 +1092,7 @@ void rte_mempool_check_cookies(const struct rte_mempool *mp,
 				rte_panic("MEMPOOL: bad header cookie (audit)\n");
 			}
 		}
-		tlr = __mempool_get_trailer(obj);
+		tlr = rte_mempool_get_trailer(obj);
 		cookie = tlr->cookie;
 		if (cookie != RTE_MEMPOOL_TRAILER_COOKIE) {
 			RTE_LOG(CRIT, MEMPOOL,
@@ -1144,7 +1144,7 @@ static void
 mempool_obj_audit(struct rte_mempool *mp, __rte_unused void *opaque,
 	void *obj, __rte_unused unsigned idx)
 {
-	__mempool_check_cookies(mp, &obj, 1, 2);
+	RTE_MEMPOOL_CHECK_COOKIES(mp, &obj, 1, 2);
 }
 
 static void
diff --git a/lib/mempool/rte_mempool.h b/lib/mempool/rte_mempool.h
index 4725a40abe..11540c0d52 100644
--- a/lib/mempool/rte_mempool.h
+++ b/lib/mempool/rte_mempool.h
@@ -284,14 +284,14 @@ struct rte_mempool {
  *   Number to add to the object-oriented statistics.
  */
 #ifdef RTE_LIBRTE_MEMPOOL_DEBUG
-#define __MEMPOOL_STAT_ADD(mp, name, n) do {                    \
+#define RTE_MEMPOOL_STAT_ADD(mp, name, n) do {                  \
 		unsigned __lcore_id = rte_lcore_id();           \
 		if (__lcore_id < RTE_MAX_LCORE) {               \
 			mp->stats[__lcore_id].name += n;        \
 		}                                               \
-	} while(0)
+	} while (0)
 #else
-#define __MEMPOOL_STAT_ADD(mp, name, n) do {} while(0)
+#define RTE_MEMPOOL_STAT_ADD(mp, name, n) do {} while (0)
 #endif
 
 /**
@@ -307,7 +307,8 @@ struct rte_mempool {
 	(sizeof(struct rte_mempool_cache) * RTE_MAX_LCORE)))
 
 /* return the header of a mempool object (internal) */
-static inline struct rte_mempool_objhdr *__mempool_get_header(void *obj)
+static inline struct rte_mempool_objhdr *
+rte_mempool_get_header(void *obj)
 {
 	return (struct rte_mempool_objhdr *)RTE_PTR_SUB(obj,
 		sizeof(struct rte_mempool_objhdr));
@@ -324,12 +325,12 @@ static inline struct rte_mempool_objhdr *__mempool_get_header(void *obj)
  */
 static inline struct rte_mempool *rte_mempool_from_obj(void *obj)
 {
-	struct rte_mempool_objhdr *hdr = __mempool_get_header(obj);
+	struct rte_mempool_objhdr *hdr = rte_mempool_get_header(obj);
 	return hdr->mp;
 }
 
 /* return the trailer of a mempool object (internal) */
-static inline struct rte_mempool_objtlr *__mempool_get_trailer(void *obj)
+static inline struct rte_mempool_objtlr *rte_mempool_get_trailer(void *obj)
 {
 	struct rte_mempool *mp = rte_mempool_from_obj(obj);
 	return (struct rte_mempool_objtlr *)RTE_PTR_ADD(obj, mp->elt_size);
@@ -353,10 +354,10 @@ void rte_mempool_check_cookies(const struct rte_mempool *mp,
 	void * const *obj_table_const, unsigned n, int free);
 
 #ifdef RTE_LIBRTE_MEMPOOL_DEBUG
-#define __mempool_check_cookies(mp, obj_table_const, n, free) \
+#define RTE_MEMPOOL_CHECK_COOKIES(mp, obj_table_const, n, free) \
 	rte_mempool_check_cookies(mp, obj_table_const, n, free)
 #else
-#define __mempool_check_cookies(mp, obj_table_const, n, free) do {} while(0)
+#define RTE_MEMPOOL_CHECK_COOKIES(mp, obj_table_const, n, free) do {} while (0)
 #endif /* RTE_LIBRTE_MEMPOOL_DEBUG */
 
 /**
@@ -378,13 +379,13 @@ void rte_mempool_contig_blocks_check_cookies(const struct rte_mempool *mp,
 	void * const *first_obj_table_const, unsigned int n, int free);
 
 #ifdef RTE_LIBRTE_MEMPOOL_DEBUG
-#define __mempool_contig_blocks_check_cookies(mp, first_obj_table_const, n, \
-					      free) \
+#define RTE_MEMPOOL_CONTIG_BLOCKS_CHECK_COOKIES(mp, first_obj_table_const, n, \
+						free) \
 	rte_mempool_contig_blocks_check_cookies(mp, first_obj_table_const, n, \
 						free)
 #else
-#define __mempool_contig_blocks_check_cookies(mp, first_obj_table_const, n, \
-					      free) \
+#define RTE_MEMPOOL_CONTIG_BLOCKS_CHECK_COOKIES(mp, first_obj_table_const, n, \
+						free) \
 	do {} while (0)
 #endif /* RTE_LIBRTE_MEMPOOL_DEBUG */
 
@@ -719,8 +720,8 @@ rte_mempool_ops_dequeue_bulk(struct rte_mempool *mp,
 	ops = rte_mempool_get_ops(mp->ops_index);
 	ret = ops->dequeue(mp, obj_table, n);
 	if (ret == 0) {
-		__MEMPOOL_STAT_ADD(mp, get_common_pool_bulk, 1);
-		__MEMPOOL_STAT_ADD(mp, get_common_pool_objs, n);
+		RTE_MEMPOOL_STAT_ADD(mp, get_common_pool_bulk, 1);
+		RTE_MEMPOOL_STAT_ADD(mp, get_common_pool_objs, n);
 	}
 	return ret;
 }
@@ -769,8 +770,8 @@ rte_mempool_ops_enqueue_bulk(struct rte_mempool *mp, void * const *obj_table,
 {
 	struct rte_mempool_ops *ops;
 
-	__MEMPOOL_STAT_ADD(mp, put_common_pool_bulk, 1);
-	__MEMPOOL_STAT_ADD(mp, put_common_pool_objs, n);
+	RTE_MEMPOOL_STAT_ADD(mp, put_common_pool_bulk, 1);
+	RTE_MEMPOOL_STAT_ADD(mp, put_common_pool_objs, n);
 	rte_mempool_trace_ops_enqueue_bulk(mp, obj_table, n);
 	ops = rte_mempool_get_ops(mp->ops_index);
 	return ops->enqueue(mp, obj_table, n);
@@ -1295,14 +1296,14 @@ rte_mempool_cache_flush(struct rte_mempool_cache *cache,
  *   A pointer to a mempool cache structure. May be NULL if not needed.
  */
 static __rte_always_inline void
-__mempool_generic_put(struct rte_mempool *mp, void * const *obj_table,
-		      unsigned int n, struct rte_mempool_cache *cache)
+rte_mempool_do_generic_put(struct rte_mempool *mp, void * const *obj_table,
+			   unsigned int n, struct rte_mempool_cache *cache)
 {
 	void **cache_objs;
 
 	/* increment stat now, adding in mempool always success */
-	__MEMPOOL_STAT_ADD(mp, put_bulk, 1);
-	__MEMPOOL_STAT_ADD(mp, put_objs, n);
+	RTE_MEMPOOL_STAT_ADD(mp, put_bulk, 1);
+	RTE_MEMPOOL_STAT_ADD(mp, put_objs, n);
 
 	/* No cache provided or if put would overflow mem allocated for cache */
 	if (unlikely(cache == NULL || n > RTE_MEMPOOL_CACHE_MAX_SIZE))
@@ -1359,8 +1360,8 @@ rte_mempool_generic_put(struct rte_mempool *mp, void * const *obj_table,
 			unsigned int n, struct rte_mempool_cache *cache)
 {
 	rte_mempool_trace_generic_put(mp, obj_table, n, cache);
-	__mempool_check_cookies(mp, obj_table, n, 0);
-	__mempool_generic_put(mp, obj_table, n, cache);
+	RTE_MEMPOOL_CHECK_COOKIES(mp, obj_table, n, 0);
+	rte_mempool_do_generic_put(mp, obj_table, n, cache);
 }
 
 /**
@@ -1384,7 +1385,7 @@ rte_mempool_put_bulk(struct rte_mempool *mp, void * const *obj_table,
 	struct rte_mempool_cache *cache;
 	cache = rte_mempool_default_cache(mp, rte_lcore_id());
 	rte_mempool_trace_put_bulk(mp, obj_table, n, cache);
-	rte_mempool_generic_put(mp, obj_table, n, cache);
+	rte_mempool_do_generic_put(mp, obj_table, n, cache);
 }
 
 /**
@@ -1420,8 +1421,8 @@ rte_mempool_put(struct rte_mempool *mp, void *obj)
  *   - <0: Error; code of ring dequeue function.
  */
 static __rte_always_inline int
-__mempool_generic_get(struct rte_mempool *mp, void **obj_table,
-		      unsigned int n, struct rte_mempool_cache *cache)
+rte_mempool_do_generic_get(struct rte_mempool *mp, void **obj_table,
+			   unsigned int n, struct rte_mempool_cache *cache)
 {
 	int ret;
 	uint32_t index, len;
@@ -1460,8 +1461,8 @@ __mempool_generic_get(struct rte_mempool *mp, void **obj_table,
 
 	cache->len -= n;
 
-	__MEMPOOL_STAT_ADD(mp, get_success_bulk, 1);
-	__MEMPOOL_STAT_ADD(mp, get_success_objs, n);
+	RTE_MEMPOOL_STAT_ADD(mp, get_success_bulk, 1);
+	RTE_MEMPOOL_STAT_ADD(mp, get_success_objs, n);
 
 	return 0;
 
@@ -1471,11 +1472,11 @@ __mempool_generic_get(struct rte_mempool *mp, void **obj_table,
 	ret = rte_mempool_ops_dequeue_bulk(mp, obj_table, n);
 
 	if (ret < 0) {
-		__MEMPOOL_STAT_ADD(mp, get_fail_bulk, 1);
-		__MEMPOOL_STAT_ADD(mp, get_fail_objs, n);
+		RTE_MEMPOOL_STAT_ADD(mp, get_fail_bulk, 1);
+		RTE_MEMPOOL_STAT_ADD(mp, get_fail_objs, n);
 	} else {
-		__MEMPOOL_STAT_ADD(mp, get_success_bulk, 1);
-		__MEMPOOL_STAT_ADD(mp, get_success_objs, n);
+		RTE_MEMPOOL_STAT_ADD(mp, get_success_bulk, 1);
+		RTE_MEMPOOL_STAT_ADD(mp, get_success_objs, n);
 	}
 
 	return ret;
@@ -1506,9 +1507,9 @@ rte_mempool_generic_get(struct rte_mempool *mp, void **obj_table,
 			unsigned int n, struct rte_mempool_cache *cache)
 {
 	int ret;
-	ret = __mempool_generic_get(mp, obj_table, n, cache);
+	ret = rte_mempool_do_generic_get(mp, obj_table, n, cache);
 	if (ret == 0)
-		__mempool_check_cookies(mp, obj_table, n, 1);
+		RTE_MEMPOOL_CHECK_COOKIES(mp, obj_table, n, 1);
 	rte_mempool_trace_generic_get(mp, obj_table, n, cache);
 	return ret;
 }
@@ -1541,7 +1542,7 @@ rte_mempool_get_bulk(struct rte_mempool *mp, void **obj_table, unsigned int n)
 	struct rte_mempool_cache *cache;
 	cache = rte_mempool_default_cache(mp, rte_lcore_id());
 	rte_mempool_trace_get_bulk(mp, obj_table, n, cache);
-	return rte_mempool_generic_get(mp, obj_table, n, cache);
+	return rte_mempool_do_generic_get(mp, obj_table, n, cache);
 }
 
 /**
@@ -1599,13 +1600,13 @@ rte_mempool_get_contig_blocks(struct rte_mempool *mp,
 
 	ret = rte_mempool_ops_dequeue_contig_blocks(mp, first_obj_table, n);
 	if (ret == 0) {
-		__MEMPOOL_STAT_ADD(mp, get_success_bulk, 1);
-		__MEMPOOL_STAT_ADD(mp, get_success_blks, n);
-		__mempool_contig_blocks_check_cookies(mp, first_obj_table, n,
-						      1);
+		RTE_MEMPOOL_STAT_ADD(mp, get_success_bulk, 1);
+		RTE_MEMPOOL_STAT_ADD(mp, get_success_blks, n);
+		RTE_MEMPOOL_CONTIG_BLOCKS_CHECK_COOKIES(mp, first_obj_table, n,
+							1);
 	} else {
-		__MEMPOOL_STAT_ADD(mp, get_fail_bulk, 1);
-		__MEMPOOL_STAT_ADD(mp, get_fail_blks, n);
+		RTE_MEMPOOL_STAT_ADD(mp, get_fail_bulk, 1);
+		RTE_MEMPOOL_STAT_ADD(mp, get_fail_blks, n);
 	}
 
 	rte_mempool_trace_get_contig_blocks(mp, first_obj_table, n);
-- 
2.30.2


^ permalink raw reply	[flat|nested] 53+ messages in thread

* [dpdk-dev] [PATCH 4/6] mempool: make header size calculation internal
  2021-10-18 14:49 [dpdk-dev] [PATCH 0/6] mempool: cleanup namespace Andrew Rybchenko
                   ` (2 preceding siblings ...)
  2021-10-18 14:49 ` [dpdk-dev] [PATCH 3/6] mempool: add namespace to internal but still visible API Andrew Rybchenko
@ 2021-10-18 14:49 ` Andrew Rybchenko
  2021-10-19  8:48   ` David Marchand
  2021-10-18 14:49 ` [dpdk-dev] [PATCH 5/6] mempool: add namespace to driver register macro Andrew Rybchenko
                   ` (3 subsequent siblings)
  7 siblings, 1 reply; 53+ messages in thread
From: Andrew Rybchenko @ 2021-10-18 14:49 UTC (permalink / raw)
  To: Olivier Matz, David Marchand, Ray Kinsella; +Cc: dev

Add RTE_ prefix to helper macro to calculate mempool header size and
make it internal. Old macro is still available, but deprecated.

Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
---
 app/test/test_mempool.c                |  2 +-
 doc/guides/rel_notes/deprecation.rst   |  4 ++++
 doc/guides/rel_notes/release_21_11.rst |  3 +++
 lib/mempool/rte_mempool.c              |  6 +++---
 lib/mempool/rte_mempool.h              | 10 +++++++---
 5 files changed, 18 insertions(+), 7 deletions(-)

diff --git a/app/test/test_mempool.c b/app/test/test_mempool.c
index ffe69e2d03..8ecd0f10b8 100644
--- a/app/test/test_mempool.c
+++ b/app/test/test_mempool.c
@@ -111,7 +111,7 @@ test_mempool_basic(struct rte_mempool *mp, int use_external_cache)
 
 	printf("get private data\n");
 	if (rte_mempool_get_priv(mp) != (char *)mp +
-			MEMPOOL_HEADER_SIZE(mp, mp->cache_size))
+			RTE_MEMPOOL_HEADER_SIZE(mp, mp->cache_size))
 		GOTO_ERR(ret, out);
 
 #ifndef RTE_EXEC_ENV_FREEBSD /* rte_mem_virt2iova() not supported on bsd */
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 83a453b9bc..33ad418be7 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -43,6 +43,10 @@ Deprecation Notices
   removed in DPDK 22.11. Corresponding flags with ``RTE_MEMPOOL_F_*``
   should be used instead.
 
+* mempool: Helper macro ``MEMPOOL_HEADER_SIZE()`` is deprecated and will
+  be removed in DPDK 22.11. The replacement macro
+  ``RTE_MEMPOOL_HEADER_SIZE()`` is internal only.
+
 * mbuf: The mbuf offload flags ``PKT_*`` will be renamed as ``RTE_MBUF_F_*``.
   A compatibility layer will be kept until DPDK 22.11, except for the flags
   that are already deprecated (``PKT_RX_L4_CKSUM_BAD``, ``PKT_RX_IP_CKSUM_BAD``,
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 84bcad0e4a..dae421225b 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -267,6 +267,9 @@ API Changes
 * mempool: The mempool flags ``MEMPOOL_F_*`` are deprecated.
   Newly added flags with ``RTE_MEMPOOL_F_`` prefix should be used instead.
 
+* mempool: Helper macro ``MEMPOOL_HEADER_SIZE()`` is deprecated.
+  The replacement macro ``RTE_MEMPOOL_HEADER_SIZE()`` is internal only.
+
 * net: Renamed ``s_addr`` and ``d_addr`` fields of ``rte_ether_hdr`` structure
   to ``src_addr`` and ``dst_addr``, respectively.
 
diff --git a/lib/mempool/rte_mempool.c b/lib/mempool/rte_mempool.c
index 638eaa5fa2..4e3a15e49c 100644
--- a/lib/mempool/rte_mempool.c
+++ b/lib/mempool/rte_mempool.c
@@ -861,7 +861,7 @@ rte_mempool_create_empty(const char *name, unsigned n, unsigned elt_size,
 		goto exit_unlock;
 	}
 
-	mempool_size = MEMPOOL_HEADER_SIZE(mp, cache_size);
+	mempool_size = RTE_MEMPOOL_HEADER_SIZE(mp, cache_size);
 	mempool_size += private_data_size;
 	mempool_size = RTE_ALIGN_CEIL(mempool_size, RTE_MEMPOOL_ALIGN);
 
@@ -877,7 +877,7 @@ rte_mempool_create_empty(const char *name, unsigned n, unsigned elt_size,
 
 	/* init the mempool structure */
 	mp = mz->addr;
-	memset(mp, 0, MEMPOOL_HEADER_SIZE(mp, cache_size));
+	memset(mp, 0, RTE_MEMPOOL_HEADER_SIZE(mp, cache_size));
 	ret = strlcpy(mp->name, name, sizeof(mp->name));
 	if (ret < 0 || ret >= (int)sizeof(mp->name)) {
 		rte_errno = ENAMETOOLONG;
@@ -901,7 +901,7 @@ rte_mempool_create_empty(const char *name, unsigned n, unsigned elt_size,
 	 * The local_cache points to just past the elt_pa[] array.
 	 */
 	mp->local_cache = (struct rte_mempool_cache *)
-		RTE_PTR_ADD(mp, MEMPOOL_HEADER_SIZE(mp, 0));
+		RTE_PTR_ADD(mp, RTE_MEMPOOL_HEADER_SIZE(mp, 0));
 
 	/* Init all default caches. */
 	if (cache_size != 0) {
diff --git a/lib/mempool/rte_mempool.h b/lib/mempool/rte_mempool.h
index 11540c0d52..b1dbcf7361 100644
--- a/lib/mempool/rte_mempool.h
+++ b/lib/mempool/rte_mempool.h
@@ -295,17 +295,21 @@ struct rte_mempool {
 #endif
 
 /**
- * Calculate the size of the mempool header.
+ * @internal Calculate the size of the mempool header.
  *
  * @param mp
  *   Pointer to the memory pool.
  * @param cs
  *   Size of the per-lcore cache.
  */
-#define MEMPOOL_HEADER_SIZE(mp, cs) \
+#define RTE_MEMPOOL_HEADER_SIZE(mp, cs) \
 	(sizeof(*(mp)) + (((cs) == 0) ? 0 : \
 	(sizeof(struct rte_mempool_cache) * RTE_MAX_LCORE)))
 
+/** Deprecated. Use RTE_MEMPOOL_HEADER_SIZE() for internal purposes only. */
+#define MEMPOOL_HEADER_SIZE(mp, cs) \
+	RTE_DEPRECATED(RTE_MEMPOOL_HEADER_SIZE(mp, cs))
+
 /* return the header of a mempool object (internal) */
 static inline struct rte_mempool_objhdr *
 rte_mempool_get_header(void *obj)
@@ -1722,7 +1726,7 @@ void rte_mempool_audit(struct rte_mempool *mp);
 static inline void *rte_mempool_get_priv(struct rte_mempool *mp)
 {
 	return (char *)mp +
-		MEMPOOL_HEADER_SIZE(mp, mp->cache_size);
+		RTE_MEMPOOL_HEADER_SIZE(mp, mp->cache_size);
 }
 
 /**
-- 
2.30.2


^ permalink raw reply	[flat|nested] 53+ messages in thread

* [dpdk-dev] [PATCH 5/6] mempool: add namespace to driver register macro
  2021-10-18 14:49 [dpdk-dev] [PATCH 0/6] mempool: cleanup namespace Andrew Rybchenko
                   ` (3 preceding siblings ...)
  2021-10-18 14:49 ` [dpdk-dev] [PATCH 4/6] mempool: make header size calculation internal Andrew Rybchenko
@ 2021-10-18 14:49 ` Andrew Rybchenko
  2021-10-19  8:49   ` David Marchand
  2021-10-18 14:49 ` [dpdk-dev] [PATCH 6/6] mempool: deprecate unused defines Andrew Rybchenko
                   ` (2 subsequent siblings)
  7 siblings, 1 reply; 53+ messages in thread
From: Andrew Rybchenko @ 2021-10-18 14:49 UTC (permalink / raw)
  To: Olivier Matz, David Marchand, Ray Kinsella, Artem V. Andreev,
	Ashwin Sekhar T K, Pavan Nikhilesh, Hemant Agrawal,
	Sachin Saxena, Harman Kalra, Jerin Jacob, Nithin Dabilpuram
  Cc: dev

Add RTE_ prefix to macro used to register mempool driver.
The old one is still available but deprecated.

Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
---
 doc/guides/prog_guide/mempool_lib.rst           |  2 +-
 doc/guides/rel_notes/deprecation.rst            |  4 ++++
 doc/guides/rel_notes/release_21_11.rst          |  3 +++
 drivers/mempool/bucket/rte_mempool_bucket.c     |  2 +-
 drivers/mempool/cnxk/cn10k_mempool_ops.c        |  2 +-
 drivers/mempool/cnxk/cn9k_mempool_ops.c         |  2 +-
 drivers/mempool/dpaa/dpaa_mempool.c             |  2 +-
 drivers/mempool/dpaa2/dpaa2_hw_mempool.c        |  2 +-
 drivers/mempool/octeontx/rte_mempool_octeontx.c |  2 +-
 drivers/mempool/octeontx2/otx2_mempool_ops.c    |  2 +-
 drivers/mempool/ring/rte_mempool_ring.c         | 12 ++++++------
 drivers/mempool/stack/rte_mempool_stack.c       |  4 ++--
 lib/mempool/rte_mempool.h                       |  6 +++++-
 13 files changed, 28 insertions(+), 17 deletions(-)

diff --git a/doc/guides/prog_guide/mempool_lib.rst b/doc/guides/prog_guide/mempool_lib.rst
index 890535eb23..55838317b9 100644
--- a/doc/guides/prog_guide/mempool_lib.rst
+++ b/doc/guides/prog_guide/mempool_lib.rst
@@ -115,7 +115,7 @@ management systems and software based memory allocators, to be used with DPDK.
 There are two aspects to a mempool handler.
 
 * Adding the code for your new mempool operations (ops). This is achieved by
-  adding a new mempool ops code, and using the ``MEMPOOL_REGISTER_OPS`` macro.
+  adding a new mempool ops code, and using the ``RTE_MEMPOOL_REGISTER_OPS`` macro.
 
 * Using the new API to call ``rte_mempool_create_empty()`` and
   ``rte_mempool_set_ops_byname()`` to create a new mempool and specifying which
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 33ad418be7..f75b23ef03 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -47,6 +47,10 @@ Deprecation Notices
   be removed in DPDK 22.11. The replacement macro
   ``RTE_MEMPOOL_HEADER_SIZE()`` is internal only.
 
+* mempool: Macro to register mempool driver ``MEMPOOL_REGISTER_OPS()`` is
+  deprecated and will be removed in DPDK 22.11. Use replacement macro
+  ``RTE_MEMPOOL_REGISTER_OPS()``.
+
 * mbuf: The mbuf offload flags ``PKT_*`` will be renamed as ``RTE_MBUF_F_*``.
   A compatibility layer will be kept until DPDK 22.11, except for the flags
   that are already deprecated (``PKT_RX_L4_CKSUM_BAD``, ``PKT_RX_IP_CKSUM_BAD``,
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index dae421225b..a679bb90e3 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -270,6 +270,9 @@ API Changes
 * mempool: Helper macro ``MEMPOOL_HEADER_SIZE()`` is deprecated.
   The replacement macro ``RTE_MEMPOOL_HEADER_SIZE()`` is internal only.
 
+* mempool: Macro to register mempool driver ``MEMPOOL_REGISTER_OPS()`` is
+  deprecated.  Use replacement ``RTE_MEMPOOL_REGISTER_OPS()``.
+
 * net: Renamed ``s_addr`` and ``d_addr`` fields of ``rte_ether_hdr`` structure
   to ``src_addr`` and ``dst_addr``, respectively.
 
diff --git a/drivers/mempool/bucket/rte_mempool_bucket.c b/drivers/mempool/bucket/rte_mempool_bucket.c
index 8ff9e53007..c0b480bfc7 100644
--- a/drivers/mempool/bucket/rte_mempool_bucket.c
+++ b/drivers/mempool/bucket/rte_mempool_bucket.c
@@ -663,4 +663,4 @@ static const struct rte_mempool_ops ops_bucket = {
 };
 
 
-MEMPOOL_REGISTER_OPS(ops_bucket);
+RTE_MEMPOOL_REGISTER_OPS(ops_bucket);
diff --git a/drivers/mempool/cnxk/cn10k_mempool_ops.c b/drivers/mempool/cnxk/cn10k_mempool_ops.c
index 95458b34b7..4c669b878f 100644
--- a/drivers/mempool/cnxk/cn10k_mempool_ops.c
+++ b/drivers/mempool/cnxk/cn10k_mempool_ops.c
@@ -316,4 +316,4 @@ static struct rte_mempool_ops cn10k_mempool_ops = {
 	.populate = cnxk_mempool_populate,
 };
 
-MEMPOOL_REGISTER_OPS(cn10k_mempool_ops);
+RTE_MEMPOOL_REGISTER_OPS(cn10k_mempool_ops);
diff --git a/drivers/mempool/cnxk/cn9k_mempool_ops.c b/drivers/mempool/cnxk/cn9k_mempool_ops.c
index c0cdba640b..b7967f8085 100644
--- a/drivers/mempool/cnxk/cn9k_mempool_ops.c
+++ b/drivers/mempool/cnxk/cn9k_mempool_ops.c
@@ -86,4 +86,4 @@ static struct rte_mempool_ops cn9k_mempool_ops = {
 	.populate = cnxk_mempool_populate,
 };
 
-MEMPOOL_REGISTER_OPS(cn9k_mempool_ops);
+RTE_MEMPOOL_REGISTER_OPS(cn9k_mempool_ops);
diff --git a/drivers/mempool/dpaa/dpaa_mempool.c b/drivers/mempool/dpaa/dpaa_mempool.c
index f02056982c..f17aff9655 100644
--- a/drivers/mempool/dpaa/dpaa_mempool.c
+++ b/drivers/mempool/dpaa/dpaa_mempool.c
@@ -358,4 +358,4 @@ static const struct rte_mempool_ops dpaa_mpool_ops = {
 	.populate = dpaa_populate,
 };
 
-MEMPOOL_REGISTER_OPS(dpaa_mpool_ops);
+RTE_MEMPOOL_REGISTER_OPS(dpaa_mpool_ops);
diff --git a/drivers/mempool/dpaa2/dpaa2_hw_mempool.c b/drivers/mempool/dpaa2/dpaa2_hw_mempool.c
index 771e0a0e28..39c6252a63 100644
--- a/drivers/mempool/dpaa2/dpaa2_hw_mempool.c
+++ b/drivers/mempool/dpaa2/dpaa2_hw_mempool.c
@@ -455,6 +455,6 @@ static const struct rte_mempool_ops dpaa2_mpool_ops = {
 	.populate = dpaa2_populate,
 };
 
-MEMPOOL_REGISTER_OPS(dpaa2_mpool_ops);
+RTE_MEMPOOL_REGISTER_OPS(dpaa2_mpool_ops);
 
 RTE_LOG_REGISTER_DEFAULT(dpaa2_logtype_mempool, NOTICE);
diff --git a/drivers/mempool/octeontx/rte_mempool_octeontx.c b/drivers/mempool/octeontx/rte_mempool_octeontx.c
index bd00700202..f4de1c8412 100644
--- a/drivers/mempool/octeontx/rte_mempool_octeontx.c
+++ b/drivers/mempool/octeontx/rte_mempool_octeontx.c
@@ -202,4 +202,4 @@ static struct rte_mempool_ops octeontx_fpavf_ops = {
 	.populate = octeontx_fpavf_populate,
 };
 
-MEMPOOL_REGISTER_OPS(octeontx_fpavf_ops);
+RTE_MEMPOOL_REGISTER_OPS(octeontx_fpavf_ops);
diff --git a/drivers/mempool/octeontx2/otx2_mempool_ops.c b/drivers/mempool/octeontx2/otx2_mempool_ops.c
index d827fd8c7b..332e4f1cb2 100644
--- a/drivers/mempool/octeontx2/otx2_mempool_ops.c
+++ b/drivers/mempool/octeontx2/otx2_mempool_ops.c
@@ -898,4 +898,4 @@ static struct rte_mempool_ops otx2_npa_ops = {
 #endif
 };
 
-MEMPOOL_REGISTER_OPS(otx2_npa_ops);
+RTE_MEMPOOL_REGISTER_OPS(otx2_npa_ops);
diff --git a/drivers/mempool/ring/rte_mempool_ring.c b/drivers/mempool/ring/rte_mempool_ring.c
index 4b785971c4..c6aa935eea 100644
--- a/drivers/mempool/ring/rte_mempool_ring.c
+++ b/drivers/mempool/ring/rte_mempool_ring.c
@@ -198,9 +198,9 @@ static const struct rte_mempool_ops ops_mt_hts = {
 	.get_count = common_ring_get_count,
 };
 
-MEMPOOL_REGISTER_OPS(ops_mp_mc);
-MEMPOOL_REGISTER_OPS(ops_sp_sc);
-MEMPOOL_REGISTER_OPS(ops_mp_sc);
-MEMPOOL_REGISTER_OPS(ops_sp_mc);
-MEMPOOL_REGISTER_OPS(ops_mt_rts);
-MEMPOOL_REGISTER_OPS(ops_mt_hts);
+RTE_MEMPOOL_REGISTER_OPS(ops_mp_mc);
+RTE_MEMPOOL_REGISTER_OPS(ops_sp_sc);
+RTE_MEMPOOL_REGISTER_OPS(ops_mp_sc);
+RTE_MEMPOOL_REGISTER_OPS(ops_sp_mc);
+RTE_MEMPOOL_REGISTER_OPS(ops_mt_rts);
+RTE_MEMPOOL_REGISTER_OPS(ops_mt_hts);
diff --git a/drivers/mempool/stack/rte_mempool_stack.c b/drivers/mempool/stack/rte_mempool_stack.c
index 7e85c8d6b6..1476905227 100644
--- a/drivers/mempool/stack/rte_mempool_stack.c
+++ b/drivers/mempool/stack/rte_mempool_stack.c
@@ -93,5 +93,5 @@ static struct rte_mempool_ops ops_lf_stack = {
 	.get_count = stack_get_count
 };
 
-MEMPOOL_REGISTER_OPS(ops_stack);
-MEMPOOL_REGISTER_OPS(ops_lf_stack);
+RTE_MEMPOOL_REGISTER_OPS(ops_stack);
+RTE_MEMPOOL_REGISTER_OPS(ops_lf_stack);
diff --git a/lib/mempool/rte_mempool.h b/lib/mempool/rte_mempool.h
index b1dbcf7361..eea91b20fb 100644
--- a/lib/mempool/rte_mempool.h
+++ b/lib/mempool/rte_mempool.h
@@ -905,12 +905,16 @@ int rte_mempool_register_ops(const struct rte_mempool_ops *ops);
  * Note that the rte_mempool_register_ops fails silently here when
  * more than RTE_MEMPOOL_MAX_OPS_IDX is registered.
  */
-#define MEMPOOL_REGISTER_OPS(ops)				\
+#define RTE_MEMPOOL_REGISTER_OPS(ops)				\
 	RTE_INIT(mp_hdlr_init_##ops)				\
 	{							\
 		rte_mempool_register_ops(&ops);			\
 	}
 
+/** Deprecated. Use RTE_MEMPOOL_REGISTER_OPS() instead. */
+#define MEMPOOL_REGISTER_OPS(ops) \
+	RTE_DEPRECATED(RTE_MEMPOOL_REGISTER_OPS(ops))
+
 /**
  * An object callback function for mempool.
  *
-- 
2.30.2


^ permalink raw reply	[flat|nested] 53+ messages in thread

* [dpdk-dev] [PATCH 6/6] mempool: deprecate unused defines
  2021-10-18 14:49 [dpdk-dev] [PATCH 0/6] mempool: cleanup namespace Andrew Rybchenko
                   ` (4 preceding siblings ...)
  2021-10-18 14:49 ` [dpdk-dev] [PATCH 5/6] mempool: add namespace to driver register macro Andrew Rybchenko
@ 2021-10-18 14:49 ` Andrew Rybchenko
  2021-10-19 10:08 ` [dpdk-dev] [PATCH v2 0/6] mempool: cleanup namespace Andrew Rybchenko
  2021-10-19 17:40 ` [dpdk-dev] [PATCH v3 0/6] mempool: cleanup namespace Andrew Rybchenko
  7 siblings, 0 replies; 53+ messages in thread
From: Andrew Rybchenko @ 2021-10-18 14:49 UTC (permalink / raw)
  To: Olivier Matz, David Marchand, Ray Kinsella; +Cc: dev

MEMPOOL_PG_NUM_DEFAULT and MEMPOOL_PG_SHIFT_MAX are not used.

Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
---
 doc/guides/contributing/documentation.rst | 4 ++--
 doc/guides/rel_notes/deprecation.rst      | 3 +++
 doc/guides/rel_notes/release_21_11.rst    | 3 +++
 lib/mempool/rte_mempool.h                 | 7 ++++---
 4 files changed, 12 insertions(+), 5 deletions(-)

diff --git a/doc/guides/contributing/documentation.rst b/doc/guides/contributing/documentation.rst
index 8cbd4a0f6f..7fcbb7fc43 100644
--- a/doc/guides/contributing/documentation.rst
+++ b/doc/guides/contributing/documentation.rst
@@ -705,7 +705,7 @@ The following are some guidelines for use of Doxygen in the DPDK API documentati
      /**< Virtual address of the first mempool object. */
      uintptr_t   elt_va_end;
      /**< Virtual address of the <size + 1> mempool object. */
-     phys_addr_t elt_pa[MEMPOOL_PG_NUM_DEFAULT];
+     phys_addr_t elt_pa[1];
      /**< Array of physical page addresses for the mempool buffer. */
 
   This doesn't have an effect on the rendered documentation but it is confusing for the developer reading the code.
@@ -724,7 +724,7 @@ The following are some guidelines for use of Doxygen in the DPDK API documentati
      /** Virtual address of the <size + 1> mempool object. */
      uintptr_t   elt_va_end;
      /** Array of physical page addresses for the mempool buffer. */
-     phys_addr_t elt_pa[MEMPOOL_PG_NUM_DEFAULT];
+     phys_addr_t elt_pa[1];
 
 * Read the rendered section of the documentation that you have added for correctness, clarity and consistency
   with the surrounding text.
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index f75b23ef03..c2bd0cd736 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -51,6 +51,9 @@ Deprecation Notices
   deprecated and will be removed in DPDK 22.11. Use replacement macro
   ``RTE_MEMPOOL_REGISTER_OPS()``.
 
+* mempool: The mempool API macros ``MEMPOOL_PG_*`` are deprecated and
+  will be removed in DPDK 22.11.
+
 * mbuf: The mbuf offload flags ``PKT_*`` will be renamed as ``RTE_MBUF_F_*``.
   A compatibility layer will be kept until DPDK 22.11, except for the flags
   that are already deprecated (``PKT_RX_L4_CKSUM_BAD``, ``PKT_RX_IP_CKSUM_BAD``,
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index a679bb90e3..c302d9e664 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -273,6 +273,9 @@ API Changes
 * mempool: Macro to register mempool driver ``MEMPOOL_REGISTER_OPS()`` is
   deprecated.  Use replacement ``RTE_MEMPOOL_REGISTER_OPS()``.
 
+* mempool: The mempool API macros ``MEMPOOL_PG_*`` are deprecated and
+  will be removed in DPDK 22.11.
+
 * net: Renamed ``s_addr`` and ``d_addr`` fields of ``rte_ether_hdr`` structure
   to ``src_addr`` and ``dst_addr``, respectively.
 
diff --git a/lib/mempool/rte_mempool.h b/lib/mempool/rte_mempool.h
index eea91b20fb..4bb7349322 100644
--- a/lib/mempool/rte_mempool.h
+++ b/lib/mempool/rte_mempool.h
@@ -116,10 +116,11 @@ struct rte_mempool_objsz {
 /* "MP_<name>" */
 #define	RTE_MEMPOOL_MZ_FORMAT	RTE_MEMPOOL_MZ_PREFIX "%s"
 
-#define	MEMPOOL_PG_SHIFT_MAX	(sizeof(uintptr_t) * CHAR_BIT - 1)
+#define	MEMPOOL_PG_SHIFT_MAX \
+	RTE_DEPRECATED(sizeof(uintptr_t) * CHAR_BIT - 1)
 
-/** Mempool over one chunk of physically continuous memory */
-#define	MEMPOOL_PG_NUM_DEFAULT	1
+/** Deprecated. Mempool over one chunk of physically continuous memory */
+#define	MEMPOOL_PG_NUM_DEFAULT	RTE_DEPRECATED(1)
 
 #ifndef RTE_MEMPOOL_ALIGN
 /**
-- 
2.30.2


^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [dpdk-dev] [PATCH 3/6] mempool: add namespace to internal but still visible API
  2021-10-18 14:49 ` [dpdk-dev] [PATCH 3/6] mempool: add namespace to internal but still visible API Andrew Rybchenko
@ 2021-10-19  8:47   ` David Marchand
  2021-10-19  9:10     ` Andrew Rybchenko
  0 siblings, 1 reply; 53+ messages in thread
From: David Marchand @ 2021-10-19  8:47 UTC (permalink / raw)
  To: Andrew Rybchenko
  Cc: Olivier Matz, Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K,
	Sunil Kumar Kori, Satha Rao, Harman Kalra, Anoob Joseph, dev

On Mon, Oct 18, 2021 at 4:49 PM Andrew Rybchenko
<andrew.rybchenko@oktetlabs.ru> wrote:
>
> Add RTE_ prefix to internal API defined in public header.
> Use the prefix instead of double underscore.
> Use uppercase for macros in the case of name conflict.

Fwiw, I see no use out of dpdk for those helpers/macros.

$ git grep-all -E
'\<(__MEMPOOL_STAT_ADD|__mempool_contig_blocks_check_cookies|__mempool_check_cookies|__mempool_generic_get|__mempool_generic_put|__mempool_get_trailer|__mempool_get_header)\>'

Not a review, just something that caught my eye below:

[snip]

> @@ -1384,7 +1385,7 @@ rte_mempool_put_bulk(struct rte_mempool *mp, void * const *obj_table,
>         struct rte_mempool_cache *cache;
>         cache = rte_mempool_default_cache(mp, rte_lcore_id());
>         rte_mempool_trace_put_bulk(mp, obj_table, n, cache);
> -       rte_mempool_generic_put(mp, obj_table, n, cache);
> +       rte_mempool_do_generic_put(mp, obj_table, n, cache);

Is this change expected?


>  }
>
>  /**

[snip]


> @@ -1541,7 +1542,7 @@ rte_mempool_get_bulk(struct rte_mempool *mp, void **obj_table, unsigned int n)
>         struct rte_mempool_cache *cache;
>         cache = rte_mempool_default_cache(mp, rte_lcore_id());
>         rte_mempool_trace_get_bulk(mp, obj_table, n, cache);
> -       return rte_mempool_generic_get(mp, obj_table, n, cache);
> +       return rte_mempool_do_generic_get(mp, obj_table, n, cache);
>  }
>
>  /**

Idem.


-- 
David Marchand


^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [dpdk-dev] [PATCH 4/6] mempool: make header size calculation internal
  2021-10-18 14:49 ` [dpdk-dev] [PATCH 4/6] mempool: make header size calculation internal Andrew Rybchenko
@ 2021-10-19  8:48   ` David Marchand
  2021-10-19  8:59     ` Andrew Rybchenko
  0 siblings, 1 reply; 53+ messages in thread
From: David Marchand @ 2021-10-19  8:48 UTC (permalink / raw)
  To: Andrew Rybchenko; +Cc: Olivier Matz, Ray Kinsella, dev

On Mon, Oct 18, 2021 at 4:49 PM Andrew Rybchenko
<andrew.rybchenko@oktetlabs.ru> wrote:
>
> Add RTE_ prefix to helper macro to calculate mempool header size and
> make it internal. Old macro is still available, but deprecated.
>
> Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>

No reference to this macro out there:
$ git grep-all -w MEMPOOL_HEADER_SIZE


The change looks fine to me.
I just wonder if we really need to expose this helper, is it
performance sensitive?

-- 
David marchand


^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [dpdk-dev] [PATCH 5/6] mempool: add namespace to driver register macro
  2021-10-18 14:49 ` [dpdk-dev] [PATCH 5/6] mempool: add namespace to driver register macro Andrew Rybchenko
@ 2021-10-19  8:49   ` David Marchand
  2021-10-19  9:04     ` Andrew Rybchenko
  0 siblings, 1 reply; 53+ messages in thread
From: David Marchand @ 2021-10-19  8:49 UTC (permalink / raw)
  To: Andrew Rybchenko
  Cc: Olivier Matz, Ray Kinsella, Artem V. Andreev, Ashwin Sekhar T K,
	Pavan Nikhilesh, Hemant Agrawal, Sachin Saxena, Harman Kalra,
	Jerin Jacob, Nithin Dabilpuram, dev, Thomas Monjalon

On Mon, Oct 18, 2021 at 4:50 PM Andrew Rybchenko
<andrew.rybchenko@oktetlabs.ru> wrote:
>
> Add RTE_ prefix to macro used to register mempool driver.
> The old one is still available but deprecated.

ODP seems to use its own mempools.

$ git grep-all -w MEMPOOL_REGISTER_OPS
OpenDataplane/platform/linux-generic/pktio/dpdk.c:MEMPOOL_REGISTER_OPS(odp_pool_ops);

I'd say it counts as a driver macro.
If so, we could hide it in a driver-only header, along with
rte_mempool_register_ops getting marked as internal.

$ git grep-all -w rte_mempool_register_ops
FD.io-VPP/src/plugins/dpdk/buffer.c:  rte_mempool_register_ops (&ops);
FD.io-VPP/src/plugins/dpdk/buffer.c:  rte_mempool_register_ops (&ops);



-- 
David Marchand


^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [dpdk-dev] [PATCH 2/6] mempool: add namespace prefix to flags
  2021-10-18 14:49 ` [dpdk-dev] [PATCH 2/6] mempool: add namespace prefix to flags Andrew Rybchenko
@ 2021-10-19  8:52   ` David Marchand
  2021-10-19  9:40     ` Thomas Monjalon
  0 siblings, 1 reply; 53+ messages in thread
From: David Marchand @ 2021-10-19  8:52 UTC (permalink / raw)
  To: Andrew Rybchenko, Thomas Monjalon
  Cc: Olivier Matz, Maryam Tahhan, Reshma Pattan, Xiaoyun Li,
	Ray Kinsella, Pavan Nikhilesh, Shijith Thotton, Jerin Jacob,
	Artem V. Andreev, Nithin Dabilpuram, Kiran Kumar K,
	Maciej Czekaj, Radha Mohan Chintakuntla, Veerasenareddy Burru,
	Maxime Coquelin, Chenbo Xia, dev

On Mon, Oct 18, 2021 at 4:49 PM Andrew Rybchenko
<andrew.rybchenko@oktetlabs.ru> wrote:
>
> Fix the mempool flgas namespace by adding an RTE_ prefix to the name.
> The old flags remain usable, but a deprecation warning is issued at
> compilation.

We have a build failure in CI for SPDK.
This is most probably (I can't find the full compilation error in
logs..) because of the deprecation of MEMPOOL_F_NO_IOVA_CONTIG.


$ git grep-all -E
'\<(MEMPOOL_F_NO_IOVA_CONTIG|MEMPOOL_F_POOL_CREATED|MEMPOOL_F_SC_GET|MEMPOOL_F_SP_PUT|MEMPOOL_F_NO_CACHE_ALIGN|MEMPOOL_F_NO_SPREAD)\>'
BESS/core/packet_pool.cc:  pool_->flags |= MEMPOOL_F_NO_IOVA_CONTIG;
gatekeeper/cps/main.c:        socket_id, MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET);
gatekeeper/cps/main.c:        socket_id, MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET);
mTcp/mtcp/src/dpdk_module.c:                rte_socket_id(), MEMPOOL_F_SP_PUT |
mTcp/mtcp/src/dpdk_module.c:                MEMPOOL_F_SC_GET);
mTcp/mtcp/src/memory_mgt.c:                MEMPOOL_F_NO_SPREAD);
OpenDataplane/platform/linux-generic/pktio/dpdk.c:#define MEMPOOL_FLAGS MEMPOOL_F_NO_IOVA_CONTIG
SPDK/lib/env_dpdk/env.c:                socket_id, MEMPOOL_F_NO_IOVA_CONTIG);
Trex/src/pal/linux_dpdk/mbuf.cpp:    unsigned flags = is_hugepages ? 0 : MEMPOOL_F_NO_IOVA_CONTIG;
Trex/src/pal/linux_dpdk/mbuf.cpp:        flags = (MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET);
Trex/src/pal/linux_dpdk/mbuf.cpp:        flags |= MEMPOOL_F_NO_IOVA_CONTIG;
Warp17/inc/tpg_memory.h:#define MEM_MBUF_POOL_FLAGS (MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET)
Warp17/inc/tpg_memory.h:#define MEM_TCB_POOL_FLAGS (MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET)
Warp17/inc/tpg_memory.h:#define MEM_UCB_POOL_FLAGS (MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET)
Warp17/src/ring_if/tpg_ring_if.c:static_assert(!(MEM_MBUF_POOL_FLAGS & MEMPOOL_F_SP_PUT),
Warp17/src/ring_if/tpg_ring_if.c:              "MEM_MBUF_POOL_FLAGS contains MEMPOOL_F_SP_PUT! This will corrupt memory when using Ring Interfaces!");


If we had announced such a deprecation, I would not question the change.
I think we should postpone the deprecation part to 22.02.

Thomas, what do you think?


-- 
David Marchand


^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [dpdk-dev] [PATCH 4/6] mempool: make header size calculation internal
  2021-10-19  8:48   ` David Marchand
@ 2021-10-19  8:59     ` Andrew Rybchenko
  0 siblings, 0 replies; 53+ messages in thread
From: Andrew Rybchenko @ 2021-10-19  8:59 UTC (permalink / raw)
  To: David Marchand; +Cc: Olivier Matz, Ray Kinsella, dev

On 10/19/21 11:48 AM, David Marchand wrote:
> On Mon, Oct 18, 2021 at 4:49 PM Andrew Rybchenko
> <andrew.rybchenko@oktetlabs.ru> wrote:
>>
>> Add RTE_ prefix to helper macro to calculate mempool header size and
>> make it internal. Old macro is still available, but deprecated.
>>
>> Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> 
> No reference to this macro out there:
> $ git grep-all -w MEMPOOL_HEADER_SIZE
> 
> 
> The change looks fine to me.
> I just wonder if we really need to expose this helper, is it
> performance sensitive?
> 

As far as I can see it is used by rte_mempool_get_priv()
which is inline and could be used on datapath.
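
To illustrate the kind of fast path this refers to, a minimal sketch
(struct my_pool_priv and my_get_headroom are made-up names for the
example, not DPDK API):

#include <rte_mempool.h>

/* hypothetical per-pool private area, placed right after the header */
struct my_pool_priv {
	uint32_t buf_headroom;
};

static inline uint32_t
my_get_headroom(struct rte_mempool *mp)
{
	/* rte_mempool_get_priv() is inline and computes the offset via
	 * RTE_MEMPOOL_HEADER_SIZE(), so no function call is involved */
	struct my_pool_priv *priv = rte_mempool_get_priv(mp);

	return priv->buf_headroom;
}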

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [dpdk-dev] [PATCH 5/6] mempool: add namespace to driver register macro
  2021-10-19  8:49   ` David Marchand
@ 2021-10-19  9:04     ` Andrew Rybchenko
  2021-10-19  9:23       ` Andrew Rybchenko
  2021-10-19  9:27       ` David Marchand
  0 siblings, 2 replies; 53+ messages in thread
From: Andrew Rybchenko @ 2021-10-19  9:04 UTC (permalink / raw)
  To: David Marchand
  Cc: Olivier Matz, Ray Kinsella, Artem V. Andreev, Ashwin Sekhar T K,
	Pavan Nikhilesh, Hemant Agrawal, Sachin Saxena, Harman Kalra,
	Jerin Jacob, Nithin Dabilpuram, dev, Thomas Monjalon

On 10/19/21 11:49 AM, David Marchand wrote:
> On Mon, Oct 18, 2021 at 4:50 PM Andrew Rybchenko
> <andrew.rybchenko@oktetlabs.ru> wrote:
>>
>> Add RTE_ prefix to macro used to register mempool driver.
>> The old one is still available but deprecated.
> 
> ODP seems to use its own mempools.
> 
> $ git grep-all -w MEMPOOL_REGISTER_OPS
> OpenDataplane/platform/linux-generic/pktio/dpdk.c:MEMPOOL_REGISTER_OPS(odp_pool_ops);
> 
> I'd say it counts as a driver macro.
> If so, we could hide it in a driver-only header, along with
> rte_mempool_register_ops getting marked as internal.
> 
> $ git grep-all -w rte_mempool_register_ops
> FD.io-VPP/src/plugins/dpdk/buffer.c:  rte_mempool_register_ops (&ops);
> FD.io-VPP/src/plugins/dpdk/buffer.c:  rte_mempool_register_ops (&ops);

Do I understand correctly that it is required to remove it from
stable ABI/API, but still allow external SW to use it?

Should I add one more patch to the series?



^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [dpdk-dev] [PATCH 3/6] mempool: add namespace to internal but still visible API
  2021-10-19  8:47   ` David Marchand
@ 2021-10-19  9:10     ` Andrew Rybchenko
  0 siblings, 0 replies; 53+ messages in thread
From: Andrew Rybchenko @ 2021-10-19  9:10 UTC (permalink / raw)
  To: David Marchand
  Cc: Olivier Matz, Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K,
	Sunil Kumar Kori, Satha Rao, Harman Kalra, Anoob Joseph, dev

On 10/19/21 11:47 AM, David Marchand wrote:
> On Mon, Oct 18, 2021 at 4:49 PM Andrew Rybchenko
> <andrew.rybchenko@oktetlabs.ru> wrote:
>>
>> Add RTE_ prefix to internal API defined in public header.
>> Use the prefix instead of double underscore.
>> Use uppercase for macros in the case of name conflict.
> 
> Fwiw, I see no use out of dpdk for those helpers/macros.
> 
> $ git grep-all -E
> '\<(__MEMPOOL_STAT_ADD|__mempool_contig_blocks_check_cookies|__mempool_check_cookies|__mempool_generic_get|__mempool_generic_put|__mempool_get_trailer|__mempool_get_header)\>'
> 
> Not a review, just something that caught my eye below:
> 
> [snip]
> 
>> @@ -1384,7 +1385,7 @@ rte_mempool_put_bulk(struct rte_mempool *mp, void * const *obj_table,
>>         struct rte_mempool_cache *cache;
>>         cache = rte_mempool_default_cache(mp, rte_lcore_id());
>>         rte_mempool_trace_put_bulk(mp, obj_table, n, cache);
>> -       rte_mempool_generic_put(mp, obj_table, n, cache);
>> +       rte_mempool_do_generic_put(mp, obj_table, n, cache);
> 
> Is this change expected?

My bad. Many thanks for very careful review. Will fix in v2.

> 
> 
>>  }
>>
>>  /**
> 
> [snip]
> 
> 
>> @@ -1541,7 +1542,7 @@ rte_mempool_get_bulk(struct rte_mempool *mp, void **obj_table, unsigned int n)
>>         struct rte_mempool_cache *cache;
>>         cache = rte_mempool_default_cache(mp, rte_lcore_id());
>>         rte_mempool_trace_get_bulk(mp, obj_table, n, cache);
>> -       return rte_mempool_generic_get(mp, obj_table, n, cache);
>> +       return rte_mempool_do_generic_get(mp, obj_table, n, cache);
>>  }
>>
>>  /**
> 
> Idem.

Same here.
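
For reference, the intended form (restored in v2) keeps calling the
public generic helper so that the RTE_MEMPOOL_CHECK_COOKIES() debug
checks still run; a sketch of the put side only:

static __rte_always_inline void
rte_mempool_put_bulk(struct rte_mempool *mp, void * const *obj_table,
		     unsigned int n)
{
	struct rte_mempool_cache *cache;

	cache = rte_mempool_default_cache(mp, rte_lcore_id());
	rte_mempool_trace_put_bulk(mp, obj_table, n, cache);
	/* public wrapper: cookie checks, then the do_* helper */
	rte_mempool_generic_put(mp, obj_table, n, cache);
}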


^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [dpdk-dev] [PATCH 5/6] mempool: add namespace to driver register macro
  2021-10-19  9:04     ` Andrew Rybchenko
@ 2021-10-19  9:23       ` Andrew Rybchenko
  2021-10-19  9:27       ` David Marchand
  1 sibling, 0 replies; 53+ messages in thread
From: Andrew Rybchenko @ 2021-10-19  9:23 UTC (permalink / raw)
  To: David Marchand
  Cc: Olivier Matz, Ray Kinsella, Artem V. Andreev, Ashwin Sekhar T K,
	Pavan Nikhilesh, Hemant Agrawal, Sachin Saxena, Harman Kalra,
	Jerin Jacob, Nithin Dabilpuram, dev, Thomas Monjalon

On 10/19/21 12:04 PM, Andrew Rybchenko wrote:
> On 10/19/21 11:49 AM, David Marchand wrote:
>> On Mon, Oct 18, 2021 at 4:50 PM Andrew Rybchenko
>> <andrew.rybchenko@oktetlabs.ru> wrote:
>>>
>>> Add RTE_ prefix to macro used to register mempool driver.
>>> The old one is still available but deprecated.
>>
>> ODP seems to use its own mempools.
>>
>> $ git grep-all -w MEMPOOL_REGISTER_OPS
>> OpenDataplane/platform/linux-generic/pktio/dpdk.c:MEMPOOL_REGISTER_OPS(odp_pool_ops);
>>
>> I'd say it counts as a driver macro.
>> If so, we could hide it in a driver-only header, along with
>> rte_mempool_register_ops getting marked as internal.
>>
>> $ git grep-all -w rte_mempool_register_ops
>> FD.io-VPP/src/plugins/dpdk/buffer.c:  rte_mempool_register_ops (&ops);
>> FD.io-VPP/src/plugins/dpdk/buffer.c:  rte_mempool_register_ops (&ops);
> 
> Do I understand correctly that it is required to remove it from
> stable ABI/API, but still allow external SW to use it?
> 
> Should I add one more patch to the series?
> 

I'm afraid not now. It would be either too invasive or too illogical.
Basically it would mean moving rte_mempool_ops to the driver-only
header as well, but that structure is heavily used by inline functions
in rte_mempool.h.

Of course, it is possible to move just register API
to the mempool_driver.h header, but value of such
changes is not really big.
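
For what it's worth, a purely hypothetical sketch of such a split
(there is no rte_mempool_driver.h today), assuming the register API
would also be marked internal as suggested above:

/* rte_mempool_driver.h -- driver-facing only (hypothetical) */
#include <rte_common.h>
#include <rte_compat.h>

struct rte_mempool_ops;

__rte_internal
int rte_mempool_register_ops(const struct rte_mempool_ops *ops);

#define RTE_MEMPOOL_REGISTER_OPS(ops)				\
	RTE_INIT(mp_hdlr_init_##ops)				\
	{							\
		rte_mempool_register_ops(&ops);			\
	}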

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [dpdk-dev] [PATCH 5/6] mempool: add namespace to driver register macro
  2021-10-19  9:04     ` Andrew Rybchenko
  2021-10-19  9:23       ` Andrew Rybchenko
@ 2021-10-19  9:27       ` David Marchand
  2021-10-19  9:38         ` Andrew Rybchenko
  2021-10-19  9:42         ` Thomas Monjalon
  1 sibling, 2 replies; 53+ messages in thread
From: David Marchand @ 2021-10-19  9:27 UTC (permalink / raw)
  To: Andrew Rybchenko, Thomas Monjalon
  Cc: Olivier Matz, Ray Kinsella, Artem V. Andreev, Ashwin Sekhar T K,
	Pavan Nikhilesh, Hemant Agrawal, Sachin Saxena, Harman Kalra,
	Jerin Jacob, Nithin Dabilpuram, dev

On Tue, Oct 19, 2021 at 11:05 AM Andrew Rybchenko
<andrew.rybchenko@oktetlabs.ru> wrote:
>
> On 10/19/21 11:49 AM, David Marchand wrote:
> > On Mon, Oct 18, 2021 at 4:50 PM Andrew Rybchenko
> > <andrew.rybchenko@oktetlabs.ru> wrote:
> >>
> >> Add RTE_ prefix to macro used to register mempool driver.
> >> The old one is still available but deprecated.
> >
> > ODP seems to use its own mempools.
> >
> > $ git grep-all -w MEMPOOL_REGISTER_OPS
> > OpenDataplane/platform/linux-generic/pktio/dpdk.c:MEMPOOL_REGISTER_OPS(odp_pool_ops);
> >
> > I'd say it counts as a driver macro.
> > If so, we could hide it in a driver-only header, along with
> > rte_mempool_register_ops getting marked as internal.
> >
> > $ git grep-all -w rte_mempool_register_ops
> > FD.io-VPP/src/plugins/dpdk/buffer.c:  rte_mempool_register_ops (&ops);
> > FD.io-VPP/src/plugins/dpdk/buffer.c:  rte_mempool_register_ops (&ops);
>
> Do I understand correctly that it is required to remove it from
> stable ABI/API, but still allow external SW to use it?
>
> Should I add one more patch to the series?

If we want to do the full job, we need to inspect driver-only symbols
in rte_mempool.h.
But this goes way further than a simple prefixing as this series intended.

I just read your reply, I think we agree.
Let's go with simple prefix and take a note to cleanup in the future.


-- 
David Marchand


^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [dpdk-dev] [PATCH 5/6] mempool: add namespace to driver register macro
  2021-10-19  9:27       ` David Marchand
@ 2021-10-19  9:38         ` Andrew Rybchenko
  2021-10-19  9:42         ` Thomas Monjalon
  1 sibling, 0 replies; 53+ messages in thread
From: Andrew Rybchenko @ 2021-10-19  9:38 UTC (permalink / raw)
  To: David Marchand, Thomas Monjalon
  Cc: Olivier Matz, Ray Kinsella, Artem V. Andreev, Ashwin Sekhar T K,
	Pavan Nikhilesh, Hemant Agrawal, Sachin Saxena, Harman Kalra,
	Jerin Jacob, Nithin Dabilpuram, dev

On 10/19/21 12:27 PM, David Marchand wrote:
> On Tue, Oct 19, 2021 at 11:05 AM Andrew Rybchenko
> <andrew.rybchenko@oktetlabs.ru> wrote:
>>
>> On 10/19/21 11:49 AM, David Marchand wrote:
>>> On Mon, Oct 18, 2021 at 4:50 PM Andrew Rybchenko
>>> <andrew.rybchenko@oktetlabs.ru> wrote:
>>>>
>>>> Add RTE_ prefix to macro used to register mempool driver.
>>>> The old one is still available but deprecated.
>>>
>>> ODP seems to use its own mempools.
>>>
>>> $ git grep-all -w MEMPOOL_REGISTER_OPS
>>> OpenDataplane/platform/linux-generic/pktio/dpdk.c:MEMPOOL_REGISTER_OPS(odp_pool_ops);
>>>
>>> I'd say it counts as a driver macro.
>>> If so, we could hide it in a driver-only header, along with
>>> rte_mempool_register_ops getting marked as internal.
>>>
>>> $ git grep-all -w rte_mempool_register_ops
>>> FD.io-VPP/src/plugins/dpdk/buffer.c:  rte_mempool_register_ops (&ops);
>>> FD.io-VPP/src/plugins/dpdk/buffer.c:  rte_mempool_register_ops (&ops);
>>
>> Do I understand correctly that it is required to remove it from
>> stable ABI/API, but still allow external SW to use it?
>>
>> Should I add one more patch to the series?
> 
> If we want to do the full job, we need to inspect driver-only symbols
> in rte_mempool.h.
> But this goes way further than a simple prefixing as this series intended.
> 
> I just read your reply, I think we agree.
> Let's go with simple prefix and take a note to cleanup in the future.

Agreed.



^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [dpdk-dev] [PATCH 2/6] mempool: add namespace prefix to flags
  2021-10-19  8:52   ` David Marchand
@ 2021-10-19  9:40     ` Thomas Monjalon
  0 siblings, 0 replies; 53+ messages in thread
From: Thomas Monjalon @ 2021-10-19  9:40 UTC (permalink / raw)
  To: Andrew Rybchenko, David Marchand
  Cc: dev, Olivier Matz, Maryam Tahhan, Reshma Pattan, Xiaoyun Li,
	Ray Kinsella, Pavan Nikhilesh, Shijith Thotton, Jerin Jacob,
	Artem V. Andreev, Nithin Dabilpuram, Kiran Kumar K,
	Maciej Czekaj, Radha Mohan Chintakuntla, Veerasenareddy Burru,
	Maxime Coquelin, Chenbo Xia, dev

19/10/2021 10:52, David Marchand:
> On Mon, Oct 18, 2021 at 4:49 PM Andrew Rybchenko
> <andrew.rybchenko@oktetlabs.ru> wrote:
> >
> > Fix the mempool flgas namespace by adding an RTE_ prefix to the name.
> > The old flags remain usable, but a deprecation warning is issued at
> > compilation.
> 
> We have a build failure in CI for SPDK.
> This is most probably (I can't find the full compilation error in
> logs..) because of the deprecation of MEMPOOL_F_NO_IOVA_CONTIG.
> 
> 
> $ git grep-all -E
> '\<(MEMPOOL_F_NO_IOVA_CONTIG|MEMPOOL_F_POOL_CREATED|MEMPOOL_F_SC_GET|MEMPOOL_F_SP_PUT|MEMPOOL_F_NO_CACHE_ALIGN|MEMPOOL_F_NO_SPREAD)\>'
> BESS/core/packet_pool.cc:  pool_->flags |= MEMPOOL_F_NO_IOVA_CONTIG;
> gatekeeper/cps/main.c:        socket_id, MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET);
> gatekeeper/cps/main.c:        socket_id, MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET);
> mTcp/mtcp/src/dpdk_module.c:                rte_socket_id(), MEMPOOL_F_SP_PUT |
> mTcp/mtcp/src/dpdk_module.c:                MEMPOOL_F_SC_GET);
> mTcp/mtcp/src/memory_mgt.c:                MEMPOOL_F_NO_SPREAD);
> OpenDataplane/platform/linux-generic/pktio/dpdk.c:#define MEMPOOL_FLAGS MEMPOOL_F_NO_IOVA_CONTIG
> SPDK/lib/env_dpdk/env.c:                socket_id, MEMPOOL_F_NO_IOVA_CONTIG);
> Trex/src/pal/linux_dpdk/mbuf.cpp:    unsigned flags = is_hugepages ? 0 : MEMPOOL_F_NO_IOVA_CONTIG;
> Trex/src/pal/linux_dpdk/mbuf.cpp:        flags = (MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET);
> Trex/src/pal/linux_dpdk/mbuf.cpp:        flags |= MEMPOOL_F_NO_IOVA_CONTIG;
> Warp17/inc/tpg_memory.h:#define MEM_MBUF_POOL_FLAGS (MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET)
> Warp17/inc/tpg_memory.h:#define MEM_TCB_POOL_FLAGS (MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET)
> Warp17/inc/tpg_memory.h:#define MEM_UCB_POOL_FLAGS (MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET)
> Warp17/src/ring_if/tpg_ring_if.c:static_assert(!(MEM_MBUF_POOL_FLAGS & MEMPOOL_F_SP_PUT),
> Warp17/src/ring_if/tpg_ring_if.c:              "MEM_MBUF_POOL_FLAGS contains MEMPOOL_F_SP_PUT! This will corrupt memory when using Ring Interfaces!");
> 
> 
> If we had announced such a deprecation, I would not question the change.
> I think we should postpone the deprecation part to 22.02.
> 
> Thomas, what do you think?

Yes it is too early for such deprecation.
OK to introduce new names, but please keep full compatibility.
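
Full compatibility here means plain aliases for the old names rather
than RTE_DEPRECATED() wrappers, roughly (sketch with two flags only):

/* new names */
#define RTE_MEMPOOL_F_NO_SPREAD		0x0001
#define RTE_MEMPOOL_F_NO_IOVA_CONTIG	0x0020

/* old names kept as silent aliases, no warning at compilation */
#define MEMPOOL_F_NO_SPREAD		RTE_MEMPOOL_F_NO_SPREAD
#define MEMPOOL_F_NO_IOVA_CONTIG	RTE_MEMPOOL_F_NO_IOVA_CONTIG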




^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [dpdk-dev] [PATCH 5/6] mempool: add namespace to driver register macro
  2021-10-19  9:27       ` David Marchand
  2021-10-19  9:38         ` Andrew Rybchenko
@ 2021-10-19  9:42         ` Thomas Monjalon
  1 sibling, 0 replies; 53+ messages in thread
From: Thomas Monjalon @ 2021-10-19  9:42 UTC (permalink / raw)
  To: Andrew Rybchenko, David Marchand
  Cc: dev, Olivier Matz, Ray Kinsella, Artem V. Andreev,
	Ashwin Sekhar T K, Pavan Nikhilesh, Hemant Agrawal,
	Sachin Saxena, Harman Kalra, Jerin Jacob, Nithin Dabilpuram, dev

19/10/2021 11:27, David Marchand:
> On Tue, Oct 19, 2021 at 11:05 AM Andrew Rybchenko
> <andrew.rybchenko@oktetlabs.ru> wrote:
> >
> > On 10/19/21 11:49 AM, David Marchand wrote:
> > > On Mon, Oct 18, 2021 at 4:50 PM Andrew Rybchenko
> > > <andrew.rybchenko@oktetlabs.ru> wrote:
> > >>
> > >> Add RTE_ prefix to macro used to register mempool driver.
> > >> The old one is still available but deprecated.
> > >
> > > ODP seems to use its own mempools.
> > >
> > > $ git grep-all -w MEMPOOL_REGISTER_OPS
> > > OpenDataplane/platform/linux-generic/pktio/dpdk.c:MEMPOOL_REGISTER_OPS(odp_pool_ops);
> > >
> > > I'd say it counts as a driver macro.
> > > If so, we could hide it in a driver-only header, along with
> > > rte_mempool_register_ops getting marked as internal.
> > >
> > > $ git grep-all -w rte_mempool_register_ops
> > > FD.io-VPP/src/plugins/dpdk/buffer.c:  rte_mempool_register_ops (&ops);
> > > FD.io-VPP/src/plugins/dpdk/buffer.c:  rte_mempool_register_ops (&ops);
> >
> > Do I understand correctly that it is required to remove it from
> > stable ABI/API, but still allow external SW to use it?
> >
> > Should I add one more patch to the series?
> 
> If we want to do the full job, we need to inspect driver-only symbols
> in rte_mempool.h.
> But this goes way further than a simple prefixing as this series intended.
> 
> I just read your reply, I think we agree.
> Let's go with simple prefix and take a note to cleanup in the future.

Yes, and we should probably discuss in techboard what should be kept
compatible for external mempool drivers.



^ permalink raw reply	[flat|nested] 53+ messages in thread

* [dpdk-dev] [PATCH v2 0/6] mempool: cleanup namespace
  2021-10-18 14:49 [dpdk-dev] [PATCH 0/6] mempool: cleanup namespace Andrew Rybchenko
                   ` (5 preceding siblings ...)
  2021-10-18 14:49 ` [dpdk-dev] [PATCH 6/6] mempool: deprecate unused defines Andrew Rybchenko
@ 2021-10-19 10:08 ` Andrew Rybchenko
  2021-10-19 10:08   ` [dpdk-dev] [PATCH v2 1/6] mempool: avoid flags documentation in the next line Andrew Rybchenko
                     ` (5 more replies)
  2021-10-19 17:40 ` [dpdk-dev] [PATCH v3 0/6] mempool: cleanup namespace Andrew Rybchenko
  7 siblings, 6 replies; 53+ messages in thread
From: Andrew Rybchenko @ 2021-10-19 10:08 UTC (permalink / raw)
  To: Olivier Matz, David Marchand; +Cc: dev

Add RTE_ prefix to the mempool API, including internal parts. Keep the
old public API as defines falling back to the new names. Internal API
is just renamed.

v2:
    - do not deprecate MEMPOOL_F_* flags
    - fix unintended usage of internal get/put helpers from bulk get/put

Andrew Rybchenko (6):
  mempool: avoid flags documentation in the next line
  mempool: add namespace prefix to flags
  mempool: add namespace to internal but still visible API
  mempool: make header size calculation internal
  mempool: add namespace to driver register macro
  mempool: deprecate unused defines

 app/proc-info/main.c                          |  15 +-
 app/test-pmd/parameters.c                     |   4 +-
 app/test/test_mempool.c                       |   8 +-
 doc/guides/contributing/documentation.rst     |   4 +-
 doc/guides/prog_guide/mempool_lib.rst         |   2 +-
 doc/guides/rel_notes/deprecation.rst          |  11 ++
 doc/guides/rel_notes/release_21_11.rst        |  12 ++
 drivers/event/cnxk/cnxk_tim_evdev.c           |   2 +-
 drivers/event/octeontx/ssovf_worker.h         |   2 +-
 drivers/event/octeontx/timvf_evdev.c          |   2 +-
 drivers/event/octeontx2/otx2_tim_evdev.c      |   2 +-
 drivers/mempool/bucket/rte_mempool_bucket.c   |  10 +-
 drivers/mempool/cnxk/cn10k_mempool_ops.c      |   2 +-
 drivers/mempool/cnxk/cn9k_mempool_ops.c       |   2 +-
 drivers/mempool/dpaa/dpaa_mempool.c           |   2 +-
 drivers/mempool/dpaa2/dpaa2_hw_mempool.c      |   2 +-
 .../mempool/octeontx/rte_mempool_octeontx.c   |   2 +-
 drivers/mempool/octeontx2/otx2_mempool_ops.c  |   2 +-
 drivers/mempool/ring/rte_mempool_ring.c       |  16 +-
 drivers/mempool/stack/rte_mempool_stack.c     |   4 +-
 drivers/net/cnxk/cn10k_rx.h                   |  12 +-
 drivers/net/cnxk/cn10k_tx.h                   |  30 ++--
 drivers/net/cnxk/cn9k_rx.h                    |  12 +-
 drivers/net/cnxk/cn9k_tx.h                    |  26 +--
 drivers/net/octeontx/octeontx_rxtx.h          |   4 +-
 drivers/net/octeontx2/otx2_ethdev.c           |   4 +-
 drivers/net/octeontx2/otx2_ethdev_sec_tx.h    |   2 +-
 drivers/net/octeontx2/otx2_rx.c               |   8 +-
 drivers/net/octeontx2/otx2_rx.h               |   4 +-
 drivers/net/octeontx2/otx2_tx.c               |  16 +-
 drivers/net/octeontx2/otx2_tx.h               |   4 +-
 drivers/net/thunderx/nicvf_ethdev.c           |   2 +-
 lib/mempool/rte_mempool.c                     |  54 +++---
 lib/mempool/rte_mempool.h                     | 162 +++++++++++-------
 lib/mempool/rte_mempool_ops.c                 |   2 +-
 lib/pdump/rte_pdump.c                         |   3 +-
 lib/vhost/iotlb.c                             |   4 +-
 37 files changed, 261 insertions(+), 194 deletions(-)

-- 
2.30.2


^ permalink raw reply	[flat|nested] 53+ messages in thread

* [dpdk-dev] [PATCH v2 1/6] mempool: avoid flags documentation in the next line
  2021-10-19 10:08 ` [dpdk-dev] [PATCH v2 0/6] mempool: cleanup namespace Andrew Rybchenko
@ 2021-10-19 10:08   ` Andrew Rybchenko
  2021-10-19 16:13     ` Olivier Matz
  2021-10-19 10:08   ` [dpdk-dev] [PATCH v2 2/6] mempool: add namespace prefix to flags Andrew Rybchenko
                     ` (4 subsequent siblings)
  5 siblings, 1 reply; 53+ messages in thread
From: Andrew Rybchenko @ 2021-10-19 10:08 UTC (permalink / raw)
  To: Olivier Matz, David Marchand; +Cc: dev

Move documentation into a separate line just before define.
Prepare to have a bit longer flag name because of namespace prefix.

Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
---
 lib/mempool/rte_mempool.h | 17 +++++++++++------
 1 file changed, 11 insertions(+), 6 deletions(-)

diff --git a/lib/mempool/rte_mempool.h b/lib/mempool/rte_mempool.h
index 88bcbc51ef..8ef4c8ed1e 100644
--- a/lib/mempool/rte_mempool.h
+++ b/lib/mempool/rte_mempool.h
@@ -250,13 +250,18 @@ struct rte_mempool {
 #endif
 }  __rte_cache_aligned;
 
+/** Spreading among memory channels not required. */
 #define MEMPOOL_F_NO_SPREAD      0x0001
-		/**< Spreading among memory channels not required. */
-#define MEMPOOL_F_NO_CACHE_ALIGN 0x0002 /**< Do not align objs on cache lines.*/
-#define MEMPOOL_F_SP_PUT         0x0004 /**< Default put is "single-producer".*/
-#define MEMPOOL_F_SC_GET         0x0008 /**< Default get is "single-consumer".*/
-#define MEMPOOL_F_POOL_CREATED   0x0010 /**< Internal: pool is created. */
-#define MEMPOOL_F_NO_IOVA_CONTIG 0x0020 /**< Don't need IOVA contiguous objs. */
+/** Do not align objects on cache lines. */
+#define MEMPOOL_F_NO_CACHE_ALIGN 0x0002
+/** Default put is "single-producer". */
+#define MEMPOOL_F_SP_PUT         0x0004
+/** Default get is "single-consumer". */
+#define MEMPOOL_F_SC_GET         0x0008
+/** Internal: pool is created. */
+#define MEMPOOL_F_POOL_CREATED   0x0010
+/** Don't need IOVA contiguous objects. */
+#define MEMPOOL_F_NO_IOVA_CONTIG 0x0020
 
 /**
  * @internal When debug is enabled, store some statistics.
-- 
2.30.2


^ permalink raw reply	[flat|nested] 53+ messages in thread

* [dpdk-dev] [PATCH v2 2/6] mempool: add namespace prefix to flags
  2021-10-19 10:08 ` [dpdk-dev] [PATCH v2 0/6] mempool: cleanup namespace Andrew Rybchenko
  2021-10-19 10:08   ` [dpdk-dev] [PATCH v2 1/6] mempool: avoid flags documentation in the next line Andrew Rybchenko
@ 2021-10-19 10:08   ` Andrew Rybchenko
  2021-10-19 16:13     ` Olivier Matz
  2021-10-19 10:08   ` [dpdk-dev] [PATCH v2 3/6] mempool: add namespace to internal but still visible API Andrew Rybchenko
                     ` (3 subsequent siblings)
  5 siblings, 1 reply; 53+ messages in thread
From: Andrew Rybchenko @ 2021-10-19 10:08 UTC (permalink / raw)
  To: Olivier Matz, David Marchand, Maryam Tahhan, Reshma Pattan,
	Xiaoyun Li, Pavan Nikhilesh, Shijith Thotton, Jerin Jacob,
	Artem V. Andreev, Nithin Dabilpuram, Kiran Kumar K,
	Maciej Czekaj, Maxime Coquelin, Chenbo Xia
  Cc: dev

Fix the mempool flags namespace by adding an RTE_ prefix to the name.
The old flags remain usable, to be deprecated in the future.

Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
---
 app/proc-info/main.c                        | 15 +++---
 app/test-pmd/parameters.c                   |  4 +-
 app/test/test_mempool.c                     |  6 +--
 doc/guides/rel_notes/release_21_11.rst      |  3 ++
 drivers/event/cnxk/cnxk_tim_evdev.c         |  2 +-
 drivers/event/octeontx/timvf_evdev.c        |  2 +-
 drivers/event/octeontx2/otx2_tim_evdev.c    |  2 +-
 drivers/mempool/bucket/rte_mempool_bucket.c |  8 +--
 drivers/mempool/ring/rte_mempool_ring.c     |  4 +-
 drivers/net/octeontx2/otx2_ethdev.c         |  4 +-
 drivers/net/thunderx/nicvf_ethdev.c         |  2 +-
 lib/mempool/rte_mempool.c                   | 40 +++++++--------
 lib/mempool/rte_mempool.h                   | 55 +++++++++++++++------
 lib/mempool/rte_mempool_ops.c               |  2 +-
 lib/pdump/rte_pdump.c                       |  3 +-
 lib/vhost/iotlb.c                           |  4 +-
 16 files changed, 94 insertions(+), 62 deletions(-)

diff --git a/app/proc-info/main.c b/app/proc-info/main.c
index a8e928fa9f..74d8fdc1db 100644
--- a/app/proc-info/main.c
+++ b/app/proc-info/main.c
@@ -1298,12 +1298,15 @@ show_mempool(char *name)
 				"\t  -- No IOVA config (%c)\n",
 				ptr->name,
 				ptr->socket_id,
-				(flags & MEMPOOL_F_NO_SPREAD) ? 'y' : 'n',
-				(flags & MEMPOOL_F_NO_CACHE_ALIGN) ? 'y' : 'n',
-				(flags & MEMPOOL_F_SP_PUT) ? 'y' : 'n',
-				(flags & MEMPOOL_F_SC_GET) ? 'y' : 'n',
-				(flags & MEMPOOL_F_POOL_CREATED) ? 'y' : 'n',
-				(flags & MEMPOOL_F_NO_IOVA_CONTIG) ? 'y' : 'n');
+				(flags & RTE_MEMPOOL_F_NO_SPREAD) ? 'y' : 'n',
+				(flags & RTE_MEMPOOL_F_NO_CACHE_ALIGN) ?
+					'y' : 'n',
+				(flags & RTE_MEMPOOL_F_SP_PUT) ? 'y' : 'n',
+				(flags & RTE_MEMPOOL_F_SC_GET) ? 'y' : 'n',
+				(flags & RTE_MEMPOOL_F_POOL_CREATED) ?
+					'y' : 'n',
+				(flags & RTE_MEMPOOL_F_NO_IOVA_CONTIG) ?
+					'y' : 'n');
 			printf("  - Size %u Cache %u element %u\n"
 				"  - header %u trailer %u\n"
 				"  - private data size %u\n",
diff --git a/app/test-pmd/parameters.c b/app/test-pmd/parameters.c
index 3f94a82e32..b69897ef00 100644
--- a/app/test-pmd/parameters.c
+++ b/app/test-pmd/parameters.c
@@ -1396,7 +1396,7 @@ launch_args_parse(int argc, char** argv)
 						 "noisy-lkup-num-reads-writes must be >= 0\n");
 			}
 			if (!strcmp(lgopts[opt_idx].name, "no-iova-contig"))
-				mempool_flags = MEMPOOL_F_NO_IOVA_CONTIG;
+				mempool_flags = RTE_MEMPOOL_F_NO_IOVA_CONTIG;
 
 			if (!strcmp(lgopts[opt_idx].name, "rx-mq-mode")) {
 				char *end = NULL;
@@ -1440,7 +1440,7 @@ launch_args_parse(int argc, char** argv)
 	rx_mode.offloads = rx_offloads;
 	tx_mode.offloads = tx_offloads;
 
-	if (mempool_flags & MEMPOOL_F_NO_IOVA_CONTIG &&
+	if (mempool_flags & RTE_MEMPOOL_F_NO_IOVA_CONTIG &&
 	    mp_alloc_type != MP_ALLOC_ANON) {
 		TESTPMD_LOG(WARNING, "cannot use no-iova-contig without "
 				  "mp-alloc=anon. mempool no-iova-contig is "
diff --git a/app/test/test_mempool.c b/app/test/test_mempool.c
index 66bc8d86b7..ffe69e2d03 100644
--- a/app/test/test_mempool.c
+++ b/app/test/test_mempool.c
@@ -213,7 +213,7 @@ static int test_mempool_creation_with_unknown_flag(void)
 		MEMPOOL_ELT_SIZE, 0, 0,
 		NULL, NULL,
 		NULL, NULL,
-		SOCKET_ID_ANY, MEMPOOL_F_NO_IOVA_CONTIG << 1);
+		SOCKET_ID_ANY, RTE_MEMPOOL_F_NO_IOVA_CONTIG << 1);
 
 	if (mp_cov != NULL) {
 		rte_mempool_free(mp_cov);
@@ -336,8 +336,8 @@ test_mempool_sp_sc(void)
 			my_mp_init, NULL,
 			my_obj_init, NULL,
 			SOCKET_ID_ANY,
-			MEMPOOL_F_NO_CACHE_ALIGN | MEMPOOL_F_SP_PUT |
-			MEMPOOL_F_SC_GET);
+			RTE_MEMPOOL_F_NO_CACHE_ALIGN | RTE_MEMPOOL_F_SP_PUT |
+			RTE_MEMPOOL_F_SC_GET);
 		if (mp_spsc == NULL)
 			RET_ERR();
 	}
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index d5435a64aa..9a0e3832a3 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -221,6 +221,9 @@ API Changes
   removed. Its usages have been replaced by a new function
   ``rte_kvargs_get_with_value()``.
 
+* mempool: The mempool flags ``MEMPOOL_F_*`` will be deprecated in the future.
+  Newly added flags with ``RTE_MEMPOOL_F_`` prefix should be used instead.
+
 * net: Renamed ``s_addr`` and ``d_addr`` fields of ``rte_ether_hdr`` structure
   to ``src_addr`` and ``dst_addr``, respectively.
 
diff --git a/drivers/event/cnxk/cnxk_tim_evdev.c b/drivers/event/cnxk/cnxk_tim_evdev.c
index 9d40e336d7..d325daed95 100644
--- a/drivers/event/cnxk/cnxk_tim_evdev.c
+++ b/drivers/event/cnxk/cnxk_tim_evdev.c
@@ -19,7 +19,7 @@ cnxk_tim_chnk_pool_create(struct cnxk_tim_ring *tim_ring,
 	cache_sz /= rte_lcore_count();
 	/* Create chunk pool. */
 	if (rcfg->flags & RTE_EVENT_TIMER_ADAPTER_F_SP_PUT) {
-		mp_flags = MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET;
+		mp_flags = RTE_MEMPOOL_F_SP_PUT | RTE_MEMPOOL_F_SC_GET;
 		plt_tim_dbg("Using single producer mode");
 		tim_ring->prod_type_sp = true;
 	}
diff --git a/drivers/event/octeontx/timvf_evdev.c b/drivers/event/octeontx/timvf_evdev.c
index 688e9daa66..06fc53cc5b 100644
--- a/drivers/event/octeontx/timvf_evdev.c
+++ b/drivers/event/octeontx/timvf_evdev.c
@@ -310,7 +310,7 @@ timvf_ring_create(struct rte_event_timer_adapter *adptr)
 	}
 
 	if (rcfg->flags & RTE_EVENT_TIMER_ADAPTER_F_SP_PUT) {
-		mp_flags = MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET;
+		mp_flags = RTE_MEMPOOL_F_SP_PUT | RTE_MEMPOOL_F_SC_GET;
 		timvf_log_info("Using single producer mode");
 	}
 
diff --git a/drivers/event/octeontx2/otx2_tim_evdev.c b/drivers/event/octeontx2/otx2_tim_evdev.c
index de50c4c76e..3cdc468140 100644
--- a/drivers/event/octeontx2/otx2_tim_evdev.c
+++ b/drivers/event/octeontx2/otx2_tim_evdev.c
@@ -81,7 +81,7 @@ tim_chnk_pool_create(struct otx2_tim_ring *tim_ring,
 	cache_sz /= rte_lcore_count();
 	/* Create chunk pool. */
 	if (rcfg->flags & RTE_EVENT_TIMER_ADAPTER_F_SP_PUT) {
-		mp_flags = MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET;
+		mp_flags = RTE_MEMPOOL_F_SP_PUT | RTE_MEMPOOL_F_SC_GET;
 		otx2_tim_dbg("Using single producer mode");
 		tim_ring->prod_type_sp = true;
 	}
diff --git a/drivers/mempool/bucket/rte_mempool_bucket.c b/drivers/mempool/bucket/rte_mempool_bucket.c
index 8b9daa9782..8ff9e53007 100644
--- a/drivers/mempool/bucket/rte_mempool_bucket.c
+++ b/drivers/mempool/bucket/rte_mempool_bucket.c
@@ -426,7 +426,7 @@ bucket_init_per_lcore(unsigned int lcore_id, void *arg)
 		goto error;
 
 	rg_flags = RING_F_SC_DEQ;
-	if (mp->flags & MEMPOOL_F_SP_PUT)
+	if (mp->flags & RTE_MEMPOOL_F_SP_PUT)
 		rg_flags |= RING_F_SP_ENQ;
 	bd->adoption_buffer_rings[lcore_id] = rte_ring_create(rg_name,
 		rte_align32pow2(mp->size + 1), mp->socket_id, rg_flags);
@@ -472,7 +472,7 @@ bucket_alloc(struct rte_mempool *mp)
 		goto no_mem_for_data;
 	}
 	bd->pool = mp;
-	if (mp->flags & MEMPOOL_F_NO_CACHE_ALIGN)
+	if (mp->flags & RTE_MEMPOOL_F_NO_CACHE_ALIGN)
 		bucket_header_size = sizeof(struct bucket_header);
 	else
 		bucket_header_size = RTE_CACHE_LINE_SIZE;
@@ -494,9 +494,9 @@ bucket_alloc(struct rte_mempool *mp)
 		goto no_mem_for_stacks;
 	}
 
-	if (mp->flags & MEMPOOL_F_SP_PUT)
+	if (mp->flags & RTE_MEMPOOL_F_SP_PUT)
 		rg_flags |= RING_F_SP_ENQ;
-	if (mp->flags & MEMPOOL_F_SC_GET)
+	if (mp->flags & RTE_MEMPOOL_F_SC_GET)
 		rg_flags |= RING_F_SC_DEQ;
 	rc = snprintf(rg_name, sizeof(rg_name),
 		      RTE_MEMPOOL_MZ_FORMAT ".0", mp->name);
diff --git a/drivers/mempool/ring/rte_mempool_ring.c b/drivers/mempool/ring/rte_mempool_ring.c
index b1f09ff28f..4b785971c4 100644
--- a/drivers/mempool/ring/rte_mempool_ring.c
+++ b/drivers/mempool/ring/rte_mempool_ring.c
@@ -110,9 +110,9 @@ common_ring_alloc(struct rte_mempool *mp)
 {
 	uint32_t rg_flags = 0;
 
-	if (mp->flags & MEMPOOL_F_SP_PUT)
+	if (mp->flags & RTE_MEMPOOL_F_SP_PUT)
 		rg_flags |= RING_F_SP_ENQ;
-	if (mp->flags & MEMPOOL_F_SC_GET)
+	if (mp->flags & RTE_MEMPOOL_F_SC_GET)
 		rg_flags |= RING_F_SC_DEQ;
 
 	return ring_alloc(mp, rg_flags);
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index d576bc6989..9db62acbd0 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -1124,7 +1124,7 @@ nix_alloc_sqb_pool(int port, struct otx2_eth_txq *txq, uint16_t nb_desc)
 
 	txq->sqb_pool = rte_mempool_create_empty(name, NIX_MAX_SQB, blk_sz,
 						 0, 0, dev->node,
-						 MEMPOOL_F_NO_SPREAD);
+						 RTE_MEMPOOL_F_NO_SPREAD);
 	txq->nb_sqb_bufs = nb_sqb_bufs;
 	txq->sqes_per_sqb_log2 = (uint16_t)rte_log2_u32(sqes_per_sqb);
 	txq->nb_sqb_bufs_adj = nb_sqb_bufs -
@@ -1150,7 +1150,7 @@ nix_alloc_sqb_pool(int port, struct otx2_eth_txq *txq, uint16_t nb_desc)
 		goto fail;
 	}
 
-	tmp = rte_mempool_calc_obj_size(blk_sz, MEMPOOL_F_NO_SPREAD, &sz);
+	tmp = rte_mempool_calc_obj_size(blk_sz, RTE_MEMPOOL_F_NO_SPREAD, &sz);
 	if (dev->sqb_size != sz.elt_size) {
 		otx2_err("sqe pool block size is not expected %d != %d",
 			 dev->sqb_size, tmp);
diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index 5502f1ee69..7e07d381dd 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -1302,7 +1302,7 @@ nicvf_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t qidx,
 	}
 
 	/* Mempool memory must be physically contiguous */
-	if (mp->flags & MEMPOOL_F_NO_IOVA_CONTIG) {
+	if (mp->flags & RTE_MEMPOOL_F_NO_IOVA_CONTIG) {
 		PMD_INIT_LOG(ERR, "Mempool memory must be physically contiguous");
 		return -EINVAL;
 	}
diff --git a/lib/mempool/rte_mempool.c b/lib/mempool/rte_mempool.c
index 607419ccaf..19210c702c 100644
--- a/lib/mempool/rte_mempool.c
+++ b/lib/mempool/rte_mempool.c
@@ -216,7 +216,7 @@ rte_mempool_calc_obj_size(uint32_t elt_size, uint32_t flags,
 	sz = (sz != NULL) ? sz : &lsz;
 
 	sz->header_size = sizeof(struct rte_mempool_objhdr);
-	if ((flags & MEMPOOL_F_NO_CACHE_ALIGN) == 0)
+	if ((flags & RTE_MEMPOOL_F_NO_CACHE_ALIGN) == 0)
 		sz->header_size = RTE_ALIGN_CEIL(sz->header_size,
 			RTE_MEMPOOL_ALIGN);
 
@@ -230,7 +230,7 @@ rte_mempool_calc_obj_size(uint32_t elt_size, uint32_t flags,
 	sz->elt_size = RTE_ALIGN_CEIL(elt_size, sizeof(uint64_t));
 
 	/* expand trailer to next cache line */
-	if ((flags & MEMPOOL_F_NO_CACHE_ALIGN) == 0) {
+	if ((flags & RTE_MEMPOOL_F_NO_CACHE_ALIGN) == 0) {
 		sz->total_size = sz->header_size + sz->elt_size +
 			sz->trailer_size;
 		sz->trailer_size += ((RTE_MEMPOOL_ALIGN -
@@ -242,7 +242,7 @@ rte_mempool_calc_obj_size(uint32_t elt_size, uint32_t flags,
 	 * increase trailer to add padding between objects in order to
 	 * spread them across memory channels/ranks
 	 */
-	if ((flags & MEMPOOL_F_NO_SPREAD) == 0) {
+	if ((flags & RTE_MEMPOOL_F_NO_SPREAD) == 0) {
 		unsigned new_size;
 		new_size = arch_mem_object_align
 			    (sz->header_size + sz->elt_size + sz->trailer_size);
@@ -294,11 +294,11 @@ mempool_ops_alloc_once(struct rte_mempool *mp)
 	int ret;
 
 	/* create the internal ring if not already done */
-	if ((mp->flags & MEMPOOL_F_POOL_CREATED) == 0) {
+	if ((mp->flags & RTE_MEMPOOL_F_POOL_CREATED) == 0) {
 		ret = rte_mempool_ops_alloc(mp);
 		if (ret != 0)
 			return ret;
-		mp->flags |= MEMPOOL_F_POOL_CREATED;
+		mp->flags |= RTE_MEMPOOL_F_POOL_CREATED;
 	}
 	return 0;
 }
@@ -336,7 +336,7 @@ rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
 	memhdr->free_cb = free_cb;
 	memhdr->opaque = opaque;
 
-	if (mp->flags & MEMPOOL_F_NO_CACHE_ALIGN)
+	if (mp->flags & RTE_MEMPOOL_F_NO_CACHE_ALIGN)
 		off = RTE_PTR_ALIGN_CEIL(vaddr, 8) - vaddr;
 	else
 		off = RTE_PTR_ALIGN_CEIL(vaddr, RTE_MEMPOOL_ALIGN) - vaddr;
@@ -393,7 +393,7 @@ rte_mempool_populate_virt(struct rte_mempool *mp, char *addr,
 	size_t off, phys_len;
 	int ret, cnt = 0;
 
-	if (mp->flags & MEMPOOL_F_NO_IOVA_CONTIG)
+	if (mp->flags & RTE_MEMPOOL_F_NO_IOVA_CONTIG)
 		return rte_mempool_populate_iova(mp, addr, RTE_BAD_IOVA,
 			len, free_cb, opaque);
 
@@ -450,7 +450,7 @@ rte_mempool_get_page_size(struct rte_mempool *mp, size_t *pg_sz)
 	if (ret < 0)
 		return -EINVAL;
 	alloc_in_ext_mem = (ret == 1);
-	need_iova_contig_obj = !(mp->flags & MEMPOOL_F_NO_IOVA_CONTIG);
+	need_iova_contig_obj = !(mp->flags & RTE_MEMPOOL_F_NO_IOVA_CONTIG);
 
 	if (!need_iova_contig_obj)
 		*pg_sz = 0;
@@ -527,7 +527,7 @@ rte_mempool_populate_default(struct rte_mempool *mp)
 	 * reserve space in smaller chunks.
 	 */
 
-	need_iova_contig_obj = !(mp->flags & MEMPOOL_F_NO_IOVA_CONTIG);
+	need_iova_contig_obj = !(mp->flags & RTE_MEMPOOL_F_NO_IOVA_CONTIG);
 	ret = rte_mempool_get_page_size(mp, &pg_sz);
 	if (ret < 0)
 		return ret;
@@ -777,12 +777,12 @@ rte_mempool_cache_free(struct rte_mempool_cache *cache)
 	rte_free(cache);
 }
 
-#define MEMPOOL_KNOWN_FLAGS (MEMPOOL_F_NO_SPREAD \
-	| MEMPOOL_F_NO_CACHE_ALIGN \
-	| MEMPOOL_F_SP_PUT \
-	| MEMPOOL_F_SC_GET \
-	| MEMPOOL_F_POOL_CREATED \
-	| MEMPOOL_F_NO_IOVA_CONTIG \
+#define MEMPOOL_KNOWN_FLAGS (RTE_MEMPOOL_F_NO_SPREAD \
+	| RTE_MEMPOOL_F_NO_CACHE_ALIGN \
+	| RTE_MEMPOOL_F_SP_PUT \
+	| RTE_MEMPOOL_F_SC_GET \
+	| RTE_MEMPOOL_F_POOL_CREATED \
+	| RTE_MEMPOOL_F_NO_IOVA_CONTIG \
 	)
 /* create an empty mempool */
 struct rte_mempool *
@@ -835,8 +835,8 @@ rte_mempool_create_empty(const char *name, unsigned n, unsigned elt_size,
 	}
 
 	/* "no cache align" imply "no spread" */
-	if (flags & MEMPOOL_F_NO_CACHE_ALIGN)
-		flags |= MEMPOOL_F_NO_SPREAD;
+	if (flags & RTE_MEMPOOL_F_NO_CACHE_ALIGN)
+		flags |= RTE_MEMPOOL_F_NO_SPREAD;
 
 	/* calculate mempool object sizes. */
 	if (!rte_mempool_calc_obj_size(elt_size, flags, &objsz)) {
@@ -948,11 +948,11 @@ rte_mempool_create(const char *name, unsigned n, unsigned elt_size,
 	 * Since we have 4 combinations of the SP/SC/MP/MC examine the flags to
 	 * set the correct index into the table of ops structs.
 	 */
-	if ((flags & MEMPOOL_F_SP_PUT) && (flags & MEMPOOL_F_SC_GET))
+	if ((flags & RTE_MEMPOOL_F_SP_PUT) && (flags & RTE_MEMPOOL_F_SC_GET))
 		ret = rte_mempool_set_ops_byname(mp, "ring_sp_sc", NULL);
-	else if (flags & MEMPOOL_F_SP_PUT)
+	else if (flags & RTE_MEMPOOL_F_SP_PUT)
 		ret = rte_mempool_set_ops_byname(mp, "ring_sp_mc", NULL);
-	else if (flags & MEMPOOL_F_SC_GET)
+	else if (flags & RTE_MEMPOOL_F_SC_GET)
 		ret = rte_mempool_set_ops_byname(mp, "ring_mp_sc", NULL);
 	else
 		ret = rte_mempool_set_ops_byname(mp, "ring_mp_mc", NULL);
diff --git a/lib/mempool/rte_mempool.h b/lib/mempool/rte_mempool.h
index 8ef4c8ed1e..d4bcb009fa 100644
--- a/lib/mempool/rte_mempool.h
+++ b/lib/mempool/rte_mempool.h
@@ -251,17 +251,42 @@ struct rte_mempool {
 }  __rte_cache_aligned;
 
 /** Spreading among memory channels not required. */
-#define MEMPOOL_F_NO_SPREAD      0x0001
+#define RTE_MEMPOOL_F_NO_SPREAD		0x0001
+/**
+ * Backward compatibility synonym for RTE_MEMPOOL_F_NO_SPREAD.
+ * To be deprecated.
+ */
+#define MEMPOOL_F_NO_SPREAD		RTE_MEMPOOL_F_NO_SPREAD
 /** Do not align objects on cache lines. */
-#define MEMPOOL_F_NO_CACHE_ALIGN 0x0002
+#define RTE_MEMPOOL_F_NO_CACHE_ALIGN	0x0002
+/**
+ * Backward compatibility synonym for RTE_MEMPOOL_F_NO_CACHE_ALIGN.
+ * To be deprecated.
+ */
+#define MEMPOOL_F_NO_CACHE_ALIGN	RTE_MEMPOOL_F_NO_CACHE_ALIGN
 /** Default put is "single-producer". */
-#define MEMPOOL_F_SP_PUT         0x0004
+#define RTE_MEMPOOL_F_SP_PUT		0x0004
+/**
+ * Backward compatibility synonym for RTE_MEMPOOL_F_SP_PUT.
+ * To be deprecated.
+ */
+#define MEMPOOL_F_SP_PUT		RTE_MEMPOOL_F_SP_PUT
 /** Default get is "single-consumer". */
-#define MEMPOOL_F_SC_GET         0x0008
+#define RTE_MEMPOOL_F_SC_GET		0x0008
+/**
+ * Backward compatibility synonym for RTE_MEMPOOL_F_SC_GET.
+ * To be deprecated.
+ */
+#define MEMPOOL_F_SC_GET		RTE_MEMPOOL_F_SC_GET
 /** Internal: pool is created. */
-#define MEMPOOL_F_POOL_CREATED   0x0010
+#define RTE_MEMPOOL_F_POOL_CREATED	0x0010
 /** Don't need IOVA contiguous objects. */
-#define MEMPOOL_F_NO_IOVA_CONTIG 0x0020
+#define RTE_MEMPOOL_F_NO_IOVA_CONTIG	0x0020
+/**
+ * Backward compatibility synonym for RTE_MEMPOOL_F_NO_IOVA_CONTIG.
+ * To be deprecated.
+ */
+#define MEMPOOL_F_NO_IOVA_CONTIG	RTE_MEMPOOL_F_NO_IOVA_CONTIG
 
 /**
  * @internal When debug is enabled, store some statistics.
@@ -424,9 +449,9 @@ typedef unsigned (*rte_mempool_get_count)(const struct rte_mempool *mp);
  * Calculate memory size required to store given number of objects.
  *
  * If mempool objects are not required to be IOVA-contiguous
- * (the flag MEMPOOL_F_NO_IOVA_CONTIG is set), min_chunk_size defines
+ * (the flag RTE_MEMPOOL_F_NO_IOVA_CONTIG is set), min_chunk_size defines
  * virtually contiguous chunk size. Otherwise, if mempool objects must
- * be IOVA-contiguous (the flag MEMPOOL_F_NO_IOVA_CONTIG is clear),
+ * be IOVA-contiguous (the flag RTE_MEMPOOL_F_NO_IOVA_CONTIG is clear),
  * min_chunk_size defines IOVA-contiguous chunk size.
  *
  * @param[in] mp
@@ -974,22 +999,22 @@ typedef void (rte_mempool_ctor_t)(struct rte_mempool *, void *);
  *   constraint for the reserved zone.
  * @param flags
  *   The *flags* arguments is an OR of following flags:
- *   - MEMPOOL_F_NO_SPREAD: By default, objects addresses are spread
+ *   - RTE_MEMPOOL_F_NO_SPREAD: By default, objects addresses are spread
  *     between channels in RAM: the pool allocator will add padding
  *     between objects depending on the hardware configuration. See
  *     Memory alignment constraints for details. If this flag is set,
  *     the allocator will just align them to a cache line.
- *   - MEMPOOL_F_NO_CACHE_ALIGN: By default, the returned objects are
+ *   - RTE_MEMPOOL_F_NO_CACHE_ALIGN: By default, the returned objects are
  *     cache-aligned. This flag removes this constraint, and no
  *     padding will be present between objects. This flag implies
- *     MEMPOOL_F_NO_SPREAD.
- *   - MEMPOOL_F_SP_PUT: If this flag is set, the default behavior
+ *     RTE_MEMPOOL_F_NO_SPREAD.
+ *   - RTE_MEMPOOL_F_SP_PUT: If this flag is set, the default behavior
  *     when using rte_mempool_put() or rte_mempool_put_bulk() is
  *     "single-producer". Otherwise, it is "multi-producers".
- *   - MEMPOOL_F_SC_GET: If this flag is set, the default behavior
+ *   - RTE_MEMPOOL_F_SC_GET: If this flag is set, the default behavior
  *     when using rte_mempool_get() or rte_mempool_get_bulk() is
  *     "single-consumer". Otherwise, it is "multi-consumers".
- *   - MEMPOOL_F_NO_IOVA_CONTIG: If set, allocated objects won't
+ *   - RTE_MEMPOOL_F_NO_IOVA_CONTIG: If set, allocated objects won't
  *     necessarily be contiguous in IO memory.
  * @return
  *   The pointer to the new allocated mempool, on success. NULL on error
@@ -1676,7 +1701,7 @@ rte_mempool_empty(const struct rte_mempool *mp)
  *   A pointer (virtual address) to the element of the pool.
  * @return
  *   The IO address of the elt element.
- *   If the mempool was created with MEMPOOL_F_NO_IOVA_CONTIG, the
+ *   If the mempool was created with RTE_MEMPOOL_F_NO_IOVA_CONTIG, the
  *   returned value is RTE_BAD_IOVA.
  */
 static inline rte_iova_t
diff --git a/lib/mempool/rte_mempool_ops.c b/lib/mempool/rte_mempool_ops.c
index 5e22667787..2d36dee8f0 100644
--- a/lib/mempool/rte_mempool_ops.c
+++ b/lib/mempool/rte_mempool_ops.c
@@ -168,7 +168,7 @@ rte_mempool_set_ops_byname(struct rte_mempool *mp, const char *name,
 	unsigned i;
 
 	/* too late, the mempool is already populated. */
-	if (mp->flags & MEMPOOL_F_POOL_CREATED)
+	if (mp->flags & RTE_MEMPOOL_F_POOL_CREATED)
 		return -EEXIST;
 
 	for (i = 0; i < rte_mempool_ops_table.num_ops; i++) {
diff --git a/lib/pdump/rte_pdump.c b/lib/pdump/rte_pdump.c
index 382217bc15..46a87e2339 100644
--- a/lib/pdump/rte_pdump.c
+++ b/lib/pdump/rte_pdump.c
@@ -371,7 +371,8 @@ pdump_validate_ring_mp(struct rte_ring *ring, struct rte_mempool *mp)
 		rte_errno = EINVAL;
 		return -1;
 	}
-	if (mp->flags & MEMPOOL_F_SP_PUT || mp->flags & MEMPOOL_F_SC_GET) {
+	if (mp->flags & RTE_MEMPOOL_F_SP_PUT ||
+	    mp->flags & RTE_MEMPOOL_F_SC_GET) {
 		PDUMP_LOG(ERR,
 			  "mempool with SP or SC set not valid for pdump,"
 			  "must have MP and MC set\n");
diff --git a/lib/vhost/iotlb.c b/lib/vhost/iotlb.c
index e4a445e709..82bdb84526 100644
--- a/lib/vhost/iotlb.c
+++ b/lib/vhost/iotlb.c
@@ -321,8 +321,8 @@ vhost_user_iotlb_init(struct virtio_net *dev, int vq_index)
 	vq->iotlb_pool = rte_mempool_create(pool_name,
 			IOTLB_CACHE_SIZE, sizeof(struct vhost_iotlb_entry), 0,
 			0, 0, NULL, NULL, NULL, socket,
-			MEMPOOL_F_NO_CACHE_ALIGN |
-			MEMPOOL_F_SP_PUT);
+			RTE_MEMPOOL_F_NO_CACHE_ALIGN |
+			RTE_MEMPOOL_F_SP_PUT);
 	if (!vq->iotlb_pool) {
 		VHOST_LOG_CONFIG(ERR,
 				"Failed to create IOTLB cache pool (%s)\n",
-- 
2.30.2


^ permalink raw reply	[flat|nested] 53+ messages in thread

* [dpdk-dev] [PATCH v2 3/6] mempool: add namespace to internal but still visible API
  2021-10-19 10:08 ` [dpdk-dev] [PATCH v2 0/6] mempool: cleanup namespace Andrew Rybchenko
  2021-10-19 10:08   ` [dpdk-dev] [PATCH v2 1/6] mempool: avoid flags documentation in the next line Andrew Rybchenko
  2021-10-19 10:08   ` [dpdk-dev] [PATCH v2 2/6] mempool: add namespace prefix to flags Andrew Rybchenko
@ 2021-10-19 10:08   ` Andrew Rybchenko
  2021-10-19 16:14     ` Olivier Matz
  2021-10-19 10:08   ` [dpdk-dev] [PATCH v2 4/6] mempool: make header size calculation internal Andrew Rybchenko
                     ` (2 subsequent siblings)
  5 siblings, 1 reply; 53+ messages in thread
From: Andrew Rybchenko @ 2021-10-19 10:08 UTC (permalink / raw)
  To: Olivier Matz, David Marchand, Jerin Jacob, Nithin Dabilpuram,
	Kiran Kumar K, Sunil Kumar Kori, Satha Rao, Harman Kalra,
	Anoob Joseph
  Cc: dev

Add the RTE_ prefix to internal API defined in the public header.
Use the prefix instead of a double underscore.
Use uppercase for macros in the case of a name conflict.

Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
---
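A minimal sketch of how a driver call site changes with this rename. The helper below is illustrative only (the mempool and object are placeholders); it mirrors the "mark as put" call sites updated in this patch and remains a no-op unless RTE_LIBRTE_MEMPOOL_DEBUG is enabled.

    #include <rte_mempool.h>

    static inline void
    mark_obj_as_put(struct rte_mempool *mp, void *obj)
    {
            /* Old spelling: __mempool_check_cookies(mp, &obj, 1, 0);
             * new spelling below. Behaviour is unchanged: the macro
             * expands to rte_mempool_check_cookies() only when
             * RTE_LIBRTE_MEMPOOL_DEBUG is defined.
             */
            RTE_MEMPOOL_CHECK_COOKIES(mp, &obj, 1, 0);
    }
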
 drivers/event/octeontx/ssovf_worker.h      |  2 +-
 drivers/net/cnxk/cn10k_rx.h                | 12 ++--
 drivers/net/cnxk/cn10k_tx.h                | 30 ++++-----
 drivers/net/cnxk/cn9k_rx.h                 | 12 ++--
 drivers/net/cnxk/cn9k_tx.h                 | 26 ++++----
 drivers/net/octeontx/octeontx_rxtx.h       |  4 +-
 drivers/net/octeontx2/otx2_ethdev_sec_tx.h |  2 +-
 drivers/net/octeontx2/otx2_rx.c            |  8 +--
 drivers/net/octeontx2/otx2_rx.h            |  4 +-
 drivers/net/octeontx2/otx2_tx.c            | 16 ++---
 drivers/net/octeontx2/otx2_tx.h            |  4 +-
 lib/mempool/rte_mempool.c                  |  8 +--
 lib/mempool/rte_mempool.h                  | 77 +++++++++++-----------
 13 files changed, 103 insertions(+), 102 deletions(-)

diff --git a/drivers/event/octeontx/ssovf_worker.h b/drivers/event/octeontx/ssovf_worker.h
index f609b296ed..ba9e1cd0fa 100644
--- a/drivers/event/octeontx/ssovf_worker.h
+++ b/drivers/event/octeontx/ssovf_worker.h
@@ -83,7 +83,7 @@ ssovf_octeontx_wqe_xtract_mseg(octtx_wqe_t *wqe,
 
 		mbuf->data_off = sizeof(octtx_pki_buflink_t);
 
-		__mempool_check_cookies(mbuf->pool, (void **)&mbuf, 1, 1);
+		RTE_MEMPOOL_CHECK_COOKIES(mbuf->pool, (void **)&mbuf, 1, 1);
 		if (nb_segs == 1)
 			mbuf->data_len = bytes_left;
 		else
diff --git a/drivers/net/cnxk/cn10k_rx.h b/drivers/net/cnxk/cn10k_rx.h
index fcc451aa36..6b40a9d0b5 100644
--- a/drivers/net/cnxk/cn10k_rx.h
+++ b/drivers/net/cnxk/cn10k_rx.h
@@ -276,7 +276,7 @@ nix_cqe_xtract_mseg(const union nix_rx_parse_u *rx, struct rte_mbuf *mbuf,
 		mbuf->next = ((struct rte_mbuf *)*iova_list) - 1;
 		mbuf = mbuf->next;
 
-		__mempool_check_cookies(mbuf->pool, (void **)&mbuf, 1, 1);
+		RTE_MEMPOOL_CHECK_COOKIES(mbuf->pool, (void **)&mbuf, 1, 1);
 
 		mbuf->data_len = sg & 0xFFFF;
 		sg = sg >> 16;
@@ -306,7 +306,7 @@ cn10k_nix_cqe_to_mbuf(const struct nix_cqe_hdr_s *cq, const uint32_t tag,
 	uint64_t ol_flags = 0;
 
 	/* Mark mempool obj as "get" as it is alloc'ed by NIX */
-	__mempool_check_cookies(mbuf->pool, (void **)&mbuf, 1, 1);
+	RTE_MEMPOOL_CHECK_COOKIES(mbuf->pool, (void **)&mbuf, 1, 1);
 
 	if (flag & NIX_RX_OFFLOAD_PTYPE_F)
 		mbuf->packet_type = nix_ptype_get(lookup_mem, w1);
@@ -905,10 +905,10 @@ cn10k_nix_recv_pkts_vector(void *args, struct rte_mbuf **mbufs, uint16_t pkts,
 		roc_prefetch_store_keep(mbuf3);
 
 		/* Mark mempool obj as "get" as it is alloc'ed by NIX */
-		__mempool_check_cookies(mbuf0->pool, (void **)&mbuf0, 1, 1);
-		__mempool_check_cookies(mbuf1->pool, (void **)&mbuf1, 1, 1);
-		__mempool_check_cookies(mbuf2->pool, (void **)&mbuf2, 1, 1);
-		__mempool_check_cookies(mbuf3->pool, (void **)&mbuf3, 1, 1);
+		RTE_MEMPOOL_CHECK_COOKIES(mbuf0->pool, (void **)&mbuf0, 1, 1);
+		RTE_MEMPOOL_CHECK_COOKIES(mbuf1->pool, (void **)&mbuf1, 1, 1);
+		RTE_MEMPOOL_CHECK_COOKIES(mbuf2->pool, (void **)&mbuf2, 1, 1);
+		RTE_MEMPOOL_CHECK_COOKIES(mbuf3->pool, (void **)&mbuf3, 1, 1);
 
 		packets += NIX_DESCS_PER_LOOP;
 
diff --git a/drivers/net/cnxk/cn10k_tx.h b/drivers/net/cnxk/cn10k_tx.h
index c6f349b352..0fd877f4ec 100644
--- a/drivers/net/cnxk/cn10k_tx.h
+++ b/drivers/net/cnxk/cn10k_tx.h
@@ -677,7 +677,7 @@ cn10k_nix_xmit_prepare(struct rte_mbuf *m, uint64_t *cmd, const uint16_t flags,
 		}
 		/* Mark mempool object as "put" since it is freed by NIX */
 		if (!send_hdr->w0.df)
-			__mempool_check_cookies(m->pool, (void **)&m, 1, 0);
+			RTE_MEMPOOL_CHECK_COOKIES(m->pool, (void **)&m, 1, 0);
 	} else {
 		sg->seg1_size = m->data_len;
 		*(rte_iova_t *)(sg + 1) = rte_mbuf_data_iova(m);
@@ -789,7 +789,7 @@ cn10k_nix_prepare_mseg(struct rte_mbuf *m, uint64_t *cmd, const uint16_t flags)
 		/* Mark mempool object as "put" since it is freed by NIX */
 #ifdef RTE_LIBRTE_MEMPOOL_DEBUG
 	if (!(sg_u & (1ULL << 55)))
-		__mempool_check_cookies(m->pool, (void **)&m, 1, 0);
+		RTE_MEMPOOL_CHECK_COOKIES(m->pool, (void **)&m, 1, 0);
 	rte_io_wmb();
 #endif
 	m = m_next;
@@ -808,7 +808,7 @@ cn10k_nix_prepare_mseg(struct rte_mbuf *m, uint64_t *cmd, const uint16_t flags)
 			 */
 #ifdef RTE_LIBRTE_MEMPOOL_DEBUG
 		if (!(sg_u & (1ULL << (i + 55))))
-			__mempool_check_cookies(m->pool, (void **)&m, 1, 0);
+			RTE_MEMPOOL_CHECK_COOKIES(m->pool, (void **)&m, 1, 0);
 #endif
 		slist++;
 		i++;
@@ -1177,7 +1177,7 @@ cn10k_nix_prepare_mseg_vec_list(struct rte_mbuf *m, uint64_t *cmd,
 		/* Mark mempool object as "put" since it is freed by NIX */
 #ifdef RTE_LIBRTE_MEMPOOL_DEBUG
 	if (!(sg_u & (1ULL << 55)))
-		__mempool_check_cookies(m->pool, (void **)&m, 1, 0);
+		RTE_MEMPOOL_CHECK_COOKIES(m->pool, (void **)&m, 1, 0);
 	rte_io_wmb();
 #endif
 
@@ -1194,7 +1194,7 @@ cn10k_nix_prepare_mseg_vec_list(struct rte_mbuf *m, uint64_t *cmd,
 			 */
 #ifdef RTE_LIBRTE_MEMPOOL_DEBUG
 		if (!(sg_u & (1ULL << (i + 55))))
-			__mempool_check_cookies(m->pool, (void **)&m, 1, 0);
+			RTE_MEMPOOL_CHECK_COOKIES(m->pool, (void **)&m, 1, 0);
 		rte_io_wmb();
 #endif
 		slist++;
@@ -1235,7 +1235,7 @@ cn10k_nix_prepare_mseg_vec(struct rte_mbuf *m, uint64_t *cmd, uint64x2_t *cmd0,
 #ifdef RTE_LIBRTE_MEMPOOL_DEBUG
 		sg.u = vgetq_lane_u64(cmd1[0], 0);
 		if (!(sg.u & (1ULL << 55)))
-			__mempool_check_cookies(m->pool, (void **)&m, 1, 0);
+			RTE_MEMPOOL_CHECK_COOKIES(m->pool, (void **)&m, 1, 0);
 		rte_io_wmb();
 #endif
 		return;
@@ -1425,7 +1425,7 @@ cn10k_nix_xmit_store(struct rte_mbuf *mbuf, uint8_t segdw, uintptr_t laddr,
 #ifdef RTE_LIBRTE_MEMPOOL_DEBUG
 		sg.u = vgetq_lane_u64(cmd1, 0);
 		if (!(sg.u & (1ULL << 55)))
-			__mempool_check_cookies(mbuf->pool, (void **)&mbuf, 1,
+			RTE_MEMPOOL_CHECK_COOKIES(mbuf->pool, (void **)&mbuf, 1,
 						0);
 		rte_io_wmb();
 #endif
@@ -2352,28 +2352,28 @@ cn10k_nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,
 			if (cnxk_nix_prefree_seg((struct rte_mbuf *)mbuf0))
 				vsetq_lane_u64(0x80000, xmask01, 0);
 			else
-				__mempool_check_cookies(
+				RTE_MEMPOOL_CHECK_COOKIES(
 					((struct rte_mbuf *)mbuf0)->pool,
 					(void **)&mbuf0, 1, 0);
 
 			if (cnxk_nix_prefree_seg((struct rte_mbuf *)mbuf1))
 				vsetq_lane_u64(0x80000, xmask01, 1);
 			else
-				__mempool_check_cookies(
+				RTE_MEMPOOL_CHECK_COOKIES(
 					((struct rte_mbuf *)mbuf1)->pool,
 					(void **)&mbuf1, 1, 0);
 
 			if (cnxk_nix_prefree_seg((struct rte_mbuf *)mbuf2))
 				vsetq_lane_u64(0x80000, xmask23, 0);
 			else
-				__mempool_check_cookies(
+				RTE_MEMPOOL_CHECK_COOKIES(
 					((struct rte_mbuf *)mbuf2)->pool,
 					(void **)&mbuf2, 1, 0);
 
 			if (cnxk_nix_prefree_seg((struct rte_mbuf *)mbuf3))
 				vsetq_lane_u64(0x80000, xmask23, 1);
 			else
-				__mempool_check_cookies(
+				RTE_MEMPOOL_CHECK_COOKIES(
 					((struct rte_mbuf *)mbuf3)->pool,
 					(void **)&mbuf3, 1, 0);
 			senddesc01_w0 = vorrq_u64(senddesc01_w0, xmask01);
@@ -2389,19 +2389,19 @@ cn10k_nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,
 			/* Mark mempool object as "put" since
 			 * it is freed by NIX
 			 */
-			__mempool_check_cookies(
+			RTE_MEMPOOL_CHECK_COOKIES(
 				((struct rte_mbuf *)mbuf0)->pool,
 				(void **)&mbuf0, 1, 0);
 
-			__mempool_check_cookies(
+			RTE_MEMPOOL_CHECK_COOKIES(
 				((struct rte_mbuf *)mbuf1)->pool,
 				(void **)&mbuf1, 1, 0);
 
-			__mempool_check_cookies(
+			RTE_MEMPOOL_CHECK_COOKIES(
 				((struct rte_mbuf *)mbuf2)->pool,
 				(void **)&mbuf2, 1, 0);
 
-			__mempool_check_cookies(
+			RTE_MEMPOOL_CHECK_COOKIES(
 				((struct rte_mbuf *)mbuf3)->pool,
 				(void **)&mbuf3, 1, 0);
 		}
diff --git a/drivers/net/cnxk/cn9k_rx.h b/drivers/net/cnxk/cn9k_rx.h
index 7ab415a194..ba3c3668f7 100644
--- a/drivers/net/cnxk/cn9k_rx.h
+++ b/drivers/net/cnxk/cn9k_rx.h
@@ -151,7 +151,7 @@ nix_cqe_xtract_mseg(const union nix_rx_parse_u *rx, struct rte_mbuf *mbuf,
 		mbuf->next = ((struct rte_mbuf *)*iova_list) - 1;
 		mbuf = mbuf->next;
 
-		__mempool_check_cookies(mbuf->pool, (void **)&mbuf, 1, 1);
+		RTE_MEMPOOL_CHECK_COOKIES(mbuf->pool, (void **)&mbuf, 1, 1);
 
 		mbuf->data_len = sg & 0xFFFF;
 		sg = sg >> 16;
@@ -288,7 +288,7 @@ cn9k_nix_cqe_to_mbuf(const struct nix_cqe_hdr_s *cq, const uint32_t tag,
 	uint64_t ol_flags = 0;
 
 	/* Mark mempool obj as "get" as it is alloc'ed by NIX */
-	__mempool_check_cookies(mbuf->pool, (void **)&mbuf, 1, 1);
+	RTE_MEMPOOL_CHECK_COOKIES(mbuf->pool, (void **)&mbuf, 1, 1);
 
 	if (flag & NIX_RX_OFFLOAD_PTYPE_F)
 		packet_type = nix_ptype_get(lookup_mem, w1);
@@ -757,10 +757,10 @@ cn9k_nix_recv_pkts_vector(void *rx_queue, struct rte_mbuf **rx_pkts,
 		roc_prefetch_store_keep(mbuf3);
 
 		/* Mark mempool obj as "get" as it is alloc'ed by NIX */
-		__mempool_check_cookies(mbuf0->pool, (void **)&mbuf0, 1, 1);
-		__mempool_check_cookies(mbuf1->pool, (void **)&mbuf1, 1, 1);
-		__mempool_check_cookies(mbuf2->pool, (void **)&mbuf2, 1, 1);
-		__mempool_check_cookies(mbuf3->pool, (void **)&mbuf3, 1, 1);
+		RTE_MEMPOOL_CHECK_COOKIES(mbuf0->pool, (void **)&mbuf0, 1, 1);
+		RTE_MEMPOOL_CHECK_COOKIES(mbuf1->pool, (void **)&mbuf1, 1, 1);
+		RTE_MEMPOOL_CHECK_COOKIES(mbuf2->pool, (void **)&mbuf2, 1, 1);
+		RTE_MEMPOOL_CHECK_COOKIES(mbuf3->pool, (void **)&mbuf3, 1, 1);
 
 		/* Advance head pointer and packets */
 		head += NIX_DESCS_PER_LOOP;
diff --git a/drivers/net/cnxk/cn9k_tx.h b/drivers/net/cnxk/cn9k_tx.h
index 44273eca90..83f4be84f1 100644
--- a/drivers/net/cnxk/cn9k_tx.h
+++ b/drivers/net/cnxk/cn9k_tx.h
@@ -285,7 +285,7 @@ cn9k_nix_xmit_prepare(struct rte_mbuf *m, uint64_t *cmd, const uint16_t flags,
 		}
 		/* Mark mempool object as "put" since it is freed by NIX */
 		if (!send_hdr->w0.df)
-			__mempool_check_cookies(m->pool, (void **)&m, 1, 0);
+			RTE_MEMPOOL_CHECK_COOKIES(m->pool, (void **)&m, 1, 0);
 	}
 }
 
@@ -397,7 +397,7 @@ cn9k_nix_prepare_mseg(struct rte_mbuf *m, uint64_t *cmd, const uint16_t flags)
 		/* Mark mempool object as "put" since it is freed by NIX */
 #ifdef RTE_LIBRTE_MEMPOOL_DEBUG
 		if (!(sg_u & (1ULL << (i + 55))))
-			__mempool_check_cookies(m->pool, (void **)&m, 1, 0);
+			RTE_MEMPOOL_CHECK_COOKIES(m->pool, (void **)&m, 1, 0);
 		rte_io_wmb();
 #endif
 		slist++;
@@ -611,7 +611,7 @@ cn9k_nix_prepare_mseg_vec_list(struct rte_mbuf *m, uint64_t *cmd,
 		/* Mark mempool object as "put" since it is freed by NIX */
 #ifdef RTE_LIBRTE_MEMPOOL_DEBUG
 	if (!(sg_u & (1ULL << 55)))
-		__mempool_check_cookies(m->pool, (void **)&m, 1, 0);
+		RTE_MEMPOOL_CHECK_COOKIES(m->pool, (void **)&m, 1, 0);
 	rte_io_wmb();
 #endif
 
@@ -628,7 +628,7 @@ cn9k_nix_prepare_mseg_vec_list(struct rte_mbuf *m, uint64_t *cmd,
 			 */
 #ifdef RTE_LIBRTE_MEMPOOL_DEBUG
 		if (!(sg_u & (1ULL << (i + 55))))
-			__mempool_check_cookies(m->pool, (void **)&m, 1, 0);
+			RTE_MEMPOOL_CHECK_COOKIES(m->pool, (void **)&m, 1, 0);
 		rte_io_wmb();
 #endif
 		slist++;
@@ -680,7 +680,7 @@ cn9k_nix_prepare_mseg_vec(struct rte_mbuf *m, uint64_t *cmd, uint64x2_t *cmd0,
 #ifdef RTE_LIBRTE_MEMPOOL_DEBUG
 		sg.u = vgetq_lane_u64(cmd1[0], 0);
 		if (!(sg.u & (1ULL << 55)))
-			__mempool_check_cookies(m->pool, (void **)&m, 1, 0);
+			RTE_MEMPOOL_CHECK_COOKIES(m->pool, (void **)&m, 1, 0);
 		rte_io_wmb();
 #endif
 		return 2 + !!(flags & NIX_TX_NEED_EXT_HDR) +
@@ -1627,28 +1627,28 @@ cn9k_nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,
 			if (cnxk_nix_prefree_seg((struct rte_mbuf *)mbuf0))
 				vsetq_lane_u64(0x80000, xmask01, 0);
 			else
-				__mempool_check_cookies(
+				RTE_MEMPOOL_CHECK_COOKIES(
 					((struct rte_mbuf *)mbuf0)->pool,
 					(void **)&mbuf0, 1, 0);
 
 			if (cnxk_nix_prefree_seg((struct rte_mbuf *)mbuf1))
 				vsetq_lane_u64(0x80000, xmask01, 1);
 			else
-				__mempool_check_cookies(
+				RTE_MEMPOOL_CHECK_COOKIES(
 					((struct rte_mbuf *)mbuf1)->pool,
 					(void **)&mbuf1, 1, 0);
 
 			if (cnxk_nix_prefree_seg((struct rte_mbuf *)mbuf2))
 				vsetq_lane_u64(0x80000, xmask23, 0);
 			else
-				__mempool_check_cookies(
+				RTE_MEMPOOL_CHECK_COOKIES(
 					((struct rte_mbuf *)mbuf2)->pool,
 					(void **)&mbuf2, 1, 0);
 
 			if (cnxk_nix_prefree_seg((struct rte_mbuf *)mbuf3))
 				vsetq_lane_u64(0x80000, xmask23, 1);
 			else
-				__mempool_check_cookies(
+				RTE_MEMPOOL_CHECK_COOKIES(
 					((struct rte_mbuf *)mbuf3)->pool,
 					(void **)&mbuf3, 1, 0);
 			senddesc01_w0 = vorrq_u64(senddesc01_w0, xmask01);
@@ -1667,19 +1667,19 @@ cn9k_nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,
 			/* Mark mempool object as "put" since
 			 * it is freed by NIX
 			 */
-			__mempool_check_cookies(
+			RTE_MEMPOOL_CHECK_COOKIES(
 				((struct rte_mbuf *)mbuf0)->pool,
 				(void **)&mbuf0, 1, 0);
 
-			__mempool_check_cookies(
+			RTE_MEMPOOL_CHECK_COOKIES(
 				((struct rte_mbuf *)mbuf1)->pool,
 				(void **)&mbuf1, 1, 0);
 
-			__mempool_check_cookies(
+			RTE_MEMPOOL_CHECK_COOKIES(
 				((struct rte_mbuf *)mbuf2)->pool,
 				(void **)&mbuf2, 1, 0);
 
-			__mempool_check_cookies(
+			RTE_MEMPOOL_CHECK_COOKIES(
 				((struct rte_mbuf *)mbuf3)->pool,
 				(void **)&mbuf3, 1, 0);
 #ifdef RTE_LIBRTE_MEMPOOL_DEBUG
diff --git a/drivers/net/octeontx/octeontx_rxtx.h b/drivers/net/octeontx/octeontx_rxtx.h
index e0723ac26a..9af797c36c 100644
--- a/drivers/net/octeontx/octeontx_rxtx.h
+++ b/drivers/net/octeontx/octeontx_rxtx.h
@@ -344,7 +344,7 @@ __octeontx_xmit_prepare(struct rte_mbuf *tx_pkt, uint64_t *cmd_buf,
 
 	/* Mark mempool object as "put" since it is freed by PKO */
 	if (!(cmd_buf[0] & (1ULL << 58)))
-		__mempool_check_cookies(m_tofree->pool, (void **)&m_tofree,
+		RTE_MEMPOOL_CHECK_COOKIES(m_tofree->pool, (void **)&m_tofree,
 					1, 0);
 	/* Get the gaura Id */
 	gaura_id =
@@ -417,7 +417,7 @@ __octeontx_xmit_mseg_prepare(struct rte_mbuf *tx_pkt, uint64_t *cmd_buf,
 		 */
 		if (!(cmd_buf[nb_desc] & (1ULL << 57))) {
 			tx_pkt->next = NULL;
-			__mempool_check_cookies(m_tofree->pool,
+			RTE_MEMPOOL_CHECK_COOKIES(m_tofree->pool,
 						(void **)&m_tofree, 1, 0);
 		}
 		nb_desc++;
diff --git a/drivers/net/octeontx2/otx2_ethdev_sec_tx.h b/drivers/net/octeontx2/otx2_ethdev_sec_tx.h
index 623a2a841e..65140b759c 100644
--- a/drivers/net/octeontx2/otx2_ethdev_sec_tx.h
+++ b/drivers/net/octeontx2/otx2_ethdev_sec_tx.h
@@ -146,7 +146,7 @@ otx2_sec_event_tx(uint64_t base, struct rte_event *ev, struct rte_mbuf *m,
 	sd->nix_iova.addr = rte_mbuf_data_iova(m);
 
 	/* Mark mempool object as "put" since it is freed by NIX */
-	__mempool_check_cookies(m->pool, (void **)&m, 1, 0);
+	RTE_MEMPOOL_CHECK_COOKIES(m->pool, (void **)&m, 1, 0);
 
 	if (!ev->sched_type)
 		otx2_ssogws_head_wait(base + SSOW_LF_GWS_TAG);
diff --git a/drivers/net/octeontx2/otx2_rx.c b/drivers/net/octeontx2/otx2_rx.c
index ffeade5952..0d85c898bf 100644
--- a/drivers/net/octeontx2/otx2_rx.c
+++ b/drivers/net/octeontx2/otx2_rx.c
@@ -296,10 +296,10 @@ nix_recv_pkts_vector(void *rx_queue, struct rte_mbuf **rx_pkts,
 		otx2_prefetch_store_keep(mbuf3);
 
 		/* Mark mempool obj as "get" as it is alloc'ed by NIX */
-		__mempool_check_cookies(mbuf0->pool, (void **)&mbuf0, 1, 1);
-		__mempool_check_cookies(mbuf1->pool, (void **)&mbuf1, 1, 1);
-		__mempool_check_cookies(mbuf2->pool, (void **)&mbuf2, 1, 1);
-		__mempool_check_cookies(mbuf3->pool, (void **)&mbuf3, 1, 1);
+		RTE_MEMPOOL_CHECK_COOKIES(mbuf0->pool, (void **)&mbuf0, 1, 1);
+		RTE_MEMPOOL_CHECK_COOKIES(mbuf1->pool, (void **)&mbuf1, 1, 1);
+		RTE_MEMPOOL_CHECK_COOKIES(mbuf2->pool, (void **)&mbuf2, 1, 1);
+		RTE_MEMPOOL_CHECK_COOKIES(mbuf3->pool, (void **)&mbuf3, 1, 1);
 
 		/* Advance head pointer and packets */
 		head += NIX_DESCS_PER_LOOP; head &= qmask;
diff --git a/drivers/net/octeontx2/otx2_rx.h b/drivers/net/octeontx2/otx2_rx.h
index ea29aec62f..3dcc563be1 100644
--- a/drivers/net/octeontx2/otx2_rx.h
+++ b/drivers/net/octeontx2/otx2_rx.h
@@ -199,7 +199,7 @@ nix_cqe_xtract_mseg(const struct nix_rx_parse_s *rx,
 		mbuf->next = ((struct rte_mbuf *)*iova_list) - 1;
 		mbuf = mbuf->next;
 
-		__mempool_check_cookies(mbuf->pool, (void **)&mbuf, 1, 1);
+		RTE_MEMPOOL_CHECK_COOKIES(mbuf->pool, (void **)&mbuf, 1, 1);
 
 		mbuf->data_len = sg & 0xFFFF;
 		sg = sg >> 16;
@@ -309,7 +309,7 @@ otx2_nix_cqe_to_mbuf(const struct nix_cqe_hdr_s *cq, const uint32_t tag,
 	uint64_t ol_flags = 0;
 
 	/* Mark mempool obj as "get" as it is alloc'ed by NIX */
-	__mempool_check_cookies(mbuf->pool, (void **)&mbuf, 1, 1);
+	RTE_MEMPOOL_CHECK_COOKIES(mbuf->pool, (void **)&mbuf, 1, 1);
 
 	if (flag & NIX_RX_OFFLOAD_PTYPE_F)
 		mbuf->packet_type = nix_ptype_get(lookup_mem, w1);
diff --git a/drivers/net/octeontx2/otx2_tx.c b/drivers/net/octeontx2/otx2_tx.c
index ff299f00b9..ad704d745b 100644
--- a/drivers/net/octeontx2/otx2_tx.c
+++ b/drivers/net/octeontx2/otx2_tx.c
@@ -202,7 +202,7 @@ nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,
 			if (otx2_nix_prefree_seg(mbuf))
 				vsetq_lane_u64(0x80000, xmask01, 0);
 			else
-				__mempool_check_cookies(mbuf->pool,
+				RTE_MEMPOOL_CHECK_COOKIES(mbuf->pool,
 							(void **)&mbuf,
 							1, 0);
 
@@ -211,7 +211,7 @@ nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,
 			if (otx2_nix_prefree_seg(mbuf))
 				vsetq_lane_u64(0x80000, xmask01, 1);
 			else
-				__mempool_check_cookies(mbuf->pool,
+				RTE_MEMPOOL_CHECK_COOKIES(mbuf->pool,
 							(void **)&mbuf,
 							1, 0);
 
@@ -220,7 +220,7 @@ nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,
 			if (otx2_nix_prefree_seg(mbuf))
 				vsetq_lane_u64(0x80000, xmask23, 0);
 			else
-				__mempool_check_cookies(mbuf->pool,
+				RTE_MEMPOOL_CHECK_COOKIES(mbuf->pool,
 							(void **)&mbuf,
 							1, 0);
 
@@ -229,7 +229,7 @@ nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,
 			if (otx2_nix_prefree_seg(mbuf))
 				vsetq_lane_u64(0x80000, xmask23, 1);
 			else
-				__mempool_check_cookies(mbuf->pool,
+				RTE_MEMPOOL_CHECK_COOKIES(mbuf->pool,
 							(void **)&mbuf,
 							1, 0);
 			senddesc01_w0 = vorrq_u64(senddesc01_w0, xmask01);
@@ -245,22 +245,22 @@ nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,
 			 */
 			mbuf = (struct rte_mbuf *)((uintptr_t)mbuf0 -
 				offsetof(struct rte_mbuf, buf_iova));
-			__mempool_check_cookies(mbuf->pool, (void **)&mbuf,
+			RTE_MEMPOOL_CHECK_COOKIES(mbuf->pool, (void **)&mbuf,
 						1, 0);
 
 			mbuf = (struct rte_mbuf *)((uintptr_t)mbuf1 -
 				offsetof(struct rte_mbuf, buf_iova));
-			__mempool_check_cookies(mbuf->pool, (void **)&mbuf,
+			RTE_MEMPOOL_CHECK_COOKIES(mbuf->pool, (void **)&mbuf,
 						1, 0);
 
 			mbuf = (struct rte_mbuf *)((uintptr_t)mbuf2 -
 				offsetof(struct rte_mbuf, buf_iova));
-			__mempool_check_cookies(mbuf->pool, (void **)&mbuf,
+			RTE_MEMPOOL_CHECK_COOKIES(mbuf->pool, (void **)&mbuf,
 						1, 0);
 
 			mbuf = (struct rte_mbuf *)((uintptr_t)mbuf3 -
 				offsetof(struct rte_mbuf, buf_iova));
-			__mempool_check_cookies(mbuf->pool, (void **)&mbuf,
+			RTE_MEMPOOL_CHECK_COOKIES(mbuf->pool, (void **)&mbuf,
 						1, 0);
 			RTE_SET_USED(mbuf);
 		}
diff --git a/drivers/net/octeontx2/otx2_tx.h b/drivers/net/octeontx2/otx2_tx.h
index 486248dff7..de1be0093c 100644
--- a/drivers/net/octeontx2/otx2_tx.h
+++ b/drivers/net/octeontx2/otx2_tx.h
@@ -372,7 +372,7 @@ otx2_nix_xmit_prepare(struct rte_mbuf *m, uint64_t *cmd, const uint16_t flags,
 		}
 		/* Mark mempool object as "put" since it is freed by NIX */
 		if (!send_hdr->w0.df)
-			__mempool_check_cookies(m->pool, (void **)&m, 1, 0);
+			RTE_MEMPOOL_CHECK_COOKIES(m->pool, (void **)&m, 1, 0);
 	}
 }
 
@@ -450,7 +450,7 @@ otx2_nix_prepare_mseg(struct rte_mbuf *m, uint64_t *cmd, const uint16_t flags)
 		/* Mark mempool object as "put" since it is freed by NIX */
 #ifdef RTE_LIBRTE_MEMPOOL_DEBUG
 		if (!(sg_u & (1ULL << (i + 55))))
-			__mempool_check_cookies(m->pool, (void **)&m, 1, 0);
+			RTE_MEMPOOL_CHECK_COOKIES(m->pool, (void **)&m, 1, 0);
 		rte_io_wmb();
 #endif
 		slist++;
diff --git a/lib/mempool/rte_mempool.c b/lib/mempool/rte_mempool.c
index 19210c702c..638eaa5fa2 100644
--- a/lib/mempool/rte_mempool.c
+++ b/lib/mempool/rte_mempool.c
@@ -167,7 +167,7 @@ mempool_add_elem(struct rte_mempool *mp, __rte_unused void *opaque,
 
 #ifdef RTE_LIBRTE_MEMPOOL_DEBUG
 	hdr->cookie = RTE_MEMPOOL_HEADER_COOKIE2;
-	tlr = __mempool_get_trailer(obj);
+	tlr = rte_mempool_get_trailer(obj);
 	tlr->cookie = RTE_MEMPOOL_TRAILER_COOKIE;
 #endif
 }
@@ -1064,7 +1064,7 @@ void rte_mempool_check_cookies(const struct rte_mempool *mp,
 			rte_panic("MEMPOOL: object is owned by another "
 				  "mempool\n");
 
-		hdr = __mempool_get_header(obj);
+		hdr = rte_mempool_get_header(obj);
 		cookie = hdr->cookie;
 
 		if (free == 0) {
@@ -1092,7 +1092,7 @@ void rte_mempool_check_cookies(const struct rte_mempool *mp,
 				rte_panic("MEMPOOL: bad header cookie (audit)\n");
 			}
 		}
-		tlr = __mempool_get_trailer(obj);
+		tlr = rte_mempool_get_trailer(obj);
 		cookie = tlr->cookie;
 		if (cookie != RTE_MEMPOOL_TRAILER_COOKIE) {
 			RTE_LOG(CRIT, MEMPOOL,
@@ -1144,7 +1144,7 @@ static void
 mempool_obj_audit(struct rte_mempool *mp, __rte_unused void *opaque,
 	void *obj, __rte_unused unsigned idx)
 {
-	__mempool_check_cookies(mp, &obj, 1, 2);
+	RTE_MEMPOOL_CHECK_COOKIES(mp, &obj, 1, 2);
 }
 
 static void
diff --git a/lib/mempool/rte_mempool.h b/lib/mempool/rte_mempool.h
index d4bcb009fa..979ab071cb 100644
--- a/lib/mempool/rte_mempool.h
+++ b/lib/mempool/rte_mempool.h
@@ -299,14 +299,14 @@ struct rte_mempool {
  *   Number to add to the object-oriented statistics.
  */
 #ifdef RTE_LIBRTE_MEMPOOL_DEBUG
-#define __MEMPOOL_STAT_ADD(mp, name, n) do {                    \
+#define RTE_MEMPOOL_STAT_ADD(mp, name, n) do {                  \
 		unsigned __lcore_id = rte_lcore_id();           \
 		if (__lcore_id < RTE_MAX_LCORE) {               \
 			mp->stats[__lcore_id].name += n;        \
 		}                                               \
-	} while(0)
+	} while (0)
 #else
-#define __MEMPOOL_STAT_ADD(mp, name, n) do {} while(0)
+#define RTE_MEMPOOL_STAT_ADD(mp, name, n) do {} while (0)
 #endif
 
 /**
@@ -322,7 +322,8 @@ struct rte_mempool {
 	(sizeof(struct rte_mempool_cache) * RTE_MAX_LCORE)))
 
 /* return the header of a mempool object (internal) */
-static inline struct rte_mempool_objhdr *__mempool_get_header(void *obj)
+static inline struct rte_mempool_objhdr *
+rte_mempool_get_header(void *obj)
 {
 	return (struct rte_mempool_objhdr *)RTE_PTR_SUB(obj,
 		sizeof(struct rte_mempool_objhdr));
@@ -339,12 +340,12 @@ static inline struct rte_mempool_objhdr *__mempool_get_header(void *obj)
  */
 static inline struct rte_mempool *rte_mempool_from_obj(void *obj)
 {
-	struct rte_mempool_objhdr *hdr = __mempool_get_header(obj);
+	struct rte_mempool_objhdr *hdr = rte_mempool_get_header(obj);
 	return hdr->mp;
 }
 
 /* return the trailer of a mempool object (internal) */
-static inline struct rte_mempool_objtlr *__mempool_get_trailer(void *obj)
+static inline struct rte_mempool_objtlr *rte_mempool_get_trailer(void *obj)
 {
 	struct rte_mempool *mp = rte_mempool_from_obj(obj);
 	return (struct rte_mempool_objtlr *)RTE_PTR_ADD(obj, mp->elt_size);
@@ -368,10 +369,10 @@ void rte_mempool_check_cookies(const struct rte_mempool *mp,
 	void * const *obj_table_const, unsigned n, int free);
 
 #ifdef RTE_LIBRTE_MEMPOOL_DEBUG
-#define __mempool_check_cookies(mp, obj_table_const, n, free) \
+#define RTE_MEMPOOL_CHECK_COOKIES(mp, obj_table_const, n, free) \
 	rte_mempool_check_cookies(mp, obj_table_const, n, free)
 #else
-#define __mempool_check_cookies(mp, obj_table_const, n, free) do {} while(0)
+#define RTE_MEMPOOL_CHECK_COOKIES(mp, obj_table_const, n, free) do {} while (0)
 #endif /* RTE_LIBRTE_MEMPOOL_DEBUG */
 
 /**
@@ -393,13 +394,13 @@ void rte_mempool_contig_blocks_check_cookies(const struct rte_mempool *mp,
 	void * const *first_obj_table_const, unsigned int n, int free);
 
 #ifdef RTE_LIBRTE_MEMPOOL_DEBUG
-#define __mempool_contig_blocks_check_cookies(mp, first_obj_table_const, n, \
-					      free) \
+#define RTE_MEMPOOL_CONTIG_BLOCKS_CHECK_COOKIES(mp, first_obj_table_const, n, \
+						free) \
 	rte_mempool_contig_blocks_check_cookies(mp, first_obj_table_const, n, \
 						free)
 #else
-#define __mempool_contig_blocks_check_cookies(mp, first_obj_table_const, n, \
-					      free) \
+#define RTE_MEMPOOL_CONTIG_BLOCKS_CHECK_COOKIES(mp, first_obj_table_const, n, \
+						free) \
 	do {} while (0)
 #endif /* RTE_LIBRTE_MEMPOOL_DEBUG */
 
@@ -734,8 +735,8 @@ rte_mempool_ops_dequeue_bulk(struct rte_mempool *mp,
 	ops = rte_mempool_get_ops(mp->ops_index);
 	ret = ops->dequeue(mp, obj_table, n);
 	if (ret == 0) {
-		__MEMPOOL_STAT_ADD(mp, get_common_pool_bulk, 1);
-		__MEMPOOL_STAT_ADD(mp, get_common_pool_objs, n);
+		RTE_MEMPOOL_STAT_ADD(mp, get_common_pool_bulk, 1);
+		RTE_MEMPOOL_STAT_ADD(mp, get_common_pool_objs, n);
 	}
 	return ret;
 }
@@ -784,8 +785,8 @@ rte_mempool_ops_enqueue_bulk(struct rte_mempool *mp, void * const *obj_table,
 {
 	struct rte_mempool_ops *ops;
 
-	__MEMPOOL_STAT_ADD(mp, put_common_pool_bulk, 1);
-	__MEMPOOL_STAT_ADD(mp, put_common_pool_objs, n);
+	RTE_MEMPOOL_STAT_ADD(mp, put_common_pool_bulk, 1);
+	RTE_MEMPOOL_STAT_ADD(mp, put_common_pool_objs, n);
 	rte_mempool_trace_ops_enqueue_bulk(mp, obj_table, n);
 	ops = rte_mempool_get_ops(mp->ops_index);
 	return ops->enqueue(mp, obj_table, n);
@@ -1310,14 +1311,14 @@ rte_mempool_cache_flush(struct rte_mempool_cache *cache,
  *   A pointer to a mempool cache structure. May be NULL if not needed.
  */
 static __rte_always_inline void
-__mempool_generic_put(struct rte_mempool *mp, void * const *obj_table,
-		      unsigned int n, struct rte_mempool_cache *cache)
+rte_mempool_do_generic_put(struct rte_mempool *mp, void * const *obj_table,
+			   unsigned int n, struct rte_mempool_cache *cache)
 {
 	void **cache_objs;
 
 	/* increment stat now, adding in mempool always success */
-	__MEMPOOL_STAT_ADD(mp, put_bulk, 1);
-	__MEMPOOL_STAT_ADD(mp, put_objs, n);
+	RTE_MEMPOOL_STAT_ADD(mp, put_bulk, 1);
+	RTE_MEMPOOL_STAT_ADD(mp, put_objs, n);
 
 	/* No cache provided or if put would overflow mem allocated for cache */
 	if (unlikely(cache == NULL || n > RTE_MEMPOOL_CACHE_MAX_SIZE))
@@ -1374,8 +1375,8 @@ rte_mempool_generic_put(struct rte_mempool *mp, void * const *obj_table,
 			unsigned int n, struct rte_mempool_cache *cache)
 {
 	rte_mempool_trace_generic_put(mp, obj_table, n, cache);
-	__mempool_check_cookies(mp, obj_table, n, 0);
-	__mempool_generic_put(mp, obj_table, n, cache);
+	RTE_MEMPOOL_CHECK_COOKIES(mp, obj_table, n, 0);
+	rte_mempool_do_generic_put(mp, obj_table, n, cache);
 }
 
 /**
@@ -1435,8 +1436,8 @@ rte_mempool_put(struct rte_mempool *mp, void *obj)
  *   - <0: Error; code of ring dequeue function.
  */
 static __rte_always_inline int
-__mempool_generic_get(struct rte_mempool *mp, void **obj_table,
-		      unsigned int n, struct rte_mempool_cache *cache)
+rte_mempool_do_generic_get(struct rte_mempool *mp, void **obj_table,
+			   unsigned int n, struct rte_mempool_cache *cache)
 {
 	int ret;
 	uint32_t index, len;
@@ -1475,8 +1476,8 @@ __mempool_generic_get(struct rte_mempool *mp, void **obj_table,
 
 	cache->len -= n;
 
-	__MEMPOOL_STAT_ADD(mp, get_success_bulk, 1);
-	__MEMPOOL_STAT_ADD(mp, get_success_objs, n);
+	RTE_MEMPOOL_STAT_ADD(mp, get_success_bulk, 1);
+	RTE_MEMPOOL_STAT_ADD(mp, get_success_objs, n);
 
 	return 0;
 
@@ -1486,11 +1487,11 @@ __mempool_generic_get(struct rte_mempool *mp, void **obj_table,
 	ret = rte_mempool_ops_dequeue_bulk(mp, obj_table, n);
 
 	if (ret < 0) {
-		__MEMPOOL_STAT_ADD(mp, get_fail_bulk, 1);
-		__MEMPOOL_STAT_ADD(mp, get_fail_objs, n);
+		RTE_MEMPOOL_STAT_ADD(mp, get_fail_bulk, 1);
+		RTE_MEMPOOL_STAT_ADD(mp, get_fail_objs, n);
 	} else {
-		__MEMPOOL_STAT_ADD(mp, get_success_bulk, 1);
-		__MEMPOOL_STAT_ADD(mp, get_success_objs, n);
+		RTE_MEMPOOL_STAT_ADD(mp, get_success_bulk, 1);
+		RTE_MEMPOOL_STAT_ADD(mp, get_success_objs, n);
 	}
 
 	return ret;
@@ -1521,9 +1522,9 @@ rte_mempool_generic_get(struct rte_mempool *mp, void **obj_table,
 			unsigned int n, struct rte_mempool_cache *cache)
 {
 	int ret;
-	ret = __mempool_generic_get(mp, obj_table, n, cache);
+	ret = rte_mempool_do_generic_get(mp, obj_table, n, cache);
 	if (ret == 0)
-		__mempool_check_cookies(mp, obj_table, n, 1);
+		RTE_MEMPOOL_CHECK_COOKIES(mp, obj_table, n, 1);
 	rte_mempool_trace_generic_get(mp, obj_table, n, cache);
 	return ret;
 }
@@ -1614,13 +1615,13 @@ rte_mempool_get_contig_blocks(struct rte_mempool *mp,
 
 	ret = rte_mempool_ops_dequeue_contig_blocks(mp, first_obj_table, n);
 	if (ret == 0) {
-		__MEMPOOL_STAT_ADD(mp, get_success_bulk, 1);
-		__MEMPOOL_STAT_ADD(mp, get_success_blks, n);
-		__mempool_contig_blocks_check_cookies(mp, first_obj_table, n,
-						      1);
+		RTE_MEMPOOL_STAT_ADD(mp, get_success_bulk, 1);
+		RTE_MEMPOOL_STAT_ADD(mp, get_success_blks, n);
+		RTE_MEMPOOL_CONTIG_BLOCKS_CHECK_COOKIES(mp, first_obj_table, n,
+							1);
 	} else {
-		__MEMPOOL_STAT_ADD(mp, get_fail_bulk, 1);
-		__MEMPOOL_STAT_ADD(mp, get_fail_blks, n);
+		RTE_MEMPOOL_STAT_ADD(mp, get_fail_bulk, 1);
+		RTE_MEMPOOL_STAT_ADD(mp, get_fail_blks, n);
 	}
 
 	rte_mempool_trace_get_contig_blocks(mp, first_obj_table, n);
-- 
2.30.2


^ permalink raw reply	[flat|nested] 53+ messages in thread

* [dpdk-dev] [PATCH v2 4/6] mempool: make header size calculation internal
  2021-10-19 10:08 ` [dpdk-dev] [PATCH v2 0/6] mempool: cleanup namespace Andrew Rybchenko
                     ` (2 preceding siblings ...)
  2021-10-19 10:08   ` [dpdk-dev] [PATCH v2 3/6] mempool: add namespace to internal but still visible API Andrew Rybchenko
@ 2021-10-19 10:08   ` Andrew Rybchenko
  2021-10-19 16:14     ` Olivier Matz
  2021-10-19 10:08   ` [dpdk-dev] [PATCH v2 5/6] mempool: add namespace to driver register macro Andrew Rybchenko
  2021-10-19 10:08   ` [dpdk-dev] [PATCH v2 6/6] mempool: deprecate unused defines Andrew Rybchenko
  5 siblings, 1 reply; 53+ messages in thread
From: Andrew Rybchenko @ 2021-10-19 10:08 UTC (permalink / raw)
  To: Olivier Matz, David Marchand, Ray Kinsella; +Cc: dev

Add the RTE_ prefix to the helper macro that calculates the mempool
header size and make it internal. The old macro is still available,
but deprecated.

Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
---
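For applications, the practical impact is mostly about locating the mempool private data area; rte_mempool_get_priv() already covers that. A minimal sketch, assuming that is all the old macro was used for (as in app/test below):

    #include <rte_mempool.h>

    static inline void *
    mempool_priv_area(struct rte_mempool *mp)
    {
            /* Old: (char *)mp + MEMPOOL_HEADER_SIZE(mp, mp->cache_size);
             * RTE_MEMPOOL_HEADER_SIZE() is internal-only, so out-of-tree
             * code should use the public accessor instead.
             */
            return rte_mempool_get_priv(mp);
    }
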
 app/test/test_mempool.c                |  2 +-
 doc/guides/rel_notes/deprecation.rst   |  4 ++++
 doc/guides/rel_notes/release_21_11.rst |  3 +++
 lib/mempool/rte_mempool.c              |  6 +++---
 lib/mempool/rte_mempool.h              | 10 +++++++---
 5 files changed, 18 insertions(+), 7 deletions(-)

diff --git a/app/test/test_mempool.c b/app/test/test_mempool.c
index ffe69e2d03..8ecd0f10b8 100644
--- a/app/test/test_mempool.c
+++ b/app/test/test_mempool.c
@@ -111,7 +111,7 @@ test_mempool_basic(struct rte_mempool *mp, int use_external_cache)
 
 	printf("get private data\n");
 	if (rte_mempool_get_priv(mp) != (char *)mp +
-			MEMPOOL_HEADER_SIZE(mp, mp->cache_size))
+			RTE_MEMPOOL_HEADER_SIZE(mp, mp->cache_size))
 		GOTO_ERR(ret, out);
 
 #ifndef RTE_EXEC_ENV_FREEBSD /* rte_mem_virt2iova() not supported on bsd */
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 45239ca56e..bc3aca8ef1 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -39,6 +39,10 @@ Deprecation Notices
   ``__atomic_thread_fence`` must be used for patches that need to be merged in
   20.08 onwards. This change will not introduce any performance degradation.
 
+* mempool: Helper macro ``MEMPOOL_HEADER_SIZE()`` is deprecated and will
+  be removed in DPDK 22.11. The replacement macro
+  ``RTE_MEMPOOL_HEADER_SIZE()`` is internal only.
+
 * mbuf: The mbuf offload flags ``PKT_*`` will be renamed as ``RTE_MBUF_F_*``.
   A compatibility layer will be kept until DPDK 22.11, except for the flags
   that are already deprecated (``PKT_RX_L4_CKSUM_BAD``, ``PKT_RX_IP_CKSUM_BAD``,
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 9a0e3832a3..e95ddb93a6 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -224,6 +224,9 @@ API Changes
 * mempool: The mempool flags ``MEMPOOL_F_*`` will be deprecated in the future.
   Newly added flags with ``RTE_MEMPOOL_F_`` prefix should be used instead.
 
+* mempool: Helper macro ``MEMPOOL_HEADER_SIZE()`` is deprecated.
+  The replacement macro ``RTE_MEMPOOL_HEADER_SIZE()`` is internal only.
+
 * net: Renamed ``s_addr`` and ``d_addr`` fields of ``rte_ether_hdr`` structure
   to ``src_addr`` and ``dst_addr``, respectively.
 
diff --git a/lib/mempool/rte_mempool.c b/lib/mempool/rte_mempool.c
index 638eaa5fa2..4e3a15e49c 100644
--- a/lib/mempool/rte_mempool.c
+++ b/lib/mempool/rte_mempool.c
@@ -861,7 +861,7 @@ rte_mempool_create_empty(const char *name, unsigned n, unsigned elt_size,
 		goto exit_unlock;
 	}
 
-	mempool_size = MEMPOOL_HEADER_SIZE(mp, cache_size);
+	mempool_size = RTE_MEMPOOL_HEADER_SIZE(mp, cache_size);
 	mempool_size += private_data_size;
 	mempool_size = RTE_ALIGN_CEIL(mempool_size, RTE_MEMPOOL_ALIGN);
 
@@ -877,7 +877,7 @@ rte_mempool_create_empty(const char *name, unsigned n, unsigned elt_size,
 
 	/* init the mempool structure */
 	mp = mz->addr;
-	memset(mp, 0, MEMPOOL_HEADER_SIZE(mp, cache_size));
+	memset(mp, 0, RTE_MEMPOOL_HEADER_SIZE(mp, cache_size));
 	ret = strlcpy(mp->name, name, sizeof(mp->name));
 	if (ret < 0 || ret >= (int)sizeof(mp->name)) {
 		rte_errno = ENAMETOOLONG;
@@ -901,7 +901,7 @@ rte_mempool_create_empty(const char *name, unsigned n, unsigned elt_size,
 	 * The local_cache points to just past the elt_pa[] array.
 	 */
 	mp->local_cache = (struct rte_mempool_cache *)
-		RTE_PTR_ADD(mp, MEMPOOL_HEADER_SIZE(mp, 0));
+		RTE_PTR_ADD(mp, RTE_MEMPOOL_HEADER_SIZE(mp, 0));
 
 	/* Init all default caches. */
 	if (cache_size != 0) {
diff --git a/lib/mempool/rte_mempool.h b/lib/mempool/rte_mempool.h
index 979ab071cb..11ef60247e 100644
--- a/lib/mempool/rte_mempool.h
+++ b/lib/mempool/rte_mempool.h
@@ -310,17 +310,21 @@ struct rte_mempool {
 #endif
 
 /**
- * Calculate the size of the mempool header.
+ * @internal Calculate the size of the mempool header.
  *
  * @param mp
  *   Pointer to the memory pool.
  * @param cs
  *   Size of the per-lcore cache.
  */
-#define MEMPOOL_HEADER_SIZE(mp, cs) \
+#define RTE_MEMPOOL_HEADER_SIZE(mp, cs) \
 	(sizeof(*(mp)) + (((cs) == 0) ? 0 : \
 	(sizeof(struct rte_mempool_cache) * RTE_MAX_LCORE)))
 
+/** Deprecated. Use RTE_MEMPOOL_HEADER_SIZE() for internal purposes only. */
+#define MEMPOOL_HEADER_SIZE(mp, cs) \
+	RTE_DEPRECATED(RTE_MEMPOOL_HEADER_SIZE(mp, cs))
+
 /* return the header of a mempool object (internal) */
 static inline struct rte_mempool_objhdr *
 rte_mempool_get_header(void *obj)
@@ -1737,7 +1741,7 @@ void rte_mempool_audit(struct rte_mempool *mp);
 static inline void *rte_mempool_get_priv(struct rte_mempool *mp)
 {
 	return (char *)mp +
-		MEMPOOL_HEADER_SIZE(mp, mp->cache_size);
+		RTE_MEMPOOL_HEADER_SIZE(mp, mp->cache_size);
 }
 
 /**
-- 
2.30.2


^ permalink raw reply	[flat|nested] 53+ messages in thread

* [dpdk-dev] [PATCH v2 5/6] mempool: add namespace to driver register macro
  2021-10-19 10:08 ` [dpdk-dev] [PATCH v2 0/6] mempool: cleanup namespace Andrew Rybchenko
                     ` (3 preceding siblings ...)
  2021-10-19 10:08   ` [dpdk-dev] [PATCH v2 4/6] mempool: make header size calculation internal Andrew Rybchenko
@ 2021-10-19 10:08   ` Andrew Rybchenko
  2021-10-19 16:16     ` Olivier Matz
  2021-10-19 10:08   ` [dpdk-dev] [PATCH v2 6/6] mempool: deprecate unused defines Andrew Rybchenko
  5 siblings, 1 reply; 53+ messages in thread
From: Andrew Rybchenko @ 2021-10-19 10:08 UTC (permalink / raw)
  To: Olivier Matz, David Marchand, Ray Kinsella, Artem V. Andreev,
	Ashwin Sekhar T K, Pavan Nikhilesh, Hemant Agrawal,
	Sachin Saxena, Harman Kalra, Jerin Jacob, Nithin Dabilpuram
  Cc: dev

Add the RTE_ prefix to the macro used to register a mempool driver.
The old one is still available but deprecated.

Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
---
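A minimal sketch of how an external mempool driver switches to the new registration macro. The "my_" names and stub callbacks are placeholders, not part of any driver touched by this patch.

    #include <errno.h>
    #include <rte_mempool.h>

    static int
    my_alloc(struct rte_mempool *mp)
    {
            (void)mp; /* stub: a real driver sets up its pool state here */
            return 0;
    }

    static void
    my_free(struct rte_mempool *mp)
    {
            (void)mp;
    }

    static int
    my_enqueue(struct rte_mempool *mp, void * const *obj_table, unsigned int n)
    {
            (void)mp; (void)obj_table; (void)n;
            return 0;
    }

    static int
    my_dequeue(struct rte_mempool *mp, void **obj_table, unsigned int n)
    {
            (void)mp; (void)obj_table; (void)n;
            return -ENOBUFS; /* stub: nothing to hand out */
    }

    static unsigned int
    my_get_count(const struct rte_mempool *mp)
    {
            (void)mp;
            return 0;
    }

    static const struct rte_mempool_ops my_ops = {
            .name = "my_driver",
            .alloc = my_alloc,
            .free = my_free,
            .enqueue = my_enqueue,
            .dequeue = my_dequeue,
            .get_count = my_get_count,
    };

    /* The old MEMPOOL_REGISTER_OPS() still compiles but is marked
     * deprecated; new code should use the RTE_ prefixed macro.
     */
    RTE_MEMPOOL_REGISTER_OPS(my_ops);
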
 doc/guides/prog_guide/mempool_lib.rst           |  2 +-
 doc/guides/rel_notes/deprecation.rst            |  4 ++++
 doc/guides/rel_notes/release_21_11.rst          |  3 +++
 drivers/mempool/bucket/rte_mempool_bucket.c     |  2 +-
 drivers/mempool/cnxk/cn10k_mempool_ops.c        |  2 +-
 drivers/mempool/cnxk/cn9k_mempool_ops.c         |  2 +-
 drivers/mempool/dpaa/dpaa_mempool.c             |  2 +-
 drivers/mempool/dpaa2/dpaa2_hw_mempool.c        |  2 +-
 drivers/mempool/octeontx/rte_mempool_octeontx.c |  2 +-
 drivers/mempool/octeontx2/otx2_mempool_ops.c    |  2 +-
 drivers/mempool/ring/rte_mempool_ring.c         | 12 ++++++------
 drivers/mempool/stack/rte_mempool_stack.c       |  4 ++--
 lib/mempool/rte_mempool.h                       |  6 +++++-
 13 files changed, 28 insertions(+), 17 deletions(-)

diff --git a/doc/guides/prog_guide/mempool_lib.rst b/doc/guides/prog_guide/mempool_lib.rst
index 890535eb23..55838317b9 100644
--- a/doc/guides/prog_guide/mempool_lib.rst
+++ b/doc/guides/prog_guide/mempool_lib.rst
@@ -115,7 +115,7 @@ management systems and software based memory allocators, to be used with DPDK.
 There are two aspects to a mempool handler.
 
 * Adding the code for your new mempool operations (ops). This is achieved by
-  adding a new mempool ops code, and using the ``MEMPOOL_REGISTER_OPS`` macro.
+  adding a new mempool ops code, and using the ``RTE_MEMPOOL_REGISTER_OPS`` macro.
 
 * Using the new API to call ``rte_mempool_create_empty()`` and
   ``rte_mempool_set_ops_byname()`` to create a new mempool and specifying which
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index bc3aca8ef1..0095d48084 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -43,6 +43,10 @@ Deprecation Notices
   be removed in DPDK 22.11. The replacement macro
   ``RTE_MEMPOOL_HEADER_SIZE()`` is internal only.
 
+* mempool: Macro to register mempool driver ``MEMPOOL_REGISTER_OPS()`` is
+  deprecated and will be removed in DPDK 22.11. Use replacement macro
+  ``RTE_MEMPOOL_REGISTER_OPS()``.
+
 * mbuf: The mbuf offload flags ``PKT_*`` will be renamed as ``RTE_MBUF_F_*``.
   A compatibility layer will be kept until DPDK 22.11, except for the flags
   that are already deprecated (``PKT_RX_L4_CKSUM_BAD``, ``PKT_RX_IP_CKSUM_BAD``,
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index e95ddb93a6..9804c033c0 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -227,6 +227,9 @@ API Changes
 * mempool: Helper macro ``MEMPOOL_HEADER_SIZE()`` is deprecated.
   The replacement macro ``RTE_MEMPOOL_HEADER_SIZE()`` is internal only.
 
+* mempool: Macro to register mempool driver ``MEMPOOL_REGISTER_OPS()`` is
+  deprecated.  Use replacement ``RTE_MEMPOOL_REGISTER_OPS()``.
+
 * net: Renamed ``s_addr`` and ``d_addr`` fields of ``rte_ether_hdr`` structure
   to ``src_addr`` and ``dst_addr``, respectively.
 
diff --git a/drivers/mempool/bucket/rte_mempool_bucket.c b/drivers/mempool/bucket/rte_mempool_bucket.c
index 8ff9e53007..c0b480bfc7 100644
--- a/drivers/mempool/bucket/rte_mempool_bucket.c
+++ b/drivers/mempool/bucket/rte_mempool_bucket.c
@@ -663,4 +663,4 @@ static const struct rte_mempool_ops ops_bucket = {
 };
 
 
-MEMPOOL_REGISTER_OPS(ops_bucket);
+RTE_MEMPOOL_REGISTER_OPS(ops_bucket);
diff --git a/drivers/mempool/cnxk/cn10k_mempool_ops.c b/drivers/mempool/cnxk/cn10k_mempool_ops.c
index 95458b34b7..4c669b878f 100644
--- a/drivers/mempool/cnxk/cn10k_mempool_ops.c
+++ b/drivers/mempool/cnxk/cn10k_mempool_ops.c
@@ -316,4 +316,4 @@ static struct rte_mempool_ops cn10k_mempool_ops = {
 	.populate = cnxk_mempool_populate,
 };
 
-MEMPOOL_REGISTER_OPS(cn10k_mempool_ops);
+RTE_MEMPOOL_REGISTER_OPS(cn10k_mempool_ops);
diff --git a/drivers/mempool/cnxk/cn9k_mempool_ops.c b/drivers/mempool/cnxk/cn9k_mempool_ops.c
index c0cdba640b..b7967f8085 100644
--- a/drivers/mempool/cnxk/cn9k_mempool_ops.c
+++ b/drivers/mempool/cnxk/cn9k_mempool_ops.c
@@ -86,4 +86,4 @@ static struct rte_mempool_ops cn9k_mempool_ops = {
 	.populate = cnxk_mempool_populate,
 };
 
-MEMPOOL_REGISTER_OPS(cn9k_mempool_ops);
+RTE_MEMPOOL_REGISTER_OPS(cn9k_mempool_ops);
diff --git a/drivers/mempool/dpaa/dpaa_mempool.c b/drivers/mempool/dpaa/dpaa_mempool.c
index f02056982c..f17aff9655 100644
--- a/drivers/mempool/dpaa/dpaa_mempool.c
+++ b/drivers/mempool/dpaa/dpaa_mempool.c
@@ -358,4 +358,4 @@ static const struct rte_mempool_ops dpaa_mpool_ops = {
 	.populate = dpaa_populate,
 };
 
-MEMPOOL_REGISTER_OPS(dpaa_mpool_ops);
+RTE_MEMPOOL_REGISTER_OPS(dpaa_mpool_ops);
diff --git a/drivers/mempool/dpaa2/dpaa2_hw_mempool.c b/drivers/mempool/dpaa2/dpaa2_hw_mempool.c
index 771e0a0e28..39c6252a63 100644
--- a/drivers/mempool/dpaa2/dpaa2_hw_mempool.c
+++ b/drivers/mempool/dpaa2/dpaa2_hw_mempool.c
@@ -455,6 +455,6 @@ static const struct rte_mempool_ops dpaa2_mpool_ops = {
 	.populate = dpaa2_populate,
 };
 
-MEMPOOL_REGISTER_OPS(dpaa2_mpool_ops);
+RTE_MEMPOOL_REGISTER_OPS(dpaa2_mpool_ops);
 
 RTE_LOG_REGISTER_DEFAULT(dpaa2_logtype_mempool, NOTICE);
diff --git a/drivers/mempool/octeontx/rte_mempool_octeontx.c b/drivers/mempool/octeontx/rte_mempool_octeontx.c
index bd00700202..f4de1c8412 100644
--- a/drivers/mempool/octeontx/rte_mempool_octeontx.c
+++ b/drivers/mempool/octeontx/rte_mempool_octeontx.c
@@ -202,4 +202,4 @@ static struct rte_mempool_ops octeontx_fpavf_ops = {
 	.populate = octeontx_fpavf_populate,
 };
 
-MEMPOOL_REGISTER_OPS(octeontx_fpavf_ops);
+RTE_MEMPOOL_REGISTER_OPS(octeontx_fpavf_ops);
diff --git a/drivers/mempool/octeontx2/otx2_mempool_ops.c b/drivers/mempool/octeontx2/otx2_mempool_ops.c
index d827fd8c7b..332e4f1cb2 100644
--- a/drivers/mempool/octeontx2/otx2_mempool_ops.c
+++ b/drivers/mempool/octeontx2/otx2_mempool_ops.c
@@ -898,4 +898,4 @@ static struct rte_mempool_ops otx2_npa_ops = {
 #endif
 };
 
-MEMPOOL_REGISTER_OPS(otx2_npa_ops);
+RTE_MEMPOOL_REGISTER_OPS(otx2_npa_ops);
diff --git a/drivers/mempool/ring/rte_mempool_ring.c b/drivers/mempool/ring/rte_mempool_ring.c
index 4b785971c4..c6aa935eea 100644
--- a/drivers/mempool/ring/rte_mempool_ring.c
+++ b/drivers/mempool/ring/rte_mempool_ring.c
@@ -198,9 +198,9 @@ static const struct rte_mempool_ops ops_mt_hts = {
 	.get_count = common_ring_get_count,
 };
 
-MEMPOOL_REGISTER_OPS(ops_mp_mc);
-MEMPOOL_REGISTER_OPS(ops_sp_sc);
-MEMPOOL_REGISTER_OPS(ops_mp_sc);
-MEMPOOL_REGISTER_OPS(ops_sp_mc);
-MEMPOOL_REGISTER_OPS(ops_mt_rts);
-MEMPOOL_REGISTER_OPS(ops_mt_hts);
+RTE_MEMPOOL_REGISTER_OPS(ops_mp_mc);
+RTE_MEMPOOL_REGISTER_OPS(ops_sp_sc);
+RTE_MEMPOOL_REGISTER_OPS(ops_mp_sc);
+RTE_MEMPOOL_REGISTER_OPS(ops_sp_mc);
+RTE_MEMPOOL_REGISTER_OPS(ops_mt_rts);
+RTE_MEMPOOL_REGISTER_OPS(ops_mt_hts);
diff --git a/drivers/mempool/stack/rte_mempool_stack.c b/drivers/mempool/stack/rte_mempool_stack.c
index 7e85c8d6b6..1476905227 100644
--- a/drivers/mempool/stack/rte_mempool_stack.c
+++ b/drivers/mempool/stack/rte_mempool_stack.c
@@ -93,5 +93,5 @@ static struct rte_mempool_ops ops_lf_stack = {
 	.get_count = stack_get_count
 };
 
-MEMPOOL_REGISTER_OPS(ops_stack);
-MEMPOOL_REGISTER_OPS(ops_lf_stack);
+RTE_MEMPOOL_REGISTER_OPS(ops_stack);
+RTE_MEMPOOL_REGISTER_OPS(ops_lf_stack);
diff --git a/lib/mempool/rte_mempool.h b/lib/mempool/rte_mempool.h
index 11ef60247e..409836d4d1 100644
--- a/lib/mempool/rte_mempool.h
+++ b/lib/mempool/rte_mempool.h
@@ -920,12 +920,16 @@ int rte_mempool_register_ops(const struct rte_mempool_ops *ops);
  * Note that the rte_mempool_register_ops fails silently here when
  * more than RTE_MEMPOOL_MAX_OPS_IDX is registered.
  */
-#define MEMPOOL_REGISTER_OPS(ops)				\
+#define RTE_MEMPOOL_REGISTER_OPS(ops)				\
 	RTE_INIT(mp_hdlr_init_##ops)				\
 	{							\
 		rte_mempool_register_ops(&ops);			\
 	}
 
+/** Deprecated. Use RTE_MEMPOOL_REGISTER_OPS() instead. */
+#define MEMPOOL_REGISTER_OPS(ops) \
+	RTE_DEPRECATED(RTE_MEMPOOL_REGISTER_OPS(ops))
+
 /**
  * An object callback function for mempool.
  *
-- 
2.30.2


^ permalink raw reply	[flat|nested] 53+ messages in thread
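
For readers following the rename, here is a minimal sketch of how an
out-of-tree mempool driver would register its ops with the new macro.
Everything named "dummy" below is hypothetical and not taken from any
in-tree driver; a real driver fills in working callbacks.

	#include <errno.h>
	#include <rte_mempool.h>

	static int
	dummy_alloc(struct rte_mempool *mp)
	{
		(void)mp;	/* a real driver creates its backing store here */
		return 0;
	}

	static void
	dummy_free(struct rte_mempool *mp)
	{
		(void)mp;	/* release whatever dummy_alloc() created */
	}

	static int
	dummy_enqueue(struct rte_mempool *mp, void * const *obj_table,
		      unsigned int n)
	{
		(void)mp; (void)obj_table; (void)n;	/* put objects back */
		return 0;
	}

	static int
	dummy_dequeue(struct rte_mempool *mp, void **obj_table, unsigned int n)
	{
		(void)mp; (void)obj_table; (void)n;	/* take objects out */
		return -ENOBUFS;		/* pretend the pool is empty */
	}

	static unsigned int
	dummy_get_count(const struct rte_mempool *mp)
	{
		(void)mp;
		return 0;
	}

	static const struct rte_mempool_ops dummy_ops = {
		.name = "dummy",
		.alloc = dummy_alloc,
		.free = dummy_free,
		.enqueue = dummy_enqueue,
		.dequeue = dummy_dequeue,
		.get_count = dummy_get_count,
	};

	/* Runs as a constructor via RTE_INIT(); replaces MEMPOOL_REGISTER_OPS(). */
	RTE_MEMPOOL_REGISTER_OPS(dummy_ops);

Legacy sources that still spell it MEMPOOL_REGISTER_OPS(dummy_ops) go
through the deprecated wrapper added above; the exact form of that
wrapper is refined later in the thread.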

* [dpdk-dev] [PATCH v2 6/6] mempool: deprecate unused defines
  2021-10-19 10:08 ` [dpdk-dev] [PATCH v2 0/6] mempool: cleanup namespace Andrew Rybchenko
                     ` (4 preceding siblings ...)
  2021-10-19 10:08   ` [dpdk-dev] [PATCH v2 5/6] mempool: add namespace to driver register macro Andrew Rybchenko
@ 2021-10-19 10:08   ` Andrew Rybchenko
  2021-10-19 16:21     ` Olivier Matz
  5 siblings, 1 reply; 53+ messages in thread
From: Andrew Rybchenko @ 2021-10-19 10:08 UTC (permalink / raw)
  To: Olivier Matz, David Marchand, Ray Kinsella; +Cc: dev

MEMPOOL_PG_NUM_DEFAULT and MEMPOOL_PG_SHIFT_MAX are not used.

Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
---
 doc/guides/contributing/documentation.rst | 4 ++--
 doc/guides/rel_notes/deprecation.rst      | 3 +++
 doc/guides/rel_notes/release_21_11.rst    | 3 +++
 lib/mempool/rte_mempool.h                 | 7 ++++---
 4 files changed, 12 insertions(+), 5 deletions(-)

diff --git a/doc/guides/contributing/documentation.rst b/doc/guides/contributing/documentation.rst
index 8cbd4a0f6f..7fcbb7fc43 100644
--- a/doc/guides/contributing/documentation.rst
+++ b/doc/guides/contributing/documentation.rst
@@ -705,7 +705,7 @@ The following are some guidelines for use of Doxygen in the DPDK API documentati
      /**< Virtual address of the first mempool object. */
      uintptr_t   elt_va_end;
      /**< Virtual address of the <size + 1> mempool object. */
-     phys_addr_t elt_pa[MEMPOOL_PG_NUM_DEFAULT];
+     phys_addr_t elt_pa[1];
      /**< Array of physical page addresses for the mempool buffer. */
 
   This doesn't have an effect on the rendered documentation but it is confusing for the developer reading the code.
@@ -724,7 +724,7 @@ The following are some guidelines for use of Doxygen in the DPDK API documentati
      /** Virtual address of the <size + 1> mempool object. */
      uintptr_t   elt_va_end;
      /** Array of physical page addresses for the mempool buffer. */
-     phys_addr_t elt_pa[MEMPOOL_PG_NUM_DEFAULT];
+     phys_addr_t elt_pa[1];
 
 * Read the rendered section of the documentation that you have added for correctness, clarity and consistency
   with the surrounding text.
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 0095d48084..c59dd5ca98 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -47,6 +47,9 @@ Deprecation Notices
   deprecated and will be removed in DPDK 22.11. Use replacement macro
   ``RTE_MEMPOOL_REGISTER_OPS()``.
 
+* mempool: The mempool API macros ``MEMPOOL_PG_*`` are deprecated and
+  will be removed in DPDK 22.11.
+
 * mbuf: The mbuf offload flags ``PKT_*`` will be renamed as ``RTE_MBUF_F_*``.
   A compatibility layer will be kept until DPDK 22.11, except for the flags
   that are already deprecated (``PKT_RX_L4_CKSUM_BAD``, ``PKT_RX_IP_CKSUM_BAD``,
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 9804c033c0..eea9c13151 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -230,6 +230,9 @@ API Changes
 * mempool: Macro to register mempool driver ``MEMPOOL_REGISTER_OPS()`` is
   deprecated.  Use replacement ``RTE_MEMPOOL_REGISTER_OPS()``.
 
+* mempool: The mempool API macros ``MEMPOOL_PG_*`` are deprecated and
+  will be removed in DPDK 22.11.
+
 * net: Renamed ``s_addr`` and ``d_addr`` fields of ``rte_ether_hdr`` structure
   to ``src_addr`` and ``dst_addr``, respectively.
 
diff --git a/lib/mempool/rte_mempool.h b/lib/mempool/rte_mempool.h
index 409836d4d1..8ef067fb12 100644
--- a/lib/mempool/rte_mempool.h
+++ b/lib/mempool/rte_mempool.h
@@ -116,10 +116,11 @@ struct rte_mempool_objsz {
 /* "MP_<name>" */
 #define	RTE_MEMPOOL_MZ_FORMAT	RTE_MEMPOOL_MZ_PREFIX "%s"
 
-#define	MEMPOOL_PG_SHIFT_MAX	(sizeof(uintptr_t) * CHAR_BIT - 1)
+#define	MEMPOOL_PG_SHIFT_MAX \
+	RTE_DEPRECATED(sizeof(uintptr_t) * CHAR_BIT - 1)
 
-/** Mempool over one chunk of physically continuous memory */
-#define	MEMPOOL_PG_NUM_DEFAULT	1
+/** Deprecated. Mempool over one chunk of physically continuous memory */
+#define	MEMPOOL_PG_NUM_DEFAULT	RTE_DEPRECATED(1)
 
 #ifndef RTE_MEMPOOL_ALIGN
 /**
-- 
2.30.2


^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [dpdk-dev] [PATCH v2 1/6] mempool: avoid flags documentation in the next line
  2021-10-19 10:08   ` [dpdk-dev] [PATCH v2 1/6] mempool: avoid flags documentation in the next line Andrew Rybchenko
@ 2021-10-19 16:13     ` Olivier Matz
  0 siblings, 0 replies; 53+ messages in thread
From: Olivier Matz @ 2021-10-19 16:13 UTC (permalink / raw)
  To: Andrew Rybchenko; +Cc: David Marchand, dev

On Tue, Oct 19, 2021 at 01:08:40PM +0300, Andrew Rybchenko wrote:
> Move documentation into a separate line just before define.
> Prepare to have a bit longer flag name because of namespace prefix.
> 
> Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>

Acked-by: Olivier Matz <olivier.matz@6wind.com>

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [dpdk-dev] [PATCH v2 2/6] mempool: add namespace prefix to flags
  2021-10-19 10:08   ` [dpdk-dev] [PATCH v2 2/6] mempool: add namespace prefix to flags Andrew Rybchenko
@ 2021-10-19 16:13     ` Olivier Matz
  2021-10-19 16:15       ` Olivier Matz
  2021-10-19 17:45       ` Andrew Rybchenko
  0 siblings, 2 replies; 53+ messages in thread
From: Olivier Matz @ 2021-10-19 16:13 UTC (permalink / raw)
  To: Andrew Rybchenko
  Cc: David Marchand, Maryam Tahhan, Reshma Pattan, Xiaoyun Li,
	Pavan Nikhilesh, Shijith Thotton, Jerin Jacob, Artem V. Andreev,
	Nithin Dabilpuram, Kiran Kumar K, Maciej Czekaj, Maxime Coquelin,
	Chenbo Xia, dev

On Tue, Oct 19, 2021 at 01:08:41PM +0300, Andrew Rybchenko wrote:
> Fix the mempool flgas namespace by adding an RTE_ prefix to the name.

nit: flgas -> flags

> The old flags remain usable, to be deprecated in the future.
> 
> Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>

(...)

> @@ -777,12 +777,12 @@ rte_mempool_cache_free(struct rte_mempool_cache *cache)
>  	rte_free(cache);
>  }
>  
> -#define MEMPOOL_KNOWN_FLAGS (MEMPOOL_F_NO_SPREAD \
> -	| MEMPOOL_F_NO_CACHE_ALIGN \
> -	| MEMPOOL_F_SP_PUT \
> -	| MEMPOOL_F_SC_GET \
> -	| MEMPOOL_F_POOL_CREATED \
> -	| MEMPOOL_F_NO_IOVA_CONTIG \
> +#define MEMPOOL_KNOWN_FLAGS (RTE_MEMPOOL_F_NO_SPREAD \
> +	| RTE_MEMPOOL_F_NO_CACHE_ALIGN \
> +	| RTE_MEMPOOL_F_SP_PUT \
> +	| RTE_MEMPOOL_F_SC_GET \
> +	| RTE_MEMPOOL_F_POOL_CREATED \
> +	| RTE_MEMPOOL_F_NO_IOVA_CONTIG \
>  	)

I guess MEMPOOL_KNOWN_FLAGS was kept as is on purpose.


^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [dpdk-dev] [PATCH v2 3/6] mempool: add namespace to internal but still visible API
  2021-10-19 10:08   ` [dpdk-dev] [PATCH v2 3/6] mempool: add namespace to internal but still visible API Andrew Rybchenko
@ 2021-10-19 16:14     ` Olivier Matz
  0 siblings, 0 replies; 53+ messages in thread
From: Olivier Matz @ 2021-10-19 16:14 UTC (permalink / raw)
  To: Andrew Rybchenko
  Cc: David Marchand, Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K,
	Sunil Kumar Kori, Satha Rao, Harman Kalra, Anoob Joseph, dev

On Tue, Oct 19, 2021 at 01:08:42PM +0300, Andrew Rybchenko wrote:
> Add RTE_ prefix to internal API defined in public header.
> Use the prefix instead of double underscore.
> Use uppercase for macros in the case of name conflict.
> 
> Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>

Acked-by: Olivier Matz <olivier.matz@6wind.com>

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [dpdk-dev] [PATCH v2 4/6] mempool: make header size calculation internal
  2021-10-19 10:08   ` [dpdk-dev] [PATCH v2 4/6] mempool: make header size calculation internal Andrew Rybchenko
@ 2021-10-19 16:14     ` Olivier Matz
  2021-10-19 17:23       ` Andrew Rybchenko
  0 siblings, 1 reply; 53+ messages in thread
From: Olivier Matz @ 2021-10-19 16:14 UTC (permalink / raw)
  To: Andrew Rybchenko; +Cc: David Marchand, Ray Kinsella, dev

On Tue, Oct 19, 2021 at 01:08:43PM +0300, Andrew Rybchenko wrote:
> Add RTE_ prefix to helper macro to calculate mempool header size and
> make it internal. Old macro is still available, but deprecated.
> 
> Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>

(...)

> +/** Deprecated. Use RTE_MEMPOOL_HEADER_SIZE() for internal purposes only. */
> +#define MEMPOOL_HEADER_SIZE(mp, cs) \
> +	RTE_DEPRECATED(RTE_MEMPOOL_HEADER_SIZE(mp, cs))
> +


I think it should be instead:

#define MEMPOOL_HEADER_SIZE(mp, cs) \
	RTE_DEPRECATED(MEMPOOL_HEADER_SIZE) RTE_MEMPOOL_HEADER_SIZE(mp, cs)

^ permalink raw reply	[flat|nested] 53+ messages in thread
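
To make the difference concrete, assuming RTE_DEPRECATED(x) from
rte_common.h expands to a diagnostic pragma and produces no value (the
caller below is hypothetical):

	/* Hypothetical legacy caller still using the old name. */
	size_t hdr = MEMPOOL_HEADER_SIZE(mp, cache_size);

	/*
	 * With the wrapper as posted,
	 *   #define MEMPOOL_HEADER_SIZE(mp, cs) \
	 *	RTE_DEPRECATED(RTE_MEMPOOL_HEADER_SIZE(mp, cs))
	 * the old macro expands to the warning pragma alone, so the
	 * assignment above is left with no value and stops compiling.
	 *
	 * With the form suggested in the review,
	 *   #define MEMPOOL_HEADER_SIZE(mp, cs) \
	 *	RTE_DEPRECATED(MEMPOOL_HEADER_SIZE) RTE_MEMPOOL_HEADER_SIZE(mp, cs)
	 * the old macro expands to the pragma followed by the real size
	 * expression, so the caller keeps building and the warning names
	 * MEMPOOL_HEADER_SIZE rather than its replacement.
	 */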

* Re: [dpdk-dev] [PATCH v2 2/6] mempool: add namespace prefix to flags
  2021-10-19 16:13     ` Olivier Matz
@ 2021-10-19 16:15       ` Olivier Matz
  2021-10-19 17:45       ` Andrew Rybchenko
  1 sibling, 0 replies; 53+ messages in thread
From: Olivier Matz @ 2021-10-19 16:15 UTC (permalink / raw)
  To: Andrew Rybchenko
  Cc: David Marchand, Maryam Tahhan, Reshma Pattan, Xiaoyun Li,
	Pavan Nikhilesh, Shijith Thotton, Jerin Jacob, Artem V. Andreev,
	Nithin Dabilpuram, Kiran Kumar K, Maciej Czekaj, Maxime Coquelin,
	Chenbo Xia, dev

On Tue, Oct 19, 2021 at 06:13:54PM +0200, Olivier Matz wrote:
> On Tue, Oct 19, 2021 at 01:08:41PM +0300, Andrew Rybchenko wrote:
> > Fix the mempool flgas namespace by adding an RTE_ prefix to the name.
> 
> nit: flgas -> flags
> 
> > The old flags remain usable, to be deprecated in the future.
> > 
> > Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> 
> (...)
> 
> > @@ -777,12 +777,12 @@ rte_mempool_cache_free(struct rte_mempool_cache *cache)
> >  	rte_free(cache);
> >  }
> >  
> > -#define MEMPOOL_KNOWN_FLAGS (MEMPOOL_F_NO_SPREAD \
> > -	| MEMPOOL_F_NO_CACHE_ALIGN \
> > -	| MEMPOOL_F_SP_PUT \
> > -	| MEMPOOL_F_SC_GET \
> > -	| MEMPOOL_F_POOL_CREATED \
> > -	| MEMPOOL_F_NO_IOVA_CONTIG \
> > +#define MEMPOOL_KNOWN_FLAGS (RTE_MEMPOOL_F_NO_SPREAD \
> > +	| RTE_MEMPOOL_F_NO_CACHE_ALIGN \
> > +	| RTE_MEMPOOL_F_SP_PUT \
> > +	| RTE_MEMPOOL_F_SC_GET \
> > +	| RTE_MEMPOOL_F_POOL_CREATED \
> > +	| RTE_MEMPOOL_F_NO_IOVA_CONTIG \
> >  	)
> 
> I guess MEMPOOL_KNOWN_FLAGS was kept as is on purpose.
> 

I forgot to add the ack

Acked-by: Olivier Matz <olivier.matz@6wind.com>

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [dpdk-dev] [PATCH v2 5/6] mempool: add namespace to driver register macro
  2021-10-19 10:08   ` [dpdk-dev] [PATCH v2 5/6] mempool: add namespace to driver register macro Andrew Rybchenko
@ 2021-10-19 16:16     ` Olivier Matz
  0 siblings, 0 replies; 53+ messages in thread
From: Olivier Matz @ 2021-10-19 16:16 UTC (permalink / raw)
  To: Andrew Rybchenko
  Cc: David Marchand, Ray Kinsella, Artem V. Andreev,
	Ashwin Sekhar T K, Pavan Nikhilesh, Hemant Agrawal,
	Sachin Saxena, Harman Kalra, Jerin Jacob, Nithin Dabilpuram, dev

On Tue, Oct 19, 2021 at 01:08:44PM +0300, Andrew Rybchenko wrote:
> Add RTE_ prefix to macro used to register mempool driver.
> The old one is still available but deprecated.
> 
> Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>

(...)

>  		rte_mempool_register_ops(&ops);			\
>  	}
>  
> +/** Deprecated. Use RTE_MEMPOOL_REGISTER_OPS() instead. */
> +#define MEMPOOL_REGISTER_OPS(ops) \
> +	RTE_DEPRECATED(RTE_MEMPOOL_REGISTER_OPS(ops))
> +
>  /**
>   * An object callback function for mempool.
>   *

Same comment as for 4/6

#define MEMPOOL_REGISTER_OPS(ops) \
     RTE_DEPRECATED(MEMPOOL_REGISTER_OPS) RTE_MEMPOOL_REGISTER_OPS(ops)


^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [dpdk-dev] [PATCH v2 6/6] mempool: deprecate unused defines
  2021-10-19 10:08   ` [dpdk-dev] [PATCH v2 6/6] mempool: deprecate unused defines Andrew Rybchenko
@ 2021-10-19 16:21     ` Olivier Matz
  2021-10-19 17:23       ` Andrew Rybchenko
  0 siblings, 1 reply; 53+ messages in thread
From: Olivier Matz @ 2021-10-19 16:21 UTC (permalink / raw)
  To: Andrew Rybchenko; +Cc: David Marchand, Ray Kinsella, dev

On Tue, Oct 19, 2021 at 01:08:45PM +0300, Andrew Rybchenko wrote:
> MEMPOOL_PG_NUM_DEFAULT and MEMPOOL_PG_SHIFT_MAX are not used.
> 
> Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>

Fixes: fd943c764a63 ("mempool: deprecate xmem functions") ?

> --- a/lib/mempool/rte_mempool.h
> +++ b/lib/mempool/rte_mempool.h
> @@ -116,10 +116,11 @@ struct rte_mempool_objsz {
>  /* "MP_<name>" */
>  #define	RTE_MEMPOOL_MZ_FORMAT	RTE_MEMPOOL_MZ_PREFIX "%s"
>  
> -#define	MEMPOOL_PG_SHIFT_MAX	(sizeof(uintptr_t) * CHAR_BIT - 1)
> +#define	MEMPOOL_PG_SHIFT_MAX \
> +	RTE_DEPRECATED(sizeof(uintptr_t) * CHAR_BIT - 1)
>  
> -/** Mempool over one chunk of physically continuous memory */
> -#define	MEMPOOL_PG_NUM_DEFAULT	1
> +/** Deprecated. Mempool over one chunk of physically continuous memory */
> +#define	MEMPOOL_PG_NUM_DEFAULT	RTE_DEPRECATED(1)
>  

Same comment as for the previous patches here.


Thanks Andrew for this series!

^ permalink raw reply	[flat|nested] 53+ messages in thread
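
Applying the pattern suggested for patches 4/6 and 5/6 to these
value-style defines would presumably give something along the lines of
the sketch below (illustration only, not necessarily the final wording
merged upstream):

	#define MEMPOOL_PG_SHIFT_MAX \
		RTE_DEPRECATED(MEMPOOL_PG_SHIFT_MAX) \
		(sizeof(uintptr_t) * CHAR_BIT - 1)

	#define MEMPOOL_PG_NUM_DEFAULT	RTE_DEPRECATED(MEMPOOL_PG_NUM_DEFAULT) 1

i.e. the warning names the old define while the original value still
follows, so any remaining user keeps compiling with a deprecation
warning instead of silently breaking.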

* Re: [dpdk-dev] [PATCH v2 4/6] mempool: make header size calculation internal
  2021-10-19 16:14     ` Olivier Matz
@ 2021-10-19 17:23       ` Andrew Rybchenko
  0 siblings, 0 replies; 53+ messages in thread
From: Andrew Rybchenko @ 2021-10-19 17:23 UTC (permalink / raw)
  To: Olivier Matz; +Cc: David Marchand, Ray Kinsella, dev

On 10/19/21 7:14 PM, Olivier Matz wrote:
> On Tue, Oct 19, 2021 at 01:08:43PM +0300, Andrew Rybchenko wrote:
>> Add RTE_ prefix to helper macro to calculate mempool header size and
>> make it internal. Old macro is still available, but deprecated.
>>
>> Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> 
> (...)
> 
>> +/** Deprecated. Use RTE_MEMPOOL_HEADER_SIZE() for internal purposes only. */
>> +#define MEMPOOL_HEADER_SIZE(mp, cs) \
>> +	RTE_DEPRECATED(RTE_MEMPOOL_HEADER_SIZE(mp, cs))
>> +
> 
> 
> I think it should be instead:
> 
> #define MEMPOOL_HEADER_SIZE(mp, cs) \
> 	RTE_DEPRECATED(MEMPOOL_HEADER_SIZE) RTE_MEMPOOL_HEADER_SIZE(mp, cs)
> 

Thanks a lot (a bit ashamed... :) )

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [dpdk-dev] [PATCH v2 6/6] mempool: deprecate unused defines
  2021-10-19 16:21     ` Olivier Matz
@ 2021-10-19 17:23       ` Andrew Rybchenko
  0 siblings, 0 replies; 53+ messages in thread
From: Andrew Rybchenko @ 2021-10-19 17:23 UTC (permalink / raw)
  To: Olivier Matz; +Cc: David Marchand, Ray Kinsella, dev

On 10/19/21 7:21 PM, Olivier Matz wrote:
> On Tue, Oct 19, 2021 at 01:08:45PM +0300, Andrew Rybchenko wrote:
>> MEMPOOL_PG_NUM_DEFAULT and MEMPOOL_PG_SHIFT_MAX are not used.
>>
>> Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> 
> Fixes: fd943c764a63 ("mempool: deprecate xmem functions") ?

I think it is a good idea. Without Cc to stable.

> 
>> --- a/lib/mempool/rte_mempool.h
>> +++ b/lib/mempool/rte_mempool.h
>> @@ -116,10 +116,11 @@ struct rte_mempool_objsz {
>>   /* "MP_<name>" */
>>   #define	RTE_MEMPOOL_MZ_FORMAT	RTE_MEMPOOL_MZ_PREFIX "%s"
>>   
>> -#define	MEMPOOL_PG_SHIFT_MAX	(sizeof(uintptr_t) * CHAR_BIT - 1)
>> +#define	MEMPOOL_PG_SHIFT_MAX \
>> +	RTE_DEPRECATED(sizeof(uintptr_t) * CHAR_BIT - 1)
>>   
>> -/** Mempool over one chunk of physically continuous memory */
>> -#define	MEMPOOL_PG_NUM_DEFAULT	1
>> +/** Deprecated. Mempool over one chunk of physically continuous memory */
>> +#define	MEMPOOL_PG_NUM_DEFAULT	RTE_DEPRECATED(1)
>>   
> 
> Same comment than previous patches here.
> 
> 
> Thanks Andrew for this series!

Thanks a lot for the review.


^ permalink raw reply	[flat|nested] 53+ messages in thread

* [dpdk-dev] [PATCH v3 0/6] mempool: cleanup namespace
  2021-10-18 14:49 [dpdk-dev] [PATCH 0/6] mempool: cleanup namespace Andrew Rybchenko
                   ` (6 preceding siblings ...)
  2021-10-19 10:08 ` [dpdk-dev] [PATCH v2 0/6] mempool: cleanup namespace Andrew Rybchenko
@ 2021-10-19 17:40 ` Andrew Rybchenko
  2021-10-19 17:40   ` [dpdk-dev] [PATCH v3 1/6] mempool: avoid flags documentation in the next line Andrew Rybchenko
                     ` (6 more replies)
  7 siblings, 7 replies; 53+ messages in thread
From: Andrew Rybchenko @ 2021-10-19 17:40 UTC (permalink / raw)
  To: Olivier Matz; +Cc: David Marchand, dev

Add RTE_ prefix to mempool API including internal. Keep old public API
with fallback to new defines. Internal API is just renamed.

v3:
    - fix typo
    - rebase on top of current main
    - add prefix to newly added MEMPOOL_F_NON_IO
    - fix deprecation usage
    - add Fixes tag to the patch which deprecates unused macros

v2:
    - do not deprecate MEMPOOL_F_* flags
    - fix unintended usage of internal get/put helpers from bulk get/put

Andrew Rybchenko (6):
  mempool: avoid flags documentation in the next line
  mempool: add namespace prefix to flags
  mempool: add namespace to internal but still visible API
  mempool: make header size calculation internal
  mempool: add namespace to driver register macro
  mempool: deprecate unused defines

 app/proc-info/main.c                          |  17 +-
 app/test-pmd/parameters.c                     |   4 +-
 app/test/test_mempool.c                       |  18 +-
 doc/guides/contributing/documentation.rst     |   4 +-
 doc/guides/nics/mlx5.rst                      |   2 +-
 doc/guides/prog_guide/mempool_lib.rst         |   2 +-
 doc/guides/rel_notes/deprecation.rst          |  11 ++
 doc/guides/rel_notes/release_21_11.rst        |  14 +-
 drivers/common/mlx5/mlx5_common_mr.c          |   4 +-
 drivers/event/cnxk/cnxk_tim_evdev.c           |   2 +-
 drivers/event/octeontx/ssovf_worker.h         |   2 +-
 drivers/event/octeontx/timvf_evdev.c          |   2 +-
 drivers/event/octeontx2/otx2_tim_evdev.c      |   2 +-
 drivers/mempool/bucket/rte_mempool_bucket.c   |  10 +-
 drivers/mempool/cnxk/cn10k_mempool_ops.c      |   2 +-
 drivers/mempool/cnxk/cn9k_mempool_ops.c       |   2 +-
 drivers/mempool/dpaa/dpaa_mempool.c           |   2 +-
 drivers/mempool/dpaa2/dpaa2_hw_mempool.c      |   2 +-
 .../mempool/octeontx/rte_mempool_octeontx.c   |   2 +-
 drivers/mempool/octeontx2/otx2_mempool_ops.c  |   2 +-
 drivers/mempool/ring/rte_mempool_ring.c       |  16 +-
 drivers/mempool/stack/rte_mempool_stack.c     |   4 +-
 drivers/net/cnxk/cn10k_rx.h                   |  12 +-
 drivers/net/cnxk/cn10k_tx.h                   |  30 ++--
 drivers/net/cnxk/cn9k_rx.h                    |  12 +-
 drivers/net/cnxk/cn9k_tx.h                    |  26 +--
 drivers/net/mlx5/mlx5_mr.c                    |   2 +-
 drivers/net/octeontx/octeontx_rxtx.h          |   4 +-
 drivers/net/octeontx2/otx2_ethdev.c           |   4 +-
 drivers/net/octeontx2/otx2_ethdev_sec_tx.h    |   2 +-
 drivers/net/octeontx2/otx2_rx.c               |   8 +-
 drivers/net/octeontx2/otx2_rx.h               |   4 +-
 drivers/net/octeontx2/otx2_tx.c               |  16 +-
 drivers/net/octeontx2/otx2_tx.h               |   4 +-
 drivers/net/thunderx/nicvf_ethdev.c           |   2 +-
 lib/mempool/rte_mempool.c                     |  58 +++----
 lib/mempool/rte_mempool.h                     | 164 +++++++++++-------
 lib/mempool/rte_mempool_ops.c                 |   2 +-
 lib/pdump/rte_pdump.c                         |   3 +-
 lib/vhost/iotlb.c                             |   4 +-
 40 files changed, 275 insertions(+), 208 deletions(-)

-- 
2.30.2


^ permalink raw reply	[flat|nested] 53+ messages in thread

* [dpdk-dev] [PATCH v3 1/6] mempool: avoid flags documentation in the next line
  2021-10-19 17:40 ` [dpdk-dev] [PATCH v3 0/6] mempool: cleanup namespace Andrew Rybchenko
@ 2021-10-19 17:40   ` Andrew Rybchenko
  2021-10-19 17:40   ` [dpdk-dev] [PATCH v3 2/6] mempool: add namespace prefix to flags Andrew Rybchenko
                     ` (5 subsequent siblings)
  6 siblings, 0 replies; 53+ messages in thread
From: Andrew Rybchenko @ 2021-10-19 17:40 UTC (permalink / raw)
  To: Olivier Matz; +Cc: David Marchand, dev

Move documentation into a separate line just before define.
Prepare to have a bit longer flag name because of namespace prefix.

Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
---
 lib/mempool/rte_mempool.h | 17 +++++++++++------
 1 file changed, 11 insertions(+), 6 deletions(-)

diff --git a/lib/mempool/rte_mempool.h b/lib/mempool/rte_mempool.h
index b2e20c8855..ee27f79d63 100644
--- a/lib/mempool/rte_mempool.h
+++ b/lib/mempool/rte_mempool.h
@@ -250,13 +250,18 @@ struct rte_mempool {
 #endif
 }  __rte_cache_aligned;
 
+/** Spreading among memory channels not required. */
 #define MEMPOOL_F_NO_SPREAD      0x0001
-		/**< Spreading among memory channels not required. */
-#define MEMPOOL_F_NO_CACHE_ALIGN 0x0002 /**< Do not align objs on cache lines.*/
-#define MEMPOOL_F_SP_PUT         0x0004 /**< Default put is "single-producer".*/
-#define MEMPOOL_F_SC_GET         0x0008 /**< Default get is "single-consumer".*/
-#define MEMPOOL_F_POOL_CREATED   0x0010 /**< Internal: pool is created. */
-#define MEMPOOL_F_NO_IOVA_CONTIG 0x0020 /**< Don't need IOVA contiguous objs. */
+/** Do not align objects on cache lines. */
+#define MEMPOOL_F_NO_CACHE_ALIGN 0x0002
+/** Default put is "single-producer". */
+#define MEMPOOL_F_SP_PUT         0x0004
+/** Default get is "single-consumer". */
+#define MEMPOOL_F_SC_GET         0x0008
+/** Internal: pool is created. */
+#define MEMPOOL_F_POOL_CREATED   0x0010
+/** Don't need IOVA contiguous objects. */
+#define MEMPOOL_F_NO_IOVA_CONTIG 0x0020
 /** Internal: no object from the pool can be used for device IO (DMA). */
 #define MEMPOOL_F_NON_IO         0x0040
 
-- 
2.30.2


^ permalink raw reply	[flat|nested] 53+ messages in thread

* [dpdk-dev] [PATCH v3 2/6] mempool: add namespace prefix to flags
  2021-10-19 17:40 ` [dpdk-dev] [PATCH v3 0/6] mempool: cleanup namespace Andrew Rybchenko
  2021-10-19 17:40   ` [dpdk-dev] [PATCH v3 1/6] mempool: avoid flags documentation in the next line Andrew Rybchenko
@ 2021-10-19 17:40   ` Andrew Rybchenko
  2021-10-19 20:03     ` David Marchand
  2021-10-19 17:40   ` [dpdk-dev] [PATCH v3 3/6] mempool: add namespace to internal but still visible API Andrew Rybchenko
                     ` (4 subsequent siblings)
  6 siblings, 1 reply; 53+ messages in thread
From: Andrew Rybchenko @ 2021-10-19 17:40 UTC (permalink / raw)
  To: Olivier Matz, Maryam Tahhan, Reshma Pattan, Xiaoyun Li,
	Matan Azrad, Viacheslav Ovsiienko, Pavan Nikhilesh,
	Shijith Thotton, Jerin Jacob, Artem V. Andreev,
	Nithin Dabilpuram, Kiran Kumar K, Maciej Czekaj, Maxime Coquelin,
	Chenbo Xia
  Cc: David Marchand, dev

Fix the mempool flags namespace by adding an RTE_ prefix to the name.
The old flags remain usable, to be deprecated in the future.

Flag MEMPOOL_F_NON_IO added in the release is just renamed to have RTE_
prefix.

Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
---
 app/proc-info/main.c                        | 17 +++---
 app/test-pmd/parameters.c                   |  4 +-
 app/test/test_mempool.c                     | 16 +++---
 doc/guides/nics/mlx5.rst                    |  2 +-
 doc/guides/rel_notes/release_21_11.rst      |  5 +-
 drivers/common/mlx5/mlx5_common_mr.c        |  4 +-
 drivers/event/cnxk/cnxk_tim_evdev.c         |  2 +-
 drivers/event/octeontx/timvf_evdev.c        |  2 +-
 drivers/event/octeontx2/otx2_tim_evdev.c    |  2 +-
 drivers/mempool/bucket/rte_mempool_bucket.c |  8 +--
 drivers/mempool/ring/rte_mempool_ring.c     |  4 +-
 drivers/net/mlx5/mlx5_mr.c                  |  2 +-
 drivers/net/octeontx2/otx2_ethdev.c         |  4 +-
 drivers/net/thunderx/nicvf_ethdev.c         |  2 +-
 lib/mempool/rte_mempool.c                   | 44 ++++++++--------
 lib/mempool/rte_mempool.h                   | 57 +++++++++++++++------
 lib/mempool/rte_mempool_ops.c               |  2 +-
 lib/pdump/rte_pdump.c                       |  3 +-
 lib/vhost/iotlb.c                           |  4 +-
 19 files changed, 108 insertions(+), 76 deletions(-)

diff --git a/app/proc-info/main.c b/app/proc-info/main.c
index 8ec9cadd79..a1d9bfbf30 100644
--- a/app/proc-info/main.c
+++ b/app/proc-info/main.c
@@ -1299,13 +1299,16 @@ show_mempool(char *name)
 				"\t  -- Not used for IO (%c)\n",
 				ptr->name,
 				ptr->socket_id,
-				(flags & MEMPOOL_F_NO_SPREAD) ? 'y' : 'n',
-				(flags & MEMPOOL_F_NO_CACHE_ALIGN) ? 'y' : 'n',
-				(flags & MEMPOOL_F_SP_PUT) ? 'y' : 'n',
-				(flags & MEMPOOL_F_SC_GET) ? 'y' : 'n',
-				(flags & MEMPOOL_F_POOL_CREATED) ? 'y' : 'n',
-				(flags & MEMPOOL_F_NO_IOVA_CONTIG) ? 'y' : 'n',
-				(flags & MEMPOOL_F_NON_IO) ? 'y' : 'n');
+				(flags & RTE_MEMPOOL_F_NO_SPREAD) ? 'y' : 'n',
+				(flags & RTE_MEMPOOL_F_NO_CACHE_ALIGN) ?
+					'y' : 'n',
+				(flags & RTE_MEMPOOL_F_SP_PUT) ? 'y' : 'n',
+				(flags & RTE_MEMPOOL_F_SC_GET) ? 'y' : 'n',
+				(flags & RTE_MEMPOOL_F_POOL_CREATED) ?
+					'y' : 'n',
+				(flags & RTE_MEMPOOL_F_NO_IOVA_CONTIG) ?
+					'y' : 'n',
+				(flags & RTE_MEMPOOL_F_NON_IO) ? 'y' : 'n');
 			printf("  - Size %u Cache %u element %u\n"
 				"  - header %u trailer %u\n"
 				"  - private data size %u\n",
diff --git a/app/test-pmd/parameters.c b/app/test-pmd/parameters.c
index 3f94a82e32..b69897ef00 100644
--- a/app/test-pmd/parameters.c
+++ b/app/test-pmd/parameters.c
@@ -1396,7 +1396,7 @@ launch_args_parse(int argc, char** argv)
 						 "noisy-lkup-num-reads-writes must be >= 0\n");
 			}
 			if (!strcmp(lgopts[opt_idx].name, "no-iova-contig"))
-				mempool_flags = MEMPOOL_F_NO_IOVA_CONTIG;
+				mempool_flags = RTE_MEMPOOL_F_NO_IOVA_CONTIG;
 
 			if (!strcmp(lgopts[opt_idx].name, "rx-mq-mode")) {
 				char *end = NULL;
@@ -1440,7 +1440,7 @@ launch_args_parse(int argc, char** argv)
 	rx_mode.offloads = rx_offloads;
 	tx_mode.offloads = tx_offloads;
 
-	if (mempool_flags & MEMPOOL_F_NO_IOVA_CONTIG &&
+	if (mempool_flags & RTE_MEMPOOL_F_NO_IOVA_CONTIG &&
 	    mp_alloc_type != MP_ALLOC_ANON) {
 		TESTPMD_LOG(WARNING, "cannot use no-iova-contig without "
 				  "mp-alloc=anon. mempool no-iova-contig is "
diff --git a/app/test/test_mempool.c b/app/test/test_mempool.c
index f4947680bc..4ec236d239 100644
--- a/app/test/test_mempool.c
+++ b/app/test/test_mempool.c
@@ -215,7 +215,7 @@ static int test_mempool_creation_with_unknown_flag(void)
 		MEMPOOL_ELT_SIZE, 0, 0,
 		NULL, NULL,
 		NULL, NULL,
-		SOCKET_ID_ANY, MEMPOOL_F_NO_IOVA_CONTIG << 1);
+		SOCKET_ID_ANY, RTE_MEMPOOL_F_NO_IOVA_CONTIG << 1);
 
 	if (mp_cov != NULL) {
 		rte_mempool_free(mp_cov);
@@ -338,8 +338,8 @@ test_mempool_sp_sc(void)
 			my_mp_init, NULL,
 			my_obj_init, NULL,
 			SOCKET_ID_ANY,
-			MEMPOOL_F_NO_CACHE_ALIGN | MEMPOOL_F_SP_PUT |
-			MEMPOOL_F_SC_GET);
+			RTE_MEMPOOL_F_NO_CACHE_ALIGN | RTE_MEMPOOL_F_SP_PUT |
+			RTE_MEMPOOL_F_SC_GET);
 		if (mp_spsc == NULL)
 			RET_ERR();
 	}
@@ -752,7 +752,7 @@ test_mempool_flag_non_io_set_when_no_iova_contig_set(void)
 	ret = rte_mempool_populate_default(mp);
 	RTE_TEST_ASSERT(ret > 0, "Failed to populate mempool: %s",
 			rte_strerror(-ret));
-	RTE_TEST_ASSERT(mp->flags & MEMPOOL_F_NON_IO,
+	RTE_TEST_ASSERT(mp->flags & RTE_MEMPOOL_F_NON_IO,
 			"NON_IO flag is not set when NO_IOVA_CONTIG is set");
 	ret = TEST_SUCCESS;
 exit:
@@ -789,20 +789,20 @@ test_mempool_flag_non_io_unset_when_populated_with_valid_iova(void)
 					RTE_BAD_IOVA, block_size, NULL, NULL);
 	RTE_TEST_ASSERT(ret > 0, "Failed to populate mempool: %s",
 			rte_strerror(-ret));
-	RTE_TEST_ASSERT(mp->flags & MEMPOOL_F_NON_IO,
+	RTE_TEST_ASSERT(mp->flags & RTE_MEMPOOL_F_NON_IO,
 			"NON_IO flag is not set when mempool is populated with only RTE_BAD_IOVA");
 
 	ret = rte_mempool_populate_iova(mp, virt, iova, block_size, NULL, NULL);
 	RTE_TEST_ASSERT(ret > 0, "Failed to populate mempool: %s",
 			rte_strerror(-ret));
-	RTE_TEST_ASSERT(!(mp->flags & MEMPOOL_F_NON_IO),
+	RTE_TEST_ASSERT(!(mp->flags & RTE_MEMPOOL_F_NON_IO),
 			"NON_IO flag is not unset when mempool is populated with valid IOVA");
 
 	ret = rte_mempool_populate_iova(mp, RTE_PTR_ADD(virt, 2 * block_size),
 					RTE_BAD_IOVA, block_size, NULL, NULL);
 	RTE_TEST_ASSERT(ret > 0, "Failed to populate mempool: %s",
 			rte_strerror(-ret));
-	RTE_TEST_ASSERT(!(mp->flags & MEMPOOL_F_NON_IO),
+	RTE_TEST_ASSERT(!(mp->flags & RTE_MEMPOOL_F_NON_IO),
 			"NON_IO flag is set even when some objects have valid IOVA");
 	ret = TEST_SUCCESS;
 
@@ -826,7 +826,7 @@ test_mempool_flag_non_io_unset_by_default(void)
 	ret = rte_mempool_populate_default(mp);
 	RTE_TEST_ASSERT_EQUAL(ret, (int)mp->size, "Failed to populate mempool: %s",
 			      rte_strerror(-ret));
-	RTE_TEST_ASSERT(!(mp->flags & MEMPOOL_F_NON_IO),
+	RTE_TEST_ASSERT(!(mp->flags & RTE_MEMPOOL_F_NON_IO),
 			"NON_IO flag is set by default");
 	ret = TEST_SUCCESS;
 exit:
diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index 106e32e1c4..0597f147dd 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -1004,7 +1004,7 @@ Driver options
 - ``mr_mempool_reg_en`` parameter [int]
 
   A nonzero value enables implicit registration of DMA memory of all mempools
-  except those having ``MEMPOOL_F_NON_IO``. This flag is set automatically
+  except those having ``RTE_MEMPOOL_F_NON_IO``. This flag is set automatically
   for mempools populated with non-contiguous objects or those without IOVA.
   The effect is that when a packet from a mempool is transmitted,
   its memory is already registered for DMA in the PMD and no registration
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 3362c52a73..7db4cb38c0 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -227,9 +227,12 @@ API Changes
   removed. Its usages have been replaced by a new function
   ``rte_kvargs_get_with_value()``.
 
-* mempool: Added ``MEMPOOL_F_NON_IO`` flag to give a hint to DPDK components
+* mempool: Added ``RTE_MEMPOOL_F_NON_IO`` flag to give a hint to DPDK components
   that objects from this pool will not be used for device IO (e.g. DMA).
 
+* mempool: The mempool flags ``MEMPOOL_F_*`` will be deprecated in the future.
+  Newly added flags with ``RTE_MEMPOOL_F_`` prefix should be used instead.
+
 * net: Renamed ``s_addr`` and ``d_addr`` fields of ``rte_ether_hdr`` structure
   to ``src_addr`` and ``dst_addr``, respectively.
 
diff --git a/drivers/common/mlx5/mlx5_common_mr.c b/drivers/common/mlx5/mlx5_common_mr.c
index 2e039a4e70..1beaead30d 100644
--- a/drivers/common/mlx5/mlx5_common_mr.c
+++ b/drivers/common/mlx5/mlx5_common_mr.c
@@ -1564,7 +1564,7 @@ int
 mlx5_mr_mempool_register(struct mlx5_mr_share_cache *share_cache, void *pd,
 			 struct rte_mempool *mp, struct mlx5_mp_id *mp_id)
 {
-	if (mp->flags & MEMPOOL_F_NON_IO)
+	if (mp->flags & RTE_MEMPOOL_F_NON_IO)
 		return 0;
 	switch (rte_eal_process_type()) {
 	case RTE_PROC_PRIMARY:
@@ -1635,7 +1635,7 @@ int
 mlx5_mr_mempool_unregister(struct mlx5_mr_share_cache *share_cache,
 			   struct rte_mempool *mp, struct mlx5_mp_id *mp_id)
 {
-	if (mp->flags & MEMPOOL_F_NON_IO)
+	if (mp->flags & RTE_MEMPOOL_F_NON_IO)
 		return 0;
 	switch (rte_eal_process_type()) {
 	case RTE_PROC_PRIMARY:
diff --git a/drivers/event/cnxk/cnxk_tim_evdev.c b/drivers/event/cnxk/cnxk_tim_evdev.c
index 9d40e336d7..d325daed95 100644
--- a/drivers/event/cnxk/cnxk_tim_evdev.c
+++ b/drivers/event/cnxk/cnxk_tim_evdev.c
@@ -19,7 +19,7 @@ cnxk_tim_chnk_pool_create(struct cnxk_tim_ring *tim_ring,
 	cache_sz /= rte_lcore_count();
 	/* Create chunk pool. */
 	if (rcfg->flags & RTE_EVENT_TIMER_ADAPTER_F_SP_PUT) {
-		mp_flags = MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET;
+		mp_flags = RTE_MEMPOOL_F_SP_PUT | RTE_MEMPOOL_F_SC_GET;
 		plt_tim_dbg("Using single producer mode");
 		tim_ring->prod_type_sp = true;
 	}
diff --git a/drivers/event/octeontx/timvf_evdev.c b/drivers/event/octeontx/timvf_evdev.c
index 688e9daa66..06fc53cc5b 100644
--- a/drivers/event/octeontx/timvf_evdev.c
+++ b/drivers/event/octeontx/timvf_evdev.c
@@ -310,7 +310,7 @@ timvf_ring_create(struct rte_event_timer_adapter *adptr)
 	}
 
 	if (rcfg->flags & RTE_EVENT_TIMER_ADAPTER_F_SP_PUT) {
-		mp_flags = MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET;
+		mp_flags = RTE_MEMPOOL_F_SP_PUT | RTE_MEMPOOL_F_SC_GET;
 		timvf_log_info("Using single producer mode");
 	}
 
diff --git a/drivers/event/octeontx2/otx2_tim_evdev.c b/drivers/event/octeontx2/otx2_tim_evdev.c
index de50c4c76e..3cdc468140 100644
--- a/drivers/event/octeontx2/otx2_tim_evdev.c
+++ b/drivers/event/octeontx2/otx2_tim_evdev.c
@@ -81,7 +81,7 @@ tim_chnk_pool_create(struct otx2_tim_ring *tim_ring,
 	cache_sz /= rte_lcore_count();
 	/* Create chunk pool. */
 	if (rcfg->flags & RTE_EVENT_TIMER_ADAPTER_F_SP_PUT) {
-		mp_flags = MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET;
+		mp_flags = RTE_MEMPOOL_F_SP_PUT | RTE_MEMPOOL_F_SC_GET;
 		otx2_tim_dbg("Using single producer mode");
 		tim_ring->prod_type_sp = true;
 	}
diff --git a/drivers/mempool/bucket/rte_mempool_bucket.c b/drivers/mempool/bucket/rte_mempool_bucket.c
index 8b9daa9782..8ff9e53007 100644
--- a/drivers/mempool/bucket/rte_mempool_bucket.c
+++ b/drivers/mempool/bucket/rte_mempool_bucket.c
@@ -426,7 +426,7 @@ bucket_init_per_lcore(unsigned int lcore_id, void *arg)
 		goto error;
 
 	rg_flags = RING_F_SC_DEQ;
-	if (mp->flags & MEMPOOL_F_SP_PUT)
+	if (mp->flags & RTE_MEMPOOL_F_SP_PUT)
 		rg_flags |= RING_F_SP_ENQ;
 	bd->adoption_buffer_rings[lcore_id] = rte_ring_create(rg_name,
 		rte_align32pow2(mp->size + 1), mp->socket_id, rg_flags);
@@ -472,7 +472,7 @@ bucket_alloc(struct rte_mempool *mp)
 		goto no_mem_for_data;
 	}
 	bd->pool = mp;
-	if (mp->flags & MEMPOOL_F_NO_CACHE_ALIGN)
+	if (mp->flags & RTE_MEMPOOL_F_NO_CACHE_ALIGN)
 		bucket_header_size = sizeof(struct bucket_header);
 	else
 		bucket_header_size = RTE_CACHE_LINE_SIZE;
@@ -494,9 +494,9 @@ bucket_alloc(struct rte_mempool *mp)
 		goto no_mem_for_stacks;
 	}
 
-	if (mp->flags & MEMPOOL_F_SP_PUT)
+	if (mp->flags & RTE_MEMPOOL_F_SP_PUT)
 		rg_flags |= RING_F_SP_ENQ;
-	if (mp->flags & MEMPOOL_F_SC_GET)
+	if (mp->flags & RTE_MEMPOOL_F_SC_GET)
 		rg_flags |= RING_F_SC_DEQ;
 	rc = snprintf(rg_name, sizeof(rg_name),
 		      RTE_MEMPOOL_MZ_FORMAT ".0", mp->name);
diff --git a/drivers/mempool/ring/rte_mempool_ring.c b/drivers/mempool/ring/rte_mempool_ring.c
index b1f09ff28f..4b785971c4 100644
--- a/drivers/mempool/ring/rte_mempool_ring.c
+++ b/drivers/mempool/ring/rte_mempool_ring.c
@@ -110,9 +110,9 @@ common_ring_alloc(struct rte_mempool *mp)
 {
 	uint32_t rg_flags = 0;
 
-	if (mp->flags & MEMPOOL_F_SP_PUT)
+	if (mp->flags & RTE_MEMPOOL_F_SP_PUT)
 		rg_flags |= RING_F_SP_ENQ;
-	if (mp->flags & MEMPOOL_F_SC_GET)
+	if (mp->flags & RTE_MEMPOOL_F_SC_GET)
 		rg_flags |= RING_F_SC_DEQ;
 
 	return ring_alloc(mp, rg_flags);
diff --git a/drivers/net/mlx5/mlx5_mr.c b/drivers/net/mlx5/mlx5_mr.c
index 55d27b50b9..fdbe7986fd 100644
--- a/drivers/net/mlx5/mlx5_mr.c
+++ b/drivers/net/mlx5/mlx5_mr.c
@@ -127,7 +127,7 @@ mlx5_tx_mb2mr_bh(struct mlx5_txq_data *txq, struct rte_mbuf *mb)
 						     mr_ctrl, mp, addr);
 			/*
 			 * Lookup can only fail on invalid input, e.g. "addr"
-			 * is not from "mp" or "mp" has MEMPOOL_F_NON_IO set.
+			 * is not from "mp" or "mp" has RTE_MEMPOOL_F_NON_IO set.
 			 */
 			if (lkey != UINT32_MAX)
 				return lkey;
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index d576bc6989..9db62acbd0 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -1124,7 +1124,7 @@ nix_alloc_sqb_pool(int port, struct otx2_eth_txq *txq, uint16_t nb_desc)
 
 	txq->sqb_pool = rte_mempool_create_empty(name, NIX_MAX_SQB, blk_sz,
 						 0, 0, dev->node,
-						 MEMPOOL_F_NO_SPREAD);
+						 RTE_MEMPOOL_F_NO_SPREAD);
 	txq->nb_sqb_bufs = nb_sqb_bufs;
 	txq->sqes_per_sqb_log2 = (uint16_t)rte_log2_u32(sqes_per_sqb);
 	txq->nb_sqb_bufs_adj = nb_sqb_bufs -
@@ -1150,7 +1150,7 @@ nix_alloc_sqb_pool(int port, struct otx2_eth_txq *txq, uint16_t nb_desc)
 		goto fail;
 	}
 
-	tmp = rte_mempool_calc_obj_size(blk_sz, MEMPOOL_F_NO_SPREAD, &sz);
+	tmp = rte_mempool_calc_obj_size(blk_sz, RTE_MEMPOOL_F_NO_SPREAD, &sz);
 	if (dev->sqb_size != sz.elt_size) {
 		otx2_err("sqe pool block size is not expected %d != %d",
 			 dev->sqb_size, tmp);
diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index 5502f1ee69..7e07d381dd 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -1302,7 +1302,7 @@ nicvf_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t qidx,
 	}
 
 	/* Mempool memory must be physically contiguous */
-	if (mp->flags & MEMPOOL_F_NO_IOVA_CONTIG) {
+	if (mp->flags & RTE_MEMPOOL_F_NO_IOVA_CONTIG) {
 		PMD_INIT_LOG(ERR, "Mempool memory must be physically contiguous");
 		return -EINVAL;
 	}
diff --git a/lib/mempool/rte_mempool.c b/lib/mempool/rte_mempool.c
index 7d7d97d85d..2eab38f0d4 100644
--- a/lib/mempool/rte_mempool.c
+++ b/lib/mempool/rte_mempool.c
@@ -228,7 +228,7 @@ rte_mempool_calc_obj_size(uint32_t elt_size, uint32_t flags,
 	sz = (sz != NULL) ? sz : &lsz;
 
 	sz->header_size = sizeof(struct rte_mempool_objhdr);
-	if ((flags & MEMPOOL_F_NO_CACHE_ALIGN) == 0)
+	if ((flags & RTE_MEMPOOL_F_NO_CACHE_ALIGN) == 0)
 		sz->header_size = RTE_ALIGN_CEIL(sz->header_size,
 			RTE_MEMPOOL_ALIGN);
 
@@ -242,7 +242,7 @@ rte_mempool_calc_obj_size(uint32_t elt_size, uint32_t flags,
 	sz->elt_size = RTE_ALIGN_CEIL(elt_size, sizeof(uint64_t));
 
 	/* expand trailer to next cache line */
-	if ((flags & MEMPOOL_F_NO_CACHE_ALIGN) == 0) {
+	if ((flags & RTE_MEMPOOL_F_NO_CACHE_ALIGN) == 0) {
 		sz->total_size = sz->header_size + sz->elt_size +
 			sz->trailer_size;
 		sz->trailer_size += ((RTE_MEMPOOL_ALIGN -
@@ -254,7 +254,7 @@ rte_mempool_calc_obj_size(uint32_t elt_size, uint32_t flags,
 	 * increase trailer to add padding between objects in order to
 	 * spread them across memory channels/ranks
 	 */
-	if ((flags & MEMPOOL_F_NO_SPREAD) == 0) {
+	if ((flags & RTE_MEMPOOL_F_NO_SPREAD) == 0) {
 		unsigned new_size;
 		new_size = arch_mem_object_align
 			    (sz->header_size + sz->elt_size + sz->trailer_size);
@@ -306,11 +306,11 @@ mempool_ops_alloc_once(struct rte_mempool *mp)
 	int ret;
 
 	/* create the internal ring if not already done */
-	if ((mp->flags & MEMPOOL_F_POOL_CREATED) == 0) {
+	if ((mp->flags & RTE_MEMPOOL_F_POOL_CREATED) == 0) {
 		ret = rte_mempool_ops_alloc(mp);
 		if (ret != 0)
 			return ret;
-		mp->flags |= MEMPOOL_F_POOL_CREATED;
+		mp->flags |= RTE_MEMPOOL_F_POOL_CREATED;
 	}
 	return 0;
 }
@@ -348,7 +348,7 @@ rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
 	memhdr->free_cb = free_cb;
 	memhdr->opaque = opaque;
 
-	if (mp->flags & MEMPOOL_F_NO_CACHE_ALIGN)
+	if (mp->flags & RTE_MEMPOOL_F_NO_CACHE_ALIGN)
 		off = RTE_PTR_ALIGN_CEIL(vaddr, 8) - vaddr;
 	else
 		off = RTE_PTR_ALIGN_CEIL(vaddr, RTE_MEMPOOL_ALIGN) - vaddr;
@@ -374,7 +374,7 @@ rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
 
 	/* At least some objects in the pool can now be used for IO. */
 	if (iova != RTE_BAD_IOVA)
-		mp->flags &= ~MEMPOOL_F_NON_IO;
+		mp->flags &= ~RTE_MEMPOOL_F_NON_IO;
 
 	/* Report the mempool as ready only when fully populated. */
 	if (mp->populated_size >= mp->size)
@@ -413,7 +413,7 @@ rte_mempool_populate_virt(struct rte_mempool *mp, char *addr,
 	size_t off, phys_len;
 	int ret, cnt = 0;
 
-	if (mp->flags & MEMPOOL_F_NO_IOVA_CONTIG)
+	if (mp->flags & RTE_MEMPOOL_F_NO_IOVA_CONTIG)
 		return rte_mempool_populate_iova(mp, addr, RTE_BAD_IOVA,
 			len, free_cb, opaque);
 
@@ -470,7 +470,7 @@ rte_mempool_get_page_size(struct rte_mempool *mp, size_t *pg_sz)
 	if (ret < 0)
 		return -EINVAL;
 	alloc_in_ext_mem = (ret == 1);
-	need_iova_contig_obj = !(mp->flags & MEMPOOL_F_NO_IOVA_CONTIG);
+	need_iova_contig_obj = !(mp->flags & RTE_MEMPOOL_F_NO_IOVA_CONTIG);
 
 	if (!need_iova_contig_obj)
 		*pg_sz = 0;
@@ -547,7 +547,7 @@ rte_mempool_populate_default(struct rte_mempool *mp)
 	 * reserve space in smaller chunks.
 	 */
 
-	need_iova_contig_obj = !(mp->flags & MEMPOOL_F_NO_IOVA_CONTIG);
+	need_iova_contig_obj = !(mp->flags & RTE_MEMPOOL_F_NO_IOVA_CONTIG);
 	ret = rte_mempool_get_page_size(mp, &pg_sz);
 	if (ret < 0)
 		return ret;
@@ -798,12 +798,12 @@ rte_mempool_cache_free(struct rte_mempool_cache *cache)
 	rte_free(cache);
 }
 
-#define MEMPOOL_KNOWN_FLAGS (MEMPOOL_F_NO_SPREAD \
-	| MEMPOOL_F_NO_CACHE_ALIGN \
-	| MEMPOOL_F_SP_PUT \
-	| MEMPOOL_F_SC_GET \
-	| MEMPOOL_F_POOL_CREATED \
-	| MEMPOOL_F_NO_IOVA_CONTIG \
+#define MEMPOOL_KNOWN_FLAGS (RTE_MEMPOOL_F_NO_SPREAD \
+	| RTE_MEMPOOL_F_NO_CACHE_ALIGN \
+	| RTE_MEMPOOL_F_SP_PUT \
+	| RTE_MEMPOOL_F_SC_GET \
+	| RTE_MEMPOOL_F_POOL_CREATED \
+	| RTE_MEMPOOL_F_NO_IOVA_CONTIG \
 	)
 /* create an empty mempool */
 struct rte_mempool *
@@ -859,11 +859,11 @@ rte_mempool_create_empty(const char *name, unsigned n, unsigned elt_size,
 	 * No objects in the pool can be used for IO until it's populated
 	 * with at least some objects with valid IOVA.
 	 */
-	flags |= MEMPOOL_F_NON_IO;
+	flags |= RTE_MEMPOOL_F_NON_IO;
 
 	/* "no cache align" imply "no spread" */
-	if (flags & MEMPOOL_F_NO_CACHE_ALIGN)
-		flags |= MEMPOOL_F_NO_SPREAD;
+	if (flags & RTE_MEMPOOL_F_NO_CACHE_ALIGN)
+		flags |= RTE_MEMPOOL_F_NO_SPREAD;
 
 	/* calculate mempool object sizes. */
 	if (!rte_mempool_calc_obj_size(elt_size, flags, &objsz)) {
@@ -975,11 +975,11 @@ rte_mempool_create(const char *name, unsigned n, unsigned elt_size,
 	 * Since we have 4 combinations of the SP/SC/MP/MC examine the flags to
 	 * set the correct index into the table of ops structs.
 	 */
-	if ((flags & MEMPOOL_F_SP_PUT) && (flags & MEMPOOL_F_SC_GET))
+	if ((flags & RTE_MEMPOOL_F_SP_PUT) && (flags & RTE_MEMPOOL_F_SC_GET))
 		ret = rte_mempool_set_ops_byname(mp, "ring_sp_sc", NULL);
-	else if (flags & MEMPOOL_F_SP_PUT)
+	else if (flags & RTE_MEMPOOL_F_SP_PUT)
 		ret = rte_mempool_set_ops_byname(mp, "ring_sp_mc", NULL);
-	else if (flags & MEMPOOL_F_SC_GET)
+	else if (flags & RTE_MEMPOOL_F_SC_GET)
 		ret = rte_mempool_set_ops_byname(mp, "ring_mp_sc", NULL);
 	else
 		ret = rte_mempool_set_ops_byname(mp, "ring_mp_mc", NULL);
diff --git a/lib/mempool/rte_mempool.h b/lib/mempool/rte_mempool.h
index ee27f79d63..aca35466bc 100644
--- a/lib/mempool/rte_mempool.h
+++ b/lib/mempool/rte_mempool.h
@@ -251,19 +251,44 @@ struct rte_mempool {
 }  __rte_cache_aligned;
 
 /** Spreading among memory channels not required. */
-#define MEMPOOL_F_NO_SPREAD      0x0001
+#define RTE_MEMPOOL_F_NO_SPREAD		0x0001
+/**
+ * Backward compatibility synonym for RTE_MEMPOOL_F_NO_SPREAD.
+ * To be deprecated.
+ */
+#define MEMPOOL_F_NO_SPREAD		RTE_MEMPOOL_F_NO_SPREAD
 /** Do not align objects on cache lines. */
-#define MEMPOOL_F_NO_CACHE_ALIGN 0x0002
+#define RTE_MEMPOOL_F_NO_CACHE_ALIGN	0x0002
+/**
+ * Backward compatibility synonym for RTE_MEMPOOL_F_NO_CACHE_ALIGN.
+ * To be deprecated.
+ */
+#define MEMPOOL_F_NO_CACHE_ALIGN	RTE_MEMPOOL_F_NO_CACHE_ALIGN
 /** Default put is "single-producer". */
-#define MEMPOOL_F_SP_PUT         0x0004
+#define RTE_MEMPOOL_F_SP_PUT		0x0004
+/**
+ * Backward compatibility synonym for RTE_MEMPOOL_F_SP_PUT.
+ * To be deprecated.
+ */
+#define MEMPOOL_F_SP_PUT		RTE_MEMPOOL_F_SP_PUT
 /** Default get is "single-consumer". */
-#define MEMPOOL_F_SC_GET         0x0008
+#define RTE_MEMPOOL_F_SC_GET		0x0008
+/**
+ * Backward compatibility synonym for RTE_MEMPOOL_F_SC_GET.
+ * To be deprecated.
+ */
+#define MEMPOOL_F_SC_GET		RTE_MEMPOOL_F_SC_GET
 /** Internal: pool is created. */
-#define MEMPOOL_F_POOL_CREATED   0x0010
+#define RTE_MEMPOOL_F_POOL_CREATED	0x0010
 /** Don't need IOVA contiguous objects. */
-#define MEMPOOL_F_NO_IOVA_CONTIG 0x0020
+#define RTE_MEMPOOL_F_NO_IOVA_CONTIG	0x0020
+/**
+ * Backward compatibility synonym for RTE_MEMPOOL_F_NO_IOVA_CONTIG.
+ * To be deprecated.
+ */
+#define MEMPOOL_F_NO_IOVA_CONTIG	RTE_MEMPOOL_F_NO_IOVA_CONTIG
 /** Internal: no object from the pool can be used for device IO (DMA). */
-#define MEMPOOL_F_NON_IO         0x0040
+#define RTE_MEMPOOL_F_NON_IO		0x0040
 
 /**
  * @internal When debug is enabled, store some statistics.
@@ -426,9 +451,9 @@ typedef unsigned (*rte_mempool_get_count)(const struct rte_mempool *mp);
  * Calculate memory size required to store given number of objects.
  *
  * If mempool objects are not required to be IOVA-contiguous
- * (the flag MEMPOOL_F_NO_IOVA_CONTIG is set), min_chunk_size defines
+ * (the flag RTE_MEMPOOL_F_NO_IOVA_CONTIG is set), min_chunk_size defines
  * virtually contiguous chunk size. Otherwise, if mempool objects must
- * be IOVA-contiguous (the flag MEMPOOL_F_NO_IOVA_CONTIG is clear),
+ * be IOVA-contiguous (the flag RTE_MEMPOOL_F_NO_IOVA_CONTIG is clear),
  * min_chunk_size defines IOVA-contiguous chunk size.
  *
  * @param[in] mp
@@ -976,22 +1001,22 @@ typedef void (rte_mempool_ctor_t)(struct rte_mempool *, void *);
  *   constraint for the reserved zone.
  * @param flags
  *   The *flags* arguments is an OR of following flags:
- *   - MEMPOOL_F_NO_SPREAD: By default, objects addresses are spread
+ *   - RTE_MEMPOOL_F_NO_SPREAD: By default, objects addresses are spread
  *     between channels in RAM: the pool allocator will add padding
  *     between objects depending on the hardware configuration. See
  *     Memory alignment constraints for details. If this flag is set,
  *     the allocator will just align them to a cache line.
- *   - MEMPOOL_F_NO_CACHE_ALIGN: By default, the returned objects are
+ *   - RTE_MEMPOOL_F_NO_CACHE_ALIGN: By default, the returned objects are
  *     cache-aligned. This flag removes this constraint, and no
  *     padding will be present between objects. This flag implies
- *     MEMPOOL_F_NO_SPREAD.
- *   - MEMPOOL_F_SP_PUT: If this flag is set, the default behavior
+ *     RTE_MEMPOOL_F_NO_SPREAD.
+ *   - RTE_MEMPOOL_F_SP_PUT: If this flag is set, the default behavior
  *     when using rte_mempool_put() or rte_mempool_put_bulk() is
  *     "single-producer". Otherwise, it is "multi-producers".
- *   - MEMPOOL_F_SC_GET: If this flag is set, the default behavior
+ *   - RTE_MEMPOOL_F_SC_GET: If this flag is set, the default behavior
  *     when using rte_mempool_get() or rte_mempool_get_bulk() is
  *     "single-consumer". Otherwise, it is "multi-consumers".
- *   - MEMPOOL_F_NO_IOVA_CONTIG: If set, allocated objects won't
+ *   - RTE_MEMPOOL_F_NO_IOVA_CONTIG: If set, allocated objects won't
  *     necessarily be contiguous in IO memory.
  * @return
  *   The pointer to the new allocated mempool, on success. NULL on error
@@ -1678,7 +1703,7 @@ rte_mempool_empty(const struct rte_mempool *mp)
  *   A pointer (virtual address) to the element of the pool.
  * @return
  *   The IO address of the elt element.
- *   If the mempool was created with MEMPOOL_F_NO_IOVA_CONTIG, the
+ *   If the mempool was created with RTE_MEMPOOL_F_NO_IOVA_CONTIG, the
  *   returned value is RTE_BAD_IOVA.
  */
 static inline rte_iova_t
diff --git a/lib/mempool/rte_mempool_ops.c b/lib/mempool/rte_mempool_ops.c
index 5e22667787..2d36dee8f0 100644
--- a/lib/mempool/rte_mempool_ops.c
+++ b/lib/mempool/rte_mempool_ops.c
@@ -168,7 +168,7 @@ rte_mempool_set_ops_byname(struct rte_mempool *mp, const char *name,
 	unsigned i;
 
 	/* too late, the mempool is already populated. */
-	if (mp->flags & MEMPOOL_F_POOL_CREATED)
+	if (mp->flags & RTE_MEMPOOL_F_POOL_CREATED)
 		return -EEXIST;
 
 	for (i = 0; i < rte_mempool_ops_table.num_ops; i++) {
diff --git a/lib/pdump/rte_pdump.c b/lib/pdump/rte_pdump.c
index 382217bc15..46a87e2339 100644
--- a/lib/pdump/rte_pdump.c
+++ b/lib/pdump/rte_pdump.c
@@ -371,7 +371,8 @@ pdump_validate_ring_mp(struct rte_ring *ring, struct rte_mempool *mp)
 		rte_errno = EINVAL;
 		return -1;
 	}
-	if (mp->flags & MEMPOOL_F_SP_PUT || mp->flags & MEMPOOL_F_SC_GET) {
+	if (mp->flags & RTE_MEMPOOL_F_SP_PUT ||
+	    mp->flags & RTE_MEMPOOL_F_SC_GET) {
 		PDUMP_LOG(ERR,
 			  "mempool with SP or SC set not valid for pdump,"
 			  "must have MP and MC set\n");
diff --git a/lib/vhost/iotlb.c b/lib/vhost/iotlb.c
index e4a445e709..82bdb84526 100644
--- a/lib/vhost/iotlb.c
+++ b/lib/vhost/iotlb.c
@@ -321,8 +321,8 @@ vhost_user_iotlb_init(struct virtio_net *dev, int vq_index)
 	vq->iotlb_pool = rte_mempool_create(pool_name,
 			IOTLB_CACHE_SIZE, sizeof(struct vhost_iotlb_entry), 0,
 			0, 0, NULL, NULL, NULL, socket,
-			MEMPOOL_F_NO_CACHE_ALIGN |
-			MEMPOOL_F_SP_PUT);
+			RTE_MEMPOOL_F_NO_CACHE_ALIGN |
+			RTE_MEMPOOL_F_SP_PUT);
 	if (!vq->iotlb_pool) {
 		VHOST_LOG_CONFIG(ERR,
 				"Failed to create IOTLB cache pool (%s)\n",
-- 
2.30.2


^ permalink raw reply	[flat|nested] 53+ messages in thread
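
From the application side the renaming only changes the spelling of the
flags passed at pool creation time. A minimal sketch, with an arbitrary
pool name, object size and count chosen purely for illustration:

	#include <rte_mempool.h>

	static struct rte_mempool *
	make_example_pool(int socket_id)
	{
		/* Single-producer/single-consumer pool of 4096 objects of
		 * 256 bytes each, no per-lcore cache and no constructors;
		 * RTE_MEMPOOL_F_* simply replaces the old MEMPOOL_F_* names.
		 */
		return rte_mempool_create("example_pool", 4096, 256, 0, 0,
					  NULL, NULL, NULL, NULL, socket_id,
					  RTE_MEMPOOL_F_SP_PUT |
					  RTE_MEMPOOL_F_SC_GET);
	}

The old MEMPOOL_F_* names stay valid through the backward compatibility
synonyms added in this patch, so existing applications build unchanged.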

* [dpdk-dev] [PATCH v3 3/6] mempool: add namespace to internal but still visible API
  2021-10-19 17:40 ` [dpdk-dev] [PATCH v3 0/6] mempool: cleanup namespace Andrew Rybchenko
  2021-10-19 17:40   ` [dpdk-dev] [PATCH v3 1/6] mempool: avoid flags documentation in the next line Andrew Rybchenko
  2021-10-19 17:40   ` [dpdk-dev] [PATCH v3 2/6] mempool: add namespace prefix to flags Andrew Rybchenko
@ 2021-10-19 17:40   ` Andrew Rybchenko
  2021-10-19 17:40   ` [dpdk-dev] [PATCH v3 4/6] mempool: make header size calculation internal Andrew Rybchenko
                     ` (3 subsequent siblings)
  6 siblings, 0 replies; 53+ messages in thread
From: Andrew Rybchenko @ 2021-10-19 17:40 UTC (permalink / raw)
  To: Olivier Matz, Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K,
	Sunil Kumar Kori, Satha Rao, Harman Kalra, Anoob Joseph
  Cc: David Marchand, dev

Add RTE_ prefix to internal API defined in public header.
Use the prefix instead of double underscore.
Use uppercase for macros in the case of name conflict.

Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
---
 drivers/event/octeontx/ssovf_worker.h      |  2 +-
 drivers/net/cnxk/cn10k_rx.h                | 12 ++--
 drivers/net/cnxk/cn10k_tx.h                | 30 ++++-----
 drivers/net/cnxk/cn9k_rx.h                 | 12 ++--
 drivers/net/cnxk/cn9k_tx.h                 | 26 ++++----
 drivers/net/octeontx/octeontx_rxtx.h       |  4 +-
 drivers/net/octeontx2/otx2_ethdev_sec_tx.h |  2 +-
 drivers/net/octeontx2/otx2_rx.c            |  8 +--
 drivers/net/octeontx2/otx2_rx.h            |  4 +-
 drivers/net/octeontx2/otx2_tx.c            | 16 ++---
 drivers/net/octeontx2/otx2_tx.h            |  4 +-
 lib/mempool/rte_mempool.c                  |  8 +--
 lib/mempool/rte_mempool.h                  | 77 +++++++++++-----------
 13 files changed, 103 insertions(+), 102 deletions(-)

diff --git a/drivers/event/octeontx/ssovf_worker.h b/drivers/event/octeontx/ssovf_worker.h
index f609b296ed..ba9e1cd0fa 100644
--- a/drivers/event/octeontx/ssovf_worker.h
+++ b/drivers/event/octeontx/ssovf_worker.h
@@ -83,7 +83,7 @@ ssovf_octeontx_wqe_xtract_mseg(octtx_wqe_t *wqe,
 
 		mbuf->data_off = sizeof(octtx_pki_buflink_t);
 
-		__mempool_check_cookies(mbuf->pool, (void **)&mbuf, 1, 1);
+		RTE_MEMPOOL_CHECK_COOKIES(mbuf->pool, (void **)&mbuf, 1, 1);
 		if (nb_segs == 1)
 			mbuf->data_len = bytes_left;
 		else
diff --git a/drivers/net/cnxk/cn10k_rx.h b/drivers/net/cnxk/cn10k_rx.h
index fcc451aa36..6b40a9d0b5 100644
--- a/drivers/net/cnxk/cn10k_rx.h
+++ b/drivers/net/cnxk/cn10k_rx.h
@@ -276,7 +276,7 @@ nix_cqe_xtract_mseg(const union nix_rx_parse_u *rx, struct rte_mbuf *mbuf,
 		mbuf->next = ((struct rte_mbuf *)*iova_list) - 1;
 		mbuf = mbuf->next;
 
-		__mempool_check_cookies(mbuf->pool, (void **)&mbuf, 1, 1);
+		RTE_MEMPOOL_CHECK_COOKIES(mbuf->pool, (void **)&mbuf, 1, 1);
 
 		mbuf->data_len = sg & 0xFFFF;
 		sg = sg >> 16;
@@ -306,7 +306,7 @@ cn10k_nix_cqe_to_mbuf(const struct nix_cqe_hdr_s *cq, const uint32_t tag,
 	uint64_t ol_flags = 0;
 
 	/* Mark mempool obj as "get" as it is alloc'ed by NIX */
-	__mempool_check_cookies(mbuf->pool, (void **)&mbuf, 1, 1);
+	RTE_MEMPOOL_CHECK_COOKIES(mbuf->pool, (void **)&mbuf, 1, 1);
 
 	if (flag & NIX_RX_OFFLOAD_PTYPE_F)
 		mbuf->packet_type = nix_ptype_get(lookup_mem, w1);
@@ -905,10 +905,10 @@ cn10k_nix_recv_pkts_vector(void *args, struct rte_mbuf **mbufs, uint16_t pkts,
 		roc_prefetch_store_keep(mbuf3);
 
 		/* Mark mempool obj as "get" as it is alloc'ed by NIX */
-		__mempool_check_cookies(mbuf0->pool, (void **)&mbuf0, 1, 1);
-		__mempool_check_cookies(mbuf1->pool, (void **)&mbuf1, 1, 1);
-		__mempool_check_cookies(mbuf2->pool, (void **)&mbuf2, 1, 1);
-		__mempool_check_cookies(mbuf3->pool, (void **)&mbuf3, 1, 1);
+		RTE_MEMPOOL_CHECK_COOKIES(mbuf0->pool, (void **)&mbuf0, 1, 1);
+		RTE_MEMPOOL_CHECK_COOKIES(mbuf1->pool, (void **)&mbuf1, 1, 1);
+		RTE_MEMPOOL_CHECK_COOKIES(mbuf2->pool, (void **)&mbuf2, 1, 1);
+		RTE_MEMPOOL_CHECK_COOKIES(mbuf3->pool, (void **)&mbuf3, 1, 1);
 
 		packets += NIX_DESCS_PER_LOOP;
 
diff --git a/drivers/net/cnxk/cn10k_tx.h b/drivers/net/cnxk/cn10k_tx.h
index c6f349b352..0fd877f4ec 100644
--- a/drivers/net/cnxk/cn10k_tx.h
+++ b/drivers/net/cnxk/cn10k_tx.h
@@ -677,7 +677,7 @@ cn10k_nix_xmit_prepare(struct rte_mbuf *m, uint64_t *cmd, const uint16_t flags,
 		}
 		/* Mark mempool object as "put" since it is freed by NIX */
 		if (!send_hdr->w0.df)
-			__mempool_check_cookies(m->pool, (void **)&m, 1, 0);
+			RTE_MEMPOOL_CHECK_COOKIES(m->pool, (void **)&m, 1, 0);
 	} else {
 		sg->seg1_size = m->data_len;
 		*(rte_iova_t *)(sg + 1) = rte_mbuf_data_iova(m);
@@ -789,7 +789,7 @@ cn10k_nix_prepare_mseg(struct rte_mbuf *m, uint64_t *cmd, const uint16_t flags)
 		/* Mark mempool object as "put" since it is freed by NIX */
 #ifdef RTE_LIBRTE_MEMPOOL_DEBUG
 	if (!(sg_u & (1ULL << 55)))
-		__mempool_check_cookies(m->pool, (void **)&m, 1, 0);
+		RTE_MEMPOOL_CHECK_COOKIES(m->pool, (void **)&m, 1, 0);
 	rte_io_wmb();
 #endif
 	m = m_next;
@@ -808,7 +808,7 @@ cn10k_nix_prepare_mseg(struct rte_mbuf *m, uint64_t *cmd, const uint16_t flags)
 			 */
 #ifdef RTE_LIBRTE_MEMPOOL_DEBUG
 		if (!(sg_u & (1ULL << (i + 55))))
-			__mempool_check_cookies(m->pool, (void **)&m, 1, 0);
+			RTE_MEMPOOL_CHECK_COOKIES(m->pool, (void **)&m, 1, 0);
 #endif
 		slist++;
 		i++;
@@ -1177,7 +1177,7 @@ cn10k_nix_prepare_mseg_vec_list(struct rte_mbuf *m, uint64_t *cmd,
 		/* Mark mempool object as "put" since it is freed by NIX */
 #ifdef RTE_LIBRTE_MEMPOOL_DEBUG
 	if (!(sg_u & (1ULL << 55)))
-		__mempool_check_cookies(m->pool, (void **)&m, 1, 0);
+		RTE_MEMPOOL_CHECK_COOKIES(m->pool, (void **)&m, 1, 0);
 	rte_io_wmb();
 #endif
 
@@ -1194,7 +1194,7 @@ cn10k_nix_prepare_mseg_vec_list(struct rte_mbuf *m, uint64_t *cmd,
 			 */
 #ifdef RTE_LIBRTE_MEMPOOL_DEBUG
 		if (!(sg_u & (1ULL << (i + 55))))
-			__mempool_check_cookies(m->pool, (void **)&m, 1, 0);
+			RTE_MEMPOOL_CHECK_COOKIES(m->pool, (void **)&m, 1, 0);
 		rte_io_wmb();
 #endif
 		slist++;
@@ -1235,7 +1235,7 @@ cn10k_nix_prepare_mseg_vec(struct rte_mbuf *m, uint64_t *cmd, uint64x2_t *cmd0,
 #ifdef RTE_LIBRTE_MEMPOOL_DEBUG
 		sg.u = vgetq_lane_u64(cmd1[0], 0);
 		if (!(sg.u & (1ULL << 55)))
-			__mempool_check_cookies(m->pool, (void **)&m, 1, 0);
+			RTE_MEMPOOL_CHECK_COOKIES(m->pool, (void **)&m, 1, 0);
 		rte_io_wmb();
 #endif
 		return;
@@ -1425,7 +1425,7 @@ cn10k_nix_xmit_store(struct rte_mbuf *mbuf, uint8_t segdw, uintptr_t laddr,
 #ifdef RTE_LIBRTE_MEMPOOL_DEBUG
 		sg.u = vgetq_lane_u64(cmd1, 0);
 		if (!(sg.u & (1ULL << 55)))
-			__mempool_check_cookies(mbuf->pool, (void **)&mbuf, 1,
+			RTE_MEMPOOL_CHECK_COOKIES(mbuf->pool, (void **)&mbuf, 1,
 						0);
 		rte_io_wmb();
 #endif
@@ -2352,28 +2352,28 @@ cn10k_nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,
 			if (cnxk_nix_prefree_seg((struct rte_mbuf *)mbuf0))
 				vsetq_lane_u64(0x80000, xmask01, 0);
 			else
-				__mempool_check_cookies(
+				RTE_MEMPOOL_CHECK_COOKIES(
 					((struct rte_mbuf *)mbuf0)->pool,
 					(void **)&mbuf0, 1, 0);
 
 			if (cnxk_nix_prefree_seg((struct rte_mbuf *)mbuf1))
 				vsetq_lane_u64(0x80000, xmask01, 1);
 			else
-				__mempool_check_cookies(
+				RTE_MEMPOOL_CHECK_COOKIES(
 					((struct rte_mbuf *)mbuf1)->pool,
 					(void **)&mbuf1, 1, 0);
 
 			if (cnxk_nix_prefree_seg((struct rte_mbuf *)mbuf2))
 				vsetq_lane_u64(0x80000, xmask23, 0);
 			else
-				__mempool_check_cookies(
+				RTE_MEMPOOL_CHECK_COOKIES(
 					((struct rte_mbuf *)mbuf2)->pool,
 					(void **)&mbuf2, 1, 0);
 
 			if (cnxk_nix_prefree_seg((struct rte_mbuf *)mbuf3))
 				vsetq_lane_u64(0x80000, xmask23, 1);
 			else
-				__mempool_check_cookies(
+				RTE_MEMPOOL_CHECK_COOKIES(
 					((struct rte_mbuf *)mbuf3)->pool,
 					(void **)&mbuf3, 1, 0);
 			senddesc01_w0 = vorrq_u64(senddesc01_w0, xmask01);
@@ -2389,19 +2389,19 @@ cn10k_nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,
 			/* Mark mempool object as "put" since
 			 * it is freed by NIX
 			 */
-			__mempool_check_cookies(
+			RTE_MEMPOOL_CHECK_COOKIES(
 				((struct rte_mbuf *)mbuf0)->pool,
 				(void **)&mbuf0, 1, 0);
 
-			__mempool_check_cookies(
+			RTE_MEMPOOL_CHECK_COOKIES(
 				((struct rte_mbuf *)mbuf1)->pool,
 				(void **)&mbuf1, 1, 0);
 
-			__mempool_check_cookies(
+			RTE_MEMPOOL_CHECK_COOKIES(
 				((struct rte_mbuf *)mbuf2)->pool,
 				(void **)&mbuf2, 1, 0);
 
-			__mempool_check_cookies(
+			RTE_MEMPOOL_CHECK_COOKIES(
 				((struct rte_mbuf *)mbuf3)->pool,
 				(void **)&mbuf3, 1, 0);
 		}
diff --git a/drivers/net/cnxk/cn9k_rx.h b/drivers/net/cnxk/cn9k_rx.h
index 7ab415a194..ba3c3668f7 100644
--- a/drivers/net/cnxk/cn9k_rx.h
+++ b/drivers/net/cnxk/cn9k_rx.h
@@ -151,7 +151,7 @@ nix_cqe_xtract_mseg(const union nix_rx_parse_u *rx, struct rte_mbuf *mbuf,
 		mbuf->next = ((struct rte_mbuf *)*iova_list) - 1;
 		mbuf = mbuf->next;
 
-		__mempool_check_cookies(mbuf->pool, (void **)&mbuf, 1, 1);
+		RTE_MEMPOOL_CHECK_COOKIES(mbuf->pool, (void **)&mbuf, 1, 1);
 
 		mbuf->data_len = sg & 0xFFFF;
 		sg = sg >> 16;
@@ -288,7 +288,7 @@ cn9k_nix_cqe_to_mbuf(const struct nix_cqe_hdr_s *cq, const uint32_t tag,
 	uint64_t ol_flags = 0;
 
 	/* Mark mempool obj as "get" as it is alloc'ed by NIX */
-	__mempool_check_cookies(mbuf->pool, (void **)&mbuf, 1, 1);
+	RTE_MEMPOOL_CHECK_COOKIES(mbuf->pool, (void **)&mbuf, 1, 1);
 
 	if (flag & NIX_RX_OFFLOAD_PTYPE_F)
 		packet_type = nix_ptype_get(lookup_mem, w1);
@@ -757,10 +757,10 @@ cn9k_nix_recv_pkts_vector(void *rx_queue, struct rte_mbuf **rx_pkts,
 		roc_prefetch_store_keep(mbuf3);
 
 		/* Mark mempool obj as "get" as it is alloc'ed by NIX */
-		__mempool_check_cookies(mbuf0->pool, (void **)&mbuf0, 1, 1);
-		__mempool_check_cookies(mbuf1->pool, (void **)&mbuf1, 1, 1);
-		__mempool_check_cookies(mbuf2->pool, (void **)&mbuf2, 1, 1);
-		__mempool_check_cookies(mbuf3->pool, (void **)&mbuf3, 1, 1);
+		RTE_MEMPOOL_CHECK_COOKIES(mbuf0->pool, (void **)&mbuf0, 1, 1);
+		RTE_MEMPOOL_CHECK_COOKIES(mbuf1->pool, (void **)&mbuf1, 1, 1);
+		RTE_MEMPOOL_CHECK_COOKIES(mbuf2->pool, (void **)&mbuf2, 1, 1);
+		RTE_MEMPOOL_CHECK_COOKIES(mbuf3->pool, (void **)&mbuf3, 1, 1);
 
 		/* Advance head pointer and packets */
 		head += NIX_DESCS_PER_LOOP;
diff --git a/drivers/net/cnxk/cn9k_tx.h b/drivers/net/cnxk/cn9k_tx.h
index 44273eca90..83f4be84f1 100644
--- a/drivers/net/cnxk/cn9k_tx.h
+++ b/drivers/net/cnxk/cn9k_tx.h
@@ -285,7 +285,7 @@ cn9k_nix_xmit_prepare(struct rte_mbuf *m, uint64_t *cmd, const uint16_t flags,
 		}
 		/* Mark mempool object as "put" since it is freed by NIX */
 		if (!send_hdr->w0.df)
-			__mempool_check_cookies(m->pool, (void **)&m, 1, 0);
+			RTE_MEMPOOL_CHECK_COOKIES(m->pool, (void **)&m, 1, 0);
 	}
 }
 
@@ -397,7 +397,7 @@ cn9k_nix_prepare_mseg(struct rte_mbuf *m, uint64_t *cmd, const uint16_t flags)
 		/* Mark mempool object as "put" since it is freed by NIX */
 #ifdef RTE_LIBRTE_MEMPOOL_DEBUG
 		if (!(sg_u & (1ULL << (i + 55))))
-			__mempool_check_cookies(m->pool, (void **)&m, 1, 0);
+			RTE_MEMPOOL_CHECK_COOKIES(m->pool, (void **)&m, 1, 0);
 		rte_io_wmb();
 #endif
 		slist++;
@@ -611,7 +611,7 @@ cn9k_nix_prepare_mseg_vec_list(struct rte_mbuf *m, uint64_t *cmd,
 		/* Mark mempool object as "put" since it is freed by NIX */
 #ifdef RTE_LIBRTE_MEMPOOL_DEBUG
 	if (!(sg_u & (1ULL << 55)))
-		__mempool_check_cookies(m->pool, (void **)&m, 1, 0);
+		RTE_MEMPOOL_CHECK_COOKIES(m->pool, (void **)&m, 1, 0);
 	rte_io_wmb();
 #endif
 
@@ -628,7 +628,7 @@ cn9k_nix_prepare_mseg_vec_list(struct rte_mbuf *m, uint64_t *cmd,
 			 */
 #ifdef RTE_LIBRTE_MEMPOOL_DEBUG
 		if (!(sg_u & (1ULL << (i + 55))))
-			__mempool_check_cookies(m->pool, (void **)&m, 1, 0);
+			RTE_MEMPOOL_CHECK_COOKIES(m->pool, (void **)&m, 1, 0);
 		rte_io_wmb();
 #endif
 		slist++;
@@ -680,7 +680,7 @@ cn9k_nix_prepare_mseg_vec(struct rte_mbuf *m, uint64_t *cmd, uint64x2_t *cmd0,
 #ifdef RTE_LIBRTE_MEMPOOL_DEBUG
 		sg.u = vgetq_lane_u64(cmd1[0], 0);
 		if (!(sg.u & (1ULL << 55)))
-			__mempool_check_cookies(m->pool, (void **)&m, 1, 0);
+			RTE_MEMPOOL_CHECK_COOKIES(m->pool, (void **)&m, 1, 0);
 		rte_io_wmb();
 #endif
 		return 2 + !!(flags & NIX_TX_NEED_EXT_HDR) +
@@ -1627,28 +1627,28 @@ cn9k_nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,
 			if (cnxk_nix_prefree_seg((struct rte_mbuf *)mbuf0))
 				vsetq_lane_u64(0x80000, xmask01, 0);
 			else
-				__mempool_check_cookies(
+				RTE_MEMPOOL_CHECK_COOKIES(
 					((struct rte_mbuf *)mbuf0)->pool,
 					(void **)&mbuf0, 1, 0);
 
 			if (cnxk_nix_prefree_seg((struct rte_mbuf *)mbuf1))
 				vsetq_lane_u64(0x80000, xmask01, 1);
 			else
-				__mempool_check_cookies(
+				RTE_MEMPOOL_CHECK_COOKIES(
 					((struct rte_mbuf *)mbuf1)->pool,
 					(void **)&mbuf1, 1, 0);
 
 			if (cnxk_nix_prefree_seg((struct rte_mbuf *)mbuf2))
 				vsetq_lane_u64(0x80000, xmask23, 0);
 			else
-				__mempool_check_cookies(
+				RTE_MEMPOOL_CHECK_COOKIES(
 					((struct rte_mbuf *)mbuf2)->pool,
 					(void **)&mbuf2, 1, 0);
 
 			if (cnxk_nix_prefree_seg((struct rte_mbuf *)mbuf3))
 				vsetq_lane_u64(0x80000, xmask23, 1);
 			else
-				__mempool_check_cookies(
+				RTE_MEMPOOL_CHECK_COOKIES(
 					((struct rte_mbuf *)mbuf3)->pool,
 					(void **)&mbuf3, 1, 0);
 			senddesc01_w0 = vorrq_u64(senddesc01_w0, xmask01);
@@ -1667,19 +1667,19 @@ cn9k_nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,
 			/* Mark mempool object as "put" since
 			 * it is freed by NIX
 			 */
-			__mempool_check_cookies(
+			RTE_MEMPOOL_CHECK_COOKIES(
 				((struct rte_mbuf *)mbuf0)->pool,
 				(void **)&mbuf0, 1, 0);
 
-			__mempool_check_cookies(
+			RTE_MEMPOOL_CHECK_COOKIES(
 				((struct rte_mbuf *)mbuf1)->pool,
 				(void **)&mbuf1, 1, 0);
 
-			__mempool_check_cookies(
+			RTE_MEMPOOL_CHECK_COOKIES(
 				((struct rte_mbuf *)mbuf2)->pool,
 				(void **)&mbuf2, 1, 0);
 
-			__mempool_check_cookies(
+			RTE_MEMPOOL_CHECK_COOKIES(
 				((struct rte_mbuf *)mbuf3)->pool,
 				(void **)&mbuf3, 1, 0);
 #ifdef RTE_LIBRTE_MEMPOOL_DEBUG
diff --git a/drivers/net/octeontx/octeontx_rxtx.h b/drivers/net/octeontx/octeontx_rxtx.h
index e0723ac26a..9af797c36c 100644
--- a/drivers/net/octeontx/octeontx_rxtx.h
+++ b/drivers/net/octeontx/octeontx_rxtx.h
@@ -344,7 +344,7 @@ __octeontx_xmit_prepare(struct rte_mbuf *tx_pkt, uint64_t *cmd_buf,
 
 	/* Mark mempool object as "put" since it is freed by PKO */
 	if (!(cmd_buf[0] & (1ULL << 58)))
-		__mempool_check_cookies(m_tofree->pool, (void **)&m_tofree,
+		RTE_MEMPOOL_CHECK_COOKIES(m_tofree->pool, (void **)&m_tofree,
 					1, 0);
 	/* Get the gaura Id */
 	gaura_id =
@@ -417,7 +417,7 @@ __octeontx_xmit_mseg_prepare(struct rte_mbuf *tx_pkt, uint64_t *cmd_buf,
 		 */
 		if (!(cmd_buf[nb_desc] & (1ULL << 57))) {
 			tx_pkt->next = NULL;
-			__mempool_check_cookies(m_tofree->pool,
+			RTE_MEMPOOL_CHECK_COOKIES(m_tofree->pool,
 						(void **)&m_tofree, 1, 0);
 		}
 		nb_desc++;
diff --git a/drivers/net/octeontx2/otx2_ethdev_sec_tx.h b/drivers/net/octeontx2/otx2_ethdev_sec_tx.h
index 623a2a841e..65140b759c 100644
--- a/drivers/net/octeontx2/otx2_ethdev_sec_tx.h
+++ b/drivers/net/octeontx2/otx2_ethdev_sec_tx.h
@@ -146,7 +146,7 @@ otx2_sec_event_tx(uint64_t base, struct rte_event *ev, struct rte_mbuf *m,
 	sd->nix_iova.addr = rte_mbuf_data_iova(m);
 
 	/* Mark mempool object as "put" since it is freed by NIX */
-	__mempool_check_cookies(m->pool, (void **)&m, 1, 0);
+	RTE_MEMPOOL_CHECK_COOKIES(m->pool, (void **)&m, 1, 0);
 
 	if (!ev->sched_type)
 		otx2_ssogws_head_wait(base + SSOW_LF_GWS_TAG);
diff --git a/drivers/net/octeontx2/otx2_rx.c b/drivers/net/octeontx2/otx2_rx.c
index ffeade5952..0d85c898bf 100644
--- a/drivers/net/octeontx2/otx2_rx.c
+++ b/drivers/net/octeontx2/otx2_rx.c
@@ -296,10 +296,10 @@ nix_recv_pkts_vector(void *rx_queue, struct rte_mbuf **rx_pkts,
 		otx2_prefetch_store_keep(mbuf3);
 
 		/* Mark mempool obj as "get" as it is alloc'ed by NIX */
-		__mempool_check_cookies(mbuf0->pool, (void **)&mbuf0, 1, 1);
-		__mempool_check_cookies(mbuf1->pool, (void **)&mbuf1, 1, 1);
-		__mempool_check_cookies(mbuf2->pool, (void **)&mbuf2, 1, 1);
-		__mempool_check_cookies(mbuf3->pool, (void **)&mbuf3, 1, 1);
+		RTE_MEMPOOL_CHECK_COOKIES(mbuf0->pool, (void **)&mbuf0, 1, 1);
+		RTE_MEMPOOL_CHECK_COOKIES(mbuf1->pool, (void **)&mbuf1, 1, 1);
+		RTE_MEMPOOL_CHECK_COOKIES(mbuf2->pool, (void **)&mbuf2, 1, 1);
+		RTE_MEMPOOL_CHECK_COOKIES(mbuf3->pool, (void **)&mbuf3, 1, 1);
 
 		/* Advance head pointer and packets */
 		head += NIX_DESCS_PER_LOOP; head &= qmask;
diff --git a/drivers/net/octeontx2/otx2_rx.h b/drivers/net/octeontx2/otx2_rx.h
index ea29aec62f..3dcc563be1 100644
--- a/drivers/net/octeontx2/otx2_rx.h
+++ b/drivers/net/octeontx2/otx2_rx.h
@@ -199,7 +199,7 @@ nix_cqe_xtract_mseg(const struct nix_rx_parse_s *rx,
 		mbuf->next = ((struct rte_mbuf *)*iova_list) - 1;
 		mbuf = mbuf->next;
 
-		__mempool_check_cookies(mbuf->pool, (void **)&mbuf, 1, 1);
+		RTE_MEMPOOL_CHECK_COOKIES(mbuf->pool, (void **)&mbuf, 1, 1);
 
 		mbuf->data_len = sg & 0xFFFF;
 		sg = sg >> 16;
@@ -309,7 +309,7 @@ otx2_nix_cqe_to_mbuf(const struct nix_cqe_hdr_s *cq, const uint32_t tag,
 	uint64_t ol_flags = 0;
 
 	/* Mark mempool obj as "get" as it is alloc'ed by NIX */
-	__mempool_check_cookies(mbuf->pool, (void **)&mbuf, 1, 1);
+	RTE_MEMPOOL_CHECK_COOKIES(mbuf->pool, (void **)&mbuf, 1, 1);
 
 	if (flag & NIX_RX_OFFLOAD_PTYPE_F)
 		mbuf->packet_type = nix_ptype_get(lookup_mem, w1);
diff --git a/drivers/net/octeontx2/otx2_tx.c b/drivers/net/octeontx2/otx2_tx.c
index ff299f00b9..ad704d745b 100644
--- a/drivers/net/octeontx2/otx2_tx.c
+++ b/drivers/net/octeontx2/otx2_tx.c
@@ -202,7 +202,7 @@ nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,
 			if (otx2_nix_prefree_seg(mbuf))
 				vsetq_lane_u64(0x80000, xmask01, 0);
 			else
-				__mempool_check_cookies(mbuf->pool,
+				RTE_MEMPOOL_CHECK_COOKIES(mbuf->pool,
 							(void **)&mbuf,
 							1, 0);
 
@@ -211,7 +211,7 @@ nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,
 			if (otx2_nix_prefree_seg(mbuf))
 				vsetq_lane_u64(0x80000, xmask01, 1);
 			else
-				__mempool_check_cookies(mbuf->pool,
+				RTE_MEMPOOL_CHECK_COOKIES(mbuf->pool,
 							(void **)&mbuf,
 							1, 0);
 
@@ -220,7 +220,7 @@ nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,
 			if (otx2_nix_prefree_seg(mbuf))
 				vsetq_lane_u64(0x80000, xmask23, 0);
 			else
-				__mempool_check_cookies(mbuf->pool,
+				RTE_MEMPOOL_CHECK_COOKIES(mbuf->pool,
 							(void **)&mbuf,
 							1, 0);
 
@@ -229,7 +229,7 @@ nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,
 			if (otx2_nix_prefree_seg(mbuf))
 				vsetq_lane_u64(0x80000, xmask23, 1);
 			else
-				__mempool_check_cookies(mbuf->pool,
+				RTE_MEMPOOL_CHECK_COOKIES(mbuf->pool,
 							(void **)&mbuf,
 							1, 0);
 			senddesc01_w0 = vorrq_u64(senddesc01_w0, xmask01);
@@ -245,22 +245,22 @@ nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,
 			 */
 			mbuf = (struct rte_mbuf *)((uintptr_t)mbuf0 -
 				offsetof(struct rte_mbuf, buf_iova));
-			__mempool_check_cookies(mbuf->pool, (void **)&mbuf,
+			RTE_MEMPOOL_CHECK_COOKIES(mbuf->pool, (void **)&mbuf,
 						1, 0);
 
 			mbuf = (struct rte_mbuf *)((uintptr_t)mbuf1 -
 				offsetof(struct rte_mbuf, buf_iova));
-			__mempool_check_cookies(mbuf->pool, (void **)&mbuf,
+			RTE_MEMPOOL_CHECK_COOKIES(mbuf->pool, (void **)&mbuf,
 						1, 0);
 
 			mbuf = (struct rte_mbuf *)((uintptr_t)mbuf2 -
 				offsetof(struct rte_mbuf, buf_iova));
-			__mempool_check_cookies(mbuf->pool, (void **)&mbuf,
+			RTE_MEMPOOL_CHECK_COOKIES(mbuf->pool, (void **)&mbuf,
 						1, 0);
 
 			mbuf = (struct rte_mbuf *)((uintptr_t)mbuf3 -
 				offsetof(struct rte_mbuf, buf_iova));
-			__mempool_check_cookies(mbuf->pool, (void **)&mbuf,
+			RTE_MEMPOOL_CHECK_COOKIES(mbuf->pool, (void **)&mbuf,
 						1, 0);
 			RTE_SET_USED(mbuf);
 		}
diff --git a/drivers/net/octeontx2/otx2_tx.h b/drivers/net/octeontx2/otx2_tx.h
index 486248dff7..de1be0093c 100644
--- a/drivers/net/octeontx2/otx2_tx.h
+++ b/drivers/net/octeontx2/otx2_tx.h
@@ -372,7 +372,7 @@ otx2_nix_xmit_prepare(struct rte_mbuf *m, uint64_t *cmd, const uint16_t flags,
 		}
 		/* Mark mempool object as "put" since it is freed by NIX */
 		if (!send_hdr->w0.df)
-			__mempool_check_cookies(m->pool, (void **)&m, 1, 0);
+			RTE_MEMPOOL_CHECK_COOKIES(m->pool, (void **)&m, 1, 0);
 	}
 }
 
@@ -450,7 +450,7 @@ otx2_nix_prepare_mseg(struct rte_mbuf *m, uint64_t *cmd, const uint16_t flags)
 		/* Mark mempool object as "put" since it is freed by NIX */
 #ifdef RTE_LIBRTE_MEMPOOL_DEBUG
 		if (!(sg_u & (1ULL << (i + 55))))
-			__mempool_check_cookies(m->pool, (void **)&m, 1, 0);
+			RTE_MEMPOOL_CHECK_COOKIES(m->pool, (void **)&m, 1, 0);
 		rte_io_wmb();
 #endif
 		slist++;
diff --git a/lib/mempool/rte_mempool.c b/lib/mempool/rte_mempool.c
index 2eab38f0d4..4bb851f79b 100644
--- a/lib/mempool/rte_mempool.c
+++ b/lib/mempool/rte_mempool.c
@@ -179,7 +179,7 @@ mempool_add_elem(struct rte_mempool *mp, __rte_unused void *opaque,
 
 #ifdef RTE_LIBRTE_MEMPOOL_DEBUG
 	hdr->cookie = RTE_MEMPOOL_HEADER_COOKIE2;
-	tlr = __mempool_get_trailer(obj);
+	tlr = rte_mempool_get_trailer(obj);
 	tlr->cookie = RTE_MEMPOOL_TRAILER_COOKIE;
 #endif
 }
@@ -1091,7 +1091,7 @@ void rte_mempool_check_cookies(const struct rte_mempool *mp,
 			rte_panic("MEMPOOL: object is owned by another "
 				  "mempool\n");
 
-		hdr = __mempool_get_header(obj);
+		hdr = rte_mempool_get_header(obj);
 		cookie = hdr->cookie;
 
 		if (free == 0) {
@@ -1119,7 +1119,7 @@ void rte_mempool_check_cookies(const struct rte_mempool *mp,
 				rte_panic("MEMPOOL: bad header cookie (audit)\n");
 			}
 		}
-		tlr = __mempool_get_trailer(obj);
+		tlr = rte_mempool_get_trailer(obj);
 		cookie = tlr->cookie;
 		if (cookie != RTE_MEMPOOL_TRAILER_COOKIE) {
 			RTE_LOG(CRIT, MEMPOOL,
@@ -1171,7 +1171,7 @@ static void
 mempool_obj_audit(struct rte_mempool *mp, __rte_unused void *opaque,
 	void *obj, __rte_unused unsigned idx)
 {
-	__mempool_check_cookies(mp, &obj, 1, 2);
+	RTE_MEMPOOL_CHECK_COOKIES(mp, &obj, 1, 2);
 }
 
 static void
diff --git a/lib/mempool/rte_mempool.h b/lib/mempool/rte_mempool.h
index aca35466bc..2657417edc 100644
--- a/lib/mempool/rte_mempool.h
+++ b/lib/mempool/rte_mempool.h
@@ -301,14 +301,14 @@ struct rte_mempool {
  *   Number to add to the object-oriented statistics.
  */
 #ifdef RTE_LIBRTE_MEMPOOL_DEBUG
-#define __MEMPOOL_STAT_ADD(mp, name, n) do {                    \
+#define RTE_MEMPOOL_STAT_ADD(mp, name, n) do {                  \
 		unsigned __lcore_id = rte_lcore_id();           \
 		if (__lcore_id < RTE_MAX_LCORE) {               \
 			mp->stats[__lcore_id].name += n;        \
 		}                                               \
-	} while(0)
+	} while (0)
 #else
-#define __MEMPOOL_STAT_ADD(mp, name, n) do {} while(0)
+#define RTE_MEMPOOL_STAT_ADD(mp, name, n) do {} while (0)
 #endif
 
 /**
@@ -324,7 +324,8 @@ struct rte_mempool {
 	(sizeof(struct rte_mempool_cache) * RTE_MAX_LCORE)))
 
 /* return the header of a mempool object (internal) */
-static inline struct rte_mempool_objhdr *__mempool_get_header(void *obj)
+static inline struct rte_mempool_objhdr *
+rte_mempool_get_header(void *obj)
 {
 	return (struct rte_mempool_objhdr *)RTE_PTR_SUB(obj,
 		sizeof(struct rte_mempool_objhdr));
@@ -341,12 +342,12 @@ static inline struct rte_mempool_objhdr *__mempool_get_header(void *obj)
  */
 static inline struct rte_mempool *rte_mempool_from_obj(void *obj)
 {
-	struct rte_mempool_objhdr *hdr = __mempool_get_header(obj);
+	struct rte_mempool_objhdr *hdr = rte_mempool_get_header(obj);
 	return hdr->mp;
 }
 
 /* return the trailer of a mempool object (internal) */
-static inline struct rte_mempool_objtlr *__mempool_get_trailer(void *obj)
+static inline struct rte_mempool_objtlr *rte_mempool_get_trailer(void *obj)
 {
 	struct rte_mempool *mp = rte_mempool_from_obj(obj);
 	return (struct rte_mempool_objtlr *)RTE_PTR_ADD(obj, mp->elt_size);
@@ -370,10 +371,10 @@ void rte_mempool_check_cookies(const struct rte_mempool *mp,
 	void * const *obj_table_const, unsigned n, int free);
 
 #ifdef RTE_LIBRTE_MEMPOOL_DEBUG
-#define __mempool_check_cookies(mp, obj_table_const, n, free) \
+#define RTE_MEMPOOL_CHECK_COOKIES(mp, obj_table_const, n, free) \
 	rte_mempool_check_cookies(mp, obj_table_const, n, free)
 #else
-#define __mempool_check_cookies(mp, obj_table_const, n, free) do {} while(0)
+#define RTE_MEMPOOL_CHECK_COOKIES(mp, obj_table_const, n, free) do {} while (0)
 #endif /* RTE_LIBRTE_MEMPOOL_DEBUG */
 
 /**
@@ -395,13 +396,13 @@ void rte_mempool_contig_blocks_check_cookies(const struct rte_mempool *mp,
 	void * const *first_obj_table_const, unsigned int n, int free);
 
 #ifdef RTE_LIBRTE_MEMPOOL_DEBUG
-#define __mempool_contig_blocks_check_cookies(mp, first_obj_table_const, n, \
-					      free) \
+#define RTE_MEMPOOL_CONTIG_BLOCKS_CHECK_COOKIES(mp, first_obj_table_const, n, \
+						free) \
 	rte_mempool_contig_blocks_check_cookies(mp, first_obj_table_const, n, \
 						free)
 #else
-#define __mempool_contig_blocks_check_cookies(mp, first_obj_table_const, n, \
-					      free) \
+#define RTE_MEMPOOL_CONTIG_BLOCKS_CHECK_COOKIES(mp, first_obj_table_const, n, \
+						free) \
 	do {} while (0)
 #endif /* RTE_LIBRTE_MEMPOOL_DEBUG */
 
@@ -736,8 +737,8 @@ rte_mempool_ops_dequeue_bulk(struct rte_mempool *mp,
 	ops = rte_mempool_get_ops(mp->ops_index);
 	ret = ops->dequeue(mp, obj_table, n);
 	if (ret == 0) {
-		__MEMPOOL_STAT_ADD(mp, get_common_pool_bulk, 1);
-		__MEMPOOL_STAT_ADD(mp, get_common_pool_objs, n);
+		RTE_MEMPOOL_STAT_ADD(mp, get_common_pool_bulk, 1);
+		RTE_MEMPOOL_STAT_ADD(mp, get_common_pool_objs, n);
 	}
 	return ret;
 }
@@ -786,8 +787,8 @@ rte_mempool_ops_enqueue_bulk(struct rte_mempool *mp, void * const *obj_table,
 {
 	struct rte_mempool_ops *ops;
 
-	__MEMPOOL_STAT_ADD(mp, put_common_pool_bulk, 1);
-	__MEMPOOL_STAT_ADD(mp, put_common_pool_objs, n);
+	RTE_MEMPOOL_STAT_ADD(mp, put_common_pool_bulk, 1);
+	RTE_MEMPOOL_STAT_ADD(mp, put_common_pool_objs, n);
 	rte_mempool_trace_ops_enqueue_bulk(mp, obj_table, n);
 	ops = rte_mempool_get_ops(mp->ops_index);
 	return ops->enqueue(mp, obj_table, n);
@@ -1312,14 +1313,14 @@ rte_mempool_cache_flush(struct rte_mempool_cache *cache,
  *   A pointer to a mempool cache structure. May be NULL if not needed.
  */
 static __rte_always_inline void
-__mempool_generic_put(struct rte_mempool *mp, void * const *obj_table,
-		      unsigned int n, struct rte_mempool_cache *cache)
+rte_mempool_do_generic_put(struct rte_mempool *mp, void * const *obj_table,
+			   unsigned int n, struct rte_mempool_cache *cache)
 {
 	void **cache_objs;
 
 	/* increment stat now, adding in mempool always success */
-	__MEMPOOL_STAT_ADD(mp, put_bulk, 1);
-	__MEMPOOL_STAT_ADD(mp, put_objs, n);
+	RTE_MEMPOOL_STAT_ADD(mp, put_bulk, 1);
+	RTE_MEMPOOL_STAT_ADD(mp, put_objs, n);
 
 	/* No cache provided or if put would overflow mem allocated for cache */
 	if (unlikely(cache == NULL || n > RTE_MEMPOOL_CACHE_MAX_SIZE))
@@ -1376,8 +1377,8 @@ rte_mempool_generic_put(struct rte_mempool *mp, void * const *obj_table,
 			unsigned int n, struct rte_mempool_cache *cache)
 {
 	rte_mempool_trace_generic_put(mp, obj_table, n, cache);
-	__mempool_check_cookies(mp, obj_table, n, 0);
-	__mempool_generic_put(mp, obj_table, n, cache);
+	RTE_MEMPOOL_CHECK_COOKIES(mp, obj_table, n, 0);
+	rte_mempool_do_generic_put(mp, obj_table, n, cache);
 }
 
 /**
@@ -1437,8 +1438,8 @@ rte_mempool_put(struct rte_mempool *mp, void *obj)
  *   - <0: Error; code of ring dequeue function.
  */
 static __rte_always_inline int
-__mempool_generic_get(struct rte_mempool *mp, void **obj_table,
-		      unsigned int n, struct rte_mempool_cache *cache)
+rte_mempool_do_generic_get(struct rte_mempool *mp, void **obj_table,
+			   unsigned int n, struct rte_mempool_cache *cache)
 {
 	int ret;
 	uint32_t index, len;
@@ -1477,8 +1478,8 @@ __mempool_generic_get(struct rte_mempool *mp, void **obj_table,
 
 	cache->len -= n;
 
-	__MEMPOOL_STAT_ADD(mp, get_success_bulk, 1);
-	__MEMPOOL_STAT_ADD(mp, get_success_objs, n);
+	RTE_MEMPOOL_STAT_ADD(mp, get_success_bulk, 1);
+	RTE_MEMPOOL_STAT_ADD(mp, get_success_objs, n);
 
 	return 0;
 
@@ -1488,11 +1489,11 @@ __mempool_generic_get(struct rte_mempool *mp, void **obj_table,
 	ret = rte_mempool_ops_dequeue_bulk(mp, obj_table, n);
 
 	if (ret < 0) {
-		__MEMPOOL_STAT_ADD(mp, get_fail_bulk, 1);
-		__MEMPOOL_STAT_ADD(mp, get_fail_objs, n);
+		RTE_MEMPOOL_STAT_ADD(mp, get_fail_bulk, 1);
+		RTE_MEMPOOL_STAT_ADD(mp, get_fail_objs, n);
 	} else {
-		__MEMPOOL_STAT_ADD(mp, get_success_bulk, 1);
-		__MEMPOOL_STAT_ADD(mp, get_success_objs, n);
+		RTE_MEMPOOL_STAT_ADD(mp, get_success_bulk, 1);
+		RTE_MEMPOOL_STAT_ADD(mp, get_success_objs, n);
 	}
 
 	return ret;
@@ -1523,9 +1524,9 @@ rte_mempool_generic_get(struct rte_mempool *mp, void **obj_table,
 			unsigned int n, struct rte_mempool_cache *cache)
 {
 	int ret;
-	ret = __mempool_generic_get(mp, obj_table, n, cache);
+	ret = rte_mempool_do_generic_get(mp, obj_table, n, cache);
 	if (ret == 0)
-		__mempool_check_cookies(mp, obj_table, n, 1);
+		RTE_MEMPOOL_CHECK_COOKIES(mp, obj_table, n, 1);
 	rte_mempool_trace_generic_get(mp, obj_table, n, cache);
 	return ret;
 }
@@ -1616,13 +1617,13 @@ rte_mempool_get_contig_blocks(struct rte_mempool *mp,
 
 	ret = rte_mempool_ops_dequeue_contig_blocks(mp, first_obj_table, n);
 	if (ret == 0) {
-		__MEMPOOL_STAT_ADD(mp, get_success_bulk, 1);
-		__MEMPOOL_STAT_ADD(mp, get_success_blks, n);
-		__mempool_contig_blocks_check_cookies(mp, first_obj_table, n,
-						      1);
+		RTE_MEMPOOL_STAT_ADD(mp, get_success_bulk, 1);
+		RTE_MEMPOOL_STAT_ADD(mp, get_success_blks, n);
+		RTE_MEMPOOL_CONTIG_BLOCKS_CHECK_COOKIES(mp, first_obj_table, n,
+							1);
 	} else {
-		__MEMPOOL_STAT_ADD(mp, get_fail_bulk, 1);
-		__MEMPOOL_STAT_ADD(mp, get_fail_blks, n);
+		RTE_MEMPOOL_STAT_ADD(mp, get_fail_bulk, 1);
+		RTE_MEMPOOL_STAT_ADD(mp, get_fail_blks, n);
 	}
 
 	rte_mempool_trace_get_contig_blocks(mp, first_obj_table, n);
-- 
2.30.2


^ permalink raw reply	[flat|nested] 53+ messages in thread

* [dpdk-dev] [PATCH v3 4/6] mempool: make header size calculation internal
  2021-10-19 17:40 ` [dpdk-dev] [PATCH v3 0/6] mempool: cleanup namespace Andrew Rybchenko
                     ` (2 preceding siblings ...)
  2021-10-19 17:40   ` [dpdk-dev] [PATCH v3 3/6] mempool: add namespace to internal but still visible API Andrew Rybchenko
@ 2021-10-19 17:40   ` Andrew Rybchenko
  2021-10-20  6:55     ` Olivier Matz
  2021-10-19 17:40   ` [dpdk-dev] [PATCH v3 5/6] mempool: add namespace to driver register macro Andrew Rybchenko
                     ` (2 subsequent siblings)
  6 siblings, 1 reply; 53+ messages in thread
From: Andrew Rybchenko @ 2021-10-19 17:40 UTC (permalink / raw)
  To: Olivier Matz, Ray Kinsella; +Cc: David Marchand, dev

Add RTE_ prefix to helper macro to calculate mempool header size and
make it internal. Old macro is still available, but deprecated.
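
For context, a minimal sketch (not part of the patch itself) of what the
helper computes: the pool private data area starts right after the header
area that RTE_MEMPOOL_HEADER_SIZE() accounts for (the mempool structure plus
the per-lcore cache array), which is why rte_mempool_get_priv() reduces to a
plain pointer offset:

    /* sketch only: private data follows the mempool header area */
    void *priv = (char *)mp + RTE_MEMPOOL_HEADER_SIZE(mp, mp->cache_size);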

Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
---
 app/test/test_mempool.c                |  2 +-
 doc/guides/rel_notes/deprecation.rst   |  4 ++++
 doc/guides/rel_notes/release_21_11.rst |  3 +++
 lib/mempool/rte_mempool.c              |  6 +++---
 lib/mempool/rte_mempool.h              | 10 +++++++---
 5 files changed, 18 insertions(+), 7 deletions(-)

diff --git a/app/test/test_mempool.c b/app/test/test_mempool.c
index 4ec236d239..0962bf06cf 100644
--- a/app/test/test_mempool.c
+++ b/app/test/test_mempool.c
@@ -113,7 +113,7 @@ test_mempool_basic(struct rte_mempool *mp, int use_external_cache)
 
 	printf("get private data\n");
 	if (rte_mempool_get_priv(mp) != (char *)mp +
-			MEMPOOL_HEADER_SIZE(mp, mp->cache_size))
+			RTE_MEMPOOL_HEADER_SIZE(mp, mp->cache_size))
 		GOTO_ERR(ret, out);
 
 #ifndef RTE_EXEC_ENV_FREEBSD /* rte_mem_virt2iova() not supported on bsd */
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 45239ca56e..bc3aca8ef1 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -39,6 +39,10 @@ Deprecation Notices
   ``__atomic_thread_fence`` must be used for patches that need to be merged in
   20.08 onwards. This change will not introduce any performance degradation.
 
+* mempool: Helper macro ``MEMPOOL_HEADER_SIZE()`` is deprecated and will
+  be removed in DPDK 22.11. The replacement macro
+  ``RTE_MEMPOOL_HEADER_SIZE()`` is internal only.
+
 * mbuf: The mbuf offload flags ``PKT_*`` will be renamed as ``RTE_MBUF_F_*``.
   A compatibility layer will be kept until DPDK 22.11, except for the flags
   that are already deprecated (``PKT_RX_L4_CKSUM_BAD``, ``PKT_RX_IP_CKSUM_BAD``,
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 7db4cb38c0..5f780bbf9f 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -233,6 +233,9 @@ API Changes
 * mempool: The mempool flags ``MEMPOOL_F_*`` will be deprecated in the future.
   Newly added flags with ``RTE_MEMPOOL_F_`` prefix should be used instead.
 
+* mempool: Helper macro ``MEMPOOL_HEADER_SIZE()`` is deprecated.
+  The replacement macro ``RTE_MEMPOOL_HEADER_SIZE()`` is internal only.
+
 * net: Renamed ``s_addr`` and ``d_addr`` fields of ``rte_ether_hdr`` structure
   to ``src_addr`` and ``dst_addr``, respectively.
 
diff --git a/lib/mempool/rte_mempool.c b/lib/mempool/rte_mempool.c
index 4bb851f79b..c988ebd87a 100644
--- a/lib/mempool/rte_mempool.c
+++ b/lib/mempool/rte_mempool.c
@@ -888,7 +888,7 @@ rte_mempool_create_empty(const char *name, unsigned n, unsigned elt_size,
 		goto exit_unlock;
 	}
 
-	mempool_size = MEMPOOL_HEADER_SIZE(mp, cache_size);
+	mempool_size = RTE_MEMPOOL_HEADER_SIZE(mp, cache_size);
 	mempool_size += private_data_size;
 	mempool_size = RTE_ALIGN_CEIL(mempool_size, RTE_MEMPOOL_ALIGN);
 
@@ -904,7 +904,7 @@ rte_mempool_create_empty(const char *name, unsigned n, unsigned elt_size,
 
 	/* init the mempool structure */
 	mp = mz->addr;
-	memset(mp, 0, MEMPOOL_HEADER_SIZE(mp, cache_size));
+	memset(mp, 0, RTE_MEMPOOL_HEADER_SIZE(mp, cache_size));
 	ret = strlcpy(mp->name, name, sizeof(mp->name));
 	if (ret < 0 || ret >= (int)sizeof(mp->name)) {
 		rte_errno = ENAMETOOLONG;
@@ -928,7 +928,7 @@ rte_mempool_create_empty(const char *name, unsigned n, unsigned elt_size,
 	 * The local_cache points to just past the elt_pa[] array.
 	 */
 	mp->local_cache = (struct rte_mempool_cache *)
-		RTE_PTR_ADD(mp, MEMPOOL_HEADER_SIZE(mp, 0));
+		RTE_PTR_ADD(mp, RTE_MEMPOOL_HEADER_SIZE(mp, 0));
 
 	/* Init all default caches. */
 	if (cache_size != 0) {
diff --git a/lib/mempool/rte_mempool.h b/lib/mempool/rte_mempool.h
index 2657417edc..81646ee35d 100644
--- a/lib/mempool/rte_mempool.h
+++ b/lib/mempool/rte_mempool.h
@@ -312,17 +312,21 @@ struct rte_mempool {
 #endif
 
 /**
- * Calculate the size of the mempool header.
+ * @internal Calculate the size of the mempool header.
  *
  * @param mp
  *   Pointer to the memory pool.
  * @param cs
  *   Size of the per-lcore cache.
  */
-#define MEMPOOL_HEADER_SIZE(mp, cs) \
+#define RTE_MEMPOOL_HEADER_SIZE(mp, cs) \
 	(sizeof(*(mp)) + (((cs) == 0) ? 0 : \
 	(sizeof(struct rte_mempool_cache) * RTE_MAX_LCORE)))
 
+/** Deprecated. Use RTE_MEMPOOL_HEADER_SIZE() for internal purposes only. */
+#define MEMPOOL_HEADER_SIZE(mp, cs) \
+	RTE_DEPRECATED(MEMPOOL_HEADER_SIZE) RTE_MEMPOOL_HEADER_SIZE(mp, cs)
+
 /* return the header of a mempool object (internal) */
 static inline struct rte_mempool_objhdr *
 rte_mempool_get_header(void *obj)
@@ -1739,7 +1743,7 @@ void rte_mempool_audit(struct rte_mempool *mp);
 static inline void *rte_mempool_get_priv(struct rte_mempool *mp)
 {
 	return (char *)mp +
-		MEMPOOL_HEADER_SIZE(mp, mp->cache_size);
+		RTE_MEMPOOL_HEADER_SIZE(mp, mp->cache_size);
 }
 
 /**
-- 
2.30.2


^ permalink raw reply	[flat|nested] 53+ messages in thread

* [dpdk-dev] [PATCH v3 5/6] mempool: add namespace to driver register macro
  2021-10-19 17:40 ` [dpdk-dev] [PATCH v3 0/6] mempool: cleanup namespace Andrew Rybchenko
                     ` (3 preceding siblings ...)
  2021-10-19 17:40   ` [dpdk-dev] [PATCH v3 4/6] mempool: make header size calculation internal Andrew Rybchenko
@ 2021-10-19 17:40   ` Andrew Rybchenko
  2021-10-20  6:57     ` Olivier Matz
  2021-10-19 17:40   ` [dpdk-dev] [PATCH v3 6/6] mempool: deprecate unused defines Andrew Rybchenko
  2021-10-19 20:09   ` [dpdk-dev] [PATCH v3 0/6] mempool: cleanup namespace David Marchand
  6 siblings, 1 reply; 53+ messages in thread
From: Andrew Rybchenko @ 2021-10-19 17:40 UTC (permalink / raw)
  To: Olivier Matz, Ray Kinsella, Artem V. Andreev, Ashwin Sekhar T K,
	Pavan Nikhilesh, Hemant Agrawal, Sachin Saxena, Harman Kalra,
	Jerin Jacob, Nithin Dabilpuram
  Cc: David Marchand, dev

Add RTE_ prefix to macro used to register mempool driver.
The old one is still available but deprecated.
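
As an illustration (the "my_*" names below are hypothetical, not part of this
patch), an out-of-tree driver would now register its ops table as follows;
the in-tree drivers below are converted the same way:

    /* Hypothetical skeleton only; a real driver provides working callbacks. */
    #include <rte_mempool.h>

    static int my_alloc(struct rte_mempool *mp) { (void)mp; return 0; }
    static void my_free(struct rte_mempool *mp) { (void)mp; }
    static int my_enqueue(struct rte_mempool *mp, void * const *obj_table,
                          unsigned int n)
    { (void)mp; (void)obj_table; (void)n; return 0; }
    static int my_dequeue(struct rte_mempool *mp, void **obj_table,
                          unsigned int n)
    { (void)mp; (void)obj_table; (void)n; return 0; }
    static unsigned int my_get_count(const struct rte_mempool *mp)
    { (void)mp; return 0; }

    static const struct rte_mempool_ops my_pool_ops = {
            .name = "my_pool",
            .alloc = my_alloc,
            .free = my_free,
            .enqueue = my_enqueue,
            .dequeue = my_dequeue,
            .get_count = my_get_count,
    };

    /* The old MEMPOOL_REGISTER_OPS() spelling still works but warns. */
    RTE_MEMPOOL_REGISTER_OPS(my_pool_ops);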

Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
---
 doc/guides/prog_guide/mempool_lib.rst           |  2 +-
 doc/guides/rel_notes/deprecation.rst            |  4 ++++
 doc/guides/rel_notes/release_21_11.rst          |  3 +++
 drivers/mempool/bucket/rte_mempool_bucket.c     |  2 +-
 drivers/mempool/cnxk/cn10k_mempool_ops.c        |  2 +-
 drivers/mempool/cnxk/cn9k_mempool_ops.c         |  2 +-
 drivers/mempool/dpaa/dpaa_mempool.c             |  2 +-
 drivers/mempool/dpaa2/dpaa2_hw_mempool.c        |  2 +-
 drivers/mempool/octeontx/rte_mempool_octeontx.c |  2 +-
 drivers/mempool/octeontx2/otx2_mempool_ops.c    |  2 +-
 drivers/mempool/ring/rte_mempool_ring.c         | 12 ++++++------
 drivers/mempool/stack/rte_mempool_stack.c       |  4 ++--
 lib/mempool/rte_mempool.h                       |  6 +++++-
 13 files changed, 28 insertions(+), 17 deletions(-)

diff --git a/doc/guides/prog_guide/mempool_lib.rst b/doc/guides/prog_guide/mempool_lib.rst
index 890535eb23..55838317b9 100644
--- a/doc/guides/prog_guide/mempool_lib.rst
+++ b/doc/guides/prog_guide/mempool_lib.rst
@@ -115,7 +115,7 @@ management systems and software based memory allocators, to be used with DPDK.
 There are two aspects to a mempool handler.
 
 * Adding the code for your new mempool operations (ops). This is achieved by
-  adding a new mempool ops code, and using the ``MEMPOOL_REGISTER_OPS`` macro.
+  adding a new mempool ops code, and using the ``RTE_MEMPOOL_REGISTER_OPS`` macro.
 
 * Using the new API to call ``rte_mempool_create_empty()`` and
   ``rte_mempool_set_ops_byname()`` to create a new mempool and specifying which
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index bc3aca8ef1..0095d48084 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -43,6 +43,10 @@ Deprecation Notices
   be removed in DPDK 22.11. The replacement macro
   ``RTE_MEMPOOL_HEADER_SIZE()`` is internal only.
 
+* mempool: Macro to register mempool driver ``MEMPOOL_REGISTER_OPS()`` is
+  deprecated and will be removed in DPDK 22.11. Use replacement macro
+  ``RTE_MEMPOOL_REGISTER_OPS()``.
+
 * mbuf: The mbuf offload flags ``PKT_*`` will be renamed as ``RTE_MBUF_F_*``.
   A compatibility layer will be kept until DPDK 22.11, except for the flags
   that are already deprecated (``PKT_RX_L4_CKSUM_BAD``, ``PKT_RX_IP_CKSUM_BAD``,
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 5f780bbf9f..a8dd8031c0 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -236,6 +236,9 @@ API Changes
 * mempool: Helper macro ``MEMPOOL_HEADER_SIZE()`` is deprecated.
   The replacement macro ``RTE_MEMPOOL_HEADER_SIZE()`` is internal only.
 
+* mempool: Macro to register mempool driver ``MEMPOOL_REGISTER_OPS()`` is
+  deprecated.  Use replacement ``RTE_MEMPOOL_REGISTER_OPS()``.
+
 * net: Renamed ``s_addr`` and ``d_addr`` fields of ``rte_ether_hdr`` structure
   to ``src_addr`` and ``dst_addr``, respectively.
 
diff --git a/drivers/mempool/bucket/rte_mempool_bucket.c b/drivers/mempool/bucket/rte_mempool_bucket.c
index 8ff9e53007..c0b480bfc7 100644
--- a/drivers/mempool/bucket/rte_mempool_bucket.c
+++ b/drivers/mempool/bucket/rte_mempool_bucket.c
@@ -663,4 +663,4 @@ static const struct rte_mempool_ops ops_bucket = {
 };
 
 
-MEMPOOL_REGISTER_OPS(ops_bucket);
+RTE_MEMPOOL_REGISTER_OPS(ops_bucket);
diff --git a/drivers/mempool/cnxk/cn10k_mempool_ops.c b/drivers/mempool/cnxk/cn10k_mempool_ops.c
index 95458b34b7..4c669b878f 100644
--- a/drivers/mempool/cnxk/cn10k_mempool_ops.c
+++ b/drivers/mempool/cnxk/cn10k_mempool_ops.c
@@ -316,4 +316,4 @@ static struct rte_mempool_ops cn10k_mempool_ops = {
 	.populate = cnxk_mempool_populate,
 };
 
-MEMPOOL_REGISTER_OPS(cn10k_mempool_ops);
+RTE_MEMPOOL_REGISTER_OPS(cn10k_mempool_ops);
diff --git a/drivers/mempool/cnxk/cn9k_mempool_ops.c b/drivers/mempool/cnxk/cn9k_mempool_ops.c
index c0cdba640b..b7967f8085 100644
--- a/drivers/mempool/cnxk/cn9k_mempool_ops.c
+++ b/drivers/mempool/cnxk/cn9k_mempool_ops.c
@@ -86,4 +86,4 @@ static struct rte_mempool_ops cn9k_mempool_ops = {
 	.populate = cnxk_mempool_populate,
 };
 
-MEMPOOL_REGISTER_OPS(cn9k_mempool_ops);
+RTE_MEMPOOL_REGISTER_OPS(cn9k_mempool_ops);
diff --git a/drivers/mempool/dpaa/dpaa_mempool.c b/drivers/mempool/dpaa/dpaa_mempool.c
index f02056982c..f17aff9655 100644
--- a/drivers/mempool/dpaa/dpaa_mempool.c
+++ b/drivers/mempool/dpaa/dpaa_mempool.c
@@ -358,4 +358,4 @@ static const struct rte_mempool_ops dpaa_mpool_ops = {
 	.populate = dpaa_populate,
 };
 
-MEMPOOL_REGISTER_OPS(dpaa_mpool_ops);
+RTE_MEMPOOL_REGISTER_OPS(dpaa_mpool_ops);
diff --git a/drivers/mempool/dpaa2/dpaa2_hw_mempool.c b/drivers/mempool/dpaa2/dpaa2_hw_mempool.c
index 771e0a0e28..39c6252a63 100644
--- a/drivers/mempool/dpaa2/dpaa2_hw_mempool.c
+++ b/drivers/mempool/dpaa2/dpaa2_hw_mempool.c
@@ -455,6 +455,6 @@ static const struct rte_mempool_ops dpaa2_mpool_ops = {
 	.populate = dpaa2_populate,
 };
 
-MEMPOOL_REGISTER_OPS(dpaa2_mpool_ops);
+RTE_MEMPOOL_REGISTER_OPS(dpaa2_mpool_ops);
 
 RTE_LOG_REGISTER_DEFAULT(dpaa2_logtype_mempool, NOTICE);
diff --git a/drivers/mempool/octeontx/rte_mempool_octeontx.c b/drivers/mempool/octeontx/rte_mempool_octeontx.c
index bd00700202..f4de1c8412 100644
--- a/drivers/mempool/octeontx/rte_mempool_octeontx.c
+++ b/drivers/mempool/octeontx/rte_mempool_octeontx.c
@@ -202,4 +202,4 @@ static struct rte_mempool_ops octeontx_fpavf_ops = {
 	.populate = octeontx_fpavf_populate,
 };
 
-MEMPOOL_REGISTER_OPS(octeontx_fpavf_ops);
+RTE_MEMPOOL_REGISTER_OPS(octeontx_fpavf_ops);
diff --git a/drivers/mempool/octeontx2/otx2_mempool_ops.c b/drivers/mempool/octeontx2/otx2_mempool_ops.c
index d827fd8c7b..332e4f1cb2 100644
--- a/drivers/mempool/octeontx2/otx2_mempool_ops.c
+++ b/drivers/mempool/octeontx2/otx2_mempool_ops.c
@@ -898,4 +898,4 @@ static struct rte_mempool_ops otx2_npa_ops = {
 #endif
 };
 
-MEMPOOL_REGISTER_OPS(otx2_npa_ops);
+RTE_MEMPOOL_REGISTER_OPS(otx2_npa_ops);
diff --git a/drivers/mempool/ring/rte_mempool_ring.c b/drivers/mempool/ring/rte_mempool_ring.c
index 4b785971c4..c6aa935eea 100644
--- a/drivers/mempool/ring/rte_mempool_ring.c
+++ b/drivers/mempool/ring/rte_mempool_ring.c
@@ -198,9 +198,9 @@ static const struct rte_mempool_ops ops_mt_hts = {
 	.get_count = common_ring_get_count,
 };
 
-MEMPOOL_REGISTER_OPS(ops_mp_mc);
-MEMPOOL_REGISTER_OPS(ops_sp_sc);
-MEMPOOL_REGISTER_OPS(ops_mp_sc);
-MEMPOOL_REGISTER_OPS(ops_sp_mc);
-MEMPOOL_REGISTER_OPS(ops_mt_rts);
-MEMPOOL_REGISTER_OPS(ops_mt_hts);
+RTE_MEMPOOL_REGISTER_OPS(ops_mp_mc);
+RTE_MEMPOOL_REGISTER_OPS(ops_sp_sc);
+RTE_MEMPOOL_REGISTER_OPS(ops_mp_sc);
+RTE_MEMPOOL_REGISTER_OPS(ops_sp_mc);
+RTE_MEMPOOL_REGISTER_OPS(ops_mt_rts);
+RTE_MEMPOOL_REGISTER_OPS(ops_mt_hts);
diff --git a/drivers/mempool/stack/rte_mempool_stack.c b/drivers/mempool/stack/rte_mempool_stack.c
index 7e85c8d6b6..1476905227 100644
--- a/drivers/mempool/stack/rte_mempool_stack.c
+++ b/drivers/mempool/stack/rte_mempool_stack.c
@@ -93,5 +93,5 @@ static struct rte_mempool_ops ops_lf_stack = {
 	.get_count = stack_get_count
 };
 
-MEMPOOL_REGISTER_OPS(ops_stack);
-MEMPOOL_REGISTER_OPS(ops_lf_stack);
+RTE_MEMPOOL_REGISTER_OPS(ops_stack);
+RTE_MEMPOOL_REGISTER_OPS(ops_lf_stack);
diff --git a/lib/mempool/rte_mempool.h b/lib/mempool/rte_mempool.h
index 81646ee35d..657233ce45 100644
--- a/lib/mempool/rte_mempool.h
+++ b/lib/mempool/rte_mempool.h
@@ -922,12 +922,16 @@ int rte_mempool_register_ops(const struct rte_mempool_ops *ops);
  * Note that the rte_mempool_register_ops fails silently here when
  * more than RTE_MEMPOOL_MAX_OPS_IDX is registered.
  */
-#define MEMPOOL_REGISTER_OPS(ops)				\
+#define RTE_MEMPOOL_REGISTER_OPS(ops)				\
 	RTE_INIT(mp_hdlr_init_##ops)				\
 	{							\
 		rte_mempool_register_ops(&ops);			\
 	}
 
+/** Deprecated. Use RTE_MEMPOOL_REGISTER_OPS() instead. */
+#define MEMPOOL_REGISTER_OPS(ops) \
+	RTE_DEPRECATED(MEMPOOL_REGISTER_OPS) RTE_MEMPOOL_REGISTER_OPS(ops)
+
 /**
  * An object callback function for mempool.
  *
-- 
2.30.2


^ permalink raw reply	[flat|nested] 53+ messages in thread

* [dpdk-dev] [PATCH v3 6/6] mempool: deprecate unused defines
  2021-10-19 17:40 ` [dpdk-dev] [PATCH v3 0/6] mempool: cleanup namespace Andrew Rybchenko
                     ` (4 preceding siblings ...)
  2021-10-19 17:40   ` [dpdk-dev] [PATCH v3 5/6] mempool: add namespace to driver register macro Andrew Rybchenko
@ 2021-10-19 17:40   ` Andrew Rybchenko
  2021-10-20  7:08     ` Olivier Matz
  2021-10-19 20:09   ` [dpdk-dev] [PATCH v3 0/6] mempool: cleanup namespace David Marchand
  6 siblings, 1 reply; 53+ messages in thread
From: Andrew Rybchenko @ 2021-10-19 17:40 UTC (permalink / raw)
  To: Olivier Matz, Ray Kinsella; +Cc: David Marchand, dev

MEMPOOL_PG_NUM_DEFAULT and MEMPOOL_PG_SHIFT_MAX are not used.

Fixes: fd943c764a63 ("mempool: deprecate xmem functions")
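
For illustration only: any leftover user of these names still compiles, but
RTE_DEPRECATED now emits a build-time warning while the expanded value is
unchanged, e.g.:

    /* sketch: triggers a deprecation warning, same value as before */
    unsigned int shift_max = MEMPOOL_PG_SHIFT_MAX;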

Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
---
 doc/guides/contributing/documentation.rst | 4 ++--
 doc/guides/rel_notes/deprecation.rst      | 3 +++
 doc/guides/rel_notes/release_21_11.rst    | 3 +++
 lib/mempool/rte_mempool.h                 | 7 ++++---
 4 files changed, 12 insertions(+), 5 deletions(-)

diff --git a/doc/guides/contributing/documentation.rst b/doc/guides/contributing/documentation.rst
index 8cbd4a0f6f..7fcbb7fc43 100644
--- a/doc/guides/contributing/documentation.rst
+++ b/doc/guides/contributing/documentation.rst
@@ -705,7 +705,7 @@ The following are some guidelines for use of Doxygen in the DPDK API documentati
      /**< Virtual address of the first mempool object. */
      uintptr_t   elt_va_end;
      /**< Virtual address of the <size + 1> mempool object. */
-     phys_addr_t elt_pa[MEMPOOL_PG_NUM_DEFAULT];
+     phys_addr_t elt_pa[1];
      /**< Array of physical page addresses for the mempool buffer. */
 
   This doesn't have an effect on the rendered documentation but it is confusing for the developer reading the code.
@@ -724,7 +724,7 @@ The following are some guidelines for use of Doxygen in the DPDK API documentati
      /** Virtual address of the <size + 1> mempool object. */
      uintptr_t   elt_va_end;
      /** Array of physical page addresses for the mempool buffer. */
-     phys_addr_t elt_pa[MEMPOOL_PG_NUM_DEFAULT];
+     phys_addr_t elt_pa[1];
 
 * Read the rendered section of the documentation that you have added for correctness, clarity and consistency
   with the surrounding text.
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 0095d48084..c59dd5ca98 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -47,6 +47,9 @@ Deprecation Notices
   deprecated and will be removed in DPDK 22.11. Use replacement macro
   ``RTE_MEMPOOL_REGISTER_OPS()``.
 
+* mempool: The mempool API macros ``MEMPOOL_PG_*`` are deprecated and
+  will be removed in DPDK 22.11.
+
 * mbuf: The mbuf offload flags ``PKT_*`` will be renamed as ``RTE_MBUF_F_*``.
   A compatibility layer will be kept until DPDK 22.11, except for the flags
   that are already deprecated (``PKT_RX_L4_CKSUM_BAD``, ``PKT_RX_IP_CKSUM_BAD``,
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index a8dd8031c0..bdaefd236d 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -239,6 +239,9 @@ API Changes
 * mempool: Macro to register mempool driver ``MEMPOOL_REGISTER_OPS()`` is
   deprecated.  Use replacement ``RTE_MEMPOOL_REGISTER_OPS()``.
 
+* mempool: The mempool API macros ``MEMPOOL_PG_*`` are deprecated and
+  will be removed in DPDK 22.11.
+
 * net: Renamed ``s_addr`` and ``d_addr`` fields of ``rte_ether_hdr`` structure
   to ``src_addr`` and ``dst_addr``, respectively.
 
diff --git a/lib/mempool/rte_mempool.h b/lib/mempool/rte_mempool.h
index 657233ce45..300dbdea4a 100644
--- a/lib/mempool/rte_mempool.h
+++ b/lib/mempool/rte_mempool.h
@@ -116,10 +116,11 @@ struct rte_mempool_objsz {
 /* "MP_<name>" */
 #define	RTE_MEMPOOL_MZ_FORMAT	RTE_MEMPOOL_MZ_PREFIX "%s"
 
-#define	MEMPOOL_PG_SHIFT_MAX	(sizeof(uintptr_t) * CHAR_BIT - 1)
+#define	MEMPOOL_PG_SHIFT_MAX \
+	RTE_DEPRECATED(MEMPOOL_PG_SHIFT_MAX) (sizeof(uintptr_t) * CHAR_BIT - 1)
 
-/** Mempool over one chunk of physically continuous memory */
-#define	MEMPOOL_PG_NUM_DEFAULT	1
+/** Deprecated. Mempool over one chunk of physically continuous memory */
+#define	MEMPOOL_PG_NUM_DEFAULT	RTE_DEPRECATED(MEMPOOL_PG_NUM_DEFAULT) 1
 
 #ifndef RTE_MEMPOOL_ALIGN
 /**
-- 
2.30.2


^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [dpdk-dev] [PATCH v2 2/6] mempool: add namespace prefix to flags
  2021-10-19 16:13     ` Olivier Matz
  2021-10-19 16:15       ` Olivier Matz
@ 2021-10-19 17:45       ` Andrew Rybchenko
  1 sibling, 0 replies; 53+ messages in thread
From: Andrew Rybchenko @ 2021-10-19 17:45 UTC (permalink / raw)
  To: Olivier Matz
  Cc: David Marchand, Maryam Tahhan, Reshma Pattan, Xiaoyun Li,
	Pavan Nikhilesh, Shijith Thotton, Jerin Jacob, Artem V. Andreev,
	Nithin Dabilpuram, Kiran Kumar K, Maciej Czekaj, Maxime Coquelin,
	Chenbo Xia, dev

On 10/19/21 7:13 PM, Olivier Matz wrote:
> On Tue, Oct 19, 2021 at 01:08:41PM +0300, Andrew Rybchenko wrote:
>> Fix the mempool flgas namespace by adding an RTE_ prefix to the name.
> 
> nit: flgas -> flags

Thanks, fixed.

> 
>> The old flags remain usable, to be deprecated in the future.
>>
>> Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> 
> (...)
> 
>> @@ -777,12 +777,12 @@ rte_mempool_cache_free(struct rte_mempool_cache *cache)
>>   	rte_free(cache);
>>   }
>>   
>> -#define MEMPOOL_KNOWN_FLAGS (MEMPOOL_F_NO_SPREAD \
>> -	| MEMPOOL_F_NO_CACHE_ALIGN \
>> -	| MEMPOOL_F_SP_PUT \
>> -	| MEMPOOL_F_SC_GET \
>> -	| MEMPOOL_F_POOL_CREATED \
>> -	| MEMPOOL_F_NO_IOVA_CONTIG \
>> +#define MEMPOOL_KNOWN_FLAGS (RTE_MEMPOOL_F_NO_SPREAD \
>> +	| RTE_MEMPOOL_F_NO_CACHE_ALIGN \
>> +	| RTE_MEMPOOL_F_SP_PUT \
>> +	| RTE_MEMPOOL_F_SC_GET \
>> +	| RTE_MEMPOOL_F_POOL_CREATED \
>> +	| RTE_MEMPOOL_F_NO_IOVA_CONTIG \
>>   	)
> 
> I guess MEMPOOL_KNOWN_FLAGS was kept as is on purpose.
> 

Yes, since it is internal and located in the .c file.
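
For context, a rough sketch of how that list is used inside rte_mempool.c
(the exact code may differ slightly):

    /* reject any flag the library does not know about */
    if ((flags & ~MEMPOOL_KNOWN_FLAGS) != 0) {
            rte_errno = EINVAL;
            return NULL;
    }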

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [dpdk-dev] [PATCH v3 2/6] mempool: add namespace prefix to flags
  2021-10-19 17:40   ` [dpdk-dev] [PATCH v3 2/6] mempool: add namespace prefix to flags Andrew Rybchenko
@ 2021-10-19 20:03     ` David Marchand
  2021-10-20  7:50       ` Andrew Rybchenko
  0 siblings, 1 reply; 53+ messages in thread
From: David Marchand @ 2021-10-19 20:03 UTC (permalink / raw)
  To: Andrew Rybchenko
  Cc: Olivier Matz, Maryam Tahhan, Reshma Pattan, Xiaoyun Li,
	Matan Azrad, Viacheslav Ovsiienko, Pavan Nikhilesh,
	Shijith Thotton, Jerin Jacob, Artem V. Andreev,
	Nithin Dabilpuram, Kiran Kumar K, Maciej Czekaj, Maxime Coquelin,
	Chenbo Xia, dev

On Tue, Oct 19, 2021 at 7:40 PM Andrew Rybchenko
<andrew.rybchenko@oktetlabs.ru> wrote:
> @@ -752,7 +752,7 @@ test_mempool_flag_non_io_set_when_no_iova_contig_set(void)
>         ret = rte_mempool_populate_default(mp);
>         RTE_TEST_ASSERT(ret > 0, "Failed to populate mempool: %s",
>                         rte_strerror(-ret));
> -       RTE_TEST_ASSERT(mp->flags & MEMPOOL_F_NON_IO,
> +       RTE_TEST_ASSERT(mp->flags & RTE_MEMPOOL_F_NON_IO,
>                         "NON_IO flag is not set when NO_IOVA_CONTIG is set");
>         ret = TEST_SUCCESS;
>  exit:

There is one more flag, hunk fixed adding missing:

@@ -745,14 +745,14 @@ test_mempool_flag_non_io_set_when_no_iova_contig_set(void)

     mp = rte_mempool_create_empty("empty", MEMPOOL_SIZE,
                       MEMPOOL_ELT_SIZE, 0, 0,
-                      SOCKET_ID_ANY, MEMPOOL_F_NO_IOVA_CONTIG);
+                      SOCKET_ID_ANY, RTE_MEMPOOL_F_NO_IOVA_CONTIG);
     RTE_TEST_ASSERT_NOT_NULL(mp, "Cannot create mempool: %s",
                  rte_strerror(rte_errno));
     rte_mempool_set_ops_byname(mp, rte_mbuf_best_mempool_ops(), NULL);
     ret = rte_mempool_populate_default(mp);
     RTE_TEST_ASSERT(ret > 0, "Failed to populate mempool: %s",
             rte_strerror(-ret));
-    RTE_TEST_ASSERT(mp->flags & MEMPOOL_F_NON_IO,
+    RTE_TEST_ASSERT(mp->flags & RTE_MEMPOOL_F_NON_IO,
             "NON_IO flag is not set when NO_IOVA_CONTIG is set");
     ret = TEST_SUCCESS;
 exit:


-- 
David Marchand


^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [dpdk-dev] [PATCH v3 0/6] mempool: cleanup namespace
  2021-10-19 17:40 ` [dpdk-dev] [PATCH v3 0/6] mempool: cleanup namespace Andrew Rybchenko
                     ` (5 preceding siblings ...)
  2021-10-19 17:40   ` [dpdk-dev] [PATCH v3 6/6] mempool: deprecate unused defines Andrew Rybchenko
@ 2021-10-19 20:09   ` David Marchand
  2021-10-20  7:52     ` David Marchand
  2021-10-20  7:52     ` Andrew Rybchenko
  6 siblings, 2 replies; 53+ messages in thread
From: David Marchand @ 2021-10-19 20:09 UTC (permalink / raw)
  To: Andrew Rybchenko; +Cc: Olivier Matz, dev

On Tue, Oct 19, 2021 at 7:40 PM Andrew Rybchenko
<andrew.rybchenko@oktetlabs.ru> wrote:
>
> Add RTE_ prefix to mempool API including internal. Keep old public API
> with fallback to new defines. Internal API is just renamed.
>
> v3:
>     - fix typo
>     - rebase on top of current main
>     - add prefix to newly added MEMPOOL_F_NON_IO
>     - fix deprecation usage
>     - add Fixes tag to the patch which deprecates unused macros

Thanks for the quick rebase.
I had rebased v2 before Olivier's comments.
I spotted a little issue diffing with your v3 (see comment on patch
2), and fixed your v3 in a local branch of mine.
It passes my checks.

I'll wait until tomorrow to see if Olivier wants to send some acks.


-- 
David Marchand


^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [dpdk-dev] [PATCH v3 4/6] mempool: make header size calculation internal
  2021-10-19 17:40   ` [dpdk-dev] [PATCH v3 4/6] mempool: make header size calculation internal Andrew Rybchenko
@ 2021-10-20  6:55     ` Olivier Matz
  0 siblings, 0 replies; 53+ messages in thread
From: Olivier Matz @ 2021-10-20  6:55 UTC (permalink / raw)
  To: Andrew Rybchenko; +Cc: Ray Kinsella, David Marchand, dev

On Tue, Oct 19, 2021 at 08:40:20PM +0300, Andrew Rybchenko wrote:
> Add RTE_ prefix to helper macro to calculate mempool header size and
> make it internal. Old macro is still available, but deprecated.
> 
> Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>

Acked-by: Olivier Matz <olivier.matz@6wind.com>

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [dpdk-dev] [PATCH v3 5/6] mempool: add namespace to driver register macro
  2021-10-19 17:40   ` [dpdk-dev] [PATCH v3 5/6] mempool: add namespace to driver register macro Andrew Rybchenko
@ 2021-10-20  6:57     ` Olivier Matz
  0 siblings, 0 replies; 53+ messages in thread
From: Olivier Matz @ 2021-10-20  6:57 UTC (permalink / raw)
  To: Andrew Rybchenko
  Cc: Ray Kinsella, Artem V. Andreev, Ashwin Sekhar T K,
	Pavan Nikhilesh, Hemant Agrawal, Sachin Saxena, Harman Kalra,
	Jerin Jacob, Nithin Dabilpuram, David Marchand, dev

On Tue, Oct 19, 2021 at 08:40:21PM +0300, Andrew Rybchenko wrote:
> Add RTE_ prefix to macro used to register mempool driver.
> The old one is still available but deprecated.
> 
> Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>

Acked-by: Olivier Matz <olivier.matz@6wind.com>

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [dpdk-dev] [PATCH v3 6/6] mempool: deprecate unused defines
  2021-10-19 17:40   ` [dpdk-dev] [PATCH v3 6/6] mempool: deprecate unused defines Andrew Rybchenko
@ 2021-10-20  7:08     ` Olivier Matz
  0 siblings, 0 replies; 53+ messages in thread
From: Olivier Matz @ 2021-10-20  7:08 UTC (permalink / raw)
  To: Andrew Rybchenko; +Cc: Ray Kinsella, David Marchand, dev

On Tue, Oct 19, 2021 at 08:40:22PM +0300, Andrew Rybchenko wrote:
> MEMPOOL_PG_NUM_DEFAULT and MEMPOOL_PG_SHIFT_MAX are not used.
> 
> Fixes: fd943c764a63 ("mempool: deprecate xmem functions")
> 
> Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>

Acked-by: Olivier Matz <olivier.matz@6wind.com>

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [dpdk-dev] [PATCH v3 2/6] mempool: add namespace prefix to flags
  2021-10-19 20:03     ` David Marchand
@ 2021-10-20  7:50       ` Andrew Rybchenko
  0 siblings, 0 replies; 53+ messages in thread
From: Andrew Rybchenko @ 2021-10-20  7:50 UTC (permalink / raw)
  To: David Marchand
  Cc: Olivier Matz, Maryam Tahhan, Reshma Pattan, Xiaoyun Li,
	Matan Azrad, Viacheslav Ovsiienko, Pavan Nikhilesh,
	Shijith Thotton, Jerin Jacob, Artem V. Andreev,
	Nithin Dabilpuram, Kiran Kumar K, Maciej Czekaj, Maxime Coquelin,
	Chenbo Xia, dev

On 10/19/21 11:03 PM, David Marchand wrote:
> On Tue, Oct 19, 2021 at 7:40 PM Andrew Rybchenko
> <andrew.rybchenko@oktetlabs.ru> wrote:
>> @@ -752,7 +752,7 @@ test_mempool_flag_non_io_set_when_no_iova_contig_set(void)
>>         ret = rte_mempool_populate_default(mp);
>>         RTE_TEST_ASSERT(ret > 0, "Failed to populate mempool: %s",
>>                         rte_strerror(-ret));
>> -       RTE_TEST_ASSERT(mp->flags & MEMPOOL_F_NON_IO,
>> +       RTE_TEST_ASSERT(mp->flags & RTE_MEMPOOL_F_NON_IO,
>>                         "NON_IO flag is not set when NO_IOVA_CONTIG is set");
>>         ret = TEST_SUCCESS;
>>  exit:
> 
> There is one more flag occurrence; I fixed the hunk by adding the missing conversion:
> 
> @@ -745,14 +745,14 @@ test_mempool_flag_non_io_set_when_no_iova_contig_set(void)
> 
>      mp = rte_mempool_create_empty("empty", MEMPOOL_SIZE,
>                        MEMPOOL_ELT_SIZE, 0, 0,
> -                      SOCKET_ID_ANY, MEMPOOL_F_NO_IOVA_CONTIG);
> +                      SOCKET_ID_ANY, RTE_MEMPOOL_F_NO_IOVA_CONTIG);
>      RTE_TEST_ASSERT_NOT_NULL(mp, "Cannot create mempool: %s",
>                   rte_strerror(rte_errno));
>      rte_mempool_set_ops_byname(mp, rte_mbuf_best_mempool_ops(), NULL);
>      ret = rte_mempool_populate_default(mp);
>      RTE_TEST_ASSERT(ret > 0, "Failed to populate mempool: %s",
>              rte_strerror(-ret));
> -    RTE_TEST_ASSERT(mp->flags & MEMPOOL_F_NON_IO,
> +    RTE_TEST_ASSERT(mp->flags & RTE_MEMPOOL_F_NON_IO,
>              "NON_IO flag is not set when NO_IOVA_CONTIG is set");
>      ret = TEST_SUCCESS;
>  exit:
> 

Thanks, and sorry that I overlooked it during the rebase.

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [dpdk-dev] [PATCH v3 0/6] mempool: cleanup namespace
  2021-10-19 20:09   ` [dpdk-dev] [PATCH v3 0/6] mempool: cleanup namespace David Marchand
@ 2021-10-20  7:52     ` David Marchand
  2021-10-20  7:54       ` Andrew Rybchenko
  2021-10-20  7:52     ` Andrew Rybchenko
  1 sibling, 1 reply; 53+ messages in thread
From: David Marchand @ 2021-10-20  7:52 UTC (permalink / raw)
  To: Andrew Rybchenko; +Cc: Olivier Matz, dev

On Tue, Oct 19, 2021 at 10:09 PM David Marchand
<david.marchand@redhat.com> wrote:
> On Tue, Oct 19, 2021 at 7:40 PM Andrew Rybchenko
> <andrew.rybchenko@oktetlabs.ru> wrote:
> >
> > Add RTE_ prefix to mempool API including internal. Keep old public API
> > with fallback to new defines. Internal API is just renamed.
> >
> > v3:
> >     - fix typo
> >     - rebase on top of current main
> >     - add prefix to newly added MEMPOOL_F_NON_IO
> >     - fix deprecation usage
> >     - add Fixes tag to the patch which deprecates unused macros
>
> I spotted a little issue diffing with your v3 (see comment on patch
> 2), and fixed your v3 in a local branch of mine.

Series applied with fix on patch 2.
Thanks.


-- 
David Marchand


^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [dpdk-dev] [PATCH v3 0/6] mempool: cleanup namespace
  2021-10-19 20:09   ` [dpdk-dev] [PATCH v3 0/6] mempool: cleanup namespace David Marchand
  2021-10-20  7:52     ` David Marchand
@ 2021-10-20  7:52     ` Andrew Rybchenko
  2021-10-20  8:07       ` David Marchand
  1 sibling, 1 reply; 53+ messages in thread
From: Andrew Rybchenko @ 2021-10-20  7:52 UTC (permalink / raw)
  To: David Marchand; +Cc: Olivier Matz, dev

On 10/19/21 11:09 PM, David Marchand wrote:
> On Tue, Oct 19, 2021 at 7:40 PM Andrew Rybchenko
> <andrew.rybchenko@oktetlabs.ru> wrote:
>>
>> Add RTE_ prefix to mempool API including internal. Keep old public API
>> with fallback to new defines. Internal API is just renamed.
>>
>> v3:
>>     - fix typo
>>     - rebase on top of current main
>>     - add prefix to newly added MEMPOOL_F_NON_IO
>>     - fix deprecation usage
>>     - add Fixes tag the patch which deprecates unused macros
> 
> Thanks for the quick rebase.
> I had rebased v2 before Olivier's comments.
> I spotted a little issue diffing with your v3 (see comment on patch
> 2), and fixed your v3 in a local branch of mine.
> It passes my checks.
> 
> I'll wait until tomorrow to see if Olivier wants to send some acks.

Olivier has just added missing Acks. Do you need v4 from me
with patch 2 fixes? Your changes LGTM and I don't mind if you
fix it on apply.

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [dpdk-dev] [PATCH v3 0/6] mempool: cleanup namespace
  2021-10-20  7:52     ` David Marchand
@ 2021-10-20  7:54       ` Andrew Rybchenko
  0 siblings, 0 replies; 53+ messages in thread
From: Andrew Rybchenko @ 2021-10-20  7:54 UTC (permalink / raw)
  To: David Marchand; +Cc: Olivier Matz, dev

On 10/20/21 10:52 AM, David Marchand wrote:
> On Tue, Oct 19, 2021 at 10:09 PM David Marchand
> <david.marchand@redhat.com> wrote:
>> On Tue, Oct 19, 2021 at 7:40 PM Andrew Rybchenko
>> <andrew.rybchenko@oktetlabs.ru> wrote:
>>>
>>> Add RTE_ prefix to mempool API including internal. Keep old public API
>>> with fallback to new defines. Internal API is just renamed.
>>>
>>> v3:
>>>     - fix typo
>>>     - rebase on top of current main
>>>     - add prefix to newly added MEMPOOL_F_NON_IO
>>>     - fix deprecation usage
>>>     - add Fixes tag to the patch which deprecates unused macros
>>
>> I spotted a little issue diffing with your v3 (see comment on patch
>> 2), and fixed your v3 in a local branch of mine.
> 
> Series applied with fix on patch 2.
> Thanks.

Sorry, I hadn't noticed this reply before sending my question.
Many thanks for agreeing to accept these patches this late.

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [dpdk-dev] [PATCH v3 0/6] mempool: cleanup namespace
  2021-10-20  7:52     ` Andrew Rybchenko
@ 2021-10-20  8:07       ` David Marchand
  0 siblings, 0 replies; 53+ messages in thread
From: David Marchand @ 2021-10-20  8:07 UTC (permalink / raw)
  To: Andrew Rybchenko; +Cc: Olivier Matz, dev

On Wed, Oct 20, 2021 at 9:52 AM Andrew Rybchenko
<andrew.rybchenko@oktetlabs.ru> wrote:
> > Thanks for the quick rebase.
> > I had rebased v2 before Olivier comments.
> > I spotted a little issue diffing with your v3 (see comment on patch
> > 2), and fixed your v3 in a local branch of mine.
> > It passes my checks.
> >
> > I'll wait tomorrow, to see if Olivier wants to send some acks.
>
> Olivier has just added missing Acks. Do you need v4 from me
> with patch 2 fixes? Your changes LGTM and I don't mind if you
> fix it on apply.

I applied Olivier's acks.
Patches are pushed if you want to double-check, but I think we are good.


Now looking at the mbuf offload namespace series... :-)


-- 
David Marchand


^ permalink raw reply	[flat|nested] 53+ messages in thread

end of thread, other threads:[~2021-10-20  8:07 UTC | newest]

Thread overview: 53+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-10-18 14:49 [dpdk-dev] [PATCH 0/6] mempool: cleanup namespace Andrew Rybchenko
2021-10-18 14:49 ` [dpdk-dev] [PATCH 1/6] mempool: avoid flags documentation in the next line Andrew Rybchenko
2021-10-18 14:49 ` [dpdk-dev] [PATCH 2/6] mempool: add namespace prefix to flags Andrew Rybchenko
2021-10-19  8:52   ` David Marchand
2021-10-19  9:40     ` Thomas Monjalon
2021-10-18 14:49 ` [dpdk-dev] [PATCH 3/6] mempool: add namespace to internal but still visible API Andrew Rybchenko
2021-10-19  8:47   ` David Marchand
2021-10-19  9:10     ` Andrew Rybchenko
2021-10-18 14:49 ` [dpdk-dev] [PATCH 4/6] mempool: make header size calculation internal Andrew Rybchenko
2021-10-19  8:48   ` David Marchand
2021-10-19  8:59     ` Andrew Rybchenko
2021-10-18 14:49 ` [dpdk-dev] [PATCH 5/6] mempool: add namespace to driver register macro Andrew Rybchenko
2021-10-19  8:49   ` David Marchand
2021-10-19  9:04     ` Andrew Rybchenko
2021-10-19  9:23       ` Andrew Rybchenko
2021-10-19  9:27       ` David Marchand
2021-10-19  9:38         ` Andrew Rybchenko
2021-10-19  9:42         ` Thomas Monjalon
2021-10-18 14:49 ` [dpdk-dev] [PATCH 6/6] mempool: deprecate unused defines Andrew Rybchenko
2021-10-19 10:08 ` [dpdk-dev] [PATCH v2 0/6] mempool: cleanup namespace Andrew Rybchenko
2021-10-19 10:08   ` [dpdk-dev] [PATCH v2 1/6] mempool: avoid flags documentation in the next line Andrew Rybchenko
2021-10-19 16:13     ` Olivier Matz
2021-10-19 10:08   ` [dpdk-dev] [PATCH v2 2/6] mempool: add namespace prefix to flags Andrew Rybchenko
2021-10-19 16:13     ` Olivier Matz
2021-10-19 16:15       ` Olivier Matz
2021-10-19 17:45       ` Andrew Rybchenko
2021-10-19 10:08   ` [dpdk-dev] [PATCH v2 3/6] mempool: add namespace to internal but still visible API Andrew Rybchenko
2021-10-19 16:14     ` Olivier Matz
2021-10-19 10:08   ` [dpdk-dev] [PATCH v2 4/6] mempool: make header size calculation internal Andrew Rybchenko
2021-10-19 16:14     ` Olivier Matz
2021-10-19 17:23       ` Andrew Rybchenko
2021-10-19 10:08   ` [dpdk-dev] [PATCH v2 5/6] mempool: add namespace to driver register macro Andrew Rybchenko
2021-10-19 16:16     ` Olivier Matz
2021-10-19 10:08   ` [dpdk-dev] [PATCH v2 6/6] mempool: deprecate unused defines Andrew Rybchenko
2021-10-19 16:21     ` Olivier Matz
2021-10-19 17:23       ` Andrew Rybchenko
2021-10-19 17:40 ` [dpdk-dev] [PATCH v3 0/6] mempool: cleanup namespace Andrew Rybchenko
2021-10-19 17:40   ` [dpdk-dev] [PATCH v3 1/6] mempool: avoid flags documentation in the next line Andrew Rybchenko
2021-10-19 17:40   ` [dpdk-dev] [PATCH v3 2/6] mempool: add namespace prefix to flags Andrew Rybchenko
2021-10-19 20:03     ` David Marchand
2021-10-20  7:50       ` Andrew Rybchenko
2021-10-19 17:40   ` [dpdk-dev] [PATCH v3 3/6] mempool: add namespace to internal but still visible API Andrew Rybchenko
2021-10-19 17:40   ` [dpdk-dev] [PATCH v3 4/6] mempool: make header size calculation internal Andrew Rybchenko
2021-10-20  6:55     ` Olivier Matz
2021-10-19 17:40   ` [dpdk-dev] [PATCH v3 5/6] mempool: add namespace to driver register macro Andrew Rybchenko
2021-10-20  6:57     ` Olivier Matz
2021-10-19 17:40   ` [dpdk-dev] [PATCH v3 6/6] mempool: deprecate unused defines Andrew Rybchenko
2021-10-20  7:08     ` Olivier Matz
2021-10-19 20:09   ` [dpdk-dev] [PATCH v3 0/6] mempool: cleanup namespace David Marchand
2021-10-20  7:52     ` David Marchand
2021-10-20  7:54       ` Andrew Rybchenko
2021-10-20  7:52     ` Andrew Rybchenko
2021-10-20  8:07       ` David Marchand
