DPDK patches and discussions
* [dpdk-dev] [PATCH 0/3] Add RTE_ prefix to CACHE_LINE related macros
@ 2014-11-19 12:26 Sergio Gonzalez Monroy
  2014-11-19 12:26 ` [dpdk-dev] [PATCH 1/3] Add RTE_ prefix to CACHE_LINE_SIZE macro Sergio Gonzalez Monroy
                   ` (3 more replies)
  0 siblings, 4 replies; 5+ messages in thread
From: Sergio Gonzalez Monroy @ 2014-11-19 12:26 UTC (permalink / raw)
  To: dev

Currently, DPDK defines the CACHE_LINE_SIZE macro to 64 by default if it is
not already defined.

FreeBSD defines a CACHE_LINE_SIZE macro in the header file:
/usr/include/machine/param.h

The two macros have different values, 64 in DPDK vs 128 in FreeBSD, which
breaks application behaviour whenever the system header is included before
rte_memory.h (where DPDK defines CACHE_LINE_SIZE).

This is the case for some of the sample applications, such as ip_fragmentation.
In such an application, the DPDK library code assumes a 64-byte cache line
size while the application code assumes 128 bytes.
Given that an mbuf now takes two cache lines and that the structure is
aligned based on this value, the result is broken application functionality.
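
To make the failure mode concrete, here is a standalone sketch of the
collision with the two definitions hard-coded (128 standing in for the
FreeBSD value, 64 for the DPDK default; the structure is illustrative
only, not one of the DPDK types):

    #include <stdio.h>

    /* What /usr/include/machine/param.h provides on FreeBSD: */
    #define CACHE_LINE_SIZE 128

    /* What rte_memory.h does today; the #ifndef is skipped because the
     * system header already defined the macro: */
    #ifndef CACHE_LINE_SIZE
    #define CACHE_LINE_SIZE 64
    #endif

    /* A structure aligned the same way DPDK aligns its own: */
    struct example_meta {
            char data[8];
    } __attribute__((__aligned__(CACHE_LINE_SIZE)));

    int main(void)
    {
            /* Application code compiled like this sees 128-byte alignment,
             * while library code built with the 64-byte default does not,
             * so the two sides disagree on structure size and layout. */
            printf("CACHE_LINE_SIZE=%d sizeof(struct example_meta)=%zu\n",
                   CACHE_LINE_SIZE, sizeof(struct example_meta));
            return 0;
    }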

The fix is to add the RTE_ prefix to all CACHE_LINE_xxxx related macros so
that they no longer conflict with system definitions.

Sergio Gonzalez Monroy (3):
  Add RTE_ prefix to CACHE_LINE_SIZE macro
  Add RTE_ prefix to CACHE_LINE_MASK macro
  Add RTE_ prefix to CACHE_LINE_ROUNDUP macro

 app/test-acl/main.c                               |  2 +-
 app/test-pipeline/runtime.c                       |  2 +-
 app/test-pmd/testpmd.c                            | 14 +++---
 app/test-pmd/testpmd.h                            |  4 +-
 app/test/test_distributor_perf.c                  |  2 +-
 app/test/test_ivshmem.c                           |  6 +--
 app/test/test_malloc.c                            | 32 ++++++-------
 app/test/test_mbuf.c                              |  2 +-
 app/test/test_memzone.c                           | 58 +++++++++++------------
 app/test/test_pmd_perf.c                          |  4 +-
 app/test/test_table.h                             |  2 +-
 doc/guides/sample_app_ug/kernel_nic_interface.rst |  2 +-
 examples/dpdk_qat/crypto.c                        |  2 +-
 examples/ip_pipeline/cmdline.c                    |  8 ++--
 examples/ip_pipeline/init.c                       |  4 +-
 examples/ip_pipeline/pipeline_passthrough.c       |  2 +-
 examples/ip_pipeline/pipeline_rx.c                |  2 +-
 examples/ip_pipeline/pipeline_tx.c                |  2 +-
 examples/ip_reassembly/main.c                     |  2 +-
 examples/kni/main.c                               |  2 +-
 examples/multi_process/l2fwd_fork/flib.c          |  4 +-
 examples/multi_process/symmetric_mp/main.c        |  2 +-
 examples/netmap_compat/lib/compat_netmap.c        |  4 +-
 examples/qos_sched/main.c                         |  4 +-
 examples/vhost/main.c                             |  6 +--
 examples/vhost_xen/vhost_monitor.c                |  4 +-
 lib/librte_acl/acl_gen.c                          |  6 +--
 lib/librte_acl/rte_acl.c                          |  2 +-
 lib/librte_acl/rte_acl_osdep_alone.h              |  6 +--
 lib/librte_distributor/rte_distributor.c          |  4 +-
 lib/librte_eal/common/eal_common_memzone.c        | 24 +++++-----
 lib/librte_eal/common/include/rte_memory.h        | 12 ++---
 lib/librte_ether/rte_ethdev.c                     | 10 ++--
 lib/librte_hash/rte_hash.c                        | 10 ++--
 lib/librte_ip_frag/rte_ip_frag_common.c           |  2 +-
 lib/librte_lpm/rte_lpm.c                          |  4 +-
 lib/librte_lpm/rte_lpm6.c                         |  4 +-
 lib/librte_malloc/malloc_elem.c                   |  4 +-
 lib/librte_malloc/malloc_elem.h                   |  2 +-
 lib/librte_malloc/malloc_heap.c                   |  6 +--
 lib/librte_malloc/rte_malloc.c                    |  2 +-
 lib/librte_mempool/rte_mempool.c                  | 24 +++++-----
 lib/librte_mempool/rte_mempool.h                  |  2 +-
 lib/librte_pipeline/rte_pipeline.c                |  4 +-
 lib/librte_pmd_e1000/em_rxtx.c                    | 10 ++--
 lib/librte_pmd_e1000/igb_rxtx.c                   |  8 ++--
 lib/librte_pmd_i40e/i40e_rxtx.c                   |  8 ++--
 lib/librte_pmd_ixgbe/ixgbe_rxtx.c                 |  8 ++--
 lib/librte_pmd_virtio/virtio_ethdev.c             | 10 ++--
 lib/librte_pmd_virtio/virtio_rxtx.c               |  2 +-
 lib/librte_pmd_vmxnet3/vmxnet3_ethdev.c           |  2 +-
 lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c             |  8 ++--
 lib/librte_pmd_xenvirt/rte_eth_xenvirt.c          |  6 +--
 lib/librte_port/rte_port_ethdev.c                 |  4 +-
 lib/librte_port/rte_port_frag.c                   |  2 +-
 lib/librte_port/rte_port_ras.c                    |  2 +-
 lib/librte_port/rte_port_ring.c                   |  4 +-
 lib/librte_port/rte_port_sched.c                  |  4 +-
 lib/librte_port/rte_port_source_sink.c            |  2 +-
 lib/librte_ring/rte_ring.c                        | 12 ++---
 lib/librte_sched/rte_bitmap.h                     |  8 ++--
 lib/librte_sched/rte_sched.c                      | 16 +++----
 lib/librte_table/rte_table_acl.c                  | 10 ++--
 lib/librte_table/rte_table_array.c                |  8 ++--
 lib/librte_table/rte_table_hash_ext.c             | 20 ++++----
 lib/librte_table/rte_table_hash_key16.c           | 36 +++++++-------
 lib/librte_table/rte_table_hash_key32.c           | 44 ++++++++---------
 lib/librte_table/rte_table_hash_key8.c            | 28 +++++------
 lib/librte_table/rte_table_hash_lru.c             | 16 +++----
 lib/librte_table/rte_table_lpm.c                  |  2 +-
 lib/librte_table/rte_table_lpm_ipv6.c             |  2 +-
 71 files changed, 294 insertions(+), 294 deletions(-)

-- 
2.1.0


* [dpdk-dev] [PATCH 1/3] Add RTE_ prefix to CACHE_LINE_SIZE macro
  2014-11-19 12:26 [dpdk-dev] [PATCH 0/3] Add RTE_ prefix to CACHE_LINE related macros Sergio Gonzalez Monroy
@ 2014-11-19 12:26 ` Sergio Gonzalez Monroy
  2014-11-19 12:26 ` [dpdk-dev] [PATCH 2/3] Add RTE_ prefix to CACHE_LINE_MASK macro Sergio Gonzalez Monroy
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 5+ messages in thread
From: Sergio Gonzalez Monroy @ 2014-11-19 12:26 UTC (permalink / raw)
  To: dev

CACHE_LINE_SIZE is a macro defined in machine/param.h on FreeBSD and
conflicts with the DPDK macro of the same name.

Add the RTE_ prefix to avoid the conflict.
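
For reference, the change is purely mechanical; a representative
before/after call site (illustrative only, the real call sites are in
the diff below) looks like this:

    /* before */
    buf = rte_zmalloc("example", size, CACHE_LINE_SIZE);
    /* after */
    buf = rte_zmalloc("example", size, RTE_CACHE_LINE_SIZE);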

Signed-off-by: Sergio Gonzalez Monroy <sergio.gonzalez.monroy@intel.com>
---
 app/test-acl/main.c                               |  2 +-
 app/test-pipeline/runtime.c                       |  2 +-
 app/test-pmd/testpmd.c                            | 12 +++----
 app/test-pmd/testpmd.h                            |  4 +--
 app/test/test_distributor_perf.c                  |  2 +-
 app/test/test_ivshmem.c                           |  6 ++--
 app/test/test_malloc.c                            | 32 ++++++++---------
 app/test/test_mbuf.c                              |  2 +-
 app/test/test_memzone.c                           | 26 +++++++-------
 app/test/test_pmd_perf.c                          |  4 +--
 app/test/test_table.h                             |  2 +-
 doc/guides/sample_app_ug/kernel_nic_interface.rst |  2 +-
 examples/dpdk_qat/crypto.c                        |  2 +-
 examples/ip_pipeline/cmdline.c                    |  8 ++---
 examples/ip_pipeline/init.c                       |  4 +--
 examples/ip_pipeline/pipeline_passthrough.c       |  2 +-
 examples/ip_pipeline/pipeline_rx.c                |  2 +-
 examples/ip_pipeline/pipeline_tx.c                |  2 +-
 examples/ip_reassembly/main.c                     |  2 +-
 examples/kni/main.c                               |  2 +-
 examples/multi_process/l2fwd_fork/flib.c          |  4 +--
 examples/multi_process/symmetric_mp/main.c        |  2 +-
 examples/netmap_compat/lib/compat_netmap.c        |  4 +--
 examples/qos_sched/main.c                         |  4 +--
 examples/vhost/main.c                             |  6 ++--
 examples/vhost_xen/vhost_monitor.c                |  4 +--
 lib/librte_acl/acl_gen.c                          |  6 ++--
 lib/librte_acl/rte_acl.c                          |  2 +-
 lib/librte_acl/rte_acl_osdep_alone.h              |  6 ++--
 lib/librte_distributor/rte_distributor.c          |  2 +-
 lib/librte_eal/common/eal_common_memzone.c        | 12 +++----
 lib/librte_eal/common/include/rte_memory.h        | 10 +++---
 lib/librte_ether/rte_ethdev.c                     | 10 +++---
 lib/librte_hash/rte_hash.c                        | 10 +++---
 lib/librte_ip_frag/rte_ip_frag_common.c           |  2 +-
 lib/librte_lpm/rte_lpm.c                          |  4 +--
 lib/librte_lpm/rte_lpm6.c                         |  4 +--
 lib/librte_malloc/malloc_elem.c                   |  4 +--
 lib/librte_malloc/malloc_elem.h                   |  2 +-
 lib/librte_malloc/malloc_heap.c                   |  2 +-
 lib/librte_mempool/rte_mempool.c                  |  8 ++---
 lib/librte_mempool/rte_mempool.h                  |  2 +-
 lib/librte_pipeline/rte_pipeline.c                |  4 +--
 lib/librte_pmd_e1000/em_rxtx.c                    | 10 +++---
 lib/librte_pmd_e1000/igb_rxtx.c                   |  8 ++---
 lib/librte_pmd_i40e/i40e_rxtx.c                   |  8 ++---
 lib/librte_pmd_ixgbe/ixgbe_rxtx.c                 |  8 ++---
 lib/librte_pmd_virtio/virtio_ethdev.c             | 10 +++---
 lib/librte_pmd_virtio/virtio_rxtx.c               |  2 +-
 lib/librte_pmd_vmxnet3/vmxnet3_ethdev.c           |  2 +-
 lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c             |  8 ++---
 lib/librte_pmd_xenvirt/rte_eth_xenvirt.c          |  6 ++--
 lib/librte_port/rte_port_ethdev.c                 |  4 +--
 lib/librte_port/rte_port_frag.c                   |  2 +-
 lib/librte_port/rte_port_ras.c                    |  2 +-
 lib/librte_port/rte_port_ring.c                   |  4 +--
 lib/librte_port/rte_port_sched.c                  |  4 +--
 lib/librte_port/rte_port_source_sink.c            |  2 +-
 lib/librte_ring/rte_ring.c                        |  2 +-
 lib/librte_sched/rte_bitmap.h                     |  6 ++--
 lib/librte_sched/rte_sched.c                      |  2 +-
 lib/librte_table/rte_table_acl.c                  |  4 +--
 lib/librte_table/rte_table_array.c                |  8 ++---
 lib/librte_table/rte_table_hash_ext.c             |  6 ++--
 lib/librte_table/rte_table_hash_key16.c           | 36 +++++++++----------
 lib/librte_table/rte_table_hash_key32.c           | 44 +++++++++++------------
 lib/librte_table/rte_table_hash_key8.c            | 28 +++++++--------
 lib/librte_table/rte_table_hash_lru.c             |  6 ++--
 lib/librte_table/rte_table_lpm.c                  |  2 +-
 lib/librte_table/rte_table_lpm_ipv6.c             |  2 +-
 70 files changed, 230 insertions(+), 230 deletions(-)

diff --git a/app/test-acl/main.c b/app/test-acl/main.c
index 44add10..a2c127f 100644
--- a/app/test-acl/main.c
+++ b/app/test-acl/main.c
@@ -470,7 +470,7 @@ tracef_init(void)
 	struct ipv6_5tuple *w;
 
 	sz = config.nb_traces * (config.ipv6 ? sizeof(*w) : sizeof(*v));
-	config.traces = rte_zmalloc_socket(name, sz, CACHE_LINE_SIZE,
+	config.traces = rte_zmalloc_socket(name, sz, RTE_CACHE_LINE_SIZE,
 			SOCKET_ID_ANY);
 	if (config.traces == NULL)
 		rte_exit(EXIT_FAILURE, "Cannot allocate %zu bytes for "
diff --git a/app/test-pipeline/runtime.c b/app/test-pipeline/runtime.c
index 14b7998..1f1ea5f 100644
--- a/app/test-pipeline/runtime.c
+++ b/app/test-pipeline/runtime.c
@@ -112,7 +112,7 @@ app_main_loop_worker(void) {
 		rte_lcore_id());
 
 	worker_mbuf = rte_malloc_socket(NULL, sizeof(struct app_mbuf_array),
-			CACHE_LINE_SIZE, rte_socket_id());
+			RTE_CACHE_LINE_SIZE, rte_socket_id());
 	if (worker_mbuf == NULL)
 		rte_panic("Worker thread: cannot allocate buffer space\n");
 
diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index 12adafa..5f96899 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -520,7 +520,7 @@ init_config(void)
 	/* Configuration of logical cores. */
 	fwd_lcores = rte_zmalloc("testpmd: fwd_lcores",
 				sizeof(struct fwd_lcore *) * nb_lcores,
-				CACHE_LINE_SIZE);
+				RTE_CACHE_LINE_SIZE);
 	if (fwd_lcores == NULL) {
 		rte_exit(EXIT_FAILURE, "rte_zmalloc(%d (struct fwd_lcore *)) "
 							"failed\n", nb_lcores);
@@ -528,7 +528,7 @@ init_config(void)
 	for (lc_id = 0; lc_id < nb_lcores; lc_id++) {
 		fwd_lcores[lc_id] = rte_zmalloc("testpmd: struct fwd_lcore",
 					       sizeof(struct fwd_lcore),
-					       CACHE_LINE_SIZE);
+					       RTE_CACHE_LINE_SIZE);
 		if (fwd_lcores[lc_id] == NULL) {
 			rte_exit(EXIT_FAILURE, "rte_zmalloc(struct fwd_lcore) "
 								"failed\n");
@@ -566,7 +566,7 @@ init_config(void)
 	/* Configuration of Ethernet ports. */
 	ports = rte_zmalloc("testpmd: ports",
 			    sizeof(struct rte_port) * nb_ports,
-			    CACHE_LINE_SIZE);
+			    RTE_CACHE_LINE_SIZE);
 	if (ports == NULL) {
 		rte_exit(EXIT_FAILURE, "rte_zmalloc(%d struct rte_port) "
 							"failed\n", nb_ports);
@@ -637,7 +637,7 @@ reconfig(portid_t new_port_id)
 	/* Reconfiguration of Ethernet ports. */
 	ports = rte_realloc(ports,
 			    sizeof(struct rte_port) * nb_ports,
-			    CACHE_LINE_SIZE);
+			    RTE_CACHE_LINE_SIZE);
 	if (ports == NULL) {
 		rte_exit(EXIT_FAILURE, "rte_realloc(%d struct rte_port) failed\n",
 				nb_ports);
@@ -713,14 +713,14 @@ init_fwd_streams(void)
 	/* init new */
 	nb_fwd_streams = nb_fwd_streams_new;
 	fwd_streams = rte_zmalloc("testpmd: fwd_streams",
-		sizeof(struct fwd_stream *) * nb_fwd_streams, CACHE_LINE_SIZE);
+		sizeof(struct fwd_stream *) * nb_fwd_streams, RTE_CACHE_LINE_SIZE);
 	if (fwd_streams == NULL)
 		rte_exit(EXIT_FAILURE, "rte_zmalloc(%d (struct fwd_stream *)) "
 						"failed\n", nb_fwd_streams);
 
 	for (sm_id = 0; sm_id < nb_fwd_streams; sm_id++) {
 		fwd_streams[sm_id] = rte_zmalloc("testpmd: struct fwd_stream",
-				sizeof(struct fwd_stream), CACHE_LINE_SIZE);
+				sizeof(struct fwd_stream), RTE_CACHE_LINE_SIZE);
 		if (fwd_streams[sm_id] == NULL)
 			rte_exit(EXIT_FAILURE, "rte_zmalloc(struct fwd_stream)"
 								" failed\n");
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index 9cbfeac..b4cb5bd 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -67,8 +67,8 @@ int main(int argc, char **argv);
 
 #define DEF_MBUF_CACHE 250
 
-#define CACHE_LINE_SIZE_ROUNDUP(size) \
-	(CACHE_LINE_SIZE * ((size + CACHE_LINE_SIZE - 1) / CACHE_LINE_SIZE))
+#define RTE_CACHE_LINE_SIZE_ROUNDUP(size) \
+	(RTE_CACHE_LINE_SIZE * ((size + RTE_CACHE_LINE_SIZE - 1) / RTE_CACHE_LINE_SIZE))
 
 #define NUMA_NO_CONFIG 0xFF
 #define UMA_NO_CONFIG  0xFF
diff --git a/app/test/test_distributor_perf.c b/app/test/test_distributor_perf.c
index 48ee344..31431bb 100644
--- a/app/test/test_distributor_perf.c
+++ b/app/test/test_distributor_perf.c
@@ -73,7 +73,7 @@ static void
 time_cache_line_switch(void)
 {
 	/* allocate a full cache line for data, we use only first byte of it */
-	uint64_t data[CACHE_LINE_SIZE*3 / sizeof(uint64_t)];
+	uint64_t data[RTE_CACHE_LINE_SIZE*3 / sizeof(uint64_t)];
 
 	unsigned i, slaveid = rte_get_next_lcore(rte_lcore_id(), 0, 0);
 	volatile uint64_t *pdata = &data[0];
diff --git a/app/test/test_ivshmem.c b/app/test/test_ivshmem.c
index 2996a86..4e61488 100644
--- a/app/test/test_ivshmem.c
+++ b/app/test/test_ivshmem.c
@@ -136,13 +136,13 @@ test_ivshmem_create_lots_of_memzones(void)
 	for (i = 0; i < RTE_LIBRTE_IVSHMEM_MAX_ENTRIES; i++) {
 		snprintf(name, sizeof(name), "mz_%i", i);
 
-		mz = rte_memzone_reserve(name, CACHE_LINE_SIZE, SOCKET_ID_ANY, 0);
+		mz = rte_memzone_reserve(name, RTE_CACHE_LINE_SIZE, SOCKET_ID_ANY, 0);
 		ASSERT(mz != NULL, "Failed to reserve memzone");
 
 		ASSERT(rte_ivshmem_metadata_add_memzone(mz, METADATA_NAME) == 0,
 				"Failed to add memzone");
 	}
-	mz = rte_memzone_reserve("one too many", CACHE_LINE_SIZE, SOCKET_ID_ANY, 0);
+	mz = rte_memzone_reserve("one too many", RTE_CACHE_LINE_SIZE, SOCKET_ID_ANY, 0);
 	ASSERT(mz != NULL, "Failed to reserve memzone");
 
 	ASSERT(rte_ivshmem_metadata_add_memzone(mz, METADATA_NAME) < 0,
@@ -159,7 +159,7 @@ test_ivshmem_create_duplicate_memzone(void)
 	ASSERT(rte_ivshmem_metadata_create(METADATA_NAME) == 0,
 			"Failed to create metadata");
 
-	mz = rte_memzone_reserve("mz", CACHE_LINE_SIZE, SOCKET_ID_ANY, 0);
+	mz = rte_memzone_reserve("mz", RTE_CACHE_LINE_SIZE, SOCKET_ID_ANY, 0);
 	ASSERT(mz != NULL, "Failed to reserve memzone");
 
 	ASSERT(rte_ivshmem_metadata_add_memzone(mz, METADATA_NAME) == 0,
diff --git a/app/test/test_malloc.c b/app/test/test_malloc.c
index ee34ca3..e8fac4b 100644
--- a/app/test/test_malloc.c
+++ b/app/test/test_malloc.c
@@ -481,13 +481,13 @@ test_realloc(void)
 	const unsigned size4 = size3 + 1024;
 
 	/* test data is the same even if element is moved*/
-	char *ptr1 = rte_zmalloc(NULL, size1, CACHE_LINE_SIZE);
+	char *ptr1 = rte_zmalloc(NULL, size1, RTE_CACHE_LINE_SIZE);
 	if (!ptr1){
 		printf("NULL pointer returned from rte_zmalloc\n");
 		return -1;
 	}
 	snprintf(ptr1, size1, "%s" ,hello_str);
-	char *ptr2 = rte_realloc(ptr1, size2, CACHE_LINE_SIZE);
+	char *ptr2 = rte_realloc(ptr1, size2, RTE_CACHE_LINE_SIZE);
 	if (!ptr2){
 		rte_free(ptr1);
 		printf("NULL pointer returned from rte_realloc\n");
@@ -511,7 +511,7 @@ test_realloc(void)
 	/* now allocate third element, free the second
 	 * and resize third. It should not move. (ptr1 is now invalid)
 	 */
-	char *ptr3 = rte_zmalloc(NULL, size3, CACHE_LINE_SIZE);
+	char *ptr3 = rte_zmalloc(NULL, size3, RTE_CACHE_LINE_SIZE);
 	if (!ptr3){
 		printf("NULL pointer returned from rte_zmalloc\n");
 		rte_free(ptr2);
@@ -526,7 +526,7 @@ test_realloc(void)
 		}
 	rte_free(ptr2);
 	/* first resize to half the size of the freed block */
-	char *ptr4 = rte_realloc(ptr3, size4, CACHE_LINE_SIZE);
+	char *ptr4 = rte_realloc(ptr3, size4, RTE_CACHE_LINE_SIZE);
 	if (!ptr4){
 		printf("NULL pointer returned from rte_realloc\n");
 		rte_free(ptr3);
@@ -538,7 +538,7 @@ test_realloc(void)
 		return -1;
 	}
 	/* now resize again to the full size of the freed block */
-	ptr4 = rte_realloc(ptr3, size3 + size2 + size1, CACHE_LINE_SIZE);
+	ptr4 = rte_realloc(ptr3, size3 + size2 + size1, RTE_CACHE_LINE_SIZE);
 	if (ptr3 != ptr4){
 		printf("Unexpected - ptr4 != ptr3 on second resize\n");
 		rte_free(ptr4);
@@ -549,12 +549,12 @@ test_realloc(void)
 	/* now try a resize to a smaller size, see if it works */
 	const unsigned size5 = 1024;
 	const unsigned size6 = size5 / 2;
-	char *ptr5 = rte_malloc(NULL, size5, CACHE_LINE_SIZE);
+	char *ptr5 = rte_malloc(NULL, size5, RTE_CACHE_LINE_SIZE);
 	if (!ptr5){
 		printf("NULL pointer returned from rte_malloc\n");
 		return -1;
 	}
-	char *ptr6 = rte_realloc(ptr5, size6, CACHE_LINE_SIZE);
+	char *ptr6 = rte_realloc(ptr5, size6, RTE_CACHE_LINE_SIZE);
 	if (!ptr6){
 		printf("NULL pointer returned from rte_realloc\n");
 		rte_free(ptr5);
@@ -569,8 +569,8 @@ test_realloc(void)
 
 	/* check for behaviour changing alignment */
 	const unsigned size7 = 1024;
-	const unsigned orig_align = CACHE_LINE_SIZE;
-	unsigned new_align = CACHE_LINE_SIZE * 2;
+	const unsigned orig_align = RTE_CACHE_LINE_SIZE;
+	unsigned new_align = RTE_CACHE_LINE_SIZE * 2;
 	char *ptr7 = rte_malloc(NULL, size7, orig_align);
 	if (!ptr7){
 		printf("NULL pointer returned from rte_malloc\n");
@@ -597,18 +597,18 @@ test_realloc(void)
 	 */
 	unsigned size9 = 1024, size10 = 1024;
 	unsigned size11 = size9 + size10 + 256;
-	char *ptr9 = rte_malloc(NULL, size9, CACHE_LINE_SIZE);
+	char *ptr9 = rte_malloc(NULL, size9, RTE_CACHE_LINE_SIZE);
 	if (!ptr9){
 		printf("NULL pointer returned from rte_malloc\n");
 		return -1;
 	}
-	char *ptr10 = rte_malloc(NULL, size10, CACHE_LINE_SIZE);
+	char *ptr10 = rte_malloc(NULL, size10, RTE_CACHE_LINE_SIZE);
 	if (!ptr10){
 		printf("NULL pointer returned from rte_malloc\n");
 		return -1;
 	}
 	rte_free(ptr9);
-	char *ptr11 = rte_realloc(ptr10, size11, CACHE_LINE_SIZE);
+	char *ptr11 = rte_realloc(ptr10, size11, RTE_CACHE_LINE_SIZE);
 	if (!ptr11){
 		printf("NULL pointer returned from rte_realloc\n");
 		rte_free(ptr10);
@@ -625,7 +625,7 @@ test_realloc(void)
 	 * We should get a malloc of the size requested*/
 	const size_t size12 = 1024;
 	size_t size12_check;
-	char *ptr12 = rte_realloc(NULL, size12, CACHE_LINE_SIZE);
+	char *ptr12 = rte_realloc(NULL, size12, RTE_CACHE_LINE_SIZE);
 	if (!ptr12){
 		printf("NULL pointer returned from rte_realloc\n");
 		return -1;
@@ -698,7 +698,7 @@ test_rte_malloc_validate(void)
 {
 	const size_t request_size = 1024;
 	size_t allocated_size;
-	char *data_ptr = rte_malloc(NULL, request_size, CACHE_LINE_SIZE);
+	char *data_ptr = rte_malloc(NULL, request_size, RTE_CACHE_LINE_SIZE);
 #ifdef RTE_LIBRTE_MALLOC_DEBUG
 	int retval;
 	char *over_write_vals = NULL;
@@ -773,7 +773,7 @@ test_zero_aligned_alloc(void)
 	char *p1 = rte_malloc(NULL,1024, 0);
 	if (!p1)
 		goto err_return;
-	if (!rte_is_aligned(p1, CACHE_LINE_SIZE))
+	if (!rte_is_aligned(p1, RTE_CACHE_LINE_SIZE))
 		goto err_return;
 	rte_free(p1);
 	return 0;
@@ -789,7 +789,7 @@ test_malloc_bad_params(void)
 {
 	const char *type = NULL;
 	size_t size = 0;
-	unsigned align = CACHE_LINE_SIZE;
+	unsigned align = RTE_CACHE_LINE_SIZE;
 
 	/* rte_malloc expected to return null with inappropriate size */
 	char *bad_ptr = rte_malloc(type, size, align);
diff --git a/app/test/test_mbuf.c b/app/test/test_mbuf.c
index 66bcbc5..a720759 100644
--- a/app/test/test_mbuf.c
+++ b/app/test/test_mbuf.c
@@ -782,7 +782,7 @@ test_failing_mbuf_sanity_check(void)
 static int
 test_mbuf(void)
 {
-	RTE_BUILD_BUG_ON(sizeof(struct rte_mbuf) != CACHE_LINE_SIZE * 2);
+	RTE_BUILD_BUG_ON(sizeof(struct rte_mbuf) != RTE_CACHE_LINE_SIZE * 2);
 
 	/* create pktmbuf pool if it does not exist */
 	if (pktmbuf_pool == NULL) {
diff --git a/app/test/test_memzone.c b/app/test/test_memzone.c
index 381f643..b665fce 100644
--- a/app/test/test_memzone.c
+++ b/app/test/test_memzone.c
@@ -281,7 +281,7 @@ test_memzone_reserve_max(void)
 			continue;
 
 		/* align everything */
-		last_addr = RTE_PTR_ALIGN_CEIL(ms[memseg_idx].addr, CACHE_LINE_SIZE);
+		last_addr = RTE_PTR_ALIGN_CEIL(ms[memseg_idx].addr, RTE_CACHE_LINE_SIZE);
 		len = ms[memseg_idx].len - RTE_PTR_DIFF(last_addr, ms[memseg_idx].addr);
 		len &= ~((size_t) CACHE_LINE_MASK);
 
@@ -374,7 +374,7 @@ test_memzone_reserve_max_aligned(void)
 			continue;
 
 		/* align everything */
-		last_addr = RTE_PTR_ALIGN_CEIL(ms[memseg_idx].addr, CACHE_LINE_SIZE);
+		last_addr = RTE_PTR_ALIGN_CEIL(ms[memseg_idx].addr, RTE_CACHE_LINE_SIZE);
 		len = ms[memseg_idx].len - RTE_PTR_DIFF(last_addr, ms[memseg_idx].addr);
 		len &= ~((size_t) CACHE_LINE_MASK);
 
@@ -589,7 +589,7 @@ check_memzone_bounded(const char *name, uint32_t len,  uint32_t align,
 	}
 
 	if ((mz->len & CACHE_LINE_MASK) != 0 || mz->len < len ||
-			mz->len < CACHE_LINE_SIZE) {
+			mz->len < RTE_CACHE_LINE_SIZE) {
 		printf("%s(%s): invalid length\n",
 			__func__, mz->name);
 		return (-1);
@@ -691,14 +691,14 @@ test_memzone_reserve_memory_in_smallest_segment(void)
 	prev_min_len = prev_min_ms->len;
 
 	/* try reserving a memzone in the smallest memseg */
-	mz = rte_memzone_reserve("smallest_mz", CACHE_LINE_SIZE,
+	mz = rte_memzone_reserve("smallest_mz", RTE_CACHE_LINE_SIZE,
 			SOCKET_ID_ANY, 0);
 	if (mz == NULL) {
 		printf("Failed to reserve memory from smallest memseg!\n");
 		return -1;
 	}
 	if (prev_min_ms->len != prev_min_len &&
-			min_ms->len != min_len - CACHE_LINE_SIZE) {
+			min_ms->len != min_len - RTE_CACHE_LINE_SIZE) {
 		printf("Reserved memory from wrong memseg!\n");
 		return -1;
 	}
@@ -737,7 +737,7 @@ test_memzone_reserve_memory_with_smallest_offset(void)
 
 	min_ms = NULL;  /*< smallest segment */
 	prev_min_ms = NULL; /*< second smallest segment */
-	align = CACHE_LINE_SIZE * 4;
+	align = RTE_CACHE_LINE_SIZE * 4;
 
 	/* find two smallest segments */
 	for (i = 0; i < RTE_MAX_MEMSEG; i++) {
@@ -777,7 +777,7 @@ test_memzone_reserve_memory_with_smallest_offset(void)
 
 		/* make sure final length is *not* aligned */
 		while (((min_ms->addr_64 + len) & (align-1)) == 0)
-			len += CACHE_LINE_SIZE;
+			len += RTE_CACHE_LINE_SIZE;
 
 		if (rte_memzone_reserve("dummy_mz1", len, SOCKET_ID_ANY, 0) == NULL) {
 			printf("Cannot reserve memory!\n");
@@ -792,12 +792,12 @@ test_memzone_reserve_memory_with_smallest_offset(void)
 	}
     /* if we don't need to touch smallest segment but it's aligned */
     else if ((min_ms->addr_64 & (align-1)) == 0) {
-            if (rte_memzone_reserve("align_mz1", CACHE_LINE_SIZE,
+            if (rte_memzone_reserve("align_mz1", RTE_CACHE_LINE_SIZE,
                     SOCKET_ID_ANY, 0) == NULL) {
                             printf("Cannot reserve memory!\n");
                             return -1;
             }
-            if (min_ms->len != min_len - CACHE_LINE_SIZE) {
+            if (min_ms->len != min_len - RTE_CACHE_LINE_SIZE) {
                     printf("Reserved memory from wrong segment!\n");
                     return -1;
             }
@@ -809,7 +809,7 @@ test_memzone_reserve_memory_with_smallest_offset(void)
 
 		/* make sure final length is aligned */
 		while (((prev_min_ms->addr_64 + len) & (align-1)) != 0)
-			len += CACHE_LINE_SIZE;
+			len += RTE_CACHE_LINE_SIZE;
 
 		if (rte_memzone_reserve("dummy_mz2", len, SOCKET_ID_ANY, 0) == NULL) {
 			printf("Cannot reserve memory!\n");
@@ -822,7 +822,7 @@ test_memzone_reserve_memory_with_smallest_offset(void)
 			return -1;
 		}
 	}
-	len = CACHE_LINE_SIZE;
+	len = RTE_CACHE_LINE_SIZE;
 
 
 
@@ -860,7 +860,7 @@ test_memzone_reserve_remainder(void)
 	int i, align;
 
 	min_len = 0;
-	align = CACHE_LINE_SIZE;
+	align = RTE_CACHE_LINE_SIZE;
 
 	config = rte_eal_get_configuration();
 
@@ -878,7 +878,7 @@ test_memzone_reserve_remainder(void)
 			min_ms = ms;
 
 			/* find maximum alignment this segment is able to hold */
-			align = CACHE_LINE_SIZE;
+			align = RTE_CACHE_LINE_SIZE;
 			while ((ms->addr_64 & (align-1)) == 0) {
 				align <<= 1;
 			}
diff --git a/app/test/test_pmd_perf.c b/app/test/test_pmd_perf.c
index 1c1f236..941d099 100644
--- a/app/test/test_pmd_perf.c
+++ b/app/test/test_pmd_perf.c
@@ -592,7 +592,7 @@ poll_burst(void *args)
 	pkts_burst = (struct rte_mbuf **)
 		rte_calloc_socket("poll_burst",
 				  total, sizeof(void *),
-				  CACHE_LINE_SIZE, conf->socketid);
+				  RTE_CACHE_LINE_SIZE, conf->socketid);
 	if (!pkts_burst)
 		return -1;
 
@@ -797,7 +797,7 @@ test_pmd_perf(void)
 			rte_calloc_socket("tx_buff",
 					  MAX_TRAFFIC_BURST * nb_ports,
 					  sizeof(void *),
-					  CACHE_LINE_SIZE, socketid);
+					  RTE_CACHE_LINE_SIZE, socketid);
 		if (!tx_burst)
 			return -1;
 	}
diff --git a/app/test/test_table.h b/app/test/test_table.h
index 40e50db..64e9427 100644
--- a/app/test/test_table.h
+++ b/app/test/test_table.h
@@ -179,7 +179,7 @@ struct rte_table {
 	rte_pipeline_table_action_handler_hit f_action;
 	uint32_t table_next_id;
 	uint32_t table_next_id_valid;
-	uint8_t actions_lookup_miss[CACHE_LINE_SIZE];
+	uint8_t actions_lookup_miss[RTE_CACHE_LINE_SIZE];
 	uint32_t action_data_size;
 	void *h_table;
 };
diff --git a/doc/guides/sample_app_ug/kernel_nic_interface.rst b/doc/guides/sample_app_ug/kernel_nic_interface.rst
index 720142f..b75eed3 100644
--- a/doc/guides/sample_app_ug/kernel_nic_interface.rst
+++ b/doc/guides/sample_app_ug/kernel_nic_interface.rst
@@ -392,7 +392,7 @@ The code is as follows:
                 goto fail;
             }
 
-            kni_port_params_array[port_id] = (struct kni_port_params*)rte_zmalloc("KNI_port_params", sizeof(struct kni_port_params), CACHE_LINE_SIZE);
+            kni_port_params_array[port_id] = (struct kni_port_params*)rte_zmalloc("KNI_port_params", sizeof(struct kni_port_params), RTE_CACHE_LINE_SIZE);
             kni_port_params_array[port_id]->port_id = port_id;
             kni_port_params_array[port_id]->lcore_rx = (uint8_t)int_fld[i++];
             kni_port_params_array[port_id]->lcore_tx = (uint8_t)int_fld[i++];
diff --git a/examples/dpdk_qat/crypto.c b/examples/dpdk_qat/crypto.c
index 318d47c..213ffcb 100644
--- a/examples/dpdk_qat/crypto.c
+++ b/examples/dpdk_qat/crypto.c
@@ -339,7 +339,7 @@ get_crypto_instance_on_core(CpaInstanceHandle *pInstanceHandle,
 	}
 
 	pLocalInstanceHandles = rte_malloc("pLocalInstanceHandles",
-			sizeof(CpaInstanceHandle) * numInstances, CACHE_LINE_SIZE);
+			sizeof(CpaInstanceHandle) * numInstances, RTE_CACHE_LINE_SIZE);
 
 	if (NULL == pLocalInstanceHandles) {
 		return CPA_STATUS_FAIL;
diff --git a/examples/ip_pipeline/cmdline.c b/examples/ip_pipeline/cmdline.c
index a56335e..13d565e 100644
--- a/examples/ip_pipeline/cmdline.c
+++ b/examples/ip_pipeline/cmdline.c
@@ -568,7 +568,7 @@ cmd_arp_add_parsed(
 			struct app_rule *new_rule = (struct app_rule *)
 				rte_zmalloc_socket("CLI",
 				sizeof(struct app_rule),
-				CACHE_LINE_SIZE,
+				RTE_CACHE_LINE_SIZE,
 				rte_socket_id());
 
 			if (new_rule == NULL)
@@ -860,7 +860,7 @@ cmd_route_add_parsed(
 			struct app_rule *new_rule = (struct app_rule *)
 				rte_zmalloc_socket("CLI",
 				sizeof(struct app_rule),
-				CACHE_LINE_SIZE,
+				RTE_CACHE_LINE_SIZE,
 				rte_socket_id());
 
 			if (new_rule == NULL)
@@ -1193,7 +1193,7 @@ cmd_firewall_add_parsed(
 			struct app_rule *new_rule = (struct app_rule *)
 				rte_zmalloc_socket("CLI",
 				sizeof(struct app_rule),
-				CACHE_LINE_SIZE,
+				RTE_CACHE_LINE_SIZE,
 				rte_socket_id());
 
 			memcpy(new_rule, &rule, sizeof(rule));
@@ -1673,7 +1673,7 @@ cmd_flow_add_parsed(
 			struct app_rule *new_rule = (struct app_rule *)
 				rte_zmalloc_socket("CLI",
 				sizeof(struct app_rule),
-				CACHE_LINE_SIZE,
+				RTE_CACHE_LINE_SIZE,
 				rte_socket_id());
 
 			if (new_rule == NULL)
diff --git a/examples/ip_pipeline/init.c b/examples/ip_pipeline/init.c
index cb7568b..e0c0464 100644
--- a/examples/ip_pipeline/init.c
+++ b/examples/ip_pipeline/init.c
@@ -419,7 +419,7 @@ app_init_rings(void)
 	RTE_LOG(INFO, USER1, "Initializing %u SW rings ...\n", n_swq);
 
 	app.rings = rte_malloc_socket(NULL, n_swq * sizeof(struct rte_ring *),
-		CACHE_LINE_SIZE, rte_socket_id());
+		RTE_CACHE_LINE_SIZE, rte_socket_id());
 	if (app.rings == NULL)
 		rte_panic("Cannot allocate memory to store ring pointers\n");
 
@@ -595,7 +595,7 @@ app_init_etc(void)
 void
 app_init(void)
 {
-	if ((sizeof(struct app_pkt_metadata) % CACHE_LINE_SIZE) != 0)
+	if ((sizeof(struct app_pkt_metadata) % RTE_CACHE_LINE_SIZE) != 0)
 		rte_panic("Application pkt meta-data size mismatch\n");
 
 	app_check_core_params();
diff --git a/examples/ip_pipeline/pipeline_passthrough.c b/examples/ip_pipeline/pipeline_passthrough.c
index 4af6f44..948b2c1 100644
--- a/examples/ip_pipeline/pipeline_passthrough.c
+++ b/examples/ip_pipeline/pipeline_passthrough.c
@@ -188,7 +188,7 @@ app_main_loop_passthrough(void) {
 		core_id);
 
 	m = rte_malloc_socket(NULL, sizeof(struct app_mbuf_array),
-		CACHE_LINE_SIZE, rte_socket_id());
+		RTE_CACHE_LINE_SIZE, rte_socket_id());
 	if (m == NULL)
 		rte_panic("%s: cannot allocate buffer space\n", __func__);
 
diff --git a/examples/ip_pipeline/pipeline_rx.c b/examples/ip_pipeline/pipeline_rx.c
index 8f1f781..383f1a9 100644
--- a/examples/ip_pipeline/pipeline_rx.c
+++ b/examples/ip_pipeline/pipeline_rx.c
@@ -295,7 +295,7 @@ app_main_loop_rx(void) {
 	RTE_LOG(INFO, USER1, "Core %u is doing RX (no pipeline)\n", core_id);
 
 	ma = rte_malloc_socket(NULL, sizeof(struct app_mbuf_array),
-		CACHE_LINE_SIZE, rte_socket_id());
+		RTE_CACHE_LINE_SIZE, rte_socket_id());
 	if (ma == NULL)
 		rte_panic("%s: cannot allocate buffer space\n", __func__);
 
diff --git a/examples/ip_pipeline/pipeline_tx.c b/examples/ip_pipeline/pipeline_tx.c
index 64904b2..0077c12 100644
--- a/examples/ip_pipeline/pipeline_tx.c
+++ b/examples/ip_pipeline/pipeline_tx.c
@@ -234,7 +234,7 @@ app_main_loop_tx(void) {
 
 	for (i = 0; i < APP_MAX_PORTS; i++) {
 		m[i] = rte_malloc_socket(NULL, sizeof(struct app_mbuf_array),
-			CACHE_LINE_SIZE, rte_socket_id());
+			RTE_CACHE_LINE_SIZE, rte_socket_id());
 		if (m[i] == NULL)
 			rte_panic("%s: Cannot allocate buffer space\n",
 				__func__);
diff --git a/examples/ip_reassembly/main.c b/examples/ip_reassembly/main.c
index 39d60ec..780099d 100644
--- a/examples/ip_reassembly/main.c
+++ b/examples/ip_reassembly/main.c
@@ -860,7 +860,7 @@ setup_port_tbl(struct lcore_queue_conf *qconf, uint32_t lcore, int socket,
 	n = RTE_MAX(max_flow_num, 2UL * MAX_PKT_BURST);
 	sz = sizeof (*mtb) + sizeof (mtb->m_table[0]) *  n;
 
-	if ((mtb = rte_zmalloc_socket(__func__, sz, CACHE_LINE_SIZE,
+	if ((mtb = rte_zmalloc_socket(__func__, sz, RTE_CACHE_LINE_SIZE,
 			socket)) == NULL) {
 		RTE_LOG(ERR, IP_RSMBL, "%s() for lcore: %u, port: %u "
 			"failed to allocate %zu bytes\n",
diff --git a/examples/kni/main.c b/examples/kni/main.c
index 47cc873..45b96bc 100644
--- a/examples/kni/main.c
+++ b/examples/kni/main.c
@@ -463,7 +463,7 @@ parse_config(const char *arg)
 		}
 		kni_port_params_array[port_id] =
 			(struct kni_port_params*)rte_zmalloc("KNI_port_params",
-			sizeof(struct kni_port_params), CACHE_LINE_SIZE);
+			sizeof(struct kni_port_params), RTE_CACHE_LINE_SIZE);
 		kni_port_params_array[port_id]->port_id = port_id;
 		kni_port_params_array[port_id]->lcore_rx =
 					(uint8_t)int_fld[i++];
diff --git a/examples/multi_process/l2fwd_fork/flib.c b/examples/multi_process/l2fwd_fork/flib.c
index aace308..095e2f7 100644
--- a/examples/multi_process/l2fwd_fork/flib.c
+++ b/examples/multi_process/l2fwd_fork/flib.c
@@ -180,7 +180,7 @@ lcore_id_init(void)
 	/* Setup lcore ID allocation map */
 	lcore_cfg = rte_zmalloc("LCORE_ID_MAP",
 						sizeof(uint16_t) * RTE_MAX_LCORE,
-						CACHE_LINE_SIZE);
+						RTE_CACHE_LINE_SIZE);
 
 	if(lcore_cfg == NULL)
 		rte_panic("Failed to malloc\n");
@@ -300,7 +300,7 @@ flib_init(void)
 {
 	if ((core_cfg = rte_zmalloc("core_cfg",
 		sizeof(struct lcore_stat) * RTE_MAX_LCORE,
-		CACHE_LINE_SIZE)) == NULL ) {
+		RTE_CACHE_LINE_SIZE)) == NULL ) {
 		printf("rte_zmalloc failed\n");
 		return -1;
 	}
diff --git a/examples/multi_process/symmetric_mp/main.c b/examples/multi_process/symmetric_mp/main.c
index ff48f20..01faae9 100644
--- a/examples/multi_process/symmetric_mp/main.c
+++ b/examples/multi_process/symmetric_mp/main.c
@@ -101,7 +101,7 @@ struct port_stats{
 	unsigned rx;
 	unsigned tx;
 	unsigned drop;
-} __attribute__((aligned(CACHE_LINE_SIZE / 2)));
+} __attribute__((aligned(RTE_CACHE_LINE_SIZE / 2)));
 
 static int proc_id = -1;
 static unsigned num_procs = 0;
diff --git a/examples/netmap_compat/lib/compat_netmap.c b/examples/netmap_compat/lib/compat_netmap.c
index 2348366..6a4737a 100644
--- a/examples/netmap_compat/lib/compat_netmap.c
+++ b/examples/netmap_compat/lib/compat_netmap.c
@@ -643,12 +643,12 @@ rte_netmap_init(const struct rte_netmap_conf *conf)
 	nmif_sz = NETMAP_IF_RING_OFS(port_rings, port_rings, port_slots);
 	sz = nmif_sz * port_num;
 
-	buf_ofs = RTE_ALIGN_CEIL(sz, CACHE_LINE_SIZE);
+	buf_ofs = RTE_ALIGN_CEIL(sz, RTE_CACHE_LINE_SIZE);
 	sz = buf_ofs + port_bufs * conf->max_bufsz * port_num;
 
 	if (sz > UINT32_MAX ||
 			(netmap.mem = rte_zmalloc_socket(__func__, sz,
-			CACHE_LINE_SIZE, conf->socket_id)) == NULL) {
+			RTE_CACHE_LINE_SIZE, conf->socket_id)) == NULL) {
 		RTE_LOG(ERR, USER1, "%s: failed to allocate %zu bytes\n",
 			__func__, sz);
 		return (-ENOMEM);
diff --git a/examples/qos_sched/main.c b/examples/qos_sched/main.c
index 19a4f85..8114350 100644
--- a/examples/qos_sched/main.c
+++ b/examples/qos_sched/main.c
@@ -135,7 +135,7 @@ app_main_loop(__attribute__((unused))void *dummy)
 	else if (mode == (APP_TX_MODE | APP_WT_MODE)) {
 		for (i = 0; i < wt_idx; i++) {
 			wt_confs[i]->m_table = rte_malloc("table_wt", sizeof(struct rte_mbuf *)
-					* burst_conf.tx_burst, CACHE_LINE_SIZE);
+					* burst_conf.tx_burst, RTE_CACHE_LINE_SIZE);
 
 			if (wt_confs[i]->m_table == NULL)
 				rte_panic("flow %u unable to allocate memory buffer\n", i);
@@ -150,7 +150,7 @@ app_main_loop(__attribute__((unused))void *dummy)
 	else if (mode == APP_TX_MODE) {
 		for (i = 0; i < tx_idx; i++) {
 			tx_confs[i]->m_table = rte_malloc("table_tx", sizeof(struct rte_mbuf *)
-					* burst_conf.tx_burst, CACHE_LINE_SIZE);
+					* burst_conf.tx_burst, RTE_CACHE_LINE_SIZE);
 
 			if (tx_confs[i]->m_table == NULL)
 				rte_panic("flow %u unable to allocate memory buffer\n", i);
diff --git a/examples/vhost/main.c b/examples/vhost/main.c
index 1f1edbe..f9b14c3 100644
--- a/examples/vhost/main.c
+++ b/examples/vhost/main.c
@@ -156,7 +156,7 @@
 #define MAC_ADDR_CMP 0xFFFFFFFFFFFFULL
 
 /* Number of descriptors per cacheline. */
-#define DESC_PER_CACHELINE (CACHE_LINE_SIZE / sizeof(struct vring_desc))
+#define DESC_PER_CACHELINE (RTE_CACHE_LINE_SIZE / sizeof(struct vring_desc))
 
 /* mask of enabled ports */
 static uint32_t enabled_port_mask = 0;
@@ -2562,7 +2562,7 @@ new_device (struct virtio_net *dev)
 	struct vhost_dev *vdev;
 	uint32_t regionidx;
 
-	vdev = rte_zmalloc("vhost device", sizeof(*vdev), CACHE_LINE_SIZE);
+	vdev = rte_zmalloc("vhost device", sizeof(*vdev), RTE_CACHE_LINE_SIZE);
 	if (vdev == NULL) {
 		RTE_LOG(INFO, VHOST_DATA, "(%"PRIu64") Couldn't allocate memory for vhost dev\n",
 			dev->device_fh);
@@ -2584,7 +2584,7 @@ new_device (struct virtio_net *dev)
 
 		vdev->regions_hpa = (struct virtio_memory_regions_hpa *) rte_zmalloc("vhost hpa region",
 			sizeof(struct virtio_memory_regions_hpa) * vdev->nregions_hpa,
-			CACHE_LINE_SIZE);
+			RTE_CACHE_LINE_SIZE);
 		if (vdev->regions_hpa == NULL) {
 			RTE_LOG(ERR, VHOST_CONFIG, "Cannot allocate memory for hpa region\n");
 			rte_free(vdev);
diff --git a/examples/vhost_xen/vhost_monitor.c b/examples/vhost_xen/vhost_monitor.c
index 6994c9c..f683989 100644
--- a/examples/vhost_xen/vhost_monitor.c
+++ b/examples/vhost_xen/vhost_monitor.c
@@ -255,8 +255,8 @@ virtio_net_config_ll *new_device(unsigned int virtio_idx, struct xen_guest *gues
 
 	/* Setup device and virtqueues. */
 	new_ll_dev   = calloc(1, sizeof(struct virtio_net_config_ll));
-	virtqueue_rx = rte_zmalloc(NULL, sizeof(struct vhost_virtqueue), CACHE_LINE_SIZE);
-	virtqueue_tx = rte_zmalloc(NULL, sizeof(struct vhost_virtqueue), CACHE_LINE_SIZE);
+	virtqueue_rx = rte_zmalloc(NULL, sizeof(struct vhost_virtqueue), RTE_CACHE_LINE_SIZE);
+	virtqueue_tx = rte_zmalloc(NULL, sizeof(struct vhost_virtqueue), RTE_CACHE_LINE_SIZE);
 	if (new_ll_dev == NULL || virtqueue_rx == NULL || virtqueue_tx == NULL)
 		goto err;
 
diff --git a/lib/librte_acl/acl_gen.c b/lib/librte_acl/acl_gen.c
index f65e397..b1f766b 100644
--- a/lib/librte_acl/acl_gen.c
+++ b/lib/librte_acl/acl_gen.c
@@ -415,12 +415,12 @@ rte_acl_gen(struct rte_acl_ctx *ctx, struct rte_acl_trie *trie,
 		node_bld_trie, num_tries, match_num);
 
 	/* Allocate runtime memory (align to cache boundary) */
-	total_size = RTE_ALIGN(data_index_sz, CACHE_LINE_SIZE) +
+	total_size = RTE_ALIGN(data_index_sz, RTE_CACHE_LINE_SIZE) +
 		indices.match_index * sizeof(uint64_t) +
 		(match_num + 2) * sizeof(struct rte_acl_match_results) +
 		XMM_SIZE;
 
-	mem = rte_zmalloc_socket(ctx->name, total_size, CACHE_LINE_SIZE,
+	mem = rte_zmalloc_socket(ctx->name, total_size, RTE_CACHE_LINE_SIZE,
 			ctx->socket_id);
 	if (mem == NULL) {
 		RTE_LOG(ERR, ACL,
@@ -432,7 +432,7 @@ rte_acl_gen(struct rte_acl_ctx *ctx, struct rte_acl_trie *trie,
 	/* Fill the runtime structure */
 	match_index = indices.match_index;
 	node_array = (uint64_t *)((uintptr_t)mem +
-		RTE_ALIGN(data_index_sz, CACHE_LINE_SIZE));
+		RTE_ALIGN(data_index_sz, RTE_CACHE_LINE_SIZE));
 
 	/*
 	 * Setup the NOMATCH node (a SINGLE at the
diff --git a/lib/librte_acl/rte_acl.c b/lib/librte_acl/rte_acl.c
index 4b21b8e..547e6da 100644
--- a/lib/librte_acl/rte_acl.c
+++ b/lib/librte_acl/rte_acl.c
@@ -203,7 +203,7 @@ rte_acl_create(const struct rte_acl_param *param)
 			goto exit;
 		}
 
-		ctx = rte_zmalloc_socket(name, sz, CACHE_LINE_SIZE, param->socket_id);
+		ctx = rte_zmalloc_socket(name, sz, RTE_CACHE_LINE_SIZE, param->socket_id);
 
 		if (ctx == NULL) {
 			RTE_LOG(ERR, ACL,
diff --git a/lib/librte_acl/rte_acl_osdep_alone.h b/lib/librte_acl/rte_acl_osdep_alone.h
index bdeba54..73d1701 100644
--- a/lib/librte_acl/rte_acl_osdep_alone.h
+++ b/lib/librte_acl/rte_acl_osdep_alone.h
@@ -180,13 +180,13 @@ rte_rdtsc(void)
  * rte_memory related.
  */
 #define	SOCKET_ID_ANY	-1                  /**< Any NUMA socket. */
-#define	CACHE_LINE_SIZE	64                  /**< Cache line size. */
-#define	CACHE_LINE_MASK	(CACHE_LINE_SIZE-1) /**< Cache line mask. */
+#define	RTE_CACHE_LINE_SIZE	64                  /**< Cache line size. */
+#define	CACHE_LINE_MASK	(RTE_CACHE_LINE_SIZE-1) /**< Cache line mask. */
 
 /**
  * Force alignment to cache line.
  */
-#define	__rte_cache_aligned	__attribute__((__aligned__(CACHE_LINE_SIZE)))
+#define	__rte_cache_aligned	__attribute__((__aligned__(RTE_CACHE_LINE_SIZE)))
 
 
 /*
diff --git a/lib/librte_distributor/rte_distributor.c b/lib/librte_distributor/rte_distributor.c
index 2c5d61c..0b4178c 100644
--- a/lib/librte_distributor/rte_distributor.c
+++ b/lib/librte_distributor/rte_distributor.c
@@ -77,7 +77,7 @@
  */
 union rte_distributor_buffer {
 	volatile int64_t bufptr64;
-	char pad[CACHE_LINE_SIZE*3];
+	char pad[RTE_CACHE_LINE_SIZE*3];
 } __rte_cache_aligned;
 
 struct rte_distributor_backlog {
diff --git a/lib/librte_eal/common/eal_common_memzone.c b/lib/librte_eal/common/eal_common_memzone.c
index 5acd9ce..18e4f38 100644
--- a/lib/librte_eal/common/eal_common_memzone.c
+++ b/lib/librte_eal/common/eal_common_memzone.c
@@ -86,7 +86,7 @@ rte_memzone_reserve(const char *name, size_t len, int socket_id,
 		      unsigned flags)
 {
 	return rte_memzone_reserve_aligned(name,
-			len, socket_id, flags, CACHE_LINE_SIZE);
+			len, socket_id, flags, RTE_CACHE_LINE_SIZE);
 }
 
 /*
@@ -164,8 +164,8 @@ memzone_reserve_aligned_thread_unsafe(const char *name, size_t len,
 	}
 
 	/* alignment less than cache size is not allowed */
-	if (align < CACHE_LINE_SIZE)
-		align = CACHE_LINE_SIZE;
+	if (align < RTE_CACHE_LINE_SIZE)
+		align = RTE_CACHE_LINE_SIZE;
 
 
 	/* align length on cache boundary. Check for overflow before doing so */
@@ -178,7 +178,7 @@ memzone_reserve_aligned_thread_unsafe(const char *name, size_t len,
 	len &= ~((size_t) CACHE_LINE_MASK);
 
 	/* save minimal requested  length */
-	requested_len = RTE_MAX((size_t)CACHE_LINE_SIZE,  len);
+	requested_len = RTE_MAX((size_t)RTE_CACHE_LINE_SIZE,  len);
 
 	/* check that boundary condition is valid */
 	if (bound != 0 &&
@@ -432,13 +432,13 @@ memseg_sanitize(struct rte_memseg *memseg)
 		return -1;
 
 	/* memseg is really too small, don't bother with it */
-	if (memseg->len < (2 * CACHE_LINE_SIZE)) {
+	if (memseg->len < (2 * RTE_CACHE_LINE_SIZE)) {
 		memseg->len = 0;
 		return 0;
 	}
 
 	/* align start address */
-	off = (CACHE_LINE_SIZE - phys_align) & CACHE_LINE_MASK;
+	off = (RTE_CACHE_LINE_SIZE - phys_align) & CACHE_LINE_MASK;
 	memseg->phys_addr += off;
 	memseg->addr = (char *)memseg->addr + off;
 	memseg->len -= off;
diff --git a/lib/librte_eal/common/include/rte_memory.h b/lib/librte_eal/common/include/rte_memory.h
index 4cf8ea9..0502793 100644
--- a/lib/librte_eal/common/include/rte_memory.h
+++ b/lib/librte_eal/common/include/rte_memory.h
@@ -59,19 +59,19 @@ enum rte_page_sizes {
 };
 
 #define SOCKET_ID_ANY -1                    /**< Any NUMA socket. */
-#ifndef CACHE_LINE_SIZE
-#define CACHE_LINE_SIZE 64                  /**< Cache line size. */
+#ifndef RTE_CACHE_LINE_SIZE
+#define RTE_CACHE_LINE_SIZE 64                  /**< Cache line size. */
 #endif
-#define CACHE_LINE_MASK (CACHE_LINE_SIZE-1) /**< Cache line mask. */
+#define CACHE_LINE_MASK (RTE_CACHE_LINE_SIZE-1) /**< Cache line mask. */
 
 #define CACHE_LINE_ROUNDUP(size) \
-	(CACHE_LINE_SIZE * ((size + CACHE_LINE_SIZE - 1) / CACHE_LINE_SIZE))
+	(RTE_CACHE_LINE_SIZE * ((size + RTE_CACHE_LINE_SIZE - 1) / RTE_CACHE_LINE_SIZE))
 /**< Return the first cache-aligned value greater or equal to size. */
 
 /**
  * Force alignment to cache line.
  */
-#define __rte_cache_aligned __attribute__((__aligned__(CACHE_LINE_SIZE)))
+#define __rte_cache_aligned __attribute__((__aligned__(RTE_CACHE_LINE_SIZE)))
 
 typedef uint64_t phys_addr_t; /**< Physical address definition. */
 #define RTE_BAD_PHYS_ADDR ((phys_addr_t)-1)
diff --git a/lib/librte_ether/rte_ethdev.c b/lib/librte_ether/rte_ethdev.c
index 8c65d72..6eadbe5 100644
--- a/lib/librte_ether/rte_ethdev.c
+++ b/lib/librte_ether/rte_ethdev.c
@@ -256,7 +256,7 @@ rte_eth_dev_init(struct rte_pci_driver *pci_drv,
 	if (rte_eal_process_type() == RTE_PROC_PRIMARY){
 		eth_dev->data->dev_private = rte_zmalloc("ethdev private structure",
 				  eth_drv->dev_private_size,
-				  CACHE_LINE_SIZE);
+				  RTE_CACHE_LINE_SIZE);
 		if (eth_dev->data->dev_private == NULL)
 			rte_panic("Cannot allocate memzone for private port data\n");
 	}
@@ -332,7 +332,7 @@ rte_eth_dev_rx_queue_config(struct rte_eth_dev *dev, uint16_t nb_queues)
 	if (dev->data->rx_queues == NULL) { /* first time configuration */
 		dev->data->rx_queues = rte_zmalloc("ethdev->rx_queues",
 				sizeof(dev->data->rx_queues[0]) * nb_queues,
-				CACHE_LINE_SIZE);
+				RTE_CACHE_LINE_SIZE);
 		if (dev->data->rx_queues == NULL) {
 			dev->data->nb_rx_queues = 0;
 			return -(ENOMEM);
@@ -345,7 +345,7 @@ rte_eth_dev_rx_queue_config(struct rte_eth_dev *dev, uint16_t nb_queues)
 		for (i = nb_queues; i < old_nb_queues; i++)
 			(*dev->dev_ops->rx_queue_release)(rxq[i]);
 		rxq = rte_realloc(rxq, sizeof(rxq[0]) * nb_queues,
-				CACHE_LINE_SIZE);
+				RTE_CACHE_LINE_SIZE);
 		if (rxq == NULL)
 			return -(ENOMEM);
 
@@ -474,7 +474,7 @@ rte_eth_dev_tx_queue_config(struct rte_eth_dev *dev, uint16_t nb_queues)
 	if (dev->data->tx_queues == NULL) { /* first time configuration */
 		dev->data->tx_queues = rte_zmalloc("ethdev->tx_queues",
 				sizeof(dev->data->tx_queues[0]) * nb_queues,
-				CACHE_LINE_SIZE);
+				RTE_CACHE_LINE_SIZE);
 		if (dev->data->tx_queues == NULL) {
 			dev->data->nb_tx_queues = 0;
 			return -(ENOMEM);
@@ -487,7 +487,7 @@ rte_eth_dev_tx_queue_config(struct rte_eth_dev *dev, uint16_t nb_queues)
 		for (i = nb_queues; i < old_nb_queues; i++)
 			(*dev->dev_ops->tx_queue_release)(txq[i]);
 		txq = rte_realloc(txq, sizeof(txq[0]) * nb_queues,
-				CACHE_LINE_SIZE);
+				RTE_CACHE_LINE_SIZE);
 		if (txq == NULL)
 			return -(ENOMEM);
 
diff --git a/lib/librte_hash/rte_hash.c b/lib/librte_hash/rte_hash.c
index d02b6b4..ba827d2 100644
--- a/lib/librte_hash/rte_hash.c
+++ b/lib/librte_hash/rte_hash.c
@@ -39,7 +39,7 @@
 #include <sys/queue.h>
 
 #include <rte_common.h>
-#include <rte_memory.h>         /* for definition of CACHE_LINE_SIZE */
+#include <rte_memory.h>         /* for definition of RTE_CACHE_LINE_SIZE */
 #include <rte_log.h>
 #include <rte_memcpy.h>
 #include <rte_prefetch.h>
@@ -206,11 +206,11 @@ rte_hash_create(const struct rte_hash_parameters *params)
 				     sizeof(hash_sig_t), SIG_BUCKET_ALIGNMENT);
 	key_size =  align_size(params->key_len, KEY_ALIGNMENT);
 
-	hash_tbl_size = align_size(sizeof(struct rte_hash), CACHE_LINE_SIZE);
+	hash_tbl_size = align_size(sizeof(struct rte_hash), RTE_CACHE_LINE_SIZE);
 	sig_tbl_size = align_size(num_buckets * sig_bucket_size,
-				  CACHE_LINE_SIZE);
+				  RTE_CACHE_LINE_SIZE);
 	key_tbl_size = align_size(num_buckets * key_size *
-				  params->bucket_entries, CACHE_LINE_SIZE);
+				  params->bucket_entries, RTE_CACHE_LINE_SIZE);
 
 	/* Total memory required for hash context */
 	mem_size = hash_tbl_size + sig_tbl_size + key_tbl_size;
@@ -233,7 +233,7 @@ rte_hash_create(const struct rte_hash_parameters *params)
 	}
 
 	h = (struct rte_hash *)rte_zmalloc_socket(hash_name, mem_size,
-					   CACHE_LINE_SIZE, params->socket_id);
+					   RTE_CACHE_LINE_SIZE, params->socket_id);
 	if (h == NULL) {
 		RTE_LOG(ERR, HASH, "memory allocation failed\n");
 		rte_free(te);
diff --git a/lib/librte_ip_frag/rte_ip_frag_common.c b/lib/librte_ip_frag/rte_ip_frag_common.c
index e4d16d0..c982d8c 100644
--- a/lib/librte_ip_frag/rte_ip_frag_common.c
+++ b/lib/librte_ip_frag/rte_ip_frag_common.c
@@ -87,7 +87,7 @@ rte_ip_frag_table_create(uint32_t bucket_num, uint32_t bucket_entries,
 	}
 
 	sz = sizeof (*tbl) + nb_entries * sizeof (tbl->pkt[0]);
-	if ((tbl = rte_zmalloc_socket(__func__, sz, CACHE_LINE_SIZE,
+	if ((tbl = rte_zmalloc_socket(__func__, sz, RTE_CACHE_LINE_SIZE,
 			socket_id)) == NULL) {
 		RTE_LOG(ERR, USER1,
 			"%s: allocation of %zu bytes at socket %d failed do\n",
diff --git a/lib/librte_lpm/rte_lpm.c b/lib/librte_lpm/rte_lpm.c
index 9e76988..983e04b 100644
--- a/lib/librte_lpm/rte_lpm.c
+++ b/lib/librte_lpm/rte_lpm.c
@@ -42,7 +42,7 @@
 #include <rte_log.h>
 #include <rte_branch_prediction.h>
 #include <rte_common.h>
-#include <rte_memory.h>        /* for definition of CACHE_LINE_SIZE */
+#include <rte_memory.h>        /* for definition of RTE_CACHE_LINE_SIZE */
 #include <rte_malloc.h>
 #include <rte_memzone.h>
 #include <rte_tailq.h>
@@ -199,7 +199,7 @@ rte_lpm_create(const char *name, int socket_id, int max_rules,
 
 	/* Allocate memory to store the LPM data structures. */
 	lpm = (struct rte_lpm *)rte_zmalloc_socket(mem_name, mem_size,
-			CACHE_LINE_SIZE, socket_id);
+			RTE_CACHE_LINE_SIZE, socket_id);
 	if (lpm == NULL) {
 		RTE_LOG(ERR, LPM, "LPM memory allocation failed\n");
 		rte_free(te);
diff --git a/lib/librte_lpm/rte_lpm6.c b/lib/librte_lpm/rte_lpm6.c
index 9157103..42e6d80 100644
--- a/lib/librte_lpm/rte_lpm6.c
+++ b/lib/librte_lpm/rte_lpm6.c
@@ -195,7 +195,7 @@ rte_lpm6_create(const char *name, int socket_id,
 
 	/* Allocate memory to store the LPM data structures. */
 	lpm = (struct rte_lpm6 *)rte_zmalloc_socket(mem_name, (size_t)mem_size,
-			CACHE_LINE_SIZE, socket_id);
+			RTE_CACHE_LINE_SIZE, socket_id);
 
 	if (lpm == NULL) {
 		RTE_LOG(ERR, LPM, "LPM memory allocation failed\n");
@@ -204,7 +204,7 @@ rte_lpm6_create(const char *name, int socket_id,
 	}
 
 	lpm->rules_tbl = (struct rte_lpm6_rule *)rte_zmalloc_socket(NULL,
-			(size_t)rules_size, CACHE_LINE_SIZE, socket_id);
+			(size_t)rules_size, RTE_CACHE_LINE_SIZE, socket_id);
 
 	if (lpm->rules_tbl == NULL) {
 		RTE_LOG(ERR, LPM, "LPM memory allocation failed\n");
diff --git a/lib/librte_malloc/malloc_elem.c b/lib/librte_malloc/malloc_elem.c
index 75a94d0..ef26e47 100644
--- a/lib/librte_malloc/malloc_elem.c
+++ b/lib/librte_malloc/malloc_elem.c
@@ -50,7 +50,7 @@
 #include "malloc_elem.h"
 #include "malloc_heap.h"
 
-#define MIN_DATA_SIZE (CACHE_LINE_SIZE)
+#define MIN_DATA_SIZE (RTE_CACHE_LINE_SIZE)
 
 /*
  * initialise a general malloc_elem header structure
@@ -308,7 +308,7 @@ malloc_elem_resize(struct malloc_elem *elem, size_t size)
 	if (elem->size - new_size >= MIN_DATA_SIZE + MALLOC_ELEM_OVERHEAD){
 		/* now we have a big block together. Lets cut it down a bit, by splitting */
 		struct malloc_elem *split_pt = RTE_PTR_ADD(elem, new_size);
-		split_pt = RTE_PTR_ALIGN_CEIL(split_pt, CACHE_LINE_SIZE);
+		split_pt = RTE_PTR_ALIGN_CEIL(split_pt, RTE_CACHE_LINE_SIZE);
 		split_elem(elem, split_pt);
 		malloc_elem_free_list_insert(split_pt);
 	}
diff --git a/lib/librte_malloc/malloc_elem.h b/lib/librte_malloc/malloc_elem.h
index 1d666a5..72f22a1 100644
--- a/lib/librte_malloc/malloc_elem.h
+++ b/lib/librte_malloc/malloc_elem.h
@@ -74,7 +74,7 @@ set_trailer(struct malloc_elem *elem __rte_unused){ }
 
 
 #else
-static const unsigned MALLOC_ELEM_TRAILER_LEN = CACHE_LINE_SIZE;
+static const unsigned MALLOC_ELEM_TRAILER_LEN = RTE_CACHE_LINE_SIZE;
 
 #define MALLOC_HEADER_COOKIE   0xbadbadbadadd2e55ULL /**< Header cookie. */
 #define MALLOC_TRAILER_COOKIE  0xadd2e55badbadbadULL /**< Trailer cookie.*/
diff --git a/lib/librte_malloc/malloc_heap.c b/lib/librte_malloc/malloc_heap.c
index 94be0af..a1d0ebb 100644
--- a/lib/librte_malloc/malloc_heap.c
+++ b/lib/librte_malloc/malloc_heap.c
@@ -109,7 +109,7 @@ malloc_heap_add_memzone(struct malloc_heap *heap, size_t size, unsigned align)
 	struct malloc_elem *start_elem = (struct malloc_elem *)mz->addr;
 	struct malloc_elem *end_elem = RTE_PTR_ADD(mz->addr,
 			mz_size - MALLOC_ELEM_OVERHEAD);
-	end_elem = RTE_PTR_ALIGN_FLOOR(end_elem, CACHE_LINE_SIZE);
+	end_elem = RTE_PTR_ALIGN_FLOOR(end_elem, RTE_CACHE_LINE_SIZE);
 
 	const unsigned elem_size = (uintptr_t)end_elem - (uintptr_t)start_elem;
 	malloc_elem_init(start_elem, heap, mz, elem_size);
diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index 332f469..bb09dae 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -114,10 +114,10 @@ static unsigned optimize_object_size(unsigned obj_size)
 		nrank = 1;
 
 	/* process new object size */
-	new_obj_size = (obj_size + CACHE_LINE_MASK) / CACHE_LINE_SIZE;
+	new_obj_size = (obj_size + CACHE_LINE_MASK) / RTE_CACHE_LINE_SIZE;
 	while (get_gcd(new_obj_size, nrank * nchan) != 1)
 		new_obj_size++;
-	return new_obj_size * CACHE_LINE_SIZE;
+	return new_obj_size * RTE_CACHE_LINE_SIZE;
 }
 
 static void
@@ -255,7 +255,7 @@ rte_mempool_calc_obj_size(uint32_t elt_size, uint32_t flags,
 #endif
 	if ((flags & MEMPOOL_F_NO_CACHE_ALIGN) == 0)
 		sz->header_size = RTE_ALIGN_CEIL(sz->header_size,
-			CACHE_LINE_SIZE);
+			RTE_CACHE_LINE_SIZE);
 
 	/* trailer contains the cookie in debug mode */
 	sz->trailer_size = 0;
@@ -269,7 +269,7 @@ rte_mempool_calc_obj_size(uint32_t elt_size, uint32_t flags,
 	if ((flags & MEMPOOL_F_NO_CACHE_ALIGN) == 0) {
 		sz->total_size = sz->header_size + sz->elt_size +
 			sz->trailer_size;
-		sz->trailer_size += ((CACHE_LINE_SIZE -
+		sz->trailer_size += ((RTE_CACHE_LINE_SIZE -
 				  (sz->total_size & CACHE_LINE_MASK)) &
 				 CACHE_LINE_MASK);
 	}
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index 7b641b0..3314651 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -216,7 +216,7 @@ struct rte_mempool {
  */
 #define	MEMPOOL_HEADER_SIZE(mp, pgn)	(sizeof(*(mp)) + \
 	RTE_ALIGN_CEIL(((pgn) - RTE_DIM((mp)->elt_pa)) * \
-	sizeof ((mp)->elt_pa[0]), CACHE_LINE_SIZE))
+	sizeof ((mp)->elt_pa[0]), RTE_CACHE_LINE_SIZE))
 
 /**
  * Returns TRUE if whole mempool is allocated in one contiguous block of memory.
diff --git a/lib/librte_pipeline/rte_pipeline.c b/lib/librte_pipeline/rte_pipeline.c
index f0349e3..ac7e887 100644
--- a/lib/librte_pipeline/rte_pipeline.c
+++ b/lib/librte_pipeline/rte_pipeline.c
@@ -203,7 +203,7 @@ rte_pipeline_create(struct rte_pipeline_params *params)
 
 	/* Allocate memory for the pipeline on requested socket */
 	p = rte_zmalloc_socket("PIPELINE", sizeof(struct rte_pipeline),
-			CACHE_LINE_SIZE, params->socket_id);
+			RTE_CACHE_LINE_SIZE, params->socket_id);
 
 	if (p == NULL) {
 		RTE_LOG(ERR, PIPELINE,
@@ -343,7 +343,7 @@ rte_pipeline_table_create(struct rte_pipeline *p,
 	entry_size = sizeof(struct rte_pipeline_table_entry) +
 		params->action_data_size;
 	default_entry = (struct rte_pipeline_table_entry *) rte_zmalloc_socket(
-		"PIPELINE", entry_size, CACHE_LINE_SIZE, p->socket_id);
+		"PIPELINE", entry_size, RTE_CACHE_LINE_SIZE, p->socket_id);
 	if (default_entry == NULL) {
 		RTE_LOG(ERR, PIPELINE,
 			"%s: Failed to allocate default entry\n", __func__);
diff --git a/lib/librte_pmd_e1000/em_rxtx.c b/lib/librte_pmd_e1000/em_rxtx.c
index 70d398f..aa0b88c 100644
--- a/lib/librte_pmd_e1000/em_rxtx.c
+++ b/lib/librte_pmd_e1000/em_rxtx.c
@@ -1120,7 +1120,7 @@ ring_dma_zone_reserve(struct rte_eth_dev *dev, const char *ring_name,
 
 #ifdef RTE_LIBRTE_XEN_DOM0
 	return rte_memzone_reserve_bounded(z_name, ring_size,
-			socket_id, 0, CACHE_LINE_SIZE, RTE_PGSIZE_2M);
+			socket_id, 0, RTE_CACHE_LINE_SIZE, RTE_PGSIZE_2M);
 #else
 	return rte_memzone_reserve(z_name, ring_size, socket_id, 0);
 #endif
@@ -1279,13 +1279,13 @@ eth_em_tx_queue_setup(struct rte_eth_dev *dev,
 
 	/* Allocate the tx queue data structure. */
 	if ((txq = rte_zmalloc("ethdev TX queue", sizeof(*txq),
-			CACHE_LINE_SIZE)) == NULL)
+			RTE_CACHE_LINE_SIZE)) == NULL)
 		return (-ENOMEM);
 
 	/* Allocate software ring */
 	if ((txq->sw_ring = rte_zmalloc("txq->sw_ring",
 			sizeof(txq->sw_ring[0]) * nb_desc,
-			CACHE_LINE_SIZE)) == NULL) {
+			RTE_CACHE_LINE_SIZE)) == NULL) {
 		em_tx_queue_release(txq);
 		return (-ENOMEM);
 	}
@@ -1406,13 +1406,13 @@ eth_em_rx_queue_setup(struct rte_eth_dev *dev,
 
 	/* Allocate the RX queue data structure. */
 	if ((rxq = rte_zmalloc("ethdev RX queue", sizeof(*rxq),
-			CACHE_LINE_SIZE)) == NULL)
+			RTE_CACHE_LINE_SIZE)) == NULL)
 		return (-ENOMEM);
 
 	/* Allocate software ring. */
 	if ((rxq->sw_ring = rte_zmalloc("rxq->sw_ring",
 			sizeof (rxq->sw_ring[0]) * nb_desc,
-			CACHE_LINE_SIZE)) == NULL) {
+			RTE_CACHE_LINE_SIZE)) == NULL) {
 		em_rx_queue_release(rxq);
 		return (-ENOMEM);
 	}
diff --git a/lib/librte_pmd_e1000/igb_rxtx.c b/lib/librte_pmd_e1000/igb_rxtx.c
index 0dca7b7..bc2999c 100644
--- a/lib/librte_pmd_e1000/igb_rxtx.c
+++ b/lib/librte_pmd_e1000/igb_rxtx.c
@@ -1240,7 +1240,7 @@ eth_igb_tx_queue_setup(struct rte_eth_dev *dev,
 
 	/* First allocate the tx queue data structure */
 	txq = rte_zmalloc("ethdev TX queue", sizeof(struct igb_tx_queue),
-							CACHE_LINE_SIZE);
+							RTE_CACHE_LINE_SIZE);
 	if (txq == NULL)
 		return (-ENOMEM);
 
@@ -1278,7 +1278,7 @@ eth_igb_tx_queue_setup(struct rte_eth_dev *dev,
 	/* Allocate software ring */
 	txq->sw_ring = rte_zmalloc("txq->sw_ring",
 				   sizeof(struct igb_tx_entry) * nb_desc,
-				   CACHE_LINE_SIZE);
+				   RTE_CACHE_LINE_SIZE);
 	if (txq->sw_ring == NULL) {
 		igb_tx_queue_release(txq);
 		return (-ENOMEM);
@@ -1374,7 +1374,7 @@ eth_igb_rx_queue_setup(struct rte_eth_dev *dev,
 
 	/* First allocate the RX queue data structure. */
 	rxq = rte_zmalloc("ethdev RX queue", sizeof(struct igb_rx_queue),
-			  CACHE_LINE_SIZE);
+			  RTE_CACHE_LINE_SIZE);
 	if (rxq == NULL)
 		return (-ENOMEM);
 	rxq->mb_pool = mp;
@@ -1416,7 +1416,7 @@ eth_igb_rx_queue_setup(struct rte_eth_dev *dev,
 	/* Allocate software ring. */
 	rxq->sw_ring = rte_zmalloc("rxq->sw_ring",
 				   sizeof(struct igb_rx_entry) * nb_desc,
-				   CACHE_LINE_SIZE);
+				   RTE_CACHE_LINE_SIZE);
 	if (rxq->sw_ring == NULL) {
 		igb_rx_queue_release(rxq);
 		return (-ENOMEM);
diff --git a/lib/librte_pmd_i40e/i40e_rxtx.c b/lib/librte_pmd_i40e/i40e_rxtx.c
index 487591d..0cd8c98 100644
--- a/lib/librte_pmd_i40e/i40e_rxtx.c
+++ b/lib/librte_pmd_i40e/i40e_rxtx.c
@@ -1697,7 +1697,7 @@ i40e_dev_rx_queue_setup(struct rte_eth_dev *dev,
 	/* Allocate the rx queue data structure */
 	rxq = rte_zmalloc_socket("i40e rx queue",
 				 sizeof(struct i40e_rx_queue),
-				 CACHE_LINE_SIZE,
+				 RTE_CACHE_LINE_SIZE,
 				 socket_id);
 	if (!rxq) {
 		PMD_DRV_LOG(ERR, "Failed to allocate memory for "
@@ -1756,7 +1756,7 @@ i40e_dev_rx_queue_setup(struct rte_eth_dev *dev,
 	rxq->sw_ring =
 		rte_zmalloc_socket("i40e rx sw ring",
 				   sizeof(struct i40e_rx_entry) * len,
-				   CACHE_LINE_SIZE,
+				   RTE_CACHE_LINE_SIZE,
 				   socket_id);
 	if (!rxq->sw_ring) {
 		i40e_dev_rx_queue_release(rxq);
@@ -1981,7 +1981,7 @@ i40e_dev_tx_queue_setup(struct rte_eth_dev *dev,
 	/* Allocate the TX queue data structure. */
 	txq = rte_zmalloc_socket("i40e tx queue",
 				  sizeof(struct i40e_tx_queue),
-				  CACHE_LINE_SIZE,
+				  RTE_CACHE_LINE_SIZE,
 				  socket_id);
 	if (!txq) {
 		PMD_DRV_LOG(ERR, "Failed to allocate memory for "
@@ -2032,7 +2032,7 @@ i40e_dev_tx_queue_setup(struct rte_eth_dev *dev,
 	txq->sw_ring =
 		rte_zmalloc_socket("i40e tx sw ring",
 				   sizeof(struct i40e_tx_entry) * nb_desc,
-				   CACHE_LINE_SIZE,
+				   RTE_CACHE_LINE_SIZE,
 				   socket_id);
 	if (!txq->sw_ring) {
 		i40e_dev_tx_queue_release(txq);
diff --git a/lib/librte_pmd_ixgbe/ixgbe_rxtx.c b/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
index f9b3fe3..0b6f2c7 100644
--- a/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
+++ b/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
@@ -1825,7 +1825,7 @@ ixgbe_dev_tx_queue_setup(struct rte_eth_dev *dev,
 
 	/* First allocate the tx queue data structure */
 	txq = rte_zmalloc_socket("ethdev TX queue", sizeof(struct igb_tx_queue),
-				 CACHE_LINE_SIZE, socket_id);
+				 RTE_CACHE_LINE_SIZE, socket_id);
 	if (txq == NULL)
 		return (-ENOMEM);
 
@@ -1873,7 +1873,7 @@ ixgbe_dev_tx_queue_setup(struct rte_eth_dev *dev,
 	/* Allocate software ring */
 	txq->sw_ring = rte_zmalloc_socket("txq->sw_ring",
 				sizeof(struct igb_tx_entry) * nb_desc,
-				CACHE_LINE_SIZE, socket_id);
+				RTE_CACHE_LINE_SIZE, socket_id);
 	if (txq->sw_ring == NULL) {
 		ixgbe_tx_queue_release(txq);
 		return (-ENOMEM);
@@ -2111,7 +2111,7 @@ ixgbe_dev_rx_queue_setup(struct rte_eth_dev *dev,
 
 	/* First allocate the rx queue data structure */
 	rxq = rte_zmalloc_socket("ethdev RX queue", sizeof(struct igb_rx_queue),
-				 CACHE_LINE_SIZE, socket_id);
+				 RTE_CACHE_LINE_SIZE, socket_id);
 	if (rxq == NULL)
 		return (-ENOMEM);
 	rxq->mb_pool = mp;
@@ -2177,7 +2177,7 @@ ixgbe_dev_rx_queue_setup(struct rte_eth_dev *dev,
 #endif
 	rxq->sw_ring = rte_zmalloc_socket("rxq->sw_ring",
 					  sizeof(struct igb_rx_entry) * len,
-					  CACHE_LINE_SIZE, socket_id);
+					  RTE_CACHE_LINE_SIZE, socket_id);
 	if (rxq->sw_ring == NULL) {
 		ixgbe_rx_queue_release(rxq);
 		return (-ENOMEM);
diff --git a/lib/librte_pmd_virtio/virtio_ethdev.c b/lib/librte_pmd_virtio/virtio_ethdev.c
index c009f2a..b3b5bb6 100644
--- a/lib/librte_pmd_virtio/virtio_ethdev.c
+++ b/lib/librte_pmd_virtio/virtio_ethdev.c
@@ -274,18 +274,18 @@ int virtio_dev_queue_setup(struct rte_eth_dev *dev,
 		snprintf(vq_name, sizeof(vq_name), "port%d_rvq%d",
 			dev->data->port_id, queue_idx);
 		vq = rte_zmalloc(vq_name, sizeof(struct virtqueue) +
-			vq_size * sizeof(struct vq_desc_extra), CACHE_LINE_SIZE);
+			vq_size * sizeof(struct vq_desc_extra), RTE_CACHE_LINE_SIZE);
 	} else if (queue_type == VTNET_TQ) {
 		snprintf(vq_name, sizeof(vq_name), "port%d_tvq%d",
 			dev->data->port_id, queue_idx);
 		vq = rte_zmalloc(vq_name, sizeof(struct virtqueue) +
-			vq_size * sizeof(struct vq_desc_extra), CACHE_LINE_SIZE);
+			vq_size * sizeof(struct vq_desc_extra), RTE_CACHE_LINE_SIZE);
 	} else if (queue_type == VTNET_CQ) {
 		snprintf(vq_name, sizeof(vq_name), "port%d_cvq",
 			dev->data->port_id);
 		vq = rte_zmalloc(vq_name, sizeof(struct virtqueue) +
 			vq_size * sizeof(struct vq_desc_extra),
-			CACHE_LINE_SIZE);
+			RTE_CACHE_LINE_SIZE);
 	}
 	if (vq == NULL) {
 		PMD_INIT_LOG(ERR, "%s: Can not allocate virtqueue", __func__);
@@ -342,7 +342,7 @@ int virtio_dev_queue_setup(struct rte_eth_dev *dev,
 			dev->data->port_id, queue_idx);
 		vq->virtio_net_hdr_mz = rte_memzone_reserve_aligned(vq_name,
 			vq_size * hw->vtnet_hdr_size,
-			socket_id, 0, CACHE_LINE_SIZE);
+			socket_id, 0, RTE_CACHE_LINE_SIZE);
 		if (vq->virtio_net_hdr_mz == NULL) {
 			rte_free(vq);
 			return -ENOMEM;
@@ -356,7 +356,7 @@ int virtio_dev_queue_setup(struct rte_eth_dev *dev,
 		snprintf(vq_name, sizeof(vq_name), "port%d_cvq_hdrzone",
 			dev->data->port_id);
 		vq->virtio_net_hdr_mz = rte_memzone_reserve_aligned(vq_name,
-			PAGE_SIZE, socket_id, 0, CACHE_LINE_SIZE);
+			PAGE_SIZE, socket_id, 0, RTE_CACHE_LINE_SIZE);
 		if (vq->virtio_net_hdr_mz == NULL) {
 			rte_free(vq);
 			return -ENOMEM;
diff --git a/lib/librte_pmd_virtio/virtio_rxtx.c b/lib/librte_pmd_virtio/virtio_rxtx.c
index 3f6bad2..c013f97 100644
--- a/lib/librte_pmd_virtio/virtio_rxtx.c
+++ b/lib/librte_pmd_virtio/virtio_rxtx.c
@@ -441,7 +441,7 @@ virtio_discard_rxbuf(struct virtqueue *vq, struct rte_mbuf *m)
 }
 
 #define VIRTIO_MBUF_BURST_SZ 64
-#define DESC_PER_CACHELINE (CACHE_LINE_SIZE / sizeof(struct vring_desc))
+#define DESC_PER_CACHELINE (RTE_CACHE_LINE_SIZE / sizeof(struct vring_desc))
 uint16_t
 virtio_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 {
diff --git a/lib/librte_pmd_vmxnet3/vmxnet3_ethdev.c b/lib/librte_pmd_vmxnet3/vmxnet3_ethdev.c
index 64789ac..963a8a5 100644
--- a/lib/librte_pmd_vmxnet3/vmxnet3_ethdev.c
+++ b/lib/librte_pmd_vmxnet3/vmxnet3_ethdev.c
@@ -347,7 +347,7 @@ vmxnet3_dev_configure(struct rte_eth_dev *dev)
 
 		/* Allocate memory structure for UPT1_RSSConf and configure */
 		mz = gpa_zone_reserve(dev, sizeof(struct VMXNET3_RSSConf), "rss_conf",
-				      rte_socket_id(), CACHE_LINE_SIZE);
+				      rte_socket_id(), RTE_CACHE_LINE_SIZE);
 		if (mz == NULL) {
 			PMD_INIT_LOG(ERR,
 				     "ERROR: Creating rss_conf structure zone");
diff --git a/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c b/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c
index 6c69f84..8425f32 100644
--- a/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c
+++ b/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c
@@ -744,7 +744,7 @@ vmxnet3_dev_tx_queue_setup(struct rte_eth_dev *dev,
 		return -EINVAL;
 	}
 
-	txq = rte_zmalloc("ethdev_tx_queue", sizeof(struct vmxnet3_tx_queue), CACHE_LINE_SIZE);
+	txq = rte_zmalloc("ethdev_tx_queue", sizeof(struct vmxnet3_tx_queue), RTE_CACHE_LINE_SIZE);
 	if (txq == NULL) {
 		PMD_INIT_LOG(ERR, "Can not allocate tx queue structure");
 		return -ENOMEM;
@@ -810,7 +810,7 @@ vmxnet3_dev_tx_queue_setup(struct rte_eth_dev *dev,
 
 	/* cmd_ring0 buf_info allocation */
 	ring->buf_info = rte_zmalloc("tx_ring_buf_info",
-				     ring->size * sizeof(vmxnet3_buf_info_t), CACHE_LINE_SIZE);
+				     ring->size * sizeof(vmxnet3_buf_info_t), RTE_CACHE_LINE_SIZE);
 	if (ring->buf_info == NULL) {
 		PMD_INIT_LOG(ERR, "ERROR: Creating tx_buf_info structure");
 		return -ENOMEM;
@@ -855,7 +855,7 @@ vmxnet3_dev_rx_queue_setup(struct rte_eth_dev *dev,
 		return -EINVAL;
 	}
 
-	rxq = rte_zmalloc("ethdev_rx_queue", sizeof(struct vmxnet3_rx_queue), CACHE_LINE_SIZE);
+	rxq = rte_zmalloc("ethdev_rx_queue", sizeof(struct vmxnet3_rx_queue), RTE_CACHE_LINE_SIZE);
 	if (rxq == NULL) {
 		PMD_INIT_LOG(ERR, "Can not allocate rx queue structure");
 		return -ENOMEM;
@@ -929,7 +929,7 @@ vmxnet3_dev_rx_queue_setup(struct rte_eth_dev *dev,
 		ring->rid = i;
 		snprintf(mem_name, sizeof(mem_name), "rx_ring_%d_buf_info", i);
 
-		ring->buf_info = rte_zmalloc(mem_name, ring->size * sizeof(vmxnet3_buf_info_t), CACHE_LINE_SIZE);
+		ring->buf_info = rte_zmalloc(mem_name, ring->size * sizeof(vmxnet3_buf_info_t), RTE_CACHE_LINE_SIZE);
 		if (ring->buf_info == NULL) {
 			PMD_INIT_LOG(ERR, "ERROR: Creating rx_buf_info structure");
 			return -ENOMEM;
diff --git a/lib/librte_pmd_xenvirt/rte_eth_xenvirt.c b/lib/librte_pmd_xenvirt/rte_eth_xenvirt.c
index 891cb58..6555ec5 100644
--- a/lib/librte_pmd_xenvirt/rte_eth_xenvirt.c
+++ b/lib/librte_pmd_xenvirt/rte_eth_xenvirt.c
@@ -452,7 +452,7 @@ virtio_queue_setup(struct rte_eth_dev *dev, int queue_type)
 		snprintf(vq_name, sizeof(vq_name), "port%d_rvq",
 				dev->data->port_id);
 		vq = rte_zmalloc(vq_name, sizeof(struct virtqueue) +
-			vq_size * sizeof(struct vq_desc_extra), CACHE_LINE_SIZE);
+			vq_size * sizeof(struct vq_desc_extra), RTE_CACHE_LINE_SIZE);
 		if (vq == NULL) {
 			RTE_LOG(ERR, PMD, "%s: unabled to allocate virtqueue\n", __func__);
 			return NULL;
@@ -462,7 +462,7 @@ virtio_queue_setup(struct rte_eth_dev *dev, int queue_type)
 		snprintf(vq_name, sizeof(vq_name), "port%d_tvq",
 			dev->data->port_id);
 		vq = rte_zmalloc(vq_name, sizeof(struct virtqueue) +
-			vq_size * sizeof(struct vq_desc_extra), CACHE_LINE_SIZE);
+			vq_size * sizeof(struct vq_desc_extra), RTE_CACHE_LINE_SIZE);
 		if (vq == NULL) {
 			RTE_LOG(ERR, PMD, "%s: unabled to allocate virtqueue\n", __func__);
 			return NULL;
@@ -556,7 +556,7 @@ rte_eth_xenvirt_parse_args(struct xenvirt_dict *dict,
 	if (params == NULL)
 		return 0;
 
-	args = rte_zmalloc(NULL, strlen(params) + 1, CACHE_LINE_SIZE);
+	args = rte_zmalloc(NULL, strlen(params) + 1, RTE_CACHE_LINE_SIZE);
 	if (args == NULL) {
 		RTE_LOG(ERR, PMD, "Couldn't parse %s device \n", name);
 		return -1;
diff --git a/lib/librte_port/rte_port_ethdev.c b/lib/librte_port/rte_port_ethdev.c
index 2d6f279..d014913 100644
--- a/lib/librte_port/rte_port_ethdev.c
+++ b/lib/librte_port/rte_port_ethdev.c
@@ -61,7 +61,7 @@ rte_port_ethdev_reader_create(void *params, int socket_id)
 
 	/* Memory allocation */
 	port = rte_zmalloc_socket("PORT", sizeof(*port),
-			CACHE_LINE_SIZE, socket_id);
+			RTE_CACHE_LINE_SIZE, socket_id);
 	if (port == NULL) {
 		RTE_LOG(ERR, PORT, "%s: Failed to allocate port\n", __func__);
 		return NULL;
@@ -128,7 +128,7 @@ rte_port_ethdev_writer_create(void *params, int socket_id)
 
 	/* Memory allocation */
 	port = rte_zmalloc_socket("PORT", sizeof(*port),
-			CACHE_LINE_SIZE, socket_id);
+			RTE_CACHE_LINE_SIZE, socket_id);
 	if (port == NULL) {
 		RTE_LOG(ERR, PORT, "%s: Failed to allocate port\n", __func__);
 		return NULL;
diff --git a/lib/librte_port/rte_port_frag.c b/lib/librte_port/rte_port_frag.c
index 9f1bd3c..57d930b 100644
--- a/lib/librte_port/rte_port_frag.c
+++ b/lib/librte_port/rte_port_frag.c
@@ -93,7 +93,7 @@ rte_port_ring_reader_ipv4_frag_create(void *params, int socket_id)
 	}
 
 	/* Memory allocation */
-	port = rte_zmalloc_socket("PORT", sizeof(*port), CACHE_LINE_SIZE,
+	port = rte_zmalloc_socket("PORT", sizeof(*port), RTE_CACHE_LINE_SIZE,
 		socket_id);
 	if (port == NULL) {
 		RTE_LOG(ERR, PORT, "%s: port is NULL\n", __func__);
diff --git a/lib/librte_port/rte_port_ras.c b/lib/librte_port/rte_port_ras.c
index b1ac297..b6ab67a 100644
--- a/lib/librte_port/rte_port_ras.c
+++ b/lib/librte_port/rte_port_ras.c
@@ -86,7 +86,7 @@ rte_port_ring_writer_ipv4_ras_create(void *params, int socket_id)
 
 	/* Memory allocation */
 	port = rte_zmalloc_socket("PORT", sizeof(*port),
-			CACHE_LINE_SIZE, socket_id);
+			RTE_CACHE_LINE_SIZE, socket_id);
 	if (port == NULL) {
 		RTE_LOG(ERR, PORT, "%s: Failed to allocate socket\n", __func__);
 		return NULL;
diff --git a/lib/librte_port/rte_port_ring.c b/lib/librte_port/rte_port_ring.c
index 85bab63..fa3d77b 100644
--- a/lib/librte_port/rte_port_ring.c
+++ b/lib/librte_port/rte_port_ring.c
@@ -60,7 +60,7 @@ rte_port_ring_reader_create(void *params, int socket_id)
 
 	/* Memory allocation */
 	port = rte_zmalloc_socket("PORT", sizeof(*port),
-			CACHE_LINE_SIZE, socket_id);
+			RTE_CACHE_LINE_SIZE, socket_id);
 	if (port == NULL) {
 		RTE_LOG(ERR, PORT, "%s: Failed to allocate port\n", __func__);
 		return NULL;
@@ -120,7 +120,7 @@ rte_port_ring_writer_create(void *params, int socket_id)
 
 	/* Memory allocation */
 	port = rte_zmalloc_socket("PORT", sizeof(*port),
-			CACHE_LINE_SIZE, socket_id);
+			RTE_CACHE_LINE_SIZE, socket_id);
 	if (port == NULL) {
 		RTE_LOG(ERR, PORT, "%s: Failed to allocate port\n", __func__);
 		return NULL;
diff --git a/lib/librte_port/rte_port_sched.c b/lib/librte_port/rte_port_sched.c
index 0e71494..2107f4c 100644
--- a/lib/librte_port/rte_port_sched.c
+++ b/lib/librte_port/rte_port_sched.c
@@ -60,7 +60,7 @@ rte_port_sched_reader_create(void *params, int socket_id)
 
 	/* Memory allocation */
 	port = rte_zmalloc_socket("PORT", sizeof(*port),
-			CACHE_LINE_SIZE, socket_id);
+			RTE_CACHE_LINE_SIZE, socket_id);
 	if (port == NULL) {
 		RTE_LOG(ERR, PORT, "%s: Failed to allocate port\n", __func__);
 		return NULL;
@@ -123,7 +123,7 @@ rte_port_sched_writer_create(void *params, int socket_id)
 
 	/* Memory allocation */
 	port = rte_zmalloc_socket("PORT", sizeof(*port),
-			CACHE_LINE_SIZE, socket_id);
+			RTE_CACHE_LINE_SIZE, socket_id);
 	if (port == NULL) {
 		RTE_LOG(ERR, PORT, "%s: Failed to allocate port\n", __func__);
 		return NULL;
diff --git a/lib/librte_port/rte_port_source_sink.c b/lib/librte_port/rte_port_source_sink.c
index 23e3878..b9a25bb 100644
--- a/lib/librte_port/rte_port_source_sink.c
+++ b/lib/librte_port/rte_port_source_sink.c
@@ -61,7 +61,7 @@ rte_port_source_create(void *params, int socket_id)
 
 	/* Memory allocation */
 	port = rte_zmalloc_socket("PORT", sizeof(*port),
-			CACHE_LINE_SIZE, socket_id);
+			RTE_CACHE_LINE_SIZE, socket_id);
 	if (port == NULL) {
 		RTE_LOG(ERR, PORT, "%s: Failed to allocate port\n", __func__);
 		return NULL;
diff --git a/lib/librte_ring/rte_ring.c b/lib/librte_ring/rte_ring.c
index b9ddccc..e007b0f 100644
--- a/lib/librte_ring/rte_ring.c
+++ b/lib/librte_ring/rte_ring.c
@@ -110,7 +110,7 @@ rte_ring_get_memsize(unsigned count)
 	}
 
 	sz = sizeof(struct rte_ring) + count * sizeof(void *);
-	sz = RTE_ALIGN(sz, CACHE_LINE_SIZE);
+	sz = RTE_ALIGN(sz, RTE_CACHE_LINE_SIZE);
 	return sz;
 }
 
diff --git a/lib/librte_sched/rte_bitmap.h b/lib/librte_sched/rte_bitmap.h
index 89ed7fb..43d1d43 100644
--- a/lib/librte_sched/rte_bitmap.h
+++ b/lib/librte_sched/rte_bitmap.h
@@ -83,7 +83,7 @@ extern "C" {
 #define RTE_BITMAP_SLAB_BIT_MASK                 (RTE_BITMAP_SLAB_BIT_SIZE - 1)
 
 /* Cache line (CL) */
-#define RTE_BITMAP_CL_BIT_SIZE                   (CACHE_LINE_SIZE * 8)
+#define RTE_BITMAP_CL_BIT_SIZE                   (RTE_CACHE_LINE_SIZE * 8)
 #define RTE_BITMAP_CL_BIT_SIZE_LOG2              9
 #define RTE_BITMAP_CL_BIT_MASK                   (RTE_BITMAP_CL_BIT_SIZE - 1)
 
@@ -178,7 +178,7 @@ __rte_bitmap_get_memory_footprint(uint32_t n_bits,
 	n_slabs_array1 = rte_align32pow2(n_slabs_array1);
 	n_slabs_context = (sizeof(struct rte_bitmap) + (RTE_BITMAP_SLAB_BIT_SIZE / 8) - 1) / (RTE_BITMAP_SLAB_BIT_SIZE / 8);
 	n_cache_lines_context_and_array1 = (n_slabs_context + n_slabs_array1 + RTE_BITMAP_CL_SLAB_SIZE - 1) / RTE_BITMAP_CL_SLAB_SIZE;
-	n_bytes_total = (n_cache_lines_context_and_array1 + n_cache_lines_array2) * CACHE_LINE_SIZE;
+	n_bytes_total = (n_cache_lines_context_and_array1 + n_cache_lines_array2) * RTE_CACHE_LINE_SIZE;
 
 	if (array1_byte_offset) {
 		*array1_byte_offset = n_slabs_context * (RTE_BITMAP_SLAB_BIT_SIZE / 8);
@@ -187,7 +187,7 @@ __rte_bitmap_get_memory_footprint(uint32_t n_bits,
 		*array1_slabs = n_slabs_array1;
 	}
 	if (array2_byte_offset) {
-		*array2_byte_offset = n_cache_lines_context_and_array1 * CACHE_LINE_SIZE;
+		*array2_byte_offset = n_cache_lines_context_and_array1 * RTE_CACHE_LINE_SIZE;
 	}
 	if (array2_slabs) {
 		*array2_slabs = n_cache_lines_array2 * RTE_BITMAP_CL_SLAB_SIZE;
diff --git a/lib/librte_sched/rte_sched.c b/lib/librte_sched/rte_sched.c
index ba60277..1447a27 100644
--- a/lib/librte_sched/rte_sched.c
+++ b/lib/librte_sched/rte_sched.c
@@ -617,7 +617,7 @@ rte_sched_port_config(struct rte_sched_port_params *params)
 	}
 
 	/* Allocate memory to store the data structures */
-	port = rte_zmalloc("qos_params", mem_size, CACHE_LINE_SIZE);
+	port = rte_zmalloc("qos_params", mem_size, RTE_CACHE_LINE_SIZE);
 	if (port == NULL) {
 		return NULL;
 	}
diff --git a/lib/librte_table/rte_table_acl.c b/lib/librte_table/rte_table_acl.c
index c6d389e..ed0aae8 100644
--- a/lib/librte_table/rte_table_acl.c
+++ b/lib/librte_table/rte_table_acl.c
@@ -75,7 +75,7 @@ rte_table_acl_create(
 	uint32_t action_table_size, acl_rule_list_size, acl_rule_memory_size;
 	uint32_t total_size;
 
-	RTE_BUILD_BUG_ON(((sizeof(struct rte_table_acl) % CACHE_LINE_SIZE)
+	RTE_BUILD_BUG_ON(((sizeof(struct rte_table_acl) % RTE_CACHE_LINE_SIZE)
 		!= 0));
 
 	/* Check input parameters */
@@ -110,7 +110,7 @@ rte_table_acl_create(
 	total_size = sizeof(struct rte_table_acl) + action_table_size +
 		acl_rule_list_size + acl_rule_memory_size;
 
-	acl = rte_zmalloc_socket("TABLE", total_size, CACHE_LINE_SIZE,
+	acl = rte_zmalloc_socket("TABLE", total_size, RTE_CACHE_LINE_SIZE,
 		socket_id);
 	if (acl == NULL) {
 		RTE_LOG(ERR, TABLE,
diff --git a/lib/librte_table/rte_table_array.c b/lib/librte_table/rte_table_array.c
index f0f5e1e..0b1d42a 100644
--- a/lib/librte_table/rte_table_array.c
+++ b/lib/librte_table/rte_table_array.c
@@ -72,11 +72,11 @@ rte_table_array_create(void *params, int socket_id, uint32_t entry_size)
 
 	/* Memory allocation */
 	total_cl_size = (sizeof(struct rte_table_array) +
-			CACHE_LINE_SIZE) / CACHE_LINE_SIZE;
+			RTE_CACHE_LINE_SIZE) / RTE_CACHE_LINE_SIZE;
 	total_cl_size += (p->n_entries * entry_size +
-			CACHE_LINE_SIZE) / CACHE_LINE_SIZE;
-	total_size = total_cl_size * CACHE_LINE_SIZE;
-	t = rte_zmalloc_socket("TABLE", total_size, CACHE_LINE_SIZE, socket_id);
+			RTE_CACHE_LINE_SIZE) / RTE_CACHE_LINE_SIZE;
+	total_size = total_cl_size * RTE_CACHE_LINE_SIZE;
+	t = rte_zmalloc_socket("TABLE", total_size, RTE_CACHE_LINE_SIZE, socket_id);
 	if (t == NULL) {
 		RTE_LOG(ERR, TABLE,
 			"%s: Cannot allocate %u bytes for array table\n",
diff --git a/lib/librte_table/rte_table_hash_ext.c b/lib/librte_table/rte_table_hash_ext.c
index 6e26d98..638c2cd 100644
--- a/lib/librte_table/rte_table_hash_ext.c
+++ b/lib/librte_table/rte_table_hash_ext.c
@@ -180,8 +180,8 @@ rte_table_hash_ext_create(void *params, int socket_id, uint32_t entry_size)
 	/* Check input parameters */
 	if ((check_params_create(p) != 0) ||
 		(!rte_is_power_of_2(entry_size)) ||
-		((sizeof(struct rte_table_hash) % CACHE_LINE_SIZE) != 0) ||
-		(sizeof(struct bucket) != (CACHE_LINE_SIZE / 2)))
+		((sizeof(struct rte_table_hash) % RTE_CACHE_LINE_SIZE) != 0) ||
+		(sizeof(struct bucket) != (RTE_CACHE_LINE_SIZE / 2)))
 		return NULL;
 
 	/* Memory allocation */
@@ -197,7 +197,7 @@ rte_table_hash_ext_create(void *params, int socket_id, uint32_t entry_size)
 	total_size = table_meta_sz + bucket_sz + bucket_ext_sz + key_sz +
 		key_stack_sz + bkt_ext_stack_sz + data_sz;
 
-	t = rte_zmalloc_socket("TABLE", total_size, CACHE_LINE_SIZE, socket_id);
+	t = rte_zmalloc_socket("TABLE", total_size, RTE_CACHE_LINE_SIZE, socket_id);
 	if (t == NULL) {
 		RTE_LOG(ERR, TABLE,
 			"%s: Cannot allocate %u bytes for hash table\n",
diff --git a/lib/librte_table/rte_table_hash_key16.c b/lib/librte_table/rte_table_hash_key16.c
index f5ec87d..a2887d5 100644
--- a/lib/librte_table/rte_table_hash_key16.c
+++ b/lib/librte_table/rte_table_hash_key16.c
@@ -123,8 +123,8 @@ rte_table_hash_create_key16_lru(void *params,
 
 	/* Check input parameters */
 	if ((check_params_create_lru(p) != 0) ||
-		((sizeof(struct rte_table_hash) % CACHE_LINE_SIZE) != 0) ||
-		((sizeof(struct rte_bucket_4_16) % CACHE_LINE_SIZE) != 0))
+		((sizeof(struct rte_table_hash) % RTE_CACHE_LINE_SIZE) != 0) ||
+		((sizeof(struct rte_bucket_4_16) % RTE_CACHE_LINE_SIZE) != 0))
 		return NULL;
 	n_entries_per_bucket = 4;
 	key_size = 16;
@@ -133,11 +133,11 @@ rte_table_hash_create_key16_lru(void *params,
 	n_buckets = rte_align32pow2((p->n_entries + n_entries_per_bucket - 1) /
 		n_entries_per_bucket);
 	bucket_size_cl = (sizeof(struct rte_bucket_4_16) + n_entries_per_bucket
-		* entry_size + CACHE_LINE_SIZE - 1) / CACHE_LINE_SIZE;
+		* entry_size + RTE_CACHE_LINE_SIZE - 1) / RTE_CACHE_LINE_SIZE;
 	total_size = sizeof(struct rte_table_hash) + n_buckets *
-		bucket_size_cl * CACHE_LINE_SIZE;
+		bucket_size_cl * RTE_CACHE_LINE_SIZE;
 
-	f = rte_zmalloc_socket("TABLE", total_size, CACHE_LINE_SIZE, socket_id);
+	f = rte_zmalloc_socket("TABLE", total_size, RTE_CACHE_LINE_SIZE, socket_id);
 	if (f == NULL) {
 		RTE_LOG(ERR, TABLE,
 		"%s: Cannot allocate %u bytes for hash table\n",
@@ -153,7 +153,7 @@ rte_table_hash_create_key16_lru(void *params,
 	f->n_entries_per_bucket = n_entries_per_bucket;
 	f->key_size = key_size;
 	f->entry_size = entry_size;
-	f->bucket_size = bucket_size_cl * CACHE_LINE_SIZE;
+	f->bucket_size = bucket_size_cl * RTE_CACHE_LINE_SIZE;
 	f->signature_offset = p->signature_offset;
 	f->key_offset = p->key_offset;
 	f->f_hash = p->f_hash;
@@ -341,8 +341,8 @@ rte_table_hash_create_key16_ext(void *params,
 
 	/* Check input parameters */
 	if ((check_params_create_ext(p) != 0) ||
-		((sizeof(struct rte_table_hash) % CACHE_LINE_SIZE) != 0) ||
-		((sizeof(struct rte_bucket_4_16) % CACHE_LINE_SIZE) != 0))
+		((sizeof(struct rte_table_hash) % RTE_CACHE_LINE_SIZE) != 0) ||
+		((sizeof(struct rte_bucket_4_16) % RTE_CACHE_LINE_SIZE) != 0))
 		return NULL;
 
 	n_entries_per_bucket = 4;
@@ -354,14 +354,14 @@ rte_table_hash_create_key16_ext(void *params,
 	n_buckets_ext = (p->n_entries_ext + n_entries_per_bucket - 1) /
 		n_entries_per_bucket;
 	bucket_size_cl = (sizeof(struct rte_bucket_4_16) + n_entries_per_bucket
-		* entry_size + CACHE_LINE_SIZE - 1) / CACHE_LINE_SIZE;
-	stack_size_cl = (n_buckets_ext * sizeof(uint32_t) + CACHE_LINE_SIZE - 1)
-		/ CACHE_LINE_SIZE;
+		* entry_size + RTE_CACHE_LINE_SIZE - 1) / RTE_CACHE_LINE_SIZE;
+	stack_size_cl = (n_buckets_ext * sizeof(uint32_t) + RTE_CACHE_LINE_SIZE - 1)
+		/ RTE_CACHE_LINE_SIZE;
 	total_size = sizeof(struct rte_table_hash) +
 		((n_buckets + n_buckets_ext) * bucket_size_cl + stack_size_cl) *
-		CACHE_LINE_SIZE;
+		RTE_CACHE_LINE_SIZE;
 
-	f = rte_zmalloc_socket("TABLE", total_size, CACHE_LINE_SIZE, socket_id);
+	f = rte_zmalloc_socket("TABLE", total_size, RTE_CACHE_LINE_SIZE, socket_id);
 	if (f == NULL) {
 		RTE_LOG(ERR, TABLE,
 			"%s: Cannot allocate %u bytes for hash table\n",
@@ -377,7 +377,7 @@ rte_table_hash_create_key16_ext(void *params,
 	f->n_entries_per_bucket = n_entries_per_bucket;
 	f->key_size = key_size;
 	f->entry_size = entry_size;
-	f->bucket_size = bucket_size_cl * CACHE_LINE_SIZE;
+	f->bucket_size = bucket_size_cl * RTE_CACHE_LINE_SIZE;
 	f->signature_offset = p->signature_offset;
 	f->key_offset = p->key_offset;
 	f->f_hash = p->f_hash;
@@ -608,7 +608,7 @@ rte_table_hash_entry_delete_key16_ext(
 	bucket1 = (struct rte_bucket_4_16 *)			\
 		&f->memory[bucket_index * f->bucket_size];	\
 	rte_prefetch0(bucket1);					\
-	rte_prefetch0((void *)(((uintptr_t) bucket1) + CACHE_LINE_SIZE));\
+	rte_prefetch0((void *)(((uintptr_t) bucket1) + RTE_CACHE_LINE_SIZE));\
 }
 
 #define lookup1_stage2_lru(pkt2_index, mbuf2, bucket2,		\
@@ -684,7 +684,7 @@ rte_table_hash_entry_delete_key16_ext(
 	buckets_mask |= bucket_mask;				\
 	bucket_next = bucket->next;				\
 	rte_prefetch0(bucket_next);				\
-	rte_prefetch0((void *)(((uintptr_t) bucket_next) + CACHE_LINE_SIZE));\
+	rte_prefetch0((void *)(((uintptr_t) bucket_next) + RTE_CACHE_LINE_SIZE));\
 	buckets[pkt_index] = bucket_next;			\
 	keys[pkt_index] = key;					\
 }
@@ -741,14 +741,14 @@ rte_table_hash_entry_delete_key16_ext(
 	bucket10 = (struct rte_bucket_4_16 *)			\
 		&f->memory[bucket10_index * f->bucket_size];	\
 	rte_prefetch0(bucket10);				\
-	rte_prefetch0((void *)(((uintptr_t) bucket10) + CACHE_LINE_SIZE));\
+	rte_prefetch0((void *)(((uintptr_t) bucket10) + RTE_CACHE_LINE_SIZE));\
 								\
 	signature11 = RTE_MBUF_METADATA_UINT32(mbuf11, f->signature_offset);\
 	bucket11_index = signature11 & (f->n_buckets - 1);	\
 	bucket11 = (struct rte_bucket_4_16 *)			\
 		&f->memory[bucket11_index * f->bucket_size];	\
 	rte_prefetch0(bucket11);				\
-	rte_prefetch0((void *)(((uintptr_t) bucket11) + CACHE_LINE_SIZE));\
+	rte_prefetch0((void *)(((uintptr_t) bucket11) + RTE_CACHE_LINE_SIZE));\
 }
 
 #define lookup2_stage2_lru(pkt20_index, pkt21_index, mbuf20, mbuf21,\
diff --git a/lib/librte_table/rte_table_hash_key32.c b/lib/librte_table/rte_table_hash_key32.c
index e8f4812..3d576d0 100644
--- a/lib/librte_table/rte_table_hash_key32.c
+++ b/lib/librte_table/rte_table_hash_key32.c
@@ -123,8 +123,8 @@ rte_table_hash_create_key32_lru(void *params,
 
 	/* Check input parameters */
 	if ((check_params_create_lru(p) != 0) ||
-		((sizeof(struct rte_table_hash) % CACHE_LINE_SIZE) != 0) ||
-		((sizeof(struct rte_bucket_4_32) % CACHE_LINE_SIZE) != 0)) {
+		((sizeof(struct rte_table_hash) % RTE_CACHE_LINE_SIZE) != 0) ||
+		((sizeof(struct rte_bucket_4_32) % RTE_CACHE_LINE_SIZE) != 0)) {
 		return NULL;
 	}
 	n_entries_per_bucket = 4;
@@ -134,11 +134,11 @@ rte_table_hash_create_key32_lru(void *params,
 	n_buckets = rte_align32pow2((p->n_entries + n_entries_per_bucket - 1) /
 		n_entries_per_bucket);
 	bucket_size_cl = (sizeof(struct rte_bucket_4_32) + n_entries_per_bucket
-		* entry_size + CACHE_LINE_SIZE - 1) / CACHE_LINE_SIZE;
+		* entry_size + RTE_CACHE_LINE_SIZE - 1) / RTE_CACHE_LINE_SIZE;
 	total_size = sizeof(struct rte_table_hash) + n_buckets *
-		bucket_size_cl * CACHE_LINE_SIZE;
+		bucket_size_cl * RTE_CACHE_LINE_SIZE;
 
-	f = rte_zmalloc_socket("TABLE", total_size, CACHE_LINE_SIZE, socket_id);
+	f = rte_zmalloc_socket("TABLE", total_size, RTE_CACHE_LINE_SIZE, socket_id);
 	if (f == NULL) {
 		RTE_LOG(ERR, TABLE,
 			"%s: Cannot allocate %u bytes for hash table\n",
@@ -154,7 +154,7 @@ rte_table_hash_create_key32_lru(void *params,
 	f->n_entries_per_bucket = n_entries_per_bucket;
 	f->key_size = key_size;
 	f->entry_size = entry_size;
-	f->bucket_size = bucket_size_cl * CACHE_LINE_SIZE;
+	f->bucket_size = bucket_size_cl * RTE_CACHE_LINE_SIZE;
 	f->signature_offset = p->signature_offset;
 	f->key_offset = p->key_offset;
 	f->f_hash = p->f_hash;
@@ -343,8 +343,8 @@ rte_table_hash_create_key32_ext(void *params,
 
 	/* Check input parameters */
 	if ((check_params_create_ext(p) != 0) ||
-		((sizeof(struct rte_table_hash) % CACHE_LINE_SIZE) != 0) ||
-		((sizeof(struct rte_bucket_4_32) % CACHE_LINE_SIZE) != 0))
+		((sizeof(struct rte_table_hash) % RTE_CACHE_LINE_SIZE) != 0) ||
+		((sizeof(struct rte_bucket_4_32) % RTE_CACHE_LINE_SIZE) != 0))
 		return NULL;
 
 	n_entries_per_bucket = 4;
@@ -356,14 +356,14 @@ rte_table_hash_create_key32_ext(void *params,
 	n_buckets_ext = (p->n_entries_ext + n_entries_per_bucket - 1) /
 		n_entries_per_bucket;
 	bucket_size_cl = (sizeof(struct rte_bucket_4_32) + n_entries_per_bucket
-		* entry_size + CACHE_LINE_SIZE - 1) / CACHE_LINE_SIZE;
-	stack_size_cl = (n_buckets_ext * sizeof(uint32_t) + CACHE_LINE_SIZE - 1)
-		/ CACHE_LINE_SIZE;
+		* entry_size + RTE_CACHE_LINE_SIZE - 1) / RTE_CACHE_LINE_SIZE;
+	stack_size_cl = (n_buckets_ext * sizeof(uint32_t) + RTE_CACHE_LINE_SIZE - 1)
+		/ RTE_CACHE_LINE_SIZE;
 	total_size = sizeof(struct rte_table_hash) +
 		((n_buckets + n_buckets_ext) * bucket_size_cl + stack_size_cl) *
-		CACHE_LINE_SIZE;
+		RTE_CACHE_LINE_SIZE;
 
-	f = rte_zmalloc_socket("TABLE", total_size, CACHE_LINE_SIZE, socket_id);
+	f = rte_zmalloc_socket("TABLE", total_size, RTE_CACHE_LINE_SIZE, socket_id);
 	if (f == NULL) {
 		RTE_LOG(ERR, TABLE,
 			"%s: Cannot allocate %u bytes for hash table\n",
@@ -379,7 +379,7 @@ rte_table_hash_create_key32_ext(void *params,
 	f->n_entries_per_bucket = n_entries_per_bucket;
 	f->key_size = key_size;
 	f->entry_size = entry_size;
-	f->bucket_size = bucket_size_cl * CACHE_LINE_SIZE;
+	f->bucket_size = bucket_size_cl * RTE_CACHE_LINE_SIZE;
 	f->signature_offset = p->signature_offset;
 	f->key_offset = p->key_offset;
 	f->f_hash = p->f_hash;
@@ -621,8 +621,8 @@ rte_table_hash_entry_delete_key32_ext(
 	bucket1 = (struct rte_bucket_4_32 *)			\
 		&f->memory[bucket_index * f->bucket_size];	\
 	rte_prefetch0(bucket1);					\
-	rte_prefetch0((void *)(((uintptr_t) bucket1) + CACHE_LINE_SIZE));\
-	rte_prefetch0((void *)(((uintptr_t) bucket1) + 2 * CACHE_LINE_SIZE));\
+	rte_prefetch0((void *)(((uintptr_t) bucket1) + RTE_CACHE_LINE_SIZE));\
+	rte_prefetch0((void *)(((uintptr_t) bucket1) + 2 * RTE_CACHE_LINE_SIZE));\
 }
 
 #define lookup1_stage2_lru(pkt2_index, mbuf2, bucket2,		\
@@ -698,9 +698,9 @@ rte_table_hash_entry_delete_key32_ext(
 	buckets_mask |= bucket_mask;				\
 	bucket_next = bucket->next;				\
 	rte_prefetch0(bucket_next);				\
-	rte_prefetch0((void *)(((uintptr_t) bucket_next) + CACHE_LINE_SIZE));\
+	rte_prefetch0((void *)(((uintptr_t) bucket_next) + RTE_CACHE_LINE_SIZE));\
 	rte_prefetch0((void *)(((uintptr_t) bucket_next) +	\
-		2 * CACHE_LINE_SIZE));				\
+		2 * RTE_CACHE_LINE_SIZE));				\
 	buckets[pkt_index] = bucket_next;			\
 	keys[pkt_index] = key;					\
 }
@@ -758,16 +758,16 @@ rte_table_hash_entry_delete_key32_ext(
 	bucket10 = (struct rte_bucket_4_32 *)			\
 		&f->memory[bucket10_index * f->bucket_size];	\
 	rte_prefetch0(bucket10);				\
-	rte_prefetch0((void *)(((uintptr_t) bucket10) + CACHE_LINE_SIZE));\
-	rte_prefetch0((void *)(((uintptr_t) bucket10) + 2 * CACHE_LINE_SIZE));\
+	rte_prefetch0((void *)(((uintptr_t) bucket10) + RTE_CACHE_LINE_SIZE));\
+	rte_prefetch0((void *)(((uintptr_t) bucket10) + 2 * RTE_CACHE_LINE_SIZE));\
 								\
 	signature11 = RTE_MBUF_METADATA_UINT32(mbuf11, f->signature_offset);\
 	bucket11_index = signature11 & (f->n_buckets - 1);	\
 	bucket11 = (struct rte_bucket_4_32 *)			\
 		&f->memory[bucket11_index * f->bucket_size];	\
 	rte_prefetch0(bucket11);				\
-	rte_prefetch0((void *)(((uintptr_t) bucket11) + CACHE_LINE_SIZE));\
-	rte_prefetch0((void *)(((uintptr_t) bucket11) + 2 * CACHE_LINE_SIZE));\
+	rte_prefetch0((void *)(((uintptr_t) bucket11) + RTE_CACHE_LINE_SIZE));\
+	rte_prefetch0((void *)(((uintptr_t) bucket11) + 2 * RTE_CACHE_LINE_SIZE));\
 }
 
 #define lookup2_stage2_lru(pkt20_index, pkt21_index, mbuf20, mbuf21,\
diff --git a/lib/librte_table/rte_table_hash_key8.c b/lib/librte_table/rte_table_hash_key8.c
index d60c96e..512a8c3 100644
--- a/lib/librte_table/rte_table_hash_key8.c
+++ b/lib/librte_table/rte_table_hash_key8.c
@@ -118,8 +118,8 @@ rte_table_hash_create_key8_lru(void *params, int socket_id, uint32_t entry_size)
 
 	/* Check input parameters */
 	if ((check_params_create_lru(p) != 0) ||
-		((sizeof(struct rte_table_hash) % CACHE_LINE_SIZE) != 0) ||
-		((sizeof(struct rte_bucket_4_8) % CACHE_LINE_SIZE) != 0)) {
+		((sizeof(struct rte_table_hash) % RTE_CACHE_LINE_SIZE) != 0) ||
+		((sizeof(struct rte_bucket_4_8) % RTE_CACHE_LINE_SIZE) != 0)) {
 		return NULL;
 	}
 	n_entries_per_bucket = 4;
@@ -129,11 +129,11 @@ rte_table_hash_create_key8_lru(void *params, int socket_id, uint32_t entry_size)
 	n_buckets = rte_align32pow2((p->n_entries + n_entries_per_bucket - 1) /
 		n_entries_per_bucket);
 	bucket_size_cl = (sizeof(struct rte_bucket_4_8) + n_entries_per_bucket *
-		entry_size + CACHE_LINE_SIZE - 1) / CACHE_LINE_SIZE;
+		entry_size + RTE_CACHE_LINE_SIZE - 1) / RTE_CACHE_LINE_SIZE;
 	total_size = sizeof(struct rte_table_hash) + n_buckets *
-		bucket_size_cl * CACHE_LINE_SIZE;
+		bucket_size_cl * RTE_CACHE_LINE_SIZE;
 
-	f = rte_zmalloc_socket("TABLE", total_size, CACHE_LINE_SIZE, socket_id);
+	f = rte_zmalloc_socket("TABLE", total_size, RTE_CACHE_LINE_SIZE, socket_id);
 	if (f == NULL) {
 		RTE_LOG(ERR, TABLE,
 			"%s: Cannot allocate %u bytes for hash table\n",
@@ -149,7 +149,7 @@ rte_table_hash_create_key8_lru(void *params, int socket_id, uint32_t entry_size)
 	f->n_entries_per_bucket = n_entries_per_bucket;
 	f->key_size = key_size;
 	f->entry_size = entry_size;
-	f->bucket_size = bucket_size_cl * CACHE_LINE_SIZE;
+	f->bucket_size = bucket_size_cl * RTE_CACHE_LINE_SIZE;
 	f->signature_offset = p->signature_offset;
 	f->key_offset = p->key_offset;
 	f->f_hash = p->f_hash;
@@ -332,8 +332,8 @@ rte_table_hash_create_key8_ext(void *params, int socket_id, uint32_t entry_size)
 
 	/* Check input parameters */
 	if ((check_params_create_ext(p) != 0) ||
-		((sizeof(struct rte_table_hash) % CACHE_LINE_SIZE) != 0) ||
-		((sizeof(struct rte_bucket_4_8) % CACHE_LINE_SIZE) != 0))
+		((sizeof(struct rte_table_hash) % RTE_CACHE_LINE_SIZE) != 0) ||
+		((sizeof(struct rte_bucket_4_8) % RTE_CACHE_LINE_SIZE) != 0))
 		return NULL;
 
 	n_entries_per_bucket = 4;
@@ -345,14 +345,14 @@ rte_table_hash_create_key8_ext(void *params, int socket_id, uint32_t entry_size)
 	n_buckets_ext = (p->n_entries_ext + n_entries_per_bucket - 1) /
 		n_entries_per_bucket;
 	bucket_size_cl = (sizeof(struct rte_bucket_4_8) + n_entries_per_bucket *
-		entry_size + CACHE_LINE_SIZE - 1) / CACHE_LINE_SIZE;
-	stack_size_cl = (n_buckets_ext * sizeof(uint32_t) + CACHE_LINE_SIZE - 1)
-		/ CACHE_LINE_SIZE;
+		entry_size + RTE_CACHE_LINE_SIZE - 1) / RTE_CACHE_LINE_SIZE;
+	stack_size_cl = (n_buckets_ext * sizeof(uint32_t) + RTE_CACHE_LINE_SIZE - 1)
+		/ RTE_CACHE_LINE_SIZE;
 	total_size = sizeof(struct rte_table_hash) + ((n_buckets +
 		n_buckets_ext) * bucket_size_cl + stack_size_cl) *
-		CACHE_LINE_SIZE;
+		RTE_CACHE_LINE_SIZE;
 
-	f = rte_zmalloc_socket("TABLE", total_size, CACHE_LINE_SIZE, socket_id);
+	f = rte_zmalloc_socket("TABLE", total_size, RTE_CACHE_LINE_SIZE, socket_id);
 	if (f == NULL) {
 		RTE_LOG(ERR, TABLE,
 			"%s: Cannot allocate %u bytes for hash table\n",
@@ -368,7 +368,7 @@ rte_table_hash_create_key8_ext(void *params, int socket_id, uint32_t entry_size)
 	f->n_entries_per_bucket = n_entries_per_bucket;
 	f->key_size = key_size;
 	f->entry_size = entry_size;
-	f->bucket_size = bucket_size_cl * CACHE_LINE_SIZE;
+	f->bucket_size = bucket_size_cl * RTE_CACHE_LINE_SIZE;
 	f->signature_offset = p->signature_offset;
 	f->key_offset = p->key_offset;
 	f->f_hash = p->f_hash;
diff --git a/lib/librte_table/rte_table_hash_lru.c b/lib/librte_table/rte_table_hash_lru.c
index d1a4984..cea7a92 100644
--- a/lib/librte_table/rte_table_hash_lru.c
+++ b/lib/librte_table/rte_table_hash_lru.c
@@ -155,8 +155,8 @@ rte_table_hash_lru_create(void *params, int socket_id, uint32_t entry_size)
 	/* Check input parameters */
 	if ((check_params_create(p) != 0) ||
 		(!rte_is_power_of_2(entry_size)) ||
-		((sizeof(struct rte_table_hash) % CACHE_LINE_SIZE) != 0) ||
-		(sizeof(struct bucket) != (CACHE_LINE_SIZE / 2))) {
+		((sizeof(struct rte_table_hash) % RTE_CACHE_LINE_SIZE) != 0) ||
+		(sizeof(struct bucket) != (RTE_CACHE_LINE_SIZE / 2))) {
 		return NULL;
 	}
 
@@ -169,7 +169,7 @@ rte_table_hash_lru_create(void *params, int socket_id, uint32_t entry_size)
 	total_size = table_meta_sz + bucket_sz + key_sz + key_stack_sz +
 		data_sz;
 
-	t = rte_zmalloc_socket("TABLE", total_size, CACHE_LINE_SIZE, socket_id);
+	t = rte_zmalloc_socket("TABLE", total_size, RTE_CACHE_LINE_SIZE, socket_id);
 	if (t == NULL) {
 		RTE_LOG(ERR, TABLE,
 			"%s: Cannot allocate %u bytes for hash table\n",
diff --git a/lib/librte_table/rte_table_lpm.c b/lib/librte_table/rte_table_lpm.c
index a175ff3..59f87bb 100644
--- a/lib/librte_table/rte_table_lpm.c
+++ b/lib/librte_table/rte_table_lpm.c
@@ -96,7 +96,7 @@ rte_table_lpm_create(void *params, int socket_id, uint32_t entry_size)
 	/* Memory allocation */
 	nht_size = RTE_TABLE_LPM_MAX_NEXT_HOPS * entry_size;
 	total_size = sizeof(struct rte_table_lpm) + nht_size;
-	lpm = rte_zmalloc_socket("TABLE", total_size, CACHE_LINE_SIZE,
+	lpm = rte_zmalloc_socket("TABLE", total_size, RTE_CACHE_LINE_SIZE,
 		socket_id);
 	if (lpm == NULL) {
 		RTE_LOG(ERR, TABLE,
diff --git a/lib/librte_table/rte_table_lpm_ipv6.c b/lib/librte_table/rte_table_lpm_ipv6.c
index e3d59d0..2818c25 100644
--- a/lib/librte_table/rte_table_lpm_ipv6.c
+++ b/lib/librte_table/rte_table_lpm_ipv6.c
@@ -102,7 +102,7 @@ rte_table_lpm_ipv6_create(void *params, int socket_id, uint32_t entry_size)
 	/* Memory allocation */
 	nht_size = RTE_TABLE_LPM_MAX_NEXT_HOPS * entry_size;
 	total_size = sizeof(struct rte_table_lpm_ipv6) + nht_size;
-	lpm = rte_zmalloc_socket("TABLE", total_size, CACHE_LINE_SIZE,
+	lpm = rte_zmalloc_socket("TABLE", total_size, RTE_CACHE_LINE_SIZE,
 		socket_id);
 	if (lpm == NULL) {
 		RTE_LOG(ERR, TABLE,
-- 
2.1.0

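
The application-side change in this patch is purely mechanical: any rte_malloc/rte_zmalloc call or alignment constant that used the old name now passes the prefixed macro. Below is a minimal sketch of post-patch usage; the structure and function names are hypothetical, only rte_zmalloc() and RTE_CACHE_LINE_SIZE are taken from the hunks above.

	#include <stdint.h>
	#include <rte_malloc.h>
	#include <rte_memory.h>

	struct app_ctx {                     /* hypothetical application state */
		uint64_t pkts_rx;
		uint64_t pkts_tx;
	};

	static struct app_ctx *
	app_ctx_create(void)
	{
		/* Zeroed, cache-line-aligned allocation, mirroring the PMD
		 * queue-setup calls in the hunks above. */
		return rte_zmalloc("app_ctx", sizeof(struct app_ctx),
				RTE_CACHE_LINE_SIZE);
	}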

* [dpdk-dev] [PATCH 2/3] Add RTE_ prefix to CACHE_LINE_MASK macro
  2014-11-19 12:26 [dpdk-dev] [PATCH 0/3] Add RTE_ prefix to CACHE_LINE related macros Sergio Gonzalez Monroy
  2014-11-19 12:26 ` [dpdk-dev] [PATCH 1/3] Add RTE_ prefix to CACHE_LINE_SIZE macro Sergio Gonzalez Monroy
@ 2014-11-19 12:26 ` Sergio Gonzalez Monroy
  2014-11-19 12:26 ` [dpdk-dev] [PATCH 3/3] Add RTE_ prefix to CACHE_LINE_ROUNDUP macro Sergio Gonzalez Monroy
  2014-11-27 13:58 ` [dpdk-dev] [PATCH 0/3] Add RTE_ prefix to CACHE_LINE related macros Thomas Monjalon
  3 siblings, 0 replies; 5+ messages in thread
From: Sergio Gonzalez Monroy @ 2014-11-19 12:26 UTC (permalink / raw)
  To: dev

Adding RTE_ for consistency with other renamed macros and to avoid
potential conflicts.

Signed-off-by: Sergio Gonzalez Monroy <sergio.gonzalez.monroy@intel.com>
---
 app/test/test_memzone.c                    | 32 +++++++++++++++---------------
 lib/librte_acl/rte_acl_osdep_alone.h       |  2 +-
 lib/librte_distributor/rte_distributor.c   |  2 +-
 lib/librte_eal/common/eal_common_memzone.c | 14 ++++++-------
 lib/librte_eal/common/include/rte_memory.h |  2 +-
 lib/librte_mempool/rte_mempool.c           | 18 ++++++++---------
 lib/librte_ring/rte_ring.c                 | 10 +++++-----
 lib/librte_sched/rte_bitmap.h              |  2 +-
 8 files changed, 41 insertions(+), 41 deletions(-)

diff --git a/app/test/test_memzone.c b/app/test/test_memzone.c
index b665fce..eeaac1f 100644
--- a/app/test/test_memzone.c
+++ b/app/test/test_memzone.c
@@ -283,7 +283,7 @@ test_memzone_reserve_max(void)
 		/* align everything */
 		last_addr = RTE_PTR_ALIGN_CEIL(ms[memseg_idx].addr, RTE_CACHE_LINE_SIZE);
 		len = ms[memseg_idx].len - RTE_PTR_DIFF(last_addr, ms[memseg_idx].addr);
-		len &= ~((size_t) CACHE_LINE_MASK);
+		len &= ~((size_t) RTE_CACHE_LINE_MASK);
 
 		/* cycle through all memzones */
 		for (memzone_idx = 0; memzone_idx < RTE_MAX_MEMZONE; memzone_idx++) {
@@ -376,7 +376,7 @@ test_memzone_reserve_max_aligned(void)
 		/* align everything */
 		last_addr = RTE_PTR_ALIGN_CEIL(ms[memseg_idx].addr, RTE_CACHE_LINE_SIZE);
 		len = ms[memseg_idx].len - RTE_PTR_DIFF(last_addr, ms[memseg_idx].addr);
-		len &= ~((size_t) CACHE_LINE_MASK);
+		len &= ~((size_t) RTE_CACHE_LINE_MASK);
 
 		/* cycle through all memzones */
 		for (memzone_idx = 0; memzone_idx < RTE_MAX_MEMZONE; memzone_idx++) {
@@ -474,11 +474,11 @@ test_memzone_aligned(void)
 		printf("Unable to reserve 64-byte aligned memzone!\n");
 		return -1;
 	}
-	if ((memzone_aligned_32->phys_addr & CACHE_LINE_MASK) != 0)
+	if ((memzone_aligned_32->phys_addr & RTE_CACHE_LINE_MASK) != 0)
 		return -1;
-	if (((uintptr_t) memzone_aligned_32->addr & CACHE_LINE_MASK) != 0)
+	if (((uintptr_t) memzone_aligned_32->addr & RTE_CACHE_LINE_MASK) != 0)
 		return -1;
-	if ((memzone_aligned_32->len & CACHE_LINE_MASK) != 0)
+	if ((memzone_aligned_32->len & RTE_CACHE_LINE_MASK) != 0)
 		return -1;
 
 	if (memzone_aligned_128 == NULL) {
@@ -489,7 +489,7 @@ test_memzone_aligned(void)
 		return -1;
 	if (((uintptr_t) memzone_aligned_128->addr & 127) != 0)
 		return -1;
-	if ((memzone_aligned_128->len & CACHE_LINE_MASK) != 0)
+	if ((memzone_aligned_128->len & RTE_CACHE_LINE_MASK) != 0)
 		return -1;
 
 	if (memzone_aligned_256 == NULL) {
@@ -500,7 +500,7 @@ test_memzone_aligned(void)
 		return -1;
 	if (((uintptr_t) memzone_aligned_256->addr & 255) != 0)
 		return -1;
-	if ((memzone_aligned_256->len & CACHE_LINE_MASK) != 0)
+	if ((memzone_aligned_256->len & RTE_CACHE_LINE_MASK) != 0)
 		return -1;
 
 	if (memzone_aligned_512 == NULL) {
@@ -511,7 +511,7 @@ test_memzone_aligned(void)
 		return -1;
 	if (((uintptr_t) memzone_aligned_512->addr & 511) != 0)
 		return -1;
-	if ((memzone_aligned_512->len & CACHE_LINE_MASK) != 0)
+	if ((memzone_aligned_512->len & RTE_CACHE_LINE_MASK) != 0)
 		return -1;
 
 	if (memzone_aligned_1024 == NULL) {
@@ -522,7 +522,7 @@ test_memzone_aligned(void)
 		return -1;
 	if (((uintptr_t) memzone_aligned_1024->addr & 1023) != 0)
 		return -1;
-	if ((memzone_aligned_1024->len & CACHE_LINE_MASK) != 0)
+	if ((memzone_aligned_1024->len & RTE_CACHE_LINE_MASK) != 0)
 		return -1;
 
 	/* check that zones don't overlap */
@@ -588,7 +588,7 @@ check_memzone_bounded(const char *name, uint32_t len,  uint32_t align,
 		return (-1);
 	}
 
-	if ((mz->len & CACHE_LINE_MASK) != 0 || mz->len < len ||
+	if ((mz->len & RTE_CACHE_LINE_MASK) != 0 || mz->len < len ||
 			mz->len < RTE_CACHE_LINE_SIZE) {
 		printf("%s(%s): invalid length\n",
 			__func__, mz->name);
@@ -952,17 +952,17 @@ test_memzone(void)
 	/* check cache-line alignments */
 	printf("check alignments and lengths\n");
 
-	if ((memzone1->phys_addr & CACHE_LINE_MASK) != 0)
+	if ((memzone1->phys_addr & RTE_CACHE_LINE_MASK) != 0)
 		return -1;
-	if ((memzone2->phys_addr & CACHE_LINE_MASK) != 0)
+	if ((memzone2->phys_addr & RTE_CACHE_LINE_MASK) != 0)
 		return -1;
-	if (memzone3 != NULL && (memzone3->phys_addr & CACHE_LINE_MASK) != 0)
+	if (memzone3 != NULL && (memzone3->phys_addr & RTE_CACHE_LINE_MASK) != 0)
 		return -1;
-	if ((memzone1->len & CACHE_LINE_MASK) != 0 || memzone1->len == 0)
+	if ((memzone1->len & RTE_CACHE_LINE_MASK) != 0 || memzone1->len == 0)
 		return -1;
-	if ((memzone2->len & CACHE_LINE_MASK) != 0 || memzone2->len == 0)
+	if ((memzone2->len & RTE_CACHE_LINE_MASK) != 0 || memzone2->len == 0)
 		return -1;
-	if (memzone3 != NULL && ((memzone3->len & CACHE_LINE_MASK) != 0 ||
+	if (memzone3 != NULL && ((memzone3->len & RTE_CACHE_LINE_MASK) != 0 ||
 			memzone3->len == 0))
 		return -1;
 	if (memzone4->len != 1024)
diff --git a/lib/librte_acl/rte_acl_osdep_alone.h b/lib/librte_acl/rte_acl_osdep_alone.h
index 73d1701..a84b6f9 100644
--- a/lib/librte_acl/rte_acl_osdep_alone.h
+++ b/lib/librte_acl/rte_acl_osdep_alone.h
@@ -181,7 +181,7 @@ rte_rdtsc(void)
  */
 #define	SOCKET_ID_ANY	-1                  /**< Any NUMA socket. */
 #define	RTE_CACHE_LINE_SIZE	64                  /**< Cache line size. */
-#define	CACHE_LINE_MASK	(RTE_CACHE_LINE_SIZE-1) /**< Cache line mask. */
+#define	RTE_CACHE_LINE_MASK	(RTE_CACHE_LINE_SIZE-1) /**< Cache line mask. */
 
 /**
  * Force alignment to cache line.
diff --git a/lib/librte_distributor/rte_distributor.c b/lib/librte_distributor/rte_distributor.c
index 0b4178c..aa2f740 100644
--- a/lib/librte_distributor/rte_distributor.c
+++ b/lib/librte_distributor/rte_distributor.c
@@ -450,7 +450,7 @@ rte_distributor_create(const char *name,
 	const struct rte_memzone *mz;
 
 	/* compilation-time checks */
-	RTE_BUILD_BUG_ON((sizeof(*d) & CACHE_LINE_MASK) != 0);
+	RTE_BUILD_BUG_ON((sizeof(*d) & RTE_CACHE_LINE_MASK) != 0);
 	RTE_BUILD_BUG_ON((RTE_DISTRIB_MAX_WORKERS & 7) != 0);
 	RTE_BUILD_BUG_ON(RTE_DISTRIB_MAX_WORKERS >
 				sizeof(d->in_flight_bitmask) * CHAR_BIT);
diff --git a/lib/librte_eal/common/eal_common_memzone.c b/lib/librte_eal/common/eal_common_memzone.c
index 18e4f38..7af5a75 100644
--- a/lib/librte_eal/common/eal_common_memzone.c
+++ b/lib/librte_eal/common/eal_common_memzone.c
@@ -169,13 +169,13 @@ memzone_reserve_aligned_thread_unsafe(const char *name, size_t len,
 
 
 	/* align length on cache boundary. Check for overflow before doing so */
-	if (len > SIZE_MAX - CACHE_LINE_MASK) {
+	if (len > SIZE_MAX - RTE_CACHE_LINE_MASK) {
 		rte_errno = EINVAL; /* requested size too big */
 		return NULL;
 	}
 
-	len += CACHE_LINE_MASK;
-	len &= ~((size_t) CACHE_LINE_MASK);
+	len += RTE_CACHE_LINE_MASK;
+	len &= ~((size_t) RTE_CACHE_LINE_MASK);
 
 	/* save minimal requested  length */
 	requested_len = RTE_MAX((size_t)RTE_CACHE_LINE_SIZE,  len);
@@ -421,8 +421,8 @@ memseg_sanitize(struct rte_memseg *memseg)
 	unsigned virt_align;
 	unsigned off;
 
-	phys_align = memseg->phys_addr & CACHE_LINE_MASK;
-	virt_align = (unsigned long)memseg->addr & CACHE_LINE_MASK;
+	phys_align = memseg->phys_addr & RTE_CACHE_LINE_MASK;
+	virt_align = (unsigned long)memseg->addr & RTE_CACHE_LINE_MASK;
 
 	/*
 	 * sanity check: phys_addr and addr must have the same
@@ -438,13 +438,13 @@ memseg_sanitize(struct rte_memseg *memseg)
 	}
 
 	/* align start address */
-	off = (RTE_CACHE_LINE_SIZE - phys_align) & CACHE_LINE_MASK;
+	off = (RTE_CACHE_LINE_SIZE - phys_align) & RTE_CACHE_LINE_MASK;
 	memseg->phys_addr += off;
 	memseg->addr = (char *)memseg->addr + off;
 	memseg->len -= off;
 
 	/* align end address */
-	memseg->len &= ~((uint64_t)CACHE_LINE_MASK);
+	memseg->len &= ~((uint64_t)RTE_CACHE_LINE_MASK);
 
 	return 0;
 }
diff --git a/lib/librte_eal/common/include/rte_memory.h b/lib/librte_eal/common/include/rte_memory.h
index 0502793..ab20c4b 100644
--- a/lib/librte_eal/common/include/rte_memory.h
+++ b/lib/librte_eal/common/include/rte_memory.h
@@ -62,7 +62,7 @@ enum rte_page_sizes {
 #ifndef RTE_CACHE_LINE_SIZE
 #define RTE_CACHE_LINE_SIZE 64                  /**< Cache line size. */
 #endif
-#define CACHE_LINE_MASK (RTE_CACHE_LINE_SIZE-1) /**< Cache line mask. */
+#define RTE_CACHE_LINE_MASK (RTE_CACHE_LINE_SIZE-1) /**< Cache line mask. */
 
 #define CACHE_LINE_ROUNDUP(size) \
 	(RTE_CACHE_LINE_SIZE * ((size + RTE_CACHE_LINE_SIZE - 1) / RTE_CACHE_LINE_SIZE))
diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index bb09dae..8f10be8 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -114,7 +114,7 @@ static unsigned optimize_object_size(unsigned obj_size)
 		nrank = 1;
 
 	/* process new object size */
-	new_obj_size = (obj_size + CACHE_LINE_MASK) / RTE_CACHE_LINE_SIZE;
+	new_obj_size = (obj_size + RTE_CACHE_LINE_MASK) / RTE_CACHE_LINE_SIZE;
 	while (get_gcd(new_obj_size, nrank * nchan) != 1)
 		new_obj_size++;
 	return new_obj_size * RTE_CACHE_LINE_SIZE;
@@ -270,8 +270,8 @@ rte_mempool_calc_obj_size(uint32_t elt_size, uint32_t flags,
 		sz->total_size = sz->header_size + sz->elt_size +
 			sz->trailer_size;
 		sz->trailer_size += ((RTE_CACHE_LINE_SIZE -
-				  (sz->total_size & CACHE_LINE_MASK)) &
-				 CACHE_LINE_MASK);
+				  (sz->total_size & RTE_CACHE_LINE_MASK)) &
+				 RTE_CACHE_LINE_MASK);
 	}
 
 	/*
@@ -418,18 +418,18 @@ rte_mempool_xmem_create(const char *name, unsigned n, unsigned elt_size,
 
 	/* compilation-time checks */
 	RTE_BUILD_BUG_ON((sizeof(struct rte_mempool) &
-			  CACHE_LINE_MASK) != 0);
+			  RTE_CACHE_LINE_MASK) != 0);
 #if RTE_MEMPOOL_CACHE_MAX_SIZE > 0
 	RTE_BUILD_BUG_ON((sizeof(struct rte_mempool_cache) &
-			  CACHE_LINE_MASK) != 0);
+			  RTE_CACHE_LINE_MASK) != 0);
 	RTE_BUILD_BUG_ON((offsetof(struct rte_mempool, local_cache) &
-			  CACHE_LINE_MASK) != 0);
+			  RTE_CACHE_LINE_MASK) != 0);
 #endif
 #ifdef RTE_LIBRTE_MEMPOOL_DEBUG
 	RTE_BUILD_BUG_ON((sizeof(struct rte_mempool_debug_stats) &
-			  CACHE_LINE_MASK) != 0);
+			  RTE_CACHE_LINE_MASK) != 0);
 	RTE_BUILD_BUG_ON((offsetof(struct rte_mempool, stats) &
-			  CACHE_LINE_MASK) != 0);
+			  RTE_CACHE_LINE_MASK) != 0);
 #endif
 
 	/* check that we have an initialised tail queue */
@@ -489,7 +489,7 @@ rte_mempool_xmem_create(const char *name, unsigned n, unsigned elt_size,
 	 * cache-aligned
 	 */
 	private_data_size = (private_data_size +
-			     CACHE_LINE_MASK) & (~CACHE_LINE_MASK);
+			     RTE_CACHE_LINE_MASK) & (~RTE_CACHE_LINE_MASK);
 
 	if (! rte_eal_has_hugepages()) {
 		/*
diff --git a/lib/librte_ring/rte_ring.c b/lib/librte_ring/rte_ring.c
index e007b0f..f5899c4 100644
--- a/lib/librte_ring/rte_ring.c
+++ b/lib/librte_ring/rte_ring.c
@@ -120,18 +120,18 @@ rte_ring_init(struct rte_ring *r, const char *name, unsigned count,
 {
 	/* compilation-time checks */
 	RTE_BUILD_BUG_ON((sizeof(struct rte_ring) &
-			  CACHE_LINE_MASK) != 0);
+			  RTE_CACHE_LINE_MASK) != 0);
 #ifdef RTE_RING_SPLIT_PROD_CONS
 	RTE_BUILD_BUG_ON((offsetof(struct rte_ring, cons) &
-			  CACHE_LINE_MASK) != 0);
+			  RTE_CACHE_LINE_MASK) != 0);
 #endif
 	RTE_BUILD_BUG_ON((offsetof(struct rte_ring, prod) &
-			  CACHE_LINE_MASK) != 0);
+			  RTE_CACHE_LINE_MASK) != 0);
 #ifdef RTE_LIBRTE_RING_DEBUG
 	RTE_BUILD_BUG_ON((sizeof(struct rte_ring_debug_stats) &
-			  CACHE_LINE_MASK) != 0);
+			  RTE_CACHE_LINE_MASK) != 0);
 	RTE_BUILD_BUG_ON((offsetof(struct rte_ring, stats) &
-			  CACHE_LINE_MASK) != 0);
+			  RTE_CACHE_LINE_MASK) != 0);
 #endif
 
 	/* init the ring structure */
diff --git a/lib/librte_sched/rte_bitmap.h b/lib/librte_sched/rte_bitmap.h
index 43d1d43..95f3c0d 100644
--- a/lib/librte_sched/rte_bitmap.h
+++ b/lib/librte_sched/rte_bitmap.h
@@ -249,7 +249,7 @@ rte_bitmap_init(uint32_t n_bits, uint8_t *mem, uint32_t mem_size)
 		return NULL;
 	}
 
-	if ((mem == NULL) || (((uintptr_t) mem) & CACHE_LINE_MASK)) {
+	if ((mem == NULL) || (((uintptr_t) mem) & RTE_CACHE_LINE_MASK)) {
 		return NULL;
 	}
 
-- 
2.1.0

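
RTE_CACHE_LINE_MASK is used in the hunks above for round-up/round-down arithmetic and for alignment checks. A short sketch of the same idioms in hypothetical application code (the helper names are illustrative, the macro is from the patch):

	#include <stddef.h>
	#include <stdint.h>
	#include <rte_memory.h>

	/* Round a length up to the next cache-line boundary, the same idiom
	 * the memzone code above uses (e.g. 100 -> 128 with a 64-byte line). */
	static size_t
	app_cache_line_roundup(size_t len)
	{
		len += RTE_CACHE_LINE_MASK;
		len &= ~(size_t)RTE_CACHE_LINE_MASK;
		return len;
	}

	/* Check that a buffer is cache-line aligned, as rte_bitmap_init() does. */
	static int
	app_is_cache_aligned(const void *p)
	{
		return ((uintptr_t)p & RTE_CACHE_LINE_MASK) == 0;
	}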

* [dpdk-dev] [PATCH 3/3] Add RTE_ prefix to CACHE_LINE_ROUNDUP macro
  2014-11-19 12:26 [dpdk-dev] [PATCH 0/3] Add RTE_ prefix to CACHE_LINE related macros Sergio Gonzalez Monroy
  2014-11-19 12:26 ` [dpdk-dev] [PATCH 1/3] Add RTE_ prefix to CACHE_LINE_SIZE macro Sergio Gonzalez Monroy
  2014-11-19 12:26 ` [dpdk-dev] [PATCH 2/3] Add RTE_ prefix to CACHE_LINE_MASK macro Sergio Gonzalez Monroy
@ 2014-11-19 12:26 ` Sergio Gonzalez Monroy
  2014-11-27 13:58 ` [dpdk-dev] [PATCH 0/3] Add RTE_ prefix to CACHE_LINE related macros Thomas Monjalon
  3 siblings, 0 replies; 5+ messages in thread
From: Sergio Gonzalez Monroy @ 2014-11-19 12:26 UTC (permalink / raw)
  To: dev

Adding RTE_ for consistency with other renamed macros and to avoid
potential conflicts.

Signed-off-by: Sergio Gonzalez Monroy <sergio.gonzalez.monroy@intel.com>
---
 app/test-pmd/testpmd.c                     |  2 +-
 lib/librte_eal/common/include/rte_memory.h |  2 +-
 lib/librte_malloc/malloc_heap.c            |  4 ++--
 lib/librte_malloc/rte_malloc.c             |  2 +-
 lib/librte_sched/rte_sched.c               | 14 +++++++-------
 lib/librte_table/rte_table_acl.c           |  6 +++---
 lib/librte_table/rte_table_hash_ext.c      | 14 +++++++-------
 lib/librte_table/rte_table_hash_lru.c      | 10 +++++-----
 8 files changed, 27 insertions(+), 27 deletions(-)

diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index 5f96899..7552bf9 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -444,7 +444,7 @@ mbuf_pool_create(uint16_t mbuf_seg_size, unsigned nb_mbuf,
 	mbp_ctor_arg.seg_buf_size = (uint16_t) (RTE_PKTMBUF_HEADROOM +
 						mbuf_seg_size);
 	mb_ctor_arg.seg_buf_offset =
-		(uint16_t) CACHE_LINE_ROUNDUP(sizeof(struct rte_mbuf));
+		(uint16_t) RTE_CACHE_LINE_ROUNDUP(sizeof(struct rte_mbuf));
 	mb_ctor_arg.seg_buf_size = mbp_ctor_arg.seg_buf_size;
 	mb_size = mb_ctor_arg.seg_buf_offset + mb_ctor_arg.seg_buf_size;
 	mbuf_poolname_build(socket_id, pool_name, sizeof(pool_name));
diff --git a/lib/librte_eal/common/include/rte_memory.h b/lib/librte_eal/common/include/rte_memory.h
index ab20c4b..05e55b9 100644
--- a/lib/librte_eal/common/include/rte_memory.h
+++ b/lib/librte_eal/common/include/rte_memory.h
@@ -64,7 +64,7 @@ enum rte_page_sizes {
 #endif
 #define RTE_CACHE_LINE_MASK (RTE_CACHE_LINE_SIZE-1) /**< Cache line mask. */
 
-#define CACHE_LINE_ROUNDUP(size) \
+#define RTE_CACHE_LINE_ROUNDUP(size) \
 	(RTE_CACHE_LINE_SIZE * ((size + RTE_CACHE_LINE_SIZE - 1) / RTE_CACHE_LINE_SIZE))
 /**< Return the first cache-aligned value greater or equal to size. */
 
diff --git a/lib/librte_malloc/malloc_heap.c b/lib/librte_malloc/malloc_heap.c
index a1d0ebb..95fcfec 100644
--- a/lib/librte_malloc/malloc_heap.c
+++ b/lib/librte_malloc/malloc_heap.c
@@ -155,8 +155,8 @@ void *
 malloc_heap_alloc(struct malloc_heap *heap,
 		const char *type __attribute__((unused)), size_t size, unsigned align)
 {
-	size = CACHE_LINE_ROUNDUP(size);
-	align = CACHE_LINE_ROUNDUP(align);
+	size = RTE_CACHE_LINE_ROUNDUP(size);
+	align = RTE_CACHE_LINE_ROUNDUP(align);
 	rte_spinlock_lock(&heap->lock);
 	struct malloc_elem *elem = find_suitable_element(heap, size, align);
 	if (elem == NULL){
diff --git a/lib/librte_malloc/rte_malloc.c b/lib/librte_malloc/rte_malloc.c
index ee36357..b966fc7 100644
--- a/lib/librte_malloc/rte_malloc.c
+++ b/lib/librte_malloc/rte_malloc.c
@@ -169,7 +169,7 @@ rte_realloc(void *ptr, size_t size, unsigned align)
 	if (elem == NULL)
 		rte_panic("Fatal error: memory corruption detected\n");
 
-	size = CACHE_LINE_ROUNDUP(size), align = CACHE_LINE_ROUNDUP(align);
+	size = RTE_CACHE_LINE_ROUNDUP(size), align = RTE_CACHE_LINE_ROUNDUP(align);
 	/* check alignment matches first, and if ok, see if we can resize block */
 	if (RTE_PTR_ALIGN(ptr,align) == ptr &&
 			malloc_elem_resize(elem, size) == 0)
diff --git a/lib/librte_sched/rte_sched.c b/lib/librte_sched/rte_sched.c
index 1447a27..95dee27 100644
--- a/lib/librte_sched/rte_sched.c
+++ b/lib/librte_sched/rte_sched.c
@@ -417,25 +417,25 @@ rte_sched_port_get_array_base(struct rte_sched_port_params *params, enum rte_sch
 	base = 0;
 
 	if (array == e_RTE_SCHED_PORT_ARRAY_SUBPORT) return base;
-	base += CACHE_LINE_ROUNDUP(size_subport);
+	base += RTE_CACHE_LINE_ROUNDUP(size_subport);
 
 	if (array == e_RTE_SCHED_PORT_ARRAY_PIPE) return base;
-	base += CACHE_LINE_ROUNDUP(size_pipe);
+	base += RTE_CACHE_LINE_ROUNDUP(size_pipe);
 
 	if (array == e_RTE_SCHED_PORT_ARRAY_QUEUE) return base;
-	base += CACHE_LINE_ROUNDUP(size_queue);
+	base += RTE_CACHE_LINE_ROUNDUP(size_queue);
 
 	if (array == e_RTE_SCHED_PORT_ARRAY_QUEUE_EXTRA) return base;
-	base += CACHE_LINE_ROUNDUP(size_queue_extra);
+	base += RTE_CACHE_LINE_ROUNDUP(size_queue_extra);
 
 	if (array == e_RTE_SCHED_PORT_ARRAY_PIPE_PROFILES) return base;
-	base += CACHE_LINE_ROUNDUP(size_pipe_profiles);
+	base += RTE_CACHE_LINE_ROUNDUP(size_pipe_profiles);
 
 	if (array == e_RTE_SCHED_PORT_ARRAY_BMP_ARRAY) return base;
-	base += CACHE_LINE_ROUNDUP(size_bmp_array);
+	base += RTE_CACHE_LINE_ROUNDUP(size_bmp_array);
 
 	if (array == e_RTE_SCHED_PORT_ARRAY_QUEUE_ARRAY) return base;
-	base += CACHE_LINE_ROUNDUP(size_queue_array);
+	base += RTE_CACHE_LINE_ROUNDUP(size_queue_array);
 
 	return base;
 }
diff --git a/lib/librte_table/rte_table_acl.c b/lib/librte_table/rte_table_acl.c
index ed0aae8..8a6eb0d 100644
--- a/lib/librte_table/rte_table_acl.c
+++ b/lib/librte_table/rte_table_acl.c
@@ -102,10 +102,10 @@ rte_table_acl_create(
 	entry_size = RTE_ALIGN(entry_size, sizeof(uint64_t));
 
 	/* Memory allocation */
-	action_table_size = CACHE_LINE_ROUNDUP(p->n_rules * entry_size);
+	action_table_size = RTE_CACHE_LINE_ROUNDUP(p->n_rules * entry_size);
 	acl_rule_list_size =
-		CACHE_LINE_ROUNDUP(p->n_rules * sizeof(struct rte_acl_rule *));
-	acl_rule_memory_size = CACHE_LINE_ROUNDUP(p->n_rules *
+		RTE_CACHE_LINE_ROUNDUP(p->n_rules * sizeof(struct rte_acl_rule *));
+	acl_rule_memory_size = RTE_CACHE_LINE_ROUNDUP(p->n_rules *
 		RTE_ACL_RULE_SZ(p->n_rule_fields));
 	total_size = sizeof(struct rte_table_acl) + action_table_size +
 		acl_rule_list_size + acl_rule_memory_size;
diff --git a/lib/librte_table/rte_table_hash_ext.c b/lib/librte_table/rte_table_hash_ext.c
index 638c2cd..68cb957 100644
--- a/lib/librte_table/rte_table_hash_ext.c
+++ b/lib/librte_table/rte_table_hash_ext.c
@@ -185,15 +185,15 @@ rte_table_hash_ext_create(void *params, int socket_id, uint32_t entry_size)
 		return NULL;
 
 	/* Memory allocation */
-	table_meta_sz = CACHE_LINE_ROUNDUP(sizeof(struct rte_table_hash));
-	bucket_sz = CACHE_LINE_ROUNDUP(p->n_buckets * sizeof(struct bucket));
+	table_meta_sz = RTE_CACHE_LINE_ROUNDUP(sizeof(struct rte_table_hash));
+	bucket_sz = RTE_CACHE_LINE_ROUNDUP(p->n_buckets * sizeof(struct bucket));
 	bucket_ext_sz =
-		CACHE_LINE_ROUNDUP(p->n_buckets_ext * sizeof(struct bucket));
-	key_sz = CACHE_LINE_ROUNDUP(p->n_keys * p->key_size);
-	key_stack_sz = CACHE_LINE_ROUNDUP(p->n_keys * sizeof(uint32_t));
+		RTE_CACHE_LINE_ROUNDUP(p->n_buckets_ext * sizeof(struct bucket));
+	key_sz = RTE_CACHE_LINE_ROUNDUP(p->n_keys * p->key_size);
+	key_stack_sz = RTE_CACHE_LINE_ROUNDUP(p->n_keys * sizeof(uint32_t));
 	bkt_ext_stack_sz =
-		CACHE_LINE_ROUNDUP(p->n_buckets_ext * sizeof(uint32_t));
-	data_sz = CACHE_LINE_ROUNDUP(p->n_keys * entry_size);
+		RTE_CACHE_LINE_ROUNDUP(p->n_buckets_ext * sizeof(uint32_t));
+	data_sz = RTE_CACHE_LINE_ROUNDUP(p->n_keys * entry_size);
 	total_size = table_meta_sz + bucket_sz + bucket_ext_sz + key_sz +
 		key_stack_sz + bkt_ext_stack_sz + data_sz;
 
diff --git a/lib/librte_table/rte_table_hash_lru.c b/lib/librte_table/rte_table_hash_lru.c
index cea7a92..a7fa03c 100644
--- a/lib/librte_table/rte_table_hash_lru.c
+++ b/lib/librte_table/rte_table_hash_lru.c
@@ -161,11 +161,11 @@ rte_table_hash_lru_create(void *params, int socket_id, uint32_t entry_size)
 	}
 
 	/* Memory allocation */
-	table_meta_sz = CACHE_LINE_ROUNDUP(sizeof(struct rte_table_hash));
-	bucket_sz = CACHE_LINE_ROUNDUP(p->n_buckets * sizeof(struct bucket));
-	key_sz = CACHE_LINE_ROUNDUP(p->n_keys * p->key_size);
-	key_stack_sz = CACHE_LINE_ROUNDUP(p->n_keys * sizeof(uint32_t));
-	data_sz = CACHE_LINE_ROUNDUP(p->n_keys * entry_size);
+	table_meta_sz = RTE_CACHE_LINE_ROUNDUP(sizeof(struct rte_table_hash));
+	bucket_sz = RTE_CACHE_LINE_ROUNDUP(p->n_buckets * sizeof(struct bucket));
+	key_sz = RTE_CACHE_LINE_ROUNDUP(p->n_keys * p->key_size);
+	key_stack_sz = RTE_CACHE_LINE_ROUNDUP(p->n_keys * sizeof(uint32_t));
+	data_sz = RTE_CACHE_LINE_ROUNDUP(p->n_keys * entry_size);
 	total_size = table_meta_sz + bucket_sz + key_sz + key_stack_sz +
 		data_sz;
 
-- 
2.1.0
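
The rte_table_acl and rte_table_hash hunks above all follow the same
pattern: each region is rounded up on its own before the sizes are summed,
and a single backing allocation is then carved into regions, each of which
starts on a cache-line boundary relative to the start of the block (in the
DPDK code that block is itself requested with cache-line alignment). A
minimal sketch of that layout, using the same illustrative round-up as in
the previous sketch and hypothetical sizes, with plain calloc() standing in
for the aligned DPDK allocator:

#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

/* Same illustrative 64-byte round-up as in the previous sketch. */
#define EXAMPLE_CACHE_LINE_SIZE 64
#define EXAMPLE_CACHE_LINE_ROUNDUP(size) \
	(EXAMPLE_CACHE_LINE_SIZE * \
	 (((size) + EXAMPLE_CACHE_LINE_SIZE - 1) / EXAMPLE_CACHE_LINE_SIZE))

int main(void)
{
	/* Hypothetical region sizes, not taken from a real table config. */
	size_t meta_sz   = EXAMPLE_CACHE_LINE_ROUNDUP(200);		/* 256   */
	size_t bucket_sz = EXAMPLE_CACHE_LINE_ROUNDUP(100 * 48);	/* 4800  */
	size_t key_sz    = EXAMPLE_CACHE_LINE_ROUNDUP(1000 * 16);	/* 16000 */
	size_t total     = meta_sz + bucket_sz + key_sz;

	/* One backing allocation; calloc() stands in for the
	 * cache-line-aligned DPDK allocator used by the real code. */
	uint8_t *blob = calloc(1, total);
	if (blob == NULL)
		return 1;

	/* Because every size is a multiple of the line size, each region
	 * begins on a cache-line boundary relative to the blob. */
	uint8_t *buckets = blob + meta_sz;
	uint8_t *keys    = blob + meta_sz + bucket_sz;

	printf("total=%zu, buckets at +%zu, keys at +%zu\n",
	       total, meta_sz, meta_sz + bucket_sz);

	(void)buckets;
	(void)keys;
	free(blob);
	return 0;
}
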


* Re: [dpdk-dev] [PATCH 0/3] Add RTE_ prefix to CACHE_LINE related macros
  2014-11-19 12:26 [dpdk-dev] [PATCH 0/3] Add RTE_ prefix to CACHE_LINE related macros Sergio Gonzalez Monroy
                   ` (2 preceding siblings ...)
  2014-11-19 12:26 ` [dpdk-dev] [PATCH 3/3] Add RTE_ prefix to CACHE_LINE_ROUNDUP macro Sergio Gonzalez Monroy
@ 2014-11-27 13:58 ` Thomas Monjalon
  3 siblings, 0 replies; 5+ messages in thread
From: Thomas Monjalon @ 2014-11-27 13:58 UTC (permalink / raw)
  To: Sergio Gonzalez Monroy; +Cc: dev

> Currently DPDK sets CACHE_LINE_SIZE value to 64 by default if the macro is
> not already defined.
> 
> FreeBSD defines a CACHE_LINE_SIZE macro in the header file:
> /usr/include/machine/param.h
> 
> These macros set different values, 64 in DPDK vs 128 in FreeBSD, causing
> broken application behaviour if the system header file is included before
> rte_memory.h (where DPDK sets CACHE_LINE_SIZE).
> 
> This is the case for some examples like ip_fragmentation.
> In such application, DPDK library code would assume 64 bytes cache line size
> and the application code would assume 128 cache line size.
> Given that mbufs now take two cache lines and that the structure is being
> aligned based on this value, the result is broken application functionality.
> 
> The approach to fix this issue is to add RTE_ prefix to all CACHE_LINE_xxxx
> related macros to avoid conflicts.
> 
> Sergio Gonzalez Monroy (3):
>   Add RTE_ prefix to CACHE_LINE_SIZE macro
>   Add RTE_ prefix to CACHE_LINE_MASK macro
>   Add RTE_ prefix to CACHE_LINE_ROUNDUP macro

Updated and applied in 1 commit.

Thanks
-- 
Thomas


end of thread, other threads:[~2014-11-27 13:58 UTC | newest]

Thread overview: 5+ messages (download: mbox.gz / follow: Atom feed)
2014-11-19 12:26 [dpdk-dev] [PATCH 0/3] Add RTE_ prefix to CACHE_LINE related macros Sergio Gonzalez Monroy
2014-11-19 12:26 ` [dpdk-dev] [PATCH 1/3] Add RTE_ prefix to CACHE_LINE_SIZE macro Sergio Gonzalez Monroy
2014-11-19 12:26 ` [dpdk-dev] [PATCH 2/3] Add RTE_ prefix to CACHE_LINE_MASK macro Sergio Gonzalez Monroy
2014-11-19 12:26 ` [dpdk-dev] [PATCH 3/3] Add RTE_ prefix to CACHE_LINE_ROUNDUP macro Sergio Gonzalez Monroy
2014-11-27 13:58 ` [dpdk-dev] [PATCH 0/3] Add RTE_ prefix to CACHE_LINE related macros Thomas Monjalon
