DPDK patches and discussions
* [dpdk-dev] [PATCH 0/2] Add ABI Version Testing to app/test
@ 2019-05-28 11:51 Ray Kinsella
  2019-05-28 11:51 ` [dpdk-dev] [PATCH 1/2] app/test: Add ABI Version Testing functionality Ray Kinsella
                   ` (2 more replies)
  0 siblings, 3 replies; 7+ messages in thread
From: Ray Kinsella @ 2019-05-28 11:51 UTC (permalink / raw)
  To: bruce.richardson, vladimir.medvedkin; +Cc: dev, Ray Kinsella

This patchset adds ABI Version Testing functionality to the app/test unit
test framework.

The patchset is intended to address two issues previously raised during ML
conversations on ABI stability:
1. How do we unit test previous ABI versions that are still supported?
2. How do we unit test inline functions from previous ABI versions that
are still supported?

The more obvious way to achieve both of the above is to simply archive
pre-built binaries, compiled against previous versions of DPDK, for use in
unit testing previous ABI versions. While this should still be done as an
additional check, that approach does not scale well: must every DPDK
developer keep a local copy of these binaries to test with before
upstreaming changes?

Instead, starting with rte_lpm, I did the following:

* I reproduced, mostly unmodified, the unit tests from previous ABI
  versions, in this case v2.0 and v16.04.
* I reproduced the rte_lpm interface headers from these previous ABI
  versions, including the inline functions, remapping symbols to the
  appropriate versions (see the sketch after this list).
* I added support for multiple ABI versions to the app/test unit test
  framework, to allow users to switch between ABI versions
  (set_abi_version) without further polluting the already long list of
  unit tests available in app/test.
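
As a minimal sketch of the remapping, using the MAP_ABI_SYMBOL_VERSION
macro introduced in patch 1 (the DPDK_2.0 version tag is an assumption
here, by analogy with the DPDK_16.04 tag used in v16.04/dcompat.h):

    /* Bind a symbol in this translation unit to a specific ABI version
     * via the GNU assembler .symver directive (shared library builds).
     */
    #define MAP_ABI_SYMBOL_VERSION(name, abi_version) \
        __asm(".symver "RTE_STR(name)","RTE_STR(name)"@"RTE_STR(abi_version))

    /* e.g. resolve rte_lpm_create to its DPDK v2.0 implementation,
     * expanding to: __asm(".symver rte_lpm_create,rte_lpm_create@DPDK_2.0")
     */
    MAP_ABI_SYMBOL_VERSION(rte_lpm_create, DPDK_2.0);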

The intention here is that, in future, as developers need to deprecate
APIs, the associated unit tests may move into the ABI version testing
mechanism of app/test, instead of simply being replaced by the latest set
of unit tests as would be the case today.

ToDo:
* Refactor the v2.0 and v16.04 unit tests to separate functional and
  performance test cases.
* Add support for triggering ABI version unit tests from the app/test
  command line.

Ray Kinsella (2):
  app/test: Add ABI Version Testing functionality
  app/test: LPMv4 ABI Version Testing

 app/test/Makefile              |   12 +-
 app/test/commands.c            |  131 ++-
 app/test/meson.build           |    5 +
 app/test/test.c                |    2 +
 app/test/test.h                |   52 +-
 app/test/test_lpm.c            |    1 +
 app/test/test_lpm_perf.c       |  293 +------
 app/test/test_lpm_routes.c     |  287 +++++++
 app/test/test_lpm_routes.h     |   25 +
 app/test/v16.04/dcompat.h      |   23 +
 app/test/v16.04/rte_lpm.h      |  463 +++++++++++
 app/test/v16.04/rte_lpm_neon.h |  119 +++
 app/test/v16.04/rte_lpm_sse.h  |  120 +++
 app/test/v16.04/test_lpm.c     | 1405 ++++++++++++++++++++++++++++++++
 app/test/v16.04/test_v1604.c   |   14 +
 app/test/v2.0/dcompat.h        |   23 +
 app/test/v2.0/rte_lpm.h        |  443 ++++++++++
 app/test/v2.0/test_lpm.c       | 1306 +++++++++++++++++++++++++++++
 app/test/v2.0/test_v20.c       |   14 +
 19 files changed, 4420 insertions(+), 318 deletions(-)
 create mode 100644 app/test/test_lpm_routes.c
 create mode 100644 app/test/test_lpm_routes.h
 create mode 100644 app/test/v16.04/dcompat.h
 create mode 100644 app/test/v16.04/rte_lpm.h
 create mode 100644 app/test/v16.04/rte_lpm_neon.h
 create mode 100644 app/test/v16.04/rte_lpm_sse.h
 create mode 100644 app/test/v16.04/test_lpm.c
 create mode 100644 app/test/v16.04/test_v1604.c
 create mode 100644 app/test/v2.0/dcompat.h
 create mode 100644 app/test/v2.0/rte_lpm.h
 create mode 100644 app/test/v2.0/test_lpm.c
 create mode 100644 app/test/v2.0/test_v20.c

-- 
2.17.1


* [dpdk-dev] [PATCH 1/2] app/test: Add ABI Version Testing functionality
  2019-05-28 11:51 [dpdk-dev] [PATCH 0/2] Add ABI Version Testing to app/test Ray Kinsella
@ 2019-05-28 11:51 ` Ray Kinsella
  2019-05-28 11:51 ` [dpdk-dev] [PATCH 2/2] app/test: LPMv4 ABI Version Testing Ray Kinsella
  2019-05-28 12:08 ` [dpdk-dev] [PATCH 0/2] Add ABI Version Testing to app/test Bruce Richardson
  2 siblings, 0 replies; 7+ messages in thread
From: Ray Kinsella @ 2019-05-28 11:51 UTC (permalink / raw)
  To: bruce.richardson, vladimir.medvedkin; +Cc: dev, Ray Kinsella

This patch adds ABI version testing functionality to the app/test
unit test framework, comprising:

1. The TEST_DPDK_ABI_VERSION_* and REGISTER_TEST_ABI_VERSION macros, to
   register ABI versions with the infrastructure.
2. The MAP_ABI_SYMBOL_VERSION macro, to remap symbols based on their ABI
   version.
3. The set_abi_version command, to switch between ABI versions.
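
For context, a minimal sketch of how these pieces fit together, assuming
the v16.04 files in patch 2 follow this pattern (the v1604 and
lpm_autotest names are illustrative):

    /* register the v16.04 ABI version, making it selectable at runtime */
    REGISTER_TEST_ABI_VERSION(v1604, TEST_DPDK_ABI_VERSION_V1604)

    /* register a test command against that ABI version; the same command
     * name may also be registered against other ABI versions
     */
    REGISTER_TEST_COMMAND_VERSION(lpm_autotest, test_lpm,
                                  TEST_DPDK_ABI_VERSION_V1604)

Then, at the app/test prompt:

    RTE>>set_abi_version v1604
    RTE>>lpm_autotest
    RTE>>set_abi_version default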

Signed-off-by: Ray Kinsella <ray.kinsella@intel.com>
---
 app/test/commands.c | 131 ++++++++++++++++++++++++++++++++++++++------
 app/test/test.c     |   2 +
 app/test/test.h     |  52 +++++++++++++++---
 3 files changed, 159 insertions(+), 26 deletions(-)

diff --git a/app/test/commands.c b/app/test/commands.c
index 8d5a03a95..06fc33ee5 100644
--- a/app/test/commands.c
+++ b/app/test/commands.c
@@ -50,12 +50,22 @@
 
 /****************/
 
+static uint8_t test_abi_version = TEST_DPDK_ABI_VERSION_DEFAULT;
+
+static struct test_abi_version_list abi_version_list =
+	TAILQ_HEAD_INITIALIZER(abi_version_list);
+
 static struct test_commands_list commands_list =
 	TAILQ_HEAD_INITIALIZER(commands_list);
 
-void
-add_test_command(struct test_command *t)
+void add_abi_version(struct test_abi_version *av)
+{
+	TAILQ_INSERT_TAIL(&abi_version_list, av, next);
+}
+
+void add_test_command(struct test_command *t, uint8_t abi_version)
 {
+	t->abi_version = abi_version;
 	TAILQ_INSERT_TAIL(&commands_list, t, next);
 }
 
@@ -63,6 +73,12 @@ struct cmd_autotest_result {
 	cmdline_fixed_string_t autotest;
 };
 
+cmdline_parse_token_string_t
+cmd_autotest_autotest[TEST_DPDK_ABI_VERSION_MAX] = {
+	[0 ... TEST_DPDK_ABI_VERSION_MAX-1] =
+	TOKEN_STRING_INITIALIZER(struct cmd_autotest_result, autotest, "")
+};
+
 static void cmd_autotest_parsed(void *parsed_result,
 				__attribute__((unused)) struct cmdline *cl,
 				__attribute__((unused)) void *data)
@@ -72,7 +88,8 @@ static void cmd_autotest_parsed(void *parsed_result,
 	int ret = 0;
 
 	TAILQ_FOREACH(t, &commands_list, next) {
-		if (!strcmp(res->autotest, t->command))
+		if (!strcmp(res->autotest, t->command)
+				&& t->abi_version == test_abi_version)
 			ret = t->callback();
 	}
 
@@ -86,10 +103,6 @@ static void cmd_autotest_parsed(void *parsed_result,
 	fflush(stdout);
 }
 
-cmdline_parse_token_string_t cmd_autotest_autotest =
-	TOKEN_STRING_INITIALIZER(struct cmd_autotest_result, autotest,
-				 "");
-
 cmdline_parse_inst_t cmd_autotest = {
 	.f = cmd_autotest_parsed,  /* function to call */
 	.data = NULL,      /* 2nd arg of func */
@@ -244,6 +257,53 @@ cmdline_parse_inst_t cmd_quit = {
 
 /****************/
 
+struct cmd_set_abi_version_result {
+	cmdline_fixed_string_t set;
+	cmdline_fixed_string_t abi_version_name;
+};
+
+static void cmd_set_abi_version_parsed(
+				void *parsed_result,
+				__attribute__((unused)) struct cmdline *cl,
+				__attribute__((unused)) void *data)
+{
+	struct test_abi_version *av;
+	struct cmd_set_abi_version_result *res = parsed_result;
+
+	TAILQ_FOREACH(av, &abi_version_list, next) {
+		if (!strcmp(res->abi_version_name, av->version_name)) {
+
+			printf("abi version set to %s\n", av->version_name);
+			test_abi_version = av->version_id;
+			cmd_autotest.tokens[0] =
+				(void *)&cmd_autotest_autotest[av->version_id];
+		}
+	}
+
+	fflush(stdout);
+}
+
+cmdline_parse_token_string_t cmd_set_abi_version_set =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_abi_version_result, set,
+				"set_abi_version");
+
+cmdline_parse_token_string_t cmd_set_abi_version_abi_version =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_abi_version_result,
+				abi_version_name, NULL);
+
+cmdline_parse_inst_t cmd_set_abi_version = {
+	.f = cmd_set_abi_version_parsed,  /* function to call */
+	.data = NULL,      /* 2nd arg of func */
+	.help_str = "set abi version: ",
+	.tokens = {        /* token list, NULL terminated */
+		(void *)&cmd_set_abi_version_set,
+		(void *)&cmd_set_abi_version_abi_version,
+		NULL,
+	},
+};
+
+/****************/
+
 struct cmd_set_rxtx_result {
 	cmdline_fixed_string_t set;
 	cmdline_fixed_string_t mode;
@@ -259,7 +319,7 @@ static void cmd_set_rxtx_parsed(void *parsed_result, struct cmdline *cl,
 
 cmdline_parse_token_string_t cmd_set_rxtx_set =
 	TOKEN_STRING_INITIALIZER(struct cmd_set_rxtx_result, set,
-				 "set_rxtx_mode");
+				"set_rxtx_mode");
 
 cmdline_parse_token_string_t cmd_set_rxtx_mode =
 	TOKEN_STRING_INITIALIZER(struct cmd_set_rxtx_result, mode, NULL);
@@ -360,29 +420,66 @@ cmdline_parse_ctx_t main_ctx[] = {
 	(cmdline_parse_inst_t *)&cmd_set_rxtx,
 	(cmdline_parse_inst_t *)&cmd_set_rxtx_anchor,
 	(cmdline_parse_inst_t *)&cmd_set_rxtx_sc,
+	(cmdline_parse_inst_t *)&cmd_set_abi_version,
 	NULL,
 };
 
 int commands_init(void)
 {
+	struct test_abi_version *av;
 	struct test_command *t;
-	char *commands;
-	int commands_len = 0;
+	char *commands[TEST_DPDK_ABI_VERSION_MAX];
+	char *help;
+
+	int commands_len[TEST_DPDK_ABI_VERSION_MAX] = {
+		[0 ... TEST_DPDK_ABI_VERSION_MAX-1] = 0
+	};
+	int help_len = strlen(cmd_set_abi_version.help_str);
+	int abi_version;
+
+	/* set the set_abi_version command help string */
+	TAILQ_FOREACH(av, &abi_version_list, next) {
+		help_len += strlen(av->version_name) + 1;
+	}
+
+	help = (char *)calloc(help_len, sizeof(char));
+	if (!help)
+		return -1;
+
+	strlcat(help, cmd_set_abi_version.help_str, help_len);
+	TAILQ_FOREACH(av, &abi_version_list, next) {
+		strlcat(help, av->version_name, help_len);
+		if (TAILQ_NEXT(av, next) != NULL)
+			strlcat(help, "|", help_len);
+	}
+
+	cmd_set_abi_version.help_str = help;
 
+	/* set the parse strings for the command lists */
 	TAILQ_FOREACH(t, &commands_list, next) {
-		commands_len += strlen(t->command) + 1;
+		commands_len[t->abi_version] += strlen(t->command) + 1;
 	}
 
-	commands = (char *)calloc(commands_len, sizeof(char));
-	if (!commands)
-		return -1;
+	for (abi_version = 0; abi_version < TEST_DPDK_ABI_VERSION_MAX;
+		abi_version++) {
+		commands[abi_version] =
+			(char *)calloc(commands_len[abi_version], sizeof(char));
+		if (!commands[abi_version])
+			return -1;
+	}
 
 	TAILQ_FOREACH(t, &commands_list, next) {
-		strlcat(commands, t->command, commands_len);
+		strlcat(commands[t->abi_version],
+			t->command, commands_len[t->abi_version]);
 		if (TAILQ_NEXT(t, next) != NULL)
-			strlcat(commands, "#", commands_len);
+			strlcat(commands[t->abi_version],
+				"#", commands_len[t->abi_version]);
 	}
 
-	cmd_autotest_autotest.string_data.str = commands;
+	for (abi_version = 0; abi_version < TEST_DPDK_ABI_VERSION_MAX;
+		abi_version++)
+		cmd_autotest_autotest[abi_version].string_data.str =
+			commands[abi_version];
+
 	return 0;
 }
diff --git a/app/test/test.c b/app/test/test.c
index ea1e98f2e..4bc9df4c2 100644
--- a/app/test/test.c
+++ b/app/test/test.c
@@ -297,3 +297,5 @@ unit_test_suite_runner(struct unit_test_suite *suite)
 
 	return 0;
 }
+
+REGISTER_TEST_ABI_VERSION(default, TEST_DPDK_ABI_VERSION_DEFAULT)
diff --git a/app/test/test.h b/app/test/test.h
index ac0c50616..5ec3728d0 100644
--- a/app/test/test.h
+++ b/app/test/test.h
@@ -162,25 +162,59 @@ int test_set_rxtx_conf(cmdline_fixed_string_t mode);
 int test_set_rxtx_anchor(cmdline_fixed_string_t type);
 int test_set_rxtx_sc(cmdline_fixed_string_t type);
 
+#define MAP_ABI_SYMBOL_VERSION(name, abi_version)                             \
+	__asm(".symver "RTE_STR(name)","RTE_STR(name)"@"RTE_STR(abi_version))
+
+#define TEST_DPDK_ABI_VERSION_DEFAULT 0
+#define TEST_DPDK_ABI_VERSION_V1604   1
+#define TEST_DPDK_ABI_VERSION_V20     2
+#define TEST_DPDK_ABI_VERSION_MAX     3
+
+TAILQ_HEAD(test_abi_version_list, test_abi_version);
+struct test_abi_version {
+	TAILQ_ENTRY(test_abi_version) next;
+	const char *version_name;
+	uint8_t version_id;
+};
+
+void add_abi_version(struct test_abi_version *av);
+
+/* Register a test function with its command string */
+#define REGISTER_TEST_ABI_VERSION(name, id)                                   \
+	static struct test_abi_version test_struct_##name = {                 \
+		.version_name = RTE_STR(name),                                \
+		.version_id = id,                                             \
+	};                                                                    \
+	RTE_INIT(test_register_##name)                                        \
+	{                                                                     \
+		add_abi_version(&test_struct_##name);                         \
+	}
+
 typedef int (test_callback)(void);
 TAILQ_HEAD(test_commands_list, test_command);
 struct test_command {
 	TAILQ_ENTRY(test_command) next;
 	const char *command;
 	test_callback *callback;
+	uint8_t abi_version;
 };
 
-void add_test_command(struct test_command *t);
+void add_test_command(struct test_command *t, uint8_t abi_version);
+
+/* Register a test function with its command string and abi version */
+#define REGISTER_TEST_COMMAND_VERSION(cmd, func, abi_version)                 \
+	static struct test_command test_struct_##cmd = {                      \
+		.command = RTE_STR(cmd),                                      \
+		.callback = func,                                             \
+	};                                                                    \
+	RTE_INIT(test_register_##cmd)                                         \
+	{                                                                     \
+		add_test_command(&test_struct_##cmd, abi_version);            \
+	}
 
 /* Register a test function with its command string */
+
 #define REGISTER_TEST_COMMAND(cmd, func) \
-	static struct test_command test_struct_##cmd = { \
-		.command = RTE_STR(cmd), \
-		.callback = func, \
-	}; \
-	RTE_INIT(test_register_##cmd) \
-	{ \
-		add_test_command(&test_struct_##cmd); \
-	}
+	REGISTER_TEST_COMMAND_VERSION(cmd, func, TEST_DPDK_ABI_VERSION_DEFAULT)
 
 #endif
-- 
2.17.1


* [dpdk-dev] [PATCH 2/2] app/test: LPMv4 ABI Version Testing
  2019-05-28 11:51 [dpdk-dev] [PATCH 0/2] Add ABI Version Testing to app/test Ray Kinsella
  2019-05-28 11:51 ` [dpdk-dev] [PATCH 1/2] app/test: Add ABI Version Testing functionality Ray Kinsella
@ 2019-05-28 11:51 ` Ray Kinsella
  2019-05-29 13:50   ` Aaron Conole
  2019-05-28 12:08 ` [dpdk-dev] [PATCH 0/2] Add ABI Version Testing to app/test Bruce Richardson
  2 siblings, 1 reply; 7+ messages in thread
From: Ray Kinsella @ 2019-05-28 11:51 UTC (permalink / raw)
  To: bruce.richardson, vladimir.medvedkin; +Cc: dev, Ray Kinsella

This second patch adds the LPM ABI version unit tests, comprising:

1. Registering the DPDK v2.0 and DPDK v16.04 ABI versions with the
   infrastructure.
2. Forward porting the DPDK v2.0 and DPDK v16.04 LPM unit test
   cases, remapping the LPM library symbols to the appropriate versions.
3. Refactoring the LPM perf routes table to make this functionality
   available to the v2.0 and v16.04 unit tests; forward porting this
   code from v2.0 etc. as well would have increased the DPDK codebase by
   several MLoC.
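
The per-version registration files are small; a plausible sketch of
app/test/v16.04/test_v1604.c, based on the registration macro from
patch 1 (hypothetical; the actual 14-line file is not reproduced in
this excerpt):

    #include "../test.h"

    /* hypothetical: make "v1604" selectable via the set_abi_version
     * command registered in patch 1
     */
    REGISTER_TEST_ABI_VERSION(v1604, TEST_DPDK_ABI_VERSION_V1604)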

Signed-off-by: Ray Kinsella <ray.kinsella@intel.com>
---
 app/test/Makefile              |   12 +-
 app/test/meson.build           |    5 +
 app/test/test_lpm.c            |    1 +
 app/test/test_lpm_perf.c       |  293 +------
 app/test/test_lpm_routes.c     |  287 +++++++
 app/test/test_lpm_routes.h     |   25 +
 app/test/v16.04/dcompat.h      |   23 +
 app/test/v16.04/rte_lpm.h      |  463 +++++++++++
 app/test/v16.04/rte_lpm_neon.h |  119 +++
 app/test/v16.04/rte_lpm_sse.h  |  120 +++
 app/test/v16.04/test_lpm.c     | 1405 ++++++++++++++++++++++++++++++++
 app/test/v16.04/test_v1604.c   |   14 +
 app/test/v2.0/dcompat.h        |   23 +
 app/test/v2.0/rte_lpm.h        |  443 ++++++++++
 app/test/v2.0/test_lpm.c       | 1306 +++++++++++++++++++++++++++++
 app/test/v2.0/test_v20.c       |   14 +
 16 files changed, 4261 insertions(+), 292 deletions(-)
 create mode 100644 app/test/test_lpm_routes.c
 create mode 100644 app/test/test_lpm_routes.h
 create mode 100644 app/test/v16.04/dcompat.h
 create mode 100644 app/test/v16.04/rte_lpm.h
 create mode 100644 app/test/v16.04/rte_lpm_neon.h
 create mode 100644 app/test/v16.04/rte_lpm_sse.h
 create mode 100644 app/test/v16.04/test_lpm.c
 create mode 100644 app/test/v16.04/test_v1604.c
 create mode 100644 app/test/v2.0/dcompat.h
 create mode 100644 app/test/v2.0/rte_lpm.h
 create mode 100644 app/test/v2.0/test_lpm.c
 create mode 100644 app/test/v2.0/test_v20.c

diff --git a/app/test/Makefile b/app/test/Makefile
index 68d6b4fbc..5899eb8b9 100644
--- a/app/test/Makefile
+++ b/app/test/Makefile
@@ -78,6 +78,10 @@ SRCS-y += test_ring.c
 SRCS-y += test_ring_perf.c
 SRCS-y += test_pmd_perf.c
 
+#ABI Version Testing
+SRCS-$(CONFIG_RTE_BUILD_SHARED_LIB) += v2.0/test_v20.c
+SRCS-$(CONFIG_RTE_BUILD_SHARED_LIB) += v16.04/test_v1604.c
+
 ifeq ($(CONFIG_RTE_LIBRTE_TABLE),y)
 SRCS-y += test_table.c
 SRCS-$(CONFIG_RTE_LIBRTE_PIPELINE) += test_table_pipeline.c
@@ -107,7 +111,6 @@ SRCS-y += test_logs.c
 SRCS-y += test_memcpy.c
 SRCS-y += test_memcpy_perf.c
 
-
 SRCS-$(CONFIG_RTE_LIBRTE_MEMBER) += test_member.c
 SRCS-$(CONFIG_RTE_LIBRTE_MEMBER) += test_member_perf.c
 
@@ -122,11 +125,18 @@ SRCS-$(CONFIG_RTE_LIBRTE_HASH) += test_hash_multiwriter.c
 SRCS-$(CONFIG_RTE_LIBRTE_HASH) += test_hash_readwrite.c
 SRCS-$(CONFIG_RTE_LIBRTE_HASH) += test_hash_readwrite_lf.c
 
+SRCS-$(CONFIG_RTE_LIBRTE_LPM) += test_lpm_routes.c
 SRCS-$(CONFIG_RTE_LIBRTE_LPM) += test_lpm.c
 SRCS-$(CONFIG_RTE_LIBRTE_LPM) += test_lpm_perf.c
 SRCS-$(CONFIG_RTE_LIBRTE_LPM) += test_lpm6.c
 SRCS-$(CONFIG_RTE_LIBRTE_LPM) += test_lpm6_perf.c
 
+#LPM ABI Testing
+ifeq ($(CONFIG_RTE_BUILD_SHARED_LIB),y)
+SRCS-$(CONFIG_RTE_LIBRTE_LPM) += v2.0/test_lpm.c
+SRCS-$(CONFIG_RTE_LIBRTE_LPM) += v16.04/test_lpm.c
+endif
+
 SRCS-y += test_debug.c
 SRCS-y += test_errno.c
 SRCS-y += test_tailq.c
diff --git a/app/test/meson.build b/app/test/meson.build
index 83391cef0..628f4e1ff 100644
--- a/app/test/meson.build
+++ b/app/test/meson.build
@@ -4,6 +4,8 @@
 test_sources = files('commands.c',
 	'packet_burst_generator.c',
 	'sample_packet_forward.c',
+	'v2.0/test_v20.c',
+	'v16.04/test_v1604.c',
 	'test.c',
 	'test_acl.c',
 	'test_alarm.c',
@@ -63,6 +65,9 @@ test_sources = files('commands.c',
 	'test_lpm6.c',
 	'test_lpm6_perf.c',
 	'test_lpm_perf.c',
+	'test_lpm_routes.c',
+	'v2.0/test_lpm.c',
+	'v16.04/test_lpm.c',
 	'test_malloc.c',
 	'test_mbuf.c',
 	'test_member.c',
diff --git a/app/test/test_lpm.c b/app/test/test_lpm.c
index 5d697dd0f..bfa702677 100644
--- a/app/test/test_lpm.c
+++ b/app/test/test_lpm.c
@@ -1277,6 +1277,7 @@ test_lpm(void)
 	int status, global_status = 0;
 
 	for (i = 0; i < NUM_LPM_TESTS; i++) {
+		printf("# test %02d\n", i);
 		status = tests[i]();
 		if (status < 0) {
 			printf("ERROR: LPM Test %u: FAIL\n", i);
diff --git a/app/test/test_lpm_perf.c b/app/test/test_lpm_perf.c
index 3b98ce0c8..a6b8b35c2 100644
--- a/app/test/test_lpm_perf.c
+++ b/app/test/test_lpm_perf.c
@@ -5,7 +5,6 @@
 #include <stdio.h>
 #include <stdint.h>
 #include <stdlib.h>
-#include <math.h>
 
 #include <rte_cycles.h>
 #include <rte_random.h>
@@ -13,6 +12,7 @@
 #include <rte_ip.h>
 #include <rte_lpm.h>
 
+#include "test_lpm_routes.h"
 #include "test.h"
 #include "test_xmmt_ops.h"
 
@@ -27,295 +27,6 @@
 #define BATCH_SIZE (1 << 12)
 #define BULK_SIZE 32
 
-#define MAX_RULE_NUM (1200000)
-
-struct route_rule {
-	uint32_t ip;
-	uint8_t depth;
-};
-
-struct route_rule large_route_table[MAX_RULE_NUM];
-
-static uint32_t num_route_entries;
-#define NUM_ROUTE_ENTRIES num_route_entries
-
-enum {
-	IP_CLASS_A,
-	IP_CLASS_B,
-	IP_CLASS_C
-};
-
-/* struct route_rule_count defines the total number of rules in following a/b/c
- * each item in a[]/b[]/c[] is the number of common IP address class A/B/C, not
- * including the ones for private local network.
- */
-struct route_rule_count {
-	uint32_t a[RTE_LPM_MAX_DEPTH];
-	uint32_t b[RTE_LPM_MAX_DEPTH];
-	uint32_t c[RTE_LPM_MAX_DEPTH];
-};
-
-/* All following numbers of each depth of each common IP class are just
- * got from previous large constant table in app/test/test_lpm_routes.h .
- * In order to match similar performance, they keep same depth and IP
- * address coverage as previous constant table. These numbers don't
- * include any private local IP address. As previous large const rule
- * table was just dumped from a real router, there are no any IP address
- * in class C or D.
- */
-static struct route_rule_count rule_count = {
-	.a = { /* IP class A in which the most significant bit is 0 */
-		    0, /* depth =  1 */
-		    0, /* depth =  2 */
-		    1, /* depth =  3 */
-		    0, /* depth =  4 */
-		    2, /* depth =  5 */
-		    1, /* depth =  6 */
-		    3, /* depth =  7 */
-		  185, /* depth =  8 */
-		   26, /* depth =  9 */
-		   16, /* depth = 10 */
-		   39, /* depth = 11 */
-		  144, /* depth = 12 */
-		  233, /* depth = 13 */
-		  528, /* depth = 14 */
-		  866, /* depth = 15 */
-		 3856, /* depth = 16 */
-		 3268, /* depth = 17 */
-		 5662, /* depth = 18 */
-		17301, /* depth = 19 */
-		22226, /* depth = 20 */
-		11147, /* depth = 21 */
-		16746, /* depth = 22 */
-		17120, /* depth = 23 */
-		77578, /* depth = 24 */
-		  401, /* depth = 25 */
-		  656, /* depth = 26 */
-		 1107, /* depth = 27 */
-		 1121, /* depth = 28 */
-		 2316, /* depth = 29 */
-		  717, /* depth = 30 */
-		   10, /* depth = 31 */
-		   66  /* depth = 32 */
-	},
-	.b = { /* IP class A in which the most 2 significant bits are 10 */
-		    0, /* depth =  1 */
-		    0, /* depth =  2 */
-		    0, /* depth =  3 */
-		    0, /* depth =  4 */
-		    1, /* depth =  5 */
-		    1, /* depth =  6 */
-		    1, /* depth =  7 */
-		    3, /* depth =  8 */
-		    3, /* depth =  9 */
-		   30, /* depth = 10 */
-		   25, /* depth = 11 */
-		  168, /* depth = 12 */
-		  305, /* depth = 13 */
-		  569, /* depth = 14 */
-		 1129, /* depth = 15 */
-		50800, /* depth = 16 */
-		 1645, /* depth = 17 */
-		 1820, /* depth = 18 */
-		 3506, /* depth = 19 */
-		 3258, /* depth = 20 */
-		 3424, /* depth = 21 */
-		 4971, /* depth = 22 */
-		 6885, /* depth = 23 */
-		39771, /* depth = 24 */
-		  424, /* depth = 25 */
-		  170, /* depth = 26 */
-		  433, /* depth = 27 */
-		   92, /* depth = 28 */
-		  366, /* depth = 29 */
-		  377, /* depth = 30 */
-		    2, /* depth = 31 */
-		  200  /* depth = 32 */
-	},
-	.c = { /* IP class A in which the most 3 significant bits are 110 */
-		     0, /* depth =  1 */
-		     0, /* depth =  2 */
-		     0, /* depth =  3 */
-		     0, /* depth =  4 */
-		     0, /* depth =  5 */
-		     0, /* depth =  6 */
-		     0, /* depth =  7 */
-		    12, /* depth =  8 */
-		     8, /* depth =  9 */
-		     9, /* depth = 10 */
-		    33, /* depth = 11 */
-		    69, /* depth = 12 */
-		   237, /* depth = 13 */
-		  1007, /* depth = 14 */
-		  1717, /* depth = 15 */
-		 14663, /* depth = 16 */
-		  8070, /* depth = 17 */
-		 16185, /* depth = 18 */
-		 48261, /* depth = 19 */
-		 36870, /* depth = 20 */
-		 33960, /* depth = 21 */
-		 50638, /* depth = 22 */
-		 61422, /* depth = 23 */
-		466549, /* depth = 24 */
-		  1829, /* depth = 25 */
-		  4824, /* depth = 26 */
-		  4927, /* depth = 27 */
-		  5914, /* depth = 28 */
-		 10254, /* depth = 29 */
-		  4905, /* depth = 30 */
-		     1, /* depth = 31 */
-		   716  /* depth = 32 */
-	}
-};
-
-static void generate_random_rule_prefix(uint32_t ip_class, uint8_t depth)
-{
-/* IP address class A, the most significant bit is 0 */
-#define IP_HEAD_MASK_A			0x00000000
-#define IP_HEAD_BIT_NUM_A		1
-
-/* IP address class B, the most significant 2 bits are 10 */
-#define IP_HEAD_MASK_B			0x80000000
-#define IP_HEAD_BIT_NUM_B		2
-
-/* IP address class C, the most significant 3 bits are 110 */
-#define IP_HEAD_MASK_C			0xC0000000
-#define IP_HEAD_BIT_NUM_C		3
-
-	uint32_t class_depth;
-	uint32_t range;
-	uint32_t mask;
-	uint32_t step;
-	uint32_t start;
-	uint32_t fixed_bit_num;
-	uint32_t ip_head_mask;
-	uint32_t rule_num;
-	uint32_t k;
-	struct route_rule *ptr_rule;
-
-	if (ip_class == IP_CLASS_A) {        /* IP Address class A */
-		fixed_bit_num = IP_HEAD_BIT_NUM_A;
-		ip_head_mask = IP_HEAD_MASK_A;
-		rule_num = rule_count.a[depth - 1];
-	} else if (ip_class == IP_CLASS_B) { /* IP Address class B */
-		fixed_bit_num = IP_HEAD_BIT_NUM_B;
-		ip_head_mask = IP_HEAD_MASK_B;
-		rule_num = rule_count.b[depth - 1];
-	} else {                             /* IP Address class C */
-		fixed_bit_num = IP_HEAD_BIT_NUM_C;
-		ip_head_mask = IP_HEAD_MASK_C;
-		rule_num = rule_count.c[depth - 1];
-	}
-
-	if (rule_num == 0)
-		return;
-
-	/* the number of rest bits which don't include the most significant
-	 * fixed bits for this IP address class
-	 */
-	class_depth = depth - fixed_bit_num;
-
-	/* range is the maximum number of rules for this depth and
-	 * this IP address class
-	 */
-	range = 1 << class_depth;
-
-	/* only mask the most depth significant generated bits
-	 * except fixed bits for IP address class
-	 */
-	mask = range - 1;
-
-	/* Widen coverage of IP address in generated rules */
-	if (range <= rule_num)
-		step = 1;
-	else
-		step = round((double)range / rule_num);
-
-	/* Only generate rest bits except the most significant
-	 * fixed bits for IP address class
-	 */
-	start = lrand48() & mask;
-	ptr_rule = &large_route_table[num_route_entries];
-	for (k = 0; k < rule_num; k++) {
-		ptr_rule->ip = (start << (RTE_LPM_MAX_DEPTH - depth))
-			| ip_head_mask;
-		ptr_rule->depth = depth;
-		ptr_rule++;
-		start = (start + step) & mask;
-	}
-	num_route_entries += rule_num;
-}
-
-static void insert_rule_in_random_pos(uint32_t ip, uint8_t depth)
-{
-	uint32_t pos;
-	int try_count = 0;
-	struct route_rule tmp;
-
-	do {
-		pos = lrand48();
-		try_count++;
-	} while ((try_count < 10) && (pos > num_route_entries));
-
-	if ((pos > num_route_entries) || (pos >= MAX_RULE_NUM))
-		pos = num_route_entries >> 1;
-
-	tmp = large_route_table[pos];
-	large_route_table[pos].ip = ip;
-	large_route_table[pos].depth = depth;
-	if (num_route_entries < MAX_RULE_NUM)
-		large_route_table[num_route_entries++] = tmp;
-}
-
-static void generate_large_route_rule_table(void)
-{
-	uint32_t ip_class;
-	uint8_t  depth;
-
-	num_route_entries = 0;
-	memset(large_route_table, 0, sizeof(large_route_table));
-
-	for (ip_class = IP_CLASS_A; ip_class <= IP_CLASS_C; ip_class++) {
-		for (depth = 1; depth <= RTE_LPM_MAX_DEPTH; depth++) {
-			generate_random_rule_prefix(ip_class, depth);
-		}
-	}
-
-	/* Add following rules to keep same as previous large constant table,
-	 * they are 4 rules with private local IP address and 1 all-zeros prefix
-	 * with depth = 8.
-	 */
-	insert_rule_in_random_pos(IPv4(0, 0, 0, 0), 8);
-	insert_rule_in_random_pos(IPv4(10, 2, 23, 147), 32);
-	insert_rule_in_random_pos(IPv4(192, 168, 100, 10), 24);
-	insert_rule_in_random_pos(IPv4(192, 168, 25, 100), 24);
-	insert_rule_in_random_pos(IPv4(192, 168, 129, 124), 32);
-}
-
-static void
-print_route_distribution(const struct route_rule *table, uint32_t n)
-{
-	unsigned i, j;
-
-	printf("Route distribution per prefix width: \n");
-	printf("DEPTH    QUANTITY (PERCENT)\n");
-	printf("--------------------------- \n");
-
-	/* Count depths. */
-	for (i = 1; i <= 32; i++) {
-		unsigned depth_counter = 0;
-		double percent_hits;
-
-		for (j = 0; j < n; j++)
-			if (table[j].depth == (uint8_t) i)
-				depth_counter++;
-
-		percent_hits = ((double)depth_counter)/((double)n) * 100;
-		printf("%.2u%15u (%.2f)\n", i, depth_counter, percent_hits);
-	}
-	printf("\n");
-}
-
 static int
 test_lpm_perf(void)
 {
@@ -375,7 +86,7 @@ test_lpm_perf(void)
 			(unsigned) cache_line_counter, (unsigned) cache_line_counter * 64);
 
 	printf("Average LPM Add: %g cycles\n",
-			(double)total_time / NUM_ROUTE_ENTRIES);
+	       (double)total_time / NUM_ROUTE_ENTRIES);
 
 	/* Measure single Lookup */
 	total_time = 0;
diff --git a/app/test/test_lpm_routes.c b/app/test/test_lpm_routes.c
new file mode 100644
index 000000000..08128542a
--- /dev/null
+++ b/app/test/test_lpm_routes.c
@@ -0,0 +1,287 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2010-2019 Intel Corporation
+ */
+
+#include <math.h>
+
+#include "rte_lpm.h"
+#include "test_lpm_routes.h"
+
+uint32_t num_route_entries;
+struct route_rule large_route_table[MAX_RULE_NUM];
+
+enum {
+	IP_CLASS_A,
+	IP_CLASS_B,
+	IP_CLASS_C
+};
+
+/* struct route_rule_count defines the total number of rules in following a/b/c
+ * each item in a[]/b[]/c[] is the number of common IP address class A/B/C, not
+ * including the ones for private local network.
+ */
+struct route_rule_count {
+	uint32_t a[RTE_LPM_MAX_DEPTH];
+	uint32_t b[RTE_LPM_MAX_DEPTH];
+	uint32_t c[RTE_LPM_MAX_DEPTH];
+};
+
+/* All following numbers of each depth of each common IP class are just
+ * got from previous large constant table in app/test/test_lpm_routes.h .
+ * In order to match similar performance, they keep same depth and IP
+ * address coverage as previous constant table. These numbers don't
+ * include any private local IP address. As previous large const rule
+ * table was just dumped from a real router, there are no any IP address
+ * in class C or D.
+ */
+static struct route_rule_count rule_count = {
+	.a = { /* IP class A in which the most significant bit is 0 */
+		    0, /* depth =  1 */
+		    0, /* depth =  2 */
+		    1, /* depth =  3 */
+		    0, /* depth =  4 */
+		    2, /* depth =  5 */
+		    1, /* depth =  6 */
+		    3, /* depth =  7 */
+		  185, /* depth =  8 */
+		   26, /* depth =  9 */
+		   16, /* depth = 10 */
+		   39, /* depth = 11 */
+		  144, /* depth = 12 */
+		  233, /* depth = 13 */
+		  528, /* depth = 14 */
+		  866, /* depth = 15 */
+		 3856, /* depth = 16 */
+		 3268, /* depth = 17 */
+		 5662, /* depth = 18 */
+		17301, /* depth = 19 */
+		22226, /* depth = 20 */
+		11147, /* depth = 21 */
+		16746, /* depth = 22 */
+		17120, /* depth = 23 */
+		77578, /* depth = 24 */
+		  401, /* depth = 25 */
+		  656, /* depth = 26 */
+		 1107, /* depth = 27 */
+		 1121, /* depth = 28 */
+		 2316, /* depth = 29 */
+		  717, /* depth = 30 */
+		   10, /* depth = 31 */
+		   66  /* depth = 32 */
+	},
+	.b = { /* IP class A in which the most 2 significant bits are 10 */
+		    0, /* depth =  1 */
+		    0, /* depth =  2 */
+		    0, /* depth =  3 */
+		    0, /* depth =  4 */
+		    1, /* depth =  5 */
+		    1, /* depth =  6 */
+		    1, /* depth =  7 */
+		    3, /* depth =  8 */
+		    3, /* depth =  9 */
+		   30, /* depth = 10 */
+		   25, /* depth = 11 */
+		  168, /* depth = 12 */
+		  305, /* depth = 13 */
+		  569, /* depth = 14 */
+		 1129, /* depth = 15 */
+		50800, /* depth = 16 */
+		 1645, /* depth = 17 */
+		 1820, /* depth = 18 */
+		 3506, /* depth = 19 */
+		 3258, /* depth = 20 */
+		 3424, /* depth = 21 */
+		 4971, /* depth = 22 */
+		 6885, /* depth = 23 */
+		39771, /* depth = 24 */
+		  424, /* depth = 25 */
+		  170, /* depth = 26 */
+		  433, /* depth = 27 */
+		   92, /* depth = 28 */
+		  366, /* depth = 29 */
+		  377, /* depth = 30 */
+		    2, /* depth = 31 */
+		  200  /* depth = 32 */
+	},
+	.c = { /* IP class A in which the most 3 significant bits are 110 */
+		     0, /* depth =  1 */
+		     0, /* depth =  2 */
+		     0, /* depth =  3 */
+		     0, /* depth =  4 */
+		     0, /* depth =  5 */
+		     0, /* depth =  6 */
+		     0, /* depth =  7 */
+		    12, /* depth =  8 */
+		     8, /* depth =  9 */
+		     9, /* depth = 10 */
+		    33, /* depth = 11 */
+		    69, /* depth = 12 */
+		   237, /* depth = 13 */
+		  1007, /* depth = 14 */
+		  1717, /* depth = 15 */
+		 14663, /* depth = 16 */
+		  8070, /* depth = 17 */
+		 16185, /* depth = 18 */
+		 48261, /* depth = 19 */
+		 36870, /* depth = 20 */
+		 33960, /* depth = 21 */
+		 50638, /* depth = 22 */
+		 61422, /* depth = 23 */
+		466549, /* depth = 24 */
+		  1829, /* depth = 25 */
+		  4824, /* depth = 26 */
+		  4927, /* depth = 27 */
+		  5914, /* depth = 28 */
+		 10254, /* depth = 29 */
+		  4905, /* depth = 30 */
+		     1, /* depth = 31 */
+		   716  /* depth = 32 */
+	}
+};
+
+static void generate_random_rule_prefix(uint32_t ip_class, uint8_t depth)
+{
+/* IP address class A, the most significant bit is 0 */
+#define IP_HEAD_MASK_A			0x00000000
+#define IP_HEAD_BIT_NUM_A		1
+
+/* IP address class B, the most significant 2 bits are 10 */
+#define IP_HEAD_MASK_B			0x80000000
+#define IP_HEAD_BIT_NUM_B		2
+
+/* IP address class C, the most significant 3 bits are 110 */
+#define IP_HEAD_MASK_C			0xC0000000
+#define IP_HEAD_BIT_NUM_C		3
+
+	uint32_t class_depth;
+	uint32_t range;
+	uint32_t mask;
+	uint32_t step;
+	uint32_t start;
+	uint32_t fixed_bit_num;
+	uint32_t ip_head_mask;
+	uint32_t rule_num;
+	uint32_t k;
+	struct route_rule *ptr_rule;
+
+	if (ip_class == IP_CLASS_A) {        /* IP Address class A */
+		fixed_bit_num = IP_HEAD_BIT_NUM_A;
+		ip_head_mask = IP_HEAD_MASK_A;
+		rule_num = rule_count.a[depth - 1];
+	} else if (ip_class == IP_CLASS_B) { /* IP Address class B */
+		fixed_bit_num = IP_HEAD_BIT_NUM_B;
+		ip_head_mask = IP_HEAD_MASK_B;
+		rule_num = rule_count.b[depth - 1];
+	} else {                             /* IP Address class C */
+		fixed_bit_num = IP_HEAD_BIT_NUM_C;
+		ip_head_mask = IP_HEAD_MASK_C;
+		rule_num = rule_count.c[depth - 1];
+	}
+
+	if (rule_num == 0)
+		return;
+
+	/* the number of rest bits which don't include the most significant
+	 * fixed bits for this IP address class
+	 */
+	class_depth = depth - fixed_bit_num;
+
+	/* range is the maximum number of rules for this depth and
+	 * this IP address class
+	 */
+	range = 1 << class_depth;
+
+	/* only mask the most depth significant generated bits
+	 * except fixed bits for IP address class
+	 */
+	mask = range - 1;
+
+	/* Widen coverage of IP address in generated rules */
+	if (range <= rule_num)
+		step = 1;
+	else
+		step = round((double)range / rule_num);
+
+	/* Only generate rest bits except the most significant
+	 * fixed bits for IP address class
+	 */
+	start = lrand48() & mask;
+	ptr_rule = &large_route_table[num_route_entries];
+	for (k = 0; k < rule_num; k++) {
+		ptr_rule->ip = (start << (RTE_LPM_MAX_DEPTH - depth))
+			| ip_head_mask;
+		ptr_rule->depth = depth;
+		ptr_rule++;
+		start = (start + step) & mask;
+	}
+	num_route_entries += rule_num;
+}
+
+static void insert_rule_in_random_pos(uint32_t ip, uint8_t depth)
+{
+	uint32_t pos;
+	int try_count = 0;
+	struct route_rule tmp;
+
+	do {
+		pos = lrand48();
+		try_count++;
+	} while ((try_count < 10) && (pos > num_route_entries));
+
+	if ((pos > num_route_entries) || (pos >= MAX_RULE_NUM))
+		pos = num_route_entries >> 1;
+
+	tmp = large_route_table[pos];
+	large_route_table[pos].ip = ip;
+	large_route_table[pos].depth = depth;
+	if (num_route_entries < MAX_RULE_NUM)
+		large_route_table[num_route_entries++] = tmp;
+}
+
+void generate_large_route_rule_table(void)
+{
+	uint32_t ip_class;
+	uint8_t  depth;
+
+	num_route_entries = 0;
+	memset(large_route_table, 0, sizeof(large_route_table));
+
+	for (ip_class = IP_CLASS_A; ip_class <= IP_CLASS_C; ip_class++) {
+		for (depth = 1; depth <= RTE_LPM_MAX_DEPTH; depth++)
+			generate_random_rule_prefix(ip_class, depth);
+	}
+
+	/* Add following rules to keep same as previous large constant table,
+	 * they are 4 rules with private local IP address and 1 all-zeros prefix
+	 * with depth = 8.
+	 */
+	insert_rule_in_random_pos(IPv4(0, 0, 0, 0), 8);
+	insert_rule_in_random_pos(IPv4(10, 2, 23, 147), 32);
+	insert_rule_in_random_pos(IPv4(192, 168, 100, 10), 24);
+	insert_rule_in_random_pos(IPv4(192, 168, 25, 100), 24);
+	insert_rule_in_random_pos(IPv4(192, 168, 129, 124), 32);
+}
+
+void
+print_route_distribution(const struct route_rule *table, uint32_t n)
+{
+	unsigned int i, j;
+
+	printf("Route distribution per prefix width: \n");
+	printf("DEPTH    QUANTITY (PERCENT)\n");
+	printf("---------------------------\n");
+
+	/* Count depths. */
+	for (i = 1; i <= 32; i++) {
+		unsigned int depth_counter = 0;
+		double percent_hits;
+
+		for (j = 0; j < n; j++)
+			if (table[j].depth == (uint8_t) i)
+				depth_counter++;
+
+		percent_hits = ((double)depth_counter)/((double)n) * 100;
+		printf("%.2u%15u (%.2f)\n", i, depth_counter, percent_hits);
+	}
+	printf("\n");
+}
diff --git a/app/test/test_lpm_routes.h b/app/test/test_lpm_routes.h
new file mode 100644
index 000000000..c7874ea8f
--- /dev/null
+++ b/app/test/test_lpm_routes.h
@@ -0,0 +1,25 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2010-2019 Intel Corporation
+ */
+
+#ifndef _TEST_LPM_ROUTES_H_
+#define _TEST_LPM_ROUTES_H_
+
+#include <rte_ip.h>
+
+#define MAX_RULE_NUM (1200000)
+
+struct route_rule {
+	uint32_t ip;
+	uint8_t depth;
+};
+
+extern struct route_rule large_route_table[MAX_RULE_NUM];
+
+extern uint32_t num_route_entries;
+#define NUM_ROUTE_ENTRIES num_route_entries
+
+void generate_large_route_rule_table(void);
+void print_route_distribution(const struct route_rule *table, uint32_t n);
+
+#endif
diff --git a/app/test/v16.04/dcompat.h b/app/test/v16.04/dcompat.h
new file mode 100644
index 000000000..889c3b503
--- /dev/null
+++ b/app/test/v16.04/dcompat.h
@@ -0,0 +1,23 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2010-2019 Intel Corporation
+ */
+
+#ifndef _DCOMPAT_H_
+#define _DCOMPAT_H_
+
+#define ABI_VERSION DPDK_16.04
+
+#define MAP_ABI_SYMBOL(name) \
+	MAP_ABI_SYMBOL_VERSION(name, ABI_VERSION)
+
+MAP_ABI_SYMBOL(rte_lpm_add);
+MAP_ABI_SYMBOL(rte_lpm_create);
+MAP_ABI_SYMBOL(rte_lpm_delete);
+MAP_ABI_SYMBOL(rte_lpm_delete_all);
+MAP_ABI_SYMBOL(rte_lpm_find_existing);
+MAP_ABI_SYMBOL(rte_lpm_free);
+MAP_ABI_SYMBOL(rte_lpm_is_rule_present);
+
+#undef MAP_ABI_SYMBOL
+
+#endif
diff --git a/app/test/v16.04/rte_lpm.h b/app/test/v16.04/rte_lpm.h
new file mode 100644
index 000000000..c3348fbc1
--- /dev/null
+++ b/app/test/v16.04/rte_lpm.h
@@ -0,0 +1,463 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2010-2014 Intel Corporation
+ */
+
+#ifndef _RTE_LPM_H_
+#define _RTE_LPM_H_
+
+/**
+ * @file
+ * RTE Longest Prefix Match (LPM)
+ */
+
+#include <errno.h>
+#include <sys/queue.h>
+#include <stdint.h>
+#include <stdlib.h>
+#include <rte_branch_prediction.h>
+#include <rte_byteorder.h>
+#include <rte_memory.h>
+#include <rte_common.h>
+#include <rte_vect.h>
+#include <rte_compat.h>
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/** Max number of characters in LPM name. */
+#define RTE_LPM_NAMESIZE                32
+
+/** Maximum depth value possible for IPv4 LPM. */
+#define RTE_LPM_MAX_DEPTH               32
+
+/** @internal Total number of tbl24 entries. */
+#define RTE_LPM_TBL24_NUM_ENTRIES       (1 << 24)
+
+/** @internal Number of entries in a tbl8 group. */
+#define RTE_LPM_TBL8_GROUP_NUM_ENTRIES  256
+
+/** @internal Max number of tbl8 groups in the tbl8. */
+#define RTE_LPM_MAX_TBL8_NUM_GROUPS         (1 << 24)
+
+/** @internal Total number of tbl8 groups in the tbl8. */
+#define RTE_LPM_TBL8_NUM_GROUPS         256
+
+/** @internal Total number of tbl8 entries. */
+#define RTE_LPM_TBL8_NUM_ENTRIES        (RTE_LPM_TBL8_NUM_GROUPS * \
+					RTE_LPM_TBL8_GROUP_NUM_ENTRIES)
+
+/** @internal Macro to enable/disable run-time checks. */
+#if defined(RTE_LIBRTE_LPM_DEBUG)
+#define RTE_LPM_RETURN_IF_TRUE(cond, retval) do { \
+	if (cond) \
+		return (retval); \
+} while (0)
+#else
+#define RTE_LPM_RETURN_IF_TRUE(cond, retval)
+#endif
+
+/** @internal bitmask with valid and valid_group fields set */
+#define RTE_LPM_VALID_EXT_ENTRY_BITMASK 0x03000000
+
+/** Bitmask used to indicate successful lookup */
+#define RTE_LPM_LOOKUP_SUCCESS          0x01000000
+
+#if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
+/** @internal Tbl24 entry structure. */
+struct rte_lpm_tbl_entry_v20 {
+	/**
+	 * Stores Next hop (tbl8 or tbl24 when valid_group is not set) or
+	 * a group index pointing to a tbl8 structure (tbl24 only, when
+	 * valid_group is set)
+	 */
+	union {
+		uint8_t next_hop;
+		uint8_t group_idx;
+	};
+	/* Using single uint8_t to store 3 values. */
+	uint8_t valid     :1;   /**< Validation flag. */
+	/**
+	 * For tbl24:
+	 *  - valid_group == 0: entry stores a next hop
+	 *  - valid_group == 1: entry stores a group_index pointing to a tbl8
+	 * For tbl8:
+	 *  - valid_group indicates whether the current tbl8 is in use or not
+	 */
+	uint8_t valid_group :1;
+	uint8_t depth       :6; /**< Rule depth. */
+};
+
+struct rte_lpm_tbl_entry {
+	/**
+	 * Stores Next hop (tbl8 or tbl24 when valid_group is not set) or
+	 * a group index pointing to a tbl8 structure (tbl24 only, when
+	 * valid_group is set)
+	 */
+	uint32_t next_hop    :24;
+	/* Using single uint8_t to store 3 values. */
+	uint32_t valid       :1;   /**< Validation flag. */
+	/**
+	 * For tbl24:
+	 *  - valid_group == 0: entry stores a next hop
+	 *  - valid_group == 1: entry stores a group_index pointing to a tbl8
+	 * For tbl8:
+	 *  - valid_group indicates whether the current tbl8 is in use or not
+	 */
+	uint32_t valid_group :1;
+	uint32_t depth       :6; /**< Rule depth. */
+};
+
+#else
+struct rte_lpm_tbl_entry_v20 {
+	uint8_t depth       :6;
+	uint8_t valid_group :1;
+	uint8_t valid       :1;
+	union {
+		uint8_t group_idx;
+		uint8_t next_hop;
+	};
+};
+
+struct rte_lpm_tbl_entry {
+	uint32_t depth       :6;
+	uint32_t valid_group :1;
+	uint32_t valid       :1;
+	uint32_t next_hop    :24;
+
+};
+
+#endif
+
+/** LPM configuration structure. */
+struct rte_lpm_config {
+	uint32_t max_rules;      /**< Max number of rules. */
+	uint32_t number_tbl8s;   /**< Number of tbl8s to allocate. */
+	int flags;               /**< This field is currently unused. */
+};
+
+/** @internal Rule structure. */
+struct rte_lpm_rule_v20 {
+	uint32_t ip; /**< Rule IP address. */
+	uint8_t  next_hop; /**< Rule next hop. */
+};
+
+struct rte_lpm_rule {
+	uint32_t ip; /**< Rule IP address. */
+	uint32_t next_hop; /**< Rule next hop. */
+};
+
+/** @internal Contains metadata about the rules table. */
+struct rte_lpm_rule_info {
+	uint32_t used_rules; /**< Used rules so far. */
+	uint32_t first_rule; /**< Indexes the first rule of a given depth. */
+};
+
+/** @internal LPM structure. */
+struct rte_lpm_v20 {
+	/* LPM metadata. */
+	char name[RTE_LPM_NAMESIZE];        /**< Name of the lpm. */
+	uint32_t max_rules; /**< Max. balanced rules per lpm. */
+	struct rte_lpm_rule_info rule_info[RTE_LPM_MAX_DEPTH]; /**< Rule info table. */
+
+	/* LPM Tables. */
+	struct rte_lpm_tbl_entry_v20 tbl24[RTE_LPM_TBL24_NUM_ENTRIES]
+			__rte_cache_aligned; /**< LPM tbl24 table. */
+	struct rte_lpm_tbl_entry_v20 tbl8[RTE_LPM_TBL8_NUM_ENTRIES]
+			__rte_cache_aligned; /**< LPM tbl8 table. */
+	struct rte_lpm_rule_v20 rules_tbl[0] \
+			__rte_cache_aligned; /**< LPM rules. */
+};
+
+struct rte_lpm {
+	/* LPM metadata. */
+	char name[RTE_LPM_NAMESIZE];        /**< Name of the lpm. */
+	uint32_t max_rules; /**< Max. balanced rules per lpm. */
+	uint32_t number_tbl8s; /**< Number of tbl8s. */
+	struct rte_lpm_rule_info rule_info[RTE_LPM_MAX_DEPTH]; /**< Rule info table. */
+
+	/* LPM Tables. */
+	struct rte_lpm_tbl_entry tbl24[RTE_LPM_TBL24_NUM_ENTRIES]
+			__rte_cache_aligned; /**< LPM tbl24 table. */
+	struct rte_lpm_tbl_entry *tbl8; /**< LPM tbl8 table. */
+	struct rte_lpm_rule *rules_tbl; /**< LPM rules. */
+};
+
+/**
+ * Create an LPM object.
+ *
+ * @param name
+ *   LPM object name
+ * @param socket_id
+ *   NUMA socket ID for LPM table memory allocation
+ * @param config
+ *   Structure containing the configuration
+ * @return
+ *   Handle to LPM object on success, NULL otherwise with rte_errno set
+ *   to an appropriate values. Possible rte_errno values include:
+ *    - E_RTE_NO_CONFIG - function could not get pointer to rte_config structure
+ *    - E_RTE_SECONDARY - function was called from a secondary process instance
+ *    - EINVAL - invalid parameter passed to function
+ *    - ENOSPC - the maximum number of memzones has already been allocated
+ *    - EEXIST - a memzone with the same name already exists
+ *    - ENOMEM - no appropriate memory area found in which to create memzone
+ */
+struct rte_lpm *
+rte_lpm_create(const char *name, int socket_id,
+		const struct rte_lpm_config *config);
+struct rte_lpm_v20 *
+rte_lpm_create_v20(const char *name, int socket_id, int max_rules, int flags);
+struct rte_lpm *
+rte_lpm_create_v1604(const char *name, int socket_id,
+		const struct rte_lpm_config *config);
+
+/**
+ * Find an existing LPM object and return a pointer to it.
+ *
+ * @param name
+ *   Name of the lpm object as passed to rte_lpm_create()
+ * @return
+ *   Pointer to lpm object or NULL if object not found with rte_errno
+ *   set appropriately. Possible rte_errno values include:
+ *    - ENOENT - required entry not available to return.
+ */
+struct rte_lpm *
+rte_lpm_find_existing(const char *name);
+struct rte_lpm_v20 *
+rte_lpm_find_existing_v20(const char *name);
+struct rte_lpm *
+rte_lpm_find_existing_v1604(const char *name);
+
+/**
+ * Free an LPM object.
+ *
+ * @param lpm
+ *   LPM object handle
+ * @return
+ *   None
+ */
+void
+rte_lpm_free(struct rte_lpm *lpm);
+void
+rte_lpm_free_v20(struct rte_lpm_v20 *lpm);
+void
+rte_lpm_free_v1604(struct rte_lpm *lpm);
+
+/**
+ * Add a rule to the LPM table.
+ *
+ * @param lpm
+ *   LPM object handle
+ * @param ip
+ *   IP of the rule to be added to the LPM table
+ * @param depth
+ *   Depth of the rule to be added to the LPM table
+ * @param next_hop
+ *   Next hop of the rule to be added to the LPM table
+ * @return
+ *   0 on success, negative value otherwise
+ */
+int
+rte_lpm_add(struct rte_lpm *lpm, uint32_t ip, uint8_t depth, uint32_t next_hop);
+int
+rte_lpm_add_v20(struct rte_lpm_v20 *lpm, uint32_t ip, uint8_t depth,
+		uint8_t next_hop);
+int
+rte_lpm_add_v1604(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
+		uint32_t next_hop);
+
+/**
+ * Check if a rule is present in the LPM table,
+ * and provide its next hop if it is.
+ *
+ * @param lpm
+ *   LPM object handle
+ * @param ip
+ *   IP of the rule to be searched
+ * @param depth
+ *   Depth of the rule to searched
+ * @param next_hop
+ *   Next hop of the rule (valid only if it is found)
+ * @return
+ *   1 if the rule exists, 0 if it does not, a negative value on failure
+ */
+int
+rte_lpm_is_rule_present(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
+uint32_t *next_hop);
+int
+rte_lpm_is_rule_present_v20(struct rte_lpm_v20 *lpm, uint32_t ip, uint8_t depth,
+uint8_t *next_hop);
+int
+rte_lpm_is_rule_present_v1604(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
+uint32_t *next_hop);
+
+/**
+ * Delete a rule from the LPM table.
+ *
+ * @param lpm
+ *   LPM object handle
+ * @param ip
+ *   IP of the rule to be deleted from the LPM table
+ * @param depth
+ *   Depth of the rule to be deleted from the LPM table
+ * @return
+ *   0 on success, negative value otherwise
+ */
+int
+rte_lpm_delete(struct rte_lpm *lpm, uint32_t ip, uint8_t depth);
+int
+rte_lpm_delete_v20(struct rte_lpm_v20 *lpm, uint32_t ip, uint8_t depth);
+int
+rte_lpm_delete_v1604(struct rte_lpm *lpm, uint32_t ip, uint8_t depth);
+
+/**
+ * Delete all rules from the LPM table.
+ *
+ * @param lpm
+ *   LPM object handle
+ */
+void
+rte_lpm_delete_all(struct rte_lpm *lpm);
+void
+rte_lpm_delete_all_v20(struct rte_lpm_v20 *lpm);
+void
+rte_lpm_delete_all_v1604(struct rte_lpm *lpm);
+
+/**
+ * Lookup an IP into the LPM table.
+ *
+ * @param lpm
+ *   LPM object handle
+ * @param ip
+ *   IP to be looked up in the LPM table
+ * @param next_hop
+ *   Next hop of the most specific rule found for IP (valid on lookup hit only)
+ * @return
+ *   -EINVAL for incorrect arguments, -ENOENT on lookup miss, 0 on lookup hit
+ */
+static inline int
+rte_lpm_lookup(struct rte_lpm *lpm, uint32_t ip, uint32_t *next_hop)
+{
+	unsigned tbl24_index = (ip >> 8);
+	uint32_t tbl_entry;
+	const uint32_t *ptbl;
+
+	/* DEBUG: Check user input arguments. */
+	RTE_LPM_RETURN_IF_TRUE(((lpm == NULL) || (next_hop == NULL)), -EINVAL);
+
+	/* Copy tbl24 entry */
+	ptbl = (const uint32_t *)(&lpm->tbl24[tbl24_index]);
+	tbl_entry = *ptbl;
+
+	/* Copy tbl8 entry (only if needed) */
+	if (unlikely((tbl_entry & RTE_LPM_VALID_EXT_ENTRY_BITMASK) ==
+			RTE_LPM_VALID_EXT_ENTRY_BITMASK)) {
+
+		unsigned tbl8_index = (uint8_t)ip +
+				(((uint32_t)tbl_entry & 0x00FFFFFF) *
+						RTE_LPM_TBL8_GROUP_NUM_ENTRIES);
+
+		ptbl = (const uint32_t *)&lpm->tbl8[tbl8_index];
+		tbl_entry = *ptbl;
+	}
+
+	*next_hop = ((uint32_t)tbl_entry & 0x00FFFFFF);
+	return (tbl_entry & RTE_LPM_LOOKUP_SUCCESS) ? 0 : -ENOENT;
+}
+
+/**
+ * Lookup multiple IP addresses in an LPM table. This may be implemented as a
+ * macro, so the address of the function should not be used.
+ *
+ * @param lpm
+ *   LPM object handle
+ * @param ips
+ *   Array of IPs to be looked up in the LPM table
+ * @param next_hops
+ *   Next hop of the most specific rule found for IP (valid on lookup hit only).
+ *   This is an array of two byte values. The most significant byte in each
+ *   value says whether the lookup was successful (bitmask
+ *   RTE_LPM_LOOKUP_SUCCESS is set). The least significant byte is the
+ *   actual next hop.
+ * @param n
+ *   Number of elements in ips (and next_hops) array to lookup. This should be a
+ *   compile time constant, and divisible by 8 for best performance.
+ *  @return
+ *   -EINVAL for incorrect arguments, otherwise 0
+ */
+#define rte_lpm_lookup_bulk(lpm, ips, next_hops, n) \
+		rte_lpm_lookup_bulk_func(lpm, ips, next_hops, n)
+
+static inline int
+rte_lpm_lookup_bulk_func(const struct rte_lpm *lpm, const uint32_t *ips,
+		uint32_t *next_hops, const unsigned n)
+{
+	unsigned i;
+	unsigned tbl24_indexes[n];
+	const uint32_t *ptbl;
+
+	/* DEBUG: Check user input arguments. */
+	RTE_LPM_RETURN_IF_TRUE(((lpm == NULL) || (ips == NULL) ||
+			(next_hops == NULL)), -EINVAL);
+
+	for (i = 0; i < n; i++) {
+		tbl24_indexes[i] = ips[i] >> 8;
+	}
+
+	for (i = 0; i < n; i++) {
+		/* Simply copy tbl24 entry to output */
+		ptbl = (const uint32_t *)&lpm->tbl24[tbl24_indexes[i]];
+		next_hops[i] = *ptbl;
+
+		/* Overwrite output with tbl8 entry if needed */
+		if (unlikely((next_hops[i] & RTE_LPM_VALID_EXT_ENTRY_BITMASK) ==
+				RTE_LPM_VALID_EXT_ENTRY_BITMASK)) {
+
+			unsigned tbl8_index = (uint8_t)ips[i] +
+					(((uint32_t)next_hops[i] & 0x00FFFFFF) *
+					 RTE_LPM_TBL8_GROUP_NUM_ENTRIES);
+
+			ptbl = (const uint32_t *)&lpm->tbl8[tbl8_index];
+			next_hops[i] = *ptbl;
+		}
+	}
+	return 0;
+}
+
+/* Mask four results. */
+#define	 RTE_LPM_MASKX4_RES	UINT64_C(0x00ffffff00ffffff)
+
+/**
+ * Lookup four IP addresses in an LPM table.
+ *
+ * @param lpm
+ *   LPM object handle
+ * @param ip
+ *   Four IPs to be looked up in the LPM table
+ * @param hop
+ *   Next hop of the most specific rule found for IP (valid on lookup hit only).
+ *   This is an 4 elements array of two byte values.
+ *   If the lookup was successful for the given IP, then least significant byte
+ *   of the corresponding element is the actual next hop and the most
+ *   significant byte is zero.
+ *   If the lookup for the given IP failed, then corresponding element would
+ *   contain default value, see description of then next parameter.
+ * @param defv
+ *   Default value to populate into corresponding element of hop[] array,
+ *   if lookup would fail.
+ */
+static inline void
+rte_lpm_lookupx4(const struct rte_lpm *lpm, xmm_t ip, uint32_t hop[4],
+	uint32_t defv);
+
+#if defined(RTE_ARCH_ARM) || defined(RTE_ARCH_ARM64)
+#include "rte_lpm_neon.h"
+#else
+#include "rte_lpm_sse.h"
+#endif
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_LPM_H_ */
diff --git a/app/test/v16.04/rte_lpm_neon.h b/app/test/v16.04/rte_lpm_neon.h
new file mode 100644
index 000000000..936ec7af3
--- /dev/null
+++ b/app/test/v16.04/rte_lpm_neon.h
@@ -0,0 +1,119 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2010-2014 Intel Corporation
+ */
+
+#ifndef _RTE_LPM_NEON_H_
+#define _RTE_LPM_NEON_H_
+
+#include <rte_branch_prediction.h>
+#include <rte_byteorder.h>
+#include <rte_common.h>
+#include <rte_vect.h>
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+static inline void
+rte_lpm_lookupx4(const struct rte_lpm *lpm, xmm_t ip, uint32_t hop[4],
+	uint32_t defv)
+{
+	uint32x4_t i24;
+	rte_xmm_t i8;
+	uint32_t tbl[4];
+	uint64_t idx, pt, pt2;
+	const uint32_t *ptbl;
+
+	const uint32_t mask = UINT8_MAX;
+	const int32x4_t mask8 = vdupq_n_s32(mask);
+
+	/*
+	 * RTE_LPM_VALID_EXT_ENTRY_BITMASK for 2 LPM entries
+	 * as one 64-bit value (0x0300000003000000).
+	 */
+	const uint64_t mask_xv =
+		((uint64_t)RTE_LPM_VALID_EXT_ENTRY_BITMASK |
+		(uint64_t)RTE_LPM_VALID_EXT_ENTRY_BITMASK << 32);
+
+	/*
+	 * RTE_LPM_LOOKUP_SUCCESS for 2 LPM entries
+	 * as one 64-bit value (0x0100000001000000).
+	 */
+	const uint64_t mask_v =
+		((uint64_t)RTE_LPM_LOOKUP_SUCCESS |
+		(uint64_t)RTE_LPM_LOOKUP_SUCCESS << 32);
+
+	/* get 4 indexes for tbl24[]. */
+	i24 = vshrq_n_u32((uint32x4_t)ip, CHAR_BIT);
+
+	/* extract values from tbl24[] */
+	idx = vgetq_lane_u64((uint64x2_t)i24, 0);
+
+	ptbl = (const uint32_t *)&lpm->tbl24[(uint32_t)idx];
+	tbl[0] = *ptbl;
+	ptbl = (const uint32_t *)&lpm->tbl24[idx >> 32];
+	tbl[1] = *ptbl;
+
+	idx = vgetq_lane_u64((uint64x2_t)i24, 1);
+
+	ptbl = (const uint32_t *)&lpm->tbl24[(uint32_t)idx];
+	tbl[2] = *ptbl;
+	ptbl = (const uint32_t *)&lpm->tbl24[idx >> 32];
+	tbl[3] = *ptbl;
+
+	/* get 4 indexes for tbl8[]. */
+	i8.x = vandq_s32(ip, mask8);
+
+	pt = (uint64_t)tbl[0] |
+		(uint64_t)tbl[1] << 32;
+	pt2 = (uint64_t)tbl[2] |
+		(uint64_t)tbl[3] << 32;
+
+	/* search successfully finished for all 4 IP addresses. */
+	if (likely((pt & mask_xv) == mask_v) &&
+			likely((pt2 & mask_xv) == mask_v)) {
+		*(uint64_t *)hop = pt & RTE_LPM_MASKX4_RES;
+		*(uint64_t *)(hop + 2) = pt2 & RTE_LPM_MASKX4_RES;
+		return;
+	}
+
+	if (unlikely((pt & RTE_LPM_VALID_EXT_ENTRY_BITMASK) ==
+			RTE_LPM_VALID_EXT_ENTRY_BITMASK)) {
+		i8.u32[0] = i8.u32[0] +
+			(uint8_t)tbl[0] * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
+		ptbl = (const uint32_t *)&lpm->tbl8[i8.u32[0]];
+		tbl[0] = *ptbl;
+	}
+	if (unlikely((pt >> 32 & RTE_LPM_VALID_EXT_ENTRY_BITMASK) ==
+			RTE_LPM_VALID_EXT_ENTRY_BITMASK)) {
+		i8.u32[1] = i8.u32[1] +
+			(uint8_t)tbl[1] * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
+		ptbl = (const uint32_t *)&lpm->tbl8[i8.u32[1]];
+		tbl[1] = *ptbl;
+	}
+	if (unlikely((pt2 & RTE_LPM_VALID_EXT_ENTRY_BITMASK) ==
+			RTE_LPM_VALID_EXT_ENTRY_BITMASK)) {
+		i8.u32[2] = i8.u32[2] +
+			(uint8_t)tbl[2] * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
+		ptbl = (const uint32_t *)&lpm->tbl8[i8.u32[2]];
+		tbl[2] = *ptbl;
+	}
+	if (unlikely((pt2 >> 32 & RTE_LPM_VALID_EXT_ENTRY_BITMASK) ==
+			RTE_LPM_VALID_EXT_ENTRY_BITMASK)) {
+		i8.u32[3] = i8.u32[3] +
+			(uint8_t)tbl[3] * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
+		ptbl = (const uint32_t *)&lpm->tbl8[i8.u32[3]];
+		tbl[3] = *ptbl;
+	}
+
+	hop[0] = (tbl[0] & RTE_LPM_LOOKUP_SUCCESS) ? tbl[0] & 0x00FFFFFF : defv;
+	hop[1] = (tbl[1] & RTE_LPM_LOOKUP_SUCCESS) ? tbl[1] & 0x00FFFFFF : defv;
+	hop[2] = (tbl[2] & RTE_LPM_LOOKUP_SUCCESS) ? tbl[2] & 0x00FFFFFF : defv;
+	hop[3] = (tbl[3] & RTE_LPM_LOOKUP_SUCCESS) ? tbl[3] & 0x00FFFFFF : defv;
+}
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_LPM_NEON_H_ */
diff --git a/app/test/v16.04/rte_lpm_sse.h b/app/test/v16.04/rte_lpm_sse.h
new file mode 100644
index 000000000..edfa36be1
--- /dev/null
+++ b/app/test/v16.04/rte_lpm_sse.h
@@ -0,0 +1,120 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2010-2014 Intel Corporation
+ */
+
+#ifndef _RTE_LPM_SSE_H_
+#define _RTE_LPM_SSE_H_
+
+#include <rte_branch_prediction.h>
+#include <rte_byteorder.h>
+#include <rte_common.h>
+#include <rte_vect.h>
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+static inline void
+rte_lpm_lookupx4(const struct rte_lpm *lpm, xmm_t ip, uint32_t hop[4],
+	uint32_t defv)
+{
+	__m128i i24;
+	rte_xmm_t i8;
+	uint32_t tbl[4];
+	uint64_t idx, pt, pt2;
+	const uint32_t *ptbl;
+
+	const __m128i mask8 =
+		_mm_set_epi32(UINT8_MAX, UINT8_MAX, UINT8_MAX, UINT8_MAX);
+
+	/*
+	 * RTE_LPM_VALID_EXT_ENTRY_BITMASK for 2 LPM entries
+	 * as one 64-bit value (0x0300000003000000).
+	 */
+	const uint64_t mask_xv =
+		((uint64_t)RTE_LPM_VALID_EXT_ENTRY_BITMASK |
+		(uint64_t)RTE_LPM_VALID_EXT_ENTRY_BITMASK << 32);
+
+	/*
+	 * RTE_LPM_LOOKUP_SUCCESS for 2 LPM entries
+	 * as one 64-bit value (0x0100000001000000).
+	 */
+	const uint64_t mask_v =
+		((uint64_t)RTE_LPM_LOOKUP_SUCCESS |
+		(uint64_t)RTE_LPM_LOOKUP_SUCCESS << 32);
+
+	/* get 4 indexes for tbl24[]. */
+	i24 = _mm_srli_epi32(ip, CHAR_BIT);
+
+	/* extract values from tbl24[] */
+	idx = _mm_cvtsi128_si64(i24);
+	i24 = _mm_srli_si128(i24, sizeof(uint64_t));
+
+	ptbl = (const uint32_t *)&lpm->tbl24[(uint32_t)idx];
+	tbl[0] = *ptbl;
+	ptbl = (const uint32_t *)&lpm->tbl24[idx >> 32];
+	tbl[1] = *ptbl;
+
+	idx = _mm_cvtsi128_si64(i24);
+
+	ptbl = (const uint32_t *)&lpm->tbl24[(uint32_t)idx];
+	tbl[2] = *ptbl;
+	ptbl = (const uint32_t *)&lpm->tbl24[idx >> 32];
+	tbl[3] = *ptbl;
+
+	/* get 4 indexes for tbl8[]. */
+	i8.x = _mm_and_si128(ip, mask8);
+
+	pt = (uint64_t)tbl[0] |
+		(uint64_t)tbl[1] << 32;
+	pt2 = (uint64_t)tbl[2] |
+		(uint64_t)tbl[3] << 32;
+
+	/* search successfully finished for all 4 IP addresses. */
+	if (likely((pt & mask_xv) == mask_v) &&
+			likely((pt2 & mask_xv) == mask_v)) {
+		*(uint64_t *)hop = pt & RTE_LPM_MASKX4_RES;
+		*(uint64_t *)(hop + 2) = pt2 & RTE_LPM_MASKX4_RES;
+		return;
+	}
+
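+	/*
+	 * Slow path: for each lane whose tbl24 entry is valid and marked
+	 * extended, index into its tbl8 group (low byte of the IP plus the
+	 * group index from the entry, scaled by the group size) and reload.
+	 */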
+	if (unlikely((pt & RTE_LPM_VALID_EXT_ENTRY_BITMASK) ==
+			RTE_LPM_VALID_EXT_ENTRY_BITMASK)) {
+		i8.u32[0] = i8.u32[0] +
+			(uint8_t)tbl[0] * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
+		ptbl = (const uint32_t *)&lpm->tbl8[i8.u32[0]];
+		tbl[0] = *ptbl;
+	}
+	if (unlikely((pt >> 32 & RTE_LPM_VALID_EXT_ENTRY_BITMASK) ==
+			RTE_LPM_VALID_EXT_ENTRY_BITMASK)) {
+		i8.u32[1] = i8.u32[1] +
+			(uint8_t)tbl[1] * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
+		ptbl = (const uint32_t *)&lpm->tbl8[i8.u32[1]];
+		tbl[1] = *ptbl;
+	}
+	if (unlikely((pt2 & RTE_LPM_VALID_EXT_ENTRY_BITMASK) ==
+			RTE_LPM_VALID_EXT_ENTRY_BITMASK)) {
+		i8.u32[2] = i8.u32[2] +
+			(uint8_t)tbl[2] * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
+		ptbl = (const uint32_t *)&lpm->tbl8[i8.u32[2]];
+		tbl[2] = *ptbl;
+	}
+	if (unlikely((pt2 >> 32 & RTE_LPM_VALID_EXT_ENTRY_BITMASK) ==
+			RTE_LPM_VALID_EXT_ENTRY_BITMASK)) {
+		i8.u32[3] = i8.u32[3] +
+			(uint8_t)tbl[3] * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
+		ptbl = (const uint32_t *)&lpm->tbl8[i8.u32[3]];
+		tbl[3] = *ptbl;
+	}
+
+	hop[0] = (tbl[0] & RTE_LPM_LOOKUP_SUCCESS) ? tbl[0] & 0x00FFFFFF : defv;
+	hop[1] = (tbl[1] & RTE_LPM_LOOKUP_SUCCESS) ? tbl[1] & 0x00FFFFFF : defv;
+	hop[2] = (tbl[2] & RTE_LPM_LOOKUP_SUCCESS) ? tbl[2] & 0x00FFFFFF : defv;
+	hop[3] = (tbl[3] & RTE_LPM_LOOKUP_SUCCESS) ? tbl[3] & 0x00FFFFFF : defv;
+}
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_LPM_SSE_H_ */
diff --git a/app/test/v16.04/test_lpm.c b/app/test/v16.04/test_lpm.c
new file mode 100644
index 000000000..2aab8d0cc
--- /dev/null
+++ b/app/test/v16.04/test_lpm.c
@@ -0,0 +1,1405 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2010-2019 Intel Corporation
+ *
+ * LPM Autotests from DPDK v16.04 for ABI compatibility testing.
+ *
+ */
+
+#include <stdio.h>
+#include <stdint.h>
+#include <stdlib.h>
+#include <errno.h>
+#include <sys/queue.h>
+
+#include <rte_common.h>
+#include <rte_cycles.h>
+#include <rte_memory.h>
+#include <rte_random.h>
+#include <rte_branch_prediction.h>
+#include <rte_ip.h>
+#include <time.h>
+
+#include "../test_lpm_routes.h"
+#include "../test.h"
+#include "../test_xmmt_ops.h"
+
+/* backported header from DPDK v16.04 */
+#include "rte_lpm.h"
+/* remapping of DPDK v16.04 symbols */
+#include "dcompat.h"
+
+#define TEST_LPM_ASSERT(cond) do {                                            \
+	if (!(cond)) {                                                        \
+		printf("Error at line %d:\n", __LINE__);                      \
+		return -1;                                                    \
+	}                                                                     \
+} while (0)
+
+typedef int32_t (*rte_lpm_test)(void);
+
+static int32_t test0(void);
+static int32_t test1(void);
+static int32_t test2(void);
+static int32_t test3(void);
+static int32_t test4(void);
+static int32_t test5(void);
+static int32_t test6(void);
+static int32_t test7(void);
+static int32_t test8(void);
+static int32_t test9(void);
+static int32_t test10(void);
+static int32_t test11(void);
+static int32_t test12(void);
+static int32_t test13(void);
+static int32_t test14(void);
+static int32_t test15(void);
+static int32_t test16(void);
+static int32_t test17(void);
+static int32_t perf_test(void);
+
+static rte_lpm_test tests[] = {
+/* Test Cases */
+	test0,
+	test1,
+	test2,
+	test3,
+	test4,
+	test5,
+	test6,
+	test7,
+	test8,
+	test9,
+	test10,
+	test11,
+	test12,
+	test13,
+	test14,
+	test15,
+	test16,
+	test17,
+	perf_test,
+};
+
+#define NUM_LPM_TESTS (sizeof(tests)/sizeof(tests[0]))
+#define MAX_DEPTH 32
+#define MAX_RULES 256
+#define NUMBER_TBL8S 256
+#define PASS 0
+
+/*
+ * Check that rte_lpm_create fails gracefully for incorrect user input
+ * arguments
+ */
+int32_t
+test0(void)
+{
+	struct rte_lpm *lpm = NULL;
+	struct rte_lpm_config config;
+
+	config.max_rules = MAX_RULES;
+	config.number_tbl8s = NUMBER_TBL8S;
+	config.flags = 0;
+
+	/* rte_lpm_create: lpm name == NULL */
+	lpm = rte_lpm_create(NULL, SOCKET_ID_ANY, &config);
+	TEST_LPM_ASSERT(lpm == NULL);
+
+	/* rte_lpm_create: max_rules = 0 */
+	/* Note: __func__ inserts the function name, in this case "test0". */
+	config.max_rules = 0;
+	lpm = rte_lpm_create(__func__, SOCKET_ID_ANY, &config);
+	TEST_LPM_ASSERT(lpm == NULL);
+
+	/* socket_id < -1 is invalid */
+	config.max_rules = MAX_RULES;
+	lpm = rte_lpm_create(__func__, -2, &config);
+	TEST_LPM_ASSERT(lpm == NULL);
+
+	return PASS;
+}
+
+/*
+ * Create an lpm table, then delete it, 100 times.
+ * Use a slightly different rule count each time.
+ */
+int32_t
+test1(void)
+{
+	struct rte_lpm *lpm = NULL;
+	struct rte_lpm_config config;
+
+	config.number_tbl8s = NUMBER_TBL8S;
+	config.flags = 0;
+	int32_t i;
+
+	/* Create and free an lpm table repeatedly, varying max_rules. */
+	for (i = 0; i < 100; i++) {
+		config.max_rules = MAX_RULES - i;
+		lpm = rte_lpm_create(__func__, SOCKET_ID_ANY, &config);
+		TEST_LPM_ASSERT(lpm != NULL);
+
+		rte_lpm_free(lpm);
+	}
+
+	/* Cannot verify free directly, so just return success. */
+	return PASS;
+}
+
+/*
+ * Call rte_lpm_free for NULL pointer user input. Note: free has no return
+ * value, so failure cannot be checked; this test is added to increase
+ * function coverage metrics and to validate that freeing NULL does not crash.
+ */
+int32_t
+test2(void)
+{
+	struct rte_lpm *lpm = NULL;
+	struct rte_lpm_config config;
+
+	config.max_rules = MAX_RULES;
+	config.number_tbl8s = NUMBER_TBL8S;
+	config.flags = 0;
+
+	lpm = rte_lpm_create(__func__, SOCKET_ID_ANY, &config);
+	TEST_LPM_ASSERT(lpm != NULL);
+
+	rte_lpm_free(lpm);
+	rte_lpm_free(NULL);
+	return PASS;
+}
+
+/*
+ * Check that rte_lpm_add fails gracefully for incorrect user input arguments
+ */
+int32_t
+test3(void)
+{
+	struct rte_lpm *lpm = NULL;
+	struct rte_lpm_config config;
+
+	config.max_rules = MAX_RULES;
+	config.number_tbl8s = NUMBER_TBL8S;
+	config.flags = 0;
+	uint32_t ip = IPv4(0, 0, 0, 0), next_hop = 100;
+	uint8_t depth = 24;
+	int32_t status = 0;
+
+	/* rte_lpm_add: lpm == NULL */
+	status = rte_lpm_add(NULL, ip, depth, next_hop);
+	TEST_LPM_ASSERT(status < 0);
+
+	/* Create a valid lpm to use in the rest of the test. */
+	lpm = rte_lpm_create(__func__, SOCKET_ID_ANY, &config);
+	TEST_LPM_ASSERT(lpm != NULL);
+
+	/* rte_lpm_add: depth < 1 */
+	status = rte_lpm_add(lpm, ip, 0, next_hop);
+	TEST_LPM_ASSERT(status < 0);
+
+	/* rte_lpm_add: depth > MAX_DEPTH */
+	status = rte_lpm_add(lpm, ip, (MAX_DEPTH + 1), next_hop);
+	TEST_LPM_ASSERT(status < 0);
+
+	rte_lpm_free(lpm);
+
+	return PASS;
+}
+
+/*
+ * Check that rte_lpm_delete fails gracefully for incorrect user input
+ * arguments
+ */
+int32_t
+test4(void)
+{
+	struct rte_lpm *lpm = NULL;
+	struct rte_lpm_config config;
+
+	config.max_rules = MAX_RULES;
+	config.number_tbl8s = NUMBER_TBL8S;
+	config.flags = 0;
+	uint32_t ip = IPv4(0, 0, 0, 0);
+	uint8_t depth = 24;
+	int32_t status = 0;
+
+	/* rte_lpm_delete: lpm == NULL */
+	status = rte_lpm_delete(NULL, ip, depth);
+	TEST_LPM_ASSERT(status < 0);
+
+	/* Create a valid lpm to use in the rest of the test. */
+	lpm = rte_lpm_create(__func__, SOCKET_ID_ANY, &config);
+	TEST_LPM_ASSERT(lpm != NULL);
+
+	/* rte_lpm_delete: depth < 1 */
+	status = rte_lpm_delete(lpm, ip, 0);
+	TEST_LPM_ASSERT(status < 0);
+
+	/* rte_lpm_delete: depth > MAX_DEPTH */
+	status = rte_lpm_delete(lpm, ip, (MAX_DEPTH + 1));
+	TEST_LPM_ASSERT(status < 0);
+
+	rte_lpm_free(lpm);
+
+	return PASS;
+}
+
+/*
+ * Check that rte_lpm_lookup fails gracefully for incorrect user input
+ * arguments
+ */
+int32_t
+test5(void)
+{
+#if defined(RTE_LIBRTE_LPM_DEBUG)
+	struct rte_lpm *lpm = NULL;
+	struct rte_lpm_config config;
+
+	config.max_rules = MAX_RULES;
+	config.number_tbl8s = NUMBER_TBL8S;
+	config.flags = 0;
+	uint32_t ip = IPv4(0, 0, 0, 0), next_hop_return = 0;
+	int32_t status = 0;
+
+	/* rte_lpm_lookup: lpm == NULL */
+	status = rte_lpm_lookup(NULL, ip, &next_hop_return);
+	TEST_LPM_ASSERT(status < 0);
+
+	/* Create a valid lpm to use in the rest of the test. */
+	lpm = rte_lpm_create(__func__, SOCKET_ID_ANY, &config);
+	TEST_LPM_ASSERT(lpm != NULL);
+
+	/* rte_lpm_lookup: next_hop = NULL */
+	status = rte_lpm_lookup(lpm, ip, NULL);
+	TEST_LPM_ASSERT(status < 0);
+
+	rte_lpm_free(lpm);
+#endif
+	return PASS;
+}
+
+
+
+/*
+ * Call add, lookup and delete for a single rule with depth <= 24
+ */
+int32_t
+test6(void)
+{
+	struct rte_lpm *lpm = NULL;
+	struct rte_lpm_config config;
+
+	config.max_rules = MAX_RULES;
+	config.number_tbl8s = NUMBER_TBL8S;
+	config.flags = 0;
+	uint32_t ip = IPv4(0, 0, 0, 0), next_hop_add = 100, next_hop_return = 0;
+	uint8_t depth = 24;
+	int32_t status = 0;
+
+	lpm = rte_lpm_create(__func__, SOCKET_ID_ANY, &config);
+	TEST_LPM_ASSERT(lpm != NULL);
+
+	status = rte_lpm_add(lpm, ip, depth, next_hop_add);
+	TEST_LPM_ASSERT(status == 0);
+
+	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+	TEST_LPM_ASSERT((status == 0) && (next_hop_return == next_hop_add));
+
+	status = rte_lpm_delete(lpm, ip, depth);
+	TEST_LPM_ASSERT(status == 0);
+
+	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+	TEST_LPM_ASSERT(status == -ENOENT);
+
+	rte_lpm_free(lpm);
+
+	return PASS;
+}
+
+/*
+ * Call add, lookup and delete for a single rule with depth > 24
+ */
+
+int32_t
+test7(void)
+{
+	xmm_t ipx4;
+	uint32_t hop[4];
+	struct rte_lpm *lpm = NULL;
+	struct rte_lpm_config config;
+
+	config.max_rules = MAX_RULES;
+	config.number_tbl8s = NUMBER_TBL8S;
+	config.flags = 0;
+	uint32_t ip = IPv4(0, 0, 0, 0), next_hop_add = 100, next_hop_return = 0;
+	uint8_t depth = 32;
+	int32_t status = 0;
+
+	lpm = rte_lpm_create(__func__, SOCKET_ID_ANY, &config);
+	TEST_LPM_ASSERT(lpm != NULL);
+
+	status = rte_lpm_add(lpm, ip, depth, next_hop_add);
+	TEST_LPM_ASSERT(status == 0);
+
+	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+	TEST_LPM_ASSERT((status == 0) && (next_hop_return == next_hop_add));
+
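+	/* vect_set_epi32() fills lanes most-significant argument first
+	 * (mirroring _mm_set_epi32()), so hop[0] and hop[3] look up ip
+	 * (hits) while hop[1] and hop[2] look up the unrouted ip -/+ 0x100
+	 * (misses). */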
+	ipx4 = vect_set_epi32(ip, ip + 0x100, ip - 0x100, ip);
+	rte_lpm_lookupx4(lpm, ipx4, hop, UINT32_MAX);
+	TEST_LPM_ASSERT(hop[0] == next_hop_add);
+	TEST_LPM_ASSERT(hop[1] == UINT32_MAX);
+	TEST_LPM_ASSERT(hop[2] == UINT32_MAX);
+	TEST_LPM_ASSERT(hop[3] == next_hop_add);
+
+	status = rte_lpm_delete(lpm, ip, depth);
+	TEST_LPM_ASSERT(status == 0);
+
+	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+	TEST_LPM_ASSERT(status == -ENOENT);
+
+	rte_lpm_free(lpm);
+
+	return PASS;
+}
+
+/*
+ * Use rte_lpm_add to add rules which affect only the second half of the lpm
+ * table. Use all possible depths ranging from 1..32. Set the next hop equal
+ * to the depth. Check for a lookup hit on every add and check for a lookup
+ * miss on the first half of the lpm table after each add. Finally delete all
+ * rules going backwards (i.e. from depth = 32..1) and carry out a lookup
+ * after each delete. The lookup should return the next_hop_add value related
+ * to the previous depth value (i.e. depth - 1).
+ */
+int32_t
+test8(void)
+{
+	xmm_t ipx4;
+	uint32_t hop[4];
+	struct rte_lpm *lpm = NULL;
+	struct rte_lpm_config config;
+
+	config.max_rules = MAX_RULES;
+	config.number_tbl8s = NUMBER_TBL8S;
+	config.flags = 0;
+	uint32_t ip1 = IPv4(127, 255, 255, 255), ip2 = IPv4(128, 0, 0, 0);
+	uint32_t next_hop_add, next_hop_return;
+	uint8_t depth;
+	int32_t status = 0;
+
+	lpm = rte_lpm_create(__func__, SOCKET_ID_ANY, &config);
+	TEST_LPM_ASSERT(lpm != NULL);
+
+	/* Loop with rte_lpm_add. */
+	for (depth = 1; depth <= 32; depth++) {
+		/* Let next_hop_add equal the depth, just for variety. */
+		next_hop_add = depth;
+
+		status = rte_lpm_add(lpm, ip2, depth, next_hop_add);
+		TEST_LPM_ASSERT(status == 0);
+
+		/* Check IP in first half of tbl24 which should be empty. */
+		status = rte_lpm_lookup(lpm, ip1, &next_hop_return);
+		TEST_LPM_ASSERT(status == -ENOENT);
+
+		status = rte_lpm_lookup(lpm, ip2, &next_hop_return);
+		TEST_LPM_ASSERT((status == 0) &&
+			(next_hop_return == next_hop_add));
+
+		ipx4 = vect_set_epi32(ip2, ip1, ip2, ip1);
+		rte_lpm_lookupx4(lpm, ipx4, hop, UINT32_MAX);
+		TEST_LPM_ASSERT(hop[0] == UINT32_MAX);
+		TEST_LPM_ASSERT(hop[1] == next_hop_add);
+		TEST_LPM_ASSERT(hop[2] == UINT32_MAX);
+		TEST_LPM_ASSERT(hop[3] == next_hop_add);
+	}
+
+	/* Loop with rte_lpm_delete. */
+	for (depth = 32; depth >= 1; depth--) {
+		next_hop_add = (uint8_t) (depth - 1);
+
+		status = rte_lpm_delete(lpm, ip2, depth);
+		TEST_LPM_ASSERT(status == 0);
+
+		status = rte_lpm_lookup(lpm, ip2, &next_hop_return);
+
+		if (depth != 1) {
+			TEST_LPM_ASSERT((status == 0) &&
+				(next_hop_return == next_hop_add));
+		} else {
+			TEST_LPM_ASSERT(status == -ENOENT);
+		}
+
+		status = rte_lpm_lookup(lpm, ip1, &next_hop_return);
+		TEST_LPM_ASSERT(status == -ENOENT);
+
+		ipx4 = vect_set_epi32(ip1, ip1, ip2, ip2);
+		rte_lpm_lookupx4(lpm, ipx4, hop, UINT32_MAX);
+		if (depth != 1) {
+			TEST_LPM_ASSERT(hop[0] == next_hop_add);
+			TEST_LPM_ASSERT(hop[1] == next_hop_add);
+		} else {
+			TEST_LPM_ASSERT(hop[0] == UINT32_MAX);
+			TEST_LPM_ASSERT(hop[1] == UINT32_MAX);
+		}
+		TEST_LPM_ASSERT(hop[2] == UINT32_MAX);
+		TEST_LPM_ASSERT(hop[3] == UINT32_MAX);
+	}
+
+	rte_lpm_free(lpm);
+
+	return PASS;
+}
+
+/*
+ * - Add & lookup to hit invalid TBL24 entry
+ * - Add & lookup to hit valid TBL24 entry not extended
+ * - Add & lookup to hit valid extended TBL24 entry with invalid TBL8 entry
+ * - Add & lookup to hit valid extended TBL24 entry with valid TBL8 entry
+ *
+ */
+int32_t
+test9(void)
+{
+	struct rte_lpm *lpm = NULL;
+	struct rte_lpm_config config;
+
+	config.max_rules = MAX_RULES;
+	config.number_tbl8s = NUMBER_TBL8S;
+	config.flags = 0;
+	uint32_t ip, ip_1, ip_2;
+	uint8_t depth, depth_1, depth_2;
+	uint32_t next_hop_add, next_hop_add_1, next_hop_add_2, next_hop_return;
+	int32_t status = 0;
+
+	/* Add & lookup to hit invalid TBL24 entry */
+	ip = IPv4(128, 0, 0, 0);
+	depth = 24;
+	next_hop_add = 100;
+
+	lpm = rte_lpm_create(__func__, SOCKET_ID_ANY, &config);
+	TEST_LPM_ASSERT(lpm != NULL);
+
+	status = rte_lpm_add(lpm, ip, depth, next_hop_add);
+	TEST_LPM_ASSERT(status == 0);
+
+	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+	TEST_LPM_ASSERT((status == 0) && (next_hop_return == next_hop_add));
+
+	status = rte_lpm_delete(lpm, ip, depth);
+	TEST_LPM_ASSERT(status == 0);
+
+	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+	TEST_LPM_ASSERT(status == -ENOENT);
+
+	rte_lpm_delete_all(lpm);
+
+	/* Add & lookup to hit valid TBL24 entry not extended */
+	ip = IPv4(128, 0, 0, 0);
+	depth = 23;
+	next_hop_add = 100;
+
+	status = rte_lpm_add(lpm, ip, depth, next_hop_add);
+	TEST_LPM_ASSERT(status == 0);
+
+	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+	TEST_LPM_ASSERT((status == 0) && (next_hop_return == next_hop_add));
+
+	depth = 24;
+	next_hop_add = 101;
+
+	status = rte_lpm_add(lpm, ip, depth, next_hop_add);
+	TEST_LPM_ASSERT(status == 0);
+
+	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+	TEST_LPM_ASSERT((status == 0) && (next_hop_return == next_hop_add));
+
+	depth = 24;
+
+	status = rte_lpm_delete(lpm, ip, depth);
+	TEST_LPM_ASSERT(status == 0);
+
+	depth = 23;
+
+	status = rte_lpm_delete(lpm, ip, depth);
+	TEST_LPM_ASSERT(status == 0);
+
+	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+	TEST_LPM_ASSERT(status == -ENOENT);
+
+	rte_lpm_delete_all(lpm);
+
+	/* Add & lookup to hit valid extended TBL24 entry with invalid TBL8
+	 * entry */
+	ip = IPv4(128, 0, 0, 0);
+	depth = 32;
+	next_hop_add = 100;
+
+	status = rte_lpm_add(lpm, ip, depth, next_hop_add);
+	TEST_LPM_ASSERT(status == 0);
+
+	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+	TEST_LPM_ASSERT((status == 0) && (next_hop_return == next_hop_add));
+
+	ip = IPv4(128, 0, 0, 5);
+	depth = 32;
+	next_hop_add = 101;
+
+	status = rte_lpm_add(lpm, ip, depth, next_hop_add);
+	TEST_LPM_ASSERT(status == 0);
+
+	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+	TEST_LPM_ASSERT((status == 0) && (next_hop_return == next_hop_add));
+
+	status = rte_lpm_delete(lpm, ip, depth);
+	TEST_LPM_ASSERT(status == 0);
+
+	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+	TEST_LPM_ASSERT(status == -ENOENT);
+
+	ip = IPv4(128, 0, 0, 0);
+	depth = 32;
+	next_hop_add = 100;
+
+	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+	TEST_LPM_ASSERT((status == 0) && (next_hop_return == next_hop_add));
+
+	status = rte_lpm_delete(lpm, ip, depth);
+	TEST_LPM_ASSERT(status == 0);
+
+	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+	TEST_LPM_ASSERT(status == -ENOENT);
+
+	rte_lpm_delete_all(lpm);
+
+	/* Add & lookup to hit valid extended TBL24 entry with valid TBL8
+	 * entry */
+	ip_1 = IPv4(128, 0, 0, 0);
+	depth_1 = 25;
+	next_hop_add_1 = 101;
+
+	ip_2 = IPv4(128, 0, 0, 5);
+	depth_2 = 32;
+	next_hop_add_2 = 102;
+
+	next_hop_return = 0;
+
+	status = rte_lpm_add(lpm, ip_1, depth_1, next_hop_add_1);
+	TEST_LPM_ASSERT(status == 0);
+
+	status = rte_lpm_lookup(lpm, ip_1, &next_hop_return);
+	TEST_LPM_ASSERT((status == 0) && (next_hop_return == next_hop_add_1));
+
+	status = rte_lpm_add(lpm, ip_2, depth_2, next_hop_add_2);
+	TEST_LPM_ASSERT(status == 0);
+
+	status = rte_lpm_lookup(lpm, ip_2, &next_hop_return);
+	TEST_LPM_ASSERT((status == 0) && (next_hop_return == next_hop_add_2));
+
+	status = rte_lpm_delete(lpm, ip_2, depth_2);
+	TEST_LPM_ASSERT(status == 0);
+
+	status = rte_lpm_lookup(lpm, ip_2, &next_hop_return);
+	TEST_LPM_ASSERT((status == 0) && (next_hop_return == next_hop_add_1));
+
+	status = rte_lpm_delete(lpm, ip_1, depth_1);
+	TEST_LPM_ASSERT(status == 0);
+
+	status = rte_lpm_lookup(lpm, ip_1, &next_hop_return);
+	TEST_LPM_ASSERT(status == -ENOENT);
+
+	rte_lpm_free(lpm);
+
+	return PASS;
+}
+
+
+/*
+ * - Add rule that covers a TBL24 range previously invalid & lookup (& delete &
+ *   lookup)
+ * - Add rule that extends a TBL24 invalid entry & lookup (& delete & lookup)
+ * - Add rule that extends a TBL24 valid entry & lookup for both rules (&
+ *   delete & lookup)
+ * - Add rule that updates the next hop in TBL24 & lookup (& delete & lookup)
+ * - Add rule that updates the next hop in TBL8 & lookup (& delete & lookup)
+ * - Delete a rule that is not present in the TBL24 & lookup
+ * - Delete a rule that is not present in the TBL8 & lookup
+ *
+ */
+int32_t
+test10(void)
+{
+
+	struct rte_lpm *lpm = NULL;
+	struct rte_lpm_config config;
+
+	config.max_rules = MAX_RULES;
+	config.number_tbl8s = NUMBER_TBL8S;
+	config.flags = 0;
+	uint32_t ip, next_hop_add, next_hop_return;
+	uint8_t depth;
+	int32_t status = 0;
+
+	/* Add rule that covers a TBL24 range previously invalid & lookup
+	 * (& delete & lookup) */
+	lpm = rte_lpm_create(__func__, SOCKET_ID_ANY, &config);
+	TEST_LPM_ASSERT(lpm != NULL);
+
+	ip = IPv4(128, 0, 0, 0);
+	depth = 16;
+	next_hop_add = 100;
+
+	status = rte_lpm_add(lpm, ip, depth, next_hop_add);
+	TEST_LPM_ASSERT(status == 0);
+
+	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+	TEST_LPM_ASSERT((status == 0) && (next_hop_return == next_hop_add));
+
+	status = rte_lpm_delete(lpm, ip, depth);
+	TEST_LPM_ASSERT(status == 0);
+
+	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+	TEST_LPM_ASSERT(status == -ENOENT);
+
+	rte_lpm_delete_all(lpm);
+
+	ip = IPv4(128, 0, 0, 0);
+	depth = 25;
+	next_hop_add = 100;
+
+	status = rte_lpm_add(lpm, ip, depth, next_hop_add);
+	TEST_LPM_ASSERT(status == 0);
+
+	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+	TEST_LPM_ASSERT((status == 0) && (next_hop_return == next_hop_add));
+
+	status = rte_lpm_delete(lpm, ip, depth);
+	TEST_LPM_ASSERT(status == 0);
+
+	rte_lpm_delete_all(lpm);
+
+	/* Add rule that extends a TBL24 valid entry & lookup for both rules
+	 * (& delete & lookup) */
+
+	ip = IPv4(128, 0, 0, 0);
+	depth = 24;
+	next_hop_add = 100;
+
+	status = rte_lpm_add(lpm, ip, depth, next_hop_add);
+	TEST_LPM_ASSERT(status == 0);
+
+	ip = IPv4(128, 0, 0, 10);
+	depth = 32;
+	next_hop_add = 101;
+
+	status = rte_lpm_add(lpm, ip, depth, next_hop_add);
+	TEST_LPM_ASSERT(status == 0);
+
+	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+	TEST_LPM_ASSERT((status == 0) && (next_hop_return == next_hop_add));
+
+	ip = IPv4(128, 0, 0, 0);
+	next_hop_add = 100;
+
+	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+	TEST_LPM_ASSERT((status == 0) && (next_hop_return == next_hop_add));
+
+	ip = IPv4(128, 0, 0, 0);
+	depth = 24;
+
+	status = rte_lpm_delete(lpm, ip, depth);
+	TEST_LPM_ASSERT(status == 0);
+
+	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+	TEST_LPM_ASSERT(status == -ENOENT);
+
+	ip = IPv4(128, 0, 0, 10);
+	depth = 32;
+
+	status = rte_lpm_delete(lpm, ip, depth);
+	TEST_LPM_ASSERT(status == 0);
+
+	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+	TEST_LPM_ASSERT(status == -ENOENT);
+
+	rte_lpm_delete_all(lpm);
+
+	/* Add rule that updates the next hop in TBL24 & lookup
+	 * (& delete & lookup) */
+
+	ip = IPv4(128, 0, 0, 0);
+	depth = 24;
+	next_hop_add = 100;
+
+	status = rte_lpm_add(lpm, ip, depth, next_hop_add);
+	TEST_LPM_ASSERT(status == 0);
+
+	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+	TEST_LPM_ASSERT((status == 0) && (next_hop_return == next_hop_add));
+
+	next_hop_add = 101;
+
+	status = rte_lpm_add(lpm, ip, depth, next_hop_add);
+	TEST_LPM_ASSERT(status == 0);
+
+	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+	TEST_LPM_ASSERT((status == 0) && (next_hop_return == next_hop_add));
+
+	status = rte_lpm_delete(lpm, ip, depth);
+	TEST_LPM_ASSERT(status == 0);
+
+	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+	TEST_LPM_ASSERT(status == -ENOENT);
+
+	rte_lpm_delete_all(lpm);
+
+	/* Add rule that updates the next hop in TBL8 & lookup
+	 * (& delete & lookup) */
+
+	ip = IPv4(128, 0, 0, 0);
+	depth = 32;
+	next_hop_add = 100;
+
+	status = rte_lpm_add(lpm, ip, depth, next_hop_add);
+	TEST_LPM_ASSERT(status == 0);
+
+	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+	TEST_LPM_ASSERT((status == 0) && (next_hop_return == next_hop_add));
+
+	next_hop_add = 101;
+
+	status = rte_lpm_add(lpm, ip, depth, next_hop_add);
+	TEST_LPM_ASSERT(status == 0);
+
+	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+	TEST_LPM_ASSERT((status == 0) && (next_hop_return == next_hop_add));
+
+	status = rte_lpm_delete(lpm, ip, depth);
+	TEST_LPM_ASSERT(status == 0);
+
+	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+	TEST_LPM_ASSERT(status == -ENOENT);
+
+	rte_lpm_delete_all(lpm);
+
+	/* Delete a rule that is not present in the TBL24 & lookup */
+
+	ip = IPv4(128, 0, 0, 0);
+	depth = 24;
+
+	status = rte_lpm_delete(lpm, ip, depth);
+	TEST_LPM_ASSERT(status < 0);
+
+	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+	TEST_LPM_ASSERT(status == -ENOENT);
+
+	rte_lpm_delete_all(lpm);
+
+	/* Delete a rule that is not present in the TBL8 & lookup */
+
+	ip = IPv4(128, 0, 0, 0);
+	depth = 32;
+
+	status = rte_lpm_delete(lpm, ip, depth);
+	TEST_LPM_ASSERT(status < 0);
+
+	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+	TEST_LPM_ASSERT(status == -ENOENT);
+
+	rte_lpm_free(lpm);
+
+	return PASS;
+}
+
+/*
+ * Add two rules, lookup to hit the more specific one, lookup to hit the less
+ * specific one, delete the less specific rule and lookup previous values
+ * again; add a more specific rule than the existing rule, lookup again.
+ */
+int32_t
+test11(void)
+{
+
+	struct rte_lpm *lpm = NULL;
+	struct rte_lpm_config config;
+
+	config.max_rules = MAX_RULES;
+	config.number_tbl8s = NUMBER_TBL8S;
+	config.flags = 0;
+	uint32_t ip, next_hop_add, next_hop_return;
+	uint8_t depth;
+	int32_t status = 0;
+
+	lpm = rte_lpm_create(__func__, SOCKET_ID_ANY, &config);
+	TEST_LPM_ASSERT(lpm != NULL);
+
+	ip = IPv4(128, 0, 0, 0);
+	depth = 24;
+	next_hop_add = 100;
+
+	status = rte_lpm_add(lpm, ip, depth, next_hop_add);
+	TEST_LPM_ASSERT(status == 0);
+
+	ip = IPv4(128, 0, 0, 10);
+	depth = 32;
+	next_hop_add = 101;
+
+	status = rte_lpm_add(lpm, ip, depth, next_hop_add);
+	TEST_LPM_ASSERT(status == 0);
+
+	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+	TEST_LPM_ASSERT((status == 0) && (next_hop_return == next_hop_add));
+
+	ip = IPv4(128, 0, 0, 0);
+	next_hop_add = 100;
+
+	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+	TEST_LPM_ASSERT((status == 0) && (next_hop_return == next_hop_add));
+
+	ip = IPv4(128, 0, 0, 0);
+	depth = 24;
+
+	status = rte_lpm_delete(lpm, ip, depth);
+	TEST_LPM_ASSERT(status == 0);
+
+	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+	TEST_LPM_ASSERT(status == -ENOENT);
+
+	ip = IPv4(128, 0, 0, 10);
+	depth = 32;
+
+	status = rte_lpm_delete(lpm, ip, depth);
+	TEST_LPM_ASSERT(status == 0);
+
+	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+	TEST_LPM_ASSERT(status == -ENOENT);
+
+	rte_lpm_free(lpm);
+
+	return PASS;
+}
+
+/*
+ * Add an extended rule (i.e. depth greater than 24), lookup (hit), delete,
+ * lookup (miss), in a loop of 1000 iterations. This will check tbl8
+ * extension and contraction.
+ */
+
+int32_t
+test12(void)
+{
+	xmm_t ipx4;
+	uint32_t hop[4];
+	struct rte_lpm *lpm = NULL;
+	struct rte_lpm_config config;
+
+	config.max_rules = MAX_RULES;
+	config.number_tbl8s = NUMBER_TBL8S;
+	config.flags = 0;
+	uint32_t ip, i, next_hop_add, next_hop_return;
+	uint8_t depth;
+	int32_t status = 0;
+
+	lpm = rte_lpm_create(__func__, SOCKET_ID_ANY, &config);
+	TEST_LPM_ASSERT(lpm != NULL);
+
+	ip = IPv4(128, 0, 0, 0);
+	depth = 32;
+	next_hop_add = 100;
+
+	for (i = 0; i < 1000; i++) {
+		status = rte_lpm_add(lpm, ip, depth, next_hop_add);
+		TEST_LPM_ASSERT(status == 0);
+
+		status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+		TEST_LPM_ASSERT((status == 0) &&
+				(next_hop_return == next_hop_add));
+
+		ipx4 = vect_set_epi32(ip, ip + 1, ip, ip - 1);
+		rte_lpm_lookupx4(lpm, ipx4, hop, UINT32_MAX);
+		TEST_LPM_ASSERT(hop[0] == UINT32_MAX);
+		TEST_LPM_ASSERT(hop[1] == next_hop_add);
+		TEST_LPM_ASSERT(hop[2] == UINT32_MAX);
+		TEST_LPM_ASSERT(hop[3] == next_hop_add);
+
+		status = rte_lpm_delete(lpm, ip, depth);
+		TEST_LPM_ASSERT(status == 0);
+
+		status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+		TEST_LPM_ASSERT(status == -ENOENT);
+	}
+
+	rte_lpm_free(lpm);
+
+	return PASS;
+}
+
+/*
+ * Add a rule to tbl24, lookup (hit), then add a rule that will extend this
+ * tbl24 entry, lookup (hit), delete the rule that caused the tbl24 extension,
+ * lookup (miss), and repeat in a loop of 1000 iterations. This will check
+ * tbl8 extension and contraction.
+ */
+
+int32_t
+test13(void)
+{
+	struct rte_lpm *lpm = NULL;
+	struct rte_lpm_config config;
+
+	config.max_rules = MAX_RULES;
+	config.number_tbl8s = NUMBER_TBL8S;
+	config.flags = 0;
+	uint32_t ip, i, next_hop_add_1, next_hop_add_2, next_hop_return;
+	uint8_t depth;
+	int32_t status = 0;
+
+	lpm = rte_lpm_create(__func__, SOCKET_ID_ANY, &config);
+	TEST_LPM_ASSERT(lpm != NULL);
+
+	ip = IPv4(128, 0, 0, 0);
+	depth = 24;
+	next_hop_add_1 = 100;
+
+	status = rte_lpm_add(lpm, ip, depth, next_hop_add_1);
+	TEST_LPM_ASSERT(status == 0);
+
+	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+	TEST_LPM_ASSERT((status == 0) && (next_hop_return == next_hop_add_1));
+
+	depth = 32;
+	next_hop_add_2 = 101;
+
+	for (i = 0; i < 1000; i++) {
+		status = rte_lpm_add(lpm, ip, depth, next_hop_add_2);
+		TEST_LPM_ASSERT(status == 0);
+
+		status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+		TEST_LPM_ASSERT((status == 0) &&
+				(next_hop_return == next_hop_add_2));
+
+		status = rte_lpm_delete(lpm, ip, depth);
+		TEST_LPM_ASSERT(status == 0);
+
+		status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+		TEST_LPM_ASSERT((status == 0) &&
+				(next_hop_return == next_hop_add_1));
+	}
+
+	depth = 24;
+
+	status = rte_lpm_delete(lpm, ip, depth);
+	TEST_LPM_ASSERT(status == 0);
+
+	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+	TEST_LPM_ASSERT(status == -ENOENT);
+
+	rte_lpm_free(lpm);
+
+	return PASS;
+}
+
+/*
+ * Force TBL8 extension exhaustion. Add 256 rules that require a tbl8
+ * extension. No more tbl8 extensions will be allowed. Now add one more rule
+ * that requires a tbl8 extension and check that it fails.
+ */
+int32_t
+test14(void)
+{
+
+	/* We only use depth = 32 in the loop below, so we must make sure
+	 * that we have enough storage for all rules at that depth. */
+
+	struct rte_lpm *lpm = NULL;
+	struct rte_lpm_config config;
+
+	config.max_rules = 256 * 32;
+	config.number_tbl8s = NUMBER_TBL8S;
+	config.flags = 0;
+	uint32_t ip, next_hop_add, next_hop_return;
+	uint8_t depth;
+	int32_t status = 0;
+
+	/* Add enough space for 256 rules for every depth */
+	lpm = rte_lpm_create(__func__, SOCKET_ID_ANY, &config);
+	TEST_LPM_ASSERT(lpm != NULL);
+
+	depth = 32;
+	next_hop_add = 100;
+	ip = IPv4(0, 0, 0, 0);
+
+	/* Add 256 rules that require a tbl8 extension */
+	for (; ip <= IPv4(0, 0, 255, 0); ip += 256) {
+		status = rte_lpm_add(lpm, ip, depth, next_hop_add);
+		TEST_LPM_ASSERT(status == 0);
+
+		status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+		TEST_LPM_ASSERT((status == 0) &&
+				(next_hop_return == next_hop_add));
+	}
+
+	/* All tbl8 extensions have been used above. Try to add one more and
+	 * expect the add to fail. */
+	ip = IPv4(1, 0, 0, 0);
+	depth = 32;
+
+	status = rte_lpm_add(lpm, ip, depth, next_hop_add);
+	TEST_LPM_ASSERT(status < 0);
+
+	rte_lpm_free(lpm);
+
+	return PASS;
+}
+
+/*
+ * Sequence of operations for find existing lpm table
+ *
+ *  - create table
+ *  - find existing table: hit
+ *  - find non-existing table: miss
+ *
+ */
+int32_t
+test15(void)
+{
+	struct rte_lpm *lpm = NULL, *result = NULL;
+	struct rte_lpm_config config;
+
+	config.max_rules = 256 * 32;
+	config.number_tbl8s = NUMBER_TBL8S;
+	config.flags = 0;
+
+	/* Create lpm  */
+	lpm = rte_lpm_create("lpm_find_existing", SOCKET_ID_ANY, &config);
+	TEST_LPM_ASSERT(lpm != NULL);
+
+	/* Try to find existing lpm */
+	result = rte_lpm_find_existing("lpm_find_existing");
+	TEST_LPM_ASSERT(result == lpm);
+
+	/* Try to find non-existing lpm */
+	result = rte_lpm_find_existing("lpm_find_non_existing");
+	TEST_LPM_ASSERT(result == NULL);
+
+	/* Cleanup. */
+	rte_lpm_delete_all(lpm);
+	rte_lpm_free(lpm);
+
+	return PASS;
+}
+
+/*
+ * Test the failure condition of overloading the tbl8 so no more extensions
+ * will fit. Check that we get an error return value in that case.
+ */
+int32_t
+test16(void)
+{
+	uint32_t ip;
+	struct rte_lpm_config config;
+
+	config.max_rules = 256 * 32;
+	config.number_tbl8s = NUMBER_TBL8S;
+	config.flags = 0;
+	struct rte_lpm *lpm = rte_lpm_create(__func__, SOCKET_ID_ANY, &config);
+
+	/* ip loops through all possibilities for top 24 bits of address */
+	for (ip = 0; ip < 0xFFFFFF; ip++) {
+		/* add an entry within a different tbl8 each time, since
+		 * depth > 24 and the top 24 bits are different */
+		if (rte_lpm_add(lpm, (ip << 8) + 0xF0, 30, 0) < 0)
+			break;
+	}
+
+	if (ip != NUMBER_TBL8S) {
+		printf("Error, unexpected failure with filling tbl8 groups\n");
+		printf("Failed after %u additions, expected after %u\n",
+				(unsigned)ip, (unsigned)NUMBER_TBL8S);
+	}
+
+	rte_lpm_free(lpm);
+	return 0;
+}
+
+/*
+ * Test for overwriting of tbl8:
+ *  - add rule /32 and lookup
+ *  - add new rule /24 and lookup
+ *  - add third rule /25 and lookup
+ *  - lookup /32 and /24 rule to ensure the table has not been overwritten.
+ */
+int32_t
+test17(void)
+{
+	struct rte_lpm *lpm = NULL;
+	struct rte_lpm_config config;
+
+	config.max_rules = MAX_RULES;
+	config.number_tbl8s = NUMBER_TBL8S;
+	config.flags = 0;
+	const uint32_t ip_10_32 = IPv4(10, 10, 10, 2);
+	const uint32_t ip_10_24 = IPv4(10, 10, 10, 0);
+	const uint32_t ip_20_25 = IPv4(10, 10, 20, 2);
+	const uint8_t d_ip_10_32 = 32,
+			d_ip_10_24 = 24,
+			d_ip_20_25 = 25;
+	const uint32_t next_hop_ip_10_32 = 100,
+			next_hop_ip_10_24 = 105,
+			next_hop_ip_20_25 = 111;
+	uint32_t next_hop_return = 0;
+	int32_t status = 0;
+
+	lpm = rte_lpm_create(__func__, SOCKET_ID_ANY, &config);
+	TEST_LPM_ASSERT(lpm != NULL);
+
+	status = rte_lpm_add(lpm, ip_10_32, d_ip_10_32, next_hop_ip_10_32);
+	if (status < 0)
+		return -1;
+
+	status = rte_lpm_lookup(lpm, ip_10_32, &next_hop_return);
+	uint32_t test_hop_10_32 = next_hop_return;
+	TEST_LPM_ASSERT(status == 0);
+	TEST_LPM_ASSERT(next_hop_return == next_hop_ip_10_32);
+
+	status = rte_lpm_add(lpm, ip_10_24, d_ip_10_24, next_hop_ip_10_24);
+	if (status < 0)
+		return -1;
+
+	status = rte_lpm_lookup(lpm, ip_10_24, &next_hop_return);
+	uint32_t test_hop_10_24 = next_hop_return;
+	TEST_LPM_ASSERT(status == 0);
+	TEST_LPM_ASSERT(next_hop_return == next_hop_ip_10_24);
+
+	status = rte_lpm_add(lpm, ip_20_25, d_ip_20_25, next_hop_ip_20_25);
+	if (status < 0)
+		return -1;
+
+	status = rte_lpm_lookup(lpm, ip_20_25, &next_hop_return);
+	uint32_t test_hop_20_25 = next_hop_return;
+	TEST_LPM_ASSERT(status == 0);
+	TEST_LPM_ASSERT(next_hop_return == next_hop_ip_20_25);
+
+	if (test_hop_10_32 == test_hop_10_24) {
+		printf("Next hop return equal\n");
+		return -1;
+	}
+
+	if (test_hop_10_24 == test_hop_20_25) {
+		printf("Next hop return equal\n");
+		return -1;
+	}
+
+	status = rte_lpm_lookup(lpm, ip_10_32, &next_hop_return);
+	TEST_LPM_ASSERT(status == 0);
+	TEST_LPM_ASSERT(next_hop_return == next_hop_ip_10_32);
+
+	status = rte_lpm_lookup(lpm, ip_10_24, &next_hop_return);
+	TEST_LPM_ASSERT(status == 0);
+	TEST_LPM_ASSERT(next_hop_return == next_hop_ip_10_24);
+
+	rte_lpm_free(lpm);
+
+	return PASS;
+}
+
+/*
+ * Lookup performance test
+ */
+
+#define ITERATIONS (1 << 10)
+#define BATCH_SIZE (1 << 12)
+#define BULK_SIZE 32
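+
+/*
+ * Each measurement below issues ITERATIONS * BATCH_SIZE (1 << 22) lookups;
+ * the bulk and x4 variants walk each batch BULK_SIZE and four addresses at
+ * a time, respectively.
+ */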
+
+int32_t
+perf_test(void)
+{
+	struct rte_lpm *lpm = NULL;
+	struct rte_lpm_config config;
+
+	config.max_rules = 1000000;
+	config.number_tbl8s = NUMBER_TBL8S;
+	config.flags = 0;
+	uint64_t begin, total_time, lpm_used_entries = 0;
+	unsigned i, j;
+	uint32_t next_hop_add = 0xAA, next_hop_return = 0;
+	int status = 0;
+	uint64_t cache_line_counter = 0;
+	int64_t count = 0;
+
+	rte_srand(rte_rdtsc());
+
+	/* (re) generate the routing table */
+	generate_large_route_rule_table();
+
+	printf("No. routes = %u\n", (unsigned) NUM_ROUTE_ENTRIES);
+
+	print_route_distribution(large_route_table,
+				(uint32_t) NUM_ROUTE_ENTRIES);
+
+	lpm = rte_lpm_create(__func__, SOCKET_ID_ANY, &config);
+	TEST_LPM_ASSERT(lpm != NULL);
+
+	/* Measure add. */
+	begin = rte_rdtsc();
+
+	for (i = 0; i < NUM_ROUTE_ENTRIES; i++) {
+		if (rte_lpm_add(lpm, large_route_table[i].ip,
+				large_route_table[i].depth, next_hop_add) == 0)
+			status++;
+	}
+	/* End Timer. */
+	total_time = rte_rdtsc() - begin;
+
+	printf("Unique added entries = %d\n", status);
+	/* Obtain add statistics. */
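+	/*
+	 * Count used tbl24 entries; each time a 32-entry window gains at
+	 * least one used entry, credit one more cache line as touched.
+	 */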
+	for (i = 0; i < RTE_LPM_TBL24_NUM_ENTRIES; i++) {
+		if (lpm->tbl24[i].valid)
+			lpm_used_entries++;
+
+		if (i % 32 == 0) {
+			if ((uint64_t)count < lpm_used_entries) {
+				cache_line_counter++;
+				count = lpm_used_entries;
+			}
+		}
+	}
+
+	printf("Used table 24 entries = %u (%g%%)\n",
+			(unsigned) lpm_used_entries,
+			(lpm_used_entries * 100.0) / RTE_LPM_TBL24_NUM_ENTRIES);
+	printf("64 byte Cache entries used = %u (%u bytes)\n",
+			(unsigned) cache_line_counter, (unsigned) cache_line_counter * 64);
+
+	printf("Average LPM Add: %g cycles\n",
+			(double)total_time / NUM_ROUTE_ENTRIES);
+
+	/* Measure single Lookup */
+	total_time = 0;
+	count = 0;
+
+	for (i = 0; i < ITERATIONS; i++) {
+		static uint32_t ip_batch[BATCH_SIZE];
+
+		for (j = 0; j < BATCH_SIZE; j++)
+			ip_batch[j] = rte_rand();
+
+		/* Lookup per batch */
+		begin = rte_rdtsc();
+
+		for (j = 0; j < BATCH_SIZE; j++) {
+			if (rte_lpm_lookup(lpm, ip_batch[j], &next_hop_return) != 0)
+				count++;
+		}
+
+		total_time += rte_rdtsc() - begin;
+
+	}
+	printf("Average LPM Lookup: %.1f cycles (fails = %.1f%%)\n",
+			(double)total_time / ((double)ITERATIONS * BATCH_SIZE),
+			(count * 100.0) / (double)(ITERATIONS * BATCH_SIZE));
+
+	/* Measure bulk Lookup */
+	total_time = 0;
+	count = 0;
+	for (i = 0; i < ITERATIONS; i++) {
+		static uint32_t ip_batch[BATCH_SIZE];
+		uint32_t next_hops[BULK_SIZE];
+
+		/* Create array of random IP addresses */
+		for (j = 0; j < BATCH_SIZE; j++)
+			ip_batch[j] = rte_rand();
+
+		/* Lookup per batch */
+		begin = rte_rdtsc();
+		for (j = 0; j < BATCH_SIZE; j += BULK_SIZE) {
+			unsigned k;
+			rte_lpm_lookup_bulk(lpm, &ip_batch[j], next_hops, BULK_SIZE);
+			for (k = 0; k < BULK_SIZE; k++)
+				if (unlikely(!(next_hops[k] & RTE_LPM_LOOKUP_SUCCESS)))
+					count++;
+		}
+
+		total_time += rte_rdtsc() - begin;
+	}
+	printf("BULK LPM Lookup: %.1f cycles (fails = %.1f%%)\n",
+			(double)total_time / ((double)ITERATIONS * BATCH_SIZE),
+			(count * 100.0) / (double)(ITERATIONS * BATCH_SIZE));
+
+	/* Measure LookupX4 */
+	total_time = 0;
+	count = 0;
+	for (i = 0; i < ITERATIONS; i++) {
+		static uint32_t ip_batch[BATCH_SIZE];
+		uint32_t next_hops[4];
+
+		/* Create array of random IP addresses */
+		for (j = 0; j < BATCH_SIZE; j++)
+			ip_batch[j] = rte_rand();
+
+		/* Lookup per batch */
+		begin = rte_rdtsc();
+		for (j = 0; j < BATCH_SIZE; j += RTE_DIM(next_hops)) {
+			unsigned k;
+			xmm_t ipx4;
+
+			ipx4 = vect_loadu_sil128((xmm_t *)(ip_batch + j));
+			rte_lpm_lookupx4(lpm, ipx4, next_hops, UINT32_MAX);
+			for (k = 0; k < RTE_DIM(next_hops); k++)
+				if (unlikely(next_hops[k] == UINT32_MAX))
+					count++;
+		}
+
+		total_time += rte_rdtsc() - begin;
+	}
+	printf("LPM LookupX4: %.1f cycles (fails = %.1f%%)\n",
+			(double)total_time / ((double)ITERATIONS * BATCH_SIZE),
+			(count * 100.0) / (double)(ITERATIONS * BATCH_SIZE));
+
+	/* Delete */
+	status = 0;
+	begin = rte_rdtsc();
+
+	for (i = 0; i < NUM_ROUTE_ENTRIES; i++) {
+		/* rte_lpm_delete(lpm, ip, depth) */
+		status += rte_lpm_delete(lpm, large_route_table[i].ip,
+				large_route_table[i].depth);
+	}
+
+	total_time = rte_rdtsc() - begin;
+
+	printf("Average LPM Delete: %g cycles\n",
+			(double)total_time / NUM_ROUTE_ENTRIES);
+
+	rte_lpm_delete_all(lpm);
+	rte_lpm_free(lpm);
+
+	return PASS;
+}
+
+/*
+ * Do all unit and performance tests.
+ */
+
+static int
+test_lpm(void)
+{
+	unsigned i;
+	int status, global_status = 0;
+
+	for (i = 0; i < NUM_LPM_TESTS; i++) {
+		status = tests[i]();
+		if (status < 0) {
+			printf("ERROR: LPM Test %u: FAIL\n", i);
+			global_status = status;
+		}
+	}
+
+	return global_status;
+}
+
+REGISTER_TEST_COMMAND_VERSION(lpm_autotest, test_lpm, TEST_DPDK_ABI_VERSION_V1604);
diff --git a/app/test/v16.04/test_v1604.c b/app/test/v16.04/test_v1604.c
new file mode 100644
index 000000000..a5399bbfe
--- /dev/null
+++ b/app/test/v16.04/test_v1604.c
@@ -0,0 +1,14 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2010-2014 Intel Corporation
+ */
+
+#include <stdio.h>
+#include <stdint.h>
+#include <stdlib.h>
+
+#include <rte_ip.h>
+#include <rte_lpm.h>
+
+#include "../test.h"
+
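+/*
+ * Register this compilation unit as the v16.04 ABI version test suite;
+ * app/test can then switch to it at runtime (see set_abi_version).
+ */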
+REGISTER_TEST_ABI_VERSION(v1604, TEST_DPDK_ABI_VERSION_V1604);
diff --git a/app/test/v2.0/dcompat.h b/app/test/v2.0/dcompat.h
new file mode 100644
index 000000000..108fcf8f6
--- /dev/null
+++ b/app/test/v2.0/dcompat.h
@@ -0,0 +1,23 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2010-2019 Intel Corporation
+ */
+
+#ifndef _DCOMPAT_H_
+#define _DCOMPAT_H_
+
+#define ABI_VERSION DPDK_2.0
+
+#define MAP_ABI_SYMBOL(name) \
+	MAP_ABI_SYMBOL_VERSION(name, ABI_VERSION)
+
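+/*
+ * MAP_ABI_SYMBOL_VERSION comes from the test framework (../test.h); it is
+ * expected to bind each symbol below to the DPDK_2.0 version node (for
+ * example via a .symver directive) so these tests link against the old ABI.
+ */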
+MAP_ABI_SYMBOL(rte_lpm_add);
+MAP_ABI_SYMBOL(rte_lpm_create);
+MAP_ABI_SYMBOL(rte_lpm_delete);
+MAP_ABI_SYMBOL(rte_lpm_delete_all);
+MAP_ABI_SYMBOL(rte_lpm_find_existing);
+MAP_ABI_SYMBOL(rte_lpm_free);
+MAP_ABI_SYMBOL(rte_lpm_is_rule_present);
+
+#undef MAP_ABI_SYMBOL
+
+#endif
diff --git a/app/test/v2.0/rte_lpm.h b/app/test/v2.0/rte_lpm.h
new file mode 100644
index 000000000..b1efd1c2d
--- /dev/null
+++ b/app/test/v2.0/rte_lpm.h
@@ -0,0 +1,443 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2010-2014 Intel Corporation
+ */
+
+#ifndef _RTE_LPM_H_
+#define _RTE_LPM_H_
+
+/**
+ * @file
+ * RTE Longest Prefix Match (LPM)
+ */
+
+#include <errno.h>
+#include <sys/queue.h>
+#include <stdint.h>
+#include <stdlib.h>
+#include <rte_branch_prediction.h>
+#include <rte_memory.h>
+#include <rte_common.h>
+#include <rte_vect.h>
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/** Max number of characters in LPM name. */
+#define RTE_LPM_NAMESIZE                32
+
+/** @deprecated Possible location to allocate memory. This was for last
+ * parameter of rte_lpm_create(), but is now redundant. The LPM table is always
+ * allocated in memory using librte_malloc which uses a memzone. */
+#define RTE_LPM_HEAP                    0
+
+/** @deprecated Possible location to allocate memory. This was for last
+ * parameter of rte_lpm_create(), but is now redundant. The LPM table is always
+ * allocated in memory using librte_malloc which uses a memzone. */
+#define RTE_LPM_MEMZONE                 1
+
+/** Maximum depth value possible for IPv4 LPM. */
+#define RTE_LPM_MAX_DEPTH               32
+
+/** @internal Total number of tbl24 entries. */
+#define RTE_LPM_TBL24_NUM_ENTRIES       (1 << 24)
+
+/** @internal Number of entries in a tbl8 group. */
+#define RTE_LPM_TBL8_GROUP_NUM_ENTRIES  256
+
+/** @internal Total number of tbl8 groups in the tbl8. */
+#define RTE_LPM_TBL8_NUM_GROUPS         256
+
+/** @internal Total number of tbl8 entries. */
+#define RTE_LPM_TBL8_NUM_ENTRIES        (RTE_LPM_TBL8_NUM_GROUPS * \
+					RTE_LPM_TBL8_GROUP_NUM_ENTRIES)
+
+/** @internal Macro to enable/disable run-time checks. */
+#if defined(RTE_LIBRTE_LPM_DEBUG)
+#define RTE_LPM_RETURN_IF_TRUE(cond, retval) do { \
+	if (cond) \
+		return (retval); \
+} while (0)
+#else
+#define RTE_LPM_RETURN_IF_TRUE(cond, retval)
+#endif
+
+/** @internal bitmask with valid and ext_entry/valid_group fields set */
+#define RTE_LPM_VALID_EXT_ENTRY_BITMASK 0x0300
+
+/** Bitmask used to indicate successful lookup */
+#define RTE_LPM_LOOKUP_SUCCESS          0x0100
+
+/** @internal Tbl24 entry structure. */
+struct rte_lpm_tbl24_entry {
+	/* Stores next hop or group index (i.e. gindex) into tbl8. */
+	union {
+		uint8_t next_hop;
+		uint8_t tbl8_gindex;
+	};
+	/* Using single uint8_t to store 3 values. */
+	uint8_t valid     :1; /**< Validation flag. */
+	uint8_t ext_entry :1; /**< External entry. */
+	uint8_t depth     :6; /**< Rule depth. */
+};
+
+/** @internal Tbl8 entry structure. */
+struct rte_lpm_tbl8_entry {
+	uint8_t next_hop; /**< next hop. */
+	/* Using single uint8_t to store 3 values. */
+	uint8_t valid       :1; /**< Validation flag. */
+	uint8_t valid_group :1; /**< Group validation flag. */
+	uint8_t depth       :6; /**< Rule depth. */
+};
+
+/** @internal Rule structure. */
+struct rte_lpm_rule {
+	uint32_t ip; /**< Rule IP address. */
+	uint8_t  next_hop; /**< Rule next hop. */
+};
+
+/** @internal Contains metadata about the rules table. */
+struct rte_lpm_rule_info {
+	uint32_t used_rules; /**< Used rules so far. */
+	uint32_t first_rule; /**< Indexes the first rule of a given depth. */
+};
+
+/** @internal LPM structure. */
+struct rte_lpm {
+	/* LPM metadata. */
+	char name[RTE_LPM_NAMESIZE];        /**< Name of the lpm. */
+	int mem_location; /**< @deprecated @see RTE_LPM_HEAP and RTE_LPM_MEMZONE. */
+	uint32_t max_rules; /**< Max. balanced rules per lpm. */
+	struct rte_lpm_rule_info rule_info[RTE_LPM_MAX_DEPTH]; /**< Rule info table. */
+
+	/* LPM Tables. */
+	struct rte_lpm_tbl24_entry tbl24[RTE_LPM_TBL24_NUM_ENTRIES] \
+			__rte_cache_aligned; /**< LPM tbl24 table. */
+	struct rte_lpm_tbl8_entry tbl8[RTE_LPM_TBL8_NUM_ENTRIES] \
+			__rte_cache_aligned; /**< LPM tbl8 table. */
+	struct rte_lpm_rule rules_tbl[0] \
+			__rte_cache_aligned; /**< LPM rules. */
+};
+
+/**
+ * Create an LPM object.
+ *
+ * @param name
+ *   LPM object name
+ * @param socket_id
+ *   NUMA socket ID for LPM table memory allocation
+ * @param max_rules
+ *   Maximum number of LPM rules that can be added
+ * @param flags
+ *   This parameter is currently unused
+ * @return
+ *   Handle to LPM object on success, NULL otherwise with rte_errno set
+ *   to an appropriate values. Possible rte_errno values include:
+ *    - E_RTE_NO_CONFIG - function could not get pointer to rte_config structure
+ *    - E_RTE_SECONDARY - function was called from a secondary process instance
+ *    - EINVAL - invalid parameter passed to function
+ *    - ENOSPC - the maximum number of memzones has already been allocated
+ *    - EEXIST - a memzone with the same name already exists
+ *    - ENOMEM - no appropriate memory area found in which to create memzone
+ */
+struct rte_lpm *
+rte_lpm_create(const char *name, int socket_id, int max_rules, int flags);
+
+/**
+ * Find an existing LPM object and return a pointer to it.
+ *
+ * @param name
+ *   Name of the lpm object as passed to rte_lpm_create()
+ * @return
+ *   Pointer to lpm object or NULL if object not found with rte_errno
+ *   set appropriately. Possible rte_errno values include:
+ *    - ENOENT - required entry not available to return.
+ */
+struct rte_lpm *
+rte_lpm_find_existing(const char *name);
+
+/**
+ * Free an LPM object.
+ *
+ * @param lpm
+ *   LPM object handle
+ * @return
+ *   None
+ */
+void
+rte_lpm_free(struct rte_lpm *lpm);
+
+/**
+ * Add a rule to the LPM table.
+ *
+ * @param lpm
+ *   LPM object handle
+ * @param ip
+ *   IP of the rule to be added to the LPM table
+ * @param depth
+ *   Depth of the rule to be added to the LPM table
+ * @param next_hop
+ *   Next hop of the rule to be added to the LPM table
+ * @return
+ *   0 on success, negative value otherwise
+ */
+int
+rte_lpm_add(struct rte_lpm *lpm, uint32_t ip, uint8_t depth, uint8_t next_hop);
+
+/**
+ * Check if a rule is present in the LPM table,
+ * and provide its next hop if it is.
+ *
+ * @param lpm
+ *   LPM object handle
+ * @param ip
+ *   IP of the rule to be searched
+ * @param depth
+ *   Depth of the rule to be searched
+ * @param next_hop
+ *   Next hop of the rule (valid only if it is found)
+ * @return
+ *   1 if the rule exists, 0 if it does not, a negative value on failure
+ */
+int
+rte_lpm_is_rule_present(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
+		uint8_t *next_hop);
+
+/**
+ * Delete a rule from the LPM table.
+ *
+ * @param lpm
+ *   LPM object handle
+ * @param ip
+ *   IP of the rule to be deleted from the LPM table
+ * @param depth
+ *   Depth of the rule to be deleted from the LPM table
+ * @return
+ *   0 on success, negative value otherwise
+ */
+int
+rte_lpm_delete(struct rte_lpm *lpm, uint32_t ip, uint8_t depth);
+
+/**
+ * Delete all rules from the LPM table.
+ *
+ * @param lpm
+ *   LPM object handle
+ */
+void
+rte_lpm_delete_all(struct rte_lpm *lpm);
+
+/**
+ * Lookup an IP into the LPM table.
+ *
+ * @param lpm
+ *   LPM object handle
+ * @param ip
+ *   IP to be looked up in the LPM table
+ * @param next_hop
+ *   Next hop of the most specific rule found for IP (valid on lookup hit only)
+ * @return
+ *   -EINVAL for incorrect arguments, -ENOENT on lookup miss, 0 on lookup hit
+ */
+static inline int
+rte_lpm_lookup(struct rte_lpm *lpm, uint32_t ip, uint8_t *next_hop)
+{
+	unsigned tbl24_index = (ip >> 8);
+	uint16_t tbl_entry;
+
+	/* DEBUG: Check user input arguments. */
+	RTE_LPM_RETURN_IF_TRUE(((lpm == NULL) || (next_hop == NULL)), -EINVAL);
+
+	/* Copy tbl24 entry */
+	tbl_entry = *(const uint16_t *)&lpm->tbl24[tbl24_index];
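+	/*
+	 * Note: on little-endian targets the 2-byte entry reads as a
+	 * uint16_t with next_hop in the low byte and the valid/ext_entry/
+	 * depth bit-fields in the high byte, so the 0x0300 bitmask tests
+	 * the valid and ext_entry bits in one comparison.
+	 */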
+
+	/* Copy tbl8 entry (only if needed) */
+	if (unlikely((tbl_entry & RTE_LPM_VALID_EXT_ENTRY_BITMASK) ==
+			RTE_LPM_VALID_EXT_ENTRY_BITMASK)) {
+
+		unsigned tbl8_index = (uint8_t)ip +
+				((uint8_t)tbl_entry * RTE_LPM_TBL8_GROUP_NUM_ENTRIES);
+
+		tbl_entry = *(const uint16_t *)&lpm->tbl8[tbl8_index];
+	}
+
+	*next_hop = (uint8_t)tbl_entry;
+	return (tbl_entry & RTE_LPM_LOOKUP_SUCCESS) ? 0 : -ENOENT;
+}
+
+/**
+ * Lookup multiple IP addresses in an LPM table. This may be implemented as a
+ * macro, so the address of the function should not be used.
+ *
+ * @param lpm
+ *   LPM object handle
+ * @param ips
+ *   Array of IPs to be looked up in the LPM table
+ * @param next_hops
+ *   Next hop of the most specific rule found for IP (valid on lookup hit only).
+ *   This is an array of two byte values. The most significant byte in each
+ *   value says whether the lookup was successful (bitmask
+ *   RTE_LPM_LOOKUP_SUCCESS is set). The least significant byte is the
+ *   actual next hop.
+ * @param n
+ *   Number of elements in ips (and next_hops) array to lookup. This should be a
+ *   compile time constant, and divisible by 8 for best performance.
+ * @return
+ *   -EINVAL for incorrect arguments, otherwise 0
+ */
+#define rte_lpm_lookup_bulk(lpm, ips, next_hops, n) \
+		rte_lpm_lookup_bulk_func(lpm, ips, next_hops, n)
+
+static inline int
+rte_lpm_lookup_bulk_func(const struct rte_lpm *lpm, const uint32_t *ips,
+		uint16_t *next_hops, const unsigned n)
+{
+	unsigned i;
+	unsigned tbl24_indexes[n];
+
+	/* DEBUG: Check user input arguments. */
+	RTE_LPM_RETURN_IF_TRUE(((lpm == NULL) || (ips == NULL) ||
+			(next_hops == NULL)), -EINVAL);
+
+	for (i = 0; i < n; i++) {
+		tbl24_indexes[i] = ips[i] >> 8;
+	}
+
+	for (i = 0; i < n; i++) {
+		/* Simply copy tbl24 entry to output */
+		next_hops[i] = *(const uint16_t *)&lpm->tbl24[tbl24_indexes[i]];
+
+		/* Overwrite output with tbl8 entry if needed */
+		if (unlikely((next_hops[i] & RTE_LPM_VALID_EXT_ENTRY_BITMASK) ==
+				RTE_LPM_VALID_EXT_ENTRY_BITMASK)) {
+
+			unsigned tbl8_index = (uint8_t)ips[i] +
+					((uint8_t)next_hops[i] *
+					 RTE_LPM_TBL8_GROUP_NUM_ENTRIES);
+
+			next_hops[i] = *(const uint16_t *)&lpm->tbl8[tbl8_index];
+		}
+	}
+	return 0;
+}
+
+/* Mask four results. */
+#define	 RTE_LPM_MASKX4_RES	UINT64_C(0x00ff00ff00ff00ff)
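+/* Keeps only the low (next hop) byte of each 16-bit result lane. */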
+
+/**
+ * Lookup four IP addresses in an LPM table.
+ *
+ * @param lpm
+ *   LPM object handle
+ * @param ip
+ *   Four IPs to be looked up in the LPM table
+ * @param hop
+ *   Next hop of the most specific rule found for IP (valid on lookup hit only).
+ *   This is an 4 elements array of two byte values.
+ *   If the lookup was succesfull for the given IP, then least significant byte
+ *   of the corresponding element is the  actual next hop and the most
+ *   significant byte is zero.
+ *   If the lookup for the given IP failed, then corresponding element would
+ *   contain default value, see description of then next parameter.
+ * @param defv
+ *   Default value to populate into corresponding element of hop[] array,
+ *   if lookup would fail.
+ */
+static inline void
+rte_lpm_lookupx4(const struct rte_lpm *lpm, __m128i ip, uint16_t hop[4],
+	uint16_t defv)
+{
+	__m128i i24;
+	rte_xmm_t i8;
+	uint16_t tbl[4];
+	uint64_t idx, pt;
+
+	const __m128i mask8 =
+		_mm_set_epi32(UINT8_MAX, UINT8_MAX, UINT8_MAX, UINT8_MAX);
+
+	/*
+	 * RTE_LPM_VALID_EXT_ENTRY_BITMASK for 4 LPM entries
+	 * as one 64-bit value (0x0300030003000300).
+	 */
+	const uint64_t mask_xv =
+		((uint64_t)RTE_LPM_VALID_EXT_ENTRY_BITMASK |
+		(uint64_t)RTE_LPM_VALID_EXT_ENTRY_BITMASK << 16 |
+		(uint64_t)RTE_LPM_VALID_EXT_ENTRY_BITMASK << 32 |
+		(uint64_t)RTE_LPM_VALID_EXT_ENTRY_BITMASK << 48);
+
+	/*
+	 * RTE_LPM_LOOKUP_SUCCESS for 4 LPM entries
+	 * as one 64-bit value (0x0100010001000100).
+	 */
+	const uint64_t mask_v =
+		((uint64_t)RTE_LPM_LOOKUP_SUCCESS |
+		(uint64_t)RTE_LPM_LOOKUP_SUCCESS << 16 |
+		(uint64_t)RTE_LPM_LOOKUP_SUCCESS << 32 |
+		(uint64_t)RTE_LPM_LOOKUP_SUCCESS << 48);
+
+	/* get 4 indexes for tbl24[]. */
+	i24 = _mm_srli_epi32(ip, CHAR_BIT);
+
+	/* extract values from tbl24[] */
+	idx = _mm_cvtsi128_si64(i24);
+	i24 = _mm_srli_si128(i24, sizeof(uint64_t));
+
+	tbl[0] = *(const uint16_t *)&lpm->tbl24[(uint32_t)idx];
+	tbl[1] = *(const uint16_t *)&lpm->tbl24[idx >> 32];
+
+	idx = _mm_cvtsi128_si64(i24);
+
+	tbl[2] = *(const uint16_t *)&lpm->tbl24[(uint32_t)idx];
+	tbl[3] = *(const uint16_t *)&lpm->tbl24[idx >> 32];
+
+	/* get 4 indexes for tbl8[]. */
+	i8.x = _mm_and_si128(ip, mask8);
+
+	pt = (uint64_t)tbl[0] |
+		(uint64_t)tbl[1] << 16 |
+		(uint64_t)tbl[2] << 32 |
+		(uint64_t)tbl[3] << 48;
+
+	/* search successfully finished for all 4 IP addresses. */
+	if (likely((pt & mask_xv) == mask_v)) {
+		uintptr_t ph = (uintptr_t)hop;
+		*(uint64_t *)ph = pt & RTE_LPM_MASKX4_RES;
+		return;
+	}
+
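+	/*
+	 * Slow path: any 16-bit lane whose entry is valid and extended is
+	 * resolved through its tbl8 group before the next hops are written.
+	 */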
+	if (unlikely((pt & RTE_LPM_VALID_EXT_ENTRY_BITMASK) ==
+			RTE_LPM_VALID_EXT_ENTRY_BITMASK)) {
+		i8.u32[0] = i8.u32[0] +
+			(uint8_t)tbl[0] * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
+		tbl[0] = *(const uint16_t *)&lpm->tbl8[i8.u32[0]];
+	}
+	if (unlikely((pt >> 16 & RTE_LPM_VALID_EXT_ENTRY_BITMASK) ==
+			RTE_LPM_VALID_EXT_ENTRY_BITMASK)) {
+		i8.u32[1] = i8.u32[1] +
+			(uint8_t)tbl[1] * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
+		tbl[1] = *(const uint16_t *)&lpm->tbl8[i8.u32[1]];
+	}
+	if (unlikely((pt >> 32 & RTE_LPM_VALID_EXT_ENTRY_BITMASK) ==
+			RTE_LPM_VALID_EXT_ENTRY_BITMASK)) {
+		i8.u32[2] = i8.u32[2] +
+			(uint8_t)tbl[2] * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
+		tbl[2] = *(const uint16_t *)&lpm->tbl8[i8.u32[2]];
+	}
+	if (unlikely((pt >> 48 & RTE_LPM_VALID_EXT_ENTRY_BITMASK) ==
+			RTE_LPM_VALID_EXT_ENTRY_BITMASK)) {
+		i8.u32[3] = i8.u32[3] +
+			(uint8_t)tbl[3] * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
+		tbl[3] = *(const uint16_t *)&lpm->tbl8[i8.u32[3]];
+	}
+
+	hop[0] = (tbl[0] & RTE_LPM_LOOKUP_SUCCESS) ? (uint8_t)tbl[0] : defv;
+	hop[1] = (tbl[1] & RTE_LPM_LOOKUP_SUCCESS) ? (uint8_t)tbl[1] : defv;
+	hop[2] = (tbl[2] & RTE_LPM_LOOKUP_SUCCESS) ? (uint8_t)tbl[2] : defv;
+	hop[3] = (tbl[3] & RTE_LPM_LOOKUP_SUCCESS) ? (uint8_t)tbl[3] : defv;
+}
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_LPM_H_ */
diff --git a/app/test/v2.0/test_lpm.c b/app/test/v2.0/test_lpm.c
new file mode 100644
index 000000000..e71d213ba
--- /dev/null
+++ b/app/test/v2.0/test_lpm.c
@@ -0,0 +1,1306 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2010-2019 Intel Corporation
+ *
+ * LPM Autotests from DPDK v2.0 for ABI compatibility testing.
+ *
+ */
+
+#include <stdio.h>
+#include <stdint.h>
+#include <stdlib.h>
+#include <errno.h>
+#include <sys/queue.h>
+
+#include <rte_common.h>
+#include <rte_cycles.h>
+#include <rte_memory.h>
+#include <rte_random.h>
+#include <rte_branch_prediction.h>
+#include <rte_ip.h>
+#include <time.h>
+
+#include "../test_lpm_routes.h"
+#include "../test.h"
+
+/* remapping of DPDK v2.0 symbols */
+#include "dcompat.h"
+/* backported header from DPDK v2.0 */
+#include "rte_lpm.h"
+
+#define TEST_LPM_ASSERT(cond) do {                                            \
+	if (!(cond)) {                                                        \
+		printf("Error at line %d:\n", __LINE__);                      \
+		return -1;                                                    \
+	}                                                                     \
+} while (0)
+
+typedef int32_t (*rte_lpm_test)(void);
+
+static int32_t test0(void);
+static int32_t test1(void);
+static int32_t test2(void);
+static int32_t test3(void);
+static int32_t test4(void);
+static int32_t test5(void);
+static int32_t test6(void);
+static int32_t test7(void);
+static int32_t test8(void);
+static int32_t test9(void);
+static int32_t test10(void);
+static int32_t test11(void);
+static int32_t test12(void);
+static int32_t test13(void);
+static int32_t test14(void);
+static int32_t test15(void);
+static int32_t test16(void);
+static int32_t test17(void);
+static int32_t perf_test(void);
+
+static rte_lpm_test tests[] = {
+/* Test Cases */
+	test0,
+	test1,
+	test2,
+	test3,
+	test4,
+	test5,
+	test6,
+	test7,
+	test8,
+	test9,
+	test10,
+	test11,
+	test12,
+	test13,
+	test14,
+	test15,
+	test16,
+	test17,
+	perf_test,
+};
+
+#define NUM_LPM_TESTS (sizeof(tests)/sizeof(tests[0]))
+#define MAX_DEPTH 32
+#define MAX_RULES 256
+#define PASS 0
+
+/*
+ * Check that rte_lpm_create fails gracefully for incorrect user input
+ * arguments
+ */
+int32_t
+test0(void)
+{
+	struct rte_lpm *lpm = NULL;
+
+	/* rte_lpm_create: lpm name == NULL */
+	lpm = rte_lpm_create(NULL, SOCKET_ID_ANY, MAX_RULES, 0);
+	TEST_LPM_ASSERT(lpm == NULL);
+
+	/* rte_lpm_create: max_rules = 0 */
+	/* Note: __func__ inserts the function name, in this case "test0". */
+	lpm = rte_lpm_create(__func__, SOCKET_ID_ANY, 0, 0);
+	TEST_LPM_ASSERT(lpm == NULL);
+
+	/* socket_id < -1 is invalid */
+	lpm = rte_lpm_create(__func__, -2, MAX_RULES, 0);
+	TEST_LPM_ASSERT(lpm == NULL);
+
+	return PASS;
+}
+
+/*
+ * Create an lpm table, then delete it, 100 times.
+ * Use a slightly different rule count each time.
+ */
+int32_t
+test1(void)
+{
+	struct rte_lpm *lpm = NULL;
+	int32_t i;
+
+	/* Create and free an lpm table repeatedly, varying max_rules. */
+	for (i = 0; i < 100; i++) {
+		lpm = rte_lpm_create(__func__, SOCKET_ID_ANY, MAX_RULES - i, 0);
+		TEST_LPM_ASSERT(lpm != NULL);
+
+		rte_lpm_free(lpm);
+	}
+
+	/* Cannot verify free directly, so just return success. */
+	return PASS;
+}
+
+/*
+ * Call rte_lpm_free for NULL pointer user input. Note: free has no return and
+ * therefore it is impossible to check for failure but this test is added to
+ * increase function coverage metrics and to validate that freeing null does
+ * not crash.
+ */
+int32_t
+test2(void)
+{
+	struct rte_lpm *lpm = NULL;
+
+	lpm = rte_lpm_create(__func__, SOCKET_ID_ANY, MAX_RULES, RTE_LPM_HEAP);
+	TEST_LPM_ASSERT(lpm != NULL);
+
+	rte_lpm_free(lpm);
+	rte_lpm_free(NULL);
+	return PASS;
+}
+
+/*
+ * Check that rte_lpm_add fails gracefully for incorrect user input arguments
+ */
+int32_t
+test3(void)
+{
+	struct rte_lpm *lpm = NULL;
+	uint32_t ip = IPv4(0, 0, 0, 0);
+	uint8_t depth = 24, next_hop = 100;
+	int32_t status = 0;
+
+	/* rte_lpm_add: lpm == NULL */
+	status = rte_lpm_add(NULL, ip, depth, next_hop);
+	TEST_LPM_ASSERT(status < 0);
+
+	/* Create valid lpm to use in rest of test. */
+	lpm = rte_lpm_create(__func__, SOCKET_ID_ANY, MAX_RULES, 0);
+	TEST_LPM_ASSERT(lpm != NULL);
+
+	/* rte_lpm_add: depth < 1 */
+	status = rte_lpm_add(lpm, ip, 0, next_hop);
+	TEST_LPM_ASSERT(status < 0);
+
+	/* rte_lpm_add: depth > MAX_DEPTH */
+	status = rte_lpm_add(lpm, ip, (MAX_DEPTH + 1), next_hop);
+	TEST_LPM_ASSERT(status < 0);
+
+	rte_lpm_free(lpm);
+
+	return PASS;
+}
+
+/*
+ * Check that rte_lpm_delete fails gracefully for incorrect user input
+ * arguments
+ */
+int32_t
+test4(void)
+{
+	struct rte_lpm *lpm = NULL;
+	uint32_t ip = IPv4(0, 0, 0, 0);
+	uint8_t depth = 24;
+	int32_t status = 0;
+
+	/* rte_lpm_delete: lpm == NULL */
+	status = rte_lpm_delete(NULL, ip, depth);
+	TEST_LPM_ASSERT(status < 0);
+
+	/* Create valid lpm to use in rest of test. */
+	lpm = rte_lpm_create(__func__, SOCKET_ID_ANY, MAX_RULES, 0);
+	TEST_LPM_ASSERT(lpm != NULL);
+
+	/* rte_lpm_delete: depth < 1 */
+	status = rte_lpm_delete(lpm, ip, 0);
+	TEST_LPM_ASSERT(status < 0);
+
+	/* rte_lpm_delete: depth > MAX_DEPTH */
+	status = rte_lpm_delete(lpm, ip, (MAX_DEPTH + 1));
+	TEST_LPM_ASSERT(status < 0);
+
+	rte_lpm_free(lpm);
+
+	return PASS;
+}
+
+/*
+ * Check that rte_lpm_lookup fails gracefully for incorrect user input
+ * arguments
+ */
+int32_t
+test5(void)
+{
+#if defined(RTE_LIBRTE_LPM_DEBUG)
+	struct rte_lpm *lpm = NULL;
+	uint32_t ip = IPv4(0, 0, 0, 0);
+	uint8_t next_hop_return = 0;
+	int32_t status = 0;
+
+	/* rte_lpm_lookup: lpm == NULL */
+	status = rte_lpm_lookup(NULL, ip, &next_hop_return);
+	TEST_LPM_ASSERT(status < 0);
+
+	/* Create valid lpm to use in rest of test. */
+	lpm = rte_lpm_create(__func__, SOCKET_ID_ANY, MAX_RULES, 0);
+	TEST_LPM_ASSERT(lpm != NULL);
+
+	/* rte_lpm_lookup: next_hop = NULL */
+	status = rte_lpm_lookup(lpm, ip, NULL);
+	TEST_LPM_ASSERT(status < 0);
+
+	rte_lpm_free(lpm);
+#endif
+	return PASS;
+}
+
+/*
+ * Call add, lookup and delete for a single rule with depth <= 24
+ */
+int32_t
+test6(void)
+{
+	struct rte_lpm *lpm = NULL;
+	uint32_t ip = IPv4(0, 0, 0, 0);
+	uint8_t depth = 24, next_hop_add = 100, next_hop_return = 0;
+	int32_t status = 0;
+
+	lpm = rte_lpm_create(__func__, SOCKET_ID_ANY, MAX_RULES, 0);
+	TEST_LPM_ASSERT(lpm != NULL);
+
+	status = rte_lpm_add(lpm, ip, depth, next_hop_add);
+	TEST_LPM_ASSERT(status == 0);
+
+	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+	TEST_LPM_ASSERT((status == 0) && (next_hop_return == next_hop_add));
+
+	status = rte_lpm_delete(lpm, ip, depth);
+	TEST_LPM_ASSERT(status == 0);
+
+	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+	TEST_LPM_ASSERT(status == -ENOENT);
+
+	rte_lpm_free(lpm);
+
+	return PASS;
+}
+
+/*
+ * Call add, lookup and delete for a single rule with depth > 24
+ */
+
+int32_t
+test7(void)
+{
+	__m128i ipx4;
+	uint16_t hop[4];
+	struct rte_lpm *lpm = NULL;
+	uint32_t ip = IPv4(0, 0, 0, 0);
+	uint8_t depth = 32, next_hop_add = 100, next_hop_return = 0;
+	int32_t status = 0;
+
+	lpm = rte_lpm_create(__func__, SOCKET_ID_ANY, MAX_RULES, 0);
+	TEST_LPM_ASSERT(lpm != NULL);
+
+	status = rte_lpm_add(lpm, ip, depth, next_hop_add);
+	TEST_LPM_ASSERT(status == 0);
+
+	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+	TEST_LPM_ASSERT((status == 0) && (next_hop_return == next_hop_add));
+
+	ipx4 = _mm_set_epi32(ip, ip + 0x100, ip - 0x100, ip);
+	rte_lpm_lookupx4(lpm, ipx4, hop, UINT16_MAX);
+	TEST_LPM_ASSERT(hop[0] == next_hop_add);
+	TEST_LPM_ASSERT(hop[1] == UINT16_MAX);
+	TEST_LPM_ASSERT(hop[2] == UINT16_MAX);
+	TEST_LPM_ASSERT(hop[3] == next_hop_add);
+
+	status = rte_lpm_delete(lpm, ip, depth);
+	TEST_LPM_ASSERT(status == 0);
+
+	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+	TEST_LPM_ASSERT(status == -ENOENT);
+
+	rte_lpm_free(lpm);
+
+	return PASS;
+}
+
+/*
+ * Use rte_lpm_add to add rules which affect only the second half of the lpm
+ * table. Use all possible depths ranging from 1..32. Set the next hop equal
+ * to the depth. Check for a lookup hit on every add and check for a lookup
+ * miss on the first half of the lpm table after each add. Finally delete all
+ * rules going backwards (i.e. from depth = 32 to 1) and carry out a lookup
+ * after each delete. The lookup should return the next_hop_add value related
+ * to the previous depth value (i.e. depth - 1).
+ */
+int32_t
+test8(void)
+{
+	__m128i ipx4;
+	uint16_t hop[4];
+	struct rte_lpm *lpm = NULL;
+	uint32_t ip1 = IPv4(127, 255, 255, 255), ip2 = IPv4(128, 0, 0, 0);
+	uint8_t depth, next_hop_add, next_hop_return;
+	int32_t status = 0;
+
+	lpm = rte_lpm_create(__func__, SOCKET_ID_ANY, MAX_RULES, 0);
+	TEST_LPM_ASSERT(lpm != NULL);
+
+	/* Loop with rte_lpm_add. */
+	for (depth = 1; depth <= 32; depth++) {
+		/* Let the next_hop_add value = depth. Just for a change. */
+		next_hop_add = depth;
+
+		status = rte_lpm_add(lpm, ip2, depth, next_hop_add);
+		TEST_LPM_ASSERT(status == 0);
+
+		/* Check IP in first half of tbl24 which should be empty. */
+		status = rte_lpm_lookup(lpm, ip1, &next_hop_return);
+		TEST_LPM_ASSERT(status == -ENOENT);
+
+		status = rte_lpm_lookup(lpm, ip2, &next_hop_return);
+		TEST_LPM_ASSERT((status == 0) &&
+			(next_hop_return == next_hop_add));
+
+		ipx4 = _mm_set_epi32(ip2, ip1, ip2, ip1);
+		rte_lpm_lookupx4(lpm, ipx4, hop, UINT16_MAX);
+		TEST_LPM_ASSERT(hop[0] == UINT16_MAX);
+		TEST_LPM_ASSERT(hop[1] == next_hop_add);
+		TEST_LPM_ASSERT(hop[2] == UINT16_MAX);
+		TEST_LPM_ASSERT(hop[3] == next_hop_add);
+	}
+
+	/* Loop with rte_lpm_delete. */
+	for (depth = 32; depth >= 1; depth--) {
+		next_hop_add = (uint8_t) (depth - 1);
+
+		status = rte_lpm_delete(lpm, ip2, depth);
+		TEST_LPM_ASSERT(status == 0);
+
+		status = rte_lpm_lookup(lpm, ip2, &next_hop_return);
+
+		if (depth != 1) {
+			TEST_LPM_ASSERT((status == 0) &&
+				(next_hop_return == next_hop_add));
+		} else {
+			TEST_LPM_ASSERT(status == -ENOENT);
+		}
+
+		status = rte_lpm_lookup(lpm, ip1, &next_hop_return);
+		TEST_LPM_ASSERT(status == -ENOENT);
+
+		ipx4 = _mm_set_epi32(ip1, ip1, ip2, ip2);
+		rte_lpm_lookupx4(lpm, ipx4, hop, UINT16_MAX);
+		if (depth != 1) {
+			TEST_LPM_ASSERT(hop[0] == next_hop_add);
+			TEST_LPM_ASSERT(hop[1] == next_hop_add);
+		} else {
+			TEST_LPM_ASSERT(hop[0] == UINT16_MAX);
+			TEST_LPM_ASSERT(hop[1] == UINT16_MAX);
+		}
+		TEST_LPM_ASSERT(hop[2] == UINT16_MAX);
+		TEST_LPM_ASSERT(hop[3] == UINT16_MAX);
+	}
+
+	rte_lpm_free(lpm);
+
+	return PASS;
+}
+
+/*
+ * - Add & lookup to hit invalid TBL24 entry
+ * - Add & lookup to hit valid TBL24 entry not extended
+ * - Add & lookup to hit valid extended TBL24 entry with invalid TBL8 entry
+ * - Add & lookup to hit valid extended TBL24 entry with valid TBL8 entry
+ *
+ */
+int32_t
+test9(void)
+{
+	struct rte_lpm *lpm = NULL;
+	uint32_t ip, ip_1, ip_2;
+	uint8_t depth, depth_1, depth_2, next_hop_add, next_hop_add_1,
+		next_hop_add_2, next_hop_return;
+	int32_t status = 0;
+
+	/* Add & lookup to hit invalid TBL24 entry */
+	ip = IPv4(128, 0, 0, 0);
+	depth = 24;
+	next_hop_add = 100;
+
+	lpm = rte_lpm_create(__func__, SOCKET_ID_ANY, MAX_RULES, 0);
+	TEST_LPM_ASSERT(lpm != NULL);
+
+	status = rte_lpm_add(lpm, ip, depth, next_hop_add);
+	TEST_LPM_ASSERT(status == 0);
+
+	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+	TEST_LPM_ASSERT((status == 0) && (next_hop_return == next_hop_add));
+
+	status = rte_lpm_delete(lpm, ip, depth);
+	TEST_LPM_ASSERT(status == 0);
+
+	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+	TEST_LPM_ASSERT(status == -ENOENT);
+
+	rte_lpm_delete_all(lpm);
+
+	/* Add & lookup to hit valid TBL24 entry not extended */
+	ip = IPv4(128, 0, 0, 0);
+	depth = 23;
+	next_hop_add = 100;
+
+	status = rte_lpm_add(lpm, ip, depth, next_hop_add);
+	TEST_LPM_ASSERT(status == 0);
+
+	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+	TEST_LPM_ASSERT((status == 0) && (next_hop_return == next_hop_add));
+
+	depth = 24;
+	next_hop_add = 101;
+
+	status = rte_lpm_add(lpm, ip, depth, next_hop_add);
+	TEST_LPM_ASSERT(status == 0);
+
+	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+	TEST_LPM_ASSERT((status == 0) && (next_hop_return == next_hop_add));
+
+	depth = 24;
+
+	status = rte_lpm_delete(lpm, ip, depth);
+	TEST_LPM_ASSERT(status == 0);
+
+	depth = 23;
+
+	status = rte_lpm_delete(lpm, ip, depth);
+	TEST_LPM_ASSERT(status == 0);
+
+	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+	TEST_LPM_ASSERT(status == -ENOENT);
+
+	rte_lpm_delete_all(lpm);
+
+	/* Add & lookup to hit valid extended TBL24 entry with invalid TBL8
+	 * entry */
+	ip = IPv4(128, 0, 0, 0);
+	depth = 32;
+	next_hop_add = 100;
+
+	status = rte_lpm_add(lpm, ip, depth, next_hop_add);
+	TEST_LPM_ASSERT(status == 0);
+
+	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+	TEST_LPM_ASSERT((status == 0) && (next_hop_return == next_hop_add));
+
+	ip = IPv4(128, 0, 0, 5);
+	depth = 32;
+	next_hop_add = 101;
+
+	status = rte_lpm_add(lpm, ip, depth, next_hop_add);
+	TEST_LPM_ASSERT(status == 0);
+
+	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+	TEST_LPM_ASSERT((status == 0) && (next_hop_return == next_hop_add));
+
+	status = rte_lpm_delete(lpm, ip, depth);
+	TEST_LPM_ASSERT(status == 0);
+
+	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+	TEST_LPM_ASSERT(status == -ENOENT);
+
+	ip = IPv4(128, 0, 0, 0);
+	depth = 32;
+	next_hop_add = 100;
+
+	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+	TEST_LPM_ASSERT((status == 0) && (next_hop_return == next_hop_add));
+
+	status = rte_lpm_delete(lpm, ip, depth);
+	TEST_LPM_ASSERT(status == 0);
+
+	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+	TEST_LPM_ASSERT(status == -ENOENT);
+
+	rte_lpm_delete_all(lpm);
+
+	/* Add & lookup to hit valid extended TBL24 entry with valid TBL8
+	 * entry */
+	ip_1 = IPv4(128, 0, 0, 0);
+	depth_1 = 25;
+	next_hop_add_1 = 101;
+
+	ip_2 = IPv4(128, 0, 0, 5);
+	depth_2 = 32;
+	next_hop_add_2 = 102;
+
+	next_hop_return = 0;
+
+	status = rte_lpm_add(lpm, ip_1, depth_1, next_hop_add_1);
+	TEST_LPM_ASSERT(status == 0);
+
+	status = rte_lpm_lookup(lpm, ip_1, &next_hop_return);
+	TEST_LPM_ASSERT((status == 0) && (next_hop_return == next_hop_add_1));
+
+	status = rte_lpm_add(lpm, ip_2, depth_2, next_hop_add_2);
+	TEST_LPM_ASSERT(status == 0);
+
+	status = rte_lpm_lookup(lpm, ip_2, &next_hop_return);
+	TEST_LPM_ASSERT((status == 0) && (next_hop_return == next_hop_add_2));
+
+	status = rte_lpm_delete(lpm, ip_2, depth_2);
+	TEST_LPM_ASSERT(status == 0);
+
+	status = rte_lpm_lookup(lpm, ip_2, &next_hop_return);
+	TEST_LPM_ASSERT((status == 0) && (next_hop_return == next_hop_add_1));
+
+	status = rte_lpm_delete(lpm, ip_1, depth_1);
+	TEST_LPM_ASSERT(status == 0);
+
+	status = rte_lpm_lookup(lpm, ip_1, &next_hop_return);
+	TEST_LPM_ASSERT(status == -ENOENT);
+
+	rte_lpm_free(lpm);
+
+	return PASS;
+}
+
+
+/*
+ * - Add rule that covers a TBL24 range previously invalid & lookup (& delete &
+ *   lookup)
+ * - Add rule that extends a TBL24 invalid entry & lookup (& delete & lookup)
+ * - Add rule that extends a TBL24 valid entry & lookup for both rules (&
+ *   delete & lookup)
+ * - Add rule that updates the next hop in TBL24 & lookup (& delete & lookup)
+ * - Add rule that updates the next hop in TBL8 & lookup (& delete & lookup)
+ * - Delete a rule that is not present in the TBL24 & lookup
+ * - Delete a rule that is not present in the TBL8 & lookup
+ *
+ */
+int32_t
+test10(void)
+{
+
+	struct rte_lpm *lpm = NULL;
+	uint32_t ip;
+	uint8_t depth, next_hop_add, next_hop_return;
+	int32_t status = 0;
+
+	/* Add rule that covers a TBL24 range previously invalid & lookup
+	 * (& delete & lookup) */
+	lpm = rte_lpm_create(__func__, SOCKET_ID_ANY, MAX_RULES, RTE_LPM_HEAP);
+	TEST_LPM_ASSERT(lpm != NULL);
+
+	ip = IPv4(128, 0, 0, 0);
+	depth = 16;
+	next_hop_add = 100;
+
+	status = rte_lpm_add(lpm, ip, depth, next_hop_add);
+	TEST_LPM_ASSERT(status == 0);
+
+	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+	TEST_LPM_ASSERT((status == 0) && (next_hop_return == next_hop_add));
+
+	status = rte_lpm_delete(lpm, ip, depth);
+	TEST_LPM_ASSERT(status == 0);
+
+	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+	TEST_LPM_ASSERT(status == -ENOENT);
+
+	rte_lpm_delete_all(lpm);
+
+	ip = IPv4(128, 0, 0, 0);
+	depth = 25;
+	next_hop_add = 100;
+
+	status = rte_lpm_add(lpm, ip, depth, next_hop_add);
+	TEST_LPM_ASSERT(status == 0);
+
+	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+	TEST_LPM_ASSERT((status == 0) && (next_hop_return == next_hop_add));
+
+	status = rte_lpm_delete(lpm, ip, depth);
+	TEST_LPM_ASSERT(status == 0);
+
+	rte_lpm_delete_all(lpm);
+
+	/* Add rule that extends a TBL24 valid entry & lookup for both rules
+	 * (& delete & lookup) */
+
+	ip = IPv4(128, 0, 0, 0);
+	depth = 24;
+	next_hop_add = 100;
+
+	status = rte_lpm_add(lpm, ip, depth, next_hop_add);
+	TEST_LPM_ASSERT(status == 0);
+
+	ip = IPv4(128, 0, 0, 10);
+	depth = 32;
+	next_hop_add = 101;
+
+	status = rte_lpm_add(lpm, ip, depth, next_hop_add);
+	TEST_LPM_ASSERT(status == 0);
+
+	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+	TEST_LPM_ASSERT((status == 0) && (next_hop_return == next_hop_add));
+
+	ip = IPv4(128, 0, 0, 0);
+	next_hop_add = 100;
+
+	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+	TEST_LPM_ASSERT((status == 0) && (next_hop_return == next_hop_add));
+
+	ip = IPv4(128, 0, 0, 0);
+	depth = 24;
+
+	status = rte_lpm_delete(lpm, ip, depth);
+	TEST_LPM_ASSERT(status == 0);
+
+	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+	TEST_LPM_ASSERT(status == -ENOENT);
+
+	ip = IPv4(128, 0, 0, 10);
+	depth = 32;
+
+	status = rte_lpm_delete(lpm, ip, depth);
+	TEST_LPM_ASSERT(status == 0);
+
+	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+	TEST_LPM_ASSERT(status == -ENOENT);
+
+	rte_lpm_delete_all(lpm);
+
+	/* Add rule that updates the next hop in TBL24 & lookup
+	 * (& delete & lookup) */
+
+	ip = IPv4(128, 0, 0, 0);
+	depth = 24;
+	next_hop_add = 100;
+
+	status = rte_lpm_add(lpm, ip, depth, next_hop_add);
+	TEST_LPM_ASSERT(status == 0);
+
+	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+	TEST_LPM_ASSERT((status == 0) && (next_hop_return == next_hop_add));
+
+	next_hop_add = 101;
+
+	status = rte_lpm_add(lpm, ip, depth, next_hop_add);
+	TEST_LPM_ASSERT(status == 0);
+
+	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+	TEST_LPM_ASSERT((status == 0) && (next_hop_return == next_hop_add));
+
+	status = rte_lpm_delete(lpm, ip, depth);
+	TEST_LPM_ASSERT(status == 0);
+
+	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+	TEST_LPM_ASSERT(status == -ENOENT);
+
+	rte_lpm_delete_all(lpm);
+
+	/* Add rule that updates the next hop in TBL8 & lookup
+	 * (& delete & lookup) */
+
+	ip = IPv4(128, 0, 0, 0);
+	depth = 32;
+	next_hop_add = 100;
+
+	status = rte_lpm_add(lpm, ip, depth, next_hop_add);
+	TEST_LPM_ASSERT(status == 0);
+
+	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+	TEST_LPM_ASSERT((status == 0) && (next_hop_return == next_hop_add));
+
+	next_hop_add = 101;
+
+	status = rte_lpm_add(lpm, ip, depth, next_hop_add);
+	TEST_LPM_ASSERT(status == 0);
+
+	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+	TEST_LPM_ASSERT((status == 0) && (next_hop_return == next_hop_add));
+
+	status = rte_lpm_delete(lpm, ip, depth);
+	TEST_LPM_ASSERT(status == 0);
+
+	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+	TEST_LPM_ASSERT(status == -ENOENT);
+
+	rte_lpm_delete_all(lpm);
+
+	/* Delete a rule that is not present in the TBL24 & lookup */
+
+	ip = IPv4(128, 0, 0, 0);
+	depth = 24;
+
+	status = rte_lpm_delete(lpm, ip, depth);
+	TEST_LPM_ASSERT(status < 0);
+
+	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+	TEST_LPM_ASSERT(status == -ENOENT);
+
+	rte_lpm_delete_all(lpm);
+
+	/* Delete a rule that is not present in the TBL8 & lookup */
+
+	ip = IPv4(128, 0, 0, 0);
+	depth = 32;
+
+	status = rte_lpm_delete(lpm, ip, depth);
+	TEST_LPM_ASSERT(status < 0);
+
+	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+	TEST_LPM_ASSERT(status == -ENOENT);
+
+	rte_lpm_free(lpm);
+
+	return PASS;
+}
+
+/*
+ * Add two rules, lookup to hit the more specific one, lookup to hit the less
+ * specific one, delete the less specific rule and lookup previous values
+ * again; add a more specific rule than the existing rule, lookup again.
+ */
+int32_t
+test11(void)
+{
+
+	struct rte_lpm *lpm = NULL;
+	uint32_t ip;
+	uint8_t depth, next_hop_add, next_hop_return;
+	int32_t status = 0;
+
+	lpm = rte_lpm_create(__func__, SOCKET_ID_ANY, MAX_RULES, 0);
+	TEST_LPM_ASSERT(lpm != NULL);
+
+	ip = IPv4(128, 0, 0, 0);
+	depth = 24;
+	next_hop_add = 100;
+
+	status = rte_lpm_add(lpm, ip, depth, next_hop_add);
+	TEST_LPM_ASSERT(status == 0);
+
+	ip = IPv4(128, 0, 0, 10);
+	depth = 32;
+	next_hop_add = 101;
+
+	status = rte_lpm_add(lpm, ip, depth, next_hop_add);
+	TEST_LPM_ASSERT(status == 0);
+
+	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+	TEST_LPM_ASSERT((status == 0) && (next_hop_return == next_hop_add));
+
+	ip = IPv4(128, 0, 0, 0);
+	next_hop_add = 100;
+
+	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+	TEST_LPM_ASSERT((status == 0) && (next_hop_return == next_hop_add));
+
+	ip = IPv4(128, 0, 0, 0);
+	depth = 24;
+
+	status = rte_lpm_delete(lpm, ip, depth);
+	TEST_LPM_ASSERT(status == 0);
+
+	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+	TEST_LPM_ASSERT(status == -ENOENT);
+
+	ip = IPv4(128, 0, 0, 10);
+	depth = 32;
+
+	status = rte_lpm_delete(lpm, ip, depth);
+	TEST_LPM_ASSERT(status == 0);
+
+	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+	TEST_LPM_ASSERT(status == -ENOENT);
+
+	rte_lpm_free(lpm);
+
+	return PASS;
+}
+
+/*
+ * Add an extended rule (i.e. depth greater than 24), lookup (hit), delete,
+ * lookup (miss), in a loop of 1000 iterations. This will check tbl8
+ * extension and contraction.
+ */
+
+int32_t
+test12(void)
+{
+	__m128i ipx4;
+	uint16_t hop[4];
+	struct rte_lpm *lpm = NULL;
+	uint32_t ip, i;
+	uint8_t depth, next_hop_add, next_hop_return;
+	int32_t status = 0;
+
+	lpm = rte_lpm_create(__func__, SOCKET_ID_ANY, MAX_RULES, 0);
+	TEST_LPM_ASSERT(lpm != NULL);
+
+	ip = IPv4(128, 0, 0, 0);
+	depth = 32;
+	next_hop_add = 100;
+
+	for (i = 0; i < 1000; i++) {
+		status = rte_lpm_add(lpm, ip, depth, next_hop_add);
+		TEST_LPM_ASSERT(status == 0);
+
+		status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+		TEST_LPM_ASSERT((status == 0) &&
+				(next_hop_return == next_hop_add));
+
+		ipx4 = _mm_set_epi32(ip, ip + 1, ip, ip - 1);
+		rte_lpm_lookupx4(lpm, ipx4, hop, UINT16_MAX);
+		TEST_LPM_ASSERT(hop[0] == UINT16_MAX);
+		TEST_LPM_ASSERT(hop[1] == next_hop_add);
+		TEST_LPM_ASSERT(hop[2] == UINT16_MAX);
+		TEST_LPM_ASSERT(hop[3] == next_hop_add);
+
+		status = rte_lpm_delete(lpm, ip, depth);
+		TEST_LPM_ASSERT(status == 0);
+
+		status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+		TEST_LPM_ASSERT(status == -ENOENT);
+	}
+
+	rte_lpm_free(lpm);
+
+	return PASS;
+}
+
+/*
+ * Add a rule to tbl24, lookup (hit), then add a rule that will extend this
+ * tbl24 entry, lookup (hit), delete the rule that caused the tbl24 extension,
+ * lookup (miss), and repeat in a loop of 1000 iterations. This will check
+ * tbl8 extension and contraction.
+ */
+
+int32_t
+test13(void)
+{
+	struct rte_lpm *lpm = NULL;
+	uint32_t ip, i;
+	uint8_t depth, next_hop_add_1, next_hop_add_2, next_hop_return;
+	int32_t status = 0;
+
+	lpm = rte_lpm_create(__func__, SOCKET_ID_ANY, MAX_RULES, 0);
+	TEST_LPM_ASSERT(lpm != NULL);
+
+	ip = IPv4(128, 0, 0, 0);
+	depth = 24;
+	next_hop_add_1 = 100;
+
+	status = rte_lpm_add(lpm, ip, depth, next_hop_add_1);
+	TEST_LPM_ASSERT(status == 0);
+
+	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+	TEST_LPM_ASSERT((status == 0) && (next_hop_return == next_hop_add_1));
+
+	depth = 32;
+	next_hop_add_2 = 101;
+
+	for (i = 0; i < 1000; i++) {
+		status = rte_lpm_add(lpm, ip, depth, next_hop_add_2);
+		TEST_LPM_ASSERT(status == 0);
+
+		status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+		TEST_LPM_ASSERT((status == 0) &&
+				(next_hop_return == next_hop_add_2));
+
+		status = rte_lpm_delete(lpm, ip, depth);
+		TEST_LPM_ASSERT(status == 0);
+
+		status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+		TEST_LPM_ASSERT((status == 0) &&
+				(next_hop_return == next_hop_add_1));
+	}
+
+	depth = 24;
+
+	status = rte_lpm_delete(lpm, ip, depth);
+	TEST_LPM_ASSERT(status == 0);
+
+	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+	TEST_LPM_ASSERT(status == -ENOENT);
+
+	rte_lpm_free(lpm);
+
+	return PASS;
+}
+
+/*
+ * Force TBL8 extension exhaustion. Add 256 rules that require a tbl8
+ * extension. No more tbl8 extensions will be allowed. Then add one more rule
+ * that requires a tbl8 extension and check that it fails.
+ */
+int32_t
+test14(void)
+{
+
+	/* We only use depth = 32 in the loop below so we must make sure
+	 * that we have enough storage for all rules at that depth. */
+
+	struct rte_lpm *lpm = NULL;
+	uint32_t ip;
+	uint8_t depth, next_hop_add, next_hop_return;
+	int32_t status = 0;
+
+	/* Add enough space for 256 rules for every depth */
+	lpm = rte_lpm_create(__func__, SOCKET_ID_ANY, 256 * 32, 0);
+	TEST_LPM_ASSERT(lpm != NULL);
+
+	depth = 32;
+	next_hop_add = 100;
+	ip = IPv4(0, 0, 0, 0);
+
+	/* Add 256 rules that require a tbl8 extension */
+	for (; ip <= IPv4(0, 0, 255, 0); ip += 256) {
+		status = rte_lpm_add(lpm, ip, depth, next_hop_add);
+		TEST_LPM_ASSERT(status == 0);
+
+		status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+		TEST_LPM_ASSERT((status == 0) &&
+				(next_hop_return == next_hop_add));
+	}
+
+	/* All tbl8 extensions have been used above. Try to add one more and
+	 * we get a fail */
+	ip = IPv4(1, 0, 0, 0);
+	depth = 32;
+
+	status = rte_lpm_add(lpm, ip, depth, next_hop_add);
+	TEST_LPM_ASSERT(status < 0);
+
+	rte_lpm_free(lpm);
+
+	return PASS;
+}
+
+/*
+ * Sequence of operations for find existing lpm table
+ *
+ *  - create table
+ *  - find existing table: hit
+ *  - find non-existing table: miss
+ *
+ */
+int32_t
+test15(void)
+{
+	struct rte_lpm *lpm = NULL, *result = NULL;
+
+	/* Create lpm. */
+	lpm = rte_lpm_create("lpm_find_existing", SOCKET_ID_ANY, 256 * 32, 0);
+	TEST_LPM_ASSERT(lpm != NULL);
+
+	/* Try to find existing lpm */
+	result = rte_lpm_find_existing("lpm_find_existing");
+	TEST_LPM_ASSERT(result == lpm);
+
+	/* Try to find non-existing lpm */
+	result = rte_lpm_find_existing("lpm_find_non_existing");
+	TEST_LPM_ASSERT(result == NULL);
+
+	/* Cleanup. */
+	rte_lpm_delete_all(lpm);
+	rte_lpm_free(lpm);
+
+	return PASS;
+}
+
+/*
+ * Test the failure condition of overloading the tbl8 so that no more will
+ * fit. Check that we get an error return value in that case.
+ */
+int32_t
+test16(void)
+{
+	uint32_t ip;
+	struct rte_lpm *lpm = rte_lpm_create(__func__, SOCKET_ID_ANY,
+			256 * 32, 0);
+
+	TEST_LPM_ASSERT(lpm != NULL);
+
+	/* ip loops through all possibilities for top 24 bits of address */
+	for (ip = 0; ip < 0xFFFFFF; ip++) {
+		/* add an entry within a different tbl8 each time, since
+		 * depth >24 and the top 24 bits are different */
+		if (rte_lpm_add(lpm, (ip << 8) + 0xF0, 30, 0) < 0)
+			break;
+	}
+
+	if (ip != RTE_LPM_TBL8_NUM_GROUPS) {
+		printf("Error, unexpected failure with filling tbl8 groups\n");
+		printf("Failed after %u additions, expected after %u\n",
+				(unsigned)ip, (unsigned)RTE_LPM_TBL8_NUM_GROUPS);
+	}
+
+	rte_lpm_free(lpm);
+	return PASS;
+}
+
+/*
+ * Test for overwriting of tbl8:
+ *  - add rule /32 and lookup
+ *  - add new rule /24 and lookup
+ *  - add third rule /25 and lookup
+ *  - lookup /32 and /24 rule to ensure the table has not been overwritten.
+ */
+int32_t
+test17(void)
+{
+	struct rte_lpm *lpm = NULL;
+	const uint32_t ip_10_32 = IPv4(10, 10, 10, 2);
+	const uint32_t ip_10_24 = IPv4(10, 10, 10, 0);
+	const uint32_t ip_20_25 = IPv4(10, 10, 20, 2);
+	const uint8_t d_ip_10_32 = 32,
+			d_ip_10_24 = 24,
+			d_ip_20_25 = 25;
+	const uint8_t next_hop_ip_10_32 = 100,
+			next_hop_ip_10_24 = 105,
+			next_hop_ip_20_25 = 111;
+	uint8_t next_hop_return = 0;
+	int32_t status = 0;
+
+	lpm = rte_lpm_create(__func__, SOCKET_ID_ANY, MAX_RULES, 0);
+	TEST_LPM_ASSERT(lpm != NULL);
+
+	status = rte_lpm_add(lpm, ip_10_32, d_ip_10_32, next_hop_ip_10_32);
+	if (status < 0)
+		return -1;
+
+	status = rte_lpm_lookup(lpm, ip_10_32, &next_hop_return);
+	uint8_t test_hop_10_32 = next_hop_return;
+	TEST_LPM_ASSERT(status == 0);
+	TEST_LPM_ASSERT(next_hop_return == next_hop_ip_10_32);
+
+	status = rte_lpm_add(lpm, ip_10_24, d_ip_10_24, next_hop_ip_10_24);
+	if (status < 0)
+		return -1;
+
+	status = rte_lpm_lookup(lpm, ip_10_24, &next_hop_return);
+	uint8_t test_hop_10_24 = next_hop_return;
+	TEST_LPM_ASSERT(status == 0);
+	TEST_LPM_ASSERT(next_hop_return == next_hop_ip_10_24);
+
+	status = rte_lpm_add(lpm, ip_20_25, d_ip_20_25, next_hop_ip_20_25);
+	if (status < 0)
+		return -1;
+
+	status = rte_lpm_lookup(lpm, ip_20_25, &next_hop_return);
+	uint8_t test_hop_20_25 = next_hop_return;
+	TEST_LPM_ASSERT(status == 0);
+	TEST_LPM_ASSERT(next_hop_return == next_hop_ip_20_25);
+
+	if (test_hop_10_32 == test_hop_10_24) {
+		printf("Next hop return equal\n");
+		return -1;
+	}
+
+	if (test_hop_10_24 == test_hop_20_25) {
+		printf("Next hop return equal\n");
+		return -1;
+	}
+
+	status = rte_lpm_lookup(lpm, ip_10_32, &next_hop_return);
+	TEST_LPM_ASSERT(status == 0);
+	TEST_LPM_ASSERT(next_hop_return == next_hop_ip_10_32);
+
+	status = rte_lpm_lookup(lpm, ip_10_24, &next_hop_return);
+	TEST_LPM_ASSERT(status == 0);
+	TEST_LPM_ASSERT(next_hop_return == next_hop_ip_10_24);
+
+	rte_lpm_free(lpm);
+
+	return PASS;
+}
+
+/*
+ * Lookup performance test
+ */
+
+#define ITERATIONS (1 << 10)
+#define BATCH_SIZE (1 << 12)
+#define BULK_SIZE 32
+
+int32_t
+perf_test(void)
+{
+	struct rte_lpm *lpm = NULL;
+	uint64_t begin, total_time, lpm_used_entries = 0;
+	unsigned i, j;
+	uint8_t next_hop_add = 0xAA, next_hop_return = 0;
+	int status = 0;
+	uint64_t cache_line_counter = 0;
+	int64_t count = 0;
+
+	rte_srand(rte_rdtsc());
+
+	/* (re) generate the routing table */
+	generate_large_route_rule_table();
+
+	printf("No. routes = %u\n", (unsigned) NUM_ROUTE_ENTRIES);
+
+	print_route_distribution(large_route_table,
+				 (uint32_t) NUM_ROUTE_ENTRIES);
+
+	lpm = rte_lpm_create(__func__, SOCKET_ID_ANY, 1000000, 0);
+	TEST_LPM_ASSERT(lpm != NULL);
+
+	/* Measure add. */
+	begin = rte_rdtsc();
+
+	for (i = 0; i < NUM_ROUTE_ENTRIES; i++) {
+		if (rte_lpm_add(lpm, large_route_table[i].ip,
+				large_route_table[i].depth, next_hop_add) == 0)
+			status++;
+	}
+	/* End Timer. */
+	total_time = rte_rdtsc() - begin;
+
+	printf("Unique added entries = %d\n", status);
+	/* Obtain add statistics. */
+	for (i = 0; i < RTE_LPM_TBL24_NUM_ENTRIES; i++) {
+		if (lpm->tbl24[i].valid)
+			lpm_used_entries++;
+
+		/* Each 32 tbl24 entries (2 bytes each) span one 64B line. */
+		if (i % 32 == 0) {
+			if ((uint64_t)count < lpm_used_entries) {
+				cache_line_counter++;
+				count = lpm_used_entries;
+			}
+		}
+	}
+
+	printf("Used table 24 entries = %u (%g%%)\n",
+			(unsigned) lpm_used_entries,
+			(lpm_used_entries * 100.0) / RTE_LPM_TBL24_NUM_ENTRIES);
+	printf("64 byte Cache entries used = %u (%u bytes)\n",
+			(unsigned) cache_line_counter, (unsigned) cache_line_counter * 64);
+
+	printf("Average LPM Add: %g cycles\n", (double)total_time / NUM_ROUTE_ENTRIES);
+
+	/* Measure single Lookup */
+	total_time = 0;
+	count = 0;
+
+	for (i = 0; i < ITERATIONS; i++) {
+		static uint32_t ip_batch[BATCH_SIZE];
+
+		for (j = 0; j < BATCH_SIZE; j++)
+			ip_batch[j] = rte_rand();
+
+		/* Lookup per batch */
+		begin = rte_rdtsc();
+
+		for (j = 0; j < BATCH_SIZE; j++) {
+			if (rte_lpm_lookup(lpm, ip_batch[j], &next_hop_return) != 0)
+				count++;
+		}
+
+		total_time += rte_rdtsc() - begin;
+
+	}
+	printf("Average LPM Lookup: %.1f cycles (fails = %.1f%%)\n",
+			(double)total_time / ((double)ITERATIONS * BATCH_SIZE),
+			(count * 100.0) / (double)(ITERATIONS * BATCH_SIZE));
+
+	/* Measure bulk Lookup */
+	total_time = 0;
+	count = 0;
+	for (i = 0; i < ITERATIONS; i++) {
+		static uint32_t ip_batch[BATCH_SIZE];
+		uint16_t next_hops[BULK_SIZE];
+
+		/* Create array of random IP addresses */
+		for (j = 0; j < BATCH_SIZE; j++)
+			ip_batch[j] = rte_rand();
+
+		/* Lookup per batch */
+		begin = rte_rdtsc();
+		for (j = 0; j < BATCH_SIZE; j += BULK_SIZE) {
+			unsigned k;
+			rte_lpm_lookup_bulk(lpm, &ip_batch[j], next_hops, BULK_SIZE);
+			for (k = 0; k < BULK_SIZE; k++)
+				if (unlikely(!(next_hops[k] & RTE_LPM_LOOKUP_SUCCESS)))
+					count++;
+		}
+
+		total_time += rte_rdtsc() - begin;
+	}
+	printf("BULK LPM Lookup: %.1f cycles (fails = %.1f%%)\n",
+			(double)total_time / ((double)ITERATIONS * BATCH_SIZE),
+			(count * 100.0) / (double)(ITERATIONS * BATCH_SIZE));
+
+	/* Measure LookupX4 */
+	total_time = 0;
+	count = 0;
+	for (i = 0; i < ITERATIONS; i++) {
+		static uint32_t ip_batch[BATCH_SIZE];
+		uint16_t next_hops[4];
+
+		/* Create array of random IP addresses */
+		for (j = 0; j < BATCH_SIZE; j++)
+			ip_batch[j] = rte_rand();
+
+		/* Lookup per batch */
+		begin = rte_rdtsc();
+		for (j = 0; j < BATCH_SIZE; j += RTE_DIM(next_hops)) {
+			unsigned k;
+			__m128i ipx4;
+
+			/* Unaligned load of four IPv4 addresses. */
+			ipx4 = _mm_loadu_si128((__m128i *)(ip_batch + j));
+			rte_lpm_lookupx4(lpm, ipx4, next_hops, UINT16_MAX);
+			for (k = 0; k < RTE_DIM(next_hops); k++)
+				if (unlikely(next_hops[k] == UINT16_MAX))
+					count++;
+		}
+
+		total_time += rte_rdtsc() - begin;
+	}
+	printf("LPM LookupX4: %.1f cycles (fails = %.1f%%)\n",
+			(double)total_time / ((double)ITERATIONS * BATCH_SIZE),
+			(count * 100.0) / (double)(ITERATIONS * BATCH_SIZE));
+
+	/* Measure Delete */
+	status = 0;
+	total_time = 0;
+	begin = rte_rdtsc();
+
+	for (i = 0; i < NUM_ROUTE_ENTRIES; i++) {
+		/* rte_lpm_delete(lpm, ip, depth) */
+		status += rte_lpm_delete(lpm, large_route_table[i].ip,
+				large_route_table[i].depth);
+	}
+
+	total_time += rte_rdtsc() - begin;
+
+	printf("Average LPM Delete: %g cycles\n",
+			(double)total_time / NUM_ROUTE_ENTRIES);
+
+	rte_lpm_delete_all(lpm);
+	rte_lpm_free(lpm);
+
+	return PASS;
+}
+
+/*
+ * Do all unit and performance tests.
+ */
+
+static int
+test_lpm(void)
+{
+	unsigned i;
+	int status, global_status = 0;
+
+	for (i = 0; i < NUM_LPM_TESTS; i++) {
+		status = tests[i]();
+		if (status < 0) {
+			printf("ERROR: LPM Test %s: FAIL\n", RTE_STR(tests[i]));
+			global_status = status;
+		}
+	}
+
+	return global_status;
+}
+
+REGISTER_TEST_COMMAND_VERSION(lpm_autotest, test_lpm, TEST_DPDK_ABI_VERSION_V20);
diff --git a/app/test/v2.0/test_v20.c b/app/test/v2.0/test_v20.c
new file mode 100644
index 000000000..6285e2882
--- /dev/null
+++ b/app/test/v2.0/test_v20.c
@@ -0,0 +1,14 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2010-2014 Intel Corporation
+ */
+
+#include <stdio.h>
+#include <stdint.h>
+#include <stdlib.h>
+
+#include <rte_ip.h>
+#include <rte_lpm.h>
+
+#include "../test.h"
+
+REGISTER_TEST_ABI_VERSION(v20, TEST_DPDK_ABI_VERSION_V20);
-- 
2.17.1


^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: [dpdk-dev] [PATCH 0/2] Add ABI Version Testing to app/test
  2019-05-28 11:51 [dpdk-dev] [PATCH 0/2] Add ABI Version Testing to app/test Ray Kinsella
  2019-05-28 11:51 ` [dpdk-dev] [PATCH 1/2] app/test: Add ABI Version Testing functionality Ray Kinsella
  2019-05-28 11:51 ` [dpdk-dev] [PATCH 2/2] app/test: LPMv4 ABI Version Testing Ray Kinsella
@ 2019-05-28 12:08 ` Bruce Richardson
  2019-05-28 12:58   ` Ray Kinsella
  2019-05-28 14:01   ` Ray Kinsella
  2 siblings, 2 replies; 7+ messages in thread
From: Bruce Richardson @ 2019-05-28 12:08 UTC (permalink / raw)
  To: Ray Kinsella; +Cc: vladimir.medvedkin, dev

On Tue, May 28, 2019 at 12:51:56PM +0100, Ray Kinsella wrote:
> This patchset adds ABI Version Testing functionality to the app/test unit
> test framework.
> 
> The patchset is intended to address two issues previously raised during ML
> conversations on ABI Stability;
> 1. How do we unit test still supported previous ABI Versions.
> 2. How to we unit test inline functions from still supported previous ABI
> Versions.
> 
> The more obvious way to achieve both of the above is to simply archive
> pre-built binaries compiled against previous versions of DPDK for use unit
> testing previous ABI Versions, and while this should still be done as an
> additional check, this approach does not scale well, must every DPDK
> developer have a local copy of these binaries to test with, before
> upstreaming changes?
> 
> Instead starting with rte_lpm, I did the following:-
> 
> * I reproduced mostly unmodified unit tests from previous ABI Versions,
>   in this case v2.0 and v16.04
> * I reproduced the rte_lpm interface header from these previous ABI
>   Versions,including the inline functions and remapping symbols to
>   appropriate versions.
> * I added support for multiple abi versions to the app/test unit test
>   framework to allow users to switch between abi versions (set_abi_version),
>   without further polluting the already long list of unit tests available in
>   app/test.
> 
> The intention here is that, in future as developers need to depreciate
> APIs, their associated unit tests may move into the ABI Version testing
> mechanism of the app/test instead of being replaced by the latest set of
> unit tests as would be the case today.
> 
> ToDo:
> * Refactor the v2.0 and v16.04 unit tests to separate functional and
>   performance test cases.
> * Add support for trigger ABI Version unit tests from the app/test command
>   line.
> 
While I admire the goal, given the amount of code that seems to be involved
here, I'm not sure if the "test" binary is the place to put this. I think
it might be better as a separate ABI compatibility test app.

A separate question is whether this is really necessary to ensure ABI
compatibility? Do other projects do this? Is the info from the ABI
compatibility checker script already in DPDK, or from other
already-available tools not sufficient?

/Bruce

^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: [dpdk-dev] [PATCH 0/2] Add ABI Version Testing to app/test
  2019-05-28 12:08 ` [dpdk-dev] [PATCH 0/2] Add ABI Version Testing to app/test Bruce Richardson
@ 2019-05-28 12:58   ` Ray Kinsella
  2019-05-28 14:01   ` Ray Kinsella
  1 sibling, 0 replies; 7+ messages in thread
From: Ray Kinsella @ 2019-05-28 12:58 UTC (permalink / raw)
  To: Bruce Richardson, Ray Kinsella; +Cc: vladimir.medvedkin, dev

Hi Bruce,

There was a bit of a misfire on the patch submission - it came from the
wrong email a/c and the ML (rightly) rejected it.

Let me submit the patch properly and the feedback can begin in earnest then.

Ray K

On 28/05/2019 13:08, Bruce Richardson wrote:
> On Tue, May 28, 2019 at 12:51:56PM +0100, Ray Kinsella wrote:
>> This patchset adds ABI Version Testing functionality to the app/test unit
>> test framework.
>>
>> The patchset is intended to address two issues previously raised during ML
>> conversations on ABI Stability;
>> 1. How do we unit test still supported previous ABI Versions.
>> 2. How to we unit test inline functions from still supported previous ABI
>> Versions.
>>
>> The more obvious way to achieve both of the above is to simply archive
>> pre-built binaries compiled against previous versions of DPDK for use unit
>> testing previous ABI Versions, and while this should still be done as an
>> additional check, this approach does not scale well, must every DPDK
>> developer have a local copy of these binaries to test with, before
>> upstreaming changes?
>>
>> Instead starting with rte_lpm, I did the following:-
>>
>> * I reproduced mostly unmodified unit tests from previous ABI Versions,
>>   in this case v2.0 and v16.04
>> * I reproduced the rte_lpm interface header from these previous ABI
>>   Versions,including the inline functions and remapping symbols to
>>   appropriate versions.
>> * I added support for multiple abi versions to the app/test unit test
>>   framework to allow users to switch between abi versions (set_abi_version),
>>   without further polluting the already long list of unit tests available in
>>   app/test.
>>
>> The intention here is that, in future as developers need to depreciate
>> APIs, their associated unit tests may move into the ABI Version testing
>> mechanism of the app/test instead of being replaced by the latest set of
>> unit tests as would be the case today.
>>
>> ToDo:
>> * Refactor the v2.0 and v16.04 unit tests to separate functional and
>>   performance test cases.
>> * Add support for trigger ABI Version unit tests from the app/test command
>>   line.
>>
> While I admire the goal, given the amount of code that seems to be involved
> here, I'm not sure if the "test" binary is the place to put this. I think
> it might be better as a separate ABI compatibility test app.
> 
> A separate question is whether this is really necessary to ensure ABI
> compatibility? Do other projects do this? Is the info from the ABI
> compatibility checker script already in DPDK, or from other
> already-available tools not sufficient?
> 
> /Bruce
> 

^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: [dpdk-dev] [PATCH 0/2] Add ABI Version Testing to app/test
  2019-05-28 12:08 ` [dpdk-dev] [PATCH 0/2] Add ABI Version Testing to app/test Bruce Richardson
  2019-05-28 12:58   ` Ray Kinsella
@ 2019-05-28 14:01   ` Ray Kinsella
  1 sibling, 0 replies; 7+ messages in thread
From: Ray Kinsella @ 2019-05-28 14:01 UTC (permalink / raw)
  To: Bruce Richardson, Ray Kinsella; +Cc: vladimir.medvedkin, dev

Someone kindly approved it, saving me from having to send it again from
the right email a/c - thank you.

On 28/05/2019 13:08, Bruce Richardson wrote:
> On Tue, May 28, 2019 at 12:51:56PM +0100, Ray Kinsella wrote:
>> This patchset adds ABI Version Testing functionality to the app/test unit
>> test framework.
>>
>> The patchset is intended to address two issues previously raised during ML
>> conversations on ABI Stability;
>> 1. How do we unit test still supported previous ABI Versions.
>> 2. How to we unit test inline functions from still supported previous ABI
>> Versions.
>>
>> The more obvious way to achieve both of the above is to simply archive
>> pre-built binaries compiled against previous versions of DPDK for use unit
>> testing previous ABI Versions, and while this should still be done as an
>> additional check, this approach does not scale well, must every DPDK
>> developer have a local copy of these binaries to test with, before
>> upstreaming changes?
>>
>> Instead starting with rte_lpm, I did the following:-
>>
>> * I reproduced mostly unmodified unit tests from previous ABI Versions,
>>   in this case v2.0 and v16.04
>> * I reproduced the rte_lpm interface header from these previous ABI
>>   Versions,including the inline functions and remapping symbols to
>>   appropriate versions.
>> * I added support for multiple abi versions to the app/test unit test
>>   framework to allow users to switch between abi versions (set_abi_version),
>>   without further polluting the already long list of unit tests available in
>>   app/test.
>>
>> The intention here is that, in future as developers need to depreciate
>> APIs, their associated unit tests may move into the ABI Version testing
>> mechanism of the app/test instead of being replaced by the latest set of
>> unit tests as would be the case today.
>>
>> ToDo:
>> * Refactor the v2.0 and v16.04 unit tests to separate functional and
>>   performance test cases.
>> * Add support for trigger ABI Version unit tests from the app/test command
>>   line.
>>
> While I admire the goal, given the amount of code that seems to be involved
> here, I'm not sure if the "test" binary is the place to put this. I think
> it might be better as a separate ABI compatibility test app.

I did think about that also - the test binary, similar to testpmd, is
very busy. I sought to mitigate that with the set_abi_version command,
avoiding unit test name collisions etc.
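
For instance, the flow from the test shell would look roughly like the
following (the version tokens are illustrative - they follow whatever
gets registered via REGISTER_TEST_ABI_VERSION):

  RTE>>set_abi_version v20
  RTE>>lpm_autotest
  RTE>>set_abi_version v1604
  RTE>>lpm_autotest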

I would have a huge concern about putting it into a separate binary, as
these test cases would quickly be forgotten about.

> 
> A separate question is whether this is really necessary to ensure ABI
> compatibility? 

I would argue that we currently have no idea whether ABI versioned
functions actually work. Why offer backward compatibility if we don't
test it?

> Do other projects do this? 

The C++ stdlib is in many ways similar to DPDK; ABI compatibility is a well
understood problem over there. See the following presentation, slide 20:

https://accu.org/content/conf2015/JonathanWakely-What%20Is%20An%20ABI%20And%20Why%20Is%20It%20So%20Complicated.pdf

They have a small corpus of test cases for this:

https://github.com/gcc-mirror/gcc/tree/master/libstdc%2B%2B-v3/testsuite/[backward,abi]

> Is the info from the ABI
> compatibility checker script already in DPDK, or from other
> already-available tools not sufficient?

Well this just tells you that your ABI has changed. We don't have
anything at the moment that tells us that the ABI compatible functions
actually work ... ?
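
To make that concrete, what needs exercising is the binding below - a
sketch of the GNU symbol versioning DPDK already uses (VERSION_SYMBOL
expands to roughly this; the rte_lpm names here are for illustration):

	/* The v2.0 implementation is kept alongside the current one and
	 * tied to the version node that old binaries link against. */
	struct rte_lpm *
	rte_lpm_create_v20(const char *name, int socket_id, int max_rules,
			int flags);
	__asm__(".symver rte_lpm_create_v20, rte_lpm_create@DPDK_2.0");

Nothing we run today proves that old entry point still behaves correctly.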

> 
> /Bruce
> 

^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: [dpdk-dev] [PATCH 2/2] app/test: LPMv4 ABI Version Testing
  2019-05-28 11:51 ` [dpdk-dev] [PATCH 2/2] app/test: LPMv4 ABI Version Testing Ray Kinsella
@ 2019-05-29 13:50   ` Aaron Conole
  0 siblings, 0 replies; 7+ messages in thread
From: Aaron Conole @ 2019-05-29 13:50 UTC (permalink / raw)
  To: Ray Kinsella; +Cc: bruce.richardson, vladimir.medvedkin, dev

Ray Kinsella <ray.kinsella@intel.com> writes:

> This second patch adds the LPM ABI Version Unit Tests, comprised of
>
> 1. Registering DPDK v2.0 and DPDK v16.04 ABI Versions with the
>    infrastructure.
> 2. Forward Porting the DPDK v2.0 and DPDK v16.04 LPM Unit Test
>    cases, remapping the LPM Library symbols to the appropriate versions.
> 3. Refactoring the lpm perf routes table to make this
>    functionality available to the v2.0 and v16.04 unit tests; forward
>    porting this code also from v2.0 etc. would have increased the DPDK
>    codebase by several MLoC.
>
> Signed-off-by: Ray Kinsella <ray.kinsella@intel.com>
> ---

Hi Ray,

This patch causes build failures when building for AARCH64 platforms.

See:

https://travis-ci.com/ovsrobot/dpdk/jobs/203566521

>  app/test/Makefile              |   12 +-
>  app/test/meson.build           |    5 +
>  app/test/test_lpm.c            |    1 +
>  app/test/test_lpm_perf.c       |  293 +------
>  app/test/test_lpm_routes.c     |  287 +++++++
>  app/test/test_lpm_routes.h     |   25 +
>  app/test/v16.04/dcompat.h      |   23 +
>  app/test/v16.04/rte_lpm.h      |  463 +++++++++++
>  app/test/v16.04/rte_lpm_neon.h |  119 +++
>  app/test/v16.04/rte_lpm_sse.h  |  120 +++
>  app/test/v16.04/test_lpm.c     | 1405 ++++++++++++++++++++++++++++++++
>  app/test/v16.04/test_v1604.c   |   14 +
>  app/test/v2.0/dcompat.h        |   23 +
>  app/test/v2.0/rte_lpm.h        |  443 ++++++++++
>  app/test/v2.0/test_lpm.c       | 1306 +++++++++++++++++++++++++++++
>  app/test/v2.0/test_v20.c       |   14 +
>  16 files changed, 4261 insertions(+), 292 deletions(-)
>  create mode 100644 app/test/test_lpm_routes.c
>  create mode 100644 app/test/test_lpm_routes.h
>  create mode 100644 app/test/v16.04/dcompat.h
>  create mode 100644 app/test/v16.04/rte_lpm.h
>  create mode 100644 app/test/v16.04/rte_lpm_neon.h
>  create mode 100644 app/test/v16.04/rte_lpm_sse.h
>  create mode 100644 app/test/v16.04/test_lpm.c
>  create mode 100644 app/test/v16.04/test_v1604.c
>  create mode 100644 app/test/v2.0/dcompat.h
>  create mode 100644 app/test/v2.0/rte_lpm.h
>  create mode 100644 app/test/v2.0/test_lpm.c
>  create mode 100644 app/test/v2.0/test_v20.c
>
> diff --git a/app/test/Makefile b/app/test/Makefile
> index 68d6b4fbc..5899eb8b9 100644
> --- a/app/test/Makefile
> +++ b/app/test/Makefile
> @@ -78,6 +78,10 @@ SRCS-y += test_ring.c
>  SRCS-y += test_ring_perf.c
>  SRCS-y += test_pmd_perf.c
>  
> +#ABI Version Testing
> +SRCS-$(CONFIG_RTE_BUILD_SHARED_LIB) += v2.0/test_v20.c
> +SRCS-$(CONFIG_RTE_BUILD_SHARED_LIB) += v16.04/test_v1604.c
> +
>  ifeq ($(CONFIG_RTE_LIBRTE_TABLE),y)
>  SRCS-y += test_table.c
>  SRCS-$(CONFIG_RTE_LIBRTE_PIPELINE) += test_table_pipeline.c
> @@ -107,7 +111,6 @@ SRCS-y += test_logs.c
>  SRCS-y += test_memcpy.c
>  SRCS-y += test_memcpy_perf.c
>  
> -
>  SRCS-$(CONFIG_RTE_LIBRTE_MEMBER) += test_member.c
>  SRCS-$(CONFIG_RTE_LIBRTE_MEMBER) += test_member_perf.c
>  
> @@ -122,11 +125,18 @@ SRCS-$(CONFIG_RTE_LIBRTE_HASH) += test_hash_multiwriter.c
>  SRCS-$(CONFIG_RTE_LIBRTE_HASH) += test_hash_readwrite.c
>  SRCS-$(CONFIG_RTE_LIBRTE_HASH) += test_hash_readwrite_lf.c
>  
> +SRCS-$(CONFIG_RTE_LIBRTE_LPM) += test_lpm_routes.c
>  SRCS-$(CONFIG_RTE_LIBRTE_LPM) += test_lpm.c
>  SRCS-$(CONFIG_RTE_LIBRTE_LPM) += test_lpm_perf.c
>  SRCS-$(CONFIG_RTE_LIBRTE_LPM) += test_lpm6.c
>  SRCS-$(CONFIG_RTE_LIBRTE_LPM) += test_lpm6_perf.c
>  
> +#LPM ABI Testing
> +ifeq ($(CONFIG_RTE_BUILD_SHARED_LIB),y)
> +SRCS-$(CONFIG_RTE_LIBRTE_LPM) += v2.0/test_lpm.c
> +SRCS-$(CONFIG_RTE_LIBRTE_LPM) += v16.04/test_lpm.c
> +endif
> +
>  SRCS-y += test_debug.c
>  SRCS-y += test_errno.c
>  SRCS-y += test_tailq.c
> diff --git a/app/test/meson.build b/app/test/meson.build
> index 83391cef0..628f4e1ff 100644
> --- a/app/test/meson.build
> +++ b/app/test/meson.build
> @@ -4,6 +4,8 @@
>  test_sources = files('commands.c',
>  	'packet_burst_generator.c',
>  	'sample_packet_forward.c',
> +	'v2.0/test_v20.c',
> +	'v16.04/test_v1604.c',
>  	'test.c',
>  	'test_acl.c',
>  	'test_alarm.c',
> @@ -63,6 +65,9 @@ test_sources = files('commands.c',
>  	'test_lpm6.c',
>  	'test_lpm6_perf.c',
>  	'test_lpm_perf.c',
> +	'test_lpm_routes.c',
> +	'v2.0/test_lpm.c',
> +	'v16.04/test_lpm.c',
>  	'test_malloc.c',
>  	'test_mbuf.c',
>  	'test_member.c',
> diff --git a/app/test/test_lpm.c b/app/test/test_lpm.c
> index 5d697dd0f..bfa702677 100644
> --- a/app/test/test_lpm.c
> +++ b/app/test/test_lpm.c
> @@ -1277,6 +1277,7 @@ test_lpm(void)
>  	int status, global_status = 0;
>  
>  	for (i = 0; i < NUM_LPM_TESTS; i++) {
> +		printf("# test %02d\n", i);
>  		status = tests[i]();
>  		if (status < 0) {
>  			printf("ERROR: LPM Test %u: FAIL\n", i);
> diff --git a/app/test/test_lpm_perf.c b/app/test/test_lpm_perf.c
> index 3b98ce0c8..a6b8b35c2 100644
> --- a/app/test/test_lpm_perf.c
> +++ b/app/test/test_lpm_perf.c
> @@ -5,7 +5,6 @@
>  #include <stdio.h>
>  #include <stdint.h>
>  #include <stdlib.h>
> -#include <math.h>
>  
>  #include <rte_cycles.h>
>  #include <rte_random.h>
> @@ -13,6 +12,7 @@
>  #include <rte_ip.h>
>  #include <rte_lpm.h>
>  
> +#include "test_lpm_routes.h"
>  #include "test.h"
>  #include "test_xmmt_ops.h"
>  
> @@ -27,295 +27,6 @@
>  #define BATCH_SIZE (1 << 12)
>  #define BULK_SIZE 32
>  
> -#define MAX_RULE_NUM (1200000)
> -
> -struct route_rule {
> -	uint32_t ip;
> -	uint8_t depth;
> -};
> -
> -struct route_rule large_route_table[MAX_RULE_NUM];
> -
> -static uint32_t num_route_entries;
> -#define NUM_ROUTE_ENTRIES num_route_entries
> -
> -enum {
> -	IP_CLASS_A,
> -	IP_CLASS_B,
> -	IP_CLASS_C
> -};
> -
> -/* struct route_rule_count defines the total number of rules in following a/b/c
> - * each item in a[]/b[]/c[] is the number of common IP address class A/B/C, not
> - * including the ones for private local network.
> - */
> -struct route_rule_count {
> -	uint32_t a[RTE_LPM_MAX_DEPTH];
> -	uint32_t b[RTE_LPM_MAX_DEPTH];
> -	uint32_t c[RTE_LPM_MAX_DEPTH];
> -};
> -
> -/* All following numbers of each depth of each common IP class are just
> - * got from previous large constant table in app/test/test_lpm_routes.h .
> - * In order to match similar performance, they keep same depth and IP
> - * address coverage as previous constant table. These numbers don't
> - * include any private local IP address. As previous large const rule
> - * table was just dumped from a real router, there are no any IP address
> - * in class C or D.
> - */
> -static struct route_rule_count rule_count = {
> -	.a = { /* IP class A in which the most significant bit is 0 */
> -		    0, /* depth =  1 */
> -		    0, /* depth =  2 */
> -		    1, /* depth =  3 */
> -		    0, /* depth =  4 */
> -		    2, /* depth =  5 */
> -		    1, /* depth =  6 */
> -		    3, /* depth =  7 */
> -		  185, /* depth =  8 */
> -		   26, /* depth =  9 */
> -		   16, /* depth = 10 */
> -		   39, /* depth = 11 */
> -		  144, /* depth = 12 */
> -		  233, /* depth = 13 */
> -		  528, /* depth = 14 */
> -		  866, /* depth = 15 */
> -		 3856, /* depth = 16 */
> -		 3268, /* depth = 17 */
> -		 5662, /* depth = 18 */
> -		17301, /* depth = 19 */
> -		22226, /* depth = 20 */
> -		11147, /* depth = 21 */
> -		16746, /* depth = 22 */
> -		17120, /* depth = 23 */
> -		77578, /* depth = 24 */
> -		  401, /* depth = 25 */
> -		  656, /* depth = 26 */
> -		 1107, /* depth = 27 */
> -		 1121, /* depth = 28 */
> -		 2316, /* depth = 29 */
> -		  717, /* depth = 30 */
> -		   10, /* depth = 31 */
> -		   66  /* depth = 32 */
> -	},
> -	.b = { /* IP class A in which the most 2 significant bits are 10 */
> -		    0, /* depth =  1 */
> -		    0, /* depth =  2 */
> -		    0, /* depth =  3 */
> -		    0, /* depth =  4 */
> -		    1, /* depth =  5 */
> -		    1, /* depth =  6 */
> -		    1, /* depth =  7 */
> -		    3, /* depth =  8 */
> -		    3, /* depth =  9 */
> -		   30, /* depth = 10 */
> -		   25, /* depth = 11 */
> -		  168, /* depth = 12 */
> -		  305, /* depth = 13 */
> -		  569, /* depth = 14 */
> -		 1129, /* depth = 15 */
> -		50800, /* depth = 16 */
> -		 1645, /* depth = 17 */
> -		 1820, /* depth = 18 */
> -		 3506, /* depth = 19 */
> -		 3258, /* depth = 20 */
> -		 3424, /* depth = 21 */
> -		 4971, /* depth = 22 */
> -		 6885, /* depth = 23 */
> -		39771, /* depth = 24 */
> -		  424, /* depth = 25 */
> -		  170, /* depth = 26 */
> -		  433, /* depth = 27 */
> -		   92, /* depth = 28 */
> -		  366, /* depth = 29 */
> -		  377, /* depth = 30 */
> -		    2, /* depth = 31 */
> -		  200  /* depth = 32 */
> -	},
> -	.c = { /* IP class A in which the most 3 significant bits are 110 */
> -		     0, /* depth =  1 */
> -		     0, /* depth =  2 */
> -		     0, /* depth =  3 */
> -		     0, /* depth =  4 */
> -		     0, /* depth =  5 */
> -		     0, /* depth =  6 */
> -		     0, /* depth =  7 */
> -		    12, /* depth =  8 */
> -		     8, /* depth =  9 */
> -		     9, /* depth = 10 */
> -		    33, /* depth = 11 */
> -		    69, /* depth = 12 */
> -		   237, /* depth = 13 */
> -		  1007, /* depth = 14 */
> -		  1717, /* depth = 15 */
> -		 14663, /* depth = 16 */
> -		  8070, /* depth = 17 */
> -		 16185, /* depth = 18 */
> -		 48261, /* depth = 19 */
> -		 36870, /* depth = 20 */
> -		 33960, /* depth = 21 */
> -		 50638, /* depth = 22 */
> -		 61422, /* depth = 23 */
> -		466549, /* depth = 24 */
> -		  1829, /* depth = 25 */
> -		  4824, /* depth = 26 */
> -		  4927, /* depth = 27 */
> -		  5914, /* depth = 28 */
> -		 10254, /* depth = 29 */
> -		  4905, /* depth = 30 */
> -		     1, /* depth = 31 */
> -		   716  /* depth = 32 */
> -	}
> -};
> -
> -static void generate_random_rule_prefix(uint32_t ip_class, uint8_t depth)
> -{
> -/* IP address class A, the most significant bit is 0 */
> -#define IP_HEAD_MASK_A			0x00000000
> -#define IP_HEAD_BIT_NUM_A		1
> -
> -/* IP address class B, the most significant 2 bits are 10 */
> -#define IP_HEAD_MASK_B			0x80000000
> -#define IP_HEAD_BIT_NUM_B		2
> -
> -/* IP address class C, the most significant 3 bits are 110 */
> -#define IP_HEAD_MASK_C			0xC0000000
> -#define IP_HEAD_BIT_NUM_C		3
> -
> -	uint32_t class_depth;
> -	uint32_t range;
> -	uint32_t mask;
> -	uint32_t step;
> -	uint32_t start;
> -	uint32_t fixed_bit_num;
> -	uint32_t ip_head_mask;
> -	uint32_t rule_num;
> -	uint32_t k;
> -	struct route_rule *ptr_rule;
> -
> -	if (ip_class == IP_CLASS_A) {        /* IP Address class A */
> -		fixed_bit_num = IP_HEAD_BIT_NUM_A;
> -		ip_head_mask = IP_HEAD_MASK_A;
> -		rule_num = rule_count.a[depth - 1];
> -	} else if (ip_class == IP_CLASS_B) { /* IP Address class B */
> -		fixed_bit_num = IP_HEAD_BIT_NUM_B;
> -		ip_head_mask = IP_HEAD_MASK_B;
> -		rule_num = rule_count.b[depth - 1];
> -	} else {                             /* IP Address class C */
> -		fixed_bit_num = IP_HEAD_BIT_NUM_C;
> -		ip_head_mask = IP_HEAD_MASK_C;
> -		rule_num = rule_count.c[depth - 1];
> -	}
> -
> -	if (rule_num == 0)
> -		return;
> -
> -	/* the number of rest bits which don't include the most significant
> -	 * fixed bits for this IP address class
> -	 */
> -	class_depth = depth - fixed_bit_num;
> -
> -	/* range is the maximum number of rules for this depth and
> -	 * this IP address class
> -	 */
> -	range = 1 << class_depth;
> -
> -	/* only mask the most depth significant generated bits
> -	 * except fixed bits for IP address class
> -	 */
> -	mask = range - 1;
> -
> -	/* Widen coverage of IP address in generated rules */
> -	if (range <= rule_num)
> -		step = 1;
> -	else
> -		step = round((double)range / rule_num);
> -
> -	/* Only generate rest bits except the most significant
> -	 * fixed bits for IP address class
> -	 */
> -	start = lrand48() & mask;
> -	ptr_rule = &large_route_table[num_route_entries];
> -	for (k = 0; k < rule_num; k++) {
> -		ptr_rule->ip = (start << (RTE_LPM_MAX_DEPTH - depth))
> -			| ip_head_mask;
> -		ptr_rule->depth = depth;
> -		ptr_rule++;
> -		start = (start + step) & mask;
> -	}
> -	num_route_entries += rule_num;
> -}
> -
> -static void insert_rule_in_random_pos(uint32_t ip, uint8_t depth)
> -{
> -	uint32_t pos;
> -	int try_count = 0;
> -	struct route_rule tmp;
> -
> -	do {
> -		pos = lrand48();
> -		try_count++;
> -	} while ((try_count < 10) && (pos > num_route_entries));
> -
> -	if ((pos > num_route_entries) || (pos >= MAX_RULE_NUM))
> -		pos = num_route_entries >> 1;
> -
> -	tmp = large_route_table[pos];
> -	large_route_table[pos].ip = ip;
> -	large_route_table[pos].depth = depth;
> -	if (num_route_entries < MAX_RULE_NUM)
> -		large_route_table[num_route_entries++] = tmp;
> -}
> -
> -static void generate_large_route_rule_table(void)
> -{
> -	uint32_t ip_class;
> -	uint8_t  depth;
> -
> -	num_route_entries = 0;
> -	memset(large_route_table, 0, sizeof(large_route_table));
> -
> -	for (ip_class = IP_CLASS_A; ip_class <= IP_CLASS_C; ip_class++) {
> -		for (depth = 1; depth <= RTE_LPM_MAX_DEPTH; depth++) {
> -			generate_random_rule_prefix(ip_class, depth);
> -		}
> -	}
> -
> -	/* Add following rules to keep same as previous large constant table,
> -	 * they are 4 rules with private local IP address and 1 all-zeros prefix
> -	 * with depth = 8.
> -	 */
> -	insert_rule_in_random_pos(IPv4(0, 0, 0, 0), 8);
> -	insert_rule_in_random_pos(IPv4(10, 2, 23, 147), 32);
> -	insert_rule_in_random_pos(IPv4(192, 168, 100, 10), 24);
> -	insert_rule_in_random_pos(IPv4(192, 168, 25, 100), 24);
> -	insert_rule_in_random_pos(IPv4(192, 168, 129, 124), 32);
> -}
> -
> -static void
> -print_route_distribution(const struct route_rule *table, uint32_t n)
> -{
> -	unsigned i, j;
> -
> -	printf("Route distribution per prefix width: \n");
> -	printf("DEPTH    QUANTITY (PERCENT)\n");
> -	printf("--------------------------- \n");
> -
> -	/* Count depths. */
> -	for (i = 1; i <= 32; i++) {
> -		unsigned depth_counter = 0;
> -		double percent_hits;
> -
> -		for (j = 0; j < n; j++)
> -			if (table[j].depth == (uint8_t) i)
> -				depth_counter++;
> -
> -		percent_hits = ((double)depth_counter)/((double)n) * 100;
> -		printf("%.2u%15u (%.2f)\n", i, depth_counter, percent_hits);
> -	}
> -	printf("\n");
> -}
> -
>  static int
>  test_lpm_perf(void)
>  {
> @@ -375,7 +86,7 @@ test_lpm_perf(void)
>  			(unsigned) cache_line_counter, (unsigned) cache_line_counter * 64);
>  
>  	printf("Average LPM Add: %g cycles\n",
> -			(double)total_time / NUM_ROUTE_ENTRIES);
> +	       (double)total_time / NUM_ROUTE_ENTRIES);
>  
>  	/* Measure single Lookup */
>  	total_time = 0;
> diff --git a/app/test/test_lpm_routes.c b/app/test/test_lpm_routes.c
> new file mode 100644
> index 000000000..08128542a
> --- /dev/null
> +++ b/app/test/test_lpm_routes.c
> @@ -0,0 +1,287 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2010-2019 Intel Corporation
> + */
> +
> +#include <math.h>
> +
> +#include "rte_lpm.h"
> +#include "test_lpm_routes.h"
> +
> +uint32_t num_route_entries;
> +struct route_rule large_route_table[MAX_RULE_NUM];
> +
> +enum {
> +	IP_CLASS_A,
> +	IP_CLASS_B,
> +	IP_CLASS_C
> +};
> +
> +/* struct route_rule_count defines the total number of rules in the
> + * following a/b/c arrays; each item in a[]/b[]/c[] is the number of rules
> + * per depth for the common IP address classes A/B/C, not including the
> + * ones for the private local network.
> + */
> +struct route_rule_count {
> +	uint32_t a[RTE_LPM_MAX_DEPTH];
> +	uint32_t b[RTE_LPM_MAX_DEPTH];
> +	uint32_t c[RTE_LPM_MAX_DEPTH];
> +};
> +
> +/* The following per-depth rule counts for each common IP class were
> + * taken from the previous large constant table in
> + * app/test/test_lpm_routes.h. In order to achieve comparable performance,
> + * they keep the same depth and IP address coverage as that constant
> + * table. These numbers don't include any private local IP addresses. As
> + * the previous large constant rule table was dumped from a real router,
> + * there are no addresses in class D or E.
> + */
> +static struct route_rule_count rule_count = {
> +	.a = { /* IP class A in which the most significant bit is 0 */
> +		    0, /* depth =  1 */
> +		    0, /* depth =  2 */
> +		    1, /* depth =  3 */
> +		    0, /* depth =  4 */
> +		    2, /* depth =  5 */
> +		    1, /* depth =  6 */
> +		    3, /* depth =  7 */
> +		  185, /* depth =  8 */
> +		   26, /* depth =  9 */
> +		   16, /* depth = 10 */
> +		   39, /* depth = 11 */
> +		  144, /* depth = 12 */
> +		  233, /* depth = 13 */
> +		  528, /* depth = 14 */
> +		  866, /* depth = 15 */
> +		 3856, /* depth = 16 */
> +		 3268, /* depth = 17 */
> +		 5662, /* depth = 18 */
> +		17301, /* depth = 19 */
> +		22226, /* depth = 20 */
> +		11147, /* depth = 21 */
> +		16746, /* depth = 22 */
> +		17120, /* depth = 23 */
> +		77578, /* depth = 24 */
> +		  401, /* depth = 25 */
> +		  656, /* depth = 26 */
> +		 1107, /* depth = 27 */
> +		 1121, /* depth = 28 */
> +		 2316, /* depth = 29 */
> +		  717, /* depth = 30 */
> +		   10, /* depth = 31 */
> +		   66  /* depth = 32 */
> +	},
> +	.b = { /* IP class B in which the most 2 significant bits are 10 */
> +		    0, /* depth =  1 */
> +		    0, /* depth =  2 */
> +		    0, /* depth =  3 */
> +		    0, /* depth =  4 */
> +		    1, /* depth =  5 */
> +		    1, /* depth =  6 */
> +		    1, /* depth =  7 */
> +		    3, /* depth =  8 */
> +		    3, /* depth =  9 */
> +		   30, /* depth = 10 */
> +		   25, /* depth = 11 */
> +		  168, /* depth = 12 */
> +		  305, /* depth = 13 */
> +		  569, /* depth = 14 */
> +		 1129, /* depth = 15 */
> +		50800, /* depth = 16 */
> +		 1645, /* depth = 17 */
> +		 1820, /* depth = 18 */
> +		 3506, /* depth = 19 */
> +		 3258, /* depth = 20 */
> +		 3424, /* depth = 21 */
> +		 4971, /* depth = 22 */
> +		 6885, /* depth = 23 */
> +		39771, /* depth = 24 */
> +		  424, /* depth = 25 */
> +		  170, /* depth = 26 */
> +		  433, /* depth = 27 */
> +		   92, /* depth = 28 */
> +		  366, /* depth = 29 */
> +		  377, /* depth = 30 */
> +		    2, /* depth = 31 */
> +		  200  /* depth = 32 */
> +	},
> +	.c = { /* IP class C in which the most 3 significant bits are 110 */
> +		     0, /* depth =  1 */
> +		     0, /* depth =  2 */
> +		     0, /* depth =  3 */
> +		     0, /* depth =  4 */
> +		     0, /* depth =  5 */
> +		     0, /* depth =  6 */
> +		     0, /* depth =  7 */
> +		    12, /* depth =  8 */
> +		     8, /* depth =  9 */
> +		     9, /* depth = 10 */
> +		    33, /* depth = 11 */
> +		    69, /* depth = 12 */
> +		   237, /* depth = 13 */
> +		  1007, /* depth = 14 */
> +		  1717, /* depth = 15 */
> +		 14663, /* depth = 16 */
> +		  8070, /* depth = 17 */
> +		 16185, /* depth = 18 */
> +		 48261, /* depth = 19 */
> +		 36870, /* depth = 20 */
> +		 33960, /* depth = 21 */
> +		 50638, /* depth = 22 */
> +		 61422, /* depth = 23 */
> +		466549, /* depth = 24 */
> +		  1829, /* depth = 25 */
> +		  4824, /* depth = 26 */
> +		  4927, /* depth = 27 */
> +		  5914, /* depth = 28 */
> +		 10254, /* depth = 29 */
> +		  4905, /* depth = 30 */
> +		     1, /* depth = 31 */
> +		   716  /* depth = 32 */
> +	}
> +};
> +
> +static void generate_random_rule_prefix(uint32_t ip_class, uint8_t depth)
> +{
> +/* IP address class A, the most significant bit is 0 */
> +#define IP_HEAD_MASK_A			0x00000000
> +#define IP_HEAD_BIT_NUM_A		1
> +
> +/* IP address class B, the most significant 2 bits are 10 */
> +#define IP_HEAD_MASK_B			0x80000000
> +#define IP_HEAD_BIT_NUM_B		2
> +
> +/* IP address class C, the most significant 3 bits are 110 */
> +#define IP_HEAD_MASK_C			0xC0000000
> +#define IP_HEAD_BIT_NUM_C		3
> +
> +	uint32_t class_depth;
> +	uint32_t range;
> +	uint32_t mask;
> +	uint32_t step;
> +	uint32_t start;
> +	uint32_t fixed_bit_num;
> +	uint32_t ip_head_mask;
> +	uint32_t rule_num;
> +	uint32_t k;
> +	struct route_rule *ptr_rule;
> +
> +	if (ip_class == IP_CLASS_A) {        /* IP Address class A */
> +		fixed_bit_num = IP_HEAD_BIT_NUM_A;
> +		ip_head_mask = IP_HEAD_MASK_A;
> +		rule_num = rule_count.a[depth - 1];
> +	} else if (ip_class == IP_CLASS_B) { /* IP Address class B */
> +		fixed_bit_num = IP_HEAD_BIT_NUM_B;
> +		ip_head_mask = IP_HEAD_MASK_B;
> +		rule_num = rule_count.b[depth - 1];
> +	} else {                             /* IP Address class C */
> +		fixed_bit_num = IP_HEAD_BIT_NUM_C;
> +		ip_head_mask = IP_HEAD_MASK_C;
> +		rule_num = rule_count.c[depth - 1];
> +	}
> +
> +	if (rule_num == 0)
> +		return;
> +
> +	/* the number of remaining bits, not including the most significant
> +	 * fixed bits for this IP address class
> +	 */
> +	class_depth = depth - fixed_bit_num;
> +
> +	/* range is the maximum number of rules for this depth and
> +	 * this IP address class
> +	 */
> +	range = 1 << class_depth;
> +
> +	/* mask covers only the generated bits, i.e. the depth minus the
> +	 * fixed bits for the IP address class
> +	 */
> +	mask = range - 1;
> +
> +	/* Widen coverage of IP address in generated rules */
> +	if (range <= rule_num)
> +		step = 1;
> +	else
> +		step = round((double)range / rule_num);
> +
> +	/* Only generate the remaining bits, excluding the most significant
> +	 * fixed bits for the IP address class
> +	 */
> +	start = lrand48() & mask;
> +	ptr_rule = &large_route_table[num_route_entries];
> +	for (k = 0; k < rule_num; k++) {
> +		ptr_rule->ip = (start << (RTE_LPM_MAX_DEPTH - depth))
> +			| ip_head_mask;
> +		ptr_rule->depth = depth;
> +		ptr_rule++;
> +		start = (start + step) & mask;
> +	}
> +	num_route_entries += rule_num;
> +}
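
For anyone reviewing the generator above: for a given class and depth it
spreads rule_num prefixes evenly over the non-fixed bits, wrapping around
via the mask. A minimal standalone sketch of the same arithmetic
(hypothetical values, no DPDK dependencies; link with -lm):

	#include <math.h>
	#include <stdint.h>
	#include <stdio.h>
	#include <stdlib.h>

	int main(void)
	{
		/* e.g. IP class B, depth 16: 2 fixed head bits (10),
		 * 14 generated bits, 50800 rules requested.
		 */
		uint32_t fixed_bit_num = 2, depth = 16, rule_num = 50800;
		uint32_t range = 1u << (depth - fixed_bit_num); /* 16384 */
		uint32_t mask = range - 1;
		uint32_t step = (range <= rule_num) ?
			1 : (uint32_t)round((double)range / rule_num);
		uint32_t start = (uint32_t)lrand48() & mask;

		/* successive prefixes wrap around the masked space */
		printf("range=%u step=%u first=%u second=%u\n",
		       range, step, start, (start + step) & mask);
		return 0;
	}
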
> +
> +static void insert_rule_in_random_pos(uint32_t ip, uint8_t depth)
> +{
> +	uint32_t pos;
> +	int try_count = 0;
> +	struct route_rule tmp;
> +
> +	do {
> +		pos = lrand48();
> +		try_count++;
> +	} while ((try_count < 10) && (pos > num_route_entries));
> +
> +	if ((pos > num_route_entries) || (pos >= MAX_RULE_NUM))
> +		pos = num_route_entries >> 1;
> +
> +	tmp = large_route_table[pos];
> +	large_route_table[pos].ip = ip;
> +	large_route_table[pos].depth = depth;
> +	if (num_route_entries < MAX_RULE_NUM)
> +		large_route_table[num_route_entries++] = tmp;
> +}
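
The displace-and-append move above keeps the table shuffled without a full
re-shuffle: the new rule steals a random slot and the displaced rule goes
to the end. The same idea on a plain int array, as a sketch (names are
mine, not from the patch):

	#include <stdint.h>
	#include <stdlib.h>

	#define CAP 8
	static int table[CAP];
	static uint32_t entries;

	static void insert_at_random_pos(int value)
	{
		uint32_t pos;
		int displaced;

		if (entries == 0 || entries >= CAP) {
			/* empty: just append; full: drop (sketch only) */
			if (entries < CAP)
				table[entries++] = value;
			return;
		}
		pos = (uint32_t)lrand48() % entries;
		displaced = table[pos];
		table[pos] = value;           /* new value takes the slot */
		table[entries++] = displaced; /* old one moves to the end */
	}
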
> +
> +void generate_large_route_rule_table(void)
> +{
> +	uint32_t ip_class;
> +	uint8_t  depth;
> +
> +	num_route_entries = 0;
> +	memset(large_route_table, 0, sizeof(large_route_table));
> +
> +	for (ip_class = IP_CLASS_A; ip_class <= IP_CLASS_C; ip_class++) {
> +		for (depth = 1; depth <= RTE_LPM_MAX_DEPTH; depth++)
> +			generate_random_rule_prefix(ip_class, depth);
> +	}
> +
> +	/* Add the following rules to keep the table the same as the previous
> +	 * large constant table: 4 rules with private local IP addresses and
> +	 * 1 all-zeros prefix with depth = 8.
> +	 */
> +	insert_rule_in_random_pos(IPv4(0, 0, 0, 0), 8);
> +	insert_rule_in_random_pos(IPv4(10, 2, 23, 147), 32);
> +	insert_rule_in_random_pos(IPv4(192, 168, 100, 10), 24);
> +	insert_rule_in_random_pos(IPv4(192, 168, 25, 100), 24);
> +	insert_rule_in_random_pos(IPv4(192, 168, 129, 124), 32);
> +}
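
Typical use from a test then looks like this, assuming only the
declarations from test_lpm_routes.h below:

	#include "test_lpm_routes.h"

	static void setup_routes(void)
	{
		/* (re)build the randomised table, then print the per-depth
		 * distribution for comparison with the old constant table
		 */
		generate_large_route_rule_table();
		print_route_distribution(large_route_table,
					 num_route_entries);
	}
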
> +
> +void
> +print_route_distribution(const struct route_rule *table, uint32_t n)
> +{
> +	unsigned int i, j;
> +
> +	printf("Route distribution per prefix width: \n");
> +	printf("DEPTH    QUANTITY (PERCENT)\n");
> +	printf("---------------------------\n");
> +
> +	/* Count depths. */
> +	for (i = 1; i <= 32; i++) {
> +		unsigned int depth_counter = 0;
> +		double percent_hits;
> +
> +		for (j = 0; j < n; j++)
> +			if (table[j].depth == (uint8_t) i)
> +				depth_counter++;
> +
> +		percent_hits = ((double)depth_counter)/((double)n) * 100;
> +		printf("%.2u%15u (%.2f)\n", i, depth_counter, percent_hits);
> +	}
> +	printf("\n");
> +}
> diff --git a/app/test/test_lpm_routes.h b/app/test/test_lpm_routes.h
> new file mode 100644
> index 000000000..c7874ea8f
> --- /dev/null
> +++ b/app/test/test_lpm_routes.h
> @@ -0,0 +1,25 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2010-2019 Intel Corporation
> + */
> +
> +#ifndef _TEST_LPM_ROUTES_H_
> +#define _TEST_LPM_ROUTES_H_
> +
> +#include <rte_ip.h>
> +
> +#define MAX_RULE_NUM (1200000)
> +
> +struct route_rule {
> +	uint32_t ip;
> +	uint8_t depth;
> +};
> +
> +extern struct route_rule large_route_table[MAX_RULE_NUM];
> +
> +extern uint32_t num_route_entries;
> +#define NUM_ROUTE_ENTRIES num_route_entries
> +
> +void generate_large_route_rule_table(void);
> +void print_route_distribution(const struct route_rule *table, uint32_t n);
> +
> +#endif
> diff --git a/app/test/v16.04/dcompat.h b/app/test/v16.04/dcompat.h
> new file mode 100644
> index 000000000..889c3b503
> --- /dev/null
> +++ b/app/test/v16.04/dcompat.h
> @@ -0,0 +1,23 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2010-2019 Intel Corporation
> + */
> +
> +#ifndef _DCOMPAT_H_
> +#define _DCOMPAT_H_
> +
> +#define ABI_VERSION DPDK_16.04
> +
> +#define MAP_ABI_SYMBOL(name) \
> +	MAP_ABI_SYMBOL_VERSION(name, ABI_VERSION)
> +
> +MAP_ABI_SYMBOL(rte_lpm_add);
> +MAP_ABI_SYMBOL(rte_lpm_create);
> +MAP_ABI_SYMBOL(rte_lpm_delete);
> +MAP_ABI_SYMBOL(rte_lpm_delete_all);
> +MAP_ABI_SYMBOL(rte_lpm_find_existing);
> +MAP_ABI_SYMBOL(rte_lpm_free);
> +MAP_ABI_SYMBOL(rte_lpm_is_rule_present);
> +
> +#undef MAP_ABI_SYMBOL
> +
> +#endif
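
MAP_ABI_SYMBOL_VERSION itself comes from patch 1/2, so it is not visible in
this hunk; presumably it boils down to a GNU .symver directive binding the
unqualified name to the versioned symbol, along these lines (a sketch of
the idea, not the actual definition):

	/* bind name to name@abi_version at static link time */
	#define MAP_ABI_SYMBOL_VERSION(name, abi_version) \
		__asm__(".symver " RTE_STR(name) "," \
			RTE_STR(name) "@" RTE_STR(abi_version))

With something like that in place, a plain call such as rte_lpm_create()
in the v16.04 tests resolves to rte_lpm_create@DPDK_16.04 rather than the
default symbol.
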
> diff --git a/app/test/v16.04/rte_lpm.h b/app/test/v16.04/rte_lpm.h
> new file mode 100644
> index 000000000..c3348fbc1
> --- /dev/null
> +++ b/app/test/v16.04/rte_lpm.h
> @@ -0,0 +1,463 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2010-2014 Intel Corporation
> + */
> +
> +#ifndef _RTE_LPM_H_
> +#define _RTE_LPM_H_
> +
> +/**
> + * @file
> + * RTE Longest Prefix Match (LPM)
> + */
> +
> +#include <errno.h>
> +#include <sys/queue.h>
> +#include <stdint.h>
> +#include <stdlib.h>
> +#include <rte_branch_prediction.h>
> +#include <rte_byteorder.h>
> +#include <rte_memory.h>
> +#include <rte_common.h>
> +#include <rte_vect.h>
> +#include <rte_compat.h>
> +
> +#ifdef __cplusplus
> +extern "C" {
> +#endif
> +
> +/** Max number of characters in LPM name. */
> +#define RTE_LPM_NAMESIZE                32
> +
> +/** Maximum depth value possible for IPv4 LPM. */
> +#define RTE_LPM_MAX_DEPTH               32
> +
> +/** @internal Total number of tbl24 entries. */
> +#define RTE_LPM_TBL24_NUM_ENTRIES       (1 << 24)
> +
> +/** @internal Number of entries in a tbl8 group. */
> +#define RTE_LPM_TBL8_GROUP_NUM_ENTRIES  256
> +
> +/** @internal Max number of tbl8 groups in the tbl8. */
> +#define RTE_LPM_MAX_TBL8_NUM_GROUPS         (1 << 24)
> +
> +/** @internal Total number of tbl8 groups in the tbl8. */
> +#define RTE_LPM_TBL8_NUM_GROUPS         256
> +
> +/** @internal Total number of tbl8 entries. */
> +#define RTE_LPM_TBL8_NUM_ENTRIES        (RTE_LPM_TBL8_NUM_GROUPS * \
> +					RTE_LPM_TBL8_GROUP_NUM_ENTRIES)
> +
> +/** @internal Macro to enable/disable run-time checks. */
> +#if defined(RTE_LIBRTE_LPM_DEBUG)
> +#define RTE_LPM_RETURN_IF_TRUE(cond, retval) do { \
> +	if (cond) \
> +		return (retval); \
> +} while (0)
> +#else
> +#define RTE_LPM_RETURN_IF_TRUE(cond, retval)
> +#endif
> +
> +/** @internal bitmask with valid and valid_group fields set */
> +#define RTE_LPM_VALID_EXT_ENTRY_BITMASK 0x03000000
> +
> +/** Bitmask used to indicate successful lookup */
> +#define RTE_LPM_LOOKUP_SUCCESS          0x01000000
> +
> +#if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
> +/** @internal Tbl24 entry structure. */
> +struct rte_lpm_tbl_entry_v20 {
> +	/**
> +	 * Stores Next hop (tbl8 or tbl24 when valid_group is not set) or
> +	 * a group index pointing to a tbl8 structure (tbl24 only, when
> +	 * valid_group is set)
> +	 */
> +	union {
> +		uint8_t next_hop;
> +		uint8_t group_idx;
> +	};
> +	/* Using single uint8_t to store 3 values. */
> +	uint8_t valid     :1;   /**< Validation flag. */
> +	/**
> +	 * For tbl24:
> +	 *  - valid_group == 0: entry stores a next hop
> +	 *  - valid_group == 1: entry stores a group_index pointing to a tbl8
> +	 * For tbl8:
> +	 *  - valid_group indicates whether the current tbl8 is in use or not
> +	 */
> +	uint8_t valid_group :1;
> +	uint8_t depth       :6; /**< Rule depth. */
> +};
> +
> +struct rte_lpm_tbl_entry {
> +	/**
> +	 * Stores Next hop (tbl8 or tbl24 when valid_group is not set) or
> +	 * a group index pointing to a tbl8 structure (tbl24 only, when
> +	 * valid_group is set)
> +	 */
> +	uint32_t next_hop    :24;
> +	/* Using a single uint32_t to store 3 values. */
> +	uint32_t valid       :1;   /**< Validation flag. */
> +	/**
> +	 * For tbl24:
> +	 *  - valid_group == 0: entry stores a next hop
> +	 *  - valid_group == 1: entry stores a group_index pointing to a tbl8
> +	 * For tbl8:
> +	 *  - valid_group indicates whether the current tbl8 is in use or not
> +	 */
> +	uint32_t valid_group :1;
> +	uint32_t depth       :6; /**< Rule depth. */
> +};
> +
> +#else
> +struct rte_lpm_tbl_entry_v20 {
> +	uint8_t depth       :6;
> +	uint8_t valid_group :1;
> +	uint8_t valid       :1;
> +	union {
> +		uint8_t group_idx;
> +		uint8_t next_hop;
> +	};
> +};
> +
> +struct rte_lpm_tbl_entry {
> +	uint32_t depth       :6;
> +	uint32_t valid_group :1;
> +	uint32_t valid       :1;
> +	uint32_t next_hop    :24;
> +
> +};
> +
> +#endif
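
The current entry must stay exactly 4 bytes, since the lookup paths below
cast table slots straight to uint32_t, and the v20 entry must stay 2 bytes
for the old ABI. A compile-time check along these lines would catch
packing regressions (my addition, not in the patch; RTE_BUILD_BUG_ON would
do equally well):

	_Static_assert(sizeof(struct rte_lpm_tbl_entry_v20) == 2,
		       "v20 tbl entry must be 2 bytes");
	_Static_assert(sizeof(struct rte_lpm_tbl_entry) == 4,
		       "tbl entry must be 4 bytes");
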
> +
> +/** LPM configuration structure. */
> +struct rte_lpm_config {
> +	uint32_t max_rules;      /**< Max number of rules. */
> +	uint32_t number_tbl8s;   /**< Number of tbl8s to allocate. */
> +	int flags;               /**< This field is currently unused. */
> +};
> +
> +/** @internal Rule structure. */
> +struct rte_lpm_rule_v20 {
> +	uint32_t ip; /**< Rule IP address. */
> +	uint8_t  next_hop; /**< Rule next hop. */
> +};
> +
> +struct rte_lpm_rule {
> +	uint32_t ip; /**< Rule IP address. */
> +	uint32_t next_hop; /**< Rule next hop. */
> +};
> +
> +/** @internal Contains metadata about the rules table. */
> +struct rte_lpm_rule_info {
> +	uint32_t used_rules; /**< Used rules so far. */
> +	uint32_t first_rule; /**< Indexes the first rule of a given depth. */
> +};
> +
> +/** @internal LPM structure. */
> +struct rte_lpm_v20 {
> +	/* LPM metadata. */
> +	char name[RTE_LPM_NAMESIZE];        /**< Name of the lpm. */
> +	uint32_t max_rules; /**< Max. balanced rules per lpm. */
> +	struct rte_lpm_rule_info rule_info[RTE_LPM_MAX_DEPTH]; /**< Rule info table. */
> +
> +	/* LPM Tables. */
> +	struct rte_lpm_tbl_entry_v20 tbl24[RTE_LPM_TBL24_NUM_ENTRIES]
> +			__rte_cache_aligned; /**< LPM tbl24 table. */
> +	struct rte_lpm_tbl_entry_v20 tbl8[RTE_LPM_TBL8_NUM_ENTRIES]
> +			__rte_cache_aligned; /**< LPM tbl8 table. */
> +	struct rte_lpm_rule_v20 rules_tbl[0] \
> +			__rte_cache_aligned; /**< LPM rules. */
> +};
> +
> +struct rte_lpm {
> +	/* LPM metadata. */
> +	char name[RTE_LPM_NAMESIZE];        /**< Name of the lpm. */
> +	uint32_t max_rules; /**< Max. balanced rules per lpm. */
> +	uint32_t number_tbl8s; /**< Number of tbl8s. */
> +	struct rte_lpm_rule_info rule_info[RTE_LPM_MAX_DEPTH]; /**< Rule info table. */
> +
> +	/* LPM Tables. */
> +	struct rte_lpm_tbl_entry tbl24[RTE_LPM_TBL24_NUM_ENTRIES]
> +			__rte_cache_aligned; /**< LPM tbl24 table. */
> +	struct rte_lpm_tbl_entry *tbl8; /**< LPM tbl8 table. */
> +	struct rte_lpm_rule *rules_tbl; /**< LPM rules. */
> +};
> +
> +/**
> + * Create an LPM object.
> + *
> + * @param name
> + *   LPM object name
> + * @param socket_id
> + *   NUMA socket ID for LPM table memory allocation
> + * @param config
> + *   Structure containing the configuration
> + * @return
> + *   Handle to LPM object on success, NULL otherwise with rte_errno set
> + *   to an appropriate value. Possible rte_errno values include:
> + *    - E_RTE_NO_CONFIG - function could not get pointer to rte_config structure
> + *    - E_RTE_SECONDARY - function was called from a secondary process instance
> + *    - EINVAL - invalid parameter passed to function
> + *    - ENOSPC - the maximum number of memzones has already been allocated
> + *    - EEXIST - a memzone with the same name already exists
> + *    - ENOMEM - no appropriate memory area found in which to create memzone
> + */
> +struct rte_lpm *
> +rte_lpm_create(const char *name, int socket_id,
> +		const struct rte_lpm_config *config);
> +struct rte_lpm_v20 *
> +rte_lpm_create_v20(const char *name, int socket_id, int max_rules, int flags);
> +struct rte_lpm *
> +rte_lpm_create_v1604(const char *name, int socket_id,
> +		const struct rte_lpm_config *config);
> +
> +/**
> + * Find an existing LPM object and return a pointer to it.
> + *
> + * @param name
> + *   Name of the lpm object as passed to rte_lpm_create()
> + * @return
> + *   Pointer to lpm object or NULL if object not found with rte_errno
> + *   set appropriately. Possible rte_errno values include:
> + *    - ENOENT - required entry not available to return.
> + */
> +struct rte_lpm *
> +rte_lpm_find_existing(const char *name);
> +struct rte_lpm_v20 *
> +rte_lpm_find_existing_v20(const char *name);
> +struct rte_lpm *
> +rte_lpm_find_existing_v1604(const char *name);
> +
> +/**
> + * Free an LPM object.
> + *
> + * @param lpm
> + *   LPM object handle
> + * @return
> + *   None
> + */
> +void
> +rte_lpm_free(struct rte_lpm *lpm);
> +void
> +rte_lpm_free_v20(struct rte_lpm_v20 *lpm);
> +void
> +rte_lpm_free_v1604(struct rte_lpm *lpm);
> +
> +/**
> + * Add a rule to the LPM table.
> + *
> + * @param lpm
> + *   LPM object handle
> + * @param ip
> + *   IP of the rule to be added to the LPM table
> + * @param depth
> + *   Depth of the rule to be added to the LPM table
> + * @param next_hop
> + *   Next hop of the rule to be added to the LPM table
> + * @return
> + *   0 on success, negative value otherwise
> + */
> +int
> +rte_lpm_add(struct rte_lpm *lpm, uint32_t ip, uint8_t depth, uint32_t next_hop);
> +int
> +rte_lpm_add_v20(struct rte_lpm_v20 *lpm, uint32_t ip, uint8_t depth,
> +		uint8_t next_hop);
> +int
> +rte_lpm_add_v1604(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
> +		uint32_t next_hop);
> +
> +/**
> + * Check if a rule is present in the LPM table,
> + * and provide its next hop if it is.
> + *
> + * @param lpm
> + *   LPM object handle
> + * @param ip
> + *   IP of the rule to be searched
> + * @param depth
> + *   Depth of the rule to be searched
> + * @param next_hop
> + *   Next hop of the rule (valid only if it is found)
> + * @return
> + *   1 if the rule exists, 0 if it does not, a negative value on failure
> + */
> +int
> +rte_lpm_is_rule_present(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
> +		uint32_t *next_hop);
> +int
> +rte_lpm_is_rule_present_v20(struct rte_lpm_v20 *lpm, uint32_t ip, uint8_t depth,
> +		uint8_t *next_hop);
> +int
> +rte_lpm_is_rule_present_v1604(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
> +		uint32_t *next_hop);
> +
> +/**
> + * Delete a rule from the LPM table.
> + *
> + * @param lpm
> + *   LPM object handle
> + * @param ip
> + *   IP of the rule to be deleted from the LPM table
> + * @param depth
> + *   Depth of the rule to be deleted from the LPM table
> + * @return
> + *   0 on success, negative value otherwise
> + */
> +int
> +rte_lpm_delete(struct rte_lpm *lpm, uint32_t ip, uint8_t depth);
> +int
> +rte_lpm_delete_v20(struct rte_lpm_v20 *lpm, uint32_t ip, uint8_t depth);
> +int
> +rte_lpm_delete_v1604(struct rte_lpm *lpm, uint32_t ip, uint8_t depth);
> +
> +/**
> + * Delete all rules from the LPM table.
> + *
> + * @param lpm
> + *   LPM object handle
> + */
> +void
> +rte_lpm_delete_all(struct rte_lpm *lpm);
> +void
> +rte_lpm_delete_all_v20(struct rte_lpm_v20 *lpm);
> +void
> +rte_lpm_delete_all_v1604(struct rte_lpm *lpm);
> +
> +/**
> + * Lookup an IP into the LPM table.
> + *
> + * @param lpm
> + *   LPM object handle
> + * @param ip
> + *   IP to be looked up in the LPM table
> + * @param next_hop
> + *   Next hop of the most specific rule found for IP (valid on lookup hit only)
> + * @return
> + *   -EINVAL for incorrect arguments, -ENOENT on lookup miss, 0 on lookup hit
> + */
> +static inline int
> +rte_lpm_lookup(struct rte_lpm *lpm, uint32_t ip, uint32_t *next_hop)
> +{
> +	unsigned tbl24_index = (ip >> 8);
> +	uint32_t tbl_entry;
> +	const uint32_t *ptbl;
> +
> +	/* DEBUG: Check user input arguments. */
> +	RTE_LPM_RETURN_IF_TRUE(((lpm == NULL) || (next_hop == NULL)), -EINVAL);
> +
> +	/* Copy tbl24 entry */
> +	ptbl = (const uint32_t *)(&lpm->tbl24[tbl24_index]);
> +	tbl_entry = *ptbl;
> +
> +	/* Copy tbl8 entry (only if needed) */
> +	if (unlikely((tbl_entry & RTE_LPM_VALID_EXT_ENTRY_BITMASK) ==
> +			RTE_LPM_VALID_EXT_ENTRY_BITMASK)) {
> +
> +		unsigned tbl8_index = (uint8_t)ip +
> +				(((uint32_t)tbl_entry & 0x00FFFFFF) *
> +						RTE_LPM_TBL8_GROUP_NUM_ENTRIES);
> +
> +		ptbl = (const uint32_t *)&lpm->tbl8[tbl8_index];
> +		tbl_entry = *ptbl;
> +	}
> +
> +	*next_hop = ((uint32_t)tbl_entry & 0x00FFFFFF);
> +	return (tbl_entry & RTE_LPM_LOOKUP_SUCCESS) ? 0 : -ENOENT;
> +}
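
For reference, a typical caller of the scalar lookup (hypothetical helper,
not part of the patch):

	static uint32_t
	lookup_or_default(struct rte_lpm *lpm, uint32_t ip, uint32_t defv)
	{
		uint32_t next_hop;

		/* 0 on hit, -ENOENT on miss, -EINVAL on bad arguments */
		if (rte_lpm_lookup(lpm, ip, &next_hop) == 0)
			return next_hop;
		return defv;
	}
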
> +
> +/**
> + * Lookup multiple IP addresses in an LPM table. This may be implemented as a
> + * macro, so the address of the function should not be used.
> + *
> + * @param lpm
> + *   LPM object handle
> + * @param ips
> + *   Array of IPs to be looked up in the LPM table
> + * @param next_hops
> + *   Next hop of the most specific rule found for IP (valid on lookup hit only).
> + *   This is an array of four byte values. The most significant byte in each
> + *   value says whether the lookup was successful (bitmask
> + *   RTE_LPM_LOOKUP_SUCCESS is set). The 24 least significant bits are the
> + *   actual next hop.
> + * @param n
> + *   Number of elements in ips (and next_hops) array to lookup. This should be a
> + *   compile time constant, and divisible by 8 for best performance.
> + *  @return
> + *   -EINVAL for incorrect arguments, otherwise 0
> + */
> +#define rte_lpm_lookup_bulk(lpm, ips, next_hops, n) \
> +		rte_lpm_lookup_bulk_func(lpm, ips, next_hops, n)
> +
> +static inline int
> +rte_lpm_lookup_bulk_func(const struct rte_lpm *lpm, const uint32_t *ips,
> +		uint32_t *next_hops, const unsigned n)
> +{
> +	unsigned i;
> +	unsigned tbl24_indexes[n];
> +	const uint32_t *ptbl;
> +
> +	/* DEBUG: Check user input arguments. */
> +	RTE_LPM_RETURN_IF_TRUE(((lpm == NULL) || (ips == NULL) ||
> +			(next_hops == NULL)), -EINVAL);
> +
> +	for (i = 0; i < n; i++) {
> +		tbl24_indexes[i] = ips[i] >> 8;
> +	}
> +
> +	for (i = 0; i < n; i++) {
> +		/* Simply copy tbl24 entry to output */
> +		ptbl = (const uint32_t *)&lpm->tbl24[tbl24_indexes[i]];
> +		next_hops[i] = *ptbl;
> +
> +		/* Overwrite output with tbl8 entry if needed */
> +		if (unlikely((next_hops[i] & RTE_LPM_VALID_EXT_ENTRY_BITMASK) ==
> +				RTE_LPM_VALID_EXT_ENTRY_BITMASK)) {
> +
> +			unsigned tbl8_index = (uint8_t)ips[i] +
> +					(((uint32_t)next_hops[i] & 0x00FFFFFF) *
> +					 RTE_LPM_TBL8_GROUP_NUM_ENTRIES);
> +
> +			ptbl = (const uint32_t *)&lpm->tbl8[tbl8_index];
> +			next_hops[i] = *ptbl;
> +		}
> +	}
> +	return 0;
> +}
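
And the matching bulk pattern: each next_hops[] slot carries the success
flag in its top byte, so hits have to be unpacked per element. A sketch
(the helper name is mine):

	static void
	bulk_lookup_example(const struct rte_lpm *lpm,
			    const uint32_t ips[8], uint32_t out[8])
	{
		unsigned int i;

		rte_lpm_lookup_bulk(lpm, ips, out, 8);
		for (i = 0; i < 8; i++) {
			if (out[i] & RTE_LPM_LOOKUP_SUCCESS)
				out[i] &= 0x00FFFFFF; /* strip flag bits */
			else
				out[i] = UINT32_MAX;  /* mark the miss */
		}
	}
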
> +
> +/* Mask four results. */
> +#define	 RTE_LPM_MASKX4_RES	UINT64_C(0x00ffffff00ffffff)
> +
> +/**
> + * Lookup four IP addresses in an LPM table.
> + *
> + * @param lpm
> + *   LPM object handle
> + * @param ip
> + *   Four IPs to be looked up in the LPM table
> + * @param hop
> + *   Next hop of the most specific rule found for IP (valid on lookup hit only).
> + *   This is a 4 element array of four byte values.
> + *   If the lookup was successful for the given IP, then the 24 least
> + *   significant bits of the corresponding element are the actual next hop
> + *   and the most significant byte is zero.
> + *   If the lookup for the given IP failed, then the corresponding element
> + *   contains the default value, see the description of the next parameter.
> + * @param defv
> + *   Default value to populate into corresponding element of hop[] array,
> + *   if lookup would fail.
> + */
> +static inline void
> +rte_lpm_lookupx4(const struct rte_lpm *lpm, xmm_t ip, uint32_t hop[4],
> +	uint32_t defv);
> +
> +#if defined(RTE_ARCH_ARM) || defined(RTE_ARCH_ARM64)
> +#include "rte_lpm_neon.h"
> +#else
> +#include "rte_lpm_sse.h"
> +#endif
> +
> +#ifdef __cplusplus
> +}
> +#endif
> +
> +#endif /* _RTE_LPM_H_ */
> diff --git a/app/test/v16.04/rte_lpm_neon.h b/app/test/v16.04/rte_lpm_neon.h
> new file mode 100644
> index 000000000..936ec7af3
> --- /dev/null
> +++ b/app/test/v16.04/rte_lpm_neon.h
> @@ -0,0 +1,119 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2010-2014 Intel Corporation
> + */
> +
> +#ifndef _RTE_LPM_NEON_H_
> +#define _RTE_LPM_NEON_H_
> +
> +#include <rte_branch_prediction.h>
> +#include <rte_byteorder.h>
> +#include <rte_common.h>
> +#include <rte_vect.h>
> +
> +#ifdef __cplusplus
> +extern "C" {
> +#endif
> +
> +static inline void
> +rte_lpm_lookupx4(const struct rte_lpm *lpm, xmm_t ip, uint32_t hop[4],
> +	uint32_t defv)
> +{
> +	uint32x4_t i24;
> +	rte_xmm_t i8;
> +	uint32_t tbl[4];
> +	uint64_t idx, pt, pt2;
> +	const uint32_t *ptbl;
> +
> +	const uint32_t mask = UINT8_MAX;
> +	const int32x4_t mask8 = vdupq_n_s32(mask);
> +
> +	/*
> +	 * RTE_LPM_VALID_EXT_ENTRY_BITMASK for 2 LPM entries
> +	 * as one 64-bit value (0x0300000003000000).
> +	 */
> +	const uint64_t mask_xv =
> +		((uint64_t)RTE_LPM_VALID_EXT_ENTRY_BITMASK |
> +		(uint64_t)RTE_LPM_VALID_EXT_ENTRY_BITMASK << 32);
> +
> +	/*
> +	 * RTE_LPM_LOOKUP_SUCCESS for 2 LPM entries
> +	 * as one 64-bit value (0x0100000001000000).
> +	 */
> +	const uint64_t mask_v =
> +		((uint64_t)RTE_LPM_LOOKUP_SUCCESS |
> +		(uint64_t)RTE_LPM_LOOKUP_SUCCESS << 32);
> +
> +	/* get 4 indexes for tbl24[]. */
> +	i24 = vshrq_n_u32((uint32x4_t)ip, CHAR_BIT);
> +
> +	/* extract values from tbl24[] */
> +	idx = vgetq_lane_u64((uint64x2_t)i24, 0);
> +
> +	ptbl = (const uint32_t *)&lpm->tbl24[(uint32_t)idx];
> +	tbl[0] = *ptbl;
> +	ptbl = (const uint32_t *)&lpm->tbl24[idx >> 32];
> +	tbl[1] = *ptbl;
> +
> +	idx = vgetq_lane_u64((uint64x2_t)i24, 1);
> +
> +	ptbl = (const uint32_t *)&lpm->tbl24[(uint32_t)idx];
> +	tbl[2] = *ptbl;
> +	ptbl = (const uint32_t *)&lpm->tbl24[idx >> 32];
> +	tbl[3] = *ptbl;
> +
> +	/* get 4 indexes for tbl8[]. */
> +	i8.x = vandq_s32(ip, mask8);
> +
> +	pt = (uint64_t)tbl[0] |
> +		(uint64_t)tbl[1] << 32;
> +	pt2 = (uint64_t)tbl[2] |
> +		(uint64_t)tbl[3] << 32;
> +
> +	/* search successfully finished for all 4 IP addresses. */
> +	if (likely((pt & mask_xv) == mask_v) &&
> +			likely((pt2 & mask_xv) == mask_v)) {
> +		*(uint64_t *)hop = pt & RTE_LPM_MASKX4_RES;
> +		*(uint64_t *)(hop + 2) = pt2 & RTE_LPM_MASKX4_RES;
> +		return;
> +	}
> +
> +	if (unlikely((pt & RTE_LPM_VALID_EXT_ENTRY_BITMASK) ==
> +			RTE_LPM_VALID_EXT_ENTRY_BITMASK)) {
> +		i8.u32[0] = i8.u32[0] +
> +			(uint8_t)tbl[0] * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
> +		ptbl = (const uint32_t *)&lpm->tbl8[i8.u32[0]];
> +		tbl[0] = *ptbl;
> +	}
> +	if (unlikely((pt >> 32 & RTE_LPM_VALID_EXT_ENTRY_BITMASK) ==
> +			RTE_LPM_VALID_EXT_ENTRY_BITMASK)) {
> +		i8.u32[1] = i8.u32[1] +
> +			(uint8_t)tbl[1] * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
> +		ptbl = (const uint32_t *)&lpm->tbl8[i8.u32[1]];
> +		tbl[1] = *ptbl;
> +	}
> +	if (unlikely((pt2 & RTE_LPM_VALID_EXT_ENTRY_BITMASK) ==
> +			RTE_LPM_VALID_EXT_ENTRY_BITMASK)) {
> +		i8.u32[2] = i8.u32[2] +
> +			(uint8_t)tbl[2] * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
> +		ptbl = (const uint32_t *)&lpm->tbl8[i8.u32[2]];
> +		tbl[2] = *ptbl;
> +	}
> +	if (unlikely((pt2 >> 32 & RTE_LPM_VALID_EXT_ENTRY_BITMASK) ==
> +			RTE_LPM_VALID_EXT_ENTRY_BITMASK)) {
> +		i8.u32[3] = i8.u32[3] +
> +			(uint8_t)tbl[3] * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
> +		ptbl = (const uint32_t *)&lpm->tbl8[i8.u32[3]];
> +		tbl[3] = *ptbl;
> +	}
> +
> +	hop[0] = (tbl[0] & RTE_LPM_LOOKUP_SUCCESS) ? tbl[0] & 0x00FFFFFF : defv;
> +	hop[1] = (tbl[1] & RTE_LPM_LOOKUP_SUCCESS) ? tbl[1] & 0x00FFFFFF : defv;
> +	hop[2] = (tbl[2] & RTE_LPM_LOOKUP_SUCCESS) ? tbl[2] & 0x00FFFFFF : defv;
> +	hop[3] = (tbl[3] & RTE_LPM_LOOKUP_SUCCESS) ? tbl[3] & 0x00FFFFFF : defv;
> +}
> +
> +#ifdef __cplusplus
> +}
> +#endif
> +
> +#endif /* _RTE_LPM_NEON_H_ */
> diff --git a/app/test/v16.04/rte_lpm_sse.h b/app/test/v16.04/rte_lpm_sse.h
> new file mode 100644
> index 000000000..edfa36be1
> --- /dev/null
> +++ b/app/test/v16.04/rte_lpm_sse.h
> @@ -0,0 +1,120 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2010-2014 Intel Corporation
> + */
> +
> +#ifndef _RTE_LPM_SSE_H_
> +#define _RTE_LPM_SSE_H_
> +
> +#include <rte_branch_prediction.h>
> +#include <rte_byteorder.h>
> +#include <rte_common.h>
> +#include <rte_vect.h>
> +
> +#ifdef __cplusplus
> +extern "C" {
> +#endif
> +
> +static inline void
> +rte_lpm_lookupx4(const struct rte_lpm *lpm, xmm_t ip, uint32_t hop[4],
> +	uint32_t defv)
> +{
> +	__m128i i24;
> +	rte_xmm_t i8;
> +	uint32_t tbl[4];
> +	uint64_t idx, pt, pt2;
> +	const uint32_t *ptbl;
> +
> +	const __m128i mask8 =
> +		_mm_set_epi32(UINT8_MAX, UINT8_MAX, UINT8_MAX, UINT8_MAX);
> +
> +	/*
> +	 * RTE_LPM_VALID_EXT_ENTRY_BITMASK for 2 LPM entries
> +	 * as one 64-bit value (0x0300000003000000).
> +	 */
> +	const uint64_t mask_xv =
> +		((uint64_t)RTE_LPM_VALID_EXT_ENTRY_BITMASK |
> +		(uint64_t)RTE_LPM_VALID_EXT_ENTRY_BITMASK << 32);
> +
> +	/*
> +	 * RTE_LPM_LOOKUP_SUCCESS for 2 LPM entries
> +	 * as one 64-bit value (0x0100000001000000).
> +	 */
> +	const uint64_t mask_v =
> +		((uint64_t)RTE_LPM_LOOKUP_SUCCESS |
> +		(uint64_t)RTE_LPM_LOOKUP_SUCCESS << 32);
> +
> +	/* get 4 indexes for tbl24[]. */
> +	i24 = _mm_srli_epi32(ip, CHAR_BIT);
> +
> +	/* extract values from tbl24[] */
> +	idx = _mm_cvtsi128_si64(i24);
> +	i24 = _mm_srli_si128(i24, sizeof(uint64_t));
> +
> +	ptbl = (const uint32_t *)&lpm->tbl24[(uint32_t)idx];
> +	tbl[0] = *ptbl;
> +	ptbl = (const uint32_t *)&lpm->tbl24[idx >> 32];
> +	tbl[1] = *ptbl;
> +
> +	idx = _mm_cvtsi128_si64(i24);
> +
> +	ptbl = (const uint32_t *)&lpm->tbl24[(uint32_t)idx];
> +	tbl[2] = *ptbl;
> +	ptbl = (const uint32_t *)&lpm->tbl24[idx >> 32];
> +	tbl[3] = *ptbl;
> +
> +	/* get 4 indexes for tbl8[]. */
> +	i8.x = _mm_and_si128(ip, mask8);
> +
> +	pt = (uint64_t)tbl[0] |
> +		(uint64_t)tbl[1] << 32;
> +	pt2 = (uint64_t)tbl[2] |
> +		(uint64_t)tbl[3] << 32;
> +
> +	/* search successfully finished for all 4 IP addresses. */
> +	if (likely((pt & mask_xv) == mask_v) &&
> +			likely((pt2 & mask_xv) == mask_v)) {
> +		*(uint64_t *)hop = pt & RTE_LPM_MASKX4_RES;
> +		*(uint64_t *)(hop + 2) = pt2 & RTE_LPM_MASKX4_RES;
> +		return;
> +	}
> +
> +	if (unlikely((pt & RTE_LPM_VALID_EXT_ENTRY_BITMASK) ==
> +			RTE_LPM_VALID_EXT_ENTRY_BITMASK)) {
> +		i8.u32[0] = i8.u32[0] +
> +			(uint8_t)tbl[0] * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
> +		ptbl = (const uint32_t *)&lpm->tbl8[i8.u32[0]];
> +		tbl[0] = *ptbl;
> +	}
> +	if (unlikely((pt >> 32 & RTE_LPM_VALID_EXT_ENTRY_BITMASK) ==
> +			RTE_LPM_VALID_EXT_ENTRY_BITMASK)) {
> +		i8.u32[1] = i8.u32[1] +
> +			(uint8_t)tbl[1] * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
> +		ptbl = (const uint32_t *)&lpm->tbl8[i8.u32[1]];
> +		tbl[1] = *ptbl;
> +	}
> +	if (unlikely((pt2 & RTE_LPM_VALID_EXT_ENTRY_BITMASK) ==
> +			RTE_LPM_VALID_EXT_ENTRY_BITMASK)) {
> +		i8.u32[2] = i8.u32[2] +
> +			(uint8_t)tbl[2] * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
> +		ptbl = (const uint32_t *)&lpm->tbl8[i8.u32[2]];
> +		tbl[2] = *ptbl;
> +	}
> +	if (unlikely((pt2 >> 32 & RTE_LPM_VALID_EXT_ENTRY_BITMASK) ==
> +			RTE_LPM_VALID_EXT_ENTRY_BITMASK)) {
> +		i8.u32[3] = i8.u32[3] +
> +			(uint8_t)tbl[3] * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
> +		ptbl = (const uint32_t *)&lpm->tbl8[i8.u32[3]];
> +		tbl[3] = *ptbl;
> +	}
> +
> +	hop[0] = (tbl[0] & RTE_LPM_LOOKUP_SUCCESS) ? tbl[0] & 0x00FFFFFF : defv;
> +	hop[1] = (tbl[1] & RTE_LPM_LOOKUP_SUCCESS) ? tbl[1] & 0x00FFFFFF : defv;
> +	hop[2] = (tbl[2] & RTE_LPM_LOOKUP_SUCCESS) ? tbl[2] & 0x00FFFFFF : defv;
> +	hop[3] = (tbl[3] & RTE_LPM_LOOKUP_SUCCESS) ? tbl[3] & 0x00FFFFFF : defv;
> +}
> +
> +#ifdef __cplusplus
> +}
> +#endif
> +
> +#endif /* _RTE_LPM_SSE_H_ */
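
The x4 fast path in both the NEON and SSE variants checks two tbl24
entries at a time by packing them into one 64-bit word: mask_xv keeps the
valid/valid_group bits of both entries, and mask_v is what those bits look
like when both entries are plain (non-extended) hits. A scalar rendering
of the same test, for clarity (sketch only):

	#include <stdint.h>

	/* 1 when both packed entries are valid, non-extended next hops */
	static inline int
	both_entries_hit(uint32_t e0, uint32_t e1)
	{
		const uint64_t mask_xv = 0x0300000003000000ULL;
		const uint64_t mask_v  = 0x0100000001000000ULL;
		uint64_t pt = (uint64_t)e0 | (uint64_t)e1 << 32;

		return (pt & mask_xv) == mask_v;
	}
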
> diff --git a/app/test/v16.04/test_lpm.c b/app/test/v16.04/test_lpm.c
> new file mode 100644
> index 000000000..2aab8d0cc
> --- /dev/null
> +++ b/app/test/v16.04/test_lpm.c
> @@ -0,0 +1,1405 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2010-2019 Intel Corporation
> + *
> + * LPM autotests from DPDK v16.04 for ABI compatibility testing.
> + *
> + */
> +
> +#include <stdio.h>
> +#include <stdint.h>
> +#include <stdlib.h>
> +#include <errno.h>
> +#include <sys/queue.h>
> +
> +#include <rte_common.h>
> +#include <rte_cycles.h>
> +#include <rte_memory.h>
> +#include <rte_random.h>
> +#include <rte_branch_prediction.h>
> +#include <rte_ip.h>
> +#include <time.h>
> +
> +#include "../test_lpm_routes.h"
> +#include "../test.h"
> +#include "../test_xmmt_ops.h"
> +
> +/* backported header from DPDK v16.04 */
> +#include "rte_lpm.h"
> +/* remapping of DPDK v16.04 symbols */
> +#include "dcompat.h"
> +
> +#define TEST_LPM_ASSERT(cond) do {                                            \
> +	if (!(cond)) {                                                        \
> +		printf("Error at line %d: \n", __LINE__);                     \
> +		return -1;                                                    \
> +	}                                                                     \
> +} while (0)
> +
> +typedef int32_t (*rte_lpm_test)(void);
> +
> +static int32_t test0(void);
> +static int32_t test1(void);
> +static int32_t test2(void);
> +static int32_t test3(void);
> +static int32_t test4(void);
> +static int32_t test5(void);
> +static int32_t test6(void);
> +static int32_t test7(void);
> +static int32_t test8(void);
> +static int32_t test9(void);
> +static int32_t test10(void);
> +static int32_t test11(void);
> +static int32_t test12(void);
> +static int32_t test13(void);
> +static int32_t test14(void);
> +static int32_t test15(void);
> +static int32_t test16(void);
> +static int32_t test17(void);
> +static int32_t perf_test(void);
> +
> +static rte_lpm_test tests[] = {
> +/* Test Cases */
> +	test0,
> +	test1,
> +	test2,
> +	test3,
> +	test4,
> +	test5,
> +	test6,
> +	test7,
> +	test8,
> +	test9,
> +	test10,
> +	test11,
> +	test12,
> +	test13,
> +	test14,
> +	test15,
> +	test16,
> +	test17,
> +	perf_test,
> +};
> +
> +#define NUM_LPM_TESTS (sizeof(tests)/sizeof(tests[0]))
> +#define MAX_DEPTH 32
> +#define MAX_RULES 256
> +#define NUMBER_TBL8S 256
> +#define PASS 0
> +
> +/*
> + * Check that rte_lpm_create fails gracefully for incorrect user input
> + * arguments
> + */
> +int32_t
> +test0(void)
> +{
> +	struct rte_lpm *lpm = NULL;
> +	struct rte_lpm_config config;
> +
> +	config.max_rules = MAX_RULES;
> +	config.number_tbl8s = NUMBER_TBL8S;
> +	config.flags = 0;
> +
> +	/* rte_lpm_create: lpm name == NULL */
> +	lpm = rte_lpm_create(NULL, SOCKET_ID_ANY, &config);
> +	TEST_LPM_ASSERT(lpm == NULL);
> +
> +	/* rte_lpm_create: max_rules = 0 */
> +	/* Note: __func__ inserts the function name, in this case "test0". */
> +	config.max_rules = 0;
> +	lpm = rte_lpm_create(__func__, SOCKET_ID_ANY, &config);
> +	TEST_LPM_ASSERT(lpm == NULL);
> +
> +	/* socket_id < -1 is invalid */
> +	config.max_rules = MAX_RULES;
> +	lpm = rte_lpm_create(__func__, -2, &config);
> +	TEST_LPM_ASSERT(lpm == NULL);
> +
> +	return PASS;
> +}
> +
> +/*
> + * Create an lpm table, then delete it, 100 times.
> + * Use a slightly different rule size each time.
> + */
> +int32_t
> +test1(void)
> +{
> +	struct rte_lpm *lpm = NULL;
> +	struct rte_lpm_config config;
> +
> +	config.number_tbl8s = NUMBER_TBL8S;
> +	config.flags = 0;
> +	int32_t i;
> +
> +	/* rte_lpm_free: Free NULL */
> +	for (i = 0; i < 100; i++) {
> +		config.max_rules = MAX_RULES - i;
> +		lpm = rte_lpm_create(__func__, SOCKET_ID_ANY, &config);
> +		TEST_LPM_ASSERT(lpm != NULL);
> +
> +		rte_lpm_free(lpm);
> +	}
> +
> +	/* Can not test free so return success */
> +	return PASS;
> +}
> +
> +/*
> + * Call rte_lpm_free for NULL pointer user input. Note: free has no return
> + * value, so it is impossible to check for failure, but this test is added
> + * to increase function coverage metrics and to validate that freeing NULL
> + * does not crash.
> + */
> +int32_t
> +test2(void)
> +{
> +	struct rte_lpm *lpm = NULL;
> +	struct rte_lpm_config config;
> +
> +	config.max_rules = MAX_RULES;
> +	config.number_tbl8s = NUMBER_TBL8S;
> +	config.flags = 0;
> +
> +	lpm = rte_lpm_create(__func__, SOCKET_ID_ANY, &config);
> +	TEST_LPM_ASSERT(lpm != NULL);
> +
> +	rte_lpm_free(lpm);
> +	rte_lpm_free(NULL);
> +	return PASS;
> +}
> +
> +/*
> + * Check that rte_lpm_add fails gracefully for incorrect user input arguments
> + */
> +int32_t
> +test3(void)
> +{
> +	struct rte_lpm *lpm = NULL;
> +	struct rte_lpm_config config;
> +
> +	config.max_rules = MAX_RULES;
> +	config.number_tbl8s = NUMBER_TBL8S;
> +	config.flags = 0;
> +	uint32_t ip = IPv4(0, 0, 0, 0), next_hop = 100;
> +	uint8_t depth = 24;
> +	int32_t status = 0;
> +
> +	/* rte_lpm_add: lpm == NULL */
> +	status = rte_lpm_add(NULL, ip, depth, next_hop);
> +	TEST_LPM_ASSERT(status < 0);
> +
> +	/* Create valid lpm to use in rest of test. */
> +	lpm = rte_lpm_create(__func__, SOCKET_ID_ANY, &config);
> +	TEST_LPM_ASSERT(lpm != NULL);
> +
> +	/* rte_lpm_add: depth < 1 */
> +	status = rte_lpm_add(lpm, ip, 0, next_hop);
> +	TEST_LPM_ASSERT(status < 0);
> +
> +	/* rte_lpm_add: depth > MAX_DEPTH */
> +	status = rte_lpm_add(lpm, ip, (MAX_DEPTH + 1), next_hop);
> +	TEST_LPM_ASSERT(status < 0);
> +
> +	rte_lpm_free(lpm);
> +
> +	return PASS;
> +}
> +
> +/*
> + * Check that rte_lpm_delete fails gracefully for incorrect user input
> + * arguments
> + */
> +int32_t
> +test4(void)
> +{
> +	struct rte_lpm *lpm = NULL;
> +	struct rte_lpm_config config;
> +
> +	config.max_rules = MAX_RULES;
> +	config.number_tbl8s = NUMBER_TBL8S;
> +	config.flags = 0;
> +	uint32_t ip = IPv4(0, 0, 0, 0);
> +	uint8_t depth = 24;
> +	int32_t status = 0;
> +
> +	/* rte_lpm_delete: lpm == NULL */
> +	status = rte_lpm_delete(NULL, ip, depth);
> +	TEST_LPM_ASSERT(status < 0);
> +
> +	/* Create valid lpm to use in rest of test. */
> +	lpm = rte_lpm_create(__func__, SOCKET_ID_ANY, &config);
> +	TEST_LPM_ASSERT(lpm != NULL);
> +
> +	/* rte_lpm_delete: depth < 1 */
> +	status = rte_lpm_delete(lpm, ip, 0);
> +	TEST_LPM_ASSERT(status < 0);
> +
> +	/* rte_lpm_delete: depth > MAX_DEPTH */
> +	status = rte_lpm_delete(lpm, ip, (MAX_DEPTH + 1));
> +	TEST_LPM_ASSERT(status < 0);
> +
> +	rte_lpm_free(lpm);
> +
> +	return PASS;
> +}
> +
> +/*
> + * Check that rte_lpm_lookup fails gracefully for incorrect user input
> + * arguments
> + */
> +int32_t
> +test5(void)
> +{
> +#if defined(RTE_LIBRTE_LPM_DEBUG)
> +	struct rte_lpm *lpm = NULL;
> +	struct rte_lpm_config config;
> +
> +	config.max_rules = MAX_RULES;
> +	config.number_tbl8s = NUMBER_TBL8S;
> +	config.flags = 0;
> +	uint32_t ip = IPv4(0, 0, 0, 0), next_hop_return = 0;
> +	int32_t status = 0;
> +
> +	/* rte_lpm_lookup: lpm == NULL */
> +	status = rte_lpm_lookup(NULL, ip, &next_hop_return);
> +	TEST_LPM_ASSERT(status < 0);
> +
> +	/* Create valid lpm to use in rest of test. */
> +	lpm = rte_lpm_create(__func__, SOCKET_ID_ANY, &config);
> +	TEST_LPM_ASSERT(lpm != NULL);
> +
> +	/* rte_lpm_lookup: depth < 1 */
> +	status = rte_lpm_lookup(lpm, ip, NULL);
> +	TEST_LPM_ASSERT(status < 0);
> +
> +	rte_lpm_free(lpm);
> +#endif
> +	return PASS;
> +}
> +
> +
> +
> +/*
> + * Call add, lookup and delete for a single rule with depth <= 24
> + */
> +int32_t
> +test6(void)
> +{
> +	struct rte_lpm *lpm = NULL;
> +	struct rte_lpm_config config;
> +
> +	config.max_rules = MAX_RULES;
> +	config.number_tbl8s = NUMBER_TBL8S;
> +	config.flags = 0;
> +	uint32_t ip = IPv4(0, 0, 0, 0), next_hop_add = 100, next_hop_return = 0;
> +	uint8_t depth = 24;
> +	int32_t status = 0;
> +
> +	lpm = rte_lpm_create(__func__, SOCKET_ID_ANY, &config);
> +	TEST_LPM_ASSERT(lpm != NULL);
> +
> +	status = rte_lpm_add(lpm, ip, depth, next_hop_add);
> +	TEST_LPM_ASSERT(status == 0);
> +
> +	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
> +	TEST_LPM_ASSERT((status == 0) && (next_hop_return == next_hop_add));
> +
> +	status = rte_lpm_delete(lpm, ip, depth);
> +	TEST_LPM_ASSERT(status == 0);
> +
> +	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
> +	TEST_LPM_ASSERT(status == -ENOENT);
> +
> +	rte_lpm_free(lpm);
> +
> +	return PASS;
> +}
> +
> +/*
> + * Call add, lookup and delete for a single rule with depth > 24
> + */
> +
> +int32_t
> +test7(void)
> +{
> +	xmm_t ipx4;
> +	uint32_t hop[4];
> +	struct rte_lpm *lpm = NULL;
> +	struct rte_lpm_config config;
> +
> +	config.max_rules = MAX_RULES;
> +	config.number_tbl8s = NUMBER_TBL8S;
> +	config.flags = 0;
> +	uint32_t ip = IPv4(0, 0, 0, 0), next_hop_add = 100, next_hop_return = 0;
> +	uint8_t depth = 32;
> +	int32_t status = 0;
> +
> +	lpm = rte_lpm_create(__func__, SOCKET_ID_ANY, &config);
> +	TEST_LPM_ASSERT(lpm != NULL);
> +
> +	status = rte_lpm_add(lpm, ip, depth, next_hop_add);
> +	TEST_LPM_ASSERT(status == 0);
> +
> +	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
> +	TEST_LPM_ASSERT((status == 0) && (next_hop_return == next_hop_add));
> +
> +	ipx4 = vect_set_epi32(ip, ip + 0x100, ip - 0x100, ip);
> +	rte_lpm_lookupx4(lpm, ipx4, hop, UINT32_MAX);
> +	TEST_LPM_ASSERT(hop[0] == next_hop_add);
> +	TEST_LPM_ASSERT(hop[1] == UINT32_MAX);
> +	TEST_LPM_ASSERT(hop[2] == UINT32_MAX);
> +	TEST_LPM_ASSERT(hop[3] == next_hop_add);
> +
> +	status = rte_lpm_delete(lpm, ip, depth);
> +	TEST_LPM_ASSERT(status == 0);
> +
> +	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
> +	TEST_LPM_ASSERT(status == -ENOENT);
> +
> +	rte_lpm_free(lpm);
> +
> +	return PASS;
> +}
> +
> +/*
> + * Use rte_lpm_add to add rules which affect only the second half of the lpm
> + * table. Use all possible depths ranging from 1..32. Set the next hop equal
> + * to the depth. Check for a lookup hit on every add and check for a lookup
> + * miss on the first half of the lpm table after each add. Finally delete
> + * all rules going backwards (i.e. from depth = 32..1) and carry out a
> + * lookup after each delete. The lookup should return the next_hop_add value
> + * related to the previous depth value (i.e. depth - 1).
> + */
> +int32_t
> +test8(void)
> +{
> +	xmm_t ipx4;
> +	uint32_t hop[4];
> +	struct rte_lpm *lpm = NULL;
> +	struct rte_lpm_config config;
> +
> +	config.max_rules = MAX_RULES;
> +	config.number_tbl8s = NUMBER_TBL8S;
> +	config.flags = 0;
> +	uint32_t ip1 = IPv4(127, 255, 255, 255), ip2 = IPv4(128, 0, 0, 0);
> +	uint32_t next_hop_add, next_hop_return;
> +	uint8_t depth;
> +	int32_t status = 0;
> +
> +	lpm = rte_lpm_create(__func__, SOCKET_ID_ANY, &config);
> +	TEST_LPM_ASSERT(lpm != NULL);
> +
> +	/* Loop with rte_lpm_add. */
> +	for (depth = 1; depth <= 32; depth++) {
> +		/* Let the next_hop_add value = depth. Just for change. */
> +		next_hop_add = depth;
> +
> +		status = rte_lpm_add(lpm, ip2, depth, next_hop_add);
> +		TEST_LPM_ASSERT(status == 0);
> +
> +		/* Check IP in first half of tbl24 which should be empty. */
> +		status = rte_lpm_lookup(lpm, ip1, &next_hop_return);
> +		TEST_LPM_ASSERT(status == -ENOENT);
> +
> +		status = rte_lpm_lookup(lpm, ip2, &next_hop_return);
> +		TEST_LPM_ASSERT((status == 0) &&
> +			(next_hop_return == next_hop_add));
> +
> +		ipx4 = vect_set_epi32(ip2, ip1, ip2, ip1);
> +		rte_lpm_lookupx4(lpm, ipx4, hop, UINT32_MAX);
> +		TEST_LPM_ASSERT(hop[0] == UINT32_MAX);
> +		TEST_LPM_ASSERT(hop[1] == next_hop_add);
> +		TEST_LPM_ASSERT(hop[2] == UINT32_MAX);
> +		TEST_LPM_ASSERT(hop[3] == next_hop_add);
> +	}
> +
> +	/* Loop with rte_lpm_delete. */
> +	for (depth = 32; depth >= 1; depth--) {
> +		next_hop_add = (uint8_t) (depth - 1);
> +
> +		status = rte_lpm_delete(lpm, ip2, depth);
> +		TEST_LPM_ASSERT(status == 0);
> +
> +		status = rte_lpm_lookup(lpm, ip2, &next_hop_return);
> +
> +		if (depth != 1) {
> +			TEST_LPM_ASSERT((status == 0) &&
> +				(next_hop_return == next_hop_add));
> +		} else {
> +			TEST_LPM_ASSERT(status == -ENOENT);
> +		}
> +
> +		status = rte_lpm_lookup(lpm, ip1, &next_hop_return);
> +		TEST_LPM_ASSERT(status == -ENOENT);
> +
> +		ipx4 = vect_set_epi32(ip1, ip1, ip2, ip2);
> +		rte_lpm_lookupx4(lpm, ipx4, hop, UINT32_MAX);
> +		if (depth != 1) {
> +			TEST_LPM_ASSERT(hop[0] == next_hop_add);
> +			TEST_LPM_ASSERT(hop[1] == next_hop_add);
> +		} else {
> +			TEST_LPM_ASSERT(hop[0] == UINT32_MAX);
> +			TEST_LPM_ASSERT(hop[1] == UINT32_MAX);
> +		}
> +		TEST_LPM_ASSERT(hop[2] == UINT32_MAX);
> +		TEST_LPM_ASSERT(hop[3] == UINT32_MAX);
> +	}
> +
> +	rte_lpm_free(lpm);
> +
> +	return PASS;
> +}
> +
> +/*
> + * - Add & lookup to hit invalid TBL24 entry
> + * - Add & lookup to hit valid TBL24 entry not extended
> + * - Add & lookup to hit valid extended TBL24 entry with invalid TBL8 entry
> + * - Add & lookup to hit valid extended TBL24 entry with valid TBL8 entry
> + *
> + */
> +int32_t
> +test9(void)
> +{
> +	struct rte_lpm *lpm = NULL;
> +	struct rte_lpm_config config;
> +
> +	config.max_rules = MAX_RULES;
> +	config.number_tbl8s = NUMBER_TBL8S;
> +	config.flags = 0;
> +	uint32_t ip, ip_1, ip_2;
> +	uint8_t depth, depth_1, depth_2;
> +	uint32_t next_hop_add, next_hop_add_1, next_hop_add_2, next_hop_return;
> +	int32_t status = 0;
> +
> +	/* Add & lookup to hit invalid TBL24 entry */
> +	ip = IPv4(128, 0, 0, 0);
> +	depth = 24;
> +	next_hop_add = 100;
> +
> +	lpm = rte_lpm_create(__func__, SOCKET_ID_ANY, &config);
> +	TEST_LPM_ASSERT(lpm != NULL);
> +
> +	status = rte_lpm_add(lpm, ip, depth, next_hop_add);
> +	TEST_LPM_ASSERT(status == 0);
> +
> +	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
> +	TEST_LPM_ASSERT((status == 0) && (next_hop_return == next_hop_add));
> +
> +	status = rte_lpm_delete(lpm, ip, depth);
> +	TEST_LPM_ASSERT(status == 0);
> +
> +	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
> +	TEST_LPM_ASSERT(status == -ENOENT);
> +
> +	rte_lpm_delete_all(lpm);
> +
> +	/* Add & lookup to hit valid TBL24 entry not extended */
> +	ip = IPv4(128, 0, 0, 0);
> +	depth = 23;
> +	next_hop_add = 100;
> +
> +	status = rte_lpm_add(lpm, ip, depth, next_hop_add);
> +	TEST_LPM_ASSERT(status == 0);
> +
> +	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
> +	TEST_LPM_ASSERT((status == 0) && (next_hop_return == next_hop_add));
> +
> +	depth = 24;
> +	next_hop_add = 101;
> +
> +	status = rte_lpm_add(lpm, ip, depth, next_hop_add);
> +	TEST_LPM_ASSERT(status == 0);
> +
> +	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
> +	TEST_LPM_ASSERT((status == 0) && (next_hop_return == next_hop_add));
> +
> +	depth = 24;
> +
> +	status = rte_lpm_delete(lpm, ip, depth);
> +	TEST_LPM_ASSERT(status == 0);
> +
> +	depth = 23;
> +
> +	status = rte_lpm_delete(lpm, ip, depth);
> +	TEST_LPM_ASSERT(status == 0);
> +
> +	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
> +	TEST_LPM_ASSERT(status == -ENOENT);
> +
> +	rte_lpm_delete_all(lpm);
> +
> +	/* Add & lookup to hit valid extended TBL24 entry with invalid TBL8
> +	 * entry */
> +	ip = IPv4(128, 0, 0, 0);
> +	depth = 32;
> +	next_hop_add = 100;
> +
> +	status = rte_lpm_add(lpm, ip, depth, next_hop_add);
> +	TEST_LPM_ASSERT(status == 0);
> +
> +	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
> +	TEST_LPM_ASSERT((status == 0) && (next_hop_return == next_hop_add));
> +
> +	ip = IPv4(128, 0, 0, 5);
> +	depth = 32;
> +	next_hop_add = 101;
> +
> +	status = rte_lpm_add(lpm, ip, depth, next_hop_add);
> +	TEST_LPM_ASSERT(status == 0);
> +
> +	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
> +	TEST_LPM_ASSERT((status == 0) && (next_hop_return == next_hop_add));
> +
> +	status = rte_lpm_delete(lpm, ip, depth);
> +	TEST_LPM_ASSERT(status == 0);
> +
> +	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
> +	TEST_LPM_ASSERT(status == -ENOENT);
> +
> +	ip = IPv4(128, 0, 0, 0);
> +	depth = 32;
> +	next_hop_add = 100;
> +
> +	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
> +	TEST_LPM_ASSERT((status == 0) && (next_hop_return == next_hop_add));
> +
> +	status = rte_lpm_delete(lpm, ip, depth);
> +	TEST_LPM_ASSERT(status == 0);
> +
> +	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
> +	TEST_LPM_ASSERT(status == -ENOENT);
> +
> +	rte_lpm_delete_all(lpm);
> +
> +	/* Add & lookup to hit valid extended TBL24 entry with valid TBL8
> +	 * entry */
> +	ip_1 = IPv4(128, 0, 0, 0);
> +	depth_1 = 25;
> +	next_hop_add_1 = 101;
> +
> +	ip_2 = IPv4(128, 0, 0, 5);
> +	depth_2 = 32;
> +	next_hop_add_2 = 102;
> +
> +	next_hop_return = 0;
> +
> +	status = rte_lpm_add(lpm, ip_1, depth_1, next_hop_add_1);
> +	TEST_LPM_ASSERT(status == 0);
> +
> +	status = rte_lpm_lookup(lpm, ip_1, &next_hop_return);
> +	TEST_LPM_ASSERT((status == 0) && (next_hop_return == next_hop_add_1));
> +
> +	status = rte_lpm_add(lpm, ip_2, depth_2, next_hop_add_2);
> +	TEST_LPM_ASSERT(status == 0);
> +
> +	status = rte_lpm_lookup(lpm, ip_2, &next_hop_return);
> +	TEST_LPM_ASSERT((status == 0) && (next_hop_return == next_hop_add_2));
> +
> +	status = rte_lpm_delete(lpm, ip_2, depth_2);
> +	TEST_LPM_ASSERT(status == 0);
> +
> +	status = rte_lpm_lookup(lpm, ip_2, &next_hop_return);
> +	TEST_LPM_ASSERT((status == 0) && (next_hop_return == next_hop_add_1));
> +
> +	status = rte_lpm_delete(lpm, ip_1, depth_1);
> +	TEST_LPM_ASSERT(status == 0);
> +
> +	status = rte_lpm_lookup(lpm, ip_1, &next_hop_return);
> +	TEST_LPM_ASSERT(status == -ENOENT);
> +
> +	rte_lpm_free(lpm);
> +
> +	return PASS;
> +}
> +
> +
> +/*
> + * - Add rule that covers a TBL24 range previously invalid & lookup (& delete &
> + *   lookup)
> + * - Add rule that extends a TBL24 invalid entry & lookup (& delete & lookup)
> + * - Add rule that extends a TBL24 valid entry & lookup for both rules (&
> + *   delete & lookup)
> + * - Add rule that updates the next hop in TBL24 & lookup (& delete & lookup)
> + * - Add rule that updates the next hop in TBL8 & lookup (& delete & lookup)
> + * - Delete a rule that is not present in the TBL24 & lookup
> + * - Delete a rule that is not present in the TBL8 & lookup
> + *
> + */
> +int32_t
> +test10(void)
> +{
> +
> +	struct rte_lpm *lpm = NULL;
> +	struct rte_lpm_config config;
> +
> +	config.max_rules = MAX_RULES;
> +	config.number_tbl8s = NUMBER_TBL8S;
> +	config.flags = 0;
> +	uint32_t ip, next_hop_add, next_hop_return;
> +	uint8_t depth;
> +	int32_t status = 0;
> +
> +	/* Add rule that covers a TBL24 range previously invalid & lookup
> +	 * (& delete & lookup) */
> +	lpm = rte_lpm_create(__func__, SOCKET_ID_ANY, &config);
> +	TEST_LPM_ASSERT(lpm != NULL);
> +
> +	ip = IPv4(128, 0, 0, 0);
> +	depth = 16;
> +	next_hop_add = 100;
> +
> +	status = rte_lpm_add(lpm, ip, depth, next_hop_add);
> +	TEST_LPM_ASSERT(status == 0);
> +
> +	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
> +	TEST_LPM_ASSERT((status == 0) && (next_hop_return == next_hop_add));
> +
> +	status = rte_lpm_delete(lpm, ip, depth);
> +	TEST_LPM_ASSERT(status == 0);
> +
> +	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
> +	TEST_LPM_ASSERT(status == -ENOENT);
> +
> +	rte_lpm_delete_all(lpm);
> +
> +	ip = IPv4(128, 0, 0, 0);
> +	depth = 25;
> +	next_hop_add = 100;
> +
> +	status = rte_lpm_add(lpm, ip, depth, next_hop_add);
> +	TEST_LPM_ASSERT(status == 0);
> +
> +	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
> +	TEST_LPM_ASSERT((status == 0) && (next_hop_return == next_hop_add));
> +
> +	status = rte_lpm_delete(lpm, ip, depth);
> +	TEST_LPM_ASSERT(status == 0);
> +
> +	rte_lpm_delete_all(lpm);
> +
> +	/* Add rule that extends a TBL24 valid entry & lookup for both rules
> +	 * (& delete & lookup) */
> +
> +	ip = IPv4(128, 0, 0, 0);
> +	depth = 24;
> +	next_hop_add = 100;
> +
> +	status = rte_lpm_add(lpm, ip, depth, next_hop_add);
> +	TEST_LPM_ASSERT(status == 0);
> +
> +	ip = IPv4(128, 0, 0, 10);
> +	depth = 32;
> +	next_hop_add = 101;
> +
> +	status = rte_lpm_add(lpm, ip, depth, next_hop_add);
> +	TEST_LPM_ASSERT(status == 0);
> +
> +	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
> +	TEST_LPM_ASSERT((status == 0) && (next_hop_return == next_hop_add));
> +
> +	ip = IPv4(128, 0, 0, 0);
> +	next_hop_add = 100;
> +
> +	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
> +	TEST_LPM_ASSERT((status == 0) && (next_hop_return == next_hop_add));
> +
> +	ip = IPv4(128, 0, 0, 0);
> +	depth = 24;
> +
> +	status = rte_lpm_delete(lpm, ip, depth);
> +	TEST_LPM_ASSERT(status == 0);
> +
> +	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
> +	TEST_LPM_ASSERT(status == -ENOENT);
> +
> +	ip = IPv4(128, 0, 0, 10);
> +	depth = 32;
> +
> +	status = rte_lpm_delete(lpm, ip, depth);
> +	TEST_LPM_ASSERT(status == 0);
> +
> +	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
> +	TEST_LPM_ASSERT(status == -ENOENT);
> +
> +	rte_lpm_delete_all(lpm);
> +
> +	/* Add rule that updates the next hop in TBL24 & lookup
> +	 * (& delete & lookup) */
> +
> +	ip = IPv4(128, 0, 0, 0);
> +	depth = 24;
> +	next_hop_add = 100;
> +
> +	status = rte_lpm_add(lpm, ip, depth, next_hop_add);
> +	TEST_LPM_ASSERT(status == 0);
> +
> +	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
> +	TEST_LPM_ASSERT((status == 0) && (next_hop_return == next_hop_add));
> +
> +	next_hop_add = 101;
> +
> +	status = rte_lpm_add(lpm, ip, depth, next_hop_add);
> +	TEST_LPM_ASSERT(status == 0);
> +
> +	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
> +	TEST_LPM_ASSERT((status == 0) && (next_hop_return == next_hop_add));
> +
> +	status = rte_lpm_delete(lpm, ip, depth);
> +	TEST_LPM_ASSERT(status == 0);
> +
> +	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
> +	TEST_LPM_ASSERT(status == -ENOENT);
> +
> +	rte_lpm_delete_all(lpm);
> +
> +	/* Add rule that updates the next hop in TBL8 & lookup
> +	 * (& delete & lookup) */
> +
> +	ip = IPv4(128, 0, 0, 0);
> +	depth = 32;
> +	next_hop_add = 100;
> +
> +	status = rte_lpm_add(lpm, ip, depth, next_hop_add);
> +	TEST_LPM_ASSERT(status == 0);
> +
> +	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
> +	TEST_LPM_ASSERT((status == 0) && (next_hop_return == next_hop_add));
> +
> +	next_hop_add = 101;
> +
> +	status = rte_lpm_add(lpm, ip, depth, next_hop_add);
> +	TEST_LPM_ASSERT(status == 0);
> +
> +	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
> +	TEST_LPM_ASSERT((status == 0) && (next_hop_return == next_hop_add));
> +
> +	status = rte_lpm_delete(lpm, ip, depth);
> +	TEST_LPM_ASSERT(status == 0);
> +
> +	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
> +	TEST_LPM_ASSERT(status == -ENOENT);
> +
> +	rte_lpm_delete_all(lpm);
> +
> +	/* Delete a rule that is not present in the TBL24 & lookup */
> +
> +	ip = IPv4(128, 0, 0, 0);
> +	depth = 24;
> +
> +	status = rte_lpm_delete(lpm, ip, depth);
> +	TEST_LPM_ASSERT(status < 0);
> +
> +	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
> +	TEST_LPM_ASSERT(status == -ENOENT);
> +
> +	rte_lpm_delete_all(lpm);
> +
> +	/* Delete a rule that is not present in the TBL8 & lookup */
> +
> +	ip = IPv4(128, 0, 0, 0);
> +	depth = 32;
> +
> +	status = rte_lpm_delete(lpm, ip, depth);
> +	TEST_LPM_ASSERT(status < 0);
> +
> +	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
> +	TEST_LPM_ASSERT(status == -ENOENT);
> +
> +	rte_lpm_free(lpm);
> +
> +	return PASS;
> +}
> +
> +/*
> + * Add two rules, lookup to hit the more specific one, lookup to hit the
> + * less specific one, delete the less specific rule and lookup previous
> + * values again; add a more specific rule than the existing rule, lookup
> + * again.
> + */
> +int32_t
> +test11(void)
> +{
> +
> +	struct rte_lpm *lpm = NULL;
> +	struct rte_lpm_config config;
> +
> +	config.max_rules = MAX_RULES;
> +	config.number_tbl8s = NUMBER_TBL8S;
> +	config.flags = 0;
> +	uint32_t ip, next_hop_add, next_hop_return;
> +	uint8_t depth;
> +	int32_t status = 0;
> +
> +	lpm = rte_lpm_create(__func__, SOCKET_ID_ANY, &config);
> +	TEST_LPM_ASSERT(lpm != NULL);
> +
> +	ip = IPv4(128, 0, 0, 0);
> +	depth = 24;
> +	next_hop_add = 100;
> +
> +	status = rte_lpm_add(lpm, ip, depth, next_hop_add);
> +	TEST_LPM_ASSERT(status == 0);
> +
> +	ip = IPv4(128, 0, 0, 10);
> +	depth = 32;
> +	next_hop_add = 101;
> +
> +	status = rte_lpm_add(lpm, ip, depth, next_hop_add);
> +	TEST_LPM_ASSERT(status == 0);
> +
> +	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
> +	TEST_LPM_ASSERT((status == 0) && (next_hop_return == next_hop_add));
> +
> +	ip = IPv4(128, 0, 0, 0);
> +	next_hop_add = 100;
> +
> +	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
> +	TEST_LPM_ASSERT((status == 0) && (next_hop_return == next_hop_add));
> +
> +	ip = IPv4(128, 0, 0, 0);
> +	depth = 24;
> +
> +	status = rte_lpm_delete(lpm, ip, depth);
> +	TEST_LPM_ASSERT(status == 0);
> +
> +	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
> +	TEST_LPM_ASSERT(status == -ENOENT);
> +
> +	ip = IPv4(128, 0, 0, 10);
> +	depth = 32;
> +
> +	status = rte_lpm_delete(lpm, ip, depth);
> +	TEST_LPM_ASSERT(status == 0);
> +
> +	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
> +	TEST_LPM_ASSERT(status == -ENOENT);
> +
> +	rte_lpm_free(lpm);
> +
> +	return PASS;
> +}
> +
> +/*
> + * Add an extended rule (i.e. depth greater than 24), lookup (hit), delete,
> + * lookup (miss), in a for loop of 1000 times. This checks tbl8 extension
> + * and contraction.
> + */
> +
> +int32_t
> +test12(void)
> +{
> +	xmm_t ipx4;
> +	uint32_t hop[4];
> +	struct rte_lpm *lpm = NULL;
> +	struct rte_lpm_config config;
> +
> +	config.max_rules = MAX_RULES;
> +	config.number_tbl8s = NUMBER_TBL8S;
> +	config.flags = 0;
> +	uint32_t ip, i, next_hop_add, next_hop_return;
> +	uint8_t depth;
> +	int32_t status = 0;
> +
> +	lpm = rte_lpm_create(__func__, SOCKET_ID_ANY, &config);
> +	TEST_LPM_ASSERT(lpm != NULL);
> +
> +	ip = IPv4(128, 0, 0, 0);
> +	depth = 32;
> +	next_hop_add = 100;
> +
> +	for (i = 0; i < 1000; i++) {
> +		status = rte_lpm_add(lpm, ip, depth, next_hop_add);
> +		TEST_LPM_ASSERT(status == 0);
> +
> +		status = rte_lpm_lookup(lpm, ip, &next_hop_return);
> +		TEST_LPM_ASSERT((status == 0) &&
> +				(next_hop_return == next_hop_add));
> +
> +		ipx4 = vect_set_epi32(ip, ip + 1, ip, ip - 1);
> +		rte_lpm_lookupx4(lpm, ipx4, hop, UINT32_MAX);
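> +		/* Editorial note: vect_set_epi32() fills lanes from the
> +		 * most-significant argument down, so lanes 0 and 2 hold
> +		 * ip - 1 and ip + 1 (misses) and lanes 1/3 hold ip (hits). */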
> +		TEST_LPM_ASSERT(hop[0] == UINT32_MAX);
> +		TEST_LPM_ASSERT(hop[1] == next_hop_add);
> +		TEST_LPM_ASSERT(hop[2] == UINT32_MAX);
> +		TEST_LPM_ASSERT(hop[3] == next_hop_add);
> +
> +		status = rte_lpm_delete(lpm, ip, depth);
> +		TEST_LPM_ASSERT(status == 0);
> +
> +		status = rte_lpm_lookup(lpm, ip, &next_hop_return);
> +		TEST_LPM_ASSERT(status == -ENOENT);
> +	}
> +
> +	rte_lpm_free(lpm);
> +
> +	return PASS;
> +}
> +
> +/*
> + * Add a rule to tbl24, lookup (hit), then add a rule that will extend this
> + * tbl24 entry, lookup (hit), delete the rule that caused the tbl24 extension,
> + * lookup (miss), and repeat in a for loop of 1000 times. This checks tbl8
> + * extension and contraction.
> + */
> +
> +int32_t
> +test13(void)
> +{
> +	struct rte_lpm *lpm = NULL;
> +	struct rte_lpm_config config;
> +
> +	config.max_rules = MAX_RULES;
> +	config.number_tbl8s = NUMBER_TBL8S;
> +	config.flags = 0;
> +	uint32_t ip, i, next_hop_add_1, next_hop_add_2, next_hop_return;
> +	uint8_t depth;
> +	int32_t status = 0;
> +
> +	lpm = rte_lpm_create(__func__, SOCKET_ID_ANY, &config);
> +	TEST_LPM_ASSERT(lpm != NULL);
> +
> +	ip = IPv4(128, 0, 0, 0);
> +	depth = 24;
> +	next_hop_add_1 = 100;
> +
> +	status = rte_lpm_add(lpm, ip, depth, next_hop_add_1);
> +	TEST_LPM_ASSERT(status == 0);
> +
> +	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
> +	TEST_LPM_ASSERT((status == 0) && (next_hop_return == next_hop_add_1));
> +
> +	depth = 32;
> +	next_hop_add_2 = 101;
> +
> +	for (i = 0; i < 1000; i++) {
> +		status = rte_lpm_add(lpm, ip, depth, next_hop_add_2);
> +		TEST_LPM_ASSERT(status == 0);
> +
> +		status = rte_lpm_lookup(lpm, ip, &next_hop_return);
> +		TEST_LPM_ASSERT((status == 0) &&
> +				(next_hop_return == next_hop_add_2));
> +
> +		status = rte_lpm_delete(lpm, ip, depth);
> +		TEST_LPM_ASSERT(status == 0);
> +
> +		status = rte_lpm_lookup(lpm, ip, &next_hop_return);
> +		TEST_LPM_ASSERT((status == 0) &&
> +				(next_hop_return == next_hop_add_1));
> +	}
> +
> +	depth = 24;
> +
> +	status = rte_lpm_delete(lpm, ip, depth);
> +	TEST_LPM_ASSERT(status == 0);
> +
> +	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
> +	TEST_LPM_ASSERT(status == -ENOENT);
> +
> +	rte_lpm_free(lpm);
> +
> +	return PASS;
> +}
> +
> +/*
> + * Force TBL8 extension exhaustion. Add 256 rules that require a tbl8
> + * extension. No more tbl8 extensions will be allowed. Now add one more rule
> + * that requires a tbl8 extension and expect it to fail.
> + */
> +int32_t
> +test14(void)
> +{
> +
> +	/* We only use depth = 32 in the loop below, so we must make sure
> +	 * that we have enough storage for all rules at that depth. */
> +
> +	struct rte_lpm *lpm = NULL;
> +	struct rte_lpm_config config;
> +
> +	config.max_rules = 256 * 32;
> +	config.number_tbl8s = NUMBER_TBL8S;
> +	config.flags = 0;
> +	uint32_t ip, next_hop_add, next_hop_return;
> +	uint8_t depth;
> +	int32_t status = 0;
> +
> +	/* Add enough space for 256 rules for every depth */
> +	lpm = rte_lpm_create(__func__, SOCKET_ID_ANY, &config);
> +	TEST_LPM_ASSERT(lpm != NULL);
> +
> +	depth = 32;
> +	next_hop_add = 100;
> +	ip = IPv4(0, 0, 0, 0);
> +
> +	/* Add 256 rules that require a tbl8 extension */
> +	for (; ip <= IPv4(0, 0, 255, 0); ip += 256) {
> +		status = rte_lpm_add(lpm, ip, depth, next_hop_add);
> +		TEST_LPM_ASSERT(status == 0);
> +
> +		status = rte_lpm_lookup(lpm, ip, &next_hop_return);
> +		TEST_LPM_ASSERT((status == 0) &&
> +				(next_hop_return == next_hop_add));
> +	}
> +
> +	/* All tbl8 extensions have been used above. Try to add one more and
> +	 * expect a failure. */
> +	ip = IPv4(1, 0, 0, 0);
> +	depth = 32;
> +
> +	status = rte_lpm_add(lpm, ip, depth, next_hop_add);
> +	TEST_LPM_ASSERT(status < 0);
> +
> +	rte_lpm_free(lpm);
> +
> +	return PASS;
> +}
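> +
> +/*
> + * Editorial sketch of the arithmetic above, assuming NUMBER_TBL8S is the
> + * default 256: every /32 rule lands in a distinct /24 prefix (ip advances
> + * by 256), so each add claims a fresh tbl8 group:
> + *
> + *   0.0.0.0/32   -> group for 0.0.0.x
> + *   0.0.1.0/32   -> group for 0.0.1.x
> + *   ...
> + *   0.0.255.0/32 -> group for 0.0.255.x  (the 256th and last group)
> + *
> + * The next request (1.0.0.0/32) cannot get a group and must fail.
> + */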
> +
> +/*
> + * Sequence of operations for find existing lpm table
> + *
> + *  - create table
> + *  - find existing table: hit
> + *  - find non-existing table: miss
> + *
> + */
> +int32_t
> +test15(void)
> +{
> +	struct rte_lpm *lpm = NULL, *result = NULL;
> +	struct rte_lpm_config config;
> +
> +	config.max_rules = 256 * 32;
> +	config.number_tbl8s = NUMBER_TBL8S;
> +	config.flags = 0;
> +
> +	/* Create lpm  */
> +	lpm = rte_lpm_create("lpm_find_existing", SOCKET_ID_ANY, &config);
> +	TEST_LPM_ASSERT(lpm != NULL);
> +
> +	/* Try to find existing lpm */
> +	result = rte_lpm_find_existing("lpm_find_existing");
> +	TEST_LPM_ASSERT(result == lpm);
> +
> +	/* Try to find non-existing lpm */
> +	result = rte_lpm_find_existing("lpm_find_non_existing");
> +	TEST_LPM_ASSERT(result == NULL);
> +
> +	/* Cleanup. */
> +	rte_lpm_delete_all(lpm);
> +	rte_lpm_free(lpm);
> +
> +	return PASS;
> +}
> +
> +/*
> + * Test the failure condition of overloading the tbl8 so no more will fit.
> + * Check that we get an error return value in that case.
> + */
> +int32_t
> +test16(void)
> +{
> +	uint32_t ip;
> +	struct rte_lpm_config config;
> +
> +	config.max_rules = 256 * 32;
> +	config.number_tbl8s = NUMBER_TBL8S;
> +	config.flags = 0;
> +	struct rte_lpm *lpm = rte_lpm_create(__func__, SOCKET_ID_ANY, &config);
> +
> +	/* ip loops through all possibilities for top 24 bits of address */
> +	for (ip = 0; ip < 0xFFFFFF; ip++) {
> +		/* add an entry within a different tbl8 each time, since
> +		 * depth >24 and the top 24 bits are different */
> +		if (rte_lpm_add(lpm, (ip << 8) + 0xF0, 30, 0) < 0)
> +			break;
> +	}
> +
> +	if (ip != NUMBER_TBL8S) {
> +		printf("Error, unexpected failure with filling tbl8 groups\n");
> +		printf("Failed after %u additions, expected after %u\n",
> +				(unsigned)ip, (unsigned)NUMBER_TBL8S);
> +	}
> +
> +	rte_lpm_free(lpm);
> +	return 0;
> +}
> +
> +/*
> + * Test for overwriting of tbl8:
> + *  - add rule /32 and lookup
> + *  - add new rule /24 and lookup
> + *  - add third rule /25 and lookup
> + *  - lookup /32 and /24 rule to ensure the table has not been overwritten.
> + */
> +int32_t
> +test17(void)
> +{
> +	struct rte_lpm *lpm = NULL;
> +	struct rte_lpm_config config;
> +
> +	config.max_rules = MAX_RULES;
> +	config.number_tbl8s = NUMBER_TBL8S;
> +	config.flags = 0;
> +	const uint32_t ip_10_32 = IPv4(10, 10, 10, 2);
> +	const uint32_t ip_10_24 = IPv4(10, 10, 10, 0);
> +	const uint32_t ip_20_25 = IPv4(10, 10, 20, 2);
> +	const uint8_t d_ip_10_32 = 32,
> +			d_ip_10_24 = 24,
> +			d_ip_20_25 = 25;
> +	const uint32_t next_hop_ip_10_32 = 100,
> +			next_hop_ip_10_24 = 105,
> +			next_hop_ip_20_25 = 111;
> +	uint32_t next_hop_return = 0;
> +	int32_t status = 0;
> +
> +	lpm = rte_lpm_create(__func__, SOCKET_ID_ANY, &config);
> +	TEST_LPM_ASSERT(lpm != NULL);
> +
> +	status = rte_lpm_add(lpm, ip_10_32, d_ip_10_32, next_hop_ip_10_32);
> +	if (status < 0)
> +		return -1;
> +
> +	status = rte_lpm_lookup(lpm, ip_10_32, &next_hop_return);
> +	uint32_t test_hop_10_32 = next_hop_return;
> +	TEST_LPM_ASSERT(status == 0);
> +	TEST_LPM_ASSERT(next_hop_return == next_hop_ip_10_32);
> +
> +	status = rte_lpm_add(lpm, ip_10_24, d_ip_10_24, next_hop_ip_10_24);
> +	if (status < 0)
> +		return -1;
> +
> +	status = rte_lpm_lookup(lpm, ip_10_24, &next_hop_return);
> +	uint32_t test_hop_10_24 = next_hop_return;
> +	TEST_LPM_ASSERT(status == 0);
> +	TEST_LPM_ASSERT(next_hop_return == next_hop_ip_10_24);
> +
> +	status = rte_lpm_add(lpm, ip_20_25, d_ip_20_25, next_hop_ip_20_25);
> +	if (status < 0)
> +		return -1;
> +
> +	status = rte_lpm_lookup(lpm, ip_20_25, &next_hop_return);
> +	uint32_t test_hop_20_25 = next_hop_return;
> +	TEST_LPM_ASSERT(status == 0);
> +	TEST_LPM_ASSERT(next_hop_return == next_hop_ip_20_25);
> +
> +	if (test_hop_10_32 == test_hop_10_24) {
> +		printf("Next hop return equal\n");
> +		return -1;
> +	}
> +
> +	if (test_hop_10_24 == test_hop_20_25) {
> +		printf("Next hop return equal\n");
> +		return -1;
> +	}
> +
> +	status = rte_lpm_lookup(lpm, ip_10_32, &next_hop_return);
> +	TEST_LPM_ASSERT(status == 0);
> +	TEST_LPM_ASSERT(next_hop_return == next_hop_ip_10_32);
> +
> +	status = rte_lpm_lookup(lpm, ip_10_24, &next_hop_return);
> +	TEST_LPM_ASSERT(status == 0);
> +	TEST_LPM_ASSERT(next_hop_return == next_hop_ip_10_24);
> +
> +	rte_lpm_free(lpm);
> +
> +	return PASS;
> +}
> +
> +/*
> + * Lookup performance test
> + */
> +
> +#define ITERATIONS (1 << 10)
> +#define BATCH_SIZE (1 << 12)
> +#define BULK_SIZE 32
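> +
> +/* 2^10 iterations x 2^12 lookups per batch = ~4M samples per measurement */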
> +
> +int32_t
> +perf_test(void)
> +{
> +	struct rte_lpm *lpm = NULL;
> +	struct rte_lpm_config config;
> +
> +	config.max_rules = 1000000;
> +	config.number_tbl8s = NUMBER_TBL8S;
> +	config.flags = 0;
> +	uint64_t begin, total_time, lpm_used_entries = 0;
> +	unsigned i, j;
> +	uint32_t next_hop_add = 0xAA, next_hop_return = 0;
> +	int status = 0;
> +	uint64_t cache_line_counter = 0;
> +	int64_t count = 0;
> +
> +	rte_srand(rte_rdtsc());
> +
> +	/* (re) generate the routing table */
> +	generate_large_route_rule_table();
> +
> +	printf("No. routes = %u\n", (unsigned) NUM_ROUTE_ENTRIES);
> +
> +	print_route_distribution(large_route_table,
> +				(uint32_t) NUM_ROUTE_ENTRIES);
> +
> +	lpm = rte_lpm_create(__func__, SOCKET_ID_ANY, &config);
> +	TEST_LPM_ASSERT(lpm != NULL);
> +
> +	/* Measure add. */
> +	begin = rte_rdtsc();
> +
> +	for (i = 0; i < NUM_ROUTE_ENTRIES; i++) {
> +		if (rte_lpm_add(lpm, large_route_table[i].ip,
> +				large_route_table[i].depth, next_hop_add) == 0)
> +			status++;
> +	}
> +	/* End Timer. */
> +	total_time = rte_rdtsc() - begin;
> +
> +	printf("Unique added entries = %d\n", status);
> +	/* Obtain add statistics. */
> +	for (i = 0; i < RTE_LPM_TBL24_NUM_ENTRIES; i++) {
> +		if (lpm->tbl24[i].valid)
> +			lpm_used_entries++;
> +
> +		if (i % 32 == 0) {
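> +			/* Checkpoint every 32 entries: if additional valid
> +			 * entries appeared since the last checkpoint, count
> +			 * one more cache line as used. */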
> +			if ((uint64_t)count < lpm_used_entries) {
> +				cache_line_counter++;
> +				count = lpm_used_entries;
> +			}
> +		}
> +	}
> +
> +	printf("Used table 24 entries = %u (%g%%)\n",
> +			(unsigned) lpm_used_entries,
> +			(lpm_used_entries * 100.0) / RTE_LPM_TBL24_NUM_ENTRIES);
> +	printf("64 byte Cache entries used = %u (%u bytes)\n",
> +			(unsigned) cache_line_counter, (unsigned) cache_line_counter * 64);
> +
> +	printf("Average LPM Add: %g cycles\n",
> +			(double)total_time / NUM_ROUTE_ENTRIES);
> +
> +	/* Measure single Lookup */
> +	total_time = 0;
> +	count = 0;
> +
> +	for (i = 0; i < ITERATIONS; i++) {
> +		static uint32_t ip_batch[BATCH_SIZE];
> +
> +		for (j = 0; j < BATCH_SIZE; j++)
> +			ip_batch[j] = rte_rand();
> +
> +		/* Lookup per batch */
> +		begin = rte_rdtsc();
> +
> +		for (j = 0; j < BATCH_SIZE; j++) {
> +			if (rte_lpm_lookup(lpm, ip_batch[j], &next_hop_return) != 0)
> +				count++;
> +		}
> +
> +		total_time += rte_rdtsc() - begin;
> +
> +	}
> +	printf("Average LPM Lookup: %.1f cycles (fails = %.1f%%)\n",
> +			(double)total_time / ((double)ITERATIONS * BATCH_SIZE),
> +			(count * 100.0) / (double)(ITERATIONS * BATCH_SIZE));
> +
> +	/* Measure bulk Lookup */
> +	total_time = 0;
> +	count = 0;
> +	for (i = 0; i < ITERATIONS; i++) {
> +		static uint32_t ip_batch[BATCH_SIZE];
> +		uint32_t next_hops[BULK_SIZE];
> +
> +		/* Create array of random IP addresses */
> +		for (j = 0; j < BATCH_SIZE; j++)
> +			ip_batch[j] = rte_rand();
> +
> +		/* Lookup per batch */
> +		begin = rte_rdtsc();
> +		for (j = 0; j < BATCH_SIZE; j += BULK_SIZE) {
> +			unsigned k;
> +			rte_lpm_lookup_bulk(lpm, &ip_batch[j], next_hops, BULK_SIZE);
> +			for (k = 0; k < BULK_SIZE; k++)
> +				if (unlikely(!(next_hops[k] & RTE_LPM_LOOKUP_SUCCESS)))
> +					count++;
> +		}
> +
> +		total_time += rte_rdtsc() - begin;
> +	}
> +	printf("BULK LPM Lookup: %.1f cycles (fails = %.1f%%)\n",
> +			(double)total_time / ((double)ITERATIONS * BATCH_SIZE),
> +			(count * 100.0) / (double)(ITERATIONS * BATCH_SIZE));
> +
> +	/* Measure LookupX4 */
> +	total_time = 0;
> +	count = 0;
> +	for (i = 0; i < ITERATIONS; i++) {
> +		static uint32_t ip_batch[BATCH_SIZE];
> +		uint32_t next_hops[4];
> +
> +		/* Create array of random IP addresses */
> +		for (j = 0; j < BATCH_SIZE; j++)
> +			ip_batch[j] = rte_rand();
> +
> +		/* Lookup per batch */
> +		begin = rte_rdtsc();
> +		for (j = 0; j < BATCH_SIZE; j += RTE_DIM(next_hops)) {
> +			unsigned k;
> +			xmm_t ipx4;
> +
> +			/* unaligned load; ip_batch has no 16-byte guarantee */
> +			ipx4 = vect_loadu_sil128((xmm_t *)(ip_batch + j));
> +			rte_lpm_lookupx4(lpm, ipx4, next_hops, UINT32_MAX);
> +			for (k = 0; k < RTE_DIM(next_hops); k++)
> +				if (unlikely(next_hops[k] == UINT32_MAX))
> +					count++;
> +		}
> +
> +		total_time += rte_rdtsc() - begin;
> +	}
> +	printf("LPM LookupX4: %.1f cycles (fails = %.1f%%)\n",
> +			(double)total_time / ((double)ITERATIONS * BATCH_SIZE),
> +			(count * 100.0) / (double)(ITERATIONS * BATCH_SIZE));
> +
> +	/* Measure delete. */
> +	status = 0;
> +	total_time = 0;
> +	begin = rte_rdtsc();
> +
> +	for (i = 0; i < NUM_ROUTE_ENTRIES; i++) {
> +		/* rte_lpm_delete(lpm, ip, depth) */
> +		status += rte_lpm_delete(lpm, large_route_table[i].ip,
> +				large_route_table[i].depth);
> +	}
> +
> +	total_time += rte_rdtsc() - begin;
> +
> +	printf("Average LPM Delete: %g cycles\n",
> +			(double)total_time / NUM_ROUTE_ENTRIES);
> +
> +	rte_lpm_delete_all(lpm);
> +	rte_lpm_free(lpm);
> +
> +	return PASS;
> +}
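> +
> +/*
> + * The timing pattern used throughout perf_test() above, reduced to a
> + * minimal standalone sketch; the helper name and callback signature are
> + * illustrative only and not part of this patch:
> + */
> +static inline double
> +measure_avg_cycles(void (*op)(void *), void *arg, unsigned int iterations)
> +{
> +	uint64_t begin = rte_rdtsc();
> +	unsigned int i;
> +
> +	for (i = 0; i < iterations; i++)
> +		op(arg);
> +
> +	/* rte_rdtsc() counts TSC cycles; divide by iterations for the mean */
> +	return (double)(rte_rdtsc() - begin) / iterations;
> +}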
> +
> +/*
> + * Do all unit and performance tests.
> + */
> +
> +static int
> +test_lpm(void)
> +{
> +	unsigned i;
> +	int status, global_status = 0;
> +
> +	for (i = 0; i < NUM_LPM_TESTS; i++) {
> +		status = tests[i]();
> +		if (status < 0) {
> +			printf("ERROR: LPM Test %s: FAIL\n", RTE_STR(tests[i]));
> +			global_status = status;
> +		}
> +	}
> +
> +	return global_status;
> +}
> +
> +REGISTER_TEST_COMMAND_VERSION(lpm_autotest, test_lpm, TEST_DPDK_ABI_VERSION_V1604);
> diff --git a/app/test/v16.04/test_v1604.c b/app/test/v16.04/test_v1604.c
> new file mode 100644
> index 000000000..a5399bbfe
> --- /dev/null
> +++ b/app/test/v16.04/test_v1604.c
> @@ -0,0 +1,14 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2010-2014 Intel Corporation
> + */
> +
> +#include <stdio.h>
> +#include <stdint.h>
> +#include <stdlib.h>
> +
> +#include <rte_ip.h>
> +#include <rte_lpm.h>
> +
> +#include "../test.h"
> +
> +REGISTER_TEST_ABI_VERSION(v1604, TEST_DPDK_ABI_VERSION_V1604);
> diff --git a/app/test/v2.0/dcompat.h b/app/test/v2.0/dcompat.h
> new file mode 100644
> index 000000000..108fcf8f6
> --- /dev/null
> +++ b/app/test/v2.0/dcompat.h
> @@ -0,0 +1,23 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2010-2019 Intel Corporation
> + */
> +
> +#ifndef _DCOMPAT_H_
> +#define _DCOMPAT_H_
> +
> +#define ABI_VERSION DPDK_2.0
> +
> +#define MAP_ABI_SYMBOL(name) \
> +	MAP_ABI_SYMBOL_VERSION(name, ABI_VERSION)
> +
> +MAP_ABI_SYMBOL(rte_lpm_add);
> +MAP_ABI_SYMBOL(rte_lpm_create);
> +MAP_ABI_SYMBOL(rte_lpm_delete);
> +MAP_ABI_SYMBOL(rte_lpm_delete_all);
> +MAP_ABI_SYMBOL(rte_lpm_find_existing);
> +MAP_ABI_SYMBOL(rte_lpm_free);
> +MAP_ABI_SYMBOL(rte_lpm_is_rule_present);
> +
> +#undef MAP_ABI_SYMBOL
> +
> +#endif
> diff --git a/app/test/v2.0/rte_lpm.h b/app/test/v2.0/rte_lpm.h
> new file mode 100644
> index 000000000..b1efd1c2d
> --- /dev/null
> +++ b/app/test/v2.0/rte_lpm.h
> @@ -0,0 +1,443 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2010-2014 Intel Corporation
> + */
> +
> +#ifndef _RTE_LPM_H_
> +#define _RTE_LPM_H_
> +
> +/**
> + * @file
> + * RTE Longest Prefix Match (LPM)
> + */
> +
> +#include <errno.h>
> +#include <sys/queue.h>
> +#include <stdint.h>
> +#include <stdlib.h>
> +#include <rte_branch_prediction.h>
> +#include <rte_memory.h>
> +#include <rte_common.h>
> +#include <rte_vect.h>
> +
> +#ifdef __cplusplus
> +extern "C" {
> +#endif
> +
> +/** Max number of characters in LPM name. */
> +#define RTE_LPM_NAMESIZE                32
> +
> +/** @deprecated Possible location to allocate memory. This was for last
> + * parameter of rte_lpm_create(), but is now redundant. The LPM table is always
> + * allocated in memory using librte_malloc which uses a memzone. */
> +#define RTE_LPM_HEAP                    0
> +
> +/** @deprecated Possible location to allocate memory. This was for last
> + * parameter of rte_lpm_create(), but is now redundant. The LPM table is always
> + * allocated in memory using librte_malloc which uses a memzone. */
> +#define RTE_LPM_MEMZONE                 1
> +
> +/** Maximum depth value possible for IPv4 LPM. */
> +#define RTE_LPM_MAX_DEPTH               32
> +
> +/** @internal Total number of tbl24 entries. */
> +#define RTE_LPM_TBL24_NUM_ENTRIES       (1 << 24)
> +
> +/** @internal Number of entries in a tbl8 group. */
> +#define RTE_LPM_TBL8_GROUP_NUM_ENTRIES  256
> +
> +/** @internal Total number of tbl8 groups in the tbl8. */
> +#define RTE_LPM_TBL8_NUM_GROUPS         256
> +
> +/** @internal Total number of tbl8 entries. */
> +#define RTE_LPM_TBL8_NUM_ENTRIES        (RTE_LPM_TBL8_NUM_GROUPS * \
> +					RTE_LPM_TBL8_GROUP_NUM_ENTRIES)
> +
> +/** @internal Macro to enable/disable run-time checks. */
> +#if defined(RTE_LIBRTE_LPM_DEBUG)
> +#define RTE_LPM_RETURN_IF_TRUE(cond, retval) do { \
> +	if (cond) \
> +		return (retval); \
> +} while (0)
> +#else
> +#define RTE_LPM_RETURN_IF_TRUE(cond, retval)
> +#endif
> +
> +/** @internal bitmask with valid and ext_entry/valid_group fields set */
> +#define RTE_LPM_VALID_EXT_ENTRY_BITMASK 0x0300
> +
> +/** Bitmask used to indicate successful lookup */
> +#define RTE_LPM_LOOKUP_SUCCESS          0x0100
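> +
> +/*
> + * Editorial note: with the little-endian, LSB-first bitfield layout used
> + * below, reading a 2-byte entry as uint16_t puts 'valid' at bit 8 (0x0100)
> + * and 'ext_entry'/'valid_group' at bit 9, hence the 0x0300 mask above.
> + */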
> +
> +/** @internal Tbl24 entry structure. */
> +struct rte_lpm_tbl24_entry {
> +	/* Stores next hop or group index (i.e. gindex) into tbl8. */
> +	union {
> +		uint8_t next_hop;
> +		uint8_t tbl8_gindex;
> +	};
> +	/* Using single uint8_t to store 3 values. */
> +	uint8_t valid     :1; /**< Validation flag. */
> +	uint8_t ext_entry :1; /**< External entry. */
> +	uint8_t depth     :6; /**< Rule depth. */
> +};
> +
> +/** @internal Tbl8 entry structure. */
> +struct rte_lpm_tbl8_entry {
> +	uint8_t next_hop; /**< next hop. */
> +	/* Using single uint8_t to store 3 values. */
> +	uint8_t valid       :1; /**< Validation flag. */
> +	uint8_t valid_group :1; /**< Group validation flag. */
> +	uint8_t depth       :6; /**< Rule depth. */
> +};
> +
> +/** @internal Rule structure. */
> +struct rte_lpm_rule {
> +	uint32_t ip; /**< Rule IP address. */
> +	uint8_t  next_hop; /**< Rule next hop. */
> +};
> +
> +/** @internal Contains metadata about the rules table. */
> +struct rte_lpm_rule_info {
> +	uint32_t used_rules; /**< Used rules so far. */
> +	uint32_t first_rule; /**< Indexes the first rule of a given depth. */
> +};
> +
> +/** @internal LPM structure. */
> +struct rte_lpm {
> +	/* LPM metadata. */
> +	char name[RTE_LPM_NAMESIZE];        /**< Name of the lpm. */
> +	int mem_location; /**< @deprecated @see RTE_LPM_HEAP and RTE_LPM_MEMZONE. */
> +	uint32_t max_rules; /**< Max. balanced rules per lpm. */
> +	struct rte_lpm_rule_info rule_info[RTE_LPM_MAX_DEPTH]; /**< Rule info table. */
> +
> +	/* LPM Tables. */
> +	struct rte_lpm_tbl24_entry tbl24[RTE_LPM_TBL24_NUM_ENTRIES] \
> +			__rte_cache_aligned; /**< LPM tbl24 table. */
> +	struct rte_lpm_tbl8_entry tbl8[RTE_LPM_TBL8_NUM_ENTRIES] \
> +			__rte_cache_aligned; /**< LPM tbl8 table. */
> +	struct rte_lpm_rule rules_tbl[0] \
> +			__rte_cache_aligned; /**< LPM rules. */
> +};
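> +
> +/*
> + * Editorial note: rules_tbl[0] is an old-style flexible array member;
> + * rte_lpm_create() is expected to allocate the struct and max_rules rule
> + * entries in one block, so the rules table tails the fixed-size tables.
> + */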
> +
> +/**
> + * Create an LPM object.
> + *
> + * @param name
> + *   LPM object name
> + * @param socket_id
> + *   NUMA socket ID for LPM table memory allocation
> + * @param max_rules
> + *   Maximum number of LPM rules that can be added
> + * @param flags
> + *   This parameter is currently unused
> + * @return
> + *   Handle to LPM object on success, NULL otherwise with rte_errno set
> + *   to an appropriate value. Possible rte_errno values include:
> + *    - E_RTE_NO_CONFIG - function could not get pointer to rte_config structure
> + *    - E_RTE_SECONDARY - function was called from a secondary process instance
> + *    - EINVAL - invalid parameter passed to function
> + *    - ENOSPC - the maximum number of memzones has already been allocated
> + *    - EEXIST - a memzone with the same name already exists
> + *    - ENOMEM - no appropriate memory area found in which to create memzone
> + */
> +struct rte_lpm *
> +rte_lpm_create(const char *name, int socket_id, int max_rules, int flags);
> +
> +/**
> + * Find an existing LPM object and return a pointer to it.
> + *
> + * @param name
> + *   Name of the lpm object as passed to rte_lpm_create()
> + * @return
> + *   Pointer to lpm object or NULL if object not found with rte_errno
> + *   set appropriately. Possible rte_errno values include:
> + *    - ENOENT - required entry not available to return.
> + */
> +struct rte_lpm *
> +rte_lpm_find_existing(const char *name);
> +
> +/**
> + * Free an LPM object.
> + *
> + * @param lpm
> + *   LPM object handle
> + * @return
> + *   None
> + */
> +void
> +rte_lpm_free(struct rte_lpm *lpm);
> +
> +/**
> + * Add a rule to the LPM table.
> + *
> + * @param lpm
> + *   LPM object handle
> + * @param ip
> + *   IP of the rule to be added to the LPM table
> + * @param depth
> + *   Depth of the rule to be added to the LPM table
> + * @param next_hop
> + *   Next hop of the rule to be added to the LPM table
> + * @return
> + *   0 on success, negative value otherwise
> + */
> +int
> +rte_lpm_add(struct rte_lpm *lpm, uint32_t ip, uint8_t depth, uint8_t next_hop);
> +
> +/**
> + * Check if a rule is present in the LPM table,
> + * and provide its next hop if it is.
> + *
> + * @param lpm
> + *   LPM object handle
> + * @param ip
> + *   IP of the rule to be searched
> + * @param depth
> + *   Depth of the rule to be searched
> + * @param next_hop
> + *   Next hop of the rule (valid only if it is found)
> + * @return
> + *   1 if the rule exists, 0 if it does not, a negative value on failure
> + */
> +int
> +rte_lpm_is_rule_present(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
> +		uint8_t *next_hop);
> +
> +/**
> + * Delete a rule from the LPM table.
> + *
> + * @param lpm
> + *   LPM object handle
> + * @param ip
> + *   IP of the rule to be deleted from the LPM table
> + * @param depth
> + *   Depth of the rule to be deleted from the LPM table
> + * @return
> + *   0 on success, negative value otherwise
> + */
> +int
> +rte_lpm_delete(struct rte_lpm *lpm, uint32_t ip, uint8_t depth);
> +
> +/**
> + * Delete all rules from the LPM table.
> + *
> + * @param lpm
> + *   LPM object handle
> + */
> +void
> +rte_lpm_delete_all(struct rte_lpm *lpm);
> +
> +/**
> + * Lookup an IP into the LPM table.
> + *
> + * @param lpm
> + *   LPM object handle
> + * @param ip
> + *   IP to be looked up in the LPM table
> + * @param next_hop
> + *   Next hop of the most specific rule found for IP (valid on lookup hit only)
> + * @return
> + *   -EINVAL for incorrect arguments, -ENOENT on lookup miss, 0 on lookup hit
> + */
> +static inline int
> +rte_lpm_lookup(struct rte_lpm *lpm, uint32_t ip, uint8_t *next_hop)
> +{
> +	unsigned tbl24_index = (ip >> 8);
> +	uint16_t tbl_entry;
> +
> +	/* DEBUG: Check user input arguments. */
> +	RTE_LPM_RETURN_IF_TRUE(((lpm == NULL) || (next_hop == NULL)), -EINVAL);
> +
> +	/* Copy tbl24 entry */
> +	tbl_entry = *(const uint16_t *)&lpm->tbl24[tbl24_index];
> +
> +	/* Copy tbl8 entry (only if needed) */
> +	if (unlikely((tbl_entry & RTE_LPM_VALID_EXT_ENTRY_BITMASK) ==
> +			RTE_LPM_VALID_EXT_ENTRY_BITMASK)) {
> +
> +		unsigned tbl8_index = (uint8_t)ip +
> +				((uint8_t)tbl_entry * RTE_LPM_TBL8_GROUP_NUM_ENTRIES);
> +
> +		tbl_entry = *(const uint16_t *)&lpm->tbl8[tbl8_index];
> +	}
> +
> +	*next_hop = (uint8_t)tbl_entry;
> +	return (tbl_entry & RTE_LPM_LOOKUP_SUCCESS) ? 0 : -ENOENT;
> +}
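> +
> +/*
> + * Hedged usage sketch for the v2.0 single lookup above; the helper name
> + * is illustrative only and not part of the original header:
> + */
> +static inline int
> +lpm_v20_lookup_example(struct rte_lpm *lpm, uint32_t ip)
> +{
> +	uint8_t next_hop;
> +	int ret = rte_lpm_lookup(lpm, ip, &next_hop);
> +
> +	if (ret == 0)
> +		return next_hop;	/* hit: the 8-bit next hop */
> +	return ret;	/* miss: -ENOENT (or -EINVAL in debug builds) */
> +}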
> +
> +/**
> + * Lookup multiple IP addresses in an LPM table. This may be implemented as a
> + * macro, so the address of the function should not be used.
> + *
> + * @param lpm
> + *   LPM object handle
> + * @param ips
> + *   Array of IPs to be looked up in the LPM table
> + * @param next_hops
> + *   Next hop of the most specific rule found for IP (valid on lookup hit only).
> + *   This is an array of two byte values. The most significant byte in each
> + *   value says whether the lookup was successful (bitmask
> + *   RTE_LPM_LOOKUP_SUCCESS is set). The least significant byte is the
> + *   actual next hop.
> + * @param n
> + *   Number of elements in ips (and next_hops) array to lookup. This should be a
> + *   compile time constant, and divisible by 8 for best performance.
> + *  @return
> + *   -EINVAL for incorrect arguments, otherwise 0
> + */
> +#define rte_lpm_lookup_bulk(lpm, ips, next_hops, n) \
> +		rte_lpm_lookup_bulk_func(lpm, ips, next_hops, n)
> +
> +static inline int
> +rte_lpm_lookup_bulk_func(const struct rte_lpm *lpm, const uint32_t *ips,
> +		uint16_t *next_hops, const unsigned n)
> +{
> +	unsigned i;
> +	unsigned tbl24_indexes[n];
> +
> +	/* DEBUG: Check user input arguments. */
> +	RTE_LPM_RETURN_IF_TRUE(((lpm == NULL) || (ips == NULL) ||
> +			(next_hops == NULL)), -EINVAL);
> +
> +	for (i = 0; i < n; i++) {
> +		tbl24_indexes[i] = ips[i] >> 8;
> +	}
> +
> +	for (i = 0; i < n; i++) {
> +		/* Simply copy tbl24 entry to output */
> +		next_hops[i] = *(const uint16_t *)&lpm->tbl24[tbl24_indexes[i]];
> +
> +		/* Overwrite output with tbl8 entry if needed */
> +		if (unlikely((next_hops[i] & RTE_LPM_VALID_EXT_ENTRY_BITMASK) ==
> +				RTE_LPM_VALID_EXT_ENTRY_BITMASK)) {
> +
> +			unsigned tbl8_index = (uint8_t)ips[i] +
> +					((uint8_t)next_hops[i] *
> +					 RTE_LPM_TBL8_GROUP_NUM_ENTRIES);
> +
> +			next_hops[i] = *(const uint16_t *)&lpm->tbl8[tbl8_index];
> +		}
> +	}
> +	return 0;
> +}
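> +
> +/*
> + * Hedged usage sketch for the bulk API above; the helper name is
> + * illustrative only. The high byte of each result carries
> + * RTE_LPM_LOOKUP_SUCCESS, the low byte the next hop:
> + */
> +static inline unsigned int
> +lpm_v20_count_bulk_hits(const struct rte_lpm *lpm, const uint32_t *ips,
> +		uint16_t *next_hops, unsigned int n)
> +{
> +	unsigned int i, hits = 0;
> +
> +	rte_lpm_lookup_bulk(lpm, ips, next_hops, n);
> +	for (i = 0; i < n; i++)
> +		if (next_hops[i] & RTE_LPM_LOOKUP_SUCCESS)
> +			hits++;
> +	return hits;
> +}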
> +
> +/* Mask four results. */
> +#define	 RTE_LPM_MASKX4_RES	UINT64_C(0x00ff00ff00ff00ff)
> +
> +/**
> + * Lookup four IP addresses in an LPM table.
> + *
> + * @param lpm
> + *   LPM object handle
> + * @param ip
> + *   Four IPs to be looked up in the LPM table
> + * @param hop
> + *   Next hop of the most specific rule found for IP (valid on lookup hit only).
> + *   This is a 4-element array of two-byte values.
> + *   If the lookup was successful for the given IP, then the least significant
> + *   byte of the corresponding element is the actual next hop and the most
> + *   significant byte is zero.
> + *   If the lookup for the given IP failed, then the corresponding element
> + *   contains the default value; see the description of the next parameter.
> + * @param defv
> + *   Default value to populate into the corresponding element of the hop[]
> + *   array if the lookup fails.
> + */
> +static inline void
> +rte_lpm_lookupx4(const struct rte_lpm *lpm, __m128i ip, uint16_t hop[4],
> +	uint16_t defv)
> +{
> +	__m128i i24;
> +	rte_xmm_t i8;
> +	uint16_t tbl[4];
> +	uint64_t idx, pt;
> +
> +	const __m128i mask8 =
> +		_mm_set_epi32(UINT8_MAX, UINT8_MAX, UINT8_MAX, UINT8_MAX);
> +
> +	/*
> +	 * RTE_LPM_VALID_EXT_ENTRY_BITMASK for 4 LPM entries
> +	 * as one 64-bit value (0x0300030003000300).
> +	 */
> +	const uint64_t mask_xv =
> +		((uint64_t)RTE_LPM_VALID_EXT_ENTRY_BITMASK |
> +		(uint64_t)RTE_LPM_VALID_EXT_ENTRY_BITMASK << 16 |
> +		(uint64_t)RTE_LPM_VALID_EXT_ENTRY_BITMASK << 32 |
> +		(uint64_t)RTE_LPM_VALID_EXT_ENTRY_BITMASK << 48);
> +
> +	/*
> +	 * RTE_LPM_LOOKUP_SUCCESS for 4 LPM entries
> +	 * as one 64-bit value (0x0100010001000100).
> +	 */
> +	const uint64_t mask_v =
> +		((uint64_t)RTE_LPM_LOOKUP_SUCCESS |
> +		(uint64_t)RTE_LPM_LOOKUP_SUCCESS << 16 |
> +		(uint64_t)RTE_LPM_LOOKUP_SUCCESS << 32 |
> +		(uint64_t)RTE_LPM_LOOKUP_SUCCESS << 48);
> +
> +	/* get 4 indexes for tbl24[]. */
> +	i24 = _mm_srli_epi32(ip, CHAR_BIT);
> +
> +	/* extract values from tbl24[] */
> +	idx = _mm_cvtsi128_si64(i24);
> +	i24 = _mm_srli_si128(i24, sizeof(uint64_t));
> +
> +	tbl[0] = *(const uint16_t *)&lpm->tbl24[(uint32_t)idx];
> +	tbl[1] = *(const uint16_t *)&lpm->tbl24[idx >> 32];
> +
> +	idx = _mm_cvtsi128_si64(i24);
> +
> +	tbl[2] = *(const uint16_t *)&lpm->tbl24[(uint32_t)idx];
> +	tbl[3] = *(const uint16_t *)&lpm->tbl24[idx >> 32];
> +
> +	/* get 4 indexes for tbl8[]. */
> +	i8.x = _mm_and_si128(ip, mask8);
> +
> +	pt = (uint64_t)tbl[0] |
> +		(uint64_t)tbl[1] << 16 |
> +		(uint64_t)tbl[2] << 32 |
> +		(uint64_t)tbl[3] << 48;
> +
> +	/* search successfully finished for all 4 IP addresses. */
> +	if (likely((pt & mask_xv) == mask_v)) {
> +		uintptr_t ph = (uintptr_t)hop;
> +		*(uint64_t *)ph = pt & RTE_LPM_MASKX4_RES;
> +		return;
> +	}
> +
> +	if (unlikely((pt & RTE_LPM_VALID_EXT_ENTRY_BITMASK) ==
> +			RTE_LPM_VALID_EXT_ENTRY_BITMASK)) {
> +		i8.u32[0] = i8.u32[0] +
> +			(uint8_t)tbl[0] * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
> +		tbl[0] = *(const uint16_t *)&lpm->tbl8[i8.u32[0]];
> +	}
> +	if (unlikely((pt >> 16 & RTE_LPM_VALID_EXT_ENTRY_BITMASK) ==
> +			RTE_LPM_VALID_EXT_ENTRY_BITMASK)) {
> +		i8.u32[1] = i8.u32[1] +
> +			(uint8_t)tbl[1] * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
> +		tbl[1] = *(const uint16_t *)&lpm->tbl8[i8.u32[1]];
> +	}
> +	if (unlikely((pt >> 32 & RTE_LPM_VALID_EXT_ENTRY_BITMASK) ==
> +			RTE_LPM_VALID_EXT_ENTRY_BITMASK)) {
> +		i8.u32[2] = i8.u32[2] +
> +			(uint8_t)tbl[2] * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
> +		tbl[2] = *(const uint16_t *)&lpm->tbl8[i8.u32[2]];
> +	}
> +	if (unlikely((pt >> 48 & RTE_LPM_VALID_EXT_ENTRY_BITMASK) ==
> +			RTE_LPM_VALID_EXT_ENTRY_BITMASK)) {
> +		i8.u32[3] = i8.u32[3] +
> +			(uint8_t)tbl[3] * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
> +		tbl[3] = *(const uint16_t *)&lpm->tbl8[i8.u32[3]];
> +	}
> +
> +	hop[0] = (tbl[0] & RTE_LPM_LOOKUP_SUCCESS) ? (uint8_t)tbl[0] : defv;
> +	hop[1] = (tbl[1] & RTE_LPM_LOOKUP_SUCCESS) ? (uint8_t)tbl[1] : defv;
> +	hop[2] = (tbl[2] & RTE_LPM_LOOKUP_SUCCESS) ? (uint8_t)tbl[2] : defv;
> +	hop[3] = (tbl[3] & RTE_LPM_LOOKUP_SUCCESS) ? (uint8_t)tbl[3] : defv;
> +}
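> +
> +/*
> + * Hedged usage sketch for rte_lpm_lookupx4(); the helper name is
> + * illustrative only. Misses come back as the caller-supplied default:
> + */
> +static inline void
> +lpm_v20_lookupx4_example(const struct rte_lpm *lpm, const uint32_t addr[4],
> +	uint16_t hop[4])
> +{
> +	/* _mm_set_epi32() takes arguments most-significant lane first */
> +	__m128i ipx4 = _mm_set_epi32(addr[3], addr[2], addr[1], addr[0]);
> +
> +	rte_lpm_lookupx4(lpm, ipx4, hop, UINT16_MAX);
> +	/* hop[i] is now the next hop for addr[i], or UINT16_MAX on a miss */
> +}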
> +
> +#ifdef __cplusplus
> +}
> +#endif
> +
> +#endif /* _RTE_LPM_H_ */
> diff --git a/app/test/v2.0/test_lpm.c b/app/test/v2.0/test_lpm.c
> new file mode 100644
> index 000000000..e71d213ba
> --- /dev/null
> +++ b/app/test/v2.0/test_lpm.c
> @@ -0,0 +1,1306 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2010-2019 Intel Corporation
> + *
> + * LPM Autotests from DPDK v2.0 for ABI compatibility testing.
> + *
> + */
> +
> +#include <stdio.h>
> +#include <stdint.h>
> +#include <stdlib.h>
> +#include <errno.h>
> +#include <sys/queue.h>
> +
> +#include <rte_common.h>
> +#include <rte_cycles.h>
> +#include <rte_memory.h>
> +#include <rte_random.h>
> +#include <rte_branch_prediction.h>
> +#include <rte_ip.h>
> +#include <time.h>
> +
> +#include "../test_lpm_routes.h"
> +#include "../test.h"
> +
> +/* remapping of DPDK v2.0 symbols */
> +#include "dcompat.h"
> +/* backported header from DPDK v2.0 */
> +#include "rte_lpm.h"
> +
> +#define TEST_LPM_ASSERT(cond) do {                                            \
> +	if (!(cond)) {                                                        \
> +		printf("Error at line %d:\n", __LINE__);                      \
> +		return -1;                                                    \
> +	}                                                                     \
> +} while (0)
> +
> +typedef int32_t (*rte_lpm_test)(void);
> +
> +static int32_t test0(void);
> +static int32_t test1(void);
> +static int32_t test2(void);
> +static int32_t test3(void);
> +static int32_t test4(void);
> +static int32_t test5(void);
> +static int32_t test6(void);
> +static int32_t test7(void);
> +static int32_t test8(void);
> +static int32_t test9(void);
> +static int32_t test10(void);
> +static int32_t test11(void);
> +static int32_t test12(void);
> +static int32_t test13(void);
> +static int32_t test14(void);
> +static int32_t test15(void);
> +static int32_t test16(void);
> +static int32_t test17(void);
> +static int32_t perf_test(void);
> +
> +static rte_lpm_test tests[] = {
> +/* Test Cases */
> +	test0,
> +	test1,
> +	test2,
> +	test3,
> +	test4,
> +	test5,
> +	test6,
> +	test7,
> +	test8,
> +	test9,
> +	test10,
> +	test11,
> +	test12,
> +	test13,
> +	test14,
> +	test15,
> +	test16,
> +	test17,
> +	perf_test,
> +};
> +
> +#define NUM_LPM_TESTS (sizeof(tests)/sizeof(tests[0]))
> +#define MAX_DEPTH 32
> +#define MAX_RULES 256
> +#define PASS 0
> +
> +/*
> + * Check that rte_lpm_create fails gracefully for incorrect user input
> + * arguments
> + */
> +int32_t
> +test0(void)
> +{
> +	struct rte_lpm *lpm = NULL;
> +
> +	/* rte_lpm_create: lpm name == NULL */
> +	lpm = rte_lpm_create(NULL, SOCKET_ID_ANY, MAX_RULES, 0);
> +	TEST_LPM_ASSERT(lpm == NULL);
> +
> +	/* rte_lpm_create: max_rules = 0 */
> +	/* Note: __func__ inserts the function name, in this case "test0". */
> +	lpm = rte_lpm_create(__func__, SOCKET_ID_ANY, 0, 0);
> +	TEST_LPM_ASSERT(lpm == NULL);
> +
> +	/* socket_id < -1 is invalid */
> +	lpm = rte_lpm_create(__func__, -2, MAX_RULES, 0);
> +	TEST_LPM_ASSERT(lpm == NULL);
> +
> +	return PASS;
> +}
> +
> +/*
> + * Create an lpm table, then delete it, 100 times.
> + * Use a slightly different rules size each time.
> + */
> +int32_t
> +test1(void)
> +{
> +	struct rte_lpm *lpm = NULL;
> +	int32_t i;
> +
> +	/* Create and free an lpm table repeatedly */
> +	for (i = 0; i < 100; i++) {
> +		lpm = rte_lpm_create(__func__, SOCKET_ID_ANY, MAX_RULES - i, 0);
> +		TEST_LPM_ASSERT(lpm != NULL);
> +
> +		rte_lpm_free(lpm);
> +	}
> +
> +	/* Cannot verify free, so just return success */
> +	return PASS;
> +}
> +
> +/*
> + * Call rte_lpm_free for NULL pointer user input. Note: free has no return
> + * value, so it is impossible to check for failure, but this test is added to
> + * increase function coverage metrics and to validate that freeing NULL does
> + * not crash.
> + */
> +int32_t
> +test2(void)
> +{
> +	struct rte_lpm *lpm = NULL;
> +
> +	lpm = rte_lpm_create(__func__, SOCKET_ID_ANY, MAX_RULES, RTE_LPM_HEAP);
> +	TEST_LPM_ASSERT(lpm != NULL);
> +
> +	rte_lpm_free(lpm);
> +	rte_lpm_free(NULL);
> +	return PASS;
> +}
> +
> +/*
> + * Check that rte_lpm_add fails gracefully for incorrect user input arguments
> + */
> +int32_t
> +test3(void)
> +{
> +	struct rte_lpm *lpm = NULL;
> +	uint32_t ip = IPv4(0, 0, 0, 0);
> +	uint8_t depth = 24, next_hop = 100;
> +	int32_t status = 0;
> +
> +	/* rte_lpm_add: lpm == NULL */
> +	status = rte_lpm_add(NULL, ip, depth, next_hop);
> +	TEST_LPM_ASSERT(status < 0);
> +
> +	/* Create valid lpm to use in rest of test. */
> +	lpm = rte_lpm_create(__func__, SOCKET_ID_ANY, MAX_RULES, 0);
> +	TEST_LPM_ASSERT(lpm != NULL);
> +
> +	/* rte_lpm_add: depth < 1 */
> +	status = rte_lpm_add(lpm, ip, 0, next_hop);
> +	TEST_LPM_ASSERT(status < 0);
> +
> +	/* rte_lpm_add: depth > MAX_DEPTH */
> +	status = rte_lpm_add(lpm, ip, (MAX_DEPTH + 1), next_hop);
> +	TEST_LPM_ASSERT(status < 0);
> +
> +	rte_lpm_free(lpm);
> +
> +	return PASS;
> +}
> +
> +/*
> + * Check that rte_lpm_delete fails gracefully for incorrect user input
> + * arguments
> + */
> +int32_t
> +test4(void)
> +{
> +	struct rte_lpm *lpm = NULL;
> +	uint32_t ip = IPv4(0, 0, 0, 0);
> +	uint8_t depth = 24;
> +	int32_t status = 0;
> +
> +	/* rte_lpm_delete: lpm == NULL */
> +	status = rte_lpm_delete(NULL, ip, depth);
> +	TEST_LPM_ASSERT(status < 0);
> +
> +	/* Create valid lpm to use in rest of test. */
> +	lpm = rte_lpm_create(__func__, SOCKET_ID_ANY, MAX_RULES, 0);
> +	TEST_LPM_ASSERT(lpm != NULL);
> +
> +	/* rte_lpm_delete: depth < 1 */
> +	status = rte_lpm_delete(lpm, ip, 0);
> +	TEST_LPM_ASSERT(status < 0);
> +
> +	/* rte_lpm_delete: depth > MAX_DEPTH */
> +	status = rte_lpm_delete(lpm, ip, (MAX_DEPTH + 1));
> +	TEST_LPM_ASSERT(status < 0);
> +
> +	rte_lpm_free(lpm);
> +
> +	return PASS;
> +}
> +
> +/*
> + * Check that rte_lpm_lookup fails gracefully for incorrect user input
> + * arguments
> + */
> +int32_t
> +test5(void)
> +{
> +#if defined(RTE_LIBRTE_LPM_DEBUG)
> +	struct rte_lpm *lpm = NULL;
> +	uint32_t ip = IPv4(0, 0, 0, 0);
> +	uint8_t next_hop_return = 0;
> +	int32_t status = 0;
> +
> +	/* rte_lpm_lookup: lpm == NULL */
> +	status = rte_lpm_lookup(NULL, ip, &next_hop_return);
> +	TEST_LPM_ASSERT(status < 0);
> +
> +	/* Create valid lpm to use in rest of test. */
> +	lpm = rte_lpm_create(__func__, SOCKET_ID_ANY, MAX_RULES, 0);
> +	TEST_LPM_ASSERT(lpm != NULL);
> +
> +	/* rte_lpm_lookup: depth < 1 */
> +	status = rte_lpm_lookup(lpm, ip, NULL);
> +	TEST_LPM_ASSERT(status < 0);
> +
> +	rte_lpm_free(lpm);
> +#endif
> +	return PASS;
> +}
> +
> +/*
> + * Call add, lookup and delete for a single rule with depth <= 24
> + */
> +int32_t
> +test6(void)
> +{
> +	struct rte_lpm *lpm = NULL;
> +	uint32_t ip = IPv4(0, 0, 0, 0);
> +	uint8_t depth = 24, next_hop_add = 100, next_hop_return = 0;
> +	int32_t status = 0;
> +
> +	lpm = rte_lpm_create(__func__, SOCKET_ID_ANY, MAX_RULES, 0);
> +	TEST_LPM_ASSERT(lpm != NULL);
> +
> +	status = rte_lpm_add(lpm, ip, depth, next_hop_add);
> +	TEST_LPM_ASSERT(status == 0);
> +
> +	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
> +	TEST_LPM_ASSERT((status == 0) && (next_hop_return == next_hop_add));
> +
> +	status = rte_lpm_delete(lpm, ip, depth);
> +	TEST_LPM_ASSERT(status == 0);
> +
> +	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
> +	TEST_LPM_ASSERT(status == -ENOENT);
> +
> +	rte_lpm_free(lpm);
> +
> +	return PASS;
> +}
> +
> +/*
> + * Call add, lookup and delete for a single rule with depth > 24
> + */
> +
> +int32_t
> +test7(void)
> +{
> +	__m128i ipx4;
> +	uint16_t hop[4];
> +	struct rte_lpm *lpm = NULL;
> +	uint32_t ip = IPv4(0, 0, 0, 0);
> +	uint8_t depth = 32, next_hop_add = 100, next_hop_return = 0;
> +	int32_t status = 0;
> +
> +	lpm = rte_lpm_create(__func__, SOCKET_ID_ANY, MAX_RULES, 0);
> +	TEST_LPM_ASSERT(lpm != NULL);
> +
> +	status = rte_lpm_add(lpm, ip, depth, next_hop_add);
> +	TEST_LPM_ASSERT(status == 0);
> +
> +	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
> +	TEST_LPM_ASSERT((status == 0) && (next_hop_return == next_hop_add));
> +
> +	ipx4 = _mm_set_epi32(ip, ip + 0x100, ip - 0x100, ip);
> +	rte_lpm_lookupx4(lpm, ipx4, hop, UINT16_MAX);
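> +	/* Editorial note: _mm_set_epi32() fills lanes from the most-
> +	 * significant argument down, so lanes 0/3 hold ip (hits) and
> +	 * lanes 1/2 hold ip -/+ 0x100 (misses). */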
> +	TEST_LPM_ASSERT(hop[0] == next_hop_add);
> +	TEST_LPM_ASSERT(hop[1] == UINT16_MAX);
> +	TEST_LPM_ASSERT(hop[2] == UINT16_MAX);
> +	TEST_LPM_ASSERT(hop[3] == next_hop_add);
> +
> +	status = rte_lpm_delete(lpm, ip, depth);
> +	TEST_LPM_ASSERT(status == 0);
> +
> +	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
> +	TEST_LPM_ASSERT(status == -ENOENT);
> +
> +	rte_lpm_free(lpm);
> +
> +	return PASS;
> +}
> +
> +/*
> + * Use rte_lpm_add to add rules which affect only the second half of the lpm
> + * table. Use all possible depths ranging from 1..32. Set the next hop equal
> + * to the depth. Check for a lookup hit on every add and check for a lookup
> + * miss on the first half of the lpm table after each add. Finally delete all
> + * rules going backwards (i.e. from depth = 32..1) and carry out a lookup
> + * after each delete. The lookup should return the next_hop_add value related
> + * to the previous depth value (i.e. depth - 1).
> + */
> +int32_t
> +test8(void)
> +{
> +	__m128i ipx4;
> +	uint16_t hop[4];
> +	struct rte_lpm *lpm = NULL;
> +	uint32_t ip1 = IPv4(127, 255, 255, 255), ip2 = IPv4(128, 0, 0, 0);
> +	uint8_t depth, next_hop_add, next_hop_return;
> +	int32_t status = 0;
> +
> +	lpm = rte_lpm_create(__func__, SOCKET_ID_ANY, MAX_RULES, 0);
> +	TEST_LPM_ASSERT(lpm != NULL);
> +
> +	/* Loop with rte_lpm_add. */
> +	for (depth = 1; depth <= 32; depth++) {
> +		/* Let the next_hop_add value equal the depth, just for a change. */
> +		next_hop_add = depth;
> +
> +		status = rte_lpm_add(lpm, ip2, depth, next_hop_add);
> +		TEST_LPM_ASSERT(status == 0);
> +
> +		/* Check IP in first half of tbl24 which should be empty. */
> +		status = rte_lpm_lookup(lpm, ip1, &next_hop_return);
> +		TEST_LPM_ASSERT(status == -ENOENT);
> +
> +		status = rte_lpm_lookup(lpm, ip2, &next_hop_return);
> +		TEST_LPM_ASSERT((status == 0) &&
> +			(next_hop_return == next_hop_add));
> +
> +		ipx4 = _mm_set_epi32(ip2, ip1, ip2, ip1);
> +		rte_lpm_lookupx4(lpm, ipx4, hop, UINT16_MAX);
> +		TEST_LPM_ASSERT(hop[0] == UINT16_MAX);
> +		TEST_LPM_ASSERT(hop[1] == next_hop_add);
> +		TEST_LPM_ASSERT(hop[2] == UINT16_MAX);
> +		TEST_LPM_ASSERT(hop[3] == next_hop_add);
> +	}
> +
> +	/* Loop with rte_lpm_delete. */
> +	for (depth = 32; depth >= 1; depth--) {
> +		next_hop_add = (uint8_t) (depth - 1);
> +
> +		status = rte_lpm_delete(lpm, ip2, depth);
> +		TEST_LPM_ASSERT(status == 0);
> +
> +		status = rte_lpm_lookup(lpm, ip2, &next_hop_return);
> +
> +		if (depth != 1) {
> +			TEST_LPM_ASSERT((status == 0) &&
> +				(next_hop_return == next_hop_add));
> +		} else {
> +			TEST_LPM_ASSERT(status == -ENOENT);
> +		}
> +
> +		status = rte_lpm_lookup(lpm, ip1, &next_hop_return);
> +		TEST_LPM_ASSERT(status == -ENOENT);
> +
> +		ipx4 = _mm_set_epi32(ip1, ip1, ip2, ip2);
> +		rte_lpm_lookupx4(lpm, ipx4, hop, UINT16_MAX);
> +		if (depth != 1) {
> +			TEST_LPM_ASSERT(hop[0] == next_hop_add);
> +			TEST_LPM_ASSERT(hop[1] == next_hop_add);
> +		} else {
> +			TEST_LPM_ASSERT(hop[0] == UINT16_MAX);
> +			TEST_LPM_ASSERT(hop[1] == UINT16_MAX);
> +		}
> +		TEST_LPM_ASSERT(hop[2] == UINT16_MAX);
> +		TEST_LPM_ASSERT(hop[3] == UINT16_MAX);
> +	}
> +
> +	rte_lpm_free(lpm);
> +
> +	return PASS;
> +}
> +
> +/*
> + * - Add & lookup to hit invalid TBL24 entry
> + * - Add & lookup to hit valid TBL24 entry not extended
> + * - Add & lookup to hit valid extended TBL24 entry with invalid TBL8 entry
> + * - Add & lookup to hit valid extended TBL24 entry with valid TBL8 entry
> + *
> + */
> +int32_t
> +test9(void)
> +{
> +	struct rte_lpm *lpm = NULL;
> +	uint32_t ip, ip_1, ip_2;
> +	uint8_t depth, depth_1, depth_2, next_hop_add, next_hop_add_1,
> +		next_hop_add_2, next_hop_return;
> +	int32_t status = 0;
> +
> +	/* Add & lookup to hit invalid TBL24 entry */
> +	ip = IPv4(128, 0, 0, 0);
> +	depth = 24;
> +	next_hop_add = 100;
> +
> +	lpm = rte_lpm_create(__func__, SOCKET_ID_ANY, MAX_RULES, 0);
> +	TEST_LPM_ASSERT(lpm != NULL);
> +
> +	status = rte_lpm_add(lpm, ip, depth, next_hop_add);
> +	TEST_LPM_ASSERT(status == 0);
> +
> +	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
> +	TEST_LPM_ASSERT((status == 0) && (next_hop_return == next_hop_add));
> +
> +	status = rte_lpm_delete(lpm, ip, depth);
> +	TEST_LPM_ASSERT(status == 0);
> +
> +	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
> +	TEST_LPM_ASSERT(status == -ENOENT);
> +
> +	rte_lpm_delete_all(lpm);
> +
> +	/* Add & lookup to hit valid TBL24 entry not extended */
> +	ip = IPv4(128, 0, 0, 0);
> +	depth = 23;
> +	next_hop_add = 100;
> +
> +	status = rte_lpm_add(lpm, ip, depth, next_hop_add);
> +	TEST_LPM_ASSERT(status == 0);
> +
> +	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
> +	TEST_LPM_ASSERT((status == 0) && (next_hop_return == next_hop_add));
> +
> +	depth = 24;
> +	next_hop_add = 101;
> +
> +	status = rte_lpm_add(lpm, ip, depth, next_hop_add);
> +	TEST_LPM_ASSERT(status == 0);
> +
> +	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
> +	TEST_LPM_ASSERT((status == 0) && (next_hop_return == next_hop_add));
> +
> +	depth = 24;
> +
> +	status = rte_lpm_delete(lpm, ip, depth);
> +	TEST_LPM_ASSERT(status == 0);
> +
> +	depth = 23;
> +
> +	status = rte_lpm_delete(lpm, ip, depth);
> +	TEST_LPM_ASSERT(status == 0);
> +
> +	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
> +	TEST_LPM_ASSERT(status == -ENOENT);
> +
> +	rte_lpm_delete_all(lpm);
> +
> +	/* Add & lookup to hit valid extended TBL24 entry with invalid TBL8
> +	 * entry */
> +	ip = IPv4(128, 0, 0, 0);
> +	depth = 32;
> +	next_hop_add = 100;
> +
> +	status = rte_lpm_add(lpm, ip, depth, next_hop_add);
> +	TEST_LPM_ASSERT(status == 0);
> +
> +	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
> +	TEST_LPM_ASSERT((status == 0) && (next_hop_return == next_hop_add));
> +
> +	ip = IPv4(128, 0, 0, 5);
> +	depth = 32;
> +	next_hop_add = 101;
> +
> +	status = rte_lpm_add(lpm, ip, depth, next_hop_add);
> +	TEST_LPM_ASSERT(status == 0);
> +
> +	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
> +	TEST_LPM_ASSERT((status == 0) && (next_hop_return == next_hop_add));
> +
> +	status = rte_lpm_delete(lpm, ip, depth);
> +	TEST_LPM_ASSERT(status == 0);
> +
> +	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
> +	TEST_LPM_ASSERT(status == -ENOENT);
> +
> +	ip = IPv4(128, 0, 0, 0);
> +	depth = 32;
> +	next_hop_add = 100;
> +
> +	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
> +	TEST_LPM_ASSERT((status == 0) && (next_hop_return == next_hop_add));
> +
> +	status = rte_lpm_delete(lpm, ip, depth);
> +	TEST_LPM_ASSERT(status == 0);
> +
> +	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
> +	TEST_LPM_ASSERT(status == -ENOENT);
> +
> +	rte_lpm_delete_all(lpm);
> +
> +	/* Add & lookup to hit valid extended TBL24 entry with valid TBL8
> +	 * entry */
> +	ip_1 = IPv4(128, 0, 0, 0);
> +	depth_1 = 25;
> +	next_hop_add_1 = 101;
> +
> +	ip_2 = IPv4(128, 0, 0, 5);
> +	depth_2 = 32;
> +	next_hop_add_2 = 102;
> +
> +	next_hop_return = 0;
> +
> +	status = rte_lpm_add(lpm, ip_1, depth_1, next_hop_add_1);
> +	TEST_LPM_ASSERT(status == 0);
> +
> +	status = rte_lpm_lookup(lpm, ip_1, &next_hop_return);
> +	TEST_LPM_ASSERT((status == 0) && (next_hop_return == next_hop_add_1));
> +
> +	status = rte_lpm_add(lpm, ip_2, depth_2, next_hop_add_2);
> +	TEST_LPM_ASSERT(status == 0);
> +
> +	status = rte_lpm_lookup(lpm, ip_2, &next_hop_return);
> +	TEST_LPM_ASSERT((status == 0) && (next_hop_return == next_hop_add_2));
> +
> +	status = rte_lpm_delete(lpm, ip_2, depth_2);
> +	TEST_LPM_ASSERT(status == 0);
> +
> +	status = rte_lpm_lookup(lpm, ip_2, &next_hop_return);
> +	TEST_LPM_ASSERT((status == 0) && (next_hop_return == next_hop_add_1));
> +
> +	status = rte_lpm_delete(lpm, ip_1, depth_1);
> +	TEST_LPM_ASSERT(status == 0);
> +
> +	status = rte_lpm_lookup(lpm, ip_1, &next_hop_return);
> +	TEST_LPM_ASSERT(status == -ENOENT);
> +
> +	rte_lpm_free(lpm);
> +
> +	return PASS;
> +}
> +
> +/*
> + * - Add rule that covers a TBL24 range previously invalid & lookup (& delete &
> + *   lookup)
> + * - Add rule that extends a TBL24 invalid entry & lookup (& delete & lookup)
> + * - Add rule that extends a TBL24 valid entry & lookup for both rules (&
> + *   delete & lookup)
> + * - Add rule that updates the next hop in TBL24 & lookup (& delete & lookup)
> + * - Add rule that updates the next hop in TBL8 & lookup (& delete & lookup)
> + * - Delete a rule that is not present in the TBL24 & lookup
> + * - Delete a rule that is not present in the TBL8 & lookup
> + *
> + */
> +int32_t
> +test10(void)
> +{
> +
> +	struct rte_lpm *lpm = NULL;
> +	uint32_t ip;
> +	uint8_t depth, next_hop_add, next_hop_return;
> +	int32_t status = 0;
> +
> +	/* Add rule that covers a TBL24 range previously invalid & lookup
> +	 * (& delete & lookup) */
> +	lpm = rte_lpm_create(__func__, SOCKET_ID_ANY, MAX_RULES, RTE_LPM_HEAP);
> +	TEST_LPM_ASSERT(lpm != NULL);
> +
> +	ip = IPv4(128, 0, 0, 0);
> +	depth = 16;
> +	next_hop_add = 100;
> +
> +	status = rte_lpm_add(lpm, ip, depth, next_hop_add);
> +	TEST_LPM_ASSERT(status == 0);
> +
> +	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
> +	TEST_LPM_ASSERT((status == 0) && (next_hop_return == next_hop_add));
> +
> +	status = rte_lpm_delete(lpm, ip, depth);
> +	TEST_LPM_ASSERT(status == 0);
> +
> +	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
> +	TEST_LPM_ASSERT(status == -ENOENT);
> +
> +	rte_lpm_delete_all(lpm);
> +
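> +	/* Add rule that extends a TBL24 invalid entry & lookup
> +	 * (& delete & lookup) */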
> +	ip = IPv4(128, 0, 0, 0);
> +	depth = 25;
> +	next_hop_add = 100;
> +
> +	status = rte_lpm_add(lpm, ip, depth, next_hop_add);
> +	TEST_LPM_ASSERT(status == 0);
> +
> +	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
> +	TEST_LPM_ASSERT((status == 0) && (next_hop_return == next_hop_add));
> +
> +	status = rte_lpm_delete(lpm, ip, depth);
> +	TEST_LPM_ASSERT(status == 0);
> +
> +	rte_lpm_delete_all(lpm);
> +
> +	/* Add rule that extends a TBL24 valid entry & lookup for both rules
> +	 * (& delete & lookup) */
> +
> +	ip = IPv4(128, 0, 0, 0);
> +	depth = 24;
> +	next_hop_add = 100;
> +
> +	status = rte_lpm_add(lpm, ip, depth, next_hop_add);
> +	TEST_LPM_ASSERT(status == 0);
> +
> +	ip = IPv4(128, 0, 0, 10);
> +	depth = 32;
> +	next_hop_add = 101;
> +
> +	status = rte_lpm_add(lpm, ip, depth, next_hop_add);
> +	TEST_LPM_ASSERT(status == 0);
> +
> +	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
> +	TEST_LPM_ASSERT((status == 0) && (next_hop_return == next_hop_add));
> +
> +	ip = IPv4(128, 0, 0, 0);
> +	next_hop_add = 100;
> +
> +	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
> +	TEST_LPM_ASSERT((status == 0) && (next_hop_return == next_hop_add));
> +
> +	ip = IPv4(128, 0, 0, 0);
> +	depth = 24;
> +
> +	status = rte_lpm_delete(lpm, ip, depth);
> +	TEST_LPM_ASSERT(status == 0);
> +
> +	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
> +	TEST_LPM_ASSERT(status == -ENOENT);
> +
> +	ip = IPv4(128, 0, 0, 10);
> +	depth = 32;
> +
> +	status = rte_lpm_delete(lpm, ip, depth);
> +	TEST_LPM_ASSERT(status == 0);
> +
> +	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
> +	TEST_LPM_ASSERT(status == -ENOENT);
> +
> +	rte_lpm_delete_all(lpm);
> +
> +	/* Add rule that updates the next hop in TBL24 & lookup
> +	 * (& delete & lookup) */
> +
> +	ip = IPv4(128, 0, 0, 0);
> +	depth = 24;
> +	next_hop_add = 100;
> +
> +	status = rte_lpm_add(lpm, ip, depth, next_hop_add);
> +	TEST_LPM_ASSERT(status == 0);
> +
> +	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
> +	TEST_LPM_ASSERT((status == 0) && (next_hop_return == next_hop_add));
> +
> +	next_hop_add = 101;
> +
> +	status = rte_lpm_add(lpm, ip, depth, next_hop_add);
> +	TEST_LPM_ASSERT(status == 0);
> +
> +	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
> +	TEST_LPM_ASSERT((status == 0) && (next_hop_return == next_hop_add));
> +
> +	status = rte_lpm_delete(lpm, ip, depth);
> +	TEST_LPM_ASSERT(status == 0);
> +
> +	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
> +	TEST_LPM_ASSERT(status == -ENOENT);
> +
> +	rte_lpm_delete_all(lpm);
> +
> +	/* Add rule that updates the next hop in TBL8 & lookup
> +	 * (& delete & lookup) */
> +
> +	ip = IPv4(128, 0, 0, 0);
> +	depth = 32;
> +	next_hop_add = 100;
> +
> +	status = rte_lpm_add(lpm, ip, depth, next_hop_add);
> +	TEST_LPM_ASSERT(status == 0);
> +
> +	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
> +	TEST_LPM_ASSERT((status == 0) && (next_hop_return == next_hop_add));
> +
> +	next_hop_add = 101;
> +
> +	status = rte_lpm_add(lpm, ip, depth, next_hop_add);
> +	TEST_LPM_ASSERT(status == 0);
> +
> +	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
> +	TEST_LPM_ASSERT((status == 0) && (next_hop_return == next_hop_add));
> +
> +	status = rte_lpm_delete(lpm, ip, depth);
> +	TEST_LPM_ASSERT(status == 0);
> +
> +	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
> +	TEST_LPM_ASSERT(status == -ENOENT);
> +
> +	rte_lpm_delete_all(lpm);
> +
> +	/* Delete a rule that is not present in the TBL24 & lookup */
> +
> +	ip = IPv4(128, 0, 0, 0);
> +	depth = 24;
> +
> +	status = rte_lpm_delete(lpm, ip, depth);
> +	TEST_LPM_ASSERT(status < 0);
> +
> +	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
> +	TEST_LPM_ASSERT(status == -ENOENT);
> +
> +	rte_lpm_delete_all(lpm);
> +
> +	/* Delete a rule that is not present in the TBL8 & lookup */
> +
> +	ip = IPv4(128, 0, 0, 0);
> +	depth = 32;
> +
> +	status = rte_lpm_delete(lpm, ip, depth);
> +	TEST_LPM_ASSERT(status < 0);
> +
> +	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
> +	TEST_LPM_ASSERT(status == -ENOENT);
> +
> +	rte_lpm_free(lpm);
> +
> +	return PASS;
> +}
> +
> +/*
> + * Add two rules, lookup to hit the more specific one, lookup to hit the less
> + * specific one, delete the less specific rule and lookup previous values
> + * again; add a more specific rule than the existing rule, lookup again.
> + */
> +int32_t
> +test11(void)
> +{
> +
> +	struct rte_lpm *lpm = NULL;
> +	uint32_t ip;
> +	uint8_t depth, next_hop_add, next_hop_return;
> +	int32_t status = 0;
> +
> +	lpm = rte_lpm_create(__func__, SOCKET_ID_ANY, MAX_RULES, 0);
> +	TEST_LPM_ASSERT(lpm != NULL);
> +
> +	ip = IPv4(128, 0, 0, 0);
> +	depth = 24;
> +	next_hop_add = 100;
> +
> +	status = rte_lpm_add(lpm, ip, depth, next_hop_add);
> +	TEST_LPM_ASSERT(status == 0);
> +
> +	ip = IPv4(128, 0, 0, 10);
> +	depth = 32;
> +	next_hop_add = 101;
> +
> +	status = rte_lpm_add(lpm, ip, depth, next_hop_add);
> +	TEST_LPM_ASSERT(status == 0);
> +
> +	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
> +	TEST_LPM_ASSERT((status == 0) && (next_hop_return == next_hop_add));
> +
> +	ip = IPv4(128, 0, 0, 0);
> +	next_hop_add = 100;
> +
> +	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
> +	TEST_LPM_ASSERT((status == 0) && (next_hop_return == next_hop_add));
> +
> +	ip = IPv4(128, 0, 0, 0);
> +	depth = 24;
> +
> +	status = rte_lpm_delete(lpm, ip, depth);
> +	TEST_LPM_ASSERT(status == 0);
> +
> +	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
> +	TEST_LPM_ASSERT(status == -ENOENT);
> +
> +	ip = IPv4(128, 0, 0, 10);
> +	depth = 32;
> +
> +	status = rte_lpm_delete(lpm, ip, depth);
> +	TEST_LPM_ASSERT(status == 0);
> +
> +	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
> +	TEST_LPM_ASSERT(status == -ENOENT);
> +
> +	rte_lpm_free(lpm);
> +
> +	return PASS;
> +}
> +
> +/*
> + * Add an extended rule (i.e. depth greater than 24), lookup (hit), delete,
> + * lookup (miss), in a for loop of 1000 times. This checks tbl8 extension
> + * and contraction.
> + */
> +
> +int32_t
> +test12(void)
> +{
> +	__m128i ipx4;
> +	uint16_t hop[4];
> +	struct rte_lpm *lpm = NULL;
> +	uint32_t ip, i;
> +	uint8_t depth, next_hop_add, next_hop_return;
> +	int32_t status = 0;
> +
> +	lpm = rte_lpm_create(__func__, SOCKET_ID_ANY, MAX_RULES, 0);
> +	TEST_LPM_ASSERT(lpm != NULL);
> +
> +	ip = IPv4(128, 0, 0, 0);
> +	depth = 32;
> +	next_hop_add = 100;
> +
> +	for (i = 0; i < 1000; i++) {
> +		status = rte_lpm_add(lpm, ip, depth, next_hop_add);
> +		TEST_LPM_ASSERT(status == 0);
> +
> +		status = rte_lpm_lookup(lpm, ip, &next_hop_return);
> +		TEST_LPM_ASSERT((status == 0) &&
> +				(next_hop_return == next_hop_add));
> +
> +		ipx4 = _mm_set_epi32(ip, ip + 1, ip, ip - 1);
> +		rte_lpm_lookupx4(lpm, ipx4, hop, UINT16_MAX);
> +		TEST_LPM_ASSERT(hop[0] == UINT16_MAX);
> +		TEST_LPM_ASSERT(hop[1] == next_hop_add);
> +		TEST_LPM_ASSERT(hop[2] == UINT16_MAX);
> +		TEST_LPM_ASSERT(hop[3] == next_hop_add);
> +
> +		status = rte_lpm_delete(lpm, ip, depth);
> +		TEST_LPM_ASSERT(status == 0);
> +
> +		status = rte_lpm_lookup(lpm, ip, &next_hop_return);
> +		TEST_LPM_ASSERT(status == -ENOENT);
> +	}
> +
> +	rte_lpm_free(lpm);
> +
> +	return PASS;
> +}
> +
> +/*
> + * Add a rule to tbl24, lookup (hit), then add a rule that will extend this
> + * tbl24 entry, lookup (hit), delete the rule that caused the tbl24 extension,
> + * lookup (miss), and repeat in a for loop of 1000 times. This checks tbl8
> + * extension and contraction.
> + */
> +
> +int32_t
> +test13(void)
> +{
> +	struct rte_lpm *lpm = NULL;
> +	uint32_t ip, i;
> +	uint8_t depth, next_hop_add_1, next_hop_add_2, next_hop_return;
> +	int32_t status = 0;
> +
> +	lpm = rte_lpm_create(__func__, SOCKET_ID_ANY, MAX_RULES, 0);
> +	TEST_LPM_ASSERT(lpm != NULL);
> +
> +	ip = IPv4(128, 0, 0, 0);
> +	depth = 24;
> +	next_hop_add_1 = 100;
> +
> +	status = rte_lpm_add(lpm, ip, depth, next_hop_add_1);
> +	TEST_LPM_ASSERT(status == 0);
> +
> +	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
> +	TEST_LPM_ASSERT((status == 0) && (next_hop_return == next_hop_add_1));
> +
> +	depth = 32;
> +	next_hop_add_2 = 101;
> +
> +	for (i = 0; i < 1000; i++) {
> +		status = rte_lpm_add(lpm, ip, depth, next_hop_add_2);
> +		TEST_LPM_ASSERT(status == 0);
> +
> +		status = rte_lpm_lookup(lpm, ip, &next_hop_return);
> +		TEST_LPM_ASSERT((status == 0) &&
> +				(next_hop_return == next_hop_add_2));
> +
> +		status = rte_lpm_delete(lpm, ip, depth);
> +		TEST_LPM_ASSERT(status == 0);
> +
> +		status = rte_lpm_lookup(lpm, ip, &next_hop_return);
> +		TEST_LPM_ASSERT((status == 0) &&
> +				(next_hop_return == next_hop_add_1));
> +	}
> +
> +	depth = 24;
> +
> +	status = rte_lpm_delete(lpm, ip, depth);
> +	TEST_LPM_ASSERT(status == 0);
> +
> +	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
> +	TEST_LPM_ASSERT(status == -ENOENT);
> +
> +	rte_lpm_free(lpm);
> +
> +	return PASS;
> +}
> +
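
For context on why the /32 add and delete above exercise extension and
contraction: in this DIR-24-8 layout a route with depth <= 24 occupies
tbl24 entries directly, while a deeper route hangs a 256-entry tbl8
group off a single tbl24 slot. A sketch of the span arithmetic
(hypothetical helper, assuming that layout):

static uint32_t
tbl24_span(uint8_t depth)
{
	/* depth <= 24 spans 2^(24 - depth) tbl24 slots; deeper routes
	 * occupy one slot that points into a tbl8 group */
	return depth <= 24 ? (uint32_t)1 << (24 - depth) : 1;
}
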
> +/*
> + * Force tbl8 extension exhaustion. Add 256 rules that each require a tbl8
> + * extension, leaving no free tbl8 groups. Then add one more rule that
> + * requires a tbl8 extension and verify that it fails.
> + */
> +int32_t
> +test14(void)
> +{
> +
> +	/* We only use depth = 32 in the loop below, so we must make sure
> +	 * that we have enough storage for all rules at that depth. */
> +
> +	struct rte_lpm *lpm = NULL;
> +	uint32_t ip;
> +	uint8_t depth, next_hop_add, next_hop_return;
> +	int32_t status = 0;
> +
> +	/* Add enough space for 256 rules for every depth */
> +	lpm = rte_lpm_create(__func__, SOCKET_ID_ANY, 256 * 32, 0);
> +	TEST_LPM_ASSERT(lpm != NULL);
> +
> +	depth = 32;
> +	next_hop_add = 100;
> +	ip = IPv4(0, 0, 0, 0);
> +
> +	/* Add 256 rules that require a tbl8 extension */
> +	for (; ip <= IPv4(0, 0, 255, 0); ip += 256) {
> +		status = rte_lpm_add(lpm, ip, depth, next_hop_add);
> +		TEST_LPM_ASSERT(status == 0);
> +
> +		status = rte_lpm_lookup(lpm, ip, &next_hop_return);
> +		TEST_LPM_ASSERT((status == 0) &&
> +				(next_hop_return == next_hop_add));
> +	}
> +
> +	/* All tbl8 extensions have been used above. Try to add one more and
> +	 * verify that it fails. */
> +	ip = IPv4(1, 0, 0, 0);
> +	depth = 32;
> +
> +	status = rte_lpm_add(lpm, ip, depth, next_hop_add);
> +	TEST_LPM_ASSERT(status < 0);
> +
> +	rte_lpm_free(lpm);
> +
> +	return PASS;
> +}
> +
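
The figure of 256 is not arbitrary: every depth > 24 rule with distinct
upper 24 bits consumes one tbl8 group, and this layout provides
RTE_LPM_TBL8_NUM_GROUPS (256 here) of them, so the ip += 256 stride
above allocates all of them before the add of 1.0.0.0/32 must fail. A
sketch that measures the capacity directly (hypothetical helper using
the same API):

static int
tbl8_capacity(void)
{
	struct rte_lpm *l = rte_lpm_create("tbl8_cap", SOCKET_ID_ANY,
			256 * 32, 0);
	uint32_t ip, groups = 0;

	if (l == NULL)
		return -1;
	/* every /32 here has fresh upper 24 bits: one tbl8 group each */
	for (ip = 0; rte_lpm_add(l, ip, 32, 1) == 0; ip += 256)
		groups++;
	rte_lpm_free(l);
	return groups;	/* expected: RTE_LPM_TBL8_NUM_GROUPS */
}
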
> +/*
> + * Sequence of operations for finding an existing lpm table
> + *
> + *  - create table
> + *  - find existing table: hit
> + *  - find non-existing table: miss
> + *
> + */
> +int32_t
> +test15(void)
> +{
> +	struct rte_lpm *lpm = NULL, *result = NULL;
> +
> +	/* Create lpm  */
> +	lpm = rte_lpm_create("lpm_find_existing", SOCKET_ID_ANY, 256 * 32, 0);
> +	TEST_LPM_ASSERT(lpm != NULL);
> +
> +	/* Try to find existing lpm */
> +	result = rte_lpm_find_existing("lpm_find_existing");
> +	TEST_LPM_ASSERT(result == lpm);
> +
> +	/* Try to find non-existing lpm */
> +	result = rte_lpm_find_existing("lpm_find_non_existing");
> +	TEST_LPM_ASSERT(result == NULL);
> +
> +	/* Cleanup. */
> +	rte_lpm_delete_all(lpm);
> +	rte_lpm_free(lpm);
> +
> +	return PASS;
> +}
> +
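
rte_lpm_find_existing() matters mostly in multi-process deployments,
where a secondary process attaches to a table the primary created
rather than building its own. A sketch (hypothetical helper; error
handling abbreviated):

static int
attach_lpm(struct rte_lpm **out)
{
	/* assumes the primary process created this table */
	*out = rte_lpm_find_existing("lpm_find_existing");
	if (*out == NULL) {
		printf("LPM table not found\n");
		return -1;
	}
	return 0;
}
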
> +/*
> + * Test the failure condition of overloading the tbl8 groups so that no
> + * more will fit. Check that we get an error return value in that case.
> + */
> +int32_t
> +test16(void)
> +{
> +	uint32_t ip;
> +	struct rte_lpm *lpm = rte_lpm_create(__func__, SOCKET_ID_ANY,
> +			256 * 32, 0);
> +	TEST_LPM_ASSERT(lpm != NULL);
> +
> +	/* ip loops through all possibilities for top 24 bits of address */
> +	for (ip = 0; ip < 0xFFFFFF; ip++) {
> +		/* add an entry within a different tbl8 each time, since
> +		 * depth > 24 and the top 24 bits are different */
> +		if (rte_lpm_add(lpm, (ip << 8) + 0xF0, 30, 0) < 0)
> +			break;
> +	}
> +
> +	if (ip != RTE_LPM_TBL8_NUM_GROUPS) {
> +		printf("Error, unexpected failure with filling tbl8 groups\n");
> +		printf("Failed after %u additions, expected after %u\n",
> +				(unsigned)ip, (unsigned)RTE_LPM_TBL8_NUM_GROUPS);
> +		rte_lpm_free(lpm);
> +		return -1;
> +	}
> +
> +	rte_lpm_free(lpm);
> +	return PASS;
> +}
> +
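
The (ip << 8) + 0xF0 construction is what guarantees exhaustion here:
the low byte stays fixed while ip walks the upper 24 bits, so no two of
these depth-30 rules share a tbl24 slot and every add claims a fresh
tbl8 group. Application code would typically wrap the add in a check
along these lines (sketch; like the test above it only relies on a
negative return, not a specific errno):

static int
lpm_add_checked(struct rte_lpm *l, uint32_t ip, uint8_t depth,
		uint8_t hop)
{
	int ret = rte_lpm_add(l, ip, depth, hop);

	if (ret < 0)
		printf("add of %#x/%d failed: %d\n", ip, depth, ret);
	return ret;
}
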
> +/*
> + * Test for overwriting of tbl8:
> + *  - add rule /32 and lookup
> + *  - add new rule /24 and lookup
> + *  - add third rule /25 and lookup
> + *  - lookup /32 and /24 rules to ensure the table has not been overwritten
> + */
> +int32_t
> +test17(void)
> +{
> +	struct rte_lpm *lpm = NULL;
> +	const uint32_t ip_10_32 = IPv4(10, 10, 10, 2);
> +	const uint32_t ip_10_24 = IPv4(10, 10, 10, 0);
> +	const uint32_t ip_20_25 = IPv4(10, 10, 20, 2);
> +	const uint8_t d_ip_10_32 = 32,
> +			d_ip_10_24 = 24,
> +			d_ip_20_25 = 25;
> +	const uint8_t next_hop_ip_10_32 = 100,
> +			next_hop_ip_10_24 = 105,
> +			next_hop_ip_20_25 = 111;
> +	uint8_t next_hop_return = 0;
> +	int32_t status = 0;
> +
> +	lpm = rte_lpm_create(__func__, SOCKET_ID_ANY, MAX_RULES, 0);
> +	TEST_LPM_ASSERT(lpm != NULL);
> +
> +	status = rte_lpm_add(lpm, ip_10_32, d_ip_10_32, next_hop_ip_10_32);
> +	if (status < 0)
> +		return -1;
> +
> +	status = rte_lpm_lookup(lpm, ip_10_32, &next_hop_return);
> +	uint8_t test_hop_10_32 = next_hop_return;
> +	TEST_LPM_ASSERT(status == 0);
> +	TEST_LPM_ASSERT(next_hop_return == next_hop_ip_10_32);
> +
> +	status = rte_lpm_add(lpm, ip_10_24, d_ip_10_24, next_hop_ip_10_24);
> +	if (status < 0)
> +		return -1;
> +
> +	status = rte_lpm_lookup(lpm, ip_10_24, &next_hop_return);
> +	uint8_t test_hop_10_24 = next_hop_return;
> +	TEST_LPM_ASSERT(status == 0);
> +	TEST_LPM_ASSERT(next_hop_return == next_hop_ip_10_24);
> +
> +	status = rte_lpm_add(lpm, ip_20_25, d_ip_20_25, next_hop_ip_20_25);
> +	if (status < 0)
> +		return -1;
> +
> +	status = rte_lpm_lookup(lpm, ip_20_25, &next_hop_return);
> +	uint8_t test_hop_20_25 = next_hop_return;
> +	TEST_LPM_ASSERT(status == 0);
> +	TEST_LPM_ASSERT(next_hop_return == next_hop_ip_20_25);
> +
> +	if (test_hop_10_32 == test_hop_10_24) {
> +		printf("Next hop return equal\n");
> +		return -1;
> +	}
> +
> +	if (test_hop_10_24 == test_hop_20_25) {
> +		printf("Next hop return equal\n");
> +		return -1;
> +	}
> +
> +	status = rte_lpm_lookup(lpm, ip_10_32, &next_hop_return);
> +	TEST_LPM_ASSERT(status == 0);
> +	TEST_LPM_ASSERT(next_hop_return == next_hop_ip_10_32);
> +
> +	status = rte_lpm_lookup(lpm, ip_10_24, &next_hop_return);
> +	TEST_LPM_ASSERT(status == 0);
> +	TEST_LPM_ASSERT(next_hop_return == next_hop_ip_10_24);
> +
> +	rte_lpm_free(lpm);
> +
> +	return PASS;
> +}
> +
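
The add-then-verify pattern repeated three times above could be
factored into a helper; a sketch (hypothetical, using the v2.0
uint8_t next-hop type):

static int
add_and_verify(struct rte_lpm *l, uint32_t ip, uint8_t depth,
		uint8_t hop)
{
	uint8_t got;

	if (rte_lpm_add(l, ip, depth, hop) < 0)
		return -1;
	if (rte_lpm_lookup(l, ip, &got) != 0 || got != hop)
		return -1;
	return 0;
}
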
> +/*
> + * Lookup performance test
> + */
> +
> +#define ITERATIONS (1 << 10)
> +#define BATCH_SIZE (1 << 12)
> +#define BULK_SIZE 32
> +
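
For scale, each timed measurement below covers ITERATIONS * BATCH_SIZE
= 1024 * 4096 = 4,194,304 lookups, with the bulk variant issuing them
BULK_SIZE (32) at a time. The bulk loop silently assumes BATCH_SIZE is
an exact multiple of BULK_SIZE; a build-time guard would make that
explicit (sketch, placed at the top of perf_test()):

	/* the j += BULK_SIZE loop requires an exact multiple */
	RTE_BUILD_BUG_ON(BATCH_SIZE % BULK_SIZE != 0);
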
> +int32_t
> +perf_test(void)
> +{
> +	struct rte_lpm *lpm = NULL;
> +	uint64_t begin, total_time, lpm_used_entries = 0;
> +	unsigned i, j;
> +	uint8_t next_hop_add = 0xAA, next_hop_return = 0;
> +	int status = 0;
> +	uint64_t cache_line_counter = 0;
> +	int64_t count = 0;
> +
> +	rte_srand(rte_rdtsc());
> +
> +	/* (re) generate the routing table */
> +	generate_large_route_rule_table();
> +
> +	printf("No. routes = %u\n", (unsigned) NUM_ROUTE_ENTRIES);
> +
> +	print_route_distribution(large_route_table,
> +				 (uint32_t) NUM_ROUTE_ENTRIES);
> +
> +	lpm = rte_lpm_create(__func__, SOCKET_ID_ANY, 1000000, 0);
> +	TEST_LPM_ASSERT(lpm != NULL);
> +
> +	/* Measure add. */
> +	begin = rte_rdtsc();
> +
> +	for (i = 0; i < NUM_ROUTE_ENTRIES; i++) {
> +		if (rte_lpm_add(lpm, large_route_table[i].ip,
> +				large_route_table[i].depth, next_hop_add) == 0)
> +			status++;
> +	}
> +	/* End Timer. */
> +	total_time = rte_rdtsc() - begin;
> +
> +	printf("Unique added entries = %d\n", status);
> +	/* Obtain add statistics. */
> +	for (i = 0; i < RTE_LPM_TBL24_NUM_ENTRIES; i++) {
> +		if (lpm->tbl24[i].valid)
> +			lpm_used_entries++;
> +
> +		if (i % 32 == 0) {
> +			if ((uint64_t)count < lpm_used_entries) {
> +				cache_line_counter++;
> +				count = lpm_used_entries;
> +			}
> +		}
> +	}
> +
> +	printf("Used table 24 entries = %u (%g%%)\n",
> +			(unsigned) lpm_used_entries,
> +			(lpm_used_entries * 100.0) / RTE_LPM_TBL24_NUM_ENTRIES);
> +	printf("64 byte Cache entries used = %u (%u bytes)\n",
> +			(unsigned) cache_line_counter, (unsigned) cache_line_counter * 64);
> +
> +	printf("Average LPM Add: %g cycles\n", (double)total_time / NUM_ROUTE_ENTRIES);
> +
> +	/* Measure single Lookup */
> +	total_time = 0;
> +	count = 0;
> +
> +	for (i = 0; i < ITERATIONS; i++) {
> +		static uint32_t ip_batch[BATCH_SIZE];
> +
> +		for (j = 0; j < BATCH_SIZE; j++)
> +			ip_batch[j] = rte_rand();
> +
> +		/* Lookup per batch */
> +		begin = rte_rdtsc();
> +
> +		for (j = 0; j < BATCH_SIZE; j++) {
> +			if (rte_lpm_lookup(lpm, ip_batch[j], &next_hop_return) != 0)
> +				count++;
> +		}
> +
> +		total_time += rte_rdtsc() - begin;
> +
> +	}
> +	printf("Average LPM Lookup: %.1f cycles (fails = %.1f%%)\n",
> +			(double)total_time / ((double)ITERATIONS * BATCH_SIZE),
> +			(count * 100.0) / (double)(ITERATIONS * BATCH_SIZE));
> +
> +	/* Measure bulk Lookup */
> +	total_time = 0;
> +	count = 0;
> +	for (i = 0; i < ITERATIONS; i++) {
> +		static uint32_t ip_batch[BATCH_SIZE];
> +		uint16_t next_hops[BULK_SIZE];
> +
> +		/* Create array of random IP addresses */
> +		for (j = 0; j < BATCH_SIZE; j++)
> +			ip_batch[j] = rte_rand();
> +
> +		/* Lookup per batch */
> +		begin = rte_rdtsc();
> +		for (j = 0; j < BATCH_SIZE; j += BULK_SIZE) {
> +			unsigned k;
> +			rte_lpm_lookup_bulk(lpm, &ip_batch[j], next_hops, BULK_SIZE);
> +			for (k = 0; k < BULK_SIZE; k++)
> +				if (unlikely(!(next_hops[k] & RTE_LPM_LOOKUP_SUCCESS)))
> +					count++;
> +		}
> +
> +		total_time += rte_rdtsc() - begin;
> +	}
> +	printf("BULK LPM Lookup: %.1f cycles (fails = %.1f%%)\n",
> +			(double)total_time / ((double)ITERATIONS * BATCH_SIZE),
> +			(count * 100.0) / (double)(ITERATIONS * BATCH_SIZE));
> +
> +	/* Measure LookupX4 */
> +	total_time = 0;
> +	count = 0;
> +	for (i = 0; i < ITERATIONS; i++) {
> +		static uint32_t ip_batch[BATCH_SIZE];
> +		uint16_t next_hops[4];
> +
> +		/* Create array of random IP addresses */
> +		for (j = 0; j < BATCH_SIZE; j++)
> +			ip_batch[j] = rte_rand();
> +
> +		/* Lookup per batch */
> +		begin = rte_rdtsc();
> +		for (j = 0; j < BATCH_SIZE; j += RTE_DIM(next_hops)) {
> +			unsigned k;
> +			__m128i ipx4;
> +
> +			ipx4 = _mm_loadu_si128((__m128i *)(ip_batch + j));
> +			rte_lpm_lookupx4(lpm, ipx4, next_hops, UINT16_MAX);
> +			for (k = 0; k < RTE_DIM(next_hops); k++)
> +				if (unlikely(next_hops[k] == UINT16_MAX))
> +					count++;
> +		}
> +
> +		total_time += rte_rdtsc() - begin;
> +	}
> +	printf("LPM LookupX4: %.1f cycles (fails = %.1f%%)\n",
> +			(double)total_time / ((double)ITERATIONS * BATCH_SIZE),
> +			(count * 100.0) / (double)(ITERATIONS * BATCH_SIZE));
> +
> +	/* Delete */
> +	status = 0;
> +	begin = rte_rdtsc();
> +
> +	for (i = 0; i < NUM_ROUTE_ENTRIES; i++) {
> +		/* rte_lpm_delete(lpm, ip, depth) */
> +		status += rte_lpm_delete(lpm, large_route_table[i].ip,
> +				large_route_table[i].depth);
> +	}
> +
> +	total_time = rte_rdtsc() - begin;
> +
> +	printf("Average LPM Delete: %g cycles\n",
> +			(double)total_time / NUM_ROUTE_ENTRIES);
> +
> +	rte_lpm_delete_all(lpm);
> +	rte_lpm_free(lpm);
> +
> +	return PASS;
> +}
> +
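
Since all of the figures above are in TSC cycles, converting them to
wall-clock time needs the counter frequency; a sketch using
rte_get_tsc_hz() from rte_cycles.h (assumed available here, as
rte_rdtsc() already is):

static double
cycles_to_ns(double cycles)
{
	return cycles * 1E9 / (double)rte_get_tsc_hz();
}
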
> +/*
> + * Do all unit and performance tests.
> + */
> +
> +static int
> +test_lpm(void)
> +{
> +	unsigned i;
> +	int status, global_status = 0;
> +
> +	for (i = 0; i < NUM_LPM_TESTS; i++) {
> +		status = tests[i]();
> +		if (status < 0) {
> +			printf("ERROR: LPM Test %u: FAIL\n", i);
> +			global_status = status;
> +		}
> +	}
> +
> +	return global_status;
> +}
> +
> +REGISTER_TEST_COMMAND_VERSION(lpm_autotest, test_lpm, TEST_DPDK_ABI_VERSION_V20);
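
Worth noting: REGISTER_TEST_COMMAND_VERSION() ties the existing
lpm_autotest command name to the TEST_DPDK_ABI_VERSION_V20 context, so
the same command can presumably resolve to this v2.0 suite or to the
current one, depending on which ABI version is selected at the app/test
prompt.
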
> diff --git a/app/test/v2.0/test_v20.c b/app/test/v2.0/test_v20.c
> new file mode 100644
> index 000000000..6285e2882
> --- /dev/null
> +++ b/app/test/v2.0/test_v20.c
> @@ -0,0 +1,14 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2010-2014 Intel Corporation
> + */
> +
> +#include <stdio.h>
> +#include <stdint.h>
> +#include <stdlib.h>
> +
> +#include <rte_ip.h>
> +#include <rte_lpm.h>
> +
> +#include "../test.h"
> +
> +REGISTER_TEST_ABI_VERSION(v20, TEST_DPDK_ABI_VERSION_V20);
